H: Are the groups $\Bbb{Z}_8 \times \Bbb{Z}_{10} \times \Bbb{Z}_{24}$ and $\Bbb{Z}_4 \times \Bbb{Z}_{12} \times \Bbb{Z}_{40}$ isomorphic?
This question is taken from "A first course in Abstract Algebra" by Fraleigh 7th edition, section 11 question 18:
Are the groups $\mathbb{Z}_8 \times \mathbb{Z}_{10} \times \mathbb{Z}_{24}$ and $\mathbb{Z}_4 \times \mathbb{Z}_{12} \times \mathbb{Z}_{40}$ isomorphic?
The solution manual says no. My question is why not?
We have $$\mathbb{Z}_8 \approx \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2},$$
and $$\mathbb{Z}_{10} \approx \mathbb{Z}_{5} \times \mathbb{Z}_{2},$$
and $$\mathbb{Z}_{24} \approx \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{3}.$$
Thus $$\mathbb{Z}_8 \times \mathbb{Z}_{10} \times \mathbb{Z}_{24} \approx \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{5} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{3}$$
Similarly, $$\mathbb{Z}_4 \times \mathbb{Z}_{12} \times \mathbb{Z}_{40} \approx \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{3} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{5}$$
These last 2 expressions are the same except for a reordering. Where is the mistake in my reasoning?
AI: Let's focus on a very simple case, comparing $\mathbb{Z}_{4}$ and $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$. Are these the same? They have the same order, but they are actually not isomorphic! To see why, note that $1$ has order $4$ in $\mathbb{Z}_{4}$ (you need to add it to itself $4$ times to get to $0$). However, every element of $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$ has order at most $2$. For example, $(1,0) + (1,0) = (2,0) = (0,0)$, or $(1,1) + (1,1) = (2,2) = (0,0)$.
What IS true is that $\mathbb{Z}_{pq} \cong \mathbb{Z}_{p} \times \mathbb{Z}_{q}$ for distinct primes $p$ and $q$. The same thing holds if you replace $p$ with $p^n$ and $q$ with $q^k$. This actually encapsulates the entire pattern: you can split up $\mathbb{Z}_{ab}$ into $\mathbb{Z}_{a} \times\mathbb{Z}_{b}$ if and only if $a$ and $b$ are coprime, i.e. they have no prime factors in common. In particular $\mathbb{Z}_{8} \not\cong \mathbb{Z}_{2} \times \mathbb{Z}_{2} \times \mathbb{Z}_{2}$, which is exactly where your reasoning breaks down.
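A quick way to make this concrete is to compare how many elements of each order the two groups have; an isomorphism must preserve these counts. A minimal Python sketch (the helper name `order_counts` is ad hoc):

```python
from itertools import product
from collections import Counter
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def order_counts(moduli):
    """Count how many elements of Z_{m1} x ... x Z_{mr} have each possible order."""
    counts = Counter()
    for elem in product(*(range(m) for m in moduli)):
        order = 1
        for a, m in zip(elem, moduli):
            order = lcm(order, m // gcd(a, m))  # order of a in Z_m is m / gcd(a, m)
        counts[order] += 1
    return counts

c1 = order_counts((8, 10, 24))
c2 = order_counts((4, 12, 40))
print(c1 == c2)      # False, so the groups cannot be isomorphic
print(c1[8], c2[8])  # the two groups contain different numbers of elements of order 8
```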
|
H: Prove that $(A\cap C)-B=(C-B)\cap A$
$\mathbf{Question:}$ Prove that $(A\cap C)-B=(C-B)\cap A$
$\mathbf{My\ attempt:}$
Looking at LHS, assuming $(A\cap C)-B \neq \emptyset$
Let $x\in (A\cap C)-B$
This implies $x\in A$ and $x\in C$ and $x\notin B$
Looking at RHS, assuming $(C-B)\cap A \neq \emptyset$,
Let $y \in (C-B)\cap A$
This implies $y\in C$ and $y\notin B$ and $y\in A$
By comparing the LHS and RHS, we find that:
$$
x,y\in A
$$
$$
x,y\in C
$$
$$
x,y\notin B
$$
Thus LHS = RHS.
Is this correct?
AI: Aside from the typo, yes. In short:
$$\begin{align}&(A\cap C)\smallsetminus B \\ =~&\{x:(x\in A\wedge x\in C)\wedge x\notin B\}\\=~&\{x:(x\in C\wedge x\notin B)\wedge x\in A\}\\=~&(C\smallsetminus B)\cap A\end{align}$$
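For readers who like an empirical sanity check, a small Python sketch that tests the identity on random subsets of a finite universe:

```python
import random

universe = range(20)
for _ in range(1000):
    A, B, C = ({x for x in universe if random.random() < 0.5} for _ in range(3))
    assert (A & C) - B == (C - B) & A   # set difference and intersection in Python
print("identity holds on all sampled triples")
```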
|
H: Find function given arc length
I'm creating a program that has two points and a cable hanging between them. I feel like modeling the cable using a catenary would be too hard, so I just simplified it to a parabola.
However, I'm still stuck on making sure that the length of the cable is constant. I don't know how to find a parabola (or any group of parabolas) given its arc length. The main purpose of my program isn't to model the cable, however, so if it's not possible, it's fine.
AI: Hint:
Let the two points be
$$A(-a,0) \text{ and } \; B(a,0)$$
and $ L $ be the length of the cable.
We can take as the equation of the parabola
$$y=f(x)=C(x^2-a^2)$$ such that
$$L=\int_{-a}^a\sqrt{1+f'^2(x)}dx$$
$$=2\int_0^a\sqrt{1+4C^2x^2}dx$$
To finish, put $$2Cx=\sinh(t)$$
then
$$L=\frac 1C\int_0^{\sinh^{-1}(2Ca)}\cosh^2(t)\,dt=\frac{\sinh^{-1}(2Ca)}{2C}+a\sqrt{1+4C^2a^2},$$ an implicit equation that determines $C$ (in general it must be solved numerically).
Remark
You can also take, instead of a parabola, the more natural curve (a catenary) whose equation is $$y=C(\cosh(x)-\cosh(a))$$
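Since the goal here is a program, a minimal Python sketch (the helper names `arc_length` and `solve_C` are ad hoc) that recovers $C$ from the half-span $a$ and the cable length $L$ by bisection, using the closed form above:

```python
from math import asinh, sqrt

def arc_length(C, a):
    """Arc length of y = C*(x**2 - a**2) from x = -a to x = a, for C > 0."""
    return asinh(2*C*a) / (2*C) + a * sqrt(1 + 4*C**2*a**2)

def solve_C(a, L):
    """Bisection: arc_length(., a) is increasing with infimum 2a, so L > 2a is required."""
    lo, hi = 1e-9, 1.0
    while arc_length(hi, a) < L:    # grow the bracket until it contains L
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if arc_length(mid, a) < L else (lo, mid)
    return (lo + hi) / 2

a, L = 1.0, 2.5               # half-span 1, cable length 2.5
C = solve_C(a, L)
print(C, arc_length(C, a))    # the second value should be ~2.5
```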
|
H: Solving for integral curves of a vector field - how to account for changing charts?
[Ref. 'Core Principles of Special and General Relativity by Luscombe]
Let $\gamma:\mathbb{R}\supset I\to M$ be a curve that we'll parameterize using $t$, i.e. $\gamma(t)\in M$. It's stated that:
If $\gamma(t)$ has coordinates $x^i(t)$ and [a vector field] $X$ has components $X^i$, finding the integral curve associated with $X$ reduces to solving a set of coupled first-order differential equations, $$\frac{d}{dt}x^i=X^i(x^1(t),\ldots,x^n(t))$$
Immediately after there's an example: if $X=x\partial_x+y\partial_y$, then $dx/dt=x$ and $dy/dt=y$, which gives the integral curve passing through $(a,b)$ at $t=0$ as $\gamma(t)=(ae^t,be^t)$.
Doubt: Referring to the phrase "If $\gamma(t)$ has coordinates $x^i(t)$...", I'm guessing it's unlikely that all the points on the curve belong to a single chart. So how can we claim only one coordinate system $x^i$ to represent the coordinates of all the points on the curve? Won't we have to adjust the coordinates according to the chart?
e.g. if some $p,p'\in\gamma(I)$ are covered by different charts, and if the coordinates of $p$ are $x^i$, won't the coordinates of $p'$ have to be characterized by an entirely different coordinate system (e.g. some $y^i$)?
My current thought process is as follows: [Referring to the example above] Consider any point $p\in M$. Can I say that whatever coordinate system $\{x, y\}$ is used at $p$ (in accordance with whatever chart covers it), the coordinates of $X_{p}$ in the coordinate basis corresponding to the coordinate system will be $x,y$?
Put another way, if I use coordinate system $\{a,b\}$ at a point $p_1$, then the components of $X_{p_1}$ will just be $a,b$. And if I use some other coordinate system $\{u,v\}$ at some other point $p_2$, then the coordinates of $X_{p_2}$ will now be $u,v$.
And what this implies for the integral curve that we calculate, i.e. $(ae^t,be^t)$, is that if $\gamma(t_0)=p_0$ and $\gamma(t_1)=p_1$, then $(ae^{t_0},be^{t_0})$ are the coordinates of the integral curve at $p_0$ in the coordinate basis $(x,y)$, while $(ae^{t_1},be^{t_1})$ are the coordinates of the integral curve at $p_1$ in the coordinate basis $(u,v)$. In essence, we're coming up with local solutions to the curve and "stitching them together".
Does the above seem correct or are there any mistakes?
AI: First, the answer to your "put another way" paragraph is no; coordinate charts don't generally behave that nicely. Consider Cartesian and polar coordinates for a counterexample. The example you're referring to is implicitly saying they're taking $\{x, y\}$ to be standard (Cartesian) coordinates on $\mathbb{R}^2$, and in these coordinates $X = x\partial_x + y\partial_y$. Then one can check that in polar coordinates $\{r, \theta\}$, this same vector field is given by $X = r\partial_r$ (NOT $r\partial_r + \theta\partial_\theta$). This is the change of coordinates formula for vector fields.
Let us examine the integral curves in both coordinate systems now. As you said, in the Cartesian coordinate system, the solution curve $\gamma(t)$ is given by $\gamma(t) = (a e^t, be^t)$ for $a, b \in \mathbb{R}$. The polar coordinate expression for this curve is given by
$$
r(t) = \sqrt{a^2 e^{2t} + b^2 e^{2t}} = e^{t}\sqrt{a^2 + b^2} = r_0 e^t, \ \ \ \ \theta(t) = \theta_0 = \arctan(b/a).
$$
Thus in polar coordinates $\gamma$ is given by $\gamma(t) = (r_0 e^t, \theta_0)$. We got this from changing coordinates in the expression for $\gamma$, but we could start with the expression for $X$ in polar coordinates and solve the ODE
$$
\dot r(t) = r(t), \quad \dot\theta(t) = 0 \implies r(t) = r_0 e^t, \ \theta(t) = \theta_0.
$$
The idea of "stitching things together", though, is what you want to think of when your curve isn't covered by one chart.
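As a concrete check of the change-of-coordinates claim, here is a small SymPy sketch (assuming SymPy is available) that applies $X = x\partial_x + y\partial_y$ to the polar coordinate functions and recovers $X = r\partial_r$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# The components of X in polar coordinates are X(r) and X(theta):
Xr = sp.simplify(x * sp.diff(r, x) + y * sp.diff(r, y))
Xtheta = sp.simplify(x * sp.diff(theta, x) + y * sp.diff(theta, y))
print(Xr, Xtheta)   # sqrt(x**2 + y**2) and 0, i.e. X = r * d/dr
```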
|
H: Show that for any infinite cardinal $\kappa$, we have $\kappa ! = 2^{\kappa}$.
Synopsis
For this exercise, we are asked to show that for any infinite cardinal $\kappa$, we have $\kappa ! = 2^{\kappa}$, where $\kappa! = \text{card}\{f | \text{$f$ is a permutation of $K$}\}$, and a permutation of $K$ is described as a bijection from $K$ onto $K$. My idea of a solution was to try to show that $\kappa! \leq 2^{\kappa} \leq \kappa!$. The first part of doing so, showing $\kappa! \leq 2^{\kappa}$, was easy since $\{f | \text{$f$ is a permutation of $K$}\} \subseteq \mathscr{P}(K \times K)$, and $2^{\kappa \cdot \kappa} = 2^{\kappa}$. But I've been having some trouble showing that $2^{\kappa} \leq \kappa!$. I was thinking of finding an injection between ${}^K2$ and $\{f | \text{$f$ is a permutation of $K$}\}$ where $|K| = \kappa$, but I wasn't quite sure how I would transform the range of $2$ of a function in ${}^K2$ to the range $K$ in the function defining the factorial with it still being an injection and well-defined. Any help would be appreciated.
AI: Hint For any $A \subseteq X$ with $|A| \neq 1$ you can construct a bijection
$$f_A: X \to X$$ such that
$$A = \{ x \in X : f_A(x) \neq x \},$$
i.e. $X \smallsetminus A$ is the set of fixed points of $f_A$.
This shows that
$$
\left| \{ A \subset X : |A| \neq 1 \} \right| \leq \kappa !
$$
and it is not hard to show that $2^\kappa \leq \left| \{ A \subset X : |A| \neq 1 \} \right|$.
P.S. It is probably easier to construct for each $A \subseteq X$ a bijection
$$
f_A : X \times X \to X \times X
$$
such that
$$
\{ z \in X \times X : f_A(z)=z \}= \{ (a,a) : a \in A \}
$$
[or $A= \{ x \in X : f_A(x,y)=(x,y)\ \forall y \in X \}$.]
|
H: Brownian motion mathematical model construction
I quote the construction of a mathematical model of Brownian Motion from Schilling-Partzsch
Consider a one-dimensional setting where a particle performs a random walk (notice that it can move to left or to right with equal probability $\dfrac{1}{2}$). We assume that each particle
starts at origin $x=0$;
changes its position only at discrete times $k\Delta t$ where $\Delta t>0$ is fixed and for all $k=1,2,\ldots$;
moves $\Delta x$ units to the left or to the right with equal probability;
$\Delta x$ does not depend on any past positions nor the current position $x$ nor on time $t=k\Delta t$.
Let us denote by $X_t$ the random position of the particle at time $t\in\left[0,T\right]$. During the time $\left[0, T\right]$, the particle has changed its position $\displaystyle\left\lfloor \frac{T}{\Delta t}\right\rfloor$ times. Since the decision to move left or right is random, we will model it by independent, identically distributed Bernoulli random variables $\varepsilon_k$, $k\geq 1$, where, as anticipated above
$$\mathbb{P}\left(\varepsilon_1=1\right)=\mathbb{P}\left(\varepsilon_1=0\right)=\dfrac{1}{2}$$
so that
$$S_N=\varepsilon_1+\ldots+\varepsilon_N\hspace{0.3cm}\text{and}\hspace{0.3cm} N-S_N$$
denote the number of right and left moves, respectively. We have that
$$X_T=\left(X_T-X_t\right)+\left(X_t-X_0\right)=\sum\limits_{k=n+1}^{N}\left(2\varepsilon_k-1\right)\Delta x +\sum\limits_{k=1}^{n}\left(2\varepsilon_k-1\right)\Delta x$$
Considering that $\varepsilon_k$ are i.i.d. random variables, we have that
$$X_T-X_t \sim X_{T-t}-X_0$$
We write $\sigma^2\left(t\right):= \mathbb{V} X_t$. By Bienaymé's identity we get
$$\mathbb{V} X_T=\mathbb{V}\left(X_T-X_t\right)+\mathbb{V}\left(X_t-X_0\right)=\sigma^2\left(T-t\right)+\sigma^2\left(t\right)$$
which means that $t\mapsto \sigma^2(t)$ is linear:
$$\mathbb{V} X_T=\sigma^2\left(T\right)=\sigma^2 T \tag{\(1\)}$$
Since $\mathbb{E}\varepsilon_1=\frac{1}{2}$ and $\mathbb{V} \varepsilon_1=\frac{1}{4}$ we get by a direct computation that
$$\mathbb{V} X_T = N\left(\Delta x\right)^2=\frac{T}{\Delta t}\left(\Delta x\right)^2 \tag{\(2\)}$$
which reveals that
$$\dfrac{\left(\Delta x\right)^2}{\Delta t}=\sigma^2=\text{constant}\tag{\(3\)}$$
As to the above quoted part, I have two doubts:
As to the definition of variance of $X_t$, that is $\sigma^2\left(t\right):= \mathbb{V} X_t$, obviously I interpret this as if it were saying that $\sigma^2(t)$ is a function of $t$. At this point, given that by definition $\sigma^2\left(T\right):= \mathbb{V} X_T$ and also that $\mathbb{V} X_T=\mathbb{V}\left(X_T-X_t\right)+\mathbb{V}\left(X_t-X_0\right)=\sigma^2\left(T-t\right)+\sigma^2\left(t\right)$, I deduce that $\sigma^2(T)=\sigma^2\left(T-t\right)+\sigma^2\left(t\right)$. However, how can this lead to the second equality in $(1)$, that is $\sigma^2\left(T\right)=\sigma^2 T$? That is, why can one state that $\sigma^2\left(T\right)$ corresponds to $\sigma^2$ times $T$?
I suppose that a direct computation gives equation $(2)$ only if one considers $\Delta x$ as a constant (and then computes the variance of $X_T=\sum\limits_{k=1}^N(2\varepsilon_k-1)\Delta x$). However, it appears to me that $\Delta x$ is not a constant but, by definition, a random variable! Hence, how can a direct computation lead to equation $(2)$?
AI: Note that you only have natural times. Hence, you now have
$$
\sigma^2(T)=\sigma^2(T-1)+\sigma^2(1)=\sigma^2(T-2)+\sigma^2(1)+\sigma^2(1)=\dots=T\sigma^2(1)
$$
This is just the general fact that an additive function on $\mathbb{N}$ is automatically linear there, $\sigma^2(n)=n\,\sigma^2(1)$; extending linearity to all of $\mathbb{R}$ requires an extra regularity assumption such as monotonicity.
$\Delta x$ is not a random variable, it's the length of a single jump of a particle. The random part is whether the particle moves left or right.
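To see the variance scaling concretely, here is a small Monte Carlo sketch, with parameters chosen so that $(\Delta x)^2/\Delta t = 1$, i.e. $\sigma^2 = 1$:

```python
import random

def var_XT(T, dt, dx, trials=5000):
    """Monte Carlo estimate of Var(X_T) for the +-dx random walk with time step dt."""
    steps = int(T / dt)
    samples = []
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x += dx if random.random() < 0.5 else -dx
        samples.append(x)
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / trials

dt, dx = 0.01, 0.1
for T in (0.5, 1.0, 2.0):
    print(T, var_XT(T, dt, dx))   # estimates should be close to T, linear as in (1)-(3)
```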
|
H: Colimit of a constant functor
Let $\mathcal{C},\mathcal{D}$ be categories and $C\in\mathcal{C}$. Consider the constant functor at $C$, $\Delta_C:\mathcal{D}\rightarrow\mathcal{C}$. In order to show that $(C,(1_{\Delta_C(D)})_{D\in\mathcal{D}})$ is a colimit of $\Delta_C$, is it necessary to assume that $\mathcal{D}$ is connected? Wouldn't it be sufficient to assume $\mathcal{D}\ne\emptyset$?
Let $\mathcal{D}\ne\emptyset$. For a morphism $d:D'\rightarrow D$ of $\mathcal{D}$, it is the case that
$$1_{\Delta_C(D')}=1_C=1_C\circ1_C=1_{\Delta_C(D)}\circ\Delta_C(d).$$
This shows that $(C,(1_{\Delta_C(D)})_{D\in\mathcal{D}})$ is a cocone on $\Delta_C$. Let $(M,(f_D)_{D\in\mathcal{D}})$ be another cocone on $\Delta_C$. Then for $(d:D'\rightarrow D)\in\mathcal{D}$
$$f_{D'}=f_D\circ\Delta_C(d)=f_D\circ1_C=f_D;$$
i.e. $f_{D'}=f_D$. Since $\mathcal{D}\ne\emptyset$, there exists $X\in\mathcal{D}$ and so $f_X:C\rightarrow M$. For $D\in\mathcal{D}$
$$f_D=f_X=f_X\circ1_C=f_X\circ1_{\Delta_C(D)}.$$
Let $m:C\rightarrow M$ be another morphism such that $f_D=m\circ1_{\Delta_C(D)}$ for $D\in\mathcal{D}$. Then $f_X=m\circ1_{\Delta_C(X)}=m$. This shows that $(C,(1_{\Delta_C(D)})_{D\in\mathcal{D}})$ is a colimit of $\Delta_C$.
Now, why does Borceux assume $\mathcal{D}$ is connected?
AI: Note that in the case where $\mathcal{D}$ is the discrete category with two objects, the colimit is the coproduct of two copies of $C$ and so $(C, (1_C)_{D \in \mathcal{D}})$ is not the colimit.
The problem in your proof comes when you use morphisms $d: D' \rightarrow D$ to prove that for any cocone on $\Delta_C$, each $f_D$ is equal. As we saw with the discrete category, such morphisms might not exist in $\mathcal{D}$. And without those morphisms you get cocones whose legs can be different, which don't factor through $(C, (1_C)_{D \in \mathcal{D}})$.
Making the assumption that you've got enough morphisms in $\mathcal{D}$, to be able to prove that the legs of the cocone must be all the same, is exactly the same as assuming that $\mathcal{D}$ is connected.
(That is, you've assumed that between any two objects $D$ and $D'$ there is a chain of objects $D_1, D_2, D_3,\ldots$ along which the legs of a cocone at consecutive objects can be shown to agree, because there is some morphism $D_{n-1} \rightarrow D_n$ or vice versa.)
|
H: Show that the sequence is convergent $\frac{(1)(3)(5)\dots(2n-1)}{(2)(4)(6)\dots(2n)}$
$\mathbf{Question:}$ Show that $\frac{(1)(3)(5)\dots(2n-1)}{(2)(4)(6)\dots(2n)}$ is convergent where $n\in \mathbb{N}$
$\mathbf{My\ attempt:}$
Let $a_n = \frac{2n-1}{2n}$ and let $f(n) = a_n$
$$
f(n)=\frac{2n-1}{2n} = 1-\frac{1}{2n}
$$
$$
f'(n) = \frac{1}{2n^2}
$$
As $f'(n)>0$, it is a strictly increasing sequence
And $\frac{2n-1}{2n} >0$, therefore it is bounded below
But according to the Monotone convergence theorem, this sequence is divergent instead of convergent?
Any help is appreciated.
AI: We define:
$$a_n=\prod_{k=1}^n\bigg(\frac{2k-1}{2k}\bigg)$$
Then, $$a_{n+1}=\prod_{k=1}^{n+1}\bigg(\frac{2k-1}{2k}\bigg)=a_n\cdot\frac{2n+1}{2n+2}$$
$$\implies \frac{a_{n+1}}{a_n}=\frac{2n+1}{2n+2}<1$$
$$\implies a_{n+1}<a_{n},$$
so the sequence is strictly decreasing. For boundedness, observe that $a_1=\frac12$ and each factor of the product is positive, thus $0<a_n\leq \frac12$ for every $n$. A decreasing sequence that is bounded below converges, hence $(a_n)_{n\in \Bbb N}$ converges.
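A few computed values make the decrease visible; in fact $a_n=\binom{2n}{n}/4^n\sim 1/\sqrt{\pi n}$, so the limit is $0$. A short sketch (requires Python 3.8+ for `math.prod`):

```python
from math import prod, pi, sqrt

def a(n):
    return prod((2*k - 1) / (2*k) for k in range(1, n + 1))

for n in (1, 2, 5, 10, 100, 1000):
    print(n, a(n), 1 / sqrt(pi * n))   # a(n) decreases and tracks 1/sqrt(pi*n)
```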
|
H: Homeomorphism from $\mathbb{N}$ to $T = \{1/n: n \in \mathbb{N}\}$
I'm trying to understand what is probably a basic topological fact and I'm probably overthinking it. I'm considering the map $f: \mathbb{N} \rightarrow T$ where $T = \{1/n: n \in \mathbb{N}\}$ defined by $f(n) = 1/n$, where $\mathbb{N}$ and $T$ have the subspace topology inherited from $\mathbb{R}$, and I'm trying to show that this is a homeomorphism. Clearly, the map is a bijection. My issue is trying to show that the map is bicontinuous. I was thinking about the usual definition of continuity, that a map is continuous if the inverse images of open sets are open, so I let $U$ be some open set in $T$. Then $f^{-1}(U) = A \subset \mathbb{N}$, and I was thinking that I could show that $A$ is open under the subspace topology. Any element of $\mathbb{N}$ can be viewed as $(n-\frac{1}{2}, n+\frac{1}{2}) \cap \mathbb{N}$, and since $(n-\frac{1}{2}, n+\frac{1}{2})$ is open in $\mathbb{R}$, then $\{n\}$ is open in $\mathbb{N}$ with the subspace topology. Since singletons are open, I can just view $A$ as the union of a bunch of singletons, which is an arbitrary union, and hence open. Hence $f$ is continuous. And then I can do something similar to show $f^{-1}$ is continuous. Is this the right idea or am I just trying too hard and missing something simple?
Thanks!
AI: It’s the right idea, and it’s a lot simpler than you seem to be making it. Just show that each of the spaces has the discrete topology, so that every subset of $\Bbb N$ and every subset of $T$ is open. Then continuity of $f$ and $f^{-1}$ becomes trivial.
|
H: Proving that $\sqrt[n]{a^m}=(\sqrt[n]{a})^m$
I have a doubt on how to solve Exercise B11 of Section 3.4 of Advanced Calculus of Watson Fulks. It only says: prove that $\sqrt[n]{a^m}=(\sqrt[n]{a})^m$. Clearly it's not always true if $a<0$, so I guess I must assume $a\geq0$. And given that the previous sections are about continuity, strictly increasing functions and inverses, I guess that I must use those topics. The only thing I have done is to define the function $f(x)=\sqrt[n]{x^m}$. It's strictly increasing so it must have an increasing inverse $\varphi(y)=\sqrt[m]{y^n}$ but from this point I don't know how to continue.
AI: First, for any $a>0$ and $n,m\in\mathbb{N}$, we know
$$(a^n)^m=a^{nm}=a^{mn}=(a^m)^n$$
This is easily seen by a simple counting argument:
$$(a^n)^m=\underbrace{(\underbrace{a\cdot a\cdot a\cdot...\cdot a}_{n\text{ times}})\cdot (\underbrace{a\cdot a\cdot a\cdot...\cdot a}_{n\text{ times}})\cdot...\cdot (\underbrace{a\cdot a\cdot a\cdot...\cdot a}_{n\text{ times}})}_{m\text{ times}}$$
$$= \underbrace{a\cdot a\cdot a\cdot ...\cdot a}_{nm\text{ times}}=a^{nm}$$
By the same argument we get $(a^m)^n=a^{mn}$. Of course, since multiplication is commutative this implies
$$a^{nm}=a^{mn}$$
Before continuing, note that for all $x>0$, we know $(\sqrt[n]{x})^n=\sqrt[n]{x^n}=x$. Now, define the function
$$\phi(x)=\sqrt[m]{x^n}$$
We will show that this is the inverse function for both of your functions. Since $\phi$ is strictly increasing on the non-negative reals, and in particular injective, this will imply that they are the same function.
Now, let us consider your two functions. We have
$$\phi(f(x))=\sqrt[m]{f(x)^n}=\sqrt[m]{(\sqrt[n]{x^m})^n}=\sqrt[m]{x^m}=x$$
If we call your other function $g(x)$, then
$$\phi(g(x))=\sqrt[m]{g(x)^n}=\sqrt[m]{((\sqrt[n]{x})^m)^n}=\sqrt[m]{(\sqrt[n]{x})^{mn}}=\sqrt[m]{(\sqrt[n]{x})^{nm}}=\sqrt[m]{((\sqrt[n]{x})^n)^m}=\sqrt[m]{x^m}=x$$
Thus, $\phi(x)$ is the inverse function for both $f(x)$ and $g(x)$ and we are done.
|
H: Is there more than one definition of homotopic equivalence for two spaces?
Is there more than one definition of homotopic equivalence?
Wolfram.com under the topic Homotopic says the unit circle and a point are homotopic (presumably meaning homotopic equivalent) in the plane. Wolfram also says one must define the “ambient space” to determine whether two objects are homotopic or not.
The Wolfram site goes on to say that there are other ways to compare two spaces via homotopy without ambient spaces. Then an alternative definition that matches Hatcher’s definition (p3) is given.
So, we have two different definitions of homotopic equivalence, one that results in the circle and the point being homotopic equivalent, and the other resulting in the circle and the point not being homotopic equivalent.
Am I understanding this correctly? Is the key in the nomenclature “homotopic” vs. homotopic equivalent”?
AI: We need to be careful with the terminology. I am following the definitions of Hatcher.
Two continuous maps $f,g\colon X\to Y$ are homotopic, if there is a homotopy $$H\colon X\times[0,1] \to Y$$ such that $$H(\cdot, 0) = f,\ H(\cdot, 1) = g.$$
A map $f\colon X\to Y$ is called a homotopy equivalence, if there is a map $g:Y\to X$ such that $$g\circ f \simeq \operatorname{id}_X$$
$$f\circ g \simeq \operatorname{id}_Y$$
Now if there is a homotopy equivalence $f$ between the spaces $X,Y$, they are called homotopy equivalent.
Now $\mathbb{R}^2$ is simply connected, thus the unit circle $S^1\subset \mathbb{R}^2$ embedded as the image of a map $$f:S^1 \to \mathbb{R}^2$$ is indeed homotopic to a constant map. This observation relies on the ambient space $\mathbb{R}^2$ being simply connected.
Removing a point, say the origin $p = \mathbf{0}$ from $\mathbb{R}^2$ violates the simple connectedness of $\mathbb{R}^2$, and therefore the inclusion of the unit circle $S^1$ in $\mathbb{R}^2$ is no longer homotopic to the constant map.
I've looked up the definitions provided by Wolfram, and when they call two subspaces homotopic, I think they mean that there is a homotopy $H$ between two maps $f,g\colon X\to Y$, and they consider the respective images of these maps, i.e. $f(X), g(X)\subset Y$, as subspaces being homotopic.
If there is any notion of (sub)spaces being homotopic, it is most likely all about their respective maps being homotopic, for example the inclusion of a subspace (as with $S^1 \hookrightarrow \mathbb{R}^2$ being homotopic to a constant map).
|
H: Find $\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1}$ (I need a review of my resolution please :) )
Find the limit:
$$\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1}$$
This is what I have; I'm not sure about my answer (I'm just learning limits).
$$\lim_{x \to 1} \cos(\pi \cdot x) \cdot \lim_{x \to 1} \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1} $$
And I know that cosine is bounded, and its limit here is $L=-1$.
The other limit:
$$ \lim_{x \to 1} \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1} $$
$$ \lim_{x \to 1} \sqrt{\frac{(x-1) \cdot (x-1)}{(x-1)\cdot (x+1)}} \cdot \frac{1-x}{x^2+x-1} $$
Simplifying:
$$ \lim_{x \to 1} \sqrt{\frac{(x-1)}{(x+1)}} \cdot \frac{1-x}{x^2+x-1} $$
Evaluating:
$$ \lim_{x \to 1} \sqrt{\frac{(1-1)}{(1+1)}} \cdot \frac{1-1}{1^2+1-1} $$
$$ = \sqrt{0} \cdot \frac{0}{1} $$
$$\fbox {= 0}$$
So since the cosine factor is bounded and the other limit is zero, the whole limit is $0$.
So;
$$\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1} = 0$$
AI: Your answer is correct. (Strictly speaking, the square root is only defined near $1$ for $x>1$, so this is a right-hand limit $x\to1^{+}$; its value is indeed $0$.)
The limit can be split into parts if each of them exists as follows
$$\lim_{x \to 1} \cos(\pi \cdot x) \cdot \sqrt{\frac{(x-1)^2}{(x^2-1)}} \cdot \frac{1-x}{x^2+x-1}$$
$$=\lim_{x \to 1} \cos(\pi \cdot x) \cdot \lim_{x \to 1}\sqrt{\frac{x-1}{x+1}} \cdot \lim_{x \to 1}\frac{1-x}{x^2+x-1}$$
$$=(-1)\cdot 0\cdot 0=\color{blue}{0}$$
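A quick symbolic confirmation, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.cos(sp.pi * x)
        * sp.sqrt((x - 1)**2 / (x**2 - 1))
        * (1 - x) / (x**2 + x - 1))
print(sp.limit(expr, x, 1, '+'))   # 0  (right-hand limit: the root needs x > 1)
```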
|
H: Can these two conjectured relationships between two prime counting functions and the harmonic number function $H(x)$ be proven?
This question assumes the following definitions where $p$ is a prime and $n$ and $k$ are positive integers.
(1) $\quad\pi(x)=\sum\limits_{p\le x} 1\quad\text{(fundamental prime counting function)}$
(2) $\quad\Pi(x)=\sum\limits_{p^k\le x} \frac{1}{k}\quad\text{(Riemann's prime-power counting function)}$
(3) $\quad f(x)=\sum\limits_{p^k\le x}p^{1-k}=\sum\limits_{n\le x}\left\{
\begin{array}{cc}
\frac{\text{rad}(n)}{n} & n=p^k \\
0 & n\ne p^k \\
\end{array}
\right.\quad\text{(another prime-power counting function)}$
(4) $\quad H(x)=\sum\limits_{n\le x}\frac{1}{n}\quad\text{(harmonic number function)}$
(5) $\quad A137851(n)=\sum\limits_{p|n} p\ \mu\left(\frac{n}{p}\right)\quad$ (see OEIS entry A137851)
(6) $\quad sopf(n)=\sum\limits_{p|n} p\qquad\qquad$ (see OEIS entry A008472)
Note the prime-power counting function $f(x)$ defined in formula (3) above has the characteristic $\pi(x)\le f(x)\le \Pi(x)$ for $x\ge 0$ and the characteristic $\pi(x)<f(x)<\Pi(x)$ for $x\ge 8$.
Question: Can the following conjectured relationships between $\pi(x)$ and $f(x)$ defined in formulas (1) and (3) above and the harmonic number function $H(x)$ defined in formula (4) above be proven?
(7) $\quad\pi(x)=\sum\limits_{n\le x}\frac{\text{A137851}(n)}{n}\,H\left(\frac{x}{n}\right)\qquad\text{(conjectured relationship)}$
(8) $\quad f(x)=-\sum\limits_{n\le x}\frac{\mu(n)\,sopf(n)}{n}\,H\left(\frac{x}{n}\right)\quad\text{(conjectured relationship)}$
I verified the conjectured relationships illustrated in (7) and (8) above for integer values of $x\le 10,000$.
AI: $$\sum_{n\le x} a_n=\sum_{n\le x} b_n g(x/n), \qquad g(x)=\sum_{n\le x} c_n$$
iff $$a_n = \sum_{d| n} b_d c_{n/d}$$
Thus your last question becomes $$p^{1-k} 1_{n=p^k}=-\sum_{d| n} \frac{\mu(d)\sum_{p| d}p}{d} \frac{d}{n}$$
Changing the order of summation it is $$= -\frac1n\sum_{p| n} p \sum_{d| n/p} \mu(dp)$$
Write $n=p^r m$ with $p\nmid m$. Then $\sum_{d| n/p} \mu(dp)=\sum_{d| m} \mu(dp)=-\sum_{d| m} \mu(d)=-1_{m=1}$. Thus it is $$=\frac1n\sum_{p| n} p\, 1_{n=p^k}= p^{1-k} 1_{n=p^k}$$
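Conjecture (8) is easy to check by brute force for small $x$ (the question already reports verification up to $10{,}000$); a self-contained Python sketch:

```python
from math import isqrt

X = 200

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p in range(2, n + 1) if sieve[p]]

ps = primes_up_to(X)

def mobius(n):
    m, result = n, 1
    for p in ps:
        if p * p > m:
            break
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
    return -result if m > 1 else result

def sopf(n):
    return sum(p for p in ps if p <= n and n % p == 0)

def H(x):
    return sum(1.0 / m for m in range(1, int(x) + 1))

lhs = sum(p ** (1 - k) for p in ps for k in range(1, 9) if p ** k <= X)  # f(X)
rhs = -sum(mobius(n) * sopf(n) / n * H(X / n) for n in range(1, X + 1))
print(lhs, rhs)   # the two sides agree to floating-point accuracy
```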
|
H: Confusion about the underlying function of a Functor.
I was reading about Brouwer's fixed point theorem and a doubt came to mind about the underlying function of a functor on morphisms. We can think of $\mathbb{S}^1 $ as a subset of $\mathbb{R}^2$, so we get the inclusion map $i : \mathbb{S}^1 \hookrightarrow \mathbb{R}^2$, and of course we have the identity map $\mathrm{id}: \mathbb{S}^1 \rightarrow \mathbb{S}^1 $. As we know, the fundamental group is a functor, so $\pi_1(i : \mathbb{S}^1 \hookrightarrow \mathbb{R}^2)=0$ since $\mathbb{R}^2$ is contractible, and also $\pi_1(\mathrm{id}: \mathbb{S}^1 \rightarrow \mathbb{S}^1)= \mathrm{id}_\mathbb{Z}: \mathbb{Z}\rightarrow \mathbb{Z}$. So far so good, but as functions, i.e. some subset of a cartesian product with some property, $i= \mathrm{id}$, and thus the underlying function of the functor is ill-defined. So, what I'm asking is, what am I missing? Are these functions different since we are looking at them as morphisms? From what definition does this follow? I'm pretty sure that when we say that a morphism has a domain and a codomain we actually are saying the morphisms are triples, but I'm feeling a little insecure. Any insight is appreciated.
AI: A continuous map $f:(X,x_0)\to (Y,y_0)$ induces a homomorphism $f_*:\pi_1(X,x_0)\to\pi_1(Y,y_0)$ between the fundamental groups, two examples being the identity and inclusion maps
$$
f:S^1\to S^1, \ g:S^1\to \mathbb{R}^2
$$
which give rise to the identity and trivial homomorphisms
$$
f_*:\mathbb{Z}\to\mathbb{Z}, \ g_*:\mathbb{Z}\to1.
$$
The two "morphisms" $f$ and $g$ are not the same, since they have different codomains.
In general, a (covariant) functor $\mathcal{F}$ takes objects $A$ and $B$ to objects $\mathcal{F}(A)$ and $\mathcal{F}(B)$ and a morphism $f:A\to B$ to the morphism $\mathcal{F}(f):\mathcal{F}(A)\to\mathcal{F}(B)$ (respecting composition, etc.). In other words, the codomain is part of the information of a morphism as you say.
This is a feature, not a bug - we definitely want to view putting the circle inside itself as distinct from putting the circle into the plane, especially from the homotopic point of view.
For the fixed-point theorem, the assumption of a retract $r:D\to S^1$ from the disc to its boundary circle gives a contradiction after applying the $\pi_1$ functor. The composition
$$
S^1\xrightarrow{i}D\xrightarrow{r}S^1
$$
becomes
$$
\mathbb{Z}\xrightarrow{i_*}1\xrightarrow{r_*}\mathbb{Z}.
$$
On the one hand $\pi_1(r\circ i)$ should be the identity homomorphism since $r\circ i$ is the identity map. However, this homomorphism factors through the trivial group and must therefore be trivial. Hence the retract $r$ cannot exist.
|
H: Why is the expectation of the random variable "W" equal to the limited integral of the survival function of the random variable "Y"?
Ok, so here's the problem. Let the random variable X represent loss in 2005. The density function of X is exponential with mean equal to 1. Let the random variable Y represent loss in 2008. Y = 1.2X
An insurance company covers the losses in 2008 up to a maximum payment of M. The insurance payment in Year 2008 is the random variable "W". We are told that in year 2005, the insurance company covered the full loss and that results in an expected payment of 1 (because the expectation of X is 1). We are told that in year 2008, because of the new maximum payment policy, the expectation for payment in 2008 (W) is equal to the expectation of payment in 2005. In other words, E[W]=1. We are asked to find M.
The solution in the book tells me that E[W] is equal to the survival function of Y integrated from 0 to M. My question, essentially, is why is this true? I understand that integrating the survival function gives us expectation, but in order to get the expectation of W, don't we need to integrate the survival function of W with respect to w? Why can we find the expectation of W by integrating the survival function of Y from 0 to M? I can solve the problem if I just accept this as true, because I can use the fact that Y = 1.2X to actually perform the integration based on the density function for X, but I need to understand why that is true. If it helps, the solution said to think of W = min(Y, M), which stands to reason, since W is either M (if Y > M) or Y (if Y ≤ M). Any help with this would be greatly appreciated. I've been setting up integrals and rearranging equations for hours to try to understand this.
AI: Note that $$W = \min(Y, M) = \begin{cases} Y, & Y \le M \\ M, & Y > M. \end{cases}$$ This is because if the loss in 2008 does not exceed the policy limit $M$, the insurer pays out the full loss on the claim, which is just $Y$. But if it does exceed the policy limit, then the payment is capped at the policy limit.
Next, we want to determine $M$ such that $$\operatorname{E}[W] = \operatorname{E}[X] = 1,$$ which is the condition that the expected payment in 2008 is equal to the expected payment in 2005. This is because there is no policy limit imposed on losses in 2005, and you already established that the expected payment is $1$.
Now we incorporate the relationship between $Y$ and $X$. Since $Y = 1.2X$, it follows that $$Y \sim \operatorname{Exponential}(\mu = 1.2),$$ that is to say, $$\Pr[Y \le y] = \Pr[1.2X \le y] = \Pr[X \le y/1.2] = 1 - e^{-y/1.2}, \quad y \ge 0.$$ So $$\operatorname{E}[W] = \operatorname{E}[\min(Y,M)] = \int_{y=0}^\infty \min(y,M) f_Y(y) \, dy.$$ When $0 \le y \le M$, we have $\min(y,M) = y$, and when $y > M$, we have $\min(y,M) = M$, therefore, $$\operatorname{E}[W] = \int_{y=0}^M y f_Y(y) \, dy + \int_{y=M}^\infty M f_Y(y) \, dy.$$ At this point we can substitute the relevant density and complete the computation, but this is not the essence of your question. Rather, you wish to understand why $$\operatorname{E}[W] = \int_{y=0}^M S_Y(y) \, dy.$$ To see this, observe that $$S_Y(y) = \Pr[Y > y] = \int_{t = y}^\infty f_Y(t) \, dt.$$ Hence $$\int_{y=0}^M S_Y(y) \, dy = \int_{y=0}^M \int_{t=y}^\infty f_Y(t) \, dt \, dy$$ by direct substitution. What is this region of integration? If we think of $(t,y)$ as being plotted in a Cartesian coordinate plane, with $t$ the horizontal axis and $y$ the vertical, then the region satisfying $(0 \le y \le M) \cap (y \le t \le \infty)$ is a horizontal "strip" of height $M$ in the first quadrant, with a triangular region removed. That is to say, this region is the intersection of the inequalities $y \ge 0$, $y \le M$, and $y \le t$. So if we are to interchange the order of integration, we must split up $t$ depending on whether $t \le M$ or $t > M$. In the first case, $t \le M$ gives us the integral $$\int_{t=0}^M \int_{y=0}^t f_Y(t) \, dy \, dt = \int_{t=0}^M f_Y(t) \int_{y=0}^t 1 \, dy \, dt = \int_{t=0}^M t f_Y(t) \, dt.$$ In the second case, $t > M$ gives $$\int_{t=M}^\infty \int_{y=0}^M f_Y(t) \, dy \, dt = \int_{t=M}^\infty f_Y(t) \int_{y=0}^M 1 \, dy \, dt = \int_{t=M}^\infty M f_Y(t) \, dt.$$ Adding these two pieces together gives $$\int_{y=0}^M S_Y(y) \, dy = \int_{t=0}^M t f_Y(t) \, dt + \int_{t=M}^\infty M f_Y(t) \, dt,$$ which, with an inconsequential change of the variable of integration from $t$ to $y$, demonstrates the desired equivalence with the first expression we derived for the expectation.
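Finishing the computation numerically (the exponential density comes from the problem statement): solving $1.2\left(1-e^{-M/1.2}\right)=1$ gives $M = 1.2\ln 6 \approx 2.15$, which a quick simulation sketch confirms:

```python
import random
from math import log

M = 1.2 * log(6)   # from 1.2 * (1 - exp(-M / 1.2)) = 1
trials = 10**5
total = sum(min(random.expovariate(1 / 1.2), M) for _ in range(trials))
print(M, total / trials)   # E[min(Y, M)] should be close to 1
```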
|
H: Any easier way to get the quotient and remainder from a minus number divided by a positive one?
negative number divided by positive number, what would be remainder?
I've read these answers linked above, but I don't feel I'm answered enough.
$$-27 = \underbrace{-6}_q\cdot \underbrace{5}_d + \underbrace{3}_r$$
They say this, but I don't think I can use it later when I need to divide some large minus number with another large positive number, such as -777777 with 439. The traditional positive number division is easy enough that no matter how young you are, even grammar school students could solve it with enough time given. I was wondering if there's any that kind of easy method to get the quotient and remainder from the minus numbers divided by the positive numbers.
AI: Try this:
First divide the number assuming it to be positive (in your example, $777777$); the remainder when we divide it by $5$ is $2$, so when $-777777$ is divided by $5$ the remainder would be $-2$. But the tradition in mathematics is to always keep the remainder non-negative, so just add $5$ to $-2$ (and lower the quotient by $1$): the remainder is $3$.
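The same recipe in Python; note that Python's built-in `divmod` already uses floored division, so it returns a non-negative remainder for a positive divisor:

```python
q, r = divmod(-777777, 439)
assert q * 439 + r == -777777 and 0 <= r < 439
print(q, r)   # (-1772, 131)

# By hand, as described above: divide the absolute value first,
# then shift a nonzero negative remainder up by the divisor.
q0, r0 = divmod(777777, 439)
q, r = (-q0, 0) if r0 == 0 else (-q0 - 1, 439 - r0)
print(q, r)   # (-1772, 131)
```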
|
H: One person from a party is selected at random
Here is the question:
John invites 12 friends to a dinner party, half of whom are men.
Exactly one man and one woman are bringing desserts. If one person
from this group is selected at random, what is the probability that it
is a woman, or a man who is not bringing a dessert?
I'm not a native speaker, how should I understand "a woman, or a man who is not bringing a dessert" please?
Should it be construed
(1) a man who is not bringing a dessert or a woman
or
(2) a woman who is not bringing a dessert or a man who is not
bringing a dessert
?
Because apparently the answer is different from each other and if I'm not wrong, (1) is 11/12 and (2) is 10/12.
By the way, how would this sentence "a woman who is not bringing a dessert or a man who is not bringing a dessert" be written if it is not "a woman, or a man who is not bringing a dessert"?
I appreciate your help!
AI: I think the question asks you to calculate the probability that the person selected is either a woman, or a man who is not bringing a dessert.
Translated into the language of events, this is
$$P(\text{woman}\cup\text{man without dessert})=P(\text{woman})+P(\text{man without dessert}),$$ since the two events are disjoint.
So what you need to do is simple arithmetic: $1/2 + 5/12=11/12$.
By the way, since there is a comma after "woman", the clause "who is not bringing a dessert" attaches only to "man". Your reading (2) would make sense only if the comma were absent.
|
H: Prove that a semigroup $(S, +)$ is a group if and only if the only ideal of $S$ is $S$ itself.
Consider a semigroup $(S, +).$ We say that a nonempty subset $I \subseteq S$ is an ideal of $S$ whenever we have that $S + I \subseteq I,$ where $S + I$ is the set consisting of all sums $s + i$ with $s \in S$ and $i \in I.$ Prove that $(S, +)$ is a group if and only if the only ideal of $S$ is $S$ itself.
One direction of the above is straightforward. Given that $S$ is a group with additive identity $0_S,$ consider an ideal $I$ of $S.$ Given any element $i \in I,$ by hypothesis that $S$ is a group, we have that $-i \in S.$ Consequently, we have that $$s = s + 0_S = s + (-i + i) = (s + (-i)) + i$$ is an element of $S + I.$ By assumption that $I$ is an ideal of $S,$ we have that $S + I \subseteq I,$ from which it follows that $s \in I.$ We conclude therefore that $S \subseteq I.$ Evidently, the reverse containment holds, so we find that $I = S.$ But this says precisely that the only ideal of $S$ is $S$ itself.
Unfortunately, however, I am unable to prove the converse, i.e., I cannot show that if the only ideal of $S$ is $S$ itself, then $S$ is a group. Of course, there are many different equivalent conditions to show that a semigroup is a group, but my initial thought is that I could show that $S$ has an additive identity $0_S$ and additive inverses $-s$ for each of its elements. I have tried for a while to use the fact that for any nonempty subset $J$ of $S,$ we have that $J \cup (S + J)$ is an ideal of $S.$ Explicitly, I tried to show by way of contradiction that the set $I = \{s \in S \,|\, \forall t \in S, \, s + t = t \}$ is nonempty, but I got tied up in the negation, and I could not come up with a nonempty set $J$ such that $J \cup (S + J)$ is an ideal -- in which case, I could use the fact that $J \cup (S + J) = S.$
I would greatly appreciate any advice or assistance. For reference, this is Theorem 1.1 in Gilmer's Commutative Semigroup Rings text; however, he asserts that the proof is "elementary."
AI: This is false without assuming commutativity. For instance, let $S$ be any set with more than one element and define the operation $+$ by $x+y=x$. This is associative and the only ideal in $S$ is $S$, but $S$ is not a group.
Here's how you can prove it assuming $+$ is commutative. First, since $S$ is assumed to be an ideal of $S$, it is by definition nonempty. Pick an element $s\in S$, and consider $I=S+s$. This is an ideal by associativity, so $I=S$. In particular, there is some element $0\in S$ such that $0+s=s$. Now let $J=\{x\in S:x+0=x\}$. Note that $J$ is an ideal since if $x+0=x$ then $(y+x)+0=y+(x+0)=y+x$ for any $y$, and it is nonempty since $s\in J$ (by commutativity, $s+0=0+s=s$). Thus $J=S$, and so $0$ is an identity element. For any $x\in S$, now, $S+x$ is an ideal and thus is $S$, and in particular $0\in S+x$, so $x$ has an inverse.
|
H: Weakly open sets are unbounded
I was studying the proof that every open set in the weak topology of an infinite-dimensional normed space is unbounded, and I came across the following argument. If $f_1,\ldots,f_n$ are continuous linear functionals defined on an infinite-dimensional normed space $E$, then the set $\cap_{i=1}^{n}\ker f_i$ has a point $x\neq0$. The one who used this argument said that otherwise the map defined by $x \in E \mapsto(f_1(x),\ldots,f_n(x)) \in \mathbb{R}^n$ would be an injection. What I don't understand is why the proposed map can't be an injection; how does this set up a contradiction?
AI: The proposed map can't be an injection because we can't inject an infinite-dimensional space into a finite-dimensional space. In particular, injective maps preserve linear independence, so mapping $n + 1$ linearly independent vectors from $E$ (which is possible to find, since $E$ is infinite-dimensional) will produce $n + 1$ vectors in $\Bbb{R}^n$, which cannot be linearly independent, contradicting injectivity.
Now, every weakly open set contains a basic weakly open set, i.e. one of the form:
$$\mathcal{U} = \{x \in E : |f_i(x) - \alpha_i| < \varepsilon_i \text { for } i = 1, \ldots, n\}.$$
Let $K$ be the previously considered kernel. Note that, if $y \in K$, and $x \in \mathcal{U}$, then $x + y \in \mathcal{U}$. But $K$ is a non-trivial subspace, and hence is unbounded, and so we can make $x + y$ as large as we like, thus proving $\mathcal{U}$ (and hence the weakly open set) is unbounded too.
|
H: Why does the polynomial splitting implies existence primitive root of unity in $\Bbb{F}_{p^2}$?
This question refers to WimC's answer to this question. Consider the cubic congruence problem:
$$
f(x) := x^3 - x^2 - 2x + 1 \equiv 0 \pmod{p}
$$
We want to know for which $p$ the polynomial $f(x)$ splits. The answer to this is $p \equiv 0,1,6 \pmod{7}$, and when proving this WimC made the following claim:
If $f(x)$ has a root in $\Bbb{F}_p$, then $\Bbb{F}_{p^2}$ contains a primitive seventh root of unity.
I fail to see why this is the case. As discussed in his answer/comments, if $\alpha$ is a solution then we can split $f(x) \equiv (x - \alpha_1)(x - \alpha_2)(x - \alpha_3)$ in any field, where $\alpha_1 = \alpha$, $\alpha_2 = \alpha^2 - \alpha - 1$, $\alpha_3 = -\alpha^2 + 2$. Furthermore, if $\beta$ is a root to $x^2 + \alpha x + 1$, then $\beta$ is a primitive seventh root of unity. I can understand these parts.
He then further pointed out that:
$$
\Phi_7(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 = \prod_{i=1}^3 (x^2 + \alpha_ix + 1)
$$
and any root of $\Phi_7(x)$ must have degree $\leq 2$. I'm lost at this part.
Is he making the claim that $\Phi_7(x)$ has a root in $\Bbb{F}_{p^2}$? If so, why is this true?
Why does the degree of roots matter here?
I'm relatively new to number theory with minimal exposure to Galois theory, so any beginner-friendly explanation would be appreciated.
AI: Any root of $\Phi_7(x)$ must be a root of one of the quadratics $x^2 + \alpha_i x + 1$. If the quadratic is reducible, then clearly the root lies in $\mathbb{F}_p$; otherwise, the roots of an irreducible quadratic must lie in $\mathbb{F}_{p^2}$. This is a special case of the general fact that the field $\mathbb{F}_{p^n}$ contains roots for all irreducible polynomials of degree $n$ over $\mathbb{F}_p$, which can be proven from the uniqueness of the field of order $p^n$.
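A brute-force check of the pattern for small primes (for this particular cubic, one root in $\mathbb{F}_p$ forces a complete split, as the explicit factorization in the question shows):

```python
def nroots(p):
    """Number of distinct roots of x^3 - x^2 - 2x + 1 in F_p."""
    return sum((a**3 - a**2 - 2*a + 1) % p == 0 for a in range(p))

primes = [p for p in range(2, 100)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
for p in primes:
    agrees = (nroots(p) > 0) == (p % 7 in (0, 1, 6))
    print(p, p % 7, nroots(p), agrees)   # agrees is True for every prime listed
```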
|
H: How to find the angle of a non right angled triangle in a cube?
I have to find $\angle MHN$ ($\angle H$ in $\Delta HMN$). It is inside a cube that has side lengths of $12$ cm. $M$ is the midpoint of the diagonal $BD$ and $N$ is the midpoint of edge $GF$. Here's the diagram:
I'm completely lost on how I would find $\angle MHN$ because the triangle is skewed if I try to imagine it in 3D (the top is further back than the bottom of the triangle).
I have however calculated the length of line $MH$, in exact form it is $3\sqrt{24}$ cm, which is also $14.696938...$ cm. I could also probably find the lengths of line $MN$ and $HN$ but I would need to know at least one other angle to find $\angle MHN$ using the sine or cosine rule.
I don't know how to figure out the size of another angle.
I'd appreciate the help.
AI: Use Pythagorean theorem to find all sides of $\Delta HMN$ as follows
$$\text{in right}\ \Delta MLN, \ \ \ \ \ \ MN^2=ML^2+LN^2=12^2+6^2=180$$
$$\text{in right}\ \Delta MLH, \ \ \ \ \ \ MH^2=ML^2+LH^2=12^2+(6\sqrt2)^2=216$$
$$\text{in right}\ \Delta HGN, \ \ \ \ \ \ HN^2=HG^2+GN^2=12^2+6^2=180$$
Method-1: Since $MN=HN$, triangle $HMN$ is isosceles; let $\angle MHN=\angle HMN=\theta$, so that $\angle MNH=\pi-2\theta$. Using the Sine rule in $\Delta HMN$ as follows
$$\frac{\sin\angle MHN}{MN}=\frac{\sin\angle MNH}{MH}\iff \frac{\sin\theta}{\sqrt{180}}=\frac{\sin(\pi-2\theta)}{\sqrt{216}}$$
$$\sin\theta\left(\cos\theta-\frac{3}{\sqrt{30}}\right)=0\quad \quad (\sin\theta\ne 0)$$
$$\therefore \ \theta=\cos^{-1}\left(\frac{3}{\sqrt{30}}\right)\approx \color{blue}{56.79^\circ}$$
Method-2: Since all three sides of $\Delta MHN$ are known hence use Cosine rule as follows
$$\cos \angle MHN=\frac{MH^2+HN^2-MN^2}{2(MH)(HN)}=\frac{216+180-180}{2(\sqrt{216})(\sqrt{180})}$$
$$\angle MHN=\cos^{-1}\left(\frac{3}{\sqrt{30}}\right)\approx \color{blue}{56.79^\circ}$$
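With coordinates, the angle drops out of a dot product. A small NumPy sketch, assuming the usual labeling (base $ABCD$, top $EFGH$ with $E$ above $A$, $F$ above $B$, $G$ above $C$, $H$ above $D$, side length $12$):

```python
import numpy as np

M = np.array([6, 6, 0])      # midpoint of diagonal BD
N = np.array([12, 6, 12])    # midpoint of edge GF
H = np.array([0, 12, 12])

u, v = M - H, N - H
cos_angle = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_angle)))   # ~56.79
```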
|
H: Calculating determinant of $A^n$ given the matrix $A$
Find whether $\det(A)=\det(A^n)$ for $n>1$, and more generally how $\det(A^n)$ relates to $\det(A)$.
How do I tackle questions like this, in general if the matrix $A$ is provided in the question?
Should I work out with the basic definition of a determinant, which I found very difficult to apply in any question?
Of course, I cannot just go on to compute $A^n$ explicitly and calculate the value of $\det(A^n)$ directly?
AI: The determinant function is multiplicative, i.e. $\det(AB) = \det(A)\det(B)$. In particular, this means that $\det(A^n) = \det(A)^n$. So given a matrix $A$, just find its determinant and raise it to the given power.
For instance, let
$$A = \left[ \begin{matrix}
6 & 2 \\
7 & 5
\end{matrix} \right]$$
Then $\det(A) = 6 \cdot 5 - 2 \cdot 7 = 16 = 2^4$. So if I asked you to find $\det(A^n)$ you could just tell me $16^n$ or $2^{4n}$, whichever fits your fancy.
In fact, I could even specify large $n$ and you might calculate it easily enough, if you're familiar enough with your powers of $2$ in this case. For instance, $\det(A^4) = 2^{16} = 65536$.
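A quick numerical confirmation with NumPy, using the example matrix above:

```python
import numpy as np

A = np.array([[6, 2], [7, 5]])
n = 4
print(np.linalg.det(np.linalg.matrix_power(A, n)))   # ~65536.0
print(np.linalg.det(A) ** n)                         # 16.0 ** 4 = 65536.0
```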
|
H: Number of $3$-digit numbers with strictly increasing digits
A positive integer is called a rising number if its digits form a strictly increasing sequence. For example, 1457 is a rising number, 3438 is not a rising number, and neither is 2334.
(a) How many three digit rising numbers have 3 as their middle digit?
(b) How many three digit rising numbers are there?
My efforts have yielded 12 for (a) - 1 and 2 for the first digit, and 4, 5, 6, 7, 8, 9 for the 3rd so $2 \cdot 6 = 12$ possibilities. Is this correct? What is the best method for (b)?
AI: (A) If the middle digit is $3$, there are only $2$ possibilities for the $1$st digit: $1$ and $2$, for it to be rising. For the third, it can be any number greater than $3$, i.e. $4, 5, 6, 7, 8$, or $9$. This is $6$ numbers, therefore the total number of rising numbers with 3 as their middle digit is $ 2 \cdot 1 \cdot 6 = 12$ possibilities.
(b) We can list out by cases and subcases:
Case 1: First digit is $1$:
We see if the 2nd digit is $2$, there are $7$ possibilities for the 3rd.
We see if the $2$nd digit is $3$, there are $6$ possibilities for the $3$rd.
We see if the $2$nd digit is $4$, there are $5$ possibilities for the $3$rd.
This pattern continues, so there are $7 + 6 + 5 + 4 + 3 + 2 + 1 = 28$ possibilities.
Case $2$: First digit is $2$:
We see if the $2$nd digit is $3$, there are $6$ possibilities for the $3$rd.
We see if the $2$nd digit is $4$, there are $5$ possibilities for the $3$rd.
This pattern continues, so there are $6 + 5 + 4 + 3 + 2 + 1 = 21$ possibilities.
Case $3$: First digit is $3$:
Following the pattern from previous cases, there are $5 + 4 + 3 + 2 + 1 = 15$ possibilities
Case $4$: First digit is $4$:
Following the pattern from previous cases, there are $4 + 3 + 2 + 1 = 10$ possibilities
Case 5: First digit is $5$:
Following the pattern from previous cases, there are $3 + 2 + 1 = 6$ possibilities
Case $6$: First digit is $6$:
Following the pattern from previous cases, there are $2 + 1 = 3$ possibilities
Case $7$: First digit is $7$:
Following the pattern from previous cases, there is $1$ possibility here.
It cannot start with $8$, as the $2$nd digit would be $9$, leaving no possibilities for the $3$rd.
So the total is $28 + 21 + 15 + 10 + 6 + 3 + 1 =$ $84$ possibilities.
Edit: While my answer was accepted for confirming (a) as well, I should also acknowledge @DhanviSreenivasan's elegant approach: a rising number is determined by its set of distinct nonzero digits, so there are $\binom9k$ rising numbers with $k$ digits, and
$$\sum_{k=1}^9{9\choose k} = 2^9-1$$
counts rising numbers of all lengths. For part (b), the $k=3$ term ${9 \choose 3}$ gives $84$.
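Both answers are easy to confirm by brute force:

```python
from math import comb   # Python 3.8+

count = 0
for n in range(100, 1000):
    a, b, c = str(n)
    if a < b < c:             # digit characters order the same way as digits
        count += 1
print(count, comb(9, 3))      # 84 84
```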
|
H: A jar contains 3 red and 2 white marbles
A jar contains 3 red and 2 white marbles. 2 marbles are picked without
replacement.
(1) The probability of picking two red marbles
(2) The probability of picking exactly one red and one white marble
A. (1) > (2)
B. (1) < (2)
C. (1) = (2)
The question doesn't mention anything about picking the marbles one by one. So should I assume that 2 marbles are picked simultaneously? If so, please correct me if I'm wrong: the answer is C, since both quantities are 3/10 because:
(1) (3/5) x (2/4) = 3/10
(2) (2/5) x (3/4) = 3/10
I appreciate your help!
AI: Without replacement means one by one: the colour of the first marble influences the probabilities for the second. So consider it as two successive events, picking the first marble and then picking the second.
$$P(2R) = \frac{3}{5}\cdot\frac{2}{4} = \frac{3}{10}$$
$$P(1R,1W) = \frac{3}{5}\cdot\frac{2}{4} + \frac{2}{5}\cdot\frac{3}{4} = \frac{6}{10}$$
So (1) < (2) and the answer is B; your computation for (2) counted only one of the two possible orders (white first, then red).
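Enumerating all ordered draws reproduces both probabilities exactly:

```python
from itertools import permutations

marbles = ['R', 'R', 'R', 'W', 'W']
draws = list(permutations(range(5), 2))   # all ordered draws without replacement
p_2r = sum(marbles[i] == 'R' and marbles[j] == 'R' for i, j in draws) / len(draws)
p_rw = sum({marbles[i], marbles[j]} == {'R', 'W'} for i, j in draws) / len(draws)
print(p_2r, p_rw)   # 0.3 0.6
```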
|
H: about properties of the operator $T_f(g) := f\cdot g$.
Let $f \in C([0,1])$ and define $T_f: L^2([0,1]) \to L^2([0,1])$ by $T_f(g) := f\cdot g$. Prove:
1) $T_f$ is well defined, linear and bounded, and find $\| T_f \|$.
2) If $T_f$ is a compact operator, then $f=0$.
I can prove that $T_f$ is linear.
AI: Answer to the second part: Let $a \in [0,1]$. Define $g_n(x)=\sqrt n$ for $|x-a|<\frac 1n$ and $0$ for all other $x$. Then $(g_n)$ is bounded in $L^{2}$. If $T_f$ is compact then $(fg_n)$ has a convergent subsequence. Note that $(fg_n) \to 0$ almost everywhere. Hence the subsequence can only converge to $0$. If $|f(a)| >0$ then $\int |fg_n|^{2} \geq c \int |g_n|^{2}$ for some $c>0$ and $n$ sufficiently large. We get a contradiction since $\int |g_n|^{2}$ does not tend to $0$. Thus $f$ vanishes at every point $a$.
Hints for the first part: The norm of $T_f$ is $M$, the supremum of $|f|$. Clearly $\|T_f g\| \leq M\|g\|$, so $\|T_f\| \leq M$. There exists $x$ such that $M=|f(x)|$. Now there exists $r>0$ such that $|f| >M-\epsilon$ on $(x-r,x+r)$. Take $g(y)=\frac 1 {\sqrt {2r}} $ for $|y-x| <r$ and $0$ for all other $y$. Then $\|g\|=1$. Can you show that $\|T_f\| \geq \|T_f(g)\| \geq M-\epsilon$?
|
H: Computing Tor for $R=k[x,y]$
Let $R=k[x,y]$ where $k$ is a field. I want to compute $\operatorname{Tor}_{i}^{R}(R/(y-x), R/(y^2 - x^3))$ for $i \ge 0$.
My attempt is:
Since $\cdots \rightarrow 0 \rightarrow R \xrightarrow{\times(y-x)} R \rightarrow R/(y-x) \rightarrow 0$ is a projective resolution, $\operatorname{Tor}_{i}^{R}(R/(y-x), R/(y^2 - x^3)) =0 $ for $i \ge 2$.
$\operatorname{Tor}_{0}^{R}(R/(y-x), R/(y^2 - x^3)) \simeq R/(y-x, y^2 - x^3) \simeq k[t]/(t^2- t^3)$
since $R/(y-x) \simeq k[t]$ (by considering $k[x,y] \rightarrow k[t]$, $p(x,y) \mapsto p(t,t)$)
$\operatorname{Tor}_{1}^{R}(R/(y-x), R/(y^2 -x^3)) \simeq \{ m \in R/(y^2 -x^3) : (y-x)m=0 \ \textrm{mod } y^2 -x^3 \}$. (I cannot go further from here.)
This is somewhat an incomplete solution, but I'm not even sure whether the above argument is valid.
Thank you.
AI: So $\text{Tor}_1$ (I won't write out the full notation) is the kernel of multiplication
by $y-x$ on $R/(y^2-x^3)$.
The ring $S=R/(y^2-x^3)$ is isomorphic to $k[t^2,t^3]$ (a subring of $k[t]$)
via $x\mapsto t^2$ and $y\mapsto t^3$. Multiplication by $y-x$ corresponds to
multiplication by $t^3-t^2$ on $S$. But $S$ is an integral domain, so the kernel
of this multiplication map is zero. Therefore $\text{Tor}_1=0$.
|
H: about finite rank operator
Let $(X,\|\cdot\|)$ be a Banach space and let $T\colon (X,w)\to (X,\|\cdot\|)$ be a continuous linear operator, where $(X,w)$ denotes $X$ equipped with its weak topology. Show that $\dim (\operatorname{range} (T)) < \infty$.
AI: Hint: There exists a finite set $x_i^{*}, 1\leq i \leq N$ such that $x_i^{*} (x)=0, 1\leq i \leq N$ implies $Tx=0$. [Use the fact that (by continuity) $|x_i^{*}(x)| <\epsilon_i, 1\leq i \leq N$ implies $\|Tx\|<1$ for suitable $x_i^{*}$'s and $\epsilon_i$'s].
Now show that $(x_1^{*}(x),x_2^{*}(x),\ldots,x_N^{*}(x)) \mapsto Tx$ is a well-defined linear surjection from a subspace of $\mathbb R^{N}$ onto the range of $T$.
|
H: how can I write this sum in sigma notation?
I find it difficult to write this in sigma notation. I tried but couldn't figure it out.
$$ \frac{1}{n} \sqrt{1-\left(\frac{0}{n}\right)^2} + \frac{1}{n} \sqrt{1-\left(\frac{1}{n}\right)^2} + \dots + \frac{1}{n} \sqrt{1-\left(\frac{n-1}{n}\right)^2} $$
AI: It is more or less finding the pattern as to what changes and what does not:
\begin{align*}
&\frac{1}{n} \sqrt{1-\left(\frac{0}{n}\right)^2} + \frac{1}{n} \sqrt{1-\left(\frac{1}{n}\right)^2} + \dots + \frac{1}{n} \sqrt{1-\left(\frac{n-1}{n}\right)^2} \\
&= \frac{1}{n} \sqrt{1-\left(\frac{\color{blue}{\boxed{0}}}{n}\right)^2} + \frac{1}{n} \sqrt{1-\left(\frac{\color{blue}{\boxed{1}}}{n}\right)^2} + \dots + \frac{1}{n} \sqrt{1-\left(\frac{\color{blue}{\boxed{n-1}}}{n}\right)^2} \\
&= \Biggl[ \text{Sum of } \frac{1}{n} \sqrt{1-\left(\frac{\color{blue}{\blacksquare}}{n}\right)^2} \text{'s, where $\color{blue}{\blacksquare}$ runs over $0, 1, \dots, n-1$} \Biggr] \\
&= \sum_{\color{blue}{\blacksquare}=0}^{n-1} \frac{1}{n} \sqrt{1-\left(\frac{\color{blue}{\blacksquare}}{n}\right)^2}
\end{align*}
Now replace the placeholder $\color{blue}{\blacksquare}$ by another variable, say $k$ for instance, if you want to make it look fancier.
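Incidentally, this expression is a left Riemann sum for $\int_0^1\sqrt{1-x^2}\,dx=\pi/4$, which a short sketch makes visible:

```python
from math import sqrt, pi

def riemann(n):
    return sum(sqrt(1 - (k / n) ** 2) / n for k in range(n))

for n in (10, 100, 10000):
    print(n, riemann(n), pi / 4)   # the sums approach pi/4 as n grows
```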
|
H: Counterexample to Inverse Operator Theorem
Let $Y=(\mathbb{R}^{\mathbb{N}},\|\cdot\|_1)$ where $\|x\|_1=\sum_{n=1}^\infty|x_n|$, and $X=(\mathbb{R}^{\mathbb{N}},\|\cdot\|_X)$ where $\|x\|_X=\|x\|_1+\sup_{n\in\mathbb{N}}n|x_n|$.
Then $\|\cdot\|_X$ is a norm since $\|ax\|_X=|a|\|x\|_X$ for a scalar $a$, $\|x\|_X=0$ implies $x=0$, and
\begin{align}
\|x_1+x_2\|_X&=\|x_1+x_2\|_1+\sup_{n\in\mathbb{N}}n|x_{1n}+x_{2n}| \\
&\leq\|x_1\|_1+\|x_2\|_1+\sup_{n\in\mathbb{N}}n|x_{1n}|+\sup_{n\in\mathbb{N}}n|x_{2n}| \\
&=\|x_1\|_X+\|x_2\|_X.
\end{align}
Note that $\|x\|_1<\infty$ if and only if $\|x\|_X<\infty$ and that $\|x\|_1\leq\|x\|_X$.
Let $T:X\to Y$ be identity, which is continuous, linear, and bijective.
Then, $T^{-1}$ exists and is linear, but is not continuous.
Which assumption of the inverse operator theorem is violated in this example?
AI: The map is not onto. The condition on $X$ is stricter than $Y$. Note the sequence given by $x_n=\frac{1}{k^2}$ if $n=k!$ and $x_n=0$ otherwise is in $Y$ but not in $X$.
|
H: $L^{\infty}(\mathbb{R}^{N})$ and smallness of function
Let $(f_{n})_{n\in\mathbb{N}}\subset C(\mathbb{R}^{N})$ be a sequence of real valued function such that $\|f_{n}\|_{L^{\infty}(\mathbb{R}^{N})}\to\infty$ as $n\to\infty$.
Then, I know that there exist $\delta_{1}>0$ and $x_{1}\in\mathbb{R}^{N}$ so that for $n\in\mathbb{N}$ large enough, we have $|f_{n}(x_{1})|\geq\delta_{1}$. Now, if I define $X_{\delta_{1}}:=\{x\in\mathbb{R}^{N}\,|\, |f_{n}(x)|\geq\delta_{1} \text{ for }n\text{ large enough}\}$, I want to show that $X_{\delta_{1}}$ has positive measure. Is this actually possible, or do I need to add a stronger assumption?
I tried to derive a contradiction by using the fact that $(f_{n})_{n\in\mathbb{N}}$ is a sequence of continuous functions, but it does not seem to work very well. Any help or insight is much appreciated! Thank you!
AI: Let $N=1$ and $$f_n(x)=\begin{cases}n(x-n+1),&\ x\in[n-1,n)\\ n,&\ x\in[n,n+1]\\ n(n+2-x),&\ x\in(n+1,n+2]\\ 0,&\ \text{otherwise}\end{cases}$$ Then $\|f_n\|=n$. But $f_n(x)\to0$ for all $x$. So, for any $k>0$,
$$
\{x:\ |f_n(x)|>k\ \text{ for $n$ large enough}\}=\varnothing
$$
if you meant "for all $n$ large enough".
|
H: Product of Moore Penrose Inverse and Matrix
Say we have an $m \times n$ matrix $B$, and let $B^+$ be its Moore-Penrose inverse (pseudoinverse). Would $B^+B$ (the product of the pseudoinverse of $B$ and $B$) be a projection matrix? How would we prove this?
AI: By the defining properties of the Moore-Penrose pseudoinverse, $B^+B$ is Hermitian, and (using $B^+BB^+=B^+$)
$$
(B^+B)^2=(B^+BB^+)B=B^+B.
$$
Therefore, $B^+B$ is not just a projection, but an orthogonal projection.
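Both properties are easy to observe numerically; a NumPy sketch with a deliberately rank-deficient $B$ (so that $B^+B$ is not simply the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # rank 2, shape 5x3
P = np.linalg.pinv(B) @ B

print(np.allclose(P @ P, P))     # idempotent: a projection
print(np.allclose(P, P.T))       # symmetric (Hermitian): an orthogonal projection
print(round(np.trace(P)))        # trace = rank of B = 2
```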
|
H: Inverse map from $\mathbb{Z} \to \mathbb{Z}$
Let $f: \mathbb{Z} \to \mathbb{Z}, \; z \mapsto 3z$ be a map on the integers. I am trying to find a left inverse function, $g: \mathbb{Z} \to \mathbb{Z}$ such that $g \circ f = \text{id}$.
My attempt didn't quite seem to work:
\begin{align*}
(g \circ f)(x) & = g(f(x)) = g(3x).
\end{align*}
Setting $g(3x) = x$, taking $g$ to be the function $g(t) = \frac{1}{3} t$ gives $g(3x) = \frac{3x}{3} = x$. However, $x \mapsto \frac{1}{3}x$ is not a map from $\mathbb{Z}$ to $\mathbb{Z}$ because taking $x = 1$ returns $\frac{1}{3}$, which is not an integer.
This choice of $g$, certainly, does not work unless we extend the codomain to $\mathbb{R}$. The question I have is, is there a better, more systematic approach of finding such a $g$?
AI: Your map $g$ can be
$g(x)=\frac{x}{3}$ if $x\equiv 0$ mod $3$ while $g(x)=x$ otherwise.
Clearly there are other maps $g$ such that $g\circ f=id$. For example you can define
$g(x)=\frac{x}{3}$ if $x\equiv 0$ mod $3$ and $g(x)=2x$ otherwise.
A systematic approach to finding a left inverse function is to find the image of $f$ and to define a function $h: Im(f)\to \mathbb{Z}$
such that $h\circ f=id$. Then you can built
$g(x)=h(x)$ if $x\in Im(f)$ and $g(x)=x$ otherwise.
If $f$ is injective, i.e a monomorphism in the set category, then there always exists a left inverse function. In fact you can define $h=f^{-1}: Im(f)\to \mathbb{Z}$.
So a necessary and sufficient condition for which $f$ has a left inverse function is the injectivity. Clearly, there is not a unique left inverse function.
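A small sketch of the second $g$ above in Python, checking $g\circ f=id$ while $f\circ g\neq id$ (illustration only):

    def f(z: int) -> int:
        return 3 * z

    def g(x: int) -> int:
        # x/3 on multiples of 3, 2x otherwise (the second left inverse above)
        return x // 3 if x % 3 == 0 else 2 * x

    assert all(g(f(z)) == z for z in range(-100, 101))  # g o f = id on a sample of Z
    assert any(f(g(x)) != x for x in range(-100, 101))  # but f o g is not the identity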
|
H: Prove that there exists a language L ⊆ {0}^∗ that is not Turing decidable.
I was revising for my final and came across this question. I don't see why such a language should exist. Does it really exist?
AI: If $N$ is a subset of $\mathbb{N}$ whose membership problem is not Turing decidable, I think that $\{x \in \{0\}^\ast \mid |x| \in N\}$ is a good candidate for being not Turing decidable as well...
|
H: Show that $\lim\limits_{h \rightarrow 0}\frac{f(\xi+h)-f(\xi-h)}{2h}$ exists and is $f'(\xi)$
Let $f:I \longrightarrow \mathbb{R}$ be differentiable at an inner point $\xi \in I$.
Show that $$\lim\limits_{h \rightarrow 0}\frac{f(\xi+h)-f(\xi-h)}{2h}$$
exists and is $f'(\xi)$.
Also give an example which shows that the existence of this limit does not mean that $f$ is necessarily differentiable at $\xi$.
My attempt:
Given is that:
$$\lim\limits_{h\rightarrow0}\frac{f(\xi+h)-f(\xi)}{h}=f'(\xi)$$
exists.
If $\,\lim\limits_{h \rightarrow 0}\frac{f(\xi+h)-f(\xi-h)}{2h}=\lim\limits_{h\rightarrow0}\frac{f(\xi+h)-f(\xi)}{h}=f'(\xi)$ then:
$$\lim\limits_{h\rightarrow0}\left(\frac{f(\xi+h)-f(\xi)}{h}-\frac{f(\xi+h)-f(\xi-h)}{2h}\right)=0$$
$$\lim\limits_{h\rightarrow0}\frac{2(f(\xi+h)-f(\xi))-f(\xi+h)+f(\xi-h)}{2h}=\lim\limits_{h\rightarrow0}\frac{f(\xi+h)-f(\xi)+f(\xi-h)-f(\xi)}{2h}=0$$
$$=\frac{1}{2}\lim\limits_{h\rightarrow0}\frac{f(\xi+h)-f(\xi)}{h}+\frac{1}{2}\lim\limits_{h\rightarrow0}\frac{f(\xi-h)-f(\xi)}{h}=\frac{1}{2}\lim\limits_{h\rightarrow0}\frac{f(\xi+h)-f(\xi)}{h}-\frac{1}{2}\lim\limits_{h\rightarrow0}\frac{f(\xi)-f(\xi-h)}{h}=\frac{1}{2}f'(\xi)-\frac{1}{2}f'(\xi)=0$$
$\Box$
Let $f:\mathbb{R}\longrightarrow \mathbb{R}:x \mapsto |x|$
Then for $\xi=0$:
$$\lim\limits_{h \rightarrow 0}\frac{f(\xi+h)-f(\xi-h)}{2h}=\lim\limits_{h \rightarrow 0}\frac{|h|-|-h|}{2h}=\lim\limits_{h \rightarrow 0}\frac{|h|-|h|}{2h}=\lim\limits_{h \rightarrow 0}0=0$$
$\Longrightarrow$ the limit exists, but $f$ is not differentiable at $\xi=0$.
Hello, as always it would help me a lot if someone could look over it and give me feedback whether my work is correct. And if not, what is wrong :) thank you
AI: $$\lim_{h \to 0} \frac{f(x+h)-f(x-h)}{2h}= \frac{1}{2} \left( \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}-\lim_{h \to 0} \frac{f(x)-f(x-h)}{-h} \right)$$
$$=\frac{1}{2}[f'(x)+f'(x)]=f'(x)$$
|
H: Maximum volume of cylinder obtained by rotating a rectangle
Question: A cylinder is obtained by revolving a rectangle about the $x$-axis, the base of the rectangle lying on the $x$-axis and the entire rectangle lying in the region between the curve $y=\dfrac{x}{x^2+1}$ and the $x$-axis. Find the maximum possible volume of the cylinder so formed.
So I proceeded by first graphing out the function to get a visual:
Now, we call the length of the side of the rectangle parallel to the $y$ axis as $y$, and the length parallel to the $x$ axis as $x$. The volume of the cylinder formed with these parameters would be
$$V= πy^2x$$
If I find a relation between $x$ and $y$, I can differentiate the expression for volume, and arrive at the maximum value. This is where I'm exactly stuck. I can't find such a relation. Any help would be appreciated.
AI: We consider the line $y=c$ and we impose the condition
$c=\frac{x}{x^2+1}$
$cx^2-x+c=0$
which permits us to find two solutions $x_1,x_2$, the abscissas of the intersections between the line and the curve; these exist when $1-4c^2>0$.
Moreover we obtain
$x_2-x_1=\frac{\sqrt{1-4c^2}}{c}$
But now
$V(c)=\pi c^2(x_2-x_1)=\pi c \sqrt{1-4c^2}$
What is the maximum value of $V(c)$?
The maximum point $c$ for $V(c)$ is exactly also the maximum point for $T=V^2(c)$, that is simple to study:
$T(c)=V^2(c)=\pi^2 c^2(1-4c^2)$
and so
$T’(c)=2\pi^2 c(1-4c^2)-8\pi^2c^3\geq 0$
if and only if
$8c^3\leq c$ so $c^2\leq \frac{1}{8}$
Thus the maximum point for $V$ will be
$c=\frac{\sqrt{2}}{4}$ and $V(\frac{\sqrt{2}}{4})=\frac{\pi}{4}$
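A quick numerical confirmation of the maximizer and the maximum (NumPy assumed):

    import numpy as np

    c = np.linspace(1e-6, 0.5 - 1e-6, 200_001)   # admissible heights: 1 - 4c^2 > 0
    V = np.pi * c * np.sqrt(1 - 4 * c**2)
    i = V.argmax()
    print(c[i], np.sqrt(2) / 4)   # both ~0.353553
    print(V[i], np.pi / 4)        # both ~0.785398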
|
H: Question in Proof of theorem 11.22 Apostol mathematical analysis
While self-studying mathematical analysis from Tom Apostol, I have two questions about the proof of the above-mentioned theorem.
Question 1: How does author deduces $g_{x} $ is measurable on $\mathbb{R}$ ?
Question 2 : why $g_{x} $ belongs to $L^{2}( \mathbb{R}) $ ?
Any help will be really appreciated.
AI: $g_x=f\circ h$ where $h(t)=x-t$. Since $h$ is continuous it is Borel measurable. The composition of two measurable functions is measurable. Hence $g_x$ is measurable.
$\int |g_x(t)|^{2}dt =\int |f(x-t)|^{2}dt =\int |f(y)|^{2} dy <\infty$ by the substitution $y=x-t$.
|
H: Is 0 a real number?
I am just curious if 0 is a real number. The definition of a real number is all rational and irrational numbers. And the definition of a rational number is "$\mathbb{Q}=\{a\div b\mid a,b\in\mathbb{Z}\}$".
But some sources say that $0$ is a whole number and some say that it is not.
Some websites say that $0$ is a real number, and some do not.
Please help me.
Thanks in advance
AI: $\mathbb{Q}$, the set of rational numbers, is the set $\{\frac{a}{b}:a,b\in\mathbb{Z}, b\neq 0\}$
$\Rightarrow 0=\frac0b$ is a rational number, where $0,b\in\mathbb{Z}, b\neq 0$.
|
H: Finding The Number Of The Solutions For $\cos x = \frac{x^2}{100}$
Hello everyone, how can I find the number of solutions to the equation
$\cos x = \frac{x^2}{100}$ in real numbers?
I tried to convert it to the function $y = \frac{x^2}{100} -\cos x$
and to find all the points where it cuts the $x$-axis by finding all the extreme points of $y$.
$y' = \frac{x}{50}+\sin x$
I found one minimum point at $(0,-1)$; because of that I know that the equation has
at least two solutions.
AI: Making a rough plot will give you 6 solutions.
Draw the graph of $\cos x$, and notice that $\cos x\le 1$, while ${x^2\over 100} > 1$ for all $x$ such that $|x| > 10$, so there are no solutions before $x=-10$ or after $x=10$. Now a rough plot of the graphs between $x=-10$ and $x=10$ will give you 6 points of intersection!
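A numerical check of the count, bracketing sign changes of $\cos x - x^2/100$ on $[-10,10]$ (NumPy and SciPy assumed):

    import numpy as np
    from scipy.optimize import brentq

    h = lambda x: np.cos(x) - x**2 / 100
    xs = np.linspace(-10, 10, 20_001)            # all roots lie in [-10, 10]
    vals = h(xs)
    roots = [brentq(h, xs[i], xs[i + 1])
             for i in range(len(xs) - 1) if vals[i] * vals[i + 1] < 0]
    print(len(roots))            # 6
    print(np.round(roots, 3))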
|
H: $(G,\cdot)$ is abelian $\Longleftrightarrow$ $\forall a,b \in G: (ab)^{-1}=a^{-1}b^{-1}$
Show that for a group $(G,\cdot)$ the following statements are equivalent:
$A:$ $(G,\cdot)$ is abelian
$B:\forall a,b \in G: (ab)^{-1}=a^{-1}b^{-1}$
$A \Longrightarrow B:$
Since $(G,\cdot)$ is a group:
$\forall a,b \in G: (ab)^{-1}=b^{-1}a^{-1}$
and since $(G,\cdot)$ is commutative:
$b^{-1}a^{-1}=a^{-1}b^{-1}=(ab)^{-1}$
$B \Longrightarrow A:$
Since $B$ holds:
$e=(ab)(ab)^{-1}=(ab)a^{-1}b^{-1}=aba^{-1}b^{-1}$
Suppose $(G,\cdot)$ is not abelian, then:
$aba^{-1}b^{-1}$ has to equal $e$ which would mean, that since $(G,\cdot)$ is associative,
$a(ba^{-1}b^{-1})=e$
This implies $(ba^{-1}b^{-1})=a^{-1}$
And since $(aba^{-1})b^{-1}=e \Longrightarrow (aba^{-1})=b$
Which means $ba^{-1}=a^{-1}=a^{-1}b^{-1}$ and $ab=b=ba^{-1}$
in this case we can calculate $(ab)(a^{-1}b^{-1})=ba^{-1}$ which supposed to be $e$.
But since in a group the inverse has to be unique, this is a contradiction!
Since we've chosen $aa^{-1}=e\,\,\,\wedge\,\,\, bb^{-1}=e$
$\Longrightarrow (G,\cdot)$ has to be abelian
$B\Longrightarrow A$
So concluding to $A \Longleftrightarrow B$
$\Box$
Could someone look over it? Last exercise on group theory I messed up as some of you have seen already :) Is this correct this time?
AI: Your $A\implies B$ is fine. I would write the last line as
$$
(ab)^{-1}=b^{-1}a^{-1}=a^{-1}b^{-1}
$$
because that better follows the flow of your argument. But that's a minor aesthetic concern.
For $B\implies A$, I think you need to put more emphasis on the $\forall$. Also, you go from $(ba^{-1}b^{-1})=a^{-1}$ to $ba^{-1}=a^{-1}$, which is just not true.
I would personally choose to go contrapositive, as a first attempt. That is, show $\lnot A\implies \lnot B$. So assume the group isn't abelian, and find some $a,b$ which do not fulfill $B$. So that's what I'll do.
$\lnot A\implies \lnot B$: Since the group is non-abelian, by definition there is some pair of elements $a,b$ such that $ab\neq ba$. Multiply both sides from the right by $a^{-1}b^{-1}$, and you get $aba^{-1}b^{-1}\neq e$, and we're done.
But if you want to prove it directly, that's very doable too: we have, for each $a,b\in G$, that
$$
aba^{-1}b^{-1}=e\\
aba^{-1}=b\\
ab=ba
$$
proving that the group is abelian.
This might shed some light on why $aba^{-1}b^{-1}$ is called the commutator of $a$ and $b$: it measures whether $a$ and $b$ commute.
|
H: What does $\{0,1\}^X$ mean?
I got this notation in a question without prior explanation, I suspect it's something related to the power set but I am not sure.
AI: As Angina said in the comments, $\{0, 1\}^X$ would be the set of all functions $f:X \to \{0, 1\}$. However, as you suspected, it is very closely linked to the power set of $X$ in the following manner.
Given a subset $U \subset X$, you have the function $\chi_U: X\to\{0, 1\}$ defined as
$$\chi_U(x) := \begin{cases}0 & x \notin U\\1 & x \in U\end{cases}$$
Given a function $f:X \to \{0, 1\}$, you have a subset $U_f \subset X$ defined as
$$U_f := \{x \in X \mid f(x) = 1\}.$$
The two correspondences are inverses in the sense that
$$U_{\chi_U} = U; \quad \chi_{U_f} = f$$
for all $U \subset X$ and all $f \in \{0, 1\}^X$.
Thus, you could identify every subset of $X$ with a function $f: X \to \{0, 1\}$ and vice-versa and so you may as well consider $\{0, 1\}^X$ as the power set of $X$, also sometimes written as $2^X$.
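The two correspondences are easy to play with in Python (sets as subsets, dicts as $\{0,1\}$-valued functions; illustration only):

    def chi(U, X):
        # indicator function of U, encoded as a dict x -> {0, 1}
        return {x: int(x in U) for x in X}

    def U_of(f):
        # subset recovered from a {0, 1}-valued function
        return {x for x, v in f.items() if v == 1}

    X = {1, 2, 3, 4}
    U = {2, 4}
    assert U_of(chi(U, X)) == U                  # U_{chi_U} = U
    f = {1: 0, 2: 1, 3: 1, 4: 0}
    assert chi(U_of(f), X) == f                  # chi_{U_f} = f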
|
H: If $TS$ = $I$ then $S$ and $T$ need not be invertible
Let $T$ be a linear operator on a vector space $V$ over a field $F$. Suppose there is a linear operator S on $V(F)$ such that TS = I where I is identity operator on $V(F)$.
Now I am looking for an example where neither of $T$ or $S$ is invertible.
As $TS$ is one-one and onto, $S$ is one-one and $T$ is onto. And for finding an example to satisfy the above condition, $V$ must be infinite-dimensional.
But I am not able to find such an example.
AI: For example let $V$ be the space of infinite sequences of elements of $F$. Define $T,S$ like this:
$T(x_1,x_2,...)=(x_2,x_3,...)$
$S(x_1,x_2,...)=(0,x_1,x_2,x_3,...)$
Note that indeed $TS=I$, but not the other way around. So they can't be invertible.
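A small sketch of these shift operators in Python, modeling a sequence as a function on the indices (illustration only):

    def T(x):
        return lambda n: x(n + 1)                   # left shift
    def S(x):
        return lambda n: 0 if n == 0 else x(n - 1)  # right shift, padding with 0

    x = lambda n: n + 1                             # the sequence (1, 2, 3, ...)
    print([T(S(x))(n) for n in range(5)])           # [1, 2, 3, 4, 5]: TS = I
    print([S(T(x))(n) for n in range(5)])           # [0, 2, 3, 4, 5]: ST != I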
|
H: Perms and Coms - what am I doing wrong??
The question is:
Find the number of ways that 6 different coloured balls can be placed in 3 non-identical urns so that no urn is empty.
Here's my working:
Let $A$ be the event Urn $A$ is empty
$B$: Urn $B$ is empty
$C$: Urn $C$ is empty
$n(A^C \cap B^C \cap C^C)$
\begin{align}
&=n\big((A\cup B\cup C)^C\big)\\
&=\text{Total}-n(A\cup B\cup C)\\
&=3^6-\left\{n(A)+n(B)+n(C)-n(A\cap B)-n(A\cap C)-n(B\cap C)+n(A\cap B\cap C)\right\}\\
&=3^6-\left\{(2^6-2)+(2^6-2)+(2^6-2)-1-1-1+0\right\}\\
&=546
\end{align}
But the correct answer is $540$.
I don't know what I've done wrong.
Can someone please explain this to me?
Thanks a lot.
AI: Your $n(A), n(B), n(C)$ are not correct.
"$A$ is empty" means that all balls are assigned to the $2$ other boxes. This corresponds to assigning each of the $6$ balls a label $B$ or $C$. There are obviously $2^6$ ways to do this.
Similarly for $n(B)$ and $n(C)$.
So, at the end you have $3^6- (3\cdot 2^6 -3)$.
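The corrected count is small enough to confirm by brute force (plain Python):

    from itertools import product

    count = sum(1 for urns in product(range(3), repeat=6)  # urn of each of the 6 balls
                if set(urns) == {0, 1, 2})                  # no urn left empty
    print(count)  # 540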
|
H: How to find a second solution for ODE $xy''+3y'+x^3y=0$?
I need some help with this problem. I need to solve the differential equation $$xy''+3y'+x^3y=0$$ using power series. I used the Frobenius method to expand about $x=0$ since it is a regular singular point. So I assumed a solution $y(x)=\sum_{j=0}^\infty a_jx^{s+j}$. After substituting it into the original equation I ended up with:
$$\sum_{j=0}^\infty a_j(s+j)(s+j-1)x^{s+j-1}+3\sum_{j=0}^\infty a_j(s+j)x^{s+j-1}+\sum_{j=0}^\infty a_jx^{s+j+3}=0$$ In order to equal the exponents of $x$, I expanded the first four terms in the first two series and ended up with the indicial equation: $$s^2+2s=0\Rightarrow s=0 \qquad s=-2$$
Now, because of the fact that the two roots differ by an integer, that means that the higher root will yield a solution while the smaller may or may not.
Thus, for $s=0$ I found that the solution is $$y_1(x)=a_0\sum_{j=0}^\infty\frac{(-1)^j}{2^{2j}(2j+1)!}x^{4j}$$
I'm struggling to find the second solution. I tried to use two methods. The first one by using:
$$y_2(x)=y_1(x)\int^x\frac{\exp\left[-\int^{x_2}P(x_1)dx_1\right]}{\left[y_1(x_2)\right]^2}dx_2$$ where $P(x)=\frac{3}{x}$, but I don't know what to do with the term $[y_1(x)]^2$.
The second method I tried was to use the series form of the second solution that my book (Mathematical Methods for Physicists, Arfken) gives by writing $P(x)=\sum_{i=-1}^\infty p_ix^i$ and $Q(x)=\sum_{j=-2}^\infty q_jx^j$, replacing that in the integral form of the first method gives $$y_2(x)=y_1(x)\ln|x|+\sum_{j=-n}^\infty d_jx^{j+\alpha}$$ where $n$ is the difference between the two roots of the indicial equation and $\alpha$ is the higher root.
When I tried this method, I replaced $y_2(x)$ in the original ODE using $n=2$ and $\alpha=0$. After taking the derivatives and with the fact that $y_1(x)$ is a solution, I ended up with this:
$$2\left(\frac{y_1(x)}{x}+y_1'(x)\right)+\sum_{j=-2}^\infty [j(j-1)+3j]d_j x^{j-1}+\sum_{j=-2}^\infty d_j x^{j+3}=0$$ The problem here is the term $2\left(\frac{y_1(x)}{x}+y_1'(x)\right)$; I don't know what to do with it. In my book, they write it as a new power series, but I don't know how to determine the coefficients of such a power series.
I apologize for the long post, but I wanted to show a bit of the process I followed. Hope you can help me. If necessary, I can upload an image of the whole process.
AI: In your first approach, $y_2(x)=y_1(x)\int^x\frac1{s^3y_1(s)^2}ds$, you need to take into account that $y_1(x)$ is a series in $x^4$, so that the same is the case for $y_1(x)^{-2}=1+b_4x^4+b_8x^8+...$ Inserting that gives
$$
\int^x\frac1{s^3y_1(s)^2}ds=\int^x(s^{-3}+b_4s+b_8s^5+...)ds=-\frac12x^{-2}+\frac12b_4x^2+\frac16b_8x^6+...
$$
so that for this equation you do not get logarithmic terms, both basis solutions are Frobenius power series. This means that you can go back to the start and compute the second basis solution the same way as the first one via the coefficient recursion.
|
H: Prove that $\mathbb{Q}[x,y]/ \langle x+y \rangle \cong \mathbb{Q}$
My direction: find a mapping $\varphi:\mathbb{Q}[x,y] \to \mathbb{Q}$ such that $\varphi$ is a surjective homomorphism and $\ker{\varphi}=\langle x+y \rangle$. Then by the first isomorphism theorem (Noether), I have $\mathbb{Q}[x,y]/\langle x+y \rangle \cong \mathbb{Q}$.
I have tried many mappings $\varphi$, but none work. In my attempts to prove this, I have found that $\mathbb{Q}[x,y]/\langle x,y \rangle \cong \mathbb{Q}$ via the mapping $\varphi: \mathbb{Q}[x,y]\to \mathbb{Q}, \varphi(f)=f(0,0)$.
AI: The claim is false as stated. We actually have $\Bbb{Q}[x,y]/\langle x+y\rangle\simeq \Bbb{Q}[x]$. The isomorphism follows from the fact that the kernel of the surjective homomorphism
$$
\phi:\Bbb{Q}[x,y]\to \Bbb{Q}[x], f(x,y)\mapsto f(x,-x)
$$
is the ideal generated by $x+y$.
$\phi$ is surjective, because $\phi(g(x))=g(x)$ for all polynomials $g(x)$.
The kernel is generated by $x+y$, because we can view $\Bbb{Q}[x,y]$ as $\Bbb{Q}[x][y]$. That is, the ring of polynomials in $y$ with coefficients coming from the integral domain $R=\Bbb{Q}[x]$. Viewed this way, we see that $\phi$ amounts to evaluating the polynomials at the point $y=-x$. The kernel of an evaluation homomorphism $R[y]\to R$ at the point $y=a$, $a\in R$, is more or less automatically generated by $y-a$. A proof comes from the polynomial division algorithm. A polynomial $f(y)\in R[y]$ can be written in the form $f(y)=q(y)(y-a)+f(a)$ for some polynomial $q(y)\in R[y]$. Observe that the leading coefficient of $y-a$ is equal to $1$. When $R$ is not a field (as is the case here) we need that assumption for the usual long division of polynomials to work.
|
H: $\int_0^1f(x) dx =0$, $\int_0^1xf(x) dx =0$. How to show that f has at least two zeros?
$f:[0,1]\to \mathbb{R}$ is a continuous function.
$$\int_0^1f(x) dx =0 \qquad \mbox{ and } \qquad \int_0^1xf(x) dx =0. $$
If $f \ge 0$ (or $f\le0$) and $f\not\equiv 0$ were true, then by continuity $\int_0^1f(x)\, dx >0$ (resp. $<0$), a contradiction; and $f\equiv 0$ trivially has infinitely many zeros. So we may assume $f$ changes sign. By the intermediate value property there exists a point $c$ such that $f(c)=0$. This is the first zero.
By using mean value theorems for integrals I can also show that a zero does exist. I can't show that these zeroes are different from each other.
How to show a second zero exists?
AI: $\newcommand{\d}[1]{\, \mathrm{d} #1}$
Define:
$$
F(x) = \int_0^x f(t) \d{t}
$$
Then we observe that $F(0) = F(1) = 0$, and:
$$
\int_0^1 F(x) \d{x} = \left[xF(x)\right]_{x=0}^{x=1} - \int_0^1 xf(x)\d{x} = 0
$$
Then, as you mentioned above (since $\int_0^1 F = 0$ and $F$ is continuous, $F$ either vanishes somewhere in $(0,1)$ or has constant sign, which would contradict the zero integral), we have $c \in (0,1)$ such that $F(c) = 0$. We can now apply Rolle's theorem: there exist $c_1 \in (0,c)$ and $c_2 \in (c,1)$ such that $F'(c_1) = F'(c_2) = 0$. Since $F'(x) = f(x)$, $c_1$ and $c_2$ are two distinct zeroes of $f$.
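For a concrete instance (my own example, not from the proof): the shifted Legendre polynomial $6x^2-6x+1$ satisfies both moment conditions and has exactly two zeros in $(0,1)$. A quick SymPy check:

    import sympy as sp

    x = sp.symbols('x')
    f = 6 * x**2 - 6 * x + 1                 # shifted Legendre polynomial P_2(2x - 1)
    print(sp.integrate(f, (x, 0, 1)))        # 0
    print(sp.integrate(x * f, (x, 0, 1)))    # 0
    print(sp.solve(f, x))                    # 1/2 - sqrt(3)/6 and 1/2 + sqrt(3)/6, both in (0, 1)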
|
H: Simplify $\frac{4\cos ^2\left(2x\right)-4\cos ^2\left(x\right)+3\sin ^2\left(x\right)}{4\sin ^2\left(x\right)-\sin ^2\left(2x\right)}$
Simplify:
$$\frac{4\cos ^2\left(2x\right)-4\cos ^2\left(x\right)+3\sin ^2\left(x\right)}{4\sin ^2\left(x\right)-\sin ^2\left(2x\right)}$$
After the substitution $\cos(x)=a$ and $\sin(x)=b$ $(a^2+b^2=1)$, the expression becomes
$$\frac{4(a^2-b^2)^2-4a^2+3b^2}{4b^2-4a^2b^2}=\frac{4a^4-8a^2b^2+4b^4-4a^2+3b^2}{4b^4}=\bigg(\frac{a^2}{b^2}-1\bigg)^2-\frac{4a^2-3b^2}{4b^4}$$But I don't think I got anything useful... Any help is appreciated.
AI: \begin{align}\frac{4\cos ^2\left(2x\right)-4\cos ^2\left(x\right)+3\sin ^2\left(x\right)}{4\sin ^2\left(x\right)-\sin ^2\left(2x\right)}&=\frac{4(1-2\sin^2x)^2-4(1-\sin^2x)+3\sin^2x}{4\sin^2x-(2\sin x\cos x)^2}\\&=\frac{4-16\sin^2x+16\sin^4x-4+4\sin^2x+3\sin^2x}{4\sin^4x}\\&=\frac{16\sin^4x-9\sin^2x}{4\sin^4x}\\&=4-\frac94\csc^2x\end{align}
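A quick symbolic confirmation of the identity (SymPy assumed):

    import sympy as sp

    x = sp.symbols('x')
    lhs = (4*sp.cos(2*x)**2 - 4*sp.cos(x)**2 + 3*sp.sin(x)**2) \
          / (4*sp.sin(x)**2 - sp.sin(2*x)**2)
    rhs = 4 - sp.Rational(9, 4) * sp.csc(x)**2

    print(sp.simplify(lhs - rhs))             # 0
    print((lhs - rhs).subs(x, 1.23).evalf())  # numerical spot check: ~0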
|
H: Derivative of the $dh(x)Ax$
Given a function $h:\mathbb{R}^n \rightarrow \mathbb{R}^n$ and a matrix $A$, what is the derivative of
$dh(x)Ax$? That is, how does one calculate $d(dh(x)Ax)$?
AI: Define $\phi(x) := Dh_x(Ax)$ (the subscript is simply to indicate the base point of the derivative). Think of this like a "product" between $Dh_x$ and $Ax$. More precisely, we consider the evaluation map $\text{ev}: L(\Bbb{R}^n) \times \Bbb{R}^n \to \Bbb{R}^n$ given by $(T,v) \mapsto T(v)$. Then,
\begin{align}
\phi(x) &= \text{ev}(Dh_x, Ax)
\end{align}
It is easy to see that $\text{ev}$ is bilinear (and continuous, since the vector spaces are all finite-dimensional); i.e it is a "product" of some kind. So, there is a natural product rule in this case:
\begin{align}
D\phi_x(\cdot) &= \text{ev}(D^2h_x(\cdot), Ax) + \text{ev}(Dh_x, A(\cdot))
\end{align}
Notice what I did; I simply "differentiated the first term, kept the second one alone, then leave the first alone, and differentiate the second". The $(\cdot)$ notation means given $v \in \Bbb{R}^n$, that's where you plug it in (recall that derivatives are linear transformations, so they can be evaluated on some vector). So, for all $v \in \Bbb{R}^n$ (by unwinding the definition of ev):
\begin{align}
D\phi_x(v) &= (D^2h_x[v])[Ax] + Dh_x[Av] \\
&= D^2h_x[v,Ax] + Dh_x[Av]
\end{align}
where in the final line, I'm simply using the fact that the second derivative at a point $D^2h_x$ can be equivalently thought of as a linear map $\Bbb{R}^n \to L(\Bbb{R}^n)$, or as a bilinear map $\Bbb{R}^n \times \Bbb{R}^n \to \Bbb{R}^n$.
|
H: Is the statement "$(m, n)=d$ if and only if there exist integers $r$ and $s$ such that $r m+s n=d$" problematic?
My textbook Groups, Matrices, and Vector Spaces - A Group Theoretic Approach to Linear Algebra by James B. Carrell said that
$(m, n)=d$ if and only if there exist integers $r$ and $s$ such that $r m+s n=d$.
We have $2\cdot3+4\cdot5=26$, but $(3,5)=1\neq 26$. Could you please explain whether this statement is problematic?
AI: The first implication says that two such integers exist, not that any two integers you pick will fit the bill. For the pair $(3,5)$ we can find $7\cdot 3+(-4)\cdot 5=1$, so such integers do exist. Now I notice the "if and only if", so the reverse implication must also hold, and stated like this it is indeed problematic. You are right: by this definition, if $d$ is the gcd, then any multiple of $d$ would also be a gcd, which is wrong. The correct statement is that the gcd is the smallest positive $d$ for which such $r$ and $s$ exist.
In other words, $d$ is the gcd if it is a positive generator of the ideal generated by $m$ and $n$.
|
H: If $A$ is a deformation retract of $X$, then is $H_n(X,A) \cong H_n(A,A)$?
If $A$ is a deformation retract of $X$, then is $H_n(X,A) \cong H_n(A,A)$? I know the corresponding statement is true for the usual homology groups, but does it also hold for relative homology groups?
At first sight, the proof that works for usual homology seems to generalise. Is that correct?
AI: Let $r:X\to A$ be a retraction i.e. $r\circ i=\text{id}_A$ such that $i\circ r\simeq \text{id}_X$, where $i:A\hookrightarrow X$ is the inclusion. Note that we can think $r:(X,A)\to (A,A)$ and $i:(A,A)\to (X,A)$. Now, use homotopy invariance of homology.
|
H: Derivation of density of $X^2$ based on distribution of $X$
I have the problem following :
Let's say we have a random variable with density function $f_X(x)=\begin{cases} \frac{x+1}{2} &for \; x \in[-1,1] \newline 0 & for \; x \in(-\infty,-1)\cup(1,+\infty)
\end{cases}$
And we want to derive density function of $X^2$
My work so far
My first guees was to derive CDF of $X$ : $$F_X=\begin{cases} 0 & for \; x \in (-\infty,-1) \newline \frac{x^2}{4}+\frac{x}{2}+\frac{1}{4} & for \; x\in[-1,1] \newline 1&for\; x \in (1,+\infty)\end{cases}$$
After that I wanted to derive CDF of $X^2$ :
$P(X^2 \le t)=0$ for $t<0$, so we can now consider our formula only for $t>0$
$P(X^2 \le t)=P(X \le \sqrt{t})=\frac{t}{4}+\frac{\sqrt{t}}{2}+\frac{1}{4}$ for $t \in [0,1]$
And $P(X^2 \le t)=1$ for $t>1$.
And finally I calculated the PDF of $X^2$: $$f_{X^2}=\begin{cases} 0 & for \; x \notin [0,1] \newline
\frac{1}{4}+\frac{1}{4\sqrt{x}} & for \; x \in [0,1]
\end{cases}$$
And my problem is that $\int_{\mathbb{R}}f_{X^2}=\frac{3}{4} \neq 1$
Can you please help me spot the mistake ?
Thank you in advance!
AI: Hint: Note that $P(X^2\le t) = P\left(-\sqrt{t}\le X \le \sqrt{t}\right)$ (you forgot to consider negative values of $X$ here).
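Carrying the hint through: $F_{X^2}(t)=F_X(\sqrt{t})-F_X(-\sqrt{t})=\sqrt{t}$, so $f_{X^2}(t)=\frac{1}{2\sqrt{t}}$ on $(0,1]$, which integrates to $1$. A Monte Carlo sanity check via inverse-CDF sampling (NumPy assumed; my own verification, not part of the hint):

    import numpy as np

    rng = np.random.default_rng(1)
    u = rng.uniform(size=1_000_000)
    x = 2 * np.sqrt(u) - 1            # inverse of F_X(x) = (x + 1)^2 / 4 on [-1, 1]
    t = x**2

    for s in (0.04, 0.25, 0.64):      # empirical CDF of X^2 vs F_{X^2}(s) = sqrt(s)
        print(s, (t <= s).mean(), np.sqrt(s))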
|
H: Point spectrum of a particular operator
Let $X=C[0,1]$ and we define
$$T\colon X\to X\quad\text{as}\quad(Tf)(t)=g(t)f(t)\quad\text{for all}\quad t\in [0,1]$$ with $g\in C[0,1]$ fixed. I have proved that $\sigma(T)=g([0,1])$.
Question 1. If I prove that $\lVert T \rVert =\lVert g \rVert_{\infty}$, can I conclude that the operator $T$ is bounded?
Now, I must find the point spectrum. This is my attempt:
If $\lambda\in \sigma_p (T)$, then there exists an $f\in X\setminus\{0\}$ such that $Tf=\lambda f$. Since $f$ is continuous and not identically zero, there exists an interval $[a,b]\subseteq [0,1]$, $a<b$, such that $f(t)\ne 0$ on $[a,b]$. But this implies that for all $t\in [a,b]$ we have $$g(t)f(t)=\lambda f(t)\iff g(t)=\lambda.$$
Therefore $\lambda\in M_g$, where $$M_g:=\big\{\lambda\in \mathbb{C}\;|\;\text{there exist}\; 0\le a< b\le 1\;\text{such that}\; g|_{[a,b]}=\lambda\big\}.$$
Vice versa, if $\lambda\in M_g$, then we have an interval $[a,b]\subseteq [0,1]$ on which $g=\lambda$, so that $(T-\lambda I)f=0$ on $[a,b]$ for all $f\in X$.
Question 2. Can we find a function $f\ne 0$ with $f(t)=0$ for all $t\notin [a,b]$? If yes, why?
In fact, if this function exists, then $(T-\lambda I)f=0$ on $[0,1]$, i.e. $\lambda\in \sigma_p(T)$.
AI: For question 1: If you prove that $\|T\|$ is finite then you can conclude that $T$ is bounded so yes, of course if $\|T\|=\|g\|_\infty$, $T$ is bounded. Actually, $\|T(f)\|_\infty=\|g\cdot f\|_\infty\leq\|g\|_\infty\|f\|_\infty$ for any $f\in C[0,1]$, so $\|T\|\leq\|g\|_\infty<\infty$ and this is enough to conclude that $T$ is bounded.
For the point spectrum: if $\lambda\in\sigma_p(T)$, then there exists a continuous function $f\in C[0,1]$ that is not zero everywhere such that $g(t)f(t)=\lambda f(t)$ for all $t\in[0,1]$. If $S=\{x\in[0,1]: f(x)\neq0\}$, then $S$ is an open subset of $[0,1]$ and it is true that $g(t)=\lambda$ for all $t\in S$. Since $S$ is not empty and open, there does exist an interval $[a,b]\subset S$, thus for all $t\in[a,b]$ it holds that $g(t)=\lambda$.
Conversely: suppose that $\lambda\in\mathbb{C}$ has the property "there exists $[a,b]\subset[0,1]$ such that $g(t)=\lambda$ for all $t\in[a,b]$". Choose numbers $c,d$ such that $a<c<d<b$, i.e. $[c,d]$ is a proper subset of $[a,b]$. Define a function $f(t)$ on $[0,1]$ such that $f$ is equal to $1$ on $[c,d]$, $f$ is equal to $0$ on $[0,a]$ and $[b,1]$. Draw a graph and extend $f$ linearly in $[a,c]$ and $[d,b]$. It is immediate that $g(t)f(t)=\lambda f(t)$. This function is $0$ outside of $[a,b]$, as you want.
Conclusion:
$$\sigma_p(T)=\{\lambda\in\mathbb{C}: \text{ there exists }[a,b]\subset[0,1]\text{ such that for all }t\in[a,b]: g(t)=\lambda\}.$$
|
H: How to integrate $\int_{-\infty}^{\infty} xe^{-2\lambda |x|} dx$?
I need to integrate
$$\int_{-\infty}^{\infty} xe^{-2\lambda |x|} dx$$
We are given that $\lambda$ is positive and real
This is my attempt
$$\int_{-\infty}^{0} x e^{2\lambda x} dx+ \int_{0}^{\infty} xe^{-2\lambda x} dx$$
Using u-sub $u = {2\lambda x}$ and integrating by parts
$$\frac{1}{4\lambda^2}\int_{-\infty}^{0} ue^{u} du+ \frac{1}{4\lambda^2}\int_{0}^{\infty} ue^{-u} du$$
$$= \frac{1}{4\lambda^2}(-1+1) = 0$$
But it seems wrong to me
AI: This derivation is correct.
You can also just notice that the function under the integral is odd i.e.
$$f(-x)=-f(x)$$ for every $x$.
So... of course the definite integral from $-\infty$ to $+\infty$
if it is well defined (i.e. if it converges) will be equal to zero.
See also: Even and Odd functions
It is also important to note that the two improper integrals of $f(x)$ (from minus infinity to zero and from zero to plus infinity) converge. Why is this important? Because there are odd functions for which the integral from $-\infty$ to $+\infty$ is not zero but is undefined. A very simple example is $f(x)=x$ which is such an odd function.
Integral of f(x)=x from minus infinity to plus infinity
|
H: Deformation retracts induces deformation retract on quotient
I'm reading the following proof in Hatcher:
How does a deformation retract of $V$ onto $A$ give a deformation retract of $V/A$ onto $A/A$?
So, let $r: V \to A$ be a retract and $1_V \sim r$. How do I get a retract $V/A \to A/A$? Is it simply mapping all of $V/A$ to $A \in A/A = \{A\}$?
AI: If, $f,g:(W,W')\to (Z,Z')$ are maps of pairs that are homotopic
through maps of pairs, then the induced maps $\big(W/W',*\big)\to
\big(Z/Z',*\big)$ are also homotopic.
To prove it, let $H:W\times [0,1]\to W$ be such a homotopy, i.e. $H(\bullet, 0)=f$ and $H(\bullet, 1)=g$ with $H(w',t)\in Z'$ for all $w'\in W'$ and all $t\in [0,1]$. Now, consider the map $$\widetilde H:W/W'\times [0,1]\to Z/Z'\text{ defined by}$$$$\big(\text{cls}(w),t\big)\longmapsto \text{cls}\big(H(w,t)\big)$$
Let $q_Z:Z\to Z/Z'$ and $q_W:W\to W/W'$ be quotient maps. Note that $\widetilde H\circ \big(q_W\times \text{Id}_{[0,1]}\big)=q_Z\circ H$. Hence, $\widetilde H\circ \big(q_W\times \text{Id}_{[0,1]}\big)$ is continuous. But, $q_W\times \text{Id}_{[0,1]}$ is a quotient map as $[0,1]$ is locally compact Hausdorff space. Hence, $\widetilde H$ is continuous.
|
H: Bivariate normal distribution and law of $Z$
Let $(X,Y)$ be a bivariate normal variable with density $f(x,y)=\frac{1}{2\pi \sqrt{1-\rho^2}}e^{-\frac{1}{2(1-\rho^2)}(x^2-2\rho xy+y^2)}$. Let $Z=\frac{Y-\rho X}{\sqrt{1-\rho^2}}$.
Find the law of $Z$.
Say if $X$ and $Z$ are independent or not.
Find $\mathbb{P}(X>0,Y>0)$.
Already by eye it can be noticed that the mean vector is $\mu=\begin{bmatrix} 0\\ 0\end{bmatrix}$ and the covariance matrix is $\Sigma=\begin{bmatrix}1 & \rho\\ \rho & 1\end{bmatrix}$. I also know that if $X\sim N_n(\mu,\Sigma)$ then $AX\sim N(A\mu,A\Sigma A^T)$; thus $Z$ is a linear combination of $X$ and $Y$ with constant $\frac{1}{\sqrt{1-\rho^2}}$. Can I say that $Z\sim N(0,2-2\rho)$?
AI: $(Y|X=x)\sim N(\rho x; 1-\rho ^2)$
Then $Z\sim \Phi$, a standard normal
EDIT: Let's factorize the joint distribution as follows
$$f_{XY}(x,y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2(1-\rho^2)}}\times \frac{1}{\sqrt{2\pi}\sqrt{1-\rho^2}}e^{-\frac{1}{2(1-\rho^2)}[-2\rho xy+y^2]}$$
Now let's complete the square at the esponent by summing $\pm \rho^2x^2$ and obtaining
$$f_{XY}(x,y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2(1-\rho^2)}}\times \frac{1}{\sqrt{2\pi}\sqrt{1-\rho^2}}e^{-\frac{1}{2(1-\rho^2)}[\rho^2x^2-2\rho xy+y^2]}e^{\frac{\rho^2x^2}{2(1-\rho^2)}}$$
That is
$$f_{XY}(x,y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\times \frac{1}{\sqrt{2\pi}\sqrt{1-\rho^2}}e^{-\frac{1}{2(1-\rho^2)}[y-\rho x]^2}$$
$$f_{XY}(x,y)=f_{X}(x)f_{Y|X}(y|x)$$
Where $f_X$ is a standard $\phi$ and $f_{Y|X}$ is a Gaussian with mean $\rho x$ and variance $(1-\rho^2)$
Concluding, for every $x$, your $Z$ is the standardization of the conditional rv $Y|X=x$, thus $Z\sim \Phi$
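A Monte Carlo sanity check (NumPy assumed): $Z$ should be standard normal and uncorrelated with $X$; since $(X,Z)$ is jointly Gaussian, uncorrelated means independent, which answers the second part.

    import numpy as np

    rng = np.random.default_rng(2)
    rho = 0.6
    X, Y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1_000_000).T
    Z = (Y - rho * X) / np.sqrt(1 - rho**2)

    print(Z.mean(), Z.var())        # ~0 and ~1: Z is standard normal
    print(np.corrcoef(X, Z)[0, 1])  # ~0: X and Z are uncorrelated, hence independent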
|
H: Every locally integrable function defines a Radon measure
I have just started with Measure Theory and I have read several times that
Every locally integrable function defines a Radon measure.
I understand this statement in the sense that if we have $f\in L^1_{loc}(\mathbb{R}^N)$ (with respect to the Lebesgue measure), then $\mu(E)=\int_{E}f(x)dx$ (where the integral is with respect to the Lebesgue measure) defines a Radon measure. Is this right? How could I prove it?
Is there any measure which is not of this form?
Maybe these questions are trivial, but I am a little lost.
Thanks.
Edit: After seeing the answer of @JustDroppedIn I was wondering what happens if $f$ is not non-negative. I thought that in this case we would obtain a signed measure. If we decompose $f=f^+-f^-$, where $f^+=\frac{|f|+f}{2}\geq{0}$ and $f^-=\frac{|f|-f}{2}\geq{0}$, then $\mu(E)=\int_E f=\int_E f^+-\int_E f^-$ and, by the answer of @JustDroppedIn, $\mu$ is the difference of two positive Radon measures. The problem is that this difference doesn't have to be a signed measure (it can produce $\infty-\infty$), so I would like to know what extra hypotheses we need on $f$ to obtain a signed Radon measure.
AI: Let's recall: a Radon measure is a measure $\mu:\mathcal{B}(\mathbb{R}^N)\to[0,\infty]$ such that $\mu$ is regular and $\mu$ is locally finite. Suppose that $f\in L^1_{loc}(\mathbb{R}^N)$ and that $f\geq0$. Define $\mu(E)=\int_Ef$. This is a measure and the fact that $f\in L^1_{loc}$ shows immediately that $\mu$ is locally finite. Regularity of $\mu$ follows directly from the regularity of the Lebesgue measure and an application of the well-known convergence theorems.
|
H: Prove $\sup \{x^2|x \in S \}$ exists, and equal to $s^2$, given $S \subset \Bbb{R}, S \neq \emptyset$, with sup $s$, inf $t$ and $s \geq -t$
Question
Suppose $S$ is a non-empty set of real numbers, with supremum $s$ and infimum $t$, and also that $s \geq -t$.
a) Show that $-s \leq x \leq s, \forall x \in S.$
b) Show that $\sup \{x^2|x \in S \}$ exists, and it is equal to $s^2$.
Answer
a) is fully answered, but please verify ?
We know that, given $s=\sup(S), t=\inf(S)$, then $\forall x \in S \; t \leq x \leq s$,
but since $$s \geq -t, \text{ then } \\ -s \leq t, \text{ hence } \\ -s \leq t \leq x \leq s \\ \Rightarrow -s \leq x \leq s$$
The answer to b) is only partly answered and I require assistance
We know that $\forall x \in S, x \leq s=\sup(S)$, which implies $x^2 \leq s^2$, hence $s^2$ is an upper bound for $x^2$.
However the "least" upper bound for $x^2$ is $\sup(x^2) \; \Rightarrow x^2 \leq \sup(x^2) \leq s^2$. **
I need to show that $s^2 \leq \sup(x^2)$ , which is what I think is required, since combined with (**) one can say $\sup(x^2)=s^2$. Need assistance with this?
AI: $"x\le s\implies x^2\le s^2"$ In general this is not true if $s\lt 0$. Ho nwever, if $-t\le s\lt 0$, then $t\gt 0$, but we need to have $s\ge t$, so $s$ must be non-negative.
Since $s=\sup S,$ any upper bound of S (say $b$) must satisfy $b\ge s$. Now, $b^2\ge s^2$ is an arbitrary upper bound for $T=\{x^2|x \in S \}$, so by definition $\sup T=s^2$.
|
H: Tautological 1-form on the cotangent bundle is intrinsic using transformation properties
I'm following this lecture in symplectic geometry and I'm trying to show the result stated at 31 minutes that the canonical 1-form on the cotangent bundle $M = T^*X$ is well defined regardless of which coordinates we choose, that is:
$$\alpha = \xi_j dx^j = \xi'_j dx'^j$$
From what I understand, we have each $M \ni p = (x,\xi)$ where $x \in X$ and $\xi \in T_x^*X$ so each "point" on $M$ is actually a tuple of a point in $X$ and a 1-form on $X$.
It starts to get confusing from here, but from what I understand $\alpha$ is a valid 1-form on $M$ because although $\xi_j$ are forms they are also coordinate maps on $M$ for a given chart, so even though it looks like we are multiplying forms nonsensically it's all well defined if we think of this construction on the manifold $M$, so we indeed have a differential form written as functions in front of the exterior derivative of some coordinate maps.
If we change coordinate chart from $(x^1, \ldots, x^n, \xi^1, \ldots, \xi^n)$ to $(x'^1, \ldots, x'^n, \xi'^1, \ldots, \xi'^n)$ then I know my forms will translate as $\displaystyle dx^j = \frac{\partial x^j}{\partial x'^i} dx'^i$ but I'm having trouble showing that the $\xi_j$ will transform as we want.
I was thinking that since each $\xi_j$ is a 1-form on $X$ I can write them as $\displaystyle \xi_j = a^j_i dx^i$ and then use the coordinate transformation rules to get $\displaystyle \xi_j = a^j_i \frac{\partial x^i}{\partial x'^k} dx'^k :=a'^j_kdx'^k$
This leads to $\displaystyle \alpha = \xi_j dx^j = a'^j_kdx'^k \frac{\partial x^j}{\partial x'^i} dx'^i$
Ideally I was hoping for the appearance of a term like $\displaystyle \frac{\partial x^j}{\partial x'^i}\frac{\partial x'^i}{\partial x^k} = \delta^j_k$ which would cancel out, but expanding out $a'^j_k$ gives me a second $\displaystyle \frac{\partial x^i}{\partial x'^k}$ which doesn't lead to anything.
Later I found this question which provides a solution that I don't quite understand: what is the meaning of $\xi_i(dx^j)$, a form being evaluated on a form?
I'm looking for explanation of the answer in the linked post and what the error was in my attempted proof, it's all well if my approach doesn't go anywhere useful, but I am unsure why I get something that looks so wrong/ugly just by applying what I think are simple and correct rules.
AI: The $\xi_j$ are definitely not forms. They are functions defined on an open subset of $M = T^*X$ mapping into $\Bbb{R}$. Just to be clear, let me introduce the following notation. Let $\pi:T^*X \to X$ be the canonical projection (map each covector to its base point). Now, given a chart $(U, x)$ on the manifold $X$ (i.e $x:U \to x[U] \subset \Bbb{R}^n$ is the chart map, and we set $x^i := \text{pr}^i \circ x$), we obtain a chart for the cotangent bundle as follows: on $T^*U$, we get a coordinate chart $(x^1 \circ \pi, \dots, x^n \circ \pi, \xi_1, \dots, \xi_n)$, defined as follows: $\xi_i:T^*U \to \Bbb{R}$
\begin{align}
\xi_i(\lambda) := \lambda\left(\dfrac{\partial}{\partial x^i}\bigg|_{\pi(\lambda)} \right) \in \Bbb{R}
\end{align}
Now, observe what each object is.
$x^i \circ \pi$ is a function $T^*U \to \Bbb{R}$ (people often abuse notation slightly and write simply $x^i$ when it really should be $x^i \circ \pi = \pi^*(x^i)$, where the RHS is the pull-back of a function).
Next, $\lambda \in T^*U$ is a covector, which means $\lambda \in T_{\pi(\lambda)}^*X$ lies in this specific cotangent space.
Next, $\frac{\partial}{\partial x^i}|_{\pi(\lambda)} \in T_{\pi(\lambda)}X$ is a tangent vector in this specific tangent space, so the evaluation of the covector on this tangent vector yields a number.
Finally, $\xi_i$ is a function $T^*U \to \Bbb{R}$, so it makes sense to feed it a covector. Note that essentially what $\xi_i$ is doing is telling us what the $i^{th}$ component of $\lambda$ is with respect to the basis $\{dx^1|_{\pi(\lambda)}, \dots dx^n|_{\pi(\lambda)}\}$ of the cotangent space. In other words,
\begin{align}
\lambda &= \xi_i(\lambda) \cdot dx^i|_{\pi(\lambda)}
\end{align}
(this should be (hopefully) somewhat familiar from linear algebra).
Now, for the sake of precision, let me write the Tautological form as:
\begin{align}
\alpha := \xi_i \, d(x^i \circ \pi) = \xi_i \, d(\pi^*x^i) = \xi_i \, \pi^*(dx^i)
\end{align}
This currently is a form defined on $T^*U$ (because $\xi_i$ and $x^i \circ \pi$ are functions on $T^*U$, while $d(x^i \circ \pi)$ is a 1-form on $T^*U$, so their product is still a form on $T^*U$).
The objective is to show that this formula yields a globally well-defined $1$-form on the whole manifold $M=T^*X$. So, let's take another chart $(V,z)$ on the base manifold $X$, and then we "lift it" to a chart $(T^*V, z^1 \circ \pi, \dots, z^n \circ \pi, \zeta_1, \dots \zeta_n)$ (excuse me not using primes for the other coordinates, because I'll definitely make mistakes lol). To complete the proof, we really need to understand how the $\zeta_j$ are related to the $\xi_i$. This is simple: given any covector $\lambda \in (T^*U)\cap (T^*V) = T^*(U \cap V)$, we have by definition:
\begin{align}
\xi_i(\lambda) &:= \lambda\left(\dfrac{\partial}{\partial x^i}\bigg|_{\pi(\lambda)} \right) \\
&= \lambda\left(\dfrac{\partial z^j}{\partial x^i}\bigg|_{\pi(\lambda)}\cdot \dfrac{\partial}{\partial z^j}\bigg|_{\pi(\lambda)} \right) \\
&= \dfrac{\partial z^j}{\partial x^i}\bigg|_{\pi(\lambda)}\cdot\zeta_j(\lambda), \end{align}
where in the last line I used $\Bbb{R}$-linearity of the covector $\lambda$, along with the definition of $\zeta_j$. If we write this as an equality of functions on $T^*(U\cap V)$, we get
\begin{align}
\xi_i &= \zeta_j \cdot \left(\dfrac{\partial z^j}{\partial x^i} \circ \pi\right) = \zeta_j \cdot \pi^*\left(\dfrac{\partial z^j}{\partial x^i} \right)
\end{align}
Now, finally proving the well-definition is simple:
\begin{align}
\xi_i \cdot \pi^*(dx^i) &= \zeta_j \cdot \pi^*\left(\dfrac{\partial z^j}{\partial x^i} \right) \, \pi^*(dx^i) \\
&= \zeta_j \cdot \pi^*\left(\dfrac{\partial z^j}{\partial x^i}\, dx^i \right) \\
&= \zeta_j \cdot \pi^*(dz^j)
\end{align}
Remarks.
Typically, this final computation is presented with the following abuse of notation (usually for good reason, since with a bit of practice, it gets very cumbersome to keep track of the $\pi$):
\begin{align}
\xi_i\, dx^i &= \zeta_j \cdot \dfrac{\partial z^j}{\partial x^i} \, dx^i = \zeta_j\, dz^j.
\end{align}
(so $x^i$ can mean either a coordinate function on the base manifold $X$ or its pullback to the bundle $T^*X$).
Also, you should take note that there is a completely chart-free definition of $\alpha$, which shouldn't be too hard to find (but also, try to construct it by yourself if you can).
|
H: Power Series ODE Question - Final Step
I just started learning how to use the power series method to solve ordinary differential equations and this is one of the first questions we were asked. I've managed to get it right up to the last step (assuming I've made no silly errors which I don't think I have as my final line of maths is similar to the solution of the problem set). But I'm not sure how to get to the final solution from the working I have.
The question is as follows:
$$(2x-x^2)y''+(x-1)y'+3y=0, \;\;\;\; y(1)=1, \;\;\;\; y'(1)=1$$
I have my solution written in the form:
$$y(x) =\sum_{k=0}^{\infty}a_k(x-1)^k$$
and have come to the conclusion that:
$$a_{k+2}=\frac{k^2+3}{(k+2)(k+1)}a_k$$
so:
$$a_0=1$$
$$a_1=1$$
$$a_2=\frac{3}{2}$$
$$a_3=\frac{4}{3\cdot2}$$
$$a_4=\frac{21}{4\cdot3\cdot2}$$
$$a_5=\frac{48}{5\cdot4\cdot3\cdot2}$$
$$a_6=\frac{399}{6\cdot5\cdot4\cdot3\cdot2}$$
$$a_7=\frac{1344}{7\cdot6\cdot5\cdot4\cdot3\cdot2}$$
Happy to be corrected here but my main issue is how they got from these values to the final solution of:
$$y(x) = 1+3\left[-\frac{1}{2!!}(x-1)^2+\frac{1}{4!!}(x-1)^4+\sum_{k=3}^{\infty}\frac{(2k-5)!!}{(2k)!!}(x-1)^{2k}\right]+\left[(x-1)+\frac{2}{3}(x-1)^3\right]$$
I could go ahead and just expand the bottom and see how they're the same but I have no idea how I would've gotten numbers that look as arbitrary (to me at least) as 399 and 1344 to come together as the argument in the summation.
Any help is super appreciated!
AI: Your equation is
$$
0=[(x-1)^2-1]y''-(x-1)y'-3y
$$
leading to the coefficient relation at $(x-1)^k$
$$
0=[k(k-1)-k-3]a_k-(k+2)(k+1)a_{k+2}=(k+1)[(k-3)a_k-(k+2)a_{k+2}]
$$
Thus the corrected coefficient recursion is
$$
a_{k+2}=\frac{k-3}{k+2}a_k
$$
At $k=3$ one finds $a_5=0$ so that $a_{2k+1}=0$ for $k>1$. In the even index sequence then one has
$$
a_{2(k+1)}=\frac{2k-3}{2(k+1)}a_{2k}\iff a_{2k}=\frac{2k-5}{2k}a_{2(k-1)}
$$
which then leads to the double factorial formulas for the coefficients.
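A quick SymPy check that the corrected recursion really solves the equation: build the truncated series and verify the residual vanishes to the truncation order (my own verification):

    import sympy as sp

    x = sp.symbols('x')
    N = 12
    a = [sp.Integer(1), sp.Integer(1)] + [sp.Integer(0)] * (N - 1)  # a_0 = a_1 = 1
    for k in range(N - 1):
        a[k + 2] = sp.Rational(k - 3, k + 2) * a[k]                 # a_{k+2} = (k-3)/(k+2) a_k

    print(a[:7])  # [1, 1, -3/2, -2/3, 3/8, 0, 1/16]
    y = sum(a[k] * (x - 1)**k for k in range(N + 1))
    residual = sp.expand((2*x - x**2) * sp.diff(y, x, 2) + (x - 1) * sp.diff(y, x) + 3 * y)
    print(sp.series(residual, x, 1, N - 1))  # O((x - 1)**11): all lower coefficients vanish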
|
H: Conjecture If f is surjective then there exists x $\in$ (a, b) such that $|f'(x)| = 1$
Conjecture
Let $f$ be a continuous function from $[a, b]$ to $[a, b]$ that is differentiable
on $(a, b)$.
If f is surjective then there exists x $\in$ (a, b) such that $|f'(x)| = 1$
Any counter example for this conjecture ?
Addition after Kavi Rama Murthy's answer: we can improve the problem to the following. If $f(a)\leq f(b)$ and $f$ is surjective, then there exists $x \in (a, b)$ such that $f'(x)= 1$
AI: Suppose this is not true. By the intermediate value property of derivatives (Darboux's theorem), either $|f'(x)| >1$ for all $x$ or $|f'(x)| <1$ for all $x$. In the first case $|f(b)-f(a)| =|b-a|\,|f'(\xi)| >|b-a|$ for some $\xi$ (mean value theorem), but this contradicts the fact that $f$ maps $[a,b]$ into itself. In the second case choose $x$ and $y$ such that $f$ attains its maximum, say $M$, at $x$ and its minimum, say $m$, at $y$. Then $M-m =|x-y|\, |f'(\xi)| < |x-y|\leq b-a$ for a suitable $\xi$ between $x$ and $y$, which is again a contradiction, since surjectivity of $f$ forces $M-m=b-a$.
|
H: Finding The Complex Roots Of $4z^5 + \overline z^3= 0$?
Hello everyone, how can I find all the complex roots of
$4z^5 + \overline z^3= 0$?
I tried writing $z = a+bi$, $\overline z = a-bi$, and
$4(a+bi)^5 +(a-bi)^3 = 0$
But I don't know how to continue.
AI: Use the exponential form: if $z=r\mathrm e^{i\theta}\enspace (r\ge 0)$, the equation can be rewritten as
$$4r^5\mathrm e^{5i\theta}+r^3\mathrm e^{-3i\theta}=0\iff 4r^5\mathrm e^{8i\theta}+r^3=0\iff\begin{cases} r=0\\[-1ex]\text{ or }\\r^2\mathrm e^{8i\theta}=-\frac 14= \frac14\mathrm e^{i\pi}\end{cases}$$
Can you proceed?
Some details:
$r=0$ is of course equivalent to $z=0$. As to $r^2\mathrm e^{8i\theta}=\frac14\mathrm e^{i\pi}$, identify the positive factors of the complex exponentials, and the arguments of both sides (as elements of $\mathbf R/2\pi\mathbf Z$):
$r^2=\frac 14\iff r=\frac12$ (as $r\ge 0$);
$8\theta \equiv \pi\mod2\pi\iff\theta\equiv \frac\pi 8\mod \frac\pi 4$.
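Putting the nine roots together and checking them numerically (NumPy assumed):

    import numpy as np

    roots = [0] + [0.5 * np.exp(1j * (np.pi / 8 + k * np.pi / 4)) for k in range(8)]
    for z in roots:
        print(abs(4 * z**5 + np.conj(z)**3))  # ~0 for all nine roots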
|
H: Is there a function which satifies this condition?
Is there a function $f:\Bbb R\to\Bbb R$ which satisfies
$$\prod_{n=0}^{\infty} (f(x)^2-(2n+1)^2)=0$$
for all $x\in\Bbb R$? In other words, for all $x$, $f(x)$ is an odd integer.
AI: $$ y = 1+2\lfloor x\rfloor. \tag{1}$$
|
H: Notation for the union between the finite input alphabet and the empty string. What is the standard notation?
In the book “Introduction to the Theory of Computation”, the author writes this.
For any alphabet $\Sigma$ we write $\Sigma_{\varepsilon}$ to be $\Sigma \cup \{\varepsilon\}$.
In my formal lecture notes and online resources, I never came across this form of notation.
Is this form of notation a standard way to denote the union of the input alphabet and the empty string, or was it created by the author for clarity's sake?
I prefer knowing the standard way to write out the notation to prevent ambiguity in future exams that I may have.
AI: I think that the standard is that stated in the
Wikipedia article formal language:
A word over an alphabet can be any finite sequence (i.e., string) of letters. The set of all words over an alphabet Σ is usually denoted by Σ* (using the Kleene star). The length of a word is the number of letters it is composed of. For any alphabet, there is only one word of length 0, the empty word, which is often denoted by e, ε, λ or even Λ.
|
H: Continuous spectrum of operator.
Let $X=C[0,1]$ and we define
$$T\colon X\to X\quad\text{as}\quad(Tf)(t)=g(t)f(t)\quad\text{for all}\quad t\in [0,1]$$ with $g\in C[0,1]$ fixed.
I must find the continuous spectrum.
Now, if $\lambda\in g([0,1])$, then $\lambda=g(t_0)$ for some $t_0\in [0,1]$. As a consequence we have for all $h\in\overline{\text{Ran}(T-\lambda I)}\subset X$
that $h(t_0)=0$ (every function in the range vanishes at $t_0$, and this property passes to uniform limits).
Question. Why this implies that $\overline{\text{Ran}(T-\lambda I)}$ cannot be whole space $X$?
AI: I am assuming you are working with the uniform norm. Then the range of $T-\lambda I$ cannot be dense, because continuous functions $f$ with $|f(t_0)|>\epsilon$ are never in its closure. Let's see why this statement is true. Recall that for any two functions $f,h$ we have $\|f-h\|=\sup_{x\in [0,1]}|f(x)-h(x)|$. Now take an arbitrary function $f$ such that $f(t_0)\geq \epsilon$; we know this is possible because we can construct such a continuous function. For any $h\in \text{Ran}(T-\lambda I)$ we have $h(t_0)=0$ since $\lambda=g(t_0)$, and so $\|f-h\|=\sup_{x\in [0,1]}|f(x)-h(x)|\geq |f(t_0)-h(t_0)|=|f(t_0)|\geq\epsilon$. Hence $B(f,\epsilon)\cap \text{Ran}(T-\lambda I)=\emptyset$, so $f\notin \overline{\text{Ran}(T-\lambda I)}$, and the range is not dense.
This means that such $\lambda$ belong to the residual spectrum (provided $T-\lambda I$ is injective). If instead $\lambda\notin g([0,1])$, you can show that the inverse operator is bounded, since $\left\|\frac{f(t)}{g(t)-\lambda}\right\|\leq M\|f\|$, where $M$ is the maximum of the continuous function $\frac{1}{|g(t)-\lambda|}$ on $[0,1]$; and so $\sigma_c(T)=\emptyset$.
Assuming moreover that $g$ is not constant on any subinterval, you have $\sigma_p(T)=\emptyset$, and so $\sigma(T)=\sigma_r(T)=g([0,1])$. With this result you can actually see something interesting: for any set of the form $[a,b]$ we can find an operator $T$ such that $\sigma(T)=[a,b]$; we just need to choose an appropriate function $g$ in the multiplication operator.
|
H: Calculating $\lim _{n\to \infty }\left(\frac{1\cdot n + 2\cdot(n-1) + 3\cdot (n-2)+ ... +n\cdot 1}{n^2}\right)$?
Hello everyone, how can I calculate the limit of:
$\lim _{n\to \infty }\left(\frac{1\cdot n + 2\cdot(n-1) + 3\cdot (n-2)+ ... +n\cdot 1}{n^2}\right)$?
My direction was to convert it to something that looks like a Riemann sum by doing this:
$\lim _{n\to \infty }\left(\frac{\sum_{k=0}^{n} (k+1)(n-k)}{n^2}\right)$
But I don't know how to continue.
AI: The limit of that sequence is $\infty$, since\begin{align}
\sum_{k=1}^nk(n-k+1)&=n\sum_{k=1}^nk-\sum_{k=1}^nk^2+\sum_{k=1}^nk\\&=n\frac{n(n+1)}2-\frac{n(n+1)(2n+1)}6+\frac{n(n+1)}2\\&=\frac{n(n+1)(n+2)}6,\end{align}and therefore$$\lim_{n\to\infty}\frac1{n^2}\sum_{k=1}^nk(n-k+1)=\lim_{n\to\infty}\frac{(n+1)(n+2)}{6n}=\infty.$$
|
H: If a finite group $G$ of order $n$ has at most one subgroup of each order $d\mid n$, then $G$ is cyclic
I'm reading the proof of a theorem in Fundamentals of Group Theory An Advanced Approach by Steven Roman.
(Characterization by subgroups) If a finite group $G$ of order $n$ has the property that it has at most one subgroup of each order $d\mid n,$ then $G$ is cyclic (and therefore has exactly one subgroup of each order $d\mid n$ ).
Here $o(a)$ is the order of $a$ and $\phi(d)$ is the number of elements of order $d$.
Because the order of a subgroup must be a divisor of that of a group, I get $$n=\sum_{d \in D} \phi(d) = \sum_{d \mid n} \phi(d)$$ Then I'm stuck at getting how $\phi(n) > 0$.
Could you please elaborate on this point?
AI: I am confused by your question, so I apologize if my answer misses the point.
The proof you quote is using Lagrange's theorem in the "$\leq$" step, that is, that the order of an element ($d$) has to divide the order of the group ($n$), and so without loss of generality you can restrict the sum only to those $d$ that divide $n$.
Then it uses the divisor sum property of Euler's totient function.
En passant, this also proves that there is exactly one subgroup of order $d$ (as opposed to the "at most one" of the hypothesis), because otherwise you'd get $n<n$, which is a contradiction. In particular, the number of elements of order $n$ must equal $\phi(n)\ge 1$, so there is an element of order $n$ and $G$ is cyclic.
|
H: An Interesting Question I Posed to Myself About $\pi$ as an Average.
Prove or disprove:
There is a sequence $x$ with each $x_i\in\{1,2,3,4\}$ so that $\pi$ can be written as the average $$\pi = \lim_{n\rightarrow\infty}\sum_{i=1}^{n}\frac{x_i}{n}$$
I am sure that this question would be trivial using advanced number theory concepts, but I would like a solution using just high-school olympiad level mathematics.
Thanks a lot. ☺
AI: You could define the sequence recursively in terms of the average of the previous terms of the sequence (take $a_0 = 0$; if $a_{k-1}=\pi$, either choice works):
$$x_k = \begin{cases}
3 & \text{ if } & a_{k-1} > \pi \\
4 & \text{ if } & a_{k-1} < \pi \\
\end{cases}$$
where
$$a_n = \frac {1}{n}\sum_{k=1}^n x_k$$
The convergence of $|a_n - \pi| \to 0$ follows from
$$- \frac{\pi - 3}{n} < a_n - \pi < \frac{4 - \pi}n$$
when $(x_{n-1}, x_n)$ is $(3, 4)$ or $(4, 3)$. Also, $|a_n - \pi|$ is decreasing in the other cases.
In the $(3, 4)$ case, $a_{n-1} < \pi$ so $$\begin{array} {rcl} a_n &=& (a_{n-1}\cdot(n-1) + 4)/n \\ &<& (\pi \cdot (n-1) + 4)/n \\ &=& \pi + (4 - \pi)/n \end{array}$$
Similarly for the $(4, 3)$ case.
To be pedantically rigorous, it would also need to be pointed out that there is no final time $a_n - \pi$ changes sign.
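A short simulation of the recursive rule (plain Python), showing the running average approaching $\pi$ at roughly a $1/n$ rate:

    from math import pi

    s, avg = 0.0, 0.0
    for n in range(1, 1_000_001):
        s += 3 if avg > pi else 4   # x_n from the rule above
        avg = s / n
    print(avg, abs(avg - pi))        # ~3.141592, error ~1e-6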
|
H: Find number of triangles formed by lines( given:angle along x-axis)
I came across this problem in a competitive coding class:
A number of lines (extending infinitely in both directions) are drawn on a plane. The lines are specified by the angle (positive or negative) made with the x-axis (in degrees, constrained to -89° to 90°).
The objective is to determine the number of triangles formed by the set of these lines.
An example:
If the lines are given with angles of 10, 70, 30, 30 (with the x-axis), the figure looks like this:
L1=10°, L2=70°, L3=30°, L4=30°
Here there are two triangles (L1,L2,L3 and L1,L2,L4).
AI: Any set of three lines at different angles will form a triangle unless they pass through a common point. If we exclude cases where three lines go through a point, count the number of lines at each angle. I will take an example where the angles are $-10,-5,0,10,10,10,20,20,30,40,50$. We can represent this as $(1,1,1,3,2,1,1,1)$ because there are $3$ lines at $10$ and $2$ lines at $20$.
If all the lines were at different angles there would be ${11 \choose 3}=165$ triangles. We need to subtract the number of ways to get two parallel lines and one nonparallel, which is ${3 \choose 2}\cdot 8+{2 \choose 2}\cdot 9=33$. We then have to subtract the number of ways of getting three parallel lines, which is ${3\choose 3}=1$. The answer is then $131$.
If there are $n_i$ lines at angle $i$ and $n=\sum_i n_i$ is the total number of lines, the number of triangles is
$${n \choose 3}-\sum_i \left((n-n_i){n_i \choose 2} +{n_i \choose 3}\right)$$
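A direct implementation of this count (plain Python, assuming no three lines are concurrent, as above):

    from math import comb
    from collections import Counter

    def triangles(angles):
        n = len(angles)
        return comb(n, 3) - sum((n - m) * comb(m, 2) + comb(m, 3)
                                for m in Counter(angles).values())

    print(triangles([-10, -5, 0, 10, 10, 10, 20, 20, 30, 40, 50]))  # 131
    print(triangles([10, 70, 30, 30]))                              # 2, matching the example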
|
H: Show that for $n>3$, there is always a $2$-regular graph on $n$ vertices. For what values of $n>4$ will there be a 3-regular graph on n vertices?
Show that for $n>3$, there is always a $2$-regular graph on $n$ vertices. For what values of $n>4$ will there be a 3-regular graph on n vertices?
I think this question is slightly out of my reach. Can you please help me out with this question?
For part two, what I think is: yes, by handshaking I can exclude all odd numbers of vertices, as $3(2k+1)$ is not an even number. So what should be the answer? All even numbers of vertices? Does that make sense?
And for part 1 it is obviously true, but how can I proceed to the answer?
Thanks.
AI: For part 1, consider the cycle of length $n$, so the statement is true. For part 2, the number of edges is $3n/2$,
so the number of vertices must be even. It must also be at least 6 by assumption. For $n=4$ the square with two diagonals is the answer. For every even $n=2m\ge 6$ consider the dihedral group $D_m$. It is generated by 3 involutions (for example, three suitable reflections). The Cayley graph corresponding to this generating set is 3-regular with $n$ vertices. More concretely, the $m$-prism (two disjoint $m$-cycles joined by a perfect matching) is such a graph.
H: Convergence. Cauchy and uniform
I know that if a sequence of functions is uniformly convergent ($ |f_n(x)-f(x)|<\epsilon\ \forall n > N(\epsilon)$), it is uniformly Cauchy ($ |f_n(x)-f_m(x)|<\epsilon\ \forall n,m > N(\epsilon)$).
So my question is: if a sequence is uniformly Cauchy, does this imply uniform convergence? I think the answer is no, but I can't figure out an example.
AI: Yes it does; suppose $\{f_n\}$ is a sequence of functions on a subset $E$ of $\mathbb{R}$ such that for all $\epsilon > 0$ there exists an $N(\epsilon)$ such that
$m, n > N(\epsilon)$ implies that $|f_m(x) - f_n(x)| < \epsilon$ for all $x \in E$.
Since $f_n(x)$ is a Cauchy sequence for each particular $x \in E$, we know that our sequence $f_n(x)$ has a pointwise limit; we call it $f(x)$. We must now show that the convergence to $f(x)$ is uniform.
So let $\epsilon > 0$. By our assumption on $\{ f_n \}$ there exists an $N(\epsilon)$ such that $m, n > N(\epsilon)$ implies that $$|f_m(x) - f_n(x)| < \epsilon$$ for all $x \in E$. Now comes the interesting step: fix $n$, and take the limit $m \to \infty$ in the above expression. The result is that $$|f(x) - f_n(x)| \leq \epsilon$$ for all $x \in E$ and $n > N(\epsilon)$.
This proves the uniform convergence.
(This last step has to be justified; it relies on the fact that the function $\phi(x) = |x - c|$ is continuous.)
|
H: Calculate the volume of the region using triple integration.
A region bounded by the planes $x=0 , y=0 ,z=0 ,x+y=4, x=z-y-1$
I want to calculate the volume of the region using triple integration.
So what bounds should I use for the triple integration?
AI: You can just start with any one variable, although some orders are easier to calculate.
The integration starting with $z$ looks like $$\int_{0}^{4} \int_{0}^{4 - x} \int_{0}^{x + y + 1} \,dz \,dy \,dx.$$ Is this what you need? Or does your teacher ask you to start with $x$?
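With those bounds the volume comes out to $88/3$; a quick SymPy evaluation (my own check):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    V = sp.integrate(1, (z, 0, x + y + 1), (y, 0, 4 - x), (x, 0, 4))
    print(V)  # 88/3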
|
H: Solve the equation: $\left|3^x - x\right|\left|3^x + x - 4\right| = 49$
I want to solve the equation in $\mathbb{R}$:
$$ \left|3^x - x\right|\left|3^x + x - 4\right| = 49 $$
My attempt:
The above equation is the same as:
$$ \left(3^x - x\right)\left(3^x + x - 4\right) = \pm 49 $$
Case 1: $\left(3^x - x\right)\left(3^x + x - 4\right) = 49$
By taking the derivative of the LHS, I managed to prove that it is monotonically increasing. Therefore, the equation has at most $1$ solution. By plugging in $\space x = 2,\space$ the equation is satisfied, so the first solution is $\space x = 2$.
Case 2: $\left(3^x - x\right)\left(3^x + x - 4\right) = -49$
I was not able to guess any integer nor rational solutions to this case, so I have to expand it:
$$ x^2 - 4x + 4 \times 3^x - 3^{2x} = 49 $$
$$ (x - 2)^2 - \left(3^x - 2\right)^2 = 49 $$
This is where I got stuck and could not go further for a long time. I plugged it in WolframAlpha and received one solution: $\space x \approx -5.27928$.
I would like to know how to solve this equation appropriately, thanks in advance.
AI: Even with special functions, you could not get an explicit solution for the zero of the function
$$f(x)=(x - 2)^2 - \left(3^x - 2\right)^2 - 49$$ and you will need some numerical method.
If you plot the function for $-6 \leq x \leq -5$, you will have a good surprise since, over that range, $f(x)$ is very linear, and this is very good for Newton's method (or for any root-finding procedure). This is normal because $3^{-5}=\frac{1}{243} \ll 2$. So, for the time being, ignore that term and you will have
$$x_0=2-\sqrt{53}\approx -5.28011$$
Making one single iteration of Newton's method will give
$$x_1=2-\sqrt{53}+\frac{9 \left(4\ 3^{\sqrt{53}}-9\right)}{2 \left(81 \log
(3)+3^{\sqrt{53}} \left(3^{\sqrt{53}} \sqrt{53}-18 \log (3)\right)\right)}\approx -5.27928$$
Now, if you really want to polish the solution, continue iterating to get
$$\left(
\begin{array}{cc}
n & x_n \\
0 & \color{red}{-5.2}8010988928051827109730249152703279377766968257647743837818 \\
1 & \color{red}{-5.2792786}8660406789534055303817937109818859725033462499045920 \\
2 & \color{red}{-5.279278638758321}32633710054196086428743183092915950945577438 \\
3 & \color{red}{-5.27927863875832116780469489398033}721735193951378723970745104 \\
4 & \color{red}{-5.27927863875832116780469489398033547687818546758970688733708}
\end{array}
\right)$$
which, amusingly but with no meaning at all, is close to
$$10\left(\sin \left(\frac{8 \pi }{59}\right)-\cos \left(\frac{9 \pi }{82}\right) \right)=-5.2792786221$$
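The iteration is easy to reproduce (NumPy assumed; a direct transcription of the Newton step above):

    import numpy as np

    f = lambda x: (x - 2)**2 - (3**x - 2)**2 - 49
    fp = lambda x: 2 * (x - 2) - 2 * (3**x - 2) * 3**x * np.log(3)  # f'(x)

    x = 2 - np.sqrt(53)          # starting guess, ignoring the tiny 3^x term
    for _ in range(5):
        x -= f(x) / fp(x)        # Newton iterations
    print(x)                      # -5.279278638758321...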
|
H: Infinite Prisoners dilemma
Please help me understand the idea of solving this problem.
There is an infinitely repeated game $G(\infty, \sigma)$.
$$\begin{array}{|c|c|c|}
\hline
&c&n \\
\hline
c&(0,0)&(7,-3) \\
\hline
n&(-3,7)&(4,4) \\
\hline
\end{array}$$
Strategies with punishments in the form of a "forgiving trigger" with a period of length $T = 4$.
I need to find equilibrium strategies and the corresponding value of $\sigma$ that provides a "good" trajectory - the constant repetition of $(n, n)$ as a trajectory in NE (SPNE).
Are my arguments true?
Initially, players play (n, n) until one of them changes strategy. Then the other player will also switch, and (c, c) will be played for the next 3 steps.
By repeating (n, n) the player receives:
$$4+4\delta+4\delta^2+\cdots=\frac{4}{1-\delta}.$$
In case of deviation:
$$7+4\delta^4+4\delta^5+\cdots=7+\frac{4\delta^4}{1-\delta}.$$
Therefore, we have
$$\frac{4}{1-\delta} \geq 7+\frac{4\delta^4}{1-\delta}.$$
AI: You should start by specifying strategies for any history. The strategies you are implicitly using are called grim trigger, and work like this: if neither I nor my opponent has previously played anything but $n$ (including the first round, when the history is empty), cooperate and play $n$; if anyone has previously deviated, punish everyone by adopting $c$. This maximizes the pain of the punishment, and gives the best incentives to cooperate.
After any history in which someone previously deviated from $n$ and triggered the punishment, the grim trigger strategies are a subgame perfect Nash equilibrium, because my opponent is simply threatening to play a stage Nash strategy, and I have no profitable deviation from replying with my stage Nash strategy. So there are no profitable deviations for these histories.
After any history in which everyone has previously cooperated, you get the right discounted expected payoff, $4/(1-\delta)$. But if I deviate, my discounted expected payoff is
$$
7 + \delta 0 +\delta^2 0 + ... = 7
$$
because this triggers the $(c,c)$ profile forever. Your work rewards deviators by giving them 7 and then returning to cooperation, instead of punishing them with a return to Nash play. Everyone would always deviate from the cooperative mode.
Then you have an SPNE if
$$
\dfrac{4}{1-\delta} \ge 7
$$
or
$$
\delta \ge \dfrac{3}{7}.
$$
So if the players are sufficiently patient, the grim trigger strategies are a subgame perfect Nash equilibrium of the infinitely repeated game.
Sorry, I now understand what you mean by "forgiving trigger" by reading your post after doing the work. The idea here is that if the punishment period $T$ is long enough, no one will deviate. So if someone triggers the punishment, we do the $(c,c)$ profile for $T$ periods, then go back to cooperating. Then the payoff is
$$
7 + \delta 0 + \delta^2 0 + ... + \delta^T0 + \delta^{T+1}4 + ... = 7 + \delta^T \dfrac{4}{1-\delta}.
$$
If you take $T\rightarrow \infty$, you get the grim trigger strategies. Then these strategies are an SPNE if
$$
\dfrac{4}{1-\delta} \ge 7 + \delta^{T+1}\dfrac{4}{1-\delta},
$$
so that no one finds deviating and then returning to the original profile profitable (this is called the ''one-shot deviation principle''). Because of the $\delta^{T+1}$ term, you can't really solve for a closed-form threshold.
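For $T=4$ the threshold can still be found numerically. Here is a bisection sketch in Python (the function name is mine); multiplying the displayed condition by $1-\delta>0$ turns it into $7\delta-3-4\delta^{T+1}\ge 0$:

def cooperation_gap(delta, T=4):
    # 4/(1-d) >= 7 + d^(T+1) * 4/(1-d), multiplied through by (1-d) > 0
    return 7*delta - 3 - 4*delta**(T + 1)

lo, hi = 0.0, 1.0
for _ in range(60):            # bisection for the smallest workable delta
    mid = (lo + hi) / 2
    if cooperation_gap(mid) < 0:
        lo = mid
    else:
        hi = mid
print(hi)   # about 0.4378, slightly above the grim-trigger bound 3/7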
|
H: Prove $\lim\limits_{n\to\infty}n[\int_a^b f(x)dx-\sum_{k=1}^n f(a+k\frac{b-a}{n})\frac{b-a}{n}]=-\frac{1}{2}(b-a)\int_a^b f'(x)dx$
Assume $f(x)$ is a continuous function on $[a,b]$, and its first-order derivative $f'(x)$ is also continuous on $[a,b]$, prove the identity:
$$\lim\limits_{n\to\infty}n\left[\int_a^b f(x)dx-\sum_{k=1}^n f\left(a+k\frac{b-a}{n}\right)\frac{b-a}{n}\right]=-\frac{1}{2}(b-a)\int_a^b f'(x)\,dx$$
I came across this problem, in a generalized form, in one of my problem sets.
My idea is to use the definition of integral, namely expanding $\int_a^b f(x)\,dx$ into $\lim\limits_{m\to\infty}\sum_{k=1}^m f(\varepsilon_k)\frac{b-a}{m}$, in which $\varepsilon_k\in[a+(k-1)\frac{b-a}{m},a+k\frac{b-a}{m}]$.
But the problem is that there are two limit processes, namely $n\to\infty$ and $m\to\infty$, which are supposed to be separate. Thus, it is not easy to combine $\sum_{k=1}^m f(\varepsilon_k)\frac{b-a}{m}$ with $\sum_{k=1}^n f(a+k\frac{b-a}{n})\frac{b-a}{n}$. Even though somehow these two processes are integrated into one process, I get totally different result of this limit with Lagrange's Mean Value Theorem, by letting $\varepsilon_k=a+(k-\theta)\frac{b-a}{n}$, in which $\theta\in[0,1]$, and choosing $\theta$ differently.
There must be something wrong with my method, but I cannot see it (probably it is not legal to merge these two limit processes?).
I will really appreciate it if you can help to prove this identity or point out the problem in my method!
AI: You are on the right path, but using one partition depending on $n$ suffices. The main trick is to break the interval of integration in pieces of equal length and then compare integrals over small segments against their Riemann approximation.
Maybe this will help you complete your task
\begin{aligned}
n\Big(\int_a^b f(x)dx -\sum_{k=1}^n f\big(a+k\tfrac{b-a}{n}\big)\tfrac{b-a}{n}\Big)&=n\sum^n_{k=1}\int^{a+k\tfrac{(b-a)}{n}}_{a+(k-1)\tfrac{(b-a)}{n}} \Big[f(x)-f\big(a+k\tfrac{b-a}{n}\big)\Big]\,dx\\
&=n\sum^n_{k=1}\int^{k\tfrac{(b-a)}{n}}_{(k-1)\tfrac{(b-a)}{n}} \Big[f(a+u)-f\big(a+k\tfrac{b-a}{n}\big)\Big]\,du\\
&=(b-a)\sum^n_{k=1}\int^{k}_{k-1} \Big[f\big(a+t\tfrac{b-a}{n}\big)-f\big(a+k\tfrac{b-a}{n}\big)\Big]\,dt
\end{aligned}
Here we have used changes of variables $u=x-a$, followed by $t=\frac{n}{b-a}u$
Using the mean value theorem one obtains
\begin{aligned}
(b-a)\sum^n_{k=1}\int^{k}_{k-1} \Big[f\big(a+t\tfrac{b-a}{n}\big) &-f\big(a+k\tfrac{b-a}{n}\big)\Big]\,dt\\
&=(b-a)\sum^n_{k=1}\int^{k}_{k-1} f'\big(a+(t\theta_t+(1-\theta_t)k)\tfrac{b-a}{n}\big)(t-k)\tfrac{b-a}{n}\,dt\\
&\approx (b-a)\sum^n_{k=1}f'\big(a+k\tfrac{b-a}{n}\big)\Big(\int^{k}_{k-1} (t-k)\,dt\Big)\tfrac{b-a}{n}\\
&=-\frac{b-a}{2}\sum^n_{k=1}f'\big(a+k\tfrac{b-a}{n}\big)\tfrac{b-a}{n}\xrightarrow{n\rightarrow\infty} -\frac{b-a}{2}\int^b_a f'
\end{aligned}
where $0<\theta_t<1$ depends on $t$. The approximation needs some justification (the uniform continuity of $f'$ will do).
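A quick numerical sanity check of the identity (a sketch with $f(x)=e^x$ on $[0,1]$, chosen arbitrarily):

import math

f = math.exp
a, b = 0.0, 1.0
exact = math.e - 1                       # integral of e^x over [0, 1]
rhs = -0.5 * (b - a) * (math.e - 1)      # -(b-a)/2 * integral of f'

for n in (10, 100, 1000, 10000):
    h = (b - a) / n
    right_sum = sum(f(a + k*h) for k in range(1, n + 1)) * h
    print(n, n * (exact - right_sum), rhs)   # first value tends to rhs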
|
H: Module vs. absolute value
Can I use a term "module" as an alternative for a term "absolute value"? For example, could this phrase be used:
"We need to raise the module of the amplitude to the second degree"
for this expression:
$p = |a|^2$
($a$ in this expression can be a real or complex number)
AI: The correct term in English is modulus, the absolute value of a real or complex number.
A module is a generalization of a vector space, with the field of scalars replaced by a ring.
In French, Italian, Spanish, and Portuguese, the same word (module, modulo, módulo) is used for both concepts.
|
H: For $f:N\rightarrow N$, where $f(x)=x-(-1)^x$, then prove that $f$ is one-one and onto
Taking the first derivative
$$f’(x)=-x(-1)^{x-1}$$
Depending on the value of $x$, the slope of the graph changes from positive to negative. Thus it cannot be one-one
However, if we simplify the original function
For $x$ is even
$$f(x)=x+1$$
For $x$ is odd
$$f(x)=x-1$$ which looks like a one-one function. Which is the right one?
AI: By definition,
$$f(2n)=2n-(-1)^{2n}=2n-1$$
$$f(2n+1)=2n+1-(-1)^{2n+1}$$
$$=2n+2=2(n+1)$$
thus
$$f^{-1}(2p+1)=2(p+1)$$
and
$$f^{-1}(2p)=2(p-1)+1=2p-1$$
The preimages exist and are unique, so $f$ is one-one and onto.
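A quick sanity check in Python (taking $\mathbb N$ to start at $1$; $f$ simply swaps $2k-1$ and $2k$):

f = lambda x: x - (-1)**x               # swaps 2k-1 <-> 2k
xs = list(range(1, 21))
print(sorted(f(x) for x in xs) == xs)   # True: f permutes {1, ..., 20}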
|
H: $\det(I+A)=1+\operatorname{Tr}(A)$ if $\operatorname{rank}(A)=1$
Let $A$ be a complex matrix of rank $1$. Show that $$\det (I+A) = 1 + \operatorname{Tr}(A)$$ where $\det(X)$ denotes the determinant of $X$ and $\operatorname{Tr}(X)$ denotes the trace of $X$.
Any hint, please. I do not get how to combine the ideas of rank, determinant and trace. Thank you.
AI: The minimal polynomial of $A$ splits in $\Bbb C$. So, there is $P\in \text{GL}(n,\Bbb C)$ such that $P^{-1}AP$ is upper triangular. Now, $\operatorname{rank}(A)=1$, so at most one diagonal entry of $P^{-1}AP$ is non-zero and all other diagonal entries of $P^{-1}AP$ are zero.
Hence, $\det(I+A)=\det\left(I+P^{-1}AP\right)=(1+\lambda)$, where $\lambda$ is the only non-zero diagonal entry of $P^{-1}AP$. Now, $\operatorname{tr}(A)=\operatorname{tr}(P^{-1}AP)=\lambda$. So, we are done.
Another case is also possible, all diagonal entries of $P^{-1}AP$ are zero, that is $A$ is nilpotent. In this case the equality $\det(I+A)=1+\operatorname{tr}(A)$, holds similarly.
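A quick numerical illustration (a NumPy sketch; any rank-one matrix $uv^{\top}$ will do):

import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((5, 1))
v = rng.standard_normal((1, 5))
A = u @ v                                  # a generic rank-1 matrix

print(np.linalg.det(np.eye(5) + A))        # equals ...
print(1 + np.trace(A))                     # ... this, up to rounding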
|
H: T $\models \varphi$ iff $T \cup \{\lnot \varphi\}$ inconsistent, proof verification
I would like to show that:
T $\models \varphi$ iff $T \cup \{\lnot \varphi\}$ inconsistent
My attempt:
$(\Rightarrow)$ Suppose that $T \models \varphi$. Then $\varphi$ is true in every model of $T$. Since $\varphi$ and $\lnot \varphi$ cannot be true in the same model, there doesn't exist a model of $T$ where $\lnot \varphi$ is true. So we look at $T \cup \{\lnot \varphi\}$. If $M$ is a model of $T \cup \{\lnot \varphi\}$, then $M \models \psi$ for all $\psi \in T \cup \{\lnot \varphi\}$. Since this is not true for $\lnot \varphi$, there doesn't exist a model of $T \cup \{\lnot \varphi\}$, so $T \cup \{\lnot \varphi\}$ is inconsistent.
$(\Leftarrow)$ Suppose that $T \not\models \varphi$. Then there doesn't exist a model where $\varphi$ is true, so $\lnot \varphi$ is true in every model of $T$. So, $T \cup \{\lnot \varphi\}$ has a model, but that is impossible since $T \cup \{\lnot \varphi\}$ is inconsistent, so $T \models \varphi$.
Can somebody check this proof, and correct me if I am wrong?
AI: Your $(\Rightarrow)$ direction is correct. For the other direction you are on the right track, but there is an error. The statement $T \not \models \varphi$ does not mean that there does not exist a model of $T$ where $\varphi$ is true. It just says that $\varphi$ is not true in every model of $T$. In other words, there is some model of $T$ where $\varphi$ is false. Can you now tie this to your last sentence to arrive at the required conclusion again?
I am sure you will be able to complete the proof. Just in case you want to double check, I have put the full proof for the $(\Leftarrow)$ in a spoiler tag below.
We prove the contraposition. So suppose that $T \not \models \varphi$. Then there is a model $M$ of $T$ such that $M \not \models \varphi$. That is $M \models \neg \varphi$, and thus $M \models T \cup \{\neg \varphi\}$. We thus see that $T \cup \{ \neg \varphi \}$ is consistent, which concludes the proof.
|
H: Inequality conservation about convergence in law
I am trying to prove a claim about probability, and it is concluded if I can prove the following statement:
Suppose $X_n \rightarrow X$ in law, and $\exists\, M>0$ s.t. $\forall\, n$, $E[|X_n|] \leq M$. Then $$E[|X|] \leq M$$
Is this statement true? And how can I prove this?
AI: The answer is yes.
Your assumption that $X_n\Rightarrow X$ implies that $|X_n|\Rightarrow |X|$, by the continuous mapping theorem (the map $x\mapsto|x|$ is continuous).
Let $G_n(x)=P[|X_n|>x]$ and $G(x)=P[|X|>x]$, so $E[|X_n|]=\int_0^\infty G_n(x)\,dx$ and $E[|X|]=\int_0^\infty G(x)\,dx$. Now apply Fatou's lemma, to learn that
$$\int_0^\infty \liminf_n G_n(x)\,dx\le\liminf_n E[|X_n|]\le M.$$
The integrand on the left is equal to $G$ at all continuity points of $G$, so differs from $G$ on a set of measure zero. So the integral on the left is equal to $\int_0^\infty G(x)dx=E[|X|].$
|
H: Proof of Euler's Theorem using Lagrange
Theorem : If $a,n \in \mathbb{N}$, $\gcd(a,n) = 1$ then $a^{\phi (n)} \equiv 1 \pmod n$
I am going through the proof that uses Lagrange's theorem
In the proof, we use the fact that if $G$ (s.t. $o(G) < \infty$) is a group and $a \in G$, then $a^{o(G)} = e$. The proof of this relies on the existence of the order of $a$, which could be infinite (ie, the order does not exist). How do I show that $o(a)$ exists? (Note that we cannot use Euler's theorem to show it exists because $o(a)\ | \ o(G) = \phi(n)$ and in the worst case, we can choose $o(a) = \phi(n)$).
AI: Note that $1, a, \ldots, a^{o(G)}$ are $o(G) + 1$ elements of $G$ and thus, cannot all be distinct. Thus,
$$a^m = a^n$$
for some $0 \le n < m \le o(G)$ which gives you that
$$a^{m-n} = 1.$$
Since $m - n \neq 0$, that gives you that $a$ has finite order.
In general, any element of a finite group has a finite order.
|
H: Function representing a Taylor Series
Find a function represented by the Taylor series $\sum_{k=0}^{\inf}\left(-1\right)^{k}\left(\frac{3^{2k+1}}{\left(2k+1\right)!}\right)x^{2k}$.
After taking a bunch of derivatives, I figured out $f^n(x)=\frac{3^{2n+1}}{2n+1}$, which further simplified to $\frac{3(3x)^{2k}}{(2k+1)!}$. I'm stuck here; I think I have to use $\sin(x)$ somehow, but I can't figure out how. How should I continue?
AI: Using the Taylor series for the Sine function as mentioned namely
$$\sin{(x)}=\sum_{k=0}^\infty\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$
we have that your sum (with the index starting at $k=1$, as provided at the link) is simply
$$\frac1{x}\sum_{k=1}^\infty\frac{(-1)^k(3x)^{2k+1}}{(2k+1)!}=\frac{\sin{(3x)}}{x}-3.$$
(If the sum starts at $k=0$ as displayed above, the $k=0$ term contributes $3$, and the result is simply $\frac{\sin(3x)}{x}$.)
|
H: (Maple) Linear Combination of Matrices
Problem: [This problem is intended to be done with Maple] Suppose that:
$$\textbf{u}_1=\begin{pmatrix}
356\\ -185\\ -580\\ -918\\ 147\\ 468\\ 504\\ 594
\end{pmatrix},
\textbf{u}_2=\begin{pmatrix}
573\\ 230\\ -950\\ -877\\ 69\\ 677\\ 323\\ 486
\end{pmatrix},
\textbf{u}_3=\begin{pmatrix}
-1000\\ -833\\ 552\\ 935\\ -999\\ -483\\ 450\\ 457
\end{pmatrix},
\textbf{u}_4=\begin{pmatrix}
946\\ -379\\ -971\\ -840\\ 979\\ -591\\ -139\\ -451
\end{pmatrix}$$
Also suppose that $A = (\textbf{u}_1|\textbf{u}_2|\textbf{u}_3|\textbf{u}_4)$. Let $\textbf{v}=\begin{pmatrix}-27\\59\\-75\\21\end{pmatrix}$. Then $A\textbf{v}$ is a linear combination of the form
$$\lambda_1\textbf{u}_1 + \lambda_2\textbf{u}_2 + \lambda_3\textbf{u}_3 + \lambda_4\textbf{u}_4$$
What are the values of $\lambda_1, \lambda_2, \lambda_3$ and $\lambda_4$?
My attempts
If I'm being honest, I've been messing with the LinearAlgebra package for several hours and I have no idea if I'm on the right track. The first thing I did was use Gaussian elimination on $A$ (which I defined as $B$), then used LinearSolve($B$,$\textbf{v}$) to find the values, although I do not believe this is correct (or at least the times I've tried always resulted in incorrect values).
Any help would be greatly appreciated!
AI: with(LinearAlgebra):                  # load the package providing ReducedRowEchelonForm
A := Matrix([u1, u2, u3, u4]);            # the matrix with columns u1, ..., u4
N := Matrix([A, A.v]);                    # augment A with the right-hand side A.v
ReducedRowEchelonForm(N);                 # the last column holds lambda1, ..., lambda4
|
H: Upper bound for the sum of the nth powers of a digit
I am doing Project Euler # $30$. I currently want to find all the numbers that can be written as the sum of fifth powers of their digits. I solved this using a generic upper bound of a million. However, this isn't very precise and so I want to find out an upper bound on the largest number which can be written as the sum of the $n$th power of its digits. My original plan was to go with $10^{n + 1}$ but I don't think this is true for larger values of n.
AI: Let $N$ be a generic $n$-digit number that satisfies that property. We can write $$N = a_1 + 10a_2 + \cdots + 10^{n-1}a_n$$
where each $a_i \in \{0, \ldots, 9\}$ and $a_n \neq 0$.
We have $$a_1 + 10a_2 + \cdots + 10^{n-1}a_n = a_1^5 + \cdots + a_n^5.$$
The right side can be bounded as
$$a_1^5 + \cdots + a_n^5 \le n\cdot9^5.$$
The left side can be bounded (below) as
\begin{align}
a_1 + 10a_2 + \cdots + 10^{n-1}a_n &\ge 10^{n-1}a_n \ge 10^{n-1}.
\end{align}
Thus, we see that
$$10^{n-1} \le n\cdot 9^5.$$
The above is true only for $n \le 6$.
That gives an easy and reasonable upper bound.
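With the bound in hand, the search itself is a few lines of Python (a sketch; one-digit numbers are excluded, as usual for this problem):

bound = 6 * 9**5                 # = 354294, from the estimate above
hits = [m for m in range(10, bound + 1)
        if m == sum(int(d)**5 for d in str(m))]
print(hits, sum(hits))
# [4150, 4151, 54748, 92727, 93084, 194979], summing to 443839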
|
H: Cayley-Hamilton-Theorem - Possible characteristic polynomial
Let $A: \mathbb{R}^3 \to \mathbb{R}^3$ s.t.
$A^3-2A^2+A= 0$
The Cayley-Hamilton-Thm. states that if I put $A$ into its characteristic polynomial it'll equal $0$.
But am I allowed to conclude from the given equation $A^3-2A^2+A= 0$ that $\lambda^3-2 \lambda^2+\lambda$ is the characteristic polynomial of $A$?
AI: No, you're not. What if your matrix $A$ were the $3\times 3$ zero matrix (all entries $0$)? Then your equation would still hold, but the characteristic polynomial is $\lambda^3$.
However you know that if $\lambda$ is an eigenvalue of $A$ then $\lambda=0$ or $\lambda =1$, which are the only two roots of $P$, where
$$P(\lambda) = \lambda^3 - 2\lambda^2 + \lambda = \lambda (\lambda-1)^2\,.$$
So the characteristic polynomial can have at most $0$ and $1$ as roots. So all you know is that the characteristic polynomial $Q$ must be in the form
$$Q(\lambda) = \lambda^n (\lambda-1)^m$$
with $n\in\mathbb{N}$, $m\in\mathbb{N}$ such that $m+n=N$, where $N$ is the number of rows (and so columns) of $A$. In your case $N=3$.
|
H: Difference between $S^{1}$ and $\mathbb{S}^{1}$
Whats the difference between the two? Or are they both the unit circle? My lecture notes for a topology course use $S^{1}$ for the "circle" in one example and $\mathbb{S}^{1}$ in the next example for the "boundary circle". Is it just a notational inconsistency?
EDIT: Neither is defined in the notes. I infer that the "boundary circle" is $\{(x, y)\in\mathbb{R}^2 \mid x^{2} + y^{2} = 1\}$ from the example. Indeed, $S^{1}$ seemingly denotes the same thing.
AI: These are both used to mean the circle. Probably the non-blackboard-bold one is the more common.
|
H: Inclusion-Exclusion Permutations
Taken from finals on discrete mathematics;
How many permutations of the set $\{a,b,c,d,e,f,g,h,i,j\}$ are there such that:
a) Each of the patterns ab, de, gh and ij appears
b) None of the patterns ab, de, gh and ij appears
c) At least one of the patterns ab, de and gh appears.
d) Exactly one of the patterns ab, de and gh appears.
e) At least two of the patterns ab, de and gh appear.
f) The patterns ab and de appear, but the patterns gh and ij do not.
Explain your answers.
a) Sticking the letters of each pattern together yields $6!$ total permutations.
b) The number with none of the patterns is the total minus the size of the union of the sets containing the patterns {ab}, {de}, etc., calculated using the I-E principle.
c) The union of the sets, again found using the I-E principle.
Got stuck at figuring out d, e and f.
I managed to find a formula for exact and atleast conditions on https://www.youtube.com/watch?v=D1T3xy_vtxU but haven't derived it.
Furthermore, any idea how to do f?
AI: Part (f) is just more inclusion-exclusion. There are $8!$ permutations that include both $ab$ and $de$. There are $7!$ that contain $ab$, $de$, and $gh$, and there are $7!$ that contain $ab$, $de$, and $ij$. Finally, there are $6!$ that contain $ab$, $de$, $gh$, and $ij$. Thus, there are
$$8!-2\cdot7!+6!=40320-2\cdot5040+720=30960$$
permutations that include $ab$ and $de$ but not $gh$ or $ij$.
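(The count is small enough to confirm by brute force; here is a Python sketch enumerating all $10!\approx 3.6$ million permutations, which takes a moment:)

from itertools import permutations

count = 0
for p in permutations("abcdefghij"):
    s = "".join(p)
    if "ab" in s and "de" in s and "gh" not in s and "ij" not in s:
        count += 1
print(count)   # 30960, matching the inclusion-exclusion value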
This should help you quite a bit with (d). As for (e), suppose that you know how many permutations contain both $ab$ and $de$; then you also know how many contain both $ab$ and $gh$ and how many contain $de$ and $gh$. If you add those up, how many times have you counted the permutations that contain all three of these patterns?
|
H: Derivative of the complex norm as commonly used in physics
On the one hand, I read that the derivative of the complex conjugate $C[z]=\overline{z}$ is not differentiable anywhere (for instance see here). (see 1, below)
On the other hand, I see in physics taking the derivative of a complex scalar field to obtain the equation of motion using the Euler-Lagrange method (see 2, below).
So which is it, can we or can we not take the derivative?
For case 1, the reference states that a complex function is differentiable if and only if it satisfies the Cauchy-Riemann equations:
$$
f[z]=f[x+iy]=u[x,y]+iv[x,y]
$$
Then f is differentiable if
$$
\frac{\partial u}{\partial x} =\frac{\partial v}{\partial y} \\
\frac{\partial u}{\partial y} =-\frac{\partial v}{\partial x}
$$
For the complex conjugate $C[x+iy]=x-iy$, we get $\partial u/\partial x =1$ and $\partial v/\partial y=-1$. Consequently $C[z]=\overline{z}$ is not differentiable anywhere in the complex plane.
For case 2, the physics paper defines the Lagrangian of a complex scalar free field as follows:
$$
\mathcal{L}=(\partial \phi^*)(\partial \phi)
$$
Then they claim that
$$
\frac{\partial \mathcal{L}}{\partial (\partial \phi)}=\partial \phi^*\\
\frac{\partial \mathcal{L}}{\partial (\partial \phi^*)}=\partial \phi
$$
To obtain these results I assume they apply the product rule
$$
\frac{\partial }{\partial (\partial \phi)}(\partial \phi^* \partial \phi)=\partial \phi\frac{\partial }{\partial (\partial \phi)}(\partial \phi^* )+\partial \phi^* \frac{\partial }{\partial (\partial \phi)}(\partial \phi)
$$
Is the following term not an 'illegal' derivative of a complex conjugate function?
$$
\partial \phi\frac{\partial }{\partial (\partial \phi)}(\partial \phi^* )
$$
Why are they allowed to set it equal to $0$?
AI: I've already recommended a discussion here, but it may be worth rewriting to tweak its emphasis. I'll compare three contexts in which one would want to define "derivatives"; in each case, I'll consider a function $f(x)$, rather than switching from the label $x$ to $z$ for the complex case.
A function $f:\,\Bbb R\mapsto\Bbb R$ differentiable at $x$ satisfies $f(x+h)\in f(x)+[(Df)(x)]h+o(h)$, with $Df(x)\in\Bbb R$ denoting the derivative of $f$ at $x$.
For an integer $n\ge2$, a function $f:\,\Bbb R^n\mapsto\Bbb R^n$ differentiable at $x$ satisfies what looks like the same condition, only now $Df(x)\in\Bbb R^{n\times n}$.
But if $S$ is a number system $n$-dimensional over $\Bbb R$, a function $f:\,S\mapsto S$ might satisfy a stronger condition, that we can take $Df(x)\in S$ where, as in the original example, the $O(h)$ term multiplies a number, in this case one in $S$, by another such number, $h$. The Cauchy-Riemann equations characterize which functions $f:\,\Bbb C\mapsto\Bbb C$ succumb to this treatment.
The trick here is respecting the multiplication defined on $\Bbb C$. If we denote the real and imaginary parts of a complex number $w$ as $w_0$ and $w_1$, we cannot in general change$$f_j(x+h)\in f_j(x)+\sum_{k=0}^1[(Df)(x)]_{jk}h_k+o(h)$$to $f_j(x+h)\in f_j(x)+(yh)_j+o(h)$ for some $y\in\Bbb C$. But the general case succumbs to another treatment. Since any $w\in\Bbb C$ satisfies $w_0=(w+w^\ast)/2,\,w_1=(w-w^\ast)/(2i)$,$$\begin{align}f_j(x+h)- f_j(x)&\in[(Df)(x)]_{j0}(h+h^\ast)/2+[(Df)(x)]_{j1}(h-h^\ast)/2i+o(h)\\&=\frac12\left\{[(Df)(x)]_{j0}-i[(Df)(x)]_{j1}\right\}h\\&+\frac12\left\{[(Df)(x)]_{j0}+i[(Df)(x)]_{j1}\right\}h^\ast+o(h).\end{align}$$So now, instead of only having a rate of change against small $h$, we also have a separate one against $h^\ast$. And although $h,\,h^\ast$ are "dependent" in the sense that either value determines the other, these two rates of change are uniquely defined. The abbreviation $x:=\partial\phi$ lets us work with the given example from physics:$$\mathcal{L}(x)=x^\ast x\implies\mathcal{L}(x+h)-\mathcal{L}(x)=(x^\ast+h^\ast)(x+h)-x^\ast x\in x^\ast h+xh^\ast+o(h).$$
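To see the two independent rates of change concretely, here is a small numerical sketch (the test point and step sizes are mine) for $\mathcal{L}(x)=x^\ast x$:

import numpy as np

rng = np.random.default_rng(1)
x = complex(0.7, -1.2)
for _ in range(3):
    h = 1e-6 * complex(*rng.standard_normal(2))
    lhs = (x + h).conjugate() * (x + h) - x.conjugate() * x
    rhs = x.conjugate() * h + x * h.conjugate()   # rates against h and h*
    print(abs(lhs - rhs))   # O(|h|^2), i.e. about 1e-12: the o(h) term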
|
H: Question on a problem regarding improper integral
I'm studying improper integrals with Paul's Online Notes as a reference. Sorry if I'm quoting it here, but the website has the following problem:
Determine if the following integral is convergent or divergent. If it is convergent find its value.
$$\int_{-2}^{3} \frac{1}{x^3} \, dx$$
And the solution provided in the website is:
This integrand is not continuous at $x=0$ and so we'll need to split the integral up at that point.
$$\int_{-2}^{3} \frac{1}{x^3} \, dx=\int_{-2}^0\frac{1}{x^3}\,dx+\int_{0}^3\frac{1}{x^3}\,dx$$
Now we need to look at each of these integrals and see if they are convergent.
$$\int_{-2}^0\frac{1}{x^3}\,dx=\lim_{t \to 0^-}\int_{-2}^t\frac{1}{x^3}\,dx$$
$$=\lim_{t \to 0^-}(-\frac{1}{2t^2}+\frac{1}{8})$$
$$=-\infty$$
At this point we're done. One of the integrals is divergent that means the integral that we were asked to look at is divergent. We don't even need to bother with the second integral.
The question is: is the solution correct? If I use my intuition, the integral $$\int_{-2}^{3} \frac{1}{x^3} \, dx$$ should be equal to $$\int_{2}^{3} \frac{1}{x^3} \, dx$$ because $$\int_{-2}^{2} \frac{1}{x^3} \, dx=0.$$ If it's true, then the integral should be convergent and its value should be $\frac{5}{72}$.
I checked an online integral calculator https://www.integral-calculator.com/ and it seemed to confirm my answer. So which solution and reasoning is correct and why?
AI: You have to be very, very careful when it comes to concluding results like
$${\int_{-2}^{2}\frac{1}{x^3}dx=0}$$
Because this simply isn't true by the standard definition of convergence of improper integrals. The Fundamental Theorem of Calculus requires our function to be continuous over the domain we are integrating over - and as you can see, ${\frac{1}{x^3}}$ is not continuous over ${(-2,2)}$; namely, it is discontinuous at ${x=0}$.
Now, it is true if we do the following:
$${\lim_{\epsilon \rightarrow 0^+}\left[\int_{-2}^{-\epsilon}\frac{1}{x^3}dx + \int_{\epsilon}^{2}\frac{1}{x^3}dx\right]=0}$$
So you may wonder - "why don't we just call the integral $0$ then"? Well, in this case it's ${0}$, because both integrals approach the point $0$ at the same rate. But there is absolutely no reason they should have to do this, for example, something like
$${\lim_{\epsilon\rightarrow 0^+}\left[\int_{-2}^{-\epsilon^2}\frac{1}{x^3}dx + \int_{\epsilon}^{2}\frac{1}{x^3}dx\right]}$$
would also be completely valid! And from this principle, you can arrive at different answers (although as others have pointed out - the "principal value" for the integral is $0$. This is just a way of assigning a number to the integral, but it's NOT what it "converges" to in the sense of convergence that we usually care/talk about).
The website does exactly how we define how to handle improper integrals of this type (ones that are discontinuous at a point over our domain of integration). In general, say we are integrating the function ${f(x)}$ over ${[a,c]}$ (${a<c}$), and ${f}$ has a discontinuity at ${b\ |\ a < b < c}$. Then we always must do the following
$${\int_{a}^{c}f(x)dx = \lim_{l\rightarrow b^-}\int_{a}^{l}f(x)dx + \lim_{m\rightarrow b^+}\int_{m}^{c}f(x)dx}$$
And you notice in this case, the limits are actually decoupled. So indeed if one diverges, you can conclude right away that the integral ${\int_{a}^{c}f(x)dx}$ is divergent, because of the fact the limits are decoupled. Otherwise, you could end up with nonsense like ${\infty-\infty}$, which is not something you can evaluate (and as I said, taking a "combined" limit is out of the question since our definition of how to handle improper integrals requires the limits to be decoupled).
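The rate-dependence is easy to make concrete with sympy (a sketch contrasting the symmetric limit, i.e. the principal value, with a left piece that shrinks faster):

import sympy as sp

x, e = sp.symbols('x epsilon', positive=True)

left = sp.integrate(1/x**3, (x, -2, -e))       # stops at -epsilon
right = sp.integrate(1/x**3, (x, e, 2))        # starts at +epsilon
print(sp.limit(left + right, e, 0, '+'))       # 0: the principal value

left2 = sp.integrate(1/x**3, (x, -2, -e**2))   # left piece shrinks faster
print(sp.limit(left2 + right, e, 0, '+'))      # -oo: a different "value"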
|
H: Prime numbers equation
Could you please help me for this proof :
Prove that there is no triplet of integers $(x, y, z)$, prime to each other, such that: $$x^2+y^2 = 3z^2$$
I tried to make a proof by contradiction...
AI: The RHS is divisible by $3$. Since squares are $0$ or $1$ modulo $3$, if either $x$ or $y$ is $1$ or $-1$ modulo $3$ then $3$ will not divide the LHS. Hence $x$ and $y$ must both be $0$ modulo $3$, which implies the GCD of $x$ and $y$ is at least $3$. Hence they can't be coprime. (Even if only $\gcd(x,y,z)=1$ is assumed: $3\mid x$ and $3\mid y$ give $9\mid x^2+y^2=3z^2$, so $3\mid z^2$ and hence $3\mid z$, again a contradiction.)
|
H: Could this integral be estimated with a positive constant?
Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ and let $p, q$ be two real numbers such that $p, q\geq 1$. Let $(w_n)_n\subset W_0^{1, p}(\Omega)$ and $(z_n)_n\subset W_0^{1, q}(\Omega)$, and suppose there exists $w\in W_0^{1, p}(\Omega)$ such that
$$ w_n\longrightarrow w \quad \mbox{ in } L^{r}(\Omega) \quad \mbox{ for } \ 1\leq r < p^{\ast}$$
and
$$ w_n\longrightarrow w \quad \mbox{ a.e. in } \Omega.$$
Moreover, fix $k\geq 1$ and consider
$$\Omega_{n, k}:=\left\lbrace x\in\Omega \mid \vert (w_n(x), z_n(x))\vert > k\right\rbrace.$$
I would like to show that the integral
$$\int_{\Omega\setminus\Omega_{n, k}} F(x, w_n, z_n) w \vert\nabla z_n\vert^{q} dx$$
can be estimated with a positive constant, i.e. $\exists c\in\mathbb{R}$ such that $\displaystyle\int_{\Omega\setminus\Omega_{n, k}} F(x, w_n, z_n) w \vert\nabla z_n\vert^{q} dx\leq c$.
Here, I assume $F:\Omega\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ such that
$$ \sup_{\vert (u, v)\vert\leq t} \vert F(\cdot, u, v)\vert\in L^{\infty}(\Omega)$$
for any $t>0$.
Could anyone please help? Thank you in advance!
AI: Since $z_n \in W_{0}^{1, q}$, your only hope on estimating the integral of $|\nabla z_n|^q$ is to put it in $L^1$ and pull out an $L^\infty$ norm of the $F$ term:
$$
\int_{\Omega \setminus \Omega_{n,k}} |F(x, w_n, z_n)||w||\nabla z_n|^q \, dx \leq \|z_n\|^q_{W^{1, q}}\,\sup_{x \in \Omega \setminus \Omega_{n, k}} |F(x, w_n(x), z_n(x))||w(x)|.
$$
So to get a uniform bound we'd first need to know that the $z_n$'s are uniformly bounded in $W^{1, q}$, or at least that their gradients are bounded uniformly in $L^q$. Meanwhile the supremum term is finite for each $k$, since $\Omega_{n,k}$ is the set where $w_n, z_n$ are greater than $k$, but this bound potentially depends on $k$. Therefore we'd probably also need some assumptions on $F$'s $L^\infty$ behavior.
If there are no such assumptions on $F$ or $z_n$, we can cook up a counterexample. Let $F \equiv 1$ and let $w_n \equiv w \in W_0^{1, q}$ be some smooth bump function. Then let $z_n$ be some sequence in $W_0^{1, q}$ such that $\sup_{x \in \Omega} |z_n| \leq C$ for all $n$, but such that $\|\nabla z_n\|_{L^q} \to \infty$ as $n \to \infty$ (for instance, on $\mathbb{R}$ we could let $z_n$ be something like $\sin(nx)$, perhaps a polygonal version of this). Then the integral above is always over all of $\Omega$ for $k > C$ and in fact it's equal to
$$
\int_{\Omega} |w(x)||\nabla z_n|^q\, dx.
$$
So as long as we choose, say, $w$ to be equal to 1 on some set of large enough measure, this integral will go to infinity as $n \to \infty$.
|
H: Why is there a hierarchy of interest between associativity and commutativity
In mathematical structures, there are, among other things: groups.
Among their particular properties, groups have the property of associativity.
Within the various groups, there are commutative (abelian) and non commutative (non-abelian) groups.
Why is there a hierarchy of interest between associativity and commutativity in groups; that is, why do we assume that groups are associative, while commutativity is only an "option"?
(why is associativity "more important" than commutativity ?)
Are there algebra structures which don't assume associativity ?
AI: There absolutely are non-associative structures! Relevant terms here include "loop" and "Lie algebra." For concrete examples, consider octonion multiplication or the "midpoint" operation on points in $\mathbb{R}^n$ (note that the latter is commutative but not associative!). There are also weakenings of the associative law, like power associativity or alternativity. The study of nonassociative structures is very much a thing.
However, there is definitely an important sense in which associativity is special, namely that it lines up with function composition (note that function composition is not commutative: $(x+1)^2\not=x^2+1$ in general). This is important since we generally run into groups not "in a vacuum" but rather as a component of a richer object - namely a group action. A group action is basically a way of representing elements of a group $G$ as "structure-preserving" maps on some other structure $A$. The point is that this is generally the way groups emerge in the first place: consider for example Galois theory, where we're not interested in $Gal(K/F)$ on its own so much as we're interested in the action of $Gal(K/F)$ on $K$.
And the other direction is important too: given a group $G$ we can often gain a better understanding of $G$ by thinking about its possible actions. An early example of this is Cayley's theorem, where we look at a simple action of a group on itself, and representation theory is built around the idea that we can learn a lot about a group by looking at the various ways it can act on vector spaces.
The idea of an action - where elements of the acting structure represent functions on the acted-on structure, and the operation of the acting structure represents composition - automatically enforces associativity since function composition is associative. Conversely, if we try to whip up a notion of "action" which does not automatically enforce associativity - that is, a notion of a non-associative magma acting on a structure $X$ in such a way that $[a*(b*c)]x\not=[(a*b)*c]x$ in general - things get messy. This doesn't make non-associative structures uninteresting, but it does mean that the natural idea of "action" is really only appropriate to associative structures, and this does wind up being a big deal.
|
H: The antiderivative of $\sum_{n\gt 0}\frac{x}{n(x+n)}$
I tried to calculate $\int\sum_{n\gt 0}\frac{x}{n(x+n)}\, \mathrm dx$:
$$\begin{align}\int\sum_{n\gt 0}\frac{x}{n(x+n)}&=\sum_{n\gt 0}\frac{1}{n}\int\left(1-\frac{n}{x+n}\right)\, \mathrm dx \\&=\sum_{n\gt 0}\left(\frac{1}{n}\int\mathrm dx -\int \frac{\mathrm dx}{x+n}\right)\\&\overset{x+n\to x}{=}\sum_{n\gt 0}\left(\frac{x}{n}-\ln (x+n)\right)+C.\end{align}$$
It seems, however, that the 'result' fails to converge for any $x$ (whereas $\sum_{n\gt 0}\frac{x}{n(x+n)}$ converges everywhere except for negative integers). How could a correct antiderivative be obtained?
AI: You need to choose the constants of integration wisely:
\begin{align*}
\int_{0}^{x} \sum_{n=1}^{\infty} \frac{t}{n(t+n)} \, \mathrm{d}t
&= \sum_{n=1}^{\infty} \int_{0}^{x} \frac{t}{n(t+n)} \, \mathrm{d}t \\
&= \sum_{n=1}^{\infty} \left[ \frac{t}{n} - \log(t+n) \right]_{t=0}^{t=x} \\
&= \sum_{n=1}^{\infty} \left( \frac{x}{n} - \log\left(1+\frac{x}{n}\right) \right).
\end{align*}
So in general,
$$ \int \sum_{n=1}^{\infty} \frac{x}{n(x+n)} \, \mathrm{d}x
= \sum_{n=1}^{\infty} \left( \frac{x}{n} - \log\left(1+\frac{x}{n}\right) \right) + C. $$
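The choice of constants can be sanity-checked numerically, by comparing a central difference of the antiderivative's partial sums against the original series (a Python sketch):

import math

def series(x, N=100000):
    # partial sum of sum_{n>=1} x / (n (x + n))
    return sum(x / (n * (x + n)) for n in range(1, N + 1))

def antiderivative(x, N=100000):
    # partial sum of sum_{n>=1} (x/n - log(1 + x/n))
    return sum(x/n - math.log1p(x/n) for n in range(1, N + 1))

x, h = 1.5, 1e-5
print((antiderivative(x + h) - antiderivative(x - h)) / (2*h))
print(series(x))   # the two printed values agree to many digits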
|
H: Why $M/\mathfrak{a}M \oplus M/\mathfrak{b}M \simeq M/(\mathfrak{a \cap b})M$?
Let $M$ be an $A$-module and let $\mathfrak{a}$ and $\mathfrak{b}$ be coprime ideals of A.
I must show that $M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M \simeq M/ (\mathfrak{a \cap b})M$.
My attempt is the following:
Let $x \in M/ \mathfrak{a}M \oplus M/ \mathfrak{b}M$,then $x = [y]+[z]$, where $[y] = y+\mathfrak{a}M $ and $[z]=z + \mathfrak{b}M $, $y,z \in M$.
So, $x = y+z+ \mathfrak{a}M +\mathfrak{b}M $.
$\mathfrak{a}M +\mathfrak{b}M =\{z | z=am_1+bm_2, a \in \mathfrak{a}, b \in \mathfrak{b} \} $. But then I don't know how to continue.
Is this approach correct? Or is there another way to prove it?
Thanks
AI: There is an obvious homomorphism $\varphi\colon M\to M/\mathfrak{a}M\oplus M/\mathfrak{b}M$, namely $\varphi(x)=(x+\mathfrak{a}M,x+\mathfrak{b}M)$.
The kernel is obviously $\mathfrak{a}M\cap\mathfrak{b}M$. You want to prove that $\mathfrak{a}M\cap\mathfrak{b}M=(\mathfrak{a}\cap\mathfrak{b})M$, using the known fact that $\mathfrak{a}\cap\mathfrak{b}=\mathfrak{a}\mathfrak{b}$ when $\mathfrak{a}+\mathfrak{b}=A$.
We can write $1=a+b$, with $a\in\mathfrak{a}$ and $b\in\mathfrak{b}$. If $x\in\mathfrak{a}M\cap\mathfrak{b}M$, we have $$x=ax+bx\in (\mathfrak{a}\mathfrak{b})M$$ because $x\in\mathfrak{b}M$ implies $ax\in\mathfrak{a}\mathfrak{b}M$ and $x\in\mathfrak{a}M$ implies $bx\in\mathfrak{b}\mathfrak{a}M$. The other inclusion $\mathfrak{a}\mathfrak{b}M\subseteq\mathfrak{a}M\cap\mathfrak{b}M$ is obvious. Therefore $\ker\varphi=(\mathfrak{a}\cap\mathfrak{b})M$.
Now you want to show that $\varphi$ is also surjective, which is the (abstract version of the) Chinese remainder theorem. If $x,y\in M$, then
$$
(x+\mathfrak{a}M,y+\mathfrak{b}M)=(z+\mathfrak{a}M,z+\mathfrak{b}M)
$$
where $z=bx+ay$. Indeed, using $a+b=1$: $z-x=(b-1)x+ay=a(y-x)\in\mathfrak{a}M$ and $z-y=bx+(a-1)y=b(x-y)\in\mathfrak{b}M$.
|
H: How to find an integrating factor?
I am trying to understand the integrating factor technique starting with a simple case to see how it develops into more complicated structures.
Suppose I have a differential equation I want to solve of the for $y'(x) = g(x)y(x)$ where $g(x)$ is known and $y(x)$ is not.
Now although the general solution to this is already known, suppose I'm just starting out and I make a guess of the form $Ce^{f(x)}$ where $f(x)$ is known. Differentiating the guess gives $Cf'(x)e^{f(x)}$, which isn't exactly the right solution, though it does appear related, in that instead it gives me something like $C\frac{d}{dx}e^{f(x)} = Ch(x)g(x)e^{f(x)}.$
How can I derive an integrating factor $I(x)$ that fixes this guess and leads to the solution $f(x) = \int_{1}^{x}g(t)dt$?
AI: Here is the problem with the separable differential equation:
$$y'(x)=g(x)y(x)$$
If you try to solve it with integrating factor method then you have:
$$y'(x)\mu-\mu g(x)y(x)=0$$
and matching the left-hand side with $(y(x)\mu)' = y'\mu + y\mu'$ forces $$-\mu g(x) = \mu '$$
So that you have:
$$y'(x) \mu(x)+y(x) \mu'(x)=0$$
$$(y(x) \mu(x))'=0$$
So in order to solve an original separable differential equation with the integrating factor method, you have to solve another separable DE to find the integrating factor $\mu$. This is where the problem is.
Starting from the DE:
$$-\mu g(x) = \mu '$$
$$\dfrac {\mu '}{\mu}=-g(x)$$
$$(\ln \mu)'=-g(x)$$
Integrate
$$\ln \mu (x)=-\int g(x)dx+c$$
$$\implies \mu(x)=Ce^{-\int g(x)dx}$$
Then you have that :
$$y(x)=\dfrac {c}{\mu(x)}$$
$$y(x)=ke^{\int g(x)dx}$$
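Here is a sympy sketch checking the recipe on a concrete choice $g(x)=\cos x$ (so we expect $y=Ce^{\sin x}$, with integrating factor $\mu=e^{-\sin x}$):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), sp.cos(x) * y(x))
print(sp.dsolve(ode, y(x)))        # Eq(y(x), C1*exp(sin(x)))

mu = sp.exp(-sp.sin(x))            # mu = exp(-int g) for g = cos
d = (mu * y(x)).diff(x).subs(y(x).diff(x), sp.cos(x) * y(x))
print(sp.simplify(d))              # 0: (mu*y)' vanishes along solutions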
|
H: Prove or Disprove : There exists a continuous bijection from $\mathbb{ R}^2$ to $\mathbb{R} $
This question was asked to me by a mathematics undergraduate, and I was not able to solve it. So, I am asking it here.
Prove or Disprove : There exists a continuous bijection from $\mathbb{ R}^2$ to $\mathbb{R} $ .
I have no idea how this problem can be tackled. It seems to have something to do with set theory, but I only know elementary set theory (bijections from the naturals) and I am unable to solve it.
AI: The standard argument that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}$ works just as well to show that there is no continuous bijection $\mathbb{R}^2\to \mathbb{R}$.
Suppose such a continuous bijection $f$ exists. If we remove a point $p$ from $\mathbb{R}$, it becomes disconnected. The preimages of the disjoint open sets $(-\infty,p)$ and $(p,\infty)$ will be disjoint open subsets of $\mathbb{R}^2$ whose union is $\mathbb{R}^2\setminus \{f^{-1}(p)\}$. But this implies that $\mathbb{R}^2$ minus a point is disconnected, which is a contradiction, since $\mathbb{R}^2$ minus a point is path connected.
It is also true that there is no continuous bijection $\mathbb{R}\to \mathbb{R}^2$, but the proof is a bit harder - I don't know a way of proving this that doesn't use the Baire category theorem. See this answer and also this answer.
|
H: Area of a rectangle using congruency
I don't understand: the lecturer, solving the question, says that the rectangles are congruent to each other, so the result can be obtained by proportioning them to each other. However, AFAIK two quadrilaterals are congruent when the sides and interior angles of one quadrilateral correspond to the sides and angles of the other, and to prove congruence one shows that all corresponding pairs of sides and angles are congruent.
I think the parallelism property may be used for the proportions, but not directly, because they are quadrilaterals.
$A$ denotes the area.
Would you mind illustrating the solution with a picture?
AI: None of your rectangles are congruent to any other because congruence requires they be the same size. Still, from the fact that the area of a rectangle is the product of the length and width, you can say that rectangles that share one side have areas in proportion to the other side. So from $A(AEKH)=12, A(HKGD)=8$ you can conclude that $AH=\frac 32HD$. Similarly $EB=\frac 52AE$ and $A(KFCG)=\frac 52\cdot 8=20\ cm^2$
|
H: What is $\lim\limits_{b\to a}\frac{e^{-\frac{x}{a}}-e^{-\frac{x}{b}}}{a-b}$?
I'm trying to evaluate $$\lim\limits_{b\to a}\frac{e^{-\frac{x}{a}}-e^{-\frac{x}{b}}}{a-b}$$
I know that the limit exists. The limits of the numerator and denominator are both zero when $b\to a$, so I tried to apply L'Hospital's Rule for the $\frac00$ form, but the denominator is constant with respect to $x$, so its derivative is zero and then the whole thing is undefined.
How does one go about tackling this limit?
AI: This is nothing but
$$\frac{d}{da}e^{-\frac xa} = \frac x{a^2}e^{-\frac xa}$$
To see this, note that
$$\lim\limits_{b\to a}\frac{e^{-\frac{x}{a}}-e^{-\frac{x}{b}}}{a-b} = \lim\limits_{b\to a}\frac{e^{-\frac{x}{b}}-e^{-\frac{x}{a}}}{b-a}$$
|
H: Finding the area between ${y=x^2}$ and ${y=2x+8}$
The two equations are ${y=x^2}$ and ${y=2x+8}$
I got the result $-64$ multiple times, but the proper answer seems to be different.
AI: The answer is supposed to be an area; therefore, it cannot be negative.
The intersection points of those two curves are $(-2,4)$ and $(4,16)$. When $x\in(-2,4)$, $x^2<2x+8$. So, the area is$$\int_{-2}^4(2x+8-x^2)\,\mathrm dx=36.$$
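For reference, here is a sympy sketch of the whole computation (solve for the intersections, then integrate top curve minus bottom curve, which keeps the sign right):

import sympy as sp

x = sp.symbols('x')
f, g = x**2, 2*x + 8
roots = sp.solve(sp.Eq(f, g), x)                 # [-2, 4]
area = sp.integrate(g - f, (x, min(roots), max(roots)))
print(roots, area)                               # [-2, 4] 36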
|
H: If for invertible matrices $A$ and $X$, $XAX^{-1}=A^2$ then eigenvalues of $A$ are $n^{th}$ roots of unity.
Question: Let $A$ and $X$ be two complex invertible matrices such that $XAX^{-1}=A^2$. Show that there exists a natural number $n$ such that each eigenvalue of $A$ is an $n^{th}$ root of unity.
I can say from here, $\operatorname{det}(A)=1$ and I guess somehow I have to show $A^n=I$, for some $n$, which will give the result. But I have no idea how to show it from the fact that $A$ and $A^2$ are similar matrices.
Any hint!!
AI: Since similar matrices have the same eigenvalues, for any eigenvalue $b$ of $A$, the numbers $b^2, b^4, b^8, ...$ are also eigenvalues of $A$ (since by iteration, $A$ is similar to all those powers of itself and $b^k$ is an eigenvalue of $A^k$).
Also, $b$ is non-zero since $A$ is invertible.
But there are only finitely many eigenvalues of $A$, so that sequence of powers of $b$ must have repeats in it, i.e. $b^j = b^k$ for some $j<k$ and that yields that $b$ is a root of unity (since it's not $0$).
Since each eigenvalue is a root of unity, just take the least common multiple of the exponents to get a value of $n$ that works.
(sorry I don't know how to format well, so I've written mostly in English prose)
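A concrete instance can be checked numerically (a NumPy sketch; the eigenvalues here are the primitive cube roots of unity, so $n=3$ works):

import numpy as np

w = np.exp(2j * np.pi / 3)                    # primitive cube root of unity
A = np.diag([w, w**2])
X = np.array([[0, 1], [1, 0]], dtype=complex) # swaps the two eigenvalues

print(np.allclose(X @ A @ np.linalg.inv(X), A @ A))          # True
print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(2)))  # True: A^3 = I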
|
H: How to transform a regular expression into a context free grammar with 2 variables?
I'm tasked with transforming this regular expression $((0+1)(0+1)^*(0+1))^*$ into a context free grammar. As an added constraint, I must do so with a maximum of 2 variables. This is what I did:
S -> VWV | S | ɛ
W -> 0 | 1 | W | ɛ
V -> 0 | 1
Is this considered as using 2 variables or 3 variables? Do we count the starting S as a variable?
Is the context free grammar that I derived correct? If not what would be a correct way of going about it?
AI: Note that the written regular expression is equivalent to $$\varepsilon+(0+1)^2(0+1)^*$$
So, a valid context free grammar is $$S\to\varepsilon\mid00X\mid01X\mid10X\mid11X$$$$X\to\varepsilon\mid X0\mid X1$$
Your example is $3$ variables.
Edit:
To see the equivalence of the regular expressions, note that the empty string is accepted by both. Moreover, if a given string is exactly one character long, it is not accepted by either regular expression.
If a given string is 2 or more characters long, then it clearly is accepted by $(0+1)(0+1)^*(0+1)$, and also by $(0+1)^2(0+1)^*$.
So, the expressions agree on all inputs.
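Both descriptions can be machine-checked on all short strings (a Python sketch; the trailing $ forces the regex to consume the whole string):

import re
from itertools import product

regex = re.compile(r'((0|1)(0|1)*(0|1))*$')

def grammar_accepts(s):
    # S -> eps | 00X | 01X | 10X | 11X ;  X -> eps | X0 | X1
    # i.e. the empty string, or any two symbols followed by anything
    return s == "" or len(s) >= 2

for n in range(6):
    for tup in product("01", repeat=n):
        s = "".join(tup)
        assert bool(regex.match(s)) == grammar_accepts(s)
print("regex and grammar agree on all strings of length <= 5")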
|
H: $\Vert f \Vert^{2}$, where $f: [0,1] \to \mathbb{R}$ continuously differentiable.
Let $f: [0,1] \to \mathbb{R}$ be continuously differentiable with $f(0)=0$. Prove that $$\Vert f \Vert^{2} = \int_{0}^{1} (f'(x))^{2}dx$$
Here $\Vert f \Vert$ is given by $\sup\{|f(t)|: t \in [0,1]\}$.
I see how to prove that $(\int_{0}^{1} (f'(x))^{2}dx)^{1/2}$ is an upper bound for $|f(t)|$, where $t \in [0,1]$, from this question.
However, I am not sure how to prove this is the least upper bound.
AI: You cannot prove it, since it is false. Take $f\colon[0,1]\longrightarrow\Bbb R$ defined by $f(x)=x(1-x)$. Then $\|f\|=\frac14$, but$$\int_0^1(f'(x))^2\,\mathrm dx=\int_0^1(1-2x)^2\,\mathrm dx=\frac13.$$
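The counterexample is quick to verify with sympy (a sketch; sp.maximum is assumed available in your sympy version):

import sympy as sp

x = sp.symbols('x')
f = x * (1 - x)
sup_f = sp.maximum(f, x, sp.Interval(0, 1))          # 1/4
energy = sp.integrate(sp.diff(f, x)**2, (x, 0, 1))   # 1/3
print(sup_f**2, energy)   # 1/16 vs 1/3: the claimed identity fails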
|