H: 2-dimensional regular submanifold of $GL(2, \Bbb R)$ I wish to show that the subset $A$ of $GL(2, \Bbb R)$ consisting of matrices of the form $$\begin{bmatrix} a &b\\0 &a \end{bmatrix}$$ where $a >0$ is a regular submanifold of dimension $2$. The easy way to show that it's a regular submanifold is to invoke Cartan's theorem. But I do not wish to use it. So I'll proceed as follows: first associate $2 \times 2$ matrices in $A$ with elements of $\Bbb R^2$ in a natural way $$\begin{bmatrix} x &y\\0 &x \end{bmatrix} \longmapsto (x,y)$$ since $x >0$, $A$ is an open subset of $\Bbb R^2$ and therefore, $A$ is a smooth manifold of dimension $2$. But I'm not able to show $A$ is a regular submanifold. Any hints on how to proceed will be highly appreciated. AI: Let $U = \{(x,y) \in \Bbb R^2 : x > 0 \}$. It suffices to note that the map $h:U \to A$ defined by $$ h(x,y) = \pmatrix{x & y\\0 & x} $$ is bijective and continuous with continuous inverse. That is, $h$ is a homeomorphism, so the map is indeed a topological embedding. To see that this map is continuous, it suffices to note that the topology on $\Bbb R^{2 \times 2}$ is the product topology, and $A \subset GL(2,\Bbb R) \subset \Bbb R^{2 \times 2}$ has the associated subspace topology.
H: If the number of units of a ring is odd, then the ring has cardinality a power of two If the number of units of a finite ring is odd, then does the ring have cardinality a power of $2$? I think yes. For fields, it is trivial. For non-fields, it is a hard question for me. I saw a paper here that states that an odd number is the cardinality of the group of units of a ring if it is of the form $\prod_i (2^{n_i}-1)$. But that proof is quite lengthy, and it still does not show that the cardinality of the ring is a power of $2$. Any short proof? Thanks beforehand. AI: Consider the canonical ring morphism $\varphi \colon \mathbb{Z} \to R$. Since $\mathbb{Z}^{\times} = \{-1, 1\}$, the induced group morphism $\mathbb{Z}^{\times} \to R^{\times}$ must be trivial by Lagrange, so $\varphi(1) = \varphi(-1) = 1$. In particular, $\varphi$ factors through an injective morphism $\mathbb{F}_{2} \to R$, so $R$ is an $\mathbb{F}_{2}$-vector space, and thus must have cardinality a power of $2$.
H: Let $G$ be a group and $n\in \Bbb Z$. $\forall x,y\in G$, $x^n=y^n \Rightarrow x=y$ and $xy^n=y^nx$. Thus, prove that $G$ is abelian. Let $G$ be a group and $n\in \Bbb Z$. $\forall x,y\in G$, $x^n=y^n \Rightarrow x=y$ and $xy^n=y^nx$. Thus, prove that $G$ is abelian. Working backwards, I get $$xy=yx$$$$(xy)^n=(yx)^n$$ but I am unsure on how to show $xy^n=y^nx\Rightarrow (xy)^n=(yx)^n$. AI: Hint: $x^{-1}y^n x = (x^{-1} y x)^n.$
H: Can you do this question without matrices? Can I use the formula for a trapezium? I'm not really sure where to start AI: Refer to the graph (figure omitted): $$\small{A=\frac{b_1+b_3}{2}\cdot (a_3-a_1)+\frac{b_3+b_2}{2}\cdot (a_2-a_3)-\frac{b_1+b_2}{2}\cdot (a_2-a_1)}$$ I leave the rest of the work to you.
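The same trapezoid idea, applied edge by edge, gives the shoelace formula for any simple polygon. A minimal Python sketch (illustrative; the helper name polygon_area is made up here, not part of the original answer):

def polygon_area(points):
    """Signed area via the trapezoid under each edge; absolute value at the end."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += (y1 + y2) / 2 * (x2 - x1)  # trapezoid between this edge and the x-axis
    return abs(area)

# Example: the triangle with vertices (0, 0), (4, 0), (1, 3) has area 6.
print(polygon_area([(0, 0), (4, 0), (1, 3)]))  # 6.0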
H: A graph with $n$ vertices has distinct degrees except for one degree, say $x$, which occurs twice. Find $x$ and prove it. I found that when $n$ is odd, $x$ equals $\frac{n-1}{2}$. When $n$ is even, $x$ has two possible values: one is $\frac{n}{2}$ and the other is $\frac{n}{2} -1$. But I face difficulty in proving it. AI: Hint: there must be a vertex adjacent to no others (degree $0$) or a vertex adjacent to all others (degree $n-1$), but not both. If you remove this vertex, the remaining graph also has all degrees different except for one which occurs twice (unless there is only one vertex left). Also, if the removed vertex was adjacent to no others, the remaining graph has a vertex adjacent to all others, and vice versa. This should enable you to proceed by induction.
H: Comparing elements of $L^p$ spaces I was studying the answers to the following question: About two functions whose Lebesgue integral on all sets of a $\sigma-$algebra are equal. Now I am wondering how to interpret the sets $\{f>g\}$, $\{f=g\}$,... and so on. In the case where $f$ and $g$ are continuous (and thus Borel-measurable), the case is clear and we can write for example $\{f>g\}=\{x\in X\colon f(x)>g(x)\}$. But what about the case where $f,g\in L^p(X)$ are measurable? I learned that elements of $L^p$-spaces are equivalence classes of measurable and integrable functions. So how can we define such a set, when the values of some function $f$ in the equivalence class $[f]$ are arbitrary at certain (or even all) points? If I choose representatives $f$ and $g$ and try to determine the set $\{f>g\}$, can't it become a completely different set if I choose new representatives? I always get confused when it comes to $L^p$-functions and pointwise arguments. Can you give me some simple or robust intuition? AI: The meaning of $[f]>[g]$ is that $f,g$ are almost everywhere real valued, and almost everywhere we have $f(x)>g(x)$. This doesn't depend on the choice of the representatives, because if you change the representatives you will change the functions only on a set of measure zero.
H: Representing $n!$ as a Polynomial For $n\in\mathbb N$, $n!$ could, theoretically, be expanded into a polynomial of degree $n$ as $$\underbrace{n(n-1)(n-2)(n-3)\cdots \left(n-(n-2)\right) (n-(n-1))}_{n \ \text{factors}} =\sum_{k=0}^n a_k n^k $$ How can I determine the coefficients $a_k$? For the $n^n$ term, there is only one choice, as every factor must contribute an $n$. So $a_n$ should be $1$. For the $n^{n-1}$ term, we need $n-1$ factors to contribute an $n$, and the remaining factor multiplies it with a constant term. So, $$a_{n-1} = -\sum_{i=0}^{n-1} i$$ and so on. But I'm not sure if what I'm doing really makes sense. Does such a polynomial representation of $n!$ really exist? AI: You are describing the functions $(x)_n = x(x-1)\cdots(x-n+1)$, which are known as the Falling Factorials. Expressed as polynomials, these have the Stirling Numbers of the First Kind as their coefficients. To be specific, we have $(x)_n = \sum_{k=0}^n s(n,k) x^k$ where $s(n,k)$ denotes the $(n,k)$th Stirling Number of the First Kind. From the definition of $(x)_n$ we can, with a little tinkering, find that $s(n+1,k) = s(n,k-1) - n\, s(n,k)$ (see my comments above) which allows the coefficients to be recursively calculated as desired.
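A quick way to check that recurrence numerically: build the coefficients of $(x)_n$ by repeatedly multiplying by $(x-m)$. A small Python sketch (not from the answer above; the function name is made up):

def falling_factorial_coeffs(n):
    """Return [s(n,0), ..., s(n,n)], the coefficients of (x)_n = x(x-1)...(x-n+1)."""
    c = [1]  # (x)_0 = 1
    for m in range(n):  # multiply the current polynomial by (x - m)
        c = [(c[k - 1] if k > 0 else 0) - m * (c[k] if k < len(c) else 0)
             for k in range(len(c) + 1)]
    return c

print(falling_factorial_coeffs(4))  # [0, -6, 11, -6, 1]: (x)_4 = x^4 - 6x^3 + 11x^2 - 6x

The update c'_k = c_{k-1} - m*c_k is exactly the recurrence $s(m+1,k) = s(m,k-1) - m\,s(m,k)$.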
H: Areas of two polygons with same centroid Given a polygon $P$ in $\mathbb{R}^2$, the centroid of $P$ is $(v_1+\cdots+v_n)/n$, where $v_1,\ldots,v_n\in\mathbb{R}^2$ are the vertices of $P$. Suppose $P$ and $Q$ are two polygons in $\mathbb{R}^2$ that satisfy: They have the same centroid. The ratios of their shadows in every direction are at most $C$, i.e. for every line $l$ going through their centroid, $$length(l\cap P)/length(l\cap Q)\leq C.$$ Claim: The ratio of their areas is at most $C^2$. Is the claim correct? AI: I don't think this can be true. Consider this figure: Although this has circular segments, it can be approximated by a polygon. Cuts through its centroid are almost all the same length, and if I did my calculations right, its area is $\frac{10}{9}$ times that of a circle with that diameter. So here are two figures, which when approximated by polygons $P$ and $Q$, will have almost the same cut lengths through their centroid, but a distinctly different area. If your polygons approximate them closely enough, $C$ will be close to $1$, and for sure you can get $C^2<\frac{10}{9}$, and construct a counterexample to your claim. I think that even with convex polygons, a similar technique will result in a counterexample.
H: I got a problem in indices: $3^{(2x+3)} - 2\cdot 9^{(x+1)} = 1/3$ Please help me with this problem. It's an elementary mathematics indices problem. AI: Using the index laws, transform $3^{2x+3} = 3^{2x} \times 3^3$ and $9^{x+1} = 3^{2(x+1)} = 3^{2x}\times 3^2$, so that the LHS becomes $3^{2x}(3^3 - 2\times 3^2) = 3^{2x}(3\times 3^2 - 2\times 3^2) = 3^{2x} \times 3^2$. Now you need only to solve $3^{2x} \times 3^2 = 3^{-1}$, which means $2x + 2 = -1$, so $x = -3/2$.
H: Does there exist a symmetric matrix $A$ such that $2^{\sqrt{n}}\le |\operatorname{Tr}(A^n)|\le2020 \ \cdot 2^{\sqrt{n}}$ for all $n$ Does there exist a symmetric matrix $A$ such that $2^{\sqrt{n}}\le |\operatorname{Tr}(A^n)|\le2020 \cdot2^{\sqrt{n}}$ for all $n$? I think no. The trace of $A^n$ equals $\sum_i\lambda_i^n$ where $\lambda_i$ are the eigenvalues of $A$. Now, if the absolute value of the trace of $A$ is bounded below by $2$, then I think the trace of $A^n$ will grow infinitely. Am I right? Thanks beforehand. AI: As you said, for a symmetric matrix $A$ we have $Tr(A^n) = \sum_i \lambda_i^n$. Now consider two cases for $\lambda_\max = \max_i |\lambda_i|$: $\lambda_\max \le 1$. Then $|Tr(A^n)| \le d$, where $d$ is the size of the matrix, which is smaller than $2^{\sqrt n}$ for a sufficiently large $n$. $\lambda_\max > 1$. Then for even $n$ we have $Tr(A^n) \ge \lambda_\max^n$, which is greater than $2020 \cdot 2^{\sqrt n}$ for a sufficiently large $n$.
H: If $\sqrt{1-a}\leq\sqrt{1-b}+\sqrt{1-c}$ would it imply $\sqrt{1-a^2}\leq\sqrt{1-b^2}+\sqrt{1-c^2}$? Question: If $\sqrt{1-a}\leq\sqrt{1-b}+\sqrt{1-c}$ would it imply $\sqrt{1-a^2}\leq\sqrt{1-b^2}+\sqrt{1-c^2}$ That is, $a,b,c\in [-1,1].$ Would this inequality necessarily be true? I tried to break up $\sqrt{1-a^2}=\sqrt{1-a}\sqrt{1+a}\leq\sqrt{1+a}(\sqrt{1-b}+\sqrt{1-c}).$ However I am not sure how to proceed further. I also cannot seem to find a counter example either. Many thanks in advance! AI: No. An easy counterexample: let $c=-1, a=0$, then $$1=\sqrt{1-a}<\sqrt{2}=\sqrt{1-c}\leq\sqrt{1-b}+\sqrt{1-c},$$ but you won't have $1=\sqrt{1-a^2}\leq\sqrt{1-b^2}=\sqrt{1-b^2}+\sqrt{1-c^2}$ unless $b=0$.
H: There is only one positive integer that is both the product and sum of all its proper positive divisors, and that number is $6$. Confused as to how to show the number 6's uniqueness. This theorem/problem comes from the projects section of "Reading, writing, and proving" from Springer. Definition 1. The sum of divisors is the function $$\sigma (n) = \sum_{d\,|\,n} d,$$ where $d$ runs over the positive divisors of $n$ including 1 but not $n$ itself. Definition 2. The product of divisors is the function $$p(n) =\prod_{d\,|\,n} d,$$ where $d$ runs over the positive divisors of $n$ including 1 but not $n$ itself. This is my progress, ends very quickly: So the logical form of the problem is $\exists!x \left( \sigma(x) = x \wedge p(x) = x \right)$, which can be reexpressed as either $\exists x((\sigma(x) = x \wedge p(x) = x) \wedge \forall y ((\sigma(y) = y \wedge p(y) = y)\rightarrow y=x))$ or $\exists x(\sigma(x) = x \wedge p(x) = x) \wedge \forall y \forall z (((\sigma(y) = y \wedge p(y) = y) \wedge (\sigma(z) = z \wedge p(z) = z)) \rightarrow y=z)$. We use existential instantiation and choose x to be 6. So, choosing the first method (this choice seemed simpler to me), $(\sigma(6)= 6 \wedge p(6) = 6) \wedge \forall y ((\sigma(y)=y\wedge p(y) = y) \rightarrow y=6)$. How do we go about proving 6's uniqueness; how do we get that y=6? Theorem. There is only one positive integer that is both the product and sum of all its proper positive divisors, and that number is $6$. Proof. Existence: Suppose $n =6$. Then $\sigma(6) = 1 + 2 + 3 =6 $ and $p(6)= 1 \cdot 2 \cdot 3 = 6$, so 6 is both the product and sum of all its proper positive divisors. Uniqueness: [I have no idea.] $\square$ AI: Let $n$ be a positive integer that satisfies the requirement. It can be readily checked that $n>1$. Also, $n$ is not a prime power: if $n=p^k$ for some prime natural number $p$ and for some positive integer $k$, then we have $$1+p+p^2+\ldots+p^{k-1}=p^k=1\cdot p\cdot p^2\cdot \ldots\cdot p^{k-1}\,.$$ Thus, $p$ divides $1+p+p^2+\ldots+p^{k-1}$. Do you see a problem here? Therefore, $n$ has at least two distinct prime factors. Let $p$ and $q$ denote two distinct prime natural numbers that divide $n$. Obviously, $pq\mid n$, whence $$n\geq pq\,.$$ Then, $\dfrac{n}{p}$, $\dfrac{n}{q}$, and $\dfrac{n}{pq}$ are proper divisors of $n$. Consequently, as $n$ is the product of its (positive) proper divisors, we get $$n\geq \left(\dfrac{n}{p}\right)\cdot\left(\dfrac{n}{q}\right)\cdot\left(\dfrac{n}{pq}\right)=\frac{n^3}{p^2q^2}\,.$$ Therefore, $n^2\leq p^2q^2$, or $n\leq pq$. However, $n\geq pq$. We then conclude that $n=pq$. Thus, $1$, $p$, and $q$ are the only positive proper divisors of $n$. Ergo, from the requirement, $$1\cdot p\cdot q=n=1+p+q\,.$$ Therefore, $pq=p+q+1$, or $$(p-1)(q-1)=2\,.$$ You can finish this, I suppose. Related Questions. (a) If $n$ is a positive integer such that $n$ equals the product of all positive proper divisors of $n$, then show that $n=p^3$ for some prime natural number $p$, or $n=pq$ for some distinct prime natural numbers $p$ and $q$. (b) If $n$ is a positive integer such that the product of all positive proper divisors of $n$ equals the sum of all positive proper divisors of $n$ (without requiring that the product or the sum is equal to $n$ itself), then prove that $n=6$. (c) If $n$ is a positive integer such that the product of all positive divisors of $n$ equals the sum of all positive divisors of $n$, then prove that $n=1$.
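A brute-force sanity check of the uniqueness claim up to a bound (a sketch, not part of the proof):

from math import prod

def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

hits = [n for n in range(2, 10_000)
        if sum(proper_divisors(n)) == n == prod(proper_divisors(n))]
print(hits)  # [6]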
H: why $(M_1^{\perp})^{\perp} \subset [T^*(H_2) ]^{\perp} ?$ I have some confusion about the subset sign given below; my confusion is marked with a red circle as shown below. It is given that $T^*(H_2) \subset M_1^{ \perp}$, then why $(M_1^{\perp})^{\perp} \subset [T^*(H_2) ]^{\perp} ?$ I think it should be $[T^*(H_2) ]^{\perp} \subset (M_1^{\perp})^{\perp}$ AI: If $H$ is an inner product space with inner product $( \cdot, \cdot)$ and if $A \subset B \subset H,$ then we have $$B^{\perp} \subset A^{\perp}.$$ Proof: let $x \in B^{\perp}$, then $(x,b)=0$ for all $b \in B.$ Since $A \subset B $, we get $(x,a)=0$ for all $a \in A.$ This gives $x \in A^{\perp}$.
H: Understanding the definition of a differential operator on manifolds In Christian Bär's "Geometric Wave Equations" notes there is this definition of a differential operator. I know what $\frac{\partial f}{\partial x^i}$ means when $f:M\rightarrow \mathbb{R}^n$ is a smooth function. But I don't understand what is meant by $\frac{\partial v}{\partial x^i}$ when $v:M\rightarrow E$ is a smooth section ($M$ and $E$ being manifolds). Any help is appreciated, cheers. AI: You are choosing a local trivialisation of $E, F$. So when you restrict a section $v:M\to E$ to $U$ it is of the form $$x\mapsto (x,(v_1(x),...,v_p(x)))$$ with $v_1,...,v_p$ functions $U\to\Bbb K$. Concretely if $\iota: E\lvert _U\to U\times \Bbb K^p$ is the trivialisation then $v_i$ is the $i$-th component of $\iota\circ v$. Now you can apply the partial differentials of the coordinates of $U$ to get a map $U\to\Bbb K^p$, then apply the matrices $A(x)$ at each point and sum it all up to get a map $p:U\to \Bbb K^q$. Now if $\kappa : F\lvert_U\to U\times \Bbb K^q$ is the local trivialisation of $F$, apply $\kappa^{-1}$ to $u\mapsto (u, p(u))$ to get a local section of $F$.
H: Let $A,B\in\mathbb{R}^{n\times n}$, where $A$ is PSD and $B$ NSD. If $\mathrm{tr}(AB)=0$, show that $AB=0$. Let $A,B\in\mathbb{R}^{n\times n}$, where $A$ is a positive semidefinite matrix and $B$ a negative semidefinite matrix. If $\mathrm{tr}(AB)=0$, show that $AB=0$. AI: If you are using a definition in which "positive semidefinite" implies symmetric, then we can simply proceed as follows. Because $B$ is negative semidefinite, it has a decomposition $B = -MM^T$. It follows that $$ 0 = \operatorname{tr}(AB) = -\operatorname{tr}(AMM^T) = -\operatorname{tr}(M^TAM). $$ $M^TAM$ is positive semidefinite. So, $\operatorname{tr}(M^TAM) = 0 \implies M^TAM = 0$. It follows that all vectors $x$ in the column space of $M$, which is also the column space of $B$, satisfy $Ax = 0$. Thus, we have $AB = 0$, which was what we wanted. Note that if we use the more general definition where we only require $x^TAx \geq 0$ for $x \neq 0$, the statement fails. For example, $$ A = \pmatrix{1&-1\\1&0}, \quad B = \pmatrix{0&0\\0&-1}. $$
H: What is the actual meaning of the second derivative? I am confused why we use the second derivative to find maxima and minima. I cannot understand what the meaning of the second derivative is. Also I have come across these formulae: if the second derivative is greater than zero then it is a minimum; if the second derivative is less than zero then it is a maximum; if it is equal to zero then go on to higher order derivatives. Can anyone explain the reason behind these formulae? AI: The extrema are found where the derivative is zero. As zero has no sign, you can't tell a minimum from a maximum. A minimum is where the slope goes from negative to positive, hence the first derivative is increasing, and conversely a maximum is where the slope goes from positive to negative, hence the first derivative is decreasing. So the sign of the second derivative allows you to tell a minimum from a maximum. If the second derivative is zero, you need more criteria.
H: When is the sum of two uniform random variables uniform? Suppose that $X$ and $Y$, two random variables, are both uniformly distributed over $[0,1]$. Let $Z=\frac{1}{2}X+\frac{1}{2}Y$. I know that in general, $Z$ is not uniform. For instance, $Z$ is not uniform if $X$ and $Y$ are independent. On the other hand, if $X=Y$, then $Z$ is uniformly distributed over $[0,1]$. My question: Suppose $Z$ is uniformly distributed over $[0,1]$. Is $X=Y$? In other words, is $X=Y$ the only case where $Z$ is uniform over $[0,1]$? AI: The answer is YES. We have $$ \begin{align*} \frac 1 3 &=E[Z^{2}] \\ &=\frac 1 4 E[(X+Y)^{2}]\\ &=\frac 1 4(E[X^{2}]+E[Y^{2}]+2E[XY])\\ &=\frac 1 4(\frac 1 3+\frac 1 3+2E[XY]). \end{align*}$$ This gives $E[XY]=\frac 1 3$. This implies that we have equality in Cauchy-Schwarz inequality: $$E[XY]=\sqrt {E[X^{2}]}\sqrt {E[Y^{2}]}$$ and hence $X$ and $Y$ are constant multiples of each other. But the constant factor has to be $1$ since $X$ and $Y$ have uniform distribution on $[0,1]$. Hence $X=Y$.
H: Relation between the eigenvalue of $T$ to the eigenvalue of $p(T)$ Let $V$ be a vector space over a field $\mathbb{F}$. Suppose that $T: V \rightarrow V$ is a linear operator with an eigenvalue $\lambda$, and $v$ is an eigenvector of $T$ corresponding to $\lambda$. Why is it true that, for every $p(x) \in \mathbb{F}[x]$, the scalar $p(\lambda)$ is an eigenvalue of the operator $p(T)$ and $v$ is also an eigenvector of $p(T)$ corresponding to the eigenvalue $p(\lambda)$? AI: If $p = a_0 + a_1 X + ... + a_n X^n$, and $Tv = \lambda v$ ($v \neq 0$), then $$ p(T) v = a_0 Iv + a_1 Tv + ... + a_n T^n v = a_0v + a_1 \lambda v + ... + a_n \lambda^n v = p(\lambda) v. $$ This uses the definition of $p(T),$ and $T^n v = \lambda T^{n-1}v = ... = \lambda^n v.$
H: Show that a function is invertible I have to show that the function $f(x) = \frac{ax + b}{cx + d}$, where $ad - bc\neq 0$, has an inverse function. I've tried some ways to go around it, i.e. checking if $g(f(x))$ has $x$ as an identity, but the algebra got really difficult and I could not get anywhere. Any hints on how to solve this one? Best, AI: Basically we have to prove that the function $f$ is bijective onto its image. We just need to show that $\bullet~$ $f$ is one-one. $\bullet~$ $f$ is surjective onto its image. $\circ~$Now, \begin{align*} &f(t) = f(z)\\ \implies & \frac{at + b}{ct + d} = \frac{az + b}{cz + d}\\ \implies & actz + bcz + dat + bd = actz + adz + bct + bd\\ \implies & (bcz - bct) - (adz - adt) = 0\\ \implies & bc(z - t) - ad(z - t) = 0\\ \implies & (z - t)(bc - ad) = 0\\ \implies & z = t \quad [\text{as }~ ad - bc \neq 0] \end{align*} hence $f$ is one-one. $\circ~$ Let's consider an arbitrary $y$ $\in$ $\text{im}(f)$, such that $$ y = \frac{ax + b}{cx + d} $$ Now we have that \begin{align*} &y = \frac{ax + b}{cx + d}\\ \implies & ycx + yd = ax + b\\ \implies & ycx - ax = b - yd \\ \implies & x (yc - a) = b - yd\\ \implies & x = \frac{b - yd}{yc - a} \end{align*} Therefore $f$ is surjective onto its image. Hence, the map is surjective + one-one = bijective, hence invertible, and the inverse exists. The image of $f$ is $\mathbb{R}\setminus\left\{\frac{a}{c}\right\}$ if $c \neq 0$; if $c = 0$, the image is all of $\mathbb{R}$. Moreover the inverse function is $$ f^{-1}(x) = \frac{b - xd}{xc - a} \quad \text{for } x \in \text{im}(f) $$
H: What is the expression for the centroid of an arbitrary parameterized space curve? Let $\gamma:t\in[a,b]\rightarrow (x(t),y(t),z(t))\in \mathbb{R}^3$ be a parametrized curve. I am looking for the expression of the centroid of the curve $\gamma$, with a good reference. (I didn't find a good one.) AI: You can use the standard definition of center-of-mass: $$ r={\int_a^b\gamma(t)\,|\dot\gamma(t)|\,dt\over \int_a^b|\dot\gamma(t)|\,dt}, $$ where: $\dot\gamma(t)=(\dot x(t), \dot y(t), \dot z(t))$ and $|\dot\gamma(t)|=\sqrt{\dot x^2(t)+\dot y^2(t)+\dot z^2(t)}$.
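As a numerical illustration of this formula (an assumed example, not from the answer above): the arc-length centroid of a unit semicircle is known to be $(0, 2/\pi, 0)$, and a short Python sketch reproduces it.

import numpy as np

def curve_centroid(gamma, a, b, n=100_000):
    """Approximate the arc-length centroid of gamma : [a, b] -> R^3."""
    t = np.linspace(a, b, n)
    pts = np.array([gamma(s) for s in t])                      # shape (n, 3)
    speed = np.linalg.norm(np.gradient(pts, t, axis=0), axis=1)
    return np.trapz(pts * speed[:, None], t, axis=0) / np.trapz(speed, t)

print(curve_centroid(lambda s: (np.cos(s), np.sin(s), 0.0), 0.0, np.pi))
# approximately [0, 0.63662, 0], matching (0, 2/pi, 0)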
H: Evans' PDE Exercise 6.6: Weak solution of Dirichlet-Neumann boundary value problem The exercise is the exercise 6.6 from Evans' PDE. Suppose $U$ is connected, and $\partial U$ consists of two disjoint, closed sets $\Gamma_1$ and $\Gamma_2$. Define what it means for $u$ to be a weak solution of Poisson equation with mixed Dirichlet-Neumann boundary conditions: $$ \begin{cases} -\Delta u = f \ \ \ \text{in $U$} \\ u = 0 \ \ \ \text{on $\Gamma_1$} \\ \frac{\partial u}{\partial \nu} = 0 \ \ \text{on $\Gamma_2$}. \end{cases} $$ My attempts: Let $u \in C^{\infty}(U)$ be a solution to the above problem. Then for $v \in H^1(U)$, integration by parts yields \begin{align} (f,v) = -\int_U (\Delta u) v & = \int_U Du \cdot Dv -\int_{\partial U} \frac{\partial u}{\partial \nu} v \\ & = \int_U Du\cdot Dv - \int_{\Gamma_1} \frac{\partial u}{\partial \nu} v \end{align} I wish to conclude $\int_{\Gamma_1} \frac{\partial u}{\partial \nu} v = 0$, but I don't know how to do that. Could anyone give me some hint? AI: You can pick the space of test functions $v$ to be the space $$H^1_{\Gamma_1}(U) = \{v \in H^1(U)\colon v = 0 \text{ on } \Gamma_1 \}, $$ which is a Hilbert space with respect to the inner product in $H^1$. Actually, you can verify that in $H^1_{\Gamma_1}$ the norm $$ \|v\|_{H^1_{\Gamma_1}(U)} = \|Dv\|_{L^2(U)}$$ is equivalent to the $H^1$ norm (by Poincaré inequality). Then the second integral equals $0$ since $v \in H^1_{\Gamma_1}(U).$
H: Use polar coordinates to compute volumes, via the change of variables theorem So there is a question in my Analysis book (with 3 subquestions) that I think I understand, but I cannot seem to understand the approach used in the solution. I've tried all subquestions, and for each of them I seem to make a mistake somewhere. When I then look at the solution they approach it differently, and I don't understand why. The question goes: Use polar coordinates to compute the volume of the region enclosed by the $x y$ -plane and the paraboloid $z=25-x^{2}-y^{2}$; $\int_{0}^{1} \int_{0}^{\sqrt{1-y^{2}}} \frac{1}{1+x^{2}+y^{2}} d x d y$; the volume of $R=\left\{(x, y, z): 0 \leq z \leq \sqrt{4-x^{2}-y^{2}},(x-1)^{2}+y^{2} \leq 1\right\}$. To solve this I take the following steps. I sketch the volumes to get an idea of the problem. I then create my new region in polar coordinates by transforming the constraints via $x=r\cos\phi$ and $y=r\sin\phi$ (for $\mathbb{R}^2$; for $\mathbb{R}^3$ they're different but I won't mention them). This transformation is hard (for me) and always seems to give me incorrect boundaries for my integrals. Then once I have the right boundaries I rewrite my integral over the new region and multiply it by the absolute Jacobian of my polar coordinate functions, which in the case of polar coordinates (in $\mathbb{R}^2$) is $r$. This is because of the change of variables theorem. Then I can integrate over $\phi$ and over $r$ one by one and compute my result. These are the provided solutions: 1: approach 1 $$ \begin{aligned} A &=\int_{0}^{2 \pi} d \varphi \int_{0}^{5} rdr\left(25-r^{2}\right) \\ &=\int_{0}^{2 \pi} d \varphi\left[\frac{25}{2} r^{2}-\frac{1}{4} r^{4}\right]_{0}^{5} \\ &=2 \pi\left(\frac{2\cdot 5^{4}-5^{4}}{4}\right)=\frac{1}{2} \pi\, 5^{4} \\ &=\frac{625}{2} \pi \end{aligned} $$ approach 2: $$ \begin{array}{l} x^{2}+y^{2}=r^{2} \\ \int_{0}^{25} d z\left(\int_{x^{2}+y^{2} \leq 25-z}d x d y\right)= \\ \int_{0}^{25} d z(\pi \cdot(25-z))= \\ \pi \cdot\left[25 z-\frac{1}{2} z^{2}\right]_{0}^{25}= \\ \pi \cdot\left(25^{2}-\frac{1}{2} 25^{2}\right)=\frac{1}{2}(25)^{2} \cdot \pi \\ =312\tfrac{1}{2} \pi \end{array} $$ 2. $$ \begin{aligned} & \int_{0}^{\pi / 2} d \varphi \int_{0}^{1} rdr\left(\frac{1}{1+r^{2}}\right) \\ =& \int_{0}^{\pi / 2} d \varphi\left[\ln \left(1+r^{2}\right)\right]_{0}^{1} \\ =& \frac{\pi}{2} \cdot \ln (2) \end{aligned} $$ 3. $$ \begin{array}{l} \sqrt{4-x^{2}-y^{2}}=\sqrt{4-r^{2}} \\ (x-1)^{2}+y^{2} \leq 1 \Leftrightarrow \\ x^{2}-2 x+1+y^{2} \leq 1 \\ x^{2}+y^{2} \leq 2 x \\ r^{2} \leq 2 r\cos \varphi \\ r \leq 2 \cos \varphi \end{array} $$ $$ \begin{array}{l} \text { So Vol}(R)= \\ \int_{-\pi / 2}^{+\pi / 2} d\varphi \int_{r=0}^{2 \cos \varphi} r d r(\sqrt{4-r^{2}})= \end{array} $$ $$ \begin{array}{l} =\int_{-\pi / 2}^{\pi / 2} d \varphi\left[-\frac{1}{3}(4-r^2)^{3 / 2}\right]_{0}^{2 \cos \varphi} \\ =\int_{-\pi / 2}^{\pi / 2} d \varphi\left(8 / 3-\frac{8}{3}\left(1-\cos ^{2} \varphi\right)^{3 / 2}\right) \\ =\int_{-\pi / 2}^{\pi / 2} d \varphi\left(8 / 3-\frac{8}{3}|\sin \varphi|^{3}\right) \\ =2 \cdot \int_{0}^{\pi / 2} d \varphi\left(8 / 3-\frac{8}{3} \sin ^{3} \varphi\right) \\ =2 \cdot\left(\frac{8}{3} \cdot \frac{\pi}{2}-\frac{8}{3} \cdot \frac{2}{3}\right)=\frac{8}{3} \pi-\frac{32}{9} \end{array} $$ I hope I scanned/typed the solutions correctly. My question goes: Can anyone explain the steps taken to get to the boundaries in parts 1, 2 and 3? Where does the $r$ go in part 2, which states $rdr$ right before it is integrated? I hope I have explained everything clearly. Cheers!
AI: I would usually get the bounds for polar coordinates by sketching the shape of the region of integration and looking for a polar equation. In case 1 the region is a disk centered at the origin and in case 2 it's a quarter of a disk centered at the origin; so the limits for $r$ run from $0$ (at the origin) to the radius of the disk, and the limits for $\phi$ run all the way round the disk in case 1 and just one-quarter of the way around in case 2. In case 3 you have a disk with the origin on the circumference. You might happen to recall that the polar equation of the circle bounding that disk is $r = 2\cos\phi.$ To "cover" the area of the disk, you need to integrate along the radials in all directions to the right of the origin: everything between $-\frac\pi2$ and $0$ to cover the region below the $x$ axis, and $0$ to $\frac\pi2$ to cover the region above the $x$ axis. In part 2, the $r$ didn't "go" anywhere, or you might say it "went" to the same place as the $1/(1+r^2)$. The solution requires evaluating an integral in $r,$ $$ \int r \,dr\left(\frac1{1+r^2}\right) = \int \left(\frac r{1+r^2}\right) dr = \frac12\ln(1+r^2). $$ Note that the given "solution" omitted the factor $\frac12.$ The "solution" provided is therefore twice as large as the correct answer.
H: Is the set of monotone functions $f:[a,b] \to [0,1]$ compact in $L^2([a,b])$? Is the set of equivalence classes of monotone functions $f:[a,b] \to [0,1]$ compact in $L^2([a,b])$? AI: Yes. Let $f_n$ be a sequence of such functions. By the Helly selection theorem, there is a subsequence $f_{n_k}$ converging pointwise to some $f$, which is clearly again monotone. And by dominated convergence this subsequence also converges in $L^2$.
H: Euler's method to approximate a differential equation $\frac{dy}{dx} = x - y$ Question: Use Euler's method to find approximate values for the solution of the initial value $-$ problem $$\frac{dy}{dx} = x-y$$ $$y(0)=1$$ on the interval $[0,1]$ using five steps of size $h = 0.2$. My attempts: I know that the recurrence relation $y_{n+1} = y_{n} + hf(x_n,y_n)$ however I am unable to see how the interval comes into play. An idea I had was to consider the bounds of the interval and approximate $y(0)$ and $y(1)$ however this does not include $h$ so I am extremely skeptical. Any help or guidance is greatly appreciated! AI: Make a little table -- I've filled in the first couple of rows for you: \begin{array}{|c|c|c|c|c|} \hline x & y & \Delta x & \frac{dy}{dx} = x - y & \Delta y \approx \frac{dy}{dx}\Delta x \\ \hline 0 & 1 & 0.2 & -1 & -0.2 \\ \hline 0.2 & 0.8 & 0.2 & -0.6 &-0.12 \\ \hline 0.4 & 0.68 & 0.2 & & \\ \hline 0.6 & &0.2 & & \\ \hline 0.8 & &0.2 & & \\ \hline 1 & & 0.2 & & \\ \hline \end{array} You are done when you get to the bottom left.
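A short script that fills in the remaining rows of the table (a sketch of the same recurrence, not part of the original answer):

def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    rows = [(x, y)]
    for _ in range(steps):
        y += h * f(x, y)  # y_{n+1} = y_n + h * f(x_n, y_n)
        x += h
        rows.append((x, y))
    return rows

for x, y in euler(lambda x, y: x - y, 0.0, 1.0, 0.2, 5):
    print(f"x = {x:.1f}, y = {y:.5f}")
# The last row gives y(1) ~ 0.65536.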
H: Intuition of subsets which are not in the sigma algebra I am studying probability theory and from what I have understood is that when the sample space is uncountable, the probability measure cannot assign probabilities to every possible subset of the sample space, hence we build another set containing the subsets of the sample space to which we can consistently assign probabilities to. But I was wondering about those other subsets which are left out and not included in the sigma-algebra. Are those probabilities 0 or do we not even consider those subsets as events? For example, the probability of choosing a number 1/2 between [0,1] is 0. Here is the event of choosing the number 1/2 defined to be equal to 0, or do we not even consider that to be an event? Any help or intuition would help a lot. Thanks in advance. AI: You have to be a bit more careful. We can define a probability measure on the power set of an uncountable set. For instance, with $x\in[0,1]$, the Dirac measure $\delta_x(A):=1$ if $x\in A$ and $\delta_x(A)=0$ otherwise can be defined on the power set of $[0,1]$. It is only certain properties we might want to enforce in addition to the defining properties of a measure which might force us to exclude some sets. In particular, there can be no measure $\lambda$ on the power set of $[0,1]$ which assigns each interval $[a,b]\subseteq[0,1]$ its length $\lambda([a,b])=b-a$, because we can construct sets $V$ ($V$ for Vitali, the guy who first constructed these) where any choice for $\lambda(V)$ leads to inconsistencies. But this only applies to this special choice of a measure and related measures. Not all measures! As for the sets we exclude: we simply do not consider them events, so we don't assign any probability to them. In particular, we also don't say that they have probability $0$. We don't talk about them at all. If an event has probability $0$ it is still an event and part of the $\sigma$-algebra.
H: Show that $\mathrm{Cov}[g(X), h(X)] \ge 0$ whenever $g$ and $h$ are nondecreasing. Intuitively, the covariance of two nondecreasing functions of a random variable should be nonnegative. However I can't seem to come up with a proof for this. Here is the formal setup: Let $X: (\Omega, \mathcal A)\to (\mathbb R, \mathcal B)$ be a random variable defined on the probability space $(\Omega, \mathcal A, P)$ and let $g$ and $h$ be nondecreasing functions $\mathbb R \to \mathbb R$. To make sure that everything is well-defined assume that $E[g(X)^2]<\infty$ and $E[h(X)^2]<\infty$. Question: Is is always true that $$\mathrm{Cov}[g(X),h(X)]\ge 0\,?$$ Some notes: Note that the assertion is equivalent to showing that $E[g(X)h(X)]\ge E[g(X)]E[h(X)].$ I tried reducing the problem to showing that $$E[Xf(X)] \ge E[X]E[f(X)]$$ holds for every nondecreasing $f$ whenever $E[X^2]$ and $E[f(X)^2]$ are finite. To do this, I defined $Y=h(X),$ $f = g \circ h^{-1}$ and wrote $$E[g(X)h(X)]=E[g(h^{-1}(Y)Y] = E[f(Y)Y].$$ But this assumes that $h$ is strictly increasing, which is not necessarily true. Moreover, even in this case I'm not sure how to start. AI: This is the Chebyshev sum inequality. The proof is also really nice. Let $X_1, X_2$ be two iid copies of $X$. Then note that $$(g(X_1)-g(X_2))(h(X_1)-h(X_2)) \ge 0 $$ from the fact that $g,h$ are non decreasing. Taking expectations then gives us $$\mathbb{E}[g(X)h(X)] \ge \mathbb{E}[g(X)] \mathbb{E}[h(X)].$$
H: Probability problem related to 2 rooks on a 8×8 chessboard Two distinct squares are chosen uniformly at random on an $8\times 8$ chessboard, and rooks are placed on these squares. What is the probability that they will attack each other? Edit: thanks everyone for their concern. I solved it :p. No matter where I place the first rook, there are 14 squares on which the second rook would attack it, out of the 63 remaining squares, so the probability is $14/63 = 2/9$. AI: There are $64$ options for placing the first rook and $63$ for the second. The first rook can be placed on any of the $64$ squares, but the second must then be placed on one of the $14$ attacked squares. The answer is $\dfrac{14}{63} = \dfrac{2}{9}$.
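A brute-force confirmation over all ordered pairs of distinct squares (a sketch, not part of the answer above):

from fractions import Fraction
from itertools import product

squares = list(product(range(8), repeat=2))
attacking = sum(1 for s, t in product(squares, repeat=2)
                if s != t and (s[0] == t[0] or s[1] == t[1]))
print(Fraction(attacking, 64 * 63))  # 2/9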
H: Graph of Topologist's Sine Curve I'm looking into whether the graph of the topologist's sine curve and the closed topologist's sine curve are closed or not. But due to some misconception, I'm facing problems with this. $\underline{\text{Question} : 1}$ Here it proves that if $f\colon X\to Y$ is continuous and $Y$ is Hausdorff, then the graph $G_f$ of $f$ is closed. $f(x)=\sin\frac1x,x\in(0,1]$ is continuous and $\mathbb R^2$ is Hausdorff, this means $G_f$ is closed, but $G_f\ne\bar{G_f}$, implying $G_f$ is not closed. $\underline{\text{Question} : 2}$ Here it proves that if the graph $G_f$ of $f\colon X\to Y$ is closed in $X\times Y$, then $f$ is continuous if $Y$ is compact. The closed topologist's sine curve is closed in $\mathbb R^2$. If we take a compact subset of $\mathbb R^2$ containing this graph, for example take $[-\frac12,1]\times[-2,2]$, then the whole curve lies there. And hence $f\colon[0,1]\to Y$ is continuous, which it surely is not. As a beginner in topology, I'm certainly missing something, but can't see what. It would be great if someone could please point out where I'm going wrong. AI: Question 1: what do you take $X$ here to be? If $X$ is $\Bbb{R}$, then $f$ is not defined on all of $X$ (it is only defined on $(0,1]$). If $X$ is $(0,1]$, then the graph should be considered as a subset of (say) $(0,1]\times \Bbb{R}$. And there it is indeed a closed subset. Question 2: The "closed topologist's sine curve" is not a graph of any function: there is more than one $y$ with $(0,y)$ in it. So this result cannot be applied.
H: Using the implicit function theorem to solve a system of equations I have the following question: Consider the set $\Gamma \subseteq \mathbb{R}^3$ of solutions of the system \begin{equation*} \begin{cases} x+\ln{y}+2z-2=0 \\ 2x+y^2+e^z-1-e=0 \end{cases} \end{equation*} Describe $\Gamma$ in a neighborhood of $(0,1,1)$ using the Implicit Function Theorem. I saw this and it helped me to start doing something, but I couldn't conclude it. My attempt: define $f:U\subseteq\mathbb{R}^3\to\mathbb{R}^2$ by $f(x,y,z)=(x+\ln y+2z-2,\,2x+y^2+e^z-1-e)$ where $U$ could be $\mathbb{R}\times\mathbb{R}^+\times \mathbb{R}$. Then $\frac{\partial f}{\partial(y,z)}=\begin{pmatrix}1/y & 2y\\2 & e^z\end{pmatrix}$ which has non-zero determinant at the mentioned point $(0,1,1)$. So I can use the implicit function theorem to say that $y$ and $z$ can be described as functions of $x$ (in an open neighbourhood of $(0,1,1)$). Following the hint of the link, as I have by the implicit function theorem $f(x,y(x),z(x))=0$ in an open neighbourhood, I can differentiate both sides with respect to $x$. But then I get $\begin{cases}1+y'/y+2z'=0\\2+2yy'+e^zz'=0\end{cases}$ which I couldn't solve for $y,z$ or $y',z'$. Also I can't use ODE methods because I haven't learned them in my course. Could you help me? Is this the correct way I should follow? AI: The IFT gives you: $\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} \frac{dy}{dx} + \frac{\partial f}{\partial z} \frac{dz}{dx} = 0$, that is, in matrix form: $\begin{pmatrix}\frac{\partial f}{\partial y} & \frac{\partial f}{\partial z}\end{pmatrix}\begin{pmatrix} \frac{dy}{dx} \\ \frac{dz}{dx} \end{pmatrix} = -\frac{\partial f}{\partial x}$. Solving this linear system at the point $(0, 1, 1)$ will give you the total derivatives $\frac{dy}{dx}$ and $\frac{dz}{dx}$. Note: you get a system with 2 equations and 2 unknowns - the one you wrote. That shouldn't be too difficult ;)
H: A ferris wheel completes 2 revolutions in 30 seconds. Determine how far it has travelled in 15 seconds. The radius of the ferris wheel is 10 m. If the Ferris wheel completes two revolutions in $30$ seconds, how many revolutions does the Ferris wheel complete in $15$ seconds? The radius of the Ferris wheel is 10 m. I'm stuck on this question; I'm not sure how to solve it. I would appreciate it if you could help me out here, thanks! AI: Note that 2 revolutions in 30 seconds works out to 1 revolution every 15 seconds. The question asks the distance traveled in 15 seconds, which, due to the previous calculation, works out to exactly 1 revolution. The distance traveled in one revolution is the circumference of the wheel, which is $2 \times \pi \times 10$ m, that is $20\pi$ meters, approximately $62.8$ m.
H: Why a hyperplane is a subspace? Given a nonzero vector $a \in \mathbb{R}^n$ and a scalar $b \in \mathbb{R}$, we define the hyperplane $$ H = \{x \in \mathbb{R}^n \; | \; a^T x = b\}. $$ Let $x$ and $y$ be any two vectors that belong to $H$, clearly $a^T (x - y) \neq b$ (unless $b = 0$), that is, $x - y$ is not in $H$. Furthermore, the zero vector is not in $H$ unless $b = 0$. So why a general hyperplane is a subspace? AI: As you correctly note, hyperplanes are not necessarily "vector subspaces", which can be seen from the fact that they do not contain the zero vector. However, every hyperplane is an affine subspace of $\Bbb R^n$.
H: Clarification on definition of a Sheaf On Wikipedia, the gluing and locality properties of a Sheaf are defined in terms of elements $s$ of the object $S$ associated with $\mathscr{F}(U)$. I have two points of confusion. I thought objects in a category don't necessarily have elements, so does this definition even make sense for categories outside of sets with structure? My second question is, assuming $S$ is a set: what is even meant by the gluing compatibility conditions, $$res_{V \cap W }(s_i) = res_{V \cap W }(s_j) $$ For instance, in the case of the skyscraper sheaf at a point $p$, given an open covering of $U$, $s_i$ may only even exist for the $U_i$ containing $p$. From the definitions, it feels like you need elements of a set to make things work, and implicitly a function associated with each element, defined for every open subset of $U$, that maps to the empty set over subsets where an element disappears. My thinking must be horribly wrong here but I'm hoping someone can clarify these misconceptions. AI: A (pre)sheaf on a category $\mathcal{C}$ is a functor from $\mathcal{C}^{\rm op} \to \mathsf{Set}$ or sets with extra structure: Abelian groups, rings, modules, etc. $\mathcal{C}$ is often the category of open sets on a topological space. In particular, $\mathscr{F}(U)$ is always a set, by definition. In the equation $\operatorname{res}_{V \cap W }(s_i) = \operatorname{res}_{V \cap W }(s_j)$ we assume that $s_i \in \mathscr{F}(V)$ and $s_j \in \mathscr{F}(W)$. Or to use the notation on Wikipedia: $s_i \in \mathscr{F}(U_i)$ and $s_j \in \mathscr{F}(U_j)$ with $$ \operatorname{res}_{U_i \cap U_j}(s_i) = \operatorname{res}_{U_i \cap U_j}(s_j) $$ For a skyscraper sheaf (of sets, let's say) $\mathscr{F}(U) = \{0\}$ (the terminal object of $\mathsf{Set}$ up to isomorphism) for all $U$ not containing $p$ and hence $s_i = 0$ for all $U_i$ not containing $p$. These $s_i$ still exist. Maybe it would be best for you to read some examples of sheaves and think through the glueing axiom. For example, the sheaf of continuous functions on a topological space or sheaves of smooth/continuously differentiable functions on a manifold.
H: What equation best represents this set of data? Here is the graph. (It is the same as below.) The points are symmetrical over the $y$-axis, but I cannot find an equation that accurately represents this graph. AI: Well, observe that $y-x$ for $x>0$ are almost in arithmetical progression: $0, 3, 4.5, 6, 9, 12, 15, 30$. So we construct Lagrange polynomials for $y=y(t)$, $x=x(t)$ where $t=\frac{2}{3}(y-x)$: $$x=t(t - 1),\ y=t(t + \frac{1}{2}).$$ We might leave this as an answer, but let's obtain $y(x)$. $$t^2-t-x=0$$ $$t = \frac12 \left(1 \pm \sqrt{4 x + 1}\right)$$ $$t=\frac23(y-x)\Rightarrow y=\frac32 t+x$$ $$y=\frac34 \left(1 \pm \sqrt{4 x + 1}\right)+x$$ Now let's resolve the $\pm$. $$\begin{array}{|l|c|c|c|c|c|c|c|c|} \hline x&0&2&6&12&30&56&90&380\\ \hline y_1&0&1/2&3&15/2&45/2&91/2&153/2&703/2\\ \hline y_2&3/2&5&21/2&18&39&68&105&410\\ \hline \end{array}$$ $$\hbox{So }y(x)=\begin{cases} \frac34 \left(1 + \sqrt{4 |x| + 1}\right)+|x|,&\hbox{for }x\ne 0,\\ 0,&\hbox{for }x= 0. \end{cases}$$ Edit: python script for computing the Lagrange polynomials

from sympy import *
R = Rational
from sympy.abc import x

xs = [0, 2, 6, 12, 30, 56, 90, 380]
ys = [0, 5, 10 + R(5, 10), 18, 39, 68, 105, 410]

def lp(xs, ys):
    # Lagrange interpolation: sum of y_j * prod_{k != j} (x - x_k) / (x_j - x_k)
    monomes = [(x - R(i)) for i in xs]
    lps = [prod(monomes[:i] + monomes[i+1:]) for i in range(len(xs))]
    return simplify(sum(R(j) * f / f.subs({x: R(i)}) for i, j, f in zip(xs, ys, lps)))

xs_ = [(i - j) / R(3, 2) for i, j in zip(ys, xs)]
print(lp(xs_, xs), lp(xs_, ys))

And its output: x*(x - 1) x*(x + 1/2) About $y-x$: observing $y-x$ is rather artificial, and it is, but once you have obtained it, you see that every $y-x$ is divisible by $3$, so it's natural to divide by $3$. Also there's a $1.5$, so it's natural to multiply by $2$ to get integer values. That's why $\frac 23$.
H: Overloading binary operation symbols In computer science I'm used to using overloaded operators. Is this also valid in mathematical notation? Concretely, I have the following example: Definition: Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two graphs. We define $\phi = \{(v,w) \in V_1 \times V_2 | v \text{ is mapped to } w \}$ be a mapping between vertices from these graphs, called an FU instance. We can say that a vertex $v_i$ is part of an FU instance $\phi$ iff: $\exists (v_i,x) \in \phi \lor \exists (x,v_i) \in \phi$; short-hand notation: $v \in \phi$. Usage in question: Let $G=(V_G,E_G)$ be a graph, and $\Phi_G$ be a set of FU instances. We define: $V^*_G = \{ v \in V_G | n_v > 1\}$ where $ n_v = \#\{ \phi \in \Phi_G | v \in \phi \}$. Is it appropriate to overload $\in$ in one statement, like that? In the case of $\phi \in \Phi_G$ it's the canonical set membership relation, while in $v \in \phi$ its my self-defined use. If not, I can easily define my shorthand using a different symbol, but I feel this would make things harder to parse? AI: I think it's inappropriate, because $v\in \phi$ already has a meaning. I would define a new object, say $$\phi^*=\{v\in V_1|\exists w\in V_2, (v,w)\in\phi\}\cup\{v\in V_2|\exists w\in V_1,(w,v)\in\phi\}$$ and then write $v\in \phi^*$.
H: Order of a subgroup $H$ and $\langle H,b\rangle$ Let $G$ be a finite abelian group, and $H$ a subgroup. Suppose $H$ contains an element $a$ where there is some $b \in G$ with $a \in \langle b \rangle$ and $|b|/|a| = p$, some prime $p$. Do we necessarily have $[\langle H,b\rangle : H] = 1 \text{ or } p$ ? Note I can apply something similar to second isomorphism to establish: $| H\langle b \rangle | / |H|$ divides $p$, but here $H\langle b\rangle \subset \langle H , b\rangle$, so that didn't quite get me there. Also, do we have some generalities to the order of $\langle H,x\rangle$ for any $x\in G$? Thanks for the inputs! Edited: Added condition $a \in \langle b \rangle$ AI: You're basically there. The step you are missing is: $G$ is abelian, so $H\langle b\rangle=\langle H, b\rangle$.
H: Basic graph theory proof verification Let $G(V, E)$ be an undirected, finite, simple graph. Let $\bar{G}(V, E')$ be a simple graph on the same vertices, where $e \in E'$ iff $e \notin E$. Prove that if $G$ is not a connected graph, then $\bar{G}$ is a connected graph. This is my idea: Let $v_1, v_2$ $(v_1 \neq v_2)$ be vertices in $V$, and assume there is no route $v_1 \overset{e_1}{\rightarrow} ... \overset{e_m}{\rightarrow} v_2$ with $e_i \in E'$. Therefore in particular $(v_1, v_2) \notin E'$, therefore $(v_1, v_2) \in E$. This is true for every $(v_1, v_2)$, in contradiction to $G$ not being connected. Is this ok? I feel like there is something wrong here... *I translated it, so if anything is not clear please tell me. AI: Your proof is not correct. In the proof by contradiction you started by letting $v_1$ and $v_2$ be two vertices such that there is no path connecting them in $\bar{G}$, which shows that $v_1$ and $v_2$ are adjacent in $G$. This does not prove that every pair $v_1$ and $v_2$ is adjacent in $G$.
H: Advantage of fast Fourier transform in programming Someone asked me about the advantage(s) of fast Fourier Transform in civil engineering programs?! or What is the application of the Fourier series in engineering programming? Can you help me or give me a clue? I've been searching google, but do not find an explanation. I was looking for an example to find out what's happening in the algorithm. Remark: I know that the Fourier Transform is a function. Fast Fourier Transform is an algorithm. AI: It would help to explain what "fast" means. Suppose I compute an FT with power-of-two length $N=2^m$. What's the arithmetic complexity? I'll consider $1$-dimensional FTs for simplicity. We know FFT algorithms that get it as low as $O(N\log N)$, which you can prove by divide & conquer; whether we can beat that is an open question. Without FFT techniques, a DFT takes $O(N^2)$ time.
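To make the comparison concrete, here is a minimal recursive radix-2 FFT next to the naive $O(N^2)$ DFT (an illustrative Python sketch for power-of-two $N$, not an optimized implementation):

import cmath

def dft(x):  # O(N^2): N output bins, each a sum of N terms
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):  # O(N log N) by divide and conquer; assumes len(x) is a power of two
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return [even[k] + tw[k] for k in range(N // 2)] + \
           [even[k] - tw[k] for k in range(N // 2)]

x = [0.0, 1.0, 2.0, 3.0]
print(all(abs(a - b) < 1e-9 for a, b in zip(dft(x), fft(x))))  # True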
H: Pigeonhole Principle Proof and Existence So, I’m going through a textbook on combinatorics, and I came across this exercise question. Let $n$ be odd, and suppose $(x_1, x_2, \dots, x_n)$ is a permutation of $[n].$ Prove that the product of $(x_1-1)(x_2-2) \cdots (x_n-n)$ is even. So far, I have this: in order for the product to be even, we need to have an even number of odd integers $x_i$ and an odd number of even integers $x_j-j$. But neither do I think this helps nor do I see a way of tying it up to arrive at a proof. Furthermore, this section of the chapter involves the Pigeonhole Principle, so I’m sure the author wants us to incorporate that into each proof, but I can’t seem to do this either. Any help would be much appreciated. :) Thanks in advance. AI: Your pigeons are the odd $x_i$, your holes are the even $i$.
H: Gradient and laplacian of a function defined on Riemannian manifold in local coordinates. I was trying to derive an expression of the gradient of a riemannian manifold. Let $M$ be a Riemannian manifold of dimension $n$ and $f : M \to \mathbb{R}$ and let's define $grad f(p) : M \to\mathcal{X}(M)$ a vector field such that $$ \langle grad \;f, v \rangle(p) = df_p(v) $$ I want to derive an expression in terms of the metric for such gradient, here $d f_p$ represents the differential 1-form. Here is my attempt to work out such formula. Let $ B = \left\{ \frac{\partial}{\partial x^i}(p)\right\}_{i=1,\ldots,n} $ be the basis of $T_p M$ in local coordinates, for each $p \in M$ therefore we want to represent $grad \; f(p)$ as $$ grad \; f(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x^i}(p), $$ so the goal is to find the coefficents $a_i(p)$. The gradient map is linear by definition, so in order to be determined we need to apply it to the basis $B$. Doing so we get $$ \langle grad \; f, \frac{\partial}{\partial x^j} \rangle(p) = \sum_{i=1}^n a_i(p) \langle \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \rangle (p). $$ By definition of the gradient operator we actually have for the lhs $$ \langle grad \; f, \frac{\partial}{\partial x^j} \rangle = df_p \left( \frac{\partial}{\partial x^j} \right) = \frac{\partial f}{\partial x^j}(p), $$ while by definition of Riemannian metric we have for the rhs $$ \sum_{i=1}^n a_i(p) \langle \frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j} \rangle (p) = \sum_{i=1}^n a_i(p) g_{ij}(p) $$ Therefore the gradient operator is fully determined if we solve the linear system $$ \frac{\partial f}{\partial x^j}(p) = \sum_{i=1}^n a_i(p) g_{ij}(p) \;\;\; i = 1,\ldots,n\Leftrightarrow a_i(p) = \sum_{j=1}^n g^{ij}(p) \frac{\partial f}{\partial x^j}(p) $$ where with $g^{ij}(p)$ I denote the element of the inverse of the metric tensor, which exists since it's SPD by definition. Therefore I endup with the expression $$ grad \; f(p) = \sum_{i,j=1}^n g^{ij}(p) \frac{\partial f}{\partial x^j}(p) \frac{\partial}{\partial x^i}(p) $$ Is this expression correct? I was also trying to derive an expression for the laplacian but I don't know where to start at the moment, can you give me a clue maybe? AI: Yes, this expression is correct. Your argument is actually much more general: you have derived identification the metric provides between $T_p^*M$ and $T_pM$, via $\theta \mapsto \langle \theta, \cdot \rangle$, and then merely plugged in the coordinate expression for $df$. One way to compute a coordinate expression for the Laplace operator $\Delta$ is to use the characterization $\Delta = \operatorname{div}\operatorname{grad}$. To follow this approach, I'd start by computing a coordinate expression for the divergence operator.
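Following that hint, the standard local-coordinate formula one arrives at (a known identity, stated here as a sketch rather than derived) uses the divergence $\operatorname{div} X = \frac{1}{\sqrt{\det g}}\sum_{i} \frac{\partial}{\partial x^i}\left(\sqrt{\det g}\, X^i\right)$ of a vector field $X=\sum_i X^i \frac{\partial}{\partial x^i}$; combining it with the gradient expression above gives $$ \Delta f = \operatorname{div}\, \operatorname{grad} f = \frac{1}{\sqrt{\det g}} \sum_{i,j=1}^n \frac{\partial}{\partial x^i}\left(\sqrt{\det g}\; g^{ij}\, \frac{\partial f}{\partial x^j}\right). $$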
H: conditional probability problem with two random events This is the problem: In police station 1 there are 3 cars of type A and 8 of type B. In police station 2 there are 5 of type A and 2 of type B. In each station one of the cars is randomly chosen and damaged by an outsider (a damaged car cannot move). Some event happens and both stations' cars go out to get the criminal. Randomly one of the cars catches the criminal. What is the probability that this car is of type A? My first approach to this problem was to use the Law of Total Probability and find the chance of the car being of type A. However two cars of the total 18 cars are damaged in two separate random events and I couldn't add that to my calculations. I also tried some other methods but I just got more confused with the conditional logic of the problem. Thanks for your attention AI: After the damages, station 1 has 10 cars and station 2 has 6 cars. So the probability that a car from station 1 will catch the criminal is $\tfrac{10}{10+6}$. Conditional on that event, the probability that the car that did it was a type A car is $\tfrac{3}{11}$. The same reasoning applies if it was a car from station 2: $$\Pr(A\,car\,catches)=\tfrac{10}{16}\tfrac{3}{11}+\tfrac{6}{16}\tfrac{5}{7}=\tfrac{135}{308}$$ Wait, what about the damaged cars? Good question, they don't matter. You can make a complete calculation based on the law of total probability or use combinatorics. Line up the 11 cars of station 1 in a row. The first one is damaged and the second one is the one that catches the criminal (reminder: all conditioned on the event that station 1 catches him). Essentially, the question is: given 11 cars, what is the probability that a type A car will be in the second place in the row? This probability is the same as for any place in the row, just like when drawing a card from the deck: it will be an ace with the same probability whether you take the top one, the second one, the bottom one or whatever.
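An exhaustive check of $\tfrac{135}{308}$, enumerating the damaged car and the catching car at each station (a sketch, not part of the answer; it follows the answer's weighting of the stations by post-damage fleet size):

from fractions import Fraction

def p_type_A(cars):
    """P(catching car is type 'A' | this station catches), enumerating the damage."""
    n = len(cars)
    total = Fraction(0)
    for d in range(n):                     # damaged car, uniform over the n cars
        rest = cars[:d] + cars[d + 1:]
        for c in rest:                     # catching car, uniform over the rest
            total += Fraction(1, n * (n - 1)) * (c == 'A')
    return total

s1 = ['A'] * 3 + ['B'] * 8                 # station 1 (10 cars remain after damage)
s2 = ['A'] * 5 + ['B'] * 2                 # station 2 (6 cars remain after damage)
print(Fraction(10, 16) * p_type_A(s1) + Fraction(6, 16) * p_type_A(s2))  # 135/308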
H: find explicit expression for the function $f(x)= \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)(x+1)^{2n}}$ I got this in one of my assignments: Let $$\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)(x+1)^{2n}}$$ (a) find the domain of convergence (b) let $\alpha=\arctan(\frac{1}{2})$, consider the function defined by $$f(x)=\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)(x+1)^{2n}}$$ for every $x$ in the domain of convergence. find an explicit expression for $f(1)$ as a function of $\alpha$ So I find this very confusing. I found the domain of convergence of (a) to be $x\le -2$ or $x\ge 0$, but obviously this is not a power series, as the domain of convergence isn't symmetric and the powers are negative since $x$ is in the denominator. How am I supposed to approach (b)? If this isn't a power series I can't use term-by-term integration/differentiation... also I don't understand how to get $\alpha$ into this. I know the power series of $\arctan(x)$ but I don't know how to make it relevant to this question; this is very confusing... Any help would be appreciated AI: Due to the Newton-Gregory series, we have $$\tan^{-1}z=\sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{2n+1}, \quad |z|\le 1.$$ So the required series is $$f(x)=\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1) (1+x)^{2n}}=(1+x)\tan^{-1}\frac{1}{1+x}.$$ So $f(1)=2\tan^{-1}(1/2)=2\alpha.$
H: Unitary operators and representations of von Neumann algebras I found the following assertion and I would like to know why it holds: Let $\pi$ be a representation of a von Neumann algebra such that $\pi=\pi_1 \oplus \pi_2$ where $\pi_1,\pi_2$ are irreducible representations, and let $\{x_1,x_2\}$ be linearly independent. Now by theorem 1 from here I know there is a self-adjoint operator $B$ such that $Bx_i=(-1)^ix_i \ \ (i=1,2)$, and by exponentiating a suitable operator we can find a unitary operator $A$ such that $Ax_i=(-1)^ix_i \ \ (i=1,2)$. My question is whether I can find a unitary element $a$ of the algebra such that $\pi(a)x_i=(-1)^ix_i \ \ (i=1,2)$. In other words, can any such unitary operator be written as $\pi(a)$ where $a$ is a unitary element of the algebra? Note that $A$ is also a unitary operator in $\pi(M)$ (where $M$ is the algebra). AI: If I understood your question correctly then the answer is yes only in case $\sigma(A)\neq S_1$ (that is, the unit circle in $\mathbb{C}$). Indeed in that case we can find a continuous function $f$ on $\sigma(A)\subsetneq S_1$ such that $f(A)$ is self-adjoint and $e^{if(A)}=A$ (try to figure out what $f$ is by yourself). Notice that $f(A)\in \pi(M)$, so if $f(A)=\pi(X)$ for $X\in M$ we have $f(A)=\pi(Y)$ where $Y=\frac{1}{2}(X+X^*)$ is a self-adjoint element. So by taking the unitary element $a=e^{iY}$ we end up with $\pi(a) = \pi(e^{iY}) = e^{i\pi(Y)} = e^{if(A)} = A$ as wanted.
H: (c) Find the area contained between the curve, the y-axis, the line t = 1 and the asymptote to the curve which is parallel to the t-axis. Parts (a) and (b) are fine, and I believe (c) is an integral, but I'm not quite sure how to go about solving said integral with the given parameters. Mainly the vertical limits, as I'm sure the horizontal limits are just 0 and 1. AI: Here, it's given that $$f(t) = \frac{t^2 + 3t + 3}{(t+1)^2} = 1 + \frac{t+2}{(t+1)^2}$$ Now, as $t \to \infty$, $f(t) \to 1$, since the quadratic denominator of the remainder term grows faster than its linear numerator. Hence, you need to find the area under the curve, but above $f(t)=1$, with $0 \leq t\leq 1$. Now, $f(1) = 7/4 > 1$ and $f$ is monotonically decreasing (check with the derivative). Hence $$A= \int_0^1f(t)\,dt - 1$$
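A sympy check of the exact value (a sketch; it assumes the region is the one between the curve and the asymptote $y=1$ for $0 \le t \le 1$):

from sympy import symbols, integrate, simplify

t = symbols('t')
f = (t**2 + 3*t + 3) / (t + 1)**2
print(simplify(integrate(f - 1, (t, 0, 1))))  # 1/2 + log(2)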
H: Binomial Expansion Of $\frac{24}{(x-4)(x+3)}$ Can somebody help me expand $\frac{24}{(x-4)(x+3)}$ by splitting it in partial fractions first and then using the general binomial theorem? This is what I've done so far: $$\frac{24}{(x-4)(x+3)}$$ $$=\frac{24}{7(x-4)}-\frac{24}{7(x+3)}$$ Now I know I have to find the binomial expansion for this; I just don't know how. Can anyone help me with this? AI: $\dfrac{24}{7(x-4)}=\dfrac{-6}{7\left(1-\frac x4\right)}=-\dfrac67\left(1+\frac x4+(\frac x4)^2+(\frac x4)^3+\cdots\right)$ $\dfrac{24}{7(x-(-3))}=\dfrac{8}{7\left(1-\left(-\frac x3\right)\right)}=\dfrac87\left(1-\frac x3+(\frac x3)^2-(\frac x3)^3+\cdots\right)$ $\therefore\dfrac{24}{(x-4)(x+3)}=-2+\dfrac16x-\dfrac{13}{72}x^2+\dfrac{25}{864}x^3+\cdots$
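A quick series check with sympy (not part of the answer above):

from sympy import symbols, series

x = symbols('x')
print(series(24 / ((x - 4) * (x + 3)), x, 0, 4))
# -2 + x/6 - 13*x**2/72 + 25*x**3/864 + O(x**4)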
H: How to prove a norm identity for a Banach space and its dual Is the following claim true? It feels like it should be true, but I don't really know how to show it. Let $X$ be a Banach space, and $x \in X$ an element of it. Then there exists a functional $\phi \in X^*$ such that $\| \phi \| = 1$ and $\| x \| = | \phi(x) |$. If I'm not mistaken, it would suffice to say that there exists a sequence $(\phi_k)_{k = 1}^\infty$ of unit functionals for which $| \phi_k (x) | \to \| x \|$, since the unit ball in $X^*$ is compact in the weak-$*$ topology. However, I don't know how to prove the former result. EDIT: I forgot to actually define $\psi$ as $\psi(\lambda x) = \lambda$. My intuition is that I should be able to invoke Hahn-Banach and define a linear function $\psi$ on $\mathbb{C} x \subseteq X$ bounded by the norm $\rho(x) = \| x \|$ on $X$, then extend it from $\mathbb{C} x$ to all of $X$. Is this a correct application of Hahn-Banach? AI: You don't need completeness of $X$. This is true in normed spaces. Your last idea is a good one: We may assume $x \neq 0$. Define the functional $$\varphi: \Bbb{C}x \to \Bbb{C}: \lambda x \mapsto \lambda \Vert x \Vert$$ Then it is easily checked that $\Vert \varphi \Vert =1$ (the inequality $\leq$ is obvious, and then note that $\varphi(x/\Vert x \Vert) = 1$ so also $\Vert \varphi\Vert \geq 1$). By Hahn-Banach, we can extend to a functional $\tilde{\varphi}: X \to \Bbb{C}$ with $\Vert \tilde{\varphi} \Vert =1$ and this is the functional you are looking for.
H: area inside the curve $\phi(t)=(a(2\cos(t)-\cos(2t)),a(2\sin(t)-\sin(2t)))$ I tried using $$(1)\;A=\int_0^{2\pi}x(t)y'(t)\,dt=\int_0^{2\pi}a(2\cos(t)-\cos(2t))a(2\cos(t)-2\cos(2t))\,dt=6\pi a^2$$ and $$(2)\;A=\frac{1}{2}\int_0^{2\pi}r^2(t)\,dt=\frac{1}{2}\int_0^{2\pi}(a(2\cos(t)-\cos(2t)))^2+(a(2\sin(t)-\sin(2t)))^2\,dt=5\pi a^2$$ but I get two different answers. Shouldn't they be the same? This is a plot: blue is $r(t)=\sqrt{x^2(t)+y^2(t)}$ (polar coordinates) and red is $\phi(t)=(x(t),y(t))$. AI: I think you have confused $t$, the parameter for your curve, and $\theta$, the independent variable depicting the angle a point makes with the x-axis. In terms of the parameter, $$r(t) = \sqrt{x(t)^2 + y(t)^2}, \qquad \theta(t) = \arctan\left(\frac{y(t)}{x(t)}\right).$$ The polar area formula $A=\frac12\int r^2\,d\theta$ integrates against $\theta$, not $t$; in the parameter $t$ it reads $$A=\frac{1}{2}\int_0^{2\pi}r(t)^2\,\theta'(t)\,dt,$$ which for a closed curve agrees with your formula (1). Your formula (2) omits the factor $\theta'(t)$, i.e. it treats $t$ as if it were the polar angle, which it is not; that is why the two answers disagree. The correct area is $6\pi a^2$.
H: Let $G$ be a group with a free subgroup of rank $2$. Let $H\leq G$ be such that $[G:H]<\infty$. Then $H$ also contains a free subgroup of rank $2$. I am having difficulties in solving the following problem. Let $G$ be a group with a free subgroup of rank $2$. Let $H\leq G$ be such that $[G:H]<\infty$. Then $H$ also contains a free subgroup of rank $2$. We know by Nielsen-Schreier theorem that a subgroup of a free group is also free. But in this problem $G$ is not necessarily free but contains a free subgroup. How to approach this problem? Any hint or idea will be highly appreciated. Thanks in anticipation. AI: Let $F$ be the free subgroup of rank $2$ in $G$. Then $|G:H|$ finite implies that $k := |F:H \cap F|$ is also finite, and by the Nielsen-Schreier Theorem $H \cap F$ is free of rank $k+1$. So $H \cap F$ and hence also $H$ contains a free subgroup of rank $2$.
H: What's with this strange sequence? We have the sequence: $$V_n=\frac{n(n+1)}{2V_{n-1}}\text{ with } V_1 = 1$$ This sequence appears really similar to this sequence: $$ a_n = \begin{cases} n & \text{if }n\text{ odd}\\ \frac{n}{2} & \text{if }n\text{ even} \end{cases} $$ in that $a_{n+1}=V_{n}$ when $n\ge1$. How is this possible? And it also seems that $$a_n = \frac{2n}{3+(-1)^n}$$ Is this always true? Thanks for the help! Well, it's easy to prove by induction. So is there a way to find this link between the two sequences without knowing the formula for $a_n$? AI: It’s easier to see why it happens if you rewrite the recurrence as $$\frac{V_n}{n+1}=\frac12\cdot\frac{n}{V_{n-1}}$$ and notice that a simple change of variable makes it even simpler. Let $x_n=\frac{V_n}{n+1}$; then $$x_n=\frac1{2x_{n-1}}=\frac1{2\cdot\frac1{2x_{n-2}}}=x_{n-2}\;.$$ Thus, the sequence $\langle x_n:n\ge 1\rangle$ is periodic with period $2$: it alternates between two values, $x_1=\frac12$ and $x_2=1$. Now just substitute back: $V_n=(n+1)x_n$, so $$V_n=\begin{cases} \frac{n+1}2,&\text{if }n\text{ is odd}\\ n+1,&\text{if }n\text{ is even.} \end{cases}$$
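A quick exact check of the closed form (a minimal sketch using Python's Fraction):

from fractions import Fraction

V = Fraction(1)  # V_1 = 1
for n in range(2, 12):
    V = Fraction(n * (n + 1), 2) / V  # V_n = n(n+1)/(2 V_{n-1})
    expected = Fraction(n + 1, 2) if n % 2 else Fraction(n + 1)
    assert V == expected
print("V_n matches the closed form for n = 2..11")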
H: why can we factor a polynomial using its solutions Can someone please explain why we are able to factor an $n$ degree polynomial function using only it roots? What I mean is this: Lets say we have a function defined like so: $$f(x) = ax^4 + bx^3 +\dots$$ It can supposedly be factored like so: $$f(x) = a(x−p)(x−q)(x−r)\dots$$ Where $p, q, r$ etc. are the solutions of the function being equal to $0$. Is there a simple proof for why this is valid, and where does the coefficient $a$ in the factored form come from? (I don't want some lame answer for $a$ like :"if $a$ wasn't there the factored form wouldn't equal the original form" AI: Consider your polynomial $p(x)$, with zeros $z_1, z_2, \ldots, z_n$. Take: $\begin{align*} p(x) &= q(x) (x - z_i) + r(x) \end{align*}$ (plain polynomial division, $q$ is quotient, $r$ remainder). You know that the degree of $r$ must be less than the degree of $x - z_i$, i.e., it is a constant. Now: $\begin{align*} p(z_i) &= q(z_i) \cdot 0 + r(z_i) \end{align*}$ so you see that $r(z_i) = 0$, but $r(x)$ is a constant. Thus you conclude: $\begin{align*} p(x) &= q(x) (x - z_i) \\ &\vdots \\ &= a (x - z_1) (x - z_2) \dotsm (x - z_n) \end{align*}$ The $a$ is just the leading coefficient of $p(x)$, the coefficient of the highest power of $x$ (if you multiply out the rest, the leading coefficient is 1, a monic polynomial).
H: Perform NSGA 2 without variables I have a data set with two columns. The variable names are cost 1 and cost 2. I want to minimize both cost 1 and cost 2 using the Pareto optimization method. So, while implementing NSGA II I have two objective functions, i.e. the cost 1 values and the cost 2 values, but I don't have any decision variables, since my data set already contains the cost values, which were calculated from the variable values (see the data set below). So, what I want is to find the pairs of cost values [cost 1, cost 2] which lie on the Pareto front. Can I perform NSGA II for this scenario? data = np.array([[97, 23], [55, 77], [34, 76], [80, 60], [99, 4], [81, 5], [ 5, 81], [30, 79], [15, 80], [70, 65], [90, 40], [40, 30], [30, 40], [20, 60], [60, 50], [20, 20], [30, 1], [60, 40], [70, 25], [44, 62], [55, 55], [55, 10], [15, 45], [83, 22], [76, 46], [56, 32], [45, 55], [10, 70], [10, 30], [79, 50]]) AI: In this case, since you do not have integer/float 'decision variables', I do not see the point of using NSGA II. Your case is much simpler. From the question, I gather that you have all the discrete 'options' as given and their attributed costs with respect to two objectives. In this case, in fact, by implementing a rather simple function you can determine whether an option is 'Pareto dominated' by another option, and then use this function to find the non-dominated ones (a sketch of such a filter follows below). The non-dominated options will be the Pareto frontier, i.e. the Pareto optimal options. For gaining a better insight you can visualize these points on a 2D plot, noting that each of them represents an option; you will then see that the ones in whose lower-left quadrant there does not exist any other point are the Pareto (cost-minimal) optimal solutions. A solution/option is said to be Pareto-dominated by another if the other is at least as good (at most as costly, in this case) w.r.t. all objectives and strictly better in at least one.
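A minimal sketch of such a dominance filter (the function name is mine; it assumes the data array from the question and minimization of both costs):

import numpy as np

def pareto_front(points):
    # keep a point unless some other point is <= in both costs
    # and strictly < in at least one, i.e. unless it is dominated
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(p)
    return np.array(front)

print(pareto_front(data))
# for the data above: [5, 81], [20, 20], [30, 1], [10, 30]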
H: Why is $\text{Gal}(K/\mathbb{Q}) \cong G_{\mathbb{Q}}/{\{\sigma \in G_{\mathbb{Q}}: \ \sigma|_K=id_K \}}$? Here, in page $1$, the absolute Galois group is defined by $$G_{\mathbb{Q}}:=\text{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})=\{\sigma: \bar{\mathbb{Q} }\to \bar{\mathbb{Q}}, \ \text{field automorphism} \}$$ is a profinite group. Then the article defines for any Galois extension $K$ of $\mathbb{Q}$, the Galois group by $$\text{Gal}(K/\mathbb{Q}) \cong G_{\mathbb{Q}}/{\{\sigma \in G_{\mathbb{Q}}: \ \sigma|_K=id_K \}}$$ to be the quotient group. My question- Why is $\text{Gal}(K/\mathbb{Q}) \cong G_{\mathbb{Q}}/{\{\sigma \in G_{\mathbb{Q}}: \ \sigma|_K=id_K \}}$ ? Because we know by definition of Galois extension $\text{Gal}(K/\mathbb{Q}) = \{\sigma \in \text{Aut}(K): \ \sigma(a)=a, \ \forall a \in \mathbb{Q} \}$. So the question- How to see the relation $ \{\sigma \in \text{Aut}(K): \ \sigma(a)=a, \ \forall a \in \mathbb{Q} \} \cong G_{\mathbb{Q}}/{\{\sigma \in G_{\mathbb{Q}}: \ \sigma|_K=id_K \}}$ ? How to see the isomorphism ? AI: There is a map $$ G_\mathbb{Q} \to \text{Gal}(K/\mathbb{Q}) $$ given by restriction. It is plain that the kernel of this map is $$ \{ \sigma \in G_\mathbb{Q} : \sigma|_K = \text{id}_K\}, $$ and so you need to know that this map is surjective, i.e. that every automorphism of $K$ lifts to an automorphism of the algebraic closure. This is one of the more difficult lemmas of Galois theory; see a proof here for instance: Extension of field automorphism to automorphism of algebraic closure
H: Limit as a Function on Sequence Spaces The question is motivated by the following posts: A and B. Let $X$ be a metric space (probably only Hausdorff is needed but I'm being safe) and let $X_0$ be the subspace of the sequence space $X^{\mathbb{N}}$ (equipped with the product topology) whose elements $(x_n)_{n \in \mathbb{N}}$ satisfy $\lim\limits_{n \to \infty} x_n \in X$. Let $\operatorname{Lim}$ denote the map from $X_0$ to $X$ taking a convergent sequence $(x_n)_{n \in \mathbb{N}}$ to $X$. When is this function continuous, is this object studied (in this setup), and if so what are some references? AI: Even in rather nice cases the map $\operatorname{Lim}$ need not be continuous. Let $X=\Bbb R$ with the usual topology. For each $n\in\Bbb N$ define a sequence $x^{(n)}=\langle x_k^{(n)}:k\in\Bbb N\rangle\in\Bbb R^{\Bbb N}$ as follows: $$x_k^{(n)}=\begin{cases} 1,&\text{if }k\le n\\ 0,&\text{otherwise.} \end{cases}$$ Clearly each of these sequences converges to $0$. However, the sequence $\langle x^{(n)}:n\in\Bbb N\rangle$ converges to $\langle 1,1,1,\ldots\rangle$ in $\Bbb R^{\Bbb N}$, the sequence that is constant at $1$.
H: homology groups of a torus with a disk glued to it in a certain way I am trying to study for my qualifying exams and I was trying to solve this problem. The idea is to form a topological space $X$ by attaching a disk $D^2$ along its boundary to the torus $T^2$ so that the boundary is attached to a loop representing the homology class $4[\alpha]-2[\beta]$ in $T^2$. We need to calculate the homology groups of $X$. My approach: I used the Mayer–Vietoris sequence, taking $A$ and $B$ to be neighborhoods of $T^2$ and $D^2$ respectively. Then $A \cap B$ is homotopy equivalent to the circle $S^1$. I have used the reduced Mayer–Vietoris sequence. Here's my problem: we need the map $h: H_1(A \cap B) \to H_1(A)\oplus H_1(B)$. I know that $h([\gamma])=4[\alpha]-2[\beta] + 0$. I think $h$ is injective. Alternatively, does anyone have an idea how to solve this using cellular homology? I would appreciate both methods so I can compare them. AI: $B$ is contractible, so you have $H_1(B)=H_2(B)=0$. You also have $H_2(A\cap B)=0$ since $A\cap B$ is homotopy equivalent to a circle. So you have an exact sequence $$H_1(A\cap B)\stackrel{h}\to H_1(A)\to H_1(X)\to H_0(A\cap B)\stackrel{k}\to H_0(A)\oplus H_0(B).$$ The map $k$ is injective, and effectively $h$ is the map $\Bbb Z\to\Bbb Z^2$ taking $1$ to $(4,-2)$. Therefore $$H_1(X)\cong\frac{\Bbb Z^2}{\{(4a,-2a):a\in\Bbb Z\}}\cong \Bbb Z\oplus\frac{\Bbb Z}{2\Bbb Z}.$$ Another stretch of the exact sequence is $$0\to H_2(A)\to H_2(X)\to H_1(A\cap B)\stackrel{h}\to H_1(A)$$ and as $h$ is injective, $$H_2(X)\cong H_2(A)\cong\Bbb Z.$$
H: Inverse function in $\mathbb R^2$ How do I find the inverse function of $f: X\to Y$ where $X,Y$ both are subsets of $\mathbb R^2$ and $f$ is defined as $f(x,y)=(x+y,x-y)$. AI: Hint:$$\underbrace{\begin{bmatrix}x \\y \end{bmatrix}}_f\mapsto\begin{bmatrix}1 &1 \\1 & -1 \end{bmatrix}\begin{bmatrix}x \\y \end{bmatrix}$$ find inverse of matrix to find $f^{-1}$
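Carrying the hint through: the matrix has determinant $-2$, and $$\begin{bmatrix}1 &1 \\1 & -1 \end{bmatrix}^{-1}=\frac{1}{2}\begin{bmatrix}1 &1 \\1 & -1 \end{bmatrix},\qquad\text{so}\qquad f^{-1}(u,v)=\left(\frac{u+v}{2},\,\frac{u-v}{2}\right),$$ as one can also verify directly from $u=x+y$, $v=x-y$.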
H: A surprisingly simple determinant Let $a_k^{(n)}$ be the $n$-vector whose components are the first $n$ non-null coefficients of the Taylor expansion of $\sin(k x)$ around $0$. Define the matrix $A^{(n)}$ as the matrix whose rows are the vectors $a_1^{(n)};a_2^{(n)}\dots a_n^{(n)}$, i.e. the $k$-th row of $A^{(n)}$ is the vector $a_k^{(n)}$. Then $A^{(n)}$ has determinant given by $$\det A^{(n)} = (-1)^{\lfloor n/2\rfloor}\,,$$ i.e. $\det A^{(1)} = 1$, $\det A^{(2)} = -1$, $\det A^{(3)} = -1$, $\det A^{(4)} = 1$, $\det A^{(5)} = 1$ and so on. To clarify, the first matrices are $$A^{(1)}=\begin{pmatrix}1\end{pmatrix}\,;$$ $$A^{(2)}=\begin{pmatrix}1 & -\frac{1}{6}\\2 & -\frac{4}{3}\end{pmatrix}\,;$$ $$A^{(3)}=\begin{pmatrix}1 & -\frac{1}{6} & \frac{1}{120}\\2 & -\frac{4}{3} & \frac{4}{15}\\ 3&-\frac{9}{2} & \frac{81}{40}\end{pmatrix}\,\dots$$ I guess there must be a simple explanation for such an easy determinant and a smart way to prove it. Any idea? I came across this problem when thinking about some possible answer to a question asking for some intuitive way behind the fact that the trigonometric functions form a basis. My idea was that one may show that the monomials $x^{2k+1}$ can be found by eliminating term by term the unnecessary terms from the Taylor expansion of $\sin$. So I started by looking at the truncated Taylor series and I found these matrices, whose inverses give the coefficients expressing the monomials as linear combinations of the truncated expansions of $\sin(kx)$. My idea is that actually the fact that the determinant is $\pm1$ may be somehow linked to the fact that the Fourier transform is an isometry, and to the extension of this fact to subspaces of $L^2$ of polynomials of finite degree $<n$. But maybe I'm hallucinating... AI: Note that $$A^{(3)}=\pmatrix{1&1&1\\2&2^3&2^5\\3&3^3&3^5}\pmatrix{1&&\\&-1/6&\\&&1/120} =\pmatrix{1&&\\&2&\\&&3}\pmatrix{1&1&1\\1&2^2&2^4\\1&3^2&3^4}\pmatrix{1&&\\&-1/6&\\&&1/120}$$ a product of a Vandermonde matrix and two diagonal matrices. There's a well-known formula for the Vandermonde determinant. This pattern persists.
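The identity is also easy to verify exactly for small $n$; in the sketch below (assuming sympy), entry $(k,j)$ of $A^{(n)}$ is the coefficient $(-1)^j k^{2j+1}/(2j+1)!$ of $x^{2j+1}$ in $\sin(kx)$:

from math import factorial
from sympy import Matrix, Rational

def A(n):
    # row k-1, column j holds the coefficient of x^(2j+1) in sin(kx)
    return Matrix(n, n, lambda k, j: Rational((-1)**j * (k + 1)**(2*j + 1), factorial(2*j + 1)))

for n in range(1, 7):
    assert A(n).det() == (-1)**(n // 2)
print("det A(n) = (-1)^floor(n/2) for n = 1..6")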
H: Finding the Möbius transformation when $z= \infty$ The following is available: $ T(2i) = \infty $ $ T(0) = -i $ $ T(\infty) = i $ So I've got: $ \frac{a(2i)+b}{c(2i)+d} = \infty \Rightarrow d=-2ic $ $ \frac{b}{d}=i \Rightarrow b = -2c $ $ \frac{a \cdot \infty -2c}{c \cdot \infty + -2ic} = i $ how do I continue when infinity is the argument? AI: We begin with the Ansatz $$T(z)={az+b\over cz+d}\ ,$$ noting that the coefficients are only determined up to a common $\ne0$ factor, and that there are certain exception rules concerning $\infty$. Since $T(2i)=\infty$ we conclude that $c\cdot2i+d=0$, hence $d=-2i c$. We are now at $$T(z)={az+b\over c(z-2i)}\ .$$ This shows that $c\ne0$, and that we may as well assume $c=1$. We are now at $$T(z)={az+b\over z-2i}\ .$$ It follows that $$a=\lim_{z\to\infty}{az+b\over z-2i}=T(\infty)=i\ ,$$ so that we arrive at $$T(z)={iz+b\over z-2i}\ .$$ The condition $T(0)=-i$ then leads to $b=2$, so that we finally have $$T(z)={iz+2\over z-2i}\ .$$
H: Dual of the subspace of sequences with finitely many non-zero entries I found the following question in the book Introdução à Análise Funcional, by César R. Oliveira. Let $\mathcal{N}_p \subset \ell^p(\Bbb{N})$, $1 \le p \le \infty,$ be the subspace of all sequences with finitely many non-zero entries. Show that $\mathcal{N}^{\ast}_p=\ell^q(\Bbb{N})$, with $(1/p)+(1/q)=1.$ I think something is strange in this exercise. The definition of $\mathcal{N}_p$ is independent of $p$, since all sequences with finitely many non-zero entries belong to all $\ell^p$. In other words, $\mathcal{N}_p=\mathcal{N}_q$ for all $p,q$. But that would imply, for example, that $\ell^1=\mathcal{N}^{\ast}_{\infty}=\mathcal{N}^{\ast}_{1}=\ell^{\infty}$. Is my reasoning correct? Where did I make a mistake? If I am wrong and the exercise is correct, I would appreciate any hints to solve it; I am lost. EDIT: After posting, I realized my mistake. As pointed out in the answer below, the norms are different, making the $\mathcal{N}_p$ different. However, I still need help to prove the exercise. Thank you! AI: The elements of $\mathcal{N}_p$ itself indeed do not depend on $p$. However, when you consider the dual you have to specify which norm you put on the space, and that's where the $p$-dependency kicks in. As for the exercise itself: a bounded functional on a dense subspace extends uniquely, with the same norm, to the closure. For $1\le p<\infty$, $\mathcal{N}_p$ is dense in $\ell^p$, so $\mathcal{N}_p^{\ast}=(\ell^p)^{\ast}=\ell^q$. For $p=\infty$, the closure of $\mathcal{N}_\infty$ in the sup norm is $c_0$, and $c_0^{\ast}=\ell^1$, which again matches $q=1$.
H: Why is the left side negative on the number line? Yeah, it is probably a stupid question; if so, sorry about that. But I wonder why the left side is negative on the number line, and who proved that? AI: Copied from my comment: It's just convention. The convention being this way rather than the other way sort of makes sense in societies that write left to right, since then the quantity on the horizontal axis increases as you move from left to right, which is the natural way to read graphs etc. for people from such societies. But from the math standpoint it could just as easily be the other way.
H: Can we prove that in any ring $a+a=2a$? Before I get to the question itself I want to clarify a few things: Definition for a ring from my textbook (translated (not that well) to English and then shortened by yours truly,so please pardon any mistakes): Set $R$ is called a ring if it has two operations defined on it, which we will note as $+$ and $\times$, if both have the commutative and associative properties and they are tied together with distributive properties. $a+b=b+a$ and $ab=ba$ $a+(b+c)=(a+b)+c$ and $a(bc) = (ab)c$ $(a+b)c=ac+bc$ With that out of the way, I want to ask if it's possible to prove that in any ring $R$ for any $a$ from $R$, $a+a=2a$? If it isn't, then can we prove that for any $+$ and $\times$ defined on the set of natural numbers for any natural number $n$, $n+n=2n$? The reason I put my text book's definition of a ring is because I have looked through forums for similar questions already, but they define addition there after defining natural numbers with Peano axioms (things like $a+1=\mathrm{successor}(a)$ and similar things). I would like to know if there exists a proof where we don't define operator $+$ more than the definition above. AI: Generally $1 + 1 = 2$; there are rings of characteristic $2$ (like mod $2$ arithmetic), where $1 + 1 = 0$, but in these rings $2 = 0$ so you still kind of have $1 + 1 = 2$ in a sense. Anyway $2 a = (1 + 1) a = 1 a + 1 a = a + a$. And this always holds.
H: Ferris wheel Trig Question Question: Suppose you wanted to model a Ferris wheel using a sine function that took 60 seconds to complete one revolution. The Ferris wheel must start 0.5 m above ground. Provide an equation of such a sine function that will ensure that the Ferris wheel’s minimum height off the ground is 0.5 m. Explain your answer. Hey, I don't really know how to do this question; I mean, if we aren't given the maximum, how do we find an equation? I would appreciate it if you could help me get the answer. Thanks. AI: Let’s start with the standard sine function: $$f(t)=\sin t$$ If the radius of the wheel is $r$, then to adjust the amplitude, i.e. the farthest the wheel can go from its middle position, you need to multiply by $r$: $$f(t)=r\sin t $$ Now, you need to adjust the time period. Note, the period for $\sin (nx)$ is $\frac{2\pi}{n}$. Set $\frac{2\pi}{n} =60 \implies n=\frac{\pi}{30}$. $$f(t)=r\sin\left(\frac{\pi}{30}t\right) $$ To ensure that it attains its minimum value at $t=0$, shift the phase by $\frac{\pi}{2}$ (as a function of $t$, this is a shift of a quarter period, $15$ s): $$f(t)=r\sin\left(\frac{\pi}{30} t -\frac{\pi}{2} \right) $$ Now, you need to add a constant that will take care of the minimum height constraint. It is needed that $f(0) = 0.5$, so that constant is $r+0.5$. This gives the final equation: $$f(t) = r+0.5+r\sin\left(\frac{\pi}{30}t -\frac{\pi}{2} \right) $$ Note: Since $r$ is not given in the question, you might assume $r=1$.
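Equivalently, since $\sin\left(\theta-\frac{\pi}{2}\right)=-\cos\theta$, the model can be written without the phase shift as $$f(t) = r+0.5-r\cos\left(\frac{\pi}{30}t\right),$$ which makes both the minimum of $0.5$ at $t=0$ and the $60$-second period immediate.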
H: Trigonometric Equation $\cos^{2}{x} - \sin{x} = 0$ Hi, I tried everything I know with this equation, but I cannot solve it. $$\cos^{2}{x} - \sin{x} = 0$$ I know it has a solution because I made a graph and it cuts the $x$ axis. Do you have advice? AI: Turn this into a polynomial using the hint @AnginaSeng gave: $\cos^2 x - \sin x = 0$ $1 - \sin^2 x - \sin x = 0$ Now let $y = \sin x$: $1 - y^2 - y = 0$ This should be more solvable. Solve for $y$, then use $y$ to solve for $x$.
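Finishing the computation: $1 - y^2 - y = 0$, i.e. $y^2+y-1=0$, gives $y=\frac{-1\pm\sqrt{5}}{2}$, and only $y=\frac{\sqrt{5}-1}{2}\approx 0.618$ lies in $[-1,1]$. Hence $$x=\arcsin\left(\frac{\sqrt{5}-1}{2}\right)+2k\pi \quad\text{or}\quad x=\pi-\arcsin\left(\frac{\sqrt{5}-1}{2}\right)+2k\pi,\qquad k\in\mathbb{Z}.$$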
H: Determine the value of “c” using the mean value theorem For the function $F(x) = Ax^2 + Bx + C$ determine the value of $c$ (critical point) at which the tangent line is parallel to the secant through the endpoints of the graph on the interval $[x1,x2]$. Not sure how to start this or do it at all so any help would be very much appreciated! Thank you in advance for your time and assistance! AI: According to the Mean Value Theorem: \begin{equation} F'(c) = \frac{Ax_{2}^2 + Bx_{2} + C - (Ax_{1}^2 + Bx_{1} + C)}{(x_{2} - x_{1})} \end{equation} \begin{equation} \implies F'(c) = \frac{A(x_{2}^2 - x_{1}^2) + B(x_{2} - x_{1})}{(x_{2} - x_{1})} \end{equation} We can factor out $(x_2 - x_1)$, resulting in \begin{equation} F'(c) = A(x_{2} + x_{1}) + B \end{equation} From this result, we can use the fact that $F'(x) = 2Ax + B$ and get: \begin{equation} 2Ac + B = A(x_{2} + x_{1}) + B \end{equation} \begin{equation} 2Ac = A(x_{2} + x_{1}) \end{equation} \begin{equation} c = \frac{(x_{2} + x_{1})}{2} \end{equation} Hope it helps!!
H: Is $e$ arbitrary? If not, how is it derived? Probably a stupid question, but where did the constant $e$ come from? How did it come about? How is it derived mathematically, other than via $e^{i\cdot \pi} = -1$? What exactly does natural growth mean? Or is Euler's constant arbitrary? AI: Some definitions of $e$: the number such that $\int_1^e \frac{\mathrm d x}{x}=1$; $\sum_{n=0}^\infty \frac{1}{n!}$; $\lim\limits_{n \to \infty} \left(1+ \frac{1}{n}\right)^n$. An interesting exercise is to prove that all these definitions lead to the same real number.
H: Getting three of a kind in a game of Yahtzee A. I am trying to calculate the probability of getting three of a kind in a game of Yahtzee, but I am not sure what I am doing wrong. We have five tossed dice, so the number of possible outcomes is $6\cdot6\cdot6\cdot6\cdot6=7776$. Three of a kind has the form $AAABC$, where $A$ can be chosen $6$ different ways, $B$ $5$ different ways, and $C$ $4$ different ways. Then there are $\binom{5}{3}$ ways to place the $A$'s, $\binom{2}{1}$ ways to place $B$, and $\binom{1}{1}$ way to place $C$. So you multiply $6\cdot5\cdot4\cdot10\cdot2=2400$ and get that the probability of three of a kind is $\frac{2400}{7776}$. But the correct answer is $25/162$, so I am not sure what I am doing wrong. B. My second question is how to get two pairs. You have $AABBC$, so there are $6$ ways to pick $A$, $5$ ways to pick $B$, and $4$ ways to pick $C$. Then $\binom{5}{2}$ ways to pick where to place the $A$'s and $\binom{3}{2}$ ways to pick where to place the $B$'s. Two pairs would be $(2,2,1,3,3)$, for example. So you get $6\cdot5\cdot4\cdot10\cdot3=3600$ and a probability of $3600/7776$, but my book says $25/108$. AI: A. You are counting the roll $66653$ twice: once with $B=5$, $C=3$ and once with $B=3$, $C=5$, which makes you a factor of $2$ off. B. Likewise you are counting $66553$ twice: once with $A=6$, $B=5$ and once with $A=5$, $B=6$, which again makes you a factor of $2$ off. When you have two values chosen in the same quantity, the same roll can come from picking one first and then the other, or from picking the other first. Go through your calculation and you will see that you count these examples twice.
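A brute-force enumeration over all $6^5$ ordered rolls confirms the corrected counts (a minimal sketch):

from itertools import product
from collections import Counter

three_kind = two_pair = 0
for roll in product(range(1, 7), repeat=5):
    pattern = sorted(Counter(roll).values())
    three_kind += (pattern == [1, 1, 3])
    two_pair += (pattern == [1, 2, 2])

print(three_kind, three_kind / 6**5)  # 1200, i.e. 25/162
print(two_pair, two_pair / 6**5)      # 1800, i.e. 25/108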
H: Prove that for any sets $A$ and $B$, if $\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)$ then either $A\subseteq B$ or $B\subseteq A$. Not a duplicate of Suppose $\mathcal{P} (A) \cup \mathcal{P} (B) = \mathcal{P} (A \cup B) $. Then either $A \subseteq B$ or $B \subseteq A$. Prove that if $\mathcal P(A) \cup \mathcal P(B)= \mathcal P(A\cup B)$ then either $A \subseteq B$ or $B \subseteq A$. Prove that if $\mathcal{P}(A)\cup\mathcal{P}(B)$=$\mathcal{P}(A\cup B)$ then $A\subseteq B$ or $B\subseteq A$ Proof verification: $P(A\cup B)=P(A)\cup P(B)\rightarrow A\subseteq B\vee A\supseteq B$ How do you prove $P(A) \cup P(B) = P(A \cup B) \Rightarrow (A \subseteq B) \lor (B \subseteq A)$ This is exercise $3.5.8$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Prove that for any sets $A$ and $B$, if $\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)$ then either $A\subseteq B$ or $B\subseteq A$. Here is my proof: Let $A$ and $B$ be arbitrary sets. Suppose $\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)$. Now we consider two different cases. Case $1.$ Suppose $A\subseteq B$. Ergo $A\subseteq B$ or $B\subseteq A$. Case $2.$ Suppose $A\nsubseteq B$. So we can choose some $x_0$ such that $x_0\in A$ and $x_0\notin B$. Let $y$ be an arbitrary element of $B$. Since $A\cup B\in\mathscr P(A\cup B)$, then $A\cup B\in\mathscr P(A)\cup\mathscr P(B)$. So either $A\cup B\subseteq A$ or $A\cup B\subseteq B$. Again we consider two different cases. Case $2.1.$ Suppose $A\cup B\subseteq A$. Since $y\in B$, $y\in A\cup B$. Ergo $y\in A$. Case $2.2.$ Suppose $A\cup B\subseteq B$. Since $x_0\in A$, $x_0\in A\cup B$. Ergo $x_0\in B$ which is a contradiction. From $y\in A$ or a contradiction we obtain $y\in A$. Thus if $y\in B$ then $y\in A$. Since $y$ is arbitrary, $\forall y(y\in B\rightarrow y\in A)$ and so $B\subseteq A$. Ergo $A\subseteq B$ or $B\subseteq A$. Since case $1$ and case $2$ are exhaustive, $A\subseteq B$ or $B\subseteq A$. Therefore if $\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)$ then either $A\subseteq B$ or $B\subseteq A$. Since $A$ and $B$ are arbitrary, $\forall A\forall B\Bigr(\mathscr P(A)\cup\mathscr P(B)=\mathscr P(A\cup B)\rightarrow(A\subseteq B\lor B\subseteq A)\Bigr)$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention. AI: It’s correct, but it’s much too wordy and far more complicated than necessary. To prove a theorem of the form $X\implies Y\text{ or }Z$, it suffices to show that if $X$ holds and $Y$ does not, then $Z$ must hold. Here that means that we need only show that if $\wp(A\cup B)=\wp(A)\cup\wp(B)$ and $A\nsubseteq B$, then $B\subseteq A$. This can be done in five lines, even writing it up in fairly wordy fashion: Suppose that $\wp(A\cup B)=\wp(A)\cup\wp(B)$, but $A\nsubseteq B$. $A\cup B\in\wp(A\cup B)$, so $A\cup B\in\wp(A)\cup\wp(B)$, and therefore $A\cup B\in\wp(A)$, or $A\cup B\in\wp(B)$. $A\subseteq A\cup B$, so if $A\cup B\in\wp(B)$, then $A\in\wp(B)$, and therefore $A\subseteq B$, contradicting our assumption that $A\nsubseteq B$; thus, we must instead have $A\cup B\in\wp(A)$. And $B\subseteq A\cup B$, so this implies that $B\in\wp(A)$ and hence that $B\subseteq A$.
H: How to subtract IEEE754 floating point? I have two numbers represented in floating point: $A: 10101001001110000000000000000000$ $B: 01000011011000000000000000000000$ For $A$ I know $e=82$ and for $B$, $e=134$ ($e$=exponent), but I don't know how to subtract these numbers in floating point. How should we make the subtraction? AI: Interpreting the floating point bits A: 1 01010010 01110000000000000000000 B: 0 10000110 11000000000000000000000 gives indeed $$ A=-2^{82-127}\cdot(1.0111)_2 \\ B=+2^{134-127}\cdot(1.1100)_2 $$ The exponent difference is so large that when shifting both numbers towards the larger exponent, $A$ gets shifted/rounded to zero. The operation is thus in this case trivial.
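A quick way to sanity-check both the decoding and the rounding is to let the machine do it; this sketch reinterprets the bit patterns as IEEE 754 single-precision values:

import struct

def bits_to_float(bits):
    # reinterpret a 32-character bit string as an IEEE 754 single
    return struct.unpack('>f', int(bits, 2).to_bytes(4, 'big'))[0]

a = bits_to_float('10101001001110000000000000000000')
b = bits_to_float('01000011011000000000000000000000')
print(a, b)  # approximately -4.0856e-14 and 224.0
diff = struct.unpack('>f', struct.pack('>f', a - b))[0]  # round back to single
print(diff)  # -224.0: A is absorbed entirely by the rounding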
H: Suppose $x,y \in V$ are vectors such that $\lVert x\rVert=\lVert y\rVert=1$ and $\langle x,y\rangle=1$. Show $x=y$ Let $V$ be a vector space over $\mathbb{R}$. Let $(x,y) \mapsto \langle x,y\rangle$ be an inner product on $V$ with induced norm $\lVert x\rVert=\sqrt{\langle x,x\rangle}$. Suppose that $x$ and $y$ are two vectors in $V$ such that $\lVert x\rVert=\lVert y\rVert=1$ and $\langle x,y\rangle=1$. Show that $x=y$. This looks really obvious, but I tried using $(\lVert x\rVert-\lVert y\rVert)^2$ and the Cauchy inequality to approach it, and still didn't get it. I am running out of ideas now. Any help is appreciated. AI: Notice that $$\|x-y\|^2 = \langle x-y,x-y \rangle = \langle x,x \rangle - 2 \langle x,y \rangle + \langle y,y \rangle = 1-2+1 = 0,$$ so $\|x-y\| = 0$ and hence $x=y$.
H: Find $\sum_{n=1}^{\infty} \frac{1}{\prod_{i=0}^{k} \left(n+i\right)}$ Original question is $$\sum_{n=1}^{\infty} \frac{1}{\prod_{i=0}^{k} \left(n+i\right)}$$ I got it down to $$\sum_{n=1}^{\infty} \frac{(n-1)!}{(k+n)!}$$ Here I am confused. Possibly a partial fraction decomposition, but it's ugly! Maybe this approach is not good? Ideas? The answer is $$\frac{1}{k \cdot k!}$$ I want to know how to proceed with my work, though. AI: \begin{align*}\sum_{n=1}^{\infty}\frac{(n-1)!}{(k+n)!} &= \frac{1}{k!}\sum_{n=1}^{\infty}\frac{k!(n-1)!}{(k+n)!}\\ &=\frac{1}{k!}\sum_{n=1}^{\infty} \beta(k+1,n)\\ &=\frac{1}{k!}\sum_{n=1}^{\infty} \int_0^1 t^k(1-t)^{n-1}dt\\ &=\frac{1}{k!}\int_0^1 t^k\bigg(\sum_{n=1}^{\infty}(1-t)^{n-1}\bigg)dt\\ &=\frac{1}{k!}\int_0^1 \frac{t^k}{t}dt\\ &=\frac{1}{k \cdot k!} \end{align*} Here, $\beta(\cdot,\cdot)$ is the Beta function.
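Alternatively, one can avoid the Beta function with a telescoping identity: putting the right-hand side over a common denominator shows $$\frac{1}{n(n+1)\cdots(n+k)}=\frac{1}{k}\left(\frac{1}{n(n+1)\cdots(n+k-1)}-\frac{1}{(n+1)(n+2)\cdots(n+k)}\right),$$ so the series telescopes and only the first term survives: $$\sum_{n=1}^{\infty}\frac{1}{n(n+1)\cdots(n+k)}=\frac{1}{k}\cdot\frac{1}{1\cdot 2\cdots k}=\frac{1}{k \cdot k!}.$$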
H: Why do we need to make use of the random variable concept if we already know the measure $P$? I have a doubt involving the concept of random variable. Let us consider that $\Omega = \{\omega_{1},\omega_{2},\omega_{3}\}$, $\mathcal{F} = 2^{\Omega}$ and $P(\{\omega_{i}\}) = 1/3$ for $1\leq i\leq 3$. Having said that, let us also consider the r.v. $X:(\Omega,\mathcal{F},P)\to(\mathbb{R},\mathcal{B}(\mathbb{R}))$ defined by $X(\omega_{i}) = i$. Then we may consider the probability of the event $A_{i} = \{\omega_{i}\}$, which is given by \begin{align*} P(A_{i}) = P(\{\omega_{i}\}) = P(X^{-1}(\{i\})) = P_{X}(\{i\}) = 1/3 \end{align*} Here it is my question: why do we need to make use of the random variable concept if we already know the measure $P$? After some thought, I concluded that we use random variables to convert arbitrary outcomes into numbers, which we can manipulate more comfortably. But I do not know if I am correct. Could someone please help me understand it properly? AI: The point of random variables is to describe classes of events in a more elegant way than listing all of them. And in addition, since random variables are measurable functions and a lot of functional analysis also deals with measurable functions, we can use functional analysis to work with random variables. For instance, we can consider $\mathrm E[XY]$ and $\mathrm{Cov}(X,Y)$ as inner products on suitable spaces of random variables, which allows us to transfer useful stuff like the Cauchy-Schwarz inequality to the realm of probability theory. In particular, we can use everything we know about measure integrals in probability theory, since, for instance, expected values are just measure integrals w.r.t. the probability measure. So there is this whole theory we can tap into in order to describe any kind of event which can reasonably be defined via a random variable.
H: How to find the number of groups of 5 with 2 defective modems. A store has 80 modems in its inventory, 30 coming from Source A and the remainder from Source B. Of the modems from Source A, 20% are defective; of the modems from Source B, 8% are defective. How many groups of 5 modems will have exactly two defective modems? So I am not looking for the probability; instead I am looking for how many groups of 5 will have exactly two defective modems. I know I am going to use combinations, but I just don't know how. I know there are 24 non-defective modems and 6 defective ones from A, and 46 non-defective modems and 4 defective ones from B. AI: As you correctly pointed out, you have a total of $d = 10$ defective and $n = 70$ non-defective modems. You need to combine 2 defective modems with 3 non-defective modems. Your answer is: $$ {d \choose 2} \times {n \choose 3} = {10 \choose 2} \times {70 \choose 3} = 45 \times 54,740 = 2,463,300 $$
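The same count can be checked with Python's math.comb (Python 3.8+):

from math import comb
print(comb(10, 2) * comb(70, 3))  # 2463300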
H: What can a Lipschitz function tell me? If a function is said to be "Lipschitz", what kind of information does that give? I know that it is about continuity, but I think maybe it gives more information than that. AI: It is stronger than simple continuity; it is one of its strongest forms. The greatest disadvantage I can see is that it is only defined on metric spaces, which means that you cannot take a similar approach on general topological spaces. But coming back to the real case: if you have a Lipschitz function $f:\mathbb{R}\rightarrow\mathbb{R}$, then it has many nice properties, the best one being that it is differentiable almost everywhere. If you don't know what almost everywhere means, you can see it as follows (which is a stronger statement, but also true for every Lipschitz function): there is a bounded (locally integrable) function $g$ such that $$ f(x) = f(0) + \int_0^x g(y)\;dy. $$ In this case one usually writes $f' = g$, and the above says that $f'$ is bounded.
H: Let $G$ be a group of order $p^n q$, where $p$ and $q$ are distinct primes. Assume $q \nmid p^i - 1$ for $1 \leq i \leq n - 1$. Prove that $G$ is solvable. Since a group $G$ with a solvable normal subgroup $N$ such that $G/N$ is solvable is itself solvable, and, for a prime $r$, every $r$-group is solvable, we know that if $G$ has a normal Syl$_p$ subgroup or a normal Syl$_q$ subgroup then $G$ is solvable. Suppose $G$ does not have a normal Syl$_p$ subgroup or a normal Syl$_q$ subgroup. Then $G$ must have $q$ many Syl$_p$ subgroups, and since $q \nmid p^i - 1$ for $1 \leq i \leq n - 1$, and the number of Syl$_q$ subgroups is $\equiv 1 \pmod q$, we get that the number of Syl$_q$ subgroups is $p^n$. I'm not really sure what to do at this point, but my idea was: if I can show that $G$ has a nontrivial normal $p$-subgroup $N$, then $G/N$ must have a normal Sylow $q$-subgroup, and so by repeating my argument in the first paragraph I am done. But I can't figure out how to get such a group. I know $p$-groups can't have trivial center, so I am trying to think of a way to use that idea to get such a group. Another idea I had was: if $P$ is a Syl$_p$ subgroup then, since $|G/P| = q$, we know there exists a normal subgroup $K$ of $G$, contained in $P$, such that $|G/K|$ divides $q!$. So if I can show that $K$ is nontrivial then $K$ is my nontrivial normal $p$-subgroup. AI: By Burnside's theorem, every group of order $p^mq^n$, where $p,q$ are prime, is solvable. In your particular case, by the Sylow theorem the number of Sylow $q$-subgroups must divide $p^n$ and be congruent to $1$ mod $q$. By your assumption this number is $1$ or $p^n$. Suppose it is $1$. Then the Sylow $q$-subgroup $Q$ is unique and has $q$ elements, whence it is cyclic. Hence the group is an extension of $Q$ by $G/Q$ of order $p^n$. The $p$-group $G/Q$ is nilpotent, whence solvable. Therefore $G$ is solvable. Now suppose it is $p^n$. All Sylow $q$-subgroups pairwise intersect in $1$ and have $q$ elements. Hence the group has $p^n(q-1)=p^nq-p^n$ elements of order $q$. Therefore there can be only one Sylow $p$-subgroup $P$ of order $p^n$. Hence $P$ is normal, nilpotent, and $G/P$ is cyclic. So $G$ is solvable.
H: Undetermined or indeterminate forms: $\frac{0}{0}, \frac{\infty}{\infty}, 0\cdot\infty, 1^\infty, 0^0, +\infty-\infty$ I wanted to know who decided that, for the calculation of limits, the following forms, $$\color{orange}{\frac{0}{0},\quad \frac{\infty}{\infty},\quad 0\cdot\infty,\quad 1^\infty,\quad 0^0,\quad +\infty-\infty}$$ are called indeterminate forms. For example, it would seem natural to me to say that $$1^{\infty}=1\cdots 1 \cdots 1 =1$$ or $$0^0=1$$ AI: It depends where you look. Many people define $0^0 = 1$. The problem is that there is no single choice that makes sense in all situations. Take for example $1^\infty$. You would like to have some kind of continuity, but for every $x > 1$, you have $x^\infty = \infty$ and for $0 < x < 1$, you have $x^\infty = 0$. Why should $1^\infty$ have one specific value in between? For $0^0$, you can plot $x^y$ on wolframalpha and you will see that there are many possibilities to define $0^0$ as a limit $x \to 0$ and $y\to 0$, depending on the direction you take. As for fractions of the form $0/0$ or $\infty/\infty$, how do you want to define them? As limits? How would you like to distinguish $$ \lim_{x\to 0} \dfrac{x}{x} = 1 $$ or $$ \lim_{x\downarrow 0} \dfrac{x}{x^2} = +\infty $$ or $$\lim_{x\uparrow 0} \dfrac{x}{x^2} = -\infty$$ or $$\lim_{x\to 0} \dfrac{x^2}{x} = 0?$$ There are so many limits which could be interpreted as $0/0$ that it makes no sense to choose between them. And this is true for every indeterminate form.
H: sums and differences of perfect powers We have $1=3^2-2^3$ $2=3^3-5^2$ $3=2^7-5^3$ $4=5^3-11^2$ $5=3^2-2^2$ and it is unknown if $6$ is representable as a difference of two perfect powers. The next such undecided example is $14$. More: http://oeis.org/A074981 However, I found that $6=64-49-9=2^6-7^2-3^2$ and $6=27+4-25=3^3+2^2-5^2$ Similarly $14=27-9-4=3^3-3^2-2^2$ and $14=9+9-4=3^2+3^2-2^2$ My question: Is every positive integer representable in the form $a_1^{n_1}+a_2^{n_2}-a_3^{n_3}$ or/and in the form $a_1^{n_1}-a_2^{n_2}-a_3^{n_3}$, where $a_1,a_2,a_3,n_1,n_2,n_3$ are natural numbers greater than $1$ (with $a_2=0$ also acceptable)? Are these things known? The question is based on my own investigation. AI: There are trivial solutions with squares: $$n=a^k+b^2-c^2\iff(c+b)(c-b)=a^k-n$$ Choose $a$ to have opposite parity from $n$, so that $a^k-n\pm1$ is even, and take $k$ large enough that $a^k>n+3$; then let $c+b=a^k-n$ and $c-b=1$ and solve for $c$ and $b$ (both are then greater than $1$), i.e., $$n=a^k+\left(a^k-n-1\over2\right)^2-\left(a^k-n+1\over2 \right)^2$$ For example, with $n=6$, $a=3$, $k=3$ this gives $6=3^3+10^2-11^2$. Remark: You could eliminate these trivial solutions by asking that the three powers all be different. Adding such a requirement, though, suggests doing the same for the original problem, and seeing which numbers can be written as a difference of two perfect powers of different degree. So $5=3^2-2^2$ is no longer allowed, but $5=2^5-3^3$ is.
H: Bipartite-Graph GCD question Let $G$ be a bipartite graph with bipartition $(A, B)$. Suppose every vertex in $A$ has degree $k_a$, and every vertex in $B$ has degree $k_b$. Prove that if $G$ has a bridge, then $\operatorname{gcd}(k_a,k_b) = 1$. AI: HINT: Take a look at the answer to this question, which proves that if $k_a=k_b>1$, $G$ has no bridge. The idea used in that answer can be combined with Bézout’s identity to yield your result.
H: Identities involving hyperbolic functions. I came across the following identity, $$ \int_{-\infty} ^\infty dx \frac{e^{-i kx}}{e^{-ax} +1} = \frac{2\pi i}{a} \sum_{n=0}^{\infty} e^{-\frac{(2n+1)\pi k}{a}} = \frac{\pi i }{a\mathrm{sinh}\frac{k \pi}{a}} $$ I can to some extent see the first equality by doing the contour integration in the lower half plane which has poles at $\frac{i\pi(2 n +1)}{a}$. I however get the summation for negative integers as well, $$ \int_{-\infty} ^\infty dx \frac{e^{-i kx}}{e^{-ax} +1} = \frac{2\pi i}{a} \sum_{n=-\infty}^{\infty} e^{-\frac{(2n+1)\pi k}{a}} $$. I however have no idea how one can get the second equality. I am actually more interested in a slightly different integral, $\int_{-\infty} ^\infty dx \frac{e^{-i kx}}{e^{-ax} -1} $. Any help as to how one can obtain these identities is appreciated. AI: The second equality can be proven by writing the summation as a geometric progression. \begin{align} \sum_{n=0}^\infty e^{-(2n+1)\frac{\pi k}{a}}&= e^{-\frac{\pi k}{a}}\sum_{n=0}^\infty\big(e^{\frac{-2\pi k}{a}}\big)^n \\&=\frac{e^\frac{-\pi k}{a}}{1-e^\frac{-2\pi k}{a}} \\ &= \frac{1}{e^{\frac{\pi k}{a}}-e^{-\frac{\pi k}{a}}} \\ &= \frac{1}{2\sinh{\frac{\pi k}{a}}} \end{align} Note that the infinite summation converges only if $\frac{k}{a} > 0$.
H: Lower bound for the max function I am wondering if we have the following lower bound for the $\max$ function: $$ \forall a, b, c \in \mathbf{R} ~~~~~ \max(a, b, c) \geq \dfrac{1}{3} ( a+b+c) $$ AI: $$\dfrac{1}{3} ( a+b+c) \leqslant \frac{\max(a,b,c)+\max(a,b,c)+\max(a,b,c)}{3} = \max(a,b,c)$$ $$\min(a,b,c) = \frac{\min(a,b,c)+\min(a,b,c)+\min(a,b,c)}{3} \leqslant \dfrac{1}{3} ( a+b+c)$$ And similarly for any fixed number of terms: $$\min(a_1,\cdots,a_n)\leqslant \frac{1}{n}\sum_{i=1}^{n}a_i\leqslant \max(a_1,\cdots,a_n)$$
H: Why is $\mathbb{Z}_{m} \otimes_{\mathbb{Z}} \mathbb{Z} = \mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}_{m} = \mathbb{Z}_{m} $? Why is $\mathbb{Z}_{m} \otimes_{\mathbb{Z}} \mathbb{Z} = \mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}_{m} = \mathbb{Z}_{m} $? Could anyone show me the proof of this, please? I have read this question When is the tensor product commutative? here but I do not fully understand the answer to my question. AI: I'll give you an outline. Let $A$ denote a commutative unital ring, and let $M$ and $N$ denote $A-$modules. Then we can define a map $M\times N\to N\times M$ by $(m,n)\mapsto (n,m)$. This is easily seen to be an $A-$bilinear map, and using the universal property of the tensor product of $A-$modules we get a map $$ M\otimes_A N\to N\otimes_A M$$ which you can check is an isomorphism. Similarly, we define a map $A\times M\to M$ by $(a,m)\mapsto am$. This is $A-$bilinear, and descends to a map $A\otimes_A M\to M$. You can check directly again that this is an isomorphism. We get that $M\otimes_A N\cong N\otimes_A M$ when $A$ is commutative. Similarly, we get that $A\otimes_AM\cong M$. Now, we take $A=\Bbb{Z}$ and observe that $\Bbb{Z}/m\Bbb{Z}=M$ is a $\Bbb{Z}-$module. It follows that $$ \Bbb{Z}/m\Bbb{Z}\otimes_{\Bbb{Z}}\Bbb{Z}\cong \Bbb{Z}\otimes_{\Bbb{Z}}\Bbb{Z}/m\Bbb{Z}\cong \Bbb{Z}/m\Bbb{Z}.$$
H: small distances between powers of irrationals The value of $$\inf \left\{ |\pi^m-e^n|: m,n\in\mathbb{N} \right\}$$ is a known unsolved problem. But transcendental numbers are known to cause problems of this sort. Is the value of $$\inf \left\{ |\sqrt{2}^m-\sqrt{3}^n|: m,n\in\mathbb{N} \right\}$$ known? Or at least is it known if $|\sqrt{2}^m-\sqrt{3}^n|$ can be arbitrarily small? AI: Note you have $$d = \left|\sqrt{2}^m-\sqrt{3}^n\right| = \frac{\left|2^m - 3^n\right|}{\sqrt{2}^m + \sqrt{3}^n} \tag{1}\label{eq1A}$$ As stated near the bottom of Differences Between Powers, Indeed, Tijdeman proved that there exists a number $c \ge 1$ such that $$\left|2^m - 3^n\right| \ge \frac{2^m}{m^c}$$ Also, a closely related post is $\liminf |2^m - 3^n|$. Its accepted answer uses Baker's theorem to show that $|2^m-3^n|/m>2^m\cdot c'\cdot m^{-C}$ which is very similar to what Tijdeman determined. Since you're looking for $d$ in \eqref{eq1A} to be very small, let $$\sqrt{3}^n = (1 + \epsilon)\sqrt{2}^m \tag{2}\label{eq2A}$$ where $\epsilon \approx 0$. Also, to get smaller values of $d$, $\epsilon$ should get closer to $0$ as $m$ increases. From \eqref{eq1A}, using Tijdeman's result and \eqref{eq2A}, gives $$\begin{equation}\begin{aligned} \left|\sqrt{2}^m-\sqrt{3}^n\right| & \ge \frac{2^m}{m^c(\sqrt{2}^m + \sqrt{3}^n)} \\ & = \frac{2^m}{m^c(2 + \epsilon)\left(2^{m/2}\right)} \\ & = \frac{2^{m/2-1}}{m^c\left(1 + \frac{\epsilon}{2}\right)} \end{aligned}\end{equation}\tag{3}\label{eq3A}$$ The numerator is an exponential in $m$ while, since $c$ is a fixed real number and $\epsilon$ is relatively small (and ideally decreasing), the denominator is basically a polynomial in $m$. Since exponentials grow faster than polynomials, this means \eqref{eq3A} shows the minimum difference grows without bound as $m$ increases. This also means the $\epsilon$ in \eqref{eq2A} cannot stay close to $0$ and, actually, must be increasing. Thus, this proves $\left|\sqrt{2}^m-\sqrt{3}^n\right|$ can't be made arbitrarily small. Regarding the smallest value $d$ can be, this can be determined by checking the smallest values of $m$, with the required number to check depending on what the value of $c$ is. However, I don't know if anybody has done this and, if so, what the result is.
H: Show that there exists a neighborhood $U$ of $(0,1)$ such that the restriction $g:U \rightarrow g[U]$ is invertible Let $g: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be defined by $g(x,y)=(2ye^{2x},xe^y)$. Show that there exists a neighborhood $U$ of $(0,1)$ such that the restriction $g:U \rightarrow g[U]$ is invertible, and $g^{-1} \in C^{\infty}(g[U];\mathbb{R}^2)$. Here is the theorem I am thinking about (Inverse Function Theorem): Let $W \subseteq \mathbb{R}^n$ be open, and let $f \in C^r(W; \mathbb{R}^n)$, $r \ge 1$. If $a \in W$ is a point such that $Df(a)$ is an invertible matrix, then there exist open sets $U \subseteq W$ and $V \subseteq \mathbb{R}^n$ such that $a \in U$ and the restriction $f:U \rightarrow V$ is invertible with $f^{-1} \in C^r(V; \mathbb{R}^n)$. The related lemma is: let $U \subseteq \mathbb{R}^n$ be open, let $f \in C^1(U;\mathbb{R}^n)$, and let $a \in U$. If $Df(a)$ is an invertible matrix, then there exist $\alpha, \epsilon>0$ such that $\lVert f(x_0)-f(x_1)\rVert \geq \alpha \lVert x_0-x_1 \rVert$ for all $x_0,x_1 \in B(a, \epsilon)$. I am still stuck on constructing the neighborhood for this question. Any help is appreciated. AI: Hint: Compute $Dg$, and evaluate it at the point $a=(0,1)$. Is $Dg(a)$ an invertible matrix? If yes, then the inverse function theorem tells us that such a neighborhood exists, and you're done! No need to actually construct such a neighborhood; it is sufficient to know that it's there (at least, that's what your question seems to ask).
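Carrying out the computation the hint asks for: $$Dg(x,y)=\begin{pmatrix}4ye^{2x} & 2e^{2x}\\ e^y & xe^y\end{pmatrix},\qquad Dg(0,1)=\begin{pmatrix}4 & 2\\ e & 0\end{pmatrix},\qquad \det Dg(0,1)=-2e\neq 0,$$ so the inverse function theorem applies at $(0,1)$; and since $g\in C^{\infty}$, the theorem with $r=\infty$ gives $g^{-1}\in C^{\infty}(g[U];\mathbb{R}^2)$.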
H: Simultaneous convergence of sequence with weak- and weak$^*$-convergent terms Suppose I have a Banach space $X$ with continuous dual. In that space, I have a sequence $(x_n)_{n = 1}^\infty$ in $X$ converging weakly to $y$, and a sequence $(\phi_n)_{n = 1}^\infty$ in $X^*$ satisfying $\phi_n \to \psi$ in the weak$^*$ topology. Suppose further that $\| x_n \| \leq 1, \| \phi_n \| \leq 1$ in their respective norms. Is it necessarily true that $\phi_n x_n \to \psi y$? I have the inequality $$ |\psi y - \phi_n x_n| $$ that I want to make small, but whenever I try to expand it and apply a triangular inequality to several terms, but I keep ending up with a term like $|\phi_n (y - x_n)|$ or $|(\psi - \phi_n) x_n|$, a term which I can't make small by attending to only of the modes of convergence. Because both "parts" of the term are moving at once, and I don't have norm-convergence for either, I don't know how to make the term small because both the functionals and the vectors they act on are $n$-dependent. So is this result true? Am I missing something? Or does this kind of thing just not work in general? AI: This result is false. Take $X=c_0$, $x_n$ the usual basis in $c_0$, and $\phi_n$ the usual basis in $X^*=\ell_1$. Then $x_n$ tends weakly to zero in $c_0$, and $\phi_n$ tends to zero in the weak-star topology in $\ell_1$, but $\phi_n(x_n)=1$ for all $n$.
H: Separation axiom implied by semidecidability of comparison I am studying computable analysis. What I'm fascinated by is the analogy between computable analysis and general topology. A Wikipedia article lists these analogies: "Semidecidable sets are analogous to open sets." So I treat them essentially the same. "Discrete sets in topology are analogous to sets in computability where equality between elements is semi-decidable." This would actually make every set decidable, for every set is clopen in the discrete topology. But I'm puzzled by this one: "Hausdorff sets in topology are analogous to sets in computability where inequality between elements is semi-decidable." If the "inequality" refers to $≠$, this would mean every cofinite set is open, and thus every finite set is closed. Doesn't that mean the space is $T_1$, but not necessarily $T_2$? AI: Saying that $\neq$ is semi-decidable for a space $\mathbf{X}$ is the effective counterpart of saying that the complement of the diagonal $\{(a,b) \mid a \neq b\} \subseteq \mathbf{X} \times \mathbf{X}$ is open. This is a standard example of a condition equivalent to being Hausdorff. You do need to be careful though with discreteness: In classical topology discrete indeed implies Hausdorff, and you thus don't need to distinguish between equality being semidecidable and equality being decidable. In the effective world, there is a difference though. A counter-example is the quotient of $\mathbb{N}$ by the equivalence relation $\sim$ where $a \sim b$ iff $a = b$ or $a,b \in H$ where $H$ is the Halting set. This space has semidecidable equality, but not decidable equality (so it is computably discrete, but not computably Hausdorff). As you mention $T_1$, it turns out that that does not have a clear effective counterpart. In fact, some statements that look like computable counterparts are already the computable counterpart to being Hausdorff. If you want to read more on the computability/topology analogy, I'll be so forward and advertise my article here: Journal arXiv
H: Is it true that $\operatorname{meas}(\partial(\operatorname{supp}(f)))=0$? Let $\Omega$ be a open subset of $\mathbb{R}^{d}$ and $C_c^{m}(\Omega)$ the space of $m$-times continuously differentiable functions with compact support with $0 \leq m \leq \infty$. Denote by $\partial X$ the boundary of the set $X$ and by $\operatorname{meas}(X)$ the measure of the set $X$. My question: Is it true that $\operatorname{meas}(\partial(\operatorname{supp}(f)))=0$ for all $f \in C_c^{m}(\Omega)$? Intuitively it seems true. For example, if $n=1$ then $\partial(\operatorname{supp}(f))$ has two points. If $n=2$, then $\partial(\operatorname{supp}(f))$ looks like a deformed circumference. But I do not know how to give an analytical proof of this fact. Observations: The support of $f$ is the set $\operatorname{supp}(f)=\overline{\{x \in \Omega:f(x) \neq 0\}}^{\Omega}$. Here $\operatorname{meas} (X)=0$ can be the Lebesgue measure or prove that for all $\varepsilon>0$ there is a sequence of $d$-dimensional cubes $C_1, C_2, \dots, C_i, \dots$ such that $\operatorname{supp}(f) \subset \bigcup_{i=1}^{\infty} C_i$ and $\sum_{i=1}^{\infty} \operatorname{vol} C_i<\varepsilon.$ AI: This is false even for $C^\infty_c$ functions on the real line. Let $A$ be a "fat" Cantor set; it's obtained similarly to the traditional Cantor set, by starting with $[0,1]$ and repeatedly removing middle intervals, but taking those removed pieces to be not middle-thirds but middle-tiny-intervals, getting tinier from stage to stage, so rapidly that the left-over $A$ has positive measure (in contrast to the traditional Cantor set's zero measure). Now build the function $f$ by putting a $C^\infty$ bump function on each of the removed middle intervals, making the heights of the bumps shrink rapidly so that the whole $f$ is $C^\infty$. It's non-zero exactly on those middle intervals, so the boundary of its support is all of $A$, which has positive measure.
H: Usage of a linear operator $T$ on a basis Let $T: V \rightarrow V$ be a linear operator, where $V$ is a finite-dimensional vector space. Let $$\varepsilon=\left\{\varepsilon_{1}, \ldots, \varepsilon_{n}\right\}$$ be a basis for $V$. If I have $\vec{v} \in \operatorname{span}(\left\{\varepsilon_{1}, \ldots, \varepsilon_{n}\right\})$, why is it true that $$T(\vec{v}) \in \operatorname{span}(\left\{T(\varepsilon_{1}), \ldots, T(\varepsilon_{n})\right\})?$$ I do not understand how we can use the linearity of $T$ here. AI: $\vec{v} \in \operatorname{span}(\left\{\varepsilon_{1}, \ldots, \varepsilon_{n}\right\})$ means $\vec v=a_1\varepsilon_1+\cdots+a_n\varepsilon_n$ for some scalars $a_1,\ldots,a_n$, so $T(\vec v)=T(a_1\varepsilon_1+\cdots+a_n\varepsilon_n)$, which by linearity is $a_1T(\varepsilon_1)+\cdots+a_nT(\varepsilon_n)$, which is in $\operatorname{span}(\left\{T(\varepsilon_{1}), \ldots, T(\varepsilon_{n})\right\})$.
H: Solve Complex Equation $z^3 = 4\bar{z}$ I'm trying to solve for all $z$ values where $z^3 = 4\bar{z}$. I tried using $z^3 = |z|^3(\cos(3\theta)+i\sin(3\theta))$ and $|z| = \sqrt{x^2+y^2}$, so: $$z^3 = (x^2+y^2)^{3/2}(\cos(3\theta)+i\sin(3\theta))$$ and $$4\bar z = 4x-4iy = 4r\cos(\theta)-i4r\sin(\theta)$$ but I have no idea where to go from there. AI: Just for fun, let's do this by expanding $(x+iy)^3=4(x-iy)$ with $x,y\in\mathbb{R}$ and separating the real and imaginary parts. We wind up with $$x(x^2-(3y^2+4))=0\quad\text{and}\quad y(y^2-(3x^2+4))=0$$ If $x=0$ then $y(y^2-4)=0$, so $y=0,\pm2$, hence $z=0$, $2i$ and $-2i$ are solutions. If $x\not=0$, then we must have $x^2=3y^2+4$, which implies $y(y^2-(9y^2+12+4))=-8y(y^2+2)=0$. The only real solution is $y=0$, which leads to $x^2=3\cdot0^2+4=4$, or $x=\pm2$. So $z=2$ and $-2$ are also solutions. In all we have $z=0,2,2i,-2$, and $-2i$ as solutions of $z^3=4\overline{z}$.
H: Let $b \in [0,1)$. Prove that $\frac{b}{1-b} \in [0,\infty)$ Can someone check my solution for this problem? It seems to me that it’s incomplete, and I’m not sure. Problem: Let $b \in [0,1)$. Prove that $\frac{b}{1-b} \in [0,\infty)$. Solution: We know that $b \in [0,1)$, so $0 \leq b < 1$. From here we can also deduce that $ 0 < 1-b \leq 1$. So $\frac{1}{1-b} \geq 1$. Multiplying by $b$ we obtain that $\frac{b}{1-b} \geq b$. Since $b \geq 0$ we conclude that $\frac{b}{1-b} \geq 0$. Therefore $\frac{b}{b-1} \in [0,\infty)$. AI: Define the function $ f $ from $[0,1)$ to $ \Bbb R$ by $$(\forall x\in[0,1))\;\; f(x)=\frac{x}{1-x}$$ $ f $ is continuous at $ [0,1)$. $ f $ is differentiable at $ [0,1)$ and $$(\forall x\in[0,1))\;\; f'(x)=\frac{1-x+x}{(1-x)^2}>0$$ $ f $ is then strictly increasing at $ [0,1)$. Thus, $ f $ is a bijection from $ [0,1)$ to $$f([0,1))=[f(0),\lim_{x\to 1^-}f(x))=[0,+\infty)$$ we conclude that $$(\forall b\in[0,1))\;\; f(b)=\frac{b}{1-b}\ge 0$$ Remark: You can simply say $$0\le b<1\; \implies$$ $$b\ge 0 \text{ and } 1-b>0 \;\implies$$ $$\frac{b}{1-b}\ge 0\; \implies$$ $$\frac{b}{1-b}\in [0,+\infty)$$
H: Show that there do not exist functions $f(x)$ and $g(h)$ such that $\cos{(x + h)} − \cos{x} = f(x)g(h)$ for all $x, h \in \mathbb{R}$ Show that there do not exist functions $f(x)$ and $g(h)$ such that $\cos{(x + h)} − \cos{x} = f(x)g(h)$ for all $x, h \in \mathbb{R}$. So far, I have tried following the same logic as this similar problem (part b): Prove No Functions f and g Satisfy f(x)g(y)= x + y Here is the solution to that problem, which I cannot quite understand. Solution I don't understand how $\frac{xy}{g(0) f(0)} = xy$ is yielded. AI: First note that $f$ cannot be identically zero, so pick $x_0$ with $f(x_0)\neq 0$; then $g(h)=\frac{\cos(x_0+h)-\cos x_0}{f(x_0)}$, so $g$ is automatically differentiable and $g(0)=0$. Dividing the identity by $h$ and letting $h \to 0$, we get $-\sin x =f(x) g'(0)$ for every $x$; here $g'(0)\neq 0$, since otherwise $\sin x$ would vanish identically. Thus $f(x)=c \sin x$ where $c=-\frac 1 {g'(0)}$. Now put $x=0$: since $f(0)=0$, we get $\cos h -1= 0$ for all $h$, which is a contradiction.
H: Find an upper bound for the modulus of a complex number I need to find an upper bound for the modulus $ |3z^2+2z+1| $ if $ |z| \leq 1$. My solution is: $$ |3z^2+2z+1| = |(3z-1)(z+1)+2|$$ Using the following two consequences of the triangle inequality, $$(1)\ |z_{1}+z_{2}| \leq |z_{1}|+|z_{2}| $$ $$(2)\ |z_{1}-z_{2}| \geq ||z_{1}|-|z_{2}|| $$ Using eq. (1) I got $$|(3z-1)(z+1)+2|\leq|(3z-1)(z+1)|+|2|,$$ which is equal to $|3z-1||z+1|+2$. Using eq. (2) on $|3z-1|$ I got $$|3z-1|\geq||3z|-|1||=|3|z|-1|$$ Using eq. (1) on $|z+1|$ I got $$|z+1|\leq|z|+|1|=|z|+1$$ Finally, substituting I got $$(|3|z|-1|)(|z|+1)+2$$ Using $|z|\leq1$: $$(|3(1)-1|)((1)+1)+2 = 6$$ The book says the result is 6; however, I'm not sure about my procedure, so I want to know: Is my procedure wrong, and am I getting the correct answer by coincidence? If it's not, is there a better way to solve it? AI: $|3z^2+2z+1| \leq |3z^2| + |2z| + |1| =3|z|^2+2|z|+1 \leq 3+2+1=6$ As for your procedure: yes, it is flawed, and reaching $6$ is a coincidence. Inequality (2) gives a lower bound for $|3z-1|$, and a lower bound cannot be used inside an upper estimate; for a valid upper bound you would need $|3z-1|\leq 3|z|+1\leq 4$, which yields the weaker bound $4\cdot 2+2=10$. (Note that the bound $6$ is attained at $z=1$, so it is in fact the maximum.)
H: Why do we need arithmetical operations instead of teaching arithmetic based on the successor function? Since we can construct the set of natural numbers off of the peano axioms with the operations of addition, subtraction, multiplication and division following, why do we even need 4 arithmetical operations? Since the successor function is "modelled" off of counting as we know it as babies, it is quite intuitive for many people. Why then don't we just teach all of arithmetic based off a axiomatic, intuitive, and quite "primitive" notion of the successor function, and why do we specifically need 4 arithmetic operators? (Specifically where even the basic notion of addition of natural numbers is hard to put into words, without a lengthy exposition which a young person would have a hard time understanding). AI: That is, in effect, what the Peano definitions of the arithmetic operations do: they build them from the successor function - e.g. the recursive definition for addition is $$a + 0 := a$$ (base case) $$a + S(b) := S(a + b)$$ (recursion case) It becomes clearer if one notes that effectively almost by definition, $$b = (\underbrace{S \circ S \circ S \cdots S}_{b})(0)$$ so that $$a + b = a + (\underbrace{S \circ S \circ S \cdots S}_{b})(0) = (\underbrace{S \circ S \circ S \cdots S}_{b})(a)$$ which says that "$a$ plus $b$ works by adding $b$ single things one at a time to $a$", and if you want to show a child, show hir stacking little blocks of hir favorite type together on top of a given tower of blocks, one block at a time, and then the remaining tower has a number of blocks that equals the sum of the two amounts in the preceding towers. The reason we have the operation of addition is because it very quickly gets awful to say "two, then one more, then one more, then one more, then one more", and much better to say "two plus four". And better, because it generalizes beyond the simple case of natural numbers.
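To make the recursion concrete, here is addition built from nothing but the successor, as a minimal sketch:

def successor(n):
    return n + 1  # stand-in for Peano's S

def add(a, b):
    # a + 0 = a ;  a + S(b) = S(a + b)
    return a if b == 0 else successor(add(a, b - 1))

print(add(2, 4))  # 6, reached by applying successor to 2 four times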
H: Can an injective function have unmapped elements of the domain? I know that a function is injective if every element of the domain maps onto at most one element of the co-domain, that is, if $f(x_{1}) = f(x_{2})$ implies $x_{1} = x_{2}$, or the contrapositive. However, is it allowable for an injective function to have some number of elements $n$ that don't map onto any values in the co-domain? It seems like, at most, there could be one unmapped element. Otherwise, with an unmapped $x_{1}$, $x_{2}$, $f(x_{1}) = f(x_{2}) = \varnothing$, implying that in fact $x_{1} = x_{2}$. Am I missing something here? AI: It is possible to have what is called a partial function from a set $A$ to a set $B$: this is a subset $f$ of $A\times B$ such that for each $a\in A$ there is at most one $b\in B$ such that $\langle a,b\rangle\in f$. In other words, $f$ is a function from some subset $D$ of $A$ to $B$. However, the domain of $f$ is that subset $D$, not $A$ itself (unless, of course, $D=A$, in which case it is called a total function on $A$): by definition $a$ is in the domain of $f$ if and only if there is some $b\in B$ such that $\langle a,b\rangle\in f$. A partial function can certainly be injective. Your last sentence contains a significant misunderstanding: $f(x)$ is undefined does not mean that $f(x)=\varnothing$. If $f(x)$ is undefined for some particular $x$, then every statement of the form $f(x)=y$ is false.
H: Neighbourhood of infinity In the extended complex plane, does there exist a neighbourhood of $\infty$ which contains the origin? My feeling is that there is no neighborhood of $\infty$ containing the origin. But this defies my intuition, since, for example, a neighborhood of any point in the complex plane can be made to include the origin by choosing the radius large enough! Any clarification, please. AI: Yes, such a neighbourhood exists: the complement of any compact set is a neighborhood of $\infty$. So $\mathbb C_{\infty} \setminus \{z: |z-2| \leq 1\}$ is a neighborhood of $\infty$, other than $\mathbb C_{\infty}$ itself, which contains $0$.
H: How to prove that this construction is a group homomorphism? Let $\phi:G \rightarrow H$ be a group homomorphism such that $M=\phi(G) \neq H$ and $M$ has at least 3 different cosets in $H$. Let $K$ be the group of all permutations of $H$. Choose 3 different cosets $M$, $Mh'$, $Mh''$ of $M$ in $H$ and define $\sigma$ in $K$ by $\sigma(xh'')=xh'$, $\sigma(xh')=xh''$ for $x \in M$, while otherwise $\sigma(h)=h$. Define $\psi,\psi':H \rightarrow K$ by $\psi(h) = \text{left multiplication by}\,\,\,h$, and $\psi'(h) = \sigma^{-1}\psi(h)\sigma$. I'm having a hard time proving that $\psi'$ is a group homomorphism; the only way I see to solve the problem is testing different cases for the argument, and there are a lot of cases. Is there an easier way to prove that $\psi'$ is a homomorphism? AI: If you remove all of the extraneous details from the question, it will become a lot simpler; the main content of the question is not related to $\phi$ or $M$ or the choice of $\sigma$. Stripping out these details, we have Let $H$ be a group and $K$ the symmetric group on $H$. Fix any permutation $\sigma \in K$. Let $\psi: H \rightarrow K$ be the map sending $h$ to the permutation corresponding to left multiplication by $h$. Prove that the map $\psi'$ defined by $$\psi'(h) = \sigma^{-1}\psi(h)\sigma$$ is a homomorphism. One can prove this in two ways. First, a direct proof: $\psi'(e) = \sigma^{-1}\psi(e)\sigma = \sigma^{-1}\sigma = e$ (since $\psi(e)$ is the identity permutation) $\psi'(ab) = \sigma^{-1}\psi(ab)\sigma = \sigma^{-1}\psi(a)\psi(b)\sigma = \sigma^{-1}\psi(a)\sigma\sigma^{-1}\psi(b)\sigma = \psi'(a)\psi'(b)$ Second: observe that conjugation by $\sigma$ induces a homomorphism $K\rightarrow K$, and that $\psi'$ is just the composition of that homomorphism with $\psi$. The composition of homomorphisms is a homomorphism, so you're done (this is implicit in the above).
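To make the stripped-down statement concrete, here is a brute-force check in Python with $H$ taken, purely as an example, to be $\mathbb Z/6$ under addition; the helper names (`compose`, `psi`, etc.) are mine, not part of the original problem.

```python
import random

n = 6                                   # H = Z/6 under addition, for example

def compose(p, q):                      # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(n))

def inverse(p):
    inv = [0] * n
    for x in range(n):
        inv[p[x]] = x
    return tuple(inv)

def psi(h):                             # "left multiplication" (here: addition) by h
    return tuple((h + x) % n for x in range(n))

sigma = tuple(random.sample(range(n), n))   # an arbitrary fixed permutation of H

def psi_prime(h):
    return compose(inverse(sigma), compose(psi(h), sigma))

assert all(psi_prime((a + b) % n) == compose(psi_prime(a), psi_prime(b))
           for a in range(n) for b in range(n))
print("psi' is a homomorphism for this sigma")
```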
H: Help with a differential equation system Given $x' = -x$ and $y' = -4x^3+y$, we want to linearize and show the phase portrait at the origin. So I make the system $\vec{Y}' = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}\vec{Y}$ by just scrapping the $-4x^3$ term. But now we have a repeated $0$ eigenvalue, so I try to find an eigenvector. $\left[ \begin{pmatrix} -1 & 0 \\ 0 &1 \end{pmatrix} - \begin{pmatrix} 0 & 0 \\ 0 & 0\end{pmatrix} \right]\begin{pmatrix}v_1 \\ v_2\end{pmatrix} = \begin{pmatrix}0 \\ 0\end{pmatrix} \implies v_1 = v_2 = 0$. So $\vec{v} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$? Unless I am mistaken. What kind of eigenvector is this? I can't think of how to draw a phase portrait, thanks! AI: The assertion that the eigenvalues are both zero is erroneous. The coefficient matrix $A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \tag 1$ has eigenvector $(1, 0)^T$, with eigenvalue $-1$, and eigenvector $(0, 1)^T$, with eigenvalue $1$, as is easily checked, e.g. $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -1\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \tag 2$ with a similar calculation for the eigenvector $(0, 1)^T$. Since the eigenvalues are real and of opposite sign, the point $(0, 0)$ is a saddle, as corroborated by the phase portrait, which is easily drawn.
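If a picture helps, here is one possible matplotlib rendering of the phase portrait of the original nonlinear system; the plot indeed shows saddle behaviour at the origin.

```python
import numpy as np
import matplotlib.pyplot as plt

# Vector field of the nonlinear system x' = -x, y' = -4x^3 + y
x, y = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
u = -x
v = -4 * x**3 + y

plt.streamplot(x, y, u, v, density=1.2)
plt.axhline(0, color='k', lw=0.5)
plt.axvline(0, color='k', lw=0.5)
plt.title("x' = -x,  y' = -4x^3 + y: saddle at the origin")
plt.show()
```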
H: How many elements of order $2$ does Sym $6$ have? First, I will answer the following question: ''How many elements of order $2$ does Sym $5$ have?'' The answer is: $(12),(13),(14),(15),(23),(24),(25),(34),(35),(45),(12)(34),(12)(35),(12)(45),(13)(24),(13)(25),(13)(45),(14)(23),(14)(25),(14)(35),(15)(23),(15)(24),(15)(34),$ that is, 22 elements of order $2$ are listed. I omitted 3 products of two transpositions; the correct number for $S_5$ is 25. Thanks @BrianM.Scott How many elements of order $2$ does Sym $6$ have? I could list them all by hand, but it would take too long. Is there an easier method to count them? Thanks... AI: One can always count all the elements of order $2$ in $S_6$. Since we have six elements to play with, it's clear that elements of order two look like $(ab)$, $(ab)(cd)$ or $(ab)(cd)(ef)$. First we find all the elements that look like $(ab)(cd)(ef)$. There are ${6\choose 2} = 15$ ways to create a 2-cycle $(ab)$. To get $(ab)(cd)$ we have ${4\choose 2}=6$ options left. Next, there's only one possible choice left to get $(ab)(cd)(ef)$. Finally, dividing out the $3!$ repetitions due to the ordering of the three 2-cycles, we get $$ \frac{15 \times 6 \times 1}{3!} = 15 $$ Similarly, for $(ab)(cd)$ we have $$ \frac{15 \times 6}{2!} = 45 $$ Finally, for $(ab)$ we have $$ {6\choose 2} = 15 $$ Therefore there are $15 + 45 + 15=75$ elements of order $2$ in $S_6$.
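The count is easily confirmed by brute force, since $S_6$ has only $720$ elements: a permutation has order $2$ exactly when it is a non-identity involution.

```python
from itertools import permutations

identity = tuple(range(6))
count = sum(1 for p in permutations(range(6))
            if p != identity
            and tuple(p[p[i]] for i in range(6)) == identity)
print(count)   # 75
```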
H: Contraposition with assumptions I was just doing a practice problem, and found myself in the following scenario, which I've abstracted to get at the logical question that I have. We want to prove: Given an assumption $A$, $B$ and $C$ cannot both be true at once. If we can show "$B\implies\text{not }C$", I would usually just take a contrapositive and conclude that "$C\implies\text{not }B$". However, I've really shown "$A\text{ and } B\implies \text{not }C$", and the contrapositive of this is not what I'm after. Further, if I try to take the contrapositive of "$A\text{ and } B\implies A\text{ and not }C$" and use De Morgan's laws, I can't seem to produce anything like "$A\cap C\implies\text{not }B$". So, now I'm considering the possibility that contraposition does not always work in the presence of an ambient assumption $A$, but I'm struggling to find an example where it fails. I was wondering if anyone can point out my error or provide an example that would crystallize this for me. AI: You are asked to show that "If $A$ is true, then $B$ and $C$ cannot both be true at once". This means to show that $A \implies \neg (B \wedge C)$. Writing out the definition of implication, this is $\neg A \vee \neg (B \wedge C)$. Which is $\neg A \vee (\neg B \vee \neg C)$. Since disjunction/OR is associative and commutative, it doesn't matter what order you arrange them. You could also write this as $(\neg A \vee \neg B) \vee \neg C$ or as $(\neg A \vee \neg C) \vee \neg B$. These last two ways of writing are equivalent to "$A$ and $B$ $\implies$ not $C$" and "$A$ and $C$ $\implies$ not $B$", respectively.
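Since there are only $2^3 = 8$ truth assignments to $A$, $B$, $C$, the equivalence of the three groupings can be checked exhaustively; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

for A, B, C in product([False, True], repeat=3):
    goal = implies(A, not (B and C))   # A => not(B and C)
    v1 = implies(A and B, not C)       # (A and B) => not C
    v2 = implies(A and C, not B)       # (A and C) => not B
    assert goal == v1 == v2
print("all three forms agree on every truth assignment")
```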
H: How do we prove that $\sup_{n\geq 1}f_{n}$ is a measurable function when each term $f_{n}$ is measurable? Proposition For each $n\in\mathbb{N}$, let $f_{n}:(\Omega,\mathcal{F})\to(\overline{\mathbb{R}},\mathcal{B}(\overline{\mathbb{R}}))$ be a $\langle\mathcal{F},\mathcal{B}(\overline{\mathbb{R}})\rangle$-measurable function. Then $\sup_{n\in\mathbb{N}}f_{n}$ is $\langle\mathcal{F},\mathcal{B}(\overline{\mathbb{R}})\rangle$-measurable. Proof Let $g = \sup_{n\geq 1}f_{n}$. To show that $g$ is $\langle\mathcal{F},\mathcal{B}(\overline{\mathbb{R}})\rangle$-measurable, it is enough to show that $\{\omega\in\Omega : g(\omega)\leq r\}\in\mathcal{F}$ for all $r\in\mathbb{R}$. Now, for any $r\in\mathbb{R}$, \begin{align*} \{\omega:g(\omega)\leq r\} & = \bigcap_{n=1}^{\infty}\{\omega:f_{n}(\omega)\leq r\}\\\\ & = \bigcap_{n=1}^{\infty}f^{-1}_{n}((-\infty,r])\in\mathcal{F} \end{align*} since $f^{-1}_{n}((-\infty,r])\in\mathcal{F}$ for all $n\geq 1$, by the measurability of $f_{n}$. My concerns I do not know how to interpret the symbol $\sup_{n\in\mathbb{N}}f_{n}$. As far as I have understood, for each $\omega\in\Omega$, $g(\omega) = \sup_{n\geq 1}f_{n}(\omega)$. That is to say, for each $\omega\in\Omega$, $\sup_{n\geq 1}f_{n}(\omega)$ is the least upper bound of the sequence $f_{n}(\omega)$. Is it correct to think so? If it is not the case, please let me know. Moreover, is there a more detailed way to write the proof? I've tried the following one. Since $g(\omega)\geq f_{n}(\omega)$ for every natural $n$, one has that \begin{align*} x\in\{\omega:g(\omega)\leq r\} \Rightarrow g(x)\leq r & \Rightarrow (\forall n\in\mathbb{N})(f_{n}(x)\leq g(x) \leq r)\\\\ & \Rightarrow (\forall n\in\mathbb{N})(f_{n}(x)\leq r)\\\\ & \Rightarrow x\in\bigcap_{n=1}^{\infty}\{\omega:f_{n}(\omega)\leq r\} \end{align*} Conversely, if $f_{n}(\omega)\leq r$ for every $n\in\mathbb{N}$, then $r$ is an upper bound for the sequence, so taking the sup one obtains $g(\omega) = \sup f_{n}(\omega)\leq r$. This means that \begin{align*} x\in\bigcap_{n=1}^{\infty}\{\omega:f_{n}(\omega)\leq r\} \Rightarrow x\in\{\omega: g(\omega)\leq r\} \end{align*} Hence we conclude that both sets are equal. AI: You are on the right track. Just need to use the properties of the supremum to get $$\{\sup_nf_n>a\}=\bigcup_n\{f_n>a\}$$ where $\{h>a\}:=\{\omega\in \Omega: h(\omega)>a\}$. If each $f_n$ is measurable, then each set $\{f_n>a\}$ is measurable, and so is the union of all of them. Recall that a real-valued function $g$ is (Borel) measurable iff $\{g>a\}$ is measurable.
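For intuition (not a substitute for the proof), the set identity can be checked numerically for a finite family of functions on a sample grid, where the sup is just a pointwise max; the family and threshold below are arbitrary choices.

```python
import numpy as np

omega = np.linspace(0, 1, 1001)                 # sample points standing in for Omega
fs = [np.sin(3 * omega), omega**2, 1 - omega]   # an arbitrary finite family
a = 0.5

sup_side = np.max(fs, axis=0) > a               # {sup_n f_n > a}
union_side = np.zeros_like(omega, dtype=bool)
for f in fs:                                    # U_n {f_n > a}
    union_side |= (f > a)
print(np.array_equal(sup_side, union_side))     # True
```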
H: Proving $(Y\cap Z)\cup (X \cap Z ) \cup (X \cap Y )= ((Y \cup Z) − (\bar{X} \cap \bar{Z})) \cap (X \cup Y)$ I'm trying to prove that $$(Y\cap Z)\cup (X \cap Z ) \cup ({X} \cap {Y} )= ((Y \cup Z) − (\bar{X} \cap \bar{Z})) \cap (X \cup Y)$$ using set identities. I have mainly been using De Morgan's laws and have gotten the left side to become $$\overline{((\bar{Y}\cup \bar{Z}) - (X \cap Z )) \cap (\bar{X} \cup \bar{Y})}$$ but I can't quite get it to become the right side, despite a lot of effort. AI: In general I find it easier to start on the more complicated side, which in this case is the righthand side. I’ll let $U$ be a universal set for the problem. First, $$\begin{align*} (Y\cup Z)\setminus\big((U\setminus X)\cap(U\setminus Z)\big)&=(Y\cup Z)\setminus\big(U\setminus(X\cup Z)\big)\\ &=(Y\cup Z)\cap(X\cup Z)\;, \end{align*}$$ so the righthand side reduces to $$(Y\cup Z)\cap(X\cup Z)\cap(X\cup Y)\;.$$ Apply a distributive law to pull $Z$ out of the first two terms to get $$\big((Y\cap X)\cup Z\big)\cap(X\cup Y)\;,$$ and then distribute the final $X\cup Y$ through the first term: $$\big((Y\cap X)\cap(X\cup Y)\big)\cup\big(Z\cap(X\cup Y)\big)\;.$$ Now use the fact that $Y\cap X\subseteq X\cup Y$, and hence $(Y\cap X)\cap(X\cup Y)=Y\cap X$, to get $$(Y\cap X)\cup\big(Z\cap(X\cup Y)\big)\;.$$ (Depending on just what identities you have available, this may require more than a single step.) One last application of a distributive law essentially finishes it off: $$(Y\cap X)\cup\big(Z\cap(X\cup Y)\big)=(Y\cap X)\cup(Z\cap X)\cup(Z\cap Y)\;.$$
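Identities like this can also be checked exhaustively over a small universe before attempting the algebraic proof, which is a useful way to catch transcription mistakes; a brute-force Python sketch, where the universe $U=\{0,1,2,3\}$ is an arbitrary choice and complements are taken inside $U$:

```python
from itertools import chain, combinations

U = {0, 1, 2, 3}
subsets = [set(s) for s in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

for X in subsets:
    for Y in subsets:
        for Z in subsets:
            lhs = (Y & Z) | (X & Z) | (X & Y)
            rhs = ((Y | Z) - ((U - X) & (U - Z))) & (X | Y)
            assert lhs == rhs
print("identity holds for all", len(subsets) ** 3, "triples")
```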
H: For $n \in \Bbb{Z}_{>0}$, let $(3+i)^n = a_n+ib_n$, where $a_n, b_n \in \Bbb{Z}$. Find expressions for $a_{n+1}$ and $b_{n+1}$ as linear combinations of $a_n$ and $b_n$ with coefficients independent of $n$. With some of your comments, I see $a_{n+1} +ib_{n+1} = (a_n+ib_n)(3+i) = 3a_n + ia_n + i3b_n-b_n$. So the imaginary parts have to be equal, which means that $b_{n+1} = a_n +3b_n$, and the real parts have to be equal, so $a_{n+1} = 3a_n - b_n$, right? That settles the first question, I believe. Show that for $n\geq 1$, $a_n \equiv3\pmod 5$ and $b_n \equiv 1\pmod 5$ Here we know that for $n = 1$, $(3+i) = a_1 + ib_1$, so $a_1 = 3$ and $b_1 = 1$, which means that $a_1 \equiv 3\pmod 5$ and $b_1 \equiv 1\pmod 5$. We can use these as our base case, and assuming $a_n \equiv 3$ and $b_n \equiv 1 \pmod 5$, we see that $a_{n+1} = 3a_n - b_n \equiv 3\cdot 3-1 = 8 \equiv 3\pmod 5$ and $b_{n+1} = a_n + 3b_n \equiv 3+3\cdot 1 = 6 \equiv 1\pmod 5$. [Thank you to J.W. Tanner for your help with this] AI: Hint: $a_{n+1}+i b_{n+1} = (a_n+ib_n)(3+i)$.
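Both the recurrence and the congruences are easy to machine-check for small $n$; a short Python sketch, where the complex power serves as an independent floating-point computation (rounding absorbs the tiny float error at these sizes):

```python
a, b = 3, 1                       # (3+i)^1 = 3 + 1i
z = complex(3, 1)
for n in range(1, 21):
    w = z ** n                    # independent floating-point computation
    assert (a, b) == (round(w.real), round(w.imag))
    assert a % 5 == 3 and b % 5 == 1
    a, b = 3 * a - b, a + 3 * b   # the recurrence, stepping to (3+i)^(n+1)
print("recurrence and congruences hold for n = 1..20")
```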
H: Question on order of perfect shuffles Imagine you have a stack of $n$ chips, $n$ even, where the bottom half is blue and the top half is red. You split the stack equally and perform a perfect shuffle where the lowest blue chip remains on the bottom and hence the top red chip stays on the top. How many shuffles does it take to get all the blue chips on the bottom and all the red chips on top again? One idea I had was to look at the order of the permutation at hand. Let $G$ be the group of permutations of $n$ objects, with $\phi \in G$. Then the permutation $\phi$ that mirrors the question is: $$\phi: (1,2,3,\dots,(n-2),(n-1),n) \mapsto (1, (n/2+1),2,\dots,(n-1),(n/2),n)$$ I tried to find the order of $\phi$ by writing it in some general cyclic notation, but I couldn't seem to figure it out. Also, the question has the subtlety that just the colors have to be reordered, not the actual original ordering of the chips. Any idea on how to go about solving this? AI: I don't think that one can restore the colours without actually restoring each chip to its original position. Label the chips $0,1,\ldots,2n-1$ (so that, in this answer's convention, the stack has $2n$ chips). The shuffle fixes chips $0$ and $2n-1$ (forget these then) and moves any other chip $j$ to $2j$ considered modulo $2n-1$. So $k$ shuffles take $j$ to $2^kj$ considered modulo $2n-1$. So the smallest number of shuffles needed to restore all chips to the original position is the multiplicative order of $2$ modulo $2n-1$. Suppose that one has just restored the colours. Write $A=\{1,2,\ldots,n-1\}$. Then $2^ka$ must lie in $A$ modulo $2n-1$ for all $a\in A$. In particular $2^k\equiv c\pmod{2n-1}$ where $c\in A$. We need to show that $c>1$ leads to a contradiction. Let $dc$ be the smallest multiple of $c$ that is $>(n-1)$. As $c>1$, $d\le n-1$ so $d\in A$, but as $(d-1)c\le n-1$ and $c\le n-1$, $dc\le 2n-2$. Thus this putative permutation maps $d\in A$ to $dc\notin A$, giving the required contradiction.
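The whole answer can be checked by simulation: restore times for small stacks match the multiplicative order of $2$ modulo $2n-1$. A Python sketch, using the answer's convention of $2n$ chips labelled $0,\dots,2n-1$ (the function names are mine, chosen for illustration):

```python
def out_shuffle(stack):           # index 0 = bottom of the stack
    n = len(stack) // 2
    bottom, top = stack[:n], stack[n:]
    mixed = []
    for pair in zip(bottom, top): # bottom chip stays bottom, top chip stays top
        mixed += list(pair)
    return mixed

def order_of_two(m):              # multiplicative order of 2 mod m
    k, x = 1, 2 % m
    while x != 1:
        x, k = (2 * x) % m, k + 1
    return k

for n in range(2, 12):            # stacks of 2n = 4, 6, ..., 22 chips
    start = list(range(2 * n))
    stack, shuffles = out_shuffle(start), 1
    while stack != start:
        stack, shuffles = out_shuffle(stack), shuffles + 1
    assert shuffles == order_of_two(2 * n - 1)
print("restore time equals ord(2 mod 2n-1) for all tested stack sizes")
```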
H: Metric space where every continuous real-valued function is bounded, but does not attain its bound Is there any metric space on which every continuous real-valued function is bounded, but there exists one such function that does not attain its bounds? AI: Assuming that you are talking about real-valued functions, there is not. A space on which every continuous real-valued function is bounded is called pseudocompact, and every pseudocompact metric space is compact. Finally, every continuous real-valued function on a compact space attains its maximum and minimum values.