H: Derivation of an integral function in $L^p$ I know that for any continuous function $f:[0,1]\to\mathbb{R}$ $$\frac{d}{dx} \int_0^x f(y)dy = f(x).$$ Let's say that $f\in L^p$ for $p>1$. Can I say that the equality still holds almost everywhere? If not, what is the largest subset of $L^p$ such that the equality holds almost everywhere? AI: By Lebesgue's differentiation theorem, this is true for all $f \in L^1$. As $L^p([0,1]) \subset L^1([0,1])$, your conjecture holds.
H: Prove a relation $\mathcal R$ is reflexive if and only if its complement $\overline{\mathcal R}$ is irreflexive (strict). Given a homogeneous binary relation $\mathcal R$ over a set $A$, $\mathcal{R}$ is reflexive if: $$\forall a \in A:(a,a) \in \mathcal R$$ Prove a relation $\mathcal R$ is reflexive if and only if its complement $\overline{\mathcal R}$ is irreflexive (strict). $\Longrightarrow$ By the definition of complement relation: $$\forall a,b \in A :(a,b) \in \mathcal R \implies (a,b) \notin \overline{\mathcal R}$$ Taking $a=b$ follows: $$\forall a \in A :(a,a) \in \mathcal R \implies (a,a) \notin \overline{\mathcal R}$$ Which is true since $\mathcal R$ is reflexive. $\Longleftarrow$ By the definition of complement relation: $$\forall a,b \in A :(a,b) \in \overline{\mathcal R} \implies (a,b) \notin \mathcal R$$ Taking $a=b$ follows: $$\forall a \in A :(a,a) \in \overline{\mathcal R} \implies (a,a) \notin \mathcal R$$ Since $\overline{\mathcal R}$ is irreflexive, $\forall a \in A :(a,a) \in \overline{\mathcal R}$ is never true, and hence its negation is always true for all $a \in A$; however, I still cannot finish the proof. Another way is using a contradiction argument: assume $\overline{\mathcal R}$ is irreflexive, but $\mathcal R$ is not reflexive, i.e.: $$\forall a \in A :(a,a) \notin \overline{\mathcal R}$$ And $$\exists a \in A :(a,a) \notin \mathcal R$$ From here we see that there exists such an $a \in A$ satisfying the two conditions $(a,a) \notin \overline{\mathcal R}$ and $(a,a) \notin \mathcal R$, but do we end up with a contradiction? Can someone help me finish this proof? AI: The $\implies$ part of the proof is fine. Now for the $\Longleftarrow$ part: Use proof by contradiction. Suppose $\overline{\mathcal R}$ is irreflexive and $\mathcal R$ is not reflexive. By definition of complement, we know that $\mathcal{R} \cup \overline{\mathcal R} = A \times A$. 
Let $a \in A$ be such that $(a,a) \notin \overline{\mathcal R}$ (since $\overline{\mathcal R}$ is irreflexive) and $(a,a) \notin \mathcal{R}$ (since $\mathcal{R}$ is not reflexive, such an ordered pair must exist). From here we deduce that $(a,a) \notin \mathcal{R} \cup \overline{\mathcal R} = A \times A$, which is a contradiction, since certainly $(a,a) \in A \times A$. $\square$
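The equivalence can also be sanity-checked by brute force: iterate over every one of the $2^9$ relations on a three-element set and compare the two properties (the helper names below are ad hoc, not from the answer).

```python
from itertools import chain, combinations

A = {0, 1, 2}
pairs = [(a, b) for a in A for b in A]

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_reflexive(R):
    return all((a, a) in R for a in A)

def is_irreflexive(R):
    return all((a, a) not in R for a in A)

# For every relation R on A: R reflexive  <=>  complement of R irreflexive
ok = all(
    is_reflexive(set(R)) == is_irreflexive(set(pairs) - set(R))
    for R in powerset(pairs)
)
print(ok)  # True
```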
H: 2 Cross Products? Usually, if we want to find the cross product of 2 vectors $\vec{b}$ and $\vec{c}$, we want to find the vector which is perpendicular to both of them. Let's say the cross product of $\vec{b}$ and $\vec{c}$ is $\vec{d}$. Isn't $-\vec{d}$ then also perpendicular to $\vec{b}$ and $\vec{c}$? Does that mean that there are 2 cross products, or am I making a mistake? AI: The cross product of $\vec b$ and $\vec c$ is defined as the vector with the following properties: The length of the product is equal to $|\vec b|\cdot|\vec c|\cdot\sin(\alpha)$, where $\alpha$ is the angle between the two vectors. The product is perpendicular to both $\vec b$ and $\vec c$. The direction of the product is such that it follows the right hand rule. The last point ensures that the cross product is uniquely defined by $\vec b$ and $\vec c$. That is, of the two vectors that satisfy points 1 and 2, only one of them satisfies point 3. Note that there are many interpretations of the right hand rule, from (literally) hand-wavy ones, to (for the purpose of this question) circular ones (i.e., one way to define the right hand rule would be to say that it is defined by the direction of the cross product). Let's strike a balance then and define the right hand rule as such: If $\vec a \times \vec b=\vec c$, then, looking onto the plane spanned by $\vec a$ and $\vec b$ from the positive side (i.e., from the side into which $\vec c$ points), the angle required to rotate $\vec a$ into $\vec b$ is smaller than the angle required to rotate $\vec b$ into $\vec a$.
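A quick numerical illustration of the answer (the sample vectors are arbitrary): both $\vec d$ and $-\vec d$ are perpendicular to $\vec b$ and $\vec c$, but the sign of the scalar triple product $\vec b\cdot(\vec c\times\vec d)$, i.e. $\det[\vec b\,\vec c\,\vec d]$, is positive only for the right-handed choice.

```python
def cross(b, c):
    return (b[1]*c[2] - b[2]*c[1],
            b[2]*c[0] - b[0]*c[2],
            b[0]*c[1] - b[1]*c[0])

def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def det3(u, v, w):          # scalar triple product u . (v x w)
    return dot(u, cross(v, w))

b, c = (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)
d = cross(b, c)                          # (6, -3, 1)
minus_d = tuple(-x for x in d)

# Both candidates are perpendicular to b and c ...
perp = all(abs(dot(v, w)) < 1e-12 for v in (d, minus_d) for w in (b, c))
# ... but only d makes (b, c, d) a positively oriented (right-handed) triple.
right_handed = det3(b, c, d) > 0 and det3(b, c, minus_d) < 0
```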
H: Grad Function Directional Derivative Let $h(x, y)$ be some function from $\mathbb{R}^2$ to $\mathbb{R}$ which outputs height of a hill at a given point $(x, y)$. At a given moment, I am travelling up the hill with velocity $\mathbf{v}$ at angle $\theta$ to $\nabla h$. How do I prove that the rate at which my height is increasing is $\mathbf{v} \cdot \nabla h$? My progress: I know that for some 2D vector $\mathbf{a}$, the expression $\mathbf{a} \cdot \nabla h$ will give me the slope of hill in direction of $\mathbf{a}$. So if $\mathbf{\hat{v}}$ is the unit vector in direction of $\mathbf{v}$, then $\mathbf{\hat{v}} \cdot \nabla h$ will give me the slope of hill in the direction of the velocity. Since $\mathbf{v} \cdot \nabla h = (\left|\mathbf{v}\right|)(\mathbf{\hat{v}} \cdot \nabla h)$, the equation is essentially saying that the rate at which my height is increasing is speed multiplied by my slope. But this equation is clearly incorrect. Where am I going wrong? AI: If you write up the derivatives explicitly, you may read $$ \mathbf{v} \cdot \nabla h = \nabla h \cdot \mathbf{v} = \frac{\partial h}{\partial x} \frac{\operatorname{d} x}{\operatorname{d} t} + \frac{\partial h}{\partial y} \frac{\operatorname{d} y}{\operatorname{d} t},$$ which is exactly the rate of change $ \frac{\operatorname{d} h}{\operatorname{d} t}$ by chain rule.
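A finite-difference check of the chain-rule identity in the answer, on a hill $h$ chosen purely for illustration: the rate of height change along a path with velocity $\mathbf v$ matches $\mathbf v\cdot\nabla h$.

```python
import math

def h(x, y):                 # a sample "hill", chosen only for illustration
    return math.exp(-(x*x + y*y)) + 0.5 * x

def grad_h(x, y, eps=1e-6):  # numerical gradient via central differences
    return ((h(x + eps, y) - h(x - eps, y)) / (2 * eps),
            (h(x, y + eps) - h(x, y - eps)) / (2 * eps))

# Position (x(t), y(t)) = (0.3 + 0.2 t, -0.1 + 0.7 t), so v = (0.2, 0.7)
x0, y0, vx, vy = 0.3, -0.1, 0.2, 0.7
dt = 1e-6
rate_fd = (h(x0 + vx*dt, y0 + vy*dt) - h(x0 - vx*dt, y0 - vy*dt)) / (2*dt)

gx, gy = grad_h(x0, y0)
rate_dot = vx*gx + vy*gy     # v . grad h
```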
H: Condition of positive definiteness based upon diagonal elements of the original and inverse matrices This is a sequel to this question in which I sought to expand on this question. Let me put it straight. Given a non-singular symmetric real matrix $A\in\mathbb{R}^{n\times n}$ such that $A_{ii}>0$. Can we conclude that $A$ is positive definite if $$(A^{-1})_{ii}\ge \frac1{A_{ii}}$$ holds for all $1\le i\le n$? AI: No. Random counterexample: $$ A=\pmatrix{ 2& 3&-3\\ 3& 2&-3\\-3&-3& 4}, \ A^{-1}=\frac12\pmatrix{1&3&3\\ 3&1&3\\ 3&3&5}. $$ $A$ is nonsingular as its determinant is $-2$, but $A$ isn't positive definite as it has an eigenvector $(1,-1,0)^T$ corresponding to the eigenvalue $-1$.
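The counterexample can be verified exactly in rational arithmetic; the snippet below (helper names are ad hoc) confirms the stated inverse, $\det A=-2$, the diagonal hypotheses $A_{ii}>0$ and $(A^{-1})_{ii}\ge 1/A_{ii}$, and the eigenvector $(1,-1,0)^T$ for eigenvalue $-1$.

```python
from fractions import Fraction as F

A    = [[F(2), F(3), F(-3)], [F(3), F(2), F(-3)], [F(-3), F(-3), F(4)]]
Ainv = [[F(1,2), F(3,2), F(3,2)], [F(3,2), F(1,2), F(3,2)], [F(3,2), F(3,2), F(5,2)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
inverse_ok = matmul(A, Ainv) == identity
det_A = det3(A)                                              # -2, so nonsingular

# The hypotheses of the question hold ...
hyp = all(A[i][i] > 0 and Ainv[i][i] >= 1 / A[i][i] for i in range(3))

# ... yet A is not positive definite: v = (1,-1,0) satisfies A v = -v.
v = [F(1), F(-1), F(0)]
Av = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
not_pd = Av == [-x for x in v]
```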
H: Conditions for the existence of a lower bound of the operator norm $\|A^*A\|$ for a linear and continuous operator $A:X \to Y$ Setting: $X$, $Y$ Banach (or Hilbert) spaces, $A: X \to Y$ linear, continuous, injective, $A(X)$ is dense in $Y$ ($A$ is an imbedding operator in a Gelfand triple), $A^*$ is its adjoint Find: Conditions on $A$ for the existence of a constant $C>0$ such that $\|A^*A\|=\sup_x \frac{\|A^*Ax\|_Y}{\|x\|_X} \geq C$. You are free to propose additional assumptions, e.g. compactness of $A$. I imagine you could use a spectral theorem to find a lower bound with eigenvalues, afaik it was $\|A^*A\|\geq |\lambda|$ for $\lambda \in \sigma(A^*A)$. Or the optimal lower bound is the spectral radius. I do not really want to assume self-adjointness. Of course, unitary operators (i.e. $A^*A=Id$) are a class of operators with this feature. I don't know if it helps, but since $A(X)$ is dense in $Y$, we know that $A^*$ and $A^*A$ are injective AI: If $C$ can depend on $A$ we can take $C=\|A^{*}A\|$. You cannot have $C$ independent of $A$. For example take $K$ to be an injective compact operator with dense range and let $A_n =\frac 1 n K$. Your inequality obviously fails for large $n$.
H: Completeness of $\ell^1$? I am a bit confused with how $\ell^1$ can be complete. So we know that the sequence space $\ell^1$, equipped with $||\cdot||_{\ell^1}$, is complete. But the sequence of sequences $$\left(x^{(n)}_k\right)_{k\in\mathbb{N}} = \left(\frac{1}{k^{1+1/n}}\right)_{k\in\mathbb{N}},$$ with norm $||x^{(n)}||_{\ell^1}=\zeta(1+1/n)<\infty$ is in $\ell^1$ for each $n\in\mathbb{N}$, but the limit $$\left(x_k\right)_{k\in\mathbb{N}} = \left(\frac{1}{k}\right)_{k\in\mathbb{N}}$$ is not in $\ell^1$, as $||x||_{\ell^1}=\infty$. Could anybody point out where I am going wrong? Any help is very much appreciated. AI: It is true indeed that, for each $k\in\Bbb N$, $\lim_{n\to\infty}\dfrac1{k^{1+1/n}}=\dfrac1k$. However, this is only coordinatewise convergence; it doesn't follow from it that your sequence converges to $\left(\frac1k\right)_{k\in\Bbb N}$ in the $\ell^1$ norm. That wouldn't even make sense, since, as you wrote, this sequence does not belong to $\ell^1$. Completeness only concerns Cauchy sequences, and $\left(x^{(n)}\right)_{n\in\Bbb N}$ is not Cauchy in $\ell^1$: for $n>m$ one has $||x^{(n)}-x^{(m)}||_{\ell^1}=\zeta(1+1/n)-\zeta(1+1/m)$, which is unbounded as $n\to\infty$ with $m$ fixed.
H: number of pairs - combinatorics if I want to know the number of pairs in a group of 2n elements what is the difference between: a. ${2n}\choose{2}$ b. $\frac{(2n)!}{2!^n\cdot n!}$ c.${2n}\choose{2}$ ${2n-2}\choose{2}$... ${2}\choose{2}$ I do not understand the differences between these options, many thanks. AI: For example, let $n=3$, so that $2n=6$. a) counts the number of ways in which you can choose two elements out of $2n$. Here, we are counting the list: $\{1,2\}, \{1,3\}, \{1,4\}, \{1,5\}, \{1,6\}, \{2,3\}, \{2,4\},\dots$ It answers a question such as "You have $2n$ people in a class. How many possible ways can you choose two of them to be co-class presidents?" Note how we have chosen a total of only two people overall here. Also notice how it doesn't matter which person was "first" versus "second." They are co-class presidents. Not one president and one vice president. b) counts the number of ways in which you can partition the set of $2n$ elements into $n$ unlabeled parts, each of size two. Here, we are counting the list: $\{\{1,2\},\{3,4\},\{5,6\}\}, \{\{1,2\},\{3,5\},\{4,6\}\}, \{\{1,2\},\{3,6\},\{4,5\}\}, \{\{1,3\},\{2,4\},\{5,6\}\},\dots$ It answers a question such as "You have $2n$ people in a class. How many ways can they pair up to have each group work on the same project?" Here, notice that our pairs aren't named or distinguishable in any way except by who is in them. We don't care if Billy and his friend sit on the left of the room to work on their assignment or the right side of the room. All we care about is who Billy's partner happened to be and similarly how each other person was partnered, etc... c) counts the number of ways in which you can arrange the set of $2n$ elements into an ordered sequence of $n$ labeled parts, each of size two. Here, we are counting the list: $(\{1,2\},\{3,4\},\{5,6\}), (\{1,2\},\{3,5\},\{4,6\}), (\{1,2\},\{3,6\},\{4,5\}), (\{1,2\},\{4,5\},\{3,6\}),\dots$ It answers a question such as "You have $2n$ people in a class. 
You have $n$ different tasks you want accomplished. In how many possible ways can you assign different pairs of students to these different tasks where each student gets assigned to exactly one task each?" For example, you assign two students to be co-class presidents, two students to clean the chalkboard, two students to take out the trash, two students to feed the class hamster, etc... Unlike the first problem, we are needing to think about choosing more than just two students in total. Unlike the second problem, it matters where each pair was assigned. Note in particular the difference in connotation between using curly brackets $\{~\}$ and parentheses $(~)$. We consider order relevant within parentheses but not relevant within curly brackets. In particular, you have $\{\{1,2\},\{3,4\},\{5,6\}\}$ is equivalent to $\{\{3,4\},\{1,2\},\{6,5\}\}$ for instance however $(\{1,2\},\{3,4\},\{5,6\})$ is not equivalent to $(\{3,4\},\{1,2\},\{6,5\})$
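The three counts for $n=3$ can be checked by brute force (names below are illustrative): (a) gives $\binom{6}{2}=15$, (b) gives $6!/(2!^3\,3!)=15$ perfect matchings, and (c) gives $15\cdot 6\cdot 1=90$, which is exactly (b) times $n!$ because the pairs become ordered.

```python
import math
from itertools import combinations

n = 3                         # 2n = 6 people
people = tuple(range(2 * n))

# (a) choose one unordered pair
count_a = len(list(combinations(people, 2)))

# (b) perfect matchings (unlabeled pairs), counted by brute-force recursion
def matchings(remaining):
    if not remaining:
        return 1
    first, rest = remaining[0], remaining[1:]
    return sum(matchings(tuple(x for x in rest if x != partner))
               for partner in rest)

count_b = matchings(people)
formula_b = math.factorial(2 * n) // (2**n * math.factorial(n))

# (c) ordered sequences of pairs: C(6,2) * C(4,2) * C(2,2)
count_c = 1
for r in range(2 * n, 0, -2):
    count_c *= math.comb(r, 2)
```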
H: Evaluate $\int_{(0,\infty)^n}\text{Sinc}(\sum_{k=1}^nx_k) \prod_{k=1}^n \text{Sinc}(x_k) dx_1\cdots dx_n$ In this post @metamorphy established this remarkable result (here Sinc$(x)$ denotes $\frac{\sin(x)}x$): $$I(n)=\int_{(-\infty,\infty)^n}\text{Sinc}(\sum_{k=1}^nx_k) \prod_{k=1}^n \text{Sinc}(x_k) dx_1\cdots dx_n=\pi^n$$ The current problem is: What can we say about $$J(n)=\int_{(0,\infty)^n}\text{Sinc}(\sum_{k=1}^nx_k) \prod_{k=1}^n \text{Sinc}(x_k) dx_1\cdots dx_n=?$$ It's not hard to establish $J(1)=\frac \pi 2, J(2)=\frac {\pi^2}6$. Due to lack of enough symmetry, in general $J(n)$ can't be deduced from $I(n)$ directly. I tried to apply the method used in previous post but did not succeed. Any suggestion is appreciated. AI: The answer is surprisingly simple: $$\color{blue}{J(n)=\pi^n B_n}$$ for $n>1$, where $B_n$ are the Bernoulli numbers. Following the approach from the linked post, we consider (for $a_k,b_k,c_k>0$) $$\Xi=\int_{(0,\infty)^n}\left(\prod_{k=1}^n\frac{e^{-c_k x_k}\sin a_k x_k}{x_k}\right)\frac{\sin\sum_{k=1}^{n}b_k x_k}{\sum_{k=1}^{n}b_k x_k}\,dx_1\cdots dx_n;$$ this time, we cannot replace $e^{itb_k x_k}$ by $\cos tb_k x_k$, so we leave it as is and arrive at $$\Xi=\frac12\int_{-1}^1\prod_{k=1}^{n}\left(\frac{1}{2i}\log\frac{c_k+i(a_k-b_k t)}{c_k-i(a_k+b_k t)}\right)\,dt,$$ with the principal value of the logarithm. Our $J(n)$ is obtained at $a_k=b_k(=1)$ and $c_k\to 0$: $$J(n)=\frac{1}{2^{n+1}}\int_{-1}^1\left(\pi+i\log\frac{1+t}{1-t}\right)^n\,dt.$$ Now consider the exponential generating function (for $|z|$ small enough): \begin{align*} \sum_{n=0}^\infty J(n)\frac{z^n}{n!} &=\frac12\int_{-1}^1\exp\frac{z}{2}\left(\pi+i\log\frac{1+t}{1-t}\right)\,dt \\&=\frac{e^{\pi z/2}}{2}\int_{-1}^1(1+t)^{iz/2}(1-t)^{-iz/2}\,dt \\&=e^{\pi z/2}\mathrm{B}\left(1+\frac{iz}{2},1-\frac{iz}{2}\right) \\&=e^{\pi z/2}\frac{i\pi z/2}{\sin(i\pi z/2)}=\frac{\pi z}{1-e^{-\pi z}}. 
\end{align*} It just remains to recall that $z/(e^z-1)=\sum_{n=0}^\infty B_n z^n/n!$, and that $B_n=0$ for odd $n>1$.
H: Splitting the set $A=\{1,2,...,n\}$ into at most $m$ non-empty disjoint subsets, whose union is $A$ How many different ways do there exist to split the set $A=\{1,2,...,n\}$ into at most $m$ non-empty disjoint subsets, whose union is $A$. For example, if $m=3$ then we have the following: $n=1: \quad$ there is only 1 such subset; $n=2: \quad$ we have the following possible splittings: $\quad\quad\quad\quad\big\{\{1\}, \{2\}\big\}, \big\{\{1,2\}\big\}$ - $2$ in total; $n=3: \quad$ we have the following possible splittings: $\quad\quad\quad\quad\big\{\{1\}, \{2\},\{3\}\big\}, \big\{\{1,2\}, \{3\}\big\}, \big\{\{1,3\}, \{2\}\big\}, \big\{\{1\}, \{2,3\}\big\}, \big\{\{1,2,3\}\big\}$ - $5$ in total. AI: Let $S(n,k)$ be the so-called Stirling number of the second kind, which is the number of ways to partition a set of $n$ objects into exactly $k$ non-empty subsets. Hence the number of ways to split the set $\{1,2,...,n\}$ into at most $m$ non-empty disjoint subsets, whose union is $A$, is $$\sum_{k=0}^m S(n,k)=\sum_{k=0}^m\frac{1}{k!}\sum_{j=0}^k (-1)^j\binom{k}{j}(k-j)^n.$$
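The formula can be cross-checked against a brute-force count of set partitions (function names are ad hoc); for $m=3$ and $n=1,2,3$ both give the $1, 2, 5$ from the question.

```python
from math import comb, factorial

def stirling2(n, k):
    # S(n, k) via the inclusion-exclusion formula in the answer
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def partitions_at_most(n, m):
    # brute force: count set partitions of {0..n-1} into at most m blocks;
    # blocks are created in order of their minima, so each partition is counted once
    def rec(elems, blocks):
        if not elems:
            return 1
        first, rest = elems[0], elems[1:]
        total = 0
        for i in range(len(blocks)):                       # join an existing block
            total += rec(rest, blocks[:i] + [blocks[i] + [first]] + blocks[i+1:])
        if len(blocks) < m:                                # or open a new block
            total += rec(rest, blocks + [[first]])
        return total
    return rec(list(range(n)), [])

vals    = [partitions_at_most(n, 3) for n in (1, 2, 3)]
formula = [sum(stirling2(n, k) for k in range(0, 4)) for n in (1, 2, 3)]
```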
H: calculate $\oint_{|z|=1}z^{2018}e^{\frac{1}{z}}\sin\frac{1}{z}\text{dz}$ calculate $\oint_{|z|=1}z^{2018}e^{\frac{1}{z}}\sin\frac{1}{z}\text{dz}$ my try: $ \begin{array}{c} \oint_{|z|=1}z^{2018}e^{\frac{1}{z}}\sin\frac{1}{z}\text{dz}\\ \oint_{|z|=1}z^{2018}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(\frac{1}{z}\right)^{n}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(\frac{1}{z}i\right)^{n}-\left(-\frac{1}{z}i\right)^{n}}{2in!}\text{dz}\\ \oint_{|z|=1}z^{2018}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{z^{-n}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(-zi\right)^{-n}-\left(zi\right)^{-n}}{2in!}\text{dz}\\ \oint_{|z|=1}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{z^{-n+2018}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(-zi\right)^{-n}-\left(zi\right)^{-n}}{2in!}\text{dz}\\ \oint_{|z|=1}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{z^{-n+2018}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(-zi\right)^{-n}-\left(zi\right)^{-n}}{2in!}\text{dz}\\ {\displaystyle \mathop{Res\sum_{0}^{\infty}}}\frac{z^{-n+2018}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(-zi\right)^{-n}-\left(zi\right)^{-n}}{2in!}\text{}\\ {\displaystyle \mathop{Res\sum_{0}^{\infty}}}\frac{z^{-n+2018}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(z\right)^{-n}(-i)^{-n}-\left(z\right)^{-n}(i)^{n}}{2in!}\text{}\\ {\displaystyle \mathop{Res\sum_{0}^{\infty}}}\frac{z^{-2n+2018}\left((-i)^{-n}-i^{n}\right)}{2i(n!)^{2}}\text{}\\ -2n+2018=-1\\ n\notin\mathbb{N}\\ {\displaystyle \mathop{Res\sum_{0}^{\infty}}}\frac{z^{-2n+2018}\left((-i)^{-n}-i^{n}\right)}{2i(n!)^{2}}=0 \end{array}$ $\oint_{|z|=1}z^{2018}e^{\frac{1}{z}}\sin\frac{1}{z}\text{dz}=0$ edit: I took the advise in the comment now I get: $ {\displaystyle \mathop{Res\sum_{0}^{\infty}}}\frac{z^{-n+2018}}{n!}{\displaystyle \mathop{\sum_{0}^{\infty}}}\frac{\left(z\right)^{-k}(-i)^{-k}-\left(z\right)^{-k}(i)^{k}}{2ik!}\text{}\\{\displaystyle 
\mathop{Res\sum_{0}^{\infty}}}\frac{z^{-n-k+2018}\left((-i)^{-k}-i^{k}\right)}{2i(n!)(k!)}\text{}\\-n-k+2018=-1\\n+k=2019\\{\displaystyle \mathop{Res\sum_{n+k=2019;n,k=0}}}\frac{z^{-n-k+2018}\left((-i)^{-k}-i^{k}\right)}{2i(n!)(k!)}$ which I don't know how to sum ${\displaystyle \mathop{Res\sum_{k=0}^{2019}}}\frac{z^{-1}\left((-i)^{-k}-i^{k}\right)}{2i((2019-k)!)(k!)}={\displaystyle \mathop{Res\sum_{k=0}^{2019}}}\frac{\left((-i)^{-k}-i^{k}\right)}{2i((2019-k)!)(k!)}$ AI: If inside the unit disc is too nasty, we can try to move it to outside the unit disc. In other words, integrate it on the other side of the Riemann sphere using the chart $w=\frac1z$. So, with $w=\frac1z$, the unit circle $\lvert z\rvert=1$ is $\lvert w\rvert=1$, in the opposite direction. Everything else is just what you expect: $$ \int_{S^1}z^{2018}\exp(z^{-1})\sin(z^{-1})\,\mathrm{d}z =-\int_{S^1}w^{-2018}\exp(w)\sin(w)\cdot(-w^{-2})\,\mathrm{d}w. $$ Now you can calculate the RHS much more readily: $\exp(w)\sin(w)$ is holomorphic and we have $$ \exp(w)\sin(w)=\frac1{2i}\left[\exp((1+i)w)-\exp((1-i)w)\right] $$ so \begin{align*} &=\frac{1}{2i}\int_{S^1}\frac{\exp((1+i)w)-\exp((1-i)w)}{w^{2020}}\,\mathrm{d}w \\ &=\pi\cdot\text{coefficient of }w^{2019}\text{ in }\big[\exp((1+i)w)-\exp((1-i)w)\big]\\ &=\pi\frac{(1+i)^{2019}-(1-i)^{2019}}{2019!} \end{align*} and it is not hard to calculate powers of $1\pm i$ using the fact that $(1\pm i)/\sqrt{2}$ is an 8th root of unity.
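The same residue computation for a general exponent $m$ gives $\oint_{|z|=1} z^{m}e^{1/z}\sin(1/z)\,dz=\pi\,\frac{(1+i)^{m+1}-(1-i)^{m+1}}{(m+1)!}$. At $m=2018$ the value is far too small to check in floating point, but at $m=4$ it equals $-\pi i/15$, which can be verified by direct quadrature on the unit circle (the trapezoidal rule converges extremely fast for smooth periodic integrands); all names below are illustrative.

```python
import cmath, math

def contour_integral(m, N=4096):
    # trapezoidal rule for the contour integral of z^m e^(1/z) sin(1/z), z = e^(i theta)
    total = 0j
    for k in range(N):
        theta = 2 * math.pi * k / N
        z = cmath.exp(1j * theta)
        total += z**m * cmath.exp(1/z) * cmath.sin(1/z) * 1j * z
    return total * (2 * math.pi / N)

def predicted(m):
    # pi * ((1+i)^(m+1) - (1-i)^(m+1)) / (m+1)!
    return math.pi * ((1+1j)**(m+1) - (1-1j)**(m+1)) / math.factorial(m + 1)

val = contour_integral(4)
ref = predicted(4)          # -pi*i/15
```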
H: Is a square commutation matrix positive semidefinite? Let $A \in \mathbb{R}^{n \times n}$ and denote the commutation matrix, made up of 0 and 1 such that each row and each column has exactly one 1, as $K_{n} \in \mathbb{R}^{n^2 \times n^2}$, which is such that: \begin{equation} \operatorname{vec}(A^T) = K_{n} \operatorname{vec}(A) \end{equation} It is known (cf. Harville D.A., Matrix Algebra from a Statistician's Perspective) that such a matrix is symmetric, orthogonal, and has determinant $\pm 1$. Moreover, is it positive semidefinite? AI: $K_2=\pmatrix{1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&1}$ is indefinite. In fact, every commutation matrix except $K_1$ is indefinite. In particular, for every $\mathcal I=\{(j-1)n+i,\ (i-1)n+j\}$ with $i\ne j$, its principal submatrix $$ K_n(\mathcal I,\mathcal I)=\pmatrix{0&1\\ 1&0} $$ is indefinite.
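A direct check on $K_2$ (variable names are ad hoc): it is symmetric and orthogonal (indeed an involution), yet the quadratic form $x^TK_2x$ takes both signs, so it is indefinite.

```python
K2 = [[1, 0, 0, 0],
      [0, 0, 1, 0],
      [0, 1, 0, 0],
      [0, 0, 0, 1]]

def quad_form(M, x):
    return sum(x[i] * M[i][j] * x[j] for i in range(4) for j in range(4))

pos = quad_form(K2, [1, 0, 0, 0])    # positive value
neg = quad_form(K2, [0, 1, -1, 0])   # negative value -> indefinite

symmetric_ok = all(K2[i][j] == K2[j][i] for i in range(4) for j in range(4))
KK = [[sum(K2[i][k] * K2[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
orthogonal_ok = KK == [[int(i == j) for j in range(4)] for i in range(4)]  # K2^2 = I
```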
H: Smooth map between Riemannian manifolds of same dimension is local isometry iff. metric is preserved I am just starting to read Lee's "Riemannian Manifolds" and one of the first exercises in the text (2.7) is the following: given a smooth map $\phi:(M,g)\to (\bar{M},\bar{g})$, prove that for $\dim M=\dim\bar{M}$ we have that $\phi$ is a local isometry iff. $\phi^*\bar{g}=g$. Right to left is obvious. The converse statement, on the other hand, I'm not sure how to solve. I think I need to show that $\phi$ is a bijection, but how would I do this, only knowing that it is a local isometry? I feel like I'm making this more difficult in my head than it needs to be, and would appreciate if someone could offer a hint, or solution. Here, the definition given for a local isometry is: for each $p\in M$ there is a neighbourhood $U$ of $p$ such that $\phi |_U$ is an isometry. An isometry is defined to be a diffeomorphism such that $\phi^*\bar{g}=g$. AI: A local isometry is not always a bijection; consider for example the quotient map $\mathbb{R}\rightarrow S^1$, where $\mathbb{R}$ is endowed with the Euclidean metric and $S^1$ is the quotient of $\mathbb{R}$ by the translation $t(x)=x+1$. So you should not try to show $\phi$ is a bijection: it suffices to show $\phi$ is a local diffeomorphism around each point, which follows from the inverse function theorem, since $\phi^*\bar{g}=g$ forces each $d\phi_p$ to be a linear isometry, hence invertible when the dimensions agree.
H: Minimization Problem: Deriving Dual Problem Consider the following minimization problem $\min\{H(x,z) \equiv h_1(x) + h_2(z): Ax + Bz = c\}$, where $A \in \Bbb{R}^{m \times n}, B \in \Bbb{R}^{m \times p}$ and $c \in \Bbb{R}^{m}$ and $h_1, h_2$ are proper, closed and convex. To find the dual problem of the optimization problem, one can construct a Lagrangian: $L(x,z;y) = h_1(x)+h_2(z) + \langle y, Ax + Bz - c \rangle$ The dual objective function is therefore given by $q(y) = \min_{x, z} \{h_1(x) + h_2(z) + \langle y, Ax+Bz-c \rangle\}$ Apparently, the dual problem is then $\max_{y}\{-h_1^{*}(-A^{T}y)-h_2^{*}(-B^{T}y) - \langle c,y \rangle\}$ I guess that this is an application of some duality principle but I don't see how it exactly works. AI: Our problem is equivalent (with some domain qualification conditions) to $$\min_{x,z}\max_{y} L(x,z,y) =\max_y \Big\{\min_{x} \{ h_1(x) + \langle y, Ax\rangle \} + \min_z \{h_2(z) + \langle y, Bz\rangle\} -\langle y, c\rangle\Big\}$$ Now use the definition of the convex conjugate, $h^*(u)=\sup_x\{\langle u,x\rangle - h(x)\}$, so that $\min_x\{h_1(x)+\langle y,Ax\rangle\} = -h_1^*(-A^*y)$, to get $$\min_{x,z}\max_y L(x,z,y) = \max_y \{-h_1^*(-A^* y) -h_2^*(-B^*y) - \langle y,c\rangle\}$$
H: Show estimator is consistent For a special case of the gamma distribution: $$f(x)=\frac{x}{\theta^2}e^{-x/ \theta}$$ $$E(x) = 2\theta\\ V(x) = 2\theta^2$$ I find MLE of $\hat{\theta}$ to be $\sum_{i=1}^n \frac{x_i}{2n}$ To show consistency, I'd like to show that: $\lim_{n\to \infty} E(\hat{\theta}) = \theta$ $\lim_{n\to \infty} V(\hat{\theta}) = 0$ Focusing on $\lim_{n\to \infty} V(\hat{\theta}) = 0$: $V(\sum_{i=1}^n \frac{x_i}{2n}) = \frac{1}{4n^2}V(\sum_{i=1}^n x_i)$ I'm somewhat stuck here, it seems obvious that this limit approaches, $0$, however, I'm not sure if I need to break the variance further (say $V(x) = E(x^2)-E(x)^2$) to prove this I'd appreciate any help! AI: I assume that $x_1,\dots,x_n$ are independent, resulting in: \begin{align*} \lim_{n \to \infty} V(\hat{\theta}) = \lim_{n \to \infty} V(\sum_{i = 1}^n \frac{x_i}{2n}) = \lim_{n \to \infty} \frac{1}{4n^2}\sum_{i = 1}^n V(x_i) = \lim_{n \to \infty} \frac{1}{4n^2}\sum_{i = 1}^n 2\theta^2 = \lim_{n \to \infty} \frac{\theta^2}{2n} = 0 \end{align*}
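A Monte Carlo sanity check of the answer, assuming independent draws from this gamma density (shape 2, scale $\theta$); the parameter values below are arbitrary. The empirical variance of $\hat\theta=\sum x_i/(2n)$ should be close to $2\theta^2 n/(2n)^2=\theta^2/(2n)$, which indeed tends to $0$ as $n\to\infty$.

```python
import random

random.seed(0)
theta, n, reps = 1.5, 200, 1500

def theta_hat(n, theta):
    # x_i ~ Gamma(shape=2, scale=theta) has density (x/theta^2) e^(-x/theta)
    xs = [random.gammavariate(2, theta) for _ in range(n)]
    return sum(xs) / (2 * n)

estimates = [theta_hat(n, theta) for _ in range(reps)]
mean_est = sum(estimates) / reps
var_est = sum((e - mean_est)**2 for e in estimates) / reps

predicted_var = theta**2 / (2 * n)   # = V(x)/(4n) with V(x) = 2 theta^2
```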
H: Let $p(x)$ be a polynomial with integer coefficients. Show that if $p(2)=3$ and $p(3)=5$ then $p(n)\ne0$ for all integers $n$. Let $p(x)$ be a polynomial with integer coefficients. Show that if $p(2)=3$ and $p(3)=5$ then $p(n) \neq 0$ for all integers $n$. I did manage to solve it using the fact that $a-b | p(a)-p(b)$ but I found a more elegant solution online and didn't quite understand it and I am hoping that someone can help me understand it! If $p(n)=0$ then $p(n)\equiv0 \pmod 2$ as well. But either $n\equiv2 \pmod 2$ or $n\equiv3 \pmod 2$ and in both cases $p(n) \equiv 1 \pmod 2$. A contradiction. I understand that any number must either be divisible by $2$ or have a remainder of $1$ after division by $2$ but how did they conclude that this would imply that $p(n) \equiv 1 \pmod 2$ from this? Thanks in advance! AI: It is your proof in fancier language. It isn't more elegant, it's just a different packaging. You say that $a-b\mid p(a)-p(b)$, and also that for any $n$ you can make either $n-2$ or $n-3$ even, meaning either $p(n)-p(2)$ or $p(n) - p(3)$ must be even; since $p(2)=3$ and $p(3)=5$ are both odd, in either case $p(n)$ is odd. Thus $p(n)$ cannot possibly be $0$. The proof you found uses the exact same idea, but instead of saying, for instance, "either $p(n)-p(2)$ or $p(n) - p(3)$ must be even", they say either $p(n)\equiv p(2)\pmod 2$ or $p(n) \equiv p(3)\pmod 2$. Any other difference between the two proofs is, presumably, a similarly elementary rewriting. (I haven't seen the exact phrasings of the two proofs, so I can't be entirely certain of this, of course.)
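A small demonstration of the parity argument (the family below is one convenient way to generate examples, not part of the original question): every integer polynomial with $p(2)=3$, $p(3)=5$ can be written as $2x-1+(x-2)(x-3)q(x)$ for some integer polynomial $q$, and each such $p(n)$ is odd, hence nonzero.

```python
def p(x, q):
    # 2x - 1 interpolates p(2)=3, p(3)=5; adding (x-2)(x-3)q(x) preserves both values
    return 2*x - 1 + (x - 2)*(x - 3)*q(x)

qs = [lambda x: 0, lambda x: 1, lambda x: x**2 - 7, lambda x: 5*x + 2]

base_ok   = all(p(2, q) == 3 and p(3, q) == 5 for q in qs)
parity_ok = all(p(n, q) % 2 == 1 for q in qs for n in range(-50, 51))
```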
H: given a set of 3d points and their covariance matrices finding the mean point Say I have a set of observations of a 3d object in space. Knowing only the location of the observations my best guess for the location of the object would be the mean point. But let's say I have the covariance matrix for each point based on some noise model. How can I calculate the most likely location of the object using this new information? AI: Given covariance $\Lambda_i$ per observation $y_i$ we have: $P(\{y_{1}\cdots y_{n}\}|y_{mean})\sim e^{-\frac{1}{2}\sum(y_{i}-y_{mean})^{T}\Lambda_{i}^{-1}(y_{i}-y_{mean})}$ The log-probability is: $\log P \sim -{1\over 2}\sum(y_{i}-y_{mean})^{T}\Lambda_{i}^{-1}(y_{i}-y_{mean})$ Extremizing the log-probability with respect to the mean (maximum likelihood): $\sum\Lambda_{i}^{-1}y_{i}=\left(\sum\Lambda_{i}^{-1}\right)y_{mean}^{estimated}$ So that the maximum likelihood estimate of the center is $y_{mean}^{estimated}=\left(\sum\Lambda_{i}^{-1}\right)^{-1}\sum\Lambda_{i}^{-1}y_{i}=\Lambda^{tot}\sum\Lambda_{i}^{-1}y_{i}$, where $\Lambda^{tot}:=\left(\sum\Lambda_{i}^{-1}\right)^{-1}$. If we plug that into the probability expression we find that: $P(\{y_{1}\cdots y_{n}\}|y_{mean})\sim e^{-\frac{1}{2}(y_{mean}-y_{mean}^{estimated})^{T}(\Lambda^{tot})^{-1}(y_{mean}-y_{mean}^{estimated})}$ So we can interpret $\Lambda^{tot}$ as the covariance of our guess for the location itself!
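In the scalar special case $\Lambda_i=\sigma_i^2$ the formula reduces to the familiar inverse-variance weighted mean. A quick check with made-up numbers (all values below are illustrative): the weighted mean beats every candidate on a fine grid of the negative log-probability.

```python
# 1D special case: "covariances" are variances sigma_i^2
ys     = [1.0, 2.0, 0.5, 1.5]
sigma2 = [0.1, 1.0, 0.5, 2.0]

weights = [1 / s for s in sigma2]
y_hat = sum(w * y for w, y in zip(weights, ys)) / sum(weights)

def neg_log_prob(mu):
    # -log P up to constants: (1/2) sum (y_i - mu)^2 / sigma_i^2
    return 0.5 * sum((y - mu)**2 / s for y, s in zip(ys, sigma2))

grid = [i / 1000 for i in range(-1000, 3001)]
best_grid = min(grid, key=neg_log_prob)
```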
H: An Infinite non-nilpotent group whose every maximal subgroup is a normal subgroup. It is widely known that for every finite group $G$, $G$ is nilpotent if and only if every maximal subgroup of $G$ is a normal subgroup. But I don't know if there is an infinite non-nilpotent group whose every maximal subgroup is a normal subgroup. Thank you for your help. AI: Yes, there is. There is an example of a non-nilpotent group with the normalizer condition, i.e., $H<N_G(H)$ for all proper subgroups $H$. (The normalizer condition implies every maximal subgroup is normal: if $H$ is maximal, then $H<N_G(H)$ forces $N_G(H)=G$.) It was constructed by Heineken and Mohamed in 1968. The paper is A group with trivial centre satisfying the normalizer condition, J. Algebra 10, 368-376 (1968). Edit: This group is seriously wonky. From their paper, it has: $Z(G)=1$. $G'$ is of exponent $p$ and abelian. Every proper subgroup of $G$ is subnormal and nilpotent.
H: Matrix A,B,C 2/3 not invertible Let A,B,C be matrices such that the algebraic operations are defined. Question: Disprove the following statement (by giving a counterexample): If AB=C and 2 of the 3 matrices are not invertible, then the third is not either. My struggle: I can't think of such an example, could I maybe get a hint in the right direction? If you prefer, you can also post the answer in a comment. Thanks in advance AI: Let us consider $$A=\begin{bmatrix}1&0\\0&0\end{bmatrix}\:, B=\begin{bmatrix}1&0\\0&1\end{bmatrix}\:,C=\begin{bmatrix}1&0\\0&0\end{bmatrix}$$ Here $A$ and $C$ are not invertible, yet $B$ is. Note also that since $$\det(AB)=\det A\:\det B$$ it is impossible that $A$ and $B$ are not invertible and $C$ is invertible.
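The counterexample and the determinant remark, checked directly (helper names are ad hoc):

```python
# A and C singular, B invertible (the identity), with AB = C
A = [[1, 0], [0, 0]]
B = [[1, 0], [0, 1]]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

C = matmul2(A, B)
product_rule = det2(C) == det2(A) * det2(B)   # det(AB) = det A * det B
```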
H: Confusion with Lagrange Multipliers I am numerically solving an optimization problem of the form: Maximize $z$ subject to $f(\alpha,z)=c$. Using the method of Lagrange Multipliers I first write down the Lagrangian $$ \mathscr L(\alpha,z,\lambda)=z-\lambda(f(\alpha,z)-c), $$ for which setting the gradient equal to zero yields the system of equations $$ \begin{aligned} \lambda\partial_\alpha f(\alpha,z)&=0\\ \lambda\partial_z f(\alpha,z)&=1\\ f(\alpha,z) &=c. \end{aligned} $$ Here is my confusion: I have already proven that $\partial_z f(\alpha,z)>0$ for all $\alpha$ and $z$; thus, according to the second equation $\lambda$ will always be some positive constant. If this is the case, then why do I need the Lagrange multiplier at all? Wouldn't it suffice to simply solve the system $$ \begin{aligned} \partial_\alpha f(\alpha,z)&=0\\ f(\alpha,z) &=c. \end{aligned} $$ I proceeded to (numerically) solve this system of two equations and did indeed verify that the solution solves my maximization problem. So do I need the original system of three equations? What am I missing? AI: Your observations are correct, though they apply quite specifically to your problem. It is not uncommon for the method of Lagrange multipliers to yield equations that either you already knew, or are useless (like $0 = 0$). What is true in general is that you never have to use the method of Lagrange multipliers. It's always possible (perhaps not algebraically, but definitely numerically) to use the constraint to eliminate one of the variables, but this method may be disadvantageous for a couple reasons (it may complicate the calculations, for one). For your problem, in many cases we could use the constraint $f(\alpha, z) = c$ and solve for $z$ as a function of $c$ and $\alpha$ and then set the derivative of $z$ with respect to $\alpha$ to zero like in a normal one-variable extremization problem. These will lead to the exact same equations you have already deduced. The moral of the story? 
There is no hands-down most efficient way for solving many extremization problems; it will depend on the nature of the problem.
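A concrete numerical illustration of the reduced two-equation system, on a hypothetical constraint chosen so that $\partial_z f>0$ everywhere (the function and constants below are assumptions for the demo, not from the question): $\partial_\alpha f=2\alpha=0$ gives $\alpha=0$, and then $f(0,z)=c$ pins down the maximal $z$.

```python
def f(alpha, z):
    # d f / d z = 1 + 3 z^2 > 0 everywhere
    return z + z**3 + alpha**2

c = 2.0

def solve_z(alpha, lo=-10.0, hi=10.0):
    # f is strictly increasing in z, so bisection finds the unique z with f(alpha, z) = c
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(alpha, mid) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

z_star = solve_z(0.0)                        # reduced system: alpha = 0, z + z^3 = 2 -> z = 1

# Direct check: along the constraint set, z is largest at alpha = 0
alphas = [i / 100 for i in range(-200, 201)]
z_best = max(solve_z(a) for a in alphas)
```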
H: Unitary transformation: order of $U^{\dagger}$ and $U$ If $U$ is a unitary matrix and $U^{\dagger} A U$ a unitary transformation, then also $U A U^{\dagger}$ is a unitary transformation. But are $U^{\dagger} A U$ and $U A U^{\dagger}$ necessarily equal in the general case? AI: Counterexample: $$ U=\pmatrix{0&-1&0\\ 1&0&0\\ 0&0&1}, A=\pmatrix{1&0&0\\ 0&0&1\\ 0&1&0}, U^\ast AU=\pmatrix{0&0&1\\0&1&0\\ 1&0&0}, UAU^\ast=\pmatrix{0&0&-1\\0&1&0\\ -1&0&0}. $$
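The counterexample can be multiplied out directly (for a real matrix $U^\ast=U^T$; helper names are ad hoc), confirming that $U^\ast AU$ and $UAU^\ast$ differ:

```python
U  = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
A  = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
Ut = [[U[j][i] for j in range(3)] for i in range(3)]   # U real, so U* = U^T

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

UtAU = mm(mm(Ut, A), U)
UAUt = mm(mm(U, A), Ut)
```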
H: Prove: $\int_0^{\infty} \frac{\ln{(1+x)}\arctan{(\sqrt{x})}}{4+x^2} \, \mathrm{d}x = \frac{\pi}{2} \arctan{\left(\frac{1}{2}\right)} \ln{5}$ Prove: $$\int_0^{\infty} \frac{\ln{(1+x)}\arctan{(\sqrt{x})}}{4+x^2} \, \mathrm{d}x = \frac{\pi}{2} \arctan{\left(\frac{1}{2}\right)} \ln{5}$$ This might be a repeat question (I couldn't find a question of this here). If I'm being honest, I don't know the first step really... Maybe a clever integration by parts, substitution, differentiation under the integral sign, power series, or contour? If someone could give advice. AI: For $a>0$, $$\begin{aligned}I = \int_0^\infty {\frac{{\log (1 + x)\arctan \sqrt x }}{{{a^4} + {x^2}}}dx} &= \int_{ - \infty }^\infty {\frac{x}{{{a^4} + {x^4}}}\log (1 + {x^2})\arctan xdx} \\ &= -\Im \int_{ - \infty }^\infty {\frac{x}{{{a^4} + {x^4}}}{{\log }^2}(1 - ix)dx} \end{aligned}$$ The integrand is holomorphic on the upper half plane, the integral around the big semicircle tends to $0$, and calculating residues at $a\zeta, a\zeta^3$ (with $\zeta = e^{\pi i /4}$) gives $$ I= \frac{{ \pi }}{{2{a^2}}}\Im\left[ {{{\log }^2}(1 + a\zeta ) - {{\log }^2}(1 - a{\zeta ^3})} \right]$$ when $a=\sqrt{2}$, it becomes $\frac{1}{2}\pi\arctan(1/2)\log 5$.
H: Open set implies the theorem, but does the condition imply the set is open? Below is the theorem and my proof. Theorem : $(X,d)$ is a metric space and $U,V \subseteq X$ are sets with $U$ open, such that $U \cap V=\varnothing$. Then $U \cap \overline V=\varnothing$ as well. Proof : Suppose for the sake of contradiction that $U \cap \overline V \ne \varnothing$. Let $x \in U\cap \overline V$. Note that $x \notin U \cap V$ thus $x \in U$ and $x \in V'$ . As $U$ is open, there is $\varepsilon>0$ such that $B(x,\varepsilon)\subseteq U$. Also, $x \in V'$, hence $B(x, \varepsilon)\cap V \ne \varnothing$. But $B(x,\varepsilon)\cap V \subseteq U\cap V=\varnothing$, contradiction. It's all good, but I want to ask if whenever $U,V$ are such that $U\cap V=U\cap \overline V =\varnothing$, then $U$ is open? Till now I haven't found a counterexample, but I feel it's false. AI: If $U$ is such that $$\forall V \subseteq X: (U \cap V = \emptyset) \to (U \cap \overline{V} = \emptyset)\tag{1}$$ then indeed $U$ is open. If $U$ were not open, $V:=X\setminus U$ is not closed and so $U \cap V = \emptyset$ while $U \cap \overline{V} \neq \emptyset$ as witnessed by any point in the closure of $V$ that is not in $V$ (and hence in $U$), so $(1)$ does not hold for $U$.
H: Are singular foliations spanned by collinear vector fields equal? Let $M$ be a compact $n$-manifold (let's say with boundary, but this isn't too important), $X$ a vector field on $M$ and $f:M \to \mathbb{R}$ a non-zero function on $M$. My question: are the singular foliations spanned by $X$ and $fX$ equal? In other words, do the traces of trajectories of $X$ and $fX$ with the same starting point coincide? I tried a couple of simple examples and checked that they do coincide, and I did a heuristic argument which works in my favor: if $p \in M$ such that $X_p = 0$, then obviously the trajectories starting at $p$ of $X$ and $fX$ coincide (it's just the point $p$). Otherwise, there exists a chart centered at $p$ where $X = \frac{\partial}{\partial x_1}$, and so in this chart, $\phi_t^X(p) = (e^t,0,\dots,0)$. On the other hand, the flow of $fX$ starting at $p$ is locally the solution of the equation $$x' = (f(x) \cdot x_1, 0, \dots, 0), \hspace{5pt} x(0) = 0,$$and if we set $g(x_1) := f(x_1,0,\dots,0)$, then $x_1' = g(x_1)x_1$, $x_2 = \cdots = x_n = 0.$ If we define $$G(x_1) := \int \frac{dx_1}{g(x_1)x_1},$$ then, heuristically, $$x_1 = G^{-1}(t),$$ and both of the trajectories are contained in the $\{ x_2 = \cdots = x_n = 0\}$-part of the chart. However, I don't think this is a difficult question and I'd like to see a formal argument. I suspect that the only reason that I'm unable to resolve this is because of my (very) rusty knowledge of the theory of ODE's. AI: You'll need some additional smoothness properties of $X$ (and $f$), but with those in hand, the answer is yes. The reason for this answer is the uniqueness part of the existence and uniqueness theorem for solutions of ODE's (for example look at Theorem 2 here). That theorem says that if $X$ is $C^1$ then then for each initial condition $(x,v)$ ($x \in M$, $v \in T_x M$) the solution is unique over some open interval. 
Now the idea is to observe that any solution for $fX$ with initial condition $(x,v)$ can be reparameterized to give a solution for $X$ with the same initial condition $(x,v)$. To put this another way, by a change of variables one converts a solution for $fX$ into a solution for $X$. The key point is that a change of variable does not affect the image set of the solution, i.e. it does not affect the trajectory as a subset of $M$.
H: How to properly substitute my variable so that these two integrals are equivalent? I'm having troubles figuring out if what I'm doing is mathematically correct. I replaced the $\theta$ term in my multidimensional numerical integral by giving it a functional dependence of the angle $\beta$ such as $\theta\left(\beta\right)$. Now, the angle $\theta$ goes from $[0,\pi]$, while $\beta$ goes from $[0,\pi/2]$. I made the substitution in my integral and the only term that seems to go wrong is the $\sin(\theta)$. Usually I would integrate like this: $\int_0^{\pi}~f(\theta)\sin(\theta)~d\theta$, and now I'm trying to do this : $\int_0^{\pi/2}~f(\theta(\beta))~\sin(\theta(\beta))~d\beta$. Now, those two integrals should deliver me the same result, but they don't. I figured out that if I multiply the $\beta$-dependent integral by a factor of two or change the integration limits to $[-\pi/2,\pi/2]$, I end up with the desired result. So I am clearly failing with my substitution but I don't know what I'm doing wrong, or what am I not taking into account. Do I need to calculate my integration element $d\beta$ and the integration limits in another way? However, $\beta$ needs to be kept in this range $[0,\pi/2]$. AI: You need to apply the change-of-variables (substitution) rule, $\int_a^b f(x)\,dx=\int_{g^{-1}(a)}^{g^{-1}(b)}f(g(u))\frac{dg}{du}\,du$. So you need an extra factor of $\frac{d\theta}{d\beta}$ in your integrand.
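To see the missing Jacobian factor concretely, here is an editorial numerical sketch (not part of the original exchange). The substitution $\theta(\beta)=2\beta$ and the test integrand $f(\theta)=\theta^2$ are purely illustrative choices, since the post never specifies them; with this particular $\theta(\beta)$, dropping $d\theta/d\beta$ loses exactly the factor of two the asker observed.

```python
import math

# Hypothetical choices for illustration only: theta(beta) = 2*beta maps
# [0, pi/2] onto [0, pi], and f is just some smooth test integrand.
f = lambda t: t * t
theta = lambda b: 2.0 * b
dtheta_dbeta = 2.0

def midpoint(g, a, b, n=100000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

lhs = midpoint(lambda t: f(t) * math.sin(t), 0.0, math.pi)
# forgetting the Jacobian d(theta)/d(beta): off by a factor of 2 here
wrong = midpoint(lambda b: f(theta(b)) * math.sin(theta(b)), 0.0, math.pi / 2)
# including it: the two integrals agree
right = midpoint(lambda b: f(theta(b)) * math.sin(theta(b)) * dtheta_dbeta,
                 0.0, math.pi / 2)

assert abs(lhs - right) < 1e-6
assert abs(lhs - 2.0 * wrong) < 1e-6
```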
H: Estimating Error due to replacing the sum $\sum\limits_{n=1}^{\infty} \frac{1}{n!} (\frac{1}{2})^n$ by the first $n$ terms Question: Estimating Error due to replacing the sum $\sum\limits_{n=1}^{\infty} \frac{1}{n!} (\frac{1}{2})^n$ by the first $n$ terms All I can really say at this point is that the remainder $R_n$ when summing the first $n$ terms will be: $$R_n = a_n \left[\frac{1}{2}\frac{1}{n+1} + \left(\frac{1}{2}\right)^2\frac{1}{(n+1)(n+2)} + \left(\frac{1}{2}\right)^3\frac{1}{(n+1)(n+2)(n+3)} + \dots\right]$$ Where $a_n = \frac{1}{n!} (\frac{1}{2})^n$. Since $(n+1)(n+2)\cdots(n+m)\ge (n+1)^m$, I can bound $R_n$ by the following: $$R_n \le a_n\left[\frac{1}{2}\frac{1}{n+1} + \left(\frac{1}{2}\right)^2 \frac{1}{(n+1)^2} + \dots\right] = a_n\sum_{p\ge 1}\left(\frac{1}{2(n+1)}\right)^p$$ I am supposed to conclude that the error is bounded by $R_n \le \frac{a_n}{2n+1}$; can anyone help me make that conclusion? AI: Note that we have $$\begin{align} \sum_{m=n+1}^\infty \frac{1}{m!2^m}&=\frac{1}{n!2^n}\sum_{m=n+1}^\infty \frac{n!2^n}{m!2^m}\\\\ &=\frac{1}{n!2^n}\sum_{m=n+1}^\infty \frac{n!}{m!2^{m-n}}\\\\ &=\frac{1}{n!2^n}\sum_{m=1}^\infty \frac{n!}{(m+n)!2^{m}}\\\\ &\le \frac1{n!2^n}\sum_{m=1}^\infty \frac1{(n+1)^m2^m}\\\\ &=\frac1{n!2^n}\frac{\frac1{2(n+1)}}{1-\frac1{2(n+1)}}\\\\ &=\frac1{(2n+1)n!2^n} \end{align}$$
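As an editorial sanity check on the bound $R_n\le\frac{1}{(2n+1)\,n!\,2^n}=\frac{a_n}{2n+1}$ derived above, a few lines of Python compare the numerically computed tail against the bound and confirm the bound is also reasonably tight:

```python
import math

def tail(n, extra=60):
    # numerically (essentially exactly) the tail sum_{m=n+1}^infty 1/(m! 2^m)
    return sum(1.0 / (math.factorial(m) * 2.0 ** m)
               for m in range(n + 1, n + 1 + extra))

for n in range(1, 15):
    bound = 1.0 / ((2 * n + 1) * math.factorial(n) * 2 ** n)
    assert tail(n) <= bound          # the bound holds ...
    assert tail(n) > 0.5 * bound     # ... and overestimates by less than 2x
```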
H: A Probability question on Bayes theorem I am struggling to understand a problem from https://docplayer.net/6566428-Probability-exam-questions-with-solutions-by-henk-tijms-1.html Problem : On the island of liars each inhabitant lies with probability 2/3 . You overhear an inhabitant making a statement. Next you ask another inhabitant whether the inhabitant you overheard spoke truthfully. Find the probability that the inhabitant you overheard indeed spoke truthfully given that the other inhabitant says so Here is my Answer - A = 1st person is actually truthful B : 2nd person says 1st person is truthful So we need P[A|B] Bayes' theorem : $P[A|B] = \frac{P[B|A]P[A]}{P[B|A]P[A] + P[B|A^c]P[A^c]}$ So $P[A] = 1/3, P[A^c] = 2/3, P[B|A] = 1/3, P[B|A^c] = 2/3$ This gives me answer as 0.20 But, actual answer comes as 0.25. Can somebody please help me to understand where am I wrong? AI: "On the island of liars each inhabitant lies with probability 2/3 . You overhear an inhabitant making a statement. Next you ask another inhabitant whether the inhabitant you overheard spoke truthfully. Find the probability that the inhabitant you overheard indeed spoke truthfully given that the other inhabitant says so." Here's how I would do this: Imagine this happens 900 times. Then 600 times the first person lied, 300 times told the truth. Of the 300 times the first person told the truth, 100 times the second person says he told the truth, 200 times says he lied. Of the 600 times the first person lied, 400 times the second person says he told truth 200 times the second person says he lied. So the second person says that the first person told the truth a total of 100+ 400= 500 times. Of those 500 times, the first person actually told the truth 100 times. The probability the first person told the truth, given that the second person said he did is $\frac{100}{500}= \frac{1}{5}= 0.2$. What makes you say that "the actual answer comes as 0.25"?
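The counting argument above is easy to confirm with a quick Monte Carlo simulation (an editorial sketch; the 1/3 truth probability is the problem's assumption, and the second inhabitant is modeled as reporting honestly or flipping the truth with those probabilities):

```python
import random

random.seed(1)

trials = 200_000
says_truthful = 0
actually_truthful = 0
for _ in range(trials):
    a_true = random.random() < 1 / 3   # first inhabitant speaks truthfully
    b_true = random.random() < 1 / 3   # second inhabitant speaks truthfully
    # second says "he was truthful" iff his (possibly dishonest) report matches
    b_says_truthful = a_true if b_true else not a_true
    if b_says_truthful:
        says_truthful += 1
        actually_truthful += a_true

p = actually_truthful / says_truthful
assert abs(p - 0.2) < 0.01   # matches the 1/5 computed above
```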
H: Find derivative of $\lfloor{x}\rfloor$ in distribution Find derivative in distribution of $f(x)=\lfloor{x}\rfloor=E(x)$ $$E(x)\le x< E(x)+1$$ Answer is : $$\lfloor{x}\rfloor '=\displaystyle\sum_{k=-\infty}^{\infty}\delta_{k}$$ I don't have any idea how to proceed. Can you assist? Thank you very much. AI: Let $f(x)=\lfloor x\rfloor$. For any test function $\phi(x)$, we have $$\begin{align} \langle f',\phi\rangle&=-\langle f,\phi'\rangle\\\\ &=-\int_{-\infty}^\infty \lfloor x\rfloor \phi'(x)\,dx\\\\ &=-\sum_{n=-\infty}^\infty \int_{n}^{n+1} n\phi'(x)\,dx\\\\ &=-\sum_{n=-\infty}^\infty n (\phi(n+1)-\phi(n))\\\\ &=-\sum_{n=-\infty}^\infty n \phi(n+1)+\sum_{n=-\infty}^\infty n\phi(n)\\\\ &=-\sum_{n=-\infty}^\infty (n-1) \phi(n)+\sum_{n=-\infty}^\infty n\phi(n)\\\\ &=\sum_{n=-\infty}^\infty \phi(n)\\\\ &=\sum_{n=-\infty}^\infty \langle \delta_n ,\phi\rangle\\\\ &= \langle \sum_{n=-\infty}^\infty\delta_n ,\phi\rangle \end{align}$$ Hence, in distribution $$\lfloor x\rfloor' = \sum_{n=-\infty}^\infty \delta(x-n)$$ as was to be shown!
H: Is there an irrational number that the digits never repeat anywhere and have all 10 digits appear everywhere? Is there an irrational number that the digits never repeat anywhere and have all 10 digits appear everywhere? let's look at one that doesn't work like $$\pi=3.141592653589793238462643383...$$ starting at the 23rd digit you get 33 so it fails another example of one that fails is $0.10102101023135791...$ even tho no digit ever repeats twice a pair of digits do $10,10$ and and here 5 digits in a row do $10102,10102$. my question is there an irrational number such that all digits are used equally and no sequence of the digits repeat twice like this. $123547123547,8989,0909,182182,99,...$ AI: The sequence of digits you want is an infinite square-free word on the alphabet 0123456789. EDIT: Consider an infinite square-free word on the alphabet 012, which we know exists by Thue's construction. Let the positions of $2$ in this word be $i_1, i_2, \ldots$: there must be infinitely many, otherwise after some point we would have an infinite square-free word on alphabet 01, which is impossible. For each $k$, change the letter in position $i_k$ to $3, 4, \ldots, 9$, or leave it as $2$, according to the residue of $k$ modulo $8$. The resulting infinite word is still square-free, and now has infinitely many of each of $0,1, \ldots, 9$.
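For the construction step quoted above, a standard concrete choice is Thue's ternary morphism $0\mapsto 012$, $1\mapsto 02$, $2\mapsto 1$, whose fixed point is square-free (a known result; this editorial sketch generates a prefix and brute-force-checks that no block is immediately repeated):

```python
def squarefree_prefix(n):
    # Thue's square-free morphism on three letters: 0 -> 012, 1 -> 02, 2 -> 1.
    # Since 0 -> 012 starts with 0, each iterate is a prefix of the fixed point.
    m = {"0": "012", "1": "02", "2": "1"}
    w = "0"
    while len(w) < n:
        w = "".join(m[c] for c in w)
    return w[:n]

def has_square(w):
    # does w contain a factor of the form xx (a "square")?
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for L in range(1, len(w) // 2 + 1)
               for i in range(len(w) - 2 * L + 1))

assert not has_square(squarefree_prefix(300))
assert has_square("01212")   # sanity check: contains the square 12 12
```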
H: how given a family of orthonormal functions on $[0..2\pi]$, modify this family to work on $[0..\ell]$ Suppose $\varphi_0(x), \varphi_1(x), ...$ are orthonormal functions on $[0..2\pi]$ How can I find an orthonormal family of functions working on $[0..\ell]$ and what is the intuition behind looking for such a modification? (I know that the answer is $\psi_n(x)=\sqrt{\frac{2\pi}{\ell}}\varphi_n(\frac{2\pi x}{\ell})$, but I can't come up with intuition and a correct path leading there) AI: Squish or stretch their graphs horizontally so that their entire domain goes from $[0, 2\pi]$ to $[0, \ell]$. This is done by changing the argument to $\phi_n\left(\frac{2\pi x}{\ell}\right)$. Substitution shows that they are still orthogonal. However, they aren't normalized any more: $$ \int_0^\ell\phi_n\left(\frac{2\pi x}{\ell}\right)\cdot \phi_n\left(\frac{2\pi x}{\ell}\right)\,dx = \int_0^{2\pi}\frac{\ell}{2\pi}\phi_n\left(x\right)\cdot \phi_n\left(x\right)dx = \frac{\ell}{2\pi} $$ Squish or stretch their graphs vertically so that they are normalized. This is done by multiplying each one by $\sqrt{\frac{2\pi}{\ell}}$, cancelling out the substitution factor above. This doesn't change the fact that they are pairwise orthogonal, and they are now also normalized. Thus they are orthonormal.
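A numerical sanity check of this rescaling (an editorial sketch): take the concrete orthonormal family $\varphi_n(x)=\sin(nx)/\sqrt{\pi}$ on $[0,2\pi]$ and an arbitrary illustrative length $\ell=3$, and verify that the rescaled functions have Gram matrix close to the identity.

```python
import numpy as np

ell = 3.0                              # illustrative interval length
N = 20000
x = (np.arange(N) + 0.5) * (ell / N)   # midpoint grid on [0, ell]

def phi(n, t):
    # sin(n t)/sqrt(pi) for n >= 1: an orthonormal family on [0, 2*pi]
    return np.sin(n * t) / np.sqrt(np.pi)

def psi(n, t):
    # the rescaled family, claimed orthonormal on [0, ell]
    return np.sqrt(2 * np.pi / ell) * phi(n, 2 * np.pi * t / ell)

gram = np.array([[np.sum(psi(m, x) * psi(n, x)) * (ell / N)
                  for n in (1, 2, 3)] for m in (1, 2, 3)])
assert np.allclose(gram, np.eye(3), atol=1e-4)
```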
H: show that $F= \underset{n\geqslant1}{\bigcap} \left\{ x \in X, d(x,F) < \frac{1}{n} \right\}.$ Let $F$ be a closed set from a metric space $(X,d)$ , show that $$F= \underset{n\geqslant1}{\bigcap} \left\{ x \in X, d(x,F) < \frac{1}{n} \right\}.$$ My attempt : $\Rightarrow)$ for all $x \in X$, $ x\in F\iff d(x,F)=0$, then $x \in \underset{n\geqslant1}{\bigcap} \left\{ x \in X, d(x,F) < \frac{1}{n} \right\} $ $\Leftarrow)$ for all $n\geqslant 1$, $0\leqslant d(x,F) < \frac{1}{n} \xrightarrow[n\to \infty]{}0,$ then $0\leqslant d(x,F) <0 $ !! (can a strict inequality become non-strict in the limit? this is my question) AI: Passing to the limit turns strict inequalities into non-strict ones (a strict inequality always implies the corresponding non-strict one, and only the non-strict version survives the limit). Think of $a_n=\frac{1}{n}$: we have $0<a_n$ for every $n$, yet in the limit we can only conclude $0\leqslant \lim_n a_n$. Here, $0\leqslant d(x,F)<\frac{1}{n}$ for all $n\geqslant 1$ gives $d(x,F)\leqslant 0$, hence $d(x,F)=0$, and since $F$ is closed this means $x\in F$.
H: Convergence of two complex valued sequence [CMI PG2010, Part B] Let $\{a_n\}$ and $\{b_n\}$ be sequences of complex numbers such that each $\{a_n\}$ is non-zero, $\lim_{n\rightarrow\infty}a_n = \lim_{n\rightarrow\infty}b_n = 0$, and such that for every natural number $k$, $$\lim_{n\rightarrow\infty}\frac{b_n}{{a_n}^k} = 0.$$ Suppose $f$ is an analytic function on a connected open subset $U$ of $\mathbb{C}$ which contains $0$ and all the $\{a_n\}$. Show that if $f(a_n)=b_n$ for every natural number $n$, then $b_n=0$ for every natural number $n$. Since $f$ is analytic on $U$, we can have a power series expansion of $f$, say $f(z)= \displaystyle\sum_{n=0}^{\infty}{r_n}{z^n}$. Replacing $z$ by $a_j\forall j$, we get $f(a_j)= \displaystyle\sum_{n=0}^{\infty}{r_n}{{a_j}^n}=b_j$. Hence we have, $r_0 = f(0) = 0$. Now I am stuck here. Please give me a hint how to proceed. Thanks in advance!! AI: Hint: If $f$ is nonzero, then $f(z)=z^kg(z)$ for some $k\ge 0$ and some analytic function $g$ in $U$ such that $g(0)\ne 0$. Therefore, $$\lim_{z\to 0}\frac{f(z)}{z^k}=\lim_{z\to 0}g(z)=g(0)\ne 0.$$
H: Show that the $L^1$ and $L^2$ norms are not equivalent on the set of continuous functions from $[0,1]$ to $\mathbb{R}$ Let $E$ be the vector space of continuous functions on $[0,1]$. Show that the $L^1$-norm is not equivalent to the $L^2$-norm. My thought was that, given a sequence of functions $f_n\in E$ which converges to the function $\frac{1}{\sqrt{x}}$, we can see that $$||f_n||_1=\int_0^1|f_n|dx \to \int_{0}^1 \frac{1}{\sqrt{x}}dx=\left[2\sqrt{x}\right]_0^1=2 $$ However, $$||f_n||_2=\left(\int_{0}^1 (f_n)^2 dx \right)^{1/2}\to \left(\int_0^1 \frac{1}{x}dx\right)^{1/2}$$ Since this sequence converges with respect to one norm but not the other we can conclude that they are not equivalent. Does this argument make any sense? It feels like it doesn't make sense to talk about the norm of a function that isn't in the space $E$ since $\frac{1}{\sqrt{x}}\notin E$. But the hint for the problem says to consider truncating said function near 0. AI: Just consider the sequence $\{f_n\}_{n \ge 1} \subset C[0,1]$ defined by $f_n(x)=n x^n$ Then $||f_n||_1 = \int_{0}^1 nx^ndx=\frac{n}{n+1}$ . Hence $\lim_{n \to \infty} ||f_n||_1=1$ But $||f_n||_2^2=\int_{0}^1 |nx^n|^2dx=\frac{n^2}{2n+1} \to \infty$, and hence $||f_n||_2=\frac{n}{\sqrt{2n+1}} \to \infty \text{ as } n \to \infty$
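One can confirm the two computations numerically (an editorial sketch; the midpoint quadrature and tolerances are my own choices): $\|f_n\|_1=\frac{n}{n+1}\to 1$ while $\|f_n\|_2=\frac{n}{\sqrt{2n+1}}\to\infty$, so the ratio $\|f_n\|_2/\|f_n\|_1$ is unbounded and no constant $C$ with $\|f\|_2\le C\|f\|_1$ can exist on $E$.

```python
import math

def norms(n, N=100000):
    # midpoint-rule approximations of ||f_n||_1 and ||f_n||_2 for f_n(x) = n x^n
    h = 1.0 / N
    l1 = h * sum(n * ((k + 0.5) * h) ** n for k in range(N))
    l2 = math.sqrt(h * sum((n * ((k + 0.5) * h) ** n) ** 2 for k in range(N)))
    return l1, l2

for n in (5, 10, 20):
    l1, l2 = norms(n)
    assert abs(l1 - n / (n + 1)) < 1e-3
    assert abs(l2 - math.sqrt(n ** 2 / (2 * n + 1))) < 1e-3

# the ratio ||f_n||_2 / ||f_n||_1 grows without bound (roughly like sqrt(n/2))
assert norms(20)[1] / norms(20)[0] > norms(5)[1] / norms(5)[0]
```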
H: How to mix 2 Fourier tables. I am currently working on a homework problem in relation to the Fourier series. Here are the function's properties: f(x)= {0, 0<x<1} {1, 1<x<2} {0, 2<x<3} We are asked to find the Fourier equation from the Fourier tables. However, as you can see, there are 3 conditions, and all of the given Fourier tables only have 2 conditions. Our teacher suggested that we mix 2 of the Fourier table, however, I have no idea where to start or what to do. I tried to rework the function in order to have only 2 conditions as below: f(x)= {1, 1<x<2} {0, else} But no luck, once again the calculated function is not corresponding to the one given. Thank you for your help. AI: Notice that you know the Fourier transform is linear, meaning that the transform of the sum (or difference) of two functions is the sum (or difference) of their two transforms. So, can we write this $f$ as the sum (or difference) of two functions whose transforms we know? This $f$ has a step up followed by a step down. Can we write this function as a step up minus a later step up? (Think a bit before revealing the below by hovering your pointer over it.) $$ f(x) = \left( \begin{cases} 0 ,& x \leq 1 \\ 1 ,& 1 < x \end{cases} \right) - \left( \begin{cases} 0 ,& x < 2 \\ 1 ,& 2 \leq x \end{cases} \right) $$ We should not be concerned about the difference between "${}<{}$" and "${}\leq{}$", since the integral in the definition of the transform is blind to it.
H: When is convolution not commutative? Let $G$ be a locally compact Hausdorff group with a left Haar measure $\lambda$. Define the convolution of two functions $f,g \in L^1(G)$ by $$(f \ast g)(x) = \int f(y) g(y^{-1}x) d\lambda (y), ~~~ \forall x \in G$$ If the group $G$ is abelian the convolution is commutative: $f \ast g = g \ast f$. In general, for any $x \in G$ we have (written multiplicatively) $$ (f \ast g)(x) = \int f(y) g(y^{-1}x) d\lambda(y) = \int f(xy) g((xy)^{-1}x) d\lambda(y) = \int f(xy) g(y^{-1}) d\lambda(y)$$ In the second equality, we apply a left shift by $x^{-1}$ which does not change the integral since $\lambda$ is left invariant. Precomposing with inversion yields $$ \int f(xy^{-1}) g(y) d\rho(y)$$ where $\rho$ is the associated right Haar measure defined by $\rho(B) = \lambda(B^{-1})$ for any Borel set $B \subseteq G$. Finally, commuting $x$ and $y^{-1}$ gives $$ \int g(y) f(y^{-1}x) d\rho(y)$$ Now, if $G$ is unimodular, $\rho$ and $\lambda$ coincide, so the last expression is the convolution $g \ast f$. Also, since both $y^{-1} \in G$ and $x \in G$ are arbitrary, the step requires $G$ to be abelian (which then also makes it unimodular). I am looking for an explicit counterexample to the claim that $f \ast g = g \ast f$ in general, and conditions under which the formula is true (which are hopefully weaker than $G$ being abelian). Thank you very much in advance! AI: Convolutions of two $C_c$ functions commute $\iff$ $G$ is abelian. As you noted, if $G$ is abelian then it is trivial that convolutions commute. For the converse, suppose that the convolution of any two $C_c$ functions commutes. 
Let $f,g \in C_c(G)$. Then $\forall x \in G \text{ we have }$ $$0= f*g(x)-g*f(x)=\int_G f(xy)g(y^{-1}) d\lambda(y) - \int_{G} g(y)f(y^{-1}x)d\lambda(y)$$ $$=\int_G f(xy^{-1})g(y)\Delta(y^{-1}) d\lambda(y) - \int_{G} g(y)f(y^{-1}x)d\lambda(y)$$ $$\implies \int_G g(y)(\Delta(y^{-1})f(xy^{-1})-f(y^{-1}x))d\lambda(y)=0$$ Since $g \in C_c(G)$ was arbitrarily chosen, it follows that $$\Delta(y^{-1})f(xy^{-1})=f(y^{-1}x), \forall x,y \in G$$ So put $x=1$ above and note that $\Delta(y^{-1})f(y^{-1})=f(y^{-1})$. Again, $f \in C_c(G)$ was arbitrarily chosen, thus $f$ can very well be non-zero at $y^{-1}$. So we get $\Delta(y^{-1})=1, \forall y \in G$. Hence, $f(xy^{-1})=f(y^{-1}x), \forall x,y \in G$. Then just replace $y$ by $y^{-1}$ and we get $$f(xy)=f(yx), \forall f \in C_c(G) \implies xy=yx, \forall x,y \in G,$$ where the last implication holds because $C_c(G)$ separates points (by Urysohn's lemma). Since you have the result for $C_c(G)$, it follows for $L^1(G)$ by density of $C_c(G)$ in $L^1(G)$.
H: A triangle is a compact set Let's fix a triangle $\Delta=\{t_1x_1+t_2x_2+t_3x_3 \in \mathbb{R}^2 \mid t_1,t_2,t_3 \ge 0 \quad \land \quad t_1+t_2+t_3=1\}$ of fixed vertices $x_1,x_2,x_3 \in \mathbb{R}^2$. I want to show that $\Delta$ is compact in the plane. That's my attempt (the metric used here is the euclidean one, of course): Let's define $f\colon \mathbb{R}^3 \to \mathbb{R}^2 \mid f(t_1,t_2,t_3)=t_1x_1+t_2x_2+t_3x_3$, $g\colon \mathbb{R}^3 \to \mathbb{R} \mid g(t_1,t_2,t_3)=t_1+t_2+t_3$. Obviously $f$ and $g$ are continuous functions. So the set: $K=g^{-1}(\{1\}) \cap \{(t_1,t_2,t_3) \in \mathbb{R}^3 \mid t_1,t_2,t_3 \ge 0\}$ is closed, being the intersection of two closed sets. Moreover: $\forall \,(t_1,t_2,t_3) \in K \quad \|(t_1,t_2,t_3)-(0,0,0)\|=\sqrt{t_1^2+t_2^2+t_3^2} \le t_1+t_2+t_3=1<2$ so that $K$ is also bounded. So $K$ is compact, and thus $\Delta=f(K)$ is compact by Weierstrass theorem. Is it correct? Is there a easier (elementary) way to prove it? Thank you! AI: Your method works, though I am not sure which theorem you have in mind when you say "by Weierstrass theorem." The image of a compact set under a continuous function is compact, so the triangle is compact, like you showed. Can you prove that the line segment in $\mathbb R^2$ joining two points $P$ and $Q$ is closed? If so, then you have a direct proof that triangles (in fact any polygons) are closed, being a union of a finite number of such segments.
H: Is a function uniformly continuous on the union of two disconnected sets Suppose $f: [0,1]\cup [2,3] \rightarrow \mathbb{R}$ is defined by $$f(x) =\begin{cases} \sin(x), &0 \leq x \leq 1 \\x^2, &2 \leq x \leq 3.\end{cases}$$ Now $f(x)$ is uniformly continuous on $[0,1]$ and uniformly continuous on $[2,3]$. But is it uniformly continuous over the union? Why or why not? I am thinking that it is, because $f(x)$ is not defined on the interval $(1,2)$. AI: More generally: If a function $f$ is uniformly continuous on each of two disjoint compact sets $A$ and $B$ in a metric space, it is uniformly continuous on the union $A \cup B$. Proof: Since $A$ and $B$ are compact and disjoint, the distance between $A$ and $B$ is nonzero, i.e. there is $\delta_1 > 0$ such that $d(a,b) \ge \delta_1$ for all $a \in A$ and $b \in B$. Now given $\epsilon > 0$, uniform continuity says there is $\delta_2 > 0$ such that if $x,y \in A$ and $d(x,y) < \delta_2$, $|f(x)-f(y)| < \epsilon$, and similarly there is $\delta_3$ that works for $x,y \in B$. If $\delta = \min(\delta_1, \delta_2, \delta_3)$ and $x,y \in A \cup B$ with $d(x,y) < \delta$, then $x$ and $y$ are either both in $A$ or both in $B$, and then $|f(x) - f(y)| < \epsilon$. Without the assumption of compactness, this would not be true. For example, $$ f(x) = \cases{0 & for $x \in [0,1)$\cr 1 & for $x \in (1,2]$\cr}$$ is uniformly continuous on $[0,1)$ and on $(1,2]$, but not on $[0,1) \cup (1,2]$.
H: What is wrong with this definition of a truth predicate? Tarski's theorem, interpreted in Peano Arithmetic, says there is no predicate $T$ such that $PA\vdash T(\phi)\leftrightarrow \phi$. However, we know that there are partial truth predicates for each $k< \omega$ such that, for all $\phi \in \Sigma_k$, $PA\vdash T_k(\phi)\leftrightarrow \phi$. What is wrong with this supposed truth predicate, I will call $T_\omega$? I will define it by means of a recursive algorithm. On input $\phi$, determine $k$ as the least $j$ such that $\phi\in\Sigma_j$. Then output $T_\omega(\phi) = T_k(\phi)$. AI: That's not actually an algorithm in the sense of a computable process: already, checking truth of $\Sigma_1$ sentences is not computable. And if we switch to the language of, well, language, things aren't any better. Presumably the formula you have in mind is something like $\varphi$ is true iff $T_k(\varphi)$ holds, where $\Sigma_k$ is the optimal complexity of $\varphi$. Determining $k$ is of course easy. The problem is that you've essentially quantified over the $T_k$s. This can only be done if you can whip up a formula $T(x,y)$ where for each $k$ the formula $T(\underline{k}, y)$ corresponds to $T_k(y)$ ... but that's exactly what you're trying to do here. Put another way, even if the sequence of formulas $(\psi_i)_{i\in\mathbb{N}}$ is as simple as you want (e.g. computable), expressions like $$\forall x(P(x)\rightarrow \psi_x(a))$$ are not first-order formulas: we can't have "variable formulas."
H: A question on number of elements in a set I am trying questions in permutations and combinations from an assignment and I was unable to solve this question. Let D be a set of tuples $(w_{1} ,..., w_{10} )$ , where $w_{i} \in \{1,2,3\}$ , $1\leq i\leq 10$ and $w_{i}+w_{i+1} $ is an even number for each $i$ with $1\leq i \leq 9$ is . Then number of elements in D are ? Attempt: $1$ st elements can be chosen in $3$ ways. If element chosen is $1$ or $3$ then rest of elements can be chosen in $2^{9} $ ways. If $2$ is chosen then rest of elements can be chosen in $1$ ways. So I got the total $3\times2^{9}+ 1$ ways. But that's not correct as answer is $2^{10} +1$ . Can someone please tell what mistake I am making. AI: I would say your reasoning is correct so I can't understand how you got your result. If the first element is $1$ then you have $N_1=2^9$ possibilities as you said. If the first element is $2$ then you have $N_2=1$ possibilities as you said. If the first element is $3$ then you have $N_3=2^9$ possibilities as you said. So just sum them up! $$N=N_1+N_2+N_3 = 2^9+1+2^9 = 2\times 2^9 + 1 = 2^{10}+1$$
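The count $2^{10}+1$ has a clean interpretation: $w_i+w_{i+1}$ even forces all entries to have the same parity, giving $2^{10}$ all-odd tuples (entries in $\{1,3\}$) plus the single all-even tuple $(2,\dots,2)$. The search space is small enough to verify by brute force (an editorial check):

```python
from itertools import product

# enumerate all 3^10 = 59049 tuples and keep those with every adjacent sum even
count = sum(
    1
    for w in product((1, 2, 3), repeat=10)
    if all((w[i] + w[i + 1]) % 2 == 0 for i in range(9))
)
assert count == 2 ** 10 + 1   # 1025
```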
H: What is the importance of $R$ being a field in this question? Here is the question I am trying to solve (Jeffery Strom, “Modern classical homotopy theory” on pg. 511): Problem 22.39. Suppose $R$ is a field. (a) Show that $h^n(?) = \operatorname{Hom}_R( H_n(? ; R), R)$ is a cohomology theory defined on (at least) the category of finite CW complexes. (b) Show that $u$ is a natural transformation of cohomology theories. (c) Prove Theorem 22.37. (image) My professor said the importance of $R$ being a field in this question is that it converts the $\operatorname{Hom}$-functor into a right exact functor. But I do not understand how this happened. Could anyone explain this for me, please? I know that either of $\operatorname{Hom}(-,D)$ or $\operatorname{Hom}(D,-)$ are left exact functors. AI: By definition, being a cohomology theory is equivalent to satisfying the Eilenberg-Steenrod axioms. You can check easily that for any ring $R$, $Hom_R (H_n(-;R),R)$ satisfies all but the LES axiom. However, how should we get a LES out of this? Well the obvious way to do this would be to start with the long exact sequence associated to $H_n(-;R)$ and attempt to dualize it. However, $R$ is not generally an injective $R$-module, so homming out to it will not preserve exact sequences. If you find an exact sequence whose exactness is not preserved, then you are not far from proving there is no way to get a LES for this cohomology theory. So what is the issue? $R$ is not an injective $R$-module in general. However, if $R$ is a field, then every $R$-module is injective (every subspace of a vector space is a direct summand), so $\operatorname{Hom}_R(-,R)$ is exact and nothing goes wrong.
H: Is the function $f = \sum_{n=0}^{\infty} 2^{-n}\chi_{[n,n+1)}$ Lebesgue integrable on $\mathbb{R}$? Is the function $f = \sum_{n=0}^{\infty} 2^{-n}\chi_{[n,n+1)}$ Lebesgue integrable on $\mathbb{R}$? Justify your answer. I came across this question on a past exam paper for a measure theory course I'm taking and I can't find anything similar in my professors notes to help me work through it. I have a feeling that I should be using a convergence theorem but I'm not quite sure which one. A push in the right direction would be appreciated! Thanks in advance. AI: Observe that \begin{align*} \int_{\mathbb{R}}|f| &=\int_{\mathbb{R}} \left|\sum_{n=0}^{\infty} 2^{-n}\chi_{[n,n+1)} \right|\\ &= \int_{\mathbb{R}} \sum_{n=0}^{\infty}2^{-n}|\chi_{[n,n+1)}|\\ &=\sum_{n=0}^{\infty}2^{-n} \int_{\mathbb{R}}\chi_{[n,n+1)}&\because\text{Monotone Convergence Theorem}\\ &=\sum_{n=0}^{\infty}2^{-n}\\ &=\frac{1}{1-\frac{1}{2}}\\ &=2\\&<\infty \end{align*} Therefore, $f$ is integrable.
H: $3$ digit number being subtracted by its digits This question is an AMC style question. The question is this: If the integer $A$ is reduced by the sum of its digits, the result is $B$. If $B$ is increased by the sum of its ($B$'s) digits, the result is $A$. Compute the largest $3$-digit number $A$ with this property. I would like to know how you do this question. AI: It follows that the digit sum of $B$ is the same as that of $A$. That means that $A\equiv B\pmod 9$, so the digit sum is a multiple of $9$. On the other hand, the digit sum of $B<A<1000$ cannot be larger than $9+9+8=26$. Hence the common digit sum is either $9$ or $18$. The largest candidate for $A$ with such a digit sum is $A=990$ and leads to $B=A-18=972$, which works fine.
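A brute-force check (an editorial sketch) confirms both the digit-sum observation and the final answer $A=990$:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def has_property(a):
    b = a - digit_sum(a)              # reduce A by its digit sum ...
    return b + digit_sum(b) == a      # ... and require that B recovers A

solutions = [a for a in range(100, 1000) if has_property(a)]
assert all(digit_sum(a) % 9 == 0 for a in solutions)  # common digit sum 9 or 18
assert max(solutions) == 990
```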
H: Is a ring homomorphism surjective if the restriction to the group of units is surjective? Let $R_1,R_2$ be two rings and suppose that $f:R_1\to R_2$ is a ring homomorphism. Denote $R_1'$ and $R_2'$ for the groups of units of $R_1$ and $R_2$. Next, let $f':R_1'\to R_2'$ be the restriction of $f$ to $R_1'$. I've already proven that this is well-defined and that $f'$ is a group homomorphism. Finally, suppose that $f'$ is surjective. Does this imply that $f$ is surjective? So far I've tried a couple of things. First I tried to prove this, but couldn't think about anything that helped. I can't really see any connection between units and other elements from some arbitrary ring, so I started trying to come up with some counter examples. This gave me the idea to use $R_2=\mathbb{Z}$, since it is has a very small set of units compared to its size. Next I was trying some finite rings with more than 2 units as candidates for $R_1$, but couldn't really think of any ring homomorphism that would make sense. I've mostly tried rings of the form $\mathbb{Z}/n\mathbb{Z}$ for some integer $n$, all without success. Am I forgetting something obvious? Or is it actually true? AI: The units in $R=\Bbb Z[X]$ are $\pm1$. The ring homomorphism $R\to R$, $p(X)\mapsto p(X^2)$ is not onto.
H: Sigma-distributivity of the algebra of Baire sets modulo meagre sets Let $A$ be the free $\sigma$-algebra with $\omega_1$ free $\sigma$-generators, $X$ its Stone space, and $Ba(X)/M$ the algebra of Baire subsets of $X$ modulo meagre sets. Then $A$ is $\sigma$-isomorphic to $Ba(X)/M$ by the Loomis-Sikorski theorem. Let $2^{\omega_1}$ be the Cantor cube of weight $\omega_1$ (i.e. the Stone space of the free Boolean algebra with $\omega_1$ free generators) and $Ba(2^{\omega_1})$ the $\sigma$-field of Baire subsets of $2^{\omega_1}$. Then it can be shown that $A$ is also $\sigma$-isomorphic to $Ba(2^{\omega_1})$. Now, since $Ba(2^{\omega_1})$ is a $\sigma$-field of sets, it is $\sigma$-distributive. Hence, since $A$ is $\sigma$-isomorphic to $Ba(2^{\omega_1})$, $A$ is an example of an atomless $\sigma$-distributive $\sigma$-algebra. My question: Since $A$ is also $\sigma$-isomorphic to $Ba(X)/M$, is $Ba(X)/M$ also $\sigma$-distributive? (I have a doubt.) AI: Yes, it should be clear that if two Boolean algebras are isomorphic by an isomorphism that preserves all countable sups (i.e. they are $\sigma$-isomorphic), and one is $\sigma$-distributive, then the other one is too.
H: Finite vs infinite Ramsey theorem - what's the difference? The finite Ramsey theorem states that given a $k$ and an $r$, there exists an $N$ such that every $r$ coloring of the edges of $K_N$ contains a monochromatic clique of size $k$. The infinite version says that every coloring $c:\binom{\mathbb{N}}{2}\mapsto [r]$ of the set of all pairs of integers contains an infinite monochromatic set $\{x_1,x_2\cdots\}$ such that every pair $\{x_i,x_j\}$ is of the same color. My question is, why is the infinite version any different from the finite version? We were told that they are actually different and were given separate proofs which I understood. But doesn't the finite version simply imply the infinite version, because we can find arbitrarily large monochromatic cliques for sufficiently large $N$'s? Why is arbitrarily large not the same as infinite? So if we are given a coloring of the set of pairs of all integers, then given any $k$, we can always find a monochromatic set of size $k+1$. Isn't that sufficient to prove the existence of an infinite clique (in a way similar to Euclid's proof of infinitude of primes ) AI: Why is arbitrarily large not the same as infinite? Well, there are arbitrarily large finite natural numbers but there are no infinite natural numbers. More seriously, consider (for example) the graph $G$ consisting of the disjoint union of a copy of $K_n$ for each $n\in\mathbb{N}$, where $K_n$ is the complete graph on $n$ vertices. Arbitrarily large finite cliques occur in $G$, but there is no infinite clique in $G$. The issue is that a priori we might not be able to "piece together" the increasingly large finite configurations of some type into a single infinite configuration of that type. Infinite Ramsey's theorem says that in one particular case we can find increasingly large finite configurations which do cohere appropriately. 
Incidentally, we can make the idea that finite Ramsey's theorem doesn't trivially imply infinite Ramsey's theorem precise in a technical way: the theory $\mathsf{RCA_0+I\Sigma_2}$ proves finite Ramsey's theorem but not infinite Ramsey's theorem. Another tool we can use here is computability theory. On the one hand, via brute-force search we can always computably locate a homogeneous set of size $k$ in a given computable $r$-coloring of pairs of natural numbers. On the other hand, we can whip up a computable two-coloring of pairs of natural numbers with no computable infinite homogeneous set. Basically, there will be lots of "dead ends" - finite cliques which can't be extended to larger cliques - and there's no computable way to detect these.
H: calculate: $\int_{0}^{2\pi}e^{\cos\theta}(\cos(n\theta-\sin\theta))d\theta$ calculate: $\int_{0}^{2\pi}e^{\cos\theta}(\cos(n\theta-\sin\theta))d\theta$ my try: $ \begin{array}{c} \int_{0}^{2\pi}e^{\cos\theta}(\cos(n\theta-\sin\theta))d\theta\\ \int_{0}^{2\pi}e^{\cos\theta}(\frac{e^{-i(n\theta-\sin\theta)}}{2}+\frac{e^{i(n\theta-\sin\theta)}}{2})d\theta\\ \int_{0}^{2\pi}(\frac{e^{-i(n\theta \cos\theta-\sin\theta \cos\theta)}}{2}+\frac{e^{i(n\theta \cos\theta-\sin\theta \cos\theta)}}{2})d\theta\\ \frac12\int_{-2\pi}^{2\pi}(\frac{e^{-i(n\theta \cos\theta-\sin\theta \cos\theta)}}{2}+\frac{e^{i(n\theta \cos\theta-\sin\theta \cos\theta)}}{2})d\theta\\ \frac14\int_{-2\pi}^{2\pi}(e^{-i(n\theta \cos\theta-\sin\theta \cos\theta)}+e^{i(n\theta \cos\theta-\sin\theta \cos\theta)})d\theta \end{array}$ I failed to find a path that will allow me to evaluate it. I thought about a semi circle but I wasn't able to show the arc tends to 0. AI: You seem to have incorrectly thought $e^xe^y=e^{xy}$ rather than $e^xe^y=e^{x+y}$. Your integral is$$\Re\int_0^{2\pi}e^{\cos\theta+in\theta-i\sin\theta}d\theta=\Re\int_0^{2\pi}e^{in\theta}e^{e^{-i\theta}}d\theta\stackrel{z=e^{-i\theta}}{=}\Re\oint_{|z|=1}\frac{e^zdz}{-iz^{n+1}}.$$If you want to evaluate this (you should find it's $2\pi/n!$), bear in mind the contour is clockwise.
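The closed form $2\pi/n!$ obtained from the residue computation above can be checked numerically as an editorial aside; since the integrand is smooth and $2\pi$-periodic, an equally spaced Riemann sum (equivalently, the trapezoid rule) converges extremely fast:

```python
import math

def integral(n, N=4096):
    # equally spaced Riemann sum over one period; for smooth periodic
    # integrands this coincides with the trapezoid rule and is spectrally
    # accurate, so N = 4096 is far more than enough
    h = 2 * math.pi / N
    return h * sum(
        math.exp(math.cos(k * h)) * math.cos(n * k * h - math.sin(k * h))
        for k in range(N)
    )

for n in (1, 2, 3, 5):
    assert abs(integral(n) - 2 * math.pi / math.factorial(n)) < 1e-9
```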
H: Counting irreducible polynomials of the form $x^2 - ax + 1$ over a finite field. There are some variations of the question. Fixing the finite field ${\mathbb{F}}_q$, the number of all monic irreducible polynomials $x^2 - ax + b \in {\mathbb{F}}_q[x]$ is $(q^2 - q)/2$, which is easy to see just by dividing the number of elements in ${\mathbb{F}}_{q^2} \setminus {\mathbb{F}}_q$ by $2$. The assumption $b = 1$ forces ${\mathrm{N}}(\alpha) = 1$ for a root $\alpha \in {\mathbb{F}}_{q^2} \setminus {\mathbb{F}}_q$ of the polynomial. Beyond this point my arguments about this turns out to be circular around and leading nowhere. I tried using the fact: the radical $a^2 - 4$ is not a square in ${\mathbb{F}}_q$ and was hoping that the preimage of $4$ by the surjective map $\varphi : {\mathbb{F}}_q \times {\mathbb{F}}_q \rightarrow {\mathbb{F}}_q$ given by $(a,b) \mapsto -a^2 + b^2$ give something. The question came up while counting the centraliser of an anisotropic regular semi-simple class in ${\mathrm{SL}}(2,{\mathbb{F}}_q)$. Any help? AI: The number of reducible polynomials of the form $x^2+ax+1$ is easier to compute. These are exactly the polynomials $(x-b)(x-b^{-1})$, where $b$ ranges over the nonzero elements of ${\mathbb{F}}_q$. For each $b$ you obtain a polynomial, but you count each one twice ($b$ and $b^{-1}$ give the same polynomial). However, if $b=b^{-1}$ you don't need to double-count. For $q$ odd there are two values for which this is true ($\pm 1$) and for $q$ even there is one ($1$). Thus for $q$ odd you obtain $(q-3)/2+2=(q+1)/2$ reducible polynomials and for $q$ even you obtain $(q-2)/2+1=q/2$ reducible polynomials. Subtract this from the total of $q$ polynomials to obtain the number of irreducible ones: $(q-1)/2$ for $q$ odd and $q/2$ for $q$ even. Thank you to Jyrki Lahtonen for pointing out that I'd been very hasty in my formulae earlier.
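The odd-$q$ count $(q-1)/2$ is easy to confirm by brute force over small prime fields (an editorial sketch; a monic quadratic such as $x^2-ax+1$ is irreducible over $\mathbb F_q$ exactly when it has no root $b\in\mathbb F_q$):

```python
def irreducible_count(q):
    # q an odd prime; count a in F_q such that x^2 - a x + 1 has no root in F_q
    return sum(
        1 for a in range(q)
        if all((b * b - a * b + 1) % q != 0 for b in range(q))
    )

for q in (3, 5, 7, 11, 13, 17, 19):
    assert irreducible_count(q) == (q - 1) // 2
```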
H: A Probability question on picking the right ball I picked one example from https://docplayer.net/6566428-Probability-exam-questions-with-solutions-by-henk-tijms-1.html The problem is - Bill and Mark take turns picking a ball at random from a bag containing four red balls and seven white balls. The balls are drawn out of the bag without replacement and Mark is the first person to start. What is the probability that Bill is the first person to pick a red ball? While the solution is given in that material, I failed to grasp it. Can you please help me understand it in an intuitive way? Any help will be highly appreciated. Thanks, AI: The event $A_i$, that the first red ball was taken out on the $i$th draw, occurs when the first $i-1$ balls are white, then a red ball. So you need to choose the first $i-1$ white balls: ${7 \choose i-1}$, then choose the order in which they are placed (the permutation, $(i-1)!$), then pick the red ball to be the first ($4$ options) and then place the rest of the balls in any order. There are $3$ red balls left to place and $7-(i-1)$ white ones, so the number of permutations is $(7-(i-1)+3)!$. All this should be divided by the size of the sample space, the number of permutations of all $11$ balls, $11!$. To find the probability asked for in the question, the first red ball should be in an even position, so we sum these probabilities over $i=2,4,6,8$. This was the solution from the link. For me, it is much more intuitive to choose a sample space where balls of the same colour are indistinguishable. The sample space then consists of the choices of positions for the red balls among the $11$ places: ${11 \choose 4}$ of them. The event $A_i$ occurs when there is a red ball in place $i$ and none before it, so it is only left to choose the $3$ remaining red places among the $7-(i-1)+3 = 11-i$ places after it. The probability is $$\frac{\binom{11-i}{3}}{\binom{11}{4}}$$
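The second sample space makes the computation short enough to carry out exactly. A Python sketch using exact fractions (the final value $13/33$ is my own evaluation, not stated in the post):

```python
from fractions import Fraction
from math import comb

# P(first red ball appears at draw i) = C(11 - i, 3) / C(11, 4)
# (the answer's second method); the first red can appear at draws 1..8.
def p_first_red_at(i):
    return Fraction(comb(11 - i, 3), comb(11, 4))

total = sum(p_first_red_at(i) for i in range(1, 9))    # should be exactly 1
p_bill = sum(p_first_red_at(i) for i in (2, 4, 6, 8))  # Bill draws 2nd, 4th, ...
```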
H: Cauchy product of $1+2+4+8+16+32+\dots$ and $1-1+1-1+1-1+\dots$ Cauchy product of $1+2+4+8+16+32+\dots$ and $1-1+1-1+1-1+\dots$. Then $c_1=1\times 1=1$ $c_2=1\times(-1)+2\times 1=-1+2$ $c_3=1-2+4$ $c_4=-1+2-4+8$ $\dots$ so when $n$ is even $s_n=\sum^{n/2}_1 2^{2n-1}$, when $s$ is odd, $s_n=\sum^{(n+1)/2}_12^{2(n-1)}$. But the solution says it's $(2^{n+1} +(-1)^n)/3$. Where am I wrong or just the solution is wrong? AI: You seem to have shifted the indices in series, which start at $0$ (I recall $0\in\mathbf N$). Therefore, you have \begin{align} c_0&=1,&c_1&=-1+2, & c_2&=1-2+4, &c_3&=-1+2-4+8, \end{align} and more generally $$ c_n=(-1)^n\sum_{k=0}^n (-2)^k=(-1)^n\,\frac{1-(-2)^{n+1}}{1-(-2)^{\phantom{n+1}}} . $$ Can you proceed?
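The closed form is easy to check against the definition of the Cauchy product, $c_n=\sum_{k=0}^n a_k b_{n-k}$ with $a_k=2^k$ and $b_k=(-1)^k$. A small Python sketch:

```python
def cauchy_coefficient(n):
    # c_n = sum_{k=0}^{n} 2^k * (-1)^(n-k), the Cauchy product coefficient
    return sum(2 ** k * (-1) ** (n - k) for k in range(n + 1))

direct = [cauchy_coefficient(n) for n in range(12)]
closed_form = [(2 ** (n + 1) + (-1) ** n) // 3 for n in range(12)]
```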
H: Integral of an Odd Function on $\Bbb{R}^{n}$ Given an odd integrable function $\Omega$ on $\Bbb R^n$, i.e. $\Omega \in L^1(\Bbb R^n)$ and $\Omega(-x) = -\Omega(x)$, how do I show that its integral over a bounded symmetric set $C$ is zero, i.e., if $C = - C$, then $$\int_{C} \Omega(x)dx = 0.$$ This assertion seems reasonable, since it's true on the real line, but I'm having a little trouble proving it... AI: Multiplication by $-1$ gives a regular enough function $g: C \rightarrow C$ to apply the substitution rule: $$\int_{g(C)} \Omega(x) dx= \int_C \Omega(g(y)) | \det ( g'(y))| dy.$$ Since $g$ is linear we have $g'(y)=-I$, so $|\det(g'(y))|=1$, and since $g(C)=C$, the above gives $$\int_{C} \Omega(x) dx =\int_C \Omega(-x) dx = - \int_C \Omega(x) dx.$$ I think you can take it from here.
H: Is this function, constructed by taking the maximum values between continuous functions, still continuous? For each natural number $n$, let $f_n : [0,1] \to [0,1]$ be a continuous function, and for each $n$ let $h_n$ be defined by $h_n(x) = \max\{f_1(x),\ldots,f_n(x)\}$. Show that for each $n$ the function $h_n$ is continuous on $[0,1]$. Must the function $h$ defined by $h(x) = \sup\{f_n(x) : n\in\mathbb{N}\}$ be continuous? I solved the first part, i.e. the finite case this way: Let, for every $i$, $f_{n_i}$ be the function $f_n$ which takes the highest values on $[x_i$,$x_{i-1}[$. Then $h_n=f_{n_i}$ in this interval, and hence $h_n$ is continuous in every open intervals $]x_i$,$x_{i+1}[$. Defining $g_i(x)=f_{n_i}(x)-f_{n_{i-1}}(x)$, we have $g_i(x)$ is continuous (being the sum of continuous functions), $g_i(x)<0$ for $x_{i-1}<x<x_i$, and $g(x_i)\ge 0$. So, by continuity, $g(x_i)=0$, and $f_{n_{i-1}}(x_i)=f_{n_i}(x_i)$. Therefore $h_n(x)$ is continuous also in each $x_i$, so it is continuous. However I'm not sure if this covers also in the infinite case, which is the second part of the problem. AI: Define $f_n$ to be the continuous function whose graph is given by connecting the following points by straight lines $(0,0), ({1 \over 2},0), ({1 \over 2} + {1 \over n},1), (1,1)$. We see that $h(x) = 1_{({1 \over 2},1]}(x)$ which is not continuous.
H: $(0,1), [0,1), [0,1]$ are not homeomorphic I need to show $(0,1)$, $[0,1)$ and $[0,1]$ are not homeomorphic using the intermediate value theorem (without using connectedness). I have already proved that $(0,1)$ and $[0,1]$ are not homeomorphic, but I struggle with the other two pairs. My proof: assume there is a homeomorphism $f:(0,1)\rightarrow[0,1]$ and take $a,b$ such that $f(a)=0, f(b)=1$. Then using the intermediate value theorem we can say that $f([a,b]) = [0,1]$, so $f$ isn't injective. Will appreciate any help AI: You have to be a little more careful: you've written your argument on the assumption that $a<b$, but it could be that $b<a$, in which case it's the interval $[b,a]$ that maps to $[0,1]$. You get the same contradiction, of course. HINT: The same argument works if the domain of $f$ is $[0,1)$. For $(0,1)$ and $[0,1)$ you have to work a little harder, but you can still use the same idea. Suppose that $f:(0,1)\to[0,1)$ is a homeomorphism. There is an $a\in(0,1)$ such that $f(a)=0$, and for each $n\ge 2$ there is a $b_n\in(0,1)$ such that $f(b_n)=1-\frac1n$. Let $$I_n=\begin{cases}[a,b_n],&\text{if }a<b_n\\ [b_n,a],&\text{if }b_n<a\,;\end{cases}$$ clearly $f[I_n]\supseteq\left[0,1-\frac1n\right]$. The sequence $\langle b_n:n\ge 2\rangle$ has a convergent subsequence; let $c$ be the limit of this subsequence, and let $$I=\begin{cases} [a,c),&\text{if }a<c\\ (c,a],&\text{if }c<a\,. \end{cases}$$ What can you say about $f[I]$?
H: Is a locally compact hereditarily Lindelof Hausdorff space first countable? Is a locally compact hereditarily Lindelof Hausdorff space first countable? I was recently told that it is but I can't find any reference to what I would have thought would be a standard fact if it is correct. AI: Let $X$ be a locally compact hereditarily Lindelöf Hausdorff space and let $x\in X$. Then $X\setminus\{x\}$ is covered by sets of the form $X\setminus K$ where $K$ is a compact neighborhood of $x$. Since $X\setminus\{x\}$ is Lindelöf, it is in fact covered by countably many such sets $X\setminus K_n$, and we may assume the $K_n$ are nested. Thus we have a nested sequence of compact neighborhoods $K_n$ of $x$ such that $\bigcap K_n=\{x\}$. I claim that these are in fact a neighborhood base at $x$. To prove this, suppose $U$ is a neighborhood of $x$ that does not contain any $K_n$. Pick a point $x_n\in K_n\setminus U$ for each $n$. Then $x_n\in K_0$ for all $n$, so by compactness the sequence $(x_n)$ accumulates somewhere in $K_0$. However, since the sequence is eventually in each $K_n$, any accumulation point must be in each $K_n$. Since $\bigcap K_n=\{x\}$, this means the accumulation point can only be $x$. But then since $U$ is a neighborhood of $x$, infinitely many of the $x_n$ must be in $U$. This is a contradiction, since $x_n\not\in U$ for all $n$.
H: Deriving Parallel and perpendicular vectors from triple vector product How would one go about resolving the vector $\vec{p}$ into parallel and perpendicular vectors to the given vector $\vec{w}$ By considering - $\vec{w}\times(\vec{p}\times\vec{w})$ So far I have used the triple vector product however I seem to just get zero when I do this so I feel like i'm making a mistake somewhere. AI: Inasmuch as $\vec p\times \vec w$ is perpendicular to both $\vec p$ and $\vec w$, we can decompose $\vec p$ as $$\begin{align} \vec p&=A\vec w+B[\vec w\times(\vec p\times \vec w)]\tag1 \end{align}$$ Note that $\vec w\times(\vec p\times \vec w)$ is perpendicular to $\vec w$. Taking the inner product of $\vec p$ with $\vec w$, we find from $(1)$ that $$A=\frac{\vec p\cdot \vec w}{|\vec w|^2} $$ Taking the vector product of $\vec p$ with $ \vec w$, we find from $(1)$ that $$B=\frac{1}{|\vec w|^2}$$ Hence, denoting the unit vector along $\vec w$ as $\hat w=\frac{\vec w}{|\vec w|}$ $$\vec p=(\vec p\cdot \hat w)\hat w+ ( \hat w \times\vec p)\times \hat w$$
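The final decomposition $\vec p=(\vec p\cdot\hat w)\hat w+(\hat w\times\vec p)\times\hat w$ can be checked numerically with NumPy; the variable names below are mine:

```python
import numpy as np

rng = np.random.default_rng(42)
p = rng.standard_normal(3)
w = rng.standard_normal(3)

w_hat = w / np.linalg.norm(w)
p_parallel = np.dot(p, w_hat) * w_hat          # (p . w_hat) w_hat
p_perp = np.cross(np.cross(w_hat, p), w_hat)   # (w_hat x p) x w_hat
```

The parallel part has zero cross product with $\vec w$, the perpendicular part has zero dot product with $\vec w$, and the two sum back to $\vec p$.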
H: For $1<p<2$, Fourier transform $\mathscr{F}$ is not onto $L^p(\Bbb T) \to \ell^q(\Bbb Z)$ where $\frac{1}{p}+\frac{1}{q}=1$ For $1 \le p \le 2$, the Hausdorff-Young inequality implies that $\mathscr{F}:L^p(\Bbb T) \to \ell^q(\Bbb Z)$ where $\frac{1}{p}+\frac{1}{q}=1$. Now for $p=1$, showing that $\mathscr{F}:L^1(\Bbb T) \to \ell^\infty(\Bbb Z)$ is not onto was easy due to the Riemann-Lebesgue lemma (I know that its image is in fact a dense subalgebra of $c_0(\Bbb Z)$). But for $1<p<2$, I'm facing difficulty showing that it's not onto. I know that $\forall 1<p<2, \mathscr{F}(L^p(\Bbb T))\subset \ell^q(\Bbb Z)$. Hence we have $$\mathscr{F}(L^p(\Bbb T))\subset \ell^q(\Bbb Z) \subsetneq c_0(\Bbb Z)$$ Also, considering images of trigonometric polynomials under $\mathscr{F}$ we get that $c_{00}(\Bbb Z) \subsetneq \mathscr{F}(L^p(\Bbb T))$. So basically the scenario is $$c_{00}(\Bbb Z) \subsetneq \mathscr{F}(L^p(\Bbb T)) \subset \ell^q(\Bbb Z) \subsetneq c_0(\Bbb Z)$$ How to produce an $\ell^q$ sequence which cannot be in the image of $\mathscr{F}$? Or are there other functional-analytic proofs which will assert the same? AI: Take $S=\sum_{n \ge 1} \frac{\cos (2^nx)}{\sqrt n}$. Note that the coefficients of $S$ are in $l^q(\mathbb Z)$ for all $q>2$ (but not in $l^2$!) Then one can show that the partial sums of $S$ and the Cesàro means of said partial sums diverge a.e. on the unit circle, so $S$ cannot be a Fourier series. Actually it is easier to show that the partial sums/Cesàro means are unbounded a.e. using the fact that $\cos 2^nx$ and $\cos 2^mx$ are orthogonal for $n \ne m$ (and more generally any $k$ distinct ones are orthogonal, since for $n_1<n_2<\dots<n_k$, $\pm 2^{n_1} \pm 2^{n_2}\pm\dots\pm 2^{n_k} \ne 0$)
H: How do I show that $x$ is the supremum of set $S$? (decimal representation of reals) Let $x$ be a fixed positive real number. Let $l_0 = a_0$ be the largest integer less than or equal to $x$ (that is, $a_0\in \mathbb Z$ such that $a_0 \le x$), $a_1$ be the largest integer such that $l_1 = a_0+\frac{a_1}{10^1}\le x$, $a_2$ be the largest integer such that $l_2 = a_0+\frac{a_1}{10^1}+\frac{a_2}{10^{2}}\le x$, and so on, with $a_n$ defined similarly so that $l_n = a_0+\frac{a_1}{10^1}+\frac{a_2}{10^2}+\ldots +\frac{a_n}{10^n} \le x$. We define the set $S$ as the set that contains $l_n$ for all $n\ge0$ ($n$ is a nonnegative integer). We know that $S$ is non-empty since there's a unique integer $a_0$ such that $a_0\le x \lt a_0+1$ (I managed to prove that), and it is bounded above since $x$ is an upper bound; then by the supremum axiom, we know that $S$ has a supremum $b = \sup S$ with $b \in \mathbb R$. The question is, how do I show that $b = x$? I tried to use the facts that $l_n \le b$ for all $n \ge 0$ and $l_n \le x \lt l_n + \frac{1}{10^n}$ for all $n \ge 0$, together with trichotomy, to show that $b \gt x$ and $b \lt x$ both lead to a contradiction, thus $b = x$, but I haven't found a way to use this information to reach the contradiction in each case... So any help is very much appreciated! AI: As $S$ is bounded above by $x$ you know $\sup S$ exists and $\sup S \le x$. If we assume $\sup S < x$ then $x - \sup S > 0$. Let's call $x - \sup S = d$. Now make, and prove, the claim that there is an $m\in \mathbb N$ so that $0 < \frac 1{10^m} < d$. (Note: This has nothing to do with $d=x-\sup S$.... this has only to do with $d > 0$. This claim is true for all positive real numbers.) Consider $l_m = a_0 + ......$. Now make, and prove, the claim that $x - l_m < \frac 1{10^m}$. (That should be simply a matter of how $l_m$ was created.) That means $\sup S = x- d < x-\frac 1{10^m} < l_m \le x$. So we have $l_m > \sup S$ but $l_m \in S$. That's a contradiction.
===== So the job I leave to you is to prove that for any $d > 0$ there is an $m\in \mathbb N$ so that $0 < \frac 1{10^m} < d$. (Hint: $0< \frac 1{10^m} < d \iff 10^m > \frac 1d> 0\iff m \ge \log_{10} \frac 1d$) And to prove for any $m$ that $x - l_m < \frac 1{10^{m}}$. .... But that was how $l_m$ was constructed and that is the definition of $l_m$, so that is already proven!
H: Prove that the supremum of an affine function is concave Suppose I have a function such that $F(\theta x+(1-\theta)y,z)=\theta F(x,z)+(1-\theta)F(y,z)$ for $\theta\in(0,1)$. I want to show that $F(x,z)$ is concave in the first argument when taking a supremum, that is, $G(x)=\sup_z F(x,z)$ is concave. Let $\theta\in(0,1)$. Then $F(\theta x+(1-\theta)y,z)=\theta F(x,z)+(1-\theta)F(y,z)$ and so $$G(\theta x+(1-\theta)y)=\sup_z F(\theta x+(1-\theta)y,z)\geq\theta F(x,z)+(1-\theta)F(y,z).$$ What I also don't know is how to bring the $\sup$ onto the RHS, i.e. to get $$G(\theta x+(1-\theta)y)\geq\theta G(x)+(1-\theta)G(y).$$ There is a result that says if $F(x,z)$ is concave in both $(x,z)$, then $\sup_z F(x,z)$ is also concave, but I don't have this luxury of concavity in the second argument. I am also not sure if the result is correct, but was hoping to verify it. AI: The supremum is actually convex because the supremum of a family of convex functions is convex. Your case is almost equivalent to the full generality of this fact and therefore there is no reason to believe that $\sup_zF(x,z)$ is affine.
H: Prove or give a counterexample: for every holomorphic function on the unit disc there is a fixed point $f(z)=z$ Let $f$ be a holomorphic function on $D=\{z\in \mathbb C:|z|<1\}$, and let $f$ be continuous on $\operatorname{cl}(D)$ with $f[D]\subseteq D$. Prove or give a counterexample: $\exists z\in D\colon f(z)=z$. AI: Counterexample: take $a$ with $0<|a|<1$ and let $$f(z)=\frac{z-a}{1-\bar a z}.$$ (Its fixed points satisfy $\bar a z^2=a$, hence $|z|^2=|a/\bar a|=1$, so both lie on the unit circle and none lies in $D$.)
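A quick numerical check, for one concrete choice of $a$ (my choice), that both fixed points of this Blaschke factor lie on the unit circle:

```python
import cmath

a = 0.5 + 0.2j                     # any a with 0 < |a| < 1 works
f = lambda z: (z - a) / (1 - a.conjugate() * z)

# f(z) = z  <=>  z - a = z(1 - conj(a) z)  <=>  conj(a) z^2 = a,
# so the fixed points are the two square roots of a / conj(a), a unimodular number.
root = cmath.sqrt(a / a.conjugate())
fixed_points = [root, -root]
```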
H: Proving absolute continuity of the Laplace transform Suppose $f \in L^\infty(\mathbb{R})$ and define the Laplace transform $F:(0,\infty)\rightarrow \mathbb{R}$ by $$F(s) = \int_0^\infty f(t)e^{-st}dt.$$ Prove that $F$ is absolutely continuous on $[a,b]$ for any $b>a>0$. So, I'm thinking that I might be able to do this by proving that $F$ is Lipschitz. I think I got most of the way to proving that it's Lipschitz, but I'm a little stuck on some algebra getting the final Lipschitz inequality in place. What I have so far: Let $M$ be the essential upper bound of $f$ (which exists because $f\in L^\infty(\mathbb{R})$), and let $x,y \in [a,b]$. Then $$|F(x)-F(y)| = \left\vert \int_0^\infty f(t)e^{-xt}dt - \int_0^\infty f(t)e^{-yt}dt\right\vert \leq M \int_0^{\infty}\left\vert e^{-xt}-e^{-yt}\right\vert dt = $$ $$M\left\vert \frac{1}{x}-\frac{1}{y}\right\vert \leq M\left\vert \frac{1}{a}-\frac{1}{b}\right\vert.$$ And here's where it's not totally coming to me. I'm pretty sure I'm going to want to do something with $M\left\vert \frac{1}{a}-\frac{1}{b}\right\vert$ as my constant for the Lipschitz condition. I'm just not totally sure how to incorporate it. I suppose I could just be completely wrong, and the Laplace transform isn't Lipschitz continuous. Given that $|F(x)-F(y)| \leq M\left\vert \frac{1}{x}-\frac{1}{y}\right\vert$ I think it could be proved directly from the definition of absolute continuity. Any thoughts either way would be greatly appreciated. Thanks in advance. AI: You were almost there. For $x, y \in [a,b]$ $$\left| \frac{1}{x} - \frac{1}{y} \right| = \frac{|x-y|}{|xy|} \le \frac{1}{a^2} |x-y|$$ So that $F$ is $(M/a^2)$-Lipschitz.
H: Continuous Random Variable Conditional Probability I need to answer two questions: Find $P(Y|X)$; $P(0<Y<1/2 | X=0.15)$. For #1 I know I would have to use the double integral and find $p_{XY}$, and I understand how to do #1. However, I'm completely stuck on #2 and don't understand how to use the value of $X= 0.15$, because if this is plugged into the conditional probability formula, would it not make the denominator $0$? Any help would be greatly appreciated. AI: $f(x|y) = \frac{f(x,y)}{f(y)}$, which happens to be $1$, i.e. $f(x|y)=f(x)$, so $X$ and $Y$ are independent. Therefore, $P(Y<a|X)=P(Y<a)$.
H: What is the number of connected components of a continuous image of some topological space? We know that continuous image of a connected space is always connected i.e continuous image of a space with one component will always have one component. Also a space with two components (2×2 invertible matrices) can have a connected image ( to real line via trace) But what are the other possibilities of the number of components of a disconnected space? Can It be anything? AI: The number of components of the image cannot be greater than that of the domain, since every component of the domain has connected image, but it can go down as much as you like, for example the Baire space $\mathcal N=\Bbb N^{\Bbb N}$ is totally disconnected and has uncountably many connected components, but every separable, completely metrizable space is the continuous image of $\mathcal N$.
H: Order 5 rational map? $p(z) = 1-\frac{1}{z}$ has order 3: $p(p(p(z))) = z$ Is there an order 5 rational map with rational coefficients? AI: Such a rational map would be a linear fractional transformation: $$f(z)=\frac{az+b}{cz+d}.$$ Its $k$-th iterate is the identity iff $A^k=\lambda I$ for some $\lambda$ where $A=\pmatrix{a&b\\c&d}$. If the coefficients $a,\ldots,d$ are rational this is impossible for $k=5$ unless $f$ is already the identity. The eigenvalues of $A$ would have to be $\sqrt[5]{\lambda}\zeta$ and $\sqrt[5]{\lambda}\zeta^{-1}$ for some non-trivial fifth root of unity $\zeta$. Their sum and product must be rational, so that $\lambda^{2/5}\in\Bbb Q$. Since $\lambda$ itself is rational (it is a diagonal entry of the rational matrix $A^5$) and nonzero, $\sqrt[5]{\lambda}=(\lambda^{2/5})^3/\lambda$ is rational as well. But then $\sqrt[5]{\lambda}(\zeta+\zeta^{-1})$ is irrational, a contradiction.
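It is easy to confirm computationally both that $p(z)=1-\frac1z$ has order $3$ and that its matrix (here $A=\left(\begin{smallmatrix}1&-1\\1&0\end{smallmatrix}\right)$, my concrete choice for the map $z\mapsto(z-1)/z$) satisfies $A^3=-I$, a scalar matrix. A Python sketch with exact rational arithmetic:

```python
from fractions import Fraction

def p(z):
    return 1 - 1 / z

z0 = Fraction(7, 3)                 # arbitrary rational test point, not 0 or 1
after_three = p(p(p(z0)))           # should return z0 exactly

def matmul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [1, 0]]               # matrix of z -> (z - 1)/z = 1 - 1/z
A_cubed = matmul(A, matmul(A, A))   # expect the scalar matrix -I
```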
H: Find $\liminf A_n$ and $\limsup A_n$ Let $A_n = (-1 + \frac{1}{n}, 2 - \frac{1}{n})$ if $n$ is odd and $[0, n]$ if $n$ is even. Find $\liminf A_n$ and $\limsup A_n$. This is a question on a past paper for a measure theory module I'm taking and I'm not quite sure if my answer is correct. I have $\limsup A_n = [-1, 2]$ and $\liminf A_n=[0,0]$. I'm not quite sure if either of these are correct as the $[0, n]$ when $n$ is even is throwing me. Since we just have $n$ does that mean that $\limsup A_n$ could be $[-1, \infty)$? Any help would be appreciated! AI: We know that $x \in \limsup_{n\to\infty} A_n$ if and only if $x \in A_n$ for infinitely many $n \in \Bbb{N}$. We claim that $$\limsup_{n\to\infty }A_n = (-1,+\infty).$$ Indeed, let $x \in (-1,+\infty)$. If $x \in [0,+\infty)$ then $x \in A_n$ for every even $n\in\Bbb{N}$ such that $n \ge x$. If $x \in (-1,0)$ then pick $n_0\in\Bbb{N}$ such that $-1+\frac1{n_0} <x$ and hence for every odd $n \ge n_0$ we have $x \in A_n$. We conclude $x \in \limsup_{n\to\infty} A_n$. Conversely, if $x \le -1$, then $x$ is not contained in any $A_n$ and hence $x \notin \limsup_{n\to\infty} A_n$. Similarly, $x \in \liminf_{n\to\infty} A_n$ if and only if $x \in A_n$ for all except finitely many $n \in \Bbb{N}$. We claim that $$\liminf_{n\to\infty }A_n = [0,2).$$ Indeed, let $x \in [0,2)$. Clearly $x \in A_n$ for every even $n \ge 2$. Pick $n_0 \in \Bbb{N}$ such that $x < 2-\frac1{n_0}$. Then for every odd $n \ge n_0$ we have $x \in A_n$. We conclude that for every $n \ge \max\{2,n_0\}$ we have $x \in A_n$ and hence $x \in \liminf_{n\to\infty} A_n$. Conversely, if $x < 0$ then $x\notin A_n$ for every even $n \in \Bbb{N}$, and if $x \ge 2$ then $x\notin A_n$ for every odd $n \in \Bbb{N}$ and thus $x \notin \liminf_{n\to\infty} A_n$.
H: Proof that no number less than $b$ can be an upper bound. The following are definitions used in the two proofs given below: Definition: A subset $I$ of $\mathbb{R}$ is called an interval if, for any $a,b \in I$ and $x \in \mathbb{R}$ such that $a \le x \le b$, we have $x \in I$. Definition: Let $a \le b$ be any two real numbers. The open interval $(a,b)$ is defined as the set $$ (a,b) = \{x \in \mathbb{R} : a < x < b \}$$ The following proof is from a Calculus textbook: Let $a \leq b$ be any two real numbers. What is $\sup(a,b)$? Proof. If $x \in (a,b)$, then $a < x < b$. This immediately tells us that $a$ is a lower bound and $b$ is an upper bound. Any real number less than $b$ (but greater than $a$) is in the interval, so no number less than $b$ can be an upper bound. Therefore, $\sup(a,b) \geq b$. But $\sup(a,b) \leq b$ by definition (since $b$ is an upper bound), so $\sup(a,b)=b$. If one tried to show "no number less than $b$ can be an upper bound," would that proof look like the following? Proof. Suppose $y \in \mathbb{R}$ is less than $b$ and greater than $a$. Then by definition, $y \in (a,b)$. This means that there are real numbers $a'$ and $b'$ in $(a,b)$ such that $a<a' \leq y \leq b' < b$ (where $b' < b$ since $b$ is an upper bound and $b \notin (a,b)$, and $a < a'$ since $a$ is a lower bound and $a \notin (a,b))$. If $y = b'$, then there exists $\dfrac{b' + b}{2} \in (a,b)$ where $y < \dfrac{b + b'}{2} < b$. In particular, $y=b'$ is not an upper bound of $(a,b)$ since $y < \dfrac{b + b'}{2}$. On the other hand, if $y < b'$, then $\dfrac{y + b'}{2} \in (a,b)$ where $y<\dfrac{y + b'}{2}<b'$. This implies that $y < b'$ is not an upper bound of $(a,b)$ since $y < \dfrac{y + b'}{2}$. Any $y \in \mathbb{R}$ less than or equal to $a$ would, by definition, be a lower bound. Thus, no real number less than $b$ can be an upper bound. AI: I think your argument is a little more complicated than it needs to be. 
We can say that given $x\in(a,b)$, there exists $y=\frac{x+b}{2}\in (a,b)$ with $y>x$. This is because the average of $x$ and $b$ always lies strictly between the two. Since such a $y$ exists, $x$ is not an upper bound.
H: Union of intersection of families I'm studying Halmos' Naive Set Theory. In Section 9, Families, he (essentially) mentions the following exercise (on page 35). Exercise. If $\{A_i\}$ and $\{B_j\}$ are both nonempty families, then $(\bigcap_iA_i)\bigcup(\bigcap_jB_j)=\bigcap_{i,j}(A_i\bigcup B_j)$. However, I think that this is wrong, and the equality should be replaced with inclusion, i.e., the LHS should be a subset of (not generally equal to) the RHS. Correct? AI: No, the equality is correct. Suppose that $x\in \bigcap_{i,j}(A_i\cup B_j)$. If $x\in A_i$ for all $i$, then $x\in \bigcap A_i$ and we are done. Otherwise, there exists $i$ with $x\not\in A_i$. As $x\in A_i\cup B_j$ for all $j$, it follows that $x\in B_j$ for all $j$, so $x\in\bigcap_jB_j$.
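The identity can be stress-tested on random finite families with a short Python sketch (finite families only, of course, but it exercises the argument above; the universe size and seed are my choices):

```python
import itertools
import random

random.seed(0)
UNIVERSE = set(range(8))

def random_family(size):
    # A family of `size` random subsets of the universe.
    return [set(x for x in UNIVERSE if random.random() < 0.5) for _ in range(size)]

def identity_holds(As, Bs):
    lhs = set.intersection(*As) | set.intersection(*Bs)
    rhs = set(UNIVERSE)
    for A, B in itertools.product(As, Bs):
        rhs &= A | B
    return lhs == rhs

trials = [identity_holds(random_family(3), random_family(4)) for _ in range(200)]
```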
H: Real Signal Properties and $\cos(t)$ Fourier coefficients I am struggling to conceptually understand why the Fourier coefficients for $\cos(t)$ are $a_1 = 1/2$ and $a_{-1} = 1/2$ in light of the fact that, for a real and even signal, the Fourier coefficients should be real and even (I am referring to a complex Fourier series). What am I missing to explain this discrepancy? Thank you! AI: $1/2$ is real so these coefficients are real. (All the coefficients you didn't list are $0$, which is also real.) Also, $a_n = a_{-n}$ for all $n$, so this set of coefficients has even symmetry. There is no discrepancy, so I can't guess what you are missing.
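The coefficients themselves are immediate from $\cos t=\tfrac12 e^{it}+\tfrac12 e^{-it}$, and a numerical evaluation of $a_n=\frac1{2\pi}\int_0^{2\pi}\cos(t)e^{-int}\,dt$ confirms them. A Python sketch:

```python
import cmath
import math

def fourier_coefficient(n, steps=4096):
    # a_n = (1/2pi) * integral over [0, 2pi] of cos(t) e^{-i n t} dt,
    # via a uniform Riemann sum (exact up to rounding for trig polynomials).
    h = 2 * math.pi / steps
    return sum(math.cos(k * h) * cmath.exp(-1j * n * k * h)
               for k in range(steps)) * h / (2 * math.pi)

coeffs = {n: fourier_coefficient(n) for n in range(-3, 4)}
```

Only $a_1$ and $a_{-1}$ are (numerically) nonzero, both equal to $1/2$, so the set of coefficients is indeed real with even symmetry.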
H: Can the Chinese Remainder Theorem extend to an infinite number of moduli? I've been trying to find info on this and have come up lacking. The CRT says that a system of congruences with pairwise coprime moduli always has a unique answer (modulo the product of the original moduli). And the generalizations I've seen defined say you can use residues $\{a_1, a_2,..., a_n\}$ and moduli $\{m_1, m_2,..., m_n\}$ to find a unique $X$ mod $M$ (with $M = m_1 \cdot \ m_2 \cdot ... \cdot m_n$), so long as the moduli are pairwise coprime. Can this be extended to an infinite set of residues and moduli? It seems to me you could choose $n$ to be as large as you desire, but I feel like it's unclear. Thoughts? It feels vaguely like Euclid's proof of infinitude of primes, but "feels" isn't really well-defined... AI: Not in general. For example, suppose you want $x \equiv 0\bmod 2$ and $x \equiv 1 \bmod p_i$ for all $i > 1$, where $p_i$ is the $i$'th prime. Since $x$ is even, $x \ne 1$, so $x \equiv 1 \bmod p_i$ forces $|x - 1| \ge p_i$. But the primes are unbounded, so no single integer $x$ can satisfy this for infinitely many $i$.
H: Contradicting Equations describing "Resultant" velocity My definition of resultant velocity: If a certain object, at some instant of time, moves with speed $v_x$ in the x-direction, and with speed $v_y$ in the y-direction, then it has a resultant velocity which is the hypotenuse of the triangle formed by the two vectors: one purely in the x-direction with magnitude $v_x$ and the other purely in the y-direction with magnitude $v_y$. Thus, $v_x$ and $v_y$ are components of the resultant velocity vector. One way to represent how the three vectors relate in magnitude is by the classic Pythagorean theorem: $$v_{res}^2 = v_x^2 + v_y^2 \tag1$$ However, the object's position also follows the Pythagorean theorem (for ease of calculation let's say at $t = 0$ the object is at the origin), yielding: $$r(t)^2 = x(t)^2 + y(t)^2$$ Differentiating both sides with respect to $t$ and re-arranging yields: $$ r\dot r = x\dot x + y\dot y $$ $$ \dot r = v_{res} = \frac{x\dot x + y\dot y}r \tag2$$ Of course (1) and (2) are not equivalent - but if they are both derivations for the resultant velocity of an object - why are they not the same? I suspect that the two setups are representing different scenarios (like the first one is a simple relative speed problem and the latter is a related rates problem involving, perhaps, 2 objects). AI: They are not the same in general because the derivative of the norm is not the same as the norm of the derivative. Speed $v$ is the norm of the velocity vector $\vec{v}$, i.e. $$v = \|\vec{v}\| = \left\|\frac{d}{dt}\vec{r}\right\|= \|(\dot{x},\dot{y})\| = \sqrt{\dot{x}^2+\dot{y}^2}$$ Your second concept is the derivative of the norm of the position vector $\vec{r}$, i.e. $$\dot{r} = \frac{dr}{dt} = \frac{d}{dt} \|\vec{r}\| = \frac{d}{dt}\|(x,y)\|=\frac{d}{dt}\sqrt{x^2+y^2}.$$ For a simple example, consider a circular motion given by $\vec{r}(t) = (\cos t, \sin t)$.
The velocity is $$\vec{v}(t) = (-\sin t,\cos t) \implies v = \|\vec{v}\| = 1.$$ Your other concept is $$r = \|\vec{r}\| = 1 \implies \dot{r} = 0$$ so clearly $v \ne \dot{r}$. It is interesting to see that $\dot{r} \le v$ always holds. Namely, we have $$2r\dot{r}=\frac{d}{dt}(r^2) = \frac{d}{dt}\|\vec{r}\|^2 = \frac{d}{dt}(\vec{r}\cdot\vec{r}) = 2\dot{\vec{r}}\cdot \vec{r} = 2\vec{v}\cdot\vec{r}$$ and hence the Cauchy–Schwarz inequality implies $$r\dot{r} = \vec{v}\cdot\vec{r} \le \|\vec{v}\|\|\vec{r}\| = vr \implies \dot{r} \le v.$$
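Both facts, $v=1$ while $\dot r=0$ for the circle and $\dot r\le v$ in general, can be checked with finite differences. A Python sketch (the step size, test point, and the extra spiral path are my choices):

```python
import math

def central_difference(f, t, h=1e-6):
    # Second-order accurate numerical derivative of a scalar function.
    return (f(t + h) - f(t - h)) / (2 * h)

def speed(path, t):
    # Norm of the derivative: |r'(t)|.
    xp = central_difference(lambda s: path(s)[0], t)
    yp = central_difference(lambda s: path(s)[1], t)
    return math.hypot(xp, yp)

def r_dot(path, t):
    # Derivative of the norm: d/dt |r(t)|.
    return central_difference(lambda s: math.hypot(*path(s)), t)

circle = lambda t: (math.cos(t), math.sin(t))          # the answer's example
spiral = lambda t: (t * math.cos(t), t * math.sin(t))  # a generic path, r(t) = t

t0 = 1.3
```

For the spiral, $\dot r=1$ exactly while the speed is $\sqrt{1+t^2}>1$, so the inequality is strict.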
H: Need help with $\arccos$ equation I have the equation $$ \cos(2x + \frac{\pi}{9}) = 0.5$$ I know that in order to solve for $x\in \Bbb R$, I need to use $$\arccos(0.5) = 2x + \frac{\pi}{9} $$ This yields $$ 2x + \frac{\pi}{9} = \begin{cases} \frac{\pi}{3} + 2k\pi, & \text{Positive angle} \\ 2 \pi - \frac{\pi}{3}+ 2k\pi, & \text{Negative angle} \end{cases} $$ I would then subtract $\frac{\pi}{9}$ from both sides and get: $$ 2x = \begin{cases} \frac{2\pi}{9} + 2k\pi, & \text{Positive angle} \\ \frac{14\pi}{9}+ 2k\pi, & \text{Negative angle} \end{cases} $$ However according to the handout the correct solution is: $$ 2x = \begin{cases} \frac{4\pi}{9} + 2k\pi, & \text{Positive angle} \\ \frac{16\pi}{9}+ 2k\pi, & \text{Negative angle} \end{cases} $$ Can anyone help me? AI: Remember that $\;\cos x=\alpha\implies x=\pm\arccos\alpha\;$ up to multiples of $2\pi$. Besides this, we only need basic trigonometry to solve that equation: $$\cos t=\frac12\iff t=\pm\frac\pi3+2k\pi\;,\;\;k\in\Bbb Z\implies$$ putting $\;t=2x+\frac\pi9\;$ we get $$2x+\frac\pi9=\pm\frac\pi3+2k\pi\implies\begin{cases}2x=\cfrac{2\pi}9+2k\pi,&\text{(positive solution)}\\{}\\ 2x=-\cfrac{4\pi}9+2k\pi=\cfrac{14\pi}9+2k\pi,&\text{(negative solution)}\end{cases}\;,\;k\in\Bbb Z$$ Thus you're right, the handout's solution is wrong.
H: Understanding the definition of a $G$-module I took the following definition from Milne's Fields and Galois Theory (page 69): The part I underlined is the one giving me trouble. In particular, I would like to know why a $G$-module by that definition is the same as giving a homomorphism $f: G \to \operatorname{Aut}(M)$. Criterion (a) ensures that $f$ maps to an endomorphism of $M$. Criterion (b) ensures that $f$ is indeed a homomorphism. Now I think that Criterion (c) guarantees that $f$ maps to an automorphism, but I am not sure if this explanation is sufficient: We have $f_{1_G} = f_{\sigma \sigma^{-1}} = f_\sigma \circ f_{\sigma^{-1}}$ for any $\sigma \in G$ (the second equality follows from (b)). Criterion (c) should guarantee that $f_\sigma^{-1} = f_{\sigma^{-1}}$. Since $f_\sigma$ is an endomorphism (cf. my first point) which has an inverse map, it must be an automorphism. Is this argumentation correct? AI: The elements of $G$ are each assigned to a function acting on $M$; the map $f\colon G\to \operatorname{Aut}(M)$ tells you the assignment. In this language, (a) says that $f(g)(m+m')=f(g)(m)+f(g)(m')$, i.e., that $f(g)$ is a module homomorphism (endomorphism) of $M$. Now (b) says that $f(gh)(m)=f(g)(f(h)(m))$, so as functions $M\to M$, we have $f(gh)=f(g)f(h)$. Really, this says that $f$ is a monoid homomorphism from $G$ to $\operatorname{End}(M)$. Together, note that $f(g^{-1})f(g)=f(g^{-1}g)=f(1)$ and likewise $f(g)f(g^{-1})=f(1)$; since (c) says that $f(1)$ is the identity on $M$, this guarantees that $f(g)$ is always invertible. Thus, the assignment $g\mapsto f(g)$ is really a map $G\to\operatorname{Aut}(M)$, and in fact $f$ is a group homomorphism. Now it's easy to see that any group homomorphism $f\colon G\to \operatorname{Aut}(M)$ satisfies (a), (b), and (c), and these processes are inverse, so a $G$-module structure on $M$ is naturally equivalent to a homomorphism $G\to \operatorname{Aut}(M)$.
H: Determining linear independence by inspection I was given a question asking me to "determine by inspection whether the following vectors are linear independent." I know how to determine if vectors are independent by putting in row-echelon form and looking for free columns, but I don't know if determining by inspection is referring to a different method. Is anyone familiar with this? Is there a different way to determine linear independence? AI: "By inspection" literally means "by looking at it and seeing if there is anything obvious". In the case of determining dependence of vectors, you know that the vectors are linearly dependent if you can write one of them as a linear combination of the others. So, for example, if you're given the vectors $x = (1, 0, 0)$, $y = (0, 1, 0)$ and $z = (4, 5, 0)$ then you can quickly see that $z = 4x + 5y$ and hence they are linearly dependent. Similarly, if you see that the vectors are such that any attempt to combine them will result in non-zero values somewhere, then you can state that they are independent by inspection. Sometimes this might rely on knowing a few extra theorems - e.g. knowing that it takes $n$ linearly independent vectors to span $\mathbb{R}^n$, so if you can see that the group of vectors is able to span the space then they must be independent.
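The inspection in the example can be backed up by a rank computation; a NumPy sketch:

```python
import numpy as np

x = np.array([1, 0, 0])
y = np.array([0, 1, 0])
z = np.array([4, 5, 0])

stacked = np.vstack([x, y, z])
rank = np.linalg.matrix_rank(stacked)  # rank < 3 means the vectors are dependent
```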
H: Getting rid of non-invertible matrices I recently came across the following formulation: $CYC'=M$ where all letters are matrices and the ' stands for transposed. I was wondering if there is a clever way to "isolate" $Y$ (in the sense of having $Y=\dots$) even if matrix $C$ is not invertible. $Y$ is also diagonal and positive semidefinite. I probably should mention that (due to the specific case I am working with), with a little bit of work I could reduce the problem to the form $x'Yz=k$, where $x$ and $z$ are different vectors and $k$ is a scalar. I don't know if this is relevant to the solution, and still I'd be more interested in knowing if there is an answer for the general case. Thanks for the help (and please let me know if I am doing something wrong regarding guidelines, I am kind of new) AI: With the vectorization operator, we have $$ CYC' = M \implies (C \otimes C)\operatorname{vec}(Y) = \operatorname{vec}(M), $$ where $\otimes$ denotes a Kronecker product. In the case that $C$ is invertible, the equation can be solved to yield $$ \operatorname{vec}(Y) = (C \otimes C)^{-1}\operatorname{vec}(M) = (C^{-1} \otimes C^{-1})\operatorname{vec}(M). $$ From there, $\operatorname{vec}(Y)$ could be "unvectorized" to yield $Y$. In the case that $C$ is not invertible, the solution is non-unique and may not exist. However, we can obtain a least-squares solution using the Moore–Penrose pseudoinverse.
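The vectorization identity is easy to verify with NumPy; note that $\operatorname{vec}$ stacks columns, which corresponds to `order="F"` (Fortran, i.e. column-major, order) in NumPy. The matrix sizes and random seed below are my choices:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))
Y = np.diag(rng.random(3))          # diagonal, PSD, as in the question
M = C @ Y @ C.T

vec = lambda A: A.reshape(-1, order="F")   # column-stacking vectorization

lhs = np.kron(C, C) @ vec(Y)               # vec(C Y C') = (C ⊗ C) vec(Y)
rhs = vec(M)

# When C (hence C ⊗ C) is invertible, Y can be recovered:
recovered = np.linalg.solve(np.kron(C, C), vec(M))
```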
H: Show that $X \sim Y$ implies $E[f(X)] = E[f(Y)]$ in any subset of $\Omega$ I know that given two identically distributed variables $X$, $Y$ and a measureable function $f$, the theorem holds when you integrate over the universe of events $\Omega$. However, I am not sure if it holds when you integrate over a smaller subset. Given a subset $A \subset \Omega$, I was trying to use the indicator function to change the probability of $X \in A$ with $Y \in A$, but the $|X|$ is still different from $|Y|$. Is it possible that the statement is false? Thanks for reading or giving any suggestions. AI: Is this what you mean? No, it is not the same over any set. Imagine we roll two dice. Let $X$ be the first outcome, $Y$ the second. The outcomes are independent; they are identically distributed, $X \sim Y$. Of course (as noted) on the whole space $$ \mathbb E[X] = \mathbb E[Y] = \frac{7}{2} $$ Now consider this set: $A = \{X = 1\}$. Then $$ \mathbb E[X\mathbf1_A] = 1\cdot\frac{1}{6}=\frac{1}{6},\qquad \mathbb E[Y\mathbf1_A] = \frac{1}{6}\cdot\frac{7}{2} = \frac{7}{12} $$
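The dice computation in the answer can be checked by exact enumeration over all 36 outcomes (an illustrative Python sketch using `fractions` for exact arithmetic; not part of the original exchange):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # (X, Y) for two fair dice
p = Fraction(1, 36)

# A = {X = 1}; compare E[X·1_A] with E[Y·1_A].
E_X_A = sum(x * p for x, y in outcomes if x == 1)
E_Y_A = sum(y * p for x, y in outcomes if x == 1)

print(E_X_A, E_Y_A)  # 1/6 and 7/12 — different, even though X ~ Y
```

So the two restricted expectations really do differ, exactly as the answer claims.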
H: Let $A$ be a finite set and let $B$ be a subset of $A$ with $|A|=n$,$|B|=m$ and $0 Task is: Let $A$ be a finite set and let $B$ be a subset of $A$ with $|A|=n$,$|B|=m$ and $0< m < n$.Find a formula for the number of subsets of $A$ that contain $B$ and prove your statement. I assume this is $2^m$ + something. $2^m$ would count the subsets containing only elements of $B$, but i'm not sure how the sets of the form $(a_1,a_2,b_1)$ or similar would be counted. I'm having troubles with this so help would be appreciated. AI: Hint: As such subsets can be complemented with subsets of $A$ that do not meet $B$ in a unique way, it is the same as counting the subsets of $A\smallsetminus B$. I suppose you can count them.
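The hint says the count equals the number of subsets of $A\smallsetminus B$, i.e. $2^{n-m}$. Here is a brute-force check for small $n,m$ (an illustrative Python sketch, not part of the original exchange; the particular $A$ and $B$ are arbitrary):

```python
from itertools import combinations

A = set(range(5))      # n = 5
B = {0, 1}             # m = 2

# Enumerate every subset of A and count those containing B.
count = 0
for r in range(len(A) + 1):
    for sub in combinations(A, r):
        if B <= set(sub):
            count += 1

print(count)  # 8 == 2**(5 - 2)
```

The bijection behind the hint: a subset containing $B$ is determined by its intersection with $A\smallsetminus B$.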
H: Equality of Splitting Fields I am well aware of the fact that any two splitting fields of a set of polynomials are isomorphic. However, I am wondering when two splitting fields are actually the same. Fix an algebraic closure $E$ of $F$. Then, if $\{ f_i \}$ are polynomials in $F[x]$ and $K_1$ and $K_2$ are splitting fields of $\{ f_i \}$ over $F$ contained in $E$, I want to say that $K_1 = K_2$. In the case of a finite set of polynomials, I can prove this by looking at the product of polynomials in $E[x]$. If $K_1 \neq K_2$, then the product of polynomials would have too many roots over $E[x]$ which would lead to a contradiction. However, I am not sure in the case that $\{ f_i \}$ is infinite, as I cannot just take the product of polynomials. For reference, this question was asked here: Set-theoretic equality of splitting fields within a fixed algebraic closure in the case of a single polynomial. Does this result extend to the infinite case? AI: If $\alpha\in K_1$ is a root of $f_i$ for some $i$, then $\alpha\in K_2$, as $f_i$ splits in $K_2$. That is:$$0=f_i(\alpha)=\lambda \prod_{j=1}^{\deg f_i}(\alpha-\beta_j),$$ with $\beta_j\in K_2$ and $\lambda\neq0$, so $\alpha=\beta_j$ for some $j$. Thus $K_1\subseteq K_2$ and vice versa by the same argument.
H: Proof of Existence of Von Neumann Numerals in ZFC Let us recall the recursive definition of the Von Neumann representation of the natural numbers: $0=\emptyset, S(n)=n \cup \{n\}$ We know by the Axiom of Empty Set that $0$ exists, and we are now left with proving whether or not $1, 2, 3$ and so on exist. $S(n)$ is defined as the Union of the two sets $n$ and $\{n\}$, thus we need to construct the set $X = \{n,\{n\}\}$ and then apply the Axiom of Union to obtain $S(n)$. But how do we know if $X$ exists? Simply by applying the Axiom of Pairing to the sets $n$ and $\{n\}$. Now here's the part where I am completely stuck, how do I know that given $n$ exists, the set $\{n\}$ exists? I thought of applying the Axiom of Pairing to $\emptyset$ and $n$ giving $\{n, \emptyset\}$ but that doesn't go anywhere. Seems incredibly simple and intuitive, but I don't see how this follows from any of the Axioms of ZFC. AI: Show me where it says that the axiom of pairing can only be applied to two distinct sets. It should perhaps be mentioned that you can get the singleton from power set and separation. Which is not a bad idea, since both empty set and pairing are redundant axioms in ZFC.
H: Does this recursion problem have a typo? Just wondering how to cast this sentence into the actual intention — #8 AI: The notation $(a_n,a_{n+1})=1$ can mean that the greatest common divisor of $a_n$ and $a_{n+1}$ is $1$; i.e., $a_n$ and $a_{n+1}$ are relatively prime.
H: Meaning of P-symmetric and O-symmetric I'm working on some problem sets and I come across the phrases "P-symmetric" and "O-symmetric" which were referring to a region in the Cartesian plane. The only clue I have towards their meanings is one of the questions was asking if a region was P-symmetric "for some P in the plane". I would guess that O-symmetric is the same as P-symmetric but the point P must be the origin. I searched for the two terms on the internet but I couldn't find anything. Are these phrases widely used in mathematics or could they have been devised specifically for these problem sets? I'm not sure what these mean because I don't understand how a region could be symmetric about a point because if the region isn't a circle centered at the point, then not all parts of the region's boundary are equidistant from the point. Just in case anyone is wondering, these problem sets are homework but finding the definitions of those two phrases is merely one part of a multi-step problem so I don't think this falls in the realm of asking for homework answers. I did ask my instructors about this but I know that I would probably not get an answer until tomorrow so that is why I am asking on here as well. AI: A region is symmetric about a point $P$ if it is preserved by a point reflection through $P$. In 2-dimensional space, a point reflection is actually a $180^\circ$ rotation through $P$. So a region that has $180^\circ$ rotational symmetry is symmetric about some point in the plane, which I assume is what the term "$P$-symmetric" that you're seeing refers to. (In higher dimensions, point reflections are not the same as rotations; the general definition is that a point reflection through $P$ maps a point $X$ to a point $X'$ such that $P,X,X'$ are collinear and $P$ is the midpoint of the segment $XX'$.)
H: Showing Existence of Antiderivative for Complex-Valued Function I am asked to show that for $z\in \mathbb{C} \setminus \{0,1\}$, there exists an analytic (single-valued) function, $F(z)$ on $\mathbb{C} \setminus \{0,1\}$, such that $F'=f$, where $$f(z) = \frac{(1-2z)\cos(2\pi z)}{z^2 (1-z)^2}$$ I know that if $$\int_{\gamma} f(z) dz =0$$ for all closed contours, $\gamma$, then $f$ has an antiderivative. Furthermore, in the case of the given function above, $f(z)$, I know that Res$(f,0)=$ Res$(f,1)=0$, so using the Residue Theorem I know that for any simple closed contour, $\gamma$, we have $$\int_{\gamma} f(z) dz =0$$ However, to ensure that $f$ has an antiderivative, I need to show that this is true for all closed $\gamma$, not just simple closed $\gamma$. How can I go about finishing this last step of the proof? AI: Here is an alternative solution. Write $$ g(z) = f(z) - \left( \frac{1}{z^2} - \frac{1}{(z-1)^2} \right). $$ Then $g$ has removable singularities at both $z=0$ and $z=1$, and so, $g$ extends to a holomorphic function on $\mathbb{C}$. In particular, $g$ has an antiderivative, say $G(z)$. Then $$ f(z) = g(z) + \frac{1}{z^2} - \frac{1}{(z-1)^2} $$ has an antiderivative $$ G(z) - \frac{1}{z} + \frac{1}{z-1}. $$
H: $t$ derivative of Kirchhoff's solution I know that the solution to the PDE \begin{align*} u_{tt} - \Delta u = 0, \quad \mathbb{R}^3\times[0, \infty)\\ u(x, 0) = 0, \quad x \in \mathbb{R}^3\\ u_t(x, 0) = g(x), \quad x \in \mathbb{R}^3 \end{align*} is $$u(x,t) = \mathrel{\int\!\!\!\!\!\!-}_{\partial B(0,1)}t g(x + tw)dS(w).$$ My question is how is this found: $$ u_t(x,t) = \mathrel{\int\!\!\!\!\!\!-}_{\partial B(0,1)}[g(x + tw) + t \nabla g(x + tw)\cdot w] dS(w). $$ I can tell that the first term in the integral is from the product rule, but I do not understand how apparently $\frac{\partial}{\partial t} g(x + tw) = \nabla g(x + tw)\cdot w$. Is the gradient with respect to $x$? Is this an application of the chain rule and I just don't see it? AI: Yes, this is just the multivariate chain rule, and the gradient is wrt $x$. You can set $h(t)=x+tw$, then $g(x+tw)=g\circ h(t)$ so $$ \frac{d}{dt} g\circ h(t) = (\nabla g)(h(t))\cdot \frac{dh}{dt}(t) = (\nabla g)(x+tw)\cdot w$$
H: What is the difference between recursion and induction? What is the difference between recursion and induction? I have heard those terms used interchangeably, but I was wondering if there is a difference between them, and if so, what the difference is. AI: In my experience: "Recursion" is a way of defining some mathematical object (including a function or computation whose definition involves a recursive algorithm); "Induction" is a way of proving some mathematical statement. Extremely often, if a mathematical statement is made about a recursively-defined object, then the proof of that statement will involve induction. For example, the definition of the Fibonacci numbers is a recursive definition. The proof of the assertion that the $n$th Fibonacci number is at most $2^n$ is an inductive proof.
H: Confusion with Regards to General and Particular Solution Terminology in Differential Equations I have been reading R. Kent Nagle's Fundamental's of Differential Equations textbook and I'm really confused as to the meaning of the terms of "Particular Solution" and "General Solution", specifically as they change from being used in a first-order equation to a second-order equation. So in a first-order differential equation, your answer should have a "C" somewhere resulting from integrating somewhere. This would be called your general solution because you haven't specified any initial conditions. If you did, and you incorporated that information into your solution, then it would be called your particular solution. Alright, so far so good. But when I started learning about second-order differential equations, I got really confused because I understood that if you solve a homogeneous second-order linear differential equation with no initial condition information then you would get a general solution with two "C"s because of its second order. This made sense, but when I learned about the Method of Undetermined Coefficients and the Variation of Parameters, I learned that the general solution to a non-homogeneous second-order linear differential equation involved a particular solution AND the general solution to the homogeneous diff eq. But I don't see why or how this could make sense. Does this occur because the terms particular and general are redefined for second-order diff EQs? I'm overall just very confused about the terminology. General clarification would be greatly appreciated. Edit 1: Thank you both Professor @Robert Israel and @K.defaoite It makes a bit more sense, but I'm still overall confused. To be a little more explicit, why does the Method of Undetermined Coefficients give you a particular solution? 
Again, I'm very used to the idea that a particular solution is a general solution with initial conditions applied to it (from first-order differential equations) but I don't see the particular-ness that the Method of Undetermined Coefficients gives you in the way that solving an initial value problem for a first-order differential equation does. Thanks again to you both. R. Kent Nagle - Method of Undetermined Coefficients AI: The meaning is the same: a particular solution is just that, one solution, corresponding to one choice of initial conditions. If you have a formula for the general solution, in a second order equation it will have two arbitrary parameters and each choice of values for those parameters gives you a particular solution. Linear equations (for any order) have the property that if you add a solution of the homogeneous equation and a solution of the non-homogeneous equation, you get another solution of the same non-homogeneous equation. If you take the general solution of the homogeneous equation (involving, for a second-order equation, two parameters), and add a particular solution of the non-homogeneous equation, you get the general solution of the non-homogeneous equation.
H: Reference for a combinatorial identity (likely redundant) Fix two natural numbers $m$ and $n$ accordingly. Using Wolfram we have $$\sum_{k=1}^m\binom{k+n}{n}=\dfrac{(m+1)\binom{m+n+1}{n}-n-1}{n+1}.$$ Question: Can someone give me either an appropriate reference in which this formula might have appeared so I can cite the result immediately in a technical report I am writing or a hint how to prove the identity without taking so much space (or an intuitive combinatorial sketch obvious to any reader)? Thanks in advance. Also given likely similar (or exact) sums have been asked before in stack exchange I would like to apologise in advance for this redundant posting (there are too many of them for me to check). Do give me the link to the relevant previous posting if you don't mind. AI: It's essentially the hockey stick identity. $$\sum_{k=1}^m\binom{n+k}n=\sum_{k=n+1}^{m+n}\binom{k}n=\sum_{k=0}^{m+n}\binom{k}n-1\,,$$ since $\binom{k}n=0$ when $k<n$ and $\binom{n}n=1$, and as you can see (with proofs) at the link, that last summation simplifies to $\binom{m+n+1}{n+1}$, giving you the identity $$\sum_{k=1}^m\binom{n+k}n=\binom{m+n+1}{n+1}-1\,.$$ (Note that the Wolfram result simplifies to this.)
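The closed form is easy to sanity-check numerically with `math.comb` (a quick Python sketch of my own; the symbols follow the answer above):

```python
from math import comb

# Check sum_{k=1}^m C(n+k, n) == C(m+n+1, n+1) - 1 over a grid of m, n.
ok = all(
    sum(comb(n + k, n) for k in range(1, m + 1)) == comb(m + n + 1, n + 1) - 1
    for m in range(1, 8)
    for n in range(1, 8)
)
print(ok)  # True
```

This does not replace a citation, but it rules out off-by-one slips before quoting the identity in a report.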
H: sequence of functions converges pointwise at irrationals Let $g:\mathbb{N}\to \mathbb Q$ be a bijection; let $x_n=g(n)$. Define the function $f:\mathbb{R}\to \mathbb{R}$ as $$x_n\mapsto 1/n \text{ for } x_n\in \mathbb Q$$ $$x\mapsto 0 \text{ for } x\notin \mathbb Q. $$ I proved that this function is continuous precisely at $\mathbb R\setminus\mathbb Q$. But I need to find also a sequence of continuous functions $f_n:\mathbb R\to \mathbb R$ that converge pointwise to $f$. Here is my attempt: Fix $n\in \mathbb N$; for every $k\in \{1,\ldots, n\}$ set $\delta_{nk}=1/4 \text{ min}\bigl(\{1/n\}\cup \{\vert x_m-x_r\vert: m\neq r\text{ and }m,r=1,\ldots,n\} \bigr)$. Then use the Urysohn Lemma to get continuous functions $h_{nk}:\mathbb{R}\to [0,1/k]$ such that $h_{nk}(x_k)=1/k$ and $h_{nk}(U_{nk}^c)=0$, where $U_{nk}$ is the open interval about $x_k$ with diameter $\delta_{nk}$; and $U_{nk}^c$ denotes its complement. Finally set $f_n=\sum_{k=1}^n h_{nk}$. So defined, the sequence $\{f_n\}$ clearly converges pointwise at $\mathbb Q$; but I cannot show it also converges at each irrational. Also, this is a question at the end of the section on the Baire Category Theorem in Munkres' Topology book. I do not see the connection between that theorem and this exercise. AI: Suppose there is some irrational $x$ such that $f_n(x)\not\to f(x)=0$. That is, there is some $\epsilon>0$ such that for every $N\in \mathbb N$ we can find some positive integer $m>N$ with $f_m(x)\geq \epsilon$. Thus there is some subsequence $\{f_{n_i} \}$ and some positive real $\varepsilon$ such that $f_{n_i}(x)\geq \varepsilon$ for all $i$. By the construction of the $f_{n}$'s, for fixed $n_i$ we have that $x$ lies within $U_{n_i k_i}$ for some rational $x_{k_i}$. The construction of $\delta_{nk}$ implies that if $i<j $, then $k_i<k_j$ (this is because $U_{nk}$ and $U_{n\tau}$ are disjoint for $k\neq \tau$). This means that $k_i\to \infty $ as $i\to \infty$. Since the $U_{n_i k}$ are pairwise disjoint, $f_{n_i}(x)=h_{n_i k_i}(x)\leq 1/k_i\to 0$, contradicting $f_{n_i}(x)\geq\varepsilon$.
H: Prime Factorization Proof - Find the unique integer k Let $S = 1! 2! \dotsm 100!$ Prove that there exists a unique positive integer $k$ such that $S/k!$ is a perfect square. I've seen this question asked before but the answers were quite confusing. Anyone have a simpler solution to this problem? I believe the idea is to factor out perfect squares, but I'm not entirely sure how this works. AI: Here's an easy answer for the existence part (taking $k=50$): $S = (1!) (1! \cdot 2) (3!) (3! \cdot 4) \cdots (99!) (99! \cdot 100)$ $=(1!) (1! \cdot 2\cdot 1) (3!) (3! \cdot 2\cdot 2) \cdots (99!) (99! \cdot 2\cdot 50)$ $=(1!)^2(3!)^2\cdots(99!)^2\cdot 2^{50}\cdot (50!)$ It is now easy to see that $\frac{S}{50!} = [(1!3!\cdots99!)(2^{25})]^2$
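Since $S$ fits comfortably in Python's arbitrary-precision integers, both the perfect-square claim and the exact square root given in the answer can be verified directly (an illustrative sketch of my own, not part of the original answer):

```python
from math import factorial, isqrt

# S = 1! · 2! · ... · 100!
S = 1
for k in range(1, 101):
    S *= factorial(k)

assert S % factorial(50) == 0
v = S // factorial(50)

# Perfect-square check, plus the closed form from the answer:
# S / 50! = [ (1!·3!·...·99!) · 2^25 ]^2
root = isqrt(v)
odd_part = 1
for k in range(1, 100, 2):
    odd_part *= factorial(k)

print(root * root == v, root == odd_part * 2**25)  # True True
```

Uniqueness of $k$ still needs a separate argument (comparing prime exponents mod 2), which this check does not address.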
H: Will the center of one be larger than the center of the other?(center of gravity) Assume that you have $n$ positive values $C_1,C_2,\ldots,C_n$, and you have $n$ values $g_1,g_2,\ldots,g_n$ where each $g_t\in[0,0.1]$. Do we then have that $$\frac{\sum\limits_{t=1}^n\frac{C_tg_t}{(1+g_t)^t}}{\sum\limits_{t=1}^n\frac{C_t}{(1+g_t)^t}}\ge\frac{\sum\limits_{t=2}^n\frac{C_tg_t}{(1+g_t)^{t-1}}}{\sum\limits_{t=2}^n\frac{C_t}{(1+g_t)^{t-1}}}?$$ AI: As a counterexample, if $$ \left\lbrace \begin{align*} &n=2\\[4pt] &C_1=C_2=1\\[4pt] &g_1,g_2=\frac{1}{20},\frac{1}{10}\\[4pt] \end{align*} \right. $$ then letting $L,R$ denote respectively the $\text{LHS},\text{RHS}$ of your proposed inequality, we get $$ \left\lbrace \begin{align*} R&=\frac{1}{10}\\[4pt] L&=\frac{331}{4520}\\[4pt] R-L&=\frac{121}{4520}\\[4pt] \end{align*} \right. $$ so $R > L$. In fact, for the case $n=2$, your proposed inequality holds if and only if $g_1 \ge g_2$.
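The counterexample is easy to confirm with exact rational arithmetic (a Python sketch of my own using `fractions`; $L$ and $R$ are as in the answer):

```python
from fractions import Fraction as F

C = [F(1), F(1)]
g = [F(1, 20), F(1, 10)]   # g_1 = 1/20, g_2 = 1/10, both in [0, 0.1]

def ratio(start):
    # Sums run over t = start..2; the exponent is t for the LHS (start = 1)
    # and t - 1 for the RHS (start = 2), i.e. t - start + 1.
    num = sum(C[t - 1] * g[t - 1] / (1 + g[t - 1]) ** (t - start + 1)
              for t in range(start, 3))
    den = sum(C[t - 1] / (1 + g[t - 1]) ** (t - start + 1)
              for t in range(start, 3))
    return num / den

L, R = ratio(1), ratio(2)
print(L, R, R - L)  # 331/4520, 1/10, 121/4520 — so R > L
```

This reproduces the answer's $L = 331/4520$, $R = 1/10$, and $R - L = 121/4520$ exactly.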
H: Prove that $F$ is Lebesgue measurable and $\sum_{n=1}^\infty m(E_n)\geq Km(F)$ under these conditions... Question: Suppose $E_n$, $n\in\mathbb{N}$, is a sequence of Lebesgue measurable subsets of $[0,1]$. Let $F$ be the set of all points $x\in[0,1]$ that belong to at least $K$ (some positive number) of the $E_n$'s. Prove that $F$ is Lebesgue measurable and $\sum_{n=1}^\infty m(E_n)\geq Km(F)$. My Attempt/Idea: First, let's show that $F$ is measurable. Let's consider a function $f=\sum_n\chi_{E_n}$. Then, $f:[0,1]\rightarrow[0,\infty]$ is measurable, and so $f^{-1}([K,\infty])$ is measurable. Since $f^{-1}([K,\infty])$ is precisely the number of points that belong to at least $K$ of the $E_n$'s, we have that $F=f^{-1}([K,\infty])$ is measurable. Now we want to show the inequality. $\int f=\int\sum_n\chi_{E_n}=\sum_n\int\chi_{E_n}$, since $f$ are nonnegative functions by MCT. Let $G$ be the set of all points $x\in[0,1]$ that don't belong to at least $K$ of the $E_n$'s. Then, $\sum_n\int\chi_{E_n}=\sum_n(\int_F\chi_{E_n}+\int_G\chi_{E_n})$.... but I am not sure if I am on the right track..... AI: You were almost there. Hint: $\int_0^1 f\, dm \ge \int_F f\,dm.$
H: Does a right angle triangle ABC, right angled at A has A-symmedian? A symmedian is defined to be the isogonal of a median in a triangle . In EGMO , lemma 4.24 (Constructing the Symmedian),which states, "Let $X$ be the intersection of the tangents to $(ABC)$ at $B$ and $C$. Then line $AX$ is a symmedian." My question is what happens to a right angle triangle , when we do this construction, the tangent lines don't meet . Is this construction of symmedian limited to only acute triangles and obtuse triangles. The author of the book hasn't commented anything about this . Though , by a simple angle chase, we can see that for a right angle triangle, the symmedian is the altitude. Can someone clarify ? Note: By EGMO, I mean the book, Euclidean Geometry in Mathematical Olympiads by Evan Chen. AI: In the projective real plane, if $A$ is a right angle then the tangents at $B$ and $C$ both are perpendicular to the side $BC$ and therefore are parallel to each other and meet at a point at infinity. Let this point at infinity be $X$. Then the line $AX$ is parallel to the tangents at $B$ and $C,$ and therefore it also is perpendicular to side $BC$. The altitude from $A$ to $BC$ lies along line $AX,$ as required. You can also do this without a point at infinity if you are willing to consider limiting cases for $A$ an acute angle approaching a right angle and $A$ an obtuse angle approaching a right angle.
H: Isomorphic Sylow p-subgroups of two finite abelian groups G and H Let $G$ and $H$ be abelian groups of order $n$. I want to prove that $G$ is isomorphic to $H$ if and only if for every prime $p\mid n$, the Sylow $p$-subgroup of $G$ is isomorphic to the Sylow $p$-subgroup of $H$. One direction is obvious. How do I show the other direction, i.e. if the Sylow $p$-subgroups are isomorphic then the groups are isomorphic? The goal of this question is to make it easier to verify a list of finite abelian groups of order $n$ (up to isomorphism). AI: We can certainly prove a baby version of the structure theorem for finite abelian groups: Theorem: A finite abelian group $G$ is the direct product of its Sylow $p$-subgroups. Proof: Let $\#G=n=p_1^{n_1}\dots p_r^{n_r}$. By Euclid/Bezout/..., there exist integers $m_1,\dots,m_r$ such that $n/p_i^{n_i}$ divides $m_i$ and $m_i\equiv 1\pmod{p_i^{n_i}}$. Then $m_1+m_2+\dots+m_r\equiv 1\pmod{p_i^{n_i}}$ for all $i$ and hence $\equiv 1\pmod{n}$. By Lagrange, $g^n=1$ for all $g\in G$, so $g^{m_i}$ has $p_i$-power order, i.e., lies in the Sylow $p_i$-subgroup of $G$, and $$ g=g^{m_1+m_2+\dots+m_r}=g^{m_1}g^{m_2}\dots g^{m_r} $$ gives the desired result. QED.
H: If $|g| = k$ then $|g^m| = k / $lcm$(k, m)$ Here $g$ is in a group $G$. The only proof I got uses the concept of cyclic groups, but this wasn't introduced yet. How can I prove it in a simpler way? AI: Note: The result is $$|g^m|=\frac{k}{\color{red}{\gcd(k,m)}}.$$ Let $|g^m|=t$, then $(g^m)^t=e$. This means $k | tm$ ($\because$ $|g|=k$). Consider $$ak=tm \qquad \text{ for some } a \in \Bbb{N}.$$ Then, $$ak=tm \implies a\,\,\frac{k}{\gcd(m,k)}=t\,\,\frac{m}{\gcd(m,k)}.$$ Since $\gcd\left(\frac{m}{\gcd(m,k)}, \frac{k}{\gcd(m,k)}\right)=1$, it follows that $\frac{k}{\gcd(m,k)}$ divides $t$. Conversely, $$(g^m)^{k/\gcd(m,k)}=\left(g^{k}\right)^{m/\gcd(m,k)}=e,$$ so $t$, being the least positive integer with $(g^m)^t=e$, satisfies $t\le\frac{k}{\gcd(m,k)}$. Thus $$t=\frac{k}{\gcd(m,k)}$$
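The formula can be brute-force checked in a cyclic group; in additive $\mathbb{Z}_k$, the generator $g=1$ has order $k$ and the element $m$ plays the role of $g^m$ (an illustrative Python sketch of my own, not part of the original answer):

```python
from math import gcd

def additive_order(m, k):
    """Smallest t >= 1 with t*m ≡ 0 (mod k), i.e. the order of m in Z_k."""
    t = 1
    while (t * m) % k != 0:
        t += 1
    return t

# Compare the brute-force order with k / gcd(k, m) for several groups.
ok = all(
    additive_order(m, k) == k // gcd(k, m)
    for k in (6, 12, 30)
    for m in range(1, k)
)
print(ok)  # True
```

For example, in $\mathbb{Z}_{12}$ the element $4$ has order $12/\gcd(12,4) = 3$.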
H: How to solve a fraction with a numerator in exponential form and a denominator in numerical form without a calculator? The question: "Imagine unwinding (straightening out) all of the DNA from a single typical cell and laying it "end-to-end"; then the sum total length will be approximately $2$ meters. Assume the human body has $10^{14}$ cells containing DNA. How many times would the sum total length of DNA in your body wrap around the equator of the earth." The Earth's equator is $40,075$ km Now I got this question right by dividing the assumed total length of DNA by the distance of the equator: $$\frac{10^{14} \cdot 2 \ m}{40,075,000 \ m} = 4,990,642$$ The answer key says the answer to the question is "about $5 * 10^6$ times around the equator". But my question is, can I solve this question with an equation that converts the distance of the equator to exponential form to arrive at the same formatted answer as the answer key? Is there a mnemonic that makes it simple to do in your head? For example, if I used the equation: $$\frac{10^{14} \cdot 2}{10^7 \cdot 4}$$ Then solved that equation to this: $$\frac{10^7 \cdot 2}{4}$$ From here is it possible to get $$10^6 \cdot 5$$ (the answer) without using a calculator? AI: Yes, it is possible. For your simpler example, $\frac{2 \cdot 10^7}{4}$, rewrite $10^7 $ as $10^1 \cdot 10^6 = 10 \cdot 10^6$. Then you have $\frac{20 \cdot 10^6}{4} = 5 \cdot 10^6$. Now back to the original question: $$\frac{2 \cdot 10^{14}}{40,075,000}$$ First, convert the denominator to standard form (scientific notation), which is $4.0075 \cdot 10^7$. Then rewrite the numerator as $20 \cdot 10^{13}$ using the same process as before. Then you have: $$\frac{20 \cdot 10^{13}}{4.0075 \cdot 10^7}$$ where you can now estimate the denominator as $4 \cdot 10^7$, since you lose almost no precision (unless you need more than $3$ significant figures). Then use the laws of indices to calculate this expression (which one is it)?
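The estimate can be checked against the exact quotient (a quick Python sketch of my own mirroring the arithmetic above):

```python
exact = 2 * 10**14 / 40_075_000          # full-precision quotient
estimate = (20 * 10**13) / (4 * 10**7)   # round 4.0075e7 down to 4e7

relative_error = abs(exact - estimate) / exact
print(exact, estimate, relative_error)   # ~4.99e6 vs 5e6, error well under 1%
```

Rounding the denominator to one significant figure costs less than half a percent here, which is why the mental-math shortcut is safe.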
H: Given $a^2 \equiv n\pmod q$ find $b$ such that $b^2 \equiv n\pmod {q^2}$ Given $a^2 \equiv n\pmod q$ find $b$ such that $b^2 \equiv n\pmod {q^2}$ $a,n,q$ are given. How to find $b$? I know I am supposed to use Hensel's lemma and "lifting" $q$, I just don't know how to do it. AI: You're looking for something of the form $b=a+kq\bmod q^2$. You want $$(a+kq)^2\equiv n\pmod{q^2}\Leftrightarrow a^2+2akq\equiv n\pmod{q^2}.$$ So, you're going to want to choose $k$ so that $$k\equiv \frac{n-a^2}{2aq}\bmod q.$$ You may notice that we're given that $q|n-a^2$, but there's not necessarily an extra factor of $2$ or a factor of $a$. For $a$, Hensel's lemma will fail for $b\equiv 0\bmod q$ if it's not a multiple of $q^2$, and for $2$, Hensel's lemma in fact fails for this polynomial when $q=2$, so dividing by $2$ modulo $q$ is not an issue.
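Here is a small sketch of this lifting step (Python 3.8+, using three-argument `pow` for the modular inverse; the example modulus and values are my own, chosen so that $\gcd(2a, q) = 1$ as the answer requires):

```python
def lift_sqrt(a, n, q):
    """Given a^2 ≡ n (mod q) with gcd(2a, q) = 1,
    return b with b^2 ≡ n (mod q^2)."""
    assert (a * a - n) % q == 0
    # k ≡ (n - a^2)/(2aq) (mod q): divide out q exactly, then invert 2a mod q.
    k = ((n - a * a) // q) * pow(2 * a, -1, q) % q
    return (a + k * q) % (q * q)

# 3^2 = 9 ≡ 2 (mod 7); lift the square root of 2 from mod 7 to mod 49.
b = lift_sqrt(3, 2, 7)
print(b, (b * b) % 49)  # 10, and 10^2 = 100 ≡ 2 (mod 49)
```

Note how the caveats in the answer show up in code: the division by $2a$ is `pow(2 * a, -1, q)`, which fails exactly when $q = 2$ or $q \mid a$.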
H: Find the density of Z I'm doing this problem from Carol Ash's The Probability Tutoring Book Setup - Let $Z = min(X,Y)$ where X and Y are independent random variables. $$X \sim Exp(\lambda = 1) \\Y \sim Exp(\lambda = 1)$$ Find $F_Z(z)$ My attempt: Since X and Y are independent: $f_{X,Y}(x,y) = f_X(x)f_Y(y) = e^{-x}e^{-y}$ Now trying to find $F_Z(z)$ $$F_Z(z) = P(Z \leq z) = P(min(X,Y) \leq z)$$ In order for $min(X,Y) \leq z$, at least one of $X$ or $Y$ has to be $\leq z$ Graphically it looks like So we just need to integrate over the shaded area. $$F_Z(z) = P(Z \leq z) = P(min(X,Y) \leq z) \\ = \int_{x = 0}^{x = z}\int_{y = 0}^{y = \infty}{e^{-x}e^{-y}}{dydx}\ +\ \int_{x = z}^{x = \infty}\int_{y = 0}^{y = z}{e^{-x}e^{-y}}{dydx}$$ $$ = \int_{x = 0}^{x = z}{e^{-x}}(1){dx}\ + \ \int_{x = z}^{x = \infty}{e^{-x}}(1-e^{-z}){dx}$$ $$ = 1 - e^{-z} + {e^{-z}}(1 - e^{-z}) = e^{-z}({1 - e^{-z}})$$ However, the answers in the back state that the answer is $F_Z(z) = 1 - e^{2z}$ I can see intuitively how they would get that. Since X and Y are both exponential random variables with $ \lambda = 1$, Z should be an exponential random variable with $\lambda = 2$... But, I think my work should've led me to the same conclusion. Why didn't it? Any pointers please??? AI: $$P(\min\{X,Y\} \leq z) = 1-P(\min\{X,Y\} > z) = 1-P(X > z)P(Y >z) = 1 - e^{-2z}$$ The penultimate equality uses $\{\min\{X,Y\}>z\}=\{X>z\}\cap\{Y>z\}$ together with independence. By the way, your own integral setup was fine: $1 - e^{-z} + e^{-z}(1 - e^{-z}) = 1 - e^{-2z}$, so the only problem is the final simplification, which dropped the leading $1$. (Also, the answer key's $1 - e^{2z}$ is surely a typo for $1 - e^{-2z}$.)
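A quick Monte Carlo check that $F_Z(z) = 1 - e^{-2z}$, i.e. that the minimum of two independent $Exp(1)$ variables is $Exp(2)$ (an illustrative Python sketch of my own; the seed and sample size are arbitrary):

```python
import math
import random

random.seed(0)
N = 200_000
z = 0.5

# Simulate Z = min(X, Y) with X, Y iid Exp(1).
hits = sum(
    min(random.expovariate(1.0), random.expovariate(1.0)) <= z
    for _ in range(N)
)
empirical = hits / N
theoretical = 1 - math.exp(-2 * z)   # Exp(2) CDF at z

print(empirical, theoretical)  # both ≈ 0.63
```

The empirical CDF lands within Monte Carlo noise of the closed form.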
H: Which of the following topological spaces are separable? This is Exercise 6 from Section 2.2 on page 25 of Topology and Groupoids, by Brown. Exercise: A topological space is separable if it contains a countable, dense subset. Which of the following topological spaces are separable? $\mathbb{Q}$ with the order topology $\mathbb{R}$ with the usual topology $\mathbb{I}^2$ with the television topology an uncountable set with the indiscrete topology the following spaces: (a) $X$ is uncountable, and $N$ is a neighborhood of $x \in X$ if $x \in N \subseteq X$ and $X \setminus N$ is finite. (b) $X$ is uncountable, and $N$ is a neighborhood of $x \in X$ if $x \in N \subseteq X$ and $X \setminus N$ is countable. My attempt: Separable, because $\mathbb{Q}$ is a countable and dense subset. Separable, because $\mathbb{Q}$ is a countable and dense subset. Separable, because $\{ (p, q) \colon p, q \in \mathbb{Q} \cap \mathbb{I}^2 \}$ is a countable and dense subset. Separable, because any countable set $A \subseteq X$ will intersect $X$, which is the one and only neighborhood of any $x \in X$. (a) Separable, because if we let $A$ be countable (and infinite), then for any $x \in X$ and neighborhood $N$ of $x$, we have $X \setminus N$ finite, so it is not possible that $X \setminus N$ contains $A$, which means that $N \cap A \neq \emptyset$. (b) Comments: I think 1, 2, 4, and 5 (a) are probably correct. I am not sure about 3 because I read here that rational points in the plane are countable, but then I read here that the lexicographically ordered square is not separable. For 5 (b) I am lost. I need to either find a countable, dense subset or prove that one can't exist. Thanks for any help. Edit: The television topology is defined as follows: AI: All of your answers except the third are correct. $\Bbb Q\cap\Bbb I^2=\varnothing$, so $\Bbb Q\cap\Bbb I^2$ definitely isn’t dense in $\Bbb I^2$. 
Neither is $\Bbb Q^2\cap\Bbb I^2$, since the sets $(0,1)\times\{y\}$ are all open in $\Bbb I^2$ with the television topology, there are uncountably many of them, and they are pairwise disjoint; any dense subset of the space must have a point in each of them and must therefore be uncountable. (This is actually just the lexicographically ordered square with the axes interchanged.) $5(b)$ is not separable: if $D$ is any countable set, $X\setminus D$ is a non-empty open set disjoint from it, so $D$ is not dense in $X$.
H: Dealing with Subspaces Just having a little trouble understanding how subspaces work. I know that to be a subspace it has to hold for vector addition and scalar multiplication, which I assume is equivalent to $u+v = v+u$ and $ku = (ku_1, ku_2)$ But how does that work for showing something like $U = [x,y,z | 3x + y - 2z = 0]$ as a subspace (let's assume for $R^3$ for now). How do you differentiate between the subspace equaling $0$ or say equaling $3$ and use it in the axioms for vector addition and scalar multiplication? Also, let's say we had $U$ as a subspace of $R^4$ spanned by a set $V = [(3, -2, 0, 1), (1, -2, 1, 2), (0, 2, 1, 3)]$ (completely random numbers just to show example) and we want to show that $V$ is a basis of $U$, would we still calculate linear independence and the span using just $V$ or? I know that every spanning set for a subspace is either a basis for that subspace or has a basis as a subset, but not sure what to do to show it. AI: A subspace (say $U$) of a vector space automatically inherits its operations and the same field; additionally (you missed this) the operations must be closed in $U$, i.e., for any $x,y \in U$, $\alpha x + \beta y \in U ~ \forall ~ \alpha, \beta \in \mathbb F$. That being said, if you want to figure out whether a given list of vectors is a basis of a (sub)space, you just need to check the following - whether they span the entire space & whether they are linearly independent. As for your example, suppose $(x_1,y_1,z_1),(x_2,y_2,z_2)$ are solutions to the equation $3x+y-2z=0$. Pick arbitrary field elements $\alpha,\beta$ and note that $3\alpha x_1+\alpha y_1-2 \alpha z_1+ 3 \beta x_2+\beta y_2 - 2\beta z_2=0$ or $3(\alpha x_1+\beta x_2)+(\alpha y_1+\beta y_2)-2(\alpha z_1+\beta z_2)=0$ which implies $(\alpha x_1+\beta x_2, \alpha y_1+\beta y_2,\alpha z_1+\beta z_2)$ is a solution of the equation as well, showing that $U$ is "closed" under the operation. 
Hence it is a subspace of $\{ (x,y,z) : x,y,z \in \mathbb R \}$ or $\mathbb R^3$. Note: This is a homogeneous equation in three variables.
H: Which of the following functions is even? Let $f(x)$ be a continuous function. Which of the following must be an even function? $(1) \int_{0}^{x} f(t^2)\mathop{dt}$ (2) $\int_{0}^{x} f(t)^2\mathop{dt}$ (3) $\int_0^x t(f(t) - f(-t))\mathop{dt}$ (4) $\int_0^x t(f(t) + f(-t)) \mathop{dt}$. I know an even function satisfies $f(x) = f(-x)$, so I thought it should be the first one since $t^2 = (-t)^2$, but the integral is confusing me. I know for sure that $f(x^2)$ would be an even function without the integral, but the integral makes me think that this is a trick answer. I'm thinking it might also be 4, because I saw somewhere that the integral of an odd function is an even function, and $f(t) = t(f(t) + f(-t))$ satisfies $f(-t) = -t(f(-t) + f(t)) = -\left(t(f(t) + f(-t))\right) = -f(t)$ (it's odd). Can someone please explain which of the two reasons are right? AI: Let $F(x)=\int_0^x t(f(t)+f(-t))\,dt$. Then, $$\begin{align} F(-x)&=\int_0^{-x}t(f(t)+f(-t))\,dt\\\\ &\overbrace{=}^{t\mapsto -t}\int_0^x (-t)(f(-t)+f(t))\,(-1)\,dt\\\\ &=\int_0^x t(f(t)+f(-t))\,dt\\\\ &=F(x) \end{align}$$ Hence $F(x)=F(-x)$ and $F(x)$ is even. Can you do the other three?
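The evenness of $F(x)=\int_0^x t\,(f(t)+f(-t))\,dt$ can be checked numerically for a sample $f$ (a Python sketch of my own using a simple composite Simpson's rule; $f(t)=e^t+t^3$ is an arbitrary test function, not from the problem):

```python
import math

def simpson(h, a, b, n=1000):
    """Composite Simpson's rule for the signed integral of h from a to b."""
    step = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += h(a + i * step) * (4 if i % 2 else 2)
    return s * step / 3

f = lambda t: math.exp(t) + t**3
integrand = lambda t: t * (f(t) + f(-t))   # this integrand is odd in t
F = lambda x: simpson(integrand, 0, x)

print(abs(F(1.3) - F(-1.3)))  # ≈ 0, i.e. F(x) = F(-x)
```

This illustrates the fact used in the question: integrating an odd integrand from $0$ to $x$ yields an even function of $x$.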
H: Solving the complex equation $(1+z)^5=z^5$ I must to find $z\in\mathbb{C}$ such that: $\boxed{(1+z)^5=z^5}$ Is the following equivalence correct? $(1+z)^5=z^5\Leftrightarrow 1+z=z$ If this is not correct, how can solve this problem? AI: Note that $z=0$ is not a solution, so you may divide both sides by $z^5$ and get $$\left(\frac{1+z}z\right)^5=1.$$ Thus for your equation to hold you must have $1+z=z\zeta^r$, where $\zeta=e^{2\pi i/5}$ is a primitive fifth root of unity and $r=0,1,2,3$ or $4$. Rearranging you get $$z=\frac1{\zeta^{r}-1},$$ for $r=1,2,3,4$. Note there is no solution for $r=0$.
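The four roots can be verified numerically with `cmath` (an illustrative Python sketch following the answer's formula):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)   # primitive fifth root of unity

# z = 1/(zeta^r - 1) for r = 1, 2, 3, 4; r = 0 gives no solution.
roots = [1 / (zeta**r - 1) for r in range(1, 5)]
max_err = max(abs((1 + z)**5 - z**5) for z in roots)
print(len(roots), max_err)  # 4 roots, residual ≈ 0
```

A nice geometric check: the equation forces $|1+z| = |z|$, so every solution is equidistant from $0$ and $-1$ and hence lies on the vertical line $\operatorname{Re} z = -\tfrac12$ — which the computed roots confirm.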
H: How many $4$-digit numbers of the form $1a2b$ are divisible by $3$? How many $4$-digit numbers of the form $\overline{1a2b}$ are divisible by $3?$ Hello, I am new here so I don't really know how this works. I know that for something to be divisible by $3$, you add the digits and see if the sum is divisible by $3$. So that means $3+a+b=6, 9, 12, 15, 18,$ or $21.$ I'm just confused about how to calculate the number of cases. AI: Giving you a hint: you got $3 + a + b = 6, 9, 12, 15, 18$ or $21$, which implies that $a + b = 3, 6, 9, 12, 15$ or $18.$ Now do case-work and find all possible $a, b$ which satisfy these. This may take a bit of work. (For example, when $a + b = 3$ we have $(a,b) = (0,3), (1,2), (2,1), (3,0)$.) Note that you forgot the case $3 + a + b = 3$; in that case $(a,b) = (0,0)$. Edit: keep in mind that $a, b$ are $1$-digit numbers. Hence if $a + b = 12$, $(a,b) = (1,11)$ is not a solution, but $(a,b) = (3,9)$ is a solution.
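A brute-force tally over all $100$ digit pairs $(a,b)$ (just a check on the case-work, not a replacement for it) confirms the count, including the easy-to-forget $(a,b)=(0,0)$ case, and that the digit-sum criterion agrees with direct divisibility:

```python
# Count 4-digit numbers 1a2b, i.e. 1000 + 100a + 20 + b, divisible by 3.
count = sum(
    1
    for a in range(10)
    for b in range(10)
    if (1000 + 100 * a + 20 + b) % 3 == 0
)

# The digit-sum criterion 3 | (1 + a + 2 + b) gives the same tally.
digit_sum_count = sum(
    1 for a in range(10) for b in range(10) if (3 + a + b) % 3 == 0
)

assert count == digit_sum_count
print(count)   # prints 34
```

The $34$ breaks down by residue of $a$ modulo $3$: digits $0$ through $9$ contain four digits of residue $0$ and three each of residues $1$ and $2$, giving $4\cdot4 + 3\cdot3 + 3\cdot3 = 34$ pairs with $a+b \equiv 0 \pmod 3$.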
H: pairwise relatively prime pairs Let $m$ be divisible by $1, 2, \ldots, n$. Show that the numbers $1+m(1+i)$ where $i = 0, 1, 2, \ldots, n$ are pairwise relatively prime. My proof was as follows: let us have two different numbers $1+m(1+i)$ and $1+m(1+j)$, and let $d$ divide both. Thus $d\mid i-j$. I feel this won't lead anywhere; any hints or solutions will be appreciated. AI: Actually, what you did does lead somewhere, although you missed a factor of $m$. The appropriate assumption is that for some $i \neq j$, there's a $d \ge 2$ where $d \mid 1 + m(1 + i)$ and $d \mid 1 + m(1 + j)$, leading to $d \mid m(i - j)$. Note each prime factor $p$ of $d$ must divide $m$ and/or $i - j$. Since $|i - j| \le n$, if $p \mid i - j$ then $p \le n$, so because $2$ through $n$ all divide $m$, you also have $p \mid m$. As such, in any case, all prime factors $p$ of $d$ must satisfy $p \mid m$. This means $p \mid m(1 + i)$, so $p \nmid 1 + m(1 + i)$, and likewise $p \nmid 1 + m(1 + j)$. Since this means $d$ doesn't divide either value, this contradicts the assumption, thus showing no such $d \ge 2$ exists, i.e., the numbers are pairwise relatively prime.
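A concrete instance makes the statement easy to check (the choice $n=6$ is arbitrary; `math.lcm` requires Python 3.9+):

```python
import math

n = 6
m = math.lcm(*range(1, n + 1))                   # m = 60 is divisible by 1, ..., 6
nums = [1 + m * (1 + i) for i in range(n + 1)]   # i = 0, 1, ..., n

for j in range(len(nums)):
    for k in range(j + 1, len(nums)):
        assert math.gcd(nums[j], nums[k]) == 1   # pairwise relatively prime
print(nums)   # [61, 121, 181, 241, 301, 361, 421]
```

Note that the individual numbers need not be prime ($121 = 11^2$, $361 = 19^2$ here); the theorem only claims that no prime divides two of them.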
H: Proof by contradiction of a variant of PHP Let $a_1, a_2,\ldots , a_n$ be positive integers. Prove that if $(a_1+a_2+\ldots+a_n)-n+1$ pigeons are to be put in $n$ pigeonholes, then for some $i$, the statement "The $i^{th}$ pigeonhole must contain at least $a_i$ pigeons" must be true. My approach: Let us assume that this hypothesis is incorrect. Let $p(i)$ denote the number of pigeons in the $i^{th}$ pigeonhole. Thus no $i\in \{1,2,\ldots,n\}$ exists such that the $i^{th}$ pigeonhole contains at least $a_i$ pigeons. $$\therefore p(i)<a_i\space \forall\ i\in \{1,2,\ldots,n\}$$ $$\sum_{i=1}^{n} p(i)<\sum_{i=1}^{n} a_i$$ $$(a_1+a_2+\ldots+a_n)-n+1<(a_1+a_2+\ldots+a_n)$$ This gives us $1<n$ which certainly is true. Where did I go wrong in my proof? Please help. THANKS Note: This is question number $3.3.12$ from the book 'The Art and Craft of Problem Solving' by Paul Zeitz. AI: Since $p(i)$ is an integer, $p(i)<a_i$ is equivalent to $p(i)\leq a_i-1$. Then we have the following, $$\sum_{i=1}^{n}p(i)\leq\sum_{i=1}^{n}(a_i-1)\\\implies(a_1+a_2+\cdots+a_n)-n+1\leq(a_1+a_2+\cdots+a_n)-n$$ which is a contradiction!
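An exhaustive check of one small instance (the quotas $a = (2, 3, 2)$ are an arbitrary choice) illustrates both the theorem and the sharpness of the pigeon count $(a_1+\cdots+a_n)-n+1$:

```python
from itertools import product

def some_hole_full(p, a):
    """True if some hole i holds at least a[i] pigeons."""
    return any(pi >= ai for pi, ai in zip(p, a))

a = (2, 3, 2)
N = sum(a) - len(a) + 1   # 5 pigeons

# Every distribution of N pigeons into the holes hits some quota a[i]...
for p in product(range(N + 1), repeat=len(a)):
    if sum(p) == N:
        assert some_hole_full(p, a)

# ...but with p(i) = a[i] - 1 in every hole (N - 1 pigeons total) none is hit;
# this is exactly the distribution the contradiction argument rules out.
assert not some_hole_full(tuple(ai - 1 for ai in a), a)
print("verified for a =", a)
```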