H: Why $O(\lambda_\text{max}^{\tau/2})=O(1/n^2)$ for $\tau=\frac{4\ln n}{1-\lambda_\text{max}}$? It appears in the proof of Lemma 9 in 'Bounds on the cover time' by A. Z. Broder and A. R. Karlin. AI: I am assuming that $0 < \lambda_{max} < 1$. Hint: Prove that $\frac{2 \ln(\lambda)}{1-\lambda} < -2$.
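A short worked step, added here as an aside (it is not spelled out in the original hint), assuming $0<\lambda_\text{max}<1$ as the answer does: the hint gives the bound because
$$\lambda_\text{max}^{\tau/2}=\exp\left(\frac{\tau}{2}\ln\lambda_\text{max}\right)=\exp\left(\ln n\cdot\frac{2\ln\lambda_\text{max}}{1-\lambda_\text{max}}\right)<\exp(-2\ln n)=\frac{1}{n^2},$$
and the hint itself follows from $\ln\lambda<\lambda-1$ for $0<\lambda<1$.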
H: What is the difference between diagonalising a matrix and its eigendecomposition? When we diagonalise a matrix, we write it in terms of a diagonal matrix $S$ which contains all of the eigenvalues of the matrix, a matrix $P$ which contains all of the corresponding eigenvectors of $A$, and $P^{-1}$. $$A=P^{-1}SP$$ However, when we have an eigendecomposition, we also represent a matrix $B$ in terms a diagonal matrix $\Lambda$ which contains all of the eigenvalues of $B$, a matrix $Q$ (which I think is the eigenbasis) and $Q^T$. $$B=Q^T \Lambda Q$$ What's the difference between the two? AI: A square matrix $A \in \mathbb{R}^{n \times n}$ is called diagonalizable if there exists a matrix $P$ and a diagonal matrix $\Lambda$ such that \begin{equation} A = P \Lambda P^{-1}. \end{equation} Necessarily, the columns of $P$ are eigenvectors of $A$ and the diagonal elements of $\Lambda$ are the corresponding eigenvalues. A natural question to ask is: when is $A$ diagonalizable? One sufficient condition is $A$ is symmetric i.e. $A^T = A$. For symmetric $A$ it turns out that not only does there exist such $P$, we can always choose $P$ to be an orthogonal matrix i.e. $P^T = P^{-1}$, meaning that the columns of $P$ form an orthonormal basis for $\mathbb{R}^n$. Plugging into the above display you find \begin{equation} A = P \Lambda P^T. \end{equation} More generally, it turns out that a matrix $A \in \mathbb{C}^{n \times n}$ is unitarily diagonalizable (there exists $P$ with $P^* = P^{-1}$) if and only if $A$ is normal i.e. satisfies $AA^* = A^*A$. Here we use the notation $A^* = \bar{A}^T$.
H: Positive integer solutions to $a^b-b!=a$ Conjecture: The only solutions to $a^b-b!=a$ such that $a,b\in\Bbb Z^+$ are $(a,b)=(2,2)$ and $(2,3)$. Evidently, we have $a<b$, and extending the domain of $a,b$ to $\Bbb R^+$ reveals that asymptotically as $a\to+\infty$ either $b\sim1$ or $b\sim e(a-1)$ (using Stirling's approximation). However I don't see a way to continue further. Can the conjecture be proven? AI: We are to solve the equation: $$a^b-a=b!$$ over positive integers. It is clear that $a,b>1$. Assume that $a \geqslant b$. It then follows: $$b^b-b \leqslant b!$$ and this clearly holds only for $b=2$, with equality, forcing $(a,b)=(2,2)$. Now, let $a<b$ and $p$ be a prime factor of $a$ (such a prime exists as $a>1$). If $p<a$, then we have: $$ap \mid b! \implies p \mid(a^{b-1}-1) \implies p \mid 1$$ which is clearly a contradiction. Thus, we must have $a=p$, which makes the equation: $$p^b-p=b!$$ It is clear that as $p \mid b!$ and $p^2 \nmid b!$, we have $p<b<2p$. Then, we have: $$p>\frac{b}{2} \implies \bigg(\frac{b}{2}\bigg)^b-\frac{b}{2}<b! \implies b<6$$ Thus, we have $a<b<6$, forcing $a<5$. If $a=2$, we have $2^b-2=b!$ forcing $(a,b)=(2,3)$. If $a=3$, then we have $3^b-3=b!$. It is simple to check that this has no solutions for $b<6$. Thus, the only solutions are $(a,b)=(2,2),(2,3)$. Note : There are many elementary ways to prove that: $$\bigg(\frac{b}{2}\bigg)^b-\frac{b}{2}<b! \implies b<6$$ One approach is by pairing terms that add up to $b$ in the product $b!$ with slight modifications, and using $x(b-x)<b^2$. Being a little careful at places like $x=1,2$, we can show $b<6$. Another approach can be to use calculus to show the inequality required.
H: How to show that $I$ = {f $\in$ $F(X,R)$ : $f(a)=0$ $\forall a \in A$} is an ideal of $F(X,R)$. I am studying ring theory and have come across a ring: $F(X,R)$ ={all functions $X \to R$} with $(f+g)(x)=f(x)+g(x)$ and $(f \times g)(x) = f(x) \times g(x)$ with $X$ a nonempty set. There is a question: Show that, for $A \subset X$ the subset $$ I = \{ f \in F(X,R) : f(a)=0 \forall a \in A \} $$ is an ideal of $F(X,R)$. I have found that an ideal is a subring of a ring which is closed under R-multiplication. I think that I need to use the subring test to first show that $I$ is a subring and then show that $I$ is closed under R-multiplication, but I am confused about showing that $I$ is closed under R-multiplication. AI: Let $f\in I$ and let any $g\in F(X,R)$. Then for each $a\in A$ we have $(gf)(a)=g(a)f(a)=g(a)\times 0=0$. Hence $gf\in I$. Similarly you can show that $fg\in I$.
H: If every subset of $X$ is saturated, then $(X,\tau)$ is a $T_1$ - space In a general topology exercise the following question is asked: Is it true that if the topological space $(X,\tau)$ is such that every subset is saturated, then $(X,\tau)$ is a $T_1$- space? My approach to answering this: If "every subset is saturated", then $\forall A \subseteq X, \exists B_j \in \tau: A = \bigcap \limits_{j \in J}B_j$, where $J$ is some index set. A topological space is an $T_1$ - space if every singleton $\{x\}$ with $x \in X$ is closed. So, because $X\setminus\{x\} \subseteq X$, then we have that $\exists B_j \in \tau: X\setminus\{x\} = \bigcap \limits_{j \in J}B_j$. Because $\tau$ is a topology on $X$, $\bigcap \limits_{j \in J}B_j \in \tau$, and therefore $X\setminus\{x\} \in \tau$. This means that $\forall x \in X$, $\{x\}$ is closed, this meaning that $(X, \tau)$ is a $T_1$ - space. I think that this proof only applies to cases where $X$ is a finite set right? Because I used the property that if $B_j \in \tau$, then $\bigcap \limits_{j=1}^nB_j \in \tau$. So $\bigcap \limits_{j \in J}B_j \in \tau$ is only necessarily true if this is a finite intersection. How can I show that the statement is true or false for infinite sets where $\bigcap \limits_{j \in J}B_j$ can represent an infinite intersection? AI: Equality $ X\setminus\{x\} = \bigcap \limits_{j \in J}B_j$ implies that $B_j= X\setminus\{x\}$ or $B_j=X$ for every $j\in J$ because these are the only subsets of $X$ that contain $ X\setminus\{x\}$ as a subset. However it cannot be that $B_j=X$ for every $j\in J$ because that would lead to the statement that $ X = \bigcap \limits_{j \in J}B_j$ which contradicts that $ X\setminus\{x\} = \bigcap \limits_{j \in J}B_j$. So we conclude that $B_{j_0}=X\setminus\{x\}$ for some $j_0\in J$. Here $B_{j_0}$ is open so $X\setminus\{x\}$ must be open, hence $\{x\}$ must be closed.
H: From the given definition of function $f(n)$, find $f^{-1}(-100)$ $f: \{1,2,3..\}\rightarrow \{\pm 1,\pm 2, \pm 3..\}$ is defined by $f(n)=\begin{matrix} \frac n2~~\text{if n is even} \\ -\frac{n-1}{2}~~\text{if n is odd}\end{matrix}$ $-100$ is even, so situation one would apply in which case $$y=\frac n2$$ $$n= 2y$$ $$f^{-1}(n)=2n$$ $$f^{-1}(-100)=-200$$ But the given answer is $201$ which would be the case if we solved using situation 2. Why should we use situation 2 if the number given is even? AI: As the domain of the function $f$ is the positive integers, you will get a positive value of $f$ only from the first case, i.e. when $n$ is even. Now, it is given that for a particular $n'$, $f(n') =-100$. Here, $f$ has yielded a negative value, which is possible only in the second case. So you have $$-\left(\frac{n'-1}{2}\right)=-100\\\therefore \quad n'=201$$
H: Extending Taylor's theorem to differentiable, but not continuously differentiable functions Taylor's theorem says that if $f:I\to\mathbb R$, $I\subseteq\mathbb R$ an open interval, is $n$ times continuously differentiable, then for $x_0\in I$ there exists a continuous function $R_{n,x_0}:I\to\mathbb R$ with $\frac{R_{n,x_0}(x)}{(x-x_0)^n}\to0$ as $x\to x_0$ such that $f(x)=\sum\limits_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k~+~R_{n,x_0}(x)$. For the case $n=1$, the requirements of continuity, or even differentiability on all of $I$, can be dropped: a function $f$ like above is differentiable in $x_0\in I$ iff there exist a number $f'(x_0)$ and a function $R:I\to\mathbb R$ with $\frac{R(x)}{x-x_0}\to0$ as $x\to x_0$ such that $f(x)=f(x_0)+f'(x_0)(x-x_0)+R(x)$. This is essentially Taylor's theorem without requiring differentiability or even continuity anywhere other than $x_0$. As a consequence, the remainder $R$ is also no longer continuous. My question is: can I extend this to higher order Taylor polynomials? I would like a statement like the following: If $f$ is $n$-times differentiable in $x_0$ (the $n$-th derivative need not be continuous in $x_0$, or even existent outside of $x_0$). Then there exists a function $R_{n,x_0}:I\to\mathbb R$ with $\frac{R_{n,x_0}(x)}{(x-x_0)^n}\to0$ such that $f(x)=\sum\limits_{k=0}^n\frac{f^{(k)}}{k!}(x-x_0)^k~+~R_{n,x_0}(x)$. Context: I want to find a definition of higher order derivatives using polynomial approximation. The above case with $n=1$ is a useful definition of differentiability, which essentially says that a function is differentiable iff there is a linear polynomial and a small remainder such that $f=\textrm{polynomial}+\textrm{remainder}$. I want to define higher order derivatives by saying that $f$ is $n$-times differentiable if there is a polynomial of degree $n$ and a small remainder such that $f=\textrm{polynomial}+\textrm{remainder}$. If my question from above can be answered with yes, then this definition would be a generalization of higher order differentiability, otherwise it would just be something different and thus useless. AI: The pointwise form of Taylor's theorem you propose is true; in fact the same statement holds almost word-for-word even in a higher dimensional case for functions $f: \Bbb{R}^n \to \Bbb{R}^m$ (or even more generally, for functions $f:V\to W$ where $V$ and $W$ are real Banach spaces, not necessarily finite-dimensional). See for example this answer for a statment and outline of proof in the general case of several variables (but by making necessary simplifications you can easily restrict yourself to the $1$-dimensional case, and the proof should be just as easy to carry out). However, in the "context" part of your question you say I want to define higher order derivatives by saying that $f$ is $n$-times differentiable if there is a polynomial of degree $n$ and a small remainder such that $f=\text{polynomial+remainder}$. If my question from above can be answered with yes, then this definition would be a generalization of higher order differentiability, otherwise it would just be something different and thus useless. Unfortunately the situation isn't quite as nice as you might hope. Taylor's theorem says that "if $f$ is $n$-times differentiable at a point $x_0$, then $f$ is a polynomial plus a small remainder with $\frac{R_{n,x_0}(x)}{(x-x_0)^n} \to 0$". What you seem to be asking for is the converse. But a direct converse is completely incorrect. 
Just because you have $f =$ polynomial + "small remainder", it DOES NOT (for $n\geq 2$) imply $n$-times differentiability at $x_0$. For example, with $n \geq 2$, the function $f: \Bbb{R} \to \Bbb{R}$ defined as \begin{align} f(x) &= \begin{cases} x^{n+1} & \text{if $x$ irrational}\\ 0 & \text{if $x$ rational} \end{cases} \end{align} satisfies $f(0+h) = 0 + o(|h|^n)$ (i.e. it equals the zero polynomial to order $n$), yet fails to even be twice differentiable. Take a look at this answer where I present this counterexample (which I learnt from Spivak's Calculus) and give links for partial converses to Taylor's theorem (all these answers assume slightly stronger hypotheses on the form of the remainder).
H: Is there any matrix for which $A^{-1} = - A^t$? where $A \in GL_{3}(\mathbb{R})$ My idea: $A^T \cdot A = I$ and we know that $A^{-1} \cdot A = I$ which implies that, $A^T \cdot A = I$ $(A^T⋅A)⋅A^{−1}=(I)⋅A^{−1}$ $A^{T}⋅(A⋅A^{−1})=A^{−1}$ $A^T⋅(I)=A^{−1}$ $A^T=A^{−1}$ I'm not sure that my idea above is a correct proof to the problem above, that there is no $A$ where $A^{-1} = - A^t$ and $A \in GL_{3}(\mathbb{R})$. AI: Your proof starts with $A^t \cdot A = I$. This is not correct. Correct is $A^t \cdot A = -I$. Then $A^t \cdot A $ is positive semi-definite, but $-I$ is negative definite, a contradiction.
H: Integral of $\int_{-\infty}^{\infty} e^{\alpha x}/({e^x+1})$ Show that, for $0<\alpha<1$: $$\int_{-\infty}^{\infty} \frac{e^{\alpha x}}{{e^x+1}}\text{d}x=\frac{\pi}{\sin(\pi\alpha)}.$$ Hint: Use the rectangular path $S_r=\left[-r, r, r+2\pi i, -r+2\pi i,-r \right]$ (see fig. 1). My attempt: Denoting $f(z)=\frac{e^{\alpha z}}{{e^z+1}}$, we notice that $f$ has only simple poles at $z=\pi i+ 2\pi i\mathbb{Z}$. Using the suggested path, we have $$ \oint_{S_r} f(z) \ \text{d}z=\int_{-r}^{r} \frac{e^{\alpha x}}{{e^x+1}}\text{d}x+\int_{[r+2\pi i,-r+2\pi i]}f(z) \ \text{d} z+\int_{[r,r+2\pi i]}f(z) \ \text{d} z+\int_{[-r+2\pi i, -r]}f(z) \ \text{d}z. $$ The integrals on the right and left paths, denoted $\gamma_1$ and $\gamma_2$ in fig. 1, become insignificant as $r\to\infty$: $$ \left|\int_{\gamma_1} f(z) \ \text{d}z \right|\le \max_{z\in[r,r+2\pi i]} \left| \frac{e^{\alpha r}e^{i 2\pi\alpha t}}{e^r e^{i2\pi t}+1} \right|\ell(\gamma_1)\le \frac{e^{\alpha r}}{e^r+1}\cdot 2\pi\xrightarrow[r\to\infty]{}0. $$ A similar argument applies for $\gamma_2$. But what do I do with the path $\Gamma_1$? AI: Just parametrize it. This is the path $\Gamma_1:[-r,r]\to\mathbb{C}$ defined by $\Gamma_1(t)=-t+2\pi i$. The derivative is $\Gamma_1'(t)=-1$. So: $\int_{\Gamma_1} f=\int_{-r}^r \frac{e^{\alpha(-t+2\pi i)}}{e^{-t+2\pi i}+1} (-dt)=\{u=-t\}=-\int_{-r}^r \frac{e^{\alpha u}e^{\alpha 2\pi i}}{e^ue^{2\pi i}+1}du=-e^{2\pi \alpha i}\int_{-r}^r \frac{e^{\alpha u}}{e^u+1}du\to$ $\to -e^{2\pi\alpha i}\int_{-\infty}^\infty f(u)du$
H: Proving a constructed approximating sequence in a Banach space converges Suppose that $(V,\|\cdot\|)$ is a Banach space. Let $S_n$ be a sequence of closed sets such that $\bigcup_{n\in\mathbb{N}} S_n = V$ and $S_n \subseteq S_m$ for $n\leq m$. Let $(x_n)_{n\in\mathbb{N}}$ be a converging sequence in $V$ and define a new sequence $(\hat{x}_n)_{n\in\mathbb{N}}$ where each $\hat{x}_n\in S_n$ such that $\|x_n - \hat{x}_n\| = \min_{x\in S_n}\|x_n-x\|$. Does it follow that $\| x_n - \hat{x}_n\|\rightarrow 0$? AI: I suppose you meant that $\hat {x_n} \in S_n$ and $\|x_n-\hat {x_n}\|=\inf_{y \in S_n} \|x_n-y\|$. Let $x_n \to x$. Then $x \in S_k$ for some $k$. For $n \geq k$ we have $x \in S_n$ so $\|x_n-\hat {x_n}\|\leq \|x_n-x\| \to 0$.
H: Question about probability / mutually exclusive events Decide whether this statement is true or false: Let $(\Omega, \mathbb{F}, \mathbb{P})$ be a probability space, if for two events $A,B \in \mathbb{F}$ $\hspace{1cm}$ $\mathbb{P}(A \cup B)= \mathbb{P}(A) + \mathbb{P}(B)$ holds, then $ A \cap B= \emptyset$ In the solutions it is stated that this statement is false, however I do not really understand why? Isn't the definition that for two mutually exclusive events $\mathbb{P}(A \cup B)= \mathbb{P}(A) + \mathbb{P}(B)$ and $A \cap B = \emptyset$? AI: It is always true that $P(A\cup B)+P(A \cap B)=P(A)+P(B)$. Hence $P(A\cup B)=P(A)+P(B)$ iff $P(A\cap B)=0$. But that does not imply that $A \cap B $ is the empty set. It can be any event of probability $0$. For example take $A=[0, \frac 1 2]$ and $B=[\frac 1 2 ,1]$ on $[0,1]$ with Lebesgue measure.
H: Multidimensional Young diagrams Consider a Young diagram defined as follows: A Young diagram (also called a Ferrers diagram, particularly when represented using dots) is a finite collection of boxes, or cells, arranged in left-justified rows, with the row lengths in non-increasing order. Listing the number of boxes in each row gives a partition $\lambda$ of a non-negative integer $n$, the total number of boxes of the diagram. For example we may write 1+4+5=10: Question: Are there higher-dimensional versions, using cubes, such that the "faces" of the diagram are each themselves Young diagrams? Here is an example, with three distinct faces, each representing diagrams: 1+2+3+3, 2+2+3+3, and 0+0+4+4. The faces are the Young diagrams on the faces of the cube in this case. It has 6 faces, and three pairs of (up,right,in), each a Young diagram. In 2d, there is only 1 face (1 diagram). In 3d, one has a cube with six faces, but only three are unique diagrams. One diagram in the 3d case is forced from the other two (up,right,up,right....up and up,up,in,in,up lead to the other necessarily being right,right,in,in,right). If so, is there a way of writing an integer in terms of the diagram, in the same way as an integer can be represented via one of many Young diagrams (i.e. integer partitions)? This would represent a restricted integer partition, but in a relatively unusual way. For example the image below would represent the integer partition ((2+2) + (3+3)) + ((2+2) + (3+3)) = 20. AI: The three dimensional form is called a "plane partition". That is because it can be represented as a histogram, filling the squares of a Young diagram (the base) with numbers representing the height of the column on that square. These numbers are weakly decreasing along rows and columns, just like numbers in an ordinary partition are weakly decreasing. I do not know any name for higher dimensional variants.
H: Conditional probability with inequality It is known that: $P(X > a) = 1 - P(X \leq a)$ Is there a rule for $P(X > a | Y > b)$ ? Maybe something similar to: $P(X > a | Y > b) = 1- P(X < a | Y < b)$ (I am not sure, just a guess) Thank you! AI: $P(X > a | Y > b) = 1- P(X \le a | Y >b)$ is true since \begin{align*} &P(X > a | Y > b)+P(X \le a | Y >b)\\ =&\frac{P( X>a \& Y>b)}{P(Y>b)}+\frac{P( X\le a \& Y>b)}{P(Y>b)}\\ =&\frac{P(Y>b)}{P(Y>b)}\\ =&1 \end{align*}
H: hereditary ring and Dedekind ring On page 161 of Rotman's homological algebra book, it states that: A Dedekind ring is a hereditary domain. Following it is an example: If k is a field, R = k<x,y> is both left and right hereditary. This ring is neither left-noetherian nor right-noetherian. However, Dedekind rings are always noetherian. Perhaps I am being silly. But should R be a domain now and hence Dedekind by the definition given at first? So R should be noetherian? Got a bit confused here. Any help would be appreciated! AI: By definition, a Dedekind ring is an integral domain, hence is commutative $\qquad$ https://mathworld.wolfram.com/DedekindRing.html whereas $\;k\text{<}x,y\text{>}\;$ is non-commutative.
H: Is this function surjective? $F:(0,1) \times \mathbb{Z} \ni (a,b) \rightarrow a+b \in \mathbb{R}$ Consider the function $$F:(0,1) \times \mathbb{Z} \ni (a,b) \rightarrow a+b \in \mathbb{R}$$ Is it surjective? At first I thought that $F$ is not surjective, because we will not get integer numbers. But, what if $0,(9)=1$? AI: Your first thought is indeed correct. You will never get any integer in the image of the function. In case you are thinking of something like $$0.\bar{9} = 1,$$ note that $0.\bar{9}{\color{red}\notin}(0, 1)$ and thus, you really can't get integers in the image. As a formal proof, it suffices to find some real number not in the image. So, let us show that $43$ is not. To do this, assume the contrary. Then, there exists $(a, b) \in (0, 1) \times \Bbb Z$ such that $a + b = 43$. Now, note that $$a = 43 - b.$$ Since $b$ is an integer, so is $43 - b$; thus, we see that $a \in \Bbb Z$. However, this is a contradiction since $a \in (0, 1)$ and $(0, 1) \cap \Bbb Z = \varnothing$.
H: What Is Bigger $\frac{3}{e}$ or $\ln(3)$ Hello everyone, which is bigger: $\frac{3}{e}$ or $\ln(3)$? I tried raising $e$ to both quantities and I got $e^{\frac{3}{e}} = \left(e^{e^{-1}}\right)^{3\:}$ and $3$, but I don't know how to continue. I also tried to convert it to a function but I didn't find one. AI: Note that $$\dfrac d{dx}\left(\dfrac{x}{\ln x}\right)=\dfrac{\ln x-1}{(\ln x)^2}$$ Therefore, this function takes its minimum value at $x=e$. Hence, $$\dfrac{3}{\ln3}>\dfrac{e}{1}\\ \implies\boxed{\dfrac 3e>\ln3}$$
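A quick numeric sanity check, added here and not part of the original answer:
$$\frac{3}{e}\approx 1.1036 > 1.0986 \approx \ln 3.$$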
H: How can I solve $x^2.(y')^2-2.(x.y-3).y'+y^2=0$? How can I solve the differential equation $x^2.(y')^2-2.(x.y-3).y'+y^2=0$ ? I don't have any idea of what method to use on this. AI: Hint: $$x^2.(y')^2-2.(x.y-3).y'+y^2=0$$ $$(xy'-y)^2=-6y'$$ This is Clairaut's differential equation: $$y=xy'+f(y')$$ $$y=xy'\pm \sqrt {-6y'}$$
H: Determining whether an angle is between two given angles on the unit circle I am trying to find a way to determine whether an angle is between two given angles where all angles are provided as vectors on the unit circle i.e.: $\mathbf{a}=(\cos(\theta),\sin(\theta))$ Note that by inbetween I mean on the arc of the smaller of the two segments of the unit circle formed by the vectors we want to check between. Specifically I do not want to obtain the angles from the given vectors by applying the inverse trig functions I just want to work with the given vectors. I think the following is true if and only if the angle $\mathbf{c}$ is between $\mathbf{a}$ and $\mathbf{b}$: $$|\mathbf{a} + \mathbf{b} - \mathbf{c}|\leq 1$$ but I'm having trouble proving it. Is this statement true and can you prove it? AI: It's true (except possibly for edge cases). It is fairly obvious if you draw a diagram with two unit circles - one at the origin with vectors $\mathbf{a}$ and $\mathbf{b}$ in it, and the other unit circle centred at $\mathbf{a}+\mathbf{b}$ and which contains the vector $-\mathbf{c}$. Vectors $\mathbf{a}$ and $\mathbf{b}$ will then point to the intersections of the unit circles. For $|\mathbf{a} + \mathbf{b} - \mathbf{c}| < 1$ to hold, the head of $-\mathbf{c}$ must lie inside the first unit circle, and this happens exactly when it lies on the circular arc between the two intersection points. Therefore by symmetry of the paralellogram the same is true for $\mathbf{c}$ when placed at the origin. Those intersection points are pointed at by $\mathbf{a}$ and $\mathbf{b}$, so $|\mathbf{a} + \mathbf{b} - \mathbf{c}| < 1$ iff $\mathbf{c}$ lies strictly between $\mathbf{a}$ and $\mathbf{b}$. Note that you used a $\le$ sign, but you'll have to think about whether that is what you want or if strict inequality is better. Does $\mathbf{a}$ itself lie between $\mathbf{a}$ and $\mathbf{b}$? And in the case of $\mathbf{a}=-\mathbf{b}$, does every unit vector lie between them or none at all?
H: Given $f\in A(\{z\in\mathbb{C}:|z|<2\})$ and $f(1)=0,f'(1)\neq0$ calculate the angle Given $f\in A(\{z\in\mathbb{C}:|z|<2\})$ with $f(1)=0,f'(1)\neq0$, set $u=Re(f),v=Im(f)$ and assume that, in a neighborhood of $1$, the set $u(z)=0$ defines a smooth path $\gamma_0$, and likewise $v(z)=0$ defines a smooth path $\gamma_1$. Find the angle between $\gamma_0,\gamma_1$ at the point $1$. AI: $f'(1) \ne 0$ means $f'(z) \ne 0$ in a small neighbourhood $U$ of $1$ so there is $g$ (holomorphic and conformal, so $g'(w) \ne 0$) defined on a small neighborhood $W$ of $0$ with $f \circ g(w)=w, w \in W$ and $g \circ f(z)=z, z \in U$. If $w_1=x, w_0=y$ are the curves given by the real/imaginary axis in $W$, $\Im f\circ g(w_1)= \Im x=0$ and $\Re f\circ g(w_0)=0$ so $\gamma_0, \gamma_1$ are the images of $w_0, w_1$ under the conformal map $g$, so the angle is $90^{\circ}$ as expected.
H: A simpler refutation of the General Comprehension Principle? The famous Russell's Paradox in which $R = \{ x \; | \; x \notin x \}$ leads to a contradiction $$ R \in R \Longleftrightarrow R \notin R, $$ thereby showing that the General Comprehension Principle entails inconsistency. But I think I found an even simpler refutation of it. We know that for some definite condition $P$ $$ A = \{ x \; | \; P(x) \} \Longleftrightarrow_{\text{df}} x \in A \Longleftrightarrow P(x) \text{ is true.} $$ The General Comprehension Principle guarantees the existence of all such sets $A$, therefore we only have to find such $A$ that leads to contradiction to refute it. This is easy. Just take $$ A = \{ x \; | \; x \notin A \} \Longleftrightarrow x \in A \Longleftrightarrow x \notin A $$ by the above definition. Notice that this contradiction holds even if we assume that $A$ is the only set that exists, since we can still ask ourselves whether $A \in A$. However, this almost seems too easy. I checked my truth tables again and again and couldn't find a mistake in my reasoning. The above equivalence truly seems like a contradiction no matter how I look at it. I would appreciate your thoughts on this. AI: If you look closely at a (careful) statement of the general comprehension principle, it will say something like: For any formula $\phi(x)$ in which $y$ is not free, $\exists y\forall x\,(x\in y\iff\phi(x))$. The proviso "in which $y$ is not free" prevents attempts to circularly define a set $y$ (or your $A$) in terms of itself. This issue is not specific to the general comprehension principle. The separation axiom of Zermelo-Fraenkel set theory has an analogous proviso: For any formula $\phi(x,z)$ in which $y$ is not free, $\exists y\forall x\,(x\in y\iff (x\in z\land\phi(x,z)))$. If the proviso were omitted here, we'd have a contradiction, just like yours, as soon as $z$ is nonempty. Analogous provisos also appear in the replacement axiom of ZF and in other set theories like NBG, MK, NF. And they're always needed for the same reason as in your question; circular definitions easily lead to contradictions.
H: Is a curve satisfying the following equality necessarily a line segment? For a smooth curve $\alpha:I\to \mathbb{R}^3$ with $[a,b]\subset I$, if $$|\alpha(b)-\alpha(a)| = \int_a^b |\alpha'(t)|dt$$ holds, must $\alpha([a,b])$ be a line segment? We can show that for any smooth curve with the points $\alpha(a),\alpha(b)$ fixed we have $|\alpha(b)-\alpha(a)| \le \int_a^b |\alpha'(t)|dt$, and the line segment between these two points is one on which this lower bound is attained. AI: If you have equality in the following inequality $$ |\alpha(b) - \alpha(a)| = \left|\int_a^b \alpha '(t)dt\right| \leq \int_a^b |\alpha '(t)| dt $$ then since $\alpha$ is smooth there exists $\theta$ such that for all $t \in [a,b]$, $$ \alpha'(t) = e^{i\theta}|\alpha'(t)| $$ (see the case of equality in the complex integral triangle inequality). Therefore $$ \alpha(b) - \alpha(a) = \int_a^b \alpha'(t)dt = e^{i\theta}\int_a^b |\alpha'(t)|dt $$ and for all $t \in [0,1]$ $$ \begin{aligned} \alpha(a + t(b-a)) &= \alpha(a) + \int_a^{a + t(b-a)} \alpha'(t)dt \\ &= \alpha(a) + e^{i\theta}\int_a^{a + t(b-a)} |\alpha'(t)|dt \\ &= \alpha(a) + \underbrace{\frac{\int_a^{a + t(b-a)} |\alpha'|}{\int_a^b |\alpha'|}}_{= \phi(t) \in [0,1]} (\alpha(b) - \alpha(a)) \end{aligned} $$ so $\alpha([a,b])$ is indeed the segment $ [\alpha(a),\alpha(b)] $ because $\phi : [0,1] \rightarrow [0,1]$ is continuous, non-decreasing and $\phi(0) = 0$, $\phi(1) = 1$. Does that help you?
H: In a metric space a sequence with no converging subsequences is discrete (?) I've been trying to prove that given a metric space $X$ (not necessarily complete) and a sequence $(x_n)_n \subseteq X$ which contains no convergent subsequences, there exists an open neighborhood $V_n$ of $x_n$ for each $n \in \mathbb{N}$ such that these $V_n$'s are pairwise disjoint i.e.: $$ \exists V_1 \in \mathcal{v}_{x_1}, \ldots, \exists V_n \in \mathcal{v}_{x_n}, \ldots: \forall n_1 \neq n_2: V_{n_1} \cap V_{n_2} = \emptyset. $$ So far I am stuck trying to prove it by contradiction. All I've managed to see is that it wouldn't be enough to consider balls of the same radius around each element of the sequence, as this wouldn't prove anything for the sequence $(1+1/1, 1+1/2, \ldots, n+\frac{1}{n}, n+\frac{1}{n+1}, \ldots)$ in $\mathbb{R}$, which doesn't have any convergent subsequences. So I'm trying to work with the general hypothesis by contradiction: $$ \forall r_1, \ldots, r_n, \ldots: \exists n_1 \neq n_2: B(x_{n_1}, r_{n_1}) \cap B(x_{n_2}, r_{n_2}) \neq \emptyset, $$ but I am stuck. Obviously, I could just pick for each $x_n$ a radius small enough $r$ so that $B(x_n, r)$ doesn't intersect some balls $B(x_1, r_1), \ldots B(x_{n-1}, r_{n-1}), B(x_{n+1}, r_{n+1}), \ldots$, because if that weren't posible, it would mean the sequence would have a converging subsequence to $x_n$. But the problem is, I can't guarantee that the balls obtained this way also don't intersect each other, not just $B(x_n, r)$. I feel like there is something important I should be observing here, but I can't see it. Also, in case you happen to know that this result isn't true or even better, you happen to have a counterexample, it would be much appreciated. Thank you for reading. AI: You're pretty close. As you noted, you can find radii $r_n$ such that $B(x_n, r_n)$ does not contain any other $x_k \ne x_n$. Now consider the balls $B(x_n, r_n/2)$. By a triangle inequality argument, you should be able to show that these balls are pairwise disjoint.
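Spelling out the triangle-inequality step left to the reader (this detail is an addition; it assumes, as the construction does, that the terms $x_n$ are distinct): if some $z$ lay in $B(x_n,r_n/2)\cap B(x_m,r_m/2)$ with $n\neq m$ and, say, $r_n\le r_m$, then
$$d(x_n,x_m)\le d(x_n,z)+d(z,x_m)<\tfrac{r_n}{2}+\tfrac{r_m}{2}\le r_m,$$
so $x_n\in B(x_m,r_m)$, contradicting the choice of $r_m$.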
H: Using Euler's theorem and I still can't solve this question Let $n\in \Bbb Z$ and suppose that for every $q\mid n-1$ there is $a_q\in \Bbb Z_n^*$ satisfying $a_q^{n-1} \equiv 1 \pmod n$ and $a_q^{\frac{n-1}{q}}\not\equiv 1 \pmod n$. Prove that $n$ is a prime number. Can you help me? I've been told to prove that $\Bbb Z_n^*$ is a cyclic group and to calculate its order. Thank you! AI: We have $\ a^{n-1}\equiv 1\mod n\ $ implying $\ \gcd(a,n)=1\ $ hence there is some positive integer $\ k\ $ with $\ a^k\equiv 1\mod n\ $ because Euler's theorem states $\ a^{\varphi(n)}\equiv 1\mod n\ $. The smallest positive integer $\ k\ $ with $\ a^k\equiv 1\mod n\ $ is called the order of $\ a\ $ modulo $\ n\ $, in the proof just called the "order". If we have shown that $$a^k\equiv 1\mod n$$ does not hold for any positive integer $\ k<n-1\ $ , then we have shown that the order is $\ n-1\ $ , hence $\ n\ $ must be prime. Now, suppose there is some $\ k<n-1\ $ with $\ a^k\equiv 1\mod n\ $. We may take $\ k\ $ to be the order of $\ a\ $, so that $\ k\ $ divides $\ n-1\ $ and is a proper factor of $n-1$ ($1$ is possible as well), hence $\frac{n-1}{k}$ must have a prime factor $q$. So, we have some positive integer $\ m\ $ with $\ mkq=n-1\ $. This implies $\ k\mid \frac{n-1}{q}\ $, hence we would have $\ a^{\frac{n-1}{q}}\equiv 1\mod n\ $ , but this has been ruled out. This is the easiest variant of the $\ n-1-$ method to prove the primality of some number $\ n\ $ , which only works if we can factor $\ n-1\ $ completely.
H: If $A$ is an orthogonal matrix with $|A|=-1$, show that $|I-A|=0$ Let $A$ be an $n \times n$ orthogonal matrix where $A$ is of even order with $|A|=-1.$ Show that, $|I-A|=0,$ where $I$ denotes the $n \times n$ identity matrix. My approach $A \cdot A^{\top}=I$ $|A| \cdot\left|A^{\top}\right|=1 \quad$ or $\quad\left|A^{\top}\right|=-1.....(2)$ let $A=\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]$ $L = I-A$ $L=\left[\begin{array}{cc}1-a & -b \\ -c & 1-d\end{array}\right]; \quad 2 a d=2 b c$(from eq2) $|L|=(1-a)(1-d) - bc$ $=-(a+d)$ What to do next? Am I going wrong? AI: Since $A$ is orthogonal, rewrite $\det(I-A)$ as follows: $$\begin{align}\det(I-A)&=\det(A^TA-A)\\&=\det(A)\det(A^T-I)\\&=\det(A)\det((A^T-I)^T)\\&=-\det(A-I)\end{align}$$ It follows that $$\det(I-A)=(-1)^{n+1}\det(I-A)$$ As $n$ is even, we get $$\boxed{\det(I-A)=0}$$ as desired.
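One step above is used implicitly: passing from $-\det(A-I)$ to $(-1)^{n+1}\det(I-A)$. Spelled out (this line is an addition, not part of the original answer):
$$\det(A-I)=\det\big(-(I-A)\big)=(-1)^n\det(I-A),\qquad\text{so}\qquad \det(I-A)=-\det(A-I)=(-1)^{n+1}\det(I-A).$$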
H: $f(x) = \cos|x| - 2ax + b$ increases for all $x$. Find the maximum value of $2a + 1$ Here's how I approached the problem. $f'(x) = -\sin x - 2a$ $f'(x) \geq 0$ $\Rightarrow -\sin x -2a \geq 0$ $\Rightarrow 2a \leq -\sin x$ $\Rightarrow 2a+1 \leq 1- \sin x \tag{*}$ Since range of $\sin x$ is $[-1,1]$, $\Rightarrow 2a+1\leq 1-(-1)$ $\Rightarrow 2a+1\leq 2$ Hence maximum value of $2a+1$ is coming to be $2$. However the answer to the question is $0$. What am I doing wrong? AI: After taking the derivative you want $f'(x) \ge 0$ for all $x$. This means you want $$\sin(x) \le -2a$$ for all $x$ Now... for this to be true for all $x$, you must have: $$-2a \ge 1$$ i.e. $a \le -0.5$ So the max value of $2a+1$ is $2\cdot(-0.5) + 1 = 0$ So... up to line $(*)$ everything is OK in your solution. But at that line you need to ask yourself: what can I put for $(2a+1)$ for this $(*)$ to be always true. And the answer is you need $(2a+1) \le 0$ because the range of $$g(x) = 1 - \sin(x)$$ is $[0,2]$.
H: Why is the finite-projective-plane minus a single edge r-partite? Let $P_r$ be the finite projective plane in which each line contains $r$ points (when it exists). For example, $P_2$ is a triangle, $P_3$ is the Fano plane, and $P_r$ exists whenever $r-1$ is the power of a prime number. Let $P_r'$ be $P_r$ with one line removed. Füredi (1981) claims that $P_r'$ is an $r$-partite hypergraph (page 158, below Corollary 5). I do not understand why this is true: the projective plane is certainly not constructed as an $r$-partite hypergraph. Why does it become $r$-partite when we omit a single line from it? For example, consider the Fano plane, which has 7 hyperedges {123, 145, 167, 246, 257, 347, 356}. Suppose we delete the last hyperedge 356. How is the remaining hypergraph 3-partite? What are the parts? For your convenience, here are the relevant paragraphs from the paper. Definition of $P_r$: Definition of $P_r'$: Claim that it is $r$-partite: EDIT: After reading the answer by Saul Spatz, I now think there is a typo in the paper: $P'_r$ is constructed from $P_r$ by deleting one point plus all lines that contain it. So for example, if in the Fano plane we delete point 7, we get a hypergraph with 6 vertices and 4 edges: {1,2,3}, {1,5,4}, {6,2,4}, {6,5,3}. It is tripartite with partition {1,6}, {2,5}, {3,4}. This also explains why he says that $\tau^*(P'_r)=r-1$: after removing one point, there are $r^2-r$ remaining points. Assigning a weight of $1/r$ to every point yields a fractinal cover of size $r-1$. EDIT 2: I now see that this construct has a name in the literature - it is called the Truncated projective plane. AI: In a projective plane, just as each line has $r$ points, each point lies on $r$ lines. So, if we consider each point $P$ on the deleted line, there are $r-1$ lines remaining which used to meet in $P$ but no longer intersect. These $r-1$ lines form a pencil of parallel lines. There are $r$ such pencils. The author's terminology seems idiosyncratic. It is usual to say that a projective plane of order $r$ has $r+1$ points on a line. For example, the famous result that there is no projective plane of order $10$ is referring to a plane with $11$ points on a line. EDIT In response to the OP's comment. If edge $356$ is deleted the six remaining edges become $$ 12, 14, 17, 24, 27, 47$$ and the parts are $$ \{12,47\}\\ \{14,27\}\\ \{17,24\}$$ EDIT I believe I understand where the confusion lies. The author states that $P_r$ is the hypergraph consisting of the lines of the projective plane. I take this to mean that the lines are the vertices of the hypergraph, and you take it to mean that the lines are the edges. Without seeing more of the text, I can't be sure of the author's usage, but I believe my interpretation to be in line with common parlance. Also, I can't see how to make sense of his further statements if the line are the edges.
H: In a nonabelian finite group $G$, if a prime $p$ divides order of $|G|$, $p$ divides order of centralizer of some element which is not in the center Here's what I want to prove: Suppose $G$ be nonabelian finite group and $p$ be a prime which divides the order of G. Then there is some element $b\in G$ such that $b \not\in Z(G)$ and $p$ divides $|Z(b)|$. (Note: $Z(b)$ is the centralizer of $b$) Here's my attempt: Suppose that for all $b \not\in Z(G)$, $p$ does not divide $|Z(b)|$. Let $\{a_1 , \ldots , a_k \}$ be the system of representatives of those conjugacy classes which contain more than one element. Then $a_i \not\in Z(G)$ and since $p$ does not divide $|Z(a_i)|$, $p$ must divide $[G: Z(a_i)]$ for all $i \in \{1, \ldots , k\}$. Then by the class equation, $p$ must divide $|Z(G)|$. This is where I am stuck. I cannot decide what to do after this. Hints would be appreciated. AI: You have already proved that $p$ divides the order of the center. Now if you take any element outside the center , its centralizer contains the center and so by Lagrange, the order of the centralizer is divisible by $p$.
H: Can every symmetric, unimodular and positive definite $G\in\mathbb{Z}^{n\times n}$ be written as $G=U^TU$? Let $G\in\mathbb{Z}^{n\times n}$ be symmetric, unimodular and positive definite. Does there exist a unimodular matrix $U\in\mathbb{Z}^{n\times n}$ such that $G=U^TU$? I now that the result is true if $n=2$, but I have a feeling that it fails if $n$ gets larger. AI: No. This is essentially asking whether every unimodular lattice is isometric to $\Bbb Z^n$. This is false from $n=8$ onwards, and the $n=8$ counterexample is the $E_8$ root lattice. I think an appropriate $G$ is $$\pmatrix{2&0&0&0&0&0&0&1\\0&2&1&1&1&1&1&-2 \\0&1&2&1&1&1&1&-2 \\0&1&1&2&1&1&1&-2 \\0&1&1&1&2&1&1&-2 \\0&1&1&1&1&2&1&-2 \\0&1&1&1&1&1&2&-2 \\1&-2&-2&-2&-2&-2&-2&4 }.$$ I should add that this matrix is, or at least is intended to be $VV^T$ where $$V=\pmatrix{1/2&1/2&1/2&1/2&1/2&1/2&1/2&1/2\\ 0&1&0&0&0&0&0&-1\\ 0&0&1&0&0&0&0&-1\\ 0&0&0&1&0&0&0&-1\\ 0&0&0&0&1&0&0&-1\\ 0&0&0&0&0&1&0&-1\\ 0&0&0&0&0&0&1&-1\\ 0&0&0&0&0&0&0&2}$$ so that being unimodular and positive definite is clear. Also it cannot have the form $UU^T$ for $U$ a unimodular integer matrix, since such a matrix would have at least one odd diagonal entry.
H: Reversing MODULO operation ? system of equations i have 1000 prime numbers p1 ... p1000 .. which i use to encode a value v % p1 = r1 v % p2 = r2 v % p3 = r3 .... v % p1000 = r1000 then I pick the 20 PRIMES which gives the SMALLEST reminders and store them. Later I want to be able to recover back the VALUE (or approximate value) by only having the 20 PRIMES available, and knowing they give the smallest reminders out of the 1000. btw i still have access to the 1000 primes if it also helps I pick the sequence of 1000 primes above number N ... and for ease of use I encode a NEW_VALUE = VALUE + BIGGEST_PRIME def encode(self, value) : mods = (value + self.max_prime) % self.primes return np.argsort(mods)[:20] @lulu finding multiple values that match the prime-mods, should be ok too ... not 100% but i suspect i will use values in range .. 1-100, 1-1000 or any other , so I can narrow it down to that range. I dont want to , but If I have to ;( ... may be some sort of constraint satisfaction my final goal is to generate sparse 1000 bit binary based the position of the small-reminder primes self.primes = np.array(nprimes(start=11,cnt=1000), dtype=DTYPE) np.random.shuffle(self.primes) Looking for the decode() part ! AI: By the Chinese Remainder theorem, you can retrieve your number modulo the product of the twenty primes. That might be enough for your number. If not, you are stuck because you can hypothesize any combination of remainders with the other primes (that are larger than the twenty remainders) and retrieve some number.
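To illustrate the Chinese Remainder step the answer mentions, here is a minimal Python sketch. It assumes the remainders for the chosen primes are available together with the primes (the scheme in the question stores only the primes), and the helper name `crt_recover` is hypothetical, not from the post:

```python
# Minimal CRT reconstruction sketch: recovers v modulo the product of the
# chosen primes, assuming their remainders are known. Hypothetical helper.
from functools import reduce

def crt_recover(remainders, moduli):
    """Return (x, M) with x congruent to r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = reduce(lambda a, b: a * b, moduli)   # product of all moduli
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)         # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M, M

# Example: recover v = 12345 from its remainders modulo three small primes.
primes = [101, 103, 107]
v = 12345
rems = [v % p for p in primes]
recovered, modulus = crt_recover(rems, primes)
assert recovered == v % modulus              # equals v itself whenever v < modulus
```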
H: Getting a negative magnitude when solving for magnitude using the formula for angle between two vectors. Problem: $\vec{u},\vec{v}$ are two given vectors. It is known that $\vec{u} \cdot \vec{v} = 5, \ \ \ ||\vec{v}|| = 2, \ \ \ \theta=2\pi/3$ Find $||\vec{u}||$ I am using the formula for the angle between two vectors to solve for the magnitude of one of my vectors. Work is included in this picture. After using the formula for the dot product I get $||\vec{u}|| = -5$ How am I getting a negative magnitude when I solve for the magnitude of one of my vectors? Do I simply take the absolute value of this? AI: The problem statement is somewhat messed up. The sign of the dot product defines if the angle between the two vectors is acute or obtuse. So if the dot product is $5$ then the angle cannot be $2\pi/3$. Tell them to make up their mind. They should either change the dot product to $-5$ or the angle to $\pi/3$ (in the problem statement, I mean).
H: How to get $(ab)1 = (a1)(b1)$ in Galois field? I'm reading Galois field from textbook Groups, Matrices, and Vector Spaces - A Group Theoretic Approach to Linear Algebra by James B. Carrell. Here $r,a,b \in \mathbb N$ and $1 \in \mathbb F$. While I understand $(ab)1 = a(b1)$, I could not get $(ab)1 = (a1)(b1)$. Clearly, $(ab)1$ means we sum $1 \in \mathbb F$ repeatedly $ab$ times, whereas $(a1)(b1)$ means the multiplication of $a1 \in \mathbb F$ and $b1 \in\mathbb F$. Could you please elaborate on this equality? AI: By the distributive property, we have $$ (a1)(b1) = (\overbrace{1 + \cdots + 1}^{a \text{ times}})(b1) = \overbrace{b1 + \cdots + b1}^{a \text{ times}} = a(b1) = (ab)1. $$
H: If $\frac{a}{k} - \frac{a(c-1)}{kb} > 1$, is $\frac{a}{k+1} - \frac{a(c-1)}{(k+1)b} > 1$? I'm trying an induction method, but not sure if I'm able to prove it or find a counterexample. Suppose $3 \leq a < b$ and $1 \leq c \leq b$, and $k \geq 1$, where $a, b, c, k$ are integers. Suppose $n = 1$ is proven. Then the inductive hypothesis is $$\frac{a}{k} - \frac{a(c-1)}{kb} > 1$$ Is the following true? $$\frac{a}{k+1} - \frac{a(c-1)}{(k+1)b} > 1$$ AI: Take $a-\frac{a(c-1)}{b}=k+\frac{1}{4}.$ We have $$\frac{a}{k}-\frac{a(c-1)}{kb}=1+\frac{1}{4k}>1,$$ but $$\frac{a}{k+1}-\frac{a(c-1)}{(k+1)b}=\frac{k+\frac{1}{4}}{k+1}<1.$$
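For concreteness, one instance satisfying the stated constraints (this specific choice is an added illustration, not part of the original answer): $a=3$, $b=4$, $c=2$, $k=2$, so that $a-\frac{a(c-1)}{b}=3-\frac{3}{4}=2+\frac{1}{4}$. Then
$$\frac{a}{k}-\frac{a(c-1)}{kb}=\frac{3}{2}-\frac{3}{8}=\frac{9}{8}>1,\qquad \frac{a}{k+1}-\frac{a(c-1)}{(k+1)b}=1-\frac{1}{4}=\frac{3}{4}<1.$$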
H: Minimum value of the function $\sin5x/\sin^5x$ I recently came across a question as follows: Find the minimum value of the function $\sin5x/\sin^5x$. I tried differentiating the function but the calculation was messy. The resulting differentiated equation had many roots, difficult for me to identify which ones actually correspond to the minimum. Taking the second derivative resulted in a horrendous calculation. Would someone please help me to find any easier solution? AI: We have: $\dfrac{\sin (5x)}{\sin^5x}= \dfrac{16\sin^5x-20\sin^3x+5\sin x}{\sin^5x}= 16-\dfrac{20}{\sin^2x}+\dfrac{5}{\sin^4x}=16-20t+5t^2=f(t), t = \frac{1}{\sin^2x} \ge 1$. You can take it from here as you have a quadratic function in $t \ge 1$.
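Completing the hint (the final number is implied but not stated in the original answer): the quadratic $f(t)=5t^2-20t+16$ has its vertex at $t=2\ge 1$, so for $t\ge 1$
$$\min_{t\ge1}f(t)=f(2)=20-40+16=-4,$$
attained when $\sin^2 x=\tfrac{1}{2}$; hence the minimum value of $\sin 5x/\sin^5 x$ is $-4$.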
H: Inclusion map $i : C^{0,\beta}[0, 1] \rightarrow C^{0,\alpha}[0, 1] $ is linear, and therefore compact Q) Given $0 < \alpha < \beta \leq 1$. Show that the inclusion map $i : C^{0,\beta}[0, 1] \rightarrow C^{0,\alpha}[0, 1] $ is compact. Ans) Let {$u_n$}$_1^\infty$ ⊂ $C^{0,β}[0,1]$ such that $\|u_n\|_{C^{0,β}} \leq 1$ , i.e. $\|u_n\|_\infty \leq 1$ and $$|u_n(x) − u_n(y)| \leq |x − y|^\beta \ \text{ for all } \ x, y ∈ [0,1]$$ By Arzela-Ascoli, there exists a subsequence {$\tilde u_n$}$_1^\infty$ of {$u_n$}$_1^\infty$ and $u ∈ C[0,1]$ such that $\tilde u_n \rightarrow u$ in $C$. Since $$|u(x) − u(y)| = \lim_{n→∞} |\tilde u_n(x) − \tilde u_n(y)| ≤ |x − y|^β$$ $u ∈ C^{0,β}$ as well. Define $g_n := u − \tilde u_n ∈ C^{0,β}$, then $$[g_n]_{0,β} + \|g_n\|_C = \|g_n\|_{C^{0,β}} ≤ 2$$ and $g_n → 0$ in $C$. To finish the proof we must show that $g_n → 0$ in $C^{0,α}$. Given $δ > 0$, $$[g_n]_{0,α} = \sup_{\substack{x,y\in [0,1] \\ x \neq y}} \frac{|g_n(x)-g_n(y)|}{|x-y|^{\alpha}} ≤ A_n + B_n$$ where $$A_n = \sup \Biggr \{ \frac {|g_n(x) − g_n(y)|}{|x − y|^α} : x \neq y \ \text{ and } \ |x − y| ≤ δ \Biggr \}$$ $$= \sup \Biggr \{ \frac {|g_n(x) − g_n(y)|}{|x − y|^β}·|x − y|^{β−α} : x \neq y \ \text{ and } \ |x − y| ≤ δ \Biggr \}$$ $$≤ δ^{β−α}·[g_n]_{0,β} ≤ 2δ^{β−α}$$ and $$B_n = \sup \Biggr \{ \frac {|g_n(x) − g_n(y)|}{|x − y|^α} : |x − y| > δ \Biggr \}$$ $$≤ 2δ^{−α}\|g_n\|_C → 0 \ \ \ \ \ \text{ as } \ n → ∞$$ Therefore, $$\limsup_{n→∞} [g_n]_{0,α} ≤ \limsup_{n→∞} A_n + \limsup_{n→∞} B_n ≤ 2δ^{β−α} + 0 → 0 \ \ \ \text{ as } \ δ ↓ 0$$ $\ $ Now the Definition of compactness for the map being used is Let $X$ and $Y$ be normed spaces and $T : X → Y$ a linear operator. Then $T$ is compact if for any bounded sequence $(x_n)_{n \in \mathbb {N}}$ in X, the sequence $(Tx_n)_{n \in \mathbb {N}}$ contains a converging subsequence Assuming the proof has been done correctly, I am stuck in this one step which seems minor, but is holding up the rest of it. How do I prove that inclusion map $i$ is linear, for this definition to be applicable and thus the proof to be valid? AI: $C^{0,\beta}[0,1]$ is a subset of $C^{0,\alpha}[0,1]$ and $i$ is nothing but the inclusion map so that $i(f) = f$ as a function (the only difference is that $i(f)$ is an element of a larger vector space). Hence, for linearity of $i$ you only need to know that the vector space operations on $C^{0,\beta}[0,1]$ are the restrictions of the ones on $C^{0,\alpha}[0,1]$ to $C^{0,\beta}[0,1]$. This is trivial since both spaces define addition by $$(f+g)(x) = f(x) + g(x)$$ and scalar multiplication by $$(\lambda f)(x) = \lambda f(x).$$
H: The flow of a Killing vector field preserves submanifolds? I am wondering if the flow of a Killing vector field preserves submanifolds. In fact, on a Riemannian manifold $(M,h)$ I have a Killing vector field $W$. Let $\varphi:I\times M\to M$ be its flow. We know that $\varphi_t:=\varphi(t,.):M\to M$ is an isometry. Can I say that, given a submanifold $A\subset M$, $\varphi_t(A)\subset A$ and therefore $d\varphi_t:T_pA\to T_pA$? Or, in other words, $\forall u\in T_pA$, $d\varphi_t(u)\in T_pA$? I will appreciate any comments or answers in advance! AI: Consider $\mathbb{R}^2$ with the Euclidean metric. $\phi_t(x)=x+tu$ is the flow of the vector field $X(x)=u$ where $u\in\mathbb{R}^2$ is not zero. Take the circle $C$ of radius $1$ centered at $0$; it is not preserved by $\phi_t$, and $X$ is tangent to $C$ at $(0,1)$ if $u=(1,0)$.
H: $T$ is a normal operator, prove any eigenspace of $T+T^*$ is invariant under $T$ I've been asked to prove in a homework problem exactly what the title describes. This was the $3^{rd}$ part of a question whose first 2 parts were to prove that $\ker T=\ker TT^*$ and $\ker T=\ker T^n$, but I can't find any way to implement these into this part. I've also tried proving that $$\operatorname{Tr}[(P_{w^\bot}TP_w)(P_{w^\bot}TP_w)^*]=0$$ where $P_w$ is the projection onto the eigenspace, but I've reached the expression $$\operatorname{Tr}[(P_{w^\bot}TP_w)(P_{w^\bot}TP_w)^*]=\operatorname{Tr}(P_wTP_wT)-\operatorname{Tr}(TP_wT)$$ and got stuck there. I would appreciate assistance. Thanks in advance. AI: Suppose $\lambda$ is an eigenvalue of $T+T^*$ and $V_{\lambda}$ is its eigenspace. We have to show that if $v\in V_{\lambda}$ then $T(v)\in V_{\lambda}$ as well. And indeed: $(T+T^*)(Tv)=(T^2+T^*T)(v)=(T^2+TT^*)(v)=T(T+T^*)(v)=T(\lambda v)=\lambda T(v)$ The equality $T^*T=TT^*$ is true because $T$ is normal.
H: Find coordinates of point in $\mathbb{R^4}$. There is a problem here in Q. $5$ on the last page. It states to find coordinates of point $p$. Taking point $a=(3,2,5,1), \ b=(3,4,7,1), \ c= (5,8,9,3)$. Also, $b$ has two coordinates in common with $a$, and $p$ lies on the same line as $a,b$. So, those two coordinates of $p$ are same as $a,b$. Hence, $p= (3,x,y,1)$; where $x,y\in \mathbb{R}$ are unknown. Given that $\triangle acp, \triangle bcp$ are right-angled; get: $1. \ \ \triangle acp:\ \ \ \ \ {ac}^2 = {ap}^2 + {cp}^2\implies({(-2)}^2+6^2+4^2+2^2) = ({(x-2)}^2 +{(y-5)}^2) + (2^2+{(8-x)}^2+{(9-y)}^2+{(-2)}^2 )$ $60 = 2x^2+2y^2-20x-28y+182\implies x^2+y^2-10x-14y+61=0$ $2. \ \ \triangle bcp:\ \ \ \ \ {bc}^2 = {bp}^2 + {cp}^2\implies(2^2+4^2+2^2+2^2) = ({(x-2)}^2 +{(y-5)}^2) + (2^2+{(8-x)}^2+{(9-y)}^2+{(-2)}^2 )$ $28 = 2x^2+2y^2-20x-28y+190\implies x^2+y^2-12x-16y+95=0$ From $1,2$, get: $-2x -2y +34 = 0\implies x +y -17=0$. But, how to proceed it further to find coordinates of $p$ is unclear. AI: Having $p=(3,x,y,1)$ does not reflect that $p$ is on the line $ab$. However, parametric equation of an arbitrary line passing through arbitrary points $a,\,b$ in Euclidean space is $x=a+t(b-a)$ with some parameter $t\in \mathbb{R}$, hence $$p=a+t(b-a)=(3,2,5,1)+t\,(0,2,2,0)$$ or, more compactly $$p=(3,2+2t,5+2t,1)$$ and your further computations will succeed as we have only one equation $$(b-a)\cdot(p-c)=0\;(\Leftrightarrow (b-a)\perp (p-c))$$ in other words, the two equations in the OP solution are equivalent and do not give solutions to two variables. $$(0,2,2,0)\cdot((3,2+2t,5+2t,1)-(5,8,9,3))=0$$ $$(0,2,2,0)\cdot(-2,2t-6,2t-4,-2)=0$$ $$2\cdot(2t-6)+2\cdot(2t-4)=0$$ $$t=\frac{5}{2}$$ $$p=(3,7,10,1)$$ Verification, for a case of a typo: $(b-a)\cdot(p-c)=0$.
H: Simplifying ${n\choose k} - {n-1 \choose k}$. How would I simplify $${n\choose k} - {n-1 \choose k}$$ I've expanded them into the binomial coefficient form $$\frac{n!}{k!(n-k)!} - \frac{(n-1)!}{k!(n-1-k)!}$$ but that's about all I've got. I'm having trouble with factorials. Any suggestions? AI: To explain why your expression is equal to ${n - 1 \choose k - 1}$ combinatorially, note that to choose $k$ out of $n$ objects entails one of two things: You choose the first object and $k - 1$ objects from the remaining $n - 1$ objects. Clearly, there are ${n - 1 \choose k - 1}$ ways to choose $k$ objects like this. You don't choose the first object, meaning you must choose $k$ objects from the remaining $n - 1$ objects. Here there are $n - 1 \choose k$ ways to do this. Hence, $${n - 1 \choose k} + {n - 1 \choose k - 1} = {n \choose k}$$ from which the equality in other answers follows.
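Since the question asks specifically how to manipulate the factorial form, here is the direct algebraic route (added for completeness; the posted answer defers this to other answers):
$$\frac{n!}{k!(n-k)!}-\frac{(n-1)!}{k!(n-1-k)!}
=\frac{(n-1)!}{k!(n-k)!}\bigl(n-(n-k)\bigr)
=\frac{(n-1)!\,k}{k!(n-k)!}
=\frac{(n-1)!}{(k-1)!(n-k)!}
={n-1 \choose k-1}.$$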
H: Find the number of solutions for the equation $4\{x\}=x+[x]$ $$4\{x\}=[x]+\{x\}+[x]$$ $$3\{x\}=2[x]$$ $$\{x\}=\frac 23 [x]$$ $$0\le \frac 23 [x] <1$$ $$0\le [x]<1.5$$ So $[x]=0,1$ The solutions for $x$ should be infinite, but the given answer is 3. Even if assume that the answer not being $\infty$ is upto interpretation, I still get only 2 as the answer. Please verify this solution. The brackets represent fractional part and greatest integer part AI: Let $x=I+f$, with $I \in \Bbb{Z}$ and $0 \leq f <1$. Then we have $3f=2I$. Thus $f=0,\frac{2}{3}$ for $I=0,1$ respectively. Thus only two solutions $0,1+\frac{2}{3}=\frac{5}{3}$.
H: Why is $\operatorname{res}(0, \cos(\frac{1}{z}))=0$? The residue of a function $f$ represented by a Laurent Series in complex analysis is defined as the coefficient at $n=-1$ of the series ($a_{-1}$). Given the definition of the cosine in complex analysis: $$f(z)=\cos(\frac{1}{z})=\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}(\frac{1}{z})^{2n}$$ So the series is only defined from $n=0 $ to $\infty$. Is the residue $0$ at $z=0$ because the series of the function is undefined at $n=-1$? Or is it because the function only has even exponents of $z$ (and so there is no term $z^{-1}$)? And, finally, is the residue at a singularity $\operatorname{res}(z_0, f)=0$ for every function that can only be expanded as a series from $n=0\to \infty$? (I'm still a bit confused on this whole topic.) AI: Before getting to the residue, we need to correct your series expansion of the cosine. It should be $\cos(z)=\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}z^{2n}$. Notice that the exponent of $z$ is $2n$, not $n$. Now we can insert $1/z$, and use $(1/z)^n=z^{-n}$: $\cos(1/z)=\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}(1/z)^{2n}=\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}z^{-2n}$. But to find the residue, we have to find the coefficient $a_{-1}$ of the Laurent series of the form $\sum_{n=-\infty}^\infty a_n z^n$. The exponent of $z$ here is $n$, not $-2n$. So we have to find the term in the series above where $z^{-1}$ appears. This term does not exist, since the exponent is always even. This means that its coefficient is $0$, and so is the residue.
H: Confusion regarding number of ordered pairs for symmetry/asymmetry My Discrete Mathematics textbook says the following : A relation is symmetric/antisymmetric/transitive even if there’s one pair/triplet that satisfies the condition. This probably means that if I have a relation that consists of a number of ordered pairs and suppose that only one pair satisfies the relation for symmetry , we can say that the relation is symmetric. But then again , it makes the following statement: Also a relation is not asymmetric/antisymmetric/transitive/asymetric if there’s one pair that does not satisfy the condition. So I would interpret this as ‘even if there’s just one pair in a relation that doesn’t satisfy the condition for symmetry , the relation would not be considered as symmetric. However don’t these two statements actually counter each other ? Suppose we have a relation where some pairs satisfy the condition for symmetry(say) and others don’t , then according to the first statement the relation would be symmetric since there’s atleast some pairs that satisfy the condition while according to the second , it would not be symmetric since there’s atleast one pair that doesn’t. So what would be the correct interpretation? Please help me out , I’m a beginner with relations. AI: I suspect that the word not is missing in the first statement immediately before the list of properties: as it stands, the statement is false. But even with that change the statement is very poorly worded. I suspect that the author is trying to say that finding one pair that satisfies the symmetry condition, for instance, is not enough to show that a relation is symmetric: all pairs must satisfy it in order for the relation to be symmetric. Similarly, finding one triplet satisfying the condition for transitivity does not show that a relation is transitive: you have to show that the condition is satisfied by all triplets. The other statement is poorly worded but correct: if there is even just one pair (or triplet, in the case of transitivity) that does not satisfy the defining condition for the property, then the relation does not have the property.
H: Is the function convex? Let's have the following function $f:\mathbb{R}^{2}\to\mathbb{R}$ defined by $|x+y|$, is it convex? We have $\lambda\in (0,1),x,y\in$ dom$(f)$, so $|\lambda x+(1-\lambda)y|\le \lambda |x| + (1-\lambda)|y|$. It means the function is convex according to the definition of a convex function. Is it correct? AI: The function is convex, but your reasoning is a bit off (or at least a bit too terse, in my opinion). Remember, the elements of the domain of $f$ are pairs $(x,y) \in \mathbb R^2$. You need to show that for $(x,y), (z,w) \in \mathbb R^2$, you have $$f(\lambda \cdot (x,y) + (1-\lambda)\cdot (z,w)) \le \lambda f(x,y) + (1-\lambda)f(z,w).$$ In this case, we see \begin{align*} f(\lambda \cdot (x,y) + (1-\lambda)\cdot (z,w)) &= f(\lambda x + (1-\lambda)z, \lambda y + (1-\lambda)w) \\ &= \lvert \lambda x + (1-\lambda)z + \lambda y + (1-\lambda)w \rvert\\ &= \lvert \lambda(x+y) + (1-\lambda)(z+w)\rvert\\ &\le \lambda\lvert x+y \rvert + (1-\lambda) \lvert z+w \rvert \\ & = \lambda f(x,y) + (1-\lambda)f(z,w) \end{align*} which shows that $f$ is indeed convex.
H: Showing Lebesgue Measurable Set is Measure Zero I'm trying to show that given $A \subseteq \mathbb{R}$ with $A$ Lebesgue measurable and given that $m(A\cap [a,b]) < \frac{b-a}{2}$ for every $a<b$, that $A$ must have measure zero. I've been trying to use continuity of measure in some way, but I've been unsuccessful so far. AI: By definition of outer Lebesgue measure (or by regularity, depending on how you define Lebesgue measure), given $\varepsilon>0$ there exist disjoint intervals $I_1,\ldots,I_r$, $I_\ell=(a_\ell,b_\ell)$ such that $A\subset \bigcup_\ell I_\ell$ and $$ m(\bigcup_\ell I_\ell)<m(A)+\varepsilon. $$ So \begin{align} m(A)&\leq m(\bigcup_\ell I_\ell)<m(A)+\varepsilon= m(A\cap\bigcup_\ell I_\ell)+\varepsilon =\sum_\ell m(A\cap(a_\ell,b_\ell))+\varepsilon\\[0.3cm] &<\frac12\,\sum_\ell(b_\ell-a_\ell)+\varepsilon =\frac12\,m(\bigcup_\ell I_\ell)+\varepsilon\\[0.3cm] &\leq\frac12\,m(A)+\frac{3\varepsilon}2. \end{align} So $$ m(A)\leq 3\varepsilon $$ for all $\varepsilon>0$, showing that $m(A)=0$.
H: Why can we expand an analytic function in such a way? One of the answers to my questions on StackExchange (Open Mapping Theorem Serge Lang Proof) included the fact that an analytic function can be expanded as $$f(z)=f(a)+C(z-a)^n + \ldots$$ but I am unsure about how the person got to this fact. My best guess is that by definition since $f$ is analytic on some open set $U$ we have $f(z)=\sum_{i=1}^{m}b_i(z-a)^i=b_o+\sum_{i=2}^{m}b_i(z-a)^i=f(a) + (z-a)^n\sum_{i=2}^{m}b_i(z-a)^{i-n}$. My second best guess would be that they used the fact that $f$ being analytic on $U$ implies that $f$ is continuous on $U$ which implies that there exists a function which satisfies $f(z)= f(a) + g(z-a)$ with $g(0)=0$, but again, I am unsure if that's what the person meant. Could someone shine some light as to how this assertion is made? AI: The $\;n\;$ in $\;f(z)=f(a)+C(z-a)^n\;$ can be anything in $\;\Bbb N\;$. For example, if $\;f(z)=e^z\;$ , say, then $$f(z)=\sum_{n=0}^\infty\frac{ z^n}{n!}=f(0)+\overbrace{\left(\sum_{n=1}^\infty\frac{z^{n-1}}{n!}\right)}^{=C}\,z$$ or if you want any other $\;a\in\Bbb C\;$, then $$f(z)=e^z=e^ae^{z-a}=e^a\sum_{n=0}^\infty\frac{(z-a)^n}{n!}=\overbrace{e^a}^{=f(a)}+\overbrace{\left(e^a\sum_{n=1}^\infty\frac{(z-a)^{n-1}}{n!}\right)}^{=C}\,(z-a)$$ etc.
H: Why can the $n_{\epsilon}$ of the definitions of convergence and Cauchy sequence be the same in the following proposition? I have the following proposition proved in my lectures notes, but I think there are a couple of errors and there is one think I don't get: If $p_n$ is a Cauchy sequence in a metric space $(X,d)$, the set $\{p_n| n \in \mathbb{N}\}$ is bounded. Moreover, if $p_n$ has $p_0$ as a limit point, then $p_n$ converges to $p_0$ ----->(1) Proof Let $\epsilon > 0$. Now for each $n \geq n_{\epsilon}$, $d(p_n, p_{n_{\epsilon}}) < \epsilon$. Then $p_n \in B_d(p_{n_{\epsilon}}, \epsilon)$ for each $n\geq n_{\epsilon}$, from which the thesis follows. Moreover, if $p_0$ is the limit of subsequence $p_{n_k}$, for $\epsilon >0$, there exists $n_{\epsilon}$ such that $d(p_{n_k},p_0)< \epsilon/2$ and $d(p_{n_k},p_n)< \epsilon/2$ for $n\geq n_{\epsilon}$ and $k\geq n_{\epsilon}$. ----->(2) Then, $d(p_n,p_0) \leq d(p_n,p_{n_k})+d(p_{n_k},p_0) < \epsilon$ for $n, k \geq n_{\epsilon}$ ----->(3) from which the thesis follows I think there are some errors in this proof. I would like some feedback/confirmation on it (1) --> it should say " if $p_{n_k}$ has $p_0$ as a limit point" instead of " if $p_{n}$ has $p_0$ as a limit point" (2), (3), it should be $n_k \geq n_{\epsilon}$, instead of $k \geq n_{\epsilon}$ And a small question that is really bothering me : For all $\epsilon > 0$, If $p_n$ is a Cauchy sequence, the definition says an $n_{\epsilon}$ exist , such that for all $n_1, n_2 >n_{\epsilon}$ $d(p_{n_1},p_{n_2})<\epsilon$. The definition of convergence says: If $p_n$ converges to $p_0$, then for all $\epsilon > 0$, there exist and $n_{\epsilon}$, such that for any $n > n_{\epsilon}$, $d(p_n, p_{n_{\epsilon}})<\epsilon$. When they mix the definitions to prove the second part of the proposition, the one about the subsequence, they consider the $\epsilon$ and the $n_{\epsilon} $ of both definitions to be the same. I agree for the $\epsilon$, because since it must be true for all $\epsilon $ I can choose them to be equal, but then what guarantees that the $n_{\epsilon} $ of the first definition is the same as the $n_{\epsilon} $ of the second definition ? The definitions just state some $n_{\epsilon} $ exists. It is assumed they are equal without saying why and I really can't figure it out. AI: The wording ‘if $p_n$ has $p_0$ as a limit point’ is very sloppy, but not for the reason that you suggest at (1). It’s sloppy because $p_n$ is a single point, not a sequence; what the author means is ‘if $\langle p_n:n\in\Bbb N\rangle$ has $p_0$ as a limit point’ (in which any other standard notation for a sequence can be substituted for my preferred notation). What the author means here, however, is correct: if the original Cauchy sequence has $p_0$ as a limit point (I would say cluster point), then it converges to $p_0$. Not only is your suggestion not what he’s trying to say, it would make no sense: no subsequence $\langle p_{n_k}:k\in\Bbb N\rangle$ has even been defined at that point. There is also nothing wrong with $k\ge n_\epsilon$ at (2) and (3), though it would have been helpful if the author had explained why this is the case. The point is that since $\langle p_{n_k}:k\in\Bbb N\rangle$ is a subsequence of $\langle p_n:n\in\Bbb N\rangle$, the sequence $\langle n_k:k\in\Bbb N\rangle$ is a strictly increasing sequence of natural numbers, and it’s easy to prove by induction on $k$ that this implies that $n_k\ge k$ for each $k\in\Bbb N$. 
Thus, $k\ge n_\epsilon$ immediately implies that $n_k\ge n_\epsilon$. The definition of convergence in the penultimate paragraph of your question is incorrect. It should read: $\langle p_n:n\in\Bbb N\rangle$ converges to $p_0$ if and only if for each $\epsilon>0$ there is an $n_\epsilon\in\Bbb N$ such that for each $n>n_\epsilon$, $d(p_n,\color{red}{p_0})<\epsilon$. The last part of the proof is correct, but the author has omitted a little explanation. There is an $m_\epsilon\in\Bbb N$ such that $d(p_{n_k},p_0)<\frac{\epsilon}2$ whenever $k\ge m_\epsilon$. There is a possibly different $m_\epsilon'\in\Bbb N$ such that $d(p_{n_k},p_n)<\frac{\epsilon}2$ whenever $k,n\ge m_\epsilon'$. Now let $n_\epsilon=\max\{m_\epsilon,m_\epsilon'\}$; then $d(p_{n_k},p_0)<\frac{\epsilon}2$ and $d(p_{n_k},p_n)<\frac{\epsilon}2$ whenever $k,n\ge n_\epsilon$, and we're home free.
H: The countable product of Fréchet spaces is a Fréchet space Let $\{E_n \; ; \; n \in \mathbb{N}\}$ be a family of Fréchet spaces. I want to prove that the product $$E:= \prod_{n=1}^{\infty} E_n$$ is a Fréchet space, that is, $E$ is metrizable (Hausdorff space and admits a countable basis of neighborhoods of $0\in E$), complete and locally convex (admits a basis of neighborhoods of $0\in E$ consisting of convex sets). I already know that $ E $ is Hausdorff, locally convex and complete space. I don't know how to prove that $ E $ is metrizable. How to proceed? AI: Let $d_n$ be one metric in each $E_n$. Then $$d(\tilde{x}, \tilde{y}):= \sum_{n=1}^{+\infty} \frac{d_n(x_n,y_n)}{1+d_n(x_n,y_n)}\frac{1}{2^n} $$ where $\tilde{x}, \tilde{y}\in E$, is a metric in $E$.
H: Notation for matrix filled with zeros except for one row and one column Is there existing succinct notation for a matrix $[A]_i$ whose elements are all $0$, except that the $i$th row and $i$th column is given by a particular vector, for example the vector $$\mathbf{z} = \frac{\mathbf{v} - 2\mathbf{v}_i}{a}$$ I'm trying to avoid having to put in something like $$ [A]_i = \begin{bmatrix} 0 & \ldots & \dfrac{\mathbf{v}_1 - 2\mathbf{v}_i}{a} & 0 & \ldots & 0 \\ \vdots & \ddots & \dfrac{\mathbf{v}_2 - 2\mathbf{v}_i}{a} & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \dfrac{\mathbf{v}_1 - 2\mathbf{v}_i}{a} & \dfrac{\mathbf{v}_2 - 2\mathbf{v}_i}{a} & \ldots & \ldots & \ldots & \dfrac{\mathbf{v}_n - 2\mathbf{v}_i}{a} \\ 0 & \ldots & \dfrac{\mathbf{v}_{i+1} - 2\mathbf{v}_i}{a} & 0 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \end{bmatrix} $$ AI: If ${\bf e_i}$ is the column vector with $1$ in position $i$ and $0$ elsewhere, this is ${\bf z} {\bf e_i}^\top + {\bf e_i} {\bf z}^\top - z_i {\bf e_i} {\bf e_i}^\top$.
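As a quick numerical sanity check of this compact expression (not part of the original answer; the size, the index $i$, and the vector standing in for $\mathbf z$ below are arbitrary illustrative choices, and NumPy is assumed):

```python
# Check that z e_i^T + e_i z^T - z_i e_i e_i^T fills exactly the i-th row and column with z.
import numpy as np

n, i = 5, 2                      # matrix size and the distinguished index (0-based here)
z = np.arange(1.0, n + 1.0)      # an arbitrary vector playing the role of z

e_i = np.zeros(n)
e_i[i] = 1.0

compact = np.outer(z, e_i) + np.outer(e_i, z) - z[i] * np.outer(e_i, e_i)

explicit = np.zeros((n, n))
explicit[i, :] = z               # i-th row is z
explicit[:, i] = z               # i-th column is z (the (i,i) entry is z_i, not 2*z_i)

assert np.allclose(compact, explicit)
```

The subtracted term $z_i\,{\bf e_i}{\bf e_i}^\top$ is what prevents the $(i,i)$ entry from being counted twice.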
H: Show that $A^{-1}+B^{-1}$ is also invertible Let $A$ and $B$ be two invertible $n \times n$ real matrices. Assume that $A+B$ is invertible. Show that $A^{-1} + B^{-1}$ is also invertible. My approach \begin{aligned} &|\mathrm{A}|\left|A^{-1}+B^{-1}\right||\mathrm{B}|=|\mathrm{B}+\mathrm{A}| \neq 0 \\ \Rightarrow &\left|A^{-1}+B^{-1}\right| \neq 0 \text { as }|\mathrm{A}|,|\mathrm{B}| \neq 0 \\ \Rightarrow & A^{-1}+B^{-1} \text {is invertible } \end{aligned} Am I correct? Any other method or hint would be greatly appreciated! AI: Just for the sake of a slightly different approach. Let $C$ be the inverse of $A+B$ (same as $B+A)$. Then we can show that $ACB$ is the inverse of $A^{-1}+B^{-1}$. \begin{align*} (A^{-1}+B^{-1}) (ACB)&=CB+B^{-1}ACB\\ &=B^{-1}B (CB) +B^{-1}ACB\\ &=B^{-1}\underbrace{(B+A)C}_{=I}B\\ &=B^{-1}B\\ &=I. \end{align*}
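A quick numerical check of this identity (illustrative only; NumPy and random test matrices are assumed — a generic random matrix, and the sum of two of them, is invertible with probability 1):

```python
# Verify numerically that A C B is the inverse of A^{-1} + B^{-1}, where C = (A + B)^{-1}.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

C = np.linalg.inv(A + B)
product = (np.linalg.inv(A) + np.linalg.inv(B)) @ (A @ C @ B)
print(np.allclose(product, np.eye(n)))   # True
```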
H: Determine the order of the subgroup H of $S_n$ for $n \geq 3$ From the chapter on Permutation Groups in Gallian's 'Abstract Algebra' 9th Ed., we are asked to prove that for $n \geq 3,$ $$H=\{\beta \in S_{n}|\beta (1) \in \{1,2\} \land \beta (2)\in\{1,2\}\}$$ is a subgroup of $S_{n}$ and then determine (not prove) what its order is. To the extent that it matters, a sketch of my proof of the first part is... "Note $H \neq \emptyset$ since clearly the identity permutation $\epsilon \in H$. Now take $\alpha, \gamma \in H$. Consider $\alpha\gamma(1)=\alpha(\gamma(1)) \in S_{n}$. Note $\gamma(1)\in\{1,2\}$ by definition of $\gamma \in H$. It follows that $\alpha\gamma(1) \in \{1,2\}$ by definition of $\alpha \in H$. A similar proof shows $\alpha\gamma(2)\in\{1,2\}$ and thus $\alpha\gamma \in H$, proving $H \leq G$ by the 'Finite Subgroup Test.' QED." The last part of the question stuck me. My limited combinatorics talents are rusty so I played around with examples for $S_{3}, S_{4}, S_{5}, S_{6}$ before throwing in the towel since I could find no apparent patterns that relied only on $n\geq3$. My examples did give me the correct orders of the $H$ in each case - that is, $2, 4, 12,$ and $48$ - and I did this by the tried and true method of listing the lengths of the disjoint cycles of all elements in $S_{n}$, noting which disjoint cycles would allow $\beta(1)\in\{1,2\}$ and $\beta(2)\in\{1,2\}$, and then meticulously counting them. tldr; Upon giving up, I found that $|H|=2|S_{n-2}|=2(n-2)!$ on the internet/ solutions manual without any proof as to why. This is interesting and would like to know why for my own edification. So, my question ultimately is, what is a [hopefully] elementary algebraic/ combinatorial proof as to why $|H|=2|S_{n-2}|=2(n-2)!$? Any help or insight would be greatly appreciated! AI: If $\beta\in H$ then, as both $\beta(1)$ and $\beta(2)$ are in $\{1,2\}$, and $\beta$ is a permutation, we conclude that either $\beta(1)=1$ and $\beta(2)=2$ or $\beta(1)=2$ and $\beta(2)=1$ For the rest of the numbers in $\{3,\ldots,n\}$, note that $\beta$ has to permute these numbers (since the action on $1$ and $2$ is already constrained). But there are no other constraints on what $\beta$ does to $\{3,\ldots,n\}$. In other words an element of $H$ breaks down into a permutation of $\{1,2\}$ and a permutation of $\{3,\ldots,n\}$. So we can uniquely specify such a $\beta$ by choosing one of the two options above and then choosing a permutation of $\{3,\ldots,n\}$. Since there are $(n-2)!$ permutations of $\{3,\ldots,n\}$, this means $2(n-2)!$ total choices. Another way to look at it is to let $S$ be the group of permutations of $\{3,\ldots,n\}$. Clearly $S\cong S_{n-2}$. Then the above description induces a natural ismorphism from $H$ to $S_{2}\times S\cong S_{2}\times S_{n-2}$.
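For what it's worth, a brute-force count for small $n$ (an illustrative check using Python's itertools, not needed for the argument) agrees with $2(n-2)!$, including the values $2, 4, 12, 48$ found by hand in the question:

```python
# Count permutations beta in S_n with beta(1) and beta(2) both in {1, 2}.
from itertools import permutations
from math import factorial

for n in range(3, 8):
    count = sum(1 for p in permutations(range(1, n + 1)) if {p[0], p[1]} == {1, 2})
    print(n, count, 2 * factorial(n - 2))   # the two counts agree
```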
H: Show that $\psi_{b}=\psi_{c}\Leftrightarrow b=c$. Let $K$ be a finite field with $|K|=p^{n}$, where $p$ is a prime number. Consider the trace map $$Tr:K\to K,\ Tr(a)=a+a^{p}+a^{p^{2}}+\cdots+a^{p^{n-1}}.$$ For every $a\in K$ we know that $Tr(a)\in \mathbb{Z}_{p}$ and that $Tr$ is a $\mathbb{Z}_{p}$-linear transformation. Let $b\in K$. We define the following transformation: $$\psi_{b}:K\to K,\ \psi_{b}(a)=Tr(ba).$$ We have that $\psi_{b}$ is a $\mathbb{Z}_{p}$-linear transformation, for every $b\in K$. $\quad$ But how can we prove that $\psi_{b}=\psi_{c} \Leftrightarrow b=c\ ?$ It's obvious that if $b=c$ then $\psi_{b}=\psi_{c}$. Let us assume now that for $b,c\in K$ we have $$\psi_{b}=\psi_{c}\Leftrightarrow Tr(ba)=Tr(ca)\Leftrightarrow Tr((b-c)a)=0,\ \forall\ a\in K.$$ I suppose that we must find the right choice of $a\in K$ to give us the required result, but I couldn't get any further. How can we prove that if $\psi_{b}=\psi_{c}$ then $b=c$? AI: Hint: $\psi_b-\psi_c=\psi_{b-c}$. And if $b-c\neq0$ then $\psi_{b-c}$ is a polynomial of degree $p^{n-1}$. Therefore its number of zeros ____?
H: How to find integer solutions of the equation $x(x+9)=y(y+6)$ where x,y are integers? I found that the equation becomes $$(2x+9)^2-(2y+6)^2=45.$$ And $45 = 9^2-6^2 = 7^2-{2^2}$. From here I found these solutions $(x,y)=(0,0), \ (0,-6), \ (-9,0),\ (-9,-6),\ (-1,-2),\ (-1, -4), \ (-8,-2), \ (-8,-4)$. Does this equation have finitely many integer solutions, and what would be a better approach to solve such problems? AI: The equation in the original version of the question, $x(x+6)=y(y+6)$, can be written as $$(x^2-y^2)+6(x-y)=0 \implies (x-y)(x+y+6)=0.$$ Now either $x=y$ or $x+y+6=0$. So its solutions are $\{(a,a) \, | \, a \in \Bbb{Z}\} \cup \{(a,-6-a)\, |\, a \in \Bbb{Z}\}$. After you modified your original problem: the equation can be written as $$(2x+2y+15)(2x-2y+3)=45.$$ Now factor $45=ab$ (allowing negative factor pairs as well) and solve the system \begin{align*} 2x+2y+15&=a\\ 2x-2y+3&=b \end{align*} To get $$x=\frac{a+b-18}{4} \quad \text{ and } \quad y=\frac{a-b-12}{4}.$$ So test the factors $a,b$ of $45$ such that these quantities are integers. \begin{align*} ab&=45\\ a+b & \equiv 2 \pmod{4}\\ a-b & \equiv 0 \pmod{4}\\ \end{align*} Hopefully you can take it from here and see that, for example, $a=9,b=5$ gives a solution (namely $(x,y)=(-1,-2)$).
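As a quick cross-check of the factor-pair method (illustrative only), a brute-force scan over a small box of integers:

```python
# List all integer solutions of x(x+9) = y(y+6) with |x|, |y| <= 50.
sols = sorted((x, y) for x in range(-50, 51) for y in range(-50, 51)
              if x * (x + 9) == y * (y + 6))
print(sols)   # each solution corresponds to one (possibly negative) factor pair of 45
```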
H: Eigenvectors of $A \in SO(2n)$ and $A \in SO(2n+1)$ Does every matrix $A \in SO(2n)$ have an eigenvector? Does every matrix $A \in SO(2n+1)$ have an eigenvector? I think that you can answer both questions with yes, is that true? AI: Every matrix has eigenvectors over the complex numbers. If the matrix is real, there are real eigenvectors for every real eigenvalue, but none for a non-real eigenvalue. Assuming you want real eigenvectors, the questions become, does every matrix in $SO(2n)$ or $SO(2n+1)$ have at least one real eigenvalue?
H: Solving $AX=B$ recursively where $X$ and $B$ are matrices It's possible to solve $Ax=b$ recursively where $x, b \in \Re^n$ are vectors and $A \in \Re^{m\times n}$ where $m > n$, by using Recursive Least Squares (RLS). But what if $AX=B$ where $A \in \Re^{m\times n}$, $X \in \Re^{n\times k}$ and $B \in \Re^{m\times k}$ where $k > 1$ and $m > n$? How can I solve that recursively if $A, B$ are known? AI: Notice that each column of $B$ obtains contributions only from the matching column of $X$, and, conversely, each column of $X$ only contributes to one column of $B$, so you have "number of columns of $B$" simultaneous uncoupled equations in your system. So the matrix version of your problem appears to be number-of-columns-of-$B$ uncoupled parallel copies of the vector version you describe first.
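A small illustration of this decoupling (assuming NumPy; the batch least-squares solve here just stands in for whatever per-column RLS update one would actually run):

```python
# Solve AX = B (least squares) one column at a time and compare with the all-at-once solve.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 8, 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, k))

X_cols = [np.linalg.lstsq(A, B[:, j], rcond=None)[0] for j in range(k)]  # column by column
X = np.column_stack(X_cols)

X_all = np.linalg.lstsq(A, B, rcond=None)[0]                             # whole matrix at once
print(np.allclose(X, X_all))   # True: the k column problems are uncoupled
```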
H: Conformal mapping of the disk $|z|<R_1$ onto the disk $|w|<R_2$ Could you help me with the following, please: Find the conformal mapping of the disk $|z| <R_1$ to the disk $|w| <R_2$ such that $w(a) = b$, $\operatorname{Arg}(w'(a)) = \alpha$, $(|a| <R_1, |b| <R_2)$. I have tried the following, but it does not match the solution that comes in the book: We go to the unit circle with $w_1=\frac{z}{R_1}$, and then to the disk of radius $R_2$ with $w_2=R_2 w_1$, and the transformation we are looking for is then their composition, but I do not know how to make the conditions hold. I would greatly appreciate your help. AI: Hint: What are the complex automorphisms of the disk? Could they be used to fulfill the conditions?
H: Showing that $Z[\sqrt{-n}]/\sqrt{-n}\approx Z_n $ and other similar isomorphisms. I found the isomorphism here: Show that $\sqrt{-n}$ and $\sqrt{-n} +1$ are not prime in $\mathbb{Z}[\sqrt{-n} ]$ First I would like to show the isomorphism $\mathbb Z[\sqrt{-n}]\simeq\mathbb Z[X]/(X^2+n)$. This is my attempt: Consider the map $\phi: Z[x] \to Z[\sqrt{-n}]$ defined by $f \to f(\sqrt{-n})$. It is clearly surjective and so by the First isomorphism theorem $Z[x]/\ker(\phi)\approx Z[\sqrt{-n}]$. It is easy to see that $\ker(\phi)$ contains $(x^2+n)$ but I am not sure how to show equality. We know that $(x^2+n)$ is not maximal (that is a posteriori) so that is not helpful. We do know it is prime as $x^2+n$ is irreducible and so prime ($Z[x]$ is a UFD). How do I conclude that the two ideals are equal? I was thinking of perhaps doing things with the field of fractions and minimal polynomials but that does not seem to work. Next how do I show the other two isomorphisms in the link above? $\mathbb Z[\sqrt{-n}]/(\sqrt{-n})\simeq \mathbb Z/n\mathbb Z$ $\mathbb Z[\sqrt{-n}]/(\sqrt{-n}+1)\simeq \mathbb Z/(n+1)\mathbb Z$ My suspicion is to use the isomorphism theorems here. The ideal $(\sqrt{-n})$ corresponds to the ideal $(x+(x^2+n))$ I think but I am not sure. Thank you. AI: The polynomials in $\ker(\phi)$ are exactly the polynomials with $\sqrt{-n}$ as a root. These polynomials are exactly multiples of the minimal polynomial of $\sqrt{-n}$, which is $x^2 + n$. To prove this, assume there is a polynomial $p(x) \in \ker(\phi)$ and use division with remainder to write $p(x) = q(x)(x^2+n) + r(x)$ where $r$ has degree at most $1$ (which is valid because $x^2+n$ is monic so you can divide by it without resorting to the field of fractions). Then because you know that $\sqrt{-n}$ is a root of $p(x)$, you know that it must be a root of $r(x)$. All that remains to be shown is that the only polynomial of degree at most $1$ with $\sqrt{-n}$ as a root is the zero polynomial (this is equivalent to showing that $x^2+n$ is the minimal polynomial of $\sqrt{-n}$). EDIT: I just saw the last two parts. Your intuition here is correct. By the fourth isomorphism theorem, the ideal $(\sqrt{-n}) \subset Z[\sqrt{-n}]$ corresponds to an ideal of $Z[x]$ containing $x^2+n$ which you’ve correctly identified as the ideal generated by $x,x^2+n$. Then by the third isomorphism theorem we have that $$ Z[\sqrt{-n}]/(\sqrt{-n}) \cong \left(Z[x]/(x^2+n)\right)/\left((x,x^2+n)/(x^2+n)\right) \cong Z[x]/(x,x^2+n),$$ and this quotient is pretty easily seen to be isomorphic to $Z/nZ$. Similarly, the other example is isomorphic to $$ Z[x]/(x+1,x^2+n) \cong Z[x]/(x+1,x-n) \cong Z[x]/(x+1,n+1) \cong Z/(n+1)Z. $$
H: Matrix exponential of a particular block matrix Can anyone compute the following matrix exponential $$\Phi = \exp\begin{pmatrix} -\alpha I_3 & \alpha A \\ I_3 & O_3 \\ \end{pmatrix}$$ where $A$ is an arbitrary $3 \times 3$ matrix, $I_3$ is the $3 \times 3$ identity and $O_3$ is the $3 \times 3$ zero matrix? Thanks in advance! Edit: This comes about as the solution to a linear ODE. I tried evaluating it in MAPLE for the 2D case and it gives a result, but MAPLE can only do calculations component-wise so it's difficult to extract a concise expression in terms of the matrix A. For the 2D case, the answer seems to be in the form $$\Phi=M_1e^{\omega_1}+M_2e^{-\omega_1}+M_3e^{\omega_2}+M_4e^{-\omega_2}$$ where $$\omega_{1}=\alpha/2+1/2\,\sqrt {\alpha\, \left( \alpha+2\,\sqrt { \left( {\it Tr} \left( A \right) \right) ^{2}-4\,{\it \det} \left( A \right) }+2\,{\it Tr} \left( A \right) \right) } $$ $$\omega_{2}=-\alpha/2+1/2\,\sqrt {\alpha\, \left( \alpha-2\,\sqrt { \left( {\it Tr} \left( A \right) \right) ^{2}-4\,{\it \det} \left( A \right) }+2\,{\it Tr} \left( A \right) \right) } $$ and all the $M_i$ are 4 by 4 matrices. It is not clear to me if $\Phi$ will have this structure in 3 dimensions, however. AI: Write out the eigenvector equation $$ Mx = \lambda x \implies \pmatrix{-\alpha I & \alpha A\\ I & 0} \pmatrix{x_1\\x_2} = \lambda \pmatrix{x_1\\x_2}. $$ That is, we have the system $$ \begin{cases} - \alpha x_1 + \alpha Ax_2 = \lambda x_1\\ x_1 = \lambda x_2. \end{cases} $$ Substituting the second equation into the first yields $$ - \alpha \lambda x_2 + \alpha Ax_2 = \lambda^2 x_2 \implies\\ Ax_2 = \frac{\lambda(\lambda + \alpha)}{\alpha}x_2. $$ So: if $\alpha$ is non-zero, then for every eigenvector $v$ of $A$ associated with $\mu$ and the solutions to $\lambda(\lambda+ \alpha)/\alpha = \mu$, the vector $x = (x_1,x_2)$ with $x_1 = \lambda v$ and $x_2 = v$ is an eigenvector of $M$. If $\alpha \neq 0$ and $A$ is diagonalizable, this allows us to compute the exponential of $M$ directly. Yet another approach: suppose that $J = S^{-1}AS$ is the Jordan form of $A$. It follows that $$ M_2 = \pmatrix{S & 0\\0 & S}^{-1} \pmatrix{-\alpha I & \alpha A\\ I & 0} \pmatrix{S & 0\\0 & S} = \pmatrix{-\alpha I & \alpha J \\ I & 0}. $$ If $J = \operatorname{diag}(\lambda_1,\lambda_2,\lambda_3)$, then we can see that $M_2$ is permutation similar to the block-diagonal matrix $$ \operatorname{diag}\left[\pmatrix{-\alpha & \alpha \lambda_1\\ 1 & 0}, \pmatrix{-\alpha & \alpha \lambda_2\\ 1 & 0}, \pmatrix{-\alpha & \alpha \lambda_3\\ 1 & 0}\right]. $$ The blocks can be exponentiated separately. (My previous answer below) One approach is to simply use eigenvalues to calculate the Jordan form and proceed in the standard fashion. The key here, however, is that it is easy to find the eigenvalues of $M$ using only the eigenvalues of $A$. Denote $$ M = \pmatrix{-\alpha I & \alpha A\\ I & 0}. $$ Because the blocks of $M$ commute, we find that $$ \det(M - \lambda I) = \det((-\alpha I - \lambda I)(-\lambda I) - (\alpha A) (I)) \\ = \det(\lambda(\alpha + \lambda)I - \alpha A). $$ So, if $p$ is the characteristic polynomial of $A$ (i.e. $p(\lambda) = \det(A - \lambda I)$), then the characteristic polynomial of $M$ is, up to a nonzero constant factor, $p\!\left(\frac{\lambda(\lambda + \alpha)}{\alpha}\right)$. So, for each eigenvalue $\mu$ of $A$, the two roots $\lambda$ of $\lambda(\lambda + \alpha)/\alpha = \mu$ are eigenvalues of $M$; note that if $\lambda$ is one of them, the other is $-\alpha - \lambda$. In fact, we could say a bit more. Suppose that $\lambda$ is one of these eigenvalues of $M$.
We find that $$ \operatorname{rank}(\lambda I - M) = \operatorname{rank} \pmatrix{-\alpha I - \lambda I & \alpha A\\ I & -\lambda I}. $$ If $\lambda \neq -\alpha$, so that the upper-left block $-(\alpha + \lambda)I$ is invertible, then the nullity of $M - \lambda I$ is equal to the nullity of the Schur complement $$ (M - \lambda I)/(-(\alpha + \lambda) I) = -\lambda I + \frac{\alpha}{\lambda + \alpha}A \\ \to \frac{\lambda(\lambda + \alpha)}{\alpha} I - A $$ (the last step rescales by $-\frac{\lambda+\alpha}{\alpha}$, which does not change the nullity).
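As a numerical spot-check of the eigenvector recipe above (illustrative only; NumPy/SciPy are assumed, $\alpha$ is an arbitrary nonzero value, and a generic random $A$ is almost surely diagonalizable):

```python
# Build the 6 eigenpairs of M from the 3 eigenpairs of A and recover exp(M).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
a = 1.7                                           # the scalar alpha (nonzero)
A = rng.standard_normal((3, 3))                   # generic, hence diagonalizable
M = np.block([[-a * np.eye(3), a * A], [np.eye(3), np.zeros((3, 3))]])

mus, V_A = np.linalg.eig(A)
lams, vecs = [], []
for mu, v in zip(mus, V_A.T):
    for lam in np.roots([1.0, a, -a * mu]):       # roots of lam*(lam + a) = a*mu
        lams.append(lam)
        vecs.append(np.concatenate([lam * v, v])) # eigenvector x = (lam*v, v)

V = np.array(vecs).T
Phi = V @ np.diag(np.exp(lams)) @ np.linalg.inv(V)
print(np.allclose(Phi, expm(M)))                  # True (up to round-off)
```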
H: Evaluate the integral $\int_{0}^{10} 2^x dx$ using the limit definition How do I evaluate a the integral $\int_{0}^{10} 2^x dx$ using the limit definition of an integral? We can get the values: $\Delta x = \frac{10-0}{n}$ and $f(\frac{10i}{n}) = 2^{\frac{10i}{n}}$, thus setting up $$\int_{0}^{10} 2^x dx = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{10}{n} 2^{\frac{10i}{n}}$$ However, I am unable to evaluate this to get a value. AI: Hint: This is a geometric progression with common ratio $2^{\frac{10}{n}}$. Use $$\sum_{i=1}^n r^i =\frac{r(r^n-1)}{r-1}$$
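For a quick numerical illustration (not required for the exercise), the right-endpoint sums do converge to $\int_0^{10}2^x\,dx=(2^{10}-1)/\ln 2$:

```python
# Right-endpoint Riemann sums for 2^x on [0, 10] versus the exact value.
from math import log

exact = (2 ** 10 - 1) / log(2)
for n in (10, 100, 1000, 10_000):
    dx = 10 / n
    riemann = sum(dx * 2 ** (10 * i / n) for i in range(1, n + 1))
    print(n, riemann, riemann - exact)   # the error shrinks roughly like 1/n
```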
H: Find $n$ such that $1-a c^{n-1} \ge \exp(-\frac{1}{n})$ I am trying to find the integer $n$ such that \begin{align} 1-a c^{n-1} \ge \exp(-\frac{1}{n}) \end{align} where $a>0$ and $c \in (0,1)$. I know that finding it exactly is difficult. However, can one find good upper and lower bounds for it? I tried using the bound $\exp(-x) \le 1-x+\frac{1}{2}x^2$. However, it didn't really work. AI: Since $\exp(-1/n) \sim 1 - 1/n $ while the left side converges exponentially to $1$, the inequality will be true for all sufficiently large $n$. Somewhat explicitly, you want $$ a c^{n-1} \le 1 - \exp(-1/n)$$ so it suffices to have $$ c^{-n} \ge 2an/c $$ with $n \ge 1$ (note that $\exp(-1/n) \le 1 - 1/n + 1/(2n^2)$ so $\exp(-1/n) \le 1 - 1/(2n)$). Now $$ c^{-n} = (1 + (c^{-1}-1))^n > \frac{n(n-1)}{2} (c^{-1}-1)^2 $$ so it suffices to have $$n-1 > \frac{4a}{c(c^{-1}-1)^2} = \frac{4ac}{(1-c)^2}$$
H: Question on the colored Jones polynomial (from Wikipedia) I'm trying to understand how "coloring" the component of a link changes the link. I'm looking at the picture for the section on the colored Jones polynomial on the link provided, and was wondering if somebody could tell me what the contents of the rectangle with the label [n cable]. Am i correct in assuming that the larger rectangle on the left just has n copies of the link L running parallel to each other? Thanks!! https://en.wikipedia.org/wiki/Jones_polynomial#Colored_Jones_polynomial AI: The rectangle with contents "$N$ cable" does not represent a block of the link diagram. It labels the number of duplicates of the knot in the cabling, in particular, there are $N$ copies of the knot in the cabling. The poorly described diagram seems to be representing a knot as an unspecified braid, represented by the large empty rectangle on the left, that has been closed, represented by the four strands that meet the large rectangle along its top and bottom edges and pass around to the right of the diagram, that has an $N$-cabling, represented by the label that you ask about. (For the description as a closed braid, compare/contrast Figure 4 at https://arxiv.org/pdf/1804.07910.pdf .) For Jones polynomial calculations, representing the knot as a closed braid is a substantial (conceptual) aid to computation. A cabling is a specific kind of satellite knot. Start with a closed regular neighborhood, $n(K)$, of the knot $K$. (This neighborhood is homeomorphic to a solid torus, but rather than imagine a "doughnut", unless you really are thinking of the colored Jones polynomial of the unknot, the solid torus is knotted.) On the surface of $n(K)$, draw a torus knot. This torus knot is a cabling of $K$. For the purpose of the colored Jones polynomial, it matters how many longitudinal copies of the knot are cut by a meridional disk of the solid torus (minimized under isotopy), but it doesn't really matter how they revolve around the core of the solid torus as they proceed along the knot as long as each strand closes itself off. That is, as long as the number of $2\pi/N$ revolutions is a multiple of $N$. (In general, if there are $m$ of $2\pi/N$ revolutions, there are $\gcd(m,N)$ parallel closed curves on the surface, each of which is an $N/\gcd(m,N)$ cabling of the original knot.)
H: How to construct a function $f(x)$ such that $f(x)e^{-px}$ wouldn't tend to $0$ as $x$ tends to infinity How to construct a function $f(x)$ such that $f(x)e^{-px}$ wouldn't tend to $0$ as $x$ tends to infinity? This question is motivated when studying Laplace Transform when I encountered the following result Suppose $f$ and $f'$ both have Laplace Transform on some half plane $\Re(p)>p_0$, provided that $f(x)e^{-px}\to 0$ as $x\to \infty$ for $p$ such that $\Re(p)>p_0$. Then we have $\hat {f'}(p)=p\hat {f}(p)-f(0)$. Where the hat notation is meant to be the Laplace Transform of function. Now, I am just quite curious if there exists a function $f$ such that the condition of "provided that $f(x)e^{-px}\to 0$ as $x\to \infty$ for $p$ such that $\Re(p)>p_0$" would fail. Surely we can just choose $p_0$ to be very large in its real part unless we can construct something that can grow more rapidly than exponentials? Many thanks in advance! AI: How about $f(x) = \exp(x^2)$. EDIT: Note that $$ \dfrac{d}{dx} \left(f(x) e^{-px}\right) = f'(x) e^{-px} - p f(x) e^{-px} $$ so $$\int_0^b f'(x) e^{-px}\; dx - p \int_0^b f(x) e^{-px}\; dx = f(b) e^{-pb} - f(0)$$ If $f(x)$ and $f'(x)$ both have Laplace transforms at $p$, the left side must go to a finite limit as $b \to \infty$, so the right side must also, and in particular $f(x) e^{-px}$ must be bounded, and for $\text{Re}(c) > \text{Re}(p)$ we must have $f(x) e^{-cx} \to 0$ as $x \to \infty$. So it is not possible for both $f$ and $f'$ to have a Laplace transform for $\text{Re}(p) > p_0$ without $|f(x) e^{-px}| \to 0$ as $x \to \infty$ for $\text{Re}(p) > p_0$.
H: A sum of powers of $2$ or $4$ that is or isn't divisible by $3$ Let $n\in\mathbb{Z^+}$ (a positive integer), and define $E(n)=2^{2n}$ and $O(n)=2^{2n+1}$. Alternatively we can define $E$ to be $4^n$ and $O$ to be $2\cdot4^n$. Let $a$ and $b$ be arbitrary positive integers where $a>b$. I empirically found that $$\frac{E(a) - E(b)}{3}$$ is an integer after testing a bunch of numbers. Hence, $E(a)-E(b)$ is always divisible by $3$. The same applies to $O(a)-O(b)$. How do I mathematically prove that $E(a) - E(b)$ and $O(a) - O(b)$ are always divisible by $3$, but $E(a) - O(b)$ is not divisible by $3$? I'm not sure if these are common theorems; I could not find them online. I have a hard time figuring out where to start and which proof techniques to use, since I haven't really learned number-theoretic proofs yet. AI: $4 \equiv 1 \mod 3$, so $4^n \equiv 1^n \equiv 1 \mod 3$, and $2 \cdot 4^n \equiv 2 \mod 3$.
H: Properties of the solutions of the ODE $y'' + p(x)y' + q(x)y = 0$ Let $u(x)$ and $v(x)$ be solutions of $y'' + p(x)y' + q(x)y = 0$, $p$ and $q$ continuous in $\mathbb{R}$, such that $u(0) = 1$, $u'(0) = 0$, $v(0) = 0$ and $v'(0) = 1$. Prove that, if $x_{1} < x_{2}$ are such that $u(x_1) = u(x_2) = 0$ and $u(x) \neq 0$ for all $x \in (x_1, x_2)$, then there exists a point $c \in (x_1, x_2)$ such that $v(c) = 0$. What's more, prove that such a point is unique. I don't really know how to start tackling this problem. I know that the Wronskian of $u$ and $v$ is greater than $0$ in the entire interval, and that $v(x_1) \neq 0$ and $v(x_2) \neq 0$, but I couldn't go much farther than that... Does anyone have a tip to get me going in the right direction? Thanks in advance! AI: By the uniqueness of solutions, if $u$ is not identically $0$ there are no points where both $u$ and $u'$ are $0$. Thus if $x_1$ and $x_2$ are adjacent zeros of $u$, $u'(x_1)$ and $u'(x_2)$ are nonzero and have different signs (e.g. if $u > 0$ on $(x_1, x_2)$ we must have $u'(x_1) > 0$ and $u'(x_2) < 0$). Now if $W(u,v)(x_1) = u'(x_1) v(x_1)$ and $W(u,v)(x_2) = u'(x_2) v(x_2)$ have the same sign, $v(x_1)$ and $v(x_2)$ must have different signs. So by the Intermediate Value Theorem...
H: How to solve a value of K for a root locus? I'm using the book "Control Systems Engineering - Norman S. Nise" and in the Root Locus chapter most of the exersices are solved using software. I'm aware that this is a practical solution to real problems, however I'm curious how can I get a closed form solution to problems of the type : Find a value of K for which the poles of the negative feedback closed-loop system would have a $\zeta = 0.707 $ Assume the open loop transfer function is $$ G(s) = \frac{K(s-1)(s-2)}{(s+1)(s+2)} $$ $$ H(s) = 1 $$ AI: The closed-loop transfer function is, \begin{align} y &= G(x-z) \\ &= G(x-Hy) \\ (1+GH)y &= Gx \\ y &= \frac{G}{1+GH}x \end{align} Let $n_G$, $n_H$ be the numerator of $G$ and $H$, and $d_G$, $d_H$ be the denominator of $G$ and $H$. \begin{align} y &= \frac{G}{1+GH}x \\ &= \frac{\frac{n_G}{d_G}}{1+\frac{n_G}{d_G}\frac{n_H}{d_H}}x \\ &= \frac{n_Gd_H}{n_Gn_H + d_Gd_H}x \\ &= \frac{K(s-1)(s-2)}{K(s-1)(s-2) + (s+1)(s+2)}x \\ &= \frac{K(s-1)(s-2)}{K(s^2-3s+2) + (s^2+3s+2)}x \\ &= \frac{K(s-1)(s-2)}{(1+K)\left[s^2+\frac{3(1-K)}{1+K}s+2\right]}x \end{align} since the constant term of the denominator is $2K+2=2(1+K)$. Comparing with the standard form $s^2+2\zeta\omega_n s+\omega_n^2$ gives $\omega_n^2 = 2$ and $2\zeta\omega_n = \frac{3(1-K)}{1+K}$. Therefore, \begin{align} \zeta &= \frac{3(1-K)}{2\omega_n(1+K)} = \frac{3(1-K)}{2\sqrt{2}\,(1+K)} \end{align} If you want $\zeta = \frac{1}{\sqrt{2}}$ then $3(1-K)=2(1+K)$, i.e. $K= \frac{1}{5}$, which places the closed-loop poles at $s=-1\pm j$.
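A quick numerical cross-check of this value (illustrative, assuming NumPy), working directly from the closed-loop characteristic polynomial $(1+K)s^2 + 3(1-K)s + 2(1+K)$:

```python
import numpy as np

K = 0.2
poles = np.roots([1 + K, 3 * (1 - K), 2 * (1 + K)])
zeta = -poles[0].real / abs(poles[0])
print(poles, zeta)   # poles at -1 +/- 1j, damping ratio ~ 0.7071
```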
H: Isometric image of an open ball. Let $(X,d)$ be a compact metric space, where $f: X \rightarrow X$ is a distance preserving map, ie, $\forall x, y \in X$ we have that $d(f(x),f(y)) = d(x,y)$. a) Show that f is injective. b) Show that $\forall x \in X$ and every open ball of radius $\epsilon$ centered at $x$, $B_{\epsilon}(x)$, one of the balls in the sequence: $f(B_{\epsilon}(x)),f(f(B_{\epsilon}(x))), ...$ has non-empty intersection with $B_{\epsilon}(x)$ c) Use part b) or another reasoning to show that f is surjective. I proved part a), but am stuck on part b). I'm thinking of assuming that all the images (composed any amount of times) of the ball all have empty intersection with the original ball, but am having trouble trying to concretize what the image of the ball should be or look like. AI: HINT: Let $f^{(0)}$ be the identity function on $X$, and for $n\in\Bbb N$ let $f^{(n+1)}=f\circ f^{(n)}$. Note that $f^{(1)}=f$. For $n\in\Bbb N$ let $B_n=f^{(n)}[B_\epsilon(x)]$; $\langle B_n:n\in\Bbb N\rangle$ is your sequence of balls in (b). Suppose that there are $m,n\in\Bbb N$ such that $m<n$ and $B_m\cap B_n\ne\varnothing$, and let $y\in B_m\cap B_n$. Show that there is a $z\in B_0$ such that $f^{(m)}(z)=y$. Show further that $z\in B_{n-m}$. Conclude that $B_0\cap B_{n-m}\ne\varnothing$. This shows that if $B_0\cap B_n=\varnothing$ for each $n\ge 1$, then the family $\{B_n:n\in\Bbb N\}$ is pairwise disjoint. Use the compactness of $X$ to show that the family $\{B_n:n\in\Bbb N\}$ cannot be pairwise disjoint.
H: Defining a mapping For a subspace $S$ of $V$, prove that there is a linear map $L:V \rightarrow V$ such that $\ker(L)=S$. I am reviewing the answers posted in this link: For $\mathbb{S}$ subspace of $\mathbb{V}$, prove $L:\mathbb{V}\to\mathbb{V}$ such that Ker$(L)=\mathbb{S}$ How would we represent the linear mapping $L$ mathematically? Rather than saying "$L$ maps the basis vectors of $S$ to 0 and the other $n−k$ vectors to themselves." AI: Actually, just as you said. Let's consider a map $L$ that sends the $k$ basis vectors spanning $S$ to $0$ and each of the remaining $n-k$ basis vectors to itself. Then we extend $L$ linearly in a natural way as follows: Let $u\in V$. Then $u=a_1e_1+ \dots + a_ne_n$ where $e_i$ are the basis vectors and $a_i\in \mathbb{K}$. Then, $$ L(u):= a_1 L(e_1) + \dots +a_nL(e_n)$$
H: Calculate the volume of the solid generated by two regions How can I calculate the volume of the solid generated by the S1 and S2 regions by rotating around the Ox axis and around the axis Oy? S1 : $0 ≤ x ≤ 2$ $0 ≤ y ≤ 2x − x^2$ S2: $0 ≤ x ≤ π$ $0 ≤ y ≤ \sin^2(x)$ AI: $S_1$ $$\int\limits_{x=0}^2 \pi (2 x - x^2)^2\ dx = \frac{16 \pi}{15}$$ Can you do $S_2$?
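For a quick numerical cross-check of the disk-method integrals for rotation about the $Ox$ axis (illustrative only; SciPy is assumed):

```python
from math import pi, sin
from scipy.integrate import quad

V1, _ = quad(lambda x: pi * (2 * x - x * x) ** 2, 0, 2)   # S1 rotated about Ox
V2, _ = quad(lambda x: pi * sin(x) ** 4, 0, pi)           # S2 rotated about Ox
print(V1, 16 * pi / 15)   # these agree
print(V2)                 # the corresponding numerical value for S2
```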
H: If a sequence converges to $c$, then $c$ is the only limit point of that sequence I'm currently on chapter 6.4 of Analysis I by Terence Tao and am stuck on this proposition which was left as an exercise: Let $\left(a_{n}\right)_{n=m}^{\infty}$ be a sequence which converges to a real number c. Then c is a limit point of $\left(a_{n}\right)_{n=m}^{\infty}$ and in fact it is the only limit point of $\left(a_{n}\right)_{n=m}^{\infty}$ I was able to show that c was a limit point easily enough but wasn't able to get the second part of the proposition done. I tried setting up a contradiction for using the definition of convergence and using that to show that the two limit points were equal but I wasn't able to make any progress. AI: If $c'\ne c$, let $\varepsilon=\frac12|c-c'|$. For some $N\in\Bbb N$, $|c-a_n|<\varepsilon$ whenever $n\geqslant N$. But then, if $n\geqslant N$,$$|c'-a_n|\geqslant\varepsilon=\frac12|c-c'|.$$So, there are only finitely many $n$'s such that $|c'-a_n|<\varepsilon$.
H: Determine a rational number $r$ such that $0\lt \sqrt2 - r \lt 10^{-5}$ Determine a rational number $r$ such that $0\lt \sqrt2 - r \lt 10^{-5}$. Any hint, please? AI: Manipulating the inequality gives $$-\sqrt2\lt -r\lt10^{-5}-\sqrt2$$ $$\sqrt2\gt r\gt\sqrt2-10^{-5}$$ So we need to find a number within $10^{-5}$ of $\sqrt2$. Use a sequence which approximates $\sqrt2$ with rational numbers, such as $$\dfrac 1 1, \dfrac 3 2, \dfrac 7 5, \dfrac {17} {12}, \dfrac {41} {29}, \dfrac {99} {70}, \dfrac {239} {169}, \dfrac {577} {408}, \ldots$$ I am going to use $\frac{577}{408}$. It is larger than $\sqrt2$ by around $2\times10^{-6}$, so subtract $10^{-5}$ so it is less than $\sqrt2$ and fits within the boundaries of the inequality. So a solution for $r$ is $$r=\frac{577}{408}-10^{-5}$$ Writing as a fraction $$r=\frac{7212449}{5100000}$$
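A quick check using exact rational arithmetic for $r$ and high-precision decimals for $\sqrt2$ (illustrative only):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 30
r = Fraction(577, 408) - Fraction(1, 10 ** 5)
print(r)                                                   # 7212449/5100000
diff = Decimal(2).sqrt() - Decimal(r.numerator) / Decimal(r.denominator)
print(diff, 0 < diff < Decimal("1e-5"))                    # difference ~ 7.9e-6, so True
```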
H: Intuitive meaning of Diffeomorphism Let $U\subset\mathbb{R}^n$, $V\subset\mathbb{R}^m$; a bijection $f:U\to V$ is a diffeomorphism if $f$ and $f^{-1}$ are differentiable. I would like to know the intuitive meaning of two open sets being diffeomorphic. For example, if two spaces are homeomorphic, these spaces share the same topological properties. And we have a clear intuitive idea of two spaces being homeomorphic, like the classic relationship between a donut and a mug. Is there a similar intuitive idea for diffeomorphism? Edit: The properties that a homeomorphism preserves are, for example: if one of them is compact, then the other is as well; if one of them is connected, then the other is as well; if one of them is Hausdorff, then the other is as well; their homotopy and homology groups will coincide. So, what are the properties preserved by diffeomorphism? I mentioned homeomorphism to indicate what I meant by an intuitive idea. AI: Diffeomorphisms are precisely the isomorphisms in the category of differentiable manifolds. So it might be helpful to think about diffeomorphisms between differentiable manifolds in exactly the same way as you would think about homeomorphisms between topological spaces. Since we consider spaces that are isomorphic to be "essentially the same" or "indistinguishable", this is what we have in mind in terms of differentiable manifolds whenever we talk about them being diffeomorphic.
H: Suppose $\mathbb{F}$ is a field of characteristic $p$. Show that if $a, b \in$ $\mathbb{F}$ and $a^{p}=b^{p}$, then $a=b$ I'm trying to do Exercise 2.6.12 from textbook Groups, Matrices, and Vector Spaces - A Group Theoretic Approach to Linear Algebra by James B. Carrell. Could you please confirm if my attempt is fine or contains logical mistakes? Suppose $\mathbb{F}$ is a field of characteristic $p .$ Show that if $a, b \in$ $\mathbb{F}$ and $a^{p}=b^{p}$, then $a=b$. My attempt: We have $(a + b)^p = a^p + b^p$. Substitute $c$ for $a+b$, we get $c^p = a^p + (c-a)^p$ and thus $c^p-a^p = (c-a)^p$. Apply this identity to $a,b$, we get $0 = (b-a)^p$ and thus $b-a = 0$. Finally, we have $a=b$. AI: Your proof is fine. Here is a slightly simpler one: $$ (a-b)^p = a^p-b^p = 0 \implies a-b=0 \implies a=b $$ This is clear when $p$ is odd but also works when $p=2$ because $-x=x$.
H: Is there a way to approximate the terms of $\frac{\left(2n\right)!}{\left(2^nn!\right)^2}$ for successive $n$ as $n$ becomes large? I have encountered the ratio of the product of the first n odd numbers to the product of the first n even numbers and want to chart its ultimate convergence to zero. If a white noise signal is passed through a cascade of $n$ linear filters, then this ratio is the factor by which the variance of the signal is reduced by the combined action of those $n$ filters. I am, therefore, interested in the rate at which the expression converges such that I can determine the effectiveness of adding more filters. Of course, doing so requires very large numbers for the numerator and denominator that exceed computing capacity. Is there a way to approximate the terms of $$\frac{\left(2n\right)!}{\left(2^nn!\right)^2}$$ for successive n as n becomes large? AI: Stirling's approximation gives the following asymptotic for the central binomial coefficient: $$ {2n \choose n} \sim \frac{4^n}{\sqrt{\pi n}}\text{ as }n\rightarrow\infty $$ Therefore, $$ \frac{\left(2n\right)!}{\left(2^nn!\right)^2} = \frac{1}{4^n}{2n \choose n} \sim \frac{1}{\sqrt{\pi n}} $$
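Since the question mentions that the factorials quickly exceed computing capacity, here is an illustrative way (plain Python, not part of the original answer) to track the ratio without ever forming a factorial, alongside the asymptotic above:

```python
# r_n = (2n)!/(2^n n!)^2 satisfies r_n = r_{n-1} * (2n-1)/(2n), i.e. it is the running
# product of odd/even ratios described in the question.
from math import pi, sqrt

r = 1.0
for n in range(1, 10 ** 6 + 1):
    r *= (2 * n - 1) / (2 * n)
    if n in (10, 100, 1000, 10 ** 4, 10 ** 5, 10 ** 6):
        print(n, r, 1 / sqrt(pi * n))   # the two columns agree better and better
```

In practice the recurrence gives the successive attenuation factors directly, one filter stage at a time, while $1/\sqrt{\pi n}$ gives the large-$n$ behaviour.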
H: Help in understanding integrating a function with an absolute value My math is rusty, and although I initially thought I understood the solution, upon further examination I think I don't: That's the original function: $$ \Psi(x,t) = A \mathrm{e}^{-\lambda|x|} \mathrm{e}^{-\mathrm{i} \omega t} $$ \begin{align*} \langle x^2 \rangle &= 2|A|^2 \int_0^\infty x^2 \mathrm{e}^{-2\lambda x}\,\mathrm{d}x \\ &= 2 \lambda \left[ \frac{2}{(2\lambda)^3} \right] \\ &= \frac{1}{2\lambda^2} \text{.} \end{align*} What I don't understand is why there's a 2 in front of $|A|^2$, why the limits of integration changed from $(-\infty,\infty)$ to $(0,\infty)$, and why $x$ lost its absolute value. At first I thought that he's using the symmetry of the function and calculating the integral from 0 to infinity, where |x| = x, then multiplying it by two. But after checking how to integrate absolute value functions I'm not sure my reasoning is correct. Sorry if the question is messed up, but I cannot embed images directly, and have to use links. AI: Your reasoning seems correct to me. There's some dot/inner product of functions defined there (in your particular context). https://mathworld.wolfram.com/InnerProduct.html They just use this particular definition of the inner product, and calculate $\langle \Psi, x^2\Psi\rangle$, i.e. $\langle x^2\rangle$. (In the second line the normalization $|A|^2=\lambda$ has also been substituted, which is where the factor $2\lambda$ comes from.) The rest of your reasoning is OK, indeed that's why all these modifications took place.
H: How to prove $|\sin x| \leq 1$, $|\cos x| \leq 1$ in a simplistic way without resorting to pictures? So my question is looking at if there is a way to show in a less advanced way than this question: Is it possible to prove $|\sin(x)| \leq 1$, $|\cos(x)| \leq 1$ and $|\sin(x)| \leq |x|$ algebraically? if it is possible to show these inequalities? I ask because I'm working through Spivak's Calculus and it dawned on me that up to this point I haven't "shown" that $|\sin (x)| \leq 1$, $|\cos(x)| \leq 1$ are actually true. We've taken them as true, but I would've thought that since they appear to be "simple" expressions that we would've shown these properties at this point. This came up because I was trying to prove that $\cos(x)$ is continuous everywhere but the only idea I have of doing this is by using the mean value theorem which hasn't been addressed yet (I'm on Chapter 8: Least Upper Bound and Uniform Continuity). AI: By the Pythagorean theorem and by definition, we have for all real $ x$ $$\cos^2(x)+\sin^2(x)=1$$ $$\implies \cos^2(x)=1-\sin^2(x)\le 1$$ $$\implies -1\le \cos(x) \le 1$$ $$\implies |\cos(x)|\le 1.$$
H: Prove that if $p_1,...,p_k$ are distinct prime numbers, then $\sqrt{p_1p_2...p_k}$ is irrational Prove that if $p_1,...,p_k$ are distinct prime numbers, then $\sqrt{p_1p_2...p_k}$ is irrational. I do not usually prove theorems, so any hint is appreciated. I have taken a look at this and tried to repeat that argument over and over, but I messed up. Perhaps there is an easier to do it. Thanks in advance. Edit: For the case $k=1$, suppose that $\sqrt{p}=\frac{m}{n}, n\neq0$ and $gcd(m,n)=1$. It follows that $m^2=pn^2$, so $p\mid m$. Also, $p\mid n$. Therefore, $m$ and $n$ are not relatively prime. Contradiction. AI: Assume ${p_1,...,p_k}$ are distinct primes, and assume (aiming for a contradiction) that ${\sqrt{p_1p_2...p_k}=\frac{a}{b}}$ for coprime positive integers ${a,b}$ (alternatively, you can write ${(a,b)=1}$). As before, squaring both sides and rearranging for ${a^2}$ yields $${\Rightarrow a^2 = p_1...p_kb^2}$$ In other words,${a^2}$ contains ${p_1,...,p_k}$ as factors, and thus ${a}$ must contain ${p_1,...,p_k}$ as factors (since ${a^2}$ contains these primes as factors, this has to be the case because they are prime. This wouldn't be true for some random composite number). Anyways, we rewrite ${a=p_1...p_ka^*}$. Plugging back in gives $${\Rightarrow \frac{p_1...p_ka^*}{b}=\sqrt{p_1...p_k}}$$ And this implies $${\Rightarrow \frac{p_1^2...p_k^2\left(a^*\right)^2}{b^2}=p_1...p_k}$$ You can rearrange this and get $${b^2 = p_1...p_k\left(a^*\right)^2}$$ And we have got our desired contradiction. By the same argument as before, this would tell us ${b^2}$ has factors ${p_1...p_k}$, and because again these are primes that means ${b}$ contains factors ${p_1...p_k}$. This is a contradiction since we assumed that ${(a,b)=1}$ (that ${a,b}$ were coprime, hence could not share any common factors), and yet from assuming the rationality of our expression we have shown that ${a,b}$ both contained ${p_1,...,p_k}$ as factors! QED. Or Quantum Electrodynamics if you are a Physicist :P
H: $\ A+A=\{3i,3+2i,3+4i\}$. find A Can someone help me and answer this exercise of complex analysis? I don't know how to try to resolve this because I have never seen an exercise like this. AI: Hint Let $z = a+bi \in A$. Then $z+z \in A+A=\{3i,3+2i,3+4i\}$. This gives you 3 choices for $z$. Since $A$ must have at least two elements, this gives you 4 possible choices for $A$, and you simply check which ones work. You can reduce the case by case analysis by showing that $A$ has exactly two elements, and showing that these correspond to $z+z$ having the smallest/largest imaginary part.
H: Let $K,L$ be two rings and $g:K\to L$ be a homomorphism. Now $a\in K$ is zero divisor if and only if $g(a)$ is a zero divisor. Let $K,L$ be two rings and $g:K\to L$ be a homomorphism. Now $a\in K$ is zero divisor if and only if $g(a)$ is a zero divisor. For the right direction: Let $a$ be a zero divisor $\exists x\in K$ s.t. $ax=0$ and since $g$ is a homomorphism $g(a)g(x)=g(ax)=g(0)=0$ so $g(a)$ is a zero divisor. However in the converse I am having trouble, if $g(a)$ is a zero divisor $\exists y\in L$ s.t. $g(a)y=0$ but since $g$ is not necessarily onto, I don't know $y\in g(K)$ or not. Or if I try to prove not $p\Rightarrow q$ but $q'\Rightarrow p'$ since I don't know $ax$ is not in kernel $g(ax)$ might be zero so I cannot prove it. AI: Even if $g$ is onto , the converse is false: the canonical map $$\mathbf Z\longrightarrow \mathbf Z/6\mathbf Z,\quad a\longmapsto a+6\mathbf Z$$ maps $2\times 3$ to $2\bmod 6\times 3\bmod 6=0\bmod 6$, yet $\mathbf Z$ is an integral domain.
H: System of one quadratic and two linear equations over the positive integers Find all solutions $(x, y, z)$ of the system of equations \begin{align*} x y + z &= 2019 ,\\ x − y + 2 z &= 1 , \\ x + y − 7 z &= 2 \end{align*} in positive integers. I am trying to solve this system of equations. The only problem I have is that I've not learned any method for solving $3 \times 3$ systems of equations. I would appreciate any help! AI: hint Equation $ 2 $ plus( +)equation $ 3$ gives $$2x-5z=3$$ Equation $ 3 $ minus (- )equation $ 2$ gives $$2y-9z=1$$ so $$\boxed{x=\frac{5z+3}{2}\;\;;\;\;y=\frac{9z+1}{2}}$$ which we substitute into equation $ 1 $, to get $$(5z+3)(9z+1)+4z=8076$$ or $$45z^2+36z-8073=0$$ $$\iff 5z^2+4z-897=0$$ thus $$z=\frac{-2\pm 67}{5}$$ One solution is $$z=\frac{65}{5}=\color{red}{13}\;\;x=\frac{5z+3}{2}=\color{red}{34}$$ $$\;y=\frac{9z+1}{2}=\color{red}{59}$$ the second is $$z=\color{blue}{\frac{-69}{5}}\;\;, x=...,y=...$$ which is not an integer, so the only solution in positive integers is $(x,y,z)=(34,59,13)$.
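A quick symbolic cross-check (illustrative only; SymPy is assumed):

```python
# Solve the nonlinear 3x3 system exactly and inspect which solution is in positive integers.
from sympy import symbols, solve

x, y, z = symbols('x y z')
sols = solve([x*y + z - 2019, x - y + 2*z - 1, x + y - 7*z - 2], [x, y, z], dict=True)
print(sols)   # two solutions; the positive-integer one is x=34, y=59, z=13
```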
H: Matrix equation as a Tensor I am new to Tensor algebra and still getting used to many of the terms. I have the below matrix equation and I wish to write it in Ricci calculus notation but am struggling: $$(A \otimes_k B)(C \otimes_k D)$$ Where $\otimes_k$ is the Kronecker product. I understand that for a matrix, say $A$, you can express it in terms of it's (scalar) components, $A_j^i$ and a product of a basis vector, $\vec{e}_i$, and a basis covector $\epsilon^j$ as: $$A_j^i(\vec{e}_i \otimes \epsilon^j)$$ Where $\otimes$ is the Tensor product. And I believe that the Kronecker product of two matrices, $A$ and $B$, would look something like this: $$A_j^iB^m_n(\vec{e}_i \otimes \epsilon^j \otimes \vec{e}_n \otimes \epsilon^m)$$ However, I am unsure what happens with the "normal" matrix product in my equation above. I know that is $A$, $B$, $C$, and $D$ have the appropriate dimensions I can apply the mixed product property of the Kronecker product to get: $$AC \otimes_k BD$$ Which I imagine would be something like: $$A_k^iC^k_jB^n_qD^q_m(\vec{e}_i \otimes \epsilon^j \otimes \vec{e}_n \otimes \epsilon^m)$$ However, I want to write the expression in Tensor notation without the assumption that $A$ and $C$ share a dimension, and likewise with $B$ and $D$. How can I do this? Is there a convention? Where can I learn more about converting expressions such as this one? AI: One correct way to describe the Kronecker product is $$ A \otimes_k B = A^i_j B^m_n (e_i \otimes_k e_m)\otimes (e^j \otimes_k e^n). $$ With that, you should be able to prove the mixed product property by simplifying $$ (A \otimes_k B)(C \otimes_k D) = A^i_j B^m_n C^j_k D^n_p [(e_i \otimes_k e_m)\otimes (e^j \otimes_k e^n)] [(e_j \otimes_k e_n)\otimes (e^k \otimes_k e^p)], $$ using the fact that $$ (e^i \otimes_k e^p)(e_j \otimes_k e_q) = \delta_{ij}\delta_{pq}. $$
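A numerical verification of the mixed-product property with rectangular factors (illustrative only; NumPy is assumed, and the shapes are chosen so that both $AC$ and $BD$ exist without $A,C$ or $B,D$ sharing all dimensions):

```python
# Check (A kron B)(C kron D) = (AC) kron (BD) for non-square matrices.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))   # A: 2x3, C: 3x4  -> AC: 2x4
C = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 2))   # B: 5x2, D: 2x3  -> BD: 5x3
D = rng.standard_normal((2, 3))

lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))      # True
```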
H: Solving for exact solution: $0.5 = 1-e^{-x}-xe^{-x}$ Is it possible to solve the following equation for an exact solution, and if so how? $$0.5=1-e^{-x}-xe^{-x}$$ My textbook (Introduction to Mathematical Probability) jumps from this equation directly to the solution $1.678$. I was unable to solve this myself, and computer algebra software seems to approximate this using Newton's method. AI: Not outside of use of the Lambert W function. Begin by subtracting $1$ from both sides to get us to $$-\frac 1 2 = -e^{-x} - xe^{-x}$$ On the right-hand side, factor out $e^{-x}$, and then divide both sides by $e$. We then obtain $$-\frac{1}{2e} = (-x-1)e^{-x-1}$$ Apply the Lambert W function to both sides; it is the inverse function to $f(x)=xe^x$. Thus, $W(xe^x)=x$. The right-hand side would be $f(-x-1)$, and thus return the parenthetical of $-x-1$. This gives us $$-x-1 = W \left( - \frac 1 {2e} \right)$$ Now solve for $x$: $$x = -W \left( - \frac 1 {2e} \right) - 1$$ (Here the choice of branch matters: on the interval $(-\tfrac1e,0)$ the $W$ function has two real branches, $W_0$ and $W_{-1}$. The principal branch $W_0$ gives $x\approx-0.768$, while the textbook's value $x\approx1.678$ comes from the other real branch, $x = -W_{-1}\!\left(-\tfrac1{2e}\right)-1$.) Granted, the Lambert W function is a special nonelementary function, so this can understandably not count as a closed form for you. But I don't believe any other way of getting something even close would be possible (and Wolfram certainly seems to have no ideas). If nothing else, it being a well-known function would make approximations somewhat easy to derive based on previous discoveries people have made. For real-valued solutions, Wolfram offers the approximations of $$x≈-0.768039 \;\;\;\;\; x \approx 1.67835$$ Based on the behavior of the function you're looking at, these will be the only two real solutions.
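This is easy to confirm numerically (illustrative only; SciPy's `lambertw` is assumed):

```python
import numpy as np
from scipy.special import lambertw

for k in (0, -1):                                    # the two real branches W_0 and W_{-1}
    x = (-lambertw(-1 / (2 * np.e), k) - 1).real     # imaginary part is zero up to round-off
    print(k, x, 1 - np.exp(-x) - x * np.exp(-x))     # x ~ -0.768 and ~ 1.678; residual ~ 0.5
```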
H: $p(x).y''+p'(x).y'+\frac{1}{p(x)}.y=0$ Is there a general solution I can give to the differential equation $\; p(x).y''+p'(x).y'+\frac{1}{p(x)}.y=0\;$ (the function $p(x)$ wasn't given) ? AI: $$\; p(x).y''+p'(x).y'+\frac{1}{p(x)}.y=0\;$$ You can easily reduce the order: $$(p(x).y')'=-\frac{1}{p(x)}.y$$ $$p(x)y'(p(x).y')'=-y'y$$ Integrate. $$p^2(x)y'^2=- {y^2}+C$$ This differential equation is separable.
H: How can you play a game that requires an 15 sided die with a 6 sided die How can you play a game that requires a 15 sided die with a 6 sided die? AI: Roll the 6-sided die once, discarding any 6s until you get a roll that's not a 6. Then roll the 6-sided die again, and halve it (divide by 2, rounding up). The formula $$ 3\times(\text{first roll}) - (\text{halved second roll}) + 1 $$ gives a result that is equally distributed from 1 to 15.
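An exhaustive check of the 30 equally likely outcome pairs (illustrative, plain Python) confirms that the formula hits each value $1,\dots,15$ the same number of times:

```python
# After rejecting 6s on the first roll there are 5 x 6 equally likely pairs;
# the formula 3*first - ceil(second/2) + 1 maps them onto 1..15, each exactly twice.
from collections import Counter

counts = Counter()
for first in range(1, 6):            # first roll, with 6s rejected
    for second in range(1, 7):       # second roll
        half = (second + 1) // 2     # divide by 2, rounding up
        counts[3 * first - half + 1] += 1

print(sorted(counts.items()))        # each of 1..15 appears exactly 2 times
```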
H: Bijection between subgroups $X$ satisfying $U\leq X\leq G$ and $U$-invariant subgroups $Y$ satisfying $U\cap N\leq Y\leq N$. Hi: this problem is from Kurzweil and Stellmacher, The Theory of Finite Groups, An Introduction, page 20: The homomorphism theorem gives a bijection between subgroups of the image and subgroups of the domain containing the kernel. Don't know if this can be useful. On the other hand, by the 2nd isomorphism theorem, $G/N \cong U/U\cap N$. Again, I don't know if this is of any help. I need a function from $G$ to $N$ and then see if the image of $X$ is a $U$-invariant subgroup satisfying the second inequalities. But how do I find such a function? AI: If you take any subgroup $X$ of $G$ containing $U$ then it is equal to the product $U(X\cap N)$. That subgroup $X\cap N$ is your $Y$. It is clearly $U$-invariant since $U\subset X$ and $N$ is normal. Also clearly different $X$ correspond to different $Y$ and each $U$-invariant $Y$ between $U\cap N$ and $N$ corresponds to $X=UY$. To summarize: The map $X\to X\cap N$ is a map from the set of all subgroups $X\ge U$ to the set of all subgroups between $U\cap N$ and $N$ normalized by $U$. The inverse map is $Y\to YU$. Indeed, if we denote $X=YU$, then $X$ is always $\ge U$, and $X\cap N=Y(U\cap N)$ by the Dedekind (modular) law $=Y$ because $Y\ge U\cap N$. QED
H: Complex polynomial whose roots contain the fifth roots of another complex number Let $\alpha,\beta$ be two complex numbers with $\beta\ne 0$ and $f(z)$ a polynomial function on $\mathbb C$ such that $f(z)=\alpha$ whenever $z^5=\beta$. What can you say about the degree of the polynomial $f(z)$? It is very clear that the degree of the polynomial $f(z)$ must be at least five. Can we say anything more specific about the degree? Please help. AI: It is clear that $f(z) = (z^5 - \beta) + \alpha$ works. Furthermore, this is the simplest monic polynomial such that $f(z) = \alpha$ if and only if $z^5 = \beta$. Others are $(z^5 - \beta)^n + \alpha$ for $n \ge 2$. More generally, since the five fifth roots of $\beta$ are distinct, $f(z)=\alpha$ at all of them exactly when $z^5-\beta$ divides $f(z)-\alpha$, so every such $f$ has the form $f(z)=(z^5-\beta)\,q(z)+\alpha$ for some nonzero polynomial $q$; for instance $(z^5 - \beta)(z - \gamma) + \alpha$ has degree $6$. So the answer is "any degree of at least $5$", and nothing more specific can be said.
H: A tangent property of an incircle In $ \triangle{ABC}$ with $ AB = 12$, $ BC = 13$, and $ AC = 15$, let $ M$ be a point on $ \overline{AC}$ such that the incircles of $ \triangle{ABM}$ and $ \triangle{BCM}$ have equal radii. Let $ p$ and $ q$ be positive relatively prime integers such that $ \tfrac{AM}{CM} = \tfrac{p}{q}$. Find $ p + q$. It seems that a rather obvious solution in terms of insight is to find a ratio of areas, but I was reading a more synthetic solution that brought up a confusing claim. First, let the incenters be $I_1$ and $I_2$ for $\triangle ABM$ and $\triangle CBM$ respectively, and point $P$ is the radius from $I_1$ to $AM$. It is then claimed that $MP=(MA+MB-AB)/2$. I've tried manipulating some relations, but I'm stuck. How can we prove this? AI: Let the incircle $\Gamma$ of $\Delta ABM$ touch the sides $AB$ and $BM$ at $Q$ and $R$ respectively. It already touches $AM$ at $P$ as the problem mentions. Then $AP=AQ=x\ (\text{say})$, since they are tangents from the point $A$ to $\Gamma$. Similarly, we have $BQ=BR=y\ (\text{say})$ and $MP=MR=z \ (\text{say})$. Then, if you have already marked the points as outlined, $$x+y=AB,\ y+z=BM, \ z+x=MA \\ \implies 2(x+y+z) = AB+BM+MA \\ \implies MP = z= x+y+z-(x+y) = \dfrac{AB+BM+MA}2-AB = \dfrac{MA+MB-AB}2$$ This is a well-known formula for the distance between a vertex of a triangle and its intouch point (point of contact of the incircle) on a side containing that vertex.
H: Question about $\lim _{q \rightarrow \infty}\|f\|_{q}=\|f\|_{\infty}$ Let $(X,B,\mu)$ be a complete measure space. Show that $$\lim _{q \rightarrow \infty}\|f\|_{q}=\|f\|_{\infty}, \quad \forall f \in \bigcup_{p} \bigcap_{p \leqslant q<\infty} L^{q}$$ So the claim is that $\lim _{q \rightarrow \infty}\|f\|_{q}$ and $\|f\|_{\infty}$ agree on the space $ L^{\infty} \cap\left(\bigcup_{p} \bigcap_{p \leqslant q} L^{q}\right)$. Case 1: $m(X)<\infty $. It's easy to prove that. Case 2: $m(X)=\infty $. I have no idea about it, and I started to doubt the correctness of this conclusion. Can somebody give me a hint for this problem, or just give an example to prove that this is a wrong conclusion when $m(X)=\infty $? Thanks in advance. AI: The following argument does not rely on any finiteness assumptions on $\mu$. The only general assumption is that $f\in L_r$ for some $r>0$. If $\|f\|_\infty=0$ the conclusion is trivial. There are two other cases to consider: $0<\|f\|_\infty<\infty$ and $\|f\|_\infty=\infty$. Assume first that $0<\|f\|_\infty<\infty$ and $f\in L_r$ for some $r>0$. Then $|f|/\|f\|_\infty\leq1$ a.e. For $p>r$ $$\frac{|f|^p}{\|f\|^p_\infty}\leq \frac{|f|^r}{\|f\|^r_\infty}\in L_1$$ hence $p\in E:=\{s: \|f\|_s<\infty\}$. Integrating both sides yields $$\frac{\|f\|_p}{\|f\|_\infty}\leq\Big(\frac{\|f\|_r}{\|f\|_\infty}\Big)^{r/p}\xrightarrow{p\rightarrow\infty}1$$ That is, $$\limsup_p\|f\|_p\leq \|f\|_\infty.$$ By the Markov-Chebyshev inequality, for any $0<\alpha<\|f\|_\infty$ $$0<\alpha\big(\mu(|f|>\alpha)\big)^{1/p}\leq\|f\|_p$$ Hence $\alpha\leq\liminf_p\|f\|_p$ and so, $\|f\|_\infty\leq\liminf_p\|f\|_p$. The conclusion follows. Now assume that $\|f\|_\infty=\infty$ and $f\in L_r$ for some $r>0$. Then $0<\mu(|f|>n)\leq\frac{1}{n^r}\|f\|^r_r<\infty$ and so, $$0<n\big(\mu(|f|>n)\big)^{1/p}\leq\|f\|_p\quad\text{for}\quad p\geq r$$ This implies $n\leq\liminf_p\|f\|_p$ for any $n\in\mathbb{N}$. The conclusion follows. The assumption $f\in L_r$ for some $r>0$ may not be dropped in general. Consider the measure space $(\mathbb{R},\mathscr{B}(\mathbb{R}),m)$ where $m$ is Lebesgue measure. The function $f(x)=\mathbb{1}_{(0,\infty)}(x)$ is measurable, and $\|f\|_p=\infty$ for all $p>0$, while $\|f\|_\infty=1$.
H: The condition and proof about the integral test for convergence The proof about the integral test: Suppose $f(x)$ is nonnegative monotone decreasing over $[1,\infty)$, then the positive series $\sum_{n=1}^{\infty}f(n)$ is convergent if and only if $\lim_{A\rightarrow +\infty}\int_{1}^{A}f(x)\,dx$ exists. Proof: Since $f(x)$ is monotone decreasing, we can get $f(n+1)<\int_{n}^{n+1}f(x)\,dx< f(n)$; sum them up and get $\sum_{k=1}^{n+1}f(k)-f(1)<\int_{1}^{n+1}f(x)\,dx<\sum_{k=1}^{n}f(k)$. When the series is convergent, the integral is bounded; since $f(x)$ is nonnegative, the integral is monotone increasing, so $\lim_{A\rightarrow +\infty}\int_{1}^{A}f(x)\,dx$ exists. When $\lim_{A\rightarrow +\infty}\int_{1}^{A}f(x)\,dx$ exists, the positive series is bounded, so it is convergent. When reading the integral test for positive series, I am confused about the condition "Monotone decreasing": this condition is used to get $f(n+1)<\int_{n}^{n+1}f(x)\,dx< f(n)$ and following that $\sum_{k=1}^{n+1}f(k)-f(1)<\int_{1}^{n+1}f(x)\,dx<\sum_{k=1}^{n}f(k)$, but I am confused because when the condition is monotone increasing we can just change the direction of the sign of the inequality like $f(n+1)>\int_{n}^{n+1}f(x)\,dx> f(n)$, so why is the condition "monotone decreasing" necessary? "Continuous": the condition says that $f(x)$ is continuous. But in the proof it seems that the condition actually used is integrability. Is continuity not necessary? I think there is something wrong with my understanding. Could you find and point it out? Thank you! AI: With $S_n = \sum_{k=1}^n f(k)$ we get the inequalities $$\tag{*}S_n - f(1) < \int_1^n f(x) \, dx < S_n - f(n)$$ The desired conclusion for the integral test is that the series and improper integral converge or diverge together. If $S_n$ converges, then it is necessary that $f(n)= S_n - S_{n-1} \to 0$ as $n \to \infty$. The hypothesis that $f$ is nonincreasing is required to proceed. The right-hand inequality of (*) shows that $\int_1^n f(x) \, dx$ is bounded and, as it is monotone increasing, it must converge. The reverse implication assumes that the improper integral converges. Again, this cannot be the case if $f$ is nondecreasing. Do you see why? The left-hand inequality of (*) shows that $S_n$ must be bounded and, as it is monotone increasing, it must converge.
H: Find the equation of the plane in $\mathbb{ℝ}^{3}$ perpendicular to the subspace $S = \{(\alpha, 3\alpha, -4\alpha):\alpha\in\mathbb{R}\}$ I'm totally lost on how to do this. I know if I was given a normal $Ax + By + Cz = D$ plane, I would just have to find the normal vector to a point to find the perpendicular plane, but how do I find the perpendicular plane to a subspace?? Thanks in advance for any advice/help! AI: The subspace is $$S=\{(α,3α,−4α)\}$$ which means that it consists of all constant multiples of $(1, 3, -4)$: the subspace is the span of the single vector $(1, 3, -4)$, i.e. a line passing through the origin with direction vector $(1, 3, -4)$. The rest goes the same way as you said: $$x+3y-4z=k$$ is a plane with normal vector $(1,3,-4)$, hence perpendicular to $S$ (for any real number $k$).
H: Showing that a complex function is constant The question is: Let $f:\Omega\rightarrow\mathbb{C}$ be analytic and $\Omega$ be a domain in $\mathbb{C}$. Prove that if there exists $c\in\mathbb{C}$ such that $f(z)=\overline{cf(z)}$ for every $z\in\Omega$, then $f$ is a constant function. my thoughts: Since $f$ is analytic, $f$ must satisfy the Cauchy Riemann equations. So, if we let $f(z)=u+iv$ then $f(z)=\overline{cf(z)}=cu-civ$. Then, $\frac{du}{dx}=\frac{dv}{dy}$ and $\frac{du}{dy}=-\frac{dv}{dx}$. But, we must also have that $\frac{du}{dx}=-\frac{dv}{dy}$ and $\frac{cdu}{dy}=\frac{cdv}{dx}$. So, I'm assuming that we are just playing a game with setting different parts of the CR equations equal to other parts based on the $f(z)=\overline{cf(z)}$, but I just can't seem to get it to come out clean. Any help would be very much appreciated :) AI: Suppose $f$ is nonzero; then we see that $|c| = 1$, so we can let $c=e^{i \theta}$. Let $a=e^{i {\theta \over 2}}$; then $a f = \overline{af}$ and so $g= af$ is analytic on $\Omega$ and real valued, hence by the open mapping theorem we see that it is constant.
H: Show that a sequence $a_n$ is a solution of the given recurrence relation Show that the sequence $a_n=3^{n+4}$ is a solution of the recurrence relation $a_n=4a_{n-1}-3a_{n-2}$. I'm stuck on this question as I'm having trouble figuring it out when $a_n=3^{n+4}$. After substituting $3^{n+3}$ for both n's in $4a_{n-1}$ and $3a_{n-2}$, I have no idea where to go from there. AI: Consider the right hand side of the relation: $4a_{n-1}-3a_{n-2}$. Since $a_n=3^{n+4}=81\cdot 3^n$, we have $a_{n-1}=81\cdot 3^{n-1},a_{n-2}=81\cdot 3^{n-2}$. Substitute that in the above term: $$4\cdot (81\cdot 3^{n-1})-3\cdot (81\cdot 3^{n-2})$$ Taking out $81$ common and writing $4$ as $3+1$, the above transforms to $$81(3\cdot 3^{n-1}+3^{n-1}-3^{n-1})$$ Now this is nothing but $81\cdot 3^n$ which in turn is equal to $a_n$. Ta-da!
H: Counting the number of rooted $m$-ary trees. I know that the Catalan number $C_n$ is the number of full (i.e., 0 or 2 children per node) binary trees with $n+1$ leaves. I am interested in the generalization. Note that I do not care about any labeling, ordering, or number of leaves. I just want the tree to be rooted and have total number of nodes equal $n$, that's all. I am also not referring to a full $m$-ary tree, i.e., in my case nodes can have any number of children $\in\{0,\dots,m\}$ (instead of just 0 or $m$ in the full case). To summarize, my trees are rooted, unordered, unlabeled, $m$-ary, incomplete, not full, and have $n$ nodes in total. With that being said, I would also like to point out the Fuss-Catalan numbers. From the Wiki page of "m-ary tree", it states that the total number of possible m-ary trees with n nodes is \begin{align} C_n=\frac{1}{(m-1)n+1}\cdot{mn\choose n}. \end{align} Does this hold for non-full $m$-ary trees? If so, why? Can I see a derivation of this result with relation to the tree. I've checked the book "Concrete Mathematics 2nd edition" (p. 361) but their derivation wasn't with regards to the trees but instead with $m$-Raney sequences (perhaps a strong link exists with trees). Thanks. AI: If you have Concrete Mathematics, you know that the numbers $$C_n^{(m)}=\frac1{(m-1)n+1}\binom{mn}n$$ satisfy the recurrence $$C_{n}^{(m)}=\sum_{\substack{0\le n_1,n_2,\ldots,n_m\\n_1+n_2+\ldots+n_m=n-1}}C_{n_1}^{(m)}C_{n_2}^{(m)}\ldots C_{n_m}^{(m)}+[n=0]\;;$$ see the bottom of page $361$. The same induction argument that shows that the Catalan number $C_n=C_n^{(2)}$ is the number of full binary trees with $n$ internal nodes shows that $C_n^{(m)}$ is the number of full $m$-ary trees with $n$ internal nodes. Any $m$-ary tree with $n$ nodes, in the usual sense that each node has $m$ distinguishable child positions (any of which may be empty), can be extended to a full $m$-ary tree with $n$ internal nodes by attaching a leaf in each empty child position. Conversely, any full $m$-ary tree with $n$ internal nodes can be reduced to an $m$-ary tree with $n$ nodes by deleting all of its leaves (each surviving child keeps its position). These two operations are inverses, so each is a bijection between full $m$-ary trees with $n$ internal nodes and $m$-ary trees with $n$ nodes. Thus, $C_n^{(m)}$ is also the number of $m$-ary trees with $n$ nodes. Note, however, that this count is for ordered trees in the positional sense: each node has $m$ distinguished child slots, and moving a subtree to a different slot changes the tree. The situation for unordered trees is much messier even without restrictions on the degrees of the nodes: see OEIS $A000081$, for instance.
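A small cross-check of the count by brute-force dynamic programming (illustrative only, plain Python): `count_mary_trees` counts trees in which every node has `m` ordered child slots, and the totals match the Fuss-Catalan formula.

```python
from math import comb

def fuss_catalan(m, n):
    return comb(m * n, n) // ((m - 1) * n + 1)

def count_mary_trees(m, N):
    # T[n] = number of m-ary trees (m ordered child slots per node) with n nodes.
    T = [0] * (N + 1)
    T[0] = 1                                   # the empty tree
    for n in range(1, N + 1):
        ways = [0] * n
        ways[0] = 1                            # distribute n-1 nodes over m ordered subtrees
        for _ in range(m):                     # convolve m times with T
            new = [0] * n
            for i, w in enumerate(ways):
                if w:
                    for j in range(n - i):
                        new[i + j] += w * T[j]
            ways = new
        T[n] = ways[n - 1]
    return T

for m in (2, 3, 4):
    T = count_mary_trees(m, 8)
    print(m, all(T[n] == fuss_catalan(m, n) for n in range(9)))   # True
```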
H: Multiplicity of each eigenvalue in a minimal polynomial of a matrix It is well known that for an $n \times n$ matrix $A$, the characteristic polynomial $p(x)$ satisfies $p(x)=\prod_{\lambda \,:\, \text{eigenvalue}} (x-\lambda)^{a(\lambda)}$, where $a(\lambda)$ is the algebraic multiplicity of $\lambda$. The minimal polynomial of $A$, $\mu _A (x)$, can also be written in the form $\mu_A (x)=\prod (x-\lambda)^{b(\lambda)}$. The question is: if the geometric multiplicity of $\lambda$ is $g( \lambda )$, is $b(\lambda ) \le a(\lambda )-g(\lambda )+1$ ? I think I saw this inequality somewhere, but I'm failing to find how to prove it, and I'm not even sure if it works. Please tell me whether it works, and a proof if it does. AI: Yes, this is true. Suppose that the Jordan normal form of $A$ (over $\mathbb{C}$, say, or any field containing all the eigenvalues) has Jordan blocks with eigenvalue $\lambda$ of sizes $k_1,\dots,k_m$. Then $$a(\lambda)=\sum_{i=1}^mk_i,$$ $$g(\lambda)=m,$$ and $$b(\lambda)=\max(k_1,\dots,k_m).$$ Thus $$a(\lambda)-g(\lambda)=\sum_{i=1}^m(k_i-1)\geq b(\lambda)-1$$ since $b(\lambda)$ is one of the $k_i$. (This is assuming $\lambda$ actually is an eigenvalue at all; if not then $a(\lambda)=g(\lambda)=b(\lambda)=0$ and the inequality still holds.)
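For a concrete sanity check, here is a small sympy sketch (the $5\times 5$ example matrix is mine: eigenvalue $2$ with Jordan blocks of sizes $2,2,1$, so one expects $a=5$, $g=3$, $b=2$); all three multiplicities are read off from ranks of powers of $A-\lambda I$:

```python
import sympy as sp

A = sp.Matrix([
    [2, 1, 0, 0, 0],
    [0, 2, 0, 0, 0],
    [0, 0, 2, 1, 0],
    [0, 0, 0, 2, 0],
    [0, 0, 0, 0, 2],
])
lam, n = 2, A.shape[0]
M = A - lam * sp.eye(n)

g = n - M.rank()                       # geometric multiplicity: dim ker(A - lam*I)
a = n - (M ** n).rank()                # algebraic multiplicity: dim of the generalized eigenspace
b = next(k for k in range(1, n + 1)    # exponent of (x - lam) in the minimal polynomial:
         if (M ** k).rank() == (M ** (k + 1)).rank())  # first k where the ranks stabilise

print(a, g, b, b <= a - g + 1)         # expected output: 5 3 2 True
```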
H: Define $k:R[x,y]\to R[x,y]$ by $f(x,y)\mapsto\frac{\partial^2f}{\partial x\partial y}$. What is the kernel of $k$? Let $R[x,y]$ be the set of all real polynomials in the two variables $x$ and $y$. Define a homomorphism $k:R[x,y] \to R[x,y]$ by $f(x,y) \mapsto \frac{\partial^2 f}{\partial x \partial y}$. Find $Ker \ k$. I know that $Ker (k) = \lbrace f \in R[x,y] \mid k(f) = e \rbrace$. But, what's the identity element of the codomain? Any idea or hints? Thanks in advance. AI: The operator $k$ is not a ring homomorphism (in general $k(fg) \neq k(f)k(g)$), but it is a homomorphism of additive groups — indeed an $\mathbb{R}$-linear map — since the mixed partial derivative is additive and respects scalar multiplication; the identity element of the codomain is therefore the zero polynomial. So we must find all $f \in R[x,y]$ such that $\frac{\partial^2f}{\partial x \partial y} = 0$ as a polynomial. Since $f$ is a polynomial, collect the powers of $y$ together and write $f(x,y) = a_0(x)+a_1(x)y+a_2(x)y^2 + ... + a_n(x)y^n$ where the $a_i$ are polynomials in $x$ only. The derivative of this with respect to $y$ is $a_1(x) + 2a_2(x)y + ... + na_n(x)y^{n-1}$, and the derivative of that with respect to $x$ is $\frac{da_1}{dx} + 2y\frac{da_2}{dx} + ... + n\frac{da_n}{dx}y^{n-1}$. For this to be the zero polynomial, every coefficient of $y^i$ (including the constant term) must be $0$. So $\frac{da_i}{dx} = 0$ for every $i\ge 1$, and therefore $a_1,\dots,a_n$ are constants not depending on $x$ — only $a_0$ is unrestricted. So we get $f(x,y) = a_0(x) + a_1y+a_2y^2+...+a_ny^n = a_0(x) + y(a_1+a_2y+...+a_ny^{n-1})$, which is of the form $h(x) + yg(y)$ for one-variable polynomials $h$ and $g$. NOTE : Just caught me off guard this one, but if people are wondering why the answer looks "asymmetric" in $x$ and $y$ (every polynomial of the form $h(x)+yg(y)$ — why is a $y$ multiplied with $g(y)$ while nothing is multiplied with $h(x)$?) even though the mixed partial is symmetric in $x$ and $y$, the answer is that this class of polynomials is in fact symmetric in $x$ and $y$. A function of the form $h(x)+yg(y)$ can be written as $C + xh'(x) + yg(y)$, where $C$ is a constant and $h'$ is some polynomial (the prime does not denote a derivative). Absorbing the constant into the $y$-part, the kernel can be described symmetrically as $$ f(x,y) = C + xh(x) + yg(y) $$ where $C\in \mathbb{R}$ and $h,g$ are arbitrary one-variable polynomials.
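A two-line sympy check of this description of the kernel (with made-up example polynomials):

```python
import sympy as sp

x, y = sp.symbols('x y')

# anything of the form h(x) + y*g(y) is killed by the mixed partial ...
f = (3*x**4 - x + 7) + y*(y**3 + 2*y + 5)
print(sp.diff(f, x, y))                  # 0

# ... while a polynomial with a genuine mixed term is not
print(sp.diff(x**2*y + x*y, x, y))       # 2*x + 1
```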
H: Is a function $f\colon C\to\mathbb{R}$ bounded if $C$ is compact but $f$ is not necessarily continuous? If I have a function mapping a compact set to the real numbers, is that function bounded? I know that this is true if the function is continuous. But is it true even if the function is not continuous? If so, how do I prove this? AI: Consider $f: [0, 1] \to \mathbb{R}$ defined by $f(x) = \frac{1}{x}$ if $x \neq 0$ and $f(0) = 2$. Then $f$ is unbounded but its domain $[0,1]$ is compact.
H: Finding $\lim_{n \to \infty}(n^{1\over n}+{1\over n})^{n\over \ln n}$ $$\lim_{n \to \infty}\left(n^{1\over n}+{1\over n}\right)^{n\over \ln n}$$ The limit is the same as $$e^{\displaystyle{\lim_{n \to \infty}}{n^{(1+{1\over n})}-n+1\over \ln n}}$$ But I am stuck here. I noticed that if I take $n$ common from the numerator then it is of the form $0 \times \infty$, but I can't seem to solve further; using L'Hopital's rule is too hectic or perhaps not meant to solve this one. I couldn't use any of the Maclaurin expansions either in this case. Could someone please guide me in this? Thanks! AI: Here is a simple approach for this limit. $$ \lim_{n\to \infty} \left(\sqrt[n]{n} +\frac{1}{n} \right)^{\frac{n}{\ln n}} = \lim_{n\to\infty} \left[ n^{\frac{1}{\ln n}}\left(1+\frac{1}{n \sqrt[n]{n}}\right)^{\frac{n}{\ln n}}\right]=\lim_{n\to\infty} n^{\frac{1}{\ln n}}\cdot\lim_{n\to\infty}\left(1+\frac{1}{n \sqrt[n]{n}}\right)^{\frac{n}{\ln n}} =e \times 1 = e. $$ For the first factor, $$ \lim_{n\to\infty} n^{\frac{1}{\ln n}} = \exp\left(\lim_{n\to \infty} \frac{\ln n}{\ln n}\right)=e $$ (in fact $n^{1/\ln n}=e$ for every $n>1$). For the second factor, write $$\left(1+\frac{1}{n \sqrt[n]{n}}\right)^{\frac{n}{\ln n}}=\exp\left(\frac{n}{\ln n}\ln\left(1+\frac{1}{n \sqrt[n]{n}}\right)\right);$$ since $0\le \ln(1+t)\le t$ for $t\ge 0$, the exponent is squeezed between $0$ and $\frac{1}{\sqrt[n]{n}\,\ln n}$. Because $\sqrt[n]{n}\to 1$ and $\ln n\to\infty$ as $n\to\infty$, we have $\frac{1}{\sqrt[n]{n}\,\ln n}\to 0$, so the second factor tends to $e^0=1$.
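A quick numerical check (the convergence is very slow, since the second factor behaves like $e^{1/\ln n}$, so the error only decays like $1/\ln n$):

```python
import math

def term(n):
    return (n ** (1 / n) + 1 / n) ** (n / math.log(n))

for n in (10, 10**3, 10**6, 10**9):
    print(n, term(n))
print("e =", math.e)   # the computed values creep down towards e ~ 2.71828...
```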
H: On the number of invariant Sylow subgroups under coprime action -Antonio Beltrán, Changguo Shao I'm reading the paper of Antonio Beltrán and Changguo Shao, On the number of invariant Sylow subgroups under coprime action: https://www.researchgate.net/publication/318675516_On_the_number_of_invariant_Sylow_subgroups_under_coprime_action In the proof of Theorem C, I don't understand how $ν_p^A (H) | ν_p^A(K)$ and $ν_p^A (K) | ν_p^A(G)$. Thank you very much. AI: That's the induction step. First, observe that the pairs $(H,K)$ and $(K,G)$ also satisfy the hypotheses of the theorem. For $|G:H|=1$ the theorem holds. The authors then assume that $m_0=|G:H|>1$ and that the statement holds for every $m<m_0$ (this is the induction hypothesis), and prove it for $m_0=|G:H|$; the two divisibility relations you ask about are exactly the induction hypothesis applied to the pairs $(H,K)$ and $(K,G)$.
H: Rings Trapped Between Fields Some Background and Motivation: In this question, it is shown that an integral domain $D$ such that $F \subset D \subset E$, $E$ and $F$ fields with $[E:F]$ finite, is itself a field. However, a significantly more general result holds and seems worthy of independent address; hence, Let $F \subset E \tag 1$ be fields with $[E:F] < \infty; \tag 2$ if $R$ is a ring such that $F \subset R \subset E, \tag 3$ show that $R$ is in fact a field. AI: If $x\in R\setminus F$, then for some minimal $n\in\mathbb{N}$, $x$ is a root of a polynomial $\sum_{i=0}^nc_ix^i$ over $F$ of degree $n$; otherwise $\{1,x,x^2,\ldots\}$ would be linearly independent over $F$, contradicting the finiteness of $[E:F]$. Note that $c_0\neq0$: if $c_0=0$, then $x\cdot\sum_{i=1}^nc_ix^{i-1}=0$, where the second factor is nonzero by the minimality of $n$, so $x$ would be a zero-divisor — impossible, since this all happens inside the field $E$ and $x\neq 0$ (as $x\notin F$). So for any $x\in R\setminus F$ you have $x\cdot\overbrace{\left(-\frac{1}{c_0}\sum_{i=1}^{n}c_ix^{i-1}\right)}^{\in R}=1$, so $R$ contains the inverse of $x$. Since every nonzero element of $F$ is already invertible in $F\subset R$, every nonzero element of $R$ is invertible, i.e. $R$ is a field.
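A concrete illustration of the mechanism, with a hypothetical example of my own choosing (and using sympy's `minimal_polynomial` helper, if I recall its interface correctly): take $F=\mathbb Q$, $E=\mathbb Q(\sqrt[3]2)$ and $x=1+\sqrt[3]2$; the constant term of the minimal polynomial is nonzero, and the displayed formula produces $x^{-1}$ as a polynomial in $x$:

```python
import sympy as sp

t = sp.Symbol('t')
x = 1 + sp.cbrt(2)

p = sp.minimal_polynomial(x, t)   # t**3 - 3*t**2 + 3*t - 3, so c0 = -3, c1 = 3, c2 = -3, c3 = 1
print(p)

inv = (x**2 - 3*x + 3) / 3        # -(1/c0) * (c1 + c2*x + c3*x**2), as in the proof above
print(sp.expand(inv * x))         # 1, so the inverse of x is again a polynomial in x over Q
```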
H: Can we define the distance from $x$ to boundary $A$ when $A$ is open? (Michael Spivak "Calculus on Manifolds" on p.64) I am reading "Calculus on Manifolds" by Michael Spivak. On p.64 in this book, Spivak wrote "distance from $x$ to boundary $A$" when $A \subset \mathbb{R}^n$ is open. Intuitively, I think we can define the distance from $x$ to boundary $A$ when $A$ is open. But I cannot prove that. Please answer the following question: Let $X$ be a metric space. Let $A \subset X$. Let $x$ be an element of $X$. Let $D := \{r\in \mathbb{R} ; r = \text{d}(a, x) \text{ for some }a \in A\}$. If $A$ is compact, then there exists the minimum value of $D$. Even if $A$ is closed but not compact, I guess there exists the minimum value of $D$. AI: The distance from a point to any nonempty subset of a metric space is always defined: it is an infimum, which need not be a minimum. A well known example where $D$ has no minimum even though $A$ is closed is the following. Consider $C[0,1]$ with the sup norm, and let $A=\{f: \int_0^{1/2} f(t)\,dt-\int_{1/2}^{1} f(t)\,dt=1\}$. One can show that the infimum of $D$ corresponding to $x=0$ (the zero function) is $1$; but if this infimum were attained by some $f\in A$, then $f$ would have to take the value $1$ for $t <\frac 1 2$ and the value $-1$ for $t >\frac 1 2$, contradicting continuity at $t=\frac12$. I will add details if necessary.
H: Dealing with negative values for $d$ in RSA Encryption. I worked through an RSA Encryption example, where I am given $p,q,e$ and I have to work out $n,\phi(n),d$. I don't have any difficulty determining all of the other items, however I get a negative value for $d$. Namely, $d=-37$. I've heard suggestions from my classmates that I should do $\phi(n)+d=113-37=76$ but some say I should do $\phi(n) - d = 150$, and I'm not sure which one is correct. PS - The values for $d$ and $\phi(n)$ are not the ones I worked out in my example. The ones above are just random numbers I picked to understand what to do in my case. AI: Assuming $\ p,q\ $ are odd primes, all $\ d\ $ needs to do is satisfy the relation $$ ed\equiv 1\pmod{\lambda(n)}\ $$ where $\ \lambda\ $ is Carmichael's lambda function (i.e. $\ \lambda(n)=\text{lcm}(p-1,q-1)\ $). If $\ d=-37\ $ satisfies that congruence, and $\ r=m^e \pmod{pq}\ $ is a received RSA-encrypted message, then the congruence $\ m\equiv r^{-37} \pmod{pq}\ $ will hold, so you don't need to do anything at all to $\ d\ $. Nevertheless, if you really want a positive decryption exponent that will work, you can get one by adding an appropriate integer multiple of $\ \lambda(n)\ $ (or of $\ \phi(n)\ $, since $\ \phi(n)\ $ is an integer multiple of $\ \lambda(n)\ $) to it. So you could use $\ \phi(n)+d\ $ (or $\ \lambda(n)+d\ $) if you wish, but definitely not $\ \phi(n)-d\ $, which will not work. Note that it is impossible for $\ \phi(n)\ $ to have a value of $113$, since its only odd values are $\ \phi(1)=\phi(2)=1\ $.
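To make this concrete, here is a toy Python example with hypothetical small primes of my own choosing (these are not the numbers from the question, and are of course far too small for real use):

```python
from math import lcm                 # Python 3.9+

p, q, e = 5, 11, 3
n, lam, phi = p * q, lcm(p - 1, q - 1), (p - 1) * (q - 1)   # n = 55, lambda = 20, phi = 40

d = -13                              # a valid but negative exponent: 3 * (-13) = -39 = 1 (mod 20)
assert (e * d) % lam == 1

m = 42                               # a message coprime to n
c = pow(m, e, n)                     # encrypt
print(pow(c, d, n))                  # 42: Python (3.8+) handles the negative exponent via a modular inverse
print(pow(c, d + lam, n))            # 42: adding lambda(n) gives a positive exponent that works
print(pow(c, d + phi, n))            # 42: adding phi(n) works too
print(pow(c, phi - d, n))            # 38, not 42: phi(n) - d does NOT decrypt correctly
```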
H: Prove each linear transformation can be written as a matrix I'd like to show that Any linear transformation between two finite-dimensional vector spaces can be represented by a matrix. I've seen a proof for linear transformations from and to $\mathbb{R}^n$ but I want to generalize it to any finite-dimensional vector space. I'd also like to show that the composition of two linear transformations can be written as a multiplication of two matrices. I believe it should look like this: $$ [S\circ T]^B_D=[S]^C_D\cdot [T]^B_C $$ where $T: U\to V$, $S: V\to W$ and $B,C,D$ are bases of $U,V,W$ respectively. AI: Let the transformation $T$ be from $R^n \to R^m$. We will need bases for each of these spaces; let them be $B_n = \{e_{1n}, e_{2n},\ldots, e_{nn}\}$ and $B_m = \{e_{1m}, e_{2m},\ldots, e_{mm}\}$ respectively. Now, any vector $v$ can be expressed as $$v = \sum_{i=1}^na_ie_{in}$$ $$\implies T(v) = \sum_{i=1}^na_iT(e_{in})$$ To complete the matrix representation, we need to express each $T(e_{in})$ in the basis of the $m$-space. Hence, let $T(e_{in}) = \sum_{k=1}^mb_{ik}e_{km}$. Therefore $$T(v) = \sum_{i=1}^na_i\sum_{k=1}^mb_{ik}e_{km}$$ Now, we consider the matrix representation of $T$: we express $v$ as a column vector in $R^{n \times 1}$, $$v = \begin{bmatrix}a_1 \\ a_2 \\ \vdots \\a_n\end{bmatrix}$$ Hence, $T(v)$ can be thought of as the sum of $n$ vectors in $R^{m \times 1}$ — the coordinate columns of the $T(e_{in})$ — weighted by the entries $a_i$ of $v$. Therefore, the matrix of $T$ has as its $i$-th column the coordinates of $T(e_{in})$ with respect to $B_m$, i.e. the scalars $b_{ik}$ defined above: $$[T] = \begin{bmatrix} b_{11} & b_{21} & b_{31} & \dots & b_{n1} \\ b_{12} & b_{22} & b_{32} &\dots& b_{n2} \\ \vdots & \vdots & \vdots & & \vdots \\b_{1m} & b_{2m} & b_{3m} &\dots&b_{nm} \end{bmatrix}$$ Nothing here used that the spaces are $\mathbb{R}^n$ and $\mathbb{R}^m$: the same construction works for any finite-dimensional spaces once bases are fixed, since each vector is determined by its coordinate column. For the composition, the same bookkeeping shows that the columns of $[S\circ T]^B_D$ are obtained by applying $[S]^C_D$ to the columns of $[T]^B_C$, which is precisely the matrix product $[S]^C_D\cdot[T]^B_C$.
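As a small numerical companion (a made-up example, using numpy), here is the construction in action: the matrix of a map has the images of the basis vectors as its columns, and composition corresponds to the matrix product:

```python
import numpy as np

def T(v):                      # a linear map R^3 -> R^2
    x, y, z = v
    return np.array([x + 2*y, 3*z])

def S(w):                      # a linear map R^2 -> R^2
    a, b = w
    return np.array([a - b, 2*a])

def matrix_of(f, dim_in):
    # column i is f applied to the i-th standard basis vector
    return np.column_stack([f(e) for e in np.eye(dim_in)])

MT, MS = matrix_of(T, 3), matrix_of(S, 2)

v = np.array([1.0, -2.0, 4.0])
print(np.allclose(MT @ v, T(v)))                              # the matrix reproduces the map
print(np.allclose(MS @ MT, matrix_of(lambda u: S(T(u)), 3)))  # [S o T] = [S][T]
```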
H: Abelian extensions $E|F$ and $F|K$ $\implies E|K$? Let $E,F,K$ be fields such that $K\subset F \subset E$. If $F|K$ and $E|F$ are abelian extensions, is $E|K$ an abelian extension? I can't find a counterexample. AI: $K=\Bbb Q$, $F=\Bbb Q(\sqrt2)$, $E=\Bbb Q(\sqrt[4]2)$. Both $F|K$ and $E|F$ have degree $2$, so they are Galois with group $\Bbb Z/2\Bbb Z$ and hence abelian. But $E|K$ is not even Galois: the minimal polynomial $x^4-2$ of $\sqrt[4]2$ over $\Bbb Q$ has the non-real roots $\pm i\sqrt[4]2$, which do not lie in $E\subset\Bbb R$. In particular $E|K$ is not abelian.