H: Vector spaces without an inner product Many simple vector spaces such as $\mathbb{R}^m$ have natural inner products, in this case the dot product. Even function spaces such as $C[0,1]$, continuous real-valued functions on the unit interval, lend themselves to nice inner products, e.g. $$ \langle f,g \rangle := \int^1_0 f(x)g(x) dx. $$ In trying to think of vector spaces $V$ without a sensible inner product, it seems easiest to put $V$ over a less "nice" field, say the finite field $\mathbb{F}_5$. If $V$ is the vector space of polynomials up to degree $n$ with coefficients in $\mathbb{F}_5$, what would a reasonable inner product on $V$ be? What's an example of a real vector space without an inner product? More generally, given an $F$-vector space $V$, are there necessary and sufficient conditions to show $V$ has an inner product? AI: Assuming the axiom of choice, every vector space has a basis. Using that, you can turn every real vector space into an inner product space by first defining the inner product on the basis vectors so as to make them orthonormal and then extending it. If you don't assume choice, then this answer shows that it is consistent with $\sf ZF+DC$ that $\cal C(\Bbb R)$ has no norm. (Since every inner product gives a norm, it answers your question as well.) Note that an inner product requires your base field to have an "order", that is, your field must be an ordered field. In particular, it must have characteristic $0$. Therefore, it doesn't make much sense to talk about finite fields.
H: Why is [(4,4), (3)] not a descending plane partition of order 4? I am trying to understand descending plane partitions. According to Wolfram MathWorld, A descending plane partition of order n is a two-dimensional array (possibly empty) of positive integers less than or equal to n such that the left-hand edges are successively indented, rows are nonincreasing across, columns are decreasing downwards, and the number of entries in each row is strictly less than the largest entry in that row. Following code written by Fischer and Konvalinka, I have printed the 42 descending plane partitions of order 4: It seems to me as if [(4,4),(3)] satisfies the conditions of a descending plane partition, but it's not on this list. Could someone elucidate the exact part of the definition that fails for [(4,4),(3)]? AI: The definition on Wolfram MathWorld is incorrect; in order for the descending plane partitions of order n to be enumerated as they claim, we need the added restriction that the first part in each row is less than or equal to the length of the previous row. 3 is the first part of the second row, and 3 exceeds the length of the first row.
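For readers who want to experiment: below is a small brute-force Python sketch (not the Fischer–Konvalinka code mentioned in the question) that enumerates descending plane partitions of order $n$ using the corrected definition from the answer, i.e. with the extra requirement that the first part of each row is at most the length of the previous row. It recovers the count of 42 for order 4 and rejects [(4,4),(3)].

```python
from itertools import product

def is_dpp(rows, n):
    """Corrected DPP conditions for a shifted array: row i starts one column
    to the right of row i-1; all entries lie in 1..n."""
    for i, row in enumerate(rows):
        if not row or any(not 1 <= p <= n for p in row):
            return False
        if any(row[j] < row[j + 1] for j in range(len(row) - 1)):
            return False                      # rows must be nonincreasing
        if row[0] <= len(row):
            return False                      # largest (first) part > number of entries in the row
        if i > 0:
            if row[0] > len(rows[i - 1]):
                return False                  # corrected condition from the answer
            above = rows[i - 1][1:]           # cells of the previous row sitting above this row
            if len(row) > len(above) or any(row[j] >= above[j] for j in range(len(row))):
                return False                  # columns strictly decrease downwards
    return True

def all_dpps(n):
    shapes = []
    def build(shape, prev):                   # row lengths strictly decrease
        for l in range(1, prev):
            shapes.append(shape + (l,))
            build(shape + (l,), l)
    build((), n)                              # first part <= n forces row length <= n - 1
    found = [()]                              # the empty DPP
    for shape in shapes:
        for vals in product(range(1, n + 1), repeat=sum(shape)):
            rows, k = [], 0
            for l in shape:
                rows.append(tuple(vals[k:k + l]))
                k += l
            if is_dpp(rows, n):
                found.append(tuple(rows))
    return found

print(len(all_dpps(4)))               # 42, matching the list printed in the question
print(is_dpp([(4, 4), (3,)], 4))      # False: 3 exceeds the length (2) of the first row
```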
H: Counting endofunctions with a certain recurrent condition. Problem: Let $\phi:X\rightarrow X$ be an endofunction. A vertex $v\in X$ is recurrent if there is some positive integer $k\geq 1$ such that $\phi^k(v)=v$. Let $\mathcal{Q}$ be the species of endofunctions $\phi:X\rightarrow X$ such that if $v\in X$ has $|\phi^{-1}(v)|\geq 2$, then $v$ is recurrent. Obtain a formula for the exponential generating function $$Q(x)=\sum_{n=0}^\infty |\mathcal{Q}_n|\frac{x^n}{n!},$$ where $|\mathcal{Q}_n|$ is the number of such endofunctions on a set $|X|=n$. My approach: The species of endofunctions is naturally equivalent to the species of sets of cycles of rooted labelled trees, which we denote by $\mathcal{F}\equiv \mathcal{E}[\mathcal{C}[\mathcal{R}]]$. In this setting it's easy to observe that a vertex is recurrent if and only if it lives on a cycle. The condition that if $v\in X$ has $|\phi^{-1}(v)|\geq 2$ then $v$ is recurrent means that each tree root vertex can have degree at most 2, while all other tree vertices must have degree 1 or 0. So to find a decomposition of $\mathcal{Q}$ (and hence its e.g.f. $Q(x)$) we should replace the species of rooted labelled trees in the general endofunction decomposition with another species that decomposes like $\mathcal{X}*\left(\mathcal{L}\oplus \mathcal{L}*\mathcal{L}\right)$, i.e. a root followed by a linear order or a pair of linear orders (here I'm thinking about the linear orders like trees with no branches). The species of linear orders $\mathcal{L}$ has exponential generating function $L(x)=(1-x)^{-1}$, and so we replace the e.g.f. $R(x)$ in the endofunction e.g.f. $$F(x)=\exp\left(\log\left((1-R(x))^{-1}\right)\right)$$ with $$A(x)=x\left((1-x)^{-1}+(1-x)^{-2}\right).$$ This gives $$Q(x)=\exp\left(\log\left((1-A(x))^{-1}\right)\right)=\frac{(1-x)^2}{2x^2-4x+1}.$$ Using SAGE the coefficients $\left[\frac{x^n}{n!}\right]Q(x)=|\mathcal{Q}_n|$ gave me 1,2,7,24,82... which seems reasonable and is https://oeis.org/A003480 in the OEIS. My question: Does my description of the species replacing the rooted labelled trees give the correct description of $\mathcal{Q}$, and if so is there a formula for these objects that is not a rational expression? AI: I disagree with your statement "each tree root vertex can have degree at most $2$". For example, the endofunction $\phi:\{1,2,3,4\}\to \{1,2,3,4\}$ defined by $$ \phi(1)=\phi(2)=\phi(3)=\phi(4)=1 $$ satisfies "$|\phi^{-1}(v)|\ge 2$ implies $v$ is recurrent," and $1$ is a root vertex in the composition you described, yet $|\phi^{-1}(1)|=4$. Let $\mathcal P$ be the species of permutations (linear orderings), so that $\mathcal P=\mathcal E[\mathcal C]$. Your first observation can be stated as $$ \mathcal F=\mathcal P[\mathcal R] $$ That is, you are decomposing your endofunction as a permutation of certain rooted trees. In each tree, the root can have arbitrary in-degree, but all other vertices must have in-degree $0$ or $1$. This means that each root has some set of nonempty chains dangling off of it. Therefore, your species $\mathcal R$ consists of a root, together with a set of nonempty linearly ordered chains, or $$ \mathcal R=\{\bullet\}\times \mathcal E[\mathcal P^+] $$ where $\mathcal P^+$ is the species of nonempty permutations, so that $\mathcal P=\mathcal P^++1$. This should give $$ R(x) = x\cdot \exp\left(\frac{x}{1-x}\right) $$ Combined with $\mathcal F=\mathcal P[\mathcal R]$ and $P(x)=\frac1{1-x}$, we get $$ Q(x)=\frac1{1-R(x)}=\frac1{1-x\cdot \exp\left(\frac{x}{1-x}\right)} $$
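As a sanity check on the formula in the answer, here is a short illustrative script (SymPy plus brute-force enumeration over all $n^n$ endofunctions for small $n$; not part of the original exchange):

```python
from itertools import product
from math import factorial
import sympy as sp

def valid(phi, n):
    """phi[i] is the image of i; check: in-degree >= 2 forces the vertex onto a cycle."""
    recurrent = set()
    for v in range(n):
        w = phi[v]
        for _ in range(n):                # checking phi^k(v) for k = 1..n suffices
            if w == v:
                recurrent.add(v)
                break
            w = phi[w]
    indeg = [0] * n
    for v in range(n):
        indeg[phi[v]] += 1
    return all(indeg[v] < 2 or v in recurrent for v in range(n))

x = sp.symbols('x')
Q = 1 / (1 - x * sp.exp(x / (1 - x)))     # the EGF proposed in the answer
N = 6
coeffs = sp.series(Q, x, 0, N + 1).removeO()
for n in range(1, N + 1):
    brute = sum(valid(phi, n) for phi in product(range(n), repeat=n))
    print(n, brute, coeffs.coeff(x, n) * factorial(n))   # the two counts agree
```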
H: Derivative axiom This is confusing me very much... Is there (rigorous) proof that slope of secant line "goes to" slope of tangent line on some point when $\Delta x \rightarrow 0$? This is actually not obvious at all. Intuitive problem: What guarantees me that function will not be all messy at infinite small input changes? How one can prove that function output sensitivity can't be "good" enough to produce such curve ("output machine") that there is no such small $\Delta x$ for which $\text{slope(secant)} = \text{slope(tangent)}$? I thought about structure of $\mathbb{R}$ and series but I am stuck. Thanks AI: Have you seen the following example? $$ f(x) = \begin{cases} x^2 \sin\frac1x, & x \neq 0 \\ 0, & x = 0 \\ \end{cases} $$ It has $f'(0) = 0$ but it's not obvious what one should mean with a tangent at $x=0$ since the graph oscillates fast close to origin.
H: Find solutions to $g : \mathbb{R} \to \mathbb{R}$ where $g$ is additive and satisfies $g(x^z) = g(x)^z$ Find solutions to $g : \mathbb{R} \to \mathbb{R}$ such that $g(x+y) = g(x) + g(y)$ and $g(x^z) = g(x)^z$ for $z \in \mathbb{R} \backslash \{0,1\}$. $z$ is a fixed number and not a variable. Note : Please do not invoke any additional condition that forces $g$ to be linear AI: From the first condition, we need only two specializations: $y=x$ and $y=-x$, i.e., $$\tag 1g(2x)=2g(x)$$ and $$\tag2 g(x)+g(-x)=g(0).$$ From $(1)$ with $x=0$, we find $g(0)=0$ and then from $(2)$, $g(-x)=-g(x)$. We will also use the second condition only where it is clear that it makes sense, i.e., when $x>0$ and it is known that $g(x)>0$. From $g(x^2)=g(x)^2$, we see $g(x)=g(\sqrt x)^2\ge0$ for all $x\ge 0$. Assume $g$ is not identically $0$, i.e., $g(x)\ne0$ for some $x$ (so clearly $x\ne 0$). Then by $(1)$ and $(2)$ also $g(-x)\ne 0$ and $g(\pm2x)\ne0$. At least one of these four is a positive number $\ne1$. Thus we have $x_0\in(0,\infty)\setminus \{1\}$ with $g(x_0)>0$. Then for $x>0$, $x\notin\{1,x_0\}$, we have $$\ln g(x)=\ln g(x_0^{\log_{x_0}(x)})=\ln\left(g(x_0)^{\log_{x_0}(x)}\right)=\frac{\ln g(x_0)}{\ln x_0}\cdot\ln x.$$ In particular, $g(x)>0$ for all $x\notin \{-1,0,1\}$. If additionally $x\notin \{\frac12,\frac 12x_0\}$, we obtain $$\ln 2=\ln g(2x)-\ln g(x)=\frac{\ln g(x_0)}{\ln x_0}\cdot(\ln x+\ln 2)- \frac{\ln g(x_0)}{\ln x_0}\cdot\ln x=\frac{\ln g(x_0)}{\ln x_0}\cdot\ln 2$$ so that $$ g(x_0)=x_0.$$ But as $g(x)>0$ for all $x>0$ except possibly $1$ and $x_0$, we could have picked any positive number $\ne1$ for $x_0$. Thus we find $g(x)=x$ for all $x>0$ except possibly $x=1$. Then by $(1)$, also $g(1)=2g(\frac12)=1$ and by $(2)$, $g(x)=x$ for all $x$.
H: Partial Derivatives on Manifolds as Derivations first time poster, finally decided to take the plunge and not just lurk anonymously. I'm just an experimental physicist with a desire to know some math beyond my few semesters of abstract algebra. Just a quick question regarding the set of point derivations $ D: C_p^\infty (M) \rightarrow \mathbb{R}$ on a smooth manifold $ M $. I'm currently working through tangent bundles, and while I understand $ T_p\mathbb{R}^n $ as the point derivations on $ C_p^\infty(\mathbb{R}^n) $, I'm realizing I'm missing some of the subtleties of $ C_p^\infty (M) $, and therefore $ T_pM $. For some $ f \in C_p^\infty (M) $, a chart $ (U,\phi) = (U,x^1,...,x^n) $, and $ x^i = r^i \circ \phi $, by definition $$ \frac{\partial f}{\partial x^i}(p) := \frac{\partial (f \circ \phi^{-1})}{\partial r^i} \big(\phi(p)\big)$$ from Tu's Intro to Manifolds on page 87. However, he claims that it's "easy to check" that this is indeed a derivation, but I'm having difficulty getting the nitty gritty to work. Say $ f,g \in C_p^\infty(M)$. My first thought was to introduce a second chart $ (V,\psi) = (V,y^1,...y^n) $ on $ p \in U \cap V $, $$ \frac{\partial(fg)}{\partial x_i} (p) = \frac{\partial(fg \circ \phi^{-1})}{\partial r_i} \big(\phi(p)\big) = \frac{\partial\big(( fg \circ \psi^{-1} ) \circ ( \psi \circ \phi^{-1}) \big)}{\partial r_i} \big(\phi(p)\big) = \sum_j^n \frac{\partial ( fg \circ \psi^{-1} )}{\partial r^j} \big(\psi(\phi^{-1}(\phi(p)))\big) \frac{\partial(\psi \circ \phi^{-1})^j}{\partial r^i} \big( \phi(p)\big) = \sum_j^n \frac{\partial(fg)}{\partial y^j}(p) \cdot \frac{\partial y^j}{\partial x^i}(p) $$ then setting the chart $ (V,i_{\mathbb{R}^n}) = (V,r^1,..,r^n) $ so that the partials $ \partial(fg) / \partial r^j $ can be manipulated by standard calculus product rule, then work it backwards with two terms up to the definitions of partial derivatives, $$ \frac{\partial f}{\partial x^i}(p) \cdot g(p) + f(p) \frac{\partial g}{\partial x^i}(p) $$ and I felt good about this, but realized that it only works on $ M = \mathbb{R}^n $, else $ fg:M \rightarrow \mathbb{R} $ differentiated by $ r^i $ is nonsensical. Is there some easy way of proving this Liebniz rule that's flown way over my head? Maybe something elegant, like the chain rule with the differential map? Thanks in advance, AtomJZ AI: There's no need to introduce a second chart. Just note that \begin{align} (f \cdot g) \circ \phi^{-1} = (f \circ \phi^{-1}) \cdot (g \circ \phi^{-1}) \end{align} So, now you can use the standard product rule which you know is true for maps from $\Bbb{R}^n \to \Bbb{R}$ (or between open subsets), and of course, unwind the meaning of all the symbols present :)
H: Matrix inner product, how can I prove this identity? If $\langle u,v\rangle$ is the Euclidean inner product on $\mathbb R^n$ and $A$ is an $n \times n$ matrix, how can I prove $\langle u,Av\rangle = \langle A^Tu,v\rangle$? AI: Use the definition of the inner product, $\langle u,v\rangle:=u^Tv$. Then $$\langle u,Av\rangle=u^T(A^T)^Tv=(A^Tu)^Tv=\langle A^Tu,v\rangle.$$
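A quick numerical illustration of the identity (a NumPy sketch with arbitrary random data; not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u = rng.standard_normal(n)
v = rng.standard_normal(n)
A = rng.standard_normal((n, n))

lhs = u @ (A @ v)             # <u, Av> = u^T A v
rhs = (A.T @ u) @ v           # <A^T u, v> = (A^T u)^T v
print(np.isclose(lhs, rhs))   # True
```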
H: (Proof verification) For each $s \in S$, show that $\sum_{t \in Gs} \frac{1}{|Gt|}=1$ This is an exercise from Lang's Algebra, Chapter 1. I got my solution but I'm not sure that it's correct. Feel free to point out what's wrong with me. Let $G$ be a finite group operating on a finite set $S$. For each $s \in S$, show that $$\sum_{t \in Gs} \frac{1}{|Gt|}=1$$ ($Gs$ means the orbit) My efforts Notice that for $t \in Gs$, we have some $t = g \cdot s$ where $g \in G$, which is equivalent to $s = g^{-1} \cdot t$. Hence $s \in Gt$ iff $t \in Gs$. Therefore we have $$\sum_{t \in Gs} \frac{1}{|Gt|}=\sum_{s \in Gt}\frac{1}{|Gt|}=\frac{1}{|Gt|}\sum_{s \in Gt}1=1$$ Does this approach work? I found some solution claimed that $|Gs|=|Gt|$, but I found no way to prove it, so I tried this one. But is it possible to for me to apply the equivalence relationship under the sum operator? Did I miss something? Appreciated in advance! Update 1: I realized how to prove that $|Gs|=|Gt|$. In fact, for $t \in Gs$, we have $Gt=G(g\cdot s)=Gs$. Generally, the two orbits of $G$ are either disjoint or are equal. I forgot this. AI: This is a lovely idea, but unfortunately the symbols have led us astray. Consider the following functions and sums, where I will overdo the notation to help make the point: $$ f(s) = \sum_{t \in Gs} \frac{1}{|Gt|} = \sum_{\substack{t\in S \\ t \in Gs}} \frac{1}{|Gt|} $$ is a function of $s$ (not $t$, which is a dummy variable), and $$ g(s) = \sum_{s \in Gt}\frac{1}{|Gt|} = \sum_{\substack{t\in S \\ s \in Gt}} \frac{1}{|Gt|} $$ is again a function of $s$ (not $t$, which is a dummy variable); but $$ h(t) = \sum_{s \in Gt}\frac{1}{|Gt|} = \sum_{\substack{s\in S \\ s \in Gt}} \frac{1}{|Gt|} $$ is a function of $t$ (not $s$, which is a dummy variable), and $$ j(t) = \frac{1}{|Gt|}\sum_{s \in Gt}1 = \frac{1}{|Gt|}\sum_{\substack{s\in S \\ s \in Gt}} 1 $$ is again a function of $t$ (not $s$, which is a dummy variable). You are correct that $f(s)=g(s)$, since $t\in Gs$ if and only if $s\in Gt$. You are also correct that $h(t)=j(t)$ (one can factor out from a sum an expression that's independent of the dummy variable) and that $j(t)=1$. However, $g(s)$ and $h(t)$ are different functions (of different variables, even), and while it turns out that they happen to both equal the same constant, that fact has to be established by some argument. In the OP there was no such argument, and indeed the more concise notation in the two definitions (which is the same for both functions) subtly asserted that $g(s)=h(t)$ without proof (and without us noticing).
H: Complex inner product spaces: are $A^*A$ and $AA^*$ always unitarily equivalent? Problem 2(b) from Sec 79, pg 158 of PR Halmos's Finite-Dimensional Vector Spaces: If $A$ is an arbitrary linear transformation on a complex inner product space $V$ (not given to be finite-dimensional), does it follow that $A^*A$ and $AA^*$ are always unitarily equivalent? (Nomenclature: transformations $A$ and $B$ on $V$ are said to be unitarily equivalent if there exists some unitary transformation $U$ on $V$ such that $A = U^{-1}BU$. A unitary transformation on $V$, in turn, is the same as an invertible isometry on $V$.) Am trying to write a proof that works in the infinite-dimensional case, assuming that the answer to the question is "yes". Assuming that the answer is going to be "no", am also trying to think of a counterexample in finite or infinite dimensional cases. Based on the intuition of the underlying "similarity", I suspect (rightly or wrongly) that if $A$ is not a normal transformation (that is, if $A^*A \neq AA^*$), then $A$ should serve as a counterexample. An explicit counterexample in finite-dimensions could perhaps be the one wherein $\rho(A^*A) = n \neq \rho(AA^*)$ where $n$ is the dimension of the underlying vector space. Haven't been able to proceed further along these lines however. Would appreciate guidance. Thanks. AI: I think this is not true for infinite-dimensional spaces: Put $V = \ell^2$ and set $A$ to be the "right shift", i.e. $A((x_1, x_2, x_3 \dots)) = (0, x_1, x_2, \dots)$. Then $A^*$ is the "left shift" $(x_1, x_2, \dots) \mapsto (x_2, x_3, \dots)$, so $A^* A$ is the identity while $AA^*$ is $(x_1, x_2, \dots) \mapsto (0, x_2, x_3, \dots)$. One of these is bijective while the other isn't, so they can't be (unitarily) equivalent.
H: Let $a$ be an element of a group. Show that $a$ and $a^{-1}$ have the same order I think one way to solve it is by considering the conjugation of $a$ and $a^{-1}$ using $g \in G$. That is, $a^g=gag^{-1}$ and $(a^{-1})^g= ga^{-1}g^{-1}$. Then, if $n$ is the order of $a$ we have $(a^g)^n=e$; since $(a^g)^n=gag^{-1} gag^{-1} \cdots g^{-1} gag^{-1} = gaegaea \cdots ag^{-1} =ga^ng^{-1}=geg^{-1}=e$ and $((a^{-1})^g)^n=e$; since $((a^{-1})^g)^n=ga^{-1}g^{-1}ga^{-1}g^{-1} \cdots g^{-1}ga^{-1}g^{-1}= ga^{-1}ea^{-1}e\cdots a^{-1}g^{-1}= g(a^{n})^{-1}=geg^{-1}=e$. Therefore, $|a|=|a^{-1} |$. Am I right? If not, how else can I do it? AI: Suppose $a\neq e$ (the result is obvious if $a=e$). Since $aa^{-1}=a^{-1}a=e$, we have $e=(aa^{-1})^n=a^n(a^{-1})^n$ for every positive integer $n$. Let $N(a)$ be the set of positive integers $n$ such that $a^n=e$. Suppose that $a^n=e$; then $a^n(a^{-1})^n=(a^{-1})^n=e$. Suppose that $(a^{-1})^n=e$; then $a^n(a^{-1})^n=a^n=e$. We deduce that $N(a)=N(a^{-1})$, and since $\operatorname{ord}(a)$ is the smallest element of $N(a)$, $a$ and $a^{-1}$ have the same order.
H: Understanding the definition of the index of $H$ in $G$, i.e., $[G:H]$ Let $H \leq G$. Then, the number of left cosets of $H$ in $G$ is the index $[G:H]$ of $H$ in $G$. Now, I read that $[G:H]$ can either be finite or infinite. My question: if I know that $[G:H]$ is finite, would it be valid to conclude that $G$ must be finite as well? If not, would it be conclude that $H$ must be finite? AI: If $[G:H]$ is finite, then either $G$ is finite and $H$ is finite; or else both $G$ and $H$ are infinite. For an example of the latter, take $G$ to be the additive group of integers, $\mathbb{Z}$, and take $H$ to be the even integers $2\mathbb{Z}$. The index is $2$. (And more generally, the index of $n\mathbb{Z}$ in $\mathbb{Z}$ is $n$). It is also possible for both $G$ and $H$ to be infinite, and for $[G:H]$ to be infinite. For example, take the additive group of polynomials with real coefficients, $G=\mathbb{R}[x]$, and take $H$ to be the subgroup of constant polynomials. Both are infinite, and $[G:H]$ is also infinite. If $G$ is infinite and $H$ is finite, then necessarily there are infinitely many cosets of $H$ in $G$, so $[G:H]$ is infinite. If $G$ is finite, then there are only finitely many cosets of any subgroup, so $[G:H]$ will be finite. I will note that one can define the index to be a cardinal, so that you don’t have to just say “infinite” in the general case.
H: Is this justification to prove this is a continuous function correct? Given the following function: $$ f(x,y)=\frac{\sin xy}{e^x-y^2}\;, $$ is the following justification correct? Since sine is a continuous function, the exponential is a continuous function, polynomials are continuous functions, and the difference of continuous functions is continuous, both the numerator and denominator are continuous functions; hence the quotient of two continuous functions is a continuous function for all values of its domain. AI: The ratio of two continuous functions is continuous at the points where the denominator is nonzero. So, in your case, you need $e^x \ne y^2$.
H: Complete orthonormal set is basis? Do I understand correctly that a "complete orthonormal set" is the same as a basis? Complete meaning that every Cauchy sequence in the set converges to an element in the set. But then if you consider $l^2$, can't I choose the set containing $e_1,e_3,e_4,...$, e.g. skipping over the second standard basis element? This is orthonormal and seems complete, but no infinite linear sum of the elements converges to $e_2$, and then it is not a basis. Is there something wrong with the reasoning? AI: I think you are confusing two meanings of "complete". An orthonormal set (in an infinite-dimensional normed vector space) is complete just when every vector is a(n infinite) sum of scalar multiples of vectors from the set. Strictly speaking, those complete orthonormal sets are not bases, since a set can be a basis only if every vector is a finite sum of multiples of set elements. In any case the "complete" in "complete orthonormal set" is not the same as a complete set in the topological sense. There, what you are calling complete is that the closure of the set is the whole space.
H: Explain the value of $(0,0)$ for $f(x,y)=1+x\frac{x^2-y^2}{x^2+y^2}$ Let $f:\mathbb R^2\to\mathbb R$ (where $\mathbb R$ is the real numbers) continuous in $\mathbb R^2$, such that $$f(x,y)=1+x\frac{x^2-y^2}{x^2+y^2}$$ for $(x,y)\neq(0,0)$ Justify the value of $f(0,0)$. The function is continuous in its domain, and $f(0,0)$ is defined by another equation. What else do I have to justify? AI: We have: $\left|x\cdot \dfrac{x^2-y^2}{x^2+y^2}\right| \le |x|\cdot \left|\dfrac{x^2}{x^2+y^2}+\dfrac{y^2}{x^2+y^2}\right| = |x| \implies \displaystyle \lim_{(x,y) \to (0,0)} f(x,y) = 1+0 = 1 \implies f(0,0) = 1$.
H: Consider the set $A=\{1,2,3,4,…99,100\}$, maximum number of elements that can be chosen making sure that their sum does not exceed 1000?. Consider the set $A = \{1,2,3,4,…99,100\}$, i.e. the set of natural numbers less or equal to $100$. Elements are chosen at random from this set without repetition of elements. What is the maximum number of elements that can be chosen making sure that their sum does not exceed $1000$?. Clearly justify your answer. My solution so far is as follows: Assuming worst case scenario, largest numbers are chosen first so: $100 + 99 + 98 + ... + 91 + 90$, etc. $100 + 99 + 98 + ... + 92 + 91 = (100 + 91) \times 5 = 955$ Adding 90 would make the sum exceed 1000, so the maximum number of elements is 10 However, the answer key says the answer is $11$ elements - is my working out correct or is there simply an error? This is from a Grade 11 maths exam practice. AI: Your answer is correct. In the worst case scenario, the sum of $11$ numbers will exceed $1000$. So, to be certain that the sum does not exceed $1000$, the maximum number of elements that can be chosen is $10$.
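The worst-case sums in the argument are easy to verify directly (a two-line Python check, purely illustrative):

```python
worst_10 = sum(range(91, 101))   # the ten largest elements: 91 + ... + 100
worst_11 = sum(range(90, 101))   # the eleven largest elements: 90 + ... + 100
print(worst_10, worst_11)        # 955, 1045 -> 10 picks are always safe, 11 may exceed 1000
```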
H: Why is the derivative of the binormal vector parallel to the normal vector? In my notes, it says to consider an arc length parameterization r(s). Then we can show that B' points in the direction of N such that we can write B'=-N. I understand why ||B'||= but am unsure how to show why B' is parallel to N? Also, is the minus sign just for convention or is there a reason it is there? Any help would be greatly appreciated! AI: $\def\vt{\mathbf{T}} \def\vn{\mathbf{N}} \def\vb{\mathbf{B}} \def\vr{\mathbf{r}}$In what follows differentiation with respect to arc length is indicated with a prime. We have \begin{align*} \vt &= \vr' \\ \vn &= \vt'/\|\vt'\| \\ \vb &= \vt\times\vn. \end{align*} Then \begin{align*} \vb' &= \vt'\times\vn+\vt\times\vn' \\ &= \|\vt'\|\underbrace{\vn\times\vn}_{\mathbf{0}} + \vt\times\vn' \\ &= \vt\times\vn'. \end{align*} Note that $\vb$ is orthogonal to $\vt$ by definition of the cross product. Since $\|\vb\|=1$, $\vb'$ is also orthogonal to $\vb$. (Proof: $(\vb\cdot\vb)' = 2\vb\cdot\vb' = 0$.) Since $\vt,\vn,\vb$ form an orthogonal basis, it must be the case that $\vb'$ is proportional to $\vn$. We take $\vb' = -\tau\vn$. The minus sign is a convention. It is a version of the right-hand rule. Note that $\tau>0$ indicates a twisting of the coordinate frame about the $\vt$ axis in a right-handed manner. (Take your right hand and point your thumb in the $\vt$ direction. Twist your hand about this axis in the direction in which your fingers are pointing. This twist corresponds to $\tau>0$.)
H: Estimating the remainder of a convergent series Is there a way in general to estimate how fast the remainder of a convergent series decreases? Given a sequence $\{a_n\}$ s.t. $\sum_{n=1}^\infty a_n$ converges, find how fast $\sum_{n=j}^\infty a_n$ decreases. This is frequently needed in large number (probability theory) related problems. For example, in the proof of the strong law of large numbers, we need $\sum_{n=j}^\infty\frac{1}{n^2}=\frac{O(1)}{j}$. Another example is $\sum_{n=j}^\infty\frac{1}{n^{2/p}}=O(1)j^{(p-2)/p}$, $p\in(1,2)$. AI: You can try bounding series by integrals: if $f(x) > 0$ and $f(x)$ is decreasing for $x \ge N-1$, $$ \int_{N-1}^\infty f(x)\; dx \ge \sum_{n=N}^\infty f(n) \ge \int_{N}^\infty f(x)\; dx$$ This is particularly useful for series $\sum_n n^{-p}$ where $p > 1$ is constant.
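A numerical illustration of the integral bounds for the tail $\sum_{n\ge N} 1/n^2$ used in the strong-law example (plain Python, using the exact value $\pi^2/6$ of the full series; illustrative only):

```python
from math import pi

N = 50
tail = pi**2 / 6 - sum(1.0 / k**2 for k in range(1, N))   # sum_{n >= N} 1/n^2
lower = 1.0 / N          # integral from N to infinity of dx/x^2
upper = 1.0 / (N - 1)    # integral from N-1 to infinity of dx/x^2
print(lower <= tail <= upper, (lower, tail, upper))       # True -> the tail is O(1)/N
```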
H: How to notate this kind of function? I have a function $f$ whose domain is $\mathbb{N}$ and range is a set of another functions. The domain of $f(n)$ is $\mathbb{N}_0 \cap [0,\frac{n}{2})$ and its codomain is $\mathbb{Z}$. How can I notate the codomain of $f$? AI: You can use either $$f:\Bbb N\to\bigcup_{n\in\Bbb N}{^{n}\Bbb Z}$$ or $$f:\Bbb N\to\bigcup_{n\in\Bbb N}\Bbb Z^{n}\;,$$ where $n=\{0,1,\ldots,n-1\}$. In general ${{^A}B}$ and $B^A$ are used for the set of functions from $A$ to $B$. The fact that the domain of $f(n)$ is a function of $n$ pretty much has to be specified separately.
H: For ordinals $\delta$ and $\beta$, if $\delta\in$ or $=\beta$, then there exists $\gamma\in$ or $=\beta$ such that $\beta=\delta+\gamma$ Prove that for ordinals $\delta$ and $\beta$, if $\delta \in$ or $= \beta$ then there exists an ordinal $\gamma \in$ or $= \beta$ such that $\beta = \delta + \gamma$. I tried using transfinite induction on $\delta$ Base case seems simple, since you can let $\gamma = \beta$, but I'm stuck on the proof for both the successor ordinal and limit ordinal cases. My class has not yet defined ordinal subtraction, so I can't use that in the proof. Any help or hints would be appreciated! AI: (Note that you can write $\delta\le\beta$ instead of the clumsy $\delta\in$ or $=\beta$.) HINT: Fix $\delta$ and prove it by induction on $\beta$ for all $\beta\ge\delta$. Suppose that you know that $\delta+\xi=\beta$; what must $\gamma$ be in order that $\delta+\gamma=\beta+1$? Suppose that $\beta$ is a limit ordinal, and for each $\rho$ such that $\delta\le\rho<\beta$ you know that there is an ordinal $\xi_\rho$ such that $\delta+\xi_\rho=\rho$; what is the natural candidate for an ordinal $\gamma$ such that $\delta+\gamma=\beta$?
H: Proving, with Dedekind cuts, that $\mathbb{R}$ has the lub property I am trying to follow Rudin's proof that $\mathbb{R} = \{\text{set ot cuts}\}$ has the least upper bound property. Here is the set up: $A$ is a nonempty subset of $\mathbb{R}$ (so it's some set of cuts) and $\beta$ is an upper bound of $A$. We define $$\gamma := \bigcup\limits_{\alpha \in A} \alpha.$$ The claim is that $\gamma = \sup A$. I was able to easily show that $\gamma \in \mathbb{R}$ by demonstrating that it satisfies the three criteria for a cut (non-trivial, closed downward, and possesses no maximal element) and that $\gamma$ is an upper bound of $A$. I have having trouble demonstrating that $\gamma$ is the least of the upper bounds. Here is a copy of Rudin's proof. Suppose $\delta < \gamma$. Then there is an $s \in \gamma$ and that $s \not \in \delta$. Since $s \in \gamma$, $s \in \alpha$ for some $\alpha \in A$. Hence $\delta < \alpha$, and $\delta$ is not an upper bound of $A$. This gives the desired result: $\gamma = \sup A$. I understand that Rudin is attempting to prove the contrapositive of the statement that "if $\delta$ were an upper bound of $\gamma$, that $\delta$ is greater than (properly contains) $\gamma$" by demonstrating that if $\delta$ is less than (is properly contained in) $\gamma$, then it is not an upper bound. The first sentence about some element (rational number) $s \in \gamma$ makes sense by the definition of a proper subset. If $s \in \gamma$, where $\gamma$ is the set of $\alpha$, it also makes sense that $s \in \alpha$ for some $\alpha$. I do not understand why this gives $\delta < \alpha$. How do we know that all of $\delta$ is properly contained in some $\alpha$? Rather, what if $\delta$ is "spread out" amongst different $\alpha_i \in A$? Does this have something to do with the fact that cuts are closed downward? I am trying to prove this to myself without invoking the notion of a "maximal element" (because none exists, and I'm really trying to think of a "supremum," which I don't yet know exists.) Any help on this would be appreciated. AI: Let $r\in \delta$. Then $r<s$ (if not then $r\ge s$ and by definition of a cut, $s\in \delta$). Since $s\in \alpha$, we have $r\in \alpha$ again by definition of a cut. Thus $\delta \subset \alpha$. Note that we proved essentially the following: let $\alpha, \beta$ be two cuts, then either one of the following holds: $$\alpha \subset \beta, \beta \subset \alpha, \alpha = \beta.$$
H: Determining all $f : \mathbb R^+ \to \mathbb R^+$ that satisfy $f(x + y) = f\left(x^2 + y^2\right)$ Determine all $f : \mathbb R^+ \to \mathbb R^+$ that satisfy $$f(x + y) = f(x^2 + y^2)\ \forall x,y \in \mathbb R^+.$$ My Proof: I define $g(x) + g(y) = f(x + y)$. Then, we get $g\left(x^2\right) - g(x) = c$; $c \in \mathbb R^+$. $f(x) = \frac{1 + \sqrt{4c + 1}}{2}$, which is some constant $k$. Hence, $f(x) = k\ \forall k \in \mathbb R^+$. Is my proof correct? AI: This doesn't work; there's no reason why $f(x+y)$ can be written as $g(x) + g(y)$, and moreover, $g(x^2) - g(x) = c$ doesn't give you that solution for $f$, because it's not a quadratic in $g(x)$; you'd need $g(x)^2 - g(x) = c$ instead. The correct solution is as follows: fix $x+y=c$, for some $c \in \mathbb R^+$. Then as we vary $x$ over $(0, c)$, $x^2+y^2$ takes all values between $\frac{c^2}{4}$ and $c^2$ (with an open interval at $c^2$). So $f(c) = f(a)$ for all $a \in [c^2/4, c^2).$ We can then proceed by "induction" in some sense; $f(1)$ has the same value as $f(x)$ for $x \in [1/4, 1)$, so $f(x)$ is constant on $[1/4, 1]$. We then apply the same method to obtain that $f$ is constant on $[(1/4)^2/4, 1] = [1/64, 1]$, and so on. Similarly, we can extend the range which $f$ is constant to all positive reals greater than $1$, and conclude $f$ is constant. I'll leave this as an exercise.
H: Calculating an Integral via Residues I am trying to solve the following integral: $$\int_0^\infty\frac{x\,dx}{(x^2+1)(2+x)}$$ What I have done is an analytical extension to the upper half plane only by recognising that the above integral is just 1/2 the integral over the entire real line, then only considered poles that exist in the upper half. I then found the corresponding residues of the poles and used the residue theorem. My final answer is: $$(\frac{1}{10} - \frac{i}{5}) \pi$$ However, using Mathematica I find that the actual answer should be: $$\frac{1}{10}(\pi + Log[16])$$ What am I doing wrong? Any ideas? AI: Consider $\displaystyle \oint_C \frac{z \, \textrm{Log }z \, dz}{(z^2+1)(z+2)}$ where $C$ is the keyhole contour that goes from $\epsilon$ to $R$ then around a large circle of radius $R$ counterclockwise, from $R$ to $\epsilon$ (below the positive real axis) and around a circle of radius $\epsilon$ clockwise. There are three poles within the contour at $\{-2,i,-i \}$. The integral is equal to $2\pi i$ times the sum of the residues at the poles inside the contour. The integral around the keyhole is equal to the sum of the integrals along the curves making up the keyhole contour. On the circles of radius $R$ and radius $\epsilon$, the integrals approach zero in the limit. Taking the limit as $\epsilon\rightarrow 0$ and $R\rightarrow \infty$, and showing only the non-zero terms that remain, we have $$\lim_{\epsilon \rightarrow 0 , R \rightarrow \infty} \left[ \int_\epsilon^R \frac{x \log x \, dx}{(x^2+1)(x+2)} - \int_\epsilon^R \frac{x (\log x+2\pi i)\,dx}{(x^2+1)(x+2)}\right]= 2\pi i \sum_{z\in \textrm{poles}} \textrm{Res }\left[ \frac{z \, \textrm{Log }z}{(z^2+1)(z+2)}\right].$$ The left-hand side is $$-2\pi i \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)}$$ The sum of the residues (multiplied by $2\pi i$) is also purely imaginary, so we obtain a real expression for the integral. $$ \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)} = - \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=i} - \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=-i} - \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=-2}.$$ $$ \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)} = - \frac{ i (\frac{i\pi}{2})}{2i(2+i)} - \frac{ (-i) (\frac{3\pi i}{2})}{(-2i)(2-i)} + \frac{2(\log 2 + \pi i)}{5}.$$ After simplifying, we get $$\int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)}=\frac{\pi}{10} + \frac{2 \ln 2}{5}.$$ This agrees with the result you quoted from Mathematica.
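The closed form can be confirmed numerically (a SciPy sketch; the quadrature call is an illustration, not part of the original answer):

```python
from math import log, pi
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda x: x / ((x**2 + 1) * (x + 2)), 0, np.inf)
print(value, pi / 10 + 2 * log(2) / 5)   # both are approximately 0.5914
```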
H: Measure "uniform distribution" or "how equally distributed" is an amount represented with a group of numbers To code a program I'm looking for an algorithm that returns 0 when a set is evenly distributed and 1 when it's totally "unfairly" distributed (or the other way around, it does not matter). For example: Let's say there are 3 persons, each person has 2 dollars (so 6 dollars is the total amount of money). Suppose each element in the list represent the number of dollars that a particular person is given. Then, given the set: [2, 2, 2] the algorithm should return 0, because 6 it's 100% fairly distributed. However, if we instead give the algorithm the set: [0, 6, 0] it should return 1 because one person has all the money and the others have zero money, it's 100% unfair. As another example, the set: [0, 5, 1] it should return something like 0.85 (I'm guessing because I don't know how to calculate it) Of course it should work with any amount of numbers. I have zero knowledge of math so I really need your help. At least if you can guide me on the terms to search the web for this that would be great because I was not able to find anything useful. AI: Perhaps one way of doing this would be to find the standard deviation of the set, and to then normalize it. The way to normalize it would be to find the maximum standard deviation for the set, and divide by it. The linked answer allows you to determine such a maximum value. Then, suppose you were to calculate the standard deviation $\sigma$ of your given set, and $\sigma_m$ is your maximum standard deviation. Then, $\sigma/\sigma_m$ would give you an indication of how "unfairly" the set is distributed.
H: Prove Lissajous curves are closed given the ratio between a and b is rational Lissajous curves are curves described by the parametric equations: $$x(t)=A\sin(at+k), \ \ y(t)=B\sin(bt)$$ According to wikipedia, the curve is only closed iff the ratio between $a$ and $b$ is rational. I was wondering if someone could prove this rigorously? I have not been able to find anything online. AI: Suppose $\frac{a}{b}$ is rational, with simplified form $\frac{p}{q}$. $x(t)$ has period $\tau_a = \frac{2\pi}{a}$ and $y(t)$ has period $\tau_b = \frac{2\pi}{b}$. Hence $q\tau_b = p\tau_a$, i.e. $q$ periods of $\tau_b$ fit into $p$ periods of $\tau_a$. Then the curve $f(t) = (x(t), y(t))$ has period $\tau = \max\{p\tau_a, q\tau_b\}$. Thus the curve $f$ is closed (since after a time $\tau$, $x(t)$ and $y(t)$ return to the same position and then retrace the same curve). Now assume $\frac{a}{b}$ is irrational. Again $x(t)$ has period $\tau_a = \frac{2\pi}{a}$ and $y(t)$ has period $\tau_b = \frac{2\pi}{b}$, but no integer number of periods $p$ of $\tau_a$ fit into an integer number of periods $q$ of $\tau_b$ (otherwise $\frac{\tau_b}{\tau_a} = \frac{a}{b} = \frac{p}{q}$, contradicting the fact that $\frac{a}{b}$ is irrational). Therefore $f(t) = (x(t), y(t))$ has infinite period, i.e. it is not periodic. Hence as $t \rightarrow \infty$, $f(t)$ never returns to $(x(0), y(0))$ and retrace what has been traversed thus far (otherwise, it has a finite period). But then what $f$ traces out is not a closed curve.
H: I cannot follow the logic of the proof of Theorem 3-5 on p. 51 in "Calculus on Manifolds" by Michael Spivak. Please explain it to me. I am reading "Calculus on Manifolds" by Michael Spivak. I cannot follow the logic of the proof of the above theorem. Then each $v(U_i)$ is the sum of certain $t_j - t_{j-1}$. Moreover, each $[t_{j-1}, t_j]$ lies in at least one $U_i$ (namely, any one which contains an interior point of $[t_{j-1}, t_j]$), so $\sum_{i=1}^n v(U_i) \geq \sum_{j=1}^k (t_j - t_{j-1}) = b - a$. I can understand that each $v(U_i)$ is the sum of certain $t_j - t_{j-1}$ and I can understand that each $[t_{j-1}, t_j]$ lies in at least one $U_i$, because $t_0, t_1, \cdots, t_k$ are all endpoints of all $U_i$. But I cannot understand why $\sum_{i=1}^n v(U_i) \geq \sum_{j=1}^k (t_j - t_{j-1})$. Please explain it to me. AI: That is because of the statement that each $[t_{j-1}, t_j]$ lies in at least one $U_i$. Let $w_j$ be the number of the $U_i$ that contain $[t_{j-1}, t_j]$; we know that $w_j \ge 1$ and $t_j -t_{j-1} > 0$. $$\sum_{i=1}^nv(U_i)=\sum_{j=1}^kw_j(t_{j}-t_{j-1}) \ge \sum_{j=1}^k(t_{j}-t_{j-1})$$
H: Infinite union clarification I am 99% sure but still I want some clarification. Is it true that: $$\bigcup_{n\in\mathbb N}[0,\frac{n-1}{n})=[0,1]$$ AI: Unfortunately, this is false; it's actually $[0,1)$, with an open endpoint at $1$. This is because $1$ is not in any set $[0, \frac{n-1}{n})$.
H: Asymmetric Random Walk on $\mathbb{Z}$ Suppose we have an asymmetric random walk on $\mathbb{Z}$ starting at $0$, with transition probabilities $p(x,x+1)=\frac{1}{3}$ and $p(x, x-1)=\frac{2}{3}$. What is the probability that this random walk ever reaches some positive integer $n$? I see that this random walk is asymmetric and the probability that it ever reaches any negative integer is $1$. But I am not sure about the case for positive $n$. I can only intuitively guess that the probability gets closer to $0$ the bigger $n$ is, but I was wondering how one would solve for each $n$. AI: Let $a$ be the probability that you ever reach $n+1$ from $n$. Then we know that $$a=\frac23a^2 + \frac13^*$$ which gives us $a=\frac12$, as $a\neq 1$. *How did I get this? Notice that the probability you ever reach $n+1$ in the next move is $\frac13$ -- and if you don't reach it in one move, then you go down a step. This means the probability of reaching it is $a^2$ because you need to move up twice, for a total of, well, what it says there. So your answer is just $a^n=\frac1{2^n}$.
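A Monte Carlo check of the $1/2^n$ answer (plain Python; the `floor` cutoff is an illustrative truncation, since a walk that has drifted far below the origin returns to $n$ with only negligible probability):

```python
import random

def reaches(n, p_up=1/3, floor=-60):
    """One walk from 0: True if it hits +n before dropping to `floor`.
    From `floor`, the chance of ever climbing back to n is below 2**(floor - n)."""
    pos = 0
    while floor < pos < n:
        pos += 1 if random.random() < p_up else -1
    return pos == n

random.seed(1)
trials = 100_000
for n in (1, 2, 3, 4):
    est = sum(reaches(n) for _ in range(trials)) / trials
    print(n, round(est, 4), 1 / 2**n)   # estimates track 0.5, 0.25, 0.125, 0.0625
```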
H: Finding the circumradius-to-inradius ratio for the regular pentagon Find the value of $R/r$: I go with co-ordinate geometry, taking the centre of the circles at the origin, so that the equations of the circles become $$ x^2 + y^2 = R^2 $$ $$ x^2 + y^2 = r^2 $$ After this I am not able to solve it. 2nd attempt: AI: The hint: Let $ABCDE$ be our regular pentagon, let $O$ be the center of the circles and let $T$ be the point where the incircle touches $AB$. Thus, $OT=r$, $OA=R$ and $\measuredangle ATO=90^{\circ}.$ Hence, $$\measuredangle AOT=\frac{360^{\circ}}{5\cdot2}=36^{\circ}$$ and $$AT=R\sin36^{\circ}=r\tan36^{\circ}.$$ Can you finish it now?
H: Diagram shows quadrilateral ABCD, with OA = (-6,3) , OB = (5,5), OC = (7,-2) , OD = (-4,-6). Show that midpoints P, Q, R & S form a parallelogram. The diagram below shows the quadrilateral ABCD, with OA = (-6,3) , OB = (5,5), OC = (7,-2) , OD = (-4,-6). Show that the midpoints P, Q, R & S form a parallelogram. My working out so far: To show that PQRS forms a parallelogram, we must show: PQ//SR and SP//RQ I'm unsure how to find each of the position vectors of P, Q, R and S. For example to find OP would you first find AB? ^^ Given this is the case, I have taken AB = OB + AO = OB - OA = (5i + 5j) - (-6i + 3j) = -i +2j I don't know how to continue the proof from here. How would you explain the steps of how to prove this? This is from a Grade 11 exam paper. AI: First of all, I recommend that when you can, without too much effort, solve a general problem (for which the question pertains to a specific case), you should always do so. Here, the general case may actually be simpler! Let (for instance), $\vec{OA} = \mathbf a, \vec{OP} = \mathbf p$ and so forth. These are position vectors relative to the origin. First find the position vectors for $\mathbf{p,q,r,s}$ respectively as $\frac 12\mathbf{(a+b)},\frac 12\mathbf{(b+c)}, \frac 12\mathbf{(c+d)}, \frac 12\mathbf{(d+a)}$ Next find the vectors for $\vec{PQ}, \vec{QR},\vec{RS},\vec{SP}$ respectively, for example: $\vec{PQ} = \vec{OQ} - \vec{OP} = \frac 12\mathbf{(b+c)} - \frac 12\mathbf{(a+b)} = \frac 12\mathbf{(c-a)}$. Similarly, $\vec{QR} = \frac 12\mathbf{(d-b)}, \vec{RS} = \frac 12\mathbf{(a-c)}, \vec{SP} = \frac 12\mathbf{(b-d)}$. You shouldn't have to work each of these out, just use the cyclical symmetry of the vertices! Finally, note that $\vec{PQ} = -\vec{RS}$ and $\vec{QR} = -\vec{SP}$, which yields: $|PQ| = |RS|, PQ \parallel RS$ and $|QR| = |SP|, QR \parallel SP$ which is sufficient to establish that $PQRS$ is a parallelogram. (To be precise, even one set of line segments of equal length and parallel would do it, but no harm in being thorough). Note that the actual numerical coordinates of the points in the specific question never needed to be used because it's just a case of a more general theorem called Varignon's Theorem. The proof I gave above establishes the general theorem.
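With the given coordinates the argument is easy to check numerically (a NumPy sketch, illustrative only):

```python
import numpy as np

A, B, C, D = map(np.array, [(-6, 3), (5, 5), (7, -2), (-4, -6)])
P, Q, R, S = (A + B) / 2, (B + C) / 2, (C + D) / 2, (D + A) / 2   # midpoints

print(P, Q, R, S)
print(np.allclose(Q - P, R - S))   # True: PQ equals SR, so PQ is parallel to SR
print(np.allclose(P - S, Q - R))   # True: SP equals RQ, so SP is parallel to RQ
print(Q - P, (C - A) / 2)          # both equal half of the diagonal AC
```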
H: A beginner’s explanation for PCA on a multivariate time series This is very much a beginner’s question. Say you have a 10 dimensional vector for every day in a time series of 100 days. I was reading about using PCA to reduce this to a one dimensional time series. If the time series is represented by a 10 by 100 matrix, what are the exact mathematical steps to reduce it to a 1 by 100 time series? AI: I don't know what your mathematical background is, but in terms of matrices you can do it like this. 1) Call the $10\times 100$ matrix $M$. 2) Build a new matrix by subtracting the mean vector from each column vector: $$M_{ml} = M - \mu.$$ 3) Calculate $M_{ml} {M_{ml}} ^T$. 4) Calculate the eigenvalues and eigenvectors of this matrix. By the spectral theorem it has an ON basis of eigenvectors, so as usual for an ON transformation you can write it as $T = ADA^{T}$. 5) Remove the rows of all the non-largest diagonal values in the $D$ matrix; call the modified matrix $D_m$. 6) To get the 1x100 matrix, multiply the matrix $M_{ml}$ by the modified $D_mA^T$. 7) To project onto the shrunken space we need to remove the corresponding columns of the $A$ matrix (the same as the rows we removed in 5); call the modified matrix $A_m$. 8) Your 1x100 matrix will be $(D_mA^T)M_{ml}$, and your projection onto the space will be $A_m(D_mA^T)M_{ml}$.
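A NumPy sketch of the recipe on random data (illustrative; it projects onto the leading eigenvector directly, which differs from the $D_mA^T$ step above only by a scalar factor, since $D_mA^T=\lambda_1 a_1^T$ for the leading eigenpair $(\lambda_1, a_1)$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 100))        # 100 days of a 10-dimensional vector

M_ml = M - M.mean(axis=1, keepdims=True)  # step 2: subtract the mean vector
C = M_ml @ M_ml.T                         # step 3: 10x10 scatter matrix
eigvals, A = np.linalg.eigh(C)            # step 4: C = A diag(eigvals) A^T
a1 = A[:, [-1]]                           # leading eigenvector (eigh sorts ascending)

series_1d = a1.T @ M_ml                   # the reduced 1x100 time series
projection = a1 @ series_1d               # its embedding back into the 10x100 space
print(series_1d.shape, projection.shape)  # (1, 100) (10, 100)
```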
H: Contour integral around an essential singularity The complex function $$f(z)=\frac{1}{e^{1/z}-1}$$ has an essential singularity at $z=0$, and an infinite quantity of poles inside every open neighborhood containing it. Let $\mathbb{R}\ni\epsilon>0$. What is the value of $$\int_{|z|=\epsilon} \frac{dz}{e^{1/z}-1}=?$$ AI: Instead of giving ourselves a headache with residues, we can convert this into different line integral $$\int_{|z|=1} \frac{dz}{e^{\frac{1}{z}}-1} = \int_0^{2\pi} \frac{ie^{it}\:dt}{e^{e^{-it}}-1} = \int_0^{2\pi} \frac{ie^{2it}e^{-it}\:dt}{e^{e^{-it}}-1} = \int_{|z|=1} \frac{dz}{z^2(e^z-1)}$$ The second integral only has a pole of order $3$ at $z=0$ inside the unit disk. Calculating the Laurent series gives us $$z^{-3}\left(1+\frac{1}{2}z+\frac{1}{6}z^2+\cdots\right)^{-1} = z^{-3}\left(1-\frac{z}{2}-\frac{1}{6}z^2+\frac{1}{4}z^2+\cdots\right) = \frac{1}{z^3} - \frac{1}{2z^2} + \frac{1}{12z}+\cdots$$ by geometric series. The residue is $\frac{1}{12}$ thus the integral evaluates to $$\int_{|z|=1} \frac{dz}{e^{\frac{1}{z}}-1} = \int_{|z|=1} \frac{dz}{z^2(e^z-1)} = \frac{\pi i}{6}$$ and since the original function did not have any poles outside the unit disk, the integral on $|z|=2$ will be exactly the same.
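The value $\pi i/6$ for the contour $|z|=1$ is easy to confirm numerically by parametrising the circle (a NumPy sketch; the midpoint rule converges very fast here because the integrand is smooth and periodic on the circle):

```python
import numpy as np

m = 20000
t = (np.arange(m) + 0.5) * 2 * np.pi / m         # midpoint nodes on [0, 2*pi]
z = np.exp(1j * t)                               # the contour |z| = 1
vals = 1j * z / (np.exp(1.0 / z) - 1.0)          # f(z) dz = f(e^{it}) i e^{it} dt
print(vals.mean() * 2 * np.pi, np.pi * 1j / 6)   # both are approximately 0.5236j
```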
H: Why is $a$ not coercive when it is defined on $H^1(\Omega)$? Given the Poisson's equation with homogeneous Neumann boundary conditions and the associated bilinear form $$a(u,v) = \int_{\Omega}\nabla u \cdot \nabla v \, dx$$ on $H^1(\Omega)$, why is $a$ not coercive? Whereever I look it says we need $H_0^1(\Omega)$ for coercivity, but I am not sure why $H^1(\Omega)$ wouldn't work. Thanks for the help. AI: This is because $1 \in H^1(\Omega)$ and $a(1,1) = 0$. Here, $1$ is the constant function with value $1$ everywhere.
H: Is the sequence $f_n(x)=\frac{x^{2n}}{1+x^{2n}}$ uniformly convergent? Given $R\in\mathbb{R}$, $R>1$ investigate the sequence of functions $(f_n)_{n\in\mathbb{N}}$, given by $$f_n(x)=\frac{x^{2n}}{1+x^{2n}}, \qquad x \in\ [R,\infty)$$ with regard to uniform convergence. Could someone help me with this problem? I got that the limit of $f_n(x)$ converges to $1$. But does this mean, that $f_n(x)$ is not pointwise convergent and that's why also not uniformly convergent? AI: The question has been edited. $f_n(x)$ converges point-wise to $1$. $|f_n(x)-1|=\frac 1 {1+x^{2}n} \leq \frac 1{1+R^{2}n}$ and $\frac 1 {1+R^{2}n} \to 0$. Hence the convergence is uniform. Answer for the revised version: $f_n(x)$ converges point-wise to $1$. $|f_n(x)-1|=\frac 1 {1+x^{2n}} \leq \frac 1{1+R^{2n}}$ and $\frac 1 {1+R^{2n}} \to 0$. Hence the convergence is uniform.
H: How to find an argument of $ z=1-\cos\varphi-i\sin\varphi $? How to find an argument of this complex number? $$ z=1-e^{i\varphi}=1-\cos\varphi-i\sin\varphi $$ I know I should try to make $z$ equal to $r(\cos\psi+i\sin\psi)$. And then $Arg(z)=\psi$. But I don't really understand how to do that. I tried something like that: $$ z=2\sin^2\frac{\varphi}{2}-2i\sin\frac{\varphi}{2}\cos\frac{\varphi}{2}=2\sin\frac{\varphi}{2}\left(\sin\frac{\varphi}{2}-i\cos\frac{\varphi}{2}\right) $$ But obviously it didn't help much. AI: I think you're on the right track: \begin{align*} z&=2\sin\frac{\varphi}{2}\left(\sin \frac{\varphi}{2}-i\cos\frac{\varphi}{2}\right)\\ &=2\sin\frac{\varphi}{2}\left(-ie^{i\varphi/2}\right)\\ &=2\sin\frac{\varphi}{2}\left(e^{-i\pi/2}e^{i\varphi/2}\right)\\ &=\dots \end{align*}
H: Multivariable polynomial in $R[X_1,..,X_n]$ whose roots are all elements in $R^n$ is zero. Suppose that the roots of $p\in R[x_1,...,x_n]$, where R is algebraically closed and $n\geq 1 $, is the entire set $R^n$, is $p$ necessarily the zero polynomial. I think it might be true but I cant prove it. I know that it holds for the case n=1 since polynomials of degree $k$ can have at most k roots. But it is possible for multivariable polynomials to have infinite roots and still not be zero. So I would like to know if p is zero. AI: Good question. Yes, $p$ is zero and what we need to use is that $R$ is infinite (which is implied since it is algebraically closed in your case). The idea is to induct (because we understand polynomials in one variable after all!). The result is folklore for $n = 1$. Let $$0 \neq p \in R[x_1, \dots, x_n] = R[x_1, \dots, x_{n-1}][x_n].$$ Since $p \neq 0$, there exists some $f \in R[x_1, \dots, x_{n-1}]$ such that $0 \neq p(f) \in R[x_1, \dots, x_{n-1}]$. By induction hypothesis, there exists $a_1, \dots, a_{n-1} \in R$ such that $p(f)(a_1, \dots, a_{n-1}) \neq 0$. In particular, $p(a_1, \dots, a_{n-1}, f(a_1, \dots, a_{n-1})) \neq 0$.
H: Is the ring $\mathbb{Z}_5[x]/I$ a field, when $I=(3x^3+2x+1)\mathbb{Z}_5[x]$? Is the ring $\mathbb{Z}_5[x]/I$ a field, when $I=(3x^3+2x+1)\mathbb{Z}_5[x]$? Can you show me step-by-step instructions for how to solve this problem? AI: Since $\mathbb{Z}_5$ is a field, we know $\mathbb{Z}_5[x]$ is a PID. Note that $3x^3+2x+1$ is irreducible over $\mathbb{Z}_5$, since the polynomial is of degree $3$ and plugging in every element of $\mathbb{Z}_5$ does not yield a zero. This argument works because any polynomial of degree $2$ or $3$ over a field is irreducible if and only if it does not have a root in that field. Therefore the ideal generated by $3x^3+2x+1$ is prime, hence is maximal (we are over a PID). We thus conclude that the quotient is a field (the quotient of a ring by a maximal ideal is a field).
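The no-root check from the argument takes one line to verify (Python, illustrative):

```python
# A cubic over the field F_5 is irreducible iff it has no root in F_5.
roots = [x for x in range(5) if (3 * x**3 + 2 * x + 1) % 5 == 0]
print(roots)   # [] -> 3x^3 + 2x + 1 is irreducible over Z_5, so the quotient is a field
```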
H: Distributive law for subgroups I believe that for subgroups $H,K,L\le G$ distributivity does not apply, but could someone give an example to illustrate this: $$ \langle H\cup K\rangle\cap L\neq\langle (H\cap L)\cup (K\cap L)\rangle $$ Apparently with subgroups $n\mathbb{Z}$ of $\mathbb{Z}$ equality is fulfilled. Any hints would be appreciated. AI: Let $G = S_3$, and let $H$ and $L$ be generated by distinct $2$-cycles and $K$ be generated by a $3$-cycle. Then $\langle H \cup K\rangle \cap L = G \cap L = L$, whereas $\langle (H \cap L) \cup (K \cap L)\rangle = \langle 1 \cup 1\rangle = 1$.
H: Proving that PDF of a random variable is given by $\frac{l}{\pi(l^2+t^2)}$ I'm solving the following exercise in "Problems of Probability Theory" by Sveshnikov. A straight line is drawn at random through the point $(0,l)$ in the plane. Let $X$ be the random variable denoting the distance between the $x$-intercept of this line and the origin. Show that the probability density function of $X$ is given by $$f_X(t) = \frac{l}{\pi(l^2+t^2)}$$ My attempt was to first assume that $\theta$, the angle between the line and the line $y = a$, is uniformly distributed over the interval $(0,\pi)$, so we have $X = a \tan(\theta)$. Then I tried integrating the density function $\frac{1}{\pi} \mathbb{1}_{(0,\pi)}$, but my solution is still not correct. Not sure where I went wrong. Any hint/help? Thanks. Edit: I tried to get the CDF first: $$F_X(t) = \mathbb{P}(X \leq t) = \mathbb{P}(\theta \leq \arctan(t/a)) = \frac{1}{\pi} \int_{-\infty}^{\arctan(t/a)} \mathbb{1}_{(0,\pi)}(s) \ \text{d}s$$ I think my confusion is in re-writing the bounds of the integral; I re-wrote them as $\int_{0}^{\arctan(t/a)}$ and got $\frac{1}{\pi} \arctan(t/a)$, which of course gives the wrong derivative $F' = f$. AI: Let $\theta$ denote the angle between the random line and the line $y = l$. Then $\theta \sim \mathrm{Unif}[0, \pi]$ and $p_{\theta}(x) = \frac{1}{\pi}, \ x \in [0, \pi]$. $X = \frac{l}{\tan \theta}$. By the change of variables formula we get: $p_{X}(y) = p_{\theta}(x^{-1}(y)) \cdot |x^{-1}(y)^{\prime}|$. $y = \frac{l}{\tan x} \Rightarrow x = \arctan \left( \frac{l}{y} \right), \quad x^{-1}(y)^{\prime} = \frac{-l}{y^2 + l^2}$. So we have: $p_{X}(y) = \frac{1}{\pi} \cdot \frac{l}{y^2 + l^2}, \ y \in (-\infty, +\infty)$. The range of $X$ is obtained from the equation $X = \frac{l}{\tan \theta}$: $X(\theta = 0) = + \infty, \ X(\theta = \pi) = -\infty$.
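A Monte Carlo check of the resulting density (a NumPy sketch with $l=2$; the histogram is normalised by the total sample size so that it can be compared with the pdf on a bounded window):

```python
import numpy as np

l = 2.0
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, 1_000_000)   # uniform angle of the random line
X = l / np.tan(theta)                        # signed x-intercept, as in the answer

counts, edges = np.histogram(X, bins=80, range=(-10, 10))
width = edges[1] - edges[0]
empirical = counts / (X.size * width)        # empirical density on [-10, 10]
mids = (edges[:-1] + edges[1:]) / 2
pdf = l / (np.pi * (l**2 + mids**2))
print(np.max(np.abs(empirical - pdf)))       # small (order 1e-3): the samples follow the pdf
```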
H: Angle between the diagonal and edge of a cube (application of dot product) Question: Find the angle between the diagonal of a cube and one of its edges. I know how to do this question but in particular I do not understand why $\vec b=<1,0,0>$ rather than $<0,0,0>$ For this question, I assumed that the cube has side lengths of 1 and is placed such that its left corner is at the origin $<0,0,0>$. So the vectors $\vec b$ and $\vec a$ have components $<0,0,0>$ and $<1,1,1>$ respectively, where $\vec b$ is the bottom left corner of the cube at the origin and $\vec a$ is the position vector from the origin to the top right corner of the cube. So the angle between the diagonal and one of its edges is $$θ=\cos^{-1}\frac{\vec a.\vec b}{|\vec a||\vec b|}=\cos^{-1}\frac{<1,1,1>.<0,0,0>}{√3}=\cos^{-1}\frac{0}{√3}=90°$$ But this answer is wrong as $\vec b=<1,0,0>$. So my question is why is $\vec b=<1,0,0>$ and not $<0,0,0>$? AI: $(0,0,0)$ is the vector from $(0,0,0)$ to $(0,0,0)$ so it has length 0 and in particular cannot possibly represent an edge of the cube. By the symmetry of the cube we can assume that without loss of generality it is given by the standard cube $[0,1]^3$. Moreover it does not matter which diagonal and which edge we choose to compare, as long as they share a point. Hence we may take the diagonal $(1,1,1)$ and any of the edges $(1,0,0),$ $(0,1,0)$ or $(0,0,1)$. Indeed we find that $$\langle (1,0,0),(1,1,1)\rangle = ... = \langle (0,0,1),(1,1,1) \rangle = 1$$ Thus we can compute $$\vartheta = \cos^{-1} \dfrac{\langle (0,1,0),(1,1,1) \rangle}{\sqrt 3} = \cos^{-1} \dfrac{1}{\sqrt 3}$$
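A one-liner check of the final angle (NumPy, illustrative):

```python
import numpy as np

edge = np.array([1.0, 0.0, 0.0])   # edge direction <1,0,0> from the corner at the origin
diag = np.array([1.0, 1.0, 1.0])   # space diagonal <1,1,1> from the same corner

cos_theta = edge @ diag / (np.linalg.norm(edge) * np.linalg.norm(diag))
print(np.degrees(np.arccos(cos_theta)))   # about 54.74 degrees = arccos(1/sqrt(3))
```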
H: Continuity of map from $M(n,\mathbb{R})$ to itself. I am self-studying the topology of metric spaces and I encountered an exercise which requires proving that the map from $M(n,\mathbb{R})$ to $M(n,\mathbb{R})$ defined by $A \mapsto A^{k}$ is continuous, where $k$ is any positive integer and $M(n,\mathbb{R})$ is given the Euclidean norm. I am able to do it for $k=2$ but don't know how to show it for any natural number $k$. I also thought of induction on $k$ but can't make progress. Please give me a hint to show this. AI: Yes, it can be done by induction. If $k=1$, you have the identity map, which is continuous. And, if $k\in\Bbb N$ and if the map $A\mapsto A^k$ is continuous, then the map $A\mapsto A^{k+1}$ is continuous too, since it is the previous map times the identity map. And the product of two continuous maps is continuous.
H: Mistake in the proof of Binet's formula I want to prove Binet's formula $F_{k} = \dfrac{1}{\sqrt{5}}\times\left[\left(\dfrac{1+\sqrt{5}}{2}\right)^k-\left(\dfrac{1-\sqrt{5}}{2}\right)^k\right]$ for the $k_{th}$ fibonacci number. I did as follows - $F_{0}=0, F_{1} = 1$ ..... $F_{k} = F_{k-1}+F_{k-2}$ $F_kx^k=xF_{k-1}x^{k-1} + x^2F_{k-2}x^{k-2}$ summing over $k$ from $2$ to $\infty$, $g(x)-1 = x(g(x)-1)+x^2g(x)\implies g(x) = \dfrac{x}{1-x-x^2} = \dfrac{-x}{x^2+x-1}$ where $g$ is the generating function of $F_0,F_1,F_2....$ Factoring we get $g(x) = \dfrac{-x}{\left(x-\frac{1+\sqrt{5}}{2}\right)\left(x-\frac{1-\sqrt{5}}{2}\right)} =-\left[\dfrac{\dfrac{\sqrt{5}-1}{2\sqrt{5}}\left(x-\dfrac{1+\sqrt{5}}{2}\right)+\dfrac{\sqrt{5}+1}{2\sqrt{5}}\left(x-\dfrac{1-\sqrt{5}}{2}\right)}{\left(x-\frac{1+\sqrt{5}}{2}\right)\left(x-\frac{1-\sqrt{5}}{2}\right)}\right]$ $\implies g(x) = -\left[\dfrac{\sqrt{5}-1}{2\sqrt{5}}\cdot\dfrac{1}{\left(x-\frac{1-\sqrt{5}}{2}\right)} + \dfrac{\sqrt{5}+1}{2\sqrt{5}}\cdot\dfrac{1}{\left(x-\frac{1+\sqrt{5}}{2}\right)}\right] = \left[\dfrac{\sqrt{5}-1}{2\sqrt{5}}\cdot\dfrac{1}{\left(\frac{1-\sqrt{5}}{2}-x\right)} + \dfrac{\sqrt{5}+1}{2\sqrt{5}}\cdot\dfrac{1}{\left(\frac{1+\sqrt{5}}{2}-x\right)}\right]$ $\implies g(x) = \dfrac{\sqrt{5}-1}{2\sqrt{5}}\cdot\dfrac{\frac{2}{1-\sqrt{5}}}{\left(1-\frac{2x}{1-\sqrt{5}}\right)}+\dfrac{\sqrt{5}+1}{2\sqrt{5}}\cdot\dfrac{\frac{2}{1+\sqrt{5}}}{\left(1-\frac{2x}{1+\sqrt{5}}\right)} = \dfrac{1}{\sqrt{5}} \left[\dfrac{1}{1-\frac{2x}{1+\sqrt{5}}}-\dfrac{1}{1-\frac{2x}{1-\sqrt{5}}} \right]$ $\implies g(x) = \dfrac{1}{\sqrt{5}}\left[\displaystyle\sum_{k=0}^{\infty}\left(\dfrac{2x}{1+\sqrt{5}}\right)^k-\displaystyle\sum_{k=0}^{\infty}\left(\dfrac{2x}{1-\sqrt{5}}\right)^k\right] = \dfrac{1}{\sqrt{5}}\displaystyle\sum_{k=0}^{\infty}\left[\left(\dfrac{2}{1+\sqrt{5}}\right)^k-\left(\dfrac{2}{1-\sqrt{5}}\right)^k\right]x^k$ $\\$ So I'm getting $F_k = \dfrac{1}{\sqrt{5}}\times \left[\left(\dfrac{2}{1+\sqrt{5}}\right)^k-\left(\dfrac{2}{1-\sqrt{5}}\right)^k\right]$ wheres actually $F_{k} = \dfrac{1}{\sqrt{5}}\times\left[\left(\dfrac{1+\sqrt{5}}{2}\right)^k-\left(\dfrac{1-\sqrt{5}}{2}\right)^k\right]$. Where did I go wrong? I cant figure it out. Any help is very appreciated. Thanks! AI: Since the roots of $x^2+x-1$ are $\frac{-1\pm\sqrt{5}}{2}$, at the begining it should be $$g(x) = \dfrac{-x}{\left(x+\frac{1+\sqrt{5}}{2}\right)\left(x+\frac{1-\sqrt{5}}{2}\right)}.$$ and therefore, at the end, we find $$F_k = -\dfrac{1}{\sqrt{5}}\times \left[\left(\dfrac{-2}{1+\sqrt{5}}\right)^k-\left(\dfrac{-2}{1-\sqrt{5}}\right)^k\right].$$ We recover the classic Binet's formula, by noting that, $$\frac{-2}{1+\sqrt{5}}=\frac{-2}{1+\sqrt{5}}\cdot\frac{1-\sqrt{5}}{1-\sqrt{5}}= \frac{-2(1-\sqrt{5})}{1-5}=\frac{1-\sqrt{5}}{2}$$ and $$\frac{-2}{1-\sqrt{5}}=\frac{-2}{1-\sqrt{5}}\cdot\frac{1+\sqrt{5}}{1+\sqrt{5}}= \frac{-2(1+\sqrt{5})}{1-5}=\frac{1+\sqrt{5}}{2}.$$
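A quick numeric comparison makes the sign problem visible (plain Python; `op_form` is the formula derived in the question, `binet` the classical one that the corrected derivation recovers):

```python
from math import sqrt

def fib(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

s5 = sqrt(5)
phi, psi = (1 + s5) / 2, (1 - s5) / 2
for k in range(1, 9):
    binet = (phi**k - psi**k) / s5                          # classical Binet formula
    op_form = ((2 / (1 + s5))**k - (2 / (1 - s5))**k) / s5  # formula from the question
    print(k, fib(k), round(binet), round(op_form, 3))       # op_form drifts off from k = 2
```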
H: Auto-Homeomorphism of R making polynomials discontinuous Consider $\mathbb{R}$ with its usual topology and let $f(x)=x^2$ and $g(x)=\max\{1,x\}$. Does there exist a homeomorphism $\phi:\mathbb{R}\to \mathbb{R}$ such that $\phi\circ f = g$? Obviously no such diffeomorphism can exist, but it's not clear that the same is true for a homeomorphism... AI: No, because then $1 = \max(1, -2) = (\phi \circ f)(-2) = (\phi \circ f)(2) = \max(1, 2) = 2$. Even if we consider $f \circ \phi$, it cannot be done. $\phi$ must be continuous and injective, so it must be strictly monotone. Then $\lim_{x \to -\infty} (f \circ \phi)(x) = \infty$, but $\lim_{x \to -\infty} \max(1, x) = 1$. This also shows that $(\phi^{-1} \circ f \circ \phi)(x) = \max(x, 1)$ is impossible.
H: Approximation of an integrable function by continuous function with an extra criteria. Let $f :[0,1]\rightarrow \mathbb{R}$ satisfying $$\int_0^1|f(t)|dt<\infty .$$ We know that for given $\epsilon >0$ there exists a continuous function $g$ on $[0,1]$ such that $$\|f-g\|_{L^1([0,1])}<\epsilon .$$ Question: Can we find a continuous function $g$ satisfying above such that $Z(f)\subseteq Z(g)$? ($Z(f)$ is the zero set of $f$) Edit: After comments I changed from $f\in L^1([0,1])$ to above. AI: Let $f=I_A$ where $A$ is the set of all irrational numbers on $[0,1]$. If such a continuous function $g$ exists for some $\epsilon <1$ then $g(x)=0$ for all rational $x$. But $g$ is continuous, so $g(x)=0$ for all $x$. Hence $1=\int f= \int |f-g| <\epsilon$, a contradiction.
H: Probability: Indicator functions and random variables Given three events $A,B,C$ of a sample space, with their indicator functions $I_A$, $I_B$ and $I_C$, prove that the map defined by $$Z=(I_A-I_B)*(I_C-I_B)$$ is a random variable. AI: Hint: $(\mathsf1_A-\mathsf1_B)(\mathsf1_C-\mathsf1_B)$ is also an indicator function. Can you work out yourself of which set? Indicator functions of measurable sets are random variables.
H: Conditional distribution of a Bernoulli given the relative binomial Let $X_1,\dots,X_n$ be independent random variables with $X_i \sim $ Ber($p$), and let $S_n=X_1+\dots+X_n$ be the relative binomial distribution with parameters $n,p$. Then, what is the conditional distribution of $X_i$ given $S_n=r$? I have that $$P(X_i=k \mid S_n=r)=\dfrac{P(X_i=k,\ S_n=r)}{P(S_n=r)}$$ with $k=0,1$, but I really don't understand what I can say about $P(X_i=k,\ S_n=r)$. I mean, they aren't independent, but how does the fact that $S_n=r$ influence the probability that $X_i=k$? AI: The conditional distribution you are looking for is the following: $$ P(X_i=k \mid S_n=r) = \begin{cases} \frac{n-r}{n}, & \text{if } k=0 \\ \frac{r}{n}, & \text{if } k=1 \end{cases}$$ and it is easy to prove. Without loss of generality let's prove the conditional distribution of $X_1$ (all the variables have the same distribution): $$\mathbb{P}[X_1=0|\sum_iX_i=r]=\frac{\mathbb{P}[X_1=0;\sum_{i=1}^nX_i=r]}{\mathbb{P}[\sum_{i=1}^nX_i=r]}=$$ $$=\frac{\mathbb{P}[X_1=0;\sum_{i=2}^nX_i=r]}{\mathbb{P}[\sum_{i=1}^nX_i=r]}=\frac{\mathbb{P}[X_1=0]\mathbb{P}[\sum_{i=2}^nX_i=r]}{\mathbb{P}[\sum_{i=1}^nX_i=r]}=$$ $$\frac{(1-p)\binom{n-1}{r}p^r(1-p)^{n-1-r}}{\binom{n}{r}p^r(1-p)^{n-r}}=\frac{(n-1)!r!(n-r)!}{r!(n-1-r)!n!}=\frac{n-r}{n}$$ A similar computation gives $\mathbb{P}[X_1=1\mid S_n=r]=\frac{r}{n}$.
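A quick Monte Carlo check of the formula $P(X_1=1\mid S_n=r)=r/n$ (a minimal sketch in plain Python; the specific values of $n$, $p$ and $r$ below are arbitrary choices of mine):

import random

n, p, r, trials = 10, 0.3, 4, 200_000
hits = total = 0
for _ in range(trials):
    xs = [1 if random.random() < p else 0 for _ in range(n)]
    if sum(xs) == r:           # condition on the event S_n = r
        total += 1
        hits += xs[0]          # count how often X_1 = 1 on that event
print(hits / total, r / n)     # the two numbers should be close (about 0.4 here)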
H: Prove that solution of equation $x^p=a$ exists, where $a$ is a fixed element in a finite field $F$ and $charF=p.$ I have to prove that solution of equation $x^p=a$ exists and it's unique, where $a$ is a fixed element in a finite field $(F,+,\cdot,0,1)$ and $charF=p.$ I know how to prove that $p$ is a prime number and using binomial formula it is easy to prove $(x-y)^p=x^p-y^p,$ so the solution is unique. I don't know how to prove that the solution exists. For $a=0,$ $x=0$ and for $a=1,$ $x=1.$ But what can be a solution if $a\neq0,$ $a\neq1?$ AI: Define $\varphi: F\to F$ by $x\to x^p$. It is a field homomorphism, hence injective. Since it is an injective map from a finite set to itself it must be surjective as well.
H: Discriminant of $x^4 - 7$ According to the formula on the wikipedia page, the discriminant of $x^4 - 7$ should be 0. But the polynomial has roots $\pm \sqrt[4]{7}$ and $\pm i\sqrt[4]{7}$ so it is separable, so it shouldn't have zero discriminant. What's going on? AI: The discriminant is not zero. In the general quartic formula there is the term $256a^3e^3$, which is nonzero here, while all the other terms vanish because $b=c=d=0$. With $a=1$ and $e=-7$ this gives $256\cdot(-7)^3=-87808\neq 0$.
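A one-line check with a computer algebra system (a sketch using SymPy, assuming it is installed):

from sympy import symbols, discriminant

x = symbols('x')
print(discriminant(x**4 - 7))  # prints -87808 = 256 * (-7)**3, which is nonzero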
H: Why z=0 is a simple pole of $\frac{z}{1-e^{z^2}}$ Why $z=0$ is a simple pole of $\frac{z}{1-e^{z^2}}$ I understood why for $z\neq 0 $ it is a simple pole but for the other part I didn't an explanation will be nice :) AI: It suffices to show that $\lim_{z \to 0} z\left(\frac{z}{1 - e^{z^2}}\right) = L \neq 0$ (i.e. the limit exists, is finite, and is non-zero). We can simply apply L'Hopital Rule: $$ \lim_{z \to 0} \frac{z^2}{1 - e^{z^2}} = \lim_{z \to 0} \frac{2z}{ - 2ze^{z^2}} = \lim_{z \to 0} -e^{-z^2} = -1 $$ Hence it's a pole of order $1$.
H: Easy example of a bijective continuous self mapping whose inverse is discontinuous Let $f : X \to X$ be a continuous bijective mapping from a metric space onto itself. Is $f^{-1}$ continuous too? I don't think so, but I'm struggling to find a counterexample. I've read that if $X=\mathbb{R}^n$ or $X$ a compact Hausdorff space, $f^{-1}$ is always continuous. The Open Mapping Theorem gives us another restriction to find a counterexample, namely all linear operators from a Banach space to itself. Also, I know there's an easy example if we allow two different topologies. Here I'm only considering the topology induced by the one metric we have. AI: Consider the space $$c_{00}=\left\{(x_n)_{n\in\Bbb N}\,\middle|\,\bigl((\forall n\in\Bbb N):x_n\in\Bbb C\bigr)\text{ and }x_n=0\text{ if }n\gg0\right\},$$ endowed with the $\sup$ norm, and define $f\colon c_{00}\longrightarrow c_{00}$ by $$f\bigl((x_n)_{n\in\Bbb N}\bigr)=\left(\frac{x_n}n\right)_{n\in\Bbb N}.$$ Then $f$ is bijective and continuous, but $f^{-1}$ is discontinuous. Actually, it is discontinuous everywhere.
H: What is $\mathbb{P} [S_n=0 \mbox{ for infinitely many $n$}]$? For random variables $X_i$, define the sum $S_n=\sum_{i=1}^n X_i$. Can $\mathbb{P}[S_n=0 \text{ for infinitely many } n]$ be expressed in terms of $$\limsup_{n} \mathbb{P} [S_n=0]?$$ AI: In general you cannot express $P[S_n=0 \, \text {for infinitely many n} \, ] $ in terms of the numbers $P[S_n=0]$. What you can say is $P[S_n=0 \, \text {for infinitely many n} \, ] \geq \lim \sup P[S_n=0]$. This is a consequence of Fatou's Lemma.
H: Square root in $\mathbb F_{2^n}$ Let $\mathbb F_{2^n}$ be a finite field with $2^n$ elements. I am just wondering if it is true that for all $n\in \mathbb N$ all elements of $\mathbb F_{2^n}$ have square roots, i.e. for all $a\in \mathbb F_{2^n}$ there is an element $b\in \mathbb F_{2^n}$ with $a=b^2$? It may be very easy, but I cannot see right now how to prove it. AI: Let $x,y\in \Bbb F_{2^n}$ be two elements with $x^2=y^2$. Then $(x-y)(x+y)=0$, and since a field is an integral domain we have either $x=y$ or $x=-y=y$ (as $\text{char}(\Bbb F_{2^n})=2$); in both cases $x=y$. Hence $x\not=y\iff x^2\not=y^2$, i.e. the squaring map is injective, and an injective map from a finite set to itself is surjective, so that $\big\{x^2\mid x\in \Bbb F_{2^n}\big\}=\Bbb F_{2^n}$. So every element has a square root.
H: For $\pi<\alpha<\frac{3\pi}{2}$ what is the value of $\sqrt{4\sin^4\alpha + \sin^2 2\alpha} + 4\cos^2\left(\frac{\pi}{4} - \frac{\alpha}{2}\right)$? I solved part of it this way: $$\sqrt{4\sin^4\alpha + \sin^2 2\alpha} + 4\cos^2\left(\frac{\pi}{4} - \frac{\alpha}{2}\right)$$ $$= \sqrt{4\sin^4\alpha + 4\sin^2\alpha \cos^2\alpha} + 2\cos\left(\frac{\pi}{2} - \alpha\right) + 2$$ because $\sin 2\alpha = 2\sin\alpha\cos\alpha$ and $\cos2\alpha = \cos^2\alpha - \sin^2\alpha = 2\cos^2\alpha - 1$. The expression then reduces to: $$\sqrt{4\sin^2\alpha(\sin^2 \alpha + \cos^2\alpha)} + 2\sin\alpha + 2$$ since $\cos\left(\frac{\pi}{2} - \alpha\right) = \sin\alpha$. Now, because $\sin^2 \alpha + \cos^2\alpha = 1$, the expression becomes: $$\sqrt{4\sin^2\alpha} + 2\sin\alpha + 2$$ Keep in mind that $\pi < \alpha < \frac{3\pi}{2}$. How do you proceed? I always seem to be left with extraneous sines of $\alpha$, while according to my textbook, the answer is $2$. AI: Now, since $\pi<\alpha<\frac{3\pi}{2}$, it's $$-2\sin\alpha+2\sin\alpha+2=2$$ because for $x<0$ we have: $$\sqrt{x^2}=-x.$$
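A quick numerical check that the whole expression collapses to $2$ on the stated interval (a minimal Python sketch; the sample value of $\alpha$ is an arbitrary choice of mine):

from math import sqrt, sin, cos, pi

a = 1.2 * pi   # any value in (pi, 3*pi/2)
val = sqrt(4 * sin(a)**4 + sin(2*a)**2) + 4 * cos(pi/4 - a/2)**2
print(val)     # approximately 2.0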
H: Chain rule for matrix-vector composition Suppose $A(t,x)$ is a $n\times n$ matrix that depends on a parameter $t$ and a variable $x$, and let $f(t,x)$ be such that $f(t,\cdot)\colon \mathbb{R}^n \to \mathbb{R}^n$. Is there a chain rule for $$\frac{d}{dt} A(t,f(t,x))?$$ It should be something like $A_t(t,f(t,x)) + ....$, what is the other term? AI: Yes, there is a chain rule for such functions. Before getting to that, let's just briefly discuss partial derivatives for multivariable functions. Let $V_1, \dots, V_n, W$ be Banach spaces (think finite-dimensional if you wish, such as $\Bbb{R}^{k_1}, \dots \Bbb{R}^{k_n}, \Bbb{R}^m$, or spaces of matrices, like $M_{m \times n }(\Bbb{R})$ if you wish). Let $\phi: V_1 \times \dots \times V_n \to W$ be a map, and $a = (a_1, \dots, a_n) \in V_1 \times \dots \times V_n$. We say the function $\phi$ has an $i^{th}$ partial derivative at the point $a$, if the function $\phi_i:V_i \to W$ defined by \begin{align} \phi_i(x) := \phi(a_1, \dots, a_{i-1}, x, a_{i+1}, \dots a_n) \end{align} is differentiable at the point $a_i$. In this case, we define the $i^{th}$ partial derivative of $\phi$ at $a$ to be the derivative of $\phi_i$ at the point $a_i$. In symbols, we write: \begin{align} (\partial_i\phi)_a := D(\phi_i)_a \in \mathcal{L}(V_i,W). \end{align} We may also use notation like $\dfrac{\partial \phi}{\partial x^i}(a)$ or anything else which resembles the classical notation. The important thing to keep in mind is that $(\partial_i \phi)(a) \equiv \dfrac{\partial \phi}{\partial x^i}(a)$ is by definition a linear map $V_i \to W$. Note that this is almost word for word the same definition you might have seen before (or atleast if you think about it for a while, you can convince yourself it's very similar). THe idea is of course that we fix all but the $i^{th}$ variable, and then consider the derivative of that function at the point $a_i$. Next, we need one last bit of background. One very important special case which deserves mention is if the domain of the function is $\Bbb{R}$. So, now, suppose we have a function $\psi: \Bbb{R} \to W$. Then, we have two notions of differentiation. THe first is the familiar one as the limit of difference quotients: \begin{align} \dfrac{d\psi}{dt}\bigg|_t \equiv \dot{\psi}(t) \equiv \psi'(t) := \lim_{h \to 0} \dfrac{\psi(t+h) - \psi(t)}{h}. \end{align} (the limit on the RHS being taken with respect to the norm on $W$). The second notion of derivative is that since $\Bbb{R}$ is a vector space and $W$ is also a vector space, $\psi: \Bbb{R} \to W$ is a map between normed vector spaces. So, we can consider the derivative $D \psi_t: \Bbb{R} \to W$ as a linear map. How are these two notions related? Very simple. Note that $\mathcal{L}(R,W)$ is naturally isomorphic to $W$ (because $\Bbb{R}$ is one-dimensional), and the isomorphism is $T \mapsto T(1)$. So, there is a theorem which says that $\psi'(t)$ (the limit of difference quotients) exists if and only if $D\psi_t$ exists (a linear map from $\Bbb{R} \to W$), and in this case, \begin{align} \psi'(t) = D\psi_t(1). \end{align} Hence forth, whenever I use $\dfrac{d}{dt}$ notation or $\dfrac{\partial}{\partial t}$ notation, where the $t$ refers to the fact that the domain is $\Bbb{R}$, I shall always mean the vector in $W$ obtained by the limit of the difference quotient (which you now know is simply the evaluation of the linear map on the element $1 \in \Bbb{R}$). See Loomis and Sternberg's Advanced Calculus, Section $3.6-3.8$ ($3.7, 3.8$ mainly) for more information. 
Anyway, the chain rule in this case is as follows: \begin{align} \dfrac{d}{dt}A(t, f(t,x)) &= \dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} + \dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))} \left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \tag{$*$} \end{align} What does this mean? Well, on the LHS, we have a function $\psi: \Bbb{R} \to M_{n \times n}(\Bbb{R})$, defined by \begin{align} \psi(t):= A(t,f(t,x)) \end{align} and we're trying to computing $\psi'(t)$. On the RHS, note that $A: \Bbb{R} \times \Bbb{R}^n \to M_{n \times n}(\Bbb{R})$. So, the first term is $\dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} \in M_{n \times n}(\Bbb{R})$, which is exactly what you predicted. Now, how do we understand the second term? Again, note that $A$ maps $\Bbb{R} \times \Bbb{R}^n \to M_{n\times n}(\Bbb{R})$. So, $\dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))}$ is the partial deriavtive of $A$ with respect to the variables in $\Bbb{R}^n$ (i.e we're considering $V_1 = \Bbb{R}$ and $V_2 = \Bbb{R}^n$, so it's the $2$nd partial derivative of $A$), calculated at the point $(t,f(t,x)) \in \Bbb{R} \times \Bbb{R}^n$ of its domain. Note that this by definition is a linear map $\Bbb{R}^n \to M_{n \times n}(\Bbb{R})$. We are now evaluating this linear transformation on the vector $\dfrac{\partial f}{\partial t}\bigg|_{(t,x)} \in \Bbb{R}^n$ to finally end up with the matrix $\dfrac{\partial A}{\partial x}\bigg|_{(t,f(t,x))} \left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \in M_{n \times n}(\Bbb{R})$. This is how to read the notation in $(*)$. If for some reason you don't like to think in terms of linear transformations, here's an alternative approach, in a simplified case, using Jacobian matrices (but I just don't like such a presentation). Suppose that $A$ is a function $A : \Bbb{R} \times \Bbb{R}^n \to \Bbb{R}^m$, and $f: \Bbb{R} \times \Bbb{R}^n \to \Bbb{R}^n$. Then, we can say \begin{align} \dfrac{d}{dt} A(t, f(t,x)) &= (\text{Jac}_{\Bbb{R}} A){(t,f(t,x))} + (\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x)) \cdot \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\\ &=\dfrac{\partial A}{\partial t}\bigg|_{(t,f(t,x))} + (\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x)) \cdot \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}. \end{align} Note that the Jacobian matrix of $A: \Bbb{R}\times \Bbb{R}^n \to \Bbb{R}^m$ evaluated at the point $(t,f(t,x)) \in \Bbb{R}\times \Bbb{R}^n$, denoted by $(\text{Jac }A)(t, f(t,x))$ is an $m \times (1 +n)$ matrix. So, when I say $(\text{Jac}_{\Bbb{R}}A)(t, f(t,x))$, I mean the $m \times 1$ submatrix obtained by taking the first column (so that we only keep track of the derivative with respect to the $\Bbb{R}$ variable, i.e with respect to $t$). You see, this is just a vector in $\Bbb{R}^m$. Next, when I say $(\text{Jac}_{\Bbb{R}^n}A)(t, f(t,x))$, I mean the $m \times n$ submatrix obtained by ignoring the first column (so that we only keep track of the derivative with respect to the $\Bbb{R}^n$ variables). Then, we multiply this $m \times n$ matrix by the $n \times 1$ vector $\dfrac{\partial f}{\partial t}\bigg|_{(t,x)}$, to finally get a $m \times 1$ matrix, or simply a vector in $\Bbb{R}^m$. The reason I don't like this approach is because in your case, the target space is $M_{n \times n}(\Bbb{R})$, so it is not natural to think of it as $\Bbb{R}^m$. I mean sure, you could construct an isomorphism to $\Bbb{R}^{n^2}$, but this requires a certain choice of basis in order to "vectorize" a matrix. 
But then in the end you will probably want to "undo" the vectorization, and then the whole thing is just a mess. Doable, but I think it's very ad hoc, and that it's much cleaner to treat everything as linear transformations, because then it doesn't matter what the domain or target space are... it's pretty much linear algebra from here. To hopefully convince you more of the generality (and simplicity) of the linear transformations approach, let $V,W$ be normed vector spaces, $A: \Bbb{R} \times V \to W$ be a differentiable map, and let $f: \Bbb{R} \times V \to V$ be differentiable. Then, \begin{align} \dfrac{d}{dt} \bigg|_t A(t,f(t,x)) &= \dfrac{\partial A}{\partial t} \bigg|_{(t,f(t,x))} + \dfrac{\partial A}{\partial x} \bigg|_{(t,f(t,x))}\left[ \dfrac{\partial f}{\partial t}\bigg|_{(t,x)}\right] \in W \end{align} i.e., the formula for the chain rule stays exactly the same, regardless of what vector spaces $V,W$ are. But if you insist on thinking of everything in terms of Jacobian matrices, you're going to have a tough time first constructing isomorphisms $V \cong \Bbb{R}^n$ and $W \cong \Bbb{R}^m$, and then doing everything in the Cartesian spaces, and then "undoing" the isomorphisms, to reexpress everything back in terms of the spaces $V$ and $W$. Or of course, another way to think of it is to express everything in terms of component functions of the matrix-valued function $A$: \begin{align} \dfrac{d}{dt}A_{ij}(t,f(t,x)) &= \dfrac{\partial A_{ij}}{\partial t}\bigg|_{(t,f(t,x))} + \sum_{k=1}^n\dfrac{\partial A_{ij}}{\partial x_k}\bigg|_{(t,f(t,x))} \cdot \dfrac{\partial f_k}{\partial t}\bigg|_{(t,x)} \end{align} (all these partial derivatives being real numbers). But of course, for obvious reasons, this component-by-component approach can get very tedious very quickly (and doesn't generalize well), and also it didn't seem to be what you really wanted to ask about, which is why I'm mentioning it at the end.
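As a concrete sanity check of the chain-rule formula in the matrix-valued case, the following short NumPy sketch compares both sides by central finite differences (the particular $A$, $f$ and evaluation point are arbitrary choices of mine):

import numpy as np

def A(t, x):   # a 2x2 matrix-valued function of (t, x) with x in R^2
    return np.array([[t + x[0]**2, np.sin(x[1])],
                     [x[0]*x[1],   t**2 + x[1]]])

def f(t, x):   # an R^2-valued function of (t, x)
    return np.array([t*x[0] + x[1], np.cos(t) + x[0]*x[1]])

t0, x0, h = 0.7, np.array([0.3, -0.4]), 1e-6
y0 = f(t0, x0)

# Left-hand side: d/dt of A(t, f(t, x0))
lhs = (A(t0 + h, f(t0 + h, x0)) - A(t0 - h, f(t0 - h, x0))) / (2*h)

# Right-hand side: dA/dt plus dA/dx applied to df/dt
dA_dt = (A(t0 + h, y0) - A(t0 - h, y0)) / (2*h)
df_dt = (f(t0 + h, x0) - f(t0 - h, x0)) / (2*h)
dA_dx_df = sum((A(t0, y0 + h*e) - A(t0, y0 - h*e)) / (2*h) * df_dt[k]
               for k, e in enumerate(np.eye(2)))
print(np.max(np.abs(lhs - (dA_dt + dA_dx_df))))  # tiny, at the level of finite-difference error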
H: Let $f$ be holomorphic on $D=\{z:0<|z|< 1\}$. Prove that there exists a unique $c\in\mathbb{C}$ such that $f(z)-\frac{c}{z}$ has a primitive. Let $f$ be holomorphic on $D=\{z:0< |z|< 1\}$. Prove that there exists a unique $c\in\mathbb{C}$ such that $f(z)-\frac{c}{z}$ has a primitive in $D$. So my thoughts were that we can write a Laurent series for $f$ around $0$, and if $0$ is a simple pole the question is trivial because $c$ is the Laurent coefficient $a_{-1}$ and it's unique. If $z=0$ is not a simple pole I'm not sure how to prove this. Any hints will be welcome. AI: If the Laurent series is $\sum_{n \in \mathbb Z} a_n z^{n}$ then $f(z)-\frac {a_{-1}} z $ is the derivative of $\sum_{n \in \mathbb Z \setminus {\{-1\}}} a_n \frac {z^{n+1 }} {n+1}$. This series converges to a holomorphic function in $0<|z|<1$. Uniqueness is obvious since $\frac {c_1-c_2} z$ has no primitive if $c_1 \neq c_2$: its integral over a small circle around $0$ equals $2\pi i(c_1-c_2)\neq 0$, whereas the integral of a derivative over a closed curve vanishes.
H: Extending the definition of stochastic integral from simple processes I am reading stochastic integration from Brownian Motion And Stochastic Calculus by Karatzas and Shreve. In the course of extending the definition of the stochastic integral from simple processes to other measurable and adapted processes, they first prove the following lemma: Let $X$ be a bounded, measurable, $\{\mathcal{F}_{t}\}$- adapted process. Then there exists a sequence of simple processes $\{X^{(m)}\}_{m=1}^{\infty}$ of simple processes such that \begin{equation} \tag1 \sup_{T >0} \lim_{m \to \infty} E\int_{0}^{T}|X^{(m)}_{t}-X_{t}|^2dt=0 \end{equation} From this, they conclude that it is possible to extract a subsequence $\{X^{(m_{k})}\}$ such that the set \begin{equation} \{(t,\omega\} \in [0,\infty) \times\Omega; \lim_{k \to \infty}X^{(m_{k})}_{t}(\omega)=X_{t}(\omega)\}^{c} \end{equation} has product measure zero. My question is, how did they arrive at this conclusion? I understand if $(1)$ is satisifed, then for each $T >0$, $\lim_{m \to \infty} E\int_{0}^{T}|X^{(m)}_{t}-X_{t}|^2dt=0 $ and hence there is a subsequence converging to $X_{t}(\omega)$ for almost every $[0,T] \times \Omega$. But this subsequence need not be the same for every $T$ right? So how do we get a subsequence that converges almsot everywhere on $[0, \infty) \times \Omega$. Any help would be appreciated. Thanks. AI: For every $n\in \Bbb N$ choose a subsequence $(m_k^n)_k$ in the following way. For $n=1$ choose a subsequence $(m_k^1)_k$ such that $X_t^{(m_k^1)} (\omega) \to X_t (\omega)$ for almost every $(t,\omega ) \in [0,1] \times \Omega$. Given the sequence $(m_k^n)_k$ we have that $(1)$ holds for $T=n+1$ and $(m_k^n)_k$. Thus, take $(m_k^{n+1})_k$ as a subsequence from $(m_k^n)_k$ such that $X_t^{(m_k^{n+1})} (\omega) \to X_t (\omega)$ for almost every $(t,\omega ) \in [0,n+1] \times \Omega$. Now take as final sequence $m_k := m_k^k$, for which $$X_t^{(m_k)} (\omega) \to X_t (\omega)$$ holds for almost every $\cup_n [0,n]\times\Omega =[0,\infty) \times \Omega$.
H: People choosing numbers - uniform distribution There's a group of 100 people. Each one chooses a number distributed uniformly, $U(1,50)$. Let $X$ be the sum of all the numbers that got chosen. What is $E(X)$? So all I have to do is take $(1+50)/2$ and multiply it by $100$? Or is there a trick? AI: It is correct; there is no trick, but there is a very important reason for that: linearity of expectation. What you are asking to calculate is $$\mathbb{E}\Big[\sum_{i=1}^{100}X_i\Big]=\sum_{i=1}^{100}\mathbb{E}[X_i]=100\cdot\frac{1+50}{2},$$ and note that this step does not even require the choices to be independent.
H: Let $f_n:[1,2]\to[0,1]$ be given by $f_n(x)=(2-x)^n$ for all non-negative integers $n$. Let $f(x)=\lim_{n\to \infty}f_n(x)$ for $1\le x\le 2$. Let $f_n:[1,2]\to[0,1]$ be given by $f_n(x)=(2-x)^n$ for all non-negative integers $n$. Let $f(x)=\lim_{n\to \infty}f_n(x)$ for $1\le x\le 2$. Then which of the following is true? (a) $f$ is continuous on $[1,2]$ (b) $\lim_{n\to \infty}\int_1^2f_n(x)dx=\int_1^2f(x)dx$ (c) $f_n$ converges uniformly to $f$ on $[1,2]$ as $n \to \infty$ (d) for any $a\in(1,2)$ we have $\lim_{n \to \infty}f_n'(a)\neq f'(a)$ Solution: $f_n(x)$ converges to $f(x)=1$ when $x=1$ and $f(x)=0$ when $x\in(1,2]$. Hence options (a) and (c) are clearly incorrect. Now for any $a\in(1,2)$ $$\lim_{n\to \infty}f_n'(a)=\lim_{n \to \infty}-n(2-a)^{n-1}=0$$ and $$f'(a)=0 \;\;\forall \;\;a\in(1,2) $$ which implies $\lim_{n \to \infty}f_n'(a)= f'(a)$. Hence option (d) is incorrect as well. Now, for part (b): $$\lim_{n\to \infty}\int_1^2f_n(x)dx=\lim_{n \to \infty}\frac{1}{n+1}=0$$ and $$\int_1^2f(x)dx=\int_1^20\,dx=0$$ Hence option (b) is correct. Have I done everything correctly here? Also, I think $\lim_{n\to \infty}\int_1^2f_n(x)dx=\int_1^2f(x)dx$ happens only when $f_n(x)$ converges uniformly to $f(x)$. But here it doesn't converge uniformly and yet this result still holds; is there any explanation for that? Please help me with this question. AI: What you have done is correct. (b) is true. Just evaluate the integral explicitly. $\int_1^{2} f_n(x) dx=\int_1^{2} (2-x)^{n} dx=\int_{0}^{1} y^{n} dy =\frac 1{n+1 } \to 0$. And $\int_1^{2} f(x) dx=0$. Uniform convergence is a sufficient condition for the convergence of the integrals but it is not a necessary condition.
H: How can I solve $f(x) = \int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx$? $$f(x) = \int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx$$ I have been up on this problem for an hour, but without any clues. Can someone please help me solving this? AI: It is quite straight forward after rewriting the integrand: $$\frac{\cos x (1+4\cos 2x )}{\sin x (1+4\cos^2 x)}= \frac{\cos x}{\sin x}\left(1 - \frac{4\sin^2 x}{1+4\cos^2 x}\right)$$ $$= \frac{\cos x}{\sin x} - \frac{2\sin 2x}{3+2\cos 2x}$$ Note, that $(3+2\cos 2x)' = -4\sin 2x$ and integrate: $$\int \frac{\cos{x}(1+4\cos{2x})}{\sin{x}(1+4\cos^2{x})}dx =\log |\sin x| + \frac 12 \log(3+2\cos 2x) + C$$
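A quick numerical check of the result (a Python sketch; the test points are arbitrary choices of mine that avoid the singularities at multiples of $\pi$): the derivative of the claimed antiderivative should match the integrand.

import numpy as np

def F(x):          # claimed antiderivative
    return np.log(np.abs(np.sin(x))) + 0.5 * np.log(3 + 2*np.cos(2*x))

def integrand(x):
    return np.cos(x)*(1 + 4*np.cos(2*x)) / (np.sin(x)*(1 + 4*np.cos(x)**2))

h = 1e-6
for x in [0.4, 1.0, 2.2, 2.9]:
    print((F(x + h) - F(x - h)) / (2*h) - integrand(x))  # each difference is ~ 0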
H: Scaling property of Laplace Transform not working I know that $L(f(t)) = F(s) \implies L(f(at))= \frac{1}{a}F(s/a)$ and if $L(1) = 1/s$ $L(2 \cdot 1) = 1/2 \cdot F(s/2) = 1/2 \cdot 1/(s/2) = 1/s$ but $L(2 \cdot 1) = 2 \cdot L(1) = 2 \cdot 1/s = 2/s$ Why is there a contradiction? What am I missing? AI: In both calculations, you appear to seek the Laplace transform of a function that always returns $2$. Your second calculation is correct. To get $1/2\cdot1/(s/2)$ you'd want a function that multiplies $t$ by $2$ before returning $1$, which is just the original always-$1$ function. It helps to consider the more general $L(cf(at))=ca^{-1}F(s/a)$, which includes rescaling of not only the argument but also the function's returned value. We must then take care not to confuse $c=2,\,a=1$ with $c=1,\,a=2$.
H: Limits of two variables Using the $\delta-\epsilon$ approach can someone help me prove that the limit is equal to 1? $$\lim_{(x,y)\rightarrow (0,0)}\frac{e^{xy}}{1+x^2}$$ AI: Pick $\varepsilon>0$, and suppose $x^2+y^2<\delta$ where $\delta=f(\varepsilon)$ will be chosen later. Then $|xy|\le\frac{x^2+y^2}{2}<x^2+y^2$ so $$\exp(-\delta)<\exp(-x^2-y^2)<\exp(xy)<\exp(x^2+y^2)<\exp(\delta)$$ and $1\le 1+x^2\le 1+\delta$. Therefore $$\frac{e^{-\delta}}{1+\delta}<\left(\frac{e^{xy}}{1+x^2}\right)<e^\delta.$$ Thus we want to prove that for any $\varepsilon$, we can choose $\delta$ such that $$1+\varepsilon>e^\delta>\frac{e^{-\delta}}{1+\delta}>1-\varepsilon.$$ The left inequality is just $\delta<\ln(1+\varepsilon)$, and since the RHS is positive this is easily satisfiable. If we set $\delta<\varepsilon$, then the right inequality is satisfied when $e^{-\delta}>1-\varepsilon^2=(1-\varepsilon)(1+\varepsilon)>(1-\varepsilon)(1+\delta)$, that is, when $\delta<-\ln(1-\varepsilon^2)$. Since the RHS is positive (we may assume $\varepsilon<1$; larger $\varepsilon$ only makes the inequality easier), this is also satisfiable. Thus the condition we place on $\delta$ is $$\delta=f(\varepsilon)=\min(\ln(1+\varepsilon),\varepsilon,-\ln(1-\varepsilon^2))$$
H: Upper bound of $\int^{\pi}_{0} \frac{\operatorname dx}{(\sqrt{2-2\cos(x)})^\alpha}$ for $\alpha\in(0,1)$ I have the following integral: $$\int^{\pi}_{0} \frac{\operatorname dx}{(\sqrt{2-2\cos(x)})^\alpha} \qquad \text{for $\alpha\in(0,1)$}$$ Now, I know that I can use the fact that $1-\cos(x)\ge\frac{x^2}{9}$ to apply the Comparison Theorem and show that the integral converges. But I'm asked to find a numeric upper bound for this integral. How should I approach that? AI: You can rewrite the integrand using a half-angle formula: $$\frac 1{(\sqrt{2(1-\cos x)})^{\alpha}} = \frac 1{(\sqrt{4\sin^2\frac x2})^{\alpha}} \stackrel{0<x\leq \pi}{=}\frac 1{2^{\alpha}\sin^{\alpha} \frac x2}$$ So, your integral becomes $$\frac 1{2^{\alpha}}\int_0^{\pi}\frac{dx}{\sin^{\alpha} \frac x2}\stackrel{u=\frac x2}{=}2^{1-\alpha}\int_0^{\frac{\pi}{2}}\frac{du}{\sin^{\alpha} u}$$ To find an upper bound you can use the concavity of $\sin u $ on $(0,\frac{\pi}{2})$ where you have $\sin u \geq \frac 2{\pi} u$. Hence, $$2^{1-\alpha}\int_0^{\frac{\pi}{2}}\frac{du}{\sin^{\alpha} u} \leq 2^{1-\alpha}\left(\frac{\pi}{2}\right)^{\alpha}\int_0^{\frac{\pi}{2}}\frac{du}{u^{\alpha}}$$ I leave it up to you to evaluate the last integral.
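For reference, carrying out that last step gives an explicit numeric bound (a short worked computation under the same assumption $0<\alpha<1$): $$2^{1-\alpha}\left(\frac{\pi}{2}\right)^{\alpha}\int_0^{\frac{\pi}{2}}\frac{du}{u^{\alpha}} = 2^{1-\alpha}\left(\frac{\pi}{2}\right)^{\alpha}\cdot\frac{(\pi/2)^{1-\alpha}}{1-\alpha}=\frac{\pi}{2^{\alpha}(1-\alpha)},$$ so $\frac{\pi}{2^{\alpha}(1-\alpha)}$ is an upper bound for the original integral. As a sanity check, for $\alpha\to 0$ the bound tends to $\pi$, which is exactly the value of the integral at $\alpha=0$.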
H: Proposed proof for:$b! \equiv 0 \pmod a$ $\Rightarrow$ $a \le b$ Let a,b $\in$ $\mathbb{Z}$. Obviously, a counterexample can disprove this statement. I tried this approach to seek another method (possibly proof by contradiction): $a|b!$ $a|[b(b-1)!]$ If $gcd(a,(b-1)!)=1$, then $a|b$ and hence: $a \le b$. But I think claiming that $a|b$ is too restrictive because $a$ can be less than $b$ without having $a|b$ necessarily. I tried to study two cases to reach a contradiction that $gcd(a,(b-1)!)\not=1$, yet I couldn't figure out how to continue so this might actually help out : 1) $a=b$ 2) $a \lt b$ Is there any mistake that I have done? Is this proof inconclusive or are there any improvements to be made so that it is a valid proof? Thank you in advance EDIT: Counterexamples most certainly get the job done here. Though, I am seeking another method to disprove the statement. EDIT 2: The original proposition is: $$a \le b \Rightarrow b! \equiv 0 \pmod a$$ I am trying to justify whether the converse of this statement is right or wrong, which is obviously wrong using counterexamples. The purpose is to find another method to disprove the converse. AI: I think there's no point to try and prove a false statement. Because $15|5!$ for example. What is a true version of this statement is $$a|b! \implies \text{prime factors of }a \le b$$ You can try and prove this! Update: I understood you seek to prove it without using counterexamples, so let's try: Let $p$ and $q$ be prime numbers where $p > q$, we have $pq|p!$ while $pq >p$ Another update: in your solution of "disproving the statement" you suggested that $\gcd(a,(b-1)!)=1$ while $a \le b$. Well if $a<b$ we have $a|(b-1)!$ so this directly means $a=b$ and this is something you wouldn't want as a step in disproving the statement. So the thing you'd first try is finding $a>b$ and $a|b!$, and I've given that in my first update.
H: $(f_n)$ integrable sequence of functions that converges uniformly to $f$, then $f$ is integrable I'm reading Abbot's Understanding Analysis, and have stumbled across this problem, but there is a step that I don't fully understand. The problem is: Assume that for each $n$, $f_n$ is an (Riemann) integrable function on $[a, b]$. If $(f_n) \rightarrow f$ uniformly on $[a, b]$, prove that $f$ is also integrable on this set. Solution: $$U(f, P) - L(f, P)$$$$=U(f, P) - U(f_N, P) + U(f_N, P) - L(f_N, P) + L(f_N, P) - L(f, P)$$$$\leq |U(f, P) - U(f_N, P)| + (U(f_N, P) - L(f_N, P)) + |L(f_N, P) - L(f, P)|$$ Let $\epsilon > 0$ be arbitrary. Because $f_n \rightarrow f$ uniformly, we can choose $N$ so that: $|f_N(x) - f(x)| \leq \dfrac{\epsilon}{3(b-a)}$ for all $x \in [a, b]$ Now the function $f_N$ is integrable and so there exists a partition $P$ for which $U(f_N, P) - L(f_N, P) < \dfrac{\epsilon}{3}$ Let's consider a particular subinterval $[x_{k-1}, x_k]$ from this partition. If $M_k = \sup\{f(x): x \in [x_{k -1}, x_k]\}$ and $N_k = \sup\{f_N(x): x \in [x_{k -1}, x_k]\}$ then our choice of f_N guarantees that $|M_k - N_k| \leq \dfrac{\epsilon}{3 (b - a)}$ The proof goes on and it demonstrates that: $U(f, P) - L(f, P) < \epsilon / 3 + \epsilon / 3 + \epsilon / 3$ My doubt is why is it so that $|M_k - N_k| \leq \dfrac{\epsilon}{3 (b - a)}$ We have that $|f_N(x) - f(x)| \leq \dfrac{\epsilon}{3(b-a)}$ for all $x \in [a, b]$ But this only work for the same $x$. The supremums can have different $x$ value, or they may even not have a $x$ value. AI: $M_k-N_k>\frac{\epsilon}{3(b-a)}$ would imply $f(x)-N_k>\frac{\epsilon}{3(b-a)}$ for some $x$, and this would imply $f_N(x)>N_k$.
H: Supremum and maximum I'm asked to tell whether the supremum and maximum of the following functions exist and to derive them when $x \in B$: $1- B = [0, 4 \pi)~\text{and}~g(x)=\cos x$ $2- B = [0, 4 \pi)~\text{and}~g(x)=\sin^2x+\cos^2x$ For the first one, I found that $\max f(x)_{x \in B}$ doesn't exist and $\sup f(x)_{x \in B} = -\infty$. Is it correct ? For the second one, I need help. Thanks! AI: Hint: your conclusion for the first one seems absurd, since $\cos x$ takes values in $[-1,1]$. For the second, observe that $\sin^2 x + \cos^2 x = 1$.
H: Thinker with no maths knowledge - checking primes My route for checking whether a number is a prime (>3) is if n-5÷6= whole number or n-7÷6= whole number Is this wrong please? Sue AI: This is wrong: it is a very elementary sieve which relies on the fact that any prime greater than $3$ is of the form $6n\pm 1$ or alternatively of the form $6m+5$ or $6m+7$. You can tell it is wrong because dividing by $6$ can only possibly deal with the primes $2$ and $3$ - it does that efficiently enough, but leaves untested $5, 7, 11 \dots$
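One way to see concretely that the rule is not sufficient (a small Python sketch; the helper names are mine): it accepts every number of the form $6k+5$ or $6k+7$, and many of those are composite.

def proposed_test(n):
    # Sue's rule: (n - 5)/6 or (n - 7)/6 is a whole number
    return (n - 5) % 6 == 0 or (n - 7) % 6 == 0

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

false_positives = [n for n in range(5, 120) if proposed_test(n) and not is_prime(n)]
print(false_positives)  # [25, 35, 49, 55, 65, 77, 85, 91, 95, ...]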
H: Prove the following measure statement Prove: an algebra $A$ is a $\sigma$-algebra if and only if it is closed under countable disjoint unions. My doubt is about one of the directions, because I don't know how to start; the other direction is okay. After thinking about it I did the following development: I write $\bigsqcup_{j=1}^{\infty} A_j$ = $(\cap^{\infty}A_j^{c})^c$. Using the properties of being a $\sigma$-algebra, I write that $A_j \in A$ so $A_j^{c} \in A$, so $\cap^{\infty}A_j^{c} \in A$ and finally $(\cap^{\infty}A_j^{c})^c \in A$. And since $\sqcup^{\infty} A_j$ = $(\cap^{\infty}A_j^{c})^c$, then $\sqcup^{\infty} A_j \in A$. I don't know if it is correct or not. AI: Let $A_1,A_2,\dots$ be elements of the algebra. Then for every $n$ the set $B_n=A_n\setminus\left(\bigcup_{k=1}^{n-1}A_k\right)$ is also an element of the algebra and the sets $B_1,B_2,\dots$ are disjoint. Since in your case the algebra is closed under disjoint countable unions we can conclude that $\bigcup_{n=1}^{\infty}B_n$ is an element of the algebra. But: $$\bigcup_{n=1}^{\infty}A_n=\bigcup_{n=1}^{\infty}B_n$$ so actually we proved that the algebra is closed under countable unions. Further by its nature the algebra is closed under complementation. Our final conclusion is that we are dealing with a $\sigma$-algebra.
H: Existence of linear combination My book proves the following theorems: Three distinct points $A,B,C$ are collinear if and only if there exist 3 numbers, $\lambda_1,\lambda_2,\lambda_3$, all different from zero, such that $$\lambda_1 \boldsymbol a +\lambda_2 \boldsymbol b +\lambda_3 \boldsymbol c =0,\space \space \space \space \space \lambda_1+\lambda_2+\lambda_3=0$$ If no three of the points of $A,B,C,D$ are collinear, they will be coplanar if and only if there exist 4 numbers, $\lambda_1,\lambda_2,\lambda_3,\lambda_4$, different from zero, such that $$\lambda_1 \boldsymbol a +\lambda_2 \boldsymbol b +\lambda_3 \boldsymbol c +\lambda_4 \boldsymbol d=0,\space \space \space \space \space \lambda_1+\lambda_2+\lambda_3+\lambda_4=0$$ A logical extension of this is to ask the following: Given that no four points of $A,B,C,D,E$ are coplanar, will there always exist 5 numbers $\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5$, all different from zero, such that $$\lambda_1 \boldsymbol a +\lambda_2 \boldsymbol b +\lambda_3 \boldsymbol c +\lambda_4 \boldsymbol d+\lambda_5 \boldsymbol e=0,\space \space \space \space \space \lambda_1+\lambda_2+\lambda_3+\lambda_4+\lambda_5=0 $$ AI: If we assume that these points lie in (3 dimensional) space, then this is true. Indeed, if we consider the four-dimensional vector space $V=\mathbb R^4$ composed of vectors $\lambda=(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$ with an implied $\lambda_5=-(\lambda_1+\lambda_2+\lambda_3+\lambda_4),$ the map $\phi:V\to \mathbb R^3$ sending $$\lambda\to\lambda_1\boldsymbol a+\lambda_2\boldsymbol b+\lambda_3\boldsymbol c+\lambda_4\boldsymbol d+\lambda_5\boldsymbol e$$ must have kernel dimension at least $1$. Moreover, a nonzero kernel element must have all five coefficients nonzero: if some $\lambda_i$ vanished, the remaining points would satisfy a nontrivial affine relation, which would force four of the five points to be coplanar, contrary to the hypothesis. If, however, the points lie in $4$ or higher dimensions, then your statement does not hold, for the same reason the analogous statements do not hold in lower dimensions.
H: Convergence of $\sqrt{\frac{2N}{d}}\frac{\Gamma(\frac{d+1}{2})}{\Gamma(\frac{d}{2})}$? (expected value from random walks) I saw from this post (stack) that the expected distance from the origin after $N$ steps in $d-$dimensional space is $$ \sqrt{\frac{2N}{d}}\frac{\Gamma(\frac{d+1}{2})}{\Gamma(\frac{d}{2})}. $$ I was curious to know the convergence behaviour of this (as $d\rightarrow\infty$ or as $N\rightarrow\infty$), I was hoping somebody here could help me find out? The context for this is that I have the function $$ F_N(x) = \sum_{p\leq N}\log pe^{2\pi ipx} $$ (where $p$ is a prime number), and the average value of the absolute value of this function squared is $N\log N + o(N\log N)$; i.e. $$ \int_{0}^{1}|F_N(x)|^2dx = N\log N + o(N\log N). $$ In a sense summing $N$ random numbers from the unit circle can be bounded above by $N\log N$. I'm wondering whether the expected value from a random walk could give some insight into whether this bound is very crude or not. In fact it seems likely that it is, because creating a better bound for it would allow the circle method to dig deeper into Goldbach's conjecture. AI: If $N\to\infty$, the expression diverges. The other limit is more interesting. If I recall correctly, it is the case that $$\Gamma(d+a) \sim \Gamma(d)\,d^a\qquad\operatorname{as}d\to\infty.$$ Hence, $$\sqrt{\frac{2N}{d}}\frac{\Gamma(d/2+1/2)}{\Gamma(d/2)}\sim \sqrt{\frac{2N}{d}}\sqrt\frac{d}{2}=\sqrt{N}\qquad\operatorname{as}d\to\infty.$$ Thus, we see that $$\lim_{d\to\infty}\sqrt{\frac{2N}{d}}\frac{\Gamma(d/2+1/2)}{\Gamma(d/2)} = \sqrt{N}.$$
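A quick numerical look at the $d\to\infty$ behaviour (a Python sketch; using log-gamma to avoid overflow, with an arbitrary choice of $N$):

from math import lgamma, exp, sqrt

N = 100
for d in [1, 10, 100, 10_000, 1_000_000]:
    ratio = exp(lgamma((d + 1) / 2) - lgamma(d / 2))   # Gamma((d+1)/2) / Gamma(d/2)
    print(d, sqrt(2 * N / d) * ratio)                  # approaches sqrt(N) = 10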
H: Examine the convergence: $\sum_{n=1}^{\infty}(-1)^{n-1}\sin\frac{a}{n}(a\gt 0)$ (I solved but not sure if it is correct) I have a question that I don't know if I solved it correct. Can you check and show me the right way? Question: Examine the convergence of series given below: $$\sum_{n=1}^{\infty}(-1)^{n-1}\sin\frac{a}{n}\space (a\gt 0)$$ My answer: (absolute convergence): i) $$\sum_{n=1}^\infty \lvert (-1)^{n-1}\sin\frac{a}{n}\rvert =\sum_{n=1}^\infty\sin\frac{a}{n}\space a\gt 0$$ (can take positive and negative values) (conditional convergence) $$\rightarrow Should\space not\space be\space absolute\space convergent$$$$\rightarrow Alternating\space series\space should\space be\space convergent\space too$$$$Alternating\space series\space test:$$ For every $n$, $a_n$ is positive $x$ For every $n$, should be $a_n\gt a_n-1$ $\lim_{n\to\infty}a_n=0$ $\space$ 3.step: It is Divergent AI: Let $N = 2\left\lceil {\frac{a}{\pi }} \right\rceil$ and write $$ \sum\limits_{n = 1}^\infty {( - 1)^{n - 1} \sin \left( {\frac{a}{n}} \right)} = \sum\limits_{n = 1}^N {( - 1)^{n - 1} \sin \left( {\frac{a}{n}} \right)} + \sum\limits_{n = N + 1}^\infty {( - 1)^{n - 1} \sin \left( {\frac{a}{n}} \right)} . $$ The first part is a finite sum, hence it converges. For the second, note that by the choice of $N$, $$ \sin \left( {\frac{a}{n}} \right) > 0 $$ for all $n\geq N+1$. Also, $\sin \left( {\frac{a}{n}} \right) \to 0$ strictly monotonically, whence by the alternating series test, the second sum converges. Accordingly, your original sum is convergent. It is not however absolutely convergent. To see this, note that for $0<x<\frac{\pi}{2}$, we have $\sin x>\frac{2}{\pi}x$. Thus, $$ \sum\limits_{n = 1}^\infty {\left| {( - 1)^{n - 1} \sin \left( {\frac{a}{n}} \right)} \right|} = \sum\limits_{n = 1}^\infty {\left| {\sin \left( {\frac{a}{n}} \right)} \right|} \ge \sum\limits_{n = N}^\infty {\left| {\sin \left( {\frac{a}{n}} \right)} \right|} \ge \frac{{2a}}{\pi }\sum\limits_{n = N}^\infty {\frac{1}{n}} . $$ But we know that $\sum\nolimits_{n = N}^\infty {\frac{1}{n}}$ is divergent, whence $\sum\nolimits_{n = 1}^\infty {\left| {( - 1)^{n - 1} \sin \left( {\frac{a}{n}} \right)} \right|}$ must be divergent too. Consequently, the series is not absolute convergent.
H: Problem related to output in Python programming (does not give the expected result) I'm a beginner at Python. The given code gives an error in its output, and I want to correct it. The mathematics behind the code is: let $S_m(n)=1^m+2^m+...+n^m$ and let $D_x(y)$ denote the sum of the digits of $y$ in base $x$. Example: $S_2(3)=14$ and $14=(1,1,1,0)_2$, so $D_2(14)=3$. Theorem: if $x-1\mid y$ then $x-1\mid D_x(y)$. In the code, for each $n$, the output shows "divisible" if $n-1\mid S_m(n-1)$ and shows "oooooooooook" if $n-1\mid D_n(S_m(n-1))$. Problem: the output must show "oooooooooook" whenever "divisible" has been shown, but for larger values of $m$ and $n$ it does not show the expected result. How can I correct it?

n = 2
m = 15
while n < 100:
    print("\n n=", n)
    num = n
    sum_num = 0
    for i in range(1, num):
        sum_num += i**(m)
    n2 = (sum_num)
    if (n2 % (num-1) == 0):
        print("diviasible")
    #else:
    #    print("not divisible")
    rem_array = []
    while n2 != 0:
        mod = n2 % n
        if mod != 0:
            rem = mod
            n2 = n2 - rem
            rem_array.append(round(rem))
            n2 = n2/n
        else:
            n2 = n2/n
            rem_array.append(0)
    #print(rem_array[::-1], sum(rem_array))
    if (sum(rem_array) % (n-1) == 0):
        print("oooooooooook")
    #else:
    #    print("not ok")
    n += 1

Thank you AI: I don't see all the points that are not OK, but using // integer division instead of / float division helps (tested; that worked — with / the value of n2 becomes a float and loses precision for such large numbers, so the computed digits are wrong). Let me rewrite the entire program:

s = lambda m, n: sum(i**m for i in range(1, n+1))

def digits(x, y):
    res = []
    while y:
        res.append(y % x)
        y //= x
    return res

m = 15
# theorem:
for n in range(2, 101):
    s_m = s(m, n-1)
    assert s_m % (n-1) != 0 or sum(digits(n, s_m)) % (n-1) == 0
H: Show that each local max/min point is a critical point I have that $X$ is a normed vector spaces over $\mathbb{R}$, $U\subseteq X$ is an open set and $F:U\rightarrow \mathbb{R}$ is a differentiable function. Need to show that each local max/min is also a critical point. I figured one could somehow attempt to combine/input the critical point, $F'(a)=\mathbf{0}\hspace{1mm}\forall\hspace{1mm}a\in U$ with the local minimum $F(a)=min_{x\in V}F(x)\hspace{1mm}\forall\hspace{1mm}a\in U$ where $V$ is a neighbourhood of $a$ and then simply attempt to solve/simplify this. Any tips on how to approach this? AI: (I'm going to assume the derivative is Gateaux.) Suppose $a \in U$ is a local extreme point. By replacing $F$ by $-F$ as necessary, we may assume without loss of generality that $a$ is a local maximum. Let $r > 0$ be such that $a$ is the maximum of $F$ on $B[a; r] \subseteq U$. Let $\phi \in X^*$ be the Gateaux derivative of $F$. That is, for all $h \in X$, we have $$\lim_{t \to 0} \frac{F(a + th) - F(a)}{t} = \phi(h).$$ If $\phi \neq 0$, then there exists some $h \in X$ such that $\phi(h) \neq 0$. By replacing $h$ with $-h$ as necessary, we may assume $\phi(h) > 0$. It follows therefore that $$\lim_{t \to 0} \frac{F(a + th) - F(a)}{t} = \phi(h) > 0.$$ Using the $\varepsilon$-$\delta$ definition of continuity, there must exist some $\delta > 0$ corresponding to $\varepsilon = \phi(h)$. That is, $\delta$ satisfies \begin{align*} 0 < |t| < \delta &\implies \left|\frac{F(a + th) - F(a)}{t} - \phi(h)\right| < \phi(h) \\ &\implies \frac{F(a + th) - F(a)}{t} > 0 \\ &\implies F(a + th) > F(a). \end{align*} Choose $t$ small enough so that $a + th \in B[a; r]$, and we also get $F(a + th) \le F(a)$. This presents a contradiction; $\phi = 0$.
H: Partial derivatives of random variables I have a question about partial derivatives and random variables, say we have two random variables, $r$ and $s$, and they are related by the expression $s=e^r$. Now, $r$ has an explicit dependency on the variable $t$, $r(t)$. Hence, $s$ doesn't explicitly depend on the variable t. Say we want to compute $\partial s/\partial t$. Can I just use the chain rule? I have seen some arguing that this is zero, but I'm not convinced. Note that I want the partial derivative, not the total derivative. Thank you. AI: I think the confusion comes from how you define the partial derivative. Usually, in calculus books, you have something like $z=f(x,y), x=g(u,v),$ and $y=h(u,v)$, and you can use the chain rule to find $\partial z/\partial u$. When you do, the definition of the “partial” derivative is that $v$ is held constant. You would NOT define $\partial z/\partial u$ as holding $x$ and $y$ constant. However, in physics we often have a quantity that depends on position and time, with position being a function of time, say $f(x(t), t)$. If for a particular function $f$ there was no explicit time dependence, then we say $\partial f/\partial t =0$; we wouldn’t use the chain rule with $x$ because we’d be holding $x$ constant. I’m not sure which is more appropriate for your situation. I should add that I’m not sure how “random variables” factors into your question. $r$ seems to be a deterministic function of $t$.
H: Do all diagonalizable matrices $A\in M_n(\mathbb C)$ have square root in $\mathbb C$? Let $A\in M_n(\mathbb C)$ be diagonalizable matrix and $\mathbb F = \mathbb C$. Does it mean this matrix has root? I think this is correct, and thought on proving it with the fact that every polynomial has root in the complex numbers field, but it doesn't work well. Am I wrong? How am I supposed to approach this question? AI: Hint: first construct a root for a matrix $\mathrm{diag}(\lambda_1, \dots, \lambda_n)$. Then use this to construct a root of $$M = P\, \mathrm{diag}(\lambda_1, \dots, \lambda_n) P^{-1}$$
H: Confusion on how to find derivative. I am confused about the following, why do I get different results when I change the point where I use the substitution: (ie where am I making a silly mistake) In both I use the substitution $$x = \cos\theta$$ so $$\frac{dy}{d\theta} = -\sin\theta\frac{dy}{dx}:$$ $$\frac{1}{\sin\theta}\frac{d}{d\theta}(\sin\theta \frac{dy}{d\theta}) = \frac{d}{dx}((1-x^2)\frac{dy}{dx}) = (1-x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx}$$ So far so good, but if instead I expand the bracket using the product rule: $$\frac{d^2y}{d\theta^2} + \frac{\cos\theta}{\sin\theta}\frac{dy}{d\theta} = \sin^2\theta\frac{d^2y}{dx^2} - \cos\theta\frac{dy}{dx} = (1-x^2)\frac{d^2y}{dx^2} - x\frac{dy}{dx}$$ Where'd my two go! AI: Just because $\frac{dy}{d\theta} = - \sin \theta \frac{dy}{dx}$ doesn't mean that $\frac{d^2y}{d\theta^2} = \sin^2\theta \frac{dy^2}{dx^2}$. Indeed, if you take $\frac{d}{d\theta} \frac{dy}{d\theta}$, you get $$\frac{d^2y}{d\theta^2} = \frac{d}{d\theta} \left( - \sin \theta \frac{dy}{dx} \right) = (-\sin \theta) \frac{d}{dx} \left( - \sin \theta \frac{dy}{dx} \right),$$ and you can't simply $\frac{d}{dx}$ the $\frac{dy}{dx}$ term because $\sin \theta$ also depends on $x$. So you need to use product rule here instead.
H: Let $E\supset F$ be a Galois extension and $\alpha \in E$, and let $O$ be the orbit of $\alpha$ under $\operatorname{Aut}(E/F)$. Is $|O|=[F(\alpha):F]$? Let $E\supset F$ be a Galois extension and $\alpha \in E$, and let $O$ be the orbit of $\alpha$ under $\operatorname{Aut}(E/F)$. I know that $|O|=[F(\alpha):F]$ if $F(\alpha)\supset F$ is a Galois extension. My question is: are there cases or hypotheses in which the equality is true but $F(\alpha)\supset F$ is not Galois? AI: The equality always holds (I interpreted the number in your question as the degree of the extension). Since $E/F$ is Galois, every automorphism in $\operatorname{Aut}(E/F)$ sends $\alpha$ to a root of its minimal polynomial over $F$; by normality all roots of that polynomial lie in $E$, by separability they are distinct, and every root is reached by some automorphism. So the orbit is exactly the set of roots of the minimal polynomial and $|O|=[F(\alpha):F]$, whether or not $F(\alpha)/F$ is itself Galois. (If $E/F$ were not assumed Galois, one would act by the Galois group of a normal closure instead; without a Galois extension in the picture the automorphisms, and hence the action, are not even defined.)
H: Minimal polynomial of finite Galois extensions Let $L/K$ be a finite Galois extension. In this context, I heard people talk about the minimal polynomial of $L/K$. I want to understand what exactly is meant by that phrase. What I suppose: Since $L/K$ is Galois, the extension is separable. Therefore, there is an element $\alpha \in L$ with $L = K(\alpha)$ (as every finite and separable extension is simple by the Primitive Element Theorem). Now I think the minimal polynomial of $L/K$ is just the minimal polynomial of $\alpha$ over $K$. Could you tell me whether my thoughts are correct or not? AI: Yes, your interpretation is correct. A finite Galois extension has a finite number of intermediate subfields, and a theorem of Artin guarantees under this hypothesis that a so-called "primitive" element exists. Moreover, since a Galois extension is the splitting field of a separable polynomial, you can equivalently view the extension as the splitting field of the minimal polynomial of the primitive element.
H: Spectrum and resolvent set of operator $ (Tx)(t) = (1+t^2) x(t) $ on $C[0,1]$ Operator $T : C[0,1] \rightarrow C[0,1]$ is defined by: $ (Tx)(t) = (1+t^2) x(t) , \forall x \in C [0,1]$. We assume that norm on $C[0,1] $ is standard supremum norm. I have to find $\rho(T), \sigma_{p}(T), \sigma_{c}(T) $and $\sigma_{r}(T) $. The range of function $ 1+ t^2$, where $t \in [0,1]$ is $[1,2]$, so if $ \lambda \notin [1,2]$, the inverse $ (T - \lambda I ) ^{-1} $ exists and is bounded. It is defined on whole space $C[0,1]$, so the range of $ T - \lambda I $ is dense in $C[0,1] $. This means that the resolvent set $ \mathbb{C} \setminus [1,2] \subset \rho(T) $. Now, if $\lambda \in [1,2] $, there is unique number $t_{0} $ which gives $ 1+ t_{0} ^2 - \lambda = 0$. If we assume that $ x \in C [0,1]$ is a vector such that $ (T - \lambda I ) x(t)= (1+ t_{0} ^2 - \lambda) x(t)= 0, \forall t \in [0,1] $, it follows from continuity of $x$ that $x = 0$. So, $T - \lambda I$ is injective. I proved that $\operatorname{Range}(T - \lambda I) $ is not dense in $C[0,1] $ because all functions that are in range satisfy this: $ y (t_{0})= 0$, so $\sigma_{c}(T) = \emptyset $. I know that $\sigma_{p}$ is the set of eigenvalues, so $\lambda \in \sigma_{p}$ if $(T-\lambda I) ^ {-1}$ doesn't exist. We have already proved that $T - \lambda I$ is injective, but I wonder for which values of $\lambda$ is it non-surjective? (I am also confused if we need surjectivity at all when we consider eigenvalues). If I take arbitrary $ y \in \operatorname{Range}(T - \lambda I )$, then $x(t) = (T - \lambda I ) ^{-1} (y) = \frac{y(t)}{1+ t ^2 - \lambda }, t\neq t_{0} $. What is then $x(t_{0})$ and is $x(t)$ continuous on $[0,1] $ at all? Here is the solution of the task: $\sigma_{p} (T)=\sigma_{c} (T)= \emptyset$, $\sigma_{r}(T)= [1,2]$. Please help, at least give me some hint (I have recently started studying the spectral theory and this task would help me understand some basics better). AI: Since $\sigma(T) = \{1+t^2: t\in[0,1]\}=[1,2],$ therefore $\rho(T)= \mathbb{C} \setminus [1,2].$ Suppose $\lambda \in \sigma_p(T),$ then there exists $0\neq x \in C[0,1]$ such that $Tx=\lambda x.$ Since $x \neq 0,$ there exists $t_0 \in [0,1]$ such that $x(t_0)\neq 0.$ Moreover $x$ is continuous, hence there exists a neighborhood $U$ of $t_0$ in $[0,1]$ such that $x(t) \neq 0$ for all $t \in U.$ It follows that $$\lambda = 1+t^2 \; \forall\; t \in U$$which is absurd and hence $\sigma_p(T) = \emptyset.$ You have already shown $\sigma_c(T)=\emptyset$ and so $\sigma_r(T)=\sigma(T)=[1,2].$
H: Manifold with a nilpotent fundamental group. I am looking for a manifold with a nonabelian nilpotent fundamental group. I know the above terms, but I couldn't find an example. AI: A standard example is the Heisenberg nilmanifold: the quotient of the real Heisenberg group $H_3(\mathbb R)$ (upper triangular $3\times 3$ real matrices with $1$'s on the diagonal) by the lattice $H_3(\mathbb Z)$ of its integer points. The lattice acts freely and properly discontinuously by left translation on $H_3(\mathbb R)\cong\mathbb R^3$, which is contractible, so the quotient is a compact $3$-manifold whose fundamental group is the discrete Heisenberg group $H_3(\mathbb Z)$, a nonabelian group that is nilpotent of class $2$. (Note that the Klein bottle does not work here: its fundamental group $\langle a,b\mid bab^{-1}=a^{-1}\rangle$ contains $\mathbb Z \oplus \mathbb Z$ as an index-$2$ normal subgroup with quotient $\mathbb Z/2\mathbb Z$, but it is only solvable, not nilpotent.)
H: Complex number square roots I am very confused about how to do this problem and I would appreciate it if someone could explain. Cheers. Suppose $u+vi$ is a square root of $x+yi$. Express the following in terms of $x$ and/or $y$ only: $u^2 - v^2$, $2uv$, and $u^2 + v^2$. Hence find the square root of $8-15i$. AI: As $u+vi$ is a square root of $x+yi$, it follows that $(u+vi)^2 = x+yi$. Expanding, we obtain $u^2+2uvi-v^2 = x+yi$. Assuming that $u,v,x,y \in \mathbb{R}$, equating real parts gives $x = u^2-v^2$, and equating imaginary parts gives $y = 2uv$. We then have \begin{align*}\left(u^2+v^2\right)^2 &= u^4+2u^2v^2+v^4 \\ &= \left(u^4-2u^2v^2+v^4\right) + 4u^2v^2 \\ &= \left(u^2-v^2\right)^2+(2uv)^2 \\ &= x^2+y^2\end{align*} so, as $u^2 + v^2 \geq 0$, we take the positive square root to get $u^2 + v^2 = \sqrt{x^2+y^2}$. Now, to find the square root of $8-15i$, we can use the above results to get $u^2-v^2 = 8$ and $u^2+v^2=\sqrt{8^2+(-15)^2} = 17$. Thus $2u^2 = 8 + 17 = 25$, so $u = \pm\frac{5}{\sqrt{2}}$. As $2uv = -15$, we deduce $v = \mp\frac{3}{\sqrt{2}}$. This means the square roots of $8-15i$ are \begin{equation*}\frac{5}{\sqrt{2}} - \frac{3}{\sqrt{2}}i \quad\text{ and }\quad -\frac{5}{\sqrt{2}} + \frac{3}{\sqrt{2}}i\end{equation*}
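A one-line numerical verification (plain Python, using its built-in complex numbers):

from math import sqrt

z = (5 - 3j) / sqrt(2)
print(z**2)   # approximately (8-15j)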
H: What is meaning of $X/P$? ($X$ is a set and $P$ is a partition) The definition of $x/E$ when $E$ is an equivalence relation is : $$x/E = \{y\in X \mid (y,x)\in E \},$$ and the definition of $X/E$: $$X/E = \{x/E\ \mid x\in X\}.$$ Now, what is $X/P$ when $P$ is a non-empty partition of X? AI: A partition $P$ defines uniquely an equivalence relation $E$ by setting up $$ (x,y) \in E \iff \exists p \in P (x \in p \wedge y \in p)$$ Then $X/P$ is just $X/E$.
H: Homeomorphism between $(a,b)$ and $\mathbb R$ in terms of the discrete topology Are $(a,b)$ and $\mathbb R$ homeomorphic if they both have the discrete topology? AI: Yes. The two sets have the same cardinality, so a bijection between them exists, and any bijection between spaces that have the discrete topology is a homeomorphism (every map from a discrete space is continuous).
H: Positive solutions to this exponential diophantine Find positive integral solutions to : $2013^p$ $+$ $2014^q$ = $2015^r$ Attempt: $2013^p$ $+$ $(2013 + 1)^q$ $=$ $($2013$ + $2$)^r$ Expanding, $2013^p$ + $2013^q$ + $1$ + $2013Q$ = $2013^r$ + $2^r$ + $2013R$ Here R and Q are some constants, via binomial theorem expansion of $(a+b)^n$ involves aⁿ, bⁿ and something divisible by ab Anyways, looking at modulo 2013 $0$ + $0$ + $1$ + $0$ = $0$ + $2^r$ + $0$ Or, 1 congruent to $2^r$ modulo 2013 Now how do I solve this? AI: It's usually not helpful to take the equation modulo anything other than a prime or a prime power. We can factor $2013$ as $3 \cdot 11 \cdot 61$. Taking the equation modulo $61$ tells us that $0^p + 1^q \equiv 2^r \pmod{61}$, or $2^r \equiv 1 \pmod{61}$, which only happens if $r$ is a multiple of $60$. When $r$ is a multiple of $60$, we have $2^r \equiv 1 \pmod 3$ and $2^r \equiv 1 \pmod{11}$ as well, so we gain no other information from considering those moduli. We can learn a bit from prime factors of $2014$ and $2015$: Taking the equation modulo $19$ (a small prime factor of $2014$), we get $(-1)^p + 0^q \equiv 1^r \pmod{19}$, from which we learn that $p$ is even. Taking the equation modulo $31$ (a small prime factor of $2015$), we learn that $(-2)^p + (-1)^q \equiv 0^r \pmod{31}$, or $4^{p/2} + (-1)^q \equiv 0 \pmod{31}$. No power of $4$ is congruent to $-1$ modulo $31$, and we can only satisfy this if $p/2$ is a multiple of $5$ and $q$ is odd. Taking the equation modulo $13$ (another small prime factor of $2015$), we get $(-2)^p + (-1)^q \equiv 0^r \pmod{13}$, which we now know means $4^{p/2} \equiv 1 \pmod{13}$, and this only holds when $p/2$ is a multiple of $6$. Altogether, we now know that $p = 60p'$, $q = 2q' + 1$, and $r = 60r'$, which is actually a bit more than what we need to proceed. It now makes sense to take the equation modulo $7$, because $2013^{6} \equiv 2015^{6} \equiv 1 \pmod{7}$ due to Fermat's little theorem. So we have $$ 1^{p/6} + 2014^q \equiv 1^{r/6} \pmod{7} $$ which has no solutions, because it leads to $2014^q \equiv 0 \pmod 7$, but $7$ does not divide $2014$. Therefore there are no solutions to this equation over the positive integers. We do have the solution $2013^0 + 2014^1 = 2015^1$ if we allow nonnegative integers; above, we excluded such cases when we assumed things like $2013^p \equiv 0 \pmod{61}$ just because $2013$ is divisible by $61$. Above, I made claims such as "$2^r \equiv 1 \pmod{61}$ only if $r$ is a multiple of $60$" or "there are no solutions to $4^{p/2} + 1 \equiv 0 \pmod{31}$". These are all finite problems; to understand an exponent modulo $m$, you only have to consider exponents of $1, \dots, \phi(m)$, which is $1, \dots, m-1$ when $m$ is prime. Whatever conclusions we make about the exponent repeat modulo $\phi(m)$. I just used brute force in Mathematica to do the calculations.
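The order computations quoted above are small finite checks; here is a brute-force verification (a Python sketch; the helper name order is mine):

def order(a, m):
    # multiplicative order of a modulo m (assumes gcd(a, m) = 1)
    x, k = a % m, 1
    while x != 1:
        x, k = (x * a) % m, k + 1
    return k

print(order(2, 61))                                     # 60
print(any(pow(4, k, 31) == 30 for k in range(1, 31)))   # False: no power of 4 is -1 mod 31
print(order(4, 31), order(4, 13))                       # 5 6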
H: Sufficient conditions on $f$ to obtain that $\arg\max_{x \in K}f(x) \cap K' \neq \emptyset$ ($K$, $K'$ compact) Let $f : \mathbb{R}^n \rightarrow \mathbb{R}$ (for some $n \in \mathbb{N}^\ast$) be a continuous function and $K, K'$ be two compact subspaces of $\mathbb{R}^n$ such that $K' \subset K$. I was wondering if there exists simple sufficient conditions on $f$ to obtain that $$\arg\max_{x \in K}f(x) \cap K' \neq \emptyset$$ and sufficient conditions on $f$ to obtain that $$\arg\max_{x \in K'}f(x) \subset\arg\max_{x \in K}f(x) \cap K'$$ AI: Likely not very satisfying, but a simple sufficient (and in fact necessary) condition for both is that $\max_{x\in K} f(x) = \max_{x\in K'} f(x).$ Note also that the two results you are looking for are equivalent. Suppose $\arg\max_{x\in K} f(x) \cap K'\neq \emptyset,$ then there exists $x^*\in K'$ such that $f(x^*) = \max_{x\in K}f(x).$ We can combine this with $\max_{x\in K} f(x) \geq \max_{x\in K'} f(x),$ to obtain that $$\max_{x\in K'} f(x) \geq f(x^*) = \max_{x\in K} f(x).$$ In particular, for all $x'\in \arg\max_{x\in K'} f(x),$ we have $f(x') = \max_{x\in K'} f(x) = \max_{x\in K} f(x).$ Hence also $x'\in \arg\max_{x\in K} f(x)$ (and in particular $x'\in \arg\max_{x\in K} f(x)\cap K'$). The converse is much easier to show (thanks to OP for noticing!). Note that by compactness, $\arg\max_{x\in K'}f(x) \neq \emptyset,$ so $\arg\max_{x\in K'}f(x) \subset \arg\max_{x\in K}f(x) \cap K'$ immediately implies that the RHS is non-empty.
H: Joint Default Probability Range There is a 50% probability that bond A will default next year and a 30% probability that bond B will default. What is the probability range for event that at least one bond defaults? The author then goes on with: To have the largest probability we can assume whenever A defaults, B does not default; and vice versa. A does not default -- so the maximum probability that at least one bond defaults is 50% + 30% = 80%. OK, so he is talking about a scenario where A and B are independent. $$p(\text{at least one defaults}) = 1 - p(\text{none default})$$ A and B are Bernoulli RVs, so: $$p(\text{none default}) = 0.5 \cdot 0.7 = 0.35$$ $$p(\text{at least one defaults}) = 0.65$$ 65% is nowhere close to 80%. What is going on? AI: "Whenever A defaults, B does not" is not independent at all. The description explains how B's default depends on A's. Let's talk about this with dice. If we know that A is some event with a probability of 3/6 and B is some event with a probability of 2/6 but we don't know what those events are, what is the maximum probability that either A or B could happen? Well, we could imagine an interpretation where A = "roll a 1, 2, or 3" and B = "roll a 4 or 5", so the probability of at least one occurring under this interpretation is 5/6. It should be fairly evident that we could not do better.
H: The infinite union of singletons in the Lebesgue measure in $\mathbb{R}$. I know that the Lebesgue measure of a countable infinite number of singletons has a zero measure, but what about the Lebesgue measure of an infinite number of singletons, which is not countable. Can we generalize on this particular measure space? Or can some one provide a counter example? AI: Well, any subset of $\mathbb{R}$ can be written as a union of singletons, so if it was true for any such union then every measurable set would have measure zero. However, $\mathbb{R}$ doesn't have measure zero.
H: Asymptotic variance Let $(X_i)_{i=1}^n$ be an i.i.d. sample of $n$ observations, with $E(X_i)=\mu\in\mathbb{R}$ and $Var(X_i)=\sigma^2 \in (0, \infty)$. I'm asked to propose an estimator for the asymptotic variance in the following convergence and to prove that this estimator is consistent: $$\sqrt{n}\left(\ln \bar{X}_{n} - \ln \mu\right) \xrightarrow{d} \mathcal{N}\left(0,\frac{\sigma^2}{\mu^2}\right)$$ To do this, we have to use $\bar{X}_{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}$ and $B_n=\frac{1}{n}\sum_{i=1}^{n}X_{i}^{2}-\bar{X}_{n}^2$. I tried but didn't get anywhere. Thanks! AI: $\bar{X}_n$ is an estimator of $\mu,$ $B_n$ is an estimator of $\sigma^2.$ A straightforward estimator for the asymptotic variance is then $$V_n = B_n/(\bar{X}_n^2).$$ Are you able to show that $B_n$ and $\bar{X}_n$ are consistent estimators of $\sigma^2$ and $\mu$ respectively? If we have this consistency, we have by the continuous mapping theorem that also $\bar{X}_n^2$ is consistent for $\mu^2$ and we can then complete the proof that $V_n$ is consistent for $\sigma^2/\mu^2$ by using Slutsky's lemma.
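A quick simulation sketch of the consistency claim (NumPy; the underlying distribution, a Uniform(1, 5), is an arbitrary choice of mine with $\mu=3$ and $\sigma^2=4/3$):

import numpy as np

rng = np.random.default_rng(0)
mu, var = 3.0, (5 - 1)**2 / 12            # mean and variance of Uniform(1, 5)
for n in [100, 10_000, 1_000_000]:
    x = rng.uniform(1, 5, size=n)
    xbar = x.mean()
    b_n = (x**2).mean() - xbar**2
    print(n, b_n / xbar**2, var / mu**2)  # V_n settles near the target 4/27 ≈ 0.148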
H: Prove that it is a vector subspace Given $I\subset \mathbb{R}$ an interval, I want to prove that $C_0(I)$, the set of continuous functions $f:I\to \mathbb{R}$ such that, for all $\epsilon>0$, the set $\{x\in I : |f(x)|\geq \epsilon\}$ is compact, is a vector subspace of $C_b(I)=\{f:I\to \mathbb{R}, \text{continuous}\}$. We need to prove two items. The first, that $f=0\in C_0(I)$, is clear, because for $f=0$ the set in question is empty, hence compact. The problem is showing $f+\lambda g\in C_0(I)$ for $\lambda\in \mathbb{R}$ and $f,g\in C_0(I)$. Can somebody give me a hint? Thanks. AI: Hint: try it by proving closure under addition and scalar multiplication separately. Scalar multiplication is relatively straightforward. For the addition, consider $f, g \in C_0(I)$ and $\varepsilon > 0$, and begin by showing \begin{align*} \{x \in I : |f(x) + g(x)| \ge \varepsilon\} &\subseteq \{x \in I : |f(x)| + |g(x)| \ge \varepsilon\}\\ &\subseteq\left\{x \in I : |f(x)| \ge \frac{\varepsilon}{2}\right\} \cup \left\{x \in I : |g(x)| \ge \frac{\varepsilon}{2}\right\}. \end{align*} Hope that helps!
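For the scalar multiplication part that the hint calls straightforward, here is one possible sketch: for $\lambda\neq 0$ and $\epsilon>0$, $$\{x\in I : |\lambda f(x)|\geq \epsilon\} = \left\{x\in I : |f(x)|\geq \frac{\epsilon}{|\lambda|}\right\},$$ which is compact because $f\in C_0(I)$, while for $\lambda=0$ the set is empty, hence compact; so $\lambda f\in C_0(I)$. (For the addition, note that the displayed superset is a union of two compact sets, and the set on the left is relatively closed in $I$ by continuity of $f+g$; being a relatively closed subset of a compact set, it is itself compact.)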
H: Selecting a ball probability There are 11 white and 3 black balls in an urn. Balls are randomly removed from the urn until it is empty. What is the probability that the 8th ball drawn is black? Kind of stuck with this question; is Bayes' Theorem required here? AI: We have 3 black balls out of 14 in total. The probability that the $1^{st}$, the $2^{nd}$, or in general the $i^{th}$ ball is black is always $\frac{3}{14}$. To see evidence of this fact without a page of computations, let's modify the request as follows: calculate the probability that the $2^{nd}$ ball drawn is black. We can have the following situations: $WB$ with probability $\frac{11}{14}\cdot \frac{3}{13}$ and $BB$ with probability $\frac{3}{14}\cdot \frac{2}{13}$. The probability we are looking for is the sum of the two: $\frac{11}{14}\cdot \frac{3}{13}+\frac{3}{14}\cdot \frac{2}{13}=\frac{3(11+2)}{13\cdot 14}=\frac{3}{14}.$ You will get the same result for any position; try to verify it if you want.
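For completeness, here is a short counting argument for the general claim: if we treat all $14$ balls as distinguishable, all $14!$ orderings are equally likely, and the orderings with a black ball in position $i$ are obtained by choosing which black ball sits there ($3$ ways) and arranging the remaining $13$ balls ($13!$ ways), so $$P(i^{\text{th}}\text{ ball is black})=\frac{3\cdot 13!}{14!}=\frac{3}{14}\qquad\text{for every } i=1,\dots,14,$$ in particular for $i=8$.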
H: How do error bars change for variables squared? Let's say I have some variable $\mu$ with an uncertainty estimate: $$\mu = 2 \pm .5$$ Let's say I have another variable $\nu = \mu^2$. Is the uncertainty estimate in $\nu$ equal to the uncertainty in $\mu$ squared, such that $$\nu = 4 \pm .25$$ This does not seem to be right to me. What would be the appropriate way of getting the uncertainty in $\nu$? AI: A quantity together with its uncertainty can be written as $$ Y \pm u(Y). $$ Taking the square, you have $$ (Y\pm u(Y))^2=Y^2\pm 2Yu(Y)+u(Y)^2, $$ or, to first order (which usually makes more sense when $u(Y)$ is small), $$ (Y\pm u(Y))^2\approx Y^2\pm 2Yu(Y). $$ Is that OK for you?
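Plugging the numbers from the question into this first-order (linearized) propagation rule: $$u(\nu)\approx\left|\frac{d\nu}{d\mu}\right|u(\mu)=2|\mu|\,u(\mu)=2\cdot 2\cdot 0.5=2,$$ so $\nu \approx 4 \pm 2$ rather than $4 \pm 0.25$; the uncertainty is not simply squared.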
H: A sequence in the discrete metric converges iff it is eventually constant I have read previous answers about the proof but there is a small point I want to be sure of. First, if we assume that the sequence converges, $x_n \to x$, then for all $\varepsilon >0$ there is $N$ such that $d(x_n,x)< \varepsilon$ for all $n\geq N$. Let $\varepsilon = \frac{1}{2}$; because $x_n$ converges there is $N$ such that $d(x_{n_0},x)< \frac{1}{2}$ for every $n_0\geq N$. Now if $x_{n_0}\neq x$ then by definition of the discrete metric $d(x_{n_0},x)= 1$, which is a contradiction, and therefore it must be that $x_{n_0}=x$, which means that $x_n$ is eventually constant. Second, if $x_n$ is eventually constant, that means that there is $n_0$ such that $x_{n_0}=x_{n_0+1}=\dots$ Why can we assume that $x_{n_0}=x_{n_0+1}=\dots=x$? AI: You know there is $N$ such that $\forall n \ge N$ you have $d(x_n, x) <1/2$ and you know that this implies $x_n=x$ because the metric is discrete. Thus you proved that not only $x_N=x$, but that FOR ALL $n \ge N$ it happens that $x_n=x$. If the sequence is eventually constant, then it literally means that there are $x$ and $N$ such that $x_n=x$ for every $n \ge N$. So it trivially converges to that $x$.
H: How to find a basis for an eigenspace? Consider the real vector space $P_2(\mathbb{R})$ of real polynomials of degree $\leq 1$. Consider the inner product defined as $$ \langle p,q \rangle = p(0)q(0)+p(1)q(1) $$ and the linear operator $$ L: P_2(\mathbb{R}) \rightarrow P_2(\mathbb{R}) $$ defined as $$ L(\alpha + \beta X) = (8 \alpha + 2 \beta) + (\beta - 3 \alpha) X $$ Then I have to determine all eigenvalues for $L$ as well as bases for the eigenspaces, but I am not sure about the part about eigenspaces. I have found the eigenvalues to be $2$ and $7$. I know from my book that $$ E_L(\lambda) = N(A - \lambda I) $$ where $I$ is the identity matrix. Normally I use this to find the eigenspaces, and if I do so I get that $(-2,1)^T$ is a basis for $E_L(7)$ and that $(-1/3,1)^T$ is a basis for $E_L(2)$. Does this mean now that I have to express these bases as polynomials? So the desired bases are $p_1 = -2+x$ for $E_L(7)$ and $p_2 = -1/3 + x$ for $E_L(2)$. Have I understood this correctly? Thanks for your help in advance. AI: Yes you have. There is a simple way to verify this. Since $L(-2+x)=-14+7x=7(-2+x)$ and $L\left(-\frac{1}{3}+x\right)=-\frac{2}{3}+2x=2\left(-\frac{1}{3}+x\right)$, the polynomials $-2+x$ and $-\frac{1}{3}+x$ are eigenvectors of $L$ corresponding to distinct eigenvalues. As the two polynomials correspond to distinct eigenvalues, they are necessarily linearly independent. Now, notice that since $L$ acts on a vector space of dimension $2$, the eigenspaces corresponding to $7$ and $2$ must be $\operatorname{span}\{-2+x\}$ and $\operatorname{span}\{-\frac{1}{3}+x\}$ respectively. If not, suppose for instance that $E_L(7)\supsetneq\operatorname{span}\{-2+x\}$. Then there would be a vector $v\in E_L(7)$ that is linearly independent of $-2+x$. Given that the domain of $L$ is of dimension $2$, we would have that $E_L(7)\supseteq\operatorname{span}\{v,-2+x\}$, which equals the entire vector space. But this is clearly impossible, as $P_2(\mathbb R)$ contains, in particular, $-\frac{1}{3}+x$, which doesn't belong to $E_L(7)$.
H: How to prove $\lim_{x\rightarrow\infty}-x^2+\log(x^2)=-\infty$ I'm proving the following $$\lim_{x\rightarrow\infty} \bigg(-\frac{x^2}{2}+\frac{7}{2}\log(x^2+2\sqrt{2})\bigg)=-\infty$$ but how do I make precise that $-x^2$ goes to $-\infty$ faster than $\log(x^2)$ goes to $\infty$? This is part of a proof of convergence for heavy-tailed distributions. Thank you in advance. AI: Hint: if we put $t=x^2$, it is equivalent to prove that $$\lim_{t\to+\infty}\Bigl(-\frac t2+\frac 72\ln(t+2\sqrt{2})\Bigr)=-\infty.$$ Since $$\ln(t+2\sqrt{2})=\ln(t)+\ln\Bigl(1+\frac{2\sqrt{2}}{t}\Bigr),$$ we want $$\lim_{t\to+\infty}\Bigl(t\Bigl(-\frac{1}{2}+\frac 72\frac{\ln(t)}{t}\Bigr)+\frac 72\ln\Bigl(1+\frac{2\sqrt{2}}{t}\Bigr)\Bigr).$$ As $t\to+\infty$ we have $\frac{\ln(t)}{t}\to 0$ and $\ln\bigl(1+\frac{2\sqrt{2}}{t}\bigr)\to 0$, so the expression behaves like $t\cdot\bigl(-\frac12\bigr)$ and tends to $-\infty$.
H: Alternate approach on probability proof Let $A_1,A_2,...$ be an infinite sequence of events which is monotonically increasing, $A_n \subset A_{n+1}$ for every $n$. Let $A=\cup_{n=1}^{\infty} A_n$. Show that $P(A)=\lim_{n \rightarrow \infty} P(A_n)$. My approach: As the $A_i$'s are monotonically increasing, their respective probabilities are monotonically increasing as well. (Not sure if this holds, but my argument is that each subsequent $A_i$ is a larger set, so it has greater or equal probability.) Therefore, $P(A_1) \leq \cdots \leq P(A_n) \leq \cdots$. Claim: The sequence $P(A_i)$ is bounded above by $P(A)$. Suppose $P(A)$ is not an upper bound of the sequence $P(A_i)$; then there exists $n_1 \in \mathbb{N}$ such that $P(A_{n_1}) > P(A)$. But since $A_{n_1} \subset A=\cup_{n=1}^{\infty} A_n$, monotonicity gives $P(A_{n_1}) \leq P(A)$. Contradiction. Therefore, since the sequence $P(A_i), i \in \mathbb{N}$ is bounded above by $P(A)$, it converges to $P(A)$, which implies $P(A)= \lim_{n \rightarrow \infty} P(A_n)$. Please correct me if I'm wrong, I'm just curious whether this works or not. I know someone else has posted this question already, link: Increasing sequence of events and the probability of their limit. AI: This does not prove yet that the sequence "converges" to $P(A)$. (It does not exclude that it converges to some value less than $P(A)$.) For a real proof you should have a look at the sets $B_n=A_n\setminus\bigcup_{k=1}^{n-1}A_k$. They are disjoint and satisfy: $$\bigcup_{n=1}^{\infty} B_n=\bigcup_{n=1}^{\infty}A_n=A$$ Then consequently: $$P(A)=\sum_{k=1}^{\infty}P(B_k)=\lim_{n\to\infty}\sum_{k=1}^{n}P(B_k)=\lim_{n\to\infty}P(A_n)$$
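One small step worth spelling out in the last chain of equalities: because the $A_n$ are increasing, the disjoint sets $B_k$ satisfy $$\bigcup_{k=1}^{n} B_k = A_n,$$ so by finite additivity $\sum_{k=1}^{n}P(B_k)=P(A_n)$, which is exactly what lets us replace the partial sums by $P(A_n)$ in the limit; the first equality $P(A)=\sum_{k=1}^{\infty}P(B_k)$ is countable additivity applied to the disjoint $B_k$.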
H: How can I convert these Euler angles to a 3x3 rotation matrix I'm working with a set of stereo cameras and having trouble with the math for the rotation calibration in openCV. Each set of cameras comes with calibration data that includes three Euler angles describing the rotation from the left camera to the right camera. (It also includes translation data and distortion parameters.) The angles are described as using passive rotation, where you rotate about the first axis and then from the new position you rotate around the next axis, if I understand that correctly. Finally, the coordinate system's axes are defined like this: if you stood in a t-pose and could shoot lasers out of your eyes they would go in the positive z direction, your legs would be the positive y direction, and your right arm the positive x direction. edit: I forgot to add that I know the order of rotations is x, y, z. I tried the following approach. I define the R_x, R_y, and R_z matrices like this: R_x << 1, 0, 0, 0, cos(E_x[0]), -sin(E_x[0]), 0, sin(E_x[0]), cos(E_x[0]); R_y << cos(E_y[0]), 0, sin(E_y[0]), 0, 1, 0, -sin(E_y[0]), 0, cos(E_y[0]); R_z << cos(E_z[0]), -sin(E_z[0]), 0, sin(E_z[0]), cos(E_z[0]), 0, 0, 0, 1; Then I multiply them like this: R = R_x * R_y * R_z; I also tried every possible ordering of that multiplication. Still my results are either a blurred mess or rotated 90 degrees. So I'm missing something about how to do this. AI: Given what you've said about matrix-vector multiplication, the order you need for your rotations is certainly $$ R_z \cdot R_y \cdot R_x $$ because when applied to a vector $v$, this will result in $$ v \mapsto R_z (R_y (R_x \cdot v)) $$ so that rotation about $x$ is applied first, then rotation about $y$, etc. What's not clear is your sentence "The angles are described as using passive rotation where you rotate about the first axis, then from the new position you rotate around the next axis if I understand that correctly." The question that remains is whether the axes themselves are rotated as well. I'm going to assume not, but ... who knows? As for the end results being rotated 90 degrees, you might try out your experiment using 0-degree rotation angles for all three angles; the resulting matrix should be the identity matrix, and your images should not move at all. If they do, then maybe there's a row-column swap somewhere. It's also possible that your $R_y$ matrix should be R_y << cos(E_y[0]), 0, -sin(E_y[0]), 0, 1, 0, sin(E_y[0]), 0, cos(E_y[0]); Both conventions are used by various software. The one you wrote is better (in my mind, at least), but beauty is in the eye of the beholder, and all that. Even so, unless your y-rotation amount is approximately 45 degrees, this change would not create an off-by-90-degrees error, so I doubt it's the source of the problems.
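As a quick way to test the suggested order, here is a minimal C++ sketch, assuming the Eigen library (the << initialization in the question looks like Eigen's comma initializer); eulerXYZ is just an illustrative helper name, not part of OpenCV's calibration API:
#include <Eigen/Geometry>
#include <iostream>

// Build R = Rz * Ry * Rx, so that for a column vector v,
// R * v = Rz * (Ry * (Rx * v)) applies the x-rotation first, then y, then z.
Eigen::Matrix3d eulerXYZ(double ex, double ey, double ez) {
    Eigen::Matrix3d Rx = Eigen::AngleAxisd(ex, Eigen::Vector3d::UnitX()).toRotationMatrix();
    Eigen::Matrix3d Ry = Eigen::AngleAxisd(ey, Eigen::Vector3d::UnitY()).toRotationMatrix();
    Eigen::Matrix3d Rz = Eigen::AngleAxisd(ez, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    return Rz * Ry * Rx;
}

int main() {
    // Sanity check from the answer: all-zero angles must give the identity matrix,
    // in which case the images should not move at all.
    std::cout << eulerXYZ(0.0, 0.0, 0.0) << std::endl;
    return 0;
}
If the zero-angle test passes but real angles still look wrong, two common things to check are whether the angles are in radians (Eigen and std::sin/std::cos expect radians) and the active/passive (axes-rotated) convention discussed above.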
H: Convergence of series in Hilbert space I want to show that for $(a_n)$ a bounded sequence in $\mathbb{R}$, $x_n := \frac{1}{n} \sum_{k= 0}^n a_k e_k$ converges in norm (induced by the inner product) to 0, where $(e_n)$ is an orthonormal basis of the Hilbert space. If $c$ is an upper bound for $(|a_n|)$, I can show that $||x_n||^2 = \frac{1}{n} \big|\big|\sum_{k = 0}^n a_k e_k\big|\big|^2 = \frac{1}{n} \sum_{j,k=0}^n a_j a_k^* \langle e_j,e_k\rangle = \frac{1}{n} \sum_{k= 0}^n |a_k|^2 \leq c^2 \frac{1}{n}(n+1)$. But this converges to $c^2$ and not to 0. So now I am not sure how to find a stronger estimate in order to show my claim. AI: Actually, for real $a_n$, $$\|x_n\|^2=\frac{1}{n^2}\sum_{k=0}^na_k^2\leq \frac{c^2}{n}$$
H: $p$-Norm of Block Diagonal Matrix Let $A\in \mathbb{K}^{r\times r}$, $B\in \mathbb{K}^{(n-r)\times (n-r)}$ and $C = \mathbb{K}^{n\times n}$ such that \begin{equation} C = \begin{pmatrix} A & 0_{r\times (n-r)}\\ 0_{(n-r)\times r} & B\\ \end{pmatrix}. \end{equation} I want to prove $||C||_p = \max\{||A||_p, ||B||_p\}$ for all induced $p$-matrix norms. I already showed the direction "$\ge$" by writing \begin{equation} ||C||_p = \max_{||x||_p = 1} ||Cx||_p \ge \max_{\substack{||x||_p = 1\\ x_j=0\ \forall\ j>r}} ||Cx||_p = \max_{||x||_p = 1} ||Ax||_p = ||A||_p \end{equation} and the same for $B$, thus $||C||_p \ge \max\{||A||_p, ||B||_p\}$. However, I do not see an easy approach to prove the direction "$\le$". Any help is appreciated. AI: Hint to get the other inequality: use $p$-th powers. Let $x \in \mathbb{R}^r$ and $y \in \mathbb{R}^{n-r}$ with $\left\|\begin{bmatrix} x \\ y \end{bmatrix} \right\|_p^p = \|x\|_p^p + \|y\|_p^p=1$. Then try to get an estimate on $\left\|C \begin{bmatrix} x \\ y \end{bmatrix} \right\|_p^p$ Spoiler (solution) $$\left\|C \begin{bmatrix}x \\y\end{bmatrix}\right\|_p^p = \|Ax\|_p^p + \|By\|_p^p \le \|A\|_p^p \|x\|^p_p + \|B\|_p^p \|y\|^p_p \le \max\{\|A\|_p^p, \|B\|_p^p\} $$
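Spelling out the last inequality in the spoiler, since that is the only place the normalization is used: $$\|A\|_p^p \|x\|^p_p + \|B\|_p^p \|y\|^p_p \le \max\{\|A\|_p^p, \|B\|_p^p\}\bigl(\|x\|_p^p+\|y\|_p^p\bigr) = \max\{\|A\|_p^p, \|B\|_p^p\},$$ and taking $p$-th roots and then the supremum over all unit vectors $\begin{bmatrix} x \\ y \end{bmatrix}$ gives $\|C\|_p \le \max\{\|A\|_p,\|B\|_p\}$.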
H: Finding the order of elements in $\mathbb{Z}_4 \oplus \mathbb{Z}_2$ For the solution in part (b), I am confused about how they got that $(0,1)$ has order $2$. For the others, such as say $(2,1)$, isn't it just lcm$(|2|,|1|)=2$ for the order? I'm confused about how this works. AI: $(0,1)$ has order $2$ in $\mathbb Z_4\oplus\mathbb Z_2$, because $(0,1)\ne(0,0)$ but $(0,1)+(0,1)=(0,2)=(0,0)$.
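The lcm rule from the question still applies here; the point is to compute the orders of the components in their own groups: $$|(a,b)| = \operatorname{lcm}\bigl(|a|,|b|\bigr), \qquad |(0,1)| = \operatorname{lcm}(|0|,|1|)=\operatorname{lcm}(1,2)=2,$$ since $0$ has order $1$ in $\mathbb Z_4$ and $1$ has order $2$ in $\mathbb Z_2$; this is consistent with the direct computation above.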
H: Radius of convergence of $\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}$: is my method right? I am trying to find the radius of convergence of the power series $$\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}.$$ Is it valid to set $x^2=t$, substitute in the original series, and get $\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}t^{n}$, so that I avoid splitting the coefficient sequence into $a_k= \frac{(-1)^n}{(2n)!}$ for $k=2n$ and $a_k= 0$ otherwise? AI: This is a valid first step, sure. Just remember that if you find a finite radius of convergence for the series in terms of $t$, you then need to go back to the original $x$. For example, if you find the series converges when $|t|<2$, then you know it converges when $|x|<\sqrt{2}$. (Although that won't be a concern for this particular series.)
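To finish the computation along these lines: the ratio test applied to the $t$-series gives $$\left|\frac{c_{n+1}}{c_n}\right|=\frac{(2n)!}{(2n+2)!}=\frac{1}{(2n+1)(2n+2)}\longrightarrow 0,$$ so the $t$-series has infinite radius of convergence, and hence the original series converges for every $x\in\mathbb{R}$ (indeed it is the Maclaurin series of $\cos x$).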
H: Order of an element in $U(13)$ and $U(7)$. It is a fact that in $U(13)$, the order of $12$ is $|12|=2$ and in $U(7)$, the order of $6$ is $|6|=2$. But I don't understand why. What I am trying is this: $(12)(12) = 144 \equiv_{13} 1$ so the order should be 1... can anyone explain this to me? AI: The order of an element $a$ in $U(n)$ is the smallest positive integer $k$ such that $a^k \equiv 1 \pmod{n}$. With this, note that $12 \not\equiv 1 \pmod{13}$ but $12^2 \equiv 1 \pmod{13}$, so, the order of $12$ is in fact $2$.
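A quick way to see why the order is exactly $2$ in both cases: $$12\equiv -1 \pmod{13}\ \Rightarrow\ 12^2\equiv(-1)^2\equiv 1\pmod{13}, \qquad 6\equiv -1\pmod 7\ \Rightarrow\ 6^2\equiv 1 \pmod 7,$$ while $12\not\equiv 1\pmod{13}$ and $6\not\equiv 1\pmod 7$, so neither element has order $1$.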
H: In how many permutations of the integers 1,2,...,7 does 1 appear between 2 and 3? I am missing something in this and I need to discuss it. Step 1) We can arrange the numbers 4 to 7 in $4!$ different ways. Step 2) There are 4 distinct cases: a) We put '2', '1' and '3' in 3 of the 5 available spaces around the numbers 4 to 7, _N_N_N_N_. There are $C(5,3)=10$ ways to do that. b) We put '21' and '3', or '2' and '13'; there are $C(5,2)=10$ ways to do this for each case, so $2\cdot C(5,2)=20$ in total. c) We put '213' and there are 5 ways to do this. Sum: $4!\cdot(10 + 20 + 5) = 24 \cdot 35 = 840$ ways. But this is not the correct answer. Any ideas? Edit (quick fix): I only counted the cases where 2, 1, 3 come in that order. To fix this we also need to swap the positions of 2 and 3, so we have $2!\cdot 840=1680$. AI: You are counting the cases where $2$ appears first, then $1$, then $3$. However, when the problem asks you to compute permutations where "$1$ appears between $2$ and $3$," they mean that it is also valid for $3$ to appear first, then $1$, then $2$. For example, in $3415672$, it is valid to say that $1$ is between $2$ and $3$. It does not take much more work to include these cases; can you see the quick fix? By the way, there is a quicker way to do this problem. Among the $7!$ permutations of $\{1,\dots,7\}$, the number of permutations where $1$ is between $2$ and $3$ equals the number of permutations where $2$ is between $1$ and $3$, which is in turn equal to the number of permutations where $3$ is between $1$ and $2$. This is because switching around the names of the numbers does not change the structure of the problem. Since all three of these cases are equal, the number of each must be $7!/3$.
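As a cross-check of the two approaches: $$\frac{7!}{3}=\frac{5040}{3}=1680=2\cdot 4!\cdot(10+20+5),$$ so the symmetry argument and the corrected direct count agree.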