H: A problem on separable extensions over a field of characteristic $p>0$ Problem: Let $k$ be a field of characteristic $p>0$ and let $\alpha$ be algebraic over $k$. Show that $\alpha$ is separable if and only if $k(\alpha)=k(\alpha^{p^n})$ for all positive integers $n$. I have a solution below but I am not convinced that it works. Other approaches are always welcome. Thank you. AI: Suppose we have $k(\alpha)=k(\alpha^{p^n})$ for all positive integers $n$. We are to show that $\alpha$ is separable (i.e. the minimal polynomial of $\alpha$ over $k$ has no repeated roots in any extension of $k$). Assume, for contradiction, that $\alpha$ is not separable. Since $k$ has characteristic $p>0$, the minimal polynomial of $\alpha$ over $k$ is then of the form $g(x^p)$ for some $g(x)\in k[x]$. $\therefore g(\alpha^p)=0\implies g(x)$ annihilates $\alpha^p\implies$ $\textit{min-poly }_k(\alpha^p) $ divides $g(x)$. Now from $k(\alpha)=k(\alpha^{p})$ we have $$[k(\alpha):k]=[k(\alpha^p):k]=\deg(\textit{min-poly }_k(\alpha^p) )\le\deg(g(x))< p\deg(g(x))=\deg(g(x^p))=[k(\alpha):k]$$ But that is a contradiction, so $\alpha$ must be separable. Conversely, suppose $\alpha$ is separable; we are to show that $k(\alpha)=k(\alpha^{p^n})$ for all positive integers $n$. It is clear that for every positive integer $n$, $\alpha^{p^n}\in k(\alpha)\implies k(\alpha^{p^n})\subseteq k(\alpha)$. Fix a positive integer $n$, let $f(x)$ be the minimal polynomial of $\alpha$ over $k$, and let $g(x)$ be the minimal polynomial of $\alpha^{p^n}$. Then $[k(\alpha^{p^n}):k]=\deg(g)\le \deg(f)=[k(\alpha):k]$. Let $L$ be the splitting field of $f$ over $k(\alpha)$, and define $I:=\{ \sigma\mid\sigma:k(\alpha)\to L \text{ is a } k\text{-embedding} \}$. Observe that the number of elements of $I$ equals $[k(\alpha):k]$, since $\alpha$ is separable. Moreover, $\{\sigma(\alpha)\mid\sigma\in I\}$ is the set of $[k(\alpha):k]$ distinct roots of $f$ in $L$. 
Then in $L$ the polynomial $f(x)$ can be written as $$f(x)=\prod_{\sigma\in I}\left(x-\sigma(\alpha)\right)$$ Next we look at $g(x)$. $g(x)$ is irreducible in $k[x]$. We have $\alpha^{p^n}\in L$ a root of $g(x)$, so $g(x)$ splits in $L$. For every $\sigma\in I$, $\sigma(\alpha^{p^n})$ is a root of $g(x)$, and these roots are distinct $(\because \sigma_1(\alpha^{p^n})=\sigma_2(\alpha^{p^n})\implies(\sigma_1(\alpha))^{p^n}=(\sigma_2(\alpha))^{p^n}\implies \sigma_1(\alpha)=\sigma_2(\alpha)\implies \sigma_1=\sigma_2)$, the second implication holding because the Frobenius map $x\mapsto x^p$ is injective on the field $L$. So $\deg(g)$ is at least the number of elements of $I$. $\therefore \deg(g)=[k(\alpha):k]=[k(\alpha^{p^n}):k]\implies k(\alpha)=k(\alpha^{p^n})$
H: Compact condition for base elements Suppose $X$ is a topological space and $\{B_i\}$ form a base for the topology on $X$, where the $i$ run over some index set $J$. $X$ is said to be compact if every open cover of $X$ contains a finite subcover of $X$. Suppose you know that for every covering of $X$ by base elements, $X=\bigcup_{i\in I}B_i$, there exists a finite subcover $X=\bigcup_{i\in S}B_i$ where $S$ is a finite subset of $I$. Does this then imply that $X$ is compact? If we have an arbitrary open covering of $X$, say $X=\bigcup_{j\in J} U_j$, then for each $j\in J$, there exists a covering of $U_j$ by some base elements $B_{j_i}$. Putting these together, we have $X=\bigcup_{i\in I,j\in J}B_{j_i}$, for which we know there is a finite subcover. But does this then imply that our original cover of arbitrary open sets $U_j$ has a finite subcover? AI: Yes, checking a basis is enough. Perhaps the following is clearer: Let $\mathscr{U}$ be an open cover of $X$. For each $x\in X$ and each $U\in\mathscr{U}$, choose $i(x,U)\in J$ such that $x\in B_{i(x,U)}\subset U$. Then $$ X\subset \bigcup_{x\in X,U\in\mathscr{U}}B_{i(x,U)} $$ By assumption, there are finitely many $x_1,\dots,x_n\in X$ and $U_1,\dots,U_m\in \mathscr{U}$ such that $$ X\subset \bigcup_{k=1}^n\bigcup_{l=1}^mB_{i(x_k,U_l)} $$ Now use that $B_{i(x,U)}\subset U$, so that $$ X\subset \bigcup_{l=1}^m U_l $$ and we're done. Some remarks: The previous proof seems to rely on the axiom of choice. This is not necessary, since we can define, for every $x\in X$ and $U\in\mathscr{U}$, the set $J(x,U)=\{i\in J:x\in B_i\subset U\}$. Then $$ X\subset \bigcup_{x\in X,U\in\mathscr{U}}\bigcup_{i\in J(x,U)}B_i $$ The rest of the argument is identical. The statement is still true when dealing with a subbasis, not just a basis. This is a non-trivial result, usually called the Alexander Subbasis Theorem. See here for a proof.
H: Kan extension and left adjoint This is a continuation of the question asked here: Kan extension "commutes" with a certain left adjoint. Let $\mathcal{A},\mathcal{B}$ be small categories and $\mathcal{C},\mathcal{D}$ arbitrary categories. Consider functors $F:\mathcal{A}\rightarrow\mathcal{B}$, $G:\mathcal{A}\rightarrow\mathcal{C}$, $K:\mathcal{B}\rightarrow\mathcal{C}$, $R:\mathcal{D}\rightarrow\mathcal{C}$ and $L:\mathcal{C}\rightarrow\mathcal{D}$, where $L$ is left adjoint to $R$ and $K=\text{Lan}_F(G)$. We want to show that $$L\circ\text{Lan}_F(G)=\text{Lan}_F(L\circ G).$$ I have already convinced myself that for every functor $H:\mathcal{B}\rightarrow\mathcal{D}$, we get the following bijections: $$ \begin{align} \text{Nat}\left(L\circ\text{Lan}_F(G),H\right) & \cong \text{Nat}\left(\text{Lan}_F(G),R\circ H\right) \\ & \cong \text{Nat}\left(G,R\circ H\circ F\right) \\ & \cong \text{Nat}\left(L\circ G,H\circ F\right)\\ & \cong \text{Nat}\left(\text{Lan}_F(L\circ G),H\right) .\end{align}$$ At this point, Borceux says the following: So $L\circ\text{Lan}_F(G)\cong\text{Lan}_F(L\circ G)$, by putting successively $H=L\circ\text{Lan}_F(G)$ and $H=\text{Lan}_F(L\circ G)$. Following this procedure, I get $$\text{Nat}\left(L\circ\text{Lan}_F(G),L\circ\text{Lan}_F(G)\right)\cong\text{Nat}\left(\text{Lan}_F(L\circ G),L\circ\text{Lan}_F(G)\right)$$ and $$\text{Nat}\left(L\circ\text{Lan}_F(G),\text{Lan}_F(L\circ G)\right)\cong\text{Nat}\left(\text{Lan}_F(L\circ G),\text{Lan}_F(L\circ G)\right).$$ How do I use this to conclude $L\circ\text{Lan}_F(G)\cong\text{Lan}_F(L\circ G)$? Anyway, mustn't I show this as an equality instead of an isomorphism? Edit: I have managed to prove the claim directly using the functor $L\circ\text{Lan}_F(G):\mathcal{B}\rightarrow\mathcal{D}$ and the natural transformation $L*\alpha:L\circ G\Rightarrow L\circ\text{Lan}_F(G)\circ F$, where $\alpha: G\Rightarrow\text{Lan}_F(G)\circ F$ is the canonical natural transformation. 
I am still not clear about Borceux's reasoning. AI: You can't get an equality anyway, because $\mathrm{Lan}$ is only defined up to isomorphism, so the best you can hope for is a canonical isomorphism - which is indeed what you get. Borceux's reasoning is just reproving the Yoneda lemma; I don't understand why he does that again (or maybe he hasn't proved Yoneda yet? That would be awfully weird). But essentially the point is that if you have a natural isomorphism $\hom(A,-)\cong \hom(B,-)$, then you get an isomorphism $B\cong A$, given by the image of $id_A\in \hom(A,A)$ under the map $\hom(A,A)\to \hom(B,A)$. That's why you "plug in $A$" (and to find its inverse you plug in $B$ and take the unique preimage of $id_B$): this is essentially a special case of the Yoneda lemma.
H: Rank of $R^n$ in characteristic $n$ I'm reading Deligne-Milne's introduction to Tannakian categories, and I noticed a troubling consequence of the definition of rank in a rigid ACU tensor category $(\mathcal{C}, \otimes)$. Specifically, the rank of an object $X$ is defined as the trace of its identity morphism, and the trace of $X$ is defined as the natural map from endomorphisms of $X$ to endomorphisms of the unit object $\mathbb{1}$ induced by evaluation of the internal End object, which is isomorphic to $X^{\vee} \otimes X$. $$ \mathrm{Tr}_X = \mathrm{Hom}(\mathbb{1}, -)(\underline{Hom} (X, X) \overset{\sim}{\to} X^{\vee} \otimes X \overset{\mathrm{ev}}{\to} \mathbb{1}): \mathrm{End}(X) \to \mathrm{End}(\mathbb{1}). $$ If $\mathcal{C}$ is the category of $R$-modules for a commutative ring $R$, then the module $R^n$ has rank $n$ (identified with the multiplication action of $n$ on $\mathbb{1} = R$) for all $n$. In characteristic $0$, this is exactly what I would expect. But in the case $\mathrm{char} (R) = n$, this means the rank of $R^n$ is $0$. Moreover, because the rank can be identified with an element of $R$, there is no meaningful notion of rank that can distinguish any object $X$ from $X \oplus R^n$. Are there ways around this problem working in tensor categories with positive characteristic? Should I consider an alternative framework, or somehow try to define trace as an element of some ring other than $R$, e.g. its $(n)$-adic completion? Or is this just a failure of my intuition that I should work to adjust? AI: I think the takeaway is that this notion of the categorical dimension or rank of an object really doesn't detect as much in positive characteristic as it does in characteristic zero. 
Of course you can try to remedy this by looking at more refined invariants; here is a paper that defines a $\mathbb{Z}_p$-valued notion of categorical dimension for rigid ACU tensor categories in characteristic $p$: https://arxiv.org/abs/1510.04339 Still, while the $p$-adic dimension defined there does greatly refine the ordinary categorical dimension, if you look at the paper you'll see it still has some quirks that make it not quite behave how you want a notion of dimension to in all situations. Ultimately, rigid ACU tensor categories in characteristic $p$ are just more complicated than their characteristic zero analogs and there are still lots of things that aren't well understood about them.
H: Problem with extension of a continuous function Let $X$ be a first-countable topological space, let $Y$ be a Hausdorff topological space, let $A\subset X$ be a subset of $X$, and let $f:A\rightarrow Y$ be a continuous function. Prove that, if there is a continuous extension $$\overline{f} :\overline{A}\rightarrow Y,$$ then $\overline{f}$ is uniquely determined by $f$. I thought that: If there is $g$ that is another extension of $f$, I call $Z=\lbrace x\in \overline{A}\mid \overline{f} (x)=g(x)\rbrace$. Then, $Y$ is a Hausdorff space so, $Z$ is closed in $X$ and, $A$ is dense in $\overline{A}$ so $A\subseteq Z\Rightarrow \overline{A}\subseteq Z\Rightarrow \overline{A} = Z$ But I don't think it's right because I didn't use the fact that $X$ is first-countable. I also have to use the fact that $f$ is continuous, so that $x_{n}\rightarrow x\Rightarrow f(x_{n})\rightarrow f(x)$. Can someone help me? AI: Your argument works. You may need to elaborate a bit on Then, $Y$ is a Hausdorff space so, $Z$ is closed in $X$ and it may be preferable to write $Z$ is closed in $\overline{A}$ there. Whether you need to elaborate or not depends on what properties of Hausdorff spaces and continuous maps can be assumed as generally known. First countability of $X$ plays no role in it; it was probably assumed to enable arguments with sequences for those who aren't yet used to topological arguments using open and closed sets, neighbourhoods, preimages etc. I find your argument much preferable to a sequence (or net/filter for the version not assuming first countability) argument.
H: Almost everywhere pointwise convergence I am trying to solve this problem Let $a_n$ be a sequence of numbers so that $\lim_{n \to \infty} \sin(a_nx)$ exists pointwise almost everywhere on $\mathbb{R}$. Show that $\lim_{n \to \infty}a_n$ exists. I tried to use Egoroff's theorem and other things, but I could not solve it. AI: There is a stronger result: Suppose $\sin(a_nx)$ converges pointwise on a set of positive measure. Then $a_n$ converges to a finite limit. Proof: Let $E$ be a set of positive and finite measure on which $\sin(a_{n}x)$ converges pointwise, and let $f$ denote the pointwise limit. We first prove $(a_n)$ must be bounded. If not, then WLOG there exist $0<a_{n_1} < a_{n_2} < \cdots \to \infty.$ By DCT we have $$\tag 1 \int_E \sin^2(a_{n_k}x)\,dx \to \int_E f(x)^2\,dx.$$ The left side of $(1)$ equals $$\int_E \frac{1-\cos(2a_{n_k}x) }{2}\, dx = m(E)/2-\frac{1}{2}\int_E \cos(2a_{n_k}x)\, dx.$$ Because $a_{n_k}\to \infty,$ the Riemann-Lebesgue lemma shows the last integral $\to 0.$ Hence $m(E)/2=\int_E f(x)^2\,dx.$ On the other hand, $$\int_E f(x)^2\,dx = \lim \int_E f(x)\sin(a_{n_k}x)\,dx.$$ The limit on the right is $0,$ again using RL. We therefore have $m(E)/2=0,$ contradiction. This proves $(a_n)$ must be bounded. So now assume the bounded sequence $(a_n)$ has the given limit property but $\lim a_n$ fails to exist. Then $$L=\liminf a_n< \limsup a_n = M.$$ There exist subsequences $a_{l_k}\to L$ and $a_{m_k}\to M.$ It follows that $$\sin (a_{l_k}x) \to \sin (Lx),\,\, \sin (a_{m_k}x) \to \sin (Mx)$$ for $x\in E.$ We conclude $\sin (Lx)=\sin(Mx)$ on $E.$ Since these are analytic functions, we have $\sin (Lx)=\sin(Mx)$ everywhere. Differentiating then gives$$L\cos (Lx) = M\cos (Mx)$$everywhere. Now plug in $x=0$ to get $L=M,$ contradiction. Therefore $\lim a_n$ exists.
H: What is the mathematical notation for a dependency length calculation algorithm? I'm doing computational linguistics research in Python. I have written an algorithm that calculates the dependency length of any sentence, but I want to describe it in simple mathematical notation. The idea is simple: Any sentence is a set, whereas a word in any of those sentences is an element of that sentence. Thus, $x$ is a word and $A$ is the sentence $A$. $x \in A$ Moreover, any $x$ contains a subset. In our example of sentence A (see figure), the verb 'threw' contains John, out, trash. Each element of the subset has a property that represents the distance between it and its head. The result I want is to sum all those distances to get the sentence's total dependency length. AI: Sentence $A$ is a sequence of $n$ words, where each word is a set of other words with the uniqueness property that if $w \in x \in A$ and $w \in y \in A$ then $x =y$. We can denote the $k^{\text{th}}$ word in $A$ as $A_k$. The total dependency length of a sentence $A$ can be denoted $$\sum_{k=1}^n\sum_{w \in A_k} \phi(w),$$ where $\phi$ is a function into $\mathbb{N}$ satisfying $\phi(w) = d$, where $w=A_j$, the unique superset of $w$ is $A_k$, and $|j-k|=d$. More compactly, the total dependency length of a sentence $A$ can be denoted $$\sum_{k=1}^n\sum_{w = A_j \in A_k} |j-k|$$
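The double sum above can be sanity-checked with a tiny Python sketch. Here a sentence is stored as a list of head positions (one per word, `None` marking the root verb), which is an illustrative encoding of my own rather than the set-based one above; the total dependency length is then the sum of the distances $|j-k|$.

```python
# Minimal sketch: total dependency length of a sentence, assuming each word
# stores the position of its head (None marks the root).
def total_dependency_length(heads):
    """heads[i] is the head position of word i, or None for the root."""
    return sum(abs(i - h) for i, h in enumerate(heads) if h is not None)

# Illustrative parse of "John threw out the trash":
# John->threw, threw=root, out->threw, the->trash, trash->threw
length = total_dependency_length([1, None, 1, 4, 1])
```

For this toy parse the distances are $1+1+1+3=6$.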
H: What operation does this algorithm do on graphs? I have to find out what this algorithm computes on graphs, knowing that the graph is given by $ G = (V, E) $, with arcs labeled by weights $ w $, and that $ G \setminus e $ denotes the graph with vertices $ V $ and arcs $ E \setminus \{e\} $. Writing $ V_G $ for the vertices of $ G $ and $ E_G $ for its arcs, the graph is assumed connected. I understand that the first line orders the values of $ w $ in decreasing order, then it goes through the arcs, but after the if condition I don't understand what the next line does, $ G \leftarrow G \setminus e $. AI: It goes through the edges in order of decreasing weight. If removing an edge would disconnect the graph, that edge is kept; otherwise, it is deleted. When the algorithm is done, what’s left is a spanning tree for $G$. To see this, let $G'$ be the graph that is returned by the algorithm. Clearly $G'$ is connected: the algorithm never removes an edge whose removal would disconnect the graph. Suppose that $G'$ contained a cycle $C$. Let $e$ be the first edge of $C$ in the sorted list produced by the first line of the algorithm. When the algorithm reached $e$ in the for loop, all of the edges of $C$ were still in the graph, so removing $e$ would not have disconnected the graph, and therefore $e$ would have been removed, contradicting the fact that $e$ is an edge of $G'$. Thus, $G'$ cannot contain a cycle. Being connected and acyclic, $G'$ is a tree and hence a spanning tree for $G$.
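For concreteness, here is a small Python sketch of the procedure described above (sort edges by decreasing weight, delete an edge whenever the graph stays connected). The graph representation and helper names are my own choices, not taken from the question's pseudocode.

```python
# Reverse-delete sketch: keep an edge only if removing it would disconnect G.
from collections import defaultdict

def is_connected(vertices, edges):
    """Check connectivity of an undirected graph by depth-first search."""
    if not vertices:
        return True
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen == set(vertices)

def reverse_delete(vertices, weighted_edges):
    """weighted_edges: list of (weight, u, v). Returns a spanning tree's edge set."""
    edges = {(u, v) for _, u, v in weighted_edges}
    for _, u, v in sorted(weighted_edges, reverse=True):  # decreasing weight
        trial = edges - {(u, v)}
        if is_connected(vertices, trial):  # removal keeps the graph connected,
            edges = trial                  # so the edge is deleted
    return edges

# Example: a triangle; only the heaviest edge lies on a cycle when reached.
tree = reverse_delete({1, 2, 3}, [(3, 1, 2), (2, 2, 3), (1, 1, 3)])
```

On the triangle example the weight-3 edge is deleted and the two lighter edges remain, which is both a spanning tree and (as it happens for reverse-delete) a minimum one.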
H: True/False question based on quotient groups of $S_{n} $ and $A_{n} $. I am trying assignment questions in abstract algebra and I need help with the following True/False question. Which of the following are true? 1. Every finite group is a subgroup of $A_{n} $ for some $n\geq 1.$ 2. Every finite group is a quotient of $A_{n} $ for some $n\geq 1$. 3. No finite group is a quotient of $S_{n} $ for $n\geq 3.$ I think 2 can't be true, as a quotient group of $A_{n} $ will also have even cardinality and a group can be of odd cardinality. For 3, I need to know all the quotient groups of $S_{n} $, which are $S_{n} $ and $\{0,1\}$, and so $Z_{2} $ is an abelian group as asked in 3. Hope I am right!! Can anyone please tell me in detail how I can prove 1. AI: 1. This is true. Every finite group is isomorphic to some subgroup of some $S_n$, and $S_n$ is isomorphic to a subgroup of $A_{n+2}$. 2. This is false. It follows from the fact that $A_n$ is simple if $n>4$. 3. This is false: the group $S_n$ itself is a quotient of $S_n$, for every $n\in\Bbb N$.
H: Show that $\binom{p}{0} + \binom{p+1}{1} + \binom{p+2}{2} +\dots+\binom{p+q}{q}$=$\binom{p+q+1}{q}$ How can I prove that $\binom{p}{0} + \binom{p+1}{1} + \binom{p+2}{2} +\dots+\binom{p+q}{q}$=$\binom{p+q+1}{q}$ using a combinatorial argument? The left part is all the permutations of $p$ white balls that have $0\le k \le q$ black balls but I don't know how I can relate this with the right part. AI: Your identity is: $$\sum_{k=0}^q \binom{p+k}{k} = \binom{p+q+1}{q}$$ Rewrite as: $$\sum_{k=0}^q \binom{p+k}{p} = \binom{p+q+1}{p+1}$$ The RHS counts $(p+1)$-subsets of $\{1,\dots,p+q+1\}$. The LHS does so by conditioning on the value $p+k+1$ of the largest element. Once you have chosen that element to be the largest, choose the remaining $p$ elements from among $\{1,\dots,p+k\}$.
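The identity is easy to spot-check numerically with `math.comb` from the Python standard library:

```python
# Spot-check the hockey-stick identity sum_{k=0}^q C(p+k, k) = C(p+q+1, q).
from math import comb

def lhs(p, q):
    return sum(comb(p + k, k) for k in range(q + 1))

def rhs(p, q):
    return comb(p + q + 1, q)

ok = all(lhs(p, q) == rhs(p, q) for p in range(10) for q in range(10))
```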
H: How many digits are there in the product of $(3698765432123456789)$ and $(345678909876543)$? How many digits are there in the product of $(3698765432123456789)$ and $(345678909876543)$? I could not find any formula to solve it and I am stuck. Can you suggest any formula or way to do it? AI: The first number has $19$ digits, and the second has $15$. If you write them in scientific notation as $a\times 10^{18}$ and $b\times 10^{14}$, the product is $ab\times 10^{32}$. It’s clear that $10<ab<100$, so $ab\times 10^{32}$ has $2+32=34$ digits.
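Since Python integers are exact and arbitrary precision, a direct check confirms the count of $34$ digits:

```python
# Verify the digit count directly with exact big-integer arithmetic.
a = 3698765432123456789
b = 345678909876543
digits = len(str(a * b))
```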
H: 3.85 of LADR by Sheldon Axler I'm a little confused on the proof of the implication of $(c)\to(a)$. In his proof, Axler says to take two elements $u_1, u_2$ of $U$ such that $v+u_1=w+u_2$. Whatever follows after that, I understand, but this beginning I don't. How can we just suppose this from $(c)$? Here is the theorem: Suppose $U$ is a subspace of $V$ and $v,w\in V$. Then, the following are equivalent. (a) $v-w\in U$ (b) $v+U=w+U$ (c) $(v+U)\cap(w+U)\neq \varnothing$ AI: Axler is assuming that $(v+U)\cap(w+U)\ne\emptyset$. So, take a vector $z\in(v+U)\cap(w+U)$. Then $z=v+u_1$ for some $u_1\in U$ (since $z\in v+U$) and $z=w+u_2$ for some $u_2\in U$ (since $z\in w+U$). But then $v+u_1=w+u_2$.
H: Let $X$ be a standard normal random variable and $Z$ be a random variable taking values $\{-1,1\}$ with probability $\frac{1}{2}$ Let $X$ be a standard normal random variable and $Z$ be a random variable taking values $\{-1,1\}$ with probability $\frac{1}{2}$ Let $Y=XZ$, determine whether $X$ and $Y$ are independent. As it turns out $Y$ will also be a standard normal random variable. For independence I am getting hung up on. If they are independent then we will have $P(X \le y, Y \le y) = P(X \le y)P(Y \le y)$ I am not quite convinced of either independence nor dependence. I am trying to write it as $P(X \le y, Y \le y) = P(X \le y)P(Y \le y \mid X \le y) = P(Y \le y)P(X \le y \mid Y \le y) $ Which then I can condition further on the value of $Z$ but it seems maybe it is possible to see it without conditioning that far. AI: For $y <0$ you have $$P(X <y )=P(Y <y ) \lt \frac12$$ and $$P(X \le y, Y \le y) = P(X \le y,Z=1) \\= P(X \le y)P(Z=1) = P(X \le y) \times \frac12 \\> P(X <y )P(Y <y )$$
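The inequality in the answer can be checked deterministically (no simulation) by writing the standard normal CDF $\Phi$ with `math.erf`; for $y<0$ the event $\{X\le y, Y\le y\}$ forces $Z=1$, so the joint probability is $\Phi(y)/2$, while independence would require $\Phi(y)^2$.

```python
# Exact check that X and Y = XZ fail the product rule at a negative threshold.
from math import erf, sqrt

def phi(y):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

y = -1.0
joint = phi(y) / 2     # P(X <= y, Y <= y), from the argument in the answer
product = phi(y) ** 2  # what independence would predict
```

With $y=-1$, the joint probability is about $0.079$ against a product of about $0.025$, so $X$ and $Y$ are not independent even though both are standard normal.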
H: Find the probability of first rolling an even number and then rolling a sum of six The sides of a cube show numbers $2, 3, 3, 4, 4, 4$. Alice is rolling this cube three times. Find the probability that the first roll results in an even number, and the sum of the numbers obtained from the second and third rolls is six. My Work: $\frac46\times\frac{3}{36}=\frac{1}{18}$ $\frac46$ = $4$ even numbers / $6$ possible outcomes $3$ = Amount of Successful Outcomes when rolling a die twice ($2 + 4 = 6$ is one, $4 + 2 = 6$ is another, and $3 + 3$ is the third) $6^2 = 36$ possible outcomes/arrangements when rolling a die twice. so $\frac{3}{36}$ $\frac46\times\frac{3}{36}=\frac{1}{18}$ Did I do this correctly? AI: Yes, $4$ faces out of $6$ have an even number, so you are correct on the first roll being even having probability $\frac46$. Your analysis about the second and third rolls having a sum of $6$ is not correct. One way is to roll a $2$, then roll a $4$. This has probability $\frac16\times\frac36=\frac{3}{36}=\frac{1}{12}$ Another is to roll a $4$, then roll a $2$. This has probability $\frac36\times\frac16=\frac{3}{36}=\frac{1}{12}$ Finally, you can roll a $3$, followed by another $3$. This has probability $\frac26\times\frac26=\frac{4}{36}=\frac{1}{9}$ The overall probability of rolls $2$ and $3$ having a sum of $6$ is the sum of those probabilities. $\frac{3}{36}+\frac{3}{36}+\frac{4}{36}=\frac{10}{36}=\frac{5}{18}$ To answer your question, multiply the probability of first roll even by the probability of the second and third rolls having sum of $6$ $\frac46\times\frac{5}{18}=\frac{20}{108}=\frac{5}{27}$
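The answer $\frac{5}{27}$ can be confirmed by exact enumeration over all $6^3$ equally likely ordered outcomes, using `fractions` to avoid any rounding:

```python
# Enumerate the three-roll experiment exactly; the faces come from the question.
from fractions import Fraction
from itertools import product

faces = [2, 3, 3, 4, 4, 4]
total = Fraction(0)
for r1, r2, r3 in product(faces, repeat=3):
    if r1 % 2 == 0 and r2 + r3 == 6:
        total += Fraction(1, 6 ** 3)
```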
H: Lebesgue dominated convergence counterexample I'm working on the following problem: Given a sequence of integrable functions $f_n: \mathbb{R} \to \mathbb{R}$ with $f_n \to 0$ pointwise and $|f_n(x)|≤ \frac{1}{|x|+1}$ for all $x$ and $n≥1$, prove or find a counterexample of the following assertion: $$\lim_{n \to \infty}\int_{-\infty}^{\infty} |f_n(x)| dx=0$$ I'm thinking that this is false as the function $\frac{1}{|x|+1}$ may not dominate $|f_n(x)|$ for all $x$, but I'm not sure what sequence of $f_n$'s would serve as a counterexample here. AI: As $$\int_0^\infty\frac{dx}{|x|+1}=\int_0^\infty\frac{dx}{x+1}=\infty,$$ for all $n$, there is an $a_n>n$ with $$\int_n^{a_n}\frac{dx}{x+1}=1.$$ Let $f_n(x)$ equal $1/(x+1)$ on the interval $[n,a_n]$ and $0$ elsewhere.
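A quick numerical sanity check of this counterexample: the defining condition $\int_n^{a_n}\frac{dx}{x+1}=1$ works out to the closed form $a_n=e(n+1)-1$ (my own computation, not stated in the answer), and the integral of $|f_n|$ is then exactly $1$ for every $n$ even though $f_n\to 0$ pointwise, since the supporting intervals $[n,a_n]$ march off to infinity.

```python
# Check that each f_n integrates to 1 under the choice a_n = e*(n+1) - 1.
from math import e, log

def a(n):
    return e * (n + 1) - 1

def integral_abs_fn(n):
    """Closed form of the integral of 1/(x+1) over [n, a(n)]."""
    return log((a(n) + 1) / (n + 1))
```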
H: Number of functions $f: X \to X$ with $k$ being the minimal such that $f^k(a) = b$ With some notation added, I am trying to calculate $\Psi_X$ where: Given a finite set $X$, define $\Psi_X: X \times X \times \mathbb{N} \to \mathbb{N}$ where $\Psi_X(a, b, k)$ is the number of functions $f: X \to X$ such that $k$ is minimal with $f^k(a) = b$, where the power of $f$ is in the composition sense, i.e. $f^3(a) = f(f(f(a)))$. If we take for example $k=2$ and $X = \{ x_1, x_2, x_3 \}$ then $\Psi_X(x_1,x_2,2) = 3$ This is because $f(x_1)$ cannot be $x_2$, since that contradicts the minimality of $k=2$, and $f(x_1)$ cannot be $x_1$, because then $f^2(x_1) \neq x_2$; hence we must have $f(x_1) = x_3$ and hence $f(x_3) = x_2$, and so $f(x_2)$ can be any of the three values. As we can notice, $\Psi_X$ is apathetic to the choice of $x_1, x_2 \in X$ so we can more conveniently define $\Psi_X: \mathbb{N} \to \mathbb{N}$. Generalizing this, we can define $\Psi: \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ to be $\Psi(n,k) = \Psi_{\{1,2,\dots,n\}}(k)$ which is equal to $\Psi_X(k)$ for all $|X| = n$. So this goes down to calculating $\Psi: \mathbb{N} \times \mathbb{N} \to \mathbb{N}$. By the example above, we can see $\Psi(3,2) = 3$. It is easy to see that $\Psi(n,1) = n^{n-1}$ because we can freely define the function except at one element. By looking at these 2 examples only I thought we might have $\Psi(n,k) = n^{n-k}$ but this is not the case for all $n,k$: For example, if we calculate for $n=4$ and $k=3$ then denote for convenience $X = \{x_1, x_2,x_3,x_4\}$ and solve $\Psi_X(x_1,x_4,3)$ So we know $f(x_1) \in \{x_2,x_3 \}$. Assume the first case is $f(x_1) = x_2$; then $f(x_2) \in \{x_1,x_3\}$. We can notice here that $f(x_2) = x_1$ would lead to $f^3(x_1) \neq x_4$ so we must have $f(x_2) = x_3$ and hence $f(x_3) = x_4$ so that we have $f^3(x_1) = x_4$ indeed. Here we have 4 choices for $x_4$. The second case is symmetric and disjoint so we get $8$ functions in total. 
Hence, for $n=4,k=3$ we get a different form, which is $\Psi(n,k)=2n^{n-k}$. My next guess was $\Psi(n,k) = d(n,k)n^{n-k}$ for some function $d(n,k)$ One more example shows: For $X = \{x_1, x_2,x_3,x_4\}$ and $\Psi_X(x_1,x_4,2)$ we get: $f(x_1) \in \{x_2,x_3 \}$. First case is $f(x_1) = x_2$ so we must have $f(x_2) = x_4$. Here we have 4 choices for $x_3$ and 4 for $x_4$ so we get 16 functions in total in this case. The second case is disjoint and symmetric so we get a total of $32$ functions, which again gives the form $\Psi(n,k) = 2n^{n-k}$ with $n=4,k=2$. I guess this could be some elementary combinatorics formula but I am not sure how to proceed to find it. Any help or a full answer is much appreciated. AI: You can't be entirely apathetic about the choices of $x_1,x_2$: for example, for $[3]=\{1,2,3\}$, we get $\Psi_{[3]}(1,1,2)=6$ (assuming that we ignore that $f^0(1)=1$) while $\Psi_{[3]}(1,2,2)=3$. The latter uses the argument you presented, but the former exposes that your constraints $f(x_1)\neq x_2$ and $f(x_1)\neq x_1$ coincide when $x_1=x_2$. Beyond checking if $x_1=x_2$, your definition of $\Psi$ does not care about how these variables are assigned. Let's define the function $\Psi_1:\Bbb N\times\Bbb N\to\Bbb N$ which has $\Psi_1(n,k)$ counting the number of functions $f:[n]\to[n]$ with $k>0$ minimal so that $f^k(1)=1$. Therefore, the choices $1\mapsto a_1\mapsto a_2\mapsto\dots\mapsto a_{k-1}\mapsto1$ describing the orbit of $1$ under $f$ have to all be distinct elements of $[n]\setminus\{1\}$. There are $\frac{(n-1)!}{(n-k)!}$ ways of picking the $a_i$'s in sequence. For the remaining $n-k$ elements of $[n]$ (namely, those in $[n]\setminus\{1,a_1,\dots,a_{k-1}\}$), we are free to define $f$ however we please, which gives us $n^{n-k}$ choices. In total, this gives us $\Psi_1(n,k) = \frac{(n-1)!}{(n-k)!}n^{n-k}$. Notice that $\Psi_1(3,2)=6$, and not $3$, as I mentioned earlier. 
On the other hand, the function $\Psi:\Bbb N_{>1}\times\Bbb N\to\Bbb N$ that you defined can be rephrased as $\Psi(n,k)$ counting the number of functions $f:[n]\to[n]$ with $k$ minimal such that $f^k(1)=2$. We can compute this similarly. Now, we have a sequence $1\mapsto a_1\mapsto\dots\mapsto a_{k-1}\mapsto2$ with $a_i$ chosen from $[n]\setminus\{1,2\}$. This leaves $\frac{(n-2)!}{(n-k-1)!}$ ways of choosing the $a_i$. However, now we are free to define the behaviour of $f$ on $[n]\setminus\{1,a_1,\dots,a_{k-1}\}$ however we like (notice that this set includes the number $2$). This is a set of $n-k$ elements and gives us $n^{n-k}$ choices total. Therefore, the formula is $$ \Psi(n,k) = \frac{(n-2)!}{(n-k-1)!}n^{n-k} $$ Comparing against your examples: $\Psi(3,2)=3$, as you said; $\Psi(n,1)=n^{n-1}$, as before; and $\Psi(4,2)=\frac{2!}{1!}4^2 = 2n^{n-k}$ with $n=4$ and $k=2$.
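The closed form can also be checked by brute force for small $n$, enumerating all $n^n$ functions $f:[n]\to[n]$ in Python; this is my own verification sketch, feasible only for small $n$.

```python
# Brute-force check of Psi(n, k) = (n-2)!/(n-k-1)! * n^(n-k):
# count functions f with k minimal such that f^k(1) = 2.
from itertools import product
from math import factorial

def psi_bruteforce(n, k):
    count = 0
    for f in product(range(1, n + 1), repeat=n):  # f[i-1] is the value f(i)
        x, minimal = 1, None
        for step in range(1, k + 1):
            x = f[x - 1]
            if x == 2:          # first time the orbit of 1 hits 2
                minimal = step
                break
        if minimal == k:        # hit 2 at step k and not before
            count += 1
    return count

def psi_formula(n, k):
    return factorial(n - 2) // factorial(n - k - 1) * n ** (n - k)
```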
H: Find the derivative of $f(x)= \int_{\sin x}^{\tan x} \sqrt{t^{2}+t+1}\, \mathrm d t$ Find the derivative of $$f(x)=\int_{\sin x}^{\tan x} \sqrt{t^{2}+t+1}\, \mathrm d t$$ with respect to $x$ So from my understanding, I need to apply the fundamental theorem of calculus and then differentiate. I think the upper and lower limits are throwing me off. AI: Let $g(t)$ denote the integrand $\sqrt{t^2+t+1}$. On the one hand, the FTC guarantees $$ \frac{d}{dx}\int_{\sin x}^{\tan x} g(t)\, dt $$ $$ =g(\tan(x))\cdot (\tan(x))' - g(\sin(x))\cdot (\sin(x))' $$ $$ =g(\tan(x))\cdot \sec^2(x) - g(\sin(x))\cdot \cos(x) $$ $$ =\sqrt{\tan^{2} (x)+\tan(x)+1}\cdot \sec^2(x) - \sqrt{\sin^{2} (x)+\sin(x)+1}\cdot \cos(x) $$Were we masochistic, we could compute the antiderivative using the substitution $(t+1/2)^2= (3/4)\tan^2(\theta)$ (note this cannot always be done, which is part of the power of the FTC), back-substitute, and then differentiate to verify we get the same result. $$ \int \sqrt{t^2+t+1}\,dt = \int \sqrt{(t+1/2)^2+3/4}\,dt $$ $$ =\frac{1}{2} t\sqrt{t^2+t+1} +\frac{1}{4} \sqrt{t^2+t+1}+\frac{3}{8} \log \left(\frac{2 t+1}{\sqrt{3}}+\sqrt{\frac{1}{3} (2 t+1)^2+1}\right) $$For instance, replacing $t$ with $\tan(x)$ at the upper limit and differentiating gives: $$ \frac{d}{dx}\left(\frac{1}{2} \tan (x) \sqrt{\tan ^2(x)+\tan (x)+1}+\frac{1}{4} \sqrt{\tan ^2(x)+\tan (x)+1}+\frac{3}{8} \log \left(\frac{2 \tan (x)+1}{\sqrt{3}}+\sqrt{\frac{1}{3} (2 \tan (x)+1)^2+1}\right)\right) $$ $$ =\frac{1}{2} \sqrt{\tan ^2(x)+\tan (x)+1} \sec ^2(x)+\frac{\tan (x) \left(\sec ^2(x)+2 \tan (x) \sec ^2(x)\right)}{4 \sqrt{\tan ^2(x)+\tan (x)+1}}+\frac{\sec ^2(x)+2 \tan (x) \sec ^2(x)}{8 \sqrt{\tan ^2(x)+\tan (x)+1}}+\frac{3 \left(\frac{2 \sec ^2(x)}{\sqrt{3}}+\frac{2 (2 \tan (x)+1) \sec ^2(x)}{3 \sqrt{\frac{1}{3} (2 \tan (x)+1)^2+1}}\right)}{8 \left(\frac{2 \tan (x)+1}{\sqrt{3}}+\sqrt{\frac{1}{3} (2 \tan (x)+1)^2+1}\right)}$$ $$ =\sqrt{\tan^2(x)+\tan(x)+1}\cdot \sec^2(x), $$as promised. 
If you want, you can try the lower limit.
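As a numerical cross-check (not part of the derivation above), one can approximate $f(x)$ with a standard quadrature rule, differentiate it by a central difference, and compare against the FTC formula; composite Simpson's rule is an arbitrary choice of quadrature here.

```python
# Numerical cross-check of the FTC answer at a sample point.
from math import cos, sin, sqrt, tan

def g(t):
    return sqrt(t * t + t + 1)

def integral(a, b, steps=2000):
    """Composite Simpson's rule for the integral of g over [a, b] (steps even)."""
    h = (b - a) / steps
    s = g(a) + g(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def F(x):
    return integral(sin(x), tan(x))

def F_prime(x):
    """The closed-form derivative obtained from the FTC."""
    sec2 = 1.0 / cos(x) ** 2
    return g(tan(x)) * sec2 - g(sin(x)) * cos(x)

x, h = 0.7, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
```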
H: Is there a metric space on $\omega^\omega$ such that $\alpha+n\to\alpha+\omega$ as $n\to\infty$? Is there a metric space on $\omega^\omega$ such that $\alpha+n\to\alpha+\omega$ as $n\to\infty$? Let $\omega^\omega$ be the set of all ordinals less than $\omega^\omega$ then I seek a function: $d:\omega^\omega\times\omega^\omega\to\Bbb R$ such that $\omega^\omega,d$ is a metric space and for all $\alpha\in\omega^\omega$, adding further integers converges to $\alpha+\omega$ I'm aware of the Order Topology but this looks to be far from a metric space. AI: Let $\alpha$ be any countably infinite ordinal. Fix a bijection $\varphi:\alpha\to\omega$ and define $$f:\alpha\to\Bbb R:\eta\mapsto\sum_{\xi<\eta}2^{-\varphi(\xi)}\;;$$ then $f$ is an order-embedding of $\alpha$ in $\Bbb R$. If $\alpha$ has the order topology, $f$ is a homeomorphism of $\alpha$ onto $f[\alpha]$, and you can use it to define a metric on $\alpha$.
H: If $B_1\subseteq\mathbb R^d$ and $f$ is a diffeomorphism of $B_1$ onto an open subset of $\mathbb R^d$, then $B_1$ is open Let $d\in\mathbb N$ $B_1\subseteq\mathbb R^d$ $\Omega_2\subseteq\mathbb R^d$ be open $f:B_1\to\Omega_2$ be a $C^1$-diffeomorphism (in the sense of equation $(2)$ in this question) Why can we conclude that $B_1$ is open? This seems to be an application of the inverse function theorem. By definition, $$f=\left.\tilde f\right|_{B_1}\tag2$$ for some $\mathbb R^d$-open neighborhood $\Omega_1$ of $B_1$ and some $\tilde f\in C^1(\Omega_1,\mathbb R^d)$. Now it should hold that $$\operatorname{id}_{\mathbb R^d}={\rm D}\left(\tilde f\circ f^{-1}\right)(f(x_1))={\rm D}\tilde f(x_1)\circ{\rm D}f^{-1}(f(x_1))\tag3$$ for all $x_1\in B_1$. How can we proceed? AI: $f$ is differentiable, so if $g=f^{-1}$, $g$ can be seen as a differentiable map to $\Omega_1$. We have $$\tilde{f} \circ g = \text{id}_{\Omega_2}.$$ Applying the chain rule we see that the Jacobian matrix of $g$ is nonsingular at all points of $\Omega_2$. Let $g(y)=x \in B_1$. By the inverse function theorem there exists an open neighbourhood of $x$ such that all points of this neighbourhood are in the image of $g$. But the image of $g$ is contained in $B_1$. Since $x$ was arbitrary this shows $B_1$ can be covered by open sets in $\mathbb{R}^d$ and is thus open.
H: Polynomial bijections from $\mathbb{Q}$ to $\mathbb{Q}$ Prove or disprove: Polynomials $f\in \mathbb{Q}[x]$ which induce a bijection $\mathbb{Q}\to\mathbb{Q}$ are linear. The question of existence of a polynomial bijection $\mathbb{Q}\times\mathbb{Q}\to \mathbb{Q}$ is open, as discussed in this MO thread, this post by Terry Tao, and many more places. However, I cannot find much about the simpler question of polynomial bijections $\mathbb{Q}\to \mathbb{Q}$ (probably because this is easier and less interesting). Here are a few somewhat immediate observations: One quickly notes that any such bijection can always be put in the form $$a_nx^n+\dots+a_1x$$ for $a_1,\dots,a_n\in \mathbb{Z}$ by composing with an appropriate linear polynomial. From there, I have tried to use the rational root theorem to obtain some sort of result, but to no avail. Note that, unlike the $\mathbb{Q}\times\mathbb{Q}\to \mathbb{Q}$ case, it is quite easy to obtain an injection. For example, $f(x)=x^3+x$ is clearly injective, but unfortunately fails to be surjective ($f(x)=1$ yields $x^3+x-1=0$, which has only irrational roots by the rational root theorem, hence $1$ has no rational preimage). Is this a known result, and if so, how would one prove it? Or is there some higher order bijective polynomial on $\mathbb{Q}$? AI: Assume $f(x)=a_nx^n+\ldots +a_1x+a_0\in\Bbb Q[x]$ with $a_n\ne0$ induces a surjection $\Bbb Q\to \Bbb Q$. Let $p$ be a large prime such that $|a_i|_p=1$ for all non-zero $a_i$. Then $$|f(x)|_p\le\max\{\,|a_kx^k|_p\mid k\ge0\,\}=\max\{\,|x|^k_p\mid a_k\ne 0\,\}$$ with equality if these are distinct, i.e., if $|x|_p\ne 1$. In particular, either $|x|_p\le 1$ and so $|f(x)|_p\le 1$, or $|x|_p\ge p$ and so $|f(x)|_p\ge p^n$. For surjectivity, we need some $x\in\Bbb Q$ with $|f(x)|_p=p$. Therefore we need $n\le 1$.
H: Let $a, u$ be vectors in $\mathbb{R}^n$ where $|u| = 1$. Show that there is exactly one number $t$ such that $a - tu$ is orthogonal to $u$. Let $a, u$ be vectors in $\mathbb{R}^n$ where $|u| = 1$. Show that there is exactly one number $t$ such that $a - tu$ is orthogonal to $u$. My attempt: I tried expanding $(a - tu) \cdot u$ to no avail. The identity $|a-tu||u|\cos\theta = |a-tu|\cos\theta = 0$ also seems pretty useless. Could someone please give me a conceptual hint? I'm very bad with manipulating the dot product. (That's why I'm doing exercises to improve my fluency in the subject.) AI: By the distributive law, $(\vec a-t\vec u)\cdot \vec u=\vec a\cdot \vec u-t\vec u\cdot \vec u$, so, if $\vec a-t\vec u$ is orthogonal to $\vec u$, we then have $\vec a\cdot \vec u-t\vec u\cdot \vec u=0.$ Can you then solve for $t$? Note that $|\vec u|=1$ means that $\vec u\cdot \vec u=|\vec u|^2=1\ne0$, so there is no problem dividing by $\vec u \cdot \vec u$.
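Following the hint, the unique solution is $t=\vec a\cdot\vec u$. A quick numerical sanity check in pure Python (the particular vectors are arbitrary; $u$ is normalized so $|u|=1$):

```python
import random

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

random.seed(0)
a = [random.uniform(-5, 5) for _ in range(4)]
u = [random.uniform(-1, 1) for _ in range(4)]
norm = dot(u, u) ** 0.5
u = [ui / norm for ui in u]              # normalize so |u| = 1

t = dot(a, u)                            # the unique t from the hint
residual = [ai - t * ui for ai, ui in zip(a, u)]
assert abs(dot(residual, u)) < 1e-12     # a - t*u is orthogonal to u
```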
H: Why doesn't u-Substitution work for $\int \ln({e^{6x-5}})\,dx$? I was trying to evaluate the indefinite integral $\int \ln({e^{6x-5}})\,dx$. I know that the correct way to solve it is to use the following property of logarithms: $$\ln{e^{f(x)}}=f(x)\ln{e}=f(x)$$ Using this property, the integral becomes $\int (6x-5)\,dx$, and we can use the Reverse Power Rule to get $\color{red}{3x^2-5x+C}$ as the answer. The above method was not my first attempt. I initially tried to solve it using u-substitution but got a different answer. I cannot find where the mistake occurs. Here was my first attempt: $$u=6x-5 \\ du=6\,dx \Rightarrow \dfrac{1}{6}du=dx \\ \dfrac{1}{6}\int \ln{e^u}\,du=\dfrac{1}{6}\int u\,du \\ \dfrac{1}{6} \cdot \dfrac{1}{2}u^2+C \Rightarrow \color{red}{\dfrac{1}{12}(6x-5)^2+C}$$ I already checked that the two answers are not the same as their graphs are different. Where does the mistake occur? AI: I bet the graphs are exactly the same except one is shifted up or down from the other. The only difference in the two answers is the $+C.$ The $C$'s aren't the same in both answers, but if you call one of them $D$, you can figure out how they're related. See that $$\frac{1}{12}(6x-5)^2+C = 3x^2 - 5x +\frac{25}{12} +C.$$ If $C$ is an arbitrary constant, then so is $\frac{25}{12}+C.$
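One can check numerically that the two antiderivatives differ by the constant $25/12$ at every $x$, as the answer explains (a small Python sketch, taking both integration constants to be $0$):

```python
# The two antiderivatives, each with C = 0.
F1 = lambda x: (6 * x - 5) ** 2 / 12   # u-substitution answer
F2 = lambda x: 3 * x ** 2 - 5 * x      # reverse power rule answer

# Their difference is the same constant 25/12 at every sample point.
for x in [-2.0, 0.0, 0.5, 1.0, 3.7]:
    assert abs((F1(x) - F2(x)) - 25 / 12) < 1e-9
```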
H: Having trouble determining the quotient group in an algebraic topology course I am working on an algebraic topology course (Hatcher's book) and it has been quite a time since I took algebra. I have my exams soon and I want a suggestion for a chapter or a resource online that helps me understand this: For example, if we take the following answer, I understand everything related to the geometry and topology of the answer but when it comes to the last step, I am struggling with algebra: https://math.stackexchange.com/a/58844/752801 How can I see that $$\mathbb{Z} \oplus \mathbb{Z} / \langle 2\mathbb{Z}(1,1)\rangle= \mathbb{Z} \oplus\mathbb{Z_2}?$$ I would be grateful if someone can explain this answer but also if someone can give me a resource that helps me understand this type of quotient group specifically for this course. AI: Maybe the clearest way to see that $(\mathbb Z \oplus \mathbb Z) / \langle (2, 2) \rangle \cong \mathbb Z_2 \oplus \mathbb Z$ is to perform a change of basis on $\mathbb Z \oplus \mathbb Z$. Usually when we think of $\mathbb Z \oplus \mathbb Z$, we think in terms of the standard basis, $\{(1, 0), (0, 1)\}$. But there are other bases for $\mathbb Z \oplus \mathbb Z$! One such alternative basis is $\mathcal B = \{(1, 1), (0, 1) \}$. Now $(2, 2)$ can be expanded in terms of this alternative basis as $(2, 2) = 2(1,1) + 0(0, 1)$. In other words $(2,2) = (2, 0)_{\mathcal B}$ (if that notation makes sense). Thus $(\mathbb Z \oplus \mathbb Z) / \langle (2, 2) \rangle$ is isomorphic to $(\mathbb Z \oplus \mathbb Z) / \langle (2, 0) \rangle$ (via the above change of basis), and obviously $(\mathbb Z \oplus \mathbb Z) / \langle ( 2, 0) \rangle$ is isomorphic to $\mathbb Z_2 \oplus \mathbb Z$. (By the way, there is a typo in your post - it's not isomorphic to $\mathbb Z \oplus 2 \mathbb Z$!)
This raises the question: Given a subgroup $G$ of $\mathbb Z^{\oplus n}$ (defined by a set of generators), is there a systematic algorithm that will find an alternative basis $\mathcal B = \{ \mathbf v_1, \dots, \mathbf v_n \}$ for $\mathbb Z^{\oplus n}$, a $k \in \{0, \dots, n\}$ and integers $c_1, \dots, c_k$ such that $\{ c_1 \mathbf v_1, \dots, c_k \mathbf v_k \}$ is a basis for $G$? Because if so, then $\mathbb Z^{\oplus n} / G \cong \mathbb Z_{c_1} \oplus \dots \oplus \mathbb Z_{c_k} \oplus \mathbb Z^{n - k}$. [In the above example, $n = 2$, $G = \langle (2, 2) \rangle$, $\mathcal B = \{ (1, 1), (0,1) \}$, $k = 1$ and $c_1 = 2$.] Yes, such an algorithm exists, and it's called the Smith normal form algorithm. Take a look at the final few pages of these notes for an explanation.
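For the concrete example, the change of basis can be checked mechanically. A small Python sketch verifying that $\mathcal B=\{(1,1),(0,1)\}$ really is a $\mathbb Z$-basis of $\mathbb Z^2$ (its change-of-basis matrix is unimodular, i.e. has determinant $\pm1$) and that the generator $(2,2)$ has coordinates $(2,0)$ in it:

```python
# Basis B = {(1,1), (0,1)} as rows of the change-of-basis matrix.
v1, v2 = (1, 1), (0, 1)
det = v1[0] * v2[1] - v1[1] * v2[0]
assert det in (1, -1)                 # unimodular => a Z-basis of Z^2

# (2, 2) = 2*(1,1) + 0*(0,1), i.e. coordinates (2, 0) in basis B.
c1, c2 = 2, 0
combo = (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])
assert combo == (2, 2)
```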
H: Rank and Jacobian matrix of smooth $F:M\to N$ between manifolds I'm currently reading about submersions and immersions in Lee's Introduction to Smooth Manifolds (p.77), and I'm slightly confused about what is meant when he says that the rank of $F$ at $p$ is "the rank of the Jacobian matrix of $F$ in any smooth chart." Letting $(U,\varphi)$ and $(V,\psi)$ be the local charts at $p$ and $F(p)$, respectively, I was wondering if the Jacobian matrix that was mentioned referred to the Jacobian matrix of $\psi\circ f\circ\varphi^{-1}$. This matrix seems to make sense because $\psi\circ f\circ\varphi^{-1}$ maps between Euclidean spaces, but I was wondering if the "Jacobian matrix of $F$" could be defined in a way that is independent of charts (if this makes any sense). AI: Given your map $f: M \rightarrow N$ (where $\dim M = m$ and $\dim N = n$), the rank of the Jacobian matrix at a point $p\in M$ is independent of your choice of charts, which is why Lee defines it to be the rank of the differential $df_p : T_pM \rightarrow T_{f(p)}N$, a linear map that does not exist as a matrix until you assign a basis on the tangent spaces. In practice, actually computing the rank of $df_p$ requires you to assign a coordinate chart (over the sets $U \subset M$ and $V \subset N$ let's say) as you've indicated. The reason rank is independent of the chart chosen is because $\phi : U \rightarrow \mathbb{R}^m$ and $\psi : V \rightarrow \mathbb{R}^n$ are diffeomorphisms, and hence their differentials are isomorphisms on the level of tangent spaces. You should convince yourself then that for purely linear algebraic reasons: $$\text{rank }df_p = \text{rank } d\psi\circ df_p \circ d\phi^{-1} = \text{rank } d(\psi\circ f \circ \phi^{-1})_p.$$ I hope that makes it a little clearer for you!
H: modulus question!! I have a question: I want to find an unknown number which leaves remainder $3$ when divided by $17$, remainder $10$ when divided by $16$, and remainder $0$ when divided by $15$. In other words, I am a programmer and I want to know the equation in mathematics that will find this unknown number. I know the unknown number is $3930$, but I don't know the equation that finds it. Thanks to all of you. AI: We can use CRT, which guarantees that a solution exists, or argue by direct calculation: $x\equiv 3 \mod 17 \implies x=17k+3$ $x\equiv 10 \mod 16 \implies 17k+3\equiv10 \implies k\equiv 7 \mod 16 \implies k=16h+7 \\\implies x=16\cdot 17h+122$ $x\equiv 0 \mod 15 \implies 16\cdot 17h+122\equiv 0\implies 2h\equiv-2\implies h\equiv 14 \mod 15 \\\implies h=15j+14$ then all solutions are of the form $$x=15\cdot 16\cdot 17 j+3930$$
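The answer above is easy to verify in a few lines of Python; the check below confirms that $3930$ is the smallest non-negative solution and that the solutions recur with period $15\cdot 16\cdot 17=4080$:

```python
x = 3930
assert x % 17 == 3 and x % 16 == 10 and x % 15 == 0

# all solutions: x = 15*16*17*j + 3930
M = 15 * 16 * 17          # 4080, the modulus from CRT
for j in range(5):
    y = M * j + 3930
    assert y % 17 == 3 and y % 16 == 10 and y % 15 == 0

# 3930 is the smallest non-negative solution (brute-force check)
assert all(not (y % 17 == 3 and y % 16 == 10 and y % 15 == 0)
           for y in range(3930))
```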
H: Problem with showing that no nonempty open subset of the plane is contained in a countable union of segments, and using this fact I have a problem with proving the fact that no nonempty open subset of the plane is contained in a countable union of segments. It seems trivial, but maybe my intuition is wrong; in any case, I have no idea how to write it down formally. Before describing the second problem I should recall that a set $G$ is very dense when $G\cap U$ is uncountable for every nonempty open $U\subseteq\mathbb{R}^2.$ With this terminology, I was trying to prove that there exists a Borel set $G$ such that $G$ and its complement are both very dense. My attempt: Suppose there exists a Borel set $G$ which is very dense, that is, $G\cap U$ is uncountable for every nonempty open $U\subseteq\mathbb{R}^2.$ We have to show that $G^C$ is very dense, that is, $G^C\cap U$ is uncountable for every nonempty open $U\subseteq\mathbb{R}^2.$ Knowing that no nonempty open subset of the plane is contained in a countable union of segments, we get that $U$ must be uncountable, so $G^C\cap U$ is uncountable, too. Is my reasoning correct? It seems too easy, so I think it's not. I appreciate any help and advice. AI: Why not just let $\Bbb P=\Bbb R\setminus\Bbb Q$ and take $G=\Bbb P\times\Bbb P$? Every non-empty open set in $\Bbb R^2$ contains a set of the form $\{x\}\times(a,b)$ for some $x\in\Bbb P$, which clearly intersects $G$ in an uncountable set, and a set of the form $\{q\}\times(a,b)$ for some $q\in\Bbb Q$, which is uncountable and contained in $\Bbb R^2\setminus G$, so $G$ and $\Bbb R^2\setminus G$ are very dense in $\Bbb R^2$. And $$\begin{align*} \Bbb P\times\Bbb P&=(\Bbb P\times\Bbb R)\cap(\Bbb R\times\Bbb P)\\ &=\bigcap_{q\in\Bbb Q}\big((\Bbb R\setminus\{q\})\times\Bbb R\big)\cap\bigcap_{q\in\Bbb Q}\big(\Bbb R\times(\Bbb R\setminus\{q\})\big) \end{align*}$$ is clearly a $G_\delta$ in $\Bbb R^2$, hence Borel.
H: Find a norm for $\mathbb{R}^d$ Let $B : \mathbb{R}^{d} \to \mathbb{R}^{d}$ be a linear isomorphism such that all eigenvalues ​​have absolute value less than $1$. Show that there is some norm in $\mathbb{R}^{d}$ for which the operator norm of $B$ is less than $1$. The operator norm is $$\| B \| := \sup \, \left\{ \| B(x)\| : \| x \|_2 \leq 1 \right\}$$ AI: As per this answer, let $J = P^{-1}BP$ be the Jordan form of $B$ and define $D=\operatorname{diag}(1,\varepsilon, \varepsilon^2, \ldots, \varepsilon^{n-1})$. Notice that $$(D^{-1}JD)_{ij} = \varepsilon^{j-i}J_{ij}$$ and hence $\lim_{\varepsilon\to 0} D^{-1}JD$ is precisely the diagonal of $J$, which contains numbers of absolute value $<1$. Hence we can pick $\varepsilon > 0$ small enough so that $\|D^{-1}JD\|_\infty < 1$, where the infinity norm is induced by sup-norm on vectors:$$\|A\|_\infty = \sup_{v \ne 0} \frac{\|Av\|_\infty}{\|v\|_\infty}.$$ Define a vector norm $$\|v\| := \|(PD)^{-1}v\|_\infty.$$ In the corresponding matrix norm we have $$\|B\| = \sup_{v \ne 0} \frac{\|(PD)^{-1}Bv\|_\infty}{\|(PD)^{-1}v\|_\infty} = \sup_{w\ne 0} \frac{\|D^{-1}P^{-1}BPDw\|_\infty}{\|(PD)^{-1}PDw\|_\infty} = \sup_{w\ne 0}\frac{\|D^{-1}JDw\|_\infty}{\|w\|_\infty} = \|D^{-1}JD\|_\infty < 1$$ where we used the substitution $v = PDw$ which is valid since $PD$ is invertible.
H: Is $(\mathbb{Q}, +)$ an essential subgroup of $(\mathbb{R},+)$? Given $H$ a subgroup of $G$, we say that $H$ is an essential subgroup of $G$ if, for every non-trivial subgroup $K$ of $G$, we have that $H\cap K$ is not the trivial subgroup. An example is $\mathbb{Z}$, which is an essential subgroup of $\mathbb{Q}$. I wonder, as $\mathbb{R}$ is a completion of $\mathbb{Q}$, if $\mathbb{Q}$ is essential for $\mathbb{R}$. Thanks in advance. AI: Take the non-trivial subgroup $K$ of $\mathbb R$ consisting of the integer multiples of a fixed irrational number. Then $\mathbb Q\cap K$ is the trivial subgroup. This demonstrates that $\mathbb Q$ is not an essential subgroup of $\mathbb R$.
H: General Method To Find All Of The Isomorphism Classes Of Groups Of A Particular Order Ok, so bear with me here, there are quite a few questions. I am looking at this website: https://www.math.wisc.edu/~mstemper2/Math/Pinter/Chapter13F. It's basically explaining the method of going about finding all (two: $\mathbb{Z}_6$ and $S_3$) of the isomorphism classes of groups of order six. It starts with Cauchy's Theorem: "Let $G$ be a finite group and $p$ be a prime; if $p$ divides the order of $G$, then $G$ has an element of order $p$". Using this theorem makes perfect sense to me, though I don't fully understand the proof. As a side question, if there is a relatively simple proof for Cauchy's Theorem that you could point me towards, I'd greatly appreciate it. Basically for $G=\{S,*\}$ and $|G|=6$, since the prime factorisation of $6$ is $2 \times 3$, there must be $\{e,a,b,b^2\} \subset S:(|e|=1) \land (|a|=2) \land (|b|=3)$. The orders of $e$ (1) and $a$ (2) are different from those of $b$ (3) and $b^2$ (3). And $b \neq b^2$ because $e \neq b$ (again due to their different orders). Therefore $e \neq a \neq b \neq b^2$ and every element in $\{e,a,b,b^2\}$ is distinct. Then $ab$ and $ab^2$ are also both distinct from those four other elements since they both have an order of six (different from the other four) and $ab \neq ab^2$ because $e \neq b$. So $S=\{e,a,b,b^2,ab,ab^2\}$. But the way they differentiate the two isomorphism classes of groups is by: one, having (i) $ba=ab$ leading to $\mathbb{Z}_6$; two, having (ii) $ba=ab^2$ leading to $S_3$. This doesn't seem intuitive to me. Why go about doing that? I get that $ba$ cannot be equal to $\{e,a,b,b^2\}$. And the only other ways in which these two elements can interact without repeating themselves are $\{ba,b^2a\}$, since powers of $a$ higher than one repeat themselves and powers of $b$ higher than 2 repeat themselves. And obviously $ba \neq b^2a$ since $e \neq b$. But then what about $bab$ and $baba$ and $b^2ab^2$ and so on?
Are all of these other arrangements of $a$s and $b$s automatically closed within this group without any assumptions, or are they only closed once you assume (i) or (ii)? Essentially, is there a reason why only the relationship between $ab$, $ab^2$, $ba$, and $b^2a$ needs to be considered? And why not consider all of: $ab=ba$; $ab=b^2a$; $ab^2=ba$; $ab^2=b^2a$? Is it just the case that they happen to result in isomorphism classes of groups in this example, and should all of these be considered when trying to find all of the isomorphism classes of groups of a particular order? Or do these always cancel out, so that there's only a smaller number of cases you need to consider (since – for larger orders – my proposed method would quickly increase in cases to test)? The same website finds all of the isomorphism classes of groups of order ten (https://www.math.wisc.edu/~mstemper2/Math/Pinter/Chapter13G) and then of order eight (https://www.math.wisc.edu/~mstemper2/Math/Pinter/Chapter13H). The website is good at showing how to get to these different isomorphism classes of groups of those particular orders, but not so good at showing why they go about it the way they do (since they show you how to find them all with the prompts). I'm looking for an efficient and exhaustive general method. How could this general method find every class and also establish that there cannot be any more classes for any order of group? If you could use the same general method for orders ten and eight as you'd propose for six, that'd be greatly appreciated. I haven't been able to find it yet myself, but (judging from what I've seen) if there are any partial methods that are still general enough to find all of the isomorphism classes of abelian groups or solvable groups (whatever those are), that'd help too. AI: "I'm looking for an efficient and exhaustive general method." Such a method does not exist, at least for now, probably ever. The number of groups of order 2048 is not known, for example.
Since $2048=2^{11}$, every group of that order is a $2$-group, hence nilpotent and in particular soluble, so even soluble groups are impossible to classify easily. Abelian groups are easy, though, because of the fundamental theorem of finite abelian groups, which states that every finite abelian group is a direct product of cyclic groups.
H: Problem about uniform convergence of series of functions Prove the following series are not uniformly convergent in $[0,1]$: \begin{align*} &1.\quad\sum\limits_{n=0}^\infty x^n\log x\\ &2.\quad\sum\limits_{n=0}^\infty \frac{x^2}{(1+x^2)^n} \end{align*} A common way to prove $\sum f_n(x)$ is not uniformly convergent seems to be finding $n_k$ and $x_k$ such that $f_{n_k}(x_k)\to Const\neq0$, that is, to prove $f_n$ does not uniformly converge to $0$. But for these two problems, the general terms both uniformly converge to $0$. Indeed, let $f_n(x)=x^n\log(x)$, $g_n(x)=\frac{x^2}{(1+x^2)^n}$. Then, $f_n'(x)=x^{n-1}(n\log(x)+1)$, $g_n'(x)=\frac{2x}{(1+x^2)^{n+1}}((1-n)x^2+1)$, which leads to \begin{align*} \sup_{x\in[0,1]}|f_n(x)|&=|f_n(e^{-1/n})|=\frac{e^{-1}}{n}\to0, \text{ as }n\to\infty\\ \sup_{x\in[0,1]}|g_n(x)|&=\left|g_n\left(\sqrt{\tfrac{1}{n-1}}\right)\right|=\frac{\frac{1}{n-1}}{(1+\frac{1}{n-1})^n}\to0, \text{ as }n\to\infty. \end{align*} So is there any other efficient way to disprove uniform convergence? Thanks! AI: For the first, the series converges for $x\in(0,1]$. The sum function is given by $$f(x)=\ln(x)\sum_{n=0}^{+\infty}x^n=\frac{\ln(x)}{1-x} \text{ if } x\ne 1$$ and $$f(1)=0,$$ but $$\lim_{x\to 1^-}f(x)=\lim_{x\to 1^-}\frac{\ln(1-(1-x))}{1-x}=-1.$$ Thus, as pointed out by @Daniel, $f$ is not continuous at $x=1$; since each partial sum is continuous on $(0,1]$, the convergence is not uniform on $(0,1]$. For the second, observe that $$\frac{x^2}{(1+x^2)^n}=\frac{1}{(1+x^2)^{n-1}}-\frac{1}{(1+x^2)^n},$$ so the partial sums telescope: $\sum_{n=0}^{N}\frac{x^2}{(1+x^2)^n}=(1+x^2)-\frac{1}{(1+x^2)^N}$. The limit is $1+x^2$ for $x\ne 0$ but $0$ at $x=0$, so the sum function is discontinuous at $0$ and again the convergence cannot be uniform.
H: Expected number of steps needed until every point is visited in bounded simple symmetric random walk? I was wondering how to calculate this. Say the state-space is $\{1, \dots, N \}$. Would it be correct to calculate the expected value of the first hitting time of $N$ starting from $1$ by using the coupon collector's formula? AI: Let $e_k$ be the expected number of steps to reach state $N$ if we are currently in state $k$. Then $$e_k = 1+\frac12(e_{k-1}+e_{k+1})\tag1$$ because we take one step and then with probability $\frac12$ we need $e_{k-1}$ more steps on average, and with probability $\frac12$ we need $e_{k+1}$ more steps on average. We have the boundary condition $e_N=0$. We can rewrite $(1)$ as $$e_{k+1}-2e_k+e_{k-1}=-2$$ which has characteristic equation $$r^2-2r+1=0$$ which has $1$ as a double root. Therefore, the general solution to the homogeneous equation is $e_k=a$ for a constant $a$. Now we must find a particular solution to $(1)$. From the form of the equation, we guess that a quadratic polynomial will work, and we quickly find that $e_k=-k^2$ is a solution. The general solution to $(1)$ is the general solution to the associated homogeneous equation, plus any particular solution to the inhomogeneous equation, so the general solution is $$e_k=a-k^2$$ Substituting the boundary condition, we find $a=N^2$ so that $e_k=N^2-k^2$ and the expected number of steps starting in state $1$ is $$e_1=\boxed{N^2-1}$$ Observe that this is nothing like the answer for the coupon collector's problem. The two problems have nothing in common except that the absorbing state is state $N$.
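It is easy to verify in Python that $e_k=N^2-k^2$ satisfies the recurrence and the boundary condition; the check below uses the natural extension $e_0=N^2$ for the $k=1$ case:

```python
# Check that e_k = N^2 - k^2 satisfies e_k = 1 + (e_{k-1} + e_{k+1})/2
# together with the boundary condition e_N = 0.
N = 10
e = lambda k: N ** 2 - k ** 2

assert e(N) == 0                                   # boundary condition
for k in range(1, N):                              # interior recurrence
    assert e(k) == 1 + (e(k - 1) + e(k + 1)) / 2
assert e(1) == N ** 2 - 1                          # the boxed answer
```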
H: Preimage of a product of ideals Let $f$ be a surjective homomorphism from $R$ to $S$. How are the preimage of a product of ideals, $f^{-1}(I_1\cdots I_n)$, and the product of the preimages, $f^{-1}(I_1)\cdots f^{-1}(I_n)$, related? I know they need not be equal; does either containment hold? AI: Let, for each $1 \leq k \leq n$, $a_k \in f^{-1}(I_k)$. Then $f(a_1\ldots a_n)=f(a_1)\ldots f(a_n) \in I_1 \ldots I_n$, so that $a_1 \ldots a_n \in f^{-1}(I_1 \ldots I_n)$. Since the ideal $f^{-1}(I_1) \ldots f^{-1}(I_n)$ is generated by such products, we get $f^{-1}(I_1) \ldots f^{-1}(I_n) \subset f^{-1}(I_1 \ldots I_n)$.
H: inverse of $y(x) = 1 - \exp( - (\alpha x + \beta x^2 )) $ I'd like to compute the inverse of $y(x) = 1 - \exp( - (\alpha x + \beta x^2 )) $. I used to know a method but I can't remember how to do it. I am stuck at the step where I have: $$- \ln( 1 - y) = x ( \alpha + \beta x ) $$ The function is not itself invertible, but I am quite positive about the idea that it is if one sets some conditions on $\alpha$ and $\beta$. The function is 2-piecewise monotone, and since I am only interested in $x > 0$, if the extremum is below zero my function is invertible on the positive axis. How can I find such an inverse? Cheers. AI: As suggested in the comments we simply have $$ \beta x^2+\alpha x+ \ln( 1 - y)=0 $$ $$\implies x=\frac{-\alpha+\sqrt{\alpha^2-4\beta\ln(1-y)}}{2\beta}\quad\text{or}\quad x=\frac{-\alpha-\sqrt{\alpha^2-4\beta\ln(1-y)}}{2\beta}$$
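A round-trip check in Python; the parameter values below are arbitrary, chosen with $\alpha,\beta>0$ so that $y$ is increasing on $x>0$ and the "$+$" branch of the quadratic formula is the correct inverse:

```python
import math

alpha, beta = 1.5, 0.7   # arbitrary positive parameters

def y(x):
    return 1 - math.exp(-(alpha * x + beta * x * x))

def y_inv(v):
    # "+" root of beta*x^2 + alpha*x + ln(1 - v) = 0
    return (-alpha + math.sqrt(alpha * alpha - 4 * beta * math.log(1 - v))) / (2 * beta)

for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    assert abs(y_inv(y(x)) - x) < 1e-8
```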
H: Suppose $t, u, v, w \in \mathbb{R}^3$. If $(t \times u) \times (v \times w) = 0$, are $t,u,v,w$ on the same plane? Suppose $t, u, v, w \in \mathbb{R}^3$. If $(t \times u) \times (v \times w) = 0$, are $t,u,v,w$ on the same plane? My Answer No. Let $p_1$ be the plane described by $x + y + z = 3$ and $p_2$ the plane described by $x + y + z = 4$. Then the planes do not intersect but they have the same norm, $n = (1,1,1)$. Letting $t,u \in p_1$ and $v,w \in p_2$, it follows that they are not on the same plane but $$(t \times u) \times (v \times w) = n \times n = 0$$ AI: If $t$ and $u$ are linearly dependent, then it is trivial that $$(t\times u)\times (v\times w)=0\,.\tag{*}$$ If $v$ and $w$ are linearly dependent, then (*) is obviously true. From now on, suppose that $t$ and $u$ are linearly independent, and $v$ and $w$ are also linearly independent. If the span of $t$ and $u$ coincides with the span of $v$ and $w$, then let $p$ denote the span of all vectors $t$, $u$, $v$, and $w$. Thus, both $t\times u$ and and $v\times w$ are vectors normal to $p$. Hence, $t\times u$ and $v\times w$ are linearly dependent, whence (*) is true. Suppose now that the span of $t$ and $u$ does not coincide with the span of $v$ and $w$. Therefore, the span of $t$, $u$, $v$, and $w$ is the whole $\mathbb{R}^3$, which is a $3$-dimensional vector space. Thus, one of the vectors is linearly dependent of the other three. Without loss of generality, assume that $$t=au+bv+cw$$ for some $a,b,c\in\mathbb{R}$. This means $u$, $v$, and $w$ are linearly independent. Note that $$t\times u=-b(u\times v)-c(u\times w)\,.$$ That is, $$(t\times u)\times (v\times w)=-b(u\times v)\times (v\times w)-c(u\times w)\times (v\times w)\,.$$ Now, $$(u\times v)\times (v\times w)=\big((u\times v)\cdot w\big)\,v-\big((u\times v)\cdot v\big)\,w=\rho\,v\,,$$ where $\rho:=(u\times v)\cdot w$. 
Similarly, $$(u\times w)\times(v\times w)=\big((u\times w)\cdot w\big)\,v-\big((u\times w)\cdot v\big)\,w=\rho\,w\,.$$ Consequently, $$(t\times u)\times (v\times w)=-b\,\rho\,v-c\,\rho\,w\,.$$ Since $u$, $v$, and $w$ are linearly independent, $\rho\neq 0$. Thus, (*) is true if and only if $b=c=0$. Therefore, $t=au$. However, this contradicts the assumption that $t$ and $u$ are linearly independent. In conclusion, we have shown that (*) is true if and only if at least one of the following conditions is true: $t$ and $u$ are linearly dependent, $v$ and $w$ are linearly dependent, or the span of $t$ and $u$ is the same as the span of $v$ and $w$.
H: Proving a result for $\prod_{k=0}^{\infty}\Bigl(1-\frac{4}{(4k+a)^2}\Bigr)$ $$\prod_{k=0}^{\infty}\Bigl(1-\frac{4}{(4k+a)^2}\Bigr)=\frac{(a^2-4)\Gamma^2\bigl(\frac{a+4}{4}\bigr)}{a^2\Gamma\bigl(\frac{a+2}{4}\bigr)\Gamma\bigl(\frac{a+6}{4}\bigr)}$$ According to WA. I attempted using $$\prod_{k=0}^{\infty}\Bigl(1-\frac{x^2}{\pi^2k^2}\Bigr)=\frac{\sin x}{x}$$ But I couldn’t reindex the product appropriately to use it the way I wanted to (factoring out a 4 and then continuing from there). I’d like to have at least a direction to go in or an idea on how to do the product. AI: It isn't pretty, but we can prove this by working backwards using Euler's product definition of the gamma function (seen here): $$ \Gamma(x) = \lim_{n\to\infty} n!(n+1)^x \prod_{k=0}^n (x+k)^{-1}. $$ When we substitute that in for the right hand side, we get $$ \begin{align} \frac{(a^2-4)\Gamma^2\bigl(\frac{a+4}{4}\bigr)}{a^2\Gamma\bigl(\frac{a+2}{4}\bigr)\Gamma\bigl(\frac{a+6}{4}\bigr)} &= \frac{a^{2}-4}{a^{2}}\frac{\left(n+1\right)^{\frac{a+4}{2}}n!^{2}\prod_{k=0}^{n}\left(\frac{a+4}{4}+k\right)^{-2}}{\left(n+1\right)^{\frac{a+4}{2}}n!^{2}\left(\prod_{k=0}^{n}\left(\frac{a+2}{4}+k\right)^{-1}\right)\left(\prod_{k=0}^{n}\left(\frac{a+6}{4}+k\right)^{-1}\right)}\\ &= \frac{a^{2}-4}{a^{2}}\frac{\prod_{k=0}^{n}\left(\frac{a+4}{4}+k\right)^{-2}}{\prod_{k=0}^{n}\left(\frac{a+2}{4}+k\right)^{-1}\left(\frac{a+6}{4}+k\right)^{-1}}\\ &= \frac{a^{2}-4}{a^{2}}\prod_{k=0}^{\infty}\frac{\left(\frac{a+2}{4}+k\right)\left(\frac{a+6}{4}+k\right)}{\left(\frac{a+4}{4}+k\right)^{2}}\\ &= \frac{a^{2}-4}{a^{2}}\prod_{k=1}^{\infty}\frac{\left(\frac{a}{4}+k+\frac{1}{2}\right)\left(\frac{a}{4}+k-\frac{1}{2}\right)}{\left(\frac{a}{4}+k\right)^{2}}\\ &=\frac{a^{2}-4}{a^{2}}\prod_{k=1}^{\infty}\frac{\left(\frac{a}{4}+k\right)^{2}-\frac{1}{4}}{\left(\frac{a}{4}+k\right)^{2}}\\ &= \left(1-\frac{4}{a^{2}}\right)\prod_{k=1}^{\infty}\left(1-\frac{4}{\left(a+4k\right)^{2}}\right)\\ &= 
\prod_{k=0}^{\infty}\left(1-\frac{4}{\left(4k+a\right)^{2}}\right). \end{align} $$ (I omitted the limit as $n\to\infty$ in the equations because it already takes up so much space.)
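The identity can be spot-checked numerically with `math.gamma`; here for the sample value $a=5$, truncating the product at $10^5$ factors (which leaves an error of roughly $1/(4\cdot 10^5)$ in the logarithm of the product):

```python
import math

a = 5.0
lhs = 1.0
for k in range(100000):                       # truncated infinite product
    lhs *= 1 - 4 / (4 * k + a) ** 2

rhs = ((a * a - 4) * math.gamma((a + 4) / 4) ** 2
       / (a * a * math.gamma((a + 2) / 4) * math.gamma((a + 6) / 4)))

assert abs(lhs - rhs) < 1e-4
```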
H: Calculating area inside intersection of circle and ellipse using line integral Consider a circle parametrized as $(r\cos (t), r \sin (t))$ and an ellipse parametrized as $(a\cos (t), b \sin (t))$. Assuming that $a>r>b$, find the area of the region of intersection of the circle and the ellipse by setting up a line integral and using Green's theorem. I tried to parametrize the four boundary curves of the region. For the left and right parts, $r$ is fixed but I couldn't find $\theta$; for the upper and lower curves, both $r$ and $\theta$ vary. Are there other ways to approach this problem? AI: Using Green's theorem, the area bounded by a curve $C$ (here $b<r<a$) can be calculated as: $$S=\int\limits_Cx\,dy=-\int\limits_Cy\,dx=\frac{1}{2}\int\limits_C(x\,dy-y\,dx)$$ 1) The intersection points satisfy $$\begin{array}{} x^2=a^2\frac{r^2-b^2}{a^2-b^2} \\ y^2=b^2\frac{a^2-r^2}{a^2-b^2} \end{array} $$ So we have 4 symmetric points; the one in the first quadrant has argument $\phi_0$ with $$\tan (\phi_0) = \frac{b}{a}\sqrt{\frac{a^2-r^2}{r^2-b^2}}$$ It follows that the boundary curve $C$ is the ellipse on $[\phi_0, \pi-\phi_0]$ and $[\pi+\phi_0, 2\pi-\phi_0]$, and the circle on the remaining part of $[0,2\pi]$. Now the integral can be divided into 4 summands, and each part can be parametrized separately by polar or extended polar coordinates. 2) A remark to finish: it may be easier to calculate the area of the parts of the ellipse lying outside the circle (each bounded by an ellipse arc and a circle arc, over $[-\phi_0,\phi_0]$ and its mirror image) and subtract it from the area of the ellipse. That way you only need integrals over $[-\phi_0, \phi_0]$, once along the ellipse and once along the circle. As suggested by Charlie Chang it is easier with a double integral, but since you insist on Green's theorem, everything you need is above.
H: The action of $SL(2,\mathbb R)$ on $T^1(\mathbb H)$ is transitive? Let $\mathbb H$ to be the complex upper-half plane and let $SL(2,\mathbb R)$ act on $\mathbb H$ by $$\phi(z)=\frac{az+b}{cz+d},$$ where $\begin{bmatrix} a & b \\ c & d \end{bmatrix}\in SL(2,\mathbb R).$ A book I am reading says $SL(2,\mathbb R)$ acts transitively on the unit tangent bundle $T^1(\mathbb H)$ (that the action is by isometries is easy to verify). But I don't know why the action is transitive. Say given the points $z_1,z_2$ and vectors $v_i\in T^1_{z_i}(\mathbb H), i=1,2$, then I need to find $\begin{bmatrix} a & b \\ c & d \end{bmatrix}\in SL(2,\mathbb R),$ such that $\phi(z)=\frac{az+b}{cz+d}$ satisfies: (1) $\phi(z_1)=z_2$; and (2) $d\phi_{z_1}(v_1)=v_2$. But I don't know how to realize this since it involves some nonlinear systems in $a,b,c,d$ AI: The subgroup that fixes $z=i$ is defined by the rotation matrices $$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta)\end{pmatrix} $$ For $\phi(z) = \frac{az+b}{cz+d}$ in this subgroup, one can calculate $\phi'(i) = \cos(2\theta) - i \sin(2\theta)$ (I think, or something very close to that). It follows that as $\theta$ varies, the linear transformations $d\phi_i : z \mapsto \phi'(i) z$ act transitively on the unit circle, that is to say, they act transitively on $T^1_i(\mathbb H)$. It's also not hard, given $z_0 \in \mathbb H$, to find $a,b,c,d$ so that $\phi(i)=z_0$. Thus, fixing any base vector in $T^1_i(\mathbb H)$, I can first rotate it by an appropriate rotation matrix, and then map it by $d\phi$, to get any vector in $T^1_{z_0}(\mathbb H)$. This implies transitivity on $T^1(\mathbb H)$.
H: Find exact value of $f^{-1}(f(a))$ Given the function $$ f(x)=\frac{1}{4}\left((x-1)^{2}+7\right) $$ The first part of the question asks to find the largest domain containing the value $x=3$ for which $f^{-1}(x)$ exists. I determined the domain to be $x≥1$. The second part of the question is: Let $a$ be a real number not in the domain found in the previous part, find the exact value of $f^{-1}(f(a))$. My thinking process was since $a<1$, based on the domain we found previously, then therefore $f(a)=f(-a)$. Do I use the inverse function i.e. $f^{-1}(x)=1+\sqrt{4x-7}$ and just sub in $-a$? I'm not entirely sure if this is correct. Any help is greatly appreciated! AI: If the function were symmetric about $x=0$, then you would have $f(-x) = f(x)$. In your case, it's symmetric about $x=1$, so the relationship is instead $f(1-x) = f(1+x)$. Thus $$f^{-1}(f(1-x)) = f^{-1}(f(1+x)) = 1+x.$$ Now substitute $a=1-x$.
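A quick Python check of the symmetry $f(1-x)=f(1+x)$ and the conclusion $f^{-1}(f(a))=2-a$ for $a<1$; on the branch $x\ge1$, solving $y=\frac14\bigl((x-1)^2+7\bigr)$ gives $f^{-1}(y)=1+\sqrt{4y-7}$:

```python
f = lambda x: ((x - 1) ** 2 + 7) / 4
f_inv = lambda y: 1 + (4 * y - 7) ** 0.5      # inverse on the branch x >= 1

for a in [-3.0, -1.0, 0.0, 0.5, 0.9]:         # values outside the domain x >= 1
    assert abs(f(a) - f(2 - a)) < 1e-12       # symmetry about x = 1
    assert abs(f_inv(f(a)) - (2 - a)) < 1e-9  # f^{-1}(f(a)) = 2 - a = 1 + x, a = 1 - x
```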
H: Necessity of uniformity in "almost uniform convergence $\implies$ convergence a.e" Let $(X,\mathcal{A}, \mu)$ be a measure space, and $E$ a Banach space (for this discussion, a metric space suffices I guess). We say a sequence of functions $f_n:X \to E$ is $\mu$-almost uniformly convergent if for every $\delta>0$, there is a measurable set $A\in \mathcal{A}$ with $\mu(A^c) < \delta$ and such that the restricted sequence $\{f_n|_{A}\}$ is uniformly convergent. It is then a common theorem that "almost uniform convergence implies convergence a.e", and the proof goes like this: For each $k\in \Bbb{N}$, we set $\delta_k = \frac{1}{k}$ for example, and obtain a corresponding measurable set $A_k$ as per the definition. Put $A:= \bigcup A_k$, then it's easy to see that $\mu(A^c) = 0$. Since $\{f_n|_{A_k}\}$ is uniformly convergent, it is pointwise convergent, and hence we can define $f:X\to E$ by \begin{align} f(x):= \begin{cases} \lim\limits_{n\to \infty}f_n(x) & \text{if $x\in A$}\\\\ 0 & \text{otherwise} \end{cases} \end{align} Then, $f_n \to f$ pointwise on $A$, which completes the proof (because $\mu(A^c) = 0$). My question is whether the uniformity assumption is actually necessary, because based on the proof it seems that it's not needed, but every source I read always adds in this seemingly extra hypothesis (maybe they just want to give a sufficient condition?). So, I would just like some verification, to make sure I'm not overlooking something obvious. 
AI: Uniformity is not needed, but while there is a difference between almost uniform convergence and uniform convergence almost everywhere, there is no material reason to introduce the terminology almost pointwise convergence: a sequence $\{f_n\}_{n\in\Bbb N}$ such that for all $\delta>0$ there is some $A$ such that $\mu(X\setminus A)<\delta$ and $\{\left.f_n\right\rvert_A\}_{n\in\Bbb N}$ converges pointwise is just almost-everywhere convergent, and the seemingly weaker condition is not easier to state, nor does it look significantly easier to check.
H: How to find the Number of Roots of a Polynomial in a Real Range Is there a way to efficiently find the number of real roots of a polynomial $P$ in a range $[a,b]$ with $a,b \in \mathbb{R}$? You may/may not know much about the coefficients of the polynomial, so I want methods that work based on the fact that it's a polynomial. EDIT: I know about Sturm's theorem, but I think it would be too slow for my use case (polynomial of around degree 30), as I have to generate at most n polynomials, n being the degree of the original polynomial. AI: Use Sturm's theorem. For low degrees, one can frequently translate the variable $x \mapsto x -a$ and $x \mapsto x-b$, apply Descartes' rule of signs to each, and often get the desired result.
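Even at degree around 30, a Sturm chain is cheap to build: one polynomial-remainder sequence, then two sign-change counts per query. Below is a minimal floating-point sketch in Python (the tolerances are ad-hoc assumptions, and the endpoints $a$, $b$ are assumed not to be roots):

```python
def deriv(p):                      # p: coefficients, highest degree first
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def rem(a, b):                     # remainder of a / b (floating point)
    a = list(a)
    while len(a) >= len(b):
        q = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= q * b[i]
        a.pop(0)                   # leading coefficient is now ~0
    return a

def sturm_chain(p):
    chain = [list(p), deriv(p)]
    while len(chain[-1]) > 1:      # until the last entry is a constant
        r = rem(chain[-2], chain[-1])
        while r and abs(r[0]) < 1e-9:
            r.pop(0)               # trim numerically-zero leading coefficients
        if not r:
            break                  # zero remainder: p had a repeated factor
        chain.append([-c for c in r])
    return chain

def sign_changes(chain, x):
    vals = []
    for q in chain:
        v = 0.0
        for c in q:
            v = v * x + c          # Horner evaluation
        if abs(v) > 1e-9:
            vals.append(v)
    return sum(1 for s, t in zip(vals, vals[1:]) if s * t < 0)

def count_real_roots(p, a, b):     # assumes a and b are not roots of p
    chain = sturm_chain(p)
    return sign_changes(chain, a) - sign_changes(chain, b)

p = [1, 0, -1, 0]                  # x^3 - x, roots -1, 0, 1
assert count_real_roots(p, -2, 2) == 3
assert count_real_roots(p, 0.5, 2) == 1
```

For serious use on a degree-30 polynomial one would work over the rationals (or use a subresultant sequence) to avoid the floating-point tolerances, but the structure is the same.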
H: What's the difference between $v = a\cdot t$ and $\vec{v} = \int \vec{a} \, \mathrm dt$ In highschool, I learned $v = at$ and in university, I am learning $\vec{v} = \int \frac{\vec{F}}{m} \, \mathrm dt = \int \vec{a} \, \mathrm dt$. I understand one is for $v= at$ is for one-dimension and the latter for multiple dimensions. However, I don't understand why in one dimension, we don't do $v = \int a(t) \, \mathrm dt$ but rather multiply it by the time to get the acceleration at time $t$. Shouldn't the acceleration accumulate and therefore do the integral instead? I am confused. As an example, A particle of mass $m=2$ is acted on by a force $$ \mathbf{F}=\left(4 t, 6 t^{2},-4 t\right) $$ At $t=0,$ the particle has velocity zero and is located at the point $(1,2,3)$ . Find the velocity vector $\mathbf{v}(t)$ for $t \geq 0$ We can easily know that $\vec{a} = \langle 2t,3t^2,-2t\rangle$. However, the velocity is not $\vec{a}\cdot t$ (which is possible with no problem since $t$ is a scalar and it still returns a vector), but rather anti-integral of the vector? AI: The formula $v = v_0 + at$ assumes that the acceleration is constant. The formula $v = v_0 + \int_{t_0}^{t_f} a(t) \, dt$ allows for the possibility that the acceleration changes with time.
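For the worked example in the question, integrating componentwise (with $\vec v(0)=\vec 0$): $$\vec{v}(t) = \int_0^t \vec{a}(s)\,\mathrm ds = \left\langle t^2,\ t^3,\ -t^2 \right\rangle,$$ whereas $\vec a(t)\cdot t = \langle 2t^2,\ 3t^3,\ -2t^2\rangle$. The two disagree precisely because the acceleration here is not constant; they would coincide if $\vec a$ were a constant vector.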
H: Conditional Expectation Properties Let $(\Omega, \mathcal{F}, P)$ be a probability space and $G$ a finite group of measurable, bijective maps $g: \Omega \rightarrow \Omega$ which are $P$ invariant, i.e. they have the property $P(g^{-1}(A)) = P(A) \quad \forall A \in \mathcal{F}$, and define $$\mathcal{C}_G \equiv \{A \in \mathcal{F}: g(A) = A \quad \forall g \in G\}$$ It is trivially seen that $\mathcal{C}_G$ is a $\sigma-$algebra. I want to show the following: $$\textbf{(I)} \quad \quad E(X | \mathcal{C}_G) = \frac{1}{|G|}\sum_{g \in G} X \circ g(\omega) \quad P-\text{a.s.}$$ My attempt: Define $Y$ as the RHS of (I). It is easily shown that $E(Y1_A) = E(X1_A) \quad \forall A \in \mathcal{C}_G$ so it suffices to show that the RHS is $\mathcal{C}_G$ measurable. And for this we simply need to show that $X \circ g$ is $\mathcal{C}_G$ measurable for each $g \in G$. I don't really know how to do this, because what I need to prove is the following: Fix $g \in G$ and then show $\forall h \in G$, $$h(g^{-1}(X^{-1}(B))) = g^{-1}(X^{-1}(B))$$ Does anyone have any ideas? Any help would be massively appreciated. Thanks! AI: Hint: For any $g_1 \in G$ we have $Y\circ g_1 =Y [P] \, a.s.$ since $\{gg_1: g \in G\}=G$. This implies that $Y$ is measurable w.r.t the $P-$ completion of $\mathcal C_G$.
H: How to factorize $a^2-2ab+a^2b-2b^2$? I have been stuck on factorizing this: $$a^2-2ab+a^2b-2b^2$$ I thought I could solve it by making $(a+b)$ as one factor but it didn't work then I tried to add and deduct some terms which that didn't lead me to anything either. I don't really know what to do next. AI: $a^2-2ab+a^2b-2b^2=$ $=-2b^2+a(a-2)b+a^2.$ In order to factorize the last polynomial, we have to solve the following quadratic equation: $-2b^2+a(a-2)b+a^2=0$ that is equivalent to $2b^2-a(a-2)b-a^2=0.$ $\Delta=a^2(a-2)^2+8a^2=a^2(a^2-4a+12)\ge0,$ $b=\frac{a(a-2)\pm\sqrt{a^2(a^2-4a+12)}}{4}=\frac{a(a-2)\pm a\sqrt{a^2-4a+12}}{4}.$ Now we can factorize the polynomial: $-2b^2+a(a-2)b+a^2=$ $=-2\left(b-\frac{a(a-2)-a\sqrt{a^2-4a+12}}{4}\right)\cdot\left(b-\frac{a(a-2)+a\sqrt{a^2-4a+12}}{4}\right)=$ $\begin{align*} &=-\frac{1}{8}\left(4b-a^2+2a+a\sqrt{a^2-4a+12}\right)\cdot\left(4b-a^2+2a-a\sqrt{a^2-4a+12}\right).\\ \end{align*}$ Therefore we get that $\begin{align} &a^2-2ab+a^2b-2b^2=\\ &=-\frac{1}{8}\left(4b-a^2+2a+a\sqrt{a^2-4a+12}\right)\cdot\left(4b-a^2+2a-a\sqrt{a^2-4a+12}\right).\\ \end{align}$
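A quick floating-point spot-check of the factorization above (an illustrative sketch; it only uses the two formulas from the answer, and the discriminant $a^2-4a+12=(a-2)^2+8>0$ guarantees the square root is always real):

```python
import math
import random

def original(a, b):
    return a**2 - 2*a*b + a**2*b - 2*b**2

def factored(a, b):
    # a * sqrt(a^2 - 4a + 12); the sign of r is irrelevant since the product
    # below contains the symmetric factors (... + r)(... - r)
    r = a * math.sqrt(a**2 - 4*a + 12)
    return -(4*b - a**2 + 2*a + r) * (4*b - a**2 + 2*a - r) / 8

random.seed(0)
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    assert math.isclose(original(a, b), factored(a, b), rel_tol=1e-9, abs_tol=1e-6)
```

For instance, at $(a,b)=(3,2)$ both expressions evaluate to $7$.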
H: Show that $\mathcal A_1$ $\cap$ $\mathcal A_2$ is also a $\sigma$-algebra I am currently studying for a measure theory final and have come across a past short exam question that reads "Let $\mathcal A_1$ and $\mathcal A_2$ be a $\sigma$-algebra of subsets of a set X. Show that $\mathcal A_1$ $\cap$ $\mathcal A_2$ is also a $\sigma$-algebra." Conceptually I cannot quite grasp this. Since $\mathcal A_1$ and $\mathcal A_2$ both have the set X as an element by the definition of a $\sigma$-algebra, how can we know that the intersection of $\mathcal A_1$ and $\mathcal A_2$ also has X as an element? A proof of this would be really great or even just a push in the right direction. Thanks in advance. AI: As mentioned by @Reveillark, you will need the definition of intersection of sets. Based on it, we can proceed as follows. Let $\Omega$ be a nonempty set and $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ be $\sigma$-algebras on $\Omega$. Then clearly $\Omega\in\mathcal{A}_{1}\cap\mathcal{A}_{2}$, because $\Omega\in\mathcal{A}_{i}$ for $i = 1,2$. Now let us assume that $A\in\mathcal{A}_{1}\cap\mathcal{A}_{2}$. Thus $A\in\mathcal{A}_{1}$ and $A\in\mathcal{A}_{2}$. But $\mathcal{A}_{i}$ are $\sigma$-algebras. Then $A^{c}\in\mathcal{A}_{1}$ and $A^{c}\in\mathcal{A}_{2}$. Consequently, $A^{c}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}$. Last but not least, let us suppose that $A_{n}\in\mathcal{A}_{1}\cap\mathcal{A}_{2}$ for $n\in\mathbb{N}$. Since $\mathcal{A}_{i}$ are $\sigma$-algebras, one has that \begin{align*} \left(\bigcup_{n\in\mathbb{N}}A_{n}\in\mathcal{A}_{1}\right)\wedge\left(\bigcup_{n\in\mathbb{N}}A_{n}\in\mathcal{A}_{2}\right) \Rightarrow \bigcup_{n\in\mathbb{N}}A_{n}\in\mathcal{A}_{1}\cap\mathcal{A}_{2} \end{align*} and we are done. Hopefully this helps.
H: Derivative of $h(x,y) = \lambda f(x) + \mu g(y)$ Let $A \subseteq \mathbb{R}^m$ be open and $B \subseteq \mathbb{R}^n$ be open. Suppose $f: A \to \mathbb{R}^k$ is differentiable at $a$, and $g: B \to \mathbb{R}^k$ is differentiable at $b$. Define $h: A \times B \to \mathbb{R}^k$ by $h(x,y) = \lambda f(x) + \mu g(y)$ where $\lambda, \mu \in \mathbb{R}$. Is it true that $h'(a,b)$ exists and is the $k \times (m + n)$ matrix $\begin{bmatrix} \lambda f'(a) & \mu g'(b) \end{bmatrix}$? I tried to unravel the definition of the derivative but I found it quite complex and wasn't able to do much. AI: In these cases, one useful "trick" is to invoke the use of the projection functions $\pi_1:\Bbb{R}^m \times \Bbb{R}^n \to \Bbb{R}^m$ and $\pi_2:\Bbb{R}^m \times \Bbb{R}^n \to \Bbb{R}^n$. Then, we can write: \begin{align} h &= \lambda \cdot (f \circ \pi_1) + \mu \cdot (g \circ \pi_2) \end{align} From here, let's apply the sum and scalar multiple rule of derivatives: \begin{align} Dh_{(a,b)} &= \lambda \cdot D(f\circ \pi_1)_{(a,b)} + \mu \cdot D(g\circ \pi_2)_{(a,b)} \end{align} Now, we have to calculate the derivative of a composition, so the chain rule is very handy here: \begin{align} D(f\circ \pi_1)_{(a,b)} &= Df_{\pi_1(a,b)} \circ D(\pi_1)_{(a,b)} \\ &= Df_a \circ \pi_1, \end{align} where in the last line I used the definition of $\pi_1$, and the fact that $\pi_1$ is a linear transformation so it is equal to its derivative at any point. Similarly, we have that $D(g\circ \pi_2)_{(a,b)} = Dg_b \circ \pi_2$. Putting this all together, \begin{align} Dh_{(a,b)} &= \lambda \cdot Df_a \circ \pi_1 + \mu \cdot Dg_b \circ \pi_2 \end{align} This is the equation which expresses the derivative as a linear transformation. Now, if you want everything in terms of "Jacobian matrices", then all you have to do is express these linear transformations as matrices using the standard ordered bases on the domain $\Bbb{R}^{m}\times \Bbb{R}^n \cong \Bbb{R}^{m+n}$ and the target $\Bbb{R}^{k}$. 
Then, we have: \begin{align} h'(a,b) &= \lambda \cdot f'(a) \cdot [\pi_1] + \mu \cdot g'(b) \cdot [\pi_2] \end{align} Here, $h'(a,b)$ is the matrix representation of $Dh_{(a,b)}$, and $f'(a)$ is the $k\times m$ matrix representation of $Df_a:\Bbb{R}^m \to \Bbb{R}^k$, and $[\pi_1]$ is the $m \times (m+n)$ matrix representation of $\pi_1:\Bbb{R}^m\times \Bbb{R}^n \to \Bbb{R}^m$ etc, and the $\cdot$ which appears above is scalar multiplication and also multiplication of matrices (because composition of linear maps corresponds to multiplication of their respective matrices). It is now a simple matter to calculate what the matrices $[\pi_1]$ and $[\pi_2]$ look like. If you do so, then you immediately get \begin{align} h'(a,b) &= \begin{bmatrix} \lambda f'(a) & \mu g'(b) \end{bmatrix} \end{align} (I leave this last bit of simple linear algebra to you). Of course, this isn't the only way to do things. You could also use the fact that if $\phi:\Bbb{R}^p \to \Bbb{R}^q$ is a differentiable map, then the matrix entries of $\phi'(\alpha)$ are the various partial derivatives $(D_j\phi_i)(\alpha)$. But I don't particularly like this method because I always forget which of $i,j$ corresponds to rows/columns, and by the time I figure this out, I can usually already get the answer using the above method. Also, I think for such simple problems, it is completely unnecessary to invoke the use of partial derivatives, because it obscures what's going on.
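If you want to convince yourself of the block structure numerically, here is a small sketch comparing a finite-difference Jacobian of $h$ against $\begin{bmatrix} \lambda f'(a) & \mu g'(b) \end{bmatrix}$. The particular maps $f$, $g$ and all names are made up purely for the check:

```python
import math

LAM, MU = 2.0, -3.0

def f(x):            # illustrative f : R^2 -> R^2
    x1, x2 = x
    return [x1**2 + x2, x1 * x2]

def g(y):            # illustrative g : R^1 -> R^2
    (y1,) = y
    return [math.sin(y1), y1**3]

def h(v):            # h(x, y) = LAM * f(x) + MU * g(y) on R^(2+1)
    fx, gy = f(v[:2]), g(v[2:])
    return [LAM * p + MU * q for p, q in zip(fx, gy)]

def num_jacobian(func, v, eps=1e-6):
    """Central-difference Jacobian of func at v."""
    m, n = len(func(v)), len(v)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        vp, vm = v[:], v[:]
        vp[j] += eps
        vm[j] -= eps
        up, um = func(vp), func(vm)
        for i in range(m):
            J[i][j] = (up[i] - um[i]) / (2 * eps)
    return J

a, b = [1.0, 2.0], [0.5]
J = num_jacobian(h, a + b)

# analytic blocks: f'(a) = [[2*a1, 1], [a2, a1]], g'(b) = [[cos b1], [3*b1^2]]
expected = [[LAM * 2 * a[0], LAM * 1.0,  MU * math.cos(b[0])],
            [LAM * a[1],     LAM * a[0], MU * 3 * b[0]**2]]

for i in range(2):
    for j in range(3):
        assert abs(J[i][j] - expected[i][j]) < 1e-5
```

The first two columns of `J` are $\lambda f'(a)$ and the last column is $\mu g'(b)$, exactly the block matrix above.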
H: Proving solution space for $y'+a_0y=0$ has $\{{e^{-a_{0}t}}\}$ as a basis I am trying to prove that the solution space for $y'+a_0y=0$ has $\{{e^{-a_{0}t}}\}$ as a basis. (From Friedberg Linear Algebra Thm2.30) First we can see that ${e^{-a_{0}t}}$ is a solution since $y'(t)+a_0y(t)=-a_0e^{-a_0t} +a_0e^{-a_{0}t}=0$ Suppose $x(t)$ is any solution. Then $x'(t)=-a_0x(t)$ for all $t \in R$. I do not understand this part. Why is the derivative of $x(t)$ of that form? Is it assumed that all solutions have the form of the basis? Define $z(t)=e^{a_0t}x(t)$. I do not understand how they came to this? Why would they make such definition? $z'(t)=0$ and since $z'$ is identically zero, $z$ is a constant function. Thus, there exists $k \in C$ such that $z(t)=e^{a_0t}x(t)=k$ for all $t \in R$. Therefore $x(t)=ke^{-a_0t}$ ending the proof. AI: Suppose $x$ is a solution to $y'+a_0 y=0$. Then since $x$ solves the differential equation, it must be that $x'(t)+a_0x(t)=0$ for all $t$, implying that $x'(t)=-a_0x(t)$. Define $z(t)=e^{a_0 t}x(t)$. We choose this definition of $z$ in order to conveniently show that the arbitrary solution $x$ takes the form of the exponential basis function. Differentiating $z$, we find that \begin{equation*} z'(t) = a_0e^{a_0 t}x(t) + e^{a_0 t} x'(t), \end{equation*} by the product rule. Substituting our form for $x'$, this becomes \begin{equation*} z'(t) = a_0e^{a_0 t}x(t) + e^{a_0 t} (-a_0 x(t)) = 0, \end{equation*} which holds for all $t$. Therefore, it must be that $z(t) = k$ for some constant $k$. By our definition of $z$, this implies that \begin{equation*} z(t) = e^{a_0 t}x(t) = k, \end{equation*} which shows that \begin{equation*} x(t) = ke^{-a_0 t}. \end{equation*}
H: Are there operations that can't be defined using a rule, and if they exist what is their significance? Wikipedia:Operations(mathematics) In mathematics, an operation is a function which takes zero or more input values (called operands) to a well-defined output value What I took away from this fact was that operations are essentially just functions. I know that a function could just be an insignificant set of ordered pairs, with no formula associated with it, for example, $$\{(red,255),(green,127),(blue,0)\}$$ Are there functions that can't be described by anything more complex than sets, and if they do exist how would we define or use them? Would these "operations" be considered functions, or would we still consider them to be operations? AI: In fact, the vast, vast majority of functions on, say $\mathbb R$, have no “rule”. This is not very surprising when you realize that there are at least as many functions as there are real numbers (because there are constant functions spitting out any real number) and the vast, vast majority of real numbers have no “rule”. Indeed, any given human description can be encoded in the Unicode character set, which maps it to a unique integer in $\mathbb N$. But it is known that the cardinality of $\mathbb R$ is simply a larger infinity than that of $\mathbb N$, as the binary expansion of a number in $[0, 1)$ (except for some details about repeating 1s at the end of an expansion, which turn out to be immaterial) can be viewed as specifying a subset of $\mathbb N$ consisting of all indices which have a 1-bit there. The set of subsets of $S$ is always a larger cardinality than $S$, they cannot be put into one-to-one correspondence; and so too with $S = \mathbb N$. But this is also true for functions $\mathbb N \to \{0, 1\}$, for example, and so the vast, vast majority of functions from natural numbers to nontrivial sets are also indescribable in words or formulas.
Words and formulas can only exhaustively define all those functions from finite sets to sets which are either finite or countably infinite. Otherwise there are literally just not enough words to define some of the rules.
H: Example of a weakly convergent sequence that is not strongly convergent in $\ell^p(\mathbb{N})$ Here $\ell^p(\mathbb{N})$ is the space of scalar sequences $\{x_n\}_{n \in \mathbb{N}}$ indexed by the non-negative integers where $$\sum_{n \in \mathbb{N}} |x_n|^p < \infty$$ So I'm looking for an example of a sequence in this space that converges in the weak topology but doesn't converge in the topology generated by the norm. AI: For $1 < p < \infty$, the sequence $e_1=(1,0,\dots), e_2=(0,1,0,\dots),\dots$ works. It converges weakly to $0$ because every bounded linear functional on $\ell^p$ is given by pairing with some $(y_n) \in \ell^{q}$, where $\frac1p+\frac1q=1$, and $(y_n) \in \ell^{q}$ implies $\langle e_n, y \rangle = y_n \to 0$. It does not converge in norm, since $\|e_n - e_m\|_p = 2^{1/p}$ for all $n \neq m$.
H: Regular hexagon divided into triangles Problem: Given a regular hexagon and an interior point of it, join this point to each vertex. The hexagon is divided into $6$ triangles; paint the triangles alternately. Show that the sum of the areas of the painted triangles is equal to that of the unpainted triangles The same problem is proposed with a square, and it is easy to prove in that case, because the sum of heights of opposite triangles is $l$ (the side of the square). But in this case I can't. I tried to prove that the sum of heights of the painted triangles is $\frac{3\sqrt3\cdot l}{2}$. I was not able to, so what do you suggest, or how else can I do it? AI: Let $ABCDEF$ be the hexagon. Extend the alternating sides $AB,CD,EF$ until pairs of them meet at points $G,H,I$, where $\triangle GHI$ is equilateral with sides measuring $l+l+l=3l$. Then, by Viviani's theorem, the sum of the distances from any interior point to the three sides of the equilateral triangle $GHI$ is constant, equal to its height, i.e. twice its area divided by the side length $3l$, which you should be able to render as $3\sqrt3l/2$.
H: Defining the natural almost complex structure on a complex manifold. The definition of an almost complex structure is as follows. If $X$ is a differentiable manifold and $TX$ is its tangent bundle, then the endomorphism $I: TX \to TX$ defines an almost complex structure if $I \circ I = -1$ on all the fibers. If $X$ is a complex manifold, it would be easy to define $I$ locally (on the holomorphic tangent bundle) on a chart $U_i$ with the trivialization $\Phi_i : U_i \times \mathbb{C}^n \to \pi^{-1}(U_i)$ by letting $I_i(p, v) = (p, iv)$. Then clearly, $I_i^2 = -1$ so every chart has an almost complex structure. This would then give a natural map $I_i': \pi^{-1}(U_i) \to \pi^{-1}(U_i)$ by $I_i' = \Phi_i I_i \Phi^{-1}_i$ which satisfies the conditions of an almost complex structure on $TU_i$. However, I'm having trouble seeing how this extends to a global endomorphism $I$. To do so, we would need $I_i \equiv I_j$ on $\pi^{-1}(U_i \cap U_j).$ If $U_j$ has a local trivialization $\Phi_j$, then the transition map $\Phi_j^{-1} \circ \Phi_i: (U_i \cap U_j )\times \mathbb{C}^n \to (U_i \cap U_j )\times \mathbb{C}^n$ is given by $(p, v) \mapsto (p, \tau_p(v))$ for some differentiable map $\tau: U_i \cap U_j \to GL(n, \mathbb{C})$. Then, in order for $I_i' = I_j'$, we would need $\Phi_i I_i \Phi_i^{-1} = \Phi_j I_j \Phi_j^{-1}$ which is true iff $\Phi_j ^{-1} \Phi_i I_i = I_j \Phi_j^{-1} \Phi_i$ which is true iff $(p, \tau_p(iv)) = (p, i \tau_p(v))$ which is obviously true by the fact that $\tau_p \in GL(n, \mathbb{C})$. Does this prove that X has an almost complex structure? Also, is there a more clear way to construct it? The book I'm reading from dismisses this fact as obvious which makes me concerned. Thank you! AI: Looks good to me. Your argument does indeed prove that $X$ admits an almost complex structure. It can also be viewed as a proof that $TX$ is a complex vector bundle (which is equivalent). 
Another proof I have seen is to define $J$ in a complex coordinate chart $(U, (z^1, \dots, z^n))$ by \begin{align*} J\left(\frac{\partial}{\partial x^k}\right) &= \frac{\partial}{\partial y^k}\\ J\left(\frac{\partial}{\partial y^k}\right) &= -\frac{\partial}{\partial x^k} \end{align*} where $x^k = \operatorname{Re}(z^k)$ and $y^k = \operatorname{Im}(z^k)$. Then the fact that this gives rise to a well-defined almost complex structure $J$ is equivalent to the Cauchy-Riemann equations.
H: How to say a variable is invertible in Macaulay2? I'm a very beginner in Macaulay2, so I apologize if this question is too trivial... I'm using Macaulay2 for a computation involving over $30$ variables. Roughly speaking I have a $4\times 4$ matrix where entries are polynomials while coefficients are also variables. It's (minor) determinants give certain conditions and should simplify the form of the matrix. I'm trying to solve this by creating a huge ring with many variables, compute an (again huge) ideal generated by the given conditions, and use "trim" to express the ideal in a simple way. One important part of this computation is that some of the variables are invertible, like say $x$ is invertible if I know $xy=0$ , then $y=0$. I tried to put this condition by adding one more auxiliary variable, say $z$ , and give the condition $xz-1 = 0$ (as what we usually do in commutative algebra). However, I found that Macaulay2 does not do this job; when it has $xy$ in the ideal, it does not provide $y$ in the ideal and so the set of generators does not get simplified well. Are there some other way to put this condition, so that Macaulay2 reflects the invertibility in its computation? AI: Here is a way to turn an indeterminate into a unit, however I can't say if it will be enough to do what you want to do. Declare a ring in the indeterminate x which you want to invert, say R=QQ[x], take its fraction field F=frac R. Now you can test if x is a unit in F with isUnit x, and the returned answer is True. Now x will be a unit in any ring S=F[y,z,...]. Note however that something as simple as roots(x*y) will not work since the expected ring is one of ZZ, QQ, RR or CC. Here is a small example in which everything works as intended A=QQ[x]; B=frac A; R=B[y]; (note that something like R=frac(QQ[x])[y] or R=(frac(QQ[x]))[y] will not work properly, i.e. 
it will not consider x as a unit). Then:
isUnit x returns True
isUnit y returns False (sanity check)
gens ideal(x) still returns the ideal generated by x; however, gens gb ideal(x) returns the ideal generated by 1.
H: Integrable function $f$ such that $\int_I f(x)dx=0$ for intervals of arbitrarily small length. A past qual question from my university reads: Let $f$ be an integrable function satisfying $\int_0^1 f(x)dx=0$. Prove that there are intervals $I$ of arbitrarily small positive length such that $$\int_I f(x)dx=0$$ I'm not sure how to approach the problem. One has that $\nu(E)=\int_E f(x)dx$ is a signed measure with a Hahn decomposition of $[0,1]=P\cup N$ where $f\geq 0$ on P and $f\leq 0$ on N. But I can't seem to be able to come up with a way of finding an interval with the desired property. AI: We can show that for every $n \ge 2$ there is an interval $I_n$ of length between $1/n$ and $2/n$ s.t. $\int_{I_n} f(x)dx=0$ Fix $n \ge 2$ and consider $a_k=\int_{\frac{k}{n}}^{\frac{k+1}{n}}f(x)dx, k=0,1..,n-1$; since $\sum a_k=0$ we either have some $a_k =0$ so done, or there are consecutive terms with $a_ka_{k+1} <0$ for some $k \le n-2$; wlog assume $a_k >0, a_{k+1} <0, a_k+a_{k+1} >0$ since if $a_k+a_{k+1}=0$ we are again done, while the other cases are treated as below with the obvious changes. Then $g(a)=\int_a^{\frac{k+2}{n}}f(x)dx$ is a continuous function for $\frac{k}{n} \le a \le \frac{k+1}{n}$ and $g(\frac{k}{n}) >0, g(\frac{k+1}{n}) <0$ so there is an $a_n, \frac{k}{n} \le a_n \le \frac{k+1}{n}, g(a_n)=0$; letting $I_n=[a_n,\frac{k+2}{n}]$ we are done since $\int_{I_n} f(x)dx=0, 1/n < |I_n| < 2/n$.
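The argument is constructive, so it can be sketched numerically. In the sketch below, `F` is the antiderivative of the illustrative choice $f(x)=\sin(2\pi x)+\cos(4\pi x)$ (which satisfies $\int_0^1 f = 0$), and the code mirrors the case analysis of the proof; all names are made up:

```python
import math

def F(x):
    """Antiderivative of f(x) = sin(2*pi*x) + cos(4*pi*x); f integrates to 0 over [0, 1]."""
    return (1 - math.cos(2 * math.pi * x)) / (2 * math.pi) \
        + math.sin(4 * math.pi * x) / (4 * math.pi)

def bisect(phi, lo, hi):
    """Plain bisection for a sign change of phi on [lo, hi]."""
    s = phi(lo) > 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if (phi(mid) > 0) == s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def zero_integral_interval(F, n, tol=1e-12):
    """Return (a, b) with 1/n <= b - a <= 2/n and F(b) - F(a) = 0, as in the proof."""
    blocks = [F((k + 1) / n) - F(k / n) for k in range(n)]   # the a_k
    for k in range(n):
        if abs(blocks[k]) < tol:                             # some a_k is already zero
            return k / n, (k + 1) / n
    for k in range(n - 1):
        if blocks[k] * blocks[k + 1] < 0:                    # consecutive a_k of opposite sign
            s = blocks[k] + blocks[k + 1]
            if abs(s) < tol:
                return k / n, (k + 2) / n
            if (s > 0) == (blocks[k] > 0):
                # g(a) = F((k+2)/n) - F(a) changes sign on [k/n, (k+1)/n]
                a = bisect(lambda a: F((k + 2) / n) - F(a), k / n, (k + 1) / n)
                return a, (k + 2) / n
            # otherwise h(b) = F(b) - F(k/n) changes sign on [(k+1)/n, (k+2)/n]
            b = bisect(lambda b: F(b) - F(k / n), (k + 1) / n, (k + 2) / n)
            return k / n, b
    raise ValueError("no sign change found")

n = 6
a, b = zero_integral_interval(F, n)
assert 1 / n - 1e-9 <= b - a <= 2 / n + 1e-9 and abs(F(b) - F(a)) < 1e-9
```

Running it with larger and larger $n$ produces intervals of arbitrarily small length on which $f$ integrates to zero, exactly as the proof promises.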
H: A Rational Parameterization of Multiple Simple Expressions (Or the intersection of two rational parameterizations) Context I am interested specifically in all rational values of $x$ for which $\sqrt{1-x}$ and $\sqrt{1+x}$ are rational. In general; however, I am curious if there is a method for taking any number of expressions in the form $\sqrt{n_i \pm x}$ and finding all rational values for x that ensure all of the expressions are rational. What I have tried I have tried a few different methods: Method 1 I have tried calculating rational parameterizations of both expressions individually by taking the rational point $(0,1)$ and finding the intersection between a line with a rational slope through that point and the graphs individually. For $y=\sqrt{1-x}$ I used the line $x=t(y-1)$ Plugging this into $y=\sqrt{1-x}$ I get $y=\sqrt{1-t(y-1)}$ By solving this for $y$ (and ultimate $x$) in terms of $t$, I get $x=-t^2-2t$ Similarly for $y=\sqrt{1+x}$ with this method I get $x=t^2+2t$ But I have not been able to parameterize the intersection of these two parameterizations. Method 2 I have tried calculating rational parameterizations of both expressions together by combining them into the system: $$y=\sqrt{1-x}$$ $$z=\sqrt{1+x}$$ and taking the rational point $(0,1,1)$ and finding the intersection between a line with a rational slope through that point and the surface $(x,\sqrt{1-x},\sqrt{1+x})$. By this method I get a long and messy formula that does not guarantee rational coordinates. Method 3 I have tried using the same techniques to rationally parameterize $y=\sqrt{1-x}+\sqrt{1+x}$ with a similarly messy result. 
Method 4 Since $x$ is rational, $x=\frac{a}{b}$ where $a$ and $b$ are co-prime integers, the expressions above can be rewritten as: $$\sqrt{1-x}=\sqrt{1-\frac{a}{b}}=\sqrt{\frac{b-a}{b}}$$ $$\sqrt{1+x}=\sqrt{1+\frac{a}{b}}=\sqrt{\frac{b+a}{b}}$$ For these to be rational either $a$ must contain $b$ as a factor (which is impossible because $a$ and $b$ are defined co-prime) or $b$ must be a square integer $b=c^2$. I performed a search for all positive integers in certain range that where $\sqrt{c^2-a}$ and $\sqrt{c^2+a}$ are integers to attempt to identify a pattern. The first few fully reduced fractions (where $a \neq 0$) that I found are: $$\frac{24}{25},\frac{120}{169},\frac{240}{289},\frac{336}{625},\frac{840}{841},\frac{840}{1369},\frac{720}{1681},\frac{2520}{2809},\frac{1320}{3721},\frac{2016}{4225},\frac{3696}{4225},\frac{5280}{5329},\frac{2184}{7225},\frac{5544}{7225},\frac{6240}{7921},...$$ The denominators (the value of $c$, not $c^2$) seem to correspond directly to the "Ordered hypotenuses (with multiplicity) of primitive Pythagorean triangles" OEIS A020882 and the numerators to "Common differences in triples of squares in arithmetic progression, that are not a multiples of other triples in (A$198384$, A$198385$, A$198386$)" OEIS A198438. With this information, I am unsure how to prove that these sequences will enumerate a full rational parameterization of my two initial expressions without missing any rational points, and how to generate a parameterization of these rational values. Final Notes Any hints, ideas, or references would be much appreciated! Edit Thanks to John Omielan and using my techniques above, I have determined that $\sqrt{1-x}$ and $\sqrt{1+x}$ are rational when $x=\frac{4t(t^2-1)}{(t^2+1)^2}$ for all rational values of $t$. 
AI: For $\sqrt{c^2 - a}$ and $\sqrt{c^2 + a}$ to be integers means for some integers $b$ and $d$ you have $$\sqrt{c^2 - a} = b \implies c^2 - a = b^2 \tag{1}\label{eq1A}$$ $$\sqrt{c^2 + a} = d \implies c^2 + a = d^2 \tag{2}\label{eq2A}$$ \eqref{eq2A} minus \eqref{eq1A} gives $$2a = d^2 - b^2 \implies a = \frac{d^2 - b^2}{2} \tag{3}\label{eq3A}$$ Adding these $2$ equations instead gives $$2c^2 = b^2 + d^2 \tag{4}\label{eq4A}$$ There are a couple of good answers in Quora's Are there two squares that when added the sum is twice another square? that help to solve this. First, Justin Rising's answer explains We start by noting that $a^2 + b^2 = 2c^2$ if and only if $\left(\frac{a}{c\sqrt{2}}\right)^2 + \left(\frac{b}{c\sqrt{2}}\right)^2 = 1$. This means that the point $(\frac{a}{c\sqrt{2}}, \frac{b}{c\sqrt{2}})$ lies on the unit circle. If we rotate it by $\frac{\pi}{4}$ radians, we get $(\frac{a}{2c} − \frac{b}{2c}, \frac{a}{2c} + \frac{b}{2c})$. Therefore, every solution to the the original equation corresponds to a rational point on the unit circle. Next, Ben Packer's answer extends this to show that every rational point on the unit circle corresponds to a Pythagorean triple, i.e., $$x^2 + y^2 = z^2 \tag{5}\label{eq5A}$$ Then setting $$\frac{b}{c} = -\frac{x}{z} + \frac{y}{z} \tag{6}\label{eq6A}$$ $$\frac{d}{c} = \frac{x}{z} + \frac{y}{z} \tag{7}\label{eq7A}$$ gives a solution to \eqref{eq4A}. Note this connection to Pythagorean triples helps to explain your observation of The denominators (the value of $c$, not $c^2$) seem to correspond directly to the "Ordered hypotenuses (with multiplicity) of primitive Pythagorean triangles" OEIS A020882
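As a sanity check of the parameterization $x=\frac{4t(t^2-1)}{(t^2+1)^2}$ noted in the question's edit, one can verify in exact arithmetic that $1\pm x$ are squares of rationals; this rests on the algebraic identities $1\pm x=\left(\frac{t^2\pm 2t-1}{t^2+1}\right)^2$ (a sketch, with illustrative names):

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q):
    """True if the non-negative rational q is the square of a rational."""
    n, d = q.numerator, q.denominator   # a Fraction is already in lowest terms
    return n >= 0 and isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

for p in range(-20, 21):
    for q in range(1, 21):
        t = Fraction(p, q)
        x = 4 * t * (t**2 - 1) / (t**2 + 1) ** 2
        assert abs(x) <= 1                  # both square roots are real
        assert is_rational_square(1 - x)
        assert is_rational_square(1 + x)
```

For example, $t=2$ gives $x=\frac{24}{25}$ (the first fraction found in the question's search), with $\sqrt{1-x}=\frac15$ and $\sqrt{1+x}=\frac75$.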
H: If $P$ is on the circumcircle of a triangle, show that the feet of the perpendiculars from $P$ to the side-lines of the triangle are collinear Let $ABC$ be a triangle and $P$ be any point on its circumcircle. Let $X,Y,Z$ be the feet of the perpendiculars from $P$ onto lines $BC, CA$ and $AB$. Prove that points $X,Y,Z$ are collinear. So I've already made a diagram (it is attached below), but I don't know how to prove it from there. Please help and explain your solution thoroughly because I have a test about this tomorrow and I want to understand this! Thank you! :D AI: The method of proof is to show that $\displaystyle \angle NMP+\angle PML=180^{\circ }$. (Here $L, M, N$ denote the feet of the perpendiculars from $P$ to $CA$, $BC$, $AB$ respectively, i.e. the points $Y, X, Z$ of the question.) $\displaystyle PCAB$ is a cyclic quadrilateral, so $\displaystyle \angle PBA+\angle ACP=\angle PBN+\angle ACP=180^{\circ }$. $\displaystyle PMNB$ is a cyclic quadrilateral (Thales' theorem), so $\displaystyle \angle PBN+\angle NMP=180^{\circ }$. Hence $\displaystyle \angle NMP=\angle ACP$. Now $\displaystyle PLCM$ is cyclic, so $\displaystyle \angle PML=\angle PCL=180^{\circ }-\angle ACP$. Therefore $\displaystyle \angle NMP+\angle PML=\angle ACP+(180^{\circ }-\angle ACP)=180^{\circ }$.
H: Show $e^{-tA}$ is a trace class operator, $t>0$ I have the following definition: $Tr(A)=\sum_n<u_n,Au_n>$, where $A$ is a positive linear operator on $H$ (Hilbert), and $\{u_n\}$ is an orthonormal base of $H$. And an operator is trace class if $Tr(A)<\infty$. Let $A:D(A)\subset H\to H$ a positive, self-adjoint, densely defined operator in $H$, Hilbert. $A$ only has point spectrum $\sigma (A)=\{ \lambda_k\}_{k\in \mathbb{N}}$ and $\lambda_k < \lambda_{k+1}$ $\forall k$. $\{u_k\}_{k\in \mathbb{N}}$ is the respective orthonormal base of eigenvectors of $A$. I know that the trace is independent of the base, its cyclicity, and that $<v,Av>\geq\lambda_1\|v\|$. I got that $Tr(e^{-tA})=\sum_k e^{-t\lambda_k}$ but I don't know how to show it's finite, if it is possible. Any help is appreciated. AI: Consider the operator \begin{align} A = \sum^\infty_{n=1} \log(1+n) |u_n\rangle \langle u_n| \end{align} then we see that \begin{align} e^{-tA} = \sum^\infty_{n=1}e^{-t \log(1+n)}|u_n\rangle \langle u_n|. \end{align} Note that \begin{align} \operatorname{Tr}(e^{-tA}) = \sum^\infty_{n=1} \frac{1}{(1+n)^t}, \end{align} which does not converge for $t\le 1$. So without an additional growth assumption on the eigenvalues $\lambda_k$, the operator $e^{-tA}$ need not be trace class for every $t>0$: in this example it is trace class precisely for $t>1$.
H: What is wrong with this derangement argument $((n-1) !(n-1))$? The number of derangements for $n$ objects is given by the recursive relation: $$!n = (n-1) (!(n-1) + !(n-2))$$ This can be easily proved (for example, see the argument on Wikipedia page). Before looking at this argument, I thought along these lines: suppose we know $!(n-1)$, then I can create a derangement for $n$ objects by first taking a derangement for $n-1$ objects, placing $n$'th object at place $n$, and then swapping it with one of first $n-1$ objects. This would give us: $$!n = (n-1) (!(n-1))$$ But this number is less than the actual number given above. I was wondering what is wrong with this argument and which derangements it misses. AI: Let $\pi$ be a derangement of $\{1,2,\dots,n\}$. Suppose that $\pi(n)=j$ and $\pi(i)=n$. Then $i,j\in\{1,2,\dots,n-1\}$. Swapping values $j$ and $n$ in $\pi$ yields a permutation $\pi'$ such that $\pi'(n)=n$, $\pi'(i)=j$ and $\pi'(k)=\pi(k)\ne k$ for all $k\ne i,n$. This permutation has exactly $1$ fixed point if $i\ne j$, and exactly $2$ fixed points if $i=j$. Deleting those fixed points, you obtain either $!(n-1)$ or $!(n-2)$ permutations. There are $n-1$ choices for $j$, hence the recursive relation you stated at the beginning.
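A brute-force check of the counts for small $n$ (an illustrative sketch using exhaustive enumeration): it confirms the stated recurrence, and it shows that $(n-1)\,!(n-1)$ genuinely undercounts once $n \ge 4$ (at $n=3$ the two formulas happen to agree).

```python
from itertools import permutations

def derangements(n):
    """Count permutations of range(n) with no fixed point, by brute force."""
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

D = [derangements(n) for n in range(8)]   # 1, 0, 1, 2, 9, 44, 265, 1854

# the correct recurrence !n = (n-1)(!(n-1) + !(n-2))
for n in range(2, 8):
    assert D[n] == (n - 1) * (D[n - 1] + D[n - 2])

# the flawed count (n-1) * !(n-1) is strictly too small from n = 4 on
for n in range(4, 8):
    assert (n - 1) * D[n - 1] < D[n]
```

The missed derangements are exactly those where swapping back the value $n$ leaves two fixed points, i.e. the $!(n-2)$ branch of the recurrence.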
H: If G is a connected graph and C is a cycle from G, is G-C a connected graph? If G is a connected graph and C is a cycle from G, my question is: is G-C a connected graph? This question is related to the classification of surfaces theorem. If the Euler characteristic is lower than 2, then there exists a simple curve that does not separate the surface. Thanks AI: That depends. If you remove the edges of a wheel graph that are not adjacent to the universal vertex, what remains is connected. If you remove the edges of the cycle in the graph shown below, what remains is not connected.
  *
  |
  *
 / \
*   *
 \ /
  *
  |
  *
H: Understanding why the answer is no? Let $P (A) = 0.7$, $P(B^c) = 0.4$ and $P(B ~\text{and} ~C) = 0.48$ a. Find $P (A ~\text{or}~ B)$ when $A$ and $B$ are independent $P(B) = 1 - P(B^c) = 1 - 0.4 = 0.6$ Seeing as $A$ and $B$ are independent $P(A ~\text{and}~ B) = 0.7 \times 0.6 = 0.42$ $P (A~\text{ or}~ B) = 0.7 + 0.6 - 0.42 = 0.88$ b. Is it possible that $A$ and $C$ are mutually exclusive if they are independent? The answer in the book is no, they cannot be mutually exclusive, but I don't understand why? I thought events could be both mutually exclusive AND independent. Why is the answer no? If anyone can explain why, (and double-check if my work for a. is right). I would greatly appreciate it. AI: Your answer for the first part is correct. If $A$ and $C$ are mutually exclusive and independent then $0=P(\emptyset)=P(A \cap C) =P(A)P(C)$ so either $P(A)=0$ or $P(C)=0$. Here $P(A)=0.7>0$ and $P(C)\ge P(B \cap C)=0.48>0$, so neither probability is zero, and the two events cannot be both independent and mutually exclusive.
H: Solve $ \frac{d^2y}{dx^2} \cos x + \frac{dy}{dx} \sin x - 2y \cos^3 x = 2\cos^5x $ by a suitable transformation Consider $ \frac{d^2y}{dx^2} \cos x + \frac{dy}{dx} \sin x - 2y \cos^3 x = 2\cos^5x $. By a suitable transformation, reduce this equation to a second order linear differential equation with constant coefficients. My try: $$ \frac{d^2y}{dx^2} \cos x + \frac{dy}{dx} \sin x - 2y \cos^3 x = 2\cos^5x \\ \Rightarrow \frac{\frac{d(\frac{dy}{dx})}{dx} \cos x - \frac{dy}{dx} \frac{d(\cos x)}{dx} }{\cos^2x} -2y \cos^3 x = 2\cos^5x \\ \Rightarrow \frac{d\left( \frac{\frac{dy}{dx}}{\cos x} \right)}{dx} - 2y\cos^3x = 2\cos^5x$$ Unable to get rid of $\sin x$ or $\cos x$ terms from the coefficients. AI: $$ \frac{d^2y}{dx^2} \cos x + \frac{dy}{dx} \sin x - 2y \cos^3 x = 2\cos^5x $$ HINT : The change of variable $\quad t=\sin(x)\quad$ leads to : $$\frac{d^2y}{dt^2}-2y=2(1-t^2)$$ Solving leads to : $$y=c_1e^{\sqrt{2}\:t}+c_2e^{-\sqrt{2}\:t}+t^2$$
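To see why the hinted substitution works (a sketch of the chain-rule computation): with $t=\sin x$, $$\frac{dy}{dx}=\cos x\,\frac{dy}{dt},\qquad \frac{d^2y}{dx^2}=\cos^2 x\,\frac{d^2y}{dt^2}-\sin x\,\frac{dy}{dt},$$ and substituting into the equation gives $$\left(\cos^2 x\,y_{tt}-\sin x\,y_t\right)\cos x+\sin x\cos x\,y_t-2y\cos^3 x=2\cos^5 x \;\Longrightarrow\; \cos^3 x\left(y_{tt}-2y\right)=2\cos^5 x,$$ so dividing by $\cos^3 x$ (where $\cos x \neq 0$) yields $y_{tt}-2y=2\cos^2 x=2(1-t^2)$.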
H: Evaluating limits of integrals How to evaluate $$\lim_{n \to \infty}\sum_{m=1}^{\infty}\int_{0}^{\infty} \left(\frac{ m+x}{(m^n+x^n)^n} \right )dx$$ I made the substitution $$x = mt$$and factored out $$m^{-(n^2-2)}$$. I got this: $$\lim_{n \to \infty} \left(\sum_{m=1}^{\infty}m^{-(n^{2}-2)}\right)\int_{0}^{\infty} (1+t)(1+t^n)^{-n} dt $$ After that I tried the substitution: $$t^{n}=tan^{2}\theta$$ but after substitution I got two beta integrals after which I couldn't proceed further. I got the following: $$\frac{2}{n}\lim_{n \to \infty} \left(\sum_{m=1}^{\infty}m^{-(n^{2}-2)}\right)\int_{0}^{\frac{π}{2}} (\sin^{(\frac{2}{n}-1)}{\theta}\cos^{(2n-\frac{2}{n}-1)}\theta + \sin^{(\frac{4}{n}-1)}{\theta}\cos^{(2n-\frac{4}{n}-1)}\theta) d\theta $$ I couldn't proceed further.The answer is 3/2. Could someone clarify ? Thank you. AI: By dominated convergence we can move the limit in. Then notice that only the $m=1$ term survives the summation and notice that $$\lim_{n\to\infty} \frac{1}{(1+t^n)^n} = \begin{cases}0 & t \geq 1 \\ 1 & t < 1 \\ \end{cases}$$ so the limit becomes $$\int_0^1 1+t\:dt = \frac{3}{2}$$
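A numerical sanity check of the surviving integral (an illustrative sketch with a midpoint rule; for finite $n$ the value sits slightly below $3/2$, since the factor $(1+t^n)^{-n}$ dips to $0$ a little before $t=1$):

```python
import math

def integrand(t, n):
    """(1 + t) / (1 + t^n)^n, evaluated via logs to avoid overflow for t > 1."""
    if t <= 0:
        return 1.0
    w = n * math.log(t)
    if w > 300:                 # t^n is astronomically large: integrand is 0
        return 0.0
    return (1.0 + t) * math.exp(-n * math.log1p(math.exp(w)))

def midpoint_integral(n, upper=2.0, steps=200_000):
    h = upper / steps
    return h * sum(integrand((i + 0.5) * h, n) for i in range(steps))

val = midpoint_integral(500)    # truncating at t = 2 is harmless: the tail is 0
assert abs(val - 1.5) < 0.05    # approaches the limit 3/2 as n grows
```

The summation side is even faster: for $n$ large, $\sum_{m\ge 2} m^{-(n^2-2)}$ is negligible, so only the $m=1$ term contributes, matching the argument above.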
H: Inequality with a High Degree Constraint This question: Suppose that $x, y, z$ are positive real numbers and $x^5 + y^5 + z^5 = 3$. Prove that $$ {x^4\over y^3}+{y^4\over z^3}+{z^4\over x^3} \ge 3 $$ The inequality has a high degree constraint which can convert a $5$-degree polynomial to a $0$-degree term, and that makes it difficult. Trying C-S to manage this gives $$ \left({x^4\over y^3}+{y^4\over z^3}+{z^4\over x^3}\right)\left(x^5 + y^5 + z^5\right) \ge 9 \Rightarrow \left(x^2y+y^2z+z^2x\right)^2\geq9 \Rightarrow x^2y+y^2z+z^2x\geq3 $$ which is still a third degree inequality rather than a useful fifth degree one. How can I solve the problem? AI: Using the AM-GM inequality, we have $$\frac{30 x^4}{y^3} +7x^{10}+y^{10}+16x^5y^5 \geqslant 54\sqrt[54]{\left(\frac{x^4}{y^3}\right)^{30} \cdot (x^{10})^7 \cdot y^{10} \cdot (x^5y^5)^{16}} = 54x^5.$$ Similarly $$\frac{30 y^4}{z^3} +7y^{10}+z^{10}+16y^5z^5 \geqslant 54y^5,$$ and $$\frac{30 z^4}{x^3} +7z^{10}+x^{10}+16z^5x^5 \geqslant 54z^5.$$ Adding these and using $8(x^{10}+y^{10}+z^{10})+16(x^5y^5+y^5z^5+z^5x^5)=8(x^5+y^5+z^5)^2$, we get $$30\left({x^4\over y^3}+{y^4\over z^3}+{z^4\over x^3} \right) +8(x^5+y^5+z^5)^2 \geqslant 54(x^5+y^5+z^5).$$ Since $x^5+y^5+z^5=3$, this gives $${x^4\over y^3}+{y^4\over z^3}+{z^4\over x^3} \geqslant 3.$$
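The bookkeeping in the weighted AM-GM step (weights $30+7+1+16=54$, exponents balancing) and the final inequality can be spot-checked numerically (Python, standard library only; the sampling range and the rescaling onto the constraint surface are arbitrary choices made for the test):

```python
import random

random.seed(0)
min_amgm = float("inf")  # slack in the first AM-GM step
min_goal = float("inf")  # slack in the target inequality on the constraint
for _ in range(1000):
    x, y, z = (random.uniform(0.05, 3.0) for _ in range(3))
    min_amgm = min(min_amgm,
                   30 * x**4 / y**3 + 7 * x**10 + y**10 + 16 * x**5 * y**5 - 54 * x**5)
    # rescale so that x^5 + y^5 + z^5 = 3 holds
    s = ((x**5 + y**5 + z**5) / 3) ** 0.2
    x, y, z = x / s, y / s, z / s
    min_goal = min(min_goal, x**4 / y**3 + y**4 / z**3 + z**4 / x**3 - 3)
```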
H: $f$ isn't necessarily bijective but still $f^{-1}$ shows up If $A$ is compact, is $f(A)$ then compact? The answer here by David Mitra uses $f^{-1}$, however we only know $f$ is continuous on its domain, so how do we come up with the inverse? It's not mentioned to be strictly monotone either. I do think it's a stupid question but I really want to know. AI: For any function $f: X \to Y$ and any set $A \subseteq Y$ we define $f^{-1}(A)$ as $\{x \in X: f(x) \in A\}$. This is called the inverse image of $A$ under $f$.
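A tiny illustration of the point (Python; the function and domain are made up for the example): the inverse image is defined for any function, injective or not.

```python
# f(x) = x^2 on a finite domain is not injective, yet the inverse image
# of a set under f is perfectly well defined.
domain = range(-3, 4)

def f(x):
    return x * x

def preimage(A):
    return {x for x in domain if f(x) in A}

pre = preimage({4, 9})  # every x in the domain with x^2 in {4, 9}
```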
H: Relative error when exact quantity is $e^{-200}$ and significant digits; what's going on? Caveat: I've already searched for this topic here on MSE and also on other sites, but I still have not found anything that can answer my doubt. I'm checking my own implementation of a code. The context is not important. The correct value is $x=e^{-200}$, and I computed $\hat{x}$ with my routine. I computed the absolute error $|x- \hat{x}|=1.2\cdot10^{-14}$. This means that in $\hat{x}$ I have 14 correct digits of $x$. If I now compute the relative error I have $\frac{|x-\hat{x}|}{|x|} = 8.67 \cdot 10^{72} $. I know it is related to the number of significant digits, and in this case the denominator $x=e^{-200}$ is not zero (even if it is really small). I'm really puzzled because I can't understand what is going on: I mean, is the relative error telling me that the approximation is poor? But the first $14$ digits are equal. AI: Your $x$ has 86 zeros after the decimal point. So having determined the first 14 zeros you are still 72 zeros away from the real thing. More technically, your error is $10^{72}$ times bigger than the actual $x$, which is what your quotient is showing.
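The numbers in the question are easy to reproduce (Python, standard library only):

```python
import math

# x = e^{-200} is astronomically small, so an absolute error of 1.2e-14
# is enormous *relative* to x.
x = math.exp(-200)      # about 1.38e-87
abs_err = 1.2e-14
rel_err = abs_err / x   # about 8.67e72, matching the question
```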
H: Finding the general solution of a system of Differential Equations I need help finding the general solution to the following system $$ x'=2t^2+2-4x+6y $$ $$ y'=-2t^2-t+6-3x+5y $$ I know I need to turn the 2 equations into a matrix, but I can't figure out how to do it. So far I have the matrix for the $x$ and $y$ values and for $t$, but I don't know what to do with the $+2$ and $+6$ in each equation. AI: I presume you are using $x'$ to denote $\frac{\mathrm{d}x}{\mathrm{d}t}$. Now in general, you try to represent a coupled differential equation of $x$ and $y$ in the variable $t$ in the format $$\begin{bmatrix} x'(t) \\ y'(t)\end{bmatrix} = A(t)\begin{bmatrix} x(t) \\ y(t)\end{bmatrix}+ \begin{bmatrix}f_1(t) \\ f_2(t)\end{bmatrix},$$ where $A(t)\in \mathbb{R}^{2\times2}$. For instance, in your case, you would write the differential equation as $$ \begin{bmatrix}x'(t) \\ y'(t)\end{bmatrix} = \begin{bmatrix}-4 & 6\\ -3 & 5\end{bmatrix}\begin{bmatrix}x(t) \\ y(t)\end{bmatrix} + \begin{bmatrix}2t^2+2 \\ -2t^2 -t +6\end{bmatrix}.$$ Since your $A$ matrix is time-invariant, the solution of the above equation can be found easily. There are a couple of approaches you can use; one way would be to transform $A$ using a similarity transformation to the Jordan canonical form $A_J = PAP^{-1}$ and write the above equation in terms of $$ \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = P \begin{bmatrix} x \\ y\end{bmatrix}, \qquad \begin{bmatrix} f_1(t) \\ f_2(t) \end{bmatrix} = P\begin{bmatrix}2t^2+2 \\ -2t^2 -t +6\end{bmatrix},$$ as $$ \begin{bmatrix} z_1'(t) \\ z_2'(t)\end{bmatrix} = A_J\begin{bmatrix} z_1(t) \\ z_2(t)\end{bmatrix}+ \begin{bmatrix}f_1(t) \\ f_2(t)\end{bmatrix}.$$ The solution to the above equation is straightforward. You can obtain $x$ and $y$ back using $$\begin{bmatrix} x \\ y \end{bmatrix} = P^{-1} \begin{bmatrix} z_1 \\ z_2\end{bmatrix}.$$
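A small sanity check on the matrix form (Python, standard library only; a check added for illustration, not part of the original answer): the eigenvalues of $A$ follow from its trace and determinant, and here they are real and distinct, so the Jordan form is in fact diagonal.

```python
# A = [[-4, 6], [-3, 5]] has trace 1 and determinant -2, so its
# eigenvalues solve t^2 - t - 2 = 0, i.e. t = 2 and t = -1.
A = [[-4.0, 6.0], [-3.0, 5.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
eigs = sorted(((tr - disc ** 0.5) / 2, (tr + disc ** 0.5) / 2))
```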
H: Does weak continuity imply continuity? I have come across the following excerpt from a mathematical Statistics book (the excerpt is an attached image), where $H$ and $J$ are Hilbert spaces and $H^{\star}$ is the dual space. For me, the statement after Definition 10 is unconvincing and I cannot, in general, show that weak continuity implies continuity. In particular, if $x_n$ converges strongly to $x$, then it also converges weakly and by weak continuity $f(x_n)$ converges weakly to $f(x)$. However, we need $f(x_n)$ to converge strongly to $f(x)$ in order to have continuity, and it's not clear to me how we can obtain this. I expect this to be true when $J$ is finite-dimensional, as in that case weak and strong convergence are equivalent, but other than that I am at a loss. I was wondering then, is the book wrong on this? AI: 'Clearly' should not have been there, but the result is true. This requires the so-called Closed Graph Theorem (CGT). If $x_n \to x$ and $f(x_n) \to y$ in the norm, then $f(x_n) \to f(x)$ weakly, and it converges to $y$ in the norm (hence also weakly), and this implies $y=f(x)$. By the CGT it follows that $f$ is continuous.
H: Suppose $x_1,x_2$ and $x_3$ are roots of $(11 - x)^3 + (13 - x)^3 - (24 - 2x)^3$ . What is the sum of $x_1 + x_2 + x_3$? Suppose $x_1,x_2$ and $x_3$ are roots of $(11 - x)^3 + (13 - x)^3 - (24 - 2x)^3$ . What is the sum of $x_1 + x_2 + x_3$ ? What I Tried :- I expanded the expression and got :- $$ \rightarrow (11^3 + 13^3 + 24^3) - 10x^3 - 33x(11 - x) - 39x(13 - x) - 144x(24 - 2x)$$ $$ \rightarrow 17352 - 10x^3 + (33 + 39 + 288)x^2 - (363 + 507 + 3456)x $$ $$ \rightarrow -10x^3 + 360x^2 - 4326x + 17352 $$ Now on Wolfram Alpha I get that this can be factorised as $(-2)(x - 12)(5x^2 - 120x + 723)$ , hence these are the $x_1,x_2,x_3$. So $x_1 + x_2 + x_3 = 5x^2 - (120 - 1)x + (723 - 12 - 2)$ . $\rightarrow 5x^2 - 119x + 713$ . I hope this is the final answer , and I went through a lot of calculations, so this method of getting the answer is definitely not that simple . I am seeking a far more easier way or a shortcut to do this problem , can anyone help ? AI: Hint: Writing expression $ax^3+bx^2+cx+d$ as: $$a\left(x-x_{1}\right)\left(x-x_{2}\right)\left(x-x_{3}\right)$$ reveals that the coefficient of $x^{2}$ takes value: $$-a\left(x_{1}+x_{2}+x_{3}\right)$$
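A shortcut worth noting: since $(11-x)+(13-x)=24-2x$, the identity $u^3+v^3-(u+v)^3=-3uv(u+v)$ shows the roots are exactly $x=11,12,13$, so the sum is $36$, consistent with Vieta's formula from the hint. A quick check (Python):

```python
# The roots of (11-x)^3 + (13-x)^3 - (24-2x)^3 are where 11-x, 13-x,
# or their sum 24-2x vanishes; the polynomial is cubic, so these are all.
def p(x):
    return (11 - x) ** 3 + (13 - x) ** 3 - (24 - 2 * x) ** 3

roots = [x for x in range(1, 30) if p(x) == 0]
```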
H: Is a dense subset in the domain of a closed, densely defined linear operator a core? Let $X_0,X_1$ be Banach spaces. Let $A:D(A)\subseteq X_0\to X_1$ be a closable linear operator. Recall the definition of a core for such an operator: A set $\mathcal D\subseteq D(A)$ is called a core for $A$ if $\overline{A_{\mathcal D}}=\overline A$. In the case of a bounded linear operator one has the result: Let $A\in L(X_0,X_1)$, and $\mathcal D_0\subseteq X_0$ be a dense linear subspace. Then $\mathcal D_0$ is a core for $A$. My Question: In the case that $A:D(A)\subseteq X_0\to X_1$ is a closed and densely defined linear operator, is there an analogous result which says that a (dense) subset $\mathcal D\subseteq D(A)$ is a core for $A$? AI: Let $A: D(A) \to X_1$ be some closed operator between Banach spaces and let $x\in X_0$ be some element not in $D(A)$. Define $$A': D(A)+\Bbb C \cdot x\to X_1\oplus_{\ell^1} \Bbb C, \qquad y+ \lambda x\mapsto (A(y), \lambda).$$ We will check that this is also a closed operator. Suppose $y_n +\lambda_n x$ converges and $(A(y_n), \lambda_n)$ also converges. It follows then that $\lambda_n$ converges (call the limit $\lambda$) and also that $A(y_n)$ converges. From $\lambda_n$ and $y_n+\lambda_n x$ converging you get that $y_n$ converges, call the limit $y$. By closedness of $A$ you get that $y\in D(A)$ and $A(y_n)\to A(y)$. So $$A'(y_n+\lambda_n x) = (A(y_n), \lambda_n) \to (A(y),\lambda) =A'(y+\lambda x)$$ verifying that $A'$ is closed. If $i: X_1\to X_1\oplus_{\ell^1}\Bbb C$ is the inclusion, note that $i\circ A$ is a closed operator. Putting it all together, you have that $A'$ is a proper closed extension of the closed operator $i\circ A$. In particular $i\circ A$ is densely defined whenever $A$ is, with $D(i\circ A) = D(A)\subseteq D(A')$ a dense subspace, yet $\overline{i\circ A}= i\circ A\neq A'$. As such $D(A)$ is not a core for $A'$.
H: Number of ways to select a target In how many ways given 8 targets can be shot (one at a time), if no target can be shot until the target (s) below it have been shot ? My approach : ${3 \choose 1}$ to select any group and $1$ way to shoot it. Followed by ${3\choose 1}$ to select it again and $1$ way to shoot it. Then I took $4$ cases: case 1. The middle column was over case 2. either of the first or third column had taken both shots case 3. either third/first and middle case 4 . one of third and one of first This proved futile as I ended up counting each variation manually. What would be the correct way to solve it ? I'm guessing distribution into groups is used somehow. the answer is $560$ A similar question of targets exists but it is entirely different in the details. AI: You have to shoot at the left targets three times, the middle targets twice and the right targets three times. So in some order, we have to fire $$ LLLMMRRR $$ As long as we know whether we're firing left, middle or right, we know exactly which target we're aiming at, so an ordering of the above letters is sufficient to describe an order of targets. There are $\frac{8!}{3!\cdot 2!\cdot 3!}$ ways to order the above letters.
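The count from the answer — arrangements of the word LLLMMRRR — can be confirmed directly (Python, standard library only):

```python
import math

# 8 shots: 3 at the left column, 2 at the middle, 3 at the right.
ways = math.factorial(8) // (math.factorial(3) * math.factorial(2) * math.factorial(3))
```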
H: how to compute $\sum{\frac{(s+k)!}{s!k!}*x^k}$ For $\sum_{k=0}^{\infty}{\frac{(s+k)!}{s!k!}x^k}$, $0\leq x\leq1$. It is not binomial. So how can we simplify the factorial? AI: It is binomial: $\frac{(s+k)!}{s!k!} = \frac{1}{k!}(s+k)(s+k-1)\cdots(s+1) = (-1)^k\frac{1}{k!}(-s-1)(-s-2)\cdots (-s-k) = (-1)^k\binom{-s-1}{k}$ so the sum is $\sum_{k=0}^\infty \binom{-s-1}{k}(-x)^k = (1-x)^{-s-1}$
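A quick numerical spot-check of the closed form (Python, standard library only; $s$ and $x$ are arbitrary test values):

```python
import math

# sum_{k>=0} C(s+k, k) x^k should equal (1-x)^(-s-1) for 0 <= x < 1.
s, x = 3, 0.4
partial = sum(math.comb(s + k, k) * x ** k for k in range(200))
closed = (1 - x) ** (-s - 1)
```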
H: Enclose open interval as $ x\to \infty$ Can I "close" the interval $[0,\infty)$ as $x$ approaches infinity with some real number, given that $\displaystyle \lim_{x \to \infty }f(x)=f(0)$? The final goal of the exercise is to prove that $f$ isn't one-to-one. So I thought I could use the Weierstrass theorem to prove that a bounded interval (our new interval $[0,M]$, with $M$ as our "infinity" constant) has a min/max, so it will definitely have $x_1,x_2$ such that $f(x_1) = f(x_2)$. AI: First of all, you can do that as long as you also prove that the theorem holds in that context. I don't think that that's worth the trouble. Even if you do that, I don't see how you would deduce from it that there are $x_1,x_2\in[0,\infty)$ such that $x_1\ne x_2$ and $f(x_1)=f(x_2)$. Anyway, if $f$ is constant, it is obvious that it is not injective. If there is some $x_0\in[0,\infty)$ such that $f(x_0)>f(0)$, then, since $\lim_{x\to\infty}f(x)=f(0)$, if $x\gg0$ you have $f(x)<f(x_0)$. So, take some $x_1>x_0$ such that $f(x_1)<f(x_0)$. If $f(x_1)\geqslant f(0)$, there is some $x_2<x_0$ such that $f(x_2)=f(x_1)$. And if $f(x_1)<f(0)$, there is some $x_2\in(x_0,x_1)$ such that $f(x_2)=f(0)$. The case in which there is some $x_0\in[0,\infty)$ such that $f(x_0)<f(0)$ is similar.
H: Bounds for $\frac{\sigma(q^k)}{2\sigma(q^{k-1})}$ in terms of $q$ and $k$ If $q$ is a prime number and $k$ is a positive integer, does the following double-sided inequality hold? $$\frac{q}{2} + \frac{q - 1}{2q^k} < \frac{\sigma(q^k)}{2\sigma(q^{k-1})} \leq \frac{q}{2} + \frac{1}{2q^{k-1}}$$ Here, $\sigma(x)$ is the classical sum of divisors of the positive integer $x$. MY ATTEMPT Since $\sigma(q^k) = q^k + \sigma(q^{k-1})$, then we have $$\frac{\sigma(q^k)}{2\sigma(q^{k-1})}=\frac{q^k + \sigma(q^{k-1})}{2\sigma(q^{k-1})}=\frac{q}{2}\cdot\frac{q^{k-1}}{\sigma(q^{k-1})}+\frac{1}{2}=\frac{q}{2}\cdot\frac{1}{I(q^{k-1})}+\frac{1}{2},$$ where $I(x)=\sigma(x)/x$ is the abundancy index of $x$. I also know that $$1 \leq I(q^{k-1}) < \frac{q}{q-1}.$$ Alas, this is where I get stuck. AI: We have $\dfrac{\sigma(q^k)}{\sigma(q^{k-1})}=\dfrac{q^{k+1}-1}{q^k-1}=q+\color{blue}{\dfrac{q-1}{q^k-1}}$ and $$\frac{q-1}{q^k}<\color{blue}{\dfrac{q-1}{q^k-1}}\leqslant\frac{q-1}{q^k-q^{k-1}}=\frac{1}{q^{k-1}}.$$
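The bounds can be verified exactly for small prime powers using rational arithmetic (Python, standard library only; the test ranges are arbitrary — note the upper bound is attained with equality when $k=1$):

```python
from fractions import Fraction

# sigma(q^k) = (q^{k+1} - 1) / (q - 1) for a prime power q^k.
def sigma_pp(q, k):
    return Fraction(q ** (k + 1) - 1, q - 1)

ok = all(
    Fraction(q, 2) + Fraction(q - 1, 2 * q ** k)
    < sigma_pp(q, k) / (2 * sigma_pp(q, k - 1))
    <= Fraction(q, 2) + Fraction(1, 2 * q ** (k - 1))
    for q in (2, 3, 5, 7, 11, 13)
    for k in range(1, 8)
)
```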
H: Factorisation of a polynomial. Let $F$ be a field and $\operatorname{char} F = p$. If $x^p - x - a$ is reducible in $F[X]$, I am to prove that the irreducible factors of the polynomial have at most degree 1. The case for $p = 2$ is easy. I have not been able to progress any further. All I know is that $F[X]$ is an Euclidean Domain, hence a PID and hence a UFD, implying that the polynomial can be written as a product of irreducible factors. AI: If $\alpha$ is a root we have that $(\alpha+1)^p -(\alpha+1) +a=\alpha^p+1-\alpha -1 +a=0$ as well, so that the roots of the polynomial in a splitting field are $\alpha+r$ for $r=0,1,\dots,p-1$. Suppose then that $\phi(x)$ is an irreducible divisor of degree $s$ of our polynomial in $F[X]$. The coefficient of $-X^{s-1}$ in $\phi(X)$ is $s\alpha+t$ where $t$ is an integer; hence $s\alpha\in F$. Either $s=p$, contrary to the assumption that $X^p-X+a$ is reducible; or $\alpha\in F$. In the latter case each $X-\alpha-r\in F[X]$, and $X^p-X+a=\prod_{r=0}^{p-1}(X-\alpha-r)$.
H: Prove that this set of functions is not a subspace It was asked to prove that the set $X=\{f: f(x) = (f(x))^{2}\}$ is not a subspace of the space of continuous functions from $\mathbb{R}$ to $\mathbb{R}$. I took $f(x)=1, \forall x$, and $g(x)=1, \forall x$. So both $f,g$ satisfy the condition. But $f(x)+g(x)=1+1=2 \neq [f(x)+g(x)]^2$. So $X$ is not closed under addition and hence not a subspace. Is there another non-trivial solution for this? Thanks! AI: There are just two continuous functions that are always their own squares: the functions that are constantly $0$ or $1$. So it's hard to imagine a counterexample other than the one you found.
H: Some point set topology regarding the set $[7, \infty)\setminus \mathbb{Q}$ Consider the set $A=[7, \infty)\setminus \mathbb{Q}$. a) Determine $\operatorname{int}A$, $\operatorname{cl} A$, $A'$ and $\delta A$. b) Is $A$ connected or compact? Ok, so for a) I think that $\operatorname{int}A=(7, \infty) \setminus \mathbb{Q}$, $\operatorname{cl} A=[7, \infty)$, $A'=[7, \infty)$, $\delta A=\operatorname{cl}A\setminus \operatorname{int}A=[7, \infty) \cap \mathbb{Q}$. For b), $A$ is not connected because it is not an interval and it is not compact because it is not bounded. I would like to know if I am right because I have just started learning general topology. AI: The closure and $A'$ are correct, but the interior is empty (!): all open sets in $\Bbb R$ contain rationals, so no non-empty open set can be contained in $A$ which consists only of irrational numbers. So $\delta A= \text{cl} A$. $A$ is indeed not connected (it's not order convex, is the more technical property you mean). E.g. we can separate it on any rational $>7$, $A = (A \cap (-\infty,8)) \cup (A \cap (8, \infty))$ e.g. The unboundedness and non-closedness of $A$ both show non-compactness of $A$.
H: Are these norms equivalent in the product space? Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ and let $p_1, p_2\geq 1$. Consider the product space $W:=W_0^{1, p_1}(\Omega)\times W_0^{1, p_2}(\Omega)$ equipped with the norm $$\Vert (u, v)\Vert_W = \Vert u\Vert_{W_0^{1, p_1}} + \Vert v\Vert_{W_0^{1, p_2}} \quad \mbox{ for all } (u, v)\in W.$$ My question is the following one. If I consider the norm given by $$\left(\Vert u\Vert_{W_0^{1, p_1}}^{\max(p_1, p_2)} + \Vert v\Vert_{W_0^{1, p_2}}^{\max(p_1, p_2)}\right)^{\frac{1}{\max(p_1, p_2)}},$$ is it equivalent to the norm defined above? And what about if I replace $\max(p_1, p_2)$ with $\min(p_1, p_2)$? Is the equivalence also preserved? Could anyone please help or give some references? Thank you in advance! AI: All norms on a product $X \times Y$ that have the form $$\lVert (x,y)\rVert = N(\lVert x\rVert_X, \lVert y\rVert_Y)$$ where $N \colon \mathbb{R}^2 \to \mathbb{R}$ is a norm are equivalent. This easily follows from the equivalence of all norms on $\mathbb{R}^2$. The analogous result holds for products of an arbitrary finite number of normed spaces. All norms of the form $$N(\lVert x_1\rVert_{X_1}, \dotsc, \lVert x_n\rVert_{X_n})$$ where $N$ is a norm on $\mathbb{R}^n$ are equivalent.
H: Continuous inverse of an injective linear function $X,Y,Z$ are Banach spaces, $A:X\to Y$ and $B:X\to Z$ are continuous linear injective functions, and $B$ is also compact. Moreover, there exists $C>0$ such that $\Vert x\Vert\leqslant C\Vert Ax\Vert+C\Vert Bx\Vert$. The task is to show that there exists some constant $D$ such that $\Vert x\Vert\leqslant D\Vert Ax\Vert$. I would like to apply the continuous inverse theorem or the open mapping theorem, but for that I would need the image of $A$ in $Y$ to be closed, and I am not able to show it. AI: If this is false then there exist $x_n$'s such that $\|x_n\|>n\|Ax_n\|$. Let $y_n=\frac {x_n} {\|x_n\|}$, so $\|y_n\|=1$ and $\|Ay_n\|<1/n$. Since $(y_n)$ is bounded and $B$ is compact, there is a subsequence such that $(By_{n_i})$ converges, hence is Cauchy. Then $\|y_{n_i}-y_{n_j}\| \leq C\|A(y_{n_i}-y_{n_j})\|+C\|B(y_{n_i}-y_{n_j})\| \to 0$ since $Ay_{n_i} \to 0$ and $(By_{n_i})$ is Cauchy. Let $y =\lim y_{n_i}$. Then $\|y\|=1$ but $Ay=0$, a contradiction to the injectivity of $A$.
H: Question about strongly convergent nets. Consider the following theorem in Murphy's book "$C^*$-algebras and operator theory": Why do we need to truncate the net in order to conclude that $(u_\lambda)_{\lambda}$ is bounded below? Would the following be correct? Fix $\lambda_0 \in \Lambda$ and consider $\Lambda':= \{\lambda \in \Lambda: \lambda \geq \lambda_0\}$. Then by definition of increasing net we have $u_{\lambda_0} \leq u_\lambda$ for all $\lambda \in \Lambda'$ so $(u_{\lambda})_{\lambda \in \Lambda'}$ is bounded below by $u_{\lambda_0}$. Moreover, if we can show that $(u_\lambda)_{\lambda \in \Lambda'} $ converges strongly to $u$, then $(u_\lambda)_{\lambda \in \Lambda}$ converges strongly to $u$ as well and thus we can safely replace $(u_\lambda)_{\lambda\in \Lambda}$ by $(u_\lambda)_{\lambda\in \Lambda'}$. AI: One of the key differences between sequences and nets is that there is no first element. For example, you might have $\Lambda = \mathbb{R}$. However, to perform this argument, you need a starting point $\lambda_0$, so you just choose one. Your argument after that is fine.
H: Let $f:ℝ→ℕ$ be onto. Does there exist a $g:ℕ→ℝ$ such that $f(g(b))=b$ for all $b∈ℕ$? I'm reading Classic Set Theory for Guided Independent Study, and they introduced ZF set theory (no axiom of choice still) and the construction of integers, real, rational and natural numbers. The books says that it's impossible to describe, finitely, a way of obtaining $g(b)$ for each $b∈ℕ$, but i can't see why. If i let $f(x)=x$ and $g(b)=b$ we would have $f(g(b))=g(b)=b$ why wouldn't this work? Thank you! AI: A complement to Asaf's answer. Your problem here might be more about English than mathematics. (I mean English as used in mathematics which takes some learning even for native English speakers.) An analogy might help. Suppose I claim that given a positive real number, I can calculate its square root. I "prove" this ability by saying that the square root of $4$ is $2$. Have I proved the ability that I claimed? You have done the same. You have found a $g$ for a specific $f$ but not proved your ability to do it for any $f$.
H: In a pretriangulated category, a morphism is an isomorphism if and only if its homotopy kernel and homotopy cokernel are zero Let $\mathcal{T}$ be a pretriangulated category with suspension $\Sigma$ (assumed to be an automorphism) and a class of distinguished triangles. $v\colon Y\to Z$ is a homotopy cokernel of $u\colon X\to Y$ if there is a distinguished triangle $X \xrightarrow{u} Y \xrightarrow{v} Z \xrightarrow{w} \Sigma X$. Dually, $t\colon T\to X$ is a homotopy kernel if there is a distinguished triangle $X \xrightarrow{u} Y \xrightarrow{v} \Sigma T \xrightarrow{\Sigma t} \Sigma X$. These notions are unique up to non-unique isomorphism. For the reference, I use Murfet's notes: http://therisingsea.org/notes/TriangulatedCategories.pdf Murfet attempts to prove that $u$ is an isomorphism if and only if its homotopy kernel and homotopy cokernel are zero. One direction is proven correctly. But in proving that a morphism with zero homotopy kernel and homotopy cokernel is an isomorphism, he seems to make an obvious mistake (page 9). So, what should be the right way to prove that? One useful lemma Murfet proves beforehand is the following one: $u\colon X\to Y$ is an isomorphism if and only if there is a distinguished triangle $X \xrightarrow{u} Y \xrightarrow{} 0 \xrightarrow{} \Sigma X$. The problem is, a homotopy kernel and a homotopy cokernel of $u$ are zero, then there is a triangle $X \xrightarrow{u} Y \xrightarrow{0} Z \xrightarrow{0} \Sigma X$, but $Z$ is not necessarily zero. Also, I would like to avoid the use of enhancements (but it's unlikely they are needed since the problem seems to be rather elementary). AI: It follows from the second to next result in the notes, namely Proposition 12: a morphism with zero homotopy kernel (resp., homotopy cokernel) is necessarily a split monomorphism (resp., split epimorphism). Of course, a morphism which is a split monomorphism and a split epimorphism is necessarily an isomorphism.
H: Confusion with $U(1)$ and $SU(2)$ I was reading Physics from Symmetry from Jakop Schwichtenberg and I got confused by the definitions of groups $U(1)$ and $SU(2)$. As far as I understood, unit complex numbers with the ordinary multiplication forms a group and it is called $U(1)$. $U$ for its being unitary ($U^*U =1$) and $1$ for its being represented by single complex numbers. Moreover, in the book he defines \begin{align} 1 =& \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} &i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \end{align} and shows that we end up with the same results as $SO(2)$. On the other hand, just like unit complex numbers, unit quaternions also form a group with ordinary multiplication. At this point he defines \begin{align}\label{asd} &1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} &i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} &j = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix} &k = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \end{align} and called this $SU(2)$, S denotes $det(U)=1$ and $U$ denotes $U^\dagger U=1$ and 2 denotes $2\times 2$ matrices. So, the question is: By the same logic we called unit complex numbers $U(1)$, shouldn't we say unit quaternions also $U(1)$, since they both unitary and represented by single number. Also, By the same logic we called $SU(2)$ to matrix representation of the unit quaternions, shouldn't we also say that matrix representation of unit complex numbers are $SU(2)$ AI: You are technically correct, but obviously this would cause a lot of confusion if you call the quaternion unitary group $U(1)$ as well. Note that if you say "represented by a single number", it matters if you mean real, complex or quaternion numbers. Therefore, the quaternion unitary group is usually denoted $U(1,\mathbb{H})$ or $Sp(1)$. As you correctly observed, it holds $SO(2) \cong U(1)$ and $SU(2) \cong U(1,\mathbb{H})$.
H: Solving $x^8 - x^5 + x^2 - x + 1 > 0$ over $\mathbb{R}$ I could not find any decent approach to solve this inequality. I would appreciate any help, and input if this is even possible to solve(without a computer). $$x^8-x^5+x^2-x+1>0$$ AI: Using the AM-GM inequality, we have $$x^8 + \frac{x^2}{2} \geqslant 2\sqrt{x^8 \cdot \frac{x^2}{2}} = \sqrt{2} \cdot |x^5| \geqslant x^5,$$ and $$1+\frac{x^2}{2} \geqslant 2\sqrt{\frac{x^2}{2}} = \sqrt 2 \cdot |x| \geqslant x$$ Therefore $$x^8+x^2+1> x^5 + x.$$ Done. Note. The SOS form $$x^{8}+x^{2}+1-x^{5}-x=\left(x^{4}-\frac{x}{2}\right)^{2}+\left(\frac{x}{2}-1\right)^{2}+\frac{x^{2}}{2}>0.$$
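Both the positivity and the SOS identity from the note can be spot-checked numerically (Python, standard library only; the sampling range is an arbitrary choice):

```python
import random

# Check that (x^4 - x/2)^2 + (x/2 - 1)^2 + x^2/2 agrees with
# x^8 - x^5 + x^2 - x + 1 and that the polynomial stays positive.
random.seed(1)
max_gap = 0.0
min_poly = float("inf")
for _ in range(2000):
    x = random.uniform(-2.0, 2.0)
    sos = (x**4 - x / 2) ** 2 + (x / 2 - 1) ** 2 + x**2 / 2
    poly = x**8 - x**5 + x**2 - x + 1
    max_gap = max(max_gap, abs(sos - poly))
    min_poly = min(min_poly, poly)
```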
H: A sequence $\{ x_{n}\}$ converges to $x \in (X, d)~$ if and only if $~\lim\limits_{n \to \infty} d(x_{n}, x) = 0$ $\blacksquare~$Problem: A sequence $\{ x_{n}\}$ converges to $x \in X~$ if and only if $$\lim_{n \to \infty} d(x_{n}, x) = 0$$ Where $(X, d)$ is a Metric Space. $\blacksquare~$My approach: $\bullet~$If case : Let's consider $x_n \rightarrow x$. Where $\{x_n\}$ $\in $ $ X ~$ and $~x $ $ \in $ $ X $. We know from the definition of convergence of a sequence $\{ x_n \}$ $ \in $ $ X $ to $x$ $\in $ $ X $, [where $( X , d ) $ is a metric space] gives us the following- For any given $\epsilon_n > 0, $ $\exists $ $ N $ $ \in $ $ \mathbb{ N } $ such that, \begin{align*} d ( x_n , x ) < \epsilon_n \quad \forall~ n \geqslant N \end{align*} Now we can pick our $\epsilon_n < \frac{1}{n} $. Therefore we get by the properties of a metric- \begin{align*} 0 \leqslant d ( x_n , x ) < \epsilon_n < \frac{1}{n} \end{align*} When we take $n \rightarrow \infty$ on both sides, we obtain: \begin{align*} 0 \leqslant \lim_{n \to \infty } d( x_n , x ) \leqslant 0 \end{align*} Therefore, by sandwich theorem we obtain that, $\lim\limits_{n \to \infty} d( x_n , x ) $ exists and equals 0. $\bullet~$Only if case: Let's consider $ \lim\limits_{n \to \infty} d ( x_n , x ) = 0 $. Therefore, from the definition of limit, we have, for any given $ \epsilon > 0,$ $ \exists$ $ N $ $ \in $ $ \mathbb{N} $, such that \begin{align*} d( x_n , x ) < \epsilon \quad \forall~ n \geqslant N \quad [ \text{As } d( a , b ) \geqslant 0~ \text{ for any } a, b \in X ] \end{align*} Therefore, we have obtained the result that, $ x_n \rightarrow x $ from the definition we have applied before . Hence, we have got our needed solution. Please check the solution for glitches :) AI: I think your proof is fine. 
The key point here is that $\lim_{n \to \infty} d(x_n,x) = 0$ is the same as $\forall \epsilon > 0 \; \exists N \in \mathbb{N}$ such that $n \geq N \implies d(x_n,x) < \epsilon,$ which you seem to have captured.
H: Prove that $p^n \nmid \left((p - 1)n\right)!$ for all primes $p$ Prove that $p^n \nmid \left((p - 1)n\right)!$ for all primes $p$ and positive integers $n$. First I am thinking maybe modular arithmetic will help (although I am not sure), and I don't know a quick and general proof of this. Can anyone help? AI: It is well-known that the maximal $k$ with $p^k\mid m!$ is given by $$ k=\left\lfloor\frac mp\right\rfloor +\left\lfloor\frac m{p^2}\right\rfloor +\left\lfloor\frac m{p^3}\right\rfloor+\cdots<\frac mp+\frac m{p^2}+\frac m{p^3}+\cdots=\frac m{p-1}$$ When $m=(p-1)n$, this implies $k<n$, as desired.
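Legendre's formula and the claim itself can be checked by brute force for small cases (Python, standard library only; the test ranges are arbitrary):

```python
# Legendre's formula v_p(m!) = sum_{j>=1} floor(m / p^j), checked against
# a direct count, plus the claim p^n does not divide ((p-1)n)!.
def legendre(m, p):
    k, pw = 0, p
    while pw <= m:
        k += m // pw
        pw *= p
    return k

def vp_factorial_direct(m, p):
    v = 0
    for i in range(2, m + 1):
        while i % p == 0:
            v += 1
            i //= p
    return v

ok_formula = all(legendre(m, p) == vp_factorial_direct(m, p)
                 for p in (2, 3, 5, 7) for m in range(1, 60))
ok_claim = all(legendre((p - 1) * n, p) < n
               for p in (2, 3, 5, 7, 11) for n in range(1, 30))
```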
H: Weak Topology and the induced topology Given a normed space $E$ with a subspace $M$, it is known that the weak topology on $M$ is the same as the induced topology of the weak topology on $E$. Why is this the case? From the Hahn-Banach theorem, we can extend the linear functionals on $M$ to $E$. So my intuition is that any element in the weak topology on $M$ is in the induced topology of the weak topology on $E$. But why does the other way also hold? I am not really clear how to work with a linear functional on $E$ which cannot be obtained by extending a linear functional on $M$. AI: The easiest way is to work with nets. A net $\{x_{\lambda}\}_{\lambda\in\Lambda}$ converges in the weak topology to $x$ if and only if $f(x_{\lambda})\to f(x)$ for every bounded linear functional $f$. Now suppose we have a net $(x_{\lambda})_{\lambda\in\Lambda}\subseteq M$ which converges to $x\in M$ in the weak topology on $M$. This means $f(x_{\lambda})\to f(x)$ for every $f\in M^*$. We want to prove that $x_{\lambda}\to x$ also in the induced topology from $E$. So let $f\in E^*$. Then $g:=f|_M\in M^*$ and hence $f(x_{\lambda})=g(x_{\lambda})\to g(x)=f(x)$. Conversely, suppose $(x_{\lambda})\subseteq M$ is a net which converges to $x\in M$ in the induced topology from $E$. This means we have $f(x_{\lambda})\to f(x)$ for every $f\in E^*$. We want to show $x_{\lambda}\to x$ in the weak topology on $M$. So let $f\in M^*$. By Hahn-Banach we can extend it to a bounded functional $F\in E^*$. Then by our assumption $f(x_{\lambda})=F(x_{\lambda})\to F(x)=f(x)$.
H: Number of conjugacy classes of maximal subgroups $G$ is a finite group. If $G$ has, say, $n$ conjugacy classes of maximal subgroups, can we say that each subgroup of $G$ has at most $n$ conjugacy classes of maximal subgroups? I tried some small groups, $S_4$ for example. Is it true for all finite groups? AI: No. For example, the group $C_p\wr C_p$, the Sylow $p$-subgroup of $S_{p^2}$, has $p+1$ maximal subgroups. It contains the group $C_p\times C_p\times C_p$, if $p\geq 3$, which has $p^2+p+1$ maximal subgroups.
H: How to transform $z$ into $\hat z$ Assume that we have vector $a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \in \mathbb R^n$, $b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \in \mathbb R^n$, $c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix} \in \mathbb R^n$. Then, the vector $z = \begin{bmatrix} a \\ b \\ c \end{bmatrix}\in \mathbb R^{3n}$. How to obtain the vector $\hat z = \begin{bmatrix} a \\ a \\ b \\ b \\ c \\ c \end{bmatrix}\in \mathbb R^{2\times 3n}$ based on $z$? The Kronecker product is not useful in this case, because it focuses on elements, instead of subvectors. AI: If $a,b,c$ were numbers, you could multiply the original vector $(a,b,c)^T$ by $$\begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}$$ Can you think of how to generalize when $a,b,c, \in \mathbb{R}^n$?
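A concrete sketch of the answer's generalization for $n=2$ (Python, plain lists, no external libraries; the vectors are arbitrary test data): the $6n \times 3n$ duplication matrix is block-diagonal with three copies of the stacked identity $\begin{pmatrix} I_n \\ I_n \end{pmatrix}$.

```python
n = 2
a, b, c = [1, 2], [3, 4], [5, 6]
z = a + b + c                        # z in R^{3n}, blocks stacked

# Build the rows of the duplication matrix: for each of the three blocks,
# emit the n identity rows twice.
D = []
for block in range(3):               # which of a, b, c
    for _ in range(2):               # each block appears twice
        for i in range(n):
            row = [0] * (3 * n)
            row[block * n + i] = 1
            D.append(row)

z_hat = [sum(row[j] * z[j] for j in range(3 * n)) for row in D]
```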
H: Global minimum for $\frac{2(q - 1)(q^k + 1)}{q^{k+1} + q - 1}$, if $q \geq 5$ and $k \geq 1$ Let $q$ be a prime number, and let $k$ be an integer. THE PROBLEM Does the function $$f(q,k) = \frac{2(q - 1)(q^k + 1)}{q^{k+1} + q - 1}$$ have a global minimum, if $q \geq 5$ and $k \geq 1$? MY ATTEMPT I tried asking WolframAlpha, it was unable to find a global minimum for $f(q,k)$ in the domain $q \geq 5$ and $k \geq 1$. I then computed the partial derivatives (still using WolframAlpha): Partial derivative with respect to $q$ $$\frac{\partial}{\partial q} f(q,k) = \frac{2q^{k-1}\bigg(q^{k+1} - k(q - 1) + q\bigg)}{\bigg(q^{k+1} + q - 1\bigg)^2} > 0$$ Partial derivative with respect to $k$ $$\frac{\partial}{\partial k} f(q,k) = -\frac{2(q-1){q^k}\log(q)}{\bigg(q^{k+1} + q - 1\bigg)^2} < 0.$$ Does this mean that we can have (say) $$f(q,k) \geq f(5,1) = \frac{48}{29} \approx 1.65517?$$ AI: $$\frac{2}{f(q,k)}=\frac{q^{k+1}+q-1}{(q-1)(q^k+1)}=1+\frac{1}{q-1}\left(1-\frac{1}{q^k+1}\right)$$ is strictly increasing in $k$, so it can't have a global maximum, hence $f(q,k)$ can't have a global minimum.
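The monotonicity behind the answer is easy to confirm with exact rational arithmetic (Python, standard library only; note that since $f$ is strictly decreasing in $k$, the corner value $f(5,1)=48/29$ is not a lower bound):

```python
from fractions import Fraction

def f(q, k):
    return Fraction(2 * (q - 1) * (q ** k + 1), q ** (k + 1) + q - 1)

corner = f(5, 1)  # = 48/29, as computed in the question
dec_in_k = all(f(5, k + 1) < f(5, k) for k in range(1, 20))
inc_in_q = all(f(b, 3) > f(a, 3) for a, b in [(5, 7), (7, 11), (11, 13)])
```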
H: Epsilon recursion and ZF-Inf+TC in the Inverse Ackermann Interpretation In the paper On Interpretations of Arithmetic and Set Theory of Kaye and Wong they write: Equipped with $\in$-induction, we obtain an inverse interpretation of PA in ZF−Inf*. The plan is to define a natural bijection $p: V \to \text{On}$ between the whole universe and the ordinals. The required interpretation can then be obtained by composing this map with the map $o$ defined on $\text{On}$. At first, it appears difficult to see how to use $\in$-induction at all, since the required inductive definition of $p$ is $p(x) = \sum_{y\in x} \, 2^{p(y)}$ and this seems to need a separate induction on the cardinality of $x$ — just the sort of induction we don’t yet have and are trying to justify. However, there is a way round this problem using ordinal summation. Here ZF−Inf* is ZF plus the negation of Inf in addition to the axiom TC, which says that every set is contained in a transitive closure. I don't understand the point the authors are trying to make here. Specifically: Can't we just define $p(x) = \sum_{y\in x} \, 2^{p(y)}$ using $\in$-induction in ZF-Inf*? What do they mean by "this seems to need a separate induction on the cardinality of x — just the sort of induction we don’t yet have and are trying to justify."? AI: How can you define the summation over $x$? We have not yet defined the notion of a summation over a given set, even when the set is finite. We are led to the definition of Kaye and Wong precisely when we try to define the summation over $x$ formally. Kaye and Wong defined their sum by using set induction on $x$. They suggested that one might instead try to define such a sum over a finite set $x$ by induction on the size of $x$, but that is exactly the sort of induction that is not yet available. (They did not follow that route.) Our usual sum notation $\sum_{i=1}^n a_i$ only depends on the number of terms, not on the index set, which could be the reason why they make this remark.
H: Find what the span of 2 linearly independent vectors is I have been trying assignment questions of linear algebra and I am unable to solve this particular question Let $x=\left(x_{1}, x_{2}, x_{3}\right), y=\left(y_{1}, y_{2}, y_{3}\right) \in \mathbb{R}^{3}$ be linearly independent. Let $\delta_{1}=x_{2} y_{3}-y_{2} x_{3}, \delta_{2}=x_{1} y_{3}-y_{1} x_{3}$ $\delta_{3}=x_{1} y_{2}-y_{1} x_{2} .$ If $V$ is the span of $x, y$ then $V=\left\{(u, v, w): \delta_{1} u-\delta_{2} v+\delta_{3} w=0\right\}$ $V=\left\{(u, v, w):-\delta_{1} u+\delta_{2} v+\delta_{3} w=0\right\}$ $V=\left\{(u, v, w): \delta_{1} u+\delta_{2} v-\delta_{3} w=0\right\}$ $V=\left\{(u, v, w): \delta_{1} u+\delta_{2} v+\delta_{3} w=0\right\}$ I know the definitions of Linearly Independent and Span of vectors but I don't know how to solve this problem due to $\delta_{1}$, $\delta_{2}$,$\delta_{3}$ as I am unable to write V in terms of $\delta_{i}$'s and $(u, v, w)$. Any help will be really appreciated . AI: Given, $x=(x_1,x_2,x_3), y=(y_1,y_2,y_3) \in \mathbb{R}^3 $ are linearly independent, and $V$ is the span of these two vectors. Every $(u,v,w)\in V $ is a linear combination of $x$ and $y$, so the three vectors $x$, $y$, $(u,v,w)$ are linearly dependent. That means for each $(u,v,w)\in V $, $$ \begin{vmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ u & v & w \\ \end{vmatrix} =0 $$ $\implies u(x_2y_3-y_2x_3)-v(x_1y_3-y_1x_3)+w(x_1y_2-x_2y_1) = 0 \implies \delta_{1} u-\delta_{2} v+\delta_{3} w=0 $ Conversely, the solution set of this single linear equation is a two-dimensional subspace (since $(\delta_1,-\delta_2,\delta_3)\neq 0$ by the independence of $x,y$) containing both $x$ and $y$, so it coincides with $V$. Hence the first option is correct.
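A small numerical experiment (my own illustration, with an arbitrary concrete pair $x,y$) supports the first option: every combination $\alpha x+\beta y$ satisfies $\delta_1 u-\delta_2 v+\delta_3 w=0$, while the normal direction $(\delta_1,-\delta_2,\delta_3)$ itself does not.

```python
import random

x = (1.0, 2.0, -1.0)   # arbitrary sample pair; they are linearly independent
y = (0.5, -1.0, 3.0)

d1 = x[1]*y[2] - y[1]*x[2]   # delta_1
d2 = x[0]*y[2] - y[0]*x[2]   # delta_2
d3 = x[0]*y[1] - y[0]*x[1]   # delta_3

def eq(u, v, w):
    """The candidate equation delta1*u - delta2*v + delta3*w."""
    return d1*u - d2*v + d3*w

# every vector in span{x, y} satisfies the equation (up to rounding)
for _ in range(100):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    p = tuple(a*xi + b*yi for xi, yi in zip(x, y))
    assert abs(eq(*p)) < 1e-9

# while the normal vector (d1, -d2, d3) is NOT in the span:
# eq(d1, -d2, d3) = d1^2 + d2^2 + d3^2 > 0
assert abs(eq(d1, -d2, d3)) > 1e-9
```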
H: Need help in finding limit. I have solved the limit of an expression like this. Solution But the answer is 1/2 by using L'Hôpital's rule. Why am I wrong? AI: The pieces of a function in a limit have to go to their limits together. When you replace $\ln(1+x)/x$ by $1$, then you're letting one bit run on ahead of the other bits. You don't get to do this: $$e = \lim_{x \to \infty} \left(1+\frac{1}{x}\right)^x = \lim_{x \to \infty} 1^x = 1.$$ Both occurrences of $x$ have to go to infinity at the same time.
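The same failure is easy to see numerically for the example above: evaluating $(1+1/x)^x$ for large $x$ approaches $e$, not $1$, even though the inner $1/x$ alone goes to $0$.

```python
import math

# The flawed move: replacing (1 + 1/x)^x by 1^x "because 1/x -> 0".
# Evaluating at increasingly large x shows the expression tends to e, not 1.
for x in (10, 1_000, 100_000):
    print(x, (1 + 1/x)**x)

# at x = 100000 the value is already within about 0.001 % of e
assert abs((1 + 1/100_000)**100_000 - math.e) < 1e-3
```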
H: Paracompact Hausdorff Space with Dense Lindelöf subset is Lindelöf Let $X$ be a paracompact Hausdorff space with a dense subset $A$ which is Lindelöf. Then, $X$ is Lindelöf. I've written down my attempt below - As per the hint in the problem, as a paracompact $T_2$ space is regular, all I have to do is show that every open cover of $X$ has a countable subcollection whose closures cover. So, for any open cover $\{U_\alpha\}$ of $X$, we get an open cover $\{V_\alpha\}$ of $A$, where $V_\alpha = A \cap U_\alpha$. As $A$ is Lindelöf, we can thus get a countable subcollection $\{V_{\alpha_i}:i\in \mathbb{N}\}$, such that $\bigcup\limits_{i=1}^{\infty} \overline V_{\!\!\alpha_i} = A$. So, I believe now that we will get $\bigcup\limits_{i=1}^{\infty} \overline U_{\!\!\alpha_i} = X$, thus showing $X$ is Lindelöf. But, this is the part I'm stuck at. Somehow, we have to use the fact that $A$ is dense, but I just can't figure it out. Any help in solving this is appreciated! AI: First make a boring observation: Lemma: any locally finite family of subsets of a Lindelöf space $X$ is at most countable. Proof: for every $x \in X$, pick $O_x$ witnessing the local finiteness, and since this cover has a countable subcover, the original family of subsets is also at most countable. Let $\mathcal{U}=\{U_i: i \in I\}$ be an open cover, and let $\mathcal{V}=\{V_i: i \in I\}$ be a locally finite open refinement of it, so that $\overline{V_i} \subseteq U_i$ for all $i$ (it is a standard fact that this can be done in paracompact Hausdorff spaces). As $A$ is Lindelöf, $\{V_i \cap A: i \in I\}$ is at most countable, i.e. only countably many indices $i$ satisfy $V_i \cap A \neq \emptyset$; since $A$ is dense and each $V_i$ is open, these are exactly the indices with $V_i \neq \emptyset$. It is clear that $X$ is covered by the corresponding $U_i$ and we have a countable subcover.
H: prove that if $|f(z)|\geq |z|+|\sin(z)|$ then it cannot be an entire function Problem: Prove that if $\forall z \in \mathbb{C}.|f(z)|\geq |z|+|\sin(z)|$ then $f$ cannot be an entire function. I thought about claiming that $f$ must be a polynomial because it has a pole at infinity, but I am stuck on why a polynomial cannot satisfy this property. AI: Assume $f$ is entire: $|f(z)|\geq |z|+|\sin(z)| \ge |z|$ implies $f(z) \to \infty$ as $z \to \infty$, hence $f$ is a polynomial of degree $n \ge 1$. But then if $f=\sum_{k=0}^n a_kz^k$ and $M=\max |a_k|$, we have that $| f(iR)| \le M(n+1)R^n$ for $R >1$. On the other hand $2|\sin (iR)|=|e^{-R}-e^{R}| \ge e^R-1$, so one gets the inequality $e^R-1 \le 2M(n+1)R^n$ for all $R>1$, where $M,n$ are fixed. That is plainly impossible since $e^R/R^n \to \infty$ as $R \to \infty$, so we get a contradiction!
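The final inequality is easy to probe numerically (an illustration, not part of the proof); the point is that $|\sin(iR)|=\sinh R$ grows like $e^R/2$, which swamps any fixed power $R^n$. The values $M=10$, $n=3$ below are arbitrary samples.

```python
import cmath, math

R = 20.0
# |sin(iR)| equals sinh(R), which grows like e^R / 2
assert abs(abs(cmath.sin(1j * R)) - math.sinh(R)) < 1e-9 * math.sinh(R)

# so the inequality e^R - 1 <= 2*M*(n+1)*R^n must fail for large R;
# e.g. with the (arbitrary) sample values M = 10, n = 3 it fails at R = 20:
M, n = 10, 3
print(math.exp(R) - 1, 2 * M * (n + 1) * R**n)   # ~4.85e8 vs 6.4e5
assert math.exp(R) - 1 > 2 * M * (n + 1) * R**n
```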
H: Is there a formula for generating all positive integers that cannot be written as a linear combination over nonnegative integers? Let $a,b \in\mathbb{Z}_+$ such that $a \leq b$. We know that when $a$ and $b$ are coprime, then the largest integer that cannot be written as $am+bn$ for some nonnegative integers $m$ and $n$ not both zero is $ab-a-b$ (this number is typically called the Frobenius or McNugget number). Does anyone happen to know if there is a formula for generating all positive integers $< ab-a-b$ that cannot be expressed as $am+bn$, where $m$ and $n$ are nonnegative integers not both zero, assuming that they exist? Thanks! AI: Using Bezout's identity, you find integers $u,v$ with $ua+vb=1$. Then all ways to write $n$ as a linear combination of $a,b$ are of the form $$ n=(nu-kb)a+(nv+ka)b.$$ So the question is whether there exists $k\in\Bbb Z$ such that both $nu-kb\ge0$ and $nv+ka\ge0$, i.e., $$ -\frac{nv}{a}\le k\le \frac{nu}{b}.$$ In other words, you are looking for those $n$ with $$ \left\lceil-\frac{nv}{a}\right\rceil>\left\lfloor\frac{nu}{b}\right\rfloor$$
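A brute-force cross-check of this criterion (my own sketch; `egcd` and the helper names are illustrative). With the convention $ua+vb=1$, $n$ is non-representable exactly when $\lceil -nv/a\rceil > \lfloor nu/b\rfloor$:

```python
from math import gcd

def egcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def nonrep_bruteforce(a, b):
    """All positive n <= ab-a-b not expressible as a*m + b*k with m, k >= 0."""
    F = a*b - a - b
    reachable = {a*m + b*k for m in range(F // a + 2) for k in range(F // b + 2)}
    return [n for n in range(1, F + 1) if n not in reachable]

def nonrep_formula(a, b):
    """Same set via the criterion ceil(-nv/a) > floor(nu/b), where ua+vb=1."""
    g, u, v = egcd(a, b)
    assert g == 1
    F = a*b - a - b
    # n = (nu - kb)a + (nv + ka)b has both coefficients >= 0 iff -nv/a <= k <= nu/b
    return [n for n in range(1, F + 1)
            if -((n * v) // a) > (n * u) // b]   # ceil(-nv/a) > floor(nu/b)

for a, b in [(3, 5), (4, 7), (5, 9), (7, 11)]:
    assert gcd(a, b) == 1
    assert nonrep_bruteforce(a, b) == nonrep_formula(a, b)

print(nonrep_bruteforce(3, 5))   # [1, 2, 4, 7]
```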
H: Prove $\sum_{n=0}^{\infty} \frac{\Gamma(n+(1/2))}{4^n(2n+1)\Gamma(n+1)}=\frac{\pi^{3/2}}{3}$ Prove $$\sum_{n=0}^{\infty} \frac{\Gamma\left(n+\frac{1}{2}\right)}{4^n\left(2n+1\right)\Gamma\left(n+1\right)}=\frac{\pi^{\frac{3}{2}}}{3}$$ The original sum is multiplied by $\frac{\sqrt{\pi}}{2}$ and so it equals $\frac{\pi^2}{6}$ but I pulled the constant out because the actual series troubles me. I don't know how to evaluate this. I think maybe the Gammas and $4^n$ simplify and leave some constant divided by $2n+1$, which is the familiar arctan series. Wolfram can't help simplify it, just compute it. Any help please? AI: We use the Taylor series for $\arcsin$. Begin with $$ (1-x^2)^{-1/2} = \sum_{k=0}^\infty \binom{-1/2}{k} (-1)^k x^{2k} $$ Integrate term-by-term $$ \arcsin(x) = \sum_{k=0}^\infty\binom{-1/2}{k}\frac{(-1)^k\;x^{2k+1}}{2k+1} $$ Prove (by induction) that $$ \binom{-1/2}{k} = \frac{(-1)^{k}\;\Gamma(\frac12+k)}{\sqrt{\pi}\; k!} $$ Thus $$ \arcsin(x) = \sum_{k=0}^\infty\frac{x^{2k+1}\Gamma(\frac12+k)}{\sqrt{\pi}(2k+1)k!} $$ Plug in $x=1/2$ to get $$ \arcsin \frac12 = \frac{1}{2\sqrt{\pi}}\sum_{k=0}^\infty\frac{\Gamma(\frac12+k)}{4^k(2k+1)k!} $$ Finally, $\arcsin\frac12 = \frac{\pi}{6}$. $$ \frac{\pi}{6} = \frac{1}{2\sqrt{\pi}}\sum_{k=0}^\infty\frac{\Gamma(\frac12+k)}{4^k(2k+1)k!} \\ \frac{\pi^{3/2}}{3} = \sum_{k=0}^\infty\frac{\Gamma(\frac12+k)}{4^k(2k+1)k!} $$
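The identity is easy to check numerically (a quick sketch; the term is updated iteratively via its ratio, $g_{k+1}=g_k\cdot\frac{2k+1}{8k+8}$ with $g_k=\Gamma(k+\frac12)/(4^k k!)$, to avoid overflowing `math.gamma` at large $k$):

```python
import math

# term_k = Gamma(k + 1/2) / (4^k * (2k+1) * k!), computed via
# g_k = Gamma(k + 1/2) / (4^k * k!)  with  g_{k+1} = g_k * (2k+1) / (8k+8)
g = math.sqrt(math.pi)          # g_0 = Gamma(1/2)
total = 0.0
for k in range(200):            # term ratio -> 1/4, so convergence is geometric
    total += g / (2*k + 1)
    g *= (2*k + 1) / (8*k + 8)

print(total, math.pi**1.5 / 3)
assert abs(total - math.pi**1.5 / 3) < 1e-12
```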
H: Canadian Mathematical Olympiad 1987, Problem 4 On a large flat field, $n$ people $(n>1)$ are positioned so that for each person the distances to all the other people are different. Each person holds a water pistol and at a given signal fires and hits the person who is closest. When $n$ is odd, show that there is at least one person left dry. This question is a variant of the question I am asking but I am not using induction in my approach. My Approach: Our primary goal is to ensure that no person remains dry. When a total of $k$ people are present ($k$ is odd), it is evident that if no one remains dry, then a closed chain must have been formed when considering the order of firing. (Since the pairing doesn't change the parity, atleast one dry person will remain in the end) WLOG, let $P_1$ attack $P_2$, $P_2$ attack $P_3$, $P_3$ attack $P_4$ and so on till $P_{k-1}$ attack $P_k$ and $P_k$ attack $P_1$ Let us denote the distance between $P_i$ and $P_j$ as $i_j$ or $j_i$ Now $2_3<2_1$ since $P_2$ attacks $P_3$, thus $2_3<1_2$. Similarly $3_4<3_2$ since $P_3$ attack $P_4$, thus $3_4<2_3<1_2$ $\therefore $ In the end, we get $k_1<(k-1)_k<(k-2)_{k-1}<\ldots<3_4<2_3<1_2$ From this we can see $k_1=1_k<1_2$ which implies that $P_1$ must have attacked $P_k$ instead of $P_2$ which is a contradiction. This means that $P_1$ and $P_k$ attacks each other while $P_2$ attacks $P_3$, $P_3$ attacks $P_4$ and so on till $P_{k-1}$ attacks $P_k$ hence leaving an open chain where $P_2$ remains dry. It can be observed that any pairing will result in an open chain consisting the pair if any of the remaining person attacks a person from the pair. If none of the remaining persons attack any person from pair, then the pair can be isolated and similar argument can be used for remaining $(k-2)$ people. $\therefore $ We will always get an open chain if the number of persons are odd which means that atleast one person will remain dry. Please check my solution for any mistakes. 
Also please suggest any improvements in the solution. THANKS AI: As stated in the comments: The argument as written is not correct. The initial assumption, that no pair fires on each other, is not possible. The two people $A,B$ at minimal distance from each other must fire at each other. (of course the case where there is only one person is trivial). Two ways to solve the problem: Method I: consider that minimal pair $A,B$. We distinguish two cases (according to whether anyone else shoots at either $A$ or $B$). Since the case $n=1$ is trivial it makes sense to proceed by induction. Let's assume we have a counterexample with minimal $n$ (we will derive a contradiction). If nobody else shoots at $A,B$ then we can ignore that pair and focus on the $n-2$ remaining people. By the induction hypothesis, at least one of those stays dry and we are done. If somebody else, $C$ say, shoots at one of them, say $A$, then at least two people shoot at $A$. It follows that the map $F: \{1,\cdots, n\}\to \{1,\cdots,n\}$ which maps the $i^{th}$ person to their target is not injective. Thus it can't be surjective and again we are done. Method II (sketch). Suppose we had a collection with odd $n$ in which nobody stayed dry. Then consider the shooting pattern. Since it must be the case that everyone shoots at (and is shot at by) a unique person, the collection must break up into distinct closed loops. These can't all have length $2$ since the collection is odd. There must, in fact, be an odd loop of length $>2$. But consider the members of that loop. There must be a minimal distance between any two members of that loop and, as before, we quickly see that those two people can't shoot at anyone else in that loop. Thus the loop is not possible, and we are done.
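The statement is also easy to probe empirically. A small Monte-Carlo sketch in Python (my own illustration; random points in the unit square have pairwise-distinct distances almost surely, so the hypothesis holds in practice):

```python
import random

def dry_people(points):
    """Everyone fires at their nearest neighbour; return indices of those nobody hit."""
    n = len(points)
    hit = set()
    for i, (x, y) in enumerate(points):
        target = min((j for j in range(n) if j != i),
                     key=lambda j: (points[j][0] - x)**2 + (points[j][1] - y)**2)
        hit.add(target)
    return [i for i in range(n) if i not in hit]

random.seed(1)
for _ in range(200):
    n = random.choice([3, 5, 7, 9, 11])
    pts = [(random.random(), random.random()) for _ in range(n)]
    assert dry_people(pts)      # for odd n, someone always stays dry
```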
H: How to find the generators of a principal Ideal? Suppose I have $\mathbb{Z}/24\mathbb{Z}$ and $I = \{0, 3, 6, 9, 12, 15, 18, 21\}$. $I$ is a principal ideal. Is there a method to find ALL the generators ? Thanks in advance ! AI: Hands-on approach: in the additive group of $\mathbb{Z}/24\mathbb{Z}$, the ideal generated by $a$ is the additive subgroup generated by $a$, so its order equals the additive order of $a$. In this case, $|I| = 8$, so for $a\in I$ we have $I = (a) \iff n\cdot a\bmod 24 \neq 0$ for $n = 1,\dots,7$. Since elements of $I$ are of the form $m\cdot 3$, this amounts to finding $m = 1,\dots,7$ such that $n\cdot m \bmod 8 \neq 0$ for $n = 1,\dots,7$: $$n\cdot m \bmod 8 = 0 \iff (n,m) \in \{(4,2),(2,4),(4,4),(6,4),(4,6)\}$$ So $6,12,18$ don't generate $I$, and $3,9,15,21$ generate it.
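A brute-force enumeration (a quick sketch) confirms this: the ideal generated by $a$ in $\mathbb{Z}/24\mathbb{Z}$ is just the set of multiples of $a$, and exactly the elements $3m$ with $\gcd(m,8)=1$ generate $I$.

```python
# brute-force: which a in Z/24Z generate the ideal I = (3)?
I = {(3 * m) % 24 for m in range(8)}          # {0, 3, 6, ..., 21}

def generated(a, n=24):
    """The ideal (= additive subgroup) generated by a in Z/nZ."""
    return {(a * k) % n for k in range(n)}

generators = sorted(a for a in I if generated(a) == I)
print(generators)      # [3, 9, 15, 21]: exactly the 3m with gcd(m, 8) = 1
assert generators == [3, 9, 15, 21]
```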
H: If $15$ distinct integers are chosen from the set $\{1, 2, \dots, 45 \}$, some two of them differ by $1, 3$ or $4$. $\blacksquare~$ Problem: If $15$ distinct integers are chosen from the set $\{1, 2, \dots, 45 \}$, some two of them differ by $1, 3$ or $4$. $\blacksquare~$ My Approach: Let the minimum element chosen be $n$. Then $n + 1 , n + 3 , n + 4 $ can't be taken. We make a small claim. $\bullet~$ Claim: In a set of $~7$ consecutive numbers at most $2$ numbers can be chosen. $\bullet~$ $\textbf{Proof:}$ Let us name the elements of the set as $\{ 1,2,3,\dots,7 \}$. Now let's consider the least element is chosen. If the least element is $1$, then $2,4,5$ can't be chosen. So we are left with $3, 6, 7$. $\circ~$ If $~3~$ is chosen, then $6, 7$ can't be in the set. And if $~3~$ is not chosen, then only any one of the 2 elements $\{ 6, 7 \}$ be chosen. So, a maximum of $2$ elements can be chosen in this case. $\circ \circ~$ If the least element is $2,$ then $3, 5, 6$ can't be there in the set. So, possible elements are 4, 7. So, one of these two can be chosen. Then, a maximum of 2 elements can be chosen in this case. $\circ \circ~$ If the least element is $3$, then $4,6,7$ gets cancelled. so only $5$ is left in the set i.e., $2$ elements at most. $\circ \circ~$ If the least element is $4,$ then $5,7$ gets cancelled. So the only element left is $6$. Similarly, $\circ \circ~$ If $5$ is the least element then $6$ gets cancelled and only $7$ is left. i.e., two elements. If the least element is either $6$ or $7$, then there is only one element. So a maximum of two elements in a set of $7$ consecutive elements can be chosen. Hence, the proof of the claim is done! So, for $42$ elements, a maximum of $2 \times 6 = 12$ can be taken. However, $3$ more elements are required from $3$ more consecutive elements, which is not possible since only 2 elements at most can be chosen from a set of 3 consecutive elements. 
So, a $14$ element subset can be formed such that, no two of them differ by $1, 3, 4$. Hence the $15$th element is one of the cancelled elements, that is, there exists a pair with their difference being $1, 3$ or $4.$ Hence, done! Please check the solution for glitches and give new ideas too :). AI: It is a nice argument, and you have explained it very clearly, so well done. If you are looking for improvements, and you want it to be a bit more formal, I would make two suggestions: You are very thorough proving the claim, going through all the cases, which is great. However you could shorten the proof of the claim by saying: "If the least element is $x$, then we may subtract $x-1$ from all the numbers without changing their differences or the number of integers. Thus without loss of generality we may assume the least element is $1$." Then you do not need to consider the other cases. The last part of the proof (after the claim is proved) is not as thorough as the proof of the claim. You assume that for $k=0,\cdots,5$ the numbers $7k+1,7k+3$ are selected, without justifying that this is optimal. It is obvious in a way, but for a formal proof you should justify this by saying something like: "Pick the smallest $k$ where $7k+1, 7k+3$ are not picked. Then replace the numbers in $\{7k+1,\cdots 7k+7\}$ that are picked with $7k+1, 7k+3$. From the claim we know that we have not reduced the number of integers. Also, we have not created any new differences of $1,3,4$." Then finish the argument with your second to last paragraph. I would reword the first sentence slightly: "So of the first $42$ elements, we may assume these $12$ are picked." I would lose the last paragraph as it is not needed.
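The two key counts in the argument, at most $2$ from any $7$ consecutive integers and a maximum of $14$ from $\{1,\dots,45\}$, can be confirmed by a small dynamic program (an independent brute-force check, not part of the proof):

```python
def max_subset(N):
    """Largest subset of {1,...,N} in which no two elements differ by 1, 3 or 4.
    DP over positions; the state records which of the last 4 numbers were chosen."""
    best = {(False, False, False, False): 0}
    for _ in range(N):
        new = {}
        for state, size in best.items():
            for pick in (False, True):
                # choosing the current number conflicts with picks 1, 3 or 4 back
                if pick and (state[-1] or state[-3] or state[-4]):
                    continue
                ns = state[1:] + (pick,)
                new[ns] = max(new.get(ns, -1), size + pick)
        best = new
    return max(best.values())

assert max_subset(7) == 2     # the claim: at most 2 from any 7 consecutive integers
assert max_subset(45) == 14   # so any 15 chosen from {1,...,45} contain a bad pair
print(max_subset(7), max_subset(45))
```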
H: Derivative of $a^{T}Xb$ with respect to b, where $a, b$ is a $d$-dim vector and $X$ is a $d\times d$ matrix Since the derivative of $a^{T}Xb$ with respect to a is $Xb$, I was wondering how do I solve the derivative of $a^{T}Xb$ with respect to $b$? AI: It's easy to get confused, due to a lack of clear and uniform conventions about what "the derivative" of a multivariate function $f(b)$ should mean. One possibility is the gradient, defined to be the vector $\nabla f$ so that, for all vectors $\delta b$ representing a change in $b$, $\langle \nabla f, \delta b\rangle$ gives the directional derivative in the $\delta b$ direction: $$\langle \nabla f, \delta b\rangle = \lim_{t\to 0} \frac{d}{dt} f(b + t\delta b).$$ The right-hand side in your case is $a^TX\delta b$, so $\nabla f = X^Ta$. Another possibility is the Jacobian/differential/push-forward $Jf$, which is the linear map from an infinitesimal change $\delta b$ in $b$ to an infinitesimal change in $f$. It's defined by $$[J f]\delta b = \lim_{t\to 0} \frac{d}{dt} f(b + t\delta b)$$ for all $\delta b$; here we see that for your $f$ the Jacobian is the row vector $$Jf = a^TX.$$ Unfortunately it's very common (especially in applied math) to conflate the gradient and Jacobian (and don't even get me started about the hodge-podge of conventions that surround differentiation with respect to matrix variables), and you have to infer from context what kind of object you need.
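A finite-difference check (an illustration with made-up random data) confirms the answer: the gradient of $f(b)=a^TXb$ with respect to $b$ is $X^Ta$, whose entries are those of the row vector $a^TX$.

```python
import random

random.seed(0)
d = 4
a = [random.gauss(0, 1) for _ in range(d)]
b = [random.gauss(0, 1) for _ in range(d)]
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]

def f(bv):
    """f(b) = a^T X b"""
    return sum(a[i] * X[i][j] * bv[j] for i in range(d) for j in range(d))

# finite-difference gradient of f with respect to b
eps = 1e-6
num_grad = []
for j in range(d):
    bp = b[:]; bp[j] += eps
    bm = b[:]; bm[j] -= eps
    num_grad.append((f(bp) - f(bm)) / (2 * eps))

# analytic gradient X^T a (equivalently, the entries of the row vector a^T X)
analytic = [sum(a[i] * X[i][j] for i in range(d)) for j in range(d)]
assert all(abs(g - h) < 1e-6 for g, h in zip(num_grad, analytic))
```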
H: Evaluating 8th derivative of $(e^x-1)^6$ at $x=0$ 8 distinct objects are distributed into 7 distinct boxes. Find the number of ways in which these objects can be distributed to exactly 6 boxes. I have solved this question quite easily using method of division and distribution ${7 \choose 6}(\frac{8! \times 6!}{5!\times 3!} + \frac{8! \times 6!}{4! \times (2!)^2 \times 2!})$, And also using principle of inclusion and exclusion ${7 \choose 6}(\sum_{r=1}^{6} {6 \choose r} r^8 (-1)^r)$ so please do not solve this question. The question in the title: Evaluating 8th derivative of $(e^x-1)^6$ at $x=0$ arose as I was thinking of evaluating the expression (I have excluded the ${7 \choose 6}$ term as that doesn't affect my question) obtained in the method using principle of inclusion and exclusion using generating functions, as the calculations were getting a bit too messy. I'm also unable to evaluate this expression easily. Is there a way we can evaluate this expression easily? (Preferably without any calculator). I deduced that the $(e^x-1)$ factor must be differentiated 6 times, or else at $x=0$ it would become 0. However I'm not able to proceed; the additional $e^x$ factors from the chain rule are making it still tedious for me. I have verified using Wolfram alpha that all the three expressions give the same answer $266\times 7!$. AI: We need to find the $x^8$ coefficient of $(e^x-1)^6$. This is the $x^2$ coefficient of $[(e^x-1)/x]^6$. But $$\frac{e^x-1}x=1+\frac x2+\frac{x^2}6+\text{higher terms}.$$ So we need to find the $x^2$ term of $$\left(1+\frac x2+\frac{x^2}6\right)^6.$$ But $$(1+ax+bx^2)^n=1+nax+\left(nb+\binom n2a^2\right)x^2+\text{higher terms}.$$ So $$\left(1+\frac x2+\frac{x^2}6\right)^6 =1+3x+\left(1+\frac{15}4\right)x^2+\text{higher terms}.$$ So the $x^8$-coefficient of $(e^x-1)^6$ is $19/4$. So your eighth derivative is $8!$ times this, which is $38\times 7!=266\times 6!$.
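The count can be cross-checked numerically (a sketch using only the standard library): the 8th derivative of $(e^x-1)^6$ at $0$ equals $6!\,S(8,6)$, the number of surjections from an 8-set onto a 6-set, which inclusion-exclusion gives directly.

```python
from math import comb, factorial
from fractions import Fraction

# f(x) = (e^x - 1)^6.  Its 8th derivative at 0 is 8! * [x^8](e^x - 1)^6,
# which counts the surjections from an 8-set onto a 6-set; inclusion-exclusion:
surjections = sum((-1)**r * comb(6, r) * (6 - r)**8 for r in range(7))
print(surjections)                                   # 191520

assert surjections == 38 * factorial(7) == 266 * factorial(6)

# cross-check the series computation: [x^8] (e^x - 1)^6 = 19/4
assert Fraction(surjections, factorial(8)) == Fraction(19, 4)
```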
H: Find the bilinear transformation which maps $z=(1, i, -1)$ respectively into $(w=i, 0, -i)$ Find the bilinear transformation which maps $z=(1, i, -1)$ respectively into $w =(i, 0, -i)$ My try: Here, $w_1=i$, $w_2=0$, $w_3=-i$, $z_1=1$, $z_2=i$, $z_3=-1$ $\text{As the formula states,}$ $$\begin{align} \frac{(w-w_1)(w_2-w_3)}{(w-w_3)(w_2-w_1)}&=\frac{(z-z_1)(z_2-z_3)}{(z-z_3)(z_2-z_1)}\\ \implies\frac{(w-i)(0+i)}{(w+i)(0-i)}&=\frac{(z-1)(i+1)}{(z+1)(i-1)}\\ \implies\frac{(w-i)i}{(w+i)(-i)}&=\frac{(z-1)(i+1)^2}{(z+1)(-2)}\\ \implies-\frac{w-i}{w+i}&=\frac{(z-1)2i}{(z+1)(-2)}\\ \implies\frac{w-i}{w+i}&=\frac{(z-1)i}{z+1}\\ \implies\frac{(w-i)+(w+i)}{(w-i)-(w+i)}&=\frac{i(z-1)+(z+1)}{i(z-1)-(z+1)}\\ \implies\frac{2w}{-2i}&=\frac{i(z-1)+(z+1)}{i(z-1)-(z+1)}\\ \implies w&=-i\,\frac{i(z-1)+(z+1)}{i(z-1)-(z+1)}\\ \implies w&=\frac{(1-i)z-(1+i)}{(i-1)z-(i+1)} \end{align}$$ Now, I can't understand how to shape this equation like $w=\frac{az+b}{cz+d}$. Please help. AI: Your last expression already has the required shape; multiplying numerator and denominator by $-1$ gives $$w=\dfrac{(i-1)z+(i+1)}{(1-i)z+(1+i)},$$ so $a=i-1$, $b=i+1$, $c=1-i$, $d=1+i$. (Check: $z=1\mapsto i$, $z=i\mapsto 0$, $z=-1\mapsto -i$.) It is important to remember that here $a,b,c,d$ are complex numbers.
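One can verify numerically (a quick sketch using Python's built-in complex arithmetic) that the map $w=\frac{(i-1)z+(i+1)}{(1-i)z+(1+i)}$ does send $(1, i, -1)$ to $(i, 0, -i)$:

```python
# Verify the Mobius transformation w = ((i-1)z + (i+1)) / ((1-i)z + (1+i))
# sends 1 -> i, i -> 0, -1 -> -i.  (No libraries needed; 1j is Python's i.)
def w(z):
    return ((1j - 1)*z + (1j + 1)) / ((1 - 1j)*z + (1 + 1j))

assert abs(w(1) - 1j) < 1e-12    # 1  -> i
assert abs(w(1j)) < 1e-12        # i  -> 0
assert abs(w(-1) + 1j) < 1e-12   # -1 -> -i
```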