H: Solve the equation, for $p$ prime, $x^{2p}- x^p= [6]$, in $\mathbb {Z}_p$ According to the title, the equation is: $$x^{2p}- x^p=[6]$$ for $p$ prime, in $\mathbb {Z}_p$. It's known that $a^p \equiv a \pmod p$. So, if the equation were $a^p -a=[6]$, then $a=6$. I tried to make some manipulations using this theorem as a tool, but I am lost here. I can't figure this out after a lot of time thinking; what else could I do? AI: As you know $x^p \equiv x \pmod{p}$, the equation you want to solve is equivalent to: $$x^2 - x \equiv 6 \pmod{p}$$ The quadratic polynomial factors: $x^2 -x -6 = (x+2)(x-3)$. So does that get you there?
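A quick brute-force check of the hint (my addition, not part of the original exchange): since $\mathbb{Z}_p$ is a field, $(x+2)(x-3)=[0]$ forces $x=[3]$ or $x=[-2]$, and the two roots coincide when $p=5$.

```python
# Verify that x^{2p} - x^p = 6 in Z_p has exactly the roots 3 and -2 (= p - 2).
for p in [2, 3, 5, 7, 11, 13]:
    sols = [x for x in range(p) if (pow(x, 2 * p, p) - pow(x, p, p)) % p == 6 % p]
    print(p, sols, sorted({3 % p, -2 % p}))
```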
H: Factorisation of $x^{4}-x^{2}+2x+1$ in $\mathbb Q[x]$? Hello, can anyone tell me what the idea behind this is? The polynomial $p=x^{4}-x^{2}+2x+1 \in \mathbb Q[x]$ is irreducible, because in $F_{2}[x]$ it can be factored as $(x^2+x+1)^2$ and in $F_{3}[x]$ it can be factored as $(x-1)(x^3+x^2-1)$. Thank you in advance. AI: Suppose that $p$ were reducible in $\mathbb{Q}[x]$; that is, suppose that $p=fg$, where $f,g\in\mathbb{Q}[x]$ are some monic, non-constant polynomials (in fact, $f$ and $g$ would have to be in $\mathbb{Z}[x]$ by Gauss's lemma). For a monic polynomial $q\in\mathbb{Z}[x]$, let $\tilde{q}$ denote its reduction modulo $2$ to an element of $\mathbb{F}_2[x]$, and let $\overline{q}$ denote its reduction modulo $3$ to an element of $\mathbb{F}_3[x]$. Note that $\deg(q)=\deg(\tilde{q})=\deg(\overline{q})$. Because the reduction maps $\mathbb{Z}[x]\to\mathbb{F}_2[x]$ and $\mathbb{Z}[x]\to\mathbb{F}_3[x]$ are ring homomorphisms, we have that $$\tilde{p}=\widetilde{fg}=\tilde{f}\tilde{g},\qquad \bar{p}=\overline{fg}=\bar{f}\bar{g}.$$ Because the factorization of $\tilde{p}$ into irreducibles in $\mathbb{F}_2[x]$ is $(x^2+x+1)^2$, where there are only two irreducible factors, the non-trivial factorization $\tilde{p}=\tilde{f}\tilde{g}$ must be this factorization, i.e. in $\mathbb{F}_2[x]$, $$\tilde{f}=\tilde{g}=x^2+x+1.$$ Thus $\deg(f)=\deg(g)=\deg(\tilde{f})=\deg(\tilde{g})=2$. However, we know that in $\mathbb{F}_3[x]$, the irreducible element $x^3+x^2-1$ divides $\bar{p}$, and since $\bar{p}=\bar{f}\bar{g}$, it must divide either $\bar{f}$ or $\bar{g}$. But this is impossible since $\deg(\bar{f})=\deg(\bar{g})=2$.
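If you want to see the two reductions with a computer algebra system, here is a short SymPy check (my addition; the printed coefficient representation of the mod-3 factors may differ slightly from the hand-written one):

```python
from sympy import symbols, factor

x = symbols('x')
p = x**4 - x**2 + 2*x + 1
print(factor(p, modulus=2))   # (x**2 + x + 1)**2
print(factor(p, modulus=3))   # equals (x - 1)*(x**3 + x**2 - 1) mod 3
print(factor(p))              # p itself: irreducible over Q
```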
H: What does $X \subsetneqq Y$ mean What does $X \subsetneqq Y$ mean? I cannot find it on Google. I suppose it means that $X$ is a subset of $Y$ and at the same time not equal to it, i.e. it could be simply written as $\subset$. Pardon my ignorance. AI: Some people use $\subsetneq$ for strict and $\subset$ for non-strict. This is OK, but may confuse people accustomed to other conventions. Some people use $\subset$ for strict and $\subseteq$ for non-strict. This is OK, but it will confuse many people. To avoid confusion, you might think to use $\subsetneq$ and $\subseteq$, but those symbols are visually very similar. Thus $\subsetneqq$ and $\subseteq$ would be the safe approach. Note: There is also a $\subseteqq$ for symmetry. Also note: $X\not\subsetneqq Y$ is a bad idea, which gives an advantage to the $\subset$/$\subseteq$ crowd (an advantage unlikely to help their cause).
H: Atiyah and Macdonald Exercise 1.27 Please do not ruin the fun by telling me why $\mu$ is surjective! I am having trouble understanding the idea of the coordinate functions on the affine algebraic variety $X$. I am trying to understand that $P(X)$ is generated as a $k$-algebra by the coordinate functions. I understand what it means to be generated as a $k$-algebra, but the problem is I don't understand what the coordinate functions are! Just some notation: $P(X) = k[t_1,t_2,\ldots, t_n]/I(X)$ where $k$ is algebraically closed and $I(X)$ is the ideal of the variety $X$. What do they explicitly mean when they say "Let $\xi_i$ be the image of $t_i$ in $P(X)$."? I can't really see what the image of an unknown is. For example, if I have $k[t]/(t^2)$, then the image of the $t$ is still $t$ but with the relation that $f(t)t^2 = 0$ for $f \in k[t]$. If I have $k[x,y]/(xy-1)$ then the images of $x$ and $y$ are still $x$ and $y$ with $f(x,y)(xy-1) = 0$? So there is no explicit way to show the image of a variable? Then the problem states that if $x \in X$, then $\xi_i(x)$ is the $i$th coordinate of $x$. I guess this just means the usual "plugging in" of an element in an equation. Why doesn't this work for arbitrary elements in $k^n$? AI: A lot of this repeats what Paul says in his comment above. Since $k$ is algebraically closed it's infinite, so the map $A := k[x_1, \dots, x_n] \to \operatorname{Fun}(k^n, k)$ is injective. [This is not the important part, but it's comforting.] We can compose with restriction of functions to get a map into $\operatorname{Fun}(X, k)$. The ideal $I(X)$ is the kernel of this composition. So, $A(X) := A/I(X)$ can be identified with a subring of the ring of functions on $X$ and this allows us to evaluate elements of $A(X)$ on points of $X$. As with any quotient, there is a canonical map $\phi\colon A \to A(X)$ and we're setting $\xi_i = \phi(x_i)$. I would not say that $\xi_i$ and $x_i$ are the same thing: they live in different rings. $\xi_i$ is the coset $x_i + I(X)$. Of course it's common to say things like $x_1 = 2x_2$ in $A/I(X)$ but one should be aware of what's going on. We can't evaluate outside of $X$ because if $x \notin X$ then there is some element $f \in I(X)$ such that $f(x) \neq 0$. To define $\bar g(x)$ for some $\bar g \in A(X)$ we would try to choose some representative $g \in A$ such that $\phi(g) = \bar g$ and then set $\bar g(x) = g(x)$. But $g + f$ is another "lift" of $\bar g$ and $g(x) \neq g(x) + f(x)$. In your first example [note, however, that $(t^2)$ couldn't be of the form $I(X)$; if you want an associated geometric space, do the exercises on schemes!], $t$ is the same thing as $t + t^2$ in the quotient and these evaluate differently at $1$.
H: How to find what point a wave is reflected off If a wave is reflected off a surface, the angle of reflection is equal to the angle of incidence. But, how can we use this to find the actual path of the incident and reflected waves if we only know the positions of the wave origin and observer? It seems clear that the reflection will occur in the plane normal to the reflective surface and including the origin and observer. We can then look at this problem in two dimensions: (I've drawn the reflection off a horizontal line because my actual application involves a blast wave reflected off the ground in order to simulate the Mach stem effect in a game. The blast wave also reaches the observer directly, but the Mach stem effect is concerned with the constructive interference between this and the reflected wave.) We know point O (the wave origin) and point P (the observer). We know that R (the reflection point) lies on the ground line somewhere between where O and P project to the ground, but we don't know where. We also don't know $\theta$, although we do know it's the same on both sides. I'm sure this is a relatively simple trigonometry problem but I'm terrible at seeing these things. Any ideas? AI: Reflect $O$ in the ground to $O'$, then $R$ is where $PO'$ meets the ground.
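A small numerical sketch of this construction (the coordinates are an invented example, not from the question): reflect $O$ across the ground to $O'=(O_x,-O_y)$ and intersect the segment $PO'$ with $y=0$.

```python
import math

def reflection_point(O, P):
    """R = intersection of segment P-O' with the ground y = 0, where O' mirrors O."""
    ox, oy = O
    px, py = P
    t = py / (py + oy)               # fraction of the way from P to O' where y = 0
    return (px + t * (ox - px), 0.0)

O, P = (0.0, 3.0), (8.0, 1.0)
R = reflection_point(O, P)
print(R)                             # (6.0, 0.0)
# sanity check: angle of incidence equals angle of reflection
print(math.atan2(O[1], R[0] - O[0]), math.atan2(P[1], P[0] - R[0]))
```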
H: Does this normalization of a positive definite matrix alter its positive definiteness? I have a matrix $A$ that is positive definite. Denoting the elements of $A$ by $a_{ij}$, let $A'$ be a new matrix formed as: $$A'_{ij} = \frac{a_{ij}}{\sqrt{a_{ii}a_{jj}}}$$ Is $A'$ also positive definite? Note: All diagonal elements of $A$ are positive. Backstory: This situation arises in certain covariance matrix normalizations, and several domain-specific algorithms in my field require the input to be a PD matrix, but the scripts used in our lab do not check for PDness and continue as if it held (they work correctly for PD inputs, though). Since I'm using $A'$ instead of $A$, I'd like to know if the PDness can change, so that I can interpret the output of the scripts cautiously (the PDness didn't change in a few sample tests I tried, but that isn't a convincing proof). AI: Yes, it is. We have $A'=D^\ast AD$ where $D=\operatorname{diag}\left(\frac1{\sqrt{a_{11}}},\ldots,\frac1{\sqrt{a_{nn}}}\right)$. Since $A$ is positive definite and $D$ is invertible, $A'$ is positive definite (for any $x\neq0$, we have $Dx\neq0$ and hence $x^\ast A'x=(Dx)^\ast A(Dx)>0$).
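A numerical illustration of the answer (the example matrix is mine): the normalization is exactly $A' = DAD$ with $D=\operatorname{diag}(1/\sqrt{a_{ii}})$, and positive definiteness is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)                 # positive definite by construction
D = np.diag(1 / np.sqrt(np.diag(A)))
A_prime = D @ A @ D                          # unit diagonal, like a correlation matrix
print(np.linalg.eigvalsh(A).min() > 0, np.linalg.eigvalsh(A_prime).min() > 0)  # True True
```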
H: Difference between internal category and subcategory? The category SET has an internal category, which is a small category with small objects and small morphisms, and that means that it's a subcollection of the collection of objects in SET. Is an internal category the same as a subcategory? Are they different only when dealing in the context of Grothendieck universes? Thanks! AI: No. An internal category in $\text{Set}$ is just a (small) category. A subcategory of $\text{Set}$ is a category equipped with an inclusion functor into $\text{Set}$. The difference between the two constructions is that in the first case you pick out two objects in $\text{Set}$, one to serve as objects and one to serve as morphisms in your internal category. In the second case you pick out a whole bunch of objects, all of which are objects in your subcategory, with morphisms given by some morphisms between them in $\text{Set}$. It will be easier to make this distinction by replacing $\text{Set}$ with a more general category $C$ (with enough pullbacks). For example, an internal category in $\text{Top}$ is a category whose object and morphism sets are both equipped with a topology such that everything in sight is continuous, whereas a subcategory of $\text{Top}$ is a collection of topological spaces and a collection of morphisms between them containing identities and closed under composition. In particular, it is an ordinary category equipped with an inclusion functor into $\text{Top}$.
H: Problem understanding domain of circular relation I came across an exercise in a book which introduces the circular relation as: $C$ is a relation from $\mathbb R$ to $\mathbb R$ such that $(x,y) \in C$ means $x^2+y^2 = 1$. It then says that the domain of $C$ is $\mathbb R$. However, as I understand things, the domain can only be the set of all $x$ that satisfy the relation. For example, the number $5$ will never be in the domain of $C$, no matter what $y$ we choose. What is going on here? Is the book in error? AI: This is a more general context than that of functions - a relation need not be "defined" everywhere on the domain. A relation $R$ between sets $X$ and $Y$ (also sometimes phrased as "from $X$ to $Y$") is just a subset $R\subseteq X\times Y$. The set $X$ is, by definition, the domain of $R$. Here's the Wikipedia article.
H: derivative of $f(x)=\frac{x}{x-1}$ I can't figure out the derivative of $$f(x)=\frac{x}{x-1}$$ I have tried it many different ways and I just can't seem to get it figured out. AI: Of course, you can use the quotient rule, but with this function, we can express it in a manner which makes computing the derivative fairly straightforward: You can use polynomial long division, or simply subtract and add $1$ to the numerator to split the function into a sum: $$f(x)=\frac{x}{x-1} = \frac{(x - 1) + 1}{x-1} = 1 + \frac 1{x-1}$$ Now we can simply compute using the power rule $(\dagger)$ $$f'(x) = \frac{d}{dx}\left(\frac 1{(x-1)}\right) = \frac{d}{dx}(x - 1)^{-1} = -{(x - 1)^{-2}} = \frac{-1}{(x-1)^2}$$ $(\dagger)$ Note that we are technically using the chain rule and power rule to compute $\frac{d}{dx}\left(\frac{1}{x - 1}\right)$, setting $u = x - 1,\; \frac {du}{dx} = 1$ and computing $\frac{d}{dx}\left(u^{-1}\right)= -u^{-2}\cdot \frac{du}{dx}$. It just so happens that $\frac {du}{dx} = 1$.
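A quick symbolic check of both the rewrite and the result (my addition):

```python
from sympy import symbols, diff, simplify

x = symbols('x')
f = x / (x - 1)
print(simplify(f - (1 + 1 / (x - 1))))   # 0, so the rewrite is valid
print(simplify(diff(f, x)))              # -1/(x - 1)**2 (possibly with expanded denominator)
```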
H: Stuck on an 'advanced logarithm problem': $2 \log_2 x - \log_2 (x - \tfrac1 2) = \log_3 3$ I'm stuck on solving what my textbook calls an "advanced logarithm problem". Basically, it's a logarithmic equation with logarithms of different bases on either side. My exercise looks like this: $$2 \log_2 x - \log_2 (x - \tfrac1 2) = \log_3 3$$ To start off, I used the power rule to simplify the first term to get this: $$\log_2 x^2 - \log_2 (x - \tfrac1 2) = \log_3 3$$ Then I used the quotient rule to get this: $$\frac {x^2} {x - \frac 1 2} = \log_3 3$$ Then I turned the logarithmic equation into an exponential equation to get this: $$3^{\frac {x^2} {x - \frac 1 2}} = 3$$ Now, however, I'm unsure of how to proceed. The textbook has neither explained to me how to simplify such complex exponents nor do such exponents have any precedence. I'm therefore assuming that I went wrong somewhere previously in solving the problem, but as far as I can tell I did everything by the book. Did I go wrong? And if not, am I really supposed to simplify that exponent? AI: Sorry, read too quickly at first. You were on the right track - but you lost a logarithm at this step: $$\frac{x^2}{x-\frac{1}{2}}=\log_3(3)$$ It should be $$\log_2\left(\frac{x^2}{x-\frac{1}{2}}\right)=\log_3(3)$$ Now, what is $\log_3(3)$?
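Carrying the hint to the end (a check of the final answer, not part of the original reply): $\log_3 3 = 1$, so $\frac{x^2}{x-1/2} = 2^1$, i.e. $x^2 - 2x + 1 = (x-1)^2 = 0$, giving $x = 1$, which lies in the domain $x > 1/2$.

```python
import math

x = 1.0
print(2 * math.log2(x) - math.log2(x - 0.5))   # 1.0, matching log_3(3) = 1
```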
H: Relate to GP 1.3.9 - Differentiating $x_{i_1}, \dots, x_{i_k}$ results in span($e_{i_1}, \dots, e_{i_k}$)? I started to think about this question when attempting exercise 1.3.9 in Guillemin and Pollack's Differential Topology. Consider the projection $\varphi: \mathbb{R}^N \rightarrow \mathbb{R}^k$: $$(x_1, \dots, x_N) \mapsto (x_{i_1}, \dots, x_{i_k})$$ Differentiating: $$d \varphi: T_x(X) \mapsto \text{ span}(e_{i_1}, \dots, e_{i_k})$$ So just to confirm: does differentiating $x_{i_1}, \dots, x_{i_k}$ result in span($e_{i_1}, \dots, e_{i_k}$)? AI: A projection $\pi: \mathbb{R}^n \rightarrow \mathbb{R}^k$ defined by $$ \pi (x) = \sum_{j=1}^k [x \cdot e_{i_j}]e_j $$ will push $\frac{\partial}{\partial x^{i_j}}$ to $\frac{\partial}{\partial y^{i_j}}$ under the differential $d\pi$. Let $y$ be the coordinate system $(y^{i_j})$ for $j=1, \dots k$. Consider: $$ d\pi (\frac{\partial}{\partial x^{i_j}}) = \sum_{l=1}^k \frac{\partial y^{i_l}}{\partial x^{i_j}}\frac{\partial}{\partial y^{i_l}}$$ where $y = \pi (x)$, hence $y^{i_l} = x \cdot e_{i_l}= x^{i_l}$ and $\frac{\partial y^{i_l}}{\partial x^{i_j}}=\frac{\partial x^{i_l}}{\partial x^{i_j}} = \delta_{i_j,i_l} = \delta_{jl}$; thus, $$ d\pi (\frac{\partial}{\partial x^{i_j}}) = \sum_{l=1}^k\delta_{jl}\frac{\partial}{\partial y^{i_l}} = \frac{\partial}{\partial y^{i_j}}$$ I think you want to identify $\frac{\partial}{\partial y^{i_j}}$ with $e_j$. Under that assumption I suppose your claim is true. On the other hand, if you wish to view that span as the copy of $\mathbb{R}^k$ embedded in $\mathbb{R}^n$ in the natural manner by setting the complement of the $x^{i_j}$ coordinates to zero, then the span is literally accurate. However, I'm not sure what you intend, so I wrote this post. I suppose, the formula for the embedded case is just $$ \pi (x) = (0,...0,x^{i_1},0,...,0,x^{i_k},0,...0) \in \mathbb{R}^n. $$
H: countability of limit of a set sequence Let $S_n$ be the set of all binary strings of length $2n$ with an equal number of zeros and ones. Is it correct to say $\lim_{n\to\infty} S_n$ is countable? I wanted to use it to solve this problem. My argument is that each of the $S_n$'s is countable (in fact finite), so their union would also be countable. Then $\lim_{n\to\infty}S_n$ should also be countable, as it is contained in the union. AI: The collection of all finite strings of $0$'s and $1$'s is countably infinite. The subcollection of all strings that have equal numbers of $0$'s and $1$'s is therefore countably infinite. I would advise not using the limit notation to denote that collection. The usual notation for this kind of union is $\displaystyle\bigcup_n S_n$.
H: Stuck on Elementary Exponentiation I'm confused. Why is it that for a problem in the form of $(2^{x+1})(2^{x-1})$ we get $2^{2x}$ instead of $4^{2x}$? Shouldn't we multiply the $2$s by each other? Similarly for a problem like $(2^{x+1})(4^{x-1})$, why do we get $2^{3x-1}$ rather than $8^{2x}$? And for $(3^{2x}/3^{x-1})$ we get $3^{x+1}$; shouldn't the $3$s cancel? AI: Recall that $$\large a^b\cdot a^c = a^{b+ c}$$ $$\large \dfrac{a^b}{a^c} = a^{b - c}$$ $$\large \left(a^b\right)^c = a^{bc}$$ So, using these "laws of exponents", we have: $$\begin{align} (1)\quad \large 2^{(x + 1)}\cdot 2^{(x - 1)} & = \large 2^{(x + 1) + (x - 1)}\\ \\ & = \large 2^{2x}\end{align}$$ $$ $$ $$\begin{align} (2)\quad \large 2^{x+1} \cdot 4^{x-1} & = \large 2^{x + 1} \cdot \left(2^2\right)^{(x-1)}\\ \\ &= \large \large 2^{x + 1} \cdot 2^{2(x-1)} \\ \\ & = \large 2^{(x+1) + 2(x-1)} \\ \\ & = \large 2^{3x - 1}\end{align}$$ Now, for the last question, I'll let you try to work that one out.
H: Finding the interval for increase of the function $y =x^2e^{-x}$ Problem: Find the interval in which the function $y =x^2e^{-x}$ is increasing. My approach: We can take the first derivative to find where the function increases or decreases, i.e. $y'=2xe^{-x}-x^2e^{-x}$. Finding the critical points by putting $y'=0$: $ \Rightarrow xe^{-x}(2-x)=0$ $\Rightarrow x =0,\ x =2$ are the critical points (please clarify here). Now if we take the second derivative at these critical points: $y'' = x^2e^{-x}-4xe^{-x}+2e^{-x}$; if we put $x =0$ then we get $2$ (which is positive), which means the function attains a minimum at $0$. Please help me find the interval in which it increases. As the function attains a minimum value at $0$, I think it starts increasing from $0$ onwards, but I am unable to locate the interval. AI: The function $e^{-x}$ is always positive, so you only need to look at $x(2-x)$. Figure out on what interval it is positive. It is positive if the two factors are both positive or both negative. But the first factor is negative only if $x<0$, and on that interval $2-x$ is positive. Next, notice that $2-x$ is positive precisely if $x<2$. You should be able to take it from there.
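To see the conclusion symbolically (an illustration, not the intended pencil-and-paper route): the derivative factors as $e^{-x}x(2-x)$, which is positive exactly on $(0,2)$.

```python
from sympy import symbols, exp, diff, factor, solve_univariate_inequality

x = symbols('x', real=True)
y = x**2 * exp(-x)
print(factor(diff(y, x)))                              # -x*(x - 2)*exp(-x)
print(solve_univariate_inequality(diff(y, x) > 0, x))  # (0 < x) & (x < 2)
```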
H: $d$ is a metric on $X$ if $d(a,b) = 0 ⇔ a = b$ and $d(a, b) ≤ d(z, a) + d(z, b)$ The following is a question from Metric Spaces by O'Searcoid (p. 19). Suppose $X$ is a set and $d:X×X→\mathbb{R}$. Show that $d$ is a metric on $X$ if, and only if, for all $a,b,z ∈ X$, the two conditions $d(a,b) = 0 ⇔ a = b$ and $d(a, b) ≤ d(z, a) + d(z, b)$ are both satisfied. The first direction seems easy to demonstrate. If $d$ is a metric, the first condition holds by definition, and by applying symmetry, the second condition follows from the triangle inequality. The converse statement, however, is presenting some difficulties. From what I can see, I must now demonstrate that the two conditions imply that $d$ is positive for all $ a \ne b$ and that $d$ is symmetric. Suppose $d$ is not symmetric. Then this contradicts the first statement, since $d(a,b) \ne d(b,a)$ and if $a=b$, then $d(a,b) = 0$ but $d(b,a) \ne 0$. This also implies the triangle inequality by applying symmetry on the second condition in the question. What I cannot figure out is how to demonstrate $d(a,b) \ge 0$. If we allow $d$ to be negative, I do not see how this may lead to a contradiction of these statements. One thing I thought of was: suppose $d(a,b) = 0$. Then this implies that $0 ≤ d(z, a) + d(z, b)$ and so these distance functions with $z$ must be positive. We could form any "intermediary" distance function we please regardless of the left-hand-side distance function. Thus in order to solve the triangle inequalities for all $a$ given $d(a,a)$ we require each $d$ to be positive. Is my reasoning valid? AI: We have $d(a, b) ≤ d(z, a) + d(z, b)$, so: If $a=b$ we have $0\leq 2d(z,a)$, and then $d(z,a)$ is non-negative for all $z$ and $a$. Now replacing $z$ by $b$ we find $d(a,b)\leq d(b,a)$, and switching $a$ and $b$ gives $d(a,b)\geq d(b,a)$, so we find $d(a,b)=d(b,a)$.
H: Will it be a Cauchy sequence? Let $\langle x_n\rangle$ be a sequence satisfying $|x_{n+1}-x_n|\le \frac{1}{n^2}$. Will it be a Cauchy sequence? AI: We want to show that given any $\epsilon \gt 0$, there is an $N$ such that if $N\le m\lt n$ then $|a_n-a_m|\lt \epsilon$. Note that $$a_n-a_m=(a_{m+1}-a_m)+(a_{m+2}-a_{m+1})+\cdots+(a_n-a_{n-1}).$$ Taking absolute values, and using the Triangle Inequality, we get $$|a_n-a_m|\le |a_{m+1}-a_m|+|a_{m+2}-a_{m+1}|+\cdots+|a_n-a_{n-1}|.$$ The right-hand side is non-negative and less than or equal to $$\frac{1}{m^2}+\frac{1}{(m+1)^2}+\cdots+\frac{1}{(n-1)^2}.$$ Since $\sum_1^\infty \frac{1}{k^2}$ converges, by choosing $m$ large enough, we can make the tail $\lt \epsilon$. If you want to be fully explicit, note that $$\frac{1}{m^2}+\frac{1}{(m+1)^2}+\frac{1}{(m+2)^2}+\cdots \lt \frac{1}{(m-1)m}+\frac{1}{(m)(m+1)}+\frac{1}{(m+1)(m+2)}+\cdots,$$ and the series on the right is a telescoping series with sum $\frac{1}{m-1}$. If omitted detail makes the solution obscure, please leave a message. Remark: The same is true if $\frac{1}{n^2}$ is replaced by $\frac{1}{n^p}$, where $p$ is any real number $\gt 1$. But we cannot replace $\frac{1}{n^2}$ by $\frac{1}{n}$. That is because the harmonic series diverges.
H: Filling a conical tank I have been working on this problem for about 2 hours and I can't seem to get it; here is exactly what the question says. "Water is poured into the top of a conical tank at the constant rate of 1 cubic inch per second and flows out of an opening at the bottom at a rate of 0.5 cubic inches per second. The tank has a height of 4 inches, and a radius of 2 inches at the top. How fast is the water level changing when the water is 2 inches high?" AI: Think about what is happening. The water (volume) is being poured in at a constant rate. This relates to how the water level ($h$) changes and how the width of the water in the tank at that level ($r$) changes. Further, $r$ and $h$ are related. The volume of a cone is $$V = \frac13 \pi \, r^2 \, h$$ How is $h$ related to $r$? You know that, at the top, the radius is $2$ and $h=4$. Because this is a cone, we can say that $r = h/2$ at all levels of the cone. Thus, $$V(h) = \frac{1}{12} \pi \, h^3$$ We may then differentiate with respect to time; use the chain rule here $$\frac{dV}{dt} = \frac{\pi}{4} h^2 \frac{dh}{dt}$$ You are given $dV/dt$ (the net rate, $1 - 0.5 = 0.5$ cubic inches per second) and the height $h$ at which to evaluate; solve for $dh/dt$.
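Plugging the numbers into the final formula (a check, not part of the answer):

```python
import math

dV_dt = 1.0 - 0.5                     # net volume rate, cubic inches per second
h = 2.0
dh_dt = 4 * dV_dt / (math.pi * h**2)
print(dh_dt)                          # 0.159..., i.e. 1/(2*pi) inches per second
```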
H: Need help with a simple math proof Imagine an infinitely long sequence of squares where one of these squares contains a frog, and another square contains a fly. For simplicity, let's number all of the (infinitely many) squares by assigning each an integer. We'll say that the frog starts in position 0, and will assign positive integers to the squares to the right of the frog and negative numbers to the squares to the left of the frog. The frog can hop across the squares forward and backward, but can only make jumps of two different lengths: 3 and 7. For example, to get to square five to eat the fly, the frog might jump forward seven squares to square 7, forward seven squares again to square 14, then back three squares three times to squares 11, 8, and (finally) 5. 1) Prove that, starting at position 0, the frog can move to any square using only jumps of length 3 and 7. 2) Prove that in an optimal series of jumps from square 0 to square k, all jumps of the same distance must be made in the same direction. That is, all of the frog's jumps of distance 3 must be in the same direction, and all of the frog's jumps of distance 7 must be in the same direction (though these two directions don't have to be the same). 3) Prove that in an optimal series of jumps from square 0 to square k, the frog can never use jumps of size three more than six times. AI: For question 2, if the frog made two jumps of the same length in opposite directions, it could omit those two jumps and end up at the same final destination. For question 3, if the frog made 7 or more jumps of size 3, then by question 2 they're all in the same direction, so 7 of them could be replaced by three jumps of size 7, leading to the same final result with fewer jumps.
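A brute-force cross-check of all three claims (my own addition): write the net displacement as $k = 3a+7b$ with signed integers $a,b$, so $|a|+|b|$ jumps realize it; minimizing that count, the optimal $|a|$ never exceeds $6$, consistent with part 3.

```python
def optimal(k, search=60):
    """Minimal-jump representation k = 3a + 7b; returns (jumps, a, b)."""
    best = None
    for a in range(-search, search + 1):
        if (k - 3 * a) % 7 == 0:
            b = (k - 3 * a) // 7
            cand = (abs(a) + abs(b), a, b)
            if best is None or cand < best:
                best = cand
    return best

for k in range(-30, 31):
    jumps, a, b = optimal(k)
    assert 3 * a + 7 * b == k and abs(a) <= 6
print("all targets reachable; optimal plans use at most 6 jumps of size 3")
```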
H: The difference with sup and without sup What is the difference between a test stated with $\sup$ and one stated without it, and how do I judge which to use? For example, here is Rudin's Root Test: Given $\sum a_n$, put $$\alpha =\limsup_{n\to \infty} \sqrt[n]{\left|a_n\right|}.$$ Here is MathWorld's Root Test, and I have also seen the supremum limit defined on its own. So, as the title shows (and the question does not occur only in this example; the same issue arises in the Ratio Test and other tests), when should I use $\sup$ and $\inf$, and when should I not? AI: Generally speaking: the texts that use limsup instead of lim are just being more careful. You can certainly prove that if $\lim_{k\rightarrow\infty}|a_k|^{1/k}$ exists, and call its value $\rho$, then the root test applies: if $\rho>1$ you get divergence, whereas you get absolute convergence for $\rho<1$. However, the result given by Rudin is stronger! Whenever the limit exists, necessarily the limsup also exists and gives the same value; but, there are certainly sequences which do not have a limit but which still have a limit supremum.
H: Is $\mathbb{N}$ infinite? Intuitively the answer is yes. According to the definition, a set $A$ is infinite if there is no bijection between $A$ and some natural number. Now, I don't see the problem with my reasoning. Accordingly, the function $f:\mathbb{N}\longrightarrow 0$ is a bijection (because $f\subseteq \mathbb{N}\times 0=\emptyset$ and then it's vacuously true that $\forall a\in \mathbb{N}\forall b\in \emptyset((a,b)\in f \wedge(a,c)\in f\Longrightarrow b=c)$). Also, if I wanted to prove that indeed $\mathbb{N}$ is infinite, the only option I see is a proof by induction. In such a case my statement should be false. I don't know... Edit: The approach to proving that there is a bijection is the same as proving that it's a function: It's injective: It's vacuously true that $\forall a\in \mathbb{N}\forall b\in \emptyset((a,b)\in f \wedge(c,b)\in f\Longrightarrow a=c)$. It's surjective: It's vacuously true that $\forall a\in \emptyset \exists b((a,b)\in f)$. Now I know my problems are in my understanding of the meaning of "vacuously". It would be very nice if you guys could tell me the mistakes in it and also in my definitions. AI: There is no total function $f:\Bbb N\to 0$: $0$ is the empty set, so $\Bbb N\times 0=\varnothing$, and therefore the domain of $f$ is empty. Some would not call this a function on $\Bbb N$ at all; others would call it a partial function on $\Bbb N$. But all of this is beside the point: in order to show that $\Bbb N$ was finite, you would have to find a bijection between $\Bbb N$ and some $n\in\Bbb N$. For each $n\in\Bbb Z^+$ it is certainly possible to find a surjection $f:\Bbb N\to n$ (where $n=\{0,\dots,n-1\}$): one possibility is $$f(k)=\begin{cases} k,&\text{if }k\in n\\ 0,&\text{otherwise}\;. \end{cases}$$ This function is certainly not a bijection, however, since $f(0)=f(n)=f(n+1)=\ldots=0$. And it turns out to be impossible to find such a bijection, so $\Bbb N$ must be infinite.
H: Expectation with a "regular" function I hope this is not a silly question. I know that the expectation of a constant is just a constant (i.e. $E[c]=c$ for $c\in \mathbb{R}$), and that for a function $g$ of a random variable $X$, $E[g(X)]=\int_{-\infty}^{\infty}g(x)~f(x)~dx$ (in the continuous case). But, what if you have something like $E[cx^2 X^3]$? I know you can take the $c$ out: $cE[x^2 X^3]$, but can you do the same with the $x^2$? Is it still considered a "constant" even though it's a function of $x$ (but still not a random variable)? AI: In this context, a random variable is a function $X:\Omega\rightarrow\mathbb{R}$, where $\Omega$ is your underlying probability space. Then a "constant" is just a function $x:\Omega\rightarrow\mathbb{R}$ which is constant - that is, no matter what the result of your underlying experiment happens to be, $x$ always has the same value. So, for instance, $\DeclareMathOperator*{\E}{\mathbb{E}}\E[cx^2X^3]=cx^2\E[X^3]$, assuming $x$ is constant in this sense - even though $x$ doesn't strictly have a numerical value. This is useful for things like the moment generating function and the characteristic function of a random variable: they can be defined as $$ t\mapsto\E[e^{tX}]\qquad\text{and}\qquad t\mapsto\E[e^{itX}], $$ respectively. Clearly, $t$ is an unknown quantity; but it is not considered random, as it doesn't depend on the random choice of $X$.
H: Statements on Cardinality of sets Which of the following statements regarding cardinality of sets are always correct? Let $X$ be an infinite set. Then (1) $|\{F\mid F \subseteq X \text{ and } F \text{ is finite}\}| > |X|$ (2) $A \in P(X)$ and $X\setminus A$ is infinite $\rightarrow |A|<|X|$ (3) $A \subseteq X$ and $|A|<|X| \Rightarrow |X\setminus A|=|X|$ (4) $|X|<|P(X)|$ (5) $|X\times X|>|X|$, where $P(X)$ denotes the power set of $X$. I think (4) is correct but I am not sure of the rest; please help, since cardinality has not been covered properly in our course. AI: You are right about (4): this is Cantor’s theorem, a very important basic result about cardinalities. Assuming the axiom of choice, (5) is false: it is a general fact about infinite sets that $|X\times X|=|X|$. Unless you’re taking a course that goes into well-orderings in some detail, you could not reasonably be asked to prove this; I imagine that you’re simply expected to learn it as a fact. The same goes for (1): if $X$ is an infinite set, and $\mathscr{F}$ is the set of finite subsets of $X$, then $|\mathscr{F}|=|X|$. (3), on the other hand, is true; again, this is probably something that you’re expected simply to learn as a fact, unless you’re taking a fairly serious elementary set theory course. Syd Henderson has given a nice, straightforward example showing that (2) is not necessarily true.
H: Scaling of eigenvalues Suppose $A_N$ is a positive definite matrix of size $N$ with eigenvalues $\Lambda=\{\lambda_1,\ldots,\lambda_N\}$. Let $D = \text{diag}\{d_1,\ldots,d_N\},\ d_i>0$ be a diagonal matrix. Can the eigenvalues of $A'_N=DAD$ be written in terms of $\Lambda$ and $d_i$? AI: (Unfortunately) No, the eigenvalues of $A_N'$ will also depend on the eigenvectors of $A_N$.
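A concrete counterexample (mine, not from the answer): take two matrices with the same spectrum $\{1,3\}$ but different eigenvectors; with the same $D$, the products $DAD$ have different eigenvalues.

```python
import numpy as np

D = np.diag([1.0, 2.0])
A1 = np.diag([1.0, 3.0])                          # eigenvalues {1, 3}
Q = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
A2 = Q @ np.diag([1.0, 3.0]) @ Q.T                # same eigenvalues, rotated basis
print(np.linalg.eigvalsh(D @ A1 @ D))             # [ 1. 12.]
print(np.linalg.eigvalsh(D @ A2 @ D))             # approx [1.39, 8.61]: different
```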
H: the continuity of the primitive Good day! I need to prove the existence of $t\in [0,1]$ such that $$\int\limits_{0}^{t} f(x)\,dx = \frac{1}{2} \int\limits_{0}^{1} f(x)\,dx,$$ where $f$ is integrable. My solution: $$F(t)=\int\limits_{0}^{t} f(x)\,dx-\frac{1}{2}\left(\int\limits_{0}^{t} f(x)\,dx +\int\limits_{t}^{1} f(x)\,dx\right)=\frac{1}{2} \int\limits_{0}^{t} f(x)\,dx-\frac{1}{2} \int\limits_{t}^{1} f(x)\,dx.$$ $$F(0)=-\frac{1}{2} \int\limits_{0}^{1} f(x)\,dx,$$ $$F(1)=\frac{1}{2} \int\limits_{0}^{1} f(x)\,dx.$$ If $F(t)$ is continuous then we are done (Intermediate Value Theorem). It may be a silly question, but... if $f$ is an integrable function, is a primitive of this function continuous? I can't imagine a counterexample. And I don't know what to do if $\int\limits_{0}^{1} f(x)\,dx=0$. Thanks. AI: We assume that by integral you mean the Riemann Integral. Then $F(t)=\int_0^tf(x)\,dx$ is a continuous function of $t$ for any integrable $f$. Without loss of generality we may assume that $\int_0^1 f(x)\,dx =I\ge 0$. If $I=0$, let $t=0$. Otherwise, by the Intermediate Value Theorem, for any $b$ such that $0\le b\le I$, there is a $t$ such that $F(t)=b$. Let $b=\frac{I}{2}$.
H: Iverson bracket help How can I show that if $G$ is a group and $x,y,m$ are in $G$ then, $$[xy=m]=\sum_{r\in G}[x=r][y=r^{-1}m]$$ Where $[P]$ is the Iverson bracket. AI: Fix $x,y,m\in G$. The lefthand side, $[xy=m]$, is $1$ if $xy=m$ and $0$ otherwise. Suppose first that $xy=m$, so that the lefthand side is $1$. Now consider the righthand side. As $r$ ranges over $G$, one of the terms will be the $r=x$ term, since $x\in G$. For that term $[x=r]=1$, so we have $$[x=r][y=r^{-1}m]=[y=r^{-1}m]=[y=x^{-1}m]\;.$$ But $y=x^{-1}m$ if and only if $xy=xx^{-1}m=m$, which is in fact the case, so this term of the sum is $1$. And of course all the other terms are $0$, because if $r\ne x$, then $[x=r]=0$, and the other factor doesn’t matter. Thus, the formula is correct when $xy$ is equal to $m$. Now suppose that $xy\ne m$, so that $[xy=m]=0$. Again the only possible non-zero term in the summation is the $x=r$ term, which is $$[x=r][y=r^{-1}m]=[y=x^{-1}m]\;.\tag{1}$$ But $xy\ne m$, so $y\ne x^{-1}m$, and the term $(1)$ is equal to $0$. In this case, therefore, every term of the summation is zero, and so is the sum — which is exactly what we want.
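Since the identity is a finite statement, it can be sanity-checked by brute force on a small group; here over $(\mathbb{Z}_6,+)$, where $r^{-1}m$ becomes $(m-r)\bmod 6$ (my own illustration):

```python
n = 6  # the group Z_6 under addition
for x in range(n):
    for y in range(n):
        for m in range(n):
            lhs = 1 if (x + y) % n == m else 0
            rhs = sum((1 if x == r else 0) * (1 if y == (m - r) % n else 0)
                      for r in range(n))
            assert lhs == rhs
print("identity verified on Z_6")
```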
H: When does $\|x+y\|=\|x\|+\|y\|?$ Let $(V_\mathbb R,\langle,\rangle)$ be an inner product space. I'm trying to see for $x,y\in V$ when does $\|x+y\|=\|x\|+\|y\|?$ Let $\|x+y\|=\|x\|+\|y\|$. Squaring both sides, $\langle x+y,x+y\rangle=\langle x,x\rangle+\langle y,y\rangle+2\|x\|\cdot\|y\|\\\ge\langle x,x\rangle+\langle y,y\rangle+2\langle x,y\rangle\\=\langle x,x\rangle+\langle x,y\rangle+\langle x,y\rangle+\langle y,y\rangle\\=\langle x,x+y\rangle+\langle x+y,y\rangle...(*)\\\implies\langle x+y,x+y\rangle-\langle x,x+y\rangle\ge\langle x+y,y\rangle\\\implies0\ge\langle x,0\rangle+\langle y,y\rangle\text{ due to linearity}\\\implies y=0$ Again from $(*),\langle x+y,x+y\rangle-\langle x+y,y\rangle\ge\langle x,x+y\rangle\\\implies 0\ge\langle x,x\rangle+\langle0,y\rangle\\\implies x=0$ Where did I go wrong? AI: The third line in your formulas can be written (if you use that $\langle x,y\rangle=\langle y,x\rangle$) as $\langle x+y,x+y\rangle$. So you have equality in the inequality in your second line. This implies $$ \langle x,y\rangle=\|x\|\,\|y\|, $$ i.e. equality in Cauchy-Schwarz. So $x$ and $y$ are collinear. Regarding your mistake, the two sides of the inequality in your first "implies" are equal. So your second "implies" is wrong.
H: how to compare $\sin(19^{2013}) $ and $\cos(19^{2013})$ How can one compare $ \sin(19^{2013})$ and $\cos (19^{2013})$, or even find their value range, with a normal calculator? I can take $2\pi k= 19^{2013} \to \ln(k)= 2013 \ln(19)- \ln(2 \pi)=5925.32 \to k= e^{5925.32} \approx 2.1 \times 10^{2573}$, but it is useless. (I can get the final answer with WolframAlpha, but it is not allowed.) Any hint? Thanks! AI: To find the sine and cosine, you'll need to reduce the angle ($19^{2013} \approx 1.352 \times 10^{2574}$) modulo $2\pi$. So, for reasonable accuracy, you'll need about 2600 digits of $\pi$. Fortunately, the first 100 000 or even million digits are readily accessible online. It happens that $19^{2013} \approx 1.2329141525482654$ modulo $2\pi$, so: $\sin(19^{2013}) \approx \sin(1.2329141525482654) \approx 0.9434588183383549$ $\cos(19^{2013}) \approx \cos(1.2329141525482654) \approx 0.3314897556480367$ All you need is a programming language that supports arbitrary-precision rational arithmetic, and a way of obtaining lots of digits of $\pi$. A normal TI-89 calculator will provide the former; the hard part is implementing an algorithm for the latter. But that's a topic for another question.
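The whole computation fits in a few lines with an arbitrary-precision library; here is a sketch with mpmath (my addition; roughly 2650 working digits are needed since $19^{2013}$ has about 2574 digits):

```python
from mpmath import mp, mpf, pi, sin, cos

mp.dps = 2650                      # enough digits to reduce mod 2*pi accurately
r = mpf(19**2013) % (2 * pi)       # 19**2013 is computed exactly as a Python int
print(r)                           # 1.23291415254826...
print(sin(r), cos(r))              # 0.94345..., 0.33148..., so sin > cos here
```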
H: $n-1$ linearly dependent functions among $f_1',f_2',\ldots,f_n'$ This is a problem from IMC training camp last year. Given differentiable functions $f_i:\mathbb{R}\rightarrow\mathbb{R}, i=1,2,\ldots,n$ such that $\{f_1,f_2,\ldots,f_n\}$ is linearly independent, show that there are $n-1$ linearly independent functions among $f_1',f_2',\ldots,f_n'$. AI: The proof is by contradiction. The assumption that any $n-1$ of the derivatives are linearly dependent implies that some nontrivial linear combination of any $n-1$ of the functions is constant. Now, without loss of generality we can assume that $\sum_{k=1}^{n-1}a_kf_k(x)=b_n$ and $a_1\ne 0$ and $\sum_{l=2}^nc_lf_l(x)=b_1.$ Since $f_1,...,f_n$ are linearly independent we have $b_1,b_n\ne 0.$ Multiply the first equation by $b_1$ and the second one by $b_n$ and subtract them to get a nontrivial linear combination of $f_1,...,f_n$ that adds up to $0.$ This provides a contradiction.
H: How to break a permutation group into a normal subgroup and the quotient group? Could you give steps to show this decomposition, or show it in Maple code? (Context: http://en.wikipedia.org/wiki/Simple_group) AI: Let $G$ be a non-trivial finite group. The set of all proper normal subgroups of $G$ is a non-empty finite set (it's finite because $G$ is finite, and it's non-empty because $\{e\}$ is a normal subgroup of $G$). Therefore, there exists a maximal proper normal subgroup $N_{k-1}$ of $G$, and by the canonical bijection between subgroups of $G/N_{k-1}$ and subgroups of $G$ containing $N_{k-1}$, the quotient $G/N_{k-1}$ is simple. $N_{k-1}$ need not be unique! (See the exercises below.) Now repeat the argument with $N_{k-1}$ in place of $G$: choose a maximal proper normal subgroup $N_{k-2}$ of $N_{k-1}$, so that $N_{k-1}/N_{k-2}$ is simple, and continue. The process must terminate because $G$ is finite; the last term of the process will, of course, be $\{e\}$. So, we end up with a sequence $\{e\}=N_0\triangleleft N_1\triangleleft\cdots\triangleleft N_{k-1}\triangleleft N_k=G$, where the notation $\triangleleft$ signifies "is a normal subgroup of" and $e$ denotes the identity element of $G$. Such a series is known as a composition series of the group $G$. By construction, each quotient group $N_{i+1}/N_i$ in this series, for $0\leq i\leq k-1$, is a simple group. The quotients are known as the composition factors of the composition series. The composition length of $G$ is the number of non-identity terms in a composition series of $G$ (so it's $k$, not $k+1$, in the notation of the process described above). It turns out that the composition length doesn't depend on which composition series of $G$ you choose. So we can speak of the composition length! Exercise 1: Verify that $\{e\}\triangleleft \{e,(1\,2\,3),(1\,3\,2)\}\triangleleft S_3$ is a composition series for $S_3$. (The order-$2$ subgroups such as $\{e,(1\,2)\}$ are not normal in $S_3$, so this is in fact the only one.) Exercise 2: What is the composition length of the following groups: (a) An abelian group? (b) The permutation group $S_3$? (c) (challenge) The permutation group $S_4$? (d) (challenge) The alternating group $A_4$? (If you couldn't figure out (c), then what is the composition length of $A_4$ in terms of the (possibly unknown) composition length of $S_4$?) Exercise 3: What are the composition factors of the groups you considered in Exercise 2? Exercise 4 (Jordan-Hölder): Prove that the composition length and the isomorphism classes of the composition factors of a finite group are independent of the chosen composition series. (This exercise is difficult and it's known as the Jordan-Hölder theorem; try induction on the order of the group to get started.) Exercise 5: Prove that there is a unique composition series of $S_n$ for all $n\geq 5$ and find it! (Hint: first prove that the only non-trivial proper normal subgroup of $S_n$ for $n\geq 5$ is $A_n$, the alternating group on $n$ letters.) I hope this helps!
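The question asks for Maple, which I don't reproduce here; as an alternative sketch, SymPy's permutation groups can compute a composition series directly (treat the method name as an assumption if your SymPy version is old):

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(4)
series = G.composition_series()          # list of subgroups G = N_k > ... > {e}
print([H.order() for H in series])       # expected [24, 12, 4, 2, 1]
# successive index ratios 2, 3, 2, 2 are the orders of the simple factors
```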
H: Does a property of Xor like this exist? Is there a property on Xor that says basically $a = b \oplus (a \oplus b)$? I was thinking associative but I don't think that's correct. AI: That’s actually a consequence of four properties of $\oplus$: it’s associative, it’s commutative, $x\oplus x=0$ for all $x$, and $0$ is an identity for $\oplus$. The derivation is simple: $$b\oplus(a\oplus b)=b\oplus(b\oplus a)=(b\oplus b)\oplus a=0\oplus a=a\;.$$
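The derivation can be demonstrated in one line with integer XOR (values arbitrary):

```python
a, b = 0b1011, 0b0110
assert b ^ (a ^ b) == a      # b xor (a xor b) = (b xor b) xor a = 0 xor a = a
print("ok")
```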
H: Polynomials and Trig Question: The equation $x^{2}-x+1=0$ has roots $\alpha$ and $\beta$. Show that $\alpha ^{n}+\beta ^{n}=2\cos\frac{n\pi }{3}$ for $n=1, 2, 3...$ Attempt: $x^{2}=x-1 \Rightarrow x^{n}=x^{n-1}-x^{n-2}$ for $n=3, 4, 5...$ $\therefore \alpha^{n}=\alpha^{n-1}-\alpha^{n-2}$ $\therefore \alpha ^{n}+\beta ^{n}=\alpha ^{n-1}+\beta ^{n-1}-\alpha ^{n-2}-\beta ^{n-2}$ I don't see how I could link this with cosine. Could you please go beyond answering the question and proving that $\alpha ^{n}+\beta ^{n}=2\cos\frac{n\pi }{3}$, and explain to me why this relation between the roots and trig happens? The question can probably be done by induction, but is there another way? Thank you! AI: Hints: $$x^2-x+1=0\implies x_{1,2}=\frac{1\pm\sqrt{-3}}2=\begin{cases}\frac{1-\sqrt3\,i}2=e^{-\frac{\pi i}3}=\text{cis}\left(-\frac{\pi}3\right)\\{}\\\frac{1+\sqrt3\,i}2=e^{\frac{\pi i}3}=\text{cis}\left(\frac{\pi}3\right)\end{cases}$$ Note thus that $$x_1=\overline{x_2}=x_2^{-1}\implies x_1+x_2=2\text{Re}\,(x_1)=2\cos\frac{\pi}3\implies x_1^n+x_2^n=\;\ldots\ldots$$
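A numerical confirmation of the identity (a check, not a proof):

```python
import cmath, math

alpha = (1 + cmath.sqrt(-3)) / 2     # e^{ i*pi/3}
beta  = (1 - cmath.sqrt(-3)) / 2     # e^{-i*pi/3}
for n in range(1, 13):
    lhs = (alpha**n + beta**n).real  # imaginary parts cancel up to rounding
    print(n, round(lhs, 10), round(2 * math.cos(n * math.pi / 3), 10))
```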
H: Over-constrained general solution to wave equation d'Alembert's formula states that the general solution to the one-dimensional wave equation is $$ u(x,t) = f(x+ct) + g(x-ct).$$ for any well-behaved functions $f$ and $g$. This is a well-known and popular result. The 1D wave equation can be accompanied by two initial conditions and two boundary conditions, such as \begin{align} u(x,0) &= a(x) \\ u_t(x,0) &= b(x) \\ u(0,t) &= c(t) \\ u(1,t) &= d(t) \end{align} But given this much initial/boundary data, the "general solution" is over-constrained. What is actually meant then when $ u(x,t) = f(x+ct) + g(x-ct)$ is referred to as a "general solution"? AI: Actually, your conditions set up the requirement that $$f(x) + g(x) = a(x)$$ and $$c (f'(x) - g'(x)) = b(x)$$ for $x \in (0,1)$, which basically completely determines $f$ and $g$ on the interval. More generally, you get the requirements that for all $t>0$, $$f(ct) + g(-ct) = c(t)$$ and $$f(1+ct) + g(1-ct) = d(t)$$ Now you can proceed inductively to sweep out the domains where $f$ and $g$ are defined. For the zeroth, $f$ is defined on $0 \leq x \leq 1$, which from the first equation defines $g$ on $[-1,0]$, and similarly having $g$ defined on the unit interval defines $f$ on $[1,2]$ from the second equation. Having $f$ defined on $[n,n+1]$ means $g$ is defined on $[-n-1,-n]$ using the first equation. Having $g$ defined on $[-n,1-n]$ means that $f$ is now defined on $[n+1,n+2]$ from the second equation. Rinse and repeat to define $f$ for $x> 0$ and $g$ for $x < 0$. So rather than being overdetermined, I think you actually end up being underdetermined: you don't care what $f$ is for $x<0$, or $g$ for $x>1$. This actually makes sense: $f$ is a wave travelling in the negative direction, so the domain $[0,1]$ never "sees" what's travelling away from it. similarly, $g$ is a wave travelling in the positive direction.
H: proving continuity For each $n \in \mathbb{N}$, let $$f_n(x) = \begin{cases} 0, & x \in \mathbb{Q} \cap([-1,-1/n] \cup [1/n,1]) \\ 1, & x \in \mathbb{Q}^c\cap([-1,-1/n] \cup [1/n,1]) \\ 0, & x \in (-1/n,1/n) \\ \end{cases} $$ Prove that for each $n \in \mathbb{N}$, $f_n$ is continuous at $0$. Find $\lim\limits_{ n\to \infty} f_n(x)$ for each $x \in (-1,1)$. Let $f(x)=\lim\limits_{ n\to \infty} f_n(x)$ for each $x \in (-1,1)$. Is $f$ continuous at $0$? Justify your answer. I proved that $f_n(x)$ is continuous at $0$ by the epsilon-delta definition. Then $$\lim_{n\to \infty} f_n(x) = \begin{cases} 0, & x \in \mathbb{Q}\cap([-1,-1/n] \cup [1/n,1]) \\ 1, & x \in \mathbb{Q}^c\cap([-1,-1/n] \cup [1/n,1)) \\ 0, & x \in (-1/n,1/n) \end{cases}$$ This is equal to $$\lim_{n\to \infty} f_n(x) = \begin{cases} 0, & x \in \mathbb{Q}\cap[-1,1] \\ 1, & x \in \mathbb{Q}^c\cap[-1,1] \end{cases}$$ But I am not sure of the ranges here. When the limit is $1$, I am not sure if the range can be written as $x \in \mathbb{Q}^c\cap[-1,1]$. Should it be $x \in \mathbb{Q}^c\cap([-1,-1/n] \cup [1/n,1))$? And how do I show $f$ is continuous at $0$? AI: Your answer is wrong, assuming you haven't mis-typed it. $f_n$ is defined as equal to the Dirichlet function on $[-1,1]$, except on a small neighborhood of $0$, namely $(-1/n,1/n)$, where it is constant and equal to $0$. Since that open interval will shrink to the single point $0$ when $n\rightarrow\infty$, $f(x)$ is simply equal to the Dirichlet function everywhere on $[-1,1]$ except $0$, where $f(x) = 0$. But $0$ is rational, so the Dirichlet function is equal to $0$ at $0$, anyway. So actually, we have simply: $$\lim_{n\rightarrow\infty}f_n(x)=f(x)= \begin{cases} 0, & x\in\Bbb Q\cap[-1,1]\\ 1, & x\in\Bbb Q^c\cap[-1,1] \end{cases}$$ Showing that the Dirichlet function is continuous at $0$ is not so difficult. Your question as to the range shows some confusion: it would be meaningless to use the expression $x\in\Bbb Q^c\cap([-1,-1/n]\cup[1/n,1])$ or anything like it in the expression for the limit of $f_n$. You cannot use $n$ in the expression for the limit of something as $n\rightarrow\infty$, because what would $n$ be equal to then? It is not a fixed variable; it is simply something that is being made to "tend towards" infinity.
H: A canonical homomorphism of sheaves of modules Let $\mathcal F$ be a sheaf of $\mathcal O_X$ modules on a scheme $X$. Fix an affine open subset $U$. If $M$ is a module over the coordinate ring of $U$, we let $\tilde{M}$ denote the associated sheaf of modules. Why do we have a morphism of sheaves $$\mathcal F (U)^\tilde{} \rightarrow \mathcal F|_U?$$ Of course on the set $U$ this is obvious (it's the identity), but I don't see how to define the morphism on subsets $V\subset U$. It suffices to do it just on principal open subsets. On these, we know that $ F (U)^\tilde{}$ has a nice description: on $D(f)$, it is $F (U)^\tilde{}_f$. But I don't know what $\mathcal F_U$ is on such sets, so I don't see how to define the homomorphism. (Perhaps I am missing something obvious -- it is quite late...) A reference to this morphism appears on page 160 of Liu. AI: Note that it suffices to take $X = \mathrm{Spec}(A)$ and to construct a homomorphism $$ (\Gamma(X, \mathscr{F}))^\sim \longrightarrow \mathscr{F}. $$ On an open subset $U = D(f)$ $(f \in A)$, the homomorphism $$ \Gamma(X, \mathscr{F})_f \to \Gamma(D(f), \mathscr{F}) $$ is induced by the restriction homomorphism $\Gamma(X,\mathscr{F}) \to \Gamma(U,\mathscr{F})$. To see this, note that $U$ is precisely the set of points $x \in X$ such that $f_x \in \mathscr{O}_x$ is invertible.
H: Is this true? $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$. Is this true? Given $f,g\colon\mathbb R\to\mathbb R$. $f(g(x))=g(f(x))\iff f^{-1}(g^{-1}(x))=g^{-1}(f^{-1}(x))$. I met this problem when dealing with a coding method, but I'm really not familiar with functions. Please help. Thank you. AI: Others have covered the case where $f$ and $g$ are invertible, and this may very well be the intended meaning of the original question. However, it's worth pointing out that the notation $f^{-1}(y)$ is also used even when $f$ is not injective. In general, suppose that $f:X\to Y$ and $A\subseteq Y$. Then $f^{-1}(A)$ is defined to be the set $\{x\in X:f(x)\in A\}$. Then $f^{-1}(y)$ is defined to be $f^{-1}(\{y\})$, with the caution that $f^{-1}(y)$ is now a set, which may contain more than one element or which may be empty. Now in this general setting the answer to the original question is still yes. Indeed if $f$ and $g$ both map $X\to X$ and $f\circ g(x)=g\circ f(x)$ for all $x\in X$ then \begin{eqnarray*}x\in f^{-1}\circ g^{-1}(y)&\Leftrightarrow& g\circ f(x)=y\\ &\Leftrightarrow& f\circ g(x)=y\\ &\Leftrightarrow& x\in g^{-1}\circ f^{-1}(y)\end{eqnarray*} That is, $f^{-1}\circ g^{-1}(y)$ and $g^{-1}\circ f^{-1} (y)$ contain the same elements; in other words they are equal. It follows that $f^{-1}\circ g^{-1}(A)=g^{-1}\circ f^{-1}(A)$ for any $A\subseteq X$. EDIT: Just saw the $\Leftrightarrow$ in the question; this answer just shows the $\Rightarrow$ implication.
H: a fundamental clarification about predicate expression (formula) I have a few foundational questions about expressions involving predicates. $\forall n\in \Bbb N.p(n) \tag {1.2}$ Here the symbol $\forall$ is read “for all.” The symbol $\Bbb N$ stands for the set of nonnegative integers, namely, $0, 1, 2, 3, \ldots$ (ask your instructor for the complete list). The symbol $\in$ is read as “is a member of,” or “belongs to,” or simply as “is in.” The period after the $\Bbb N$ is just a separator between phrases. Q1: What is the correct mathematical term for an expression like the one above? Is it a predicate formula? Q2: When should "." be used to separate phrases? In the same document I have seen terms such as $\forall n\in \Bbb N$ that are not separated using "."; is there a different meaning in such declarations? http://courses.csail.mit.edu/6.042/spring12/mcs.pdf AI: There is not one single correct term for expressions like $\forall n \in \Bbb N. p(n)$. "Predicate formula" seems as adequate as the more common "(well-formed) formula of predicate logic". A distinction worth pointing out is: (Well-formed) Formula (of predicate calculus): Any expression that you would call a "predicate formula", i.e. that can be formed using the rules of formation for predicate logic; a listing of those is e.g. here (there are as many variations in notation for predicate calculus as there are books on it, so don't mind the small differences with what you may have encountered before). Sentence (of predicate calculus): A formula where all variables are "quantified over". For example, $\forall n \in \Bbb N. p(n)$ is a sentence, while $\forall n \in \Bbb N. p(n,m)$ is not (the $m$ is not quantified over). As for the use of the dot, it is simply a syntactical/convenience device to help you extract the intended reading of the formula. Almost everyone is very sloppy in the use of these whenever possible, to keep formulae as readable as possible without being ambiguous. Thus, the following are all expressions of the same thing (I've encountered them all): $\forall n \in \Bbb N.p(n)$; $\forall n \in \Bbb N: p(n)$; $(\forall n \in \Bbb N)\, p(n)$; $\forall n \in \Bbb N\, (p(n))$; $\forall n \in \Bbb N\, p(n)$; $\forall n \in \Bbb N\,pn$. Outside of the context of formal logic, many authors take the liberty to write down things that are not strictly speaking admissible as formulae, but nonetheless their reading is (or at least is intended to be) completely clear. So if you are asking yourself questions like: Is this expression just a sloppy version of that other, formally correct expression? then I can assure you that it is safe to assume that the answer is "Yes".
H: If $G$ is a graph with no independent set of size $4$, prove that it is not $4$-colorable Given $G$ a graph with $n \geq {7 \choose 3} = 35$ vertices, and with no independent sets of size $4$, prove it is not $4$-colorable. (An independent set of vertices is a set of vertices with no edges between any two of them.) This was in the exam I did today. This was my answer, and my professor tells me it is probably wrong (after I asked him personally): Assume it is $4$-colorable; since we have at least $35$ vertices, we take the $4$ colours as cages and the $n \geq 35$ vertices as pigeons, so according to the pigeonhole principle there is a cage with at least $4$ vertices. If it has exactly $4$, you take them; they are of the same colour, so according to the rules of colouring they form an independent set of $4$ vertices. If it has more, say $k > 4$, then you take $4$ out of these $k$ vertices, and again they form an independent set of size $4$. This contradicts the given information that the graph has no independent set of size $4$. What exactly is wrong with that? AI: There’s nothing wrong with it, though you could have drawn a stronger conclusion from the pigeonhole principle: with $35$ vertices and $4$ colors, there must be at least $9$ vertices of one color, so there must be an independent set of at least $9$ vertices, hence certainly one of $4$. Someone else asked about this problem recently and specifically asked about a proof using Ramsey theory; the answer given there covers that (unnecessarily complicated) argument as well.
H: Is Binet's formula for the Fibonacci numbers exact? Is Binet's formula for the Fibonacci numbers exact? $F_n = \frac{(1+\sqrt{5})^n-(1-\sqrt{5})^n}{2^n \sqrt{5}}$ If so, how, given the irrational numbers in it? Thanks. AI: As others have noted, the $\sqrt 5$ parts cancel, leaving an integer. We can recover the Fibonacci recurrence formula from Binet as follows: $$F_n+F_{n-1} = \frac{(1+\sqrt{5})^n-(1-\sqrt{5})^n}{2^n \sqrt{5}}+\frac{(1+\sqrt{5})^{n-1}-(1-\sqrt{5})^{n-1}}{2^{n-1} \sqrt{5}}=$$$$\frac{(1+\sqrt{5})^{n-1}(1+\sqrt 5+2)-(1-\sqrt{5})^{n-1}(1-\sqrt5+2)}{2^n \sqrt{5}}$$ Then we notice that $(1\pm\sqrt5)^2=6\pm2\sqrt 5=2(3\pm\sqrt5)$ And we use this to simplify the final expression to $F_{n+1}$ so that $F_n+F_{n-1} =F_{n+1}$ And the recurrence shows that if two successive $F_r$ are integers, every Fibonacci number from that point on is an integer. Choose $r=0,1$. This is another way of proving that the cancellation happens.
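To see the cancellation happen in exact arithmetic (my own illustration), one can compute in $\mathbb{Z}[\sqrt5]$, storing $x + y\sqrt5$ as an integer pair: the rational parts of $(1+\sqrt5)^n$ and $(1-\sqrt5)^n$ agree exactly, and what remains is divisible by $2^n$.

```python
def mul(u, v):
    """(a + b*sqrt5) * (c + d*sqrt5) in Z[sqrt5], stored as integer pairs."""
    (a, b), (c, d) = u, v
    return (a * c + 5 * b * d, a * d + b * c)

def binet(n):
    plus = minus = (1, 0)
    for _ in range(n):
        plus = mul(plus, (1, 1))       # (1 + sqrt5)^n
        minus = mul(minus, (1, -1))    # (1 - sqrt5)^n
    a, b = plus[0] - minus[0], plus[1] - minus[1]
    assert a == 0 and b % 2**n == 0    # rational parts cancel; 2^n divides the rest
    return b // 2**n

print([binet(n) for n in range(1, 11)])   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```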
H: Constant Growth Rate Problem Say the population of a city is increasing at a constant rate of 11.5% per year. If the population is currently 2000, estimate how long it will take for the population to reach 3000. How can this be solved using the formula below? I know how to solve it when we input $x$ number of years, but not when the number of years is the unknown. AI: So, we have that after $x$ years, there are 3000 people, where we started from 2000. Thus $$3000=2000 e^{rx}$$ which can be rewritten as $$e^{rx}=1.5$$ We also know that after 1 year, there is an 11.5% increase. Thus $$e^r=1.115$$ Combining those two formulas, we have that $$(1.115)^x=1.5$$ So all we have to do is see how many times we have to multiply 1.115 by itself to obtain 1.5. Since this will likely involve a non-integer, we take the smallest integer for which we exceed 1.5, which is 4. So somewhere in the fourth year, the population exceeds 3000.
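For the exact (non-integer) crossing time, solve $1.115^x = 1.5$ directly (a check on the answer):

```python
import math

x = math.log(1.5) / math.log(1.115)
print(x)                      # about 3.73 years, i.e. during the fourth year
print(1.115**3, 1.115**4)     # 1.386... < 1.5 < 1.546...
```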
H: Measures on Iwasawa decomposition In the following I present two results, which look very similar, but require different proofs. I'd like to know why the second result doesn't admit the same proof as the first. Lang $SL_2$ p39: Let $P$, and $K$ be closed subgroups of $G$ such that $G=PK$. Assume that the map $$(p,k) \mapsto pk$$ gives a topological isomorphism (not group iso). Assume that $G$ and $K$ are unimodular. Then $dx = dpdk$ Proof: Indeed there exists a left invariant measure on $G/K$, and $G/K$ is $P$-isomorphic to $P$ itself as a transformation space, under left translation. Hence this measure is a Haar measure on P. Now compare the preceding result with the following: Lang $SL_2$ p40: Assume that $P$ can be expressed as a product, $P = AN$ where $A, N$ are closed unimodular subgroups and $A$ normalizes $N$, i.e. $ana^{-1} \in N$ for $a \in A$ and $n\in N$. In other words, the map of $A \times N \to P$ given by $$(a,n) \mapsto an$$ is a topological isomorphism. Then $dp = dadn$ Proof: The measure $dadn$ is clearly left invariant under A. Let $n_1 \in N$. $$\int_A\int_Nf(n_1an)dnda=\int_A\int_Nf(aa^{-1}n_1an)dnda.$$ But $a^{-1}n_1a \in N$ so we can cancel it in the inner integral by left invariance of the Haar measure on N. This proves our assertion. Questions: 1) Why can't we simply show in the second case that $N = P/A$. This was done in the first proof even without knowing that $P$ normalized $K$. 2) We see that in both cases we need to prove left invariance of the group corresponding to the inner integral (the left invariance of the group corresponding to the outer integral is assumed in the hypothesis). My question is why. I would only expect that the invariance in the following sense would be necessary: $$f(an) \mapsto f(an_1n),$$ i.e. with the left multiplication acting exactly left to the integrating variable. This would be trivial because it is in the assumption. AI: 1) In the first case, you use the unimodularity of $G$. Since $P$ is not unimodular, you cannot argue the same way in the second case. 2) Let me answer as follows. The two theorems, as you mention, are similar: they both strive to express a Haar integral on a group as a double integral on two subgroups. However, their hypotheses are different, and this changes their proofs dramatically. Namely, the second one has an easy proof: you just transfer the product measure on $A\times N$ to $P$ and make use of the normalization assumption. Since you do not have this assumption in the first case, the proof is not so easy: you first need to deduce (by Theorem 1 on p. 37) the existence of a measure on the factor space $G/K$ and then transfer this measure to $P$; it is with respect to this measure on $P$ that the integral formula expressed by $\mathrm{d}x=\mathrm{d}p\,\mathrm{d}k$ holds.
H: Prove if there are 4 points in a unit circle then at least two are at distance less than or equal to $\sqrt2$ There is a unit circle and 4 points inside the circle. The problem is to prove that at least two are at distance less than or equal to $\sqrt2$. AI: An idea: taking the circle $\,S^1:=\{(x,y)\in\Bbb R^2\;;\;x^2+y^2=1\}\,$ in order to use some analytic geometry/linear algebra in case of need, we can argue as follows: If two points $\,w_i:=(x_i,y_i)\;,\;i=1,2\;$ lie in the same closed quadrant (including the respective parts of both axes in the quadrant), then their maximal distance is $\,\sqrt2\;$, because $$\|w_1-w_2\|^2=\|w_1\|^2+\|w_2\|^2-2\langle w_1\,,\,w_2\rangle$$ and within a closed quadrant the angle $\theta$ between $w_1$ and $w_2$ satisfies $0\le\theta\le\frac\pi2$, so $$\langle w_1\,,\,w_2\rangle=\|w_1\|\,\|w_2\|\cos\theta\ge0\implies \|w_1-w_2\|^2\le \|w_1\|^2+\|w_2\|^2\le2$$ After the above one can see that the maximal possible distance between four points on the unit circle or within the unit disk is attained when they are on the intersection of the circle with the axes, i.e. on the points $\,(-1,0)\,,\,(1,0)\,,\,(0,-1)\,,\,(0,1)\,$, and then the distance between any two consecutive (clockwise or anticlockwise) points is precisely $\,\sqrt2\,$ (between antipodal points it is, of course, $\,2\,$...)
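A randomized sanity check of the statement (not a proof; the sampling scheme is my own):

```python
import math, random

random.seed(1)
for _ in range(10_000):
    pts = []
    while len(pts) < 4:                      # rejection-sample points in the unit disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            pts.append((x, y))
    dmin = min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    assert dmin <= math.sqrt(2) + 1e-12
print("no counterexample found")
```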
H: Graphing Basic Exponential Functions How can we use the graph of $y=2^x$ to sketch the graph of $y=2^{x-1}$? AI: Note that for any function $f$, the graph of $y = f(x-1)$ is just the graph of $y = f(x)$ shifted one unit to the right. Here $f(x)=2^x$, so the graph of $y=2^{x-1}$ is the graph of $y=2^x$ shifted one unit to the right.
H: $\mathbb{S}^2$ as a fibre bundle I know, by the Hopf fibration, that $\mathbb{S}^3$ is an $\mathbb{S}^1$-fibre bundle over $\mathbb{S}^2$. Can $\mathbb{S}^2$ be an $\mathbb{S}^1$-fibre bundle over some manifold $M$? AI: If $F\to E\to B$ is a fiber bundle, $\chi(E)=\chi(F)\cdot\chi(B)$. But $\chi(S^1)=0$ and $\chi(S^2)=2\ne0$, so there is no $S^1$ bundle with total space $S^2$ (or with any even-dimensional sphere for that matter).
H: Find equation of plane formed by a point and line It is required to find the equation of a plane $Q$ formed by point $B\,(5,2,0)$ and the line (d) of parametric equation $$ \begin{align} x&=-2t+1\\ y&=2t-2 \\ z&=t \end{align}$$ What is the easiest way to find the equation ? Answer is: $x-y+4z-3 = 0$ AI: I think the easiest way is to find the value of the following determinant: $$\begin{vmatrix} (x-5) & (y-2) & (z-0)\\ 4 & 4 & 0\\ -2 &2 & 1 \end{vmatrix}=0$$ Note that the point $(1,-2,0)$ (obtained at $t=0$) is a point lying on the line and the vector $\langle -2,2,1\rangle$ is the leading vector for the line; the second row of the determinant is the vector from that point to $B$.
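A quick check that the stated plane actually contains $B$ and the whole line (plain Python, no extra libraries):

```python
def plane(x, y, z):
    return x - y + 4*z - 3          # the claimed equation x - y + 4z - 3 = 0

def line(t):
    return (-2*t + 1, 2*t - 2, t)   # the given parametric line

print(plane(5, 2, 0))                       # 0: the point B lies on the plane
print(plane(*line(0)), plane(*line(7)))     # 0 0: the line lies in the plane
```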
H: Does this function belong to $\mathcal{S}( \mathbb{R}) $ Let $\mathcal{S}( \mathbb{R}) $ be the Schwartz space, that is the space of all infinitely differentiable functions $f$ such that $f$ and all its derivatives are rapidly decreasing in the sense that $$ \sup_{x \in \mathbb{R}} |x|^k | f^{(l)}(x)| < \infty .$$ The question is: does $$f(x) = \frac{1}{\cosh(x)} = \text{sech}(x)\in \mathcal{S}( \mathbb{R}) ? $$ My thoughts are as follows: $ \cosh{x} \neq 0$ for all $x \in \mathbb{R}$ and $$ \lim_{x \rightarrow \infty} \text{sech}(x) = \lim_{x \rightarrow \infty} \frac{2 e^x}{1+ e^{2x}} = \lim_{x \rightarrow \infty} e^{-x}. $$ Similarly $$ \lim_{x \rightarrow -\infty} \text{sech}(x) = \lim_{x \rightarrow -\infty} \frac{2 e^{-x}}{1+ e^{-2x}} = \lim_{x \rightarrow -\infty} e^{x}. $$ Which shows that $\text{sech}(x) $ decays exponentially as $x \rightarrow \pm \infty$. Moreover $$f'(x) = -\tanh(x) \text{sech}(x) $$ and $$ \frac{d}{dx} \tanh(x) = \text{sech}^2(x) , $$ which shows that $$ f^{(l)}(x) = \text{sech}(x) p(\text{sech}(x) , \tanh(x)) ,$$ where $p$ is some polynomial. Thus $f$ is infinitely differentiable. Since $$ \sup_{x \in \mathbb{R}} |x|^k | \text{sech}(x)| < \infty $$ we also have $$ \sup_{x \in \mathbb{R}} |x|^k | f^{(l)}(x)| < \infty .$$ Hence $f \in \mathcal{S}( \mathbb{R}) $. Is this correct? I have no idea if $f \in \mathcal{S}( \mathbb{R}) $ or not. Thanks AI: Basically, it is correct. I have some issue with $$\lim_{x \rightarrow \infty} \text{sech}(x) = \lim_{x \rightarrow \infty} \frac{2 e^x}{1+ e^{2x}} = \lim_{x \rightarrow \infty} e^{-x},$$ since, as written, you would seem to reach the same conclusion for $f(x) = \frac{1}{1+x^2}$, since that also has limit $0$ for $x \to \pm\infty$. Show that $f(x) \sim 2e^{-\lvert x\rvert}$ by considering $\lim\limits_{x \to \pm\infty} f(x)e^{\lvert x\rvert}$. Near the end, it would IMO be beneficial to explicitly state that the factor $$p(\operatorname{sech}(x),\tanh(x))$$ in the expression for $f^{(l)}(x)$ is bounded, since $\tanh$ and $\operatorname{sech}$ are both bounded. Without that, the assertion $$\sup_{x \in \mathbb{R}} \lvert x\rvert^k \lvert f^{(l)}(x)\rvert < \infty$$ hangs in the void a little.
H: Check my solution of: H is compact $\iff$ every cover {${E_\beta}$}$_{\beta \in A}$ of $H$ has a finite subcover. Question: Let $H ⊆ \Bbb R^n$ Prove that H is compact $\iff$ every cover $\{{E_\beta}\}_{\beta \in A}$ of $H$, where $E_\beta$ 's are relatively open in $H$, has a finite subcover. Solution: I did this solution with my friend. Therefore I posted the solution as a picture. Can you please check my solution? Is it correct or does there exist any mistakes or is there something missing? AI: You are proving that when $H$ is a topological subspace of a larger space $X$, every open covering of $H$ in $X$ induces an open covering of $H$ in $H$ and vice-versa. If this was your real intent, your proof is correct. However I think that this is a little bit trivial to be an exercise in topology...
H: Construction of field extensions In our lecture notes of Algebra, we have the following construction: Let $K$ be a field and $P \in K[X]$ be irreducible and monic. Let $L := K[X]/(P)$ and $a:=X+(P)$. Then, $L=K(a)$ is a field extension of $K$ and $P$ is the minimal polynomial of $a$. I am slightly confused by the definition of $a$. Isn't $a$ supposed to be an element of $L$? We define $a := X + (P)$ so basically we add a variable $X$ and an ideal $(P)$ to each other? What is the meaning of this addition? As an example, we are given $\mathbb{C} \cong \mathbb{R}[X]/(X^2+1)$. So in this case, $P = X^2 +1$. What is $a$? By above definition, it should be $a = X + (X^2+1)$, right? Did I just make a mistake in my lecture notes or is $a$ really supposed to be defined like this? AI: The point is, what is $K[X]/(P)$? Or more generally, given a ring $R$ and and ideal $I$ of $R$, what is $R/I$? $R/I$ is the set of cosets $r + I$, for $r \in R$. So $R/I = \{ r + I : r \in R \}$. In turn, $r + I$ is defined as $\{ r + u : u \in I\}$. (One could see that the $r+ I$ are the equivalence classes of the relation on $R$ given by $r \sim s$ if and only if $r - s \in I$.) $R/I$ becomes a ring defining $$ (r + I) + (s + I) = (r+s) + I,\quad\text{and}\quad (r + I) \cdot (s + I) = (rs) + I. $$ In this case $I = (P)$ is the principal ideal generated by $P$, so $(P) = \{ P Q : Q \in K[X] \}$.
H: if $X\sim U(0,1)$ show that $(b-a)X+a \sim U(a,b)$ I'm using the MGF method, this is what I get: $$ \begin{align} Y&=(b-a)X+a\\ M_Y(t)&=E[e^{(b-a)X}e^a] \\ &=E[e^{(b-a)X]}e^a &\text{I think this is my error} \\ &= M_x((b-a)t)e^a\\ &=E[\dfrac{e^{(b-a)t}}{(b-a)t}]e^a \end{align} $$ and then the $e^a$ messes it up. AI: Your original definition of the MGF can't be right - it doesn't involve $t$. I also think you (incorrectly) added an expectation to the MGF for $X$, after it had already been computed. Instead: $$ \DeclareMathOperator{\E}{\mathbb{E}}M_Y(t)=\E[e^{t((b-a)X+a)}]=e^{at}\E[e^{t(b-a)X}]=e^{at}M_X((b-a)t) $$ Assuming you have already shown that the MGF of a uniform variable on $(0,1)$ is $$ M_X(t)=\frac{e^t-1}{t}, $$ this yields $$ M_Y(t)=e^{at}\frac{e^{(b-a)t}-1}{(b-a)t}=\frac{e^{bt}-e^{at}}{(b-a)t}, $$ as you wanted.
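A Monte Carlo sanity check of this MGF computation (a sketch assuming `numpy` is available; the values of $a$, $b$, $t$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, t = 2.0, 5.0, 0.7            # arbitrary test values with a < b and t != 0
x = rng.uniform(0.0, 1.0, 1_000_000)
y = (b - a) * x + a                # the transformed variable Y = (b - a)X + a

# Monte Carlo estimate of the MGF of Y versus the closed form for U(a, b)
mgf_mc = np.exp(t * y).mean()
mgf_exact = (np.exp(b * t) - np.exp(a * t)) / ((b - a) * t)
print(mgf_mc, mgf_exact)           # the two values should agree to ~2-3 decimals
```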
H: Isomorphic varieties I just want to see if my approach for this problem is fine: Show $W=\mathbb{P}^1 \times \mathbb{P}^1$ is not isomorphic to $W'=\mathbb{P}^2.$ Well $V= \{ [0:1] \} \times \mathbb{P}^1, V' = \{ [1:0] \} \times \mathbb{P}^1$ are closed subvarieties of $W$ each isomorphic to $\mathbb{P}^1$ so each of dimension $1.$ So $W$ has two dimension 1 closed subvarieties that don't intersect, while $W'$ does not (any two projective plane curves intersect) and thus they are not isomorphic. Edit: Isomorphic as varieties. AI: The argument seems fine, though you should clarify what class of objects you are working in: algebraic varieties? projective varieties? Note that they are not even homeomorphic (or even homotopy equivalent) as topological spaces, since $H_2(W;\mathbb{Z})=\mathbb{Z}^2$ while $H_2(W';\mathbb{Z})=\mathbb{Z}$. The same argument can be stated with $\pi_2$ in place of $H_2$; some people think that the groups $\pi_i$ are easier to define than $H_i$.
H: Under what conditions are the variables $a, b$ satisfying $A \cap B = \emptyset$ and $A \cap B = \{0\}$ I am struggling with the following question: You have the two sets $A=\{x\in\mathbb R; |x-a|\le1\}$ and $B=\{x\in\mathbb R; |x-b|\ge2\}.$ Give the conditions under which the variables $a$ and $b$ satisfy $A \cap B = \emptyset$ and $A \cap B =\{0\}.$ I am not sure how to think here. But here is my start: $0 \leq| x−a | ≤ 1$ and $ 2 \leq | x−b |$ Which means that for $A \cap B = \emptyset$, the sets cannot "overlap" and they won't do that when $|a - b| \geq 1$. This is apparently wrong and the correct answer is: $|a - b| < 1,$ $b-1 < a < b + 1$ (according to my textbook). For the second question, when $A \cap B =\{0\}$, I know that the only element in the intersection should be $0$. I am not sure how to get to the correct answer, which (according to my textbook) is: $|a − b| = 3$, $a = b \pm 3$ I would be very happy if someone can help me out and explain how to think to get these answers. Thank you. AI: I'll give you some hints on how to tackle this problem. If you find something unclear, just leave a comment, and I'll be more than happy to help you. One thing you should note is that $|a - b|$ tells you how far apart $a$ and $b$ are. For example: $|5 - 9| = |-4| = 4$, which means 5 and 9 are 4 (units) apart. Another example is: $|-2 - (-7)| = |-2 + 7| = |5| = 5$, which means that -2 and -7 are 5 units apart (in the latter example, $a = -2$, and $b = -7$). If you still cannot imagine it, get some scrap paper and a pencil, draw a real line, and mark the points according to the numbers on it. So all values of $x$ that satisfy $\color{red}{|x - 5|} \color{blue}{< 1}$ are those whose distance from 5 is less than 1 ($\color{red}{|x - 5|}$ means 'the distance from $x$ to 5'; and $\color{blue}{< 1}$ basically means 'less than 1'); another way to put it, it's all numbers in the interval $(4; 6)$. If you cannot see why, use the real line. Analogously, all numbers $x$ satisfying $|x - 2| \ge 3$ are those whose distance from 2 is greater than or equal to 3; this is of course the set $(-\infty; -1] \cup [5; +\infty)$. Note that $-1$ and $5$ are included, since their distance from 2 is equal to 3. Also note that, in the above example, the numbers 4 and 6 do NOT satisfy our requirement (i.e., the requirement that the distance to 5 is less than 1; in fact, their distance from 5 is exactly 1). Ok, back to your problem: mark some arbitrary number $a$ on the real line, and find the set $A$, i.e. the set of all numbers whose distance from $a$ is less than or equal to 1. Do the same for the set $B$. So what conditions must $a$ and $b$ satisfy so that $A \cap B = \emptyset$, and $A \cap B = \{ 0 \}$?
H: where this series converges Given the series $$\sum_{j=0}^{\infty}\frac{1}{6j^2-5j+1}$$ I am completely stuck and do not understand the answer from my book which is $\pi^2/36-1$. I need explanation and different approach how this result is gained. Thanks AI: Rewrite your series as : \begin{align} \tag{1}\sum_{j=0}^{\infty}\frac{1}{6j^2-5j+1}&=\sum_{j=0}^{\infty}\frac{1}{(3j-1)(2j-1)}\\ &=\sum_{j=0}^{\infty}\frac2{2j-1}-\frac3{3j-1}\\ \tag{2}&=-2+3+\sum_{j=1}^{\infty}\frac1{j-\frac 12}-\frac1{j-\frac 13}\\ &=1-\psi\left(1-\frac 12\right)+\psi\left(1-\frac 13\right)\\ \tag{3}&=1+2\ln(2)-\frac{3\ln(3)}2+\frac{\pi}{2\sqrt{3}}\\ &\approx 1.645275610234835007\\ \end{align} using the special values of the digamma function (or the Gauss digamma sum) and the resolution method exposed in the excellent Abramowitz and Stegun $(6.8)$. Here a more 'elementary' derivation is possible if we observe that for any integer $n>1$ : \begin{align} \sum_{j=1}^{\infty}\frac1{j-\frac 1n}-\frac1{j}&=\sum_{j=1}^{\infty}\int_0^1x^{j-\frac 1n-1}-x^{j-1}\;dx\\ &=\int_0^1 \sum_{k=0}^{\infty}\left(x^{k-1/n}-x^{k}\right)\;dx\\ &=\int_0^1 \frac{x^{-1/n}-1}{1-x}\;dx\\ \text{setting}\ x:=y^n\ \text{ we get :}\\ &=n\int_0^1 \frac{y^{-1}-1}{1-y^n}y^{n-1}\;dy\\ &=n\int_0^1 \frac{y^{n-2}-y^{n-1}}{1-y^n}\;dy\\ \text{that may be solved using}&\text{ partial fractions.} \end{align} From $(2)$ we need : \begin{align} \sum_{j=1}^{\infty}\frac1{j-1/2}-\frac1{j-1/3}&=2\int_0^1 \frac{1-y}{1-y^2}\;dy-3\int_0^1 \frac{y-y^2}{1-y^3}\;dy\\ &=2\ln(2)-3\int_0^1 \frac{y}{1+y+y^2}\;dy\\ &=2\ln(2)-\frac 32\left(\int_0^1 \frac{1+2y}{1+y+y^2}\;dy-\int_0^1 \frac 1{(3/4)+(y+1/2)^2}\;dy\right)\\ &=2\ln(2)-\frac 32\left|\ln(1+y+y^2)-\frac 2{\sqrt{3}}\arctan\left(\frac{1+2y}{\sqrt{3}}\right)\right|_0^1\\ &=2\ln(2)-\frac {3\ln(3)}2+\sqrt{3}\left(\arctan\left(\sqrt{3}\right)-\arctan\left(\frac1{\sqrt{3}}\right)\right)\\ \end{align} Adding $1$ from $(2)$ we get again the result $(3)$ : $$\boxed{\displaystyle \sum_{j=0}^{\infty}\frac{1}{6j^2-5j+1}=1+2\ln(2)-\frac{3\ln(3)}2+\frac{\pi}{2\sqrt{3}}}$$
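A quick numerical check of the boxed closed form (plain Python; the truncation point is arbitrary), which also confirms that the book's stated answer $\pi^2/36-1\approx-0.726$ cannot be the value of this series:

```python
from math import log, pi, sqrt

# partial sum of 1/(6j^2 - 5j + 1) = 1/((3j - 1)(2j - 1)), starting at j = 0
s = sum(1.0 / ((3*j - 1) * (2*j - 1)) for j in range(200_000))

closed = 1 + 2*log(2) - 1.5*log(3) + pi / (2 * sqrt(3))
print(s, closed)   # both ~ 1.6452756..., agreeing to about 6 decimal places
```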
H: Locally exact differential in a disk is exact I'm reading through Ahlfors' Complex Analysis text for self study, and I found difficulty with a proof. In chapter 4 he defines a locally exact differential as a differential who is exact in some neighborhood of every point of its domain. In the proof of theorem 16 however, he claims that such differential is exact in every open disk contained in the region. I fail to see why is it so: Each point should have its own radius, and these can get very small (?). I can use the fact that a locally exact differential of class $C^1$ is closed and apply Poincare's lemma, but no differentiability assumptions were made in the statement of the theorem. Any help will be appriciated. AI: All you need is a continuous $1$-form $\omega$. Since a disk is convex, you know that $\int_{\partial R}\omega=0$ for any rectangle $R$ contained in the disk. But this is all you need to define a potential function. See Theorem 1 in the same chapter.
H: How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? I applied integral test and found the series is divergent. I wonder if there exist easier solutions. AI: We can estimate. Note that $2^k \ge 2k$, and therefore $$\frac{1}{k}-\frac{1}{2^k} \ge \frac{1}{k}-\frac{1}{2k}=\frac{1}{2k}.$$ It is a familiar fact that $\sum \frac{1}{2k}$ diverges. Thus by Comparison so does our series.
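To see the divergence concretely, the partial sums track $\ln n$ (a plain-Python sketch; since $\sum_{k\ge1}2^{-k}=1$ and $\sum_{k\le n}\frac1k\approx\ln n+\gamma$, the partial sums behave roughly like $\ln n+\gamma-1$):

```python
from math import log

s = 0.0
for n in range(1, 1_000_001):
    s += 1.0 / n - 0.5**n
    if n in (10, 1_000, 1_000_000):
        print(n, s, log(n))   # the partial sums grow like log(n): divergence
```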
H: Holomorphic function with bounded real part Suppose that $f(z)$ is holomorphic over $|z| \leq R$, for some positive $R$, and that $f(0)=0$. Further, suppose that $Re(f(z)) \leq C$ for all $|z| \leq R$. How do we show that $|f(z)| \leq \dfrac{2Cr}{R-r}$ for all $|z| \leq r$, for any $0 < r < R$? Any help or hint would be greatly appreciated. Thanks! AI: Let $S(z) = \displaystyle \frac{-z}{z - 2C}$. Then $S(z)$ maps $\Re z < C$ conformally onto the unit disk with $S(0) = 0$. Then the composition $g(z) = S(f(Rz))$ maps the unit disk into the unit disk with $g(0) = 0$. We can apply the Schwarz lemma to conclude that $|g(z)| \le |z|$ on the unit disk. This corresponds to $$ \Bigg| \frac{f(z)}{f(z) - 2C} \Bigg | \le \Bigg| \frac{z}{R} \Bigg |$$ From this you get (using the triangle inequality, etc.) $$|f(z)| \le \frac{|z|2C}{R - |z|}$$ The final inequality follows from $a < b < R \implies \frac{a}{R-a} < \frac{b}{R-b}$.
H: Why is the well ordering principle counter-intuitive? I read here that while 'The Axiom of Choice agrees with the intuition of most mathematicians; the Well Ordering Principle is contrary to the intuition of most mathematicians'. I don't understand why this is so. According to Wikipedia, 'the well-ordering principle states that every non-empty set of positive integers contains a least element'. This is quite a straightforward definition. Is it the way you derive the equivalence of the Axiom of Choice and well ordering principle that makes it counter-intuitive or the definition of the least element? AI: The well-ordering principle just refers to $\mathbb{N}$. What is being referred to is the well-ordering theorem, which says that every set admits a well-order. That's the thing that seems a little bit crazy, if you consider how you might achieve this even for a "basic" set like $\mathbb{R}$. As for choice, a very non-objectionable version is "the product of an arbitrary family of non-empty sets is non-empty". It's equally difficult to think of what a counter-example might look like. If I would be so bold as to endorse a book on the topic, it would definitely be Horst Herrlich's Axiom of Choice. It will simultaneously convince you of the statement, of the statement's negation, and of neither. Cognitive dissonance eat your heart out.
H: Determine the lengths of the sides of a right triangle The positive real numbers $a,b,c$ are such that $a^2+b^2=c^2$, $c=b^2/a$ and $b-a=1$. Determine $a,b,c.$ AI: From $a^2+b^2=c^2$ , $c=b^2/a$ and $b-a=1$, $a^2+b^2 = b^4/a^2$, so $a^4+a^2b^2 = b^4$. We could substitute $b = a+1$ (if we do, I get $a^4-2a^3-5a^2-4a-1=0$, which I would rather not try to solve), but instead we will multiply by $4$ and use $4x^2-4x+1 = (2x-1)^2$. $\begin{align} 4a^4 &= 4b^4-4a^2b^2\\ 4a^4+a^4 &= 4b^4-4a^2b^2+a^4\\ 5a^4&=(2b^2-a^2)^2 \end{align} $ so, taking the square root with $2b^2-a^2$, $\begin{align} a^2\sqrt{5} &= 2b^2-a^2\\ a^2(\sqrt{5}+1) &= 2b^2\\ a\sqrt{\sqrt{5}+1} &= b\sqrt{2}\\ &= (a+1)\sqrt{2}\\ a(\sqrt{\sqrt{5}+1}- \sqrt{2}) &= \sqrt{2}\\ a &= \frac{\sqrt{2}}{\sqrt{\sqrt{5}+1}-\sqrt{2}}\\ b &=a+1\\ &= \frac{\sqrt{2}+\sqrt{\sqrt{5}+1}-\sqrt{2}}{\sqrt{\sqrt{5}+1}-\sqrt{2}}\\ &= \frac{\sqrt{\sqrt{5}+1}}{\sqrt{\sqrt{5}+1}-\sqrt{2}}\\ \end{align} $ To get $c$, let $u = \sqrt{\sqrt{5}+1}$ and $v = \sqrt{2}$, so $a = \frac{v}{u-v}$ and $b = \frac{u}{u-v}$. The first equation for $c$ is $c = \frac{b^2}{a} = \frac{u^2}{(u-v)^2}\frac{u-v}{v} =\frac{u^2 }{v(u-v)}$ or $c^2 = \frac{u^4 }{v^2(u-v)^2}$. The second equation for $c$ is $c^2=a^2+b^2 =\frac{v^2}{(u-v)^2} +\frac{u^2}{(u-v)^2} =\frac{u^2+v^2}{(u-v)^2} $. For the two expressions for $c^2$ to be equal, we need $\frac{u^4 }{v^2(u-v)^2} =\frac{u^2+v^2}{(u-v)^2} $ or $v^2(u^2+v^2) = u^4$. To check that these are equal: $\begin{align} u^2 &= \sqrt{5}+1\\ u^4 &= 6+2\sqrt{5}\\ v^2 &= 2\\ u^2+v^2 &= \sqrt{5}+3\\ v^2(u^2+v^2) &=2(\sqrt{5}+3)\\ &= 2\sqrt{5}+6\\ \end{align} $ so the two are equal (whew!). We see that $c = \frac{u^2 }{v(u-v)} = \frac{\sqrt{5}+1}{\sqrt{2}(\sqrt{\sqrt{5}+1}-\sqrt{2})} = \frac{\sqrt{5}+1}{\sqrt{2}\sqrt{\sqrt{5}+1}-2} $. Note: If we take the square root with $a^2-2b^2$, we get $\begin{align} a^2\sqrt{5} &= a^2-2b^2\\ a^2(1-\sqrt{5}) &= 2b^2\\ \end{align} $ which can not be since the left side is negative and the right side is positive.
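A numerical check of the final expressions (plain Python):

```python
from math import sqrt

u = sqrt(sqrt(5) + 1)
v = sqrt(2)
a = v / (u - v)
b = a + 1
c = b**2 / a

print(a, b, c)               # ~ 3.6762, 4.6762, 5.9482
print(b - a)                 # 1.0, as required
print(a**2 + b**2, c**2)     # both ~ 35.3815, so a^2 + b^2 = c^2 holds
```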
H: Shannon's entropy in a set of probabilities Let $P = p_1, \ldots, p_N$ be a set of probabilities (i.e., $0 \leq p_i \leq 1$). I can compute the Shannon's entropy as follows: $$ H(P) = -\sum_{i=1}^N p_i \log_2 p_i $$ Now, suppose I perform the following operations: I select some $p_i \in P$, and create the set $P_{\text{sub}} \subseteq P$ Some of the probabilities $p_i \in P_{\text{sub}}$ are decreased somehow The probabilities in $P_{\text{sub}}$ are normalized so that $\sum_i p_i = 1$ Is it possible that the entropy of the set $P_{\text{sub}}$ is greater than the entropy in $P$, i.e., $H(P) < H(P_{\text{sub}})$? AI: Yes, choose, for example, your initial probabilities as $[0.99\,\,0.05\,\,0.05]$, then choose your subset to be the last two probabilities $[0.05\,\,0.05]$ (normalizing we get $[0.5\,\,0.5]$).
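A quick check of this counterexample in Python (note the initial list need not sum to 1; the question only requires $0\le p_i\le1$ for the set $P$):

```python
from math import log2

def entropy(ps):
    """Shannon entropy in bits of a list of probabilities (0*log 0 taken as 0)."""
    return -sum(p * log2(p) for p in ps if p > 0)

P = [0.99, 0.05, 0.05]          # the counterexample from the answer
P_sub = [0.05, 0.05]
P_sub_norm = [p / sum(P_sub) for p in P_sub]   # normalizes to [0.5, 0.5]

print(entropy(P))               # ~ 0.447 bits
print(entropy(P_sub_norm))      # exactly 1 bit, so H(P) < H(P_sub)
```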
H: When solving an ODE using power series method, Why do we need to expand the solution around the singular point? When solving a differential equation using series expansion method, if it has the following form : $$y''+\frac{p(x)}{x}y'+\frac{q(x)}{x^2}y=0$$ ; where $p$ and $q$ are analytic at $x_0$; if we want to find the solution in a series fprm expanded around $x_0$, it can't be solved using the regular method of series expansion and we must use Frobenius method to find the series around $x_0$. Why do we need to expand around the singular point? why we don't solve the equation by expanding $y$ around a regular, nonsingular point (I mean a point different than $x_0$) and without using Frobenius method ? Is it a matter of convergence of the solution series at the singularity point? AI: First of all expansion around singular points are important/interesting for applications. For example, Bessel equation $$x^2y''+xy'+(x^2-\nu^2)y=0$$ often arises in problems with axial symmetry. Its singular point $x=0$ corresponds precisely to approaching the symmetry axis. Second, expanding around an arbitrarily chosen point, in general, does not allow to obtain all expansion coefficients in a nice closed form - one only finds a recursion formula they satisfy. Whereas the recursion relation for the corresponding expansion coefficients around a singularity is often solvable. Example: the very same Bessel equation has a nice series solution $$J_{\nu}(x)=\sum_{k=0}^{\infty}\frac{(-1)^k}{k!\,\Gamma(k+\nu+1)}\left(\frac{x}{2}\right)^{2k+\nu}.$$ A more philosophical viewpoint: any 2nd order differential equation can be written as a rank 2 1st order linear system. Suppose we have its $2\times 2$ fundamental matrix solution $\Phi(x)$. Being analytically continued along a closed path on the Riemann sphere, $\Phi$ will be multiplied by a constant matrix, called monodromy matrix, which depends only on the homotopy class of the path. If we choose $\Phi$ arbitrarily, the monodromy matrix for the loop encircling a particular singular point is nothing special. However, when $\Phi$ is constructed from the solutions obtained by the Frobenius method, the monodromy matrix is special (e.g. diagonal in the Bessel case). This is to say that such solutions are rather distinguished from the complex-analytic point of view. Yet in other words, Frobenius method suggests a particular "good" basis in the solution space. For instance, this choice of the basis is largely responsible for the existence of differentiation and recursion formulas for Bessel functions.
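The closed-form coefficients obtained by the Frobenius method can be checked numerically by truncating the series (a sketch in plain Python using `math.gamma`; the comparison values for $J_0$ and $J_1$ are standard):

```python
from math import gamma

def J_series(nu, x, terms=40):
    """Truncated Bessel series J_nu(x) as obtained by the Frobenius method."""
    return sum((-1)**k / (gamma(k + 1) * gamma(k + nu + 1)) * (x / 2)**(2*k + nu)
               for k in range(terms))

print(J_series(0.0, 2.404825557695773))   # ~ 0: the first zero of J_0
print(J_series(1.0, 1.0))                 # ~ 0.4400505857: the known value of J_1(1)
```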
H: Inequality involving definite integral Just wondering, what may be the best way to show that $$\int_0^1 xf(x)dx \leq \frac{1}{2}\int_0^1 f(x)dx,$$ provided that $f(x) \geq 0$ over the interval $[0,1]$ and that $f(x)$ is monotonically decreasing? Thanks! AI: $f$ and $x\mapsto x$ are of opposing monoticity, thus by Chebyshev's inequality for integrals: $$\int_0^1 xf(x)\,dx\leq \left(\int_0^1 f(x)\,dx\right)\left(\int_0^1 x\,dx\right)=\frac{1}{2}\int_0^1 f(x)\,dx$$ If you are interested in a method without knowledge of the inequality, as suggested above: $$\begin{aligned}\int_0^1 \left(\frac{1}{2}-x\right)f(x)\,dx &=\int_0^{\frac{1}{2}}\left(\frac{1}{2}-x\right)f(x)\,dx+\int_{\frac{1}{2}}^1\left(\frac{1}{2}-x\right)f(x)\,dx\\&=\int_0^{\frac{1}{2}} xf\left(\frac{1}{2}-x\right)\,dx-\int_0^{\frac{1}{2}} xf\left(x+\frac{1}{2}\right)\,dx\\&=\int_0^{\frac{1}{2}}x\left[f\left(\frac{1}{2}-x\right)-f\left(\frac{1}{2}+x\right)\right]\,dx\\&=\int_0^{\frac{1}{2}}\left(\frac{1}{2}-x\right)\Big(f\left(x\right)-f\left(1-x\right)\Big)\,dx\geq 0\end{aligned}$$ (draw a sketch to see why the last inequality is true)
H: Directional derivative, gradient and a differential function Let there be some function $f$, some point $(x_0,y_0)$ and some vector $u$. Is $D_{u}f(x_0,y_0)=∇f(x_0,y_0)⋅u$ always correct? Even if the the function is not differentiable at the point? Or in more general, I didn't quite understand when that statement is true. Thanks in advance for any explanation! AI: The directional derivative can be expressed as a dot product with the gradient if the function is differentiable. Otherwise the formula $D_u f=\nabla f \cdot u$ may fail: you can take $$f(x,y)=\begin{cases} \frac{x^3}{x^2+y^2} & (x,y) \neq (0,0) \\ 0 & (x,y)=(0,0) \end{cases}. $$ as a counterexample
H: How find this value $\frac{a^2+b^2-c^2}{2ab}+\frac{a^2+c^2-b^2}{2ac}+\frac{b^2+c^2-a^2}{2bc}$ let $a,b,c$ such that $$\left(\dfrac{a^2+b^2-c^2}{2ab}\right)^2+\left(\dfrac{b^2+c^2-a^2}{2bc}\right)^2+\left(\dfrac{a^2+c^2-b^2}{2ac}\right)^2=3,$$ find the value $$\dfrac{a^2+b^2-c^2}{2ab}+\dfrac{a^2+c^2-b^2}{2ac}+\dfrac{b^2+c^2-a^2}{2bc}$$ is true? Yes, I think this problem can prove $$(a+b+c)(a+b-c)(a+c-b)(b+c-a)=0$$ so $$\dfrac{a^2+b^2-c^2}{2ab}+\dfrac{a^2+c^2-b^2}{2ac}+\dfrac{b^2+c^2-a^2}{2bc}=1\text{ or }-3$$ How many nice methods prove $$(a+b+c)(a+b-c)(a+c-b)(b+c-a)=0$$ ? and I see this easy problem http://zhidao.baidu.com/question/260913315.html AI: First, we simplify the initial equation by multiplying out by the denominator. Let $$f(a,b,c) = c^2(a^2+b^2-c^2)^2 + a^2(b^2+c^2-a^2)^2 + b^2(c^2+a^2-b^2)^2 - 12a^2b^2c^2$$ In Ron Gordon's deleted post, he realized that $f(a,b,c) = 0 $ if $a+b=c$, $b+c=a$, $c+a=b$. This strongly suggests that $a+b-c, b+c-a, c+a-b$ are factors (sort of Remainder Factor Theorem). And indeed they are. We have $$f(a,b,c) = -(a+b-c)(a-b+c)(-a+b+c)(a+b+c)(a^2+b^2+c^2)$$ which you can check in Wolfram. Since the denominators are non-zero, we have $(a^2+b^2+c^2)>0$. Thus $$f(a,b,c) = 0 \Leftrightarrow (a+b-c)(a-b+c)(-a+b+c)(a+b+c) = 0 $$ We now split into cases. Case 1. $(a+b-c)(a-b+c)(-a+b+c) = 0$ (Once again, multiply by denominators, and using Ron's observation.) Defining $$g(a,b,c) = c(a^2+b^2-c^2) + a(b^2+c^2-a^2) + b(c^2+a^2-b^2) - 2abc $$ gives us $$g(a,b,c) = (a+b-c)(a-b+c)(-a+b+c) = 0 $$ (the sign can be checked at $a=b=c=1$, where both sides equal $1$). Hence, the answer is 1. Case 2. $a+b+c = 0$. Then each term can be simplified in the form $\frac{ (a+b)^2 - c^2 - 2ab}{2ab} = -1$, hence the answer is -3.
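Both factorizations can be verified symbolically; a sketch assuming `sympy` is available:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

f = c**2*(a**2 + b**2 - c**2)**2 + a**2*(b**2 + c**2 - a**2)**2 \
    + b**2*(c**2 + a**2 - b**2)**2 - 12*a**2*b**2*c**2
g = c*(a**2 + b**2 - c**2) + a*(b**2 + c**2 - a**2) \
    + b*(c**2 + a**2 - b**2) - 2*a*b*c

f_claim = -(a + b - c)*(a - b + c)*(-a + b + c)*(a + b + c)*(a**2 + b**2 + c**2)
g_claim = (a + b - c)*(a - b + c)*(-a + b + c)

print(sp.expand(f - f_claim))   # 0: confirms the factorization of f
print(sp.expand(g - g_claim))   # 0: confirms the factorization of g
```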
H: How can I calculate the expected change of my function? I am attempting to model the following equation with 3 variables (H, M, and S) and 1 constant (C): D = .05*H*M + .05*S*(M+C) or, when factored D = .05*M (H+S) + .05*C(S) It'd be a simple matter to calculate the value of D at any point in the formula (just plug in the variables), but I'm more interested in the relationship between S, M, and H; I'd like to be able to determine how much D will change (dD) when I increase S, M, or H (or some combination thereof). It's been a while since I've used partial derivatives, but I believe that's what I'd need to use to answer this? I think I've got a handle for single values (i.e., partial derivative with respect to M only), but how do I calculate the expected change of both (for instance) increasing H by 1 and decreasing S by 1? Any help? AI: If $D(H,M,S) = .05 M (H+S) + .05 C(S)$, then $$\frac{\partial D(H,M,S)}{\partial H} = 0.05 M, \frac{\partial D(H,M,S)}{\partial M} = 0.05(S+H), \frac{\partial D(H,M,S)}{\partial S} = 0.05(M+C)$$ If you are trying to estimate the change due to 'small' changes in the parameters, you can use: $$D(H+h,M+m,S+s) \approx D(H,M,S) + \frac{\partial D(H,M,S)}{\partial H}h + \frac{\partial D(H,M,S)}{\partial M}m+\frac{\partial D(H,M,S)}{\partial S}s$$
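A minimal sketch of this estimate in Python (the operating point, the increments, and the value of the constant $C$ are arbitrary illustrations, not values from the question):

```python
def D(H, M, S, C=10.0):
    """The model from the question; C is a constant."""
    return 0.05*H*M + 0.05*S*(M + C)

def dD_estimate(H, M, S, h, m, s, C=10.0):
    """First-order (partial-derivative) estimate of the change in D."""
    return 0.05*M*h + 0.05*(S + H)*m + 0.05*(M + C)*s

H, M, S = 4.0, 3.0, 2.0        # arbitrary operating point
h, m, s = 1.0, 0.0, -1.0       # increase H by 1, decrease S by 1

print(D(H + h, M + m, S + s) - D(H, M, S))   # exact change: -0.5
print(dD_estimate(H, M, S, h, m, s))          # linear estimate: -0.5
```

Here the estimate happens to be exact, because $D$ has no $H$-$S$ cross term and $m=0$; with $m\neq0$ the exact change would also pick up the second-order terms $0.05\,hm$ and $0.05\,sm$.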
H: Projection onto plane given by matrix without full rank This should be a really simple question, but I can't seem to remember my linear algebra. Suppose I have a complex $m\times n$ matrix $A$, $m > n$, that may not have full rank (hence $A^{*}A$ may not be invertible). Consider the linear equation system $Ax = y$. Given $z$ such that $Az \neq y$, how do I find the $z_P$ with $Az_P = y$ that minimizes $\|z-z_P\|$? AI: We have $A(z-z_p)=Az-y$. Therefore the minimum-norm solution (if it exists) is given by $z-z_p=A^+(Az-y)$, i.e. $z_p = z-A^+(Az-y)$, where $A^+$ denotes the Moore-Penrose pseudoinverse of $A$. So, you compute this $z_p$ and verify whether $Az_p=y$. If equality holds, it is the minimum-norm solution. Otherwise, $Az_p=y$ is not solvable.
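A numerical sketch with `numpy` (the particular construction of a rank-deficient $A$ with $y$ in its range is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

# A rank-deficient 5x3 system: the third column is a combination of the first two
A = rng.standard_normal((5, 2)) @ np.array([[1.0, 0.0, 2.0],
                                            [0.0, 1.0, -1.0]])
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true                       # guarantees y is in the range of A

z = rng.standard_normal(3)           # some point with A z != y
z_p = z - np.linalg.pinv(A) @ (A @ z - y)

print(np.allclose(A @ z_p, y))       # True: z_p solves the system
print(np.linalg.norm(z - z_p))       # minimal distance from z to the solution set
```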
H: Supremum over dense subset I'm interested about the following supremum: $$\sup_{g\in A}E[-gf]$$ where $A\subset\{g\in L^1:g\ge 0,E[g]=1\}$ and $f\in L^\infty$ is fixed. $E$ denotes the expectation, i.e. it is the integral with respect to a finite measure. If I have a dense subset $B$ of $A$ in the $L^1$ sense. Is the following true: $$\sup_{g\in A}E[-gf]=\sup_{h\in B}E[-hf]$$ $"\ge"$ is trivial, since $B\subset A$. For the other inequality, I need to use the density property. I know that I can finde a sequence $h_n$ in $B$ such that $-fh_n\to -fg$ in $L^1$ for a $g\in A$. My approach was: $$E[-fg]\le E[|fg|]=\lim_nE[|h_nf|]\le\sup_{h\in B}E[|hf|]$$ The problem is that on the RHS I have $|\cdot|$ instead of $E[-fh]$. How can I fix my argument? AI: What you actually have to prove is that $$\inf_{g\in A}E(gf)=\inf_{h\in B}E(hf).$$ Fix $\varepsilon>0$, there is $g\in A$ such that $E(gf)<\inf_{g\in A}E(gf)+\varepsilon$. Take $h\in B$ such that $E|g-h|<\varepsilon$. Then $$\inf_{h\in B}E(hf)\leqslant E(hf)\leqslant \varepsilon\lVert f\rVert_\infty+E(gf)<\varepsilon(\lVert f\rVert_\infty+1)+\inf_{g\in A}E(gf).$$
H: What is the definition of the slope of a linear function in the context of economic graphs? I only ask this because of the fact that economists tend to plot the dependent variable on the horizontal axis and the independent variable on the vertical, which is opposite to the "normal" way of doing things in math. This leads to the question of how to quantify the slope of a linear function when normally (in basic math) it is $$\frac{\Delta y}{\Delta x}=\frac{rise}{run}$$ but in economics you'd find the same variables reversed: $$\frac{\Delta x}{\Delta y}=\frac{run}{rise}$$ AI: The terms independent/dependent variables are used too loosely in mathematics. The difference quotients you listed are simply different ways of looking at the same thing; one is the reciprocal of the other. Take an economics graph and turn it 90 degrees, or 180 degrees: it will still give you the same information, it's just a different perspective.
H: three distinct positive integers $a, b, c$ such that the sum of any two is divisible by the third I need to determine three distinct positive integers $a, b, c$ such that the sum of any two is divisible by the third. I tried like without loss of generality let $a<b<c$ As, $a\mid (b+c)$ so $b+c=ak_1$ for some $k_1\in\mathbb{N}$ similarly $$a+b=k_2 c$$ $$a+c=k_3b$$ so adding them I get $k_1+k_2+k_3=2$, could anyone help me to proceed? AI: $$ak_1-b-c=0 \ \ \ \ \ (1)$$ $$a-bk_2+c=0 \ \ \ \ (2)$$ $$a+b-ck_3=0 \ \ \ \ (3)$$ Using this, for a non-trivial solution, $$\det\begin{pmatrix} k_1 & -1 & -1 \\ 1 & -k_2 &1 \\ 1&1&-k_3\end{pmatrix}=0$$ $$\implies k_1(k_2k_3-1)-(k_3+1)-(1+k_2)=0$$ $$\implies k_1\cdot k_2\cdot k_3=k_1+k_2+k_3+2$$ One solution in positive integers is $k_1=1,k_2=2,k_3=5$ (indeed $1\cdot2\cdot5=1+2+5+2$). Solving $(1),(2),(3)$ with these values we get $b=2c,\ a=3c\implies (a,b,c)=(3c,2c,c)$ for any positive integer $c$; for instance $(3,2,1)$, where $3\mid(2+1)$, $2\mid(3+1)$ and $1\mid(3+2)$.
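A brute-force search in Python supports the conclusion that (up to scaling) $(1,2,3)$ is the only pattern, at least in a small range:

```python
from itertools import combinations

hits = [(a, b, c) for a, b, c in combinations(range(1, 40), 3)
        if (b + c) % a == 0 and (a + c) % b == 0 and (a + b) % c == 0]
print(hits)   # only multiples of (1, 2, 3) appear: (1,2,3), (2,4,6), ...
```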
H: induction proof: $\sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$ I encountered the following induction proof on a practice exam for calculus: $$\sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6}$$ I have to prove this statement with induction. Can anyone please help me with this proof? AI: If $P(n): \sum_{k=1}^nk^2 = \frac{n(n+1)(2n+1)}{6},$ we see $P(1): 1^2=1$ and $\frac{1(1+1)(2\cdot1+1)}{6}=1$ so, $P(1)$ is true Let $P(m)$ is true, $$\sum_{k=1}^mk^2 = \frac{m(m+1)(2m+1)}{6}$$ For $P(m+1),$ $$ \frac{m(m+1)(2m+1)}{6}+(m+1)^2$$ $$=\frac{m(m+1)(2m+1)+6(m+1)^2}6$$ $$=\frac{(m+1)\{m(2m+1)+6(m+1)\}}6$$ $$=\frac{(m+1)(m+2)\{2(m+1)+1\}}6$$ as $m(2m+1)+6(m+1)=2m^2+7m+6=(m+2)(2m+3)$ So, $P(m+1)$ is true if $P(m)$ is true
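A spot check of the identity in plain Python (of course this only tests finitely many cases; the induction above is the actual proof):

```python
def sum_of_squares(n):
    return sum(k*k for k in range(1, n + 1))

for n in range(1, 20):
    assert sum_of_squares(n) == n*(n + 1)*(2*n + 1) // 6

print("formula verified for n = 1..19")
```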
H: Proof of $A\cap(B\cup C) = (A\cap B)\cup(A\cap C)$ I have to resit a calculus exam and for some reason set proofs were never my best friend... Anyway, on a practice exam I encountered the following proof: $$A\cap(B\cup C) = (A\cap B)\cup(A\cap C)$$ When I draw a Venn-diagram it seems quite obvious but I couldn't manage to write the proof down properly. If someone could help me, that'd be great! AI: If $x\in A\cap(B\cup C)$, then $x\in A$ and $x\in B\cup C$. $x\in B\cup C\implies (x\in B$ or $x\in C)$. So, $x\in A\cap(B\cup C)\implies x\in (A\cap B)$ or $ x\in (A\cap C)$ $\implies x\in (A\cap B)\cup(A\cap C)$ $\implies A\cap(B\cup C)\subseteq (A\cap B)\cup(A\cap C)$. Similarly, if $y\in (A\cap B)\cup(A\cap C),$ $\implies y\in (A\cap B)$ or $y\in (A\cap C),$ $\implies y\in A$ and $y\in (B$ or $C)$ $\implies y\in A$ and $y\in (B \cup C)$ $\implies y\in A\cap (B \cup C)$. Now, $A \subseteq B$ and $B \subseteq A \implies A=B$.
H: Finding parameter of equation $$x^4 + 1 = kx; k>0$$ The question is at what k the equation has 1 solution. I understand that it could be rephrased as follows: at what k $f(x) = x^4 - kx + 1$ has 1 $x:f(x) = 0$. but I've no idea how to solve a quartic equation. Also, the problem may be seen as finding point of intersection of parabola-like $x^4$ and the line $kx - 1$. In this case, the line is just a tangent for $x^4$. Thus, we can find its equation via derivative, i.e. $f'(x) = 4 x^3 \rightarrow y-1 = 4 x^3 (x-1) \rightarrow y = 4 x^4 - 4 x^3 + 1$. Now, we can compare two forms of the same line: $4 x^4 - 4 x^3 + 1 = kx - 1 $. I feel I've made a mistake somewhere, because if both equations describe the same line, they should give equal results for any x, for example: $x = 0 \rightarrow 0 - 0 + 1 \neq 0 - 1$ Could you help, please? AI: If we count multiplicity, a quartic polynomial can't have exactly $1$ real root, so the real root must be a double root. A double root is also a root of the derivative, which is $4x^3-k$. Thus the root must be $\sqrt[3]{k/4}$. Substitute in the equation. We get $(k/4)^{4/3}+1=k(k/4)^{1/3}$. This is easy to solve for $k$ if we rewrite the right-hand side as $4(k/4)^{4/3}$.
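Carrying out that last step: $3(k/4)^{4/3}=1$, hence $k=4\cdot3^{-3/4}\approx1.7548$. A numerical check (a sketch assuming `numpy` is available for the root finding; the rest is plain Python):

```python
import numpy as np

k = 4 * 3**(-0.75)                 # k = 4 * 3^(-3/4) ~ 1.7548
x = (k / 4)**(1/3)                 # the claimed double root, 3^(-1/4)

print(x**4 - k*x + 1)              # ~ 0: x is a root
print(4*x**3 - k)                  # ~ 0: it is also a root of the derivative
print(np.roots([1, 0, 0, -k, 1]))  # two complex roots plus a (numerically) double real root
```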
H: Is there an intuitive explanation for the formula for the number of observations in an average given two averages and a marginal observation? First, apologies for the long-winded title! I'm helping my 10 year old son with math, and we had a set of problems based on the following scenario: Given an average of a set, a single marginal value for the set, and the resulting new average, find out the original number of items in the set. With no further explanation, the book provides the following formula: take the difference between the original average and the new observation and then divide this difference by the difference of the old and new averages: (X - A1)/(A2 - A1) It is easy to answer the problems by simply plugging the values in the formula. I have two questions. Is there any way to intuitively explain this formula or concept? All of the other problems in the book we are using have been relatively easy to explain to a 10 year old using pictures or other techniques. The second question is how is this formula derived? In order to explain it, I tried to derive it myself. I started with the two formulas: A1 = S/N and A2 = (S+X)/(N+1) where S is the sum of the values in the set. And was unable to solve for N (or N+1) given X, A1, and A2, though obviously my algebra skills are rusty to non-existent at this point. (As I write this, I can't remember if we were solving for N or N+1 -- I don't have the book in front of me.) If the derivation is trivial, I'd like to know it. If it takes a fair amount of manipulation unsuitable for a 10 year old, I can probably figure it out myself eventually, so no need to provide the exact answer here. If there is no intuitive explanation, and the only way to grasp the concept is to do the algebraic manipulation of these two equations, proving the formula holds true, then we will probably skip this section of the book. As I noted, while many problems in the book we are using can be solved doing the algebra, they all, until this one, had a much easier way to look at the problem and solve it. Any insight would be greatly appreciated. AI: Let's say with the new item, you have $n+1$ items in the set. Then, the average of the values is $$A_{n+1} = \frac{\left(\sum_{i=1}^n x_i\right) + x_{n+1}}{n+1}.$$ Basically, the average is the sum of the first $n$ values, plus your new marginal value, divided by the total number of old and new values ($n$ old plus 1 new value). Then, $(n+1)A_{n+1} = x_{n+1} + \sum_{i=1}^nx_i$. Therefore, $\sum_{i=1}^n x_i = (n+1)A_{n+1}-x_{n+1}$. Now, before we added the marginal value, we had $$A_n = \frac{\sum_{i=1}^n x_i}{n}.$$ See how the summation term is the same as the equation above? Let's use that fact. $$A_n = \frac{(n+1)A_{n+1}-x_{n+1}}{n}.$$ We'll expand terms out a bit: $$\begin{align*} A_n &= \frac{nA_{n+1} + A_{n+1} -x_{n+1}}{n} \\ &= A_{n+1} + \frac{A_{n+1}}{n} - \frac{x_{n+1}}{n}. \end{align*} $$ Shifting things over a bit, $$\begin{align*} n(A_n-A_{n+1}) &= A_{n+1}-x_{n+1}\\ n &= \frac{A_{n+1}-x_{n+1}}{A_n-A_{n+1}} \end{align*} $$ This gives you $n$, the original number of items. (A note on the book's formula: with $X=x_{n+1}$, $A_1=A_n$ and $A_2=A_{n+1}$, multiplying our right hand side by $\frac{-1}{-1}$ gives $\frac{x_{n+1}-A_{n+1}}{A_{n+1}-A_n}$, whereas the book's $\frac{X-A_1}{A_2-A_1}=\frac{x_{n+1}-A_n}{A_{n+1}-A_n}$ has $A_n$ in the numerator instead of $A_{n+1}$. The two differ by exactly $\frac{A_{n+1}-A_n}{A_{n+1}-A_n}=1$, so the book's formula yields $n+1$, the number of items including the new observation; this fits your remark that you couldn't remember whether you were solving for $N$ or $N+1$.)
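A small worked example in Python (the sample values are arbitrary):

```python
def original_count(old_avg, new_avg, new_value):
    """Number of items before the new value was added."""
    return (new_avg - new_value) / (old_avg - new_avg)

values = [3, 7, 8, 10, 12]
new_value = 40
old_avg = sum(values) / len(values)                        # A_n = 8.0
new_avg = (sum(values) + new_value) / (len(values) + 1)    # A_{n+1} ~ 13.33

print(original_count(old_avg, new_avg, new_value))   # 5.0, the original count
print((new_value - old_avg) / (new_avg - old_avg))   # 6.0: the book's formula gives n + 1
```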
H: Relations between radius of convergence Let $R$ denote the radius of convergence. Then $\sum_{n=0} a_{2n} x^{2n}$ has $R = 2$, and $\sum_{n=0} a_{3n} x^{3n}$ has $R = 3$. How to prove that $\sum_{n=0} a_{n} x^{n}$ has $R \leq 2$? For simplification suppose that $a_n > 0\quad \forall n$. Then the following could be written:$$\frac {a_{2n} }{a_{2n+2}} = 2; \frac {a_{3n} }{a_{3n+3}} = 3 \Rightarrow a_{2n+2} = \frac 12 a_{2n}; a_{3n+3} = \frac 13 a_{3n}$$All three series share common $a_0$, thus $a_{2} = \frac 12 a_{0}; a_{3} = \frac 13 a_{0}$. AI: I will assume you know that following result: If a power series $\sum a_n x^n$ has radius of convergence $r$, then $\sum a_n x^n$ converges absolutely for any $x$ such that $|x|\lt r$. Suppose now that $\sum_1^\infty a_n x^n$ has radius of convergence $r\gt 2$. Let $s$ be strictly between $2$ and $r$. Then $$\sum |a_n|s^n$$ converges. This cannot be the case, since $\sum |a_{2n}| s^{2n}$ diverges. (This follows from the given fact that this series has radius of convergence $2$.) We conclude that $r\le 2$, which is what we needed to show.
H: Is it possible to define such a real inner product? Consider the $\mathbb R-$linear space $B(X,\mathbb C)$ of all bounded functions from $X\to\mathbb C$ for some $X\ne\emptyset.$ Then $\displaystyle||.||:B(X,\mathbb C)\to\mathbb R:f\mapsto\sup_{x\in X}|f(x)|$ is a norm on $B(X,\mathbb C).$ Is it possible to define a real inner product $\langle,\rangle: B(X,\mathbb C)\times B(X,\mathbb C)\to\mathbb R$ such that it generates the above norm? AI: If $X$ consists of more than one point then it is not going to happen. Assume $a,b\in X$ are distinct. Then you can have $f(x)=0$, for $x\neq b$, $g(x)=0$, for $x\neq a$, and $f(b)=g(a)=||f||=||g||=1$. Then $$||f+g||^2+||f-g||^2=|f(a)+g(a)|^2+|f(a)-g(a)|^2=2||f||^2=2\neq4=2||f||^2+2||g||^2.$$ For $X=\{a\}$ we have \begin{align}||f+g||^2+||f-g||^2&=|f(a)+g(a)|^2+|f(a)-g(a)|^2\\&=(f(a)+g(a))(\overline{f(a)}+\overline{g(a)})+(f(a)-g(a))(\overline{f(a)}-\overline{g(a)})\\&=2f(a)\overline{f(a)}+2g(a)\overline{g(a)}\\&=2|f(a)|^2+2|g(a)|^2\\&=2||f||^2+2||g||^2.\end{align} Therefore it comes from a scalar product. More directly, it comes from the real scalar product $$\left<f,g\right>:=\operatorname{Re}\bigl(f(a)\overline{g(a)}\bigr).$$
H: Check whether or not the series $\sum_{n=1}^{\infty}\sqrt{4n^2-4n+9}-(2n-1)$ converges I want to check whether or not the series converges. The series is: $$\sum_{n=1}^{\infty}\sqrt{4n^2-4n+9}-(2n-1)$$ The first thing I thought of was to multiply by the conjugate, and the result I get is: $$\sum_{n=1}^{\infty} \frac{4n^2-4n+9-(2n-1)^2}{\sqrt{4n^2-4n+9}+(2n-1)}$$ $$=\sum_{n=1}^{\infty} \frac{8}{\sqrt{4n^2-4n+9}+(2n-1)}$$ The requirement was to check it by the Comparison Test, so I decided that this fraction satisfies: $$\sum_{n=1}^{\infty} \frac{8}{\sqrt{4n^2-4n+9}+(2n-1)}\leq\frac{1}{n}$$ and this series is not convergent. The answer is convergent. What did I do wrong? Thanks! AI: We have $$\sqrt{4n^2-4n+9}-(2n-1)=\sqrt{(2n-1)^2+8}-(2n-1)=(2n-1)\left(\sqrt{1+\frac{8}{(2n-1)^2}}-1\right)\sim_\infty(2n-1)\left(1+\frac{1}{2}\frac{8}{(2n-1)^2}-1\right)=\frac{4}{(2n-1)}\sim_\infty\frac{2}{n}$$ so the given series is divergent.
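Numerically, $n\,a_n\to2$, consistent with the asymptotic $a_n\sim\frac{2}{n}$ derived in the answer (plain Python):

```python
from math import sqrt

# n * a_n should approach 2; a_n ~ 2/n forces divergence by limit comparison
for n in (10, 100, 1_000, 10_000):
    a_n = sqrt(4*n*n - 4*n + 9) - (2*n - 1)
    print(n, n * a_n)
```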
H: If Sam's age is twice the age Kelly was two years ago, Sam's age in four years will be how many times Kelly's age now? If Sam's age is twice the age Kelly was two years ago, Sam's age in four years will be how many times Kelly's age now? (A) .5 (B) 1 (C) 1.5 (D) 2 (E) 4 So say at -2 years Sam's age is 12 and Kelly's age is 6. Add one to each year Sam passes. -2 = 12 -1 = 13 0 = 14 (this would be the present year) 1 = 15 2 = 16 3 = 17 4 = 18 In four years, Sam will be 18 years old. With the same process with Kelly: -2 = 6 -1 = 7 0 = 8 Kelly is now 8 years old. 18/8 is 2.25, which is not part of the answer choice, and the answer is apparently D) 2. What did I do wrong? AI: Let $S$ denote Sam's age now, and let $K$ denote Kelly's age now. $(1)$: We are given that Sam's current age is twice the age Kelly was $2$ years ago, so we have that $$S = \text{twice}\left(\text{Kelly's age 2 years ago}\right)\tag{1}$$ $(2)$ We are also asked to determine what multiple $x$ of $K$ will be equal to Sam's age 4 years from now: $$\text{When will}\;xK\;\text{equal}\; S+ 4\quad ?\tag{2}$$ So we need to determine which of the given values for $x$ makes the following system "match": $$S = 2(K - 2) \iff S = \color{blue}{\bf 2}K - 4\tag{1}$$ $$S + 4 = xK \iff \;\;S = \color{blue}{\bf x}K - 4\tag{2}$$ Now, what value must $\color{blue}{\bf x}$ be to make $\color{blue}{\bf 2}K - 4 = \color{blue}{\bf x} K - 4$? (The slip in your table: "Sam's age is twice the age Kelly was two years ago" describes Sam's age now, so with your numbers Sam is 12 now, not two years ago. Then Sam is 16 in four years and Kelly is 8 now, giving $16/8 = 2$.)
H: $(A\cup B)\cap C = A\cup(B\cap C)$ if and only if $A\subset C$ I tried to prove this statement: $$[(A\cup B)\cap C = A\cup(B\cap C)]\iff A\subset C$$ I did it in the following way, can anyone tell me if it's correct what I've done? $\leftarrow$ Assume $A\subset C$, then $\forall x \in A$, $x\in C$ Then, $\forall x \in (A\cup B)\cap C$, $x\in C$ and $\in B$ Similarly, for $\forall x \in A\cup(B\cap C)$, $x\in B$ and $x\in C$ So $(A\cup B)\cap C = A\cup(B\cap C)$ $\rightarrow$ I didn't know how to do the counterpart. Could someone please help me out? AI: For the counterpart: assume that $(A\cup B)\cap C = A\cup(B\cap C)$ and let $x\in A$ then $x\in A\cup(B\cap C)$ so $x\in (A\cup B)\cap C$ and therefore $x\in C$ hence we find $$x\in A\Rightarrow x\in C$$ and you can conclude.
H: If $\sin a+\sin b=2$, then show that $\sin(a+b)=0$ If $\sin a+\sin b=2$, then show that $\sin(a+b)=0$. I have tried to solve this problem in the following way : \begin{align}&\sin a + \sin b=2 \\ \Rightarrow &2\sin\left(\frac{a+b}{2}\right)\cos\left(\frac{a-b}{2}\right)=2\\ \Rightarrow &\sin\left(\frac{a+b}{2}\right)\cos\left(\frac{a-b}{2}\right)=1 \end{align} What will be the next ? AI: HINT: As $-1\le \sin x\le 1$ for real $x$ $\implies -2\le \sin a+\sin b\le 2$ The equality occurs if $\sin a=\sin b=1$ $a=2n\pi+\frac\pi2, b=2m\pi+\frac\pi2,$ for some integer $m,n$ So, $a+b=2\pi(m+n)+\pi=\pi(2m+2n+1)$ and we know $\sin r\pi=0$ for integer $r$ Alternatively, if $\sin x=1,\cos x=\pm\sqrt{1-1^2}=0\implies $ here $\cos a=\cos b=0$ and we know $\sin(a+b)=\sin a\cos b+\cos a \sin b$
H: Topological Conditions Equivalent to "Very Disconnected" Definition: Let $(X, \mathcal{T})$ be a topological space, where the set $X$ has more than one element. Suppose that for every pair of distinct elements $a, b \in X$, there exists a separation $(A,B)$ of $X$ such that $a \in A$ and $b \in B$. Then we say $(X, \mathcal{T})$ is very disconnected. Is this condition (being "very disconnected") equivalent to another, well-known one? The definition above is my own, but I suspect it is equivalent to some pre-existing notion (e.g., a $T_{n \frac{1}{2}}$ space for some $n$). Here are a few propositions that I have proved about v.d. spaces: Any very disconnected space is disconnected. Any discrete space is very disconnected. There are very disconnected spaces that are not discrete. If a space is very disconnected, then all singletons are closed. All singletons closed does not imply the space is very disconnected. AI: You can define a relation where $x\sim y$ if there is no separation of the space $X$ between $x$ and $y$, or equivalently, each clopen set containing $x$ also contains $y$. It is very easy to check that this is an equivalence relation and that the equivalence class of $x$ is the intersection of all clopen neighborhoods. This class is called quasi-component of $x$. This relation is coarser than the relation defining connectedness, so a quasi-component is a disjoint union of components. There are conditions under which the components and the quasi-components coincide, for example when $X$ is compact Hausdorff, or when there are only finitely many quasi-components. It also holds if the components are open (which is the case for locally connected spaces or spaces with only a finite number of components.) An example of a totally disconnected space, i.e. all components are singletons, where the quasi-components are not the singletons is the sequence $$\left\{\frac1n\mid n\in\Bbb N\right\}\cup\{0,0'\}$$ converging to two distinct zeros, where the neighborhood base of $0$ is given by the intervals $[0,\epsilon),\ \epsilon>0$ and similarly for $0'$. Then each $0$ is a component but the quasi-component of $0$ is $\{0,0'\}$ Edit: @Chris Eagle points out that a space where the quasi-components are the singletons is called totally separated.
H: Why does this sequence happen like this? The other day I sent my girlfriend this text <3 she sent me back <3<3<3 not to be one upped I responded with <3<3<3<3<3<3<3 this got very silly very quickly. After our "<3" battle was over I got to thinking about the pattern we were forming. Since we were doubling the number of hearts and adding one I thought the sequence would be something like 2^n + 1. BUT IT IS NOT. It is 2^n - 1 . Why is this? AI: Because you started with $1=2^1-1$. Then $2*1+1=2*(2^1-1)+1=2^2-1$. Generally $2(2^n-1)+1=2^{n+1}-1$ When you double the $-1$ and add $1$, you get $-1$ back again.
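The recurrence in a few lines of Python:

```python
hearts = 1
for round_number in range(1, 6):
    print(round_number, hearts, 2**round_number - 1)  # the count always equals 2^n - 1
    hearts = 2 * hearts + 1                           # double and add one
```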
H: Finding a point on a circle I have a circle that I am trying to find a series of points on. I know the radius and horizontal tangent point at the top of the circle. I need to find a point that lies on the circle's circumference that is $x$ distance below the top point. AI: The vertical coordinate of that point, measured from the circle's center, will obviously be $Y=R-x$, where $R$ is the radius. The horizontal one follows from Pythagoras as $X=\sqrt{R^2-(R-x)^2} = \sqrt{2Rx - x^2}$. I don't see what you need the tangent point for, though; or is it just given by construction?
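A small helper in Python (the function name is just an illustration; coordinates are taken relative to the circle's center, and the mirror point on the left half is $(-X, Y)$):

```python
from math import sqrt

def point_below_top(R, x):
    """Point on the circle of radius R that is x below the topmost point (0, R).

    Requires 0 <= x <= 2*R. Returns the point on the right half of the circle.
    """
    X = sqrt(2*R*x - x*x)
    Y = R - x
    return X, Y

print(point_below_top(5.0, 2.0))   # (4.0, 3.0): a 3-4-5 sanity check, since 4^2 + 3^2 = 5^2
```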
H: $A$ is a subset of $B$ if and only if $P(A) \subset P(B)$ I had to prove the following for a trial calculus exam: $A\subset B$ if and only if $P(A) \subset P(B)$ where $P(A)$ is the set of all subsets of $A$. Can someone tell me if my approach is correct and please give the correct proof otherwise? $PROOF$: $\Big(\Longrightarrow\Big)$ assume $A\subset B$ is true. Then $\forall$ $a\in A$, $a\in B$ Then for $\forall$ A, the elements $a_1, a_2,$ ... , $a_n$ in A are also in B. Hence $P(A)\subset P(B)$ $\Big(\Longleftarrow\Big) $ assume $P(A) \subset P(B)$ is true. We prove this by contradiction so assume $A\not\subset B$ Then there is a set $A$ with an element $a$ in it, $a\notin$ B. Hence $P(A) \not\subset P(B)$ But we assumed $P(A) = P(B)$ is true. We reached a contradiction. Hence if $P(A) = P(B)$ then $A\subset B$. I proved it both sides now, please improve me if I did something wrong :-) AI: Neither direction seems to be valid. Here is the correct approach. $(\Rightarrow)$ Assume $A \subseteq B$. Then for any $C \in P(A)$, we have $C \subseteq A \subseteq B$. Hence, $C \in P(B)$. $(\Leftarrow)$ Assume $P(A) \subseteq P(B)$. Since $A \in P(A)$, $A \in P(B)$, meaning $A \subseteq B$.
H: Filter properties: Downward directed set and finite intersection property I have a list of questions concerning the properties of filters: (1) If a finite subset of a poset is downward directed is it necessarily closed under finite intersection? At the very least, if said subset is downward directed, is the intersection of any two subsets of the subset contained in the subset? (2) If a set is directed does it have a unique bound? For example, do all downward directed sets have a unique lower bound? (3) Can a filter of neighborhoods of x, where x \in R, be conceived of as a family of intervals containing x and in which x is the only element in the intersection of all subsets in said family? Thank you for your time. AI: (1) No. In the first place, in a general poset, there is no notion of intersection. Even if your poset is the set $P(X)$ of all subsets of a set $X$, so that "intersection" makes sense, downward-directedness does not guarantee anything about closure under intersections, even binary intersections. (2) No. Anything below a lower bound of a set is another lower bound of that set. The greatest lower bound of a set is unique if it exists, but in general posets it need not exist. In $P(X)$, greatest lower bounds exist, but the greatest lower bound of a downward-directed set need not be a member of that set. (3) A filter, being closed under supersets, won't consist only of intervals. In the real line, the neighborhood filter of a point $x$ consists of the open intervals containing $x$ and all the supersets of those intervals. $x$ is indeed the only point in the intersection of all those intervals.
H: Free $\mathbb{Z}$ module question Is $M=\left\{\frac{a}{2^n}| a \in \mathbb{Z}, n \in \mathbb{N}\right\}$ a free $\mathbb{Z}$-module? I am struggling with free modules and am not sure how I would check this? I know that the basis cannot be finite from another problem I have worked before. However, I am not sure about the infinite basis. AI: Since you already know that a basis, if one existed, would have to be infinite, you can complete the proof by noting that your group doesn't contain even two (let alone infinitely many) linearly independent elements.
H: Input expression with summations in Wolfram Alpha Wolfram understands this expression, but I need to take the limit as n tends to infinity of that expression. As you can see on the Wolfram page itself, in the last link it fails to understand the query. How can I do that? AI: As suggested in the comment, at the very bottom of the page (first link), you'll see "Expanded form". If you merely "click" on the expression, the page will open to this page, where you'll find the limit of the expression stated explicitly: Here's the limit (scroll down on the linked page to see it): $$\lim_{n\to\pm\infty}\dfrac{4+24n+26n^2}{3n^2}=\dfrac{26}{3}\approx8.66667$$ Note that once you know the "expanded form" (or closed form) of the sum: $$S(n) = \frac{4 + 24n + 26n^2}{3n^2},$$ finding $\lim\limits_{n \to \infty} S(n)$ is fairly straightforward: We can see this limit more readily if we simply divide the numerator and denominator by $n^2:$ $$\lim_{n \to \infty}S(n) = \large \lim_{n \to \infty}\frac{\frac{4}{n^2} + \frac{24}{n} + 26}{3} = \frac {26}{3}$$
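A quick numerical check of this limit in plain Python:

```python
def S(n):
    return (4 + 24*n + 26*n**2) / (3 * n**2)

for n in (10, 1_000, 100_000):
    print(n, S(n))        # approaches 26/3 ~ 8.66667
```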
H: Counting number of elements in the empty set When is was making some exercises I encountered the following exercise: $Exercise$: Let $P(A)$ denote the set of all subsets of an arbitrary set $A$. List first the elements of $P(\emptyset)$, then the elements of $P(P(\emptyset))$. Finally, check in two steps whether you have listed the correct number of elements. I wasn't quite sure how to handle this exercise. My idea was that $P(\emptyset)$ = {$\emptyset$}, thus $P(P(\emptyset))$ = {$\emptyset$} I am not sure if this is correct and I don't know how to check if they are the correct number of elements. Maybe with $2^n$ because this counts the number of all subsets of a set $A$? I hope someone can correct me and help me out. AI: $P(\{x\}) = \{\emptyset,\{x\}\}$ for any $x$. Then take $x=\emptyset$. Then note that $\emptyset \neq\{\emptyset\}$. So $P(\emptyset) = \{\emptyset\}$. And $P(P(\emptyset))=\{\emptyset,\{\emptyset\}\}$.
H: Direct sum of orthogonal subspaces I'm working on the following problem set. Let $\mathcal{H}$ be a Hilbert space and $A$ and $B$ orthogonal subspaces of $\mathcal{H}$. Prove or disprove: 1) If $A \oplus B$ is closed, then $A$ and $B$ are closed. 2) If $A$ and $B$ are closed, then $A \oplus B$ is closed. I could prove 1) and if my proof is correct, it even holds if $\mathcal{H}$ is just an inner product space. Unfortunately, I can't manage to prove 2). Since $\mathcal{H}$ is by assumption a Hilbert space and I didn't use that fact to prove 1), I should probably use it here. It means that $A$ and $B$ are also complete. Given some convergent sequence in $A \oplus B$, I want to show that the limit is also in $A \oplus B$. Here I'm stuck. I want to use the completeness of $A$ and $B$, but I don't see how to obtain suitable Cauchy sequences. Can anyone drop me a hint? Or is my approach all wrong? AI: If $(a_n+b_n)_n$ is a Cauchy sequence in $A\oplus B$ ($a_n\in A, b_n\in B$), then the identity $$\langle a_n+b_n-(a_m+b_m),a_n+b_n-(a_m+b_m)\rangle = \langle a_n-a_m,a_n-a_m\rangle+\langle b_n-b_m, b_n - b_m\rangle$$ (which follows by orthogonality of $A,B$) implies that $(a_n)_n$, $(b_n)_n$ are Cauchy sequences in $A$, $B$ respectively. Hence they converge to some $a\in A$ and $b\in B$, respectively, whence $a_n+b_n\rightarrow a+b\in A\oplus B$, so $A\oplus B$ is closed. The same argument also works the other way around.
H: Convergence of binomial to normal Problem: Let $X_n \sim \operatorname{Bin}(n,p_n) $ where $p_n \xrightarrow{} 0$ and $np_n \xrightarrow{} \infty$. What I need to show is that $$\frac{X_n - np_n}{\sqrt{np_n}} \xrightarrow{d} N(0,1) \text{ as } n\xrightarrow{} \infty.$$ My thoughts: My first thought was to set $$Y_n=\frac{X_n - np_n}{\sqrt{np_n}}$$ and investigate $P(Y_n=k) = p_{X_n}(\sqrt{np_n}k + np_n) $ as $n$ goes to infinity, but this led to very messy calculations so I don't know if it is the right approach. My second attempt was with Central Limit Theorem, rewriting $X_n$ as a sum of Bernoulli random variables, $X_n = Z_1 + \dots + Z_n$, $Z_n \sim \operatorname{Be}(p_n)$. The expectation and variance of each $Z_i$ is dependent on $n$ in that case. Will that violate any assumptions in CLT? I prefer working out these kind of questions from definition rather than using a theorem and previous results, so if anyone can show me that I would be very grateful! Thanks. AI: I would do this in two steps. First take $n\to \infty$ and $p\to 0$ while keeping $np=m$ for some finite integer $m$. That brings you from the Binomial distribution to the Poisson distribution with parameter $m$ because $$ P(X_n=k)=\binom{n}{k}p^k(1-p)^{n-k}\approx\frac{1}{k!}\left(\frac{np}{1-p}\right)^k(1-p)^n \rightarrow\frac{m^k}{k!}e^{-m}. $$ Then view this Poisson distribution with parameter $m$ as the result of summing $m$ independent Poisson random variables, each with parameter $1$. You can then use the central limit theorem to take $m\to\infty$.
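A simulation sketch with `numpy` (the choice $p_n=n^{-1/2}$ and the sample sizes are arbitrary, picked so that $p_n\to0$ while $np_n\to\infty$):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
p_n = n**(-0.5)                  # p_n -> 0 while n * p_n = sqrt(n) -> infinity
x = rng.binomial(n, p_n, size=100_000)
z = (x - n*p_n) / np.sqrt(n*p_n) # the standardized variable from the problem

print(z.mean(), z.std())         # ~ 0 and ~ 1
print(np.mean(z <= 1.96))        # ~ 0.975, matching the standard normal CDF
```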
H: The set of "variations" of an unconditionally convergent series is compact Proposition: If $\sum x_i$ is an unconditionally convergent series in a Banach space $X$, then $S=\{\sum \varepsilon_ix_i:\varepsilon_i=\pm1\}$ is compact. Proof: 1) $\{-1,1\}^{\mathbb N}$ is compact in the pointwise topology. 2) $f:\{\varepsilon_i\}\to\sum\varepsilon_ix_i$ is a continuous function. 3) Im$f=S$ Therefore, $S$ is compact. q.e.d. Problem: The problem is with the continuity of the proposed function. It is easy to show that for any unconditionally convergent series $\sum x_i$, in a Banach space $$(\forall\varepsilon>0) (\exists n_0\in\mathbb N) (\forall m\geq n\geq n_0) (||\sum_{i=n}^m \varepsilon_ix_i||<\varepsilon)$$ for any choice of $\{\varepsilon_i=\pm1\}$ Does it directly follow that $f$ is continuous? How come? Any help would be appreciated. AI: Every projection $$ \pi_i:\{-1,1\}^\mathbb{N}\to\{-1,1\}:\{\varepsilon_j\}\mapsto\varepsilon_i $$ is continuous by definition of topology of $\{-1,1\}^\mathbb{N}$. Every map $$ m_i:\{-1,1\}\to X:\varepsilon_i\mapsto \varepsilon_i x_i $$ is continuous because topology of $\{-1,1\}$ is discrete. So the map $m_i\pi_i$ is continuous for each $i\in\mathbb{N}$. Hence the map $$ f_n:\{-1,1\}^\mathbb{N}\to X:\{\varepsilon_j\}\mapsto \sum\limits_{i=1}^n m_i\pi_i $$ is continuous as finite sum of continuous functions. As you noted in your question $$ \forall\varepsilon>0\quad\exists n_0\in\mathbb{N}\quad\forall m\geq n\geq n_0\quad\forall\{\varepsilon_i\}\in\{-1,1\}^\mathbb{N}\implies\left\Vert \sum\limits_{i=n}^m \varepsilon_i x_i\right\Vert<\varepsilon $$ This is nothing more that the statement that the set of continuous fnction $\{f_n:n\in\mathbb{N}\}$ converges uniformly on $\{-1,1\}^\mathbb{N}$. Since uniform limit of continuous functions is continuous we see that $$ f:\{-1,1\}^\mathbb{N}\to X:\{\varepsilon_j\}\mapsto\sum\limits_{i=1}^\infty\varepsilon_i x_i $$ is continuous.
H: Concave function properties Given a concave function $f(x)$, $\,f(x)$ decreases as $\,x\,$ increases. That is, $\;f(x_1)\gt f(x_2)\,$ if $\,x_2\gt x_1$ For $\;f(x_1)+f(x_2)\;$ and $\;\large\left(\frac{f(x_1+x_2)}{2}\right)^2,\;$ which one is larger? For $\;(1-f(x_1))(1-f(x_2))\;$ and $\;1\large-\left(\frac{f(x_1+x_2)}{2}\right)^2,\;$ which one is larger? Is there any theorem to prove it? Thanks! AI: The answer, for both problems, is that either might be larger; i.e. there is not enough information. Take $f_k(x)=-x+k$. This is a concave, decreasing, function, as specified in the problem. First, $f_k(0)+f_k(1)=2k-1$, while $\left(\frac{f_k(0+1)}{2}\right)^2=\left(\frac{k-1}{2}\right)^2$. For $k=0$, the latter is larger; for $k=1$, the former is larger. Second, $(1-f_k(0))(1-f_k(1))=k^2-3k+2$, while $1-\left(\frac{f_k(0+1)}{2}\right)^2=-\frac{1}{4}k^2+\frac{1}{2}k+\frac{3}{4}$. For $k=0$, the former is larger; for $k=1$, the latter is larger.
H: Can a bipartite summation graph have a unique solution? Suppose we define a summation graph $G$ as follows: Each vertex $v \in G$ has a unique but unknown value ascribed to it. Each edge $e \in G$ is labelled with the sum of the values of the two vertices it joins. This construction corresponds to a system of equations where each equation is of the form $v_a + v_b = e_{ab}$, where $v_a$ and $v_b$ correspond to the unknown values of two vertices $a$ and $b$, and $e_{ab}$ to the value of the edge joining the two. Now, if there is any odd cycle, it's known that there will always be a unique solution for just that cycle. But with an even cycle, there are either an infinite number of possible values, or an inherent contradiction in the values of the edges that makes a solution impossible. So my question is this: is it possible that there is some bipartite graph (which has no odd cycles, but can have a bunch of even cycles joined together) that has a configuration of values with a unique solution? AI: No: you can add $x$ to the value of every vertex in one part and $-x$ to every vertex in the other part to get another solution with the same edge labels, as illustrated below. It might be possible to get such a case if your values had to be non-negative.
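A minimal sketch of the shift argument (the graph and vertex values here are hypothetical, chosen only for illustration):

```python
# Complete bipartite graph on parts A = {a1, a2}, B = {b1, b2}
values = {"a1": 3, "a2": 5, "b1": 2, "b2": 7}
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]

def labels(vals):
    return [vals[u] + vals[v] for u, v in edges]

t = 4
# add t on one side of the bipartition, subtract it on the other
shifted = {k: v + t if k.startswith("a") else v - t for k, v in values.items()}
print(labels(values) == labels(shifted))  # True: a second solution, same edge sums
```

Any nonzero $t$ works, so a bipartite summation graph never pins down a unique solution over the reals (without extra constraints such as non-negativity).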
H: Check my answer: Prove that every open set in $\Bbb R^n$ is a countable union of open intervals. I have a question. I have solved this but please can you check my solution? Thank you. If there are any mistakes or something is missing and so on, please tell me. This is important to me. Is this proof enough to get a successful grade on an exam? Btw, I underlined the question with a pink pencil. AI: There are two problems with your argument. The first is when you have $$V\subseteq\bigcup_{x\in A_0}B_{\epsilon_x}(x)\;,\tag{1}$$ where $A_0$ is a countable subset of $X$, and say that $A_0=\Bbb N$. $A_0$ is not $\Bbb N$: it’s a subset of $X$. If it’s a countably infinite subset of $X$, then there is a bijection from $\Bbb N$ to $A_0$, and you can enumerate $A_0=\{x_j:j\in\Bbb N\}$, let $B_j(x_j)=B_{\epsilon_j}(x_j)$ for each $j\in\Bbb N$, and say (as you did) that $$V\subseteq\bigcup_{j\in\Bbb N}B_j(x_j)\;,\tag{2}$$ but you have to say that that is what you’re doing. However, $A_0$ might be finite, in which case there is no bijection from $\Bbb N$ to $A_0$. There’s no need to do all of this, however: $(2)$ is an unnecessary rewriting of $(1)$ even when $(2)$ is correct. If $A_0$ is countable, then the union in $(1)$ is a countable union, and that’s all you need. However, I would strengthen $(1)$ and say that $$V=\bigcup_{x\in A_0}B_{\epsilon_x}(x)\;:\tag{3}$$ you’ve actually proved this, and it’s what you need: $V$ is a countable union of open balls, not just a subset of a countable union of open balls. Now we come to the larger problem. Your last step does not work at all: a ball in $\Bbb R^n$ is not an interval. To complete your proof, you need to show that an open ball in $\Bbb R^n$ is a countable union of intervals. Then you can argue like this: For each $x\in A_0$ there is a countable family $\mathscr{I}_x$ of intervals in $\Bbb R^n$ such that $B_{\epsilon_x}(x)=\bigcup\mathscr{I}_x$. Let $\mathscr{I}=\bigcup_{x\in A_0}\mathscr{I}_x$; then $\mathscr{I}$, being the countable union of countable sets, is countable, and $$V=\bigcup_{x\in A_0}\bigcup\mathscr{I}_x=\bigcup\mathscr{I}$$ is a countable union of intervals. To do this, though, you have to prove that if $B_r(x)$ is any open ball in $\Bbb R^n$, then $B_r(x)$ is the union of countably many sets of the form $$[a_1,b_1)\times[a_2,b_2)\times\ldots\times[a_n,b_n)\;.$$ I suggest the following approach. First prove that $B_r(x)$ is the union of countably many sets of the form $$(a_1,b_1)\times(a_2,b_2)\times\ldots\times(a_n,b_n)\;.$$ You can do this by showing that if $y\in B_r(x)$, there are rational numbers $p_1,\dots,p_n,q_1,\dots,q_n$ such that $$y\in(p_1,q_1)\times(p_2,q_2)\times\ldots\times(p_n,q_n)\subseteq B_r(x)\;.\tag{4}$$ There are only countably many rational numbers, so there are only countably many open boxes like $(4)$. Show that each open box like $(4)$ is the union of countably many intervals. HINT: Use the fact that in $\Bbb R$, $$(a,b)=\bigcup_{n\in\Bbb Z^+}\left[a+\frac1n,b\right)\;.$$
H: Fundamental Theorem of Calculus in Multivariate Case From the FTC we have, for continuously differentiable $f: \mathbb{R} \to \mathbb{R}$, $$ f(a) - f(b) = \int_b^a \frac{d}{dx} f(x) dx $$ I'm trying to write the difference of a vector-valued function at two points in similar terms; that is, given continuously differentiable $g : \mathbb{R}^d \to \mathbb{R}^d$ with known Jacobian $J_g(x)$, what can we say about $$ g(x_1) - g(x_2) = ? $$ AI: $g(x_1) - g(x_2) = \int_0^1 Dg(x_2 + t (x_1-x_2))\,(x_1-x_2)\, dt$, where $Dg$ denotes the Jacobian and the integral is taken componentwise. This is just the one-dimensional FTC applied to $h(t) = g(x_2 + t(x_1 - x_2))$, whose derivative is $Dg(x_2 + t(x_1-x_2))(x_1-x_2)$ by the chain rule.
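A numerical check of this identity for one concrete $g$ (a sketch, assuming NumPy; the particular map below is arbitrary, with its Jacobian computed by hand):

```python
import numpy as np

def g(x):  # an arbitrary smooth map R^2 -> R^2
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])

def Dg(x):  # its Jacobian matrix
    return np.array([[2 * x[0], 1.0],
                     [np.cos(x[0]) * x[1], np.sin(x[0])]])

x1, x2 = np.array([1.0, 2.0]), np.array([-0.5, 0.3])
ts = np.linspace(0.0, 1.0, 10_001)
rows = np.array([Dg(x2 + t * (x1 - x2)) @ (x1 - x2) for t in ts])
dt = ts[1] - ts[0]
integral = ((rows[:-1] + rows[1:]) / 2).sum(axis=0) * dt  # trapezoid rule in t
print(np.allclose(integral, g(x1) - g(x2)))               # True
```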
H: Differentiability of the supremum norm in $\ell^{\infty}$ Let $\ell^{\infty}=\{x\in \mathbb{R}^{\mathbb{N}}: x\,\, \text{is bounded}\}$ and $E=\{x\in \ell^{\infty}:x_n\rightarrow 0\}$ with the norm $||\cdot||_{\infty}$ and let $f(x)=||x||_{\infty}$. How to prove that: a) If $x\in E$ then there exists $m\in\mathbb{N}$ such that $f(x)=|x_m|$. b) $f$ is differentiable at $x\in E$ if and only if $m$ is unique. AI: a) If $x$ is identically zero, the claim is trivial. So select $n$ with $x_n\ne 0$. Then there exists $N$ with $|x_k|<\frac12|x_n|$ for all $k>N$. Hence $\lVert x\rVert_\infty=\max\{|x_1|,\ldots,|x_N|\}$ and there exists $m\in\{1,\ldots,N\}$ with $f(x)=|x_m|$. b) Select $m$ with $f(x)=|x_m|$. Let $y=x$ except that $y_m=0$. If $m$ is unique then $f(y)<|x_m|$ by a), and in particular $|x_m|>0$. Let $\epsilon=\frac12(f(x)-f(y))$. Then for all $z\ne0$ we have $f(x+hz)=|x_m+hz_m|$ for $|h|<\frac{\epsilon}{\lVert z\rVert_\infty}$. Since $h\mapsto |a+hb|$ is differentiable at $h=0$ if $a\ne 0$, we find that $f$ is differentiable at $x$. On the other hand, assume $f(x)=|x_m|=|x_n|$ with $n\ne m$. Let $z_m=x_m$, $z_n=-x_n$ and $z_k=0$ otherwise. Then $f(x+hz)=(1+|h|)f(x)$. If $f(x)>0$, this shows that $f$ is not differentiable at $x$. And if $f(x)=0$, we consider $z_1=1$, $z_2=-1$ instead and obtain $f(x+hz)=|h|$, again not differentiable.
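One can see the kink numerically with the perturbation from the answer (a sketch in NumPy; here $x$ is a hypothetical sequence with two coordinates attaining the sup norm, i.e. $m$ is not unique):

```python
import numpy as np

x = np.array([1.0, 1.0, 0.5])   # f(x) = 1, attained at both index 0 and index 1
z = np.array([1.0, -1.0, 0.0])  # z_m = x_m, z_n = -x_n as in the answer

for h in [-0.1, -0.01, 0.0, 0.01, 0.1]:
    print(h, np.abs(x + h * z).max())  # prints (1 + |h|) * f(x): a corner at h = 0
```

The map $h \mapsto f(x+hz) = 1 + |h|$ has a corner at $h=0$, so the directional derivative in direction $z$ does not exist.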
H: How to find the limit $\lim\limits_{m\to\infty}\frac{m^{m-2}}{(m-1)^{m-2}}$? I am trying to evaluate the limit of this: $$\lim_{m\rightarrow \infty} \frac{m^{m-2}}{(m-1)^{m-2}}$$ This is just basic calculus, I think, but I have forgotten the methods for finding such limits. I think the limit is just $e$. Could anyone prove that? Thanks! Fei AI: $$\lim_{m\to \infty} \frac{m^{m-2}}{(m-1)^{m-2}}=\lim_{m\to \infty}\left( \frac{m}{m-1}\right)^{m-2}=\lim_{m\to \infty}\left( \frac{m-1+1}{m-1}\right)^{m-2}=$$ $$=\lim_{m\to \infty}\left( 1+\frac{1}{m-1}\right)^{m-1-1}=\lim_{m\to \infty}\left( 1+\frac{1}{m-1}\right)^{m-1}\left( 1+\frac{1}{m-1}\right)^{-1}=$$ $$=\lim_{m\to \infty}\left( 1+\frac{1}{m-1}\right)^{m-1}\lim_{m\to \infty}\left( 1+\frac{1}{m-1}\right)^{-1}=e\cdot 1=e$$
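A quick numerical confirmation (a sketch; standard library only):

```python
import math

for m in [10, 1_000, 100_000]:
    print(m, (m / (m - 1)) ** (m - 2), math.e)
```

The middle column approaches $e \approx 2.71828$ as $m$ grows, in line with the computation above.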
H: Convergence of $\sum_{n=1}^{\infty}(-1)^n\cdot\frac{n^2}{(\sqrt{n^7+n+2})^{1/3}}$ I'm trying to check whether this series converges and would like some advice. $$\sum_{n=1}^{\infty}(-1)^n\cdot\frac{n^2}{(\sqrt{n^7+n+2})^{1/3}}$$ 1) Do I need to multiply by the conjugate? 2) Do I need to use the comparison test? Thanks! AI: $$ \frac{n^2}{(\sqrt{n^7+n+2})^{1/3}} = \frac{n^2}{(n^7+n+2)^{1/6}} \sim \frac{n^2}{n^{7/6}} = n^{5/6} \to \infty $$ so the general term of your series does not tend to $0$; by the term test the series cannot converge.
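To make the divergence vivid, here is a quick check that the terms blow up instead of tending to $0$ (a sketch; standard library only):

```python
for n in [10, 10**3, 10**6]:
    a_n = n**2 / (n**7 + n + 2) ** (1 / 6)  # |n-th term| of the series
    print(n, a_n)                            # grows roughly like n**(5/6)
```

Since the terms do not tend to $0$, no alternating-sign structure can save the series.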
H: Is an inverse Laplace Transform always solvable? I just read on Wikipedia that a Laplace transform of the form $$\mathcal{L}\{f(t)\}= \frac{A}{s-\alpha_1} + \frac{B}{s-\alpha_2} + \cdots $$ can be inverted like this: $$f(t)= A e^{\alpha_1 t}+ Be^{\alpha_2 t}+ \cdots$$ My question now is: given that we can always use partial fractions, can we solve every inverse Laplace transform of the form $$\frac{P(s)}{Q(s)}\,?$$ AI: Yes, but remember that: $\deg P$ can be greater than or equal to $\deg Q$, and $Q$ can have multiple roots. The first case produces delta functions and their derivatives in the inverse (after carrying out the polynomial division), and the second means that exponentials $\times$ polynomials may also appear.
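For concrete computations, a CAS will do both the partial-fraction step and the inversion; a minimal sketch with SymPy (the rational function chosen here is arbitrary):

```python
from sympy import symbols, apart, inverse_laplace_transform

s, t = symbols('s t', positive=True)
F = (s + 3) / ((s + 1) * (s + 2))          # deg P < deg Q, simple poles
print(apart(F, s))                          # 2/(s + 1) - 1/(s + 2)
print(inverse_laplace_transform(F, s, t))   # 2*exp(-t) - exp(-2*t), up to a Heaviside factor
```

When $\deg P \ge \deg Q$, do the polynomial division first; the polynomial part transforms back to $\delta(t)$ and its derivatives, as noted above.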
H: How to know the flattening factor for an ellipse? I want to know how I can get the flattening factor for an ellipse from its semi-major and semi-minor axes. Actually I tried this formula: $f=\left(\frac{a}{b}-1\right)$ where $f$ is the flattening factor, $a$ is the semi-major axis, and $b$ is the semi-minor axis. I think it's true because when I try it with a circle it gives me $f=0$, but I don't have any source for this formula and I'm not sure if it's true! So does anyone know the exact formula for the flattening factor of an ellipse? AI: The flattening factor is given by $\;f = 1 - \cfrac ba$. A closely related term you might be interested in is the eccentricity of an ellipse, usually denoted $e$ or $\varepsilon$. Eccentricity represents the ratio of the distance between the two foci, $2h$, to the length of the major axis, $2a$: $$e = \dfrac{2h}{2a} = \dfrac ha$$ where the distance between a focus and the center is given by $\;h = \sqrt{a^2-b^2}.$ In fact, we can express eccentricity in terms of $a, b$: $$e=\varepsilon=\sqrt{\frac{a^2-b^2}{a^2}} =\sqrt{1-\left(\frac{b}{a}\right)^2} = \frac ha$$ For a non-circular ellipse, the eccentricity satisfies $0 < e < 1$. Eccentricity is zero when the foci coincide with the center point, i.e., the figure is a circle. As eccentricity increases, the shape gets more elongated (stretched/flattened): the closer to $1$ it gets, the flatter the ellipse.
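Both quantities are one-liners to compute (a sketch; standard library only):

```python
import math

def flattening(a, b):  # a = semi-major axis, b = semi-minor axis
    return 1 - b / a

def eccentricity(a, b):
    return math.sqrt(1 - (b / a) ** 2)

print(flattening(5, 3), eccentricity(5, 3))  # 0.4  0.8
print(flattening(2, 2), eccentricity(2, 2))  # 0.0  0.0  (a circle)
```

Note that $f$ and $e$ are related by $e^2 = 2f - f^2$, since $b/a = 1-f$, so either quantity determines the other.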
H: "universe" in set theory and category theory In all applications of the theory of sets, all sets under investigation take place in the context of the universal set $U$. What exactly is the purpose of the universal set in set theory? $U$ is infinite, why do we need to mention this infinite context in regards to set theory? In category theory is there such a notion of a universe? Would it be the category of categories? I have a feeling that the answer has to do with Russell's paradox, but if you could elaborate that'd be great, I'm a beginner. Also, is there such a thing as "outside" of a universe? AI: It is convenient to know that if you have a bunch of sets and then do something with these sets (e.g., take their cartesian product), then the object you will get will still be manageable. In other words, we want to know that we can perform lots of convenient operations with sets we like and still remain in the same 'universe' of sets we like. Russele's Paradox demonstrates clearly that this is nothing trivial. In set theory the universe of sets, or set of discourse, is used in two different ways. Naively, it just specifies a set where all the elements come from when you write $\forall$ or $\exists$. For instance, when talking about real numbers it is usually assumed the set of discourse is $\mathbb R$. In axiomatic set theory one specifies some axioms of set theory and then consideres models of the axioms. A model of the axiom is itself a set whose elements are the sets that the model defines. So, 'set' here is used in two totally different ways. The universe is the set $M$ which is the model of the axioms of set theory. The elements of $M$ are all of the sets that the model allows. The set $M$ itself is, typically, not an element on $M$, thus is not a set in the model. In category theory it is convenient to know that given a bunch of categories we can perform lots of constructions with them, like forming categories of functors. This means that the underlying set theory we employ should be strong enough to allow for lots of 'big' constructions. Grothendieck introduced the notion of a tower of universes of sets to manage these constructions. This is required since typical categories are big in the sense that their objects do not form a set of the same magnitude as the hom-sets do. For instance, for any two groups $G,H$ the hom-set of all group homomorphisms $\psi:G\to H$ is indeed a set (even if the groups are very big). So, when we define the category $Grp$, each hom-set is just a set, but the class of all groups does not form a set. There is thus a hierarchy, using Grothendieck universes, of sizes for categories and various constructions may transcend the size of the categories it operates on, or it may not, depending on the constructions. On that note, the category $Cat$ usually refers to the category of all small categories, meaning all categories whose objects form a set (in a fixed level of the tower of universes which is ambient and assumed fixed). Thus, the category $Cat$ itself is not a small category (avoiding such silly paradoxes as the category of all categories containing itself as an object). Instead, $Cat$ is a larger category than any of the categories it contains (which makes sense). $Cat$ is still a manageable object using universes, however, the category $CAT$ of all categories is huge and presents many more difficulties if one really needs to deal with it. 
To see the importance of size issues in category theory, recall that a category is called small complete if it has all set-indexed limits, and that a category is a poset if between any two of its objects there is at most one morphism. There are plenty of small complete categories (e.g., $Set$, $Grp$) which are clearly not posets. Interestingly, if one looks at complete categories (i.e., those having all limits, with no size restriction), then any such category must be a poset.
H: For polynomial $f$, does $f$(rational) = rational$^2$ always imply that $f(x) = g(x)^2$? If $f(x)$ is a polynomial with rational coefficients such that for every rational number $r$, $f(r)$ is the square of a rational number, can we conclude that $f(x) = g(x)^2$ for some other polynomial $g(x)$ with rational coefficients? I proved the quadratic case in my answer to this question, and am guessing that the general case is true, but don't know how to proceed. Does this extend to polynomials in several variables? What about in different fields of fractions? Note: this is not true for complex numbers, since every complex value is the square of a complex number, but linear polynomials are not perfect squares. It seems like the proper formulation of this question is that if $f(x_1, x_2, \ldots, x_n)$ is a polynomial with integer coefficients such that every integer specialization of $x_1, x_2, \ldots, x_n$ is a perfect $p$th power, then $f$ is a perfect $p$th power polynomial. A proof is available here, which further shows that it only needs to hold for some $|x_i| < C$ (though it's a humongous $C$). Theorem 4 answers the question above and is similar to that presented by Franklin. The multi-variable case is dealt with via induction. AI: First, we don't need to consider rational coefficients: we can clear denominators by multiplying by a square integer. For a polynomial with integer coefficients it is enough to know that it is a square for every integer input. Now you need to prove that infinitely many primes divide the values of any non-constant polynomial $h(x)=a_nx^n+\cdots+a_1x+a_0$ with integer coefficients. Assume that $p_1,\ldots, p_r$ are the only primes that divide the values $h(n)$. Then take $h(a_0p_1\cdots p_r)=a_0\left(a_0^{n-1}a_n(p_1\cdots p_r)^n+\cdots+1\right)$. À la Euclid's proof, the last factor must be divisible by a new prime (replacing $a_0p_1\cdots p_r$ by $a_0p_1\cdots p_r k$ for a large integer $k$ if necessary, so that the last factor is not $\pm1$). If $a_0=0$ then $h(x)=xg(x)$, so any prime $p$ divides $h(p)$. Write $f=g^2h$ with $h$ squarefree. Take a prime $p$ that doesn't divide the resultant $R(h,h')$ but that divides $h(n)$ for some $n$. Such $p$ and $n$ exist because of what we proved above. Since $f(n)$ is a square, $p^2\mid h(n)$. Now $h(n+p)$ is also divisible by $p$ and therefore also divisible by $p^2$. But $h(n+p)\equiv h(n)+ph'(n) \pmod{p^2}$. This implies $p$ divides $h'(n)$ and therefore divides $R(h,h')$. Contradiction. Therefore $h$ is constant. Finally this constant $h$ must be a square since $f$ always gives you square values.
H: Find $m$ so that $2^x+m^x-4^x-5^x\ge0$ Let: $$f:\mathbb{R}\to \mathbb{R}, \ \ f(x)= 2^x +m^x -4^x -5^x \text{ with } \ m>0$$ Find $m$ so that $f(x)\ge0$ for all $x$ in $\mathbb{R}$. I tried proving that it is increasing for $x>0$ and decreasing for $x<0$, but it didn't work. AI: Note that $f(0)=1+1-1-1=0$, so if $f(x)\ge0$ for all $x$, then $x=0$ is a global minimum of $f$, and hence $f'(0)=\log 2 + \log m -\log 4 - \log 5 = 0$, which forces $m=10$. This alone only shows that $m=10$ is the only possibility. To see that $m=10$ actually works, observe the factorization $$2^x+10^x-4^x-5^x=(2^x-1)(5^x-2^x),$$ where both factors are $\ge0$ for $x\ge0$ and $\le0$ for $x\le0$, so the product is nonnegative for every $x$.
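A quick numerical sanity check of both the nonnegativity and the factorization (a sketch, assuming NumPy):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
f = 2.0**x + 10.0**x - 4.0**x - 5.0**x
print(f.min() >= 0)                                       # True; the minimum is 0 at x = 0
print(np.allclose(f, (2.0**x - 1) * (5.0**x - 2.0**x)))   # the factorization checks out
```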
H: Proving if $|a|_p=1$ then $a$ is invertible in $\mathbb{Z}_p$ I decided to take a look at $p$-adic integers. I am trying to show that $$a \in \mathbb{Z}_p \text{ is invertible if and only if } |a|_p=1$$ where $$|x|_p= \left\{ \begin{array}{ll} p^{-n} & \mbox{if } x\neq0 \\ 0 & \mbox{if } x=0 \end{array} \right.$$ where $n=\min\{ m \geq 0 : x_m\neq0\}$ and $$x=x_0+x_1p+x_2p^2+\cdots.$$ The "forward" direction is very easy. I am having trouble going backwards. What I have been trying to do is construct the inverse of $a$, and tentatively I am calling the inverse of $a$ $b=b_0+b_1p+\cdots.$ I have that $b_0=a_0^{-1}$, obviously, and as $|a|_p=1$ it follows that $a_0\neq0$, so it has an inverse modulo $p$, as needed. My goal is to show: $$\text{For all $m\geq0,$ } 1=\left[\left(\sum_{i=0}^m a_ip^i\right)\left(\sum_{i=0}^m b_ip^i\right)\right] \mod{p^{m+1}} \qquad \text{(1)}$$ I am doing this on my own and have been having some trouble formulating (to me at least) good-sounding ways to express what I need to prove, in terms of lemmas and things to establish $(1)$. Roughly, I have been trying to show: $Lemma$. Given $b_0,b_1,\cdots,b_k \in \{0,1,\cdots,(p-1)\}$ which satisfy $$1=\left[\left(\sum_{i=0}^k a_ip^i\right)\left(\sum_{i=0}^k b_ip^i\right)\right] \mod{p^{k+1}} \qquad\text{(2)}$$ I can find a $b_{k+1}$ which makes the following statement true: $$1=\left[\left(\sum_{i=0}^{k+1} a_ip^i\right)\left(\sum_{i=0}^{k+1} b_ip^i\right)\right] \mod{p^{k+2}}\qquad\text{(3)}$$ This is what I have been working with, and finding a $b_1$ was not difficult: I chose $b_1 \in \{0,1,\cdots,(p-1)\}$ with $b_1 \equiv -a_1a_0^{-2}\mod p$, which I found to be equivalent to the case $k=0$ of the Lemma. I was trying to do this by induction, but I've found it kind of a mess, and there is a statement I need to be true that I am guessing is not generally true, namely that $g\equiv1\mod p^k$ implies $g\equiv1\mod p^{k+1}$; at least that would help me out. I don't think this is true at all in general, as there is the counterexample $126\equiv1\mod 5^3$ but $126\not\equiv 1 \mod 5^4$. Maybe there are more conditions that need to be put on this $g$. Anyway, is there another way I can go about doing this? Sorry for the long typing; it may or may not help to give an idea of what I am trying to do, etc. Thank you AI: It's not necessary to compute the actual digits of the inverse, only to show it exists. (Below I write $x$ for your $a$.) Since $p\nmid x$ it's clear that $x$ is invertible mod $p^k$ for each $k\ge0$. Let $a_k$ be a sequence of integers which are inverses for $x$ mod $p^k$. Can you show $(a_k)$ is a Cauchy sequence? This implies that it converges to some $a\in{\bf Z}_p$, and you can subsequently show that $ax=1$ in ${\bf Z}_p$. Hints: $|x|_p=1\implies |a_n-a_m|_p=|a_nx-a_mx|_p$ $a_nx\equiv 1\equiv a_mx$ mod $p^n$ when $n\le m$ $ax\equiv a_kx\equiv 1$ mod $p^k$ If you want, you can show that $a_n\equiv a_m\bmod p^n$ when $n\le m$; this shows that the "digits" of $a_n$ are only being added to; the "initial segments" of digits are never altered. I am assuming you have constructed ${\bf Z}_p$ as the inverse limit of ${\bf Z}/p^n{\bf Z}$, or as ${\bf Z}[[T]]/(T-p)$, both routes quickly yielding unique $p$-adic expansions of all $p$-adic integers and continuous ring quotient maps ${\bf Z}_p\to{\bf Z}/p^k{\bf Z}$ for all $k\ge0$. Since ${\bf Z}_p$ is an integral domain, it has a fraction field, denoted ${\bf Q}_p$.
It can be seen that the relation $|a/b|_p=|a|_p/|b|_p$ allows the $p$-adic valuation to be extended to all of ${\bf Q}_p$. From there we may show that ${\bf Q}_p\cong{\bf Z}_p[p^{-1}]$ so all $p$-adic numbers have $p$-adic expansions. Alternatively, we could have defined ${\bf Q}_p$ to be the metric completion of $\bf Q$ with respect to the $p$-adic metric. If we have ${\bf Q}_p$ available as a valued field, the proof is much quicker: $|x|_p=1$ implies $|x^{-1}|_p=1$ implies $x^{-1}\in{\bf Z}_p$ implies $x$ is invertible in ${\bf Z}_p$.
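The stabilization of digits is easy to watch numerically (a sketch; `pow(x, -1, m)` computes modular inverses in Python 3.8+):

```python
p, x = 5, 7                 # |7|_5 = 1, so 7 should be invertible in Z_5
for k in range(1, 6):
    a_k = pow(x, -1, p**k)  # inverse of x modulo p^k
    n, digits = a_k, []
    for _ in range(k):      # base-p digits of a_k
        digits.append(n % p)
        n //= p
    print(k, a_k, digits)
```

The printed digit lists extend one another ($[3]$, $[3,3]$, $[3,3,0]$, $[3,3,0,2]$, ...), which is exactly the Cauchy property above: $a_n \equiv a_m \pmod{p^n}$ for $n \le m$.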