H: Significance of starting the Fibonacci sequence with 0, 1.... DISCLAIMER: I do not deal with in-depth mathematics on a daily basis as some of you may, so please pardon my ignorance or lack of coherence on this topic. QUESTION: What is the significance of starting the Fibonacci sequence with $0,1$ ? For instance, if I picked any two random integers, say 2 and 7, to start a sequence would I actually be creating some multiple or derivation of the Fibonacci sequence? Is there a general mathematical explanation for the relationship between any sequence represented by $a[0] = x, a[1] = y, a[n] = a[n-1] + a[n-2]$ and the Fibonacci sequence? Or, back to my example sequence, is there a general mathematical relationship between: $2,7,9,16,25,41,66,107,173,280...$ and $0,1,1,2,3,5,8,13,21,34...$ Perhaps the Golden Ratio explains it somehow? Any help would be appreciated. AI: Yes, such sequences are closely related, and the relationship does involve the golden ratio. Let $\varphi=\frac12(1+\sqrt5)$ and $\widehat\varphi=\frac12(1-\sqrt5)$; $\varphi$ is of course the golden ratio, and $\widehat\varphi$ is its negative reciprocal. Let $a_0$ and $a_1$ be arbitrary, and define a Fibonacci-like sequence by the recurrence $a_n=a_{n-1}+a_{n-2}$ for $n\ge 2$. Then there are constants $\alpha$ and $\beta$ such that $$a_n=\alpha\varphi^n+\beta\widehat\varphi^n\tag{1}$$ for each $n\ge 0$. Indeed, you can find them by substituting $n=0$ and $n=1$ into $(1)$ and solving the system $$\left\{\begin{align*} a_0&=\alpha+\beta\\ a_1&=\alpha\varphi+\beta\widehat\varphi \end{align*}\right.$$ for $\alpha$ and $\beta$. In the case of the Fibonacci numbers themselves, $\alpha=\frac1{\sqrt5}$ and $\beta=-\frac1{\sqrt5}$; in the case of the Lucas numbers $L_n$, for which the initial values are $L_0=2$ and $L_1=1$, $\alpha=\beta=1$.
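To make the relationship concrete, here is a small Python sketch (the function names are my own) that solves the $2\times2$ system for $\alpha$ and $\beta$ and checks the closed form $(1)$ against the questioner's sequence $2,7,9,16,25,\ldots$:

```python
import math

PHI = (1 + math.sqrt(5)) / 2      # golden ratio
PHIHAT = (1 - math.sqrt(5)) / 2   # the conjugate root

def coeffs(a0, a1):
    """Solve a0 = alpha + beta, a1 = alpha*PHI + beta*PHIHAT."""
    alpha = (a1 - a0 * PHIHAT) / (PHI - PHIHAT)
    return alpha, a0 - alpha

def term(a0, a1, n):
    """Closed form (1): a_n = alpha*PHI**n + beta*PHIHAT**n."""
    alpha, beta = coeffs(a0, a1)
    return alpha * PHI**n + beta * PHIHAT**n

# The questioner's sequence starting 2, 7
seq = [2, 7]
while len(seq) < 10:
    seq.append(seq[-1] + seq[-2])
```

For the Fibonacci numbers ($a_0=0$, $a_1=1$) the same code recovers $\alpha=1/\sqrt5$ and $\beta=-1/\sqrt5$, as stated in the answer.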
H: stuck trying to find a matrix using lambda? I'm stuck with the first step, converting to a matrix to find a solution. I've got: $$ 6y + (3-\lambda)x = 0 $$ $$ (4-\lambda)y + 5x = 0 $$ I need help finding two values of $\lambda$ so that the system does not have a unique solution. AI: If you are looking for eigenvalues, we have the matrix: $$A = \begin{bmatrix}3 & 6\\5 & 4\end{bmatrix}$$ To find the eigenvalues, we set up and solve $|A-\lambda I| = 0$. This gives us: $$\begin{vmatrix}3-\lambda & 6\\5 & 4-\lambda\end{vmatrix} = 0 \rightarrow \lambda^2 - 7 \lambda - 18 = 0 \rightarrow \lambda_{1,2} = -2,9$$ If you are just looking to solve the system, we can use substitution: $x = -2y$, so $5(-2y) + 4 y = 0 \rightarrow y = 0 \rightarrow x = 0$. If you are looking for something else, you are going to have to clarify.
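If you have NumPy handy, a quick numerical cross-check of those two eigenvalues (a sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[3.0, 6.0],
              [5.0, 4.0]])

# Eigenvalues are the roots of |A - lambda*I| = lambda^2 - 7*lambda - 18,
# so we expect -2 and 9
eigvals = sorted(np.linalg.eigvals(A).real)
```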
H: Pointwise Convergence of Continuous, Real-valued Functions Let $(f_n)$ be a sequence of continuous, real-valued functions on $[0,1]$ converging pointwise to $f$. Prove that there is some closed sub-interval of $[0, 1]$ on which $f$ is bounded. I'm struggling with a proof for this, any help will be appreciated (I can intuitively see why it is true). Thanks AI: Fix $\epsilon > 0$ and define $$ A_N = \bigcap_{n, m \ge N} \{x \in [0, 1] : \left|f_n(x) - f_m(x)\right| \le \epsilon\}. $$ Each $A_N$ is closed (why?). Since $\{f_n\}$ has a pointwise limit everywhere on $[0, 1]$, we have $\bigcup_{N=1}^\infty A_N = [0, 1]$. By the Baire category theorem, at least one $A_N$ (say $A_{N_0}$) must contain an interval $I$. Hence $$ \forall x \in I, \forall n, m > N_0 : \left|f_n(x) - f_m(x)\right| \le \epsilon. $$ By taking the pointwise limit as $m \to \infty$ and fixing $n$, we get $$ \forall x \in I : \left|f(x) - f_n(x)\right| \le \epsilon. $$ $f_n$ is continuous, hence bounded on $I$ (say by $M$). It follows that $$ \forall x \in I : \left|f(x)\right| \le M + \epsilon $$ as desired. Note: This proof is adapted from the proof of the Baire-Osgood theorem. $f$ is a Baire class one function.
H: Rearranging logarithmic equation I've tried hard to rearrange the following equation to calculate the age: $$\mu=18.8144+(-1.8443\log(\text{age}))+(-1.4032\log(\text{SBP}))+(-0.3899\cdot\text{Smoke})+(-0.5390\log(\text{TC}/\text{HDL})).$$ For example, for someone with $\mu=3.13422$, $\text{SBP}= 140$, $\text{smoke}= 1$, and $\text{TC}/\text{HDL}= 5$, the age should be 55 years, but I could not rearrange it in an appropriate way to get the right results. Could you please help? AI: Hopefully the process will become clear if we generalize the expression with some arbitrary constants: $\mu=18.8144+(-1.8443\log(\text{age}))+(-1.4032\log(\text{SBP}))+(-0.3899\cdot\text{Smoke})+(-0.5390\log(\text{TC}/\text{HDL}))$ becomes $$a = b-c\log d + E \log f + gh + i\log j$$ and we want to solve for $d$ = age. Note that the mathematical constant $e$ is NOT among the coefficients/arguments in the above expression; we will need $e \approx 2.71828...$ later to solve for $d$ (assuming $\log$ denotes the natural logarithm). Solving for ("isolating") $d$: $$a = b-c\log d + E \log f + gh + i\log j \\ a - b - E\log f - gh -i \log j = - c \log d \\ - c \log d = a - b - E \log f - gh -i \log j \\ c\log d = -a+b+E\log f + gh + i\log j \\ \log d = \frac{-a+b+E\log f + gh + i\log j}{c} \\ d = e^{\bigg(\dfrac{-a+b+E\log f + gh + i\log j}{c}\bigg)}$$ I suggest labeling each constant ($a=\mu$, $b=18.8144$, $c=1.8443$ — note the term is written as $-c\log d$, so $c$ carries a positive sign — $E=-1.4032$, and so on) and then plugging in the values once you have $d$ = age alone on one side of the equation.
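To guard against algebra slips, the rearrangement can be checked with a round-trip in Python. This is a sketch under the assumption that $\log$ means the natural logarithm, and the function names are mine; whether the questioner's target of 55 years comes out exactly depends on which logarithm and rounding the published model uses.

```python
import math

def mu(age, sbp, smoke, tc_hdl):
    """Forward model, assuming log is the natural logarithm."""
    return (18.8144 - 1.8443 * math.log(age) - 1.4032 * math.log(sbp)
            - 0.3899 * smoke - 0.5390 * math.log(tc_hdl))

def age_from_mu(m, sbp, smoke, tc_hdl):
    """Invert: isolate log(age), then exponentiate."""
    log_age = (18.8144 - m - 1.4032 * math.log(sbp)
               - 0.3899 * smoke - 0.5390 * math.log(tc_hdl)) / 1.8443
    return math.exp(log_age)
```

Round-tripping (compute $\mu$ from a known age, then invert) is exact by construction, which confirms the isolation steps above.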
H: binary division and remainder Q=A/B, where Q is a real number expressed as a pair of 8 bits: most significant 8 bits for the integer part, least significant 8 bits for the fractional part; the number is unsigned. For example: 0 0 1 0 1 1 0 1 . 0 1 0 1 0 0 0 0 Can you find the remainder of the division if you know B? Example: 44/20 0010 1100 . 0000 0000 / 0001 0100 . 0000 0000 = 0000 0010 . 0011 0011 (2.1999...) Now, to find the remainder I should multiply (0.19... * 20) 0000 0000 . 0011 0011 * 0001 0100 . 0000 0000 = 0000 0011 . 1111 1100 Now, can I say that the number is 4 because it is greater than 3.5? I don't think this works for all the numbers. There are exceptions, for example A=2 and B=172; I wonder if increasing the number of fractional bits solves the problem. 2/172 : 0000 0010 . 0000 0000 / 1010 1100 . 0000 0000 =0000 0000 . 0001 0010 0000 0000 . 0001 0010 * 1010 1100 . 0000 0000 =0000 0000 . 1100 0001 (should be 2, or at least something greater than 1.5) AI: You should be multiplying the fractional part by the original denominator, not by the original numerator. Look at the same division in base ten: you get $2.2$, and the remainder is $0.2\cdot20=4$, which is correct. After dividing $a$ by $b$, you have an integer part $n$, say, and a fractional part $\alpha$: $\frac{a}b=n+\alpha$. Multiplying both sides by $b$ gives you $a=nb+\alpha b$, or $\alpha b=a-nb$; clearly $\alpha b$ here is the remainder.
H: Problem 4.4 - Lie Algebras - Humphreys I read exercise 4.4 in the book Introduction to Lie algebras and representation theory of J. Humphreys, and I do not quite understand the sentence: We start with $L\leq\mathfrak{gl}(p,F)$ as in Exercise 4.3, and let $M:=L+F^p$, the direct sum. We then make $M$ into a Lie algebra by decreeing that $F^p$ is abelian, while $L$ has its usual product and acts on $F^p$ in the given way. Could someone tell me what the Lie bracket inside $M$ is? Many thanks in advance. AI: The answer is given in Jacobson's book on Lie algebras. $M$ is the split extension of $L$ and $F^p$. That is, the Lie bracket is given by $$ [m_1,m_2]=[l_1+f_1,l_2+f_2]=[l_1,l_2]+l_1.f_2-l_2.f_1+[f_1,f_2] $$ for $l_1,l_2\in L$ and $f_1,f_2\in F^p$. We have $[f_1,f_2]=0$ for all $f_1,f_2$, since $F^p$ is an abelian Lie algebra. The action of $L$ on $F^p$ is given in the exercise before. If $e_1,\ldots,e_p$ is a basis of $F^p$, then $E.e_i=e_{i+1}$ and $E.e_p=e_1$ and $F.e_i=(i-1)e_i$, where $L=\langle E,F\rangle$.
H: Is a continuous limit $f(x)$ a necessary condition for uniform convergence? Suppose I have a function sequence $S = \{f_n(x)\}_{n = 1}^{\infty}$, and $S$ converges pointwise to $f(x)$ (I mean $\lim_{n \to \infty}f_n(x) = f(x)$). In order that $\{f_n(x) \}_{n =1}^\infty$ converge uniformly to $f(x)$, must $f(x)$ be a continuous function? AI: No. Let $f_n=1_{\mathbb Q}$. Then all the $f_n$ are equal, hence equal to their limit (so the convergence is trivially uniform), and none of them is continuous.
H: finding sequence for e converging at some speed I want to find an infinite sequence that converges to $e$ so that the $k$th term of the sequence is less than $10^{-k}$ away from $e$. Obviously, I've considered the Taylor series, but asymptotic bounds on the truncation error don't tell you what the actual upper bound for a specific index is. AI: The error of $\sum_{k=0}^n\frac1{k!}$ is approximately the next term $\frac1{(n+1)!}$, and an upper bound for the error is obtained from $$ \sum_{k=n+1}^\infty\frac1{k!}< \frac1{n!}\sum_{k=1}^\infty\frac1{(n+1)^k}=\frac1{n\cdot n!}.$$ As $n\cdot n!>10^n$ for $n$ sufficiently big, this should help you.
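A quick Python sketch of this recipe (the partial sum is within $10^{-k}$ of $e$ as soon as $n\cdot n! > 10^k$):

```python
import math

def e_partial(n):
    """Partial sum of the Taylor series: sum_{k=0}^{n} 1/k!."""
    return sum(1 / math.factorial(k) for k in range(n + 1))

# The tail after index n is bounded by 1/(n * n!), as in the answer.
```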
H: logical statement: proving $\mathrm{len}(\psi)\leq4\cdot\mathrm{lenz}(\psi)+1$ Given a logical statement $\psi$ I want to prove $\mathrm{len}(\psi)\leq4\cdot\mathrm{lenz}(\psi)+1$ with $\mathrm{lenz}:=\textrm{number of all logical connectives}$ and $\mathrm{len}:=\textrm{number of all signs}$ of the logical statement. So I want to use induction. If $\psi\equiv A$ with a statement $A$, the inequality is obvious. But now I am stuck. So how can you do the induction step for proving the formula above? AI: Since $\{\land,\lnot,\forall\}$ is an adequate set of connectives, it suffices to show it for these. I will assume the parentheses are as $(\varphi\land\psi)$, $(\lnot\varphi)$ and $(\forall x)\varphi$. Assume it holds for $\varphi$, $\psi$. Then for $\land$ we have $$len[(\varphi\land\psi)]=len[\varphi]+len[\psi]+3\leq 4(lenz[\varphi]+lenz[\psi])+5\\=4(lenz[(\varphi\land\psi)]-1)+5=4\,lenz[(\varphi\land\psi)]+1,$$ for $\lnot$ we have $$len[(\lnot\varphi)]=len[\varphi]+3\leq 4lenz[\varphi]+4=4(lenz[(\lnot\varphi)]-1)+4=4\,lenz[(\lnot\varphi)]$$ and finally for $\forall$: $$len[(\forall x)\varphi]=len[\varphi]+4\leq 4lenz[\varphi]+5=4(lenz[(\forall x)\varphi]-1)+5=4\,lenz[(\forall x)\varphi]+1.$$
H: What does $i^i $ equal and why? I've been reading up on why the value of 0^0 is controversial (see Zero to the zero power - is $0^0=1$?) and I wondered: is it possible for $i^i$ to have a value? I plugged it into a TI-83 calculator and it returned 0.2078795764 (!) How is this possible and why is the result a real number not a complex number? Update I realize how to use Euler's Identity of $e^{i\theta} = \cos(\theta) + i\sin(\theta)$ and I understand the oldest answer's value of $\theta = \pi/2$ to solve for $i^i$, but doesn't this mean I can pick any odd multiple of $\pi/2$ (such as $3\pi/2, 5\pi/2$, etc.) as a value of $\theta$? Does that mean that $i^i$ is somehow "periodic" like the cosine curve is? AI: $i=e^{i(\pi/2+2k\pi)}$ therefore $i^i=(e^{i(\pi/2+2k\pi)})^i=e^{-\pi/2-2k\pi}$ for $k=0$ you get $0.207879576350761908...$ for $k=1$ you get $0.000388203203926766...$ .... Or $i=1\times i$, then $i^i=1^i\times i^i$: as tomasz commented, this post may be helpful: What is the value of $1^i$?
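The "periodicity" the update asks about can be seen numerically; here is a small Python sketch (the helper name is mine). Each branch $k$ gives a different real value, and Python's `**` picks the principal one ($k=0$):

```python
import math

def i_to_i(k):
    """Branch k: i = exp(i*(pi/2 + 2k*pi)), hence
    i**i = exp(i*i*(pi/2 + 2k*pi)) = exp(-(pi/2 + 2k*pi)) -- a real number."""
    return math.exp(-(math.pi / 2 + 2 * k * math.pi))

principal = (1j ** 1j).real  # Python's ** uses the principal branch, k = 0
```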
H: Derivative of polynomial division in Maple This is probably a beginner's question about Maple. I'm trying to use Maple to differentiate: $$\frac{(z^2-1)^2}{(az-1)(z-a)}$$ where $a$ is a constant. On the first line, is there a way to tell Maple not to expand the denominator? How can you tell Maple that $z$ is a variable and $a$ is a constant? On the second line, what does Maple mean with $D(az)$? The derivative seems to be missing the term with $(z^2-1)^2 (az-1)$ in the numerator? Any hints or pointers welcome. AI: You need to type out $a*z$ instead of $az$. Maple thinks $az$ is another variable. Also $$(z^2 - 1)^2/((a*z-1)*(z-a))$$ will give the desired output. This pretty much fixes everything.
H: Is the cofinite topology on an uncountable set first countable? Let $X$ be any uncountable set with the cofinite topology. Is this space first countable? I don't think so because it seems that there must be an uncountable number of neighborhoods for each $ x \in X$. But I am not sure if this is true. AI: You are correct: it is not first countable. However, this is not because each point of $X$ has uncountably many nbhds: each point of $\Bbb R$ also has uncountably many nbhds, but $\Bbb R$, being a metric space, is certainly first countable. To prove that $X$ is not first countable, you must show that some point of $X$ does not have a countable local base. All points of $X$ ‘look alike’ in the cofinite topology, so it doesn’t matter what point we pick, so let $x\in X$ be any point. Suppose that $\mathscr{B}=\{B_n:n\in\Bbb N\}$ is a countable local base of open sets at the point $x$, meaning that if $U$ is any open nbhd of $x$, then $x\in B_n\subseteq U$ for some $n\in\Bbb N$. For each $n\in\Bbb N$ let $F_n=X\setminus B_n$: $B_n$ is open, so by definition $F_n$ is finite. Let $F=\{x\}\cup\bigcup_{n\in\Bbb N}F_n$; $F$ is the union of countably many finite sets, so $F$ is countable. $X$ is uncountable, so there is some $y\in X\setminus F$. Let $U=X\setminus\{y\}$. Is $U$ an open nbhd of $x$? Is there any $n\in\Bbb N$ such that $x\in B_n\subseteq U$?
H: Prove that for all integers $x$ and $y$, $x - y$ is odd if and only if $x + y$ is odd. the homework question I'm having trouble with is this one. Write a detailed structured proof to prove that for all the integers $x$ and $y$, $x - y$ is odd if and only if $x + y$ is odd. I have the proof format and structure down fine, its the proof part that is throwing me in circles. What I worked out so far makes no sense so I think my understanding of it is flawed. $2k+1$ is the definition of odd, so I use that when trying to prove it. I'm going off the premise ($x - y$ is odd) $\Rightarrow$ ($x + y$ is odd) Let $i \in \mathbb Z, $ be such that $x - y = 2i + 1$. Then $x - y = 2i + 1$. $2i = x - y - 1$. $i = (x - y - 1)/2$. $Let p = i$. $x + y = 2p + 1$. $x + y = 2(x-y-1)/2 + 1$. $x + y = x - y$. So now I end up with an impossible case... Can anyone help me understand what I'm doing wrong? AI: Suppose that $x-y=2n+1$ then $x-y+2y=2n+1+2y$ or $x+y=2(n+y)+1$ showing $x+y$ is odd. For the argument in the other direction suppose $x+y=2n+1$ so that $x+y-2y=2n+1-2y$ and so $x-y=2(n-y)+1$ showing $x-y$ is odd.
H: About the Collatz conjecture I worked on the Collatz conjecture extensively for fun and practise about a year ago (I'm a CS student, not mathematician). Today, I was browsing the Project Euler webpage, which has a question related to the conjecture (longest Collatz sequence). This reminded me of my earlier work, so I went to Wikipedia to see if there's any big updates. I found this claim The longest progression for any initial starting number less than 100 million is 63,728,127, which has 949 steps. For starting numbers less than 1 billion it is 670,617,279, with 986 steps, and for numbers less than 10 billion it is 9,780,657,630, with 1132 steps.[11][12] Now, do I get this correctly: there is no known way to pick a starting number $N$, so that the progression will last at least $K$ steps? Or, at least I don't see the point of the statement otherwise. I know how this can be done (and can prove that it works). It does not prove the conjecture either true/false, since it is only a lower bound for the number of steps. Anyway, I could publish the result if you think it's worth that? When I was previously working on the problem, I thought it was not. AI: The answer to your question is yes; one can work the Collatz relation backwards to build numbers that last arbitrarily long (for a simple example, just take powers of 2). However the purpose of the wikipedia records is to see if we can find SMALL numbers that last a long time.
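For instance, working backwards through doublings gives trivially long trajectories: $2^k$ takes exactly $k$ halving steps to reach 1. A Python sketch:

```python
def collatz_steps(n):
    """Steps for n to reach 1 under n -> n/2 (n even), n -> 3n+1 (n odd)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

`collatz_steps(2**k)` returns `k` for any `k >= 1`, so a starting number lasting at least $K$ steps is always available — just not a small one, which is what the record lists on Wikipedia are about.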
H: Basis of the polynomial vector space I don't understand how to find a basis for a polynomial vector space. Can someone help me with an example? AI: The simplest possible basis is the monomial basis: $\{1,x,x^2,x^3,\ldots,x^n\}$. Recall the definition of a basis. The key property is that some linear combination of basis vectors can represent any vector in the space. If, instead of thinking of vectors as tuples such as $[1\ 2\ 4]$, you think of them as polynomials in and of themselves, then you see that you can make any real-valued polynomial of degree less than or equal to $n$ out of the monomial basis listed above. You don't have to take the monomial basis, but not every set of polynomials works. For example, you could try $\{1, x^2-4, x^3+x\}$, but this is not a basis of the polynomials of degree $\le 3$: you cannot make every such polynomial out of it (for instance, $x$ itself is not in its span).
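Writing polynomials of degree $\le 3$ as coefficient vectors $(c_0, c_1, c_2, c_3)$, a rank computation checks spanning; a NumPy sketch:

```python
import numpy as np

# Rows are coefficient vectors in the order (constant, x, x^2, x^3)
candidate = np.array([
    [1,  0, 0, 0],   # 1
    [-4, 0, 1, 0],   # x^2 - 4
    [0,  1, 0, 1],   # x^3 + x
])
monomials = np.eye(4)  # 1, x, x^2, x^3

# Three independent vectors cannot span the 4-dimensional space of
# polynomials of degree <= 3, while the monomial basis has full rank.
candidate_rank = np.linalg.matrix_rank(candidate)
monomial_rank = np.linalg.matrix_rank(monomials)
```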
H: Proof that the trace of a matrix is the sum of its eigenvalues I have looked extensively for a proof on the internet but all of them were too obscure. I would appreciate if someone could lay out a simple proof for this important result. Thank you. AI: These answers require way too much machinery. By definition, the characteristic polynomial of an $n\times n$ matrix $A$ is given by $$p(t) = \det(A-tI) = (-1)^n \big(t^n - (\text{tr} A) \,t^{n-1} + \dots + (-1)^n \det A\big)\,.$$ On the other hand, $p(t) = (-1)^n(t-\lambda_1)\dots (t-\lambda_n)$, where the $\lambda_j$ are the eigenvalues of $A$. So, comparing coefficients, we have $\text{tr}A = \lambda_1 + \dots + \lambda_n$.
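A numerical illustration of the identity on a random matrix (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

trace = np.trace(A)
eig_sum = np.sum(np.linalg.eigvals(A))  # eigenvalues may be complex

# For a real matrix, complex eigenvalues come in conjugate pairs,
# so their sum is real and equals the trace.
```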
H: Help with inf sup concept I am very bad at maths; by definition we have $$\limsup_{n\to\infty} x_n:=\inf_n(\sup_{k\geq n}x_k)$$ where $(x_n)$ is a sequence. I was trying this on a finite sequence to understand it better: for example with $$(2,3,5,4)$$ we start with $n=1$, then we have $\sup\{2,3,5,4\}=5$. For $n=2$ we have $\sup\{3,5,4\}=5$. For $n=3$ we have $\sup\{5,4\}=5$. For $n=4$ we have $\sup\{4\}=4$. And finally $\inf\{5,5,5,4\}=4$, so in conclusion the limit is $4$ in this case? AI: If your example were an infinite sequence that was eventually constant, $6, 3, 7, 9, 2, 4, 4, 4, \ldots$, then the limit would be $4$. Existence of limits with $n \to \infty$ involves 'eventual' behaviour. A sequence with only a finite number of points in it cannot have limit points. In $\{2, 3, 5, 4\}$, if you get really close to $4$, meaning that you take an open set in $\mathbb R$, for example the interval $(3.9, 4.1)$, which is small enough that it contains only $4$, it does not contain an infinite number of points of your sequence still close to $4$. So it's an isolated point. And every point in a finite set of numbers will be isolated. For an infinite example, where we can apply your statement, consider $\{1/1, 1/2, \ldots, 1/n, \ldots\}$. No matter how small an interval you place around zero, e.g. $(-1/m, 1/m)$ for any $m$ you choose, there will always be an infinite number of points from the sequence inside that interval. Without this property, what you're trying to apply makes no sense. If $$\limsup_{n\to\infty}x_n=\inf_n(\sup_{k\geq n}x_k)$$ then, if we choose a value of $n$, say $n=m$, $x_m = 1/m$ and $\forall k\geq m$ we have $1/m \geq 1/k$. So the $\sup_{k\geq m}$ at this point is $1/m$, whereas the infimum over all $n$ of $1/n$ is zero, because $1/n\geq0$ for all $n$ and no number greater than zero is a lower bound for all of them. 
So zero is the smallest of all possible values, the $\inf_n$, of the largest of the values remaining, the $\sup_{k\geq n}$, as $n$ approaches infinity, because the largest of those values, $1/n$, is approaching zero. I hope this explanation helps. This stuff will seem quite technical at first and does take some time to get used to. Note: Just to point out how difficult this can be to explain, zero is not strictly the smallest value, as it is not attained by any of the remaining values in the sequence. It is just the largest value which is always less than the values remaining, no matter how large $n$ becomes. I used that wording to emphasise why $\limsup$ is equivalent to the $\inf$ of the $\sup$, by comparing the idea of smaller with larger. Looking at another example, $\{y_n=1-1/n\}$: choose an $n$, say $n=m$. Now we have $\sup_{k\geq m}y_k=1$, as $1\geq 1-1/k$ for all $k\geq m$ and no number smaller than $1$ will do. Here $\sup_{k\geq n}y_k=1$ for every $n$, so $\inf_n(\sup_{k\geq n}y_k)=1$. Similar interpretations apply to the 'opposite' definition $$\liminf_{n\to\infty}x_n=\sup_n(\inf_{k\geq n}x_k).$$ A good way to see the difference between $\limsup$ and $\liminf$ is to try them out on the sequence defined by $$\left\{x_n=\begin{cases}1/n & n \text{ odd}\\1-1/n & n \text{ even}\end{cases} \ \left| \ \right.n\geq 1\right\}$$ Here, the $\liminf$ is $0$ and the $\limsup$ is $1$.
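For that last alternating sequence, the tail suprema and infima can be approximated directly in Python, using a long but finite tail as a stand-in for $k\to\infty$ (a numerical sketch, not a proof):

```python
def x(n):
    """1/n for odd n, 1 - 1/n for even n (n >= 1)."""
    return 1 / n if n % 2 else 1 - 1 / n

N = 10_000  # finite stand-in for the infinite tail
tail_sups = [max(x(k) for k in range(n, N + 1)) for n in range(1, 50)]
tail_infs = [min(x(k) for k in range(n, N + 1)) for n in range(1, 50)]

limsup_approx = min(tail_sups)  # inf over n of sup_{k >= n}; approaches 1
liminf_approx = max(tail_infs)  # sup over n of inf_{k >= n}; approaches 0
```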
H: Why is the irrelevant maximal ideal left out of this correspondence? I'm solving Hartshorne's exercises and I didn't understand why $S_+$ doesn't occur in this equivalence. My reasoning: By the previous exercise, if $X$ is an algebraic set in $\mathbb P^n$, we have $Z(I(X))=\overline X= X$. If $\mathfrak a$ is a radical homogeneous ideal, by the homogeneous Hilbert Nullstellensatz, we have $I(Z(\mathfrak a))=\sqrt {\mathfrak a}=\mathfrak a$ (since ${\mathfrak a}$ is radical). So, it seems that $S_+$ occurs in this equivalence; where am I wrong? Thanks in advance. AI: Fact. If $\mathfrak a\subset S=k[x_0,\dots,x_n]$ is a homogeneous ideal, then $Z(\mathfrak a)=\emptyset$ if and only if $\sqrt{\mathfrak a}=S$ or $\sqrt{\mathfrak a}=S_+$. So in the correspondence between algebraic sets in $\mathbb P_k^n$ and homogeneous radical ideals in $S$ (the usual $\mathfrak a\mapsto Z(\mathfrak a)$ with inverse $X\mapsto I(X)$), $S_+=(x_0,\dots,x_n)$ does not appear, because we already have $\mathfrak a=S$ corresponding to $\emptyset$.
H: Finding the limit of a sequence; difficult I have a sequence $(a_n)$ where $a_n=\sqrt[n]{3^n +5^n +7^n}$ I have to find the limit of this sequence. By intuition I should find the limits of two sequences $(a_n)$ & $(b_n)$ such that $(a_n)\le(x_n)\le(b_n)$ Does anyone if this will be the correct way of going for it or if there is a more sophisticated way of finding the limit of $(x_n)$? Please help, thank you. AI: Hint: $7^n\leqslant3^n+5^n+7^n\leqslant3\cdot7^n\implies7\leqslant\sqrt[n]{3^n+5^n+7^n}\leqslant7\cdot\sqrt[n]{3}$. Hence...
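The squeeze in the hint is easy to watch numerically (Python; exact integers keep $3^n+5^n+7^n$ from overflowing before the final root):

```python
def a(n):
    """nth term of the sequence (3^n + 5^n + 7^n)^(1/n)."""
    return (3**n + 5**n + 7**n) ** (1 / n)

# Squeeze: 7 <= a(n) <= 7 * 3**(1/n), and 3**(1/n) -> 1, so a(n) -> 7
```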
H: Is this valid? ( (easy?) Limit Manipulation question) I'm current studying series for a calc 2 exam. The problem I'm on is to determine the convergence/divergence of this series: $$\sum_{n=1}^\infty \frac n{(n+1)^n} $$ The ratio test is appropriate here, so I've set it up: $$\lim_{n\to\infty} \frac {(n+1)}{(n+2)^{(n+1)}} / \frac n{(n+1)^n}$$ which comes out to: $$\lim_{n\to\infty} \frac {(n+1)^{(n+1)}}{n(n+2)^{(n+1)}}$$ the limit rules say that: $$\lim_{n\to\infty}(a_n*b_n) = \lim_{n\to\infty}a_n * \lim_{n\to\infty} b_n $$ so $$\lim_{n\to\infty} \frac {(n+1)^{(n+1)}}{n(n+2)^{(n+1)}} = \lim_{n\to\infty}\frac {(n+1)^{(n+1)}}{(n+2)^{(n+1)}} * \lim_{n\to\infty} \frac 1n$$ I don't really know how to evaluate the first term, (at least not without a bunch of rearranging): $\lim_{n\to\infty}\frac {(n+1)^{(n+1)}}{(n+2)^{(n+1)}}$, but I do know that $\lim_{n\to\infty} \frac 1n = 0$ , so I get $$\lim_{n\to\infty} \frac {(n+1)^{(n+1)}}{n(n+2)^{(n+1)}} = \lim_{n\to\infty}\frac {(n+1)^{(n+1)}}{(n+2)^{(n+1)}} * 0 = 0$$ On the surface this looks perfectly fine to me, but I have a bit of a weird feeling about it. Is it valid to pull a piece of the limit that you know is equal to 0 out like this? Since anything multiplied by 0 equals 0, is what I've shown here enough to say that: $$\lim_{n\to\infty} \frac {(n+1)^{(n+1)}}{n(n+2)^{(n+1)}} = 0$$ I realize this may look like a bit of stupid question, but it's been a while since I had to deal extensively with limits, and I want to be sure that my understanding of how to manipulate them is rock solid for the coming exam. Thanks in advance! AI: Listen to your weird feeling. The rule $\lim x_ny_n = (\lim x_n)(\lim y_n)$ holds only provided that both limits on the right-hand side exist. So you can't conclude that the limit is $0$ without first knowing that $\lim\frac{(n+1)^{n+1}}{(n+2)^{n+1}}$ converges. 
If not for the bolded restriction, you could write $$ 1 = \lim_{n\to \infty} 1 = \lim_{n\to\infty}n\cdot\frac1n = (\lim_{n\to\infty}n)(\lim_{n\to\infty}\frac 1n) = (\lim_{n\to\infty}n)0 = 0 $$ which is obviously not true.
H: In how many ways can four students be chosen from a group of 12 students? Myself and my Math teacher are at a disagreement in to what the proper method of solving the question In how many ways can four students be chosen from a group of 12 students? is. The question comes straight out of a Math revision sheet from a Math book distributed under the national curriculum. The options it gives for answers are: 12 48 495 11880 40320 As we are currently learning Permutations and Combinations, my above interpretation is that it is asking for a Combination without repetition or $\frac{(n+r-1)!}{r!(n-1)!}$ which gives you the amount of combinations without repetition (as you cannot pick the same student twice.) Now my teacher argues that the answer the book provides is correct. The books answer simply says to use $^{n}C_{r}$ or $\frac{n!}{r!(n-r)!}$. What is the correct method of answering this? The book states 3. 495 is the answer. AI: You have 12 choices for the first student chosen, 11 choices for the next, then 10, then 9. However, this over-counts everything by a factor of 4! (the number of ways in which four objects can be arranged with regard to order). Thus, the answer is $$\frac{12\cdot 11 \cdot 10 \cdot 9}{4!} = 495$$
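In Python, `math.comb` computes exactly this binomial coefficient, and the over-counting argument can be checked directly:

```python
import math

n_ways = math.comb(12, 4)

# 12*11*10*9 ordered picks, divided by the 4! orderings of the chosen four
assert n_ways == (12 * 11 * 10 * 9) // math.factorial(4)
```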
H: Proving a triangle is isosceles In the graphic we have an isosceles triangle, and the problem is: Calculate $\text{m}\angle BCD$. I added the point $E$ at distance $x$ from $C$ because it causes $DE=x$, after playing with Geogebra. With this, the question is easily solved. Of course, since the triangle is determined (the ratios are scale invariant, so we can WLOG assume $x=1$), we can prove $DE=x$ using trigonometry, but what is the elegant, geometric way of showing it? AI: I post a solution for a similar question that I had solved in the past: In that problem $x=10$. In your case the answer will be $80-10=70$. You draw the equilateral triangle $AEC$ and see the similarity between $ABE$ and $CDA$. In your case you would draw an equilateral triangle on side $AC$, but first join $C$ and $D$.
H: Continuity type problem involving a limit? I am trying to solve this problem involving a limit / continuity. https://www.dropbox.com/s/aglohigapdfalny/Screenshot%202013-10-30%2020.22.40.png I've set the equations equal to each other and ended up with: x^2 - 4x - 20 = 0 However, using the quadratic equation and getting both of the roots for x does not give me the right answer. Any idea on how to approach this question? AI: You have to subtract a $-12$, which makes it adding a $12$. The result is $$ x^2-4x+4=0. $$
H: Derivative of $\sin(-x) = -\cos(x)$? Doesn't derivative of $\sin(-x)$ equal: $f'(x) = \cos(-x)(-1)$ $f'(x)= -\cos(-x)$ how do you get $-\cos x$? AI: $\cos(x)$ is an even function which means $\cos(-x)=\cos(x)$. Using this we have $-\cos(-x)=-\cos(x)$
H: Question on the formal completion of a ring $R$ w.r.t. an ideal $J$ I am trying to understand the completion $\hat R$ of a commutative unitary ring $R$ w.r.t. an ideal $J$. Please let me first recall, what I think is true (since if there is a misunderstanding already, I would be very glad for pointing it out). The ideal $J$ defines the $J$-adic topology on $R$ by defining $$ V\subseteq R \mbox{ open} \Leftrightarrow \forall v\in V~\exists m\geq 0\mbox{ with }v+J^m\subseteq V. $$ In this topology, the closure of subset $S\subseteq R$ is the set $\cap_{m\geq 0}(S+J^m)$. This topology on $R$ is Hausdorff iff $\cap_{m\geq 0}J^m=\{0\}$ and in this case (I hope that I got this right! I am not sure about this part.) the topology is induced by a metric $d$, setting $$ d(x,y)=\delta^{-r} $$ for $x,y\in R$ where $r$ is the unique integer such that $x-y\in J^r$ but $x-y\notin J^{r+1}$ (some sources on the internet say only that ''$\delta>1$'' and I think this means that one can choose an arbitrary $\delta$ with this property and they all induce the same topology, correct?). Now the completion $\hat R$ of $R$ w.r.t. $J$ is defined as the inverse limit $$ \hat R:=\underset{\leftarrow}{\lim}\left(\ldots\to R/J^3\to R/J^2\to R/J^1\right) $$ and I have some questions on this construction: In what category is the limit taken? Just in the category of (commutative unitary) Rings? I am asking, since the $R/J^m$ admit also a topology (the quotient topology, i.e. the final topology) induced from the topology on $R$ described above, or the discrete topology. I've read on the internet, that if one takes the limit in the category of topological rings where all the $R/J^m$ are equipped with the discrete topology (perhaps not the quotient topology from above), the induced topology on $\hat R$ is the right one, i.e. $$ q:R\to \hat R $$ is a continuous injective ring homomorphism and the universal one into a completion of the metric on $R$. 
If this is true, what happens if one takes the quotient topology on the $R/J^m$? What if I take the limit in the category of topological spaces instead? Then, $\hat R$ admits a ring structure since every $R/J^m$ has a ring structure. Here the question is the same as before: What if I take once the discrete topology on the $R/J^m$ and once the quotient topology? Thank you! AI: Fix an integer $m\geq 0$, and let $\pi:R\to R/J^m$ be the quotient map. For every subset $S\subset R/J^m$, $\pi^{-1}(S)\subset R$ is an open subset of $R$: for each $u\in\pi^{-1}(S)$, $u+J^m\subset\pi^{-1}(S)$. This means $S$ is open in $R/J^m$, so the quotient topology on $R/J^m$ is the discrete topology. The underlying set that we get is the same whether we take the inverse limit of rings, topological spaces, or sets. One might define $\hat{R}$ to be the inverse limit of the sets $R/J^m$, given the inverse limit ring structure and the inverse limit topology (where $R/J^m$ has the quotient, i.e. discrete, topology).
H: probability of exactly, at least, and expected number It was discovered that 54% of refugees who ask for asylum in the New York immigration court win asylum, but only 12% are granted asylum in the Florida immigration court. Assume that you randomly select 20 refugees who are asking for asylum in the Florida immigration court. A. Find the probability that exactly five asylum seekers are granted asylum. $${20}\choose{5}$$ $$= 15504$$ $$(15504)(.12)^5$$ $$= 0.385789133$$ $$(0.385789133)(.88)^{(20-5)}$$ $$= 0.056700916$$ B. Find the probability that at least three asylum seekers are granted asylum. (I used the same formula above to find the probability for each x value) x | P(x) 0 | 0.077562794 1 | 0.211534892 2 | 0.274033847 3 | 0.224209503 $$ P(x=0) + P(x=1) + P(x=2) + P(x=3) $$ $$= 0.787341026$$ $$1 - 0.787341026 = 0.212658974$$ C. What is the expected number of asylum seekers who win their cases? $$(n)(p)(q)$$ $$(20)(.12)(.88) = 2.112$$ Are A, B, C correct? For B, since probabilities must add up to 1, is it OK to find the probability for 0-3 and subtract that number from 1 to get the probability of at least 3 (3, 4, 5, 6, 7, ...)? AI: If the random variable $X$ is the number accepted, then the probability that $X$ is at least $3$ is $\Pr(X\ge 3)$. This is $1-[\Pr(X=0)+\Pr(X=1)+\Pr(X=2)]$. You were right to subtract from $1$, but should not have used $\Pr(X=3)$. The expected number $E(X)$ is $np$, not $npq$. The calculation of $\Pr(X=5)$ is correct.
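The corrected calculations can be checked with a short binomial-pmf helper in Python (a sketch; the function name is mine):

```python
import math

def binom_pmf(n, k, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 20, 0.12
p_exactly_5 = binom_pmf(n, 5, p)                              # part A
p_at_least_3 = 1 - sum(binom_pmf(n, k, p) for k in range(3))  # part B: drop only P(0), P(1), P(2)
expected = n * p                                              # part C: np, not npq
```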
H: Ultrametric on a normed space (real or complex) Given some normed space $E$ (real or complex), why is it impossible for $E$ to be an ultrametric space? My professor briefly said something along these lines today and I didn't follow... Also, I am not sure what the strong triangle inequality would be for a normed space... Would it be: $\forall x,y \in E$ we have $$||x+y|| \leq ||x||+||y||$$ ? Or does it include 3 elements of $E$, like the definition of the ultrametric: $\forall x,y,z \in E$ we have $$d(x,z) \leq d(x,y)+d(y,z)$$ I feel like I am missing something here and I don't know where to start... Any pointers? I am not as familiar with normed spaces... AI: The strong triangle inequality actually says that for any $x,y,z\in E$, $$\|x-y\|\le\max\{\|x-z\|,\|y-z\|\}\;;$$ however, this is equivalent to saying that for any $x,y\in E$, $$\|x+y\|\le\max\{\|x\|,\|y\|\}\;.\tag{1}$$ In fact this can be strengthened: one can prove that if $(1)$ holds, then $\|x+y\|=\max\{\|x\|,\|y\|\}$ whenever $\|x\|\ne\|y\|$. (The easy proof is in Wikipedia.) Open balls in an ultrametric space are automatically also closed, so every ultrametric space has a clopen base and is therefore zero-dimensional. In particular, it's totally disconnected. If $E$ is a real or complex normed space, then for any non-zero $x\in E$ the set $\{\lambda x:1\le\lambda\le 2\}$ is connected, since it's a continuous image of $[0,1]$, so $E$ cannot be totally disconnected. Thus, the norm metric on $E$ cannot be an ultrametric.
H: Solving integral using contour integration: $I = \int_0^{+\infty} \frac{x^a}{x^2 + 1}\,dx = \frac{\pi / 2} {\cos \frac{\pi a}{2}}$ provided $-1 < a < 1$. I am trying to show that $I = \int_0^{+\infty} \frac{x^a}{x^2 + 1}\,dx = \frac{\pi / 2} {\cos \frac{\pi a}{2}}$ provided $-1 < a < 1$. So I consider $I = \int_{C_R}\frac{z^a}{z^2 + 1}\,dz$ where $C_R$ is the positively oriented contour consisting of the real segment [-R,R] and the upper semi circle of radius R. Now $C_R$ only contains one singularity of the above integrand, $\exp(\frac{i \pi}{2})$. So I take the residue at this point, multiply by $2\pi i $ and get $\pi \left( \cos \frac{a \pi}{2} + i \sin \frac{a \pi}{2} \right)$ Now am I supposed to take the real part of this? And even then, why do I not get the desired result? Note that I should take half of the answer since the complex integral is from negative to plus infinity. However I am aware that there might be an issue here, since $a$ can be negative and not necessarily an integer, thus possibly screwing up the parity of I. Any help would be appreciated. AI: Because of the branch point, you need to deform your contour so as to avoid the branch point. To do this, introduce a little circular bump above the branch point. Thus, consider $$\oint_C dz \frac{z^a}{z^2+1}$$ where $C$ is the above-described deformed semicircle. The contour integral is then equal to $$\int_{-R}^{-\epsilon} dx \frac{x^a}{x^2+1} + i \epsilon \int_{\pi}^0 d\phi \, e^{i \phi} \frac{\epsilon^a e^{i a \phi}}{\epsilon^2 e^{i 2 \phi}+1} \\ + \int_{\epsilon}^R dx \frac{x^a}{x^2+1} + i R \int_{0}^{\pi} d\theta \, e^{i \theta} \frac{R^a e^{i a \theta}}{R^2 e^{i 2 \theta}+1}$$ We consider the limits as $R \to \infty$ and $\epsilon \to 0$; in each of these limits, the fourth and second integrals vanish, respectively, because $a \in (-1,1)$. Therefore, the contour integral is the sum of the first and third integrals. Note that, in the first integral, $-1=e^{i \pi}$. 
Therefore, sub $x \mapsto e^{i \pi} x$ and get $$\left (1+e^{i \pi a} \right ) \int_0^{\infty} dx \frac{x^a}{x^2+1} $$ The contour integral, on the other hand, is equal to $i 2 \pi$ times the residue of the pole inside $C$, i.e., $z = e^{i \pi/2}$; we therefore have $$\int_0^{\infty} dx \frac{x^a}{x^2+1} = i 2 \pi \frac{1}{2 i} \frac{e^{i \pi a/2}}{1+e^{i \pi a}} = \frac{\pi}{2 \cos{(\pi a/2)}}$$
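As a sanity check on the closed form, one can integrate numerically. Substituting $x=\tan\theta$ (so $dx=(1+x^2)\,d\theta$) turns the integral into $\int_0^{\pi/2}\tan^a\theta\,d\theta$, and a crude midpoint rule is accurate enough for a rough comparison, since the endpoint singularities are integrable for $|a|<1$:

```python
import math

def closed_form(a):
    return math.pi / (2 * math.cos(math.pi * a / 2))

def numeric(a, steps=500_000):
    # midpoint rule on int_0^{pi/2} tan(theta)^a dtheta,
    # obtained from the substitution x = tan(theta)
    h = (math.pi / 2) / steps
    return h * sum(math.tan((i + 0.5) * h)**a for i in range(steps))

for a in (0.0, 0.3, -0.4):
    assert abs(numeric(a) - closed_form(a)) < 1e-2
print(closed_form(0.5))  # pi / sqrt(2), about 2.2214
```

This is only a numerical plausibility check, not a substitute for the contour argument, but it agrees with $\pi/(2\cos(\pi a/2))$ across the allowed range of $a$.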
H: Determine a function that satisfies the differential equation $f''(x) = -4y$? How would I come to this solution? Is it a matter of working backwards? $f'(x)= -y^4 + 3$? AI: Assuming you meant: $$y''(x) = -4y(x)$$ The solution for these type of equations is known to be a linear combination of $\sin$ and $\cos$ functions. So, if you assume the solution is: $$y(x) = A\sin(ax)+B\cos(bx)$$ Substitution will lead to $b= a = 2$, since: $$y''(x) = -A a^2 \sin(ax) -B b^2 \cos(bx) = -4\left(A \sin(ax) + B \cos(bx)\right)$$ $$\to\quad a^2=b^2=4$$ As to how you could have figured it out without "guessing" you could use a power series expansion to show this.
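If SymPy is available, both the general solution and the verification can be done mechanically; a short sketch, where `A` and `B` stand for the arbitrary constants:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
y = sp.Function('y')

# General solution of y'' = -4y
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2), -4 * y(x)), y(x))
print(sol)  # a combination of sin(2x) and cos(2x)

# Verify that A*sin(2x) + B*cos(2x) satisfies the equation
cand = A * sp.sin(2 * x) + B * sp.cos(2 * x)
residual = sp.simplify(cand.diff(x, 2) + 4 * cand)
assert residual == 0
```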
H: Partial fraction integration problem in calculus $$ \int \frac{x^2-3x+7}{(x^2-4x+6)^2} dx = \int \frac{1}{x^2-4x+6} + \frac{x+1}{(x^2-4x+6)^2} dx $$ $$=\int \frac{1}{(x-2)^2 +2} +\frac{1}{2}\frac{2x-4}{(x^2-4x+6)^2} + \frac{3}{(x^2-4x+6)^2} dx $$ Here, how can we calculate the last term? [Supplement] -------- In the following article we can find a way to solve it: Partial fraction integration AI: If you haven't been doing calculus with complex numbers, you want to "complete the square" in the denominator of the last term: $$\frac{dx}{(x^2 -4x + 4 + 2)^2} \ = \ \frac{dx}{([x- 2]^2 + 2)^2} $$ and transform to a variable $ \ u \ = \ x - 2 \ $ , with $ \ du \ = \ dx \ \ : \ \rightarrow \ \frac{du}{(u^2 + 2)^2} \ $ . You then have something you can finish using a tangent substitution, $ \ u \ = \ \sqrt{2} \ \cdot \ \tan \theta \ : $ $$\rightarrow \frac{\sqrt{2} \ \cdot \ \sec^2 \theta \ \ d\theta}{[ \ (\sqrt{2} \ \cdot \ \sec \theta)^2 \ ]^2 } \ \ . $$ [Note: this will reduce nicely to just having to integrate a constant times $ \ \cos^2 \theta \ $ , which after using the appropriate trig identity, will produce two terms in your complete anti-derivative.]
H: Prove that a continuous $f$ in $(0,1)$ extends to its one-point compactification if the limits at both endpoints exist and are equal Let $X = (0,1)$. Consider the one-point compactification of $X$ (which is homeomorphic to $S^{1}$). Prove that a bounded continuous function $f:(0,1) \rightarrow R$ is extendable to this compactification if and only if the limits $\lim_{x\to 0+}f(x)$ and $\lim_{x\to 1-}f(x)$ exist and are equal Hi everybody. I have some problem related to compactification. The point is: this problem can be imagined intuitively but I can't prove it formally. So please give me a clear, formal proof with enough persuasive statement. Thanks. AI: Let $X_\infty=X\sqcup\left\{\infty\right\}$ be the one-point compactification of $X$. By definition, a basic neighborhood of $\infty$ in $X_\infty$ is a set of the form $(0,\delta_1)\cup(1-\delta_2,1)\cup\left\{\infty\right\}$ for $0<\delta_1,\delta_2<1$. If $f:X\rightarrow\mathbb{R}$ is continuous, bounded and $\lim_{x\rightarrow 0^+}f(x)=\lim_{x\rightarrow 1^-}f(x)=:L$, define $f(\infty)=L$. Using the fact above, it should be easy for you to show that $f:X_\infty\rightarrow\mathbb{R}$ is continuous at $\infty$, and it is obviously continuous at points of $X$, since $X$ is open in $X_\infty$ and its topology is induced by $X_\infty$'s topology. On the other hand, if $f$ can be continuously extended to $X_\infty$, use again the fact about basic neighborhoods of $\infty$ and show that $\lim_{x\rightarrow 0^+}f(x)=\lim_{x\rightarrow 1^-}f(x)=f(\infty)$.
H: Probability, Statistics - What's the answer? Assume that a programmer makes on average two errors in every hundred lines of code written and that errors occurring in different lines of code are independent. Suppose the programmer writes a software application consisting of 75 lines of code. a. What is the probability that the application contains no errors? b. What is the probability that the application contains exactly one error? Show your working I don't even know how to start! Pls can someone help me! AI: Note: I am assuming in my answer that this is a question about binomial probability. I am assuming that there can be no more than one error per line. I'm not too sure how to give a hint without giving it all away, so here are solutions (in spoilers - mouse over at your own risk!): a. P(Error in One Line) = 0.02. (Multiply by 100: he makes two errors every 100 lines.) So P(No Error in One Line) = 0.98. So $0.98^{75}$ is the probability of no error in all 75 lines, because the events are independent, so you just multiply them. b. We can do this with the binomial formula, which we use to calculate P(k successes in n trials). We're looking for $P(k=1)$. We have $n=75$ trials and a probability of 'success' $p =0.02$. The binomial formula is: $${n \choose k}p^k \cdot (1-p)^{n-k} = {75 \choose 1}\cdot 0.02^1 \cdot (1-0.02)^{74}$$ You can do the computation.
H: Given some ultrametric space $X$, is its completed metric $\hat{X}$ necessarily an ultrametric? I have a lot of difficulty understanding completed metrics. In fact, I don't think I understand them at all! How could I start showing this? I'm nearly certain that the first two properties of a metric would hold, but the third property is the one I'm not too sure about $\forall x,y,z \in X$ $$d(x,z) \leq \max\{d(x,y),d(y,z)\}$$ It almost seems intuitive that it would need to be an ultrametric, just because $i(X)$ is dense in $\hat{X}$, its completed metric (with $i$ an isometry), but I still can't see it... Any clues? Much appreciated! AI: Suppose that $x=\langle x_n:n\in\Bbb N\rangle$, $y=\langle y_n:n\in\Bbb N\rangle$, and $z=\langle z_n:n\in\Bbb N\rangle$ are Cauchy sequences in $X$. Let $\hat x,\hat y$, and $\hat z$ be the equivalence classes of these sequences, interpreted as points of $\widehat X$. We’d like to show that $\hat d(\hat x,\hat y)\le\max\{\hat d(\hat x,\hat z),\hat d(\hat z,\hat y)\}$. For each $n\in\Bbb N$ we know that $$d(x_n,y_n)\le\max\{d(x_n,z_n),d(z_n,y_n)\}\;,$$ so $$\hat d(\hat x,\hat y)=\lim_{n\to\infty}d(x_n,y_n)\le\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}\;.$$ If $\hat d(\hat x,\hat z)<\hat d(\hat z,\hat y)$, then there is an $m\in\Bbb N$ such that $d(x_n,z_n)<d(z_n,y_n)$ for all $n\ge m$, in which case $$\hat d(\hat x,\hat y)\le\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}=\lim_{n\to\infty}d(z_n,y_n)=\hat d(\hat z,\hat y)\;.$$ Similarly, $\hat d(\hat x,\hat y)\le\hat d(\hat x,\hat z)$ if $\hat d(\hat z,\hat y)<\hat d(\hat x,\hat z)$. The only remaining possibility is that $\hat d(\hat x,\hat z)=\hat d(\hat z,\hat y)$, in which case $$\lim_{n\to\infty}\max\{d(x_n,z_n),d(z_n,y_n)\}=\hat d(\hat x,\hat z)=\hat d(\hat z,\hat y)\;.$$ In all cases, therefore, $$\hat d(\hat x,\hat y)\le\max\{\hat d(\hat x,\hat z),\hat d(\hat z,\hat y)\}\;,$$ and $\hat d$ is an ultrametric.
H: Repeated Summation function I am writing a solution to a question, and the solution requires a lot of $\sum$ functions, is there a way to notate many $\sum$ functions in a row? for example is there one function that can simplify: $$\sum_{x=0}^n\left(\sum_{y=0}^{n-x} \frac{n!}{x!y!(n-x-y)!}\times a^xb^yc^{n-x-y} \right) $$into possibly just one function? AI: Multiple summations (especially when the number of summation symbols depends on some variable) are commonly denoted with a single summation sign, where the list of constraints and dummy variables is given under the sign. For example, I could write your example as: $$\sum_{0\le x\le n;0\le y\le n-x}\frac{n!}{x!y!(n-x-y)!}a^xb^yc^{n-x-y}$$ or more eloquently, changing $x\mapsto i$, $y\mapsto n-j$: $$\sum_{0\le i\le j\le n}\frac{n!}{i!(n-j)!(j-i)!}a^ib^{n-j}c^{j-i}$$ or even (letting $z=n-x-y$, so that now we have a triple summation; the constraints of $x,y,z$ being nonnegative integers is implicit): $$\sum_{x+y+z=n}\frac{n!}{x!y!z!}a^xb^yc^z$$
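The equivalence of the three ways of writing the sum is easy to cross-check numerically for small $n$; by the multinomial theorem they should all equal $(a+b+c)^n$. A quick sketch:

```python
from math import factorial

def multinom(n, *ks):
    # multinomial coefficient n! / (k1! k2! ... km!)
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

a, b, c, n = 2, 3, 5, 6

# double-sum form from the question
s1 = sum(multinom(n, x, y, n - x - y) * a**x * b**y * c**(n - x - y)
         for x in range(n + 1) for y in range(n - x + 1))

# re-indexed form with 0 <= i <= j <= n
s2 = sum(multinom(n, i, n - j, j - i) * a**i * b**(n - j) * c**(j - i)
         for j in range(n + 1) for i in range(j + 1))

# symmetric triple-sum form over x + y + z = n
s3 = sum(multinom(n, x, y, z) * a**x * b**y * c**z
         for x in range(n + 1) for y in range(n + 1) for z in range(n + 1)
         if x + y + z == n)

assert s1 == s2 == s3 == (a + b + c)**n  # (2+3+5)**6 = 1_000_000
```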
H: Undetermined Coefficients: $y''-2y'-3y=-3te^{-t}$ I am trying to solve the following DFQ: $y''-2y'-3y=-3te^{-t}$. I solved the homogeneous equation to find that the general solution is $y(t)=c_1e^{3t}+c_2e^{-t}$, and therefore a good guess for the particular equation would be $Y_p(t)=Ate^{-t}$ because it is not proportional to $e^{3t}$ or $e^{-t}$. So, we get $Y_p'(t)=A e^{-t}-A e^{-t} t$ and $Y_p''(t)=A e^{-t} t-2 A e^{-t}$. But, when we plug $Y_p$ into $y''-2y'-3y=-3te^{-t}$ we get: \begin{align*} Ate^{-t} -2Ae^{-t} +2Ate^{-t} -2Ae^{-t} -3Ate^{-t} &= -3te^{-t}\\ -4Ae^{-t} &= -3te^{-t} \end{align*} So, $Y_p$ turns out to be a bad guess. Why is it that $Y_p$ turns out to be a bad guess? I thought that it met all of the criteria, unless I am missing something. AI: I think the general rule (if you look up Stewart or some other "standard" book) is you need a higher order term than what appears on the right hand side, because $e^{-t}$ already solves the homogeneous equation. So in your case we should try $$At^{2}e^{-t}+Bte^{-t}$$I hope this helps as I do not have time to do the computation.
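Carrying out the computation (not shown in the answer above) gives $A=\tfrac38$ and $B=\tfrac3{16}$; a SymPy sketch confirms the amended guess works:

```python
import sympy as sp

t = sp.symbols('t')

# particular solution from undetermined coefficients with the extra
# factor of t, needed because e^{-t} solves the homogeneous equation
Yp = (sp.Rational(3, 8) * t**2 + sp.Rational(3, 16) * t) * sp.exp(-t)

residual = sp.simplify(Yp.diff(t, 2) - 2 * Yp.diff(t) - 3 * Yp
                       + 3 * t * sp.exp(-t))
assert residual == 0  # Yp satisfies y'' - 2y' - 3y = -3 t e^{-t}
```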
H: Throw 10 dice, probability of getting 6 identical numbers? I got this result when I threw a set of 10 dice (btw. for the first time with this set): What is the probability of getting 6 identical numbers (any) with 10 dice, in no particular order? AI: If you count only throws that have exactly 6 identical dice, the probability is exactly $\frac{787500}{6^{10}}$, which is around 1.3%, broken down as follows:

Pattern       Rolls out of 6^10   Percent chance
AAAAAABBBB      6300              0.0104
AAAAAABBBC    100800              0.167
AAAAAABBCC     75600              0.125
AAAAAABBCD    453600              0.75
AAAAAABCDE    151200              0.25

The left column is the pattern of numbers in the roll. The pattern depicted in your question, with 555555 33 44, would be AAAAAABBCC. The middle column counts the number of ways of rolling each pattern out of $6^{10}$ possible different rolls. The right-hand column is the percentage of total rolls that show each pattern: it's $100\cdot m\cdot 6^{-10}$, where $m$ is the middle column. If you also want to count rolls that have 7 to 10 identical dice, add $\frac{97056}{6^{10}} \approx 0.16\%$ for a total of exactly $\frac{884556}{6^{10}}\approx 1.46\%$; these rolls are exceedingly rare. This article explains how I calculated these probabilities, and there is a form at the bottom of the page that invokes a program that tabulates them. The complete tabulation for 10 dice is here.
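The headline counts can be re-derived directly by choosing the repeated face, the positions it occupies, and the leftover dice; a short sketch:

```python
from math import comb

# exactly six identical: pick the face (6 ways), pick which 6 of the 10
# dice show it (C(10,6) ways), and fill the other 4 dice with any of the
# remaining 5 faces (5**4 ways); four dice can never form a second run
# of six, so nothing is double-counted
exactly_6 = 6 * comb(10, 6) * 5**4
assert exactly_6 == 787500

# at least six identical: same idea with j = 6..10 copies of the face,
# the other 10-j dice avoiding it
at_least_6 = sum(6 * comb(10, j) * 5**(10 - j) for j in range(6, 11))
assert at_least_6 == 884556

total = 6**10
print(exactly_6 / total, at_least_6 / total)  # about 0.0130 and 0.0146
```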
H: What is the proof that the total number of subsets of a set is $2^n$? What is the proof that given a set of $n$ elements there are $2^n$ possible subsets (including the empty-set and the original set). AI: Suppose you want to choose a subset. For each element, you have two choices: either you put it in your subset, or you don't; and these choices are all independent. Remark: this works also for the empty set. An empty set has exactly one subset, namely the empty set. And the fact that $2^0=1$ reflects the fact that there is only one way to pick no elements at all!
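The counting argument is easy to confirm by brute force for small $n$: enumerate every subset explicitly and count them.

```python
from itertools import combinations

def count_subsets(elems):
    # enumerate all subsets, including the empty set (r = 0)
    return sum(1 for r in range(len(elems) + 1)
               for _ in combinations(elems, r))

for n in range(10):
    assert count_subsets(range(n)) == 2**n

print(count_subsets(range(5)))  # 32
```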
H: Convergence or divergence The sum is $$\sum_{n=1}^{\infty} \frac{n+2^n}{n+3^n}$$ Is this convergent or divergent? I tried to use the divergence test but the test fails because $a_n = (n+2^n)/(n+3^n) \to 0 $ as $n$ goes to infinity. Could someone point me in the right direction? Thanks AI: If you don't want to resort to the "big mallets" of the Ratio Test or the Limit Comparison Test, you could also note that $$ \frac{n + 2^n}{n + 3^n} \ < \ \frac{2 \ \cdot \ 2^n}{n + 3^n} \ < \ \frac{2 \ \cdot \ 2^n}{ 3^n} \ , \ \text{for} \ n \ \ge \ 1 \ , $$ the last term in this inequality being the general term of a convergent geometric series.
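A quick numerical check of the comparison (reassurance, not a proof): the bound $2(2/3)^n$ dominates every term, and the partial sums stay well under the geometric total $2\cdot\frac{2/3}{1-2/3}=4$.

```python
def a(n):
    return (n + 2**n) / (n + 3**n)

def bound(n):
    # general term of the dominating geometric series
    return 2 * (2 / 3)**n

assert all(a(n) < bound(n) for n in range(1, 60))

partial = sum(a(n) for n in range(1, 60))
print(partial)   # the series settles near 2.3, comfortably below 4
assert partial < 4
```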
H: Why are Polynomial Time Problems Considered Tractable, and Larger Times are Not? I've been reading up on $P=NP$, problem tractability, etc. Here's my question: Why is it that we consider problems that can be solved in polynomial time - or algorithms/problem-solvers running in polynomial time - nice, tractable, solvable, etc. while considering problems with a greater time complexity to be intractable, inefficient to compute, etc.? My thoughts are that: Surely it can't be the case that there is a stark divide (solvable-unsolvable) between algorithms taking polynomial time and algorithms taking marginally more than polynomial time. Couldn't we have an arbitrarily large polynomial expression, such that this polynomial is slower to compute than some non-polynomial? AI: The Garey and Johnson book (Computers and Intractability: A Guide to the Theory of NP-Completeness, 1979) has a detailed explanation of this, early on. I can't recommend it enough. There are several reasons why problems are considered "tractable" if they have polynomial algorithms, but the most important is this: Suppose you have a computer and with it you can feasibly calculate solutions to the Widget Problem for up to 10,000 widgets. Your customer, however, needs to solve larger instances, and if you can help them, they will pay you a lot of money. To help them do this, you will invest some of the fee in upgrading your computer to the latest model, which is twice as fast as your current computer. If your algorithm for solving the Widget Problem is $O(n)$, the new computer will solve instances involving up to 20,000 widgets, a major improvement. If your algorithm for solving the Widget Problem is $O(n^2)$, the new computer will make it feasible to solve instances involving up to 14,142 widgets, also a major improvement. Even if your algorithm is $O(n^3)$, it will be feasible to solve instances involving 12,599 widgets, still a significant improvement. 
But if your algorithm is $O(2^n)$, the new computer will enable you to solve instances of the Widget Problem involving up to 10,001 widgets, which is hardly an improvement at all. That is the difference between polynomial and exponential algorithms. You are correct that there is not exactly a stark divide between polynomial and exponential algorithms. A polynomial-time algorithm that takes $153672n^{537}$ time is probably less useful in practice than an exponential one that takes $\frac{1.00001^n}{153672}$ time. But the sorts of algorithms that actually appear in practice are not usually $O(n^{537})$; they are usually $O(n^3)$ or better. And also, regarding your question 2, any polynomial algorithm will outperform any exponential algorithm for sufficiently large instances of the problem. But yes, there are real examples of nominally exponential-time algorithms that are efficient enough to be useful in practice. The "simplex" algorithm for linear programming problems is one well-known example.
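The widget numbers in the answer come straight from inverting each running time; a tiny calculation reproduces them:

```python
# Old machine handles 10,000 widgets; the new machine does twice the
# operations in the same wall-clock time.  Invert each cost function
# to find the new feasible instance size.
base = 10_000

linear      = 2 * base               # O(n):    n -> 2n
quadratic   = int(base * 2**(1/2))   # O(n^2):  n grows by sqrt(2)
cubic       = int(base * 2**(1/3))   # O(n^3):  n grows by cbrt(2)
exponential = base + 1               # O(2^n):  one extra widget

print(linear, quadratic, cubic, exponential)
# 20000 14142 12599 10001
```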
H: Inputting complicated equation into Wolfram I am having a bear of a time getting this equation into Wolfram so I can solve it for E(r) = 1000. Everything is constant except for little r, which is the cursive r in each term. In case anyone is wondering, this is the expression for the electric field from the center of a hydrogen atom. r is for radius. Can anyone put this in and link me to the solution? Much appreciated. AI: Here is one approach, but you can put in your constants and then play around with defining them using the proper names. solve (.2)/(4 pi .5 r^2)(1 - e^(-2 r /.6)(2(r/.6)^2+2(r/.6)+1)) = 1000 for r See this WA page. By defining parameters, I mean, something like: q = .2, solve (q)/(4 pi .5 r^2)(1 - e^(-2 r /.6)(2(r/.6)^2+2(r/.6)+1)) = 1000 for r Here you can define your parameters: q=.2, t = .4, a=.6, solve (q)/(4 pi t r^2)(1 - e^(-2 r /a)(2(r/a)^2+2(r/a)+1)) = 1000 For the parameters you posted, I got two solutions: $\large r = 7.71723 \times 10^{-20}$ $\large r = 1.19945 \times 10^{-6}$ Note, there are other options: Get a free CAS like SAGE, Maxima or others. These also have working online copies. Use Mathics.org
H: Finding slope of a curve by finding the limits of secant slopes Find the slope of the curve $y=x^2-4x-5$ at the point $P(3,-8)$ by finding the limit of the secant slopes through point $P$. My try: I picked another point $Q$ to get the secant $PQ$. Since $P$ is $(3,-8)$, $Q$ is $(3+h, x^2-4(3+h)^2-5)$. The secant slope is $$\frac{\Delta y}{\Delta x}$$ $$\implies \frac{[x^2-4(3+h)^2-5]- [x^2-4(3)-5]}{(3+h)-3}$$ $$\frac{[x^2-4(h^2+6h+9)-5] - [x^2-17]}{h}$$ $$\frac{[x^2-4h^2-24h-36-5] - [x^2-17]}{h}$$ $$\frac{-4h^2-24h-24}{h}$$ and this is where I'm lost. The answer is $2$, and I don't see how they got that. The limit of the secant slopes through $P$ means that $h$ is getting closer to $0$, right? Please help and explain. Thanks. AI: Your slope calculation is off a bit, you should have $${[(3+h)^2-4(3+h)-5]-[3^2-4(3)-5]\over (3+h)-3}$$ $$={2h+h^2\over h}$$ This is because $Q$ is located at $(3+h,(3+h)^2-4(3+h)-5)$. Then yes, take the limit as $h\to 0$ and you have the slope.
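The limit is easy to see numerically: for this parabola the secant slope through $P$ and $Q$ works out to exactly $2+h$, so shrinking $h$ walks it down to $2$.

```python
def f(x):
    return x**2 - 4*x - 5

assert f(3) == -8        # P = (3, -8) is on the curve

for h in (1.0, 0.1, 0.01, 0.001):
    slope = (f(3 + h) - f(3)) / h      # equals 2 + h, up to rounding
    print(h, slope)

assert abs((f(3 + 1e-6) - f(3)) / 1e-6 - 2) < 1e-3
```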
H: Proof that $e^{n}-\lfloor e^{n} \rfloor \neq \frac{1}{2} $ for all $n\in\mathbb{N}$ Let $n\in\mathbb{N}$, how can I prove that $e^{n}-\lfloor e^{n} \rfloor$ is never equal to $\frac{1}{2}$? Thanks AI: Hint: If we have $$e^{n} = \frac{1}{2} + \lfloor e^n \rfloor \in \Bbb{Q}$$ then $e$ would be an algebraic number (it would be a root of $x^n - q$ for the rational number $q$ on the right-hand side), contradicting the fact that $e$ is transcendental.
H: With the pigeon hole principle how do you tell which are the pigeons and which are the holes? For example, I was reading this example from my textbook: Let S be a set of six positive integers whose maximum is at most 14. Show that the sums of the elements in all the nonempty subsets of S cannot all be distinct. For each nonempty subset A of S the sum of the elements in A denoted $$S_A$$ satisfies $$1\leq S_A \leq 9+10+...+14=69$$ and there are $$2^6-1=63$$ nonempty subsets of S. We should like to draw the conclusion from the pigeonhole principle by letting the possible sums, from 1 to 69 be the pigeonholes with 63 nonempty subsets of S as the pigeons but then we have too few pigeons. Why can't you say that there are 63 pigeonholes and 69 pigeons? AI: In this example you are taking the subsets of $S$ (each of which has a sum) and placing them into the possible sums from $1$ to $69$. The objects being distributed are the pigeons and the values they can land on are the holes, so the subsets must be the pigeons. Think of the sums as bins: we have only $63$ nonempty subsets to place, and $69$ bins to put them in.
H: Help with area of surface of revolution $x=\frac{1}{3}(y^2+2)^{\frac{3}{2}}, 4 \le y \le 5$ The question is: Find the exact area of the surface obtained by rotating the curve about the x-axis. $$x=\frac{1}{3}(y^2+2)^{\frac{3}{2}}, 4 \le y \le 5$$ I'm really confused by how the solution is presented. The formulas I just studied show two possibilities. When rotated about the x-axis: $$S= \int 2 \pi y ds$$ When rotated about the y-axis: $$S= \int 2 \pi x ds$$ And of course, ds is: $$ds= \int \sqrt{1+(\frac{dy}{dx})^2} dx$$ Or: $$ds= \int \sqrt{1+(\frac{dx}{dy})^2} dy$$ So putting this altogether I got: $$\frac{dx}{dy}=\frac{1}{2}(y^2+2)^{\frac{1}{2}}2y$$ Which simplifies to: $$\frac{dx}{dy}=y(y^2+2)^{\frac{1}{2}}$$ So then: $$1+ (\frac{dx}{dy})^2=1+y^2(y^2+2)$$ I assumed that I then setup my integral in the following way (which seems wrong to me): $$2 \pi \int_{4}^{5}\frac{1}{3}(y^2+2)^{\frac{3}{2}}(\sqrt{1+y^2(y^2+2)}dy$$ But everything about that seems wrong. The solution shows them going from my last confident step to this: $$2 \pi \int_{1}^{2}y(y^2+1)dy$$ I know how to integrate that, but I don't know how they got here. Can someone help me to understand this? AI: You were okay up to $$1+ (\frac{dx}{dy})^2 \ = \ 1+y^2(y^2+2) \ = \ y^4 + 2y^2 + 1 $$ $$\Rightarrow \ \sqrt{1+ (\frac{dx}{dy})^2} \ = \ \sqrt{y^4 + 2y^2 + 1} \ = \ \sqrt{(y^2 + 1)^2} \ \ . $$ So $$S= \int 2 \pi \ y \ \ ds \ = \ \int 2 \pi \ y \ \ (y^2 + 1 ) \ \ dy \ . $$ The $ \ " y " \ $ in the surface area integral is the "radius arm" extending from the $ \ x$-axis to the surface. Since you are integrating along the $ \ y$-direction, you don't replace it with a function (least of all with your function $ \ x \ = \ f(y) \ $ ), but you leave it as $ \ " y " \ $ . What I'm not following in your post is why the interval is given as $ \ 4 \ \le \ y \ \le \ 5 \ $ initially, but the limits of the integration became 1 to 2 ...
H: Indefinite Integration (1) $ \int 2^{\log_{e}(x)}dx$ (2) $\int 2^{mx}\cdot 3^{nx}dx$ calculation of some Indefinite Integration (1) $\displaystyle \int 2^{\log_{e}(x)}dx$ (2) $\displaystyle \int 2^{mx}\cdot 3^{nx}dx$ $\bf{My\; Try}:$ for (1) $\log_{e}(x)=t\Rightarrow x=e^t$ and $dx = e^tdt$ $\displaystyle \int 2^t\cdot e^t dt = \int (2e)^tdt = \frac{(2e)^t}{\log_{e}(2e)}+C$ My question is: can we solve (1) without using substitution? Similarly, how can I solve (2) directly, i.e. without using integration by parts? Help required, thanks. AI: For number (1), try using $$2^{\ln{x}} = \left(e^{\ln{2}}\right)^{\ln{x}} = \left(e^{\ln{x}}\right)^{\ln{2}} = x^{\ln{2}}$$ and use the power rule. Likewise for (2), start by writing $$2^{mx} = e^{(m\ln{2})x}$$ and $$3^{nx} = e^{(n \ln{3})x}$$
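Both antiderivatives that these rewritings lead to can be sanity-checked by differentiating them back; the sketch below uses SymPy with numeric spot-checks at sample points to sidestep symbolic-simplification quirks:

```python
import sympy as sp

x, m, n = sp.symbols('x m n', positive=True)

# (1): 2**ln(x) = x**ln(2), so an antiderivative is x**(ln2 + 1)/(ln2 + 1)
F1 = x**(sp.log(2) + 1) / (sp.log(2) + 1)
err1 = (F1.diff(x) - 2**sp.log(x)).subs(x, 1.7).evalf()
assert abs(err1) < 1e-10

# (2): 2**(mx) * 3**(nx) = e**((m ln2 + n ln3) x), so an antiderivative
# is e**(kx)/k with k = m ln2 + n ln3
k = m * sp.log(2) + n * sp.log(3)
F2 = sp.exp(k * x) / k
err2 = (F2.diff(x) - 2**(m*x) * 3**(n*x)).subs({x: 0.9, m: 2, n: 3}).evalf()
assert abs(err2) < 1e-8
```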
H: Sum of multinomial coefficients with constraints The title doesn't reflect the question properly, since I don't know enough about combinatorics to get it right, here. Feel free to change the title. From the multinomial theorem, we can deduce, that the sum over all multinomials is $ \sum_{k_1+\ldots+k_m=n}\binom{n}{k_1,\ldots,k_m}=m^n\;. $ The question is, how can we compute the sum $ \sum_{k_1+\ldots+k_m=n}^{k_1\geq 1,\ldots,k_m\geq 1}\binom{n}{k_1,\ldots,k_m} $, that is the sum over multinomials with positive $k_j$'s? With respect to the polynomial $(x_1+\cdots+x_m)^n$, this count the number of monomials, such that each factor $x_j$ doesn't vanish in it. Don't know if there is a common name for those monomials. AI: We can obtain a messy expression for the answer as follows. If there is no restriction of the $k_i\ge 1$ kind, the sum is $m^n$. Now we deal with the $k_i\ge 1$ restriction by using the Principle of Inclusion/Exclusion. There are $(m-1)^n$ functions that "miss" $1$, and $(m-1)^n$ that miss $2$, and so on up to $m$. So we need to subtract $\binom{m}{1}(m-1)^n$. But we have subtracted too much, since for every $i,j$, with $i\ne j$, we have subtracted twice the functions that "miss" $i$ and $j$. So we must add back $\binom{m}{2}(m-2)^n$. However, we have added back too many times for the functions that miss $i$, $j$, and $k$. And so on. So we get the expression $$m^n-\binom{m}{1}(m-1)^n +\binom{m}{2}(m-2)^n -\binom{m}{3}(m-3)^n +\cdots.$$ For details, and much more, please look at Stirling Numbers of the Second Kind. A great deal is known, but there is no pleasant closed form.
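The inclusion-exclusion expression counts the surjections from an $n$-set onto an $m$-set (each term of the constrained multinomial sum counts the surjections with prescribed fiber sizes), so it can be checked against brute-force enumeration for small cases:

```python
from math import comb
from itertools import product

def surjections(m, n):
    # inclusion-exclusion: m^n - C(m,1)(m-1)^n + C(m,2)(m-2)^n - ...
    return sum((-1)**j * comb(m, j) * (m - j)**n for j in range(m + 1))

def surjections_brute(m, n):
    # count maps {0..n-1} -> {0..m-1} that hit every value
    return sum(1 for f in product(range(m), repeat=n) if len(set(f)) == m)

for m in range(1, 5):
    for n in range(m, 8):
        assert surjections(m, n) == surjections_brute(m, n)

print(surjections(3, 5))  # 150
```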
H: Global approximation theorem in Sobolev space ${\bf Global\ Approximation\ Theorem}$(251 page inEvans's PDE book) : If $U$ is ${\bf bounded}$ in ${\bf R}^n$, then for $u\in W^{k,p}(U)$, there exists $u_n \in C^\infty (U)\cap W^{k,p}(U)$ such that $$u_n \rightarrow u\ in\ W^{k,p}(U)$$ For the proof, we used partition of unity argument. But I cannot understand why we need the assumption of boundedness. Proof : First recall that fact, $$ u^\epsilon \rightarrow u \ in \ W^{k,p}(V)$$ where $V$ has compact closure and $u^\epsilon =\eta_\epsilon\ast u$. For the convenience we let $U={\bf R}^n$ and $V_i$ to be an open ball $B(i,0)$. On $ U_i=V_{i+3} - V_{i+1}$ we can have subordinate partition of unity : $$ f_i \in C_c^\infty(U_i) $$ Hence $$ f_i u\in W^{k,p}(U) $$ If $u^i = (f_iu)^{\epsilon_i}$ then $$ |u^i - f_iu|_{W^{k,p}(U) }< \frac{\delta }{2^{i+1}} $$ for small $\epsilon_i$. Hence for any compact set $V$ in $U$ since $u=\sum f_iu$, $$ |\sum_{i=1}^\infty u^i - u|_{W^{k,p}(V)} \leq \delta$$ AI: Edit: One problem with Evans's proof is that he defines $U_i$ as $$U_i=\{x\in U\big|\mathrm{dist}(x,\partial U)<1/i\},$$ and $V_i=U_{i+3}\setminus\overline{U}_i$. But if, for example, $U=\mathbb R^n$, what are those $V$'s? So, Evans's proof will not work here. In your proof, you use the geometry of $\mathbb R^n$, to say that it is covered by your $U_i$'s. This proves that the theorem holds for $\mathbb R^n$, but might not apply in other cases.
H: In metric spaces, is a function uniformly continuous iff $\delta$ depends on $\varepsilon$? Most book examples end with an expression for $\delta$ that depends on $\varepsilon$ when proving uniform continuity. What I am wondering is whether a function can be uniformly continuous as long as the distance between any two elements of the domain of the metric space is within some number that does not depend on $\varepsilon$ or the where we "fix" the domain. AI: There exist bounded, non uniformly continuous functions, so the answer is no. One example of such a function is $f(x)=\cos(x^2)$, for $x\in\mathbb R$. Then, for all $x,y\in\mathbb R$, you have that $|\cos(x^2)-\cos(y^2)|\leq 2$, but $f$ is not uniformly continuous on $\mathbb R$.
H: probability of no matching or exactly one matching and generalization I had the following question in a midterm today: There are $10$ pairs of shoes. One randomly selects $8$ shoes. What is the probability that : $\textbf{a)}$ There are no matching pairs of shoes in the selected shoes. $\textbf{b)}$ There is exactly one matching pair of shoes. What I wrote as a solution is : $\textbf{a)}$ We randomly select one shoe, then to select the second one so that we don't violate the requirement we can select any of the 18 out of the 19 shoes left. Then for the third shoe that we will select we can select any of the 16 out of the 18 shoes left and so on. So we get that : $$ P = \frac{18}{19}\frac{16}{18}\frac{14}{17}\frac{12}{16}\frac{10}{15}\frac{8}{14}\frac{6}{13}.$$ $\textbf{b)}$ The probability to pick exactly one pair of shoes is the probability of getting a pair of shoes and then getting no matching pairs for the remaining $18$ shoes(and to compute this we apply same logic as in part a) $$P=\frac{10}{\dbinom{20}{2}} \cdot \frac{16}{17}\frac{14}{16}\frac{12}{15}\frac{10}{14}\frac{8}{13}.$$ $\textbf{Firstly:}$ I know that my solution is quite ugly but is it right at least? If not where's the mistake? $\textbf{Secondly:}$ Searching for duplicates of this question I found many other question similar but with some different parameters so here I am proposing the general version: $\textbf{General Version:}$ There are $m$ pairs of shoes. One randomly selects $n$ shoes. What is the probability that there are exactly $k$ pairs of shoes in the selected ones? ($k \le \lfloor n/2 \rfloor$) AI: Here is the solution to the general version: There are m pairs of shoes. One randomly selects n shoes. What is the probability that there are exactly k pairs of shoes in the selected ones? 
$$\frac{\dbinom{m}{k} \dbinom{ m - k }{ n - 2k } 2^{n-2k}}{\dbinom{ 2m }{ n }}\qquad \left(\forall k\leq \left\lfloor\frac{n}{2}\right\rfloor\right)$$ There are $\binom{ 2m }{ n }$ ways to choose $n$ shoes. Assume first that we have $k=0$, then there are $\binom{ m }{ n }$ ways to choose from unique pairs, and $2^n$ ways to choose whether we take the left or right shoe from each pair. In the case $k > 0$ there are $\binom{ m }{ k }$ ways to choose which pairs the matches come from and $\binom{ m - k }{ n - 2k }$ ways to choose which pairs the remaining non-matching pairs come from. Of the $n-2k$ non-matching pairs we selected, there are $2^{n-2k}$ ways to choose whether we take the left or right shoe from each pair.
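The general formula is easy to validate against exhaustive enumeration for small parameters, e.g. $m=5$ pairs with $n=4$ shoes drawn:

```python
from math import comb
from itertools import combinations

def prob_formula(m, n, k):
    return comb(m, k) * comb(m - k, n - 2*k) * 2**(n - 2*k) / comb(2*m, n)

def prob_brute(m, n, k):
    # label each shoe by (pair id, side); count draws with exactly k pairs
    shoes = [(pair, side) for pair in range(m) for side in range(2)]
    hits = total = 0
    for pick in combinations(shoes, n):
        total += 1
        full = sum((p, 0) in pick and (p, 1) in pick for p in range(m))
        hits += (full == k)
    return hits / total

m, n = 5, 4
for k in range(n // 2 + 1):
    assert abs(prob_formula(m, n, k) - prob_brute(m, n, k)) < 1e-12
```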
H: If $G$ is simple with diameter two and maximum degree $|V(G)| - 2$, then $|E(G)| \geq 2|V(G)| - 4$ This is my try: Because the diameter of $G$ is two and the maximum degree is $|V(G)| - 2$, where $|V(G)|$ is the number of vertices, the degree of any vertex in $G$ is greater than or equal to two. And since $\sum \delta(v_i) = |E(G)|$, where $\delta(v_i)$ represents the degree of a vertex in $V(G)$ and $|E(G)|$ the number of edges in $G$, then $\sum \delta(v_i) \geq 2(|V(G)| - 2) = 2|V(G)| - 4$. AI: Let $G$ be a graph of diameter $2$ and order $n=|V(G)|$, and suppose that $G$ has a vertex $v$ of degree $n-2$. It is easy to see that $G-v$ is a connected graph. [There is a vertex $u$ of $G-v$ which is not adjacent to $v$; every other vertex of $G-v$ is connected to $u$ by a path in $G$ of length at most $2$; since there is no edge $uv$, that path must be contained in $G-v$.] Since $G-v$ is a connected graph with $n-1$ vertices, it must have at least $n-2$ edges; added to the $n-2$ edges which are incident with $v$, this makes at least $2n-4$ edges in $G$. Alternatively, without using the fact that a connected graph on $m$ vertices must have at least $m-1$ edges: Let $N(x)$ denote the neighborhood of vertex $x$, i.e., the set of all vertices adjacent to $x$. Since $|N(v)|=n-2$, there is a unique vertex $u\in V(G)\setminus[\{v\}\cup N(v)]$. Let $X= N(v)\setminus N(u)$; so $|X|=n-2-|N(u)|$.Each vertex in $X$ is connected to $u$ by a path of length $2$, therefore, each vertex in $X$ is adjacent to some vertex in $N(u)$. Now there are $ n -2$ edges incident with $v$, and $|N(u)|$ edges incident with $u$, and at least $|X|$ edges joining vertices in $X$ to vertices in $N(u)$.Hence$$|E(G)|\ge(n-2)+|N(u)|+|X|=(n-2)+|N(u)|+(n-2-|N(u)|)=2n-4.$$
H: Cesaro mean approaching average of left and right limits Let $f\in L^1(\mathbb{R}/2\pi\mathbb{Z})$, where $\mathbb{R}/2\pi\mathbb{Z}$ means that $f$ is periodic with period $2\pi$. Let $\sigma_N$ denote the Cesaro mean of the Fourier series of $f$. Suppose that $f$ has a left and right limit at $x$. Prove that as $N$ approaches infinity, $\sigma_N(x)$ approaches $\dfrac{f(x^+)+f(x^-)}{2}$. We can write $\sigma_N(x)=(f\ast F_N)(x)$, where $F_N$ is the Fejer kernel given by $F_N(x)=\dfrac{\sin^2(Nx/2)}{N\sin^2(x/2)}$. We have $F_N(x)\geq 0$ and $\int_{-\pi}^{\pi}F_N(x)dx=2\pi$. We want to show that $\left|\dfrac{f(x^+)+f(x^-)}{2}-(f\ast F_N)(x)\right|\rightarrow 0$ as $N\rightarrow\infty$. How can we go from here? AI: Note that you can write $$(f*F_N)(x)-\frac{f(x^+)+f(x^-)}{2}=$$$$\int_{-\pi}^{0}f(x-y)F_N(y)\,dy-\frac{f(x^+)}{2}+\int_{0}^{\pi}f(x-y)F_N(y)\,dy-\frac{f(x^-)}{2}=$$$$\int_{-\pi}^{0}(f(x-y)-f(x^+))F_N(y)\,dy+\int_{0}^{\pi}(f(x-y)-f(x^-))F_N(y)\,dy,$$ since $F_N$ is symmetric around $0$, its integral from $-\pi$ to $0$ is equal to its integral from $0$ to $\pi$; with the convolution normalized by $\frac{1}{2\pi}$ (so the kernel has total mass $1$), each of these halves equals $\frac{1}{2}$. Now, use that $|f(x-y)-f(x^+)|$ is small for $y$ small and negative, also $|f(x-y)-f(x^-)|$ is small for $y$ small and positive, and the fact that $F_N$ is nonnegative and goes uniformly to $0$ away from $0$.
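The convergence can be observed numerically. The sketch below convolves a sample function with a jump at $0$ (namely $f(y)=y+1$ for $y>0$ and $f(y)=0$ for $y<0$, so the one-sided limits are $1$ and $0$) against the normalized Fejér kernel; since the kernel is even, the convolution at $0$ reduces to a single weighted integral.

```python
import numpy as np

N = 800
t = np.linspace(-np.pi, np.pi, 400001)
dt = t[1] - t[0]

# Fejer kernel sin^2(Nt/2) / (N sin^2(t/2)); the removable singularity
# at t = 0 is patched with its limit value N
s = np.sin(t / 2)
F = np.full_like(t, float(N))
nz = np.abs(s) > 1e-12
F[nz] = np.sin(N * t[nz] / 2)**2 / (N * s[nz]**2)

# normalized mass over one period should be 1
norm = np.sum(F) * dt / (2 * np.pi)
assert abs(norm - 1) < 1e-3

# sigma_N(0) = (1/2pi) int f(y) F_N(y) dy  (F_N is even)
f = np.where(t > 0, t + 1.0, 0.0)
sigma_at_0 = np.sum(f * F) * dt / (2 * np.pi)
print(sigma_at_0)   # close to (f(0+) + f(0-))/2 = 0.5
assert abs(sigma_at_0 - 0.5) < 0.05
```

Increasing `N` pulls `sigma_at_0` closer to $\tfrac12$, matching the theorem.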
H: Suppose $f:[0,1] \rightarrow \mathbb{R}$ is continuous and $\int_0^x f(t)dt = \int_x^1 f(t)dt$. Prove that $f(x) = 0$ for all $x$ Suppose $f:[0,1] \rightarrow \mathbb{R}$ is continuous and $\int_0^x f(t)dt = \int_x^1 f(t)dt$. Prove that $f(x) = 0$ for all $x$. So, I can intuitively see that this is true. My proof mostly makes sense, I think, but I'm not sure if it covers the case where there are negative and positive values in each segment, resulting in a mean value of 0, but still having nonzero values. Can someone tell me how to cover that, or how this does? Suppose $f(x) \neq 0$ for all x $\in$ [0,1]. By the mean value theorem, there exist some c and d for which $f(c)(x) = f(d)(1-x) = \int_0^x f(t)dt$. Since x $\neq$ (1-x) for all x $\in$ [0,1], it follows that $f(c) = f(d) = 0$. Therefore, $f(x) = 0$ for all $x \in [0,1]$. AI: The Fundamental Theorem of Calculus proof suggested in a comment by Peter Tamaroff is one short line, and one cannot do better. Here is a more awkward proof that does not use the FTC. Suppose that $f(x)\ne 0$ for some $x$. Say for example that $f(a)=c\gt 0$ for some $a$. By continuity we can assume that $a$ is not $0$ or $1$. Then there is an interval $(a-\epsilon,a+\epsilon)$ contained in $(0,1)$ such that $f(x)\gt c/2$ in this interval. Note that $\int_{a-\epsilon}^{a+\epsilon}f(t)\,dt\gt c\epsilon\gt 0$. Let $x_1=a-\epsilon$, and $x_2=a+\epsilon$. Then if $\int_0^{x_1} f(t)\,dt=\int_{x_1}^1 f(t)\,dt$, we must have $\int_{0}^{x_2} f(t)\,dt \gt \int_{x_2}^1 f(t)\,dt$, contradicting the hypothesis, which asserts equality for every $x$.
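One way to write out the one-line FTC argument referred to at the start of the answer (my phrasing, which may differ from the comment being cited):

```latex
\text{Let } F(x)=\int_0^x f(t)\,dt. \text{ The hypothesis says } F(x)=F(1)-F(x),
\text{ i.e. } F(x)=\tfrac{1}{2}F(1) \text{ for all } x\in[0,1];
\text{ so } F \text{ is constant, and } f(x)=F'(x)=0 \text{ by the FTC.}
```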
H: Show that $\sqrt{n^2+1}-n$ converges to 0 I want to use the definition of the limit to show that $\sqrt{n^2+1}-n$ converges to 0. The definition is as follows: if $\sqrt{n^2+1}-n$ converges to 0, then $\forall \epsilon>0$, there exists an $N>0$ such that $n\ge N \implies \mid\sqrt{n^2+1}-n\mid<\epsilon$. Now I want to start backwards in order to figure out how to pick N. I know: $\mid\sqrt{n^2+1}-n\mid=\sqrt{n^2+1}-n$ since $n$ is a natural number and $\sqrt{n^2+1}>n$. So I need to pick an N such that $\sqrt{n^2+1}-n<\epsilon$. I tried multiplying $\sqrt{n^2+1}-n$ by $\frac{\sqrt{n^2+1}+n}{\sqrt{n^2+1}+n}$ but that didn't really seem to help. Do you have any ideas on how to find this N? Thanks AI: Multiplying by the conjugate, exactly as you tried, does the job: $$\left\vert\sqrt{n^{2}+1} - n - 0\right\vert = \frac{1}{\sqrt{n^{2}+1} + n} < \frac{1}{2n}$$ Given $\epsilon > 0$, $$n > N \equiv \left\lfloor\frac{1}{2\epsilon}\right\rfloor + 1 \quad\implies\quad \left\vert\sqrt{n^{2}+1} - n\right\vert < \frac{1}{2n} < \epsilon \quad\implies\quad \lim_{n \to \infty}\left(\sqrt{n^{2}+1} - n\right) = 0$$
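A quick numerical sanity check of the bound and of the choice of $N$ (an illustrative sketch, not part of the original answer):

```python
import math

def a(n):
    return math.sqrt(n * n + 1) - n

# the bound used above: 0 < sqrt(n^2+1) - n < 1/(2n)
for n in range(1, 5000):
    assert 0 < a(n) < 1 / (2 * n)

# the choice N = floor(1/(2*eps)) + 1 works: every later term is below eps
eps = 1e-3
N = int(1 / (2 * eps)) + 1
assert all(a(n) < eps for n in range(N + 1, N + 2000))
```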
H: Fundamental Theorem of Calculus help I am working on a problem set assigned to us by my professor. One item is to find the derivative of $g(x)=\int_a^b \dfrac{u^2-1}{u^2+1}~du$ where $a=2x$ and $b=3x$. A hint was given and it confuses me: $g(x)=\int_a^b f(u)~du= \int_a^0 f(u)~du + \int_0^b f(u)~du$ where $a=2x$ and $b=3x$ Why is this so? AI: The hint is just the additivity of the integral over subintervals: splitting at the fixed point $0$ leaves each piece with one constant limit and one variable limit, so the Fundamental Theorem of Calculus applies directly to each. But you really do not even need the hint. Just use the chain rule on both endpoints: $g'(x) = f(b)\frac{db}{dx} - f(a)\frac{da}{dx}= 3\frac{9x^2-1}{9x^2 +1}-2\frac{4x^2-1}{4x^2 +1}$. This is also what the hint would give you, except that you would have two extra terms in which $f$ is evaluated at the constant limit $0$ and multiplied by its derivative (which is zero), so they vanish. In general, if $g(x) = \int_{a(x)}^{b(x)}f(u)\,du$ then $g'(x) = f(b(x))\cdot b'(x) - f(a(x))\cdot a'(x)$.
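The closed form for $g'(x)$ can be checked with a finite difference against a direct quadrature of $g$ (an illustrative sketch; the step sizes are arbitrary choices):

```python
def f(u):
    return (u * u - 1) / (u * u + 1)

def g(x, steps=20000):
    # composite trapezoid approximation of the integral from 2x to 3x
    a, b = 2 * x, 3 * x
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return s * h

def g_prime(x):
    # the closed form from the answer: f(3x)*3 - f(2x)*2
    return 3 * f(3 * x) - 2 * f(2 * x)

x, h = 1.3, 1e-4
numeric = (g(x + h) - g(x - h)) / (2 * h)   # central finite difference
assert abs(numeric - g_prime(x)) < 1e-4
```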
H: Intersection of an Infinite Collection of Sets - null set or infinity? Let's say we have a collection of sets $\bigcap_{i=1}^\infty A_i$ where $A_i=[i,\infty]$. In other words: $$ \bigcap_{i=1}^\infty A_i = [1,\infty] \cap [2,\infty] \cap [3,\infty] \cap ... [\infty,\infty] $$ I was thinking that the intersection is empty, $\varnothing$. However, I was thinking it could also be $\{\infty\}$. Which evaluation is correct? What is infinity anyway? Is it some really high number that ends somewhere, or does it keep on going.....forever, with no ending. AI: Every $A_i$ contains $\infty$. For any finite $n\in \mathbb{N}$ the set $A_{n+1}$ does not contain $n$. Hence the intersection contains only $\infty$. In the extended reals you can for all intents and purposes treat infinity as an actual number.
H: How many sequences of n letters chosen from { A,B, ..., Z } are in non-increasing, or non-decreasing order I am studying for a test and this is one of the practice questions. I really don't understand how to start this? It looks like a derangement question to me but I might be overthinking it AI: Let me see if I understand the question. So AAABNN counts (nondecreasing), and NNBAAA counts (nonincreasing), but NNAAAB and BANANA don't count. Have I got that right? Looks like a simple in-and-out ("inclusion-exclusion") problem to me: Answer = #(nonincreasing sequences) + #(nondecreasing sequences) - #(constant sequences), since the constant sequences are the only ones that are both nonincreasing and nondecreasing. The number of constant sequences is exactly $26$ (assuming $n\gt0)$. To specify a nondecreasing sequence of length $n$, since the order is determined, all you need to know is how many of each letter. I.e., $26$ nonnegative integers adding up to $n$, that's $\binom{n+25}{25}$ or something like that. So your final answer is$$2\binom{n+25}{25}-26$$. P.S. The binomial coefficient $\displaystyle\binom{n+25}{25}$ comes from setting $k=26$ in $\displaystyle\binom{n+k-1}{k-1}$ which is the formula for the number of ordered $k$-tuples $(x_1,x_2,\dots,x_k)$ of nonnegative integers such that $x_1+x_2+\dots+x_k=n$. This is the so-called "stars-and-bars" theorem, which has probably been covered in class and you will need to know it for the exam. You can find this theorem (and a proof) on this Wikipedia page; it's Theorem Two. Watch out, I think they switched the letters around.
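The closed form $2\binom{n+k-1}{k-1}-k$ can be checked against brute force for small alphabets (a sketch, not part of the original answer; `brute` and `closed_form` are my helper names):

```python
from itertools import product
from math import comb

def brute(n, k):
    # count length-n strings over an ordered alphabet of size k that are
    # nondecreasing or nonincreasing
    good = 0
    for s in product(range(k), repeat=n):
        nondec = all(a <= b for a, b in zip(s, s[1:]))
        noninc = all(a >= b for a, b in zip(s, s[1:]))
        good += nondec or noninc
    return good

def closed_form(n, k):
    # 2*C(n+k-1, k-1) - k, as in the answer (with k = 26 for A..Z)
    return 2 * comb(n + k - 1, k - 1) - k

for n in range(1, 6):
    for k in range(1, 6):
        assert brute(n, k) == closed_form(n, k)
```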
H: Calculus questions Could anyone tell me how to solve 9b and 10? I've been thinking for five hours, I really need help. AI: Well, notice $f'' = \frac{1}{x} \implies f''' = -\frac{1}{x^2} \implies f'''' = \frac{2}{x^3} \implies f^{(5)} =-\frac{2 \times 3 }{x^4}$ $$ \therefore f^{(n)} = \frac{(-1)^{n}(n-2)!}{x^{n-1}} \quad\text{for } n\ge 2$$ To see why this is true, induct on $n$. This is routine. For part $10$: since $f$ is invertible, there exists $f^{-1}$ such that $f \circ f^{-1} = Id$. Now we apply the chain rule: $$ 0 < 1 = Id' = (f \circ f^{-1})'(x) = f'(f^{-1}(x))\,(f^{-1})'(x) $$ $$ \frac{1}{f'(f^{-1}(x))} = \frac{d f^{-1}(x)}{dx} $$ Verification of formula: If $n=2$, then $f'' = \frac{(-1)^2 (2 -2)!}{x} = \frac{1}{x}$. Remember $0! = 1 \; \; (proof?)$ Now suppose $f^{(n)} = \frac{(-1)^{n}(n-2)!}{x^{n-1}}$ is true for $n$. We show this implies the $n+1$ case is true. $$ f^{(n+1)} = \frac{d f^{(n)}}{dx} = (1-n)(-1)^n(n-2)!x^{1 -n - 1} = (-1)(n-1)(-1)^n(n-2)!x^{-n}$$ $$ = \frac{(-1)^{n+1}(n-1)!}{x^n}$$ The problem is now solved by induction.
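Both parts can be sanity-checked mechanically (a sketch, not from the original answer): the first loop replays the induction by tracking the $n$th derivative as $c/x^m$, and the second tests the inverse-function rule with $f=\exp$, $f^{-1}=\log$:

```python
import math
from math import factorial

# f'' = 1/x, i.e. coefficient c = 1 and power m = 1 in c / x**m
c, m = 1, 1
for n in range(2, 15):
    # claimed closed form for the nth derivative: (-1)^n (n-2)! / x^(n-1)
    assert c == (-1) ** n * factorial(n - 2) and m == n - 1
    c, m = -m * c, m + 1          # d/dx (c / x**m) = -m*c / x**(m+1)

# inverse-function rule: (f^{-1})'(x) = 1 / f'(f^{-1}(x)), here f = exp
x, h = 2.7, 1e-6
numeric = (math.log(x + h) - math.log(x - h)) / (2 * h)
assert abs(numeric - 1 / math.exp(math.log(x))) < 1e-8
```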
H: Determining the order of the kernel and image Let $G$ be a finite group. Let $G'$ be a group and let $\phi : G \to G'$ be a homomorphism. Let $K \leq G$ be the kernel of $\phi$ and $I \leq G'$ be the image of $\phi$. (a) Find a formula that relates the number of elements in $G$, $K$, and $I$. (b) Suppose $H \leq G$, Find a formula for the number of elements of $\phi(H)$. I'm not sure if (a) is some sort of application of the Main Homomorphism Theorem or not. I'm hoping to get some hints to sort of push me in the right direction. AI: This is absolutely an application of the first isomorphism theorem. If you can recall the statement, the formula should follow quickly.
H: ${\rm erf}$ question: I am stuck can somebody help me? I am stuck at this problem. Show that $\displaystyle{\int_{a}^{b}{\rm e}^{y}\,{\rm d}t = \dfrac{1}{2}\sqrt{\pi\,}\,\left[{\rm erf}\left(b\right) - {\rm erf}\left(a\right)\right]}$ where $y = -t^{2}$ Thanks AI: If you can't use integration by parts, then I guess you're supposed to use the definition of the error function $$\frac{2}{\sqrt{\pi}}\int_0^te^{-s^2}\,ds = \mathrm{erf}\,(t),$$ to get that $$\int_a^b e^{-t^2}\,dt = \int_a^0 e^{-t^2}\,dt + \int_0^be^{-t^2}\,dt = \int_0^b e^{-t^2}\,dt - \int_0^ae^{-t^2}\,dt = \ldots$$ I leave it to you to finish this calculation.
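A numerical check of the identity using the standard library's `math.erf` (an illustrative sketch, not part of the original answer; the endpoints are arbitrary):

```python
import math

def lhs(a, b, steps=100000):
    # trapezoid approximation of the integral of e^{-t^2} from a to b
    h = (b - a) / steps
    s = 0.5 * (math.exp(-a * a) + math.exp(-b * b))
    s += sum(math.exp(-(a + i * h) ** 2) for i in range(1, steps))
    return s * h

for a, b in [(-0.5, 1.5), (0.0, 2.0), (-2.0, -1.0)]:
    rhs = 0.5 * math.sqrt(math.pi) * (math.erf(b) - math.erf(a))
    assert abs(lhs(a, b) - rhs) < 1e-8
```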
H: Let $f(x)=|x|$ for $x$ rational and $f(x)=1$ for $x$ irrational. Show $f$ has limits at $1$ and $-1$. Find them. Let $f(x)=|x|$ for $x\in\mathbb{Q}$ and $f(x)=1$ for $x\in\mathbb{R}\setminus\mathbb{Q}$, where $\mathbb{Q}$ is the set of rationals. (a) Show $f$ has limits at $1$ and $-1$. Find them. (b) Show that if $c\not=1$ and $c\not=-1$, then $f$ doesn't have a limit at $c$. Attempt: (a) Since $f(x)$ is either $|x|$ or $1$, we have $|f(x)|\leq|x|$ $\forall x\in\mathbb{R}$. Let $\epsilon>0$, $\delta(\epsilon)>0$. I don't know what $\delta(\epsilon)>0$ should be. But from there I think a basic outline would be something like: Then it follows that $1<|x|<\delta$, then $|f(x)|<\epsilon$. So $\lim\limits_{x\to 1}f(x)=1$. But I'm not sure. Any help/hints would be greatly appreciated. (b) Suppose $c\not=1$. Let $(x_n)$ be a sequence of rationals converging to $c$. Let $(y_n)$ be a sequence of irrationals converging to $c$. These sequences exist because both $\mathbb{Q}$ and $\mathbb{R}\setminus\mathbb{Q}$ are dense in $\mathbb{R}$. Then $$\lim\limits_{n\to\infty}f(x_n)=\lim\limits_{n\to\infty}x_n=c\not=1=\lim\limits_{n\to\infty}1=\lim\limits_{n\to\infty}f(y_n).$$ By the Sequential Criterion, $f$ does not have a limit at $c\not=1$. For $c\not=-1$, I don't know if I can use the same method. Any help/hints would be greatly appreciated. AI: Hint: $f(x)=-x$ for all $x\in(-\infty,0)\cap\mathbb{Q}$, $f(x)=x$ for all $x\in[0,\infty)\cap\mathbb{Q}$, and $f(x)=1$ for all $x\in\mathbb{Q}^c$. Can you do it now? Added: Take $c\in\mathbb{R}$. If $x\to c$ via rationals then $\lim_{x\to c}f(x)=|c|$, and if $x\to c$ via irrationals then $\lim_{x\to c} f(x)=1$. For the limit to exist at $c$ we need these two limits to be equal, so $|c|=1$, i.e. $c=1$ or $c=-1$. For $c\in\mathbb{R}\setminus\{1,-1\}$, use the sequential argument to get a contradiction.
H: Given probability of event If I'm given the probability that a certain event will occur, how can I find the probability of at least $x$ events occurring given $y$ opportunities? For example, the probability of a single coin landing heads is 50%. What are the chances of landing at least 2 heads out of 5 coins? AI: Model event $E$ happening as a Binomial random variable: if the probability of event $E$ happening is $p$, then the probability of it happening exactly $m$ times in $n$ trials is $$\mathbb{P}[X=m]=\binom{n}{m}p^m (1-p)^{n-m}.$$ For $m_1\neq m_2$ the events $\{X=m_1\}$ and $\{X=m_2\}$ are disjoint, so the probability of at least $x$ occurrences in $y$ trials is the sum of the corresponding probabilities, $\sum_{m=x}^{y}\mathbb{P}[X=m]$ with $n=y$.
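For the coin example, the binomial sum can be cross-checked against direct enumeration of all $2^5$ equally likely outcomes (an illustrative sketch, not part of the original answer):

```python
from itertools import product
from math import comb

def p_at_least(x, y, p=0.5):
    # sum the disjoint binomial probabilities P[X = m] for m = x..y
    return sum(comb(y, m) * p**m * (1 - p) ** (y - m) for m in range(x, y + 1))

# brute force over all 2^5 coin outcomes
brute = sum(sum(flips) >= 2 for flips in product((0, 1), repeat=5)) / 2**5
assert abs(p_at_least(2, 5) - brute) < 1e-12
assert abs(p_at_least(2, 5) - 13 / 16) < 1e-12
```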
H: Trapezium drawn in the circle A circle is drawn inside a trapezium such that it touches all the sides of trapezium. The line joining the midpoints of the non parallel sides divides the trapezium in two parts with the area in the ratio of 3:5. If the length of the non parallel sides are 6 cm and 10 cm, then what is the length of the longer parallel side? AI: The midline divides the trapezoid into two equally high trapezoids. Its length is also the mean of the length of the two parallel sides. (You can see that by extending the nonparallel sides until they intersect in a point. The two parallel sides and the midline then forms the base line in three similar triangles). Call the long parallel side $AB$, and the short one $CD$ with $AD$ and $BC$ being the two other sides of the trapezoid. Further let $r$ be the radius of the circle. The trapezoid then has height $2r$. Further, let $E, F, G, H$ be the places the circle touches the trapezoid at $AB$, $BC$, $CD$ and $AD$ respectively. The hypothesis about the midline dividing the area (note that the two parts are also trapezoids) then dictates that $$ 3\frac{AB + \frac{AB + CD}{2}}{2}r = 5\frac{CD + \frac{AB + CD}{2}}{2}r \\ {3}AB + 3\frac{AB + CD}{2} -5CD -5\frac{AB+CD}{2} = 0\\ 2AB - 6CD = 0\\ AB = 3CD $$ so the midline has length $2CD$. Since the sides touch the circle we must have the following: $$ AE = AH\\ BE = BF\\ CF = CG\\ DG = DH $$ Since $DH + AH + BF + CF = 16$, then we must have $AE+BE + CG + DG = 16$. But this also equals $4CD$, so $CD = 4$. The long parallel side $AB$ must therefore be $12$.
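The numbers found above can be double-checked quickly (a sketch, not part of the original answer; the Pitot theorem for tangential quadrilaterals gives an independent consistency check):

```python
# values derived in the answer: CD = 4, AB = 3*CD = 12
CD = 4
AB = 3 * CD
midline = (AB + CD) / 2
# the midline splits the trapezoid into two trapezoids of equal height,
# so their areas are in the ratio (CD + midline) : (AB + midline) = 3 : 5
assert (CD + midline) / (AB + midline) == 3 / 5
# Pitot theorem: in a tangential quadrilateral, AB + CD = AD + BC = 6 + 10
assert AB + CD == 6 + 10
assert AB == 12
```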
H: Let $a_n = \int\limits_{0}^{n} e^{-x^4} dx$. Does $\{ a_n \}_{n \rightarrow \infty}$ converge? Let $a_n = \int\limits_{0}^{n} e^{-x^4} dx$. Does $\{ a_n \}_{n \rightarrow \infty}$ converge? $\{ a_n \} =\{ \int\limits_{0}^{1} e^{-x^4} dx, \int\limits_{0}^{2} e^{-x^4} dx, ..., \int\limits_{0}^{\infty} e^{-x^4} dx \}$ So we need need only to check that the definite integral $\int\limits_{0}^{\infty} e^{-x^4} dx$ converges By using Wolfram Alpha, $\int\limits_{0}^{\infty} e^{-x^4} dx \} = \Gamma \left( \frac54 \right) \approx 0.906402$ where $\Gamma$ is the Gamma function Therefore $\{ a_n \}$ converges, to $\Gamma \left( \frac54 \right)$. But what if I can't use Wolfram Alpha? How can I solve for this integration by hand? Sorry if this makes me look stupid, I don't recall learning any techniques from Calculus that can help me solve this integration. Thanks in advance! AI: We do not need an explicit expression to show that an improper integral converges. The sequence $(a_n)$ is obviously increasing. It is bounded above by $\int_0^1 e^{-x^4}\,dx+\int_1^\infty e^{-x}\,dx$. To be explicit, the first integral is less than $1$, and the second is $e^{-1}$, so the sequence $(a_n)$ is bounded above by $1+e^{-1}$. Any increasing sequence which is bounded above converges.
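A numerical check that the limit is indeed $\Gamma(5/4)$ (an illustrative sketch, not part of the original answer; the truncation point $5$ is an arbitrary choice, justified because the tail beyond it is below $e^{-625}$):

```python
import math

def integrand(x):
    return math.exp(-(x ** 4))

# trapezoid rule on [0, 5]
steps = 100000
h = 5.0 / steps
approx = (0.5 * (integrand(0.0) + integrand(5.0))
          + sum(integrand(i * h) for i in range(1, steps))) * h
assert abs(approx - math.gamma(1.25)) < 1e-6
```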
H: How to compute the integral of an absolute value How do we compute the integral of an absolute value? $\int |x|\,dx$ $\int_0^1 |x|\,dx$ AI: An antiderivative of $|x|$ is $\frac{1}{2}\,x|x|$: for $x\gt 0$ it equals $\frac{1}{2}x^{2}$, for $x\lt 0$ it equals $-\frac{1}{2}x^{2}$, and in either case (and at $x=0$) its derivative is $|x|$. Hence $$\int|x|\,dx=\frac{1}{2}\,x|x|+C,\qquad \int_{a}^{b}|x|\,dx=\frac{1}{2}\left(b|b|-a|a|\right).$$ In particular $\int_{0}^{1}|x|\,dx=\frac{1}{2}$.
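The closed form $\frac{1}{2}(b|b|-a|a|)$ can be verified against numerical integration (an illustrative sketch, not part of the original answer):

```python
def closed_form(a, b):
    # the claimed value of the integral of |x| from a to b
    return (b * abs(b) - a * abs(a)) / 2

def numeric(a, b, steps=100000):
    # composite trapezoid rule
    h = (b - a) / steps
    s = 0.5 * (abs(a) + abs(b)) + sum(abs(a + i * h) for i in range(1, steps))
    return s * h

for a, b in [(0.0, 1.0), (-2.0, 3.0), (-1.5, -0.25), (1.0, 4.0)]:
    assert abs(closed_form(a, b) - numeric(a, b)) < 1e-6
```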
H: About Diophantine Equation This is a problem about Diophantine equation. The problem is the following. If $ax+by=c$ is solvable and $b\ne0$, then prove that it has a solution $x_0$, $y_0$ with $0 \le x_0 <|b|$ First I thought that $x=x_1-\frac bg k$ , $y=y_1+\frac ag k$, where $g=gcd(a,b)$, $k$ is integer and $x_1$, $y_1$ is initial solution. But, after this step, I am stuck and can't find what to do next. Please help me solve this problem!!! Thank you AI: You are nearly finished. Let us use a weaker result than the one you quoted. If $(x_1,y_1)$ is a solution, so is $(x_1-bk,y_1+ak)$ for any integer $k$. If $a$ and $b$ are not relatively prime, this does not produce all solutions, but it is enough for our needs. By the result often called the Division Algorithm, there is a unique $r$, with $0\le r\lt |b|$, and an integer $q$, such that $x_1=q|b|+r$. This $r$ is the $x_0$ that we want. We pick $k=q$ if $b$ is positive, and $k=-q$ if $b$ is negative.
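The statement can be brute-force tested on small coefficients (a sketch, not part of the original answer; `small_x_solution` is my helper name):

```python
from math import gcd

def small_x_solution(a, b, c):
    # brute force: look for 0 <= x0 < |b| with (c - a*x0) divisible by b
    for x0 in range(abs(b)):
        if (c - a * x0) % b == 0:
            return x0, (c - a * x0) // b
    return None

for a in range(-6, 7):
    for b in [v for v in range(-6, 7) if v != 0]:
        for c in range(-10, 11):
            if c % gcd(abs(a), abs(b)) == 0:   # ax + by = c is solvable
                sol = small_x_solution(a, b, c)
                assert sol is not None
                x0, y0 = sol
                assert a * x0 + b * y0 == c and 0 <= x0 < abs(b)
```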
H: Find $f(4)$ if $x \sin(\pi x) = \int_0^{x^2} f(t)\,\mathrm dt$ The problem I am currently working on is this: Suppose $x \sin (\pi x) = \int_0^a f(t)~\mathrm dt$ where $a=x^2$. Find $f(4)$. I have tried this. $$\sin (\pi x) + \pi x \cos (\pi x)= f'(x) 2x$$ $$f'(x) = \dfrac{\sin \pi x + \pi x \cos \pi x}{2x}$$ $$f(x) = \int \dfrac{\sin \pi x + \pi x \cos \pi x}{2x}~\mathrm dx$$ $$f(x) = \dfrac {1}{2} \int \dfrac {\sin x}{x}~dx+\dfrac {\pi}{2}\sin x$$ I must find $f(4)$ but I am stuck in finding the integral of $\dfrac{\sin x}{x}$. Can anyone help me? AI: First note that $$\frac{d}{dx}\int_0^{g(x)} f(t)\,dt \neq f^{\color{red}{\prime}}(g(x))\cdot g^{\prime}(x)$$ as seen in your computation; instead, it should be $$\frac{d}{dx}\int_0^{g(x)} f(t)\,dt = f(g(x))\cdot g^{\prime}(x)$$ Doing the differentiation correctly should leave you with $$\sin(\pi x) + \pi x\cos(\pi x) = 2xf(x^2)$$ and thus finding $f(4)$ should be rather straightforward from here.
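Carrying the computation one step further (my addition, not part of the original answer): at $x=2$ the corrected identity gives $4f(4)=\sin(2\pi)+2\pi\cos(2\pi)=2\pi$, i.e. $f(4)=\pi/2$, which a finite-difference check confirms:

```python
import math

def F(x):
    # left-hand side of the given identity: x sin(pi x)
    return x * math.sin(math.pi * x)

# differentiating the identity gives F'(x) = 2x f(x^2), so f(4) = F'(2)/4
h = 1e-6
f4 = (F(2 + h) - F(2 - h)) / (2 * h) / 4
assert abs(f4 - math.pi / 2) < 1e-6
```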
H: How many ways to arrange $20$ items on $4$ towers Suppose you have $20$ different rings and $4$ display towers. On each tower the rings are stacked one above another. In how many ways can they be arranged if: [a]: The order of rings on each tower does not matter: [b]: The order of rings on each tower matters, and there are exactly 5 rings on each tower? [c]: The order of rings on each tower matters, and each tower can hold any number of rings? Here is what I am thinking, not sure if it's right: [a]: Order does not matter - standard combination question: let $N_i$ be the number of rings on tower $i$; then $$N_1+N_2+N_3+N_4 = 20$$ and we find the number of solutions to this equation, i.e. (stars and bars): $$\binom{20+3}{3} = 1771$$ [b]: I know that this is a permutation question since order is important, but I am not sure if I am doing it right: $$ {}_{20}P_5 \cdot 4 $$ [c]: I have no clue. AI: For [a], what matters is what tower each ring is put on (the rings are distinct, so this is not a stars-and-bars count). The first ring can go on each of the four towers, so can the second, and so on. So the result is $4^{20}$. For [b], you can just order all $20$ rings in a single row, then put the first five on the first stand, rings number 6 through 10 on the second stand, and so on. So the answer is $20!$ For [c], arrange the $20$ distinct rings and $3$ identical dividers (marking the breaks between towers) in a row: the rings can be ordered in $20!$ ways and the divider positions chosen in $\binom{23}{3}$ ways, so the answer comes out to be $20!\binom{23}{3} = \dfrac{23!}{3!}$.
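Part [c] can be brute-forced for small $n$ rings and $k$ towers (a sketch, not part of the original answer). The check supports the count $\frac{(n+k-1)!}{(k-1)!}$, i.e. $n$ distinct rings plus $k-1$ identical dividers arranged in a row, which for $20$ rings and $4$ towers is $\frac{23!}{3!}$:

```python
from itertools import product
from math import factorial

def brute(n, k):
    # assign each of n distinct rings to one of k towers, then count the
    # possible stacking orders on every tower
    total = 0
    for towers in product(range(k), repeat=n):
        ways = 1
        for t in range(k):
            ways *= factorial(towers.count(t))
        total += ways
    return total

def closed_form(n, k):
    # n distinct rings and k-1 identical dividers in a row
    return factorial(n + k - 1) // factorial(k - 1)

for n in range(0, 6):
    for k in range(1, 4):
        assert brute(n, k) == closed_form(n, k)
```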
H: Find $\int_0^1 \frac{x-1}{\ln(x)}\,\mathrm{d}x$ Find $\int_0^1 \frac{x-1}{\ln(x)}\,\mathrm{d}x$ I tried this: $\int_0^1 \frac{x-1}{\ln(x)}\,\mathrm{d}x = \int_0^1 \frac{x}{\ln(x)}\,\mathrm{d}x - \int_0^1 \frac{1}{\ln(x)}\,\mathrm{d}x$ For $\int_0^1 \frac{1}{\ln(x)}\,\mathrm{d}x$: let $t=\ln x$; then $\frac{dt}{dx}=\frac{1}{x}$ and $dx=e^t\,dt$, so $\int \frac{1}{\ln(x)}\,\mathrm{d}x \equiv \int \frac{e^t}{t}\,\mathrm{d}t$, and since $e^t=\sum _{n=0}^\infty \frac{t^n}{n!}$ we have $\frac{e^t}{t}=\sum_{n=0}^\infty \frac{t^{n-1}}{n!}$ and $\int \frac{e^t}{t}\,\mathrm{d}t=\int \sum_{n=0}^\infty{\frac{t^{n-1}}{n!}}\,\mathrm{d}t=\sum_{n=1}^\infty\frac{t^n}{n\cdot n!}$ plus the $n=0$ term $\ln|t|$. How can I solve $\int_0^1 \frac{x}{\ln(x)}\,\mathrm{d}x$? Thanks for your help :) have a nice day AI: With the change of variables $x \equiv e^{-z}$ (so that $\ln x=-z$ and $\mathrm{d}x=-e^{-z}\,\mathrm{d}z$):
\begin{align}
\int_{0}^{1}\frac{x - 1}{\ln(x)}\,\mathrm{d}x
&= \int_{\infty}^{0}\frac{e^{-z} - 1}{-z}\,\left(-e^{-z}\,\mathrm{d}z\right)
= \int^{\infty}_{0}\frac{e^{-z} - e^{-2z}}{z}\,\mathrm{d}z
\\[3mm]&= -\int^{\infty}_{0}\ln(z)\left(-e^{-z} +2 e^{-2z}\right)\mathrm{d}z
\qquad\text{(integration by parts)}
\\[3mm]&= \int^{\infty}_{0}\ln(z)\,e^{-z}\,\mathrm{d}z - \int^{\infty}_{0}\ln\left(\frac{z}{2}\right)e^{-z}\,\mathrm{d}z
\qquad\text{(substituting $z \mapsto z/2$ in the second integral)}
\\[3mm]&= \int^{\infty}_{0}\ln(z)\,e^{-z}\,\mathrm{d}z - \int^{\infty}_{0}\left[\ln(z) - \ln(2)\right]e^{-z}\,\mathrm{d}z
= \ln(2)\underbrace{\int^{\infty}_{0}e^{-z}\,\mathrm{d}z}_{=\ 1}
\\[3mm]&= \ln(2)
\end{align}
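A numerical cross-check that the integral equals $\ln 2$ (an illustrative sketch, not part of the original answer; the midpoint rule is used because it never samples the endpoints, where the integrand has removable singularities, with values $0$ at $x\to 0^+$ and $1$ at $x\to 1^-$):

```python
import math

def integrand(x):
    return (x - 1) / math.log(x)

# composite midpoint rule on (0, 1)
steps = 100000
h = 1.0 / steps
approx = sum(integrand((i + 0.5) * h) for i in range(steps)) * h
assert abs(approx - math.log(2)) < 1e-6
```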
H: Arranging numbers so that $i$ is not immediately followed by $i+1$ How many arrangements of the integers $1,2,\ldots,n$ are such that no number $i$ is ever immediately followed by $i+1$? AI: This is sequence 255 from the Online Encyclopaedia of Integer Sequences https://oeis.org/A000255
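For reference, the OEIS recurrence $a(k)=k\,a(k-1)+(k-1)\,a(k-2)$ for A000255 can be cross-checked against brute force for small $n$; the answer for $n$ numbers is $a(n-1)$ (a sketch, not part of the original answer):

```python
from itertools import permutations

def brute(n):
    # permutations of 1..n in which no i is immediately followed by i+1
    return sum(
        1
        for p in permutations(range(1, n + 1))
        if all(p[j + 1] != p[j] + 1 for j in range(n - 1))
    )

# A000255: a(k) = k*a(k-1) + (k-1)*a(k-2), with a(0) = a(1) = 1
a = [1, 1]
for k in range(2, 8):
    a.append(k * a[k - 1] + (k - 1) * a[k - 2])

for n in range(1, 8):
    assert brute(n) == a[n - 1]
```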
H: $G$ abelian. If $G\cong \sum G_i$ then $mG \cong \sum mG_i$ Let $G$ be an abelian group and $m \in \mathbb{Z}$. If $G\cong \sum_{i \in I} G_i$, then $mG \cong \sum_{i \in I} mG_i$. $$\sum_{i \in I} G_i = \{ f:I\rightarrow \cup G_i \mid f(i) \in G_i \text{ and } f(i)=e_i \text{ for all but finitely many $i$}\}$$ I know that if $f:X\rightarrow Y$ is an injective homomorphism of groups and if $A \leq X$ then $A \cong f(A)$. In the first part of the problem we showed that if $G$ is abelian then $mG \leq G$. $G$ is abelian so $mG\trianglelefteq G$. I was magically hoping that somehow if I took $f:G\rightarrow \sum_{i\in I}G_i$ to be an isomorphism of groups that since we have $mG \cong f(mG)$ it would happen to be that $$f(mG)=\sum_{i \in I} mG_i.$$ It doesn't look apparent and may not be true, so I was thinking of whether I could do some kind of composition of maps and use that isomorphism $f$ that I am given. What is a direction I could try to head in to get the answer? AI: Using the advice of PavelC: We have $f:G\rightarrow \sum_{i\in I}G_i$, an isomorphism of groups. As I said in my question, we know that $mG \cong f(mG)$. $$f(mG)=\{f(mg)\mid g \in G\} =\{ \underbrace{f(g)+f(g)+\dots+f(g)}_{m \text{ times}} \mid g \in G\} =\{mf(g)\mid g\in G\}.$$ If $(a_i)_{i\in I} \in f(mG)$, $(a_i)_{i \in I} = mf(g)$ for some $g \in G$. Call $f(g)=(b_i)_{i \in I}$. $$(a_i)_{i \in I}=mf(g)=m(b_i)_{i \in I}=(mb_i)_{i \in I}$$ and so each $a_i=mb_i \in mG_i$ for all $i \in I$ and $a_i=e_i$ for all but finitely many $i$ as $(a_i)_{i \in I} \in \sum_{i\in I} G_i$. We have then $(a_i)_{i \in I} \in \sum_{i \in I} mG_i$. If we pick $(mg_i)_{i\in I}\in\sum_{i\in I} mG_i$ then $$(mg_i)_{i\in I}=\underbrace{(g_i)_{i \in I}+\dots+(g_i)_{i \in I}}_{m \text{ times}}=m(g_i)_{i \in I}=mf(g)$$ for some $g \in G$, as $f$ is surjective, so $(mg_i)_{i\in I} \in f(mG)$.
H: Cancellation of products in an arbitrary category does not hold; does it hold with this extra condition? Let $\mathcal{C}$ be a category with products, and let $A,B,C$ be objects of $\mathcal{C}$. Certainly, it need not be true that $$A\times B\cong A\times C\implies B\cong C.$$ For an easy example, we could have $\mathcal{C}=\mathsf{Set}$, with $A$ an infinite set, and with $B$ and $C$ finite sets of different cardinalities. There are perhaps more surprising examples with topological spaces (see this MathOverflow thread). Let $p\colon A\times B\to A$ and $q\colon A\times C\to A$ be these products' respective projection maps to $A$. Suppose there is an isomorphism $f\colon A\times B\to A\times C$ such that $p=q\circ f$. Does this imply $B\cong C$? I believe the answer is yes - at the very least, this condition rules out the example with sets I mentioned above - but a proof (or counterexample) is eluding me at the moment. Any help would be appreciated. AI: Assume that $A$ has some global element, i.e. a morphism $* \to A$ where $*$ is a terminal object. Then an isomorphism $A \times B \cong A \times C$ over $A$ induces an isomorphism $* \times_A (A \times B) \cong * \times_A (A \times C)$ over $*$. But $* \times_A (A \times B) \cong * \times B \cong B$ and likewise $* \times_A (A \times C) \cong C$.
H: strict separation theorem? I'm learning and we have a theorem that says: Let $C$ be a non-empty, convex subset of $\mathbb R^d$ and let $p \in \mathbb R^d$ be a point which is not in the closure of $C$. Then there exists a strict separating hyperplane, which means there exists an $\eta \in \mathbb R^d\backslash \{0\}$ such that $p\cdot\eta<c\cdot \eta$ for all $c\in C$. My question is about the following statement from our professor and what its proof is. He said that for an arbitrary non-empty convex subset $C \subset \mathbb R^d$ and a point $p \in \mathbb R^d$ with $p\notin C$ there generally doesn't exist an $\eta \in \mathbb R^d\backslash \{0\}$ such that $p\cdot\eta<c\cdot \eta$ for all $c\in C$. The difference between the theorem and the statement is obviously that the point has to be not in the closure of $C$ if there should be a strict separating hyperplane. A proof or an example would be nice. AI: Try $C=\{(x,y)\in\mathbb R^2\mid y\gt0\}\cup\{(0,0)\}$ (can you check that $C$ is indeed convex?). Then, for $\eta\ne(0,0)$, the set $C_\eta=\{c\cdot\eta\mid c\in C\}$ has no lower bound except if $\eta$ is a positive multiple of $(0,1)$. And if $\eta=(0,1)$, then $C_\eta=[0,+\infty)$ hence the property that $p\cdot\eta\lt c\cdot\eta$ for every $c$ in $C$ is equivalent to $p\cdot\eta\lt0$. Thus $\eta$ does not exist for the points $p=(x,0)$, $x\ne0$, although these points are not in $C$. Obviously, all these points $p$ are in the closure of $C$, as was to be expected.
H: X is the largest sum of rupees which can never be paid using any number of coins of denominations Rs. 4, Rs. 8, Rs. 13 and Rs. 18 'X' is the largest sum of rupees which can never be paid using any number of coins of denominations Rs. 4, Rs. 8, Rs. 13 and Rs. 18. What is the sum of digits of 'X'? Answer is 9. But how? I could get up to the denominations which are in the form of 4, 4K+1, 4K+2...then there is the problem. AI: $4$ is the least of these numbers, so your best bet for paying $n$ is the smallest payable number with the same remainder modulo $4$ as $n$, plus a lot of $4$s. Examine what the smallest payable number in each residue class is. $n \equiv 0 \pmod 4$ is uninteresting. For remainder $1$ it is $13 \equiv 1 \pmod 4$, and for remainder $2$ it is $18 \equiv 2 \pmod 4$. Finally, remainder $3$ can be obtained as $1+2$ or $1+1+1$, that is, as $13 + 18 \equiv 13+13+13 \equiv 3 \pmod 4$; clearly $13 + 18 = 31$ is the smaller of the two and thus the smallest payable number with remainder $3$ modulo $4$. Since $31$ is the largest of these class minima, the largest sum you can't pay is the one $4$ before it, namely $X = 31 - 4 = 27$, whose digit sum is $2+7=9$.
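A dynamic-programming brute force confirming $X=27$ (an illustrative sketch, not part of the original answer):

```python
def payable(n, coins=(4, 8, 13, 18)):
    # reach[v] is True iff v can be paid with the given coin denominations
    reach = [True] + [False] * n
    for v in range(1, n + 1):
        reach[v] = any(v >= c and reach[v - c] for c in coins)
    return reach[n]

# every amount >= 28 is payable (28..31 are, and adding 4s covers the rest),
# so it suffices to scan a bounded range
unpayable = [v for v in range(1, 100) if not payable(v)]
assert max(unpayable) == 27
assert sum(int(d) for d in str(max(unpayable))) == 9
```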
H: Find the value of $a>1$ such that the curve $y=a^x$ meets the line $y=x$ once and only once. Find the value of $a>1$ such that the curve $y=a^x$ meets the line $y=x$ once and only once. AI: The derivative of $y=a^x=e^{x\ln a}$ is $e^{x\ln a}\ln a= y\ln a$ and shall equal the derivative $1$ of $y=x$ at a point where $y=x$. That is, at this point we have $a^x=x$ and $a^x\ln a=1$ at the same time, hence $x=\frac1{\ln a}$, which makes $y=a^x=e^{x\ln a}=e^{\frac{\ln a}{\ln a}}=e$, so $x=y=e$, and finally necessarily $a=e^{\frac 1x}=e^{\frac1e}$.
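A numerical confirmation of the tangency at $a=e^{1/e}$ (an illustrative sketch, not part of the original answer):

```python
import math

a = math.exp(1 / math.e)     # the claimed base, a = e^{1/e} ≈ 1.4447
x = math.e                   # the claimed point of tangency

assert abs(a ** x - x) < 1e-12                  # the curve meets y = x ...
assert abs(a ** x * math.log(a) - 1) < 1e-12    # ... with matching slope 1
```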
H: Find $f(\dfrac{\pi}{2})$. Provided two functions Suppose $f(x)=\int_0^{g(x)}\dfrac{1}{\sqrt{1+t^2}}~dt$ and $g(x)=\int_0^{\cos x}(1+\sin t^2)~dt$ Find $f(\dfrac{\pi}{2})$ I evaluated everything and ended up with $f(\dfrac{\pi}{2})=\dfrac {2}{1+\int_0^{\cos\dfrac{\pi}{2}} (1+\sin t^2)~dt}$ As usual I am stuck at evaluating the indefinite integral in the denominator. Can anybody help me? AI: Since $\cos\frac{\pi}{2}=0$, $$g(\frac{\pi}{2})=\int_0^{\cos\frac{\pi}{2}}(1+\sin t^2)~dt =\int_0^{0}(1+\sin t^2)~dt=0,$$ which implies $$f(\frac{\pi}{2})=\int_0^{g(\frac{\pi}{2})}\dfrac{1}{\sqrt{1+t^2}}~dt =\int_0^{0}\dfrac{1}{\sqrt{1+t^2}}~dt=0.$$
H: Find the number of positive integers whose digits add up to 42 Find the number of positive integers $$n <9,999,999 $$ for which the sum of the digits in n equals 42. Can anyone give me any hints on how to solve this? AI: Let the digits be $d_1,d_2,d_3,d_4,d_5,d_6$, and $d_7$, where we allow leading zeroes so as to make each number in the specified interval a seven-digit integer. You’re looking for all solutions in non-negative integers to $$d_1+d_2+d_3+d_4+d_5+d_6+d_7=42\;,$$ with the restriction that $d_k\le 9$ for $k=1,\ldots,7$. Without the restriction this is a standard stars-and-bars problem, whose solution is $$\binom{42+7-1}{7-1}=\binom{48}6\;.\tag{1}$$ Both the formula that I used here and a pretty decent explanation/derivation of it can be found at the link. However, $(1)$ includes unwanted solutions in which one or more of the $d_k$ exceeds $9$. To remove these, you can use an inclusion-exclusion argument. This answer shows such an argument in some detail in a smaller problem of this kind.
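The inclusion-exclusion suggested in the answer can be carried out and cross-checked with a digit DP (a sketch, not part of the original answer; the function names are mine):

```python
from math import comb

def count_inclusion_exclusion(total=42, digits=7):
    # stars and bars, with the cap d_k <= 9 enforced by inclusion-exclusion:
    # subtract distributions where some digit is forced to exceed 9
    s = 0
    for j in range(digits + 1):
        rem = total - 10 * j
        if rem < 0:
            break
        s += (-1) ** j * comb(digits, j) * comb(rem + digits - 1, digits - 1)
    return s

def count_dp(total=42, digits=7):
    # independent cross-check: count digit strings with the given digit sum
    ways = [1] + [0] * total
    for _ in range(digits):
        new = [0] * (total + 1)
        for t in range(total + 1):
            if ways[t]:
                for d in range(10):
                    if t + d <= total:
                        new[t + d] += ways[t]
        ways = new
    return ways[total]

assert count_inclusion_exclusion() == count_dp()
```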
H: Basic Issue With the Hom Functor on Commutative Rings In the category of $A$-modules, one has the following property: if $f:M \rightarrow M''$ is a map of $A$-modules, and the induced map $f^*:Hom(M'',N) \rightarrow Hom(M,N)$ is injective for all $N$, then $f$ itself must be surjective. The proof I know of this fact would be to take $N = coker(f)$ and note that if $q:M'' \rightarrow N$ is projection, we have $f^*q=0$ and therefore $q=0$. I would like to know if the same result holds for commutative rings with 1. That is, if the pullback $f^*:Hom(A,C) \rightarrow Hom(B,C)$ is injective for all commutative rings $C$ with 1, then is it true that $f:B \rightarrow A$ is surjective, even now that our abelian category logic no longer applies? AI: No, it is not true. For example, take the map $\mathbb Z \hookrightarrow \mathbb Q$.
H: Prove that for positive number, some multiple only has 0 and d as it's digits Let $ n$ be a positive integer, and let $1<=d<=9$. Show that some multiple of $n$ has $0$ and $d$ as its only digits. I don't know how to even start this question. It's under the pigeonhole section of the text book so I am guessing that we have to use that somehow. Any help? AI: HINT: Consider the numbers written $d$, $dd$, $ddd$, and so on. E.g., if $d=3$ these are $3$, $33$, $333$, etc. Note that if you subtract one of these numbers from a larger one, you get a number whose digits are all either $d$ or $0$. Let $r_k$ be the remainder when you divide $$\underbrace{dd\ldots dd}_{k\text{ digits}}$$ by $n$. There are only $n$ possibilities for $r_k$: it must be in the set $\{0,1,\ldots,n-1\}$. These possibilities are your pigeonholes.
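The pigeonhole argument translates directly into a terminating search, which can be tested for many $n$ and $d$ (a sketch, not part of the original hint):

```python
def has_zero_d_multiple(n, d):
    # remainders of d, dd, ddd, ... modulo n; by pigeonhole, among the first
    # n+1 such numbers either one is divisible by n, or two share a remainder
    # and their difference is a multiple of n using only the digits d and 0
    seen = set()
    rep = 0
    for _ in range(n + 1):
        rep = rep * 10 + d
        r = rep % n
        if r == 0 or r in seen:
            return True
        seen.add(r)
    return False

assert all(has_zero_d_multiple(n, d) for n in range(1, 300) for d in range(1, 10))
```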
H: How to prove $ab\mid a^8+b^4+1$ Show that there exist infinitely many pairs $(a,b)$ of positive integers such that $$ab\mid a^8+b^4+1$$ My try: let $a^8+b^4+1=kab$, $k\in \mathbb{N}^{*}$, but I can't make further progress. Thank you. AI: We show that for any solution $(a, b), a, b \in \mathbb{Z}^+$, we can get another solution $(a', b'), a', b' \in \mathbb{Z}^+$ such that $a'+b'>a+b$. This will imply that starting from the solution $(1, 1)$, we can get an infinite sequence of distinct solutions $(a_n, b_n)$, with $2=a_0+b_0<a_1+b_1<a_2+b_2< \ldots$. Consider $2$ cases. Case $1$: $a \leq b^2$. Then we have $a \mid b^4+1$ so $b^4+1=ka$ for some $k \in \mathbb{Z}^+$ so $k \mid b^4+1$. Also $b \mid a^8+1 \Rightarrow b \mid k^8(a^8+1)=(ka)^8+k^8=(b^4+1)^8+k^8 \Rightarrow b \mid k^8+1$. Thus $(k, b)$ is another solution, and $k+b=\frac{b^4+1}{a}+b>a+b$ since $a \leq b^2$. Case $2$: $a>b^2$. Then clearly $a^4>b^8 \geq b$. We have $b \mid a^8+1$ so $a^8+1=lb$ for some $l \in \mathbb{Z}^+$ so $l \mid a^8+1$. Also $a \mid b^4+1 \Rightarrow a \mid l^4(b^4+1)=(bl)^4+l^4=(a^8+1)^4+l^4 \Rightarrow a \mid l^4+1$. Thus $(a, l)$ is another solution, and $a+l=a+\frac{a^8+1}{b}>a+b$ since $b<a^4$. As such, we get an infinite sequence of distinct solutions $(a_n, b_n)$, with $2=a_0+b_0<a_1+b_1<a_2+b_2< \ldots$, so indeed there are infinitely many positive integer solutions.
H: Is this "set quotient" known? Let $A,B$ be subsets of a set $X$. Then there is a largest subset $C \subseteq X$ such that $C \cap A \subseteq B$. Explicitly, we have $C = \{x \in X : x \in A \Rightarrow x \in B\} = (X \setminus A) \cup B$. Does $C$ have a name? I would call $C$ the set quotient and denote it by $(B:A)$. Namely, there is an analogy$^1$ to ideal quotients: If $A,B$ are ideals of a commutative ring $R$, then there is a largest ideal $C$ such that $C \cdot A \subseteq B$, namely $C=(B:A) = \{x \in R : x \cdot A \subseteq B\}$. $^1$ Actually it is more than just analogy. Both quotients are internal homs in monoidal preorders, the one being $(\wp(X),\subseteq,\cap)$ and the other one $(\mathrm{Id}(R),\subseteq,*)$. AI: Let $L$ be a lattice that is cartesian closed as a category. Then $L$ is a Heyting algebra, and the operation you describe is sometimes called the Heyting implication or relative pseudocomplement. If $L$ is moreover a boolean algebra then the Heyting implication $a \to b$ is given by the classical formula $(\lnot a) \vee b$, as you described.
H: Linear independence of vectors over larger fields I was just wondering whether anyone knows an answer to the following: Suppose that ${\mathbb F}$ is a subfield of a field ${\mathbb G}$ and that $v_1,\ldots ,v_k$ are linearly independent vectors in ${\mathbb F}^n$ (over $\mathbb F$). Is it necessarily true that $v_1,\ldots ,v_k$ are also linearly independent when considered as vectors in ${\mathbb G}^n$ (over $\mathbb G$)? Any help would be much appreciated. Thanks! AI: Yes. Extend the set $\{v_1,\ldots,v_k\}$ to a basis of $\mathbb{F}^n$ and put the basis vectors together to form a square matrix $A$. Then $A$ is invertible over $\mathbb{F}$. That is, $A^{-1}\in M_n(\mathbb{F})\subseteq M_n(\mathbb{G})$. So, $A$ is invertible over $\mathbb{G}$ and its first $k$ columns are linearly independent over $\mathbb{G}$. However, do not confuse your question with the following one: Suppose $\mathbb{F}$ is a subfield of $\mathbb{G}$ and the set $V$ is a vector space over each of $\mathbb{F}$ and $\mathbb{G}$. If $v_1,v_2,\ldots,v_k\in V$ are linearly independent over $\mathbb{F}$, are they necessarily linearly independent over $\mathbb{G}$? The answer to this seemingly similar question is negative, as illustrated by the following counterexample: $V=\mathbb{C},\,\mathbb{F}=\mathbb{R},\,\mathbb{G}=\mathbb{C}$ and $\{v_1,v_2\}=\{1,\,i\}$. It is easy to see that $v_1$ and $v_2$ are linearly independent over $\mathbb{R}$ but linearly dependent over $\mathbb{C}$.
H: Arranging 7 rows of 3 motorcycles of 5 different types and 4 different colours A playground equipment manufacturer makes a biker formation comprised of $21$ wooden motorcycles that are fixed in place, three abreast, to form $7$ rows of $3$ motorcycles each. She has $5$ different types of wooden motorcycle that can be installed. All motorcycles face forwards. Each motorcycle is painted with one of $4$ different colours. How many different biker formations can be made if: (a) there are no restrictions? (b) at least one motorcycle of each type (and some colour) must be installed? (c) the only restriction is that no two motorcycles in the same row can have the same colour? Here is what I attempted (not sure if it's right): a): $5^3$ for each row (5 selections of bike, 3 bikes each row), so: $7 * 5^3$ bikes. There are 4 colours available, so $7*4^3$ colourings. In total then: $$7* (5^3 + 4^3)$$ b): One bike of each model must be installed: 21-5 = 16 bikes left to install. Don't know how to go on from here? c): so first get all the bikes installed: $7*5^3$, then to colour it: $7*(4*3*2)$. Am I doing parts (a) and (c) right? And what should I do for part (b)? AI: Since a motorcycle can be of any of $5$ types and $4$ colors, there are altogether $5\cdot4=20$ variants of a single motorcycle. If there are no restrictions, each of the $21$ motorcycles in the formation can be any of the $20$ variants, so filling out the formation involves making $21$ $20$-way choices. This can be done in $20^{21}$ different ways, which is therefore the answer to (a). We can analyze (c) as follows. There are $5^3$ possible sequences of $3$ types, so if we ignore color, there are $5^3$ possible rows. There are $4\cdot3\cdot2=24$ ways to assign $3$ different colors to the motorcycles in a row of $3$, so when we take colors into account, there are $24\cdot 5^3$ different possibilities for each row. 
Each row must be one of these $24\cdot 5^3$ possibilities, and there are $7$ rows, so there are $(24\cdot5^3)^7=24^7\cdot 5^{21}$ formations in which no two motorcycles in the same row have the same color. Probably the easiest way to get the answer to (b) is to compute the number of formations that violate the constraint and subtract that from the answer to (a). The formations that violate the constraint are those that use at most $4$ of the types. If the types are $T_1,T_2,T_3,T_4$, and $T_5$, the analysis in (a) shows that there are $16^{21}$ formations that use only types $T_1,T_2,T_3$, and $T_4$. Similarly, there are $16^{21}$ that use only types $T_1,T_3,T_4$, and $T_5$, and the same number for each of the other sets of $4$ types; there are $\binom54=5$ such sets in all. That’s a total of $5\cdot16^{21}$ unacceptable formations, so a first approximation to the desired answer is $20^{21}-5\cdot16^{21}$. Unfortunately, we’ve subtracted too much: any formation that uses only types $T_1,T_2$, and $T_3$ was counted once in the $20^{21}$ term and subtracted twice in the $5\cdot16^{21}$ term, once for using only types $T_1,T_2,T_3$, and $T_4$ and once for using only types $T_1,T_2,T_3$, and $T_5$. Thus, the figure $20^{21}-5\cdot16^{21}$ counts each of these formations a net of $-1$ times, and we must add them back in so that they are counted $0$ times. That means adding back in the $12^{21}$ formations that use only the three types $T_1,T_2$, and $T_3$, and similarly for each other set of three types. There are $\binom53=10$ sets of three types, so our new approximation is $20^{21}-5\cdot16^{21}+10\cdot12^{21}$. Two further corrections are needed to adjust for formations using just two types and just one type; I’ll leave these to you. What I’m doing here is an inclusion-exclusion argument.
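The full inclusion-exclusion sum can be written down and sanity-checked by brute force on a scaled-down version of the problem; a Python sketch (the function name is made up for illustration):

```python
from math import comb
from itertools import product

def all_types_count(positions, types, colors):
    """Inclusion-exclusion: formations in which every type appears at least once."""
    return sum((-1)**i * comb(types, i) * (colors * (types - i))**positions
               for i in range(types + 1))

# Brute-force check on a tiny instance: 3 positions, 2 types, 2 colors.
# A variant v in range(4) has type v // 2 and color v % 2.
small = sum(1 for f in product(range(4), repeat=3) if {v // 2 for v in f} == {0, 1})
assert small == all_types_count(3, 2, 2)

# Part (b) of the original problem: 21 positions, 5 types, 4 colors.
answer_b = all_types_count(21, 5, 4)
assert answer_b == 20**21 - 5*16**21 + 10*12**21 - 10*8**21 + 5*4**21
```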
H: Characterization of an invertible module Let $B$ be a commutative ring. Let $A$ be a subring of $B$. If $M$ and $N$ are $\mathbb{Z}$-submodules of $B$, we denote by $MN$ the submodule of $B$ generated by the subset $\{ab\mid a \in M, b\in N\}$. If $M$ and $N$ are $A$-submodules of $B$, $MN$ is clearly an $A$-submodule of $B$. If $M$ and $N$ are $A$-submodules of $B$, we denote by $(N : M)$ the set $\{x \in B\mid xM \subset N\}$. $(N : M)$ is clearly an $A$-submodule of $B$. Is the following proposition correct? If yes, how do we prove it? Proposition. Let $M$ be an $A$-submodule of $B$. Suppose there exists an $A$-submodule $N$ of $B$ such that $MN = A$. Then $N = (A : M)$. AI: Yes, this is correct, and easy to prove. $MN=A$ implies $N \subseteq (A:M)$ and $(A:M) = (A:M)MN \subseteq AN \subseteq N$, hence $(A:M)=N$.
H: Formal proof for $\lim_{x\to\infty}x\exp(-x) =0$ I intuitively understand that $$\lim_{x\rightarrow\infty} xe^{-x}=0$$ since $e^{-x}$ approaches zero faster than $x$ approaches infinity. But this requires some knowledge of the properties of exponents. Is there a way to prove this formally that is mathematically sound? For example, how could a person without knowledge of the properties of exponents derive the same answer? AI: Recall that $e^x = 1 + x + \dfrac{x^2}{2!} + \cdots$. Hence, for $x>0$, $e^x > \dfrac{x^2}2 \implies xe^{-x} < \dfrac2x$. Now conclude what you want.
H: Help with rearranging equation to get real and imaginary parts I know this is so simple, but my algebra is totally failing me. I have the expression $\frac{1}{1+2i}$ and I want to extract the real and imaginary parts, so that I have it in the form $\operatorname{Re} + \operatorname{Im}\,i$. Could someone just show me the algebra steps for doing this please? Thanks. AI: Multiply by the conjugate: $$ \frac{1}{1+2i}=\frac{1}{1+2i}\cdot \frac{1-2i}{1-2i}=\frac{1-2i}{5}. $$
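For what it's worth, the same arithmetic can be checked with Python's built-in complex numbers, where `j` plays the role of $i$:

```python
z = 1 / (1 + 2j)                    # Python performs the conjugate trick internally
assert abs(z.real - 1/5) < 1e-12    # real part is 1/5
assert abs(z.imag + 2/5) < 1e-12    # imaginary part is -2/5
```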
H: Help with exponential integrals I'm trying to find a nice expression for the following function \begin{equation} f_k(x)=\int_0^\infty y^k (x+y) e^{-(x+y)^2} \text{d}y. \end{equation} So far I know that \begin{equation} f_k(x)=P_k(x)e^{-x^2}+Q_k(x)\text{ erfc}(x), \end{equation} where $P_k(x)$ and $Q_k(x)$ are polynomials of degree $k-1$ and $k$, respectively. However, I'm having trouble with finding general expressions for these polynomials. I'm hoping someone can help me with this. AI: By direct computation, we have: $$ P_0 = \frac{1}{2}, \qquad Q_0 = 0, $$ $$ P_1 = 0, \qquad Q_1 = \frac{\sqrt{\pi}}{4}. $$ Assuming $k>1$, integration by parts gives: $$ f_k(x) = \frac{k}{2}\int_{0}^{+\infty}y^{k-1}e^{-(x+y)^2}dy. $$ For the sake of simplicity, define: $$ g_\tau(x) = \int_{x}^{+\infty}(y-x)^\tau e^{-y^2}dy. $$ We clearly have: $$ g_\tau(x) = \int_{x}^{+\infty}(y-x)^{\tau-1}\,y e^{-y^2}dy-x\int_{x}^{+\infty}(y-x)^{\tau-1}\, e^{-y^2}dy, $$ and integrating by parts the first integral in the RHS we get: $$ g_{\tau}(x) = \frac{\tau-1}{2}g_{\tau-2}(x)-x\cdot g_{\tau-1}(x).$$ Since $f_k(x)=\frac{k}{2}g_{k-1}(x)$, $f_k(x)$ satisfies the recurrence relation: $$f_k(x) = \frac{k}{2}f_{k-2}(x) - \frac{k}{k-1}x\cdot f_{k-1}(x), $$ so $P_k(x)$ and $Q_k(x)$ can be computed in a recursive way through the same formula: $$P_k(x) = \frac{k}{2}P_{k-2}(x) - \frac{k}{k-1}x\cdot P_{k-1}(x), $$ $$Q_k(x) = \frac{k}{2}Q_{k-2}(x) - \frac{k}{k-1}x\cdot Q_{k-1}(x). $$ If we further set: $$ f_k(x) = \frac{k(-1)^k}{2^{k}}h_k(x), $$ the recursion simplifies to: $$ h_k(x)= 2x\cdot h_{k-1}(x) + 2(k-2)\cdot h_{k-2}(x). $$
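The recurrence, together with the base cases $P_0=\tfrac12$, $Q_0=0$, $P_1=0$, $Q_1=\tfrac{\sqrt\pi}{4}$, can be verified numerically; a sketch using `scipy.integrate.quad` (tolerances are loose because the quadrature is approximate):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def f(k, x):
    """f_k(x) = integral_0^inf y^k (x+y) e^{-(x+y)^2} dy, evaluated numerically."""
    val, _ = quad(lambda y: y**k * (x + y) * np.exp(-(x + y)**2), 0, np.inf)
    return val

x = 0.7
# base cases: f_0 = (1/2) e^{-x^2},  f_1 = (sqrt(pi)/4) erfc(x)
assert abs(f(0, x) - 0.5 * np.exp(-x**2)) < 1e-7
assert abs(f(1, x) - np.sqrt(np.pi) / 4 * erfc(x)) < 1e-7

# recurrence f_k = (k/2) f_{k-2} - (k/(k-1)) x f_{k-1}
for k in (2, 3, 4):
    lhs = f(k, x)
    rhs = k / 2 * f(k - 2, x) - k / (k - 1) * x * f(k - 1, x)
    assert abs(lhs - rhs) < 1e-7
```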
H: tail events and tail sigma-field I'm working on tail events. I have a sequence $(X_{n})_{n}$ of random variables. Let $\tau$ be its tail $\sigma$-field. From this, I defined $G_n:=\sigma (X_n,X_{n+1},...)$, so that $\tau = \bigcap_{n\geq 1}{G_n}$. Now I should say whether the event $\{X_n \rightarrow \infty\}$ is in $\tau$, but I have trouble understanding how to check if a general event is in a tail $\sigma$-field or not. Does anyone have some advice for checking this? Thanks :-) AI: The event $X_n(\omega)\to \infty$ is not influenced by the first values of $X_k(\omega)$, say $k\leqslant K$. Indeed, if $(a_n,n\geqslant 1)$ is a sequence of real numbers, then defining $b_n:=a_{n+K}$, we have $a_n\to \infty$ if and only if $b_n\to \infty$. So fix $K$ and write $\{X_n\to \infty\}=\{X_{n+K}\to \infty\}$. The event $\{X_{n+K}\to \infty\}$ is in $\sigma(X_j,j\geqslant K)$.
H: How to weigh up to 100 kg with 5 weights 1) You are a shopkeeper who is selling sugar between 1-100 kg. Now you have to design 5 weights in such a way that any integer weight between 1-100 can be measured in a single attempt, without using more than 5 weights. You can't repeat weights. He gave me about 1 hour to figure it out, and I tried as hard as I could, but finally I couldn't figure out more than that the weights have to be distributed on both sides of the balance. Does anybody have some clue as to what the answer could be? AI: Use powers of $3$: $1,3,9,27,81$ can weigh anything up to $121$. The trick is to place weights on both sides of the pan. Without that, the maximum you can do is with powers of two, and five of those will allow you to go up to $63$. At least if you only consider exact weighings. Being allowed to infer that the weight is $14$ if it is heavier than $13$ and lighter than $15$ will probably allow you to go a bit higher, and I don't yet see a systematic way of going about this. $121$ is the maximum you can do with 5 weights. See http://www.uri.edu/artsci/math/clark/mthdl/scale/explore.pdf for necessary and sufficient conditions. On the other hand, since you can optimally go up to $121$, there is some choice if you only want to go up to $100$. There are 132 sets of weights that work, 15 of them measuring up to exactly $100$, and the lexicographically smallest one is $1,3,7,22,67$. Further, the proof for sufficiency is constructive. That is, you can figure out an algorithm to tell you which weights go on which side for any of these sets of weights.
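The claim that $1,3,9,27,81$ reach every integer weight from $1$ to $121$, with weights allowed on both pans, is easy to check exhaustively; a Python sketch:

```python
from itertools import product

weights = (1, 3, 9, 27, 81)
# coefficient -1: weight sits with the sugar; 0: unused; +1: opposite pan
reachable = {sum(c * w for c, w in zip(coeffs, weights))
             for coeffs in product((-1, 0, 1), repeat=len(weights))}
assert all(m in reachable for m in range(1, 122))   # 1..121 kg all measurable
assert 122 not in reachable                          # and 121 is the maximum
```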
H: Designing Context-Free Grammars for Sets of Strings I'm pretty lost, and would appreciate help or solutions to the following two exercises. I don't really know where to begin or even how to correctly denote a context-free grammar. I have to design CFGs for the following two sets: 1) $\{a^ib^jc^k \mid i \neq j \;or\; j \neq k \}$; i.e. the set of strings containing $a$s followed by $b$s followed by $c$s, s.t. there is a different number of $a$s and $b$s or a different number of $c$s and $b$s, or both. I think I should have something like $S \rightarrow aAbBcC$, where thereafter $A \rightarrow a | A | \epsilon$, and similar for $b,c$, etc. But I don't know how to ensure different numbers of letters in the string, and more generally, I don't know what I'm doing. 2) The set of all strings of $a$s and $b$s that are not of the form $ww$, that is, not equal to any string repeated. AI: That or in the first problem should suggest writing the grammar in two parts, one that generates $L_1=\{a^ib^jc^k:i\ne j\}$ and one that generates $L_2=\{a^ib^jc^k:j\ne k\}$. If $S_1$ is the initial symbol for the first grammar, $S_2$ is the initial symbol for the second grammar, and the two grammars have no non-terminal symbols in common, they can be combined to give a grammar for the desired language simply by adding the production $S\to S_1\mid S_2$. In generating $L_1$ you can generate any number of $c$’s, but you have to make sure that you don’t generate equal numbers of $a$’s and $b$’s. The $c$’s are no problem: we can have a production $S_1\to XC$, where $C$ will generate any number of $c$’s, and $X$ will deal with that $a$’s and $b$’s. We can let $X$ generate any number of matched pairs of $a$’s and $b$’s with the production $X\to aXb$, and then we can switch either to something that generates any positive number of $a$’s or to something that generates any positive number of $b$’s. 
For instance, the production $X\to Y$ together with $Y\to aY\mid a$ will take care of the words with more $a$’s than $b$’s. That’s all of the pieces needed for the first problem; all you have to do is finish putting them together. Added: The second problem is solved in this PDF: a context-free grammar is given along with a proof that it generates the desired language.
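One possible way of assembling the pieces for $L_1$ (this particular completion is an illustration, not the only one) is $S_1\to XC$, $X\to aXb\mid Y\mid Z$, $Y\to aY\mid a$, $Z\to bZ\mid b$, $C\to cC\mid\epsilon$. A brute-force Python check that this grammar generates exactly $\{a^ib^jc^k: i\ne j\}$ up to a small length bound:

```python
from itertools import product

N = 6  # check all strings of length <= N

def words_C(n): return {'c' * k for k in range(n + 1)}        # C -> cC | eps
def words_Y(n): return {'a' * k for k in range(1, n + 1)}     # Y -> aY | a
def words_Z(n): return {'b' * k for k in range(1, n + 1)}     # Z -> bZ | b

def words_X(n):
    # X derives a^m (Y|Z) b^m, so enumerate over the number of matched pairs m
    out = set()
    for m in range(n // 2 + 1):
        for w in words_Y(n - 2 * m) | words_Z(n - 2 * m):
            out.add('a' * m + w + 'b' * m)
    return out

generated = {x + c for x in words_X(N) for c in words_C(N - len(x))}  # S1 -> XC
target = {'a' * i + 'b' * j + 'c' * k
          for i, j, k in product(range(N + 1), repeat=3)
          if i != j and i + j + k <= N}
assert generated == target
```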
H: Binomial distribution false reasoning While reading the answer to a previous question Binomial Distribution Question (Exactly/At Least $x$ Trials for Success), it got me thinking a little. I know the reasoning must be flawed somewhere, but I can't tell where. So here it goes: Given I have a biased coin, 5% chance of H, 95% chance of T. I toss the coin and I win when the coin turns up H. I can play indefinitely, until I win. Question What is the probability of getting an H after at most 5 throws? Reasoning so far $$ \Pr(\text{5 or fewer tosses}) = \Pr(\text{exactly 1 toss}) + \Pr(\text{exactly 2 tosses}) + \Pr(\text{exactly 3 tosses}) + \Pr(\text{exactly 4 tosses}) + \Pr(\text{exactly 5 tosses})\\ =0.05^1\cdot0.95^0+0.05^1\cdot0.95^1+0.05^1\cdot0.95^2+0.05^1\cdot0.95^3+0.05^1\cdot0.95^4\\ =0.226 $$ This could be interpreted as saying that if you've had a long enough string of T, you're bound to eventually have an H... which you shouldn't conclude! At every toss you have exactly a 5% probability of getting one. EDIT: Thinking about it a little more, the result doesn't say that after 4 T tosses you've got a 22.6% probability of getting an H. But rather that, given the conditions, in 5 throws you've got a 22.6% probability of getting an H... Which still doesn't sound right! Say you watch someone play the game, and he's 13 tosses in, all of them T. Knowing that in 14 tosses he's got a 51.2% probability of getting an H, you'd be inclined to bet the next toss would result in an H, wouldn't you? That's my question: this flawed reasoning suggests you should take the bet; however, we know if you take it you've got only a 5% probability of winning... Why is this happening? AI: You are mixing two different things: the probability of an event $A_n$: $\mathbb P(A_n)$; and the probability of an event $A_n$ knowing $B$: $\mathbb P_B(A_n)$. Here, the probability of winning within $n$ tosses will reach $1$ as $n$ goes to infinity, because you will eventually get a head. This is $\mathbb P(A_n)$. 
But if you observe a game and see the player toss 10 tails, you know that the first ten tosses are tails, so the probability of winning on the next toss is $\mathbb P_B(A_{11})$, where $B$ is "the first 10 tosses are tails". And this is equal to $\mathbb P(A_{1})$: the geometric distribution is memoryless.
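Memorylessness is easy to see in simulation; a quick Monte Carlo sketch (seeded for reproducibility):

```python
import random
random.seed(0)

p, trials = 0.05, 400_000
hits = games = 0
for _ in range(trials):
    tosses = [random.random() < p for _ in range(14)]
    if not any(tosses[:13]):          # first 13 tosses were all tails
        games += 1
        hits += tosses[13]            # did toss 14 give heads?
estimate = hits / games
assert abs(estimate - p) < 0.01       # still about 5%, despite the long tail streak
```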
H: Constructing an increasing function on a set A that is continuous only at the irrational points in A. Exercise 6.2 Show that there is a strictly increasing function on $[0,1]$ that is continuous only at the irrational numbers in $[0,1]$. Proof Let $C=[0,1] \cap \mathbf{Q}$ and let $\left\{q_{n}\right\}_{n=1}^{\infty}$ be an enumeration of $C$. Define the function $f$ on $[0,1]$ by setting $$ f(x)=\sum_{\left\{n \mid q_{n} \leq x\right\}} \frac{1}{n^{2}} \text { for all } 0 \leq x \leq 1 $$ Since a $p$-series converges for all $p>1$, $f$ is well-defined. Observe that if $0 \leq x<y \leq 1,$ then $$ f(y)-f(x)=\sum_{\left\{n \mid q_{n} \leq y\right\}} \frac{1}{n^{2}}-\sum_{\left\{n \mid q_{n} \leq x\right\}} \frac{1}{n^{2}}=\sum_{\left\{n \mid x<q_{n} \leq y\right\}} \frac{1}{n^{2}}>0 $$ which implies that $f(y)>f(x)$. Hence $f$ is strictly increasing, as desired. The question is the one above, and what follows is my attempt. I now need to show that this function is continuous only at the irrational numbers, but I am unsure how to do this. AI: Note that there is a slight problem with the claim at the endpoint $0$: the function $f$ above turns out to be right-continuous, and $0$ can only be approached from the right inside $[0,1]$, so $f$ is in fact continuous at the rational point $0$. We therefore prove discontinuity at the rationals in $(0,1]$. Let $r$ be a rational number in $(0,1]$. There is an index $i_0$ such that $r=q_{i_0}$. Let $r_m=r-\frac{1}{m}$. We have $r_m\in[0,1]$ for all large enough $m$, and since $q_{i_0}=r$ lies in $(r_m,r]$, the sum below contains the term $n=i_0$, so $$ f(r)-f(r_m)=\sum_{ r_m <q_n \leq r } \frac{1}{n^2} \geq \frac{1}{i_0^2}. $$ In particular, the sequence $(f(r_m))$ cannot converge to $f(r)$. So $f$ is discontinuous (in fact, it is left-discontinuous) at $r$.
H: Solving integrals by substitution with exponent $du$ Given the integral $\int_0^4 x^3(x^2 + 1)^{-\frac{1}{2}}dx$, I have tried to choose $u = x^2 + 1, du = 2x\space dx$, thus being left with the integral $\int_1^{17} \frac{1}{2}(du)^ 3\space u^{-\frac{1}{2}}$ Is there a simple way to solve this, or do I need to find a more suitable method? AI: Let $u = x^2 +1$. Then we have both $x^2 = u-1$ and $\frac{1}{2} du = xdx$. Rewriting the integrand as $x^3(x^2+1)^{-1/2} dx = x^2 (x^2+1)^{-1/2} \,x\,dx = \frac{1}{2}(u-1)(u)^{-1/2}\,du.$ So, your $u$-sub was right on, just needed to take care of that other term in a reasonable way! Your integral should reduce to $$ \int_0^4 x^3(x^2+1)^{-1/2} \,dx = \frac{1}{2}\int_1^{17}\frac{u-1}{\sqrt{u}} \,du $$
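Both forms of the integral can be checked numerically, e.g. with `scipy.integrate.quad`; finishing the $u$-integral by hand, $\frac12\int_1^{17}(u^{1/2}-u^{-1/2})\,du = \frac{14\sqrt{17}+2}{3}$, which matches:

```python
from math import sqrt
from scipy.integrate import quad

lhs, _ = quad(lambda x: x**3 / sqrt(x**2 + 1), 0, 4)          # original integral
rhs, _ = quad(lambda u: 0.5 * (u - 1) / sqrt(u), 1, 17)       # after the u-sub
closed_form = (14 * sqrt(17) + 2) / 3
assert abs(lhs - rhs) < 1e-7 and abs(lhs - closed_form) < 1e-7
```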
H: Computing $a^b$ as $\lim_{n\to\infty} a^\frac{\lfloor b \cdot 10^n\rfloor}{10^n}$ Let $b \in \mathbb{R}$; then $\forall n \in \mathbb{N}\left(\frac{\lfloor b \cdot 10^n\rfloor}{10^n} \in \mathbb{R}\right)$, but does $\frac{\lfloor b \cdot 10^n\rfloor}{10^n}$ have a particular name? And is the following correct? Let $a,b \in \mathbb{R}$; then $a^b=\lim\limits_{n \to +\infty}a^\frac{\lfloor b \cdot 10^n\rfloor}{10^n}$? Thanks in advance! AI: By definition, $\lfloor b\cdot 10^n\rfloor$ is the floor of $b\cdot 10^n$, that is, the unique integer $m \in \Bbb Z$ such that $m \le b\cdot 10^n < m+1$. In particular, then, $\dfrac{\lfloor b\cdot10^n\rfloor}{10^n}$ is a rational number; it is often called the truncation of $b$ to $n$ decimal places. Moreover, we have that: $$b-\frac{\lfloor b\cdot10^n\rfloor}{10^n} = \frac{b \cdot 10^n - \lfloor b\cdot10^n\rfloor}{10^n}$$ and it is not hard to see that for any $x \in \Bbb R$, $0 \le x - \lfloor x \rfloor<1$. That is, $\dfrac{\lfloor b\cdot10^n\rfloor}{10^n}$ is a rational number that differs from $b$ by less than $\dfrac{1}{10^n}$. In other words: $$0 \le b- \dfrac{\lfloor b\cdot10^n\rfloor}{10^n} < \frac1{10^n}\quad\text{for every }n,$$ so by the squeeze theorem: $$\lim_{n\to\infty}\left( b-\dfrac{\lfloor b\cdot10^n\rfloor}{10^n}\right) = 0,$$ meaning that $\dfrac{\lfloor b\cdot10^n\rfloor}{10^n}$ converges to $b$ as $n \to\infty$. The continuity of $a^x$ in $x$ then assures that: $$a^b = \lim_{n\to\infty} a^{\frac{\lfloor b\cdot10^n\rfloor}{10^n}}$$ as desired. On the other hand, this convergence can be taken as the definition of $a^b$ for real $b$ and $a > 0$: for rational $b = \frac pq$, we can use roots and powers to calculate $a^b = \sqrt[q]{a^p}$. One then defines $a^b = \lim_{n\to\infty} a^{b_n}$ for any sequence $(b_n)_n$ of rationals that converges to $b$. (Of course, it then has to be shown that this does not depend on the particular choice of $(b_n)_n$, which requires some work.) I hope that clarifies things for you.
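A quick numerical illustration of the convergence (here with $a=2$ and $b=\pi$, chosen arbitrarily; truncating $b$ to more decimal places drives the error down):

```python
import math

a, b = 2.0, math.pi
errs = []
for n in range(1, 9):
    b_n = math.floor(b * 10**n) / 10**n     # b truncated to n decimal places
    errs.append(abs(a**b_n - a**b))
assert all(e2 <= e1 for e1, e2 in zip(errs, errs[1:]))  # errors shrink
assert errs[-1] < 1e-6                                  # and become small
```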
H: Why is the geometric multiplicity of an eigenvalue equal to the number of Jordan blocks corresponding to it? The geometric multiplicity of an eigenvalue is $$ \dim \mathrm{null} (A -\lambda I).\tag 1$$ Suppose $A$ is in Jordan normal form and has two Jordan blocks with eigenvalue $\lambda$, one of size $2 \times 2$ and the other of size $3\times 3$. Then why is $\dim \mathrm{null} (A -\lambda I)$ necessarily equal to $2$, i.e. why is the geometric multiplicity of $\lambda$ equal to $2$? From the concept of generalised eigenvectors, I know the following: $(A-\lambda I)^3$ will produce a zero matrix in place of both these sub-blocks. AI: You are trying to solve $(A-\lambda I)v=0$. The matrix $A-\lambda I$ has three 1s in it: one in the $2\times 2$ block and two in the $3\times 3$ block. So it has three pivots, each in a different column. There are only two free variables: one from the column of the $2\times 2$ block without a 1 in it, and one from the $3\times 3$ block. So the set of solutions is two dimensional. If there are any other eigenvalues, they will all have non-zero numbers on the diagonal of $A-\lambda I$, so they won't contribute any more free variables.
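A concrete check with NumPy, building exactly the $2\times2$ plus $3\times3$ Jordan configuration described above (with a made-up eigenvalue $\lambda=4$):

```python
import numpy as np

lam = 4.0
A = np.zeros((5, 5))
A[:2, :2] = [[lam, 1], [0, lam]]                     # 2x2 Jordan block
A[2:, 2:] = [[lam, 1, 0], [0, lam, 1], [0, 0, lam]]  # 3x3 Jordan block

M = A - lam * np.eye(5)
nullity = 5 - np.linalg.matrix_rank(M)
assert nullity == 2          # one independent eigenvector per block

# (A - lam*I)^3 kills both blocks, as noted for generalized eigenvectors
assert np.allclose(np.linalg.matrix_power(M, 3), 0)
```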
H: Difference between interior and set of accumulation points I don't understand the difference between the interior of a set and the set of all its accumulation points. My understanding of an accumulation point is any point in a set which has an epsilon neighborhood around it which is contained in the set, not necessarily implying that the accumulation point itself is in the set. From what I can gather, the interior is identical. Can someone explain the difference? AI: There is no necessary relationship between the two sets. Let $A$ be a set in a topological space $X$. A point $x$ is in the interior of $A$ if there is an open set $U$ such that $x\in U\subseteq A$; in particular this implies that $x\in A$. A point $x$ is an accumulation point of $A$ if for each open set $U$ containing $x$, $U\cap(A\setminus\{x\})\ne\varnothing$; in words, if every open nbhd of $x$ contains at least one point of $A$ different from $x$. This does not imply that $x\in A$. Take the space $\Bbb R$ with the usual topology as a familiar example. The set $\Bbb Z$ has empty interior: for any $n\in\Bbb Z$, no matter how small an $\epsilon>0$ you take, $(n-\epsilon,n+\epsilon)\nsubseteq\Bbb Z$, so $n$ is not in the interior of $\Bbb Z$. $\Bbb Z$ also has no accumulation points: if $x\in\Bbb R\setminus\Bbb Z$, there is an integer $n$ such that $n<x<n+1$, and $(n,n+1)$ is then an open nbhd of $x$ that contains no point of $\Bbb Z$; and if $n\in\Bbb Z$, $(n-1,n+1)$ is an open nbhd of $n$ that contains no point of $\Bbb Z\setminus\{n\}$. In this case the interior of the set is its set of accumulation points: each is the empty set, $\varnothing$. But now consider the following sets: The set $\left\{\frac1n:n\in\Bbb Z^+\right\}$, on the other hand, has empty interior and exactly one accumulation point, $0$. $\Bbb Q$ also has empty interior, and every real number is an accumulation point of $\Bbb Q$. 
$[0,1]$, $[0,1)$, and $(0,1)$ all have interior $(0,1)$, and all have $[0,1]$ as their set of accumulation points. (In each case you should try to prove the assertions.) If we look at $\Bbb Z$ as a space in its own right, with the discrete topology, then every subset of $\Bbb Z$ is open, and no point of $\Bbb Z$ is an accumulation point of any subset of $\Bbb Z$. Thus, if $A\subseteq\Bbb Z$, then the interior of $A$ is $A$ itself, but $A$ has no accumulation points.
H: Improper integrals are "not totally improper" The question is to evaluate $$\int _{-\infty}^{\infty} \frac{dx}{(x^2+a^2)^2}\quad\text {for } a>0.$$ The idea is to calculate this using complex analysis/residue theory/contour integration. The approach is to consider the contour $D_R$ consisting of a semicircle in the upper half plane of radius $R$ together with the line segment $[-R,R]$ (I am not familiar with how to draw figures in latex, so it would be better if someone can help me out if they are sure that they understood what I actually mean). So, then, we have $$\int_{\partial D_R} \frac{dz}{(z^2+a^2)^2}= \int_{-R}^{R}\frac{dx}{(x^2+a^2)^2}+ \int_{\mathcal{T}_R}\frac{dz}{(z^2+a^2)^2}$$ where $\partial D_R$ is the boundary of the contour $D_R$ and $\mathcal{T}_R$ is the contour without the line $[-R,R]$. Now, as $D_R$ is a bounded domain, we can use the residue theorem to find $$\int_{\partial D_R} \frac{dz}{(z^2+a^2)^2}.$$ We have $$\int_{\partial D_R} \frac{dz}{(z^2+a^2)^2}=\int_{\partial D_R} \frac{dz}{(z+ai)^2(z-ai)^2}$$ $$=2\pi i \cdot\text{Residue at } (ai) $$ $$=2\pi i \cdot\lim_{z\rightarrow ai} \frac{d}{dz}\frac{1}{(z+ai)^2}$$ $$= 2\pi i \lim_{z\rightarrow ai} \frac{-2}{(z+ai)^3}$$ $$=2\pi i \frac{-2}{(2ai)^3}$$ $$=2\pi i\frac{-2}{-8a^3i}$$ $$=\frac{\pi}{2a^3}$$ So, I have $$\frac{\pi}{2a^3}= \int_{-R}^{R}\frac{dx}{(x^2+a^2)^2}+ \int_{\mathcal{T}_R}\frac{dz}{(z^2+a^2)^2}$$ i.e., $$\int_{-R}^{R}\frac{dx}{(x^2+a^2)^2} = \frac{\pi}{2a^3} - \int_{\mathcal{T}_R}\frac{dz}{(z^2+a^2)^2}$$ As $R \rightarrow \infty $ we see that $\int_{\mathcal{T}_R}\frac{dz}{(z^2+a^2)^2}\rightarrow 0$. So, $$\int _{-\infty}^{\infty} \frac{dx}{(x^2+a^2)^2}=\frac{\pi}{2a^3}$$ Now, I would be thankful if someone can help me check that what I have done is valid, and I suspect this should always be the case, at least when considering $\int_{\mathcal{T}_R}\frac{dz}{f(z)}$ for $f$ a polynomial. What exactly I mean is that we do not have to bother about any extra conditions beyond the residue theorem when considering $$\int _{-\infty}^{\infty} 
\frac{dx}{f(x)}$$ because in any case I am fixing a bound for $\int_{\mathcal{T}_R}\frac{dz}{f(z)}$ which goes to $0$ as $R\rightarrow \infty$. So, what I would like to say is that $\int_{\mathcal{T}_R}\frac{dz}{f(z)}$ is actually absorbed into $\int _{\partial D_R}$, where $R$ exceeds the maximum magnitude of the zeros of $f$ in the upper half plane. I am a bit afraid that I am missing something. I would like someone to verify if my idea is true: $$\int_{\partial D_R}\frac{dz}{f(z)}=2\pi i \sum {\text{Res. at zeros of } f \text{ in the upper half plane}}$$ If this is always the case, then I would like to say $$"\text{In contrast to its name, Improper Integrals behave properly (conditions apply)}"$$ AI: It seems that the only problem you need to worry about is the integral over $ \mathcal{T}_R$; otherwise the approach clearly works. If the polynomial has degree $d = \deg f \geq 2$, then you can write $f(x) = a_0 x^d + a_1x^{d-1}+\dots = \Theta(x^d)$, where by this notation I mean that there are constants $r,C_1,C_2$ such that if $|x|>r$ then $C_1|x|^d< |f(x)|<C_2|x|^d$. Thus, the integral $\int_{\mathcal{T}_R} \frac{dz}{f(z)}$ can be estimated by: $$ \left| \int_{\mathcal{T}_R} \frac{dz}{f(z)}\right| \leq \frac{\text{length of $ \mathcal{T}_R$}}{\text{minimum of } |f(z)| \text{ on } \mathcal{T}_R} \leq \frac{\pi R}{C_1 R^d} = \frac{\pi }{C_1 }\cdot \frac{1}{R^{d-1}}.$$ This obviously tends to $0$ with $R \to \infty$ so you can be sure this term can be omitted in the limit.
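For what it's worth, the final value can be confirmed by numerical quadrature; a sketch with `scipy.integrate.quad`:

```python
import math
from scipy.integrate import quad

# check the closed form pi/(2a^3) for a few values of a
for a in (0.5, 1.0, 2.0, 3.7):
    val, _ = quad(lambda x, a=a: 1.0 / (x**2 + a**2)**2, -math.inf, math.inf)
    assert abs(val - math.pi / (2 * a**3)) < 1e-7
```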
H: Differential - Approximation to $ \triangle A$ (area variation) The central angle of a circular sector is $80°$. It is desired to reduce it by $1°$. By how much should the radius of the sector be increased so that the area will remain unchanged, if the original length of the radius is $20$ cm? Let $A$ be the area of the sector as a function of the angle $\theta$ (in radians) and the radius $R$. Then we have that $A(\theta, R) = \frac{\theta R^2}{2} $. I want to approximate my $\triangle A$, that is, the variation of the area, by $$ dA = \frac{\partial A}{\partial \theta} d\theta + \frac{\partial A}{\partial R}dR$$ We have that $dA = 0$ and we want to find $dR$. I've found that: • $80° = \frac{4 \pi}{9}$ rad; • $R = \frac{45}{\pi}$ cm; • $1° = \frac{\pi}{180}$ rad. Using this, I couldn't get the answer given ($1/8$ cm). Maybe I am thinking wrong. Can you help me? Thanks in advance! AI: Guess it's just a little error in your calculation. Start with keeping everything in cm and degrees; they are just units (the conversion factor $\pi/180$ multiplies both terms and cancels). $$ 0 = \frac{20^2}{2} \times (-1) + 20 \times 80 \times dR $$ giving $dR = \frac{200}{1600} = \frac18$ cm, as required.
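The computation in numbers, done in radians to make the unit cancellation explicit (a sanity check, not part of the original answer):

```python
import math

theta, R = 80 * math.pi / 180, 20.0        # 80 degrees, radius 20 cm
dtheta = -1 * math.pi / 180                # reduce the angle by 1 degree

# dA = (R^2/2) dtheta + theta*R dR = 0  =>  dR = -R*dtheta/(2*theta)
dR = -R * dtheta / (2 * theta)
assert abs(dR - 1 / 8) < 1e-12             # the pi/180 factors cancel

# the two areas indeed agree to first order
A  = theta * R**2 / 2
A2 = (theta + dtheta) * (R + dR)**2 / 2
assert abs(A - A2) / A < 1e-3
```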
H: Another proof for Liouville's Theorem I'm having trouble completing a homework question which will produce an alternative proof of Liouville's Theorem. The question reads: Let $f$ be an entire function. Evaluate, for $|a|<R,|b|<R,\int_{|z|=R}\frac{f(z)dz}{(z-a)(z-b)}$. When $f$ is bounded, let $R\to \infty$ and deduce another proof of Liouville's Theorem. Liouville's Theorem: If a function $f$ is entire and bounded in the complex plane, then $f(z)$ is constant throughout the plane. So I begin by evaluating the integral. Using the residue theorem (and skipping some methodic computation) I have that $\int_{|z|=R}\frac{f(z)dz}{(z-a)(z-b)}=2\pi i\left[\frac{f(a)-f(b)}{a-b}\right].$ Now, it is given that $f$ is entire (analytic everywhere) and bounded. Thus all I must show is that it is constant. Since $f$ is bounded there exists some non-negative number $M$ such that $|f(z)|<M$. I proceed by using the M-L Lemma in an attempt to generate some ideas. By the conditions given in the question and the reverse triangle inequality, $\left|\int_{|z|=R}\frac{f(z)dz}{(z-a)(z-b)}\right|\leq\frac{M}{(R-|a|)(R-|b|)}\times 2\pi R$. Now as $R\to\infty$, the RHS of the inequality clearly approaches 0. Thus $\int_{|z|=R}\frac{f(z)dz}{(z-a)(z-b)}=0?$ I think I saw this in one of my text books. Does this lead on to the conclusion that $f(z)$ is constant? But then I realise I have not made use of the equation I established above. Thank you to all in advance for your help. AI: Fix $a$. Take $b$ arbitrary; combining your two computations, you showed that $$ \big| f(a) - f(b) \big| \leq \text{constant}\cdot \frac{M}{R}. $$ Taking $R \to \infty$ you get that $f(a) = f(b)$. Therefore for any $b \in \mathbb{C}$ you have $f(b) = f(a)$, implying that $f$ is constant.
H: Does this function satisfy the "Intermediate Value Property"? Although this problem may look easy, I am very much confused by it. Consider the function $$f(x) = \begin{cases}(1-x)/|x|, & x\neq 0\\1, & x=0\end{cases}$$ Does $f$ satisfy the Intermediate Value Property on $[-2,2]$? Thanks. AI: $f$ does not satisfy the intermediate value property. By definition, $f(1/4)=3$, while $f(0)=1$. If $f$ satisfied the IVP, there would be some $a$ with $0<a<1/4$ such that $f(a)=2$. But $f(x)=1/x-1$ for $x>0$, and $1/x-1> 3$ if $0<x<1/4$, so no such $a$ exists.
H: Calculating $\lim_{x \to 0}\frac1x \int_{0}^{x}f(y)dy$ when $f$ is continuous I was trying the following problem: Let $f \colon \Bbb R \to \Bbb R$ be a continuous function such that $$\lim_{x \to 0}f(x)=a .$$ Then $$\lim_{x \to 0}\frac1x \int_{0}^{x}f(y)dy $$ is which of the following: $(A)\ 1 \quad(B)\ a \quad(C)\ -1 \quad(D)$ does not exist My Attempt: As a test case, I take $f(x)=a+bx$, and then I see option (B) is the correct choice. Am I right? Is there any better alternative way to tackle the problem? Thanks in advance for your time. AI: Let $\epsilon > 0$. Choose $\delta > 0$ so that $|x| < \delta\implies |f(x) - a| < \epsilon$. Then if $0<|x| < \delta$, $$\left|{1\over x}\int_0^x f(t)\, dt - a\right| = \left|{1\over x}\int_0^x (f(t)-a)\, dt\right| \le \frac {1}{|x|} \left|\int_0^x |f(t) - a| \, dt\right| \le \epsilon.$$ Consequently, the limit is $a$.
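A concrete instance: for $f(t)=\cos t$ (so $a=1$) the average is $\frac1x\int_0^x\cos t\,dt=\sin(x)/x$, and numerically it approaches $1$ as $x\to0$:

```python
import math

# the averaged integral of cos is sin(x)/x, whose error near 0 is about x^2/6
for x in (0.1, 0.01, 0.001):
    avg = math.sin(x) / x
    assert abs(avg - 1.0) < x * x
```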