How to show that $\lim\limits_{x \to \infty} f'(x) = 0$ implies $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$? I was trying to work out a problem I found online. Here is the problem statement: Let $f(x)$ be continuously differentiable on $(0, \infty)$ and suppose $\lim\limits_{x \to \infty} f'(x) = 0$. Prove that $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$. (source: http://www.math.vt.edu/people/plinnell/Vtregional/E79/index.html) The first idea that came to my mind was to show that for all $\epsilon > 0$, we have $|f(x)| < \epsilon|x|$ for sufficiently large $x$. (And I believe I could do this using the fact that $f'(x) \to 0$ as $x \to \infty$.) However, I was wondering if there was a different (and nicer or cleverer) way. Here's an idea I had in mind: If $f$ is bounded, then $\frac{f(x)}{x}$ clearly goes to zero. If $\lim\limits_{x \to \infty} f(x)$ is either $+\infty$ or $-\infty$, then we can apply l'Hôpital's rule (to get $\lim\limits_{x \to \infty} \frac{f(x)}{x} = \lim\limits_{x \to \infty} \frac{f'(x)}{1} = 0$). However, I'm not sure what I could do in the remaining case (when $f$ is unbounded but oscillates like crazy). Is there a way to finish the proof from here? Also, are there other ways of proving the given statement?
Here is a slightly more general statement. Let $f:\left[ a,+\infty \right[ \longrightarrow \mathbb{R} $ be differentiable and suppose that $\displaystyle \lim_{x\rightarrow +\infty} f'(x)=l \in \mathbb{R}$. We prove that $\displaystyle \lim_{x\rightarrow +\infty} \dfrac{f(x)}{x}=l $. We begin with the mean value theorem: for $ x > \max(0,a)$ there exists $c = c(x)\in \left]a,x\right[ $ such that $$f(x)-f(a)=f'(c)(x-a)\implies \dfrac{\dfrac{f(x)}{x}-\dfrac{f(a)}{x}}{1-\dfrac{a}{x}}=f'(c).$$ Note that $c$ depends on $x$ and need not tend to $+\infty$, so a little care is needed with the quantifiers. Given $\varepsilon>0$, we may assume (enlarging $a$ if necessary, which changes neither hypothesis nor conclusion) that $|f'(t)-l|<\varepsilon$ for all $t\ge a$. Then for every $x>\max(0,a)$, $$\left|\dfrac{\dfrac{f(x)}{x}-\dfrac{f(a)}{x}}{1-\dfrac{a}{x}}-l\right|=|f'(c)-l|<\varepsilon.$$ Since $\dfrac{f(a)}{x}\to 0$ and $\dfrac{a}{x}\to 0$ as $x\to+\infty$, this gives $\displaystyle\limsup_{x\to+\infty}\left|\dfrac{f(x)}{x}-l\right|\le\varepsilon$, and as $\varepsilon>0$ was arbitrary we obtain the desired result: $$\lim_{x\rightarrow +\infty} \dfrac{f(x)}{x}=l$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/62916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 5, "answer_id": 2 }
Inverting Matrix Equations Y = F(X) I have an arbitrary $N \times N$ matrix $S$ and a function of this matrix given by: $$A = F(S) = 2S + P^{-1}S + 2SP + PSP$$ where $P$ is the cyclic permutation matrix which, when acting on the left of a matrix, sends row $i$ to row $i-1$ (with row $0$ identified with row $N$). For example, with $N=3$, $$P = \begin{pmatrix} 0&0&1 \\ 1&0&0 \\ 0&1&0 \end{pmatrix}.$$ Thus $P^N = I$. I want to invert the above equation to find $ S = F^{-1}(A) $, which I am sure can be done. Does anyone have any suggestions for how to do this most effectively?
Why are you "sure" that $F$ has an inverse? In fact, when $N=3$, $F$ has no inverse. To see this, note that $P^{-1}=P^2$ and $P^3=I$. So, if we put $S=P-I$, we have $$ \begin{eqnarray} F(S) &=& 2S + P^{-1}S + 2SP + PSP\\ &=& 2(P-I) + P^{-1}(P-I) + 2(P-I)P + P(P-I)P\\ &=& (2P-2I) + (I-P^{-1}) + (2P^2-2P) + (P^3-P^2)\\ &=& (2P-2I) + (I-P^2) + (2P^2-2P) + (I-P^2) = 0. \end{eqnarray} $$ So $F(S)=0$ for some nonzero $S$. Hence $F$ has no inverse.
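For anyone who wants to check the algebra above, here is a quick numerical sanity check (a small NumPy sketch; the function name `F` simply mirrors the notation of the question):

```python
import numpy as np

# Cyclic permutation matrix for N = 3, as in the question.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

def F(S, P):
    """The map F(S) = 2S + P^{-1}S + 2SP + PSP from the question."""
    Pinv = np.linalg.inv(P)  # here P^{-1} = P^2, since P^3 = I
    return 2 * S + Pinv @ S + 2 * S @ P + P @ S @ P

S = P - np.eye(3)               # the nonzero matrix from the answer
print(np.allclose(F(S, P), 0))  # True: F(S) = 0, so F is not injective
```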
{ "language": "en", "url": "https://math.stackexchange.com/questions/63014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do all objects have enough generalized elements? I understand why in the category of sets two parallel morphisms $f, g: A \rightarrow B$ are identical iff for each element $x: 1 \rightarrow A$ it holds that $f\circ x = g \circ x$. Awodey on p. 36 of Category Theory asks (as an exercise), why in any category two parallel morphisms $f, g: A \rightarrow B$ are identical iff for each generalized element $x: X \rightarrow A$ it holds that $f\circ x = g \circ x$. Could someone please give me a hint how to prove this?
There should be a way to see this using the Yoneda embedding, too; I post it because it has been mentioned in the comments. Fix any category $\mathbf C$ and two arrows $f,g:C\to C'$. The Yoneda embedding $y:\mathbf C\to [\mathbf C^{\mathrm{op}},\mathbf{Set}]$ into the category of presheaves is a fully faithful functor, hence $f=g$ if and only if $y(f)=y(g)$. By definition, $y(f):\mathbf C(-,C)\to \mathbf C(-,C')$ is the natural transformation whose component $y(f)_D:\mathbf C(D,C)\to \mathbf C(D,C')$, for every object $D$, sends any $h:D\to C$ to $f\circ h$; and similarly for $y(g)$. But $y(f)=y(g)$ if and only if $y(f)_D=y(g)_D$ for every object $D$, that is, $f\circ h=g\circ h$ for every generalized element $h:D\to C$. Yoneda's lemma asserts that two natural transformations $\alpha,\beta$ from $\mathbf C(-,C)$ to any functor $\mathbf C^{\mathrm{op}}\to \mathbf{Set}$ coincide if and only if $\alpha_C(\operatorname{id}_C)=\beta_C(\operatorname{id}_C)$; thus, it is a way of seeing that $y(f)=y(g)$ if and only if $f\circ \operatorname{id}_C=g\circ \operatorname{id}_C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Variations on the Stirling's formula for $\Gamma(z)$ I am currently reading some material that makes heavy usage of Hypergeometric functions, and there is one particular point about applying Stirling's approximation to various terms consisting of Gamma-Functions that is not very clear to me. We have the classical Stirling's approximation formula for the Gamma-Function in the form: $$ \Gamma(z)=\sqrt{2\pi}e^{-z}z^{z-1/2}\left(1+O\left(\frac{1}{|z|}\right)\right) $$ for $|\arg(z)|<\pi$ as $|z|\to\infty$. In absolute value: $$ |\Gamma(z)|=\sqrt{2\pi}e^{-\Re(z)}|z|^{\Re(z)-1/2}e^{-\Im(z)\arg(z)}\left(1+O\left(\frac{1}{|z|}\right)\right) $$ Now, there is also the shifted Stirling's approximation for the Gamma-Function due to C. Rowe, if I am not mistaken, that says: $$ \Gamma(z+a)=\sqrt{2\pi}e^{-z}z^{z+a-1/2}\left(1+O\left(\frac{1}{|z|}\right)\right) $$ uniformly for $|\arg(z)|\leq\pi-\varepsilon$, $a$ in a compact subset of $\mathbb{C}$ and some suitable fixed $\varepsilon>0$, as $|z|\to\infty$. In absolute value: $$ |\Gamma(z+a)|=\sqrt{2\pi}e^{-\Re(z)}|z|^{\Re(z+a)-1/2}e^{-\Im(z+a)\arg(z)}\left(1+O\left(\frac{1}{|z|}\right)\right) $$ My first question refers to terms of the type $$ \Gamma(az+b)\Gamma(cz+d) $$ for some complex numbers $a,b,c$ and $d$. (Q1) Using the classical Stirling's approximation formula (i.e. not the shifted one), how can one obtain a meaningful aggregated asymptotics for the above expression? I am asking this because there are several places that apply the non-shifted version of Stirling's formula to shifted Gamma factors without mentioning any details, which leaves the impression that this is a fairly standard or even trivial argument. Unfortunately, at this point I am unable to see its triviality. What bothers me here is the term $$ (az+b)^{az+b-1/2}(cz+d)^{cz+d-1/2} $$ as well as $\arg(az+b)$ and $\arg(cz+d)$, since the shifts by $c$ and $d$ break any easy manipulations. 
I am naturally assuming that I am missing something very obvious here (as usual). I have intentionally not specified anything about the parameters $a,b,c$ and $d$ because my question rather aims at the principle of applying the non-shifted Stirling's approximation to Gamma terms like the one above. (Q2) Are there any other versions or forms of Stirling's approximation for $\Gamma(z)$ that could be particularly useful for computing such kinds of asymptotics? I would be extremely thankful if someone could give some insight into the (principle of the) application of Stirling's approximation formula(s) to terms composed of Gamma factors!
Let me take a stab at (Q1). I find it easier to work with $\log\Gamma(z)$ rather than $\Gamma(z)$, since $\Gamma(z) = \exp( \log\Gamma(z))$. Stirling's formula reads as follows: $$ \log\Gamma(z) \sim (z-\frac{1}{2}) \log(z) - z + \frac{1}{2} \log(2 \pi) + o(1) $$ for $\vert z \vert \to \infty$ and $ \vert \arg(z) \vert < \pi - \epsilon$. Notice that the shifted formula is a simple consequence of the above: $$ \begin{eqnarray} \log\Gamma(z+a) &\sim& ( z+a -\frac{1}{2}) \log(z+a) - z - a + \frac{1}{2} \log(2 \pi) + o(1) \\ &\sim& ( z+a -\frac{1}{2}) \log(z) + ( z+a -\frac{1}{2}) \log(1+\frac{a}{z}) - z - a + \frac{1}{2} \log(2 \pi) + o(1) \\ &\sim& ( z+a -\frac{1}{2}) \log(z) + z \log(1+\frac{a}{z}) - z - a + \frac{1}{2} \log(2 \pi) + o(1) \\ &\sim& ( z+a -\frac{1}{2}) \log(z) - z + \frac{1}{2} \log(2 \pi) + o(1) \end{eqnarray} $$ Now $$ \begin{eqnarray} \log\Gamma(a z + b) + \log\Gamma(c z + d) & \sim & ( a z +b -\frac{1}{2}) \log(a z) - a z + \frac{1}{2} \log(2 \pi) + \\ & & ( c z +d -\frac{1}{2}) \log(c z) - c z + \frac{1}{2} \log(2 \pi) + o(1) \end{eqnarray} $$ Then it is a matter of recombining terms as $ ( \mathcal{A} + \mathcal{B} z) + (\kappa z + \rho -\frac{1}{2}) \log (\kappa z) - \kappa z + \frac{1}{2} \log(2 \pi) + o(1) $. See the book of Paris and Kaminski to fill in the details.
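A quick numerical sanity check of the shifted formula (a sketch for real $z>0$ only, using `math.lgamma`; the helper name is mine for illustration):

```python
import math

def stirling_shifted(z, a):
    """Right-hand side of the shifted Stirling formula, for real z > 0."""
    return (z + a - 0.5) * math.log(z) - z + 0.5 * math.log(2 * math.pi)

z, a = 500.0, 1.7
exact = math.lgamma(z + a)   # log Gamma(z + a)
approx = stirling_shifted(z, a)
print(exact - approx)        # the O(1/|z|) error term; small for large z
```

Doubling $z$ roughly halves the discrepancy, consistent with the $O(1/|z|)$ error term.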
{ "language": "en", "url": "https://math.stackexchange.com/questions/63115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Jordan Measures, Open sets, Closed sets and Semi-closed sets I cannot understand: $$\bar{\mu} ( \{Q \cap (0,1) \} ) = 1$$ and (cannot understand this one particularly) $$\underline{\mu} ( \{Q \cap (0,1) \} ) = 0$$ where $Q$ is rational numbers, why? I know that the measure for closed set $\mu ([0,1]) = 1$ so I am puzzled with the open set solution. Is $\underline{\mu} ( (0,1) ) = 0$ also? How is the measure with open sets in general? So far the main question, history contains some helper questions but I think this is the lion part of it what I cannot understand. More about Jordan measure here. Related * *Jordan measure with semi-closed sets here *Jordan measure and uncountable sets here
* *The article you linked to explains it clearly: the expression you posted is the measure of a single rectangle, and one may add up the measures of a countable number of these rectangles separately to find the total measure of their union if the rectangles are disjoint; otherwise, they may have overlaps, and you have to either account for the overlaps somehow or settle for sub-additivity. These are standard defining properties of measures/outer measures, which I suggest you review. *Yes, the Jordan measure of a single point is 0. Just apply the definitions using very small covering rectangles. *Yes, the Jordan measure of $[0,1]$ is $1$. Take outer measures with the rectangles $[0,1+\epsilon) $ and inner measures with rectangles $[0,1-\epsilon)$. *$\bar{\mu} ( \{Q \cap (0,1) \} ) = 1$ because the rationals are dense in the reals, so covering the rationals in $(0,1)$ with semi-open rectangles necessitates covering $[0,1)$ as well. On the other hand, $\underline{\mu} ( \{Q \cap (0,1) \} ) = 0$ because the irrationals are dense in the reals, so any simple set you try to place inside $\{Q \cap (0,1) \} $ will contain irrational numbers, which are not in the set; hence the only simple set inside is the empty set. By definition, the measure of the empty set is $0$. In response to your edits: We are just applying the definitions of outer and inner Jordan measure. Hopefully my 4th point above addresses your question. The inner and outer measures of $(0,1)$ are indeed both $1$; you should try proving it from the definitions as an exercise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Produce output with certain probability using fair coin flips "Introduction to Algorithms" C.2-6 Describe a procedure that takes as input two integers a and b such that $0 < a < b$ and, using fair coin flips, produces as output heads with probability $a / b$ and tails with probability $(b - a) /b$. Give a bound on the expected number of coin flips, which should be O(1). (Hint: Represent a/b in binary.) My guess is that we can use heads to represent bit 0 and tails for bit 1. Then by flipping $m = \lceil \log_2 b \rceil$ times, we obtain an $m$-bit binary number $x$. If $ x \ge b $, then we just drop $x$ and repeat the experiment until we get an $ x < b$. This $x$ satisfies $P\{ x < a\} = \frac a b$ and $P\{ a \le x < b\} = \frac {b - a} b$. But I'm not quite sure if my solution is what the question asks. Am I right? Edit: I think Michael and TonyK gave the correct algorithm, but Michael explained the reason behind it. The 3 questions he asked: * *show that this process requires c coin flips, in expectation, for some constant c; The expectation of the number of flips, as TonyK pointed out, is 2. * *show that you yell "NO!" with probability 2/3; P(yell "NO!") = P("the coin first comes up tails on an odd-numbered toss") = $ \sum_{k=0}^\infty \left(\frac 1 2\right)^{2k+1} = \frac 2 3$ * *explain how you'd generalize to other rational numbers, with the same constant c; It's the algorithm given by TonyK. We can restate it like this: Represent a/b in binary. Define $f(\text{Head}) = 0$ and $f(\text{Tail}) = 1$. If f(nth flip's result) differs from the "nth bit of a/b", then terminate. If the last flip is a Head, we yell "Yes"; otherwise, "No". We have $ P \{Yes\} = \sum_{i\in I}(1/2)^i = \frac a b $, where $I$ denotes the set of indices at which the binary expansion of a/b has a 1.
Basically, what Michael says is like this: We express a/b using, say, 64 binary digits. Now, flip 64 coins and represent the outcome as a binary number using head=1 and tail=0. A good approximation of the required procedure is: * *If this number is smaller than the 64-bit binary representation of a/b, then say "HEADS". *If this number is larger than the 64-bit binary representation of a/b, then say "TAILS". Hope I am right.
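Here is a small simulation of the bit-comparison procedure TonyK describes (an illustrative sketch; the function name is mine): generate the bits of a uniform $U \in [0,1)$ by coin flips, compare them with the binary expansion of $a/b$, and stop at the first disagreement.

```python
import random

def biased_flip(a, b, rng=random):
    """Return True ("heads") with probability a/b using fair coin flips.

    Compares a uniform-[0,1) bit stream against the binary expansion
    of a/b, stopping at the first disagreement.  Expected flips: 2.
    """
    r = a
    while True:
        coin = rng.randint(0, 1)   # one fair coin flip
        r *= 2                     # compute the next bit of a/b
        bit, r = (1, r - b) if r >= b else (0, r)
        if coin != bit:
            return coin < bit      # U < a/b iff U's bit is 0 and a/b's is 1

rng = random.Random(0)
n = 100_000
freq = sum(biased_flip(3, 7, rng) for _ in range(n)) / n
print(freq)   # close to 3/7, i.e. about 0.4286
```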
{ "language": "en", "url": "https://math.stackexchange.com/questions/63207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Closure of the Laplacian in $L^2(\mathbb R^n)$ Consider the Laplacian $\Delta$ as an operator on $L^2(\mathbb R^n)$, densely defined on the subspace $C^\infty_0(\mathbb R^n)$. * *Is the domain of the closure of the Laplacian, in the sense described here: https://en.wikipedia.org/wiki/Unbounded_operator#Closed_linear_operators, equal exactly to: $$\{u \in L^2(\mathbb R^n) | \Delta u \in L^2(\mathbb R^n)\}$$ (where $\Delta$ here means in the distributional sense)? *Does any of the above spaces (which I hope are equal) in turn exactly equal the Sobolev space $W^{2,2}(\mathbb R^n)$, or is $W^{2,2}$ actually a strictly smaller space? *Does any of the above spaces equal the Friedrichs extension?
I don't know much about Friedrichs extension, so I will only comment on the first two. I will sketch how to prove that your space with the $2$ replaced by a $p$ is equal to that Sobolev space. For $p = 2$ you can just use Plancherel together with our friend the Fourier transform (try it!). Using the Riesz transform one can show (See Stein's Singular integrals and differentiability properties of functions) that Theorem. Suppose $f \in C^2$ and suppose that $f$ has compact support. Then we have $$\left \|\frac{\partial^2 f}{\partial x_j \partial x_k} \right \|_p \leq A_p \|\Delta f\|_p \text{ for $1 < p < \infty$}$$ Using limits and so on we can show that this holds for $W^{2, p}(\mathbf{R}^d)$ and with some PDE tricks also for domains. From this we get Corollary. For $1 < p < \infty$ we have, $$W^{2, p}(\mathbf{R}^d) = \{f \in L^p(\mathbf{R}^d): \Delta f \in L^p\}.$$ This is quite easy. Just introduce the norm $|\!|\!|f|\!|\!| = \|f\|_p + \|\Delta f\|_p$ and show that this is equivalent to the Sobolev norm. To show this we can use that in $\mathbf R$ we have that $\|f'\|_p \lesssim \|f\|_p + \|f''\|_p$. This is just by integration by parts. A similar formula holds for $\mathbf R^d$.
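For $p = 2$, the Plancherel argument alluded to above ("Plancherel together with our friend the Fourier transform") can be written out in one line; up to the normalization of the Fourier transform $\hat f$, the key estimate is:

```latex
% p = 2: second derivatives are controlled by the Laplacian via Plancherel.
\left\| \frac{\partial^2 f}{\partial x_j \partial x_k} \right\|_2
  = \| \xi_j \xi_k \hat f \|_2
  \le \bigl\| \, |\xi|^2 \hat f \, \bigr\|_2
  = \| \Delta f \|_2,
\qquad \text{since } |\xi_j \xi_k| \le \tfrac12\bigl(\xi_j^2 + \xi_k^2\bigr) \le |\xi|^2 .
```

Together with the elementary bound on first derivatives, this gives the equivalence of the norms $\|f\|_2 + \|\Delta f\|_2$ and $\|f\|_{W^{2,2}}$ directly.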
{ "language": "en", "url": "https://math.stackexchange.com/questions/63251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Space-filling curve with distance locality Is there a space-filling curve $C$ that has the property that, if $C$ passes through $p_1=(x_1,y_1)$ at a distance $d_1$ along the curve, and through $p_2$ at $d_2$, then if $|p_1 - p_2| \le a$, then $|d_1 - d_2| \le b$, for some constants $a$ and $b$? In other words, any two points of the plane within distance $a$ are separated by at most $b$ along $C$. Call this property distance locality. So I am asking whether a curve exists mapping $\mathbb{R}$ to $\mathbb{R}^2$ with distance locality. Although I doubt the answer differs, permit me also to ask the same question for $\mathbb{Q}^2$, and for $\mathbb{Z}^2$. I have little experience with the properties of the known space-filling curves. Those better schooled on this topic can likely answer these questions easily. Thanks! Addendum. I noticed a paper just released today which is focused on "locality properties" of 3D space-filling curves: "An inventory of three-dimensional Hilbert space-filling curves," arXiv:1109.2323v1 [cs.CG]. The author explores several different locality measures that have been considered in the literature, and cites a wealth of references.
Let $f:{\bf Z}\to{\bf Z}^2$ be one-one and onto. Suppose that for all pairs of adjacent lattice points $p$ and $q$ there exist integers $b$, $m$, and $n$ such that $f(m)=p$, $f(n)=q$, and $|m-n|\le b$. Suppose without loss of generality that $f(0)=(0,0)$. For any positive integer $d$, if $p$ is a lattice point at most $d$ steps away from the origin, then we must have $f(n)=p$ for some $n$ with $|n|\le bd$. But the number of lattice points within $d$ steps of the origin grows as the square of $d$, while the number of integers $n$ with $|n|\le bd$ grows linearly with $d$, contradiction. So there isn't a map satisfying "distance locality", even for distance $a=1$.
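The two growth rates in this counting argument are easy to tabulate (a small illustrative script; it is the count of lattice points *within* $d$ steps that matters, since all of them must be images of integers $n$ with $|n| \le bd$):

```python
def lattice_points_within(d):
    """Number of points of Z^2 at taxicab distance <= d from the origin."""
    return sum(1 for x in range(-d, d + 1)
                 for y in range(-d, d + 1) if abs(x) + abs(y) <= d)

b = 10  # an arbitrary locality constant for illustration
for d in (10, 100, 300):
    print(d, lattice_points_within(d), 2 * b * d + 1)
# the first count is 2d^2 + 2d + 1, which eventually dwarfs 2bd + 1
```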
{ "language": "en", "url": "https://math.stackexchange.com/questions/63302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How can I prove this random process to be Standard Brownian Motion? $B_t,t\ge 0$ is a standard Brownian motion. Define $X_t=e^{t/2}B_{1-e^{-t}}$ and $Y_t=X_t-\frac{1}{2}\int_0^t X_u\,du$. The question is to show that $Y_t, t\ge 0$ is a standard Brownian motion. I tried to calculate the variance of $Y_t$ for a given $t$, but failed to get $t$.
Calculate the covariance $E(Y_s Y_t)$; it is $\min(s,t)$. But the algebra is really tedious, and I wonder whether there is a simpler way to show it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Eigenvalues of unitary matrices I have a sparse unitary matrix with complex entries and want to compute all its eigenvalues. Unfortunately, MATLAB doesn't like this. If I try to enter eigs(A, N) (A the matrix, N its size), it tells me that I should use eig(full(A)) instead. This is awfully slow compared to the computation for self-adjoint sparse matrices. Is there any way to do this quicker?
A unitary matrix $A$ is normal, i.e. $A^HA=AA^H$. Let's define $\operatorname{Re}(A):=(A+A^H)/2$ and $\operatorname{Im}(A):=(A-A^H)/(2i)$. Note that $\operatorname{Re}(A)$ and $\operatorname{Im}(A)$ are self-adjoint (sparse) matrices, and satisfy $\operatorname{Re}(A)\operatorname{Im}(A)=\operatorname{Im}(A)\operatorname{Re}(A)$, i.e. they commute. So you can compute the real and imaginary parts of the eigenvalues separately. To match corresponding real and imaginary parts together, you have to look at the mutual eigenspaces.
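A dense-matrix sanity check of this decomposition (an illustrative NumPy sketch; for the actual sparse problem one would feed $\operatorname{Re}(A)$ and $\operatorname{Im}(A)$ to a sparse Hermitian eigensolver instead):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
# A random unitary matrix via QR decomposition of a complex Gaussian matrix.
A, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

ReA = (A + A.conj().T) / 2
ImA = (A - A.conj().T) / (2j)

# Both parts are Hermitian and they commute, as claimed.
print(np.allclose(ReA @ ImA, ImA @ ReA))

# Eigenvalues of Re(A) are exactly the real parts of the eigenvalues of A
# (with multiplicities), since A is normal.
lam = np.linalg.eigvals(A)
print(np.allclose(np.sort(np.linalg.eigvalsh(ReA)), np.sort(lam.real)))
```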
{ "language": "en", "url": "https://math.stackexchange.com/questions/63413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proofs of $\lim\limits_{n \to \infty} \left(H_n - 2^{-n} \sum\limits_{k=1}^n \binom{n}{k} H_k\right) = \log 2$ Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$ I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first. (The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.) (Added: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? This question was asked and answered a while back; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.)
Here's a different proof. We will simplify the second term as follows: $$ \begin{eqnarray*} \frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \frac{1}{t} \right] &=& \frac{1}{2^n} \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} \int_{0}^1 x^{t-1} dx \right] \\ &=& \frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \sum\limits_{t=1}^{k} x^{t-1} \right] dx \\ &=& \frac{1}{2^n} \int_{0}^1 \sum\limits_{k=0}^n \left[ \binom{n}{k} \cdot \frac{x^k-1}{x-1} \right] dx \\ &=& \frac{1}{2^n} \int_{0}^1 \frac{\sum\limits_{k=0}^n \binom{n}{k} x^k- \sum\limits_{k=0}^n \binom{n}{k}}{x-1} dx \\ &=& \frac{1}{2^n} \int_{0}^1 \frac{(x+1)^n- 2^n}{x-1} dx. \end{eqnarray*} $$ Make the substitution $y = \frac{x+1}{2}$, so the new limits are now $1/2$ and $1$. The integral then changes to: $$ \begin{eqnarray*} \int_{1/2}^1 \frac{y^n- 1}{y-1} dy &=& \int_{1/2}^1 (1+y+y^2+\ldots+y^{n-1}) dy \\ &=& \left. y + \frac{y^2}{2} + \frac{y^3}{3} + \ldots + \frac{y^n}{n} \right|_{1/2}^1 \\ &=& H_n - \sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i. \end{eqnarray*} $$ Notice that conveniently $H_n$ is the first term in our function. Rearranging, the expression under the limit is equal to: $$ \sum_{i=1}^n \frac{1}{i} \left(\frac{1}{2} \right)^i. $$ The final step is to note that this is just the $n$th partial sum of the Taylor series of $f(y) = -\ln(1-y)$ evaluated at $y=1/2$. Therefore, as $n \to \infty$, this sequence approaches the value $$-\ln \left(1-\frac{1}{2} \right) = \ln 2.$$ ADDED: As Didier's comments hint, this proof also shows that the given sequence, call it $u_n$, is monotonically increasing and is hence always smaller than $\ln 2$. Moreover, since $\ln 2 - u_n = \sum_{i=n+1}^\infty \frac{1}{i2^i}$, we also have a tight error estimate: $$ \frac{1}{(n+1)2^{n+1}} < \ln 2 - u_n < \frac{1}{(n+1)2^{n}}, \ \ \ \ (n \geq 1). $$
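A quick numerical check of the identity $u_n = \sum_{i=1}^n \frac{1}{i2^i}$ derived above (an illustrative sketch; the function names are mine):

```python
import math

def u(n):
    """H_n - 2^{-n} * sum_k C(n,k) H_k, computed directly from the definition."""
    H = [0.0] * (n + 1)
    for i in range(1, n + 1):
        H[i] = H[i - 1] + 1.0 / i
    s = sum(math.comb(n, k) * H[k] for k in range(1, n + 1))
    return H[n] - s / 2**n

def partial_sum(n):
    """The n-th partial sum of the series for -log(1 - y) at y = 1/2."""
    return sum(1.0 / (i * 2**i) for i in range(1, n + 1))

for n in (5, 10, 20):
    print(n, u(n), partial_sum(n), math.log(2))
# the two computed values agree, and both approach log 2 from below
```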
{ "language": "en", "url": "https://math.stackexchange.com/questions/63466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 6, "answer_id": 5 }
Writing the Laplacian matrix of directed graphs as product? The Laplacian matrix of an undirected graph can be written as $M^TM$ with $M$ being the incidence matrix of the graph. This makes the (otherwise tedious) proof of Kirchhoff's theorem into a beautiful application of the Cauchy-Binet formula (and indeed, this is one of the proofs in "Proofs from THE BOOK"). If the graph is directed, $M^TM$ does not work anymore; the diagonal of the resulting matrix contains the total degree of vertices, whereas for Kirchhoff's theorem to work, only the indegree should appear. So my question is this: can this approach still be salvaged by a slightly different definition of $M$ that eludes me, or is the "tedious" proof necessary and Cauchy-Binet simply can't be used here?
I believe I have found a way to write the Laplacian as a product. First recall the definition in the directed-graph case: The Laplacian of $G=(V,E)$ is a matrix of size $|V|\times |V|$ such that $L_{ii}$ is the in-degree of $v_i$, and $L_{ij}$ for $i\ne j$ is minus the number of edges from $v_i$ to $v_j$. Since the Laplacian need not be symmetric in the directed case, it can't be written as $M^{T}M$. However, it can be written as $AB^T$ for two very similar matrices: Define both A and B to be $n\times m$ matrices, where each row represents a vertex and each column represents an edge of $G$. Now, for each edge $e_k = v_i\to v_j$: * *$A_{ik} = 1, A_{jk} = -1$ *$B_{ik} = 0, B_{jk} = -1$ The rest of the entries are 0. Unless I have some computational error, we have $L=AB^T$.
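Here is a small NumPy check of $L=AB^T$ on a concrete digraph (an illustrative sketch; the edge list is made up):

```python
import numpy as np

# Directed graph on 3 vertices; each edge e_k is a pair (tail, head).
edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
n, m = 3, len(edges)

A = np.zeros((n, m))
B = np.zeros((n, m))
for k, (i, j) in enumerate(edges):
    A[i, k], A[j, k] = 1, -1   # A_ik = 1, A_jk = -1
    B[j, k] = -1               # B_ik = 0, B_jk = -1

# Directed Laplacian built straight from the definition: in-degrees on
# the diagonal, minus the number of edges v_i -> v_j off the diagonal.
L = np.zeros((n, n))
for (i, j) in edges:
    L[j, j] += 1
    L[i, j] -= 1

print(np.allclose(A @ B.T, L))   # True: L = A B^T
```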
{ "language": "en", "url": "https://math.stackexchange.com/questions/63507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
How to create a one to one correspondence between two sets? I am stuck with, Give a one to one correspondence between Z+ and positive even integers. Now, I don't have an idea how to show that there is a one to one correspondence between the two. I would be thankful for some hints.
When asked to prove that "there exists a ...", what you are really doing is finding whatever it is that you need to prove exists. For example, in your question you must prove that there exists a one-to-one correspondence between $\mathbb{Z}^{+}$ and the positive even integers, which I will now denote $\mathbb{Z}_e$, so we should attempt to find a map which takes $\mathbb{Z}^{+} \rightarrow \mathbb{Z}_e$. So how should we go about finding one? Well, first let's think: what is the formal definition of even? I would say an integer $x$ is even if $x = 2k$ for some $k\in \mathbb{Z}$, so the set of positive even integers is $\mathbb{Z}_e = \{x = 2k : k\in\mathbb{Z}^{+}\}$. Now, once we have formalized what a positive even integer is, it is not hard to think of a map; for example, take: $f: \mathbb{Z}^{+} \rightarrow \mathbb{Z}_e$ defined by: $k \mapsto 2k$. Now we've got a map we think will work, and we just need to check that it is one-to-one. Suppose $f(r) = f(s)$. Then $2r = 2s$, which quickly implies that $r = s$, so the map is one-to-one, as desired. Furthermore, the map is also onto, because $\mathbb{Z}_e = \{x = 2k : k\in \mathbb{Z}^{+}\}$ is the set of positive integers of the form $2k$ by definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Distances between closed sets on metric spaces In $\mathbb R^n$, the distance between a point $b$ and a set $X$ is defined by $$ \inf \left\{ d(b,x) \mid x \in X \right\}. $$ The proposition: if $X$ is closed, this distance is attained at some point of the set $X$. To prove it, I assumed without loss of generality that $X$ is also bounded, because if not, it can be intersected with (e.g.) $$ B_b (n) = \left\{ x \in \mathbb R^n \mid d(b,x) \leqslant n \right\}, $$ where $n$, for example, can be chosen as the first natural number such that the intersection is nonempty; obviously the remaining points of $X$ are at a greater distance and are not candidates. Then define $$ f(x) = \left| x - b \right| \quad \text{for }x \in X $$ and, since the domain is compact, we conclude the result: $f$ attains its minimum at some point of the set. This argument clearly does not carry over to an arbitrary metric space, because being closed and bounded is not enough to be compact in general, but perhaps the statement can still be proved in a more general way. That is my question: is it true in a general metric space?
No, such a statement does not hold for a general metric space. Let $V$ be the metric space $[-1, 0) \cup (0, 1]$, and $X = (0,1]$ (verify that $X$ is closed and bounded in $V$). Then for $b = -1$, the set of distances $$ \{ |x-b| \,:\, x \in (0, 1] \} $$ has infimum $1$, but no minimum value. Theo's comment describes a stronger counterexample showing that this conclusion does not follow even assuming the completeness of the underlying metric space. Consider the space $\ell^p$ of sequences (say, $p=2$), and let $X$ be the set of sequences $a_n$, where $a_n$ is $1+\frac{1}{n}$ in the $n$th entry, and zero in every other entry. Boundedness of $X$ is clear; it is also closed since it has no limit points. However, for $b=0$, the distance $d(b,a_n)$ gets arbitrarily close to, but never equals, $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Complex Analysis curve We are given three complex numbers a,b,and c. Consider $Re(az^{2} + bz +c)=0$. What is this curve? I am having a hard time approaching this problem. Any suggestions or help would be great.
If $a=a_1+ia_2$, $b=b_1+ib_2$, $c=c_1+ic_2$ and $z=x+iy$, then $$z^2=x^2-y^2+2ixy$$ hence $az^2+bz+c$ equals $$ a_1(x^2-y^2)-2a_2xy+b_1x-b_2y+c_1+i\cdot(\text{something real}), $$ and you are done.
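One can let a computer algebra system confirm the expansion (a SymPy sketch; variable names mirror the answer). The upshot is that $\operatorname{Re}(az^2+bz+c)=0$ is a real quadratic equation in $x$ and $y$, i.e. a (possibly degenerate) conic:

```python
import sympy as sp

a1, a2, b1, b2, c1, c2, x, y = sp.symbols('a1 a2 b1 b2 c1 c2 x y', real=True)
a = a1 + sp.I * a2
b = b1 + sp.I * b2
c = c1 + sp.I * c2
z = x + sp.I * y

# Real part of a z^2 + b z + c, and the formula claimed in the answer.
lhs = sp.re(sp.expand(a * z**2 + b * z + c))
rhs = a1 * (x**2 - y**2) - 2 * a2 * x * y + b1 * x - b2 * y + c1
print(sp.simplify(lhs - rhs))   # 0
```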
{ "language": "en", "url": "https://math.stackexchange.com/questions/63680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of iid random variables to be equal? Suppose $X_1$ and $X_2$ are iid random variables. I want to determine $P(X_1=X_2)$. If they are integer-valued random variables, then $$P(X_1=X_2) = \sum_{i \in \mathbb{Z}} P_{X_1,X_2}(i,i) = \sum_{i \in \mathbb{Z}} P_{X_1}(i)^2. $$ If they are continuous random variables, then $$P(X_1=X_2) = \int_{x \in \mathbb{R}} f_{X_1,X_2}(x,x) dx = \int_{x \in \mathbb{R}} f_{X_1}(x)^2 dx. $$ But when $X_1$ and $X_2$ are uniformly distributed over $[0,1)$, $$P(X_1=X_2) = \int_{x \in \mathbb{R}} f_{X_1}(x)^2 dx = \int_{x \in [0,1)} 1 dx = 1. $$ Intuitively it is not possible, since $P(X_1\neq X_2) > 0$. So is there some mistake I have made? Thanks!
No, your calculation for the continuous case is wrong. It should be $ P(X_1 = X_2) = \displaystyle\iint_D f_{X_1}(x) f_{X_2}(y)\ dx \ dy$, where $D = \{(x,y) \in {\mathbb R}^2: x = y\}$. But $D$ has two-dimensional measure (i.e. area) $0$, so the answer is $0$.
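The contrast between the discrete and continuous cases is easy to see in a small simulation (an illustrative sketch with a fair die and uniform variables; the inline numbers are approximate expectations):

```python
import random

# Discrete case: for a fair die, P(X1 = X2) = sum over i of (1/6)^2 = 1/6.
exact = sum((1 / 6) ** 2 for _ in range(6))

rng = random.Random(0)
n = 200_000
hits = sum(rng.randint(1, 6) == rng.randint(1, 6) for _ in range(n))
print(exact, hits / n)   # both about 0.1667

# Continuous case: independently sampled uniforms essentially never coincide.
ties = sum(rng.random() == rng.random() for _ in range(n))
print(ties)   # 0
```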
{ "language": "en", "url": "https://math.stackexchange.com/questions/63794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
A conjecture about the form of some prime numbers Let $k$ be an odd number of the form $k=2p+1$, where $p$ denotes any prime number. Is it true that for each such $k$ at least one of $6k-1$, $6k+1$ is a prime number? Can someone prove or disprove this statement?
You got confused with your quantifiers, but if your conjecture is what I guess it is, then the first five counterexamples are $p=$ 59,83,89,103,109.
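TonyK's counterexamples are easy to reproduce with a brute-force search (a short illustrative script):

```python
def is_prime(n):
    """Trial-division primality test; fine for numbers this small."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Primes p for which, with k = 2p + 1, BOTH 6k - 1 and 6k + 1 are
# composite -- i.e. counterexamples to the conjecture.
counterexamples = [p for p in range(2, 120) if is_prime(p)
                   and not is_prime(6 * (2 * p + 1) - 1)
                   and not is_prime(6 * (2 * p + 1) + 1)]
print(counterexamples[:5])   # [59, 83, 89, 103, 109]
```

For instance, $p=59$ gives $k=119$ and $6k\pm1 = 713 = 23\cdot31$, $715 = 5\cdot11\cdot13$, both composite.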
{ "language": "en", "url": "https://math.stackexchange.com/questions/63862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does the image of the projection morphism of a fiber product contain an open set? Let $f: X\times_S Y \to X$ be the projection morphism in the definition of the fiber product, and let $U\subset X\times_S Y$ be an open set. Does $f(U)$ contain a non-empty open set of $X$? I know this can be reduced to the affine case. If it is not true in general, can we save it by adding extra conditions, such as: $X,Y$ are noetherian, integral, etc.? The problem comes from the attempt to prove: when $X$ is a noetherian integral separated scheme which is regular in codimension one, then $X\times_{\operatorname{Spec}\mathbb{Z}}\operatorname{Spec}(\mathbb{Z}[t])$ is also integral.
Unfortunately, it is completely false that if $U\subset X\times_{S} Y$ is open, then $f(U)$ contains a non-empty open subset of $X$. Here is why. Take $X=S$ and for $X\to S$ just take the identity $X=S \to S$. Then the morphism $f: X\times_{S} Y \to X$ is just the initially given arbitrary morphism $u: Y\to S$, for which there is absolutely no reason that the image of an open subset should contain a non-empty open subset. For a completely concrete example, embed a point in a line: $$u: Y=Spec(k) \to S=Spec(k[T])=\mathbb A^1_k : (0) \mapsto (T)$$. The good news Fortunately your initial question is not affected at all by what precedes: If a scheme $X$ is integral, so is $X\times_{Spec(\mathbb Z)} Spec(\mathbb Z[T])$ This result boils down to the fact that if $D$ is a domain, then the polynomial ring $D[T]$ is a domain too: indeed, constructing the product can be done locally on open affines of $X$ . Note joyfully that absolutely no hypothesis of noetherianness, regularity, ... is required of $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Fatou's Lemma and Almost Sure Convergence (Pt. 1) I have a question regarding Fatou's Lemma and a sequence of random variables converging almost surely. Fatou's Lemma states If $\forall n \in \mathbb{N}, \,\, X_{n} \ge 0$ and $\displaystyle X = \liminf_{n \rightarrow \infty} X_{n}$, then $\displaystyle\mathbb{E}[ \liminf_{n \rightarrow \infty}\: X_{n}] \le \liminf_{n \rightarrow \infty}\: \mathbb{E}[ X_{n}]$ Suppose we also know that $X_{n} \rightarrow X$ almost surely. How can we connect this to the requirements of Fatou's Lemma? It seems to me that the Lemma asks for pointwise convergence, a wholly different beast.
Suppose you know that $X_n \to X$ a.s. To avoid confusion, let's write $Y$ for $\liminf X_n$. Since for a convergent sequence, the limit and liminf are equal, we have $X = Y$ a.s., so $E[X] = E[Y]$, and by Fatou $E[Y] \le \liminf E[X_n]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Gandalf's adventure (simple vector algebra) So, I found the correct answer to this homework question, but I was hoping there was an easier way to find the same answer. Here's the question: Gandalf the Grey started in the Forest of Mirkwood at a point with coordinates $(-2, 1)$ and arrived in the Iron Hills at the point with coordinates $(-1, 6)$. If he began walking in the direction of the vector $\bf v = 5 \mathbf{I} + 1 \mathbf{J}$ and changes direction only once, when he turns at a right angle, what are the coordinates of the point where he makes the turn. The answer is $((-1/13), (18/13))$. Now, I know that the dot product of two perpendicular vectors is $0$, and the sum of the two intermediate vectors must equal $\langle 1, 5\rangle$. Also, the tutor solved the problem by using a vector-line formula which had a point, then a vector multiplied by a scalar. I'm looking for the easiest and most intuitive way to solved this problem. Any help is greatly appreciated! I'll respond as quickly as I can.
The best way I can see it is to write an equation for the line of Gandalf's path $p$. Since he walks in the $\langle 5,1\rangle$ direction, such a path will have slope $1/5$ in $\mathbb{R}^2$. Since he starts at $(-2,1)$, $p$ can be described by $$ p=\frac{1}{5}x+\frac{7}{5}. $$ To find the turning point, you want to project $(-1,6)$ onto the path, since the turning point is supposed to be at a right angle. I always have a hard time recalling the projection formulas, so an easy way to remember is that this projection must have slope $-5$ (in order to be perpendicular), and since it passes through $(-1,6)$ has equation $$ p'=-5x+1. $$ The turning point is now just the intersection. Calculating, $$ \frac{1}{5}x+\frac{7}{5}=-5x+1\implies x=\frac{-1}{13},\ y=\frac{18}{13}. $$
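The two-line intersection above can be carried out with exact rational arithmetic; a short sketch (variable names are mine) that also double-checks the right angle:

```python
from fractions import Fraction

# Path 1: through (-2, 1) with direction <5, 1>  =>  slope 1/5, y = x/5 + 7/5
# Path 2: through (-1, 6), perpendicular to it   =>  slope -5,  y = -5x + 1
m1, b1 = Fraction(1, 5), Fraction(7, 5)
m2, b2 = Fraction(-5), Fraction(1)

x = (b2 - b1) / (m1 - m2)   # solve m1*x + b1 = m2*x + b2
y = m1 * x + b1

# Dot product of (turn - start) with (end - turn); 0 means a right angle.
dot = (x - (-2)) * ((-1) - x) + (y - 1) * (6 - y)
```

Using `Fraction` avoids any floating-point doubt about the answer $(-1/13,\ 18/13)$.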
{ "language": "en", "url": "https://math.stackexchange.com/questions/64021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Is $M^2-[M]$ a local martingale when $M$ is a local martingale? I've learned that for each continuous local martingale $M$, there's a unique continuous adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a continuous local martingale. For a local martingale $M$, is there an adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a local martingale? (i.e. Do we have an analogous result for discontinuous local martingales?) Thank you. (The notes I have only consider the continuous case. I tried to adapt the argument, but ran into various problems...)
Yes this is a consequence of Doob-Meyer decomposition theorem I think. For a reference you can also look at Philip Protter's book "Stochastic Integration and Differential Equations" Another web-reference is George Lowther's blog but I think Doob-Meyer decomposition is not yet proved. Regards
{ "language": "en", "url": "https://math.stackexchange.com/questions/64071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
What is the logical operator for but? I saw a sentence like, I am fine but he has flu. Now I have to convert it into logical sentence using logical operators. I do not have any idea what should but be translated to. Please help me out. Thanks
An alternative way of conveying the same information would be to say "I am fine and he has flu.". Often, the word but is used in English to mean and, especially when there is some contrast or conflict between the statements being combined. To determine the logical form of a statement you must think about what the statement means, rather than just translating word by word into symbols.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 2 }
Set of linear transformations How to determine the dimension of {$T\in$ Lin($X,Y$) s.t. $T(A)\subset B$} where $A,B$ are subspaces of finite-dimensional vector spaces $X$ and $Y$? Thanks in advance! P.S. Is there a general way of determining the dimension of a given set of functions? References to examples and/or explicit examples would be appreciated.
Choose bases of $A$ and $B$ and extend them to bases of $X$ and $Y$, respectively. The matrix of $T$ with respect to these bases must have zeros where it would send a basis vector of $A$ to a basis vector outside $B$, and can have arbitrary values everywhere else. Thus the dimension of the space of such matrices, and thus of the linear transformations, is $\dim X\dim Y - \dim A(\dim Y-\dim B)$. [Edit in response to the comment:] Here's an example. Let's say $X$ is the vector space (over the reals) of all polynomials in one variable up to fourth degree, $A$ is the subspace of even polynomials, $Y$ is $\mathbb R^3$ and $B$ is the $x$-$y$-plane. Then $\dim X=5$, $\dim A=3$, $\dim Y=3$ and $\dim B=2$, so $$ \begin{eqnarray} \dim\{T\in\text{Lin}(X,Y)\mid T(A)\subseteq B\} &=& \dim X\dim Y - \dim A(\dim Y-\dim B) \\ &=& 5\cdot3-3\cdot(3-2)\\ &=& 12 \;. \end{eqnarray} $$ For instance, we can choose bases $\{1,x^2,x^4\}$ for $A$ and $\{(1,0,0),(0,1,0)\}$ for $B$ and extend them to bases of $X$ and $Y$ by $x$ and $x^3$ and by $(0,0,1)$, respectively. The functions that map one of the basis vectors of $X$ to one of the basis vectors of $Y$ form a basis of $\text{Lin}(X,Y)$, and there are $\dim X\dim Y = 5\cdot3=15$ such functions. But we can't use those that map an even monomial to $(0,0,1)$, because they don't map $A$ to $B$. There are $\dim A(\dim Y-\dim B)=3\cdot(3-2)=3$ of these, one for each even monomial. All the others we can use, since they either map basis elements that don't belong to $A$, which we don't care about, or they map basis elements of $A$ to basis elements of $B$, which is OK.
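The count of free matrix entries can be sanity-checked by brute force over a small field. A sketch (my own choice of $\mathbb F_2$ and of coordinate subspaces, purely illustrative): over $\mathbb F_2$ the number of maps with $T(A)\subseteq B$ should be $2^d$, where $d$ is the dimension given by the formula.

```python
from itertools import product

def dim_constrained_maps(dim_X, dim_Y, dim_A, dim_B):
    """dim {T in Lin(X, Y) : T(A) subset of B} = dimX*dimY - dimA*(dimY - dimB)."""
    return dim_X * dim_Y - dim_A * (dim_Y - dim_B)

# Brute force over F_2: X = F_2^3 with A = span(e1, e2), Y = F_2^2 with B = span(e1).
def maps_A_into_B(M):
    """M is a 2x3 matrix over F_2 (tuple of rows); check M(A) lies in B."""
    for a in product([0, 1], repeat=2):          # coordinates of a vector in A
        if sum(M[1][j] * a[j] for j in range(2)) % 2 != 0:
            return False                          # image has a component outside B
    return True

count = sum(maps_A_into_B((r0, r1))
            for r0 in product([0, 1], repeat=3)
            for r1 in product([0, 1], repeat=3))
```

Here the formula predicts $d = 3\cdot 2 - 2\cdot(2-1) = 4$, so the enumeration should find $2^4 = 16$ matrices.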
{ "language": "en", "url": "https://math.stackexchange.com/questions/64178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Differential Equations I have to solve this differential equation: $$\left\{\begin{aligned}\frac{dy}{dx} + y &= f(x) \\ y (0)&=0. \end{aligned}\right.$$ where $f (x) = \begin{cases}2 & \text{if } 0 \leq x < 1 \\ 0 &\text{if }x \geq 1.\end{cases}$ Please explain how to solve it, as the right-hand side $f$ is a discontinuous function.
Multiply by the integrating factor $e^x$ and then you can factor the LHS: $$(e^xy)'=e^xf(x).$$ Now integrate from $0$ to $x$ to get $$e^xy(x)-e^0y(0)=\int_0^xe^uf(u)du$$ but remember $y(0)=0$. Now divide by the integrating factor and you have $y(x)=e^{-x}\int_0^xe^uf(u)du$. We can evaluate this by (a) looking at $x\in[0,1)$ and then (b) looking at $x\ge1$ for a piecewised defined solution $y$ (note: in the latter case you will have to split $\int_0^x$ into $\int_0^1+\int_1^x$ to substitute for $f$).
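Carrying out the two integrals gives an explicit piecewise solution, $y(x)=2(1-e^{-x})$ on $[0,1)$ and $y(x)=2(e-1)e^{-x}$ for $x\ge 1$, which can be checked numerically. A sketch (function names and step size are mine):

```python
import math

def f(x):
    return 2.0 if 0 <= x < 1 else 0.0

def y(x):
    """Solution of y' + y = f, y(0) = 0, obtained from the integrating factor."""
    if x < 1:
        return 2 * (1 - math.exp(-x))
    return 2 * (math.e - 1) * math.exp(-x)

def residual(x, h=1e-6):
    """Central-difference check of y' + y - f(x), away from the kink at x = 1."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy + y(x) - f(x)
```

Note that $y$ is continuous at $x=1$ (both branches give $2(1-1/e)$), even though $y'$ jumps there because $f$ does.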
{ "language": "en", "url": "https://math.stackexchange.com/questions/64251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to interpret the product rule with multiple sets? Having two sets like $S_1 = \{ A, B, C \}$ $S_2 = \{ X, Y\}$ is fairly simple to understand. You can join every item from $S_1$ with every item from $S_2$ to get the possible combinations. There are $3$ of them in $S_1$ and $2$ of them in $S_2$, so $3$ with X and $3$ with Y. So $3 + 3$ combinations, $3$ times $2 = 6$. $S_{combinations} = \{ AX, BX, CX, AY, BY, CY \} $ Am I thinking about it the right way thus far? If so, what happens when you add more sets? Say... $S_1 = \{ A, B, C \}$ $S_2 = \{ X, Y\}$ $S_3 = \{ M, N, O\}$ The only way I could wrap my head around that is to combine the first two sets: $S_{1 x 2} = \{ AX, BX, CX, AY, BY, CY \} $ and then combine it with the $S_3$. Is there a better way to think about multiple sets, should I just try to trust the product rule based on its "good behavior"?
Your way of thinking about this is perfectly fine. Another approach is the following: Draw the elements of your three sets in three different parts of the page. Then each choice of an element from each set corresponds to a triangle having one vertex in each part of the page. Now how many such triangles are there? I think you will be able to see that it is equal to the product of the sizes of the three sets. Can you see now what the situation looks like for an arbitrary finite number of sets?
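The "combine the first two sets, then combine the result with the next set" picture is exactly what an iterated Cartesian product computes. A quick sketch in Python (not from the original):

```python
from itertools import product

S1 = ["A", "B", "C"]
S2 = ["X", "Y"]
S3 = ["M", "N", "O"]

# Two sets: every element of S1 paired with every element of S2.
pairs = ["".join(t) for t in product(S1, S2)]

# Three sets: itertools.product handles any number of sets at once.
triples = ["".join(t) for t in product(S1, S2, S3)]
```

The counts multiply: $3\cdot 2 = 6$ pairs and $3\cdot 2\cdot 3 = 18$ triples, matching the product rule.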
{ "language": "en", "url": "https://math.stackexchange.com/questions/64304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Precalculus Project decision OK so I have to do a research paper/presentation on an experiment/project that relates to my precalculus class. Only problem is that I was given no topics to choose from and I couldn't find any real good ones online. Can anybody give me some good ideas/topics that I can do? (P.S. if its fun then that's a plus :D)
I have no idea what does or does not relate to your precalc class. But I hope that the construction of 3d figures as stacked 2d images fits, because I think it's very beautiful. Depending on the things that you do in your class, these shapes might be different. But I think they're beautiful and fun. If you're very careful, you can even approximate certain volumes by adding up the weights of the pieces of paper (or whatever material), which suggests some deep things in math. Like calculus, in a way. Or perhaps I'm completely off mark - just an idea.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Birthday attack/problem, calculate exact numbers? Possible Duplicate: Birthday-coverage problem An example of what I wish to do is the following: https://stackoverflow.com/questions/4681913/substr-md5-collision/4785456#4785456 How would I calculate how many people would be required, as in the link above, to reach 50% or 0.001% or n% probability of collision exactly? I am able to calculate the likelihood of a collision in say a hash, with $1-e^{\frac{-n^2}{2\cdot 10^6}}$, where $10^6$ corresponds to six numerical digits from zero to nine. However, I would have to guess a lot of times before I got the exact number of people it would take to reach exactly 50%, which may be a fraction (i.e. 20.2 people). How would I be able to find this?
I'm somewhat confused by the question because it contains the word "exact" four times but you suggest to calculate the probability of a collision using a relatively simple approximation. For this answer, I'll assume that you're aware that there are better approximations for this probability, and of the various answers you get by searching for "birthday" on this site, and that your question is only about calculating $n$ given $1-\exp(-n^2/(2k))$. This you can do by solving for $n$ as follows: $$ p=1-\mathrm e^{-n^2/2k}\;, $$ $$ \mathrm e^{-n^2/2k}=1-p\;, $$ $$ -\frac{n^2}{2k}=\log(1-p)\;, $$ $$ n^2=-2k\log(1-p)\;, $$ \begin{eqnarray} n&=&\sqrt{-2k\log(1-p)}\\ &=&\sqrt{2k\log\left(\frac{1}{1-p}\right)}\;, \end{eqnarray} where $\log$ is the natural logarithm, i. e. the logarithm to base $\mathrm e$.
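For instance, for a hash truncated to six decimal digits ($k = 10^6$ possible values) and $p = 0.5$, the formula above gives roughly $1177.4$ people. A short sketch (the function name is mine):

```python
import math

def n_for_probability(p, k):
    """The (possibly fractional) n solving 1 - exp(-n^2 / (2k)) = p."""
    return math.sqrt(2 * k * math.log(1 / (1 - p)))

n_half = n_for_probability(0.5, 10**6)
```

By construction, plugging `n_half` back into $1-e^{-n^2/(2k)}$ recovers $p$ exactly (up to floating-point error), which is a useful self-check.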
{ "language": "en", "url": "https://math.stackexchange.com/questions/64377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
On Dilworth's theorem Dilworth's Theorem on Posets states that if $P$ is a poset and $w(P)$ is the maximum cardinality of antichains in $P$ then there exists a decomposition of $P$ of size $w(P)$. The question is, why is this theorem not trivial? Consider that there is a whole paper in the Annals of Mathematics devoted to it: Dilworth, Robert P. (1950), "A Decomposition Theorem for Partially Ordered Sets", Annals of Mathematics 51 (1): 161–166, doi:10.2307/1969503.
If Dilworth’s theorem just said that every poset $P$ has a decomposition of size $w(P)$, where $w(P)$ is the maximum size of an antichain in $P$, it would indeed be trivial, but Dilworth’s theorem is actually a much stronger statement. It says that if $w(P)$ is finite, then $w(P)$ is equal to the minimum size of any partition of $P$ into chains. The requirement that each member of the partition be a chain is what makes the theorem non-trivial. It isn’t enormously difficult $-$ there are nice short proofs by H. Tverberg (On Dilworth’s decomposition theorem for partially ordered sets, J. Combinatorial Theory 3 (1967), 305-306) and Fred Galvin (A proof of Dilworth’s chain decomposition theorem, Amer. Math. Monthly 101 (1994), 352-353) $-$ but it’s definitely not trivial.
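The non-trivial statement — the maximum antichain size equals the minimum number of chains in a chain partition — can be verified by brute force on a small example, say $\{1,\dots,6\}$ ordered by divisibility (my own illustrative choice; the exponential search is fine at this size):

```python
from itertools import combinations

P = tuple(range(1, 7))

def leq(a, b):
    return b % a == 0          # divisibility order

def is_chain(s):
    return all(leq(a, b) or leq(b, a) for a, b in combinations(s, 2))

def is_antichain(s):
    return all(not leq(a, b) and not leq(b, a) for a, b in combinations(s, 2))

width = max(len(s) for r in range(1, len(P) + 1)
            for s in combinations(P, r) if is_antichain(s))

def min_chain_cover(elems):
    """Minimum number of chains partitioning elems (naive recursion)."""
    if not elems:
        return 0
    x, rest = elems[0], elems[1:]
    best = len(elems)
    for r in range(len(rest) + 1):          # choose the rest of x's chain
        for extra in combinations(rest, r):
            if is_chain((x,) + extra):
                remaining = tuple(e for e in rest if e not in extra)
                best = min(best, 1 + min_chain_cover(remaining))
    return best

cover = min_chain_cover(P)
```

Here a maximum antichain is $\{4,5,6\}$ (size 3) and a minimum chain partition is $\{1,2,4\},\{3,6\},\{5\}$ (3 chains), so the two invariants agree, as Dilworth's theorem guarantees.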
{ "language": "en", "url": "https://math.stackexchange.com/questions/64427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is lambda calculus named after that specific Greek letter? Why not “rho calculus”, for example? Where does the choice of the Greek letter $\lambda$ in the name of “lambda calculus” come from? Why isn't it, for example, “rho calculus”?
The symbol “λ” is used for one of the two basic constructions in the system introduced by Alonzo Church, namely abstraction. The notation did not just happen to be chosen: it was meant to distinguish Church's abstraction from a similar construction of Whitehead and Russell, written “xˆ.” For his new system, Church initially used “∧x,” then changed it to “λx” to ease printing, apparently interpreting the former symbol as the capital Greek letter “Λ.” See “History of λ-calculus and Combinatory Logic” by J. R. Hindley and F. Cardone (Handbook of the History of Logic, 5: 723–817, Elsevier, 2009).
{ "language": "en", "url": "https://math.stackexchange.com/questions/64468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 4, "answer_id": 1 }
Number of ways to arrange $3$ rocks in $6$ boxes Possible Duplicate: Unique ways to keep N balls into K Boxes? This question may sound stupid, but we really can't figure it out. We have 3 rocks and 6 boxes. All the rocks have to be in the boxes. The rocks can be all in one box or spread out. How many unique combinations can we possibly have? I made a spreadsheet of this https://docs.google.com/spreadsheet/ccc?key=0AjBAKweB5syRdDFVeE5qNVJqT3F5RE9heERvYVBWdnc&hl=en_US i came up with 53. Am I missing some? please let me know. The question to this problem is what's the equation. How do we come up with this number without the spreadsheet so I can apply it to another set of questions.
It all depends on what you count as a different pattern. If each rock and each box count as different you can put each rock in one of six boxes so you get $6^3$. The number of ways of having each rock in a different box is $6\times 5 \times 4$, so the probability is $\frac{5}{9}$. If the rocks all look the same and the boxes look the same there are $3$ ways: three boxes with one rock each and three empty; a box with two, a box with one and four with none; or a box with three and five with none. The probability of having each rock in a different box becomes $\frac{1}{3}$ if each pattern is equally likely. Or perhaps the boxes look the same and the rocks different, or the boxes are different but the rocks look the same, and you have two more possible answers. You seem to want the last of these, and it is not difficult, so why not show us what you think the answer might be.
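Since the last case (identical rocks, distinct boxes) seems to be the intended one, here is a hedged enumeration sketch (my code, not from the original) comparing the two main counts; it suggests the spreadsheet total of 53 is a few arrangements short:

```python
from itertools import combinations_with_replacement, product

BOXES, ROCKS = 6, 3

# Distinct rocks, distinct boxes: each rock independently chooses a box.
distinct_both = len(list(product(range(BOXES), repeat=ROCKS)))

# Identical rocks, distinct boxes: only the multiset of chosen boxes matters.
identical_rocks = len(list(combinations_with_replacement(range(BOXES), ROCKS)))
```

The second count is the stars-and-bars value $\binom{6+3-1}{3} = 56$, while the first is $6^3 = 216$.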
{ "language": "en", "url": "https://math.stackexchange.com/questions/64579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that $\sqrt 3$ is an irrational number? Possible Duplicate: $a^{1/2}$ is either an integer or an irrational number I know how to prove $\sqrt 2$ is an irrational number. Can someone tell me why $\sqrt 3$ is an irrational number?
If you follow through the usual proof for $\sqrt{2}$ substituting $3$ for $2$, it goes through just fine. Let $\sqrt{3}=\frac{p}{q}, p,q $ relatively prime. $3=\frac{p^2}{q^2}$, so $3$ divides $p$ and so on.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 0 }
$\frac{a}{b} = \frac{c}{d} $ if $ad = cb$, how to intuitively understand this? This works if you multiply both sides with $bd$ and cancel stuff out... But how does it work? When I look at it, I would never guess something like that is valid without resorting to the established arithmetic rules. Maybe this is a nonsense question, should these things be analyzed in such a way or just accepted from the arithmetic rules? Since, that's the reason humanity has developed mathematics. To simplify and abstract things which would otherwise be out of the reach of our mind. Just a simple example, so, please... Set me straight. Thanks!
If $\frac{a}{b}$ represents "the solution to the equation $bx=a$", then saying that $\frac{c}{d}=\frac{a}{b}$ means that any solution to $bx=a$ is a solution to $dy=c$, and vice-versa. So if $x$ is a solution to $bx=a$, then multiplying by $d$ we have $ad = dbx = b(dx)$. But since $x$ is also a solution to $dy=c$, that means that $dx=c$, so $ad=b(dx) = bc$. So if $\frac{a}{b}=\frac{c}{d}$, then $ad=bc$. Conversely, if $ad=bc$, and $x$ is a solution to $bx=a$, then it is also a solution to $dbx = da=bc$. Since $b\neq 0$, $dbx = bc$ if and only if $dx=c$, so $x$ is a solution to $bx=a$ if and only if it is a solution to $cy=d$. In short, the equations $bx=a$ and $cy=d$, with $a,b,c,d$ integers, $b$ and $d$ nonzero, have the same solution if and only if $ad=bc$. So if $\frac{r}{s}$ for integers $r,s$, $s\neq 0$, represents "the solution to $sx=r$", then for integers $a,b,c,d$, $b\neq 0$, $d\neq 0$, $$\frac{a}{b}=\frac{c}{d}\text{ if and only if }ad=bc.$$
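The equivalence can also be tested exhaustively on small integers, comparing exact rational equality against the cross-multiplication criterion. A sketch (names are mine):

```python
from fractions import Fraction
from itertools import product

def cross_equal(a, b, c, d):
    """The cross-multiplication test: a/b = c/d iff ad = cb (b, d nonzero)."""
    return a * d == c * b

# Check agreement with exact Fraction equality over a small grid.
agree = all(
    (Fraction(a, b) == Fraction(c, d)) == cross_equal(a, b, c, d)
    for a, b, c, d in product(range(-4, 5), repeat=4)
    if b != 0 and d != 0
)
```

`Fraction` reduces to lowest terms internally, so this is an independent confirmation that the two notions of equality coincide, negative denominators included.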
{ "language": "en", "url": "https://math.stackexchange.com/questions/64693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 0 }
Gradient of an integral Let's suppose that we have a three dimensional function $f(\vec{x})$ which is the integral of some another function $g(\vec{x},\vec{y})$, i.e $f(\vec{x})=\int_{\mathbb{R}^3}g(\vec{x},\vec{y})d^3 \vec{y}$ What is the gradient of the $f(\vec{x})$? Can the operator pass inside the integral? $\nabla f(\vec{x})=\nabla_x\int_{\mathbb{R}^3}g(\vec{x},\vec{y})d^3 \vec{y}=\int_{\mathbb{R}^3}\left[\nabla_x g(\vec{x},\vec{y})\right]d^3 \vec{y}$ The quantity $\nabla g(\vec{x},\vec{y})$ is a vector and it doesn't make sense to me integrating a vector. In the case of the Laplacian operator $\nabla^2$ can it pass inside the integral? Edit: The question was inspired from a physics problem where we have a potential $V(\textbf{x})=-\int_{\mathbb{R}^3}\frac{G}{|\textbf{x}-\textbf{y}|}\rho(\textbf{y})d^3\textbf{y}$ and we take a gradient to find the accelaration: $g(\textbf{x})=-\nabla V(\textbf{x})=\nabla_x\int_{\mathbb{R}^3}\frac{G}{|\textbf{x}-\textbf{y}|}\rho(\textbf{y})d^3\textbf{y}$.
The operator $\nabla$ can be passed inside the integral if some suitable conditions on $g$ are fulfilled. There are appropriate theorems on differentiating integrals with respect to a parameter. It can be done with the potential $V\;$ if the function $\rho$ (say) is bounded in $\mathbb R^3$ and has bounded support, since in this case the integral $\int_{\mathbb{R}^3}\nabla{\!\!}_{x}\frac{G}{|\textbf{x}-\textbf{y}|}\rho(\textbf{y})d^3\textbf{y}$ will converge absolutely and uniformly. The Laplace operator cannot be put inside the integral because it would mean that $\Delta V(x)\equiv0$, while for smooth enough $\rho$ actually $\Delta V(x)=\rho(x)$. The theorem applied above doesn't work because the integral $\int_{\mathbb{R}^3}\left|\Delta \frac{G}{|\textbf{x}-\textbf{y}|}\rho(\textbf{y})\right| d^3\textbf{y}$ may diverge. The expression $\left|\Delta \frac{1}{|\textbf{x}-\textbf{y}|}\right|$ has a non-integrable singularity at $y=x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
What is the significance of the three nonzero requirements in the $\varepsilon-\delta$ definition of the limit? What are the consequences of the three nonzero requriments in the definition of the limit: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta>0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ I believe I understand that: * *if $0 = \lvert x-a\rvert$ were allowed the definition would require that $f(x) \approx L$ at $a$ ($\lvert f(a)-L \rvert < \varepsilon$); *if $\varepsilon=0$ and $\lvert f(a)-L \rvert \le \varepsilon$ were allowed the theorem would require that $f(x) = L$ near $a$ (for $0 < \lvert x-a\rvert <\delta$); and *if $\delta=0$ were allowed (and eliminating the tautology by allowing $0 \le \lvert x-a\rvert \le \delta$) the definition would simply apply to any function where $f(a) = L$, regardless of what happened in the neighborhood of $f(a)$. Of course if (2'.) $\varepsilon=0$ were allowed on its own, the theorem would never apply ($\lvert f(a)-L \rvert \nless 0$). What I'm not clear about is [A] the logical consequences of (3'.) allowing $\delta=0$ its own, so that: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta≥0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ and [B] whether allowing both 1. and 2. would be equivalent to requiring continuity?
The question has been answered, but for sorting out the $(2^5 - 1)$ different ways of replacing strict inequalities by weak ones in the definition, the following might help. The condition to be met is more stringent for smaller $\epsilon$. If you allow $\epsilon \geq 0$ there is no need for the $\forall \epsilon > 0$ quantifier, one can just replace $\epsilon$ by $0$ everywhere in the definition. The logical formula will then either express the condition that a function be equal to $L$ on a neighborhood of $a$, or be so strict that no function meets the condition. Assume, then, that the formula begins $\forall \epsilon > 0 \dots \quad$. In that case it makes no difference whether in the final inequality $|f(x)-L|$ is $ < \epsilon$ or $\leq \epsilon$. The condition to be met is less stringent for smaller $\delta$. If $\delta =0$ is allowed then the $\exists \delta \dots$ can be satisfied if and only it is satisfied by $\delta=0$, and one can replace $\delta$ by zero everywhere instead of quantifying over $\delta$. In that case one gets either a condition that is true for every function, or the condition that $f(a)=L$, according to whether $x=a$ is allowed. The requirement that $0 < |x-a| < \delta$ is the one that is most natural to modify. It defines the type of neighborhood of $a$ on which the convergence to $L$ occurs. Here it is a punctured two-sided neighborhood (usually to allow discussion of derivatives where ratios of type 0/0 appear, like $\sin(x)/x$ near $x=a=0$) but allowing $x=a$ gives a definition of continuity, or one might want one-sided limits with $ 0 < x-a < \delta$ or $0 < a - x < \delta$. If $\delta=0$ is permitted then the natural neighborhood to use would be $0 \leq |x-a| \leq \delta$ but this would only lead to a complicated restatement of "$f(a)=L$". Finally, changing the upper bound to $|x-a| \leq \delta$ would not affect anything (except in the useless case where $\delta=0$ is allowed). 
To summarize, allowing $0 \leq |x-a|$ gives a definition of continuity, but changes to any of the other inequalities $\epsilon > 0$, $\delta > 0$, $|x-a| < \delta$ or $|f(x)-L| < \epsilon$ either do not affect the definition, or trivialize it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
In how many ways can one or more of $101$ letters be posted in $101$ letter boxes? $\quad\quad\quad\quad\quad1)10100 \quad\quad 2) 101^{100} \quad\quad 3) 100^{101} \quad\quad 4) 101(101^{101} - 1)/100$ I am not sure where I am going wrong in interpreting this problem, but the obvious thing that came to my mind is to assume the letters and letter boxes are all distinct and apply inclusion-exclusion; however, from the answer options that doesn't seem to be the correct approach for this one. Where exactly am I going wrong?
Hint: It appears you are considering all the letters and all the boxes to be distinct, but you post letters in a given order. I can't get any of the answers to work any other way. Then one letter can be posted to one of $101$ boxes in $101$ ways, two letters can be posted in $101^2$ ways (each letter is independent of the other), and so on. Summing the geometric series gives what?
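The geometric sum can be verified exactly with integer arithmetic; a sketch under the same interpretation (distinct letters, posted in order, each into one of $101$ boxes):

```python
N = 101  # letters and boxes

# One letter: N ways; two letters: N^2 ways; ...; up to N letters: N^N ways.
total = sum(N**k for k in range(1, N + 1))

# Option 4: closed form of the geometric series, 101(101^101 - 1)/100.
closed_form = N * (N**N - 1) // (N - 1)
```

Python's arbitrary-precision integers make this exact despite $101^{101}$ having over 200 digits; a small analogue ($N=3$: $3+9+27=39$) shows the same closed form at a glance.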
{ "language": "en", "url": "https://math.stackexchange.com/questions/64885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Smoothness of a real-valued function on $\mathbb{R}^n $ Let $$ f(x)= \begin{cases} \exp\left(\frac{-1}{1-|x|^2}\right), &\text{ if } |x| < 1, \\ 0, &\text{ if } |x|\geq 1. \end{cases} $$ Prove that $f$ is infinitely differentiable everywhere. ($x$ belongs to $\mathbb{R}^n$ for fixed $n$.) Well, this is obvious for $|x|>1$ and easy enough for the first derivative at $|x|=1$, but I can't seem to use the definition of the Gateaux derivative to show it for $|x|<1$. Any advice would be appreciated. (This is not homework.)
We can show by induction that $$\partial_{\alpha}f(x)=\begin{cases} \frac{P_{\alpha}(x)}{(1-|x|^2)^{2|\alpha|}}\exp\left(\frac 1{|x|^2-1}\right)&\mbox{ if }|x|<1,\\ 0&\mbox{ otherwise}, \end{cases}$$ where $\alpha\in\mathbb N^n$ and $P_{\alpha}$ is a polynomial. It's true for $\alpha=0$, and if $\alpha=e_k$ and $|x|<1$, $$\partial_{e_k}f(x)=-\exp\left(\frac 1{|x|^2-1}\right)\frac{2x_k}{(|x|^2-1)^2},$$ which shows that $f$ is also differentiable at $|x|=1$ and $P_{e_k}(x)=-2x_k$. If we assume that the property is true for $|\alpha|\leq p$, and $|\alpha|=p+1$, then let $k$ be such that $\alpha_k\neq 0$, and put $\alpha'=\alpha-e_k$. Then $|\alpha'|=p$ and for $|x|<1$ we have \begin{align*} \partial_{\alpha}f(x)&=\frac{\partial_{e_k}P_{\alpha'}(x)}{(1-|x|^2)^{2|\alpha'|}}\exp\left(\frac 1{|x|^2-1}\right)+\frac{4|\alpha'|\,x_kP_{\alpha'}(x)}{(1-|x|^2)^{2|\alpha'|+1}}\exp\left(\frac 1{|x|^2-1}\right)\\ &-\exp\left(\frac 1{|x|^2-1}\right)\frac{P_{\alpha'}(x)}{(1-|x|^2)^{2|\alpha'|}}\frac{2x_k}{(1-|x|^2)^2}\\ &=\exp\left(\frac 1{|x|^2-1}\right)\frac 1{(1-|x|^2)^{2|\alpha|}}\Big(\partial_{e_k}P_{\alpha'}(x)(1-|x|^2)^2\\ &+4|\alpha'|\,x_k(1-|x|^2)P_{\alpha'}(x)-2x_kP_{\alpha'}(x)\Big). \end{align*} So we get the induction formula $$P_{\alpha'+e_k}(x)=\partial_{e_k}P_{\alpha'}(x)(1-|x|^2)^2+4|\alpha'|\,x_k(1-|x|^2)P_{\alpha'}(x)-2x_kP_{\alpha'}(x)$$ (consistent with $P_{e_k}(x)=-2x_k$ when $\alpha'=0$), and $\partial_{\alpha}f(x)=0$ when $|x|\geq 1$: as in the first step, the factor $\exp\left(\frac 1{|x|^2-1}\right)$ vanishes faster than any power of $1-|x|^2$ as $|x|\to 1^-$, so each $\partial_{\alpha}f$ extends continuously by $0$ across $|x|=1$.
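In one dimension the first step of the induction, $f'(x)=-\frac{2x}{(1-x^2)^2}f(x)$ for $|x|<1$, can be sanity-checked numerically. A sketch (step size and sample points are arbitrary choices of mine):

```python
import math

def f(x):
    """The 1-D bump-type function: exp(-1/(1-x^2)) inside (-1,1), else 0."""
    return math.exp(-1 / (1 - x * x)) if abs(x) < 1 else 0.0

def df_formula(x):
    """Closed form of the first derivative for |x| < 1; 0 otherwise."""
    return -2 * x / (1 - x * x) ** 2 * f(x) if abs(x) < 1 else 0.0

def df_numeric(x, h=1e-6):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

The central difference at the boundary point $x=1$ comes out exactly $0$ in floating point, because $f$ underflows to $0$ long before $|x|$ reaches $1$ — a numerical shadow of the "flat at the boundary" behavior the induction establishes.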
{ "language": "en", "url": "https://math.stackexchange.com/questions/64948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How to prove that the Binet formula gives the terms of the Fibonacci Sequence? This formula provides the $n$th term in the Fibonacci Sequence, and is defined using the recurrence formula: $u_n = u_{n − 1} + u_{n − 2}$, for $n > 1$, where $u_0 = 0$ and $u_1 = 1$. Show that $$u_n = \frac{(1 + \sqrt{5})^n - (1 - \sqrt{5})^n}{2^n \sqrt{5}}.$$ Please help me with its proof. Thank you.
See here -- the solutions of $a_n=Aa_{n-1}+Ba_{n-2}$ are given by $a_n=C\lambda_1^n+D\lambda_2^n$ if $\lambda_1\neq \lambda_2$, where $C,D$ are constants created by $a_0,a_1$, and $\lambda_1, \lambda_2$ are the solutions of $\lambda^2-A\lambda-B=0$ (the characteristic polynomial), and $a_n=C\lambda^n+Dn\lambda^n$ if $\lambda_1=\lambda_2=\lambda$. $u_n=\frac{1}{\sqrt{5}}\left(\left(\frac{1 + \sqrt{5}}{2}\right)^n - \left(\frac{1 - \sqrt{5}}{2}\right)^n\right)$ In this case, you want $\lambda_1=\frac{1 + \sqrt{5}}{2}$, $\lambda_2=\frac{1 - \sqrt{5}}{2}$, $C,D$ created by $u_0=0$, $u_1=1$. Apply Vieta's formulas. $\lambda_1+\lambda_2=1=A$, $\lambda_1\lambda_2=-1=-B$. The characteristic polynomial is $\lambda^2-\lambda-1=0$. The recurrence relation is $u_n=u_{n-1}+u_{n-2}$ for $n>1$ with $u_0=0$, $u_1=1$. $u_n$ is an integer because $u_0$, $u_1$ are integers and the recurrence relation shows that $u_2=u_1+u_0\in\mathbb Z$, etc. You could use induction here. (I.e., if $u_k$, $u_{k+1}$ are integers for some $k\in\mathbb Z$, $k\ge 0$, then $u_{k+2}=u_{k+1}+u_k$ is also an integer). Furthermore, $u_n$ is the integer closest to $\frac{1}{\sqrt{5}}\left( \frac{1 + \sqrt{5}}{2} \right)^n$ (see this question). To prove this, it's enough to prove that $\left|\frac{1}{\sqrt{5}}\left( \frac{1 - \sqrt{5}}{2} \right)^n\right|<\frac{1}{2}$ and two proofs of that are seen in the linked question (one of them is in the comments there). Similar facts are applicable for Pell's equations. See, e.g., this answer. It's not easily applicable for Fibonacci numbers because $\frac{1}{2}$ isn't an integer, unlike in this sequence.
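Once the closed form is established, it is easy to check against the recurrence (a sketch; double-precision floats are exact enough here for small $n$ thanks to the final rounding):

```python
import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2   # lambda_1
PSI = (1 - SQRT5) / 2   # lambda_2

def binet(n):
    """Binet's formula, rounded to the nearest integer to absorb float error."""
    return round((PHI**n - PSI**n) / SQRT5)

def fib_list(m):
    """First m Fibonacci numbers from the recurrence u_n = u_{n-1} + u_{n-2}."""
    seq = [0, 1]
    while len(seq) < m:
        seq.append(seq[-1] + seq[-2])
    return seq[:m]
```

The check below also confirms the "nearest integer" remark: $|\psi^n|/\sqrt 5 < 1/2$ for all $n \ge 0$, so rounding $\varphi^n/\sqrt 5$ alone would give the same values.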
{ "language": "en", "url": "https://math.stackexchange.com/questions/65011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 7, "answer_id": 3 }
"belongs to" versus "contained in" Let us consider a set $A$, and let $B$ be an element of the set. What I want to know is whether saying "$B$ is contained in $A$" and saying "$B$ belongs to $A$" mean the same thing. Could anyone here cite a context where they do not mean the same?
Paul Halmos in his autobiography reports that he once decided that henceforward he would say "$x$ contains $y$" when he meant "$y$ is a member of $x$" and "$x$ includes $y$" when he meant "$y$ is a subset of $x$". He adhered to that usage fastidiously for 18 months. At the end of that time he drew his conclusions: (1) the practice is harmless, and (2) he didn't think anybody ever noticed. I was inclined to agree with the usage on the grounds that people speak of a family of subsets being "partially ordered by inclusion" but they never say "partially ordered by containment" as far as I know. And as far as I know, "$x$ belongs to $y$" would meant the same thing as "$x$ is a member of $y$". But sometimes people say "$x$ is contained in $y$" when they mean $x$ is a subset of $y$. And sometimes they say the same thing when they mean $x$ is a member of $y$. So always make it clear which meaning you have in mind. Sometimes context is enough for that and probably sometimes it is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Can a group be defined in terms of a relation on a set? Wikipedia defines a group as "an algebraic structure consisting of a set together with an operation that combines any two of its elements to form a third element." I keep thinking that there is a connection to this definition and a relation on a set, but I'm not sure what it is. Obviously, relations and operators are connected. Can groups be defined in terms of sets and relations? I am new to this, and the Wikipedia article is over my head.
A group can be defined as a set with a relation. Note that a relation between sets $X$ and $Y$ is any subset of $X \times Y$. A function from $G \times G$ to $G$ is a relation $f \subseteq (G\times G) \times G$ such that if $((a,b),c) \in f$ and $((a,b),d) \in f$, then $c = d$. Therefore a group is a set $G$ with a relation $*$ which happens to be a function. Moreover, this function satisfies some properties like associativity, etc. Also, as is typical in model theory, you often say a group is a set with a binary function $*$ and a constant $e$, which represents the identity. Again, the constant can still be thought of as a unary relation. You can also define a group to include a symbol for taking an inverse. This can still be thought of as a relation since it is a unary function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Right identity and Right inverse in a semigroup imply it is a group Let $(G, *)$ be a semigroup. Suppose * *$ \exists e \in G$ such that $\forall a \in G,\ ae = a$; *$\forall a \in G, \exists a^{-1} \in G$ such that $aa^{-1} = e$. How can we prove that $(G,*)$ is a group?
I assume that (a) should read $\exists e\in G$ such that $ae=a$, $\forall a\in G$. For each $a \in G$ we have $$\begin{align*} (a^{-1})^{-1}a^{-1} &= e[(a^{-1})^{-1}a^{-1}]\\ &= (aa^{-1})[(a^{-1})^{-1}a^{-1}]\\ &= [(aa^{-1})(a^{-1})^{-1}]a^{-1}\\ &= (a[a^{-1}(a^{-1})^{-1}])a^{-1}\\ &= (ae)a^{-1}\\ &= aa^{-1}. \end{align*}$$ Multiplying $(a^{-1})^{-1}a^{-1} = aa^{-1}$ on the right by $(a^{-1})^{-1}$ yields $$\begin{align*} (a^{-1})^{-1} &= (a^{-1})^{-1}e\\ &= (a^{-1})^{-1}[a^{-1}(a^{-1})^{-1}]\\ &= [(a^{-1})^{-1}a^{-1}](a^{-1})^{-1}\\ &= (aa^{-1})(a^{-1})^{-1}\\ &= a[a^{-1}(a^{-1})^{-1}]\\ &= ae\\ &= a, \end{align*}$$ so $a^{-1}a=e$ for all $a \in G$. Added: The foregoing obviously assumes that $e$ is a left identity, which was not given, and somehow none of us caught it at the time. Here is a corrected argument. For each $a\in G$ we have $$a^{-1}=a^{-1}e=a^{-1}(aa^{-1})=(a^{-1}a)a^{-1}\;,$$ so $$e=a^{-1}(a^{-1})^{-1}=\left((a^{-1}a)a^{-1}\right)(a^{-1})^{-1}=(a^{-1}a)\left(a^{-1}(a^{-1})^{-1}\right)=(a^{-1}a)e=a^{-1}a\;.$$ In other words, $a^{-1}$ is both a left as well as a right inverse for $a$. It follows that $$ea = (aa^{-1})a = a(a^{-1}a) = ae = a\;,$$ so $e$ is a left as well as a right identity for $G$. Now you can use the usual arguments to show that the identity and inverses are unique. (For example, if $e'$ were another identity, we’d have $e = ee' = e'$, because $e$ is a left identity and $e'$ is a right identity.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/65239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 6, "answer_id": 1 }
Real-world uses of Algebraic Structures I am a computer science student, and in discrete mathematics I am learning about algebraic structures, covering concepts like groups, semigroups, etc. Previously I studied graphs, and I can see an excellent real-world application for them; I strongly believe that in the future I can use much of that in my coding of algorithms related to graphics. Could someone tell me about real-world applications for algebraic structures too?
Here's one place to start. The Unreasonable Effectiveness of Number Theory contains the following interesting surveys. Their references should provide good entry points to related literature. • M. R. Schroeder -- The unreasonable effectiveness of number theory in physics, communication, and music • G. E. Andrews -- The reasonable and unreasonable effectiveness of number theory in statistical mechanics • J. C. Lagarias -- Number theory and dynamical systems • G. Marsaglia -- The mathematics of random number generators • V. Pless -- Cyclotomy and cyclic codes • M. D. McIlroy -- Number theory in computer graphics
{ "language": "en", "url": "https://math.stackexchange.com/questions/65300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
Is there error in the answer to Spivak's Calculus, problem 5-3(iv)? I'm puzzled by the answer to a problem for Spivak's Calculus (4E) provided in his Combined Answer Book. Problem 5-3(iv) (p. 108) asks the reader to prove that $\mathop{\lim}\limits_{x \to a} x^{4} =a^{4}$ (for arbitrary $a$) by using some techniques in the text to find a $\delta$ such that $\lvert x^{4} - a^{4} \rvert<\varepsilon$ for all $x$ satisfying $0<\lvert x-a\rvert<\delta$. The answer book begins (p. 67) by using one of these techniques (p. 93) to show that $$\lvert x^{4} - a^{4} \rvert = \lvert (x^{2})^{2} - (a^{2})^{2} \rvert<\varepsilon$$ for $$\lvert x^{2} - a^{2} \rvert <\min \left({\frac{\varepsilon}{2\lvert a^{2}\rvert+1},1}\right) = \delta_{2} .$$ In my answer, I use the same approach to show that $$\lvert x^{2} - a^{2} \rvert <\delta_{2}$$ for $$\lvert x - a \rvert <\min \left({\frac{\delta_{2}}{2\lvert a\rvert+1},1}\right) = \delta_{1} ,$$ so that $$\lvert x^{4} - a^{4} \rvert<\varepsilon$$ when $$\delta = \delta_{1}=\min \left({\frac{\delta_{2}}{2\lvert a\rvert+1},1}\right). \Box$$ But Spivak's answer book has $$\delta =\min \left({\frac{\delta_{1}}{2\lvert a\rvert+1},1}\right),$$ which I believe is an error.
Where you (correctly) iterated the bound twice, it seems that Spivak iterated it three times. This particular $\delta$ shrinks at each iteration, because it satisfies $\delta(\epsilon,a) < \epsilon$ for all $a$. Given that two iterations are enough, three are more than needed, but still logically correct. Without seeing the answer book, it is impossible to determine whether Spivak's extra layer of work is consistent with the methods he gives for this and other problems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Why does this expected value simplify as shown? I was reading about the german tank problem and they say that in a sample of size $k$, from a population of integers from $1,\ldots,N$ the probability that the sample maximum equals $m$ is: $$\frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$ This makes sense. But then they take the expected value of the sample maximum and claim: $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}} = \frac{k(N+1)}{k+1}$$ And I don't quite see how to simplify that summation. I can pull out the denominator and a $\frac{1}{(k-1)!}$ factor and get: $$\mu = \frac{1}{(k-1)!\,\binom{N}{k}} \sum_{m=k}^N m(m-1) \ldots (m-k+1)$$ But I get stuck there...
Alternatively, $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}} = \sum_{m=k}^{N}\frac{m! k!(N-k)!}{N!(k-1)!(m-k)!}$$ $$= k\frac{k! (N-k)!}{N!} \sum_{m=k}^N \binom{m}{m-k} = k\frac{k! (N-k)!}{N!} \binom{N+1}{N-k}$$ $$= k\frac{k! (N-k)! (N+1)}{N! (N-k)! (k+1)!} = k\frac{N+1}{k+1}$$ where $$\sum_{m=k}^N \binom{m}{m-k} = \sum_{i=0}^{N-k} \binom{k+i}{i} = \binom{N+1}{N-k}$$ can be seen from Pascal's Triangle.
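The closed form is easy to confirm by brute-force enumeration for small $N$ and $k$; a Python sketch (names are mine):

```python
from itertools import combinations
from fractions import Fraction

def expected_max(N, k):
    # Average of the maximum over all k-subsets of {1, ..., N}
    subsets = list(combinations(range(1, N + 1), k))
    return Fraction(sum(max(s) for s in subsets), len(subsets))

# Matches k(N+1)/(k+1) exactly
for N in range(2, 9):
    for k in range(1, N + 1):
        assert expected_max(N, k) == Fraction(k * (N + 1), k + 1)
```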
{ "language": "en", "url": "https://math.stackexchange.com/questions/65398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
How can the following be calculated? How can the following series be calculated? $$S=1+(1+2)+(1+2+3)+(1+2+3+4)+\cdots+(1+2+3+4+\cdots+2011)$$
Hints: useful formulas are $$\begin{eqnarray*} \sum_{k=1}^N 1 = N \\ \sum_{k=1}^N k = \frac{N(N+1)}{2}\\ \sum_{k=1}^N k^2 = \frac{N(N+1)(2N+1)}{6}\end{eqnarray*}$$
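Using these hints: the $n$th term of $S$ is $1+2+\cdots+n = \frac{n(n+1)}{2}$, so $S = \frac{1}{2}\left(\sum n^2 + \sum n\right)$ over $n = 1,\ldots,2011$. A quick Python check of this route against direct summation (function names are mine):

```python
def S_direct(N):
    # Brute force: add up 1, (1+2), (1+2+3), ... through the Nth group
    return sum(sum(range(1, n + 1)) for n in range(1, N + 1))

def S_formula(N):
    # n-th group sums to n(n+1)/2, so S = (sum of n^2 + sum of n) / 2,
    # evaluated with the two closed forms from the hints
    sum_n = N * (N + 1) // 2
    sum_n2 = N * (N + 1) * (2 * N + 1) // 6
    return (sum_n2 + sum_n) // 2

assert S_direct(100) == S_formula(100)  # sanity check on a small case
answer = S_formula(2011)
```

The formula route also simplifies algebraically to $\frac{N(N+1)(N+2)}{6}$.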
{ "language": "en", "url": "https://math.stackexchange.com/questions/65465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Interpolation, Extrapolation and Approximations rigorously A foreign book mentioned that "when Lagrange's interpolation formula fails (for example, with a large sample, due to Runge's phenomenon), you should use approximation methods such as the least-squares method." I am confused because I have always thought that interpolation and extrapolation are approximations. My confusion lies in the fact that the book used the three terms as if they were disjoint, while I have considered the first two to be kinds of approximation. For example, can the Lagrange polynomial (also known as Lagrange interpolation) be extrapolation, interpolation and approximation at the same time? I would say yes, and I see no problem with using the Lagrange polynomial to produce extrapolations and approximations (the terms feel fuzzy to me). So how do you define these terms more rigorously?
Interpolation or extrapolation produces an exact formula, not an approximation, for a polynomial that matches given data. However, this might be used as an approximation to the unknown function that produced those data.
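To make the distinction concrete, here is a small pure-Python sketch (function names are my own): the Lagrange interpolant reproduces the data exactly at the nodes, while away from the nodes it only approximates the function that produced them.

```python
def lagrange(points, x):
    # Evaluate the Lagrange interpolating polynomial through `points` at x
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda t: 1.0 / (1.0 + t * t)                        # Runge's example
nodes = [(float(x), f(float(x))) for x in range(-2, 3)]  # 5 equally spaced nodes

# Interpolation: exact (up to rounding) at the nodes...
assert all(abs(lagrange(nodes, xi) - yi) < 1e-12 for xi, yi in nodes)
# ...approximation: only close, not exact, between the nodes
assert abs(lagrange(nodes, 0.5) - f(0.5)) > 1e-3
```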
{ "language": "en", "url": "https://math.stackexchange.com/questions/65532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Trouble counting the number of "ace high" hands in poker I'm trying to count the number of "ace high" hands in a five card poker hand. The solution from my answer key puts the count at 502,860; however, I have an argument for why this number is too high. Please help me understand where my logic is flawed. Instead of coming up with an exact answer for the number of ace high hands I will show an upper bound on the number of ace high hands. First, go through the card deck and remove all four aces leaving a deck of 48 cards. We will use this 48 card deck to form the four card "non ace" part of the "ace high" hand. First, how many ways are there to form any four card hand from a 48 card deck? This is (48 choose 4) = 194,580. Now, not all of these hands when paired with an ace will form an "ace high" hand. For example A Q Q K K would be two pair. In fact, any four card hand with at least two cards of the same rank (e.g. Queen of Spades, Queen of Hearts) will not generate an ace high hand. So let's find the number of such hands and subtract them from 194,580. I believe the number of such hands can be found by first selecting a rank for a pair from these 48 remaining cards, that is, (12 choose 1)--times the number of ways to select two suits for our rank (4 choose 2)--times the number of ways to pick the remaining 2 required cards from 46 remaining cards, that is, (46 choose 2). So, restated, given our 48 card deck we can create a four card hand that contains at least one pair this many ways: (12 choose 1)(4 choose 2) (46 choose 2) = 74,520 [pair rank] [suits of pair] [remaining 2 cards] Thus the number of four card hands that do not include at least one pair is: (48 choose 4) - 74,520 = 120,060 We can pair each of these four card sets with one of our four aces to form the number of five card hands that contain an ace, but not any single pair (or better). This is 120,060 * 4 = 480,240 hands. However, this is already less than 502,860 shown by the key... 
and I haven't even begun to start subtracting out straights. Clearly I have made a mistake, but what is it?
I came to 502,860. An ace-high hand has no pairs, one card is an ace, and the other four cards avoid pairs, straights, and flushes. There are 12 other ranks in the deck, so the number of ordered choices for the four non-ace ranks is $12\cdot11\cdot10\cdot9$; since the order of those four cards doesn't matter, divide by $4! = 24$, which gives $\binom{12}{4}=495$ possible unique ace-high rank sets. Two of those contain straights (AKQJT and A2345), so subtract those 2, leaving 493. Each of the 5 cards can be any of 4 suits, so there are $4^5$ suit assignments for each rank set; 4 of those make all five cards the same suit (a flush), so subtract those 4, leaving $4^5-4 = 1020$. You then multiply $493$ by $1020$ and get 502,860. The formula is $$\left[\binom{12}{4}-2\right]\left(4^5-4\right) = 493\cdot 1020 = 502{,}860.$$
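The final formula can be evaluated in a couple of lines (a Python sketch; the full brute-force check over all $\binom{52}{5}$ hands is omitted for speed):

```python
from math import comb

# [C(12,4) rank sets - 2 straights] * [4^5 suit patterns - 4 flushes]
rank_choices = comb(12, 4) - 2   # 493 four-rank sets with no straight
suit_choices = 4 ** 5 - 4        # 1020 suit assignments with no flush
ace_high = rank_choices * suit_choices
assert ace_high == 502860
```

As a side note on the question above: the count $\binom{12}{1}\binom{4}{2}\binom{46}{2} = 74{,}520$ counts two-pair hands twice (and hands containing trips three times), so $120{,}060$ undercounts the pairless four-card hands; the true pairless count is $\binom{12}{4}4^4 = 126{,}720$, which is why $480{,}240$ came out too low.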
{ "language": "en", "url": "https://math.stackexchange.com/questions/65576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Examples of mapping two sets where if the two sets are isomorphic doesn't imply that mapping is also 1 to 1 I am struggling to come up with an example of two sets $S$ and $T$ and an (onto) mapping $f$ where the fact that $S$ and $T$ are isomorphic does not imply that $f$ is also 1-to-1. If possible, could you also give an example in which the fact that they are isomorphic would imply that $f$ is 1-to-1? Thank you!
Consider the map $f: \mathbb{N} \to \mathbb{N}$ given by $$\begin{aligned} 1 &\mapsto 1,\\ 2 &\mapsto 1,\\ 3 &\mapsto 2,\\ 4 &\mapsto 3, \\ 5 &\mapsto 4,\\ & \ \ \vdots\end{aligned}$$ ie $$f(n) = \begin{cases} 1 &\text{ if } n =1,\\ n - 1 &\text{ otherwise.} \end{cases} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/65684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
bounds for Fourier coefficients of non-holomorphic automorphic forms of weight 2 Are there any results about the bounds for Fourier coefficients of non-holomorphic automorphic forms of weight 2? More precisely, let $k$ be a positive integer and $m=4/k$. Write \begin{equation*} \sum\limits_{n=1}^{\infty}a_nq^n=\eta(k\tau)^2\eta(2k\tau)^{1+m}\eta(4k\tau)^{3-3m}\eta(8k\tau)^{2m-2}, \end{equation*} where $\eta(\tau)=q^{1/24}\prod\limits_{n=1}^{\infty}(1-q^n)$ is the Dedekind eta function with $q=e^{2\pi i\tau}$ and $Im \tau>0$. I want to know the bound for $a_{k^2+k+1}$.
Non-holomorphic automorphic forms are called Maass forms. There are nontrivial bounds available, and an analogue of the Ramanujan conjecture for modular forms is expected; this goes under the name Ramanujan-Petersson conjecture. The state of the art is summarized in the introduction of: Blomer, Brumley, The role of the Ramanujan conjecture in analytic number theory, Bulletin AMS 50 (2013), 267-320.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove $\log n \leq \sqrt n$ over natural numbers? It seems like $$\log n \leq \sqrt n \quad \forall n \in \mathbb{N} .$$ I've tried to prove this by induction where I use $$ \log p + \log q \leq \sqrt p \sqrt q $$ when $n=pq$, but this fails for prime numbers. Does anyone know a proof?
That's the same as $n \le e^{\sqrt n}$ or $n^2 \le e^n$. If we allow the power series for $e^x$, $e^n > n^3/6$ so $e^n > n^2$ for $n \ge 6$. If we don't allow the power series, we can instead prove by induction that $n^2 < 2^n$ (which, of course, is better) for $n \ge 5$: True for $n = 5$; if true for $n \ge 5$, $$\frac{(n+1)^2}{2^{n+1}} = \frac{n^2}{2^n}\frac{(1+1/n)^2}{2} \le (6/5)^2/2 = 36/50 < 1.$$
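A quick numerical spot-check of both the inequality in the question and the stronger bound used in the induction:

```python
from math import log, sqrt

# log here is the natural logarithm, as in the question
assert all(log(n) <= sqrt(n) for n in range(1, 5000))
# The stronger bound n^2 < 2^n from the induction, valid for n >= 5
assert all(n * n < 2 ** n for n in range(5, 5000))
```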
{ "language": "en", "url": "https://math.stackexchange.com/questions/65793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 3 }
What is the gravitational force imposed by a rectilinear 2d body? I'm starting with the simplest problem I can relate to mine: the force imposed on a point mass at the origin by a rectangle that is orthogonal to the $x$- and $y$-axis, stretching from $(x_1, y_1)$ to $(x_2, y_2)$. I'm using Newton's formula, but currently ignoring the mass and density. I start off doing something like $\displaystyle\int\nolimits^{y_2}_{y_1}\int\nolimits^{x_2}_{x_1}\frac{1}{x^2 + y^2}dxdy$ Where $x^2+y^2$ equals the squared distance between $(x,y)$ and the origin. This leads me to the following integral: $\displaystyle\int^{y_2}_{y_1}(\frac{1}{y}\arctan\frac{x_2}{y}) - (\frac{1}{y}\arctan\frac{x_1}{y})dy$ Is there a better way? I can't find any way to solve this.
I'm elaborating my comment. Consider the rectangle $R:=[a,b]\times[c,d]$ with $0<a<b$ and $0<c<d$. The $x$-component $F_1$ of the force exerted by the rectangle at the origin is given by $$F_1=\int\nolimits_R{x\over(x^2+y^2)^{3/2}} \> {\rm d}(x,y)=\int_c^d\int_a^b {x\over(x^2+y^2)^{3/2}} \>dx\>dy\ .$$ Here the inner integral has the value $${-1\over (x^2+y^2)^{1/2}}\Biggr|_a^b ={1\over\sqrt{a^2+y^2}}-{1\over\sqrt{b^2+y^2}}\ .$$ Now the outer integral can be expressed in terms of $\ {\rm arsinh}{y\over a}\ $ resp. $\ {\rm arsinh}{y\over b}$ for $y=c$ and $y=d$. I leave the details to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Combinatorics-number of permutations of $m$ A's and at most $n$ B's Prove that the number of permutations of $m$ A's and at most $n$ B's equals $\dbinom{m+n+1}{m+1}$. I'm not sure how to even start this problem.
Here's a combinatorial solution. Take a string $\alpha$ containing exactly $m+1$ A's and $n$ B's. Identify the final occurrence of $A$ in $\alpha$. Form the string $\beta$ by deleting this final occurrence of $A$ and all the characters that follow it (the characters following it must all be $B$). It is clear that $\beta$ contains exactly $m$ A's and at most $n$ B's. Further, the strings $\alpha$ and $\beta$ are in one-to-one correspondence with each other. (We already described how to obtain $\beta$ from $\alpha$. To write down $\alpha$ given $\beta$, concatenate $\beta$ with an $A$ followed by enough $B$'s to make the total number of $B$'s exactly equal to $n$.) Thus the number of permutations with $m$ A's and at most $n$ B's is equal to the number of permutations with exactly $m+1$ A's and exactly $n$ B's, which is clearly $\binom{m+n+1}{m+1}$.
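For small $m$ and $n$ the count can be checked numerically, summing over the possible number $b \le n$ of B's (a Python sketch, names mine):

```python
from math import comb

def count_strings(m, n):
    # Strings over {A, B} with exactly m A's and at most n B's:
    # for each b <= n there are C(m + b, m) arrangements
    return sum(comb(m + b, m) for b in range(n + 1))

# Agrees with the closed form C(m + n + 1, m + 1)
for m in range(6):
    for n in range(6):
        assert count_strings(m, n) == comb(m + n + 1, m + 1)
```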
{ "language": "en", "url": "https://math.stackexchange.com/questions/65947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that the center of a group is a normal subgroup Let $G$ be a group. We define $H=\{h\in G\mid \forall g\in G: hg=gh\},$ the center of $G$. Prove that $H$ is a (normal) subgroup of $G$.
I have two different solutions. These are probably too fancy for this problem, but they might be interesting. For the first solution, define the map $f: G \rightarrow \operatorname{Aut}(G)$ by $g \mapsto \phi_g$, where $\phi_g(x) = gxg^{-1}$ for all $x \in G$. The map $f$ is a homomorphism and $\operatorname{Ker}(f) = Z(G)$. Thus $Z(G)$ is a normal subgroup since the kernel of a homomorphism is always a normal subgroup. The image $\operatorname{Im}(f)$ is called the inner automorphism group of $G$ and is denoted $\operatorname{Inn}(G)$. For the second solution, recall that for a subgroup $H \leq G$ the normal core of $H$ in $G$ is defined as $$\operatorname{core}_G(H) = \bigcap_{g \in G} gHg^{-1}$$ The subgroup $\operatorname{core}_G(H)$ is always a normal subgroup. This can be seen directly or by noticing that it is the kernel of the coset action induced by $H$. For each conjugacy class $C$ of $G$, fix a representative $t_C \in C$. Let $\mathscr{C}$ be the family of all conjugacy classes of $G$. Then \begin{align*} Z(G) &= \bigcap_{g \in G} C_G(g) \\ &= \bigcap_{C \in \mathscr{C}} \bigcap_{t \in C} C_G(t) \\ &= \bigcap_{C \in \mathscr{C}} \bigcap_{g \in G} C_G(gt_Cg^{-1}) \\ &= \bigcap_{C \in \mathscr{C}} \bigcap_{g \in G} gC_G(t_C)g^{-1} \\ &= \bigcap_{C \in \mathscr{C}} \operatorname{core}_G(C_G(t_C)) \\ \end{align*} Since $Z(G)$ is the intersection of normal subgroups, it is a normal subgroup.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
$\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$? Viewing $\mathbb{Z}$ and $\mathbb{Q}$ as additive groups, I have an idea to show that $\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$. You can take $a/n+\mathbb{Z}$ where $(a,n)=1$, and this element has order $n$. Why would such an element exist in any subgroup $H$ of order $n$? If not, you could reduce every representative, and then every element would have order less than $n$, but where is the contradiction?
Let $x$ be a real number in the open interval $(0,1)$. If $nx$ is an integer $k$ for some positive integer $n$, we have that $x = k/n$. You see that $k=1,\ldots,n-1$. I'll leave the rest to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
For a random permutation, what's the probability that each half of the elements keep relative order? If we take a random permutation of a sequence of $2k$ elements, $X_1, X_2, \ldots, X_k, X_{k+1}, \ldots, X_{2k}$, what's the probability that $X_1, X_2, \ldots, X_k$ and $X_{k+1}, \ldots, X_{2k}$ both keep their relative orders in the new sequence? My guess is that the two events are independent, because whether one happens doesn't change the other's probability, so we would have $$ \Pr \{ [\text{both subsequences keep relative order}]\} = \left(\frac 1 {k!}\right)^2 $$ What do you think about it? Is it possible to get the result by counting how many permutations fulfill the condition?
Your reasoning seems correct to me. Another way of getting it: to count the admissible arrangements, imagine the first $k$ elements are white and the rest black. It's easy to see that, given the colors of a particular arrangement, the elements can be identified; hence, counting all "legal" permutations is equivalent to counting all the possible ways of placing $k$ black and $k$ white elements in $2k$ positions. This is ${2k \choose k}$. And the total number of permutations is $(2k)!$. Hence, the probability is $$\frac{{2k \choose k}}{(2k)!} = \frac{1}{(k!)^2}$$
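For small $k$ the claim can be confirmed by enumerating all $(2k)!$ permutations (a Python sketch, names mine):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def prob_both_ordered(k):
    # Probability that the first k and last k elements of the original
    # sequence each keep their relative order in a uniform permutation
    seq = list(range(2 * k))
    first, second = seq[:k], seq[k:]
    good = 0
    for p in permutations(seq):
        pos = {v: i for i, v in enumerate(p)}
        if all(pos[first[i]] < pos[first[i + 1]] for i in range(k - 1)) and \
           all(pos[second[i]] < pos[second[i + 1]] for i in range(k - 1)):
            good += 1
    return Fraction(good, factorial(2 * k))

for k in (1, 2, 3):
    assert prob_both_ordered(k) == Fraction(1, factorial(k) ** 2)
```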
{ "language": "en", "url": "https://math.stackexchange.com/questions/66213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving a limit involved in the Lagrangian inversion of $\frac{\log\sqrt{1+x}}{\sqrt{1+x}}$ In my attempt to complete this answer, I hit a snag in showing that $$\lim_{t\to 0} \dfrac{\mathrm d^{k-1}}{\mathrm dt^{k-1}}\left(\frac{t\sqrt{1+t}}{\log\sqrt{1+t}}\right)^k=2(k+2)^{k-1}$$ This shows up when trying to apply Lagrangian inversion to the function $\dfrac{\log\sqrt{1+x}}{\sqrt{1+x}}$. My sticking point here is that I am unable to find a convenient expression for the derivatives. Is there an easy proof for this?
Complex analysis comes to the rescue. Using the Cauchy differentiation formula: $$ f^{(n-1)}(0) = \frac{(n-1)!}{2 \pi i} \oint \frac{f(z)}{z^n} \mathrm{d} z $$ Now $$ \begin{eqnarray} \lim_{t\to 0} \dfrac{\mathrm d^{k-1}}{\mathrm dt^{k-1}}\left(\frac{t\sqrt{1+t}}{\log\sqrt{1+t}}\right)^k &=& \frac{(k-1)!}{2 \pi i} \oint\left(\frac{t\sqrt{1+t}}{\log\sqrt{1+t}}\right)^k \frac{\mathrm{d} t}{t^k} \\ &=& \frac{(k-1)!}{2 \pi i} \oint \left(\frac{\sqrt{1+t}}{\log\sqrt{1+t}}\right)^k \mathrm{d} t \end{eqnarray} $$ Now performing the change of variable $t = \mathrm{e}^u-1$ (so that $\sqrt{1+t} = \mathrm{e}^{u/2}$, $\log\sqrt{1+t} = u/2$ and $\mathrm{d}t = \mathrm{e}^u \mathrm{d}u$): $$ \begin{eqnarray} \lim_{t\to 0} \dfrac{\mathrm d^{k-1}}{\mathrm dt^{k-1}}\left(\frac{t\sqrt{1+t}}{\log\sqrt{1+t}}\right)^k &=& \frac{(k-1)!}{2 \pi i} \oint \left( \frac{\exp(u/2)}{u/2} \right)^k \mathrm{e}^u \mathrm{d} u \\ &=& 2^k \left[ \frac{(k-1)!}{2 \pi i} \oint \frac{\exp\left(u\left(\frac{k}{2}+1\right)\right)}{u^k} \mathrm{d} u \right] \\ &=& 2^k \lim_{u \to 0} \dfrac{\mathrm d^{k-1}}{\mathrm du^{k-1}} \mathrm{e}^{ u \left(\frac{k}{2}+1\right) } = 2^k \left(\frac{k}{2}+1\right)^{k-1} = 2 (k+2)^{k-1} \end{eqnarray} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/66275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Suppose $f \in End_{R}(M)$ is an $R$-module endomorphism. If $f$ is surjective then $f$ is not a right divisor of zero in $End_{R}(M)$ I am having trouble thinking about a practice problem that I feel should be pretty easy because it is part 1 of a 4 part problem. I am probably overthinking it because I keep trying to construct maps of induced sequences of Hom, since this comes from a section where we have introduced the notion of projectivity and injectivity for modules. Let $R$ be a commutative ring with identity. Suppose $f \in End_{R}(M)$ is an $R$-module endomorphism. If $f$ is surjective then $f$ is not a right divisor of zero in $End_{R}(M)$. Conversely, if for every submodule $N \neq M$ with $N \subset M$ there exists a linear form $x^{*} \in E^{*}$ which is zero on $N$ and surjective, every element of $End_{R}(M)$ which is not a right divisor of zero is a surjective endomorphism. Does the first half of the problem follow simply from the fact that $f$ surjective implies $f^*$ injective? There is a similar statement for injective endomorphisms in the problem set, but I cannot seem to come up with the ideas for either. I was also wondering if there were any good texts or lecture notes that cover module endomorphisms in a way that explains these types of problems clearly.
Suppose $f$ is surjective and $gf=0$. To prove $f$ is not a right divisor of zero, we need to show that $g=0$, i.e. that $g(m)=0$ for all $m$. So let $m$ be in $M$. Since $f$ is surjective, $m=f(n)$ for some $n$ in $M$. Thus $g(m)=g(f(n))=(gf)(n)=0(n)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Distribution of results of a biased coin The mass distribution of a coin is such that the chance of getting a head is only 40%. The coin is tossed 100 times.

* What is the chance of getting at least 50 heads (inclusive)?
* What is the chance of getting more than 50 tails (exclusive)?
* What is the chance of getting more than 60 tails (exclusive) and more than 20 heads (exclusive)?
* If you do the experiment 3 times, what is the chance of getting at most 30 heads (inclusive) every time?
To get you started, I will work the first one. Using the Binomial Distribution, we get $$ \sum_{k=50}^{100}\binom{100}{k}\;.4^k\;.6^{100-k} = 0.027099197757009005051 $$ If we approximate, using the Normal Distribution, where the mean is $-.2\times100=-20$ and the variance is $.96\times100=96$, the probability of getting at least $0$ ($20/\sqrt{96}$ s.d. above the mean) is $\frac{1}{2}(1-\operatorname{erf}(20/\sqrt{96}/\sqrt{2}))=0.020613416668581847098$. As Brian Scott points out in the comments, we can get a better approximation with the Normal Distribution by including the whole count around $50$ rather than only half the count. That is, the probability of $49.5$ heads or more. $49.5\text{ heads}-50.5\text{ tails}=-1$ instead of $0$, which gives $19/\sqrt{96}$ or more s.d. above the mean, with a probability of $\frac{1}{2}(1-\operatorname{erf}(19/\sqrt{96}/\sqrt{2}))=0.026239749779623305681$.
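The exact binomial tail and both normal approximations from the answer can be reproduced in a few lines of pure Python (variable names are my own):

```python
from math import comb, erf, sqrt

# Exact: P(at least 50 heads in 100 tosses with P(head) = 0.4)
exact = sum(comb(100, k) * 0.4**k * 0.6**(100 - k) for k in range(50, 101))

# Normal approximation in terms of heads - tails: mean -20, variance 96.
# P(heads >= 50) = P(heads - tails >= 0)
naive = 0.5 * (1 - erf((0 - (-20)) / sqrt(96) / sqrt(2)))
# Continuity correction: 49.5 heads - 50.5 tails = -1
corrected = 0.5 * (1 - erf((-1 - (-20)) / sqrt(96) / sqrt(2)))

assert abs(exact - 0.027099197757009) < 1e-10
assert abs(corrected - exact) < abs(naive - exact)  # the correction helps
```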
{ "language": "en", "url": "https://math.stackexchange.com/questions/66409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Open $\sigma$-compact sets with finite measure Let $X$ be a locally compact Hausdorff space and let $\mu$ be a positive Borel measure, finite on compacts, outer regular with respect to open subsets for each Borel set, and inner regular with respect to compact subsets for each open set and for each Borel set with finite measure. Is it true that for every compact $F$ there exists an open $\sigma$-compact $G$ such that $F\subset G$ and $G$ has finite measure? Thanks.
Yes. As $X$ is locally compact and $F$ is compact, there is an $f\in C_c(X)$ (i.e., $f\colon X\to\mathbb{R}$ with compact support) which is strictly positive on $F$. We can then take $G=\{x\in X\colon f(x) > 0\}$. This is contained in the support of $f$, which is compact, so has finite measure. Also, $$ G=\bigcup_{n=1}^\infty\left\{x\in X\colon f(x)\ge1/n\right\} $$ expresses $G$ as a countable union of compact sets, so it is $\sigma$-compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $A$ and $B$ are positive-definite matrices, is $AB$ positive-definite? I've managed to prove that if $A$ and $B$ are positive definite then $AB$ has only positive eigenvalues. To prove $AB$ is positive definite, I also need to prove $(AB)^\ast = AB$ (so $AB$ is Hermitian). Is this statement true? If not, does anyone have a counterexample? Thanks, Josh
In general no, because for Hermitian $A$ and $B$, $(AB)^* = AB$ if and only if $A$ and $B$ commute. On the other hand, $ABA$ and $BAB$ can be proven to be positive definite.
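A concrete counterexample is easy to check by hand or in a few lines of Python (the matrices are my own choice; any non-commuting positive-definite pair works):

```python
# Two symmetric positive definite 2x2 matrices (positive leading minors)
A = [[2.0, 1.0],
     [1.0, 2.0]]
B = [[1.0, 0.0],
     [0.0, 3.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AB = matmul(A, B)
# AB = [[2, 3], [1, 6]] is not symmetric, so it is not positive definite
# in the Hermitian sense, even though both its eigenvalues are positive
# (trace 8, determinant 9)
assert AB[0][1] != AB[1][0]
```

Here $A$ and $B$ do not commute ($BA = [[2, 1], [3, 6]] \neq AB$), matching the criterion stated above.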
{ "language": "en", "url": "https://math.stackexchange.com/questions/66520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
A finite sum of prime reciprocals How can you prove that $\sum\limits_k \frac1{p_k}$, where $p_k$ is the $k$-th prime, does not result in an integer?
Inspired by this: https://en.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes#Partial_sums

Bring the sum to the common denominator $2\cdot p_2\cdots p_n$:

$\displaystyle \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} = \frac{1}{2} + \frac{1}{\text{odd}} + \cdots + \frac{1}{\text{odd}} = \frac{\text{odd}\cdot\text{odd}\cdots\text{odd} + 2\cdot\text{odd}\cdots\text{odd} + \cdots + 2\cdot\text{odd}\cdots\text{odd}}{2\cdot\text{odd}\cdots\text{odd}}$

$\displaystyle = \frac{\text{odd} + \text{even} + \cdots + \text{even}}{2\cdot\text{odd}} = \frac{\text{odd}}{\text{even}} \neq \text{integer}.$
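The parity argument can be spot-checked with exact rational arithmetic (a Python sketch, names mine):

```python
from fractions import Fraction

def primes(n):
    # First n primes by trial division (n is small here)
    ps = []
    cand = 2
    while len(ps) < n:
        if all(cand % p for p in ps):
            ps.append(cand)
        cand += 1
    return ps

for n in range(1, 15):
    s = sum(Fraction(1, p) for p in primes(n))
    # In lowest terms the numerator is odd and the denominator even,
    # so the partial sum is never an integer
    assert s.numerator % 2 == 1 and s.denominator % 2 == 0
```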
{ "language": "en", "url": "https://math.stackexchange.com/questions/66642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Determine a point Triangle $ABC$: $A(4,2)$, $B(-2,1)$, $C(3,-2)$. Find a point $D$ so that this equality is true: $$5\vec{AD}=2\vec{AB}-3\vec{AC}$$
Recall that the vector $\overrightarrow{PQ}$ is the difference of two points $Q{-}P$. In this way, $$ 5\overrightarrow{AD}=2\overrightarrow{AB}-3\overrightarrow{AC} $$ becomes $$ 5(D-A)=2(B-A)-3(C-A) $$ All that is left is to solve for $D$.
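Carrying out that computation with exact fractions (a quick Python sketch; helper names are mine):

```python
from fractions import Fraction as F

A = (F(4), F(2)); B = (F(-2), F(1)); C = (F(3), F(-2))

def sub(P, Q): return (P[0] - Q[0], P[1] - Q[1])
def add(P, Q): return (P[0] + Q[0], P[1] + Q[1])
def scale(c, P): return (c * P[0], c * P[1])

# 5(D - A) = 2(B - A) - 3(C - A)  =>  D = A + (2(B - A) - 3(C - A)) / 5
rhs = sub(scale(2, sub(B, A)), scale(3, sub(C, A)))
D = add(A, scale(F(1, 5), rhs))
assert D == (F(11, 5), F(4))
```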
{ "language": "en", "url": "https://math.stackexchange.com/questions/66671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Field $\mathbb{Q}(\sqrt{3}, \sqrt{-1})$ Am I right to say that the field $\mathbb{Q}(\sqrt{3}, \sqrt{-1})$ is an algebraic extension of $\mathbb{Q}$? Because $\mathbb{Q}\subset\mathbb{Q}(\sqrt{3})\subset\mathbb{Q}(\sqrt{3})( \sqrt{-1})=\mathbb{Q}(\sqrt{3}, \sqrt{-1})$. Thanks.
HINT $\rm\ K =\: \mathbb Q(\sqrt 3, \sqrt{-1})\:$ is a $4$-dimensional vector space over $\rm\mathbb Q\:,\:$ viz. $\rm\: K =\: \mathbb Q\langle 1,\:\sqrt 3,\:\sqrt{-1},\:\sqrt{-3}\rangle\:.$ Hence $\rm\:\alpha\in K\ \Rightarrow\ 1,\ \alpha,\ \alpha^2,\ \alpha^3,\ \alpha^4\:$ are linearly dependent over $\rm\:\mathbb Q\:.\:$ This dependence relation yields a nonzero polynomial $\rm\:f(x)\in \mathbb Q[x]\:$ of degree $\le 4\:$ such that $\rm\:f(\alpha)=0\:.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/66758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
When is the preimage of prime ideal is not a prime ideal? If $f\colon R\to S$ is a ring homomorphism such that $f(1)=1$, it's straightforward to show that the preimage of a prime ideal is again a prime ideal. What happens though if $f(1)\neq 1$? I use the fact that $f(1)=1$ to show that the preimage of a prime ideal is proper, so I assume there is some example where the preimage of a prime ideal is not proper, and thus not prime when $f(1)\neq 1$? Could someone enlighten me on such an example?
First, I object to the fact that you consider maps which don't take $1_R$ to $1_S$ to be ring homomorphisms. That said, yes if you were to consider such maps there are many examples. For example let $R$ be any integral domain, $S = R\oplus R$ and $f:R \rightarrow S$ be the embedding of $R$ into the first coordinate in $S$. Then $f(R)$ is a prime ideal of $S.$ And voila a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite/Infinite Coxeter Groups In the same contest as this we got the following problem: We are given a language with only three letters letters $A,B,C$. Two words are equivalent if they can be transformed from one another using transformations of consecutive letters in a word like this: $ABA \leftrightarrow BAB$, $ACA \leftrightarrow CAC$ and $AA=BB=CC=\emptyset$ Decide if there are finite or infinite number of non equivalent words in this language if the following condition is added: 1) $BC=CB$; 2) $BCB=CBC$. If there are a finite number of words, how many inequivalent words are there? The problem can be quickly translated into group theory (which I didn't see...) like this: A group has three generators $a,b,c$ with the following relations between them. Decide in each case if the group is finite/infinite. If finite, find the number of elements of the group. 1) $a^2=b^2=c^2=e, \ (ab)^3=e, \ (ac)^3=3,\ (bc)^2=e$; 2) $a^2=b^2=c^2=e, \ (ab)^3=e, \ (ac)^3=3,\ (bc)^3=e$. These are easily seen to be particular cases of Coxeter groups. The official proofs were based on finding actual geometrical models of the given Coxeter groups (this is always possible, although not very simple...). Of course, no one of the participants thought of it like this, and in case $1)$ it is possible to prove that there are finitely many words by proving that a large enough word, which is equivalent to a word of the form $ABCABC..., BABCABC...,BCABCABC...$ can be made shorter, but even if we prove that there are finitely many words, there is still the part when we need to count the different words, which can be very tricky. For the second problem no solution without Coxeter groups geometrical representation was presented. My questions are: 1) Are there any results from which we can see directly from the relations given for the Coxeter group if the group is finite or not? I am interested especially in the 3 generators case, but maybe there are some results in the general case also. 
2) Can we find the number of inequivalent words in the first part without using Coxeter groups? 3) Can you solve second part without using Coxeter groups?
To expand on Jack Schmidt's comment, a confluent rewriting system for the first example has the eight reduction rules: $a^2 \to 1$, $b^2 \to 1$, $c^2 \to 1$, $bab \to aba$, $cac \to aca$, $cb \to bc$, $caba \to bcab$, $cabc \to acab$, from which you can see that there are exactly 24 irreducible words (i.e. those not containing the left-hand side of a reduction rule): $1$, $a$, $b$, $c$, $ab$, $ac$, $ba$, $bc$, $ca$, $aba$, $abc$, $aca$, $bac$, $bca$, $cab$, $abac$, $abca$, $acab$, $baca$, $bcab$, $abaca$, $abcab$, $bacab$, $abacab$, although it makes life easier to check such calculations by computer. The second example also has a confluent rewriting system, this time with the nine rules: $a^2 \to 1$, $b^2 \to 1$, $c^2 \to 1$, $bab \to aba$, $cac \to aca$, $cbc \to bcb$, $cabcb \to acabc$, $cbaca \to bcbac$, $cabcaba \to acabcab$. Note that the words $(abc)^n$ are irreducible for all $n$, so the group is infinite.
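The count of 24 for the first system can be checked by brute force. Here is a sketch of my own in Python that enumerates all words over $\{a,b,c\}$ containing no left-hand side of a reduction rule; since the system is confluent and terminating, these normal forms are in bijection with the group elements.

```python
from itertools import product

# left-hand sides of the reduction rules for case 1)
lhs = ["aa", "bb", "cc", "bab", "cac", "cb", "caba", "cabc"]

def irreducible(word):
    return not any(l in word for l in lhs)

# the longest irreducible word has length 6, so cutting off at length 7 suffices
words = [""] + ["".join(w) for n in range(1, 8)
                for w in product("abc", repeat=n)]
normal_forms = [w for w in words if irreducible(w)]
print(len(normal_forms))  # 24, the order of the group
```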
{ "language": "en", "url": "https://math.stackexchange.com/questions/66889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Proof that $x \Phi(x) + \Phi'(x) \geq 0$ $\forall x$, where $\Phi$ is the normal CDF As title. Can anyone supply a simple proof that $$x \Phi(x) + \Phi'(x) \geq 0 \quad \forall x\in\mathbb{R}$$ where $\Phi$ is the standard normal CDF, i.e. $$\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-y^2/2} {\rm d} y$$ I have so far: Defining $f(x) = x \Phi(x) + \Phi'(x)$ we get $$ \begin{align} f'(x) & = \Phi(x) + x \Phi'(x) + \Phi''(x) \\ & = \Phi(x) + x\Phi'(x) - x\Phi'(x) \\ & = \Phi(x) \\ & >0 \end{align}$$ so it seems that if we can show $$\lim_{x\to-\infty} f(x) = 0$$ then we have our proof - am I correct? Clearly $f$ is the sum of two terms which tend to zero, so maybe I have all the machinery I require, and I just need to connect the parts in the right way! Assistance will be gratefully received. In case anyone is interested in where this question comes from: Bachelier's formula for an option struck at $K$ with time $T$ until maturity, with volatility $\sigma>0$ and current asset price $S$ is given by $$V(S) = (S - K) \Phi\left( \frac{S-K}{\sigma S \sqrt{T}} \right) + \sigma S \sqrt{T} \Phi' \left( \frac{S-K}{\sigma S \sqrt{T}} \right) $$ Working in time units where $\sigma S\sqrt{T} = 1$ and letting $x=S-K$, we have $$V(x) = x \Phi(x) + \Phi'(x)$$ and I wanted a simple proof that $V(x)>0$ $\forall x$, i.e. an option always has positive value under Bachelier's model.
I will concentrate on $\lim_{x \to -\infty} f(x) = 0$, where $f(x) = x \Phi(x) + \Phi'(x)$. Consider $$ g(x) = \sqrt{2 \pi} f(-x) = \mathrm{e}^{-\frac{x^2}{2}} - x \int_{x}^\infty \mathrm{e}^{-\frac{y^2}{2}} \mathrm{d} y \qquad \text{for} \qquad x > 0. $$ Then $$ g(x) = \mathrm{e}^{-\frac{x^2}{2}} - x \int_{\frac{x^2}{2}}^\infty \mathrm{e}^{-t} \frac{\mathrm{d} t}{\sqrt{2t}} \, > \, \mathrm{e}^{-\frac{x^2}{2}} - \int_{\frac{x^2}{2}}^\infty \mathrm{e}^{-t} \mathrm{d} t = 0 $$ where $ x \int_{\frac{x^2}{2}}^\infty \mathrm{e}^{-t} \frac{\mathrm{d} t}{\sqrt{2t}} < \frac{x}{\sqrt{2 \frac{x^2}{2}}} \int_{\frac{x^2}{2}}^\infty \mathrm{e}^{-t} \mathrm{d} t = \int_{\frac{x^2}{2}}^\infty \mathrm{e}^{-t} \mathrm{d} t = \mathrm{e}^{-\frac{x^2}{2}}$ for $x>0$. On the other hand $g(x) < \mathrm{e}^{-\frac{x^2}{2}}$ by definition, so $\lim_{x \to \infty} g(x) = \lim_{x \to \infty} \sqrt{2 \pi} f(-x)$ vanishes, being sandwiched between $0$ and $\mathrm{e}^{-\frac{x^2}{2}}$, which also tends to zero for large $x$.
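As a numerical sanity check (a Python sketch of my own, using only the standard library; $\Phi$ is expressed via the error function), $f(x) = x\Phi(x)+\Phi'(x)$ is positive on a grid of points and shrinks toward $0$ as $x\to-\infty$:

```python
import math

def Phi(x):          # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):          # standard normal density, Phi'(x)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f(x):
    return x * Phi(x) + phi(x)

for x in [-6.0, -4.0, -2.0, -1.0, 0.0, 1.0, 4.0]:
    assert f(x) > 0.0
print(f(-6.0), f(-4.0), f(0.0))  # tiny, small, 1/sqrt(2*pi)
```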
{ "language": "en", "url": "https://math.stackexchange.com/questions/66958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 3 }
Normalizer of the normalizer of the sylow $p$-subgroup If $P$ is a Sylow $p$-subgroup of $G$, how do I prove that normalizer of the normalizer $P$ is same as the normalizer of $P$ ?
Let $N=N_G(P)$. Let $x\in N_G(N)$, so that $xNx^{-1}=N$. Then $xPx^{-1}$ is a Sylow $p$-subgroup of $N\leq G$. Since $P$ is normal in $N$, $P$ is the only Sylow $p$-subgroup of $N$. Therefore $xPx^{-1}=P$. This implies $x\in N$. We have proved $N_G(N_G(P))\subseteq N_G(P)$. Let $y\in N_G(P)$. Then certainly $yN_G(P)y^{-1}=N_G(P)$, so that $y\in N_G(N_G(P))$. Thus $N_G(P)\subseteq N_G(N_G(P))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 7, "answer_id": 3 }
Best numerical method to integrate a differential equation like $\frac{\partial y}{\partial t}=f\left(t,\frac{\partial y}{\partial r}\right)$? What is the best numerical method to solve an equation like this: $$\frac{\partial y}{\partial t}=f\left(t,\frac{\partial y}{\partial r}\right)\quad?$$ Can somebody give, at least, a reference?
This is quite difficult to answer without further details. Assuming that you have no clue about what to do, I would recommend you to look into finite difference methods. These are probably the easiest methods to understand. Other methods are finite elements (a bit more complicated, and in 1d often similar to finite differences, though boundary conditions are handled differently) and (pseudo)spectral methods (which may give you better accuracy if everything is smooth). A good reference is Morton & Mayers, Numerical Solution of Partial Differential Equations (though, as Ross says, any numerical analysis book will have something about this, often in the last chapter). There is also a list of links at http://math.fullerton.edu/mathews/n2003/finitediffpde/FiniteDifferencePDEBib/Links/FiniteDifferencePDEBib_lnk_1.html .
{ "language": "en", "url": "https://math.stackexchange.com/questions/67053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
For what $x\in[0,1]$ is $y = \sum\limits_{k = 1}^\infty\frac{\sin( k!^2x )}{k!}$ differentiable? For what $x\in[0,1]$ will the function $y = \sum\limits_{k = 1}^\infty\frac{\sin( k!^2x )}{k!}$ be differentiable? How do you know? Here is the equation expressed more clearly on Wolfram Alpha. The only difference is that 10 should be Infinity (Wolfram apparently can't handle that yet). I'm trying to understand for what $x\in[0,1]$ this function is differentiable. I've used a computer to plot the graph of $y'$ (the derivative of the function) with the upper limit (top number of sigma) as 10, and then with the upper limit as 11, 12... it looks like these "zig-zags" continue to exist as you "go deeper" into the function. ...so I'm thinking the values of $x\in[0,1]$ that make the function differentiable are all of them... But is my line of thinking correct? How can I validate?
Thanks to great comments from people like Henning Makholm, I have an answer, and I decided to summarize it here. (Please edit this if I have any inaccuracies.) First of all, I was mistaken in my supposed graphing of the derivative. The term-by-term derivative of $y$ would be $$y' = \sum\limits_{k = 1}^\infty k!\cos(k!^2x).$$ This sum diverges, so it is impossible to graph the derivative this way. We know the sum diverges because $\cos$ oscillates between $-1$ and $1$ while the factor $k!$ grows without bound. As Henning Makholm says, "[The graph of y] is nowhere differentiable because for every $x_0$, the difference quotient $\frac{f(x)-f(x_0)}{x-x_0}$ can be made arbitrarily large for $x$ arbitrarily close to $x_0$." This means that the function is non-differentiable at every point. Also, if we examine the graph of $y$ with increasing magnification, we see that the "zig-zags" continue to appear, and this further supports the non-differentiability at all points. Therefore, for no $x\in[0,1]$ is the function $y$ differentiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
The sum of a polynomial over a boolean affine subcube Let $P:\mathbb{Z}_2^n\to\mathbb{Z}_2$ be a polynomial of degree $k$ over the boolean cube. An affine subcube inside $\mathbb{Z}_2^n$ is defined by a basis of $k+1$ linearly independent vectors and an offset in $\mathbb{Z}_2^n$. [See "Testing Low-Degree Polynomials over GF(2)" by Noga Alon, Tali Kaufman, Michael Krivelevich, Simon Litsyn, Dana Ron - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.1235 - for more details] Why does taking such a subcube and evaluating the sum of $P$ on all $2^{k+1}$ elements of it, always results in zero ?
The key idea in proving this is restricting a polynomial to an affine subcube. Let $S$ be the subcube $$ S := \{ z_1 \mathbf v_1 + z_2 \mathbf v_2 + \cdots + z_{k+1} \mathbf v_{k+1} + \mathbf v_0 \,:\, z_i \in \{ 0, 1 \} \text{ for } 1 \leq i \leq k+1 \} $$ for some fixed vectors $\mathbf v_1, \ldots, \mathbf v_{k+1}$ and $\mathbf v_0$. The restriction of $P$ to $S$ is then defined to be the polynomial $Q : \mathbb Z_2^{k+1} \to \mathbb Z_2$ obtained by formally plugging in $z_1 \mathbf v_1 + z_2 \mathbf v_2 + \cdots + z_{k+1} \mathbf v_{k+1} + \mathbf v_0$ for $\mathbf x$ in $P(\mathbf x)$. That is, $$ Q(\mathbf z) = P(z_1 \mathbf v_1 + z_2 \mathbf v_2 + \cdots + z_{k+1} \mathbf v_{k+1} + \mathbf v_0), \tag{1} $$ for all $\mathbf z = (z_1, z_2, \ldots, z_{k+1}) \in \mathbb Z_2^{k+1}$. Restrictions are useful because they preserve the degree of the polynomial: Proposition. If $P$ has degree at most $k$, then so does its restriction $Q$. (The proof is simply to expand out $P$ in terms of the indeterminates $z_i$'s, keeping in mind that the $v_i$'s are fixed vectors. Complete the proof as an exercise!) Now, the sum $\sum_{\mathbf x \in S} P(\mathbf x)$ is the same as the sum $\sum_{\mathbf z \in \mathbb Z_2^{k+1}} Q(\mathbf z)$ where $Q$ is defined as in $(1)$; so it is enough to prove that the latter sum is $0$. Lemma. If $Q : \mathbb Z_2^{k+1} \to \mathbb Z_2$ is a polynomial of degree at most $k$, then $$\sum_{\mathbf z \in \mathbb Z_2^{k+1}} Q(\mathbf z) = 0 .$$ First of all, we can assume that the polynomial $Q$ is multilinear without loss of generality. Then since any multilinear polynomial is a linear combination of the basis polynomials $z_T := \prod\limits_{i \in T} z_i$ (where $T \subseteq [n]$), it suffices to prove the lemma for these basis polynomials. The proof is quite easy for the polynomials of the form $z_T$. I recommend that you try it; I will supply hints if you are stuck.
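The final exercise (the lemma for the basis monomials $z_T$ with $|T| \le k$) can also be confirmed by brute force; here is a short Python sketch of my own. The point is that $\sum_{\mathbf z} z_T = 2^{k+1-|T|}$, which is even whenever $|T| \le k$, hence zero in $\mathbb Z_2$.

```python
from itertools import combinations, product

def check(k):
    n = k + 1
    for size in range(k + 1):             # all T with |T| <= k
        for T in combinations(range(n), size):
            # sum of z_T over all z in {0,1}^(k+1)
            total = sum(all(z[i] for i in T)
                        for z in product((0, 1), repeat=n))
            assert total % 2 == 0         # vanishes mod 2
    return True

print(all(check(k) for k in range(1, 6)))  # True
```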
{ "language": "en", "url": "https://math.stackexchange.com/questions/67158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to prove the sum of squares is minimum? Given $n$ nonnegative values. Their sum is $k$. $$ x_1 + x_2 + \cdots + x_n = k $$ The sum of their squares is defined as: $$ x_1^2 + x_2^2 + \cdots + x_n^2 $$ I think that the sum of squares is minimum when $x_1 = x_2 = \cdots = x_n$. But I can't figure out how to prove it. Can anybody help me on this? Thanks.
More generally, if the objective function is strictly convex (objective is quadratic, check), the feasible region is convex (constraint is linear, check), and the problem is symmetric (i.e., the variables can be interchanged without changing the problem, check), then the global minimum has all the variables equal to each other. (See, for example, Boyd and Vandenberghe, Convex Optimization, Exercise 4.4.) That immediately gives $x_i = \frac{k}{n}$ as well.
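A small numerical illustration (my own Python sketch; by Cauchy-Schwarz the minimum value is $k^2/n$): random nonnegative vectors rescaled to sum to $k$ never beat the equal split.

```python
import random

random.seed(0)
n, k = 5, 10.0
equal = n * (k / n) ** 2            # sum of squares at the symmetric point, k^2/n

for _ in range(1000):
    x = [random.random() for _ in range(n)]
    s = sum(x)
    x = [k * xi / s for xi in x]    # rescale so the entries sum to k
    assert sum(xi * xi for xi in x) >= equal - 1e-9
print("equal split attains the minimum", equal)
```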
{ "language": "en", "url": "https://math.stackexchange.com/questions/67192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 9, "answer_id": 5 }
Proving that a process is a Brownian motion Let $B$ be a Brownian motion with natural filtration $(\mathcal{F}_t)_{t\geq 0}$ and let $\mathcal{H}_t$ be the $\sigma$-algebra generated by $\mathcal{F}_t$ and $B_1$. Define $$A_t = B_t-\int_0^{\min(t,1)} \frac{B_1-B_s}{1-s}ds$$ I'm trying to show that $A_t$ is a Brownian motion with respect to $(\mathcal{H_t})_{t\geq0}$. As a first step, I'm attempting to show that $A_t$ is a martingale, but haven't made much progress. Thank you.
Look at the quadratic variation of the process. Regards.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Do real matrices always have real eigenvalues? I was trying to show that orthogonal matrices have eigenvalues $1$ or $-1$. Let $u$ be an eigenvector of $A$ (orthogonal) corresponding to eigenvalue $\lambda$. Since orthogonal matrices preserve length, $ \|Au\|=|\lambda|\cdot\|u\|=\|u\|$. Since $\|u\|\ne0$, $|\lambda|=1$. Now I am stuck to show that lambda is only a real number. Can any one help with this?
No, a real matrix does not necessarily have real eigenvalues; an example is $\pmatrix{0&1\\-1&0}$. On the other hand, since this matrix happens to be orthogonal and has the eigenvalues $\pm i$ -- for eigenvectors $(1\mp i, 1\pm i)$ -- I think you're supposed to consider only real eigenvalues in the first place.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 2 }
Sequence sum question: $\sum_{n=0}^{\infty}nk^n$ I am very confused about how to compute $$\sum_{n=0}^{\infty}nk^n.$$ Can anybody help me?
Assuming $|k|<1$, this series converges and can be differentiated term by term (the geometric series converges uniformly on compact subsets of $(-1,1)$): $\sum_{n=0}^{\infty}nk^n=k \sum_{n=0}^{\infty}nk^{n-1} = k \sum_{n=0}^{\infty}\frac{d}{dk}(k^n) =k \frac{d}{dk} \sum_{n=0}^{\infty}k^n = k \frac{d}{dk}\frac{1}{1-k}=\frac{k}{(1-k)^2}$ EDIT: in case $|k|\ge1$ the situation is completely different: the terms $nk^n$ do not tend to zero, so the series diverges.
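A quick numerical check of the closed form for a few values of $|k|<1$ (a Python sketch of my own; the truncation at 2000 terms is an arbitrary but ample choice for these values of $k$):

```python
def partial(k, N=2000):
    # truncated sum of n * k^n
    return sum(n * k ** n for n in range(N))

for k in [0.1, 0.5, -0.3, 0.9]:
    closed = k / (1.0 - k) ** 2
    assert abs(partial(k) - closed) < 1e-9
print("closed form k/(1-k)^2 confirmed")
```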
{ "language": "en", "url": "https://math.stackexchange.com/questions/67364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Group of order 15 is abelian How do I prove that a group of order 15 is abelian? Is there any general strategy to prove that a group of particular order (composite order) is abelian?
There's a unique normal Sylow subgroup of order $5$, because its index ($3$) is the smallest prime dividing the group's order. It follows, after a couple of details, that we have a semidirect product $$G=\Bbb Z_5\rtimes \Bbb Z_3,$$ which can only be the direct (hence abelian) product: the action comes from a homomorphism $\Bbb Z_3\to\operatorname{Aut}(\Bbb Z_5)\cong\Bbb Z_5^{\times}$, and the orders of $\Bbb Z_5^{\times}$ (namely $4$) and $\Bbb Z_3$ are relatively prime.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 6, "answer_id": 5 }
If $L$ contains a $n$-th root of $a\in K$, why does $K$ already contain a $n$-th root? I'm trying to solve the following problem from Algebra by Siegfried Bosch (english version below): Es seien $m,n$ teilerfremde positive ganze Zahlen. Ist dann $L/K$ eine Körpererweiterung vom Grad $m$, so hat jedes Element $a\in K$, welches eine $n$-te Wurzel in $L$ besitzt, bereits eine $n$-te Wurzel in $K$. My attempt at a translation: Let $m,n$ be positive integers with $\mathrm{gcd}(m,n) = 1$. If $L/K$ is a field extension of degree $m$, then show that for any element $a\in K$ having an $n$-th root in $L$, there already is an $n$-th root of $a$ in $K$. Since it's in the chapter, where trace and norm are defined, I'm guessing the norm will have to be used. So the question is: how? The only (obvious) thing I can see: If $b \in L$ is an $n$-th root of $a \in K$, then we have that $a^m = N_{L/K}(a) = N_{L/K}(b^n) = (N_{L/K}(b))^n$ and $N_{L/K}(b)\in K$. But I don't know whether (or how) this helps. I don't seem to be able to come up with any good ideas, so I would appreciate some pointers. But please don't write down a full solution, if possible. I really would like to learn something here, so if you could give me just a hint, that would be great! Thanks a lot.
Hint: You have shown that $a^m$ has an $n$th root in $K$. Also $a^n$ has an $n$th root in $K$. Is the set of integers $\ell$ such that $a^\ell$ has an $n$th root in $K$ closed under ...? Spoiler solution(for readers other than Sam, who solved his problem before this was added): $G=\{a^\ell\mid \ell\in\mathbf{Z}\}\cap {K^\times}^n$ is a multiplicative group. As a subgroup of a cyclic group it has to be cyclic itself, i.e. generated by some $a^t,t>0$. We saw that $a^m,a^n\in G$, so $t\mid m$, $t\mid n$ and the only possibility is $t=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uncountable Cartesian product of closed interval I have a question about product topology. Suppose $I=[0,1]$, i.e. a closed interval with usual topology. We can construct a product space $X=I^I$, i.e. uncountable Cartesian product of closed interval. Is $X$ first countable? I have read Counterexamples of Topology, on item 105, it is dealing with $I^I$. I do not quite understand the proof given on the book. Can someone give a more detail proof?
It is not. Let us look at open sets containing $0$ (the point all of whose coordinates are $0$). We will argue by contradiction, so suppose there is a countable local neighborhood basis $U_i$ for $0$. Every such $U_i$ contains some $V_i = \prod_{r} V_{i,r}$ where $V_{i,r} = I$ for all but finitely many $r$, with each $V_{i,r}$ open (by definition of the product topology). Consider the set of all $r$ such that $V_{i,r} \neq I$ for some $i$. This set is countable, because it is a countable union of finite sets, so it is not the whole of $I$. Choose some $r_0$ outside this set. Let $H = \prod_r H_r$ where $H_{r_0} = [0, 1/2)$ and $H_{r} = I$ otherwise. Then $H$ is an open set containing $0$ that contains none of the $U_i$, contradicting first countability at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Asymptotics of sums of Dirichlet-Characters over prime numbers Again in relation with some stuff I am currently reading, the authors make use of the following "standard argument in prime number theory": Let $\chi$ be a non-principal Dirichlet-character. Then $$\sum_{y< p \leq x} \chi(p)\overline{\chi(p)}=\frac{x}{\log(x)}+ o\left(\frac{x}{\log(x)}\right),$$ when $x\to\infty$, where $p$ runs over prime numbers. This expression very much reminds of Polya's inequality plus some use of character orthogonality, but I don't see how to "restrict" the sum to only prime numbers. I would be thankful if someone could point to the way how this is derived. As usual, references are most welcome!
Let $m$ be the conductor of $\chi$, $\omega(m)$ its number of prime divisors and $q$ the largest prime dividing it - then for $x\ge q$ the sum is precisely $\pi(x)-\omega(m)$, because $|\chi|^2$ is always either $1$ or $0$, and is only the latter for numbers that share common divisors with $m$. The only primes that share common divisors with $m$ are those that divide it, and there are $\omega(m)$ of those (a finite amount), so all other prime numbers will contribute exactly $1$ to the overall sum. This means that $\sum\sim x/\log x$ by the prime number theorem, which gives the identity.
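To illustrate the key point numerically, $\chi(p)\overline{\chi(p)} = 1$ exactly when $p \nmid m$, so the character never needs to be evaluated, only the support condition $\gcd(p,m)=1$. Here is a small Python sketch of my own (the modulus $m=12$, which has $\omega(12)=2$ prime divisors, and the bound $x=100$ are arbitrary choices):

```python
from math import gcd

def primes_upto(x):                      # simple sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

m, x = 12, 100
ps = primes_upto(x)
char_sum = sum(1 for p in ps if gcd(p, m) == 1)   # sum of |chi(p)|^2 over p <= x
print(len(ps), char_sum)  # pi(100) = 25 and 25 - omega(12) = 23
```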
{ "language": "en", "url": "https://math.stackexchange.com/questions/67606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
The canonical form of a nonlinear second order PDE Can anyone help me find the canonical form of $$x^2u_{xy} - yu_{yy} + u_x - 4u = 0?$$ I don't know how to solve it because $a = 0$. I just got that it's hyperbolic since $a=0$ , $b =(x^2)/2$, $c= -y$, then we have $b^2- ac =\frac{x^4}4-0=\frac{x^4}4 > 0$ (hyperbolic), where $x \neq 0$.
see: V.S. Vladimirov, A Collection of Problems on the Equations of Mathematical Physics, Springer, 1986. In your case the characteristic equation is $$-x^2dxdy-ydx^2=0$$ with solutions $$x=c_1,\quad -1/x+\log(y)=c_2.$$ By the change of variables $$\xi=x,\quad\eta=-1/x+\log(y)$$ we get the canonical form $$u_{\xi\eta}=\frac{{{e}^{\frac{1}{\xi }+\eta }}\, \left( {{\xi }^{2}}\, {u_{\xi }}-4 u\, {{\xi }^{2}}+{u_{\eta }}\right) }{{{\xi }^{4}}}-\frac{{u_{\eta }}}{{{\xi }^{2}}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/67748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What does the exclamation mark do? I've seen this but never knew what it does. Can any one let me in on the details? Thanks.
I just wanted to add a common usage for it as well, since that may be of more interest to you. A common usage for the factorial is in permutations. For example: if there were 3 people in a race, how many possible ways could there be to arrange the ranking? The answer would be 3! (3 factorial), or 3 x 2 x 1, or 6 ways. Listing them the brute-force way: 1) A,B,C 2) A,C,B 3) B,A,C 4) B,C,A 5) C,A,B 6) C,B,A. This extends into a lot of other things, but it is most commonly associated with statistics, I believe.
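In code, the factorial and the permutation count it describes line up directly; a tiny Python illustration:

```python
import math
from itertools import permutations

racers = ["A", "B", "C"]
rankings = list(permutations(racers))   # all possible finish orders
print(len(rankings), math.factorial(3)) # 6 6
for r in rankings:
    print("".join(r))                   # ABC, ACB, BAC, BCA, CAB, CBA
```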
{ "language": "en", "url": "https://math.stackexchange.com/questions/67801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 2 }
Does strict convexity imply differentiability? I know that convexity does not imply differentiability, for example f(x)=|x| is convex but not differentiable. However, |x| is not strictly convex. So I wonder whether strict convexity imply differentiability. I did some search and found out the Wikipedia implicitly gives the negative answer: http://en.wikipedia.org/wiki/Convex_function#Strongly_convex_functions It says that "a strongly convex function is also strictly convex" and "a function doesn't have to be differentiable in order to be strongly convex". Can anyone provide a concrete example? Thanks in advance.
$$f(x)=x^2+|x|$$ is strictly convex because of the $x^2$ term but not differentiable at $0$ because of the $|x|$ term
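Both properties of $f(x)=x^2+|x|$ can be seen numerically; here is a brief Python sketch of my own. The one-sided difference quotients at $0$ approach $+1$ and $-1$ (so no derivative there), while midpoints always lie strictly below chords (strict convexity):

```python
def f(x):
    return x * x + abs(x)

# one-sided difference quotients at 0 disagree in the limit
h = 1e-8
right = (f(h) - f(0)) / h        # close to +1
left = (f(-h) - f(0)) / (-h)     # close to -1
assert abs(right - 1.0) < 1e-6 and abs(left + 1.0) < 1e-6

# midpoint strictly below the chord for a few distinct pairs
for a, b in [(-2.0, 1.0), (0.5, 3.0), (-1.0, -0.25)]:
    assert f((a + b) / 2) < (f(a) + f(b)) / 2
print("not differentiable at 0, but strictly convex")
```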
{ "language": "en", "url": "https://math.stackexchange.com/questions/67853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 0 }
Square root of differential operator If $D_x$ is the differential operator. eg. $D_x x^3=3 x^2$. How can I find out what the operator $Q_x=(1+(k D_x)^2)^{(-1/2)}$ does to a (differentiable) function $f(x)$? ($k$ is a real number) For instance what is $Q_x x^3$?
$1+(kD)^2$ has positive spectrum but this is not enough for the existence of a square root operator on the same space of functions. This is because the derivative is an unbounded operator (so that the expansion of $Q_x$ as a power series in $D$ may not converge), and the square root function is multi-valued. The restriction to functions that are polynomials does give a well-defined $Q$ using the power series.
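The last remark can be made concrete: on polynomials, the binomial series $(1+u)^{-1/2}=\sum_j\binom{-1/2}{j}u^j$ with $u=(kD)^2$ terminates, because repeated differentiation eventually kills the polynomial. A sketch of my own in Python (exact rational coefficients; polynomials are coefficient lists); for $f(x)=x^3$ it yields $Q_x x^3 = x^3 - 3k^2x$:

```python
from fractions import Fraction

def deriv(p):  # p[i] is the coefficient of x^i
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def binom_half(j):  # generalized binomial coefficient C(-1/2, j)
    r = Fraction(1)
    for i in range(j):
        r *= (Fraction(-1, 2) - i) / (i + 1)
    return r

def Q(p, k):
    # apply (1 + (k D)^2)^(-1/2) via the terminating binomial series
    result = [Fraction(0)] * len(p)
    q = [Fraction(c) for c in p]
    j = 0
    while any(q):
        coef = binom_half(j) * k ** (2 * j)
        for i, c in enumerate(q):
            result[i] += coef * c
        q = deriv(deriv(q))   # next power of D^2
        j += 1
    return result

print(Q([0, 0, 0, 1], Fraction(1)))  # x^3 with k=1: coefficients of x^3 - 3x
```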
{ "language": "en", "url": "https://math.stackexchange.com/questions/67904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Dependence of certain random variables Consider $X_1,X_2$ i.i.d. standard normal random variables(mean 0, variance 1). Are the random variables $Y=X_1+X_2$ and $Z=X_1-X_2$ dependent? I am not sure how to prove this one way or the other.
$X_1$ and $X_2$ are independent standard normals, so $(X_1, X_2)$ has rotationally symmetric density, namely $$ {1 \over 2\pi} \exp(-(x_1^2 + x_2^2)/2). $$ If you change coordinates with $u = (x_1 + x_2)/\sqrt{2}, v = (x_1 - x_2)/\sqrt{2}$ (so the change from $(x_1, x_2)$ to $(u,v)$ is area-preserving) then this becomes $$ {1 \over 2\pi} \exp(-(u^2+v^2)/2). $$ That is, the random variables $U = (X_1 + X_2)/\sqrt{2}$ and $V = (X_1 - X_2)/\sqrt{2}$ are also independent standard normals. Your random variables are $Y = U \sqrt{2}$ and $Z = V \sqrt{2}$, so they're independent normals with mean 0 and SD $\sqrt{2}$.
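The computation can be corroborated by simulation (a hedged Monte Carlo sketch of my own in Python; the seed and tolerance are arbitrary choices). Since $\mathrm{Cov}(Y,Z)=\mathrm{Var}(X_1)-\mathrm{Var}(X_2)=0$ and $(Y,Z)$ is jointly normal, the sample covariance should be near zero:

```python
import random

random.seed(42)
N = 100_000
ys, zs = [], []
for _ in range(N):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    ys.append(x1 + x2)
    zs.append(x1 - x2)

my = sum(ys) / N
mz = sum(zs) / N
cov = sum((y - my) * (z - mz) for y, z in zip(ys, zs)) / N
print(abs(cov))  # small: the theoretical covariance is exactly 0
assert abs(cov) < 0.05
```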
{ "language": "en", "url": "https://math.stackexchange.com/questions/67985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Does taking closure preserve finite index subgroups? Let $K \leq H$ be two subgroups of a topological group $G$ and suppose that $K$ has finite index in $H$. Does it follow that $\bar{K}$ has finite index in $\bar{H}$ ?
Yes, if your group has a countable basis (so the closure of a set is the set of limits of sequences of points in that set). Let $\{h_1,...,h_n\}$ be representatives of the left cosets of $K$ in $H$. Then for any point $h \in \bar{H}$, express it as a limit of a sequence $x_1, x_2, x_3, ...$ of points $x_i \in H$. These points can then be represented using the coset representatives as $h_{i_1}y_1, h_{i_2}y_2, h_{i_3}y_3,...$, with $i_k \in \{1,...,n\}$ and $y_k \in K$. Now some $h_k$ must appear infinitely often, thus by passing to a subsequence we have that $h$ is the limit of a sequence $h_ky_1, h_ky_2, h_ky_3,...$ and so by the continuity of group multiplication we have $h = h_ky$, where $y \in \bar{K}$ is the limit of the $y_i$. This shows that any point $h \in \bar{H}$ lies in $h_k\bar{K}$ for some $k \in \{1,...,n\}$, thus the same coset representatives of $K$ in $H$ also form a set of (possibly redundant) coset representatives of $\bar{K}$ in $\bar{H}$, and in particular $\bar{K}$ has finite index in $\bar{H}$. I suspect the sequences in this argument could be adapted into more of a "for any open set around $h$, there is ..." sort of language to make the proof work for general topological groups, but I prefer the intuition you get with sequences.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
What is the probability of rolling 2 7's before 6 even numbers on successive rolls of a pair of fair dice? Would this equal Pr( rolling 2 7's)/Pr(rolling 2 7's or rolling 6 even numbers)? Or could I approach the problem as follows: 1- (Pr(rolling 6 even numbers)+ Pr( rolling 5 even numbers, 7 and an even number)) ?
Here’s a slightly different way to look at it. Odd rolls other than seven are meaningless and can be ignored. By the seventh meaningful roll you must have either two sevens or six even numbers. Thus, you get two sevens before you get six even numbers if and only if you get at least two sevens in the first seven rolls. On each meaningful roll the probability of getting a seven is $$\frac{\frac16}{\frac16+\frac12} = \frac14,$$ so the probability of getting at most one seven in seven meaningful rolls is $$\binom70\left(\frac34\right)^7 + \binom71\left(\frac14\right)\left(\frac34\right)^6 = \frac{3^7+7\cdot 3^6}{4^7} = \frac{10\cdot 3^6}{4^7} = \frac{3645}{8192}.$$ The probability of getting at least two sevens in seven meaningful rolls is therefore $$1 - \frac{3645}{8192} = \frac{4547}{8192}.$$
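The arithmetic can be verified exactly with rational numbers; here is a short Python sketch (the framing "at least two sevens among seven meaningful rolls" comes from the argument above):

```python
from fractions import Fraction
from math import comb

p7 = Fraction(1, 4)           # P(seven | meaningful roll)
pe = Fraction(3, 4)           # P(even | meaningful roll)

# at most one seven in seven meaningful rolls
at_most_one = comb(7, 0) * pe**7 + comb(7, 1) * p7 * pe**6
answer = 1 - at_most_one
print(answer)  # 4547/8192
```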
{ "language": "en", "url": "https://math.stackexchange.com/questions/68080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does $A^TA=I, \det A=1$ mean $A$ is a rotation matrix? I know if $A^TA=I$, $A$ is an orthogonal matrix. Orthogonal matrices also contain two different types: if $\det A=1$, $A$ is a rotation matrix; if $\det A=-1$, $A$ is a reflection matrix. My question is: what is the relationship between the determinant of $A$ and rotation/reflection. Can you explain why $\det A=\pm 1$ means $A$ is a rotation/reflection from the geometric perspective? EDIT: I think the following questions need be figured out first. * *Do rotation matrices only exist in 2D and 3D space? That is: for any dimensional matrix, as long as it is orthogonal and with determinant as 1, the matrix represents a rotation transformation, right? Note the orthogonality and determinant are applicable to arbitrary dimensional matrices. *What is the most fundamental definition of a rotation transformation? *Since an orthogonal matrix preserves length and angle, can we say an orthogonal matrix represents a "rigid body" transformation? "Rigid body" transformation contains two basic types: rotation and reflection?
This depends on how we want to define "rotation" in the first place. Some people prefer a narrow definition where the only things that qualify as "rotations" are things that can be expressed as a $(2+n)\times(2+n)$ block matrix $$\pmatrix{\pmatrix{\cos\theta&\sin\theta\\-\sin\theta&\cos\theta}&0\\0&I}$$ with respect to some orthogonal basis. Under this definition there are $4\times 4$ matrices that are orthogonal and have determinant 1 but are not rotations - for example, $$\pmatrix{0.6&0.8&0&0\\-0.8&0.6&0&0\\0&0&0&1\\0&0&-1&0}$$ But one might also say that a "rotation" is any matrix $A$ such that there is a continuous family of matrices $(A_t)_{0\le t\le 1}$ such that $A_0=I$, $A_1=A$ and $A_t$ is always orthogonal. This captures the idea of a gradual isometric transformation away from a starting point. Such a definition immediately tells us that the determinant of a rotation must be 1 (because the determinant is continuous), but it is harder to see that we get all orthogonal matrices with determinant 1 this way. I don't have a quick proof of the latter, but I imagine that it can be done fairly elementarily by induction on the dimension, first a series of rotations to make the first column fit, then sort out the remaining columns recursively working within the orthogonal complement of the first column.
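As a quick numerical check (my own illustration, not part of the original answer), one can verify that the $4\times4$ example above is indeed orthogonal with determinant $1$:

```python
import numpy as np

# The 4x4 example: a rotation in the first plane and a
# quarter-turn in the second plane, simultaneously.
A = np.array([[ 0.6, 0.8,  0.0, 0.0],
              [-0.8, 0.6,  0.0, 0.0],
              [ 0.0, 0.0,  0.0, 1.0],
              [ 0.0, 0.0, -1.0, 0.0]])

print(np.allclose(A.T @ A, np.eye(4)))       # True: A is orthogonal
print(np.isclose(np.linalg.det(A), 1.0))     # True: det A = 1
```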
{ "language": "en", "url": "https://math.stackexchange.com/questions/68119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 7, "answer_id": 6 }
Expectation of supremum Let $x(t)$ be a real-valued stochastic process and $T>0$ a constant. Is it true that: $$\mathbb{E}\left[\sup_{t\in [0,T]} |x(t)|\right] \leq T \sup_{t\in [0,T]} \mathbb{E}\left[|x(t)|\right] \text{ ?}$$ Thanks for your help.
Another way of looking at Zhen's comment: If $x(t)$ is measured in inches and $T$ in seconds, then the left side would be in inches, whereas the right side would be in seconds times inches. So if we change the units of time from seconds to eons, the numerical value on the right gets smaller, but that on the left does not. The numerical value on the right can be made as close to $0$ as desired by making the units of time big enough, whereas that on the left remains fixed at a positive number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Notations involving squiggly lines over horizontal lines Is there a symbol for "homeomorphic to"? I looked on Wikipedia, but it doesn't seem to mention one? Also, for isomorphism, is the symbol a squiggly line over an equals sign? What is the symbol with a squiggly line over just one horizontal line? Thanks.
I agree with Qiaochu Yuan's answer for the most part, but if you are working in an area where you must distinguish between homeomorphism, homotopy equivalence, and diffeomorphism, the standard notation becomes ambiguous. This has been relevant for me because I've been studying the interplay of topology and geometry for hyperbolic 3-manifolds. In this context what seems to be the most consistent is $\sim$ for homotopy equivalence, $\simeq$ for homeomorphic, and $\approx$ for diffeomorphic. This way $\sim$ agrees with usage as indicating same members of an equivalence class, where the equivalence class is that of the fundamental group; and $\approx$ agrees with what geometers like (for instance John Lee), and $\simeq$ is just a decent choice for something that looks halfway between them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 2 }
Do there exist points in a perfect metric space satisfying these conditions? This question popped up the other day while browsing about perfect spaces. Suppose $(X,\rho)$ is a perfect metric space, and I take any ball $B_r(x)$, the ball centered at $x$ of radius $r$. Out of curiosity, for any $b>0$ sufficiently small, can I then find distinct points $y,z\in B_r(x)$ such that the closures $Y:=\operatorname{cl}B_b(y)$ and $Z:=\operatorname{cl}B_b(z)$ are disjoint, both contained in $\operatorname{cl}B_r(x)$, and both perfect in the induced topology? Thanks!
Yes. You can do better than that. Let $y$ and $z$ be distinct points in $B_r(x)$. Choose a positive number $b < \min\{r-\rho(x,y),\ r-\rho(x,z),\ \rho(y,z)/2\}$. Then $$\begin{align*}&\operatorname{cl}B_b(y) \subseteq B_r(x), \text{ since }b<r-\rho(x,y);\\ &\operatorname{cl}B_b(z) \subseteq B_r(x), \text{ since }b<r-\rho(x,z);\text{ and}\\ &\operatorname{cl}B_b(y) \cap \operatorname{cl}B_b(z)=\varnothing, \text{ since }b<\rho(y,z)/2. \end{align*}$$ Suppose that $p \in \operatorname{cl}B_b(y)$. If $p \in B_b(y)$, $p$ is clearly not isolated in $\operatorname{cl}B_b(y)$ (since it’s not isolated in $X$, and $B_b(y)$ is open in $X$). If $p \in \operatorname{cl}B_b(y) \setminus B_b(y)$, let $V$ be any relatively open nbhd of $p$ in $\operatorname{cl}B_b(y)$. Then there is an open $U$ in $X$ such that $V = U \cap \operatorname{cl}B_b(y)$, and since $p \in \operatorname{cl}B_b(y)$, $U \cap B_b(y) \ne \varnothing$. But $U\cap B_b(y) \subseteq V$, so $V\cap B_b(y) \ne \varnothing$. Since $p\notin B_b(y)$, this implies that $V \ne \{p\}$, and since $V$ was an arbitrary relatively open nbhd of $p$ in $\operatorname{cl}B_b(y)$, $p$ is not isolated in $\operatorname{cl}B_b(y)$. Thus, $\operatorname{cl}B_b(y)$ is perfect. The argument that $\operatorname{cl}B_b(z)$ is perfect is mutatis mutandis identical.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Popcorn function I need to construct a function whose set of points of discontinuity is a given $F_\sigma$ set. In the Follow-Up section of Wikipedia's article on the popcorn function, they give an example of such a function. I just don't see why we would need closedness of the sets in the proof. Could somebody briefly explain the proof? Thanks
It’s convenient to assume that the closed sets $F_n$ are increasing, so that $F_1 \subseteq F_2 \subseteq F_3 \subseteq \dots$. This is a harmless assumption, since the union of any finite number of closed sets is closed: just replace $F_n$ by $\bigcup_{i=1}^n F_i$. Now we have $A = \bigcup_{n=1}^\infty F_n$, and we define the function $$f_A(x) = \begin{cases} \frac1n,&\text{ if }x\text{ is rational and }n\text{ is minimal so that }x\in F_n\\ \frac{-1}{n}&\text{ if }x\text{ is irrational and }n\text{ is minimal so that }x\in F_n\\ 0,&\text{ if }x \notin A. \end{cases}$$ First we show that $f_A$ is continuous at each point of $\mathbb{R}\setminus A$. Suppose that $x \in \mathbb{R}\setminus A$; clearly $f_A(x)=0$. If $x$ has a nbhd $V$ disjoint from $A$, then $f_A(y)=0$ for every $y \in V$, so $f_A$ is certainly continuous at $x$. Assume now that $x$ has no such nbhd, so that every nbhd of $x$ meets $A$. For $n \in \mathbb{Z}^+$ let $V_n = \mathbb{R}\setminus F_n$; each $V_n$ is a nbhd of $x$. Suppose that $y \in V_m$ for some $m$. If $y \notin A$, then $f_A(y) = 0$. Otherwise, $f_A(y) = \pm 1/n$, where $n$ is minimal with $y \in F_n$. Since $y \in V_m$, $y \notin F_m$; and since the $F_i$ are nested, $y \notin F_i$ for any $i \le m$. Thus, $n>m$. This shows that $$\vert f_A(y)\vert < \frac1m$$ for every $y \in V_m$. In other words, given any $\epsilon > 0$, we can choose a positive integer $m$ such that $1/m < \epsilon$, and $V_m$ will be a nbhd of $x$ such that $\vert f_A(y)-f_A(x)\vert < \epsilon$ for every $y \in V_m$. This of course means that $f_A$ is continuous at $x$. It remains to show that $f_A$ is discontinuous at each point of $A$. Fix $x\in A$, and assume that $x$ is rational. (The argument for irrational $x$ is almost identical.) Then $f_A(x) = 1/n$ for some $n \in \mathbb{Z}^+$, and we have to consider two possibilities. First, it may happen that $x$ has a nbhd $V\subseteq F_n$. 
If $n>1$, we may further assume that $V \subseteq V_{n-1}$, since $x$ is not in the closed set $F_{n-1}$. (Here I’m using the fact that $F_{n-1}$ is closed.) Let $W$ be any nbhd of $x$; then $W\cap V$ is a nbhd of $x$, so it contains some irrational $y$. But $W \cap V \subseteq F_n$, so $y\in F_n$; moreover, $y\in V_{n-1}$ if $n>1$, so $f_A(y) = -1/n$. Thus, each nbhd of $x$ contains a point $y$ such that $$\vert f_A(y)-f_A(x)\vert = \left\vert\frac1n-\frac{-1}n\right\vert=\frac2n,$$ and $f_A$ must be discontinuous at $x$. The other possibility is that no nbhd of $x$ is contained in $F_n$. Then if $V$ is a nbhd of $x$, $V\cap V_n \ne \varnothing$. Let $y \in V\cap V_n$ be irrational. (Here again I’m using the fact that $F_n$ is closed: I need to know that $V\cap V_n$ is open in order to be sure that it contains an irrational.) If $y\in A$, $f_A(y)<0$, and if $y\in \mathbb{R}\setminus A$, $f_A(y)=0$, so in any case $f_A(y)\le 0$. But then $$\vert f_A(y)-f_A(x)\vert = f_A(x)-f_A(y) \ge f_A(x) = \frac1n,$$ so again $f_A$ must be discontinuous at $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
summation of x * (y choose x) binomial coefficients What does this summation simplify to? $$ \sum_{x=0}^{y} \frac{x}{x!(y-x)!} $$ I was able to realize that it is equivalent to the summation of $x\dbinom{y}{x}$ if you divide and multiply by $y!$, but I am unsure of how to further simplify. Thanks for the help!
For a combinatorial proof of the non-obvious step in Zev’s argument, $$\sum_{x=0}^y x\binom{y}{x} = y2^{y-1},$$ suppose that you have $y$ children, and you want to choose a team (of any size) from the group. However, a team is required to have a captain, and two teams are counted differently if they have different captains, even if they have exactly the same members. For any $x$ there are $\dbinom{y}{x}$ ways to choose $x$ children to form a team, and there are then $x$ ways to choose the captain of the team, so $x\dbinom{y}{x}$ is the number of ways of choosing a ‘captained’ team of $x$ players. Thus, the sum on the left-hand side of the equation gives the total number of possible ‘captained’ teams. On the other hand, we could first choose a captain and then choose the rest of the team. There are $y$ ways to choose a captain. Once the captain has been chosen, there are $2^{y-1}$ subsets of the remaining $y-1$ children that could form the rest of the team, so there are $y2^{y-1}$ ways to form a ‘captained’ team.
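The identity is also easy to confirm by brute force; here is a small Python check (my own addition) over a range of values of $y$:

```python
from math import comb

# Verify sum_{x=0}^{y} x * C(y, x) == y * 2^(y-1) for small y
for y in range(1, 12):
    lhs = sum(x * comb(y, x) for x in range(y + 1))
    assert lhs == y * 2**(y - 1)
print("identity holds for y = 1..11")
```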
{ "language": "en", "url": "https://math.stackexchange.com/questions/68384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Quadratic iterative system A general linear iterative system can be represented as a matrix: $$(x,y)\mapsto(ax+by,cx+dy)$$ is essentially the same as $$\left[\begin{array}{cc} a&b\\ c&d\\ \end{array}\right] \left[\begin{array}{c} x\\ y\\ \end{array} \right]$$ which is useful because it can be iterated quickly (matrix exponentiation) and enables various matrix techniques for determining asymptotic behavior and the like. (Of course the number of variables can be increased as needed.) Is there a similar tool for quadratic iterative systems like $$(x,y)\mapsto(ax^2+bxy+cy^2,dx^2+exy+fy^2)$$ ? I'm interested in computing the $n$th iterate ($n$ not too small), finding asymptotic behavior, and any other interesting things that can be determined for a given collection of constants $a,b,\ldots$. My immediate interest (genetics, oddly enough) does not use any of the diagonal terms $x^2,y^2$ so a treatment that ignores them would be fine (though I suspect including them is more natural).
The short answer is no. The dynamics of linear maps is very easy to understand, as you mention, but the dynamics of nonlinear maps usually is very complicated, and there is no easy way to describe the iteration or "asymptotics". Recall that the dynamics of the logistic map $$ x\mapsto \lambda x(1-x)$$ can be very complicated ("chaotic"), and can depend sensitively both on the starting value and on the parameter $\lambda$. In your setting, we can simulate this map by studying $$(x,y)\mapsto (-\lambda x^2 + \lambda xy , y^2),$$ and using a starting value with $y=1$. However, in the two-variable case, it may be interesting to note that we can project $\mathbb{R}^2$ to projective space (since your polynomial is homogeneous), and the iteration is semiconjugate to a one-dimensional map. More precisely, if we set $p := x/y$, then your map is semiconjugate to the quadratic rational map $$ R(p) = \frac{ap^2+bp+c}{dp^2+ep+f}.$$ I mention this because dynamics in one variable is much, much better understood than dynamics in several variables. For example, the Hénon family, in two variables, still poses many mysteries, while the real quadratic family (in one variable) is by now rather well-understood. (Although it takes a lot of deep mathematics!)
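To illustrate the sensitive dependence mentioned above, here is a short sketch (the parameter and starting values are my own choice) iterating the logistic map $x\mapsto\lambda x(1-x)$ at $\lambda=4$ from two nearby starting points:

```python
def logistic(x, lam=4.0):
    """One step of the logistic map x -> lam * x * (1 - x)."""
    return lam * x * (1.0 - x)

# Two starting values differing by 1e-10
x, y = 0.2, 0.2 + 1e-10
for n in range(60):
    x, y = logistic(x), logistic(y)

# At lambda = 4 nearby orbits separate roughly like 2^n,
# so after 60 steps the two orbits are typically far apart.
print(abs(x - y))
```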
{ "language": "en", "url": "https://math.stackexchange.com/questions/68454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is there a rule of integration that corresponds to the quotient rule? When teaching the integration method of u-substitution, I like to emphasize its connection with the chain rule of integration. Likewise, the intimate connection between the product rule of derivatives and the method of integration by parts comes up in discussion. Is there an analogous rule of integration for the quotient rule? Of course, if you spot an integral of the form $\int \left (\frac{f(x)}{g(x)} \right )' = \int \frac{g(x) \cdot f(x)' - f(x) \cdot g(x)'}{\left [ g(x)\right ]^2 }$, then the antiderivative is obvious. But is there another form/manipulation/"trick"?
As for me, I cannot see an advantage in introducing such a rule, since for any two functions $f,g$ it clearly holds that $$ \frac fg = f\cdot\frac1g $$ so the 'quotient rule' for derivatives is a product rule in disguise, and the same holds for integration by parts. Indeed, when you are looking for the proper function to put under the differential sign while integrating by parts, once you have a bit of experience with the procedure, you will also think about 'quotients'. As an example: $$ \int\frac{\sin\frac1x}{x^2}\,dx $$ Of course you can present it as $\frac{f(x)}{x^2}$ and apply the new integration by parts based on the quotient rule, but I am almost sure that many readers will instead think of the fact that $\frac1{x^2}\,dx = -d\frac1x$, thereby seeing a product in the integrand rather than a quotient.
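The example can be checked symbolically; the following sketch (my own verification, using sympy) confirms that $\cos\frac1x$ is an antiderivative of the integrand, which is what the substitution $\frac1{x^2}\,dx = -d\frac1x$ produces:

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(1/x) / x**2

# d/dx cos(1/x) = sin(1/x)/x^2, so cos(1/x) is an antiderivative
antiderivative = sp.cos(1/x)
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0
```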
{ "language": "en", "url": "https://math.stackexchange.com/questions/68505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 2 }
Why do these inequalities in metric spaces hold? The other day I stumbled across some inequalities regarding properties of metric spaces. I'm curious to see a proof of why they hold. Suppose $(X,\rho)$ is any metric space. For a given $\epsilon\gt 0$, I let $N(X,\epsilon)$ denote the least $n$ such that $X=\bigcup\limits_{i=1}^n U_i$ where the $U_i$ are sets such that $\operatorname{diam}(U_i)\leq 2\epsilon$. I also denote by $M(X,\epsilon)$ the greatest number $m$ of points $x_i$, $1\leq i\leq m$, such that $\rho(x_i,x_j)\gt\epsilon$ whenever $i\neq j$. With this notation, why is it that $N(X,\epsilon)\leq M(X,\epsilon)$ and $M(X,\epsilon)\leq N(X,\epsilon/2)$? Thanks.
For given $\epsilon$, pick a set of $M(X,\epsilon)$ points at pairwise distances greater than $\epsilon$, and form closed balls of radius $\epsilon$ around them. If there is a point in $X$ that belongs to none of these balls, we can add it to the set, contradicting the maximality of $M(X,\epsilon)$. Thus these $M(X,\epsilon)$ sets of diameter at most $2\epsilon$ cover $X$, and hence $N(X,\epsilon)\le M(X,\epsilon)$. For the other direction, note that a set of diameter at most $2\cdot\epsilon/2=\epsilon$ can contain at most one point of a set of $M(X,\epsilon)$ points at pairwise distances greater than $\epsilon$; thus we need at least $M(X,\epsilon)$ such sets to cover $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On the GCD of a Pair of Fermat Numbers I've been working with the Fermat numbers recently, but this problem has really tripped me up. If the Fermat numbers are defined by $f_a=2^{2^a}+1$, then how can we show that for an integer $b<a$ we have $\gcd(f_b,f_a)=1$?
This is because the Fermat numbers belong to the companion Lucas sequence $V(3,2) = 2^{k} + 1$. Hence, all the prime factors of either $V_{p}$, where $p \neq 3$ is a prime, or $V_{2^{n}}$, are primitive; that is, they enter the sequence for the first time as factors at that very term. So, as every prime factor of $f_{a}$ is primitive, it follows that $\gcd(f_{a},f_{b}) = 1$ when $b < a$.
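For a concrete sanity check, here is a small Python sketch (my own addition) verifying pairwise coprimality of the first few Fermat numbers:

```python
from math import gcd

fermat = [2**(2**a) + 1 for a in range(6)]  # f_0 .. f_5

# Every pair with b < a should be coprime
for a in range(len(fermat)):
    for b in range(a):
        assert gcd(fermat[b], fermat[a]) == 1
print("f_0, ..., f_5 are pairwise coprime")
```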
{ "language": "en", "url": "https://math.stackexchange.com/questions/68653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Convexity of intersection I have been asked to prove that, given a convex set $C$, its intersection with a line is also convex. From the definition of convexity, I have that $\forall x_1,x_2\in C$, $\alpha x_1+\beta x_2 \in C$ with $\alpha,\beta\ge0$, $\alpha+\beta=1$. If I have $x_3,x_4 \in C \cap L$, every point in the line can be expressed as $x = x_4 + t (x_3-x_4)=tx_3+(1-t)x_4$. If the convex combination $\alpha x_3+(1-\alpha) x_4$ belongs to $C$, then it is trivial to show that, just by taking $t=\alpha$, it also belongs to $L$, hence to $C \cap L$. As always, I think I am oversimplifying or forgetting something. Is my proof right, or am I missing the crucial point? Thanks in advance.
In general, the intersection of two convex sets is again a convex set. To do this, pick $x,y \in C_1 \cap C_2$ and $\alpha \in (0,1)$. Then by convexity of $C_1$ we have that $\alpha x +(1-\alpha)y \in C_1$. The same argument proves that $\alpha x +(1-\alpha)y \in C_2$ so $\alpha x +(1-\alpha)y \in C_1\cap C_2$ and by definition $C_1 \cap C_2$ is convex. A line is a convex set, so the intersection of a line and a convex set is again convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve $t_{n}=t_{n-1}+t_{n-3}-t_{n-4}$? I missed the lectures on how to solve this, and it's really kicking my butt. Could you help me out with solving this? Solve the following recurrence exactly. $$ t_n = \begin{cases} n, &\text{if } n=0,1,2,3, \\ t_{n-1}+t_{n-3}-t_{n-4}, &\text{otherwise.} \end{cases} $$ Express your answer as simply as possible using the $\Theta$ notation.
Hint: Write $t_n = r^n$, substitute into the definition of $t_n$ for $n>3$ and solve for $r$. You get the characteristic equation $r^4-r^3-r+1=0$, which factors as $(r-1)^2(r^2+r+1)=0$, so $r=1$ is a double root and the remaining two roots, call them $r_3$ and $r_4$, are the complex roots of $r^2+r+1=0$. Because the defining equation is linear, and because of the double root (which contributes a term with a factor of $n$), you can write $$t_n = A + Bn + C r_3^n + D r_4^n$$ Now how can you work out what the constants $A$, $B$, $C$, $D$ are?
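A quick computation (my own check, not part of the hint) shows what the closed form must come out to: with these initial values the recurrence simply reproduces $t_n = n$, which is $\Theta(n)$.

```python
def t(n):
    """Compute t_n from t_n = t_{n-1} + t_{n-3} - t_{n-4}, t_0..t_3 = 0..3."""
    vals = [0, 1, 2, 3]
    for k in range(4, n + 1):
        vals.append(vals[-1] + vals[-3] - vals[-4])
    return vals[n]

for n in range(50):
    assert t(n) == n  # so t_n = n exactly, hence Theta(n)
print(t(10))  # 10
```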
{ "language": "en", "url": "https://math.stackexchange.com/questions/68822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Sum of a series of minimums I need to find the sum of the following minimums. Is there a closed form? $$\min\left\{2,\frac{n}2\right\} + \min\left\{3,\frac{n}2\right\} + \min\left\{4,\frac{n}2\right\} + \cdots + \min\left\{n+1, \frac{n}2\right\}=\sum_{i=1}^n \min\left(i+1,\frac{n}2\right)$$
Suppose first that $n$ is even, say $n=2m$. Then $$\min\left\{i,\frac{n}2\right\}=\min\{i,m\}=\begin{cases}i,&\text{if }i\le m\\ m,&\text{if }i\ge m. \end{cases}$$ Thus, $$\begin{align*} \sum_{i=2}^{n+1}\min\left\{i,\frac{n}2\right\} &= \sum_{i=2}^m i + \sum_{i=m+1}^{n+1} m\\ &= \frac{m(m+1)}2-1 + (n+1-m)m\\ &= \frac12\left(\frac{n}2\right)\left(\frac{n}2+1\right)-1+\left(\frac{n}2\right)\left(\frac{n}2+1\right)\\ &= \frac{3n(n+2)}{8}-1\\ &=\frac{3n^2+6n-8}8. \end{align*}$$ If $n$ is odd, say $n=2m+1$, $$\min\left\{i,\frac{n}2\right\}=\min\left\{i,m+\frac12\right\}=\begin{cases}i,&\text{if }i\le m\\ m+\frac12,&\text{if }i> m. \end{cases}$$ Thus, $$\begin{align*} \sum_{i=2}^{n+1}\min\left\{i,\frac{n}2\right\} &= \sum_{i=2}^m i + \sum_{i=m+1}^{n+1} \left(m+\frac12\right)\\ &= \frac{m(m+1)}2-1 + (n+1-m)\left(m+\frac12\right)\\ &= \frac12\left(\frac{n-1}2\right)\left(\frac{n+1}2\right)-1+\left(\frac{n}2\right)\left(\frac{n+3}2\right)\\ &= \frac{3n^2+6n-1}{8}-1\\ &= \frac38 (n^2+2n-3). \end{align*}$$
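Both closed forms can be verified against the direct sum; here is a short Python check (my own addition). Note that the split used above needs the sum to reach $i=m$, so I check $n\ge 2$ (for $n=1$ the sum has a single term and falls outside the derivation):

```python
def direct(n):
    # sum_{i=2}^{n+1} min(i, n/2)
    return sum(min(i, n / 2) for i in range(2, n + 2))

def closed(n):
    if n % 2 == 0:
        return (3 * n**2 + 6 * n - 8) / 8
    return 3 * (n**2 + 2 * n - 3) / 8

for n in range(2, 200):
    assert direct(n) == closed(n)
print("formulas agree for n = 2..199")
```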
{ "language": "en", "url": "https://math.stackexchange.com/questions/68873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that $\cos(x)$ is identically zero using integration by parts Consider $$\int\cos(t-x)\sin(x)dx,$$ where $t$ is a constant. Evaluating the integral by parts, let \begin{align} u = \cos(t-x),\ dv = \sin(x), \\ du = \sin(t-x),\ v = -\cos(x), \end{align} so $$ \int\cos(t-x)\sin(x)dx = -\cos(t-x)\cos(x) - \int\sin(t-x)\cdot-\cos(x)dx. $$ Evaluating the integral on the right by parts again (with a slight abuse of notation), \begin{align} u = \sin(t-x),&\quad dv = -\cos(x), \\ du = -\cos(t-x),&\quad v = -\sin(x), \end{align} we get \begin{align} \int\cos(t-x)\sin(x)dx &= -\cos(t-x)\cos(x) - \left( -\sin(t-x)\sin(x)-\int\cos(t-x)\sin(x)dx\right) \\ &= -\cos(t-x)\cos(x) + \sin(t-x)\sin(x) + \int\cos(t-x)\sin(x)dx, \end{align} and subtracting the integral from both sides, we obtain the dazzling new identity $$\sin(t-x)\sin(x)-\cos(t-x)\cos(x)=0$$ for all $t$ and $x$! Pushing it further, the LHS expression is $-\cos(t)$, and as $t$ was just an arbitrary constant, this implies $\cos(x)$ is identically zero! Now I obviously know something's wrong here. But what, and where? Where's the flaw in my reasoning? P.S. I can evaluate the integral to get the proper answer lol. But this was rather interesting.
The issue is that you're working with indefinite integrals, so you have to be careful about arbitrary constants of integration. When using by-parts integration on indefinite integrals you're only guaranteed to get another antiderivative, not necessarily the same antiderivative as this example clearly demonstrates. Here it turns out that the LHS is the original antiderivative, and the RHS is the same antiderivative plus $-\cos t$ (which is a constant with respect to $x$). If you worked the same example out with definite integration instead, you would wind up with $$0=-\cos( t)\big|_{x=a}^{x=b}=-\cos t-(-\cos t)=0. $$
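One can see the hidden constant explicitly with sympy (my own illustration): the "dazzling identity" is not zero, but a constant in $x$, precisely an allowable discrepancy between two antiderivatives.

```python
import sympy as sp

x, t = sp.symbols('x t')

# The expression obtained by subtracting the integral from both sides:
expr = sp.sin(t - x) * sp.sin(x) - sp.cos(t - x) * sp.cos(x)

# By the angle-addition formula this equals -cos((t-x) + x) = -cos(t),
# which depends on t but is constant with respect to x.
print(sp.trigsimp(expr))  # -cos(t)
```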
{ "language": "en", "url": "https://math.stackexchange.com/questions/68926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Extremally disconnected space Can one view $\ell_{\infty}^{4}$, $\mathbb{R}^{4}$ equipped with the $\ell_{\infty}$-norm, as a space of continuous functions on some extremally disconnected space? and what would the extremally disconnected space be? Thanks!
Take the four point space $X = \{0,1,2,3\}$ with the discrete topology. Then $C(X,\mathbb{R}) = \ell^{\infty}(X,\mathbb{R})$ is the space $\mathbb{R}^4$ equipped with the $\sup$-norm. Obviously, $\{0,1,2,3\}$ is extremally disconnected, as every subset is closed and open. Added: Of course, this generalizes to all finite dimensions. If $n = \{0,1,2,\ldots,n-1\}$ is the $n$-point space with the discrete topology, then $C(n) = \ell^{\infty}(n)$ is $\mathbb{R}^n$ with the $\sup$-norm. There is no wiggle room here: by a theorem of Banach (metric case) and Stone (general case), two spaces $C(K)$ and $C(L)$ with $K$ and $L$ compact Hausdorff are isometrically isomorphic if and only if $K$ and $L$ are homeomorphic. The proof is not hard: if $T: C(K) \to C(L)$ is an isometric isomorphism, then its adjoint is a homeomorphism from the unit ball of $C(L)^{\ast}$ to that of $C(K)^{\ast}$ in the weak$^{\ast}$-topology. One then finishes off by observing that $K$ and $L$ can be recovered as extreme points of the unit balls. Details can be found e.g. in Theorem 2 of §3 in Chapter V on page 115 of Day's Normed Linear Spaces, 3rd edition, Springer, 1973.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is a good book for learning math, from middle school level? Which books are recommended for learning math from the ground up and review the basics - from middle school to graduate school math? I am about to finish my masters of science in computer science and I can use and understand complex math, but I feel like my basics are quite poor.
I like "Mathematics for the Million" by Lancelot Hogben, here for about $12: http://www.amazon.com/Mathematics-Million-Master-Magic-Numbers/dp/039331071X/ref=sr_1_1?ie=UTF8&qid=1417675748&sr=8-1&keywords=Mathematics+for+the+Million
{ "language": "en", "url": "https://math.stackexchange.com/questions/69060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 15, "answer_id": 12 }