Is $\frac{d}{dx} \ln(\phi(x))$ decreasing where $\phi$ is the Gaussian cumulative distribution function? I was plotting the function $x \mapsto \frac{d}{dx} \ln(\phi(x))$ and it seems that this is a decreasing function, but somehow I cannot prove it. Here, $\phi$ is the Gaussian cumulative distribution function, i.e. $$ \phi(x) = \int_{-\infty}^x \gamma(t) \, dt $$ and $\gamma$ is the Gaussian density. Do you have any suggestion how to show this?
$(\ln \phi(x))'=\frac {\gamma (x)} {\phi(x)}$. In view of smoothness this function is decreasing iff its derivative is $\leq 0$. So we require $\phi \gamma' \leq \gamma^{2}$, or $\phi \leq \frac {\gamma^{2}} {\gamma'}$. Look at the behavior at $\infty$ to see that this is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do Bourbaki define the characteristic of a ring the way they do? Here is the definition of the characteristic of a ring $R$ that is common in everyday usage (for example in Lang's Algebra and Wikipedia): take the unique homomorphism $$ \mathbb{Z} \to R $$ and define $\operatorname{char}(R)$ to be the smallest nonnegative integer that generates the kernel of this map. And yet on page A.V.2 of Bourbaki's Algebra II: Chapters 4 - 7 the characteristic is defined in a way that excludes rings without subrings which are fields. This leaves the ring of integers without a characteristic. While I can appreciate that $\mathbb{Z}$ is best considered as a mixed characteristic ring, what was the purpose of defining the characteristic like that?
I can't speak to their motivation, but it seems that a little later on they remark that a quotient of a ring has the same characteristic as the original ring, which would be false if they allowed rings not containing fields.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
If $R_1$ and $R_2$ have the same cardinality, then $R_1 = R_2$ In proving one direction (the "at most" part) of this statement from Wikipedia, I rephrase it as the theorem below. Could you please leave me some hints (not involving modular arithmetic) to prove it. Let

* $0<p<q < N$ be natural numbers,
* $R_1$ be the set of remainders when all multiples of $p$ are divided by $N$,
* $R_2$ be the set of remainders when all multiples of $q$ are divided by $N$.

If $R_1$ and $R_2$ have the same cardinality, then $R_1 = R_2$. Thank you so much for your help!
HINT. Let $a=\gcd (p,N).$ Let $p=p'a$ and $N=N'a.$ Then $\gcd(p',N')=1.$ Now $p'ax=px=Ny+r=N'ay+r$ with $r\in R_1$ iff $r=r'a$ for some $ r'\in R'_1$, where $R'_1$ is the set of remainders when multiples of $p'$ are divided by $N'.$ So $R_1=\{ar': r'\in R'_1\}.$ Use $\gcd(p',N')=1$ to obtain $R'_1=\{0,..., N'-1\},$ which has $N'=N/a=N/\gcd(p,N)$ members.
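The hint's conclusion can be checked by brute force (a sketch I added, not part of the original answer; the function name is mine): the set of remainders of multiples of $p$ modulo $N$ is exactly the set of multiples of $\gcd(p,N)$, so it has $N/\gcd(p,N)$ elements.

```python
from math import gcd

def remainders(p, N):
    """Set of remainders when all multiples of p are divided by N."""
    # k*p mod N cycles with period N/gcd(p, N), so k = 0..N-1 covers everything
    return {(k * p) % N for k in range(N)}

# The hint's claim: R_1 is exactly the set of multiples of a = gcd(p, N),
# hence |R_1| = N / gcd(p, N).
for N in range(2, 60):
    for p in range(1, N):
        a = gcd(p, N)
        assert remainders(p, N) == set(range(0, N, a))
        assert len(remainders(p, N)) == N // a
```

Consequently, if $|R_1| = |R_2|$ then $\gcd(p,N) = \gcd(q,N)$, and both sets equal the multiples of that common gcd.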
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Parametrizing the curve of intersection of the sphere $x^2+y^2+z^2=5$ and the cylinder $x^2+\left(y-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$. I've seen similar posts here but none of the answers helped me. I am trying to parametrize the curve of intersection of the (top half, $z>0$) sphere $x^2+y^2+z^2=5$ and the cylinder $x^2+\left(y-\frac{1}{2}\right)^2=\left(\frac{1}{2}\right)^2$. I tried $$x=\frac{1}{2}\cos t $$ $$y=\frac{1}{2}+\frac{1}{2}\sin t$$ $$z=\sqrt{5-\left(\frac{1}{2}\cos t\right)^2-\left(\frac{1}{2}+\frac{1}{2}\sin t\right)^2}=\sqrt{\frac{9}{2}-\frac{1}{2}\sin t}$$ for $t \in (0,2\pi)$, but I don't think it is correct. Even if it were, is there a better (simpler) approach? Note: I need to find the circulation of the field $F=(y+z,x-z,0)$ over this curve, so I need a good parametrization so that I am able to integrate it.
A third alternative is using Stokes' theorem: parameterize the surface enclosed by the curve and integrate... At first it seems like you are going to end up with a nicer integral \begin{eqnarray} && \int_0^\pi d\phi \int_0^{\arcsin\left[\frac{1}{\sqrt{5}}\sin(\phi)\right]} d\theta \, \left[5 \sin(\theta)\right] (\nabla \times F)\cdot(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta) = \\ && \int_0^\pi d\phi \int_0^{\arcsin\left[\frac{1}{\sqrt{5}}\sin\phi\right]} d\theta \, \, 5 \sqrt{2} \sin^2 \theta \sin \left( \phi + \frac{\pi}{4}\right) = \\ && \int_0^\pi d\phi \, \frac{1}{2} \left(\cos \phi + \sin \phi \right) \left\{ 5 \arcsin \left[\frac{1}{\sqrt{5}}\sin\phi\right] - \sin\phi \sqrt{5 - \sin^2\phi}\right\} \end{eqnarray} but, at least at first sight, it does not seem especially simpler than the integral you get from the line integral.
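Independently of which integral one ends up evaluating, the parametrization proposed in the question can be sanity-checked numerically (a sketch I added; the function name is mine). Substituting the parametrization into both surface equations should give identities:

```python
import math

def curve(t):
    """The parametrization proposed in the question (upper half, z > 0)."""
    x = 0.5 * math.cos(t)
    y = 0.5 + 0.5 * math.sin(t)
    z = math.sqrt(4.5 - 0.5 * math.sin(t))  # 5 - x^2 - y^2 simplifies to this
    return x, y, z

# Check that every sampled point lies on both surfaces.
for k in range(1000):
    t = 2 * math.pi * k / 1000
    x, y, z = curve(t)
    assert abs(x**2 + y**2 + z**2 - 5) < 1e-12          # sphere
    assert abs(x**2 + (y - 0.5)**2 - 0.25) < 1e-12      # cylinder
    assert z > 0                                         # upper half
```

So the parametrization in the question is correct; the only real question is which integral is least painful.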
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Small Span Theorem & Understanding Span It seems I'm a bit confused about the concept of span. The definition of span makes complete sense to me (i.e. set of all linear combinations of vectors), but I'm confused how this works in the context of the Small Span Theorem: Let $f$ be continuous on $\left [a, b \right]$. Given $\epsilon > 0$, there exists a partition P : $x_0 < x_1 < ... < x_n$ of the interval $\left [a, b \right]$ s.t. $f$ is bounded on each closed subinterval $\left [x_{j-1}, x_j \right]$, and s.t. the span of $f$ on each closed subinterval is at most $\epsilon$. At this point, I'm trying to visualize what the theorem is saying, but the one part stopping me is confusion around how the span could be quantified. For the span of two vectors that fills the whole 2D plane, how could it be "at most" $\epsilon$? Are we actually dealing with the set of all $\epsilon > 0$ instead of just one real value? (This is less important but if anyone also has insight on the applications for the Small Span Theorem & what makes it important, that would be very interesting to learn once I can better understand the concept!)
What "span" means in this context seems to be the range of $f$ on a particular subinterval of the partition, not the linear-algebra notion. Basically, the theorem says that for a continuous function you can always partition $[a,b]$ in such a way that the maximum deviation of $f$ within any given subinterval is at most $\epsilon$. This is typically used to set up the Riemann integrability of all continuous functions: a direct consequence of the theorem is that the difference between the least upper bound and the greatest lower bound of $f$ on each subinterval (and hence the difference between the upper and lower Riemann sums) is also controlled by $\epsilon$ for such a partition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$1996$ Austrian-Polish Number theory problem Let $k \ge 1$ be a positive integer. Prove that there exist exactly $3^{k-1}$ natural numbers $n$ with the following properties: (i) $n$ has exactly $k$ digits (in decimal representation), (ii) all the digits of $n$ are odd, (iii) $n$ is divisible by $5$, (iv) the number $m = n/5$ has $k$ odd digits. My work - Let $n=10^{k-1}a_1+10^{k-2}a_2+...+a_k$. Now because all digits of $n$ are odd and $5 \mid n$ we have $a_k =5$. Now $m=n/5=2(10^{k-2}a_1+10^{k-3}a_2+...+a_{k-1})+1$. For $k=2$ I found that $n=55,75,95$, but I am not able to prove it in general... The hint says that all digits of $m$ must be $1,5$, or $9$, so there are $3^{k-1}$ choices for $m$, hence for $n$, but I am not able to understand why all digits of $m$ must be $1,5$, or $9$? Thank you
As you found out yourself (with the exception of the $+1$ mentioned by user3052655, coming from dividing $a_k=5$ by $5$), we have $$\begin{eqnarray} m=n/5 & = & 2(10^{k-2}a_1+10^{k-3}a_2+...+a_{k-1})+1 \\ & = & 10^{k-2}(2a_1)+10^{k-3}(2a_2)+\ldots+10^1(2a_{k-2})+(2a_{k-1}+1). \end{eqnarray}$$ If you look at the last line, this looks suspiciously like the decimal representation of a number with $k-1$ digits, all but the last of which are even, which is not what we want, so how can this become a $k$-digit number with all odd digits? The answer is the carry, of course. If any $2a_i$ value is $10$ or greater, the decimal digit will be $2a_i-10$ and the next higher digit gets a carry. Since the carries start from the lowest value digits, let's start with the $1$-digit, $2a_{k-1}+1$: it's odd, so at the moment there is no further condition on $a_{k-1}$ (in addition to being odd, as $a_{k-1}$ is a digit of $n$, which has only odd digits). Now let's look at the $10$-digit, $2a_{k-2}$. It's even, and even if it were $\ge 10$, $2a_{k-2}-10$ is again an even digit. The only way to make it an odd digit is if there is a carry from the $1$-digit. So now we need $2a_{k-1}+1 \ge 10$, which leaves us with exactly 3 options: $a_{k-1}=5,7$ or $9$. So now that we have the carry from the $1$-digit, the $10$-digit is actually $2a_{k-2}+1$ (if there is no carry from this digit) or $2a_{k-2}-9$ (if there is a carry from this digit), both of which are odd, so that's what we want. From now on, this argument continues throughout all the digits. Each time, for a digit of $m$ ($2a_i$) to become odd, there must be a carry from the next lower value digit ($2a_{i+1}+1$, after applying the carry from the digit before), which can only happen if $a_{i+1}$ is $5,7$ or $9$. This continues until we find that $a_2$ must be $5,7$ or $9$. This makes $2a_1+1$ odd, even if $a_1=1$ or $3$. But in those cases, the resulting number has only $k-1$ digits, which contradicts condition (iv) of the problem.
So again we need $a_1$ to be at least $5$, so that $2a_1+1$ is at least $10$ and produces a carry, giving an actual $k$-th digit ($1$) for $10^{k-1}$. If you look back, we found that $a_k$ must be $5$, while for $i=1,2,\ldots,k-1$ we have $a_i=5,7$ or $9$. This means there are exactly $3^{k-1}$ such numbers, and I leave it to you to check that they are actually solutions (which isn't hard, considering the necessary conditions for producing a carry are also sufficient).
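For small $k$ the count $3^{k-1}$ can be confirmed by brute force (a sketch I added, not part of the original answer; the function name is mine):

```python
def count_solutions(k):
    """Count k-digit n with all odd digits, 5 | n, and m = n/5 having k odd digits."""
    odd = set('13579')
    total = 0
    for n in range(10**(k - 1), 10**k):
        if n % 5:
            continue                          # condition (iii)
        if not set(str(n)) <= odd:
            continue                          # condition (ii)
        m = n // 5
        if len(str(m)) == k and set(str(m)) <= odd:
            total += 1                        # condition (iv)
    return total

for k in range(1, 6):
    assert count_solutions(k) == 3**(k - 1)
```

For $k=2$ this recovers exactly the three numbers $55, 75, 95$ found in the question.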
{ "language": "en", "url": "https://math.stackexchange.com/questions/3736233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
For $x^a = e^2x$, determine the solutions as a power of $e$ For $x^a = e^2x$, determine the solutions as a power of $e$. One obvious solution is $0$, but I fail to find the second solution. $x^a - e^2x = 0$ $x(x^{a-1} - e^2) = 0$ I am not sure how to find the other solution. Please help.
Let $x=e^k$. Then $$e^{ka}=e^{k+2}\Rightarrow ka=k+2\Rightarrow k= \frac{2}{a-1},$$ so $$x=e^{\frac {2}{a-1}}.$$
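A quick numerical check of this closed form for several values of $a \ne 1$ (a sketch I added; the function name is mine):

```python
import math

def solution(a):
    """The nonzero solution x = e^(2/(a-1)) of x**a = e**2 * x, valid for a != 1."""
    return math.exp(2 / (a - 1))

# Verify x**a == e**2 * x for a few exponents (relative tolerance for floats).
for a in (3, 5, -2, 0.5):
    x = solution(a)
    assert abs(x**a - math.e**2 * x) < 1e-9 * max(1.0, abs(x**a))
```

For $a=3$ this gives $x = e^{2/2} = e$, and indeed $e^3 = e^2 \cdot e$.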
{ "language": "en", "url": "https://math.stackexchange.com/questions/3736384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that there exist infinitely many primes $p$ such that $13 \mid p^3+1$ $\textbf{Question:}$ Prove that there exist infinitely many primes $p$ such that $13 \mid p^3+1$. I could easily see that the given statement is equivalent to showing that there are infinitely many primes $p$ such that $p \equiv \{4,10,12\} \pmod{13}$. To prove this, I wanted to somehow generalize the idea used to show the infinitude of primes of the form $4k+1$. And yes, this whole thing follows easily from Dirichlet's theorem, but I was looking for a somewhat elementary proof.
Yes, there is an elementary proof which requires nothing more than quadratic reciprocity. Thanks to @lhf for the pointer to Murty’s classical paper on this. I had heard about it but never seen the proof before, and the result of Schur cited in the paper was enlightening. Murty’s paper is already sufficient to answer the OP because it implies we can pick out primes that are $12 \pmod{13}$, but it’s a bit unsatisfying not to include the other two residues by a simpler construction. This is indeed possible! Note that the set of residues $\{4,10,12\}$ is equal to the set difference of the subgroups $\langle 4 \rangle$ (of order 6) and $\langle 3\rangle = \{1,3,9\}$ in $(\mathbb Z/13\mathbb Z)^\times$. This affords us the following strategy:

1. Choose a polynomial $f$ such that the primes dividing $f(n)$ always lie in the subgroup $\langle 4 \rangle$.
2. Further select $f$ so that $f(n)$ does not belong to $\{1,3,9\} \pmod{13}$, so that at least one prime factor belongs to the coset $\{4,10,12\}$.

The theorem of Schur cited by Murty assures us that 1 is possible, but in this case it’s a familiar object: since $(\mathbb Z/13\mathbb Z)^\times$ is cyclic, the unique subgroup of order 6 is just the set of quadratic residues, so reciprocity gives us this easily by choosing $f$ so that $13$ is a quadratic residue mod $f(n)$, such as $f(n) = 4n^2 - 13$. To satisfy 2, we just need to tweak it very slightly: $f(n) := 52n^2-1$ will do. We now proceed with the Euclidean argument. Let $p_1, \ldots, p_k$ be any finite (possibly empty) list of primes congruent to $\{4,10,12\}$, and take $P = 52(p_1 \cdots p_k)^2 - 1 > 1$. Let $q$ be a prime divisor of $P$. Clearly $q$ is odd, and has $52$ (hence $13$) as a quadratic residue, so by reciprocity $q$ belongs to one of the residue classes $\{1,3,4,9,10,12\}$ mod 13. And $q$ cannot be equal to any of the $p_i$ since $P$ is coprime to all of them.
So $P$ is composed entirely of primes in those 6 classes, but since $P\equiv 12 \pmod{13}$ (and $P$ is positive), at least one prime divisor of $P$ does not belong to any of the 3 residue classes $\{1,3,9\}$ mod 13. This divisor is thus a prime congruent to $\{4, 10, 12\}$ mod 13 which does not appear in the original list of $k$ primes, which completes our argument.
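The first few rounds of this Euclidean argument can be run explicitly (a sketch I added, not part of the original answer; names are mine). Starting from the empty list, $P = 51 = 3 \cdot 17$ already yields the prime $17 \equiv 4 \pmod{13}$:

```python
def prime_factors(n):
    """Trial-division factorization; fine for the small numbers used here."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def next_prime(primes):
    """One round of the argument: from primes in {4,10,12} mod 13, build
    P = 52*(product)^2 - 1 and extract a new prime in those classes."""
    prod = 1
    for p in primes:
        prod *= p
    P = 52 * prod * prod - 1
    assert P % 13 == 12                       # since 52 is divisible by 13
    qs = prime_factors(P)
    # every prime factor has 13 as a QR, hence lies in {1,3,4,9,10,12} mod 13
    assert all(q % 13 in {1, 3, 4, 9, 10, 12} for q in qs)
    good = [q for q in qs if q % 13 in {4, 10, 12}]
    assert good and all(q not in primes for q in good)
    return good[0]

primes = []
for _ in range(3):                            # three rounds keep P small enough to factor
    primes.append(next_prime(primes))
```

Each round necessarily produces a prime not already on the list, which is the heart of the infinitude proof.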
{ "language": "en", "url": "https://math.stackexchange.com/questions/3736544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$\det(I+A)=1+\operatorname{Tr}(A)$ if $\operatorname{rank}(A)=1$ Let $A$ be a complex matrix of rank $1$. Show that $$\det (I+A) = 1 + \operatorname{Tr}(A)$$ where $\det(X)$ denotes the determinant of $X$ and $\operatorname{Tr}(X)$ denotes the trace of $X$. Any hint, please. I do not get how to combine the ideas of rank, determinant and trace. Thank you.
Assume $A$ is diagonal. As its rank is $1$, it has only one non-zero eigenvalue $\lambda$. Then $$ \det(I+A) = 1+\lambda = 1 + \mathrm{tr}\, A. $$ Assume that $A$ is diagonalisable, so that $A = PDP^{-1}$. Then $$\det(I + A) = \det(I + PDP^{-1}) = \det(P(I + D)P^{-1} ) = \det(I + D)$$ Similarly $$1 + \mathrm{tr}\,A = 1 + \mathrm{tr}\, D,$$ so we reduced this case to the previous one. Now assume $A$ is an arbitrary complex matrix. Both sides of the equation are continuous and $A$ can be approximated by diagonalisable matrices. This finishes the proof.
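The identity is easy to spot-check numerically for a rank-1 complex matrix $A = uv^T$ (a sketch I added; names are mine). Note that for such $A$, $\operatorname{Tr}(A) = v^Tu$:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# A rank-1 complex matrix A = u v^T (vectors chosen arbitrarily here).
u = [1 + 1j, 2j, 3]
v = [4, 5 - 2j, 6j]
A = [[ui * vj for vj in v] for ui in u]

trace = sum(A[i][i] for i in range(3))
I_plus_A = [[A[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]

assert abs(det3(I_plus_A) - (1 + trace)) < 1e-9   # det(I + A) == 1 + Tr(A)
```

Of course this is no substitute for the proof above, but it makes the statement concrete.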
{ "language": "en", "url": "https://math.stackexchange.com/questions/3736628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Closest matrix that achieves positive semidefinite condition Suppose we have two symmetric positive semidefinite $n$ dimensional matrices $A$ and $B$. We use the notation $X\leq Y$ means that $Y-X$ is positive semidefinite. Suppose $A \not\leq B$ i.e. $B-A$ has at least one negative eigenvalue. We are interested in perturbing $A$ to some positive semidefinite $\tilde{A}$ such that $\tilde{A} \leq B$ while minimizing $|A-\tilde{A}|_1$ where $|\cdot|_1$ is the nuclear norm and defined by $$|X|_1 := \text{Tr} \left( \sqrt{X^\dagger X} \right)$$ and $X^\dagger$ is the transpose conjugate of $X$. To make things simpler, I will now consider the case where $A$ is a rank-$1$ matrix. Is it true that $$\tilde{A} = \lambda A$$ for some $\lambda < 1$? An immediate corollary is that $\tilde{A}\leq A$. EDIT: After a bit of searching, I found a result for the same question but where the norm considered is the induced 2-norm (spectral norm) or the Frobenius norm. For the induced 2-norm (spectral norm), it holds that $\tilde{A} = A - \lambda I$ where $\lambda$ is the smallest positive number such that $\tilde{A}\leq B$ is true. So for this case, my conjecture that $\tilde{A} = \lambda A$ is false but the statement $\tilde{A}\leq A$ is true. For the Frobenius norm case, we first write the polar decomposition of $B-A = UH$. Then $B -\tilde{A} = \frac{1}{2}(B - A + H)$ is the solution. Since $H= ((B-A)^\dagger(B-A))^{1/2}\geq B-A$, one can again conclude that $\tilde{A}\leq A$ I do not know what happens for the 1-norm though. EDIT 2: Here is another look at the problem that almost works. Suppose the solution $\tilde{A}\not\leq A$. We prove that there exists some $A'$ such that $A'\leq B, A'\leq A$ and $|A'-A|_1\leq|\tilde{A}-A|_1$. Let us diagonalize $\tilde{A}-A = ZDZ^\dagger = ZD^{+}Z^\dagger + ZD^{-}Z^\dagger$ where $D$ is diagonal, $D^{\pm}$ is also diagonal and includes the nonnegative and negative eigenvalues respectively. 
By assumption $\tilde{A}\leq B \implies A + ZD^{+}Z^\dagger + ZD^{-}Z^\dagger \leq B$. Define $A':= A + ZD^{-}Z^\dagger$. Since $ZD^{+}Z^\dagger$ is positive semidefinite, it holds that $A' = A + ZD^{-}Z^\dagger \leq B$. Since $ZD^{-}Z^\dagger$ is negative semidefinite, it follows that $A'\leq A$. Finally, $|A' - A|_1 = |ZD^{-}Z^\dagger|_1 = |D^{-}|_1 \leq |D^{+}+D^{-}|_1 = |Z(D^{+}+D^{-})Z^\dagger|_1 = |\tilde{A} - A|_1$. EDIT 3 Unfortunately, the $A'$ constructed is not positive semidefinite in general.
Some thoughts on the problem: As a further simplification, I suggest that we require that $\tilde A$ not only satisfies $\tilde A \leq B$, but also has rank $1$. If your hypothesis is correct, then this assumption should not change the answer. Write $$ A = \alpha xx^T, \quad \tilde A = \beta yy^T $$ for some scalars $\alpha, \beta > 0$ and unit vectors $x,y$. The minimization problem now becomes $$ \min_{y \in \Bbb R^n, \beta > 0} |\alpha xx^T - \beta yy^T|_1 \quad \text{s.t.} \quad \beta yy^T \leq B. $$ Now, I make several claims:

1. $\beta yy^T \leq B \iff \beta \leq [y^TB^+y]^{-1}$ where $B^+$ denotes the Moore-Penrose pseudoinverse of $B$. I give some proofs of this here.
2. $M := \alpha xx^T - \beta yy^T$ has the same nuclear norm as that of the $2 \times 2$ matrix $\pmatrix{\alpha & \alpha (x^Ty)\\ -\beta (x^Ty) & -\beta}$ (explanation below).
3. The nuclear norm turns out to be $|M|_1 = \sqrt{(\beta - \alpha)^2 + 4\alpha\beta(1 - (x^Ty)^2)}$.

My first approach would be to, considering the nuclear norm as a function of $\beta$, minimize the nuclear norm given a particular choice of $y$. The nuclear norm of a symmetric matrix is the sum of the absolute values of its eigenvalues. With that said, we want the eigenvalues of $M = \alpha xx^T - \beta yy^T$. $$ M = \pmatrix{x & y} \pmatrix{\alpha & 0 \\ 0 & -\beta} \pmatrix{x & y}^T. $$ Because $AB$ and $BA$ have the same non-zero eigenvalues, $M$ will have the same non-zero eigenvalues as the $2 \times 2$ matrix $$ N = \pmatrix{\alpha & 0 \\ 0 & -\beta} \pmatrix{x & y}^T\pmatrix{x & y} = \pmatrix{\alpha x^Tx & \alpha x^Ty\\ -\beta x^Ty & -\beta y^Ty}. $$ Point 3: the characteristic polynomial of $N$ is $$ \lambda^2 + (\beta - \alpha) \lambda + ((x^Ty)^2 - 1)\alpha\beta = 0 \implies\\ \lambda = \frac{\alpha - \beta \pm \sqrt{(\beta - \alpha)^2 + 4\alpha\beta(1 - (x^Ty)^2)}}{2}; $$ since the product of the eigenvalues is $((x^Ty)^2-1)\alpha\beta \le 0$, the sum of their absolute values is the square root of the discriminant, which gives the formula in claim 3.
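One can verify numerically that for the $2\times 2$ matrix $N$ above, the sum of the absolute values of the eigenvalues equals $\sqrt{(\beta-\alpha)^2 + 4\alpha\beta(1-(x^Ty)^2)}$ (note the $\alpha\beta$ factor under the root). This is a sketch I added; names are mine:

```python
import math

def nuclear_norm_rank2(alpha, beta, c):
    """|alpha*x*x^T - beta*y*y^T|_1 for unit vectors with x^T y = c, computed
    from the eigenvalues of N = [[alpha, alpha*c], [-beta*c, -beta]]."""
    tr = alpha - beta                     # trace of N
    det = alpha * beta * (c * c - 1)      # det of N; always <= 0 when |c| <= 1
    disc = math.sqrt(tr * tr - 4 * det)
    lam1 = (tr + disc) / 2
    lam2 = (tr - disc) / 2
    return abs(lam1) + abs(lam2)

for alpha, beta, c in [(1.0, 2.0, 0.3), (5.0, 0.5, -0.9), (2.0, 2.0, 0.0)]:
    closed_form = math.sqrt((beta - alpha) ** 2 + 4 * alpha * beta * (1 - c * c))
    assert abs(nuclear_norm_rank2(alpha, beta, c) - closed_form) < 1e-12
```

Because the eigenvalue product is non-positive, $|\lambda_1| + |\lambda_2| = \lambda_1 - \lambda_2$, which is exactly the square root of the discriminant.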
{ "language": "en", "url": "https://math.stackexchange.com/questions/3736802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Hasse's Theorem: min and max values with or without point of infinity? I have to calculate the minimum and maximum possible number of points on an elliptic curve over a field. Min: $\lceil{q+1-2 \sqrt{q}}\rceil$ Max: $\lfloor{q+1+2 \sqrt{q}}\rfloor$ according to Hasse. BUT the exercise says that min and max should be found together with the point of infinity. So should I say min+1 and max+1? Hope you get my question.
Hasse's theorem is usually stated as $$ | \# E(\mathbb{F}_q) - (q + 1) | \leq 2\sqrt{q}$$ When we talk about the points on an elliptic curve $E/K$ where $K$ is a field, we are always talking about the points on the projective curve (that is, including the point at infinity). Thus if $E/\mathbb{F}_q$ is given by $$f(x,y) = y^2 + a_1xy + a_3y - (x^3 + a_2x^2 + a_4x + a_6) = 0$$ when we talk about $E(\mathbb{F}_q)$ we really mean $$\{(x,y) \in \mathbb{F}_q^2 : f(x,y) = 0 \} \cup \{(0:1:0)\}$$
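A small brute-force illustration (a sketch I added, not part of the original answer; names are mine): counting the affine solutions of the Weierstrass equation and then adding $1$ for the point at infinity $(0:1:0)$ always lands inside the Hasse bound.

```python
import math

def count_points(a, b, q):
    """Number of points on y^2 = x^3 + a*x + b over F_q (q an odd prime),
    including the point at infinity (0:1:0)."""
    # tabulate how many y solve y^2 = s for each residue s
    sqrt_count = {}
    for y in range(q):
        s = (y * y) % q
        sqrt_count[s] = sqrt_count.get(s, 0) + 1
    affine = sum(sqrt_count.get((x**3 + a * x + b) % q, 0) for x in range(q))
    return affine + 1                      # the point at infinity

# Hasse: |#E(F_q) - (q + 1)| <= 2*sqrt(q)  (curve y^2 = x^3 + x + 1 is
# nonsingular mod these q since 4*1 + 27*1 = 31 is nonzero mod each)
for q in (5, 7, 11, 13):
    n = count_points(1, 1, q)
    assert abs(n - (q + 1)) <= 2 * math.sqrt(q)
```

For example, over $\mathbb{F}_7$ the curve $y^2 = x^3 + x + 1$ has $4$ affine points and hence $5$ points in total.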
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate using Residues $\int_0^{2\pi}\frac{d\theta}{1+a\sin\theta}$ I need to evaluate the following using residues: $\int_0^{2\pi}\frac{d\theta}{1+a\sin\theta}$ where $-1<a<1$. I suppose the $a$ in front of $\sin\theta$ is throwing me off. I was thinking I could let $z=e^{i\theta}$ and so $\sin\theta=\frac{z-z^{-1}}{2i}$ and $dz=izd\theta$. So, the integral becomes: $\int_{|z|=1}\frac{dz}{iz(1+a(\frac{z-z^{-1}}{2i}))}$. After some, hopefully mistake-free, algebra, we'd get: $2\int_{|z|=1}\frac{dz}{az^2+2iz-a}$. Now, we can use the quadratic formula to get the roots $z=\frac{-i\pm i\sqrt{1-a^2}}{a}$. From here, I'm not really sure where to go. Do I just plug and chug and find residues using these two poles, or is there something sneaky going on? Or, did I make a mistake somewhere earlier? Any help is appreciated :) Thank you.
I guess nothing prevents you from exploiting some symmetry before switching to the computation of residues. $$\int_{0}^{2\pi}\frac{d\theta}{1+a\sin\theta}=\int_{0}^{\pi}\frac{d\theta}{1+a\sin\theta}+\int_{0}^{\pi}\frac{d\theta}{1-a\sin\theta}=\int_{0}^{\pi}\frac{2\,d\theta}{1-a^2\sin^2\theta}$$ equals $$ 4\int_{0}^{\pi/2}\frac{d\theta}{1-a^2\sin^2\theta}=4\int_{0}^{\pi/2}\frac{d\theta}{1-a^2\cos^2\theta} $$ or, by letting $\theta=\arctan u$, $$ 4\int_{0}^{+\infty}\frac{du}{(1+u^2)-a^2}=2\int_{-\infty}^{+\infty}\frac{du}{(1-a^2)+u^2} $$ which equals $$ 4\pi i\operatorname*{Res}_{u=i\sqrt{1-a^2}}\frac{1}{(1-a^2)+u^2}=\frac{2\pi}{\sqrt{1-a^2}}. $$
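The closed form $\frac{2\pi}{\sqrt{1-a^2}}$ is easy to confirm numerically (a sketch I added; the function name is mine). A midpoint rule is very accurate here since the integrand is smooth and periodic:

```python
import math

def integral(a, n=50_000):
    """Midpoint-rule approximation of the integral of 1/(1 + a*sin(t)) over [0, 2*pi]."""
    h = 2 * math.pi / n
    return sum(h / (1 + a * math.sin((k + 0.5) * h)) for k in range(n))

# Compare against 2*pi / sqrt(1 - a^2) for several |a| < 1.
for a in (0.0, 0.3, -0.5, 0.9):
    exact = 2 * math.pi / math.sqrt(1 - a * a)
    assert abs(integral(a) - exact) < 1e-6
```

This also makes it visible why $|a| < 1$ is required: as $a \to \pm 1$ the denominator approaches $0$ and the integral blows up.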
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A non-empty subset of integers bounded above has a maximum Suppose the set $\mathrm A \neq \emptyset$, $\mathrm A \subseteq \Bbb Z$, is bounded above. Then since $\Bbb Z \subseteq \Bbb R$, I know that by the completeness axiom there exists a supremum of the set $\mathrm A$, say $s = \sup(\mathrm A)$. But I need to show this is in fact the maximum of the set $\mathrm A$. For that, I know it has to be an element of the set $\mathrm A$. How can we show that $s \in \mathrm A$?
If $A$ is your subset of $\mathbb{Z}$, consider the inclusion map $i:A\rightarrow \mathbb{R}$. First note that replacing $A$ by $A \cap [a_0, \sup A]$ for any fixed $a_0 \in A$ changes neither the existence of a maximum nor its value, so we may assume $A$ is bounded. Then $i(A) = A$ is closed in $\mathbb{R}$ (every subset of $\mathbb{Z}$ is closed, having no limit points) and bounded, hence compact. A nonempty compact subset of $\mathbb{R}$ contains its supremum, so $A$ has a greatest element, which is the desired maximum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find the number of functions $4f^3(x)=13f(x)+6$ $f(x)$ is a function satisfying $$4f^3(x)=13f(x)+6$$ $\forall\ x\in [-3,3]$ and is discontinuous exactly at the integers in $[-3,3]$. If $N$ denotes the unit digit of the number of all such functions, find $N$. Answer: $6$. Solving the cubic for $f(x)$ gives us $$f(x)=-\frac{3}{2},-\frac{1}{2}\ \text{or}\ 2$$ Then I figured, for $f$ to be discontinuous at integers, we must define it piecewise (i.e. casewise) on the integers, with the value of the function changing at integers. Something like $f(x)=$ $$ \begin{cases} -0.5 & -3\leq x<-2 \\ -1.5 & -2\leq x<-1 \\ -0.5 & -1\leq x<0 \\ 2 & 0\leq x<1 \\ -1.5 & 1\leq x<2 \\ 2 & 2\leq x<3 \\ -1.5 & x=3\\ \end{cases} $$ So I thought, this problem is equivalent to filling $7$ blanks with $3$ distinct objects, and no two adjacent blanks contain the same object. Hence, the first blank can be filled in $3$ ways, the second in $2$ ways, the third in $2$ ways and so on, until the seventh blank in $2$ ways. So the number of ways is $$3\times2\times2\times2\times2\times2\times2=192$$ whose unit digit is not $6$. Could anyone tell me where I went wrong? Thanks in advance!
The following are the properties that fully characterize your function $f$:

* For every $x$, $f(x)\in \{2,-1/2,-3/2\}$.
* $f$ must be constant on every open interval $(n,n+1)$ for $n=-3,-2,-1,0,1,2$.
* Among $f(n-1,n)$, $f(n)$, $f(n,n+1)$ there must be at least two different values, for $n=-2,-1,0,1,2$.
* $f(3) \ne f(2,3)$ and $f(-3)\ne f(-3,-2)$.

As you did, you can start by fixing $f(-3)$ and $f(-3,-2)$ in 6 ways. Then you can fix $f(-2)$ and $f(-2,-1)$ in $6 + 2 =8$ ways. The same 8 ways for $f(-1)$ and $f(-1,0)$, etc. The last choice is $f(3)$, which can be made in 2 ways, for a total of $6\times 8^5\times 2$ ways. Since $2^5\equiv 2\pmod{10}$, we have $6\times 8^5\times 2 = 3 \times 2^{17}\equiv 3 \times 2^{5} \equiv 6\pmod {10}$
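The counting pattern $6\times 8^{\,\#\text{interior integers}}\times 2$ can be confirmed by brute force on smaller intervals (a sketch I added, not part of the original answer; the function name is mine):

```python
from itertools import product

def brute_count(N):
    """Count functions into {2, -1/2, -3/2} that are constant on each open
    interval (n, n+1) and discontinuous at every integer in [-N, N].
    Slots alternate: f(-N), f on (-N,-N+1), f(-N+1), ..., f on (N-1,N), f(N)."""
    n_slots = 4 * N + 1
    total = 0
    for s in product(range(3), repeat=n_slots):
        vals = s[0::2]          # values at the 2N+1 integers -N..N
        ivals = s[1::2]         # values on the 2N open intervals
        if vals[0] == ivals[0] or vals[-1] == ivals[-1]:
            continue            # one-sided discontinuity at the endpoints
        # interior integer k is a point of continuity iff left = right = f(k)
        if any(ivals[k - 1] == ivals[k] == vals[k] for k in range(1, 2 * N)):
            continue
        total += 1
    return total

assert brute_count(1) == 6 * 8 ** 1 * 2      # interval [-1, 1]
assert brute_count(2) == 6 * 8 ** 3 * 2      # interval [-2, 2]
assert (6 * 8 ** 5 * 2) % 10 == 6            # interval [-3, 3]: unit digit N = 6
```

The exhaustive count for $[-3,3]$ itself ($3^{13}$ candidates) follows the same pattern, giving $6\times 8^5\times 2 = 393216$.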
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that this formula over the positive integers gives us this sequence Firstly, we have this sequence: $1,1,1,1,1,1,1,1,1,1,2,2,2,...$, which is the sequence of the number of digits in the decimal expansion of $n$ (for $n = 0, 1, 2, \ldots$). Secondly, we have this formula: $$a_n=\Bigl\lceil\log_{10}(n+1)\Bigr\rceil-\Bigl\lceil\frac{n}{n+1}\Bigr\rceil+1$$ where $n\ge0$. This formula seems to give us this sequence. How can one prove this?
Firstly, for $n \ne 0$, $n$ has $d$ digits if and only if $10^{d - 1} \le n < 10^d$. One way to find $d$ from here is to take logarithms, using $d - 1 \le \log_{10}(n) < d$, so $d = \lfloor \log_{10}(n) \rfloor$. However, this expression doesn't work so well when $n = 0$, so we have to be a bit creative. Adding $1$, we get $10^{d - 1} + 1 \le n + 1 < 10^d + 1$. But a strict inequality of natural numbers of the form $a < b + 1$ can be equivalently written as $a \le b$, so in fact we can write $10^{d - 1} < n + 1 \le 10^d$. Then $d - 1 < \log_{10}(n + 1) \le d$, so $d = \lceil \log_{10}(n + 1) \rceil$. This expression doesn't freak out quite so much when $n = 0$. The last bit is just a clever trick so that $a_0 = 1$. If $n = 0$, then $n/(n + 1) = 0$, so $\lceil n/(n + 1) \rceil = 0$. Otherwise, $0 < n/(n + 1) < 1$, so $\lceil n/(n + 1) \rceil = 1$.
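The formula is easy to check against a direct digit count for a range of $n$ (a sketch I added; the function name is mine):

```python
import math

def a(n):
    """The formula from the question: ceil(log10(n+1)) - ceil(n/(n+1)) + 1."""
    return math.ceil(math.log10(n + 1)) - math.ceil(n / (n + 1)) + 1

# a(n) should equal the number of decimal digits of n, for all n >= 0
assert [a(n) for n in range(12)] == [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
for n in range(10_000):
    assert a(n) == len(str(n))
```

Note that for $n \ge 1$ the term $\lceil n/(n+1)\rceil$ is always $1$, and only for $n=0$ is it $0$, which is exactly the "clever trick" described above.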
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\frac{x^2}{by+cz}=\frac{y^2}{cz+ax}=\frac{z^2}{ax+by}=2$ If $$\frac{x^2}{by+cz}=\frac{y^2}{cz+ax}=\frac{z^2}{ax+by}=2$$ then find the value of $$\frac{c}{2c+z}+\frac{b}{2b+y}+\frac{a}{2a+x}.$$ I think all the terms need to be manipulated in some way to get the corresponding terms from the expression whose value needs to be found. For example we have to go from $\frac{x^2}{by+cz}$ to $\frac{a}{2a+x}$ in some way or maybe from $\frac{x^2}{by+cz}$ to $\frac{c}{2c+z}$ or something like that. It's very hard to tell from which term to which term I need to go because all the variables are used. So I just tried to make a system of equations. $$x^2=2(by+cz)$$ $$y^2=2(cz+ax)$$ $$z^2=2(ax+by)$$ $$x^2+y^2+z^2=4(ax+by+cz)$$ $$(x-a)^2+(y-b)^2+(z-c)^2=a^2+b^2+c^2+2(ax+by+cz)$$ But I don't know how to proceed from here.
Hint: $$x(x+2a)=x^2+2ax=2(ax+by+cz)$$ $$\dfrac1{2a+x}=?$$ $$\dfrac a{2a+x}=\dfrac{ax}{2(ax+by+cz)}$$
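A concrete instance consistent with the hint (the values are mine, chosen to satisfy the hypothesis; exact rational arithmetic avoids any rounding doubts): with $x=y=z=4$ and $a=b=c=1$, each ratio equals $2$, the hint's identity $x(x+2a)=2(ax+by+cz)$ holds, and the requested sum comes out to $\frac{1}{2}$.

```python
from fractions import Fraction as F

a = b = c = F(1)
x = y = z = F(4)

# hypothesis: each ratio equals 2
assert x**2 / (b*y + c*z) == y**2 / (c*z + a*x) == z**2 / (a*x + b*y) == 2

# the hint's identity: x*(x + 2a) = 2*(a*x + b*y + c*z)
assert x * (x + 2*a) == 2 * (a*x + b*y + c*z)

# the target expression
total = c/(2*c + z) + b/(2*b + y) + a/(2*a + x)
assert total == F(1, 2)
```

Of course a single example is not a proof; the hint's identity is what shows the sum is $\frac{1}{2}$ in general.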
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Dimension of a vector space/subspace with a finite basis Is the dimension of a vector space/subspace with a finite basis always the same as the number of elements in each vector, and if so, how can I derive that from the definition of dimension?
"Elements" of a vector is not something that makes sense in an abstract vector space. It does make sense when considering vectors in $\Bbb{R}^n$ (or more generally, $\Bbb{F}^n$ for some field $\Bbb{F}$), but not for more general vectors (e.g. functions). Even if we restrict ourselves to vectors in $\Bbb{R}^n$, the number of elements in a vector doesn't correspond necessarily to the dimension of the space. For example, if we consider the vector space $$V = \{(x, y, z) \in \Bbb{R}^3 : x + y + z = 0\},$$ then each vector has exactly $3$ elements, but the space is $2$-dimensional (though, of course, it is a subspace of the $3$-dimensional space $\Bbb{R}^3$). What makes more sense, instead of "elements", is "coordinates". Given any finite-dimensional space $V$ (i.e. one with a finite basis $B$), we can express vectors in $V$ as unique linear combinations of vectors in $B$. We can describe a vector $v \in V$ completely by the coefficients in the unique linear combination of vectors in $B$ that forms $v$. Since every basis has the same number of elements, every coordinate system has the same number of coordinates. This number is what we call the dimension of $V$. So, in summary, "number of elements" of a vector doesn't make sense in general, but "number of coordinates" does make sense and is the very definition of dimension.
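The example $V = \{(x,y,z) : x+y+z=0\}$ can be made concrete (a sketch I added; the basis choice and names are mine): vectors in $V$ have $3$ elements but only $2$ coordinates with respect to the basis $B = \{(1,-1,0),\ (0,1,-1)\}$.

```python
def to_coords(v):
    """Coordinates of v = (x, y, z) with x + y + z == 0 in the basis
    B = ((1,-1,0), (0,1,-1)):  v = a*(1,-1,0) + b*(0,1,-1)."""
    x, y, z = v
    assert x + y + z == 0
    return (x, -z)          # a = x, b = -z; then y = b - a automatically

def from_coords(a, b):
    """Rebuild the vector from its two coordinates."""
    return (a, b - a, -b)

# two numbers recover all three elements: the dimension is 2, not 3
for v in [(1, -1, 0), (0, 1, -1), (2, 3, -5), (-7, 7, 0)]:
    a, b = to_coords(v)
    assert from_coords(a, b) == v
```

The round trip shows why "number of coordinates" (here $2$) rather than "number of elements" (here $3$) is the right notion of dimension.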
{ "language": "en", "url": "https://math.stackexchange.com/questions/3737890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Continuously compounded interest with additional monthly deposits What is the equation for continuously compounded interest with monthly additions of $300$ dollars for the first $10$ years and $500$ for the next $20$, with an initial investment of $0$? I know the equation $Pe^{rt}$ is used, but I don't understand how to set it up so that $300$ dollars is added every month.
Calling the monthly savings $C$ (i.e. $C=\$300$) and the monthly interest $m=r/12$ (divided by $12$ because time will be given in months, and presumably the interest rate $r$ is annual); and with the number of payments (for the first part of the problem $n=10\ \text{years}\times 12\ \text{months}=120$), it would seem like we could adapt the formula for an annuity, which is simply the application of the geometric series formula to payments $C$ that accumulate different (decreasing) amounts of interest, depending on how early or late in the process they have been made (although at regular intervals), as in $$\begin{align} \text{Future Value}&=C(1+m)^{n-1} + C(1+m)^{n-2} + \cdots+ C(1+m)+ C\\ &=C\sum_{k=0}^{n-1} (1+m)^k\\ &=C\frac{1-(1+m)^n}{1-(1+m)}\\ &=C\frac{(1+m)^n-1}{(1+m)-1}\\ &=C\frac{(1+m)^n-1}{m} \end{align}$$ With this formula, there is one payment that gets no interest, at the end, which is the last $C$ in the series. And the first payment has to wait a month to receive the first interest, accounting for the $n-1$ in the first term. With continuous interest, and the formulation of the OP, however, the first payment at time $0$ would receive interest throughout the entire time, and presumably the last payment would be at month $n-1,$ accruing interest for one month. The monthly interest $(1+m)$ here turns into $e^{m},$ so that for a $6\%=0.06$ annual interest, the continuously compounding interest would be (again, assuming that time is in months) $e^{0.06/12}=1.004175.$ Hence, $$\begin{align} FV&=C\frac{1-e^{mn}}{1-e^{m}}\\ &=C\frac{e^{mn}-1}{e^m - 1} = \$49,203.91 \end{align}$$ The considerations about the first or last payment can easily be compensated for with the formula for continuous interest $FV=Ce^{mt},$ where $t=1$ would be $1$ month.
As for the second part of the OP, $C=\$500$ for $n=20\times 12=240,$ an equivalent calculation will yield $\$231,432.15.$ If the contributions of the first part of the question are still compounding during these $20$ additional years, we'll have to add their value after these last $20$ years, calculated as $\$49,203.91 e^{0.005\times 240}=\$163,362.73.$ So the total at a fixed annual $6\%$ would be $\$394,794.89.$
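A quick numeric check of the figures above (the function name is ours; it implements $FV=C\,(e^{mn}-1)/(e^m-1)$ with $m=r/12$ and $n$ in months):

```python
import math

def fv_continuous_monthly(c, annual_rate, months):
    # future value of `months` monthly deposits of size c, the first made at
    # time 0, each compounding continuously at annual_rate (m = annual_rate/12)
    m = annual_rate / 12
    return c * (math.exp(m * months) - 1) / (math.exp(m) - 1)

part1 = fv_continuous_monthly(300, 0.06, 120)   # first 10 years
part2 = fv_continuous_monthly(500, 0.06, 240)   # next 20 years
grown1 = part1 * math.exp(0.06 / 12 * 240)      # part 1 keeps compounding
total = part2 + grown1

assert abs(part1 - 49_203.91) < 0.05
assert abs(part2 - 231_432.15) < 0.05
assert abs(total - 394_794.89) < 0.05
```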
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Evaluate $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}$ Evaluate $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}$ My attempt: Let $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}=\sqrt{x}+\sqrt{y}$ Square both sides: $a+b+\sqrt{\left(2ab+b^2\right)}=x+2\sqrt{xy}+y$ Rearrange: $\sqrt{\left(2ab+b^2\right)}-2\sqrt{xy}=x+y-a-b$ That's where my lights go off. Any leads? Thanks in advance.
Although it is not clear what range of values is acceptable for the variables, I believe I have a sense of what you are trying to do here. This is called denesting square roots. In particular, if $a,b,c$ are positive real numbers such that $a^2-b^2 c$ is non-negative, then it holds that $$\sqrt{a+ b\sqrt{c}}=\sqrt{\frac{a+\sqrt{a^2-b^2 c}}{2}}+ \sqrt{\frac{a-\sqrt{a^2-b^2 c}}{2}}.$$ It can be easily proven by squaring both sides and using the difference of squares factorization. There is a way of coming up with it naturally by using the same idea behind finding complex square roots, though it might not be an entirely rigorous approach. This yields \begin{align*} \sqrt{a+b+\sqrt{2ab+b^2}}&=\sqrt{\frac{a+b+|a|}{2}}+\sqrt{\frac{a+b-|a|}{2}}\\ &= \begin{cases} \sqrt{a+\frac{b}{2}}+\sqrt{\frac{b}{2}} &\text{ if } a\ge 0\\ \sqrt{\frac{b}{2}}+\sqrt{a+\frac{b}{2}} &\text{ if } a<0\end{cases}. \end{align*} which works because $$(a+b)^2-1^2\cdot (2ab+b^2)=a^2$$ is non-negative. Both cases yield the same answer.
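A quick numeric spot check of the denesting identity and of the final answer (the function name is ours):

```python
import math

def denest(a, b, c):
    # valid when a^2 - b^2*c >= 0 and all quantities under a root are nonnegative
    s = math.sqrt(a * a - b * b * c)
    return math.sqrt((a + s) / 2) + math.sqrt((a - s) / 2)

# spot-check the identity sqrt(a + b*sqrt(c)) == denest(a, b, c)
for a, b, c in [(3.0, 1.0, 5.0), (7.0, 2.0, 3.0), (10.0, 1.0, 19.0)]:
    lhs = math.sqrt(a + b * math.sqrt(c))
    assert abs(lhs - denest(a, b, c)) < 1e-12

# the specific application: sqrt(a+b+sqrt(2ab+b^2)) = sqrt(a+b/2)+sqrt(b/2) for a >= 0
a, b = 2.0, 3.0
lhs = math.sqrt(a + b + math.sqrt(2 * a * b + b * b))
rhs = math.sqrt(a + b / 2) + math.sqrt(b / 2)
assert abs(lhs - rhs) < 1e-12
```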
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Show $\det(F_n)=1$ for all $n$ Consider the $n\times n$ matrix $F_n= (f_{i,j})$ of binomial coefficients $$f_{i,j}=\begin{pmatrix}i-1+j-1\\i-1\end{pmatrix}$$ Prove that $\det(F_n)=1$ for all $n$. My current idea is to apply Leibniz formula for determinants and induction, but it seems too complicated. Any better ideas and suggestions are welcome.
Using the formula $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$ you get, by applying the column operations $$ \left\{ \begin{array}{lcl} C_n&\gets& C_n-C_{n-1} \\[1mm] C_{n-1}&\gets& C_{n-1}-C_{n-2} \\[1mm] &\vdots\\[1mm] C_2&\gets& C_2-C_{1} \end{array} \right. $$ that $$\det(F_n) = \det\left( \begin{array}{c|ccc} 1&0&\cdots&0\\\hline *\\ \vdots&&F_{n-1}\\ *\end{array}\right) = \det(F_{n-1})$$ So the sequence $\det(F_n)$ is constant, equal to its first term $\det(F_1)=1$.
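The claim is easy to check by machine for small $n$ (exact arithmetic via `fractions`; with $0$-based indices the entries are $f_{i,j}=\binom{i+j}{i}$):

```python
from fractions import Fraction
from math import comb

def det(rows):
    # Gaussian elimination over the rationals, tracking row swaps
    m = [[Fraction(x) for x in row] for row in rows]
    n, d = len(m), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if m[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            factor = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= factor * m[i][c]
    return d

for n in range(1, 8):
    F = [[comb(i + j, i) for j in range(n)] for i in range(n)]
    assert det(F) == 1
```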
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
High probability range of chromatic number Prove that there is an absolute constant $c$ such that, for every $n>1,$ there is an interval $I_{n}$ of at most $c\sqrt{n}/\log n$ consecutive integers such that the probability that the chromatic number of $G(n, 0.5)$ lies in $I_{n}$ is at least 0.99. (From Noga Alon, Joel H. Spencer - The Probabilistic Method-Wiley (2016)) I tried to use Azuma's inequality with the vertex/edge exposure martingale, but then $c$ cannot be a constant: that route only gives an interval of length of order $\sqrt{n}$. I think I need a special martingale construction to solve it.
I wrote this solution when I worked through this amazing book about a year ago! Let $\epsilon > 0$ be arbitrary for the moment. We describe three events that each happen with probability at least $1-\epsilon$ for large $n$. Given $G\sim G(n,1/2)$, let $u$ be the smallest positive integer such that \begin{equation} \mathbb{P}(\chi(G) \leq u) \geq \epsilon. \end{equation} The minimality of $u$ implies that \begin{equation} \mathbb{P}( \chi(G) \geq u ) = 1- \mathbb{P}( \chi(G) \leq u -1 ) > 1 - \epsilon, \end{equation} so that with high probability, we will need at least $u$ colors to properly color $G(n,1/2)$. Let $Y(G)$ be the minimal size of a subset $S$ of $V(G)$ such that $\chi(G-S)=u$. If we examine $Y(G)$ using the vertex exposure martingale, then it is clear that $Y$ is vertex-Lipschitz: the exposure of each vertex may increase $Y$ by at most one. Thus we can apply Azuma's Inequality to conclude that \begin{align} \mathbb{P}(Y\leq \mu - \lambda\sqrt{n-1})& < e^{-\lambda^2/2}\\ \mathbb{P}(Y\geq \mu + \lambda\sqrt{n-1})& < e^{-\lambda^2/2}. \end{align} We pick $\lambda = \lambda(\epsilon)$ such that $e^{-\lambda^2/2} = \epsilon$. Now, we have selected $u$ such that $\mathbb{P}(Y=0) = \mathbb{P}( \chi(G)\leq u ) \geq \epsilon$, so the first of the two bounds says that we must have $\mu \leq \lambda\sqrt{n-1}$. Then the second bound becomes \begin{equation} \mathbb{P}(Y\geq 2\lambda\sqrt{n-1}) \leq \mathbb{P}(Y\geq \mu +\lambda\sqrt{n-1}) < e^{-\lambda^2/2}=\epsilon. \end{equation} That is, with probability $1-\epsilon$, there is some set $S$ of size at most $2\lambda \sqrt{n}$ so that $\chi(G-S)=u$. Observe that $G[S] \sim G(|S|, 1/2)$. Bollobás showed that a.a.s. $\chi(G(n,1/2)) \leq \frac{n}{\ln n}$, so for large $n$, with probability at least $1-\epsilon$ we can color $G[S]$ with, say, at most $2 \lambda \frac{\sqrt{n}}{\ln \sqrt{n}}=4\lambda\frac{\sqrt{n}}{\ln n}$ colors. So with probability at least $1-3\epsilon$, the following all hold:

* $\chi(G) \geq u$.
* There is an $S$ with $|S| \leq 2\lambda \sqrt{n}$ such that
  * $\chi(G\setminus S) = u$, and
  * $\chi(S) \leq 4 \lambda \frac{\sqrt{n}}{\ln n}$.

The above conditions imply that with probability $1-3\epsilon$ we have that $u \leq \chi(G) \leq u + 4\lambda \frac{\sqrt{n}}{\ln n}$, where $\lambda$ depends only on $\epsilon$. Set $\epsilon = 1/300$ so that with probability at least $0.99$ we have that $\chi(G)$ lies in an interval of size of order $4\lambda \frac{\sqrt{n}}{\ln n}$. Note: The result by Bollobás on the chromatic number is also given somewhere in the book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can this inductive proof that $\sum_{i=0}^n2^{2i+1}=\frac23(4^n-1)$ be simplified? The general structure of equations I've used for the inductive step for proofs with a summation is something like: We'll prove that $\sum_{i = 0}^{n + 1} (\text{something}) = (\text{closed form expression})$ \begin{align} \sum_{i = 0}^{n + 1} (\text{something}) &= \sum_{i = 0}^n (\text{something}) + \text{last term} &\\ &= [\text{expression via I.H.}] + \text{last term} &\\ &= \text{do some work...} &\\ &= \text{some more work...} &\\ &= (\text{finally reach the closed form expression we want}) \end{align} This structure is very nice, since the equation is one-sided, and very easy to follow. However I solved a problem that I couldn't solve with this one-sided structure, and I had to substitute the LHS with the closed form expression I'm trying to prove, so I could use some of its terms to simplify the RHS. This is fine and valid, but I'd like to know if there's a simpler way to perform this proof that doesn't employ the substitution you see below: In other words, I couldn't figure out how to simplify $\frac{2}{3}(4^n - 1) + 2^{2n + 1}$ to get $\frac{2}{3}(4^{n + 1} - 1)$. The farthest I got was: \begin{align} &= \frac{2}{3}(4^n - 1) + 2^{2n + 1} &\\ &= \frac{2}{3}(4^n - 1) + 2\cdot 2^{2n} &\\ &= \frac{2}{3}(4^n - 1) + 2\cdot 4^n &\\ &= \frac{2}{3}(4^n - 1) + \frac{3 \cdot 2\cdot 4^n}{3} &\\ &= \frac{2}{3}(4^n - 1 + 3 \cdot 4^n) &\\ \end{align}
$$\frac{2}{3}(4^n - 1) + 2^{2n + 1}= \frac23\left(\color{red}{4^n}-1+\overbrace{\color{red}{3\cdot2^{2n}}}^{=3\cdot4^n}\right)=\frac23\left(\color{red}{4\cdot4^n}-1\right) = \frac{2}{3}(4^{n + 1} - 1)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Calculating the Vertices of a Square That Circumscribes an Ellipse Hello everyone, how can I find the vertices of a square that circumscribes the ellipse $\frac{x^2}{9}+y^2 =1$? I tried to mark the vertices at $(u,v),(-u,v),(u,-v),(-u,-v)$ and use the equation to calculate the tangent lines to the ellipse through the vertex points, but I don't know how to continue.
It is possible to give a very quick answer if we use this nice property of the ellipse: the locus of the intersections of perpendicular tangents to an ellipse is a circle called director circle, and the square of its radius is the sum of the squares of the ellipse semi-axes. Tangents drawn from any point on the director circle, and from its reflection about the center, form then the sides of a circumscribed rectangle. If we choose those points as the intersections between director circle and axes of the ellipse, the rectangle is by symmetry a square. In your particular case the vertices of the circumscribed square lie then at points $(0,\pm\sqrt{10})$ and $(\pm\sqrt{10},0)$.
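A numeric check of the director-circle property for this ellipse (the function name is ours): from a point $(x_0,y_0)$, the tangent slopes $k$ satisfy $(x_0^2-a^2)k^2-2x_0y_0k+(y_0^2-b^2)=0$, so the product of the two slopes is $(y_0^2-b^2)/(x_0^2-a^2)$, and perpendicularity means this product equals $-1$:

```python
import math

a2, b2 = 9.0, 1.0               # the ellipse x^2/9 + y^2 = 1
r = math.sqrt(a2 + b2)          # director-circle radius sqrt(10)

def tangent_slope_product(x0, y0):
    # product of the slopes of the two tangents from (x0, y0), read off the
    # quadratic (x0^2 - a^2) k^2 - 2 x0 y0 k + (y0^2 - b^2) = 0
    return (y0 * y0 - b2) / (x0 * x0 - a2)

# every point of the director circle sees the ellipse under a right angle
for t in [0.3, 1.1, 2.5, 4.0]:
    x0, y0 = r * math.cos(t), r * math.sin(t)
    assert abs(tangent_slope_product(x0, y0) + 1.0) < 1e-9

# the vertices of the circumscribed square found in the answer
vertices = [(0.0, r), (r, 0.0), (0.0, -r), (-r, 0.0)]
assert all(abs(math.hypot(x, y) - r) < 1e-12 for x, y in vertices)
```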
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that the Average Product function attains max value at the point where the Marginal Product is equal to the Average Product. I am basically trying to prove the following excerpt from my Microeconomics textbook: When Average Product is maximum, Marginal Product is equal to the Average Product. The Total Product function $TP(x)$ gives the output produced by using $x$ units of variable input. The Average Product function $AP(x)$ is just $\frac{TP(x)}{x}$ and is defined only when $x > 0$. Here's my attempt at a formal proof: Let $AP(x)$ be maximum when $x = r$ where $r$ is a constant greater than $0$. Then, $$AP(r)\ge AP(x)\forall x>0$$ We have to prove that at $x=r$, Marginal Product is equal to the Average Product. That is, $$MP(r)=AP(r)$$ Proof: $$AP(r)\ge AP(x)\forall x>0$$ $$\implies \frac{TP(r)}{r}\ge \frac{TP(x)}{x}\forall x>0$$ $x>0$ and $r>0 \implies xr > 0$. So multiplying both sides of the above inequality by $xr$ won't reverse the inequality. $$\implies xr \frac{TP(r)}{r}\ge xr\frac{TP(x)}{x}\forall x>0$$ $$\implies xTP(r)\ge rTP(x)\forall x>0$$ Multiply both sides by $-1$. $$\implies -xTP(r)\le -rTP(x)\forall x>0$$ Add $rTP(r)$ to both sides. $$\implies -xTP(r)+rTP(r)\le -rTP(x)+rTP(r)\forall x>0$$ $$\implies (-x+r)TP(r)\le r\bigr(-TP(x)+TP(r)\bigr)\forall x>0$$ $$\implies (r-x)TP(r)\le r\bigr(TP(r)-TP(x)\bigr)\forall x>0$$ Divide both sides by $r$. Again, the inequality will not be flipped since $r>0$. $$\implies (r-x)\frac{TP(r)}{r}\le TP(r)-TP(x)\forall x>0$$ $AP(r)= \frac{TP(r)}{r}$ so $$\implies (r-x)AP(r)\le TP(r)-TP(x)\forall x>0$$ If $0<x<r$ then $r-x>0$. Dividing both sides by $r-x$ won't change the sign of the inequality. $$\implies AP(r)\le \frac{TP(r)-TP(x)}{r-x} ; 0<x<r$$ If $x>r$ then $r-x<0$. Since $r-x$ is a negative number, dividing both sides by it will change the sign of the inequality. $$\implies AP(r)\ge \frac{TP(r)-TP(x)}{r-x} ; x>r$$ If $x=r$ then both sides of the inequality become $0$. 
We can't divide by $x-r$ because that would make the expression undefined. I am not sure how to handle this case and was hoping someone would let me know what to do here. Anyway, my textbook has two formulae for calculating the value Marginal product at some input. They are $$MP(x)=TP(x)-TP(x-1)$$ and $$MP(x) = \frac{\Delta TP(x)}{\Delta x}$$ Using the second formula we get: $$MP(r)=\frac{TP(r)-TP(x)}{r-x}$$ Substituting this in the above two inequalities, we get: $$\implies AP(r)\le MP(r) ; 0<x<r$$ $$AP(r)\ge MP(r) ; x>r$$ Now, we know that $r$ is a constant. That means $AP(r)$ and $MP(r)$ are also constants! Irrespective of whether $x$ is greater than or less than $r$, $AP(r)$ and $MP(r)$ will remain constants. In both of the above two cases, either $AP(r)$ is less than $MP(r)$ OR $AP(r)$ is equal to $MP(r)$ OR $AP(r)$ is greater than $MP(r)$. There is only one case where both of the two inequalities are true and this is when $AP(r)$ is equal to $MP(r)$. $$\therefore MP(r) = AP(r)$$ Is this proof correct? Edit: The book I am using asserts that $MP$ is not a strictly decreasing function. Therefore, $MP'$ is not always negative.
This answer is to supplement the answer by Trurl and reply to the query about the significance of the sign of the derivative of marginal product. The mathematical result you need to apply is: Let $f$ be a differentiable function of a single variable defined on the interval $I$. If a point $x$ in the interior of $I$ is a local or global maximizer or minimizer of $f$ then $f'(x)=0$. First, note that in your example, the domain is all the positive real numbers, so all points in the domain are interior. So, applying the result to the average product function, it must be that at a (global or local) maximum the derivative of average product is zero. There is no need to check second-order conditions; it does not matter whether the second derivative of average product is positive or negative and so it does not matter whether the derivative of marginal product is positive or negative.
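To see the claim in action, here is a finite-difference check on a classic textbook production function (our choice, $TP(x)=6x^2-x^3$, using the derivative version of $MP$):

```python
# TP(x) = 6x^2 - x^3, so AP(x) = 6x - x^2, which peaks at x = 3
TP = lambda x: 6 * x * x - x ** 3
AP = lambda x: TP(x) / x
MP = lambda x, h=1e-6: (TP(x + h) - TP(x)) / h   # forward-difference derivative

x_star = 3.0
# AP is maximal at x = 3 ...
assert all(AP(x_star) >= AP(x) for x in [0.5, 1.0, 2.0, 2.9, 3.1, 4.0, 5.0])
# ... and at that point MP equals AP (both are 9 here)
assert abs(MP(x_star) - AP(x_star)) < 1e-3
```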
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Which Is Bigger, $\frac{3}{e}$ or $\ln(3)$? Hello everyone, which is bigger, $\frac{3}{e}$ or $\ln(3)$? I tried exponentiating both with base $e$ and I got: $e^{\frac{3}{e}} = \left(e^{e^{-1}}\right)^{3}$ and $3$, but I don't know how to continue. I also tried to convert it to a function, but I didn't find one that helped.
Hint: Compare the logs and use that $\ln $ is concave, hence its representative curve is below each of its tangents.
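Reading the hint concretely (our interpretation): the tangent to $\ln$ at $x=e$ is the line $y=x/e$, and concavity keeps $\ln$ below every tangent, so $\ln 3 < 3/e$. A numeric sanity check:

```python
import math

# concavity: ln x <= x/e (the tangent at x = e), with equality only at x = e
for x in [0.5, 1.0, 2.0, 3.0, math.e, 5.0, 10.0]:
    assert math.log(x) <= x / math.e + 1e-12

# in particular 3/e is the bigger of the two numbers, but only just:
assert math.log(3) < 3 / math.e
assert 0.004 < 3 / math.e - math.log(3) < 0.006
```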
{ "language": "en", "url": "https://math.stackexchange.com/questions/3738864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Natural deduction proof that $(P\leftrightarrow \neg P)$ is a contradiction, without first deriving $(P\vee \neg P)$ I'm looking to prove that $(P \leftrightarrow \neg P)$ is a contradiction using a natural deduction proof (which is to say, I want a proof to show $(P\leftrightarrow \neg P)\vdash Q$). In case it helps, the specific system I'm working in is as outlined in Halbach's Logic Manual (tree structure, introduction and elimination rules for each connective; see the link below), but it's the overall structure of the proof I'm struggling with. Given a proof that shows $\vdash (P \vee \neg P)$ I can transform this into the desired proof, but that generates a very large tree given the simplicity of the sentence, because the proof for $\vdash (P \vee \neg P)$ is fairly long itself. I can't shake the feeling there must be a more straightforward (even if still indirect) proof, but haven't been able to find it so far. Edit: As found by lemontree, the ruleset I am using is listed here.
(Posted after answer accepted.) Using a form of natural deduction (the proof tree was given as an image, which is not reproduced here).
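For what it's worth, here is one machine-checkable rendering of such a derivation, sketched in Lean 4 (the hypothesis and variable names are ours); it uses only intuitionistic rules, so no detour through $P \vee \neg P$ is needed:

```lean
-- (P ↔ ¬P) ⊢ Q, without P ∨ ¬P or any classical axiom
example (P Q : Prop) (h : P ↔ ¬P) : Q :=
  -- if P held, h would turn it into ¬P, which P itself then contradicts
  have hnp : ¬P := fun hp => (h.mp hp) hp
  -- but h turns ¬P back into P, a contradiction, from which Q follows
  absurd (h.mpr hnp) hnp
```

The two `have`/`absurd` steps correspond to a $\neg$-introduction followed by an ex falso step, mirroring the shape of the tree proofs above.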
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
rigorous proof of simple phenomenon Suppose we want to prove the following simple statement rigorously - If the number of beads in a necklace is $n$ then there exists a colored (suppose we have infinitely many colors at our disposal) necklace of minimal period $k$ (rotations) iff $k|n$. Intuitively this seems obvious, but how do you make a rigorous proof? EDIT : I think I figured this one out: given that for some string of length $n$, $k$ is the minimal period, use the Euclidean algorithm to get $n=qk+r$, $0\leq r<k$. Then it is easy to see that the string is also $r$-periodic (because it is $n$-periodic & $qk$-periodic). So $r=0$ because $k$ is minimal. The other direction of the proof is quite easy - just break up the string into chunks of $k$ beads, color every bead within a chunk differently, and repeat for all chunks.
If $k\nmid n$ and the period is $k$, then take every $k^{th}$ bead. After several turns around the necklace, you will get $\dfrac n{\gcd(k,n)}$ different positions, spaced by $\gcd(k,n)$, all holding beads of the same color. Hence $\gcd(k,n)<k$ is a period, contradicting the minimality of $k$. E.g. $k=6,n=15\to\gcd(k,n)=3$. The beads $0,6,12,3,9$ ($5$ of them) have the same color; also $1,7,13,4,10$ and $2,8,14,5,11$. Hence beads $0,1,2$ already determine the whole necklace, i.e. the period is at most $3<6$, a contradiction.
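The orbit argument is easy to check by machine for the example above (a sketch):

```python
from math import gcd

n, k = 15, 6          # the answer's example: gcd(6, 15) = 3

# follow bead 0 around the necklace in jumps of k
orbit, p = set(), 0
while p not in orbit:
    orbit.add(p)
    p = (p + k) % n

# the positions visited are exactly the multiples of gcd(k, n):
# {0, 6, 12, 3, 9} = {0, 3, 6, 9, 12}
assert orbit == set(range(0, n, gcd(k, n)))
assert len(orbit) == n // gcd(k, n)
```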
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help in understanding integrating a function with an absolute value My math is rusty, and although I initially thought I understand the solution, upon further examination I think I don't: That's the original function: $$ \Psi(x,t) = A \mathrm{e}^{-\lambda|x|} \mathrm{e}^{-\mathrm{i} \omega t} $$ \begin{align*} \langle x^2 \rangle &= 2|A|^2 \int_0^\infty x^2 \mathrm{e}^{-2\lambda x}\,\mathrm{d}x \\ &= 2 \lambda \left[ \frac{2}{(2\lambda)^3} \right] \\ &= \frac{1}{2\lambda^2} \text{.} \end{align*} What I don't understand is why there's 2 in front of A square, why parameters of integration changed from minus infinity-plus infinity to 0-plus infinity, and why x lost its absolute value. At first I thought that he's using the symmetry of the function and calculating the integral from 0 to infinity, where |x| = x, then multiplying it by two. But after checking how to integrate absolute value functions I'm not sure my reasoning is correct. Sorry if the question is messed up, but I cannot imbed images directly, and have to use links.
Your reasoning seems correct to me. There's some dot/inner product of functions defined there (in your particular context). https://mathworld.wolfram.com/InnerProduct.html They just use this particular definition of the inner product, and calculate $\langle x,x\rangle$ i.e. $\langle x^2\rangle$ The rest of your reasoning is OK, indeed that's why all these modifications took place.
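If it helps, the symmetry step and the closed form can both be checked numerically (a sketch with a concrete $\lambda$; `midpoint` is a naive midpoint-rule integrator, and $|A|^2=\lambda$ is the normalization constant):

```python
import math

lam = 1.3             # any lambda > 0
A2 = lam              # |A|^2 = lambda makes the total probability integrate to 1

def midpoint(f, lo, hi, n=200_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

density = lambda x: A2 * math.exp(-2 * lam * abs(x))

# normalization, and the symmetry step: full integral = 2 x half integral
assert abs(midpoint(density, -40 / lam, 40 / lam) - 1.0) < 1e-6
full = midpoint(lambda x: x * x * density(x), -40 / lam, 40 / lam)
half = 2 * midpoint(lambda x: x * x * density(x), 0.0, 40 / lam)
assert abs(full - half) < 1e-6
# both match the closed form <x^2> = 1/(2 lambda^2)
assert abs(full - 1 / (2 * lam * lam)) < 1e-5
```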
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing a function is convex/concave given another function is concave Assuming $f(x)$ is not a linear function, given a concave decreasing function $f(x)$, I want to find whether $g(x)=f(x)\cdot x$ is convex or concave for strictly positive $x$. However, I'm having a trouble proving it mathematically. Since $f(x)$ is concave, $$ f((1-\alpha)x_1 +\alpha x_2)\geq (1-\alpha)f(x_1)+\alpha f(x_2)$$ Then $$g(x)=f(x)\cdot x$$ and $$f((1-\alpha)x_1 +\alpha x_2)\cdot ((1-\alpha)x_1+\alpha x_2)\geq (1-\alpha)f(x_1)+\alpha f(x_2)\cdot ((1-\alpha)x_1+\alpha x_2)$$ since $((1-\alpha)x_1+\alpha x_2)$ is positive and $f(x)$ is concave. And then I'm stuck and don't know how to go further. Could anyone help please? Thank you in advance.
If $f$ is twice differentiable and concave then $f''(x)\leq 0$. And $f$ decreasing implies $f'(x)\leq 0$. Setting $g(x)=xf(x)$, we have \begin{align*} g''(x)=2f'(x)+xf''(x)\leq 0 \end{align*} for all $x>0$. It follows that $g$ is concave for all $x>0$. If the assumption of $f$ twice differentiable is too stringent for your purposes let me know and I can remove my answer.
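A finite-difference spot check with one concrete choice of $f$ (ours), $f(x)=-e^x$, which is both concave and decreasing:

```python
import math

f = lambda x: -math.exp(x)   # f' = -e^x < 0 (decreasing), f'' = -e^x < 0 (concave)
g = lambda x: x * f(x)

def second_diff(fn, x, h=1e-4):
    # central second difference, approximates fn''(x)
    return (fn(x + h) - 2 * fn(x) + fn(x - h)) / (h * h)

for x in [0.1, 0.5, 1.0, 2.0, 4.0]:
    assert f(x + 1e-6) < f(x)      # decreasing
    assert second_diff(f, x) < 0   # concave
    assert second_diff(g, x) < 0   # hence g(x) = x f(x) is concave on x > 0
```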
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What are some questions that seem easy but aren't? I've only just started to study a degree in Mathematics and I find it extremely satisfying to have a seemingly easy question that is incredibly difficult or tricky to work out. I've looked into heaps of competition exams (although they aren't really what I'm looking for - seeing that they don't usually seem easy). A good example is the so-called 'Coffin Problems' (you can find some here). I'm reaching out here to hear any original problems that fit into this category. Here's an example of mine: If $\sin \theta + \cos \theta + \tan \theta = -1$, find the values of $\sin \theta + \cos \theta - \tan \theta$ for $\theta$ not being a multiple of $\pi$. From memory, one of the answers I got for that problem was $\sqrt{2}-1$, although I know that there are plenty of other answers. Also, I apologise if this question does not belong on this site. This was the only site I thought of to ask a question like this. Thanks!
There are some questions that are easy to comprehend, but for which we have no solution yet, period. I don't know if that is what you're looking for, or if you only want problems that do have a solution, but one that is hard to find, but here are some problems of the first kind: Goldbach Conjecture: Every even number greater than $2$ is the sum of two prime numbers. E.g. $4=2+2; 6=3+3; 8=5+3$, etc. We can comprehend this question immediately ... but no one knows the answer. We suspect it's true, but there is no proof yet. https://en.wikipedia.org/wiki/Goldbach%27s_conjecture Collatz Conjecture: Take any positive whole number. If it is even, divide by $2$. If odd, multiply by $3$ and add $1$. Keep doing this. The Collatz Conjecture is that if you keep doing this, you'll always at some point end up with $1$. Example: $7\to 22\to11\to34\to17\to52\to26\to13\to40\to20\to10\to5\to16\to8\to4\to2\to1$ Again, easy to comprehend, but no one knows if it's true or not. https://en.wikipedia.org/wiki/Collatz_conjecture
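Both conjectures are easy to probe by computer, which is part of their charm; a sketch (this of course decides nothing, and `is_prime` is deliberately naive):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    # first decomposition of an even n > 2 into two primes, if one exists
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Goldbach holds for every even number up to 2000
assert all(goldbach_pair(n) is not None for n in range(4, 2001, 2))

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

assert collatz_steps(7) == 16   # matches the chain 7 -> 22 -> ... -> 1 above
# every starting value below 10000 reaches 1 (i.e. the loop terminates)
assert all(collatz_steps(n) >= 0 for n in range(1, 10_000))
```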
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
the simplest non-trivial line bundle over Riemann sphere We define Riemann sphere as $S=\mathbb{C}^2-\{0\}/\sim$. Given a point $p$ over $S$, I have seen somewhere there exists a line bundle $L_p$ associated to $p$, and $L_p$ has a non-zero holomorphic section with only one zero at $p$. I think this construction is well-known to most people, I want to know how to construct the line bundle associated to $p$, by definition of the line bundle, how to define the space $E$ and the map $E\rightarrow S$ such that each fiber is a complex vector space of dimension $1$? Thanks!
For simplicity let's take $P = [0: 1]$. Cover $S$ with the two opens $U_0 = S \setminus [1: 0]$ and $U_\infty = S \setminus [0:1]$. Now take the disjoint union of two copies of the trivial bundle: $M = (U_0 \times \mathbb{C}) \sqcup(U_\infty \times \mathbb{C})$ We impose an equivalence relation on $M$ by the rule $([a:b], z)_0 \sim ([a:b], bz/a)_\infty$ when neither $a$ nor $b$ are zero, and set $L = M/\sim$. Since $\sim$ respects the projection to $S$, there is a natural map $L \to S$. The fiber over a point is $\mathbb{C}$, and local triviality is clear (every point is in $U_0$ or $U_\infty$). To give a section of $L$ is to give a map $f_0: U_0 \to \mathbb{C}$ and a map $f_\infty: U_\infty \to \mathbb{C}$ such that $$ bf_0([a: b]) = af_\infty([a:b]) $$ whenever neither $a$ nor $b$ are zero. There is such a section: $f_0([a: b]) = a/b$ and $f_\infty([a: b]) = 1$. This is locally holomorphic, so it's holomorphic, and it has a unique zero of order one at $P$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Obtaining eigenvalues of a complex matrix Let $$A = \begin{bmatrix} 0 & i+1 & 0 \\ i & 0 &2 -i \\ 2-i & 0 & i \end{bmatrix}$$ and eigenvalues, eigenvectors are asked.

* By the frequently used method, we get the characteristic equation as $$\lambda^{3} - i (1 + \lambda )\lambda + \lambda - 8 = 0$$ with no way to solve this equation by hand. By using some web sites, $3$ different complex roots are obtained, all with "ugly" coefficients like $(-1.0463-1.6363i),\dots$ Then by using these eigenvalues, it is almost impossible to get the eigenvectors by hand.
* Diagonalizing matrix $A$ by row operations and picking up the diagonal entries. I've done this too. I've obtained "nice" eigenvalues, but they do not satisfy the original characteristic equation given above. So there is some inconsistency. What have I missed here?

Any help will be appreciated. (This was an exam question, so it is supposed to be solved in a limited time by hand)
The characteristic polynomial is given by $$ \chi(t)=t^3 - it^2 + ( 1 - i)t - 8 $$ and the roots $z_1,z_2,z_3$ satisfy $$ z_1+z_2+z_3=i,\; z_1z_2z_3=8. $$ So we have $z_3=i-z_1-z_2$ and $z_1z_2(i-z_1-z_2)=8$. This gives a quadratic equation.
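The two symmetric-function relations can be verified directly, since the eigenvalues sum to the trace of $A$ and multiply to its determinant (a quick check in exact complex arithmetic):

```python
# the 3x3 matrix from the question, with complex entries
A = [[0, 1 + 1j, 0],
     [1j, 0, 2 - 1j],
     [2 - 1j, 0, 1j]]

trace = A[0][0] + A[1][1] + A[2][2]

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# z1 + z2 + z3 = tr(A) = i   and   z1 z2 z3 = det(A) = 8
assert trace == 1j
assert det3(A) == 8
```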
{ "language": "en", "url": "https://math.stackexchange.com/questions/3739959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that if $X$ is absolutely continuous and $g$ is absolutely continuous on bounded intervals, then $g(X)$ is absolutely continuous. I'm taking a course on probability, and the following question showed up in the reference textbook: Show that if $X$ is absolutely continuous with pdf $f_X(\cdot)$ and $g$ is absolutely continuous on bounded intervals such that $g'(\cdot)>0 \ a.e \ (\lambda)$, then $Y = g(X)$ is also absolutely continuous with pdf $$ f_Y(y) = \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))} $$ I'm quite lost on how to solve this. It would be much help with someone could show how to do it. Note: the $\lambda$ stands for the Lebesgue measure, $g:\mathbb R \rightarrow \mathbb R$ is Borel measurable, $X$ is a random variable with probability space $(\Omega, \mathcal F, P)$.
Maybe a hint: Since $g$ is invertible under these assumptions, $ F_Y(t) = P( Y \le t) = P(X \le g^{-1}(t)). $ Differentiate in $t$, and in applying the chain rule use the inverse function theorem of calculus, $(g^{-1})'(t) = 1/g'(g^{-1}(t))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Suppose every element of $\mathcal F$ is a subset of every element of $\mathcal G$. Prove that $\bigcup \mathcal F\subseteq \bigcap\mathcal G$. Not a duplicate of Prove that if F and G are nonempty families of sets, then $\bigcup \mathcal F \subseteq \bigcap \mathcal G$ Validity of this proof: Prove that $\cup \mathcal{F} \subseteq \cap \mathcal{G}$ Proof that $\bigcup\mathscr F\subseteq\bigcap\mathscr G$, when every element of $\mathscr F$ a subset is of every element of $\mathscr G$ This is exercise $3.3.17$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Suppose $\mathcal F$ and $\mathcal G$ are nonempty families of sets, and every element of $\mathcal F$ is a subset of every element of $\mathcal G$. Prove that $\bigcup \mathcal F\subseteq \bigcap\mathcal G$. Here is my proof: Suppose $x$ is an arbitrary element of $\bigcup\mathcal F$. This means that we can choose some $A_0$ such that $A_0\in \mathcal F$ and $x\in A_0$. Let $B$ be an arbitrary element of $\mathcal G$. Since $\forall A\in\mathcal F\forall B\in\mathcal G(A\subseteq B)$, $A_0\subseteq B$. From $A_0\subseteq B$ and $x\in A_0$, $x\in B$. Thus if $B\in \mathcal G$ then $x\in B$. Since $B$ was arbitrary, $\forall B\Bigr(B\in\mathcal G\rightarrow x\in B\Bigr)$ and so $x\in\bigcap \mathcal G$. Therefore if $x\in \bigcup\mathcal F$ then $x\in\bigcap\mathcal G$. Since $x$ was arbitrary, $\forall x\Bigr(x\in \bigcup\mathcal F\rightarrow x\in\bigcap\mathcal G\Bigr)$ and so $\bigcup\mathcal F\subseteq\bigcap\mathcal G$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention.
Your proof is correct! I also like how you justify everything. You can also check this alternative approach: Proof. Take $A \in \mathcal F$. Then, as $A \subseteq B$ for all $B \in \mathcal G$, $$A \subseteq \bigcap_{B \in \mathcal G} B = \bigcap \mathcal G.$$ As $A$ was arbitrary, it follows that $$\bigcup \mathcal F = \bigcup_{A \in \mathcal F} A \subseteq \bigcap \mathcal G. \quad \blacksquare$$ In the first part we used that the intersection of a non-empty family of sets is the biggest set that is contained in each element of the family, and in the second part we used that the union of a family of sets is the smallest set that contains each member of the family.
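A tiny concrete instance of the statement, using Python sets (our example families):

```python
# every element of F is a subset of every element of G
F = [{1, 2}, {2, 3}]
G = [{1, 2, 3, 4}, {1, 2, 3, 5}]
assert all(A <= B for A in F for B in G)

union_F = set().union(*F)               # {1, 2, 3}
intersection_G = set.intersection(*G)   # {1, 2, 3}
assert union_F <= intersection_G        # union of F is a subset of intersection of G
```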
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
exp(A+B) = exp(A)exp(B) for matrices proof In this thread, On the proof: $\exp(A)\exp(B)=\exp(A+B)$ , where uses the hypothesis $AB=BA$?, it was mentioned that absolute convergence is required for swapping sums. What theorem is used precisely?
I think actually absolute convergence is only mentioned in the original question that is linked in the question you linked: https://math.stackexchange.com/a/356763/688699 Anyway, if you're talking about rearranging an infinite sum, then this is just the Riemann Series Theorem. In particular (quoting from Wikipedia), let $X$ be a topological vector space. For example, this could be an additive matrix group, which we can see as $\mathbb{R}^{n}$ for some $n$. Then a series $\sum_{n=0}^{\infty} x_{n}$ is called unconditionally convergent if it converges to some point $x\in X$ and any rearrangement of the order of summation produces a series converging to $x$ also. Then the Riemann Series Theorem says that, for $X=\mathbb{R}^{n}$, a series is unconditionally convergent if and only if it is absolutely convergent. For more details see: https://en.wikipedia.org/wiki/Unconditional_convergence
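A small numeric illustration of why commutativity matters in $\exp(A+B)=\exp(A)\exp(B)$ (all helper names are ours; the series is truncated, which is fine for these small matrices):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(X, terms=30):
    # truncated power series sum_n X^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        power = mat_mul(power, X)
        result = [[result[i][j] + power[i][j] / math.factorial(n)
                   for j in range(2)] for i in range(2)]
    return result

def close(X, Y, tol=1e-9):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

A, B = [[1, 2], [0, 1]], [[3, 4], [0, 3]]        # these commute: AB == BA
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert mat_mul(A, B) == mat_mul(B, A)
assert close(expm(S), mat_mul(expm(A), expm(B)))      # identity holds

C, D = [[0, 1], [0, 0]], [[0, 0], [1, 0]]        # these do not commute
T = [[C[i][j] + D[i][j] for j in range(2)] for i in range(2)]
assert not close(expm(T), mat_mul(expm(C), expm(D)))  # identity fails
```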
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Smith normal form and basis for the image of a module homomorphism Let's say $$\varphi: \mathbb{Z}^n \to \mathbb{Z}^m$$ is a $\mathbb{Z}$-linear mapping and $A$ is the transformation matrix of $\varphi$ and $$SAT = Q$$ where $Q$ is the Smith normal form of $A$ with $S$ and $T$ both invertible over the Ring $\mathbb{Z}$. I want to prove that the columns of $AT$ are a basis of $im(\varphi)$ but I cannot come up with a solution.
The image of $\varphi$ consists of the vectors $Av$ for $v\in\Bbb Z^n$. As $T$ is invertible over $\Bbb Z$, the $ATw$ for $w\in\Bbb Z^n$ are the same as the $Av$ for $v\in\Bbb Z^n$. So this image is the $\Bbb Z$-span of the columns of $AT$ (as well as the $\Bbb Z$-span of the columns of $T$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Einstein notation for a sum of vector elements Perhaps this question is misguided, but I am having difficulty writing a simple matrix expression in Einstein notation. In my expression I have a vector $v$ and I wish to define the scalar value, $a$, as the sum of the components of $v$, $v^i$. I know in Einstein notation you use repeated indices to represent a summation, but my problem is where would the second index appear? Do I have to use a Kronecker delta with one index like so perhaps: $$a=v^i\delta_i$$ Is this the usual convention?
The issue here is that the notion of the sum of the elements of a vector is not basis independent. So the way you would do this is to define a row-vector whose entries are all $1$s in the basis that you're working in, but note that this vector could look completely different in another basis. I think $\delta_i$ is a logical name for this, but you might also see it called $1_i$. Then you're right, you can define the sum of the elements of $v$ as $v^i\delta_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Expected number of tosses to get 3 heads (not necessarily in a row) Intuition tells me the expected number of tosses of a fair coin needed to get 3 TOTAL heads is 6. I'm trying to show this using purely (1) Bernoulli random variables and (2) purely Geometric random variables For approach (1), Let $X_i = 1$ if the i-th toss gives heads and $X_i = 0$ if the i-th toss gives tails. Let $N$ be the number of tosses to get 3 heads. Then we have that $$ 3 = \sum_{i=1}^{E[N]} E[X_i] \\ 3 = E[N] \frac{1}{2} \\ E[N] = 6 $$ I'm now analyzing my own solution, and I feel that the first line $3 = \sum_{i=1}^{E[N]} E[X_i]$ seems odd, but it makes sense in my head. It seems odd because I haven't solved a problem before where the summation goes up to an expected value. Is this procedure sound? If so, is there a formal name for the formula that I applied in $3 = \sum_{i=1}^{E[N]} E[X_i]$? Now for approach 2. I am having problems with formulating this problem in terms of geometric random variables. The way I've learned geometric random variables is that they're the count of the number of trials before getting the FIRST success. In this case, the first success could simply be getting the first head. Would it still be a geometric random variable if I instead defined the first success as getting all 3 heads? If so, I am having trouble figuring out how to solve the problem from this definition of "first success." Now if the first success was defined as just getting the first head, then the expected number of trials is certainly 2. It seems that I can define 3 geometric RVs, $X_1, X_2, X_3$ to represent the number of trials needed to get the first, second, and third head. In this case the answer is $E[X_1] + E[X_2] + E[X_3] = 6$. Are $X_2$ and $X_3$ independent of $X_1$? It seems that they are dependent because to get the second or third head, you'd need to get the first head first?
The identity used in the first approach, $E\left[\sum_{i=1}^{N} X_i\right] = E[N]\,E[X_1]$ for a random number $N$ of i.i.d. terms, is known as Wald's identity. Since expectations are additive, the second approach works as well and is used, for example, in the coupon collector problem: if we have $n$ possible outcomes, every outcome equally likely, then the number of trials to get each outcome at least once has expectation $$\sum_{j=1}^n \frac{n}{j}$$ This can be seen as a chain of "first-success" problems with success probabilities $1,\frac{n-1}{n},\cdots,\frac{1}{n}$
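A quick Monte Carlo check of $E[N]=6$ (a sketch; the helper name, seed, and trial count are arbitrary choices):

```python
import random

def tosses_until_heads(k, p=0.5):
    # number of fair-coin tosses until k total heads are seen
    n = heads = 0
    while heads < k:
        n += 1
        if random.random() < p:
            heads += 1
    return n

random.seed(0)
trials = 200_000
avg = sum(tosses_until_heads(3) for _ in range(trials)) / trials
assert abs(avg - 6) < 0.1     # E[N] = 3 / (1/2) = 6
```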
{ "language": "en", "url": "https://math.stackexchange.com/questions/3740922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is $\sum_{n=0}^N\frac{a_n}{10^n}$ a Cauchy sequence? I've been studying the construction of real numbers this week and I've read about Cantor's construction using Cauchy sequences and Dedekind's construction. Now the book I'm reading (Classic Set Theory for Guided Independent Study) gives a new kind of construction by decimal expansion. First it says: "We are quite accustomed to writing numbers by their decimal expansions. An expansion of this sort is really an infinite series of the form $\sum_{n=0}^\infty\frac{a_n}{10^n}$" then it says "The definition of an infinite series says that this is the limit of the sequence of its partial sums $\langle s_N\rangle $, where $s_N=\sum_{n=0}^N\frac{a_n}{10^n}$, where all $a_n$ are integers and $a_n\in\{0,1,2,3,4,5,6,7,8,9\}$" now the part I don't understand: "$\langle s_N\rangle $ is a Cauchy sequence of rationals, which connects decimal expansions to Cantor reals - each equivalence class in Cantor's definition contains such a sequence $\langle s_N\rangle $" How is $\sum_{n=0}^N\frac{a_n}{10^n}$ a Cauchy sequence? And how is this a sequence at all if it's a series? I thought series and sequences were two different things. For example, $\sum_{n=0}^3\frac{n}{10^n}=0+0.1+0.02+0.003$; how is this a Cauchy sequence? Maybe it's because I've never studied real analysis before and that's why I'm struggling with this; can you guys help me out please?
The generic term of the sequence is $s_N=\sum_{n=0}^N\frac{a_n}{10^n}$ with $N\geq 0$. In order to show that $(s_N)_N$ is a Cauchy sequence, note that for $M\geq N\geq 0$, $$0\leq s_M-s_N=\sum_{n=N+1}^M\frac{a_n}{10^n}\leq 9\sum_{n=N+1}^M\frac{1}{10^n}<9\sum_{n=N+1}^\infty\frac{1}{10^n}=\frac{1}{10^N}$$ where we used that fact that $0\leq a_n\leq 9$. Can you take it from here?
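A quick check of the bound $0\le s_M-s_N<10^{-N}$ with exact rational arithmetic (a sketch with an arbitrary digit string):

```python
from fractions import Fraction

digits = [3, 1, 4, 1, 5, 9, 2, 6, 5]          # a_0, a_1, ..., a_8

def s(N):
    # partial sum s_N = sum_{n=0}^N a_n / 10^n, computed exactly
    return sum(Fraction(a, 10**n) for n, a in enumerate(digits[:N + 1]))

# For M >= N, the difference lies in [0, 10^-N), as in the answer's estimate.
for N in range(len(digits)):
    for M in range(N, len(digits)):
        diff = s(M) - s(N)
        assert 0 <= diff < Fraction(1, 10**N)
```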
{ "language": "en", "url": "https://math.stackexchange.com/questions/3741053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
product of nonsingular polynomials In $\mathbb{R}^n$, a polynomial $p$ is called nonsingular if $0$ is a regular value of it, i.e. for all $x$ such that $p(x)=0$ we have $\nabla p(x)\neq0$. How to prove that the product of nonsingular polynomials has the property that its nonsingular points are dense in its zero set? I have seen this property in Guth's paper on polynomial partitioning. But I think it is wrong. If $$p_1=p_2=x,\qquad p=p_1\cdot p_2=x^2$$ then $\nabla p = (2x, 0, \dots, 0)$ vanishes everywhere on the zero set $\{x=0\}$, so every point of the zero set is singular. So, something must be wrong.
The statement is false as written. But if we require the polynomials to be pairwise coprime, it becomes true. And in Guth's paper one can reduce to this case: take the prime factorization of the product and delete the repeated powers, i.e. replace the product by its squarefree part, which has the same zero set.
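A small symbolic illustration of that repair (a sketch using SymPy; `factor_list` recovers the squarefree factor of $x^2$, which has the same zero set but $0$ as a regular value):

```python
import sympy as sp

x, y = sp.symbols('x y')

p = x**2                          # p1 * p2 with p1 = p2 = x
grad_p = [sp.diff(p, v) for v in (x, y)]
# On the zero set {x = 0}, the gradient of p vanishes identically:
assert all(g.subs(x, 0) == 0 for g in grad_p)

# The squarefree part is x: same zero set, but nonsingular there.
r = sp.factor_list(p)[1][0][0]    # the base of the repeated factor
grad_r = [sp.diff(r, v) for v in (x, y)]
assert any(g.subs(x, 0) != 0 for g in grad_r)
```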
{ "language": "en", "url": "https://math.stackexchange.com/questions/3741161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\lim_{n\to \infty} a_n=\frac{\sqrt{5}-1}{2}$ if $a_{n+1}=\sqrt{1-a_n}$ and $0<a_0<1$ If $0<a_0<1$ and $a_{n+1}=\sqrt{1-a_n}$, prove that: $$\lim_{n\to \infty} a_n=\dfrac{\sqrt{5}-1}{2}$$ Here what I do is that when $n\to \infty$, $a_{n+1}=a_n$ $\therefore a_n=\sqrt{1-a_n}\Rightarrow a_{n}^2+a_n-1=0$ which implies that when $n\to \infty$, $a_n\to \dfrac{\sqrt{5}-1}{2}$ $\therefore \lim_{n\to \infty} a_n=\dfrac{\sqrt{5}-1}{2}$ My doubt here is, was I correct in simply assuming that $a_{n+1}=a_n$ when $n\to \infty$? (I'm sorry, for I may be weak in basics.) If not, then please suggest an alternative method.
hint It is easy to prove by induction that for all $ n\ge 0$, $0\le a_n\le 1$. Let $$f(x)=\sqrt{1-x}$$ from $[0,1]$ to $[0,1]$. For $x\in[0,1)$, $$f'(x)=\frac{-1}{2\sqrt{1-x}}<0$$ so $ f$ is decreasing on $ [0,1] $, and hence the subsequences $ (a_{2n})$ and $( a_{2n+1})$ are monotonic and convergent. I leave it to you to prove that they have the same limit.
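A quick numerical iteration supports the claimed limit (a sketch; the starting value and step count are my choices, and 200 steps are plenty since $|f'|<1$ near the fixed point):

```python
import math

a = 0.3                           # any a_0 in (0, 1)
for _ in range(200):
    a = math.sqrt(1 - a)

limit = (math.sqrt(5) - 1) / 2    # the positive root of t^2 + t - 1 = 0
assert abs(a - limit) < 1e-12
print(a)                          # ~ 0.6180339887
```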
{ "language": "en", "url": "https://math.stackexchange.com/questions/3741423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $\frac{\partial}{\partial r}\int_{S_r}u(x,y)\,ds =0$ Show that for all $0<r\leq \rho$, $\frac{\partial}{\partial r}\int_{S_r}u(x,y)\,ds =0$. Well, the hypothesis is that $u$ is harmonic ($\Delta u=0$), $0\in \Omega$, and $B_{\rho}$, the ball of radius $\rho>0$, satisfies $B_{\rho}\subset \Omega$; $\frac{\partial}{\partial r}$ is the derivative in the unit radial direction, and $S_r$ is the circle with center at the origin. I am not sure how to start. I need to show that the integral is constant with respect to $r$. I know that the derivative can be brought under the integral sign because $u$ is of class $C^2$ and $S_r$ is a compact set (closed and bounded in $\mathbb{R}^n$), but what can I do? On the other hand, I tried integration by parts, but I would need to introduce another function $v$ that vanishes on the boundary, and that doesn't go anywhere. Can you help me with a hint please? Thank you
One answer is that this is automatically zero by the spherical mean-value property of harmonic functions. However, this fact is generally used to prove the spherical mean-value property, so let's do it directly. First for simplicity let's define $\phi(r) = \frac{1}{r^{n - 1}}\int_{S_r} u(x) dS(x)$. Here $x \in \mathbb{R}^n$ and $dS(x)$ represents the surface measure with respect to varying $x$ (the proof is no different in 2D than in $n$D). The first step is realizing that it's hard to take derivatives when the variable is in the limit of integration, so the first thing to do would be to reparametrize the integral via a change of variables to get the $r$ out of the limits and into the integrand. The standard choice is to integrate over the unit sphere. This gives, by changing variables $x \mapsto ry$ for $y \in S_1$, $$ \phi(r) = \int_{S_1}u(ry)dS(y). $$ (Where did the factor of $r^{n - 1}$ go?) Then take a derivative using the chain rule.
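A numerical sanity check of the resulting mean-value property in 2D (a sketch; `circle_avg` approximates $\phi(r)$ up to the $2\pi$ normalization by the midpoint rule, and $u$ is an arbitrary harmonic polynomial of my choosing):

```python
import math

def circle_avg(u, r, m=4096):
    # midpoint rule for the average of u over the circle of radius r
    return sum(u(r * math.cos(2 * math.pi * (k + 0.5) / m),
                 r * math.sin(2 * math.pi * (k + 0.5) / m))
               for k in range(m)) / m

def u(x, y):                      # harmonic: u_xx + u_yy = 0
    return x * x - y * y + 3 * x - 2 * y + 7

avgs = [circle_avg(u, r) for r in (0.5, 1.0, 2.0)]
# The average does not depend on r; it always equals u(0, 0) = 7.
assert all(abs(a - u(0.0, 0.0)) < 1e-9 for a in avgs)
```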
{ "language": "en", "url": "https://math.stackexchange.com/questions/3741562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
prove that if $E$ is connected and $E \subseteq F \subseteq \overline{E}$, then $F$ is connected. Define a set $A$ to be disconnected iff there exist nonempty relatively open sets $U$ and $W$ in $A$ with $U\cap W = \emptyset$ and $A = U\cup W.$ Define a set $A$ to be connected iff it is not disconnected.(there are many equivalent definitions, but I want to prove this lemma using this one). Prove that if $E$ is connected and $E\subseteq F \subseteq \overline{E},$ then $F$ is connected. Let $U, W$ be a separation for $F$. Find open sets $O_U$ and $O_W$ so that $U = F \cap O_U$ and $W = F\cap O_W.$ I claim that $E\cap O_U, E\cap O_W$ separate $E$. However, I'm unable to show that $U' = E\cap O_U, W' =E\cap O_W \neq \emptyset$ (I think this should be straightforward, but for some reason, I can't figure this out). Suppose $U' = \emptyset.$ Then $E\cap O_U = \emptyset.$ Since $E \subseteq F = U\cup W = F\cap (O_U \cup O_W)\subseteq O_U\cup O_W,$ we have that $E \subseteq O_W,$ so $E\cap O_W = E\subseteq F\cap O_W = W\subseteq F\subseteq \overline{E}.$ Observe that since $E\cap O_U = \emptyset, F\cap O_U = (F\backslash E)\cap O_U\subseteq F\backslash E.$ Similarly, $W'\neq \emptyset.$ Clearly, $U', W'$ are relatively open in $E$. Suppose $U'\cap W' \neq \emptyset.$ Let $x\in U'\cap W'.$ Then $x\in E\cap O_U\cap O_W \subseteq F\cap O_U\cap O_W = U\cap W = \emptyset,$ a contradiction. So $U'\cap W' = \emptyset.$ Also, $U'\cup W' = (E\cap O_U)\cup (E\cap O_W) = E\cap (O_U \cup O_W)$ and $E\subseteq (O_U \cup O_W),$ so $U'\cup W' = E.$
Assume $E \subseteq F \subseteq \overline{E} \subseteq X$ and that we're working in the topology of $E$ relative to $X$. If $F$ is disconnected, then $\exists U, V \subseteq X$ open (in $X$) such that $F \subseteq U \cup V$ and $U \cap V \cap F = \emptyset$. But then $U \cap V \cap E = \emptyset$ and $E \subseteq F \subseteq U \cup V$, so $U$ and $V$ are a separation for $E$, which is connected. Thus, without loss of generality, $E \subseteq U$ and $E \cap V = \emptyset$. But then $E \subseteq X \setminus V$, which is closed (because $V$ is open), and since the closure of $E$ is the intersection of all closed sets containing $E$, that means $\overline{E} \subseteq X \setminus V$, so $\overline{E} \cap V = \emptyset$ and since $F \subseteq \overline{E}$, we also have $F \cap V = \emptyset$ so that $U, V$ do not separate $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3741882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Converting between bound on probability measures and densities Suppose that $P$ and $Q$ are two probability measures on the same probability space with $P(A) \leq c Q(A)$ for each (measurable) set $A$. Is it true that $dP/dQ$ is then bounded by $c$ $P$-almost surely?
Let $f$ denote the Radon-Nikodym derivative $\frac{dP}{dQ}$. Then $\int_Af\,dQ\leq cQ(A)$ for all $A$ and so $$\int_A (c-f)\,dQ\geq0\quad\text{for all} \quad A$$ From this, it should follow that $f\leq c$ $Q$-a.s. Take for instance $A=\{f>c\}$. (You may use the well-known fact that if $\phi\geq0$ and $\int \phi\,d\mu=0$, then $\phi=0$ $\mu$-a.s.)
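A discrete sanity check of the statement (a sketch on a made-up four-point space, where the density is simply the ratio of atom masses): if $P(A)\le cQ(A)$ for every subset $A$, the density is bounded by $c$ on every atom.

```python
from fractions import Fraction as F
from itertools import combinations

Q = [F(1, 4)] * 4
P = [F(1, 2), F(1, 4), F(1, 8), F(1, 8)]
c = F(2)

points = range(4)
subsets = [A for r in range(5) for A in combinations(points, r)]
measure = lambda m, A: sum((m[i] for i in A), F(0))

# the hypothesis P(A) <= c Q(A) holds for every subset A
assert all(measure(P, A) <= c * measure(Q, A) for A in subsets)

density = [p / q for p, q in zip(P, Q)]   # dP/dQ on each atom
assert all(f <= c for f in density)
```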
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that there is a positive number $B$ such that $|x_{n}| \geq B$ for all $n$ $\{x_n\}^{\infty}_{n=1}$ is a sequence of real numbers that converges to $x_0$, and all $x_n$ and $x_0$ are nonzero. I have done the following: for all $n$ in $|x_{n}|\geq B$ $x_0 < B$ $\epsilon<B-x_0$ $\epsilon+x_0<B$ for all $n$ in $N\geq n \in I_{\epsilon}(x_0)=[x_0-\epsilon,x_0+\epsilon]$ $|x_{n}|\geq B$ contradicts $\epsilon+x_0<B$ Would this be a reasonable approach? I am fairly new to this side of mathematics and have been having trouble with proofs. Introductory Analysis is a pre-requisite to my further studies in Economics which is why I am attempting to learn it.
We have $x_n \to x_0$ as $n \to \infty$. Then, by definition, for any $\epsilon > 0$, $$ \exists N \in \mathbb{N} : \forall n \geq N, | x_n - x_0 | < \epsilon $$ If we choose $\epsilon = |\frac{x_0}{2}|$ we get that for all $n \geq N$ $$ |x_n - x_0| < |\frac{x_0}{2}| \Longrightarrow |x_n| \geq |\frac{x_0}{2}| $$ If you struggle to see why this is true you can separate it into $x_0 >0$ and $x_0 < 0$ and work it out from there. Then, you will have that for all $n$, $$ |x_n| \geq \min \{|x_1|, |x_2|, ..., |x_N|, |\frac{x_0}{2}|\} > 0 $$
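To see the argument concretely, here is a numerical sketch with a made-up convergent sequence of nonzero terms: past some index $N$ every term stays within $|x_0|/2$ of $x_0$, and the finitely many earlier terms are handled by taking a minimum.

```python
x0 = 0.5
xs = [x0 + (-1)**n / (n + 2) for n in range(1000)]   # x_n -> x0, all nonzero

# |x_n - x0| = 1/(n+2) < |x0|/2 = 0.25 once n >= N = 3
N = 3
B = min([abs(x) for x in xs[:N]] + [abs(x0) / 2])

assert B > 0
assert all(abs(x) >= B for x in xs)
```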
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
An alternate definition of Limit Let's say we were to define the concept of limit as follows: $\displaystyle{\lim_{x \to c}}f(x)=L$ means that for every $x$ in the domain of $f$, there exists an $x_0 \neq x$ in the domain of $f$ such that: $$|x-c|>|x_0-c|$$and$$|f(x)-L|\ge|f(x_0)-L|$$ I have two questions: * *Does there exist a limit that can be proven using the (ε, δ)-definition but not by using the above definition? *Does there exist a limit that can't be proven using the (ε, δ)-definition but can be proven using the above definition?
The issue is not what you can prove, but that these two definitions are different. * *Let us define the function: $$ f(x)=\begin{cases} 1 & x=0\\ 0 & x=1\\ x & x\ne 0 \text{ and } x\ne 1 \end{cases} $$ Then, with the conventional definition, $\lim_{x\to 0} f(x)=0$. Take $x=1$. There is no point $x_0\ne x$ with $|f(x_0)|\le|f(x)|=0$. So the limit does not exist with your definition. *Let $f(x)=\sin\dfrac{1}{x}$. For every $x\ne 0$, you can choose an integer $k$ large enough so that: $$ x_0=\frac{1}{2\,k\,\pi}<|x| $$ Then: $$ |f(x_0)|=\left|f\left(\frac{1}{2\,k\,\pi}\right)\right|=|\sin 2\,k\,\pi|=0\le\left|\sin\dfrac{1}{x}\right|=|f(x)| $$ This "proves" that $\lim_{x\to 0}f(x)=0$ with your definition. But the conventional limit does not exist. ADDITION: As mentioned in other answers, your definition does not even define a unique limit. In the last example, choose $x_0=\frac{1}{2k\pi+m}$ with $\sin m=L$. Then, again $|f(x_0)-L|=0$ and $L$ would also be a limit for any $-1\le L\le 1$.
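The witness construction in the second counterexample can be made concrete (a sketch; the helper names and sample points are mine): for any $x\neq 0$ we can pick $x_0=\frac1{2k\pi}$ closer to $0$ where $f$ vanishes.

```python
import math

f = lambda x: math.sin(1 / x)

def witness(x):
    # smallest k making 1/(2*k*pi) < |x|
    k = math.floor(1 / (2 * math.pi * abs(x))) + 1
    return 1 / (2 * k * math.pi)

for x in (0.5, 0.03, -1e-4):
    x0 = witness(x)
    assert abs(x0) < abs(x)
    # sin(2*k*pi) = 0 exactly; in floating point it is only tiny
    assert abs(f(x0)) <= abs(f(x)) + 1e-9
```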
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Inverse function on the interval $[1, 9.5]$ I am struggling to find the inverse of the following function $$f(x) = \frac{10}{3}\exp\big(-0.06x\big)\log\bigg(\frac{1}{5}\big(2x + 3\big)\bigg).$$ I noticed that this function is not one-to-one, so I restricted its domain to the interval $[1, 9.5]$. I am aware that there may not be an elementary answer, so while I may prefer an elementary answer, I would be very glad to get an answer in the form of something like a series. Thank you in advance!
I can almost guarantee that this will not have an elementary inverse. If your need is simply to calculate values, then I suggest a root-finding technique. You have this tagged as pre-calculus, but I'm going to bend that a bit and suggest Newton's method. Though setting up the recurrence for Newton's method requires a derivative, that only needs to be performed once, and then the rest is just algebra. Suppose you want to find $f^{-1}(a)$ for some $a$. Define $g(x) = f(x) - a$. The problem is now one of finding the root of $g$. Newton's method sets up the recurrence $$x_{n+1} = x_n - \dfrac{g(x_n)}{g'(x_n)} = x_n - \dfrac{f(x_n) - a}{f'(x_n)}$$ which hopefully will converge to the root of $g$, which is $f^{-1}(a)$. For your function, if you start with $x_0 = 1$, it will converge very quickly for $a < 2.5$. As $a$ gets close to its maximum (a little under $3$), convergence will slow, but the sequence should still converge instead of blowing up, as occasionally happens with Newton's method. $$f'(x) = \frac{10}3e^{-0.06x}\left[(-0.06)\log\left(\dfrac{2x+3}5\right) + \dfrac2{2x+3}\right]$$ So the method becomes (with some simplification) $$\begin{align}x_0 &= 1\\x_{n+1} &= x_n - \dfrac{\dfrac{50}3\log\left(\dfrac{2x_n + 3}5\right) - 5a\,e^{0.06x_n}}{\dfrac {100}{6x_n+9} - \log\left(\dfrac{2x_n+3}5\right)}\end{align}$$ which will converge to $f^{-1}(a)$ for $a \in (-\infty, 2.7949)$, giving values in $\left(-1.5, 9.6483\right)$.
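A runnable sketch of the suggested iteration (the function names, tolerance, and iteration cap are my choices; the update rule is the simplified recurrence above):

```python
import math

def f(x):
    return (10 / 3) * math.exp(-0.06 * x) * math.log((2 * x + 3) / 5)

def f_inverse(a, x0=1.0, tol=1e-12, max_iter=100):
    # Newton iteration for f(x) = a, in the simplified form from the answer
    x = x0
    for _ in range(max_iter):
        L = math.log((2 * x + 3) / 5)
        step = ((50 / 3) * L - 5 * a * math.exp(0.06 * x)) / (100 / (6 * x + 9) - L)
        x -= step
        if abs(step) < tol:
            break
    return x

a = 1.5
x = f_inverse(a)
assert abs(f(x) - a) < 1e-9
```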
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Regular level set theorem proof, manifolds Let $F:N\rightarrow M$ be a smooth map between manifolds of dimension $n$ and $m$ respectively. A non-empty regular level set $F^{-1}(c)$ where $c\in M$ is a submanifold of $N$ of dimension equal to $n-m$. The proof starts off by something like this: Choose a chart $(V,\psi)=(V,y^1,.....,y^m)$ centred about $c$. By continuity of smooth maps, $F^{-1}(V)$ is an open set in $N$ that contains $F^{-1}(c)$. In $F^{-1}(V)$, $F^{-1}(c)=(\psi \circ F)^{-1}(\textbf{0})$. However the part I don't understand is the following: In $F^{-1}(V)$, $F^{-1}(c)$ is the common zero set of the functions $r^i \circ (\psi \circ F)$, where the $r^i$ are the standard coordinates on Euclidean space. How do I actually prove that?
Upgrading my comment to an answer: We wish to show that $F^{-1}(c)$ is the common zero set of the functions $r^i \circ \psi \circ F$ for $i \in \{1, \dots, m\}$. In other words, we want $$F^{-1}(c) = \bigcap_{i = 1}^m (r^i \circ \psi \circ F)^{-1}(0).$$ We already know that $F^{-1}(c) = (\psi \circ F)^{-1}(\mathbf{0})$, and of course $\{\mathbf{0}\} = \bigcap_{i=1}^m (r^i)^{-1}(0)$. Thus, $$F^{-1}(c) = (\psi \circ F)^{-1}(\{\mathbf{0}\}) = (\psi \circ F)^{-1}\left(\bigcap_{i=1}^m (r^i)^{-1}(0)\right) = \bigcap_{i=1}^m (\psi \circ F)^{-1}((r^i)^{-1}(0)) = \bigcap_{i=1}^m (r^i \circ \psi \circ F)^{-1}(0),$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need help in solving a linear algebra ( System of Equations) quiz problem I am solving previous year quiz problem of my class and I am unable to solve this question in linear algebra. Let $A \in M_{m \times n}(\Bbb{R})$ and let $b_0 \in \Bbb{R}^m$. Suppose the system of equations $Ax = b_0$ has a unique solution. Which of the following statement(s) is/are true? * *$Ax = b$ has a solution for every $b \in \Bbb{R}^m$. *If $Ax = b$ has a solution then it is unique. *$Ax = 0$ has a unique solution. *$A$ has rank $m$. (Multiple answers can be correct) Answer is : 2, 3 I know the theorem that if $A$ is invertible then $Ax=0$ has only trivial solution and $A$ has rank $m$ and then $Ax=b$ has a solution for every $b \in \mathbb{R}^m$. But the problem is, how can I be sure that $A$ is invertible? Kindly help.
Recall that the system $Ax=b$ is solvable iff $b$ is an element of the span of the column vectors of $A$. Now if this solution is unique the column vectors of $A$ are linearly independent. Does this clear up things?
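A concrete numerical illustration (a sketch; the matrix is made up): a tall matrix with independent columns gives unique solutions whenever a solution exists (statements 2 and 3), while statements 1 and 4 fail.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# rank = n = 2 < m = 3, so statement 4 fails
assert np.linalg.matrix_rank(A) == 2

# Ax = 0 has only the trivial solution: the null space is {0}
assert A.shape[1] - np.linalg.matrix_rank(A) == 0

# b = (0,0,1) is not in the column space, so statement 1 fails:
# the least-squares residual is strictly positive.
b = np.array([0.0, 0.0, 1.0])
x, res, *_ = np.linalg.lstsq(A, b, rcond=None)
assert res[0] > 0.5
```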
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the Penrose triangle "impossible"? I remember seeing this shape as a kid in school and at that time it was pretty obvious to me that it was "impossible". Now I looked at it again and I can't see why it is impossible anymore.. Why can't an object like the one represented in the following picture be a subset of $\mathbb{R}^3$?
Start at the bottom left-hand corner, taking othonormal unit vectors $\pmb i$ horizontally, $\pmb j$ inward along the cross-member bottom left-hand edge, and $\pmb k$ upward and perpendicular to $\pmb i$ and $\pmb j$. I'll take the long edge of a member as $5$ times its (unit) width; the exact number doesn't matter. Then, working by vector addition anticlockwise round the visible outer edge to get back to the starting point, we have $$5\pmb i+\pmb k+5\pmb j-\pmb i-5\pmb k-\pmb j=4\pmb i+4\pmb j-4\pmb k=\pmb0,$$which of course is impossible.
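The edge-walk bookkeeping can be checked mechanically (a minimal sketch; `5` is the long-edge length assumed in the answer):

```python
import numpy as np

i = np.array([1, 0, 0])
j = np.array([0, 1, 0])
k = np.array([0, 0, 1])

# Vector sum around the visible outer edge; a closed loop would give zero.
total = 5*i + k + 5*j - i - 5*k - j
assert np.array_equal(total, 4*i + 4*j - 4*k)
assert not np.array_equal(total, np.zeros(3, dtype=int))
```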
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "165", "answer_count": 6, "answer_id": 3 }
System of equations and recurrence relation I am trying to find the general solution for $N$ of the following system of equations $$ \begin{cases} (x_n - x_{n-1})^2 + (y_n - y_{n-1})^2 = \left(\frac{\theta}{N}\right)^2 \\ {x_n}^2 + {y_n}^2 = 1 \end{cases} $$ with the initial values $x_0 = 1$ and $y_0 = 0$ and the following * *$\theta$ is a constant and $0 \leqslant \theta \leqslant 2$ *$N$ is a constant and we want to find the terms $(x_N, y_N)$ Using substitution with respect to $N$, we have $$ \begin{align} x_0 = 1 \quad & ; \quad y_0 = 0 \\ x_1 = -\frac{\theta^2 - 2N^2}{2N^2} \quad & ; \quad y_1 = -\frac{\theta \sqrt{4N^2 - \theta^2}}{2N^2} \\ x_2 = \frac{\theta^4 - 4N^2\theta^2 + 2N^4}{2N^4} \quad & ; \quad y_2 = \frac{(\theta^3 - 2N^2\theta) \sqrt{4N^2 - \theta^2}}{2N^4} \\ x_3 = -\frac{\theta^6 - 6N^2\theta^4 + 9N^4\theta^2 - 2N^6}{2N^6} \quad & ; \quad y_3 = -\frac{(\theta^5 - 4N^2\theta^3 + 3N^4\theta) \sqrt{4N^2 - \theta^2}}{2N^6} \end{align} $$ By using substitution, it becomes very difficult with $N \geqslant 2$.
The second equation expresses that the points $(x_n,y_n)$ remain on the unit circle, and the first, that the successive points form chords of constant length, subtending an angle $\alpha=2\arcsin\frac\theta{2N}$. Hence $$(x_n,y_n)=(\cos n\alpha,\sin n\alpha).$$ By the way, the system has in fact $2^n$ distinct solutions, as from every intermediate point, the chord can be drawn in two directions.
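A numerical sketch of the closed form (the function and parameter names are mine): the points $(\cos n\alpha,\sin n\alpha)$ stay on the unit circle, and successive points are chords of length $\theta/N$.

```python
import math

def points(theta, N):
    alpha = 2 * math.asin(theta / (2 * N))
    return [(math.cos(n * alpha), math.sin(n * alpha)) for n in range(N + 1)]

theta, N = 1.0, 8
pts = points(theta, N)

assert pts[0] == (1.0, 0.0)                                # initial condition
for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
    assert abs(x1 * x1 + y1 * y1 - 1) < 1e-12              # on the unit circle
    assert abs(math.hypot(x1 - x0, y1 - y0) - theta / N) < 1e-12   # chord length
```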
{ "language": "en", "url": "https://math.stackexchange.com/questions/3742927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Understanding Serge Lang's Definition of Homotopy I have been following Serge Lang's Complex Analysis text book and today I came across a chapter on homotopy. I have trouble visualising and honestly, understanding the definition that he has given in his book. Here is the definition from his book Could somebody explain to me how I can visually interpret this? I would also be really grateful if someone had a graphic or visual that would illustrate what is meant in this definition. Any help will be appreciated.
By definition, $\psi(t,c)=\gamma(t)$. Since $\psi$ is continuous, if $c_1$ is slightly bigger than $c$, then $t\mapsto\psi(t,c_1)$ is a path which is close to $\gamma$. And if $c_2$ is slightly bigger than $c_1$, then $t\mapsto\psi(t,c_2)$ is a path which is close to the previous one. And so on, until you reach $d$. So, $\psi$ deforms $\gamma$ into $\eta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
a linear map on $W$ Define $W = \{(a_1, a_2,\cdots) : a_i \in \mathbb{F}, \exists N\in\mathbb{N}, \forall n \geq N, a_n = 0\},$ where $\mathbb{F} = \mathbb{R} $ or $\mathbb{C}$ and $W$ has the standard inner product, which is given by $\langle(a_1,a_2,\cdots), (b_1,b_2,\cdots)\rangle = \sum_{i=1}^\infty a_i \overline{b_i}$ ($\overline{b_i}$ is simply the complex conjugate of $b_i$). Prove that the linear map $T : W \to W$ given by $T(a)_j = \sum_{i=j}^\infty a_i$, where $T(a) = (T(a)_1, T(a)_2,\cdots),$ has no adjoint. I know that to show that $T$ has an adjoint $T^*$, it suffices to show that for all $a,b \in W$, $\langle T(a),b \rangle = \langle a, T^* b\rangle$. So to show that $T$ does not have an adjoint, it suffices to show that there is no linear map $T^*$ so that for all $a,b \in W$, $\langle T(a),b\rangle = \langle a,T^*b\rangle$. For any $a \in W,$ we may find $N\in\mathbb{N}$ such that $i \geq N\Rightarrow a_i = 0.$ Hence $$\langle T(a),b\rangle = \sum_{i=1}^\infty \left(\sum_{j=i}^\infty a_j\right) \overline{b_i}=\sum_{i=1}^{N-1} \left(\sum_{j=i}^{N-1} a_j\right)\overline{b_i}.$$ Also, $$\langle a, T^*b\rangle = \sum_{i=1}^\infty a_i\overline{(T^*b)_i}=\sum_{i=1}^{N-1} a_i\overline{(T^*b)_i}.$$ Hence $$\langle T(a),b\rangle = \langle a,T^*b\rangle \iff \sum_{i=1}^{N-1} \left(\sum_{j=i}^{N-1} a_j \overline{b_i}-a_i \overline{(T^*b)_i}\right) = 0.$$ I know I'm supposed to find a $b \in W$ that'll make it impossible for $\langle a,T^*b\rangle = \langle T(a),b\rangle$ for all $a\in W$, but I'm unsure how to find this.
Let $e_i\in W$ satisfy $(e_i)_j=1$ if $i=j$ and $(e_i)_j=0$ otherwise. Then $$\langle Te_i,e_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ Thus if $T^*$ exists: $$\langle e_i,T^*e_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ So we have $(T^*e_j)_i=1$ for all $i\geq j$ which contradicts $T^*e_j\in W$.
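A finite-dimensional sketch of why the adjoint would have to produce infinitely supported vectors: truncate $T$ to an $n\times n$ matrix (upper-triangular ones), where the adjoint is just the transpose. Then $T^{\mathsf T}e_1$ is the all-ones vector, whose support grows with $n$, mirroring the contradiction $(T^*e_j)_i = 1$ for all $i\ge j$.

```python
import numpy as np

def T_matrix(n):
    # (T a)_j = sum over i >= j of a_i  ->  upper-triangular matrix of ones
    return np.triu(np.ones((n, n)))

n = 6
T = T_matrix(n)
e1 = np.zeros(n)
e1[0] = 1.0

adj_e1 = T.T @ e1                 # finite-dimensional adjoint applied to e_1
assert np.array_equal(adj_e1, np.ones(n))   # full support, for every n
```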
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Equality of the totient function of two multiples of $x$ I'm looking to solve for $x \in \mathbb{N}$ in the equation $\phi(4 x) = \phi(5 x)$. I know the totient function $\phi(y)$ just gives the number of integers less than or equal to $y$ that are coprime to $y$. I tried approaching it like a normal equation and expanding out $\phi(4 x) - \phi(5 x) = 0$ into its prime number decomposition, but I didn't get anywhere. Any ideas? Graphing it, I noticed the equation seems to hold only when $x$ is even, but I can't figure out why it fails at certain even values (like $x=10$, for instance).
We use the fact that the totient function is multiplicative. Let: $$x=2^a5^by$$ where $\gcd(y,10)=1$. Then: $$\phi(4x)=\phi(5x) \implies \phi(2^{a+2}5^by)=\phi(2^a5^{b+1}y)$$ Using the fact that the totient function is multiplicative, we obtain: $$\phi(2^{a+2})\phi(5^b)\phi(y)=\phi(2^a)\phi(5^{b+1})\phi(y)$$ Cancelling $\phi(y)$, we have: $$\phi(2^{a+2})\phi(5^b)=\phi(2^a)\phi(5^{b+1})$$ We know that $\phi(2^{a+2})=2^{a+1}$ and $\phi(5^{b+1})=4\cdot 5^b$. If $b>0$, then $\phi(5^b)=4 \cdot 5^{b-1}$. However, this would be a contradiction as the LHS has one less factor of $5$ than required. Thus, $b=0$. Substituting: $$\phi(2^{a+2})=4\phi(2^a)$$ which holds for all $a \geqslant 1$. Thus, $x=2^ay$ where $a>0$. This means that $x$ can be any even number not divisible by $5$.
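A brute-force check of this characterization (a sketch using SymPy's `totient`; the search bound is arbitrary):

```python
from sympy import totient

def expected(x):
    # the characterization derived above: even and not divisible by 5
    return x % 2 == 0 and x % 5 != 0

for x in range(1, 1001):
    assert (totient(4 * x) == totient(5 * x)) == expected(x)
```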
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Quadratic Forms on a (finite dimensional real) vector space with same zero set are scalar multiples? Let $V$ be a finite-dimensional vector space over $\Bbb{R}$, let $g,h:V \times V \to \Bbb{R}$ be bilinear symmetric functionals, and let $Q_g,Q_h:V \to \Bbb{R}$ be the associated quadratic forms. Suppose that $Q_g, Q_h$ have the same zero set; $Q_g^{-1}(\{0\}) = Q_h^{-1}(\{0\})$. Questions: * *Is it then true that $Q_g$ and $Q_h$ are scalar multiples of each other? *If (1) isn't true as stated, could it perhaps be made true by making certain restrictions on the quadratic forms, such as restricting them to have Lorentz signature (i.e with the notation below, $p=1, n$ arbitrary and $k=0$). This is the case I'm mainly interested in, but I'd of course like to know it in the more general case if it's true. *Can this result (if true) be generalized to arbitrary multilinear, symmetric functionals on $V$ and their associated homogeneous polynomials (as opposed to bilinear functionals and their quadratic forms). If so could you outline such a proof/ provide a reference. My Attempt This reminds me of a similar result in linear algebra, namely that if $\phi,\psi \in V^*$ have the same kernel then they are scalar multiples of each other. So, my attempt at "proving" (1) was to mimic that proof as much as possible. I know that every quadratic form over $\Bbb{R}$ can be "diagonalized" (Sylvester's Law?), in the sense that we can find a basis $\beta$ for $V$ such that the matrix representation of $g$ is of the type \begin{align} [g]_{\beta} &= \begin{pmatrix} I_p & & \\ & -I_n & \\ & & 0_k \end{pmatrix} \end{align} such that $p+n+k = \dim V$. But from here, I'm not sure how to proceed. Any help is appreciated.
Partial answer: As can be seen in the positive definite case, it does not generally hold that two bilinear forms are scalar multiples if they have the same zero set. However, the statement becomes true for the case of $\dim V \leq 3$ if we extend the bilinear form to a $\Bbb C$-bilinear form over $\Bbb C$. Without loss of generality, take $V = \Bbb R^3$. Plugging in $v_1 = (1,x,x^2)$, we see that $$ Q_g(v_1) = g_{11} + 2g_{12} x + (g_{22} + 2g_{13}) x^2 + 2g_{23} x^3 + g_{33}x^4. $$ Plugging in $v_2 = (-1,x,x^2)$ yields $$ Q_g(v_2) = g_{11} - 2g_{12} x + (g_{22} - 2g_{13}) x^2 + 2g_{23} x^3 + g_{33} x^4. $$ If two polynomials over $\Bbb C$ have the same zero-sets, then those polynomials must be multiples. Thus, if $h$ is such that $Q_h(v_1)$ and $Q_h(v_2)$ have the same zeros in $x \in \Bbb C$ as $Q_g(v_1)$ and $Q_g(v_2)$, then it must hold that $h_{ij}$ is a multiple of $g_{ij}$ for all $i,j$, and hence $h$ is a multiple of $g$.
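The coefficient bookkeeping here can be checked symbolically (a sketch; `g` is the symmetric coefficient matrix of the bilinear form in coordinates, and the symbol names are mine):

```python
import sympy as sp

x = sp.symbols('x')
g11, g12, g13, g22, g23, g33 = sp.symbols('g11 g12 g13 g22 g23 g33')
g = sp.Matrix([[g11, g12, g13],
               [g12, g22, g23],
               [g13, g23, g33]])

v1 = sp.Matrix([1, x, x**2])
Q1 = sp.expand((v1.T * g * v1)[0])
# coefficients of x^4, x^3, x^2, x, 1 in Q_g(v_1):
assert sp.Poly(Q1, x).all_coeffs() == [g33, 2*g23, g22 + 2*g13, 2*g12, g11]

v2 = sp.Matrix([-1, x, x**2])
Q2 = sp.expand((v2.T * g * v2)[0])
assert sp.Poly(Q2, x).all_coeffs() == [g33, 2*g23, g22 - 2*g13, -2*g12, g11]
```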
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Ascoli-Arzelà example and equicontinuity My teacher did an example for the Ascoli-Arzelà theorem that I really don't understand, above all how he showed that a sequence of functions is not equicontinuous. Let $f_n:=x^n \quad , \quad x\in [0,1] \quad , \quad f_n\subset C([0,1],\Bbb{R})$ $\forall x, \quad \exists \lim_{n\to +\infty}f_n(x)=f(x) \quad \text{with} \quad f(x):= \begin{cases} 0 & \text{if $0\le x<1$} \\ 1 & \text{if $x=1$} \end{cases}$ $f(x)$ is not continuous $\to$ $f_n$ doesn't converge uniformly to $f$ Now: $f_n(0)=0 \quad \forall n$ And $f_n$ isn't uniformly equicontinuous: $\quad$ $f_n'(x)=(x^n)'=nx^{n-1} \qquad f_n'(1)=n \longrightarrow +\infty$ I really don't understand this last part: Why did he calculate the derivative at $1$? And why can he say that $f_n$ isn't uniformly equicontinuous in this way? And does a correlation exist between derivatives and equicontinuity?
I don't think the behavior of the derivative at the point $x=1$ proves (without much effort) that the sequence is not uniformly equicontinuous. So your teacher's argument is not complete and not a good approach for this. If the sequence were uniformly equicontinuous, then there would exist $\delta >0$ such that $|f_n(x)-f_n(y)| <1-\frac 2 e$ for all $n$ whenever $|x-y| \leq \delta$. In particular $|f_n(1-\frac 1 n)-f_n(1)| <1-\frac 2 e$ for all $n$ sufficiently large. Letting $n\to\infty$ leads to the contradiction $1-\frac 1 e \le 1-\frac 2 e$
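A quick numerical check that these increments do not shrink even though $|x-y|=1/n\to0$ (a sketch; the sample values of $n$ are arbitrary):

```python
import math

# |f_n(1 - 1/n) - f_n(1)| = 1 - (1 - 1/n)**n  ->  1 - 1/e  >  1 - 2/e
gaps = [1 - (1 - 1/n)**n for n in (10, 100, 10**6)]

assert abs(gaps[-1] - (1 - 1/math.e)) < 1e-3
assert all(g > 1 - 2/math.e for g in gaps)
```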
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is $\int_{0}^{\infty} e^{-(a^2x^2+\frac{b^2}{x^2})}dx=\int_{0}^{\infty}e^{-(b^{2}{X}^2+\frac{a^2}{X^2})}dX$ for arbitrary $a,b$ and fixed range? The problem says, If $\int_{0}^{\infty} \mathbb{e^{-(a^2x^2+\frac{b^2}{x^2})dx}=\frac{\sqrt{\pi}}{2a}.e^{-2ab}} \longrightarrow(i)$, then prove that $\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(a^2x^2+\frac{b^2}{x^2})}dx=\frac{\sqrt{\pi}}{2b}.e^{-2ab}}$ If we take $\mathrm{x=\frac{b}{a\mathscr{X}}}$, where $\mathrm{a}, \mathrm{b}$ are any constants, then we have $$\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(a^2x^2+\frac{b^2}{x^2})}dx} \longrightarrow (ii)\\=\mathbb{e^{-2ab}}\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(ax+\frac{b}{x})^2}dx}\\=\mathbb{\frac{-a\ e^{-2ab}}{b}}\mathbb{\int_{\infty}^{0}e^{-(b\mathscr{X}+\frac{a}{\mathscr{X}})^2}d\mathscr{X}}\\=\mathbb{\frac{a}{b}}\mathbb{\int_{0}^{\infty}e^{-(b^{2}\mathscr{X}^2+\frac{a^2}{\mathscr{X}^2})}d\mathscr{X}}\longrightarrow(iii)\\$$ * *Since $\mathrm{a}, \mathrm{b}$ are any constants and conditions $(i) \ \textrm{&} \ (iii)$ look the same and the integration range remains the same, therefore should we use the value of $(i)$ in $(iii)$ ? If we plug in the value of $(i)$ in $(iii)$, the desired result comes. But is this a correct approach to do that? Any help, explanation is valuable and highly appreciated.
You can "plug i into iii" via the "integration-by-substitution" $$ \mathscr{X} = x \\ d \mathscr{X} = dx $$ (i.e., using the chain rule in the simplest possible way); this shows that the integral you're calling "iii" is the same as the integral you're calling "i".
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are coslice categories of preadditive categories preadditive? I do not see any natural way how a coslice category of a preadditive category can be preadditive (other than in some degenerate cases). However, they are given as an example in Popescu's book "Abelian Categories with Applications to Rings and Modules" (1973), page 17: Let $\mathcal{C}$ be a preadditive category and let $X$ be an object of $\mathcal{C}$. We denote by $X/\mathcal{C}$ the category whose objects are couples $(f, Y)$, $Y\in\operatorname{Ob}{\mathcal{C}}$ and $f\in\operatorname{Hom}_{\mathcal{C}}(X, Y)$ and whose morphisms $g\colon (f, Y)\to(f', Y')$ are in fact morphisms $g\colon Y\to Y'$ such that $gf = f'$. $X/\mathcal{C}$ is a preadditive category. Analogously, we have the category $\mathcal{C}/X$. Is this an error in the book, or am I missing something?
You are right and it is easy to come up with examples. In fact, every non-trivial example does it: Suppose that $C$ has a non-initial object $X$. Then there exists an object $Y$ with a non-zero morphism $f'\colon X\to Y$. Moreover, let $f\colon X\to Y$ be the zero morphism (i.e., the unit of the abelian group $\hom(X,Y)$). Then there is no $g\colon Y\to Y$ such that $gf=f'$ since the left-hand side is the zero-morphism and the right hand side isn't. But then $\hom((Y,f),(Y',f'))=\emptyset$ cannot be an abelian group. Thus, the only case that works is if $X$ is initial, but then we have $X/C\cong C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating the $ \lim_{n \to \infty} \prod_{1\leq k \leq n} (1+\frac{k}{n})^{1/k}$ I am really struggling to work out the limit of the following product: $$ \lim_{n \to \infty} \prod_{1\leq k \leq n} \left (1+\frac{k}{n} \right)^{1/k}.$$ So far, I have spent most of my time looking at the log of the above expression. If we set the desired limit equal to $L$, I end up with: $$\log L = \lim_{n\to \infty}\log\left(\frac{n+1}{n} \right)+\frac{1}{2}\log\left(\frac{n+2}{n} \right) +\cdots +\frac{1}{n}\log\left(\frac{n+n}{n} \right),$$ which I can simplify to: $$ \log L = \lim_{n\to \infty} \log(n+1)+\frac{1}{2}\log(n+2)+\cdots \frac{1}{n}\log(2n)-\log(n)\left(1+\frac{1}{2}+\cdots\frac{1}{n}\right). $$ I tried to consider the above expression in a different form with an integral, but was unable to arrive at anything useful. I have been stuck on this for quite awhile now, and would appreciate any insight. Thanks
I thought that it would be instructive to present an approach that does not rely on Riemann Sums, but rather makes use of the Taylor Series of $\log(1+x)$. To that end, we proceed. The function $\log(1+x)$ can be represented by its Taylor Series, $\log(1+x)=\sum_{\ell=1}^\infty\frac{(-1)^{\ell-1}}{\ell}x^\ell$ for $-1<x\le 1$. Using this representation, we can write $$\begin{align} \sum_{k=1}^n\log\left(1+\frac kn\right)^{1/k}&=\sum_{k=1}^n\frac1k\log\left(1+\frac kn\right)\\\\ &=\sum_{k=1}^n\left(\frac1k \sum_{\ell=1}^\infty \frac{(-1)^{\ell-1}}{\ell}\left(\frac kn\right)^\ell\right)\\\\ &=\sum_{\ell=1}^\infty \frac{(-1)^{\ell-1}}{\ell n^\ell}\sum_{k=1}^nk^{\ell-1}\tag1 \end{align}$$ Next, noting that $\displaystyle \sum_{k=1}^n k^{\ell-1}=\frac{n^\ell}{\ell}+O\left(n^{\ell -1}\right)$, we have from $(1)$ that $$\begin{align} \sum_{k=1}^n\log\left(1+\frac kn\right)^{1/k}&=\sum_{\ell=1}^\infty \frac{(-1)^{\ell-1}}{\ell^2 }+O\left(\frac1n\right)\tag2 \end{align}$$ Finally, letting $n\to \infty$ in $(2)$ yields the result $$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to\infty}\sum_{k=1}^n \log\left(1+\frac kn\right)^{1/k}=\sum_{k=1}^\infty \frac{(-1)^{k-1}}{k^2}}\tag3$$ The series on the right-hand side of $(3)$ is equal to $\frac{\pi^2}{12}$ (See This).
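The boxed result can be checked numerically (a Python sketch; the cutoff $n=20000$ is an arbitrary choice, and the error term is the $O(1/n)$ from $(2)$):

```python
import math

# Numerical check: prod_{k=1}^{n} (1 + k/n)^(1/k) should approach
# exp(pi^2 / 12) as n grows, with an O(1/n) error.
def finite_product(n):
    log_sum = sum(math.log(1.0 + k / n) / k for k in range(1, n + 1))
    return math.exp(log_sum)

limit = math.exp(math.pi ** 2 / 12)  # about 2.2764
approx = finite_product(20000)
```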
{ "language": "en", "url": "https://math.stackexchange.com/questions/3743837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Biholomorphism between Riemann Surfaces. Let's consider the homeomorphism $\psi : \mathbb C \rightarrow B_{1}(0), z \rightarrow \frac {z}{|z|+1}$. Let $Z$ be the Riemann Surface with topological space $\mathbb C$ induced by the chart $( \mathbb C,\psi)$. I have to show that $Z$ is biholomorphic to $B_{1}(0)$. I think that this means that I have to find a holomorphic function $g$ such that : $\psi \space \circ \space g \space \circ \space id_{\mathbb{C}}^{-1}$ is holomorphic and $id_{\mathbb{C}} \space \circ \space g^{-1} \space \circ \space \psi^{-1}$ is also holomorphic, but I got stuck finding such a function. Any ideas? Thank you in advance.
One point of importance here is that $g$ need only be holomorphic as a map $B_1(0) \to Z$, not to the usual complex structure on $\mathbb{C}$. For this it suffices to check that $g$ is a diffeomorphism and $\varphi \circ g$ is holomorphic for all coordinate charts $\varphi$ in an atlas for the Riemann surface structure on $Z$. (We only need check holomorphicity one way, which comes from the fact that the inverse of a holomorphic map with nonvanishing derivative is holomorphic). Since $Z$ is given with $\psi$ as the only chart in an atlas, we only need to show there exists a diffeomorphism $g$ with $\psi \circ g$ holomorphic. Such a map is given by the inverse map of $\psi: \mathbb{C} \to B_1(0)$, which one can work out to be $$g(z) = \frac{z}{1-|z|},$$ which is a diffeomorphism $B_1(0) \to \mathbb{C}$ and satisfies $\psi \circ g(z) = z$ for all $z \in B_1(0)$, which is holomorphic. This completes the proof.
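A quick numerical sanity check (Python; not part of the proof) that $g$ inverts $\psi$ and that $\psi$ maps into the open unit disc:

```python
# Sanity check: g(w) = w / (1 - |w|) inverts psi(z) = z / (|z| + 1),
# and psi maps all of C into the open unit disc B_1(0).
def psi(z):
    return z / (abs(z) + 1)

def g(w):
    return w / (1 - abs(w))

samples = [0j, 1 + 2j, -3.5 + 0.25j, 100 - 40j]
checks = [abs(g(psi(z)) - z) for z in samples]   # should all be ~0
radii = [abs(psi(z)) for z in samples]           # should all be < 1
```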
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Divergent integrals $\int_{a}^{\infty}\frac{dx}{f(x)}$ and $\int_{a}^{\infty}\frac{dx}{f(x)+b}$ for positive $f$ and $b$ Assume that $f\colon [a,\infty)\to(0,\infty)$ is continuous ($a\in\mathbb{R}$) and such that $$\int_{a}^{\infty}\frac{dx}{f(x)}=+\infty$$ and let $b>0$. Can we claim that $$\int_{a}^{\infty}\frac{dx}{f(x)+b}=+\infty\quad ?$$ If $f$ is nondecreasing, then it is true. And in general? What else makes this implication true? Proof for nondecreasing $f$: we have $$\frac{1}{f(x)}=\frac{1}{f(x)+b}\left(1+\frac{b}{f(x)}\right)\leq\frac{1}{f(x)+b}\left(1+\frac{b}{f(a)}\right)$$ and we compare the integrals.
This is false in general. First, we may as well take $a=0$. If $f(x)$ is not positive for $[0,a)$, modify the function and consider $f(x-a)$ instead of $f(x)$. Second, note that it is sufficient to consider a positive function that is discontinuous at a countable set $S$ (but otherwise continuous) but whose limits exist as $x$ approaches $m\in S$. That is, if $h(x)$ is positive, discontinuous at $x\in S$ (and only there) with $$\lim_{x\to m^{+}}h(x)=L_1\text{ and }\lim_{x\to m^{-}}h(x)=L_2$$ existing, we can construct $f(x)$ from $h(x)$ such that $f(x)$ is positive, continuous, and $$\int_0^\infty \frac{1}{h(x)}dx<\infty \Leftrightarrow \int_0^\infty \frac{1}{f(x)}dx<\infty $$ To do this, let $g(x)=h(x)^{-1}$ and define $$\beta_m^{+}=\lim_{x\to m^{+}}g(x)\text{ and }\beta_m^{-}=\lim_{x\to m^{-}}g(x)$$ $$\beta_m=\beta_m^{+}-\beta_m^{-}$$ (with $m\in S$). Now, for $m$ in $S$, choose $0<\alpha_m\leq \frac{1}{4}$ small enough such that $$\int_{m-\alpha_m}^{m+\alpha_m}\left[\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}x+g(m+\alpha_m)-(m+\alpha_m)\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}\right]dx$$ $$\leq \frac{1}{n^2}$$ and $$\int_{m-\alpha_m}^{m+\alpha_m}g(x)dx\leq \frac{1}{n^2}$$ (where $n$ is the natural number associated with $m$ in the bijection between $S$ and $\mathbb{N}$). To show that this is indeed possible, first note that $$\lim_{\alpha_m\to 0^{+}}g(m+\alpha_m)=\beta_m^{+}\text{ and }\lim_{\alpha_m\to 0^{+}}g(m-\alpha_m)=\beta_m^{-}$$ This then implies the integral $$\lim_{\alpha_m\to 0^{+}}\int_{m-\alpha_m}^{m+\alpha_m}\left[\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}x+g(m+\alpha_m)-(m+\alpha_m)\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}\right]dx$$ $$\leq\lim_{\alpha_m\to 0^{+}}\left[\alpha_m (g(m+\alpha_m)-g(m-\alpha_m))+\alpha_m \frac{g(m+\alpha_m)+g(m-\alpha_m)}{2}\right]=0$$ For the second integral, note that $$\int_{m-\alpha_m}^{m+\alpha_m}g(x)dx\leq 2\alpha_m \max_{x\in [m-1/4,m+1/4]}g(x)$$ which goes to zero as $\alpha_m$ goes to zero. 
With this, define $$f(x)=\left\{\begin{matrix} h(x) && x\not\in N_{\alpha_m}(m)\\ \left[\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}x+g(m+\alpha_m)-(m+\alpha_m)\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}\right]^{-1} && x\in N_{\alpha_m}(m) \end{matrix}\right.$$ (where $N_{\alpha_m}(m)=(m-\alpha_m,m+\alpha_m)$) for all $m\in S$. Also, for ease of notation let $$T=\bigcup_{m\in S} N_{\alpha_m}(m)$$ First, it is not too hard to show that $f(x)$ is continuous. This is because $f(x)$ is equal to $h(x)$ except for an inverse linear function around the discontinuities. Second, note that $$\infty=\int_T \frac{1}{h(x)}dx+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx=\int_T g(x)dx+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx$$ $$\leq \sum_{n=1}^\infty \frac{1}{n^2}+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx=\frac{\pi^2}{6}+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx$$ This implies $$\int_{\mathbb{R}/T}\frac{1}{h(x)}dx=\infty$$ Then we get $$\int_0^\infty \frac{1}{f(x)}dx=\int_T \frac{1}{f(x)}dx+\int_{\mathbb{R}/T}\frac{1}{f(x)}dx$$ $$=\sum_{m\in S}\int_{m-\alpha_m}^{m+\alpha_m}\left[\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}x+g(m+\alpha_m)-(m+\alpha_m)\frac{g(m+\alpha_m)-g(m-\alpha_m)}{2\alpha_m}\right]dx$$ $$+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx\geq -\sum_{n=1}^\infty\frac{1}{n^2}+\int_{\mathbb{R}/T}\frac{1}{h(x)}dx=\infty$$ We conclude $$\int_0^\infty \frac{1}{f(x)}dx=\infty$$ If, on the other hand, $$\int_0^\infty \frac{1}{h(x)}dx<\infty$$ then in a similar manner we can show that $$\int_0^\infty \frac{1}{f(x)}dx<\infty$$ Having shown that it is sufficient to consider a positive, discontinuous function with directional limits that exist at all discontinuities, define $$g(x)=\left\{\begin{matrix} \frac{1}{n}&& x\in [n,n+1/n^2]\\ \lfloor x \rfloor ^2+1&&\text{otherwise} \end{matrix}\right.$$ It is easy to show that $g(x)$ satisfies all the conditions necessary for the work above to apply. 
Then $$\int_0^\infty \frac{1}{g(x)}dx=\sum_{n=1}^\infty \frac{1}{g(n)}=\sum_{n=1}^\infty \frac{1}{n}+1+\sum_{n=1}^\infty\frac{1}{n^2+1}\left(1-\frac{1}{n^2}\right)=\infty$$ However, for all $b>0$ we have $$\int_0^\infty \frac{1}{g(x)+b}dx=\sum_{n=1}^\infty \frac{1}{g(n)+b}$$ $$=\sum_{n=1}^\infty\frac{n}{1+bn}\cdot \frac{1}{n^2} +\frac{1}{1+b}+\sum_{n=1}^\infty\frac{1}{n^2+1+b}\left(1-\frac{1}{n^2}\right)$$ $$=\sum_{n=1}^\infty\frac{n}{bn^3+n^2} +\frac{1}{1+b}+\sum_{n=1}^\infty\frac{1}{n^2+1+b}\left(1-\frac{1}{n^2}\right)<\infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Evaluating $\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx$ How can I evaluate this integral, maybe by differentiation under the integral sign? I started by expressing the integral as the following, $$\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx=\int _0^1\frac{\ln \left(x^2+x+1\right)}{x}\:dx-\int _0^1\frac{\ln \left(x^2+x+1\right)}{x+1}\:dx\:$$ But I don't know how to keep going. I'll appreciate any solutions or hints.
Solution using harmonic series $$\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx=\int _0^1\frac{\ln \left(x^2+x+1\right)}{x}\:dx-\int _0^1\frac{\ln \left(x^2+x+1\right)}{x+1}\:dx\:$$ $$\int _0^1\frac{\ln \left(x^2+x+1\right)}{x}\:dx=\underbrace{\int _0^1\frac{\ln \left(1-x^3\right)}{x}\:dx}_{x^3\to x}-\int _0^1\frac{\ln \left(1-x\right)}{x}\:dx$$ $$=-\frac23\int _0^1\frac{\ln \left(1-x\right)}{x}\:dx=\frac23\zeta(2)$$ $$\int _0^1\frac{\ln \left(1+x+x^2\right)}{1+x}\:dx\overset{IBP}{=}\ln(2)\ln(3)-\int_0^1\frac{(2x+1)\ln(1+x)}{1+x+x^2}dx$$ For the latter integral, set $a=\frac{2\pi}{3}$ in the identity $$\sum_{n=1}^{\infty}x^{n-1} \cos(na)=\frac{\cos(a)-x}{1-2x\cos(a)+x^2}, \ |x|<1$$ we have $$-2\sum_{n=1}^{\infty}x^{n-1} \cos(n\frac{2\pi}{3})=\frac{2x+1}{1+x+x^2}$$ $$\Longrightarrow \int_0^1\frac{(2x+1)\ln(1+x)}{1+x+x^2}dx=-2\sum_{n=1}^\infty \cos(n\frac{2\pi}{3})\int_0^1 x^{n-1}\ln(1+x)dx$$ $$=-2\sum_{n=1}^\infty \cos(n\frac{2\pi}{3})\left(\frac{H_n-H_{n/2}}{n}\right)$$ $$=-2\Re\sum_{n=1}^\infty \left(e^{i\frac{2\pi}{3}}\right)^n\left(\frac{H_n-H_{n/2}}{n}\right)$$ And finally we use the generating functions $$\sum_{n=1}^\infty x^n\frac{H_n}{n}=\frac12\ln^2(1-x)+\text{Li}_2(x)$$ $$\sum_{n=1}^\infty x^n\frac{H_{n/2}}{n}=i\pi\frac{\ln(1-x^2)-\ln(-x^2)}{x^2}$$ $$+\frac{\ln(x-1)\ln(-x^2)-\ln(x-1)\ln(1-x^2)}{x^2}$$ $$+\frac{\text{Li}_2\left(\frac{1-x}{1+x}\right)-\text{Li}_2\left(\frac{1}{1+x}\right)-\text{Li}_2\left(\frac{1}{1-x}\right)}{x^2}$$ I found the second generating function with help of Mathematica after I converted it to integral; $$\sum_{n=1}^\infty x^n\frac{H_{n/2}}{n}=-\int_0^1\frac{xy^2\ln(1-y^2)}{1-xy}dy$$
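The value $\frac23\zeta(2)=\frac{\pi^2}{9}$ obtained for the first piece can be sanity-checked numerically (a Python midpoint-rule sketch; not part of the derivation):

```python
import math

# Midpoint-rule check of the first integral above:
# int_0^1 ln(1 + x + x^2)/x dx = (2/3) * zeta(2) = pi^2 / 9.
# The integrand extends continuously to x = 0 (it tends to 1), and the
# midpoint rule never evaluates the endpoints anyway.
def midpoint_integral(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

value = midpoint_integral(lambda x: math.log(1 + x + x * x) / x, 0.0, 1.0, 100000)
target = math.pi ** 2 / 9  # about 1.0966
```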
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why is a stochastic matrix a $l^2$ contraction If $P$ is a doubly stochastic matrix i.e. $P=(p_{ij})_{1\leq i,j \leq n}$ is s.t. the row sums $\sum_j p_{ij}=1$ for all $i$ and $\sum_i p_{ij}=1$ for all $j$, then may I know why $$||Px||\leq ||x||$$ for all $x\in \mathbb{R}^n$ where $||\cdot||$ is the $l^2$ norm?
This is related to the more general formulation that if $P\colon L^2\to L^2$ is given by $Pf(x) = \int p(x,y)\,f(y)\,dy$, and \begin{align*} \sup_x\int|p(x,y)|\,dy &\le 1,\\ \sup_y\int|p(x,y)|\,dx &\le 1, \tag{$\ast$} \end{align*} then $\|P\|_{L^2\to L^2}\le 1$. To see this, recall that \begin{align*} \|P\|_{L^2\to L^2} &= \sup |\langle Pf,g\rangle|\\ &= \sup\bigg|\iint p(x,y)\,f(y)\,g(x)\,dy\,dx\bigg| \end{align*} where the sup is taken over all $f,g$ with $\|f\|_{L^2},\|g\|_{L^2}\le 1$. Then, since $|fg| \le \frac12(|f|^2 + |g|^2)$, we have \begin{align*} \|P\|_{L^2\to L^2}&\le \sup\bigg(\frac12\iint |p(x,y)|\,|f(y)|^2\,dy\,dx + \frac12\iint |p(x,y)|\,|g(x)|^2\,dy\,dx\bigg). \end{align*} Now in the first integral, integrate first with respect to $x$ and then with respect to $y$, and in the second integral, integrate first with respect to $y$ and then with respect to $x$, and the conclusion is that \begin{align*} \|P\|_{L^2\to L^2} \le \frac12 + \frac12 = 1. \end{align*} This argument can be easily adapted to the case you are interested in where $Pf$ is given by \begin{align*} (Pf)_x = \sum_y p_{xy}f_y \end{align*} and the doubly stochastic assumption gives us the two inequalities $(\ast)$, where we replace integration by summation in this case.
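A small numerical illustration of the finite-dimensional statement (plain Python; the matrix is a convex combination of two permutation matrices, which is doubly stochastic, and the random test vectors are an arbitrary choice):

```python
import math
import random

# Check ||P x||_2 <= ||x||_2 for a doubly stochastic P built as a convex
# combination of permutation matrices (so both row and column sums are 1).
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
C3 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # cyclic permutation matrix
P = [[0.3 * I3[i][j] + 0.7 * C3[i][j] for j in range(3)] for i in range(3)]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def norm2(x):
    return math.sqrt(sum(t * t for t in x))

random.seed(0)
xs = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(200)]
ok = all(norm2(matvec(P, x)) <= norm2(x) + 1e-12 for x in xs)
```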
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Trace of map & matrices Let $1\leq m,n\in \mathbb{N}$ and let $\mathbb{K}$ be a field. For $a\in M_m(\mathbb{K})$ we consider the map $\mu_a$ that is defined by $$\mu_a: \mathbb{K}^{m\times n}\rightarrow \mathbb{K}^{m\times n}, \ c\mapsto ac$$ I want to show that $trace(\mu_a)=n\cdot trace(a)$. I have done the following: Let $\lambda$ be an eigenvalue of $\mu_a$; then we have that $\mu_a(c)=\lambda c$. From $\mu_a(c)=\lambda c$ we get $ac=\lambda c$. So if $\lambda$ is an eigenvalue of $\mu_a$, there is a non-zero $c\in\mathbb{K}^{m\times n}$ with $\mu_a(c)=\lambda c$. The columns of $c$ are all eigenvectors of $a$ with eigenvalue $\lambda$. The matrix $c$ has $n$ columns. So for each eigenvalue $\lambda$ of $a$ there are $n$ eigenvectors, so the multiplicity of $\lambda$ is $n$. The trace of a matrix is the sum of the eigenvalues considering the multiplicity. Since each eigenvalue of $\mu_a$ has a multiplicity of $n$, it follows that $\text{trace}(\mu_a)=\sum_i n\cdot \lambda_i=n\cdot \sum_i\lambda_i$. Since $\lambda_i$ is an eigenvalue of $a$, it follows that $\text{trace}(a)=\sum_i\lambda_i$. Therefore we get $\text{trace}(\mu_a)=n\cdot \text{trace}(a)$. Is everything correct?
With a matrix representation for $\mu_a$ you may compute the trace somewhat simpler as the sum of its diagonal elements, without having to worry about diagonalising the map. To get such a representation you may use the trick of introducing a $\delta$ (over indices) and rewrite in the following index form (like a disguised variant of the comment of @Omnomnomnom): $$(\mu_a(c))_{ij}= \sum_{k} a_{ik}c_{kj} = \sum_k\sum_l a_{ik}\delta_{jl} c_{kl} = \sum_{k,l} M_{ij,kl} c_{kl}.$$ Here $M_{ij,kl}= a_{ik} \delta_{jl}$ is just a 'normal' $mn \times mn$ matrix and the trace is given by $${\rm tr\;} \mu_a = {\rm tr\;} M = \sum_{ij} M_{ij,ij} = \sum_{i=1}^m \sum_{j=1}^n a_{ii}\delta_{jj} = {\rm tr\; } a \times n.$$
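The diagonal-sum computation can be checked numerically (a Python sketch with a random $a$; the names and dimensions are my own choices):

```python
import random

# Check trace(mu_a) = n * trace(a): in the basis E_{kl} of K^{m x n},
# the (kl, kl) entry of mu_a is (a E_{kl})_{kl} = a_{kk}.
random.seed(1)
m, n = 4, 3
a = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(m)]

def mu_a_entry(k, l):
    # Coefficient of E_{kl} in mu_a(E_{kl}) = a E_{kl}: column l of a E_{kl}
    # is the k-th column of a, so the (k, l) entry is a[k][k].
    E = [[1.0 if (i, j) == (k, l) else 0.0 for j in range(n)] for i in range(m)]
    aE = [[sum(a[i][r] * E[r][j] for r in range(m)) for j in range(n)] for i in range(m)]
    return aE[k][l]

trace_mu = sum(mu_a_entry(k, l) for k in range(m) for l in range(n))
trace_a = sum(a[i][i] for i in range(m))
```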
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finding sign of leading coefficient of a quadratic equation In a given quadratic equation $f(x)=ax^2+bx+c$ if $f(-1)>-4, f(1)<0$ and $f(3)>5$, then how can I find the sign of $a$? Answer in the textbook: $a>0$
If $a=0$, then you have a straight line, and $$f(1)=\frac{f(-1)+f(3)}2>\frac{-4+5}2=\frac 12>0,$$ contradicting $f(1)<0$; so $a\neq 0$. For a parabola with $a>0$ you have $$f(\frac{x+y}2)<\frac{f(x)+f(y)}2$$ and for $a<0$ you have $$f(\frac{x+y}2)>\frac{f(x)+f(y)}2$$ Since $$\frac{f(-1)+f(3)}2>\frac{-4+5}2>0$$ and $$f(1)<0$$ we get $$f(\frac{-1+3}2)=f(1)<\frac{f(-1)+f(3)}2,$$ which is possible only when $a>0$.
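The strict midpoint inequality for $a>0$ used above can be spot-checked numerically (a Python sketch with random coefficients; an illustration, not a proof):

```python
import random

# Spot-check: for f(x) = a x^2 + b x + c with a > 0, the midpoint value
# f((x + y)/2) is strictly below the average (f(x) + f(y))/2 when x != y.
random.seed(2)

def f(a, b, c, x):
    return a * x * x + b * x + c

ok = True
for _ in range(1000):
    a = random.uniform(0.01, 5)   # a > 0
    b = random.uniform(-5, 5)
    c = random.uniform(-5, 5)
    x, y = -1.0, 3.0              # the points used in the answer
    mid = f(a, b, c, (x + y) / 2)
    avg = (f(a, b, c, x) + f(a, b, c, y)) / 2
    ok = ok and (mid < avg)       # avg - mid = a * (x - y)^2 / 4 > 0
```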
{ "language": "en", "url": "https://math.stackexchange.com/questions/3744841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Series with coefficients coming from beta function I recently came across the series $$G(t):=\sum_{n=0}^\infty\frac{t^{n}}{B(n+1,\xi+1)}$$ defined for $t\in(0,1)$ and $\xi+1>0$. I am trying to get a better-looking form for this sum, something more "usable" in general, but I don't know how to handle the beta function. I am pretty sure that this is the power series for $g(t)=\frac{1}{(1-t)^{2+\xi}}$, but I can't prove it. Right now I have only noticed that for $\xi=0$ we get $G(t)=\sum_{n\geq0}(n+1)t^{n}=\frac{1}{(1-t)^2}$, the derivative of the geometric series. Any ideas on how to deal with this sum?
Here is the general non-integer solution (leading to the same solution as @K.dafaoite's integer solution). From the paper by E.Stade The reciprocal of the beta function we see that if $n+\xi+1>0$: $$\frac{1}{B(n+1, \xi+1)}=\frac{n+\xi+1}{2\pi i}\int_{|u|=1}\left(1+\frac{1}{u}\right)^n(1+u)^\xi\frac{du}{u}$$ where the integral is taken counterclockwise in the complex plane. Therefore, $$G(t)=\sum_{n=0}^\infty t^n\cdot\frac{n+\xi+1}{2\pi i}\int_{|u|=1}\left(1+\frac{1}{u}\right)^n(1+u)^\xi\frac{du}{u}$$ Changing order of summation and integration we get $$G(t)=\frac{1}{2\pi i}\int_{|u|=1}\sum_{n=0}^\infty(n+\xi+1)\left(1+\frac{1}{u}\right)^nt^n(1+u)^\xi\frac{du}{u}=$$ $$=\frac{1}{2\pi i}\int_{|u|=1}\sum_{n=0}^\infty(n+1)\left(1+\frac{1}{u}\right)^nt^n(1+u)^\xi\frac{du}{u}+\frac{\xi}{2\pi i}\int_{|u|=1}\sum_{n=0}^\infty\left(1+\frac{1}{u}\right)^nt^n(1+u)^\xi\frac{du}{u}=$$ $$=\frac{1}{2\pi i}\int_{|u|=1}\frac{(1+u)^\xi}{\left(1-\left(1+\frac{1}{u}\right)t\right)^2}\frac{du}{u}+\frac{\xi}{2\pi i}\int_{|u|=1}\frac{(1+u)^\xi}{\left(1-\left(1+\frac{1}{u}\right)t\right)}\frac{du}{u}=$$ which can be slightly simplified, leading to $$G(t) = \frac{1}{(1-t)^2}\frac{1}{2\pi i}\int_{|u|=1}\frac{u(1+u)^\xi}{\left(u-\frac{t}{1-t}\right)^2}du+\frac{\xi}{1-t}\frac{1}{2\pi i}\int_{|u|=1}\frac{(1+u)^\xi}{(u-\frac{t}{1-t})}du.$$ Let's compute the second integral first. Using the residue theorem, $$\frac{1}{2\pi i}\int_{|u|=1}\frac{(1+u)^\xi}{(u-\frac{t}{1-t})}du={\tt{Res}}_{u=t/(1-t)}f(u)=\frac{1}{(1-t)^\xi}.$$ Similarly, the first integral (having a double singularity) leads to $$\frac{1}{2\pi i}\int_{|u|=1}\frac{u(1+u)^\xi}{\left(u-\frac{t}{1-t}\right)^2}du = \left.\frac{d}{du}\right|_{u=t/(1-t)}u(u+1)^\xi=\frac{1+\xi t}{(1-t)^\xi}$$ Combining the integrals leads to: $$G(t) = \frac{1}{(1-t)^2}\frac{1+\xi t}{(1-t)^\xi}+\frac{\xi(1-t)}{(1-t)^2}\frac{1}{(1-t)^\xi}=\frac{1+\xi}{(1-t)^{2+\xi}}.$$
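The closed form can be checked numerically (a Python sketch; `inv_beta` via log-gamma is my own helper, chosen to avoid overflow for large $n$):

```python
import math

# Numeric check of G(t) = (1 + xi) / (1 - t)^(2 + xi): compare a partial sum
# of sum_n t^n / B(n+1, xi+1) with the closed form, computing
# 1/B(n+1, xi+1) = Gamma(n+xi+2) / (Gamma(n+1) Gamma(xi+1)) via lgamma.
def inv_beta(n, xi):
    return math.exp(math.lgamma(n + xi + 2) - math.lgamma(n + 1) - math.lgamma(xi + 1))

def G(t, xi, terms=400):
    return sum(t ** n * inv_beta(n, xi) for n in range(terms))

t, xi = 0.3, 0.5
closed_form = (1 + xi) / (1 - t) ** (2 + xi)
partial = G(t, xi)
```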
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove or disprove that $PQ = P + Q - I$ if $P$ and $Q$ are disjoint permutation matrices whose cycle lengths sum to $n.$ Prove or disprove that if the matrices $P$ and $Q$ represent disjoint permutation cycles in $S_{n}$ with sum of cycle lengths equal to $n,$ then $PQ = P+Q-I$. MY TRY: Let's start by an example. Let $P$ and $Q$ be the matrices corresponding to the respective permutations $p = (1 \, 2)$ and $q = (3 \, 4 \, 5)$ in cycle notation. We have that $$ P = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \text{ and } Q = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}. $$ It seems obvious that the matrix $PQ$ representing the permutation $pq = (1 \, 2)(3 \, 4 \, 5)$ will be $P+Q-I,$ as the "untouched" $1$s in the matrices are simply canceled by $I$ and the touched $1$s create the derangements. But isn't there some clear method to prove it? I am new to group theory. Please ask for clarifications in case of any discrepancies. Any hint will be a great help!
Not an answer but a generalisation of the above:- Let $P_{1}, P_{2},\cdots P_{n}$ ($n\ge 2$) represent disjoint permutation cycles; we'll have $$\prod_{i=1}^{n}P_{i} = \sum_{i=1}^{n}P_{i}-(n-1)I$$ Proof :- We'll prove it by induction. It holds for $n=2$, as proved in the problem. Let it hold for $n = k$ i.e. $$\prod_{i=1}^{k}P_{i} = \sum_{i=1}^{k}P_{i}-(k-1)I$$ we'll prove it also holds for $n=k+1$. Let $P_{k+1}$ represent a permutation cycle disjoint from $P_{1},P_{2}\cdots P_{k}$; then $$\biggl(\prod_{i=1}^{k}P_{i}\biggr)P_{k+1} = P_{1}P_{k+1}+P_{2}P_{k+1}+\cdots+ P_{k}P_{k+1}-(k-1)P_{k+1}.$$ We also have $$P_{1}P_{k+1} = P_{1}+P_{k+1}-I $$ $$P_{2}P_{k+1} = P_{2}+P_{k+1}-I$$ $$ \vdots$$ $$P_{k}P_{k+1} = P_{k}+P_{k+1}-I$$ adding these equations we get $$P_{1}P_{k+1}+P_{2}P_{k+1}+\cdots+ P_{k}P_{k+1} = \sum_{i=1}^{k}P_{i}+kP_{k+1}-kI$$ $$P_{1}P_{k+1}+P_{2}P_{k+1}+\cdots+ P_{k}P_{k+1}-(k-1)P_{k+1} = \sum_{i=1}^{k}P_{i}+kP_{k+1}-kI-(k-1)P_{k+1}$$ $$P_{1}P_{k+1}+P_{2}P_{k+1}+\cdots+ P_{k}P_{k+1}-(k-1)P_{k+1} = \sum_{i=1}^{k+1}P_{i}-kI$$ which gives $$\prod_{i=1}^{k+1}P_{i}=\sum_{i=1}^{k+1}P_{i}-kI$$
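The base identity can be verified directly for the $5\times 5$ example in the question (a plain-Python sketch):

```python
# Check PQ = P + Q - I for the disjoint cycles p = (1 2), q = (3 4 5) in S_5,
# using the permutation matrices from the question.
P = [[0, 1, 0, 0, 0],
     [1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1]]
Q = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 1, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(5)) for j in range(5)] for i in range(5)]

PQ = matmul(P, Q)
rhs = [[P[i][j] + Q[i][j] - (1 if i == j else 0) for j in range(5)] for i in range(5)]
```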
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Trader with 60% chance of gaining 50% and 40% chance of dropping 50%. Average return per day calculation. I am reading Paul Wilmott's Frequently Asked Questions in Quantitative Finance, and there is a question that states the following: Every day a trader either makes 50% with probability 0.6 or loses 50% with probability 0.4. What is the probability the trader will be ahead at the end of a year, 260 trading days? Over what number of days does the trader have the maximum probability of making money? He explains that the trader's best chance to break even is at ~164 days. This is simple: he solves $1.5^n 0.5^{260-n}=1$. But then he says that the trader's average return per day is: $1−e^{0.6 \ln1.5 + 0.4\ln0.5}$ = −3.34% Where does this formula come from? Any idea how to get the formula on the left hand side?
$$e^{0.6\ln 1.5+0.4\ln 0.5}=e^{0.6\ln 1.5}\cdot e^{0.4\ln 0.5}=(1.5)^{0.6}\cdot (0.5)^{0.4}=0.966\ldots\quad(\because e^{n\ln x}=e^{\ln x^n}=x^n)$$ times the money at the beginning of the day becomes the money at the end of the day on average because the probability of getting $1.5$ times the money at the end of the day is $0.6$ and that of getting $0.5$ times the money is $0.4$. So, over a $10$(say)-day period, money gets $(1.5)^{6}\cdot (0.5)^{4}$ times (just a check). So, the average loss per day is $[1-(1.5)^{0.6}\cdot (0.5)^{0.4}]\times 100\%$. Also, it seems to me that $164$ are the days required in the best case to achieve a no-profit-no-loss situation in $260$ days. Hence, $1.5^{n}\cdot0.5^{260-n}=1\Rightarrow n=164$. Note that the best case is far from the $10$-day logic as $(1.5)^{0.6\cdot260}(0.5)^{0.4\cdot260}\approx 0$.
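The two numbers can be reproduced in a few lines (a Python sketch):

```python
import math

# Reproduce the two figures from the answer: the break-even day count n with
# 1.5^n * 0.5^(260 - n) = 1, i.e. n = 260 ln 2 / ln 3, and the average daily
# growth factor 1.5^0.6 * 0.5^0.4.
n_break_even = 260 * math.log(2) / math.log(3)   # about 164.04
daily_factor = 1.5 ** 0.6 * 0.5 ** 0.4           # about 0.9666
avg_daily_loss = 1 - daily_factor                 # about 0.0334, i.e. 3.34 %
```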
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Evaluate $\lim\limits_{x \to 1} \left( \frac{1}{x-1}-\frac{3}{1-x^3} \right)$ Evaluate $$\lim_{x \to 1} \left( \frac{1}{x-1}-\frac{3}{1-x^3} \right)$$ My attempt: $$\lim_{x \to 1} \left( \frac{1}{x-1}-\frac{3}{1-x^3} \right) = \lim_{x \to 1} \frac{x+2}{x^2+x+1}=1$$ According to the answer key, this limit does not exist. I turned that into one fraction, then I factored the polynomial on the numerator as $-(x-1)(x+2)$ and the one on the denominator as $(x+1)(-x^2-x-1)$. What did I do wrong?
$\lim_{x \to 1} \left( \frac{1}{x-1}-\frac{3}{1-x^3} \right)=\lim_{x\to 1} \frac{x^2 + x + 4}{x^3 - 1}=\lim_{x\to 1}\frac{p(x)}{q(x)}$, where $p(x) =x^2+x+4, q(x) =x^3-1$. Note that $p(x) \to 6, q(x) \to 0$ as $x\to 1$. Now let's claim that the limit $\lim_{x\to 1}\frac{p(x)}{q(x)}$ doesn't exist on the set of real numbers $\mathbb R$. On the contrary, suppose that the above limit (i.e. $\lim_{x\to 1}\frac{p(x)}{q(x)}$) exists, and let $\lim_{x\to 1}\frac{p(x)}{q(x)}=L$. Then we have: $p(x) =\frac{p(x)} {q(x)} q(x) \implies \lim_{x\to 1} p(x) = L\times 0=0$, which is a contradiction since $p(x)\to 6$. Hence the limit doesn't exist on the set of real numbers, and our claim is correct. Also, even if we consider the extended set of real numbers, i.e. $\mathbb R\cup \{-\infty, \infty\} $, the left hand limit (LHL) $\lim_{x\to 1^{-}}\frac{p(x)}{q(x)}= - \infty$ and the right hand limit (RHL) $\lim_{x\to 1^{+} }\frac{p(x)}{q(x)}=\infty$ are not equal, and therefore the limit doesn't exist on the extended set of real numbers either.
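A numerical illustration of the one-sided behaviour (a Python sketch; the offset $10^{-6}$ is an arbitrary choice):

```python
# Numeric illustration that the one-sided limits disagree at x = 1:
# h(x) = 1/(x-1) - 3/(1-x^3) = (x^2 + x + 4)/(x^3 - 1) blows up with
# opposite signs on the two sides of 1.
def h(x):
    return 1 / (x - 1) - 3 / (1 - x ** 3)

left = h(1 - 1e-6)    # large negative
right = h(1 + 1e-6)   # large positive
```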
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Mean and extinction probability of a Galton-Watson branching with pmf of offspring produced $P(Q=q) = (q+1)(1-r)^2r^q, 0<r<1$ Initial population is $X_0 = g$, ($g$ being a positive number or $0$) and the probability mass function of the number of offspring $(q)$ produced by an individual is $P(Q=q) = (q+1)(1-r)^2r^q, 0<r<1$. I'm trying to calculate the expected value of $X_n$ and the extinction probability. I'm stuck on both but here's how far I got. Mean: $E[X_n] = E[f(q)]^q(g)$ (I'm using a known formula for this. Let me know if I've used it wrong). Assuming $X_0 =g $ isn't $0$, we will have to calculate: $$E[f(q)] = \Sigma^\infty_{q=1} qP(Q=q) = \Sigma^\infty_{q=1} q(q+1)(1-r)^2r^q$$ Is the upper limit of the sum here correct? Should it be $\infty$, or $g$ as we are starting with $g$ people in the population? Extinction probability $(\pi_0)$: Assuming that my $E[f(q)]>1 \implies \pi_0 = \Sigma^\infty_{q=1} \pi^q_0P(Q=q)$. $\pi^q_0$ being the probability that the population dies out given $X_0 = q$. This gives me: $$\Sigma^\infty_{q=1} \pi^q_0(q+1)(1-r)^2r^q$$ In both these cases I have no idea how to proceed further. This isn't a distribution that I recognize. Is there something I'm missing? Did I do a step wrong? Or is there an easier way to approach this that I am not seeing.
It is straightforward to show that if the offspring distribution has finite mean, that is, $$ \mathbb E[Q] = \sum_{k=0}^\infty k\cdot\mathbb P(Q=k) :=\mu <\infty $$ then the expected population size at time $n$, conditioned on $\{X_0=1\}$ is given by $$ \mathbb E[X_n\mid X_0=1] = \mu^n. $$ If $g$ is a positive integer, then with some additional work we see that $$ \mathbb E[X_n\mid X_0=g] = g\cdot\mu^n. $$ (The intuition is that the process is equivalent to $g$ separate processes each starting with one individual.) We compute the mean of $Q$: $$ \mu = \sum_{k=0}^\infty k\cdot(k+1)(1-r)^2 r^k = \frac{2r}{1-r}, $$ and hence $$ \mathbb E[X_n\mid X_0=g] = g\cdot\left(\frac{2r}{1-r}\right)^n. $$ For the extinction probability, I will only consider the case where $g=1$. Let $$ \tau = \inf\{n>0:X_n=0\}. $$ It is known that $\pi:=\mathbb P(\tau<\infty)=1$ if $\mu\leqslant1$ and is a positive number less than one if $\mu>1$. Since $0<r<1$, it is clear that $$ 0<\frac{2r}{1-r}\leqslant 1 \iff 0<r\leqslant\frac13, $$ and so extinction occurs with probability one if $r\leqslant\frac 13$. If $\frac13<r<1$, then it is well known that $\pi$ satisfies the equation $P(\pi)=\pi$, where $P(\cdot)$ is the probability generating function of $Q$; indeed, $\pi$ is the unique solution to this equation on the interval $(0,1)$. Let $P(s):= \mathbb E[s^Q]$ for $s\in[0,1]$, then $$ P(s) = \sum_{k=0}^\infty (k+1)(1-r)^2 r^ks^k = \left(\frac{1-r}{1-rs}\right)^2. $$ The equation $P(\pi)=\pi$, i.e. $$ \left(\frac{1-r}{1-r\pi}\right)^2 = \pi $$ is a cubic, and so has three solutions: \begin{align} \pi &= \frac{2r-r^2-\sqrt{4 r^3-3 r^4}}{2 r^2}\tag1\\ \pi &= \frac{2r-r^2+\sqrt{4 r^3-3 r^4}}{2 r^2}\tag2\\ \pi &= 1\tag3. \end{align} By inspection, we see that $(1)$ is the correct choice, since it yields numbers between zero and one.
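A numerical check of formula $(1)$ for the sample value $r=\frac12$ (a Python sketch; the choice of $r$ is arbitrary, any $r>\frac13$ would do):

```python
import math

# For r = 0.5 the offspring mean is 2r/(1-r) = 2 > 1, and the root pi from
# formula (1) should satisfy the fixed-point equation P(pi) = pi with
# P(s) = ((1 - r)/(1 - r*s))^2, and lie strictly between 0 and 1.
r = 0.5
mu = 2 * r / (1 - r)
pi = (2 * r - r ** 2 - math.sqrt(4 * r ** 3 - 3 * r ** 4)) / (2 * r ** 2)
P_pi = ((1 - r) / (1 - r * pi)) ** 2
```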
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closure of a subgroup is again a subgroup Let $G$ be a topological group and $H$ a subgroup. Then $\overline{H}$ is again a subgroup. Attempt: Let $x,y \in \overline{H}$. Choose nets $\{x_\alpha\}_{\alpha \in I}$ and $\{y_\beta\}_{\beta \in J}$ with $x_\alpha \to x, y_\beta \to y$ and these nets are in $H$. Then we get a net $\{(x_\alpha, y_\beta)\}_{(\alpha, \beta) \in I \times J}$ where $I \times J$ is ordered in the obvious way such that $(x_\alpha, y_\beta) \to (x,y)$ in $G \times G$. By continuity of multiplication, we obtain $x_\alpha y_\beta \to xy$ and since $x_\alpha y_\beta \in H$ for all indices $\alpha, \beta$, we get $xy\in \overline{H}$. Is this correct?
Seems ok, but I'd use "pre-image of open is open" directly instead of converging nets. Let $x,y\in\overline H$. Suppose $xy\notin\overline H$. By openness of $\overline H^\complement$, there is a neighbourhood $U$ of $(x,y)\in G\times G$ such that $uv\notin \overline H$ for all $(u,v)\in U$. $U$ contains some $V_x\times V_y$ where $V_x,V_y$ are neighbourhoods of $x,y$, respectively. Pick $u\in V_x\cap H$, $v\in V_y\cap H$ and arrive at a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3745908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculating the size of uniform cabinet doors on two walls I know using this board for this problem is overkill, but I'm struggling with something that I know should be simple. I'm building kitchen cabinets on two different walls. I'm going to buy the doors for these cabinets from a manufacturer in bulk, and all cabinet doors I buy will be the exact same width, which I can specify. The width of the doors I'm ordering is what I'm trying to calculate. One wall of the kitchen is 15' long, and one is 9' long. (The walls don't connect) I want the maximum number of cabinet doors possible on each wall. The doors must all be the same width because I'm buying in bulk, but also conform to a minimum width and maximum width that a cabinet door must have to be functional. And so I need an equation that let's me input: * *A base-width (BW) for cabinet doors. *A maximum width (MW) for cabinet doors. *The size of wall A (WA) *The size of wall B (WB) And outputs the width of cabinet doors I should be ordering. So that I can have the maximum amount of doors possible on each wall, while ensuring all doors are the same width, but also within the min/max width range for cabinet doors.
If you measure the width in inches, a width of $w$ will give you $\frac {180}w$ doors along the $15'$ wall and $\frac {108}w$ doors along the $9'$ wall. If the fractions do not come out even, throw away any remainder. To get the most doors you want to choose $w$ as small as possible. This will make the cabinets quite narrow. As you talk of a minimum width, you can just use that. It will give you the most doors.
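A minimal sketch of the computation in Python. The walls are $15\ \text{ft}=180$ inches and $9\ \text{ft}=108$ inches as in the answer; the $12$-inch minimum width is a made-up example value, not from the question:

```python
def doors(wall_inches, width):
    # Same-width doors that fit along a wall; any fractional remainder is thrown away.
    return wall_inches // width

WA, WB = 180, 108          # 15 ft and 9 ft walls, in inches
BW = 12                    # hypothetical minimum door width, in inches
count_a, count_b = doors(WA, BW), doors(WB, BW)
```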
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Assume both r and s are rational then $\frac{r}{s}\in\mathbb{Q}$ Currently done with showing for both r+s and r $\times s$ rational. Now stuck with the following part; If we know that r and s are both rationals what could we say about $\frac{r}{s}$ Assume both r and s are rationals. Then we would have by definition r = $\frac{a}{b}$ where a and b are integers and b $\neq$ 0 Similarly; s = $\frac{c}{d}$ where c and d are integers and d $\neq$ 0 $\frac{r}{s}$ = ($\frac{a}{b}$) /($\frac{c}{d}$) Hence $\frac{r}{s}$ = (a $\times d$)/ (b $\times c$) Since ad and bc are integers, the numerator and denominator are both integers. Integers divided by integers are rational? Is this a valid proof?
Yes, it's a valid proof. The only "error" that may be corrected is that you must assume $s\in\mathbb{Q}\backslash\{0\}$, because if $s=0$ then $r/s$ is undefined and consequently we get that $r/s\notin\mathbb{Q}$. So, as a conclusion for your proof: $$r\in\mathbb{Q}\phantom{a}, \phantom{a}s\in\mathbb{Q}\backslash\{0\} \Longrightarrow \frac{r}{s}\in\mathbb{Q}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Approximating a correlated random walk on a 2D grid I have been working on modeling the movement of ants, and a number of studies suggest the use of correlated random walks. These are biased random walks, where the direction of the next step is dependent on the direction of the previous step. The way this is usually modeled is the agent (ant) can take steps of length $r$, and the direction of next step $\theta$ is chosen relative to the direction of the previous step. This turning angle is drawn from a linear normal distribution of mean $0$. Higher standard deviation results in more tortuous paths, as shown in figures below. I would like to approximate this kind of a random walk on a 2D grid, where the walker can only take $90^\circ$ or $180^\circ$ turns. In the simplest case, this would be something like the following figure. What should be the relation between the standard deviation in the original random walk and the way the probabilities of moving towards each direction are calculated during a step of the random walk on a 2D grid such that the final walks in both the cases resemble each other? Simply assigning a higher probability to the 'front' direction and lower probabilities to the other directions does simulate a walk where the agent prefers moving 'ahead'. However, in my opinion (which might be wrong), these probabilities need to have a dependence on time steps, and that is something I'm struggling with and would like some help with. The figures have been taken from here. Edit: The reference mentioned also states that for standard deviations greater than $5$, we essentially have a random walk with no correlation.
I think one of the easier ways of doing this is calculating the individual probabilities of every angle using the probability density formula for Gaussian distribution. In the grid, the random walker can take steps only in 90$^{\circ}$ and 180$^\circ$. Diagonal steps ($45^\circ$, $135^\circ$, etc) can be added as well. The probability of the random walker going in one of the directions can be calculated from $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left[\left(-\frac12\right)\left(\frac{x-\mu}{\sigma}\right)^2\right]$$ where $x$ is the angle. This is one of the solutions that I see of approximating a correlated random walk on a 2D grid.
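A sketch of that calculation in Python. The turning angles are the four grid moves ($0^\circ$ ahead, $\pm90^\circ$, $180^\circ$ back) and $\sigma=40^\circ$ is an assumed example value; the normalizing constant $1/(\sigma\sqrt{2\pi})$ cancels once the four densities are normalized to sum to one:

```python
import math

def turn_probabilities(sigma, angles=(0.0, 90.0, -90.0, 180.0)):
    # Gaussian density (mean 0) evaluated at each allowed turning angle,
    # then normalized so the four probabilities sum to 1.
    dens = [math.exp(-0.5 * (a / sigma) ** 2) for a in angles]
    total = sum(dens)
    return [d / total for d in dens]

probs = turn_probabilities(40.0)   # [p_forward, p_left, p_right, p_back]
```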
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of Infinitude of Primes by Euler's Product Formula is Circular? Many guides will refer to Euler's product formula as simple way to prove that the number of primes is infinite. $$\sum_n\frac{1}{n} = \prod_p \frac{1}{1-\frac{1}{p}}$$ The argument is that if the primes were finite, the product on the right hand side is finite, noting that $1-\frac{1}{p}$ is never zero. However, the product formula itself is constructed by application of the fundamental theorem of arithmetic to an infinite series with terms involving only primes. Does this mean such proofs are a circular argument - because they use a product formula whose construction depends entirely on the infinitude of primes?
No, the proof is not circular. If we assume there are a finite number of primes $p_1,\ldots,p_k$, then we would assume that any $n\in\mathbb{N}$ would be able to be factorized as $p_1^{\alpha_1}\ldots p_k^{\alpha_k}$. (The proof of the Fundamental Theorem of Arithmetic does not require that there be an infinite number of primes.) This would lead to the Euler product formula, which we would then use to provide the contradiction, once we have shown that $\sum_{n\in\mathbb{N}} \frac1n$ is infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
Geometric explanation for the convergence of $\sum_{n = 1}^{\infty} (1/a)^n$ when $1<a<2$ I'm studying some series and I'm trying to see them in a more geometric way, thinking of integrals or parts of something big (you will know what I mean). But I can't find a geometric way to see why those should converge! Being a bit more precise, what I want is: Let $S = \sum_{n = 1}^{\infty} (1/a)^n$ be a series with $a > 1$. Then, $S < \infty$. If $a = 2$, using the "part of a whole" idea works, because we can think of the circle with radius $1$ and then the sum $1/2 + 1/4 + \cdots + 1/\infty$ is just filling this circle by painting half of it, then taking the rest and painting half of it, doing this recursively. Then, if $a > 2$, obviously the series will converge because $(1/a)^n \leq (1/2)^n$ and so the sum will never get to fill the circle, hence it must be finite. Now, the problem lies in $2 > a > 1$. Is there a geometric argument that is somehow intuitive for why this should converge?
Consider the following continuous function: $$f(s) = \frac{1}{a^s}$$ Compare this to the left-continuous step function: $$g(s) = \frac{1}{a^{\lceil s \rceil}}$$ Note that $$\int_0^n g(s) ds \equiv \sum_{i=1}^{n} \frac{1}{a^i} \quad \forall n \in \mathbb{N}$$ Also note that $\lceil s \rceil \geq s \implies f(s) \geq g(s)\quad\forall s \in \mathbb{R}$. Therefore, we know $$\int_0^n f(s) ds \geq \int_0^n g(s) ds \quad \forall n\geq0$$ Geometrically, we can see that the area under the curve for $f(s)$ includes the area under the curve $g(s)$ as a subregion. We can evaluate the improper integral of $f(s)$ to get a closed-form result: $$\lim_{b\to\infty}\int_0^b \frac{1}{a^s} ds =\lim_{b\to\infty}\frac{1}{\ln(a)}\left[ 1- a^{-b}\right] = \frac{1}{\ln(a)} \implies \lim_{b\to\infty}\int_0^b g(s) ds < \infty $$ $$\implies \sum_{n=1}^{\infty} \frac{1}{a^n} < \infty$$
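A numerical check of the comparison (a Python sketch; $a=1.3$ is an arbitrary value in $(1,2)$, not from the question): the partial sums stay below the improper-integral bound $1/\ln a$ and approach the exact geometric sum $1/(a-1)$.

```python
import math

a = 1.3                         # some a with 1 < a < 2
bound = 1 / math.log(a)         # area under f(s) = a**(-s) on [0, infinity)
partial = sum(a ** -k for k in range(1, 10001))
exact = 1 / (a - 1)             # closed form of the geometric series
```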
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$\int_0 ^ \frac{\pi}{2}[\sin 2x (1+\cos 3x) ]dx$ . Here $[t]$ denotes the greatest integer less than or equal to $t$. $\int_0 ^ \frac{\pi}{2}[\sin 2x (1+\cos 3x) ]dx$ . Here $[t]$ denotes the greatest integer function. Can anyone give me a hint?
$$y=\sin(2x)(1+\cos(3x)),\qquad x\in[0,\pi/2]$$ Note that $0\leq 2x\leq \pi$ gives $0\leq \sin(2x)\leq 1$, and $1+\cos(3x)\geq0$. Now find the max and min of $y=\sin(2x)(1+\cos(3x))$ on $x\in[0,\pi/2]$: it turns out that $0\leq y<1$, so $[y]=0$ and therefore $\int_0^{\pi/2}[y]\,dx = 0$. https://www.desmos.com/calculator/inubfmxu8p
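A quick numerical check (Python sketch, not part of the original answer) that $0\le y<1$ holds on $[0,\pi/2]$, by sampling the function on a fine grid:

```python
import math

def y(x):
    return math.sin(2*x) * (1 + math.cos(3*x))

N = 100000
vals = [y(k * (math.pi / 2) / N) for k in range(N + 1)]
ymax, ymin = max(vals), min(vals)   # max is roughly 0.978, min is 0
```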
{ "language": "en", "url": "https://math.stackexchange.com/questions/3746875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does $\partial A$ determine $A$? Given a bounded closed set $A$ in $\mathbb R^n$, can $A$ be uniquely determined by $\partial A$, except for the boundary itself? Or, use it differently, given two bounded closed sets $A_1, A_2$ in $\mathbb R^n$ with $\partial A_1 = \partial A_2$, $A_1 \ne\partial A_1$, and $A_2 \ne\partial A_2$, is it true that $A_1 = A_2$? Without the boundedness assumption the assertion is clearly false: the sphere $\{ x \in \mathbb R^n : |x|=1\}$ is the common boundary to $\{ x \in \mathbb R^n : |x| \ge 1\}$ and $\{ x \in \mathbb R^n : |x| \le 1\}$. note) It was originally intended for the Euclidean compact(bounded and closed) set, but it was incorrectly modified as an open set. I am sorry.
I think the other examples given so far are not correct. I think this works: Take two disks in $\Bbb{R}^2$, and let one set be the union of one disk and the circle around the other, and let the other set be the union of the other disk with the circle around the former. Then they are closed, bounded, different than their boundaries, but have the same boundary while being different than one another. I'll write a formula if this isn't clear. EDIT. In fact, here's a much simpler example, in $\Bbb{R}$. Let $A_1=\{0,1\}\cup [2,3]$ and $A_2=[0,1]\cup \{2,3\}$. EDIT 2. Like feynhat said, if you want a connected example, take my first example (in $\Bbb{R}^2$) and set $A_1=\{(x,y):x^2+y^2\leq 1\}\cup\{(x,y):((x-2)^2+y^2= 1\}$, $A_2=\{(x,y):x^2+y^2= 1\}\cup\{(x,y):((x-2)^2+y^2\leq1\}$, or something like this. The idea is to take two tangent disks. Perhaps someone can draw a figure and add it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Prove: $\tan{\frac{x}{2}}\sec{x}= \tan{x} - \tan{\frac{x}{2}}$ I was solving a question which required the above identity to proceed but I never found its proof anywhere. I tried to prove it but got stuck after a while. I reached till here: To Prove: $$\tan{\frac{x}{2}}\sec{x}= \tan{x} - \tan{\frac{x}{2}}$$ But I don't know what to do next. Any help is appreciated Thanks
In terms of $t=\tan\tfrac{x}{2}$, the LHS is $t\cdot\frac{1+t^2}{1-t^2}$, while the RHS is $\frac{2t}{1-t^2}-t=t\cdot\frac{2-(1-t^2)}{1-t^2}=t\cdot\frac{1+t^2}{1-t^2}$.
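The identity can also be spot-checked numerically at a few points where both sides are defined (a Python sketch):

```python
import math

def lhs(x):
    return math.tan(x / 2) / math.cos(x)      # tan(x/2) * sec(x)

def rhs(x):
    return math.tan(x) - math.tan(x / 2)

max_err = max(abs(lhs(x) - rhs(x)) for x in (0.3, 0.7, 1.1, -0.5))
```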
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Does remainder of 2^-1 divide by 7 exists? (mod 7) Decimal? I am confused as to how decimal plays a part compared to the multiplicative inverse. For example, I know that $2^2\equiv 2^5\equiv 4\bmod 7$ (the pattern is 1,2,4, for every power of 3) This then implies that $2^{-1}\equiv 4 \bmod 7 .$ However $2^{-1}$ is a decimal, and the definition of the divides I know is specific for integers. When searched online, it says the remainders for decimals do not exist. So does $2^{-1}$ divide by 7 exists? Is it different from $2^{-1}$ mod 7?
$2^{-1}\equiv 4 \bmod 7$ makes perfect sense because $2 \cdot 4 \equiv 1 \bmod 7$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Graph proving a cycle inequality I was solving some old exams from my university and I've stumbled across this one which I didn't know how to think through, it says: Given a graph $H$. Let $u(H)$ be the number of vertices of $H$ of degree 1. Let $C$ be a cycle in $H$. Show that $l(C) \le v(H) - u(H)$
HINT: Can a vertex of degree $1$ belong to $C$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $ \frac{5}{64}((161+72\sqrt{5})^{-n}+(161+72\sqrt{5})^{n}-2)$ always a perfect square? I'm working on a puzzle, and the solution requires me somehow establishing that $$ f(n):=\frac{5}{64}\Big(\big(161+72\sqrt{5}\big)^{-n}+\big(161+72\sqrt{5}\big)^{n}-2\Big)$$ is a perfect square for $n\in \mathbb{Z}_{\geq 0}$. I've done a lot of simplification to get to this point, and am stuck here. I can provide the context of the puzzle if necessary, but it's pretty far removed from what I have here. The goal is basically to show that a formula generates solutions to a given equation. Any tips on how to proceed? Here's the first few values: $$\begin{array}{|c|c|} \hline n&\text{value}\\ \hline 0&0\\ \hline 1& 5^2 \\ \hline 2&90^2 \\ \hline 3& 1615^2\\ \hline 4& 28980^2\\ \hline \end{array}$$
It's $$\frac{5}{64}\left((9+4\sqrt5)^{2n}+(9+4\sqrt5)^{-2n}-2\right)=\frac{5}{64}\left((9+4\sqrt5)^n-(9-4\sqrt5)^n\right)^2.$$ Can you end it now?
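The closed form can be checked with exact integer arithmetic (a Python sketch, not part of the original answer): writing $(9+4\sqrt5)^n = a_n + b_n\sqrt5$ with integers $a_n,b_n$, the difference $(9+4\sqrt5)^n-(9-4\sqrt5)^n$ equals $2b_n\sqrt5$, so the whole expression collapses to $\left(\tfrac{5b_n}{4}\right)^2$, matching the table in the question.

```python
def power_pair(n):
    # (9 + 4*sqrt(5))**n = a + b*sqrt(5), tracked as the integer pair (a, b).
    a, b = 1, 0
    for _ in range(n):
        a, b = 9*a + 20*b, 4*a + 9*b
    return a, b

def f_sqrt(n):
    # Square root of f(n) = (5/64)*((9+4r5)^n - (9-4r5)^n)^2, i.e. 5*b/4.
    a, b = power_pair(n)
    assert (5 * b) % 4 == 0
    return 5 * b // 4

roots = [f_sqrt(n) for n in range(5)]
```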
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Why rotations with two angles in $3D$ do not form a group? Let us use any parametrization of $3D$ rotations with three angles (e.g. Euler angles or yaw-pitch-roll), and throw away one of the angles (just assign it a fixed value). Will the remaining set of transformation form a group? If yes — which? If no — why? Follow up: the same question on more general Lie groups. What typically happens if we fix some of the parameters? In which cases does this result in getting a new group? If it doesn't — why? CLARIFICATION Just in case ⁠— I'm NOT asking why the new set of transformations is not $SO(3)$ anymore, that's pretty obvious. The question is: which group axioms are no longer satisfied? We clearly have a neutral element, and for each transformation there is an inverse. So what's wrong then?
Let us consider some 90 degree rotations. We have two rotation matrices that generate our group. $P=\pmatrix{&1&\\-1&&\\&&1}$ and $R=\pmatrix{1&&\\&&1\\&-1&}$ Traditionally our third rotation matrix $Y=\pmatrix{&&1\\&1&\\-1&&}$ has been left out. But $PRP^{-1} = Y$, so two rotations will generate the 3rd rotation, and hence $SO_3$.
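The conjugation $PRP^{-1}=Y$ can be verified directly with plain nested lists (a Python sketch; since $P$ is a rotation matrix, its inverse is its transpose):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
R = [[1, 0, 0], [0, 0, 1], [0, -1, 0]]
Y = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]
P_inv = [list(row) for row in zip(*P)]   # orthogonal matrix: inverse = transpose
conj = matmul(matmul(P, R), P_inv)       # P R P^{-1}
```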
{ "language": "en", "url": "https://math.stackexchange.com/questions/3747886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does $\frac{a}{b}<0$ imply $ab<0$? I'm not sure if this was asked before, but my question is: why does $\frac{a}{b}<0$ imply $ab<0$? How do you prove it both intuitively and rigorously(using math)? I think I understand it intuitively: it's becuase for $\frac{a}{b}$ to be negative, exactly one of $a$ or $b$ has to be negative. For $ab$ to be negative, exactly one of $a$ or $b$ has to be negative. That means that these two imply each other. But how would I prove this rigorously? If I multiply both sides of $\frac{a}{b}<0$ by $b$, first of all, I don't know whether $b$ is positive or negative so I don't know which way the inequality sign is facing, and second, even if we did know that it flipped or didn't flip, we would only get $a<0$ or if the sign didn't flip $a>0$. Do I split it into cases then(case 1: $b<0$ and case 2: $b>0$)? It seems like it would work but there might be a more slicker way of proving it?
If you know one, you can obtain the other by multiplying/dividing by $b^2$, which is possible since that will always be a positive quantity. Hence, it suffices to show that any one of these statements is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
A kind of Cauchy's Condensation Test? Let $f:[0,\infty) \to \mathbb{R}$ be a positive , monotone decreasing function and $ f(x) \to 0$ as $x\to \infty$ Let $\{a_n\}$ be a bounded sequence such that $a_n \ge 0$ and $\displaystyle\sum _{n=1}^{\infty} a_n=\infty$ . Then prove that $\sum_{n\ge 1} f(n)$ and $\sum_{n\ge 1}a_nf(a_1+a_2+\cdots+a_n)$ converge or diverge together. My attempt The zero terms of $\{a_n\}$ contribute nothing to the convergence/divergence of $$\sum_{n=1}^{\infty}a_n$$ and $$\sum_{n\ge 1}a_nf(a_1+a_2+\cdots+a_n).$$ So without loss of generality , let us assume that $a_n \gt 0, \forall n$. Since $a_n$ is bounded, we must have $m_*\le a_n \le M^*$ , for some $m_*, M^*\gt 0$ So, we have $$a_1+a_2+\cdots+a_n \ge nm_*$$ If $m_* \ge 1$, then $$a_1+a_2+\cdots+a_n \ge n$$ $\Rightarrow f(a_1+a_2+\cdots+a_n) \le f(n)$, since $f(x)$ is monotone decreasing function. So by Comparision test if $\sum f(n)$ converges, then $\sum_{n\ge 1}f(\sum _{i=1}^{n}a_i)$ converges and so does $\sum_{n\ge 1}a_nf(\sum _{i=1}^{n}a_i)$ since $$a_nf\left(\sum _{i=1}^{n}a_i\right) \le M^*f\left(\sum _{i=1}^{n}a_i\right).$$ The case for $m_* \ge 1$ ends here. Let $M^* \le 1.$ Then $a_1+a_2+\cdots+a_n \le nM^*\le n$ $$\Rightarrow f\left(\sum _{i=1}^{n}a_i\right)\ge f(n)$$ Then by Comparision Test, if $\sum_{n\ge 1}f(n)$ diverges , then $\sum_{n\ge 1}f(\sum _{i=1}^{n}a_i)$ diverges and so does $\sum_{n\ge 1}a_nf(\sum _{i=1}^{n}a_i)$ since $$a_nf\left(\sum _{i=1}^{n}a_i\right) \ge m_*f\left(\sum _{i=1}^{n}a_i\right).$$ The case for $M^* \le 1$ ends here. I do not know how to approach for $m_* \le 1 \le M^*$ and moreover I am familiar with the missing reverse implications in the above cases. Do you have any ideas? Thanks for your time. As a sidenote, this is a previously asked question on MSE but the question was voluntarily removed by the OP for some unknown reasons (at least unknown to me). I would really like the question to be answered.
PARTIAL answer: $(a_n)$ is bounded so there exists an $M > 0$ such that for all $n$ we have $0 \leq a_n \leq M$. Moreover, as you suggested, we can assume without loss of generality that $a_n > 0$ for all $n$. Let $N_k = \min \{ n : a_1 + \dots + a_n \geq k\}$. $N_k$ is well defined and finite for all $k$ because $\sum_n a_n = \infty$. Now since all terms are non-negative we can write $$ \sum_{n \geq 1} a_n f(a_1 + \dots + a_n) = \sum_{k\geq 0} \sum_{n=N_k}^{N_{k+1}-1} a_n \underbrace{f(a_1 + \dots + a_n)}_{\leq f(k)} \leq \sum_{k \geq 0} \underbrace{\left(\sum_{n=N_k}^{N_{k+1}-1} a_n\right)} _{\leq a_{N_k} + 1 \leq M + 1} f(k) \leq (M + 1) \sum_{k \geq 0} f(k) $$ where the sum in parenthesis in the center may be empty and thus equal to zero. This proves that the divergence of $\sum_n a_n f(a_1 + \dots + a_n)$ implies the divergence of $\sum_n f(n)$. Now we write similarly $$ \begin{aligned} \sum_{n \geq 1} a_n f(a_1 + \dots + a_n) &= \sum_{k \geq 0} \sum_{n=N_k}^{N_{k+1}-1} a_n \underbrace{f(a_1 + \dots + a_n)}_{\geq f(k+1)} \\ &\geq \sum_{k \geq 0} \left(\sum_{n=N_k}^{N_{k+1}-1} a_n\right) f(k+1) \\ &= \sum_{k \geq 1} \underbrace{\left(\sum_{n=N_{k-1}}^{N_k-1} a_n\right)}_{b_k} f(k) = \sum_{k \geq 1} b_k f(k). \end{aligned} $$ Notice that $ 0 \leq b_k \leq M + 1$ for all $k$, and that $\sum_k b_k = \sum_n a_n = \infty$. In fact, we even have that for all $K$, $$ \max_{\substack{0 \leq k < K \\ N_k < N_K}} k \ \leq \sum_{k=1}^K b_k = \sum_{n=1}^{N_K-1} a_n \leq K $$ The idea here is that since $f$ is decreasing, you can always 'shift' the weights towards bigger values of $k$ to get a lower bound of $\sum_k b_k f(k)$. If you get a lower bound of the form $ C \sum_{k \geq N} f(k)$ with $N$ an integer and $C$ a constant you are done because this means that the divergence of $\sum_n f(n)$ implies the divergence of $\sum_n a_n f(a_1 + \dots + a_n)$. I believe that finding such a lower bound is doable. There might be a nicer solution though... 
Hope this helps a bit even though it is not a full solution!
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A question regarding the proof of $\gcd(a^m-1, a^n-1) = a^{\gcd(m,n)}-1$ I have a problem trying to understand the proof: Theorem $\boldsymbol{1.1.5.}$ For natural numbers $a,m,n$, $\gcd\left(a^m-1,a^n-1\right)=a^{\gcd(m,n)}-1$ Outline. Note that by the Euclidean Algorithm, we have $$ \begin{align} \gcd\left(a^m-1,a^n-1\right) &=\gcd\left(a^m-1-a^{m-n}\left(a^n-1\right),a^n-1\right)\\ &=\gcd\left(a^{m-n}-1,a^n-1\right) \end{align} $$ original image We can continue to reduce the exponents using the Euclidean Algorithm, until we ultimately have $\gcd\left(a^m-1,a^n-1\right)=a^{\gcd(m,n)}-1$. $\square$ I see that the next iteration (if possible) is $\gcd(a^{m-2n}, a^n-1)$. However, why is the conclusion is true by euclidean algorithm?
Suppose $\gcd(m, n)= k$, with $m=m_1 k$ and $n=n_1 k$. Then we have: $a^m-1=(a^k)^{m_1}-1=(a^k-1)\left[(a^k)^{m_1-1}+(a^k)^{m_1-2}+ \cdots +a^k+1\right]$ $a^n-1=(a^k)^{n_1}-1=(a^k-1)\left[(a^k)^{n_1-1}+(a^k)^{n_1-2}+ \cdots +a^k+1\right]$ So $a^k-1$ is a common divisor of $a^m-1$ and $a^n-1$; combined with the Euclidean reduction in the question, which shows the gcd divides $a^{\gcd(m,n)}-1$, this gives: $$\gcd(a^m-1, a^n-1)=a^k-1=a^{\gcd(m, n)}-1$$ Notice that the factor $(a^k-1)$ appears in both expansions regardless of whether $m_1$ and $n_1$ are odd or even.
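The identity is easy to confirm on small cases with exact arithmetic (a Python sketch, not part of the original answer):

```python
from math import gcd

# Check gcd(a^m - 1, a^n - 1) == a^gcd(m, n) - 1 over a small grid.
identity_holds = all(
    gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1
    for a in range(2, 6) for m in range(1, 8) for n in range(1, 8)
)
```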
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Without the ZF axiom of regularity can any infinite sets be constructed? Update with Direct Question Based on Asaf's comments, here is a related question: Prove that the mapping $n \mapsto n \cup \{n\}$ on the set $\Bbb N$ is injective without the axiom of foundation. The wikipedia $\text{ZF}$ article under axiom 7 contains the text (It must be established, however, that these members are all different, because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) ORIGINAL QUESTION Without the axiom of foundation (axiom 2 in the wikipedia $\text{ZF}$ article) can any infinite sets be constructed? By an infinite set we mean a set that is not Kuratowski finite. I suspect that without it, the axiom of infinity (axiom 7) might be better described as $\quad$ The formula of finitary frustration. My work I saw the axiom of foundation mentioned inside parentheses in the paragraph for axiom 7 allowing us to construct the natural numbers. So apparently, the familiar program of constructing the natural numbers $\Bbb N$ can't be carried out without axiom 2.
It is true, if $x=\{y\}$ and $y=\{x\}$ and $x\neq y$, then $x\cup\{x\}=\{x,y\}=y\cup\{y\}$. So without assuming the axiom of regularity the map $x\mapsto x\cup\{x\}$ is not provably injective. But just because it is not necessarily injective on the entire universe does not mean that we cannot find a set on which it is injective. Setting $\omega$ to be the intersection of all the inductive sets, we can now prove that $x\mapsto x\cup\{x\}$ is in fact injective on $\omega$. The reason is that $\omega$ is in fact well-founded, so in particular the above situation does not happen inside $\omega$, if it happens at all. The quickest, dirtiest, hackiest way to see this is to simply note that $\omega$ belongs to the von Neumann universe which is an inner model of the universe (working in $\sf ZF-Reg$). But this can be shown by hand, through what is tantamount to an ad-hoc proof that $\omega$ is well-founded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the relation of the Infinite power series with these fraction series? Take this infinite series: $$S = 1 + \sum_{n=1}^\infty\prod_{i=1}^n\frac{2i+1}{4i} = 1 + \frac{3}{4} + \frac{3\times5}{4\times8} + \frac{3\times5\times7}{4\times8\times12} + ....$$ We want to find the sum of this series. I didn't know how to solve this. But when I went to look at the solution, they compared this series with the infinite power series $$P = (1+x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + ......$$ for some real $n$ and $x$. Equating the corresponding terms ($nx = \frac{3}{4}$ and $\frac{n(n-1)}{2}x^2 = \frac{3\times5}{4\times8}$) they found $n=-\frac{3}{2}$ and $x=-\frac{1}{2}$. So that the sum is simply $2^\frac{3}{2}$. And when I checked the infinite power series $P$ plugging these values of $x$ and $n$, it really turn out to be the series $S$. Now, I don't understand why this comparison works. Let's generalise this thing. Say, $S$ is given by $$S = 1 + \sum_{n=1}^\infty\prod_{i=1}^n\frac{ai+b}{di}$$ for some positive integers $a, b$ and $d$ with $b < a$, and it is guaranteed (given) that the series converges. Let $P$ be the same as above. Now, can anyone say if $S$, as defined just now, can always be compared with the series $P$, that is, are there always some real $n$ and $x$ such that $S = P?$
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} S & \equiv 1 + \sum_{n = 1}^{\infty} \prod_{i = 1}^{n}{2i + 1 \over 4i} = 1 + \sum_{n = 1}^{\infty}{2^{n} \over 4^{n}} {\prod_{i = 1}^{n}\pars{i + 1/2} \over n!} = 1 + \sum_{n = 1}^{\infty}{1 \over 2^{n}\, n!}\, \pars{3 \over 2}^{\overline{\large n}} \\[5mm] & = 1 + \sum_{n = 1}^{\infty}{1 \over 2^{n}\, n!}\, {\Gamma\pars{3/2 + n} \over\Gamma\pars{3/2}} = 1 + \sum_{n = 1}^{\infty}{1 \over 2^{n}}\, {\pars{n + 1/2}! \over n!\pars{1/2}!} \\[5mm] & = 1 + \sum_{n = 1}^{\infty}{1 \over 2^{n}}\,{n + 1/2 \choose n} = 1 + \sum_{n = 1}^{\infty}{1 \over 2^{n}} \bracks{{-3/2 \choose n}\pars{-1}^{n}} \\[5mm] & = 1 + \sum_{n = 1}^{\infty} {-3/2 \choose n}\pars{-\,{1 \over 2}}^{n} = 1 + \braces{\bracks{1 + \pars{-\,{1 \over 2}}}^{-3/2} - 1} \\[5mm] & = \bbox[15px,#ffd,border:1px solid navy]{\large 2\root{2}}\ \approx\ 2.8284 \end{align}
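A quick numerical check of the value (Python sketch, not part of the original answer): the partial sums of $1+\sum_{n\ge1}\prod_{i=1}^n\frac{2i+1}{4i}$ settle on $2\sqrt2\approx2.8284$.

```python
term, total = 1.0, 1.0
for n in range(1, 201):
    term *= (2*n + 1) / (4*n)   # running product (2i+1)/(4i)
    total += term

target = 2 * 2 ** 0.5           # 2 * sqrt(2)
```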
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $s(z)=\frac{1}{1+e^{-z}}$ is always increasing I want to prove that $s(z)=\frac{1}{1+e^{-z}}$ is always increasing. I know from previous work that taking the derivative and proving that it is always greater than $0$ is one way to prove $s(z)$ is always increasing. How do I go about proving that the derivative $s'(z)=\frac{e^z}{(e^z+1)^2}$ is positive? I know from plotting it graphically that I can see it is always positive, and from observing its limits I know the derivative approaches $0$ as $z\rightarrow -\infty,$ and $\infty$ as $z\rightarrow \infty$. Is there a more rigorous proof I can use to show that $s'(z) > 0$ for all values of $z?$
$e^z$ is increasing, $e^{-z}$ is decreasing, $e^{-z}+1$ is decreasing and positive, $\dfrac1{e^{-z}+1}$ is increasing. $$z_0<z_1\implies-z_0>-z_1\implies e^{-z_0}>e^{-z_1}\implies e^{-z_0}+1>e^{-z_1}+1>0 \\\implies\frac1{e^{-z_0}+1}<\frac1{e^{-z_1}+1}.$$
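A numerical illustration of the monotonicity on a sampled grid (a Python sketch; it does not replace the proof above):

```python
import math

def s(z):
    return 1 / (1 + math.exp(-z))

zs = [-10 + k * 0.01 for k in range(2001)]   # grid on [-10, 10]
vals = [s(z) for z in zs]
increasing = all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```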
{ "language": "en", "url": "https://math.stackexchange.com/questions/3748938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the 2001th and 2003th derivatives of $f(x)= \frac{x^5}{1+x^6}$ Let $ f: \mathbb{R} \rightarrow \mathbb{R} $ be defined by $f(x)= \dfrac{x^5}{1+x^6}$. I want to find the 2001th and 2003th derivatives of $f$ at the point $x=0$. I tried to use the chain rule but I do not see the pattern and I do not know what theorem to use.
Note that $\frac{x^5}{1+x^6}=x^5 \sum_{k=0}^\infty (-1)^k x^{6k}=\sum_{k=0}^\infty (-1)^k x^{6k+5}$ by using the geometric series expansion of $\frac1{1+y}$ for $y=x^6$. Then try to write 2001 and 2003 in the form $6k+r$ with $0\leq r<6$.
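The Taylor coefficients can be generated exactly from the relation $(1+x^6)f(x)=x^5$, which gives $c_5=1$, $c_k=-c_{k-6}$ for $k\ge6$, and $c_k=0$ otherwise; then $f^{(n)}(0)=c_n\cdot n!$ (a Python sketch):

```python
from math import factorial

# Taylor coefficients of f(x) = x^5 / (1 + x^6) via (1 + x^6) f(x) = x^5.
N = 2010
c = [0] * (N + 1)
c[5] = 1
for k in range(6, N + 1):
    c[k] = -c[k - 6]

d2001 = c[2001] * factorial(2001)   # 2001 = 6*333 + 3, so this vanishes
d2003 = c[2003] * factorial(2003)   # 2003 = 6*333 + 5, sign (-1)^333 = -1
```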
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Find a subgroup of index 3 of dihedral group $D_{12}$ Find a subgroup of index 3 in the dihedral group $D_{12}$. I know the number of elements in $D_{12}$ is 24 and also that is we have this subgroup of index 3, then we obtain that $|D_{12}:H|=8$, where $H$ is our wanted subgroup, but I don`t know how to go further. I am new to this type of problems and I do not have many examples, could you provide a full proof, or at least in the form of an answer, such that it would serve as a model for similar problems I encounter? Thank you very much!!!
$D_{12}$ is generated by $a,b$ with $a^2=b^{12}=1, aba=b^{-1}$. Then $b^3$ generates a subgroup $A$ of order $12/3=4$. That subgroup is normal in $D_{12}$ because $ab^3a=b^{-3}=b^9=(b^3)^3$. Then $a$ and $A$ generate a subgroup $H$ of order 8. The index of that subgroup is then 3.
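This can be checked computationally by writing each element of $D_{12}$ as $a^e b^i$ with $a^2=b^{12}=1$, $aba=b^{-1}$, and generating the subgroup $\langle a, b^3\rangle$ (a Python sketch, not part of the original answer):

```python
def mul(x, y):
    # Elements written as a^e * b^i; the relation b^i a = a b^(-i) gives
    # (a^e1 b^i1)(a^e2 b^i2) = a^(e1+e2) b^((-1)^e2 * i1 + i2).
    e1, i1 = x
    e2, i2 = y
    return ((e1 + e2) % 2, ((-1) ** e2 * i1 + i2) % 12)

def generate(gens):
    elems = {(0, 0)}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {mul(x, y) for x in elems for y in elems} - elems
    return elems

a, b3 = (1, 0), (0, 3)
H = generate([a, b3])   # subgroup generated by a and b^3
```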
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $G$ be a graph such that all its vertices have degree 2. Prove that $G$ is a union of pairwise disjoint cycles. Let $G$ be a graph such that all its vertices have degree 2. Prove that $G$ is a union of pairwise disjoint cycles. This is Exercise 4.1.4 in the book Problem-Solving Methods in Combinatorics by Pablo Soberón. Since exercises do not have any solutions in the book, I came here to ask for help. I was thinking of using mathematical induction, since most basic graph properties I saw can be proven using induction. So let's induct on the number of vertices the graph has. * *Base case: if the graph has one vertex, then the problem is obvious (the graph must be a loop, which is a cycle) *Induction step: I don't have any ideas how to do this. I was thinking of considering a subgraph $G'$ that has less than $n$ vertices and also all of its vertices have degree 2. Please help me! Thank you in advance!
Hint. Consider a connected component $C$ of $G$. Then the vertices of $C$ have degree $2$. Show that $C$ is a cycle. Note that, more generally, any connected graph in which every vertex has even degree has an Eulerian circuit.
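A constructive companion to the hint (the function and test graph are mine): in a graph where every vertex has degree 2, walking "always forward" from any start vertex must eventually return to the start, tracing a cycle; repeating this on unvisited vertices partitions the whole graph into disjoint cycles.

```python
# Decompose a simple 2-regular graph into its cycles by walking each
# component: from the current vertex, step to the unique neighbour other
# than the one we came from, until we return to the start.

def cycle_decomposition(adj):
    """adj: dict vertex -> list of its 2 neighbours (simple 2-regular graph)."""
    seen = set()
    cycles = []
    for start in adj:
        if start in seen:
            continue
        cycle = [start]
        seen.add(start)
        prev, cur = start, adj[start][0]
        while cur != start:
            cycle.append(cur)
            seen.add(cur)
            # degree 2: exactly one neighbour other than the one we came from
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur = cur, nxt
        cycles.append(cycle)
    return cycles

# disjoint union of a triangle and a 4-cycle
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1],
       3: [4, 6], 4: [3, 5], 5: [4, 6], 6: [5, 3]}
print(cycle_decomposition(adj))   # [[0, 1, 2], [3, 4, 5, 6]]
```

The walk terminates because the graph is finite and never revisits a vertex before closing the cycle, which is exactly the induction-free argument the hint points at.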
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve for $x$ in $\sin^{-1}(1-x)-2\sin^{-1}x =\frac{\pi}{2}$ Let $x=\sin y$ $$\sin^{-1}(1-\sin y)-2\sin^{-1}\sin y=\frac{\pi}{2}$$ $$\sin^{-1}(1-\sin y)-2y=\frac{\pi}{2}$$ $$1-\sin y =\sin (\frac{\pi}{2}+2y)$$ $$1-\cos2y=\sin y$$ $$\sin y(2\sin y-1)=0$$ $$x=0,~ \frac 12$$ Clearly, $x=\frac 12$ isn’t correct, because it doesn’t satisfy the original expression Why was an extraneous root obtained in this solution? I want to know the reason behind it.
Find $x$ satisfying $\sin^{-1}(1-x)-2\sin^{-1}x =\frac{\pi}{2}$. Common mistakes While going step-by-step, if you go from * *$(\sin^{-1}a)=y\stackrel{\text{to}}{\longrightarrow}\sin(\sin^{-1}a)=a=\sin (y\pm 2n\pi)$, and from *$\sin y=a\stackrel{\text{to}}{\longrightarrow}\sin^{-1}(\sin y)=\begin{cases}2n\pi+y&y\in\text{I, IV quadrant}\\(2n-1)\pi-y&y\in\text{II, III quadrant}\end{cases}=\sin^{-1}a$, there occurs an uncertainty$\color{red}{^1}$ of $m\pi$ in $y$. So, the new value of $y$ also begins to satisfy the original equation $(\sin^{-1}a)=y$ or $\sin y=a$. Your mistakes apart from not doing the domain test Let $x=\sin y$, then \begin{align*} \sin^{-1}(1-\sin y)-2\sin^{-1}\sin y&=\frac{\pi}{2}\\ \end{align*} $\color{red}{\text{CASE I: $y\in$ I, IV quadrant}}$ \begin{align*} \sin^{-1}(1-\sin y)-2\color{red}{(y+2n\pi)}&=\frac{\pi}{2}\tag{1}\\ \color{red}{\sin^{-1}(1-\sin y)}&\color{red}{=}\color{red}{\frac{\pi}{2}+2(y+2n\pi)\Rightarrow (y+2n\pi)\in\left[-\frac{\pi}{2},0\right]}\Rightarrow y\in\text{IV quadrant}\tag{2}\\ 1-\sin y &=\sin \left(\frac{\pi}{2}+2y\color{red}{+4n\pi}\right)\tag{3}\\ 1-\cos2y&=\sin y\\ \sin y(2\sin y-1)&=0\Rightarrow \sin y=0,\ \sin y=\frac 12 \end{align*} but $\sin y=\frac 12$ is not allowed by the range test that bounds $y$ in IV quadrant in which sine is negative. $\color{red}{\text{CASE II: $y\in$ II, III quadrant}}$ \begin{align*} \sin^{-1}(1-\sin y)-2\color{red}{[(2n-1)\pi-y]}&=\frac{\pi}{2}\tag{4}\\ \color{red}{\sin^{-1}(1-\sin y)}&\color{red}{=}\color{red}{\frac{\pi}{2}+2[(2n-1)\pi-y]\Rightarrow ((2n-1)\pi-y)\in\left[-\frac{\pi}{2},0\right]\Rightarrow y\in\left[(2n-1)\pi,\frac{4n-1}{2}\pi\right]\Rightarrow y\in\text{III quadrant}}\tag{5}\\ 1-\sin y &=\sin \left(\frac{\pi}{2}\color{red}{+2[(2n-1)\pi-y]}\right)\tag{6}\\ 1-\cos2y&=\sin y\\ \sin y(2\sin y-1)&=0\Rightarrow \sin y=0,\ \sin y=\frac 12 \end{align*} but $\sin y=\frac 12$ is not allowed by the range test that bounds $y$ in III quadrant in which sine is negative. 
Your mistake apart from the domain test (that gives the answer instantly) is the range test. :p My solution Since taking the sine of both sides produces no uncertainty here (neither side below can deviate by even $2\pi$), we get \begin{align*} \sin^{-1}(1-x)-\sin^{-1}x&=\frac{\pi}{2}+\sin^{-1}x\\ \sin[\sin^{-1}(1-x)-\sin^{-1}x] &=\sin\left(\frac{\pi}{2}+\sin^{-1}x\right)\\ \sin[\sin^{-1}(1-x)]\cdot\cos[\sin^{-1}x]-\sin(\sin^{-1}x)\cdot\cos[\sin^{-1}(1-x)] &=\cos(\sin^{-1}x)\\ (1-x)\sqrt{1-x^2}-x\sqrt{2x-x^2} &=\sqrt{1-x^2}&\left(\because \sin^{-1}a\in\left[-\frac{\pi}{2},+\frac{\pi}{2}\right]\Rightarrow \cos(\sin^{-1}a)=\color{red}{+}\sqrt{1-a^2}\right)\\ -x\left(\sqrt{1-x^2}+\sqrt{2x-x^2}\right)&=0 &\\ \Rightarrow x&=0&(\because x\in[0,1]) \end{align*} since both $\sqrt{1-x^2}$ and $\sqrt{2x-x^2}$ can't vanish simultaneously: the former would require $x=\pm 1$, which the latter does not allow. $\color{red}{^1}$This occurs due to defining the range of an inverse function as some part of the domain of the original function.
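A quick numeric sanity check of the two candidate roots (a stdlib-only sketch): only $x=0$ satisfies the original equation, while $x=\frac12$ is extraneous.

```python
# Evaluate the left-hand side of asin(1-x) - 2*asin(x) = pi/2 at both roots.
import math

def lhs(x):
    return math.asin(1 - x) - 2 * math.asin(x)

print(lhs(0.0))   # 1.5707... = pi/2, so x = 0 works
print(lhs(0.5))   # -0.5235... = -pi/6, not pi/2, so x = 1/2 is extraneous
```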
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
How to find the total distance travelled in a kinematics integration question? I completed question a), which is $\int _0^3 (t^2-6t+5)\,dt = -3$, or more specifically $3$ m to the left. Now isn't distance the absolute value of the displacement? $\left|\int _{t_1}^{t_2} v\left(t\right)\,dt\right|=\left|x_1-x_2\right|$ So for question b) I did this: $\left|\int _0^3 (t^2-6t+5)\,dt\right| = 3$, and apparently this is incorrect. I am a bit confused about what I did wrong; everything seems logical to me. So if you can try question b), that will be great, and please point out what I am missing. Thanks
Some definitions: * *$$\text{average speed} = \frac{\text{distance travelled}}{\text{time elapsed}}$$ *$$\text{speed at time }t=\left| v(t)\right|$$ *$$\text{average value of } f(t) \text{ over } [t_1, t_2]=\frac{\int_{t_1}^{t_2}f(t)\,\mathrm{d}t}{t_2-t_1}$$ Therefore \begin{align}\text{distance travelled over }[t_1, t_2] &= \text{average speed}\times\text{time elapsed}\\ &=\frac{\int_{t_1}^{t_2}\left| v(t)\right|\,\mathrm{d}t}{t_2-t_1} \times \left(t_2-t_1\right)\\ &=\boxed{\int_{t_1}^{t_2}\left| v(t)\right|\,\mathrm{d}t}\\ &\neq \left| {\int_{t_1}^{t_2} v(t)\,\mathrm{d}t } \right|\\ &=\text{magnitude of displacement over }[t_1, t_2].\end{align} The magnitude of displacement is the shortest distance between Positions $1$ and $2$, so can be smaller than the distance travelled from Position $1$ to $2$ (what an odometer measures).
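Applying the boxed formula to the question's $v(t)=t^2-6t+5$ on $[0,3]$ makes the difference concrete (a sketch; the key step is splitting at the sign change $t=1$, since $v<0$ on $(1,3)$):

```python
# Displacement vs distance for v(t) = t^2 - 6t + 5 on [0, 3].
# v has roots at t = 1 and t = 5, so on [0, 3] it changes sign only at t = 1.

def F(t):                      # antiderivative of v
    return t**3 / 3 - 3 * t**2 + 5 * t

displacement = F(3) - F(0)                       # integral of v
distance = (F(1) - F(0)) + abs(F(3) - F(1))      # integral of |v|

print(displacement)   # -3.0
print(distance)       # 7.666... = 23/3, not |-3| = 3
```

So the distance travelled is $23/3$ m, strictly larger than the magnitude $3$ m of the displacement.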
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Motion down an inclined plane with leg equal to the diameter of a circle This is an exercise from Morris Kline's "Calculus: An Intuitive and Physical Approach". An object slides down an inclined plane $OP'$ (Fig. 3-9), starting from rest at $O$. Show that the point $Q$ reached in the time $t_1$ required to fall straight down to $P$ lies on a circle with $OP$ as diameter. The text gives a proof by contradiction for this problem: From the formula $s = 16t^2$ we find that the time to fall the distance $OP$ is $t_1 = \sqrt{OP}/4$. For the motion along $OP'$ we use (35), that is, $s = 16t^2 \sin{A}$. The distance $OQ$ that the object slides in time $t_1$ is $OQ = 16(OP/16) \sin{A}$. Then $ \sin{A} = OQ / OP$. Suppose $Q$ is not on the circle but R on $OP'$ is. Then $\angle OPR$ is $ \angle A$ by the use of right triangles. Then $\sin A = OR/OP$. But $\sin{A} = OQ/OP$. Hence $Q = R$ and $Q$ lies on the circle. This proof is confusing to me because I do not know the logical form of the statement we are trying to prove. What if $Q$ does not intersect with the circle at all? What is the justification for the bold portions of the proof? Is it possible to directly show that $Q$ lies on the circle? My approach is to let $C$ be the center of the circle. If we can show that the length of $CO$ is equal to the length of $CQ$, then $Q$ will be on the circle. However, I keep getting stuck trying to show that $CQ = \frac{OP}{2}$.
For both paths, the time satisfies $\text{distance} = \frac 12\,\text{acceleration}\times\text{time}^2$, i.e. $t = \sqrt{2\,\text{distance}/\text{acceleration}}$. On the trip from $O$ to $Q$, the acceleration is $g\sin A$ and the distance covered is $\overline {OP} \sin A$. On the alternative trip straight down from $O$ to $P$, the acceleration is $g$ and the distance is $\overline {OP}$. In both cases $t^2 = 2\,\overline{OP}/g$, so the two trips take the same time $t_1$.
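The direct approach the asker wanted can also be checked numerically (the coordinate setup is my own assumption: $O$ at the origin, $P$ straight below with $OP=1$, and the book's $s=16t^2$ in feet). For every incline angle $A$, the point $Q$ reached at time $t_1$ lies at distance $OP/2$ from the circle's centre $(0,-OP/2)$:

```python
# Check that Q = (OQ cos A, -OQ sin A) with OQ = OP sin A lies on the
# circle with diameter OP, i.e. centre (0, -OP/2) and radius OP/2.
import math

OP = 1.0
t1 = math.sqrt(OP) / 4             # fall time: OP = 16 * t1**2
for deg in range(5, 90, 5):
    A = math.radians(deg)          # incline angle with the horizontal
    OQ = 16 * t1**2 * math.sin(A)  # distance slid along the incline in t1
    qx, qy = OQ * math.cos(A), -OQ * math.sin(A)
    r = math.hypot(qx, qy + OP / 2)   # distance from Q to the centre
    assert abs(r - OP / 2) < 1e-12, deg
print("Q lies on the circle for every tested angle")
```

Algebraically this is the identity $\sin^2\!A\cos^2\!A + (\tfrac12-\sin^2\!A)^2 = \tfrac14$, which is the direct proof that $CQ = OP/2$.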
{ "language": "en", "url": "https://math.stackexchange.com/questions/3749965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Changing a double integral into a single integral - Volterra-type integral equations I have a question regarding a calculation that I stumbled upon when proving that a Cauchy problem can be converted into a Volterra-type integral equation. Specifically, this equality: \begin{equation*} \int_0^t\int_0^sy(t) dt ds = \int_0^t (t-s) y(s) ds \, . \end{equation*} There seems to be some geometric intuition behind this that I am missing. The generalization, which is also a problem for me, is the following: $$ \int_0^t ds\int_0^s ds_1 ... \int_0^{s_{n-1}}ds_n y(s_n) = \frac{1}{n!}\int_0^t (t-s)^n y(s)ds \, . $$ These integrals also come up when trying to show that a Volterra-type integral equation of the second kind has one and only one solution in $C([a,b])$, using the fixed point theorem. Thanks to everyone for reading.
There is an issue of confusion between the dummy variable of integration $t$ and the upper limit on the outer integral. Instead, write $$F(t)=\int_0^t\int_0^s y(x)\,dx\,ds\tag1$$ Then, note that the region $0\le x\le s$, for $0\le s\le t$ is a triangular shaped region with vertices in the $(x,s)$-plane at $(0,0)$, $(0,t)$, and $(t,t)$. So, this triangular region is also defined by $x\le s\le t$, for $0\le x\le t$. Thus, we can write $(1)$ as $$F(t)=\int_0^t\int_x^t y(x)\,ds\,dx\tag2$$ But note that in $(2)$, $y(x)$ is independent of $s$. So, we can "take $y(x)$ outside the inner integral" to obtain $$F(t)=\int_0^t y(x)\int_x^t (1)\,ds\,dx=\int_0^t (t-x)y(x)\,dx$$
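The identity can also be checked numerically (my own choice of test function $y(x)=e^x$ and $t=1$, for which both sides equal $e-2$; the Simpson helper is mine):

```python
# Compare the nested double integral with the single (t - x) y(x) integral.
import math

def simpson(f, a, b, n=200):       # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

y = math.exp
t = 1.0
lhs = simpson(lambda s: simpson(y, 0, s), 0, t)   # double integral of y
rhs = simpson(lambda x: (t - x) * y(x), 0, t)     # single weighted integral

print(lhs, rhs, math.e - 2)   # all approximately 0.71828
```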
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
2D integration by parts I have the 2D integral $$\iint_{xy}\psi\textbf{u}_H\cdot\nabla_Hh\,{\rm d}x{\rm d}y,$$ with $\psi = \psi(x, y)$, $\textbf{u}_H = (u(x, y), v(x, y))$, $\nabla_H = (\partial/\partial x, \partial/\partial y)$ and $h = h(x, y)$. Is there a 2D integration by parts equivalent, so that I can remove the differentiation of $h$?
The integral seems to be of the form $$ \iint_\Omega \mathbf{v}\cdot\nabla f \, dA $$ The integration by parts equivalent is to use the product rule $$ \nabla \cdot (f \mathbf{v}) = \nabla f \cdot \mathbf{v} + f (\nabla\cdot\mathbf{v}) $$ and then the divergence theorem (the flux form of Green's theorem in the plane): $$ \iint_\Omega \mathbf{v}\cdot\nabla f \, dA = \iint_\Omega \left( \nabla\cdot (f\mathbf{v}) - f (\nabla\cdot\mathbf{v}) \right) \, dA = \iint_\Omega \nabla\cdot (f\mathbf{v}) \, dA - \iint_\Omega f (\nabla\cdot\mathbf{v}) \, dA \\ = \oint_{\partial\Omega} f\mathbf{v} \cdot \mathbf{n} \, ds - \iint_\Omega f (\nabla\cdot\mathbf{v}) \, dA, $$ where $\mathbf{n}$ is the outward unit normal on the boundary and $ds$ is boundary length measure.
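As a numeric sanity check of the identity (the example $f=xy$, $\mathbf{v}=(x^2,y^2)$ on the unit square is my own; for it both sides equal $1/3$):

```python
# Verify  integral(v . grad f) = boundary term - integral(f div v)  on [0,1]^2
# with f = x*y (so grad f = (y, x)) and v = (x^2, y^2) (so div v = 2x + 2y).

def quad2d(g, n=200):              # midpoint rule on [0,1]^2
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(g(x, y) for x in pts for y in pts) * h * h

def quad1d(g, n=1000):             # midpoint rule on [0,1]
    h = 1.0 / n
    return sum(g((i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y
v = lambda x, y: (x**2, y**2)

lhs = quad2d(lambda x, y: v(x, y)[0] * y + v(x, y)[1] * x)   # v . grad f
div_term = quad2d(lambda x, y: f(x, y) * (2 * x + 2 * y))    # f * div v
# boundary integral of f v.n: only the edges x=1 (n=(1,0)) and y=1 (n=(0,1))
# contribute, since f vanishes on x=0 and y=0
boundary = (quad1d(lambda y: f(1.0, y) * v(1.0, y)[0])
            + quad1d(lambda x: f(x, 1.0) * v(x, 1.0)[1]))

print(lhs, boundary - div_term)   # both approximately 1/3
```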
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About Two Goldbach's Conjectures I've learned that * *"The Strong Goldbach's Conjecture" is that 'All the even natural numbers greater than 2 can be written as a sum of two prime numbers.' And, *"The Weak Goldbach's Conjecture" is that 'All the natural numbers greater than 5 can be written as the sum of 3 prime numbers.' But Sometimes people say that the weak conjecture is that 'All the odd natural numbers greater than 5 can be written as the sum of three prime numbers.' Which one is correct? If it's the first one, then I think the weak conjecture is logically equivalent to the strong one. It's because of the following reasoning; Strong$\implies$ Weak: If a natural number $n$ is greater than 5, then there are two cases; i) $n$ is even: then we can write $n$ as $n=(n-2)+2 = p+q+2$, where $p, q$ are primes, by the strong conjecture($n-2>3$, so $n-2>2$ and also $n-2$ is even). ii) $n$ is odd: then we can write $n$ as $n = (n-3)+3 = p+q+3$, where $p, q$ are primes, by the strong conjecture($n-3$ is even and $n-3>2$). Weak$\implies$ Strong: All the even numbers can be written as the sum of three primes. But it's not possible that all three are odd primes. So there are at least one $2$. So if we subtract $2$ from $n$, we can conclude that all the even numbers greater than $2$ can be written as a sum of two primes. As a result, I ask two things. * *Which one is correct version of "Goldbach's Weak Conjecture"? *If the weak conjecture says about all the natural numbers, then why they aren't equivalent? I've heard that the weak conjecture was proven but strong is not. What is wrong with my reasoning?
As stated in the Origins section of Wikipedia's "Goldbach's conjecture" article, On $7$ June $1742$, the German mathematician Christian Goldbach wrote a letter to Leonhard Euler (letter XLIII), in which he proposed the following conjecture: $\;\;\;\;\;$Every integer that can be written as the sum of two primes, can also be written as the sum of as many primes as one wishes, until all terms are units. He then proposed a second conjecture in the margin of his letter: $\;\;\;\;\;$Every integer greater than $2$ can be written as the sum of three primes. He considered $1$ to be a prime number, a convention subsequently abandoned. The two conjectures are now known to be equivalent, but this did not seem to be an issue at the time. A modern version of Goldbach's marginal conjecture is: $\;\;\;\;\;$Every integer greater than $5$ can be written as the sum of three primes. It also later states ... Goldbach remarked his original (and not marginal) conjecture followed from the following statement $\;\;\;\;\;$Every even integer greater than 2 can be written as the sum of two primes, Thus, what you stated as what you learned to be the Weak Goldbach conjecture is actually basically just a restatement of Goldbach's strong conjecture that Goldbach made himself (apart from it starting at $2$ because he considered $1$ to be a prime), with it now known to be equivalent to what is now known at the Strong Goldbach conjecture, as you also determined & pointed out in your post. The correct statement of Goldbach's weak conjecture is $\;\;\;\;\;$Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.) which matches your second version.
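Both modern statements are easy to sanity-check up to a finite bound (the bound $N$ is my own arbitrary choice; of course this proves nothing about the conjectures):

```python
# Check, up to N: every even n > 2 is a sum of two primes (strong), and
# every odd n > 5 is a sum of three primes (weak).

def sieve(n):
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):
                is_p[j] = False
    return is_p

N = 2000
is_p = sieve(N)
primes = [p for p in range(N + 1) if is_p[p]]

def two_primes(n):
    """Is n a sum of two primes?"""
    return any(is_p[p] and is_p[n - p] for p in range(2, n - 1))

strong_ok = all(two_primes(n) for n in range(4, N + 1, 2))
weak_ok = all(any(p <= n - 4 and two_primes(n - p) for p in primes)
              for n in range(7, N + 1, 2))
print(strong_ok, weak_ok)   # True True
```

In the weak check the witness prime is almost always $p=3$, mirroring the equivalence argument in the question: odd $n = 3 + (\text{even number} \ge 4)$.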
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $\frac{\cos^2\left(\frac\pi2 \cos\theta\right)}{\sin^2\theta} = 0.5$ $$\frac{\cos^2\left(\dfrac\pi2 \cos\theta\right)}{\sin^2\theta} = 0.5$$ I want to solve the above equation for $\theta$ in order to find its value, but I am stuck. Could anyone enlighten me with a method to solve it?
Let $x =\cos\theta$ to simplify the equation to $$\cos(\pi x) +x^2=0$$ which has the trivial roots $\pm 1$ (excluded due to $\sin\theta \ne 0$), as well as the root that can be approximated with $\frac\pi2-\pi x+x^2 =0$, i.e. $$x=\left(1+\sqrt{1-\frac2\pi} \right)^{-1}= 0.6239$$ (vs. the exact $ 0.6298$). Thus, the solutions are $$\theta = 2\pi k\pm \text{arcsec} \left(1+\sqrt{1-\frac2\pi} \right) $$
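The answer's numbers can be reproduced with a short root-finder (a stdlib bisection sketch; the bracket $[0.5, 0.99]$ is my own choice, valid since $g(0.5)=0.25>0$ and $g(0.99)\approx-0.019<0$):

```python
# Solve cos(pi*x) + x^2 = 0 in (0, 1) and compare with the closed-form
# approximation from linearizing cos(pi*x) around x = 1/2.
import math

g = lambda x: math.cos(math.pi * x) + x * x

lo, hi = 0.5, 0.99
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2

approx = 1 / (1 + math.sqrt(1 - 2 / math.pi))
print(root, approx)   # root approx 0.6298, approx 0.6239
```

Taking $\theta = \arccos(x)$ then gives the half-power angles; this equation is the classic half-power beamwidth condition for a half-wave dipole pattern.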
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Given a straight pyramid with a regular hexagonal base, we pass through the center of its base a plane parallel to a side face. Given a right pyramid with a regular hexagonal base, we pass through the center of its base a plane $\alpha$ parallel to a lateral face. Find the ratio between the area of the obtained section and that of the lateral face. I can't picture this. What will be the polygon formed by this plane? Can someone draw this situation for me? Thanks for your attention.
Here is a picture of how this will look (my awkward effort to show it). S is the center of the hexagonal base and PS is perpendicular to the base. The plane in question (say Q) is parallel to the ABP side face, passes through CSD and cuts the CFP, FEP and DEP sides. The cross section that you see is trapezoid CXYD, parallel to ABP. As it is a regular hexagonal pyramid, a few things to note - Say AB = a; then all 6 sides of the base are $a$, and CS = SD = a. Given plane Q is parallel to ABP, $\angle PMS = \angle TSN = \angle TNS$. Also, $\triangle TSN$ and $\triangle PMN$ are similar. So, given $SN = \frac{MN}{2}$, $TS = \frac{PM}{2}$, and so $TN = \frac{PN}{2} = \frac{PM}{2}$. Also, $\triangle PXY$ is similar to $\triangle PFE$, and as $TN = \frac{PN}{2}$, $XY = \frac{FE}{2} = \frac{a}{2}$. Area of $\triangle ABP = \frac{1}{2}\cdot PM\cdot AB = \frac{a}{2}PM$. Area of trapezoid $CXYD = \frac{1}{2}\cdot(CD+XY)\cdot TS = \frac{1}{2}(2a+a/2)\frac{PM}{2}$. So, the ratio of areas between the trapezoidal cross-section and the side face ABP is $\frac{5}{4}$.
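The $\frac54$ ratio can be verified independently with explicit coordinates (my own labelling, not the answer's figure: hexagon $A\ldots F$ counterclockwise with unit side, apex $P$ above the centre at height $h=2$). One can check that the plane through the centre parallel to face $ABP$ meets the base along the long diagonal $CF$ and meets the edges $DP$, $EP$ at their midpoints, so the section is the trapezoid $C, X, Y, F$:

```python
# Compute the lateral-face area and the cross-section area directly from
# 3D coordinates and take their ratio.
import math

def sub(p, q): return tuple(a - b for a, b in zip(p, q))
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def tri_area(p, q, r):
    c = cross(sub(q, p), sub(r, p))
    return 0.5 * math.sqrt(sum(x * x for x in c))

h = 2.0
P = (0.0, 0.0, h)
hexagon = [(math.cos(math.radians(60 * k)), math.sin(math.radians(60 * k)), 0.0)
           for k in range(6)]
A, B, C, D, E, F = hexagon

mid = lambda p, q: tuple((a + b) / 2 for a, b in zip(p, q))
X, Y = mid(D, P), mid(E, P)        # where the cutting plane meets DP, EP

face = tri_area(A, B, P)                            # lateral face ABP
section = tri_area(C, X, Y) + tri_area(C, Y, F)     # trapezoid C, X, Y, F

print(round(section / face, 9))   # 1.25
```

The same 1.25 comes out for any apex height, since both areas scale with the same slant height.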
{ "language": "en", "url": "https://math.stackexchange.com/questions/3750820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }