Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
A good reference to begin analytic number theory I know a little bit about basic number theory, much about algebra/analysis, I've read most of Niven & Zuckerman's "An Introduction to the Theory of Numbers" (first 5 chapters), but nothing about analytic number theory. I'd like to know if there would be a book that I could find (or notes from a teacher online) that would introduce me to analytic number theory's classical results. Any suggestions?
Thanks for the tips,
| I'll try to introduce something different from the other answers: Kiran Kedlaya's course notes for Analytic Number Theory seem like a good option.
An updated version is here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 8,
"answer_id": 2
} |
harmonic function question Let $u$ and $v$ be real-valued harmonic functions on $U=\{z:|z|<1\}$. Let $A=\{z\in U:u(z)=v(z)\}$. Suppose $A$ contains a nonempty open set. Prove $A=U$.
Here is what I have so far: Let $h=u-v$. Then $h$ is harmonic. Let $X$ be the set of all $z$ such that $h=0$ in some open neighborhood of $z$. By our assumptions on $A$, $X$ is not empty. Let $z\in X$. Then $h=0$ on some open set $V$ containing $z$. If $x\in V$, then $h$ vanishes on an open set containing $x$, namely $V$. So $X$ is open.
I want to show $X$ is also closed, but I am having trouble doing so. Any suggestions?
| Harmonic functions are continuous, and they are closed under addition and scalar multiplication. Therefore $h=u-v$ is harmonic (and continuous), so $\{z\in U:u(z)=v(z)\}$, the preimage of $\{0\}$ under $h$, is closed in $U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
What is $\lim_{(x,y)\to(0,0)} \frac{(x^3+y^3)}{(x^2-y^2)}$? In class, we were simply given that this limit is undefined since along the paths $y=\pm x$, the function is undefined.
Am I right to think that this should be the case for any function, where the denominator is $x^2-y^2$, regardless of what the numerator is?
Just wanted to see if this is a quick way to identify limits of this form.
Thanks for the discussion and help!
| If you define $$\lim_{\langle x,y\rangle\to\langle a,b\rangle}f(x,y)\tag{1}$$ in such a way that it exists only when the function is defined in some open ball centred at $\langle a,b\rangle$, then what you wrote is correct. This is analogous to defining $$\lim_{x\to a}f(x)$$ only when $f(x)$ is defined in some open interval centred at $a$. However, just as we can talk about one-sided limits on the real line, it makes perfectly good sense to talk about $(1)$ whenever $f(x,y)$ is defined at points arbitrarily close to $\langle a,b\rangle$. In that case it’s understood that we look only at the limit along ‘paths’ within the domain of $f$. On that understanding $$\lim_{\langle x,y\rangle\to\langle 0,0\rangle}\frac{x^3+y^3}{x^2-y^2}$$ still does not exist, but for a more fundamental reason.
Suppose that you approach the origin along the curve $y=\sin x$. Then by l’Hospital’s rule you have
$$\begin{align*}
\lim_{\langle x,y\rangle\to\langle 0,0\rangle}\frac{x^3+\sin^3x}{x^2-\sin^2x}&=\lim_{\langle x,y\rangle\to\langle 0,0\rangle}\frac{3x^2+3\sin^2x\cos x}{2x-2\sin x\cos x}\\\\
&=\lim_{\langle x,y\rangle\to\langle 0,0\rangle}\frac{3x^2+\frac32\sin2x\sin x}{2x-\sin2x}\\\\
&=\lim_{\langle x,y\rangle\to\langle 0,0\rangle}\frac{6x+\frac32\sin2x\cos x+3\cos2x\sin x}{2-2\cos2x}
\end{align*}$$
which does not exist: near the origin the numerator behaves like $12x$ and the denominator like $4x^2$, so the quotient grows like $3/x$. The problem is that this path, although it stays within the domain of the function, approaches the line $y=x$ so quickly as it approaches the origin that the denominator approaches $0$ much faster than the numerator, and therefore the function blows up as we approach the origin along this path.
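The blow-up along $y=\sin x$ is easy to check numerically (a sketch; the original quotient itself behaves like $6/x$ near $0$, since $x^2-\sin^2x\sim x^4/3$ and the numerator is $\sim 2x^3$):

```python
import math

# Along the path y = sin(x), the quotient (x^3 + y^3)/(x^2 - y^2)
# behaves like 6/x near 0, so it blows up as x -> 0+.
def q(x):
    y = math.sin(x)
    return (x**3 + y**3) / (x**2 - y**2)

for x in [0.1, 0.01, 0.001]:
    print(x, q(x), 6 / x)   # the last two columns agree better and better

assert q(0.001) > 1000
```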
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Is there any finite non trivial Group with this property? I was asked to have a look at a problem:
There is no finite non-trivial group $G$ in which every non-trivial element commutes with exactly half of the elements of the group.
For the first step, I saw I could not prove it directly, so I assumed we have such a group $G$, finite and satisfying the above property. The property forces $|G|$ to have even order, because for every $x\in G$ with $x\neq e$, $|C_G(x)|=\frac{|G|}{2}$ is the order of a subgroup, so $\frac{|G|}{2}$ is an integer. Am I on the right way? Any hints will be appreciated. Thanks.
| You're on the right track. Remember that $[G: C_G(x)]$ is the size of the conjugacy class of $x \in G$. Use the fact that $G$ is a disjoint union of conjugacy classes to complete your solution.
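Spelling the hint out as a sketch (this goes one step beyond the hint, so treat it as one possible completion): every non-trivial $x$ has $|C_G(x)|=|G|/2$, so its conjugacy class has size $[G:C_G(x)]=2$; in particular the center is trivial. The class equation then gives a parity contradiction:

```latex
% Class equation: |G| = |Z(G)| + sum over non-central classes of [G : C_G(x)].
% Here Z(G) = {e} and every non-trivial class has size 2, so for some c >= 1
|G| \;=\; 1 \;+\; 2c ,
% i.e. |G| is odd -- contradicting that |C_G(x)| = |G|/2 is an integer,
% which forces |G| to be even.
```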
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why doesn't this find the mid point? I saw a simple question and decided to try an alternate method to see if I could get the same answer; however, it didn't work out how I had expected.
Given $A(4, 4, 2)~$ and $~B(6, 1, 0)$, find the coordinates of the
midpoint $M$ of the line $AB$.
I realize that this is quite easy just taking $\frac{1}{2}(A+B) = (5, \frac{5}{2}, 1)$; however, I don't understand why this doesn't give me the same answer:
If I take $\frac{1}{2}\vec{AB}~$ I would have thought that I would be halfway to $B$ from $A$, which would be the midpoint, right? But of course I get:
$\frac{1}{2}\vec{AB} = \frac{1}{2}(2, -3, -2) = (1, -\frac{3}{2}, -1)$
Is it just because this is a directional vector which doesn't indicate position in any way, and I am trying to halve the direction/angle or something?
| The idea is a good one. But we need to add $\frac{1}{2}\overrightarrow{AB}$ to $\overrightarrow{A}$.
Remark: The idea can be generalized. Let $0 \lt t \lt 1$. We want to find the point $C$ on the line segment $AB$ such that the ratio $AC:CB$ is $t:(1-t)$. (You were dealing with the case $t=1/2$.) Then
$$\overrightarrow{C}=\overrightarrow{A}+t\overrightarrow{AB}=\overrightarrow{A}+t(\overrightarrow{B}-\overrightarrow{A})=(1-t)\overrightarrow{A}+t\overrightarrow{B}.$$
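A small numerical check of the formula, using the points from the question (Python here is purely for verification):

```python
# C = A + t*(B - A) with t = 1/2 recovers the midpoint computed directly.
A = (4, 4, 2)
B = (6, 1, 0)
t = 0.5
C = tuple(a + t * (b - a) for a, b in zip(A, B))
print(C)  # (5.0, 2.5, 1.0) = (1/2)(A + B)
```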
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Integral of $\int \frac{\cos x+\sin 2x}{\sin x}$ I am trying to find the integral of $$\int \frac{\cos x+\sin 2x}{\sin x}$$
$$\int \frac{\cos x}{\sin x} + \int \frac{\sin 2x}{\sin x}$$
$$\int \tan x + \int \frac{\sin 2x}{\sin x}$$
I think I am supposed to have the integral of $\tan x$ memorized, so I will put that to the side for now.
$$\int \frac{\sin 2x}{\sin x}$$
I do not know what to do with this since I can't make a $u$-substitution or anything else, so I will just randomly use the double-angle identity I have memorized.
$$\int \frac{2\sin x\cos x}{\sin x}$$
$$\int 2\cos x$$
$$2 \int \cos x$$
$$2\sin x + \int \tan x$$
$$2\sin x + \ln|\sec x| + c$$
This is of course wrong.
| Remember that $$\dfrac{\cos(x)}{\sin(x)} = \cot(x)$$ and not $\tan(x)$. An easier way to do $\int \dfrac{\cos(x)}{\sin(x)} dx$ is to do as follows. Hence, $$I = \int \dfrac{\cos(x)}{\sin(x)} dx.$$ Set $\sin(x) = t$, then we get $\cos(x) dx = dt$. Hence, $$I = \int\dfrac{dt}{t} = \log(t) + C = \log(\lvert \sin(x) \rvert) + C$$
Hence, your answer is $$2 \sin(x) + \log(\lvert \sin(x) \rvert) + C$$
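One can sanity-check this answer by differentiating it numerically and comparing with the original integrand (a quick sketch; the sample points are arbitrary):

```python
import math

def integrand(x):
    return (math.cos(x) + math.sin(2 * x)) / math.sin(x)

def antiderivative(x):
    return 2 * math.sin(x) + math.log(abs(math.sin(x)))

# The central-difference derivative of the antiderivative should match the integrand.
h = 1e-6
for x in [0.3, 0.9, 1.4]:
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5
print("antiderivative checks out")
```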
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
well-defined functions I am asked to argue whether or not the following two functions are well-defined (textbook definition: a) they define $y$ for all $x$ in the domain, and b) each $x$ is mapped to exactly one $y$). Both of the functions below are functions from $\mathbb{Q}$ to $\mathbb{Q}$.
$$f\left(\frac{p}{q}\right) = \frac{p+1}{q}$$
$$g\left(\frac{p}{q}\right) = \frac{p+q}{p-q}$$
My argument is that since $0$ is a rational number, we can take, for $f$, $p=0$ and $q=x$ and the function will not be defined. Similarly, we can take $p=q=0$ for $g$, and the function, again, will not be defined.
But the argument seems to be too easy. Is there something I am missing that won't allow me to use these two counter examples?
Thanks!
| Your argument does not quite work because $\frac{0}{0}$ is not a rational number. What you want to look for is two different representations of the same fraction that give different answers. For instance
$$
f\left(\frac{1}{2}\right)=\frac{2}{2}=1,
$$
but,
$$
f\left(\frac{2}{4}\right)=\frac{3}{4}.
$$
The problem is that in $\mathbb{Q}$ we have $\frac{1}{2}=\frac{2}{4}$, but the function gives different values on the two representations.
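The failure is easy to see in code (a sketch using Python's exact rationals):

```python
from fractions import Fraction

# f(p/q) = (p+1)/q depends on the representation (p, q), not on the rational p/q.
def f(p, q):
    return Fraction(p + 1, q)

assert Fraction(1, 2) == Fraction(2, 4)   # same rational number...
assert f(1, 2) != f(2, 4)                 # ...different outputs: 1 vs 3/4
print(f(1, 2), f(2, 4))
```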
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Calculating the surface area of sphere above a plane How do I calculate the surface area of the unit sphere above the plane $z=\frac12$?
EDIT: I have been attempting things and I am thinking about parameterizing this... While I know that the surface area is given by the double integral of the magnitude of the cross product of the partial derivatives with respect to the new parameters, I don't know what to set them to. (Sorry, I'm not good with the fancy notation.)
| The circumference of an infinitesimal ring of the unit sphere between $z$ and $z+\mathrm dz$ is $2\pi\sqrt{1-z^2}$, and its width is $\mathrm dz/\sqrt{1-z^2}$. Thus its surface area is $2\pi\,\mathrm dz$. That is, the surface area of a slab of the unit sphere between two $z$ coordinates (or in fact between any two parallel planes) is simply $2\pi$ times the difference of the $z$ coordinates (or, generally, the distance between the two planes). Thus the surface area of the slab of the unit sphere between $z=1/2$ and $z=1$ is $2\pi\cdot(1-1/2)=\pi$.
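The claim that each ring contributes exactly $2\pi$ per unit of height can be verified numerically with the surface-of-revolution formula (a sketch; the midpoint rule avoids the endpoint $z=1$, where $r=0$):

```python
import math

# dA = 2*pi*r(z)*sqrt(1 + r'(z)^2) dz with r(z) = sqrt(1 - z^2);
# algebraically this collapses to 2*pi dz, so the area from z = 1/2 to 1 is pi.
def ring_area(z):
    r = math.sqrt(1 - z * z)
    drdz = -z / r
    return 2 * math.pi * r * math.sqrt(1 + drdz * drdz)

n = 100000
a, b = 0.5, 1.0
h = (b - a) / n
area = sum(ring_area(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(area - math.pi) < 1e-8
print(area)
```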
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Examples of non-Riemann surfaces? While studying Complex Analysis, I have come across Riemann Surfaces:
http://mathworld.wolfram.com/RiemannSurface.html
Can anyone please provide some examples of surfaces that cannot be Riemann surfaces? Thanks a lot!
| A Riemann surface is a $1$-dimensional complex manifold, i.e. a surface that admits a complex structure. The complex structure on a Riemann surface induces a canonical orientation. So, in particular, a nonorientable surface cannot be a Riemann surface. Examples of nonorientable surfaces are the real projective plane and the Klein bottle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
How to solve $x_j y_j = \sum_{i=1}^N x_i$ I have $N$ equations and am having trouble finding a solution.
$$\left\{\begin{matrix}
x_1 y_1 = \sum_{i=1}^N x_i\\
x_2 y_2 = \sum_{i=1}^N x_i\\
\vdots\\
x_N y_N = \sum_{i=1}^N x_i
\end{matrix}\right.$$
where $x_i$, ($i = 1, 2, \cdots, N$) is an unknown and $y_i$, ($i = 1, 2, \cdots, N$) is a known variable.
Given $y_i$'s, I have to find $x_i$'s but, I don't know where to start and even if it has a solution.
| You can rewrite the system in the following form:
$$\left\{\begin{align*}
&(1-y_1)x_1+x_2+x_3+\ldots+x_N=0\\
&x_1+(1-y_2)x_2+x_3+\ldots+x_N=0\\
&x_1+x_2+(1-y_3)x_3+\ldots+x_N=0\\
&\qquad\qquad\qquad\qquad\vdots\\
&x_1+x_2+x_3+\ldots+(1-y_N)x_N=0\;,
\end{align*}\right.$$
or in matrix form as
$$\pmatrix{1-y_1&1&1&\dots&1&1\\1&1-y_2&1&\dots&1&1\\1&1&1-y_3&\dots&1&1\\\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\1&1&1&\dots&1-y_{N-1}&1\\1&1&1&\dots&1&1-y_N}\pmatrix{x_1\\x_2\\x_3\\\vdots\\x_{N-1}\\x_N}=\pmatrix{0\\0\\0\\\vdots\\0\\0}\;.$$
Solving this homogeneous linear system is in principle completely straightforward.
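A concrete sketch in Python/NumPy. The choice $y=(2,3,6)$ is ours: it satisfies $\sum_i 1/y_i = 1$, which (for nonzero $y_i$) is exactly when a nontrivial solution exists, since the equations force $x_i = S/y_i$ with $S=\sum_i x_i$:

```python
import numpy as np

# Build M = (matrix of ones) - diag(y) and read off its null space via the SVD.
y = np.array([2.0, 3.0, 6.0])      # chosen so that sum(1/y) == 1
N = len(y)
M = np.ones((N, N)) - np.diag(y)

_, s, vt = np.linalg.svd(M)
x = vt[-1]                          # right-singular vector of the smallest singular value
assert np.linalg.norm(M @ x) < 1e-10     # a genuine nontrivial solution
assert abs(x[1] / x[0] - 2 / 3) < 1e-8   # proportional to (1/2, 1/3, 1/6)
print(x)
```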
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Center of SO(V,q) Let $V$ be a finite-dimensional vector space and $q$ a quadratic form. I'm looking for $Z(SO(V,q))$, where $SO(V,q)$ is the special orthogonal group.
If $\operatorname{dim} V$ is odd then $Z(SO(V,q))=\{I\}$, because $-\sigma_u\in SO(V,q)$, where $\sigma_u$ is the reflection $$\sigma_u(x)=x-2\frac{b(x,u)}{q(u)}\,u$$ (with $b$ the bilinear form associated to $q$).
But I couldn't calculate center when $\operatorname{dim} V$ is even.
I have not any idea how to deal with it.
Thanks.
| For finite fields of odd characteristic, there are two types of quadratic form when $n = {\rm dim} V$ is even and one type when $n$ is odd. When $n>2$, the groups ${\rm SO}(V,q)$ act absolutely irreducibly on $V$, and so only scalar matrices could be in the centre. But for the scalar $\lambda I_n$ to preserve $q$, we need $\lambda^2=1$, so $\lambda = \pm 1$. Hence $Z({\rm SO}(V,q))$ has order 1 when $n$ is odd and 2 when $n$ is even.
When $n=2$, ${\rm SO}(V,q)$ is a cyclic (and hence abelian) group of order $|F|-1$ when the form $q$ has plus-type and $|F|+1$ when $q$ has minus-type.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A problem about an endomorphism of the vector space $\mathrm{Mat}_2(\mathbb R)$ Let $f_A: \mathrm{Mat}_2(\mathbb R)\rightarrow \mathrm{Mat}_2(\mathbb R)$, $f_A(X)=AX$, be an endomorphism of vector spaces. Clearly if $A$ is invertible, then $f_A$ is invertible and ${(f_A)}^{-1}=f_{A^{-1}}$. How can I prove the following statement?
$f_A\;\textrm{invertible} \Rightarrow A\;\textrm{invertible}$
| Suppose $\,A\,$ is not invertible, then$$\exists\,\, \mathbf{0}\neq \mathbf{b}:=\begin{pmatrix}b_1\\b_2\end{pmatrix}\in\mathbb{R}^2\,\,s.t.\,\,A\mathbf{x}=\mathbf{b}$$has no solution, so if $\,B\in\operatorname{Mat}_2(\mathbb{R})\,$ is any element with first column equal to $\,\mathbf{b}\,$ , then
$\,\,AX\neq B\,\,\,\forall\,X\in\operatorname{Mat}_2(\mathbb{R})$, which contradicts the fact that $f_A$ is invertible and thus onto.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of existence of square root of unitary and symmetric matrix I'm struggling with this exercise
Let $U$ be a unitary and symmetric matrix ($U^T = U$ and $U^*U = I$).
Prove that there exists a complex matrix $S$ such that:
* $S^2 = U$
* $S$ is a unitary matrix
* $S$ is symmetric
* Each matrix that commutes with $U$ also commutes with $S$
| Let $\lambda_j, j=1 \ldots k$ be the distinct eigenvalues of $U$ (which must be numbers of absolute value $1$). For each $\lambda_j$ let $\mu_j$ be a square root of $\lambda_j$. These also have absolute value $1$. There is a polynomial $p(z)$ such that $p(\lambda_j) = \mu_j$ for each $j$. Let $S = p(U)$.
1) $S^2 = p(U)^2 = U$: in fact $p(z)^2 - z$ is divisible by $\prod_j (z - \lambda_j)$, which is the minimal polynomial of $U$.
2) Since $U$ is normal, the algebra generated by $U$ and $U^*$ is commutative, and in particular $S$ is normal. Since $S$ is normal and its eigenvalues, which are the $\mu_j$, have absolute value $1$, $S$ is unitary.
3) Any nonnegative integer power of a symmetric matrix is symmetric; $S$ is symmetric because it is a linear combination of the symmetric matrices $U^j$.
4) Every matrix that commutes with $U$ commutes with each $U^j$ and therefore with $S$.
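A numerical illustration (a sketch, not the proof): a symmetric unitary can be written $U = Q\,\mathrm{diag}(e^{it_j})\,Q^T$ with $Q$ real orthogonal, and taking square roots of the eigenvalues produces an $S$ with the first three properties:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # real orthogonal Q
t = rng.uniform(0, 2 * np.pi, 4)
U = Q @ np.diag(np.exp(1j * t)) @ Q.T              # symmetric unitary
S = Q @ np.diag(np.exp(1j * t / 2)) @ Q.T          # square roots of the eigenvalues

assert np.allclose(S @ S, U)                       # S^2 = U
assert np.allclose(S @ S.conj().T, np.eye(4))      # S unitary
assert np.allclose(S, S.T)                         # S symmetric
```

(The fourth property is where the polynomial construction in the answer is really needed.)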
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Length of curve in metric space Let $(X, d)$ be a metric space, and $\gamma: [a,b]\to X$ be a curve. For any partition $P=\{a=y_0<y_1<\cdots<y_n=b\}$, one can associate to it the length of the "inscribed polygon" $$\Sigma(P)=\sum_id(\gamma(y_i), \gamma(y_{i+1}))$$
Then we define the length of the curve to be
$$L(\gamma)=\sup_{P\in \mathcal P}\Sigma(P)$$
where $\mathcal P$ is the collection of all partitions of $[a,b]$.
If the supremum is finite, then we call the curve rectifiable.
We denote: $\|P\|=\max_i|y_i-y_{i+1}|$
Now my question is the proof of the following statement:
$$\lim_{\|P\|\to 0}\Sigma(P)=L(\gamma)$$
The hard part for me is the following: if $P$ and $Q$ are two partitions with $\|P\|\le \|Q\|$, we only know $$\Sigma(P\cup Q)\ge\max(\Sigma(P), \Sigma(Q))$$ and this won't give me any contradiction when I assume there is a sequence of partitions $P_i$ with $\|P_i\|\to 0$ and $\Sigma(P_i)\le L(\gamma)-\varepsilon_0$ for some fixed $\varepsilon_0>0$. Can anybody help?
(btw. I thought that this may be similar to the proof of the Riemann sum for integrable functions, but there one has the oscillation)
| * Take a partition $Q=\{y_0,\dots, y_n\}$ such that $\Sigma(Q)>L(\gamma)-\epsilon$.
* By the uniform continuity of $\gamma$, there exists $\delta>0$ such that $d(\gamma(t),\gamma(s))<\epsilon/n$ whenever $|t-s|<\delta$.
* Let $P=\{x_0,\dots,x_m\}$ be any partition with $\|P\|<\delta$. For each $y_j$ there exists $x_{k(j)}$ such that $|x_{k(j)}-y_j|<\delta$.
* Use uniform continuity to estimate $\Sigma(\{x_{k(j)}\colon j=0,\dots,n\})$ from below.
There are some things to tidy up here, but this being homework, I'll leave the rest to you.
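The statement itself is easy to watch numerically for a concrete curve (a sketch; $\gamma(t)=(\cos t,\sin t)$ on $[0,\pi/2]$ has length $\pi/2$, and uniform partitions play the role of $P$ with $\|P\|\to 0$):

```python
import math

# Inscribed-polygon length for the quarter circle, using n equal subintervals.
def sigma(n):
    ts = [math.pi / 2 * i / n for i in range(n + 1)]
    pts = [(math.cos(t), math.sin(t)) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

assert sigma(10) < sigma(100) < sigma(1000) < math.pi / 2
assert math.pi / 2 - sigma(1000) < 1e-5   # Sigma(P) -> L(gamma) as the mesh shrinks
print(sigma(10), sigma(100), sigma(1000))
```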
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Computational efficiency of Machin-like formulae From what I have read, it appears that the most efficient methods of calculating $ \pi $ are Machin-like formulae. And it is known that certain formulas are more efficient than others.
Are there any theoretical properties (rates of convergence, etc) of these formulas that make them more efficient than others?
Like, why is
$$
\frac{\pi}{4} = 183\arctan\frac{1}{239} + 32\arctan\frac{1}{1023} - 68\arctan\frac{1}{5832} + 12\arctan\frac{1}{110443} - 12\arctan\frac{1}{4841182} - 100\arctan\frac{1}{6826318}
$$
more efficient than
$$ \frac{\pi}{4} = \arctan\frac{1}{2} + \arctan\frac{1}{3} $$
(as claimed by Wikipedia)
Or is this a purely empirical result?
I presume that the calculation of arctan is done using the Maclaurin series.
| In the Maclaurin series of $\arctan(x)$, the $n$'th term is $x^{2n-1}/(2n-1)$. When $|x|$ is small, the important part here is the $x^{2n-1}$. So to get accuracy $\epsilon$, you need $2n-1 \approx \log (\epsilon)/\log (x)$. Roughly speaking,
in a Machin-like formula $\sum_j a_j \arctan(1/b_j)$
the number of terms you need to calculate is inversely proportional to the log of the smallest $b_j$.
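A back-of-the-envelope sketch of that term count in Python (the function name and the cutoff rule are ours; we count Maclaurin terms of $\arctan(1/b)$ until a term drops below $10^{-D}$):

```python
from math import log10

# Number of series terms of arctan(1/b) needed for D decimal digits:
# the n-th term is b^-(2n-1)/(2n-1), so roughly we need (2n-1)*log10(b) >= D.
def terms_needed(b, digits):
    n, exponent = 0, 0.0
    while exponent < digits:
        n += 1
        exponent = (2 * n - 1) * log10(b)   # -log10 of the n-th term's size
    return n

for b in [2, 3, 239, 1023, 6826318]:
    print(b, terms_needed(b, 100))
# Larger b => fewer terms, inversely proportional to log10(b).
```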
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/153969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Integral of $\int_0^1 x\sqrt{2- \sqrt{1-x^2}}\,dx$ I have no idea how to do this; it seems so complex I do not know what to do.
$$\int_0^1 x\sqrt{2- \sqrt{1-x^2}}dx$$
I tried to do double trig identity substitution but that did not seem to work.
| Since you tried to use a trig identity, I'll use one in this solution. Let $x = \sin \theta$ so that $1 - x^2 = 1 - \sin^2 \theta = \cos^2 \theta$, and $\mathrm d x = \cos \theta \mathrm{d}\theta$. Our integral becomes:
$$ \int_0^{\frac \pi 2} \sin \theta \cos \theta \sqrt{2 - \cos\theta} \mathrm d \theta.$$
Now set $u = \cos\theta$, so that $\mathrm{d}u = - \sin \theta \,\mathrm{d}\theta$. Our integral becomes
$$\int_1^0 - u\sqrt{2-u}\ \mathrm{d}u.$$
Can you solve that?
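A numerical cross-check that the substitutions preserved the value (the midpoint rule and the sample size are arbitrary choices):

```python
import math

def midpoint(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral vs. the final one (with the limits flipped to 0..1).
orig = midpoint(lambda x: x * math.sqrt(2 - math.sqrt(1 - x * x)), 0, 1)
sub = midpoint(lambda u: u * math.sqrt(2 - u), 0, 1)
assert abs(orig - sub) < 1e-4
print(orig, sub)   # both approximately (16*sqrt(2) - 14)/15
```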
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Sum with binomial coefficients: $\sum_{k=1}^m \frac{1}{k}{m \choose k} $ I got this sum, in some work related to another question:
$$S_m=\sum_{k=1}^m \frac{1}{k}{m \choose k} $$
Are there any known results about this (bounds, asymptotics)?
| A rather simple and gross bound is $$\sum_{k=1}^m\frac{1}{k}\binom{m}{k}\leq\sum_{k=1}^m\binom{m}{k}<\sum_{k=0}^m\binom{m}{k}=2^m$$
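A quick empirical look at the bound, and at how much room it leaves, for small $m$ (a sketch only):

```python
from math import comb

# S_m = sum_{k=1}^m C(m,k)/k compared against the crude bound 2^m.
for m in range(1, 16):
    s = sum(comb(m, k) / k for k in range(1, m + 1))
    assert s < 2 ** m
    print(m, s, 2 ** m)
```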
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 2
} |
How to make sure a proof is correct If you come up with a proof of a mathematical proposition, how do you verify the proof is correct?
Put it another way, how do you avoid a wrong proof?
I guess there is no definitive answer to this.
However, I believe this question is important.
Any idea, suggestion or method to help make sure a proof is correct would be appreciated.
EDIT
Thanks for all your suggestions.
I'd like to add my idea which is perhaps similar to Leslie Lamport's (I haven't read his paper yet).
My idea is basically "divide and conquer".
Divide your proof to small propositions or lemmas.
The smaller, the better.
Ideally each of these small proposition should be trivial.
To do this, first divide the main theorem into several propositions.
The main theorem should be almost trivial assuming each proposition is correct.
Then divide each proposition into several propositions.
Repeat this process until each proposition cannot be divided or trivial enough.
Then apply some or all of your ideas to each proposition.
Generalizing your theorem can help make this process easier(there is an adage(?): if your problem is difficult, generalize it).
I got this idea from the Grothendieck's approach in his EGA.
See the article "The rising sea; Grothendieck on simplicity and generality" by Colin McLarty.
| Leslie Lamport's paper on "How to Write a Proof" (PDF) offers some useful advice.
Lamport suggests structuring the proof formally, with each statement accompanied by its own sub-proof in terms of simpler claims. How far should one continue this refinement? Lamport says: "My own rule of thumb is to expand the proof until the lowest level statements are obvious, and then continue for one more level. This takes discipline."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 6,
"answer_id": 0
} |
Limit of the sequence $\lim_{n\rightarrow\infty}\sqrt[n]n$
Possible Duplicate:
$\lim_{n \to +\infty} n^{\frac{1}{n}} $
I know that
$$\lim_{n\rightarrow\infty}\sqrt[n]n=1$$
and I can imagine that $n$ grows linearly while $n$th root compresses it exponentially and therefore the result is $1$, but how do I calculate it?
| $$\lim_{n\rightarrow\infty}\sqrt[n]n$$
$$=\lim_{n\rightarrow\infty}e^{\frac{\ln(n)}{n}}$$
and as we know that $\lim_{n\rightarrow\infty}\frac{\ln(n)}{n} = 0$ {apply l'Hospital's rule}
So $$\lim_{n\rightarrow\infty}\sqrt[n]n=1$$
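Numerically the convergence is easy to watch (a small sketch):

```python
# n^(1/n) = exp(ln(n)/n) -> exp(0) = 1 as n -> infinity.
for n in [10, 1000, 10**6, 10**9]:
    print(n, n ** (1.0 / n))

n = 10**9
assert abs(n ** (1.0 / n) - 1) < 1e-7
```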
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Evaluating $ \int_0^{\infty}\frac{v}{\sqrt{v + c}}e^{-\frac{y^2}{2(v + c)} - \frac{(u-v)^2}{u^2v}}dv$ While working on mixture (variance) of normal distribution and keep running into these two integrals
$$ \int_0^{\infty}\dfrac{v}{\sqrt{v + c}}e^{-\dfrac{y^2}{2(v + c)} - \dfrac{(u-v)^2}{u^2v}}dv,$$
$$\int_0^{\infty}\dfrac{v^{-1}}{\sqrt{v + c}}e^{-\dfrac{y^2}{2(v + c)} - \dfrac{(u-v)^2}{u^2v} }dv,$$
where $c>0, u>0 ,y\in \mathbb R$.
I was wondering are they solvable? Can they be expressed as some known function or in elementary terms?
Any help would be appreciated.
| For this integral, the best result seems to be a Taylor series in $c$ with coefficients in closed form.
In the typical case $c=1$, even specializing to the easiest case of $u=1$, $y=0$, we get the integrals $\int_0^\infty v^{\pm1}(v+1)^{-1/2}e^{-v-1/v}dv$,
for which neither Mathematica nor Gradshteyn and Ryzhik has any answer.
However, there is an explicit expression in the limit case $c=0$, which includes Gradshteyn and Ryzhik 3.471.15. Setting $z=\sqrt{4+2y^2}/u$, Mathematica gives the two integrals as:
$$\frac{u^3(1+z)}{2}\sqrt{\pi}\,e^{\,z(\sqrt{2}-1)} \text{ and }
\frac{2}{zu}\sqrt{\pi}\,e^{\,z(\sqrt{2}-1)}.$$
When we expand the integrands above as power series in $c$, the integrals of each term have similar closed-form expressions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
"8 Dice arranged as a Cube" Face-Sum Equals 14 Problem I found this here:
Sum Problem
Given eight dice. Build a $2\times 2\times2$ cube, so that the sum of the points on each side is the same.
[Image: one arrangement of the eight dice]
Here is one of 20,736 solutions with the sum 14.
You find more at the German magazine "Bild der Wissenschaft 3-1980".
Now my question:
Is $14$ the only possible face sum? At least, in the example given, it seems to be related to the fact that on every face two dice-pairs show up, having $n$ and $7-n$ pips. Is this necessary? Sufficient it is...
| A note regarding your first question: If a solution with face sum $k$ exists, then a solution with face sum $28-k$ also exists. To see this, start with a solution $S$. Note that three sides of each die are exposed. If we move each die to the position exactly opposite where it is in $S$, this creates an arrangement of dice $S^\prime$. Now consider the front face of $S$ and the back face of $S^\prime$. Together these contain four pairs of opposing faces of dice, and each opposing pair sums to 7. So the front face of $S$ and the back face of $S^\prime$, together, have eight numbers that sum to 28; if the front face sums to $k$ then the back face sums to $28-k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
The spectra of weighted shifts Since weighted shifts are like the model operators in operator theory and people have been studying them for so long, I think there should be quite a large literature on the spectra of such operators. However, after some search I hardly found any reference that gives a complete picture of what the spectrum of a weighted shift is.
Most of the papers I found deal with some specific properties of the spectra, but the question I have in mind is what the spectra are, exactly.
I wonder whether there is some good reference on this.
Thanks!
| For complete description of the spectra of weighted shifts, see either
1: A.L. Shields, Weighted Shift Operators and Analytic Function Theory, in Topics in Operator Theory, Mathematical Surveys, N0 13 (ed. C. Pearcy), pp. 49-128. American Mathematical Society, Providence, Rhode Island 1974
Or
2: http://surface.syr.edu/cgi/viewcontent.cgi?article=1138&context=mat
Or
3: https://www.impan.pl/shop/publication/transaction/download/product/90721
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Computing the length of a finite group Can someone suggest a GAP or MAGMA command (or code) to obtain the length $l(G)$ of a finite group $G$, i.e. the maximum length of a strictly descending chain of subgroups in $G$?
Thanks in advance.
| I believe the lengths of the simple groups are not known in general, so this will require the groups involved to be small enough to do some brute-force calculations.
Assuming the length is additive over the composition factors, we try an inductive approach: for simple groups, l(G) = max( l(M) : M a maximal subgroup of G ) + 1; for composite groups, we just add up the lengths of the composition factors.
Since Magma has large precomputed tables of maximal subgroups of groups (which have had errors), I recommend it for speed.
subgroupLength := function( grp )
    len := $$;
    if #grp eq 1 then return 0;
    elif IsSolvable( grp ) then return &+[ pn[2] : pn in FactoredOrder( grp ) ];
    elif IsSimple( grp ) then return 1 + Max( { len( max`subgroup ) : max in MaximalSubgroups( grp ) } );
    // else return &+[ len( groupForm( cf ) ) : cf in CompositionFactors( grp ) ];
    else // use the series 1 < O_oo(O^oo(G)) < O^oo(G) < G
        top := SolvableQuotient( grp );
        if #top gt 1 then bot := SolvableResidual( grp );
        else top, _, bot := RadicalQuotient( grp );
        end if;
        if #top gt 1 and #bot gt 1 then return &+[ len( part ) : part in [* top, bot *] ];
        else // oh no! resort to Len
            return 1 + Max( { len( max`subgroup ) : max in MaximalSubgroups( grp ) } );
        end if;
    end if;
end function;
This is much faster than Len above for Alt(8) but still slow on A5 wr A5 which has a large composite insoluble section. This would be fixed if one could convert the output of CompositionFactors to a form suitable as input for MaximalSubgroups.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Casorati-Weierstrass theorem for essential singularities
Let $\Omega\subseteq \mathbb{C}$ be open, $ a\in\Omega,\ f\in H(\Omega\backslash \{a\})$ ($f$ analytic on $\Omega\backslash \{a\})$. In the case of an essential singularity: If $ C(a,r)\subseteq\Omega$ ($C(a,r)$ is the disk with origin $a$ and radius $r$), then $f(C(a,r)\backslash \{a\})$ is dense in $\mathbb{C}$.
$\color{green}{\text{(1) What does this theorem actually say? How would you put in descriptive words?}}$
$\color{green}{\text{(2) How would an outline of the proof (in words) look like? What steps should be followed?}}$
| * The theorem says that if $f$ has an essential singularity at $a$, then arbitrarily close to $a$, $f$ takes values arbitrarily close to whatever you like. So the behaviour of $f$ is pretty wild close to $a$ (by comparison to removable singularities, where $f$ is straightforward near $a$, or poles, where $f$ just goes to infinity near $a$).
* Suppose $f$ on $D(a,r) \setminus \{a\}$ doesn't take any values near $b\in \mathbb C$. Then define $g(z) = \frac{1}{f(z)-b}$. Then $g$ is holomorphic and bounded, so it can be extended to $a$. But then $f(z) = \frac{1}{g(z)} + b$ has a pole or removable singularity at $a$. Hence if $f$ does not have a pole or removable singularity at $a$, the above process must fail, i.e. $f$ takes values very close to $b$.
The latter is not a proof as such, but it's a good outline, and the real proof is not much more complex: as a commenter said, see Wikipedia.
It's worth noting that in fact the Casorati-Weierstrass theorem is strengthened by the Big Picard theorem, which states that $f$ doesn't just come close to every value, it in fact takes every value (with at most one exception). The proof of the Picard theorems is unfortunately not so simple.
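The classical example $f(z)=e^{1/z}$ makes both theorems concrete: for any target $b\neq 0$, solving $1/z=\log b+2\pi i k$ gives preimages shrinking to $0$ (a numerical sketch; the target $b$ is an arbitrary choice):

```python
import cmath

# e^{1/z} has an essential singularity at 0 and actually attains every
# nonzero value b, at z = 1/(log(b) + 2*pi*i*k), with z -> 0 as k grows.
b = 3 - 4j
for k in [1, 10, 100]:
    z = 1 / (cmath.log(b) + 2j * cmath.pi * k)
    assert abs(z) < 1 / k                      # preimages crowd in on 0
    assert abs(cmath.exp(1 / z) - b) < 1e-6    # and f(z) = b there
print("every punctured disc around 0 contains points where exp(1/z) =", b)
```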
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximizing number of 4s times number of 7s in decimal representation Let $F_4(X)$ be the number of digits 4 in the decimal representation of $X$, and $F_7(X)$ the number of digits 7 in the decimal representation of $X$. We have to find the largest product $F_4(X)\cdot F_7(X)$, where $L \leq X \leq R$.
$$\max\{F_4(X)\cdot F_7(X) : L ≤ X ≤ R\}$$
Can a general solution be achieved?
eg:
$L=47$ AND $R=74$
$$\max\{F_4(X)\cdot F_7(X)\}=1$$
| For any range with a fixed number of unrestricted digits like $0 \le X \le 99999999$, the problem is trivial: since we can choose any 8 digits, it is easy to maximize $F_4(X) \cdot F_7(X)$ by choosing four 4's and four 7's.
Even if some initial digits are unchangeable, as in $4440000 \le X \le 4449999$, the problem is very simple. Here, the first 3 digits are fixed, but the last 4 digits may be chosen at will, and it is easy to see that we should additionally choose one/zero 4's and three/four 7's.
Finally, note that any range $L \le X \le R$ can be decomposed into $O(\log R)$ separate intervals of the type in the previous paragraph. For instance, if $47 \le X \le 74$, then $X$ fits one of the following templates: 47, 48, 49, 5x, 6x, 70, 71, 72, 73, 74. Since we can maximize over each one, we can maximize over their union.
Of course some optimizations are possible, but this is already a linear-time algorithm in the number of digits of $L$ and $R$. One could even use this technique to compute things like $\sum\limits_{L \le X \le R} F_4(X) \cdot F_7(X)$.
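As a sanity check on the interval-decomposition argument, a brute-force scan (feasible only for small ranges, and my own illustration) can be sketched as:

```python
def best(L, R):
    """Brute-force max of F4(X)*F7(X) over L <= X <= R."""
    return max(str(x).count('4') * str(x).count('7') for x in range(L, R + 1))

print(best(47, 74))    # 1, matching the example in the question
print(best(1, 9999))   # 4: achieved e.g. by 4477 (two 4's times two 7's)
```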
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Given an interval $I$, what does the notation $\overline{I}$ mean? I have below a beginning of a theorem:
If a function $f:I \rightarrow \mathbb{C}$ defined on an interval $I$ of length $p$ can be expanded to a piecewise differentiable function on $\overline{I}$, then will...
What does $\overline{I}$ mean in this context?
| Here $\overline I$ means the closure of $I$ - in general this is the smallest closed set which contains $I$. If $I \subseteq \mathbb{R}$ is an interval, then it is just the interval with the endpoints included.
For a more general example, in $\mathbb{R}^2$ you have the set $B = \{ (x,y) \in \mathbb{R}^2 : | (x,y) | < 1 \}$, the open ball of radius $1$, centred at the origin. If we take its closure we get $\overline B =\{(x,y) \in \mathbb{R}^2 : |(x,y)| \le 1 \}$, the closed disc, which contains the boundary. This is the two-dimensional analogue of an interval.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Indefinite integral of secant cubed $\int \sec^3 x\>dx$ I need to calculate the following indefinite integral:
$$I=\int \frac{1}{\cos^3(x)}dx$$
I know what the result is (from Mathematica):
$$I=\tanh^{-1}(\tan(x/2))+(1/2)\sec(x)\tan(x)$$
but I don't know how to integrate it myself. I have been trying some substitutions to no avail.
Equivalently, I need to know how to compute:
$$I=\int \sqrt{1+z^2}dz$$
which follows after making the change of variables $z=\tan x$.
| It appears that Mathematica is using the "universal change" for trigonometric integrals $\tan(x/2)=t$:
http://en.wikibooks.org/wiki/Calculus/Integration_techniques/Tangent_Half_Angle.
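As a quick numerical check (my own addition, not part of the answer), one can verify that the derivative of the antiderivative reported by Mathematica really is $\sec^3 x$, using a central difference at a few points of $(-\pi/2, \pi/2)$:

```python
import math

def sec(x):
    return 1 / math.cos(x)

def F(x):
    # Antiderivative from the question: atanh(tan(x/2)) + (1/2) sec(x) tan(x)
    return math.atanh(math.tan(x / 2)) + 0.5 * sec(x) * math.tan(x)

h = 1e-6
for x in (0.3, 0.7, 1.2):
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # numerical F'(x)
    print(deriv, sec(x) ** 3)                 # the two columns agree to ~6 digits
```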
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 8,
"answer_id": 2
} |
Liouville's theorem via contraction mappings Let $f$ be a function that maps $\mathbb{Z}^2$ to $\mathbb{R}$ and consider the operator $T$ which replaces the value of $f$ at $(i,j)$ by the average of the values of $f$ at its four neighbors (left, right, down, up): $$ Tf(i,j) = \frac{f(i-1,j) + f(i+1,j) + f(i,j-1) + f(i,j+1)}{4}.$$ The discrete-version of Liouville's theorem say that the equation $$ Tf = f$$ does not have any solutions $f$ which are bounded.
My question is whether it's possible to prove this by demonstrating that $T$ is a contraction mapping in some sense.
Specifically, let $\mathbb{S}$ be the set of bounded $f: \mathbb{Z}^2 \rightarrow \mathbb{R}$ with the equivalence relation $f = g$ whenever $f-g$ is a constant. Is there a metric on $\mathbb{S}$ with respect to which $T$ is a strict contraction?
Note that is related, but not identical with, my other question. In particular, a positive answer to this question would likely imply an answer to that question as well.
| No. If such a metric existed then $T^2$ would also be a contraction, hence would also have a unique fixed point (namely the equivalence class of constant functions), which it doesn't: $T^2$ fixes every function such that $f(i, j) = g(i + j \bmod 2)$ for some $g$, and there are infinitely many equivalence classes of such functions.
(More generally, in order for such a metric to exist for a general set map $T : X \to X$ it is necessary that $T^n$ has a unique fixed point for all $n \in \mathbb{N}$. As it turns out, this is sufficient.)
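The parity functions fixed by $T^2$ but not by $T$ are easy to exhibit numerically; a small sketch (my own illustration):

```python
# f(i, j) = g((i + j) mod 2) is fixed by T^2 but generally not by T.
def make_f(g0, g1):
    return lambda i, j: g0 if (i + j) % 2 == 0 else g1

def T(f):
    # Replace the value at (i, j) by the average of its four neighbours.
    return lambda i, j: (f(i-1, j) + f(i+1, j) + f(i, j-1) + f(i, j+1)) / 4

f = make_f(0.0, 5.0)
Tf, TTf = T(f), T(T(f))
samples = [(0, 0), (1, 2), (-3, 4), (7, 7)]
print(all(TTf(i, j) == f(i, j) for i, j in samples))   # True: T^2 f = f
print(any(Tf(i, j) != f(i, j) for i, j in samples))    # True: T f != f
```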
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/154944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Need help with the integral $\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$ I'm having a problem resolving the following integral, spent almost all day trying.
Any help would be appreciated.
$$\int \frac{2\tan(x)+3}{5\sin^2(x)+4}\,dx$$
| If you convert everything to sines and cosines, you get $$\frac{2\tan x+3}{5\sin^2x+4}=\frac{2\sin x+3\cos x}{5\sin^2x\cos x+4\cos x}\;,\tag{1}$$ which probably doesn’t look very promising. However, you can rewrite it as $$\frac{2\sin x+3\cos x}{\cos x(9-5\cos^2x)}=\frac{2\sin x}{\cos x(9-5\cos^2x)}+\frac3{9-5\cos^2x}\;.$$ The first term of this is nice: apart from a factor of $-1$, $\sin x$ is the derivative of $\cos x$, so it can be integrated by substituting $u=\cos x$ and using partial fractions. The second term still requires a bit of work. When sines and cosines don’t do the job, try secants and tangents:
$$\frac3{9-5\cos^2x}=\frac3{9-\frac5{\sec^2x}}=\frac{3\sec^2x}{9\sec^2x-5}=\frac{3\sec^2x}{9(\tan^2x+1)-5}=\frac{3\sec^2x}{9\tan^2x+4}\;,$$ which can be integrated by substituting $u=\tan x$.
If you don’t see any way forward from $(1)$, you can always jump directly to the stage of trying to get secants and tangents.
$$\frac{2\tan x+3}{5\sin^2x+4}=\frac{2\tan x+3}{5\tan^2x\,\cos^2x+4}=\frac{2\tan x+3}{5\frac{\tan^2x}{\sec^2x}+4}=\frac{2\sec^2x\tan x+3\sec^2x}{5\tan^2x+4\sec^2x}\;,$$
and after converting the $4\sec^2x$ in the denominator to $4\tan^2x+4$, we have
$$\frac{(2\tan x+3)\sec^2x}{9\tan^2x+4}\;;$$
$\sec^2x$ being the derivative of $\tan x$, the substitution $u=\tan x$ will turn this into a nice rational function of $u$. This is exactly what Chandrasekhar achieved directly in his solution by multiplying the fraction by $\dfrac{\sec^2x}{\sec^2x}$. That’s a nicer, more efficient way to go. My purpose in writing out this more roundabout route is to show that you often don’t have to see the really clever tricks if you can put together enough more routine manipulations.
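The key algebraic identity, $\dfrac{2\tan x+3}{5\sin^2x+4}=\dfrac{(2\tan x+3)\sec^2x}{9\tan^2x+4}$, can be spot-checked numerically (a small sketch of mine, not part of the answer):

```python
import math

def lhs(x):
    return (2 * math.tan(x) + 3) / (5 * math.sin(x) ** 2 + 4)

def rhs(x):
    sec2 = 1 / math.cos(x) ** 2
    return (2 * math.tan(x) + 3) * sec2 / (9 * math.tan(x) ** 2 + 4)

for x in (0.2, 0.9, 1.4, 2.5):
    print(lhs(x), rhs(x))    # identical up to rounding
```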
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Counting matrices over $\mathbb{Z}/2\mathbb{Z}$ with conditions on rows and columns I want to solve the following seemingly combinatorial problem, but I don't know where to start.
How many matrices in $\mathrm{Mat}_{M,N}(\mathbb{Z}_2)$ are there such that the sum of entries in each row and the sum of entries in each column is zero? More precisely find cardinality of the set
$$
\left\{A\in\mathrm{Mat}_{M,N}(\mathbb{Z}/2\mathbb{Z}): \forall j\in\{1,\ldots,N\}\quad \sum\limits_{k=1}^M A_{kj}=0,\quad \forall i\in\{1,\ldots,M\}\quad \sum\limits_{l=1}^N A_{il}=0 \right\}
$$.
Thanks for your help.
| Take any $(M-1)\times (N-1)$ matrix of $0$'s and/or $1$'s. We can in a unique way add a column of $0$'s and/or $1$'s at the right and a row at the bottom to make all row and column sums congruent to $0$. In coding theory they would be called check bits. Note that the entry at bottom right is uniquely determined, for the sum of row sums must, by basic accounting principles, be equal to the sum of column sums.
So there are just as many restricted $M\times N$ matrices as there are unrestricted $(M-1)\times (N-1)$ matrices.
The number of restricted $M\times N$ matrices is therefore $2^{(M-1)(N-1)}$.
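For small $M, N$ the count $2^{(M-1)(N-1)}$ can be confirmed by exhaustive enumeration (my own check, not part of the answer):

```python
from itertools import product

def count(M, N):
    """Count M x N 0/1 matrices with all row and column sums even."""
    total = 0
    for bits in product((0, 1), repeat=M * N):
        rows = [bits[i*N:(i+1)*N] for i in range(M)]
        if all(sum(r) % 2 == 0 for r in rows) and \
           all(sum(r[j] for r in rows) % 2 == 0 for j in range(N)):
            total += 1
    return total

for M, N in [(2, 2), (2, 3), (3, 3)]:
    print(count(M, N), 2 ** ((M - 1) * (N - 1)))   # the two counts agree
```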
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How to prove that the sum and product of two algebraic numbers is algebraic? Suppose $E/F$ is a field extension and $\alpha, \beta \in E$ are algebraic over $F$. Then it is not too hard to see that when $\alpha$ is nonzero, $1/\alpha$ is also algebraic. If $a_0 + a_1\alpha + \cdots + a_n \alpha^n = 0$, then dividing by $\alpha^{n}$ gives $$a_0\frac{1}{\alpha^n} + a_1\frac{1}{\alpha^{n-1}} + \cdots + a_n = 0.$$
Is there a similar elementary way to show that $\alpha + \beta$ and $\alpha \beta$ are also algebraic (i.e. finding an explicit formula for a polynomial that has $\alpha + \beta$ or $\alpha\beta$ as its root)?
The only proof I know for this fact is the one where you show that $F(\alpha, \beta) / F$ is a finite field extension and thus an algebraic extension.
| Consider fields $ E \supseteq F $, and elements $ \alpha, \beta \in E $ algebraic over $ F $. We want to show $ \alpha + \beta $, $ \alpha \beta $ are algebraic over $ F $ too. If even one of $ \alpha, \beta $ is $ 0 $, the result is trivial, so let's take both $ \alpha, \beta $ to be non-zero.
We have $ \alpha ^m + a_{m-1} \alpha ^{m-1} + \ldots + a_0 = 0 $ ( each $ a_i \in F $ ), and $ \beta ^n + b_{n-1} \beta ^{n-1} + \ldots + b_0 = 0 $ ( each $ b_j \in F $ ).
(The first equation lets us express all powers of $ \alpha $ as $F$-combinations of $ 1, \alpha, \ldots, \alpha ^{m-1} $. Similarly for $ \beta $)
Let $$ Z := \, [ \, \alpha ^0 \beta ^0, \alpha ^0 \beta ^1, \ldots, \alpha ^0 \beta ^{n-1} ; \alpha ^1 \beta ^0, \ldots, \alpha ^1 \beta ^{n-1} ; \ldots ; \alpha ^{m-1} \beta ^{0}, \ldots, \alpha ^{m-1} \beta ^{n-1} \, ]^{T} \in E^{mn} $$
Now notice we can express $ (\alpha + \beta)Z $ as $ M_1 Z $ with $ M_1 \in F^{mn \times mn} $. So $ ( ( \alpha + \beta ) I - M_1 ) Z = 0 $, and as $ Z \neq 0 $ we have $ \det( (\alpha + \beta)I - M_1 ) = 0 $. Hence $ \alpha + \beta $ is a root of the polynomial $ P(t) := \det( tI - M_1 ) \in F[t] $, and is therefore algebraic over $ F $. Similarly we can show $ \alpha \beta $ is algebraic over $ F $ (Write $ \alpha \beta Z = M_2 Z $ and proceed as above).
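The matrices $M_1, M_2$ can be built explicitly as Kronecker products of companion matrices. A sketch with numpy, using my own choice $\alpha=\sqrt2$ (root of $x^2-2$) and $\beta=\sqrt3$ (root of $x^2-3$):

```python
import numpy as np

# Companion matrices act as multiplication by alpha resp. beta
# on the bases {1, alpha} and {1, beta}; on the tensor basis Z
# the matrices M1 and M2 from the answer become:
A = np.array([[0, 2], [1, 0]], dtype=float)   # multiplication by alpha = sqrt(2)
B = np.array([[0, 3], [1, 0]], dtype=float)   # multiplication by beta  = sqrt(3)
I = np.eye(2)

M1 = np.kron(A, I) + np.kron(I, B)   # multiplication by alpha + beta
M2 = np.kron(A, B)                   # multiplication by alpha * beta

a, b = np.sqrt(2), np.sqrt(3)
print(min(abs(np.linalg.eigvals(M1) - (a + b))))  # ~0: a+b is an eigenvalue of M1
print(min(abs(np.linalg.eigvals(M2) - a * b)))    # ~0: ab is an eigenvalue of M2
print(np.poly(M1))   # char. poly approx [1, 0, -10, 0, 1], i.e. x^4 - 10x^2 + 1
```

So $\det(tI - M_1) = t^4 - 10t^2 + 1$ is a rational polynomial with $\sqrt2+\sqrt3$ as a root, exactly as the argument predicts.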
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51",
"answer_count": 5,
"answer_id": 3
} |
Rational embedded irreducible curves in a complex surface. Given a complex surface $X$ and an embedded irreducible compact curve $C$ with arithmetic genus $g(C) = 0$, how can one show that $C$ is non-singular?
Thanks for your answers!
| Given an irreducible complete curve $C$ and its normalization $\tilde C$ you have the following relation between their arithmetic genera:$$ p_a(C)=p_a(\tilde C)+\sum_{P\in Sing(C)}dim_\mathbb C (\mathcal O_{C,\tilde P}/\mathcal O_{C, P})$$
where for a singular point $P\in Sing (C)$ the ring $\mathcal O_{C,\tilde P}$ is the integral closure of the ring $\mathcal O_{C, P}$.
It is then clear that $ p_a(C)=0$ forces $ p_a(\tilde C)=0$ and also forces each local correction term to vanish, i.e. $\mathcal O_{C,\tilde P}=\mathcal O_{C, P}$ for every $P$, so that $C$ has no singular points at all. The curve $C$ was smooth all along and is isomorphic to $\mathbb P^1$.
Note that the embedding of $C$ in some surface $X$ is irrelevant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve this quartic equation? For the quartic equation:
$$x^4 - x^3 + 4x^2 + 3x + 5 = 0$$
I tried Ferrari so far and a few others but I just can't get its complex solutions. I know it has no real solutions.
| Use the magic quartic formula!
$$a = -1, b = 4, c = 3, d = 5$$
$$u = -\frac{29}{12}, \Delta_0 = 85, \Delta_1 = -826, Q = \sqrt[3]{666i-413}$$
(Here $Q^3 = \frac{\Delta_1 + \sqrt{\Delta_1^2-4\Delta_0^3}}{2} = \frac{-826+1332i}{2} = -413+666i$.) The discriminant is $65712 > 0$, meaning that the roots are either all real or form two complex-conjugate pairs; since the quartic has no real roots, all four are complex.
$$v = \frac{(\sqrt[3]{666i-413})^2 + 85}{3\sqrt[3]{666i-413}} = \frac{(666i-413)\sqrt[3]{666i-413} + 85(\sqrt[3]{666i-413})^2}{3(666i-413)}$$
$$=\frac{614125\sqrt[3]{666i-413} - 85(666i+413)(\sqrt[3]{666i-413})^2}{1842375}$$
And you should be able to take it from there.
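Hand arithmetic with the general quartic formula is error-prone, so the intermediate quantities can be cross-checked numerically. A sketch with numpy (the variable names follow the answer's notation; not part of the original answer):

```python
import numpy as np

a, b, c, d = -1, 4, 3, 5                      # x^4 + a x^3 + b x^2 + c x + d
D0 = b**2 - 3*a*c + 12*d                      # Delta_0
D1 = 2*b**3 - 9*a*b*c + 27*c**2 + 27*a**2*d - 72*b*d   # Delta_1
disc = (4*D0**3 - D1**2) // 27                # the discriminant
Qcubed = (D1 + np.sqrt(complex(D1**2 - 4*D0**3))) / 2  # quantity under the cube root
print(D0, D1, disc)      # 85 -826 65712
print(Qcubed)            # -413 + 666i (up to rounding)

roots = np.roots([1, a, b, c, d])
print(roots)             # all four roots have nonzero imaginary part
```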
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Tensor operation on a vector space From the various definitions provided in the article https://en.wikipedia.org/wiki/Tensor, the tensor seems always to be defined, even in the more abstract forms, as a multilinear map, from a product of vector and dual spaces to the underlying field.
However, in applied mathematics, one often come across a tensor when used in the form that maps elements of vector and dual spaces to elements of vectors and dual spaces, like this, for example:
$\theta^j=\mu_l^j dx^l$
No operation is defined in the article between a tensor and a dual vector, which gives a dual vector. But since other operations are defined on tensors, should the previous notation actually be read in two steps:
*
*a tensor product: $\mu_l^j dx^k$
*followed by a contraction of the indices $k$ and $l$
| Yes. This is what is called the Einstein summation convention.
(I happen to think it is ultimately less confusing to define tensors as elements of tensor products of a vector space and its dual, but whatever floats your boat, I suppose.)
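The two-step reading (tensor product, then contraction) and the one-step Einstein summation can be compared directly with numpy's `einsum` (an illustration of mine, with made-up 3-dimensional components):

```python
import numpy as np

mu = np.arange(9.0).reshape(3, 3)     # components mu^j_l
dx = np.array([1.0, 2.0, 3.0])        # components dx^l

# One-step Einstein summation: the repeated index l is summed over.
theta = np.einsum('jl,l->j', mu, dx)

# Two-step reading: tensor product mu^j_l dx^k ...
outer = np.einsum('jl,k->jlk', mu, dx)
# ... followed by contraction of the indices k and l.
theta2 = np.einsum('jll->j', outer)

print(np.allclose(theta, theta2), np.allclose(theta, mu @ dx))  # True True
```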
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Is the integral closure of a local domain in a finite extension of its field of fractions semi-local?
Is the integral closure of a local domain in a finite extension of its field of fractions semi-local?
If the answer is negative, I wonder under what conditions it would be semi-local.
EDIT
Here's an example of a local domain which is not necessarily a Japanese ring.
Let $A$ be a valuation ring, $K$ its field of fractions.
Let $P$ be the maximal ideal of $A$.
Let $L$ be a finite extension of $K$.
Let $B$ be the integral closure of $A$ in $L$.
It is well-known that there exist only finitely many valuation rings of $L$ dominating $A$.
Let $M$ be a maximal ideal of $B$.
It is well-known that $M$ lies over $P$.
Hence $B_M$ dominates $A$.
There exists a valuation ring $R$ of $L$ dominating $B_M$.
Since $M$ is determined by $R$ and $R$ dominates $A$, there exist only finitely many maximal ideals of $B$.
Hence $B$ is a semilocal ring.
Note that $B$ is not necessarily a finite $A$-module(even if $A$ is a discrete valuation ring).
Hence $A$ is not necessarily a Japanese ring.
| Here are some sufficient conditions:
Let $A$ be a local domain, $K$ its fraction field, $L$ a finite extension of $K$, and $B$ the integral closure of $A$ in $L$. If $A$ is Noetherian and integrally closed, and $L$ is separable over $K$, then $B$ is necessarily finite over $A$, and so is semi-local (by going-up/down-type theorems).
If $A$ is not integrally closed, but its integral closure in $K$ is finite over itself, then again $B$ will be finite over $A$.
The condition that the integral closure of $A$ in $K$ be finite over $A$ is (at least by some people) called N-1. More generally, the condition that $B$ be finite over $A$ is called N-2, or Japanese. (The letter N here stands for Nagata, and I believe Grothendieck coined the adjective Japanese for these rings because these properties were studied by Nagata and the commutative algebra school around him in Japan.)
So if $A$ is a Japanese ring, then $B$ will be finite over $A$, and hence semilocal. Of course, this is rather tautological: its utility follows from the fact that many rings (indeed, in some sense, most rings --- i.e. most of the rings that come up in algebraic number theory and algebraic geometry) are Japanese. E.g. all finitely generated algebras over a field, or over $\mathbb Z$, or over a complete local ring, are Japanese.
Here are some useful wikipedia entries related to this topic: Nagata rings and Excellent rings.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $ \int_1^{\infty} \frac{\{t\} (\{t\} - 1)}{t^2} dt$ I am interested in a proof of the following.
$$ \int_1^{\infty} \dfrac{\{t\} (\{t\} - 1)}{t^2} dt = \log \left(\dfrac{2 \pi}{e^2}\right)$$
where $\{t\}$ is the fractional part of $t$.
I obtained a circuitous proof for the above integral. I'm curious about other ways to prove the above identity. So I thought I will post here and look at others suggestion and answers.
I am particularly interested in different ways to go about proving the above.
I'll hold off from posting my proof for sometime to see what all different proofs I get for this.
| The integral on $[1,N+1]$ is (see @Rahul's first comment)
$$
I_N=\sum_{n=1}^N\big(2+2\log n+(2n-1)\log n-(2n+1)\log(n+1)\big),
$$
that is,
$$
I_N=2N+2\log(N!)-(2N+1)\log(N+1).
$$
Thanks to Stirling's approximation, $2\log(N!)=(2N+1)\log N-2N+\log(2\pi)+o(1)$. After some simplifications, this leads to
$$
I_N=\log(2\pi)-(2N+1)\log(1+1/N)+o(1)=\log(2\pi)-2+o(1).
$$
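The closed form for $I_N$ converges slowly (the error is $O(1/N)$), but the limit $\log(2\pi)-2 \approx -0.16212$ is easy to confirm numerically (a sketch of mine using `lgamma` for $\log N!$):

```python
import math

def I_N(N):
    # Closed form from above: I_N = 2N + 2 log(N!) - (2N+1) log(N+1)
    return 2*N + 2*math.lgamma(N + 1) - (2*N + 1)*math.log(N + 1)

limit = math.log(2 * math.pi) - 2   # = log(2*pi/e^2), approximately -0.16212
for N in (10, 1000, 10**6):
    print(N, I_N(N), limit)         # I_N approaches the limit like 1/(6N)
```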
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
$\{1,1\}=\{1\}$, origin of this convention Is there any book that explicitly contain the convention that a representation of the set that contain repeated element is the same as the one without repeated elements?
Like $\{1,1,2,3\} = \{1,2,3\}$.
I have looked over a few books and it didn't mention such thing. (Wikipedia has it, but it does not cite source).
In my years learning mathematics in both the US and Hungary, this convention is known and applied. However, recently I noticed some Chinese students claim they have never seen this before, and I don't remember seeing it in any book either.
I never found a book explicitly says what are the rules in how $\{a_1,a_2,a_3,\ldots,a_n\}$ specify a set. Some people believe it can only specify a set if $a_i\neq a_j \Leftrightarrow i\neq j$. The convention shows that doesn't have to be satisfied.
| For variety, I'll note that both magma and python have a set constructor using a comma-separated list surrounded by curly braces, and they both allow repeated entries. For example, in python:
>>> {1,1,2,3}
{1, 2, 3}
>>> {3,2,1} == {1,1,2,3}
True
Mathematica, on the other hand, uses curly braces to construct lists rather than sets, so it would behave differently. Array initializers in C also use curly braces -- but again you're creating lists, not sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 4
} |
$p$ an prime number of the form $p=2^m+1$. Prove: if $(\frac{a}{p})=-1$ so $a$ is a primitive root modulo $p$ Let $p$ be an odd prime number of the form $p=2^m+1$.
I'd like your help proving that if $a$ is an integer such that $(\frac{a}{p})=-1$, then $a$ is a primitive root modulo $p$.
If $a$ is not a primitive root modulo $p$, then $Ord_{p}(a)=t$ where $t<p-1=2^m$ and $t\mid 2^m$, since $Ord_{p}(a)\mid p-1$. I also know that there are no solutions to the congruence $x^2\equiv a \pmod p$. How can I use this in order to reach a contradiction?
Thanks a lot.
| Ok well we know by Fermat's little theorem that:
$a^{p-1} \equiv 1$ mod $p$
i.e.
$a^{2^m} \equiv 1$ mod $p$
Now $a$ must have order $2^i$ for some $0\leq i\leq m$ by Lagrange's theorem.
But the fact that the Legendre symbol is $-1$ coupled with Euler's criterion tells us that:
$a^{\frac{p-1}{2}} \equiv \left(\frac{a}{p}\right)$ mod $p$
i.e.
$a^{2^{m-1}} \equiv -1$ mod $p$
So that $a$ cannot have order $2^i$ for $0\leq i < m$ (otherwise this congruence would be false).
Thus $a$ has order $2^m$, and so is a primitive root.
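For a concrete check of the theorem (an illustration, not part of the proof), take the Fermat prime $p = 257 = 2^8 + 1$ and test every residue:

```python
p = 257
m = 8
for a in range(2, p):
    if pow(a, (p - 1) // 2, p) == p - 1:      # Euler's criterion: (a/p) = -1
        # the order of a divides 2^m, and equals 2^m iff a^(2^(m-1)) != 1
        assert pow(a, 2 ** (m - 1), p) != 1
print("every quadratic non-residue mod 257 is a primitive root")
```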
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Prove that $4^{2n} + 10n -1$ is a multiple of 25 Prove that if $n$ is a positive integer then $4^{2n} + 10n - 1$ is a multiple of $25$
I see that proof by induction would be the logical thing here so I start with trying $n=1$ and it is fine. Then assume statement is true and substitute $n$ by $n+1$ so I have the following:
$4^{2(n+1)} + 10(n+1) - 1$
And I have to prove that the above is a multiple of 25. I tried simplifying it but I can't seem to get it right. Any ideas? Thanks.
| Another solution, via congruences mod. $25$. First note $16\bmod 25$ generates a cyclic group of order $5$:
$$16^2\equiv 6,\quad16^3\equiv 6\cdot 16\equiv-4,\quad16^4\equiv6^2\equiv 11, \quad 16^5\equiv6\cdot-4\equiv1\mod25.$$
So let's examine each case:
*
*If $n\equiv 0\mod 5$, $\;16^n+10n-1\equiv1+0-1=0$.
*If $n\equiv 1\mod 5$, $\;16^n+10n-1\equiv16+10-1=25\equiv 0$.
*If $n\equiv 2\mod 5$, $\;16^n+10n-1\equiv 6+20-1=25\equiv 0$.
*If $n\equiv 3\mod 5$, $\;16^n+10n-1\equiv -4+30-1=25\equiv 0$.
*If $n\equiv 4\mod 5$, $\;16^n+10n-1\equiv 11+40-1=50\equiv 0$.
Thus in each case, $\;16^n+10n-1$ is divisible by $25$.
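The claim is easy to spot-check by machine as well; a one-line sketch (mine, not part of the answer):

```python
# Direct verification of 25 | 4^(2n) + 10n - 1 for the first hundred n.
assert all((4 ** (2 * n) + 10 * n - 1) % 25 == 0 for n in range(1, 101))
print("verified for n = 1, ..., 100")
```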
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 3
} |
Value of the integral $I_r = \int_{C_r} \frac{dz}{z(z-1)(z-2)}$ It is given that $$ I_r =\int_{C_r}\frac{dz}{z(z-1)(z-2)}$$
where $ C_r = \{z\in \Bbb{C}: |z|=r\}$ , $ r >0 $, $r\neq 1,2$ . Then which of the following holds:
*
*$ I_r = 2 \pi\ i $ if $r\in(2,3)$
*$ I_r = -2 \pi\ i $ if $r\in(1,2)$
*$ I_r = 0 $ if $r >3$
Please suggest which option is correct.
| Use Cauchy's Residue Theorem:$$\oint_\gamma f(z)dz=2\pi i\sum_{a_i\in A}\operatorname{Res}_{z=a_i}f(z)$$
When $\,A=\,$interior of the rectifiable curve $\,\gamma\,$ which meets no poles of $\,f\,$ .
Note that taking $\,r\in (2,3)\,$ or taking $\,r>3\,$ is the same regarding this integral (why?), and since all the function's poles are simple you can easily calculate its residue at pole $\,a_k\,$ by evaluating $$\lim_{z\to a_k}(z-a_k)f(z)$$ with $$f(z):=\frac{1}{z(z-1)(z-2)}$$
Added For any $\,r>0\,\,,\mathcal{C}_r\,$ is a circle centered at the origin and radius $\,r\,$, thus for instance:
$\,(2)\,$ For $\,r\in (1,2)\,\,,\,\mathcal{C}_r\,$ is a circle centered at the origin that intersects the $x-$axis at some point between $\,1\,$ and $\,2\,$, thus the inner part of this circle, $\,A\,$ (which is enclosed by the path $\,|z|=r\,$, the circle's perimeter if you will) only contains the poles $\,0,1\,$ of the function $\,f(z)\,$, and thus here $$I_r=2\pi i\sum_{a_i\in A}\operatorname{Res}_{z=a_i}f(z)=2\pi i\left(\frac{1}{2}+(-1)\right)=-\pi i$$
Why? Because for example, as stated above: $$\operatorname{Res}_{z=1}f(z)=\lim_{z\to 1}\left[(z-1)\frac{1}{z(z-1)(z-2)}\right]=\frac{1}{1\cdot (1-2)}=-1$$
Similarly, the residue at $\,z=0\,$ equals $\,1/2\,$, as you can readily check, and now you can try the other options...
P.S. The formula above for evaluating residues works only for simple poles!
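The residue computations can also be checked by evaluating the contour integral numerically, parametrizing $z = re^{it}$ so that $dz = iz\,dt$ (a sketch of mine, not part of the answer):

```python
import numpy as np

def contour_integral(r, n=200000):
    """Numerically integrate f(z) = 1/(z(z-1)(z-2)) over |z| = r."""
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    z = r * np.exp(1j * t)
    f = 1 / (z * (z - 1) * (z - 2))
    return np.sum(f * 1j * z) * (2*np.pi / n)   # dz = i z dt

print(contour_integral(0.5))   # ~ pi*i  (only the pole at 0 inside)
print(contour_integral(1.5))   # ~ -pi*i (poles 0 and 1 inside)
print(contour_integral(4.0))   # ~ 0     (all three poles inside: residues cancel)
```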
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why does it always take n numbers to characterize a point in n-dimensional space (or does it)? I don't know if this is obvious and a dumb question or not, but, here we go. To characterize a point in 2-d space we can use standard $x,y$ coordinates or we can use polar coordinates. There are probably other ways to do it other than those two as well. It's very interesting to me that those somehow both require exactly two numbers—either an $x$ and a $y$ or an $r$ and a $\theta$. It seems like a magical coincidence to me that these two completely different ways to describe a point require the same number of numbers.
Then moving into 3-d space, there's the same thing. We can use $(x,y,z)$ or $(\rho,\phi ,z)$ (cylindrical coordinates) or $(r,\theta ,\phi)$ (spherical coordinates). These coordinate systems seem to be to function in vastly different ways, and yet they all take three numbers. It's a conspiracy.
So I mean on the one hand, it's intuitive that it should take three numbers to describe three dimensional space. On the other hand, I can't figure out why this should be true. So question a) why is this the case and question b) can we imagine a world where there were points in n dimensions and two coordinate systems that took different numbers of numbers to characterize points?
P.S. I don't really know what to tag this as.
| It only takes one number. Just braid the digits up in some fashion. For example if we are working in base 10:
$$[1.1 \,\,\, 2.2 \,\,\, 3.3]^T \to 123.123$$
The resulting number may need more digits to store, but it's clearly just one number.
To recover just do a loop where you do a round robin:
*
*Check modulo 10.
*Append current bin.
*Drop last digit.
*Repeat for next bin.
(Inspired by Conway's base-13 function.)
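The braiding and its round-robin inverse can be sketched in a few lines (my own illustration, restricted to zero-padded non-negative integers for simplicity):

```python
def braid(x, y, width=6):
    """Interleave the digits of two non-negative integers (zero-padded)."""
    sx, sy = str(x).zfill(width), str(y).zfill(width)
    return int(''.join(a + b for a, b in zip(sx, sy)))

def unbraid(n, width=6):
    """Invert braid: even positions give x, odd positions give y."""
    s = str(n).zfill(2 * width)
    return int(s[0::2]), int(s[1::2])

print(braid(123, 456))           # 142536: the digits alternate
print(unbraid(braid(123, 456)))  # (123, 456): the braiding is invertible
```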
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
I can't find differences between $P(1+r)^n$ and $P(2.71828)^{rn}$ They told me $P(1 + r)^n$ can be used to calculate money interest, example:
You invest $15,000.00 at 20% annual interest, for 3 years:
$15,000(1 + 0.20)^3$ = 25,920
And that $P(2.71828)^{rn}$ can be used to calculate population growth, example:
We have 500 bacteria, growing at a rate of 25% monthly, for 16 months:
$(500)(2.71828)^{(0.25)(16)}$ = roughly 27,299
I can't tell the difference, that is, when to use one over the other. I mean something other than just spotting the keywords money and population. How do I tell when to use one over the other?
As far as I can tell, they both are something like:
*
*We have a starting value, a percent, a period of time
| The difference is whether you try to model each discrete step or whether you choose a continuous model.
If you regard money, then you know that your interest will be calculated annually, so the discrete model will be exact.
If you have bacteria or population, you do not have any control about the number of babies or bacteria born at a particular time. You just know that you have enough people or bacteria, so that a statistical approach to the growth rate applies.
This means that you say that there are so many bacteria or people that you model it in a continuous way.
But you have to take care. In the continuous model, $r=0.25$ in your formula does not mean that you have a quarter more bacteria at the end of the month. This is a different $r$ from the one in the discrete model.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$\bar{\partial}$-Poincaré lemma This is $\bar{\partial}$-Poincaré lemma: Given a holomorphic function $f:U\subset \mathbb{C} \to \mathbb{C}$, locally on $U$ there is a holomorphic function $g$ such that: $$\frac{\partial g}{\partial \bar z}=f$$
The author says that this is a local statement so we may assume $f$ with compact support and defined on the whole plane $\mathbb{C}$, my question is why she says that... thanks.
*Added*
$f,g$ are supposed to be $C^k$, not holomorphic; by definition $$\frac{\partial g}{\partial \bar z}=0$$ if $g$ were holomorphic...
| Since the statement is local, it suffices to solve the equation in a small neighborhood of each point of $U$. Multiplying $f$ by a smooth bump function that equals $1$ on such a neighborhood and vanishes outside a compact subset of $U$ does not change $f$ near the point, and the product extends by zero to a compactly supported function on all of $\mathbb{C}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What are the zero divisors of $C[0,1]$? Suppose you have a ring $(C[0,1],+,\cdot,0,1)$ of continuous real valued functions on $[0,1]$, with addition defined as $(f+g)(x)=f(x)+g(x)$ and multiplication defined as $(fg)(x)=f(x)g(x)$. I'm curious what the zero divisors are.
My hunch is that the zero divisors are precisely the functions whose zero set contains an open interval. My thinking is that if $f$ is a function which vanishes on an open interval $(a,b)$, then there exists some function which is nonzero on $(a,b)$, but zero everywhere on $[0,1]\setminus(a,b)$. Conversely, if $f$ is not zero on any open interval, then every zero is isolated in a sense. But if $fg=0$ for some $g$, then $g$ is zero everywhere except possibly at these isolated points; continuity then implies that $g$ is also zero at the zeros of $f$, so $g=0$ and $f$ is not a zero divisor.
I have a hard time stating this formally though, since I'm only studying algebra, and not analysis. Is this intuition correct, and if so, how could it be rigorously expressed?
| If $f$ and $g$ are not identically zero and $f \cdot g = 0$ then $g^{-1}(\mathbb{R} \setminus \{0\})$ is open and non-empty and $f$ vanishes on this open subset. You already did the implication in the other direction. So $f$ is a zero divisor if and only if it is not identically zero and vanishes on some non-empty open set.
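A concrete zero-divisor pair (my own illustration of the characterization above): $f$ vanishes on $[0,1/2]$, $g$ vanishes on $[1/2,1]$, both are continuous and nonzero, yet $fg$ is identically zero.

```python
# f vanishes on [0, 1/2], g vanishes on [1/2, 1]; their product is zero.
f = lambda x: max(0.0, x - 0.5)
g = lambda x: max(0.0, 0.5 - x)

xs = [i / 100 for i in range(101)]   # sample points of [0, 1]
print(all(f(x) * g(x) == 0 for x in xs))                        # True: f*g = 0
print(any(f(x) != 0 for x in xs), any(g(x) != 0 for x in xs))   # True True
```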
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/155938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 4,
"answer_id": 0
} |
Why are bump functions compactly supported? Smooth and compactly supported functions are called bump functions. They play an important role in mathematics and physics.
In $\mathbb{R}^n$ and $\mathbb{C}^n$, a set is compact if and only if it is closed and bounded.
It is clear why we like to work with functions that have a bounded support. But what is the advantage of working with functions that have a support that is also closed? Why do we often work with compactly supported functions, and not just functions with bounded support?
| In metric spaces a continuous real-valued function whose domain is compact is uniformly continuous, and it attains a minimum and a maximum value.
These properties are not to be taken lightly; for example, the function $x\mapsto\frac1{x(1-x)}$ on $(0,1)$, and zero elsewhere, has bounded support, but the function itself is not bounded.
Generally speaking, compact sets are very well-behaved because everything that can be characterized by open sets has, in some sense, a finite character. Continuous functions are such object, as the continuous preimage of an open set is open, so continuous functions from a compact domain have a very well-behaved nature.
The above is compatible with the following statement: Mathematicians like well-behaved objects. I have to admit that until recently I always tried to explore the naughty terrains of the mathematics and slowly I understand more and more why well-behaved objects are good. Especially when they are enough to describe a whole lot of mathematics to go around.
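To make the counterexample concrete (my own illustration): the standard bump $e^{-1/(1-x^2)}$ on its compact support $[-1,1]$ attains a maximum, while the function with the non-closed support above does not.

```python
import math

def bump(x):
    """Standard smooth bump: supported on the compact set [-1, 1]."""
    return math.exp(-1 / (1 - x * x)) if abs(x) < 1 else 0.0

def unbounded(x):
    """Support inside (0, 1), bounded but not closed; the function is unbounded."""
    return 1 / (x * (1 - x)) if 0 < x < 1 else 0.0

xs = [i / 10000 for i in range(-15000, 15001)]
print(max(bump(x) for x in xs))        # ~0.3679 = e^{-1}, attained at x = 0
print(max(unbounded(x) for x in xs))   # large (~10^4 on this grid): no maximum exists
```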
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
How do I show that this function is always $> 0$
Show that $$f(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} +
\frac{x^4}{4!} > 0 ~~~ \forall_x \in \mathbb{R}$$
I can show that the first 3 terms are $> 0$ for all $x$:
$(x+1)^2 + 1 > 0$
But, I'm having trouble with the last two terms. I tried to show that the following was true:
$\frac{x^3}{3!} \leq \frac{x^4}{4!}$
$4x^3 \leq x^4$
$4 \leq x$
which is not true for all $x$.
I tried taking the derivative and all that I could ascertain was that the the function became more and more increasing as $x \rightarrow \infty$ and became more and more decreasing as $x \rightarrow -\infty$, but I couldn't seem to prove that there were no roots to go with this property.
| You have had some good ideas so far. You tried to see when this was true: $$\frac{x^3}{3!} \leq \frac{x^4}{4!}.$$
You rearranged this to $4x^3\leq x^4$ but you made an incorrect conclusion when you divided by $x^3$ (if $x<0$ then the inequality sign should flip). Instead, lets divide by $x^2$ to get $4x \leq x^2$ or $x(x-4)\geq 0.$ This is true when $x\leq 0$ or $x\geq 4$ so the desired inequality is true in that range.
For $0< x < 4$ we don't have $\frac{x^3}{3!} \leq \frac{x^4}{4!}$ but lets see if the other terms can save us. To do this, we need to see exactly how large $g(x) = x^3/3! - x^4/4!$ can be in $(0,4).$ We calculate that $g'(x) = -(x-3)x^2/6$ so $g$ increases when $0\leq x\leq 3$, the maximum occurs at $g(3)=9/8$, and then it decreases after that.
This is good, because the $1+x+x^2/2$ terms obviously give at least $1$ from $x=0$, and will give us more as $x$ gets bigger. So we solve $1+x+x^2/2=9/8$ and we take the positive solution which is $\frac{\sqrt{5}-2}{2} \approx 0.118.$ So the inequality is definitely true for $x\geq 0.12$ because $g$ is at most $9/8$ and $1+x+x^2/2$ accounts for that amount in that range.
Remember that $g$ was increasing from $x=0$ to $x=3$, so the largest $g$ can be in the remaining range is $g(0.12) = 873/3125000 < 1$, which is less than the amount $1+x+x^2/2$ gives us. So the inequality is also true for $0\leq x\leq 0.12$, so overall, for all $x.$
So all in all, the only trouble was for $x$ in $(0,4)$ and the contribution from the other terms was always enough to account for $x^3/3!$ when $x^4/4!$ wasn't enough.
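The positivity claim is easy to corroborate numerically (a sketch only: sampling on a grid does not prove positivity, but it supports the argument and checks the claimed maximum $g(3)=9/8$):

```python
import math

def p4(x):
    # the partial sum 1 + x + x^2/2! + x^3/3! + x^4/4!
    return 1 + x + x**2 / 2 + x**3 / 6 + x**4 / 24

def g(x):
    # g(x) = x^3/3! - x^4/4!, the "problem" part of the sum
    return x**3 / 6 - x**4 / 24

# sample on a fine grid over [-10, 10], which contains the only region in doubt
values = [p4(-10 + k * 0.001) for k in range(20001)]
assert min(values) > 0

# the maximum of g on [0, 4] is attained at x = 3 with value 9/8, as claimed
assert math.isclose(g(3), 9 / 8)
assert g(3) >= max(g(0.12), g(4))
```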
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Good books on "advanced" probabilities what are some good books on probabilities and measure theory?
I already know basic probability, but I'm interested in sigma-algebras, filtrations, stopping times etc., with possibly examples of "real life" situations where they would be used
thanks
| I learned probability from Grimmett & Stirzaker, Probability and Random Processes. It has a lot of exercises with a good mix of difficulties. It was a standard fixture on the desks of quants at the bank where I used to work. It's pleasant to read, includes interesting applications, does its best to build intuition and the occasional joke is pretty funny (YMMV).
Caveats: I stopped just short of the material on the Itô calculus, and if/when I come to study that subject I'll probably seek out a more leisurely treatment. Also, although it does things in terms of sigma-algebras etc the book aims to teach probability, not rigorous measure theory.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 12,
"answer_id": 3
} |
Banach-Tarski theorem without axiom of choice Is it possible to prove the infamous Banach-Tarski theorem without using the Axiom of Choice?
I have never seen a proof which refutes this claim.
| The Banach-Tarski theorem heavily uses non-measurable sets. It is consistent that without the axiom of choice all sets are measurable, and therefore the theorem fails in such a universe. The paradox, therefore, relies on this axiom.
It is worth noting, though, that the Hahn-Banach theorem is enough to prove it, and there is no need for the full power of the axiom of choice.
More information can be found through here:
*
*Herrlich, H. Axiom of Choice. Lecture Notes in Mathematics, Springer, 2006.
*Schechter, E. Handbook of Analysis and Its Foundations. Academic Press, 1997.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Number of embeddings in algebraic closure I'm having trouble following the details of the discussion on pages 9 and 10 of Neukirch's algebraic number theory book.
Suppose $L$ is a separable extension of $K$ with degree n. Consider the set of embeddings of $L$ into $\bar K$, the algebraic closure of $K$, that fix $K$ (K-embeddings). Why are there $n$ embeddings in this set?
EDIT: Also, consider some element $x\in L$. Let $d$ be the degree of $L$ over $K(x)$ and $m$ be the degree of $K(x)$ over $K$. Why are the $K$-embeddings of $L$ partitioned by the equivalence relation
$$ \sigma\sim\tau\ \Leftrightarrow\ \sigma x = \tau x $$
into $m$ equivalence classes of $d$ elements each?
| The idea behind the proof is that for a field $K$ and an element $\alpha \in \bar{K}$, the roots in $\bar{K}$ of the minimal polynomial of $\alpha$ over $K$ are exactly the conjugates of $\alpha$ over $K$. Since $L/K$ is finite and separable, the primitive element theorem lets us write $L = K(\alpha)$ for some $\alpha$, and then each conjugate of $\alpha$ defines a unique $K$-embedding of $L$ into $\bar{K}$. The minimal polynomial of $\alpha$ is separable of degree $[L:K] = n$, so it has $n$ distinct roots and hence there are $n$ distinct embeddings.
For the full details of this proof look at Lemma 5.17 and Theorem 5.18 of this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Nature of the series $ \sum u_{n}, u_{n}=n!\prod_{k=1}^n \sin\left(\frac{x}{k}\right) $ Is the series $$ \sum u_{n}$$
$$ u_{n}=n!\prod_{k=1}^n \sin\left(\frac{x}{k}\right)$$
$$ x\in]0,\pi/2] $$
convergent or divergent?
We have:
$$ u_{n}\leq n!\prod_{k=1}^n \frac{x}{k}$$
$$ u_{n}\leq x^n$$
If $0<x<1$ the series is convergent.
$$ u_{n}\geq n! \prod_{k=1}^n \frac{2x}{\pi k}$$
$$ u_{n} \geq \prod_{k=1}^n \frac{2x}{\pi}$$
If $x=\pi/2$, $u_{n}\geq1$, $\sum u_{n}$ is divergent.
What about the case $x\in[1,\pi/2[$ ?
| $$\frac{u_{n+1}}{u_n}=\frac{(n+1)!\prod_{k=1}^{n+1} \sin\left(\frac{x}{k}\right)}{n!\prod_{k=1}^n \sin\left(\frac{x}{k}\right)}=(n+1)\sin(\frac{x}{n+1})$$
Thus, $\left|\lim_n \frac{u_{n+1}}{u_n}\right|=|x|$.
Now, the ratio test settles every case except $x=1$: the series converges for $x<1$ and diverges for $x>1$.
If $x=1$, I think the product is known, but I can't remember a reference.
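The limiting ratio is easy to corroborate numerically (a sketch; the test value $x=1.2$ is an arbitrary choice, not part of the problem):

```python
import math

x = 1.2  # an arbitrary test value in (0, pi/2]
# u_{n+1}/u_n simplifies to (n+1) * sin(x/(n+1))
ratios = [(n + 1) * math.sin(x / (n + 1)) for n in (10, 100, 1000, 10000)]

# the ratios approach x, so the ratio test gives convergence for x < 1
# and divergence for x > 1
assert abs(ratios[-1] - x) < 1e-6
assert all(abs(r - x) > abs(s - x) for r, s in zip(ratios, ratios[1:]))
```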
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Prove that 16, 1156, 111556, 11115556, 1111155556… are squares. I'm 16 years old, and I'm studying for my exam maths coming this monday. In the chapter "sequences and series", there is this exercise:
Prove that a positive integer formed by $k$ times digit 1, followed by $(k-1)$
times digit 5 and ending on one 6, is the square of an integer.
I'm not a native English speaker, so my translation of the exercise might be a bit crappy. What it says is that 16, 1156, 111556, 11115556, 1111155556, etc. are all squares of integers. I'm supposed to prove that. I think my main problem is that I don't see the link between these numbers and sequences.
Of course, we assume we use a decimal numeral system (= base 10)
Can anyone point me in the right direction (or simply prove it, if it is difficult to give a hint without giving the whole proof). I think it can't be that difficult, since I'm supposed to solve it.
For sure, by using the word "integer", I mean "natural number" ($\in\mathbb{N}$)
Thanks in advance.
As TMM pointed out, the square roots are 4, 34, 334, 3334, 33334, etc...
This row is given by one of the following descriptions:
*
*$t_n = t_{n-1} + 3*10^{n-1}$
*$t_n = \lfloor\frac{1}{3}*10^{n}\rfloor + 1$
*$t_n = t_{n-1} * 10 - 6$
But I still don't see any progress in my proof. A human being can see a pattern in these numbers and can tell it will be correct for $k$ going to $\infty$. But this isn't enough for a mathematical proof.
| Mark Bennet already suggested looking at the numbers as geometric series, so I'll use a slightly different approach. Instead of writing the squares like that, try writing them as follows:
$$\begin{align} 15&.999\ldots = 16 \\ 1155&.999\ldots = 1156 \\ 111555&.999\ldots = 111556 \\ \vdots\end{align}$$
These numbers can be expressed as a sum of three numbers, as follows:
$$\begin{align} 111111&.111\ldots \\ 444&.444\ldots \\ 0&.444\ldots \\ \hline 111555&.999\ldots \end{align}$$
Since $1/9 = 0.111\ldots$, we get
$$\begin{align} 111111&.111\ldots = \frac{1}{9} \cdot 10^{2k} \\ 444&.444\ldots = \frac{1}{9} \cdot 4 \cdot 10^k \\ 0&.444\ldots = \frac{1}{9} \cdot 4 \\ \hline 111555&.999\ldots = \frac{1}{9} \left(10^{2k} + 4 \cdot 10^k + 4\right). \end{align}$$
But this can be written as a square:
$$\frac{1}{9} \left(10^{2k} + 4 \cdot 10^k + 4\right) = \left(\frac{10^k + 2}{3}\right)^2.$$
Since $10^k + 2$ is always divisible by $3$, this is indeed the square of an integer.
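A quick check of this closed form for the first few $k$, using exact integer arithmetic (a sketch):

```python
for k in range(1, 8):
    n = int("1" * k + "5" * (k - 1) + "6")   # k ones, k-1 fives, one six
    root = (10**k + 2) // 3                  # 4, 34, 334, 3334, ...
    assert (10**k + 2) % 3 == 0              # 10^k + 2 is divisible by 3
    assert root * root == n                  # the number is a perfect square
```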
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 10,
"answer_id": 3
} |
Formula to estimate sum to nearly correct : $\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$ Estimate the sum correct to three decimal places :
$$\sum_{n=1}^\infty\frac{(-1)^n}{n^3}$$
This problem is in my homework. I find that $n = 22$ when I use Maple to solve this (with some programming). But in my homework, my teacher said to find a formula for this problem.
Thanks :)
| Group the terms of the series in consecutive pairs, the term $n=2k-1$ with the term $n=2k$:
$$\sum_{n=1}^{\infty} \dfrac{(-1)^n}{n^3} = -\sum_{k=1}^{\infty} \left(\dfrac{1}{(2k-1)^3}-\dfrac{1}{(2k)^3}\right)= -\sum_{k=1}^{\infty} \dfrac{(2k)^3-(2k-1)^3}{(2k-1)^3(2k)^3}= -\sum_{k=1}^{\infty} \dfrac{12k^2-6k+1}{(2k-1)^3(2k)^3}$$
This pairing shows that the partial sums with an even number of terms and those with an odd number of terms approach the sum from opposite sides, so consecutive partial sums bracket it.
Note that for the truncated sum to be accurate to within three decimal places, the first omitted term, which bounds the error of an alternating series with decreasing terms, must be less than $0.001$.
Therefore we need $$\dfrac{1}{(n+1)^3} < \dfrac{1}{1000},$$ that is, $(n+1)^3 > 1000$, so $n+1 > 10$.
The smallest number of terms that guarantees the required accuracy is therefore $n = 10$. (Under the stricter reading of "correct to three decimal places" as an error below $5\times 10^{-4}$, the same estimate gives $(n+1)^3>2000$, hence $n = 12$ terms.)
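Whichever accuracy convention one adopts, the truncation error is easy to inspect numerically (a sketch; the reference value is simply a very long partial sum):

```python
# reference value via a long partial sum: the omitted tail is below 1/100001^3
ref = sum((-1) ** k / k**3 for k in range(1, 100001))

def partial(n):
    return sum((-1) ** k / k**3 for k in range(1, n + 1))

# for an alternating series with decreasing terms, the truncation error
# is bounded by the first omitted term
for n in (5, 10, 13, 20):
    assert abs(partial(n) - ref) < 1 / (n + 1) ** 3

# in particular, the error after 13 terms is comfortably below 10^-3
assert abs(partial(13) - ref) < 1e-3
```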
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Evaluating $\int \frac{dx}{x^2 - 2x} dx$ $$\int \frac{dx}{x^2 - 2x}$$
I know that I have to complete the square so the problem becomes.
$$\int \frac{dx}{(x - 1)^2 -1}dx$$
Then I set up my A B and C stuff
$$\frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{-1}$$
With that I find $A = -1, B = -1$ and $C = 0$ which I know is wrong.
I must be setting up the $A, B, C$ thing wrong but I do not know why.
|
My book is telling me that I have to complete the square
$$I=\int \frac{dx}{x^{2}-2x} =\int \frac{dx}{\left( x-1\right) ^{2}-1}\overset{u=x-1}{=}\int \frac{1}{u^{2}-1}\,du=-\text{arctanh }u+C,\tag{1}$$
where I have used the substitution $u=x-1$ and the standard derivative $$\frac{d}{du}\text{arctanh }u=\frac{1}{1-u^{2}}\tag{2}$$
You just need to substitute $u=x-1$ to write $\text{arctanh }u$ in terms of $x$.
Added 2: Remark. If we use the logarithmic representation of the inverse hyperbolic function $\text{arctanh }u$
$$\begin{equation*}
\text{arctanh }u=\frac{1}{2}\ln \left( u+1\right) -\frac{1}{2}\ln \left(
1-u\right),\qquad (\text{real for }|u|<1)\tag{3}
\end{equation*}$$
we get for $u=x-1$
$$\begin{eqnarray*}
I &=&-\text{arctanh }u+C=-\text{arctanh }\left( x-1\right) +C \\
&=&-\frac{1}{2}\ln x+\frac{1}{2}\ln \left( 2-x\right) +C \\
&=&\frac{1}{2}\left( \ln \frac{2-x}{x}\right) +C\qquad (0<x<2).
\end{eqnarray*}\tag{4}$$
Added. If your book does require using partial fractions then you can proceed as follows
$$\begin{equation*}
\int \frac{1}{u^{2}-1}\,du=\int \frac{1}{\left( u-1\right) \left( u+1\right)
}\,du=\int \left(\frac{1}{2\left( u-1\right) }-\frac{1}{2\left( u+1\right) }\right)du.\tag{5}
\end{equation*}$$
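The antiderivative in $(4)$ can be sanity-checked numerically: the derivative of $\frac{1}{2}\ln\frac{2-x}{x}$ should reproduce the integrand $\frac{1}{x^{2}-2x}$ on $(0,2)$ (a sketch using a central finite difference):

```python
import math

F = lambda x: 0.5 * math.log((2 - x) / x)   # the antiderivative from (4)
f = lambda x: 1 / (x**2 - 2 * x)            # the original integrand

h = 1e-6
for x in (0.3, 0.9, 1.5):
    numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference for F'(x)
    assert abs(numeric - f(x)) < 1e-6
```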
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Michael Spivak in "Calculus" asserts that $\sqrt2$ cannot be proven to exist, and that such a proof is impossible. What does he mean by "exist"? Michael Spivak in "Calculus" asserts that $\sqrt2$ cannot be proven to exist, and that such a proof is impossible. What does he mean by "exist"? How are you to prove that any number "exists"? Why can't we define $\sqrt2$ as a number that fits under some arbitrary definition of existence, while asserting that its most concise expression is with a functional root?
I'm sorry if these questions seem a bit sophomoric; in some ways it resembles an 8 year old repeatedly asking "why". But given that his prose is very concise and technical, his usage of "exist" was out of the ordinary.
(I used two tags representing the book's field of study; and one representing the actual relevant tag.)
edit
Oh, I'm sorry. I misquoted. My question still stands, though; how has he defined existence such that $\sqrt2$ might possible not be within it.
Direct quote: "We have not proved that any such number exists..." in reference to $\sqrt2$.
| What Spivak means by "$\sqrt{2}$ exists" is that there is a real number $x$ such that $x^2=2$, and this cannot be proved from the properties of numbers assumed up to page 26 of his book.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
} |
non existence of $K_{r,r}$ in a given graph on any number of vertices I need to prove that: for every r>1 there exists $c>0$, s.t for every $n$, there exists some $G$, a graph on $n$ vertices, with average degree $cn^{1-\frac{2}{r}}$ (or above), s.t $K_{r,r}\nsubseteq G$.
I am trying to use the probabilistic method here in a few ways: Looking at all the possible edges in a random (uniformly distributed) order and adding them to the graph, or trying to find the expectancy of the existence of $K_{r,r}$ given $2r$ vertices, but with no luck.
Any help? Am I even in the right direction? Thanks :)
| I don't have time to answer this properly, but I'll get you headed in the right direction. Perhaps some bounty hunter will fill in the details [or you can]. I would start by including edges independently with probability $p = cn^{-2/r}$ (this will roughly give your average degree condition). For a given $r$ vertices and another given $r$ vertices, there are $r^2$ edges to check to see if there is a $K_{r,r}$ (note that I specify the partition beforehand). The probability that there is a $K_{r,r}$ on this specific partition is $(cn^{-2/r})^{r^2} = c^{r^2} n^{-2r}$. Since there are at most $n^{2r}$ partitions to check, the probability that you have at least one $K_{r,r}$ is (by a union bound) at most $n^{2r} \cdot c^{r^2} n^{-2r} = c^{r^2}$. So $1-c^{r^2}$ proportion of the graphs have no $K_{r,r}$ . Now you just need to argue that the one of the graphs in this proportion meets your average degree condition.
Note that the expected value of the number of edges is $c\binom{n}{2}n^{-2/r}$, by linearity of expectation. Most of the mass of this binomial random variable is concentrated close to its expected value [using concentration results; Chebyshev might be enough here, or some other result might work]. This should be able to finish it off, as it will give that for suitable $c$ most of the graphs have high average degree, and also most of the graphs have no $K_{r,r}$. If most have one property and most have the other, then there has to be one graph with both of those properties.
Good luck, and I hope this argument works out (again, I haven't filled in the details, will someone come along and write this up properly?)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Set theory puzzles - chess players and mathematicians I'm looking at "Basic Set Theory" by A. Shen. The very first 2 problems are: 1) can the oldest mathematician among chess players and the oldest chess player among mathematicians be 2 different people? and 2) can the best mathematician among chess players and the best chess player among mathematicians be 2 different people? I think the answers are no, and yes, because a person can only have one age, but they can have separate aptitudes for chess playing and for math. Is this correct?
| Yes, it’s correct. If $M$ is the set of mathematicians, and $C$ is the set of chess players, you’re looking at rankings of the members of $M\cap C$. If for $x\in M\cap C$ we let $m(x)$ be $x$’s ranking among mathematicians, $c(x)$ be $x$’s ranking among chess players, and $a(x)$ be $x$’s age, then (assuming no two of them share an age) there is a unique $x_a\in M\cap C$ such that $$a(x_a)=\max\{a(x):x\in M\cap C\}\;,$$ but there can certainly be distinct $x_m,x_c\in M\cap C$ such that $$m(x_m)=\max\{m(x):x\in M\cap C\}$$ and $$c(x_c)=\max\{c(x):x\in M\cap C\}\;.$$
All of which just says what you said, but a bit more formally.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Change of Basis vs. Linear Transformation If I understand it correctly, change of basis is just a specific case of a linear transformation. Specifically, given a vector space $V$ over a field $F$ such that $\dim V=n$, change of basis is just a transformation from $F^n$ to $F^n$. Does change of basis in and of itself have practical uses that are separate from linear transformations? What I mean is separate from linear transformations that do more than just change the basis of a vector in its own vector space.
| The Fourier transform is an example of a practical change of basis. Some often used operations in signal processing are easier in the Fourier transformed basis. For example, a convolution in the time basis is simply a multiplication in the Fourier transformed basis, i.e., the frequency basis.
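A small numerical illustration of that fact (a sketch using NumPy; the signal length and the seed are arbitrary): a circular convolution computed directly in the time basis agrees with pointwise multiplication in the Fourier basis.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(16)
b = rng.standard_normal(16)
N = len(a)

# circular convolution computed directly in the time basis
direct = np.array([sum(a[m] * b[(k - m) % N] for m in range(N)) for k in range(N)])

# the same convolution via the Fourier basis: transform, multiply pointwise, invert
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

assert np.allclose(direct, via_fft)
```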
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Proving that $A=\left\{\frac{n}{2n+1}:n \in \mathbb{N}\right\}$ is bounded by $\frac12$ Let $A=\left\{\dfrac{n}{2n+1}:n \in \mathbb{N}\right\}$. I want to prove that $supA=\dfrac{1}{2}$ so I need to show that $$\forall\epsilon\gt0 \exists a\in A:a\gt\dfrac{1}{2}-\epsilon$$
So suppose by contradiction that $$\exists \epsilon\gt0 \forall a\in A:a\le\dfrac{1}{2}-\epsilon$$
which essentially means that $$n \le \left( \dfrac{1}{\epsilon}-1\right)\cdot \dfrac{1}{2}$$
and this is not possible as the natural numbers are not bounded from above.
Am I right? This seems to easy - I could have plugged any other number (smaller then $\frac{1}{2}$) and prove the same.
| The idea is right; as for the calculations, I get $$\exists \epsilon >0\,\,s.t.\,\,\forall n\in\mathbb{N}\,\,,\,\frac{n}{2n+1}\leq\frac{1}{2}{-\epsilon}\Longleftrightarrow \rlap{/}{n}\leq \rlap{/}{n}+\frac{1}{2}-2n\epsilon-\epsilon\Longleftrightarrow$$$$\Longleftrightarrow2n\epsilon\leq \frac{1}{2}-\epsilon\Longleftrightarrow n\leq\frac{1}{4}\left(\frac{1}{\epsilon}-2\right)$$
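The bound can be spot-checked numerically: for $\epsilon=0.01$ it equals $\frac{1}{4}\left(\frac{1}{\epsilon}-2\right)=24.5$, so $n=24$ should satisfy the inequality while $n=25$ should not (a sketch):

```python
eps = 0.01
bound = 0.25 * (1 / eps - 2)   # = 24.5
assert abs(bound - 24.5) < 1e-12

a = lambda n: n / (2 * n + 1)
assert a(24) <= 0.5 - eps      # n below the bound: inequality holds
assert a(25) > 0.5 - eps       # n above the bound: inequality fails
# and a(n) -> 1/2 from below, so sup A = 1/2
assert 0.5 - a(10**9) < 1e-9 and a(10**9) < 0.5
```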
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Limit exercise from Rudin: $\lim\limits_{n \to \infty} \sqrt{n^2+n} -n$ This is Chapter 3, Exercise 2 of Rudin's Principles.
Calculate $\lim\limits_{n \to \infty} \sqrt{n^2+n} -n$.
Hints will be appreciated.
| $\sqrt{n^2+n} - n= \frac{n}{\sqrt{n^2+n} + n} = \frac{1}{\sqrt{1+\frac 1n} + 1} \longrightarrow \frac{1}{2}$ as $n \to \infty$.
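Numerically, the original expression and the rationalized form agree and both approach $\frac{1}{2}$ (a sketch):

```python
import math

f = lambda n: math.sqrt(n * n + n) - n          # original form
g = lambda n: 1 / (math.sqrt(1 + 1 / n) + 1)    # rationalized form

for n in (10, 1000, 10**6):
    assert math.isclose(f(n), g(n), rel_tol=1e-8)

# the rationalized form makes the limit 1/2 evident
assert abs(g(10**12) - 0.5) < 1e-12
```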
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/156955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
Limit of an integral Let $\{f_{n}(x)\}$ be a sequence of continuous positive real valued functions on $\mathbb R$. If $a_{n}=\sup_{x\in \mathbb R}|f_{n}(x)|$, such that $a_{n}\to 0$ as $n\to\infty$,
and $a_{n}$ is a decreasing sequence, with $a_{n}\in (0,1), \forall n$, and $$\int_{\mathbb R}|f_{n}(x)|^{2}dx\leq A$$ for some $A$, for all $n\geq 1$.
Is it true that $\lim_{n\to\infty}\int_{\mathbb R}|f_{n}(x)|^{2}dx=0$?
My guess: Since $a_{n}\to 0$, this means that the sequence $|f_{n}(x)|$ converges to 0 uniformly on $\mathbb R$, hence $|f_{n}(x)|^{2}$ also converges to 0 uniformly on $\mathbb R$, this will imply the result somehow!
I asked this before but I got one answer which is not applied for the edit (uniform-convergence-and-integration)
| Try $f_n(x) = \frac{1}{n} \sqrt{|x-n|}\, 1_{[-n,n]} (x)$. Then $\int |f_n(x)|^2 \, dx = 2$, $\forall n$. It is easy to check that $a_n = \sqrt{2/n}$, which decreases to $0$ (start the sequence at $n=3$ if the condition $a_n\in(0,1)$ is wanted), and that $f_n \to 0$ pointwise.
So the answer is no, the integral does not converge to $0$.
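The example can be probed numerically: the squared $L^2$ norms stay bounded away from $0$ (they equal the same positive constant for every $n$), while the sup norms decrease toward $0$ (a sketch; the integral is approximated by a midpoint Riemann sum):

```python
import math

def l2_sq(n, steps=100000):
    # midpoint Riemann sum of the integral of |f_n(x)|^2 = |x - n| / n^2 over [-n, n]
    dx = 2 * n / steps
    return sum(abs((-n + (i + 0.5) * dx) - n) / n**2 * dx for i in range(steps))

def sup_f(n, steps=10000):
    # sup of |f_n| over a grid on [-n, n]; the true sup is attained at x = -n
    return max(math.sqrt(abs(x - n)) / n
               for x in (-n + 2 * n * i / steps for i in range(steps + 1)))

for n in (1, 4, 16, 64):
    assert l2_sq(n) > 0.5                       # the L^2 mass does not vanish
assert sup_f(1) > sup_f(4) > sup_f(16) > sup_f(64)  # while a_n decreases
assert sup_f(64) < 0.2
```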
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
"Negative" versus "Minus" As a math educator, do you think it is appropriate to insist that students say "negative $0.8$" and
not "minus $0.8$" to denote $-0.8$?
The so called "textbook answer" regarding this question reads:
A number and its opposite are called additive inverses of each other because their sum is zero, the identity element for addition. Thus, the numeral $-5$ can be read "negative five," "the opposite of five," or "the additive inverse of five."
This question involves two separate, but related issues; the first is discussed at an elementary level here. While the second, and more advanced, issue is discussed here. I also found this concerning use in elementary education.
I recently found an excellent historical/cultural perspective on What's so baffling about negative numbers? written by a Fields medalist.
| I have almost always said, "minus." What is interesting here is that the - operator has two guises. It is an infix binary operator (as in $5 - 3$) and it is a prefix unary operator, as in $-7$.
The word "negative" has the liability of an extra syllable. Occasionally, I do find myself saying "negative 3" though.
This seems to me to be a distinction without a huge difference.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 26,
"answer_id": 10
} |
Which of the following are Dense in $\mathbb{R}^2$? Which of the following sets are dense in $\mathbb R^2$ with respect to the usual topology.
*
*$\{ (x, y)\in\mathbb R^2 : x\in\mathbb N\}$
*$\{ (x, y)\in\mathbb R^2 : x+y\in\mathbb Q\}$
*$\{ (x, y)\in\mathbb R^2 : x^2 + y^2 = 5\}$
*$\{ (x, y)\in\mathbb R^2 : xy\neq 0\}$.
Any hint is welcome.
| *
*is not dense. The set of vertical lines with natural number $x$ coordinate is not dense.
*This is dense. Given $(x,y)$, let $r = x + y$. If $r$ is irrational , let $q$ be any rational number close to $r$. Then $(x - (r - q), y)$ has rational sum and gets close to $(x,y)$.
*This is a circle of radius $\sqrt{5}$, which is not dense.
*This is dense. You can get arbitrary close to any $(x,y)$ without intersecting the $x$ or the $y$ axis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
Let $X = \Bbb{R}$ with the discrete metric. Is $X$ connected? No. Any nonempty subset $A ≠ X$ is open, as well as its complement. So $X$ is the union of disjoint nonempty open subsets.
Is there a more formal way of doing it?
Thanks for your help.
| I think your answer is formal enough for any mathematical purpose, but if you want to go fancy you can try the following.
First, prove that a topological space $\,X\,$ is disconnected iff there exists a continuous and onto function $\,f:X\to \{0,1\}\,$ , where the latter space inherits its topology from the usual one on the reals (and, thus, it's a discrete space with two elements).
Now, for your case, show that $\,f:\mathbb{R}_{disc}\to \{0,1\}\,$ defined by $$f(x)=\left\{\begin{array}{ll} 0 \,&\,\text{if}\;x=0\\1\,&\,\text{if}\;x\neq 0\end{array}\right.$$is continuous and onto $\,\{0,1\}\,$...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Change of variables in double integral. Finding limits of integration
I need to integrate $\iint_R \sin \frac{1}{2}(x+y) \cos\frac{1}{2}(x-y)\,dx\,dy$ over the region $R$: {triangle with vertices $(0,0),(0,2),(1,1)$}. They ask to use $u=\frac{1}{2}(x+y)$ and $v=\frac{1}{2}(x-y)$.
Attempt:First, I transformed $(x,y)$ to $(x=x(u,v),y=y(u,v))$. Namely, I solved for x and y:
$$\begin{cases}u=\frac{1}{2}(x+y)\\v=\frac{1}{2}(x-y)\end{cases}$$
The Jacobian I found is $J(u,v)=\frac{\partial (x,y)}{\partial (u,v)}=-1$.
I am having hard time founding the limits of integration. In xy-plane $R$ looks like that:
So, the region R is bounded by $\begin{cases} y=0\\y=x\\y=-x+2 \end{cases}$
In uv-plane it looks like:
The region S is bounded by $\begin{cases} u=1\\ u=v\end{cases}$
Now the double integral looks like:
$$\int_0^1 \! \int_0^v \sin u \cos v\,du\,dv$$
When, I solve it I get
$$\int_0^1 \! \int_0^v \sin u \cos v\,du\,dv=\frac{1}{2} (\frac{1}{2} \sin2 -1)$$
But in the answer key the answer is $1-\frac{1}{2} \sin2 $
Can you please tell me what I am doing wrong. Hints please.
| The Jacobian:
You have $x=u+v$ and $y=u-v$, so $$J=\frac{\partial(x,y)}{\partial(u,v)}=\det\begin{pmatrix}1 & 1\\ 1 & -1\end{pmatrix}=-2,$$ and therefore $|J|=2$.
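With $|J|=2$, a numerical integration over the original $xy$-triangle reproduces the answer key's value $1-\frac{1}{2}\sin 2$ (a sketch using a midpoint Riemann sum):

```python
import math

# integrate sin((x+y)/2) * cos((x-y)/2) over the triangle with vertices
# (0,0), (0,2), (1,1):  0 <= x <= 1 and  x <= y <= 2 - x
N = 400
total = 0.0
for i in range(N):
    x = (i + 0.5) / N            # midpoint in x; dx = 1/N
    lo, hi = x, 2 - x
    dy = (hi - lo) / N
    for j in range(N):
        y = lo + (j + 0.5) * dy  # midpoint in y
        total += math.sin((x + y) / 2) * math.cos((x - y) / 2) * dy
total /= N                        # account for dx

exact = 1 - 0.5 * math.sin(2)    # the answer key's value
assert abs(total - exact) < 1e-3
```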
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
subsets probability question Consider a set $\Omega$ with $N$ distinct members, and a function $f$ defined on $\Omega$ that takes the values 0,1 such that $ \frac{1}{N} \sum_{x \in \Omega } f(x)=p$. For a subset $S⊆Ω$ of size n, define the sample proportion
$p:= p(S)= \frac{1}{n} \sum_{x\in S} f(x)$.
If each subset of size $n$ is chosen with equal probability, calculate the expectation and standard deviation of the random variable $p$.
| It helps to introduce indicator random variables here.
For each $x\in\Omega$, let $Z_x$ be the indicator random variable
that takes the value 1 if $x\in S$, and value 0 otherwise.
We can express $$p(S)={1\over n}\sum_{x\in\Omega} Z_x\cdot f(x),$$
where the sum is no longer over the random set $S$. Since all points
are equally likely to be elements of $S$, it is not hard to calculate
$$\mathbb{E}(Z_x)={n\over N},\quad \text{Var}(Z_x)={n\over N}\left({1-{n\over N}}\right),
\quad \text{cov}(Z_x,Z_y)={-n\over N^2} {N-n \over N-1}\text{ for }x\neq y.$$
Using linearity of expectation, and bilinearity of covariance, after some
calculation we get
$$\mathbb{E}(p(S))={1\over N}\sum_{x\in\Omega} f(x),$$
and
$$\text{Var}(p(S))={1\over n} {N-n \over N-1} \left[{1\over N}\sum_{x\in\Omega} f(x)^2-
\left( {1\over N}\sum_{x\in\Omega} f(x)\right)^2\right].$$
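For a small population the two formulas can be verified exactly by enumerating every size-$n$ subset (a sketch; the particular $0/1$ labelling is arbitrary, and the bracketed quantity in the variance formula is computed as the population second moment minus the squared mean):

```python
from itertools import combinations

N, n = 8, 3
f = [1, 0, 1, 1, 0, 0, 1, 0]     # an arbitrary 0/1 labelling of Omega
pop_mean = sum(f) / N            # this is the population value p

subsets = list(combinations(range(N), n))
props = [sum(f[i] for i in S) / n for S in subsets]

mean = sum(props) / len(props)
var = sum((q - mean) ** 2 for q in props) / len(props)

assert abs(mean - pop_mean) < 1e-12                  # E(p(S)) equals the population mean
pop_var = sum(v * v for v in f) / N - pop_mean ** 2  # bracketed quantity
assert abs(var - (1 / n) * (N - n) / (N - 1) * pop_var) < 1e-12
```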
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Do improper integrals like $\int_{-\infty}^{+\infty} f$ converge if $xf(x)\rightarrow 0$? My teacher assumes without proof in his notes that, given a rational function $R(x)$, the improper integral $\int_{-\infty}^{+\infty} R(x)dx$ converges if $\lim_{|x|\rightarrow\infty} xR(x) = 0$. He then proceeds to explain the Estimation lemma, that is a very similar result.
However it is unclear for me whether this fact is true, and if it is bidirectional (i.e. if $\int_{-\infty}^{+\infty} R(x)dx$ converges then $\lim_{|x|\rightarrow\infty} xR(x) = 0$) and holds true even for non-rational functions
| One cannot extend the result to general functions. For example, let $f(x)=\frac{1}{|x|\log(|x|)}$ if $|x|\ge 2$, and let $f(x)=f(2)$ for $-2 \lt x\lt 2$.
Then $\lim_{x\to\infty} xf(x)=\lim_{x\to -\infty} xf(x)=0$, but the improper integral does not converge.
The result does hold, bidirectionally, for rational functions whose denominator vanishes nowhere. It is essentially obvious, though writing out the proof is typographically unpleasant. For a rational function $R(x)=\frac{P(x)}{Q(x)}$, where $P$ and $Q$ are polynomials with $Q(x)$ nowhere $0$ (and $P$ not identically $0$), there is convergence iff $\deg(Q)\ge 2+\deg(P)$. This is precisely the condition for $\lim_{|x|\to \infty} xR(x)$ to be $0$. The reason is basically that degrees can only take on integer values.
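The counterexample is easy to probe numerically via the closed form $\int_2^T \frac{dx}{x\ln x}=\ln\ln T-\ln\ln 2$ (a sketch):

```python
import math

xf = lambda x: 1 / math.log(x)                 # x * f(x) for x >= 2
tail_integral = lambda T: math.log(math.log(T)) - math.log(math.log(2))

assert xf(10**6) < 0.1                          # x f(x) -> 0
assert xf(10**12) < xf(10**6)

# yet the integral keeps growing without bound
assert tail_integral(10**12) > tail_integral(10**6) + 0.5
assert tail_integral(10**100) > tail_integral(10**12) + 2
```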
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show that $\frac{n}{\sigma(n)} > (1-\frac{1}{p_1})(1-\frac{1}{p_2})\cdots(1-\frac{1}{p_r})$ If $n=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$ is the prime factorization of $n>1$ then show that :
$$1>\frac{n}{ \sigma (n)} > \left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\cdots\left(1-\frac{1}{p_r}\right)$$
I have solved the $1^\text{st}$ inequality($1>\frac{n}{ \sigma (n)}$) and tried some manipulations on the right hand side of the $2^\text{nd}$ inequality but can't get much further.Please help.
| Let $S$ be the set of all products of powers of the prime divisors of $n$ (including the empty product $1$), and let $n/S$ denote the set of ratios $n/s$ for $s \in S$. Expanding each factor $\left(1-\frac{1}{p_i}\right)^{-1}$ as a geometric series $1+\frac{1}{p_i}+\frac{1}{p_i^2}+\cdots$ gives $$\frac{n}{\prod_i \left(1 - \frac {1}{p_i}\right)} = \sum_{s\in S}\frac{n}{s}.$$
The integers among the numbers $n/s$ are exactly the divisors of $n$, whose sum is $\sigma(n)$, so the inequality $\frac{n}{\prod_i (1 - \frac{1}{p_i})} > \sigma(n)$ is the statement that the sum of all the rational numbers in $n/S$ is larger than the sum of the integers in the same set.
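The double inequality is easy to spot-check by brute force for small $n$ (a sketch; $\sigma$ is computed by direct divisor summation):

```python
def sigma(n):
    # sum of the divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def prime_factors(n):
    # set of distinct prime divisors of n, by trial division
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

for n in range(2, 2000):
    lhs = n / sigma(n)
    rhs = 1.0
    for p in prime_factors(n):
        rhs *= 1 - 1 / p
    assert rhs < lhs < 1
```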
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
hausdorff, intersection of all closed sets Can you please help me with this question?
Let's $X$ be a topological space.
Show that these two following conditions are equivalent :
*
*$X$ is hausdorff
*for all $x\in X$, the intersection of all closed sets containing a neighborhood of $x$ is $\{x\}$.
Thanks a lot!
| Recall the definition of Hausdorff:
$X$ is a Hausdorff space if for every two distinct $x,y\in X$ there are disjoint open sets $U,V$ such that $x\in U$ and $y\in V$.
Suppose that $X$ is Hausdorff and $x\in X$. Suppose $y\neq x$. We have $U,V$ as in the definition, so $x\in U$, $y\in V$ and $U\cap V=\varnothing$. Suppose that $F$ is a closed set containing an open neighborhood of $x$; intersecting this open set with $U$ yields an open neighborhood of $x$ which is a subset of $F$, so without loss of generality $U\subseteq F$. Now $F'=F\cap(X\setminus V)$ is closed and does not contain $y$; furthermore $U$ itself is a subset of this closed set, so $F'$ also contains an open neighborhood of $x$ while $y\notin F'$. Therefore when intersecting all closed sets which contain an open neighborhood of $x$ we remove every other $y$, so the result is $\{x\}$.
On the other hand, suppose that for every $x\in X$ this intersection is $\{x\}$, by a similar process as above deduce that $X$ is Hausdorff as in the definition above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Sequence in $L^p(X,M,\mu)$ I have two question.
Suppose that {$f_k$} is a sequence in $L^p(X,M,\mu)$ such that
$f(x) = \lim_{k \to \infty} f_k(x)$
exists for $\mu$ -a.e. $x \in X$.
Assume $1\le p<\infty$,
$\liminf_{k\to \infty} ||f_k||_p = a$
is finite.
*
*First one is proving that $f \in L^p$ and $||f||_p \le a$.
And if additionally assume that $||f||_p = \lim_{k \to \infty} ||f_k||_p $.
*
*Second one is to prove $$\lim_{k \to \infty} ||f-f_k||_p =0 $$
These are very natural facts, but I want a rigorous proof of them. How can I approach this?
| 1) is easy using Fatou's lemma: since $|f_k|^p \to |f|^p$ $\mu$-a.e., Fatou gives
$$\int_X |f|^p \, d\mu \le \liminf_{k\to\infty} \int_X |f_k|^p \, d\mu = a^p < \infty,$$
so $f \in L^p$ and $\|f\|_p \le a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How are the limits of this integral transformed?
Using $$\ln(x) = \int_1^x \frac{1}{t} dt$$
Show that for $x > 0$, $\ln\left(\frac{1}{x}\right) = -\ln(x)$
I am following a provided answer and didn't quite understand the following transformation and why/how it is done:
$$\ln\left(\frac{1}{x}\right) = \int_1^{\frac{1}{x}} \frac{1}{t}dt$$
$$~u = \frac{1}{t}, du = \frac{-1}{t^2}dt$$
$$= -\int_1^{\frac{1}{x}} t\frac{dt}{t^2}$$
$$=-\int_1^x \frac{1}{u}du$$
$$= -\ln(x)$$
I don't understand how the limits of integration were changed in the $2nd$ and $3rd$ term (from $\frac{1}{x}$ to $x$). Is there a property that makes this correct or is there some other reasoning behind the change?
| \begin{align}
\ln\left(\frac{1}{x}\right) & = \int_1^{\frac{1}{x}} \frac{1}{t}dt & & {\text{Replace $x$ by $\dfrac1x$ in the definition}}\\
u & = \frac{1}{t} & &{\text{This is the substitution you are making}}\\
du & = \frac{-1}{t^2}dt & &{\text{This is because }\dfrac{du}{dt} = - \dfrac1{t^2}}\\
\ln\left(\frac{1}{x}\right) & = \int_1^{\frac{1}{x}} \frac{1}{t}dt & = \int_1^{\frac{1}{x}} -t \times \left(-\frac{1}{t^2}dt \right) & \text{Multiply and divide the integrand by $-t$}
\end{align}
In the above integrand, we can replace $-\dfrac{1}{t^2}dt$ by $du$ and $-t$ by $- \dfrac1u$.
Note that we are making the change of variable, the integrand is in terms of $u$ now.
Hence, we need to look at the limits for the variable $u$ in the integral.
We have the transformation that $u = \dfrac1t$. Hence, if $t$ goes from $1$ to $1/x$, then $u$ goes from $1$ to $x$. This is because when $t=1$, $u = \dfrac11 = 1$. Similarly, if $t = 1/x$, then $u = \dfrac1{1/x} = x$.
(For example, if $t$ goes from $1$ to $2$, then $1/t$ goes from $1$ to $1/2$.)
Hence, we get that
\begin{align}
\ln \left( \dfrac1x\right) & = \int_1^{\frac{1}{x}} -t \times \left(-\frac{1}{t^2}dt \right) & = \int_1^x \left(-\dfrac1u \right) \times du = - \int_1^x \dfrac{du}{u} = - \ln(x)
\end{align}
where the last equality is obtained since we have defined $\ln(x)$ as $\displaystyle \int_1^x \dfrac{dt}{t}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
a function that maps half planes Define
$H^{+}=\{z:y>0\}$
$H^{-}=\{z:y<0\}$
$L^{+}=\{z:x>0\}$
$L^{-}=\{z:x<0\}$
$f(z)=\frac{z}{3z+1}$ maps which portion onto which from the above, and vice versa? I will be glad if anyone tells me how to handle this type of problem. By inspection?
| $$f(z)=\frac{z}{3z+1}=\frac{x+iy}{3x+1+i3y}=\frac{(x+iy)(3x+1-i3y)}{(3x+1)^2+9y^2} \implies \Im (f(z))=\frac{y}{(3x+1)^2+9y^2}.$$ So if $y > 0$ then $\Im (f(z))>0$, and if $y < 0$ then $\Im (f(z))<0$; hence $f$ maps $H^{+}$ onto $H^{+}$ and $H^{-}$ onto $H^{-}$ (being a Möbius transformation, $f$ is a bijection of the extended plane preserving $\mathbb{R}\cup\{\infty\}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
showing a set is not a subgroup Let $G$ be the orthogonal group $O_2$. Show that the set $\{g \in G : g^2= e\}$ is not a subgroup of $G$
The question before says let $G$ be an abelian group, and I can see where I have used that fact: it lets us write $a^2b^2=(ab)^2$, and so $ab$ lies in the set. But I can't find a way to 'get out' of $G$.
| Hint: Every rotation is a product of two reflections.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Multiplicative Self-inverses in Fields I assume there are only two multiplicative self-inverses in each field with characteristic bigger than $2$ (the field is finite, but I think it holds in general). In a field $F$ with $\operatorname{char}(F)>2$, a multiplicative self-inverse $a \in F$ is an element such that
$$ a \cdot a = 1.$$
I think in each field it is $1$ and $-1$. Any ideas how to prove that?
| The polynomial $x^2-1$ has degree $2$ and thus can have at most two roots in any field. So checking that $1$ and $-1$ satisfy this is enough to know that they are the only self-inverse elements. (As Nate points out, in the field of characteristic $2$ they are also equal to each other, so there is only one self-inverse element in this case).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Evaluation of $\lim\limits_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$ One of the previous posts made me think of the following question: Is it possible to evaluate this limit without L'Hopital and Taylor?
$$\lim_{x\rightarrow0} \frac{\tan(x)-x}{x^3}$$
| The statement $\dfrac{\tan(x)-x}{x^3} \to c$ as $x \to 0$ is equivalent to
$\tan(x) = x + c x^3 + o(x^3)$ as $x \to 0$, so this is a statement about a
Taylor polynomial of $\tan(x)$, and I'm not sure what would count as doing
that "without Taylor". However, one thing you could do is start from $$\sin(x) = x + o(x)$$ integrate to get $$\cos(x) = 1 - x^2/2 + o(x^2)$$ then $$\sec(x) = \frac{1}{1-x^2/2 + o(x^2)} = 1 + x^2/2 + o(x^2)$$ $$\sec^2(x) = \left(1 + x^2/2 + o(x^2)\right)^2 = 1 + x^2 + o(x^2)$$ and integrate again to get
$$\tan(x) = x + x^3/3 + o(x^3)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Combining a radical and simplifying? How would I combine and simplify the following radical:
$$\sqrt {\frac{A^2}{2}} - \sqrt \frac{A^2}{8}$$
| $\sqrt{\frac{A^2}{2}} - \sqrt{\frac{A^2}{8}} = \frac{A}{\sqrt{2}} - \frac{A}{2\sqrt{2}} = \frac{2A}{2\sqrt{2}} - \frac{A}{2\sqrt{2}} = \frac{A}{2\sqrt{2}}$
Assuming $A \geq 0$. If $A < 0$, you can replace with $|A|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/157967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Infinite products - reference needed! I am looking for a small treatment of basic theorems about infinite products ; surprisingly enough they are nowhere to be found after googling a little. The reason for this is that I am beginning to read Davenport's Multiplicative Number Theory, and the treatment of L-functions in there requires to understand convergence/absolute convergence of infinite products, which I know little about. Most importantly I'd like to know why
$$
\prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0.
$$
I believe I'll need more properties of products later on, so just a proof of this would be appreciated but I'd also need the reference.
Thanks in advance,
| Complex Analysis, Princeton Lectures in Analysis by Stein and Shakarchi. p 140-141.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 2
} |
Embedding an ideal to an extension of an algebraic number field I'm looking for a proof of the following well-known proposition.
I checked some books on algebraic number theory but could not find it.
Proposition
Let $L$ be a finite extension of an algebraic number field $K$.
Let $A$ and $B$ be the rings of integers in $K$ and $L$ respectively.
Let $I$ be an ideal of $A$.
Then $I = IB \cap A$.
| You can also prove the equality directly using properties of Dedekind domains.
Let $a\in IB\cap A$. For any maximal ideal $\mathfrak p$ of $A$ and for any maximal ideal $\mathfrak q$ of $B$ lying over $\mathfrak p$, we have
$$ v_{\mathfrak p}(a)=v_{\mathfrak q}(a)/e_{\mathfrak q/\mathfrak p}\ge
v_{\mathfrak q}(IB)/e_{\mathfrak q/\mathfrak p}=v_{\mathfrak p}(I).$$
So $a\in I$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Notation for an indecomposable module. If $V$ is a 21-dimensional indecomposable module for a group algebra $kG$ (21-dimensional when considered as a vector space over $k$), which has a single submodule of dimension 1, what is the most acceptable notation for the decomposition of $V$, as I have seen both $1\backslash 20$ and $20/1$ used (or are both equally acceptable)?
| $V=1\backslash 20$? The 20/1 and 1\20 notation "for a module" looks nonsensical and bad for a few reasons, but then again math is a huge subject, so I could just be suffering from limited experience.
Is the idea of $1\backslash 20$ to say "$V$ has one submodule of codimension 20"? If so I imagine they have a better way of writing this than "module=number\number".
If $V$ had a single submodule $S$, I would talk about the quotient $V/S$, and its dimension $\dim (V/S)=20$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Numerical Analysis over Finite Fields Notwithstanding that it isn't numerical analysis if it's over finite fields, but what topics that are traditionally considered part of numerical analysis still have some substance to them if the reals are replaced with finite fields or an algebraic closure thereof? Perhaps using Hamming distance as a metric for convergence purposes, with convergence of an iteration in a discrete setting just meaning that the Hamming distance between successive iterations becomes zero i.e. the algorithm has a fixed-point.
I ask about still having substance because I suspect that in the ff setting, na topics will mostly either not make sense, or be trivial.
| There is substantial interest in vector spaces over finite fields. Much of the research falls into the area of Computational group theory or computations in Modular Representation theory. In particular Gauss-Jordan Elimination works and if I recall correctly has the same computational complexity as over characteristic 0.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
In which case $M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$ is true? Usually for modules $M_1,M_2,N$
$$M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$$
is wrong. I'm just curious, but are there any cases or additional conditions where it gets true?
James B.
| I recommend A Crash Course on stable range, cancellation,
substitution and exchange by T.Y. Lam, 2004, which is a pretty good guide to the phenomenon.
Update: Another question surfaced which contains a sufficient condition for cancellation. The condition is: $\hom(M_1,N)=\hom(M_2,N)=\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Why is there no functor $\mathsf{Group}\to\mathsf{AbGroup}$ sending groups to their centers? The category $\mathbf{Set}$ contains as its objects all small sets and arrows all functions between them. A set is "small" if it belongs to a larger set $U$, the universe.
Let $\mathbf{Grp}$ be the category of small groups and morphisms between them, and $\mathbf{Abs}$ be the category of small abelian groups and its morphisms.
I don't see what it means to say there is no functor $f: \mathbf{Grp} \to \mathbf{Abs}$ that sends each group to its center, when $U$ isn't even specified. Can anybody explain?
| The problem with such a functor is group theoretical, not categorical. The problem arises because morphisms between groups need not map centers to centers. It doesn't have anything to do with universes, smallness, or foundational issues.
Consider for example $G=C_2$, $H=S_3$, $K=C_2$, and the maps $f\colon G\to H$ sending the nontrivial element of $G$ to $(1,2)$, and $g\colon H\to K$ by viewing $S_3/A_3$ as the cyclic group of order $2$.
Since $Z(G) = Z(K) = C_2$, and $Z(H) = \{1\}$, such a putative functor $\mathcal{F}$ would give that $\mathcal{F}(f)\colon C_2\to\{1\}$ is the zero map $\mathbf{z}$, and $\mathcal{F}(g)\colon \{1\}\to C_2$ is the inclusion of the trivial group into $C_2$. But $g\circ f=\mathrm{id}_{C_2}$, so
$$\mathrm{id}_{C_2} = \mathcal{F}(\mathrm{id}_{C_2}) = \mathcal{F}(gf) = \mathcal{F}(g)\mathcal{F}(f) = \mathbf{z}$$
where $\mathbf{z}\colon C_2\to C_2$ is the zero map.
Thus, no such functor $\mathcal{F}$ can exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 2,
"answer_id": 1
} |
Divisibility is transitive: $\ a\mid b\mid c\,\Rightarrow\ a\mid c$ As the title says, if a number is divisible by a number, is it always divisible by that number's factors?
An example being that $100$ is divisible by $20$, it is also divisible by $10, 5, 4, 2$ as well?
Does this always apply?
| Since $a \mid b$, write $b = am$ and factor $a$ as a product of primes. Any factor $\mu$ of $a$ (a single prime factor, or a product of some of its prime factors) gives $a = \mu\nu$, where $\nu$ is the product of the rest of the factors, and hence
$$\mu (\nu m) = b \tag 1$$
Now, $(1)$ is the statement of divisibility read as $\mu$ divides $b$.
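A quick exhaustive check of this transitivity on a small range (a sketch; `divides` is my own helper name):

```python
def divides(a, b):
    """True if a divides b."""
    return b % a == 0

# if a | b and b | c, then a | c -- checked exhaustively on a small range
for c in range(1, 200):
    for b in range(1, c + 1):
        if divides(b, c):
            for a in range(1, b + 1):
                if divides(a, b):
                    assert divides(a, c)

# the example from the question: every listed divisor of 20 divides 100
assert all(divides(d, 100) for d in (20, 10, 5, 4, 2))
```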
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Prime as sum of three numbers whose product is a cube Good evening!
I am very new to this site. I would like to present the following material from Prof. Gandhi's notebook, together with my observations. It is a little long, with several questions, but I trust this site for good solutions/answers.
With the exception of the primes $2$, $5$ and $11$, every prime can be written as $x + y + z$, where $x$, $y$ and $z$ are positive integers. Interestingly, $x \times y \times z = c^3$, where $c$ is again a positive integer.
Let us see the magic for primes $3,7,13,31,43,73$
$$
\begin{align}
3 = 1 + 1 + 1 &\Longrightarrow 1 \times 1 \times 1 = 1^3\\
7 = 1 + 2 + 4 &\Longrightarrow 1 \times 2 \times 4 = 2^3\\
13 = 1 + 3 + 9 &\Longrightarrow 1 \times 3 \times 9 = 3^3\\
31 = 1 + 5 + 25 &\Longrightarrow 1 \times 5 \times 25 = 5^3\\
43 = 1 + 6 + 36 &\Longrightarrow 1 \times 6 \times 36 = 6^3\\
73 = 1 + 8 + 64 &\Longrightarrow 1 \times 8 \times 64 = 8^3\\
\end{align}
$$
Can you justify the above pattern? How can the above statement be generalized, either mathematically or by computer?
I have observed that it is true for primes less than $9500$. Can you provide a computational algorithm to verify this?
Also, we conjecture that except for $1, 2, 3, 5, 6, 7, 11, 13, 14, 15, 17, 22, 23$, every positive integer can be written as a sum of four positive integers whose product is a fourth power. Can we generalize this? Also, I want to know: are there numbers expressible as a sum of $n$ integers whose product is an $n$-th power?
Thank you so much.
edit
Concerning this cubic property :
Notice that this can be extended to hold for almost all squarefree positive integers $> 2$, not just the primes.
For instance:
we know for the prime $7$: $7=1+2+4$, so we also get $7A = 1A + 2A + 4A$, and $1A \cdot 2A \cdot 4A$ is simply equal to $8A^3$.
In fact this can be extended to all odd positive integers $>11$ if $25,121$ have a solution.
Hence I am interested in this and I placed a bounty.
I edited the question because it's too much for a comment and certainly not an answer.
By the way, I'm curious about this Gandhi person, though information about him does not get the bounty, naturally.
I would like to recall David Speyer's comment: every prime that is $1 \bmod 3$ is of the form $a^2+ab+b^2$, so that covers half the primes immediately.
So that might be a line of attack.
| I wrote a small script in MATLAB and verified your claim for all the primes less than $150,000$. Only $2,5,11$ do not satisfy your claim. For many of the primes, there are multiple ways to write it as a sum of three number such that its product is a cube. My script just looks for the first occurrence of such triplets for each prime.
Here is a .txt file with the "first" set of triplets for primes less than $150,000$.
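This is not the author's MATLAB script, but a minimal brute-force Python sketch of the same search (all function names are mine):

```python
def is_cube(n):
    """True if n is a perfect cube (n >= 0)."""
    c = round(n ** (1 / 3))
    return any(k >= 0 and k ** 3 == n for k in (c - 1, c, c + 1))

def cube_triplet(p):
    """First (x, y, z) with x <= y <= z, x + y + z == p and x*y*z a
    perfect cube, or None if no such triplet exists."""
    for x in range(1, p // 3 + 1):
        for y in range(x, (p - x) // 2 + 1):
            z = p - x - y
            if is_cube(x * y * z):
                return (x, y, z)
    return None

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

exceptions = [p for p in primes_upto(200) if cube_triplet(p) is None]
print(exceptions)  # [2, 5, 11], matching the claim above
```

For each prime it returns the first triplet in lexicographic order, e.g. $(1,2,4)$ for $7$ and $(1,3,9)$ for $13$.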
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 5,
"answer_id": 0
} |
Orthonormal basis Consider $\mathbb{R}^3$ together with inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2x_1 y_1+x_2 y_2+3 x_3 y_3$. Use the Gram-Schmidt procedure to find an orthonormal basis for $W=\text{span} \left\{(-1, 1, 0), (-1, 1, 2) \right\}$.
I don't get how the inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2 x_1 y_1+x_2 y_2+3 x_3 y_3$ would affect the approach to solving this question. When I did the Gram-Schmidt process, I got $v_1=(-1, 1, 0)$ and $v_2=(0, 0, 2)$ but then realized that you have to do something with the inner product before finding the orthonormal basis. Can someone please help me?
Update: So far I got $\{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0, 0, \frac{2}{\sqrt{12}})\}$ as my orthonormal basis but I'm not sure if I am doing it right with the given inner product.
| Perhaps you carried out the Gram-Schmidt algorithm using the ordinary inner product? I think that is the only way you could have gotten through without using the given inner product :)
Anyhow, you need to use the given inner product at each step of the orthonormalization procedure. Changing the inner product will change the output of the algorithm, because different inner products yield different lengths of vectors and report different "angles" between vectors.
For example, when you begin with the first step (normalizing $(-1,1,0)$), you should compute that $\langle(-1,1,0),(-1,1,0)\rangle=3$, and so the first vector would be $\frac{1}{\sqrt{3}}(-1,1,0)$.
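A sketch of the whole procedure with the given inner product (pure Python; names are mine):

```python
import math

def ip(x, y):
    """The given inner product: <x, y> = 2*x1*y1 + x2*y2 + 3*x3*y3."""
    return sum(w * a * b for w, a, b in zip((2.0, 1.0, 3.0), x, y))

def gram_schmidt(vectors):
    """Gram-Schmidt with respect to ip, normalizing at each step."""
    basis = []
    for v in vectors:
        u = list(map(float, v))
        for b in basis:
            c = ip(u, b)  # b is already unit, so this is the projection coefficient
            u = [ui - c * bi for ui, bi in zip(u, b)]
        norm = math.sqrt(ip(u, u))
        basis.append([ui / norm for ui in u])
    return basis

e1, e2 = gram_schmidt([(-1, 1, 0), (-1, 1, 2)])
```

This produces $e_1=\big(\tfrac{-1}{\sqrt3},\tfrac{1}{\sqrt3},0\big)$ and $e_2=\big(0,0,\tfrac{2}{\sqrt{12}}\big)$, confirming the basis stated in the question's update.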
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Repeatedly rolling a die and the tails of the multinomial distribution. For $1\leq i\leq n$ let $X_i$ be independent random variables, and let each $X_i$ be the uniform distribution on the set ${0,1,2,\dots,m}$ so that $X_i$ is like an $m+1$ sided die. Let $$Y=\frac{1}{n}\sum_{i=1}^n \frac{1}{m} X_i,$$ so that $\mathbb{E}(Y)=\frac{1}{2}$. I am interested in the tails of this distribution, that is the size of $$\Pr\left( Y \geq k\right)$$ where $\frac{1}{2}< k\leq 1$ is a constant.
In the case where $m=1$, we are looking at the binomial distribution, and $$\Pr\left( Y \geq k\right)= \frac{1}{2^n}\sum_{i=0}^{(1-k) n} \binom{n}{i}$$ and we can bound this above by $(1-k)n \binom{n}{(1-k)n}$ and below by $\binom{n}{(1-k)n}$ which yields $$\Pr\left( Y \geq k\right)\approx \frac{1}{2^n} e^{n H(k)}$$ where $H(x)=-\left(x\log x+(1-x)\log (1-x)\right)$ is the entropy function. (I use approx liberally)
What kind of similar bounds do we have on the tails of this distribution when $m\geq 2$? I am looking to use the explicit multinomial properties to get something stronger than what you would get using Chernoff or Hoeffding.
| The variance of $X_i$ is $\frac{m(m+2)}{12}$, so that of $Y$ is $\frac{m+2}{12mn}$. The central limit theorem leads to a normal approximation for moderate to large $n$:
$$\Pr\left( Y \geq k\right) \approx 1 - \Phi\left((2k-1){\sqrt{\frac{3mn}{m+2}}}\right)$$
Stein's method and other similar methods can help calculate the rates of convergence
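A sketch comparing the exact tail (computed by direct convolution) with this normal approximation (all names are mine):

```python
import math

def exact_tail(n, m, k):
    """Exact Pr(Y >= k) for Y = S/(m*n), S a sum of n iid uniform{0,...,m}."""
    dist = [1]  # counts for the sum of 0 variables
    for _ in range(n):
        new = [0] * (len(dist) + m)
        for s, c in enumerate(dist):
            for x in range(m + 1):
                new[s + x] += c
        dist = new
    thresh = math.ceil(k * m * n)
    return sum(dist[thresh:]) / (m + 1) ** n

def normal_tail(n, m, k):
    """The approximation above: 1 - Phi((2k - 1) * sqrt(3mn/(m + 2)))."""
    x = (2 * k - 1) * math.sqrt(3 * m * n / (m + 2))
    return 0.5 * (1 - math.erf(x / math.sqrt(2)))

n, m, k = 50, 2, 0.6
print(exact_tail(n, m, k), normal_tail(n, m, k))  # reasonably close for moderate n
```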
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
What is the result of sum $\sum\limits_{i=0}^n 2^i$
Possible Duplicate:
the sum of powers of $2$ between $2^0$ and $2^n$
What is the result of
$$2^0 + 2^1 + 2^2 + \cdots + 2^{n-1} + 2^n\ ?$$
Is there a formula on this? and how to prove the formula?
(It is actually to compute the time complexity of a Fibonacci recursive method.)
| Let $S = 2^0 + 2^1 + 2^2 + \cdots + 2^{n}$.
Then $2S = 2^1 + 2^2 + 2^3 + \cdots + 2^{n} + 2^{n+1}$.
Then
$$\begin{align*}
S = 2S - S &= & & 2^1 &+& 2^2 & + & 2^3 & + & 2^4 &+&\cdots &+& 2^{n} &+& 2^{n+1}\\
&& -2^0 -& 2^1 & - & 2^2 & - & 2^3 & - & 2^4 & - & \cdots & - & 2^n
\end{align*}$$
How much is that?
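Carrying out the subtraction telescopes to $S = 2^{n+1} - 1$; a quick numerical check of that closed form:

```python
# verify sum(2^i, i = 0..n) == 2^(n+1) - 1 on a small range
for n in range(20):
    S = sum(2 ** i for i in range(n + 1))
    assert S == 2 ** (n + 1) - 1
```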
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 4
} |
Integral of $\int 2\,\sin^{2}{x}\cos{x}\,dx$ I am asked as a part of a question to integrate $$\int 2\,\sin^{2}{x}\cos{x}\,dx$$
I managed to integrate it using integration by inspection:
$$\begin{align}\text{let } y&=\sin^3 x\\
\frac{dy}{dx}&=3\,\sin^2{x}\cos{x}\\
\text{so }\int 2\,\sin^{2}{x}\cos{x}\,dx&=\frac{2}{3}\sin^3x+c\end{align}$$
However, looking at my notebook the teacher did this:
$$\int -\left(\frac{\cos{3x}-\cos{x}}{2}\right)$$
And arrived to this result:
$$-\frac{1}{6}\sin{3x}+\frac{1}{2}\sin{x}+c$$
I'm pretty sure my answer is correct as well, but I'm curious to find out how he rewrote the integrand in a form we can integrate.
| Another way using a simple substitution:
$$I = \int 2\,\sin^{2}{x}\cos{x}\ \ dx$$
Let $u = \sin x, du = \cos x \ dx$
$$ I = 2\int u^2 \ du$$
$$I = \frac{2}{3} u^3$$
$$I = \frac{2}{3} \sin^3 x + C$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
How do I show this formula involving several variables? This is from Woll's "Functions of Several Variables," but there's no proof.
If $g$ is of class $C^k$ ($k \ge 2$) on a convex open set $U$ about $p$ in $\mathbb{R}^d$, then for each $q \in U$,
$$
g(q) = g(p) + \sum_{i=1}^d \frac{\partial g}{\partial r_i} \bigg|_p (r_i(q) - r_i(p)) + \sum_{i,j} (r_i(q) - r_i(p)) (r_j(q) - r_j(p)) \int_0^1 (1-t) \frac{\partial^2g}{\partial r_i \partial r_j} \bigg|_{p + t(q - p)} dt.
$$
It looks like Taylor's or mean value theorem. I especially don't understand the integral part.
| Consider function
$$
h(t)=g(p+t(q-p))
$$
and its Taylor series with the remainder in the integral form
$$
h(1)=h(0)+h'(0)+\int\limits_{0}^{1}(1-t)h''(t)dt
$$
Now note that
$$
h(0)=g(p)
$$
$$
h'(0)=\sum\limits_{i=1}^d\frac{\partial g}{\partial r_i}\biggl|_p(r_i(q)-r_i(p))
$$
$$
h''(t)=\sum\limits_{i=1}^d\sum\limits_{j=1}^d\frac{\partial^2 g}{\partial r_i\partial r_j}\biggl|_{p+t(q-p)}(r_i(q)-r_i(p))(r_j(q)-r_j(p))
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the probability of two people meeting? I am trying to figure out a solution to the following problem:
Let there be two groups of people, Group A and Group B. Group A represents x percent (e.g. 1%) of the world's population, and Group B represents y percent (e.g. 2%) of the world's population. What is the probability that a person from Group A will meet a person from Group B? Assume the following things:
*
*The average human being meets z people (e.g. 100,000) in a lifetime.
*The world's population is kept at a constant k people.
*Everyone in the world was born and will die at the same time.
Disclaimer:
I came up with this question myself, but I'm not a mathematician, so please feel free to clean this up if need be. Also, if there is not enough information in the problem to solve it, add assumptions and please indicate the reasons for adding them. The assumptions I wrote are my attempt at making the problem easier. If they are not necessary, and removing any produces a more accurate answer, then I encourage the removal of them.
| Let $a$ be the number of people in group $A$ and $b$ be the number of people in group $B$. Then the chance that you never meet somebody from group $B$ is the chance that all your $z$ acquaintances are outside, $\frac {k-b}k\frac{k-b-1}{k-1} \ldots \frac{k-b-z+1}{k-z+1}=\frac {(k-b)!(k-z)!}{k!(k-b-z)!}$ If the chances that an $A$ individual meets a $B$ individual are independent, the chance they never meet is $\left(\frac {(k-b)!(k-z)!}{k!(k-b-z)!}\right)^a$. Although it is not obvious, this should be symmetric in $a$ and $b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Parametric equation of a cone.
I have a cone with vertex $(a, b, c)$ and base circle with center $(x_0,y_0)$ and radius $R$. I can't work out a parametric representation of the three-dimensional region inside the cone. Any suggestions, please?
| Begin with a parametric representation of the base, using polar coordinates (with the center shifted to $(x_0,y_0)$. For any given point inside the cone, draw a straight line from the vertex through the point to the base of the cone. Let two parameters specify the latter point. Use $z$ as the third parameter, or perhaps a variable $t$ that varies linearly with $z$ so that $t=0$ at the vertex and $t=1$ at the base of the cone. Many minor variations over this theme are possible.
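A sketch of this parametrization in code, assuming the base circle lies in the plane $z=0$ (the question does not say; all names are mine):

```python
import math

def cone_point(t, r, theta, vertex, center, R):
    """Point inside the cone: t in [0, 1] runs from the vertex (t = 0) to
    the base plane (t = 1); (r, theta) with 0 <= r <= R are polar
    coordinates of the target point in the base disc."""
    assert 0 <= t <= 1 and 0 <= r <= R
    a, b, c = vertex
    x0, y0 = center
    bx = x0 + r * math.cos(theta)  # point in the base disc, plane z = 0
    by = y0 + r * math.sin(theta)
    return (a + t * (bx - a), b + t * (by - b), c + t * (0.0 - c))
```

For example, `cone_point(0, r, theta, ...)` is always the vertex, and `cone_point(1, r, theta, ...)` sweeps out the base disc.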
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Signature of a manifold as an invariant Could you help me to see why signature is a HOMOTOPY invariant? Definition is below (from Stasheff)
The *signature (index)* $\sigma(M)$ of a compact and oriented $n$-manifold $M$ is defined as follows. If $n=4k$ for some $k$, we choose a basis $\{a_1,...,a_r\}$ for $H^{2k}(M^{4k}, \mathbb{Q})$ so that the *symmetric* matrix $[\langle a_i \smile a_j, \mu\rangle]$ is diagonal. Then $\sigma (M^{4k})$ is the number of positive diagonal entries minus the number of negative ones. Otherwise (if $n$ is not a multiple of 4) $\sigma(M)$ is defined to be zero.
| I think that the "absolute value" of the signature is a homotopy invariant, not the signature itself! Indeed, $\sigma(-M) = -\sigma(M)$ ($-M$ is the manifold with the opposite orientation). And of course $M$ and $-M$ are diffeomorphic: the identity is an (orientation-reversing) diffeomorphism from $M$ to $-M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Positive Semi-Definite matrices and subtraction I have been wondering about this for some time, and I haven't been able to answer the question myself. I also haven't been able to find anything about it on the internet. So I will ask the question here:
Question: Assume that $A$ and $B$ both are positive semi-definite. When is $C = (A-B)$ positive semi-definite?
I know that I can figure it out for given matrices, but I am looking for a necessary and sufficient condition.
It is of importance when trying to find solutions to conic-inequality systems, where the cone is the cone generated by all positive semi-definite matrices. The question I'm actually interested in finding nice result for are:
Let $x \in \mathbb{R}^n$, and let $A_1,\ldots,A_n,B$ be positive semi-definite. When is
$(\sum^n_{i=1}x_iA_i) - B$
positive semi-definite?
I feel the answer to my first question should yield the answer to the latter. I am looking for something simpler than actually calculating the eigenvalues.
| There's no easy answer. The relation is usually denoted $B\leq A$. The most useful test I can think of is that $B\leq A$ if and only if
$$
x^TBx\leq x^TAx, \ \ x\in\mathbb{R}^n.
$$
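A tiny illustration of the quadratic-form test (names mine): $A$ and $B$ below are both positive semi-definite, yet $A - B$ is not, since $x^TBx \le x^TAx$ fails at $x=(1,0)$.

```python
def quad(M, x):
    """x^T M x for a square matrix M given as nested lists."""
    n = len(x)
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1, 0], [0, 1]]  # identity: PSD
B = [[2, 0], [0, 0]]  # PSD (eigenvalues 2 and 0)
x = (1, 0)
print(quad(B, x), quad(A, x))  # 2 > 1, so A - B is not PSD
```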
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Traces of all positive powers of a matrix are zero implies it is nilpotent Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^n)=0$ for all positive integers $n$.
What I would like to show is the converse, that is,
if $\text{tr}(A^n)=0$ for all positive integers $n$, then $A$ is nilpotent.
I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$.
May I know of the approach to show that $A$ is nilpotent?
| Here is an argument that does not involve Newton's identities, although it is still closely related to symmetric functions. Write
$$f(z) = \sum_{k\ge 0} z^k \text{tr}(A^k) = \sum_{i=1}^n \frac{1}{1 - z \lambda_i}$$
where $\lambda_i$ are the eigenvalues of $A$. As a meromorphic function, $f(z)$ has poles at the reciprocals of all of the nonzero eigenvalues of $A$. Hence if $f(z) = n$ identically, then there are no such nonzero eigenvalues.
The argument using Newton's identities, however, proves the stronger statement that we only need to require $\text{tr}(A^k) = 0$ for $1 \le k \le n$. Newton's identities are in fact equivalent to the identity
$$f(z) = n - \frac{z p'(z)}{p(z)}$$
where $p(z) = \prod_{i=1}^n (1 - z \lambda_i)$. To prove this identity it suffices to observe that
$$\log p(z) = \sum_{i=1}^n \log (1 - z \lambda_i)$$
and differentiating both sides gives
$$\frac{p'(z)}{p(z)} = \sum_{i=1}^n \frac{- \lambda_i}{1 - z \lambda_i}.$$
(The argument using Newton's identities is also valid over any field of characteristic zero.)
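The Newton's-identities step can be checked numerically: if $p_1=\cdots=p_n=0$, all elementary symmetric functions of the eigenvalues vanish, so the characteristic polynomial is $\lambda^n$. A sketch (names mine):

```python
def elementary_from_power_sums(p):
    """e_1, ..., e_n from power sums p_1, ..., p_n via Newton's identities:
    k*e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i."""
    n = len(p)
    e = [1.0]  # e_0 = 1
    for k in range(1, n + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]

# eigenvalues {1, 2}: p = (3, 5) gives e = (3, 2)
print(elementary_from_power_sums([3, 5]))    # [3.0, 2.0]
# all power sums zero => all e_k zero => char poly is lambda^n
print(elementary_from_power_sums([0, 0, 0]))  # [0.0, 0.0, 0.0]
```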
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 3,
"answer_id": 2
} |
What is the correct way to solve $\sin(2x)=\sin(x)$
I've found two different ways to solve this trigonometric equation
$\begin{align*}
\sin(2x)=\sin(x) \Leftrightarrow \\\\ 2\sin(x)\cos(x)=\sin(x)\Leftrightarrow \\\\ 2\sin(x)\cos(x)-\sin(x)=0 \Leftrightarrow\\\\ \sin(x) \left[2\cos(x)-1 \right]=0 \Leftrightarrow \\\\ \sin(x)=0 \vee \cos(x)=\frac{1}{2} \Leftrightarrow\\\\ x=k\pi \vee x=\frac{\pi}{3}+2k\pi \vee x=\frac{5\pi}{3}+2k\pi \space, \space k \in \mathbb{Z}
\end{align*}$
The second way was:
$\begin{align*}
\sin(2x)=\sin(x)\Leftrightarrow \\\\ 2x=x+2k\pi \vee 2x=\pi-x+2k\pi\Leftrightarrow \\\\ x=2k\pi \vee3x=\pi +2k\pi\Leftrightarrow \\\\x=2k\pi \vee x=\frac{\pi}{3}+\frac{2k\pi}{3} \space ,\space k\in \mathbb{Z}
\end{align*}$
What is the correct one?
Thanks
| For what it's worth, the second one is "better" because it generalizes nicer. Imagine solving $\sin(3 x) = \sin(x)$ using the first method ($\sin(3 x) = 3\cos^2(x)\sin(x) - \sin^3(x)$). On the other hand, it's easy to see that $\sin(a x)= \sin(b x)$ will have an infinite number of solutions for any real $a$ and $b$ from the second method.
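A small numeric check (my own addition, not from the answer) confirms that the two answers describe the same solution set: reduced mod $2\pi$, both methods yield exactly $\{0, \pi/3, \pi, 5\pi/3\}$.

```python
import math

TWO_PI = 2 * math.pi

def reduce_mod_2pi(x, tol=1e-6):
    """Reduce x into [0, 2*pi) and round, treating values next to 2*pi as 0."""
    s = x % TWO_PI
    return 0.0 if s > TWO_PI - tol else round(s, 6)

# First method: x = k*pi, x = pi/3 + 2k*pi, x = 5*pi/3 + 2k*pi
first = set()
for k in range(-10, 11):
    first.update({reduce_mod_2pi(k * math.pi),
                  reduce_mod_2pi(math.pi / 3 + 2 * k * math.pi),
                  reduce_mod_2pi(5 * math.pi / 3 + 2 * k * math.pi)})

# Second method: x = 2k*pi, x = pi/3 + 2k*pi/3
second = set()
for k in range(-10, 11):
    second.update({reduce_mod_2pi(2 * k * math.pi),
                   reduce_mod_2pi(math.pi / 3 + 2 * k * math.pi / 3)})

print(first == second)   # True: both methods give the same solution set
print(sorted(first))     # 0, pi/3, pi, 5*pi/3 (rounded to 6 decimals)
```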
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Motivation for Koszul complex Koszul complex is important for homological theory of commutative rings.
However, it's hard to guess where it came from.
What was the motivation for Koszul complex?
| I don't know the historical origins, but it is not so hard to make up a story:
Consider the basic example
$$0 \to k[x] \to k[x] \to k \to 0,$$
where the middle arrow is mult. by $x$. This is a resolution
of $k = k[x]/(x)$ as a $k[x]$-module.
Now suppose you want to generalize this to obtain a resolution of $k = k[x_1,...,x_n]/(x_1,...,x_n)$ as a $k[x_1,...,x_n]$-module. It is not hard to see that you need "one copy" of the above sequence for each variable; tensoring these all together over $k$ gives you the usual Koszul resolution of $k$ over $k[x_1,...,x_n]$.
It is not hard to pass now to the more general context of elements $a_1,\ldots,a_n$ in a ring $A$, and to imagine that the Koszul complex of $a_1,\ldots,a_n$ will be related to the module $A/(a_1,\ldots,a_n)$.
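To make the $n=2$ case concrete (a standard computation, spelling out the tensor product above): over $R = k[x,y]$, the total complex of the two one-variable resolutions is
$$0 \to R \xrightarrow{\binom{-y}{x}} R^2 \xrightarrow{(x\ \ y)} R \to k \to 0,$$
where the composite is zero since $(x\ \ y)\binom{-y}{x} = -xy + yx = 0$, and exactness of this sequence is precisely the statement that $x,y$ form a regular sequence in $R$.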
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
} |
Generating function of Lah numbers Let $L(n,k)\!\in\!\mathbb{N}_0$ be the Lah numbers. We know that they satisfy
$$L(n,k)=L(n\!-\!1,k\!-\!1)+(n\!+\!k\!-\!1)L(n\!-\!1,k)$$
for all $n,k\!\in\!\mathbb{Z}$. How can I prove
$$\sum_nL(n,k)\frac{x^n}{n!}=\frac{1}{k!}\Big(\frac{x}{1-x}\Big)^k$$
without using the explicit formula $L(n,k)\!=\!\frac{n!}{k!}\binom{n-1}{k-1}$?
Attempt 1: $\text{LHS}=\sum_nL(n\!-\!1,k\!-\!1)\frac{x^n}{n!}+\sum_n(n\!+\!k\!-\!1)L(n\!-\!1,k)\frac{x^n}{n!}\overset{i.h.}{=}?$
Attempt 2: $\text{RHS}\overset{i.h.}{=}$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n,k\!-\!1)\frac{x^n}{n!}=$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n\!-\!1,k\!-\!1)\frac{x^{n-1}}{(n-1)!}=$
$\frac{1}{k(1-x)}\sum_nn\big(L(n,k)-(n\!+\!k\!-\!1)L(n\!-\!1,k)\big)\frac{x^n}{n!}=?$
| The given recurrence relation can be used to show that the Lah number $L(n,k)$ counts the number of poset structures on a set with $n$ elements that are a disjoint union of $k$ non-empty chains.
In the language of combinatorial species, $L(n,k)$ counts the number of $E_k\circ L_+$-structures on a set of cardinality $n$, where $E_k$ is the species of sets of size $k$ and $L_+$ is the species of non-empty linear orders.
Since $E_k$ has exponential generating function $E_k(x)=\frac{x^k}{k!}$ and $L_+$ has exponential generating function $L_+(x)=\frac x{1-x}$, $E_k(L_+)$ has exponential generating function $E_k(L_+(x))=\frac 1{k!}\left(\frac x{1-x}\right)^k$.
Therefore,
$$
\sum_{n=0}^{\infty} L(n,k)\frac{x^n}{n!} = \frac 1{k!}\left(\frac x{1-x}\right)^k
$$
as required.
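A numeric cross-check (my own addition): the recurrence from the question generates values matching the coefficients of $\frac{1}{k!}\left(\frac{x}{1-x}\right)^k$, which are $\frac{n!}{k!}\binom{n-1}{k-1}$ (the explicit formula quoted in the question, since $[x^n]\,x^k(1-x)^{-k} = \binom{n-1}{k-1}$).

```python
from math import comb, factorial
from functools import lru_cache

# L(n,k) via the recurrence L(n,k) = L(n-1,k-1) + (n+k-1) L(n-1,k),
# with L(0,0) = 1 and L(n,k) = 0 outside the first quadrant.
@lru_cache(maxsize=None)
def L(n, k):
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return L(n - 1, k - 1) + (n + k - 1) * L(n - 1, k)

# Compare with n! * [x^n] (1/k!)(x/(1-x))^k = (n!/k!) * C(n-1, k-1).
for k in range(1, 8):
    for n in range(1, 12):
        rhs = factorial(n) * comb(n - 1, k - 1) // factorial(k)
        assert L(n, k) == rhs, (n, k)

print("recurrence matches the generating function coefficients")
```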
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Prove the convergence/divergence of $\sum \limits_{k=1}^{\infty} \frac{\tan(k)}{k}$ Can it be easily proved that the following series converges or diverges?
$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$
I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.
| Let $\mu$ be the irrationality measure of $\pi^{-1}$. Then for $s < \mu$ given, we have sequences $(p_n)$ and $(q_n)$ of integers such that $0 < q_n \uparrow \infty$ and
$$\left| \frac{1}{\pi} - \frac{2p_n + 1}{2q_n} \right| \leq \frac{1}{q_n^{s}}.$$
Rearranging, we have
$$ \left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right| \leq \frac{\pi}{q_n^{s-1}}.$$
This shows that
$$ \left|\tan q_n\right| = \left| \tan \left( \frac{\pi}{2} + \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right) \right| \gg \frac{1}{\left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right|} \gg q_n^{s-1}, $$
hence
$$ \left| \frac{\tan q_n}{q_n} \right| \geq C q_n^{s-2}.$$
Therefore the series diverges if $\mu > 2$. But as far as I know, there is no known result for lower bounds of $\mu$, and indeed we cannot exclude the possibility that $\mu = 2$.
p.s. Similar consideration shows that, for $r > s > \mu$ we have
$$ \left| \frac{\tan k}{k^{r}} \right| \leq \frac{C}{k^{r+1-s}}.$$
Thus if $r > \mu$, then
$$ \sum_{k=1}^{\infty} \frac{\tan k}{k^r} $$
converges absolutely!
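An illustration of the mechanism in this answer (not a proof): integers $k$ that land very close to an odd multiple of $\pi/2$ make $|\tan k|$ of size roughly the reciprocal of that distance, so the terms $\tan(k)/k$ are far from tending to $0$. For instance, $11$ is within about $0.0044$ of $7\pi/2$.

```python
import math

# 11 is within ~0.0044 of 7*pi/2, so tan(11) is roughly -1/0.0044 ~ -226.
print(math.tan(11) / 11)   # about -20.5, nowhere near 0

# Search for the worst term in a range: integers near odd multiples of pi/2
# produce spikes in |tan(k)/k|.
best = max(range(1, 100000), key=lambda k: abs(math.tan(k) / k))
print(best, math.tan(best) / best)
```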
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 3,
"answer_id": 2
} |
Evaluation of $\Xi(z)=\sum_{t=1}^{\infty}\frac{t^z}{e^t}$ I would like to try and evaluate the following gamma function inspired sum.
$$\Xi(z)=\sum_{t=1}^{\infty}\frac{t^z}{e^t}$$
According to my computations, for large $z$,
$$\Xi(z)\approx\Gamma (z+1)$$
and perhaps even
$$\Xi(z) \sim \Gamma (z+1)$$
Does a closed form exist for this sum?
| Your sum is in the form of the Polylogarithm. In fact, it is equal to $\operatorname{Li}_{-z}(1/e).$ When $z$ is a positive integer (so the order $-z$ is a negative integer), this sum is easily computed in closed form using the identity $\displaystyle x\frac{d}{dx} \operatorname{Li}_n (x) = \operatorname{Li}_{n-1} (x).$
By applying the Abel-Plana summation to the Polylogarithm series, we get
$$\operatorname{Li}_s(z) = {z\over2} + {\Gamma(1 \!-\! s, -\ln z) \over (-\ln z)^{1-s}} + 2z \int_0^\infty \frac{\sin(s\arctan t \,- \,t\ln z)} {(1+t^2)^{s/2} \,(e^{2\pi t}-1)} \,\mathrm{d}t
$$
and so $$\operatorname{Li}_{-z}(1/e) = \frac{1}{2e} + \Gamma(z+1,1) + \frac{2}{e} \int^{\infty}_0 \frac{ (1+t^2)^{z/2} \sin(-z \tan^{-1}t+t)}{e^{2\pi t} -1} dt .$$
where $\Gamma(s,x)$ is the incomplete gamma function. Your asymptotic would be explained if you could show why the remaining integral is comparatively small.
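A direct numeric check of the asymptotic claimed in the question (my own addition): truncating the sum $\sum_{t\ge 1} t^z e^{-t}$ (the tail beyond $t=400$ is negligible for these $z$) and comparing with $\Gamma(z+1)$ gives ratios very close to $1$ already for modest $z$.

```python
import math

# Compare Xi(z) = sum_{t>=1} t^z e^{-t} (truncated) with Gamma(z+1).
for z in (3, 5, 8, 12):
    xi = sum(t**z * math.exp(-t) for t in range(1, 400))
    ratio = xi / math.gamma(z + 1)
    print(z, ratio)
```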
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How to calculate all the four solutions to $(p+5)(p-1) \equiv 0 \pmod {16}$? This is a rather plain question, but there's something I just can't get.
Consider the congruence, for a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.
How come that, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$
Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?
Thanks
| The assertion $(p+5)(p-1) \equiv 0 \pmod{16}$ is equivalent to $16 \mid (p+5)(p-1)$. Then you consider cases: $2^4 \mid (p+5)$, $2^3 \mid (p+5)$ and $2 \mid p-1$, $2^2 \mid p+5$ and $2^2 \mid p-1$, etc.
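The case analysis can also be confirmed by brute force over the 16 residue classes (my own addition): a residue $r$ solves $(r+5)(r-1)\equiv 0 \pmod{16}$ exactly when $r \in \{1, 3, 9, 11\}$, matching the four solutions in the question.

```python
# Brute force over all residues mod 16; even r gives an odd product,
# so only odd residues can work.
solutions = [r for r in range(16) if (r + 5) * (r - 1) % 16 == 0]
print(solutions)  # [1, 3, 9, 11]
```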
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 1
} |
Is there any geometric way to characterize $e$? Let me explain it better: after this question, I've been looking for a way to place famous constants on the real line in a geometrical way -- just for fun. Placing $\sqrt2$ is really easy: constructing a $45^\circ$-$90^\circ$-$45^\circ$ triangle with unit legs gives me an idea of what $\sqrt2$ is. Extending this to $\sqrt5$, $\sqrt{13}$, and other algebraic numbers is easy using trigonometry; however, it turned out to be difficult to work with some transcendental constants. Constructing $\pi$ is easy using circumferences, but I couldn't figure out how I should work with $e$. Looking at the graph of $y = 1/x$
made me realize that $e$ is the point $\omega$ such that $\displaystyle\int_1^{\omega}\frac{1}{x}dx = 1$. However, I don't have any other ideas. And I keep asking myself:
Is there any way to "see" $e$ geometrically? And more: is it true that one can build any real number geometrically? Any help will be appreciated. Thanks.
| For a certain definition of "geometrically," the answer is that this is an open problem. You can construct $\pi$ geometrically in terms of the circumference of the unit circle. This is a certain integral of a "nice" function over a "nice" domain; formalizing this idea leads to the notion of a period in algebraic geometry. $\pi$, as well as any algebraic number, is a period.
It is an open problem whether $e$ is a period. According to Wikipedia, the answer is expected to be no.
In general, for a reasonable definition of "geometrically" you should only be able to construct computable numbers, of which there are countably many. Since the reals are uncountable, most real numbers cannot be constructed "geometrically."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58",
"answer_count": 14,
"answer_id": 1
} |