Help in a mathematical relation I'm trying to give my user the possibility to get a value using a mathematical relation. I need your help with this one. What is the value of $A$? $$ S = \frac{{\color{red}A}B^2}{4\tan\frac{\pi}{{\color{red}A}}} $$
As was pointed out in a comment, the equation is of a type that typically does not have a closed form solution. That leaves numerical procedures. But there is more to be said, that might be useful. Let $x=\frac{\pi}{A}$. Then we can rewrite our equation as $\cot x=kx$, where $k$ is a constant easily obtained from our parameters $B$ and $S$. A sketch of $y=\cot x$ and $y=kx$ shows that there are infinitely many solutions, and that large solutions $x$ are near a multiple of $\pi$. (In terms of $A$, small solutions $A$ are close to the reciprocal of an integer.) This brings us close to a very similar problem that has been much discussed, finding good estimates for large solutions of $\tan x=x$. We can get good general estimates for $\tan x$ when $x$ is close to an odd multiple of $\frac{\pi}{2}$. Essentially identical estimates can be made for $\cot x$ near multiples of $\pi$. So even though one cannot expect a closed form solution of the original equation, one can get good estimates of small solutions $A$ in terms of the parameters.
{ "language": "en", "url": "https://math.stackexchange.com/questions/123913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Fundamental Theorem of Algebra and Complex Numbers We had a quiz recently in a linear algebra course, and one of the true/false questions stated that The Fundamental Theorem of Algebra asserts that addition, subtraction, multiplication and division for real numbers can be carried over to complex numbers as long as division by zero is avoided. According to our teacher, the above statement is true. When I asked him about the reasoning behind it, he said something about the FTA asserting that the associative, commutative and distributive laws are valid for complex numbers, but I couldn't see this. Can someone explain whether the above statement is true and why? Thanks.
What your instructor probably had in mind is the following. Let $z_1=a_1+ib_1$ and $z_2=a_2+ib_2$ be two complex numbers. Then the polynomial $P(X)= [\frac{1}{2}X-a_1]+[\frac{1}{2}X-a_2]-i(b_1+b_2)$ has, by the FTA, a complex root, which we can define as $z_1+z_2$. Similar things can be done for $z_1z_2$ and $\frac{1}{z_1}$. Anyhow, given a polynomial $P$, even if somehow you can define the polynomial algebraically only by using real operations and $i$, you cannot use the FTA without first defining addition and multiplication. The FTA is not about the polynomial as an algebraic expression; it is about the polynomial as a function... To quote Zev, the FTA asserts that any non-constant polynomial with complex coefficients has a root in the complex numbers. But what does a root of a polynomial mean? How do you calculate/evaluate $P(z)$ if you don't know how to calculate the powers of $z$, multiply $z$ by the coefficients of the polynomial, and add the monomials of the polynomial together? The only way to get around this issue is to define $\mathbb C$ as the algebraic closure of $\mathbb R$; then the algebraic process of "algebraic closure" and its uniqueness show that any algebraic extension of $\mathbb R$ in which the FTA holds has to be $\mathbb C$... But then, if you use this definition of $\mathbb C$, how do you prove that $\mathbb C= \{ a+bi \mid a,b \in \mathbb R\}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/124014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Fermat Factorization Does anyone know how I can use Fermat Factorization to find the two prime factors of the integer $n = pq = 321179$? I am not sure how to go about solving this and any help would be much appreciated!
Let $m = \lceil \sqrt{n} \rceil = 567$. Now check whether $m^2-n$ is a square, then whether $(m+1)^2-n$ is a square, and so on. Once you have found a value $a$ (near $m$) for which $a^2-n$ is a square, say $a^2-n=b^2$, then $n = a^2-b^2 = (a-b)(a+b)$, which exhibits a factorization of $n$.
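For concreteness, here is a small Python sketch of the procedure described above (the function name and structure are my own, not part of the answer):

```python
import math

def fermat_factor(n):
    """Factor an odd n as a difference of squares: n = a^2 - b^2 = (a-b)(a+b).

    Starting from a = ceil(sqrt(n)), increment a until a^2 - n is a
    perfect square.
    """
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

p, q = fermat_factor(321179)
print(p, q)  # 509 631
```

For $n = 321179$ the search succeeds at $a=570$ with $570^2 - 321179 = 61^2$, so $n = (570-61)(570+61) = 509 \cdot 631$.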
{ "language": "en", "url": "https://math.stackexchange.com/questions/124080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sum of two closed sets in $\mathbb R$ is closed? Is there a counterexample for the claim in the question subject, that a sum of two closed sets in $\mathbb R$ is closed? If not, how can we prove it? (By sum of sets $X+Y$ I mean the set of all sums $x+y$ where $x$ is in $X$ and $y$ is in $Y$) Thanks!
Consider $\mathbb Z$ and $\sqrt 2\,\mathbb Z$: both are closed, but their sum is not...:) In fact, $\mathbb Z + \sqrt 2\,\mathbb Z$ is dense in $\mathbb R$ (and countable, so it is not all of $\mathbb R$; a proper dense subset of $\mathbb R$ cannot be closed).
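A numerical illustration of the density claim (brute force, only illustrative; the function is my own sketch): for any target $t$, suitable integers $m, n$ bring $m + n\sqrt2$ arbitrarily close to $t$.

```python
import math

def closest_in_sum(t, max_n=10**5):
    """Smallest |m + n*sqrt(2) - t| over integers n with |n| <= max_n,
    taking the best integer m for each n (brute force, for illustration)."""
    s = math.sqrt(2)
    best = abs(round(t) - t)
    for n in range(1, max_n + 1):
        for sn in (n, -n):
            r = t - sn * s
            best = min(best, abs(r - round(r)))
    return best
```

Membership is never exact (that would make $\sqrt2$ rational), but the achievable error shrinks as `max_n` grows.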
{ "language": "en", "url": "https://math.stackexchange.com/questions/124130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61", "answer_count": 5, "answer_id": 1 }
Image countable when state space is not? From Jacod / Protter: "Probability Essentials", Springer: Note that even if the state space (or range space) $T$ is not countable, the image $T'$ of $\Omega$ under $X$ (that is, all points $\{i\}$ in $T$ for which there exists an $\omega\in\Omega$ such that $X(\omega) = i$ ) is either finite or countably infinite. (where $X$ is a function (random variable) from $\Omega$ into a set $T$) I do not understand this. If $T$ is the uncountable set $\bf R$ (the real numbers), could the image also be uncountably infinite?
Generally speaking, if $f$ is a function then $f$ is always onto its range. If the domain of $f$ is countable (or generally can be well ordered) then $f$ has a right inverse, and therefore $|\operatorname{Rng}(f)|\le|\operatorname{Dom}(f)|$. By this property we have that if the domain of $f$ is countable then its range is at most countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/124194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A condition for a function to be constant I need to prove this result: Let $\alpha >1$ and $c\in\mathbb{R}$. If $f:U\subset\mathbb{R}^m\rightarrow\mathbb{R}^n$, $U$ open, satisfies $|f(x)-f(y)|\leq c|x-y|^\alpha$ for every $x$, $y$ $\in U$, then $f$ is constant on every component of $U$. I just don't have any idea how to start it; I'm taking my first multivariable analysis course now!
We show that $f$ is locally constant. Let $x_0\in U$ ($U$ is open), and let $r>0$ be such that $B(x_0,r)\subset U$. Then for $y\in B(x_0,r)$ and $n\geq 1$ \begin{align*} |f(x_0)-f(y)|&\leq \sum_{k=0}^{n-1}\left|f\left(x_0+\frac{k+1}n(y-x_0)\right)-f\left(x_0+\frac kn(y-x_0)\right)\right|\\ &\leq c\sum_{k=0}^{n-1}\left|x_0+\frac{k+1}n(y-x_0)-\left(x_0+\frac kn(y-x_0)\right)\right|^{\alpha}\\ &=c\sum_{k=0}^{n-1}n^{-\alpha}|y-x_0|^{\alpha}\\ &\leq cr^{\alpha}n^{1-\alpha}, \end{align*} and since $\alpha>1$, the right-hand side tends to $0$ as $n\to\infty$; hence $f(x_0)=f(y)$ for all $y\in B(x_0,r)$. A locally constant function is constant on each connected component of $U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/124263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are curves (generalized ellipses) with more than two focal points called and how do they look like? An ellipse is usually defined as the locus of points so that sum of the distances to the two foci is constant. But what are curves called which are defined as the locus of points so that the sum of the distances to three foci is constant? Trilipse? And what about $n$ foci? $n$-lipse? How do these curves look like? Is there any literature about them?
These are called $k$-ellipses. Yes, there is a literature. Here is one 2007 reference, which can lead you to others: "Semidefinite Representation of the $k$-Ellipse," arXiv:math/0702005v1. Caveat emptor: the Zariski closure of the 5-ellipse is an algebraic curve of degree 32(!); see Fig. 5 of that paper. [Figures omitted.]
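To experiment with these curves, one can compute points of a $k$-ellipse numerically: along each ray from the centroid of the foci, the sum of distances crosses the prescribed value $d$, and bisection finds the crossing (a sketch under the assumption that $d$ exceeds the distance sum at the centroid; all names are mine).

```python
import math

def k_ellipse_point(foci, d, theta):
    """Point at angle theta (from the centroid of the foci) where the sum
    of distances to the foci equals d, found by bisection along the ray."""
    cx = sum(f[0] for f in foci) / len(foci)
    cy = sum(f[1] for f in foci) / len(foci)

    def dist_sum(r):
        x, y = cx + r * math.cos(theta), cy + r * math.sin(theta)
        return sum(math.hypot(x - fx, y - fy) for fx, fy in foci)

    lo, hi = 0.0, 1.0
    while dist_sum(hi) < d:       # expand until we are outside the curve
        hi *= 2
    for _ in range(80):           # {r : dist_sum(r) < d} is an interval
        mid = (lo + hi) / 2       # containing 0, by convexity of dist_sum
        if dist_sum(mid) < d:
            lo = mid
        else:
            hi = mid
    r = (lo + hi) / 2
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))
```

Sweeping `theta` over $[0, 2\pi)$ traces out the whole 3-ellipse.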
{ "language": "en", "url": "https://math.stackexchange.com/questions/124333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 1, "answer_id": 0 }
Generalized Laws of Cosines and Sines I wonder the "laws of sines and cosines" in the two cases below and how to derive them. (or any related sources) (i) For geodesic triangles on a sphere of radius $R>0$. (so constant curvature $1/R^2$) (ii) On the upper half-plane, say $\{(x,y):y>0\}$ with metric $\frac{R^2}{y^2}\pmatrix{ 1& 0\\\ 0& 1}.$ (which has curvature $-1/R^2$) Thank you so much.
A nice derivation for these formulas in the case $R = 1$ can be found in Thurston's Three-Dimensional Geometry and Topology, Volume 1, pp. 74--81. We think of $S^2$ as the unit sphere in $\mathbb{R}^3.$ Let $u, v, w$ be points on the sphere. We draw a spherical triangle between them on the unit sphere. Let $\theta_{u v}, \theta_{w u}, \theta_{v w}$ be, respectively, the lengths of the sides $u v, w u, v w.$ Let $\phi_u, \phi_v, \phi_w$ be the angles in the triangle at the respective vertices. The first spherical law of cosines (when $R = 1$) is $$\cos \phi_w = \frac{\cos \theta_{u v} - \cos \theta_{w u} \cos \theta_{v w}}{\sin \theta_{v w} \sin \theta_{w u}};$$ the second is like it: $$\cos \theta_{u v} = \frac{\cos \phi_v \cos \phi_u + \cos \phi_w}{\sin \phi_v \sin \phi_u}.$$ And the spherical law of sines is $$\frac{\sin \theta_{u v}}{\sin \phi_w} = \frac{\sin \theta_{w u}}{\sin \phi_v} = \frac{\sin \theta_{v w}}{\sin \phi_u}.$$ So, if instead we were to consider a sphere of radius $R,$ then these formulas would become $$\cos \phi_w = \frac{\cos (\theta_{u v}/R) - \cos (\theta_{w u}/R) \cos (\theta_{v w}/R)}{\sin (\theta_{w u}/R) \sin (\theta_{v w}/R)},$$ $$\cos (\theta_{u v}/R) = \frac{\cos \phi_v \cos \phi_u + \cos \phi_w}{\sin \phi_v \sin \phi_u},$$ and $$\frac{\sin(\theta_{u v}/R)}{\sin\phi_w}=\frac{\sin(\theta_{w u}/R)}{\sin\phi_v}=\frac{\sin(\theta_{v w}/R)}{\sin\phi_u}.$$ We can see this from the fact that dilations about the origin preserve ratios of lengths, and angles. To carry such intuition over to hyperbolic laws of cosines and sines, instead we should say that similarities preserve ratios of inner products. 
That is, a similarity $T: \mathbb{R}^3 \to \mathbb{R}^3$ satisfies, for the typical inner product on Euclidean space, $$\frac{\langle T a, T b \rangle}{\langle T c, T d \rangle} = \frac{\langle a, b \rangle}{\langle c,d \rangle}.$$ In fact, the above equation holds for any bilinear form $\langle\, ,\,\rangle.$ In particular, it holds for the Lorentz metric on three-space, which is $$\left\langle \begin{pmatrix} v_0\\ v_1\\ v_2 \end{pmatrix}, \begin{pmatrix} w_0\\ w_1\\ w_2 \end{pmatrix}\right\rangle = -v_0 w_0 + v_1 w_1 + v_2 w_2.$$ This also defines an indefinite quadratic form $Q(v) = \langle v, v \rangle.$ Now, the usual hyperbolic plane is given by restricting this Lorentz metric to the "sphere of radius $i$," i.e. $\mathcal{H}^2 = \{v\,|\,Q(v) = -1\}.$ So suppose we have three points $u,v,w$ on this "pseudosphere," and as before, we draw the hyperbolic triangle with these points as vertices, and we call the lengths of the sides $\theta_{u v},\theta_{w u},\theta_{v w},$ and the angles at the respective vertices $\phi_u, \phi_v, \phi_w.$ Then the hyperbolic laws of cosines and sines are $$\cosh \theta_{u v} = \cosh \theta_{w u} \cosh \theta_{v w} - \sinh \theta_{w u} \sinh \theta_{v w} \cos \phi_w,$$ $$\cos \phi_w = - \cos \phi_u \cos \phi_v + \sin \phi_u \sin \phi_v \cosh \theta_{u v},$$ and $$\frac{\sinh \theta_{u v}}{\sin \phi_w}=\frac{\sinh \theta_{w u}}{\sin \phi_v}=\frac{\sinh \theta_{v w}}{\sin \phi_u}.$$ Again, a dilation about the origin won't change ratios of the Lorentz form. This implies that dilations don't change hyperbolic angles, since such angles are defined in terms of such ratios. 
Therefore, if instead we were to consider the pseudosphere of radius $iR,$ we would have the laws $$\cosh (\theta_{u v}/R) = \cosh (\theta_{w u}/R) \cosh (\theta_{v w}/R) - \sinh (\theta_{w u}/R) \sinh (\theta_{v w}/R) \cos \phi_w,$$ $$\cos \phi_w = - \cos \phi_u \cos \phi_v + \sin \phi_u \sin \phi_v \cosh (\theta_{u v}/R),$$ and $$\frac{\sinh (\theta_{u v}/R)}{\sin \phi_w}=\frac{\sinh (\theta_{w u}/R)}{\sin \phi_v}=\frac{\sinh (\theta_{v w}/R)}{\sin \phi_u}.$$
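The unit-sphere ($R=1$) law of cosines above is easy to sanity-check numerically, computing the vertex angle independently from tangent vectors (a sketch; all helper names are mine):

```python
import math

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

def normalize(p):
    n = math.sqrt(dot(p, p))
    return tuple(a / n for a in p)

def vertex_angle(w, u, v):
    """Angle of the spherical triangle at vertex w, from the unit tangent
    vectors at w pointing toward u and v (independent of the law itself)."""
    def tangent(p):
        d = dot(w, p)
        return normalize(tuple(a - d * b for a, b in zip(p, w)))
    return math.acos(dot(tangent(u), tangent(v)))

u = normalize((1.0, 0.3, 1.0))
v = normalize((0.2, 1.0, 2.0))
w = (0.0, 0.0, 1.0)

t_uv, t_wu, t_vw = (math.acos(dot(u, v)), math.acos(dot(w, u)),
                    math.acos(dot(v, w)))
lhs = math.cos(vertex_angle(w, u, v))
rhs = ((math.cos(t_uv) - math.cos(t_wu) * math.cos(t_vw))
       / (math.sin(t_vw) * math.sin(t_wu)))
```

Both sides agree to machine precision, as the first spherical law of cosines predicts.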
{ "language": "en", "url": "https://math.stackexchange.com/questions/124406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Help in evaluating $\lim_{x \rightarrow \infty} \frac{1000^x}{x^x} = 0$ I suspect that $$\lim_{x \to \infty} \frac{1000^x}{x^x} = 0.$$ However, I do not know how to prove that this is the case. Any help would be greatly appreciated.
Write $$\frac{1000^x}{x^x} = \exp(x (\ln 1000 - \ln x))$$ What can you say about $\ln 1000 - \ln x$, and then about $x (\ln 1000 - \ln x)$, as $x \to +\infty$?
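A quick numerical check of the hint: the exponent $x(\ln 1000 - \ln x)$ is eventually negative and decreasing without bound, so the ratio collapses to $0$. (The helper name is mine.)

```python
import math

def log_ratio(x):
    """log of 1000**x / x**x, i.e. x * (log(1000) - log(x))."""
    return x * (math.log(1000.0) - math.log(x))

vals = [log_ratio(10.0 ** k) for k in (4, 5, 6)]
```

Already at $x = 10^4$ the exponent is about $-23000$, far below the underflow threshold of double precision.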
{ "language": "en", "url": "https://math.stackexchange.com/questions/124459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
a characterization of colinear points in a vector space with real inner product Let $a,b,c$ be three different points in a vector space $E$ over the real numbers, equipped with an inner product. Let $d$ be the metric defined by the inner product. Prove that if $ d(a,c) = d(a,b)+d(b,c) $ then $ c= tb + (1-t)a $ for some $ t > 1 $. The only thing I could do is write the equality in terms of the dot product, but I don't know how to use the new equality :/
Let $u = b-a, v = c-b$. Then you have to prove that if $\lVert u+v\rVert = \lVert u\rVert + \lVert v\rVert$ then $v = tu$ for some $t > 0$. Writing the equality in terms of the inner product and squaring both sides, you get $$\langle u, u\rangle + 2\langle u, v\rangle + \langle v, v\rangle = \langle u, u\rangle + 2\sqrt{\langle u, u\rangle\langle v, v\rangle } + \langle v, v\rangle,$$ from which follows $$\langle u, v\rangle^2 = \langle u, u\rangle\langle v, v\rangle$$ with $\langle u, v\rangle \geq 0$. Dividing by the norms (using linearity of the inner product), this gives $\langle e_u, e_v\rangle = 1$ (where $e_u = \frac{u}{\lVert u\rVert}, e_v = \frac{v}{\lVert v\rVert}$). Then $$\langle e_u-e_v, e_u-e_v\rangle = \langle e_u, e_u\rangle - 2\langle e_u, e_v\rangle+ \langle e_v, e_v\rangle = 0,$$ meaning that $e_u = e_v = e$, so $u = \lVert u\rVert e$, $v = \lVert v\rVert e$, and $v = \frac{\lVert v\rVert}{\lVert u\rVert}u$, QED. (Finally, $v = tu$ with $t>0$ translates back into $c = (1+t)b - ta$, i.e. $c = sb + (1-s)a$ with $s = 1+t > 1$, as required.)
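To see the equality case concretely, here is a small numerical sketch (the point names follow the question; the specific coordinates and the perturbation are my own illustration):

```python
import math

def d(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

a = (0.0, 1.0, 2.0)
b = (1.0, 2.0, 4.0)
t = 2.5                                                   # any t > 1
c = tuple(t * bi + (1 - t) * ai for ai, bi in zip(a, b))  # c = t*b + (1-t)*a

# b lies between a and c, so the triangle inequality is an equality:
gap_collinear = d(a, c) - (d(a, b) + d(b, c))

# perturb c off the line: the equality becomes a strict inequality
c2 = (c[0] + 1.0, c[1], c[2])
gap_perturbed = d(a, c2) - (d(a, b) + d(b, c2))
```

`gap_collinear` is zero (up to rounding) while `gap_perturbed` is strictly negative.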
{ "language": "en", "url": "https://math.stackexchange.com/questions/124514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the properties of functions that cannot be expressed in closed form? Do they necessarily have asymptotes? Can they be finite over the first interval ($0$ to $x$), infinite over the second ($x$ to $y$), and return to be finite over a third ($y$ to $z$)? When expressed as an infinite series, do the endpoints of their radii of convergence correspond to asymptotes? Thanks.
"Closed form" is an arbitrary term which basically means nothing. In its most extreme form, it would mean that only polynomials are in "closed form". Common wisdom would probably add trigonometric and exponential functions into the mix, but there is no substantial reasons why these functions would be considered "good" while others are "bad". To illustrate what I mean, consider these two functions: $$ f(x)=\sum_{k=0}^\infty\frac{x^k}{k!},\ \ \ g(x)=\sum_{k=0}^\infty\frac{x^k}{(5k+1)!}. $$ The first one is the exponential, a very respected function with its own notation, i.e. $f(x)=e^x$. The second function is "exotic" (I guess, I didn't make an effort to think about it, the point is just that it isn't one of the canonical functions) but it is probably almost as "good" as the exponential. Finally, to address your question directly, given almost any possible property of a function ("good" or "bad" behaviour), there is most likely a "non-closed-form" function with that property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/124568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
a reduction to Artinian case Let $f:X\rightarrow Y$ be a proper, smooth morphism of schemes over a field $k$. In a paper I read that by "classical reduction arguments", to compute $R^pf_{*}\Omega_{X/Y}$ one can assume that $Y=\operatorname{Spec}(A)$, where $A$ is an Artinian $k$-algebra. I always find assertions like this. Since one cannot ask for a survey of all such "reduction arguments", I decided to ask about this particular case. Of course it would be nice if one had such a survey, rather than a citation to "somewhere in EGA".
This post on MathOverflow seems to provide the survey you're looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/124648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ Need help finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ which allows me to show that the series $\sum_{k=1}^\infty (\sqrt{k+1} - \sqrt{k})$ is divergent.
Since $$\sum_{k=1}^n (\sqrt{k+1} - \sqrt{k}) =\sum_{k=1}^n (\sqrt{k+1} - \sqrt{k})\frac{\sqrt{k+1} + \sqrt{k}}{\sqrt{k+1} + \sqrt{k}} $$ $$ =\sum_{k=1}^n \frac{1}{\sqrt{k+1} + \sqrt{k}} \geq \frac12\sum_{k=1}^n \frac{1}{\sqrt{k+1}} \geq \frac12\sum_{k=1}^n \frac{1}{k+1}, $$ the series diverges by comparison with the harmonic series. The telescoping argument is much simpler, though: the partial sums are exactly $\sqrt{n+1}-1\to\infty$.
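Both arguments are easy to check numerically (helper names are mine):

```python
import math

def partial_sum(n):
    """Partial sum of the telescoping series sum_k (sqrt(k+1) - sqrt(k))."""
    return sum(math.sqrt(k + 1) - math.sqrt(k) for k in range(1, n + 1))

def minorant_sum(n):
    """The minorant (1/2) * sum_k 1/sqrt(k+1) from the comparison above."""
    return 0.5 * sum(1.0 / math.sqrt(k + 1) for k in range(1, n + 1))
```

The partial sums match $\sqrt{n+1}-1$ exactly (up to rounding) and dominate the minorant, as the termwise inequality $\frac{1}{\sqrt{k+1}+\sqrt k} \ge \frac{1}{2\sqrt{k+1}}$ predicts.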
{ "language": "en", "url": "https://math.stackexchange.com/questions/124708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
An example of a "pathological" power-spectral density function? Suppose that we are given a wide-sense stationary random process $X$ with autocorrelation function $R_X(t)$. Power spectral density $S_X(f)$ of $X$ is then given by the Fourier transform of $R_X(t)$, i.e. $S_X(f)=\mathcal{F}(R_X(t))$. I am wondering if there is a valid power spectral density function $S_X(f)$ such that, for a positive integer $n$, the integral over the entire frequency domain of the absolute value of $S_X(f)$ taken to the $n$-th power is not a finite constant. Formally, is there $S_X(f)$ such that: $$\int_{-\infty}^{\infty} |S_X(f)|^n df=\infty$$ I know that this is impossible for $n=1$, as $\int_{-\infty}^{\infty} S_X(f) df=R_X(0)=E[X^2]<\infty$, however, I haven't found any result for $n>1$. Perhaps it's very obvious one way or the other (though it seems to me that such $S_X(f)$ does not exist, but I can't find a formal proof). In any case, I would appreciate elucidation.
This is impossible just by definition. $S_X(f)$, if it exists, is the Radon-Nikodym derivative of the spectral measure $\mu$ with respect to the Lebesgue measure and, being nonnegative, necessarily $L^1$ as you noted. By definition, the Fourier transform of $S_X(f)$ (or $\mu$ in general) is $R_X(t)$. If one insists that $R_X(t)$ lies in the domain of the (inverse) Fourier transform, then the Fourier inversion theorem implies that $S_X(f)$ agrees almost everywhere with a continuous function, which is moreover bounded by $\int|R_X(t)|\,dt$. A bounded function in $L^1$ lies also in $L^p$ for every $p > 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/124760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show whether matrix is positive semidefinite or not Background and motivation: When creating a Mercer Kernel Function we need to show that the Gram matrix defined by the function is positive semidefinite. Let $A_1, \ldots, A_n$ be subsets of $\{0, 1, \ldots, D\}$. Let $S(X)$ be the smallest $k$ elements of the set $X$. so with $k= 3$ we have $S(\{5,7,1,2,8,3\}) = \{1,2,3\}$. Let's construct a matrix $G$ with $$ G_{i,j} = |S(A_i)\cap S(A_j)\cap S(A_i\cup A_j)| = |S(A_i)\cap S(A_j)\cap S(S(A_i)\cup S(A_j))| $$ (The last equality is pretty intuitive and easy to show). Is $G$ Positive semidefinite? $D$, $n$ and $k$ are all given positive integers. It is clear that $G$ is symmetric with only positive entries. For inspiration it is pretty easy to show that the matrix $M$ defined by $M_{i,j} = |S(A_i) \cap S(A_j)|$ is Positive semidefinite. Create a vector $v_i$ that has $v_{i,j} = 1$ if $j\in S(A_i)$ and $0$ otherwise. each $v_i$ is then a $D+1$ dimensional vector and we have $M_{i,j} = \langle v_i, v_j\rangle$, so $M$ is the product of a matrix and its transpose.
$G$ is positive semidefinite for $n = 2$. Proof: The principal minors of $G$ are $|S(A_1)|$, $|S(A_2)|$ and $|S(A_1)||S(A_2)| - |S(A_1) \cap S(A_2) \cap S(A_1 \cup A_2)|^2$. All principal minors are nonnegative, so $G$ is positive semidefinite. ($|S(A_1)||S(A_2)| - |S(A_1) \cap S(A_2) \cap S(A_1 \cup A_2)|^2 \geq |S(A_1)||S(A_2)| - |S(A_1)||S(A_2)| = 0$) Some numerical testing suggests that $G$ is always positive semidefinite, but I don't know how to prove it in higher dimensions.
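In the same spirit as the "numerical testing" mentioned above, here is a small pure-Python sketch that builds $G$ for random subsets and checks all principal minors (all names are mine; a symmetric matrix is PSD iff every principal minor is nonnegative):

```python
import itertools
import random

def smallest_k(A, k):
    """S(X): the set of the k smallest elements of X."""
    return set(sorted(A)[:k])

def gram(sets, k):
    """G[i][j] = |S(A_i) & S(A_j) & S(S(A_i) | S(A_j))|."""
    S = [smallest_k(A, k) for A in sets]
    n = len(sets)
    return [[len(S[i] & S[j] & smallest_k(S[i] | S[j], k)) for j in range(n)]
            for i in range(n)]

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def psd_by_minors(G):
    """All principal minors nonnegative <=> G (symmetric) is PSD."""
    n = len(G)
    return all(det([[G[i][j] for j in idx] for i in idx]) >= 0
               for r in range(1, n + 1)
               for idx in itertools.combinations(range(n), r))

# explore random instances with n = 4 subsets of {0,...,9} and k = 3
random.seed(0)
trials = [gram([set(random.sample(range(10), random.randint(3, 6)))
                for _ in range(4)], 3) for _ in range(50)]
print(sum(psd_by_minors(G) for G in trials), "of", len(trials), "PSD")
```

This only explores; it proves nothing beyond the $n=2$ case established above.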
{ "language": "en", "url": "https://math.stackexchange.com/questions/124829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Are the eigenvalues of $AB$ equal to the eigenvalues of $BA$? First of all, am I being crazy in thinking that if $\lambda$ is an eigenvalue of $AB$, where $A$ and $B$ are both $N \times N$ matrices (not necessarily invertible), then $\lambda$ is also an eigenvalue of $BA$? If it's not true, then under what conditions is it true or not true? If it is true, can anyone point me to a citation? I couldn't find it in a quick perusal of Horn & Johnson. I have seen a couple proofs that the characteristic polynomial of $AB$ is equal to the characteristic polynomial of $BA$, but none with any citations. A trivial proof would be OK, but a citation is better.
Notice that if $\lambda$ is an eigenvalue of $AB$, then $\det(AB-\lambda I)=0$, and hence $$\det(I-\lambda A^{-1}B^{-1})=\det(A^{-1}(AB-\lambda I)B^{-1})=\det(A^{-1})\det(AB-\lambda I)\det(B^{-1})=0.$$ This further implies that $$\det(BA-\lambda I)=\det(BA(I-\lambda A^{-1}B^{-1}))=\det(BA)\det(I-\lambda A^{-1}B^{-1})=0,$$ i.e., $\lambda$ is an eigenvalue of $BA$. This proof holds only for invertible matrices $A$ and $B$, though. For singular matrices you can show that $0$ is a common eigenvalue, but I can't think of a way to show that the rest of the eigenvalues are equal.
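As a quick cross-check of the general statement (including the singular case the proof above does not cover), one can compare the characteristic polynomials of $AB$ and $BA$ at a few integer points: two monic cubics agreeing at four or more points must be identical. The sample matrices and helper names here are my own:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def charpoly_at(M, t):
    """det(M - t*I), computed exactly for integer entries."""
    return det3([[M[i][j] - (t if i == j else 0) for j in range(3)]
                 for i in range(3)])

A = [[1, 2, 0], [0, 0, 0], [3, 1, 4]]   # singular on purpose (zero row)
B = [[2, 1, 1], [1, 3, 0], [0, 1, 1]]
AB, BA = matmul(A, B), matmul(B, A)

same = all(charpoly_at(AB, t) == charpoly_at(BA, t) for t in range(5))
```

Here `same` comes out true, consistent with the well-known identity that $AB$ and $BA$ always share a characteristic polynomial, invertible or not.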
{ "language": "en", "url": "https://math.stackexchange.com/questions/124888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87", "answer_count": 4, "answer_id": 3 }
Sequence of first differences strictly increasing? If $ \pi (x) $ := number of primes $ \leq x $, the operation $T(x_{n+1}) = x_{n+1} - \pi(x_{n+1}) = x_n$ gives a sequence whose elements are those for which repeated application of $T$ gives the preceding elements of the sequence. For example: $s(n) = \{1,2,4,8,14,22,33,48,66,\dots\}$. The next term is 90, because $T(90) = 66$. This does not adequately define the sequence, so explicitly, I constructed it as follows. I applied $T$ repeatedly to $n = 1,2,3,\dots$ until there was a repetition: 1: 1; 2: 1; 3: 1; 4: 2,1; 5: 2,1; 6: 3,1; 7: 3,1; 8: 4,2,1; 9: 5,2,1; 10: 6,3,1; 11: 6,3,1; 12: 7,3,1; 13: 7,3,1; 14: 8,4,2,1; ... The first number which results in a longer chain is the one I included. For example, 1 is the first number whose chain has length 1 (before repetitions), 4 is the first to which $T$ can be applied 2 times, 8 is the first to which $T$ can be applied 3 times, and so on. My question is whether the first differences between successive elements can be shown to be strictly increasing. This need not be the case: it might be that the only primes less than $x_{n+1}$ are those less than $x_n$, so that $\pi(x_{n+1}) = \pi(x_n)$. For instance, if (for the sake of argument) there were no prime between 90 and 114, the next element after 90 would be 114, and the difference $114 - 90$ would be equal to $90 - 66$. It's obviously not true in this example, but in general I don't think it's obviously true. I think this is equivalent to asking: can we show there is a prime between each pair of consecutive elements of the sequence as defined? Hopefully this is well-defined with the additional note. Thanks for any insight.
If $\pi(x_{n+1})=\pi(x_n)$ then there are no primes between $x_n$ and $x_n+\pi(x_n)$ which is to say, roughly, (and writing $m$ for $x_n$) no prime between $m$ and $m+{m\over\log m}$. Now it is widely believed that there is always a prime between $m$ and $m+2(\log m)^2$; if we could prove the Riemann Hypothesis, we would know that there is always a prime between $m$ and something like $m+\sqrt m\log m$; both of these are much shorter gaps than that between $m+{m\over\log m}$ (at least, for large $m$). I don't know offhand what the best unconditional bound is for gaps between primes (but you could probably find it by searching for some term like "gaps between primes"), but I think there's a number $c$ strictly between 1/2 and 1 such that it's known that there's always a prime between $m$ and $m+m^c$ for $m$ sufficiently large. That's enough to show that there are at most finitely many places where your first differences are not strictly increasing, maybe even to bring it within computational range to prove there are no such examples. And since the known results about prime gaps are so much weaker than the conjectured ones, I'd be confident that there are no examples at all.
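For what it's worth, the sequence from the question is easy to generate directly and the listed terms check out (a brute-force sketch; the names are mine):

```python
def prime_pi_table(limit):
    """pi[x] = number of primes <= x, via a simple sieve."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    pi = [0] * (limit + 1)
    for x in range(1, limit + 1):
        pi[x] = pi[x - 1] + sieve[x]
    return pi

LIMIT = 1000
pi = prime_pi_table(LIMIT)

def T(x):
    return x - pi[x]

# T is nondecreasing (each step adds 1 and subtracts 0 or 1), so an upward
# scan finds the smallest x with T(x) equal to the previous element.
seq = [1]
for x in range(2, LIMIT + 1):
    if T(x) == seq[-1]:
        seq.append(x)
```

The first ten terms come out as $1,2,4,8,14,22,33,48,66,90$, matching the question.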
{ "language": "en", "url": "https://math.stackexchange.com/questions/124937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine the limit of this sequence? I was wondering how to determine the limit of $ (n^p - (\frac{n^2}{n+1})^p)_{n\in \mathbb{N}}$ with $p>0$, as $n \to \infty$? For example, when $p=1$, the sequence is $ (\frac{n}{n+1})_{n\in \mathbb{N}}$, so its limit is $1$. But I am not sure how to decide it when $p \neq 1$. Thanks in advance!
By Lagrange's theorem (the mean value theorem), there is a $\xi_n\in(0,1)$ for each $n\in\mathbb N$ such that $(n+1)^p-n^p=p(n+\xi_n)^{p-1}$, so we have: $$n^p -\left(\frac{n^2}{n+1}\right)^p = \frac{n^p((n+1)^p-n^p)}{(n+1)^p}=\frac{n^pp(n+\xi_n)^{p-1}}{(n+1)^p}=p\left(\frac{n}{n+1}\right)^p(n+\xi_n)^{p-1}.$$ Now, for any $p\in(0,\infty)$, the expression $\left(\frac{n}{n+1}\right)^p$ will converge to $1$. For $p>1$, the expression $(n+\xi_n)^{p-1}$ will converge to $+\infty$ and for $p\in(0,1)$ will converge to $0$. So your original expression will also converge to $+\infty$ and $0$, respectively, in these two cases.
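The three regimes are easy to check numerically. A direct evaluation of $n^p - (n^2/(n+1))^p$ suffers from cancellation for large $n$, so the sketch below (my own rewrite, not part of the answer) uses the identity $n^p - (n^2/(n+1))^p = n^p\,(1-(n/(n+1))^p)$ together with `expm1`/`log1p`:

```python
import math

def term(n, p):
    """n**p - (n**2 / (n + 1))**p, evaluated stably as
    n**p * (1 - exp(-p * log1p(1/n)))."""
    return n ** p * (-math.expm1(-p * math.log1p(1.0 / n)))
```

At $n=10^6$ this gives roughly $1$ for $p=1$, a huge value for $p=2$, and a tiny one for $p=1/2$, matching the limits $1$, $+\infty$, and $0$.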
{ "language": "en", "url": "https://math.stackexchange.com/questions/125043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Ways to solve these types of indeterminations? I'm looking at these types of indeterminate forms and what I have to do to resolve them: $\lim\limits_{x\to\infty}{f(x)=\dfrac{\infty}{\infty}}$ $\lim\limits_{x\to\infty}{f(x)=\infty-\infty}$ $\lim\limits_{x\to\infty}{f(x)=\dfrac{0}{0}}$ $\lim\limits_{x\to\infty}{f(x)=\dfrac{n}{0}},n\not=0$ EDIT: Could anyone post examples of these indeterminations?
The answer will generally depend on the particular $f(x)$ (though for cases 1 and 3 there is a general tool, L'Hopital's Rule if you already know it, that will often work, and for case 4 you can always say the limit does not exist). The first limit type may be done by L'Hopital's rule (if both numerator and denominator are differentiable), or by algebraic manipulations dependent on the particular $f(x)$. The second type of limit will require some algebraic manipulations (usually particular to the $f(x)$ in question) to bring it to some manageable form. The third type may be done by L'Hopital's rule if both numerator and denominator are given by differentiable functions; or again by algebraic manipulations that depend on the particular form of $f$. In the fourth case, the limit does not exist. If $n\gt 0$ and the denominator is always positive for large enough $x$, then the limit will be $\infty$; same if $n\lt 0$ and the denominator is negative for all large enough $x$. If $n\gt 0$ and the denominator is negative for all large enough $x$, or $n\lt 0$ and the denominator is always positive for large enough $x$, then the limit will be $-\infty$. Otherwise, the limit will simply not exist and not diverge to either $\infty$ or $-\infty$.
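Since the question also asks for examples, here are numerical illustrations of each form (the particular functions are my own choices, evaluated at a large $x$ as a stand-in for $x\to\infty$):

```python
import math

x = 1e6  # a large x, standing in for x -> infinity

ex1 = (x**2 + 1) / (3 * x**2)    # inf/inf form; limit 1/3
ex2 = math.sqrt(x**2 + x) - x    # inf - inf form; limit 1/2
ex3 = math.sin(1 / x) / (1 / x)  # 0/0 form (as x -> inf); limit 1
ex4 = x**2 / (1 / x)             # n/0 form; diverges to +infinity
```

The first three approach finite limits ($1/3$, $1/2$, $1$), while the last grows without bound, matching the case analysis above.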
{ "language": "en", "url": "https://math.stackexchange.com/questions/125161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
An unwanted property of the set $T=\{x,\{x\} \}$ Let $T$ be $T=\{x,\{x\},y \}$ and let $f:A\rightarrow T, \ f(a):=x$, where $A=\{a\}$. Define $B=\{x,y \}$. Now a weird thing happens: We should have that $x \in f(f^{-1}(B))$ by construction of $f$, but instead we get that $\{x\} \subseteq f(f^{-1}(B))$, although $\{x\}$ isn't even in the image of $f$! The argument goes like this: $x\in B \Rightarrow f^{-1}(\{x\}) \subseteq f^{-1}(B) $ by definition. Then $f(f^{-1}(\{x \})) \subseteq f(f^{-1}(B))$. So far, so good. But $$f(f^{-1}(\{x \})) =\{x\} \neq x$$ by definition of the image of a set (the image of a set - even if it contains just one element - is also a set, not just the element!), which "proves" the above. My questions are: How does one deal with this exotic set? Should one modify the definition of the image of a one-element set to exclude this strange behaviour? Or is there a better explanation? EDIT: I'm terribly sorry for making so many mistakes. I hope I got everything right - but if I didn't, please excuse me - I am very tired and can't focus anymore. I shall correct any mistakes I find tomorrow.
In this problem, there is a problem with the notation we use: $f^{-1}(\{ x \})$ has two different meanings, the preimage of the element $\{ x \}$ and also the preimage of the set $\{ x \}$. The two meanings are completely different, and the "paradox" comes from the fact that during the proof we use both meanings at different spots. In $f^{-1}(\{x \}) \subseteq f^{-1}(B)$, $\{x\}$ is used as a subset of $B$. In $f(f^{-1}(\{x \})) =\{x\} \neq x$, $x$ is used as an element of $T$, and $\{x\}$ is again a set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/125209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Compute the Unit and Class Number of a pure cubic field $\mathbb{Q}(\sqrt[3]{6})$ Find a unit in $\mathbb{Q}(\sqrt[3]{6})$ and show that this field has class number $h=1$. I am done with the first part, which is relatively simple: Suppose that $\varepsilon$ is a unit in $\mathbb{Q}(\sqrt[3]{6})$. Then we have $\varepsilon=c+b\sqrt[3]{6}+a\sqrt[3]{6^2}$, since an integral basis of $\mathbb{Q}(\sqrt[3]{6})$ can be written as $\{1,\sqrt[3]{6},\sqrt[3]{6^2}\}$. Thus, $$\varepsilon=c+b\sqrt[3]{6}+a\sqrt[3]{6^2},$$ $$\sqrt[3]{6}\varepsilon=6a+c\sqrt[3]{6}+b\sqrt[3]{6^2},$$ $$\sqrt[3]{6^2}\varepsilon=6b+6a\sqrt[3]{6}+c\sqrt[3]{6^2}.$$ Since $\{1,\sqrt[3]{6},\sqrt[3]{6^2}\}$ is a basis, this system has the nonzero solution $(1,\sqrt[3]{6},\sqrt[3]{6^2})$, so $$\det\left( \begin{array}{ccc} c-\varepsilon & b & a \\ 6a & c-\varepsilon & b \\ 6b & 6a & c-\varepsilon \\ \end{array} \right) $$ is the characteristic polynomial of (multiplication by) $\varepsilon$. Since $\varepsilon$ is a unit in $\mathbb{Q}(\sqrt[3]{6})$ if and only if $N(\varepsilon)=\pm1$, we set $\varepsilon=0$ in the above determinant, and require $$\det\left( \begin{array}{ccc} c & b & a \\ 6a & c & b \\ 6b & 6a & c \\ \end{array} \right)=\pm1. $$ Computing the determinant, we find that $a=33,~b=60,~c=109$ is one of the solutions. Hence $\varepsilon=109+60\sqrt[3]{6}+33\sqrt[3]{6^2}$ is a unit in $\mathbb{Q}(\sqrt[3]{6})$. For the second part of the problem, I have no idea how to show that $\mathbb{Q}(\sqrt[3]{6})$ is a principal ideal domain. Any comment will be appreciated!
The first idea for computing units in such fields is finding a generator of a purely ramified prime. Here $2 - \sqrt[3]{6}$ has norm $2$, hence $$ (2 - \sqrt[3]{6})^3 = 2(1 - 6\sqrt[3]{6} + 3\sqrt[3]{6}^2) $$ is $2$ times a unit. Finding an element generating the prime ideal above $3$ is more difficult, but it turns out that $\beta = 3 + 2\sqrt[3]{6} + \sqrt[3]{6}^2$ is such an element with norm $3$. As above you now get $$ (3 + 2\sqrt[3]{6} + \sqrt[3]{6}^2)^3 = 3(109+60\sqrt[3]{6} +33\sqrt[3]{6}^2), $$ and you get the unit you mentioned in your question. Actually we have $$ \frac1{1 - 6\sqrt[3]{6} + 3\sqrt[3]{6}^2} = 109+60\sqrt[3]{6} +33\sqrt[3]{6}^2 . $$ Finding elements of norms $5$ and $7$ is rather easy, which then shows that the ring ${\mathbb Z}[\sqrt[3]{6}]$ is principal.
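These identities can be verified with exact integer arithmetic in $\mathbb{Z}[\sqrt[3]{6}]$. The sketch below is my own illustration, not part of the answer: elements are stored as integer triples $(a_0,a_1,a_2)$ standing for $a_0+a_1 c+a_2 c^2$ with $c=\sqrt[3]{6}$, and products are reduced using $c^3=6$ (hence $c^4=6c$).

```python
# Exact arithmetic in Z[c], c = 6^(1/3): triples (a0, a1, a2) mean a0 + a1*c + a2*c^2.
def mul(u, v):
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] += u[i] * v[j]
    # Reduce: c^3 = 6 feeds the constant term, c^4 = 6c feeds the c term.
    return (prod[0] + 6 * prod[3], prod[1] + 6 * prod[4], prod[2])

def cube(u):
    return mul(mul(u, u), u)
```

With these helpers one can check that $(2-c)^3=2(1-6c+3c^2)$, that $(3+2c+c^2)^3=3(109+60c+33c^2)$, and that $1-6c+3c^2$ and $109+60c+33c^2$ are indeed inverse units.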
{ "language": "en", "url": "https://math.stackexchange.com/questions/125291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
An integral of a complementary error function I would really appreciate it if someone could help me solve this integral: $$ \int \frac 1x \cdot \operatorname{Erfc}^n x\, dx,$$ where $\operatorname{Erfc}$ is the complementary error function, defined as $\operatorname{Erfc}(x)=\frac 2{\sqrt \pi}\int_x^{+\infty}e^{-t^2}\,dt$. Thank you
A Taylor series at $x=0$ may be found here: $$ \int \frac{\text{Erfc}^n(x)}{x}dx=\log(x)-\frac{2nx}{\sqrt{\pi}}+\frac{(n-1)nx^2}{\pi}+\cdots $$ There is also a result for $n=1$ given: $\log(x)-\frac{2x}{\sqrt{\pi}}{ _2F_2}\left(1/2,1/2;3/2,3/2;-x^2\right)$. EDIT: You get a series expansion for $\text{Erfc}^n(x)$ at $x=\infty$ here: $$ \text{Erfc}(x)^n=\left(1-2 \sum _{k=0}^{\infty } \frac{(-1)^k x^{2 k+1}}{\sqrt{\pi }(2 k+1) k!}\right){}^n $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/125344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve $\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx$? How can I solve the following integral? $$\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx, n \in \mathbb{N}$$
Of course it can be solved easily with complex residues. Since the integrand is even in $x$, you can replace the integral with $\frac12$ of the integral from $0$ to $2\pi$. Then make the substitution $z=e^{ix}$, which turns it into an integral over the unit circle. You end up with a function whose singularities inside the circle lie only at $0$ and $-\frac{1}{2}$. Then find the residues.
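Carrying the residue computation through gives the closed form $\frac{\pi}{3}\left(-\frac12\right)^n$ — that value is the standard result for this integral, and is my addition rather than something stated in the answer. A quick numerical sanity check with composite Simpson's rule:

```python
import math

def simpson(f, a, b, steps=4000):
    # Composite Simpson's rule (steps must be even).
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def integral(n):
    return simpson(lambda x: math.cos(n * x) / (5 + 4 * math.cos(x)), 0.0, math.pi)

def closed_form(n):
    return math.pi / 3 * (-0.5) ** n
```

For $n=0$ this reproduces the standard value $\int_0^\pi \frac{dx}{5+4\cos x}=\frac{\pi}{3}$.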
{ "language": "en", "url": "https://math.stackexchange.com/questions/125399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Sheffer stroke the most important advance in logic? I think I once read, or heard, that Bertrand Russell once said that the discovery that all logical operators are expressible in terms of the Sheffer stroke was the most significant advance in logic since the publication of Principia Mathematica, and that had he and Whitehead known about it beforehand, they would have proceeded completely differently. This is a really strange claim, so I imagine that I (or the person who told me) misunderstood, or misheard, or made it up completely. If Russell really did say something like this, what exactly did he say, and where and when?
The focus that Russell had was very different than the focus logicians have today. Russell's viewpoint of "logic" was tightly connected to Principia Mathematica, and his interpretations of other results depended on their relation with PM, because that system was the basis for his research program in logic and philosophy. Russell's work was the peak of the logicist program: to argue that all of mathematics can be expressed in pure "logic" and to study the properties of that "logic". The existence of a single operator to which all other "logical" operators can be reduced would be wonderful from that point of view - just as a single physical law that explains all of physics would be wonderful for the foundations of physics. The use of a single primitive would have reduced the number of "logical notions" that were needed in the foundations of PM, which was an important philosophical goal. So, in the context of PM, Russell's remarks were perfectly reasonable. The key is that his viewpoint of "logic" was directly focused on PM.
{ "language": "en", "url": "https://math.stackexchange.com/questions/125454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
If we define $\sin x$ as series, how can we obtain the geometric meaning of $\sin x$? In Terry Tao's textbook Analysis, he defines $\sin x$ as below: * *Define rational numbers *Define Cauchy sequences of rational numbers, and equivalence of Cauchy sequences *Define reals as the space of Cauchy sequences of rationals modulo equivalence *Define limits (and other basic operations) in the reals *Cover a lot of foundational material including: complex numbers, power series, differentiation, and the complex exponential *Eventually (Chapter 15!) define the trigonometric functions via the complex exponential. Then show the equivalence to other definitions. My question is how can we obtain the geometry interpretation of $\sin x$, that is, the ratio of opposite side and hypotenuse.
From the series, it is easy to see Euler's formula, $$ e^{ix} = \cos(x) + i\sin(x)$$ With more series manipulation, we can obtain the Pythagorean identity, $$|e^{ix}|^2 = e^{ix}\overline{e^{ix}} = (\cos(x) + i\sin(x))(\cos(x) - i\sin(x)) = \cos^{2}(x) + \sin^{2}(x) = 1$$ Knowing that $\sin(x)$ and $\cos(x)$ have range $[-1,1]$, and are odd and even functions respectively, we see that $e^{ix}$ traces out the unit circle in $\mathbb{C}$. From this, we can extract the geometric interpretation of sine and cosine.
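A small numerical sketch of this (my own illustration, not part of the answer): compute $e^{ix}$ from a truncated power series and check that the real and imaginary parts match cosine and sine, and that the point lies on the unit circle.

```python
import math

def exp_series(z, terms=40):
    # Partial sum of sum_{k>=0} z^k / k!, accurate to machine precision for |z| ~ 1.
    total, term = 0 + 0j, 1 + 0j
    for k in range(1, terms + 1):
        total += term
        term *= z / k
    return total

x = 1.2
e_ix = exp_series(1j * x)
cos_x, sin_x = e_ix.real, e_ix.imag
```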
{ "language": "en", "url": "https://math.stackexchange.com/questions/125511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 6, "answer_id": 4 }
Limit of $2^{1/n}$ as $n\to\infty$ is 1 How do I prove that: $\lim \limits_{n\to \infty}2^{1/n}=1$ Thank you very much.
Note that for $a\gt 0$, $$a^b = e^{b\ln a}.$$ So $$\lim_{n\to\infty}2^{1/n} = \lim_{n\to\infty}e^{\frac{1}{n}\ln 2}.$$ Since the exponential is continuous, we have $$\lim_{n\to\infty}e^{\frac{1}{n}\ln 2} = e^{\lim\limits_{n\to\infty}\frac{1}{n}\ln 2}.$$ Can you compute $\displaystyle\lim_{n\to\infty}\frac{\ln 2}{n}$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/125588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Show $ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$ Show $$ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$$
If we make the standard "Weierstrass" $t=\tan(x/2)$ substitution, we get $\cos x=\frac{1-t^2}{1+t^2}$ and $dx=\frac{2\,dt}{1+t^2}$. We end up quickly with $$\int_0^\infty \frac{1+t^2}{1+t^4}\,dt.$$ But $1+t^4=(1-\sqrt{2}t+t^2)(1+\sqrt{2}t +t^2)$, so by partial fractions our integrand is $$\frac{1}{2-2\sqrt{2}t+2t^2} +\frac{1}{2+2\sqrt{2}t+2t^2}.$$ Completing the squares, we end up with the integrand $$\frac{1}{1+(\sqrt{2}t-1)^2}+\frac{1}{1+(\sqrt{2}t+1)^2}.$$ The substitutions $u=\sqrt{2} t-1$ and $u=\sqrt{2}t+1$ give $$\int_{-1}^\infty \frac{1}{\sqrt{2}}\frac{du}{1+u^2}+\int_{1}^\infty \frac{1}{\sqrt{2}}\frac{du}{1+u^2}.$$ The first integral is $(1/\sqrt{2})(3\pi/4)$ and the second is $(1/\sqrt{2})(\pi/4)$. Add. We get $\pi/\sqrt{2}$.
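As a sanity check on the final value (this check is mine, not part of the answer), the integral can be approximated numerically with Simpson's rule:

```python
import math

def simpson(f, a, b, steps=2000):
    # Composite Simpson's rule (steps must be even).
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

value = simpson(lambda x: 1.0 / (1.0 + math.cos(x) ** 2), 0.0, math.pi)
```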
{ "language": "en", "url": "https://math.stackexchange.com/questions/125637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Why does $n \choose r$ relate to permutations? For example, how come $4 \choose 3$ (from 4 dice, choose 3 to be the same) can relate to the list: d, s, s, s s, d, s, s s, s, d, s s, s, s, d Where s = same number, and d = different number? Shouldn't this be a permutations problem?
Consider the number of permutations of $0$ and $1$ where there are $a$ $0$s and $b$ $1$s. You have a total of $a+b$ spots and you select $a$ of them to place the $0$s. This amounts to $\binom{a+b}{a}$. If you want to count as permutations, if the zeroes and ones were distinct, you would get $(a+b)!$ permutations. But each permutation where they are not distinct, gives rise to $a! b!$ permutations where they are considered distinct. Thus the total number of "not-distinct" permutations is $\frac{(a+b)!}{a!b!}$. Since the total is the same, irrespective of how you count them, you have just proved that (assuming a combinatorial definition of the binomial coefficient) $$\binom{a+b}{a} = \frac{(a+b)!}{a!b!}$$ See Also: What is the proof of permutations of similar objects?
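The identity can be illustrated by brute force (my own sketch, not from the answer): count the distinct arrangements of $a$ zeros and $b$ ones directly and compare with both formulas.

```python
import itertools
import math

def distinct_words(a, b):
    # Count distinct arrangements of a zeros and b ones by brute force:
    # generate all (a+b)! tuples and deduplicate.
    return len(set(itertools.permutations([0] * a + [1] * b)))
```

For example, $a=3$, $b=4$ gives $\binom{7}{3}=35$ distinct words out of $7!=5040$ raw permutations, exhibiting the $a!\,b!$-fold overcounting.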
{ "language": "en", "url": "https://math.stackexchange.com/questions/125716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counting words with subset restrictions I have an alphabet of N letters {A,B,C,D...N} and would like to count how many L-length words do not contain the pattern AA. I've been going at this all day, but continue to stumble on the same problem. My first approach was to count all possible combinations, (N^L) and subtract the words that contain the pattern. I tried to count the number of ways in which I can place 'AA' in L boxes, but I realized early on that I was double counting, since some words can contain the pattern more than once. I figured that if I had a defined length for the words and the set, I could do it by inclusion/exclusion, but I would like to arrive at a general answer to the problem. My gut feeling is that somehow I could overcount, and then find a common factor to weed out the duplicates, but I can't quite see how. Any help would be appreciated!
The answer supplied by Gerry might be correct, but the math to solve that equation is a bit beyond me. In the end, I gave up and counted directly :) For a word of length $L$ containing $j$ copies of the letter A, I lay down the $L-j$ non-A letters and treat the $j+1$ gaps around them (before, between and after) as boxes. Placing the A's into distinct gaps, at most one per gap, guarantees no two A's are adjacent; this can be done in $C(L-j+1,\,j)$ ways. Each of the $L-j$ non-A positions can hold any of the other letters, giving a factor of $(\text{number of letters}-1)^{L-j}$. My end result is the sum of this product over all feasible $j$. My programmatic solution:

int k = 10;   // alphabet size
int n = 2;    // word length
double res = 0;
for (int balls = 0; 2 * balls <= n + 1; balls++) {
    int boxes = n - balls;   // non-A letters; there are boxes + 1 gaps for the A's
    res += comb(boxes + 1, balls) * Math.pow(k - 1, boxes);
}

// binomial coefficient C(n, r)
static long comb(int n, int r) {
    if (r < 0 || r > n) return 0;
    long c = 1;
    for (int i = 0; i < r; i++) c = c * (n - i) / (i + 1);
    return c;
}
{ "language": "en", "url": "https://math.stackexchange.com/questions/125778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to get rid of the integral in this equation $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$? How to get rid of the integral $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$ when $f(x)=x^2$?
Go back to the basics! You need the derivative of the curve to begin with: here $\frac{d}{dx}x^2=2x$, so the integrand is $\sqrt{1+4x^2}$. When the integrand has a root of a sum of squares, immediately draw a right triangle with the appropriate sides (i.e. use a trigonometric substitution such as $2x=\tan\theta$), and the rest is an algebraic walk.
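For $f(x)=x^2$ a standard antiderivative of $\sqrt{1+4x^2}$ is $\frac{x}{2}\sqrt{1+4x^2}+\frac14\sinh^{-1}(2x)$ — this closed form is standard but is not given in the answer. A numerical cross-check of the arc length over $[0,1]$:

```python
import math

def simpson(f, a, b, steps=2000):
    # Composite Simpson's rule (steps must be even).
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# Arc length of f(x) = x^2 on [0, 1], numerically vs. the closed form.
numeric = simpson(lambda t: math.sqrt(1 + 4 * t * t), 0.0, 1.0)
closed = math.sqrt(5) / 2 + math.asinh(2) / 4
```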
{ "language": "en", "url": "https://math.stackexchange.com/questions/125828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
The $n^{th}$ root of the geometric mean of binomial coefficients. $\{{C_k^n}\}_{k=0}^n$ are binomial coefficients. $G_n$ is their geometrical mean. Prove $$\lim\limits_{n\to\infty}{G_n}^{1/n}=\sqrt{e}$$
$$G_n=\sqrt[n]{C_n^0C_n^1C_n^2\cdots C_n^n}=\sqrt[n]{\prod_{k=0}^n \binom{n}{k}}=\frac{(n!)^{(n+1)/n}}{G(n+2)^{2/n}},$$ where $G()$ is Barnes G-Function and $G(n+2)=\prod_{k=0}^{n}k!$. Note that $G_n$ itself diverges as $n \to \infty$, as can be seen here; it is the further root $G_n^{1/n}$ that converges to $\sqrt{e}$.
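A numerical illustration of the limit (mine, not from the answer): evaluate $G_n^{1/n}=\left(\prod_k\binom{n}{k}\right)^{1/n^2}$ via log-gamma and watch it creep toward $\sqrt e\approx 1.6487$; the convergence is slow, with error of order $\frac{\log n}{n}$.

```python
import math

def root_of_geometric_mean(n):
    # G_n^(1/n) = (prod_{k=0}^n C(n,k))^(1/n^2), via lgamma to avoid huge integers.
    log_prod = sum(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        for k in range(n + 1)
    )
    return math.exp(log_prod / n ** 2)

v100 = root_of_geometric_mean(100)
v1000 = root_of_geometric_mean(1000)
```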
{ "language": "en", "url": "https://math.stackexchange.com/questions/125890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
Convergence without metric or topology or sigma field. You can set some kind of convergence in a space of functions without using some metric or topology or sigma field?
Here is a trifle of an example that seems to suffice your requirement. Let $A$ and $B$ be any sets, and consider the space $B^A$ of functions from $A$ to $B$. Also, let $\omega$ be a nonprincipal ultrafilter on $\mathbb{N}$. That is, $\omega$ is a maximal filter on $\mathbb{N}$ that contains no finite sets. (The existence of such filter is ensured by the Axiom of Choice.) Then for a sequence $(f_n) \subset B^A$ of functions and a function $f \in B^A$, we say $$ f_n \stackrel{\omega}{\longrightarrow}f$$ if for every $x \in A$, the set $\{ n \in \mathbb{N} : f_n (x) = f(x) \}$ is contained in $\omega$. It is easy to prove the uniqueness of the limit, and if there is an algebraic structure on $B$, it easily follows that this notion of limit is compatible with the operations on $B$. But it does not capture any useful concept of 'closeness' (rather, it is just a description incognito of 'equal a.e. $n$ pointwise'), so it seems of little importance to consider this kind of notion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/125960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Using a keyhole contour I've noticed that some complex analysis textbooks discuss evaluating real-valued integrals like $\int_{0}^{\infty} \frac{\sqrt{x}}{1+x^{2}} \, dx $ using a keyhole contour before they have defined the Cauchy principal value of an integral (first definition). But isn't using a keyhole contour a principal value approach in the sense that the contour approaches the singularity at the origin in a symmetrical way?
The reason to use a keyhole contour is to do with the fact that $z^{1/2}$ has a branch point at the origin. You don't really need to know the concept of Cauchy Principal Value to be able to take the limit of the inner radius going to $0$. But, in a way you are right that it is a principal value approach.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I take the limit as $n$ goes to $\infty$ of $\frac{\sqrt{n}}{\log(n)}$? How do take this limit: $$ \lim_{n\to\infty} \frac{\sqrt{n}}{\log(n)}$$ I have a feeling that it is infinity, but I'm not sure how to prove it. Should I use L'Hopitals Rule?
Let $n = e^x$. Note that as $n \rightarrow \infty$, we also have $x \rightarrow \infty$. Hence, $$\lim_{n \rightarrow \infty} \frac{\sqrt{n}}{\log(n)} = \lim_{x \rightarrow \infty} \frac{\exp(x/2)}{x}$$ Note that $\displaystyle \exp(y) > \frac{y^2}{2}$, $\forall y > 0$ (Why?). Hence, we have that $$\lim_{n \rightarrow \infty} \frac{\sqrt{n}}{\log(n)} = \lim_{x \rightarrow \infty} \frac{\exp(x/2)}{x} \geq \lim_{x \rightarrow \infty} \frac{\frac{x^2}{8}}{x} = \lim_{x \rightarrow \infty} \frac{x}{8} = \infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/126099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Inequality involving the regularized gamma function Prove that $$Q(x,\ln 2) := \frac{\int_{\ln 2}^{\infty} t^{x-1} e^{-t} dt}{\int_{0}^{\infty} t^{x-1} e^{-t} dt} \geqslant 1 - 2^{-x}$$ for all $x\geqslant 1$. ($Q$ is the regularized gamma function.)
Here is a proof for $x\ge2$: $$ \begin{align} \int_0^{\log(2)}t^{x-1}e^{-t}\mathrm{d}t &\le\int_0^{\log(2)}t^{x-1}\mathrm{d}t\\ &=\frac1x\log(2)^x\tag{1} \end{align} $$ Thus, we get that $$ \frac{\int_0^{\log(2)}t^{x-1}e^{-t}\mathrm{d}t}{\int_0^\infty t^{x-1}e^{-t}\mathrm{d}t} \le\frac{\log(2)^x}{\Gamma(x+1)}\tag{2} $$ For $x\ge2$, $$ \frac{(2\log(2))^x}{\Gamma(x+1)}\le1\tag{3} $$ Once we show $(3)$, the result follows because $$ \begin{align} \frac{\int_{\log(2)}^\infty t^{x-1}e^{-t}\mathrm{d}t}{\int_0^\infty t^{x-1}e^{-t}\mathrm{d}t} &=1-\frac{\int_0^{\log(2)}t^{x-1}e^{-t}\mathrm{d}t}{\int_0^\infty t^{x-1}e^{-t}\mathrm{d}t}\\ &\ge1-\frac{\log(2)^x}{\Gamma(x+1)}\\ &\ge1-2^{-x}\tag{4} \end{align} $$ Inequality $(3)$ is equivalent to $$ \log(\Gamma(x+1))\ge x(\log(2\log(2)))\tag{5} $$ Note that $(5)$ holds at $x=2$ since $\log(2)>2\log(2\log(2))$ follows from $\log(2)<\sqrt{1/2}$. Since $\Gamma$ is log-convex and for $x\ge2$, $\frac{\mathrm{d}}{\mathrm{d}x}\log(\Gamma(x+1))\ge\frac32-\gamma>\log(2\log(2))$. Thus, $(5)$, and therefore $(3)$, hold for $x\ge2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Calculate the slope of a line passing through the intersection of two lines Let say I have this figure, I know slope $m_1$, slope $m_1$, $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$. I need to calculate slope $m_3$. Note the line with $m_3$ slope will always equally bisect line with $m_1$ slope and line with $m_2$.
Suppose $m_1$ is $\tan(x)$ and $m_2$ is $\tan(y)$. We basically want $$ \begin{eqnarray} \tan((x+y)/2) &=& (1-\cos(x+y))/\sin(x+y)\\ & =& (1-(\cos(x)\cos(y) - \sin(x)\sin(y)))/(\sin(x)\cos(y) + \cos(x)\sin(y)) \end{eqnarray} $$ This is easy as $\sin(x) = m_1/\sqrt{1+m_1^2}, \cos(x) = 1/\sqrt{1+m_1^2}$, and similarly for $y$. Call $\sqrt{1+m_1^2} = n_1$ and similarly $n_2$ for $m_2$. $$m_3 = \frac{1-\left(1/(n_1n_2) - m_1m_2/(n_1n_2)\right)}{m_1/(n_1n_2) + m_2/(n_1n_2)} = \frac{n_1n_2 + m_1m_2 - 1}{m_1+m_2}$$ So $m_3 = \left(\sqrt{(1+m_1^2)(1+m_2^2)} + m_1m_2 - 1\right)/(m_1 + m_2).$ You can easily check that if $m_2 = m_1$, then $m_3 = m_1$ here. Looks legit.
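A quick numerical check of the final formula (the helper name is mine): for lines at $0^\circ$ and $45^\circ$ the bisector should have slope $\tan 22.5^\circ=\sqrt2-1$, equal slopes should be fixed points, and the result should agree with $\tan\frac{x+y}{2}$ directly.

```python
import math

def bisector_slope(m1, m2):
    # Slope of the angle bisector of two lines with slopes m1, m2;
    # assumes m1 + m2 != 0 and both angles taken in (-pi/2, pi/2).
    n1 = math.sqrt(1 + m1 * m1)
    n2 = math.sqrt(1 + m2 * m2)
    return (n1 * n2 + m1 * m2 - 1) / (m1 + m2)
```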
{ "language": "en", "url": "https://math.stackexchange.com/questions/126237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Solving $217 x \equiv 1 \quad \text{(mod 221)}$ I am given the problem: Find an integer $x$ between $0$ and $221$ such that $$217 x \equiv 1 \quad \text{(mod 221)}$$ How do I solve this? Unfortunately I am lost.
Using the Euclid-Wallis Algorithm: $$ \begin{array}{r} &&1&54&4\\ \hline \color{red}{1}&0&1&\color{red}{-54}&217\\ 0&\color{green}{1}&-1&\color{green}{55}&-221\\ \color{red}{221}&\color{green}{217}&4&\color{blue}{1}&0 \end{array} $$ we get that $\color{green}{55\cdot217}\color{red}{-54\cdot221}=\color{blue}{1}$. Thus, $x=55\pmod{221}$.
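The same inverse can be computed with the extended Euclidean algorithm; the sketch below (the helper name is mine) mirrors the table above, and the built-in three-argument `pow` gives the same answer directly.

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(217, 221)
inverse = x % 221  # the inverse of 217 modulo 221
```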
{ "language": "en", "url": "https://math.stackexchange.com/questions/126286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Help me understand a 3d graph I've just seen this graph and while it's isn't the first 3d graph I've seen, as a math "noob" I never thought how these graphs are plotted. I can draw 2d graphs on paper by marking the input and output values of a function. It's also easy for me to visualize what the graph I'm seeing says about the function but what about graphs for functions with 2 variables? How do I approach drawing and understanding the visualization?
An illustration should make things more clear. Here is the function you mentioned: I have drawn contours (the black curves) to indicate the set of points for which the function is equal to a particular constant. As you go up the "wall" from one contour "rung" to the next, the value of the function (the z axis) increases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is there an abelian category of topological groups? There are lots of reasons why the category of topological abelian groups (i.e. internal abelian groups in $\bf Top$) is not an abelian category. So I'm wondering: Is there a "suitably well behaved" subcategory of $\bf Top$, say $\bf T$, such that $\bf Ab(T)$ is an abelian category? My first guess was to look for well behaved topological spaces (locally compact Hausdorff, compactly generated Hausdorff, and so on...) Googling a little shows me that compactly generated topological groups are well known animals, but the web seems to lack of a more categorical point of view. Any clue? Thanks in advance.
Perhaps it's worth posting an update here: by replacing topological spaces by condensed sets resp. pyknotic sets we can get an abelian category of abelian group objects, the condensed abelian groups resp. the pyknotic abelian groups; for a discussion of the first see Scholze's Lectures on Condensed Mathematics and for a discussion of the second see Barwick-Haine's Pyknotic objects, I. Basic notions. If I'm reading correctly, the category of compactly generated topological spaces embeds fully faithfully as a reflective subcategory of either condensed or pyknotic sets, and I believe this implies that the category of compactly generated abelian groups (not necessarily Hausdorff!) embeds fully faithfully as a reflective subcategory of either condensed or pyknotic abelian groups. If that's right, then this embedding preserves limits but it does not preserve colimits, and this is necessary to get an abelian category since we need to alter the behavior of cokernels, e.g. the cokernel of the map from $\mathbb{R}$ with the discrete topology to $\mathbb{R}$ with the Euclidean topology is nontrivial, which Scholze memorably uses as a motivating example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
General Fatou's Lemma How can I proof the general Fatou's Lemma without using the Monotone convergence Theorem. Lemma: Let $(X,\mathcal{M},\mu)$ be a measure space and $\{f_n\}$ a non-negative measurable sequence. Then $$ \int_X \liminf_{n\to\infty}f_n~d\mu \leq \liminf_{n\to\infty}\int_X f_n~d\mu.$$
Here is one proof based on the bounded convergence theorem, adapted from Durrett. Define $g_n(x) = \inf_{m\geq n} f_m(x)$. So, $f_n \geq g_n$ and $g_n \uparrow g(x) := \liminf_n f_n(x)$ as $n \to \infty$. By monotonicity of the integral, we know that $\newcommand{\du}{\,\mathrm d \mu} \int f_n \du \geq \int g_n \du$, whence $$\liminf_n \int f_n \du \geq \liminf_n \int g_n \du \>.$$ Suppose $X_n \uparrow X$ where $\mu(X_n) < \infty$. By the bounded convergence theorem, for fixed $m$, we have $$ \liminf_n \int g_n \du \geq \liminf_n \int_{X_m} g_n \wedge m \du = \int_{X_m} g \wedge m \du \>, $$ where the inequality holds because $g_n \geq g_n \wedge m$ on $X_m$, and the equality holds since the integrand in the middle is bounded and converges to the integrand on the right. But, then $$ \liminf_n \int g_n \du \geq \sup_m \int_{X_m} g \wedge m \du = \int \liminf_n f_n \du \>. $$ Since $\liminf_n \int f_n \du \geq \liminf_n \int g_n \du$, we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Double Integral, Change of Variables to Polar Coordinates Quick question on Polar Coordinates. When evaluating the double integral and changing variables, I'm not sure if the limits are correct. The question is as follows: Evaluate $$\int\!\!\!\int_D xy\sqrt{x^2 + y^2}\,dxdy $$where $D = \{(x,y) \mid 1 \leq x^2 + y^2 \leq 4,\ x \geq 0,\ y \geq 0\}$ So my question is when I change to polar coordinates, is the limit for the integral with respect to r from 1 to 2 or 1 to 4? Instinctively, I would say it's 1 to 4 but the answer given out by the lecturer (which does not have all the steps) has the limts at 1 to 2. Is it maybe because $x^2 + y^2 = a^2$? Note: I have the new integral, in terms of r and $\theta$ as: $\int$$\int$$r^4$cos$\theta$sin$\theta$drd$\theta$
Look at the actual variables in your integral. You have $\sqrt{x^2+y^2}=\sqrt{r^2}=|r|$, $x=r\cos\theta$, and $y=r\sin\theta$, and you have $dxdy=r dr d\theta$, so your integrand must be $|r|r^3 \sin\theta\cos\theta dr d\theta$. Now look at the region over which you’re integrating: $$\begin{align*}&1\le x^2+y^2\le 4\;,\tag{1}\\ &x\ge 0\;,\tag{2}\\ &y\ge 0\;.\tag{3} \end{align*}$$ What limits on $r$ and $\theta$ describe this region? $(1)$ says that $1\le r^2\le 4$, so $1\le |r|\le 2$. $(2)$ and $(3)$ say that the region is limited to the first quadrant. If you take $0\le\theta\le\pi/2$, you stay in the first quadrant provided that you also keep $r\ge 0$. Thus, $(1)-(3)$ translate to $$\begin{align*} &1\le r\le 2\;,\\\\ &0\le\theta\le\frac{\pi}2\;. \end{align*}$$ Your integral should therefore be $$\int_0^{\pi/2}\int_1^2 r^4\sin\theta \cos\theta dr d\theta\;.$$
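A numerical check of the final iterated integral (mine, not part of the answer): the $r$ part is $\left[r^5/5\right]_1^2 = 31/5$ and the $\theta$ part is $\left[\sin^2\theta/2\right]_0^{\pi/2} = 1/2$, so the value should be $31/10$ — note in particular that the $r$ limits run from $1$ to $2$, not $1$ to $4$.

```python
import math

def simpson(f, a, b, steps=400):
    # Composite Simpson's rule (steps must be even).
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

radial = simpson(lambda r: r ** 4, 1.0, 2.0)                              # should be 31/5
angular = simpson(lambda t: math.sin(t) * math.cos(t), 0.0, math.pi / 2)  # should be 1/2
value = radial * angular
```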
{ "language": "en", "url": "https://math.stackexchange.com/questions/126750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the group generated by the conjugacy class containing $(12\ldots n)$ in $S_n$? This question is motivated by this answer to a question about groups generated by conjugacy classes. Let $n \geq 1$ and $S_n$ be the symmetric group on $\{1,2,\ldots,n\}$. Define the n-cycle $\alpha=\pmatrix{1&2&\cdots &n}$, and let $\operatorname{Cl}(\alpha)=\{\beta \alpha \beta^{-1}:\beta \in S_n\}$ denote the conjugacy class of $\alpha$. Question: What is the group $\langle \operatorname{Cl}(\alpha) \rangle$? Observation 1: If $n$ is odd and $n \geq 3$, then $\alpha$ and its conjugates are even permutations (since they all have the same cycle structure), so $\langle \operatorname{Cl}(\alpha) \rangle$ contains only even permutations, and thus is a subgroup of the alternating group $A_n < S_n$. Observation 2: Judging from some computations in GAP, it looks like $\langle \operatorname{Cl}(\alpha) \rangle=A_n$ for odd $n \geq 3$ and $\langle \operatorname{Cl}(\alpha) \rangle=S_n$ for even $n \geq 2$.
The following is even true: let $1 < k \leq n$, and $H_k$ the subgroup of $S_n$ generated by the $k$-cycles. If $k$ is odd then $H_k = A_n$ and if $k$ is even then $H_k = S_n$. Note (added 6 years later, ahum, apologies!!) I never supplied a proof, so here it is. It is well-known and to be found in every standard introductory book on group theory, that $S_n$ is generated by $2$-cycles and $A_n$ is generated by $3$-cycles. So let's assume $3 \lt k \leq n$. Observe the following. Put $\sigma=(k \ 2 \ 3 \ 4 \cdots \ k-1 \ 1)$ and $\tau= (k \ k-1 \ k-2 \ \cdots \ 3 \ 2 \ 1)$. Then $\sigma, \tau \in H_k$ and $\sigma \tau=(1 \ 2 \ k)$. By conjugating $(1 \ 2 \ k)$ with the appropriate element, say $\lambda$ of $S_n$, you can "reach" any $3$-cycle, while conjugating $\sigma$ and $\tau$ with such a $\lambda$ does not change their cycle structure. In other words, $A_n \subseteq H_k$ for any $k$. Since $|S_n:A_n|=2$, it follows that either $H_k=A_n$ or $H_k=S_n$. If $k$ is odd, then all elements of $H_k$ are even cycles, so then the first case applies. If $k$ is even then $H_k$ contains odd cycles and the latter case holds true. Finally, observe that of course $H_k=\langle Cl_{S_n}((1 \ 2 \ 3 \ 4 \cdots \ k-1 \ k)) \rangle$.
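The claim is easy to confirm by brute force for small $n$ (this sketch is mine, not from the answer): build the conjugacy class of an $n$-cycle inside $S_n$, then close it under products. The class already contains inverses, since the inverse of an $n$-cycle is again an $n$-cycle, so the closure is the generated subgroup.

```python
from itertools import permutations

def compose(p, q):
    # (p after q); permutations are tuples on {0, ..., n-1}.
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def order_generated_by_class_of_n_cycle(n):
    cycle = tuple(range(1, n)) + (0,)  # the n-cycle 0 -> 1 -> ... -> n-1 -> 0
    everyone = list(permutations(range(n)))
    conj_class = {compose(g, compose(cycle, inverse(g))) for g in everyone}
    group = set(conj_class)
    frontier = list(group)
    while frontier:  # close under products
        new = []
        for a in frontier:
            for b in conj_class:
                c = compose(a, b)
                if c not in group:
                    group.add(c)
                    new.append(c)
        frontier = new
    return len(group)
```

For odd $n$ the order comes out as $n!/2$ (the alternating group) and for even $n$ as $n!$ (the full symmetric group), matching Observation 2.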
{ "language": "en", "url": "https://math.stackexchange.com/questions/126824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Equation for stationary values of $px^2+qy^2+rz^2$ given sphere and plane constraints Consider stationary points of the function $V=px^2+qy^2+rz^2$ subject to the constraints $x^2+y^2+z^2=1$ and $lx+my+nz=0$, where $l,m,n$ not all zero and $p,q,r$ not all equal. How can we show that the stationary values of $V$ satisfy $\frac{l^2}{V-p} + \frac{m^2}{V-q} + \frac{n^2}{V-r} = 0$? I've shown that the stationary point $(x,y,z)$ must satisfy $l(q-r)yz + m(r-p)xz + n(p-q)xy = 0$, but can't see how to convert this into the required equation. It would be sufficient to show that $(q-r)yz(px^2+qy^2+rz^2-p)=l$ (and similar conditions with $p,q,r$ and $l,m,n$ and $x,y,z$ cycled), but is this true? What about if we add $lx+my+nz$ to the equation I've got? Thanks for any help with this!
Answering my own question here... Rearrange the equation I already got to $px(ny-mz)+qy(lz-nx)+rz(mx-ly)=0$, which shows that $\begin{pmatrix} px\\qy\\rz \end{pmatrix}$ is perpendicular to $\begin{pmatrix} x\\y\\z \end{pmatrix}\times\begin{pmatrix} l\\m\\n \end{pmatrix}$. So $\begin{pmatrix} px\\qy\\rz \end{pmatrix}=V\begin{pmatrix} x\\y\\z \end{pmatrix}+c\begin{pmatrix} l\\m\\n \end{pmatrix}$, which means $\begin{pmatrix} V-p\\V-q\\V-r \end{pmatrix}=-c\begin{pmatrix} l/x\\m/y\\n/z \end{pmatrix}$. Then take reciprocals of components, sum them, and use the given constraint $lx+my+nz=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/126891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simple Logic Question I've very little understanding in logic, how can I simply show that this is true: $$((X \wedge \neg Y)\Rightarrow \neg Z) \Leftrightarrow ((X\wedge Z)\Rightarrow Y)$$ Thanks a lot.
A simple way to work with exercises like these (unless trying to formally prove them in some formal system) lies in seeing what happens if one variable is true, and then seeing what happens if that same variable is false. Then you can use logical equations (which you can derive and check quickly from truth tables for the basic connectives) such as $(0 \land y)=0$, $(0 \implies y)=1$, $(1 \land y)=y$, $(x \implies 0)=\lnot x$, $(1\implies y)=y$, $(x\implies 1)=1$. These should suffice for this exercise.
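The original equivalence itself can be checked mechanically over all eight truth assignments (this check is mine, not part of the answer):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def lhs(x, y, z):
    return implies(x and not y, not z)   # (X ∧ ¬Y) ⟹ ¬Z

def rhs(x, y, z):
    return implies(x and z, y)           # (X ∧ Z) ⟹ Y

rows = list(product([False, True], repeat=3))
agree = all(lhs(x, y, z) == rhs(x, y, z) for x, y, z in rows)
```

Both sides are false exactly when $X\wedge\lnot Y\wedge Z$ holds, which is why they agree on every row.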
{ "language": "en", "url": "https://math.stackexchange.com/questions/126940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
The sum of an Irreducible Representation I was hoping someone could help me with the following question. Let $\rho$ be an irreducible presentation of a finite group $G.$ Prove \begin{equation*} \sum_{g \in G} \rho(g) = 0 \end{equation*} unless $\rho$ is the trivial representation of degree $1$. I think I have to use Schur's Lemma which states the following. Let $\rho: G \longrightarrow GL(n,\mathbb{C})$ be a representation of G. Then $\rho$ is irreducible if and only if every $n \times n$ matrix $A$ which satisfies \begin{equation*} \rho(g)A = A\rho(g) \ \ \ \forall \ g \in G \end{equation*} has the form $A = \lambda I_n \, $ with $\lambda \in \mathbb{C}$. But I am really not sure how the lemma can be applied to this question?
I think you can use this variant of Schur's lemma, yes! If $A$ is your sum, then $\rho(g)A = A\rho(g) = A$ for all $g \in G$: use the fact that $G$ is a group and that $\rho$ is a homomorphism. Thus $A = \lambda I$. If $\rho$ is not the trivial representation, then there exists a $g$ such that $\rho(g) \neq I$. Now you have $\lambda\rho(g) = \lambda I$. If $\lambda \neq 0$, then does this make any sense? Added. That $\rho(g)\sum_{x \in G} \rho(x) = \sum_{x \in G} \rho(x)$ is a special case of the following fact: if I have a commutative monoid $A$, a finite set $I$, an indexing function $f\colon I \to A$, and a bijection $\mu\colon J \to I$ then $\sum_{i \in I} f(i) = \sum_{j \in J} f(\mu(j))$. This is just a pedantic way of changing variables. Here both $I$ and $J$ are $G$, and $\mu(x) = gx$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Complex Analysis: Liouville's theorem Proof I'm being asked to find an alternate proof for the one commonly given for Liouville's Theorem in complex analysis by evaluating the following given an entire function $f$, and two distinct, arbitrary complex numbers $a$ and $b$: $$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} dz $$ What I've done so far is I've tried to apply the cauchy integral formula, since there are two singularities in the integrand, which will fall in the contour for $R$ approaches infinity. So I got: $$2{\pi}i\biggl({f(a)\over a-b}+{f(b)\over b-a}\biggr)$$ Which equals $$2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr)$$ and I got stuck here I don't quite see how I can get from this, plus $f(z)$ being bounded and analytic, that can tell me that $f(z)$ is a constant function. Ugh, the more well known proof is so much simpler -.- Any suggestions/hints? Am I at least on the right track?
$$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} \; dz=2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr) \to 2\pi if'(b)\text{ as }a\to b.$$ If one could somehow use boundedness of $f$ to show that $$ \lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} \;dz \to 0\text{ as }a\to b, $$ then one would have shown that $f'(b)=0$. Since $b$ was arbitrary, one would have $f'=0$ everywhere.
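The intermediate identity can be checked numerically (this experiment is my own addition; the function, the points $a,b$, the radius, and the step count are arbitrary choices). For an entire $f$ and any $R$ large enough to enclose both poles, the contour integral equals $2\pi i\,\frac{f(a)-f(b)}{a-b}$, and the trapezoidal rule on the circle converges very fast:

```python
import cmath

def contour_integral(f, a, b, R, n=2000):
    """Trapezoidal approximation of the integral of f(z)/((z-a)(z-b)) over |z| = R."""
    total = 0.0
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = R * cmath.exp(1j * t)
        dz = 1j * z * (2 * cmath.pi / n)  # dz = i R e^{it} dt
        total += f(z) / ((z - a) * (z - b)) * dz
    return total

f = lambda z: z ** 2  # an (unbounded) entire function, chosen just to test the identity
a, b = 0.5, -0.3 + 0.2j
numeric = contour_integral(f, a, b, R=3.0)
exact = 2j * cmath.pi * (f(a) - f(b)) / (a - b)
```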
{ "language": "en", "url": "https://math.stackexchange.com/questions/127046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Greatest $n$ that can be written in the form of $ax+by=n$ In a diophantine equation $ax + by = n$ with $(a, b) = 1$, the greatest possible value of $n$ such that both $(x, y)$ are not positive is $ab − b − a$? This is given in my module (without any proof). I am assuming that "both $(x, y)$ are not positive" means at-least one must be negative. I was wondering how to prove this.
First note that if $x=b-1$ and $y=-1$ then $ax+by=ab-b-a$. Then what you have to know is that if $(u,v)$ is one solution to $ax+by=n$ then the full set of solutions is given by $x=u+kb$, $y=v-ka$ where $k$ runs through the integers (positive and negative (and zero)). So to increase $y$, you have to decrease $x$ by $b$ (or more), so you can't get a positive solution when $n=ab-b-a$. By the way, I'm assuming that $a,b$ are meant to be positive integers.
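Reading the claim as the classical Frobenius (coin) problem, i.e. $ab-a-b$ is the largest $n$ with no solution in nonnegative integers, it is easy to brute-force for small coprime $a,b$ (the check below is my own illustration; $a=3$, $b=5$ are arbitrary choices):

```python
def representable(n, a, b):
    """True if n = a*x + b*y for some integers x, y >= 0."""
    return any((n - a * x) >= 0 and (n - a * x) % b == 0 for x in range(n // a + 1))

a, b = 3, 5
frobenius = a * b - a - b  # 7 for (3, 5)
no_rep = not representable(frobenius, a, b)
all_above_rep = all(representable(n, a, b) for n in range(frobenius + 1, frobenius + 50))
```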
{ "language": "en", "url": "https://math.stackexchange.com/questions/127123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving an asymptotic lower bound for the integral $\int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$ This is a follow up to the great answer posted to https://math.stackexchange.com/a/125991/7980 Let $ 0 < r < \infty, 0 < s < \infty$ , fix $x > 1$ and consider the integral $$ I_{1}(x) = \int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$$ Fix a constant $c^* = r^{\frac{1}{2r+2}} $ and let $x^* = x^{\frac{1}{1+r}}$. Write $f(y) = \frac{x^2}{2y^{2r}} + \frac{y^2}{2}$ and note $c^* x^*$ is a local minimum of $f(y)$ so that it is a global max for $-f(y)$ on $[0, \infty)$. We are trying to determine if there exist upper and lower bounds of the same order for large x. The coefficients in our bounds can be composed of rational functions in x or even more complicated as long as they do not have exponential growth. The Laplace expansion presented in the answer to the question cited above gives upper bounds. In particular can we prove a specific lower bound: Does there exist a positive constant $c_1(r,s)$ and such that for x>1 we have $$I_1 (x) > \frac{c_1(r,s)}{x} \exp( - f(c^* x^*))$$ (it is ok in the answer if the function $\frac{1}{x}$ in the upper bound is replaced by any rational function or power of $x$)
Let $$ \phi_{r,x}(y)=-\frac{x^2}{2y^{2r}}-\frac{y^2}{2}\tag{1} $$ Taking the first and second derivatives of $\phi_{r,x}(y)$ yields $$ \phi_{r,x}^\prime(y)=r\frac{x^2}{y^{2r+1}}-y\tag{2} $$ and $$ \phi_{r,x}^{\prime\prime}(y)=-(2r+1)r\frac{x^2}{y^{2r+2}}-1\tag{3} $$ Using $(2)$, $\phi_{r,x}(y)$ reaches a maximum at $y_0=(rx^2)^{\frac{1}{2r+2}}$. At that point, $$ \phi_{r,x}(y_0)=-\frac{r+1}{2r}(rx^2)^{\frac{1}{r+1}}\tag{4} $$ Furthermore, $(3)$ gives that $$ \frac12\phi_{r,x}^{\prime\prime}(y_0)=-(r+1)\tag{5} $$ Standard stationary phase methods yield $$ \begin{align} &\int_0^\infty\exp\left(-\frac{x^2}{2y^{2r}}-\frac{y^2}{2}\right)\frac{\mathrm{d}y}{y^s}\\ &\sim\exp\left(\phi_{r,x}(y_0)\right)\int_0^\infty\exp\left(-(r+1)(y-y_0)^2\right)\frac{\mathrm{d}y}{y^s}\\ &\sim y_0^{-s}\exp\left(\phi_{r,x}(y_0)\right)\int_{-\infty}^\infty\exp\left(-(r+1)y^2\right)\mathrm{d}y\\ &=(rx^2)^{\frac{-s}{2r+2}}\exp\left(-\frac{r+1}{2r}(rx^2)^{\frac{1}{r+1}}\right)\sqrt{\frac{\pi}{r+1}}\tag{6} \end{align} $$ Where $f(x)\sim g(x)$ means that $\lim\limits_{x\to\infty}f(x)/g(x)=1$. Estimate $(5)$ says that the kind of estimate sought above can be achieved only when $s\le r+1$. Taking the derivative of $(2)$ yields $$ \phi_{r,x}^{\prime\prime\prime}(y)=(2r+1)(2r+2)r\frac{x^2}{y^{2r+4}}\tag{7} $$ which says that the second derivative of the exponent increases monotonically, whereas the second derivative of the quadratic approximation is constant. 
Since $\phi_{r,x}$ and its first and second derivatives match the quadratic approximation at $y_0$, we get that for $y\ge y_0$, $$ \phi_{r,x}(y)\ge\phi_{r,x}(y_0)-(r+1)(y-y_0)^2\tag{8} $$ Furthermore, since $(1+t)^{-s}\ge1-st$ for $t\ge0$, we get $$ \begin{align} &\int_0^\infty\exp\left(-\frac{x^2}{2y^{2r}}-\frac{y^2}{2}\right)\frac{\mathrm{d}y}{y^s}\\ &\ge\int_{y_0}^\infty\exp\left(\phi_{r,x}(y_0)-(r+1)(y-y_0)^2\right)y_0^{-s}\left(1-s\frac{y-y_0}{y_0}\right)\mathrm{d}y\\ &=y_0^{-s}\exp(\phi_{r,x}(y_0))\int_0^\infty\exp\left(-(r+1)t^2\right)\left(1-\frac{st}{y_0}\right)\mathrm{d}t\\ &=y_0^{-s}\exp(\phi_{r,x}(y_0))\left(\frac12\sqrt{\frac{\pi}{r+1}}-\frac{s}{2y_0\sqrt{r+1}}\right)\\ &=(rx^2)^{\frac{-s}{2r+2}}\exp\left(-\frac{r+1}{2r}(rx^2)^{\frac{1}{r+1}}\right)\left(\frac12\sqrt{\frac{\pi}{r+1}}-\frac{s}{2y_0\sqrt{r+1}}\right)\tag{9} \end{align} $$ For $x\ge x_0$, we get $(9)$ with $y_0=(rx_0^2)^\frac{1}{2r+2}$. This is the bound required as long as $s\le r+1$. For example, if we set $\displaystyle x_0=\max\left(\frac{s^{r+1}}{\sqrt{r}},1\right)$, for $x\ge x_0$, we get $$ \begin{align} &\int_0^\infty\exp\left(-\frac{x^2}{2y^{2r}}-\frac{y^2}{2}\right)\frac{\mathrm{d}y}{y^s}\\ &\ge\left(\frac{\sqrt{\pi}-1}{2\sqrt{r+1}}r^{\frac{-s}{2r+2}}\right)x^{\frac{-s}{r+1}}\exp\left(-\frac{r+1}{2r}(rx^2)^{\frac{1}{r+1}}\right)\\ &\ge\left(\frac{\sqrt{\pi}-1}{2\sqrt{r+1}}r^{\frac{-s}{2r+2}}\right)\frac1x\exp\left(-\frac{r+1}{2r}(rx^2)^{\frac{1}{r+1}}\right)\tag{10} \end{align} $$ as long as $s\le r+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Reference request: GL(n) There are many places that describe the unitary irreducible representations of $GL(n, F)$ with $F = \mathbb{C}$ or $F =\mathbb{R}$. Basically, we obtain a bunch of parabolically induced representations indexed by irreducible reps of parabolic subgroups and complex parameters, and construct various quotients. However, I could not find how the restriction of a unitary irreducible representation to the maximal compact subgroup decomposes into irreducibles. Does somebody have a reference for $GL(n, \mathbb{C})$? For $SL(2, \mathbb{C})$, I found Barut-Raczka "The theory of group representations" pg. 567, and for $GL(2, \mathbb{R})$, I found Knightly-Li "Hecke operators" and much more on $GL(n, \mathbb{R})$ in Goldfeld-Hundley's new book on automorphic forms.
For my own research I am intensively using the books by N. Ja. Vilenkin and A.U. Klimyk "Representation of Lie Groups and Special Functions". For $GL(n,\mathbb{C})$ I would recommend volumes 2 and 3.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Root Calculation by Hand Is it possible to calculate and find the solution of $ \; \large{105^{1/5}} \; $ without using a calculator? Could someone show me how to do that, please? Well, when I use a Casio scientific calculator, I get this answer: $105^{1/5}\approx 2.536517482$. With WolframAlpha, I can get an even more accurate result.
You can just do it by trial, but it gets tiring: $2^5\lt 105 \lt 3^5$ so it is between $2$ and $3$. You might then try $2.5^5 \approx 98$ so the true value is a bit higher and so on. An alternate is to use the secant method. If you start with $2^5=32, 3^5=243$, your next guess is $2+\frac {105-32}{243-32}\approx 2.346$. Then $2.346^5\approx 71.1$ and your next guess is $2.346+\frac {(105-71.1)(3-2.346)}{243-71.1}\approx 2.475$, and so on. Also a lot of work. Added: if you work with RF engineers who are prone to use decibels, you can do this example easily. $105^{0.2}=100^{0.2}\cdot 1.05^{0.2}=10^{0.4}\cdot 1.01=4 dB \cdot 1.01= (3 dB + 1 dB)1.01=2 \cdot 1.25 \cdot 1.01=2.525$, good to $\frac 12$%, where $1.05^{0.2}\approx 1.01$ comes from the binomial $(1+x)^n\approx 1+nx$ for $x \ll 1$
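Either iteration can be automated; here is a short secant-method sketch (my own code, not part of the answer; the starting bracket $[2,3]$ comes from the trial step above):

```python
def secant_root(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration for f(x) = 0, starting from two initial guesses."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Solve x^5 = 105 starting from the bracket [2, 3]
root = secant_root(lambda x: x ** 5 - 105, 2.0, 3.0)
```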
{ "language": "en", "url": "https://math.stackexchange.com/questions/127310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92", "answer_count": 5, "answer_id": 4 }
Prove that $||x|-|y||\le |x-y|$ I've seen the full proof of the Triangle Inequality \begin{equation*} |x+y|\le|x|+|y|. \end{equation*} However, I haven't seen the proof of the reverse triangle inequality: \begin{equation*} ||x|-|y||\le|x-y|. \end{equation*} Would you please prove this using only the Triangle Inequality above? Thank you very much.
Given that we are discussing the reals, $\mathbb{R}$, then the axioms of a field apply. Namely, for $x,y,z\in\mathbb{R}$: $x+(-x)=0$; $x+(y+z)=(x+y)+z$; and $x+y=y+x$. Start with $x=x+0=x+(-y+y)=(x-y)+y$. Then apply $|x| = |(x-y)+y|\leq |x-y|+|y|$, by the first triangle inequality. Rewriting, $|x|-|y| \leq |x-y|$. By the same argument with $x$ and $y$ interchanged, $|y|-|x| \leq |y-x| = |x-y|$. Since $||x|-|y||$ equals either $|x|-|y|$ or $|y|-|x|$, both cases give $||x|-|y|| \leq |x-y|$. The item of Analysis that I find the most conceptually daunting at times is the notion of order $(\leq,\geq,<,>)$, and how certain sentences can be augmented into simpler forms. Hope this helps and please give me feedback, so I can improve my skills. Cheers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118", "answer_count": 7, "answer_id": 1 }
Find a subset that forms a basis for the span of a set Let $p_1 = 1 - 2t - t^2$, $p_2 = t + t^2 + t^3$, $p_3 = 1 - t + t ^3$ and $p_4 = 3 + 4t + t^2 + 4t^3$. Let $S$ be the set of these four functions. Find a subset of $S$ that is a basis for the span of $S$. So I've started out by taking these functions, making them vectors and putting them into a matrix: $$ \begin{bmatrix} 0 & -1 & -2 & 1\\ 1 & 1 & 1 & 0\\ 1 & 0 & -1 & 1\\ 4 & 1 & 4 & 3 \end{bmatrix} $$ Then I reduced it: $$ \sim \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 0 & -1\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} $$ I guess my question is, is this the correct way to do this? Where do I go from here? What does a zero row mean for a basis?
A better way is to put them into a matrix as columns instead of as rows. The elementary row operations preserve linear dependence relations among the columns, so when you get to reduced form, if, say, columns 1, 2, and 4 are a basis for the column space, then $p_1,p_2,p_4$ are a basis for the original space.
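To make the column approach concrete for this example (the code is my own sketch, using exact arithmetic via `fractions` to avoid floating-point pivots): with $p_1,\dots,p_4$ as columns, row reduction puts pivots in columns 1, 2, and 4 (note $p_3 = p_1 + p_2$), so $\{p_1, p_2, p_4\}$ is a basis for the span, matching the answer's "columns 1, 2, and 4" scenario.

```python
from fractions import Fraction

def pivot_columns(rows):
    """0-based pivot column indices from Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        piv = m[r][c]
        m[r] = [x / piv for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Columns are the coefficient vectors (1, t, t^2, t^3) of p1, p2, p3, p4
A = [[1, 0, 1, 3],    # constant terms
     [-2, 1, -1, 4],  # coefficients of t
     [-1, 1, 0, 1],   # coefficients of t^2
     [0, 1, 1, 4]]    # coefficients of t^3
pivots = pivot_columns(A)  # pivot columns 0, 1, 3, i.e. p1, p2, p4
```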
{ "language": "en", "url": "https://math.stackexchange.com/questions/127405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence of an Infinite Series involving Absolute Values If $|a_n| < 10^{-n}$, prove that $\sum^{\infty}_{n=1} a_n$ converges. Could someone give me a hint as to how to start this?
If $A_n = \sum^{n}_{i=1} a_i$, then for $n > m$, $$|A_n - A_m| = \left|\sum^{n}_{i=m+1} a_i\right| \le \sum^{n}_{i=m+1} |a_i| < \sum^{n}_{i=m+1} 10^{-i} < 10^{-m}$$ (you can do better, but this is enough). Then apply the Cauchy criterion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the radius of the circle? Two circles of equal radii are drawn, without any overlap, in a semicircle of radius 2 cm. If these are the largest possible circles that the semicircle can accommodate, then what is the radius of each of the circles? Thanks in advance.
Place each circle with centre $(\pm d, r)$, so that it is tangent to the diameter of the semicircle. Tangency to the arc means the centre lies at distance $2-r$ from the origin, so $d^2+r^2=(2-r)^2$, i.e. $d^2=4-4r$. The two circles fit without overlap exactly when $d\ge r$, and they are largest when they touch, $d=r$. Then $r^2=4-4r$, so $r^2+4r-4=0$ and $r=2(\sqrt2-1)\approx 0.83$ cm.
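A quick numerical check of the natural tangency configuration for this problem (my own sketch): a circle of radius $r$ centred at $(d, r)$ is tangent to the diameter, is internally tangent to the arc of the semicircle of radius $2$ when $d^2 + r^2 = (2-r)^2$, and the two circles touch when $d = r$; solving gives $r = 2(\sqrt2 - 1)$.

```python
import math

# Positive root of r^2 + 4r - 4 = 0, obtained from d^2 + r^2 = (2 - r)^2 with d = r
r = -2 + 2 * math.sqrt(2)
d = r
# Distance from origin to the centre minus (2 - r): zero means tangency to the arc
arc_tangency = math.hypot(d, r) - (2 - r)
```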
{ "language": "en", "url": "https://math.stackexchange.com/questions/127639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proving that a set is countable by finding a bijection $Z$ is the set of non-negative integers including $0$. Show that $Z \times Z \times Z$ is countable by constructing the actual bijection $f: Z\times Z\times Z \to \mathbb{N}$ ($\mathbb{N}$ is the set of all natural numbers). There is no need to prove that it is a bijection. After searching for clues on how to solve this, I found $(x+y-1)(x+y-z)/z+y$ but that is only two dimensional and does not include $0$. Any help on how to solve this?
Just for fun, here’s an explicit enumeration. First do it with $\Bbb N\times\Bbb N\times\Bbb N$. If you think of this as a subset of $\Bbb R^3$, you can chop it up into points lying in the parallel planes $P_k$ defined by $x+y+z=k$ for $k\in\Bbb N$. $P_0$ contains only $\langle 0,0,0\rangle$, $P_1$ contains $\langle 0,0,1\rangle,\langle 0,1,0\rangle$, and $\langle 1,0,0\rangle$ and so on. There are $\binom{k+2}2$ solutions to $x+y+z=k$ in non-negative integers $x,y$, and $z$, so $P_k$ contains $\binom{k+2}2$ members of $\Bbb N^3$. We’ll enumerate $\Bbb N^3$ by enumerating the individual $P_k$ in order of increasing $k$. We’ll start by explicitly enumerating the $\binom{k+2}2$ points of $P_k$. Suppose that $\langle x,y,z\rangle\in P_k$. Then $x+y=k-z$, and a little thought shows that there are only $k-z+1$ possibilities, namely $x=i$ and $y=k-x-i$ for $i=0,\dots,k-z$. We’ll start by listing $P_k$ in reverse order of $z$. For $z=k$ there is only one point, $\langle 0,0,k\rangle$; in this ordering it has $0$ predecessors in $P_k$. For $z=k-1$ there are two points, $\langle 0,1,k-1\rangle$ and $\langle 1,0,k-1\rangle$; we’ll list them in increasing order of $x$, so that $\langle 0,1,k-1\rangle$ has $1$ predecessor, and $\langle 1,0,k-1\rangle$ has $2$ predecessors. For $z=k-2$ there are $3$ points; again listing them in increasing order of $x$, we have $\langle 0,2,k-2\rangle$ with $3$ predecessors, $\langle 1,1,k-2\rangle$ with $4$ predecessors, and $\langle 2,0,k-2\rangle$ with $5$ predecessors. In general, $\langle 0,i,k-i\rangle$ has $\sum_{j=1}^ij=\binom{i+1}2$ predecessors, so $\langle j,i-j,k-i\rangle$ has $\binom{i+1}2+j$ predecessors. Thus, if $\langle x,y,z\rangle\in P_k$, it has $\binom{k-z+1}2+x$ predecessors in this ordering of $P_k$. Next, note that $$\sum_{0\le i<k}|P_i|=\sum_{0\le i<k}\binom{i+2}2=\binom{k+2}3\;.$$ Thus, in the ordering of $\Bbb N^3$ as a whole, $\langle 0,0,k\rangle$ has $\binom{k+2}3$ predecessors. 
Combining results, we see that $\langle x,y,z\rangle$ has $$\binom{x+y+z+2}3+\binom{x+y+1}2+x$$ predecessors in this ordering of $\Bbb N^3$. (That last term can be thought of as $\binom{x+0}1$, and the obvious pattern can be generalized to higher powers of $\Bbb N$.) Since an enumeration $X\to\Bbb N$ of a set $X$ is simply a function that assigns to each $x\in X$ the number of predecessors of $x$ in the induced ordering, we have our enumeration of $\Bbb N^3$: $$\varphi:\Bbb N^3\to\Bbb N:\langle x,y,z\rangle\mapsto\binom{x+y+z+2}3+\binom{x+y+1}2+x\;.$$ All that remains is to convert this to an enumeration of $\Bbb Z^3$. This is easily done by starting with the enumeration $$\psi:\Bbb Z\to\Bbb N:n\mapsto(-1)^n\left\lceil\frac{n}2\right\rceil\;,$$ which orders $\Bbb Z$ as $\langle 0,-1,1,-2,2,-3,3,\dots\rangle$. Let $$\overline\psi:\Bbb Z^3\to\Bbb N^3:\langle x,y,z\rangle\mapsto\langle \psi(x),\psi(y),\psi(z)\rangle\;;$$ this is clearly a bijection, and the desired explicit enumeration $\sigma:\Bbb Z^3\to\Bbb N$ is now given by $\sigma=\varphi\circ\overline\psi$. That is, $\langle x,y,z\rangle\in\Bbb Z^3$ is sent to $$\begin{align*} &\binom{(-1)^x\left\lceil\frac{x}2\right\rceil+(-1)^y\left\lceil\frac{y}2\right\rceil+(-1)^z\left\lceil\frac{z}2\right\rceil+2}3+\\ &\qquad\qquad+\binom{(-1)^x\left\lceil\frac{x}2\right\rceil+(-1)^y\left\lceil\frac{y}2\right\rceil+1}2+(-1)^x\left\lceil\frac{x}2\right\rceil\;. \end{align*}$$
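The closed form is easy to test mechanically (my own check, not part of the answer): the map $\varphi(x,y,z)=\binom{x+y+z+2}{3}+\binom{x+y+1}{2}+x$ should hit each of $0,1,2,\dots$ exactly once; in particular, the points with $x+y+z\le K$ fill the planes $P_0,\dots,P_K$ and are sent bijectively onto $\{0,1,\dots,\binom{K+3}{3}-1\}$.

```python
from math import comb

def phi(x, y, z):
    """Number of predecessors of (x, y, z) in the plane-by-plane ordering of N^3."""
    return comb(x + y + z + 2, 3) + comb(x + y + 1, 2) + x

K = 10
values = sorted(phi(x, y, z)
                for x in range(K + 1)
                for y in range(K + 1 - x)
                for z in range(K + 1 - x - y))
# There are C(K+3, 3) points with x+y+z <= K; phi should enumerate them as 0..C(K+3,3)-1
bijective_on_initial_segment = values == list(range(comb(K + 3, 3)))
```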
{ "language": "en", "url": "https://math.stackexchange.com/questions/127695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Do countable unital rings with uncountably many distinct right ideals have uncountably many maximal right ideals? Suppose we are given a countable unital ring $R$ with uncountably many distinct right ideals. Does it follow from this that $R$ has uncountably many maximal right ideals?
Let $V$ be a $\mathbb Q$-vector space of countable dimension and let $R=\mathbb Q\oplus V$ with commutative multiplication such that the injection $\mathbb Q\to R$ is a map of rings, multiplication between $\mathbb Q$ and $V$ is the obvious one, and $v\cdot w=0$ for all $v$, $w\in V$. Every subspace of $V$ is an ideal of $R$, so there are uncountably many of these, yet $R$ is local.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Optimization with an ellipse I have worked on this one for a while and I can not make my answer match the author's. Find the points on the ellipse $4x^2 + y^2 = 4$ that are the farthest away from the point $(1,0)$. I have: $$4x^2 + y^2 = 4$$ and then the distance formula, so I write $y$ in terms of $x$ and I get $$\sqrt{(x-1)^2 + (2-2x)^2}$$ Squaring the distance, this gives me a derivative of $$10x-10$$ which gives me a zero of $1$; this is wrong according to the book and I am not sure why.
We want to maximize $\sqrt{(x-1)^2+(y-0)^2}$, given that $4x^2+y^2=4$. Equivalently, we want to maximize $(x-1)^2+y^2$, same side condition. But $y^2=4-4x^2$. So we want to maximize $(x-1)^2+(4-4x^2)$. Your turn.
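Carrying the hint through (the closing algebra and the numbers here are my own, not stated in the answer): maximizing $g(x)=(x-1)^2+4-4x^2=-3x^2-2x+5$ on the ellipse's range $[-1,1]$ gives $g'(x)=-6x-2=0$ at $x=-\tfrac13$, so the farthest points are $\left(-\tfrac13,\pm\tfrac{4\sqrt2}{3}\right)$ at squared distance $\tfrac{16}{3}$. A brute-force grid scan confirms this:

```python
# Squared distance from (1, 0) to a point on the ellipse, as a function of x alone
g = lambda x: (x - 1) ** 2 + 4 - 4 * x ** 2

# Fine grid search over the feasible range [-1, 1]
best_x = max((i / 10 ** 5 for i in range(-10 ** 5, 10 ** 5 + 1)), key=g)
max_sq_dist = g(best_x)
```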
{ "language": "en", "url": "https://math.stackexchange.com/questions/127835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If a holomorphic map from an open disc to $\mathbb{C}^n$ extends continuously to the closed disc, what about its partial derivatives? Let $F$ be a holomorphic map from an open disc $D \subset \mathbb{C}^n$ to $\mathbb{C}^n$ and suppose $F$ extends continuously to $\overline{D}$. Do the maps $\partial F_i / \partial z_k$ extend continuously to $\overline{D}$? Thanks
The partial derivatives do not necessarily extend continuously to the closed disc. WimC gave $F(z)=\sqrt{1-z}$ as a counterexample in one dimension. In any dimension, $F(z)=(\sqrt{1-z_1},\dots,\sqrt{1-z_n})$ has the same property: each partial fails to have a continuous extension, and is not even bounded. A situation in which one might hope for boundary smoothness is when $F$ is a biholomorphic map onto a smooth domain. In one dimension there is a very satisfactory result called the Kellogg-Warschawski theorem. The situation in several variables is much more involved: see this brief overview.
{ "language": "en", "url": "https://math.stackexchange.com/questions/127988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the correct terminology for Permutation & Combination formulae that allow repeating elements. Let me explain by example. Q: Given four possible values, {1,2,3,4}, how many 2 value permutations are there ?
nPm and nCm give you the number of ways to choose two elements (in either case no element can be chosen twice). nPm cares about the order (so $(1,2)\neq(2,1)$), nCm does not. So the answers correctly are 12 and 6. What you want is just an exponential. For $n$ distinct elements ($4$) forming a sequence of $m$ elements ($2$) with repetition allowed, there are $n^m$ possible sequences ($4^2 = 16$).
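The three counts can be checked directly with `itertools` (my own illustration): length-2 sequences with repetition from $\{1,2,3,4\}$, ordered selections without repetition, and unordered selections without repetition.

```python
from itertools import product, permutations, combinations

values = [1, 2, 3, 4]
with_repetition = len(list(product(values, repeat=2)))  # n^m = 4^2 = 16
ordered_no_rep = len(list(permutations(values, 2)))     # nPm = 12
unordered_no_rep = len(list(combinations(values, 2)))   # nCm = 6
```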
{ "language": "en", "url": "https://math.stackexchange.com/questions/128048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Where is the highest point of $f(x)=\sqrt[x]{x}$ in the $x$-axis? I mean, the highest point of the $f(x)=\sqrt[x]{x}$ is when $x=e$. I'm trying to calculate how can I prove that or how can it be calculated.
It really only makes sense for $x\gt 0$, at least if you stick to real numbers. On $(0,\infty)$ can rewrite the function as $$f(x) = x^{1/x} = e^{(\ln x)/x}.$$ Note that as $x\to\infty$, $$\lim_{x\to\infty}\frac{\ln x}{x} = 0,$$ so $\lim\limits_{x\to\infty}f(x) = e^0 = 1$ and as $x\to 0^+$, we have $$\lim_{x\to 0^+}\frac{\ln x}{x} = -\infty$$ so $\lim\limits_{x\to 0^+}f(x) = \lim\limits_{t\to-\infty}e^t = 0$. So that means that the function is bounded. We find its critical points by taking the derivative: $$\begin{align*} \frac{d}{dx}f(x) &= \frac{d}{dx} e^{(\ln x)/x}\\ &= e^{(\ln x)/x}\left(\frac{d}{dx}\frac{\ln x}{x}\right)\\ &= e^{(\ln x)/x}\left(\frac{x\frac{1}{x} - \ln x}{x^2}\right)\\ &= e^{(\ln x)/x}\left(\frac{1-\ln x}{x^2}\right). \end{align*}$$ This is zero if and only if $1-\ln x=0$, if and only if $\ln x = 1$, if and only if $x=e$. So the only critical point is at $x=e$. If $0\lt x \lt e$, then $f'(x)\gt 0$ (since $\ln x \lt 1$), so the function is increasing on $(0,e)$, and if $e\lt x$, then $f'(x)\lt 0$, so the function is decreasing on $(e,\infty)$. Thus, $f$ has a local maximum at $x=e$, and since it is the only local extreme of the function, which is continuous, $f(x)$ has a global extreme at $x=e$.
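A numerical check of the calculus above (my own addition; the grid is an arbitrary choice): on a fine grid over $(0,10]$, $x^{1/x}$ is maximized essentially at $x=e$, with maximum value $e^{1/e}\approx 1.4447$.

```python
import math

f = lambda x: x ** (1 / x)
xs = [i / 10000 for i in range(1, 100001)]  # grid on (0, 10]
best = max(xs, key=f)                        # should land next to x = e
```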
{ "language": "en", "url": "https://math.stackexchange.com/questions/128114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Ideal not finitely generated Let $R=\{a_0+a_1 X+a_2 X^2 +\cdots + a_n X^n\}$, where $a_0$ is an integer and the rest of the coefficients are rational numbers. Let $I=\{a_1 X+a_2 X^2+\cdots +a_n X^n\}$ where all of the coefficients are rational numbers. Prove that I is an ideal of R. Show further that I is not finitely generated as an R-module. I have managed to prove that I is an ideal of R, by showing that I is the kernel of the evaluation map that maps a polynomial in Q[x] to its constant term. Hence I is an ideal of R. However, I am stuck at showing I is not finitely generated as an R-module. Sincere thanks for any help.
Hint: $\frac{1}{p}X \not\in P_1R + P_2R + \dots + P_kR$, where $P_1, P_2, \dots, P_k \in I$, with coefficients $\frac{a_1}{b_1}, \frac{a_2}{b_2}, \dots, \frac{a_k}{b_k}$ in degree one, and $p$ does not divide $b_1 b_2 \dots b_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/128300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Independent Standard Normal Gaussian Random Variables I am having trouble making sense of this. I know what independence of random variables means. Suppose $X$ and $Y$ are independent, standard normal (Gaussian) random variables. Then, it is supposed to be that $X^2 + Y^2$ and $\frac X Y$ are also independent random variables. I just cannot intuitively make sense of this. Where does that fact that they are standard normal Gaussian come into play? Also, if we know that $\frac X Y \leq k$ for some fixed $k$, then it seems obvious that it should affect the probability distribution of $X^2 +Y^2$.
Consider the random point $(X,Y)$ in $\mathbb{R}^2$. The ratio $X/Y$ tells us what angle the segment from $(0,0)$ to $(X,Y)$ makes with the $x$-axis, while $X^2+Y^2$ tells us how far $(X,Y)$ is from $(0,0)$. The distribution of $(X,Y)$ is symmetric under rotations, so the distribution of the angle $\Theta$ is uniform and independent of the radius $R=\sqrt{X^2+Y^2}$. This intuitive explanation can be made more rigorous by converting to polar coordinates. The joint density of two independent standard normals $(X,Y)$ is $$f(x,y)={1\over 2\pi} \exp(-(x^2+y^2)/2).$$ Converting to polar coordinates we get the joint density of $(R,\Theta)$ as $$g(r,\theta)={r\over 2\pi}\exp(-r^2/2)={1\over 2\pi}\cdot r\exp(-r^2/2).$$ This product form of $g(r,\theta)$ shows that $\Theta$ and $R$ are independent.
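The rotation-invariance argument can also be illustrated by simulation (my own sketch; the sample size and seed are arbitrary): the angle $\Theta=\operatorname{atan2}(Y,X)$ should be essentially uncorrelated with $R^2=X^2+Y^2$.

```python
import math
import random

random.seed(0)
n = 20000
samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
r2 = [x * x + y * y for x, y in samples]
theta = [math.atan2(y, x) for x, y in samples]

def corr(u, v):
    """Sample Pearson correlation coefficient."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / len(u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / len(v))
    return cov / (su * sv)

rho = corr(r2, theta)  # should be close to 0
```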
{ "language": "en", "url": "https://math.stackexchange.com/questions/128364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to determine the number of directed/undirected graphs? I'm kind of stuck on this homework problem, could anyone give me a springboard for it? If we have $n\in\mathbb{Z}^+$, and we let the set of vertices $V$ be a set of size $n$, how can we determine the number of directed graphs/undirected graphs/graphs with loops etc.? Is there a formula for this? I feel like it can be done using combinatorics but I can't quite figure it out. Any ideas? Thanks!
For labeled vertices: To count undirected loopless graphs with no repeated edges, first count possible edges. As Andre counts, there are $\binom{n}{2}$ such edges. One by one, each edge is either included or excluded. So this gives $2^{\binom{n}{2}}$ possible graphs. If loopless graphs with no repeated edges are directed, each pair of vertices $a<b$ provides $3$ possibilities for a (potentially absent) edge. Do you see what they are and how that modifies the count? If there are loops (but still no repeated edges), then either of the above scenarios are modified by realizing that there are $n$ more pairs of points to consider - the ones where $a=b$. Do you see how this would modify the count of graphs? Watch out - it doesn't really make sense to count a loop as directed.
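A brute-force check of the first count (my own code): for $n=3$ labeled vertices there are $\binom32=3$ possible edges, hence $2^3=8$ simple undirected graphs, which matches explicit enumeration of edge subsets.

```python
from itertools import combinations

def count_simple_graphs(n):
    """Count simple undirected graphs on n labeled vertices by explicit enumeration."""
    edges = list(combinations(range(n), 2))
    graphs = set()
    for mask in range(2 ** len(edges)):
        graphs.add(frozenset(e for i, e in enumerate(edges) if mask >> i & 1))
    return len(graphs)

n = 3
enumerated = count_simple_graphs(n)
formula = 2 ** (n * (n - 1) // 2)  # 2^C(n,2)
```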
{ "language": "en", "url": "https://math.stackexchange.com/questions/128439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
what is the simplest example of an etale cover which is not Galois? By "Galois morphism" I mean a morphism $f: Y \to X$ such that $Y \times_X Y$ is a disjoint union of schemes isomorphic to $Y$. Let $X$ be reduced curve over afield of char. 0. I wonder what is a simple example of an etale morphism $f: Y \to X$ which is not Galois in the above sense. update: What about a "geometric" example? (i.e. the base field is algebraically closed).
Let $C$ be a curve of genus $2$, say over $\mathbb C$. Think of $C$ as a Riemann surface for a moment. Then $\pi_1(C)$ is generated by four elements $a,b,c,d$ satisfying the relation $[a,b][c,d] = 1$. Let $p: \pi_1(C) \to S_3$ be the surjection that takes $a$ and $d$ to $(12)$, and $b$ and $c$ to $(123)$, and let $H \subset \pi_1(C)$ be the preimage under $p$ of a subgroup of order $2$ in $S_3$, so $H$ has index $3$ in $\pi_1(C)$ and is not normal. Covering space theory shows that $H$ corresponds to a degree $3$ cover $C' \to C$ of Riemann surfaces which is not Galois. Now the Riemann existence theorem shows that $C'$ has a unique structure of algebraic curve over $\mathbb C$ so that $C' \to C$ is an etale morphism, which will not be Galois. This is the simplest example in some strict sense: $\pi_1$ of a genus zero Riemann surface is trivial, while $\pi_1$ of a genus one curve is abelian (so all subgroups are normal), and all index two subgroups of a group are normal; thus we have to go to genus $2$ and a degree $3$ cover in order to find an example, and this is what I have done. [Added: I should add that this is the simplest example if one wants an etale morphism of projective curves; Georges found a simpler example in his answer by considering non-projective curves.] Of course, one could write down examples with explicit algebraic equations, but I would have to begin with a genus $2$ curve, which is of the form $y^2 = f(x)$ for some degree $5$ or $6$ equation, and then write down $C'$ explicitly. By Riemann--Hurwitz, $C'$ has genus $4$, so I would then have to write down an equation for a genus $4$ curve, and find an explicit degree $3$ map to $C$. I haven't tried to do this; it's probably a good exercise, though.
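One small piece of this can be machine-checked (my own verification, not part of the answer): with $a=d=(12)$ and $b=c=(123)$ in $S_3$, the images do satisfy the surface relation $[a,b][c,d]=1$, and $(12)$, $(123)$ generate all of $S_3$, so $p$ really is a well-defined surjection.

```python
def mul(p, q):
    """Compose permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for i, j in enumerate(p):
        out[j] = i
    return tuple(out)

e = (0, 1, 2)
a = d = (1, 0, 2)  # the transposition (12), acting on {0, 1, 2}
b = c = (1, 2, 0)  # the 3-cycle (123)

def comm(x, y):
    """Commutator [x, y] = x y x^{-1} y^{-1}."""
    return mul(mul(x, y), mul(inv(x), inv(y)))

relation = mul(comm(a, b), comm(c, d))  # image of [a,b][c,d]; should be the identity

# (12) and (123) generate S_3: close {a, b} under multiplication
gens = {a, b}
group = {e}
while True:
    new = {mul(g, h) for g in group | gens for h in group | gens} | group | gens
    if new == group:
        break
    group = new
```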
{ "language": "en", "url": "https://math.stackexchange.com/questions/128533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Does the group of Diffeomorphisms act transitively on the space of Riemannian metrics? Let $M$ be a smooth manifold (maybe compact, if that helps). Denote by $\operatorname{Diff}(M)$ the group of diffeomorphisms $M\to M$ and by $R(M)$ the space of Riemannian metrics on $M$. We obtain a canonical group action $$ R(M) \times \operatorname{Diff}(M) \to R(M), (g,F) \mapsto F^*g, $$ where $F^*g$ denotes the pullback of $g$ along $F$. Is this action transitive? In other words, is it possible for any two Riemannian metrics $g,h$ on $M$ to find a diffeomorphism $F$ such that $F^*g=h$? Do you know any references for this type of questions?
The group of $C^1$ diffeomorphisms does not act transitively on the space of Riemannian metrics on a compact manifold. For example, two circles of different radii have different diameters, and by pullback we can "copy" the metric of one of them onto the other. Doing so we get two metrics on a circle with different diameters, hence they cannot be $C^1$ related. Thus, the point is that the diameter is a $C^1$ invariant, hence a homothety of the metric changes the diameter and the "new metric" has a different diameter. For non-compact manifolds, by using a theorem of Nomizu, there exist both complete and non-complete Riemannian metrics. See http://www.oberlin.edu/faculty/jcalcut/Nomizu_Ozeki_1961.pdf Since completeness is a $C^1$ invariant, we get that the group of $C^1$ diffeomorphisms does not act transitively on the space of Riemannian metrics. h.
{ "language": "en", "url": "https://math.stackexchange.com/questions/128651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
What matrices preserve the $L_1$ norm for positive, unit norm vectors? It's easy to show that orthogonal/unitary matrices preserve the $L_2$ norm of a vector, but if I want a transformation that preserves the $L_1$ norm, what can I deduce about the matrices that do this? I feel like it should be something like the columns sum to 1, but I can't manage to prove it. EDIT: To be more explicit, I'm looking at stochastic transition matrices that act on vectors that represent probability distributions, i.e. vectors whose elements are positive and sum to 1. For instance, the matrix $$ M = \left(\begin{array}{ccc}1 & 1/4 & 0 \\0 & 1/2 & 0 \\0 & 1/4 & 1\end{array}\right) $$ acting on $$ x=\left(\begin{array}{c}0 \\1 \\0\end{array}\right) $$ gives $$ M \cdot x = \left(\begin{array}{c}1/4 \\1/2 \\1/4\end{array}\right)\:, $$ a vector whose elements also sum to 1. So I suppose the set of vectors whose isometries I care about is more restricted than the completely general case, which is why I was confused about people saying that permutation matrices were what I was after. Sooo... given the vectors are positive and have entries that sum to 1, can we say anything more exact about the matrices that preserve this property?
Since you originally asked about $L^1$ spaces I dared to add this comment. If one wants to preserve the integral in (finite-dimensional, finite-measure) $L^1$ spaces rather than the norm of $\ell^p$, the matrices $M$ that do this are more general than the stochastic matrices. One can define these matrices with two components, labeled $S$ (for the stochastic component) and $G$ (for the generalized permutation matrix component), such that $M= S * G$, where $*$ represents the Hadamard product. The $S$ matrices are effectively stochastic matrices as shown by Robert Israel. The $G$ matrix is given by the unique matrix resulting from the outer product $u_{\mu} \otimes \frac{1}{u_{\mu}} := | u_{\mu} \rangle \langle \frac{1}{u_{\mu}} |$ of the unique column vector $u_{\mu} :=\left(\begin{array}{c}\mu_1 \\ \mu_2 \\ \ldots \\ \mu_n\end{array}\right)$ and the (also unique) row vector $\frac{1}{u_{\mu}} :=\left(\frac{1}{\mu_1} \ \frac{1}{\mu_2} \ \ldots \ \frac{1}{\mu_n}\right)$: $G:=u_{\mu} \otimes \frac{1}{u_{\mu}} = \left(\begin{array}{cccc} 1 & \frac{\mu_2}{\mu_1} & \ldots & \frac{\mu_n}{\mu_1} \\ \frac{\mu_1}{\mu_2} & 1 & \ldots & \frac{\mu_n}{\mu_2} \\ \ldots & \ldots & \ldots & \ldots \\ \frac{\mu_1}{\mu_n} & \frac{\mu_2}{\mu_n} & \ldots & 1 \end{array}\right)$ where $\mu_i$ are the measures of the generating family of subsets $\{ A_i \}$ of the underlying sigma algebra, i.e. $\mu_i := \mu(A_i)$ and $n = |\{ A_i \}|$. To give you an example of where the stochastic component $S$ is absent, take the stochastic matrix $S$ to be simply a permutation matrix. In this case your $M$ that preserves the integral is a generalized permutation matrix whose non-zero elements are of the form $A_{i,j} =\frac{\mu_{j}}{\mu_i}$. To see why the measure values $\mu_i$ are needed in the definition of $M$, recall that an $L^p$ space is defined given a measure space $(X,\Sigma,\mu)$.
So if the $L^1$ space is finite dimensional then the vectors $v$ in $L^1$ are simple functions, whose integral is defined as the product $\langle u_{\mu}|v\rangle$. And if the measure is finite, then this integral is always well defined. I hope I made myself clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/128702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Summing numbers which increase by a fixed amount (arithmetic progression) An auditorium has 21 rows of seats. The first row has 18 seats, and each succeeding row has two more seats than the previous row. How many seats are there in the auditorium? Now I supposed you could use sigma notation since this kind of problem reminds me of it, but I have little experience using it so I'm not sure.
$$s_1 = 18 = 18 + 2(1-1)\\ s_2 = 20 = 18 + 2(2-1) \\ s_3 = 22 = 18 + 2(3-1) \\ \ldots \\ s_{i} = 18 + 2(i - 1) \\ \ldots \\ s_{21} = 18 + 2(21 - 1)$$ Sum: $$\sum_{i=1}^{21} (18 + 2(i-1))$$
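Evaluating the sum gives the seat count directly; the closed form $\frac{n}{2}(\text{first}+\text{last})=\frac{21}{2}(18+58)=798$ agrees. A throwaway Python check:

```python
# Row i (1-based) has 18 + 2*(i - 1) seats; sum over the 21 rows.
seats = sum(18 + 2 * (i - 1) for i in range(1, 22))
print(seats)  # -> 798
```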
{ "language": "en", "url": "https://math.stackexchange.com/questions/128786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Maximum area of rectangle with fixed perimeter. How can you, with polynomial functions, determine the maximum area of a rectangle with a fixed perimeter. Here's the exact problem— You have 28 feet of rabbit-proof fencing to install around your vegetable garden. What are the dimensions of the garden with the largest area? I've looked around this Stack Exchange and haven't found an answer to this sort of problem (I have, oddly, found a similar one for concave pentagons). If you can't give me the exact answer, any hints to get the correct answer would be much appreciated.
Use the perimeter constraint to eliminate one variable, then read the maximum off the vertex of the resulting quadratic. With sides $a$ and $b$, the perimeter gives $2(a+b)=28$, so $b=14-a$. Substituting into $A=ab$ yields $A=a(14-a)=-a^2+14a$, a downward-opening parabola whose vertex is at $a=7$. Hence $b=7$ as well, and the garden of largest area is a $7\times 7$ square with area $49$ square feet.
{ "language": "en", "url": "https://math.stackexchange.com/questions/128825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
inequality $(a+c)(a+b+c)<0$, prove $(b-c)^2>4a(a+b+c)$ If $(a+c)(a+b+c)<0,$ prove $$(b-c)^2>4a(a+b+c)$$ I will use the constructor method that want to know can not directly prove it?
Because $$(b-c)^2-4a(a+b+c)=(b-c)^2+(4a+8c)(a+b+c)-8(a+c)(a+b+c)=$$ $$=(2a+b+3c)^2-8(a+c)(a+b+c)>0$$
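The proof rests on the algebraic identity $(b-c)^2-4a(a+b+c)=(2a+b+3c)^2-8(a+c)(a+b+c)$, which can be spot-checked numerically (a quick sketch; the sampling range is arbitrary):

```python
import random

# Both sides are polynomials in (a, b, c), so agreement at random points
# is strong evidence the identity holds; expanding by hand confirms it.
for _ in range(1000):
    a, b, c = (random.uniform(-10, 10) for _ in range(3))
    lhs = (b - c) ** 2 - 4 * a * (a + b + c)
    rhs = (2 * a + b + 3 * c) ** 2 - 8 * (a + c) * (a + b + c)
    assert abs(lhs - rhs) < 1e-8
print("identity holds on all samples")
```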
{ "language": "en", "url": "https://math.stackexchange.com/questions/128898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
special equation $z^y$ - $y^z$ = $x^y$ I have prepared an equation by myself $z^y$ - $y^z$ = $x^y$. This equation has infinitely $(0, n, n)$ and $(n, 0, n)$ solutions for some positive integer $n$. Can this equation can be treated as Diophantine equation or not? How we can find surface of this curve? Is there any member, who have an idea to discuss...plz. Thanks in advance.
Over $\mathbb{N}$ or $\mathbb{Z}$, there is a third "trivial" solution set: $\{(n-1,1,n)\}$, as well as some "sporadic" solutions: $(1,2,3),~(-1,3,2),~(0,2,4)$. One should examine the existence of solutions with $z\ne y>1$ over $\mathbb{Z}$ or with $y\ne1$ more generally, but I haven't done that yet. Over $\mathbb{R}$ or $\mathbb{C}$, the equation is a surface with infinitely many solutions. Over $\mathbb{R}$, $$ x = \left(z^y-y^z\right)^{1/y} $$ always has a solution when $z^y-y^z > 0$, i.e. for $(y,z)$ in $$ R_1= \left\{ \matrix{ (y,z) \quad: & \\\\ 0 ~ < ~ z ~ < & \infty \\ 0 ~\le~ y ~\le& \min \left( h_1^{-1} \left( h \left( \max \left( e,~ z \right) \right) \right) ,~ z \right) }\right\} \quad \text{or} \quad R_3= \left\{ \matrix{ (y,z) & : \\\\ e &\le& y & < & \infty \\ h_1^{-1}\left(\tfrac{\ln y}{y}\right) &\le& z &\le& y }\right\} $$ where $h_1^{-1}(y)\le e$ and $h_2^{-1}(y)\ge e$ are the inverse functions of the restrictions $h_1(z),~h_2(z)$ of $y=h(z)=\frac{\ln z}{z}$ to $I_0\cup I_1$ for $I_0=(0,1],~I_1=(1,e]$ and $I_2=[e,\infty)$, depicted below right in blue and green respectively. These are the regions $1$ & $3$ on the graph below left that include the horizontal line through $(e,e)$. If we further restrict $h_1$ to $I_1$, so that $ I_1 \stackrel{h_1}{\longrightarrow} I_0 \stackrel{h_2}{\longleftarrow } I_2 $, then the nonlinear part of the left graph above separating regions $1$ & $2$ from regions $3$ & $4$, a self-inverse function on $(1,\infty)=I_1\cup I_2$, $$ g(z)=\left\{\matrix{ h_2^{-1}\circ h_1=h_2^{-1}\left(\tfrac{\ln z}{z}\right) \quad&1<z\le e \\ h_1^{-1}\circ h_2=h_1^{-1}\left(\tfrac{\ln z}{z}\right) \quad&e\le z<\infty \\ }\right. $$ is tangent to $yz=e^2$ at $(e,e)$ but with a smaller growth (decay) rate as $y\to1$ ($\infty$). Also, for $y=g(z)$, $$ 0>g'(z)=\left\{\matrix{ \frac{y^2}{z^2} \cdot \frac{1-\ln z}{1-\ln y} =\frac{\ln(\frac{z}{e})/(\frac{z}{e})^2}{\ln(\frac{y}{e})/(\frac{y}{e})^2} && y,~z\ne e \\\\ -1 && y=z=e }\right. 
$$ while $G(z)=\frac{e^2}{z}$ has slope $G'(z)=-\frac{e^2}{z^2}$ so that, still with $y=g(z)$, $$ \frac{g'(z)}{G'(z)} = - \left(\frac{y}{e}\right)^2 \frac{\ln(z/e)}{\ln(y/e)} \implies -\frac{G'(z)}{g'(z)}\ln\frac{z}{e} =\frac{\ln(y/e)}{(y/e)^2} \le \frac1y $$ using the well-known inequality $\ln t\le\frac{t}{e}$ (which can be proved graphically) with $t=\frac{y}{e}$. All of the above follows from noting that for $y,~z>0,~x=0~\iff$ $$ z^y = y^z \quad\iff\quad y\,\ln z = z\,\ln y \quad\iff\quad \frac{\ln y}{y} = \frac{\ln z}{z} $$ $$ \iff\qquad h(y)=h(z) \quad\iff\quad W(-\ln y)=W(-\ln z) $$ and analyzing the function $h(z)=\frac{\ln z}{z}$ on its maximal real intervals of invertibility as above. Note also that $y=g(z) \iff y=z=e$ or $y\ne z$ and $W(-\ln y)=W(-\ln z)$, where $W$ is the Lambert W function. Some other facts about $h(z)$ are that its reciprocal, $\frac{z}{\log z}$ (where $\log$ has base $e$), is the analytic asymptote of the prime counting function $\pi(z)$, which is also related to the logarithmic integral function. Further, its derivative is $h'(z)=\frac{1-\ln z}{z^2}$, it satisfies the differential equation $(zh)'=zh'+h=z^{-1}$ or $h'+z^{-1}h=z^{-2}$, and it has antiderivative $$ H(z)=\int_1^zh(s)ds=\left[\frac12(\ln s)^2\right]_1^z=\frac12(\ln z)^2. $$ In $\mathbb{R}^3$, there are also solutions whenever $y=\frac{p}{q}\in\mathbb{Q}$ is expressed in lowest terms and $p$ is odd, for then we can take the $p^{\text{th}}$ root of $z^y-y^z$ even if it is negative: $$ x = \left(z^{p/q}-\left(\tfrac{p}{q}\right)^z\right)^{q/p} $$ This will have a unique real solution $x$ iff one of the several conditions is met...
{ "language": "en", "url": "https://math.stackexchange.com/questions/128955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How do you explain the concept of logarithm to a five year old? Okay, I understand that it cannot be explained to a 5 year old. But how do you explain the logarithm to primary school students?
I recently proceeded as above with students rather older than 5 (in fact 17-18 years old). I was surprised to see that they seemed to get it immediately, which made me suppose that much younger students could grasp this explanation. I think that students already have a "logarithm box" in their head. I mean, before having heard the term logarithm, they already have the concept of "the power to which I have to raise the number $5$ to get the number $25$", or "the power to which I have to raise the number $2$ to get the number $8$". If this assumption is correct, teaching what a logarithm is simply amounts to giving a name to this concept, to this "logarithm box".
{ "language": "en", "url": "https://math.stackexchange.com/questions/129013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94", "answer_count": 21, "answer_id": 20 }
Evaluating $\int \dfrac {2x} {x^{2} + 6x + 13}dx$ I am having trouble understanding the first step of evaluating $$\int \dfrac {2x} {x^{2} + 6x + 13}dx$$ When faced with integrals such as the one above, how do you know to manipulate the integral into: $$\int \dfrac {2x+6} {x^{2} + 6x + 13}dx - 6 \int \dfrac {1} {x^{2} + 6x + 13}dx$$ After this first step, I am fully aware of how to complete the square and evaluate the integral, but I am having difficulties seeing the first step when faced with similar problems. Should you always look for what the $"b"$ term is in a given $ax^{2} + bx + c$ function to know what you need to manipulate the numerator with? Are there any other tips and tricks when dealing with inverse trig antiderivatives?
I look at that fraction and see that the numerator differs from the derivative of the denominator by a constant, $6$. If the numerator were $2x+6$ instead of $2x$, the fraction would be of the form $u'/u$, and I’d be very happy. So I simply make it $2x+6$, subtracting $6$ to compensate: $$\frac{2x}{x^2+6x+13}=\frac{(2x+6)-6}{x^2+6x+13}=\frac{2x+6}{x^2+6x+13}-\frac6{x^2+6x+13}\;.$$ Then I consider whether I can integrate the correction term. In this case I recognize it as the derivative of an arctangent, so I know that I’ll be able to handle it, though it will take a little algebra.
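Carrying both pieces through: the first integrates to $\ln(x^2+6x+13)$, and after completing the square, $x^2+6x+13=(x+3)^2+4$, the correction term integrates to $-3\arctan\frac{x+3}{2}$. A numerical derivative check of this candidate antiderivative (a sketch, with the integration constant dropped):

```python
import math
import random

def F(x):
    # Candidate antiderivative: ln(x^2 + 6x + 13) - 3*arctan((x + 3) / 2)
    return math.log(x * x + 6 * x + 13) - 3 * math.atan((x + 3) / 2)

def f(x):
    # The original integrand 2x / (x^2 + 6x + 13)
    return 2 * x / (x * x + 6 * x + 13)

# The central-difference derivative of F should match f everywhere.
for _ in range(200):
    x = random.uniform(-20.0, 20.0)
    h = 1e-6
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5
print("F' matches the integrand")
```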
{ "language": "en", "url": "https://math.stackexchange.com/questions/129093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Finite Sum $\sum_{i=1}^n\frac i {2^i}$ I'm trying to find the sum of : $$\sum_{i=1}^n\frac i {2^i}$$ I've tried to run $i$ from $1$ to $∞$ , and found that the sum is $2$ , i.e : $$\sum_{i=1}^\infty\frac i {2^i} =2$$ since : $$(1/2 + 1/4 + 1/8 + \cdots) + (1/4 + 1/8 + 1/16 +\cdots) + (1/8 + 1/16 + \cdots) +\cdots = 1+ 1/2 + 1/4 + 1/8 + \cdots = 2 $$ But when I run $i$ to $n$ , it's a little bit different , can anyone please explain ? Regards
Let $r=1/2$. Write all the terms being added as $$ \left. \matrix{ r& \phantom{2}r^2& \phantom{2}r^3& \phantom{2}r^4&\cdots&\phantom{n}r^n \cr 0 & \phantom{2}r^2& \phantom{2}r^3& \phantom{2}r^4&\cdots& \phantom{2}r^n \cr 0&0& \phantom{2}r^3& \phantom{2}r^4&\cdots& \phantom{2}r^n\cr & \vdots&&&&\cr 0&0&0&0&\cdots& \phantom{2} r^n\cr }\ \ \right\} n-\text{rows} $$ $$ \overline{\matrix{ r&2r^2&3r^3&4r^4&\cdots&nr^n}\phantom{dfgfsdfsfs}} $$ Using the formula for the sum of a finite Geometric series, the sum of row $i$ is $$ r^i+r^{i+1}+\cdots+ r^n ={r^i-r^{n+1}\over 1-r }. $$ The sum of the row sums is $$\eqalign{ \sum_{i=1}^n {r^i-r^{n+1}\over 1-r } &= {1\over 1-r}\Bigl(\,\sum_{i=1}^n r^i- \sum_{i=1}^nr^{n+1} \Bigr)\cr &={1\over 1-r}\cdot \biggl({r-r^{n+1}\over 1-r}-nr^{n+1}\biggr)\cr &={1\over 1-r}\cdot \biggl({r-r^{n+1}\over 1-r}-{(1-r)nr^{n+1}\over1-r}\biggr)\cr &={ r-r^{n+1}-(1-r)nr^{n+1}\over (1-r)^2}\cr &={r-r^{n+1}(1+n-nr)\over (1-r)^2}. } $$
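A quick numerical check of the final closed form against the direct sum, with $r=1/2$ as in the question (a throwaway sketch):

```python
r = 0.5
for n in range(1, 31):
    direct = sum(i * r ** i for i in range(1, n + 1))
    closed = (r - r ** (n + 1) * (1 + n - n * r)) / (1 - r) ** 2
    assert abs(direct - closed) < 1e-12

print(closed)  # for n = 30 this is already very close to the limit 2
```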
{ "language": "en", "url": "https://math.stackexchange.com/questions/129302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 0 }
Upper bound for the number of open disks containing $k$ points in the plane I hope that you can help me with this. Let P be a set of points in the plane, such that $|P|=n$, what is the maximal number of open disks containing at least $k$ points for some $k$, two discs are equivalent if they contain the same points. I have some intuition here, but I'm not sure if I should follow it. the number of distinct open discs containing at least $k$ points, for $k>2$ is bounded by ${n\choose 3}$, since every disk is uniquely defined by the 3 points closed to its boundary. Every 3 points form a triangle bounded by some disk. Suppose two different disks have the same 3 points being closest to the edge, than at least one disk has a point contained in it, which is not contained inside the other disc, than we can "shrink" the first disk until the "spare" point is closest to its edge, then it is defined by a different triplet. Is there any flaw in my thinking?
Your argument could do with clearer terminology and presentation. At any rate something is wrong, at least in the specifics for smallish $n$ and $k = 3$. Letting $n = 4$ and arranging points as in a square, there are $5$ different subsets of points of size three or more that can be realized (segregated by open disks): all four points together with the four subsets of size three. But $\binom{4}{3} = 4$. Letting $n = 5$ and arranging points as in a regular pentagon, there are $11$ different subsets of points of size three or more that can be realized: all five points, the five subsets of size four, and five subsets of size three that exclude a consecutive pair of points. But $\binom{5}{3} = 10$. [Indeed by "pinching" the regular pentagon slightly, we could fit an additional subset of three into an open disk.] Perhaps meditating on these small examples will help you to sharpen your insight. Note that the problem includes the parameter "at least $k$ points, for $k \gt 2$" but the proposed upper bound $\binom{n}{3}$ does not depend on $k$. In that case the requirement would be as if $k = 3$ had been specified.
{ "language": "en", "url": "https://math.stackexchange.com/questions/129363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many numbers of the form $p_1^2 p_2 p_3$ are there less than $10^{15}$ for $p_1$, $p_2$, $p_3$ distinct primes? Is there an easy way to compute the following question: How many numbers of the form $p_1^2 p_2 p_3$ are there less than $10^{15}$ for $p_1$, $p_2$, $p_3$ distinct primes? The only thing that strikes me as a possibility is iterating through the various primes and finding the number of primes such that $2^2 3 p_3<10^{15}$, etc, which would give me $$\operatorname{primepi}[10^{15}/(2^23)]+\operatorname{primepi}[10^{15}/(2^25)]+\operatorname{primepi}[10^{15}/(2^27)]+\cdots$$ But that gets me no where fast. I'm mostly looking for "Is there an easy way to do this?" and hints at what it might be, I'm not actually looking for it to be solved for me, just a push in the right direction. My other issue is that my primepi function I have written in python doesn't really support going up this high... I end up having to turn to wolfram alpha to get many of the values and that is no way to make an automated computation.
This Wolfram's Mathematica code gives the answer, but it will take ages for calculating up to $10^{15}$. The variable MAX is the upper limit.

MAX = 10^3;
count = 0;
For[n = 1, n < MAX, n++,
 array = FactorInteger[n]; (* factorizes N in a product of prime powers *)
 If[Length[array] == 3, (* if N is composed by exactly 3 primes *)
  If[Or[ (* then it will check their exponents *)
    And[array[[1]][[2]] == 2, array[[2]][[2]] == 1, array[[3]][[2]] == 1],
    And[array[[1]][[2]] == 1, array[[2]][[2]] == 2, array[[3]][[2]] == 1],
    And[array[[1]][[2]] == 1, array[[2]][[2]] == 1, array[[3]][[2]] == 2]
   ], (* if the exponents are (2,1,1) or (1,2,1) or (1,1,2) *)
   count++ (* then it increases the count variable *)
  ]
 ]
]
Print[count];

I've tested it up to $10^{9}$ and got exactly $68391432$ numbers.
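For comparison, here is a sketch in Python of the faster primepi-based count the question is reaching for: fix $p$ and $q$ with $q<r$, and the admissible $r$ form a primepi difference. All names here are mine, and the sieve-backed primepi only scales to modest $N$; for $10^{15}$ one would plug in a genuinely fast prime-counting function in its place.

```python
from bisect import bisect_right

def primes_up_to(n):
    """Sieve of Eratosthenes: sorted list of primes <= n."""
    if n < 2:
        return []
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def count_p2qr(N):
    """Count n < N with n = p^2 * q * r, where p, q, r are distinct primes."""
    M0 = (N - 1) // 4                  # largest possible bound on q*r (p = 2)
    P = primes_up_to(M0)
    pi = lambda x: bisect_right(P, x)  # primepi via the sieved list
    total = 0
    for p in P:
        M = (N - 1) // (p * p)         # need q * r <= M
        if M < 6:                      # 2*3 is the smallest distinct product
            break
        for q in P:                    # take q < r to avoid double counting
            if q * q >= M:
                break
            if q == p:
                continue
            hi = M // q                # r ranges over primes in (q, hi]
            cnt = pi(hi) - pi(q)
            if q < p <= hi:
                cnt -= 1               # exclude r == p
            total += cnt
    return total

def brute(N):
    """Trial-division cross-check: exponent multiset must be {2, 1, 1}."""
    c = 0
    for n in range(1, N):
        m, exps = n, []
        d = 2
        while d * d <= m:
            if m % d == 0:
                e = 0
                while m % d == 0:
                    m //= d
                    e += 1
                exps.append(e)
            d += 1
        if m > 1:
            exps.append(1)
        c += sorted(exps) == [1, 1, 2]
    return c

print(count_p2qr(10 ** 4), brute(10 ** 4))
```

The double loop itself stays small even for huge $N$ (since $p\le\sqrt{N}$ and $q\le\sqrt{N/p^2}$); at $10^{15}$ the cost shifts entirely to the primepi evaluations.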
{ "language": "en", "url": "https://math.stackexchange.com/questions/129447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Why is the tangent bundle orientable? Let $M$ be a smooth manifold. How do I show that the tangent bundle $TM$ of $M$ is orientable?
There is also a way to see it with fibre bundles and characteristic classes. It's possible the original poster is not familiar, but other people might be and more importantly I need practice. It is based on two relatively basic facts (at least I'm mostly sure I've seen them before): 1) A smooth $n$-manifold $M$ is orientable iff the first Stiefel-Whitney class of its tangent bundle $\tau_M$ vanishes, and 2) If $\xi$ is a smooth $k$-plane bundle with base space $M^n$, total space $E^{n+k}$ (both smooth manifolds) and projection $\pi:E\rightarrow M$, then $$\tau_E=\pi^*(\tau_M)\oplus\pi^*(\xi)$$ Then, if $TM$ is the total space of the $n$-plane bundle $\tau_M$ with projection map $\pi\colon TM\rightarrow M$, it is a smooth manifold with its own tangent bundle $\tau_{TM}$. Since $\pi$ is the projection map of $\tau_M$ we have $$\tau_{TM}=\pi^*(\tau_M)\oplus\pi^*(\tau_M)$$ so by the Whitney product formula $$\omega_1(\tau_{TM})=(\pi^*\omega_0)(\tau_M)\cup(\pi^*\omega_1)(\tau_M)+(\pi^*\omega_1)(\tau_M)\cup(\pi^*\omega_0)(\tau_M)=2\pi^*\omega_1(\tau_{M})=0\in H^1(TM;\mathbb{Z}/2)$$ Hence the manifold $TM$ is orientable. (But for all intents and purposes, writing down charts is the easiest way to go)
{ "language": "en", "url": "https://math.stackexchange.com/questions/129514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 4, "answer_id": 3 }
I need help with this divisibility problem. I need help with the following divisibility problem. Find all prime numbers m and n such that $mn |12^{n+m}-1$ and $m= n+2$.
Here are some thoughts to start off... The first twin primes are $(n,m)\in\{(3,5),(5,7),(11,13),(17,19)\}$. Clearly $3\not\mid12^{m+n}-1$ since $12^{m+n}\equiv0\pmod3$. Furthermore, $12^{m+n}\equiv1\pmod{m,n}\implies2,3\not\mid m,n$ and $m+n\equiv0\pmod{m-1,n-1}$, which seems like it could disqualify many candidates. However, $11\cdot13=12^2-1\mid12^{24}-1$ so $(11,13)$ is a solution... If $m=6k+1,~n=6k-1$ is prime, then $m-1=6k\mid m+n=12k$ $\implies$ $12^{12k}\equiv1\pmod m$ for free (from Euler's theorem), while $12^{12k}\equiv1\pmod n$ iff $\operatorname{ord}_n12\mid12k$. Now

* for $k=1$, $\operatorname{ord}_5 12=4\mid12$;
* for $k=2$, $\operatorname{ord}_{11}12=1\mid24$;
* for $k=3$, $\operatorname{ord}_{17}12=16\not\mid36$;
* for $k=4$, $m$ is not prime;
* for $k=5$, $\operatorname{ord}_{29}12=4\mid12\cdot5$;
* for $k=7$ (the next twin prime pair), $\operatorname{ord}_{41}12=40\not\mid12\cdot7$.

Perhaps there is a good argument why all solutions must be of the form $6k\pm1$. A little checking in sage shows that @Pedja's solutions, $k=1,2,5$, are the only ones of this form for $k\le10^6$.

for k in range(1,100):
    m = 6*k+1
    n = 6*k-1
    if is_prime(m) and is_prime(n):
        test_order = multiplicative_order(mod(12,6*k-1))
        test_value = 12*k % test_order
        print '%5d%5d%5d%5d%5d' % (k, m, n, test_order, test_value)

In fact, there is a good argument using Fermat's little theorem, and @Michalis presents it in his post.
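The same search is easy to reproduce without Sage (a self-contained Python sketch; `mult_order` is naive but fine at this size, and the expected result relies on the check reported above):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def mult_order(a, n):
    # Least e > 0 with a^e = 1 (mod n); assumes gcd(a, n) = 1.
    e, x = 1, a % n
    while x != 1:
        x = x * a % n
        e += 1
    return e

# For twin primes m = 6k+1, n = 6k-1, the congruence mod m is automatic
# (ord_m(12) divides m-1 = 6k, which divides 12k), so the pair works
# iff ord_n(12) divides 12k.
hits = [k for k in range(1, 1000)
        if is_prime(6 * k + 1) and is_prime(6 * k - 1)
        and 12 * k % mult_order(12, 6 * k - 1) == 0]
print(hits)  # -> [1, 2, 5]
```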
{ "language": "en", "url": "https://math.stackexchange.com/questions/129582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
On solvable quintics and septics Here is a nice sufficient (but not necessary) condition on whether a quintic is solvable in radicals or not. Given, $x^5+10cx^3+10dx^2+5ex+f = 0\tag{1}$ If there is an ordering of its roots such that, $x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_5 + x_5 x_1 - (x_1 x_3 + x_3 x_5 + x_5 x_2 + x_2 x_4 + x_4 x_1) = 0\tag{2}$ or alternatively, its coefficients are related by the quadratic in f, $(c^3 + d^2 - c e) \big((5 c^2 - e)^2 + 16 c d^2\big) = (c^2 d + d e - c f)^2 \tag{3}$ then (1) is solvable. This also implies that if $c\neq0$, then it has a solvable twin, $x^5+10cx^3+10dx^2+5ex+f' = 0\tag{4}$ where $f'$ is the other root of (3). The Lagrange resolvents are the roots of, $z^4+fz^3+(2c^5-5c^3e-4d^2e+ce^2+2cdf)z^2-c^5fz+c^{10} = 0\tag{5}$ so, $x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}\tag{6}$ Two questions though: I. Does the septic (7th deg) analogue, $x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0\tag{7}$ imply such a septic is solvable? II. The septic has a $5! = 120$-deg resolvent. While this is next to impossible to explicitly construct, is it feasible to construct just the constant term? Equating it to zero would then imply a family of solvable septics, just like (3) above. More details and examples for (2) like the Emma Lehmer quintic in my blog.
It turns out there is an infinite number of such septics, such as the Hashimoto-Hoshi septic, $$\small x^7 - (a^3 + a^2 + 5a + 6)x^6 + 3(3a^3 + 3a^2 + 8a + 4)x^5 + (a^7 + a^6 + 9a^5 - 5a^4 - 15a^3 - 22a^2 - 36a - 8)x^4 - a(a^7 + 5a^6 + 12a^5 + 24a^4 - 6a^3 + 2a^2 - 20a - 16)x^3 + a^2(2a^6 + 7a^5 + 19a^4 + 14a^3 + 2a^2 + 8a - 8)x^2 - a^4(a^4 + 4a^3 + 8a^2 + 4)x + a^7=0$$ For example, let $a=1$ so, $$1 - 17 x + 44 x^2 - 2 x^3 - 75 x^4 + 54 x^5 - 13 x^6 + x^7=0$$ which is the equation involved in $\cos\frac{\pi k}{43}$ and order its roots as, $$x_1,\,x_2,\,x_3,\,x_4,\,x_5,\,x_6,\,x_7 =\\ r_1,\,r_2,\,r_5,\,r_6,\,r_3,\,r_7,\,r_4 = \\ -0.752399,\; 0.0721331,\; 2.63744,\; 3.62599,\; 0.480671,\; 6.29991,\; 0.636246$$ where the $r_i$ is the root numbering in Mathematica. Then, $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0$$
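The quoted root ordering can be verified directly from the 6-digit approximations above (only about $10^{-4}$ accuracy can be expected at this precision):

```python
# Roots of the a = 1 septic, in the ordering x_1, ..., x_7 given above.
x = [-0.752399, 0.0721331, 2.63744, 3.62599, 0.480671, 6.29991, 0.636246]

s1 = sum(x[i] * x[(i + 1) % 7] for i in range(7))  # x1x2 + x2x3 + ... + x7x1
s2 = sum(x[i] * x[(i + 2) % 7] for i in range(7))  # x1x3 + x3x5 + ... + x6x1 (same terms, cyclically)
print(round(abs(s1 - s2), 3))  # -> 0.0
```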
{ "language": "en", "url": "https://math.stackexchange.com/questions/129655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
(co)reflector to the forgetful functor $U:\mathbf{CMon} \to \mathbf{ Mon}$ I've been asking questions on reflectors before and I hope you are not getting annoyed. Apologies if that's the case. My question is the following: Are there reflectors to the forgetful functor $U: \mathbf{CMon} \to \mathbf{Mon}$ from commutative monoids to the general monoids? I know they exist in rings and groups but I have trouble working it out for monoids. Any answer is very much appreciated, but one not referring to the adjoint functor theorem is preferred.
Well it's pretty much the same as for groups, and in fact, every algebraic structure which includes a binary composition law. When $M$ is a monoid, then $M^{\mathrm{ab}}$ is defined to be $M/\sim$, where $\sim$ is the smallest congruence relation on $M$ which satisfies $ab \sim ba$ for all $a,b \in M$. Then $M \mapsto M^{\mathrm{ab}}$ is left adjoint to the inclusion functor $U$. The proof is trivial. If you want to have an explicit description of $\sim$ (which is important for computations, but not for the proof that the above is true): $x \sim y$ iff there is a composition $x=x_1 \cdots x_n$ and a permutation $\sigma$ of $1,\dotsc,n$ such that $y = x_{\sigma(1)} \cdots x_{\sigma(n)}$. [This easy description is not available for the category of groups] In other words: Computation in $M^{\mathrm{ab}}$ is as in $M$, but you don't care for the order in which they are done.
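For a free monoid the explicit description of $\sim$ specializes to something easily checked by machine: two words are congruent in $M^{\mathrm{ab}}$ exactly when their letters agree up to permutation, i.e. as multisets (a small illustration; the sample words are arbitrary):

```python
from collections import Counter

def ab_equiv(x, y):
    # In the abelianization of a free monoid, a word is determined by
    # the multiset of its letters, so compare letter counts.
    return Counter(x) == Counter(y)

print(ab_equiv("abab", "aabb"))  # -> True: same letters, different order
print(ab_equiv("ab", "abb"))     # -> False: the multiplicities differ
```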
{ "language": "en", "url": "https://math.stackexchange.com/questions/129714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show the existence of a complex differentiable function defined outside $|z|=4$ with derivative $\frac{z}{(z-1)(z-2)(z-3)}$ My attempt I wrote the given function as a sum of rational functions (via partial fraction decomposition), namely $$ \frac{z}{(z-1)(z-2)(z-3)} = \frac{1/2}{z-1} + \frac{-2}{z-2} + \frac{3/2}{z-3}. $$ This then allows me to formally integrate the function. In particular, I find that $$ F(z) = 1/2 \log(z-1) - 2 \log(z-2) + 3/2 \log(z-3) $$ is a complex differentiable function on the set $\Omega = \{z \in \mathbb{C}: |z| > 4\}$ with the derivative we want. So this seems to answer the question, as far as I can tell. The question then asks if there is a complex differentiable function on $\Omega$ whose derivative is $$ \frac{z^2}{(z-1)(z-2)(z-3)}. $$ Again, I can write this as a sum of rational functions and formally integrate to obtain the desired function on $\Omega$ with this particular derivative. Woo hoo. My question Is there more to this question that I'm not seeing? I was also able to write the first given derivative as a geometric series and show that this series converged for all $|z| > 3$, but I don't believe this helps me to say anything about the complex integral of this function. In the case that it does, perhaps this is an alternative avenue to head down? Any insight/confirmation that I'm not overlooking something significant would be much appreciated. Note that this an old question that often appears on study guides for complex analysis comps (one being my own), so that's in part why I'm thinking (hoping?) there may be something deeper here. For possible historical context, the question seems to date back to 1978 (see number 7 here): http://math.rice.edu/~idu/Sp05/cx_ucb.pdf Thanks for your time.
Using Morera's theorem you can prove that your function has an antiderivative on any disk $D \subset \{z \in \Bbb{C} : |z| > 4\}$. The relevant statement is: if $f: D \to \Bbb{C}$ is a continuous function such that $\int_T f(z)dz=0$ on any triangle contained in $D$, then $f$ has an antiderivative on $D$ (this is the criterion used in the proof of Morera's theorem; Morera's theorem itself concludes that $f$ is holomorphic). The idea is that on any such disk your function is holomorphic, and the integral over any closed path (usually triangles or rectangles are chosen) is zero (Cauchy's theorem). Therefore you can find an antiderivative of $f$ on $D$. For the whole domain $\{z \in \Bbb{C} : |z| > 4\}$ one further observation settles the question: on this annulus a holomorphic function has an antiderivative iff $\int_{|z|=R}f(z)dz=0$ for some (hence every) $R>4$, i.e. iff the residues at $1,2,3$ sum to zero. For $\frac{z}{(z-1)(z-2)(z-3)}$ the residues are $\frac12,-2,\frac32$, which sum to $0$; this is exactly why the logarithmic combination in the question is single-valued on $|z|>4$. For $\frac{z^2}{(z-1)(z-2)(z-3)}$ they are $\frac12,-4,\frac92$, which sum to $1\neq0$, so no antiderivative exists on the whole domain in the second case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/129773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Interpreting $F[x,y]$ for $F$ a field. First, is it appropriate to write $F[x,y] = (F[x])[y] $? In particular, if $F$ is a field, then we know $F[x]$ is a Euclidean domain. Are there necessary and sufficient conditions for when $F[x,y]$ is also a Euclidean domain?
Euclidean domains are always principal ideal domains. The ideal $(x,y)$ is never principal. Hence $F[x,y]$ is never a Euclidean domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/129830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Showing a linear mapping is continuous (or not) I have three linear mappings: \begin{equation}t_0(f)=f(t_0)\end{equation} \begin{equation}I(f)=\int_{0}^{1}f(t)f_0(t)dt\end{equation} \begin{equation}T(f)=f(t)f_0(t)\end{equation} and I want to determine whether or not they are continuous on $(C[0,1],\|\centerdot\|_1)$. I have been trying to prove continuity by showing boundedness, e.g. $|T(f)|\leq M\|f\|_1$, with no success. I also have tried to construct a sequence $f_n$ satisfying $\|f_n\|_1=1$ and $|f_n(t)|\rightarrow\infty$, or find a sequence $f_n$ such that $\int_{0}^{1}f_n(t)dt\rightarrow0$ but $|T(f)|\nrightarrow0$. I have a hard time with counter examples. I would greatly appreciate any hint or push in the right direction.
$||f||_1$ is the area under the curve $t \mapsto |f(t)|$. One way of thinking about the first problem is to look for a sequence of simple shapes that have constant area, but the height goes to $\infty$. Rectangles are an obvious choice, except they are not continuous, but this can be fixed easily in many ways. For the second, consider using Hölder's inequality. For the third (assuming this is a mapping into $(C[0,1],\|\centerdot\|_1)$), notice that $||Tf||_1 = I(Tf)$, where $I$ is from the second problem.
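To make the rectangle idea for the first functional concrete (a hedged sketch; the spike location $t_0=\tfrac12$ and the triangular shape are arbitrary choices): triangular spikes of height $n$ and base $2/n$ centred at $t_0$ are continuous, have $\|f_n\|_1=1$, yet $f_n(t_0)=n\to\infty$:

```python
def f(n, t, t0=0.5):
    # Continuous triangular spike: height n, base width 2/n, centred at t0.
    return max(0.0, n - n * n * abs(t - t0))

def l1_norm(n, m=100000):
    # Midpoint-rule approximation of the L1 norm on [0, 1].
    h = 1.0 / m
    return sum(f(n, (i + 0.5) * h) for i in range(m)) * h

for n in (4, 16, 64):
    assert abs(l1_norm(n) - 1.0) < 1e-3  # unit L1 norm (triangle area = 1)
    assert f(n, 0.5) == n                # but evaluation at t0 blows up
print("evaluation at t0 is unbounded on the unit L1 sphere")
```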
{ "language": "en", "url": "https://math.stackexchange.com/questions/129890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An harmonic radial function in $\mathbb{R}^2$ I'm taking multivariable-calculus, and I got the following question: A function $f$ in n variables is called harmonic if $\sum_{i = 1}^{n}{\frac{\partial ^2 f}{\partial x_{i}^2}} = 0$. Is there a non-constant, radial harmonic function in $\mathbb{R}^2$? I found an almost-identical question here (PDF file) (number three), and I'm guessing that the answer is no. One explanation I could think of, is that if there was such function, it would contradict the mean value property at the origin. However, we didn't learn about the mean value property (or about harmonic functions in general), so I'm not sure if I'm correct and either way I can't use it. I feel like there is something very simple I'm missing. Ideas? Thanks!
If $u$ is a radial harmonic function on the unit ball of $\mathbb R^2$, we can write $u(x,y)=g(x^2+y^2)$, where $g\colon\mathbb R_{\geq 0}\to\mathbb R$. We have $\partial_x u(x,y)=2xg'(x^2+y^2)$ and $\partial_{xx}u(x,y)=4x^2g''(x^2+y^2)+2g'(x^2+y^2)$ so $$\Delta u(x,y)=4(x^2+y^2)g''(x^2+y^2)+4g'(x^2+y^2).$$ We denote $h:=g'$, then for $r>0$ we have $rh'(r)+h(r)=0$, so $(rh(r))'=0$ and $rh(r)$ is constant. If we want $h$ not identically $0$ in order to have $g$ non-constant, we have to take $h(r)=Cr^{-1}$ for $C\neq 0$, and $r\in\mathbb R_{>0}$. But then $h$ cannot be defined at $0$, and so $g$ cannot be defined at $0$; so necessarily $h$ is identically vanishing, $g$ is constant and $u$ is constant.
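The computation also shows the obstruction sits only at the origin: on the punctured plane, $h(r)=C/r$ integrates to $g(s)=C\ln s$, so $u(x,y)=\ln(x^2+y^2)$ is a genuine non-constant radial harmonic function away from $0$. A finite-difference sanity check (sketch):

```python
import math
import random

def u(x, y):
    # g(s) = ln s applied to s = x^2 + y^2 (harmonic away from the origin)
    return math.log(x * x + y * y)

h = 1e-4
for _ in range(100):
    x, y = random.uniform(1.0, 2.0), random.uniform(1.0, 2.0)
    lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / (h * h)
    assert abs(lap) < 1e-4  # the discrete Laplacian vanishes up to O(h^2)
print("ln(x^2 + y^2) is harmonic away from the origin")
```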
{ "language": "en", "url": "https://math.stackexchange.com/questions/130025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If "multiples" is to "product", "_____" is to "sum" I know this might be a really simple question for those fluent in English, but I can't find the term that describes numbers that make up a sum. The numbers of a certain product are called "multiples" of that "product". Then what are the numbers of a certain sum called?
According to Wikipedia, "summands", "addends", or "terms" are all acceptable. Also, I've never heard "multiples" used as the corresponding word for multiplication. Wikipedia lists the words "factors" and "multiplicands" both of which I'm familiar with.
{ "language": "en", "url": "https://math.stackexchange.com/questions/130093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 1 }
Improper integral of $\frac{x}{e^x-1}$ This integral came up in an exercise on the estimation of the specific heat of a 1-D solid and is probably a standard integral, possibly one that can be solved by contour integration: \begin{equation} \int_0^{+\infty} \frac{x}{e^x-1} dx \end{equation} I have some rudimentary knowledge of contour integrals, but I can't come up with a proper path, also because of the many singularities along the imaginary axis. Any suggestion?
Make a change of variables, $e^{-x}=u$, and write the integral as $$\int_0^\infty \frac{x e^{-x}}{1 - e^{-x}}dx$$ Now substituting gives $$-\int_0^1 \frac{\log u}{1 - u}du $$ This is a known integral that evaluates to $\dfrac{\pi^2}{6}$. If you want to prove it, you can use the dilogarithm. Let $1-u=x$, so that $$-\int_0^1 \frac{\log(1 - x)}{x}dx$$ Now, since we're working on $(0,1)$ it is legitimate to use $$\frac{{ - \log \left( {1 - x} \right)}}{x} = \sum_{n = 1}^\infty {\frac{{{x^{n - 1}}}}{n}} $$ Integrating termwise gives $$\int_0^t \frac{-\log(1 - x)}{x} dx = \sum_{n = 1}^\infty \frac{t^n}{n^2} = \mathrm{Li}_2 (t)$$ Evaluating at $t=1$ gives $$ -\int_0^1 \frac{\log(1 - x)}{x} dx = \sum_{n = 1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$$
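Both routes can be sanity-checked numerically: expanding $\frac{1}{e^x-1}=\sum_{k\ge1}e^{-kx}$ and using $\int_0^\infty x e^{-kx}dx=\frac{1}{k^2}$ turns the integral into $\sum 1/k^2$, and direct quadrature agrees with $\pi^2/6$ (a rough sketch; the truncation points are ad hoc):

```python
import math

# Termwise: 1/(e^x - 1) = sum_{k>=1} e^{-kx}, and each term integrates to 1/k^2.
series = sum(1.0 / (k * k) for k in range(1, 200001))

# Direct midpoint-rule quadrature on [0, 40]; the integrand decays like x e^{-x}.
n, b = 400000, 40.0
h = b / n
quad = h * sum(((i + 0.5) * h) / math.expm1((i + 0.5) * h) for i in range(n))

print(abs(series - math.pi ** 2 / 6) < 1e-4, abs(quad - math.pi ** 2 / 6) < 1e-6)  # -> True True
```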
{ "language": "en", "url": "https://math.stackexchange.com/questions/130167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Calculating conditional probability for markov chain I have a Markov chain with state space $E = \{1,2,3,4,5\}$ and transition matrix below: $$ \begin{bmatrix} 1/2 & 0 & 1/2 & 0 & 0 \\\ 1/3 & 2/3 & 0 & 0 & 0 \\\ 0 & 1/4 & 1/4 & 1/4 & 1/4 \\\ 0 & 0 & 0 & 3/4 & 1/4 \\\ 0 & 0 & 0 & 1/5 & 4/5\ \end{bmatrix} $$ How would I find the conditional probabilities of $\mathbb{P}(X_2 = 5 | X_0 =1)$ and $\mathbb{P}(X_3 = 1 | X_0 =1)$? I am trying to use the formula (or any other formula, if anyone knows of any) $p_{ij}^{(n)} = \mathbb{P}(X_n = j | X_0 =i)$, the probability of going from state $i$ to state $j$ in $n$ steps. So $\mathbb{P}(X_2 = 5 | X_0 =1) = p_{15}^2$, so I read the entry in $p_{15}$, and get the answer is $0^2$, but the answer in my notes say it is $1/8$? Also, I get for $\mathbb{P}(X_3 = 1 | X_0 =1) = p_{11}^3 = (\frac{1}{2})^3 = 1/8$, but the answer says it is $1/6$?
I will give a not-really-formal solution, but maybe this will help you too. $P(X_2=5|X_0=1)$ means getting from state 1, at moment 0, to state 5, at moment 2. So we are allowed to make two steps. The final destination, state 5, is column 5, so the only states with a nonzero probability of reaching it in one step are 3, 4, 5. So the first step must get us to one of these. Let's check the first row: from state 1 we can only reach states 1 and 3, which leaves us only one way, $1\rightarrow 3\rightarrow 5$, with probability $1/2\cdot1/4=1/8$. A slightly longer situation is $P(X_3=1|X_0=1)$; here we have three steps. Thinking in the same way as before, there are two ways, $1\rightarrow1\rightarrow1\rightarrow1$ and $1\rightarrow3\rightarrow2\rightarrow1$, with probabilities $(1/2)^3$ and $1/2\cdot1/4\cdot1/3$ (which sum to $1/6$). Of course Didier's solution is more formal, and it may be worth doing it my way only when we have lots of zeros, like in this case (to eliminate possible combinations). This should also help you see why $p_{15}^{(2)}$ is not $(p_{15})^2$: the superscript refers to an entry of the matrix power $P^n$, not to a power of a single entry. Raising the entry $p_{11}$ to the third power, for instance, only counts the single path $1\rightarrow 1\rightarrow 1\rightarrow 1$, staying in state 1.
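The same two probabilities can be computed mechanically from the matrix powers $P^2$ and $P^3$ (a sketch using exact rational arithmetic; rows and columns are $0$-indexed, so entry `[0][4]` is $p_{15}$):

```python
from fractions import Fraction as F

P = [
    [F(1, 2), F(0), F(1, 2), F(0), F(0)],
    [F(1, 3), F(2, 3), F(0), F(0), F(0)],
    [F(0), F(1, 4), F(1, 4), F(1, 4), F(1, 4)],
    [F(0), F(0), F(0), F(3, 4), F(1, 4)],
    [F(0), F(0), F(0), F(1, 5), F(4, 5)],
]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
P3 = matmul(P2, P)

print(P2[0][4])  # P(X_2 = 5 | X_0 = 1)
print(P3[0][0])  # P(X_3 = 1 | X_0 = 1)
```

The printed entries reproduce the $1/8$ and $1/6$ from the notes.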
{ "language": "en", "url": "https://math.stackexchange.com/questions/130217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$ I would like some help finding the critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$. I tried solving $f_x=0, f_y=0$ (where $f_x, f_y$ are the partial derivatives) but the resulting equation is very complex. The exercise has a hint: think of $f_x-f_y$ and $f_x+f_y$. However, I can't see where to use it. Thanks!
Here $f_{x}=2x+y-\frac{1}{x^{2}}$ and $f_{y}=2y+x-\frac{1}{y^{2}}$. For a critical point, $$f_{x}=0,\ f_{y}=0$$ $$2x+y-\frac{1}{x^{2}}=0\ ,\ 2y+x-\frac{1}{y^{2}}=0$$ Multiplying through by $x^{2}$ and $y^{2}$ respectively (legitimate since $x,y\neq0$ on the domain), $$2x^{3}+x^{2}y-1=0\ ,\ 2y^{3}+xy^{2}-1=0$$ Subtracting these two equations (this is where the hint about $f_x-f_y$ comes in), we get $$ 2(x^{3}-y^{3})+xy(x-y)=0$$ $$ 2(x-y)(x^{2}+xy+y^{2})+xy(x-y)=0$$ $$(x-y)(2x^{2}+3xy+2y^{2})=0$$ The second factor never vanishes on the domain: viewed as a quadratic in $x$, its discriminant is $9y^{2}-16y^{2}=-7y^{2}<0$ for $y\neq 0$, so $2x^{2}+3xy+2y^{2}=0$ only at $(0,0)$, which is excluded since $\frac{1}{x}$ and $\frac{1}{y}$ are undefined there. Hence $$x-y=0\ \Rightarrow x=y$$ Putting $x=y$ into $2x^{3}+x^{2}y-1=0$ gives $$3x^{3}=1\ \Rightarrow\ x=y=3^{-1/3}$$ $\therefore \left(3^{-1/3},\,3^{-1/3}\right)$ is the only critical point.
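Setting $x=y$ in $f_x=0$ gives $3x=1/x^{2}$, i.e. $x=3^{-1/3}$; a quick floating-point check (sketch) that the gradient actually vanishes there:

```python
import math

x = y = 3 ** (-1 / 3)   # the candidate critical point x = y = 3^(-1/3)

fx = 2 * x + y - 1 / x**2
fy = 2 * y + x - 1 / y**2

print(fx, fy)   # both should be ~0 up to rounding
```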
{ "language": "en", "url": "https://math.stackexchange.com/questions/130277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
On Krull dimension of $M/(0 :_{M} \mathfrak{m}^t)$ module Let $(R,\mathfrak{m})$ be a commutative Noetherian local ring and $M$ is an $R$-module. There is an non-negative integer $t$ such that $M/(0 :_{M} \mathfrak{m}^t)$ is finitely generated. Then $$\dim M = \dim M/(0 :_{M} \mathfrak{m}^t)$$ I try to prove this proposition but i have not find out the proof. Can someone give me any idea to prove this? Thanks.
If I understand the notation correctly, then $(0 :_{M} \mathfrak m^t)$ denotes the submodule of $M$ consisting of elements killed by $\mathfrak m^t$. Call this submodule $N$. Consider the short exact sequence $$0 \to N \to M \to M/N \to 0.$$ If we write $I = Ann(M)$ and $J = Ann(M/N)$, then we see that $\mathfrak m^t J \subset I \subset J.$ If $M/N = 0$, so $J = R$, then $\mathfrak m^t \subset I,$ hence $R/I$ is a quotient of $R/\mathfrak m^t$, and so $R/I$ has dimension zero. Thus $\dim M := \dim M/I = 0 = \dim M/N$, as required. If $M/N \neq 0,$ so that $J \neq R,$ then $J \subset \mathfrak m,$ and so we see that $J^{t+1} \subset I \subset J$. Since $\dim R/J^{t+1} = \dim R/J$ (the former is just a nilpotent thickening of the latter), we see that both are equal to $\dim R/I$, which is intermediate between the two, and hence again we have $\dim M := \dim R/I = \dim R/J =: \dim M/N.$ (So $M/N$ being f.g. is not actually needed.) If instead you want to think about the dimension of the support, rather than in terms of Krull dimensions of annihilators (the two are the same in the f.g. context), then from the above exact sequence you see that $M$ and $M/N$ have identical supports, since $N$ is supported at the closed point $\mathfrak m$ (being annihilated by $\mathfrak m^t$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/130339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding the minimal polynomials of trigonometric expressions quickly If in an exam I had to calculate the minimal polynomials of, say, $\sin(\frac{2 \pi}{5})$ or $\cos(\frac{2 \pi}{19})$, what would be the quickest way to do it? I can use the identity $e^{ \frac{2 i \pi}{n}} = \cos(\frac{2 \pi}{n}) + i\sin(\frac{ 2 \pi}{n})$ and raise to the power $n$, but this gets nasty in the $n = 19$ case... Thanks
William E. Heierman has a nice, simple, generic and quick method here. It would have no problem with $n=19$ (the original example is for $n=15$). Since it is a recurrent question, and nice application of cyclotomic polynomials, I chose to expose it in French Wikipedia.
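For the $n=5$ cases in the question, the answers are small enough to check numerically. The polynomials below, $4t^{2}+2t-1$ for $\cos(2\pi/5)$ and $16t^{4}-20t^{2}+5$ for $\sin(2\pi/5)$, are the standard known minimal polynomials, quoted here rather than derived; for $\cos(2\pi/19)$ the analogous polynomial has degree $\varphi(19)/2=9$.

```python
import math

c = math.cos(2 * math.pi / 5)
s = math.sin(2 * math.pi / 5)

# minimal polynomial of cos(2*pi/5): 4t^2 + 2t - 1, evaluated at c
pc = 4 * c**2 + 2 * c - 1
# minimal polynomial of sin(2*pi/5): 16t^4 - 20t^2 + 5, evaluated at s
ps = 16 * s**4 - 20 * s**2 + 5

print(pc, ps)   # both should be ~0 up to rounding
```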
{ "language": "en", "url": "https://math.stackexchange.com/questions/130412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Residue integral: $\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx$ with $0 \lt a \lt 1$. I'm self studying complex analysis. I've encountered the following integral: $$\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx \text{ with } a \in \mathbb{R},\ 0 \lt a \lt 1. $$ I've done the substitution $e^x = y$. What kind of contour can I use in this case ?
The substitution $e^x=y$ leads to the integral $$ \int_0^\infty\frac{y^{a-1}}{y+1}\,dy. $$ This can be computed by integrating the function $f(z)=z^{a-1}/(z+1)$ along the keyhole contour. We consider the branch of $z^{a-1}$ defined on $\mathbb{C}\setminus[0,\infty)$ with $\arg z\in(0,2\pi)$, so that $(-1)^{a-1}=e^{\pi(a-1)i}$. For small $\epsilon>0$ and large $R>0$, the contour is made up of the interval $[\epsilon,R]$, the circle $C_R=\{|z|=R\}$ counterclockwise, the interval $[R,\epsilon]$ and the circle $C_\epsilon=\{|z|=\epsilon\}$ clockwise. The function $f$ has a simple pole at $z=-1$ with residue $(-1)^{a-1}=e^{\pi(a-1)i}$. It is easy to see that $$ \lim_{\epsilon\to0}\int_{C_\epsilon}f(z)\,dz=\lim_{R\to\infty}\int_{C_R}f(z)\,dz=0. $$ Then $$ \int_0^\infty\frac{y^{a-1}}{y+1}\,dy+\int_\infty^0\frac{(e^{2\pi i}y)^{a-1}}{y+1}\,dy=2\,\pi\,i\operatorname{Res}(f,-1), $$ from where $$ \bigl(1-e^{2\pi(a-1)i}\bigr)\int_0^\infty\frac{y^{a-1}}{y+1}\,dy=2\,\pi\,i\,e^{\pi(a-1)i} $$ and $$ \int_0^\infty\frac{y^{a-1}}{y+1}\,dy=\frac{\pi}{\sin((1-a)\pi)}. $$
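A numerical sanity check (sketch; the truncation interval and step size are ad-hoc choices) of the closed form for one admissible value, $a=1/3$:

```python
import math

a = 1 / 3

def integrand(x):
    return math.exp(a * x) / (1 + math.exp(x))

# composite trapezoid rule on [-100, 100]; for 0 < a < 1 both tails decay
# exponentially, so the truncation error is negligible
h = 1e-3
n = 200000
x0 = -100.0
total = 0.5 * (integrand(x0) + integrand(x0 + n * h))
for i in range(1, n):
    total += integrand(x0 + i * h)
total *= h

expected = math.pi / math.sin((1 - a) * math.pi)
print(total, expected)
```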
{ "language": "en", "url": "https://math.stackexchange.com/questions/130472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Gram schmidt, incorrect answer I projected y onto x giving me a vector y1. I then subtracted y1 from y which gave me a vector orthogonal to x, I called this vector y2. This vector was $y_2 = (-1, -7, -1)$. I then normalized x and y2 and entered them as my answers. But it says my answer for y2 is incorrect.
You have the right procedure. But $$\eqalign{ y_1 ={\rm proj}_x(y)&={x\cdot y\over x\cdot x} x \cr &={ 4\cdot 1+3(-11/2)+1(-1/2)\over 4\cdot4+3\cdot 3+1\cdot1 }x\cr &={-13\over26 }x\cr &={-1\over2}\Biggl(\matrix{4\cr3\cr1}\Biggr)\cr &= \Biggl(\matrix{-2\cr-3/2\cr-1/2}\Biggr). } $$ So, $$ y_2= \Biggl(\matrix{1\cr-11/2\cr-1/2}\Biggr) - \Biggl(\matrix{-2\cr-3/2\cr-1/2}\Biggr)= \Biggl(\matrix{3\cr-4\cr0}\Biggr).$$ Here's the salient point I want to make: It's a good idea to check your work when you can. As a quick spot check, we have $$y_2\cdot x=3\cdot4+(-4)(3)+0\cdot1=0,$$ as it should (the $y_2$ of your calculation isn't orthogonal to $x$).
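The same computation can be scripted; a sketch with exact rational arithmetic, using the vectors $x=(4,3,1)$ and $y=(1,-11/2,-1/2)$ that appear in the calculation above:

```python
from fractions import Fraction as F

x = [F(4), F(3), F(1)]
y = [F(1), F(-11, 2), F(-1, 2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

coef = dot(x, y) / dot(x, x)            # -13/26 = -1/2
y1 = [coef * xi for xi in x]            # projection of y onto x: (-2, -3/2, -1/2)
y2 = [yi - y1i for yi, y1i in zip(y, y1)]   # y - y1 = (3, -4, 0)

print(y2)
print(dot(y2, x))   # orthogonality spot check: should be 0
```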
{ "language": "en", "url": "https://math.stackexchange.com/questions/130529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability - Coin Toss - Find Formula The problem statement, all variables and given/known data: Suppose a fair coin is tossed $n$ times. Find simple formulae in terms of $n$ and $k$ for a) $P(k-1 \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ b) $P(k \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ Relevant equations: $P(k \mbox{ heads in } n \mbox{ fair tosses})=\binom{n}{k}2^{-n}\quad (0\leq k\leq n)$ The attempt at a solution: I'm stuck on the conditional probability. I've dabbled with it a little bit but I'm confused what $k-1$ intersect $k$ is. This is for review and not homework. The answer to a) is $k/(n+1)$. I tried $P(k-1 \mbox{ heads} \mid k \mbox{ heads})=P(k-1 \cap K)/P(K \mbox{ heads})=P(K-1)/P(K).$ I also was thinking about $$P(A\mid A,B)=P(A\cap (A\cup B))/P(A\cup B)=P(A\cup (A\cap B))/P(A\cup B)=P(A)/(P(A)+P(B)-P(AB))$$
Let $X$ be the random variable for the number of heads in $n$ tosses. Then your questions amount to finding the conditional probabilities: $$ \mathbb{P}(X = k-1 | k-1 \leqslant X \leqslant k) = \frac{\mathbb{P}( \{X = k-1\} \cap \{ k-1 \leqslant X \leqslant k \} }{\mathbb{P}( \{ k-1 \leqslant X \leqslant k \}) } = \frac{\mathbb{P}( X = k-1)}{\mathbb{P}( X =k-1) + \mathbb{P}( X =k) } $$ and $$ \mathbb{P}(X = k | k-1 \leqslant X \leqslant k) = \frac{\mathbb{P}( \{X = k\} \cap \{ k-1 \leqslant X \leqslant k \} }{\mathbb{P}( \{ k-1 \leqslant X \leqslant k \}) } = \frac{\mathbb{P}( X = k)}{\mathbb{P}( X =k-1) + \mathbb{P}( X =k) } $$ Both are solved by noting, for $1 \leqslant k \leqslant n$ $$ \frac{\mathbb{P}(X=k-1)}{\mathbb{P}(X=k)} = \frac{\binom{n}{k-1}}{\binom{n}{k}} = \frac{k}{n-k+1} $$
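The closed forms $k/(n+1)$ for (a) and $(n-k+1)/(n+1)$ for (b) can be checked exhaustively for a small $n$ (a sketch using exact rationals; `math.comb` needs Python 3.8+):

```python
from fractions import Fraction as F
from math import comb

def p(n, k):
    # P(k heads in n fair tosses) = C(n, k) / 2^n
    return F(comb(n, k), 2**n)

n = 10
for k in range(1, n + 1):
    a = p(n, k - 1) / (p(n, k - 1) + p(n, k))   # part (a)
    b = p(n, k) / (p(n, k - 1) + p(n, k))       # part (b)
    assert a == F(k, n + 1)
    assert b == F(n - k + 1, n + 1)
print("formulas check out for n =", n)
```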
{ "language": "en", "url": "https://math.stackexchange.com/questions/130562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Degrees of connected vertices and average vertex degree. This is a problem inspired by Hard planar graph problem. Let $\nu$ be the average vertex degree of a graph $\Gamma$. Is it always possible to find an edge $\{u, v\}$ of $\Gamma$ such that $$\deg(u) + \deg(v) \le 2\nu$$ Intuitively, it seems that this is the case, but I can't seem to find a nice way to prove it. Edit/Update: Since the result seems to be trivially false for disconnected graphs, let us consider the case of connected graphs. It seems intuitively true that this should be the case (at least for connected graphs).
The inequality should go the other way, I think (assuming, of course, that there exists at least one edge). Let $V$ and $E$ be the sets of vertices and edges respectively. $|V| \nu = \sum_{v \in V} \text{deg}(v) = 2 |E|$ (each edge is counted twice) Let $R = \sum_{\{u,v\} \in E} \text{deg}(u) + \text{deg}(v) = \sum_{v \in V} \text{deg}(v)^2$ (each vertex $v$ occurs $\text{deg}(v)$ times on the left side). Since there are $|E|$ terms in the sum on the left, at least one $\deg(u) + \deg(v)\ge R/|E|$. By Cauchy-Schwarz, $|V|\nu = \sum_{v \in V} \text{deg}(v) \le \left(\sum_{v \in V} 1\right)^{1/2} \left(\sum_{v \in V} \text{deg}(v)^2\right)^{1/2} = |V|^{1/2} R^{1/2}$ i.e. $R \ge |V| \nu^2 = 2 |E| \nu$. Thus at least one $\text{deg}(u) + \text{deg}(v) \ge 2 \nu$. But as Chris's example shows, you can't always get $\text{deg}(u) + \text{deg}(v) \le 2 \nu$. Indeed, any case where $\deg(u) + \deg(v)$ are equal for all edges but not all vertices have the same degree will be a counterexample, because the Cauchy-Schwarz inequality will be strict.
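A tiny script makes a counterexample to the "$\le 2\nu$" direction concrete: in the star $K_{1,3}$ every edge has degree sum $4$ while $2\nu=3$. This is a sketch with the graph hard-coded as an edge list:

```python
from fractions import Fraction as F

# star K_{1,3}: center 0 joined to leaves 1, 2, 3
edges = [(0, 1), (0, 2), (0, 3)]
vertices = {0, 1, 2, 3}

deg = {v: 0 for v in vertices}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

nu = F(sum(deg.values()), len(vertices))   # average degree = 6/4 = 3/2
edge_sums = [deg[u] + deg[v] for u, v in edges]

print(nu, edge_sums)
assert max(edge_sums) >= 2 * nu    # always holds, by the argument above
assert min(edge_sums) > 2 * nu     # here no edge satisfies the <= 2*nu claim
```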
{ "language": "en", "url": "https://math.stackexchange.com/questions/130642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show that a function is increasing. Question: Let $f$ be a continuous and differentiable function on $[0, \infty[$, with $f(0) = 0$ and such that $f'$ is an increasing function on $[0, \infty[$. Show that the function $g$, defined on $[0, \infty[$ by $$g(x) = \begin{cases} \dfrac{f(x)}{x}, & x\gt0\\ f'(0), & x=0 \end{cases}$$ is an increasing function. I have tried to solve this problem but I don't know whether I have done it right or not. Solution: I have applied the mean value theorem on the interval $[0, x]$. Then, $$\frac{f(x)}{x} =f'(c) , \quad 0\lt c \lt x$$ It is given that $f'$ is an increasing function. So I deduce that $\frac{f(x)}{x}$ is also increasing. Further, $$g(x) = f'(c) \text{ such that } 0<c<x$$ Therefore, $$g(0) =f'(c) \text{ such that } 0<c<0$$ So, $c=0$. Thus $g(x) = f'(0)$ at $x=0$.
$c$ depends on $x$, and what you did doesn't prove that if $x_1\leq x_2$ then $c_{x_1}\leq c_{x_2}$. But we can write for $x>0$, since $f'$ is increasing hence integrable over finite intervals, $$\frac{f(x)}x=\frac{f(x)-f(0)}x=\frac 1x\int_0^xf'(t)dt=\int_0^1f'(xs)ds$$ by the substitution $t=xs$. This formula also works for $x=0$, and now it's easy to deduce that $g$ is increasing: if $x_1\leq x_2$, then for all $0\leq s\leq 1$ we have $sx_1\leq sx_2$, and since $f'$ is increasing...
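As an illustration (not part of the proof), take $f(x)=e^{x}-1$, which satisfies $f(0)=0$ with increasing $f'(x)=e^{x}$; the quotient $g(x)=(e^{x}-1)/x$, with $g(0)=f'(0)=1$, should then be increasing, which a quick sample confirms:

```python
import math

def g(x):
    # g(0) = f'(0) = 1 for f(x) = e^x - 1
    return math.expm1(x) / x if x > 0 else 1.0

xs = [i / 10 for i in range(0, 101)]   # sample points 0.0, 0.1, ..., 10.0
values = [g(x) for x in xs]
assert all(a < b for a, b in zip(values, values[1:]))   # strictly increasing
print("g is increasing on the sampled points")
```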
{ "language": "en", "url": "https://math.stackexchange.com/questions/130806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
can I approximate Bernoulli random variables in order to have a Poisson process? $\newcommand{\length}{\textrm{length}}$I have queues of packets of different sizes. The probability of choosing queue $i$ where I pop the first packet is $$p(i) = \frac{\length(i)}{ \sum_j \length(j)}$$ The probability that I don't choose queue $i$ is $1-p(i)$. It should thus follow a Bernoulli distribution. I calculate these probabilities at the beginning and stick to them until the end. My goal is to have for each queue an approximation of a Poisson process. Is this possible? Example: I have queues $a$,$b$,$c$, each with packets 1, 2, 3, etc. Say that I get the following output: $$a_1, b_1, b_2, c_1, a_2, b_3, a_3, c_2, b_4, a_4, b_5,\dots$$ I want to approximate the arrival times for elements from queue $a$ by some exponential random variable with a certain intensity. Same thing for queues $b$ and $c$. Is this possible?
First consider the case when none of the queues is empty. You have probability $p_i$ of picking from queue $i$. The sequence of 0s and 1s that indicates whether an element came from queue $i$ or not is therefore a Bernoulli process with probability $p_i$. The waiting time $W_i$ between arrival of packets from queue $i$ therefore follows a Geometric distribution with parameter $p_i$, i.e. $$P(W_i=w) = (1-p_i)^{w-1}p_i$$ which is the exponential-like process you are after (a geometric distribution is the discrete version of the exponential distribution - if there are $n$ packets per second and you let $n\rightarrow \infty$ and $p\rightarrow 0$ in such a way that $\lambda=np$ remains constant, then you get an exponential distribution with parameter $\lambda$). If you want to consider the case of finite queues, then I think that the simplest thing to do is to reset the model whenever one of the queues becomes empty. If queue $j$ empties, then one option is to shift all the probabilities by $$p_j \rightarrow 0, \qquad p_i \rightarrow \frac{p_i}{1-p_j} \textrm{ for }i\neq j$$ and carry on as before (you now have zero probability of selecting from queue $j$, which is equivalent to ignoring that queue if it is selected with the previous probabilities). Another option is to re-evaluate the probabilities according to the current length of each queue.
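A seeded simulation (sketch; the value $p_i=0.3$ is an arbitrary illustration) of the selection process for a single queue, comparing the empirical mean waiting time between selections with the geometric mean $1/p_i$:

```python
import random

random.seed(0)
p_i = 0.3                 # probability of selecting queue i at each step
steps = 200_000

waits = []
w = 0
for _ in range(steps):
    w += 1
    if random.random() < p_i:   # Bernoulli(p_i): queue i selected this step
        waits.append(w)
        w = 0

mean_wait = sum(waits) / len(waits)
print(mean_wait, 1 / p_i)   # empirical mean vs. geometric mean 1/p_i
```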
{ "language": "en", "url": "https://math.stackexchange.com/questions/130846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use inverse function theorem to evaluate $d/dx \arcsin(x)$ Use inverse function theorem to evaluate $d/dx \arcsin(x)$. How is this any different then finding it using implicit differentiation? Thanks!
For the moment I'll assume that by the "inverse function theorem" you mean $$ \frac{d}{dx} f^{-1}(x) = \frac{1}{f'(f^{-1}(x))}. $$ Let's see: $$ \begin{align} y & = f^{-1}(x) \\ \\ f(y) & = x \\ \\ \frac{d}{dx} f(y) & = \frac{d}{dx} x = 1\tag{implicit differentiation} \\ \\ f'(y) \frac{dy}{dx} & = 1 \\ \\ \frac{dy}{dx} & = \frac{1}{f'(y)} = \frac{1}{f'(f^{-1}(x))}. \end{align} $$ This uses implicit differentiation to prove the implicit function theorem. So one answer to "What's the difference?" is that if you're to use the theorem, you just use the conclusion without writing the derivation, whereas if you're to use implicit differentiation, then you're writing out the derivation. But probably the latter means you're to apply it to a particular concrete case rather than writing something like what appears above, where $f$ could be any function at all rather than some specified particular one.
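Concretely, with $f=\sin$ the theorem gives $\frac{d}{dx}\arcsin x = \frac{1}{\cos(\arcsin x)} = \frac{1}{\sqrt{1-x^2}}$; a numerical spot check (sketch) comparing the theorem's formula with a central finite difference:

```python
import math

def d_arcsin_via_theorem(x):
    # 1 / f'(f^{-1}(x)) with f = sin, f' = cos
    return 1 / math.cos(math.asin(x))

def d_arcsin_numeric(x, h=1e-6):
    # central finite difference approximation of d/dx arcsin(x)
    return (math.asin(x + h) - math.asin(x - h)) / (2 * h)

for x in [-0.9, -0.5, 0.0, 0.3, 0.8]:
    closed = 1 / math.sqrt(1 - x * x)
    print(x, d_arcsin_via_theorem(x), d_arcsin_numeric(x), closed)
```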
{ "language": "en", "url": "https://math.stackexchange.com/questions/130911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can there be a scalar function with a vector variable? Can I define a scalar function which has a vector-valued argument? For example, let $U$ be a potential function in 3-D, and its variable is $\vec{r}=x\hat{\mathrm{i}}+y\hat{\mathrm{j}}+z\hat{\mathrm{k}}$. Then $U$ will have the form of $U(\vec{r})$. Is there any problem?
That depends on what exactly you mean by scalar. Do you mean a function that's just a number? Then of course you can think up any function you want that turns three input numbers $x, y$ and $z$ into one output number $U(\vec{r}) = U(x,y,z)$. Examples would be $U(\vec{r}) = |\vec{r}| = \sqrt{x^2 + y^2 + z^2}$ or $U(\vec{r}) = x + y - 2z$, or just general definitions such as $U(\vec{r}) = $ temperature at location $\vec{r}$, or $\varrho(\vec{r}) = $ Charge density at location $\vec{r}$. If, however, by scalar you mean an object that's not only a number, but also rotationally invariant, then your options are a tiny bit limited. An object that's invariant under rotation cannot depend on the specific direction of $\vec{r}$ but only on its magnitude, so the first example above would be a scalar, but the second example would not. EDIT: Since this was migrated from the physics SE, let me note that there are some subtle differences in nomenclature. In physics, the term "scalar" can have more meaning than "element of $\mathbb{R}$ or $\mathbb{C}$. It can mean: "Something that is invariant under rotation". Or, in the case of a Lorentz scalar, "something that is invariant under a certain transformation".
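A small sketch (hypothetical functions) of the two flavors discussed above: both return a single number, but only $U(\vec r)=|\vec r|$ is invariant under a rotation:

```python
import math

def norm(r):            # U(r) = |r|, a rotational scalar
    x, y, z = r
    return math.sqrt(x * x + y * y + z * z)

def linear(r):          # U(r) = x + y - 2z, a number but not rotation-invariant
    x, y, z = r
    return x + y - 2 * z

def rotate_z(r, theta): # rotation about the z-axis
    x, y, z = r
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

r = (1.0, 2.0, 3.0)
rr = rotate_z(r, 0.7)
print(norm(r), norm(rr))       # equal up to rounding
print(linear(r), linear(rr))   # generally different
```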
{ "language": "en", "url": "https://math.stackexchange.com/questions/130953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proof that $\sum\limits_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$ regarding $\zeta(3)$ and Apéry's proof I recently printed a paper that asks to prove the "amazing" claim that for all $a_1,a_2,\dots$ $$\sum_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$$ and thus (probably) that $$\zeta(3)=\frac{5}{2}\sum_{n=1}^\infty {2n\choose n}^{-1}\frac{(-1)^{n-1}}{n^3}$$ Since the paper gives no information on $a_n$, should it be possible to prove that the relation holds for any "context-reasonable" $a_1$? For example, letting $a_n=1$ gives $$\sum_{k=1}^\infty\frac{1}{(x+1)^k}=\frac{1}{x}$$ which is true. The article is "A Proof that Euler Missed..." An Informal Report - Alfred van der Poorten.
For every $n\geqslant1$ and every $(x,a_1,\ldots,a_n)$ such that $x\ne -a_k$ for every $k$, $$ \color{red}{\sum_{k=1}^n\frac{a_1a_2\cdots a_{k-1}x}{(x+a_1)\cdots(x+a_k)}=1-\frac{a_1a_2\cdots a_{n}}{(x+a_1)\cdots(x+a_n)}} $$ Hence the formula in the post holds if and only if $$ \prod_{k=1}^{\infty}\frac{a_k}{x+a_k}=0, $$ which, for $x\gt0$ and at least if the sequence $(a_k)$ is nonnegative, is equivalent to the fact that $$ \color{green}{\sum_{k}\frac1{a_k}}\ \text{diverges}. $$ Here is a probabilistic proof of the finitary version, valid for every nonnegative $a_k$ and positive $x$ (note that once one knows these two rational expressions in $(x,a_1,\ldots,a_n)$ coincide for these values, one knows they are in fact identical). Proof: Assume that one performs a sequence of $n$ independent experiments and that the $k$th experiment succeeds with probability $$p_k=\frac{x}{x+a_k}.$$ Then the $k$th term of the sum on the LHS of the equation above is the probability that every experiment from $1$ to $k-1$ failed and that experiment $k$ succeeded. Hence their sum is the probability of the disjoint union of these events, which is exactly the event that at least one experiment from $1$ to $n$ succeeded. The complementary event corresponds to $n$ failures, hence its probability is the product from $1$ to $n$ of the probabilities of failures $1-p_k$, that is, $$\prod_{k=1}^n(1-p_k)=\prod_{k=1}^n\frac{a_k}{x+a_k}.$$ This proves the claim.
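The finitary identity in red is easy to verify with exact rational arithmetic for arbitrarily chosen values of $x$ and the $a_k$ (a sketch; the particular numbers below have no significance):

```python
from fractions import Fraction as F

def lhs(x, a):
    # sum_{k=1}^{n} a_1 ... a_{k-1} x / ((x + a_1) ... (x + a_k))
    total = F(0)
    num = F(1)   # running product a_1 ... a_{k-1}
    den = F(1)   # running product (x + a_1) ... (x + a_k)
    for ak in a:
        den *= x + ak
        total += num * x / den
        num *= ak
    return total

def rhs(x, a):
    # 1 - (a_1 ... a_n) / ((x + a_1) ... (x + a_n))
    prod = F(1)
    for ak in a:
        prod *= ak / (x + ak)
    return 1 - prod

x = F(7, 3)
a = [F(1), F(5, 2), F(4), F(9, 7), F(2)]
print(lhs(x, a), rhs(x, a))
```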
{ "language": "en", "url": "https://math.stackexchange.com/questions/131004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 1 }
How to denote a function mapping of two parameters to an output? I am wondering how to denote something like this: I have a function mapping two sets to a set of vectors and am currently denoting it as so: $p: [F, \Sigma] \to S $ I am wondering if this is correct, and if not, what is? I am a computer science student and did a year of mathematics at degree level, but that was a long time ago and I have forgotten a lot of the basics. If I am wrong I would appreciate a semi in depth answer as to why.
A function mapping two arguments from $A$ and $B$ respectively to $C$ is denoted $$f:A\times B\to C,$$ where $\times$ here denotes the Cartesian product.
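If it helps a computer-science intuition: $f:A\times B\to C$ is the mathematical counterpart of an ordinary two-parameter function, and the Cartesian product corresponds to passing the pair as a single tuple (a hypothetical Python sketch; the particular types and formula are arbitrary):

```python
from typing import Tuple

def f(a: int, b: str) -> float:
    """One function of two parameters: f : int x str -> float."""
    return a / (len(b) + 1)

# the same map viewed as a function of one argument from the product A x B
def f_uncurried(pair: Tuple[int, str]) -> float:
    a, b = pair
    return f(a, b)

print(f(3, "abc"), f_uncurried((3, "abc")))   # both 0.75
```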
{ "language": "en", "url": "https://math.stackexchange.com/questions/131052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Orientation on $\mathbb{CP}^2$ I am confused by the orientation of a topological manifold. My understanding is: An orientation of a topological manifold is a choice of generator of $H^n(M,\mathbb Z)$. So given a manifold, we could have two orientations defined on the manifold. For $\mathbb{CP}^2$ one orientation is determined by the complex structure; the other orientation is denoted by $\overline{\mathbb{CP}}^2$. And it is well known that there is no orientation-reversing map from $\mathbb{CP}^2$ to itself. My question is: are they homeomorphic? I guess they are not homeomorphic. But I am confused: aren't both orientations defined on the same manifold? How come after reversing the orientation they become not homeomorphic?
A topological manifold is a topological space with extra conditions. In particular, for $\mathbb{CP}^2$, whatever the orientation you choose, the underlying topological space is the same. Therefore, the identity map is a homeomorphism. However, they are not equivalent as oriented topological manifold, since there exists no orientation-reversing map from $\mathbb{CP}^2$ to itself. Hence, I think the confusion arises from a confusion with terminology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }