Why is two to the power of zero equal to binary one? Probably a simple question, and possibly not asked very well. What I want to know is: in binary, a decimal value of 1 is also 1. It can be expressed as $x = 1 \times 2^0$. Question: Why is two to the power of zero equal to one? I get that two to the power of one is equal to two, or binary 10, but why is two to the power of zero equal to one? Is this a math convention? Is there a link I could read?
Because we want $2^{m+n} = 2^m \cdot 2^n$, and if $n = 0$ this requires that $2^0 = 1$. More combinatorially, $a^b$ is the number of functions from a set with $b$ elements to a set with $a$ elements, and there is exactly one function from the empty set to any other set (the empty function). This is the same reason that $0! = 1$.
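The combinatorial count is easy to check by machine; here is a small Python sketch (the helper name is mine) that enumerates functions directly, including the empty-domain case:

```python
from itertools import product

def count_functions(b, a):
    """Number of functions from a b-element set to an a-element set.

    Each function is a choice of an image for every input, i.e. an
    element of the b-fold product; with repeat=0, product() yields
    exactly one item: the empty function.
    """
    return sum(1 for _ in product(range(a), repeat=b))

assert count_functions(3, 2) == 2 ** 3       # eight functions from a 3-set to a 2-set
assert count_functions(0, 2) == 2 ** 0 == 1  # exactly one empty function
```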
{ "language": "en", "url": "https://math.stackexchange.com/questions/6832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
What is a $G$-Galois Branched Cover What is, in the language of Schemes, a $G$-galois branched cover?
I don't think this phrase has a single precise translation into the general language of schemes, but it will mean something like the following: a finite surjective morphism $X \to Y$ of integral schemes such that the corresponding extension of function fields $K(Y) \subset K(X)$ is Galois with Galois group $G$. It would also be reasonable to require that $X$ and $Y$ be normal. To understand what it means geometrically, one should imagine that $X$ and $Y$ are projective varieties over some field. What this will mean then is that $X \to Y$ is a surjection with finite fibres, that $G$ acts as a group of automorphisms of $X$ over $Y$, and that if we remove the branch locus (i.e. the closed subset of $Y$ along which the fibres contain points with multiplicity $> 1$) then each fibre is acted on faithfully and transitively by $G$ (so, away from the ramification locus, there are $|G|$ sheets of $X$ over $Y$, which are permuted by $G$). A concrete example is given by (the projectivization of) the map $(x,y) \mapsto x$ from the elliptic curve $E$ defined by $y^2 = x^3 - x$ to $\mathbb P^1$. The group $G$ is cyclic of order two, acting by $(x,y) \mapsto (x,-y)$, and the branch points are precisely the points where $y = 0$ (four of them; the three finite points $(0,0), (\pm 1,0)$, and also one at infinity). In this context, an important result is the Zariski--Nagata purity theorem, which says (under mild hypotheses, e.g. that $Y$ and $X$ are smooth, or more generally, that $Y$ is smooth and $X$ is normal) that the set of branch points has pure codimension one, i.e. is a divisor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/6923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Principal and Annuities Suppose you want to accumulate $12\,000$ in a $5 \%$ account by making a level deposit at the beginning of each of the next $9$ years. Find the required level payment. So this seems to be an annuity-due problem. I know the following: $ \displaystyle \sum_{k=1}^{n} \frac{A}{(1+i)^{k}} = \frac{A}{1+i} \left[\frac{1- \left(\frac{1}{1+i} \right)^{n}}{1- \left(\frac{1}{1+i} \right)} \right] = P$. So in this problem, are we trying to solve for $P$? Just plug in the numbers? Or do we need to calculate the discount rate $d = i/(i+1)$, since the annuity is being paid at the beginning of the year?
I get $F = A \frac{(1+i)^{2n}-(1+i)^n}{i(1+i)^n}$. But when I solve for $A$, I get the level payment as being $1088.28$. But the correct answer is $1036.46$. Why is there a discrepancy?
{ "language": "en", "url": "https://math.stackexchange.com/questions/6971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finite Index of Subgroup of Subgroup Prove the following: If $H$ is a subgroup of finite index in a group $G$, and $K$ is a subgroup of $G$ containing $H$, then $K$ is of finite index in $G$ and $[G:H] = [G:K][K:H]$. So this is basically a bijective proof? The number of cosets of $H$ in $G$ equals the number of cosets of $K$ in $G$ times the number of cosets of $H$ in $K$ by the multiplication principle? Reference: Fraleigh p. 103 Question 10.35 in A First Course in Abstract Algebra
The canonical map $G/H \to G/K$ is surjective. The fiber of $gK$ is $\{gkH : k \in K\}$, which can be identified with $K/H$.
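A concrete sanity check of the index formula, using the cyclic group $\mathbb{Z}/12$ with $H = \{0,6\} \subset K = \{0,3,6,9\}$ (my choice of example, not from the question):

```python
def cosets(G, S, n):
    """Distinct cosets g + S in Z/n (written additively)."""
    return {frozenset((g + s) % n for s in S) for g in G}

G = range(12)          # Z/12
K = [0, 3, 6, 9]       # a subgroup of order 4
H = [0, 6]             # a subgroup of K, of order 2

G_H = len(cosets(G, H, 12))   # [G:H] = 6
G_K = len(cosets(G, K, 12))   # [G:K] = 3
K_H = len(cosets(K, H, 12))   # [K:H] = 2

# [G:H] = [G:K] * [K:H]
assert G_H == G_K * K_H == 6
```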
{ "language": "en", "url": "https://math.stackexchange.com/questions/7002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Prove that the sequence $c_1 = 1$, $c_{n+1} = 4/(1 + 5c_n)$, $n \geq 1$ is convergent and find its limit Prove that the sequence $c_{1} = 1$, $c_{n+1}= 4/(1 + 5c_{n})$, $n \geq 1$ is convergent and find its limit. OK, so up to now I've worked out a couple of things. $c_1 = 1$, $c_2 = 2/3$, $c_3 = 12/13$, $c_4 = 52/73$. So the odd $c_n$ are decreasing and the even $c_n$ are increasing. Intuitively, it's clear that the two sequences for odd and even $c_n$ are decreasing/increasing less and less. Therefore it seems like the sequence may converge to some limit $L$. If the sequence has a limit, let $L=\underset{n\rightarrow \infty }{\lim }c_{n}.$ Then $L = 4/(1+5L)$, which yields $L = 4/5$ and $L = -1$. But since the even sequence is increasing and positive, $L$ must be $4/5$. OK, here I am stuck. I'm not sure how to go ahead and show that the sequence converges to this limit (I tried using the definition of the limit but didn't manage), and I'm not sure, for the separate odd and even sequences, how I would go about showing their limits. A few notes: I am in 2nd year calculus. This is a bonus question, but I enjoy the challenge and would love the extra marks. Note: once again I apologize, I don't know how to use the HTML code to make it nice.
So $c_{1} = 1$ and $c_{n+1} = 4/(1+5c_{n})$ for $n \geq 1$. Let $C(x) = \sum_{n \geq 1} c_{n}x^n$. Then maybe try to express $\sum_{n \geq 1} c_{n+1}x^n$ and $\sum_{n \geq 1} \frac{4x^n}{1+5c_{n}}$ in terms of $C(x)$ to get a closed form.
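Whatever proof route one takes, a quick numerical check confirms the claimed limit $4/5$: the map $c \mapsto 4/(1+5c)$ has slope $-0.8$ at the fixed point, so the iterates alternate around it and converge geometrically.

```python
c = 1.0
for _ in range(200):
    c = 4 / (1 + 5 * c)
# The fixed point solves L = 4/(1 + 5L), i.e. 5L^2 + L - 4 = 0, whose
# positive root is L = 4/5; |slope| = 0.8 < 1 there, so iteration converges.
assert abs(c - 0.8) < 1e-12
```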
{ "language": "en", "url": "https://math.stackexchange.com/questions/7034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
Conditions that torsion is zero in a space curve What are the conditions for torsion to be zero other than having a plane curve? The only thing I can thing of is an equation that have the torsion that cancels out each other.
Here's an example of an infinitely differentiable regular curve with identically zero torsion but not contained in any plane. (By regular, I mean that the velocity never vanishes.) $$ \alpha(t) = \left\{ \begin{aligned} &(t,e^{-1/t},0), & t>0,\\ &(0,0,0), &t=0,\\ &(t,0,e^{1/t}), &t<0. \end{aligned} \right. $$ EDIT: As Mariano Suárez-Alvarez points out in his comment, the torsion is only defined at points where the curvature is nonzero. Since the curvature of this curve is zero at the origin, the torsion is not defined there. Thus the argument given by yasmar shows that if the curve is regular, its curvature is nowhere zero, and its torsion is everywhere zero, then it's a plane curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/7108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Median of distinct numbers What is the least number of comparisons we need to find the median of 6 distinct numbers? I am able to find the answer for the median of 5 distinct numbers to be 6 comparisons, and it makes sense; however, in the case of 6 numbers I can't find an answer. The best I was able to do by hand was 9 comparisons. Can that be minimized further? Edit: By median, in this case, we mean the lower median.
It looks like the answer is 8. My Knuth Volume Three (from the 1970s, not as dusty as you think) reports an upper bound of 8, which, paired with Moron's lower bound of 8, settles the answer at 8. The general form of this question is called the Selection Problem. If you google that phrase you will get lots of useful results. Edit: Knuth doesn't give an explicit algorithm for finding the median of 6 elements in at most 8 steps (at least in the first edition). However, in exercise 12 of section 5.3.3, he does give the explicit method for finding the median of 7 elements using at most 10 comparisons, which may be of some help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/7151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
How can one intuitively think about quaternions? Quaternions came up while I was interning not too long ago and it seemed like no one really knew how they worked. While eventually certain people were tracked down and were able to help with the issue, it piqued my interest in quaternions. After reading many articles and a couple of books on them, I began to know the formulas associated with them, but still have no clue how they work (why they allow rotations in 3D space, to be specific). I backtracked a little bit and looked at normal complex numbers with just one imaginary component and asked myself if I even understood how they allow rotations in 2D space. After a couple of awesome moments of understanding, I understood it for imaginary numbers, but I'm still having trouble extending the thoughts to quaternions. How can someone intuitively think about quaternions and how they allow for rotations in 3D space?
Thinking about quaternions as 4D is misleading. Quaternions are the union of a scalar and a 3-vector. Think: time and space. Space is a 3-vector. You can point in directions in space. Time is a scalar. There is a past (negative time) and a future (positive time) and now (0), but no ability to point in the direction of time. Think of a blinking light on a train, each event having a time and location. These events can be written as quaternions. If the train travels at a constant velocity, you are seeing the addition of quaternions. One can make movies out of quaternions. Examples are available on my web site, http://visualphysics.org The rotations in 3D space calculations ignore time.
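Whatever mental picture one adopts, the mechanics are easy to verify numerically: a unit quaternion $q$ rotates a vector $v$ (treated as a pure quaternion) via $q v q^{-1}$, where for a unit $q$ the inverse is the conjugate. A minimal sketch:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate the 3-vector v about a unit-length axis via q v q*."""
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, s * axis[0], s * axis[1], s * axis[2])
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0).
rx, ry, rz = rotate((1, 0, 0), (0, 0, 1), math.pi / 2)
assert abs(rx) < 1e-12 and abs(ry - 1) < 1e-12 and abs(rz) < 1e-12
```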
{ "language": "en", "url": "https://math.stackexchange.com/questions/7187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 7, "answer_id": 6 }
Applications of Fractional Calculus I've seen recently for the first time in Special Functions (by G. Andrews, R. Askey and R. Roy) the definitions of fractional integral $$(I_{\alpha }f)(x)=\frac{1}{\Gamma (\alpha )}\int_{a}^{x}(x-t)^{\alpha -1}f(t)dt\qquad \text{Re}\alpha >0$$ and fractional derivative $$\frac{d^{\nu }w^{\mu }}{dw^{\nu }}=\frac{\Gamma (\mu +1)}{\Gamma (\mu -\nu +1)}w^{\mu -\nu },$$ in The Hypergeometric Functions Chapter. I would like to know some applications for Fractional Calculus and/or which results can only be obtained by it, if any.
Miller and Ross looks very nice indeed. As far as applications are concerned: Applications of Fractional Differential Equations. For some strange reason, the original link is not available. Instead, you can look at it by using Google Docs viewer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/7296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 7, "answer_id": 4 }
Homology of the Klein Bottle I know that in general, $H_{n}(X)$ counts the number of $n$-cycles that are not $n$-boundaries of a simplicial complex $X$. So for the sphere, $H_{0}(X) \cong \mathbb{Z}$ since it is connected. Also $H_{n}(X) = 0$ for all $n>0$ (e.g. all $1$-cycles are $1$-boundaries, etc..). How do you use this geometric interpretation to deduce that $H_{1}(X) = \mathbb{Z} \times \mathbb{Z}_2$ where $X$ is the Klein bottle? This doesn't seem to correspond to the Betti number.
Think of the bottle as two Möbius strips glued along their edges. A closed loop that circles one of the bands (along its center, say) is not a boundary, but if you follow it twice it becomes one: you can think of it as a loop that follows precisely the edge of one of the original bands, so it's just the boundary of that band. This gives you the "torsion" piece of the homology, and it shows you that homology is a slightly more refined invariant than just counting the number of closed "empty" loops. EDIT: it is perhaps worth pointing out that the group-theoretic term "torsion", which refers to elements of finite order, is derived precisely from this geometric picture: the torsion elements result from the "twist" in the band.
{ "language": "en", "url": "https://math.stackexchange.com/questions/7334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Meaning of convolution? I am currently learning about the concept of convolution between two functions in my university course. The course notes are vague about what convolution is, so I was wondering if anyone could give me a good explanation. I can't seem to grasp anything other than the fact that it is just a particular integral of two functions. What is the physical meaning of convolution, and why is it useful? Thanks a lot.
Have a look here: http://answers.yahoo.com/question/index?qid=20070125163821AA5hyRX ...and lots of good answers here: https://mathoverflow.net/questions/5892/what-is-convolution-intuitively
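One concrete way to build intuition is the discrete case: convolution slides one sequence past the other and accumulates products, which is exactly "smearing" one signal by the shape of the other. A toy sketch:

```python
def convolve(f, g):
    """Discrete convolution: (f*g)[n] = sum over k of f[k] * g[n-k]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# A unit spike convolved with a moving-average kernel reproduces the
# kernel, shifted to the spike's position.
spike = [0, 0, 1, 0, 0]
kernel = [1 / 3, 1 / 3, 1 / 3]
smeared = convolve(spike, kernel)
assert smeared[2:5] == kernel
```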
{ "language": "en", "url": "https://math.stackexchange.com/questions/7413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 0 }
Find all points with a distance less than d to a (potentially not convex) polygon I have a polygon P, that may or may not be convex. Is there an algorithm that will enable me to find the collection of points A that are at a distance less than d from P? Is A in turn always a polygon? Does the solution change materially if we try to solve the problem on the surface of a sphere instead of on a Euclidean plane?
I heard Ravi Vakil give a series of plenary talks at an MAA conference on this question. The talks were entitled "The Mathematics of Doodling." He generalized the question, though, by asking what happens in the limit when you start with some set in the plane and then iterate the process of finding the set of points at a distance $r$ from the current set. Some interesting mathematics came out as he showed that the resulting sequence of points gets more and more circular. There's a good picture of him doing this here on the MAA website. He also has a series of related problems that he developed for the Stanford Math Circle available here. Also according to his website, he has an article, "The Mathematics of Doodling," that will be appearing in the February 2011 issue of the American Mathematical Monthly. (Update: The article has just appeared in print. The rest of the reference is Vol. 118(2), pp. 116-129.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/7459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Help me understand linear separability in a binary SVM I have a question pertaining to linear separability with hyperplanes in a support vector machine. According to Wikipedia: ...formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The linear separation of classes by hyperplanes intuitively makes sense to me. And I think I understand linear separability for two-dimensional geometry. However, I'm implementing an SVM using a popular SVM library (libSVM), and when messing around with the numbers, I fail to understand how an SVM can create a curve between classes, or enclose central points in category 1 within a circular curve when surrounded by points in category 2, if a hyperplane in an n-dimensional space V is a "flat" subset of dimension n − 1 (for two-dimensional space, a 1D line). Here is what I mean: That's not a hyperplane. That's circular. How does this work? Or are there more dimensions inside the SVM than the two input features?
Cross-posted my question at StackOverflow. More responses there.
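Since the responses ended up elsewhere, here is a sketch (with made-up data) of the usual resolution of the puzzle: kernels implicitly map the inputs into a higher-dimensional space, and the "circular" 2D boundary is a flat hyperplane there. Adding the single feature $z = x^2 + y^2$ already does the trick for concentric classes:

```python
import math
import random

random.seed(0)
# Class 1: points near the origin; class 2: a surrounding ring.
inner = [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
         for _ in range(50)]
outer = []
while len(outer) < 50:
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    if 2 <= math.hypot(x, y) <= 3:
        outer.append((x, y))

# Lift each point to 3D with z = x^2 + y^2.  In the lifted space the flat
# plane z = 2 separates the classes, even though no 2D line can.
lift = lambda pt: pt[0] ** 2 + pt[1] ** 2
assert all(lift(pt) < 2 for pt in inner)
assert all(lift(pt) > 2 for pt in outer)
```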
{ "language": "en", "url": "https://math.stackexchange.com/questions/7499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finding all complex zeros of a high-degree polynomial Given a large univariate polynomial, say of degree 200 or more, is there a procedural way of finding all the complex roots? By "roots", I mean complex decimal approximations to the roots, though the multiplicity of the root is important. I have access to MAPLE and the closest function I've seen is: with(RootFinding): Analytic(Z,x,-(2+2*I)..2+2*I); but this chokes if Z is of high degree (in fact it fails to complete even if deg(Z)>15).
I think one of the biggest problems is approximating multiple roots. The approach described in L.Brugnano, D.Trigiante. "Polynomial Roots: the Ultimate Answer?", Linear Algebra and its Applications 225 (1995) 207-219 relies on the approximation of eigenvalues of a tridiagonal matrix, obtained via the application of Euclid's GCD algorithm to the original polynomial, and seems to work pretty well. I couldn't find the pdf for the article though, sorry.
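Maple aside, one classical procedure that is easy to sketch is the Durand-Kerner (Weierstrass) simultaneous iteration; production root-finders add scaling, deflation and stopping tests that this toy version omits, so treat it as a sketch only:

```python
def durand_kerner(coeffs, iters=100):
    """All complex roots of a monic polynomial, coefficients highest-first.

    A toy Durand-Kerner (Weierstrass) iteration: update every root
    estimate simultaneously by a Newton-like step whose denominator is
    the product of differences with the other estimates.
    """
    n = len(coeffs) - 1
    def p(x):
        return sum(c * x ** (n - i) for i, c in enumerate(coeffs))
    # Standard starting points: powers of a non-real point off the unit circle.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1
            for j, s in enumerate(roots):
                if j != i:
                    denom *= r - s
            new.append(r - p(r) / denom)
        roots = new
    return roots

# x^3 - 1 = 0: the three cube roots of unity.
roots = durand_kerner([1, 0, 0, -1])
assert all(abs(r ** 3 - 1) < 1e-9 for r in roots)
```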
{ "language": "en", "url": "https://math.stackexchange.com/questions/7539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Problems on combinatorics The following questions come from a recent combinatorics paper I attempted:

1. 27 people are to travel by a bus which can carry 12 inside and 15 outside. In how many ways can the party be distributed between inside and outside if 5 people refuse to go outside and 6 will not go inside? The solution given is C(16,7); I have no clue how they got it!

2. The number of functions $f$ from the set A = {0, 1, 2} into the set B = {1, 2, 3, 4, 5, 6, 7} such that $f(i) \le f(j)$ for $i \lt j$, where $i,j$ belong to A. The solution given is C(8,3). I didn't really understand this one.

3. The number of ordered pairs $(m, n)$ with $m, n \in \{1, 2, \ldots, 100\}$ such that $7^m + 7^n$ is divisible by 5. The solution given is 2500, but how?

4. The coefficient of $x^{20}$ in the expansion of $(1 + 3x + 3x^2 + x^3)^{20}$. How to solve this one elegantly?

5. An eight-digit number divisible by 9 is to be formed by using 8 digits out of the digits 0, 1, ..., 9 without replacement. In how many ways can this be done? This one seems impossible for me to solve in a minute, or is it? The given solution is $36 \cdot 7!$.
For number 4 you would use the multinomial theorem.
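As a brute-force cross-check of whatever expansion theorem you apply, you can multiply the polynomial out with exact integer arithmetic; note also that $1 + 3x + 3x^2 + x^3 = (1+x)^3$, so the whole expression is $(1+x)^{60}$ and the coefficient is simply $\binom{60}{20}$:

```python
from math import comb

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (x^0 first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Expand (1 + 3x + 3x^2 + x^3)^20 exactly.
poly = [1]
for _ in range(20):
    poly = poly_mul(poly, [1, 3, 3, 1])

# Since 1 + 3x + 3x^2 + x^3 = (1 + x)^3, the product is (1 + x)^60,
# so the x^20 coefficient is C(60, 20).
assert poly[20] == comb(60, 20)
```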
{ "language": "en", "url": "https://math.stackexchange.com/questions/7644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
How to prove $\cos \frac{2\pi }{5}=\frac{-1+\sqrt{5}}{4}$? I would like to find the apothem of a regular pentagon. It follows from $$\cos \dfrac{2\pi }{5}=\dfrac{-1+\sqrt{5}}{4}.$$ But how can this be proved (geometrically or trigonometrically)?
Note that $$2\cdot \dfrac{2\pi}{5} + 3\cdot \dfrac{2\pi}{5} = 2\pi,$$ therefore $$\cos\left(2\cdot \dfrac{2\pi}{5}\right) = \cos\left(3\cdot \dfrac{2\pi}{5}\right).$$ Put $x = \dfrac{2\pi}{5}$ and $t = \cos x$. Using the formulas \begin{equation*} \cos 2x = 2\cos^2 x - 1, \quad \cos 3x = 4\cos^3 x - 3\cos x, \end{equation*} we get \begin{equation*} 4t^3 - 2t^2 -3t + 1 = 0 \Leftrightarrow (t - 1)(4t^2 + 2t - 1) = 0. \end{equation*} Because $\cos \dfrac{2\pi}{5} \neq 1$, we get \begin{equation*} 4t^2 + 2t - 1 = 0. \end{equation*} Solving this quadratic for $t$ gives us $\cos \dfrac{2\pi}{5} = \dfrac{-1 \pm \sqrt{5}}{4}$. Because $\cos \dfrac{2\pi}{5} > 0$, we take the positive sign, giving us $\cos \dfrac{2\pi}{5} = \dfrac{-1 + \sqrt{5}}{4}$.
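A quick numerical check of both the cubic and the closed form:

```python
import math

t = math.cos(2 * math.pi / 5)

# t satisfies the cubic obtained from cos(2x) = cos(3x) at x = 2*pi/5 ...
assert abs(4 * t ** 3 - 2 * t ** 2 - 3 * t + 1) < 1e-12
# ... and matches the closed form (-1 + sqrt(5)) / 4.
assert abs(t - (math.sqrt(5) - 1) / 4) < 1e-12
```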
{ "language": "en", "url": "https://math.stackexchange.com/questions/7695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 11, "answer_id": 6 }
finding the minima and maxima of some tough functions OK, so I did all the revision problems and noted the ones I couldn't do today, and I'm posting them together; hope that's not a problem with the powers that be. I have exhibit A: $e^{-x} - x + 2$. So I differentiate to find where the derivative hits $0$: $-e^{-x} - 1 = 0$. Now HOW do I figure out when this hits zero!? $-1 = e^{-x}$, $\ln(-1) = \ln(e^{-x})$ ??? More to come ... as one day rests between me and my final exam/attempt at math!
I don't understand why the book wants you to find the max and min. You have correctly deduced that the derivative is never zero, which says there isn't a max or min. Looking for the root of this function is not hard: if you graph it over a reasonable range, say $-5$ to $5$, you will find it close enough to get $N$ correctly.
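If a numerical value of the root is wanted, bisection is enough; a sketch, using the bracket $[2,3]$ since $f(2) = e^{-2} > 0$ and $f(3) = e^{-3} - 1 < 0$:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Bisection on [lo, hi]; assumes f changes sign on the interval."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: math.exp(-x) - x + 2
root = bisect(f, 2, 3)   # f(2) = e^-2 > 0 and f(3) = e^-3 - 1 < 0
assert abs(f(root)) < 1e-9
```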
{ "language": "en", "url": "https://math.stackexchange.com/questions/7827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Degree of polynomial: $[x + (x^3 - 1)^{1/2}]^5 + [x - (x^3 - 1)^{1/2}]^5$ What is the degree of the polynomial of the following expression? $$ [x + (x^3 - 1)^{1/2}]^5 + [x - (x^3 - 1)^{1/2}]^5 $$ If I am not very wrong, the highest power of $x$ is $\frac{15}{2}$?! So the degree is $\lfloor 15/2 \rfloor = 7$?! The answer is 7 but I am not sure of this approach. Please comment.
Usually, a polynomial is understood to have only non-negative powers of $x$ (i.e. $x^0=1,x^1=x,x^2$ etc.), so a-priori it's not even clear that this is a polynomial. However, if we open up the binomial theorem, we see that the odd powers cancel: \begin{equation*} (x+y)^n + (x-y)^n = \sum_{k=0}^n \binom{n}{k} x^{n-k} [y^k + (-y)^{k}]. \end{equation*} When $k$ is odd, $y^k + (-y)^k$ cancels. In your case, the term corresponding to $k = 2l$ has degree $n-2l+3l = n+l$, and so the maximum is achieved for $l=2$ and $n+l = 7$.
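One can confirm the degree with exact integer arithmetic by substituting $y^2 = x^3 - 1$ into the even-$k$ terms of the expansion (a sketch, with my own helper names):

```python
from math import comb

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (x^0 first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x + y)^5 + (x - y)^5 = 2 * sum over even k of C(5,k) x^(5-k) y^k,
# and y^2 = x^3 - 1 is itself a polynomial in x, so each term is too.
y2 = [-1, 0, 0, 1]                            # coefficients of x^3 - 1
result = [0] * 8
for k in (0, 2, 4):
    term = [0] * (5 - k) + [2 * comb(5, k)]   # 2*C(5,k) * x^(5-k)
    for _ in range(k // 2):
        term = poly_mul(term, y2)             # multiply in (x^3 - 1)^(k/2)
    for i, coef in enumerate(term):
        result[i] += coef

degree = max(i for i, coef in enumerate(result) if coef != 0)
assert degree == 7                            # leading term is 10 x^7
```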
{ "language": "en", "url": "https://math.stackexchange.com/questions/7868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $\psi$ be a wavelet. Can its Fourier transform $\hat{\psi}$ also be a wavelet? Let $\psi$ be a wavelet. Can its Fourier transform $\hat{\psi}$ also be a wavelet? Produce an example or prove that it is not possible. A wavelet is a function $\psi:\mathbb R\to\mathbb R$ such that (i) $\psi \in L^1(\mathbb R) \cap L^2(\mathbb R)$, (ii) $\int_{-\infty}^{\infty} \psi(t)\, dt = 0$. The Fourier transform is $\hat f(\omega) =\int_{-\infty}^{\infty} f(t)e^{-i\omega t}\, dt$.
$f(x)=\sin(x)\cdot\exp(-x^2)$ should do, because:

* The decay of $f$ ensures $f,\hat{f}\in L^1\cap L^2$.
* $f$ is odd: $f(-x)=-f(x)$.
* $\hat{f}(-\xi)=\int_{-\infty}^\infty e^{-ix(-\xi)}f(x)dx=\int_{-\infty}^\infty e^{-i(-x)\xi}f(x)dx=\int_{\infty}^{-\infty} e^{-it\xi}f(-t)(-dt)=-\hat{f}(\xi)$, where in the last step we used that $f$ is odd.

EDIT: If $g$ is integrable and odd then $$\int_{-\infty}^0g(x)dx =\int_{-\infty}^0-g(-x)dx=\int_{+\infty}^0g(t)dt=-\int_0^{+\infty}g(t)dt,$$ hence $$\int_{-\infty}^\infty g(t)dt = 0.$$ This implies that $$\int_{-\infty}^\infty f\, dx= \int_{-\infty}^\infty\hat{f}\,d\xi=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/7893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$n!+1$ being a perfect square One observes that \begin{equation*} 4!+1 =25=5^{2},~5!+1=121=11^{2} \end{equation*} is a perfect square. Similarly for $n=7$ also we see that $n!+1$ is a perfect square. So one can ask the truth of this question: * *Is $n!+1$ a perfect square for infinitely many $n$? If yes, then how to prove.
The sequence of factorials $n!+1$ which are also perfect squares is here in Sloane. It contains three terms, and notes that there are no more terms below $(10^9)!+1$, but as far as I know there's no proof.
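A brute-force scan of small $n$ (this is Brocard's problem) reproduces the three known terms:

```python
import math

# Check n! + 1 for being a perfect square, for n up to 100.
hits = [n for n in range(1, 101)
        if math.isqrt(math.factorial(n) + 1) ** 2 == math.factorial(n) + 1]
assert hits == [4, 5, 7]   # 25 = 5^2, 121 = 11^2, 5041 = 71^2
```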
{ "language": "en", "url": "https://math.stackexchange.com/questions/7938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 3, "answer_id": 0 }
Can someone please explain the Riemann Hypothesis to me... in English? I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?
Let $H_n$ be the $n$th harmonic number, i.e. $ H_n = 1 + \frac12 + \frac13 + \dots + \frac1n.$ Then the Riemann hypothesis is true if and only if $$ \sum_{d | n}{d} \le H_n + \exp(H_n)\log(H_n)$$ holds for every $n \geq 1$ (this is Lagarias's criterion).
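The inequality is easy to test for small $n$ (of course this proves nothing about RH; it is known to hold far beyond any range a loop can reach). A sketch with a naive divisor sum:

```python
import math

def sigma(n):
    """Sum of the divisors of n (naive)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

H = 0.0
for n in range(1, 1001):
    H += 1 / n
    # Lagarias's inequality; equality holds at n = 1.
    assert sigma(n) <= H + math.exp(H) * math.log(H) + 1e-9
```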
{ "language": "en", "url": "https://math.stackexchange.com/questions/7981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67", "answer_count": 7, "answer_id": 2 }
Windows lightweight Math Software I'm looking for lightweight, free, Windows math software. Something I can put an expression into and get an answer, or graph it. I tried Euler, but it is quite complicated and HUGE. Basic needs:

* Expression based
* Supports variables
* Supports functions, user defined and auto loaded
* Supports 2D graphs (not really needing 3D)
* Supports history

What do you use? What do you recommend?
You can try PARI/GP. I am more of a fan of Sage, but currently it doesn't natively support your OS; you can try it in a virtual machine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 11, "answer_id": 6 }
A problem on progression If $a,b,c$ are in arithmetic progression, $p,q,r$ are in harmonic progression, and $ap,bq,cr$ are in geometric progression, then $\frac{p}{r}+\frac{r}{p} = $ ? EDIT: I have tried to use the basic/standard properties of the respective progressions to get the desired result, but I am not yet successful.
If $a,b,c$ are in arithmetic progression, then you can write them as $a$, $a+d$, $a+2d$. Maybe all you need is $2b=a+c$. Similarly for the geometric progression: you can write its terms as $ap$, $apz$, $apz^2$, where $z$ is the common ratio (often we use $r$, but that letter was taken). Again, maybe $(bq)^2=ap\cdot cr$ is all you need.
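Those two relations, plus $2/q = 1/p + 1/r$ for the harmonic progression, pin everything down. Here is one consistent numerical instance (my own choice) that satisfies all three conditions and illustrates the answer $p/r + r/p = a/c + c/a$:

```python
a, b, c = 1, 2, 3        # arithmetic progression: 2b = a + c
p, q, r = 1, 1.5, 3      # harmonic: 1/p, 1/q, 1/r = 1, 2/3, 1/3 is arithmetic

assert 2 * b == a + c
assert abs(2 / q - (1 / p + 1 / r)) < 1e-12
assert abs((b * q) ** 2 - (a * p) * (c * r)) < 1e-12   # ap, bq, cr geometric

# and indeed p/r + r/p agrees with a/c + c/a (both are 10/3 here).
assert abs((p / r + r / p) - (a / c + c / a)) < 1e-12
```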
{ "language": "en", "url": "https://math.stackexchange.com/questions/8062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A short way to say f(f(f(f(x)))) Is there a short way to say $f(f(f(f(x))))$? I know you can use recursion: $g(x,y)=\begin{cases} f(g(x,y-1)) & \text{if } y > 0, \\ x & \text{if } y = 0. \end{cases}$
You should define it this way: $$ \begin{eqnarray} \text{iterate}_0(f) &:=& id \\ \text{iterate}_{n + 1}(f) &:=& \text{iterate}_{n}(f) \circ f \end{eqnarray} $$ Then write $\text{iterate}_4(f)(x)$.
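The same definition is a few lines of code; a Python sketch:

```python
def iterate(f, n):
    """n-fold composition of f with itself; iterate(f, 0) is the identity."""
    def g(x):
        for _ in range(n):
            x = f(x)
        return x
    return g

f = lambda x: 2 * x + 1
assert iterate(f, 4)(0) == f(f(f(f(0))))   # both equal 15
assert iterate(f, 0)(7) == 7               # zero iterations: identity
```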
{ "language": "en", "url": "https://math.stackexchange.com/questions/8111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
If a set contains $(2n+1)$ elements and the number of subsets which contain at most $n$ elements is 4096, what is the value of $n$?
There are $2^{2n + 1}$ subsets in total. Now note that the function which takes a subset to its complement is a bijection, and that a subset has $n$ or fewer elements if, and only if, its complement has more than $n$ elements. Thus the number of subsets with $n$ or fewer elements is equal to the number with more than $n$; thus the number of each is $2^{2n + 1}/2 = 2^{2n}$. Now solve for $n$.
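The argument gives $2^{2n} = 4096$, i.e. $n = 6$; a direct enumeration agrees:

```python
from math import comb

# Number of subsets with at most n elements of a (2n+1)-element set is
# sum of C(2n+1, k) for k = 0..n, which the complement bijection shows
# equals 2^(2n); find the n that makes it 4096.
n = 1
while sum(comb(2 * n + 1, k) for k in range(n + 1)) != 4096:
    n += 1
assert n == 6 and 2 ** (2 * n) == 4096
```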
{ "language": "en", "url": "https://math.stackexchange.com/questions/8157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Logistic function passing through two points? Quick formulation of the problem: given two points $(x_l, y_l)$ and $(x_u, y_u)$ with $x_l < x_u$ and $y_l < y_u$, and given lower asymptote 0 and higher asymptote 1, what's the logistic function that passes through the two points? Explanatory image: Other details: I'm given two points in the form of Pareto 90/10 (green in the example above) or 80/20 (blue in the example above), and I know that the upper bound is one and the lower bound is zero. How do I get the formula of a sigmoid function (such as the logistic function) that has a lower asymptote on the left and a higher asymptote on the right and passes through the two points?
I believe you're looking for constants $a$ and $b$ so that $f(x_\ell) = y_\ell$ and $f(x_u) = y_u$ where $f(x) = \exp(a + bx) / (1 + \exp(a + bx))$. This is equivalent to the linear system $a + b x_\ell = g(y_\ell)$ and $a + b x_u = g(y_u)$ where $g(y) = f^{-1}(y) = \log(y/(1-y))$.
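In code, that linear solve is a few lines (a sketch; `fit_logistic` is my own name, not a library function):

```python
import math

def fit_logistic(xl, yl, xu, yu):
    """Solve a + b*x = logit(y) at the two points; f is then the logistic."""
    logit = lambda y: math.log(y / (1 - y))   # inverse of the logistic
    b = (logit(yu) - logit(yl)) / (xu - xl)
    a = logit(yl) - b * xl
    return a, b

a, b = fit_logistic(10.0, 0.1, 90.0, 0.9)     # a 90/10-style pair (made up)
f = lambda x: 1 / (1 + math.exp(-(a + b * x)))
assert abs(f(10.0) - 0.1) < 1e-12 and abs(f(90.0) - 0.9) < 1e-12
```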
{ "language": "en", "url": "https://math.stackexchange.com/questions/8213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How many even positive integers are there that are divisors of 720? How many even positive integers are there that are divisors of 720? I know how to compute the number of divisors, but how do you compute the number of even or odd positive divisors of a number? If we list the divisors of 720 (using Mathematica): {1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 30, 36, 40, 45, 48, 60, 72, 80, 90, 120, 144, 180, 240, 360, 720}, among these only 24 are even. I am looking for some tricks that can be used in solving similar kinds of problems during an exam (an under-a-minute solution).
An even number $2m$ is a factor of $720$ iff $m$ is a factor of $360$. So it's the same problem as counting divisors of $360$. For that it helps to consider the prime factorization of $360$.
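Both counts are quick to verify, along with the divisor-counting formula from the prime factorization $360 = 2^3 \cdot 3^2 \cdot 5$:

```python
evens = [d for d in range(1, 721) if 720 % d == 0 and d % 2 == 0]
halves = [d for d in range(1, 361) if 360 % d == 0]
assert len(evens) == len(halves) == 24

# Exponent trick: 360 = 2^3 * 3^2 * 5 has (3+1)(2+1)(1+1) = 24 divisors.
assert (3 + 1) * (2 + 1) * (1 + 1) == 24
```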
{ "language": "en", "url": "https://math.stackexchange.com/questions/8257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
$\mathfrak{c} = 2^{\aleph_0}$ I know a few proofs over this theorem (where $\mathfrak{c}$ is the cardinality of $[0,1]$ and $\aleph_0$ is the cardinality of $\mathbb{N}$) where they construct two injections and then use Schröder-Bernstein (via the Cantor set or something like that). Now I was wondering if I could do something like this: Define $s: \{0,1\}^\mathbb{N} \to [0,1]$ by $$s(x) = \sum_{i = 1}^\infty \frac{x(i)}{2^i}.$$ Now this is clearly a surjection because this is just a binary expansion of numbers in $[0,1]$, but not injective because these expansions are not unique. Is there a way to make this work? Can I use this to construct a bijection?
Yes, but in my opinion, it's really not worth it. Binary expansions are unique for all real numbers except dyadic rationals $\frac{k}{2^n}$, since they can end with a string of zeroes or a string of ones. But there are only countably many dyadic rationals. So define $s : \{ 0, 1 \}^{\mathbb{N}} \to [0, 1]$ to be what you said for all strings that don't end with a string of zeroes or a string of ones, and for the countably many exceptions pick any bijection you like. For example, if a sequence $a_i \in \{ 0, 1 \}^{\mathbb{N}}$ ends with a string of ones, send it to $0.1 a_1 a_2 a_3 ...$, and if it ends with a string of zeroes, send it to $0.0 a_1 a_2 a_3 ...$. But really, there's no point in not using Schroeder-Bernstein. Explicit bijections are highly overrated; in general it is much easier to construct an injection and a surjection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
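The non-injectivity at dyadic rationals mentioned in the answer can be seen concretely; here is a small Python illustration (using partial sums of $s$, not from the original thread):

```python
from itertools import islice

# Two distinct 0/1 sequences that s maps to the same real number 1/2:
# 0.1000..._2 and 0.0111..._2 (approximated by a long partial sum of s).
def s(bits, terms=60):
    return sum(b / 2**i for i, b in enumerate(islice(bits, terms), start=1))

def ends_in_zeros():          # 1, 0, 0, 0, ...
    yield 1
    while True:
        yield 0

def ends_in_ones():           # 0, 1, 1, 1, ...
    yield 0
    while True:
        yield 1

a, b = s(ends_in_zeros()), s(ends_in_ones())
```

Up to floating-point truncation both values equal $1/2$; exactly, the second sum is $1/2 - 2^{-60}$, and the full infinite tails agree.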
The Basel problem As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
This is not really an answer, but rather a long comment prompted by David Speyer's answer. The proof that David gives seems to be the one in How to compute $\sum 1/n^2$ by solving triangles by Mikael Passare, although that paper uses a slightly different way of seeing that the area of the region $U_0$ (in Passare's notation) bounded by the positive axes and the curve $e^{-x}+e^{-y}=1$, $$\int_0^{\infty} -\ln(1-e^{-x}) dx,$$ is equal to $\sum_{n\ge 1} \frac{1}{n^2}$. This brings me to what I really wanted to mention, namely another curious way to see why $U_0$ has that area; I learned this from Johan Wästlund. Consider the region $D_N$ illustrated below for $N=8$: Although it's not immediately obvious, the area of $D_N$ is $\sum_{n=1}^N \frac{1}{n^2}$. Proof: The area of $D_1$ is 1. To get from $D_N$ to $D_{N+1}$ one removes the boxes along the top diagonal, and adds a new leftmost column of rectangles of width $1/(N+1)$ and heights $1/1,1/2,\ldots,1/N$, plus a new bottom row which is the "transpose" of the new column, plus a square of side $1/(N+1)$ in the bottom left corner. The $k$th rectangle from the top in the new column and the $k$th rectangle from the left in the new row (not counting the square) have a combined area which exactly matches the $k$th box in the removed diagonal: $$ \frac{1}{k} \frac{1}{N+1} + \frac{1}{N+1} \frac{1}{N+1-k} = \frac{1}{k} \frac{1}{N+1-k}. $$ Thus the area added in the process is just that of the square, $1/(N+1)^2$. Q.E.D. (Apparently this shape somehow comes up in connection with the "random assignment problem", where there's an expected value of something which turns out to be $\sum_{n=1}^N \frac{1}{n^2}$.) Now place $D_N$ in the first quadrant, with the lower left corner at the origin. 
Letting $N\to\infty$ gives nothing but the region $U_0$: for large $N$ and for $0<\alpha<1$, the upper corner of column number $\lceil \alpha N \rceil$ in $D_N$ lies at $$ (x,y) = \left( \sum_{n=\lceil (1-\alpha) N \rceil}^N \frac{1}{n}, \sum_{n=\lceil \alpha N \rceil}^N \frac{1}{n} \right) \sim \left(\ln\frac{1}{1-\alpha}, \ln\frac{1}{\alpha}\right),$$ hence (in the limit) on the curve $e^{-x}+e^{-y}=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "814", "answer_count": 48, "answer_id": 8 }
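The key algebraic step in the area recurrence for $D_N$ (the removed box matching the two new rectangles) can be verified exactly, and the resulting partial sums checked against $\pi^2/6$; a small Python sketch, not from the original answer:

```python
from fractions import Fraction
from math import pi

# Exact check of the key step in the D_N area recurrence: the k-th removed
# box matches the combined area of the k-th new column and row rectangles.
for N in range(1, 40):
    for k in range(1, N + 1):
        new = (Fraction(1, k) * Fraction(1, N + 1)
               + Fraction(1, N + 1) * Fraction(1, N + 1 - k))
        removed = Fraction(1, k) * Fraction(1, N + 1 - k)
        assert new == removed

# ... and the partial sums of 1/n^2 do creep up toward pi^2/6.
partial = sum(1.0 / n**2 for n in range(1, 100000))
```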
The Basel problem As I have heard people did not trust Euler when he first discovered the formula (solution of the Basel problem) $$\zeta(2)=\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{6}$$ However, Euler was Euler and he gave other proofs. I believe many of you know some nice proofs of this, can you please share it with us?
By using the Fourier series of $f(x)=1, x\in[0,1]$: $$1=\sum_{n=1}^\infty\frac{4}{(2n-1)\pi}\sin (2n-1)\pi x$$ Integrate both sides from $x=0$ to $x=1$: $$\int_{0}^{1}1\,dx=\int_{0}^{1} \sum_{n=1}^\infty\frac{4}{(2n-1)\pi}\sin (2n-1)\pi x \,dx$$ $$1=\sum_{n=1}^\infty\frac{8}{(2n-1)^2\pi^2}$$ $$\sum_{n=1}^\infty\frac{1}{(2n-1)^2}=\frac{\pi^2}{8}$$ Then we use the identity $$\sum_{n=1}^\infty\frac{1}{n^2}=\sum_{n=1}^\infty\frac{1}{(2n-1)^2}+\sum_{n=1}^\infty\frac{1}{(2n)^2}$$ and simplify it to get $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{4}{3}\sum_{n=1}^\infty\frac{1}{(2n-1)^2},$$ so $$\sum_{n=1}^\infty\frac{1}{n^2}=\frac{4}{3}\cdot\frac{\pi^2}{8}=\frac{\pi^2}{6}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/8337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "814", "answer_count": 48, "answer_id": 40 }
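The two sums used in this argument can be checked numerically (a sanity check, not a proof; added here, not from the original answer):

```python
from math import pi

# Numerical check of sum 1/(2n-1)^2 = pi^2/8 and the 4/3 reduction above.
s_odd = sum(1.0 / (2 * n - 1)**2 for n in range(1, 10**6))
s_all = sum(1.0 / n**2 for n in range(1, 10**6))
```

With a million terms, `s_odd` is within about $2.5\cdot 10^{-7}$ of $\pi^2/8$, and `s_all` agrees with $\tfrac{4}{3}\,$`s_odd` to comparable accuracy.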
$|G|>2$ implies $G$ has non trivial automorphism Well, this is an exercise problem from Herstein which sounds difficult: * *How does one prove that if $|G|>2$, then $G$ has non-trivial automorphism? The only thing I know which connects a group with its automorphism is the theorem, $$G/Z(G) \cong \mathcal{I}(G)$$ where $\mathcal{I}(G)$ denotes the Inner- Automorphism group of $G$. So for a group with $Z(G)=(e)$, we can conclude that it has a non-trivial automorphism, but what about groups with center?
As you note in the question, the group of inner automorphisms Inn($G$) is isomorphic to $G/Z(G)$. In particular, it's trivial if and only if $Z(G)=G$. So there is a non-trivial (inner) automorphism unless $G=Z(G)$. Now, notice that, by definition, $Z(G)=G$ if and only if $G$ is abelian; so we have reduced to the abelian case. If $G$ is abelian then $g\mapsto -g$ is an automorphism, and it is non-trivial unless $g=-g$ for all $g\in G$. But $g=-g$ if and only if the order of $g$ divides two. So we have now reduced to the case in which $2g=0$ for all $g\in G$. In this case, $G$ is a vector space over the field $\mathbb{Z}/2$. As $|G|$ is equal to 2 raised to the power of the $\mathbb{Z}/2$-dimension of $G$, the hypothesis that $|G|>2$ implies that $\mathrm{dim}_{\mathbb{Z/2}} G>1$. But now we can write down lots of linear automorphisms of $G$. For instance, you could fix any basis $g_1,g_2,\ldots$ and take the automorphism $g_1\mapsto g_2$, $g_2\mapsto g_1$ and $g_i\mapsto g_i$ for every $i>2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54", "answer_count": 3, "answer_id": 1 }
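Both cases of the answer can be illustrated on small concrete groups; a Python sketch (the groups $\mathbb{Z}/5$ and $(\mathbb{Z}/2)^2$ are chosen here just for illustration):

```python
from itertools import product

# Case 1: G = Z/5 (abelian, with elements of order > 2): g -> -g works.
n = 5
neg = {g: (-g) % n for g in range(n)}
assert sorted(neg.values()) == list(range(n))              # bijection
assert all(neg[(a + b) % n] == (neg[a] + neg[b]) % n
           for a in range(n) for b in range(n))            # homomorphism
assert any(neg[g] != g for g in range(n))                  # nontrivial

# Case 2: G = (Z/2)^2, where g = -g for all g: swap the two basis vectors.
G = list(product([0, 1], repeat=2))
swap = {g: (g[1], g[0]) for g in G}
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
assert all(swap[add(a, b)] == add(swap[a], swap[b]) for a in G for b in G)
assert any(swap[g] != g for g in G)
```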
Number of isomorphism types of functions f:[n]->[n] Consider the set $F_n: = [n]^{[n]}$ of all functions $f:[n] \rightarrow [n]$, $[n] = \lbrace 1,2,...,n\rbrace$. It is well known that $|F_n| = n^n$. Edit: Let two functions $f, g$ in $F_n$ be of the same isomorphism type ($f\sim g$) iff there exists a permutation $\pi$ such that $f\pi = \pi g $ What is the number of isomorphism types of functions $f:[n] \rightarrow [n]$, i.e. what is $|F_n/_{\sim}|$? Examples: * *$|F_2| = 4$, $|F_2/_{\sim}| = 3$ *$|F_3| = 27$, $|F_3/_{\sim}| = 7$
If you want to count the number of functions up to renamings on the domain and codomain, then the number is $p_n$, the number of partitions of $n$. Later: now that you've made precise what you wanted... This is counted here, with references.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
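The counts in the question's examples can be reproduced by brute force, enumerating orbits of conjugation by permutations; a Python sketch (not from the original thread, feasible only for tiny $n$):

```python
from itertools import product, permutations

def iso_types(n):
    # f ~ g iff g = pi^{-1} o f o pi for some permutation pi of {0,...,n-1}
    funcs = set(product(range(n), repeat=n))   # f encoded as (f(0),...,f(n-1))
    classes = 0
    while funcs:
        f = funcs.pop()
        for pi in permutations(range(n)):
            inv = [0] * n
            for i, p in enumerate(pi):
                inv[p] = i
            g = tuple(inv[f[pi[i]]] for i in range(n))
            funcs.discard(g)                   # remove the whole orbit of f
        classes += 1
    return classes

counts = [iso_types(n) for n in (1, 2, 3)]
print(counts)  # [1, 3, 7], matching the examples in the question
```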
Best Cities for Mathematical Study This may sound silly, but... Suppose an aspiring amateur mathematician wanted to plan to move to another city... What are some cities that are home to some of the largest number of the brightest mathematicians? I'm sure this may depend on university presence, or possibly industry presence, or possibly something surprising. Wondering where the best place to take a non-faculty job at a university and try to make friends with some sharp minds in the computer lab or at the nearby pub might be.
I imagine that you could sit in on a lot of classes at Berkeley if you ask the professors. I doubt many would mind if you are very serious about mathematics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
Duality with a stochastic matrix If I have a stochastic matrix $X$- the sum of each row is equal to $1$ and all elements are non-negative. Given this property, how can I show that: $x'X=x'$ , $x\geq 0$ Has a non-zero solution? I'm assuming this has something to do with proving a feasible dual, but not may be wrong..
Here's the linear programming approach (with considerable help from another user, Fanfan). Consider the LP $$\min 0^T y$$ subject to $$(X - I)y \geq 1,$$ where $0$ and $1$ are, respectively, vectors containing all 0's and all 1's. By Fanfan's answer to my question this LP is infeasible. Thus its dual, $$\max 1^T x$$ subject to $$x^T (X-I) = 0^T,$$ $$x \geq 0,$$ is either infeasible or unbounded. But $x = 0$ is a solution, and so this dual problem is feasible. Thus it must be unbounded. But that means there must be a nonzero solution $x_1$ in its feasible region. Thus we have $x_1^T X = x_1^T$ with $x_1 \geq 0$, $x_1 \neq 0$. (Fanfan's answer to my question also includes another answer to your question - one that uses Farkas's Lemma rather than LP duality. It ends up being quite similar to my answer here, as, of course, Farkas's Lemma and LP duality are basically equivalent.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/8556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
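A small numeric illustration of the statement being proved (not of the duality argument itself): for a concrete row-stochastic matrix, power iteration $x \leftarrow x'X$ converges to a nonnegative, nonzero left fixed vector. The matrix below is chosen just for illustration.

```python
# Left fixed vector of a row-stochastic matrix by power iteration.
X = [[0.9, 0.1, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.4, 0.6]]

x = [1.0, 0.0, 0.0]
for _ in range(2000):
    x = [sum(x[i] * X[i][j] for i in range(3)) for j in range(3)]

# One more application barely moves x: it is (numerically) a solution x'X = x'.
xX = [sum(x[i] * X[i][j] for i in range(3)) for j in range(3)]
print(x)  # approximately the stationary distribution
```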
Continued Fraction expansion of $\tan(1)$ Prove that the continued fraction of $\tan(1)=[1;1,1,3,1,5,1,7,1,9,1,11,...]$. I tried using the same sort of trick used for finding continued fractions of quadratic irrationals and trying to find a recurrence relation, but that didn't seem to work.
We use the formula given here: Gauss' continued fraction for $\tan z$ and see that $$\tan(1) = \cfrac{1}{1 - \cfrac{1}{3 - \cfrac{1}{5 -\dots}}}$$ Now use the identity $$\cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{x-1}}}$$ To transform $$\cfrac{1}{a - \cfrac{1}{b - \cfrac{1}{c - \dots}}}$$ to $$\cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-2 + \cfrac{1}{1 + \cfrac{1}{c-2 + \dots}}}}}$$ to get the expansion for $\displaystyle \tan(1)$ The above expansion for $\tan(1)$ becomes $$ \cfrac{1}{1-1 + \cfrac{1}{1 + \cfrac{1}{3-2 + \cfrac{1}{1 + \cfrac{1}{5-2 + \dots}}}}}$$ $$ = 1 + \cfrac{1}{3-2 + \cfrac{1}{1 + \cfrac{1}{5-2 + \dots}}}$$ $$= 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{3 + \cfrac{1}{1 + \cfrac{1}{5 + \dots}}}}}$$ To prove the transformation, let $\displaystyle x = b - \cfrac{1}{c - \dots}$ Then $$ \cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{x-1}}}$$ $$ = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-1 + \cfrac{1}{c - \dots}}}}$$ Applying the identity again to $$\cfrac{1}{b-1 + \cfrac{1}{c - \dots}}$$ we see that $$\cfrac{1}{a-\cfrac{1}{x}} = \cfrac{1}{a-1 + \cfrac{1}{1 + \cfrac{1}{b-2 + \cfrac{1}{1 + \cfrac{1}{c-1 + \cfrac{1}{d - \dots}}}}}}$$ Applying again to $\cfrac{1}{c-1 + \cfrac{1}{d - \dots}}$ etc gives the required CF.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
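The claimed expansion can be recovered numerically with the standard continued-fraction algorithm; a Python sketch (not from the original thread — double precision is only trustworthy for roughly the first dozen partial quotients):

```python
from math import tan, floor

# Numerically recover the first continued-fraction terms of tan(1).
x = tan(1.0)
terms = []
for _ in range(12):
    a = floor(x)
    terms.append(a)
    x = 1.0 / (x - a)

print(terms)  # [1, 1, 1, 3, 1, 5, 1, 7, 1, 9, 1, 11]
```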
Why is the following evaluation of Apery's Constant wrong and do you have suggestions on how, if at all, this method could be improved? Please let me summarize the method by which L. Euler solved the Basel Problem and how he found the exact value of $\zeta(2n)$ up to $n=13$. Euler used the infinite product $$ \displaystyle f(x) = \frac{\sin(x)}{x} = \prod_{n=1}^{\infty} \Big(1-\frac{x^2}{n^2\pi^2}\Big) , $$ Newton's identities and the (Taylor) Series Expansion (at $x=0$) of the sine function divided by $x$ to arrive at $$ 1 - \frac{x^2}{\pi^2} \cdot (1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ... + \frac{1}{n^2}) + x^4(...) = 1 - \frac{x^2}{6} + \frac{x^4}{120} - ... $$ Upon subtracting 'one' from both sides, equating the $x^2$ terms to each other and multiplying both sides by $ - \pi^2$, one finds that $$ \zeta(2)=\frac{\pi^2}{6}. $$ When I first saw this proof and the way it was extended to find the values of the other even zeta-constants, I couldn't help thinking: "How could this method be strengthened to find the values of the odd zeta-constants?" (And, a little while later, "why hasn't this been done before?") I started looking for a similar-looking infinite product, only now I focussed on one of the form $$ \displaystyle f(x) = \prod_{k=1}^{\infty} \Big(1-\frac{x^a}{k^3 \cdot q}\Big) $$ (for some $ a \in \mathbb{N} , q \in \mathbb{R} $). A little while later I stumbled upon this website and fixated my eyeballs on equation (27). If we take $n=3$, Prudnikov et al. tell us that $$ \prod_{k=1}^{\infty} \Big(1-\frac{x^3}{k^3}\Big) = - \frac{1}{x^3} \cdot \prod_{k=1}^{2} \frac{1}{\Gamma(-e^{2/3 \pi i k} \cdot x)}. $$ Now, I thought that if we could use Newton's Identities again on the left side of the equation and find out what the Taylor Series Expansion of the right-hand side would be, we could find out what the exact value of Apery's Constant and other odd zeta-constants would be. In this answer by Robert Smith, I was told the Series Expansion. 
So we have $$ 1 - x^3(1 + \frac{1}{8} + \frac{1}{27} + ... + \frac{1}{n^3}) = -1 - 2 \cdot \gamma x - 2 \gamma^2 x^2 + \frac{1}{6}x^3(-8\gamma^3 - \psi^{(2)} (1)) - x^4(...) $$ Notice that on the left side we only have 'one minus a term with an $x^3$ coefficient', while on the other side we see 'minus one plus $x$, $x^2$, $x^3$ coefficients with their terms'. This is important, because it probably answers the question why the following will not work, but I don't know why and I really would like to know. I guess you know what I will attempt to do now. We equate the $x^3$ terms with each other, set $x=1$, multiply by minus one and 'find' that $$ \zeta(3) = \frac{1}{6}(8\gamma^3 + \psi^{(2)} (1)). $$ By combining this with the already known result $$ \zeta(3) = -\frac{1}{2} \psi^{(2)}(1), $$ we 'find' that $$ \zeta(3) '=' \gamma^3. $$ Obviously, this is wrong. Apery's constant is larger than one, and this value is clearly smaller than one. Could someone please elaborate on where I went wrong? And does anybody have any suggestions and/or ideas related to the discussion from above using which we could find "better" values for Apery's Constant and the other odd zeta constants? (For example by pointing out a similar infinite product relation, and by showing that that infinite product has a nicer Series Expansion?) Or could someone point out to me why this approach to finding nicer closed-form representations for these constants clearly won't lead to any results? Thanks in advance, Max Muller (Moderators: If you find any spelling mistakes or grammar errors, feel free to correct them. To the rest: $\gamma$ is the Euler-Mascheroni Constant, and it amounts to approximately $0.5772$. The $\psi^{(2)}(x)$ stands for the second logarithmic derivative of the Gamma-function. As usual, Wikipedia is a pretty good reference for this sort of thing.)
Your previous question was about the wrong function. Instead of $\Gamma(x)$ in the denominator you should have $\Gamma(-x)$. If you fix this you'll probably end up with the same polygamma identity you already knew. In any case, proving anything about $\zeta(2k+1)$ is known to be quite hard. If anything simple worked, Euler would have done it, or somebody in the last few centuries anyway. Let me also mention that Prudnikov's identity is, if you are willing to accept Euler-style manipulations, trivial. It is equivalent to a product formula for $\frac{1}{\Gamma(x)}$ which follows (again if you are willing to accept Euler-style manipulations) from an investigation of its roots and does not really tell you anything deep about zeta values.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
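The question's remark that $\zeta(3) \ne \gamma^3$ is easy to confirm numerically; a one-off Python check (not from the original thread):

```python
# The two numbers in the question's last display are nowhere near equal:
# zeta(3) > 1 while gamma^3 < 1.
gamma = 0.5772156649015329          # Euler-Mascheroni constant
zeta3 = sum(1.0 / n**3 for n in range(1, 200000))
print(zeta3, gamma**3)              # roughly 1.2021 vs 0.1923
```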
Chinese remainder theorem I am looking for a simple proof to show that given a countable set of natural numbers $C$ that is closed under addition and whose gcd is 1, there exists two elements $c_1, c_2 \in C$ such that $\gcd(c_1, c_2)=1$. I used the Chinese remainder theorem, but my proof is about half a page and I was wondering if there is a shorter one.
Because $\gcd(C)=1$ there are $a_1,a_2,\ldots,a_n\in C$ such that $\gcd(a_1,\ldots,a_n)=1$. Thus there are integers $m_1,\ldots m_n$ such that $m_1a_1+\cdots m_na_n=1$. Let $\{i_1,\ldots,i_p\}\subset\{1,\ldots,n\}$ be those indices corresponding to positive coefficients $m_{i_k}$ and $\{j_1,\ldots,j_q\}$ those corresponding to negative coefficients $m_{j_k}$. Because $C$ is closed under addition, $c_1=\sum_{k=1}^pm_{i_k}a_{i_k}$ and $c_2=\sum_{k=1}^q-m_{j_k}a_{j_k}$ are in $C$, and $c_1-c_2=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
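The splitting of the Bezout coefficients into positive and negative parts can be traced on a concrete example; a Python sketch with the generating set $\{6, 10, 15\}$ chosen just for illustration (not from the original thread):

```python
from math import gcd
from functools import reduce

# The construction in the proof, on the additively closed set generated by
# {6, 10, 15} (gcd 1): from 1 = 6 + 10 - 15, take c1 = 6 + 10 and c2 = 15.
a = [6, 10, 15]
m = [1, 1, -1]                                  # Bezout-style coefficients
assert reduce(gcd, a) == 1
assert sum(mi * ai for mi, ai in zip(m, a)) == 1

c1 = sum(mi * ai for mi, ai in zip(m, a) if mi > 0)    # 16, a sum of elements
c2 = sum(-mi * ai for mi, ai in zip(m, a) if mi < 0)   # 15, a sum of elements
assert c1 - c2 == 1 and gcd(c1, c2) == 1
```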
What are the three cube roots of -1? Not sure if this is a trick question, but I have been asked this. One of the answers is -1; what are the other 2?
HINT $\ $ Let $\rm\ \ x\ \to\ -x\ \ $ in $\rm\displaystyle\ \ \frac{1-x^3}{1-x}\ =\ 1+x+x^2\:.\ $ Generally suppose $\rm\:f(x)\:$ is a polynomial over a field with roots $\rm\: a \ne b\:$. Then $\rm\ f(x) = (x-a)\ g(x)\ $ hence $\rm\: f(b) = 0\: \Rightarrow\ (a-b)\:g(b) = 0\ \Rightarrow\ g(b) = 0\ $ i.e. $\rm\:b\:$ is a root of $\rm\ f(x)/(x-a)\:$. From a factorization perspective, the reason that this works is because, over a domain, monic linear polynomials are prime, so the linear factors of a polynomial are unique, i.e. the roots and their multiplicity are unique. e.g. see my post here. This fails over coefficient rings that are not domains, i.e. have zero-divisors, e.g. $\rm\ x^2-1 = (x-1)(x+1) = (x-4)(x+4)\ $ over $\ \mathbb Z/15\:$. Here, although $4 \ne 1$ is a root of $\rm\ x^2 - 1$ it is not true that 4 is a root of $\rm\ (x^2-1)/(x-1) = x+1\:$. For the example at hand we have $\rm\ x^3 + 1 = (x+1)(x+9)(x-10) = (x+16)(x+22)(x-38)\ $ over $\ \mathbb Z/91\:$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
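The three roots can be written down and verified numerically; a short Python check (not from the original thread):

```python
import cmath

# The three cube roots of -1 are e^{i pi (2k+1)/3} for k = 0, 1, 2: namely
# -1 itself and the pair 1/2 +/- i*sqrt(3)/2, the roots of x^2 - x + 1
# (which is 1 + x + x^2 after the substitution x -> -x from the hint).
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
for z in roots:
    assert abs(z**3 + 1) < 1e-12
```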
Why does the polynomial equation $1 + x + x^2 + \cdots + x^n = S$ have at most two solutions in $x$? Américo Tavares pointed out in his answer to this question that finding the ratio of a geometric progression only from knowledge of the sum of its first $n+1$ terms $S = 1+x+x^2+\cdots+x^n$ amounts to solving a polynomial of degree $n$. This suggested to me that there might be up to $n$ real solutions of $x$ for a given sum, but I could not find any. In fact, it turned out that the following fact is true: For $n \ge 1$ and $S \in \mathbb{R}$, the polynomial equation $x^n + x^{n-1} + \cdots + x + 1 = S$ has at most two real solutions. A corollary is that if $n$ is odd, there is exactly one real solution. I was only able to prove this using a rather contrived geometric argument based on the shape of the graph of $y = x^{n+1}$. Is there a simple, direct (and ideally, intuitive) proof of this fact?
A powerful algorithmic way to handle such problems is to employ Sturm's Theorem. If you work out the details you will see that it is quite simple for this example. This in turn is a special case of the CAD (cylindrical algebraic decomposition) algorithm - an effective implementation of Tarski's quantifier elimination for the first order theory of the reals. The general ideas behind these methods prove helpful in solving a variety of problems. It's well worth the effort to learn the general methods rather than ad-hoc techniques. Per Rahul's request, here are further details of applying Sturm's algorithm to the example at hand. We desire to prove that $\rm\ \ g(x) =\ x^n +\:\cdots\: + x^2 + x + 1 - s\ \ $ has at most two distinct real roots. Consider $\rm\ f(x) = (x-1)\ g(x) =\ x^{n+1}-1-s\ (x-1)\:.\ $ Since $\rm\ \ f\:' = (n+1)\ x^n - s\ \ $ we have $\rm\ f\ mod\ f\:'\: =\ f\: - \: x/(n+1)\ f\:'\ =\ a\ x + b\ $ for some $\rm\:a,\:b\in \mathbb R\:$. So the euclidean remainder sequence in the calculation of $\rm\ gcd(f,\:f\:')\ $ has length at most $4$, viz. $\rm\ f,\ f\:',\ a\ x + b,\ c\ $. Thus it has at most $3$ sign changes at any point, so Sturm's theorem implies that $\rm\ f(x)\ $ has at most $3$ distinct real roots. So if $\rm\: x = 1\:$ isn't a multiple root of $\rm \:f(x)\:$ then $\rm\ g(x) = f(x)/(x-1)\ $ has at most $2$ distinct real roots. Else $\rm\ x-1\:|\:gcd(f,\:f\:')\ \Rightarrow\ x-1\:|\:c\ \Rightarrow\ c=0\:$. So in this case the remainder sequence has length at most $3$, so at most $2$ sign changes, so $\rm\:f\ $ has at most $2$ distinct real roots, therefore ditto for $\rm\:g\:$. Although Sturm's theorem is slightly more work here than Rolle's theorem, it has the added benefit that it allows one to compute the precise number of roots in any interval (versus only bounds using Rolle's theorem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/8811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
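The statement can also be sanity-checked numerically by counting sign changes of $g(x) = 1 + x + \cdots + x^n - S$ on a grid; a Python sketch (a sampling check, not a proof — the interval endpoints and the $(n, S)$ pairs are chosen just for illustration):

```python
# Count strict sign changes of g(x) = 1 + x + ... + x^n - S on a grid.
def sign_changes(n, S, lo=-9.7, hi=10.3, steps=20000):
    g = lambda x: sum(x**k for k in range(n + 1)) - S
    changes, prev = 0, g(lo)
    for i in range(1, steps + 1):
        cur = g(lo + (hi - lo) * i / steps)
        if prev * cur < 0:
            changes += 1
        prev = cur
    return changes

results = {(n, S): sign_changes(n, S)
           for n in (2, 3, 4, 5, 6) for S in (-3.0, 0.5, 1.0, 7.5)}
assert all(c <= 2 for c in results.values())
```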
Funny identities Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?
$$\sum\limits_{n=1}^{\infty} n = 1 + 2 + 3 + \cdots \text{ad inf.} = -\frac{1}{12}$$ You can also see many more here: The Euler-Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation
{ "language": "en", "url": "https://math.stackexchange.com/questions/8814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "281", "answer_count": 63, "answer_id": 31 }
How can Gröbner bases used to describe discrete probability? I am working in the field of machine learning, and I have come across a few papers that show relationships between Gröbner bases and discrete probability. So I come here for help. Can you please explain how can Gröbner bases be used to describe discrete probability? I have looked at Gröbner bases and I understand the general concepts (and used Maple to calculate a few examples). So it is the link that is missing for me.
You can find a nice introduction here: Pistone, Giovanni; Riccomagno, Eva; Wynn, Henry: Computational commutative algebra in discrete statistics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
convergence of sequence of random variables What does this expression mean - $\lim_{n\rightarrow\infty} E|X_n-X|=0$? $X_n$ is a sequence of random variables and $X$ is a random variable. What does this expression imply? Can I say that the sequence $X_n$ converges to $X$ in probability and almost surely?
This is called convergence in the mean or convergence in the $L^{1}$-norm. In general, if \begin{eqnarray} \lim_{n \to \infty} \mathbb{E}(| X_{n} - X|^{p}) = 0, \end{eqnarray} then $X_{n}$ is said to converge to $X$ in the $L^{p}$-norm (provided that $\mathbb{E}(|X_{n}|^{p})$ is finite for all $n \geq 1$). Analytically, there are nice implications of such convergence. For example, convergence in an $L^{p}$-norm implies convergence in an $L^{q}$-norm if $p \geq q$. (See http://en.wikipedia.org/wiki/Convergence_of_random_variables). Markov's inequality states \begin{eqnarray} \mathbb{P}(|X_{n} - X| > \epsilon) \leq \epsilon^{-p} \, \mathbb{E}(|X_{n} - X|^{p}). \end{eqnarray} Thus, $L^{p}$-norm convergence implies convergence in probability.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
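The Markov bound quoted in the answer can be verified exactly on a concrete discrete distribution; a Python sketch (the pmf of $|X_n - X|$ below is chosen just for illustration):

```python
from fractions import Fraction

# Exact check of P(|D| > eps) <= eps^{-p} E|D|^p for one discrete |D|.
pmf = {Fraction(0): Fraction(1, 2),
       Fraction(1): Fraction(1, 3),
       Fraction(3): Fraction(1, 6)}
assert sum(pmf.values()) == 1

for p in (1, 2):
    Ep = sum(prob * v**p for v, prob in pmf.items())
    for eps in (Fraction(1, 2), Fraction(2), Fraction(3)):
        tail = sum(prob for v, prob in pmf.items() if v > eps)
        assert tail <= Ep / eps**p
```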
Relationship between these two probability mass functions If I have two different discrete distributions of random variables X and Y, such that their probability mass functions are related as follows: $$P(X=x_i) = \lambda\frac{P (Y=x_i)}{x_i}$$ what can I infer from this equation? Any observations or interesting properties that you see based on this relation? What if $$P(X=x_i) = \lambda\sqrt{\frac{P (Y=x_i)}{x_i}}\;?$$ In both cases, $\lambda$ is a constant.
if you rewrite the equation as $P(Y = x) = xP(X = x)/\lambda$, then the distribution of $Y$ is called the length-biased distribution for $X$. it arises, for example, if one has a bunch of sticks in a bag and reaches in and selects one at random - where the probability a particular stick is selected is proportional to its length. if the lengths of the sticks are realizations of the random variable $X$, the distribution of the length of the selected stick is that of $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/8969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
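The length-biased pmf from the answer can be computed exactly for a tiny example; a Python sketch (the pmf of $X$ is chosen just for illustration — note that normalization forces the constant $\lambda$ in the question's first equation to be $E[X]$):

```python
from fractions import Fraction

# Length-biased version of a pmf on {1, 2, 3}: P(Y=x) = x * P(X=x) / E[X].
pX = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
EX = sum(x * p for x, p in pX.items())          # 7/4
pY = {x: x * p / EX for x, p in pX.items()}

assert sum(pY.values()) == 1                    # pY is a genuine pmf
assert pY[3] > pX[3]                            # long sticks are over-represented
```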
If $F$ is strictly increasing with closed image, then $F$ is continuous Let $F$ be a strictly increasing function on $S$, a subset of the real line. If you know that $F(S)$ is closed, prove that $F$ is continuous.
Here's an approach by contraposition. Let $f$ be a strictly increasing function discontinuous at $x\in S$. Then $f(x)\lt\lim_{y\to x+}f(y)$ or $f(x)\gt\lim_{y\to x-}f(y)$ (or both). Suppose $f(x)\lt\lim_{y\to x+}f(y)$. Then you can show that $\lim_{y\to x+}f(y)$ is in $\overline{f(S)}\setminus f(S)$, so $f(S)$ is not closed. To see that the limit is in the closure of $f(S)$ is a straightforward unwinding of definitions. It's not in $f(S)$ because for every $z\lt x$, $f(z)\lt f(x)\lt\lim_{y\to x+}f(y)$, and for every $z\gt x$, $\lim_{y\to x+}f(y)\lt f(z)$. (Similarly on the other side. It may help to keep in mind that $\lim_{y\to x-}f(y)=\sup_{y\lt x}f(y)$ and $\lim_{y\to x+}f(y)=\inf_{y\gt x}f(y)$.) Here's a way that doesn't use contraposition (although there is a bit of contradiction). Let $x$ be an element of $S$, and let $x_1,x_2,\ldots$ be an increasing sequence in $S$ converging to $x$. Then $f(x_1),f(x_2),\ldots$ is an increasing sequence bounded above by $f(x)$, and hence it converges. Since $f(S)$ is closed, there is a $z\in S$ such that $f(x_n)\to f(z)$ as $n\to \infty$. I claim that $z=x$. If $z$ were bigger than $x$, then we'd have $f(x_n)\leq f(x)\lt f(z)$ for all $n$, making the convergence impossible. If $z$ were smaller than $x$, we'd have $z$ smaller than $x_n$ for some $n$, so $f(z)\lt f(x_n)\leq f(x_{n+1})\leq\cdots$, again making the convergence impossible. So $z=x$ as claimed. This implies that the left-hand limit of $f$ at $x$ exists and equals $f(x)$. Similarly on the right, so $f$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What happens to the 0 element in a Finite Group? So, I'm relearning Group Theory. And I got the axioms down, I think. So let's make a concrete example: * *The collection of numbers the positive integers less than 7: 1,2,3,4,5,6 *The • operation will be multiplication mod 7. *Associativity holds. *The Identity e is 1. *Every element has an inverse: * *1*? mod 7 = 1 --> 1 *2*? mod 7 = 1 --> 4 *3*? mod 7 = 1 --> 5 *4*? mod 7 = 1 --> 2 *5*? mod 7 = 1 --> 3 *6*? mod 7 = 1 --> 6 But! What is the order of the group?! I thought the order would be 7. But there are 6 elements! So maybe I was wrong and 0 should be in the group. But 0 does not have an inverse! There is no x such that 0*x mod 7 = 1. So what am I misunderstanding here? Is it the definition of order? Is it some other trick about groups?
You're right, the group has order 6 because it has six elements. You can make {0,1,2,3,4,5,6} a group with addition mod 7. This would be a group of order 7.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
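The inverse table from the question, and the failure of 0 to have one, can be checked in a few lines of Python (not from the original thread):

```python
# The multiplicative group (Z/7)^* = {1,...,6}: order 6, every element invertible.
G = list(range(1, 7))
inverses = {g: next(h for h in G if (g * h) % 7 == 1) for g in G}

assert len(G) == 6                                   # the order of the group
assert inverses == {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
# 0 is rightly excluded: there is no h with (0*h) % 7 == 1.
assert not any((0 * h) % 7 == 1 for h in range(7))
```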
Finding the fixed points of a contraction Banach's fixed point theorem gives us a sufficient condition for a function in a complete metric space to have a fixed point, namely that it needs to be a contraction. I'm interested in how to calculate the limit of the sequence $x_0 = f(x), x_1 = f(x_0), \ldots, x_n = f(x_{n-1})$ for a fixed $x$. I couldn't figure out a way to do this limit with ordinary limits calculations. The only thing I have at my disposal is the proof of the theorem, from which we see that the sequence $x_n$ is a Cauchy sequence; from this, I'm able to say, for example, that $\left|f(f(f(x))) - f(f(f(f(x))))\right| \leq \left|f(x_0)-f(x_1)\right| ( \frac{k^3}{1-k})$, where $k$ is the contraction constant, but I can't get any further in the calculations. My question is: how should I proceed to calculate this limit exactly? Are there non-numerical (read: analytical) ways to do this? Remark: I'm interested in functions $\mathbb{R} \rightarrow \mathbb{R}$ (as it can be seen from my use of the euclidean metric in $\mathbb{R}$)
In addition to what Hans Lundmark has said about solving $x = f(x)$, you could also try writing a simple computer programme to read a number $n$ and a starting value $x_0$, and compute the result of applying f to $x_0$ $n$ times, thus delivering an approximation to the root that you are seeking. The value of $n$ may have to be fairly large in some cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
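The programme suggested in the answer takes only a few lines; a Python sketch for the contraction $f(x)=\cos(x)$ on $[0,1]$ (chosen here just for illustration — there $|f'(x)| = |\sin x| \le \sin(1) < 1$):

```python
from math import cos

# Iterate x <- f(x) n times from a starting value x0.
def iterate(f, x0, n):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

x_star = iterate(cos, 0.5, 100)
print(x_star)  # ~0.7390851, the fixed point of cos (the "Dottie number")
assert abs(cos(x_star) - x_star) < 1e-10
```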
Isomorphism between [0,1] to (0,1) I need to find an isomorphism between [0,1] to (0,1) can you help me with this please? thanks. benny
What do you mean by "isomorphism"? Are you considering these sets with a given group or ring structure (if then, which one?), or as topological spaces (in which case you'd be looking for a "homeomorphism"), or just as sets? I don't know of a canonical group structure on either set, but if you want your map between the two to be continuous you might have some luck by looking at the notion of compactness and how it behaves under continuous functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Product of all elements in an odd finite abelian group is 1 This should be an easy exercise: Given a finite odd abelian group $G$, prove that $\prod_{g\in G}g=e$. Indeed, using Lagrange's theorem this is trivial: There is no element of order 2 (since the order must divide the order of $G$, but it is odd), and so every element except $e$ has a unique inverse which is different from it. Hence both the element and its inverse participate in the product and cancel each other. My problem is simple - I need to solve this without Lagrange's theorem. So either there's a smart way to prove the nonexistence of an element of order 2 in an odd abelian group, or I'm missing something even more basic...
If finite abelian group $ G$ has an elt $\, j\, $ of order $\,2\,$ then $\,g \to\ j g\,$ pairs its elts so $ G$ has even order. This is a special case of the often useful fact that the cardinalities of a finite set and its fixed-point set under an involution have equal parity, since the non-fixed points are paired by the involution. Hence, as above, when there are no fixed points $\,( j\ne 1\, \Rightarrow\, j\,g\ne g)\,$ the set has even cardinality. Such simple symmetries often lie at the heart of elegant proofs, e.g. the famous Heath-Brown-Zagier proof that every prime $\,\equiv 1\pmod{\!4}\, $ is a sum of two squares.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 8, "answer_id": 0 }
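The pairing argument can be sanity-checked on cyclic groups $\mathbb{Z}/m$ written additively (every finite abelian group is a product of these); a Python sketch, not from the original thread:

```python
# For odd m the sum of all elements of Z/m is 0 mod m (everything cancels in
# pairs g, -g); for even m the self-paired element m/2 survives.
for m in range(2, 40):
    total = sum(range(m)) % m        # m(m-1)/2 mod m
    if m % 2 == 1:
        assert total == 0
    else:
        assert total == m // 2
```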
About a weighted sum of hitting times for random walks on graphs Consider a random walk on an undirected, non-bipartite graph. Let $\pi$ be the stationary distribution of this process, and let the hitting time $H(s,t)$ be the expected time until a walk beginning at node $s$ reaches node $t$. I learned from Random walks on graphs: a survey, by L. Lovasz, that the quantity $$ \sum_{t} \pi(t) H(s,t)$$ is independent of $s$. In the Lovasz survey, this falls out as a byproduct of a lengthy calculation. I have two questions: (i) Is there a simple (calculation-free, combinatorial) proof of this statement? (ii) In what generality does this statement hold? Random walks on undirected graphs are a very particular kind of Markov process. What conditions on the probability transition matrix of the process do you need for the above to hold?
To answer (ii), I think it needs to be ergodic, i.e., aperiodic and recurrent.
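For a concrete sanity check of the Lovász identity itself, here is a small numerical experiment (a sketch in Python with NumPy; the graph, a triangle with a pendant vertex, is just an arbitrary non-bipartite example): it solves the linear systems for the hitting times $H(s,t)$ and verifies that $\sum_t \pi(t)H(s,t)$ comes out the same for every start vertex $s$.

```python
import numpy as np

# Adjacency matrix of a triangle {0,1,2} with a pendant vertex 3 attached
# to vertex 0 (undirected, non-bipartite).
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
deg = A.sum(axis=1)
P = A / deg[:, None]            # transition matrix of the random walk
pi = deg / deg.sum()            # stationary distribution: pi(v) = deg(v)/2m

n = len(pi)
H = np.zeros((n, n))            # H[s, t] = expected time to hit t from s
for t in range(n):
    idx = [s for s in range(n) if s != t]
    # H(s,t) = 1 + sum_{u != t} P(s,u) H(u,t)  =>  (I - P_{-t}) h = 1
    h = np.linalg.solve(np.eye(n - 1) - P[np.ix_(idx, idx)], np.ones(n - 1))
    H[idx, t] = h

weighted = H @ pi               # weighted[s] = sum_t pi(t) H(s,t)
assert weighted.max() - weighted.min() < 1e-9
```

The pendant vertex also gives a free consistency check: from vertex 3 the walk hits vertex 0 in exactly one step.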
{ "language": "en", "url": "https://math.stackexchange.com/questions/9355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
what is the cardinality of set of all smooth functions in $L^1$? What is the cardinality of set of all smooth functions belonging to $L^1$ or $L^2$ ? What is that of set of all integrable or square integrable functions ?
The second part of the question asks for the cardinality of the set of integrable or square-integrable functions. I will assume you actually mean "functions" rather than "equivalence classes of functions under the relation of equality almost everywhere". Let $\beta$ be the cardinality of the set of all functions from $\mathbb{R}$ to $\mathbb{R}$; by standard set theory this is the same as the cardinality of the powerset of the real numbers: $\beta = 2^{|\mathbb{R}|} = 2^{2^{\aleph_0}}$. Certainly the set of integrable functions, and the set of square integrable functions, can have cardinality no more than $\beta$. It turns out this is exactly the cardinality. Let $E$ be a Cantor set; the key properties are that $|E| = |\mathbb{R}|$ and the measure of $E$ is $0$. Consider the set of all functions that are $0$ for every $x$ that is not in $E$. All of these functions are both integrable and square integrable, because the measure of $E$ is zero. The cardinality of this set is the cardinality of the set of functions from $E$ to $\mathbb{R}$, which is exactly $\beta$ because $|E| = |\mathbb{R}|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Limit of integral - part 2 Inspired by the recent post "Limit of integral", I propose the following problem (hoping it will not turn out to be too easy). Suppose that $g:[0,1] \times [0,1] \to {\bf R}$ is continuous in both variables separately. Is it true that, for all $x_0 \in [0,1]$, $$ \lim \limits_{x \to x_0 } \int_0^1 {g(x,y)\,{\rm d}y} = \int_0^1 {g(x_0 ,y)\,{\rm d}y} . $$
This is true if there is a (measurable) function $f(y)$ such that $$|g(x,y)|\le f(y), \mbox{ for }0\le x\le 1 $$ and $$\int_0^1 f(y)\,dy<\infty.$$ This follows from the Lebesgue dominated convergence theorem. Note: Every continuous function is measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Motivating infinite series What are some good ways to motivate the material on infinite series that appears at the end of a typical American Calculus II course? My students in this course are generally from biochemistry, computer science, economics, business, and physics (with a few humanities folks taking the course for fun) - not just math majors. I have struggled some in the past to motivate the infinite series material to these students. For one, it doesn't fit with the rest of Calc II, which is on the integral. Over the years I have "converged" on telling them that the main point of the unit is Taylor series and that the rest of the material is there primarily so that we have the tools we need in order to understand Taylor series. Then I illustrate some of the many uses of Taylor series (mainly function approximation, at this level). This approach works better than anything I've come up with thus far with respect to getting my students to care about infinite series, but I feel a little like I'm selling the rest of the material short by subordinating it to Taylor series. Does anyone have other ways of motivating infinite series that they would like to share? (Again, only a small percentage of the students in my class are math majors.) Background: The material in this unit typically consists of sequences, basic series (like geometric and telescoping ones), a slew of tests for convergence (e.g., integral test, ratio test, root test), an introduction to power series, Taylor and Maclaurin series, and maybe binomial series.
From the point of view of a student who is struggling with the abstract concepts of maths, I found that using Zeno's Paradox was an interesting approach to infinite series.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 10, "answer_id": 9 }
How many ways can I make six moves on a Rubik's cube? I am writing a program to solve a Rubik's cube, and would like to know the answer to this question. There are 12 ways to make one move on a Rubik's cube. How many ways are there to make a sequence of six moves? From my project's specification: up to six moves may be used to scramble the cube. My job is to write a program that can return the cube to the solved state. I am allowed to use up to 90 moves to solve it. Currently, I can solve the cube, but it takes me over 100 moves (which fails the objective)... so I ask this question to figure out if a brute force method is applicable to this situation. If the number of ways to make six moves is not overly excessive, I can just make six random moves, then check to see if the cube is solved. Repeat if necessary.
There are 7,618,438 different positions after six movements according to this site, but they use face movements. By the way they show that the Rubik's cube can be solved in 20 face movements or less.
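On the brute-force feasibility question: in the quarter-turn metric there are exactly $12^6$ move sequences of length six (many lead to the same position, which is why the distinct-position count above is smaller). A quick back-of-the-envelope check in Python suggests that exhaustively enumerating all sequences is cheap, whereas making six random moves and hoping is not:

```python
MOVES_PER_STEP = 12           # 6 faces x 2 directions (quarter turns)
sequences = MOVES_PER_STEP ** 6
assert sequences == 2_985_984  # small enough to enumerate exhaustively

# Random guessing is hopeless by comparison: the expected number of
# six-move trials needed to hit one specific inverse sequence is 12**6.
```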
{ "language": "en", "url": "https://math.stackexchange.com/questions/9594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Cohomology of a tensor product of sheaves Say I have two locally free sheaves $F,G$ on projective variety $X$. I know the cohomology groups $H^i(X,F)$ and $H^i(X,G)$. Is this enough to give me information about $H^i(X,F\otimes G)$? In particular, if $H^i(X,F)=0$, what conditions on $G$ guarantee that also $H^i(X,F\otimes G)=0$?
You need to make some positivity assumptions on $E$ and $F$, as your conclusion is just not true in general. The only case I know of is Le Potier's vanishing theorem, which says that if $E \otimes F \otimes \omega_X^{-1}$ is ample on a smooth projective variety $X$, then $H^i(X,E \otimes F)=0$ for $i \geq rs$, where $rk(E)=r$ and $rk(F)=s$. This is satisfied for instance if $\omega_X^{-1}$ is nef, and $E$ and $F$ are both ample on $X$, but even here you need $rs$ to be small relative to the dimension of $X$ if you want to say something meaningful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Scaling a decimal number to have leading digit 5 or more Suppose we are given two real numbers: $a, b \in \mathbb{R}$, $b > a$. Find $m \in \{1, 2, 2.5, 5\}$ and $k \in \mathbb{Z}$, satisfying the following condition: $\frac{b-a}{m10^k} \in [5,10)$. How to solve this problem without using brute force?
This is a "normalization" type operation. For simplicity we can replace the two parameters a,b with the single one c = b - a > 0. Assuming the interpretation proposed by Eric above, you want c*(10^-k) to fall into an interval m*[5,10) where m is in {1,2,2.5,5}. In other words we want to normalize c by a power of 10 so that it belongs to (at least) one of these intervals:

[5,10), [10,20), [12.5,25), [25,50)

While the middle pair of these intervals has nonempty overlap, the union of all four intervals is [5,50) and covers exactly one order of magnitude (power of 10). It follows that the exponent k is unique, although the factor m may or may not be unique. Note that this is equivalent to pigeonholing c/5 into one of four subintervals covering [1,10). How to find these "without using brute force"? The best programmatic approach depends on what language and hardware is available. The IEEE standard for floating-point arithmetic (IEEE-754) allows some fairly portable assumptions to be made, but the challenge is to get a "leading digit" and characteristic exponent in base 10 when the usual representation of floating-point numbers is base 2. If we take the "common logarithm" log_10(c/5) and break it into an integer part and a fractional part:

log_10(c/5) = iPart + fPart, where fPart in [0,1)

Now iPart here is the same value we want for k in your original notation for the problem, and we can determine m from fPart by doing a bit of testing:

if ( fPart < log_10(2) ) then m = 1
else if ( fPart < log_10(2.5) ) then m = 2
else if ( fPart < log_10(5) ) then m = 2.5
else m = 5

regards, hm
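The prose recipe above translates directly into code; here is a runnable sketch in Python (the function name is my own, and exact boundary cases can flip either way due to floating-point rounding of the logarithms):

```python
import math

def normalize(c):
    """Return (m, k) with m in {1, 2, 2.5, 5} and c / (m * 10**k) in [5, 10).

    Split log10(c/5) into integer part k and fractional part f, then
    pigeonhole f against log10(2), log10(2.5), log10(5).
    """
    L = math.log10(c / 5)
    k = math.floor(L)
    f = L - k
    if f < math.log10(2):
        m = 1
    elif f < math.log10(2.5):
        m = 2
    elif f < math.log10(5):
        m = 2.5
    else:
        m = 5
    return m, k

# Spot-check across several magnitudes:
for c in (7, 13, 0.03, 42, 999):
    m, k = normalize(c)
    assert 5 <= c / (m * 10 ** k) < 10
```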
{ "language": "en", "url": "https://math.stackexchange.com/questions/9684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Applications of the Mean Value Theorem What are some interesting applications of the Mean Value Theorem for derivatives? Both the 'extended' or 'non-extended' versions as seen here are of interest. So far I've seen some trivial applications like finding the number of roots of a polynomial equation. What are some more interesting applications of it? I'm asking this as I'm not exactly sure why MVT is so important - so examples which focus on explaining that would be appreciated.
An application which is used a lot in any calculus course: The derivative of a differentiable function vanishes at an extremum (i.e. maximum or minimum) point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 7, "answer_id": 6 }
Parametric Equation Question Ok this is a really silly question and I should know this, but I can't seem to figure something out: for the last step, how do they know that $0 \leq x \leq 4$? If we use the minimum value of theta, which is $-\pi/2$, and plug that into $x=\cos(\theta)$, then we get $0$, and same for $\pi/2$.
Because over $-\pi/2 \leq \theta \leq \pi/2$, $0 \leq \cos\theta \leq 1$. At $\theta = 0$, $\cos\theta = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding roots of polynomials, negative square root The formula for finding the roots of a quadratic polynomial is as follows $$x = \frac {-b \pm \sqrt{ b^2 - 4ac }}{2a} $$ what happens if you want to find the roots of a polynomial like this simplified one $$ 3x^2 + x + 24 = 0 $$ then the square root value becomes $$ \sqrt{ 1^2 - 4\cdot3\cdot24 } $$ $$ = \sqrt{ -287 } $$ which is the square root of a negative number, which isn't allowed. What do you do in this case? I know there are other methods, i.e. factorisation and completing the square, but does this mean that this formula can only be used in specialised cases or have I gone wrong somewhere along the path?
Yeah, so have you read about complex numbers? The roots will be $$x = \frac{-1 \pm{i} \sqrt{287}}{2 \times 3}$$ where $i=\sqrt{-1}$. Read more about complex numbers and when a polynomial can have complex roots; that happens when the discriminant $b^{2}-4ac < 0$.
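Numerically the complex roots are easy to compute; a sketch in Python using the standard `cmath` module, which handles square roots of negative numbers:

```python
import cmath

a, b, c = 3, 1, 24
disc = b * b - 4 * a * c            # -287, negative: complex-conjugate roots
sqrt_disc = cmath.sqrt(disc)
r1 = (-b + sqrt_disc) / (2 * a)
r2 = (-b - sqrt_disc) / (2 * a)

# Both values really are roots of 3x^2 + x + 24:
for r in (r1, r2):
    assert abs(a * r * r + b * r + c) < 1e-9
```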
{ "language": "en", "url": "https://math.stackexchange.com/questions/9840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Reference for matrix calculus Could someone provide a good reference for learning matrix calculus? I've recently moved to a more engineering-oriented field where it's commonly used and don't have much experience with it.
Gene Howard Golub, Charles F. Van Loan book on "Matrix Computations" is regarded as the "Bhagavad Gita" for Matrix Algorithms. http://books.google.com/books?id=mlOa7wPX6OYC&printsec=frontcover There is also another book by "Gene Howard Golub, Gerard Meurant" on "Matrices, moments, and quadrature with applications". http://books.google.com/books?id=IZvkFET3LlwC&printsec=frontcover Also, "Numerical Linear Algebra" by Trefethen and Bau is well-written and easy to read. http://books.google.com/books?id=bj-Lu6zjWbEC&printsec=frontcover I would highly recommend Trefethen and Bau since I have read it completely. I feel it is ideal for self-study or for a one quarter course. Once you are done with this you can take a look at Golub's book. Golub's book is really good for reference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Why is negative times negative = positive? Someone recently asked me why a negative $\times$ a negative is positive, and why a negative $\times$ a positive is negative, etc. I went ahead and gave them a proof by contradiction like so: Assume $(-x) \cdot (-y) = -xy$ Then divide both sides by $(-x)$ and you get $(-y) = y$ Since we have a contradiction, then our first assumption must be incorrect. I'm guessing I did something wrong here. Since the conclusion of $(-x) \cdot (-y) = (xy)$ is hard to derive from what I wrote. Is there a better way to explain this? Is my proof incorrect? Also, what would be an intuitive way to explain the negation concept, if there is one?
I would explain it by number patterns. First, to establish that a positive times a negative is negative: $3 \times 2 = 6, 3 \times 1 = 3, 3 \times 0 = 0$. Notice in each case, as we reduce the second factor by 1, the product is being reduced by 3. So for consistency the next product in the pattern must be $0 - 3 = -3$. Therefore we have $3 \times (-1) = -3, 3 \times (-2) = -6$, and likewise a negative for any other positive times a negative. Second, to establish that a negative times a negative is positive: we now know that $3 \times (-2) = -6, 2 \times (-2) = -4, 1 \times (-2) = -2, 0 \times (-2) = 0$. Notice in each case, as we reduce the first multiplier by 1, the product is being increased by 2. So for consistency the next product in the pattern must be $0 + 2 = 2$. Therefore we have $(-1) \times (-2) = 2, (-2) \times (-2) = 4$, and likewise a positive for any other negative times a negative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/9933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "138", "answer_count": 40, "answer_id": 8 }
Groups having at most one subgroup of any given finite index Cyclic groups have at most one subgroup of any given finite index. Can we describe the class of all groups having such property? Thank you!
Let $G$ be a group. The canonical residually finite quotient of $G$ is $R(G)=G/K$ where $K$ is the intersection of all the finite-index subgroups of $G$. Lemma: If $G$ is finitely generated (update) then $G$ has at most one subgroup of each index if and only if $R(G)$ is cyclic. Proof: First, note that $R(G)$ is residually finite. If every finite quotient of $R(G)$ is cyclic then $R(G)$ is residually cyclic, and it follows that $R(G)$ is abelian. So $R(G)$ has a non-cyclic finite quotient unless $R(G)$ is cyclic. Therefore, if $R(G)$ is not cyclic then $R(G)$, and hence $G$, has a finite non-cyclic quotient, and hence, by Arturo's answer, has two distinct finite-index subgroups of the same index. Conversely, suppose that $R(G)$ is cyclic. Every finite-index subgroup of $G$ contains $K$, so the quotient map $G\to R(G)$ maps finite-index subgroups to finite-index subgroups bijectively and preserves the index. Therefore, if $R(G)$ is cyclic then $G$ has at most one subgroup of each index. QED I believe that it is an open question whether or not there is an algorithm to determine whether a fp group has a proper finite-index subgroup, i.e. whether or not $R(G)$ is non-trivial. So it may be open whether or not it is possible to determine if $R(G)$ is cyclic, too. Note: Earlier, I forgot to mention that I had implicitly assumed that $G$ is finitely generated. This assumption is clearly necessary; otherwise the additive group of the rationals is a counterexample. If $G$ is not finitely generated, then the same argument shows that if $G$ has at most one subgroup of each finite index then $R(G)$ is residually cyclic. But it's not clear to me that the converse of this statement is true. So I'll finish with a question: If $G$ is residually cyclic, does $G$ have at most one subgroup of each finite index?
{ "language": "en", "url": "https://math.stackexchange.com/questions/9934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 0 }
Complex inequality $||u|^{p-1}u - |v|^{p-1}v|\leq c_p |u-v|(|u|^{p-1}+|v|^{p-1})$ How does one show for complex numbers u and v, and for p>1 that \begin{equation*} ||u|^{p-1}u - |v|^{p-1}v|\leq c_p |u-v|(|u|^{p-1}+|v|^{p-1}), \end{equation*} where $c_p$ is some constant dependent on p. My intuition is to use some version of the mean value theorem with $F(u) = |u|^{p-1}u$, but I'm not sure how to make this work for complex-valued functions. Plus there seems to be an issue with the fact that $F$ may not smooth near the origin. For context, this shows up in Terry Tao's book Nonlinear Dispersive Equations: Local and Global Analysis on pg. 136, where it is stated without proof as an "elementary estimate".
Without loss, you can assume $|u|\le|v|$ and replace $u$ by $u/v$ to reduce the problem to the situation $v=1$ and $|u|\le1$. Unless you need $c_p$ explicitly, then it's clear, yes?
{ "language": "en", "url": "https://math.stackexchange.com/questions/9960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Continuous function of one variable Let $f(x)$ be a continuous function on $\mathbb{R}$ which takes both positive and negative values. Prove that there exists an arithmetic progression $a, b, c$ $(a<b<c)$ such that $f(a)+f(b)+f(c)=0$.
HINT: 1) Think about the intermediate value theorem. 2) Think about some $x$ and some $y$ with $f(x)\gt0$ and $f(y)\lt0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/10068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
quick basic math equation question I've got a solution to an economy class calculation, but I can't figure out how they came to the answer. Could you help me?

-5p + 80 = 8p - 50
.......? = ......?
120 = 12p
p = 10

Extra: do you know which rule they are using to get from the first line to the 3rd? Many thx Math beginner, Frank
-5p+80=8p-50
80=5p+8p-50
80=13p-50
80+50=13p
130=13p
(to get 120=12p at this step, you can multiply both sides by 12/13, but it's not necessary to arrive at the final answer)
p=10
{ "language": "en", "url": "https://math.stackexchange.com/questions/10120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The characteristic and minimal polynomial of a companion matrix The companion matrix of a monic polynomial $f \in \mathbb F\left[x\right]$ in $1$ variable $x$ over a field $\mathbb F$ plays an important role in understanding the structure of finite dimensional $\mathbb F[x]$-modules. It is an important fact that the characteristic polynomial and the minimal polynomial of $C(f)$ are both equal to $f$. This can be seen quite easily by induction on the degree of $f$. Does anyone know a different proof of this fact? I would love to see a graph theoretic proof or a non inductive algebraic proof, but I would be happy with anything that makes it seem like more than a coincidence!
This is essentially Yuval's answer expressed in a slightly different way. Let your companion matrix be $$C=\pmatrix{0&1&0&\cdots&0\\\\ 0&0&1&\cdots&0\\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\\ 0&0&0&\cdots&1\\\\ -a_0&-a_1&-a_2&\cdots&-a_{n-1}}.$$ Then for the vector $v=(1\,\,0\,\,0\cdots 0)$, $$v\sum_{j=0}^{n-1} b_j C^j= \pmatrix{b_0&b_1&b_2&\cdots&b_{n-1}}$$ so that $g(C)\ne0$ for all nonzero polynomials $g$ of degree less than $n$. So the minimal polynomial has degree $n$, and equals the characteristic polynomial (via Cayley-Hamilton). But $vC^n=(-a_0\,\, {-a_1}\,\, {-a_2}\cdots{-a_{n-1}})$ and for $v(C^n+\sum_{j=0}^{n-1}b_j C^j)=0$ we need $a_j=b_j$. So the minimal and characteristic polynomials both equal $f$.
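The computation in the answer is easy to check numerically; a sketch in Python with NumPy, for the example $f(x)=x^3+2x^2+3x+4$ (with the companion matrix laid out as in the answer):

```python
import numpy as np

# f(x) = x^3 + 2x^2 + 3x + 4, so (a0, a1, a2) = (4, 3, 2)
a = [4.0, 3.0, 2.0]
n = len(a)
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)          # superdiagonal of 1s
C[-1, :] = [-coef for coef in a]    # last row (-a0, -a1, ..., -a_{n-1})

# Characteristic polynomial (leading coefficient first) recovers f:
assert np.allclose(np.poly(C), [1.0, 2.0, 3.0, 4.0])

# v, vC, vC^2 are linearly independent for v = (1, 0, ..., 0), so no
# nonzero polynomial of degree < n kills C: the minimal polynomial is f too.
v = np.array([1.0, 0.0, 0.0])
rows = np.vstack([v, v @ C, v @ C @ C])
assert np.linalg.matrix_rank(rows) == n
```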
{ "language": "en", "url": "https://math.stackexchange.com/questions/10216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 4 }
How to show that every Boolean ring is commutative? A ring $R$ is a Boolean ring provided that $a^2=a$ for every $a \in R$. How can we show that every Boolean ring is commutative?
If $a,b\in R$, \begin{align} 2ba &=4ba-2ba\\ &=4(ba)^2-2ba\\ &=(2ba)^2-2ba\\ &=2ba-2ba\\ &=0, \end{align} so \begin{align} ab &=ab+0\\ &=ab+2ba\\ &=[ab+ba]+ba\\ &=[(a+b)^2-a^2-b^2]+ba\\ &=[(a+b)-a-b]+ba\\ &=0+ba\\ &=ba. \end{align}
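The archetypal Boolean ring is the power set of a set, with symmetric difference as addition and intersection as multiplication; an exhaustive check in Python confirms the identities used above ($a^2=a$, $2a=0$, $ab=ba$) on this one example (a sanity check, of course, not a proof):

```python
from itertools import product

universe = (0, 1, 2)
# All 8 subsets of {0, 1, 2}, encoded as frozensets
elems = [frozenset(c for c, bit in zip(universe, bits) if bit)
         for bits in product((0, 1), repeat=len(universe))]

add = lambda a, b: a ^ b   # symmetric difference plays the role of +
mul = lambda a, b: a & b   # intersection plays the role of *

for a in elems:
    assert mul(a, a) == a                  # a^2 = a  (Boolean ring axiom)
    assert add(a, a) == frozenset()        # 2a = 0
for a, b in product(elems, repeat=2):
    assert mul(a, b) == mul(b, a)          # commutativity, as proved above
```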
{ "language": "en", "url": "https://math.stackexchange.com/questions/10274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 13, "answer_id": 0 }
Invertibility of compact operators in infinite-dimensional Banach spaces Let $X$ be an infinite-dimensional Banach space, and $T$ a compact operator from $X$ to $X$. Why must $0$ then be a spectral value for $T$? I believe this is equivalent to saying that $T$ is not bijective, but I am not sure how to show that injectivity implies the absence of surjectivity and the other way around (or if this is even the right way to approach the problem).
You are correct that it is the same as saying that $T$ is not bijective, because it follows from the open mapping theorem that a bounded operator on a Banach space has a bounded inverse if it is bijective. However, more straightforward answers for your question can be given without explicitly thinking in these terms. You can show that if $T$ is compact and $S$ is bounded, then $ST$ is compact. If $T$ were invertible, this would imply that $I=T^{-1}T$ is compact. This in turn translates to saying that the closed unit ball of $X$ is compact. One way to see that this is impossible in the infinite dimensional case is implicit in this question. There is an infinite sequence of points in the unit ball whose pairwise distances are bounded below, no subsequence of which is Cauchy. Compact operators on infinite dimensional spaces can be injective, but they can never be surjective. The closed subspaces of the range of a compact operator are finite dimensional. Also, what Qiaochu said in his comment above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/10329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
What will be the remainder? I'm stuck with this problem I'm trying to solve for about an hour. Here's the question. What is the remainder when (3^202)+137 is divided by 101? There are 4 options -> 36, 45, 56, 11 I want to know the answer of the question with proper and possibly easiest method to solve the problem. Thanks in advance, waiting for reply. :)
I think Chandru1 is correct. The answer is 45. How did I get it?

1) The remainder of 3^30 / 101 = 6
2) 6^6 = 46656.
3) 46656 * (The remainder of 3^22 / 101) = 2286144.
4) 2286144 + 137 = 2286281
5) The remainder of 2286281 / 101 = 45

Note in step 2 and 3 that 30 * 6 = 180. And 180 + 22 = 202

A simpler example: The remainder when (2^10)+7 is divided by 6 (the answer is 5).

1) The remainder of 2^4 / 6 = 4
2) 4^2 = 16
3) 16 * (The remainder of 2^2 / 6) = 64
4) 64 + 7 = 71
5) The remainder of 71 / 6 = 5
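The arithmetic is easy to confirm with fast modular exponentiation (Python's built-in three-argument `pow`). A shorter route to the same answer: since $101$ is prime, Fermat's little theorem gives $3^{100}\equiv 1 \pmod{101}$, so $3^{202}\equiv 3^2 = 9$ and the remainder is $(9+137)\bmod 101 = 45$.

```python
# Fermat: 3^100 = 1 (mod 101) since 101 is prime and does not divide 3
assert pow(3, 100, 101) == 1

# Hence 3^202 = (3^100)^2 * 3^2 = 9 (mod 101)
assert pow(3, 202, 101) == 9

# Remainder of 3^202 + 137 on division by 101:
assert (pow(3, 202, 101) + 137) % 101 == 45

# The simpler example from the answer: (2^10 + 7) mod 6 = 5
assert (pow(2, 10, 6) + 7) % 6 == 5
```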
{ "language": "en", "url": "https://math.stackexchange.com/questions/10383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solving $2x - \sin 2x = \pi/2$ for $0 < x < \pi/2$ What is $x$ in closed form if $2x-\sin2x=\pi/2$, $x$ in the first quadrant?
An analytical form of x can be obtained solving Kepler equation: $$M= E-\epsilon \sin(E)$$ with eccentricity=1 and mean anomaly = $\pi/2$ by means of Kapteyn series: $$2x = \frac{\pi}{2}+\sum_{n=1} \frac{2J_n(n)}{n} \sin(\pi n/2)$$ where $J_n()$ are the Bessel functions. Simplifying: $$2x = \frac{\pi}{2}+\sum_{n=0} \left( \frac{2J_{4n+1}(4n+1)}{4n+1} - \frac{2J_{4n+3}(4n+3)}{4n+3}\right)$$ $$x = \frac{\pi}{4}+\sum_{n=0} \left( \frac{J_{4n+1}(4n+1)}{4n+1} - \frac{J_{4n+3}(4n+3)}{4n+3}\right)$$ Such series can be numerically evaluated, but it converges slowly and n=10000 terms are required to obtain: $$x = 1.154940317134$$ with $$2x-\sin(2x)-\pi/2=-1.38017659479e-006$$ In order to improve the convergence, we can employ a series acceleration technique such as Levin's acceleration. (See http://en.wikipedia.org/wiki/Series_acceleration) With only 10 (ten!) terms we obtain: $$x=1.1549406884223$$ A simple c++ code, based on the gsl library, is the following:

#include <iostream>
#include <fstream>
#include <iomanip>
#include "gsl_sf.h"
#include "gsl_sum.h"
#include <cmath>
using namespace std;

int main(int argc, char* argv[])
{
    double PIH = atan(1.)*2;
    cout<<setprecision(13);
    double E=PIH;
    cout<<"raw series"<<endl;
    //raw series
    for( int i = 0 ; i < 1e4; i +=2 )
    {
        double term = 2*gsl_sf_bessel_Jn( 2*i+1, 2*i+1 )/(2*i+1);
        double term2 = 2*gsl_sf_bessel_Jn( 2*i+3, 2*i+3 )/(2*i+3);
        E += (term-term2);
    }
    cout<< E/2<<endl;
    cout<< "error: "<<E-sin(E)-PIH<<endl;

    //levin
    cout<<"levin accelerated series"<<endl;
    const int N = 10;
    double t[N];
    double sum_accel=0, err;
    gsl_sum_levin_u_workspace* w = gsl_sum_levin_u_alloc( N );
    t[0] = PIH;
    for( int i = 1 ; i < N; i++ )
    {
        double term = 2*gsl_sf_bessel_Jn( 4*i-3, 4*i-3 )/(4*i-3);
        double term2 = 2*gsl_sf_bessel_Jn( 4*i-1, 4*i-1 )/(4*i-1);
        t[i] = term-term2;
    }
    gsl_sum_levin_u_accel( t, N, w, &sum_accel, &err );
    E=sum_accel/2;
    cout<<sum_accel/2<<endl;
    cout<<"error: "<<sum_accel-sin(sum_accel)-PIH<<endl;
}
{ "language": "en", "url": "https://math.stackexchange.com/questions/10427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Nonzero Octonions as a 7-sphere While reading about Moufang loops in the book "An introduction to Quasigroups and their Representations" by Smith, I've encountered the following statement: The set $ S^7 $ of nonzero octonions of norm 1 forms a Moufang loop under multiplication. Geometrically, this set is a 7-sphere. While I understand why this set indeed forms a Moufang loop, I'm not sure how it is viewed as a sphere, or what is the general connection between Moufang loops and this geometrical point of view. Could anyone elaborate?
The real octonions are a normed division algebra, and in such a thing you always have a unit sphere. The unit sphere, because of multiplicativity of the norm, is always closed under the product. The title of your question, though, refers to the nonzero octonions, and that is surely not a sphere (it has the homotopy type of a sphere, and in fact the unit sphere is a strong deformation retract---but I guess that if you know what this term means then you also are aware of this fact!)
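To make the multiplicativity of the norm (and hence the closure of $S^7$ under multiplication) concrete, here is a small numerical sketch in Python, building the octonions by the Cayley-Dickson doubling formula $(p,q)(r,s) = (pr - s^*q,\; sp + qr^*)$ — one of several equivalent sign conventions, so treat the exact formula as an assumption of this sketch:

```python
import random

def conj(a):
    """Cayley-Dickson conjugate: negate the second half, recursively."""
    if len(a) == 1:
        return list(a)
    n = len(a) // 2
    return conj(a[:n]) + [-t for t in a[n:]]

def mult(a, b):
    """Cayley-Dickson product (p,q)(r,s) = (pr - s*q, sp + qr*)."""
    if len(a) == 1:
        return [a[0] * b[0]]
    n = len(a) // 2
    p, q, r, s = a[:n], a[n:], b[:n], b[n:]
    first = [x - y for x, y in zip(mult(p, r), mult(conj(s), q))]
    second = [x + y for x, y in zip(mult(s, p), mult(q, conj(r)))]
    return first + second

def norm(a):
    return sum(t * t for t in a) ** 0.5

# Sanity check at the complex level (length-2 vectors): i * i = -1.
assert mult([0, 1], [0, 1]) == [-1, 0]

# Unit octonions: their product is again a unit octonion, i.e. S^7 is
# closed under multiplication (a Moufang loop, not a group).
random.seed(1)
for _ in range(100):
    u = [random.uniform(-1, 1) for _ in range(8)]
    v = [random.uniform(-1, 1) for _ in range(8)]
    u = [t / norm(u) for t in u]
    v = [t / norm(v) for t in v]
    assert abs(norm(mult(u, v)) - 1) < 1e-9
```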
{ "language": "en", "url": "https://math.stackexchange.com/questions/10475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Availability of logic lectures online I am currently reading Logic and Structure by Dirk van Dalen (2008). However, I am missing some basics, so I try to find related lectures online / on youtube. I frequently watch MIT, Stanford, and University of Nottingham lectures on youtube. However, I have a hard time finding lectures on topics like predicate logic, higher-order logic, and intuitionistic logic. Does anyone happen to have pointers? Thanks.
MIT is certainly good, I don't know what your level is but I personally prefer summer school courses: they are shorter and more condensed, so you may learn more in shorter periods of time. Plus they are addressed to a general audience. When you search on google, try to add key words like "summer school", "winter school". Take a look at this lecture by Alwen Tiu, for instance. More generally, I think that for very basic notions, books are better.
{ "language": "en", "url": "https://math.stackexchange.com/questions/10539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Existence of circuit passing through each vertex in a directed graph Are there necessary and sufficient conditions for the existence of a circuit, or a disjoint set of circuits, that passes through each vertex once in a directed graph?
According to the classic text, "Computers and Intractability, A Guide to the Theory of NP-Completeness" by Garey and Johnson, the following problem: Partition into Hamiltonian Subgraphs Given a directed graph $G=(V,A)$, can the vertices be partitioned into disjoint sets $V_1, V_2, \dots, V_k$ for some $k$, such that each $V_i$ contains at least three vertices and induces a subgraph of $G$ that contains a Hamiltonian Circuit. is NP-Complete, by a reduction from 3SAT. The book also mentions that if we allow each $V_i$ to contain at least two vertices, then this is solvable in polynomial time using Matching techniques.
{ "language": "en", "url": "https://math.stackexchange.com/questions/10572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Formula for positioning each image in a row of images I'm working on a program that displays a bunch of thumbnails on the bottom of the screen and I'm using the mouse's position to slide the images from side to side. The center of the screen is 0 on a number line. Every frame, the mouse's X position is either added or subtracted from the x position of each thumbnail. (So if the mouse is at -15, each thumbnail will move -15 pixels.) The images should stop moving if they are too far from the center. "Too far" means that they are off the screen to a particular side, or if there are more images than fit on the screen, the image should stop sliding when there is enough room for the last image to appear on the opposite side. I guess you could say that the x position of each thumbnail is a function of the mouse position, what number thumbnail it is and the number of thumbnails. The screen width is 800 here (or -400 to 400) What would be the mathematical formula for the above scenario? EDIT, a (perhaps) clearer explanation of what I want: I want the images to scroll horizontally. If the mouse is on the right, they should move towards the left, and if the mouse is on the left, the images should scroll towards the right. The images should stop scrolling so that they never completely scroll offscreen. (If there are more images than the screen can show, allow images to scroll off just enough to show the other images.) The speed of the scrolling depends on the distance of the mouse cursor from the center.
You're probably looking for something like a logit or an expit or even a sine function with the right parameters put in. On a more practical level though, it might be easier to program in a piecewise-linear relationship between your mouse position and the strip movement based on where the strip is.
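To make the piecewise-linear suggestion concrete, here is one possible scheme (a sketch in Python; the function names and the `speed` factor are my own choices, not from the question): the mouse position drives a scroll offset that is clamped so the strip never moves completely offscreen.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def thumb_x(i, count, thumb_w, screen_w, mouse_x, speed=0.5):
    """x position of thumbnail i (0-based), with the screen centred at 0.

    The strip is centred when the offset is 0; the offset follows the mouse
    linearly but is clamped so the strip's edge never scrolls past the
    opposite screen edge. If the whole strip fits on screen it never moves.
    """
    strip_w = count * thumb_w
    max_off = max(0.0, (strip_w - screen_w) / 2)
    offset = clamp(-mouse_x * speed, -max_off, max_off)
    return -strip_w / 2 + i * thumb_w + offset

# A strip of 4 x 100px thumbnails fits on an 800px screen: it never moves.
assert thumb_x(0, 4, 100, 800, mouse_x=300) == -200

# A strip of 20 thumbnails (2000px) can scroll at most 600px either way.
assert thumb_x(0, 20, 100, 800, mouse_x=10_000) == -1600
```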
{ "language": "en", "url": "https://math.stackexchange.com/questions/10623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating probability with a piecewise density function I tried solving this, and I am pretty sure I am integrating this correctly, however, my solution manual shows -1 in the equation when doing this and I do not know why. The answer in the solution manual is correct. Problem: Find the corresponding distribution function and use it to determine the probability that a random variable having the distribution function will take on a value between 0.4 and 1.6.

f(x) = x      for 0 < x < 1
     = 2 - x  for 1 <= x < 2
     = 0      elsewhere

so for F(0.4 < x < 1.6) I did after integrating: 2(1.6) - [(1.6)^2 / 2] - [(0.4)^2 / 2] = 1.84 however the correct answer is 0.84. The solution manual has a -1 in their equation, but I do not know how they got it.
Since $f(t)$ is defined piecewise, you have to be careful with the integral. This is the source of your mistake. If $0 < x < 1$, $$F(x) = \int_0^x f(t) dt = \int_0^x t dt = \left.\frac{1}{2}t^2\right|_0^x = \frac{1}{2}x^2,$$ which you have. However, if $1 \leq x < 2$, then you have to break the integral that yields $F(x)$ up into pieces. This is $$F(x) = \int_0^x f(t) dt = \int_0^1 t dt + \int_1^x (2-t) dt = \left.\frac{1}{2}t^2\right|_0^1 + \left[2t - \frac{1}{2}t^2\right]_1^x $$ $$= \frac{1}{2} + 2x - \frac{1}{2}x^2 - 2 + \frac{1}{2} = 2x - \frac{1}{2}x^2 - 1.$$ Here's where the $-1$ comes in. This is a common mistake. If it makes you feel any better, my probability students trip up over this all the time. :)
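A quick numeric sanity check of the pieces of $F$ (a sketch in Python using the formulas just derived):

```python
def F(x):
    """CDF assembled from the piecewise density f above."""
    if x <= 0:
        return 0.0
    if x < 1:
        return 0.5 * x * x
    if x < 2:
        return 2 * x - 0.5 * x * x - 1  # the -1 comes from splitting the integral at 1
    return 1.0

prob = F(1.6) - F(0.4)
print(prob)  # ≈ 0.84, matching the solution manual
```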
{ "language": "en", "url": "https://math.stackexchange.com/questions/10664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Law of Cosines Proof This question is based on the diagram taken from this link. I don't understand why the areas of A6 and A5 add up to $2bc\cos(A)$.
If you look at A5, it is a rectangle with sides $b$ and, say, $x$. Now $\cos A= x/c$, so $x=c\cos A$. Hence the area of A5 is $bx=bc\cos A$. A similar argument tells you that the area of A6 is $cb\cos A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/10727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Moments and non-negative random variables? I want to prove that for non-negative random variables with distribution F: $$E(X^{n}) = \int_0^\infty n x^{n-1} P(\{X≥x\}) dx$$ Is the following proof correct? $$R.H.S = \int_0^\infty n x^{n-1} P(\{X≥x\}) dx = \int_0^\infty n x^{n-1} (1-F(x)) dx$$ using integration by parts: $$R.H.S = [x^{n}(1-F(x))]_0^\infty + \int_0^\infty x^{n} f(x) dx = 0 + \int_0^\infty x^{n} f(x) dx = E(X^{n})$$ If not correct, then how to prove it?
A better way to prove it would be to use Fubini's theorem to change the order of integration. This also gives you a condition when the result you have is true. Consider $I = \displaystyle \int_0^{\infty}n x^{n-1} (1-F(x))dx$. Using the fact that $\displaystyle \int_x^{\infty} f(y)dy = 1 - F(x)$, we get $I = \displaystyle \int_0^{\infty}n x^{n-1} \int_x^{\infty} f(y) dy dx$. Now we first integrate with respect to $y$ (the inner integral) and $y$ goes from $x$ to $\infty$ and then integrate with respect to $x$ (the outer integral), $x$ goes from $0$ to $\infty$. Change the order of integration. i.e. integrate with respect to $x$ first and then with respect to $y$. Note that this can be done provided the integral $I < \infty$ (See Fubini's theorem). This is the condition svenkatr and trutheality get as well. Changing the order of integration, we get $I = \displaystyle \int_0^{\infty} \displaystyle \int_{0}^{y} nx^{n-1}f(y)dxdy$. Note that now $x$ in the inner integral goes from $0$ to $y$ and $y$ goes from $0$ to $\infty$. Now the inner integral with respect to $x$ can performed easily and now we get $I = \displaystyle \int_0^{\infty} y^{n}f(y)dy = E[X^n]$. Hence, we have $\displaystyle \int_0^{\infty}n x^{n-1} (1-F(x))dx = E[X^n]$.
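As a sanity check (not part of the proof), here is a quick numeric verification in Python for $X \sim \mathrm{Exp}(1)$ and $n = 2$, where $E[X^2] = 2$:

```python
import math

n = 2
# For X ~ Exponential(1): P(X >= x) = e^{-x}, and E[X^n] = n! (here 2).
def integrand(x):
    return n * x**(n - 1) * math.exp(-x)   # n x^{n-1} P(X >= x)

# midpoint rule on [0, 60]; the tail beyond 60 is of size ~ e^{-60}, negligible
N = 200_000
h = 60 / N
approx = sum(integrand((k + 0.5) * h) for k in range(N)) * h
print(approx)  # ≈ 2.0
```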
{ "language": "en", "url": "https://math.stackexchange.com/questions/10779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 1 }
Partitioning a graph (clustering of point sets in 2 dimensions) I am given $n$ points in 2D (each of approximately equal weight). I want to partition them into $m$ clusters ($m$ can be anything and is input by the user) in such a way that the center of mass of each cluster is "far" from the centers of mass of all other clusters. What is a good heuristic approach (it should also be quick and easy to implement) for this? My current approach is to build a binary tree: at each step, I split a cluster along the line that maximizes the moment of inertia of the set of points in the cluster being split. Any suggestion welcome!
You might find K-Means Clustering (and related) helpful. This is one of the classic machine learning problems, you should find plenty of literature on this.
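For reference, the Lloyd iteration behind k-means fits in a few lines of plain Python. This is only a sketch — the naive initialization from the first $m$ points is my own choice; real implementations use smarter seeding such as k-means++:

```python
def kmeans(points, m, iters=50):
    """Lloyd's algorithm on 2-D points; returns m centroids."""
    centers = list(points[:m])  # naive deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(m)]
        for (x, y) in points:
            j = min(range(m),
                    key=lambda j: (x - centers[j][0])**2 + (y - centers[j][1])**2)
            clusters[j].append((x, y))
        for j, c in enumerate(clusters):
            if c:  # keep the old center if a cluster goes empty
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

pts = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0), (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
print(sorted(kmeans(pts, 2)))  # one centroid near (0.1, 0.03), one near (5.0, 5.0)
```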
{ "language": "en", "url": "https://math.stackexchange.com/questions/10856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Let $a$ be a quadratic residue modulo $p$. Prove $a^{(p-1)/2} \equiv 1 \bmod p$. Question: Let $a$ be a quadratic residue to a prime modulus $p$. Prove $a^{(p-1)/2} \equiv 1 \pmod{p}$. My attempt at a solution: \begin{align*} &a\text{ is a quadratic residue}\\\ &\Longrightarrow a\text{ is a residue class of $p$ which has even index $c$ relative to a primitive root $g$}\\\ &\Longrightarrow a \equiv g^c \pmod{p}\\\ &\Longrightarrow a \equiv g^{2k} \pmod{p}\text{ where $2k=c$}\\\ &\Longrightarrow g^{2kv} \equiv g^{c} \pmod{p}\text{ for some natural number $v$}\\\ &\Longrightarrow 2kv \equiv c \pmod{p-1}\text{ (by a proof in class)}\\\ &\Longrightarrow 2kv \equiv 2k \pmod{p-1}\\\ &\Longrightarrow kv \equiv k \pmod{(p-1)/2}\\\ &\Longrightarrow v \equiv k(k^{-1}) \pmod{(p-1)/2}\text{ since $\gcd(2k, p-1)$ is not equal to 1}\\\ &\Longrightarrow k^{-1} \text{ (k inverse exists)}\\\ &\Longrightarrow v \equiv 1 \pmod{(p-1)/2}. \end{align*} I believe this implies that $g^{(p-1)/2} \equiv 1 \pmod{p}$, is this correct? Although what I was required to show was $a^{(p-1)/2} \equiv 1 \pmod{p}$, am I on the right track, how do I show this, I've spent quite some time on this and looked over all the proofs in my notes, I can't seem to find out how.
Here is my proof of Euler's criterion that I created to avoid the words 'field' and 'Lagrange's theorem' mentioned in Wikipedia's proof of Euler's criterion (here). $$\left(\frac{a}{p}\right)\equiv a^{\frac{p-1}{2}}\pmod{\! p},$$ where $p$ is an odd prime (Legendre symbol is only defined for odd primes $p$). $a\equiv 0\pmod{\! p}$ clearly works. If $a\not\equiv 0$, then $a^{p-1}\equiv 1\pmod{\! p}$ by Fermat's little theorem, so $$p\mid (a^{\frac{p-1}{2}}+1)(a^{\frac{p-1}{2}}-1),$$ so by Euclid's lemma (see proof in Wikipedia) $$p\mid a^{\frac{p-1}{2}}+1\ \ \text{ or }\ \ p\mid a^{\frac{p-1}{2}}-1$$ In below theorem I'll use $$a^n-b^n=(a-b)\left(a^{n-1}+a^{n-2}b+\cdots+b^{n-1}\right)$$ Theorem: a polynomial of degree $n\ge 1$ has at most $n$ zeroes mod $p$. Proof: By induction. $x- b\equiv 0\pmod{\! p}$ has exactly one solution. Assume $f(x)_k\equiv 0\pmod{\! p}$ has at most $k$ solutions (where $k\ge 1$ and $f(x)_k$ is a polynomial of degree $k$ with coefficients $a_i$). If $f(x)_{k+1}\equiv 0\pmod{\! p}$ has no solutions, we're done. Otherwise let a solution be $x_1$. Then $$f(x)_{k+1}\equiv f(x)_{k+1}-f(x_1)_{k+1}$$ $$\equiv a_{k+1}\left(x^{k+1}-x_1^{k+1}\right)+a_k\left(x^{k}-x_1^{k}\right)+\cdots+a_1\left(x -x_1\right)+a_0\left(1-1\right)$$ $$\equiv (x-x_1)P(x)\pmod{\! p}$$ with $P(x)$ being a polynomial of degree $k$, so $f(x)_{k+1}\equiv 0\pmod{\! p}$ has at most $k+1$ solutions. We know $a^{p-1}\equiv 1\pmod{\! p}$ has exactly $p-1$ solutions (by little Fermat). $a^{\frac{p-1}{2}}\equiv 1$ and $a^{\frac{p-1}{2}}\equiv -1$ mod $p$ have at most $\frac{p-1}{2}$ solutions each by above theorem, so each has exactly $\frac{p-1}{2}$ solutions. There are $\frac{p-1}{2}$ quadratic residues (excluding $0$) and $\frac{p-1}{2}$ quadratic non-residues (see below for proof), from which, with $\left(x^2\right)^{\frac{p-1}{2}}\equiv 1\pmod{\! p}$ by Fermat's little theorem, Euler's criterion follows. $x^2\equiv y^2\pmod{\! p}\iff x\equiv \pm y\pmod{\! p}$, so $1^2,2^2,\ldots, \left(\frac{p-1}{2}\right)^2$ generate all different non-zero quadratic residues.
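A quick computational check of the criterion (illustration only):

```python
p = 23
squares = {(x * x) % p for x in range(1, p)}          # non-zero quadratic residues
for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)                   # a^((p-1)/2) mod p
    assert euler == (1 if a in squares else p - 1)    # 1 for QRs, -1 for non-residues
print(sorted(squares))  # the (p-1)/2 = 11 residues mod 23
```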
{ "language": "en", "url": "https://math.stackexchange.com/questions/10904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Integral with Tanh: $\int_{0}^{b} \tanh(x)/x \,\mathrm{d} x$. What is the result when $b$ does not tend to infinity, though it is large? Here are two integrals that got my attention because I really don't know how to solve them. They are a solution to the CDW equation below the critical temperature of a 1D strongly correlated electron-phonon system. The second one is used in the theory of superconductivity, while the first is a more complex variation in lower dimensions. I know the result for the second one, but without the whole calculus it is meaningless. $$ \int_0^b \frac{\tanh(c(x^2-b^2))}{x-b}\mathrm{d}x $$ $$ \int_0^b \frac{\tanh(x)}{x}\mathrm{d}x \approx \ln\frac{4e^\gamma b}{\pi} \quad \text{as } b \to \infty$$ where $\gamma = 0.57721...$ is Euler's constant
The constant $C$ given in Aryabhata's answer, as suspected, is exactly $$\gamma + \log \frac{4}{\pi},$$ which, together with Aryabhata's answer, nicely rounds off the second part of this question. Since $ \text{sech} x = 2(e^{-x} – e^{-3x} + e^{-5x} + \cdots ) \qquad (1)$ we have $$\int_0^1 \frac{\tanh x}{x}\mathrm dx = 2\int_0^1 \frac{\sinh x}{x}(e^{-x} – e^{-3x} + e^{-5x} + \cdots )\mathrm dx$$ Now $$2\int_0^1 \frac{\sinh x}{x} e^{-x}\mathrm dx = - \mathrm{Ei}(-2) + \gamma + \log 2$$ $$2\int_0^1 \frac{\sinh x}{x} e^{-3x}\mathrm dx = - \mathrm{Ei}(-4) + \mathrm{Ei}(-2) + \log 2$$ $$2\int_0^1 \frac{\sinh x}{x} e^{-5x}\mathrm dx = - \mathrm{Ei}(-6) + \mathrm{Ei}(-4) + \log (3/2)$$ $$2\int_0^1 \frac{\sinh x}{x} e^{-7x}\mathrm dx = - \mathrm{Ei}(-8) + \mathrm{Ei}(-6) + \log (4/3)$$ and so on, where $\mathrm{Ei}(x)$ is the exponential integral. Thus, interchanging the order of summation, summing and using Wallis's product we obtain $$\int_0^1 \frac{\tanh x}{x}\mathrm dx = \gamma + \log \frac{4}{\pi} -2\mathrm{Ei}(-2)+2\mathrm{Ei}(-4)-2\mathrm{Ei}(-6) + \cdots. \qquad (2)$$ Using $(1)$ for $\mathrm{sech} x$ we also have $$\int_0^1 \frac{2}{x(e^{2/x}+1) }\mathrm dx = 2 \int_1^\infty \frac{1}{x(e^{2x}+1)}\mathrm dx$$ $$= \int_1^\infty \frac{\text{sech} x}{x} e^{-x}\mathrm dx = 2 \int_1^\infty \frac{e^{-2x}}{x} - \frac{e^{-4x}}{x} + \frac{e^{-6x}}{x} - \cdots\mathrm dx $$ $$= -2\mathrm{Ei}(-2)+2\mathrm{Ei}(-4)-2\mathrm{Ei}(-6) + \cdots.$$ And so the result follows from $(2).$
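None of this machinery is needed to check the asymptotics numerically; a small Simpson's-rule script suffices (already at $b=50$ the exponentially small $\mathrm{Ei}$ corrections are invisible at double precision):

```python
import math

def lhs(b, N=200_000):
    """Simpson's rule for ∫_0^b tanh(x)/x dx (the integrand → 1 as x → 0)."""
    f = lambda x: math.tanh(x) / x if x != 0 else 1.0
    h = b / N
    s = f(0) + f(b) \
        + 4 * sum(f((2 * k + 1) * h) for k in range(N // 2)) \
        + 2 * sum(f(2 * k * h) for k in range(1, N // 2))
    return s * h / 3

b = 50.0
gamma = 0.57721566490153286
print(lhs(b), math.log(4 * math.exp(gamma) * b / math.pi))  # agree closely
```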
{ "language": "en", "url": "https://math.stackexchange.com/questions/10972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Convergence of integrals in $L^p$ Stuck with this problem from Zygmund's book. Suppose that $f_{n} \rightarrow f$ almost everywhere and that $f_{n}, f \in L^{p}$ where $1<p<\infty$. Assume that $\|f_{n}\|_{p} \leq M < \infty$. Prove that: $\int f_{n}g \rightarrow \int fg$ as $n \rightarrow \infty$ for all $g \in L^{q}$ such that $\dfrac{1}{p} + \dfrac{1}{q} = 1$. Right, so I estimate the difference of the integrals and using Hölder end up with: $$\left|\int f_{n} g - \int fg\right| \leq \|g\|_{q} \|f_{n} - f\|_{p}$$ From here I'm stuck because we are not assuming convergence in the seminorm but just pointwise convergence almost everywhere. How to proceed?
HINT By Egorov's theorem, convergence a.e. implies for every $\epsilon$ there exists $B$ with $|B| < \epsilon$ such that $f_n\to f$ uniformly on $X\setminus B$ (where $X$ is "almost" the whole space). Split $\int (f_n - f)g$ in two pieces, one over $B$ and one over $X\setminus B$. On $X\setminus B$ uniform convergence implies the integral can be made as small as you want. Holder's inequality implies the integral on $B$ is controlled by $(M + \|f\|_p)\|g\|_{L^q(B)}$. Taking $B\searrow$ a measure zero set, then the integral on $B$ of $g$ goes to zero. Finish by taking a diagonalizing sequence as usual.
{ "language": "en", "url": "https://math.stackexchange.com/questions/11028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 3 }
Calculus, find the limit, Exp vs Power? $\lim_{x\to\infty} \frac{e^x}{x^n}$ where $n$ is any natural number. Using L'Hôpital doesn't make much sense to me. I did find this in the book: "In a struggle between a power and an exp, the exp wins." Can I refer to that line as an answer? If the fraction were flipped, then the limit would be zero. But in this case the limit is actually $\infty$
I believe the problem is tailor-made for repeated application of L'Hopital's Rule, but here are some thoughts ... You could note that $e^{x} = (e^{x/n})^n$, and consider $\left( \lim \frac{e^{x/n}}{x}\right)^n$, so that you are comparing an exponential to a single power of $x$, which might be a bit less daunting for you. A bit more cleanly, and to make the numerator and denominator match better, define $y := \frac{x}{n}$. Then $$\frac{e^{x}}{x^n}=\frac{e^{ny}}{(ny)^n}=\frac{\left(e^{y}\right)^n}{n^n y^n}=\frac{1}{n^n}\frac{\left(e^{y}\right)^n}{y^{n}}=\frac{1}{n^n}\left(\frac{e^y}{y}\right)^n$$ Since $n$ is a constant, you can direct your limiting attention to $\frac{e^y}{y}$ (as $y \to \infty$, of course).
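The substitution can be checked numerically, and the growth is plain to see (a quick sketch for $n=5$):

```python
import math

n = 5
f = lambda x: math.exp(x) / x**n

# the rewrite in the hint: e^x / x^n = (1/n^n) * (e^{x/n} / (x/n))^n
for x in (20.0, 40.0, 80.0):
    y = x / n
    rewritten = (math.exp(y) / y)**n / n**n
    assert math.isclose(f(x), rewritten, rel_tol=1e-9)

print(f(20.0), f(40.0), f(80.0))  # grows without bound
```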
{ "language": "en", "url": "https://math.stackexchange.com/questions/11081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Yet another inequality: $|a+b|^p<2^p(|a|^p+|b|^p)$ Let $a$ and $b$ be real numbers and $p>0$. What is the best way to prove that $|a+b|^p<2^p(|a|^p+|b|^p)$?
If $0<p\leq1$ you don't need any powers of $2$. This came up in another recent question. If $1\leq p$ you can strengthen the inequality to $|a+b|^p\leq 2^{p-1}(|a|^p+|b|^p)$ by applying convexity of the function $t\mapsto t^p$.
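A quick random stress test of both bounds (numerical illustration only; the small multiplicative slack absorbs floating-point rounding in the near-equality case $a = b$):

```python
import random

rng = random.Random(1)
for _ in range(10_000):
    a, b = rng.uniform(-10, 10), rng.uniform(-10, 10)
    p = rng.uniform(0.01, 5)
    lhs = abs(a + b)**p
    assert lhs <= 2**p * (abs(a)**p + abs(b)**p) * (1 + 1e-9)
    if p >= 1:  # the sharper bound from convexity of t -> t^p
        assert lhs <= 2**(p - 1) * (abs(a)**p + abs(b)**p) * (1 + 1e-9)
print("both bounds held on 10,000 random samples")
```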
{ "language": "en", "url": "https://math.stackexchange.com/questions/11122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Solving the recurrence relation that contains summation of nth term $$T(n)=1+2\sum_{i=1}^{n-1}T(i) , \quad n > 1$$ $$T(1)=1$$ any hint or how to solve?
Take a look at Kelley and Peterson's textbook [1]. They provide a very good discussion of difference equations in this text. I believe you can relate the information in this book to answer your question. You will find further discussion regarding the answers posted here by Moron and Ross Millikan. Let me know if you still cannot figure it out! [1] Kelley, W. & Peterson, A. (2001). Difference Equations: An Introduction with Applications (2nd Ed.). San Diego, CA: Academic Press.
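Making the standard hint concrete: subtracting consecutive terms gives $T(n+1) - T(n) = 2T(n)$, i.e. $T(n+1) = 3T(n)$, so $T(n) = 3^{n-1}$. A quick check:

```python
def T(n):
    vals = [1]                      # T(1)
    for _ in range(2, n + 1):
        vals.append(1 + 2 * sum(vals))
    return vals[-1]

print([T(n) for n in range(1, 8)])  # [1, 3, 9, 27, 81, 243, 729]
assert all(T(n) == 3**(n - 1) for n in range(1, 20))
```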
{ "language": "en", "url": "https://math.stackexchange.com/questions/11194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Another limit task, x over ln x, L'Hôpital won't do it? $$\lim_{x\to 0 } \frac{x}{\ln x}.$$ This was wrong, I got a big red wrong! Why doesn't L'Hôpital work on this one? The problem is that $\ln$ is not defined for 0. It needs to be rewritten? (Thanks to everyone helping me out with my homework; due to anxiety I'm not able to attend the class workshops.) Edit: I did get the correct limit ($0$), but that was a coincidence.
L'Hopital's rule is best forgotten about. Of course as $x\to0^+$, $\log x\to-\infty$ and so $1/\log x\to0$. A fortiori $x/\log x\to 0$. A better problem is to find $\lim_{x\to0^+}x\log x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/11236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
What are some interpretations of Von Neumann's quote? John Von Neumann once said to Felix Smith, "Young man, in mathematics you don't understand things. You just get used to them." This was a response to Smith's fear about the method of characteristics. Did he mean that with experience and practice, one obtains understanding?
One thing he might have been referring to is that in mathematics you often have to learn to apply a method without actually understanding what it is all about. Take for example matrix multiplication. You could (and many students do) beat themselves up about why it is so "weird" in comparison to say multiplication of the reals. But it turns out that yes it has those weird properties because it is perfect for representing a linear transform, amongst other things. In general I have found it counter productive to try and understand every aspect of something before moving on to the next thing. I just accept that that's the way it works, trust that one day it will have some sort of application, be useful or otherwise "make sense". Note that the history of mathematics is full of branches of mathematics that didn't even have this sort of utility when they were initially created and explored, but have later turned out to be enormously important. Take for example Boolean algebra and knot theory. Another important point is that mathematics is the study of abstract logical systems, including wholly invented ones. Therefore it can be pretty fruitless to understand some of the deeper meanings of a mathematical concept, because they might not even exist. Sure there might be deep connections or generalisations to other mathematical concepts, and applications might be found, but trying to say that the application is "the true form" of the mathematical concept is putting the cart before the horse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/11267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 13, "answer_id": 7 }
When functions commute under composition Today I was thinking about composition of functions. It has nice properties, its always associative, there is an identity, and if we restrict to bijective functions then we have an inverse. But then I thought about commutativity. My first intuition was that bijective self maps of a space should commute but then I saw some counter-examples. The symmetric group is only abelian if $n \le 2$ so clearly there need to be more restrictions on functions than bijectivity for them to commute. The only examples I could think of were boring things like multiplying by a constant or maximal tori of groups like $O(n)$ (maybe less boring). My question: In a euclidean space, what are (edit) some nice characterizations of sets of functions that commute? What about in a more general space? Bonus: Is this notion of commutativity important anywhere in analysis?
A classic result of Ritt shows that polynomials that commute under composition must be, up to a linear homeomorphism, either both powers of $x$, both iterates of the same polynomial, or both Chebyshev polynomials. Actually Ritt proved a more general rational function case - follow the link. His work was motivated by work of Julia and Fatou's work on Julia sets of rational functions, e.g. see here for a modern presentation.
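Ritt's Chebyshev case is easy to check numerically: writing $T_n(\cos\theta)=\cos n\theta$, one gets $T_m \circ T_n = T_n \circ T_m = T_{mn}$ on $[-1,1]$. A small sketch:

```python
import math

def cheb(n, x):
    """Chebyshev polynomial T_n at x, via T_{k+1} = 2x T_k - T_{k-1}."""
    a, b = 1.0, x          # T_0, T_1
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

for x in (-0.7, 0.1, 0.9):
    t = cheb(3, cheb(5, x))
    assert math.isclose(t, cheb(5, cheb(3, x)), rel_tol=1e-9, abs_tol=1e-12)
    assert math.isclose(t, cheb(15, x), rel_tol=1e-9, abs_tol=1e-12)
print("T_3 ∘ T_5 = T_5 ∘ T_3 = T_15 at the sample points")
```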
{ "language": "en", "url": "https://math.stackexchange.com/questions/11431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 2 }
If there are $200$ students in the library, how many ways are there for them to be split among the floors of the library if there are $6$ floors? Need help studying for an exam. Practice Question: If there are $200$ students in the library, how many ways are there for them to be split among the floors of the library if there are $6$ floors? Hint: The students can not be told apart (they are indistinguishable). The answer must be in terms of $P(n,r), C(n,r)$, powers, or combinations of these. The answers do not have to be calculated.
I'm too new here to comment on @Sivaram's answer, but I believe it to be correct and well explained. For further reference see Stanley's "Twelvefold Way" in Combinatorics at either [1] or the Wikipedia page. [1] http://mathsci.kaist.ac.kr/~drake/pdf/twelvefold-way.pdf
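Sivaram's answer is not reproduced here, but assuming the intended count is the standard stars-and-bars number $C(n+m-1,\,m-1)$ — here $C(205,5)$ — brute force agrees on a small instance:

```python
from math import comb
from itertools import product

def brute(students, floors):
    """Count nonnegative integer tuples of length `floors` summing to `students`."""
    return sum(1 for c in product(range(students + 1), repeat=floors)
               if sum(c) == students)

assert brute(5, 3) == comb(5 + 3 - 1, 3 - 1) == 21
print(comb(200 + 6 - 1, 6 - 1))  # C(205, 5): 200 students on 6 floors
```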
{ "language": "en", "url": "https://math.stackexchange.com/questions/11468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Find formula from values Is there any "algorithm" or steps to follow to get a formula from a table of values. Example: Using this values: X Result 1 3 2 5 3 7 4 9 I'd like to obtain: Result = 2X+1 Edit Maybe using excel? Edit 2 Additional info: It is not going to be always a polynomial and it may have several parameters (I think 2).
The best tool for doing this is that impressive piece of software: http://www.nutonian.com/products/eureqa/ Edit: For your abovementioned very easy example, even WA will find the right formula: http://www.wolframalpha.com/input/?i=3,+5,+7,+9,... Edit 2: Unfortunately Nutonian was bought by DataRobot and their product is no longer freely available (not even for academic use). Yet in many cases other solutions exist, see e.g. my blog post here: Symbolic Regression, Genetic Programming… or if Kepler had R.
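For the toy table above, even plain least squares recovers the line. A dependency-free sketch (this only handles a suspected linear fit — tools like Eureqa search a far larger space of candidate formulas):

```python
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# ordinary least squares for y = slope * x + intercept
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(slope, intercept)  # 2.0 1.0  ->  Result = 2x + 1
```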
{ "language": "en", "url": "https://math.stackexchange.com/questions/11502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 8, "answer_id": 5 }
$\gcd(b^x - 1, b^y - 1, b^ z- 1,\dots) = b^{\gcd(x, y, z,\dots)} -1$ Possible Duplicate: Prove that $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n, m)} - 1$ $b$, $x$, $y$, $z$, $\ldots$ are integers greater than 1. How can we prove that $$ \gcd (b ^ x - 1, b ^ y - 1, b ^ z - 1 ,\dots)= b ^ {\gcd (x, y, z, \dots)} - 1\ ? $$
It suffices to prove it for two terms, that is, $\gcd(a^n - 1, a^m - 1) = a^{\gcd(n,m)} - 1$. The basic idea is that we can use the Euclidean algorithm on the exponents, as follows: if $n > m$, then $$\gcd(a^n - 1, a^m - 1) = \gcd(a^n - 1, a^n - a^{n-m}) = \gcd(a^{n-m} - 1, a^m - 1).$$ So we can keep subtracting one exponent from the other until we get $\gcd(n, m)$ as desired. Another way to look at this computation is to write $d = \gcd(a^n - 1, a^m - 1)$ and note that $$a^n \equiv 1 \bmod d, a^m \equiv 1 \bmod d \Rightarrow a^{nx+my} \equiv 1 \bmod d$$ from which it readily follows, as before, that $a^{\gcd(n,m)} \equiv 1 \bmod d$, so $d$ divides $a^{\gcd(n,m)} - 1$. On the other hand, $a^{\gcd(n, m)} - 1$ also divides $d$. To see this, denote $e \cdot \gcd(n,m) = n$ and $f \cdot \gcd(n,m) = m$ (verify for yourself that this makes sense). We then have \begin{align*}a^n-1 = (a^{\gcd(n,m)})^e -1 \equiv & 0 \pmod{a^{\gcd(n,m)}-1} \\ a^m-1 = (a^{\gcd(n,m)})^f-1 \equiv & 0 \pmod{a^{\gcd(n,m)}-1}. \end{align*} Hence we have $$a^{\gcd(n,m)}-1 \; |\; d .$$ What's really nice about this result is that it holds both for particular values of $a$ and also for $a$ as a variable, e.g. in a polynomial ring with indeterminate $a$. You can readily deduce several seemingly nontrivial results from this; for example, the sequence defined by $a_0 = 2, a_n = 2^{a_{n-1}} - 1$ is a sequence of pairwise relatively prime integers, from which it follows that there are infinitely many primes. By working only slightly harder you can deduce that in fact there are infinitely many primes congruent to $1 \bmod p$ for any prime $p$.
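A brute-force check of the multi-term identity (illustration only):

```python
from math import gcd
from functools import reduce
from random import Random

rng = Random(7)
for _ in range(200):
    b = rng.randint(2, 9)
    exps = [rng.randint(1, 12) for _ in range(rng.randint(2, 4))]
    lhs = reduce(gcd, (b**e - 1 for e in exps))
    assert lhs == b**reduce(gcd, exps) - 1
print("identity verified on 200 random cases")
```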
{ "language": "en", "url": "https://math.stackexchange.com/questions/11567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 0 }
Algebraic Identity $a^{n}-b^{n} = (a-b) \sum\limits_{k=0}^{n-1} a^{k}b^{n-1-k}$ Prove the following: $\displaystyle a^{n}-b^{n} = (a-b) \sum\limits_{k=0}^{n-1} a^{k}b^{n-1-k}$. So one could use induction on $n$? Could one also use trichotomy or some type of combinatorial argument?
EDIT Proof by induction. The case $n=1$ is clear. Suppose the identity holds for $n$, i.e. $a^{n}-b^{n} = (a-b)\displaystyle\sum\limits_{k=0}^{n-1} a^{k}b^{n-1-k}$. Then $$a^{n+1}-b^{n+1}=a\cdot a^{n}-b\cdot b^{n}=a\left(a^{n}-b^{n}\right)+b^{n}\left(a-b\right),$$ and using the hypothesis, $$a^{n+1}-b^{n+1}=(a-b)\displaystyle\sum\limits_{k=0}^{n-1} a^{k+1}b^{n-1-k}+(a-b)b^{n}=(a-b)\left(\displaystyle\sum\limits_{k=1}^{n} a^{k}b^{n-k}+b^{n}\right)=(a-b)\displaystyle\sum\limits_{k=0}^{n} a^{k}b^{n-k}.$$ So the identity holds for $n+1$, which completes the proof.
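A quick numerical check of the identity (illustration only):

```python
for a, b in [(-3, 2), (5, 5), (7, -2), (1.5, 0.5)]:
    for n in range(1, 8):
        rhs = (a - b) * sum(a**k * b**(n - 1 - k) for k in range(n))
        assert abs(a**n - b**n - rhs) < 1e-9 * (1 + abs(a)**n + abs(b)**n)
print("identity holds for all sampled (a, b, n)")
```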
{ "language": "en", "url": "https://math.stackexchange.com/questions/11618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 1 }
Why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$ I have a question about the set of functions from a set to another set. I am wondering about the degenerate cases. Suppose $X^Y$ denotes the set of functions from a set $Y$ to a set $X$, why is $|Y^{\emptyset}|=1$ but $|\emptyset^Y|=0$ where $Y\neq \emptyset$?
Because the empty function is the unique function from the empty set to an arbitrary set $Y$, while if $Y\neq\emptyset$, then there exists $y\in Y$, but there's no place in $\emptyset$ for a function to send $y$ to.
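The counts can be verified by brute enumeration; `itertools.product` with `repeat=0` yields exactly one empty tuple, which encodes the empty function:

```python
from itertools import product

def functions(Y, X):
    """All functions Y -> X (the set X^Y in the question's notation), as dicts."""
    Y, X = list(Y), list(X)
    return [dict(zip(Y, values)) for values in product(X, repeat=len(Y))]

assert functions([], {1, 2, 3}) == [{}]                # |X^∅| = 1: the empty function
assert len(functions(['a', 'b'], {1, 2, 3})) == 9      # |X|^|Y| = 3^2
assert len(functions(['a'], [])) == 0                  # |∅^Y| = 0 when Y ≠ ∅
```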
{ "language": "en", "url": "https://math.stackexchange.com/questions/11771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Proof by induction $\frac1{1 \cdot 2} + \frac1{2 \cdot 3} + \frac1{3 \cdot 4} + \cdots + \frac1{n \cdot (n+1)} = \frac{n}{n+1}$ Need some help on following induction problem: $$\dfrac1{1 \cdot 2} + \dfrac1{2 \cdot 3} + \dfrac1{3 \cdot 4} + \cdots + \dfrac1{n \cdot (n+1)} = \dfrac{n}{n+1}$$
Essentially you test the base case, $p(1)$, which is true since $\frac{1}{1\cdot 2} = \frac{1}{2} = \frac{1}{1+1}$. Then you assume $p(n)$ is true, namely that the sum above equals $\frac{n}{n+1}$, and add the next term of the sequence, which is $\frac{1}{(n+1)(n+2)}$: $$\frac{1}{1\cdot 2} + \cdots + \frac{1}{n(n+1)} + \frac{1}{(n+1)(n+2)} = \frac{n}{n+1} + \frac{1}{(n+1)(n+2)}.$$ Now (this is the algebra in user17762's hint) $$\frac{n}{n+1} + \frac{1}{(n+1)(n+2)} = \frac{n(n+2)+1}{(n+1)(n+2)} = \frac{(n+1)^2}{(n+1)(n+2)} = \frac{n+1}{n+2},$$ which is exactly $p(n+1)$, so the statement holds for all $n$ by induction. B.Y.U.
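An exact-arithmetic check of the closed form with `fractions.Fraction`:

```python
from fractions import Fraction

for n in range(1, 50):
    s = sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))
    assert s == Fraction(n, n + 1)
print("sum equals n/(n+1) for n = 1..49")
```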
{ "language": "en", "url": "https://math.stackexchange.com/questions/11831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
What Re(f(z))=c is if f is a holomorphic function? Suppose that $f:U\subset\mathbb{C}\to\mathbb{C}$, where $U$ is a region in the complex plane, is a holomorphic function. If $c\in\mathbb{R}$ is a regular value for $\text{Re}(f(z))$ then it follows from implicit function theorem that $\text{Re}(f(z))^{-1}(c)$ is at least locally a differentiable curve in the plane. Question: 1- If $c$ is a regular value is any connected component of $\text{Re}(f(z))^{-1}(c)$ a global differentiable curve ? 2-If $c$ is not a regular value and $\text{Re}(f(z))^{-1}(c)$ have at least one cluster point is this set locally a curve ?
(1) To use your notation, if $c \in \mathbb{R}$ is a regular value, then the level set $Re(f(z))^{-1}(c)$ will be a 1-dimensional embedded submanifold of $U \subset \mathbb{R}^2$. Therefore, every connected component will be a connected 1-manifold. Now, I'm not sure what exactly you mean by "global differentiable curve." If you mean "something of the form $f(t) = (x(t), y(t))$," then it follows from the comments in this question that yes, every connected component can be put in that form. If you mean "something of the form $y = f(x)$ or $x = f(y)$" then the answer is (I think) no. For example, consider $f(z) = \log z$ on $U = R^2 - \{x \geq 0, y = 0\}$, i.e. the plane with the non-negative x-axis deleted. Then $Re(f(z)) = \log(\sqrt{x^2+y^2})$, so the level set $Re(f(z))^{-1}(1)$ is the circle $x^2 + y^2 = e^2$ minus the point $(e,0)$. So, it doesn't seem like you'd be able to represent this curve by a single function $x = f(y)$ or $y = f(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/11892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Evaluation of the limit, $\lim \limits_{x\rightarrow\infty} \left(\frac{20x}{20x+4}\right)^{8x}$, using only elementary methods I was assisting a TA for an introductory calculus class with the following limit, $$\lim_{x \rightarrow \infty} \left(\frac{20x}{20x+4}\right)^{8x}$$ and I came to simple solution which involved evaluating the "reciprocal" limit $$\lim_{z \rightarrow 0} \left(\frac{1}{1+\frac{z}{5}}\right)^{8/z}$$ by using the Taylor expansion of $\log(1+z)$ around $z=0$. However, the TA claims that the students have not learned about series expansions so that would not be a valid solution for the course. I tried applying L'Hopital's rule, which I was told the class did cover, but I was unsuccessful. As a note I will mention that $$\lim_{x \rightarrow \infty} \left(\frac{20x}{20x+4}\right)^{8x} = e^{-8/5}.$$ Any ideas for a solution to this problem using only knowledge from a first quarter (or semester) calculus course which hasn't covered series expansions?
HINT: Always start by simplifying $$\left(\frac{20x}{20x+4}\right)^{8x}=\left(\frac{20x+4}{20x}\right)^{-8x}=\left(1+\frac{1}{5x}\right)^{-8x}$$ perhaps a substitution helps...
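A numeric check agrees with $e^{-8/5}$ (using `log1p` so the evaluation stays accurate for large $x$):

```python
import math

def f(x):
    # 20x/(20x+4) = 1 - 4/(20x+4); log1p keeps 8x*log(...) accurate for large x
    return math.exp(8 * x * math.log1p(-4 / (20 * x + 4)))

for x in (1e3, 1e5, 1e7):
    print(x, f(x))          # tends to exp(-8/5) ≈ 0.2019
```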
{ "language": "en", "url": "https://math.stackexchange.com/questions/11941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Guidance on a Complex Analysis question My homework question: Show that all zeros of $$p(z)=z^4 + 6z + 3$$ lie in the circle of radius $2$ centered at the origin. I know $p(z)$ has a zero-count of $4$ by using the Fundamental Theorem of Algebra. Then using the Local Representation Theorem the $$\int \frac{n}{z+a} = 4(2 \pi i).$$ I am assuming $a=0$ since we are centered at the origin. I apologize for my lack of math-type. What does $$= 8 \pi i$$ mean? Am I going around the unit circle $4$ times? Or is it even relevant to my final answer. Which I am assuming is finding the coordinates to the $4$ singularities. I have always looked for my singularities in the values that make the denominator zero, but in this question my denominator is $z$. $z=0$ doesn't seem right. So the question is, am I suppose to factor the polynomial $z^4 + 6z + 3$ to find the zeros? Thanks
Choose $f(z)=z^4$. Then on $|z|=2$ we have $|f(z)-p(z)|=|6z+3|\le 15<16=|f(z)|$, so by Rouché's Theorem $f(z)$ and $p(z)$ have the same number of zeros inside $|z|<2$, and we are done.
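The key boundary estimate is: on $|z| = 2$, $|p(z) - f(z)| = |6z + 3| \le 6\cdot 2 + 3 = 15 < 16 = |z^4| = |f(z)|$. A numeric spot check of that inequality:

```python
import cmath, math

p = lambda z: z**4 + 6*z + 3
f = lambda z: z**4

worst = max(abs(p(z) - f(z)) for z in
            (2 * cmath.exp(1j * 2 * math.pi * k / 1000) for k in range(1000)))
print(worst)      # max of |6z + 3| on |z| = 2 is 15, attained at z = 2
assert worst < 16  # = |z^4| on |z| = 2, so Rouché applies
```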
{ "language": "en", "url": "https://math.stackexchange.com/questions/11986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$ What are the conditions for existence of the Fourier series expansion of a function $f\colon\mathbb{R}\to\mathbb{R}$?
If $f\in L^1_\text{loc}(\mathbb{R})$, then on an interval $I=(a,b)$ we can define $$\hat{f}(n)=\frac{1}{b-a}\int_a^b f(x)e^{-2\pi inx/(b-a)}dx.$$ However, in order for the formal Fourier series $$S[f](x)=\sum_{-\infty}^{\infty} \hat{f}(n)e^{2\pi inx/(b-a)}$$ to converge we need more conditions on $f$. Kolmogorov proved in 1925 that there is $f\in L^1(0,2\pi)$ such that $S[f]$ diverges almost everywhere. In 1966 Carleson proved that $S[f]$ converges almost everywhere provided $f\in L^2(0,2\pi)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/12034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Given a function $f(x)$ where $x$ is uniformly distributed between $a$ and $b$, how do I find the probability density function of $f$? For example, if $f(x) = \sin x$ and $x$ is uniformly distributed on $[0, \pi]$, how is the equation found that satisfies the probability distribution function of $f(x)$? I imagine the distribution function will be greater when the derivative of $f(x)$ is closer to zero, but this is just a guess. I apologize if this question is vague or not advanced enough, but I can't find the answer anywhere.
While searching for the solution to a similar problem, I came across your interesting discussion. But there is something I doubt about Zarrax's answer: if the probability density function is as in your result, we can easily find values of alpha that make the density function greater than 1. Is that physically correct?
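Here is a sketch of the change-of-variables computation the question asks for, worked for the concrete example $Y=\sin X$ with $X$ uniform on $[0,\pi]$ (the Monte Carlo check and its parameters are my own illustration). Note also that a density exceeding $1$ is perfectly legitimate as long as it integrates to $1$, so the worry in the comment above does not indicate an error.

```python
import math
import random

def cdf_Y(y):
    # For y in [0, 1]: P(sin(X) <= y) = P(X <= arcsin y) + P(X >= pi - arcsin y)
    #                                 = 2 * arcsin(y) / pi
    return 2.0 * math.asin(y) / math.pi

def pdf_Y(y):
    # derivative of the CDF; it is largest where f'(x) = cos(x) is near zero,
    # confirming the guess made in the question
    return 2.0 / (math.pi * math.sqrt(1.0 - y * y))

def empirical_cdf(y, n=200_000, seed=0):
    # Monte Carlo estimate of P(sin(X) <= y) for X uniform on [0, pi]
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if math.sin(rng.uniform(0.0, math.pi)) <= y)
    return hits / n

for y in (0.25, 0.5, 0.9):
    print(y, cdf_Y(y), empirical_cdf(y))
print(pdf_Y(0.99))  # > 1 near y = 1, which is allowed for a density
```

So $f_Y(y)=\frac{2}{\pi\sqrt{1-y^2}}$ on $(0,1)$: finite everywhere inside the interval, but blowing up (and exceeding $1$) as $y\to 1$, where the derivative of $\sin x$ vanishes.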
{ "language": "en", "url": "https://math.stackexchange.com/questions/12069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is $\lim\limits_{n \to \infty}\frac{1}{n}\left( \cos{\frac{\pi}{n}} + \cos{\frac{2\pi}{n}} + \ldots + \cos{\frac{n\pi}{n}} \right)$ a Riemann sum? This is probably simple, but I'm solving a practice problem: $\lim_{n \to \infty}\frac{1}{n}\left( \cos{\frac{\pi}{n}} + \cos{\frac{2\pi}{n}} + \ldots +\cos{\frac{n\pi}{n}} \right)$ I recognize this as the Riemann sum from 0 to $\pi$ on $\cos{x}$, i.e. I think its the integral $\int_0^\pi{ \cos{x}dx }$ which is 0, but the book I'm using says it should be $ \frac{1}{\pi}\int_0^\pi{ \cos{x}dx }$ Still 0 anyway, but where did the $\frac{1}{\pi}$ in front come from?
The sum is also the real part of $$\frac{1}{n}\left(e^{i\frac{\pi}{n}}+e^{i\frac{2\pi}{n}}+\ldots+e^{i\frac{n\pi}{n}}\right) \; .$$
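Continuing that hint, the finite sum has a closed form via the geometric series: with $z=e^{i\pi/n}$, $\sum_{k=1}^n z^k = \frac{z(z^n-1)}{z-1} = \frac{-2z}{z-1}$ since $z^n=e^{i\pi}=-1$. The numeric comparison below is my illustration:

```python
import cmath
import math

def direct(n):
    # (1/n) * sum_{k=1..n} cos(k*pi/n), the sum in the question
    return sum(math.cos(k * math.pi / n) for k in range(1, n + 1)) / n

def closed_form(n):
    # real part of (-2z/(z-1))/n with z = e^{i*pi/n}
    z = cmath.exp(1j * math.pi / n)
    return (-2 * z / (z - 1)).real / n

for n in (10, 100, 10_000):
    print(n, direct(n), closed_form(n))  # both equal -1/n, tending to 0
```

In fact $\operatorname{Re}\!\left(\frac{-2z}{z-1}\right) = -1$ for every $n$, so the sum is exactly $-1/n$ and the limit is $0$, consistent with $\frac{1}{\pi}\int_0^\pi \cos x\,dx = 0$.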
{ "language": "en", "url": "https://math.stackexchange.com/questions/12107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 4 }
How do I calculate the instantaneous angular speed of a spindle given a spooled radius and required feed speed? In an attempt to model the cassette tape spooling action, I'm hoping to solve this problem. Required feed speed: 5 cm/second. Two spindles with starting radii S1: 0.8 cm (empty spool) and S2: 1.8 cm (full spool). The spooled tape thickness is unknown but can be assumed to be around 15 micrometers if needed. The time to complete the spooling from one spool to the other is required to be 45 minutes.
Suppose that the feed spindle has outer radius $r_f(t)$ and the intake spindle has outer radius $r_i(t)$; similarly, let the spindles' angular velocities be $\omega_f(t)$ and $\omega_i(t)$. Note that both $r_f$ and $r_i$ lie in the interval $[r_0, R_0]$, where $r_0= .8$ cm and $R_0 = 1.8$ cm. The only constraint is that the tape must pass between the two spindles at a constant rate of $s = 5$ cm/s. For the tape to remain taut under this condition, the tangential velocity of each spindle must be $s$. Thus, $$s = r_f(t)\cdot\omega_f(t) = r_i(t)\cdot\omega_i(t).$$ So $$\omega(t) = \frac{s}{r(t)}$$ for either spool. So now we simply need to find a description of the outer radius of each spindle. Consider that the tape has thickness $a \ll r_0$ and width $w$; thus the volume of tape transferred per unit time is $$ \frac{dV}{dt} = \pm a s w$$ and since $V = \pi w r(t)^2$ we also have $$ \frac{dV}{dt} = 2\pi r(t) \dot{r}(t) w.$$ Equating these expressions and solving the IVP for the feeding spindle yields $$r_f(t) = \sqrt{R_0^2 - \frac{as}{\pi} t}.$$ Similarly, the IVP for the intake spindle yields $$r_i(t) = \sqrt{r_0^2 + \frac{as}{\pi} t}.$$ From these expressions it is easy to see that the proper speeds at which to drive the spindles are approximately $$\omega_f(t) = \left(\sqrt{\left(\frac{R_0}{s}\right)^2 - \frac{a}{s\pi} t}\right)^{-1}$$ and $$\omega_i(t) = \left(\sqrt{\left(\frac{r_0}{s}\right)^2 + \frac{a}{s\pi} t}\right)^{-1}.$$
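The model above can be evaluated numerically; the parameter values below are taken from the question, with the 15 micrometer tape thickness being the question's own assumed value:

```python
# Spindle model: r_f(t) = sqrt(R0^2 - a*s*t/pi), r_i(t) = sqrt(r0^2 + a*s*t/pi),
# and omega(t) = s / r(t).  All lengths in cm, times in s.
import math

r0, R0 = 0.8, 1.8        # empty / full spool radii (cm)
s = 5.0                  # feed speed (cm/s)
a = 15e-4                # tape thickness: 15 micrometers in cm

def r_feed(t):
    return math.sqrt(R0**2 - a * s * t / math.pi)

def r_intake(t):
    return math.sqrt(r0**2 + a * s * t / math.pi)

def omega(r):
    return s / r         # rad/s, from s = r * omega

# Time for the feed spool to shrink from R0 to r0 (all tape transferred):
t_total = math.pi * (R0**2 - r0**2) / (a * s)
print(t_total / 60.0, "minutes")
```

Interestingly, with these particular values the transfer takes about 18 minutes rather than the required 45, which suggests the real tape is thinner or the feed speed lower than assumed.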
{ "language": "en", "url": "https://math.stackexchange.com/questions/12162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unit sphere compactness in a metric space Define $A$ as a nonempty set, $\mathcal{B}:=\{f: A \rightarrow \mathbb{R}: f(A) \text{ is bounded} \}$, $d_\infty:=\sup\{|f(x)-g(x)|:x \in A\}$. For which $A$ is $\overline {B_1(0)} \subset \mathcal{B}$ compact? Notes: $\overline{B_1(0)}$ is the closed unit ball. I already proved that $(\mathcal{B},d_\infty)$ is a metric space. My thoughts: I investigated a bit and found some proofs that the closed unit ball in a Banach space is compact when the Banach space is finite-dimensional. But I don't want to use this here; I don't even know whether $(\mathcal{B},d_\infty)$ is a Banach space. The only dimension-independent facts we proved about compactness in a metric space are that the intersection of compact subsets is compact and that a finite union of compact subsets is compact. Maybe we can use this?
The result you mention has a converse: the closed unit ball in a Banach space is compact if and only if the Banach space is finite-dimensional. That should suggest to you what is true here.
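Spelling out how that criterion applies here (the finite/infinite dichotomy below is my elaboration of the hint, not part of the original answer):

```latex
\textbf{Sketch.} If $A$ is infinite, pick distinct $a_1,a_2,\dots\in A$ and
let $e_k=\mathbf{1}_{\{a_k\}}$ be the indicator functions. Each $e_k$ lies
in $\overline{B_1(0)}$, but $d_\infty(e_j,e_k)=1$ for $j\neq k$, so
$(e_k)$ has no convergent subsequence and $\overline{B_1(0)}$ is not
compact.

If $A$ is finite with $|A|=n$, then $f\mapsto(f(a_1),\dots,f(a_n))$ is an
isometry from $(\mathcal{B},d_\infty)$ onto
$(\mathbb{R}^n,\|\cdot\|_\infty)$, so $\overline{B_1(0)}$ is closed and
bounded in $\mathbb{R}^n$, hence compact by Heine--Borel.
```

So $\overline{B_1(0)}$ is compact exactly when $A$ is finite.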
{ "language": "en", "url": "https://math.stackexchange.com/questions/12202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }