polar coordinates of Gaussian Distribution with non zero mean I found that the polar coordinates of 2-dimensional Gaussian distribution with mean zero $$\frac{1}{2\pi\sigma^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\big(-({x^2+y^2})/{2\sigma^2}\big) \,\mathrm{d}x\,\mathrm{d}y$$ is $$\frac{1}{\sigma^2}\int_{0}^{\infty}\exp\big(-r^2/{2\sigma^2}\big) \,r\mathrm{d}r$$ What if we consider non-zero mean, that is what would exactly be the following equation in polar coordinate system? $$\frac{1}{2\pi\sigma^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\big(-({(x-\mu_x)^2+(y-\mu_y)^2})/{2\sigma^2}\big) \,\mathrm{d}x\,\mathrm{d}y$$
If you mean a polar coordinate system with respect to the origin, then the result is a complicated mess. However, expressed in a polar coordinate system with respect to the point $(\mu_x,\mu_y)$, your third expression is again equal to your second expression, where $r$ now stands for the distance from the point $(\mu_x,\mu_y)$. P.S.: I suggest taking more care to use terms precisely. These expressions are neither distributions, nor coordinates, nor equations; they're normalization integrals over distributions expressed in certain coordinates.
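A quick numerical sanity check of this (a Python sketch; the values of $\sigma$, $\mu_x$, $\mu_y$ and the grid sizes are arbitrary choices): both the shifted Cartesian integral and the polar integral centered at $(\mu_x,\mu_y)$ evaluate to $1$.

```python
import numpy as np

sigma, mu_x, mu_y = 1.3, 0.7, -0.4

# centered polar form: (1/sigma^2) * int_0^inf exp(-r^2 / (2 sigma^2)) r dr
dr = 1e-3
r = np.arange(0, 10 * sigma, dr) + dr / 2          # midpoint rule
polar = np.sum(np.exp(-r**2 / (2 * sigma**2)) * r) * dr / sigma**2

# shifted Cartesian form, over a grid wide enough to capture all the mass
h = 0.02
t = np.arange(-8 * sigma, 8 * sigma, h) + h / 2
X, Y = np.meshgrid(t + mu_x, t + mu_y)
dens = np.exp(-((X - mu_x)**2 + (Y - mu_y)**2) / (2 * sigma**2))
cart = dens.sum() * h * h / (2 * np.pi * sigma**2)

print(polar, cart)  # both ≈ 1: the mean shift only moves the center
```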
{ "language": "en", "url": "https://math.stackexchange.com/questions/49380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Natural isomorphisms and the axiom of choice The definitions of "natural transformation", "natural isomorphism between functors", and "natural isomorphism between objects" capture - among other things - the intuitive notion of "an isomorphism that does not depend on an arbitrary choice" (from Wikipedia). The standard example is a finite dimensional vector space $V$ being naturally isomorphic to its double dual $V^{**}$ because the isomorphism doesn't depend on the choice of basis. What I wonder: Does this informal notion of choice have to do with the formal notion of choice in the axiom of choice? Is "being naturally isomorphic" somehow related to "being provably isomorphic from $ZF$ without $AC$"? Or are these completely unrelated concepts?
I'm no expert but I think they're different. My understanding is that "natural isomorphism" has always been a little vaguely defined but the concept is fairly intuitive. It's a case where there is clearly a "best" or simplest isomorphism. Like the Chinese Remainder Theorem perhaps: $C_{mn} \cong C_m \times C_n$ where $m, n$ are relatively prime. There may be more than one isomorphism but one stands out as the most natural or most obvious, I guess. Generally the "natural" isomorphisms are easily and unambiguously provable. It's recognising that such a connection exists that's the real art. Funnily enough I find the axiom of choice much vaguer (and have never been a fan). I'm probably way off...
{ "language": "en", "url": "https://math.stackexchange.com/questions/49420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 4 }
For which $n$ is $ \int \limits_0^{2\pi} \prod \limits_{k=1}^n \cos(k x)\,dx $ non-zero? I can verify easily that for $n=1$ and $2$ it's $0$, $3$ and $4$ nonzero, $4$ and $5$ $0$, etc. but it seems like there must be something deeper here (or at least a trick).
Hint: Start as anon did with $$ \int_0^{2\pi}\prod_{k=1}^n\cos(kx)\,\mathrm{d}x =\frac{1}{2^n}\int_0^{2\pi}e^{-i\frac{n(n+1)}{2}x}\prod_{k=1}^n(1+e^{i2kx})\,\mathrm{d}x\tag{1} $$ which equals $\frac{2\pi}{2^n}$ times the coefficient of $x^{n(n+1)/2}$ in $$ \prod_{k=1}^n(1+x^{2k})\tag{2} $$ That coefficient is the number of ways to write $n(n+1)/2$ as the sum of distinct even integers $\le2n$. So $(1)$ is non-zero precisely when you can write $n(n+1)/2$ as the sum of distinct even integers $\le2n$ (a much simpler problem). Claim: $\dfrac{n(n+1)}{2}$ can be written as the sum of distinct even integers no greater than $2n$ in at least one way precisely when $n\in\{0,3\}\pmod{4}$. Proof: By induction. If $n\in\{1,2\}\pmod{4}$, then $\dfrac{n(n+1)}{2}$ is odd, and so cannot be written as the sum of even integers. Suppose that $n\in\{0,3\}\pmod{4}$ and $\dfrac{n(n+1)}{2}$ can be written as the sum of distinct even integers no greater than $2n$. Then $$ \frac{(n+4)(n+5)}{2}=\frac{n(n+1)}{2}+(2n+4)+(2n+6) $$ Thus, if the statement is true for $n$, it is true for $n+4$. Once we note that $\dfrac{3(3+1)}{2}=2+4$ and $\dfrac{4(4+1)}{2}=4+6$, we are done. QED
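The combinatorial criterion is easy to spot-check by machine (a sketch; `ways` computes the coefficient of $x^{n(n+1)/2}$ in $(2)$ via the standard subset-sum dynamic program):

```python
def ways(n):
    # coefficient of x^{n(n+1)/2} in prod_{k=1}^n (1 + x^{2k}):
    # the number of ways to write n(n+1)/2 as a sum of distinct
    # even integers <= 2n  (subset-sum dynamic program)
    target = n * (n + 1) // 2
    coeffs = [0] * (target + 1)
    coeffs[0] = 1
    for k in range(1, n + 1):
        step = 2 * k
        for s in range(target, step - 1, -1):
            coeffs[s] += coeffs[s - step]
    return coeffs[target]

for n in range(21):
    assert (ways(n) > 0) == (n % 4 in (0, 3))
print("claim verified for n <= 20")
```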
{ "language": "en", "url": "https://math.stackexchange.com/questions/49467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$ (Apologies, this was initially incorrectly posted on MathOverflow) In the MIT 18.01 practice questions for Exam 4 problem 3b (link below), we are asked to express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$ I understand that this should involve using integration by parts but the given solution doesn't show working and I'm not able to obtain the same answer regardless of how I set up the integration. Link to the practice exam: http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/exams/prexam4a.pdf
Hint: $x^2 e^{-x^2} = x ( x e^{-x^2})$ and the second factor is a derivative.
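Carrying the hint through, integration by parts with $u = x$ and $dv = x e^{-x^2}\,dx$ gives $\int_0^1 x^2 e^{-x^2}\,dx = \tfrac12\int_0^1 e^{-x^2}\,dx - \tfrac{1}{2e}$; a quick numerical check (a midpoint-rule sketch):

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

I2 = midpoint(lambda x: x * x * math.exp(-x * x), 0.0, 1.0)
I0 = midpoint(lambda x: math.exp(-x * x), 0.0, 1.0)

# by parts: u = x, dv = x e^{-x^2} dx, v = -e^{-x^2}/2:
#   I2 = [-x e^{-x^2}/2]_0^1 + (1/2) I0 = (1/2) I0 - 1/(2e)
assert abs(I2 - (0.5 * I0 - 1 / (2 * math.e))) < 1e-8
```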
{ "language": "en", "url": "https://math.stackexchange.com/questions/49520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Are there numerical algorithms for Roman numerals? In positional number systems there are algorithms for performing certain operations, like long division, to name one of the simplest. This works for positional systems, whatever base. I realize in number theory there are very advanced algorithms, typically for working with Very Long Numbers. (disclaimer: except for a Fourier transform I don't know any of them, I'm not a mathematician.) I was wondering how the Romans could do anything numerical with their odd Roman numerals. You can't divide MMDCCI by LXXIII using long division. So, question: are there numerical methods for Roman numerals, and if not, how did the Romans divide MMDCCI by LXXIII?
This web page has information about some possible approaches to doing arithmetic using Roman Numerals: http://turner.faculty.swau.edu/mathematics/materialslibrary/roman/
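For what it's worth, the modern way to "divide Roman numerals" is to convert to a positional representation, divide there, and convert back (historians generally hold that the Romans themselves computed on counting boards/abaci rather than with written algorithms). A minimal sketch, assuming well-formed numerals:

```python
VALUES = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
          (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
          (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def from_roman(s):
    # greedy parse; subtractive pairs (CM, XC, ...) are checked first
    total, i = 0, 0
    while i < len(s):
        for v, sym in VALUES:
            if s.startswith(sym, i):
                total += v
                i += len(sym)
                break
    return total

def to_roman(n):
    out = []
    for v, sym in VALUES:
        while n >= v:
            out.append(sym)
            n -= v
    return "".join(out)

q, r = divmod(from_roman("MMDCCI"), from_roman("LXXIII"))
print(to_roman(q), r)  # MMDCCI / LXXIII = 2701 / 73 = 37 = XXXVII, remainder 0
```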
{ "language": "en", "url": "https://math.stackexchange.com/questions/49582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Conceptual basis for formula for sum of n first integers raised to power k, Most programmers (including me) are painfully aware of quadratic behavior resulting from a loop that internally performs 1, 2, 3, 4, 5 and so on operations per iteration, $$\sum_{i=1}^n i = \frac{n \left(n+1\right)}{2} $$ It’s very easy to derive, e.g. considering the double of the sum like $(1+2+3) + (3+2+1) = 3\times4$ . For the sum of squares there is a very similar formula, $$\sum_{i=1}^n i^2 = \frac{n(n+\frac{1}{2})(n+1)}{3}$$ But the only way I found to derive that was to assume it would sum to a cubic polynomial $f(x)=Ax^3+Bx^2+Cx+D$, and solve for $f(n)-f(n-1)=n^2$. I’m guessing that there is a much simpler system and concept here, generalizing to $\sum_{i=1}^n i^k$?
These "power sum" polynomials are known as Bernoulli polynomials, and have been studied for centuries, and there is a vast literature about them. There are many inductive formulae relating them. As you observed, the right thing to do is to consider a polynomial $f_{k}(x)$ with $f_{k}(0) = 0$ and $f_{k}(x+1) - f_{k}(x) = (x+1)^{k}$, where $k$ is a chosen positive integer. Notice that if a polynomial of finite degree satisfies this equation for all positive integers, then it must satisfy it for all real $x$, and furthermore, it is unique. Notice also that, given such a polynomial exists, we must have $f_{k}(-1) =0$, so that both $x$ and $x+1$ must be factors of $f_{k}(x)$ for every $k$, given that $f_{k}$ exists. How do we know that the polynomial $f_{k}(x)$ always exists? There are many ways to see this. I like an inductive approach. The polynomial $f_{1}(x) = \frac{x(x+1)}{2}$ gets us started. How can we find $f_{k+1},$ given $f_{k}$? Well, one way to do it is to notice that if we had $f_{k+1}$ and differentiated its defining equation, we would obtain $f_{k+1}^{'}(x+1) - f_{k+1}^{'}(x) = (k+1)(x+1)^{k}$, which is nearly the defining equation for $f_{k}$, apart from a factor $k+1$ and the possible addition of a constant. Hence, if it is to exist, we should have $f_{k+1}(x) = c(k+1)x + d(k+1) + (k+1)\int_{-1}^{x} f_{k}(t) dt$ for certain constants $c(k+1)$ and $d(k+1)$. We can determine the constants $c(k+1)$ and $d(k+1)$. Since we need $f_{k+1}(0) = 0$, we must have $d(k+1) = -(k+1)\int_{-1}^{0} f_{k}(t)dt$. Since we need $f_{k+1}(-1) = 0$, we need $c(k+1) = d(k+1)$. Hence we have uniquely specified a polynomial $f_{k+1}$ (of degree $k+2$) with the right properties. It is $f_{k+1}(x) = -(x+1)(k+1)\int_{-1}^{0}f_{k}(t)dt + (k+1) \int_{-1}^{x} f_{k}(t)dt$. This can be rewritten as $$ \frac{f_{k+1}(x)}{k+1} = x \int_{0}^{-1}f_{k}(t)dt + \int_{0}^{x} f_{k}(t)dt$$ if preferred.
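The recursion at the end is easy to run with a computer algebra system (a sympy sketch of the construction; it reproduces the familiar closed forms for $k=2,3$):

```python
import sympy as sp

x, t = sp.symbols("x t")

def f(k):
    # f_1(x) = x(x+1)/2;  f_{k+1}(x)/(k+1) = x*int_0^{-1} f_k dt + int_0^x f_k dt
    if k == 1:
        return x * (x + 1) / 2
    prev = f(k - 1).subs(x, t)
    const = sp.integrate(prev, (t, 0, -1))
    return sp.expand(k * (x * const + sp.integrate(prev, (t, 0, x))))

# reproduces the classical closed forms for sums of squares and cubes
assert sp.simplify(f(2) - x * (x + 1) * (2 * x + 1) / 6) == 0
assert sp.simplify(f(3) - (x * (x + 1) / 2) ** 2) == 0
# and the defining property f_k(x+1) - f_k(x) = (x+1)^k
assert sp.expand(f(3).subs(x, x + 1) - f(3) - (x + 1) ** 3) == 0
```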
{ "language": "en", "url": "https://math.stackexchange.com/questions/49630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find maximum $x$ such that $k^x$ divides $n!$ Given numbers $k$ and $n$, how can I find the maximum $x$ such that $n! \equiv\ 0 \pmod{k^x}$? I tried to compute $n!$ and then do a binary search over some range, say $[0,1000]$: for example, compute $k^{500}$; if $n!$ mod $k^{500}$ is greater than $0$, then compute $k^{250}$, and so on. But I have to work with the value of $n!$ every time (storing it in a bigint and manipulating it every time is a little ridiculous), and the time to compute $n!$ is $O(n)$, so very bad. Is there any faster, mathematical solution to this problem? Math friends? :) Cheers Chris
Here is a possible approach. If $p$ is prime, the exponent of the largest power of $p$ dividing $n!$ is $$\sum_{k=1}^\infty \left\lfloor {n\over p^k}\right\rfloor.$$ If the number $k$ you are considering is easily factorable into primes, you can adapt this result to your needs.
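This is Legendre's formula, and the adaptation to composite $k$ is: factor $k=\prod_i p_i^{m_i}$; then the answer is $\min_i \lfloor e_{p_i}/m_i\rfloor$, where $e_p$ is the exponent of $p$ in $n!$. A sketch (trial-division factoring is an arbitrary choice here, fine for small $k$):

```python
def legendre(n, p):
    # exponent of the prime p in n!  (Legendre's formula)
    e, pk = 0, p
    while pk <= n:
        e += n // pk
        pk *= p
    return e

def factorize(k):
    # trial division; adequate for small k
    fac, d = {}, 2
    while d * d <= k:
        while k % d == 0:
            fac[d] = fac.get(d, 0) + 1
            k //= d
        d += 1
    if k > 1:
        fac[k] = fac.get(k, 0) + 1
    return fac

def max_power(n, k):
    # largest x with k^x dividing n!
    return min(legendre(n, p) // m for p, m in factorize(k).items())

print(max_power(10, 6))  # exponent of 2 in 10! is 8, of 3 is 4, so the answer is 4
```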
{ "language": "en", "url": "https://math.stackexchange.com/questions/49670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Prove in full detail that the set is a vector space So I'm doing a review test and I have this problem: Prove in full detail, with the standard operations in R2, that the set {(x,2x): x is a real number} is a vector space. Attempt: Given: $(x_1, 2x_1) \in \mathbb{R}^2$ and $(x_2, 2x_2) \in \mathbb{R}^2$ Addition: $(x_1, 2x_1) + (x_2, 2x_2) = (x_1 + x_2, 2x_1 + 2x_2) \in \mathbb{R}^2$ $ = (x_1 + x_2, 2(x_1 + x_2)) \in \mathbb{R}^2$ $ ≃ (x, 2x) \in \mathbb{R}^2$ Thus the set is closed under addition Scalar multiplication: $c(x_1, 2x_1) = (cx_1, 2(cx_1)) \in \mathbb{R}^2$ $ ≃ (x, 2x) \in \mathbb{R}^2$ Thus the set is closed under scalar multiplication Are these operations enough to prove that the set is a vector space? Or do I have to go through each of the following (or in other words do I have to to the same thing for each property in the definition):
Since you are working in a subspace of $\mathbb{R}^2$, which you already know is a vector space, you get quite a few of these axioms for free. Namely, commutativity, associativity and distributivity. With the properties that you have shown to be true you can deduce the zero vector since $0 v=0$ and your subspace is closed under scalar multiplication, and the same for the inverse, $-1 v=-v$. You seem to have skipped a few steps in your reasoning though: after doing the addition of two vectors from your subspace, in order to show that the resulting vector actually is in that same subspace you should show explicitly that it is of the form $(x,2x)$ (you're almost there). For scalar multiplication, you seem to have taken a generic vector of $\mathbb{R}^2$ instead of a vector belonging to your subset, so it needs a bit of correction as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/49733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 0 }
Can a Gaussian integer matrix have an inverse with Gaussian integer entries? Is there any way to characterize the set of complex matrices with Gaussian integer entries whose inverses also have Gaussian integer entries? I'm aware of the numerous examples of integer matrices whose inverses also have integer entries (usually involving binomial coefficients), but I'm wondering if those constructions can be generalized to Gaussian integers.
If $R$ is a subring of $\Bbb C$, the group ${\rm GL}_n(R)$ of the invertible matrices with coefficients in $R$ consists of the matrices with determinant in $R^\times$ (the group of invertible elements in $R$). Thus, if $R={\Bbb Z}[i]$ is the ring of Gaussian integers, the group ${\rm GL}_n(R)$ consists of the matrices $M$ with coefficients in $R$ such that $$ \det(M)\in\{\pm1,\pm i\}. $$
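A small sympy illustration (the matrices are arbitrary examples): a determinant that is a unit of $\Bbb Z[i]$ gives an inverse back in $\Bbb Z[i]$, while a non-unit determinant such as $1+i$ does not.

```python
import sympy as sp

# det(M) = (2+I)*1 - 1*(1+I) = 1, a unit in Z[i], so M is invertible over Z[i]
M = sp.Matrix([[2 + sp.I, 1],
               [1 + sp.I, 1]])
assert sp.expand(M.det()) == 1

# its inverse (the adjugate, since det = 1) again has Gaussian integer entries
M_inv = sp.Matrix([[1, -1],
                   [-1 - sp.I, 2 + sp.I]])
assert sp.expand(M * M_inv) == sp.eye(2)

# by contrast, det = 1 + I has |det|^2 = 2, so it is not a unit and the
# inverse of diag(1 + I, 1) cannot have all entries in Z[i]
N = sp.Matrix([[1 + sp.I, 0],
               [0, 1]])
assert sp.Abs(N.det())**2 == 2
```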
{ "language": "en", "url": "https://math.stackexchange.com/questions/49781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Combinatorics: how many different triples of cards can be formed Let's say I have a hand of cards; the number of cards I have in my hand is $x > 3$. How many unordered different triples of cards can I form with the cards in my hand? Example: I have the following cards in my hand: A B C D. I could form A B C, A B D, A C D, B C D, so 4 different triples can be formed. I am looking for a formula that gives me the number of triples depending on $x$.
First let's look at the case where order matters. The number of possible first cards is $ x $, the number of possible second cards is $ x - 1 $ (since the first one is no longer available), and the number of possible third cards is $ x-2 $. The total number of three-card hands, where order matters, is then $ x(x-1)(x-2)$. However, given any three distinct cards, there are $ 3! = 6 $ different ways to order them all, e.g. ABC ACB BAC BCA CAB CBA Hence our value of $ x(x-1)(x-2) $ contains 6 copies of every possible three-card hand (without ordering). Thus the value you are looking for is $ x(x-1)(x-2)/6 $. Note that, in general, the number of ways of choosing $ m $ objects out of a collection of $ n $ objects, without order, is determined by the binomial coefficient: $$ {n \choose m} = \frac{n!}{(n-m)! m!}, $$ which can be derived by the same reasoning process used above.
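In code this is just the binomial coefficient $\binom{x}{3}$ (Python's `math.comb`); a sketch matching the derivation above:

```python
from math import comb

def triples(x):
    # ordered count x(x-1)(x-2); each unordered triple is counted 3! = 6 times
    return x * (x - 1) * (x - 2) // 6

assert triples(4) == comb(4, 3) == 4   # the A B C D example
for x in range(3, 30):
    assert triples(x) == comb(x, 3)
```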
{ "language": "en", "url": "https://math.stackexchange.com/questions/49863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Some questions on $\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}+\frac{\partial f}{\partial z} = g(x,y,z)$ Let $\quad\displaystyle \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}+\frac{\partial f}{\partial z} = g(x,y,z)\;$. * *Is there a better notation for writing the above? *Given $g$, can $f$ always be found? *Given $g$, is there a unique $f$ that satisfies the above equation? *Is there a name given to this equation?
Your condition is that the directional derivative of $f$ in the $(1,1,1)$ direction is given by $g(x,y,z)$. It's a rotated version of the simple equation ${\partial f \over \partial x} = h(x,y,z)$. Just as you can solve this by $f(x,y,z) = \int_0^x h(t,y,z) \,dt + H(y,z)$, where $H(y,z)$ can be anything, you can always solve your equation by integrating $g(x,y,z)$ in the $(1,1,1)$ direction, and you have nonuniqueness because you can add any function of $x - y$ and $y - z$ to your solution and obtain another solution; the directional derivative of any function $H(x-y,y-z)$ in the $(1,1,1)$ direction is zero.
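Both points are easy to verify symbolically (a sympy sketch; $g = x+y+z$ and $H = \sin(x-y)e^{y-z}$ are arbitrary choices for illustration):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
D = lambda f: sp.diff(f, x) + sp.diff(f, y) + sp.diff(f, z)

# a particular solution for g = x + y + z, found by integrating along (1,1,1)
f_p = (x**2 + y**2 + z**2) / 2
assert sp.expand(D(f_p) - (x + y + z)) == 0

# any function of x - y and y - z is annihilated by the operator,
# so solutions are never unique; check one concrete H(x - y, y - z)
H = sp.sin(x - y) * sp.exp(y - z)
assert sp.simplify(D(H)) == 0
```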
{ "language": "en", "url": "https://math.stackexchange.com/questions/49988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Different ways to prove there are infinitely many primes? This is just a curiosity. I have come across multiple proofs of the fact that there are infinitely many primes, some of them were quite trivial, but some others were really, really fancy. I'll show you what proofs I have and I'd like to know more because I think it's cool to see that something can be proved in so many different ways. Proof 1 : Euclid's. If there are finitely many primes then $p_1 p_2 ... p_n + 1$ is coprime to all of these guys. This is the basic idea in most proofs : generate a number coprime to all previous primes. Proof 2 : Consider the sequence $a_n = 2^{2^n} + 1$. We have that $$ 2^{2^n}-1 = (2^{2^1} - 1) \prod_{m=1}^{n-1} (2^{2^m}+1), $$ so that for $m < n$, $(2^{2^m} + 1, 2^{2^n} + 1) \, | \, (2^{2^n}-1, 2^{2^n} +1) = 1$. Since we have an infinite sequence of numbers coprime in pairs, at least one prime number must divide each one of them and they are all distinct primes, thus giving an infinity of them. Proof 3 : (Note : I particularly like this one.) Define a topology on $\mathbb Z$ in the following way : a set $\mathscr N$ of integers is said to be open if for every $n \in \mathscr N$ there is an arithmetic progression $\mathscr A$ such that $n \in \mathscr A \subseteq \mathscr N$. This can easily be proven to define a topology on $\mathbb Z$. Note that under this topology arithmetic progressions are open and closed. Supposing there are finitely many primes, notice that this means that the set $$ \mathscr U \,\,\,\, \overset{def}{=} \,\,\, \bigcup_{p} \,\, p \mathbb Z $$ should be open and closed, but by the fundamental theorem of arithmetic, its complement in $\mathbb Z$ is the set $\{ -1, 1 \}$, which is not open, thus giving a contradiction. Proof 4 : Let $a,b$ be coprime integers and $c > 0$. There exists $x$ such that $(a+bx, c) = 1$. To see this, choose $x$ such that $a+bx \not\equiv 0 \, \mathrm{mod}$ $p_i$ for all primes $p_i$ dividing $c$. 
If $a \equiv 0 \, \mathrm{mod}$ $p_i$, since $a$ and $b$ are coprime, $b$ has an inverse mod $p_i$, call it $\overline{b}$. Choosing $x \equiv \overline{b} \, \mathrm{mod}$ $p_i$, you are done. If $a \not\equiv 0 \, \mathrm{mod}$ $p_i$, then choosing $x \equiv 0 \, \mathrm{mod}$ $p_i$ works fine. Find $x$ using the Chinese Remainder Theorem. Now assuming there are finitely many primes, let $c$ be the product of all of them. Our construction generates an integer coprime to $c$, giving a contradiction to the fundamental theorem of arithmetic. Proof 5 : Dirichlet's theorem on arithmetic progressions (just so that you don't bring it up as an example...) Do you have any other nice proofs?
Let $p$ be the last prime. Then according to Bertrand's postulate the interval $(p,2p)$ contains a prime number. We get a contradiction.
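Both this argument and Euclid's can be spot-checked by machine (a sketch over small ranges only, which of course proves nothing by itself):

```python
import math
import sympy as sp

# Euclid: p_1 * ... * p_n + 1 is coprime to every prime on the list,
# so its prime factors are new primes
primes = [2, 3, 5, 7, 11, 13]
m = math.prod(primes) + 1   # 30031 = 59 * 509
assert all(m % p != 0 for p in primes)

# Bertrand: for each prime p there is a prime strictly between p and 2p
for p in sp.primerange(2, 300):
    assert any(sp.isprime(q) for q in range(p + 1, 2 * p))
print("checks passed")
```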
{ "language": "en", "url": "https://math.stackexchange.com/questions/50006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119", "answer_count": 27, "answer_id": 20 }
Identity of closure of sets in a topological space A fun problem: show that for any sequence $A_{1},A_{2},...$ of subsets of a topological space we have: $$\overline{\bigcup_{i=1}^{\infty} A_{i}} = \bigcup_{i=1}^{\infty} \overline{A_{i}} \cup \bigcap_{i=1}^{\infty} \overline{\bigcup_{j=0}^{\infty}A_{i+j}}.$$ So I think I have the inclusion $\supseteq$. Let's see: First note that for each $i$ we have $\displaystyle A_{i} \subseteq \bigcup_{i=1}^{\infty} A_{i}$. Taking closure on both sides gives: $$\overline{A_{i}} \subseteq \overline{\bigcup_{i=1}^{\infty} A_{i}}.$$ Taking the union on both sides from $i=1$ to $\infty$ yields: $$\bigcup_{i=1}^{\infty} \overline{A_{i}} \subseteq \overline{\bigcup_{i=1}^{\infty} A_{i}}.$$ Call this (*). Now note that (taking $i=1$) $$\bigcap_{i=1}^{\infty} \overline{\bigcup_{j=0}^{\infty}A_{i+j}} \subseteq \overline{\bigcup_{j=0}^{\infty} A_{j+1}}.$$ Thus: $$\bigcup_{i=1}^{\infty} \overline{A_{i}} \cup \bigcap_{i=1}^{\infty} \overline{\bigcup_{j=0}^{\infty}A_{i+j}} \subseteq \bigcup_{i=1}^{\infty} \overline{A_{i}} \cup \overline{\bigcup_{j=0}^{\infty} A_{j+1}}.$$ Now by (*) it follows that the above set is a subset of: $$\overline{\bigcup_{i=1}^{\infty} A_{i}} \cup \overline{\bigcup_{j=0}^{\infty} A_{j+1}}.$$ But the latter set is equal to $\displaystyle \overline{\bigcup_{i=1}^{\infty} A_{i}}$, so we have the inclusion $\supseteq$. Is this OK? Now, how to prove the other inclusion? I tried contradiction but gets messy.
Let $\displaystyle x\in\overline{\bigcup_{i\in\mathbb{N}^*}A_i}$. We assume that $\displaystyle x\notin\bigcup_{i\in\mathbb N^*}\overline{A_i}$ (if it's not the case we are done).We have to show that for all $i\geq 1$ we have $\displaystyle x\in\overline{\bigcup_{j\geq i}A_j}$. Let $V$ a neighborhood of $x$. Since $\displaystyle x\notin\bigcup_{i\in\mathbb N^*}\overline{A_i}$ we can find for all $i\geq 1$ an open set $U_i$ which contains $x$ and such that $U_i\cap A_i=\emptyset$. For $i\geq 1$, let $\displaystyle V_i=\bigcap_{k=1}^iU_k$: this is an open set which contains $x$. Since $V\cap V_i$ is still a neighborhood of $x$ and $\displaystyle x\in\overline{\bigcup_{i\in\mathbb{N}^*}A_i}$ we have $\displaystyle V\cap V_i\cap\bigcup_{k\in\mathbb{N}}A_k \neq \emptyset$. Now we can conclude since $\displaystyle V_i\cap\bigcup_{k\in\mathbb{N}}A_k =V_i\cap \bigcup_{k\geq i+1}A_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/50061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find out the border of a planar figure for a given set of points – 2D case Original post is edited after getting some suggestions. I am looking for a fast algorithm which is able to detect the outermost boundary of a plane for a given set of points. Suppose I have 3D point clouds and the points are segmented as belonging to different (identified) planes. Now I want to extract the outermost points of each plane. The problem can be considered as a 2D case by projecting the x,y coordinates of each point to the XY plane. So what I am expecting is a fast, precise algorithm which is able to detect all the boundary points along very irregular borders. A convex hull doesn't fit my needs, as it fails in the irregular cases. Publications relevant to this, code, and pseudocode are welcome; I intend to implement it in C++. Thank you
I'm not sure about your problem statement, but you might find the Hough transform (in 3D) useful. For example: http://plum.eecs.jacobs-university.de/download/3dresearch2011.pdf Added: statement misunderstood; it seems that the planes are already identified, and we just want to find out the borders of the figures that each subset of points determines over each plane. For that, I'd project the points to the respective plane and apply an iterative algorithm, starting with the convex hull and deleting/splitting borders. For example: http://forja.uji.es/docman/view.php/43/83/border_cloud_points.pdf
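Along the lines of the iterative border-splitting idea, one standard concrete method is the 2D alpha-shape: triangulate, discard triangles whose circumradius exceeds a threshold, and take the edges that border exactly one surviving triangle. A Python/SciPy sketch (the `alpha` parameter and the demo data are arbitrary choices; a production C++ version could build on a library such as CGAL instead):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the 2D alpha-shape: keep Delaunay triangles whose
    circumradius is below `alpha`, then return edges used by exactly one
    kept triangle (these trace the irregular border)."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        # circumradius R = (la * lb * lc) / (4 * area)
        if area < 1e-12 or la * lb * lc / (4.0 * area) >= alpha:
            continue  # degenerate or too-large triangle: not part of the shape
        for e in ((ia, ib), (ib, ic), (ia, ic)):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, cnt in edge_count.items() if cnt == 1]

# demo: a jittered grid on the unit square; boundary vertices hug the sides
rng = np.random.default_rng(0)
g = np.linspace(0.0, 1.0, 12)
pts = np.array([(a, b) for a in g for b in g]) + rng.normal(scale=1e-3, size=(144, 2))
edges = alpha_shape_edges(pts, alpha=0.2)
boundary = sorted({i for e in edges for i in e})
```

With a small `alpha` the same routine follows concave borders that a convex hull would bridge over; tuning `alpha` to the point density is the usual practical difficulty.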
{ "language": "en", "url": "https://math.stackexchange.com/questions/50108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help with solving an integral I am looking for help with finding the integral in the following expression $$ y_2(t) = (1 - 2t^2)\int \frac{e^{\int 2t\,dt}}{(1-2t^2)^2}\, dt$$ Anyone able to help? Thanks in advance! UPDATE: I got the above from trying to solve the question below. Solve, using reduction of order, the following $$y'' - 2ty' + 4y =0,$$ where $$f(t) = 1-2t^2$$ is a solution.
The differential equation you have is a special case of the Hermite differential equation, with $\lambda =4$. The standard "regular" solution is the Hermite Polynomial $H_{\lambda/2}(x)=-2+4x^2$ (your solution is merely a scaled version), and the "irregular" solution is a bit complicated, involving the so-called "imaginary error function" $\mathrm{erfi}(x)$.
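A sympy sanity check (note that the reduction-of-order formula for $y''+p\,y'+q\,y=0$ uses $e^{-\int p\,dt}$, which here, with $p=-2t$, gives $e^{t^2}$; the inner integral is left unevaluated so the verification is purely structural):

```python
import sympy as sp

t = sp.symbols("t")
y1 = 1 - 2 * t**2
ode = lambda y: sp.diff(y, t, 2) - 2 * t * sp.diff(y, t) + 4 * y
assert sp.expand(ode(y1)) == 0   # f(t) = 1 - 2t^2 really is a solution

# reduction of order: y2 = y1 * Integral(e^{t^2} / y1^2); the antiderivative
# itself involves erfi, but the ODE check goes through without evaluating it
y2 = y1 * sp.Integral(sp.exp(t**2) / y1**2, t)
assert sp.simplify(ode(y2)) == 0
```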
{ "language": "en", "url": "https://math.stackexchange.com/questions/50220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve this very quickly? A number leaves a remainder $2$ when divided by $9$.Which of the following could not be the remainder when it is divided by $45$? * *$20$ *$30$ *$29$ *$38$ How could we solve this under a minute? Please explain your approach.
$x = 9k + 2$. Now if $k = 5m + r$, $0 \leq r \leq 4$ then $x = 45m + 9r + 2$. Thus the remainder upon dividing by $45$ should be of the form $9r + 2$. $30$ isn't it. In other words, $x-2$ is divisible by $9$, hence the remainder upon dividing $x-2$ by $45$ should also be divisible by $9$, as $45$ is divisible by $9$. In general, if $x$ leaves a remainder $r$ upon dividing by $a$, the possible remainders it leaves when divided by $b$ are of the form $k \times \text{gcd}(a,b) + r$. ($\text{gcd}$ = greatest common divisor). In this case, $\text{gcd}(9,45) = 9$. If $a$ and $b$ are co-prime, then any remainder is possible. See: Bezout's Identity.
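The enumeration takes one line in Python (listing all residues mod $45$ of numbers $\equiv 2 \pmod 9$):

```python
# x = 9k + 2, so x mod 45 ranges over {(9k + 2) mod 45 : k} = {9r + 2 : 0 <= r <= 4}
possible = sorted({(9 * k + 2) % 45 for k in range(45)})
print(possible)  # [2, 11, 20, 29, 38]

for r in (20, 30, 29, 38):
    print(r, "possible" if r in possible else "impossible")  # only 30 is impossible
```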
{ "language": "en", "url": "https://math.stackexchange.com/questions/50270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Mapping of a cube to itself From an exam textbook I am given the following problem: in space, how many lines are there such that rotating the cube by $180^\circ$ about the line maps the cube to itself? I have thought about this problem many times. I thought the answer should be the number of axes of symmetry, for which I would answer $4$, but $4$ is not among the given answers, so I have not yet found the solution. Please help me make it clear.
What 4 did you get? Clearly the axis has to pass through the center of the cube. There are three axes through the face centers, six through the centers of opposite edges, and four body diagonals. Of these, a $180^\circ$ rotation maps the cube to itself only for the first two kinds, giving $3+6=9$ lines; a body diagonal is an axis of order three, so rotating $180^\circ$ about it does not preserve the cube. You can see a figure here
{ "language": "en", "url": "https://math.stackexchange.com/questions/50326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What's the meaning of algebraic data type? I'm reading a book about Haskell, a programming language, and I came across a construct called an "algebraic data type" that looks like data WeekDay = Mon | Tue | Wed | Thu | Fri | Sat | Sun That simply declares what the possible values for the type WeekDay are. My question is: what is the meaning of "algebraic data type" for a mathematician, and how does that map to the programming-language construct?
I'm sure the term comes from Algebraic Theories (as opposed to Geometric Theories), see Model Theory. The comp.sci. "ADT" is basically an object of a free model of an algebraic theory. More details here: https://github.com/vpatryshev/wowiki/blob/master/Algebraic%20Data%20Types%20Categorically.md
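A complementary informal account one often hears: ADTs are built from sums ("one of") and products ("both of"), and for finite types the cardinalities literally add and multiply. A rough Python analogue of the Haskell construct (the `Shape` names here are invented for illustration):

```python
import itertools
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Circle:          # one constructor, carrying a product of fields
    radius: float

@dataclass(frozen=True)
class Rect:
    w: float
    h: float

Shape = Union[Circle, Rect]    # a sum type: Shape "=" Circle + Rect

def area(s: Shape) -> float:
    # dispatch on the constructor, like a Haskell `case` expression
    if isinstance(s, Circle):
        return 3.141592653589793 * s.radius ** 2
    return s.w * s.h

# for finite types the names are literal: |A + B| = |A| + |B|, |A x B| = |A| * |B|
Bool, Tri = [False, True], ["a", "b", "c"]
assert len(Bool) + len(Tri) == 5                      # sum of cardinalities
assert len(list(itertools.product(Bool, Tri))) == 6   # product of cardinalities
```

In this reading, WeekDay is a sum of seven one-element types, so it has exactly $1+1+\dots+1 = 7$ values.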
{ "language": "en", "url": "https://math.stackexchange.com/questions/50375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 2 }
Proof that $\dim(U \times V) = \dim U + \dim V$. The following theorem in Serge Lang's Linear Algebra is left as an exercise, namely, Let $U$ and $V$ be finite dimensional vector spaces over a field $K$, where $\dim U = n$ and $\dim V = m$. Then $\dim W = \dim U + \dim V$, where $W = U \times V$, the direct product of the two vector spaces $U$ and $V$. Namely, $W$ contains the set of all ordered pairs $(u,v)$ such that $u \in U$ and $v \in V$. The usual axioms for such a direct product are: 1) Addition is defined componentwise, namely if $(u_1,v_1),(u_2,v_2) \in W$, then $(u_1,v_1)+(u_2,v_2) = (u_1 + u_2, v_1 + v_2)$; 2) If $c \in K$, then $c(u_1,v_1) = (cu_1,cv_1)$. To prove it, let $(u_1, u_2 \ldots u_n)$ be a basis for $U$ and $(v_1,v_2 , \ldots v_m)$ a basis for $V$. So by definition, every element of $W$ can be written in the form $(a_1u_1 + \ldots a_nu_n, b_1v_1 + \ldots b_mv_m)$, where the $a_i's$ and $b_j's$ belong to the field $K$. Using the above axioms this can be rewritten as: $a_1(u_1,0) + a_2(u_2,0) + \ldots a_n(u_n,0) + b_1(0,v_1) + \ldots b_m(0,v_m)$. Doubt: If we view all the $(u_i,0)$ ordered pairs as being the "basis" vectors of $U$ and similarly for the $(0,v_j)$ ordered pairs of $V$, then there are $n+m$ number of them and so proving the linear independence of these objects should suffice. But I'm confused because I know that $u_i's$ by themselves are the basis vectors of $U$, but now we are talking about ordered pairs $(u_i,0)$. How can I get out of such a situation? Perhaps one can define some linear map between say a $u_i$ and the ordered pair $(u_i,0)$. $\textbf{Edit}:$ First it is easy to see that $(U \times \{0\}) \cap (\{0\} \times V)$ is the ordered pair $(0,0)$. The linear independence of the basis vectors as stated above then follows.
Proof of exercise 1: Say that the $v_i$ are linearly independent, i.e. $c_1v_1 + c_2v_2 + \ldots c_nv_n = 0$ only when all the $c_i = 0$. Now suppose $c_1T(v_1) + c_2T(v_2) + \ldots c_nT(v_n) = 0$. By linearity of $T$, this says $T(c_1v_1 + \ldots + c_nv_n) = 0$, and since $T$ is injective (its kernel is trivial), we get $c_1v_1 + \ldots + c_nv_n = 0$. By the independence of the $v_i$, all the $c_i$ are zero. It follows that $(T(v_1), T(v_2), \ldots T(v_n))$ is a linearly independent set of vectors. As for exercise 2, is it not clear from what it means for two ordered pairs to be equal that $f$ and $g$ are injective?
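The statement being proved ($\dim(U\times V)=n+m$ via the vectors $(u_i,0)$ and $(0,v_j)$) can also be sanity-checked numerically (a numpy sketch; random bases are full-rank with probability $1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
U = rng.normal(size=(n, n))  # columns form a basis of U = R^n
V = rng.normal(size=(m, m))  # columns form a basis of V = R^m

# the vectors (u_i, 0) and (0, v_j), written as columns of a block matrix
W = np.block([[U, np.zeros((n, m))],
              [np.zeros((m, n)), V]])

# n + m independent columns spanning R^{n+m}: dim(U x V) = dim U + dim V
assert np.linalg.matrix_rank(W) == n + m
```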
{ "language": "en", "url": "https://math.stackexchange.com/questions/50502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
Does a solution of $ x=\sum_{n=0}^\infty e^{-A_n/x}$ exist? Usually when working with infinite sums, I want to work out the sum or whether it converges. But now I have encountered a problem the other way around, and I'm clueless... Is there even a general solution of $$ x=\sum_{n=0}^\infty e^{-A_n/x}$$ for $A_n$, where $x$ is given and real, $A_n >0\space\forall n$ and $\frac{dA_n}{dx}=0\space\forall n$? Thank you EDIT: To make my question clearer for the commenters and others, I'm searching for a systematic sequence $A_n$ which, when entered in the equation above, yields $x$, and this should hold for all (real) $x$.
It cannot be done. For the proof write $x:={1\over y}$. Then we should have $${1\over y}\ \equiv\ \sum_{n=0}^\infty e^{-A_n y}\qquad(*)\ ,$$ say for all $y\geq1$. In particular $\sum_{n=0}^\infty e^{-A_n}=1$, so necessarily $\lim_{n\to\infty} A_n=\infty$. It follows that $\alpha:=\inf_n A_n>0$ and therefore $$\sum_{n=0}^\infty e^{-A_n y}=\sum_{n=0}^\infty e^{-A_n} \ e^{-A_n(y-1)} \leq e^{-\alpha(y-1)} \qquad (y\geq1)\ .$$ This shows that $(*)$ cannot hold for all $y\geq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/50574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Taylor Series expansion at z=0, and radius of convergence I have the following question: Consider the domain $$ D=B(0,1)\cup B\left(\frac{1}{2}, 1\right) $$ It is given that $f:D\rightarrow \mathbb{C}$ is an analytic function in $D$, and $f^{(n)}(0)$ is a positive real number for every positive integer $n$. Let $R$ be the radius of convergence of the Taylor series of $f$ at $z=0$. Is it true that $R>1$? $$ $$ I have attached my proof to the following problem, although I am not sure if it correct, as I have clearly not used the fact that $f^{(n)}(0)$ is a positive real number for every positive integer $n$. How do I make use of this fact to prove/disprove the statement? $$ $$ Proof: Since $f$ is analytic on the ball $B(0,1)$, it follows from the definition of radius of convergence that $R\geq1$. Suppose on the contrary that $R=1$. By Taylor's Theorem, we may express $f$ as a Taylor series at $z=0$ as follows: $$f(z)=\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}z^n$$ where the series converges absolutely for all $z\in B(0,1)$, and diverges for all $|z|>1$. Thus, by differentiating both sides of the above equation $k$ times, we have that for all $z\in B(0,1)$, $$ f^{(k)}(z)=\sum_{n=k}^\infty\frac{f^{(n)}(0)}{(n-k)!}z^{n-k}. $$ Also, since $f$ is analytic on the ball $B\left(\frac{1}{2},1\right)$, it follows from Taylor's Theorem that we may also express $f$ as a Taylor series at $z=\frac{1}{2}$ as follows: $$ f(z)=\sum_{k=0}^{\infty}\frac{f^{(k)}\left(\frac{1}{2}\right)}{k!}\left(z-\frac{1}{2}\right)^k, $$ where the series converges absolutely for all $z\in B\left(\frac{1}{2},1\right)$. Now, by setting $z=\frac{1}{2}$, we have that for all $k\geq0$, $$ f^{(k)}\left(\frac{1}{2}\right)=\sum_{n=k}^{\infty}\frac{f^{(n)}(0)}{(n-k)!}\cdot\frac{1}{2^{n-k}}. 
$$ Then for all $z\in B\left(\frac{1}{2},1\right)$, we have the following: $$ f(z) =\sum_{k=0}^{\infty}\frac{f^{(k)}\left(\frac{1}{2}\right)}{k!}\left(z-\frac{1}{2}\right)^k =\sum_{k=0}^{\infty}\sum_{n=k}^{\infty}\frac{f^{(n)}(0)}{(n-k)!k!}\cdot\frac{1}{2^{n-k}}\cdot\left(z-\frac{1}{2}\right)^k $$ $$ =\sum_{n=0}^{\infty}\sum_{k=0}^n\frac{f^{(n)}(0)}{(n-k)!k!}\cdot\left(\frac{1}{2}\right)^{n-k}\cdot\left(z-\frac{1}{2}\right)^k =\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}\sum_{k=0}^n\frac{n!}{(n-k)!k!}\cdot\left(\frac{1}{2}\right)^{n-k}\cdot\left(z-\frac{1}{2}\right)^k $$ $$ =\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}z^n. $$ Note: The interchanging of the summations is possible as the series $\sum_{k=0}^{\infty}\frac{f^{(k)}\left(\frac{1}{2}\right)}{k!}\left(z-\frac{1}{2}\right)^k$ converges absolutely for all $z\in B\left(\frac{1}{2},1\right)$; this follows from the Rearrangement Theorem, where any rearrangement of an absolutely convergent series converges to the same sum as the original series. This implies that the Taylor series of $f$ at $z=0$ converges for all $z\in B\left(\frac{1}{2},1\right)$; and in particular for all $z\in\mathbb{R}$, $1<z<\frac{3}{2}$, which contradicts the fact that the series diverges for all $|z|>1$. So we must have $R>1$ as desired.
The statement is correct and the proof is essentially correct though the fact that all coefficients are positive is crucial. You actually used it without noticing: when you talked about exchanging the order of summations, you were a bit sloppy because we need the terms in the double series to be summable in absolute value to use "sequential Fubini". Fortunately, the originally inner sum consists of terms of the same sign, which allows to say that the absolute value of that sum is the same as the sum of absolute values and reduce the property that we really need to the one you declared as sufficient.
{ "language": "en", "url": "https://math.stackexchange.com/questions/50751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What graph theoretic methods can identify groups? Given a social graph, for instance from Twitter, I want to identify groups. Here I define a group as any highly connected (although not necessarily complete) subgraph. What algorithms or methods exist that could help me here?
I recommend that you take a look at the paper Fast Unfolding of Communities in Large Networks. It gives a natural definition of what you mean by a group (a subgraph where the members are highly connected to each other, and non-members are not strongly connected to members) and then gives a fast algorithm for detecting such groups. As a bonus, the algorithm produces as its output a graph of groups, with edges indicating the strength of connection between different groups. You can then apply the algorithm again to the new graph, and so forth, generating a hierarchical community structure for your network.
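Added: to make the quantity that paper optimizes concrete, here is a minimal hand-rolled Python sketch (my own, not from the paper) of modularity: the fraction of edges inside groups, minus the fraction expected in a random graph with the same degree sequence.

```python
def modularity(edges, community):
    """Modularity Q of a partition: fraction of edges inside communities,
    minus the fraction expected in a random graph with the same degrees."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # edges whose endpoints lie in the same community
    internal = sum(1 for u, v in edges if community[u] == community[v])
    # expected internal fraction: sum over communities of (degree share)^2
    comm_deg = {}
    for node, d in deg.items():
        comm_deg[community[node]] = comm_deg.get(community[node], 0) + d
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return internal / m - expected

# two triangles joined by one bridge edge: a clear two-group structure
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
groups = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, groups))  # 6/7 - 1/2 ≈ 0.357
```

A good partition scores well above the all-in-one-community baseline of $0$; the algorithm in the paper greedily moves nodes to increase exactly this score.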
{ "language": "en", "url": "https://math.stackexchange.com/questions/50819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Some inequality Given a probability distribution function $F(x)$, consider other probability distribution functions $F_1$ and $F_2$ such that $aF_1(x)+bF_2(x)=F(x)$ for some $a,b$ for all $x$. Under what conditions on $F_1$ and $F_2$ we have $F_1(x)(1-F_1(x))+F_2(x)(1-F_2(x)) \ge F(x)(1-F(x)) $?
In order for $a F_1 + b F_2$ to be a probability distribution function, you need $a + b = 1$. I'll assume you're interested in the case $0 < a < 1$. If $F = a F_1 + (1 - a) F_2$, then $G = F_1 (1-F_1) + F_2 (1 - F_2) - F (1 - F) = (F_1 - F_2)^2 a^2 + (F_1 - F_2)(2 F_2 - 1) a + F_1 - F_1^2$. Now certainly the $a^2$ and constant terms are nonnegative. So one sufficient condition is that $(F_1 -F_2) (2 F_2 - 1) \ge 0$, i.e. either $F_1 \ge F_2 \ge 1/2$ or $F_1 \le F_2 \le 1/2$. By symmetry, it also is true if $F_2 \ge F_1 \ge 1/2$ or $F_2 \le F_1 \le 1/2$. On the other hand, when $F_2 < 1/2 < F_1$ or $F_1 < 1/2 < F_2$, the minimum of $G$ is at $a = \frac{1/2 - F_2}{F_1 - F_2}$ where we get $G =F_1 - F_1^2 + F_2 - F_2^2 - 1/4$. Note that the curve $F_1 - F_1^2 + F_2 - F_2^2 = 1/4$ is a circle of radius $1/2$ centred at $(1/2,1/2)$. So the condition to have $G \ge 0$ for all $0 \le a \le 1$ is that $(F_1, F_2)$ avoids the two regions of the unit square outside that circle that are shown in red in this plot.
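Added: a quick numerical check of the algebra above (my own sketch): $G$ really is the stated quadratic in $a$, and in the mixed case its minimum over $a$ matches $F_1 - F_1^2 + F_2 - F_2^2 - 1/4$. The example point $(0.9, 0.2)$ lies exactly on the circle, so its minimum is $0$.

```python
def G(a, F1, F2):
    F = a * F1 + (1 - a) * F2
    return F1 * (1 - F1) + F2 * (1 - F2) - F * (1 - F)

def G_quadratic(a, F1, F2):
    # the quadratic in a derived above
    return (F1 - F2) ** 2 * a ** 2 + (F1 - F2) * (2 * F2 - 1) * a + F1 - F1 ** 2

# the two expressions agree
for a in (0.0, 0.3, 0.7, 1.0):
    for F1, F2 in ((0.8, 0.3), (0.2, 0.9), (0.6, 0.6)):
        assert abs(G(a, F1, F2) - G_quadratic(a, F1, F2)) < 1e-12

# mixed case F2 < 1/2 < F1: minimum over a matches the closed formula;
# (0.9, 0.2) is exactly on the circle F1 - F1^2 + F2 - F2^2 = 1/4
F1, F2 = 0.9, 0.2
a_min = (0.5 - F2) / (F1 - F2)
print(G(a_min, F1, F2), F1 - F1 ** 2 + F2 - F2 ** 2 - 0.25)  # both ≈ 0
```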
{ "language": "en", "url": "https://math.stackexchange.com/questions/50898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Something connected with Ulam's tightness theorem A well-known theorem of Ulam says that each probability measure $\mu$ defined on the Borel subsets of a Polish space $X$ satisfies the following condition: for each $\epsilon>0$ there is a compact subset $K$ of $X$ such that $\mu(K)>1-\epsilon$. I wonder whether there is any reasonable condition on the measure $\mu$ which would guarantee that for each $\epsilon>0$ there is an open subset $U$ of $X$ such that $\mbox{cl}\,U$ is compact and $\mu(\mbox{cl}\,U)>1-\epsilon$. Any idea? It would be very helpful for me.
Think about Brownian Motion on ${\mathcal C}[0,1]$ that starts at 0. This is a probability measure on the polish space ${\mathcal C}[0,1]$. This is also a Banach space that is infinite dimensional. Such a space cannot have a precompact open subset -- normed linear spaces with this property are finite-dimensional. Therefore, I think you are out of luck here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/50955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to show that this series does not converge uniformly on the open unit disc? Given the series $\sum_{k=0}^\infty z^k$, it is easy to see that it converges locally, but how do I go about showing that it does not also converge uniformly on the open unit disc? I know that for it to converge uniformly on the open disc, $\sup\{|g(z) - g_k(z)|\}$, $z$ an element of the open unit disc, must tend to zero as $k$ goes to infinity. However, I am finding it difficult to show that this series does not go to zero as $k$ goes to infinity. Edit: Fixed confusing terminology as mentioned in answer.
If I take your wording literally, your difficulty might stem from the fact that you confused the supremum going to zero with the series going to zero pointwise. This is precisely the difference between convergence and uniform convergence. The expression $|g(z)-g_k(z)|$ does go to zero for all $z$ on the open unit disc, but the supremum doesn't. (I'm assuming that by $g(z)$ you mean the series and by $g_k(z)$ its $k$-th partial sum; the question should introduce the notation used if it's not obviously standard.)
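Added: a small numeric illustration of the distinction (my own). For the geometric series the remainder is $g(z)-g_k(z)=z^{k+1}/(1-z)$, which tends to $0$ at each fixed $z$ in the open disc, yet its supremum over the disc stays unbounded for every $k$:

```python
def remainder(z, k):
    """|g(z) - g_k(z)| for the geometric series: |z^(k+1) / (1 - z)|."""
    return abs(z ** (k + 1) / (1 - z))

# at a fixed point of the open disc the remainder dies out quickly...
print(remainder(0.5, 100))        # astronomically small
# ...but near the boundary it is still huge, for any fixed k
print(remainder(1 - 1e-6, 1000))  # about 1e6 * e^(-0.001), roughly 999000
```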
{ "language": "en", "url": "https://math.stackexchange.com/questions/51004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Rotational invariance Spherical harmonics functions are said to be "rotationally invariant" On the Wikipedia page, it says: In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument. For example, the function $f(x,y) = x^2 + y^2$ is invariant under rotations of the plane around the origin. However this is confusing. It sounds like "rotationally inert", where rotations basically have no effect. (Spinning the circle $x^2 + y^2=r^2$ around the z-axis doesn't change anything about the values of the function anywhere). Here's what I understand: SH are rotationally invariant, which $ROT_1( SH( g ) ) = SH( ROT_2( g ) )$, where $ROT_1$ is a SH-domain rotation and $ROT_2$ is a spatial domain rotation, where $ROT_1$ and $ROT_2$ produce the same resultant orientation. I got this from page 18 of this paper Am I right? What is the Wikipedia page talking about? Have I misunderstood that Wikipedia page?
A specific spherical harmonic is not rotationally invariant, and the Wikipedia article does not claim this. What is true is that the space of all spherical harmonics of a fixed degree is a finite-dimensional irreducible representation of the rotation group $\text{SO}(3)$, and that the spherical harmonics form particularly nice bases of these representations. (In particular, rotating a spherical harmonic gets you a linear combination of spherical harmonics.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/51069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Calculating point on a circle, given an offset? I have what seemed like a very simple issue, but I just cannot figure it out. I have the following circles around a common point: The Green and Blue circles represent circles that orbit the center point. I have been able to calculate the distance/radius from the point to the individual circles, but I am unable to plot the next point on either circle, given an angle from the center point. Presently, my calculation looks like the following: The coordinates of one of my circles is: y1 = 152 x1 = 140.5 And my calculation for the next point, 1 degree from the starting point (140.5,152) is: distance = SQRT((160-x1)^2 + (240-y1)^2) = 90.13 new x = 160 - (distance x COS(1 degree x (PI / 180))) new y = 240 - (distance x SIN(1 degree x (PI / 180))) My new x and y give me crazy results, nothing even close to my circle. I can't figure out how to calculate the new position, given the offset of 160, 240 being my center, and what I want to rotate around. Where am I going wrong? Update: I have implemented what I believe to be the correct formula, but I'm only getting a half circle, e.g. x1 = starting x coordinate, or updated coordinate y1 = starting y coordinate, or updated y coordinate cx = 100 (horizontal center) cy = 100 (vertical center) radius = SQRT((cx - x1)^2 + (cy - y1)^2) arc = ATAN((y1 - cy) / (x1 - cx)) newX = cx + radius * COS(arc - PI - (PI / 180.0)) newY = cy + radius * SIN(arc - PI - (PI / 180.0)) Set the values so next iteration of drawing, x1 and y1 will be the new base for the calculation. x1 = newX y1 = newY The circle begins to draw at the correct coordinates, but once it hits 180 degrees, it jumps back up to zero degrees. The dot represents the starting point. Also, the coordinates are going counterclockwise, when they need to go clockwise. Any ideas?
George, instead of subtracting for the offset, try adding, i.e. distance = SQRT((160 - x1)^2 + (240 - y1)^2) = 90.13, new x = 160 + (distance x COS(1 degree x (PI / 180))), new y = 240 + (distance x SIN(1 degree x (PI / 180))). The part $x_{\text{new}}=(\text{distance})\cos(\pi/180)$ assumes the distance is measured from the origin $(0,0)$. Since you are starting your $x$ value from positive $160$, you need to add that offset, i.e. $x_{\text{new}}=x_{\text{center}} + (\text{distance})\cos(\pi/180) = 160 + (\text{distance})\cos(\pi/180)$. And similarly for the new $y$ value.
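Added: regarding the half-circle jump in the update, plain ATAN cannot distinguish angles that differ by 180 degrees, which is why the recovered angle folds back after half a revolution; atan2, or better, rotating the offset vector directly with no arctangent at all, avoids the problem. A sketch of the direct rotation (my own code, using the question's numbers):

```python
import math

def rotate_about(cx, cy, x, y, degrees):
    """Rotate the point (x, y) around the center (cx, cy).
    Positive angles are counterclockwise in standard math axes;
    on a screen where y grows downward they appear clockwise."""
    t = math.radians(degrees)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))

x, y = 140.5, 152.0
for _ in range(360):              # 360 one-degree steps
    x, y = rotate_about(160, 240, x, y, 1)
print(x, y)                       # back to (140.5, 152.0), up to rounding
```

No angle recovery means no quadrant headaches, and the radius is preserved automatically; negate the angle argument if the motion goes the wrong way for your coordinate convention.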
{ "language": "en", "url": "https://math.stackexchange.com/questions/51111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Take any number and keep appending 1's to the right of it. Are there an infinite number of primes in this sequence? Ignoring sequences that are always factorable, such as those starting with 11, can we take any other number, such as 42, and continually append 1s (forming the sequence {42, 421, 4211, ...}) to get a sequence that has an infinite number of primes in it?
I think this is an open question. Lenny Jones gave a talk in which he noted that the numbers 12, 121, 1211, 12111, 121111, etc., are all composite - until you get to the one with 138 digits, that's a prime. Jones' work appears in the paper, When does appending the same digit repeatedly on the right of a positive integer generate a sequence of composite integers?, Amer. Math Monthly 118 (Feb. 2011) 153-160. He finds that 37 is the smallest positive integer such that you get nothing but composites by appending any positive number of ones. It seems to be easier to find a sequence with no primes than a sequence which you can prove has infinitely many.
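Added: the search is easy to play with in code. Here is my own sketch (not from the paper), using a Miller–Rabin primality test: with the question's $42$ the very first step $421$ is already prime, while Jones's result for $37$ means no number of appended ones ever yields a prime.

```python
def is_prime(n):
    """Miller-Rabin with the first 12 prime bases: deterministic for
    n < 3.3 * 10^24, and an extremely strong probable-prime test beyond."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def ones_until_prime(start, limit=50):
    """Smallest k >= 1 such that `start` followed by k ones is prime,
    or None if no such k up to `limit`."""
    n = start
    for k in range(1, limit + 1):
        n = n * 10 + 1
        if is_prime(n):
            return k
    return None

print(ones_until_prime(42))  # 1, since 421 is prime
print(ones_until_prime(37))  # None: 371, 3711, ... are all composite
```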
{ "language": "en", "url": "https://math.stackexchange.com/questions/51168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Yet another sum involving binomial coefficients Let $k,p$ be positive integers. Is there a closed form for the sums $$\sum_{i=0}^{p} \binom{k}{i} \binom{k+p-i}{p-i}\text{, or}$$ $$\sum_{i=0}^{p} \binom{k-1}{i} \binom{k+p-i}{p-i}\text{?}$$ (where 'closed form' should be interpreted as a representation which is free of sums, binomial coefficients, or any other hypergeometric functions).
I derived the following simple inequality (confirmed numerically): for any $0 < s < 1/2$, $$ \sum\limits_{i = 0}^p {{k \choose i}{k + p - i \choose p - i}} \le \frac{1}{{(1 - 2s)^k }}\bigg(\frac{{1 - s}}{s}\bigg)^p. $$ Given $0 < s < 1/2$, let $X$ be a binomial$(k,s)$ random variable, and $Y$ a binomial$(k+p-X,t)$ random variable, where $t=s/(1-s) \, (\in (0,1))$. Then, by the law of total probability, $$ {\rm P}(X + Y = p) = \sum\limits_{i = 0}^p {{\rm P}(X + Y = p|X = i){\rm P}(X = i)} = \sum\limits_{i = 0}^p {{\rm P}(Y = p - i|X = i){\rm P}(X = i)}. $$ Noting that $$ {\rm P}(X = i) = {k \choose i}s^i (1-s)^{k-i} $$ and $$ {\rm P}(Y = p - i|X = i) = {k + p - i \choose p - i}t^{p - i} (1 - t)^k , $$ and using $$ \frac{s}{{(1 - s)t}} = 1, $$ we get $$ {\rm P}(X + Y = p) = (1 - s)^k (1 - t)^k t^p \sum\limits_{i = 0}^p {{k \choose i}{k + p - i \choose p - i}} . $$ Finally, from ${\rm P}(X + Y = p) \leq 1$ and $$ (1 - s)^k (1 - t)^k t^p = (1 - 2s)^{k} \bigg(\frac{s}{{1 - s}}\bigg)^p , $$ it follows that $$ \sum\limits_{i = 0}^p {{k \choose i}{k + p - i \choose p - i}} \le \frac{1}{{(1 - 2s)^k }}\bigg(\frac{{1 - s}}{s}\bigg)^p. $$
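Added: the bound is easy to confirm numerically; here is my own quick check (the grid of $s$ values is arbitrary):

```python
from math import comb

def lhs(k, p):
    # the binomial sum in the question
    return sum(comb(k, i) * comb(k + p - i, p - i) for i in range(p + 1))

def rhs(k, p, s):
    # the probabilistic upper bound, valid for every s in (0, 1/2)
    return ((1 - s) / s) ** p / (1 - 2 * s) ** k

for k in range(1, 8):
    for p in range(1, 8):
        for s in (0.1, 0.2, 0.3, 0.4, 0.49):
            assert lhs(k, p) <= rhs(k, p, s) * (1 + 1e-12)
print(lhs(1, 1), rhs(1, 1, 0.25))  # 3 and 6.0
```

One can also minimize the right-hand side over $s$ for given $k, p$ to get the tightest version of the bound.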
{ "language": "en", "url": "https://math.stackexchange.com/questions/51218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
How to find the least $N$ such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$? How to approach this problem: N is the least number such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$.Then which of the these is true: * *$0 \lt N \lt 1000$ *$1000 \lt N \lt 2000$ *$2000 \lt N \lt 4000$ *$N \gt 4000$ Please explain your idea. ADDED: The actual problem which comes in my paper is "or" and the "and" was my mistake but I think I learned something new owing to that.Thanks all for being patient,and appologies for the inconvenience.
The title says "or" and the text says "and". I will assume "and". We want $N$ to be congruent to $7$ modulo $180$ and modulo $144$. This will be true iff $N$ is congruent to $7$ modulo the LCM of $180$ and $144$, which is $720$. So $N$ must have shape $N=720k+7$ for some integer $k$. But we want $N \equiv 1 \pmod{7}$. Since $N=700k +20k +7$, we can see that $N\equiv 20k \pmod{7}$. Presumably we want $N$ positive, though this was not specified. It is easy to see that the least positive $k$ that works is $k=6$. Why is it so easy? Note that $20\equiv -1 \pmod{7}$. So to make $20k \equiv 1 \pmod{7}$, we must have $k \equiv -1\pmod{7}$. The least positive $k$ congruent to $-1$ is $6$. That forces $N>4000$. Added: The text of the original question said $180$ and $144$. For the "or" version, we note that $N \equiv 7 \pmod{\gcd(180,144)}$. Thus $N\equiv 7 \pmod{36}$, or equivalently $N$ is of the shape $36k+7$. In particular, since $N \equiv 1 \pmod 7$, we must have $k\equiv 1 \pmod 7$. Probably at this stage (or earlier!) search is most efficient. Try $k=8$. That gives $N=295$, which works, since $295=(2)(144)+7$.
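Added: both versions are easy to confirm by brute force (my own check):

```python
def least(pred, bound=10_000):
    """Smallest positive n below `bound` satisfying the predicate."""
    return next(n for n in range(1, bound) if pred(n))

n_and = least(lambda n: n % 180 == 7 and n % 144 == 7 and n % 7 == 1)
n_or = least(lambda n: (n % 180 == 7 or n % 144 == 7) and n % 7 == 1)
print(n_and)  # 4327 = 720*6 + 7, so N > 4000 in the "and" version
print(n_or)   # 295 = 2*144 + 7, so 0 < N < 1000 in the "or" version
```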
{ "language": "en", "url": "https://math.stackexchange.com/questions/51272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Not clear whether the ratio of two holomorphic functions is a holomorphic function except at points where the denominator is zero In the link Proof that a function is holomorphic it is stated: Elementary operations or compositions of holomorphic functions give holomorphic functions on the maximal domain where the functions are defined. This is a consequence of the rules of derivation for product, ratio and compositions of functions. But as far as I understand, I have a counterexample: $$ f(z) = f(x + \imath y) = \frac{x - \imath y}{x + \imath y}. $$ By complex division I calculate that $$ f(x + \imath y) = 1 - 2\imath\frac{xy}{x^2 + y^2} = u(x, y) + \imath v(x, y). $$ Then I verify the Cauchy-Riemann criteria: $$ \frac{\partial u}{\partial x} = 0, $$ while $$ \frac{\partial v}{\partial y} = -2x\frac{x^2 - y^2}{(x^2 + y^2)^2}, $$ which means that $f$ is not holomorphic. Did I make a mistake in my calculations, or does this mean that the cited statement is not correct?
Take the representation $z=x+y i$. Then the function $f:\mathbb{C}\rightarrow\mathbb{C}$ defined by $f(z)=\overline{z}=x-y i$ is not a holomorphic function. It is antiholomorphic. See http://en.wikipedia.org/wiki/Complex_conjugate
{ "language": "en", "url": "https://math.stackexchange.com/questions/51320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can a prime in a Dedekind domain be contained in the union of the other prime ideals? Suppose $R$ is a Dedekind domain with an infinite number of prime ideals. Let $P$ be one of the nonzero prime ideals, and let $U$ be the union of all the prime ideals other than $P$. Is it possible for $P\subset U$? As a remark, if there were only finitely many prime ideals in $R$, the above situation would not be possible by the "Prime Avoidance Lemma", since $P$ would have to then be contained in one of the other prime ideals, leading to a contradiction. The discussion at the top of pg. 70 in Neukirch's "Algebraic Number Theory" motivates this question. Many thanks, John
This answer refers to the contributions of Jyrki and Georges: assume that a maximal ideal $P$ of a Dedekind domain $R$ is NOT contained in the union of all other maximal ideals. Then there exists an element $f\in P$ such that $v_P(f)=n>0$ for the discrete valuation attached to $P$ and $v_Q(f)=0$ for all $Q\neq P$. Now $P^n$ consists of those elements $r\in R$ such that $v_P(r) \geq n$. Thus for every $r\in P^n$ we get $r=fs$ with $s\in R$. Hence $P^n =fR$. Jyrki has already shown that if $R$ has torsion class group, then no maximal ideal contained in the union of all others can exist. So: a maximal ideal of $R$ contained in the union of all other maximal ideals exists if and only if the class group of $R$ is not torsion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/51362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 3, "answer_id": 2 }
Finding the closest match in a "golden" sequence of points I am not a mathematician, and corrections are welcome (including tags). Background: For the last few days, i have been interested in the problem of placing points along a line segment (of length $1$, for simplicity), such that no matter how many points are added, the points are still relatively evenly spaced. This is somewhat vague, so let's look at one specific sequence based on the golden ratio, which seems to fit the description: $$p_n = \left\{n\phi\right\}$$ By $\{\cdot\}$, I mean the fractional part function. Here is one example of how this sequence is interesting (middle column). The question: Given the sequence defined above of length $s$ and a freely chosen point $x$ between $0$ and $1$, how can I find the point $p_n$ closest to $x$, where $n \leq s$?
Take the largest Fibonacci number $F_m\le s$. Find the nearest fraction $k/F_m$ to $x$. Consider the numbers $n_p=[(-1)^m F_{m-1}(k+p)]\mod F_m$ and $n'_p=F_m+n_p$ (if the latter is in the admissible range) with integer $p\in[-4,4]$ (this should be enough and I'm too lazy to check if we can do $[-3,3]$ or even $[-2,2]$). Now just compare the results for these 18 values and choose the best one. The reason is that $\varphi-\frac{F_{m+1}}{F_m}$ is so small that $n\varphi-n\frac{F_{m+1}}{F_m}$ is not much bigger than $1/F_m$ for all $n\le s$.
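Added: a Python sketch of this recipe (my own code, following the description above; the identity $F_{m+1}F_{m-1}-F_m^2=(-1)^m$ makes $(-1)^m F_{m-1}$ the inverse of $F_{m+1}$ modulo $F_m$):

```python
import math

PHI = (1 + 5 ** 0.5) / 2

def frac(t):
    return t - math.floor(t)

def nearest_point(s, x):
    """Index n in 1..s minimizing |{n*phi} - x|, by comparing the ~18
    Fibonacci-based candidates instead of all s points."""
    F = [0, 1, 1]                     # F[1] = F[2] = 1
    while F[-1] + F[-2] <= s:
        F.append(F[-1] + F[-2])
    m = len(F) - 1                    # largest m with F[m] <= s
    Fm, Fm1 = F[m], F[m - 1]
    k = round(x * Fm)                 # nearest fraction k/F_m to x
    inv = (Fm1 if m % 2 == 0 else -Fm1) % Fm   # (-1)^m F_{m-1} mod F_m
    best = None
    for p in range(-4, 5):
        base = inv * (k + p) % Fm
        for n in (base, base + Fm):   # s < F_{m+1} < 2 F_m, so this suffices
            if 1 <= n <= s:
                d = abs(frac(n * PHI) - x)
                if best is None or d < best[0]:
                    best = (d, n)
    return best[1]

print(nearest_point(100, 0.37))  # matches checking all n = 1..100
```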
{ "language": "en", "url": "https://math.stackexchange.com/questions/51411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $f_k \to f$ a.e. and the $L^p$ norms converge, then $f_k \to f$ in $L^p$ Let $1\leq p < \infty$. Suppose that * *$\{f_k\} \subset L^p$ (the domain here does not necessarily have to be finite), *$f_k \to f$ almost everywhere, and *$\|f_k\|_{L^p} \to \|f\|_{L^p}$. Why is it the case that $$\|f_k - f\|_{L^p} \to 0?$$ A statement in the other direction (i.e. $\|f_k - f\|_{L^p} \to 0 \Rightarrow \|f_k\|_{L^p} \to \|f\|_{L^p}$ ) follows pretty easily and is the one that I've seen most of the time. I'm not sure how to show the result above though.
This is a theorem by Riesz. Observe that $$|f_k - f|^p \leq 2^p (|f_k|^p + |f|^p).$$ Now we can apply Fatou's lemma to the nonnegative functions $$2^p (|f_k|^p + |f|^p) - |f_k - f|^p \geq 0.$$ Their pointwise limit is $2^{p+1}|f|^p$, and since $\|f_k\|_{L^p} \to \|f\|_{L^p}$, Fatou's lemma gives $$2^{p+1}\int |f|^p \, d\mu \leq \liminf_{k\to\infty}\int \left(2^p (|f_k|^p + |f|^p) - |f_k - f|^p\right) d\mu = 2^{p+1}\int |f|^p \, d\mu - \limsup_{k \to \infty} \int |f_k - f|^p \, d\mu.$$ This implies that $$\limsup_{k \to \infty} \int |f_k - f|^p \, d\mu = 0,$$ and hence you can conclude the same for the limit itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/51502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "100", "answer_count": 2, "answer_id": 0 }
Number of ways a natural number can be written as a sum of smaller natural numbers It is easy to realize that, given a natural number $N$, the number of doublets that sum to $N$ is $\frac{N+(-1)(N \bmod 2)}{2}$, so I thought I could reach some recursive formula, in the sense that having found the number of doublets I could find the number of triplets, and so on. Example: for $N=3$ the only doublet is $2+1=3$ (not said yet, but $2+1$ and $1+2$ count as one); then I could count the number of ways the number $2$ can be expressed as the indicated sum, and get the total number of ways $3$ can be written as a sum. But this seems not so efficient, so I was wondering if there is another way to attack the problem, and if there is some reference for it, such as whether it is well known and where it is used. Once I read that this has chaotic behavior, and also read it was used in probability, but I don't remember where I got that information. So if you know something I would be grateful to be notified; thanks in advance.
It is not clear to me whether you want $p_k(N)$, which is the number of ways to write $N$ as a sum of $k$ naturals (order not counting), or whether you want $p(N)$, which is the total number of ways of writing $N$ as a sum of natural numbers (order not counting). Of course, $p(N)$ is just the sum of all the values of $p_k(N)$ for $k=1,2,\dots,N$, so the two concepts are closely related, and both have been subject to a lot of study, but, still, the $p(N)$ question is a lot harder than the $p_k(N)$ question - at least for small $k$ there are simple formulas for $p_k(N)$. So, what are you after?
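Added: both quantities are easy to compute with the standard recurrence $p_k(N)=p_{k-1}(N-1)+p_k(N-k)$ (either the smallest part equals $1$ and can be dropped, or every part is at least $2$ and one can be subtracted from each). A short sketch of mine, which also confirms the doublet formula from the question:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parts(n, k):
    """p_k(n): partitions of n into exactly k parts, order not counting."""
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    # smallest part is 1 (drop it), or all parts >= 2 (subtract 1 from each)
    return parts(n - 1, k - 1) + parts(n - k, k)

def p(n):
    """p(n): total number of partitions of n."""
    return sum(parts(n, k) for k in range(1, n + 1))

print(parts(5, 2))                  # 2: namely 4+1 and 3+2
print([p(n) for n in range(1, 8)])  # 1, 2, 3, 5, 7, 11, 15
```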
{ "language": "en", "url": "https://math.stackexchange.com/questions/51721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
A good book on Statistical Inference? Can anyone suggest one or more good books on Statistical Inference (estimators, UMVU estimators, hypothesis testing, UMP tests, interval estimators, one-way and two-way ANOVA...) based on rigorous probability/measure theory? I've checked some classical books on this topic, but apparently they all start from scratch with elementary probability theory.
There's the book by Morris de Groot, and one by Bernard Lindgren. Both have bland titles that I don't remember. I think the former might be "Probability and Statistics" and the latter "Statistical Inference" or something like that. Lindgren's book contains a proof that the location-scale family of Cauchy distributions admits no coarser sufficient statistic than the order statistic (i.e. an i.i.d. sample sorted into increasing order); maybe that's not a crucial thing but it's something you find frequently asserted but seldom proved, so it stands out in my mind. Both books cover the topics you've mentioned, although they don't assume you've had measure theory. Since you mention ANOVA, let me add that if you want to understand the theory, you should know things like the (finite-dimensional) spectral theorem, the singular value decomposition, etc. Many books treat ANOVA and regression without that, so you won't learn why the sampling distributions of test statistics are what they are, etc. I'm not sure which book to recommend for this right now.....
{ "language": "en", "url": "https://math.stackexchange.com/questions/51785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 3 }
The exp map and distance on Riemannian manifolds Let $(M,g)$ be a Riemannian manifold. Let $p$ be a point in $M$, and suppose we use the exponential map to create a diffeomorphism between a neighborhood of $0$ in the tangent space at $p$ and a small neighborhood of $p$ in $M$. Is it then true that the distance between $q$ and $p$ is $\sqrt{\langle v,v\rangle}$, where $v=\exp_p^{-1}(q)$ is the vector in the tangent space corresponding to $q$?
Just to give a reference for "any book on Riemannian geometry": A proof of the above (and much more) can be found in do Carmo's Riemannian Geometry. Your question is answered in chapter 3; In particular paragraph 3 of this chapter treats minimizing properties of geodesics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/51864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Continuity of this function at $x=0$ The following function is not defined at $x=0$: $$f(x) = \frac{\log(1+ax) - \log(1-bx)}{x} .$$ What would be the value of $f(0)$ so that it is continuous at $x=0$?
As Chandru has pointed out, this is the $\ln(\cos\,x)/x$ problem all over again. So, being unimaginative at this time of the night, I will solve it in the same way all over again. Please note that most of the time in this post, $f$ does not refer to your $f$. So let's call your function $g$. The value we need to assign to it at $0$ in order to end up with a continuous function is $$\lim_{x\to 0} g(x)$$ if this limit exists. (If the limit does not exist, no assignment of value to $g$ at $0$ will make the resulting function continuous.) Using the notation of Did's answer, let's find $$\lim_{x \to 0} \frac{\ln(1+cx) -\ln(1)}{x-0}.$$ We have done nothing here, since $\ln 1=0$, and $x-0=x$, but we have done nothing in what will turn out to be a useful way. We recognize the above expression as the definition of $f'(0)$, where $f(x)=\ln(1+cx)$. For our particular function $f$, we have $$f'(x)=\frac{c}{1+cx}$$ and therefore $f'(0)=c$. Thus the required answer is $a-(-b)$, which is $a+b$. Comment: The limit is also easily arrived at by considering the power series expansion of $\ln(1+u)$. We sort of took a chance in the above calculation (well, not really, since the answer is obvious from the power series). In principle, it would have been better to do exactly the same thing, but with $f(x)=\ln(1+ax) -\ln(1-bx)$. We would then have $$f'(x)=\frac{a}{1+ax}-\frac{-b}{1-bx}$$ and again we would obtain $f'(0)=a+b$. The method is available for any limit of the kind $$\lim_{x\to a} \frac{f(x)-f(a)}{x-a}$$ once we know how to differentiate $f$. It can be thought of as a hypersimple version of L'Hospital's Rule. Or else, more ambitiously but less accurately, as the beginnings of an explanation of why the L'Hospital's Rule works.
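Added: a quick numerical sanity check of the value $a+b$ (my own, with arbitrarily chosen $a=2$, $b=3$):

```python
import math

def f(x, a, b):
    # the function from the question, defined for small nonzero x
    return (math.log(1 + a * x) - math.log(1 - b * x)) / x

a, b = 2.0, 3.0
for x in (1e-2, 1e-4, 1e-6):
    print(f(x, a, b))   # approaches a + b = 5
```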
{ "language": "en", "url": "https://math.stackexchange.com/questions/51914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Do addition and multiplication have arity? Many books classify the standard four arithmetical functions of addition, subtraction, multiplication, and division as binary (in terms of arity). But, "sigma" and "product" notation often writes just one symbol at the front, and indexes those symbols which seemingly makes expressions like $+(2, 3, 4)=9$ meaningful. Of course, we can't do something similar for division and subtraction, since they don't associate, but does the $+$ symbol in the above expression qualify as the same type of expression as when someone writes $2+4=6$? Do addition and multiplication qualify as functions which don't necessarily have a fixed arity, or do they actually have a fixed arity, and thus instances of sigma and product notation should get taken as abbreviation of expressions involving binary functions? Or is the above question merely a matter of perspective? Do we get into any logical difficulties if we regard addition and multiplication as $n$-ary functions, or can we only avoid such difficulties if we regard addition and multiplication as binary?
For definiteness, let's think about the first-order theory of groups. We can change the language by introducing infinitely many function symbols, one of each arity $\ge 2$. We will then need axioms in order to move freely between products of various arities. These axioms are quite simple, just the usual inductive definition of the $(n+1)$-ary product in terms of the $n$-ary product and the binary product. However, we do need infinitely many such axioms. Occam's Razor suggests that we leave well enough alone, and use arity $3$ multiplication, for example, as an informal abbreviation. The technical difficulties become much greater if we try to produce a two-sorted theory to handle notions such as $a^n$. I do not see any demonstrable gain to compensate for the pain of introducing operations of infinitely many arities, or of variable arities. If we really do want what you suggest, we might as well go all the way, and do group theory within a formal set theory. Then all of the usual abbreviations can be given a formal meaning. That is in fact close to the usual mathematical practice, except that the underlying set theory is informal.
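Added: the inductive abbreviation described above is exactly how variadic sums are usually implemented in programming languages, by folding the binary operation; a small illustrative sketch (mine):

```python
from functools import reduce

def nary_sum(*args):
    """n-ary addition defined by folding the binary operation, mirroring
    how Sigma notation abbreviates iterated binary +."""
    return reduce(lambda x, y: x + y, args, 0)

print(nary_sum(2, 3, 4))  # 9, the +(2, 3, 4) from the question
```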
{ "language": "en", "url": "https://math.stackexchange.com/questions/51962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Probability of Median from a continuous distribution For a sample of size $n=3$ from a continuous probability distribution, what is $P(X_{(1)}<k<X_{(2)})$ where $k$ is the median of the distribution? What is $P(X_{(1)}<k<X_{(3)})$? $X_{(i)},i=1,2,3$ are the ordered values of the sample. I'm having trouble trying to solve this question since the median is for the distribution and not the sample. The only explicit characterization of the median I know of is that the median $k$ of any random variable $X$ satisfies $P(X\le k)\ge 1/2$ and $P(X\ge k)\ge 1/2$, but I don't see how to apply that here.
This is also the probability of exactly one success in three trials, with probability $1/2$ of success on each trial. Hence $\binom{3}{1} \left(\frac12\right)^3 = \frac38$.
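Added: a seeded simulation (my own check) agrees, and also illustrates the second part, $P(X_{(1)}<k<X_{(3)}) = 1 - 2\left(\frac12\right)^3 = \frac34$:

```python
import random

random.seed(0)
trials = 200_000
mid = spread = 0
for _ in range(trials):
    xs = sorted(random.random() for _ in range(3))  # k = 1/2 for U(0, 1)
    mid += xs[0] < 0.5 < xs[1]      # exactly one sample below the median
    spread += xs[0] < 0.5 < xs[2]   # at least one below and one above
print(mid / trials, spread / trials)  # ≈ 0.375 and 0.75
```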
{ "language": "en", "url": "https://math.stackexchange.com/questions/52032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How is Kleene's T predicate defined? What I don't understand is how to extract information from the number that encodes the computation history. I know it's defined in Kleene's Introduction to Metamathematics. But what page? References are welcome. More information on Kleene's T predicate can be found here.
There is also a more indirect proof method, which is to first show that any computable relation is definable in arithmetic and then note that the relation represented by the T predicate is computable. In fact the relation is primitive recursive, but the computability is easier to see, via Church's thesis. Also, not only the computable relations are definable, all the arithmetical relations are also definable. The proof that every primitive recursive relation (or function) is representable in arithmetic is somewhat easier than a proof specifically for the T predicate, because you can ignore details of Turing machines. Essentially the only issue is proving that one can quantify over finite sequences. There is a complete proof in many textbooks and in section 2.2 of these lecture notes by Stephen Simpson: http://www.math.psu.edu/simpson/notes/fom.pdf . There is also a proof in section 49 of Kleene's Introduction to metamathematics, at least for primitive recursive functions. The general result for arithmetical relations follows immediately by simply adding quantifiers and using the normal form theorem from computability to show that one-quantifier relations are definable. This may require proving that the T predicate is primitive recursive, but this is easier than proving it is representable in arithmetic, because one can use primitive recursion freely without having to worry about the $\beta$ function at the same time.
{ "language": "en", "url": "https://math.stackexchange.com/questions/52141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Maximize and Minimize a 12" piece of wire into a square and circle A wire of length 12" can be bent into a circle, a square or cut into 2 pieces and make both a circle and a square. How much wire should be used for the circle if the total area enclosed by the figure(s) is to be: a) a Maximum b) a Minimum What I've got so far is that the formula for the square is $A_s=\frac{1}{16}s^2$ and the circumference of the circle to be $P=12-c$ and area to be $A_c = \pi(\frac{P}{2\pi})^2$ where $c$ is the length of the wire for the circle and $s$ is the length of the wire for the square. Now I know I need to differentiate these formulas to then find the max and min they both can be, but what am I differentiating with respect to? The missing variable in each of the formulas? Also, once I find the derivatives, what would my next steps be to minimize and maximize these? And did I set the problem up correctly? Thanks for any help
Every so often, one might mention the following sort of approach. Let $x$ be the length of wire we will devote to the circle, and $y$ the length we will devote to the square. Let $A$ be the combined area of the circle and square. A calculation identical to the one done by the OP shows that $$A=\frac{x^2}{4\pi}+\frac{y^2}{16}.$$ We want to find the values of $x$ that give maximum and minimum area, given that $x$ and $y$ are non-negative, and $x+y=12$. Maximum and/or minimum values may be reached at an endpoint. So we compute $A$ when $x=0$, $y=12$, and also when $x=12$, $y=0$. The remaining candidates for maximum/minimum are with $0<x<12$. At such a candidate $x$, we will have $\dfrac{dA}{dx}=0$. (We are doing one-variable calculus.) It is easy to see that $$\frac{dA}{dx}=\frac{2x}{4\pi} +\frac{2y}{16}\frac{dy}{dx}.$$ But from $x+y=12$, we can see that $\dfrac{dy}{dx}=-1$. Now we have two equations in the two unknowns $x$ and $y$. Solve for $x$, compute $A(x)$, and compare with the endpoint values. The above procedure carries no advantage in this case, and may increase the probability of mechanical error. However, when the "constraint" is non-linear, there can be real computational advantages to working with implicit functions, particularly if the constraint has symmetries.
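As a numerical cross-check of this setup (my own sketch, not part of the original answer), one can compare the endpoint areas with the area at the interior critical point $x = 12\pi/(\pi+4)$ that the equations above yield:

```python
import math

def area(x):
    # combined area when x inches go to the circle and 12 - x to the square
    return x**2 / (4 * math.pi) + (12 - x)**2 / 16

# interior critical point, from A'(x) = x/(2*pi) - (12 - x)/8 = 0
x_crit = 12 * math.pi / (math.pi + 4)

candidates = [0.0, x_crit, 12.0]
best_max = max(candidates, key=area)
best_min = min(candidates, key=area)
print(best_max, area(best_max))   # all wire to the circle maximizes
print(best_min, area(best_min))   # the interior critical point minimizes
```

The maximum is attained by giving all the wire to the circle, and the minimum at the interior critical point, which matches the calculus solution.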
{ "language": "en", "url": "https://math.stackexchange.com/questions/52200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Range of a sum of sine waves Suppose I'm given a function f(x) = sin(Ax + B) + sin(Cx + D) is there a simple (or, perhaps, not-so-simple) way to compute the range of this function? My goal is ultimately to construct a function g(x, S, T) that maps f to the range [S, T]. My strategy is to first compute the range of f, then scale it to the range [0,1], then scale that to the range [S, T]. Ideally I would like to be able to do this for an arbitrary number of waves, although to keep things simple I'm willing to be satisfied with 2 if it's the easiest route. Numerical methods welcome, although an explicit solution would be preferable.
If ${A\over C}\in{\mathbb Q}$ we may assume $A$, $C\in{\mathbb Z}$. In this case $f$ is periodic with period $2\pi$, and the range of $f$ is found by evaluating $f$ at the zeros of $f'$. The latter have to be determined by solving a certain polynomial equation which one obtains by introducing the variable $z:=e^{ix}$. If ${A\over C}\notin{\mathbb Q}$ then $f$ is almost periodic. In this case the range of $f$ is the open interval $\ ]{-2},2[\ $, because one can find a sequence $x_n\to\infty$ such that the $x_n$ are local maxima of $x\mapsto\sin(Ax+B)$ and at the same time "almost" local maxima of $x\mapsto\sin(Cx+D)$. ${\bf Edit}$ concerning the case ${A\over C}\notin{\mathbb Q}$: As noted in yoriki's answer the range might include one of $\pm2$ if $B$ and $D$ are such that "by coincidence" two local maxima or minima of $x\mapsto\sin(Ax+B)$ and $x\mapsto\sin(Cx+D)$ coincide.
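Since the question welcomes numerical methods, here is a sampling sketch (my own illustration) for the incommensurate case: with $A/C$ irrational, the sampled maximum creeps arbitrarily close to $2$ without reaching it, consistent with the open-interval claim above. The particular constants are arbitrary choices.

```python
import math

def f(x, A, B, C, D):
    return math.sin(A * x + B) + math.sin(C * x + D)

# A/C irrational: f is almost periodic and its range fills (-2, 2)
A, B, C, D = 1.0, 0.3, math.sqrt(2), 0.7
samples = [f(0.02 * k, A, B, C, D) for k in range(500_000)]
lo, hi = min(samples), max(samples)
print(lo, hi)   # approaches -2 and 2 as the sampling window grows
```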
{ "language": "en", "url": "https://math.stackexchange.com/questions/52352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Should I combine the negative part of the spectrum with the positive one? When filtering sound I currently analyse only the positive part of the spectrum. From the mathematical point of view, will discarding the negative half of the spectrum impact significantly on my analysis? Please consider only samples that I will actually encounter, not computer generate signals that are designed to thwart my analysis. I know this question involves physics, biology and even music theory. But I guess the required understanding of mathematics is deeper than of those other fields of study.
The general Fourier transform is defined for complex functions (signals), and in that case all the frequencies are meaningful. For most common applications, we have a real signal, and then the reality condition implies that the values of the transform at negative frequencies are the complex conjugates of those at the corresponding positive frequencies, hence they are redundant and one does not need to compute/store them. It is in this scenario, and in this sense, that we can "ignore" the negative frequencies, safely, with no error. But be aware that this does NOT mean that we consider them to be zero.
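The reality condition can be seen directly in a toy discrete transform. The sketch below (an illustration, using a deliberately naive $O(N^2)$ DFT rather than any production FFT routine) checks that $X[N-k] = \overline{X[k]}$ for a real signal:

```python
import cmath, math

def dft(x):
    # naive discrete Fourier transform, O(N^2), for illustration only
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 16
# a real-valued test signal: one sine and one cosine component
signal = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.cos(2 * math.pi * 5 * n / N)
          for n in range(N)]
X = dft(signal)

# reality condition: X[N-k] is the complex conjugate of X[k], so the
# negative-frequency half carries no extra information
for k in range(1, N):
    assert abs(X[N - k] - X[k].conjugate()) < 1e-9
```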
{ "language": "en", "url": "https://math.stackexchange.com/questions/52428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Question about indefinite integral Let's say I have expression with multiplication which has variable x $$\int x^2e^{x^3}dx$$ So in example it shows $$\int x^2e^{x^3}dx=\frac{1}{3}\int e^u du=\frac{1}{3}e^u+C=\frac{e^{x^3}}{3}+C$$ $u=x^3$ $du=3x^2 dx$ So I don't understand from where comes $\frac{1}{3}$ before integral and from where comes 3 in $du=3x^2 dx$
This is called substitution. Here are the steps in detail: You want to make two replacements: $u=x^3$ and $du=3x^2 dx$. But you don't have $3x^2 dx$ in your integral. No matter, construct it! Starting with what you have: $$\int x^2e^{x^3}dx $$ multiply by 1=3/3: $$= \frac{3}{3}\int x^2e^{x^3}dx $$ move the 3 inside of the integral and move $x^2$ next to the $dx$: $$= \frac{1}{3}\int e^{x^3}3x^2dx$$ Then make two replacements: $u=x^3$ and $du=3x^2 dx$. $$ =\frac{1}{3}\int e^u du$$ These replacements are compatible with each other because if you differentiate $u=x^3$ you get $du=3x^2 dx$. In general, if what you are looking to substitute ($3x^2 dx$ in this case) differs only by a constant from what you have ($x^2 dx$) , you can introduce the constant you need by putting 1 over that constant "outside" the integral. Your best bet is to do another very similar example right away so that this concept will solidify for you. Best!
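As a quick numerical sanity check (my own addition), one can verify by finite differences that the antiderivative obtained from the substitution really differentiates back to the integrand:

```python
import math

def antiderivative(x):
    return math.exp(x**3) / 3          # result of the substitution u = x^3

def integrand(x):
    return x**2 * math.exp(x**3)

# central-difference check: F'(x) should match the integrand
h = 1e-6
for x in [-1.0, -0.3, 0.2, 0.8, 1.1]:
    numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-5 * (1 + abs(integrand(x)))
```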
{ "language": "en", "url": "https://math.stackexchange.com/questions/52501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
A prize of $27,000 is to be divided among three people in the ratio 3:5:7. What is the largest share? This is not homework; I was just reviewing some old math flash cards and I came across this one I couldn't solve. I'm not interested in the solution so much as the reasoning. Thanks
Hint: 3+5+7=15. So separate the money into 15 distinct piles of equal amounts (why can we do that?). Give 3 piles to the first person, 5 piles to the second, and 7 piles to the third. This now amounts to finding how much money was given to the third person. Hope that helps.
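The pile reasoning above translates directly into a short computation:

```python
total = 27000
ratios = [3, 5, 7]
pile = total // sum(ratios)          # 15 equal piles of $1,800 each
shares = [r * pile for r in ratios]  # [5400, 9000, 12600]
print(max(shares))                   # largest share: 12600
```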
{ "language": "en", "url": "https://math.stackexchange.com/questions/52552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Problem in skew-symmetric matrix Let $A$ be a real skew-symmetric matrix. Prove that $I+A$ is non-singular, where $I$ is the identity matrix.
Let $\lambda \neq 0$ be an eigenvalue of $A$ with eigenvector $x$. Then: $$x^* A x = x^*(\lambda x) = \lambda x^* x$$ where $x^*$ is the Hermitian adjoint. Now, since $A$ is real, $A^* = A^T$ and we get $(x^* A x)^* = x^* A^T x = -x^* A x$. Also, $(x^* A x)^* = (\lambda x^* x)^* = \lambda^* x^* x$. Putting these two equations together yields: $$x^* A x = - \lambda^* x^* x$$ But since we have the same vector $x$, we have $-\lambda^* = \lambda$. Now, say $\lambda = a + ib$, so $-\lambda^* = -a + ib$. Thus we get $\lambda = ib$, i.e. $\lambda$ is pure imaginary. Now, say we have an eigenvalue $\lambda = ib$ with eigenvector $x$, $Ax = \lambda x$. This implies $(Ax)^* = \lambda^* x^*$ and $(Ax)^* = x^* A^* = x^* A^T = - x^* A$, so $-x^* A = \lambda^* x^*$. Take the transpose of both sides: $(-x^* A)^T = -A^T \overline{x} = A \overline{x}$ and $(\lambda^* x^*)^T = \lambda^* \overline{x}$, where $\overline{x}$ is the complex conjugate of $x$. We have reached: $$A \overline{x} = \lambda^* \overline{x}$$ Thus, $\lambda^* = -ib$ is an eigenvalue of $A$ with eigenvector $\overline{x}$. So, all non-zero eigenvalues of a real skew-symmetric matrix are pure imaginary and come in pairs $\lambda$ and $-\lambda$. Now, let $\lambda$ be a (possibly zero) eigenvalue of $A$ with eigenvector $x$. From this eigenvalue we get an eigenvalue for $I + A$: $$(I+A)x = Ix + Ax = x + \lambda x = (1+\lambda)x$$ since $\lambda$ is pure imaginary or zero, $1 + \lambda$ will always be non-zero. Since the determinant of a matrix is the product of its eigenvalues, we have that $\det(I+A) \neq 0$ and we can even deduce that $\det (I+A)$ is real and positive (since $(1 + ib)(1-ib) = 1 + b^2$). Hence $I+A$ is always invertible. Just a note: If $A$ is $n \times n$ with $n$ odd, $A$ will always have a zero eigenvalue, since $$\det A = \det A^T = \det (-A) = (-1)^n \det A$$ if $n$ is odd we have $\det A = - \det A$ which implies $\det A = 0$, which implies that at least one eigenvalue is zero.
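A small concrete check of these conclusions in the $3\times 3$ case (my own sketch): for a general real skew-symmetric $3\times 3$ matrix with entries $a, b, c$ above the diagonal, a direct cofactor computation gives $\det A = 0$ (odd dimension) and $\det(I+A) = 1 + a^2 + b^2 + c^2 > 0$.

```python
def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def skew(a, b, c):
    return [[0, a, b], [-a, 0, c], [-b, -c, 0]]

for (a, b, c) in [(1, 2, 3), (-4, 0.5, 7), (0, 0, 0)]:
    A = skew(a, b, c)
    I_plus_A = [[A[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]
    # odd dimension forces a zero eigenvalue, hence det A = 0
    assert abs(det3(A)) < 1e-9
    # det(I + A) works out to 1 + a^2 + b^2 + c^2 > 0, so I + A is invertible
    assert abs(det3(I_plus_A) - (1 + a*a + b*b + c*c)) < 1e-9
```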
{ "language": "en", "url": "https://math.stackexchange.com/questions/52593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
What is the name of the vertical bar in $(x^2+1)\vert_{x = 4}$ or $\left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$? I've always wanted to know what the name of the vertical bar in these examples was: $f(x)=(x^2+1)\vert_{x = 4}$ (I know this means evaluate $x$ at $4$) $\int_0^4 (x^2+1) \,dx = \left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$ (and I know this means that you would then evaluate at $x=0$ and $x=4$, then subtract $F(4)-F(0)$ if finding the net signed area) I know it seems trivial, but it's something I can't really seem to find when I go googling and the question came up in my calc class last night and no one seemed to know. Also, for bonus internets; What is the name of the horizontal bar in $\frac{x^3}{3}$? Is that called an obelus?
This may be called Evaluation bar. See, in particular, here (Evaluation Bar Notation:).
{ "language": "en", "url": "https://math.stackexchange.com/questions/52651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 4, "answer_id": 2 }
Questions about composite numbers Consider the following problem: Prove or disprove that if $n\in \mathbb{N}$, then $n$ is prime iff $$(n-1)!+n$$ is prime. If $n$ is composite and greater than $1$, then $n$ has a divisor less than $n-1$, therefore $(n-1)!$ and $n$ have a common factor. Thus "$\Leftarrow$" is true. To prove the other direction we can consider the more general problem: Let $n\in\mathbb{N}$. Consider the set $$C(n)=\{m\in\mathbb{N}:n+m\text{ is composite}\}.$$ How can we characterize the elements of $C(n)$? The ideal answer would be to describe all elements in $C(n)$ in terms of only $n$. But, is that possible? As a first approximation to solve this, we can start by defining for $n,p\in\mathbb{N}$: $$A(n,p)= \{ m\in\mathbb{N}:n+m\equiv 0\pmod{p} \}.$$ After some observations we can prove that $$A(n,p)=\{(\lceil n/p \rceil + k)p - n:k\in \mathbb{N}\}$$ and then $A(n,p)$ is the range of a function of the form $f_{n,p}:\mathbb{N}\to \mathbb{N}$. From this $$C(n)=\bigcup_{p=2}^\infty A(n,p),$$ But this is still far from a characterization in terms of $n$. What do you think is the best we can do, or the best we can hope for?
This is false. $$29 | 479001613 = (13 - 1)! + 13.$$
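The counterexample is easy to verify directly:

```python
import math

n = 13                               # n is prime
value = math.factorial(n - 1) + n    # 12! + 13 = 479001613
assert value == 479001613
assert value % 29 == 0               # so (n-1)! + n is composite even though n is prime
print(value, "= 29 *", value // 29)
```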
{ "language": "en", "url": "https://math.stackexchange.com/questions/52765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
On sorting in an array-less language This is partly a programming and partly a combinatorics question. I'm working in a language that unfortunately doesn't support array structures. I've run into a problem where I need to sort my variables in increasing order. Since the language has functions for the minimum and maximum of two inputs (but the language does not allow me to nest them, e.g. min(a, min(b, c)) is disallowed), I thought this might be one way towards my problem. If, for instance, I have two variables $a$ and $b$, I only need one temporary variable so that $a$ ends up being less than or equal to $b$: t = min(a, b); b = max(a, b); a = t; for three variables $a,b,c$, the situation is a little more complicated, but only one temporary variable still suffices so that $a \leq b \leq c$: a = min(a, b); t = max(a, b); c = max(t, c); t = min(t, c); b = max(a, t); a = min(a, t); Not having a strong combinatorics background, however, I don't know how to generalize the above constructions if I have $n$ variables in general. In particular, is there a way to figure out how many temporary variables I would need to sort out $n$ variables, and to figure out what is the minimum number of assignment statements needed for sorting? Thanks in advance!
I'd like to expand on Rahul's answer and note that, given that the number of items you're going to sort is presumably fixed and fairly small, you might want to take the extra effort to look up an optimal (or near-optimal) sorting network for that number of items.
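To make the sorting-network idea concrete, here is a sketch (my own illustration, in Python rather than the OP's unnamed language) of the optimal 5-comparator network for 4 inputs, built entirely from un-nested binary min/max exchanges in the same style as the 2- and 3-variable sequences in the question:

```python
from itertools import product

def compare_exchange(x, y):
    # one binary min and one binary max with a temporary, no nesting
    t = min(x, y)
    y = max(x, y)
    x = t
    return x, y

def sort4(a, b, c, d):
    # optimal 5-comparator sorting network for 4 inputs:
    # (a,b), (c,d), (a,c), (b,d), (b,c)
    a, b = compare_exchange(a, b)
    c, d = compare_exchange(c, d)
    a, c = compare_exchange(a, c)
    b, d = compare_exchange(b, d)
    b, c = compare_exchange(b, c)
    return a, b, c, d

# exhaustive check over small inputs, duplicates included
for vals in product(range(4), repeat=4):
    assert list(sort4(*vals)) == sorted(vals)
```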
{ "language": "en", "url": "https://math.stackexchange.com/questions/52802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Showing $a^2 < b^2$, if $0 < a < b$ Lately, I've been stumbling with proofs of inequalities. For example: Given $0 < a < b$ Show $a^2 < b^2$ The only thing I've been able to come up with so far: $a^2 < b^2$ $\sqrt{a^2} < \sqrt{b^2}$ $a < b$ OR $a < b$ $a^2 < b^2$ However, neither of these solutions seem to be really "showing" that $a^2 < b^2$, assuming $0 < a < b$. I've tried some other things, but to no avail. Am I merely overthinking the problem when, in fact, these are actually acceptable solutions, or am I truly missing something here?
$$0<a<b\Rightarrow 0\cdot a<a\cdot a<b\cdot a<b\cdot b\Rightarrow a^{2}<b^{2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/52877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Integer solutions of $3a^2 - 2a - 1 = n^2$ I've got an equation $3a^2 - 2a - 1 = n^2$, where $a,n \in \mathbb{N}$. I put it in Wolfram Alpha and besides everything else it gives integer solution: see here. For another equation (say, $3a^2 - 2a - 2 = n^2$, where $a,n \in \mathbb{N}$) Wolfram Alpha does not provide integer solutions: here. Could you please tell me: * *How does Wolfram Alpha determine existence of the integer solutions? *How does it find them? *What should I learn to be able to do the same with a pencil and a piece of paper (if possible)? Thanks in advance!
Lagrange showed how to reduce a general binary quadratic Diophantine equation to Pell form. $$\rm a\ x^2 + b\ xy + c\ y^2 + d\ x + e\ y + f\ =\ 0 $$ reduces to a Pell equation as follows: put $\rm\ D = b^2-4ac,\ E = bd-2ae,\ F = d^2-4af\:.\ $ Then $$\rm D\ Y^2\ =\ (D\ y + E)^2 + D\ F - E^2,\quad\quad Y\ =\ 2ax + by + d $$ Therefore if we put $\rm\quad\ \ X\: =\: D\ y + E,\quad\ \ N\: =\: E^2 - D\ F\quad\ \ $ we obtain the Pell equation $$\rm X^2 - D\ Y^2\ =\ N $$ Now you can apply standard techniques for solving Pell equations. They are a bit too complex to describe here. However, you can obtain complete step-by-step descriptions of the solution to any Pell equation using Dario Alpern's Quadratic two integer variable equation solver. For some recent optimizations of Lagrange's algorithm see the paper by H. C. Williams et al., A new look at an old equation.
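With pencil and paper you would follow the Pell reduction above; as a quick empirical complement (not a proof, and not what Wolfram Alpha itself does), a brute-force search confirms the behavior reported in the question: the first equation has integer solutions while the second has none in a large range.

```python
from math import isqrt

def solutions(f, a_max):
    """Integer solutions (a, n) of f(a) = n^2 with 1 <= a <= a_max, n >= 0."""
    out = []
    for a in range(1, a_max + 1):
        v = f(a)
        if v >= 0:
            n = isqrt(v)
            if n * n == v:
                out.append((a, n))
    return out

has_solutions = solutions(lambda a: 3*a*a - 2*a - 1, 1000)
no_solutions  = solutions(lambda a: 3*a*a - 2*a - 2, 1000)
print(has_solutions[:5])             # e.g. (1, 0), (5, 8), (65, 112), ...
assert (5, 8) in has_solutions       # 3*25 - 10 - 1 = 64 = 8^2
assert no_solutions == []            # empirical check only, not a proof
```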
{ "language": "en", "url": "https://math.stackexchange.com/questions/52940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Finding the nth term in a repeating number sequence I'm trying to figure out how to solve these types of repeating number sequence problems. Here is one I made up: Consider the following repeating number sequence: {4, 8, 15, 16, 23, 42, 4, 8, 15, 16, 23, 42, 4, 8, 15, 16, 23, 42,…} in which the first 6 numbers keep repeating. What is the 108th term of the sequence? I was told that when a group of k numbers repeats itself, to find the *n*th number, divide n by k and take the remainder r. The *r*th term and the *n*th term are always the same. 108 / 6 = 18, r = 0 So the 108th term is equal to the 0th term? Undefined? I'm confused at how this works. Thanks!
You are looking for modular arithmetic. The procedure you described of dividing and taking the remainder is encapsulated in modular arithmetic. The only wrinkle is the remainder $0$: with $1$-indexed terms, a remainder of $0$ means you have landed on the last element of the cycle, so the 108th term is the 6th one, namely $42$.
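Shifting to $0$-indexing sidesteps the "remainder 0" confusion entirely:

```python
def nth_term(seq, n):
    """n-th term (1-indexed) of the sequence that repeats seq forever."""
    return seq[(n - 1) % len(seq)]   # 0-indexing handles the 'remainder 0' case

cycle = [4, 8, 15, 16, 23, 42]
print(nth_term(cycle, 108))          # 108 = 18 * 6, so it is the last cycle element
```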
{ "language": "en", "url": "https://math.stackexchange.com/questions/52998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Prove that $(a_1a_2\cdots a_n)^{2} = e$ in a finite Abelian group Let $G$ be a finite abelian group, $G = \{e, a_{1}, a_{2}, ..., a_{n} \}$. Prove that $(a_{1}a_{2}\cdot \cdot \cdot a_{n})^{2} = e$. I've been stuck on this problem for quite some time. Could someone give me a hint? Thanks in advance.
Here is a hint: for any given $a_i\in G$, there are two possibilities: either * *$a_i$ is its own inverse, or *$a_i$ is not its own inverse, but rather $a_j=a_i^{-1}$ for some $j\neq i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 4 }
Example of a linear operator T with only trivial T-invariant subspaces I am trying to construct an example of a linear operator $T : \mathbb{Q}^3 \rightarrow \mathbb{Q}^3$ for which the only $T$-invariant subspaces are the whole space and the zero subspace. If we first look at an example from the 2x2 case, let $T$ be the linear operator on $\mathbb{R}^2$ represented in the standard ordered basis by $$ A = \left( \begin{array}{ccc} 0 & -1 \\ 1 & 0 \end{array} \right) $$ Then if $W$ is any other invariant subspace not equal to $\{0\}$ or the whole space then $W$ must have dimension $1$ and so is spanned by some nonzero vector $\alpha$. But $W$ invariant under $T$ implies that $\alpha$ is an eigenvector, but $A$ has no real eigenvalues. If we try to apply the above logic to a 3x3 matrix then I am stuck on what to do if I assume the dimension of the invariant subspace is 2. Question: In any case is it still clear that if $A$ represents some linear operator $T : \mathbb{Q}^3 \rightarrow \mathbb{Q}^3$ then for $T$ to have no nontrivial invariant subspaces should $A$ not have any real eigenvalues?
Over the reals, you won't find any examples in dimension 3 or any odd dimension because every operator in such a space has an eigenvector (since every real polynomial of odd degree has a real root). Over the rationals, you only need to find a polynomial of degree 3 with rational coefficients having no rational root and take its companion matrix. The simplest one I can think of is $x^3-x-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $ 30 \mid ab(a^2+b^2)(a^2-b^2)$ How can I prove that $30 \mid ab(a^2+b^2)(a^2-b^2)$ without using $a,b$ congruent modulo $5$ and then $a,b$ congruent modulo $6$ (for example) to show respectively that $5 \mid ab(a^2+b^2)(a^2-b^2)$ and $6 \mid ab(a^2+b^2)(a^2-b^2)$? Indeed this method implies studying numerous congruences and is quite long.
The intention of this answer is to show you that trying all possibilities is in fact not that long. (Of course, it is more elegant when you find a solution which avoids trying all possibilities.) Let me start by plotting a 5x5 table with all possibilities for the remainders of a and b. (I do not know of a good way of making tables here - I tried something anyway.) $$ \begin{array}{c|ccccc} b \backslash a & 0 & 1 & 2 & 3 & 4 \\ \hline 0 & & & & & \\ 1 & & & & & \\ 2 & & & & & \\ 3 & & & & & \\ 4 & & & & & \\ \end{array} $$ If we rewrite our expression as $ab(a-b)(a+b)(a^2+b^2)$, we see that all possibilities where $a=0$ or $b=0$ are ok (marked by $\circ$). $$ \begin{array}{c|ccccc} b \backslash a & 0 & 1 & 2 & 3 & 4 \\ \hline 0 & \circ & \circ & \circ & \circ & \circ \\ 1 & \circ & & & & \\ 2 & \circ & & & & \\ 3 & \circ & & & & \\ 4 & \circ & & & & \\ \end{array} $$ Also possibilities where a=b are ok (since $a-b\equiv 0\pmod 5$), and so are those where $a=5-b$ (since $a+b\equiv 0\pmod 5$), hence we can omit both diagonals (marked by $\bullet$). $$ \begin{array}{c|ccccc} b \backslash a & 0 & 1 & 2 & 3 & 4 \\ \hline 0 & \circ & \circ & \circ & \circ & \circ \\ 1 & \circ & \bullet & & & \bullet \\ 2 & \circ & & \bullet & \bullet & \\ 3 & \circ & & \bullet & \bullet & \\ 4 & \circ & \bullet & & & \bullet \\ \end{array} $$ There are only 8 possibilities left, and since the roles of a and b are symmetric, we only have to try: (1,2), (1,3), (4,2), (4,3). In all these cases $a^2+b^2\equiv 0 \pmod 5$.
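Since the product modulo 30 depends only on the residues of $a$ and $b$ modulo 30, checking all residue pairs exhaustively settles the claim by machine (a brute-force complement to the table argument above):

```python
# exhaustive check of 30 | ab(a^2 + b^2)(a^2 - b^2);
# the product mod 30 depends only on a mod 30 and b mod 30,
# so ranging over 0..29 in each variable covers every case
for a in range(30):
    for b in range(30):
        assert (a * b * (a*a + b*b) * (a*a - b*b)) % 30 == 0
print("verified for all residue pairs mod 30")
```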
{ "language": "en", "url": "https://math.stackexchange.com/questions/53135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Approximating Lambert W for input below 0 As a small part of a much bigger project, I need to be able to approximate the numerical output of the Lambert W function. I have found decent approximations (good up to at least 4 decimal places), for W for inputs on $[3,\infty)$ and $[0,3)$. I thought this was going to be sufficient, but it turns out much of the input to the function will have to be below zero. What is a good (and not too complicated) approximation of W for negative input? Keep in mind I'm a programmer, not a mathematician.
Assuming you're going for the principal branch, you'll want the expansion of the Lambert function $W(z)$ about the branch point $z=-e^{-1}$: $$W(z)=-1+t-\frac{t^2}{3}+\frac{11}{72}t^3+\dots$$ where $t=\sqrt{2ez+2}$. If you require more terms, section 3 of this paper mentions the (complicated!) recurrence required to generate the coefficients of this series, as well as the adjustments you need to make if what you require is the "lower" branch $W_{-1}(z)$ (hint: you futz with the square root). Alternatively, Winitzki gives in his paper a convenient approximation for the principal branch for arguments near the branch point: $$W(z)\approx\frac{ez}{1+\left((e-1)^{-1}-\frac1{\sqrt{2}}+\frac1{\sqrt{2ez+2}}\right)^{-1}}$$ Here's a graphical side-by-side comparison of three approximants over the interval $(-e^{-1},0)$. The plots are of functions of the form $f(z)-W(z)$, where $f(z)$ is one of the following: the first five terms of the branch point series, a $(3,2)$ Padé approximant constructed from the series, $$W(z)\approx\frac{-1+\frac16 t+\frac{257}{720}t^2+\frac{13}{720}t^3}{1+\frac56 t+\frac{103}{720}t^2}, \qquad t=\sqrt{2ez+2}$$ and the Winitzki approximant. (The comparison plot itself is not reproduced here.)
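For a programmer's sketch (my own addition): the branch-point series is easy to code, and a few Newton iterations on $we^w = z$ polish any initial estimate to full precision. The $t^4$ coefficient $-43/540$ below is the next standard series term; it is worth double-checking against the recurrence in the paper cited above. Libraries such as SciPy also ship a ready-made `lambertw` if one is available to you.

```python
import math

def lambertw_series(z):
    """Principal branch near the branch point z = -1/e; first five series terms.
    The -43/540 coefficient is an assumption taken from the standard expansion."""
    t = math.sqrt(2 * math.e * z + 2)
    return -1 + t - t**2 / 3 + (11 / 72) * t**3 - (43 / 540) * t**4

def lambertw_newton(z, w0, iterations=50):
    """Refine an estimate of W(z) by Newton's method on f(w) = w*exp(w) - z."""
    w = w0
    for _ in range(iterations):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1))
    return w

z = -0.3                            # inside (-1/e, 0)
approx = lambertw_series(z)
exact = lambertw_newton(z, approx)
print(approx, exact)                # series good to a few 1e-3 this close to -1/e
assert abs(exact * math.exp(exact) - z) < 1e-12   # exact really solves w*e^w = z
assert abs(approx - exact) < 1e-2
```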
{ "language": "en", "url": "https://math.stackexchange.com/questions/53191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Factorial decomposition of integers? This question might seem strange, but I had the feeling it's possible to decompose in a unique way a number as follows: if $x < n!$, then there is a unique way to write x as: $$x = a_1\cdot 1! + a_2\cdot 2! + a_3\cdot3! + ... + a_{n-1}\cdot(n-1)!$$ where $a_i \leq i$ I looked at factorial decomposition on google but I cannot find any name for such a decomposition. example: If I choose: (a1,a2) = * *1,0 -> 1 *0,1 -> 2 *1,1 -> 3 *0,2 -> 4 *1,2 -> 5 I get all numbers from $1$ to $3!-1$ ideas for a proof: The number of elements between $1$ and $N!-1$ is equal to $N!-1$ and I have the feeling they are all different, so this decomposition should be right. But I didn't prove it properly. Are there proofs of this decomposition? Does this decomposition have a name? And above all, is this true? Thanks in advance
You can also reason as follows: suppose you've shown for some integer $n$ that every integer $\in\lbrace0,\dots,n!-1\rbrace$ has a unique decomposition as you suggest. Take $k\in\lbrace0,\dots,(n+1)!-1\rbrace.$ Write $$k=q\cdot n!+r$$ the Euclidean division of $k$ with respect to $n!$. Necessarily you have $0\leq q < n+1$. This gives you the expression you want, since $0\leq r<n!$ has, by hypothesis, an expression involving only factorials up to $(n-1)!$. Finally, to show uniqueness, you can again use your hypothesis to deduce that if $k=a_n\cdot n!+ \sum_0^{n-1} a_i\cdot i!,$ then $0\leq \sum\dots\leq n!-1$ by hypothesis, and this tells you that this decomposition is the Euclidean division of $k$ by $n!$. So by uniqueness of the Euclidean division, it is the decomposition we constructed earlier.
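The repeated Euclidean divisions above are exactly the standard algorithm for the factorial number system; a short sketch (my own illustration) both computes the digits and verifies existence and uniqueness exhaustively for small $n$:

```python
import math

def to_factorial_base(x, n):
    """Digits (a_1, ..., a_{n-1}) with x = sum a_i * i! and 0 <= a_i <= i."""
    digits = []
    for i in range(2, n + 1):
        x, r = divmod(x, i)       # a_{i-1} is the remainder of x mod i
        digits.append(r)
    assert x == 0, "input must satisfy x < n!"
    return digits

def from_factorial_base(digits):
    return sum(a * math.factorial(i) for i, a in enumerate(digits, start=1))

# existence and uniqueness for every x < 6!: the round trip is the identity
n = 6
seen = set()
for x in range(math.factorial(n)):
    d = tuple(to_factorial_base(x, n))
    assert from_factorial_base(d) == x
    seen.add(d)
assert len(seen) == math.factorial(n)   # all representations distinct
```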
{ "language": "en", "url": "https://math.stackexchange.com/questions/53262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Normal curvature along a line of curvature I have come across the following exercise (the context is curves and surfaces in $\mathbb{R}^3$ and the Gauss map): If $C=\alpha(I)$ is a line of curvature, and $k$ is its curvature at $p$, then $$ k = \mid k_n k_N \mid $$ where $k_n$ is the normal curvature at $p$ along the tangent line of $C$, and $k_N$ is the curvature of the spherical image $N(C) \subset S^2$ at $N(p)$. I am not sure I understand the question, though... because it seems to me that if $C$ is a line of curvature, the normal curvature should be identical to the curvature of $C$ itself, hence $k = k_n$. Am I mistaken?
$C$ being a line of curvature means that the tangent vector of $C$ at every point is a principal direction, not that its normal curvature is identical to its curvature. To solve your problem, you will need a formula for curvature that doesn't use parameterization by arc-length since the Gauss map doesn't always give a curve parameterized by arc length. Let's use $k(t) = \frac{|\alpha' \times \alpha''|}{|\alpha'|^3}$ (exercise 12 in section 1-5 of Do Carmo, which you are probably using). $C$ being a line of curvature means that the Gauss map $N$ is such that $N'(t) = \lambda(t) \alpha'(t)$, where $-\lambda(t)$ is the curvature in the direction of $\alpha'(t)$. If you compute the curvature $k_N$ using these two formulas you should get the result after rearranging. Comment if you have more questions :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/53384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Real world applications of Pythagoras' Theorem I have a school assignment, and it requires me to list a few of the real world applications of Pythagoras Theorem. However, most of the ones I found are rather generic, and not special at all. What are some of the real world applications of Pythagoras' Theorem?
Say you're playing frisbee with a few of your friends, and the frisbee gets stuck in a tree. You want to get a ladder to reach it, but you don't know how long the ladder needs to be. You can mark the point where you want the ladder to touch the ground and then measure from there. Since $ds^2=dx^2+dy^2$, we can say, as an example, that $dx=3\text{ m}$ and $dy=4\text{ m}$. Now we must solve for $ds$. Here are my calculations: $ds^2=3^2+4^2=9+16=25$ $ds=\sqrt{ds^2}=\sqrt{25}=5$ $ds=5\text{ m}$ You'd need a ladder that is $5\text{ m}$ long to reach the frisbee.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Find the image of a vector by using the standard matrix (for the linear transformation T) Was wondering if anyone can help out with the following problem: Use the standard matrix for the linear transformation $T$ to find the image of the vector $\mathbf{v}$, where $$T(x,y) = (x+y,x-y, 2x,2y),\qquad \mathbf{v}=(3,-3).$$ I found out the standard matrix for $T$ to be: $$\begin{bmatrix}1&1\\1&-1\\2&0\\0&2\end{bmatrix}$$ From here I honestly don't know how to find the "image of the vector $\mathbf{v}$". Does anyone have any suggestions?
I would like to do it in a systematic way. It is instructive to see how the OP's problem fits in the following steps. Let $V$ (resp. $W$) be an $n$ (resp. $m$) dimensional vector space over $\mathbb{R}$. Let $$\alpha=(v_1,\cdots,v_n)$$ be an ordered basis in $V$ and $$\beta=(w_1,\cdots,w_m)$$ an ordered basis in $W$. For any vector $x\in V$, denote its coordinate w.r.t. the basis $\alpha$ as $$ [x]_\alpha=(x_1,\cdots,x_n)^T $$ and for any vector $y\in W$, denote its coordinate w.r.t. the basis $\beta$ as $$ [y]_\beta=(y_1,\cdots,y_m)^T. $$ Let $T:V\to W$ be a linear transformation. Let $[T]_\beta^\alpha$ denote the matrix for $T$ w.r.t. the bases $\alpha$ and $\beta$, i.e., $$ [T]_\beta^\alpha=[[Tv_1]_\beta,\cdots,[Tv_n]_\beta]. $$ Note in particular that $[T]_\beta^\alpha$ is an $m\times n$ matrix. Given $x\in V$, we have $$ [Tx]_\beta=[T(x_1v_1+\cdots+x_nv_n )]_\beta\\ =x_1[T(v_1)]_\beta+\cdots+x_n[T(v_n)]_\beta\\ =[T]_\beta^\alpha[x]_\alpha $$
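For this particular problem the machinery specializes to a single matrix–vector product with the standard matrix the asker already found:

```python
# standard matrix of T(x, y) = (x + y, x - y, 2x, 2y) w.r.t. standard bases
A = [[1,  1],
     [1, -1],
     [2,  0],
     [0,  2]]
v = (3, -3)

image = [sum(A[i][j] * v[j] for j in range(2)) for i in range(4)]
print(image)                                        # the image of v under T
assert image == [3 + (-3), 3 - (-3), 2*3, 2*(-3)]   # matches T(3, -3) directly
```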
{ "language": "en", "url": "https://math.stackexchange.com/questions/53525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Any idea about N-topological spaces? In Bitopological spaces, Proc. London Math. Soc. (3) 13 (1963) 71–89 MR0143169, J.C. Kelly introduced the idea of bitopological spaces. Is there any paper concerning the generalization of this concept, i.e. a space with any number of topologies?
I close the question with the following answer: On the possibility of N-topological spaces, International Journal of Mathematical Archive-3(7), 2012, 2520-2523 (http://www.ijma.info/index.php/ijma/article/view/1442)
{ "language": "en", "url": "https://math.stackexchange.com/questions/53573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Speeding through the primes until they look uniform Imagine you are traveling out the $x$-axis such that your velocity at $x$ is $v(x)$. If $v(x)=1$, then you pass the primes at increasingly longer intervals, on average (of course there are close primes, e.g., twin primes). Is there some function $v(x)$ so that the event of passing a prime becomes uniformly random, in that the expected time between passing primes becomes a constant? For example, $v(x) = x / \log^2 x$ doesn't quite work, in that the delay between passing primes still increases with $x$.
My closest interpretation to your question is if there is a function $v(t)$ such that $$\pi\left( \int_0^T v(t)dt\right)\sim aT$$ for some "prime-hitting" frequency $a$. The answer is yes. We'll look at a basic example. Denote the position function as $F(T) = \int_0^T v(t)dt$, then, using PNT (Wikipedia / MathWorld), transform the equation into a more tractable form $$ \frac{F(T)}{\ln F(T)} \sim aT.$$ We will do one better and ensure equality of the above expression. (This is not the same as equality in the original asymptotic formula with $\pi(\cdot)$, but still entails our original desired form as a logical consequence.) Take the negative reciprocal of both sides, $$ \frac{1}{F(T)} \ln \frac{1}{F(T)} = -\frac{1}{aT},$$ and then use the Lambert W function (Wikipedia / MathWorld) to simplify, $$ \ln\frac{1}{F(T)} = W\left(-\frac{1}{aT}\right)$$ $$ F(T) = \exp\left( - W\left(-\frac{1}{aT}\right)\right)$$ which gives: $$ v(t) = \frac{d}{dt} \exp\left( - W\left(-\frac{1}{at}\right)\right).$$ Note you can use a derivative formula for $W(\cdot)$ found in the linked articles alongside the chain rule if you so desire. And you can use even better asymptotes of the prime counting function with the same inverse-function reasoning in order to obtain velocities with more uniform prime-hitting. EDIT: There might be problems in realizing this empirically due to the domain of the $W$ function and branch cuts. I'm having problems figuring how how to get a working graphic off of Alpha, or if my formula needs to be augmented to address some technicality I'm not seeing.
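Regarding the EDIT: in my reading (an assumption on my part, not stated in the answer), for $z=-1/(aT)\in(-1/e,0)$ one needs the lower branch $W_{-1}$, since the principal branch would give $F(T)\approx 1$. A sketch computing $W_{-1}$ by Newton iteration confirms the defining identity $F/\ln F = aT$ and, loosely, the prime-hitting rate:

```python
import math

def lambertw_branch_minus1(z, iterations=80):
    """W_{-1}(z) for z in (-1/e, 0), via Newton iteration on w*exp(w) = z."""
    w = math.log(-z) - math.log(-math.log(-z))   # asymptotic start as z -> 0-
    for _ in range(iterations):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1))
    return w

a, T = 1.0, 100.0
w = lambertw_branch_minus1(-1 / (a * T))
F = math.exp(-w)                                 # position after time T
# by construction F / ln F = a*T, the PNT-style prime-hitting condition
assert abs(F / math.log(F) - a * T) < 1e-9

# loose empirical check: pi(F) is within a constant factor of a*T
limit = int(F)
sieve = [True] * (limit + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(limit**0.5) + 1):
    if sieve[p]:
        for q in range(p * p, limit + 1, p):
            sieve[q] = False
primes_passed = sum(sieve)
print(F, primes_passed)   # pi(F) is near, though above, a*T = 100 at this size
assert 0.7 * a * T < primes_passed < 1.5 * a * T
```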
{ "language": "en", "url": "https://math.stackexchange.com/questions/53657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Lebesgue measure of an intersection of four sets that are contained in [0,1] Let $A_1,A_2,A_3, A_4$ be measurable subsets of $[0,1]$, such that $\displaystyle\sum_{k=1}^{4}m(A_k)>3$. Prove that $$ m\left(\bigcap_{k=1}^{4}A_k\right)>0. $$
Let the superscript $c$ denote the complement in $[0,1]$. Recall that $m(A)=1-m(A^c)$. Then $$ m\left(\bigcap_{i=1}^4A_i\right)=1-m\left(\bigcup_{i=1}^4A_i^c\right)\geq 1-\sum_{i=1}^4 m(A_i^c)$$ $$=\sum_{i=1}^4 m(A_i)-3>0.$$ The last inequality follows from our starting assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A question about independent rvs and expectation. Let $X$, $Y$ be independent random variables, $E\left(\left|X\right|^p\right)<+\infty$ where $p\geq 1$ and $E(Y)=0$. Show that $E\left(\left|X+Y\right|^p\right)\geq E(\left|X\right|^p)$, where $E\left(\cdot\right)$ stands for expectation.
Hint: for every fixed $x$ and every random variable $Y$, $E(|x+Y|^p)\geq|x+E(Y)|^p$ (which is $|x|^p$ when $E(Y)=0$). Jensen's inequality seems to be the way to prove this, hence the first goal is to find a convex function somewhere... Once you know this, the rest should be easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Multiplicative inverses for elements in field How to compute multiplicative inverses for elements in any simple (not extended) finite field? I mean an algorithm which can be implemented in software.
If 'simple' means a prime field $\mathbf{Z}/p\mathbf{Z}$ to you, then, given an integer $x$ coprime to $p$, you simply need to find an integer $y$ such that $xy\equiv1\pmod{p}$. Such a $y$ is produced by the extended Euclidean algorithm; look up the paragraph on multiplicative inverses in the Wikipedia page on the extended Euclidean algorithm.
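As a concrete sketch (the function names are mine): the extended Euclidean algorithm produces $s,t$ with $sx + tp = \gcd(x,p)$, and when the gcd is $1$, $s \bmod p$ is the inverse.

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

def mod_inverse(x, p):
    g, s, _ = extended_gcd(x % p, p)
    if g != 1:
        raise ValueError("x is not invertible modulo p")
    return s % p

print(mod_inverse(3, 7))  # 5, since 3 * 5 = 15 ≡ 1 (mod 7)
```

(In Python 3.8+ the built-in `pow(x, -1, p)` computes the same modular inverse directly.)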
{ "language": "en", "url": "https://math.stackexchange.com/questions/53879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Connection Between Automorphism Groups of a Graph and its Line Graph First, the specific case I'm trying to handle is this: I have the graph $\Gamma = K_{4,4}$. I understand that its automorphism group is the wreath product of $S_4 \wr S_2$ and thus it is a group of order 24*24*2=1152. My goal is to find the order of the AUTOMORPHISM GROUP of the Line Graph: $L(\Gamma)$. That is - $|Aut(L(G))|$ I used GAP and I already know that the answer is 4608, which just happens to be 4*1152. I guess this isn't a coincidence. Is there some sort of an argument which can give me this result theoretically? Also, I would use this thread to ask about information of this problem in general (Connection Between Automorphism Groups of a Graph and its Line Graph). I suppose that there is no general case theorem. I was told by one of the professors in my department that "for a lot of cases, there is a general rule of thumb that works" although no more details were supplied. If anyone has an idea what he was referring to, I'd be happy to know. Thanks in advance, Lost_DM
I'm not sure if this is what you are asking, but there is always a map from $\mbox{Aut}(G)$ to $\mbox{Aut}(L(G))$ that is an injection in your case, explaining your divisibility relation. If $\phi$ is an automorphism of $G$, then $\phi$ also acts on $L(G)$: $\phi$ already knows what to do to vertices of $L(G)$ (which are edges of $G$), and $\phi$ also preserves the edge incidence relation, so extends to a map on edges of $L(G)$. This is not always an injection: consider the case of the graph with a single edge connecting distinct vertices. But in your case the map is an injection, which explains your divisibility relation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/53939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Determine limit of |a+b| This is a simple problem I am having a bit of trouble with. I am not sure where this leads. Given that $\vec a = \begin{pmatrix}4\\-3\end{pmatrix}$ and $|\vec b|$ = 3, determine the limits between which $|\vec a + \vec b|$ must lie. Let, $\vec b = \begin{pmatrix}\lambda\\\mu\end{pmatrix}$, such that $\lambda^2 + \mu^2 = 9$ Then, $$ \begin{align} \vec a + \vec b &= \begin{pmatrix}4+\lambda\\-3 + \mu\end{pmatrix}\\ |\vec a + \vec b| &= \sqrt{(4+\lambda)^2 + (\mu - 3)^2}\\ &= \sqrt{\lambda^2 + \mu^2 + 8\lambda - 6\mu + 25}\\ &= \sqrt{8\lambda - 6\mu + 34} \end{align} $$ Then I assumed $8\lambda - 6\mu + 34 \ge 0$. This is as far I have gotten. I tried solving the inequality, but it doesn't have any real roots? Can you guys give me a hint? Thanks.
Given $\lambda^2+\mu^2=9$ (1), you need to find the maximum and minimum of $y = 8\lambda - 6\mu$ (2). One way to do it, I think, is to substitute $\lambda$ from (2) into (1), getting a quadratic equation in $\mu$ with $y$ as a parameter. Requiring this equation to have a real root (a nonnegative discriminant) gives the range of values of $y$. Another way is to use a graph. The graph of (1) is a circle, and (2) rewrites as $\lambda = y/8 + \frac{3}{4}\mu$. As $y$ varies, you get a family of lines parallel to $\lambda = \frac{3}{4}\mu$, and you want the minimum and maximum of $y$ for which the line still cuts the circle (the line intersects the $\lambda$-axis at $y/8$). A third, more general way (from Calculus 3) is to use the Lagrange multiplier method.
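A quick numerical sketch of my own as a sanity check: parametrize $\vec b = (3\cos t, 3\sin t)$ and scan $|\vec a+\vec b|$. Geometrically one expects the extremes $\bigl||\vec a|-|\vec b|\bigr| = 2$ and $|\vec a|+|\vec b| = 8$, since $|\vec a| = 5$.

```python
import math

a = (4.0, -3.0)
r = 3.0  # |b| = 3

vals = []
n = 20000
for k in range(n):
    t = 2.0 * math.pi * k / n
    b = (r * math.cos(t), r * math.sin(t))
    s = (a[0] + b[0], a[1] + b[1])
    vals.append(math.hypot(s[0], s[1]))

print(min(vals), max(vals))  # ≈ 2 and 8
```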
{ "language": "en", "url": "https://math.stackexchange.com/questions/54001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
To determine whether a field contains free abelian groups of arbitrarily large finite rank Suppose that $K$ is an algebraically closed field. There is a statement: If $K$ is not the algebraic closure of a finite field, then $K^*$ contains free abelian groups of arbitrarily large finite rank. Is it true? And why? Moreover, is it true that $K$ is not the algebraic closure of a finite field if and only if $K^*$ contains free abelian groups of arbitrarily large finite rank? Thanks very much.
Yes. If $K$ has characteristic zero, then it contains $\mathbb{Q}$, and $\mathbb{Q}^{\ast}$ contains a free abelian group of infinite rank (on the primes). Otherwise, $K$ contains an element $x$ transcendental over the prime subfield $\mathbb{F}_p$, so $K$ contains $\mathbb{F}_p(x)$, and $\mathbb{F}_p(x)^{\ast}$ contains a free abelian subgroup of infinite rank (on the irreducible polynomials over $\mathbb{F}_p$). For the second statement, it is necessary and sufficient that $K$ is not contained in the algebraic closure of a finite field. We do not need the hypothesis that $K$ is algebraically closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/54080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the correct pivot in Smith normal form I have been working through Smith normal form examples and I am wondering if I am finding the correct pivot in order to carry out the calculation. Let $V$ be an Abelian group with relation matrix $A$. $$ A = \begin{pmatrix} 2 & -6 & 0 \\ 0 & 2 & -6 \\ -6 & 0 & 2 \end{pmatrix} $$ Question 1: For the case of entries in $\mathbb{Z}$, is the first step always to bring the smallest integer to the $(1,1)$ position in the matrix? So we don't divide by 2 in this case, right? (I was trying to do something similar to the matrix $A$ in the Wikipedia article, but I have no idea how to make the first pivot 1 and still get $SNF(xI-A) = \begin{pmatrix}1&0 \\0 &(x-1)^2 \end{pmatrix}$.) Applying my logic, this is what I get for the Smith normal form (SNF) of the original problem: $$\begin{pmatrix} 2 & -6 & 0 \\ 0 & 2 & -6 \\ -6 & 0 & 2 \end{pmatrix} \sim \begin{pmatrix} 2 & -6 & 0 \\ 0 & 2 & -6 \\ 0 & -18 & 2 \end{pmatrix} \sim \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & -6 \\ 0 & -18 & 2 \end{pmatrix} \sim \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & -6 \\ 0 & 0 & 52 \end{pmatrix} \sim \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 52 \end{pmatrix}$$ Question 2: Is $V = \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z} / 52\mathbb{Z}$ based on the above Smith normal form?
In Wikipedia's example of SNF$(xI-A)$, the purpose is to determine whether two matrices over a field are similar by computing the Smith normal forms of their characteristic matrices. The field might, for example, be the rational numbers. In that case, the elements of the characteristic matrix live in the ring of polynomials with rational coefficients. The units in this ring, i.e. the elements that have inverses, are the nonzero rational numbers. So in this case you are allowed to multiply rows or columns by nonzero rational numbers. You just can't multiply rows or columns by polynomials of nonzero degree, since they don't have inverses in the ring. But you would be allowed, for example, to multiply row 1 of the matrix by $1/2$. Then, by adding a suitable multiple of column 2 to column 1, and a suitable multiple of row 1 to row 2, you can obtain the result Wikipedia gives, up to permutation. In general, the operations you are allowed to do are

* Permute rows.
* Multiply a row by a unit (invertible element) of the ring. (In the ring of integers this means only $\pm1$, but in a ring of polynomials over a field, it could be any nonzero field element.)
* Add a multiple of a row to another row.
* Do any of the corresponding column operations.

Note that you cannot always clear the first row and column in a single pass. In a Euclidean domain, you can, by suitable row operations, bring the GCD of the first column into the (1,1) position and clear the remainder of the column. Then by suitable column operations, you can bring the GCD of the first row into the (1,1) position and clear the remainder of the row. But at that point the first column may no longer be cleared, so you may have to repeat the process. It is guaranteed that, in finitely many steps, you will end up with both the first row and column cleared. (Think about why.) By an iterative process, you eventually reach diagonal form.
The diagonal elements may not satisfy the divisibility requirements of the SNF, however. That can always be fixed by suitable operations involving pairs of diagonal elements. For example, $$ \begin{aligned} \begin{bmatrix} 18 & 0\\0 & 30 \end{bmatrix}&\sim \begin{bmatrix} 18 & 18\\0 & 30 \end{bmatrix}\sim \begin{bmatrix} 18 & 18\\-18 & 12 \end{bmatrix}\sim \begin{bmatrix} 36 & 18\\-6 & 12 \end{bmatrix}\sim \begin{bmatrix} 0 & 90\\-6 & 12 \end{bmatrix}\sim \begin{bmatrix} 0 & 90\\-6 & 0 \end{bmatrix}\\ &\sim\begin{bmatrix} -6 & 0\\0 & 90 \end{bmatrix} \sim\begin{bmatrix} 6 & 0\\0 & 90 \end{bmatrix} \end{aligned} $$ In your example this extra step is not needed: the diagonal form $\operatorname{diag}(2,2,52)$ already satisfies $2\mid 2\mid 52$, so it is the Smith normal form, and the answer to Question 2 is yes: $V \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/52\mathbb{Z}$.
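One convenient way to double-check a hand computation like the one in the question is via determinantal divisors: the $k$-th invariant factor equals $d_k/d_{k-1}$, where $d_k$ is the gcd of all $k\times k$ minors (and $d_0=1$). A small pure-Python sketch of my own for the $3\times3$ integer matrix above:

```python
from math import gcd
from itertools import combinations

def det(m):
    # cofactor expansion along the first row; fine for tiny matrices
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def determinantal_divisor(m, k):
    """gcd of all k x k minors of m."""
    rows, cols = range(len(m)), range(len(m[0]))
    g = 0
    for R in combinations(rows, k):
        for C in combinations(cols, k):
            minor = det([[m[i][j] for j in C] for i in R])
            g = gcd(g, abs(minor))
    return g

A = [[2, -6, 0], [0, 2, -6], [-6, 0, 2]]
d = [1] + [determinantal_divisor(A, k) for k in (1, 2, 3)]
invariant_factors = [d[k] // d[k - 1] for k in (1, 2, 3)]
print(invariant_factors)  # [2, 2, 52]
```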
{ "language": "en", "url": "https://math.stackexchange.com/questions/54135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Powers and Roots of Group Elements Let $G$ be a group, and $a$, $b \in G$.

1. $(bab^{-1})^{n} = ba^{n}b^{-1}$, for every positive integer $n$.

\begin{align*} \text{Let P(n) be the statement: } (bab^{-1})^{n} &= ba^{n}b^{-1} \newline \text{Show the base case P(1) : } bab^{-1} &= bab^{-1} \newline \text{Assume P(k) is true : } (bab^{-1})^{k} &= ba^{k}b^{-1} \newline \text{Now I need to show P(k+1) is true by multiplying P(k) by } bab^{-1} \newline (bab^{-1})^{k}(bab^{-1}) &= ba^{k}b^{-1}(bab^{-1}) \newline &= ba^{k}(b^{-1}b)ab^{-1} \newline &= ba^{k}ab^{-1} \newline &= ba^{k+1}b^{-1} \newline \text{Which is the statement P(k+1)} \end{align*}

2. If $a^{-1}$ has a cube root, so does $a$.

If $a^{-1}$ does have a cube root, then there is an element $x$ in $G$ such that $a^{-1} = x^{3}$. \begin{align*} a^{-1} &= x^{3} \newline a^{-1}a &= x^{3}a \newline e &= x^{3}a \newline (x^{-1})^{3} &= (x^{-1})^{3}x^{3}a \newline (x^{-1})^{3} &= a \end{align*} Comments on the correctness or ways to improve either proof would be appreciated :)
HINT $\ \ \ $ Both answers follow from $\rm\ f(x^n) = f(x)^n\ $ for an (anti-) multiplicative map $\rm\:f\:.$ Slightly simpler and more general is to note that the map $\rm\ f(x) = b\:x\:b^{-1}\ $ is multiplicative, i.e. $\rm\:f(xy) = f(x)\:f(y)\:,\:$ so, by induction $\rm\ f(x^n) = f(x)^n\:.$ Similarly for the second problem $\rm\: f(x) = x^{-1}\: $ satisfies $\rm\ f(xy) = f(y)\:f(x)\:,\: $ therefore upon applying $\rm\ f\ \:$ to $\rm\ a^{-1}\! =\: x^n\ $ we infer that $\rm\ a\: =\: f(x^n) = f(x)^n = (x^{-1})^n\ \ $ (for $\rm\:n=3\:$ in your case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/54182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Rules for Factorisation? Basically I am presented with this: simplify \begin{aligned} {\Bigl(\sqrt{x^2 + 2x + 1}\Big) + \Bigl(\sqrt{x^2 - 2x + 1}\Big)} \end{aligned} The two radicands admit the factorisations \begin{aligned} {\Bigl({x + 1}\Big)^2}, {\Bigl({x - 1}\Big)^2} \end{aligned} \begin{aligned} {\Bigl({1 + x}\Big)^2} , {\Bigl({1 - x}\Big)^2} \end{aligned} Hence, when simplified, the answer has two possibilities, one independent of $x$ and the other not (simplified answers: $2x$ and $2$). Why is one independent and the other not? And if the expression is equal to $2$, why does the answer not simplify to $2$ when, say, $x=2$?
$x^2+2x+1$ has two square roots, $x+1$ and $-(x+1)$. Similarly, $x^2-2x+1$ has two square roots, $x-1$ and $-(x-1)=1-x$. If you combine the first choice for each, you get $(x+1)+(x-1)=2x$; if you combine the first choice for the first term with the second choice for the second term, you get $(x+1)+(1-x)=2$. The remaining two combinations yield two more results: $-(x+1)+(x-1)=-2$, and $-(x+1)+(1-x)=-2x$. However, none of this is correct, because by convention $\sqrt{y}$ always denotes the non-negative square root of $y$. Thus, $\sqrt{x^2+2x+1}$ is actually $|x+1|$, and $\sqrt{x^2-2x+1}=|x-1|$, so that the correct simplification is $$|x+1|+|x-1|.$$ If you want to get rid of the absolute values, you’ll have to break the real line into pieces and use a multi-part definition of the function. And if you do this, you’ll see how $2$, $2x$, etc. actually come into the picture. (Literally: a graph should prove quite informative.)
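The convention that $\sqrt{\cdot}$ denotes the non-negative root is exactly what numerical square-root functions implement, so the multi-part behaviour is easy to see numerically (a small illustration of my own):

```python
import math

def s(x):
    # math.sqrt always returns the non-negative root,
    # so this is |x+1| + |x-1|: equal to 2 on [-1, 1] and 2|x| outside.
    return math.sqrt(x**2 + 2*x + 1) + math.sqrt(x**2 - 2*x + 1)

for x in (-3.0, -0.5, 0.0, 0.7, 2.0):
    print(x, s(x), abs(x + 1) + abs(x - 1))
```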
{ "language": "en", "url": "https://math.stackexchange.com/questions/54235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Order of solving definite integrals I've been coming across several definite integrals in my homework where the solving order is flipped, and am unsure why. Currently, I'm working on calculating the area between both intersecting and non-intersecting graphs. According to the book, the formula for finding the area bounded by two graphs is $$A=\int_{a}^{b}f(x)-g(x) \mathrm dx$$ For example, given $f(x)=x^3-3x^2+3x$ and $g(x)=x^2$, you can see that the intersections are $x={0, 1, 3}$ by factoring. So, at first glance, it looks as if the problem is solved via $$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3f(x)-g(x)\mathrm dx$$ However, when I solved using those integrals, the answer didn't match the book answer, so I took another look at the work. According to the book, the actual integral formulas are $$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3g(x)-f(x)\mathrm dx$$ I was a little curious about that, so I put the formulas in a grapher and it turns out that $f(x)$ and $g(x)$ flip values at the intersection $x=1.$ So how can I determine which order to place the $f(x)$ and $g(x)$ integration order without using a graphing utility? Is it dependent on the intersection values?
One approach is to integrate $\int_a^b |f(x)-g(x)|\;dx$, but in practice this usually means you need to know which of $f$ and $g$ is the larger function, which could change at some points in the interval $[a,b]$. One method of determining this behavior is to graph the functions. But this can also be done without graphing. Suppose $f$ and $g$ are continuous on $[a,b]$ and that $f(a)=g(a)$ and $f(b)=g(b)$ and $f(x) \neq g(x)$ on $(a,b)$. By continuity, the function $f-g$ is either non-negative on $(a,b)$ or non-positive on $(a,b)$. We can determine which by choosing and value $c$ between $a$ and $b$. So, in your example, to determine which function is greater on the interval $[0,1]$ we can choose to evaluate each function at $1/2$: $f(1/2)=7/8$ and $g(1/2)=1/4$. Thus $f \geq g$ on $[0,1]$ and so the area is $\int_0^1 f(x)-g(x)\;dx$ on this interval. You can repeat this process on each interval between points of intersection.
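The midpoint sign test is easy to mechanize. Here is a small sketch of my own for the example in the question, using exact rational arithmetic with Python's `fractions`; the antiderivative of $f-g = x^3-4x^2+3x$ is written in by hand:

```python
from fractions import Fraction as F

def diff(x):                      # f(x) - g(x) = x^3 - 4x^2 + 3x
    return x**3 - 4 * x**2 + 3 * x

def antideriv(x):                 # an antiderivative of f - g
    return x**4 / 4 - F(4, 3) * x**3 + F(3, 2) * x**2

roots = [F(0), F(1), F(3)]        # intersection points of f and g
area = F(0)
for a, b in zip(roots, roots[1:]):
    sign = 1 if diff((a + b) / 2) > 0 else -1   # midpoint sign test
    area += sign * (antideriv(b) - antideriv(a))

print(area)  # 37/12
```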
{ "language": "en", "url": "https://math.stackexchange.com/questions/54309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
If $V=L$, is every $V_\alpha$ an $L_\beta$? I guess that amounts to if there is a continuous $f$ with $\mathbb{P}(\mathbb{L}_{f(\alpha)}) \cap \mathbb{L} = \mathbb{L}_{f(\alpha+1)}$ I seem to remember reading that it is, but I forget where or when I read it, why it's true, what f is, and I can't find it now. Thanks for any info.
The answer is no.

* It is a theorem of $ZFC$ that $V\models\alpha>\omega\rightarrow |L_\alpha|=|\alpha|$; therefore this holds under $V=L$ as well, so $L\models |L_\alpha|=|\alpha|$ for infinite $\alpha$.
* On the other hand, $V\models V_{\alpha+1}=\mathcal P(V_\alpha)$, so for infinite $\alpha$ we have $|V_{\alpha+1}|=2^{|V_\alpha|}$, which is in general much larger than $|\alpha+1|$. (Of course for inaccessible cardinals there is an equality, but this requires a consistency strength greater than $Con(ZFC)$.)
* Now consider the following case: $L\models 2^\omega=\omega_1$. Since $L_{\omega+1}$ is countable, it cannot contain all the subsets of $\omega$. On the other hand, $\mathcal P(\omega)\subseteq V_{\omega+1}$, so clearly $L_{\omega+1}\neq V_{\omega+1}$.
* And finally, $\beta$ is the set of all ordinals in both $L_\beta$ and $V_\beta$. So if for some $\beta$ we had $V_{\omega+1}=L_\beta$, then $\beta=\omega+1$; but we have seen that this is not the case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/54356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why does this expression equal $\pi$? I noticed that the following expression equals $\pi$ and I was curious as to why. Is it just a coincidence, or is there a meaningful explanation? $$\int_{-\infty}^\infty\frac{1}{x^2+1}~dx=\pi$$
This is an elucidation of the "perverse" suggestion of GEdgar in the comments. Note that the fraction decomposes as $$\frac{1}{2i}\left(\frac{1}{x-i}-\frac{1}{x+i}\right).$$ An antiderivative of this is $$\frac{1}{2i}(\log(x-i)-\log(x+i))=\frac{1}{2i}\log\left(\frac{x-i}{x+i} \right)=\frac{1}{2i}\log\left(\frac{x^2-1-2ix}{x^2+1} \right),$$ where we consider $x$ to be a real number and choose a branch of the logarithm that varies continuously as $x$ runs along the real line. The quantity $\frac{x-i}{x+i}$ has modulus $1$, so the value of the logarithm is $i$ times its argument. Track that argument continuously: as $x\to-\infty$ it tends to $0$; at $x=-1$ the quantity is $i$, with argument $\pi/2$; at $x=0$ it is $-1$, with argument $\pi$; at $x=1$ it is $-i$, with argument $3\pi/2$; and as $x\to+\infty$ the argument tends to $2\pi$. Subtracting the endpoint values of the antiderivative, the integral is $$\frac{1}{2i}\left(2\pi i-0\right)=\pi.$$ (Of course, on the real line this antiderivative is just $\arctan x$ up to a constant, which gives the same value $\frac\pi2-\left(-\frac\pi2\right)=\pi$.)
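One can also confirm the value numerically. A crude pure-Python check of my own (midpoint rule on a large symmetric interval, plus the tail estimate $\int_L^\infty \frac{dx}{x^2+1} = \arctan\frac1L \approx \frac1L$ on each side):

```python
import math

def f(x):
    return 1.0 / (x * x + 1.0)

def midpoint_rule(g, a, b, n=100000):
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

L = 1000.0
approx = midpoint_rule(f, -L, L) + 2.0 / L   # add the estimated tail mass
print(approx, math.pi)
```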
{ "language": "en", "url": "https://math.stackexchange.com/questions/54414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 4 }
Comparing infinite numbers Suppose you have 2 infinite numbers, say $A$ and $B$. $A$ is an element of the hyperreals, so that $A$ is greater than every real number. $B$ is the size of the set of natural numbers, $\aleph_0$ Does it make sense to compare $A$ and $B$? And if so, how can you compare these kind of numbers?
In a nutshell: No. Hyperreal numbers are "non-set objects" while $\aleph_0$ is essentially a notion of size for a set. While hyperreal numbers can be represented as sets (but not only, e.g. in ZF+Atoms the atoms may be given the structure of the hyperreal numbers), they are interpreted as something else, while $\aleph_0$ is a lot more "concrete" as it will always be interpreted as a set of some sort. There are different notions of infinite numbers. There are hyperreal numbers, ordinal numbers, and cardinal numbers; one can even view real numbers as infinite sequences of rationals, and so as infinitely more accurate than rational numbers (just as infinitesimals give us the ability to be more accurate than real numbers). These notions grew out of some place where they were needed, and sometimes these places are somewhat orthogonal or unrelated (at least not directly). In this case, we consider cardinals vs. hyperreal numbers. The cardinalities (under the axiom of choice; without the axiom of choice this is an even bigger mess, however with somewhat surprising results - more on that later) are well ordered. This means that between $\aleph_0$ and $\aleph_1$ there are no other cardinals. There are ordinals between them, but they are all countable as sets. Suppose you could somehow identify an element, call it $B$, in the hyperreals, $^*\mathbb R$, to be $\aleph_0$. What is $B+1$? In cardinalities $\aleph_0+1=\aleph_0$. In the hyperreals this is impossible. What you could say is that all the elements of the form $B\pm n$ are still "infinite numbers". The same problem would be met if we chose instead to identify $B$ with an ordinal number, since $1+\omega=\omega$ (where $\omega$ is the ordinal representing the natural numbers); ordinal addition is non-commutative, and it is not invertible either, since subtraction is even less nice than addition when applied to ordinal numbers.
A nice thing to consider: without the axiom of choice there can be cardinal numbers which are not $\aleph$-numbers (that is cannot be well ordered). It is consistent to have $2^{\aleph_0}$ cardinals so that they are ordered (by the natural ordering of cardinalities) like the real numbers. I would expect that it is possible to have that result extended to the hyperreal numbers. (Obviously we cannot expect cardinal addition and multiplication to be anything reversible) This means that you can interpret the real numbers as cardinals. However none of them is even comparable with $\aleph_0$ (that is, none contain a countable subset). (You might be interested in my answer here: "Homomorphism" from set of sequences to cardinals? which seems as though it may be somewhat relevant to this discussion.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/54449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is this Batman equation for real? HardOCP has an image with an equation which apparently draws the Batman logo. Is this for real? Batman Equation in text form: \begin{align} &\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\ &\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}
You may be able to see more easily the correspondences between the equations and the graph through a picture which was at a link I found after a curious search on Google (the link is broken now).
{ "language": "en", "url": "https://math.stackexchange.com/questions/54506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "466", "answer_count": 10, "answer_id": 7 }
Integer partition with fixed number of summands but without order For a fixed $n$ and $M$, I am interested in the number of unordered non-negative integer solutions to $$\sum_{i = 1}^n a_i = M$$ Or, in other words, I am interested in the number of solutions with distinct numbers. For $n = 2$ and $M = 5$, I would consider solutions $(1,4)$ and $(4,1)$ equivalent, and choose the solution with $a_1 \ge a_2 \ge ... \ge a_n \ge 0$ as the representative of the class of equivalent solutions. I know how to obtain the number of total, ordered, solutions with the "stars and bars" method. But unfortunately, I cannot just divide the result by $n!$ since that would only work if all the $a_i$ are distinct.
You want to know the number of partitions of $M$ into at most $n$ parts. A standard bijection (transposing the Young diagram) shows that this is equal to the number of partitions of $M$ into parts of size at most $n$. This number $p_n(M)$ has, for fixed $n$, generating function $$\sum_{M \ge 0} p_n(M) t^M = \frac{1}{(1 - t)(1 - t^2)...(1 - t^n)}.$$ By computing the partial fraction decomposition of this rational function, you can write down a closed form for $p_n(M)$ (again, for fixed $n$). This is efficient in the regime where $M$ is large compared to $n$. I don't know what regime you care about. For what it's worth, the dominant term (for fixed $n$ as $M \to \infty$) is easy to extract: it's given by $$p_n(M) \approx \frac{1}{n!} {M+n-1 \choose n-1}$$ which follows from the fact that the dominant pole at $t = 1$ has multiplicity $n$ and from a computation of the coefficient of the corresponding term in the partial fraction decomposition. In other words, dividing the number you get from stars-and-bars by $n!$ is approximately correct (for fixed $n$ as $M \to \infty$) because in this regime the probability of any two of the numbers being the same becomes negligible. There is also a nice geometric way to see this, as $p_n(M)$ is just the number of non-negative integer solutions $x_2, ... x_n$ to $$2x_2 + 3x_3 + ... + nx_n \le M$$ and this approximates the volume of the corresponding simplex in $\mathbb{R}^{n-1}$. See Wilf's generatingfunctionology for general background about generating functions and, for very powerful methods for extracting asymptotics, see Flajolet and Sedgewick's Analytic Combinatorics.
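The generating function $\prod_{k=1}^n (1-t^k)^{-1}$ translates directly into a simple dynamic program (multiply in one factor $1/(1-t^k)$ at a time), which is a practical way to compute $p_n(M)$ exactly. A sketch of my own (Python 3.8+ for `math.comb`), together with a comparison against the stars-and-bars count divided by $n!$:

```python
from math import comb, factorial

def partitions_at_most_n_parts(M, n):
    """Number of partitions of M into at most n parts
    (equivalently, into parts of size at most n)."""
    p = [1] + [0] * M          # coefficients of the constant series 1
    for k in range(1, n + 1):  # multiply by 1/(1 - t^k)
        for m in range(k, M + 1):
            p[m] += p[m - k]
    return p[M]

print(partitions_at_most_n_parts(5, 2))  # 3: (5,0), (4,1), (3,2)

# The stars-and-bars count over n! approximates p_n(M) for M >> n:
n, M = 4, 2000
exact = partitions_at_most_n_parts(M, n)
approx = comb(M + n - 1, n - 1) / factorial(n)
print(exact, approx, approx / exact)
```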
{ "language": "en", "url": "https://math.stackexchange.com/questions/54635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
Gradient descent with constraints In order to find the local minima of a scalar function $p(x), x\in \mathbb{R}^3$, I know we can use the gradient descent method: $$x_{k+1}=x_k-\alpha_k \nabla_xp(x)$$ where $\alpha_k$ is the step size and $\nabla_xp(x)$ is the gradient of $p(x)$. My question is: what if $x$ must be constrained on a sphere, i.e., $\|x_k\|=1$? Then we are actually to find the local minima of $p(x)$ on a sphere. Can gradient descent be applied to constrained optimizations? Can anyone give any suggestions? Thanks.
The sphere is a particular example of a (very nice) Riemannian manifold. Most classical nonlinear optimization methods designed for unconstrained optimization of smooth functions (such as gradient descent which you mentioned, nonlinear conjugate gradients, BFGS, Newton, trust-regions, etc.) work just as well when the search space is a Riemannian manifold (a smooth manifold with a metric) rather than (classically) a Euclidean space. The branch of optimization concerned with that topic is called Riemannian optimization or optimization on manifolds. There is a great reference book on the topic that you can access online for free: Optimization algorithms on matrix manifolds, P.-A. Absil, R. Mahony, R. Sepulchre, Princeton university press, 2008. https://press.princeton.edu/absil Some of the theory presented in that book (and some more) is implemented in a Matlab toolbox called Manopt (which I develop with a number of collaborators), also available freely online: http://www.manopt.org The tutorial happens to start with an example on the sphere, which could be a handy starting point for the question raised by the OP (assuming Matlab is an option). EDIT: I wrote an introduction to optimization on manifolds, available here: http://www.nicolasboumal.net/book
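To make the idea concrete, here is a bare-bones sketch of my own (not code from the book or from Manopt) of Riemannian gradient descent on the unit sphere for the model problem $p(x)=x^\top A x$: project the Euclidean gradient onto the tangent space at $x$, take a step, and retract back to the sphere by normalizing. The minimum of $p$ over the sphere is the smallest eigenvalue of $A$.

```python
import math

A = [[3.0, 1.0, 0.0],      # symmetric test matrix
     [1.0, 2.0, 0.0],
     [0.0, 0.0, 5.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(x):
    nrm = math.sqrt(dot(x, x))
    return [xi / nrm for xi in x]

def p(x):
    return dot(x, matvec(A, x))

x = normalize([1.0, 1.0, 1.0])
alpha = 0.1
for _ in range(1000):
    g = [2.0 * gi for gi in matvec(A, x)]           # Euclidean gradient
    proj = dot(g, x)
    rg = [gi - proj * xi for gi, xi in zip(g, x)]   # tangent-space projection
    x = normalize([xi - alpha * ri for xi, ri in zip(x, rg)])  # step + retract

print(p(x))  # ≈ smallest eigenvalue of A
```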
{ "language": "en", "url": "https://math.stackexchange.com/questions/54855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 4, "answer_id": 3 }
Two easy questions about algebraic varieties I'm studying Fraleigh's abstract algebra(7ed), and there are little contents about algebraic geometry, just the definitions of varieties and ideals. Since I have few backgrounds about algebraic geometry, I don't know how to solve the following two exercises. Sec28, Ex27, b. Give an example of a subset of $\mathbb{R}^2$ which is not an algebraic variety. Ex34. Give an example of a subset $S$ of $\mathbb{R}^2$ such that $V(I(S))\neq S$. (Here, the algebraic variety $V(S)$ in $F^n$ is the set of all common zeros in $F^n$ of the polynomial in $S$, where $S$ is a finite subset of $F[\mathbf{x}]$.) I think that the answer of the two exercises can be same. But I don't know how to show some subset is not an algebraic variety. How can I solve it?
Hint1: A univariate polynomial can have infinitely many zeros only if it is the zero polynomial. Hint2: Show that if the polynomial $p(x,y)$ has infinitely many zeros on the line $y=y_0$, it must be divisible by $y-y_0$. Hint3: Show that if the conclusion of the previous hint holds for infinitely many choices of $y_0$, then $p(x,y)$ must be the zero polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/54893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notation/name for the number of times a number can be exactly divided by $2$ (or a prime $p$) I am using this simple snippet of code, variants of which I have seen in many places: for(int k = 0 ; n % 2 == 0 ; k++) n = n / 2; This code repeatedly divides num by 2 until it is odd and on completion k contains the number of divisions performed. I am wondering what the appropriate way to write this using mathematical notation is? Does this correspond to some named concept? Of course, $lg\ n$ gives the appropriate $k$ when $n$ is a power of 2, but not for anything else. For example, $k = 1$ when $n = 6$ and $k = 0$ when $n$ is odd. So it looks it should be specified using a piece-wise function but there may be some mathematical concept or nomenclature here that I am not aware of...
For prime numbers in general it is sometimes called the multiplicity of the prime (implicitly meaning the multiplicity of the prime in the prime factorisation). It is also known as the $p$-adic valuation of $n$, commonly written $v_p(n)$ or $\nu_p(n)$; in your snippet, $k = v_2(n)$.
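The loop in the question computes the 2-adic case, and the same idea works for any prime $p$. A direct translation (plus a bit-trick variant for $p=2$, counting trailing zero bits, that is common in practice):

```python
def valuation(n, p):
    """Multiplicity of the prime p in n, i.e. v_p(n), for n != 0."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def v2(n):
    """v_2(n) via the binary representation: isolate the lowest set bit."""
    return (n & -n).bit_length() - 1

print(valuation(48, 2), v2(48))  # 4 4
```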
{ "language": "en", "url": "https://math.stackexchange.com/questions/54965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Fast PCA: how to compute and use the covariance of x I'm trying to understand the paper Fast principal component analysis using fixed-point algorithm by Alok Sharma and Kuldip K. Paliwal (1151–1155), and especially what is said about $\Sigma_x$, the covariance of x. But before being specific, let me summarize how I understand the algorithm. The PCA finds a linear transformation matrix $\varphi$ of size $d \times h$ which is meant to reduce a set of $n$ d-dimensional feature vectors $x$ ($x \in \mathbb{R}^d$) to a set of $n$ h-dimensional feature vectors $y$ ($y \in \mathbb{R}^h$), with $h < d$. So given one feature vector $x$, we have $\varphi x \rightarrow y$, or: $\pmatrix{ a_{1,1} & a_{1,2} & \cdots & a_{1,h} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,h} \\ \vdots & \vdots & \ddots & \vdots \\ a_{d,1} & a_{d,2} & \cdots & a_{d,h} }\pmatrix{ x_{1} \\ x_{2} \\ \vdots \\ x_{d} } \rightarrow \pmatrix{ y_{1} \\ y_{2} \\ \vdots \\ y_{h} }$ And we want to define the matrix $\varphi$. Let me quote the algorithm (Table 1):

1. Choose $h$, the number of principal axes or eigenvectors required to estimate. Compute covariance $\Sigma_x$ and set $p \leftarrow 1$
2. Initialize eigenvector $\varphi_p$ of size $d \times 1$ e.g. randomly
3. Update $\varphi_p$ as $\varphi_p \leftarrow \Sigma_x \varphi_p$
4. Do the Gram-Schmidt orthogonalization process [...]
5. Normalize $\varphi_p$ by dividing it by its norm: $\varphi_p \leftarrow \varphi_p/||\varphi_p||$
6. If $\varphi_p$ has not converged, go back to step 3
7. Increment counter $p \leftarrow p + 1$ and go to step 2 until $p$ equals $h$

So basically, the orthogonalization process and the normalization are pretty straightforward and simple to implement, same for the convergence. Unfortunately, I'm having a hard time trying to figure out the first steps: How am I supposed to compute the covariance $\Sigma_x$ given that $x$ is one of the feature vectors of the input?
(BTW, is the described algorithm actually only explaining the definition of $\Sigma$ for one single features vector of the $n$ input vectors?) I was unable to understand how to apply the definition of the covariance matrix to that case; is it possible to have a simple example of what it takes in input, and what gets out? Now suppose I am able to compute the covariance $\Sigma_x$, what does step 2 and 3 means? $\varphi_p$ is supposed to be one column of the $d \times h$ matrix $\varphi$, so what does the random initialization of $\varphi_p$ means and how to apply the transformation $\varphi_p \leftarrow \Sigma_x\varphi_p$ using the previously computed covariance?
Actually, it is not necessary to compute the covariance matrix $\Sigma_x$ explicitly. Since $\Sigma_x = HH'$ and $H=(1/\sqrt{n})[(x_1-u),(x_2-u),\ldots,(x_n-u)]$ (where $n$ is the number of samples and $u$ is the centroid of all the $x_i$), the product $\Sigma_x \varphi_p$ can be written as $HH'\varphi_p$. Now $v=H'\varphi_p$ can be computed first (a matrix-vector product), which gives a vector $v$ of length $n$. Thereafter, $Hv$ can be computed (again a matrix-vector product), giving a vector of length $d$. Thus, the explicit computation of the $d \times d$ matrix $\Sigma_x$ is avoided altogether.
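As a sanity check, the two-products trick can be sketched in plain Python (a toy example with made-up random data, not code from the paper): computing $\Sigma_x\varphi_p$ as $H(H'\varphi_p)$ gives the same vector as forming the full covariance first.

```python
import math
import random

random.seed(1)
n, d = 6, 4  # n sample vectors of dimension d (toy sizes)
X = [[random.random() for _ in range(d)] for _ in range(n)]
u = [sum(x[j] for x in X) / n for j in range(d)]

# R[i][j] = (x_i - u)_j / sqrt(n); R stores H' row by row
# (H has the centered, scaled samples as its columns)
R = [[(X[i][j] - u[j]) / math.sqrt(n) for j in range(d)] for i in range(n)]

phi = [random.random() for _ in range(d)]  # current eigenvector estimate

# fast update: Sigma_x phi = H (H' phi), two matrix-vector products
v = [sum(R[i][j] * phi[j] for j in range(d)) for i in range(n)]     # H' phi, length n
y_fast = [sum(R[i][j] * v[i] for i in range(n)) for j in range(d)]  # H v, length d

# explicit covariance Sigma_x = H H' for comparison (d x d)
Sigma = [[sum(R[i][a] * R[i][b] for i in range(n)) for b in range(d)]
         for a in range(d)]
y_full = [sum(Sigma[a][b] * phi[b] for b in range(d)) for a in range(d)]
```

For real data one would use a linear-algebra library, but the point stands: the fast update costs $O(nd)$ per iteration instead of the $O(nd^2)$ needed to build $\Sigma_x$.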
{ "language": "en", "url": "https://math.stackexchange.com/questions/55018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
non time constructible functions A function $T: \mathbb N \rightarrow \mathbb N$ is time constructible if $T(n) \geq n$ and there is a $TM$ $M$ that computes the function $x \mapsto \llcorner T(\vert x\vert) \lrcorner$ in time $T(n)$. ($\llcorner T(\vert x\vert) \lrcorner$ denotes the binary representation of the number $T(\vert x\vert)$.) Examples for time-constructible functions are $n$, $n\log n$, $n^2$, $2^n$. Time bounds that are not time constructible can lead to anomalous results. --Arora, Barak. This is the definition of time-constructible functions in Computational Complexity - A Modern Approach by Sanjeev Arora and Boaz Barak. It is hard to find valid examples of non-time-constructible functions. $f(n)=c$ is an example of a non-time-constructible function. What more (sophisticated) examples are out there?
According to the definition of Arora, trigonometric functions are not good counter-examples since they are not functions from $\mathbb{N}$ to $\mathbb{N}$. However, the function $TUC : \mathbb{N} \rightarrow \mathbb{N}$ defined by $TUC(n) = n$ if $M_n(n) \neq n$ and $TUC(n) = 2n + 1$ otherwise, is not time-constructible: Suppose there exists $M$ that computes $TUC$ in $TUC(n)$ time; then there exists an $n$ such that $M = M_n$ (cf. the encoding of a Turing machine in Arora's book), but -if $TUC(n) = n$, then $M(n) \neq n$ and so $TUC(n) \neq n$ $\rightarrow$ contradiction. -if $TUC(n) = 2n+1$, then $M(n) = n$ and so $TUC(n) = n$ $\rightarrow$ contradiction. Both cases end in a contradiction, so $TUC$ is not time-constructible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Chord dividing circle, function Two chords $PA$ and $PB$ divide a circle into three parts. The angle $PAB$ is a root of $f(x)=0$. Find $f(x)$. Clearly, saying that $PA$ and $PB$ divide the circle into three parts means they divide it into $3$ parts of equal area. How can I find $f(x)$ then? Thanks
Hint: If you look up circular segment in Wikipedia, you should be able to write the area of the two segments cut off as a function of the angle between the chord and the tangent. That angle is $\theta/2$ using the figure in the article. Then the area of the circle that is left is what you want. The question is whether you can write this area as a function of the angle between the chords (one variable) instead of the angles between the chords and their respective tangents (two variables).
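To make the hint concrete, here is a hedged sketch (assuming, by the symmetry of the equal-area condition, that $\angle PAB = \angle PBA = x$, and writing $R$ for the radius): the inscribed angle $x$ at $A$ subtends the chord $PB$, so the central angle over $PB$ is $2x$, and the segment cut off by $PB$ has area $\tfrac{R^2}{2}(2x-\sin 2x)$. Requiring that segment to be one third of the disc gives one candidate for $f$:

```latex
\frac{R^2}{2}\,(2x - \sin 2x) = \frac{\pi R^2}{3}
\qquad\Longrightarrow\qquad
f(x) = 2x - \sin 2x - \frac{2\pi}{3}.
```

Since $2x - \sin 2x$ is strictly increasing, this equation has a unique root in $(0, \pi/2)$, roughly $x \approx 1.3$ radians.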
{ "language": "en", "url": "https://math.stackexchange.com/questions/55147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
(k+1)th, (k+1)st, k-th+1, or k+1? (Inspired by a question already at english.SE) This is more of a terminological question than a purely mathematical one, but can possibly be justified mathematically or simply by what is common practice. The question is: When pronouncing ordinals that involve variables, how does one deal with 'one': is it pronounced 'one-th' or 'first'? For example, how do you pronounce the ordinal corresponding to $k+1$? There is no such term in mathematics as 'infinityeth' (one uses $\omega$, with no affix), but if there were, the successor would be pronounced 'infinity plus oneth'. Which is also 'not a word'. So then how does one pronounce '$\omega + 1$', which is an ordinal? I think it is simply 'omega plus one' (no suffix, and not 'omega plus oneth' nor 'omega plus first'). So how is the ordinal corresponding to $k+1$ pronounced? * *'kay plus oneth' *'kay plus first' *'kay-th plus one' *'kay plus one' or something else?
From the Handbook of Writing for the Mathematical Sciences section 5.5 p. 63: Here are examples of how to describe the position of a term in a sequence relative to a variable k: kth, (k+1)st, (k+2)nd, (k+3)rd, (k+4)th, … (zeroth, first, second, third, fourth, …) Generally, to describe the term in position k±i for a constant i, you append to (k±i) the ending of the ordinal number for position i (th, st, or nd), which can be found in a dictionary or book of grammar. So the formal answer is that it should be: (k+1)st
{ "language": "en", "url": "https://math.stackexchange.com/questions/55200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
$u$-substitution into integral For example, we want to evaluate the integral $$\int_0^4 2x\cdot(9+x^2)^{1/2}\,dx.$$ I have read that if we set $u=9+x^2$, then $du=2x\,dx$; everything is clear here, but then the following method was used: the author simply put $x=0$ and $x=4$ into the definition of $u$, got $u=9$ and $u=25$, and finally wrote the integral as $$\int_{u=9}^{u=25} u^{1/2}\,du.$$ My question is: is this correct? If I use the same method to solve similar integrals next time, will I be wrong or not? Thanks
It is correct, because you have the following: $$\int_{a}^{b} f(\varphi(t)) \cdot \varphi'(t)\,\mathrm{d}t = \int_{\varphi(a)}^{\varphi(b)} f(x)\,\mathrm{d}x$$ Hope this helps.
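A quick numerical check of this particular example (a sketch using a simple midpoint rule, not part of the original argument): both forms should equal $\frac{2}{3}\left(25^{3/2}-9^{3/2}\right)=\frac{196}{3}\approx 65.33$.

```python
import math

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule on [a, b] with n subintervals
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lhs = midpoint(lambda x: 2 * x * math.sqrt(9 + x * x), 0, 4)  # original integral
rhs = midpoint(lambda u: math.sqrt(u), 9, 25)                 # after u = 9 + x^2
exact = 2 / 3 * (25 ** 1.5 - 9 ** 1.5)                        # = 196/3
```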
{ "language": "en", "url": "https://math.stackexchange.com/questions/55296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Rank of second ace after first ace is drawn I have a probability problem with cards and the expected value of a card rank. I have a deck of 52 cards. I draw cards without replacement. While drawing cards from the deck, a first ace is drawn at rank $k$ (that is, the $k^{th}$ card drawn is an ace, and all previous cards were not). We want to find the expected number of additional draws until we get the second ace. My idea is to follow this route: If I call $X$ the random variable of the rank of the second ace, $N=52$ the total number of cards, and $p = N - k$ the number of cards remaining after the first ace is drawn, the idea is to write down the expected value of $X$: $$E[X] = \sum_{i=1}^{p-3}i P(X=i)$$ But the formula doesn't seem to simplify. What would be your take on this?
The five strings of non-aces can be permuted amongst each other, conserving probability. Note that some of the strings may be empty, and where there are multiple empty strings, the 120 possible "abstract" permutations will produce fewer than 120 physical permutations of the cards. The symmetry implies that the expected length of each string of non-aces is the same, or 48/5 for a standard deck of cards. The expected waiting time to the first ace, or from the first to the second ace, or from ace $k$ to ace $k+1$, is the preceding number $+1$. That is the expected length of a full interval of non-aces followed by an ace. So 53/5 is the answer for a standard deck. The general answer is (cards+1)/(aces+1), by the same argument.
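Both claims can be checked computationally (a sketch; the exact sum uses the distribution of the first ace's position, and the gap to the second ace is estimated by simulation):

```python
import random
from fractions import Fraction

# exact expected position of the first ace in a 52-card deck with 4 aces:
# P(first ace at k) = [prod_{i=0}^{k-2} (48-i)/(52-i)] * 4 / (53-k)
E = Fraction(0)
for k in range(1, 50):
    p = Fraction(4, 53 - k)
    for i in range(k - 1):
        p *= Fraction(48 - i, 52 - i)
    E += k * p
# E comes out to 53/5, matching (cards+1)/(aces+1)

# Monte Carlo estimate of the gap between the first and second ace
random.seed(0)
deck = [1] * 4 + [0] * 48
total, trials = 0, 100_000
for _ in range(trials):
    random.shuffle(deck)
    pos = [i for i, c in enumerate(deck) if c == 1]
    total += pos[1] - pos[0]
mean_gap = total / trials  # should also be close to 53/5 = 10.6
```

The same $(\text{cards}+1)/(\text{aces}+1)$ value shows up for the gap between consecutive aces, which the simulated mean confirms.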
{ "language": "en", "url": "https://math.stackexchange.com/questions/55350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to find a smooth parametrization of a Curve In order to solve a line integral, I need to establish a smooth parametrization of the curve over which it is supposed to be integrated. The curve, $D$, is the intersection of the surfaces $x^2 + y^2 = 1$ and $z=x^2$. To me, a logical parametrization is: $$r(t)=(t, (1-t^2)^{1/2}, t^2)$$ because if $x = t$, then $$y = (1-t^2)^{1/2},$$ and $$z = t^2$$ Is this right? I appreciate any help.
No, your parametrization only gives you positive values of $y$. Since $x^2 + y^2 = 1$ is a circle in the $x-y$ plane, you might try $x = \cos(t)$, $y = \sin(t)$.
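A hedged sketch of the resulting parametrization $r(t)=(\cos t,\sin t,\cos^2 t)$, checked numerically at a few sample values of $t$ (unlike $y=\sqrt{1-t^2}$, it covers the whole curve as $t$ runs over $[0,2\pi)$):

```python
import math

def r(t):
    """Parametrization of the intersection of x^2 + y^2 = 1 with z = x^2."""
    x, y = math.cos(t), math.sin(t)
    return x, y, x * x

# sample the curve at a few parameter values, including ones with y < 0
samples = [r(t) for t in (0.0, 1.3, math.pi, 4.0, 5.9)]
```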
{ "language": "en", "url": "https://math.stackexchange.com/questions/55396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
When do we get extraneous roots? There are only two situations that I am aware of that give rise to extraneous roots, namely, the “square both sides” situation (in order to eliminate a square root symbol), and the “half absolute value expansion” situation (in order to eliminate taking absolute value). An example of the former is $\sqrt{x} = x - 2$, and an example of the latter is $|2x - 1| = 3x + 6$. In the former case, by squaring both sides we get roots of $1$ and $4$, and inspection reveals that $1$ is extraneous. (Of course, squaring both sides is a special case of raising both sides to a positive even power.) In the latter case we expand the equation into the two equations $2x - 1 = 3x + 6$ and $2x - 1 = -(3x + 6)$, getting roots of $-1$ and $-7$, and inspection reveals that $-7$ is extraneous. Now, my question is: Is there any other situation besides these two that gives rise to extraneous roots? -Perhaps something involving trigonometry? I asked this question some time ago in MO, where I got ground in the dirt like a wet french fry (as Joe Bob would say). So, I’m transferring the question here to MSE. :) edit (1.Jan.2017): In general, in mathematics and the real world, you get extraneous roots any time you are initially presented with (via some mechanical / automated / canonical process) a superset of the set that you want, and the sifting out of that set from the superset is left to you, for example, when panning for gold, or when reading the owner’s manual for your vehicle: “This owner’s manual covers all models of your vehicle. You may find descriptions of equipment and features that are not on your particular model.”
Suppose you have two expressions $e_1$ and $e_2$ and you know $$e_1 = e_2.$$ Then, if you apply a function to both sides, you have $$f(e_1) = f(e_2).$$ However, this logic in general does not reverse, unless the function $f$ is 1-1. This is the mechanism by which extraneous roots get introduced. When you square both sides of an equation, you are destroying information about the signs of the two sides. Now, the equality will match if the two sides have the same absolute value. This process can, and often does, introduce spurious roots.
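The question's own first example illustrates this mechanically (a minimal sketch): squaring $\sqrt{x}=x-2$ yields $x^2-5x+4=0$ with candidate roots $1$ and $4$, and substituting back into the original equation filters out the extraneous one.

```python
import math

candidates = [1.0, 4.0]  # roots of x^2 - 5x + 4 = 0, obtained by squaring
valid = [x for x in candidates if math.isclose(math.sqrt(x), x - 2)]
# x = 1 fails: sqrt(1) = 1 but 1 - 2 = -1; squaring destroyed the sign information
```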
{ "language": "en", "url": "https://math.stackexchange.com/questions/55445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 5, "answer_id": 0 }
A criterion for series convergence? I have a conjecture regarding series convergence that feels like it would be a useful tool to me if I could prove it, but I have been unable to prove or disprove it. Let $\sum a_n$ be a series of nonnegative terms. Define $\lambda(N)$ to be the number of terms of the series that are greater than $1/N$. Conjecture: If $\lambda(N)=O(N^\alpha)$, with $0<\alpha<1$, then $\sum a_n$ converges. The conjecture is based on the fact that this is true for series of the form $\sum 1/n^k$, $k$ constant (let $\alpha = 1/k$), and my intuition that the hypothesis of the conjecture is enough to make $\sum a_n$ "sufficiently similar to or bounded by" such a series. Can you offer a counterexample or point me toward an idea for a proof? (I tried to bound $\Delta\lambda(N) = \lambda(N+1)-\lambda(N)$ from the assumption $\lambda(N)=O(N^\alpha)$, since possibly excluding a finite number of terms, the series sum is bounded above by $\sum \Delta\lambda(N)/N$; but I couldn't see how to do this.)
This is summation by blocks in disguise. For every $k\ge0$, the sum of the terms $a_n$ such that $2^{-(k+1)}\le a_n\le 2^{-k}$ is at most the number of terms times a common upper bound of these, hence at most $\lambda(2^{k+1})$ times $2^{-k}$. It happens that the number $\lambda(1)$ of terms $a_n$ such that $a_n\ge1$ is finite (1), hence it suffices to consider the sum of the terms $a_n$ such that $a_n\le1$. This sum is at most $$ \sum_{k=0}^{+\infty}\sum_na_n\cdot[2^{-(k+1)}\le a_n\le 2^{-k}]\le\sum_{k=0}^{+\infty}\lambda(2^{k+1})2^{-k}. $$ Now, if $\lambda(N)\le c\cdot N^a$ for every $N$ large enough, then $\lambda(2^{k+1})\le c\cdot2^{a(k+1)}$ for every $k$ large enough and the last series above is controlled by the series $$ \displaystyle2^ac\cdot\sum_k2^{-(1-a)k}, $$ which converges since the hypothesis that $a<1$ implies that $2^{-(1-a)}<1$. (1) The proof is simple: $\lambda(1)\le\lambda(N)$ for every $N\ge1$ and the hypothesis that $\lambda(N)=O(N^a)$ implies that $\lambda(N)$ is finite for $N$ large enough.
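The block bound can be tried out numerically on a concrete series (a sketch for $a_n = 1/n^2$, where $\lambda(N)=\#\{n : n^2 < N\}=O(N^{1/2})$, so $a=1/2$): the truncated dyadic sum dominates the partial sums, as the argument predicts.

```python
import math

def lam(N):
    # lambda(N) for a_n = 1/n^2: number of n with a_n > 1/N, i.e. n^2 < N
    return math.isqrt(N - 1) if N >= 2 else 0

# truncated dyadic bound sum_k lambda(2^(k+1)) 2^(-k); the tail is geometric
# with ratio about 2^(-1/2), so 60 terms is plenty
bound = sum(lam(2 ** (k + 1)) * 2.0 ** (-k) for k in range(60))

partial = sum(1.0 / (n * n) for n in range(1, 10 ** 5))  # converges to pi^2/6
```

Here the dyadic bound comes out around $3.5$, while the series itself converges to $\pi^2/6 \approx 1.645$; the bound is loose but finite, which is all the proof needs.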
{ "language": "en", "url": "https://math.stackexchange.com/questions/55494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Use of a null option in a combination with repetition problem Ok, so I am working on a combinatorics problem involving combination with repetition. The problem comes off a past test that was put up for study. Here it is: An ice cream parlor sells six flavors of ice cream: vanilla, chocolate, strawberry, cookies and cream, mint chocolate chip, and chocolate chip cookie dough. How many combinations of fewer than 20 scoops are there? (Note: two combinations count as distinct if they differ in the number of scoops of at least one flavor of ice cream.) Now I get the correct answer $\binom{25}{6}$, but the way they arrive at the answer is different and apparently important. I just plug in 20 combinations of 6 flavors into $\binom{n+r-1}{r}=\binom{n+r-1}{n-1}$. The answer given makes use of a "null flavor" to be used in the calculation. I can't figure out for the life of me why, could someone explain this to me? Answer: This is a slight variation on the standard combinations with repetition problem. The difference here is that we are not trying to buy exactly 19 scoops of ice cream, but 19 or fewer scoops. We can solve this problem by introducing a 7 th flavor, called “noflavor” ice cream. Now, imagine trying to buy exactly 19 scoops of ice cream from the 7 possible flavors (the six listed an “no-flavor”). Any combination with only 10 real scoops would be assigned 9 “no-flavor” scoops, for example. There is a one-to-one correspondence between each possible combination with 19 or fewer scoops from 6 flavors as there are to 19 “scoops” from 7 flavors. Thus, using the formula for combination with repetition with 19 items from 7 types, we find the number of ways to buy the scoops is $\binom{19+7-1}{19}=\binom{25}{19}=\binom{25}{6}$. (Grading – 4 pts for mentioning the idea of an extra flavor, 4 pts for attempting to apply the correct formula, 2 pts for getting the correct answer. If a sum is given instead of a closed form, give 6 points out of 10.) 
Any assistance would be greatly appreciated!
Without using a null, the 'standard calculation' is very annoying. The standard idea would be to see how many ways no scoops can be taken, plus how many ways 1 can be taken, plus 2, ... plus 19. Using a null simplifies all that to one step, as having 1 null just means that you are looking at 18 real scoops, 2 nulls means 17 real scoops, etc. It's more subtle than just taking ${ n + r - 1 \choose n - 1}$. With 20 and 6, you are literally saying how many ways exactly 20 scoops can be chosen from 6 flavors, which is not at all what the question asks. But perhaps you already considered combinatorial identities?
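The one-to-one correspondence is easy to confirm by brute-force counting (a quick sketch with `math.comb`): summing over the exact number of real scoops agrees with the null-flavor count.

```python
from math import comb

flavors, max_scoops = 6, 19  # "fewer than 20" means at most 19 scoops

# standard way: for each exact total s, there are C(s + flavors - 1, flavors - 1)
# multisets of s scoops from 6 flavors; sum over s = 0..19
direct = sum(comb(s + flavors - 1, flavors - 1) for s in range(max_scoops + 1))

# null-flavor trick: exactly 19 "scoops" from 7 flavors (6 real + "no-flavor")
trick = comb(max_scoops + (flavors + 1) - 1, max_scoops)
```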
{ "language": "en", "url": "https://math.stackexchange.com/questions/55637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Simple problem with pattern matching in Mathematica I'm having trouble setting a pattern for simplifying a complex expression. I've distilled the question down to the simplest case where Mathematica seems to fail. I set up a simple rule based on a pattern: simpRule = a b v___ - c d v___ -> e v which works on the direct case a b - c d /. simpRule e but fails if I simply add a minus sign. -a b + c d /. simpRule -a b + c d How do I go about writing a more robust rule? Or perhaps there's a better way to go about performing simplifications of this sort? Thanks, Keith
Here's the best solution I've come up with so far. (Yes, I'm answering my own question, but I thought it would be nice to have some resolution to this question.) simp[expr_] := Module[{fab,fcd,fabmcd,f0}, {fab,fcd} = {Coefficient[expr, a b], Coefficient[expr, c d]}; fabmcd = (1/2)(fab - fcd); f0 = Simplify[expr - fabmcd (a b - c d)]; f0 + fabmcd e ] Here's how it works: simp[a b - c d] produces e, simp[-a b + c d] produces -e and simp[a b - c d + c] produces e + c. My original intention was to make a more robust pattern rule, however I haven't figured out how to do that. Instead the solution I'm presenting here is based on using Mathematica's Coefficient[] function. In my experience, Coefficient[] and CoefficientList[] can be used to do quite sophisticated expression rearrangement. None the less, someday I'd like figure how to do sophisticated pattern matching.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
invex functions and their usefulness? An invex function $f$ is a differentiable function from $\Bbb R^n$ to $\Bbb R$ that for some function $\eta : \Bbb R^n \times \Bbb R^n \to \Bbb R^n$ satisfies for all $x, u$, $f(x) - f(u) \geq \eta(x, u) \cdot \nabla f(u)$. The Wikipedia article on invex functions can be found here. Since there are no restrictions on $\eta$, given a pair of vectors $x$ and $u$, I can always find a vector $z$ such that the dot product of this vector with the gradient vector at $u$ takes any real value (assuming the gradient vector is not degenerate or unbounded). Thus, the only bite this definition has is at points where the gradient is $0$. In fact, any stationary point, i.e. a point with gradient $0$, must be a global minimum. So invex functions are just those functions that have all stationary points as global minima. Have I misunderstood something?
There has in fact been some controversy over the usefulness of invex functions at all. This article is interesting A critical view on invexity As far as I can tell, the criticism is justified and the definition is vacuous. Please can someone prove me wrong by providing a concrete example of a novel, interesting theorem which cannot be easily proven without using the concept of invexity!
{ "language": "en", "url": "https://math.stackexchange.com/questions/55822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Fuzzy logic and topos theory Why doesn't one develop fuzzy logic by extending topos theory, by simply extending the subobject classifier $\Omega$ to the unit interval [0,1]? Have people done that?
Fuzzy set theory is a mathematical model of vagueness, which is assumed to be a basic property of our world. In other words, it is assumed that we live in a world with vague objects where nothing is actually crisp. Ordinary mathematics assumes that everything is crisp. Thus I find it at least strange to try to "embed" mathematics of vagueness into universes of crisp mathematics. So I do not really understand what a topos of fuzzy sets would be. Arrows should connect objects up to some degree, but this does not happen when one deals with, say, categories of fuzzy sets. Since we have a new set theory, which was also formalized, we need a new category theory, and why not a new type theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
On the growth order of an entire function $\sum \frac{z^n}{(n!)^a}$ Here $a$ is a real positive number. The result is that $f(z)=\sum_{n=1}^{+\infty} \frac{z^n}{(n!)^a}$ has a growth order $1/a$ (i.e. $\exists A,B\in \mathbb{R}$ such that $|f(z)|\leq A\exp{(B|z|^{1/a})},\forall z\in \mathbb{C}$). It is Problem 3* from Chapter 5 of E.M. Stein's book, Complex Analysis, page 157. Yet I don't know how to get this. Will someone give me some hints on it? Thank you very much.
@Srivatsan's answer only proves the order of $f$ is at most $1 / \alpha$, so here's a way of proving that the order of $f$ is at least $1 / \alpha$. Consider the sequence $z_k := (ek)^\alpha$, and observe that $$ |f(z_k)| = |f((ek)^\alpha)| = \sum_{n = 0}^\infty \frac{((ek)^\alpha)^n}{(n!)^\alpha} \geq \frac{((ek)^\alpha)^k}{(k!)^\alpha} = \left(\frac{(ek)^k}{k!}\right)^\alpha. $$ Using the inequality that $k! \leq k^k$, we get $$ |f(z_k)| \geq \left(\frac{(ek)^k}{k^k}\right)^\alpha = e^{\alpha k} = \exp\left(\frac{\alpha}{e} (z_k)^{1 / \alpha}\right). $$ It then follows that $f$ cannot have order less than $1 / \alpha$.
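The inequality $|f(z_k)| \ge \exp\!\big(\tfrac{\alpha}{e}z_k^{1/\alpha}\big)$ can be probed numerically (a sketch for the sample value $\alpha=2$; the series is summed via the term recurrence $t_n = t_{n-1}\,z/n^{\alpha}$ to avoid overflowing $z^n$):

```python
import math

alpha = 2.0  # sample value of the exponent a

def f(z, terms=400):
    # f(z) = sum_{n >= 0} z^n / (n!)^alpha, built up term by term
    t, s = 1.0, 1.0
    for n in range(1, terms):
        t *= z / n ** alpha
        s += t
    return s

# at z_k = (e k)^alpha the lower bound says f(z_k) >= exp((alpha/e) z_k^(1/alpha))
checks = [f((math.e * k) ** alpha)
          >= math.exp((alpha / math.e) * ((math.e * k) ** alpha) ** (1 / alpha))
          for k in range(1, 6)]
```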
{ "language": "en", "url": "https://math.stackexchange.com/questions/56068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
General term of sequence 2 Another sequence that arises in the context of the formula for the number of partitions of a natural number into parts no greater than 5 is $81, 123, 167, 229, 295, 381, 473, 587, 709, 855, 1011, 1193, 1387, 1609, 1845, 2111, 2393, 2707, 3039, \ldots$ If we try the method of finite differences we get the following difference sequences: $42, 44, 62, 66, 86, 92, 114, 122, 146, 156, 182, 194, 222, \ldots$ $2, 18, 4, 20, 6, 22, 8, 24, 10, 26, 12, 28, \ldots$ $16, -14, 16, -14, 16, -14, 16, -14, 16, -14, 16, \ldots$ $-30, 30, -30, 30, -30, 30, -30, 30, -30, 30, \ldots$ The differences never become constant, so the method seems useless here. How can one find the general term?
Define $\Delta a_n = a_{n+1} - a_n$ (which is the forward difference) and define inductively $\Delta^{(k)}a_n = \Delta^{(k-1)} a_{n+1} - \Delta^{(k-1)} a_n$ (which is the $k^{\text{th}}$ forward difference). It can easily be shown that $$ \Delta^{(k-1)}a_n = \sum_{i=1}^{n-1} \Delta^{(k)}a_i + \Delta^{(k-1)}a_1 $$ because it is just a telescopic sum. Now consider your sequence; beginning from the second forward difference (or the third or fourth, if it is hard to see), can you guess the other forward differences? Hint: One sees that $$ \Delta^{(2)}a_n = n+1 + 15 \left( \frac{1 + (-1)^n}2 \right) $$ This gives you the sequence $2, 18, 4, 20, 6, 22, ...$ Now you have, using the above identity, $$ \Delta^{(1)}a_n = \sum_{i=1}^{n-1} \Delta^{(2)} a_i + 42 = \sum_{i=1}^{n-1} \left( i+1 + 15 \left( \frac{1 + (-1)^i}2 \right) \right) + 42 $$ You can easily compute this sum if you know elementary identities on summations. (If you can't guess the second forward difference, start with the fourth, and keep using this trick until you get to the second.) One computes and finds $$ \Delta a_n = \frac{n^2}2 + 8n + \frac{67}2 - \frac{15}2 \left( \frac{1+(-1)^n}2 \right). $$ I leave the computation for you as an exercise, and using the identities $$ \sum_{i=1}^k i = \frac{k(k+1)}2, \qquad \sum_{i=1}^k i^2 = \frac{k(k+1)(2k+1)}6, \qquad \sum_{i=1}^{n-1} (-1)^i = - \left( \frac{1 + (-1)^n}2 \right) $$ you can easily find $a_n$, but I will not carry the computation out right now. =) Hope that helps,
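The guessed formulas can be checked directly against the numbers in the question (a small verification sketch; exact rational arithmetic is used because the first-difference formula, whose leading term is $n^2/2$, involves halves):

```python
from fractions import Fraction as F

seq = [81, 123, 167, 229, 295, 381, 473, 587, 709, 855, 1011,
       1193, 1387, 1609, 1845, 2111, 2393, 2707, 3039]
d1 = [b - a for a, b in zip(seq, seq[1:])]   # first forward differences
d2 = [b - a for a, b in zip(d1, d1[1:])]     # second forward differences

# Delta^(2) a_n = n + 1 + 15 (1 + (-1)^n) / 2
ok2 = all(v == n + 1 + 15 * (1 + (-1) ** n) // 2
          for n, v in enumerate(d2, start=1))

# Delta a_n = n^2/2 + 8n + 67/2 - (15/2)((1 + (-1)^n)/2)
ok1 = all(F(v) == F(n * n, 2) + 8 * n + F(67, 2)
          - F(15, 2) * F(1 + (-1) ** n, 2)
          for n, v in enumerate(d1, start=1))
```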
{ "language": "en", "url": "https://math.stackexchange.com/questions/56132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Residual finiteness for locally indicable groups Let $G$ be a finitely generated locally indicable group. For every $n \in \mathbb{N} $ there is a normal subgroup with index $n$. What can we say about residual finiteness of $G$?
Thompson's group $F$ has a normal subgroup of each finite index, just because it admits $\mathbf{Z}$ as a quotient, it is locally indicable (= every nontrivial f.g. subgroup admits $\mathbf{Z}$ as a quotient) but $F$ is not residually finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Regular curve whose tangent lines pass through a fixed point How to prove that if a regular parametrized curve has the property that all its tangent lines pass through a fixed point, then its trace is a segment of a straight line? Thanks
Suppose the curve is $r(s)$, where $s$ is the arc-length parameter. WLOG, we may assume that all the tangent lines pass through the origin. Thus $r \parallel T$ (where $T=r'(s)$ is the unit tangent vector), i.e. $r\times T=0$. Differentiating both sides with respect to $s$ gives $T\times T + r\times T' = \kappa\,(r\times N) = 0$, so $r\times N=0$ wherever the curvature $\kappa\neq 0$ (here $N$ is the unit normal vector). If $T$ is not a constant vector, then $\kappa\neq 0$ on some interval; there $r$ is parallel to both $T$ and $N$, which are orthogonal unit vectors, so $r=0$ on that interval, contradicting the regularity of the curve. Hence $T$ is constant, which implies that $r$ traces a straight line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving $\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$ with induction I am just starting out learning mathematical induction and I got this homework question to prove with induction, but I am not managing. $$\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$$ Perhaps someone can help me out; I don't understand how to move forward from here: $$\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}}+\frac{1}{\sqrt{n+1}}\ge \sqrt{n+1}$$ A proof and explanation would be greatly appreciated :) Thanks :) EDIT sorry meant GE not = fixed :)
If you wanted to prove that $$ \sum_{k=1}^n \frac 1{\sqrt k} \ge \sqrt n, $$ that I can do. It is clear for $n=1$ (since we have equality then), so that it suffices to verify that $$ \sum_{k=1}^{n+1} \frac 1{\sqrt k} \ge \sqrt{n+1} $$ but this is equivalent to $$ \sum_{k=1}^{n} \frac 1{\sqrt k} + \frac 1{\sqrt{n+1}} \ge \sqrt{n+1} \ $$ and again equivalent to $$ \sum_{k=1}^n \frac{\sqrt{n+1}}{\sqrt k} + 1 \ge n+1 $$ so we only need to prove the last statement now, using induction hypothesis. Since $$ \sum_{k=1}^n \frac 1{\sqrt k} \ge \sqrt n, $$ we have $$ \sum_{k=1}^n \frac{\sqrt{n+1}}{\sqrt k} \ge \sqrt{n+1}\sqrt{n} \ge \sqrt{n} \sqrt{n} = n. $$ Adding the $1$'s on both sides we get what we wanted. Hope that helps,
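For reassurance, the inequality can be checked by brute force for the first couple of thousand values of $n$ (a sanity sketch, not a replacement for the induction):

```python
import math

s = 0.0
holds = True
for n in range(1, 2001):
    s += 1 / math.sqrt(n)          # running partial sum of 1/sqrt(k)
    holds = holds and (s >= math.sqrt(n))  # equality only at n = 1
```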
{ "language": "en", "url": "https://math.stackexchange.com/questions/56335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 3 }
Do finite algebraically closed fields exist? Let $K$ be an algebraically closed field ($\operatorname{char}K=p$). Denote $${\mathbb F}_{p^n}=\{x\in K\mid x^{p^n}-x=0\}.$$ It's easy to prove that ${\mathbb F}_{p^n}$ consists of exactly $p^n$ elements. But if $|K|<p^n$, we have a collision with the previous statement (because ${\mathbb F}_{p^n}$ is a subfield of $K$). So, are there any finite algebraically closed fields? And if they exist, where have I made a mistake? Thanks.
As an alternative approach, suppose we have a field $K$ such that $\overline{K}$, the algebraic closure of $K$, is finite (and we'll also assume that $\vert K \vert >1$). It is clear that $K$ must then be finite, so $K=\mathbb{F}_{p^n}$ for some prime $p$ and some $n\in \mathbb{N}$. However, for $i \vert j$, we have $\mathbb{F}_{p^i}$ isomorphic to a subfield of $\mathbb{F}_{p^j}$. Thus, $\overline{K}=\overline{\mathbb{F}_{p^n}}=\bigcup\limits_{n\vert m} \mathbb{F}_{p^m}$, which is infinite. Therefore the algebraic closure of any (non-trivial) field is infinite.
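Both arguments say no finite field is algebraically closed; here is one explicit witness, checked mechanically (a small sketch with the hypothetical choice $p=7$): by Fermat's little theorem $x^p \equiv x \pmod p$, so $f(x) = x^p - x + 1 = \prod_{a\in\mathbb{F}_p}(x-a) + 1$ evaluates to $1$ at every element of $\mathbb{F}_p$ and therefore has no root there.

```python
p = 7  # any prime works here
# f(x) = x^p - x + 1; in F_p, x^p == x for every x, so f is identically 1
values = [(pow(x, p, p) - x + 1) % p for x in range(p)]
roots = [x for x in range(p) if values[x] == 0]
```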
{ "language": "en", "url": "https://math.stackexchange.com/questions/56397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 4, "answer_id": 1 }