Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
The practical implication of P vs NP problem Although whether $P = NP$ holds is important from a theoretical computer science point of view, I fail to see any practical implication of it.
Suppose we could prove that all questions that can be verified in polynomial time have polynomial time solutions; that alone won't help us find the actual solutions. Conversely, if we could prove that $P \ne NP$, this doesn't mean that our current NP-hard problems have no polynomial time solutions.
From a practical point of view (practical in the sense that we can immediately apply the solution to a real-world scenario), it shouldn't bother me whether P vs NP is proved or disproved any more than whether my current problem has a polynomial time solution.
Am I right?
| If $P = NP$, computational revolution (once a specific algorithm is identified for an NP-hard problem, with explicit asymptotic runtime bounds).
If $P < NP$ and one can prove it, secure (classical) cryptography provably exists, and a huge missing piece in our understanding of computation is filled in. The first already has significant implications for daily life, and developing the second would have much larger implications.
You should also understand that after 40 years of research, today P=NP carries a host of related ideas like: easy-to-hard phase transition in combinatorial problems; quantifiable boundaries between easy and hard approximate versions of specific NP-complete problems (so getting within 7/8 of the optimal solution is easy but anything closer is NP-complete); counting and randomly sampling combinatorial objects are the same problem; zero-knowledge proofs "that reveal nothing but their own validity" (unforgeable ID cards). It's a very rich universe of ideas and it doesn't run out of questions once you know the answer to P=NP.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/1892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 0
} |
Methods to see if a polynomial is irreducible Given a polynomial over a field, what are the methods to see it is irreducible? Only two come to my mind now. First is the Eisenstein criterion. Another is that if an integer polynomial (with leading coefficient not divisible by $p$) is irreducible mod $p$, then it is irreducible over the rationals. Are there any others?
| Here's an elementary trick that I occasionally find useful: Let $y=x+c$ for some fixed integer $c$, and write $f(x)=g(y)$. Then $f$ is irreducible if and only if $g$ is irreducible. You may be able to reduce $g$ modulo a prime and/or apply Eisenstein to show that $g$ is irreducible.
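A standard worked instance of this trick (my example, not part of the original answer): the cyclotomic polynomial $f(x) = x^4+x^3+x^2+x+1$ admits no direct application of Eisenstein, but shifting by $c=1$ gives
$$g(y) = f(y+1) = \frac{(y+1)^5-1}{y} = y^4 + 5y^3 + 10y^2 + 10y + 5,$$
which is Eisenstein at $p=5$; hence $f$ is irreducible over $\mathbb{Q}$.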
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/1935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 7,
"answer_id": 2
} |
Does a Person Need a Mathematics Degree in order to Publish in a Mathematics Journal? I am a neophyte amateur mathematician. I have been reading a lot about journals and the topic of peer review in mathematics journals. Does one have to have professional credentials or a doctorate in order to publish in peer-reviewed mathematics journals, or just the desire to competently solve mathematical problems?
| There is usually no technical requirement for any particular credentials, and it seems unlikely to me that editors and referees will check for this. The usual publication process is that you submit a paper to a journal editor, who then forwards it to one or more competent referees, who make recommendations on its suitability for publication. Often, the referees make suggestions for improving the paper, whether or not it is found acceptable for publication.
Nevertheless, an amateur mathematician will find himself or herself at a disadvantage in several respects in this process. First, for someone with less experience in the professional mathematical community, it may be more difficult to recognize which accomplishments actually merit publication, and editors sometimes receive articles submitted in good faith by amateurs that are trivial or seriously wrong in some respect. Second, even when the result is correct and worthwhile, if the presentation of the paper deviates from the accepted norms, it may be found wanting. For example, it is more difficult for an amateur to know which topics need more careful explanation in a paper and which do not, and inappropriate decisions in such cases can hurt the reception of the paper.
I see (as I write this) that others have now given some concrete advice. The most important advice I can give is to take the suggestions of the editors and referees seriously. If someone claims that part of your paper is unclear or wrong, then you should try to understand exactly why they thought so; doing so will inevitably lead you to a better paper.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/1984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 9,
"answer_id": 2
} |
Solution(s) to 'power equations' I'm not sure a 'power equation' is the right name for the equation I'd like to know more about (and, specifically, about its solutions), but I don't know the 'proper' way to name it.
The simplest 'power polynomial' is $P(x) = x^x$. The simplest 'power equation' is: (1) $x^x = c$ for some $c \in \mathbb{R}$. What are the exact solutions $x \in \mathbb{C}$ of (1) in terms of $c$, apart from the 'obvious' solutions $(x=0, c=0)$, $(x=1, c=1)$ and $(x=2, c=4)$, and all the other solutions of the form $x^x = n^n$ for $n \in \mathbb{N}$?
We could extend the power polynomial: $P_{2}(x) = x^{{ax}^{bx}} + x^{cx} $. What are the exact solutions of $P_{2}(x) = d$ for $a,b,c,d \in \mathbb{R}, x\in \mathbb{C}$ ? Or perhaps I should ask what the exact form of the solutions is.
We could generalize the power polynomial even further to $P_{3}(x)$ and $P_{n}(x)$, but I don't know how to write down the latter, general polynomial.
Thanks in advance,
Max
(P.S. I. References are always welcome. II. If you think this question belongs on MO, please tell me.)
| Maybe you should read about the Lambert W function, which gives the solutions to equations like $z=x^x$. However, I am not sure what to do in the case of a "power tower" like $P_2$.
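For real $c>1$ the inversion can also be sketched numerically. The function name and Newton-iteration details below are my own illustration; the identity $x = e^{W(\ln c)}$ with the Lambert W function is used only as a guide:

```python
import math

def solve_power_tower(c, x0=2.0, tol=1e-12, max_iter=100):
    """Solve x**x = c for the real root x > 1 (assumes c > 1) by
    Newton's method on g(x) = x*ln(x) - ln(c); this root equals
    exp(W(ln c)) in terms of the Lambert W function."""
    target = math.log(c)
    x = x0
    for _ in range(max_iter):
        g = x * math.log(x) - target
        dg = math.log(x) + 1.0          # g'(x), nonzero for x > 1/e
        x -= g / dg
        if abs(g) < tol:
            break
    return x
```

For example, `solve_power_tower(4)` recovers the "obvious" solution $x=2$.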
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Does there exist a bijective $f:\mathbb{N} \to \mathbb{N}$ such that $\sum f(n)/n^2$ converges? We know that $\displaystyle\zeta(2)=\sum\limits_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$ and it converges.
*
*Does there exist a bijective map $f:\mathbb{N} \to \mathbb{N}$ such that the sum $$\sum\limits_{n=1}^{\infty} \frac{f(n)}{n^2}$$ converges?
If $s=2$ were not fixed, can we have a function $f$ such that $\displaystyle \sum\limits_{n=1}^{\infty} \frac{f(n)}{n^s}$ converges?
| For $s=2$ the answer is negative. This series doesn't converge.
To prove this we can use the Abel transformation (summation by parts).
$$
\sum_{n=1}^{n=N} \frac{f(n)}{n^2} = \sum_{n=1}^{n=N} (\sum_{k=1}^{k=n} f(k)) (\frac{1}{n^2} - \frac{1}{(n + 1)^2}) + (\sum_{n=1}^{N} f(n))\frac{1}{(N+1)^2}
$$
Since $f$ is a bijection, $\sum_{k=1}^{k=n} f(k) \ge \frac{n^2}{2}$, hence the first sum is greater than $\sum_{n=1}^{N}\frac{c}{n}$ for some $c > 0$, which diverges as $N \to \infty$.
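The lower bound can be spelled out: since $f(1),\dots,f(n)$ are $n$ distinct positive integers,
$$\sum_{k=1}^{n} f(k) \;\ge\; 1+2+\cdots+n \;=\; \frac{n(n+1)}{2} \;\ge\; \frac{n^2}{2},$$
so each term of the first sum satisfies
$$\Big(\sum_{k=1}^{n} f(k)\Big)\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right) \;\ge\; \frac{n^2}{2}\cdot\frac{2n+1}{n^2(n+1)^2} \;=\; \frac{2n+1}{2(n+1)^2} \;\ge\; \frac{1}{4n},$$
and the harmonic lower bound diverges.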
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 4,
"answer_id": 2
} |
Division of Factorials [binomial coefficients are integers] I have a partition of a positive integer $p$. How can I prove that the factorial of $p$ can always be divided by the product of the factorials of the parts?
As a quick example $\frac{9!}{(2!3!4!)} = 1260$ (no remainder), where $9=2+3+4$.
I can nearly see it by looking at factors, but I can't see a way to guarantee it.
| The key observation is that the product of $n$ consecutive integers is divisible by $n!$. This can be proved by induction.
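Granting that observation, the partition claim follows by peeling off one part at a time: for a partition $p = a_1 + a_2 + \cdots + a_k$,
$$\frac{p!}{a_1!\,a_2!\cdots a_k!} \;=\; \binom{a_1}{a_1}\binom{a_1+a_2}{a_2}\binom{a_1+a_2+a_3}{a_3}\cdots\binom{p}{a_k},$$
and each factor is an integer because $\binom{m+a}{a}$ is the product of the $a$ consecutive integers $(m+1)\cdots(m+a)$ divided by $a!$. In the example, $\frac{9!}{2!\,3!\,4!} = \binom{2}{2}\binom{5}{3}\binom{9}{4} = 1\cdot 10\cdot 126 = 1260$.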
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 12,
"answer_id": 7
} |
For any $n$, is there a prime factor of $2^n-1$ which is not a factor of $2^m-1$ for $m < n$? Is it guaranteed that there will be some $p$ such that $p\mid2^n-1$ but $p\nmid 2^m-1$ for any $m<n$?
In other words, does each $2^n-1$ introduce a new prime factor?
| Yes, it's true, except for $2^6-1=63=7\times 3^2$ (and trivially $2^1-1=1$, which has no prime factors at all).
This is known as Bang's theorem, and is a corollary of Zsigmondy's Theorem.
You can find a proof here (Theorem 3).
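A quick computational sanity check of the claim for small $n$ (my own sketch; plain trial division, nothing clever):

```python
def prime_factors(n):
    """Set of prime factors of n, by trial division."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

seen = set()        # primes dividing 2^m - 1 for some earlier m
exceptions = []     # n whose 2^n - 1 brings no new ("primitive") prime
for n in range(2, 21):
    ps = prime_factors(2 ** n - 1)
    if not (ps - seen):
        exceptions.append(n)
    seen |= ps
# Bang's theorem predicts exceptions == [6] in this range
```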
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 2
} |
Proof that $1+2+3+4+\cdots+n = \frac{n\times(n+1)}2$ Why is $1+2+3+4+\ldots+n = \dfrac{n\times(n+1)}2$ $\space$ ?
| All these proofs seem very complicated. The way I remember it is:
The sequence is: $1, 2, 3, \ldots, (n-2), (n-1), n$.
Pairing the first and last terms, the 2nd and $(n-1)$th terms, and so on, we form $n/2$ pairs each summing to $(n+1)$. So the sum is $\frac{n}{2}(n+1)$.
Example: 1, 2, 3, 4, 5, 6
= (1+6) + (2+5) + (3+4)
= 3 × 7 = 21
The formula still holds for an odd number of terms, even though the middle term is then left unpaired: it equals $(n+1)/2$, i.e. exactly half a pair.
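The parity worry disappears entirely in the usual "write the sum twice" presentation of the same idea:
$$2S = \big(1 + n\big) + \big(2 + (n-1)\big) + \cdots + \big(n + 1\big) = n(n+1),$$
since adding $S = 1+2+\cdots+n$ to its own reversal pairs every term with a partner summing to $n+1$, giving $n$ such pairs whether $n$ is even or odd; hence $S = \frac{n(n+1)}{2}$.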
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "136",
"answer_count": 36,
"answer_id": 29
} |
Fundamental theorem of algebra using Lie Theory It is said that there is a proof of fundamental theorem of algebra using Lie Theory. I have seen this claim at various places. But I could never find such a proof. Can anybody help me out?
| The references can be found in the comments here:
https://mathoverflow.net/questions/34699/approaches-to-riemann-hypothesis-using-methods-outside-number-theory/34718#34718
For those who don't know it, the (well, a) Lie-theoretic proof of the Fund. Thm. of Algebra due to Witt is on p. 245 of the book "Numbers" by Ebbinghaus et al. – KConrad Aug 6 at 4:44
Witt's Lie-theoretic proof of the Fund. Thm. of Algebra seems to be Witt (Ernst), Über einen Satz von Ostrowski, Arch. Math. (Basel) 3 (1952), 334. – Chandan Singh Dalawat Aug 6 at 6:11
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to prove $(f \circ\ g) ^{-1} = g^{-1} \circ\ f^{-1}$? (inverse of composition) I'm doing exercise on discrete mathematics and I'm stuck with question:
If $f:Y\to Z$ is an invertible function, and $g:X\to Y$ is an invertible function, then the inverse of the composition $(f \circ\ g)$ is given by $(f \circ\ g) ^{-1} = g^{-1} \circ\ f^{-1}$.
I've no idea how to prove this, please help me by give me some reference or hint to its solution.
| This is straightforward. Take $x$ in the domain of $g$. It goes to $g(x) = y$, and $f$ takes $y$ to $z = f(y)$. Therefore $f^{-1}$ takes $z$ to $y$ and $g^{-1}$ takes $y$ to $x$. Both sides of your equation take $z$ to $x$.
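The same fact can be packaged as a direct verification, using only associativity, that $g^{-1}\circ f^{-1}$ is a two-sided inverse of $f\circ g$:
$$(g^{-1}\circ f^{-1})\circ(f\circ g) = g^{-1}\circ(f^{-1}\circ f)\circ g = g^{-1}\circ \mathrm{id}_Y\circ g = \mathrm{id}_X,$$
$$(f\circ g)\circ(g^{-1}\circ f^{-1}) = f\circ(g\circ g^{-1})\circ f^{-1} = f\circ \mathrm{id}_Y\circ f^{-1} = \mathrm{id}_Z.$$
Since inverses are unique, $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$.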
Please try to think more before asking. This was not hard, was it? :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 6
} |
Asimov quote about "eight million trillion" arrangements of amino acids A friend of mine is subediting a book whose author died in 1999. The author, at some point, uses the word "trillion" which is, unfortunately, an ambiguous word in the UK: when I was at school it used to mean $10^{18}$ but nowadays it means $10^{12}$. My friend is faced with the following paragraph:
According to Asimov, the amino acids of the proteins behave in a much freer way than our words do: they can be rearranged in any manner and always retain some meaning. A simple protein is made up of eight amino acids, which can be classified putting the numbers one to eight in a series, changing the order of sequence by one digit each time. Out of the same number of "words" we can construct a little over 40,000 organised "biological phrases" from the same genetic code, each one with its own meaning, which is the mission of every protein. But if the chains become longer, as in the case of more complicated molecules such as insulin, which consists of 30 amino acids, the tally rises to a staggering eight million trillion possibilities.
The editor doesn't like "eight million trillion" and wants to replace it with "800000..000". The question, of course, is "how many zeros"? More precisely, the question is: is someone with a better understanding of chemistry than me able to reconstruct Asimov's calculation and see whether the answer is approximately $8 \times 10^{24}$ or $8 \times 10^{18}$?
| What I think the author intends in the first part is that some piece of genetic code gives you $8$ distinct amino acids and you want to count the number of ways to rearrange them to get a protein, which is where the $8! \approx 40000$ number comes from. Since there are $20$ amino acids, the best you can do for $30$ amino acids is to use each of them once and ten of them twice, giving an answer of $\frac{30!}{2^{10}} \approx 2.59 \times 10^{29}$.
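The two counts in this paragraph are easy to reproduce exactly (a small sketch; the variable names are mine):

```python
from math import factorial

# Arrangements of 8 distinct amino acids: "a little over 40,000".
words = factorial(8)

# 30 residues drawn from 20 amino acids, 10 of them used twice:
# permutations of a multiset with ten repeated pairs.
chains = factorial(30) // 2 ** 10
```

Here `words` is 40320 and `chains` is about $2.59\times 10^{29}$, matching the estimates above.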
I am unable to find the actual list of amino acids in insulin (which is ridiculous to me; why isn't this information on a wiki somewhere?), but the above calculation suggests that the $10^{24}$ number is closer. If anyone wants to help me out, the actual polypeptide chain in question is the B chain.
Edit: According to this citation, Asimov's actual estimate is $8 \times 10^{27}$. So it looks like the author misquoted him.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Multiples of 4 as sum or difference of 2 squares Is it true that for any $n \in \mathbb{N}$ we can have $4n = x^{2} + y^{2}$ or $4n = x^{2} - y^{2}$, for $x,y \in \mathbb{N} \cup \{0\}$?
I was just working out a proof and this turns out to be true from $n=1$ to $n=20$. After that I didn't try, but I would like to see if a counterexample exists for a greater value of $n$.
| It is true because
$$ (n+1)^2 - (n-1)^2 = 4n $$
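Expanding makes the identity transparent:
$$(n+1)^2 - (n-1)^2 = (n^2+2n+1) - (n^2-2n+1) = 4n,$$
so $x = n+1$ and $y = n-1$ lie in $\mathbb{N}\cup\{0\}$ for every $n \ge 1$, and the difference-of-squares case alone already covers all multiples of $4$.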
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why can't the Polynomial Ring be a Field? I'm currently studying polynomial rings, but I can't figure out why they are rings, not fields. In the definition of a field, a set forms a commutative group under addition and multiplication. This implies a multiplicative inverse for every element in the set.
The book doesn't elaborate on this, however. I don't understand why a polynomial ring couldn't have a multiplicative inverse for every element (at least over the whole numbers, and it's already given that it has a neutral element). Could somebody please explain why this can't be so?
| Consider $\mathbb{C}[x]$ the ring of polynomials with coefficients from $\mathbb{C}$. This is an example of polynomial ring which is not a field, because $x$ has no multiplicative inverse.
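The standard way to see that $x$ has no inverse is a degree count: degrees add under multiplication, so for any nonzero $g \in \mathbb{C}[x]$,
$$\deg\big(x \cdot g(x)\big) = 1 + \deg g \;\ge\; 1 \;>\; 0 = \deg 1,$$
hence $x\,g(x) = 1$ is impossible.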
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "86",
"answer_count": 6,
"answer_id": 0
} |
About powers of irrational numbers Square of an irrational number can be a rational number e.g. $\sqrt{2}$ is irrational but its square is 2 which is rational.
But is there an irrational number whose square root is a rational number?
Is it safe to assume, in general, that the $n$th root of an irrational number will always be irrational?
| It's true precisely because the rationals $\mathbb Q$ comprise a multiplicative subsemigroup of the reals $\mathbb R$,
i.e. the subset of rationals is closed under the multiplication operation of $\mathbb R$. Your statement arises by taking the contrapositive of this statement - which transfers it into an equivalent statement in the complement set $\mathbb R \backslash \mathbb Q$ of irrational reals.
Thus $\rm\quad\quad\quad r_1,\ldots,r_n \in \mathbb Q \;\Rightarrow\; r_1 \cdots r_n \in \mathbb Q$
Contra+ $\rm\quad\; r_1 r_2\cdots r_n \not\in \mathbb Q \;\Rightarrow\; r_1\not\in \mathbb Q \;\:$ or $\rm\;\cdots\;$ or $\rm\;r_n\not\in\mathbb Q$.
Your case $\rm\;\;\; r^n\not\in \mathbb Q \;\Rightarrow\; r\not\in \mathbb Q \;$ is the special constant case $\rm r_i = r$.
Obviously the same is true if we replace $\rm\mathbb Q\subset \mathbb R$ by any subsemigroup chain $\rm G\subset H$.
The contrapositive form is important in algebra since it characterizes prime ideals in semigroups, rings, etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Adjoint functors requiring a natural bijection When showing that two functors $F:A\rightarrow B$ and $G:B\rightarrow A$ are adjoint, one defines a natural bijection $\mathrm{Mor}(X,G(Y)) \rightarrow \mathrm{Mor}(F(X),Y)$. What if one does not require the bijection to be natural; what issues would arise?
| Consider this example: take $A$ and $B$ to both be the category of non-zero finite dimensional real vector spaces; then all $\mathrm{Mor}(U,V)$ have the same cardinality, making any two endofunctors adjoint in the unnatural sense!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 0
} |
probability and statistics: Does having little correlation imply independence? Suppose there are two correlated random variables with a very small correlation coefficient (on the order of $10^{-1}$). Is it valid to approximate them as independent random variables?
| Independence of random variables implies that they are uncorrelated, but the reverse is not true. Hence no, such an approximation is not valid (given that information alone).
A clear description is given on this Wikipedia page:
If X and Y are independent, then they are uncorrelated. However, not all uncorrelated variables are independent. For example, if X is a continuous random variable uniformly distributed on [−1, 1] and Y = X², then X and Y are uncorrelated even though X determines Y and a particular value of Y can be produced by only one or two values of X.
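The Wikipedia example is easy to check numerically. This is a minimal sketch (the names and sample size are my own choices) estimating the sample correlation of $X$ and $Y = X^2$:

```python
import random

random.seed(0)
n = 100_000
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]

def pearson(a, b):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

r = pearson(xs, ys)   # close to 0, yet Y is fully determined by X
```

With $10^5$ samples the estimate lands within a few thousandths of zero, even though $Y$ is a deterministic function of $X$.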
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 3
} |
What is the importance of the Collatz conjecture? I have been fascinated by the Collatz problem since I first heard about it in high school.
Take any natural number $n$. If $n$ is even, divide it by $2$ to get $n / 2$, if $n$ is odd multiply it by $3$ and add $1$ to obtain $3n + 1$. Repeat the process indefinitely. The conjecture is that no matter what number you start with, you will always eventually reach $1$. [...]
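The process described above can be sketched in a few lines (the function name is mine; running it of course verifies termination only for the starting values actually tried):

```python
def collatz_steps(n):
    """Number of Collatz iterations needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For example, the famously stubborn starting value 27 climbs as high as 9232 before falling to 1 after 111 steps.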
Paul Erdős said about the Collatz conjecture: "Mathematics is not yet ready for such problems." He offered $500 USD for its solution.
QUESTIONS:
How important do you consider the answer to this question to be? Why?
Would you speculate on what might have possessed Paul Erdős to make such an offer?
EDIT: Is there any reason to think that a proof of the Collatz Conjecture would be complex (like the FLT) rather than simple (like PRIMES is in P)? And can this characterization of FLT vs. PRIMES is in P be made more specific than a bit-length comparison?
| I will answer this question:
Question: How important do you consider the answer to this question to be? Why?
See the reason given by the mathematician Terence Tao here:
See Terence Tao's YouTube video on the Collatz conjecture, where you will learn more about its applications.
EDIT:
Also see this pdf, which contains all the things discussed by Professor Terence Tao in the video.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "247",
"answer_count": 9,
"answer_id": 8
} |
Is there an elementary proof that $\sum \limits_{k=1}^n \frac1k$ is never an integer? If $n>1$ is an integer, then $\sum \limits_{k=1}^n \frac1k$ is not an integer.
If you know Bertrand's Postulate, then you know there must be a prime $p$ between $n/2$ and $n$, so $\frac 1p$ appears in the sum, but $\frac{1}{2p}$ does not. Aside from $\frac 1p$, every other term $\frac 1k$ has $k$ divisible only by primes smaller than $p$. We can combine all those terms to get $\sum_{k=1}^n\frac 1k = \frac 1p + \frac ab$, where $b$ is not divisible by $p$. If this were an integer, then (multiplying by $b$) $\frac bp +a$ would also be an integer, which it isn't since $b$ isn't divisible by $p$.
Does anybody know an elementary proof of this which doesn't rely on Bertrand's Postulate? For a while, I was convinced I'd seen one, but now I'm starting to suspect whatever argument I saw was wrong.
| An elementary proof uses the following fact:
If $2^s$ is the highest power of $2$ in the set $S = \{1,2,...,n\}$, then $2^s$ is not a divisor of any other integer in $S$.
To use that,
consider the highest power of $2$ which divides $n!$. Say that is $t$.
Now the number can be rewritten as
$\displaystyle \frac{\sum \limits_{k=1}^{n}{\frac{n!}{k}}}{n!}$
The highest power of $2$ which divides the denominator is $t$.
Now the highest power of $2$ that divides $\displaystyle \frac{n!}{k}$ is at least $t-s$. If $k \neq 2^{s}$, then this is at least $t-s+1$, as the highest power of $2$ that divides $k$ is at most $s-1$.
In case $k=2^s$, the highest power of $2$ that divides $ \dfrac{n!}{k}$ is exactly $t-s$.
Thus the highest power of $2$ that divides the numerator is at most $t-s$. If $s \gt 0$ (which is true if $n \gt 1$), we are done.
In fact the above proof shows that the number is of the form $\frac{\text{odd}}{\text{even}}$.
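The odd/even form is easy to confirm with exact rational arithmetic (a small sketch using Python's `fractions` module):

```python
from fractions import Fraction

def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n as an exact fraction."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# In lowest terms, H_n has an odd numerator and an even denominator
# for every n > 1, so it is never an integer.
for n in range(2, 51):
    h = harmonic(n)
    assert h.numerator % 2 == 1 and h.denominator % 2 == 0
```

For instance $H_4 = 25/12$: odd numerator, even denominator.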
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "237",
"answer_count": 10,
"answer_id": 0
} |
Companions to Rudin? I'm starting to read Baby Rudin (Principles of mathematical analysis) now and I wonder whether you know of any companions to it. Another supplementary book would do too. I tried Silvia's notes, but I found them a bit too "logical" so to say. Are they good? What else do you recommend?
| 1) Introduction to real analysis by Bartle and Sherbert
2) Methods of Real Analysis by R.R. Goldberg
3) Mathematical Analysis by Tom Apostol
4) Real and Abstract Analysis by Karl Stromberg.
5) A radical approach to real analysis by David M Bressoud by MAA.
The first book is a very good book for a beginner. The next two are classics. (4) is also very good in case you want to read something advanced. The last one keeps entertaining you with some interesting examples as well as some interesting history of Real Analysis.
Happy Reading!!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 6,
"answer_id": 4
} |
How to tell if a line segment intersects with a circle? Given a line segment, denoted by its $2$ endpoints $(X_1, Y_1)$ and $(X_2, Y_2)$, and a circle, denoted by its center point $(X_c, Y_c)$ and a radius $R$, how can I tell if the line segment is a tangent of or runs through this circle? I don't need to be able to discern between tangent or running through a circle, I just need to be able to discern between the line segment making contact with the circle in any way and no contact. If the line segment enters but does not exit the circle (if the circle contains an endpoint), that meets my specs for it making contact.
In short, I need a function to find if any point of a line segment lies in or on a given circle.
EDIT:
My application is that I'm using the circle as a proximity around a point. I'm basically testing if one point is within R distance of any point in the line segment. And it must be a line segment, not a line.
| Paul Bourke's page has some nice information about this topic (circle/sphere and line intersection), along with more geometry material:
http://local.wasp.uwa.edu.au/~pbourke/geometry/sphereline/
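The proximity test described in the question (is the closest point of the segment within $R$ of the center?) can be sketched as follows; the function name and argument order are my own:

```python
def segment_hits_circle(x1, y1, x2, y2, xc, yc, r):
    """True iff some point of the segment lies within distance r of (xc, yc)."""
    dx, dy = x2 - x1, y2 - y1
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:                      # degenerate segment: a single point
        t = 0.0
    else:
        # Parameter of the projection of the center onto the infinite line,
        # clamped to [0, 1] so the closest point stays on the segment.
        t = max(0.0, min(1.0, ((xc - x1) * dx + (yc - y1) * dy) / seg_len_sq))
    px, py = x1 + t * dx, y1 + t * dy        # closest point on the segment
    return (px - xc) ** 2 + (py - yc) ** 2 <= r * r
```

Clamping the projection parameter $t$ to $[0,1]$ is what distinguishes the segment test from the infinite-line test.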
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 8,
"answer_id": 6
} |
Show $\sqrt 3$ is irrational using $3p^2=q^2$ implies $3|p$ and $3|q$ This is a problem from "Introduction to Mathematics - Algebra and Number Systems" (specifically, exercise set 2 #9), which is one of my math texts. Please note that this isn't homework, but I would still appreciate hints rather than a complete answer.
The problem reads as follows:
If $3p^2 = q^2$, where $p,q \in \mathbb{Z}$, show that $3$ is a common divisor of $p$ and $q$.
I am able to show that 3 divides $q$, simply by rearranging for $p^2$ and showing that
$$p^2 \in \mathbb{Z} \Rightarrow q^2/3 \in \mathbb{Z} \Rightarrow 3|q$$
However, I'm not sure how to show that 3 divides p.
Edit:
Moron left a comment below in which I was prompted to apply the solution to this question as a proof of $\sqrt{3}$'s irrationality. Here's what I came up with...
[incorrect solution...]
...is this correct?
Edit:
The correct solution is provided in the comments below by Bill Dubuque.
| There do not exist nonzero integers $p,q$ such that $3q^2=p^2$, because the $3$-adic valuation $v_3(3q^2) = 1 + 2v_3(q)$ is odd, while $v_3(p^2) = 2v_3(p)$ is even.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 6
} |
Why are $x$ and $y$ such common variables in today's equations? How did their use originate? I can understand how the Greek alphabet came to be prominent in mathematics as the Greeks had a huge influence in the math of today. Certain letters came to have certain implications about their meaning (i.e. $\theta$ is almost always an angle, never a function).
But why did $x$ and $y$ come to prominence? They seem like $2$ arbitrary letters for input and output, and I can't think why we began to use them instead of $a$ and $b$. Why did they become the de facto standard for Cartesian coordinates?
| I read somewhere this convention was started by Rene Descartes. While conceptualizing the coordinate system, he used $x$ and $y$ to denote the axes. It took root from there on and has been used ever since.
I am sorry I can't remember the source now, but I will cite if I do remember later.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
} |
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there.
I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius!
So I ask you which are your most surprising moments in maths?
*
*Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
| The primitive element theorem is quite surprising.
Theorem: Let $E \supseteq F$ be a finite degree separable extension. Then $E=F[\alpha]$ for some $\alpha \in E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "200",
"answer_count": 91,
"answer_id": 24
} |
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there.
I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius!
So I ask you which are your most surprising moments in maths?
*
*Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
| Fermat's "two square theorem".
G.H. Hardy's A Mathematician's Apology is a book everyone should read, but for those who haven't here's something Hardy mentions that is rather surprising:
(If we ignore 2) All primes fit into two classes: those that leave remainder $1$ when divided by $4$ and those that leave remainder $3$.
This much is obvious. The surprising thing is that all of the first class, and none of the second can be expressed as the sum of two integer squares.
That is, for every odd prime $p$: if $p \equiv 1 \pmod 4$ then there exist integers $x,y$ such that $p = x^2 + y^2$, and if $p \equiv 3 \pmod 4$ there exist no such $x,y$.
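The dichotomy is pleasant to verify by brute force for small primes (a throwaway sketch, not an efficient algorithm):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def sum_of_two_squares(p):
    """Brute-force search for p = x**2 + y**2 with integers x, y >= 0."""
    x = 0
    while x * x <= p:
        y2 = p - x * x
        y = int(y2 ** 0.5)
        if y * y == y2 or (y + 1) ** 2 == y2:
            return True
        x += 1
    return False
```

Checking all odd primes below 200 confirms that exactly the $p \equiv 1 \pmod 4$ ones are sums of two squares.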
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "200",
"answer_count": 91,
"answer_id": 56
} |
Which one result in mathematics has surprised you the most? A large part of my fascination in mathematics is because of some very surprising results that I have seen there.
I remember one I found very hard to swallow when I first encountered it, was what is known as the Banach Tarski Paradox. It states that you can separate a ball $x^2+y^2+z^2 \le 1$ into finitely many disjoint parts, rotate and translate them and rejoin (by taking disjoint union), and you end up with exactly two complete balls of the same radius!
So I ask you which are your most surprising moments in maths?
*
*Chances are you will have more than one. May I request post multiple answers in that case, so the voting system will bring the ones most people think as surprising up. Thanks!
| This one goes hand in hand with the countability of $\mathbb{Q}$:
the fact that although almost all real numbers are transcendental, it is extremely difficult to exhibit a single one (excluding slight modifications of the already known ones).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "200",
"answer_count": 91,
"answer_id": 88
} |
Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"? As the title implies, it seems that $e^x$ is the only function whose derivative is the same as itself.
Thanks.
| One interesting way of looking at this is by turning the differential equation into an integral equation.
The problem $ y'=y $ is the same as $ \int y \ dx = y $. For the problem to be well defined I have to define an initial condition, we use the condition $y(0)=1$.
Now I am going to guess an answer for y. The simplest function $y(x)$ that I know which goes through $(0,1)$ is $y_0(x)=1$. The subscript here has no special mathematical meaning, it just signifies that this is my first guess.
If we put our guess into the integral equation we get, $$ \int y_0(x) \ dx = \int 1 \ dx = x + C . $$
The integral added a constant that wasn't there before, our guess must have not been very good so we will take this new result as our second guess hoping that it knows more about the problem. So our new guess is,
$$y_1(x) = x + C = x + 1 $$
Notice that we must set $C=1$ so that $y_1(0)=1$. We will substitute this into the integral equation again, which will give us a new guess $y_2(x)$.
$$ y_2(x) = \int y_1(x) \ dx = \int x + 1 \ dx = \frac{x^2}{2} + x + C = \frac{x^2}{2} + x + 1 $$
If we do this a few more times we will get the following,
$$ y_3(x) = \frac{x^3}{6} + \frac{x^2}{2} + x + 1 $$
$$ y_4(x) = \frac{x^4}{24} + \frac{x^3}{6} + \frac{x^2}{2} + x + 1 $$
If you look closely at the coefficients you will see that we are generating the Maclaurin series for $e^x$.
This is happening because the only function which is its own anti-derivative is the exponential function. In the terminology of dynamics we would say that the infinite series representation of $e^x$ is a fixed point of the integral operator. It is also an attractor, which means that repeated integration of some unrelated functions will produce a sequence of functions that approach $e^x$.
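To see the fixed point emerge concretely, here is a small sketch (my own illustration, not part of the original answer) that represents each guess by its list of coefficients and performs the integrate-and-renormalize step with exact rational arithmetic:

```python
from fractions import Fraction

def next_guess(coeffs):
    """One Picard step: integrate term by term (x^k -> x^(k+1)/(k+1)),
    then choose the constant of integration so that y(0) = 1."""
    new = [Fraction(1)]            # constant term forced to 1 by y(0) = 1
    for k, c in enumerate(coeffs):
        new.append(c / (k + 1))    # x^k integrates to x^(k+1)/(k+1)
    return new

guess = [Fraction(1)]              # y_0(x) = 1
for _ in range(5):
    guess = next_guess(guess)

print(guess)  # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120
```

After five steps the coefficients $1, 1, \frac12, \frac16, \frac1{24}, \frac1{120}$ are exactly the start of the Maclaurin series of $e^x$.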
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48",
"answer_count": 21,
"answer_id": 2
} |
Tolman-Bondi-Lemaitre space times One can see this reference for TBL space-times.
I would like to know how the explicit expression for the function called $G$ in equations $3.108$–$3.110$ in the above reference is obtained.
Also it would be nice to see some further references about TBL space-times.
| It actually just comes from integration. Equation 3.106 from that book is
$$ \dot R^2 = \left( - \frac{\partial R}{\partial t} \right)^2 = \frac FR + f $$
rearranging the terms gives (note that $\dot R < 0$)
$$ dt = - \frac{dR}{\sqrt{\frac FR + f}} $$
Now perform the substitution $z = fR/F$, giving
$$ dt = -\frac F{f^{3/2}} \frac{dz}{\sqrt{1+\frac1z}}. $$
Integrating gives
$$ t - t_0 = \frac F{f^{3/2}} \left( \sinh^{-1}\sqrt z - \sqrt{z(z+1)}\right) $$
the rest is just algebra.
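As a sanity check on that closed form, one can differentiate it numerically and compare against the substituted integrand. The sketch below uses a central difference; the constants $F$ and $f$ are arbitrary illustrative values, not taken from the book.

```python
import math

F, f = 2.0, 0.5  # arbitrary positive constants, purely illustrative

def t_closed(z):
    """Closed form: t - t0 = (F / f^(3/2)) * (asinh(sqrt z) - sqrt(z(z+1)))."""
    return (F / f**1.5) * (math.asinh(math.sqrt(z)) - math.sqrt(z * (z + 1)))

def dt_dz(z):
    """The substituted integrand: dt/dz = -(F / f^(3/2)) / sqrt(1 + 1/z)."""
    return -(F / f**1.5) / math.sqrt(1 + 1 / z)

# Central-difference derivative of the closed form should match dt/dz.
z0, h = 2.0, 1e-6
numeric = (t_closed(z0 + h) - t_closed(z0 - h)) / (2 * h)
print(abs(numeric - dt_dz(z0)))  # tiny: the antiderivative checks out
```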
I am no expert on general relativity, but the common name of that TBL spacetime seems to start with "Lemaitre-Tolman". There is a review article on arXiv[1] which might help.
[1]: Kari Enqvist (2008). Lemaitre–Tolman–Bondi model and accelerating expansion. General Relativity and Gravitation 40, 2–3, pp 451–466. DOI: 10.1007/s10714-007-0553-9.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What should the high school math curriculum consist of? "Life is open book."
With the advent of widely accessible, inexpensive (or even free) computational tools and Computer Algebra Systems (TI-89, Wolfram|Alpha, etc.), much of what traditionally comprises a high school math curriculum can now easily be done by almost everyone. Factoring polynomials, solving inequalities, graphing linear equations, differentiation and integration -- these are the types of skills high school math students spend most of their time learning, and yet all of it can be done for free by anyone with a web browser.
What does this mean for the high school math curriculum? On the one hand, we could leave it more-or-less the same, insisting that today's student learn what we learned decades ago, while banning or carefully regulating the use of these new tools. On the other hand, we could embrace the tools and the opportunities they create to spend more math class time on different topics and skills, perhaps focusing more on analytic and synthetic problem solving and less on mechanical symbolic manipulation -- but at the risk of students never learning some basic foundations.
So how about it? Binomial coefficients? The angle-addition formulas for trig functions? The conditions under which a function has an inverse? Basic computer programming? Keeping in mind that the vast majority of high school students do not go on to become professional mathematicians, what should the high school math curriculum consist of?
Btw, I post this question (inspired by this discussion) here because this is a community of thoughtful mathematicians. I recognize this discussion may belong in a different forum, but I don't know what/where that forum is. Any suggestions are welcome.
| Whatever you think the high school mathematics curriculum "should" be, in the United States a curriculum has been put in place, known as the Common Core Standards (CCS), which will significantly - I believe - change American mathematics education. (The CCS extend to all K-12 mathematics.)
http://www.corestandards.org/the-standards/mathematics
I don't think these changes are for the better. A short summary of my personal views are available here:
http://www.education.umd.edu/MathEd/conference/vbook/public-perceptions_Malkevitch.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 8,
"answer_id": 6
} |
Find the coordinates in an isosceles triangle Given:
$A = (0,0)$
$B = (0,-10)$
$AB = AC$
Using the angle between $AB$ and $AC$, how are the coordinates at C calculated?
| Use Polar Co-ordinates. A point with polar co-ordinates $(r,\theta)$ is the same point in $x,y$ co-ordinates (or as also called, rectangular co-ordinates) as $(r\cos \theta, r\sin \theta)$.
In this case, point $C$ lies at a distance $10$ from $A$ which is the origin, so $r = 10$.
If the given angle $CAB$ is $\alpha$, then the polar angle is either $\frac{3\pi}{2}-\alpha$ or $\frac{3\pi}{2}+\alpha$, i.e. in polar co-ordinates $C$ is either $(10,\frac{3\pi}{2}-\alpha)$ or $(10,\frac{3\pi}{2}+\alpha)$.
(It might help to draw a figure).
Now convert back to $x,y$ co-ordinates.
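A quick sketch of that conversion in code, with the angle $\alpha$ and the branch choice as inputs (my own illustration):

```python
import math

def vertex_C(alpha, r=10.0):
    """Return both candidate positions of C: polar angle 3*pi/2 - alpha
    or 3*pi/2 + alpha, radius r, converted to rectangular co-ordinates."""
    return [(r * math.cos(theta), r * math.sin(theta))
            for theta in (3 * math.pi / 2 - alpha, 3 * math.pi / 2 + alpha)]

# Sanity check: with angle CAB = pi/2 the point C lies on the x-axis,
# since B = (0, -10) sits at polar angle 3*pi/2.
print(vertex_C(math.pi / 2))  # approximately [(-10, 0), (10, 0)]
```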
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Gauge transformations in differential forms I am aware of gauge transformations and covariant derivatives as understood in Quantum Field Theory and I am also familiar with deRham derivative for vector valued differential forms.
I am thinking of the gauge field $A$ of the gauge group $G$ as a $\operatorname{Lie}(G)$-valued 1-form on the manifold.
But I can't see why a gauge transformation of $A$ by an element $g\in G$ amounts to the following change, $A \mapsto A^g = gAg^{-1} -dgg^{-1}$ (if $G$ is thought of as a matrix Lie group) or in general $A_g = Ad(g)A + g^* \omega$ (where $\omega$ is the left invariant Maurer-Cartan form on $G$ and I guess $g^*$ is the pull-back of $\omega$ along the left translation map by $g$).
Curvature is defined as $F = dA + \frac{1}{2}[A,A]$ and using this one wants to now see why does $F \mapsto F_g = gFg^{-1}$.
Firstly is the expression for $A_g$ a definition or is there a derivation for that?
When I try proving this (assuming matrix Lie groups) I am getting stuck in multiple places like what is $dA_g$ ?
I would be happy if someone can explain the explicit calculations and/or give a reference where such things are explained. Usual books which explain differential forms or connections on principal bundles don't seem to help with such calculations.
| The method is to use the Leibniz rule in the differentiation and change the sign whenever the exterior derivative moves past an odd form. In addition the following identity must be used: $ dgg^{-1} + g dg^{-1} = 0$. Remember also that the commutator is between odd forms, so it comes with a plus sign.
Here are the intermediate results:
$\frac{1}{2}[A_g, A_g] = \frac{1}{2}g[A, A] g^{-1} -[dgg^{-1}, g A g^{-1}] + dgg^{-1}\wedge dgg^{-1}$
$dA_g = g dA g^{-1} + [dgg^{-1}, g A g^{-1}] - dgg^{-1}\wedge dgg^{-1}$
Here are the required details:
$ d(gAg^{-1})$
*
*Application of the Leibniz rule (Please observe the minus sign in the last term)
$ d(gAg^{-1}) = dg \wedge A g^{-1} + g dA g^{-1} - g A \wedge dg^{-1}$
*
*Using the identities $g g^{-1} = 1$ in the first term and $ dg^{-1} = - g^{-1}dg g^{-1} $
in the last term
$ = dg g^{-1} g \wedge A g^{-1} + g dA g^{-1} +g A g^{-1} \wedge dg g^{-1} $
*
*Collection of the first and last term into a commutator:
$ = g dA g^{-1} +[ dg g^{-1}, g A g^{-1} ]$
$ d(dg g^{-1})$
*
*Application of the Leibniz rule (Please observe the minus sign in the last term)
$ d(dg g^{-1}) = ddg g^{-1} - dg \wedge dg^{-1}$
*
*Using the identities $dd = 0$ and again $ dg^{-1} = - g^{-1}dg g^{-1} $, we obtain:
$ d(dg g^{-1}) = + dg g^{-1}\wedge dg g^{-1}$
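The identity $dg\,g^{-1} + g\,dg^{-1} = 0$ used throughout is just the derivative of $g g^{-1} = 1$. Here is a quick finite-difference check on an arbitrary path of invertible matrices (my own illustration, not part of the derivation above):

```python
import numpy as np

def g(t):
    """An arbitrary smooth path of invertible 2x2 matrices."""
    return np.array([[1.0 + t, t**2],
                     [np.sin(t), 2.0 + t**3]])

t0, h = 0.3, 1e-6
inv = np.linalg.inv
dg = (g(t0 + h) - g(t0 - h)) / (2 * h)               # dg/dt
dginv = (inv(g(t0 + h)) - inv(g(t0 - h))) / (2 * h)  # d(g^{-1})/dt

# d(g g^{-1}) = 0 forces dg.g^{-1} + g.dg^{-1} = 0:
residual = dg @ inv(g(t0)) + g(t0) @ dginv
print(np.abs(residual).max())  # close to zero
```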
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to convert a hexadecimal number to an octal number? How can I convert a hexadecimal number, for example 0x1A03 to its octal value?
I know that one way is to convert it to decimal and then convert it to octal
0x1A03 = 6659 = 0o15003
*
*Is there a simple way to do it without the middle step (conversion to decimal or conversion to binary)?
*Why do we tend to convert it to Base10 every time?
| There's a fast procedure involving no intermediate representation. You need only four lookup tables totaling 544 entries.
Earlier answers have established that the conversion can be done in blocks of 3 (working right to left). Consider the rightmost block in your example:
0xA03 = 1010 0000 0011 B
You need to break this binary string into four groups of three bits, which I will number 1, 2, 3, 4 from the right to the left:
1: 011 B
2: 000 B
3: 000 B
4: 101 B
The first depends only on the rightmost hex digit 0x3. The second depends only on the two rightmost digits 0x03 (it gets its rightmost bit from the 0x3 and its first two bits from 0x0). The third depends only on the second and third digits 0xA0. The fourth depends only on the third digit 0xA. Whence, you only need to perform the following conversions, each of which can be stored in its own static lookup table:
1: 0x3 --> 03 [16 entries cover all possibilities]
2: (0x0, 0x3) --> 00 [16 * 16 entries]
3: (0xA, 0x0) --> 00 [16 * 16 entries]
4: 0xA --> 5 [16 entries].
Now you repeat with the next block, padding with zeros on the left as needed. Continuing the example, the next block is 0x001:
1: 0x1 --> 01
2: (0x0, 0x1) --> 00.
You can stop here because all the original input has been consumed. The output, working backwards, is 0015003.
This directly answers both parts of the original question: it provides a simple conversion without the middle step and it avoids base 10 (which is rarely used for computer conversion anyway: usually the job is done with bit shifting and masking, essentially a binary operation). In case this procedure still looks too much like the other proposed solutions, please note that it performs absolutely no arithmetic (apart from decrementing pointers to input and output as it proceeds): it takes a string representation (hex) as its input, uses the characters to index its tables, and outputs an octal string.
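For concreteness, here is a small sketch of the block procedure in code (my own illustration). It uses `int`/`format` on 3-hex-digit blocks in place of the static lookup tables, but the data flow is the same: 12 bits in, 4 octal digits out, right to left, with no decimal intermediate.

```python
def hex_to_octal(hex_str):
    """Convert a hex string to octal: each block of 3 hex digits (12 bits)
    yields exactly 4 octal digits, working from the right."""
    if hex_str.lower().startswith("0x"):
        hex_str = hex_str[2:]
    out = []
    while hex_str:
        hex_str, block = hex_str[:-3], hex_str[-3:]  # peel off 3 hex digits
        out.append(format(int(block, 16), "04o"))    # 12 bits -> 4 octal digits
    return "".join(reversed(out)).lstrip("0") or "0"

print(hex_to_octal("0x1A03"))  # 15003, matching the worked example
```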
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
When does the product of two polynomials = $x^{k}$? Suppose $f$ and $g$ are are two polynomials with complex coefficents (i.e $f,g \in \mathbb{C}[x]$).
Let $m$ be the degree of $f$ and let $n$ be the degree of $g$.
Are there some general conditions where
$fg= \alpha x^{n+m}$
for some non-zero $\alpha \in \mathbb{C}$?
| The answer just occurred to me. The roots of $f$ and $g$ must all be at $0$.
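In other words, $f$ and $g$ must both be monomials. A tiny sketch with coefficient lists (my own illustration) makes the point: a root away from $0$ in either factor produces a term of lower degree in the product.

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# f = 2x^3, g = 5x^2: all roots at 0, product is the pure power 10x^5.
print(poly_mul([0, 0, 0, 2], [0, 0, 5]))  # [0, 0, 0, 0, 0, 10]

# g = x + 5x^2 has a root at -1/5, and a lower-degree term appears.
print(poly_mul([0, 0, 0, 2], [0, 1, 5]))  # [0, 0, 0, 0, 2, 10]
```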
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 2
} |
Characterizing non-constant entire functions with modulus $1$ on the unit circle
Is there a characterization of the nonconstant entire functions $f$ that satisfy $|f(z)|=1$ for all $|z|=1$?
Clearly, $f(z)=z^n$ works for all $n$. Also, it's not difficult to show that if $f$ is such an entire function, then $f$ must vanish somewhere inside the unit disk. What else can be said about those functions?
Thank you.
| Partial answer.
If $|f(z)|=1$ for all $|z|=1$ and $f$ is entire function, then $f:\mathbb D \to \mathbb D$, by the Maximum Modulus Theorem.
Using Schwarz-Lemma we can characterize, at least, the conformal ones.
(See Functions of One Complex Variable, Conway, 2nd ed., p. 131.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 5,
"answer_id": 0
} |
$F_{\sigma}$ subsets of $\mathbb{R}$ Suppose $C \subset \mathbb{R}$ is of type $F_{\sigma}$. That is $C$ can be written as the union of $F_{n}$'s where each $F_{n}$'s are closed. Then can we prove that each point of $C$ is a point of discontinuity for some $f: \mathbb{R} \to \mathbb{R}$.
I referred to this link on wiki: http://en.wikipedia.org/wiki/Thomae%27s_function and in the follow-up subsection they give this result. I would like somebody to explain it more precisely.
| One can also see the article S. S. Kim, Amer. Math. Monthly 106 (1999), 258–259.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Minimal Ellipse Circumscribing A Right Triangle Find the equation of the ellipse circumscribing a right triangle whose lengths of it's sides are $3,4,5$ and such that its area is the minimum possible one.
You may chose the origin and orientation of the $x,y$ axes as you want.
Motivation: It can be proved [Problem of the Week, Problem No. 8 (Fall 2008 Series), Department of Mathematics, Purdue University] that the area of this ellipse is $8\pi /\sqrt{3}$, without the need of using its equation, but I am also interested in finding it.
Edit: picture from this answer.
| If you look at the proof that you linked, it uses an affine transformation.
Now you can also see that the centroids of the triangles are mapped to each other under the affine transformation, and so are the centers of the ellipses.
Thus the ellipse you need is centered at the centroid of your triangle! This property uniquely determines the ellipse of minimum area.
Now consider the points $A = (-1,-4/3), B = (2, -4/3)$ and $C= (-1, 8/3)$. This is a 3-4-5 triangle whose centroid is the origin, obtained by starting with $(0,0), (3,0)$ and $(0,4)$ and translating so that the centroid is the origin.
Now the equation of an ellipse whose center is the origin is given by
$Px^2 + Qxy + Ry^2 = 1$.
Thus we must have that
(1) $P + 4Q/3 + 16R/9 = 1$
(2) $4P - 8Q/3 + 16R/9 = 1$
(3) $P -8Q/3 + 64R/9 = 1$
Solving these (see footnote) gives us the equation of the ellipse as
$x^{2}/3 + xy/4 + 3y^{2}/16 = 1$
In order to verify this, the area of $Px^2 + Qxy + Ry^2 = 1$ is given by $\displaystyle \frac{2\pi}{\sqrt{4PR - Q^2}}$ which comes out to $\displaystyle \frac{8\pi}{\sqrt 3}$
You can ignore the below if you like. This is just manually solving the equations
(1) $P + 4Q/3 + 16R/9 = 1$
(2) $4P - 8Q/3 + 16R/9 = 1$
(3) $P -8Q/3 + 64R/9 = 1$
Subtracting (1) and (2) gives $3P = 4Q$.
Subtracting (2) and (3) gives $3P = 48R/9$
Thus $Q = 3P/4$ and $R = 9P/16$
Thus using (3) we have that
$P - (8/3)*(3P/4) + 64/9 * (9P/16) = 1$
i.e
$P - 2P + 4P = 1$ i.e $P = 1/3$.
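One can confirm the whole computation numerically: the three vertices satisfy the equation with $P=1/3$, $Q=1/4$, $R=3/16$, and the area formula reproduces $8\pi/\sqrt 3$.

```python
import math

P, Q, R = 1 / 3, 1 / 4, 3 / 16
vertices = [(-1, -4 / 3), (2, -4 / 3), (-1, 8 / 3)]

# Each vertex lies on P x^2 + Q xy + R y^2 = 1:
for x, y in vertices:
    assert abs(P * x**2 + Q * x * y + R * y**2 - 1) < 1e-12

# Area of the ellipse: 2*pi / sqrt(4PR - Q^2) = 8*pi / sqrt(3).
area = 2 * math.pi / math.sqrt(4 * P * R - Q**2)
print(area, 8 * math.pi / math.sqrt(3))
```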
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Why are the only numbers $m$ for which $n^{m+1}\equiv n \pmod{m}$ is true also unique for $\displaystyle\sum_{n=1}^{m}{n^m}\equiv 1 \bmod m$? It can be seen here that the only numbers for which $n^{m+1}\equiv n \pmod{m}$ is true are 1, 2, 6, 42, and 1806. Through experimentation, it has been found that $\displaystyle\sum_{n=1}^{m}{n^m}\equiv 1 \bmod m$ is true for those numbers, and (as yet unproven) no others. Why is this true?
If there is a simple relation between $n^{m+1} \bmod{m}$ and $n^m \bmod{m}$, that would probably make this problem make more sense. It is obvious that $n^m \equiv 1 \pmod{\frac{m}{d}}$ (dividing out $n$ from both sides gives this result) for all $n$ on the interval $[1,m]$ where $d$ is a divisor of $m$. As a result of this, $n^m \bmod{m}$ takes on only values of the form $1+k \frac{m}{d} \pmod m$ where $k = -1, 0, 1$. How can it be shown that the sum of those values is equivalent to $1 \bmod{m}$?
| Well, I've made a full proof! Part 1 was solved here, and Part 2 was solved here.
Lemma 1: Any integer $m$ which satisfies the original problem also satisfies $n^{m+1} \equiv n \bmod{m}$ for all $n$.
Proof: Let $p$ be a prime dividing $m$. Then $\sum_{n=1}^mn^m\equiv1\pmod p$, so $(m/p)\sum_{n=1}^{p-1}n^m\equiv1\pmod p$, so $p^2$ doesn't divide $m$. Let $g$ be a primitive root mod $p$. Then $\sum_{n=1}^{p-1}n^m\equiv\sum_{r=0}^{p-2}g^{rm}$. That's a geometric series, it sums to $(1-g^{(p-1)m})/(1-g^m)$ which is zero mod $p$ - unless $g^m=1$, in which case it sums to $-1$ mod $p$. So we must have $p-1$ dividing $m$. Looking at $n^{m+1}\equiv n\pmod m$ and letting $n=p$, we see that $p^2$ cannot divide $m$. Now looking mod $p$, we get $n^{m+1}\equiv n\pmod p$. This is equivalent to $m+1\equiv1\pmod{p-1}$ (if $a^x \equiv a^y \bmod{n}$, then $x \equiv y \bmod{\varphi(n)}$ by Euler's theorem, and $\varphi(p) = p-1$), that is, $p-1$ divides $m$, so any integer $n^{m+1} \equiv n \bmod{m}$ as $p-1|m$ for all $m$ if $p|m$.
Lemma 2: There are only finitely many integers $m$ which satisfy $n^{m+1} \equiv n \bmod{m}$
Proof: Since $p^2$ does not divide $m$, we may let $m = p_1 \ldots p_r$ with $p_1 < p_2 < \ldots < p_r$, with $p_i$ prime; as $p-1|m$ for all $p|m$, we see that $p_i-1|p_1 \ldots p_{i-1}$ for $i = 1, \ldots, r$. If we take $i = 1$, this forces $p_1-1|1$, so if $r \ge 1$, $p_1 = 2$. If $i = 2$, $p_2-1|2$, so if $r \ge 2$, $p_2 = 3$. Continuing, if $r \ge 3$, then $(p_3-1)|p_1 p_2 = 6$, so $p_3 = 7$; if $r \ge 4$, $(p_4 - 1)|p_1 p_2 p_3 = 42$, so $p_4 = 43$, as the numbers $d+1$ are not prime for other divisors $d$ of 42 larger than 6. If $r \ge 5$, then $(p_5 -1)|p_1 p_2 p_3 p_4 = 1806$, but 1, 2, 6 and 42 are the only divisors of 1806 with $d+1$ prime, so $p_5$ cannot exist. Therefore, $r \le 4$ and $m \in \{1, 2, 6, 42, 1806\}$.
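A brute-force search (my own check, not part of the proof) confirms both characterizations over a modest range:

```python
def sum_condition(m):
    """Check sum_{n=1}^m n^m == 1 (mod m)."""
    return sum(pow(n, m, m) for n in range(1, m + 1)) % m == 1 % m

def power_condition(m):
    """Check n^{m+1} == n (mod m) for every n in [1, m]."""
    return all(pow(n, m + 1, m) == n % m for n in range(1, m + 1))

hits = [m for m in range(1, 2000) if sum_condition(m)]
print(hits)                                    # [1, 2, 6, 42, 1806]
print(all(power_condition(m) for m in hits))   # True
```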
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Finding the $N$-th derivative of $f(x)=\frac {x} {x^2-1}$ I'm practicing some problems from past exams and found this one:
Find the n-th derivative of this function:
$$f(x)=\frac {x} {x^2-1}$$
I have no idea how to start solving this problems. Is there any theorem for finding nth derivative?
| According to the Binomial Theorem (or using the usual formula for the sum of a geometric series with initial term 1 and common ratio $x^2$),
$f(x)=\frac {x} {x^2-1} = -x \frac {1} {1 - x^2} = -x \left( 1 + x^2 + x^4 + \cdots + x^{2n} + \cdots \right)$
$= -x - x^3 - x^5 - \cdots - x^{2n+1} - \cdots$.
Because the right side converges absolutely for $|x^2| < 1$ you can differentiate it term by term, introducing a coefficient $(2n+1)(2n) \cdots (2n+1-k+1)$ for $x^{2n+1-k}$; in other words, the coefficient of $x^j$ is $(j+1)(j+2) \cdots (j+k)$. Dividing the entire thing through by $k!$ gives a series you can easily relate to the binomial expansion of $( 1 - x^2 )$ raised to a negative integral exponent, yielding a closed form solution.
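A closed form can also be reached by a different route: partial fractions give $f(x) = \tfrac12\left(\frac{1}{x-1}+\frac{1}{x+1}\right)$, whence $f^{(n)}(x) = \frac{(-1)^n n!}{2}\left((x-1)^{-(n+1)}+(x+1)^{-(n+1)}\right)$. A quick symbolic check with SymPy (my own addition, not part of the answer above):

```python
import sympy as sp

x = sp.symbols("x")
f = x / (x**2 - 1)

# Verify the partial-fraction closed form for the first few n:
for n in range(1, 6):
    closed = sp.Rational(1, 2) * (-1)**n * sp.factorial(n) * (
        (x - 1)**(-(n + 1)) + (x + 1)**(-(n + 1)))
    assert sp.simplify(sp.diff(f, x, n) - closed) == 0

print("closed form verified for n = 1..5")
```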
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 1
} |
What is the value of $1^i$? What is the value of $1^i$?
| It's 1. 1 to the power of anything is 1.
Edit: I'll elaborate. In defining $a^b$, we know intuitively what to do in certain cases. When $a$ is a positive integer and $b$ is an integer, for example. This definition is easily extended to when $a$ is real and positive, and $b$ is a real number. In the case where $a$ is negative, or complex, we run into trouble... one way to get around this is to define the power in terms of the logarithm, as $a^b = e^{b \log a}$. Then, we have the issue that the logarithm is multivalued, so we could get different answers depending on which branch we choose. This is the approach discussed in Carl's solution.
I do not believe this proposed method applies for the case where $a$ is positive and real, which is the relevant case being discussed. In the case where $a$ is positive and real, $a^b$ is unambiguously defined as $a^b = e^{b \ln a}$, where $\ln a$ is the unique real number $x$ satisfying $e^x = a$. This definition works even in the case where $b$ is complex. So we can apply this to the original problem: $1^i = e^{i\log(1)} = e^0 = 1$ (Myke, you got my +1!). In fact, $1^z = 1$ for any complex $z$. This viewpoint is shared by MathWorld, WolframAlpha, and Wikipedia, for what that's worth.
I guess at some level, it's a question of semantics and preference. Why not use multiple branches and all that jazz for the case where $a>0$? Because (and this is my opinion), it's unnecessary, and doesn't fit with existing definitions. The exponential function $e^z$ is well-defined via a power series and I think everyone would agree that it is single-valued, even when $z$ is complex. It makes no sense to me that $a^z$ should be any different when $a$ is real and positive.
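For what it's worth, Python's complex arithmetic agrees, since for a positive real base it uses exactly this principal value $a^b = e^{b\ln a}$:

```python
import cmath

print(1 ** 1j)                        # (1+0j)
print(cmath.exp(1j * cmath.log(1)))   # (1+0j): e^{i log 1} = e^0 = 1

# 1^z = 1 for any complex z:
print(all(1 ** z == 1 for z in (1j, -2j, 3 + 4j, -0.5 + 0.25j)))  # True
```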
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 3,
"answer_id": 2
} |
Finding the Heavy Coin by weighing twice Suppose you have $100$ coins. $96$ of them are heavy and $4$ of them are light. Nothing is known regarding the proportion of their weights. You want to find at least one genuine (heavy) coin. You are allowed to use a weight balance twice. How do you find it?
Assumptions:
Heavy coins all have the same weight; same for the light coins.
The weight balance compares the weight of two sides on the balance instead of giving numerical measurement of weights.
| To avoid wasting readers' time when posting puzzles, it is useful (and will improve the upvote/downvote ratio) to provide:
*
*an indication of the difficulty level. Was this puzzle in a math-is-fun book for children, or is it a generalization of a problem from the final round of a national olympiad? Having schoolchildren attempt a question that (they were not told) is the $n=100$ specialization of a combinatorial lemma from a research paper, is a waste of time. Having mathematicians spend time on (what might look like) a coin weighing puzzle from the literature, but is really an easy task the high school students could solve with some patience, is a waste of resources.
*the source, if known. This helps those who want to evaluate the difficulty before taking time to attempt a solution, or who would like to look up hints or answers.
*indicators of whether the problem is correct or solvable as stated. Problems from a book, a competition or a journal are likelier to be correct and unambiguously formulated. Questions invented, circulated and (believed to be) solved among friends or students are more likely to have pitfalls that were missed in setting or solving the problem.
*clear definitions. In this case, coin weighing problems involve at least two very different interpretations of "use a weight balance". Weighing can mean comparison or numerical measurement and what is possible is completely different in those settings.
Not specifying which is meant can waste the time of users who try to solve the problem in the wrong interpretation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
Implicit Differentiation... the triple product rule? When implicitly finding the derivative of:
$xy^3 - xy^3\sin(x) = 1$
How do you find the implicit derivative of:
$xy^3\sin(x)$
Is it using a triple product rule of sorts?
| It is possible to derive a 'triple product rule.' In fact, you can generalize it to an arbitrary number of products. You may ask why these rules aren't presented in these general forms. One reason for this is that the general formula is more complicated to memorize and more intimidating to students just learning calculus. The other reason is that having the product rule is enough to give you the general rule by a simple induction argument. Also, for practical purposes, problems can be solved with only the usual product rule applied in multiple steps.
In your case we can apply the product rule twice:
$\frac{d}{dx}\left(x\sin(x)y^{3}\right)=y^{3}\frac{d}{dx}(x\sin(x))+3y^2\frac{dy}{dx}\,x\sin(x)=y^{3}(\sin(x)+x\cos(x))+3y^{2}x\sin(x)\frac{dy}{dx}$
We can also derive a 'triple product rule'
$\frac{d}{dx}(f\cdot g\cdot h)=\frac{df}{dx}\cdot(h\cdot g)+\frac{dg}{dx}\cdot (f\cdot h)+\frac{dh}{dx}\cdot (f\cdot g)$
But you see that you still have to apply the chain rule to that to get it to fit how you need to use it in this situation and so it would be a real mess to memorize all the different special cases. This is why we just provide the product rule with 2 functions.
Does my rambling make sense?
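Yes. As a quick check of both calculations, SymPy confirms the triple product rule and the implicit derivative of the term from the question (the particular functions below are my own choices):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")(x)

# Triple product rule on three arbitrary smooth functions:
f, g, h = sp.sin(x), sp.exp(x), x**3
lhs = sp.diff(f * g * h, x)
rhs = sp.diff(f, x) * g * h + f * sp.diff(g, x) * h + f * g * sp.diff(h, x)
assert sp.simplify(lhs - rhs) == 0

# The term from the question, differentiated implicitly:
expr = sp.diff(x * sp.sin(x) * y**3, x)
expected = (y**3 * (sp.sin(x) + x * sp.cos(x))
            + 3 * y**2 * sp.Derivative(y, x) * x * sp.sin(x))
assert sp.simplify(expr - expected) == 0
print("both identities check out")
```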
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
If $AB = I$ then $BA = I$
If $A$ and $B$ are square matrices such that $AB = I$, where $I$ is the identity matrix, show that $BA = I$.
I do not understand anything more than the following.
*
*Elementary row operations.
*Linear dependence.
*Row reduced forms and their relations with the original matrix.
If the entries of the matrix are not from a mathematical structure which supports commutativity, what can we say about this problem?
P.S.: Please avoid using the transpose and/or inverse of a matrix.
| $\newcommand{\mat}[1]{\left(\begin{matrix}#1\end{matrix}\right)}$
Here is a proof using only calculations and induction over the size $n$ of the matrices.
Observe that no commutativity is needed; we can work in any division ring.
If $n=1$ then two scalars $a,b$ with $ab=1$ are given. Then $b=1/a$ and we also have $ba=1$.
Now suppose that the statement is true for all matrices of size $n-1$, $n\geq2$.
Given two $n$ by $n$ matrices with $AB=I$,
we can assume without loss of generality that the upper left element
of $A$ is nonzero. Otherwise, since the first row of $A$ cannot be all zero, we can achieve this
by permuting two columns of $A$ and the corresponding rows of $B$. We can also assume that this
upper left element $\alpha$ equals 1,
otherwise we multiply the first row of $A$ by the inverse of this element from the left and
the first column of $B$ by $\alpha$ from the right.
Now we write $A,B$ in block matrix form:
$$A=\mat{1&a_2\\a_3&a_4}, B=\mat{b_1&b_2\\b_3&b_4},$$
where $b_1$ is a scalar, $a_2,b_2$ are matrices of size 1 by $n-1$,
$a_3,b_3$ have size $n-1$ by 1 and $a_4,b_4$ are $n-1$ by $n-1$.
$AB=I$ means that $$b_1+a_2b_3=1,\ b_2+a_2b_4=0,\ a_3b_1+a_4 b_3=0\mbox{ and }a_3b_2+a_4b_4=I.$$ (Here $I$ has size $n-1$ only and "0" abbreviates any matrix of zeros.)
First, we calculate $(a_4-a_3a_2)b_4=a_4b_4+a_3b_2=I$. Since both matrices
have size $n-1$ we can conclude that $b_4(a_4-a_3a_2)=I$.
Next, we calculate
$$(a_4-a_3a_2)(b_3+b_4 a_3)=a_4 b_3-a_3a_2b_3+a_3=
(a_4 b_3+a_3b_1)+a_3(-b_1-a_2b_3+1)=0$$
and, multiplying by $b_4$ from the left, we obtain $b_3+b_4 a_3=0$.
Then we obtain that $a_2b_3=-a_2b_4a_3=b_2a_3$ and therefore also $b_1+b_2a_3=1$.
Finally we have $a_2+b_2(a_4-a_3a_2)=(a_2b_4+b_2)(a_4-a_3a_2)=0$ and thus
$b_2a_4=(b_2a_3-1)a_2=-b_1a_2$.
Altogether we obtain $BA=\mat{b_1&b_2\\b_3&b_4}\mat{1&a_2\\a_3&a_4}=I$
which completes the proof.
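Not a proof, of course, but a quick numerical illustration of the statement for a random matrix (my own addition):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
B = np.linalg.inv(A)  # a right inverse: AB = I (a generic A is invertible)

print(np.allclose(A @ B, np.eye(n)))  # True
print(np.allclose(B @ A, np.eye(n)))  # True: the one-sided inverse is two-sided
```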
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "379",
"answer_count": 34,
"answer_id": 15
} |
Vanishing criterion for sections of module on a product This question can be regarded as a variant of this one:
Does a section that vanishes at every point vanish?
Let $X,Y$ be varieties, $M$ be an $\mathcal{O}_{X\times Y}$-module and
$$\pi:X\times Y\rightarrow X$$
be the projection. Let $s$ be a local section of $M$, which vanishes along each "horizontal line":
$$\forall y: i_y^* s=0$$
where $i_y$ denotes the map $i_y:x\mapsto (x,y)$. Does it follow that $s=0$?
| If you take $X$ to be a point, then this reduces to your previous question: you are just taking fibres at each point $y \in Y$, and so the answer is "no" for the same reason.
Just to see that this isn't just being pedantic, let me explain how to bootstrap the answer
to the previous question to give examples in this case which truly have a non-trivial product
structure. For this, suppose that $Y = \mathbb A^1$ and that $X$ is the variety attached to
the ring $A$, so that $X \times Y$ corresponds to the ring $A[t]$. Let $M$ be the module
$A[t]/t^2,$ and let $s$ be the section $t \bmod t^2$. Then $s$ will vanish along each "horizontal line", but is non-zero. (The case $X = $ a point gives the counterexample to
your previous question.)
Again, if $M$ is torsion free and finitely generated then you should be okay.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A $1-1$ function is called injective. What is an $n-1$ function called? A $1-1$ function is called injective. What is an $n-1$ function called ?
I'm thinking about homomorphisms. So perhaps homojective?
Onto is surjective. $1-1$ and onto is bijective.
What about $n-1$ and onto? Projective? Polyjective?
I think $n-m$ and onto should be hyperjective as in hypergroups.
| Perhaps these types of $n-1$ functions are simply called "$n$-to-$1$" functions. There is also a "Division Rule" which states that if $f : X \to Y$ is an $n$-to-$1$ function, then $|X| = n|Y|$, in set theory and counting principles.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
CAS with a standard language I hope this question is suitable for the site.
I recently had to work with Mathematica, and the experience was, to put it kindly, unpleasant. I do not have much experience with similar programs, but I remember not liking Matlab or Maple much either. The result is that I am a mathematician who likes programming, but I never managed to learn how to work with a computer algebra system.
Does there exist a CAS which can be programmed using a standard language? I guess the best thing would be just an enormous library of mathematical algorithms implemented for C or Python or whatever.
I know SAGE is based on Python, but as far as I understand (which is not much) it just collects preexisting open source software, so (I assume) one has to learn how to use a new tool for every different problem.
| I have used GiNaC several years ago to do complex analytical continuation on a special option that my company had embedded in a Financial product. It is a flexible tool, but if you need to integrate a Taylor series, you may need to construct your own integrators.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 5
} |
Group as the Union of Subgroups We know that a group $G$ cannot be written as the set-theoretic union of two of its proper subgroups. Also, $G$ can be written as the union of $3$ of its proper subgroups if and only if $G$ has a homomorphic image that is a non-cyclic group of order $4$.
In this paper http://www.jstor.org/stable/2695649 by M. Bhargava, it is shown that a group $G$ is the union of its proper normal subgroups if and only if it has a quotient that is isomorphic to $C_{p} \times C_{p}$ for some prime $p$.
I would like to make the condition on the subgroups more stringent. We know that characteristic subgroups are normal. So can we have a group $G$ such that $$G = \bigcup\limits_{i} H_{i},$$ where each $H_{i}$ is a proper characteristic subgroup of $G$?
| Update: While searching on the internet I found the paper "A remark on hyperabelian groups" by G. Baumslag, which contains the following remark:
"Moreover so is any torsion abelian group of finite rank, since it is the union of finite characteristic subgroups." I would like to get a reference for this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Intuitive explanation of Cauchy's Integral Formula in Complex Analysis There is a theorem that states that if $f$ is analytic in a domain $D$, and the closed disc {$ z:|z-\alpha|\leq r$} contained in $D$, and $C$ denotes the disc's boundary followed in the positive direction, then for every $z$ in the disc we can write:
$$f(z)=\frac{1}{2\pi i}\int\frac{f(\zeta)}{\zeta-z}d\zeta$$
My question is:
What is the intuitive explanation of this formula? (For example, but not necessary, geometrically.)
(Just to clarify - I know the proof of this theorem, I'm just trying to understand where does this exact formula come from.)
| An exercise that was a big help to me was to compute the integral of $1/z$ on the closed path consisting of the square from $(1,-1)$ to $(1,1)$ to $(-1,1)$ to $(-1,-1)$ to $(1,-1)$. It illustrates how the sides with varying real part differ from the sides with varying imaginary part.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "100",
"answer_count": 14,
"answer_id": 4
} |
$5$-vertex graphs with vertices of degree $2$ I'm trying to show that all graphs with $5$ vertices, each of degree $2,$ are isomorphic to each other. Is there a more clever way than simply listing them all out?
| I'm assuming you do not allow multi-edges, as otherwise there is a trivial counterexample (the cycle of length $5$, versus a disconnected graph consisting of a triangle and two vertices joined by two edges).
Pick one vertex $v_0$; it must be joined to two other distinct vertices, which we may call $v_{-1}$ and $v_1$. Each of those must be joined to another vertex; can they be joined to each other? Can they both be joined to the same vertex that is not yet listed? Consider the possibilities. Then see where each of them leads you.
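The conclusion the hint leads to can also be checked by brute force: enumerating every simple graph on $5$ labelled vertices in which each vertex has degree $2$ and verifying that each one is connected, hence a $5$-cycle. (A sketch in Python, not part of the original answer.)

```python
from itertools import combinations

vertices = range(5)
all_edges = list(combinations(vertices, 2))  # the 10 possible edges of K_5

def is_connected(edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == 5

# a 2-regular graph on 5 vertices has exactly 5 edges
two_regular = []
for edges in combinations(all_edges, 5):
    degree = [0] * 5
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    if all(d == 2 for d in degree):
        two_regular.append(edges)

assert len(two_regular) == 12                     # (5-1)!/2 labelled 5-cycles
assert all(is_connected(g) for g in two_regular)  # each one is a single 5-cycle
```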
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Beautiful identity: $\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$ Let $m,n\ge 0$ be two integers. Prove that
$$\sum_{k=m}^n (-1)^{k-m} \binom{k}{m} \binom{n}{k} = \delta_{mn}$$
where $\delta_{mn}$ stands for the Kronecker's delta (defined by $\delta_{mn} = \begin{cases} 1, & \text{if } m=n; \\ 0, & \text{if } m\neq n \end{cases}$).
Note: I put the tag "linear algebra" because i think there is an elegant way to attack the problem using a certain type of matrices.
I hope you will enjoy. :)
| The given quantity is the constant term in $ (-1)^m \times { \sum_{ k \geq 0 } (-1)^{k} \binom{n}{k} (\frac{1}{x})^k } \times \sum_{k \geq 0} \binom{k}{m} x^k $
or the constant term in $(-1)^m \times (1-\frac{1}{x})^n \times x^{m} \times (1-x)^{-(m+1)}$ = $(-1)^{m+n} x^{m-n} \times (1-x)^{n-m-1} $
If $m >n$ clearly the constant term is 0, if $m < n$ then writing the above as $(-1)^{m+n} \frac{(1-x)^{n-m-1}}{x^{n-m}}$ and noting the maximum exponent of $x$ in numerator is $n-m-1$ we again see the constant term is 0. If $n=m$ then the constant term is clearly 1.
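The identity is also easy to verify numerically for small $m, n$ (a quick sanity check, not part of the proof):

```python
from math import comb

def s(m, n):
    # sum_{k=m}^{n} (-1)^(k-m) C(k,m) C(n,k); empty sum when m > n
    return sum((-1) ** (k - m) * comb(k, m) * comb(n, k) for k in range(m, n + 1))

for m in range(8):
    for n in range(8):
        assert s(m, n) == (1 if m == n else 0)  # Kronecker delta
```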
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 7,
"answer_id": 2
} |
Can we slice an object into two pieces similar to the original? I suspect it is impossible to split a (any) 3d solid into two, such that each of the pieces is identical in shape (but not volume) to the original. How can I prove this?
| You can certainly take a rectangular box, $2^{1/3} \times 2^{2/3} \times 2$ and slice it into two boxes of size $1 \times 2^{1/3} \times 2^{2/3}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Are all algebraic integers with absolute value 1 roots of unity? If we have an algebraic number $\alpha$ with (complex) absolute value $1$, it does not follow that $\alpha$ is a root of unity (i.e., that $\alpha^n = 1$ for some $n$). For example, $(3/5 + 4/5 i)$ is not a root of unity.
But if we assume that $\alpha$ is an algebraic integer with absolute value $1$, does it follow that $\alpha$ is a root of unity?
I know that if all conjugates of $\alpha$ have absolute value $1$, then $\alpha$ is a root of unity by the argument below:
The minimal polynomial of $\alpha$ over $\mathbb{Z}$ is $\prod_{i=1}^d (x-\alpha_i)$, where the $\alpha_i$ are just the conjugates of $\alpha$. Then $\prod_{i=1}^d (x-\alpha_i^n)$ is a polynomial over $\mathbb{Z}$ with $\alpha^n$ as a root. It also has degree $d$, and all roots have absolute value $1$. But there can only be finitely many such polynomials (since the coefficients are integers with bounded size), so we get that $\alpha^n=\sigma(\alpha)$ for some Galois conjugation $\sigma$. If $\sigma^m(\alpha) = \alpha$, then $\alpha^{n^m} = \alpha$.
Thus $\alpha^{n^m - 1} = 1$.
| Let $x$ be an algebraic number with absolute value $1$. Then $x$ and its complex conjugate $\overline{x} = 1/x$ have the same minimal polynomial. Writing $f(T)$ for the minimal polynomial of $x$ over $\mathbb{Q}$, with degree $n$, the polynomials $T^nf(1/T)$ and $f(T)$ are irreducible over $\mathbb{Q}$ with root $\overline{x}$, so the polynomials are equal up to a scaling factor: $$T^nf(1/T) = cf(T).$$ Setting $T = 1$, $f(1) = cf(1)$.
Assuming $x$ is not rational (i.e., $x$ is not $1$ or $-1$), $f$ has degree greater than $1$, so $f(1)$ is nonzero and thus $c = 1$. Therefore $$T^nf(1/T) = f(T),$$ so $f(T)$ has symmetric coefficients. In particular, its constant term is $1$. Moreover, the roots of $f(T)$ come in reciprocal pairs (since $1$ and $-1$ are not roots), so $n$ is even.
Partial conclusion: an algebraic number other than $1$ or $-1$ which has absolute value $1$ has even degree over $\mathbb{Q}$ and its minimal polynomial has constant term $1$. In particular, if $x$ is an algebraic integer then it must be a unit.
There are no examples of algebraic integers with degree $2$ and absolute value $1$ that are not roots of unity, since a real quadratic field has no elements on the unit circle besides $1$ and $-1$ and the units in an imaginary quadratic field are all roots of unity (and actually are only $1$ and $-1$ except for $\mathbb{Q}(i)$ and $\mathbb{Q}(\omega)$). Thus the smallest degree $x$ could have over $\mathbb{Q}$ is $4$ and there are examples with degree $4$: the polynomial $$x^4 - 2x^3 - 2x + 1$$ has two roots on the unit circle and two real roots (one between $0$ and $1$ and the other greater than $1$).
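The claims about the quartic $x^4 - 2x^3 - 2x + 1$ can be confirmed numerically (an illustrative check, not part of the argument):

```python
import numpy as np

roots = np.roots([1, -2, 0, -2, 1])       # x^4 - 2x^3 - 2x + 1
unit = [r for r in roots if abs(abs(r) - 1) < 1e-8]
real = sorted(r.real for r in roots if abs(r.imag) < 1e-8)

assert len(unit) == 2                      # two roots on the unit circle
assert 0 < real[0] < 1 < real[1]           # two real roots as described
assert abs(real[0] * real[1] - 1) < 1e-8   # and they are reciprocal
```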
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114",
"answer_count": 7,
"answer_id": 1
} |
Solving (quadratic) equations of iterated functions, such as $f(f(x))=f(x)+x$ In this thread, the question was to find a $f: \mathbb{R} \to \mathbb{R}$ such that
$$f(f(x)) = f(x) + x$$
(which was revealed in the comments to be solved by $f(x) = \varphi x$ where $\varphi$ is the golden ratio $\frac{1+\sqrt{5}}{2}$).
Having read about iterated functions shortly before though, I came up with this train of thought:
$$f(f(x)) = f(x) + x$$
$$\Leftrightarrow f^2 = f^1 + f^0$$
$$\Leftrightarrow f^2 - f - f^0 = 0$$
where $f^n$ denotes the $n$'th iterate of $f$.
Now I solved the resulting quadratic equation much as I did with plain numbers
$$f = \frac{1}{2} \pm \sqrt{\frac{1}{4} + 1}$$
$$f = \frac{1 \pm \sqrt{1+4}}{2} = \frac{1 \pm \sqrt{5}}{2}\cdot f^0$$
And finally the solution
$$f(x) = \frac{1 \pm \sqrt{5}}{2} x .$$
Now my question is: **Is it somehow allowed to work with functions in that way?** I know that in the above there are notational ambiguities, as $1$ is actually treated as $f^0 = \mathrm{id}$ ... But since the result is correct, there seems to be something correct in this approach.
So can I actually solve certain functional equations like this? And if true, how would the correct notation of the above be?
| Whether or not $f$ is linear, the equation can in fact be rewritten as $F^2 = F + 1$, in a suitable interpretation where to a function $f$ is associated a linear operator $F$.
Let $V$ be the vector space of sums $aX + bf(X)$. Denote by $F$ the operator that takes an element of $V$ to its composition with $f$, that is, $F (h(x)) = h(f(x))$. Let $1$ denote the identity operator on $V$.
Then $F$ is linear (from the definition), and satisfies $F^2=F+1$ as one can check on basis elements.
I have been temporarily but deliberately unclear as to whether $V$ is the 2-dimensional space of formal sums with basis vectors $x$ and $f(x)$, or the at most 2-dimensional space of functions of that form. The question of whether $f$ is linear is the same as asking whether the second space is one-dimensional, which is no longer an algebraic question but one of analysis using regularity assumptions on $f$. (In particular, seeing $f$ as an operator or 2x2 matrix $F$, illuminates but does not quite trivialize the problem of showing that linear functions are the only continuous solutions to the functional equation. Additional arguments are needed.) The equation $F^2 = F + 1$ holds in both interpretations.
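In the $2$-dimensional interpretation, with basis $(x, f(x))$, the operator $F$ has an explicit $2\times 2$ matrix, and one can check $F^2 = F + 1$ directly and recover the golden-ratio slopes as its eigenvalues (a small numerical sketch):

```python
import numpy as np

# matrix of F in the basis (x, f(x)): F(x) = f(x) and F(f(x)) = x + f(x)
F = np.array([[0, 1],
              [1, 1]])
I = np.eye(2, dtype=int)

assert (F @ F == F + I).all()             # the operator identity F^2 = F + 1

# its eigenvalues are the two admissible slopes of a linear solution
eigs = sorted(np.linalg.eigvals(F).real)
assert abs(eigs[0] - (1 - 5 ** 0.5) / 2) < 1e-9
assert abs(eigs[1] - (1 + 5 ** 0.5) / 2) < 1e-9
```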
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 3
} |
Boy Born on a Tuesday - is it just a language trick? The following probability question appeared in an earlier thread:
I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
The claim was that it is not actually a mathematical problem and it is only a language problem.
If one wanted to restate this problem formally the obvious way would be like so:
Definition: Sex is defined as an element of the set $\{\text{boy},\text{girl}\}$.
Definition: Birthday is defined as an element of the set $\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\}$
Definition: A Child is defined to be an ordered pair: (sex $\times$ birthday).
Let $(x,y)$ be a pair of children,
Define an auxiliary predicate $H(s,b) :\!\!\iff s = \text{boy} \text{ and } b = \text{Tuesday}$.
Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$
I don't see any other sensible way to formalize this question.
To actually solve this problem now requires no thought (infact it is thinking which leads us to guess incorrect answers), we just compute
$$
\begin{align*}
& P(x \text{ is a boy and } y \text{ is a boy}\mid H(x) \text{ or } H(y)) \\
=& \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))}
{P(H(x)\text{ or }H(y))} \\
=& \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))}
{P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\
=& \frac{\begin{aligned} &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\
+ &P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born on Tuesday}) \\
- &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday})
\end{aligned}}
{P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\
=& \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7}
{1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\
=& 13/27
\end{align*}
$$
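The value $13/27$ can also be confirmed by direct enumeration of the $14 \times 14$ equally likely pairs of (sex, weekday) types, under the same uniformity and independence assumptions as the computation above:

```python
from itertools import product
from fractions import Fraction

# the 14 equally likely (sex, weekday) types; weekday 1 stands for Tuesday
children = list(product(["boy", "girl"], range(7)))
tuesday_boy = ("boy", 1)

pairs = list(product(children, repeat=2))           # 196 equally likely families
cond = [(x, y) for x, y in pairs if tuesday_boy in (x, y)]
both_boys = [(x, y) for x, y in cond if x[0] == "boy" and y[0] == "boy"]

p = Fraction(len(both_boys), len(cond))
assert (len(cond), len(both_boys)) == (27, 13)      # 14^2-13^2 = 27, 7^2-6^2 = 13
assert p == Fraction(13, 27)
```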
Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the questions which I just missed?
| Well, given the unstated assumption that the writer is a mathematician and therefore not using regular English, I agree with the 13/27 answer.
But in everyday English, from "there are two fleems, one is a glarp" we all infer that the other is not a glarp.
From "there are two fleems, one is a glarp, which is snibble" we would still infer that the other is not a glarp. Whereas from "there are two fleems, one is a glarp which is snibble" (absence of comma, or when spoken, difference in intonation) we would infer that the other is not a snibble glarp, but it could still be an unsnibble glarp.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 11,
"answer_id": 2
} |
Parallel transport of a vector along two distinct curves Let $\mathcal{M}$ be an n-dimensional manifold endowed with an affine connection $\nabla$. Let $\gamma_1:[a,b]\rightarrow M$ and $\gamma_2:[c,d]\rightarrow \mathcal{M}$ be two curves with the same initial and final points, that is,
$p=\gamma_1(a)=\gamma_2(c), q=\gamma_1(b)=\gamma_2(d)$. Take $X\in T_p\mathcal{M}$. Parallelly propagating $X$ along $\gamma_1$ and $\gamma_2$ we obtain two vectors $X_1, X_2\in T_q\mathcal{M}$, respectively. Let $R$ be the curvature tensor of the connection, $R(X,Y)Z=\nabla_X\nabla_Y Z - \nabla_Y\nabla_X Z -\nabla_{[X,Y]}Z$, and $\tau$ its torsion, $\tau(X,Y)=\nabla_X Y-\nabla_Y X -[X,Y]$.
The question is: How can I compare the two vectors $X_1$ and $X_2$? Can I write the difference $(X_2-X_1)$ in terms of $R,\tau$ and the curves?
| If your two paths are homotopic you can make a comparison between your two vectors. And yes the comparison involves an integral over the homotopy of a function of curvature.
See for example Theorem 13.6.4 in Pressley's "Elementary Differential Geometry" (Google books will bring up the statement of the theorem).
If your paths are not homotopic you're out of luck, as T describes. There are things you can say of course but it's not clear what you're looking for. You should think of an example of a Riemann manifold you're interested in, to get a sense for how bad it can get.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
How can I prove this trigonometric equality? $$\arccos\left(\frac{1-x^2}{1+x^2}\right) = 2\arctan{x}$$ for $x \geq 0$.
I'm not even sure what kind of math to try? :(
| Put $t=\arctan x$. Then $x=\tan t$.
You have to prove that $2t=\arccos\left(\frac{1-x^2}{1+x^2}\right)$.
This is more-or-less the same as $\frac{1-x^2}{1+x^2}=\cos2t$. If you put
$x=\tan t$ into $\frac{1-x^2}{1+x^2}$, what do you get?
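For a quick numerical sanity check of the identity (with $x \geq 0$, as in the question):

```python
import math

# arccos((1-x^2)/(1+x^2)) should equal 2*arctan(x) for all x >= 0
for x in [0.0, 0.1, 0.5, 1.0, 2.0, 10.0]:
    lhs = math.acos((1 - x**2) / (1 + x**2))
    rhs = 2 * math.atan(x)
    assert abs(lhs - rhs) < 1e-12
```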
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
proof by contradiction: a composite $c$ has a nontrivial factor $\le \sqrt c$
Let $c$ be a positive integer that is not prime. Show that there is some positive integer $b$ such that $b \mid c$ and $b \leq \sqrt{c}$.
I know this can be proved by contradiction, but I'm not sure how to approach it. Usually I write the proof in the form $P \rightarrow Q$, and then if we can prove $P \land \neg Q$ is false, $P \rightarrow Q$ must be true.
In this case, I wrote it as:
If $c$ is a composite, positive integer, then $b \mid c$ and $b \leq \sqrt{c}$, for some positive integer $b$.
I'm guessing that as long as I assume that $b \nmid c$ or $b > \sqrt{c}$, then this is still valid as $\neg Q$; that is, I don't have to assume the negation of both parts of $Q$?
Moving on, if $b > \sqrt{c}$, and $b \mid c$, then $br=c$ for some integer $r$, which means $r < \sqrt{c}$.
And this is where I get stuck.
| I: Since $c$ is composite, there exists by definition some divisor $b$ of $c$ with $1 < b < c$.
Now assume - as II - that no such divisor satisfies $b \leq \sqrt c$.
Since there has to be some divisor $b$ due to I, this $b$ now has to satisfy $b > \sqrt c$.
Now $z = \frac{c}{b}$ is an integer that divides $c$ too, with $1 < z < c$ (because $1 < b < c$), and furthermore $z = \frac{c}{b} < \frac{c}{\sqrt c} = \sqrt c$. So $z$ is exactly the kind of divisor that II denies. Therefore assuming II leads to a contradiction, q.e.d.
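The constructive content of the proof — every composite $c$ has a nontrivial divisor $b \leq \sqrt c$ — can be checked directly. (An illustrative sketch; `small_factor` is a name chosen here, not from the original answer.)

```python
import math

def small_factor(c):
    """Return a nontrivial divisor b of c with b <= sqrt(c), or None if c is prime."""
    for b in range(2, math.isqrt(c) + 1):
        if c % b == 0:
            return b
    return None

for c in [4, 15, 91, 143, 10403]:        # composites: 2*2, 3*5, 7*13, 11*13, 101*103
    b = small_factor(c)
    assert b is not None and c % b == 0 and b * b <= c
```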
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
If $(a_n)\subset[0,\infty)$ is non-increasing and $\sum_{n=1}^\infty a_n<\infty$, then $\lim_{n\to\infty}{n a_n} = 0$ I'm studying for qualifying exams and ran into this problem.
Show that if $\{a_n\}$ is a nonincreasing sequence of positive real
numbers such that $\sum_n a_n$ converges, then $\lim_{n \rightarrow \infty} n a_n = 0$.
Using the definition of the limit, this is equivalent to showing
\begin{equation}
\forall \varepsilon > 0 \; \exists n_0 \text{ such that }
|n a_n| < \varepsilon \; \forall n > n_0
\end{equation}
or
\begin{equation}
\forall \varepsilon > 0 \; \exists n_0 \text{ such that }
a_n < \frac{\varepsilon}{n} \; \forall n > n_0
\end{equation}
Basically, the terms must be bounded by the harmonic series. Thanks, I'm really stuck on this seemingly simple problem!
| By the Cauchy condensation test, $\displaystyle \sum 2^n a_{2^n} $ converges so $ 2^n a_{2^n} \to 0. $ For $ 2^n < k < 2^{n+1} $,
$$ 2^n a_{2^{n+1}} \leq k a_{k} \leq 2^{n+1} a_{2^n}$$
so $n a_n \to 0.$
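It may also help to see why the monotonicity hypothesis cannot be dropped. The counterexample sketched below (not from the original answer) takes $a_n = 1/n$ at perfect squares and $2^{-n}$ elsewhere: the series converges, yet $n a_n = 1$ infinitely often.

```python
import math

def a(n):
    # 1/n at perfect squares, 2^-n otherwise: summable but not monotone
    r = math.isqrt(n)
    return 1.0 / n if r * r == n else 2.0 ** -n

total = sum(a(n) for n in range(1, 2001))
assert total < 3                          # partial sums stay bounded (series converges)
assert abs(1936 * a(1936) - 1.0) < 1e-12  # n*a_n = 1 at n = 44^2, so n*a_n does not -> 0
```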
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105",
"answer_count": 16,
"answer_id": 0
} |
How do I find a function from a differential equation? Hey, I'm looking for a guide on how to find $Q$ given the following, where $a$ and $b$ are constants:
\begin{equation}
\frac{dQ}{dt} = \frac{a + Q}{b}
\end{equation}
I have the answer and working for specific case I'm trying to solve but do not understand the steps involved. A guide on how I can solve this, with an explanation of each step would be much appreciated.
| Edit: Your particular differential equation can be solved without any calculation if you know the solutions of $$\frac{dQ}{dt} = Q$$, i.e. $C\exp(t), C\in\mathbb{R}$.
Note that if $f$ is a solution to $\frac{dQ}{dt} = \frac 1b Q$, then $f-a$ is a solution to your equation, so it suffices to consider the case $a=0$.
The chain rule can be used to see that if $f$ solves $\frac{dQ}{dt} = Q$, then $f(t/b)$ solves $\frac{dQ}{dt} = \frac 1b Q$.
Therefore all your solutions are of the form $C\exp(t/b) - a, C\in\mathbb{R}$.
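A quick numerical check of this solution family, using arbitrary sample values for $a$, $b$, and the free constant $C$:

```python
import math

a, b, C = 2.0, 3.0, 1.5   # sample constants; C is the free constant of integration

def Q(t):
    return C * math.exp(t / b) - a

# verify dQ/dt = (a + Q)/b at a few points via a central difference
h = 1e-6
for t in [-1.0, 0.0, 2.0]:
    lhs = (Q(t + h) - Q(t - h)) / (2 * h)
    rhs = (a + Q(t)) / b
    assert abs(lhs - rhs) < 1e-6
```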
Old Post:
I think these differential equations can be solved by "Separation of Variables". See the Wikipedia for a guide. It's got examples! =)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Simplification of expressions containing radicals As an example, consider the polynomial $f(x) = x^3 + x - 2 = (x - 1)(x^2 + x + 2)$ which clearly has a root $x = 1$.
But we can also find the roots using Cardano's method, which leads to
$$x = \sqrt[3]{\sqrt{28/27} + 1} - \sqrt[3]{\sqrt{28/27} - 1}$$
and two other roots.
It's easy to check numerically that this expression is really equal to $1$, but is there a way to derive it algebraically which isn't equivalent to showing that this expression satisfies $f(x) = 0$?
| Pardon my skepticism, but has anyone so much as breadboarded Blömer '92 or Landau '93 in all these 18 years? For lack of same, people still publish ugly surdballs, e.g.,
$$\vartheta _3\left(0,e^{-6 \pi }\right)=\frac{\sqrt[3]{-4+3 \sqrt{2}+3 \sqrt[4]{3}+2 \sqrt{3}-3^{3/4}+2 \sqrt{2}\, 3^{3/4}} \sqrt[4]{\pi }}{2\
3^{3/8} \sqrt[6]{\left(\sqrt{2}-1\right) \left(\sqrt{3}-1\right)} \Gamma \left(\frac{3}{4}\right)}$$
(J. Yi / J. Math. Anal. Appl. 292 (2004) 381–400, Thm 5.5 vi) instead of
$$\vartheta _3\left(0,e^{-6 \pi }\right)=\frac{\sqrt{2+\sqrt{2}+\sqrt{2} \sqrt[4]{3}+\sqrt{6}} \,\sqrt[4]{\pi }}{2\ 3^{3/8} \Gamma
\left(\frac{3}{4}\right)}\quad .$$
And why do both papers trot out the same old Ramanujan denestings instead of new and interesting ones? E.g.,
$$\sqrt{2^{6/7}-1}=\frac{2^{8/7}-2^{6/7}+2^{5/7}+2^{3/7}-1}{\sqrt{7}}$$
or
$$\sqrt[3]{3^{3/5}-\sqrt[5]{2}}=\frac{2^{2/5}+\sqrt[5]{3}+2^{3/5} 3^{2/5}-\sqrt[5]{2}\, 3^{3/5}}{5^{2/3}}$$
or
$$\frac{\sqrt[3]{1+\sqrt{3}+\sqrt{2}\, 3^{3/4}}}{\sqrt[6]{\sqrt{3}-1}}=\frac{\sqrt{1+\sqrt{3}+\sqrt{2} \sqrt[4]{3}}}{\sqrt[6]{2}}\quad ?$$
These results were found by two young students of mine who would very much like to know values of q and b in Bill Dubuque's structure theorem which effect the denesting
$$\sqrt[3]{-\frac{106}{25}-\frac{369 \sqrt{3}}{125}+\frac{3 \sqrt{3} \left(388+268 \sqrt{3}\right)}{100 \sqrt[3]{2}\,
5^{2/3}}}=\frac{3}{5^{2/3}}-\frac{1+\sqrt{3}}{\sqrt[3]{10}}+\frac{1}{5} \sqrt[3]{2} \left(3+2 \sqrt{3}\right)\quad.$$
Thanks in advance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 1
} |
Deriving the probability of an event from limited trial data I have a stochastic event (say a weighted coin toss) that produces a positive outcome (heads) according to some unknown probability $P$.
Given $N$ total events (coin tosses) and $n$ positive outcomes (heads), how can I measure the likelihood that $\frac nN$ is a good approximation of $P$?
Obviously, as $N$ grows, it becomes more likely that $\frac nN$ is close to the value $P$, but how can this convergence be described mathematically?
| While I realise this isn't a full answer, the convergence you discuss is guaranteed by the weak and strong laws of large numbers. You might also be interested in Chebyshev's inequality, which says $$\mathbb{P}(|n/N - p| > \alpha) \leq \frac{\sigma^2}{N\alpha^2}$$ where $\sigma^2 = p(1-p)$ is the variance of a single trial, so that $\sigma^2/N$ is the variance of your estimator $n/N$. Finally, the central limit theorem (CLT) shows that as $N \to \infty$ the estimator becomes approximately normally distributed about $p$ with variance $\frac{\sigma^2}{N}$.
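A small simulation (illustrative; the parameter values are arbitrary) shows this concentration of $n/N$ around $p$ as $N$ grows:

```python
import random

random.seed(0)
p, alpha = 0.3, 0.05

def miss_rate(N, trials=1000):
    """Fraction of repeated experiments in which |n/N - p| exceeds alpha."""
    misses = 0
    for _ in range(trials):
        n = sum(random.random() < p for _ in range(N))
        if abs(n / N - p) > alpha:
            misses += 1
    return misses / trials

small, large = miss_rate(50), miss_rate(2000)
assert small > large   # the estimate n/N concentrates around p as N grows
```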
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
every permutation is either even or odd, but not both How can we show that every permutation is either even or odd, but not both? I can't arrive at a proof for this. Can anybody give me the proof?
Thanks in advance...
| It is enough to show that the product of an odd number of transpositions cannot be the identity.
Every permutation of a finite set $S$ is a unique product of disjoint cycles in which every element of $S$ occurs exactly once (where we include fixed points as 1-cycles). Let $p$ be any permutation of $S$, let $(ij)$ be a transposition ($i,j \in S$), and let $q=p \cdot (ij)$. It is easy to check that if $i$ and $j$ are in the same cycle in $p$, then that cycle splits into two in $q$; if $i$ and $j$ are in different cycles in $p$, then those cycles merge into one in $q$. Cycles of $p$ not containing either $i$ or $j$ remain the same in $q$. Therefore, $q$ has either one more or one less cycle than $p$ does.
Now let $t$ be any product of an odd number of transpositions. Then by the above, multiplying any permutation by $t$ changes the parity of the number of cycles in the permutation. Therefore $t$ cannot be the identity.
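The key step — multiplying by a transposition changes the cycle count by exactly one — can be checked mechanically (a sketch in Python; permutations act on $\{0,\dots,n-1\}$ and are stored as lists):

```python
import random

def cycle_count(p):
    """Number of cycles of the permutation p, fixed points included."""
    seen, count = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = p[j]
    return count

random.seed(1)
for _ in range(200):
    p = list(range(8))
    random.shuffle(p)
    i, j = random.sample(range(8), 2)
    q = list(p)
    q[i], q[j] = q[j], q[i]          # q = p composed with the transposition (i j)
    assert abs(cycle_count(q) - cycle_count(p)) == 1
```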
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Why does a circle enclose the largest area? In this Wikipedia article http://en.wikipedia.org/wiki/Circle#Area_enclosed it is stated that the circle is the closed curve which has the maximum area for a given arc length. First of all, I would like to see different proofs of this result. (If there are any elementary ones!)
One interesting question that arises when seeing this problem is: how does one propose such a problem? Does one take all closed curves and calculate their areas to come to this conclusion? I don't think that's the right intuition.
| First you can propose this problem as: the area $A$ enclosed by any simple closed rectifiable curve $C$ of length $L$ satisfies the inequality $A\leq \frac{L^2}{4\pi}$, and equality occurs if and only if $C$ is a circle.
The only proof I have done of this uses Parseval's identity (and therefore Fourier series), so it's not elementary (but it's rather simple if you know the aforementioned identity). Though if you want, I can post that proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 10,
"answer_id": 4
} |
Inverse of an invertible triangular matrix (either upper or lower) is triangular of the same kind How can we prove that the inverse of an upper (lower) triangular matrix is upper (lower) triangular?
| The proof is based on the Cayley–Hamilton theorem.
Suppose $A$ is an invertible upper triangular matrix with diagonal entries $\lambda_1,\dots,\lambda_n$ (its eigenvalues). Its characteristic polynomial is $P_A(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_n) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0$, where $c_0 = \prod_i(-\lambda_i) \neq 0$ since $A$ is invertible. By the Cayley–Hamilton theorem $P_A(A)=0$, so $$I = A\cdot\left(-\frac{1}{c_0}\left(A^{n-1} + c_{n-1}A^{n-2} + \cdots + c_1 I\right)\right).$$ Comparing with $I=AA^{-1}$, we see that $A^{-1}$ is a polynomial in $A$. Since sums and products of upper triangular matrices are again upper triangular, $A^{-1}$ is also an upper triangular matrix.
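The construction can be checked numerically on a concrete upper triangular matrix (a sketch; the matrix entries are arbitrary):

```python
import numpy as np

# an arbitrary invertible upper triangular matrix
A = np.array([[2.0, 1.0, 3.0, -1.0],
              [0.0, 1.0, 4.0,  2.0],
              [0.0, 0.0, 3.0,  5.0],
              [0.0, 0.0, 0.0,  2.0]])
n = 4

# characteristic polynomial coefficients, with c[k] multiplying lambda^k
c = np.poly(A)[::-1]

# Cayley-Hamilton: sum_k c[k] A^k = 0 and c[0] != 0, hence
# A^{-1} = -(c[1] I + c[2] A + ... + c[n] A^{n-1}) / c[0], a polynomial in A
inv = -sum(c[k] * np.linalg.matrix_power(A, k - 1) for k in range(1, n + 1)) / c[0]

assert np.allclose(inv @ A, np.eye(n))
assert np.allclose(np.tril(inv, -1), 0.0)  # the inverse is again upper triangular
```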
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 9,
"answer_id": 7
} |
Network of streets Say we have an undirected, connected graph that represents a network of streets. How would you prove that there always exists a tour of the network where one "drives" in both directions on every street exactly once?
| This is essentially the same as Isaac’s comment on Moron’s answer, but I’ll post it as an answer.
I assume that by “one ‘drives’ in both directions on every street exactly once,” you mean “one ‘drives’ in each direction on every street exactly once.”
A directed graph has an Eulerian circuit (i.e. a circuit which uses every edge exactly once) if and only if it is strongly connected and each vertex has equal in-degree and out-degree. This fact can be proved in the same way as the well-known fact that an undirected graph has an Eulerian circuit if and only if it is connected and each vertex has even degree. See a textbook on graph theory for a proof.
Back to your question, let G be the connected undirected graph representing the network of streets. You are looking for an Eulerian circuit (or an Eulerian path if you do not require the tour to end at the starting point) in the directed graph G′ obtained by replacing each edge in G by a pair of directed edges in both directions. It is easy to see that the directed graph G′ satisfies the two conditions above, and therefore G′ has an Eulerian circuit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is wrong with my reasoning? The Questions
$70\%$ of all vehicles pass inspection. Assuming vehicles pass or fail independently. What is the probability:
a) exactly one of the next $3$ vehicles passes
b) at most $1$ of the next $3$ vehicles passes
The answer to a) is $.189.$ The way I calculated it was:
$P(\text{success}) \cdot P(\text{fail}) \cdot P(\text{fail})\\
+ P(\text{fail}) \cdot P(\text{success}) \cdot P(\text{fail})\\
+ P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{success})\\
=.7\cdot.3\cdot.3 + .3\cdot.7\cdot.3 + .3\cdot.3\cdot.7\\
= .189$
I summed the $3$ possible permutations of $1$ success and $2$ failures.
For b) the answer is $.216.$ To get that answer you take your answer to a) and add the probability of exactly $0$ successes, which is $P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail}) = .3\cdot.3\cdot.3 = .027,$ giving $.189 + .027 = .216$
What I don't understand is why the probability of exactly $0$ successes doesn't follow the pattern of exactly $1$ success. Why doesn't the "formula" work:
$P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\
+ P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\
+ P(\text{fail}) \cdot P(\text{fail}) \cdot P(\text{fail})\\
= .3\cdot.3\cdot.3+.3\cdot.3\cdot.3+.3\cdot.3\cdot.3\\
= .081$
$\Rightarrow .189 + .081 = .27$ (not $.216$)
Now I'm wondering if I calculated the answer to a) the wrong way, and it was merely a coincidence that I got the right answer!
| I'm not too confident with my probability, but I'll try my hand at an explanation.
In the first part, you are considering the next 3 trials as successive events so you can have a success in either the first spot, second spot, or third spot.
In part b, you require all 3 events to be failures so there is only 1 way that the 3 failures can occur in the next 3 events.
Does that make any sense? I never felt probability was my strong point :)
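Concretely, the binomial coefficient $\binom{3}{k}$ counts the orderings, which is why exactly one success contributes $3$ terms but zero successes only $1$:

```python
from math import comb

p = 0.7

def exactly(k):
    """P(exactly k of the next 3 vehicles pass): C(3,k) orderings, each p^k (1-p)^(3-k)."""
    return comb(3, k) * p ** k * (1 - p) ** (3 - k)

assert abs(exactly(1) - 0.189) < 1e-12               # part a): 3 orderings of one pass
assert abs(exactly(0) - 0.027) < 1e-12               # zero passes: only 1 ordering
assert abs(exactly(0) + exactly(1) - 0.216) < 1e-12  # part b)
```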
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Applications of Probability Theory in pure mathematics My (maybe wrong) impression is that while probability is widely used in science (for example, in statistical mechanics), it is rarely seen in pure mathematics. Which leads me to the question -
Are there some interesting application of Probability Theory in pure mathematics, outside Probability Theory itself?
| In addition to examples, the overall "gestalt" answer to the question is: probability is used everywhere in mathematics. It is a basic idea on the level of algorithm, algebraic structure, geometry, calculus, or other very ubiquitous things. It has become a very popular source of questions and intuitions in research, in all fields. Knowing that this is true, it is not surprising that many examples of theoretical uses of probability can be posted.
Being a basic language, it is also true that many of the uses of probability are basic, and do not go far beyond the idea of a probability distribution, frequencies of events, expectations, related combinatorics and so on. But in some fields, advanced results in probability theory are constantly being used.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 7,
"answer_id": 6
} |
What are we solving with row reduction? Why are row reductions so useful in linear algebra? It is easy to get lost in the mechanical solving of equations. I know I can get a matrix into reduced row echelon form. But what are the outcomes of this? What can this mean?
| The main point of row operations is that they do not change the solution set of the underlying linear system. So when you take a system of linear equations, write down its (augmented) coefficient matrix, and row reduce that matrix, you get a new system of equations that has the same solutions as the original system.
When the coefficient matrix is in reduced row echelon form, it is particularly easy to understand the solution set of the linear system. In particular, you can readily tell whether the system is consistent or inconsistent, and when the system is consistent, you can readily see how a choice for the values of the free (non-pivotal) variables leads to a solution.
I recommend David Lay's book as a textbook for this material.
One thing that sometimes gets lost in the process of Gaussian elimination (the usual algorithm for putting a matrix into reduced row echelon form) is just how similar this process is to the approach, often taught in high school algebra, of solving for one variable in a particular equation and then eliminating its occurrences from all the other equations. When you produce a pivot in a row of the coefficient matrix, this corresponds to solving the equation for that variable. When you add or subtract (multiples of) this pivotal row from the other rows to produce zeros in the pivotal column, the new rows represent the equations you get by substituting in for the pivotal variable. In a sense, Gaussian elimination is just an excellent way to keep track of the process of solving for variables and then eliminating them from the other equations.
Here's an example. We start with the equations
$3x + 6y + 3z = 3$
$2x + y + 7z = 6$.
The corresponding matrix is
$\left[\begin{array}{rrr|r} 3 & 6 & 3 & 3\\
2 & 1 & 7 & 6\end{array}\right]$.
Dividing the first row/equation by 3 gives you the equation $x + 2y + z = 1$, which you can think of as $x = 1 - 2y - z$. You can plug that expression into the second equation to get
$2(1-2y -z) + y + 7z = 6$, which simplifies to $-3y + 5z = 4$. Notice that when you row reduce by dividing the first row by 3 and then subtracting twice row 1 from row 2, you get the same thing:
$\left[\begin{array}{rrr|r} 3 & 6 & 3 & 3\\
2 & 1 & 7 & 6\end{array}\right] \to \left[\begin{array}{rrr|r} 1 & 2 & 1 & 1\\
2 & 1 & 7 & 6 \end{array}\right] \to \left[\begin{array}{rrr|r} 1 & 2 & 1 & 1\\
0 & -3 & 5 & 4\end{array}\right].$
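For readers who want to experiment, the reduction above can be reproduced with a computer algebra system. A small sketch using SymPy (my own illustration, not part of the original answer):

```python
from sympy import Matrix

# Augmented matrix of the system 3x + 6y + 3z = 3, 2x + y + 7z = 6 above.
M = Matrix([[3, 6, 3, 3],
            [2, 1, 7, 6]])

rref_M, pivots = M.rref()
print(rref_M)   # reduced row echelon form
print(pivots)   # columns containing pivots: x and y are pivotal, z is free
```

From the RREF one reads off directly that $z$ is free and $x, y$ are determined by it, which is the "readily see how a choice for the free variables leads to a solution" point made above.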
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
Relationships between Symmetric Groups Here is a question that I can't find the answer to in my notes or my textbook.
Is there any relation between $\text{Sym}_n$ and $\text{Sym}_k$ where $k < n$? Is $\text{Sym}_k \subset \text{Sym}_n$? Since $k! \mid n!\ \forall k < n$, they satisfy Lagrange's theorem, but of course that doesn't guarantee the existence of a subgroup.
I'm trying to use this result to show that that there is a transitive action of a finite group $G$ on a smaller finite set $S$ by showing that $G$ or a subgroup of $G$ is isomorphic to $\Sigma (S)$ (The permutation group of $S$) which has a natural transitive action.
| $S_n$ is not contained in $S_m$ when $n < m$ if you define $S_k$ to be the set of all bijections $\{1,\dots,k\}\to\{1,\dots,k\}$, simply because an element of $S_n$ has domain $\{1,\dots,n\}$ and elements of $S_m$ have domain $\{1,\dots,m\}$, and the two sets are different.
Now this is a silly problem, so surely there is some way out...
Suppose again that $n < m$. There is a map $\phi:S_n\to S_m$ such that whenever $\pi\in S_n$, so that $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$ is a bijection, then $\phi(\pi):\{1,\dots,m\}\to\{1,\dots,m\}$ is the map given by $$\phi(\pi)(i)=\begin{cases}\pi(i),&\text{if $i\leq n$;}\\ i,&\text{if $i > n$.}\end{cases}$$ You can easily check that $\phi$ is well-defined (that is, that $\phi(\pi)$ is a bijection for all $\pi\in S_n$) and that moreover $\phi$ is a group homomorphism which is injective.
It follows from this that the image of $\phi$ is a subgroup of $S_m$ which is isomorphic to $S_n$. It is usual to identify $S_n$ with its image $\phi(S_n)\subseteq S_m$, and then we can say that «$S_n$ is a subgroup of $S_m$», but this is nothing but a façon de parler.
It should be noted, though, that there are many injective homomorphisms $S_n\to S_m$, so there are many ways to identify a subgroup of $S_m$ with $S_n$. The one I described above is nice because it looks very natural.
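As a concrete illustration of the embedding $\phi$ (my own sketch; permutations are represented 0-indexed as tuples rather than with the $\{1,\dots,n\}$ convention above):

```python
from itertools import permutations

def embed(pi, m):
    """Extend a permutation pi of {0,...,n-1} to {0,...,m-1},
    fixing the new points -- the map phi described above (0-indexed)."""
    n = len(pi)
    return tuple(pi) + tuple(range(n, m))

def compose(p, q):
    """Composition of permutations-as-tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

# phi is a homomorphism: phi(p o q) == phi(p) o phi(q), checked on all of S_3.
n, m = 3, 5
for p in permutations(range(n)):
    for q in permutations(range(n)):
        assert embed(compose(p, q), m) == compose(embed(p, m), embed(q, m))
print("phi is a homomorphism on S_3 -> S_5")
```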
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What structure does the alternating group preserve? A common way to define a group is as the group of structure-preserving transformations on some structured set. For example, the symmetric group on a set $X$ preserves no structure: or, in other words, it preserves only the structure of being a set. When $X$ is finite, what structure can the alternating group be said to preserve?
As a way of making the question precise, is there a natural definition of a category $C$ equipped with a faithful functor to $\text{FinSet}$ such that the skeleton of the underlying groupoid of $C$ is the groupoid with objects $X_n$ such that $\text{Aut}(X_n) \simeq A_n$?
Edit: I've been looking for a purely combinatorial answer, but upon reflection a geometric answer might be more appropriate. If someone can provide a convincing argument why a geometric answer is more natural than a combinatorial answer I will be happy to accept that answer (or Omar's answer).
| The polynomial $p\in K\left[X_1,...,X_n\right]$ given by $$p\left(X_1,...,X_n\right)=\prod_{i<j}\left(X_i-X_j\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57",
"answer_count": 8,
"answer_id": 3
} |
Vivid examples of vector spaces? When teaching abstract vector spaces for the first time, it is handy to have some really weird examples at hand, or even some really weird non-examples that may illustrate the concept. For example, a physicist friend of mine uses "color space" as a (non) example, with two different bases given essentially {red, green, blue} and {hue, saturation and brightness} (see http://en.wikipedia.org/wiki/Color_space). I say this is a non-example for a number of reasons, the most obvious being the absence of "negative color".
Anyhow, what are some bizarre and vivid examples of vector spaces you've come across that would be suitable for a first introduction?
| The vector space of all order $n$ magic squares ($n\times n$ matrices with real entries and all row and column and diagonal sums equal).
The reals as a vector space over the rationals. ${\bf Q}(\sqrt2)$ as a vector space over the rationals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 12,
"answer_id": 2
} |
Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$? A famous exercise which one encounters while doing Complex Analysis (Residue theory) is to prove that the given integral:
$$\int\limits_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$$
Well, can anyone prove this without using Residue theory? I actually thought of using the series representation of $\sin x$:
$$\int\limits_0^\infty \frac{\sin x} x \, dx = \lim\limits_{n \to \infty} \int\limits_0^n \frac{1}{t} \left( t - \frac{t^3}{3!} + \frac{t^5}{5!} + \cdots \right) \,\mathrm dt$$
but I don't see how $\pi$ comes here, since we need the answer to be equal to $\dfrac{\pi}{2}$.
| Let's consider the integrals
$$I_1(t)=\int_t^{\infty}\frac{\sin(x-t)}{x}dx\qquad\mbox{ and }\qquad
I_2(t)=\int_0^{\infty}\frac{e^{-tx}}{1+x^2}dx,\qquad t\geq 0.$$
A direct calculation shows that $I_1(t)$ and $I_2(t)$ satisfy the ordinary differential equation
$$y''+y=\frac{1}{t},\qquad t>0.$$
Therefore, the difference $I(t)=I_1(t)-I_2(t)$ satisfy the homogeneous differential equation
$$y''+y=0,\qquad t>0,$$
hence it should be of the form
$$I(t)=A\sin (t+B) $$
with some constants $A$, $B$. But $I_1(t)$ and $I_2(t)$ both converge to $0$ as $t\to\infty$. This implies that $A=0$ and $I_1(t)=I_2(t)$ for all $t\geq 0$. Finally, we have that
$$\int_0^{\infty}\frac{\sin x}{x}dx=\int_{0}^{\infty}\frac{1}{1+x^2}dx=\lim_{n\to\infty}\left(\arctan(n)\right)-\arctan(0)=\frac{\pi}{2}.$$
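Independently of the proof, the value can be confirmed symbolically. A one-line check with SymPy (my own addition; SymPy evaluates this improper integral in closed form):

```python
from sympy import sin, pi, oo, integrate, symbols

x = symbols('x')
val = integrate(sin(x) / x, (x, 0, oo))
print(val)   # pi/2
```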
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "265",
"answer_count": 32,
"answer_id": 16
} |
Determine whether a number is prime How do I determine if a number is prime? I'm writing a program where a user inputs any integer and from that the program determines whether the number is prime, but how do I go about that?
| How do I mathematically determine if a number is prime?
If the number is $n$, you can show it is prime by dividing it by every prime number less than or equal to $\sqrt{n}$ and checking that each division leaves a remainder.
There are a number of different sieve solutions for finding prime numbers, the oldest and most famous of which is the Sieve of Eratosthenes. These are generally easy to programme and can, for example, find all of the primes below 100,000 in a few milliseconds on a modern processor.
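A minimal sketch of both approaches in Python (my own illustration; the trial division below tests all odd candidates up to $\sqrt{n}$, a simpler sufficient variant of dividing by primes only):

```python
def is_prime(n: int) -> bool:
    """Trial division by 2 and the odd numbers up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0:
        return False
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def sieve(limit: int):
    """Sieve of Eratosthenes: all primes below `limit`."""
    flags = [True] * limit
    flags[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p::p] = [False] * len(flags[p * p::p])
    return [i for i, f in enumerate(flags) if f]

print(is_prime(97), is_prime(91))   # True False (91 = 7 * 13)
print(sieve(30))                    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

For a single user-supplied integer, `is_prime` is the direct answer; the sieve is the right tool when many primes are needed at once, as the answer notes.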
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 3
} |
Why do statements which appear elementary have complicated proofs? The motivation for this question is : Rationals of the form $\frac{p}{q}$ where $p,q$ are primes in $[a,b]$ and some other problems in Mathematics which looks as if they are elementary but their proofs are very much sophisticated.
I would like to consider two famous questions: First the "Fermat's Last Theorem" and next the unproven "Goldbach conjecture". These questions appear elementary in nature, but require a lot of Mathematics for even comprehending the solution. Even the problem, which I posed in the link is so elementary but I don't see anyone even giving a proof without using the prime number theorem.
Now the question is: Why is this happening? If I am able to understand the question, then I should be able to comprehend the solution as well. A Mathematician once quoted: Mathematics is the understanding of how nature works. Is nature so complicated that a common person can't understand as to how it works, or is it we are making it complicated.
At the same time, I appreciate the beauty of Mathematics also: Paul Erdős' proof of Bertrand's postulate is something which I admire so much because, of its elementary nature. But at the same time i have my skepticism about FLT and other theorems.
I have stated 2 examples of questions which appear elementary, but the proofs are intricate. I know some other problems, in number theory which are of this type. Are there any other problems of this type, which are not Number Theoretical? If yes, I would like to see some of them.
| There are elementary questions that have no elementary answers, other than questions in Number Theory and situations invoking Gödel incompleteness. I have in mind finding antiderivatives of elementary functions such as $e^{-x^2}$, $(\sin x)/x$, $1/(\log x)$, and many others. It is known that the antiderivatives of these functions cannot be expressed in closed form in terms of powers, exponentials, logarithms, trig functions, etc.
It's also known that the solutions of simple equations like $x+e^x=0$, $x=\cos x$, $x\log x=1$ and so on can't be expressed in closed form in terms of the standard elementary functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 7,
"answer_id": 5
} |
What's the generalisation of the quotient rule for higher derivatives? I know that the product rule is generalised by Leibniz's general rule and the chain rule by Faà di Bruno's formula, but what about the quotient rule? Is there a generalisation for it analogous to these? Wikipedia mentions both Leibniz's general rule and Faà di Bruno's formula for the product and the chain rule, but rather nothing for the quotient rule.
| As others have already said, you just apply the product rule to $f\cdot g^{-1}$. However, there is an American Mathematical Monthly article on how NOT to do it, which you may find instructive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 1
} |
Intuitive explanation of variance and moment in Probability While I understand the intuition behind expectation, I don't really understand the meaning of variance and moment.
What is a good way to think of those two terms?
| The localization and the dispersion probabilistic measures can be "seen" as the corresponding momentums of mechanical systems of "probabilistic masses".
The expectation has the following mechanical interpretation. Given that $F(x)$ is the "probabilistic mass" contained in the interval $0\le X\lt x$ (in one dimension), the mathematical expectation of the random variable $X$ is the static momentum with respect to the origin of the "probabilistic masses" system.
The variance is the mechanical analog of the inertia momentum of the "probabilistic masses" system with respect to the center of masses.
The variance of the random variable $X$ is the 2nd order momentum of $X-m$, where $m$ is the expectation of $X$, i.e. $m=E(X)=\displaystyle\int x\, dF(x)$ ($F(x)$ is the cumulative distribution function).
The $k$-order momentum is the expectation of $(X-m)^k$.
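The mechanical analogy is easy to see numerically for a discrete system of point masses (my own illustration): the expectation is the static moment of the masses about the origin, and the variance is the moment of inertia about the center of mass.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 5.0])   # positions of the point masses
p = np.array([0.1, 0.4, 0.3, 0.2])   # "probabilistic masses", summing to 1

m = np.sum(p * x)                     # static moment about the origin = E[X]
var = np.sum(p * (x - m) ** 2)        # moment of inertia about m = Var(X)
third = np.sum(p * (x - m) ** 3)      # 3rd-order central moment

# m is exactly the center of mass of the system:
assert np.isclose(m, np.average(x, weights=p))
print(m, var, third)
```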
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 0
} |
Some basic questions about the Selberg zeta function I'm trying to learn about the Selberg zeta function, but it seems like introductory texts assume more knowledge of Riemannian geometry than I'm comfortable with.
I have some basic questions that someone might be able to help with:
Is the composition of two closed geodesics itself a closed geodesic?
Is composition of geodesics a commutative operation?
Can all geodesics be decomposed into compositions of primitive closed geodesics?
If anyone has comments or references, I would appreciate them.
| A hyperbolic geodesic can be traversed once, twice, three times, and so on; each traversal formally yields a new closed geodesic whose length is the corresponding multiple of the original length.
The primitive ones are those which are not a multiple of any other geodesic.
This can be phrased in terms of generators of the fundamental group, which gives the translation to Fuchsian groups, i.e. $\pi_1(\Gamma \backslash \mathbb{H}) \cong \Gamma$.
Your Riemann surface has no singularities iff the generators of $\Gamma$ are in one-to-one correspondence with primitive geodesics.
So your first question: ... if and only if they are a multiple of the same primitive geodesic. But it is not possible to combine two arbitrary closed geodesics.
2nd question: ... Yes, the topological operation commutes, if it is well defined.
3rd question: ... Yes, every closed geodesic is a multiple of an unique primitive geodesic.
Reference: Iwaniec - Spectral theory of automorphic forms.
Google Fuchsian groups.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How is the codomain for a function defined? Or, in other words, why aren't all functions surjective? Isn't any subset of the codomain which isn't part of the image rather arbitrary?
| Often what is important is not functions but collections of functions, especially subsets of the set of functions between two sets $X$ and $Y$, some of which will be surjective and some of which won't. What also doesn't get emphasized a lot at the level of elementary set theory is the compositional structure of functions, e.g. a function $f : X \to Y$ can be composed with a function $g : Y \to Z$ to give a function $fg : X \to Z$. For the purposes of studying this compositional structure (for example if $X = Y = Z$) it is generally important not to require that $f$ and $g$ be surjective or you will miss out on structure.
For example, a simple way to define a dynamical system is just as a function $f : X \to X$. Whether $f$ is surjective or not is an important aspect of classifying the dynamics of $f$, or in other words of classifying the behavior of the sequences $\{ x, f(x), f^2(x), f^3(x), ... \}$ for various $x$. This sequence is not well-defined if you pretend that the domain and codomain of $f$ are different just because the range and the codomain aren't equal.
This is another way of saying that a function really consists of three pieces of data (a domain, a codomain, and the mapping from one to the other), but the reason these three pieces of data are all important is really the compositional structure.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 2
} |
Find thickness of a coin This is one of the question asked in a written test conducted by a company. The question sounded stupid to me. May be its not.
"Given the area of the coin to be 'A'. If the probability of getting a tail, head and the edge are same, what is the thickness of the coin?
| If you assume that the probability of getting tail, head or edge only depends on the surface area that those regions of the coin have, then you see that the area of the edge must also equal $A$. If you denote by $T$ the thickness of the coin, then the area of the edge is given by $T \cdot C$, where $C$ is the circumference of the coin. Now, as you are given $A$, you can determine $C = 2 \sqrt{A \pi}$ and thus you get $T = \frac{A}{C} = \frac{A}{2 \sqrt{A \pi}} = \frac{\sqrt{A\pi}}{2\pi}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Related Integer Combination The title and tags are a little goofy since I'm not quite sure what to call this. This may be more of a computer science question, but I thought I'd ask here too.
I have two integers $n$ and $m$ that are arbitrarily (logically) related with integers $x$ and $y$, respectively, where all values are finite and $> 0.$
I'd like to know if there's a way to combine $n$ and $x$, and $m$ and $y$, into two numbers (integer, decimal, whatever) such that neither will overlap. Additionally, I need to be able to perform some kind of operation to reverse the previous process to once again obtain $n$ and $x$, and $m$ and $y$.
Let me know if this isn't clear. Thanks in advance!
| You could employ a pairing function, e.g. Cantor's or others.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Geometric Progression If $S_1$, $S_2$ and $S$ are the sums of $n$ terms, $2n$ terms and to infinity of a G.P. Then, find the value of $S_1(S_1-S)$.
PS: Nothing is given about the common ratio.
| HINT $\quad\:$ In $\rm\ \ (1-X)\ (1-(1-X))\ =\ 1-X^2-(1-X)\ \ \ $ put $\rm\ \ \ X = x^n\ $
then multiply both sides by $\rm\ 1/(1-x)^2\ =\ S/(1-x)\:.\ \ $ More generally one has
$\rm\ \ (1-x^a)\:(1-x^b)\ =\ (1-x^a) + (1-x^b) - (1-x^{a+b})$
$\rm\quad\quad\quad\ \Rightarrow\quad\quad S_a\ S_b\ =\ S\ (S_a + S_b - S_{a+b})\:,\quad S_n = \displaystyle\frac{1-x^n}{1-x},\quad S = S_\infty = \frac{1}{1-x}$
This generalizes to arbitrary products $\rm\: S_{a}\: S_b\: S_c\cdots S_k\:$ using the Inclusion–exclusion principle.
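With $a=b=n$ the displayed identity reads $S_1^2 = S(2S_1 - S_2)$, i.e. $S_1(S_1-S) = S(S_1-S_2)$, which answers the question. A symbolic check with SymPy (my own addition):

```python
from sympy import symbols, simplify

x, n = symbols('x n')
S1 = (1 - x**n) / (1 - x)        # sum of the first n terms (a = 1)
S2 = (1 - x**(2*n)) / (1 - x)    # sum of the first 2n terms
S = 1 / (1 - x)                  # sum to infinity, |x| < 1

# The identity S1*(S1 - S) == S*(S1 - S2):
assert simplify(S1 * (S1 - S) - S * (S1 - S2)) == 0
print("identity verified")
```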
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Generalizing values which Euler's-totient function does not take I was reading about Euler's totient function on wikipedia, and it eventually led me to this book on google:
page 74 of the book Prime Numbers: The Most Mysterious Figures in Math by David G. Wells.
Anyway, the book lists many assertions without proof, only references which I can't find. One I could not solve for myself said that not all even numbers are values of $\phi(n)$. The sequence of even non-values of $\phi(n)$ starts:
$14, 26, 34, 38, 50, 62,\dots$
After thinking about it for a while, I've made little headway. Looking at 14 specifically, I suppose if such a solution $a$ to $\phi(x)=14$ were to exist, $(a,m_i)=1$ for $1\leq i\leq 14$, for some $m_i\lt a$ and so there must also exist inverses $\overline{a_i}$ such that $\overline{a_i}a\equiv 1 \pmod {m_i}$ for each $i$. I'm looking for some contradiction, possibly of the Chinese Remainder Theorem, to show such $a$ cannot exist.
Is there some way to generalize which values are not taken by $\phi$, or at least explain why this is the case? I was hoping to see why $14$ is the least such integer such that this is true, but values $1$ to $13$ must be indeed taken. I suppose this would also explain why 26 is the next such value that is not taken, while $15$ to $25$ are.
| From the comments to A005277 :
If p is prime then the following two statements are true. I. 2p is in the sequence iff 2p+1 is composite (p is not a Sophie Germain prime). II. 4p is in the sequence iff 2p+1 and 4p+1 are composite. - Farideh Firoozbakht (mymontain(AT)yahoo.com), Dec 30 2005
This covers most of your cases, 50 is covered by the next comment.
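The first few even nontotients can also be found by brute force (my own sketch; the search bound relies on the standard estimate $\phi(n) \ge \sqrt{n/2}$, so $\phi(n)=m$ forces $n \le 2m^2$):

```python
from sympy import totient

def is_totient_value(m: int) -> bool:
    """Check whether phi(n) = m for some n; a finite search suffices
    since phi(n) >= sqrt(n/2) implies n <= 2*m*m."""
    return any(totient(n) == m for n in range(1, 2 * m * m + 2))

even_nontotients = [m for m in range(2, 63, 2) if not is_totient_value(m)]
print(even_nontotients)   # [14, 26, 34, 38, 50, 62]
```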
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 0
} |
Characteristics for 2nd order differential equations If I have an equation
$p(x)\frac{\partial^2u}{\partial x^2} + r(x)\frac{\partial^2u}{\partial x\partial y} + q(x)\frac{\partial^2 u}{\partial y^2}=f(x,y,u)$
where $f$ may contain first partial derivatives of $u$.
Can anyone give me a worked example of how to solve this using the method of characteristics.
Thanks in advance.
| See these lecture notes on the method of characteristics for second-order PDEs, where the method is applied to steady isentropic flow (gasdynamics)
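In the meantime, a small sketch of the standard first step (my own addition, not from the linked notes): for $p\,u_{xx} + r\,u_{xy} + q\,u_{yy} = f$, the characteristic curves $y(x)$ satisfy $p\,(dy/dx)^2 - r\,(dy/dx) + q = 0$, and the equation is hyperbolic exactly when this quadratic has two distinct real roots.

```python
from sympy import symbols, solve

lam, p, r, q = symbols('lam p r q')

# Characteristic directions lam = dy/dx of p*u_xx + r*u_xy + q*u_yy = f:
general = solve(p * lam**2 - r * lam + q, lam)
print(general)   # the two roots, real iff r**2 - 4*p*q >= 0

# Example: the wave equation u_xx - u_yy = 0, i.e. p = 1, r = 0, q = -1.
roots = solve(lam**2 - 1, lam)
assert set(roots) == {-1, 1}   # characteristics: y = x + c1 and y = -x + c2
```

Along each family of characteristics the PDE reduces to ODEs, which is what the method then integrates; the linked notes carry this out for gasdynamics.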
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\beta \rightarrow \neg \neg \beta$ is a theorem using standard axioms 1,2,3 and MP I've proven that $\neg \neg \beta \rightarrow \beta$ is a theorem, but I can't figure out a way to do the same for $\beta \rightarrow \neg \neg \beta$.
It seems the proof would use Axiom 2 and the deduction theorem (which allows $\beta$ to be used as a hypothesis)--but I've endlessly tried values to no avail.
Axiom 1: $A \rightarrow ( B \rightarrow A )$.
Axiom 2: $( A \rightarrow ( B \rightarrow C ) ) \rightarrow ( ( A \rightarrow B ) \rightarrow (A \rightarrow C) ) $.
Axiom 3: $( \neg B \rightarrow \neg A) \rightarrow ( ( \neg B \rightarrow A) \rightarrow B )$.
To clarify: A, B, C, $\alpha$, and $\beta$ are propositions (i.e. assigned True or False). $\rightarrow$ and $\neg$ have the standard logical meanings.
| I'll use the deduction theorem, so I'll assume $\beta$ and need to prove $\neg\neg\beta$.
1. $\beta$ (assumption)
2. $\beta\to (\neg\neg\neg\beta\to\beta)$ (axiom 1)
3. $\neg\neg\neg\beta\to\beta$ (modus ponens using 1 and 2)
4. $\neg\neg\neg\beta\to\neg\beta$ (you have proved that $\neg\neg\beta\to\beta$ is a theorem)
5. $(\neg\neg\neg\beta\to\neg\beta)\to((\neg\neg\neg\beta\to\beta)\to\neg\neg\beta)$ (axiom 3)
6. $(\neg\neg\neg\beta\to\beta)\to\neg\neg\beta$ (modus ponens using 4 and 5)
7. $\neg\neg\beta$ (modus ponens using 3 and 6)
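The proof can also be checked mechanically. A small sketch (my own addition) that encodes the seven formulas, verifies the modus ponens steps structurally, and confirms by truth table that the axiom instances and the final conclusion $\beta\to\neg\neg\beta$ are tautologies:

```python
def imp(a, b): return ('->', a, b)
def neg(a):    return ('~', a)

def ev(f, v):
    """Truth value of a formula built from the single atom 'B'."""
    if f == 'B':
        return v
    if f[0] == '~':
        return not ev(f[1], v)
    return (not ev(f[1], v)) or ev(f[2], v)

def taut(f):
    return all(ev(f, v) for v in (False, True))

B  = 'B'
s1 = B                                  # step 1: assumption
s3 = imp(neg(neg(neg(B))), B)           # step 3
s2 = imp(B, s3)                         # step 2: axiom 1 instance
s4 = imp(neg(neg(neg(B))), neg(B))      # step 4: instance of ~~X -> X, X = ~B
s7 = neg(neg(B))                        # step 7
s6 = imp(s3, s7)                        # step 6
s5 = imp(s4, s6)                        # step 5: axiom 3 instance

# Steps 3, 6, 7 are modus ponens: each major premise is exactly minor -> conclusion.
assert s2 == imp(s1, s3)
assert s5 == imp(s4, s6)
assert s6 == imp(s3, s7)
# The non-MP lines, and the deduction-theorem conclusion B -> ~~B, are tautologies.
assert all(map(taut, (s2, s4, s5, imp(B, s7))))
print("proof checks out")
```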
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Why is $T_1$ required for a topological space to be $T_4$? Let's say we have some topological space.
Axiom $T_1$ states that for any two points $y \neq x$, there is an open neighborhood $U_y$ of $y$ such that $x \notin U_y$.
Then we say that a topological space is $T_4$ if it is $T_1$ and also satisfies that for any two closed, non-intersecting sets $A,B$, there are open neighborhoods $U_A,U_B$ respectively, such that $U_A\cap U_B = \emptyset$.
Could anyone give an example of a topological space which satisfies the second condition of $T_4$, but which is not $T_1$?
| As an addition: the separating closed disjoint sets part is often called normality (a space is normal if it satisfies this), so $T_4$ is normal plus $T_1$, and similarly for regular: $T_3$ is regular and $T_1$. Spaces that have no disjoint non-empty closed sets (besides the trivial topology, we have examples like $\mathbf{N}$ with the topology generated by the sets of the form $U(n) = \{ k : k \ge n \}$ e.g.) trivially satisfy normality. $T_4$ is to avoid these pathologies: the extra $T_1$ ensures that at least all finite sets are closed, so we have some "relevant" closed sets to apply normality to...
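The smallest example of the pathology the question asks about is the Sierpiński space, which is (vacuously) normal but not $T_1$. A brute-force check (my own illustration):

```python
from itertools import combinations

# Sierpinski space: points {0, 1}, open sets {}, {0}, {0, 1}.
X = frozenset({0, 1})
opens = [frozenset(), frozenset({0}), X]
closed = [X - U for U in opens]   # {0,1}, {1}, {} -- complements of opens

# Not T1: no open set contains the point 1 while avoiding the point 0.
assert not any(1 in U and 0 not in U for U in opens)

# Normality holds vacuously: no pair of disjoint non-empty closed sets exists.
pairs = [(A, B) for A, B in combinations(closed, 2) if A and B and not (A & B)]
assert pairs == []
print("normal (vacuously) but not T1")
```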
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 0
} |
Finding a line that satisfies three conditions Given lines $\mathbb{L}_1 : \lambda(1,3,2)+(-1,3,1)$, $\mathbb{L}_2 : \lambda(-1,2,3)+(0,0,-1)$ and $\mathbb{L}_3 : \lambda(1,1,-2)+(2,0,1)$, find a line $\mathbb{L}$ such that $\mathbb{L}$ is parallel to $\mathbb{L}_1$, $\mathbb{L}\cap\mathbb{L}_2 \neq \emptyset$ and $\mathbb{L}\cap\mathbb{L}_3 \neq \emptyset$.
Since $\mathbb{L}$ must be parallel to $\mathbb{L}_1$, then $\mathbb{L}:\lambda(1,3,2)+(x,y,z)$ but I can't figure out how to get that (x,y,z) point. I'd like to be given just a slight nod because I'm sure the problem is really easy. Thanks a lot!
| $\mathbb L \cap \mathbb L_2 \ne \emptyset$ implies that for some value of $\lambda$, your equation for $\mathbb L$ is on $\mathbb L_2$. Since substituting $\lambda' = \lambda - c$ doesn't change the line $\mathbb L$, you can take $\lambda = 0$ without loss of generality. That is, $(x,y,z)$ is on $\mathbb L_2$, or $(x,y,z) = \lambda_2(-1,2,3) + (0,0,-1)$. ...
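Carrying the hint to the end (my own completion, so treat the numbers below as a check on your own work rather than as part of the answer): requiring $\mathbb{L}_2(\lambda_2) + \mu(1,3,2) = \mathbb{L}_3(\lambda_3)$ gives a $3\times 3$ linear system in $(\lambda_2, \mu, \lambda_3)$.

```python
import numpy as np

d1 = np.array([1, 3, 2])   # direction of L1, hence of L

# L2(lam2) = lam2*(-1,2,3) + (0,0,-1),  L3(lam3) = lam3*(1,1,-2) + (2,0,1).
# Componentwise, L2(lam2) + mu*d1 = L3(lam3) reads:
#   -lam2 +   mu - lam3 = 2
#   2*lam2 + 3*mu - lam3 = 0
#   3*lam2 + 2*mu + 2*lam3 = 2
A = np.array([[-1.0, 1.0, -1.0],
              [ 2.0, 3.0, -1.0],
              [ 3.0, 2.0,  2.0]])
b = np.array([2.0, 0.0, 2.0])
lam2, mu, lam3 = np.linalg.solve(A, b)

P = lam2 * np.array([-1, 2, 3]) + np.array([0, 0, -1])   # point of L on L2
Q = lam3 * np.array([1, 1, -2]) + np.array([2, 0, 1])    # point of L on L3
assert np.allclose(Q - P, mu * d1)   # Q - P is parallel to (1,3,2)
print(P, Q)   # P = (2, -4, -7), Q = (4, 2, -3)
```

So $\mathbb{L}: \lambda(1,3,2) + (2,-4,-7)$ meets both $\mathbb{L}_2$ and $\mathbb{L}_3$.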
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Integral solutions to $y^{2}=x^{3}-1$ How to prove that the only integral solutions to the equation $$y^{2}=x^{3}-1$$ is $x=1, y=0$.
I rewrote the equation as $y^{2}+1=x^{3}$ and then we can factorize $y^{2}+1$ as $$y^{2}+1 = (y+i) \cdot (y-i)$$ in $\mathbb{Z}[i]$. Next I claim that the factors $y+i$ and $y-i$ are coprime, but I am not able to show this. Any help would be useful. Moreover, I would also like to see different proofs of this question.
Extending: Consider the equation $$y^{a}=x^{b}-1$$ where $a,b \in \mathbb{Z}$, $(a,b)=1$ and $a < b$. Is there any result about the nature of the solutions to this equation?
| I think I have found a proof for the statement made in the top answer:
Suppose $y+i$ is a perfect cube in $\mathbb{Z}[i]$ up to units (every unit of $\mathbb{Z}[i]$ is itself a cube, so the unit can be absorbed and we may write $y+i = (a+bi)^3$). This easily gives us $y=0$, and so $x=1, y=0$ is the only solution.
Let $y + i = (a + bi)^3 = (a^3 - 3ab^2) + i(3a^2b - b^3),$ then by comparison of coefficients, $1 = 3a^2b - b^3 = b(3a^2 - b^2)$, and therefore $b = \pm 1$.
- Case 1: $b = 1\colon$ $3a^2 = 2 \Rightarrow a = \sqrt{2/3} \not\in \mathbb Z.$
- Case 2: $b = -1\colon$ $3a^2 = 0 \Rightarrow a = 0 \Rightarrow y+i = (-i)^3 = i \Rightarrow y = 0 \Rightarrow x = 1.$
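A brute-force search is consistent with this (my own addition; of course it checks only a finite range and proves nothing by itself — note $x \ge 1$ is forced, since $y^2 = x^3 - 1 \ge 0$):

```python
import math

def solutions(bound: int):
    """All integral (x, y) with y*y == x**3 - 1 and 1 <= x <= bound."""
    sols = []
    for x in range(1, bound + 1):
        t = x**3 - 1
        y = math.isqrt(t)
        if y * y == t:
            sols.append((x, y))
            if y:
                sols.append((x, -y))
    return sols

print(solutions(10000))   # [(1, 0)]
```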
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 3
} |
What is the minimal coloring of a planar graph with swaps? I play a game known as Bejeweled (I'm sure some of you have heard of it) which has the following properties:
Given a planar graph with every vertex having degree 4, any path of 3 (or more) contiguous colors is considered "complete." Additionally, any two adjacent vertices may be "swapped" to produce said path. If such a swap exists the board is also said to be "complete." You cannot swap if it does not produce a path of 3 or more contiguous colors.
For the moment I'm ignoring the property that this path must be in a straight line.
What I'd like to do is to limit the number of colors I have so that the board will always be complete (having a path, or a single swap produces the path). Now obviously this is trivially true in the 2 color case, but I want the maximal number of colors. Or more precisely,
What is the minimum number of colors that I need to color a planar graph, with every vertex having degree 4, such that there is no contiguous path of 3 identical colors or such a path cannot be produced by a single swapping of two nodes?
| The answer is three. As for a proof:
As stated above, two colours are not enough. Every square grid colouring must either have alternating colours uniformly across the grid, or somewhere have two adjacent vertices of the same colour. In the former case, any swap of vertices will create a three-in-a-row. If there are two in a row of the same colour and we assume it is not possible to reach a three-in-a-row configuration, then all the vertices marked X below must not be colour 1.
.X..X.
XX11XX
.X..X.
With only two colours this clearly creates a problem as all the X vertices must be colour 2.
To show that 3 colours suffices consider the colouring below:
123123
231231
312312
123123
The pattern can be extended to cover an infinitely large grid and it's not possible to swap to get three-in-a-row or even to get a coloured path of length three.
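The claim that no swap in this pattern creates a three-in-a-row can be verified exhaustively on a finite grid (my own addition; the pattern above is $\text{colour}(i,j) = (i+j) \bmod 3$):

```python
def colour(i, j):
    """The diagonal 3-colouring from the answer: 123123 / 231231 / ..."""
    return (i + j) % 3

def has_three_path(grid, n):
    """Is there a path v1-v2-v3 (v2 adjacent to both ends) of one colour?"""
    def nbrs(i, j):
        return [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                if 0 <= a < n and 0 <= b < n]
    for i in range(n):
        for j in range(n):
            same = [p for p in nbrs(i, j) if grid[p] == grid[i, j]]
            if len(same) >= 2:   # (i, j) is the middle of a monochrome path
                return True
    return False

n = 9
grid = {(i, j): colour(i, j) for i in range(n) for j in range(n)}
assert not has_three_path(grid, n)

# No single swap of adjacent cells creates a three-path either.
for i in range(n):
    for j in range(n):
        for a, b in ((i + 1, j), (i, j + 1)):
            if a < n and b < n:
                grid[i, j], grid[a, b] = grid[a, b], grid[i, j]
                assert not has_three_path(grid, n)
                grid[i, j], grid[a, b] = grid[a, b], grid[i, j]
print("3-colouring is swap-proof on the 9x9 grid")
```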
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/5964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Polygonal billiards and uniform distribution According to this article in Wikipedia: A billiard is a dynamical system in which a particle alternates between motion in a straight line and specular reflections from a boundary. When the particle hits the boundary it reflects from it without loss of speed. Billiard dynamical systems are Hamiltonian idealizations of the game of billiards, but where the region contained by the boundary can have shapes other than rectangular and even be multidimensional.
My question is motivated by a videogame I've been playing lately, which can be seen at http://www.youtube.com/watch?v=LLLmfwxNJYU.
Essentially, the "physics" of the game involves several billiards in a polygon, and the player has to slash off pieces of the polygon while avoiding the billiards. Also, the removed piece has to be void of billiards.
I've been assuming that the distribution of billiards is in the long run uniform, in some hand-waving sense. That's assuming that initial distributions and velocities are random. Is that true, or can the polygon be shaped in various ways to make that distribution non-uniform? In other words, are certain regions of a polygon more likely to be void than other regions?
(I believe that a term like "ergodic" applies to this, but I'm not confident using it).
| I agree with Joel's answer, but it doesn't address the question of random directions. See the first sentence of
http://arxiv.org/pdf/math/0701658.pdf
which was subsequently published. For rational billiards (i.e., all angles are rational multiples of $\pi$), the dynamics is ergodic in almost all directions, and in particular it is uniformly distributed in space.
Much less is known about irrational polygons; it is possible this is an open question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Can $G$ of order $pqr$ be simple if it's generated by elements of orders $p,q$? Let $G$ be a group of order $pqr$, where $p$, $q$ and $r$ are three distinct primes. By Cauchy's theorem there exist three elements, $a$, $b$ and $c$, whose orders are $p$, $q$ and $r$, respectively. If the subgroup generated by $a$ and $b$ is the whole group, then I wonder if it is possible that there exists a proper normal subgroup of $G$.
| More generally, let $G$ be a finite group with order $p_1p_2\cdots p_n$, where $p_1< p_2< \cdots < p_n$ are distinct primes. Then by the normalizer-centralizer theorem, the Sylow $p_1$-subgroup is central in its normalizer, so there is a normal $p_1$-complement in $G$ by Burnside's transfer theorem. Inductively, we can then show the Sylow $p_n$-subgroup is normal in $G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
What are the most important questions or areas of study in the philosophy of mathematics? This question is intended to complement What mathematical questions or areas have philosophical implications outside of mathematics?
| Paul Benacerraf's identification problem is an argument against Platonism: Many different models satisfy Peano's arithmetic axioms. Which of these are really the Natural numbers? I.e., the uniquely intended set? We are unable to pick one account to the exclusion of all the others. Adding axioms won't help. Any such axiomatic system will be satisfied by multiple models. For example, is the number Three given by $3 = \{ \emptyset, \{ \emptyset \}, \{ \emptyset, \{ \emptyset \} \} \}$ or by $3 = \{ \{ \{ \emptyset \} \} \}$? Benacerraf concludes that numbers are not sets, as most mathematicians believe.
A recent trend for dealing with Benacerraf's identification problem is fictionalism, a form of structuralism: "Fictionalism holds that mathematical theories are like fiction stories such as fairy tales and novels. Mathematical theories describe fictional entities, in the same way that literary fiction describes fictional characters." See http://plato.stanford.edu/entries/philosophy-mathematics/#Fic
Fictionalists include Hartry Field, John Burgess, and Charles Chihara.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
Derivative of Integral I'm having a little trouble with the following problem:
Calculate $F'(x)$:
$F(x)=\int_{1}^{x^{2}}(t-\sin^{2}t) dt$
It says we have to use substitution but I don't see why the answer can't just be:
$x-\sin^{2}x$
| By the chain rule, since the upper limit is $x^{2}$ rather than $x$,
$$\frac{\mathrm{d}}{\mathrm{d}x} = 2x\,\frac{\mathrm{d}}{\mathrm{d}\left(x^{2}\right)},$$
so $F'(x) = 2x\left(x^{2}-\sin^{2}\left(x^{2}\right)\right)$.
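As a sanity check on the chain-rule answer, one can compare a finite-difference derivative of a numerically computed $F$ against $2x\left(x^{2}-\sin^{2}(x^{2})\right)$ (an illustrative sketch, not part of the original answer):

```python
import math

def integrand(t):
    return t - math.sin(t) ** 2

def F(x, n=100000):
    # midpoint-rule approximation of the integral of t - sin^2(t) from 1 to x^2
    a, b = 1.0, x * x
    h = (b - a) / n
    return sum(integrand(a + (i + 0.5) * h) for i in range(n)) * h

def F_prime(x):
    # chain rule: d/dx applied to an integral with upper limit x^2
    return 2 * x * (x * x - math.sin(x * x) ** 2)

x, h = 1.3, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central difference
assert abs(numeric - F_prime(x)) < 1e-4
```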
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Calculate combinations of characters My first post here...not really a math expert, but certainly enjoy the challenge.
I am writing a random string generator and would like to know how to calculate how many possible combinations there are for a particular pattern.
I am generating a string of 2 numbers followed by 2 letters (lowercase), e.g. 12ab.
I think the calculation would be (breaking it down)
number combinations 10*10=100
letter combinations 26*26=676
So the number of possible combinations is 100*676=67600, but this seems a lot to me so I'm thinking I am off on my calculations!!
Could someone please point me in the right direction?
Thx
| Some nomenclature: when you say "2 numbers", you really mean "2 digits". Also, you need to specify if the digits can be anything or not (for example, do you allow leading zeroes?).
If each of the two digits can be anything, 0 through 9, and each of the letters can be anything, a through z, then your computation is correct. If you think about it, you can see why the number is not off: any particular pair of letters has 100 numbers that can go before it to make the string. Each particular letter going first has 26 possible "second letters", and each of those has 100 possible pairs of digits to go in front. So there are already 2600 possible strings of the form xxxa. Another batch for xxxb, etc. They add up very quickly.
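The count can also be confirmed by brute-force enumeration; an illustrative sketch:

```python
from itertools import product
from string import ascii_lowercase, digits

# every string of 2 digits followed by 2 lowercase letters, e.g. "12ab"
strings = [''.join(p)
           for p in product(digits, digits, ascii_lowercase, ascii_lowercase)]

print(len(strings))  # 67600, matching 10 * 10 * 26 * 26
```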
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Number of ways to stack N cubes I'm trying to figure out how many ways are there to stack N cubes on top of each other.
Once you get a particular stack of 4 cubes let's say, the order of the cubes doesn't matter, just which sides are being shown does.
I'm generally finding 2 numbers:
1.) 6 * 24^(n-1)
and
2.) 3 * 24^(n-1)
Which is the correct answer, and why?
To understand the question better, you can refer to the Instant Insanity puzzle:
http://en.wikipedia.org/wiki/Instant_Insanity
| Your solutions presuppose that the cubes are identical; this is not the case in Instant Insanity. If they are all distinguishable you are missing a factor of $N!$.
The smaller possible solution is valid under the assumption that a stack is regarded as identical with the stack produced by turning the original stack upside down. The larger solution regards these two stacks as different.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is $[0,1]$ a countable disjoint union of closed sets? Can you express $[0,1]$ as a countable disjoint union of closed sets, other than the trivial way of doing this?
| This is just a consequence of the fact that a Cantor-like set is uncountable.
In our case one builds the Cantor-like set by intersecting the compact sets obtained by removing from $[0,1]$ the union of the first $N$ intervals without their boundary points.
Since the intersection is uncountable, it contains a point which is not an endpoint of any of the intervals. That is, there is a point in $[0,1]$ which does not belong to the union of the closed intervals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109",
"answer_count": 8,
"answer_id": 6
} |
How do you determine dimension Suppose $U$ and $V$ are $2$ dimensional subspaces of $\mathbb{R}^4$. How do you determine the dimension of $U \cap V$? I know that
\begin{equation*}
\text{dim}(U + V) = \text{dim}(U)+ \text{dim}(V) - \text{dim}(U \cap V).
\end{equation*}
So it seems that $\text{dim}(U \cap V) = 0$.
| First: it is incorrect, in general, to even write $\dim(U\cup V)$, because $U\cup V$ is almost never a subspace: it is a subspace if and only if $U\subseteq V$ or $V\subseteq U$. Not being a subspace, it doesn't even make sense to talk about its dimension.
Rather, what you probably meant is the correct equation
$$\dim(U+V) = \dim(U) + \dim(V) - \dim(U\cap V)$$
(though I prefer to put the intersection on the left hand side, because in that form it is valid even in the infinite dimensional case). Here, $U+V$ is the smallest subspace that contains $U$ and $V$, and it happens to equal the set of all vectors of the form $u+v$ with $u\in U$ and $v\in V$.
The equation does not give you the full answer, because there are several situations that can occur: you could have $U$ and $V$ intersect trivially: this is what happens when $U+V=\mathbb{R}^4$. For an explicit example, you could have $U=\{(a,b,0,0):a,b\in\mathbb{R}\}$, and $V=\{(0,0,c,d) : c,d\in\mathbb{R}\}$. Here, $\dim(U\cap V)=0$.
Or you could have that $U$ and $V$ intersect in a one-dimensional subspace (for example, take $U$ as above, but take $V=\{(0,b,c,0) : b,c\in\mathbb{R}\}$).
Or you could have $U=V$, in which case the intersection has dimension $2$.
What you can say is: if the spaces are distinct, then the intersection will either have dimension 1 or dimension 0. But that is all you can say with the given information.
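The three cases can be checked mechanically via the dimension formula, since $\dim(U+V)$ equals the rank of a matrix whose columns span both subspaces. A NumPy sketch (illustrative, not part of the original answer; subspaces are passed as column-spanning matrices):

```python
import numpy as np

def dim_intersection(U, V):
    # dim(U ∩ V) = dim U + dim V − dim(U + V),
    # where dim(U + V) is the rank of the stacked column spans
    return (np.linalg.matrix_rank(U) + np.linalg.matrix_rank(V)
            - np.linalg.matrix_rank(np.hstack([U, V])))

e1, e2, e3, e4 = np.eye(4)
U = np.column_stack([e1, e2])   # {(a, b, 0, 0)}
V = np.column_stack([e3, e4])   # {(0, 0, c, d)}
W = np.column_stack([e2, e3])   # {(0, b, c, 0)}

print(dim_intersection(U, V))   # 0: trivial intersection
print(dim_intersection(U, W))   # 1: one-dimensional intersection
print(dim_intersection(U, U))   # 2: U = V
```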
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Constructing Non Abelian Groups of Given order The motivating factor behind this question is the comments given for this question asked some hours ago Group of order $105$
So, given a group $G$ of order $n$, what are different methods for constructing a non abelian group of order $n$? Well, I have seen a method which Herstein uses in his book. He takes a cyclic group of given order and defines an Automorphism on the cyclic group and then places some restrictions.
I would like to know whether there are anymore methods for obtaining a Non Abelian group of a given order.
| Have you tried taking the semidirect product of two subgroups? One of them needs to be normal, and the order of $G$ is the product of the orders of the subgroups. If the other subgroup is not also normal, then the semidirect product is non-abelian, since every subgroup of an abelian group is normal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finiteness of $\lim_{h \to 0^+} \frac1{h} \int_x^{x+h} f(t)\mathrm{d}t$ If f is in $L^1(\mathbb{R})$, is it true that $\lim_{h \to 0^+} \frac1{h} \int_x^{x+h} f(t)\mathrm{d}t$ exists and is finite for every x in $\mathbb{R}$?
Would it be possible to use something along the lines of the following argument: The Lebesgue Differentiation Theorem says that this integral is equal to f(x) a.e., which is finite a.e. if f is in $L^1(\mathbb{R})$. And since the integral doesn't change on a set of measure 0, then the limit itself must be finite a.e.
| No. As you said, it is true almost everywhere, but for example if $f(x)=x^{-1/2}$ on $(0,1)$ (and whatever you like elsewhere), then for $0\lt h\lt 1$, $1/h\int_0^h f(t)dt=2/\sqrt{h}$.
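A numerical illustration of this counterexample (a sketch, not part of the original answer): the averaged integral $\frac1h\int_0^h t^{-1/2}\,dt$ equals $2/\sqrt h$, which blows up as $h\to 0^+$.

```python
import math

def averaged_integral(h, n=200000):
    # midpoint rule for (1/h) * integral of t^(-1/2) from 0 to h;
    # the midpoints avoid the singularity at t = 0
    step = h / n
    total = sum((step * (i + 0.5)) ** -0.5 for i in range(n)) * step
    return total / h

for h in (0.1, 0.01, 0.001):
    exact = 2 / math.sqrt(h)           # grows without bound as h -> 0+
    assert abs(averaged_integral(h) - exact) / exact < 1e-2
```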
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to simplify or calculate a formula with very big factorials I'm facing a practical problem where I've calculated a formula that, with the help of some programming, can bring me to my final answer. However, the numbers involved are so big that it takes ages to compute, and I think it might not be necessary. I have the following formula,
$\sum\limits^{k}_{i=m}(N-i)^{k-i}(\frac{1}{N})^k\frac{k!}{(k-i)!i!} \leq a$
from which I need to calculate N. The rest of the values are constants, but in the range of the 100,000s. The factorial there is giving me a headache, since the values involved are too large; what simplifications could I make that will loosen the bounds slightly and thereby simplify the calculation? Are there any standard tricks? Or perhaps a way to calculate this in matlab / octave?
| You don't need to compute the individual factorials in order to compute $k!/(k-i)!i!$, since that's the binomial coefficient $\binom{k}{i}$. A simple algorithm for computing binomial coefficients can be found on Wikipedia. A more sophisticated algorithm is due to Goetgheluck (JSTOR); implementations can be found here and here.
Of course, with numbers of the size that you have, this might still not be feasible, and in this case I also recommend Stirling's formula.
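For illustration, here is a sketch of the multiplicative formula (which never forms the full factorials), plus a Stirling-based estimate of $\log\binom{k}{i}$ for when even the binomial coefficient itself is too large to hold exactly:

```python
import math

def binom(k, i):
    # multiplicative formula: builds C(k, i) in i exact integer steps,
    # never computing k! itself
    i = min(i, k - i)
    result = 1
    for j in range(1, i + 1):
        result = result * (k - i + j) // j   # division is always exact here
    return result

def log_binom(k, i):
    # Stirling's approximation: ln n! ≈ n ln n − n + 0.5 ln(2πn)
    def log_fact(n):
        return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    return log_fact(k) - log_fact(i) - log_fact(k - i)

print(binom(10, 3))                 # 120
print(log_binom(100000, 40000))     # ln C(100000, 40000), without overflow
```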
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 0
} |
Show that a number is not prime? Show that for any integer $n>1$, all the numbers $n!+2, n!+3, \ldots, n!+n$ are composite (i.e. not prime).
| Hint:
Try to show, that if for two numbers $a$ and $b$, $a$ is divisible by $d$ and $b$ is divisible by $d$, then so is their sum. Then go looking for such a common divisor in your sums.
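Following the hint: for $2 \le k \le n$, $k$ divides both $n!$ and $k$, hence also their sum $n!+k$. A brute-force confirmation for small $n$ (illustrative sketch):

```python
import math

def is_prime(m):
    # trial division; fine for the small values checked below
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

for n in range(2, 9):
    f = math.factorial(n)
    for k in range(2, n + 1):
        assert (f + k) % k == 0       # k divides n! and k, hence their sum
        assert not is_prime(f + k)    # so n! + k is composite
```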
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Boolean Simplification I've been trying to simplify this equation:
$Y = (¬AB + ¬C)(¬A + ¬D)$
.. into this equation.
$Y = ¬A¬C + ¬C¬D + ¬AB$
Unfortunately I keep going in circles with expanding and minimizing the booleans. Any tips or advice? Thanks.
| Use the following simplification:
$\neg AB(\neg A + \neg D) = \neg AB\neg A + \neg AB\neg D = \neg AB + \neg AB\neg D = \neg AB$.
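The claimed equivalence is easy to verify exhaustively over all 16 truth assignments; a sketch:

```python
from itertools import product

def lhs(A, B, C, D):
    # Y = (¬A·B + ¬C)(¬A + ¬D)
    return (((not A) and B) or (not C)) and ((not A) or (not D))

def rhs(A, B, C, D):
    # Y = ¬A·¬C + ¬C·¬D + ¬A·B
    return ((not A) and (not C)) or ((not C) and (not D)) or ((not A) and B)

assert all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=4))
print("equivalent on all 16 assignments")
```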
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Motivation behind the definition of complete metric space What is motivation behind the definition of a complete metric space?
Intuitively, a metric space is complete if there are no points missing from it.
How does the definition of completeness (in terms of convergence of cauchy sequences) show that?
| I'm not going to add anything directly related to your question beyond the previous answers, but let me make some propaganda for a theorem I have liked since I was a student, and which, I believe, says something stronger than a comparison of some intuitive notion of completeness with its definition.
A somewhat related notion of completeness is the geodesical one. The definition may not be too appealing unless you're interested in differential geometry, but one of its consequences is easy to explain: if a Riemannian manifold is geodesically complete, you can join any two points by a length minimizing geodesic. (But doesn't "geodesic" already imply that it minimizes length? Not quite: just locally. So, for instance, the meridian joining the North Pole with London, but going "backward", through the Bering Strait and the Pacific Ocean, then the South Pole, Africa and finally London, is a geodesic, but blatantly not a length minimizing one.)
Anyway, $\mathbb{R^2} \backslash \left\{ (0,0)\right\} $ is not geodesically complete, since there is no length minimizing geodesic joining, say, $(-1,0)$ and $(1,0)$, due to the "hole" $(0,0)$. At the same time, as a metric space, $\mathbb{R^2} \backslash \left\{ (0,0)\right\}$ is not complete: the Cauchy sequence $(\frac{1}{n}, 0)$ converges to $(0,0)$, but since $(0,0)$ is not in $\mathbb{R^2} \backslash \left\{ (0,0)\right\}$ it doesn't have a limit there.
Well, the Hopf-Rinow theorem tells us that this kind of things always happen together: a "hole" for geodesics is the same as a "hole" for Cauchy sequences, since for a (finite-dimensional) Riemann manifold $M$, both notions agree: $M$ is complete as a metric space if and only if it is geodesically complete.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/6777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 4
} |