Evaluating $\int P(\sin x, \cos x) \text{d}x$ Suppose $\displaystyle P(x,y)$ is a polynomial in the variables $x,y$.
For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$.
Is there a general method which allows us to evaluate the indefinite integral
$$ \int P(\sin x, \cos x) \text{d} x$$
What about the case when $\displaystyle P(x,y)$ is a rational function (i.e. a ratio of two polynomials)?
Example of a rational function: $\displaystyle \frac{x^2y + y^3}{x+y}$.
This is being asked in an effort to cut down on duplicates, see here: Coping with *abstract* duplicate questions.
and here: List of Generalizations of Common Questions.
| Calculus books teach an annoying method based around using trig identities to reduce the integral to one where trig substitution can be applied. This method requires a little bit of guess-work to determine which identity should be applied, and my recollection is that it does not always work.
Here is a completely mechanical method which always works, although for simple $P$ it may require more calculation than a smarter method. Instead of using $\sin \theta, \cos \theta$, use the complex exponential $e^{i \theta}$; then Euler's formula $e^{i \theta} = \cos \theta + i \sin \theta$ tells you that
$$\cos \theta = \frac{e^{i \theta} + e^{-i \theta}}{2}, \sin \theta = \frac{e^{i \theta} - e^{-i \theta}}{2i}$$
and now the problem is reduced to integrating a sum of exponentials.
Example. The integral $\int_{0}^{2\pi} \cos^{2n} \theta \, d \theta$ has come up several times on math.SE in one form or another. It is readily solved using this method: write it as
$$\frac{1}{4^n} \int_0^{2\pi} (e^{i \theta} + e^{-i \theta})^{2n} \, d \theta$$
and note that when integrating from $0$ to $2\pi$ all of the terms vanish except the constant term, so the final answer is
$$\frac{2\pi}{4^n} {2n \choose n}.$$
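A quick numerical sanity check of this closed form (a Python sketch, not part of the original argument; the function name is made up, and the midpoint rule is just one convenient quadrature):

```python
import math

def cos_power_integral(n, steps=200_000):
    """Midpoint-rule approximation of the integral of cos(t)^(2n) over [0, 2*pi]."""
    h = 2 * math.pi / steps
    return sum(math.cos((k + 0.5) * h) ** (2 * n) for k in range(steps)) * h

for n in range(5):
    closed_form = 2 * math.pi / 4 ** n * math.comb(2 * n, n)
    assert abs(cos_power_integral(n) - closed_form) < 1e-8
```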
This method generalizes to the case when $P$ is replaced by a rational function; in that case the integrand becomes a rational function of $e^{i \theta}$ (rather than a Laurent polynomial) and using $u$-substitution we can reduce the problem to integrating a rational function, which can be done in a number of ways (partial fractions, residues...).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54",
"answer_count": 3,
"answer_id": 2
} |
Limits: How to evaluate $\lim\limits_{x\rightarrow \infty}\sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x$ This is being asked in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions, and here: List of abstract duplicates.
What methods can be used to evaluate the limit $$\lim_{x\rightarrow\infty} \sqrt[n]{x^{n}+a_{n-1}x^{n-1}+\cdots+a_{0}}-x.$$
In other words, if I am given a polynomial $P(x)=x^n + a_{n-1}x^{n-1} +\cdots +a_1 x+ a_0$, how would I find $$\lim_{x\rightarrow\infty} P(x)^{1/n}-x.$$
For example, how would I evaluate limits such as $$\lim_{x\rightarrow\infty} \sqrt{x^2 +x+1}-x$$ or $$\lim_{x\rightarrow\infty} \sqrt[5]{x^5 +x^3 +99x+101}-x.$$
| First note that
$$
\sqrt[n]{{x^n + a_{n - 1} x^{n - 1} + \cdots + a_0 }} = \sqrt[n]{{\bigg(x + \frac{{a_{n - 1} }}{n}\bigg)^n + O(x^{n - 2} )}}.
$$
By the mean value theorem applied to the function $f(y)=y^{1/n}$ (whose derivative is $n^{-1}y^{1/n-1}$), we have
$$
\sqrt[n]{{\bigg(x + \frac{{a_{n - 1} }}{n}\bigg)^n + O(x^{n - 2} )}} - \sqrt[n]{{\bigg(x + \frac{{a_{n - 1} }}{n}\bigg)^n }} = (x^n )^{1/n - 1} O(x^{n - 2} ) = O(x^{ - 1} ).
$$
Hence,
$$
\mathop {\lim }\limits_{x \to \infty } [\sqrt[n]{{x^n + a_{n - 1} x^{n - 1} + \cdots + a_0 }} - x] = \mathop {\lim }\limits_{x \to \infty } \bigg[\bigg(x + \frac{{a_{n - 1} }}{n}\bigg) + O(x^{ - 1}) - x\bigg] = \frac{a_{n - 1}}{n}.
$$
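One can sanity-check the limit $a_{n-1}/n$ by evaluating the expression at a large $x$ (a rough Python sketch; the helper name is made up):

```python
def limit_term(coeffs, x):
    """coeffs = [a_0, ..., a_{n-1}] for P(x) = x^n + a_{n-1} x^{n-1} + ... + a_0."""
    n = len(coeffs)
    value = x ** n + sum(a * x ** k for k, a in enumerate(coeffs))
    return value ** (1.0 / n) - x

# sqrt(x^2 + x + 1) - x  ->  a_1 / 2 = 1/2
assert abs(limit_term([1, 1], 1e6) - 0.5) < 1e-4
# (x^5 + x^3 + 99x + 101)^(1/5) - x  ->  a_4 / 5 = 0
assert abs(limit_term([101, 99, 0, 1, 0], 1e5)) < 1e-4
```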
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65",
"answer_count": 6,
"answer_id": 4
} |
Integral yielding part of a harmonic series Why is this true?
$$\int_0^\infty x \frac{M}{c} e^{(\frac{-x}{c})} (1-e^{\frac{-x}{c}})^{M-1} \,dx = c \sum_{k=1}^M \frac{1}{k}.$$
I already tried substituting $u = \frac{-x}{c}$. Thus, $du = \frac{-dx}{c}$ and $-c(du) = dx$. Then, the integral becomes (after cancellation) $\int_0^\infty c u M e^u (1-e^u)^{M-1}\,du$.
I looked at integral-table.com, and this wasn't there, and I tried wolfram integrator and it told me this was a "hypergeometric integral".
Thanks,
| Careful, the substitution changes the limits of integration. It needs to be the integral going over negative real numbers.
You could just expand $(1-e^u)^{M-1}$ in $$\int_0^{-\infty} c u M e^u (1-e^u)^{M-1}\,du$$ to get
$$cM \int_0^{-\infty} u \sum_{n=0}^{M-1} \binom{M-1}{n} (-1)^n e^{(n+1)u}du.$$ Rearrange the integral and sum to find
$$cM \sum_{n=0}^{M-1} \binom{M-1}{n} (-1)^n \int_0^{-\infty} u e^{(n+1)u}du.$$ The anti-derivative of $xe^{rx}$ is $\frac{x}{r}e^{rx}-\frac{1}{r^2}e^{rx}$.
Can you solve it from here?
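If you want to convince yourself of the identity before pushing the computation through, here is a quick numerical check (a Python sketch with made-up names; the integral is truncated at a finite upper limit since the integrand decays exponentially):

```python
import math

def lhs(M, c, upper=60.0, steps=200_000):
    """Midpoint-rule approximation of the original integral over [0, upper]."""
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x * (M / c) * math.exp(-x / c) * (1 - math.exp(-x / c)) ** (M - 1)
    return total * h

for M in (1, 2, 5):
    c = 1.5
    harmonic = sum(1.0 / k for k in range(1, M + 1))
    assert abs(lhs(M, c) - c * harmonic) < 1e-6
```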
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Sums of roots to approximate numbers I was thinking about random sequences, and thought up of this problem.
Can algebraic or rational numbers be expressed as a sum or difference of integer square roots?
For example, $5=\sqrt{25}=\sqrt{16}+\sqrt{1}$.
It is known that if the series is infinite any number can be approximated, but is it possible to get any algebraic number(rational or irrational) with a FINITE number of terms?
| Well certainly you cannot obtain a transcendental number as a sum of algebraics since algebraics are closed under sums. Nor can you obtain every algebraic number as a sum of radical integers (else every polynomial would be solvable!). But it's easy to see that one can obtain any rational integer.
Note: In fact Heine originally defined algebraic integers to be radical integers, i.e. the ring obtained by closing $\rm\:\mathbb Z\:$ under taking $\rm\:n$'th roots. Heine claimed that every solvable algebraic integer is a radical integer - a problem which is still open according to Franz Lemmermeyer. However, it is not too difficult to show that every quadratic integer is a radical integer, e.g.
$$ \frac{\sqrt{17}+1}{2}\ =\ \frac{\sqrt{17}+\sqrt{5}}{2}\ -\ \frac{\sqrt{5}-1}{2}\ =\:\ (7\ \sqrt{5} + 4\ \sqrt{17})^{1/3} - (\sqrt{5}-2)^{1/3}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Weyl Group, Lie algebra How can I prove that any element of order 2 in a Weyl group is the product of commuting root reflections? I also need to show that the only reflections in a Weyl group are the root reflections.
| Let $R\subseteq W$ be the set of reflections in the hyperplanes orthogonal to the roots. Of course, $R$ generates $W$, and there is then a function $\ell:W\to\mathbb N_0$ such that for each $w\in W$ the number $\ell(w)$ is the minimal length of an expression of $w$ as a product of elements of $R$. One can easily show that $\ell(w)$ is the number of eigenvalues of $w$ different from $1$ in the defining representation (in other words, the codimension of the subspace fixed by $w$).
From this it follows immediately that the only reflections are the elements of $R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding inverse of congruence class involving complex number How do you find the inverse of $[2+i]_{3}$? I changed it into solving for x in $(2+i)x \equiv 1 \pmod{3}$, and tried to solve for x with extended euclid algorithm, but with no luck. Am I supposed to do something different with complex numbers?
Thanks!
| Douglas's answer is good and efficient. In case you wouldn't have come up with that idea, there's also a "pedestrian" way to do this: Write $x$ as $a+\mathrm{i}b$ and then consider $(2+\mathrm{i})(a+\mathrm{i}b)\equiv 1+0\mathrm{i}\pmod{3}$ as two real equations, one for the real part and one for the imaginary part. That gives you two linear equations for two unknowns, which you can solve like any old system of linear equations since the congruence classes mod $3$ form a field.
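For this particular modulus the pedestrian approach is small enough to brute-force; here is a quick Python check (illustrative only):

```python
# Search Z_3[i] for a + b*i with (2 + i)(a + b*i) ≡ 1 (mod 3).
# Expanding: (2 + i)(a + b*i) = (2a - b) + (a + 2b) i.
solutions = [(a, b) for a in range(3) for b in range(3)
             if (2 * a - b) % 3 == 1 and (a + 2 * b) % 3 == 0]
print(solutions)  # [(1, 1)], i.e. the inverse is 1 + i
# Check: (2 + i)(1 + i) = 1 + 3i ≡ 1 (mod 3).
```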
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Fast algorithm for solving system of linear equations I have a system of $N$ linear equations, $Ax=b$, in $N$ unknowns (where $N$ is large).
If I am interested in the solution for only one of the unknowns, what are the best approaches?
For example, assume $N=50,000$. We want the solution for $x_1$ through $x_{100}$ only. Is there any trick that does not require $O(n^{3})$ work (i.e., the cost of a matrix inversion)?
| Unless your matrix is sparse or structured (e.g. Vandermonde, Hankel, or those other named matrix families that admit a fast solution method), there is not much hope of doing things better than $O(n^3)$ effort. Even if one were to restrict himself to solving for just one of the 50,000 variables, Cramer will demand computing two determinants for your answer, and the effort for computing a determinant is at least as much as decomposing/inverting a matrix to begin with.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 0
} |
Finding invertible polynomials in polynomial ring $\mathbb{Z}_{n}[x]$
Is there a method to find the units in $\mathbb{Z}_{n}[x]$?
For instance, take $\mathbb{Z}_{4}$. How do we find all invertible polynomials in $\mathbb{Z}_{4}[x]$? Clearly $2x+1$ is one. What about the others? Is there any method?
| Lemma 1. Let $R$ be a commutative ring. If $u$ is a unit and $a$ is nilpotent, then $u+a$ is a unit.
Proof. It suffices to show that $1-a$ is a unit when $a$ is nilpotent. If $a^n=0$ with $n\gt 0$, then
$$(1-a)(1+a+a^2+\cdots+a^{n-1}) = 1 - a^n = 1.$$
QED
Lemma 2. If $R$ is a ring, and $a$ is nilpotent in $R$, then $ax^i$ is nilpotent in $R[x]$.
Proof. Let $n\gt 0$ be such that $a^n=0$. Then $(ax^i)^n = a^nx^{ni}=0$. QED
Lemma 3. Let $R$ be a commutative ring. Then
$$\bigcap\{ \mathfrak{p}\mid \mathfrak{p}\text{ is a prime ideal of }R\} = \{a\in R\mid a\text{ is nilpotent}\}.$$
Proof. If $a$ is nilpotent, then $a^n = 0\in\mathfrak{p}$ for some $n\gt 0$ and all prime ideals $\mathfrak{p}$, and $a^n\in\mathfrak{p}$ implies $a\in\mathfrak{p}$.
Conversely, if $a$ is not nilpotent, then the set of ideals that do not contain any positive power of $a$ is nonempty (it contains $(0)$) and closed under increasing unions, so by Zorn's Lemma it contains a maximal element $\mathfrak{m}$. If $x,y\notin\mathfrak{m}$, then the ideals $(x)+\mathfrak{m}$ and $(y)+\mathfrak{m}$ strictly contain $\mathfrak{m}$, so there exist positive integers $m$ and $n$ such that $a^m\in (x)+\mathfrak{m}$ and $a^n\in (y)+\mathfrak{m}$. Then $a^{m+n}\in (xy)+\mathfrak{m}$, so $xy\notin\mathfrak{m}$. Thus, $\mathfrak{m}$ is prime, so $a$ is not in the intersection of all prime ideals of $R$. QED
Theorem. Let $R$ be a commutative ring. Then
$$p(x) = a_0 + a_1x + \cdots + a_nx^n\in R[x]$$
is a unit in $R[x]$ if and only if $a_0$ is a unit of $R$, and each $a_i$, $i\gt 0$, is nilpotent.
Proof. Suppose $a_0$ is a unit and each $a_i$ is nilpotent. Then $a_ix^i$ is nilpotent by Lemma 2, and applying Lemma 1 repeatedly we conclude that $a_0+a_1x+\cdots+a_nx^n$ is a unit in $R[x]$, as claimed.
Conversely, suppose that $p(x)$ is a unit. If $\mathfrak{p}$ is a prime ideal of $R$, then reduction modulo $\mathfrak{p}$ of $R[x]$ maps $R[x]$ to $(R/\mathfrak{p})[x]$, which is a polynomial ring over an integral domain; since the reduction map sends units to units, it follows that $\overline{p(x)}$ is a unit in $(R/\mathfrak{p})[x]$, hence $\overline{p(x)}$ is constant. Therefore, $a_i\in\mathfrak{p}$ for all $i\gt 0$.
Therefore, $a_i \in\bigcap\mathfrak{p}$, the intersection of all prime ideals of $R$. The intersection of all prime ideals of $R$ is precisely the set of nilpotent elements of $R$, which establishes the result. QED
For $R=\mathbb{Z}_n$, let $d$ be the squarefree root of $n$ (the product of all distinct prime divisors of $n$). Then a polynomial $a_0+a_1x+\cdots+a_nx^n\in\mathbb{Z}_n[x]$ is a unit if and only if $\gcd(a_0,n)=1$, and $d|a_i$ for $i=1,\ldots,n$. In particular if $n$ is squarefree, the only units in $\mathbb{Z}_n[x]$ are the units of $\mathbb{Z}_n$.
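A small Python check of the theorem for $\mathbb{Z}_4[x]$ (the names and scope here are mine; the search only covers degree-$\le 1$ polynomials, which happen to be closed under inversion in this case, each such unit being its own inverse):

```python
from itertools import product

def polymul_mod(p, q, n):
    """Multiply two coefficient lists (lowest degree first) with coefficients mod n."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % n
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return r

# 2x + 1 is its own inverse in Z_4[x]: (2x+1)^2 = 4x^2 + 4x + 1 ≡ 1 (mod 4).
assert polymul_mod([1, 2], [1, 2], 4) == [1]

# The unit polynomials a_0 + a_1 x in Z_4[x] should be exactly those with
# a_0 in {1, 3} (units mod 4) and a_1 in {0, 2} (nilpotents mod 4).
units = [p for p in product(range(4), repeat=2)
         if any(polymul_mod(list(p), list(q), 4) == [1]
                for q in product(range(4), repeat=2))]
assert sorted(units) == [(1, 0), (1, 2), (3, 0), (3, 2)]
```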
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Why in an inconsistent axiom system every statement is true? (For Dummies) I would like to know if someone can explain, in a somehow down-to-earth (almost logic-free) way, why it is true that in an axiom system in which there is some statement $P$ such that both $P$ and its negation $\lnot P$ are true, every statement in the system is true.
I'm not sure if this can be done, but basically since I don't know any formal logic at all, I'm interested in seeing if at least the argument can be conveyed in an intuitive way, or if the idea can be explained without talking about first or second order logic and using symbols like $\top$, $\bot$, and $\vdash$.
This previous question is like the formal version (which I don't understand) so maybe my question can be thought of as a version for dummies of that question.
Thanks a lot in advance.
| In basic sentence logic: there is a sentence $P$ such that $P$ implies $S$ and $P$ implies $\lnot S$.
Then, take any statement $Q$ in your system and assume it. Introduce the sentence $P$, from which $S$ and $\lnot S$ follow. The assumption of $Q$ thus leads to the contradiction $S \land \lnot S$, so by reductio, $\lnot Q$ follows; assuming $\lnot Q$ instead yields $Q$ by the same argument. You can do this for any sentence $Q$ in your system.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 7,
"answer_id": 4
} |
Continuous Collatz Conjecture Has anyone studied the real function
$$ f(x) = \frac{ 2 + 7x - ( 2 + 5x )\cos{\pi x}}{4}$$ (and $f(f(x))$ and $f(f(f(x)))$ and so
on) with respect to the Collatz conjecture?
It does what Collatz does on integers, and is defined smoothly on all
the reals.
I looked at $$\frac{ \overbrace{ f(f(\cdots(f(x)))) }^{\text{$n$ times}} }{x}$$ briefly, and it appears to have bounds independent of $n$.
Of course, the function is very wiggly, so Mathematica's graph is
probably not $100\%$ accurate.
| Yes, it has been studied in
Xing-yuan Wang and Xue-jing Yu (2007), Visualizing generalized 3x+1 function dynamics
based on fractal, Applied Mathematics and Computation 188 (2007), no. 1, 234–243.
(MR2327110).
I have found this reference in Jeffrey Lagarias's "The $3x + 1$ Problem: An Annotated Bibliography, II (2000-2009)".
This real/complex interpolation of yours is not the only one imaginable, you will find several others either in Lagarias' article cited above, or in the first part of this series of articles, namely in "The $3x+1$ problem: An annotated bibliography (1963--1999)".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 2
} |
Complicated Logic Proof involving Tautology and Law of Excluded Middle I'm having great difficulty solving the following problem, and even figuring out where to start with the proof.
$$
\neg A\lor\neg(\neg B\land(\neg A\lor B))
$$
Please see the following examples of how to do proofs, I would appreciate it if you could attempt to give me guidance using the tools and the line numbers that it cites similar to those below:
This is a sample proof:
This is another sample proof (law of excluded middles):
| Here is a proof using a Fitch-style proof checker.
Assume the negation of what one is trying to prove on line 1 to attempt to derive a contradiction.
From lines 2 to 9, use De Morgan rule (DeM), conjunction elimination (∧E), double negative elimination (DNE) and disjunctive syllogism (DS) to simplify so one can arrive at two lines, 4 and 9, which are contradictory. That contradiction allows one to derive the goal on line 11 using indirect proof (IP).
Links to the proof checker and associated text are below.
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/
P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Fall 2019. http://forallx.openlogicproject.org/forallxyyc.pdf
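Independently of the particular proof system, one can confirm that the formula is a tautology with a brute-force truth table (a small Python sketch, not part of the proof itself):

```python
from itertools import product

def formula(A, B):
    # ¬A ∨ ¬(¬B ∧ (¬A ∨ B))
    return (not A) or not ((not B) and ((not A) or B))

# True on all four rows of the truth table, so the formula is a tautology.
assert all(formula(A, B) for A, B in product([False, True], repeat=2))
```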
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Prove $\sim$ is an equivalence relation: $x \sim y$ if and only if $y = 3^kx$, where $k$ is a real number The relation $\sim$ is defined on $\mathbb{Z}^+$ (all positive integers). We say $x\sim y$ if and only if $y=3^kx$ for some real number $k$.
I need to prove that $\sim$ is an equivalence relation.
To prove an equivalence relation we must certify that:
*
*It is reflexive
*It is symmetric
*It is transitive
I am not sure how to start this off since if I want to prove for:
Reflexive: I would replace $y$ with $x$, so that $x = (3^k)*x$ which is a positive integer when $k > 0$. Thus $x \sim x$ and $\sim$ is reflexive (?).
Symmetric: Suppose $x \sim y$ so that $y = (3^k)*x$ then $x = (3^k)*y$ is also an element of $\mathbb{Z}^+$. Thus $y \sim x$ and $\sim$ is symmetric (?).
Transitive: I'm not too sure about this last one. I think my entire proof is wrong anyways.
Can anyone help me on this?
| Isn't this relation true for all $x,y$ with the same sign? Since $3^k$, $k$ in reals, can take any positive real number?
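That observation settles all three properties at once, since the witnessing exponent is just $k=\log_3(y/x)$: $k=0$ gives reflexivity, negating $k$ gives symmetry, and exponents add for transitivity. A quick numerical illustration (Python; the helper name is mine):

```python
import math

def k_for(x, y):
    """The real exponent k with y = 3^k * x (exists for any positive x, y)."""
    return math.log(y / x, 3)

x, y, z = 7, 12, 5
assert math.isclose(3 ** k_for(x, x) * x, x)                 # reflexive: k = 0
assert math.isclose(k_for(y, x), -k_for(x, y))               # symmetric: negate k
assert math.isclose(k_for(x, z), k_for(x, y) + k_for(y, z))  # transitive: add exponents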
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Find the missing term The sequence $10000, 121, 100, 31, 24, n, 20$ represents a number $x$ with respect to different bases. What is the missing number, $n$?
This is from my elementary computer aptitude paper. Is there any way to solve this quickly?
| The bases are 2,3,4,5,6,7,8, expressing the values $1\cdot16=16$, $9+6+1=16$, $1\cdot16=16$, $15+1=16$, $12+4=16$, $7x+y=16$, and $2\cdot8=16$. So $x$ and $y$ are both 2, and the missing term is $n=22$ (that is, 16 written in base 7).
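This is easy to verify with Python's built-in base conversion (a quick check, not part of the original answer):

```python
# Each given term is 16 written in bases 2 through 8; int(s, base) converts back.
terms = ["10000", "121", "100", "31", "24", None, "20"]
for base, s in enumerate(terms, start=2):
    if s is not None:
        assert int(s, base) == 16

# The missing term is 16 written in base 7:
def to_base(value, base):
    digits = ""
    while value:
        digits = str(value % base) + digits
        value //= base
    return digits

assert to_base(16, 7) == "22"
```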
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How can I evaluate $\sum_{n=0}^\infty(n+1)x^n$? How can I evaluate
$$\sum_{n=1}^\infty\frac{2n}{3^{n+1}}$$?
I know the answer thanks to Wolfram Alpha, but I'm more concerned with how I can derive that answer. It cites tests to prove that it is convergent, but my class has never learned these before. So I feel that there must be a simpler method.
In general, how can I evaluate $$\sum_{n=0}^\infty (n+1)x^n?$$
| I assume that $|x|$ is less than $1$. Now, consider
$f(x)=\sum_{n=0}^{n=\infty} x^{n+1}$
This will converge only if $|x|<1$. Now, the interesting thing here is that this is a geometric progression, so $f(x)=x/(1-x)$.
$f'(x)$ is the series you are interested in, right? Differentiate $x/(1-x)$ and you have your expression!
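To see the numbers work out, here is a quick Python check of both the differentiated geometric series and the original sum (a sketch, not part of the derivation):

```python
# sum_{n>=0} (n+1) x^n = d/dx [x/(1-x)] = 1/(1-x)^2, checked at x = 1/3:
x = 1 / 3
assert abs(sum((n + 1) * x ** n for n in range(200)) - 1 / (1 - x) ** 2) < 1e-12

# The original series: sum_{n>=1} 2n/3^(n+1) = 1/2.
assert abs(sum(2 * n / 3 ** (n + 1) for n in range(1, 200)) - 0.5) < 1e-12
```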
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "438",
"answer_count": 23,
"answer_id": 7
} |
What is a good complex analysis textbook, barring Ahlfors's? I'm out of college, and trying to self-learn complex analysis. I'm finding Ahlfors' text difficult. Any recommendations? I'm probably at an intermediate sophistication level for an undergrad. (Bonus points if the text has a section on the Riemann Zeta function and the Prime Number Theorem.)
| "Complex Analysis with Applications" by Richard Silverman is a gentle introduction to the subject. Only covers the basics, but explains them in a crystal clear style. http://store.doverpublications.com/0486647625.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "195",
"answer_count": 28,
"answer_id": 8
} |
Spivak's Calculus: Chapter 12, Problem 26
Suppose that $f(x) > 0$ for all $x$, and that $f$ is decreasing. Prove that there is a continuous decreasing function $g$ such that $0 < g(x) \le f(x)$ for all $x$.
To be quite honest, I have no idea how to approach this problem. (I also have no clue about the second part, but I imagine a hint at this solution will help me along for part b.)
I thought about setting $g(x) = f(x + k)$ for some $k > 0$, but I don't know how to get continuity. I should also note that this is in the "Inverse Functions" chapter, so that must play some sort of role here, but I'm not sure how really.
Any hint at how to think about the problem, specifically about the "continuous" part of it would be much appreciated. Thanks.
| Define $g$ to be piecewise linear with $g(n)=f(n+1)$. To elaborate, $g$ will be decreasing and continuous with
$$
g(x)\leq g(\lfloor x\rfloor)=f(\lfloor x\rfloor+1)\leq f(x).
$$
Or better yet, draw a picture.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Does $a^2+b^2=1$ have infinitely many solutions for $a,b\in\mathbb{Q}$? Does $a^2+b^2=1$ have infinitely many solutions for $a,b\in\mathbb{Q}$?
I'm fairly certain it does, but I'm hoping to see a rigorous proof of this statement. Thanks.
Here is my motivation. I'm working on a strange little problem. I'm working in a geometry over an ordered field. Suppose I have a circle $\Gamma$ with center $A$ passing through a point $B$. I want to prove that there are infinitely many points on $\Gamma$. Up to a change of variable, I'm considering the unit circle centered on the origin over $\mathbb{Q}$. To show there are an infinite number of points on $\Gamma$, it suffices to show there are an infinite number of solutions to $a^2+b^2=1$ for $a,b\in\mathbb{Q}$. I could then extend this to showing there are infinite number of solutions to $a^2+b^2=r^2$ for some $r$, which proves that any circle over $\mathbb{Q}$ has an infinite number of points. Then since any ordered field has a subfield isomorphic to $\mathbb{Q}$, I would be finished.
| The answer is yes. This is most easily seen via stereographic projection, as described here.
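Concretely, the projection parametrizes the rational points: each rational $t$ gives the point $\left(\frac{1-t^2}{1+t^2},\frac{2t}{1+t^2}\right)$, and distinct $t\in(0,1]$ give distinct points. A quick exact-arithmetic check in Python (an illustration, with names of my own choosing):

```python
from fractions import Fraction

def circle_point(t):
    """Stereographic projection from (-1, 0): t -> ((1 - t^2)/(1 + t^2), 2t/(1 + t^2))."""
    t = Fraction(t)
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

# Distinct t in (0, 1] give distinct rational points, so there are infinitely many.
points = {circle_point(Fraction(1, q)) for q in range(1, 50)}
assert len(points) == 49
assert all(a * a + b * b == 1 for a, b in points)
```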
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
} |
Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus\{(0,0)\}$ Show that $\mathbb{R}^2$ is not homeomorphic to $\mathbb{R}^2 \setminus \{(0,0)\} $.
| $\mathbb{R}^2\backslash\{(0,0)\}\cong S^1\times\mathbb{R}$. you can use co/homology or fundamental groups (if that's in your toolkit). you can note that one is contractible and the other is homotopy equivalent to a circle (and tell those apart). you can note that one has euler characteristic 0 and the other has euler characteristic 1 (if this is something you can calculate).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
How to compare randomness of two sets of data? Given two sets of random numbers, is it possible to say that one set of random numbers has a greater degree of randomness when compared to the other?
Or one set of numbers is more random when compared to the other?
EDIT:
Consider this situation:
A hacker needs to know the target address where a heap/library/base of the executable is located. Once he knows the address he can take advantage of it and compromise the system.
Previously, the location was fixed across all computers and so it was easy for the hackers to hack the computer.
There are two software S1 and S2. S1 generates a random number where the heap/library/base of the executable is located. So now, it is difficult for the hacker to predict the location.
Between S1 and S2, both of which have random number generators, which one is better? Can we compare based on the random numbers generated by each software?
| It seems to me that it is not so important that the numbers are random (which I assume they are) but whether the underlying pseudo-random algorithm can be detected. You are using a software RNG, so by definition it is not truly random. I suspect that if you are dealing with 2 good software packages the RNGs in each are good, so continually testing for randomness and statistical significance is not going to help a lot, beyond the initial screen. In fact, if you do too much testing you can come up with a false positive. Perhaps additional tests are in order, such as visual pattern generation, which uses us humans to try to pick out sequences in the data that a program simply would not be able to.
http://www.random.org/analysis/
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/30996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
An integral Lobachevsky calculated incorrectly $\int_{0}^{\infty}\frac{(e^x-e^{-x})x}{e^{2x}+e^{-2x}+2\cos(2a)}dx$ In a recent lecture a professor told a story about the integral below. Lobachevsky at first calculated this integral incorrectly. Following the publication of the integral, Ostrogradsky sent a letter with the correct answer to Lobachevsky.
What is the right answer?
$$I(a)=\int_{0}^{\infty}\frac{(e^x-e^{-x})x}{e^{2x}+e^{-2x}+2\cos(2a)}dx$$
with $0\leq a \leq \pi$.
| We can restrict the values of $a$ to be between $0$ and $\pi/2$ as $\cos(2a)= \cos[2(\pi-a)]$. With this $$I(a) = \frac{\pi a}{4 \sin(a)} \qquad 0\leq a \leq \frac{\pi}{2}.$$
The calculation can be done along the following line:
*
*As the integrand is symmetric the integration region can be extended to the full real line
$$I(a) = \frac{1}{2} \int_{-\infty}^{\infty}dx\, \frac{(e^x-e^{-x})x}{e^{2x}+e^{-2x}+2\cos(2a)}.$$
*The substitution $z= e^{x}$ brings the integral onto the form
$$I(a) = \frac{1}{2} \int_0^\infty dz\,\frac{(z^2-1) \log z}{z^4 + 2 z^2 \cos(2a) + 1}.$$
*A standard trick can be employed to bring it on the form
$$I(a) = \frac{1}{4} \sum_{z_n} \,\text{Res}_{z=z_n} \frac{(z^2-1) \log^2 z}{z^4 + 2 z^2 \cos(2a) + 1}$$ where $z_n$ are the zeros of $z^4 + 2 z^2 \cos(2a) + 1$ and the branch cut of $\log$ is along the negative real line.
*The 4 zeros of $z^4 + 2 z^2 \cos(2a) + 1$ are given by $\bar z=\pm i e^{\pm i a}$. The residues assume the form $$\text{Res}_{z=\bar z} \frac{(z^2-1) \log^2 z}{z^4 + 2 z^2 \cos(2a) + 1} = \frac{(\bar z^2 - 1)\log^2 \bar z}{4 \bar z^3 +4 \bar z \cos 2a}.$$
*Putting everything together, we obtain
the result quoted above (after some tedious but straightforward calculation).
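A numerical comparison against the quoted answer $I(a)=\pi a/(4\sin a)$ for a few values of $a\in(0,\pi/2)$ (a rough Python sketch; the integral is truncated at a finite upper limit since the integrand decays like $x e^{-x}$):

```python
import math

def I(a, upper=40.0, steps=200_000):
    """Midpoint-rule approximation of the integral, truncated at x = upper."""
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += (math.exp(x) - math.exp(-x)) * x / (
            math.exp(2 * x) + math.exp(-2 * x) + 2 * math.cos(2 * a))
    return total * h

for a in (0.3, 1.0, 1.5):
    assert abs(I(a) - math.pi * a / (4 * math.sin(a))) < 1e-5
```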
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
complex/trigonometric series How can I find the sum of the series $\sum_{n=1} ^\infty n^2(\cos(nx) + i \sin(nx))$?
| As joriki pointed out, the series diverges for real $x$. I will in the following assume that $\text{Im} x >0$.
We have
$$
\begin{align}
\sum_{n=1}^\infty n^2[\cos(nx) + i \sin(nx)]
&= \sum_{n=0} ^\infty n^2 e^{i n x} = -\partial_x^2 \sum_{n=0} ^\infty e^{i n x}\\
&=\partial_x^2 \frac{1} {e^{ix}-1}.
\end{align}$$
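Writing $u=e^{ix}$, that second derivative works out to $u(1+u)/(1-u)^3$, which can be checked numerically for a particular $x$ with $\text{Im}\, x>0$ (a Python sketch, not part of the derivation):

```python
import cmath

x = 0.5 + 1.0j                        # Im(x) > 0, so |e^{ix}| = e^{-1} < 1
u = cmath.exp(1j * x)

partial = sum(n * n * u ** n for n in range(1, 400))
closed = u * (1 + u) / (1 - u) ** 3   # the second derivative, expanded in u

assert abs(partial - closed) < 1e-10
```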
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is a graph simple, given the number of vertices and the degree sequence? Does there exist a simple graph with five vertices of the following degrees?
(a) 3,3,3,3,2
I know that the answer is no, however I do not know how to explain this.
(b) 1,2,3,4,3
No, as the sum of the degrees of an undirected graph is even.
(c) 1,2,3,4,4
Again, I believe the answer is no however I don't know the rule to explain why.
(d) 2,2,2,1,1
Same as above.
What method should I use to work out whether a graph is simple, given the number of vertices and the degree sequence?
| (a) 3,3,3,3,2 - YES! Graph Justifies claim
(b)1,2,3,4,3 - NO -Follows from the Handshaking Lemma
(c) 1,2,3,4,4 - NO! The Handshaking Lemma raises no objection here (the degree sum is even), but the two vertices of degree 4 would each have to be adjacent to all four other vertices, so every vertex would have degree at least 2, contradicting the vertex of degree 1.
(d)2,2,2,1,1 - YES! Graph Justifies Claim
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Tips for understanding the unit circle I am having trouble grasping some of the concepts regarding the unit circle. I think I have the basics down but I do not have an intuitive sense of what is going on. Is memorizing the radian measurements and their corresponding points the only way to master this? What are some ways one can memorize the unit circle?
Edit for clarification: I think my trouble arises by the use of $\pi$ instead of degrees. We started graphing today and the use of numbers on the number line were again being referred to with $\pi$. Why is this?
| The cool thing about radians is that they relate linear measure to angular measure. So if we go $x$ radians around a circle of radius $r$ units, then we've traveled $xr$ units of length. This comes from the formula $C = 2\pi r$. You can see that $2\pi$ is the ratio of the circumference of a whole circle to its radius and that this formula can be viewed as creating a function that maps radii to circumferences. So if we go halfway around the circle, then we've traveled only half the distance and we must have $C/2 = \frac{2\pi}{2}r = \pi r$. Again, we see that $\pi$ radians can be viewed as a function mapping radii to lengths of half circles. You want a quarter circle? Well, $C/4 = \frac{2\pi}{4}r = \frac{\pi}{2}r$ and we see that to figure out the length of a quarter circular arc, one just multiplies by $\frac{\pi}{2}$.
So I would forget all about degrees and think in terms of what part of the circle an angle sweeps out. You'll get some real number between $0$ and $1$ ($1$, $\frac{1}{2}$, and $\frac{1}{4}$ in the cases we've looked at). Multiply that number by $2\pi$ and that's how many radians the angle is.
EDIT:
And note that my restriction that the number be between $0$ and $1$ is arbitrary. What about traveling around a circle twice? Then you've gone $2C = 2 \cdot 2\pi r = 4\pi r$ times around the circle and traveled around an angle of $4\pi$ radians. Nothing at all is different.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 4
} |
What is this shape? $C = \{(c_1,c_2):c_1^2 + c_2^2 \leq 1 \}$
$G = \{(g_1,g_2): g_1 = a_1 + d_1, g_2 = a_2 + d_2, d_1^2 + d_2^2 \leq 1 \}$
C is a unit circle centered at the origin, and G is a unit circle centered at $(a_1, a_2)$.
Define:
$X = \{(x_1,x_2): x_1 = c_1 g_1, x_2 = c_2 g_2, (c_1,c_2)\in C, (g_1, g_2)\in G\} $
What is the shape of $X$? Is there any name of it, or any other method to express like a polynomial equation? I thought it might be a ellipse or a circle, but it seems not.
| For any point $(g_1,g_2) \in G$, $\{(c_1 g_1, c_2 g_2): (c_1, c_2) \in C\}$ is an ellipse centred at the origin with semi-axes $|g_1|$ (in the $x$ direction) and $|g_2|$ (in the $y$ direction), and thus the equation $(x/g_1)^2 + (y/g_2)^2 \le 1$. Taking $g_1 = a_1 + \cos \theta$ and $g_2 = a_2 + \sin \theta$, the envelope of these ellipses will be determined by the equations $F(x,y,\theta) = 0$ and $F_\theta(x,y,\theta) = 0$, where $F(x,y,\theta) = (\frac{x}{a_1 + \cos \theta})^2 + (\frac{y}{a_2 + \sin \theta})^2 - 1$. In principle you can write $\cos(\theta) = c$ and $\sin(\theta) = s$, eliminate $s$ and $c$ from these equations plus $c^2 + s^2 = 1$, and be left with a polynomial equation in $x$ and $y$. It's looking pretty complicated, though.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to prove a number $a$ is a quadratic residue modulo $n$? In general, to show that $a$ is a quadratic residue modulo $n$, what do I have to show? I'm always struggling with proving that a number $a$ is a quadratic residue or a quadratic non-residue.
For example,
If $n = 2^{\alpha}m$, where $m$ is odd, and $(a, n) = 1$.
Prove that $a$ is quadratic residue modulo $n$ iff the following are satisfied:
If $\alpha = 2$ then $a \equiv 1 \pmod{4}$.
I just want to know what do I need to show in general, because I want to solve this problem on my own. Any suggestion would be greatly appreciated.
Thank you.
| Suppose $n=4\cdot m$ where $m$ is odd, and $a\in\mathbb{Z}$ is a quadratic residue modulo $n$ that is coprime to $n$. Thus there is an $x\in\mathbb{Z}$ such that $x^2\equiv a\bmod n$. Because $a$ is coprime to $n$, it must be odd. If $x$ were even, then $x^2$ would be divisible by 4, hence $x^2-a$ could not be divisible by 4, much less $n=4\cdot m$. Thus $x$ must be odd, say $x=2y+1$. Then
$$x^2=(2y+1)^2=4y^2+4y+1\equiv a\bmod 4m$$
and thus reducing mod 4, we have that $a\equiv 1\bmod 4$.
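The statement being proved can be sanity-checked by brute force for a few moduli of the form $n = 4m$ with $m$ odd (a sketch; the helper name is mine):

```python
from math import gcd

def unit_quadratic_residues(n):
    """Squares of the units modulo n: quadratic residues coprime to n."""
    return {pow(x, 2, n) for x in range(1, n) if gcd(x, n) == 1}

# For n = 4*m with m odd, every such residue should be congruent to 1 mod 4.
for m in (1, 3, 5, 7, 9, 11):
    n = 4 * m
    assert all(a % 4 == 1 for a in unit_quadratic_residues(n))
print("checked n = 4m for m = 1, 3, 5, 7, 9, 11")
```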
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is the probability of success given two trials with a 1/16 chance each? I have never been able to wrap my head around probability, and I often find that my intuition is wrong. In this case, I don't even have intuition as to where to begin.
If I have two trials, each with a 1/16 chance of success, what are the chances that either or both of them result in success? How, mathematically, do you arrive at the correct probability? How, intuitively, can I understand this number?
| The probability that neither trial is successful is $(15/16)^2$ (assuming that the trials are independent), and the chances that at least one trial is successful is one minus that: $1-(15/16)^2 = 31/256$.
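The same computation can be done exactly with rational arithmetic, and checked against a small simulation (a sketch, assuming independent trials as stated above):

```python
from fractions import Fraction
import random

# Complement rule: P(at least one success) = 1 - P(no successes)
p_success = Fraction(1, 16)
p_at_least_one = 1 - (1 - p_success) ** 2
print(p_at_least_one)  # 31/256

# Monte Carlo sanity check
random.seed(0)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if random.random() < 1 / 16 or random.random() < 1 / 16
)
print(hits / trials)  # should be close to 31/256, about 0.121
```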
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Invariant Subspace of Two Linear Involutions I'd love some help with this practice qualifier problem:
If $A$ and $B$ are two linear operators on a finite dimensional complex vector space $V$ such that $A^2=B^2=I$ then show that $V$ has a one or two dimensional subspace invariant under $A$ and $B$.
Thanks!
| This is not true in general, I guess you omitted part of the question (something like "$V$ is a complex vector space").
The dimension of subspaces stable under linear operators is something that depends much on the base field $K$.
Let us assume we are in a situation where your argument (more precisely the variation thereof pointed out by Qiaochu and Arturo) does not work, i.e. $E^A_{i} \cap E^B_{j} = \{ 0 \}$ for all $i,j$.
As Arturo explained, there is no one dimensional subspace invariant under $A$ and $B$ in this case.
Moreover, any subspace invariant under $A$ and $B$ is not contained in any of the $E^A_i$, $E^B_j$.
The linear maps $p_{i,j} : E^A_i \rightarrow E^B_j$ which are the restriction to $E^A_i$ of the projection along $E^B_{-j}$ onto $E^B_{j}$, are isomorphisms.
A two-dimensional subspace of $V$ invariant under $A$ but not contained in $E^A_{\pm 1}$ is of the form $Kx \oplus Ky$ where $x \in E^A_1$, $y \in E^A_{-1}$ are not zero.
The same is true for $B$ instead of $A$, so $p_{1,1}(x)$ and $p_{-1,1}(y)$ are colinear.
We can scale $y$ and assume that $y = p_{-1,1}^{-1} \circ p_{1,1}(x)$.
Similarly, $p_{1,-1}(x)$ and $p_{-1,-1}(y)$ have to be colinear, so $p_{1,-1}^{-1} \circ p_{-1,-1} \circ p_{-1,1}^{-1} \circ p_{1,1} (x) = \lambda x$, and so such an $x$ exists iff $p_{1,-1}^{-1} \circ p_{-1,-1} \circ p_{-1,1}^{-1} \circ p_{1,1}$ has an eigenvector.
Over $\mathbb{R}$, it can happen that this invertible operator has no eigenvector, for example if we take $A=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$ and $B=\begin{pmatrix} 0 & -1 & -1 & 1 \\ 1 & 0 & -1 & -1 \\ -1 & -1 & 0 & 1 \\ 1 & -1 & -1 & 0 \end{pmatrix}$, the aforementioned operator has matrix $\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ which has no real eigenvalue.
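One can at least verify numerically that the matrices above really are involutions (a quick check of $A^2 = B^2 = I$ only; it does not re-derive the eigenvector claim):

```python
import numpy as np

A = np.diag([1, 1, -1, -1])
B = np.array([
    [ 0, -1, -1,  1],
    [ 1,  0, -1, -1],
    [-1, -1,  0,  1],
    [ 1, -1, -1,  0],
])

I = np.eye(4)
print(np.array_equal(A @ A, I), np.array_equal(B @ B, I))  # True True
```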
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Why $1$ is the only quadratic residue modulo $8$? I'm trying to understand the Proposition 5.1.1 - Ireland and Rosen, A Classical Introduction to Modern Number Theory, p.50, however, I can't understand why this argument is true:
$1$ is the only quadratic residue mod $8$.
I wrote a program to generate all quadratic residue modulo $8$, from $0$ to $7$
0 -> 0
1 -> 1
4 -> 4
9 -> 1
16 -> 0
25 -> 1
36 -> 4
I saw $4$ there, so how come only $1$ satisfied?
The original text was,
Thank you,
@Bill Dubuque: Thank you for the reference.
| Quadratic residues are usually taken from the unit group, and $4$ is not in $U_8$ (it is not coprime to $8$), so it does not count.
The wikipedia article mentions this issue, citing Ireland and Rosen.
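The distinction can be seen directly by listing squares modulo $8$ over all residues versus over the units only (a small check):

```python
from math import gcd

all_squares = {pow(x, 2, 8) for x in range(8)}
unit_squares = {pow(x, 2, 8) for x in range(8) if gcd(x, 8) == 1}

print(all_squares)   # {0, 1, 4}
print(unit_squares)  # {1}
```

The value $4$ the question noticed is a square, but not the square of a unit mod $8$, which is why it is excluded under the usual convention.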
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Every subgroup $H$ of a free abelian group $F$ is free abelian I am working through a proof that every subgroup $H$ of a free abelian group $F$ is free abelian (for finite rank)
For the inductive step, let $\{ x_1, \ldots, x_n \}$ be a basis of $F$, let $F_n = \langle x_1,\ldots,x_{n-1} \rangle$, and let $H_n = H \cap F_n$. By induction $H_n$ is free abelian of rank $\le n-1$. Now $$H/H_n = H/(H \cap F_n) \simeq (H+F_n)/F_n \subset F/F_n \simeq \mathbb{Z}$$
The isomorphism I can't see is $$H/(H \cap F_n) \simeq (H+F_n)/F_n.$$ I guess there is a way to get this from the first isomorphism theorem, but I am having a hard time seeing it
| Consider the composition
$$
H \stackrel{i}{\hookrightarrow} H + F_n \stackrel{\pi}{\to} (H + F_n)/F_n
$$
where $i$ is the inclusion and $\pi$ the projection. What's its kernel?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
number of ordered partitions of integer How to evaluate the number of ordered partitions of the positive integer $ 5 $?
Thanks!
| Since $5$ is a smallish number, it is reasonable to try to list all of the ordered partitions, and then count. First maybe, lest we forget, write down the trivial partition $5$. Then write down $4+1$, $1+4$. Now list all the ordered partitions with $3$ as the biggest number. This is easy, $3+2$, $2+3$, $3+1+1$, $1+3+1$, $1+1+3$. Continue. After not too long, you will have a complete list.
It so happens that for this type of problem, there is a simple general formula, which one might guess by carefully finding the number of ordered partitions of $1$, of $2$, of $3$, of $4$. And there are good ways of proving that the general formula holds. Let us deal with the case $n=5$.
Put $5$ pennies in a row, leaving a little gap between consecutive pennies. There are $4$ interpenny gaps. CHOOSE any number of these gaps ($0$, $1$, $2$, $3$, or $4$) to put a grain of rice into. Any such choice gives rise to a unique ordered partition of $5$, and all of them arise in this way. For example, the trivial partition $5$ comes from using no grain. The partition $4+1$ comes from putting a grain of rice after the $4$th penny. And so on. So there are exactly as many ordered partitions of $5$ as there are ways of choosing a SUBSET of the set of gaps. But a set of $4$ elements has $2^4$ subsets.
Or else one could attack the problem by induction. For example, let $P(n)$ be the number of ordered partitions of $n$. Now look at $P(n+1)$. Ordered partitions of $n+1$ are of two types: (i) last element $1$ and (ii) last element bigger than $1$. You should be able to see that there are $P(n)$ ordered partitions of $n+1$ of each type, meaning that $P(n+1)=2P(n)$.
But after all this fancy stuff, I would like to urge that you get your hands dirty, that you list and count the ordered partitions of $n$ for $n=1$, $2$, $3$, $4$, $5$, maybe even $6$.
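The gap-and-rice bijection above can be checked by brute force: enumerate every ordered partition (composition) of $n$ via subsets of the $n-1$ gaps, and confirm the count is $2^{n-1}$ (a sketch; the function name is mine):

```python
from itertools import combinations

def compositions(n):
    """All ordered partitions of n, one per subset of the n-1 'gaps'."""
    result = []
    for r in range(n):
        for cut in combinations(range(1, n), r):
            points = (0,) + cut + (n,)
            result.append(tuple(points[i + 1] - points[i]
                                for i in range(len(points) - 1)))
    return result

for n in range(1, 7):
    assert len(compositions(n)) == 2 ** (n - 1)
print(sorted(compositions(3)))  # [(1, 1, 1), (1, 2), (2, 1), (3,)]
```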
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
} |
The Metropolis Algorithm
I Know how to apply the Metropolis Algorithm, but I'd be grateful if someone could explain to me the reasoning behind the steps in the algorithm. I've tried in vain looking for the original paper.
Thanks.
| The idea of the algorithm is to use one distribution (the transition distribution) in order to sample from a different one (the original distribution). The assumption is that the original distribution is calculable, but that it is too difficult to sample directly. Ideally, the transition distribution should be "close enough" to the original distribution.
In order to do that, we have to sample in such a way that we don't follow the transition distribution "blindly", but rather use the original distribution to weigh how likely we are to actually accept the proposed next value, rather than keep the old one.
It can be shown (by simple summation of the probabilities) that the stationary distribution of the process, if there is one (if I'm not mistaken, for there to be one the transition distribution has to satisfy some conditions that ensure the sampling is "close enough" to the original distribution), is indeed the original distribution.
A common case which uses this algorithm, is the Gibbs sampling algorithm - on large graphical models. The transition distribution used is a single variable value change.
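The accept/reject idea described above can be sketched as a minimal random-walk Metropolis sampler. Everything concrete here (the standard-normal target, the proposal width, the function names) is my own illustrative choice, not part of the original algorithm description:

```python
import math
import random

def metropolis(log_target, steps, x0=0.0, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Gaussian noise and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, proposal_scale)
        # Only the ratio of target densities matters, so an unnormalized
        # log-density is enough.
        accept_prob = math.exp(min(0.0, log_target(proposal) - log_target(x)))
        if rng.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density, known only up to a constant
samples = metropolis(lambda x: -0.5 * x * x, steps=50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # roughly 0 and 1
```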
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
roots of $f(z)=z^4+8z^3+3z^2+8z+3=0$ in the right half plane This is a question in Ahlfors in the section on the argument principle: How many roots of the equation $f(z)=z^4+8z^3+3z^2+8z+3=0$ lie in the right half plane?
He gives a hint that we should "sketch the image of the imaginary axis and apply the argument principle to a large half disk."
Since $f$ is an entire function, I think I understand that the argument principle tells us that for any closed curve $\gamma$ in $\mathbb{C}$, the winding number of $f(\gamma)$ around 0 is equal to the number of zeros of $f$ contained inside $\gamma$.
How would you go about actually applying the hint though? I am having trouble figuring out what the image of a large half disk under $f$ would look like.
| The hint was to sketch the image of the imaginary axis. You can parametrize the imaginary axis as $\gamma(t) = it$. $f(it) = t^4-3t^2+3 + (-8t^3 + 8t)i.$ Do you recall how to plot the parametrized curve $(t^4-3t^2+3,-8t^3+8t)$ from calculus?
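As a numerical sanity check (not a substitute for the argument-principle sketch the hint asks for), one can find the roots directly and count those with positive real part, and also confirm the parametrization of $f(it)$:

```python
import numpy as np

# f(z) = z^4 + 8z^3 + 3z^2 + 8z + 3
coeffs = [1, 8, 3, 8, 3]
roots = np.roots(coeffs)
in_right_half_plane = [z for z in roots if z.real > 0]
print(len(in_right_half_plane))

# The curve traced by f(it): real part t^4 - 3t^2 + 3, imag part -8t^3 + 8t
t = 2.0
w = np.polyval(coeffs, 1j * t)
print(w.real, w.imag)  # matches (t^4 - 3t^2 + 3, -8t^3 + 8t) = (7, -48)
```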
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 2
} |
When is the product of two quotient maps a quotient map? It is not true in general that the product of two quotient maps is a quotient map (I don't know any examples though).
Are any weaker statements true? For example, if $X, Y, Z$ are spaces and $f : X \to Y$ is a quotient map, is it true that $ f \times {\rm id} : X \times Z \to Y \times Z$ is a quotient map?
| If the quotient maps $p : W \rightarrow X, q:Y \rightarrow Z$ are open, then the product map is also a quotient map:
$U\times V$ open in $X \times Z \implies (p\times q)^{-1}(U\times V) = p^{-1}U\times q^{-1}V$ open, so $p \times q$ is continuous.
Conversely, if $U \times V$ is open in $W \times Y$, then $(p\times q)(U\times V) = p(U)\times q(V)$ is open since $p,q$ are open maps; so $p \times q$ is a continuous open surjection, hence a quotient map.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 6,
"answer_id": 2
} |
Mathematical symbol to reference the i-th item in a tuple? Given a tuple e=(x,y), how do I reference the 2nd item (y)?
| I figured I would collect a number of the comments together into an answer so you would have something to accept (citing the commenters, so no one would hate on me for plagiarism).
As with many types of mathematical notations, there are a number of possible variations here.
*
*Sometimes $p_2(e)$ or $\pi_2(e)$ is used to denote the 2nd projection (Martin Sleziak, FrancescoTurco).
*If you define an n-tuple as $\mathbf{x}\in\mathbb{R}^n$ (note the bold font), then the $i$th element can easily be addressed with $x_i$ (Hauke Strasdat).
*Sometimes even $e^{(2)}$ or $e^2$ (lhf).
Just be sure you explain to the reader what you mean by the notation—don't assume he will understand (GEdgar).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
} |
Are there any random variables so that $\mathrm{E}[XY]$ exists, but $\mathrm{E}[X]$ or $\mathrm{E}[Y]$ doesn't? Are there any random variables so that $\mathrm{E}[XY]$ exists, but $\mathrm{E}[X]$ or $\mathrm{E}[Y]$ doesn't?
| Yes. For example take $C$ as a Cauchy random variable and independently $H$ as $0$ or $1$ with equal probability.
Let $X=CH$ and $Y=C(1-H)$.
Then the expectations of $X$ and $Y$ would be half the expectation of $C$, except that it does not exist, while $XY=0$ and so $E[XY]=0$.
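A quick simulation of this construction (illustrative only: it cannot "show" that $E[C]$ fails to exist, but it does confirm that $XY$ is identically zero):

```python
import math
import random

random.seed(0)

def standard_cauchy():
    # Standard Cauchy draw via the tangent of a uniform angle
    return math.tan(math.pi * (random.random() - 0.5))

for _ in range(10_000):
    c = standard_cauchy()
    h = random.randint(0, 1)
    x, y = c * h, c * (1 - h)
    assert x * y == 0.0  # one of the two factors is always exactly zero
print("XY = 0 in every draw")
```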
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
On TM's with a single loop Thinking about the halting problem for TM's, I came up with a statement that I can't prove or disprove easily and would want your suggestions.
Conjecture: Given a TM whose digraph has a single cycle , and given that it loops forever on a word $w_1 \in \Sigma^*$. Then the set of words on which it loops forever is a regular language.
Question: Is this true or false , and how so?
Note that, if the TM's source code were written out in (say) C, it would have only a single loop (for or while). Edit: The digraph consists of states for its nodes; if there is a transition $\delta(q_1, s) = (q_2, t, L/R)$, then there is a directed edge from $q_1$ to $q_2$, marked with $s \rightarrow t, L/R$ in the digraph.
| It depends rather strongly on the exact definition of the digraph. If you really mean that there is only one cycle, then there are two cases:
*
*The net head movement is zero.
*The net head movement is non-zero.
In the first case, it is easy to see that the condition of entering an infinite loop depends on only a finite prefix of the input; in particular, the language is regular.
In the second case, we get the same result, though now we're using the fact that empty positions are different from input positions, and so the loop can only involve empty positions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/31999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Mandelbrot fractal: How is it possible? I'm a programmer and have recently played around a bit with rendering Mandelbrot fractals / zooming into them.
What I can't grasp: How can such infinite, complex shapes come out of somewhat 10 lines of deterministic code?
How is it possible that when zooming ever deeper and deeper, there are still completely new shapes coming up, while the algorithm remains the same?
Does the set maybe give us some deep insight into our universe or even other dimensions, as it involves complex numbers?
| I would summarize it in one sentence:
ITERATION (OR FEEDBACK) CREATES COMPLEXITY
Julia sets and the Mandelbrot set all come from the iteration of the family of functions $q_c(z)=z^2+c$. What could be simpler than such a second degree polynomial? The complexity comes from the iteration
$$
z_{n+1}=q_c(z_n).
$$
Even if the function $q_c$ is simple, the behaviour of the sequence $\{z_n\}$ may be very complex. In fact, it may be chaotic.
We are in the presence of sensitive dependence on initial conditions (a.k.a. the butterfly effect). Iterations that begin at two very close initial points may separate after a sufficiently large number of iterations. The Mandelbrot set is the set of complex numbers $c$ such that the sequence
$$
0,\ q_c(0)=c,\ q_c(q_c(0))=c^2+c,\ q_c(q_c(q_c(0)))=(c^2+c)^2+c,\dots
$$
remains bounded. The behaviour of that sequence may be very different for two very close values of $c$. That is the reason why you see such complex behaviour when zooming in on a small region.
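The "ten lines of deterministic code" the question mentions are essentially the following escape-time test (a sketch; the iteration cap and the escape radius $2$ are the usual conventional choices):

```python
def in_mandelbrot(c, max_iter=100):
    """True if the orbit 0, c, c^2 + c, ... stays within |z| <= 2 for
    max_iter steps -- the standard escape-time approximation of membership."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))   # True: the orbit is constantly 0
print(in_mandelbrot(-1))  # True: the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))   # False: 0, 1, 2, 5, 26, ... escapes
```

All of the visual complexity comes from how wildly the return value (and the escape time) varies as `c` moves by tiny amounts, which is exactly the sensitive dependence described above.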
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 0
} |
Probability distribution for the remainder of a fixed integer In the "Notes" section of Modern Computer Algebra by Joachim Von Zur Gathen, there is a quick throwaway remark that says:
Dirichlet also proves the fact, surprising at first sight, that for fixed $a$ in a division the remainder $r = a \operatorname{rem} b$, with $0 \leq r < b$, is more likely to be smaller than $b/2$ than larger: If $p_a$ denotes the probability for the former, where $1 \leq b \leq a$ is chosen uniformly at random, then $p_a$ is asymptotically $2 - \ln{4} \approx 61.37\%$.
The note ends there and nothing is said about it again. This fact does surprise me, and I've tried to look it up, but all my searches for "Dirichlet" and "probability" together end up being dominated by talks of Dirichlet stochastic processes (which, I assume, is unrelated).
Does anybody have a reference or proof for this result?
| The Prime Numbers and Their Distribution by Gérald Tenenbaum and Michel Mendès France.
Parts of this book, including discussion of the result, are available on Google Books.
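As a quick empirical illustration of the claim quoted in the question ($p_a \to 2 - \ln 4$), one can count directly for a largish fixed $a$ (a sketch; only loose agreement with the asymptotic value is expected at finite $a$):

```python
import math

a = 1_000_000
# Count b in 1..a for which the remainder a mod b is below b/2
favorable = sum(1 for b in range(1, a + 1) if a % b < b / 2)
p_a = favorable / a

print(p_a, 2 - math.log(4))  # empirically close to 2 - ln 4 ~ 0.6137
```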
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 3,
"answer_id": 2
} |
Alternative definition for topological spaces? I have just started reading topology so I am a total beginner but why are topological spaces defined in terms of open sets? I find it hard and unnatural to think about them intuitively. Perhaps the reason is that I can't see them visually. Take groups, for example, are related directly to physical rotations and numbers, thus allowing me to see them at work. Is there a similar analogy or defintion that could allow me to understand topological spaces more intuitively?
| In "Quantales and continuity spaces" Flagg develops the notion of a metric space where the distance function takes values in a value quantale. A value quantale is an abstraction of the properties of the poset $[0,\infty]$ needed for 'doing analysis'. It is then showed that every topological space $X$ is metrizable in the sense that there exists a value quantale $V$ (depending on the topology on $X$) such that the topological space $X$ is given by the open balls determined by a metric structure on $X$ with values in $V$. At this level of abstraction it is thus seen that the open sets axiomatization for topology is nothing but the good old notion of a metric space, only taking values in value quantales other than $[0,\infty]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
How to solve combinations of differentials and integrals? In the first chapter of Nearing's book "Mathematical tools for physics" (available online) I encountered an interesting combination of differentials and integrals - which I don't fully understand:
(a) $$\frac{\mathrm{d} }{\mathrm{d}\alpha}\int_{-\infty}^{\infty}e^{-\alpha x^2}dx=-\int_{-\infty}^{\infty}x^2e^{-\alpha x^2}dx$$
(b) $$\frac{\mathrm{d} }{\mathrm{d} x}\int_{0}^{x}e^{-x t^2}dt=e^{-x^{3}}-\int_{0}^{x}t^2e^{-x t^2}dt$$
(c) $$\frac{\mathrm{d} }{\mathrm{d} x}\int_{x^2}^{\,\sin x}e^{x t^2}dt=e^{x\, \sin^2 x} \,\cos x-e^{x^{5}}2x+\int_{x^2}^{\,\sin x}t^2e^{x t^2}dt$$
I can't see in which order you have to do which rules to arrive at the solutions. Could anyone please give me the steps in between? Thank you!
| It's the Leibniz integral rule; see e.g. the Wikipedia article on differentiation under the integral sign for a statement and proof.
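For reference, the general rule is usually stated as:

```latex
\frac{\mathrm{d}}{\mathrm{d}x}\int_{a(x)}^{b(x)} f(x,t)\,\mathrm{d}t
  = f\bigl(x, b(x)\bigr)\, b'(x)
  - f\bigl(x, a(x)\bigr)\, a'(x)
  + \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}(x,t)\,\mathrm{d}t
```

In example (a) only the integrand depends on the parameter, so just the last term survives; in (b) and (c) the boundary terms produce the $e^{-x^3}$, $e^{x\sin^2 x}\cos x$ and $e^{x^5}\,2x$ pieces, and the remaining integral comes from differentiating the integrand with respect to $x$.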
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Diophantine impossibility and irrationality (or similar) The Diophantine equation $$a^2 = 2 b^2$$ having no solutions is the same as $\sqrt{2}$ being irrational.
Are there any Diophantine equations which are related to the irrationality of a number that is not algebraic?
For a similar question with broader scope, the Diophantine equation $$x^n + y^n = z^n$$ implies a certain elliptic curves is "ir"-modular.
Are there more examples of this phenomenon?
| The Diophantine equation $2^x=3^y$ having no solutions is the same as $\log3/\log2$ being irrational. It is known that $\log3/\log2$ is not algebraic.
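A trivial brute-force confirmation over a small range (for positive exponents, the parity of $2^x$ versus the oddness of $3^y$ already settles it):

```python
# 2^x is even and 3^y is odd for x, y >= 1, so they can never coincide;
# check a small range directly anyway.
solutions = [(x, y) for x in range(1, 50) for y in range(1, 50)
             if 2**x == 3**y]
print(solutions)  # []
```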
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Localization at a prime ideal in $\mathbb{Z}/6\mathbb{Z}$ How can we compute the localization of the ring $\mathbb{Z}/6\mathbb{Z}$ at the prime ideal $2\mathbb{Z}/\mathbb{6Z}$? (or how do we see that this localization is an integral domain)?
| We first recall the following well known result.
Let $R$ be a commutative ring with unity, $I$ an ideal in $R$ and $S$ a multiplicative closed set in $R$. Let $\bar{S}$ denote the image of $S$ in the quotient ring $\bar{R}=R/I$. Then $\bar{S}^{-1}\bar{R} = S^{-1}R/IS^{-1}R$.
Now take $R=\mathbb{Z}$, $I=6\mathbb{Z}$ and $S=\mathbb{Z}-2\mathbb{Z}$. Then $\bar{\mathbb{Z}}_{(\bar{2})}= \bar{S}^{-1}\bar{R} = S^{-1}R/IS^{-1}R = \mathbb{Z}_{(2)}/ 6\mathbb{Z}_{(2)} = \mathbb{Z}_{(2)}/ 2\mathbb{Z}_{(2)}$ {since 3 is a unit in $\mathbb{Z}_{(2)}$; $2 \mathbb{Z}_{(2)}$ = $6 \mathbb{Z}_{(2)}$} = $\mathbb{Z}/2\mathbb{Z}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 1
} |
Are calculus and real analysis the same thing?
*
*I guess this may seem stupid, but how are calculus and real analysis different from and related to each other? I tend to think they are the same, because all I know is that the objects of both are real-valued functions defined on $\mathbb{R}^n$, and their topics are continuity, differentiation and integration of such functions. Isn't that so?
*But there is also $\lambda$-calculus, about which I honestly don't quite know. Does it belong to calculus? If not, why is it called *-calculus?
*I have heard that at the undergraduate course level, some people refer to the topics in linear algebra as calculus. Is that correct?
Thanks and regards!
| My take on this: One would use the word 'calculus' when one is applying the mathematical tools - chain rule, integration- by-parts, etc - to solve problems in science, engineering, and so on; whereas one would use the word 'analysis' when one is developing/justifying the same tools - proving the chain rule, inventing integration-by-parts, etc. I.e, analysis is what the pure mathematicians do, calculus is the product of analysis which engineers use.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "102",
"answer_count": 8,
"answer_id": 1
} |
What are good resources to self-teach mathematics? I am teaching myself mathematics using textbooks and I'm currently studying the UK a-level syllabus (I think in the USA this is equivalent to pre-college algebra & calculus). Two resources I have found invaluable for this are this website (http://math.stackexchange.com) and Wolfram Alpha (http://wolframalpha.com). I am very grateful that with those tools, I have managed to understand any questions/doubts I have had so far.
Can anyone recommended other valuable resources for the self-taught student of mathematics at this basic level?
I hope questions of this format are valid here?
Thanks!
| This is an old question, but this is for anyone else who might want this in the future.
For direct information about the a-level search TLmaths on YouTube, who has a playlist of just under 1000 videos covering the entire course.
For lots of free practice questions visit physics and maths tutor.
I have recently found openstax for many great maths books, but is not specific to the a-level.
Professor Leonard on YouTube is great for any level of maths, and has great lectures.
I hope people find something here to help them learn.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
On integer solutions of the equation $x^2+y^2+z^2=16(xy+yz+zx-1)$ Here is the question:
Question. Show that the equation
$$x^2+y^2+z^2=16(xy+yz+zx-1)$$
does not have integer solutions.
I know a nice and easy (actually, an obvious) way to solve this problem. But I'm just wondering can we solve this using infinite descent method? I remember I saw a solution using that method, but it was wrong.
| The following is an approach that is not by infinite descent, but imitates at the beginning descent approaches to similar problems.
Any square is congruent to $0$, $1$, or $4$ modulo $8$. It follows easily that in any solution of the given equation, $x$, $y$, and $z$ must be even, say $x=2r$, $y=2s$, $z=2t$. Substitute and simplify. We get
$$r^2+s^2+t^2=16(rs+st+tr) -4$$
Using more or less the same idea, we observe that $r$, $s$, and $t$ must be even. Let $r=2u$, $s=2v$, $t=2w$. Substitute and simplify. We get
$$u^2+v^2+w^2=16(uv+vw+wu)-1$$
Now the descending stops. The right-hand side is congruent to $-1$ modulo $8$, but no sum of $3$ squares can be.
ADDED: I have found a way to make the descent infinite, for proving a stronger result. Look at the equation
$$x^2+y^2+z^2=16(xy+yz+zx)-16q^2$$
We want to show that the only solution is the trivial one $x=y=z=q=0$. The argument is more or less the same as the one above, except that when (after $2$ steps) we reach $-q^2$, we observe that there is a contradiction if $q$ is odd, so now let $q$ be even, and the descent continues. It would probably be more attractive to use $8$ than $16$, and $4q^2$ instead of $16q^2$.
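The two modular facts driving the descent (squares mod $8$ lie in $\{0,1,4\}$, and no sum of three squares is $\equiv 7 \pmod 8$) can be verified exhaustively:

```python
# Squares modulo 8 take only the values 0, 1, 4 ...
squares_mod_8 = {pow(x, 2, 8) for x in range(8)}
print(squares_mod_8)  # {0, 1, 4}

# ... and no sum of three squares is congruent to -1 = 7 (mod 8)
three_square_sums = {(a + b + c) % 8
                     for a in squares_mod_8
                     for b in squares_mod_8
                     for c in squares_mod_8}
print(7 in three_square_sums)  # False
```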
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
What's the opposite of a cross product? For example, $a \times b = c$
If you only know $a$ and $c$, what method can you use to find $b$?
| Alright, let's look for an inverse here:
The cross product is restricted to $\mathbb{R}^3$ (assuming reals--but that isn't really a constraint here, I don't think). So we have:
$$
\langle x_1, y_1, z_1\rangle\times\langle x_2, y_2, z_2\rangle = \langle y_1z_2 - z_1y_2, z_1x_2 - x_1z_2, x_1y_2 - y_1x_2\rangle
$$
Then we have:
\begin{align}
\langle x_3, y_3, z_3\rangle\times\left(\langle x_1, y_1, z_1\rangle\times\langle x_2, y_2, z_2\rangle\right) = & \langle y_3(x_1y_2 - y_1x_2) - z_3(z_1x_2 - x_1z_2),\\
&z_3(y_1z_2 - z_1y_2) - x_3(x_1y_2 - y_1x_2),\\
& x_3(z_1x_2 - x_1z_2) - y_3(y_1z_2 - z_1y_2)\rangle
\end{align}
Now this looks very complicated (and it is--if you need to solve it "as is"). Instead we can imagine that we already know $\vec{n}' = \vec{v}_1\times\vec{v_2}$. Then this becomes:
\begin{align}
\langle x_3, y_3, z_3\rangle\times\vec{n}' = & \langle y_3n'_z - z_3n'_y, z_3n'_x - x_3n'_z, x_3n'_y - y_3n'_x\rangle
\end{align}
Now we are trying to find $\langle x_3, y_3, z_3\rangle$ which is the inverse. So we set this equal to the second argument of the original cross product, we have a set of linear equations for three unknowns ($x_3, y_3, z_3$):
$$
x_2 = y_3n'_z - z_3n'_y \\
y_2 = z_3n'_x - x_3n'_z \\
z_2 = x_3n'_y - y_3n'_x
$$
The matrix gives:
$$
\begin{pmatrix} 0 & n_z' & -n_y' \\
-n_z' & 0 &n_x' \\
n_y' & -n_x' & 0
\end{pmatrix} \times \begin{pmatrix}x_3 \\
y_3 \\
z_3\end{pmatrix} = \begin{pmatrix}x_2 \\
y_2 \\
z_2\end{pmatrix}
$$
There is no solution when the determinant of this matrix is zero--which is always the case: $n_x'n_y'n_z' - n_x'n_y'n_z' = 0$ (as for any $3\times 3$ skew-symmetric matrix). This means that unless $\langle x_2, y_2, z_2\rangle$ is $\vec{0}$ there is no solution (since a matrix that has a determinant equal to zero only has a solution when the RHS is zero--in which case it has infinite solutions).
Edit (in regards to comments)
While it's not strictly true that just because the RHS is non-zero (and the matrix is degenerate) there won't be a solution, it is true "in general". Meaning that you can only find solutions in very special cases. This still suggests that there is not an inverse to the cross product (except in very special cases).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 5,
"answer_id": 1
} |
Roots of Unity in fields Which roots of unity are contained in the fields: $\mathbb{Q}[i]$, $\mathbb{Q}[\sqrt2]$, $\mathbb{Q}[\sqrt3]$, $\mathbb{Q}[\sqrt5]$, $\mathbb{Q}[\sqrt{-2}]$ and $\mathbb{Q}[\sqrt{-3}]$?
I know that the roots of unity in $\mathbb{Q}[i]$ are $1$, $-1$, $i$, and $-i$. I'm having a hard time finding the roots of unity in the other fields. If anyone could offer any advice, it would be greatly appreciated.
| The first three are easy enough so I will tackle the last two. Recall that the degree of the extension $\Bbb{Q}(\zeta_n)/\Bbb{Q}$ is $\varphi(n)$ where $\varphi$ is the Euler Totient Function. Now it is not hard to see that the only values of $n$ for which $\varphi(n) \leq 2$ are $n = 1, 2, 3, 4$ and $6$.
Now when you look at $\Bbb{Q}(\sqrt{-2})$ and $\Bbb{Q}(\sqrt{-3})$, if you have an $n^{th}$ root of unity in there it can only be for those stipulated values of $n$ above, because otherwise you have a $\Bbb{Q}$ - subspace of dimension greater than 2 sitting inside of a $\Bbb{Q}$ - vector space of dimension 2 which is impossible. Now let us write out $\zeta_n$ for these values of $n$, we have: $\zeta_2 = - 1$, $\zeta_3 = \frac{-1 + \sqrt{3}i}{2}$, $\zeta_4 = i$ and $\zeta_6 = \frac{1 + \sqrt{3}i}{2}.$
Can you now complete your problem? I leave the rest for you since this is a homework problem. By applying degree arguments, etc. you should be able to eliminate cases. For example, you should be able to work out for yourself why $\pm i$ is not in $\Bbb{Q}(\sqrt{-3})$ say.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Write $\sum_{1}^{n} F_{2n-1} \cdot F_{2n}$ in a simpler form, where $F_n$ is the n-th element of the Fibonacci sequence? The exercise asks to express the following:
$\sum_{k=1}^{n} F_{2k-1} \cdot F_{2k}$
in a simpler form, not necessarily a closed one. The previous problem in the set was the same, with a different expression:
$\sum_{k=0}^{n} F_{k}^{2}$, which equals $F_{n} \cdot F_{n+1}$
Side note:
I just started to work through an analysis book, my first big self-study effort. This problem appears in the introductory chapter with topics such as methods of proof, induction, sets, etc.
| You can try to use the closed form (Binet's formula) for the Fibonacci and Lucas sequences.
$F_n=\frac{\alpha^n-\beta^n}{\sqrt{5}}$
$L_n=\alpha^n+\beta^n$
where $\alpha$ and $\beta$ are the roots of $x^2-x-1$. It is useful to notice that $\alpha+\beta=1$ and $\alpha\beta=-1$.
Thus $F_{2k}F_{2k-1}=\frac{(\alpha^{2k}-\beta^{2k})(\alpha^{2k-1}-\beta^{2k-1})}5 = \frac{\alpha^{4k-1}+\beta^{4k-1}}5 - \frac{(\alpha\beta)^{2k-1}(\alpha+\beta)}5 = \frac{L_{4k-1}+1}5$. (I believe this is a special case of equations (18) from picakhu's post.)
Thus it remains to calculate $\sum L_{4k-1}=\sum (\alpha^{4k-1}+\beta^{4k-1})$, which is basically using geometric progressions and algebraic manipulations. (The identities $\alpha+\beta=1$ and $\alpha\beta=-1$ might be handy.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Finding probability of an unfair coin An unfair coin is tossed giving heads with probability $p$ and tails with probability $1-p$. How many tosses do we have to perform if we want to find $p$ with a desired accuracy?
There is an obvious bound of $N$ tosses for $\lfloor \log_{10}{N} \rfloor$ digits of $p$; is there a better bound?
| There is no way to determine $p$ by tossing the coin with the kind of guaranteed accuracy you get in deterministic problems. Estimating $p$ this way is a Monte Carlo simulation, and hence you had better use the bounds available for such methods. Note that these bounds are probabilistic. E.g.
$$
\mathsf{P}(|p-\hat{p}_n|>\delta)\leq 2\mathrm{e}^{-2n\delta^2}
$$
where $\hat{p}_n$ can be obtained as a frequency of heads from tossing the coin $n$ times, i.e.
$$
\hat{p}_n = \frac{1}{n}\#( \text{heads in the tossing sequence} ).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Sum of two independent random variables If A and B have the same distribution, e.g. Poisson, and they are independent. Is $P(A + B = k) = P(A = k)P(B = k)$ ? If so, why? If not, what is the correct way to calculate $P(A + B = k)$?
| No. Having $A + B = k$ is not equivalent to $A = k$ and $B = k$, as in the latter case you would have $A + B = 2k$. To find when $A + B = k$ you have to consider all the ways that $A$ and $B$ could sum to $k$. This means that $$P(A + B = k) = \sum_j P(A = j) P(B = k-j),$$ where the sum is over all feasible values of $j$ (i.e., all values of $j$ where you could actually have $A = j$ and $B = k-j$). This operation is called a (discrete) convolution of the common probability mass function of $A$ and $B$ with itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is Harish-Chandra's last name never used? This is only barely a math question but I don't know where else to ask. I've always wondered about Harish-Chandra's name. The Wikipedia article seems to mention "Mehrotra" as a last name but only in passing, and it's not even used in the page's title. Did he simply not use a last name?
| During the independence movement, Harish-Chandra's father, under the influence of Mahatma Gandhi, decided to do away with the caste name 'Mehrotra' and also started wearing Khadi attire. As for the hyphen, it was a mistake by a sub-editor publishing his first paper, but Harish-Chandra decided to retain it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Finding the first term of a geometric series by the sum and $n$ I have the following exponential series:
$$S = ar^0 + ar^1 + ar^2 + \cdots + ar^n$$
I know $S$, $r$ and $n$. How do I find $a$?
I actually need this done by a script so all "crazy" methods like doing an operation $n$ times are ok.
| As Raskolnikov points out, this is a geometric series. The sum is given by $$S = a \cdot \frac{r^{n+1}-1}{r-1}$$
Substitute the values of $S$, $r$ and $n$ to get $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/32976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Question on notation of differentials Is the following notation acceptable, specifically last part of the last line?
$$f(x) = \csc^4(x) = (\csc(x))^4$$
Let
$$u =\csc(x) \rightarrow f(x) = u^4$$
$$f'(x) = \frac{du}{dx} \times \frac{df(x)}{du}$$
$$...$$
Edit: Reworded question: Is it OK to mix Leibniz's and Lagrange's notation?
| Yes, it is acceptable to mix notations.
(Answered by Arturo Magidin & Jack)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Deleting any digit yields a prime... is there a name for this? My son likes his grilled cheese sandwich cut into various numbers, the number depends on his mood. His mother won't indulge his requests, but I often will. Here is the day he wanted 100:
But today he wanted the prime 719, which I obliged. When deciding which digit to eat first, he went through the choices, trying to make a composite with the digits left behind. But he quickly realized that eating any digit would leave a prime: 71, 79, 19 are all prime. Pleased with his discovery of this prime 719, he tried to find a larger one, but couldn't.
My questions:
*
*Do these primes have a name?
*Can you think of any more of them (clearly 23 is the smallest)?
*Are there an infinite number of them?
*Is there likely to be a way to find them short of using a computer?
| Here's my C code for generating these numbers. It should be very fast since it uses a sieve, but also consumes a lot of memory for the sieve[] array (we could compress by a factor of 8 by using a bitfield).
// find primes such that deleting any digit remains a prime
//
#include <stdio.h>
#define MAX 1000000000LL
char sieve[MAX];
int ddel(long long i);
int main(void) {
long long i, j;
sieve[0] = sieve[1] = 0; // don't count 0 and 1 as primes
for (i=2; i < MAX; i++) sieve[i] = 1;
// sieve; each new prime will be tested
for (i=2; i < MAX; i++) {
if (sieve[i] == 0) continue;
for (j=i+i; j < MAX; j += i) sieve[j] = 0;
if (i > 10) {
if (ddel(i))
printf("%lld\n", i);
}
}
}
int ddel(long long i) {
long long j;
long long t = 1;
while (t < i) {
// delete the log_10(t) digit
j = i/10/t * t + i%t;
if (sieve[j] == 0) return 0;
t *= 10;
}
return 1;
}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "174",
"answer_count": 2,
"answer_id": 0
} |
I haven't studied math in 12 years and need help wrapping my mind back around it I was never fabulous at Algebra and have always studied the arts. However, now I have to take Math 30 pure 12 years after I finished my last required high school math class.
If anyone has thoughts on how to help me re-learn some of what I used to know and help me build upon that knowledge before my class starts please let me know!
Thanks in advance.
J
| In class, I make sure my students get practice with their calculators by having them do the calculations (I teach physics). I can do the numbers in my head faster than they can work the calculator. This is partly because they are extremely slow with the calculator and badly need practice with it. See Undercover Mathemati's answer.
One day one of my students got the answer faster than I knew a calculator could provide it. I stared at him briefly and asked "how did you get that". He said that he'd followed my advice. If you want your brain to be good at arithmetic, use it while you're driving. Look at the license plates and do multiplication or addition or division or subtraction problems. You will find yourself getting better and better each day.
The same thing applies to any other type of math. You will slowly become fabulously good at the thing you spend your time doing.
What I'm saying is that the head shapes of mathematicians are not significantly different from those of the general public. What's different is what they find interesting. Make math interesting for yourself and you're halfway there.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
How did Bessel functions come to be denoted by $J_n$? The $n$th Bessel function of the first kind is usually denoted $J_n(x)$.
Where did the use of the letter $J$ to indicate the Bessel function come from?
| " Bessel defined the function now known by his name by the
following definite integral:
$$\int \cos (h\epsilon-k \sin \epsilon) d\epsilon=2\pi I_k^h$$
where $h$ is an integer and the limits of integration are $0$ and $2\pi$. His
$I_k^h$ is the same as the modern $J_h(k)$, or rather $J_n(x)$.
O. Schlomilch
following P. A. Hansen explained the notation $J_{\lambda±n}$ where $\lambda$ signifies
the argument and $\pm$ the index of the function. Schlomilch usually
omits the argument. Watson points out that Hansen and Schlomilch
express by $J_{\lambda,n}$ what now is expressed by $J_n(2\lambda)$. Schlafly marked it $^nJ(x)$. Todhunter uses the sign $J_n(x)$. $J_n(x)$ is known as the "Bessel function of the first kind of order $n$," while $Y^n(x)$, an allied function
introduced in 1867 by Karl Neumann is sometimes called "Neumann's Bessel function of the second kind of order $n$." It is sometimes marked $Y_n(x)$.
Watson says: "Functions of the types $J_{\pm(n+1/2)}(z)$ occur with
such frequency in various branches of Mathematical Physics that
various writers have found it desirable to denote them by a special
functional symbol. Unfortunately no common notation has been
agreed upon and none of the many existing notations can be said to
predominate over the others." He proceeds to give a summary of
the various notations.
In his Theory of Bessel Functions, pages 789, 790, Watson gives a
list of 183 symbols used by him as special signs pertaining to that
subject. "
This, in outermost quotes, is a paragraph taken from Cajori's book "A History of Mathematical Notations", clause 664, pg. 279, vol. II, explaining the symbol chosen for the Bessel function of the first kind. Hope it helps.
Cajori, F., A history of mathematical notations. Vol. II: Notations mainly in higher mathematics., XVIII + 367 p., 20 fig. Chicago, The Open Court Publishing Company (1929). ZBL55.0002.02.f
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Prime numbers which solve $2^s=1\pmod p$ Here we define those primes $p$ for which $\operatorname{ord}_p(2)=s$, where $s$ is the minimum of the set $S$ of all divisors $d\mid p-1$ such that $2^d-1\geq p$.
For example: for $p=7$, $s=3$, $7\mid 2^3-1$ thus $\operatorname{ord}_p(2)=s=3$ ($7$ is such a prime).
Questions: how many such primes are there?
Are such primes interesting?
Thanks.
| If $p$ is a prime, one less than a multiple of $8$, and one more than twice a prime, then it satisfies the condition. This explains $7$; the next such example is $23$. It is generally believed, but not proved, that there are infinitely many primes satisfying the conditions I have given.
There are primes that satisfy your condition but not mine, e.g., $17$.
"Interesting" is a subjective term. I found them interesting enough to spend a few minutes writing up this answer.
EDIT: It seems I can't comment on tomerg's answer, so I'll put my comment here.
@tomerg, yes, I said there are primes of your kind that are not of my kind, and I gave $17$ as an example. $73$ is also an example. You wanted to know how many of your primes there are, and I have given a good reason to believe (but not a proof) that there are infinitely many. I hope someone else can build on what I've done, and give a complete answer. But this may be difficult, so I've done what I can.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deriving the distance between two distinct points on the Upper Half Plane $\mathbb{H}$ I am trying to derive the distance between two arbitrary points in hyperbolic space; the model I'm using is the upper half plane model.
So the distance is just $\int_f \rho(z)\, |dz|$, where $\rho(z) = \frac{1}{\text{Im}(z)}$. Now I construct a circle between these two points $(x_1,y_1)$ and $(x_2, y_2)$ whose centre is on the real axis, and I arrive at the equation $$\Biggl(x - \frac{(y_1^2 - y_2^2 + x_1^2 - x_2^2)}{2(x_1 - x_2)}\Biggr)^2 + y^2 = x_1^2 + y_1^2.$$
I also know that $(x_1,y_1)$ and $(x_2,y_2)$ can be parametrised in terms of $t$, as when I do a change of variables in the integral I now have to say the integral is going from some value $t_2$ and $t_1$, where $t_k$ is the angle between the line joining this point to the center $(\frac{(y_1^2 - y_2^2 + x_1^2 - x_2^2)}{2(x_1 - x_2)},0)$ and the real axis. So after doing a lot of manipulations, you arrive at the equation : $$d\Big((x_1,y_1,),(x_2,y_2)\Big) = \ln\Bigl|\frac{y_1^2+c^2-2cx_1+x_1^2+cy_1-y_1x_1}{y_1^2+c^2+2cx_1+x_1^2+cy_1+y_1x_1} \Bigr|,$$ where $c$ is the $x$ coordinate of the center
But somehow this does not tally up with the answer given on http://en.wikipedia.org/wiki/Poincar%C3%A9_half-plane_model,
I tried looking at the identity $\text{arcosh}(x)=\ln(x+\sqrt{x^2-1})$, but it does not help.
Unless I did not stuff up any of my calculations, any ideas?
Ben
| Your formula for the circle is incorrect. You found the center correctly. But the right-hand side should be the square of the radius, namely $(x_1 - c)^2 + y_1^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving quadratic function is bounded (vector input) I am reading Boyd's Convex Optimization textbook and I am looking at example 3.22, line 2.
It says $y^T x - \frac12 x^T Q x$ is bounded from above, as a function of $x$, for every value of $y$. Also, it is important to note that $Q$ is symmetric positive definite, so for all nonzero values of $x$, $x^T Q x$ is a positive number. Why is $y^T x - \frac12 x^T Q x$ bounded above? I am not sure how to compare the quantity $y^T x$ with the quantity $\frac12 x^T Q x$. For $y=0$ I can see why; then there are two cases, $y>0$ and $y<0$.
Thanks
| Hint: For given $y$ write $x:=Q^{-1}y + v$ with a new independent vector variable $v$. The resulting quadratic function of $v$ will have no linear term.
I'm expanding the hint: We are given the function $\phi(x):=y^T x -{1\over2} x^T Q x$ where $y$ is an a priori given constant vector. Writing $x:= Q^{-1} y+v$ we get the pullback (i.e., $\phi$ written as a function of the new variable $v$)
$$\eqalign{\tilde\phi(v)&=y^T(Q^{-1}y +v) -{1\over 2}(y^T Q^{-1} + v^T)\ Q\ (Q^{-1}y + v)\cr &= y^T Q^{-1}y + y^T v -{1\over2}(y^TQ^{-1} + v^T)( y+ Qv)\cr &={1\over2} y^TQ^{-1} y -{1\over 2} v^T Q v\cr}$$
(note that $y^T v=v^T y$). Here the right side is a constant minus something positive definite, so it is bounded above by this constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What does "splitting naturally" mean in the Universal Coefficients Theorem The Universal Coefficients Theorem states that
$0\rightarrow H_n(X)\otimes G\rightarrow H_n(X;G)\rightarrow\operatorname{Tor}(H_{n-1}(X),G)\rightarrow 0$
splits, but not naturally. In all the algebraic topology contexts I've come across, "natural" implies commutativity, but I don't see what a "natural split" means. Can someone give me an explicit definition?
Edit:
From what I've read... if I understand this correctly, splitting naturally implies that if
$0\rightarrow A\rightarrow A\oplus C\rightarrow C\rightarrow 0$
and
$0\rightarrow A'\rightarrow A'\oplus C'\rightarrow C'\rightarrow 0$
and given maps $a:A\rightarrow A'$ and $c:C\rightarrow C'$, the map $A\oplus C\rightarrow A'\oplus C'$ has to be the map $(a,c)$ in order for the diagram to commute.
| $\require{AMScd} \newcommand{\RP}{\mathbb{RP}}$
Note: The splitting is actually natural in $G$, contrary to what is written in many texts. What she is referring to is the following.$\newcommand{\z}{\mathbb{Z}}$ Let us fix $G=\z/2$.
Thm: There is no natural isomorphism from the functor $H_*(-,\z/2)$ to $Tor(H_{*-1}(-,\z),\z/2) \oplus H_{*}(-,\z) \otimes \z/2$.
Proof: Suppose if possible that for all spaces $X,Y$ and every continuous map $X \to Y$, there is an induced a commuting square
$ \begin{CD}
H_*(X,\z/2) @>\cong>> Tor(H_{*-1}(X),\z/2) \oplus H_*(X) \otimes \z/2 \\
@VVV @VVV\\
H_*(Y,\z/2) @>\cong>> Tor(H_{*-1}(Y),\z/2) \oplus H_*(Y) \otimes \z/2 \\
\end{CD}$
.
Then there would have to be a commuting square induced by $\RP^2 \to \RP^2/\RP^1 \to S^2$:
$ \begin{CD}
H_2(\RP^2,\z/2) @>\cong>> Tor(H_{1}(\RP^2),\z/2) \oplus H_2(\RP^2) \otimes \z/2 \\
@| @V0VV\\
H_2(S^2,\z/2) @>\cong>> Tor(H_{2-1}(S^2),\z/2) \oplus H_2(S^2) \otimes \z/2 \\
\end{CD}$. Contradiction since the 0 map would need to be an isomorphism.
Note: One could not have expected there to be such a natural transformation because $H_*(X)$ splits into objects of different homogeneous degree, while the induced maps of topological maps map elements of the same degree onto elements of the same degree.
Note 2: The diagram $$ \begin{CD}
H^2(\RP^2,\z/2) @>\cong>> Ext(H_{1}(\RP^2),\z/2) \oplus H_2(\RP^2)^* \\
@| @V0VV\\
H^2(S^2,\z/2) @>\cong>> Ext(H_{1}(S^2),\z/2) \oplus H_2(S^2)^* \\
\end{CD}$$ shows that the exact sequence of the cohomological universal coefficient theorem does not split naturally with respect to the topological space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 0
} |
Do infinitely many points in a plane with integer distances lie on a line? Someone posted a question on the notice board at my University's library. I've been thinking about it for a while, but fail to see how it is possible. Could someone verify that this is a valid question and point me in the right direction?
'Given an infinite set of points in a plane, if the distance between any two points is an integer, prove that all these points lie on a straight line.'
| MR0013511 (7,164a)
Anning, Norman H.; Erdős, Paul
Integral distances.
Bull. Amer. Math. Soc. 51, (1945). 598–600.
The authors show that for any n there exist noncollinear points $P_1,\dots,P_n$ in the plane such that all distances $P_iP_j$ are integers; but there does not exist an infinite set of non-collinear points with this property.
Reviewed by I. Kaplansky
I can add that the first result mentioned requires lots of points to be on a circle. I believe the current record for points in the plane with all distances integers, no three on a line, no 4 on a circle, is 8.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
} |
eigenvalues and eigenvectors for rectangular matrices We can generalize matrix inverses from non-singular square matrices to rectangular matrices in general, for example, the well-known Moore–Penrose pseudoinverse. I am wondering how this can be done for eigenvalues and eigenvectors.
Though $\det(A-\lambda I)=0$ cannot be used any more when $A$ is not square, there is nothing that prevents one from considering $Av=\lambda v$ for a non-zero vector $v$, except the possibility of having an inconsistent linear system.
Please give your comments and provide some references if there are some.
Many thanks.
Edit
I know SVD. But it does not seem to be the one I wanted. For the SVD of a real matrix $A$, $A=UDV^T$ where $U, V$ are orthogonal matrices and $D$ is diagonal (with possibly zeros on the diagonal). We only have $AV_{*k}=\sigma_{k}U_{*k}$, where $V_{*k}$ is the $k^\text{th}$ column of $V$. Since $V_{*k}$ and $U_{*k}$ are in general different, this does not resemble $Av=\lambda v$ for a non-zero vector $v$ in the definition of eigenvectors. Also, even though we have $A^TAV_{*k}=\sigma_k^2 V_{*k}$, this is for the (square) matrix $A^TA$, rather than $A$ itself.
| Check out singular values.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
How to calculate the matrix exponential explicitly for a matrix which isn't diagonalizable? How can I compute an expression for $(\exp(Qt))_{i,j}$ for some fixed $i, j$ and matrix $Q$?
When $Q$ is diagonalizable, we can diagonalize, but what can be done otherwise?
Thanks.
| You can find the exponential of a matrix by Putzer's algorithm, which avoids both computing either the Jordan canonical form or any eigenvectors. Putzer's algorithm uses the Cayley-Hamilton theorem, which states that a matrix satisfies its characteristic equation, in an essential way.
Here is a summary of the method for a $3 \times 3$ matrix $a$. The general case should be similar. The basic idea is that all powers of $a$ starting with $a^3, a^4, \ldots$ can be expressed as a linear combination of $i, a, a^2.$ We will use different combinations of $i, a, a^2$ instead, as explained in the next paragraph.
Let the characteristic polynomial of $a$ be written as
$\det(\lambda i - a) = (\lambda - \lambda_1)(\lambda - \lambda_2)(\lambda - \lambda_3)$. It does not matter how the roots are labeled or even whether they are all real. Now define $p_0 = i, p_1 = (a - \lambda_1 i)p_0, p_2 = (a - \lambda_2 i)p_1.$ We will need the following consequences: $ a p_0 = \lambda_1 p_0 + p_1,\ a p_1 = \lambda_2 p_1 + p_2,\ a p_2 = \lambda_3 p_2.$
Now look for $e^{at} = i + at + \frac{t^2a^2}{2!} + \ldots$. Since every power of $a$ is a linear combination of $p_0, p_1, p_2$, thanks to Cayley-Hamilton, $e^{at} = r_0 p_0 + r_1 p_1 + r_2 p_2.$ Using $\frac{d}{dt}e^{at} = ae^{at}$ it is easy to see that $r_0,r_1, r_2$ satisfy $$\frac{dr_0}{dt} = \lambda_1 r_0,\quad \frac{dr_1}{dt} = \lambda_2 r_1 + r_0,\quad \frac{dr_2}{dt} = \lambda_3 r_2 + r_1$$ together with the initial conditions $r_0(0) = 1, r_1(0) = 0, r_2(0) = 0.$
Once we have $r_0, r_1, r_2$, the fundamental matrix is $e^{at} = r_0 p_0 + r_1 p_1 + r_2 p_2$, and we are done.
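A small worked example (added for illustration, not part of the original answer): take the non-diagonalizable $2\times 2$ Jordan block
$$a = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix}, \qquad \lambda_1 = \lambda_2 = \lambda, \qquad p_0 = i, \quad p_1 = a - \lambda i.$$
The coefficient system is $r_0' = \lambda r_0$, $r_0(0)=1$ and $r_1' = \lambda r_1 + r_0$, $r_1(0)=0$, so $r_0 = e^{\lambda t}$ and $r_1 = t e^{\lambda t}$, giving
$$e^{at} = e^{\lambda t}\, i + t e^{\lambda t}\,(a - \lambda i) = e^{\lambda t}\begin{pmatrix}1 & t\\ 0 & 1\end{pmatrix},$$
which matches what the power series gives directly.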
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 1
} |
Variational Distance vs. maximum norm Suppose I have vector $x^t \in \mathbb{R}^n, x_i > 0$ that is a random variable in $t$. I define a measure $D(x) := \max_{i,j} |x_i - x_j|$, which essentially is the maximum discrepancy of any two values in the vector.
A paper I'm currently going through attempts to bound this measure for a certain process. I believe the details are unimportant, except that the sum of all the $x_i$ is $0$ for each $t$.
However, the paper goes on to bound another quantity: The variational distance between vector $x$ and the $0$-vector:
$$||x|| = \frac{1}{2} \sum_i |x_i|$$
According to the Wikipedia article on variational distance, the measure $D(x)$ should be the same as the variational distance, but I don't see how this can be the same.
| They are not the same, and the comments show some simple counter-examples. In fact, while $D(x) \leqslant 2 \| x\|$, the gap between these two quantities might be as large as $\Omega(n)$: e.g., take $x$ to be the vector containing $+1$ in half the coordinates, and $-1$ in the remaining half. Then, $D(x) = 2$ while $\| x \| = n/2$.
What is equivalent to the variational distance is a new quantity $D'(x)$ defined as
$$
D'(x) = \max_{I, J \subseteq [n]} \ | x(I) - x(J) |,
$$
where we define $x(I) = \sum \limits_{i \in I} x_i$ (and $x(J)$ similarly). Without loss of generality, we may assume that $I$ and $J$ are disjoint in the above definition.
This satisfies the identity
$$
D'(x) = 2 \| x \|.
$$
Proof. $D'(x) \leqslant \sum_{i} |x_i|$ holds by the triangle inequality. For the other direction, take $I = \{ i \,:\, x_i \geqslant 0 \}$ and $J = [n] \setminus I$. For this choice of $I$ and $J$, it holds that $| x(I) - x(J) | = \sum_i |x_i|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
problem to determine the chromatic polynomial of a graph For a graph theory homework, I'm asked to determine the chromatic polynomial of the following graph
this is my thread in another post:
https://stackoverflow.com/questions/5724167/problem-to-determine-the-chromatic-polynomial-of-a-graph
By the Decomposition Theorem of Chromatic Polynomials, if G=(V,E) is a connected graph and e belongs to E,
P(G, λ) = P(Ge, λ) - P(Ge', λ)
When calculating chromatic polynomials, I shall place brackets about a graph to indicate its chromatic polynomial. I remove an edge of the original graph to calculate the chromatic polynomial by the method of decomposition.
P(G, λ) = P(Ge, λ) - P(Ge', λ) = λ(λ-1)^3 - [λ(λ-1)(λ^2 - 3λ + 3)]
But the response from the answer key and the teacher is:
P(G, λ) = λ(λ-1)(λ-2)(λ^2-2λ-2)
I have expanded the polynomial but I cannot reach the solution given... what am I doing wrong?
| Your graph is a 5-cycle. I wouldn't use that theorem, I'd just do it directly.
1st vertex: $\lambda$ options.
2nd vertex: $\lambda-1$ options.
3rd vertex: two cases. If it's the same color as 1st vertex, then $\lambda-1$ options for 4th vertex, $\lambda-2$ options for 5th vertex. So this case contributes $\lambda(\lambda-1)^2(\lambda-2)$. Second case, if it differs from the 1st vertex ($\lambda-2$ options), then two subcases: if 4th vertex is same color as 1st, then $\lambda-1$ options for 5th, making $\lambda(\lambda-1)(\lambda-2)(\lambda-1)$. If 4th differs from 1st ($\lambda-2$ options), then $\lambda-2$ options for 5th, making $\lambda(\lambda-1)(\lambda-2)^3$.
Now add 'em all up.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/33946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is the variance of a convex function a convex function? I am working on a optimization-related research problem and need to know if the variance of a convex function is convex. I know this can be a little vague so I'm including a (rather formal) explanation below.
Say I have a function $v_i(x)$ where $v_i: \mathbb{R}^n \rightarrow \mathbb{R}$.
Assume that for each $i = 1...s$ the function $v_i$ can be a different convex function. Also, assume that the function has the form $v_i$ with probability $p_i$ where $\sum_{i=1}^{s}{p_i} = 1$.
Define the *expectation function as:
$E[v(x)] = \sum_{i=1}^{s}{p_i v_i(x)}$
And the *variance function as:
$Var[v(x)] = \sum_{i=1}^{s}{p_i*(v_i(x) - E[v(x)])^2}$
Assume the function $v_i$ is convex for all $i = 1...s$, then:
*
*Is the expectation function convex? (yes, right?)
*Is the variance function convex? (unsure)
Also, if $v_i$ is affine for all $i = 1...s$, then:
*
*Is the expectation function convex? (yes, right?)
*Is the variance function of a convex function convex? (if not, then what is it?)
| First, I think your notation is bad. After taking the expectation/variance, there should be no $i$ dependence. Also, your variance equation has one of its opening parentheses in the wrong place. It should be
$$Var[v(x)] = \sum_{i=1}^{s}{p_i*(v_i(x) - E[v(x)])^2}.$$
Anyway, the expectation, as a linear combination of convex functions, is definitely convex. But the variance is not. For example, take $s=2, p_1=p_2=1/2, v_1(x)=x^2, v_2(x)=x^4$, then the "variance" would be a polynomial $\frac{x^4}4(1-2x^2+x^4)$ which is nonnegative (of course) but not convex.
If all the $v_i$ are linear functions, then so is the expectation, hence convex. In this case, the variance is also convex, because it's a linear combination of convex functions $(v_i(x) - E[v(x)])^2$. (Note that each $(v_i(x) - E[v(x)])^2$ is convex because it's the square of a linear function.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What differences are between $\mathbb{E}^n$ and $\mathbb{R}^n$ What differences are between the two notations $\mathbb{E}^n$ and $\mathbb{R}^n$?
Do they represent/define the same space set with the same structure(s)?
Thanks and regards!
| I am not sure if this is standard notation, but if an author distinguishes between $\mathbb{R}^n$ and $\mathbb{E}^n$, the former may refer to the real $n$-dimensional vector space, whereas the latter also includes the structure of an inner product space.
The Wikipedia article seems to agree with this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Last two digits of $2^{1000}$ via Chinese Remainder Theorem? I bumped across the aforementioned question in my notes while studying today and I have completely forgot how to do this. I remember using CRT to solve a problem like this on one of my tests, too bad they didn't give back my solutions :(.
Since $\gcd(100,2) = 2$, we can't use the usual Euler's theorem to solve via $\pmod {100}$. So $100 = 5^2 2^2$, and applying Euler's theorem on the $25$ gives me $2^{20} \equiv 1 \pmod {25}$. However, since $\gcd(2,4) = 2$, we can't do the same for $\bmod 4$. Accordingly, how do I set up the other modulo congruence so I can apply CRT?
| By $\, ab\,\bmod\, \color{#90f}{ac}\, =\ a\,(b\bmod c)\, =\, $ mod Distributive Law $ $ & $\!\overbrace{\color{#c00}{2^{\large 10}}\! = 1024\equiv -1\pmod{\!25}}^{\!\!\!\!\!\Large {\rm or}\ \, \color{#c00}{2^{\LARGE 20}\equiv\ 1}\ \ {\rm by\ Euler\ \phi\ (totient)}}\,$
$\ \ 2^{\large 20J}\!\! \bmod \color{#90f}{100}\,
=\, 4\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\underbrace{\left[\dfrac{\color{#c00}{2}^{\large\color{#c00}{20}J}}4\!\bmod 25\right]}_{\large\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \equiv\ {\small\dfrac{\color{#c00}1}4}\equiv\ {\small\dfrac{-24}4}\,\ \equiv\ -6\ \ \equiv\ \ \color{#0a0}{19}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
= 4[\color{#0a0}{19}] = 76$ $\, \left\{\!\begin{align} &\text{mDL = ${\it operational}$ version of $\rm\small CRT$}\\ &\!\text { as explained in prior linked answer.}\end{align}\right.$
Remark $ $ Above used $\,4\mid \color{#c00}2^{\large \color{#c00}{20}J}\ $ by $\,J\ge 1\,\ [J=50\ \ \rm in\ OP].\ $ Another example from here
$\ \ 35^{\large 73} 53^{\large 25}\bmod 100\, =\, 25\left[\dfrac{35^{\large 73} 53^{\large 25}}{25}\bmod 4\right] = 25\overbrace{\left[\dfrac{(-1)^{\large 73} 1^{\large 25}}{1}\bmod 4\right]}^{\ \ \ \large \equiv\ -1\ \equiv\ \color{#c00}{ 3}\ \pmod{\!4}^{\phantom{|}}} = 25[\color{#c00}3] $
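Both worked examples can be confirmed in a couple of lines of Python (its built-in `pow` does modular exponentiation):

```python
# Verify 2^1000 mod 100, directly and via the mod-distributive route
direct = pow(2, 1000, 100)
via_mdl = 4 * pow(2, 998, 25)   # 4 * ((2^1000 / 4) mod 25)
assert direct == via_mdl == 76

# the second worked example: 35^73 * 53^25 mod 100 = 25 * [3] = 75
assert (pow(35, 73, 100) * pow(53, 25, 100)) % 100 == 75
```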
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Order of general- and special linear groups over finite fields. Let $\mathbb{F}_3$ be the field with three elements. Let $n\geq 1$. How many elements do the following groups have?
*
*$\text{GL}_n(\mathbb{F}_3)$
*$\text{SL}_n(\mathbb{F}_3)$
Here GL is the general linear group, the group of invertible n×n matrices, and SL is the special linear group, the group of n×n matrices with determinant 1.
| First question: We solve the problem for "the" finite field $F_q$ with $q$ elements. The first row $u_1$ of the matrix can be anything but the $0$-vector, so there are $q^n-1$ possibilities for the first row.
For any one of these possibilities, the second row $u_2$ can be anything but a multiple of the first row, giving $q^n-q$ possibilities.
For any choice $u_1, u_2$ of the first two rows, the third row can be anything but a linear combination of $u_1$ and $u_2$. The number of linear combinations $a_1u_1+a_2u_2$ is just the number of choices for the pair $(a_1,a_2)$, and there are $q^2$ of these. It follows that for every $u_1$ and $u_2$, there are $q^n-q^2$ possibilities for the third row.
For any allowed choice $u_1$, $u_2$, $u_3$, the fourth row can be anything except a linear combination $a_1u_1+a_2u_2+a_3u_3$ of the first three rows. Thus for every allowed $u_1, u_2, u_3$ there are $q^3$ forbidden fourth rows, and therefore $q^n-q^3$ allowed fourth rows.
Continue. The number of non-singular matrices is
$$(q^n-1)(q^n-q)(q^n-q^2)\cdots (q^n-q^{n-1}).$$
Second question: We first deal with the case $q=3$ of the question. If we multiply the first row by $2$, any matrix with determinant $1$ is mapped to a matrix with determinant $2$, and any matrix with determinant $2$ is mapped to a matrix with determinant $1$.
Thus we have produced a bijection between matrices with determinant $1$ and matrices with determinant $2$. It follows that $SL_n(F_3)$ has half as many elements as $GL_n(F_3)$.
The same idea works for any finite field $F_q$ with $q$ elements. Multiplying the first row of a matrix with determinant $1$ by the non-zero field element $a$ produces a matrix with determinant $a$, and all matrices with determinant $a$ can be produced in this way. It follows that
$$|SL_n(F_q)|=\frac{1}{q-1}|GL_n(F_q)|.$$
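For a concrete check, one can brute-force the $n=2$, $q=3$ case in Python and compare with the two formulas:

```python
from itertools import product

q, n = 3, 2  # brute force is feasible for 2x2 matrices over F_3

def det2(m):
    (a, b), (c, d) = m
    return (a * d - b * c) % q

mats = [((a, b), (c, d)) for a, b, c, d in product(range(q), repeat=4)]
gl = sum(1 for m in mats if det2(m) != 0)   # invertible matrices
sl = sum(1 for m in mats if det2(m) == 1)   # determinant-1 matrices

formula_gl = (q**n - 1) * (q**n - q)        # (q^n - 1)(q^n - q) for n = 2
assert gl == formula_gl == 48
assert sl == gl // (q - 1) == 24
```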
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64",
"answer_count": 2,
"answer_id": 1
} |
bound of Erlang distribution Is there any known polynomial bound of the Erlang distribution? I'd like to say that, given $k$ and $\lambda$ with probability p the r.v. is going to be less than some value x.
| That is simply the cumulative distribution function, given in WP by $\gamma(k,\lambda x)/(k-1)! = 1-\sum_{n=0}^{k-1}\mathrm e^{-\lambda x}(\lambda x)^{n}/n! $, where $\gamma$ is the lower incomplete gamma function.
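The sum form can be checked against direct numerical integration of the Erlang density $\lambda^k x^{k-1} e^{-\lambda x}/(k-1)!$ (plain Python; sample parameters):

```python
import math

def erlang_cdf(x, k, lam):
    # 1 - sum_{n=0}^{k-1} e^{-lam x} (lam x)^n / n!
    return 1.0 - sum(math.exp(-lam * x) * (lam * x) ** n / math.factorial(n)
                     for n in range(k))

def erlang_cdf_numeric(x, k, lam, steps=100000):
    # trapezoidal integration of the Erlang pdf over [0, x]
    def pdf(t):
        return lam ** k * t ** (k - 1) * math.exp(-lam * t) / math.factorial(k - 1)
    h = x / steps
    return h * ((pdf(0.0) + pdf(x)) / 2 + sum(pdf(i * h) for i in range(1, steps)))

k, lam, x = 4, 2.0, 3.0
assert abs(erlang_cdf(x, k, lam) - erlang_cdf_numeric(x, k, lam)) < 1e-6
```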
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Levi-Civita connection of a left-invariant metric How do I compute Levi-Civita connection of a left-invariant metric on a Lie group in a neighbourhood of $1$ by knowing only its Lie algebra and the metric form on it? I know it's possible because a Lie group is determined by its Lie algebra in some small neighbourhood of $1$, but I just found out that I forgot how to compute Levi-Civita connection in this case in practice :) Is there some nice formula for this?
| As per Alexei's suggestion, I'm making this an answer.
The Koszul formula is the usual technique for working out the Levi-Civita connection given a metric. In your particular case, if one restricts to left invariant vector fields, one can use the inner product on the Lie algebra and the Lie algebra structure to work out the Koszul formula.
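Concretely: for left-invariant $X, Y, Z$ the functions $\langle Y, Z\rangle$ etc. are constant, so the three derivative terms in the Koszul formula drop out and it reduces to the purely algebraic identity
$$2\langle \nabla_X Y, Z\rangle = \langle [X,Y], Z\rangle - \langle [Y,Z], X\rangle + \langle [Z,X], Y\rangle,$$
which determines $\nabla_X Y$ from the bracket and the inner product on the Lie algebra alone (this reduction is standard; see e.g. Milnor's paper on curvatures of left-invariant metrics).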
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
RSA: Encrypting values bigger than the modulus Good morning!
This may be a stupid one, but still, I couldn't google the answer, so please consider answering it in 5 seconds and gaining a piece of rep :-)
I'm not doing well with mathematics, and there is a task for me to implement the RSA algorithm. In every paper I've seen, authors assume that the message $X$ being encrypted is less than the modulus $N$, so that $X^e \bmod N$ allows one to fully restore $X$ in the process of decryption.
However, I'm really keen to know: what if my message is BIGGER than the modulus?
| Another possibility is to simply break up your message into bite-size pieces.
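A minimal sketch in Python of the block-splitting idea (textbook RSA with a toy key and no padding, so purely illustrative; the helper name is mine):

```python
def split_blocks(message, n):
    """Split a message into integers, each strictly smaller than the modulus n."""
    block_len = (n.bit_length() - 1) // 8   # bytes per block, so each block < n
    return [int.from_bytes(message[i:i + block_len], "big")
            for i in range(0, len(message), block_len)]

# toy keypair: n = 3233 = 61 * 53, e = 17, d = 2753
n, e, d = 3233, 17, 2753
blocks = split_blocks(b"hello world", n)
cipher = [pow(b, e, n) for b in blocks]       # encrypt each block separately
decrypted = [pow(c, d, n) for c in cipher]    # decrypt each block separately
assert decrypted == blocks
```

To reassemble the message you would encode each block back into a fixed number of bytes; real systems instead use a padding scheme such as OAEP rather than raw blockwise RSA.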
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Maximum Likelihood Estimator for Multivariate Bernoulli I am working on deriving Naive Bayes for document classification.
Each document is represented by a binary vector $x^i$ where $i=1,..,N$ for N documents. In this vector a cell is set to 1 if that cell representing a word is present at least once in the document, and left zero otherwise. Let's say there are 50,000 words hence each binary vector has 50,000 elements.
The joint distribution for Naive Bayes is
$p(x_1,...,x_{50000}) = \prod_{d=1}^{50000} p(x_d)=\prod_{d=1}^{50000} \alpha_d^{x_d}(1-\alpha_d)^{1-x_d}$
Likelihood is
$L(\theta) = \prod_{i=1}^N \prod_{d=1}^{50000} p(x_d^i) = \prod_{i=1}^N \prod_{d=1}^{50000} \alpha_d^{x_d^i}(1-\alpha_d)^{1-x_d^i}$
The text I am reading suggests maximum likelihood solution for $\alpha_d$ is $\alpha_d = \frac{N_d}{N}$, where $N_d$ is the total of '1's for a dimension (word) across all documents, and N is the total number of documents. I am guessing this is obtained by taking derivative of the likelihood function, setting the result to zero, then solve for $\alpha_d$. One trick is I guess taking log of both sides, but even then, the algebra gets hairy pretty fast. Maybe I am missing something else. I would appreciate if someone could help with this derivation.
| With the loglikelihood
$$LL(\alpha_1,\ldots,\alpha_{50000}) = \sum_{i=1}^N \sum_{d=1}^{50000} {x_d^i}\log(\alpha_d)+(1-x_d^i)\log(1-\alpha_d) \; ,$$
it's pretty easy, you get the following equations to solve:
$$\sum_{i=1}^N\left(\frac{x_d^i}{\alpha_d}-\frac{1-x_d^i}{1-\alpha_d}\right) = 0$$
or after summing over $i$
$$\frac{N_d}{\alpha_d}-\frac{N-N_d}{1-\alpha_d} = 0 \; .$$
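As a sanity check that the stationary point is indeed the maximizer, one can simulate data and compare a grid-search maximizer of the log-likelihood with $N_d/N$ (simulation parameters are arbitrary):

```python
import math
import random

random.seed(0)
N, alpha_true = 5000, 0.3
xs = [1 if random.random() < alpha_true else 0 for _ in range(N)]
Nd = sum(xs)

def loglik(a):
    # log-likelihood for a single dimension d
    return Nd * math.log(a) + (N - Nd) * math.log(1 - a)

mle = Nd / N                        # closed-form solution alpha_d = N_d / N
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=loglik)        # numerical maximizer on a grid
assert abs(best - mle) < 1e-3
```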
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find a way from 2011 to 2 in four steps using a special movement USAMTS 6/2/22 states:
The roving rational robot rolls along the rational number line. On each turn, if the
robot is at $\frac{p}{q}$, he selects a positive integer $n$ and rolls to $\frac{p+nq}{q+np}$. The robot begins at the rational number 2011. Can the roving rational robot ever reach the rational number 2?
Now, of course, I know it is true (I proved it and got the answer correct). However, I'm interested to see whether or not the robot can move from 2011 to 2 in four steps. I know it must move an even number of steps and cannot move in two steps, but that's as far as I was able to prove. I was able to find a set of six steps, so I know six is possible ($\displaystyle 2011 \rightarrow \frac{1}{671} \rightarrow 111 \rightarrow \frac{1}{23} \rightarrow 7 \rightarrow \frac{1}{3} \rightarrow 2$ works).
Is a set of four steps even possible? I figured out that for all $n > 2$, it is possible to move from $\frac{n-2}{2n-1}$ to 2 (with the robot using that same $n$). However, I have not been able to extend all the way from 2011 to that, even with a lot of time brute forcing in Mathematica.
| Not an answer, but maybe some help. It turns out your steps commute. That is, instead of using $(1007,133,29,10,5,5)$ you use $(5,5,10,29,133,1007)$ (or any other order) you get to $2$ as well. If the integers you use are $(n_1,n_2,n_3,n_4),$ after the four iterations you are at $\frac{2011(1+e_2+e_4)+(e_1+e_3)}{(1+e_2+e_4)+(e_1+e_3)2011}$, where $e_i$ is the $i^{\text{th}}$ degree symmetric polynomial. For example, $e_2=n_1n_2+n_1n_3+n_1n_4+n_2n_3+n_2n_4+n_3n_4$ Setting this equal to $2$ gives $2009(1+e_2+e_4)=4021(e_1+e_3)$. This will allow you to set bounds on the search-if the $n$'s are too large, $e_4$ will be too much larger than $e_3$
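The six-step path in the question, and the commuting claim above, can both be verified with exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction
from itertools import permutations

def roll(x, n):
    # one robot move: p/q -> (p + n q)/(q + n p)
    p, q = x.numerator, x.denominator
    return Fraction(p + n * q, q + n * p)

steps = [1007, 133, 29, 10, 5, 5]   # the n's realizing the path in the question
x = Fraction(2011)
for n in steps:
    x = roll(x, n)
assert x == 2

# the moves commute: every ordering of the same n's also reaches 2
for perm in permutations(steps):
    y = Fraction(2011)
    for n in perm:
        y = roll(y, n)
    assert y == 2
```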
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 1
} |
Simple set exercise seems not so simple Exercise about sets from Birkhoff's "Modern Applied Algebra".
Prove that for operation $\ \Delta $ , defined as
$\ R \Delta S = (R \cap S^c) \cup (R^c \cap S) $
following is true:
$\ R \Delta ( S \Delta T ) = ( R \Delta S ) \Delta T $
($\ S^c $ is complement of $\ S $)
It's meant to be very simple, being placed in the first excercise block of the book. When I started to expand both sides of equations in order to prove that they're equal, I got this monster just for the left side:
$\ R \Delta ( S \Delta T ) = \Bigl( R \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr)^c \Bigr) \cup \Bigl(R^c \cap \bigl( (S \cap T^c) \cup (S^c \cap T) \bigr) \Bigr) $
For the right:
$\ ( R \Delta S ) \Delta T = \Bigl(\bigl( (R \cap S^c) \cup (R^c \cap S) \bigr) \cap T^c \Bigr) \cup \Bigl( \bigl( (R \cap S^c) \cup (R^c \cap S) \bigr)^c \cap T \Bigr) $
I've tried to simplify this expression, tried to somehow rearrange it, but no luck. Am I going the wrong way? Or what should I do with what I have?
| joriki's answer it the best way to understand this conceptually. Still, it's perfectly possible to continue formally the way you started: simply repeatedly apply De Morgan's laws:
$(A \cup B)' = A' \cap B'$ and $(A \cap B)' = A' \cup B'$.
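Since complements here are relative to a fixed universe, the identity can also be brute-forced over all subsets of a small universe, a useful sanity check alongside the De Morgan computation:

```python
from itertools import product

U = set(range(5))  # small universe; complements are taken relative to U

def sym(r, s):
    # R delta S = (R ∩ S^c) ∪ (R^c ∩ S)
    rc, sc = U - r, U - s
    return (r & sc) | (rc & s)

subsets = [{i for i in U if mask >> i & 1} for mask in range(2 ** len(U))]
for r, s, t in product(subsets, repeat=3):
    assert sym(r, sym(s, t)) == sym(sym(r, s), t)
```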
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Where does the notation $\mathrm{Ad}(U)$ for $a\mapsto UaU^*$ come from? I have often seen, in the context of operator theory and operator algebras, the notation $\mathrm{Ad}(U)a=UaU^*$, where $U$ is a unitary operator on a Hilbert space $H$ and $a$ is a bounded linear operator on $H$. I have no idea what "Ad" stands for, where/how this notation came into common use, nor whether it fits into a more general context (e.g., for similarities or other automorphisms outside of the context of operator theory). Some Google searching revealed a use of "$\mathrm{Ad}$" in the theory of Lie groups that doesn't quite match with the above, but might have a common origin.
Where does "$\mathrm{Ad}$" come from, especially in the context of $\mathrm{Ad}(U)a=UaU^*$?
| Answered in the comments:
http://en.wikipedia.org/wiki/Adjoint_representation_of_a_Lie_group – Qiaochu Yuan Apr 23 '11 at 18:27
I'm pretty sure that @Qiaochu's right. I'd even go so far as to say that the motivation stems from the finite-dimensional case $A=M_n(\mathbb C)$, where the unitary group $U(n)=\mathcal U(A)$ leaves the subspace of anti-self-adjoint matrices (= its Lie algebra) invariant. – t.b. Apr 23 '11 at 18:47
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Olympiad calculus problem This problem is from a qualifying round in a Colombian math Olympiad, I thought some time about it but didn't make any progress. It is as follows.
Given a continuous function $f : [0,1] \to \mathbb{R}$ such that $$\int_0^1{f(x)\, dx} = 0$$ Prove that there exists $c \in (0,1) $ such that $$\int_0^c{xf(x) \, dx} = 0$$
I will appreciate any help with it.
| Let $G(x) = \int_{0}^{x} t f(t) \, \mathrm{d}t$, and suppose that $G(c) \neq 0$ for all $c \in (0, 1)$. Then by IVT, either
*
*(Case 1) $G > 0$ on $(0, 1)$, or
*(Case 2) $G < 0$ on $(0, 1)$.
However,
\begin{align*}
0
= \int_{0}^{1} f(x) \, \mathrm{d}x
= \int_{0}^{1} \frac{G'(x)}{x} \, \mathrm{d}x
= G(1) + \int_{0}^{1} \frac{G(x)}{x^2} \, \mathrm{d}x.
\end{align*}
This is $ > 0 $ in Case 1 and $ < 0$ in Case 2, a contradiction. (The boundary term at $x=0$ in the integration by parts vanishes, since $|G(x)| \leq \tfrac{1}{2}\,x^{2}\max_{[0,1]}|f|$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 3,
"answer_id": 2
} |
Inclusion-exclusion principle: Number of integer solutions to equations The problem is:
Find the number of integer solutions to the equation
$$
x_1 + x_2 + x_3 + x_4 = 15
$$
satisfying
$$
\begin{align}
2 \leq &x_1 \leq 4, \\
-2 \leq &x_2 \leq 1, \\
0 \leq &x_3 \leq 6, \text{ and,} \\
3 \leq &x_4 \leq 8 \>.
\end{align}
$$
I have read some papers on this question, but none of them explain clearly enough. I am especially confused when you must decrease the total amount of solutions to the equation—with no regard to the restrictions—from the solutions that we don't want. How do we find the intersection of the sets that we don't want? Either way, in helping me with this, please explain this step.
| I'm not sure I can expand on PEV's hints in a comment, so I'll make it an answer.
You need to know the number of solutions of $$u_1+u_2+\dots+u_r=n$$ when the only restriction on the variables is that they be non-negative integers. Imagine $n+r-1$ dots in a line, and circle $r-1$ of them. The uncircled dots are $n$ in number, and the circled ones divide the uncircled ones into $r$ groups (some of which may be empty), so you get $r$ non-negative integers adding up to $n$. So the question becomes, how many ways can you choose which $r-1$ of the $n+r-1$ dots to circle? Unfortunately, PEV wrote 18-choose-3, where I think what's wanted is 15-choose-3, but now you should see how to get that part of the answer.
Then you ask how to use inclusion-exclusion. It isn't clear whether you mean that you don't see how to get a formula for the size of the union by using inc-excl, or whether you mean that you can write down a formula but don't see how to find the sizes of the $A_i$ and the various intersections that arise, so it's a little hard to help you here. I'll assume it's the second suggestion. So for PEV's $A_1$, let $v_1=y_1-3$, then you have $v_1+y_2+y_3+y_4=9$ and the variables are non-negative, so the previous paragraph applies. Similarly, for the intersection of $A_1$ and $A_2$, let $v_1=y_1-3$ and $v_2=y_2-4$, so you get $v_1+v_2+y_3+y_4=5$ with all variables non-negative.
Can you take it from there?
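To make the two counting methods concrete, here is the brute-force count next to the inclusion-exclusion count after the shift to non-negative variables (helper names are mine):

```python
from itertools import combinations, product
from math import comb

# brute force over the stated ranges
brute = sum(1 for x1, x2, x3, x4 in product(range(2, 5), range(-2, 2),
                                            range(0, 7), range(3, 9))
            if x1 + x2 + x3 + x4 == 15)

# after shifting each variable to start at 0:
# y1 <= 2, y2 <= 3, y3 <= 6, y4 <= 5 with y1 + y2 + y3 + y4 = 12
caps, total = [2, 3, 6, 5], 12

def nonneg(m, r):
    # solutions of u1 + ... + ur = m in non-negative integers
    return comb(m + r - 1, r - 1) if m >= 0 else 0

ie = 0
for k in range(5):
    for idx in combinations(range(4), k):
        # A_i is the "bad" event y_i >= caps[i] + 1
        ie += (-1) ** k * nonneg(total - sum(caps[i] + 1 for i in idx), 4)

assert brute == ie
print(brute)  # 30
```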
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Connection 1-form on Lie group If we regard $S^{2n-1} \to \mathbb{CP}^{n-1}$ as a principal $S^1$ bundle, how do I show that $$A=\frac{1}{2\pi}\sum_i(x_i dy_i-y_i dx_i),$$
where $(x_1,y_1,\dotsc,x_n,y_n)$ are the standard coordinates on $\mathbb{R}^{2n}$ restricted to $S^{2n-1}$, satisfies the following relation:
$$(R_a)^*A = \mathrm{Ad}(a^{-1}) A$$ is true for all $a \in S^1$?
| Well, the action is abelian, so $Ad(a^{-1})$ is just the identity. Now, applying the right action by $a\in S^1$ to your connection, what kind of shape on $S^{2n-1}$ does it trace out? Try thinking about the case when $n=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating the limit $\lim \limits_{x \to \infty} \frac{x^x}{(x+1)^{x+1}}$ How do you evaluate the limit
$$\lim_{x \to \infty} \frac{x^x}{(x+1)^{x+1}}?$$
| How about using squeeze theorem? Try squeezing this as $0 \leq \frac{x^x}{(x+1)^{x+1}} \leq \frac{x^x}{x^{x+1}} = \frac{1}{x}$.
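The bound is easy to see numerically, computing in log space to avoid overflow:

```python
import math

def f(x):
    # x^x / (x+1)^(x+1), computed via exponents of logs to avoid huge powers
    return math.exp(x * math.log(x) - (x + 1) * math.log(x + 1))

for x in (10.0, 100.0, 1000.0, 10000.0):
    assert 0 < f(x) <= 1 / x    # the squeeze bound 0 < f(x) <= 1/x
assert f(10000.0) < 1e-4
```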
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/34983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
A Binomial Coefficient Sum: $\sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l}$ In my work on $f$-vectors in polytopes, I ran across an interesting sum which has resisted all attempts of algebraic simplification. Does the following binomial coefficient sum simplify?
\begin{align}
\sum_{m = 0}^{n} (-1)^{n-m} \binom{n}{m} \binom{m-1}{l} \qquad l \geq 0
\end{align}
Update: After some numerical work, I believe a binomial sum orthogonality identity is at work here because I see only $\pm 1$ and zeros. Any help would certainly be appreciated.
I take $\binom{-1}{l} = (-1)^{l}$, $\binom{m-1}{l} = 0$ for $0 < m \leq l$, and the standard definition otherwise.
Thanks!
| $\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,}
\newcommand{\dd}{{\rm d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,{\rm e}^{#1}\,}
\newcommand{\fermi}{\,{\rm f}}
\newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{{\rm i}}
\newcommand{\iff}{\Longleftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\pp}{{\cal P}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,}
\newcommand{\sech}{\,{\rm sech}}
\newcommand{\sgn}{\,{\rm sgn}}
\newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
$\ds{\sum_{m = 0}^{n}\pars{-1}^{n - m}{n \choose m}{m - 1 \choose \ell}:\
{\large ?}.\qquad\ell \geq 0}$
\begin{align}
&\color{#66f}{\large\sum_{m = 0}^{n}\pars{-1}^{n - m}{n \choose m}
{m - 1 \choose \ell}}
\\[3mm]&=\pars{-1}^{n}\sum_{m = 0}^{n}\pars{-1}^{m}{n \choose m}
\oint_{0\ <\ \verts{z}\ =\ a\ <\ 1}{\pars{1 + z}^{m - 1} \over z^{\ell + 1}}
\,{\dd z \over 2\pi\ic}
\\[3mm]&=\pars{-1}^{n}\oint_{0\ <\ \verts{z}\ =\ a\ <\ 1}
{1 \over z^{\ell + 1}\pars{1 + z}}
\sum_{m = 0}^{n}{n \choose m}\pars{-z - 1}^{m}\,{\dd z \over 2\pi\ic}
\\[3mm]&=\pars{-1}^{n}\oint_{0\ <\ \verts{z}\ =\ a\ <\ 1}
{1 \over z^{\ell + 1}\pars{1 + z}}
\bracks{1 + \pars{-z - 1}}^{n}\,{\dd z \over 2\pi\ic}
\\[3mm]&=\oint_{0\ <\ \verts{z}\ =\ a\ <\ 1}{1 \over z^{\ell - n + 1}\pars{1 + z}}
{\dd z \over 2\pi\ic}
=\sum_{k = 0}^{\infty}\pars{-1}^{k}\oint_{0\ <\ \verts{z}\ =\ a\ <\ 1}{1 \over z^{\ell - n - k + 1}}{\dd z \over 2\pi\ic}
\\[3mm]&=\sum_{k = 0}^{\infty}\pars{-1}^{k}\,\delta_{\ell - n,k}
=\color{#66f}{\large\left\lbrace\begin{array}{lcl}
\pars{-1}^{\ell - n} & \mbox{if} & \ell \geq n
\\[2mm]
0&&\mbox{otherwise}
\end{array}\right.}
\end{align}
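Under the stated conventions for $\binom{-1}{\ell}$ and $\binom{m-1}{\ell}$, the boxed result checks out numerically:

```python
from math import comb

def bc(m, l):
    # the question's convention: C(-1, l) = (-1)^l, otherwise the usual coefficient
    if m == -1:
        return (-1) ** l
    return comb(m, l) if m >= l else 0

for n in range(0, 8):
    for l in range(0, 12):
        s = sum((-1) ** (n - m) * comb(n, m) * bc(m - 1, l) for m in range(n + 1))
        expected = (-1) ** (l - n) if l >= n else 0
        assert s == expected
```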
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Why doesn't Cantor's diagonal argument also apply to natural numbers? In my understanding of Cantor's diagonal argument, we start by representing each of a set of real numbers as an infinite bit string.
My question is: why can't we begin by representing each natural number as an infinite bit string? So that 0 = 00000000000..., 9 = 1001000000..., 255 = 111111110000000...., and so on.
If we could, then the diagonal argument would imply that there is a natural number not in the natural numbers, which is a contradiction.
| If you represent a natural number as an infinite string, the string will become identically $0$ after a certain point. If you think it through, the "diagonal argument" in this case doesn't produce a natural number; it will produce a string with infinitely many $1$s.
On the other hand, you can consider possibly infinite binary strings --- i.e. strings in which there can be infinitely many $1$; this is one way to think of
the set of $2$-adic numbers, which
is indeed an uncountable extension of the set of natural numbers (as one sees using the precise diagonal argument that you suggest).
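Concretely: list natural number $n$ in row $n$ as its binary expansion, least significant bit first. Bit $n$ of $n$ is always $0$ (since $n < 2^n$), so flipping the diagonal produces the all-ones string, which is not the expansion of any natural number:

```python
# bit n of the number n is always 0, because n < 2^n;
# flipping the diagonal therefore yields the all-ones string,
# which has infinitely many 1s and so is not a natural number
for n in range(10000):
    assert (n >> n) & 1 == 0
```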
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 2,
"answer_id": 0
} |
How calculate the number of possible different variations? I feel stupid, but I don't know how to calculate how many possible variations I can get from for example three spaces (1|2|3) Normally I'd say: "well that is easy, just take the number of spaces (=3) and 3^3"
But that doesn't work with two spaces, like "1|2", there are only 2 different ways on how to arrange two numbers, but 2^2 would be 4.
(I want to know how many spaces I need to get ~25000 possible variations)
| By variations, do you mean..
..placement without repetition
There are $n! = n*(n-1)* ... * 3 * 2 *1$ ways of ordering $n$ elements. So for 3, you have 3*2*1 = 6.
$8! = 8*7*6*5*4*3*2*1 = 40320$, which is as close to 25000 as you can get with a factorial.
..placement with repetition
Your original guess was right; the answer is $n^n$.
e.g. for 3 items, you have 3 choices for the first space, 3 choices for the second space, and 3 choices for the final space, so 3*3*3 = 27. For 2 also, i.e. 2*2 = 4 ways ((1|1), (1|2), (2|1), (2|2)).
$6^6 = 46656$ is as close to 25000 as you'll get.
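For the ~25000 target, a two-line search confirms the values quoted above:

```python
from math import factorial

# closest n! and n^n to the 25000 target from the question
facts = {n: factorial(n) for n in range(1, 10)}
powers = {n: n ** n for n in range(1, 10)}

assert min(facts, key=lambda n: abs(facts[n] - 25000)) == 8    # 8! = 40320
assert min(powers, key=lambda n: abs(powers[n] - 25000)) == 6  # 6^6 = 46656
```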
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
free groups: $F_X\cong F_Y\Rightarrow|X|=|Y|$ I'm reading Grillet's Abstract Algebra. Let $F_X$ denote the free group on the set $X$. I noticed on wiki the claim $$F_X\cong\!\!F_Y\Leftrightarrow|X|=|Y|.$$ How can I prove the right implication (find a bijection $f:X\rightarrow Y$), i.e. that the rank is an invariant of free groups?
I am hoping for a simple and short proof, having all the tools of Grillet at hand. Rotman (Advanced Modern Algebra, p.305) proves it only for $|X|<\infty$, Bogopolski's (Introduction to Group Theory, p.55) proof seems (unnecessarily?) complicated, and Lyndon & Schupp's (Combinatorial Group Theory, p.1) proof I don't yet understand. It's the very first proposition in the book; in the proof, they say:
The subgroup $N$ of $F$ generated by
all squares of elements in $F$ is
normal, and $F/N$ is an elementary
abelian $2$-group of rank $|X|$. (If
$X$ is finite, $|F/N|=2^{|X|}$ finite;
if $|X|$ is infinite, $|F/N|=|X|$). $\square$
Is $N:=\langle w^2;w\in F\rangle$? What is an abelian $2$-group? Elementary? What and how does the above quote really prove? I'm guessing a free abelian group on $X$ is $\langle X|[X,X]\rangle\cong\bigoplus\limits_{x\in X} \mathbb{Z}$?
Can an isomorphism $\varphi:F_X\rightarrow F_Y$ not preserve the length of words? At least one letter words?
| The first part of your question is relatively easy to get. Say $f$ is the isomorphism between the two free groups. Then you need to deduce that $f$ induces a bijection (as a function, not as a homomorphism) between the elements of $X$ and a free set (one with no nontrivial relations) in $F_Y$, which has the same cardinality as $X$. Suppose $f(X)$ were not a free set in $F_Y$, i.e. there were $a_1,a_2,\ldots,a_k$ in $f(X)\cup f(X)^{-1}$ such that $a_1a_2\cdots a_k=1$; applying $f^{-1}$ you would end up with a relation in $F_X$, which is supposed to be free. So you have $|Y|<|X|$, and the rest follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
Reference Request: Linear Forms in Logarithms Does anyone know a good book for learning about linear forms in logarithms especially one that is motivated by solving Diophantine equations with it? I know there's a chapter in Langs book but it doesn't have any applications so that's not the sort of thing I was looking for. Thanks very much for any recommendations!
| Yo should take a look at Alan Baker's Transcendental Number Theory. In chapters two and three he develops the theory of linear forms in logarithms and then he applies it to the study of some Diophantine equations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Express $z$ in terms of $x$ and $y$, i.e., find $z= f(x,y)$ I've been banging my head against the wall for a while now:
$x = s^2 - t^2$
$y = s + t$
$z = s^2 + 3t$
Express $z$ in terms of $x$ and $y$.
| $$x=s^{2}-t^{2}=(s+t)(s-t)$$
so
$$s+t=\frac{x}{s-t}$$
$$s-t=\frac{x}{s+t}=\frac{x}{y}$$
$$(s+t) + (s-t) = 2s=\frac{x}{s-t}+\frac{x}{y}$$
$y=s+t$, so $t=y-s$ and therefore:
$$2s=\frac{x}{2s-y}+\frac{x}{y}=x(\frac{1}{2s-y}+\frac{1}{y})$$
$$2s=x(\frac{y}{2sy-y^2}+\frac{2s-y}{2sy-y^2})=x(\frac{2s}{2sy-y^2})$$
$$1=\frac{x}{2sy-y^2}$$
$$2sy-y^2=x$$
$$2sy=x+y^2$$
$$s=\frac{x+y^2}{2y}$$
From $z=s^2+3t$ we have:
$$z=(\frac{x+y^2}{2y})^{2}+3t$$
$y=s+t$ so $t=y-\frac{x+y^2}{2y}$ and finally:
$$z=(\frac{x+y^2}{2y})^{2}+3(y-\frac{x+y^2}{2y})$$
Pretty sure this is correct...
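The final formula can be verified with exact rational arithmetic (note that recovering $s$ requires $y = s + t \neq 0$):

```python
from fractions import Fraction as F

def z_of_xy(x, y):
    # z = ((x + y^2)/(2y))^2 + 3(y - (x + y^2)/(2y)); needs y != 0
    s = (x + y * y) / (2 * y)
    return s * s + 3 * (y - s)

for s, t in [(F(2), F(1)), (F(5), F(-3)), (F(1, 2), F(1, 3)), (F(-4), F(7))]:
    if s + t == 0:        # the recovery of s breaks down when y = s + t = 0
        continue
    x, y, z = s * s - t * t, s + t, s * s + 3 * t
    assert z_of_xy(x, y) == z
```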
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to prove that the Derivative of the Annuity Present Value Factor is negative I work with the Annuity present value factor, which I want to differentiate with respect to r:
$$\sum_{t=T_1}^{T_2} (1+r)^{-t}$$ which equals $$\frac{(1 + r)^{1 - T_1} - (1 + r)^{-T_2}}{r}$$
If I use the derivative of the sum expression, I can easily prove that the derivative with respect to r is $<0$. If I do the same with the other (but equal) term, after some transformations I am stuck with $$(1 + r)^{1 + T_2} \cdot (1 + r \cdot T_1) < (1 + r)^{T_1} \cdot (1 + r \cdot(1+ T_2))$$ at which point I don't see how to proceed. How do I solve this inequality? There must be some math rules or relations I am missing.
EDIT: The conditions are $0<r<1,\quad T_1\geq 1 \quad T_1<T_2$
EDIT: I tried to follow @Ross's instructions. I then end up with $$\frac{(1+r\cdot(1+T_2))}{(1+r\cdot T_1)} < (1+r)^{1+T_2-T_1}$$
Unfortunately I still don't know how to proceed from there on.
| When I fed it to Alpha the numerator comes out $(1+r+rT_2)(1+r)^{-1-T_2}-(1+rT_1)(1+r)^{-T_1}$. If you want this to be less than zero, the inequality comes out in the opposite sense from yours. If you divide by $(1+r)^{T_1}$ you get $(1+r+rT_2)\lt(1+rT_1)(1+r)^{1+T_2-T_1}$, then if you use $(1+r)^{1+T_2-T_1}\gt 1+r(1+T_2-T_1)$ you can get the RHS is greater than $(1+rT_1)(1+r(1+T_2-T_1))$ which is still greater than the LHS. As all the steps are reversible, we know the original is less than $0$
Added: We want to prove $(1+r+rT_2)\lt(1+rT_1)(1+r)^{1+T_2-T_1}$. As $(1+r)^x \gt 1+xr$ for $x>1$, we have $(1+rT_1)(1+r)^{1+T_2-T_1} \gt (1+rT_1)\bigl(1+r(1+T_2-T_1)\bigr)$
$=1+r+rT_2-rT_1+rT_1+r^2T_1+r^2T_1T_2-r^2T_1^2$
$=1+r+rT_2+r^2T_1(1+T_2-T_1)\gt1+r+rT_2,$ since $T_2\gt T_1$.
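A quick numerical cross-check of the closed form against the sum, and of the sign of the derivative (central finite differences; sample parameters within the stated domain):

```python
def annuity(r, T1, T2):
    # the annuity present value factor as a direct sum
    return sum((1 + r) ** (-t) for t in range(T1, T2 + 1))

def closed(r, T1, T2):
    # the closed form ((1+r)^(1-T1) - (1+r)^(-T2)) / r
    return ((1 + r) ** (1 - T1) - (1 + r) ** (-T2)) / r

h = 1e-7
for r in (0.01, 0.3, 0.9):
    for T1, T2 in ((1, 5), (2, 30), (1, 2)):
        assert abs(annuity(r, T1, T2) - closed(r, T1, T2)) < 1e-9
        deriv = (closed(r + h, T1, T2) - closed(r - h, T1, T2)) / (2 * h)
        assert deriv < 0   # the factor is decreasing in r
```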
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Gradient of a Harmonic Function I was asked the following vector calculus problem:
Let $D$ be the unit ball and let $S$ be the unit sphere in $\mathbb{R}^3$. Suppose that $F:\mathbb{R}^3\rightarrow \mathbb{R}^3$ is a $C^1$ vector field on some open neighborhood of $D$ which satisfies:
$(i) \nabla\times F=0$
$(ii) \nabla\cdot F=0$
$(iii)$ On $S$, $F$ is orthogonal to the radial vector.
Prove that $F=0$ on all of $D$.
Conditions $(i)$ and $(ii)$ imply that $F=\nabla g$ for some $g:\mathbb{R}^3\rightarrow \mathbb{R}$ where $g$ must be harmonic as well.
I know one solution (see end), however my initial instinct was to try to use the max/min property of harmonic functions, and I couldn't get it to work. Since the gradient is always orthogonal to the sphere, there must be a point on the sphere where it is $0$. (Hairy ball) If that was a local max or min in $\mathbb{R}^3$ we would be done, by taking a small neighborhood around it. If it is a saddle point this doesn't work. (We know that it must be a local max/min on $S$ since it is harmonic)
My question is: Is there any way to modify this approach, and solve the problem?
Thanks!
Other Solution: Here is one solution that first uses the fact that the radial vector is orthogonal, and then applies Gauss's Divergence theorem to the function $gF$. ($\nabla g=F$) That is $$0=\iint_S (gF\cdot n)dS=\iiint_D \nabla\cdot (gF)dV=\iiint_D \|F\|^2dV,$$ and since the integrand on the right hand side is non-negative, continuous and integrates to give zero, it must be zero.
| Your second proof is a good one and entirely appropriate within vector calculus. The first attempt has a gap (I think).
A third proof relies on Hopf's Lemma (commonly taught in graduate level classes on partial differential equations) which implies here that if a function $u$ satisfying $\Delta u \le 0$ in $D$ attains a strict minimum at $z \in \partial D = S$, then the outer normal derivative at that point satisfies $\nu \cdot \nabla u(z) < 0$. Applying this with $g = u$, it follows that $g$ must attain its minimum in the interior of $D$ and hence must be constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why are addition and multiplication commutative, but not exponentiation? We know that the addition and multiplication operators are both commutative, and the exponentiation operator is not. My question is why.
As background there are plenty of mathematical schemes that can be used to define these operators. One of these is hyperoperation
where
$H_0(a,b) = b+1$ (successor op)
$H_1(a,b) = a+b$ (addition op)
$H_2(a,b) = ab $ (multiplication op)
$H_3(a,b) = a^b$ (exponentiation op)
$H_4(a,b) = a\uparrow \uparrow b$ (tetration op: $a^{(a^{(...a)})}$ nested $b$ times )
etc.
Here it is not obvious to me why $H_1(a,b)=H_1(b,a)$ and $H_2(a,b)=H_2(b,a)$ but not $H_3(a,b)=H_3(b,a)$
Can anyone explain why this symmetry breaks, in a reasonably intuitive fashion?
Thanks.
| Edit. Okay, it turns out this idea has been studied before. See here.
I basically got this idea from user52541's answer. So no claim to originality.
Anyway. Define a sequence of operations $\langle n \rangle : (\mathbb{R}^+)^2 \rightarrow \mathbb{R^+}$ as follows.
*
*For all $x,y \in \mathbb{R}^+$, define $x\langle 0\rangle y = x+y.$
*For all $x,y \in \mathbb{R}^+$ and all $n \in \mathbb{N}$, define $x\langle n+1\rangle y=\exp(\log x\langle n\rangle\log y).$
Then $\langle 0 \rangle$ is addition, and $\langle 1 \rangle$ is multiplication. But $\langle 2 \rangle$ is not exponentiation. Furthermore, we can prove that for all $n$ it holds that $\langle n \rangle$ is both commutative and associative.
Remark. To prove commutativity, the functions $\exp$ and $\log$ don't even need to be inverses of one another. They can just be arbitrary functions. To prove associativity, we need a bit more. In particular, we require that $\exp$ and $\log$ are inverses. However, we're still not really using any of their properties, like $\log(xy) = \log x + \log y$ etc.
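The commutativity claim is easy to sanity-check numerically; a small sketch of the recursion (for $\langle 2\rangle$ this needs $x,y>1$ so the iterated logarithms stay in the domain):

```python
import math

def op(n, x, y):
    # x <n> y via the recursion: <0> is +, and <n+1> conjugates <n> by exp/log.
    if n == 0:
        return x + y
    return math.exp(op(n - 1, math.log(x), math.log(y)))

print(op(1, 4.0, 9.0))                   # ~36.0: <1> is multiplication
print(op(2, 4.0, 9.0), op(2, 9.0, 4.0))  # equal: exp(log 4 * log 9)
```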
Question 1. Do we have existence of inverses at each level (we have negatives at level $0$, reciprocals at level $1$, and WHAT, if anything, at level $2$)?
Question 2. Does a form of distributivity hold?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73",
"answer_count": 13,
"answer_id": 1
} |
Are there distinct primes $p,q$ satisfying $pq=(2^r-1)(p+q)-5$? We let $p\neq q$ be odd prime numbers and $r$ be integer $>2$.
Are there such $p,q$ satisfying $pq=(2^r-1)(p+q)-5$?
This is clear from here that,
$q(p-2^r+1)=(2^r-1)p-5$,
and $p(q-2^r+1)=(2^r-1)q-5$.
Thanks.
| Over at tomerg's other, closely-related question The form $xy+5=a(x+y)$ and its solutions with $x,y$ prime I found $p=17179929661$, $q=4880269588100161$, $r=34$ is a solution.
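The arithmetic identity, at least, is quick to check with exact integer arithmetic (the primality of $p$ and $q$ is taken on trust here):

```python
p, q, r = 17179929661, 4880269588100161, 34
print(p * q == (2**r - 1) * (p + q) - 5)  # True
```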
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Locally non-enumerable dense subsets of R Today after lunch I was hungry for math problems so I started begging for some at the department and finally someone threw me this: Can $\mathbb{R}$ be partitioned into two non-countable dense subsets? It was a good starter, after a few minutes I got: Take the irrationals less than $0$ and the rationals greater than $0$, this is one subset, the complement of course works. Then the following question came into my mind: Can $\mathbb{R}$ be partitioned into two locally-non-countable dense subsets?
I'm still hungry
| $\mathbb{Q}$ has $2^{\aleph_0}$ many cosets in the additive group $\mathbb{R}$. Therefore $\mathbb{R}$ can be partitioned into 2 sets each of which is a union of $2^{\aleph_0}$ many cosets of $\mathbb{Q}$. Each coset is dense, so each of these sets is locally uncountable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 0
} |
Validate my reasoning for this logical equivalence I've basically worked out how to do this question but not sure about my reasoning:
Question:
Show
1) $(p \rightarrow q) \land (q \rightarrow (\lnot p \lor r))$
is logically equivalent to:
2) $p \rightarrow (q \land r)$
and I am given this equivalence as a hint: $u \rightarrow v$ is logically equivalent to $(\lnot u) \lor v$
My reasoning:
From statement (1): $(\lnot p \lor r)$ is equivalent to $(p \rightarrow r)$ (By the hint given)
Hence statement (1) becomes: $(p \rightarrow q) \land (q \rightarrow (p \rightarrow r))$
We assume $p$ is true, therefore $q$ is true
So $p$ also implies $r$
Therefore $p$ implies $q$ and $p$ also implies $r$
Hence $p \rightarrow (q \land r)$
I understand the basic ideas but I'm really confused as to how I can write it all down logically and clearly
| Here is yet another way to do this, this time using the Dijkstra-Scholten-Gries-etc. calculational proof format. We will start with the most complex side, then (as in other answers to this question) expand $P \rightarrow Q$ to $\lnot P \lor Q$, and then simplify and see where that leads us.
\begin{align}
& (p \rightarrow q) \land (q \rightarrow \lnot p \lor r) \\
\equiv & \;\;\;\;\;\text{"expand $\rightarrow$, twice"} \\
& (\lnot p \lor q) \land (\lnot q \lor \lnot p \lor r) \\
\equiv & \;\;\;\;\;\text{"simplify: factor out $\lnot p$, using the fact that $\lor$ distributes over $\land$"} \\
& \lnot p \lor (q \land (\lnot q \lor r)) \\
\equiv & \;\;\;\;\;\text{"simplify: use $q$ in other side of $\land$"} \\
& \lnot p \lor (q \land (\lnot \textrm{true} \lor r)) \\
\equiv & \;\;\;\;\;\text{"simplify"} \\
& \lnot p \lor (q \land r) \\
\equiv & \;\;\;\;\;\text{"reintroduce $\rightarrow$ -- inspired by the shape of our goal"} \\
& p \rightarrow q \land r \\
\end{align}
This proves the equivalence.
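Since only three propositional variables are involved, the equivalence can also be confirmed by brute force over all eight valuations; a throwaway check:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def equivalent():
    # Compare the two formulas on every truth assignment to (p, q, r).
    for p, q, r in product([False, True], repeat=3):
        lhs = implies(p, q) and implies(q, (not p) or r)
        rhs = implies(p, q and r)
        if lhs != rhs:
            return False
    return True

print(equivalent())  # True
```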
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
} |
two point line form in 3d the two-point form for a line in 2d is
$$y-y_1 = \left(\frac{y_2-y_1}{x_2-x_1}\right)(x-x_1);$$
what is it for 3d lines/planes?
| For a line, one can express it in parametric form. To give you an idea on how to derive it, note that the expression
$$(1-t)x_1+tx_2$$
gives $x_1$ for $t=0$ and $x_2$ for $t=1$. You can do this for the $y$ and $z$ components as well to arrive at a parametric equation of a line.
Another way to go about representing a line is to represent it in symmetric form (from which a parametric form can also be obtained by equating to a parameter each of the components):
$$\frac{x-x_1}{a}=\frac{y-y_1}{b}=\frac{z-z_1}{c}$$
where
$$\begin{align*}a&=\frac{x_2-x_1}{\rho}\\b&=\frac{y_2-y_1}{\rho}\\c&=\frac{z_2-z_1}{\rho}\\\rho&=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}\end{align*}$$
and $a,b,c$ are so-called direction cosines, cosines of the angles the line makes with the positive axes.
This notation is sometimes (ab)used even when any of the direction cosines are 0; this just means that the line is parallel to an axis. From the symmetric form, taking any two of the three implied equations determines two planes whose intersection is the line in question.
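Both forms are straightforward to compute; a small sketch (the helper name `line_3d` is made up for illustration, and $p_1 \neq p_2$ is assumed):

```python
import math

def line_3d(p1, p2):
    # Direction cosines (a, b, c) and a parametric point function for
    # the line through p1 and p2 (assumes p1 != p2).
    rho = math.dist(p1, p2)
    cosines = tuple((q - p) / rho for p, q in zip(p1, p2))
    def point(t):
        # (1-t)*p1 + t*p2: gives p1 at t=0 and p2 at t=1
        return tuple((1 - t) * p + t * q for p, q in zip(p1, p2))
    return cosines, point

cosines, point = line_3d((1, 2, 3), (4, 6, 3))
print(cosines)     # (0.6, 0.8, 0.0): rho = 5, line parallel to the xy-plane
print(point(0.5))  # (2.5, 4.0, 3.0), the midpoint
```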
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Can one use Atiyah-Singer to prove that the Chern-Weil definition of Chern classes are $\mathbb{Z}$-cohomology classes? In Chern-Weil theory, we choose an arbitrary connection $\nabla$ on a complex vector bundle $E\rightarrow X$, obtain its curvature $F_\nabla$, and then we get Chern classes of $E$ from the curvature form. A priori it looks like these live in $H^*(X;\mathbb{C})$, but by an argument that I don't feel like I really understand, they're in the image of $H^*(X;\mathbb{Z})$, which is where they're usually considered to actually live. I've also recently been learning about the Atiyah-Singer index theorem, and I get the impression that whenever I see a arbitrary constants in geometry that end up having to live in $\mathbb{Z}$ I should ask myself whether the index theorem is lurking in there somewhere. Is there anything to this wild guess?
| The answer is no. See https://mathoverflow.net/questions/69085/can-one-use-atiyah-singer-to-prove-that-the-chern-weil-definition-of-chern-classe
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Can multiplication be defined in terms of divisibility? Peano Arithmetic has two axioms which use the multiplication symbol: ∀x:x*0=0 and ∀x:∀y:x*Sy=x+x*y. The 2-term relation "x divides y" can be expressed as D(x,y) := ∃z:z*x=y.
Multiplication is a function and divisibility is a relation, so in order to compare apples and apples, consider the 3-term relation M(x,y,z) := x*y=z and the axioms ∀x:M(x,0,0) and ∀x:∀y:∃u:∃v:M(x,Sy,u)∧M(x,y,v)∧v+x=u, and also the fact that M is a function: ∀x:∀y:∀u:∀v:(M(x,y,u)∧M(x,y,v))→u=v. Now D can be defined in terms of M by D(x,y) := ∃z:M(z,x,y).
I wonder if it is possible to do the reverse, and define multiplication in terms of divisibility. If the M axioms are replaced by some D axioms (maybe ∀x:D(x,x), ∀x:D(x,0), and others), can M be expressed in terms of D? Prime, GCD, LCM can all be defined in terms of D alone, but I don't know how to define M in terms of D, nor do I know how to axiomatize D without reference to M. If it is possible, what axioms are required for the divisibility relation, and how is the multiplication relation defined? If not, why not?
| No, not in general. You can define the multiplication relation in terms of the division function, but this only gives you a truth condition M(x,y,z) that tells you if z is the product of x and y. It does not give you a mechanism for generating the z from the x and y: for that you need to be able to prove that the multiplication relation specifies a total function.
And this is not always possible:
* There are weak theories of arithmetic for which division is total, and in which the multiplication relation exists and specifies the expected triples, but which cannot prove that multiplication is total (and so admit nonstandard models in which the multiplication relations all have "holes" at nonstandard number parameters and so are not functions);
* Even worse, there are such weak theories which would be rendered inconsistent by the addition of an axiom asserting that multiplication is a total function. All self-verifying theories are of this sort.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/35988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 0
} |
What is the volume of $\{ (x,y,z) \in \mathbb{R}^3_{\geq 0} |\; \sqrt{x} + \sqrt{y} + \sqrt{z} \leq 1 \}$? I have to calculate the volume of the set
$$\{ (x,y,z) \in \mathbb{R}^3_{\geq 0} |\; \sqrt{x} + \sqrt{y} + \sqrt{z} \leq 1 \}$$
and I did this by evaluating the integral
$$\int_0^1 \int_0^{(1-\sqrt{x})^2} \int_0^{(1-\sqrt{x}-\sqrt{y})^2} \mathrm dz \; \mathrm dy \; \mathrm dx = \frac{1}{90}.$$
However, a friend of mine told me that his assistant professor gave him the numerical solutions and it turns out the solution should be $\frac{1}{70}$. Also, I found out that this would be the result of the integral
$$\int_0^1 \int_0^{1-\sqrt{x}} \int_0^{1-\sqrt{x}-\sqrt{y}} \mathrm dz \; \mathrm dy \; \mathrm dx,$$
which is pretty much the same as mine just without squares in the upper bounds. My question is: Is the solution provided by the assistant professor wrong or why do I have to calculate the integral without squared upper bounds?
Also, is there any tool to compute the volume of such sets without knowing how one has to integrate?
Thanks for any answer in advance.
| An integral $(*)\ \int_B f(x){\rm d}(x)$ over a three-dimensional domain $B$ depends on the exact expression for $f(x)$, $\ x\in{\mathbb R}^n$, and on the exact shape of the domain $B$. The latter is usually defined by a set of inequalities of the form $g_i(x)\leq c_i$. The information about $B$ has to be entered in the course of the reduction of the integral $(*)$ to a sequence of nested integrals. So, as a rule, there is a lot of work involved in the process of reducing everything to the computation and evaluation of primitives.
Now sometimes there is another way of handling such integrals: Maybe we can set up a parametric representation of $B$ with a parameter domain $\tilde B$ which is a standard geometric object like a simplex, a rectangular box or a half sphere. In the case at hand we can use the representation
$$g: \quad S\to B,\quad (u,v,w)\mapsto (x,y,z):=(u^2,v^2,w^2)$$
which produces $B$ as an essentially 1-1 image of the standard simplex
$$S:=\{(u,v,w)\ |\ u\geq0, v\geq0, w\geq 0, u+v+w\leq1\}\ .$$
In the process we have to compute the Jacobian $J_g(u,v,w)=8uvw$ and obtain the following formula:
$${\rm vol}(B)=\int_B 1\ {\rm d}(x)= \int_S 1 \> J_g(u,v,w) \> {\rm d}(u,v,w)=\int_0^1\int_0^{1-u}\int_0^{1-u-v} 8uvw \> dw dv du ={1\over 90}\ .$$
(In this particular example the simplification is only marginal.)
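A quick seeded Monte Carlo estimate of the volume of the original region (as a fraction of the unit cube) also lands near $1/90 \approx 0.0111$ rather than $1/70 \approx 0.0143$:

```python
import random

random.seed(12345)  # seeded so the run is reproducible
N = 400_000
hits = sum(
    1 for _ in range(N)
    if random.random()**0.5 + random.random()**0.5 + random.random()**0.5 <= 1
)
print(hits / N)  # close to 1/90 ~ 0.0111, not 1/70 ~ 0.0143
```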
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/36041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Functions, graphs, and adjacency matrices One naively thinks of (continuous) functions as of graphs1 (lines drawn in a 2-dimensional coordinate space).
One often thinks of (countable) graphs2 (vertices connected by edges) as represented by adjacency matrices.
That's what I learned from early on, but only recently I recognized that the "drawn" graphs1 are nothing but generalized - continuous - adjacency matrices, and thus graphs1 are more or less the same as graphs2.
I'm quite sure that this is common (maybe implicit) knowledge among working mathematicians, but I wonder why I didn't learn this explicitly in any textbook on set or graph theory I've read. I would have found it enlightening.
My questions are:
Did I read my textbooks too
superficially?
Is the analogy above (between
graphs1 and
graphs2) misleading?
Or is the analogy too obvious to be
mentioned?
| In my opinion, the similarity between graphs1 and graphs2 is only superficial. Both kinds of graphs can be thought of as particular subsets of certain kinds of Cartesian product ($\mathbb R \times \mathbb R$ and $V \times V$), but that's about as far as it goes.
Consider:
* A graph1 generalizes to higher dimensional functions $\mathbb R^m \to \mathbb R^n$ which cannot be thought of as a graph2 when $m \neq n$. A graph2 generalizes to graphs with labelled edges, multigraphs, and so on, which cannot be thought of as a graph1.
* Rearranging the order of elements in the adjacency matrix gives you the same graph2, but not the same graph1.
* Given a graph1, we care about things like injectivity, continuity, convexity, and so on, which do not correspond to useful properties of the corresponding graph2. Given a graph2, we care about things like connectivity, shortest paths, planarity, and so on, which do not correspond to useful properties of the corresponding graph1.
A simple example: The graph of $f(x) = x$ is a continuous line, but a graph where each vertex is only connected to itself is completely disconnected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/36098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational If $n$ is any positive integer, prove that $\sqrt{4n-2}$ is irrational.
I've tried proving by contradiction but I'm stuck, here is my work so far:
Suppose that $\sqrt{4n-2}$ is rational. Then we have $\sqrt{4n-2}$ = $\frac{p}{q}$, where $ p,q \in \mathbb{Z}$ and $q \neq 0$.
From $\sqrt{4n-2}$ = $\frac{p}{q}$, I just rearrange it to:
$n=\frac{p^2+2q^2}{4q^2}$. I'm having troubles from here, $n$ is obviously positive but I need to prove that it isn't an integer.
Any corrections, advice on my progress and what I should do next?
| The number $\sqrt{4n-2}$ is rational iff $4n-2 = a^2$ for some integer $a$; reduction mod $4$ shows that this is impossible, since $4n-2 \equiv 2 \pmod 4$ while squares are congruent to $0$ or $1 \pmod 4$.
Here is a proof of the general fact that $\sqrt{k}$ is irrational unless $k$ is a perfect square: suppose $\frac{u}{v}$ is a rational solution to $x^2 - k = 0$; then by Gauss's lemma (the rational root theorem) it is an integer $i$, and hence $k = i^2$.
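The mod-$4$ observation is easy to check exhaustively for small values:

```python
# Squares are congruent to 0 or 1 mod 4, while 4n - 2 is always 2 mod 4,
# so 4n - 2 is never a perfect square.
print(sorted({(a * a) % 4 for a in range(1000)}))  # [0, 1]
print({(4 * n - 2) % 4 for n in range(1, 1000)})   # {2}
```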
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/36195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Permuting the rows of an invertible matrix to make 2 specific submatrices invertible This question is similar to a previous one: Gauss Elimination with constraints
Given an $n \times n$ matrix $M$ and a number $1 \leq m \leq n-1$, we partition is as a block matrix:
$$M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$
where $A$ is an $m \times m$ matrix and $D$ is an $(n-m) \times (n-m)$ matrix. We then say that $M$ is $m$-good if both $A$ and $D$ are invertible.
Given any invertible matrix $M \in GL_n(\mathbb{F})$ and a number $1 \leq m \leq n-1$, is it always possible to permute the rows of $M$ to make it $m$-good?
Note: I only care about the case $\mathbb{F}=\mathbb{Z}_p$, but I asked the question more generally because my feeling is that it doesn't matter what the field is.
| Using the Laplace expansion, $\det M$ is a sum of terms of the form $ \pm \det(A) \det(D)$ over all ways of partitioning the rows (see e.g. http://accessscience.com/content/Determinant/188900 ; the term "Laplace expansion" is sometimes used for the cofactor expansion along a single row or column, but it's really more general). So if $\det(A) \det(D)$ were always $0$, then $\det(M)$ would also be $0$, contradicting the invertibility of $M$.
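A brute-force illustration on a small example (the cofactor-expansion `det` helper is included only for self-containment): the matrix below is invertible, but its top-left $1\times 1$ block is singular, and a row permutation fixes that.

```python
from itertools import permutations

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum(
        (-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        for j in range(len(M))
    )

M = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]  # invertible, but A = [0] is singular for m = 1
m, n = 1, 3
good = [
    perm for perm in permutations(range(n))
    if det([[M[i][j] for j in range(m)] for i in perm[:m]]) != 0
    and det([[M[i][j] for j in range(m, n)] for i in perm[m:]]) != 0
]
print(good[0])  # (1, 0, 2): swapping the first two rows works
```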
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/36272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to find continued fraction of the form $a\sqrt{b}$? For the form $\sqrt{b}$, I could just apply the recursive quadratic formula:
$$P_{k+1} = a_kQ_k - P_k$$
$$Q_{k+1} = \dfrac{d - P^2_{k+1}}{Q_k}$$
$$\alpha_k = \dfrac{P_k + \sqrt{d}}{Q_k}$$
$$a_k = \lfloor \alpha_k \rfloor$$
In this case, we have a coefficient, namely $a$, so what's $d$? Is it still $b$?
Thanks,
| (So you can have something to "accept"...)
$a \sqrt{b} = \sqrt{a^2b}$ - Chan
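So one can simply run the same $P_k, Q_k$ recursion on $d = a^2 b$; a minimal sketch (assumes $d$ is not a perfect square):

```python
import math

def sqrt_cf(d, terms=10):
    # Continued fraction of sqrt(d) via the P/Q recursion from the
    # question, with P_0 = 0, Q_0 = 1 (d must not be a perfect square).
    a0 = math.isqrt(d)
    P, Q, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        P = a * Q - P
        Q = (d - P * P) // Q
        a = (P + a0) // Q
        out.append(a)
    return out

# 2*sqrt(3) = sqrt(12) = [3; 2, 6, 2, 6, ...]
print(sqrt_cf(12))
```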
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/36309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |