| Q | A | meta |
|---|---|---|
Number Theory - Proof of divisibility by $3$ Prove that for every positive integer $x$ of exactly four digits, if the sum of digits is divisible by $3$, then $x$ itself is divisible by 3 (i.e., consider $x = 6132$, the sum of digits of $x$ is $6+1+3+2 = 12$, which is divisible by 3, so $x$ is also divisible by $3$.)
How could I approach this proof? I'm not sure where I would even begin.
| It's due to radix representation being polynomial form in the radix, e.g. $\rm\ n = 4321 = p(10)\ $ for $\rm\ p(x) = 4\: x^3 + 3\: x^2 + 2\: x + 1\:.\:$ Thus mod $\rm\:3\::\ 10\equiv 1\ \Rightarrow\ p(10)\equiv p(1)\ =\ \sigma(n) :=$ sum of digits. Alternatively, one may simply put $\rm\ x = 10\ $ in the Factor Theorem $\rm\ \ x-1\ |\ p(x)-p(1)\:,\: $ hence $\rm\ 3\ |\ 9\ |\ p(10)-p(1) = n - \sigma(n)\:.\:$ This is a special case of casting out nines.
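The congruence $10\equiv 1\pmod 3$ can be checked numerically; here is a quick Python sketch (the helper name `digit_sum` is my own, not from the answer):

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

# Since 10 ≡ 1 (mod 3), a number and its digit sum agree mod 3.
for n in range(1, 10000):
    assert n % 3 == digit_sum(n) % 3

print(6132 % 3, digit_sum(6132) % 3)  # 0 0 — both divisible by 3
```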
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/23639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Relation between Borel–Cantelli lemmas and Kolmogorov's zero-one law I was wondering what is the relation between the first and second Borel–Cantelli lemmas and Kolmogorov's zero-one law?
The former is about limsup of a sequence of events, while the latter is about tail event of a sequence of independent sub sigma algebras or of independent random variables. Both have results for limsup/tail event to have either probability 0 or 1. I guess there are relations between but cannot identify them.
Can the former be viewed as a special case of the latter? How about in reverse direction?
Thanks and regards!
| The first Borel-Cantelli is simply the fact that the probability of a union is at most the sum of the probabilities; it has nothing in common with Kolmogorov zero-one law.
What Kolmogorov zero-one law tells you in the setting of the second Borel-Cantelli lemma is that the probability of the limsup is $0$ or $1$, because (1) the limsup is always in the tail $\sigma$-algebra, (2) you are considering independent events, hence their tail $\sigma$-algebra is trivial. Then the second Borel-Cantelli lemma itself tells you that this probability is in fact $1$ under the non summability condition which you know.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/23687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Definition of simplicial approximation I have been given the following definition of simplicial approximation in lectures:
Let $K, L$ be simplicial complexes and $f : |K| \to |L|$ be a continuous map of their polyhedra. A simplicial approximation of $f$ is a map $g$ of vertices of $K$ to vertices of $L$ such that $$f(\mathrm{st}_K(v)) \subseteq \mathrm{st}_L(g(v))$$ for each vertex $v$ in $K$.
However, elsewhere I find the definition that a simplicial approximation is any simplicial map which is homotopic to the original map. This seems, to me, to be considerably more general than the definition I have, since, for example, if $f$ is homotopic to a constant map, then I can have very trivial simplicial approximations in this sense. So my question is, which is the more common / "morally" correct / better definition?
| Mariano's comment is dead on. It might be helpful to think also of cellular approximation. Given a map of CW complexes $f: X \to Y$, $g: X \to Y$ is a cellular approximation of $f$ if they are homotopic and $g$ is cellular, that is it restricts to a map between the n-skeletons of $X$ and $Y$. I personally don't think a lot about simplicial approximations or cellular approximations because I know they always exist in the settings I care about. The point of having these approximations is then you can make standard types of arguments (like inductive arguments) that are more familiar without losing any generality.
An interesting question is whether or not cellular/simplicial maps are cofibrant objects in a particular/obvious model category structure on the category of morphisms (if such a model structure exists). Maybe I should ask such a question on some sort of math question site...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/23751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Numerical approximation of an integral I read a problem to determine the integral $\int_1^{100}x^xdx$ with error at most 5% from the book "Which way did the bicycle go". I was a bit disappointed to read the solution which used a computer or calculator. I was wondering whether there is a solution to the problem which does not use computers or calculators. In particular, is there a way to prove that the solution given in the book has a mistake because it claims that
$$\frac{99^{99}-1}{1+\ln 99}+\frac{100^{100}-99^{99}}{1+\ln 100}\leq \int_1^{100}x^xdx$$
gives a bound $1.78408\cdot 10^{199}\leq \int_1^{100}x^xdx$ but I think the LHS should be $1.78407\cdot 10^{199}\leq \int_1^{100}x^xdx$? I checked this by Sage and Wolfram Alpha but I was unable to do it by pen and paper.
| $x^x$ grows really fast. Notice
$$n^n>\frac{1}{n}\sum_{i=1}^{n-1} i^i.$$
In short $100^{100}$ is a lot bigger than $99^{99}$, so $$\frac{99^{99}-1}{1+\ln 99}+\frac{100^{100}-99^{99}}{1+\ln 100}\sim \frac{100^{100}}{1+\ln 100}=10^{199}\cdot\frac{10}{1+\ln 100}$$
By calculator $$\frac{10}{1+\ln 100}=1.78407,$$ (notice the same as above) but we can get basically that by hand. First show $4<\ln 100<5$. Then $$\frac{10}{6}\leq \frac{10}{1+\ln 100}\leq \frac{10}{5}=2.$$
But let's do better. Use the fact that $$\sum_{i=1}^n\frac{1}{i}=\ln n +\gamma +E$$ where $|E|\leq \frac{1}{2n}$ for $n\geq 4$.
Then $$\ln(100)=1+\frac{1}{2}+\frac{1}{3}+\cdots +\frac{1}{100}-0.577+E$$ and diligently adding this together (under 5 minutes) gives $\ln(100)\sim 4.6$. From here we get the coefficient $\frac{25}{14}$ with an error bounded by $\frac{2}{100}$ or 2%. (We gained error because I cut off 4.61 to 4.6)
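The hand computation above is easy to confirm with a few lines of Python (a sketch; the value of $\gamma$ is truncated):

```python
import math

# Harmonic-sum estimate of ln(100): H_100 - gamma, with error at most 1/200.
H_100 = sum(1.0 / i for i in range(1, 101))
gamma = 0.5772156649
print(H_100 - gamma, math.log(100))   # ≈ 4.610 vs 4.605

# The coefficient discussed in the question:
print(10 / (1 + math.log(100)))       # ≈ 1.78407
print(25 / 14)                        # hand estimate ≈ 1.7857
```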
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/23936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
If $a\mathbf{x}+b\mathbf{y}$ is an element of the non-empty subset $W$, then $W$ is a subspace of $V$ Okay, so my text required me to actually prove both sides; The non-empty subset $W$ is a subspace of a $V$ if and only if $a\mathbf{x}+b\mathbf{y} \in W$ for all scalars $a,b$ and all vectors $\mathbf{x},\mathbf{y} \in W$. I figured out one direction already (that if $W$ is a subspace, then $a\mathbf{x}+b\mathbf{y}$ is an element of $W$ since $a\mathbf{x}$ and $b\mathbf{y}$ are in $W$ and thus so is their sum), but I'm stuck on the other direction.
I got that if $a\mathbf{x}+b\mathbf{y} \in W$, then $c(a\mathbf{x}+b\mathbf{y}) \in W$ as well since we can let $a' = ca$ and $b' = cb$ and we're good, so $W$ is closed under scalar multiplication. But for closure under addition, my text states that I can "cleverly choose specific values for $a$ and $b$" such that $W$ is closed under addition as well but I cannot find any values that would work. What I'm mostly confused about is how choosing specific values for $a$ and $b$ would prove anything, since $a, b$ can be any scalars and $\mathbf{x},\mathbf{y}$ can be any vectors, so setting conditions like $a = b$, $a = -b$, $a = 0$ or $b = 0$ don't seem to prove anything.
Also something I'm not sure about is if they're saying that $a\mathbf{x}+b\mathbf{y} \in W$, am I to assume that that is the only form? So if I'm testing for closure under addition, I have to do something like $(a\mathbf{x}+b\mathbf{y})+(c\mathbf{z}+d\mathbf{w})$?
| Since you seem to be a little confused by this, let me remind you that $W\subset V$ is a subspace of $V$ if and only if:
*
*$W\not=\emptyset$
*$w+w'\in W$ for all $w,w'\in W$
*$aw'\in W$ for all $w'\in W, \ a\in\mathbb{R}$
Thus if $W$ is a subspace of $V$, it is, as you say, easy to see that
$$aw+bw'\in W \ \forall \ w,w'\in W, \ a,b\in \mathbb{R}$$
if you combine 2 and 3.
For the other side, suppose that
$$aw+bw'\in W \ \forall \ w,w'\in W, \ a,b\in \mathbb{R}$$
and you have to prove that then the three points above are true.
*
*$W$ is not empty by definition.
*Take $w, w'\in W$. Letting $a=1, \ b=1$ gives that $w+w'\in W$
*Take $w\in W$. Letting $a=1, \ b=0$ gives that $1w+0w'=w\in W$
So you see that they are simply particular cases of the relation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Graph coloring problem (possibly related to partitions) Given an undirected graph I'd like to color each node either black or red such that at most half of every node's neighbors have the same color as the node itself. As a first step, I'd like to show that this is always possible.
I believe this problem is the essence of the math quiz #2 in Communications of the ACM 02/2011 where I found it, so you might consider this a homework-like question. The quiz deals with partitions but I found it more natural to formulate the problem as a graph-coloring problem.
Coming from computer science with some math interest I'm not sure how to approach this and would be glad about some hints. One observation is that any node of degree 1 forces its neighbor to be of the opposite color. This could lead to a constructive proof (or a greedy algorithm) that provides a coloring. However, an existence proof would be also interesting.
| Perhaps related is the following well-known riddle: At each step, all the nodes (simultaneously) choose their color as the majority color of their neighbors (or black in case of a tie). Show that this process converges to either a fixed point or a $2$-cycle.
This charming riddle is solved by comparing the state at time $t$ to the state at time $t+2$, and using a potential function. This solution was published, and generalized in a sequence of papers.
In your case, you want to take the anti-majority. Perhaps the same techniques work, though I'm not sure how a $2$-cycle will help you.
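The existence the question asks about also follows from a standard potential-function argument (my addition, not spelled out in the answer): repeatedly flip any node that agrees with a strict majority of its neighbors; each flip strictly decreases the number of monochromatic edges, so the process terminates at a valid coloring. A Python sketch under that assumption:

```python
def anti_majority_coloring(adj):
    """adj: dict node -> list of neighbors. Returns a 2-coloring in which
    no node shares its color with more than half of its neighbors.
    Each flip strictly decreases the count of monochromatic edges
    (same > deg/2 becomes deg - same < same), so the loop terminates."""
    color = {v: 0 for v in adj}
    changed = True
    while changed:
        changed = False
        for v in adj:
            same = sum(color[u] == color[v] for u in adj[v])
            if 2 * same > len(adj[v]):   # strict majority agrees: flip
                color[v] ^= 1
                changed = True
    return color

# Example: the star K_{1,3} — the center must oppose its leaves.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
col = anti_majority_coloring(adj)
assert all(2 * sum(col[u] == col[v] for u in adj[v]) <= len(adj[v]) for v in adj)
```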
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that two distinct numbers of the form $a^{2^{n}} + 1$ and $a^{2^{m}} + 1$ are relatively prime if $a$ is even and have $\gcd=2$ if $a$ is odd
Prove that two distinct numbers of the form $a^{2^{n}} + 1$ and $a^{2^{m}} + 1$ are relatively prime if $a$ is even and have $\gcd=2$ if $a$ is odd.
My attempt:
If $a$ is even, let $a = 2^{s}k$ for some integers $k, s$
Then, $$a^{2^{n}} + 1 = 2^{2^{n}s}\cdot k^{2^n} + 1$$ and $$a^{2^{m}} + 1 = 2^{2^{m}s}\cdot k^{2^m} + 1$$
To prove that they're relatively prime, we need to show that their gcd = 1.
And I was stuck here, how could I prove that gcd of two numbers is $1$?
A hint would be sufficient. Thanks.
| Hint:
Consider the following proof when $a=2$: https://planetmath.org/fermatnumbersarecoprime
Try adapting it to work for all $a$.
Hint 2: Factor $a^{2^n}-1$.
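Before attempting the proof, the claimed gcds are easy to check numerically (a small sketch):

```python
from math import gcd

# For distinct exponents m < n:
# gcd(a^(2^n)+1, a^(2^m)+1) is 1 when a is even and 2 when a is odd.
for a in range(2, 12):
    for m in range(1, 4):
        for n in range(m + 1, 5):
            g = gcd(a ** (2 ** n) + 1, a ** (2 ** m) + 1)
            assert g == (1 if a % 2 == 0 else 2)
print("checked")
```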
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Is $2^{340} - 1 \equiv 1 \pmod{341} $? Is $2^{340} - 1 \equiv 1 \pmod{341} $?
This is one of my homework problem, prove the statement above. However, I believe it is wrong. Since $2^{10} \equiv 1 \pmod{341}$, so $2^{10 \times 34} \equiv 1 \pmod{341}$ which implies $2^{340} - 1 \equiv 0 \pmod{341}$
Any idea?
Thanks,
| HINT $\rm\ \ 2^{340}\equiv 2\ \ (mod\ 11\cdot31)\ \Rightarrow\ mod\ 11\::\ 2^{339}\equiv 1\equiv 2^{10}\ \Rightarrow\ 2^{gcd(339,10)} \equiv 1\:,\:$ i.e. $\ 2\equiv 1\ \Rightarrow\Leftarrow$
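The asker's computation is easy to confirm directly ($341$ is the smallest Fermat pseudoprime to base $2$):

```python
# 341 = 11 * 31; since 2^10 = 1024 = 3*341 + 1, we get 2^340 ≡ 1 (mod 341),
# so 2^340 - 1 ≡ 0 (mod 341), confirming the statement as posed is wrong.
assert 341 == 11 * 31
assert pow(2, 10, 341) == 1
assert pow(2, 340, 341) == 1
print((pow(2, 340, 341) - 1) % 341)  # 0
```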
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Why do mathematicians use single-letter variables? I have much more experience programming than I do with advanced mathematics, so perhaps this is just a comfort thing with me, but I often get frustrated when I try to follow mathematical notation. Specifically, I get frustrated trying to keep track of what each variable signifies.
As a programmer, this would be completely unacceptable no matter how many comments you added explaining it:
float A(float P, float r, float n, float t) {
return P * pow(1 + r / n, n * t);
}
Yet a mathematician would have no problem with this:
$A = P\ \left(1+\dfrac{r}{n}\right)^{nt}$
where
$A$ = final amount
$P$ = principal amount (initial investment)
$r$ = annual nominal interest rate (as a decimal)
$n$ = number of times the interest is compounded per year
$t$ = number of years
So why don't I ever see the following?
$\text{final_amount} = \text{principal}\; \left(1+\dfrac{\text{interest_rate}}{\text{periods_per_yr}}\right)^{\text{periods_per_yr}\cdot\text{years}}$
| I write: Solutions of quadratic equation $ax^2 + bx + c=0$ are $(-b\pm\sqrt{b^2-4ac})/(2a)$. I would not want to write this with time in place of $x$ and acceleration_of_gravity in place of $a$ and so on!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "288",
"answer_count": 27,
"answer_id": 6
} |
Big $O$ vs Big $\Theta$ I am aware of the big theta notation $f = \Theta(g)$ if and only if there are positive constants $A, B$ and $x_0 > 0$ such that for all $x > x_0$ we have
$$
A|g(x)| \leq |f(x)| \leq B |g(x)|.
$$
What if the condition is the following:
$$
C_1 + A|g(x)| \leq |f(x)| \leq C_2 + B |g(x)|
$$
where $C_1, C_2$ are possibly negative? Certainly more can be said than just $f = O(g)$. Is there a generalized $\Theta$ notation which allows shifts (by, say $C_1, C_2$)? In particular, I'm interested in the special case:
\begin{eqnarray}
-C \leq f(x) - g(x) \leq C
\end{eqnarray}
for some positive $C$. How does $f$ compare to $g$ in this case? If $f$ and $g$ are positive functions of $x$ which both diverge to $\infty$, is it true that $f(x) = -C + g(x) + \Theta(1)$? What is the appropriate asymptotic notation in this case?
Update Thanks for the clarifying answers. Now here is a slightly harder question. Suppose $f$ is discrete and $g$ is continuous. Suppose further that as $x \to \infty$, the difference $f(x) - g(x)$ is asymptotically bounded in the interval $[-C,C]$ but does not necessarily converge to $0$. Does $f \sim g$ still make sense? Would it be more appropriate to use $\liminf_{x \to \infty} f(x) - g(x) = - C$ and $\limsup_{x \to \infty} f(x) - g(x) = C$?
| If $\lim |f(x)| = \lim |g(x)| = \infty$ then there is no difference between your two concepts.
If $f$ is a $\Theta(g)$, then it is a "shifted" $\Theta(g)$ with $C_1 = C_2 = 0$.
If $f$ is a "shifted" $\Theta(g)$, then it is a $\Theta(g)$:
Since $\lim |g(x)| = \infty$, there exists $x_1$ from which $C_2/B \le |g(x)|$ and $ -2C_1/A \le |g(x)|$. This shows that for $x \ge \max(x_0,x_1)$, $A|g(x)|/2 \le C_1 + A|g(x)| \le |f(x)| \le C_2 + B|g(x)| \le 2B|g(x)|$.
If $-C \le f(x)-g(x) \le C$, then this is exactly the same as saying $f-g = O(1)$.
In this case, you have $f = g+O(1)$, and if their limit is $\pm \infty$, $f \sim g$
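A small numeric illustration of the last point (my own example, with $f = g + O(1)$ and $g\to\infty$):

```python
import math

g = lambda x: x
f = lambda x: x + math.sin(x)   # |f(x) - g(x)| <= 1, i.e. f = g + O(1)

for x in (10.0, 1e3, 1e6):
    print(f(x) / g(x))          # tends to 1, so f ~ g

assert abs(f(1e6) / g(1e6) - 1) < 1e-5
```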
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is there a clear way to state truncation? How can I express that if an expression evaluates to a negative number, I assign it a value of zero? Is it possible to do this without repeating the function twice?
Here is an example
$$f(\vec{\phi})=\begin{cases}
f(\vec{\phi}) & \text{for}\; f(\vec{\phi}) \gt 0 \\\
0 & \text{for}\; f(\vec{\phi})\lt 0.
\end{cases}$$
| It sounds like you are looking for $f^+(x) = \frac{f(x) + |f(x)|}{2}$. Positive values will evaluate to $f(x)$, but negative values will be taken to $0$, but this also repeats the expression so I hope that you were merely looking to avoid a piecewise definition.
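In code this is the familiar positive-part (ReLU-style) truncation; a sketch using the answer's formula:

```python
def positive_part(f):
    """Return the function x -> max(f(x), 0), written as (f + |f|)/2."""
    return lambda x: (f(x) + abs(f(x))) / 2

f = lambda x: x - 3
fp = positive_part(f)
assert fp(5) == 2   # positive values pass through unchanged
assert fp(1) == 0   # negative values are truncated to zero
```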
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f(n) = 2^{\omega(n)}$ is multiplicative where $\omega(n)$ is the number of distinct prime factors of $n$
Prove that $f(n) = 2^{\omega(n)}$ is multiplicative where $\omega(n)$ is the number of distinct prime factors of $n$.
My attempt:
Let $a = p_1p_2\cdots p_k$ and $b = q_1q_2\cdots q_t$
where $p_i$ and $q_j$ are prime factors, and $p_i \neq q_j$ for all $1 \leq i \leq k$ and $1 \leq j \leq t$.
We will show that $2^{\omega(ab)} = 2^{\omega(a)} \times 2^{\omega(b)}$
Indeed,
$\omega(a) = k$ and $\omega(b) = t$. Then $2^{\omega(a)} \times 2^{\omega(b)} = 2^{k + t}$
Where $2^{\omega(ab)} = 2^{k + t}$
$\therefore 2^{\omega(ab)} = 2^{\omega(a)} \times 2^{\omega(b)}$
Am I in the right track?
Thanks,
| Note that if $f : \mathbb{N} \to \mathbb{C}$ is additive:
$$
\gcd(m,n) = 1 \quad \Longrightarrow \quad f(mn) = f(m) + f(n),
$$
then for any $t > 0$, $g(n) := t^{f(n)}$ is multiplicative:
$$
\gcd(m,n) = 1 \quad \Longrightarrow \quad g(mn) = g(m) g(n),
$$
since
$$
g(mn) = t^{f(mn)} = t^{f(m) + f(n)} = t^{f(m)}t^{f(n)} = g(m) g(n).
$$
Note also that this $g$ will never be the zero function. Therefore, it suffices to show that $\omega(n)$ is additive (which has indeed already been shown).
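The multiplicativity is quick to verify by brute force (the trial-division helper `omega` below is my own):

```python
from math import gcd

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

f = lambda n: 2 ** omega(n)
for m in range(2, 60):
    for n in range(2, 60):
        if gcd(m, n) == 1:
            assert f(m * n) == f(m) * f(n)
```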
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a function with infinite integral on every interval? Could give some examples of nonnegative measurable function $f:\mathbb{R}\to[0,\infty)$, such that its integral over any bounded interval is infinite?
| The easiest example I know is constructed as follows. Let $q_{n}$ be an enumeration of the rational numbers in $[0,1]$. Consider $$g(x) = \sum_{n=1}^{\infty} 2^{-n} \frac{1}{|x-q_{n}|^{1/2}}.$$
Since each function $\dfrac{1}{|x-q_{n}|^{1/2}}$ is integrable on $[0,1]$, so is $g(x)$ [verify this!]. Therefore $g(x) < \infty$ almost everywhere, so we can simply set $g(x) = 0$ in the points where the sum is infinite.
On the other hand, $f = g^{2}$ has infinite integral over each interval in $[0,1]$. Indeed, if $0 \leq a \lt b \leq 1$ then $(a,b)$ contains a number $q_{n}$, so $$\int_{a}^{b} f(x)\,dx \geq \int_{a}^{b} \frac{1}{|x-q_{n}|}\,dx = \infty.$$ Now in order to get the function $f$ defined at every point of $\mathbb{R}$, simply define $f(n + x) = f(x)$ for $0 \leq x \lt 1$.
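The divergence used in the last display is elementary: for $\varepsilon>0$, $\int_{q+\varepsilon}^{q+1}\frac{dx}{x-q}=\ln(1/\varepsilon)\to\infty$ as $\varepsilon\to 0$. A numeric sketch:

```python
import math

# Partial integrals of 1/(x - q) over (q + eps, q + 1) grow like ln(1/eps).
for eps in (1e-2, 1e-4, 1e-8, 1e-16):
    print(eps, math.log(1 / eps))   # ≈ 4.6, 9.2, 18.4, 36.8 — unbounded
```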
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 2,
"answer_id": 1
} |
Find the average of a collection of points (in 2D space) I'm a bit rusty on my math, so please forgive me if my terminology is wrong or I'm overlooking extending a simple formula to solve the problem.
I have a collection of points in 2D space (x, y coordinates). I want to find the "average" point within that collection. (Centroid, center of mass, barycenter might be better terms.)
Is the average point just that whose x coordinate is the average of the x's and y coordinate is y the average of the y's?
| Yes, you can compute the average for each coordinate separately because $$\frac{1}{n} \sum (x_i,y_i) = \frac{1}{n} (\sum x_i, \sum y_i) = (\frac{1}{n}\sum x_i, \frac{1}{n}\sum y_i)$$
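A direct translation of the answer into code:

```python
def centroid(points):
    """Coordinate-wise average of a nonempty list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

assert centroid([(0, 0), (2, 0), (0, 2), (2, 2)]) == (1.0, 1.0)
assert centroid([(1, 5)]) == (1.0, 5.0)
```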
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Why does rearranging the pieces of this triangle illusion give a different area?
Possible Duplicate:
How come 32.5 = 31.5?
| If you count carefully, you'll see that the base is meant to be $13$ units long, while the height is $5$ units long. That means that the top triangle in the top figure, which has a height of $2$, should have a base of length $b$, where
$$\frac{b}{2} = \frac{13}{5}$$
or $b = \frac{26}{5}$, longer than the $5$ units depicted.
Likewise, the bottom red triangle, with a base of size $8$ should have a height of length $h$, with
$$\frac{h}{8} = \frac{5}{13}$$
or $h = \frac{40}{13}$, which is a little longer than the $3$ depicted.
So in fact, the "missing square" comes from misdrawing the pictures (or from having the individual figures drawn correctly, but the composed figures not being real triangles; the two inner triangles are not similar, though they "should" be).
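Assuming the standard $5\times 13$ version of this puzzle (a $2\times 5$ triangle, a $3\times 8$ triangle, and two L-shaped pieces of $7$ and $8$ unit squares — my assumption about the figure), exact rational arithmetic makes the mismatch visible:

```python
from fractions import Fraction

# Slopes of the two triangles differ, so the assembled "hypotenuse" bends.
slope_small = Fraction(2, 5)    # the 2 x 5 triangle
slope_large = Fraction(3, 8)    # the 3 x 8 triangle
slope_whole = Fraction(5, 13)   # the apparent 5 x 13 triangle
assert slope_small != slope_large

# The pieces total 32 squares; the apparent triangle has area 65/2 = 32.5.
pieces = Fraction(2 * 5, 2) + Fraction(3 * 8, 2) + 7 + 8
apparent = Fraction(5 * 13, 2)
assert apparent - pieces == Fraction(1, 2)   # half a square hidden in the bend
```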
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Is this identity involving Stirling numbers of the first kind well-known? I've been looking in vain (most books I came across give identities involving sums or recurrence relations, but do not give much attention to fixed values) for a reference to the following identity:
$$S(n,n-3)={n \choose 2}{n \choose 4},$$
where $S(n,k)$ is the unsigned Stirling number of the first kind. Is this well-known, or too trivial to be mentioned anywhere?
| The identity and its proof are in the wiki link you sent.
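For what it's worth, the identity is easy to confirm numerically from the recurrence $c(n,k)=c(n-1,k-1)+(n-1)\,c(n-1,k)$ for unsigned Stirling numbers of the first kind (a sketch):

```python
from math import comb
from functools import lru_cache

@lru_cache(None)
def c(n, k):
    """Unsigned Stirling numbers of the first kind."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return c(n - 1, k - 1) + (n - 1) * c(n - 1, k)

for n in range(3, 20):
    assert c(n, n - 3) == comb(n, 2) * comb(n, 4)
print("identity verified for n < 20")
```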
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Splitting Fields This is a problem from Hungerford's book:
Let $E$ be an intermediate field of extension $K\subset F$ and assume that $E=K(u_1, \cdots ,u_r)$ where $u_i$ are (some of the) roots of $f\in K[x]$. Then $F$ is a splitting field of $f$ over $K$ if and only if $F$ is a splitting field of $f$ over E.
This is my attempt.
Let $v_1, \cdots ,v_n$ be the roots of $f$. Then since $F$ is a splitting field of $f$ over $K$, $F=K(v_1, \cdots ,v_n)$ $\implies$ $F=E(v_1, \cdots ,v_n)$. So $F$ is a splitting field of $f$ over $E$ .
Conversely, suppose $v_1, \cdots ,v_n$ are the roots of $f$ in $F$. Then $F=E(v_1, \cdots ,v_n)=K(u_1, \cdots ,u_r,v_1, \cdots, v_n)$. Thus $F$ is a splitting field of $f$ over $K$.
My Question: First I want to know if the proof above is right. The second part of the problem asks to extend the above to splitting fields of arbitrary set of polynomials. This is where I'm lost. I'd like a little help on how to begin.
Thanks.
Update: I think I've got it now. I had some difficulty with the indexing. Thanks.
| $\left(\implies\right)$ Let $v_{1},\cdots,v_{n}$ be the roots of
$f$ in $F$. Since $F$ is a splitting field of $f$ over $K$, $F=K\left(v_{1,}\cdots,v_{n}\right)$
which implies that $F=E\left(v_{1},\cdots,v_{n}\right)$. So $F$
is a splitting field of $f$ over $E$.
$\left(\Longleftarrow\right)$ Suppose $v_{1},\cdots,v_{n}$ are the
roots of $f$ in $F$. Then $F=E\left(v_{1},\cdots,v_{n}\right)$.
But each $u_{i}$ is one of the $v_{j}$, so $F=E\left(v_{1},\cdots,v_{n}\right)=K\left(u_{1},\cdots,u_{r},v_{1},\cdots,v_{n}\right)$.
Thus $F$ is a splitting field of $f$ over $K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Is there a gap in the standard treatment of simplicial homology? On MO, Daniel Moskovich has this to say about the Hauptvermutung:
The Hauptvermutung is so obvious that it gets taken for granted everywhere, and most of us learn algebraic topology without ever noticing this huge gap in its foundations (of the text-book standard simplicial approach). It is implicit every time one states that a homotopy invariant of a simplicial complex, such as simplicial homology, is in fact a homotopy invariant of a polyhedron.
I have to admit I find this statement mystifying. We recently set up the theory of simplicial homology in lecture and I do not see anywhere that the Hauptvermutung needs to be assumed to show that simplicial homology is a homotopy invariant. Doesn't this follow once you have simplicial approximation and you also know that simplicial maps which are homotopic induce chain-homotopic maps on simplicial chains?
| I think maybe what he means is that people think that simplicial homology is a homotopy invariant of polyhedra, or at least take the fact for granted, not that simplicial homology is not a homotopy invariant of simplicial complexes. I guess his main point is that homotopy invariants of simplicial complexes are not necessarily homotopy invariants of polyhedra.
Does this help, or was this already obvious?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 1
} |
On certain cases of Seifert Van Kampen Given two connected open sets $U,V \subset X$ such that $U \cap V$ is path connected and $U \cup V = X$, then $\pi_1(U) \ast_{\pi_1(U \cap V)} \pi_1(V) \cong \pi_1(X)$. This is of course the Seifert Van Kampen theorem. A question in Munkres asks: if the homomorphism induced by the inclusion map $i: V \rightarrow X$ is trivial, what can you say about the homomorphism induced by $j: U \rightarrow X$?
It's clear to me that $j_\ast$ must be surjective and therefore $\pi_1(X) \cong \pi_1(U)/ker(j_\ast)$. My question is can you say anything more? Does the kernel of $j_\ast$ relate at all to $F$, as given in the usual diagram. Would the kernel be at all related to the normal closure of the image of the induced homomorphism of the inclusion $k: U \cap V \rightarrow U$?
| Notice that the question is purely algebraic. If $G=\pi_1(U)$, $G'=\pi_1(V)$ and $H=\pi_1(U\cap V)$, you are asking what is the kernel of the obvious map $G\to G*_HG'$ (where the amalgamation happens along the images of $H$ in $G$ and $G'$) if the map $G'\to G*_HG'$ is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
If c is an eigenvalue of $A$, is it an eigenvalue of $A^{\mathbf{T}}$? I am given a square matrix $A$, and I need to prove that if c is its eigenvalue, then it is also an eigenvalue of its transpose. How should I approach this? Clearly $Av$=$cv$, but I am not sure how to bring transpose into the equation.
| A simple way would be to look at $\left | A^T-cI \right |$. $$\left | A^T-cI \right | = \left| (A-cI)^T \right| = \left| (A-cI) \right|$$
EDIT
$\left| B \right|$ denotes the determinant of $B$. First note that for any matrix $B$, $\left| B \right| = \left| B^T \right|$.
(This is true since you get the same determinant if you find the determinant along the row or column.)
$c$ is an eigenvalue of $A$ iff $\left| A - cI\right| = 0$.
Note that $\left| A^T - cI \right| = \left| A^T - cI^T \right| = \left| (A - cI)^T \right| = \left| A - cI\right| = 0$.
Hence, $c$ is an eigenvalue of $A^T$.
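The determinant identity in the hint can be sanity-checked on a small example (the plain-Python $2\times 2$ helpers are my own):

```python
# det(A^T - cI) = det((A - cI)^T) = det(A - cI): check on a 2x2 example.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def transpose(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

A = [[2, 1], [0, 3]]   # upper triangular, so eigenvalues are 2 and 3
for c in (2, 3):
    AcI = [[A[0][0] - c, A[0][1]], [A[1][0], A[1][1] - c]]
    assert det2(AcI) == 0              # c is an eigenvalue of A
    assert det2(transpose(AcI)) == 0   # ... and of A^T
```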
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Nasty examples for different classes of functions Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Usually when proving a theorem where $f$ is assumed to be continuous, differentiable, $C^1$ or smooth, it is enough to draw intuition by assuming that $f$ is piecewise smooth (something that one could perhaps draw on a paper without lifting your pencil). What I'm saying is that in all these cases my mental picture is about the same. This works most of the time, but sometimes it of course doesn't.
Hence I would like to ask for examples of continuous, differentiable and $C^1$ functions, which would highlight the differences between the different classes. I'm especially interested in how nasty differentiable functions can be compared to continuously differentiable ones. Also if it is the case that the one dimensional case happens to be uninteresting, feel free to expand your answer to functions $\mathbb{R}^n \to \mathbb{R}^m$. The optimal answer would also list some general minimal 'sanity-checks' for different classes of functions, which a proof of a theorem concerning a particular class would have to take into account.
| Although this is maybe not a good example of very "nasty" functions, you could look at $f_i = x^i \sin(1/x)$ for $i=0,1,2,3$ in order to see the distinction between those classes of functions. If you set $f_i(0)=0$ for all $i$, then $f_0$ is not continuous at $0$, $f_1$ is continuous but not differentiable at $0$, $f_2$ is differentiable but not $C^1$, and $f_3$ is $C^1$.
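The failure of $f_2$ to be $C^1$ can be seen numerically: for $x\neq 0$, $f_2'(x) = 2x\sin(1/x) - \cos(1/x)$, which keeps swinging between values near $-1$ and $+1$ as $x\to 0$ (a sketch):

```python
import math

def f2_prime(x):
    """Derivative of x^2 * sin(1/x) for x != 0."""
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

for k in range(1, 4):
    x_even = 1.0 / (2 * k * math.pi)        # here cos(1/x) = 1, sin(1/x) = 0
    x_odd = 1.0 / ((2 * k + 1) * math.pi)   # here cos(1/x) = -1, sin(1/x) = 0
    print(f2_prime(x_even), f2_prime(x_odd))  # ≈ -1 and ≈ +1

# So f2'(x) has no limit as x -> 0, even though f2'(0) = 0 exists.
```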
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/24978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
} |
Equality of polynomials: formal vs. functional Given two polynomials $A = \sum_{0\le k<n} a_k x^k$ and $B =\sum_{0\le k<n} b_k x^k$ of the same degree $n$, which are equal for all $x$, is it always true that $\ a_k = b_k\ $ for all $0\le k<n?$. All Coefficients and $x$ are complex numbers.
Edit:
Sorry, formulated the question wrong.
| For $\rm\ f = A-B\in R[x]\:,\:$ it is equivalent to ask if $\rm\ f(r) = 0\ $ for all $\rm\: r\in R\ \Rightarrow\ f = 0\:,\: $ i.e. if $\rm\:f\ $ is zero as a function then is $\rm\:f\ $ zero as a formal polynomial, i.e. are all its coefficients zero? This is true if $\rm\:R\:$ is an integral domain of cardinality greater than the degree of $\rm\:f\:,\:$ e.g. if $\rm|R|$ is infinite, but it may fail otherwise, e.g. $\rm\ x^p = x\ $ for all $\rm\: x\in \mathbb Z/p\ $ by Fermat's little theorem, but $\rm\ x^p \ne x\ $ in $\rm\: \mathbb Z/p\:[x]\:.$
Remark $\ $ In fact a ring $\rm\: D\:$ is a domain $\iff$ every nonzero polynomial $\rm\ f(x)\in D[x]\ $ has at most $\rm\ deg\ f\ $ roots in $\rm\:D\:.\:$ For the simple proof see my post here, where I illustrate it constructively in $\rm\: \mathbb Z/m\: $ by showing that, $\:$ given any $\rm\:f(x)\:$ with more roots than its degree,$\:$ we can quickly compute a nontrivial factor of $\rm\:m\:$ via a $\rm\:gcd\:$. The quadratic case of this result is at the heart of many integer factorization algorithms, which try to factor $\rm\:m\:$ by searching for a nontrivial square root in $\rm\: \mathbb Z/m\:,\:$ e.g. a square root of $1$ that is not $\:\pm 1$.
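The finite-field counterexample above is one line of Python (Fermat's little theorem):

```python
p = 5
# x^p - x vanishes at every element of Z/p ...
assert all(pow(x, p, p) == x % p for x in range(p))
# ... yet x^p - x is not the zero polynomial: its coefficient list over Z/p
# is [0, -1, 0, 0, 0, 1], not all zero — the function/formal distinction.
```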
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to prove the equality $\sum_{j=0}^n (x)^j (-1)^{n-j} \left\{{n \atop j}\right\} = x^n$? How do you prove $$\sum_{j=0}^n (x)^j (-1)^{n-j} \left\{{n \atop j}\right\} = x^n,$$
where $(x)^j=x(x+1)\cdots(x+j-1)$ and $\left\{{n \atop j}\right\}$ is a Stirling number of the second kind?
| There's a simple but not very well-known formula on row sums for number triangles that can be applied here. I believe it is due to Neuwirth. Suppose you have a triangle of numbers $R(n,k)$ with $R(0,0) = 1$ and, for $n \geq 1$, satisfying $$R(n,k) = (\alpha (n-1) + \beta k + \gamma) R(n-1,k) + (\alpha' (n-1) + \beta' (k-1) + \gamma') R(n-1,k-1).$$
There are several interesting number triangles that are special cases, such as the binomial coefficients and both kinds of Stirling numbers.
Anyway, the formula is that if $\beta + \beta' = 0$ then $$\sum_{k=0}^n R(n,k) = \prod_{i=0}^{n-1} \left((\alpha + \alpha')i + \gamma + \gamma'\right).$$
It is easy to verify that $R(n,k) = \left\{ { n \atop k } \right\} x^{\underline{k}}$ satisfies the above recurrence with $\beta = 1, \beta' = -1, \gamma' = x$ and all other parameters $0$. Thus the row sum formula yields Derek's reformulation of the problem (Eq. (1)):
$$\sum_{k=0}^n \left\{ { n \atop k } \right\} x^{\underline{k}} = \prod_{i=0}^{n-1} x = x^n.$$
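As a quick numerical sanity check of Derek's reformulation (illustrative Python; `stirling2` and `falling` are ad-hoc helper names, not a library API):

```python
# numerical sanity check of sum_k S(n,k) * x^{falling k} = x^n
def stirling2(n, k):
    # Stirling numbers of the second kind: S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, k):
    # falling factorial x(x-1)...(x-k+1)
    out = 1
    for i in range(k):
        out *= x - i
    return out

for n in range(7):
    for x in range(-5, 6):
        assert sum(stirling2(n, k) * falling(x, k) for k in range(n + 1)) == x**n
print("ok")
```

The loop verifies the identity exactly for all integers $-5\le x\le 5$ and $n\le 6$; since both sides are polynomials in $x$ of degree $n$, agreement at $n+1$ points already forces equality as polynomials.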
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
proof of inequality $e^x\le x+e^{x^2}$ Does anybody have a simple proof this inequality
$$e^x\le x+e^{x^2}.$$
Thanks.
| For $x \geq 0$,
$$
e^{x^2 - x} + x e^{-x} \geq 1 + (x^2 - x) + x (1-x) = 1 \> .
$$
For $x < 0$, let $y = - x > 0$, whence,
$$
e^{-x^2}(e^x - x) = e^{-y^2} (e^{-y} + y) \leq e^{-y^2} (1 - y + y^2/2 + y) \leq e^{-y^2}(1+y^2) \leq e^{-y^2} e^{y^2} = 1 \> .
$$
Notice that we've only used the basic facts that $e^x \geq 1 + x$ for all $x \in \mathbb{R}$ and that $e^{-x} \leq 1 - x + x^2/2$ for $x \geq 0$, both of which are trivial to derive by simple differentiation, similar to Didier's approach.
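For readers who want a quick numerical spot check of the inequality itself (a grid check, of course not a proof):

```python
import math

# grid check of e^x <= x + e^(x^2) on [-20, 20]; equality holds only at x = 0
for i in range(-2000, 2001):
    x = i / 100.0
    assert math.exp(x) <= x + math.exp(x * x) + 1e-12
print("ok")
```

The small tolerance `1e-12` only guards against floating-point rounding at the equality point $x=0$.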
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Proving that $\gcd(ac,bc)=|c|\gcd(a,b)$ Let $a$, $b$ an element of $\mathbb{Z}$ with $a$ and $b$ not both zero and let $c$ be a nonzero integer. Prove that $$(ca,cb) = |c|(a,b)$$
| If $(a,b)=d$, then the equation $ax+by=dz$ has a solution for all $z \in \mathbb{N}$, and this implies that $acx+bcy=(dc)z$ admits a solution for all $z \in \mathbb{N}$. And hence we can deduce the result which must appear in every elementary number theory book.
Moreover, you have not offered your motivation; including it would make the post better.
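A quick brute-force check of the identity $(ca,cb)=|c|(a,b)$ (illustrative Python; note that `math.gcd` accepts negative arguments and always returns the nonnegative gcd, matching the $|c|$ convention):

```python
from math import gcd

# spot-check (ca, cb) = |c|(a, b) over a range of triples
for a in range(-10, 11):
    for b in range(-10, 11):
        if a == 0 and b == 0:
            continue  # the statement assumes a, b not both zero
        for c in (-7, -2, -1, 1, 2, 7):
            assert gcd(c * a, c * b) == abs(c) * gcd(a, b)
print("ok")
```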
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If closed sets $A,B\subseteq\mathbb{R}^2$ are non-homeomorphic, can $\mathbb{R}^2\setminus A$ and $\mathbb{R}^2\setminus B$ be homeomorphic? I have a question. Could you please help me to solve this problem?
Is it possible that $\mathbb{R}^2\setminus A$ and $\mathbb{R}^2\setminus B$ are homeomorphic, when $A$ and $B$ are non-homeomorphic closed subsets of $\mathbb{R}^2$?
| $A= \{(0,0)\}$, $B=B[0,1]$, i.e. the closed unit ball, is an example of this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Simple proof that $8\left(\frac{9}{10}\right)^8 > 1$ This question is motivated by a step in the proof given here.
$\begin{align*}
8^{n+1}-1&\gt 8(8^n-1)\gt 8n^8\\
&=(n+1)^8\left(8\left(\frac{n}{n+1}\right)^8\right)\\
&\geq (n+1)^8\left(8\left(\frac{9}{10}\right)^8\right)\\
&\gt (n+1)^8 .
\end{align*}$
I had no trouble following along with the proof until I hit the step that relied on
$$8\left(\frac{9}{10}\right)^8 > 1$$. So I whipped out a calculator and confirmed that this is indeed correct. And I could see, after some fooling around with pen and paper that any function in the form
\begin{align}
k \; \left(\frac{n}{n+1}\right)^k
\end{align}
where $n \in \mathbb{Z}$ and $k \rightarrow \infty$ is bound to fall below one and stay there. So it's not a given that any function in the above form will be greater than one.
What I'm actually curious about is whether there are nifty or simple little tricks or calculations you can do in your head or any handwavy type arguments that you can make to confirm that $$8\left(\frac{9}{10}\right)^8 > 1$$ and even more generally, to confirm for certain natural numbers $k,n$ whether
\begin{align}
k \; \left(\frac{n}{n+1}\right)^k > 1
\end{align}
So are there? And if there are, what are they?
It can be geometrical. It can use arguments based on loose bounds of certain log values. It doesn't even have to be particularly simple as long as it is something you can do in your head and it is something you can explain reasonably clearly so that others can do also it (so if you're able to mentally calculate numbers like Euler, it's not useful for me).
You can safely assume that I have difficulties multiplying anything greater two single digit integers in my head. But I do also know that $$\limsup_{k\rightarrow \infty} \log(k) - a\cdot k < 0$$ for any $a>0$ without having to go back to a textbook.
| I'll start from the "obvious" fact that $(5/4)^3 < 2$. In fact, the cube root of 2 is around 1.26; of course you can explicitly compute $(5/4)^3 = 125/64 < 128/64 = 2$.
Then cubing both sides of that inequality, you get
$(5/4)^9 < 8$.
But $(10/9)^8 < (10/9)^9 < (5/4)^9$, so $(10/9)^8 < 8$. Taking reciprocals, $(9/10)^8 > 1/8$; multiplying both sides by 8 gives your result.
Actually, my heuristic here is as follows: $(10/9)$ is roughly one whole tone (two semitones); so $(10/9)^8$ is around sixteen semitones, or an octave and a major third, or $2 \times (5/4) = 2.5$. So $(9/10)^8$ must be around $0.4$. See Sanjoy Mahajan's handout on "singing logarithms", originally due to I. J. Good. Of course this method is really only useful if you know a little music theory.
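The whole chain of bounds above can be checked exactly with rational arithmetic (illustrative Python sketch):

```python
from fractions import Fraction

# exact rational-arithmetic check of the chain of bounds used above
assert Fraction(5, 4) ** 3 < 2                      # 125/64 < 2
assert Fraction(5, 4) ** 9 < 8                      # cube both sides
assert Fraction(10, 9) ** 8 < Fraction(5, 4) ** 9   # since 10/9 < 5/4
assert 8 * Fraction(9, 10) ** 8 > 1                 # the target inequality
print(float(8 * Fraction(9, 10) ** 8))              # roughly 3.44
```

Since `Fraction` keeps everything as exact integers ($9^8 = 43{,}046{,}721$ over $10^8$), there is no rounding to worry about.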
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 0
} |
Finding a point on the unit circle; more specifically, what quadrant it is in In my Trig class we have begun working on graphing the trig functions and working with radians and I'm trying to wrap my head around them.
At the moment I'm having trouble understanding radian measures and how to find where certain points lie on the unit circle and how to know what quadrant they are in.
For example, we are to find the reference angle of $\frac{5\pi}{6}$. My book says it terminates in QII.
This may be a dumb question, but how does one figure this out? What am I missing? How do you know that $\frac{\pi}{2} < \frac{5\pi}{6} < \pi$?
Thanks in advance...
| Think of the fractions without $\pi$: $\frac{1}{2}<\frac{5}{6}<1$ (or, alternately, since we're dealing with sixths, 5 sixths is between 3 sixths ($\frac{1}{2}$) and 6 sixths ($1$)).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Conjugate of complex polynomial? Say I have a complex polynomial:
$$a_0+a_1x+\cdots+a_nx^n,$$
where $a_0,\ldots,a_n$ are complex numbers.
What is the conjugate of this polynomial? How is it defined?
For example, if we have an inner product on the vector space $V$ defined by (for complex polynomials):
$$\int_0^1 p(x)\overline{q(x)}dx$$
Then how do we know that:
$$\langle v,v \rangle = \int_0^1 (a_0+a_1x+\cdots+a_nx^n)(\overline{a_0}+\overline{a_1}x+\cdots+\overline{a_n}x^n)dx$$
is positive? Since there could be negative numbers in there, do we just know the positive ones outweigh the negative?
| It is simply the polynomial you get by replacing each $a_i$ by its complex conjugate.
You should think of this as the map induced on $\mathbb{C}[x]$ by the map $\mathbb{C}\to\mathbb{C}$ given by complex conjugation.
Added: How do we know that $\int_0^1 p(x)\overline{p(x)}\,dx$ is positive?
Write each $a_j = \alpha_j + i\beta_j$, with $\alpha_j,\beta_j\in\mathbb{R}$. Then note that $v = q(x)+ir(x)$, where
\begin{align*}
q(x) &= \alpha_0 + \alpha_1 x + \cdots + \alpha_nx^n\\
r(x) &= \beta_0 + \beta_1x + \cdots + \beta_nx^n.
\end{align*}
So you have:
\begin{align*}
\langle v,v\rangle &= \int_0^1 (a_0+a_1x + \cdots + a_nx^n)(\overline{a_0}+\overline{a_1}x + \cdots \overline{a_n}x^n)\,dx\\
&= \int_0^1(q(x)+ir(x))(q(x)-ir(x))\,dx\\
&= \int_0^1\Bigl( \left(q(x)\right)^2 - i^2\left(r(x)\right)^2\Bigr)\,dx\\
&= \int_0^1\Bigl( \left(q(x)\right)^2 + \left(r(x)\right)^2\Bigr)\,dx
\end{align*}
and since this is the integral of two nonnegative real valued functions, it is nonnegative (and equal to $0$ if and only if both $q(x)$ and $r(x)$ are zero, i.e., if and only if $v=\mathbf{0}$).
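A numerical illustration of this positivity, using a crude Riemann sum and arbitrarily chosen coefficients (not part of the proof; the coefficient list is a made-up example):

```python
# crude Riemann-sum illustration that <v, v> = int_0^1 p(x) conj(p(x)) dx
# is a nonnegative real number
coeffs = [1 + 2j, -3j, 0.5, 2 - 1j]   # a_0, ..., a_3, chosen arbitrarily

def p(x):
    return sum(a * x**k for k, a in enumerate(coeffs))

N = 20000  # midpoint rule on [0, 1]
s = sum(p((i + 0.5) / N) * p((i + 0.5) / N).conjugate() for i in range(N)) / N
assert abs(s.imag) < 1e-9 and s.real > 0   # real and positive, as proved
print(round(s.real, 4))
```

Each summand $p(x)\overline{p(x)} = |p(x)|^2$ is a nonnegative real, which is exactly the $q(x)^2+r(x)^2$ decomposition above.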
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why should I care about adjoint functors I am comfortable with the definition of adjoint functors. I have done a few exercises proving that certain pairs of functors are adjoint (tensor and hom, sheafification and forgetful, direct image and inverse image of sheaves, spec and global sections ect) but I am missing the bigger picture.
Why should I care if a functor has a left adjoint? What does it tell me about the functor?
| For one thing, it tells you that the functor respects colimits.
For example, the "free group" functor is the left adjoint of the "underlying set" functor from $\mathcal{G}roup$ to $\mathcal{S}et$. The fact that it is a left adjoint tells you that it respects colimits, so the free group of a coproduct is the coproduct of the free groups. The "coproduct" in $\mathcal{S}et$ is the disjoint union, and the coproduct in $\mathcal{G}roup$ is the free product: so the free group on a disjoint union $X\cup Y$, $F(X\cup Y)$, is (isomorphic to) the free product of the free groups on $X$ and $Y$, $F(X)*F(Y)$.
Dually, right adjoints respect limits; so in the case above, the underlying set of a product of groups is the product of the underlying sets of the groups.
Added: Ever wondered why the underlying set of a product of topological spaces is the product of the underlying sets, and the underlying set of a coproduct of topological spaces is also the coproduct/disjoint union of the underlying sets of the topological spaces? Why the constructions in topological spaces always seem to start by doing the corresponding thing to underlying sets, but in other categories like $\mathcal{G}roup$, $R-\mathcal{M}od$, only some of the constructions do that? (I know I did) It's because while in $\mathcal{G}roup$ the underlying set functor has a left adjoint but not a right adjoint, in $\mathcal{T}op$, the underlying set functor has both a left and a right adjoint (given by endowing the set with the discrete and indiscrete topologies).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74",
"answer_count": 5,
"answer_id": 3
} |
Squaring across an inequality with an unknown number This should be something relatively simple. I know there's a trick to this, I just can't remember it.
I have an inequality
$$\frac{3x}{x-3}\geq 4.$$
I remember being shown at some point in my life that you could multiply the entire inequality by $(x-3)^2$
in order to get rid of the divisor. However,
$$3x(x-3)\geq 4(x-3)^2.$$
Really doesn't seem to go anywhere.
Anyone have any clue how to do this, that could perhaps show me/ point me in the right direction?
| $$\begin{align}
3x(x-3)&\geq 4(x-3)^2 \\
3x^2-9x &\geq 4(x^2-6x+9) \\
3x^2-9x &\geq 4x^2-24x+36
\end{align}$$
By rearranging we get
$$\begin{align}
x^2-15x+36 &\leq 0 \\
(x-3)(x-12) &\leq 0
\end{align}
$$
So $(x-3)> 0 \land (x-12)\leq0$ or $(x-3)< 0 \land (x-12)\geq 0$ (with $x-3\neq0$). The second case is impossible, so the solution set is $3 < x \leq 12$.
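A brute-force grid check confirms that the solution set is exactly $(3,12]$ (illustrative Python):

```python
# brute-force check that 3x/(x-3) >= 4 holds exactly for 3 < x <= 12
for i in range(-2000, 2001):
    x = i / 100.0
    if x == 3:
        continue  # x = 3 is excluded from the domain
    assert (3 * x / (x - 3) >= 4) == (3 < x <= 12)
print("ok")
```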
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
A case of theorem proving through resolution I know I am making a mistake somewhere but consider the following truth table:
p0 p1 p2 |
-------------------
0 0 0 | 1
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1
This coresponds to the boolean formula in DNF:
p0 v (~p1 ^ ~p2) v (p1 ^ p2)
or in CNF:
(p0 v ~p1 v p2) ^ (p0 v p1 v ~p2)
Using the last two clauses and applying the resolution inference rule on p1:
(p0 v ~p1 v p2) ^ (p0 v p1 v ~p2) |- (p0 v p2 v ~p2) = (p0 v T) = p0
So from the two clauses I can infer p0. This does not seem right since p0 does not always have to be true according to the truth table.
| Your mistake is that $(p_0 \vee T) = T$ and not $p_0$ as you wrote.
In fact it is provable from the empty set that $(p_0 \vee p_2\vee\neg p_2)$.
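An exhaustive truth-table check makes both points concrete: the two clauses match the question's table, every model satisfies the (tautological) resolvent, but $p_0$ itself is not entailed (illustrative Python; `cnf`/`dnf` are ad-hoc names):

```python
from itertools import product

def cnf(p0, p1, p2):
    return (p0 or not p1 or p2) and (p0 or p1 or not p2)

def dnf(p0, p1, p2):
    return p0 or (not p1 and not p2) or (p1 and p2)

for bits in product([False, True], repeat=3):
    # the CNF and DNF agree with the truth table in the question
    assert cnf(*bits) == dnf(*bits)
    p0, p1, p2 = bits
    # the resolvent (p0 v p2 v ~p2) is a tautology: every model satisfies it
    assert (p0 or p2 or not p2) is True

# ...but p0 itself is NOT entailed: (0,0,0) satisfies the CNF with p0 False
assert cnf(False, False, False)
print("ok")
```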
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$T(1) = 1 , T(n) = 2T(n/2) + n^3$? Divide and conquer $T(1) = 1 , T(n) = 2T(n/2) + n^3$? Divide and conquer, need help, I don't know how to solve it?
| Hmm, possibly another heuristic approach is instructive.
First write down the indisputable elements of the sequence:
$$\begin{array}{rclclclcl}
a(1) &=& a(2^0) &=& 1 \\
a(2) &=& a(2^1) &=& 10 &=& 2^3 + 2\cdot 1 &=& 2\cdot(4^1+1) \\
a(4) &=& a(2^2) &=& 84 &=& 4^3 + 2\cdot 10 &=& 4\cdot(4^2+4^1+1) \\
a(8) &=& a(2^3) &=& 680 &=& 8^3 + 2\cdot 84 &=& 8^3+2\cdot 4^3+4\cdot 2^3+8\cdot 1^3 \\
& & & & &=& 8\cdot(4^3+4^2+4^1+1) \\
\ldots &=& a(2^k) &=& \ldots
\end{array}$$
It is obvious how this can be continued, because at exponent $k$ we always get $8^k$ plus two times the previous term, thus the weighted sum of all powers of $8$, which can be expressed in terms of consecutive powers of $4$:
$$ a(2^k) = 2^k*(4^k+4^{k-1} \ldots +4^0)= 2^k*\frac{4^{k+1}-1}{4-1} $$
Now the step "divide" can be taken: the above gives also a meaningful possibility for interpolation of the non-explicitely defined elements. If we allow base-2 logarithms for k we get for
$$\begin{array}{lrcl}
& a(2^k) &=& 2^k\cdot\frac{4^{k+1}-1}{3} \\
& &=& 2^k\cdot\frac{4\,(2^{k})^2-1}{3} \\
\text{assuming } & k &=& \frac{\log(n)}{\log(2)} \\
& a(n) &=& n\cdot\frac{4\,n^2-1}{3} \\
& &=& n^3 + \frac{(n-1)n(n+1)}{3} \\
& &=& n^3 + 2\binom{n+1}{3}
\end{array}$$
where the expression in the fourth line is the same as Fabian's result.
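The closed form can be checked against the original recurrence at powers of two (illustrative Python):

```python
# check the closed form a(n) = n(4n^2 - 1)/3 against the recurrence
# T(1) = 1, T(n) = 2 T(n/2) + n^3
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n**3

for k in range(11):
    n = 2**k
    assert T(n) == n * (4 * n * n - 1) // 3
print(T(8))  # 680, matching the table above
```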
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Show that 13 divides $2^{70}+3^{70}$
Show that $13$ divides $2^{70} + 3^{70}$.
My main problem here is trying to figure out how to separate the two elements in the sum, and then use Fermat's little theorem. So how can I separate the two?
Thanks!
| $2^{12} \equiv 1 \pmod{13}$ and $3^{12} \equiv 1 \pmod{13}$ by Fermat's Little Theorem.
Hence, $2^{72} \equiv 1 \pmod{13}$ and $3^{72} \equiv 1 \pmod{13}$
$2^{72} \equiv 1 \pmod{13} \Rightarrow 2^{72} \equiv 40 \pmod{13} \Rightarrow 2^{70} \equiv 10 \pmod{13}$
$3^{72} \equiv 1 \pmod{13} \Rightarrow 3^{72} \equiv 27 \pmod{13} \Rightarrow 3^{70} \equiv 3 \pmod{13}$
Hence, $2^{70} + 3^{70} \equiv 10 + 3 = 13 \equiv 0 \pmod{13}$, which gives the result.
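For a direct machine check, Python's three-argument `pow` performs modular exponentiation efficiently:

```python
# a direct check of the two congruences and the conclusion
assert pow(2, 70, 13) == 10
assert pow(3, 70, 13) == 3
assert (2**70 + 3**70) % 13 == 0
print("ok")
```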
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 0
} |
Differential equation problem in Maple I have the differential equation $x\cdot\sqrt{4+y^2}\,dx + y\cdot\sqrt{1+x^2}\,dy = 0$, which is simple to solve on paper, but I have a problem solving it with Maple (I use the 10th version, and tried the 14th also). When applying dsolve it complains that it is not an ODE.
Any suggestions?
Thank you in advance
| The command you are looking for is
dsolve(x*sqrt(4+y(x)^2)+y(x)*sqrt(1+x^2)*diff(y(x),x), y(x)).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factorial of 0 - a convenience? If I am correct in stating that a factorial of a number ( of entities ) is the number of ways in which those entities can be arranged, then my question is as simple as asking - how do you conceive the idea of arranging nothing ?
Its easy to conceive of a null element in the context of arrays, for example - so you say that there is only one way to present a null element.
But, in layman terms - if there are three humans $h1, h2, h3$ that need to be arranged to sit on three chairs $c1, c2, c3$ - then how do you conceive of a) a null human, and b) to arrange those ( that ? ) null humans ( human ? ) on the three chairs ?
Please note that referral to humans is just for easy explanation - not trying to be pseudo-philosophical. Three balls to be arranged on three corners of a triangle works just fine. So basically, how do you conceive of an object that doesn't exist, and then conceive of arranging that object ?
So, in essence ... is $0! = 1$, a convenience for mathematicians ? Not that its the only convenience, but just asking. Of course, there are many.
If yes, then its a pity that I can't find it stated like so anywhere.
If not, then can anybody suggest resources to read actual, good proofs ?
| I don't see any difficulty with the idea of "arranging a set of zero things"...
There is no need to
conceive of an object that doesn't exist, and then conceive of arranging that object
mostly, because that does not make much sense at all.
If I give you 2 apples and I tell you to arrange them in a line above the table, there are 2 possible outcomes. If I give you zero apples, how many possible things can I find on the table after you are done? Exactly one: no apples on the table.
By the way, every definition in math is done out of convenience.
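This "exactly one empty arrangement" convention is also what the standard library encodes (illustrative Python):

```python
import math
from itertools import permutations

# there is exactly one arrangement of zero objects: the empty one
assert list(permutations([])) == [()]
assert math.factorial(0) == 1
# and 0! = 1 is what keeps n! = n * (n-1)! consistent at n = 1
assert math.factorial(1) == 1 * math.factorial(0)
print("ok")
```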
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Using proof by cases -- stuck
Let $n$ be an integer.
If $3$ does not divide $n$, then $3$ divides $n^2 - 1$.
I'm trying to prove this using a "proof by cases". However, I'm lost as to how to start. I thought proof by cases started by making a logical expression, but I can't seem to find the right one.
Thanks
| Look at the possible remainders when $3$ divides an integer. The set of possible remainders when a number is divided by $3$ is $\{0,1,2\}$. You are given that $3$ does not divide $n$. Hence, the possible remainders when $3$ divides $n$ are $1$ and $2$. Hence $n=3k+1$ or $n=3k+2$. Now look at the square of the two cases.
If $n=3k+1$, then $n^2 = 9k^2 + 6k + 1 = 3(3k^2+2k) + 1 \Rightarrow (n^2-1) = 3(3k^2+2k)$
If $n=3k+2$, then $n^2 = 9k^2 + 12k + 4 = 3(3k^2+4k+1) + 1 \Rightarrow (n^2-1) = 3(3k^2+4k+1)$
Hence in both the cases $3|(n^2-1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
If $f(1) = 2$ and $f(n) = n \cdot f(n-1)$ then $f(n) \gt 2^n$ for all $n \gt 2$ I'm having a little difficulty in proving what are probably simple induction proofs. Here is the question.
Define function $f(n)$ as follows. $f(1) = 2$ and $f(n) = n\cdot f(n-1)$ when $n > 1$.
Use induction to prove that $f(n) > 2^n$ for all $n>2$.
Note that $f(1) =2$, $f(2)=4$. I understand the basis step, so I'm not going to write that out.
Now we have $f(k) = k \cdot f(k-1)> 2^k$, and we want to show the following:
$f(k+1) = (k+1)\cdot f(k)> 2^{k+1}$. This is where I'm getting stuck. Can someone give me a hint as to how to proceed?
| HINT $\rm\quad\displaystyle \frac{f(n)}{2^n}\ =\ \frac{2}2\ \frac{2}{2}\ \frac{3}2\ \frac{4}2\ \cdots\ \frac{n}2\ > 1\ $ for $\rm\ n > 2\ $ since each factor is $>1$ after the 2nd factor.
Generally that works to show that factorials grow faster than powers, i.e. $\rm\ f(n) > c^n\ $ for $\rm\ n > n_0\:.\ $ It suffices to show: eventually $\rm\ g(n) = f(n)/c^n > 1\:,\: $ or, equivalently, eventually $\rm\: g(n+1)/g(n) > 1 \ $ since, by multiplicative telescopy, $\rm\:g(n)\:$ is a product of these adjacent term ratios, namely
$$\rm g(0)\ \ \prod_{k\:=\:0}^{n-1}\ \frac{g(k+1)}{g(k)}\ = \ \ {\rlap{--}g(0)}\frac{\rlap{--}g(1)}{\rlap{--}g(0)}\frac{\rlap{--}g(2)}{\rlap{--}g(1)}\ \ \cdots\ \ \frac{g(n)}{\rlap{----}g(n-1)}\ =\ \ g(n) $$
Yours has $\rm\ g(k+1)/g(k)\ =\ (k+1)/2\ >\ 1\ $ for $\rm\ k > 1\ $ so $\rm\:g(n) > 1\:,\:$ as a product of terms $> 1\:.$
As I have emphasized here before in many posts, by means of cancelling complicated expressions, telescopy often reduces induction problems to trivialities (e.g. a product of terms $> 1$ is itself $> 1$). Difficult problems involving hyperrational functions (i.e. $\rm\ f(n+1)/f(n) = $ rational function of $\rm\:n\:,\:$ such as powers and exponentials) are, after application of telescopy, greatly simplified to trivial problems about rational functions - functions so simple that questions about such can be decided mechanically by algorithms.
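A quick numerical confirmation of $f(n) > 2^n$ for $n>2$ (illustrative Python; note $f(n) = 2\cdot n!$):

```python
import math

# f(1) = 2, f(n) = n * f(n-1)  (so f(n) = 2 * n!); check f(n) > 2^n for n > 2
f = 2
for n in range(2, 51):
    f = n * f
    if n > 2:
        assert f > 2**n
assert f == 2 * math.factorial(50)
print("ok")
```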
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/25924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Intuitive Understanding of the constant "$e$" Potentially related-questions, shown before posting, didn't have anything like this, so I apologize in advance if this is a duplicate.
I know there are many ways of calculating (or should I say "ending up at") the constant e. How would you explain e concisely?
It's a rather beautiful number, but when friends have asked me "what is e?" I'm usually at a loss for words -- I always figured the math explains it, but I would really like to know how others conceptualize it, especially in common-language (say, "English").
related but not the same: Could you explain why $\frac{d}{dx} e^x = e^x$ "intuitively"?
| It's easy to show that $\dfrac{d}{dx} 2^x = (2^x\cdot\text{constant})$. And $\left.\dfrac{d}{dx} 2^x\right|_{x=0} = \text{that constant}$.
Since the graph of $y=2^x$ gets steeper as $x$ grows, the slope at $x=0$ must be less than the slope of the secant line involving $x=0$ and $x=1$. That latter slope is 1. Therefore the "constant" is less than 1.
By thinking about $y=4^x$ and considering the secant line involving $x=-1/2$ and $x=0$, one sees that that "constant" is more than 1.
Therefore 2 is too small, and 4 is too big, to be $e$.
For $y=e^x$, the "constant" is exactly 1.
(One can show that 3 is too big via the secant line at $x=-1/6$ and $x=0$, but the arithmetic is a bit messy.) Similarly $2.5$ is too small, via $x=0$ and $x={}$ . . . . I don't remember which number I used here. A positive number, obviously, and less than 1. Messy arithmetic again.
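One can estimate this "constant" numerically as the slope of $a^x$ at $x=0$, i.e. $\ln a$ (illustrative Python sketch using a symmetric difference quotient; the function name is ad-hoc):

```python
import math

# the "constant" d/dx a^x at x = 0 equals ln(a); estimate it numerically
def slope_at_zero(a, h=1e-6):
    return (a**h - a**(-h)) / (2 * h)

assert slope_at_zero(2) < 1      # so 2 is too small to be e
assert slope_at_zero(4) > 1      # and 4 is too big
assert slope_at_zero(3) > 1      # 3 is also too big
assert slope_at_zero(2.5) < 1    # and 2.5 too small
assert abs(slope_at_zero(math.e) - 1) < 1e-9   # for e, the constant is 1
print("ok")
```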
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 21,
"answer_id": 7
} |
Why eliminate radicals in the denominator? [rationalizing the denominator] Why do all school algebra texts define simplest form for expressions with radicals to not allow a radical in the denominator. For the classic example, $1/\sqrt{3}$ needs to be "simplified" to $\sqrt{3}/3$.
Is there a mathematical or other reason?
And does the same apply to exponential notation -- are students expected to "simplify" $3^{-1/2}$ to $3^{1/2}/3$ ?
| The usual reason I've heard is that dividing by integers is computationally easier -- it's easier to find, say, $(5\sqrt{3})/3$ by computing $5 \times \sqrt{3} \approx 8.66 $ and then dividing by $3$ to get $2.89$ than to find $5/\sqrt{3}$ by dividing $5/1.73$ directly.
The slightly cynical teacher in me wants to say that the reason for demanding no radicals in the denominator is so there is only one right answer to each question, which simplifies grading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 2
} |
Is factoring polynomials as hard as factoring integers? There seems to be a consensus that factorization of integers is hard (in some precise computational sense.) Is it known whether polynomial factorization is computationally easy or hard?
One thing I originally thought was that if we could factor polynomials easily, then we could factor $n$ by finding a polynomial $p(x)$ with $p(m)=n$ for some $m$, then "easily factorizing" $p$ to get $p(x)=q(x)\cdot r(x)$. Then $q(m)\cdot r(m)$ would be a factorization of $n$. But we wouldn't get any information if one of these happened to be 1.
Does anyone have a solution or a reference for this problem? (I searched online, but all I could find was how to do factorization like in high school problems.)
| The problem is multifold:
*
*irreducibility, as the other answers showed, is a stumbling block at the best of times.
*Polynomial factoring (by faster methods), isn't the only way we could factor an integer, but it's probably the most cost effective to implement.
*We can use the polynomial remainder theorem; it can work with things like $(x-(x-7))$, a.k.a. $7$. But that, and other modular arithmetic tricks, is how we get to most divisibility tricks to begin with.
*$(n-1)^2+1$ integers always have a subset with a sum divisible by n (via pigeonhole principle). But, does going through potentially all 10,295,472 combinations of 7 from a set of 37 digits, repeatedly until you are left with 30 non-zero digits ( not necessarily clumped together) in the entire number , sound appealing to find out if a large number is divisible by 7 ? Welcome to the subset sum problem, applied to multisets.
*Most of the above facts seem useful ( plus a few more), until you consider implementing them. If you don't have enough digits, you can't implement them. Once you do, good luck going down lots of dead ends potentially.
example:
142859374923636582974652016665013284472978368643690531468097543
okay, if I summed things right it's a multiple of 3 (bad choice of digits; cast out multiples of 3 using the digit-sum rule).
multiple of 5 test via the 4th point:
14285937492363658 has a subset of digits that's a multiple of 5, we can get rid of the whole thing it sums to a multiple of 5,
29746520166650132 Also sums to a multiple of 5 ...
84472978368643690 Also sums to a multiple of 5 ... ( what I get for not using a random number)
531468097543 Also sums to a multiple of 5 but isn't 17 digits.
Of course the digit sum trick plus pigeonhole principle aren't useful here, as the base being divisible by 5 means checking the last digit is enough. In bases that have remainder 1 on division by 5, the number would have shrunk to 12 digits or less. It would change which digits mattered in different bases.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 3,
"answer_id": 2
} |
What is the most general mathematical object that defines a solution to an ordinary differential equation? What is the most general object that defines a solution to an ordinary differential equation? (I don't know enough to know if this question is specific enough. I am hoping the answer will be something like "a function", "a continuous function", "a piecewise continuous function" ... or something like this.)
| The object you are looking for is called a phase curve. For an ODE $\frac{dy}{dx}=f(x,y)$, we look for a solution in parametric form:
$$y=y(t), x=x(t)$$
The result is a graph on the $xy$ plane that passes through the initial condition $y=y_{0}$, $x=x_{0}$. All of this is quite standard. The best literature on this is:
Differential Geometry: Manifolds, Curves, and Surfaces
http://www.amazon.com/Differential-Geometry-Manifolds-Surfaces-Mathematics/dp/0387966269/ref=sr_1_8?ie=UTF8&qid=1299830752&sr=8-8
And Ordinary Differential Equations, by V.I.Arnold.
http://www.amazon.com/Ordinary-Differential-Equations-Universitext-Vladimir/dp/3540345639/ref=sr_1_4?s=books&ie=UTF8&qid=1299830826&sr=1-4
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Show that the Hermite polynomials form a basis of $\mathbb{P}_3$ I have this question that I took a shot at but I am not very familiar with Hermite or Laguerre, this my first time running across these type of polynomials and need some help please.
(a) The first four Hermite polynomials are $1, 2t,-2+4t^2,$ and $-12t+8t^3$. These polynomials arise naturally in the study of certain important differential equations in mathematical physics. Show that the first four Hermite polynomials form a basis of $\mathbb{P}_3$.
(b) The first four Laguerre polynomials are $1, 1-t, 2-4t + t^2,$ and $6-18t + 9t^2- t^3$. Show
that these four Laguerre polynomials form a basis of $\mathbb{P}_3$.
Results:
(a) The first four Hermite polynomials will be shown to form a basis of $\mathbb{P}_3$ by showing that they are linearly independent and that the number of polynomials equals the dimension of $\mathbb{P}_3$.
Consider the following linear combination of the four Hermite polynomials:
$x(1) + y(2t) + z(-2+4t^2) + w(-12t + 8t^3) = at^3 + bt^2 + ct + d$
The first four Hermite polynomials will be shown to be linearly independent by showing that the only linear combination of them that produces the zero polynomial ($0t^3+0t^2+0t+0$) is the trivial combination of zero times each polynomial.
That is all I could come about thus far with it. Can anyone refine or correct this if this is the wrong approach to the problem.
| Forget all this Hermite and Laguerre stuff. The fact is that any family of polynomials with all different degrees is linearly free. Hence any family of polynomials with degrees $0$, $1$, $\ldots$, $n$ is a basis of the vector space of polynomials of degree at most $n$ (the space you denote by $\mathbb{P}_n$).
A relatively more sophisticated way of saying the same thing is that any triangular matrix with no zero on its diagonal is invertible.
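Concretely, for the Hermite case the coefficient matrix in the basis $\{1,t,t^2,t^3\}$ is triangular with nonzero diagonal, hence invertible (illustrative Python; `det` is an ad-hoc cofactor expansion, not a library call):

```python
# rows are the coefficients of 1, t, t^2, t^3 in
# 1,  2t,  -2 + 4t^2,  -12t + 8t^3
H = [
    [1,   0, 0, 0],
    [0,   2, 0, 0],
    [-2,  0, 4, 0],
    [0, -12, 0, 8],
]

def det(m):
    # cofactor expansion along the first row (fine for a small example)
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

# triangular matrix: determinant = product of the diagonal = 1*2*4*8 = 64
assert det(H) == 64
print(det(H))
```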
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Center of gravity of a self intersecting irregular polygon I am trying to calculate the center of gravity of a polygon.
My problem is that I need to be able to calculate the center of gravity for both regular and irregular polygons and even self intersecting polygons.
Is that possible?
I've also read that: http://paulbourke.net/geometry/polyarea/ But this is restricted to non self intersecting polygons.
How can I do this? Can you point me to the right direction?
Sub-Question: Will it matter if the nodes are not in order? if for example you have a square shape and you name the top right point (X1Y1) and then the bottom right point (X3Y3)?
In other words if your shape is like 4-1-2-3 (naming the nodes from left to right top to bottom)
Note: Might be a stupid question but I'm not a maths student or anything!
Thanks
| If you are given just a list of points $z_i=(x_i, y_i)$ $\>(1\leq i\leq n)$ then it is not immediate how these points should determine a certain polygon. Maybe you want the convex hull $C$ of these points. There are algorithms that accept your list as input and produce a second list $(w_i)_{1\leq i\leq m}$ (a subset of the $z_i$) containing the corners of $C$ in counter-clockwise order. Using this second list you can compute the area and centroid of $C$ by means of the formulas given in the quoted source.
These formulas come from an application of Green's theorem to $C$ and its boundary $\partial C$. It reads as follows:
$${\rm area}(C)={1\over2}\int_{\partial C}(x\,dy- y\,dx).$$
If you apply this formula to an arbitrary closed curve, as, e.g., the piecewise linear curve $\gamma$ determined by the original $z_i$ you get very strange results: Every part enclosed by $\gamma$ is counted as many times as it is surrounded counterclockwise by $\gamma$.
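The area/centroid formulas from the quoted source can be sketched as follows; a self-intersecting vertex order changes the signed area exactly as described, which also illustrates the sub-question about node ordering (illustrative Python; function names are ad-hoc):

```python
# shoelace / Green's-theorem area and centroid; with a self-intersecting
# vertex order, each region is weighted by its winding number
def signed_area_and_centroid(pts):
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a /= 2.0
    if a == 0:
        return a, None                  # lobes cancel; centroid undefined
    return a, (cx / (6 * a), cy / (6 * a))

# unit square, vertices in counter-clockwise order
area, c = signed_area_and_centroid([(0, 0), (1, 0), (1, 1), (0, 1)])
assert area == 1.0 and c == (0.5, 0.5)

# the same four points in a self-intersecting "bow-tie" order: the two
# lobes have opposite orientation and the signed area cancels to zero
area_bt, c_bt = signed_area_and_centroid([(0, 0), (1, 1), (1, 0), (0, 1)])
assert area_bt == 0.0 and c_bt is None
print(area, area_bt)
```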
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
If $\lim\limits_{x\to a}f(x)$ exists, and $\lim\limits_{x\to a}[f(x) + g(x)]$ exists, does it follow that $\lim\limits_{x\to a}g(x)$ exists? This is a question from Spivak's Calculus. Question statement:
If $\lim\limits_{x\to a}f(x)$ exists, and $\lim\limits_{x\to a}[f(x) + g(x)]$ exists, does it follow that $\lim\limits_{x\to a} g(x)$ exists?
I have not been able to find a counterexample, so it seems that it exists. Here is my attempt at proof, and I'm mostly interested in validity of my proof.
Following is given:
$\forall \epsilon, \exists \delta_1: 0 < |x-a| < \delta_1 \Rightarrow |f(x) - L_1| < \epsilon$
$\forall \epsilon, \exists \delta_2: 0 < |x-a| < \delta_2 \Rightarrow |f(x) + g(x) - L_2| < \epsilon$
Assume $\delta_2 \le \delta_1$ without loss of generality: if a larger $\delta_2$ works, then any smaller one will of course work as well, so we may shrink $\delta_2$ until this inequality holds.
Let $L_2 - L_1 = L_3$, that is $L_2 = L_3 + L_1$. Then we substitute:
$|f(x) + g(x) - L_1 - L_3| < \epsilon$
$|(g(x) - L_3) - (L_1 - f(x))| < \epsilon$
And by the inequality $|a-b| \ge |a| - |b|$:
$|g(x) - L_3| - |L_1 - f(x)| \le |(g(x) - L_3) - (L_1 - f(x))| < \epsilon$
$|g(x) - L_3| - |f(x) - L_1| < \epsilon$
$|g(x) - L_3| < \epsilon + |f(x) - L_1|$
Since $\delta_2 \le \delta_1$, we can always choose epsilon greater than $|f(x) - L_1|$ so we can make substitution:
$|g(x) - L_3| < 2\epsilon$
And we can make $2\epsilon$ as small as we wish. This completes the proof.
Thank you for any help!
| So it can be marked off as answered...
*
*Do you already know that if $\lim\limits_{x\to c}F(x)$ exists and $\lim\limits_{x\to c} G(x)$ exists, then $\lim\limits_{x\to c}\bigl(F(x)-G(x)\bigr)$ exists? If so, set $F(x) = f(x)+g(x)$ and $G(x) = f(x)$ to get the desired result.
*The proof is (essentially) valid; it can be streamlined a bit if you simply start from $|g(x)-L_3|$ and use the triangle inequality:
\begin{align*}|g(x)-L_3| &= |g(x)-L_3 + f(x)-f(x)+L_2-L_2|\\
&= \left|\Bigl(g(x)+f(x)-L_2\Bigr)-\Bigl(f(x)-(L_2-L_3)\Bigr)\right|\\
&\leq \left|\Bigl(g(x)+f(x)\bigr)-L_2\right|+\Bigl|f(x)-L_1\Bigr|\\
&\lt 2\epsilon.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
property of vector-space I don't know how can I to prove the following (this isn't homework, i'm just flipping through a book):
if a ∈ F, v ∈ V, and av=0
then a=0 or v=0
| If it hasn't been proven already, try proving the preliminary facts that $0_Fv=0_V$ and $a0_V=0_V$ for $a\in F$ and $v\in V$. This follows by using that fact that $0=0+0$ (for both the zero element of $F$ and of $V$), and using distributivity.
If $a=0$, you are done. So suppose $a\neq 0$, and hence has a multiplicative inverse. Try multiplying both sides of your equation by the inverse of $a$, and recall the axiom that $1v=v$ where $1$ is the multiplicative identity of $F$, to get your conclusion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Representability as a Sum of Three Positive Squares or Non-negative Triangular Numbers Let $r_{2,3}(n)$ and $r_{t,3}(n)$ denote the number of ways to write $n$ as a sum of three positive squares (A063691) and as a sum of three non-negative triangular numbers (A008443), respectively. I have noticed that $r_{2,3}(8k+3) = r_{t,3}(k)$ for $k \geq 1$. For example, $r_{2,3}(11) = 3$ because $11 = 3^2 + 1^2 + 1^2 = 1^2 + 3^2 + 1^2 = 1^2 + 1^2 + 3^2$ and $r_{t,3}(1) = 3$ because $1 = 1 + 0 + 0 = 0 + 1 + 0 = 0 + 0 + 1$, where $0$ and $1$ are triangular numbers.
Is this identity well-known? If so, where can I find its proof?
A proof should follow from showing that the coefficient of $q^{8k + 3}$ in the $q$-series of $(\sum_{n \geq 1} q^{n^{2}})^{3}$ is equal to the corresponding coefficient of the $q$-series of $\frac{1}{8} \theta^{3}_{2}(q^{4})$, where $\theta_2(q) = \theta_2(0,q)$ is a Jacobi theta function.
| It is easy enough to prove something like this for squares and triangular numbers since modulo 4, squares are 0 or 1, so any three squares adding to $8k+3$ must each be odd, and the equation
$$k = \frac{a(a+1)}{2} + \frac{b(b+1)}{2} + \frac{c(c+1)}{2}$$
implies and is implied by
$$8k+3 = (2a+1)^2 + (2b+1)^2 + (2c+1)^2 .$$
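The correspondence above between triangular-number triples and odd-square triples is easy to confirm numerically. A quick brute-force check in Python (the function names are mine) verifies $r_{2,3}(8k+3) = r_{t,3}(k)$ for small $k$:

```python
def r_sq3(n):
    """Ordered triples of positive squares with a^2 + b^2 + c^2 = n."""
    m = int(n ** 0.5)
    return sum(1 for a in range(1, m + 1) for b in range(1, m + 1)
                 for c in range(1, m + 1) if a*a + b*b + c*c == n)

def r_tri3(k):
    """Ordered triples of nonnegative triangular numbers summing to k."""
    tri, i = [], 0
    while i * (i + 1) // 2 <= k:
        tri.append(i * (i + 1) // 2)
        i += 1
    return sum(1 for a in tri for b in tri for c in tri if a + b + c == k)

assert r_sq3(11) == 3 and r_tri3(1) == 3     # the example in the question
assert all(r_sq3(8*k + 3) == r_tri3(k) for k in range(1, 25))
```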
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Monotonic behavior of a function I have the following problem related to a statistics question:
Prove that the function defined for $x\ge 1, y\ge 1$,
$$f(x,y)=\frac{\Gamma\left(\frac{x+y}{2}\right)(x/y)^{x/2}}{\Gamma(x/2)\Gamma(y/2)}\int_1^\infty w^{(x/2)-1}\left(1+\frac{xw}{y}\right)^{-(x+y)/2} dw$$
is increasing in $x$ for each $y\ge 1$ and decreasing in $y$ for each $x\ge 1$. (Here $\Gamma$ is the gamma function.)
Trying to prove by using derivatives seems difficult.
| Let $W \sim F(x, y)$ where $F(x,y)$ stands for an $F$ distribution with degrees of freedom $x$ and $y$. Then, the quantity
$$
\mathbb{P}(W \geq 1 ) = f(x,y)=\frac{\Gamma\left(\frac{x+y}{2}\right)(x/y)^{x/2}}{\Gamma(x/2)\Gamma(y/2)}\int_1^\infty w^{(x/2)-1}\left(1+\frac{xw}{y}\right)^{-(x+y)/2} \mathrm{d}w \> .
$$
From this, I think you can find the answer in the reference below.
B. K. Ghosh, Some monotonicity theorems for $\chi^2$, $F$ and $t$ distributions with applications, J. Royal Stat. Soc. B, vol. 35, no. 3 (1973), pp. 480-492.
Incidentally, note that $W = \frac{y}{x} \frac{U_{xy}}{1-U_{xy}}$ where $U_{xy} \sim \mathrm{Beta}(x/2, y/2)$ and so $\mathbb{P}(W \geq 1) = \mathbb{P}(U_{xy} \geq (1+y/x)^{-1})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Solve $\sin(5A) + \cos(5A)\sin(A) - \cos(3A) = 0$ How do you solve this equation for A:
$~~\sin(5A) + \cos(5A)\sin(A) - \cos(3A) = 0$
I've tried expanding it many times, but I can't seem to be able to reduce it to a format I can work with. Is there a simpler method of solution than repeated expansion?
| $$ \sin(5A) + \cos(5A) \sin(A) - \sin(3A) = 0 $$
Let $ x = e^{iA} $ and use De Moivre's,
$$ \frac{x^5 - x^{-5}}{2i} + \frac{x^5 + x^{-5}}{2} \frac{x - x^{-1}}{2i} - \frac{x^3-x^{-3}}{2i} = 0 $$
Multiply by $ 4i x^6 $,
$$ 2(x^{11} - x) + (x^{10} + 1)(x^2 - 1) - 2(x^9 - x^3) = $$
$$ (x^2 - 1)(x^{10} + 2x^9 + 2x + 1) = 0 $$
The phase of each root to the polynomial above (the ones with $ | x | = 1 $ at least) is a solution $ A $ to your equation (up to an integer addition of $ 2\pi $).
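The polynomial identity in the middle step can be verified mechanically by multiplying and adding coefficient lists (a sketch; the helper names are mine, and coefficients are listed from lowest degree up):

```python
def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(*ps):
    """Add several polynomials given as coefficient lists."""
    n = max(len(p) for p in ps)
    return [sum(p[i] for p in ps if i < len(p)) for i in range(n)]

def mono(d, c):
    """c * x^d as a coefficient list."""
    return [0] * d + [c]

# 2(x^11 - x) + (x^10 + 1)(x^2 - 1) - 2(x^9 - x^3)
lhs = padd(mono(11, 2), mono(1, -2),
           pmul(padd(mono(10, 1), mono(0, 1)),
                padd(mono(2, 1), mono(0, -1))),
           mono(9, -2), mono(3, 2))
# (x^2 - 1)(x^10 + 2x^9 + 2x + 1)
rhs = pmul(padd(mono(2, 1), mono(0, -1)),
           padd(mono(10, 1), mono(9, 2), mono(1, 2), mono(0, 1)))
assert lhs == rhs
```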
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
A sufficient condition for $U \subseteq \mathbb{R}^2$ such that $f(x,y) = f(x)$ I have another short question. Let $U \subseteq \mathbb{R}^2$ be open and $f: U \rightarrow \mathbb{R}$ be continuously differentiable. Also, $\partial_y f(x,y) = 0$ for all $(x,y) \in U$. I want to find a sufficient condition for $U$ such that $f$ only depends on $x$. Of course, the condition shouldn't be too restrictive. Is it sufficient for $U$ to be connected?
Thanks a lot for any help.
| This is from a problem in Rudin, if I recall correctly, in the section on functions of several variables. The important point to note is that this problem relies on the mean value theorem, which is a statement about a function from $\mathbf{R}$ to $\mathbf{R}$. The trick in this situation is to restrict your domain $E$ (the question's $U$) to a single straight line, on which $x$ is fixed and $y$ is variable. By the mean value theorem, the difference between the values of $f$ at two points on this line equals $\partial_y f$ at some intermediate point times the difference in $y$, which is zero; so all points on this line map to the same value.
There is a notion in analysis of convexity. If $(x_1,y_1)$ and $(x_2,y_2)$ are in $E$, so is the line in between them. Our proof above required this to use the mean value theorem. However, we didn't require convexity in every direction. Since we only took partials in the $y$-direction, we just required convexity in the $y$-direction. That is to say, that if two points have the same $y$ value, then the line connecting them would remain in $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is it Variation? Counting elements Let's assume that we have an element which can take a value from 1 to n (let's set n to 20 to make it easier).
And we have a set that consists of objects, each consisting of three elements $\langle e_1, e_2, e_3 \rangle$.
We have also one rule regarding to objects in the set: $e_1 \geq e_2 \geq e_3$
- example of good objects: $\langle n, n, n\rangle$, $\langle n, n-1, n-1\rangle$, $\langle 20, 19, 18\rangle$, $\langle 3, 2, 1\rangle$, $\langle 3, 3, 3\rangle$, $\langle 3, 2, 2\rangle$.
- example of bad objects: $\langle n, n+1, n\rangle$, $\langle 2, 3, 2\rangle$, $\langle 3, 2, 4\rangle$.
Now the question:
How can I count all of the good objects that can appear in this set (i.e., those that don't violate the rule)?
Can you give me any hints?
I can solve this with brute force method. But probably there is a short way.
| There are many ways. One is first to ask how many 2-element things there are. If $e_1=i$, there are $n+1-i$ of them. So the total number of 2-element things is $$\sum_{i=1}^n(n+1-i)=n^2+n-\frac{n(n+1)}{2}=\frac{n(n+1)}{2}$$
Then the number of 3-element things is $$\sum_{i=1}^n\frac{i(i+1)}{2}$$
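A brute-force count confirms both the sum above and its closed form $\binom{n+2}{3}$ (a sketch; the helper name is mine):

```python
from math import comb

def count_triples(n):
    # ordered triples with n >= e1 >= e2 >= e3 >= 1, counted directly
    return sum(1 for e1 in range(1, n + 1)
                 for e2 in range(1, e1 + 1)
                 for e3 in range(1, e2 + 1))

for n in range(1, 15):
    formula = sum(i * (i + 1) // 2 for i in range(1, n + 1))
    assert count_triples(n) == formula == comb(n + 2, 3)

print(count_triples(20))  # 1540 objects when n = 20
```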
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is there an explicit form for cubic Bézier curves? (See edits at the bottom)
I'm trying to use Bézier curves as an animation tool. Here's an image of what I'm talking about:
Basically, the value axis can represent anything that can be animated (position, scaling, color, basically any numerical value). The Bézier curve is used to control the speed at which the value is changing as well as it start and ending value and time. In this graphic, the animated value would slowly accelerate to a constant speed, then decelerate and stop.
The problem is, that Bézier curve is defined with parametric equations.
$f_x(t):=(1-t)^3p_{1x} + 3t(1-t)^2p_{2x} + 3t^2(1-t)p_{3x} + t^3p_{4x}$
$f_y(t):=(1-t)^3p_{1y} + 3t(1-t)^2p_{2y} + 3t^2(1-t)p_{3y} + t^3p_{4y}$
What I need is a representation of that same Bézier curve, but defined as value = g(time), that is, y = g(x).
I've tried solving for t in the x equation and substituting it in the y equation, but that 3rd degree is giving me some difficulty.
I also tried integrating the derivative of the Bézier curve (dy/dx) with respect to t, but no luck.
Any ideas?
Note : "Undefined" situations are avoided by preventing the tangent control points from going outside the hull horizontally, preventing any overlap in the time axis.
EDIT :
I have found two possible solutions. One uses Decasteljau's algorithm to approximate the $s$ parameter from the $t$ parameter, $s$ being the parameter of the parametric curve and $t$ being the time parameter. Here (at the bottom).
The other, from what I can understand of it, recreates a third degree polynomial equation matching the curve by solving a system of linear equation. Here. I understand the idea, but I'm not sure of the implementation. Any help?
| You're really looking for a cubic equation in one dimension (time).
$$
y = u_0(1-t)^3 + 3u_1(1-t)^2t + 3u_2(1-t)t^2 + u_3t^3
$$
Is all you need.
Walking $t$ at even intervals (say in steps of 0.1) takes evenly spaced points along the parametric curve.
So, the answer to your question is really quite simple. The parametric bezier curve provides 2 variables as the output, with only 1 variable as the input. To control an animation in time like these, that's only a 1 dimensional situation. So consider $t$ as time, and drop one variable (say drop $x$). Your animation ease curve is controlled by the $y$ value:
Now as $t=0,0.1..1$, you have an animation parameter that starts slowly, moves at medium speed in the middle, and slows down at the end.
Examples
Setting $u_0=0$, $u_1=0.05$, $u_2=0.25$, $u_3=1$ gives an ease-in curve (slow start, fast end)
Setting $u_0=0$, $u_1=0.75$, $u_2=0.95$, $u_3=1$ gives an ease-out curve (fast start, slow end)
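A minimal sketch of this one-dimensional ease function in Python (the function name and sample values are mine), using the ease-in control points above:

```python
def ease(t, u0, u1, u2, u3):
    """Cubic Bezier ease value at parameter t in [0, 1]."""
    s = 1.0 - t
    return u0*s**3 + 3*u1*s*s*t + 3*u2*s*t*t + u3*t**3

# ease-in: slow start, fast end
vals = [ease(t / 10, 0, 0.05, 0.25, 1) for t in range(11)]
assert vals[0] == 0 and vals[-1] == 1
# the sampled values increase slowly at first, then faster
assert all(a < b for a, b in zip(vals, vals[1:]))
```

Sampling `t` at even intervals and feeding the result to the animated property gives the accelerating-then-constant motion described in the question.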
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 8,
"answer_id": 5
} |
How to check if derivative equation is correct? I can calculate the derivative of a function using the product rule, chain rule or quotient rule.
When I find the resulting derivative function however, I have no way to check if my answer is correct!
How can I check if the calculated derivative equation is correct? (ie I haven't made a mistake factorising, or with one of the rules).
I have a graphics calculator.
Thanks!
| Many derivative problems can be done more than one way. One way to check your work is to try them both ways and see if you get the same thing.
For example: $y=\frac{1}{x^2}$
You can write this as $y=x^{-2}$ and use the power rule to get $y'=-2x^{-3}=\frac{-2}{x^3}$
Or you can use the quotient rule $y'=\frac{0*x^2-2*x}{{(x^2)}^2}=\frac{-2x}{x^4}=\frac{-2}{x^3}$
This is probably too simple of an example, but I hope you get the idea.
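Another way to check an answer is numerically: compare the claimed derivative against a finite-difference estimate at a few sample points. A hedged sketch (the names and tolerances are mine):

```python
def check_derivative(f, fprime, points, h=1e-6, tol=1e-4):
    """Compare fprime against a central-difference estimate of f'."""
    for x in points:
        approx = (f(x + h) - f(x - h)) / (2 * h)
        if abs(approx - fprime(x)) > tol * max(1, abs(fprime(x))):
            return False
    return True

# y = 1/x^2  =>  y' = -2/x^3
assert check_derivative(lambda x: x**-2, lambda x: -2*x**-3, [0.5, 1, 2, 3])
# a wrong candidate derivative fails the check
assert not check_derivative(lambda x: x**-2, lambda x: -2*x**-2, [0.5, 1, 2, 3])
```

This only samples a few points, so it cannot prove the derivative is correct, but it catches most algebra slips.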
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Multiply: $(4 + x)(x^2 + 2x +3)$ How would I solve this:
Multiply: $(4 + x)(x^2 + 2x +3)$
| This question, as you probably know, requires the use of the distributive property. To use JavaMan's suggestion $$(a + b) \cdot c = a \cdot c + b \cdot c$$
Let "a + b" be your $4 + x$ and let "c" be your $x^2 + 2x + 3$
Then we need to multiply $a \cdot c$, or $4 \cdot (x^2 + 2x + 3)$, and add it to
$b \cdot c$, which is $x \cdot (x^2 + 2x + 3)$
So $$ a\cdot c + b \cdot c = [4\cdot (x^2 + 2x +3)] + [x \cdot (x^2 + 2x + 3)]$$
After taking these steps, combine like terms and write the result in order of decreasing exponents (for convention's sake)
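The distributive steps above amount to multiplying coefficient lists, which is easy to mechanize. A small sketch (the representation choice is mine: coefficients listed from lowest degree up):

```python
def poly_mul(p, q):
    # p and q are coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (4 + x) * (3 + 2x + x^2)
assert poly_mul([4, 1], [3, 2, 1]) == [12, 11, 6, 1]
```

which says the product is $x^3 + 6x^2 + 11x + 12$.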
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/26956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Defining sets and union as a group I came across an exercise in this book where the question was to define a collection of sets and the union operator as a group. The two parts of the question were to (1) find the identity element and (2) find the inverse element.
Assuming some collection of sets that's closed over set union $\mathcal{X}$ and some element $A \in \mathcal{X}$, my answer to the first question was that any set $A' \subseteq A$ and $\emptyset$ could be the identity element. But I run into trouble if I try to build an inverse out of this. For example, there's no way to obtain $\emptyset$ from a non-empty $A$ using union. And it's obvious that if uniqueness of identity and inverse is a requirement for defining groups, then for any $A$, $A$ is also its own identity and inverse.
My question is
*
*Is it valid to have multiple non-unique identity elements in a group for each item in the collection?
*Is it valid to have a unique inverse element for each item in the collection even if the identity elements are not unique?
Judging by what I read on wikipedia, uniqueness doesn't seem to be a criterion.
| Let $(\mathcal{X}, \cup, E, -)$ be a group. Take arbitrary $A\in \mathcal{X}$. $A\cup E = A$, so $E\subseteq A$. $A\cup (-A) = E$, so $A\subseteq E$. Then $A=E$. This group is trivial.
$\cup$ is idempotent. Any idempotent group is trivial. Proof. Let $(G, +, 0, -)$ be a group. Take arbitrary $g\in G$. Then $g+g=g, -g+g+g=-g+g, 0+g=0, g=0$. Qed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How can I determine a number is irrational? I have a hypothesis about regular polygons, but in order to prove or disprove it I need a way to determine whether an expression is rational. Once I boil down my expression the only part that could be irrational is:
$$S_N = \cot \frac{\pi}{N} \text{ for } N\in ℕ_1 ∖ \left\{1, 2, 4\right\}$$
Is there at least one such $N$ for which $S_N$ is rational? Can it be proven that $S_N$ is never rational for any such $N$? How would I go about proving one or the other?
| A simple, complete proof can be found in Olmsted, J. M. H., Rational Values of Trigonometric Functions, Amer. Math. Monthly 52 (1945), no. 9, 507–508.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can it be determined that the sum of the diagonal entries, of matrix A, equals the sum of eigenvalues of A I have a question to ask down below, that I have been having some trouble with and would like some help and clarification on.
Suppose A is an $n \times n$ matrix with (not necessarily distinct) eigenvalues $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n}$. Can it be shown that:
(a) The sum of the main diagonal entries of A, called the trace of A, equals the sum of the eigenvalues of A.
(b) A $- ~ k$ I has the eigenvalues $\lambda_{1}-k, \lambda_{2}-k, \ldots, \lambda_{n}-k$ and the same eigenvectors as A.
Thank You very much.
| For the first,
$$A = P^{-1} M P$$
where $M$ is an (upper triangular) matrix with the eigenvalues of $A$ as diagonal elements. This is what it means to say that $A$ is always similar to its Jordan form.
Use $Tr(AB)=Tr(BA)$
$$Tr(A)= Tr( P^{-1} M P) = Tr(MPP^{-1})=Tr(M)=\sum_n\lambda_n$$
b) Let $B=A-kI$ and let its eigenvalues be $\chi_n$.
Eigenvalues are determined by solutions of
$$|B-\chi I|=0$$
or,
$$|A-(\chi+k)I|=0$$
but since you know $$|A-\lambda I|=0$$ you get $\chi_n = \lambda_n-k$
Let $Y$ be an eigenvector of $B$. So $BY=\chi Y$. Now plug stuff in for $B$ and $\chi$ and see what you'd get.
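Both parts can be sanity-checked on a small example via the characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\lambda + \det A$ of a $2\times 2$ matrix (a sketch; the matrix is my own example, and the code assumes real eigenvalues):

```python
def eigs_2x2(M):
    # roots of the characteristic polynomial lambda^2 - tr*lambda + det
    (a, b), (c, d) = M
    tr, det = a + d, a*d - b*c
    disc = (tr*tr - 4*det) ** 0.5   # assumes a nonnegative discriminant
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[2, 1], [0, 3]]                          # eigenvalues 2 and 3
assert eigs_2x2(A) == [2.0, 3.0]
assert sum(eigs_2x2(A)) == A[0][0] + A[1][1]  # (a): trace = sum of eigenvalues
k = 1
A_shift = [[2 - k, 1], [0, 3 - k]]            # A - kI
assert eigs_2x2(A_shift) == [l - k for l in eigs_2x2(A)]  # (b): shifted eigenvalues
```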
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Number of automorphisms of a direct product of two cyclic $p$-groups Suppose I have $G = \mathbb{Z}_{p^m} \times \mathbb{Z}_{p^n}$ for $m, n$ distinct natural numbers and $p$ a prime. Is there a combinatorial way to determine the number of automorphisms of $G$?
| $\def\ZZ{\mathbb{Z}}$
It's worth pointing out that Arturo's second method generalizes very nicely to all abelian $p$-groups: Consider $G = \bigoplus \ZZ/p^{\lambda_i}$; let $\mu_{\ell}$ be the number of $\lambda$'s which are equal to $\ell$; let $r$ be the number of summands of $G$ (so $r = \sum \mu_i$).
Then $\mathrm{End}(G) \cong \bigoplus \mathrm{Hom}(\ZZ/p^{\lambda_i}, \ZZ/p^{\lambda_j}) \cong \bigoplus \ZZ/p^{\min(\lambda_i, \lambda_j)}$ as abelian groups.
An element of $\mathrm{End}(G)$ is invertible if and only if its image in $\mathrm{End}(G/pG)$ is invertible, by Nakayama's lemma. Now, $G/pG \cong (\ZZ/p)^r$, so its endomorphism ring is $\mathrm{Mat}_{r \times r}(\ZZ/p)$. The map $\mathrm{End}(G) \to \mathrm{End}(G/pG)$ is NOT surjective. Rather, the image is the block-upper-triangular matrices, where the sizes of the blocks are the $\mu_i$. Such a matrix is invertible if and only if the diagonal blocks are invertible. So the fraction of $\mathrm{End}(G)$ which is made up of invertible elements is
$$\prod_{\ell} \frac{|\mathrm{GL}_{\mu_{\ell}}(\ZZ/p)|}{|\mathrm{Mat}_{\mu_{\ell} \times \mu_{\ell}}(\ZZ/p)|} = \prod_{\ell} \frac{(p^{\mu_\ell}-1)(p^{\mu_\ell}-p) \cdots (p^{\mu_\ell} - p^{\mu_\ell -1})}{p^{\mu_{\ell}^2}}$$
$$=\prod_{\ell} \left( 1- p^{-1} \right) \left( 1-p^{-2} \right) \cdots \left( 1-p^{-\mu_{\ell}} \right).$$
Putting it all together, $|\mathrm{Aut}(G)|$ is found by multiplying the above formula by $|\mathrm{End}(G)|$ (computed in the second paragraph), and we get
$$|\mathrm{Aut}(G)| = p^{\sum_{i,j} \min(\lambda_i, \lambda_j)} \prod_{\ell} \left( 1- p^{-1} \right) \left( 1-p^{-2} \right) \cdots \left( 1-p^{-\mu_{\ell}} \right).$$
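As a sanity check: for $G=\mathbb{Z}_2\times\mathbb{Z}_4$ (so $p=2$, $\lambda=(1,2)$, $\mu_1=\mu_2=1$), taking each product as $\prod_{k=1}^{\mu_\ell}(1-p^{-k})$, the count is $2^{1+1+1+2}\cdot\frac12\cdot\frac12 = 8$. A brute-force sketch in Python (my own code; it assumes the order of the first summand divides that of the second):

```python
from itertools import product

def aut_count(pm, pn):
    """Count automorphisms of Z_pm x Z_pn by brute force over the
    images x = f(e1), y = f(e2) of the generators; assumes pm | pn."""
    G = list(product(range(pm), range(pn)))
    count = 0
    for x, y in product(G, repeat=2):
        # f(e1) = x must have order dividing pm (the condition on y is automatic)
        if (pm * x[1]) % pn != 0:
            continue
        # the homomorphism (a,b) -> a*x + b*y is an automorphism iff bijective
        img = {((a*x[0] + b*y[0]) % pm, (a*x[1] + b*y[1]) % pn)
               for a in range(pm) for b in range(pn)}
        count += (len(img) == len(G))
    return count

assert aut_count(2, 4) == 8   # matches the formula for Z_2 x Z_4
assert aut_count(2, 2) == 6   # |GL_2(Z/2)| = 6
```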
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 0
} |
Prove a 3x3 system of linear equations with arithmetic progression coefficients has infinitely many solutions How can I prove that a 3x3 system of linear equations of the form:
$\begin{pmatrix}
a&a+b&a+2b\\
c&c+d&c+2d\\
e&e+f&e+2f
\end{pmatrix}
\begin{pmatrix}
x\\ y\\ z
\end{pmatrix}
=\begin{pmatrix}
a+3b\\
c+3d\\
e+3f
\end{pmatrix}$
for $a,b,c,d,e,f \in \mathbb Z$ will always have infinite solutions and will intersect along the line
$ r=
\begin{pmatrix}
-2\\3\\0
\end{pmatrix}
+\lambda
\begin{pmatrix}
1\\-2\\1
\end{pmatrix}$
| This follows immediately from a basic theorem of linear algebra. Namely, the solution of a nonhomogeneous linear system can be expressed as the sum of any particular solution plus the general solution of the homogeneous system. Since your exhibited solution has this form it is necessarily a solution, i.e. $\rm\ \ L(x_0) = a_0,\ L(x) = 0\ \Rightarrow\ L(x_0+\lambda\ x)\ =\ L(x_0)+\lambda\ L(x)\ =\ a_0\:.\ \ $ This sort of "shifted" vector space structure is known as an affine space.
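One can confirm numerically that the displayed line satisfies the system for arbitrary integer coefficients (a sketch; the names are mine):

```python
import random

def residual(a, b, c, d, e, f, lam):
    # rows of the arithmetic-progression matrix and the right-hand side
    rows = [(a, a + b, a + 2*b), (c, c + d, c + 2*d), (e, e + f, e + 2*f)]
    rhs = [a + 3*b, c + 3*d, e + 3*f]
    # the claimed solution line r = (-2, 3, 0) + lam * (1, -2, 1)
    x, y, z = -2 + lam, 3 - 2*lam, lam
    return [r[0]*x + r[1]*y + r[2]*z - t for r, t in zip(rows, rhs)]

random.seed(0)
for _ in range(100):
    coeffs = [random.randint(-9, 9) for _ in range(6)]
    lam = random.randint(-5, 5)
    assert residual(*coeffs, lam) == [0, 0, 0]
```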
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Understanding a proof by descent [Fibonacci's Lost Theorem] I am trying to understand the proof in Carmichaels book Diophantine Analysis but I have got stuck at one point in the proof where $w_1$ and $w_2$ are introduced.
The theorem it is proving is that the system of diophantine equations:
*
*$$x^2 + y^2 = z^2$$
*$$y^2 + z^2 = t^2$$
cannot simultaneously be satisfied.
The system is algebraically seen equivalent to
*
*$$t^2 + x^2 = 2z^2$$
*$$t^2 - x^2 = 2y^2$$
and this is what will be worked on. We are just considering the case where the numbers are pairwise relatively prime. That implies that $t,x$ are both odd (they cannot be both even). Furthermore $t > x$ so define $t = x + 2 \alpha$.
Clearly the first equation $(x + 2\alpha)^2 + x^2 = 2 z^2$ is equivalent to $(x + \alpha)^2 + \alpha^2 = z^2$ so by the characterization of primitive Pythagorean triples there exist relatively prime $m,n$ such that $$\{x+\alpha,\alpha\} = \{2mn,m^2-n^2\}.$$
Now the second equation $t^2 - x^2 = 4 \alpha (x + \alpha) = 8 m n (m^2 - n^2) = 2 y^2$ tells us that $y^2 = 2^2 m n (m^2 - n^2)$ by coprimality and unique factorization it follows that each of those terms are squares so define $u^2 = m$, $v^2 = n$ and $w^2 = m^2 - n^2 = (u^2 - v^2)(u^2 + v^2)$.
It is now said that from the previous equation either
*
*$u^2 + v^2 = 2 {w_1}^2$, $u^2 - v^2 = 2 {w_2}^2$
or
*
*$u^2 + v^2 = w_1^2$, $u^2 - v^2 = w_2^2$
but $w_1$ and $w_2$ have not been defined and I cannot figure out what they are supposed to be. Any ideas what this last part could mean?
For completeness, if the first case occurs we have our descent and if the second case occurs $w_1^2 + w_2^2 = 2 u^2$, $w_1^2 - w_2^2 = 2 v^2$ gives the descent. Which finishes the proof.
| This descent has a very beautiful presentation based upon ideas going back to Fibonacci.
Fibonacci's Lost Theorem $ $ The area of an integral
pythagorean triangle is not a perfect square.
Over $400$ years before Fermat's celebrated proof by infinite descent of the essentially equivalent $\rm\,FLT_4\,$ (Fermat's Last theorem for exponent $4$), Fibonacci claimed
to have a proof of this in his Liber Quadratorum (Book of Squares). But, alas, to this day, his proof has never been found.
Below is my speculative reconstruction of Fibonacci's proof of this theorem, based upon similar ideas that survived in his extensive studies on squares and related topics.
A square arithmetic progression (SAP) is an AP $\rm\ x^2,\ y^2,\ z^2\ $ with a square stepsize $\rm\, s^2,\, $ viz. $$\rm\ x^2\ \ \xrightarrow{\Large s^2}\ \ y^2\ \ \xrightarrow{\Large s^2}\ \ z^2$$
Naturally associated with every SAP is a "half square triangle",
ie. doubling $\rm\ z^2 + x^2\ $ produces a triangle of square area $\rm\ s^2,\, $ viz.
$\rm\ (z + x)^2 + (z - x)^2\, =\ 2\ (z^2 + x^2)\ =\ 4\ y^2\ $
which indeed has $\ $ area $\rm\, =\ (z + x)\ (z - x)/2\ = \ (z^2 - x^2)/2\ =\ s^2\ $
With these concepts in mind, the proof is very easy:
If there exists a pythagorean triangle with square area then it
may be primitivized and its area remains square. Let its primitive
parametrization be $\rm\:(a,b)\:$ and let its area be $\rm\:c^2,\:$ namely
$$\rm\ \frac{1}2\ leg_1\ leg_2\, =\ \frac{1}2\ (2\:a\:b)\ (a^2-b^2)\ =\ (a\!-\!b)\ a\ (a\!+\!b)\ b\ =\ c^2 $$
Since $\rm\:a\:$ and $\rm\:b\:$ are coprime of opposite parity, $\rm\ a\!-\!b,\ a,\ a\!+\!b,\ b\ $ are coprime factors of a square, thus all must be squares.
Hence $\rm\ a\!-\!b,\ a,\ a\!+\!b\ $ form a SAP; doubling its half square triangle
yields a triangle with smaller square area $\rm\ b < c^2,\ $ hence descent. $\ \ $ QED
Remark $ $ This doubling construction is ancient - already in Euclid. It may be
viewed as a composition of quadratic forms $\rm\ (z^2 + x^2)\ (1^2 + 1^2)\:. $
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Find the Matrix of a Linear Transformation Relative to a Basis Our book gives this problem:
Find the $\mathcal{B}$-matrix for the transformation $\vec{x} \rightarrow A\vec{x}$ when the basis $\mathcal{B} = \{ \vec{b}_1, \vec{b}_2 \}$, where $A = \left[\begin{array}{cc}
3 & 4 \\
-1 & -1 \\
\end{array} \right]$, $\vec{b}_1 = \left[\begin{array}{cc}
2 \\
-1 \\
\end{array} \right]$, and $\vec{b}_2 = \left[\begin{array}{cc}
1 \\
2 \\
\end{array} \right]$.
From what I understand, it's asking us to find the matrix for the same exact transformation as $A$, except relative to to the given bases. I can't figure out where to go from here, though... any thoughts?
| What you need to do is form the matrix $B = (\vec{b}_1|\vec{b}_2)$, where $\vec{b}_i$ is the $i$th column of $B$, and note that this matrix converts coordinates relative to the basis $\mathcal{B}$ into standard coordinates, while the inverse $B^{-1}$ converts standard coordinates into $\mathcal{B}$-coordinates. Thus if you have a vector given in $\mathcal{B}$-coordinates, you convert it to the standard basis by multiplying by $B$, apply the transformation by multiplying by $A$, and finally convert back to $\mathcal{B}$-coordinates by multiplying by $B^{-1}$, so your overall matrix is $B^{-1}AB$.
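Concretely, for the matrices in this problem the change of basis can be carried out with exact rational arithmetic (a sketch; the helper names are mine). As a quick cross-check, $A\vec b_1 = \vec b_1$, so the first column of the $\mathcal{B}$-matrix should be $(1,0)^T$:

```python
from fractions import Fraction

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv_2x2(M):
    (a, b), (c, d) = M
    det = Fraction(a*d - b*c)
    return [[d/det, -b/det], [-c/det, a/det]]

A = [[3, 4], [-1, -1]]
B = [[2, 1], [-1, 2]]                     # columns are b1 and b2
M = mat_mul(inv_2x2(B), mat_mul(A, B))    # the B-matrix: B^{-1} A B
assert M == [[1, 5], [0, 1]]
```

So relative to $\mathcal{B}$, the transformation acts as a shear.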
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How many countable graphs are there? Even though there are uncountably many subsets of $\mathbb{N}$ there are only countably many isomorphism classes of countably infinite - or countable, for short - models of the empty theory (with no axioms) over one unary relation.
How many isomorphism classes of
countable models of the empty theory
over one binary relation (a.k.a.
graph theory) are there? I.e.: How many countable unlabeled graphs are there?
A handwaving argument might be: Since the number of unlabeled graphs with $n$ nodes grows (faster than) exponentially (as opposed to growing linearly in the case of a unary relation), there must be uncountably many countable unlabeled graphs. (Analogously to the case of subsets: the number of subsets of finite sets grows exponentially, thus (?) there are uncountably many subsets of a countably infinite set.)
How is this argument to be made rigorous?
| Another very simple way to show that they are uncountable is with well-orderings: by the definition of $\aleph_1$, there are $\aleph_1$ distinct order types of well-orderings of a countable set.
EDIT: Here's another simple way to show that they are $2^{\aleph_0}$ many. Take all but one (let's call it $p$) of the countable nodes and order them with the standard order of the natural numbers. This essentially assigns to every of these nodes a natural number. Now use the edgeless node $p$ that we kept to describe characteristic functions: For every subset $X$ of the natural numbers add an edge from the node that denotes $n$ to $p$ if and only if $n\in X$. It's trivial to see that all these graphs are not isomorphic: If two of them were then two different sets of natural numbers would coincide.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 8,
"answer_id": 4
} |
zeroes of holomorphic function I know that zeroes of holomorphic functions are isolated, and I know that if a holomorphic function has a zero set with a limit point then it is identically the zero function. I know a holomorphic function can have a countable zero set. Does there exist a holomorphic function which is not identically zero and has an uncountable number of zeroes?
| Your question needs some clarification, as it is not internally consistent. If I understand correctly, you're asking if all holomorphic functions have uncountably many zeros? In that case, the answer is definitely no, for example $f(z) = 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
Detecting cycles in off-line Turing machines Let $M$ be an off-line Turing machine over the input alphabet $\{0,1\}^{*}$, that uses only one working tape in addition to the input tape. Construct a Turing machine $M'$, such that:
*
*$L(M) = L(M')$
*$M'$ never loops in a bounded space
(that is, $M'(w)\uparrow$ may happen
only if $M'$ visits infinitely many
cells in the computation on $w$)
*for each input word $w$, the number
of cells visited by $M'$ in the
computation on $w$ is the same as the
analogical number for $M$.
$M'$ may use larger working alphabet than $M$.
| A standard way to detect cycles in a TM is by counting configurations. For a machine with one input tape and one work tape, a configuration (of M's run on the input x) consists of the current state, the location of the heads in both tapes, and the content of the work tape.
If the location of the heads is known to be bounded, the number of configurations itself is bounded. So if the machine runs more steps than the possible number of configuration, by the pigeonhole principle it entered the same configuration twice, hence it is in a loop, hence we can safely reject.
The only problem in our case is that we don't know a priori a bound on the location of the head in the work tape. So we adapt dynamically - denote by $k$ the leftmost cell reached so far by the head, and count configurations according to $k$. If the head passes the $k$th cell, update $k$ accordingly.
The only remaining problem is counting configurations with only limited space; however, this can be fixed using extended alphabet. Think of every letter in the new alphabet as a pair - one letter from the old alphabet, representing what M sees, and the other letter is in an alphabet big enough so that $k$ digits are enough to represent a number as large as the possible number of configurations.
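The configuration-repetition idea can be sketched for an ordinary one-tape machine (my own simplified code, not the two-tape construction itself): store every configuration seen so far and declare a loop on the first repeat, which is exactly the pigeonhole argument above made explicit.

```python
def run_with_loop_detection(delta, start, tape, max_cells=64):
    """Simulate a 1-tape TM; return 'loop' if a configuration repeats,
    'halt' if no transition applies, 'escaped' if the head leaves max_cells."""
    state, head = start, 0
    tape = dict(enumerate(tape))
    seen = set()
    while True:
        config = (state, head, tuple(sorted(tape.items())))
        if config in seen:
            return "loop"          # same configuration twice => cycle
        seen.add(config)
        key = (state, tape.get(head, "_"))
        if key not in delta:
            return "halt"
        state, sym, move = delta[key]
        tape[head] = sym
        head += move
        if not (0 <= head < max_cells):
            return "escaped"

# a two-state machine that flips between states without moving: loops
flip = {("p", "_"): ("q", "_", 0), ("q", "_"): ("p", "_", 0)}
assert run_with_loop_detection(flip, "p", "") == "loop"
# a machine with no transition out of state q: halts
once = {("p", "_"): ("q", "x", 1)}
assert run_with_loop_detection(once, "p", "") == "halt"
```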
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
what type of math is this? I am a total newbie to the world of math and was interested in learning. I just finished my degree (non-math) and am going to study a few math books to see if it interests me to apply for something more quantitative, but I want to study something interesting with interesting problems that won't bore me.
I thought about it, and thought of the type of problems that interest me. One is predicting the future and the other is predicting the past. Here's a problem that I think would be cool. Say you have a list
calories 89, 34, 67, 43, 54, 232, 623
and someone tells you that someone had a total of "6553" calories in a day. What type of math would try to figure this out? Is it algebra? (By the way, to get this question all I did was take each value above and multiply the first one by 1, the second one by 2, etc., up to 7.)
| To cast the problem a little more clearly, you have a number of "weights", $w_1,\ldots,w_n$, in this case:
\begin{align*}
w_1 &= 89\\
w_2 &= 34\\
w_3 &= 67\\
w_4 &= 43\\
w_5 &= 54\\
w_6 &= 232\\
w_7 &= 623,
\end{align*}
and a "target total" $T$, in this case $T=6553$. You want to find nonnegative integers $a_1,\ldots,a_n$ such that
$$a_1w_1 + \cdots + a_nw_n = T.$$
In its broadest sense, this is an example of what is called a Diophantine equation (an equation in which we require the solutions to be nonnegative integers, or more generally rational numbers). They are studied in the branch of mathematics called Number theory.
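In code, checking whether such a target is attainable (and exhibiting witnesses $a_1,\ldots,a_n$) reduces to the classic coin-change reachability problem. Here is a minimal dynamic-programming sketch; the function and variable names are mine, purely illustrative:

```python
# A dynamic-programming sketch (coin-change style) that finds nonnegative
# integers a_i with sum(a_i * w_i) == T, if any exist.

def find_combination(weights, target):
    # parent[t] holds one weight that can complete a total of t, or None
    parent = [None] * (target + 1)
    reachable = [False] * (target + 1)
    reachable[0] = True
    for t in range(1, target + 1):
        for w in weights:
            if w <= t and reachable[t - w]:
                reachable[t] = True
                parent[t] = w
                break
    if not reachable[target]:
        return None
    # Walk back through parent[] to recover how often each weight was used.
    counts = {w: 0 for w in weights}
    t = target
    while t > 0:
        counts[parent[t]] += 1
        t -= parent[t]
    return counts

weights = [89, 34, 67, 43, 54, 232, 623]
solution = find_combination(weights, 6553)
```

One valid solution is the asker's own (multiply the $i$-th value by $i$); the search may well return a different one, since solutions to such Diophantine equations are generally not unique.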
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Solving a complicated equation system I'm trying to get a value in a web application, using some information. I succeeded in creating this system (I need $x$):
$
\begin{equation}
\left\{
\begin{matrix}
x & = & \dot a + b - c - (\dot d \cdot \dot e)\\
b & = & f \cdot \dot g\\
f & = & \frac{x - \dot d - h}{1 + \dot i}\\
c & = & \dot \alpha(b)\\
h & = & x \cdot \dot e
\end{matrix}
\right.
\end{equation}
$
But it seems I can't solve it. Please, note that I know the value of the dotted variables (like $\dot p$).
My fear is that I can't solve this system. If so, is there a way to approximate the value of $x$?
Any help would be very appreciated.
Thank you!
Edit: I added two equations. I don't know if this can help; anyway, I added them. Note that I know the value of $\alpha()$, but I did not put that here because it does not contain any of the variables of the system. It's there as a placeholder for an if statement.
| If you don't know $\alpha(b)$ (I don't see it dotted) you still have six unknowns and five equations. If you do know $\alpha(b)$ you have a linear system. If you insert $x=he$ for $x$ in the third equation, then substitute all the rest into the first you have
$x=a+\frac{x-d-he}{1+i}-\alpha(b)-de$
$x\left(1-\frac{1}{1+i}\right)=x\frac{i}{1+i}=a+\frac{-d-he}{1+i}-\alpha(b)-de$
$x=\frac{i+1}{i}(a-\alpha(b)-de)-\frac{d+he}{i}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Help understand isomorphism of tensor product and connection to vector spaces well I'm having a hard time understanding the tensor product. Here's a problem from Atiyah and Macdonald's book:
Let $A$ be a non-trivial ring and let $m,n$ be positive integers. Let $f: A^{n} \rightarrow A^{m}$ be an isomorphism of $A$-modules. Show this implies that $n=m$.
Well the solution is at follows:
Let $m$ be a maximal ideal of $A$. Then we have an induced isomorphism:
$(A/m) \otimes_{A} A^{n} \rightarrow (A/m) \otimes_{A} A^{m}$.
Now it says that this is an isomorphism between vector spaces of dimension $n$ and $m$.
My questions are:
1) How do we know that $(A/m) \otimes_{A} A^{n}$ is a vector space over $A/m$ ?
2) How do we know it has exactly dimension $n$?
Is there some "standard" theorem that tells us this? Can you please explain this in detail?
Thanks
| 1) Let $f: A \rightarrow B$ be any homomorphism of commutative rings and let $M$ be any $A$-module. Then one may view $B \otimes_A M$ as a $B$-module via $b \cdot (\sum_i b_i \otimes x_i) := \sum_i b b_i \otimes x_i$. This process of starting with an $A$-module and getting a $B$-module is called base change.
In your particular case the map is $f: A \rightarrow A/\mathfrak{m}$, so $(A/\mathfrak{m}) \otimes_A M$ is an $A/\mathfrak{m}$-module. But since $\mathfrak{m}$ is maximal, $A/\mathfrak{m}$ is a field, and a module over a field is simply a vector space over that field. This answers your first question.
2) You should establish the following two properties of tensor products:
(i) For any $A$-module $M$, there is a canonical isomorphism $M \otimes_A A \stackrel{\sim}{\rightarrow} M$. (Thus "changing the base from $A$ to $A$" has no effect.)
(ii) For any family $M_i$ of $A$-modules indexed by a set $I$ and any $A$-module $N$, there is a canonical isomorphism $(\bigoplus_{i \in I} M_i) \otimes_A N \stackrel{\sim}{\rightarrow} \bigoplus_{i \in I} (M_i \otimes_A N)$.
What you want follows easily from these facts (which are themselves ubiquitously useful). Try it yourself and ask again if you have further questions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Simple stats question-correlation coefficient Let's say we have two exams, each out of 50 points. The correlation rate between them is 0.75. If the teacher decides to add 10 points to the results of the first test, what will happen to the corr. rate?
The way I see it, the correlation should decrease, but by how much? Would it decrease by 1/5=20%? And the result would have been the same even if she subtracted 10 points from the first test, correct?
| If by correlation you mean the usual (Pearson) correlation coefficient, then nothing will happen. If we have two random variables $X$ and $Y$, then their correlation coefficient is $$\frac{E((X-\mu_X)(Y-\mu_Y))}{\sigma_X\sigma_Y}$$
where the symbols have their usual meaning. If $Z=Y+10$, then $Z-\mu_{Z}=Y-\mu_Y$ and $\sigma_{Z}=\sigma_Y$, so the correlation coefficient of $X$ and $Z$ is the same as the correlation coefficient of $X$ and $Y$.
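A quick numerical illustration of this invariance (the scores below are made up for the sketch, not taken from the question):

```python
# Check that adding a constant to one variable leaves the Pearson
# correlation coefficient unchanged.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test1 = [30, 35, 40, 42, 48]          # scores on the first exam
test2 = [28, 33, 37, 45, 46]          # scores on the second exam
r_before = pearson(test1, test2)
r_after = pearson([x + 10 for x in test1], test2)   # teacher adds 10 points
```

Shifting every first-exam score by 10 shifts the mean by exactly 10, so the deviations $X-\mu_X$ - and hence the correlation - are unchanged.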
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
the symbol for translation, transformation, or conversion What is the symbol for demonstrating syntactic conversion (transformation or translation)? For example, I want to show a calculation sequence, from $ \neg ( A \wedge B ) $ to $ \neg A \vee \neg B $. Now I just use $ \vdash $: $ \neg ( A \wedge B ) \vdash \neg A \vee \neg B $. Is there a suitable symbol to replace $ \vdash $?
Thank you.
Kejia
| The symbol $\Rightarrow$ (or simply =>)
would be my answer for a symbol for transformation, but I think it would be better if it were a single character symbol.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/27995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Finding the vector form of the general solution Ax = 0
Suppose that $x_1 = -1$, $x_2 = 2$, $x_3 = 4$, $x_4 = -3$ is a solution of a non-homogeneous linear system $A\mathbf{x} = \mathbf{b}$ and that the solution set of the homogeneous system $A\mathbf{x} =\mathbf{0}$ is given by the formulas:
$$\begin{align*}
x_1 &= -3r + 4s,\\
x_2 &= r - s,\\
x_3 &= r,\\
x_4 &= s.
\end{align*}$$
Find the vector form of the general solutions of Ax = 0 and Ax = b
I ended up with something like:
( -3 4)
( 1 -1)
( 1 0)
( 0 1)
where I separated the $r$ and $s$ values, I haven't tried to actually solve though because I'm kinda confused about what I'm suppose to do with this.
Your answer that the solutions to $A\vec{x}=0$ equal the span of $\begin{bmatrix}
-3\\
1\\
1\\
0
\end{bmatrix}$ and $
\begin{bmatrix}
4\\
-1\\
0\\
1
\end{bmatrix}$ is correct; now let's find the solutions to $A\vec{x}=\vec{b}$. If the kernel of a matrix is not just the zero vector (as in our case, where it has dimension two), then the matrix is non-invertible, and the consistent system $A\vec{x}=\vec{b}$ has infinitely many solutions. By linearity, if $A\vec{x}=0$, then $A(\vec{p}+\vec{x})=\vec{b}$ (where $\vec{p}$ is a vector such that $A\vec{p}=\vec{b}$), because $A(\vec{p}+\vec{x})=A\vec{p}+A\vec{x}=A\vec{p}+\vec{0}=\vec{b}+\vec{0}=\vec{b}$. Since you know that $
\begin{bmatrix}
-1\\
2\\
4\\
-3
\end{bmatrix}$ is a solution to $A\vec{x}=\vec{b}$, the set of solutions to this equation will consist of $\vec{p}$ plus any solution to $A\vec{x}=0$, which you found to be the span of $\begin{bmatrix}
-3\\
1\\
1\\
0
\end{bmatrix}$ and $
\begin{bmatrix}
4\\
-1\\
0\\
1
\end{bmatrix}$. Or, in other words, the set $\left \{\vec{p}+\vec{x}\mid A\vec{x}=0 \right \}$ is the set of solutions to $A\vec{x}=\vec{b}$. Or, represented numerically, solutions to $A\vec{x}=\vec{b}$ are of the form $
\underset{\text{A given solution to } A\vec{x}=\vec{b}}{\underbrace{\begin{bmatrix}
-1\\
2\\
4\\
-3
\end{bmatrix}}}$ $+$ $\underset{\text{span of Kernel } A} {\underbrace{r\begin{bmatrix}
-3\\
1\\
1\\
0
\end{bmatrix} + s
\begin{bmatrix}
4\\
-1\\
0\\
1
\end{bmatrix}}}$, as Arturo pointed out.
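The problem never exhibits $A$ itself, but the parametric formulas force the relations $x_1+3x_3-4x_4=0$ and $x_2-x_3+x_4=0$, so any matrix encoding those two rows has the stated kernel. The sketch below uses that (assumed, illustrative) choice of $A$ to verify the structure numerically:

```python
# A is not given in the problem; this 2x4 matrix is one compatible choice,
# read off from the parametric formulas (an assumption for illustration).
A = [[1, 0, 3, -4],
     [0, 1, -1, 1]]
p = [-1, 2, 4, -3]                    # the given particular solution
v1 = [-3, 1, 1, 0]                    # kernel basis vector (r-direction)
v2 = [4, -1, 0, 1]                    # kernel basis vector (s-direction)

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

b = matvec(A, p)                      # for this choice of A

def general_solution(r, s):
    # p + r*v1 + s*v2, the general solution of A x = b
    return [pi + r * a + s * c for pi, a, c in zip(p, v1, v2)]
```

Every choice of $r,s$ then lands on the same right-hand side $\vec b$, exactly as the argument above predicts.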
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Finding the optimum supply quantity when there is uncertainty in forecast This is actually a quiz that will be needed in a real life food stall! I need to decide how much stock to supply for my pumpkin soup stall. I sell each soup for $5$ dollars a cup, and let's say my ingredients cost is $1$ dollar. Therefore an outcome of under-forecasting is $4$ dollars per unit, while an outcome of over-forecasting is 1 dollar per unit.
My forecast isn't so simple, however. I'm guessing that the most probable number of sales is $150$ units, but I'm very unsure, so there's a normal distribution behind this prediction with a standard deviation of $30$ units.
This is harder than I expected.
Intuitively I would prepare ingredients for $180$ units, at which point I'd guess that the likely opportunity costs that would come with understocking would roughly meet the likely costs of overstocking. But given this is such a common dilemma, I thought that someone must be able to find a precise solution, and would then hopefully be able to explain it in layman's terms.
| If you get supplies for $n$ bowls and there are $m$ customers, your profit is $p=\begin {cases} 4n \text{ if } n \le m \\ 5m-n \text{ if } n \gt m \end {cases}$
The expectation of the profit is then $\sum pP(m)$ where $P(m)$ is the probability distribution of the number of customers. Unfortunately, the normal distribution is not normal - real-life distributions usually have much longer tails. In this case, you probably have greater than one chance in 3 million of being too sick that day and not selling anything.
That aside, your intuition that you should err on the side of having too much is correct. If you take the expected profit of preparing $n+1$ bowls and subtract the expected profit of preparing $n$ bowls, the difference is $p(n+1)-p(n)=4\sum_{m \gt n}P(m)-\sum_{m\le n}P(m)$. If you know (or have a guess at) $P(m)$ you can goal seek this in Excel or use your favorite 1D rootfinder to solve this.
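As a sketch of that root-finding step - assuming, with the asker, demand roughly Normal(150, 30), profit 4 per sold unit and loss 1 per unsold unit, and approximating the discrete sums by a normal CDF:

```python
# Marginal-profit rule from above: prepare another bowl while
# 4*P(demand > n) - P(demand <= n) > 0, i.e. while F(n) < 4/5.
import math

def normal_cdf(x, mu=150.0, sigma=30.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

n = 150
while 4.0 * (1.0 - normal_cdf(n)) - normal_cdf(n) > 0.0:
    n += 1
# n is now the first stock level at which one more bowl stops paying off
```

This stops at $n=176$ - the point where the demand CDF reaches $4/5$, i.e. roughly $150+0.84\cdot30$ - reassuringly close to the asker's intuitive 180.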
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
exponential equation $$\sqrt{(5+2\sqrt6)^x}+\sqrt{(5-2\sqrt6)^x}=10$$
So I have squared both sides and got:
$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2\sqrt{1^x}=100$$
$$(5-2\sqrt6)^x+(5+2\sqrt6)^x+2=100$$
I don't know what to do now
| You've already seen that $(5-2\sqrt{6})(5+2\sqrt{6})=1$ when you squared both sides. This means that $5-2\sqrt{6}=\frac{1}{5+2\sqrt{6}}$, so your last equation can be rewritten as $$\left(\frac{1}{5+2\sqrt6}\right)^x+(5+2\sqrt6)^x+2=100$$
or, letting $y=(5+2\sqrt{6})^x$,
$$\frac{1}{y}+y+2=100$$
so
$$1+y^2+2y=100y$$
which is quadratic in $y$. Solve this for $y$, then use that solution to solve for $x$.
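A numerical sketch of exactly these steps (illustrative): the quadratic is $y^2-98y+1=0$, its larger root is $49+20\sqrt6=(5+2\sqrt6)^2$, and we recover $x=\pm 2$.

```python
# Solve the quadratic in y = (5 + 2*sqrt(6))**x and recover x.
import math

r = 5 + 2 * math.sqrt(6)              # note (5+2*sqrt(6))*(5-2*sqrt(6)) = 1
# 1/y + y + 2 = 100  =>  y**2 - 98*y + 1 = 0
y1 = (98 + math.sqrt(98**2 - 4)) / 2
y2 = (98 - math.sqrt(98**2 - 4)) / 2
x1 = math.log(y1) / math.log(r)       # comes out as  2
x2 = math.log(y2) / math.log(r)       # comes out as -2

def lhs(x):
    # left-hand side of the original equation
    return math.sqrt(r**x) + math.sqrt((5 - 2 * math.sqrt(6))**x)
```

Both $x=2$ and $x=-2$ make the original left-hand side equal to 10.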
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 1
} |
A first order sentence such that the finite Spectrum of that sentence is the prime numbers The finite spectrum of a theory $T$ is the set of natural numbers such that there exists a model of that size. That is $Fs(T):= \{n \in \mathbb{N} | \exists \mathcal{M}\models T : |\mathcal{M}| =n\}$ . What I am asking for is a finitely axiomatized $T$ such that $Fs(T)$ is the set of prime numbers.
In other words in what specific language $L$, and what specific $L$-sentence $\phi$ has the property that $Fs(\{\phi\})$ is the set of prime numbers?
| The set of prime numbers and its complement are spectra. Consider the language $\mathcal{L}$ (which contains only predicates and equality) and the $\mathcal{L}$-sentence $A'$ I define in this answer (you will also find how the predicates I use next are defined). Now, let $B$ be the $\mathcal{L}$-sentence saying that a prime number is the largest element in our domain:
$$\exists z\,((\neg\exists w\, Rzw)\wedge(\neg\exists x\,\exists y\,(Rxz\wedge Ryz\wedge Qxyz))\wedge(\neg Oz)).$$
The set of prime numbers is the spectrum of $A'\wedge B$, and its complement, the set of composite numbers, is the spectrum of $A'\wedge\neg B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 3
} |
How to simplify trigonometric inequality? $| 3 ^ { \tan ( \pi x ) } - 3 ^ { 1 - \tan ( \pi x ) } | \geq 2$
| This is equivalent to $(y^2-3)^2\ge4y^2$ with $y=3^{\tan(\pi x)}$. The situation of $y^2$ with respect to the roots of the polynomial $(z-3)^2-4z=(z-9)(z-1)$ yields the sign of the polynomial. As a result, the inequality holds if $\tan(\pi x)\le0$ or $\tan(\pi x)\ge1$. That is, the fractional part of $x$ should not be $\frac12$ nor in $(0,\frac14)$.
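A quick numerical spot-check of that conclusion (sample points chosen away from the boundary cases $\tan(\pi x)\in\{0,1\}$, where floating point sits exactly on the threshold):

```python
# The inequality should hold exactly when tan(pi*x) <= 0 or tan(pi*x) >= 1,
# i.e. when the fractional part of x is outside (0, 1/4) (and != 1/2,
# where tan is undefined).
import math

def holds(x):
    t = math.tan(math.pi * x)
    return abs(3.0**t - 3.0**(1.0 - t)) >= 2.0

inside = [0.05, 0.1, 0.2, 1.15]                   # frac(x) in (0, 1/4)
outside = [0.26, 0.3, 0.49, 0.6, 0.75, 0.9, 2.3]  # frac(x) outside (0, 1/4)
```

The points in `inside` violate the inequality; the points in `outside` satisfy it.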
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
If f is surjective and g is injective, what is $f\circ g$ and $g\circ f$? Say I have $f=x^2$ (surjective) and $g=e^x$ (injective), what would $f\circ g$ and $g\circ f$ be? (injective or surjective?)
Both $f$ and $g : \mathbb{R} \to \mathbb{R}$.
I've graphed these out using Maple but I don't know how to write the proof, please help me!
| You should specify the domains and codomains of your functions.
I guess that
$f:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$, but there are some other natural definitions you could make.
You can write down the compositions explicitly:
$f\circ g:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0}$ has $x\mapsto (e^x)^2=e^{2x}$.
This is injective (since $x\mapsto e^x$ is injective) and not surjective, since 0 is not in the image.
$g\circ f:\mathbb{R}\rightarrow\mathbb{R}$ $x\mapsto e^{x^2}$ is neither injective, nor surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Picking a number nearest two thirds the average I am interested in the following game for $n$ players:
Each of the $n$ players chooses a random integer from the interval [0,100] and the average of these numbers (call it $a$) is computed. If we denote by S the set of chosen numbers that are greater than $\frac{2a}{3}$, then the player that picked min(S) is the winner of the game.
I am looking for strategies that would yield the highest expectation of winning the game.
One way would be to model the choices with some probabilistic distribution (which?) and compute the expected value of $\frac{2a}{3}$.
I would like to know if anyone happens to see any other tricks that could be useful in such a game.
| The problem with this game is that the only Nash equilibrium is 0. Given a prior distribution of answers from the other players, you should always guess lower. But if everyone does this, it changes the prior distribution until the "right" guess is again lower. It's more fun if you restrict play to [1,100] rather than [0,100].
In practice this isn't actually an exercise in mathematics, it's an exercise in psychology. The "obvious" average is of course 50. People who "understand" the puzzle will guess around 33. Of course, there are those who "understand" this first level of recursion and guess 22; and so on. The problem, for those who ACTUALLY understand the puzzle (because they studied it) is not to do the recursion, because it has no end. The problem is actually to guess: how well do you think your OPPONENTS understand recursion?
Experiments show that most people can figure out roughly 1.3 levels of recursion. So the typical average in this game is around 26-30, and you should guess 18-20. Repeating the experiment with different populations will give different results; in particular, people who have advanced training in recursion (computer programmers or some mathematicians, but in general NOT other kinds of engineers) will, on average, manage another level or so, making the "correct" play 12-13. But really, if all the players understand the game, the only correct play is 0.
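The recursion described here is easy to tabulate: each level of reasoning best-responds with two-thirds of the previous level's guess (a sketch of the iteration only - the exact payoff rule doesn't matter for it):

```python
# Level-k reasoning starting from the naive average of 50:
# each level guesses two-thirds of the previous level's guess.

def guess_at_level(k, start=50.0):
    g = start
    for _ in range(k):
        g *= 2.0 / 3.0
    return g

levels = [round(guess_at_level(k), 1) for k in range(5)]
# levels -> [50.0, 33.3, 22.2, 14.8, 9.9]
```

The sequence decreases geometrically toward 0, the game's only Nash equilibrium.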
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Are dualizing modules stable under localization Let $(R,m,k)$ be a (noetherian) regular local ring of depth=dimension $d$, and let $D$ be a dualizing module for $R$ (say, the injective envelope of $R/m$).
Then is $D_p$ dualizing for $R_p$ for any prime $p$ of $R$ (more generally, if $R$ is Gorenstein and $p$ is a prime such that $R_p$ is also Gorenstein)? If it is true, could I have a reference for a proof?
Bump!
I hope by dualizing module you mean canonical module. Please check Theorem 3.3.5 on page 110 of the book Cohen-Macaulay Rings by Bruns and Herzog. Please do let me know if you still have difficulties.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding subspaces for a nonhomogeneous system
Show that the solution vectors of a consistent nonhomogeneous system of m linear equations in n unknowns do not form a subspace of $\mathbb{R}^n$.
I'm not really sure how to go about this problem. I know that I'm supposed to check whether the vectors form a subspace under addition and scalar multiplication, but I'm not really sure how to set it up.
| If we have a consistent nonhomogeneous system $Ax = y$ with solutions $x_0$ and $x_1$, then $A(x_0 - x_1) = Ax_0 - Ax_1 = y - y = 0$. But if we assume that the solutions to the nonhomogeneous system form a subspace, then $x_0 - x_1$ must also be a solution to the nonhomogeneous system, thus $0 = A(x_0 - x_1) = y$. This is clearly false, so the solutions to the nonhomogeneous system do not form a subspace.
Or, even easier, $0$ must be an element of any subspace (since $x - x = 0$) but $A0 = 0 \ne y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
kernel maximal ideal If I have a homomorphism $\phi : R \rightarrow S$ between integral domains, how can I show that if the kernel is non-zero then it is a maximal ideal in R?
| This is true for a class $\:\mathfrak R\:$ of rings iff every ring in $\:\mathfrak R\:$ has dimension at most one, i.e. prime ideals are minimal or maximal. Equivalently, by factoring out by the (necessarily) prime kernel, it is true iff every domain in $\:\mathfrak R\:$ is a field. Examples of such one-dimensional classes of rings are Dedekind domains (e.g. PIDs), and Artinian rings (see this prior question).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Bijection between an open and a closed interval Recently, I answered to this problem:
Given $a<b\in \mathbb{R}$, find explicitly a bijection $f(x)$ from
$]a,b[$ to $[a,b]$.
using an "iterative construction" (see below the rule).
My question is: is it possible to solve the problem finding a less exotic function?
I mean: I know such a bijection cannot be monotone, nor globally continuous; but my $f(x)$ has a lot of jumps... Hence, can one do without so many discontinuities?
W.l.o.g. assume $a=-1$ and $b=1$ (the general case can be handled by translation and rescaling).
Let:
(1) $X_0:=]-1,-\frac{1}{2}] \cup [\frac{1}{2} ,1[$, and
(2) $f_0(x):=\begin{cases}
-x-\frac{3}{2} &\text{, if } -1<x\leq -\frac{1}{2} \\ -x+\frac{3}{2} &\text{,
if } \frac{1}{2}\leq
x<1\\ 0 &\text{, otherwise} \end{cases}$,
so that the graph of $f_0(x)$ is made of two segments (parallel to the line $y=x$) and one segment laying on the $x$ axis; then define by induction:
(3) $X_{n+1}:=\frac{1}{2} X_n$, and
(4) $f_{n+1}(x):= \frac{1}{2} f_n(2 x)$
for $n\in \mathbb{N}$ (hence $X_n=\frac{1}{2^n} X_0$ and $f_n=\frac{1}{2^n} f_0(2^n x)$).
Then the function $f:]-1,1[\to \mathbb{R}$:
(5) $f(x):=\sum_{n=0}^{+\infty} f_n(x)$
is a bijection from $]-1,1[$ to $[-1,1]$.
Proof: i. First of all, note that $\{ X_n\}_{n\in \mathbb{N}}$ is a pairwise disjoint covering of $]-1,1[\setminus \{ 0\}$. Moreover the range of each $f_n(x)$ is $f_n(]-1,1[)=[-\frac{1}{2^n}, -\frac{1}{2^{n+1}}[\cup \{ 0\} \cup ]\frac{1}{2^{n+1}}, \frac{1}{2^n}]$.
ii. Let $x\in ]-1,1[$. If $x=0$, then $f(x)=0$ by (5). If $x\neq 0$, then there exists only one $\nu\in \mathbb{N}$ s.t. $x\in X_\nu$, hence $f(x)=f_\nu (x)$. Therefore $f(x)$ is well defined.
iii. By i and ii, $f(x)\lesseqgtr 0$ for $x\lesseqgtr 0$ and the range of $f(x)$ is:
$f(]-1,1[)=\bigcup_{n\in \mathbb{N}} f_n(]-1,1[) =[-1,1]$,
therefore $f(x)$ is surjective.
iv. On the other hand, if $x\neq y \in ]-1,1[$, then: if there exists $\nu \in \mathbb{N}$ s.t. $x,y\in X_\nu$, then $f(x)=f_\nu (x)\neq f_\nu (y)=f(y)$ (for $f_\nu (x)$ restricted to $X_\nu$ is injective); if $x\in X_\nu$ and $y\in X_\mu$, then $f(x)=f_\nu (x)\neq f_\mu(y)=f(y)$ (for the restriction of $f_\nu (x)$ to $X_\nu$ and of $f_\mu(x)$ to $X_\mu$ have disjoint ranges); finally if $x=0\neq y$, then $f(x)=0\neq f(y)$ (because of ii).
Therefore $f(x)$ is injective, hence a bijection between $]-1,1[$ and $[-1,1]$. $\square$
| For background to this 'turn-the-crank' technique see this answer.
Let $A = (0,1)$ and $B = [0,1]$. Let $f: A \to B$ be the inclusion mapping $f(x) = x$ and $g: B \to A$ be given by
$$ g(x) = \frac{x}{2} + \frac{1}{4}$$
We can build our bijective mapping using these two injective functions (thanks to Julius König of Schröder–Bernstein-Cantor theorem fame).
The number $0 \in B = [0,1]$ is a B-stopper; let
$$ D_0 = \{\, \frac{1}{2} - \frac{1}{2^n} \, | \, n \text{ is a positive integer}\,\}$$
The number $1 \in B = [0,1]$ is a B-stopper; let
$$ D_1 = \{\, \frac{1}{2} + \frac{1}{2^n} \, | \, n \text{ is a positive integer}\,\}$$
Define $h: A \to B$ as follows:
$$h(x) = \left\{
\begin{array}{ll}
\frac{1}{2} - \frac{1}{2^{n-1}} & \mbox{if } x = \frac{1}{2} - \frac{1}{2^{n}} \, \land \, n \gt 1\\
\frac{1}{2} + \frac{1}{2^{n-1}} & \mbox{if } x = \frac{1}{2} + \frac{1}{2^{n}} \, \land \, n \gt 1\\
x & \mbox{otherwise}
\end{array}
\right.$$
The function $h: A \to B$ is a bijection with $h(\frac{1}{2}) = \frac{1}{2}$.
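For concreteness, here is a direct (illustrative) implementation of $h$; the special points $\frac12\pm\frac1{2^n}$ are exact dyadic floats, so the equality tests below are reliable:

```python
# The points 1/2 - 1/2**n and 1/2 + 1/2**n (n > 1) are shifted one step
# outward along their sequence; every other point of (0, 1) is fixed.

def h(x):
    assert 0.0 < x < 1.0
    for n in range(2, 50):            # 2**-50 is below any gap tested here
        if x == 0.5 - 2.0**-n:
            return 0.5 - 2.0**-(n - 1)
        if x == 0.5 + 2.0**-n:
            return 0.5 + 2.0**-(n - 1)
    return x
```

For example $h(1/4)=0$ and $h(3/4)=1$, so both endpoints of $[0,1]$ are reached, while generic points such as $0.3$ are fixed.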
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 3
} |
Inductive Proof of String Reversal I am trying to inductively prove that for any string s, the reverse of the reverse of string s is string s.
| The case $n = 1$ is trivial. For $n > 1$, assume that the statement holds for all $i < n$.
Let $s = as'$ be a string of length $n$ where $a$ has length 1 and $s'$ has length $n - 1$. The key is to observe that rev($ab$) = rev($b$)rev($a$). Therefore rev(rev($s$)) = rev(rev($as'$)) = rev(rev($s'$)rev($a$)) = rev(rev($a$))rev(rev($s'$)) = $as'$ = $s$, thus the statement holds for strings of length $n$. We have now proved by induction that the reverse of the reverse of a string $s$ is $s$.
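The recursive definition mirrors the induction step directly - rev($as'$) = rev($s'$)rev($a$) - and checking rev(rev($s$)) = $s$ on a few strings is a quick sanity test of the proof:

```python
# Recursive reversal, structured exactly like the induction:
# a string of length <= 1 is its own reverse; otherwise peel off the
# first character and apply rev(a + rest) = rev(rest) + rev(a).

def rev(s):
    if len(s) <= 1:
        return s
    a, rest = s[0], s[1:]
    return rev(rest) + a
```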
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What's the goal of mathematics? Are we just trying to prove every theorem or find theories which lead to a lot of creativity or what?
I've already read G. H. Hardy Apology but I didn't get an answer from it.
| The question you've asked is actually a philosophical question, so it requires a philosophical answer. If we look abstractly at what mathematicians do, they are providing "scientific explanations" of pieces of mathematics, so:
The goal is to provide a satisfactory scientific explanation of mathematics.
There is a large literature on Scientific Explanation in philosophy. There are three nice articles in the Stanford Encyclopedia of Philosophy to provide a start:
The first, on scientific explanation in mathematics in particular:
http://plato.stanford.edu/entries/mathematics-explanation/
The second, on the philosophy of mathematics generally, which gives a scholarly treatment of many of the concepts in prior posts on this question:
http://plato.stanford.edu/entries/philosophy-mathematics/
The third, on scientific explanation more generally:
http://plato.stanford.edu/entries/scientific-explanation/
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60",
"answer_count": 18,
"answer_id": 13
} |
Number of Permutations in a Bin Packing Problem I'm having a discussion with a co-worker over the number of permutations in a bin packing problem, as follows.
There are two bins each of which can hold 6 cu ft. A package can be from 1 - 6 cu feet, there can be from 1 - 12 packages. How many permutations are possible?
It's been a great many years since either of us has done any formal math, but it seems to me the problem space isn't all that large due to the constraints, though the problem is NP-complete. We found a few web pages talking about different approaches to bin packing, but nothing really on how to determine the number of possible permutations.
| In your problem, each bin is filled independently of the other. Therefore, you need to find out the number of ways you can fill up one bin and square that number.
If $n$ is the capacity of one bin, the number of distinct ways of filling it with packages is given by the partition number $p(n)$. Check out these articles for more details on partition functions
http://en.wikipedia.org/wiki/Partition_%28number_theory%29
http://mathworld.wolfram.com/Partition.html
For your question $n = 6$. Therefore, the number of possible arrangements is $p(n)^2 = 11^2 = 121$.
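A short recursion for the partition numbers mentioned above, counting partitions of $n$ into parts of size at most `largest` (a sketch; here every part automatically fits the 1-6 cu ft package range, since the bin holds 6):

```python
# p(n) via the standard two-argument recursion: partitions(n, k) counts
# ways to write n as a sum of parts each at most k.
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, largest):
    if n == 0:
        return 1
    return sum(partitions(n - part, part)
               for part in range(1, min(n, largest) + 1))

p6 = partitions(6, 6)                 # 11 ways to fill one 6 cu ft bin
arrangements = p6 ** 2                # two independent bins -> 121
```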
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Exercise regarding Poisson processes and the uniform distribution This is Exercise 5.4.5 in Introduction to Stochastic Modeling (4th edition) page 257:
Customers arrive at a certain facility according to a Poisson process of rate $\lambda$. Suppose that it is known that five customers arrived in the first hour. Each customer spends a time in the store that is a random variable, exponentially distributed with parameter $\alpha$ and independent of other customer times, and then departs. What is the probability that the store is empty at the end of the first hour?
I have no clue how to model this situation or proceed. Help would be much appreciated. Thanks so much.
| First: You probably meant to write $[1 - (1 - e^{ - \alpha } )/\alpha ]^5$ as the solution to this exercise, right?
Hint 1: The arrival times of the five customers can be simulated as five i.i.d. uniform$[0,1]$ rv's. That is, if you place five i.i.d. uniform rv's on $[0,1]$, the points correspond to arrival times of five customers.
Hint 2: If $Z_1,\ldots,Z_5$ are i.i.d. rv's with distribution function $F$, how can you express the distribution function of $\max \{ Z_1 , \ldots ,Z_5 \}$ in terms of $F$?
Hint 3: You'll have to use the law of total probability, in order to calculate a certain probability. In this context, it may be useful to note that $1-U$ and $U$ are identically distributed, for $U$ a uniform$[0,1]$ random variable.
EDIT (further hints, in response to the OP's request). Suppose that $Z = U + Y$, where $U$ and $Y$ are independent uniform$[0,1]$ and exponential$(\alpha)$ random variables, respectively. First note that
$$
{\rm P}(Z \le 1) = {\rm P}(U + Y \le 1) = {\rm P}(Y \le 1 - U) = {\rm P}(Y \le U),
$$
where the last equality (which is actually not essential) follows from the fact that $U$ and $1-U$ are identically distributed. Now, since $U$ has constant density $f(u)=1$ for $u \in [0,1]$, the law of total probability gives
$$
{\rm P}(Y \le U) = \int_0^1 {{\rm P}(Y \le U|U = u)1\,{\rm d}u} = \int_0^1 {{\rm P}(Y \le u)\,{\rm d}u} = \int_0^1 {(1 - e^{ - \alpha u} )\,{\rm d}u} .
$$
Calculate the integral on the right-hand side, and recall Hint 2.
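As a sanity check on the resulting formula $[1-(1-e^{-\alpha})/\alpha]^5$, here is a small Monte Carlo sketch with the illustrative choice $\alpha=1$ (where the formula collapses to $e^{-5}$):

```python
# Simulate: five arrival times ~ Uniform[0,1] (Hint 1) and five i.i.d.
# Exp(alpha) service times; the store is empty at time 1 iff every
# customer's arrival time plus service time is at most 1.
import math
import random

random.seed(0)
alpha = 1.0
exact = (1.0 - (1.0 - math.exp(-alpha)) / alpha) ** 5

trials = 200_000
empty = 0
for _ in range(trials):
    if all(random.random() + random.expovariate(alpha) <= 1.0
           for _ in range(5)):
        empty += 1
estimate = empty / trials
```

The simulated frequency agrees with the closed form to within Monte Carlo noise.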
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Amount of chips required to ensure that at least 90% of the time 12 are nondefective Hi, does anyone have any suggestions on how to solve this problem?
To construct a circuit a student needs, among others, 12 chips of a certain type.
The student knows that 4% of these chips are defective.
How many chips have to be provided so that, with a probability of not less than 0.9, the student has a sufficient number of nondefective chips in order to be able to construct the circuit?
| Hint: The distribution of the number of good chips in the sample is binomial. You'll probably just need a few more than 12, so try n=12, 13, 14, ...
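Carrying the hint through numerically (each chip is good with probability $0.96$; we search for the smallest $n$ with $P(X\ge 12)\ge 0.9$ for $X\sim\text{Binomial}(n,0.96)$):

```python
# Tail probability of a Binomial(n, p): P(X >= k).
import math

def prob_at_least(n, k, p=0.96):
    return sum(math.comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

n = 12
while prob_at_least(n, 12) < 0.9:
    n += 1
```

So 13 chips suffice: with $n=13$ the probability of at least 12 good ones is about $0.907$, whereas 12 chips give only about $0.613$.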
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve a problem with chessboard I am beginning math student and encountered following task which I do not know how to solve:
There is a chessboard, i.e. 8x8 tiles. In the top left corner there is a pawn. He cannot go diagonally and he can only move down and right.
1) How many steps will he need to reach the bottom right corner the shortest way?
2) How many of these ways exist?
Thank you in advance
| First you should think - Is there a way which is not the shortest?
Now, He will have to move 7 steps right and 7 steps down. In how many ways can you construct such a path?
EDIT: After having come to the conclusion that the number of such paths is the number of combinations of 14 steps, 7 of which are down and the other 7 of which are right, we reduce the problem to the following: How many sequences of 7 "down"s and 7 "right"s are there?
How do we arrange such a sequence? If we know exactly on which 7 of the 14 steps the pawn went down, then we know that on the rest of the steps (that is, the other 7), he went right. So the problem is equivalent to finding in how many ways we can choose 7 places out of an ordered sequence of 14 places (those chosen ones will be the "down" steps). This number is denoted by $\binom{14}{7}$, or in general if you have $n$ objects out of which you want to choose $k$, then the number of such choices is denoted by $\binom{n}{k}$.
Now let's compute the value of $\binom{14}{7}$. You have 14 options for the first choice (since you can take either one of the 14), 13 options for the second, and so on until you choose the 7th one out of the remaining 8 options. To conclude, this equals $14\cdot 13\cdot 12\cdot 11\cdot 10\cdot 9\cdot 8$, but then you also have to divide by the number of permutations of the 7 chosen steps, since it is not important whether you, say, take the first step on the first choice and the second on the second choice, or the other way around, so you have $\frac{14!}{7!7!}$ (Why?) - Or in the general case, $\frac{n!}{k!(n-k)!}$.
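A quick cross-check of the count (a sketch; the grid-based dynamic program is my addition, not part of the answer):

```python
from math import comb

# paths[i][j] = number of right/down paths from the top-left tile to tile (i, j);
# each count is the sum of the counts from above and from the left.
paths = [[1] * 8 for _ in range(8)]
for i in range(1, 8):
    for j in range(1, 8):
        paths[i][j] = paths[i - 1][j] + paths[i][j - 1]

print(paths[7][7], comb(14, 7))  # both 3432
```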
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/28975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Values of $\sum_{n=0}^\infty x^n$ and $\sum_{n=0}^N x^n$ Why does the following hold:
\begin{equation*}
\displaystyle \sum\limits_{n=0}^{\infty} 0.7^n=\frac{1}{1-0.7} = 10/3\quad ?
\end{equation*}
Can we generalize the above to
$\displaystyle \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ ?
Are there some values of $x$ for which the above formula is invalid?
What about if we take only a finite number of terms? Is there a simpler formula?
$\displaystyle \sum_{n=0}^{N} x^n$
Is there a name for such a sequence?
This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions.
and here: List of abstract duplicates.
| If you expand your summation you get the series $$1+0.7+0.7^2+\dots$$ As this is a geometric series (see http://en.wikipedia.org/wiki/Geometric_series), $$\sum_{n=0}^{\infty}{0.7^n}=\frac{1}{1-0.7}$$
Or:
$S=1+x+x^2+\dots$
$xS=x+x^2+\dots=S-1$
Now take
$S-xS=1$
$S(1-x)=1
\implies S=\frac{1}{1-x},$
valid for $|x|<1$ (for $|x|\ge 1$ the series diverges, so the formula fails there); here your $x=0.7$.
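A numerical sketch of both facts asked about in the question — the finite geometric sum $\sum_{n=0}^{N} x^n = \frac{1-x^{N+1}}{1-x}$ and its limit $\frac{1}{1-x}$ for $|x|<1$:

```python
x, N = 0.7, 50
partial = sum(x**n for n in range(N + 1))

# Finite geometric sum formula, valid for any x != 1
closed_finite = (1 - x**(N + 1)) / (1 - x)
assert abs(partial - closed_finite) < 1e-12

# For |x| < 1 the x^(N+1) term dies off, leaving 1/(1 - x) = 10/3
assert abs(partial - 1 / (1 - x)) < 1e-6
print(partial)
```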
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131",
"answer_count": 6,
"answer_id": 0
} |
Why determinant of a 2 by 2 matrix is the area of a parallelogram? Let $A=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$.
How could we show that $ad-bc$ is the area of a parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+b, c+d)$?
Are the areas of the following parallelograms the same?
$(1)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+c, b+d)$.
$(2)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+b, c+d)$.
$(3)$ parallelogram with vertices $(0, 0),\ (a, b),\ (c, d),\ (a+d, b+c)$.
$(4)$ parallelogram with vertices $(0, 0),\ (a, c),\ (b, d),\ (a+d, b+c)$.
Thank you very much.
| Also, if the coordinates of any shape are transformed by a matrix, the area will be changed by a scale factor equal to the absolute value of the determinant.
Since the determinant is the scale factor when the unit square is transformed to a parallelogram, it will be the scale factor when any parallelogram with the origin as a vertex is transformed to any other parallelogram because the inverse matrix will transform a parallelogram back into a square and has reciprocal determinant. If there is no inverse, the determinant is 0 and the transformed shape has no area.
Any triangle with the origin as a vertex can be drawn as half of a parallelogram including the origin. The area of any triangle not including the origin is the area of a triangle containing the origin minus the areas of two triangles not containing the origin. The area of any shape can be split into triangles, although an infinite number will be required if it has curved sides.
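A small numerical sketch (the shoelace-formula check and the sample values are my choices): parallelograms (1) and (2) from the question have the same area because $\det(A) = \det(A^T)$.

```python
def shoelace(pts):
    # Polygon area via the shoelace formula (vertices in order around the boundary)
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

a, b, c, d = 3, 1, -1, 2
det = a * d - b * c                                  # = 7
rows = [(0, 0), (a, b), (a + c, b + d), (c, d)]      # parallelogram (1): row vectors
cols = [(0, 0), (a, c), (a + b, c + d), (b, d)]      # parallelogram (2): column vectors
assert shoelace(rows) == shoelace(cols) == abs(det)  # both have area 7
```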
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80",
"answer_count": 11,
"answer_id": 4
} |
Integral $\int{\sqrt{25 - x^2}dx}$ I'm trying to find $\int{\sqrt{25 - x^2} dx}$
Now I know that $\int{\frac{dx}{\sqrt{25 - x^2}}}$ would have been $\arcsin{\frac{x}{5}} + C$, but this integral I'm asking about has the rooted term in the numerator.
What are some techniques to evaluate this indefinite integral?
| Since you already know that
$$\int{\frac{dx}{\sqrt{25 - x^2}}}=\arcsin{\frac{x}{5}} + C$$
you can actually skip the trigonometric substitution part and solve by partial integration:
$$\begin{array}{lcl}\int{\sqrt{25 - x^2} dx} & = & x\sqrt{25 - x^2} - \int{\frac{x (-2x)dx}{2\sqrt{25 - x^2}}} \\
& = & x\sqrt{25 - x^2} - \int{\frac{-x^2 dx}{\sqrt{25 - x^2}}} \\
& = & x\sqrt{25 - x^2} - \int{\frac{(25-x^2)\, dx}{\sqrt{25 - x^2}}} + \int{\frac{25\, dx}{\sqrt{25 - x^2}}} \\
& = & x\sqrt{25 - x^2} - \int{\sqrt{25 - x^2} dx} + 25\arcsin{\frac{x}{5}} + C \; .
\end{array}$$
Or after rearranging
$$\int{\sqrt{25 - x^2} dx} = \frac{1}{2} x\sqrt{25 - x^2} + \frac{25}{2}\arcsin{\frac{x}{5}} + C \; .$$
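A numerical spot-check of the antiderivative (a sketch; the midpoint Riemann sum and the interval $[0,3]$ are my choices):

```python
import math

def F(x):
    # Antiderivative found above: (1/2) x sqrt(25 - x^2) + (25/2) arcsin(x/5)
    return 0.5 * x * math.sqrt(25 - x**2) + 12.5 * math.asin(x / 5)

a, b, n = 0.0, 3.0, 100000
h = (b - a) / n
riemann = h * sum(math.sqrt(25 - (a + (i + 0.5) * h)**2) for i in range(n))
assert abs(riemann - (F(b) - F(a))) < 1e-6
print(riemann)
```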
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Finding maxima and minima of a function A couple problems are giving me trouble in finding the relative maxima/minima of the function. I think the problem stems from me possibly not finding all of the critical numbers of the function, but I don't see what I missed.
Given $f(x)= 5x + 10 \sin x$, I calculated the derivative as $5 + 10 \cos x$, and found the first critical number by this work:
$$5+ 10 \cos x=0$$
$$5-5+10 \cos x= 0-5 \Rightarrow 10 \cos x= -5$$
$$\frac{10 \cos x}{10}= \frac{-5}{10}\Rightarrow \cos x= -\frac{1}{2}$$
$$x= \arccos(-\frac{1}{2}) = \text{First critical number is }\frac{2\pi}{3}$$
That gave me the maxima of the formula, since $$f(\frac{2\pi}{3})= 5(\frac{2\pi}{3})+10 \sin(\frac{2\pi}{3})= \frac{10\pi}{3}+5\sqrt3$$
However, I need the other critical number to calculate the minima. Should I look for the value of $\arccos(\frac{1}{2})$?
| $\cos(x)=-1/2$ if and only if there exists an integer $n$ such that $x=2n\pi+2\pi/3$ or $x=2n\pi+4\pi/3$. Hence any of these real numbers $x$ may be (and in fact, is) a relative maximum or a relative minimum of $f$.
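A small sketch confirming the two families of critical points in one period, classified via the second derivative (the classification step is my addition):

```python
import math

# f(x) = 5x + 10 sin x, so f'(x) = 5 + 10 cos x and f''(x) = -10 sin x
def fprime(x):   return 5 + 10 * math.cos(x)
def fsecond(x):  return -10 * math.sin(x)

x_max, x_min = 2 * math.pi / 3, 4 * math.pi / 3
assert abs(fprime(x_max)) < 1e-12 and abs(fprime(x_min)) < 1e-12
assert fsecond(x_max) < 0   # relative maximum at 2*pi/3 (+ 2*n*pi)
assert fsecond(x_min) > 0   # relative minimum at 4*pi/3 (+ 2*n*pi)
```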
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Closed forms of sums $f(a)+f(a+d)+\cdots+f(a+nd)$ with $f$ sine, cosine or tangent is/are there a closed form for
$\sin{(a)}+\sin{(a+d)}+\cdots+\sin{(a+n\,d)}$
$\cos{(a)}+\cos{(a+d)}+\cdots+\cos{(a+n\,d)}$
$\tan{(a)}+\tan{(a+d)}+\cdots+\tan{(a+n\,d)}$
$\sin{(a)}+\sin{(a^2)}+\cdots+\sin{(a^n)}$
$\sin{(\frac{1}{a})}+\sin{(\frac{1}{a+d})}+\cdots+\sin{(\frac{1}{a+n\,d})}$
| The two first sums $S$ and $C$ are the imaginary part and the real part of the complex sum
$$
\mathrm{e}^{\mathrm{i}a}+\mathrm{e}^{\mathrm{i}(a+d)}+\cdots+\mathrm{e}^{\mathrm{i}(a+nd)}.
$$
This is nothing but a geometric sum with first term $\mathrm{e}^{\mathrm{i}a}$ and argument $z=\mathrm{e}^{\mathrm{i}d}$, thus
$$
C+\mathrm{i}S=\mathrm{e}^{\mathrm{i}a}\frac{z^{n+1}-1}{z-1}=\mathrm{e}^{\mathrm{i}(a+(nd/2))}\frac{\sin((n+1)d/2)}{\sin(d/2)},
$$
for every $z\ne1$, that is, for every $d$ not in $2\pi\mathbb{Z}$. Using the shorthand $d=2b$ (hence $\sin(b)\ne0$), one gets finally
$$
C=\cos(a+nb)\frac{\sin((n+1)b)}{\sin(b)},
\quad
S=\sin(a+nb)\frac{\sin((n+1)b)}{\sin(b)}.
$$
If $d$ is in $2\pi\mathbb{Z}$, $C=(n+1)\cos(a)$ and $S=(n+1)\sin(a)$ (and these are the limits of the formulas valid when $d$ is not in $2\pi\mathbb{Z}$).
I do not think simple closed form formulas exist for the three other sums you are interested in.
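A quick numerical check of the two closed forms above (a sketch; the sample values $a$, $d$, $n$ are arbitrary choices of mine, with $d \notin 2\pi\mathbb{Z}$):

```python
import math

a, d, n = 0.3, 0.7, 10
b = d / 2   # the shorthand d = 2b used above

S_direct = sum(math.sin(a + k * d) for k in range(n + 1))
C_direct = sum(math.cos(a + k * d) for k in range(n + 1))

S_closed = math.sin(a + n * b) * math.sin((n + 1) * b) / math.sin(b)
C_closed = math.cos(a + n * b) * math.sin((n + 1) * b) / math.sin(b)

assert abs(S_direct - S_closed) < 1e-12
assert abs(C_direct - C_closed) < 1e-12
```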
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
two examples in analysis I want to ask for two examples in the following cases:
1) Given a bounded sequence $\{a_n\}$, $$\lim_{n\to \infty}{(a_{n+1}-a_n)}=0$$ but $\{a_n\}$ diverges.
2) A function defined on real-line $f(x)$'s Taylor series converges at a point $x_0$ but does not equal to $f(x_0)$.
Thanks for your help.
Edit
in 2), I was thinking of the Taylor series of the function $f$ at the point $x_0$.
| 2) Define the function $f$ as follows: $$f(x)=e^{-1/x^2}\ \text{if } x>0$$ and $$f(x)=0\ \text{if } x\leq 0.$$ Now consider the Taylor series centered at zero. This provides an example of a non-analytic $C^\infty$ function. This taylor series converges everywhere, but is identically zero, and $f(x)$ is not identically zero.
Hope that helps,
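Numerically, the reason every Taylor coefficient at $0$ vanishes is that $e^{-1/x^2}$ decays faster than any power of $x$ near $0$; a quick sketch:

```python
import math

def f(x):
    return math.exp(-1.0 / x**2) if x > 0 else 0.0

# f(x)/x^k stays tiny near 0 for every k, so every derivative at 0 is 0
# and the Taylor series at 0 is identically zero -- yet f is not.
for k in (1, 5, 20):
    assert f(0.1) / 0.1**k < 1e-20
assert f(-1.0) == 0.0 and f(1.0) > 0.0
```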
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
The value of improper integral $x\exp(-\lambda x^2)\, dx$ The integral in question is $\int_{-\infty}^{+\infty} x {e}^{-\lambda x^2}dx$ where $x$ and $\lambda$ both are real numbers.
My solution:
$\int_{-\infty}^{+\infty} x {e}^{-\lambda x^2}dx = \int_{-\infty}^{0} x {e}^{-\lambda x^2}dx +\int_{0}^{+\infty} x {e}^{-\lambda x^2}dx =$
$\lim_{a\rightarrow -\infty}\int_{a}^{0} x {e}^{-\lambda x^2}dx +\lim_{b\rightarrow +\infty}\int_{0}^{b} x {e}^{-\lambda x^2}dx = \begin{vmatrix}
u = -\lambda x^2\\
du = -\lambda 2xdx\\
\begin{matrix}
x & 0 & a & b\\
u & 0 & -\lambda a^2 & -\lambda b^2
\end{matrix}
\end{vmatrix} =$
$-\frac{1}{2\lambda}\left( \lim_{a\rightarrow -\infty}\int_{-\lambda a^2}^{0} {e}^{u}du +\lim_{b\rightarrow +\infty}\int_{0}^{-\lambda b^2} {e}^{u}du \right) =$
$-\frac{1}{2\lambda}\left( \lim_{a\rightarrow -\infty}\left(1 - {e}^{-\lambda a^2} \right) +\lim_{b\rightarrow +\infty}\left({e}^{-\lambda b^2} - 1 \right) \right) =
-\frac{1}{2\lambda}\left( \left(1 - 0 \right) +\left(0 - 1 \right) \right) = 0$
1) Is this solution correct?
2) Suppose that real function of real argument $f\left(x\right)$ is odd and both limits of $f\left(x\right)$ as $x$ approaches $\pm\infty$ are finite values. Is it enough to say that $\int_{-\infty}^{+\infty}f\left(x\right)dx$ is equal to 0?
| 1. Yes (assuming $\lambda > 0$, so that $e^{-\lambda a^2} \to 0$ as computed; for $\lambda \le 0$ the integrals diverge).
2. This only works if $f$ is integrable. A counterexample would be $f(x)=x/(1+x^2)$, whose integral doesn't exist. What you need to know is that $\int_a^b f(x)dx\to 0$ as $a$ and $b$ go to infinity, and this will hold if $f$ is integrable.
However, you could say that the Cauchy principal value of the integral is $0$ when $f$ is odd.
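The counterexample can be seen numerically: for $f(x)=x/(1+x^2)$ the symmetric sums vanish (the principal value), yet the one-sided integral $\int_0^b f\,dx = \tfrac12\ln(1+b^2)$ grows without bound. A sketch (my check, not part of the answer):

```python
f = lambda x: x / (1 + x**2)

def midpoint(g, a, b, n=100000):
    # Midpoint Riemann sum of g over [a, b]
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

for b in (10.0, 100.0):
    assert abs(midpoint(f, -b, b)) < 1e-6          # symmetric: cancels (PV = 0)
assert midpoint(f, 0.0, 1000.0) > midpoint(f, 0.0, 10.0) + 2  # one-sided: grows
```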
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Recursion theory text, alternative to Soare I want/need to learn some recursion theory, roughly equivalent to parts A and B of Soare's text. This covers "basic graduate material", up to Post's problem, oracle constructions, and the finite injury priority method. Is there an alternate, more beginner friendly version of this material in another text? (At a roughly senior undergraduate or beginning graduate level of difficulty, although I wouldn't necessarily mind easier.)
For example, one text I've seen is S. Barry Cooper's book, Computability Theory. Is there anybody that has read both this and Soare, and could tell me about their similarities/differences?
| As of 2012, Rebecca Weber has published Computability Theory via the American Mathematical Society. The book covers Post's problem, oracle constructions, and the finite injury priority method, as well as the arithmetical hierarchy and current areas of research. I am currently using it for a senior undergraduate computability theory course, and for that purpose it works quite well. I would say that using Soare's book as a reference for more detailed proofs has been just short of a necessity to fully understand the material, however.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 6
} |
Self-Contained Proof that $\sum\limits_{n=1}^{\infty} \frac1{n^p}$ Converges for $p > 1$ To prove the convergence of the p-series
$$\sum_{n=1}^{\infty} \frac1{n^p}$$
for $p > 1$, one typically appeals to either the Integral Test or the Cauchy Condensation Test.
I am wondering if there is a self-contained proof that this series converges which does not rely on either test.
I suspect that any proof would have to use the ideas behind one of these two tests.
| Let $S(n) = \sum_{k=1}^{n} \frac{1}{k^p}$
then
$S(2n)$ = $S(n)$ + $\frac{1}{{(n+1)}^p} $ + $\frac{1}{{(n+2)}^p} $ + $\frac{1}{{(n+3)}^p} $ +$\frac{1}{{(n+4)}^p} $ ...........+ $\frac{1}{{(n+n)}^p} $
Let $\Delta S$ = $S(2n) - S(n)$
then
$\frac{n}{{(2n)}^p}$ $\leq$ $\Delta S$ $\leq$ $\frac{n}{(n+1)^p} $
By the sandwich theorem we see that if $p > 1$, then $\lim_{n \rightarrow \infty} \Delta S = 0$
so that as $n$ tends to infinity, the gaps $S(2n)-S(n)$, $S(4n)-S(2n), \dots$ all shrink to $0$.
So the series converges (this step is made rigorous in the edit below).
EDIT:
$S(2n) - S(n) \leq \frac{n}{(n+1)^p} \leq n^{1-p} $
$S(2^{k+1}n) - S(2^{k}n) \leq \frac{2^{k+1}n - 2^{k}n}{(2^kn+1)^p} \leq \frac{2^{k+1}n - 2^{k}n}{(2^kn)^p} \leq n^{1-p}2^{k(1-p)}$
Summing above equation for k = 0 to m, we get
$S(2^{m+1}n) - S(n) \leq n^{1-p}(\frac{1-2^{(m+1)(1-p)}}{1-2^{(1-p)}})$
This bound holds for every $m$; letting $m \to \infty$ gives the tail bound $R(n) := \lim_{m \to \infty} S(2^{m+1}n) - S(n) \leq \frac{n^{1-p}}{1-2^{1-p}}$, and hence $\lim_{n \to \infty} R(n) = 0$.
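The tail bound from the edit is easy to test numerically (a sketch with $p=2$, $n=10$, my choice of values):

```python
def S(n, p):
    return sum(1.0 / k**p for k in range(1, n + 1))

p, n = 2, 10
bound = n**(1 - p) / (1 - 2**(1 - p))   # tail bound n^(1-p)/(1 - 2^(1-p)) = 0.2
for m in range(6):
    assert S(2**(m + 1) * n, p) - S(n, p) <= bound
print(S(100000, p))  # the partial sums approach pi^2/6 ≈ 1.6449
```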
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184",
"answer_count": 9,
"answer_id": 8
} |
How to prove $u \in \operatorname{span}\{v_{1},\dots,v_{n}\}$ The vectors $v_{1},\dots,v_{n}$ form a linearly independent set.
Addition of vector $u$ makes this set linearly dependent.
How to prove that $u \in \operatorname{span}\{v_{1},\ldots,v_{n}\}$.
I was able to prove the reverse.
| Ok since Alexander has already given you the answer, let's look at how just rearranging an equation can tell you so many things:
Problem 1: If $v_1, \cdots, v_4$ are linearly independent vectors in $\mathbb{R}^4$, show that $\{v_1, v_2, v_3\}$ is a linearly independent set of vectors.
Use the contrapositive and solve the problem!
If $v_1, v_2, v_3$ are linearly dependent, then $\exists c_1, c_2, c_3$, not all zero, such that $\sum c_i v_i = 0$. Setting $c_4 = 0$, we get $c_1 v_1 + c_2 v_2 + c_3 v_3 + c_4 v_4 = 0$ with coefficients not all zero, so $v_1, \cdots, v_4$ are linearly dependent and we are done!
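A concrete $\mathbb{R}^3$ sketch of the original claim (the determinant test and the example vectors are my choices): with $v_1, v_2$ independent, adding a third vector breaks independence exactly when it lies in $\operatorname{span}\{v_1, v_2\}$.

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

v1, v2 = (1, 0, 2), (0, 1, 1)
in_span  = (4, 7, 15)    # 4*v1 + 7*v2, so {v1, v2, in_span} is dependent
off_span = (0, 0, 1)     # not a combination of v1 and v2

assert det3([v1, v2, in_span]) == 0    # dependent  <=>  u in span{v1, v2}
assert det3([v1, v2, off_span]) != 0   # independent <=> u outside the span
```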
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why are partial orderings important? I was reviewing my old Discrete Mathematics notes, and I came across a section describing how Partial Orderings are identified. I understand this, but I can't seem to recall/find information on why partial orderings are important.
What is the significance of knowing whether a set is partially ordered?
| A partial order is an abstraction of the concept of 'order' everyone is familiar with, like the one on the integers, the reals, etc. Although this is a bit misleading because those orders are in fact 'total': any two elements can be compared.
But in fact lots of 'orders', such as total orders, are posets satisfying additional axioms. Another example is a well-order, the habitat of transfinite induction, which is an extension of induction on $\mathbb{N}$.
A poset in which any two elements have an upper bound is called a directed set. A function from a directed set is called a net, and this is a generalization of a sequence. Nets are quite useful in topology/functional analysis.
Every poset is a category, if we take as objects the elements of the poset, and if we take exactly one arrow from $a$ to $b$ if and only if $a\leq b$. Composition is defined beause of transitivity, and the existence of an identity arrow corresponds to reflexivity. Many basic concepts in category theory can be illuminated by this special choice of category. But of course this is only useful if you already know/care about posets in the first case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Is the set of all finite sequences of letters of Latin alphabet countable/uncountable? How to prove either? Today in Coding/Cryptography class, we were talking about basic definitions, and the professor mentioned that for a set $A=\left \{ \left. a, b, \dots, z \right \} \right.$ (the alphabet) we can define a set $A^{*}=\left \{ \left. a, ksdjf, blocks, coffee, maskdj, \dots, asdlkajsdksjfs \right \} \right.$ (words) as a set that consists of all finite sequences of the elements/letters from our $A$/alphabet. My question is, is this set $A^{*}$ countably or uncountably infinite? Does it matter how many letters there are in your alphabet? If it was, say, $A=\left \{ \left. a \right \} \right.$, then the words in $A^{*}$ would be of form $a, aa, aaa, \dots$ which, I think, would allow a bijection $\mathbb{N} \to A^{*}$ where an integer would signify the number of a's in a word. Can something analogous be done with an alphabet that consists of 26 letters (Latin alphabet), or can countability/uncountability be proved otherwise? And as mentioned before, I am wondering if the number of elements in the alphabet matters, or if all it does is change the formula for a bijection.
P.S. Now that I think of it, maybe we could biject from $\underset{n}{\underbrace{\mathbb{N}\times\mathbb{N}\times\mathbb{N}\times\dots\times\mathbb{N}}}$ to some set of words $A^{*}$ whose alphabet $A$ has $n$ elements? Thanks!
| Proposition. If the alphabet $A$ is countable, the set $A^*$ of all finite strings in that alphabet is also countable.
Proof. $A, A^2 = A \times A, A^3 = A \times A \times A$, etc. are all countable and well-ordered (under the lexicographic ordering induced by the well-ordering of $A$), and $$A^* = \bigcup_{n=0}^{\infty} A^n$$ and the union of countably many well-ordered countable sets is again countable, so $A^*$ is countable. Note this does not require the axiom of (countable) choice.
However, the set $A^{\mathbb{N}}$ of all countably-infinite strings is countable if and only if $A$ is empty or is a 1-letter alphabet.
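The enumeration in the proof can be made explicit with a shortlex ordering — list words by length, then lexicographically within each length. A sketch (one explicit bijection $\mathbb{N} \to A^*$, not the only one):

```python
from itertools import count, islice, product

def shortlex(alphabet):
    # Enumerate A* by length, then lexicographically within each A^n
    yield ""
    for n in count(1):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

words = list(islice(shortlex("ab"), 7))
print(words)  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```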
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Why is $\arctan(x)=x-x^3/3+x^5/5-x^7/7+\dots$? Why is $\arctan(x)=x-x^3/3+x^5/5-x^7/7+\dots$?
Can someone point me to a proof, or explain if it's a simple answer?
What I'm looking for is the point where it becomes understood that trigonometric functions and pi can be expressed as series. A lot of the information I find when looking for that seems to point back to arctan.
| Marty Cohen's answer gives an explanation for the following fact:
$$
\frac{d}{dx} \arctan(x) = \frac{1}{1+x^2}
$$
Here is an alternative explanation.
Let $y = \arctan(x)$, and try to find $\frac{dy}{dx}$. We have
$$
y = \arctan(x)
$$
$$
\tan(y) = \tan(\arctan(x))
$$
We'd like to simplify the right-hand side. The definition of $\arctan(x)$ is:
$$
\arctan(x) = \textrm{the angle between }-\frac{\pi}{2} \textrm{ and } \frac{\pi}{2}\textrm{ whose tangent is } x.
$$
So whatever $\arctan(x)$ is, its tangent must be $x$. That is,
$$
\tan(\arctan(x)) = x.
$$
So we have
$$
\tan(y) = x
$$
Now differentiate both sides with respect to $x$ (this is called implicit differentiation):
$$
\frac{d}{dx} \tan(y) = \frac{d}{dx} x
$$
$$
\sec^2(y) \frac{dy}{dx} = 1
$$
$$
\frac{dy}{dx} = \cos^2(y)
$$
$$
\frac{d}{dx}\arctan(x) = \cos^2(\arctan(x))
$$
But it turns out that $\cos^2(\arctan(x)) = \frac{1}{{1+x^2}}$. To see this, you can draw a right triangle with vertices at $(0,0), (1, 0)$, and $(1, x)$. The angle at the origin is $\arctan(x)$, and you can easily compute its cosine. (Try it!)
Or, you can try some algebra. Using the notation from above, we have
$$
\tan(y) = x
$$
$$
\frac{\sin(y)}{\cos(y)} = x
$$
$$
\sin(y) = x\cos(y)
$$
$$
\sin^2(y) = x^2\cos^2(y)
$$
$$
1-\cos^2(y) = x^2 \cos^2(y)
$$
$$
1 = \cos^2(y)(1 + x^2),
$$
$$
\frac{1}{1 + x^2} = \cos^2(y),
$$
as desired.
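To get from $\frac{d}{dx}\arctan x = \frac{1}{1+x^2}$ to the series in the question, expand $\frac{1}{1+x^2} = \sum_{n\ge 0}(-1)^n x^{2n}$ (a geometric series, $|x|<1$) and integrate term by term. A numerical sketch of the result:

```python
import math

def arctan_series(x, terms):
    # x - x^3/3 + x^5/5 - ... obtained by integrating 1/(1+x^2) term by term
    return sum((-1)**n * x**(2 * n + 1) / (2 * n + 1) for n in range(terms))

assert abs(arctan_series(0.5, 40) - math.atan(0.5)) < 1e-12
# At x = 1 the series still converges (slowly), giving Leibniz's pi/4:
assert abs(4 * arctan_series(1.0, 200000) - math.pi) < 1e-4
```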
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 10,
"answer_id": 5
} |
Measure theory question Let $f\geq 0$ be a measurable function which is finite almost everywhere.
For each $k\in\mathbb{Z}$, define, $E_k=\lbrace x|f(x)>2^k\rbrace, F_k=\lbrace x|2^k\leq f(x)<2^{k+1}\rbrace$.
Is it true that $\sum_{k=-\infty}^{\infty}2^km(E_k)<\infty$ if and only if $\sum_{k=-\infty}^{\infty}2^km(F_k)<\infty$?
| The idea is to bound the two expressions tightly in terms of each other:
$$
\sum_{k=-\infty}^{+\infty} 2^k m(F_k) \leqslant
\sum_{k=-\infty}^{+\infty} 2^k m(E_k) \leqslant
2 \cdot\sum_{k=-\infty}^{+\infty} 2^k m(F_k) .
$$
Once this is established, it is obvious that either summation converges if and only if the other does.
The left half of the inequality is obvious since $F_k \subseteq E_k$, so we prove only the right half.
$$
\begin{align*}
\sum_{k=-\infty}^{+\infty} 2^k m(E_k)
&\leqslant \sum_{k=-\infty}^{+\infty} 2^k \left( \sum_{j=k}^{\infty} m(F_j) \right)
\\ &= \sum_{j= -\infty}^{\infty} m(F_j) \left( \sum_{k=-\infty}^{j} 2^k \right)
\\ &= \sum_{j= -\infty}^{\infty} 2^j m(F_j) \left( \sum_{k=-\infty}^{0} 2^k \right)
\\ &= \sum_{j= -\infty}^{\infty} 2^j m(F_j) \cdot 2.
\end{align*}
$$
Here the interchange of summations is justified because all terms are nonnegative.
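A finite counting-measure analogue of the inequality chain (a sketch; the sample values and the truncation of $k$ are my choices — the values avoid exact powers of $2$ so that $F_k \subseteq E_k$ holds on the nose):

```python
values = [0.3, 0.9, 1.7, 2.5, 6.0, 6.0, 13.0, 40.0]   # sample values of f
K = range(-5, 10)                                      # covers all nonempty F_k here

m_E = {k: sum(1 for v in values if v > 2**k) for k in K}
m_F = {k: sum(1 for v in values if 2**k <= v < 2**(k + 1)) for k in K}

sum_E = sum(2**k * m_E[k] for k in K)
sum_F = sum(2**k * m_F[k] for k in K)
assert sum_F <= sum_E <= 2 * sum_F
```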
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/29766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |