Can $f(g(x))$ be a polynomial? Let $f(x)$ and $g(x)$ be nonpolynomial real-entire functions.
Is it possible that $f(g(x))$ is equal to a polynomial?
edit
Some comments:
I was thinking about iterations.
So for instance $f(f(x)) = $ some polynomial.
However, such $f$ are usually (always?) not entire, because a non-linear polynomial has more than one fixed point.
This led me to consider adding the strong condition that
$(f(g(x)) - g(f(x)))^2$ is not identically $0$.
But I guess that is a follow-up question.
edit 2
Real-entire means entire and real-analytic.
|
There are various ways to show that this cannot happen. E.g., by Casorati-Weierstrass, the image of $|z|>R$ under $g$ is dense in the plane for every $R>0$, so the image of the same domain under $f\circ g$ contains a dense subset of $f(\mathbb{C})$, which is itself dense in the plane. This shows that $f \circ g$ has an essential singularity at $\infty$ and hence cannot be a polynomial.
(And it does not matter that $f$ and $g$ are real.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1115663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
What happens if we change the definition of quotient ring to one that does not have the ideal restriction? From Wikipedia:
Given a ring R and a two-sided ideal I in R, we may define an
equivalence relation ~ on R as follows:
a ~ b if and only if a − b is in I.
Using the ideal properties, it is not difficult to check that ~ is a
congruence relation. In case a ~ b, we say that a and b are congruent
modulo I. The equivalence class of the element a in R is given by
[a] = a + I := { a + r : r in I }.
This equivalence class is also sometimes written as a mod I and called
the "residue class of a modulo I".
The set of all such equivalence classes is denoted by R/I; it becomes
a ring, the factor ring or quotient ring of R modulo I, if one defines
(a + I) + (b + I) = (a + b) + I;
(a + I)(b + I) = (a b) + I.
Suppose that we no longer restrict $I$ to be an ideal, but let it be any subset of $R$ that is not an ideal of $R$. Would it then be impossible to satisfy $(a + I) + (b + I) = (a + b) + I$ and $(a + I)(b + I) = (a b) + I$ in all cases?
|
No. You can't take just any subset $I$ of $R.$ To define the quotient ring, we first need $R/I$ to be a group, so the subset $I$ has to be a subgroup of $(R, +).$ (Actually we require a normal subgroup to define a quotient group, but in this case $(R,+)$ is abelian, so any subgroup is normal.) Now take $I$ to be a subgroup of $(R, +)$ and consider the quotient group $R/I.$ We want to show that it is a ring where the multiplication is defined by $(x+I)(y+I)=xy+I.$ First we need to make sure that this is a well-defined operation. Suppose $a+I=b+I.$ Then for every $c+I \in R/I,$ we need $(c+I)(a+I)=(c+I)(b+I),$ i.e. $ca+I=cb+I$ and so $ca-cb\in I.$ Now $a+I=b+I \Rightarrow a-b \in I.$ But why should $c(a-b) \in I$ for all $c \in R$? There we need the defining property of an ideal: $x\in I, y \in R \Rightarrow yx \in I.$
To emphasize the last point, consider the following example: let $R$ be the ring of all functions from $\mathbb R$ to $\mathbb R,$ and let $C$ be the subset of $R$ containing only the constant functions. Then $C$ is a subgroup of $(R,+),$ but not an ideal of $R.$ Here we have $x+C=x+2+C.$ Multiplying both sides by $x+C,$ we would like to have $x^2+C=x^2+2x+C.$ But this is not true, because $x^2 - (x^2 +2x) = -2x \notin C.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1115763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How is the exponential in the Fourier transform pulled out of the integrand? I'm looking at Fourier Transforms in a Quantum Physics sense, and it's useful to associate the Fourier Series with the Dirac Delta. The book I'm using follows this argument (Shankar, Quantum Mechanics):
The Dirac Delta has the following property:
$\int \delta(x-x^{\prime})f(x^{\prime}) \,\,\mathrm{d}x^{\prime} = f(x)$
We can represent a function by its transform in the frequency domain:
$\hat{f}(k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-jkx}f(x)\,\, \mathrm dx$
and can also perform an inverse transform:
$f(x^{\prime})=\int_{-\infty}^{\infty}e^{jkx^{\prime}} \hat{f}(k)\,\,\mathrm{d} k$
Substituting the first transform in the second:
$f(x^{\prime})=\int_{-\infty}^{\infty}e^{jkx^{\prime}} \left(\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-jkx}f(x) \,\,\mathrm{d} x \right) \,\,\mathrm dk$
Now, in the discussion that I'm reading, this is rearranged as:
$f(x^{\prime})=\int_{-\infty}^{\infty} \left(\frac{1}{2\pi} \int_{-\infty}^{\infty}\mathrm{d} k\,\, e^{jk(x^{\prime}-x)}\right)f(x) \,\,\mathrm dx$
Which, by comparison with the first equation, allows us to associate the parenthesized part of this equation with the Dirac delta.
My Question:
Why can we remove the integrand from the inside of the dk integral, since it clearly depends on k, and $x-x^{\prime}$ is not necessarily zero?
|
The steps missing:
$$\begin{align}f(x^{\prime})&=\int_{-\infty}^{\infty}e^{jkx^{\prime}} \left(\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-jkx}f(x) \,\,\mathrm{d} x \right) \,\,\mathrm dk\tag{0}\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{jk(x'-x)}f(x)\,dx\,dk\tag{1}\\
&=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{jk(x'-x)}f(x)\,dk\,dx\tag{2}\\
&=\int_{-\infty}^{\infty} \left(\frac{1}{2\pi} \int_{-\infty}^{\infty}\mathrm{d} k\,\, e^{jk(x^{\prime}-x)}\right)f(x) \,\,\mathrm dx\tag{3}
\end{align}$$
The original proof jumped from (0) to (3).
(1) brings the $e^{jkx'}$ into the inner integral, which you can do by the distributive law.
(2) switches the order of integration.
(3) pulls $f(x)$ and $\frac{1}{2\pi}$ out of the inner integral, because both are constant with respect to $k$:
$$\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{jk(x'-x)}f(x)\,dk$$
None of this is valid, mathematically, but it all can be made rigorous by doing the work in "distribution theory." Most physicists don't give a damn about that part, though, because they are mostly dealing with wave functions that are "close enough to" $\delta(x)$, not actually $\delta(x)$.
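A numerical illustration of why this still "works" in practice: truncating the inner $k$-integral at $|k|\le K$ gives the kernel $\frac{1}{2\pi}\int_{-K}^{K}e^{jk(x'-x)}\,dk=\frac{\sin(K(x'-x))}{\pi(x'-x)}$, which behaves like an approximate delta for large $K$. A quick numpy sketch:

```python
import numpy as np

# Truncated delta kernel: (1/2pi) * int_{-K}^{K} e^{jk(x'-x)} dk
#                        = sin(K*(x'-x)) / (pi*(x'-x))
K = 200.0
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2)                              # a smooth test function

xp = 0.7                                       # point at which to recover f
u = xp - x
kernel = (K / np.pi) * np.sinc(K * u / np.pi)  # np.sinc(t) = sin(pi t)/(pi t)
print(np.sum(kernel * f) * dx)                 # ~ 0.6126
print(np.exp(-xp**2))                          # f(xp) = 0.6126...
```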
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1115869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is $\cos((\omega+\alpha\cos(\omega' t))t)$ the wrong model for frequency modulation? So I was trying to program vibrato, or freqency modulation, naively using the model:
$$\cos((\omega + \alpha\cos(\omega' t))t)$$
Where $\alpha \lt \omega$ and $\omega' \ll \omega$. For practical purposes, assume $\omega=200\,\mathrm{Hz},\ \alpha=5\,\mathrm{Hz},\ \omega'=1\,\mathrm{Hz}$ so that
$$195\,\mathrm{Hz}<\omega + \alpha\cos(\omega' t)<205\,\mathrm{Hz}.$$
However in practice this looks and sounds completely wrong. It starts out sounding like vibrato, but the modulation amplitude increases indefinitely:
After some Googling, I find that I'm supposed to use the model:
$$\cos(\theta(t))$$
where in general,
$$\omega(t)=\frac{d\theta(t)}{dt}$$
So I set it up as:
$$\theta(t)=\int\omega + \alpha\cos(\omega' t)\,dt$$
and I get the desired result:
Question is, being that I'm a little rusty with math, how can the first approach be analyzed to show why it behaves the way it does?
|
Let $\theta(t) = \omega t + t \alpha \cos (\xi t)$, then
$\theta'(t) = \omega + \alpha \cos (\xi t) - t \alpha \xi \sin (\xi t)$, so you
can see that the instantaneous frequency is unbounded.
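To see this numerically, here is a small numpy sketch (with $\xi=\omega'$, and the question's values converted to rad/s) comparing the instantaneous frequencies of the naive and the integrated-phase models:

```python
import numpy as np

w, a, xi = 2*np.pi*200, 2*np.pi*5, 2*np.pi*1   # omega, alpha, omega' in rad/s
t = np.linspace(0, 10, 1_000_000)

# Naive model, theta(t) = (w + a*cos(xi*t)) * t:
inst_naive = w + a*np.cos(xi*t) - a*xi*t*np.sin(xi*t)   # theta'(t)

# Correct model, theta(t) = w*t + (a/xi)*sin(xi*t):
inst_correct = w + a*np.cos(xi*t)

print(inst_naive.min()/(2*np.pi), inst_naive.max()/(2*np.pi))     # grows with t
print(inst_correct.min()/(2*np.pi), inst_correct.max()/(2*np.pi)) # stays in [195, 205] Hz
```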
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1115950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Soft question: Union of infinitely many closed sets This is a question that is not addressed in my book directly, but I was curious. We just proved that the union of a finite collection of closed sets is also closed, but I was curious about whether the union of infinitely many closed sets can be open. This question may not be at the level of the book, so perhaps that's why it wasn't addressed.
Just to make things easier, let's imagine sets that are disks in the x-y plane. I can imagine that if there are nested disks inside each other, then in this case the union would clearly be closed.
But what if you could construct an infinite set of disks that together cover the entire real plane. Then in this case, it seems that every point in their union would have an open ball centered around the point that is also contained in the real plane, so that this union of an infinite collection of disks would create an open set.
Is this a correct way of thinking? Or at least on the right track?
I get the feeling that as long as you have no largest individual set that contains all the others, then you won't get a closed set. But I have a feeling there is more subtlety to it.
Thanks everyone
|
In $\Bbb R$ take $\aleph_0$ closed intervals from $-n$ to $n$ for natural $n$: $$\bigcup_{n\in\Bbb N} [-n,n] = \Bbb R$$
Similarly $$\bigcup_{n\in\Bbb N^+} \left([-n,-n+1] \cup [n-1,n]\right) = \Bbb R$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Colimit preserves monomorphisms under certain conditions I know that colimit preserves epimorphisms.
Consider the special case where
*
*The diagrams are indexed by a directed set $I$,
*We are in the category of certain algebraic structures, such as $\mathbf{Ab}, \mathbf{Ring}, \mathbf{Set}$, $\mathbf{Mod_R}$, where the colimit can be explicitly constructed as a disjoint union quotiented by an equivalence relation.
*The category we are considering has free objects, i.e. the forgetful functor to $\mathbf{Set}$ has a left adjoint. In this case monomorphisms are exactly the injective maps.
In this case, it seems to me that colimit also preserves monomorphisms. Given monomorphisms $f_i: A_i\to B_i$, we show that $f: \varinjlim A\to \varinjlim B$ is a monomorphism.
If two elements $[(i,a)],[(j,b)]$ are mapped to the same class, we can find $k$ with $i\le k$, $j\le k$ such that $[(k,a')] = [(i,a)]$ and $[(k,b')] = [(j,b)]$, where $[(k,a')]$ and $[(k,b')]$ are mapped to the same class $[(k,f_k(a'))] = [(k,f_k(b'))]$. But since $f_k$ is injective, we must have $a' = b'$, and hence $[(i,a)] = [(j,b)]$.
Is it correct? The motivation for the question comes from an exercise where I try to prove that if a morphism of sheaves $\Phi:\mathscr{F}\to\mathscr{G}$ is injective on open sets ($\Phi_U:\mathscr{F}(U)\to \mathscr{G}(U)$ injective for all $U$), then it is injective on stalks.
|
Good observation!
Indeed it happens often that filtered colimits preserve finite limits, and since a morphism $f: X\to Y$ is a monomorphism if and only if the pair $\mathrm{id}, \mathrm{id}: X\rightrightarrows X$ is a pullback of $(f,f)$, in these situations monomorphisms are preserved.
You find this imposed as a condition in topos theory, where a geometric morphism $f: {\mathcal E}\to{\mathcal F}$ consists of an adjunction $f^{\ast}: {\mathcal E}\rightleftarrows {\mathcal F}: f_{\ast}$ in which additionally $f^{\ast}$ is required to preserve finite limits.
See http://ncatlab.org/nlab/show/geometric+morphism
As you already observed, taking ${\mathcal F}=\text{Sh}(X)$, ${\mathcal E}=\text{Sh}(\text{pt})$ and $f: \text{Sh}(\text{pt})\to\text{Sh}(X)$ coming from a point $x\in X$, then the extra condition on $f^{\ast} = (-)_x$ means that the formation of stalks preserves finite limits, which you already observed in case of monomorphisms.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Rigorous proof for a maximization problem Problem: Eight players entered a round-robin tennis tournament. At the end of the tournament, a player who wins $N$ sets will take home $N^2$ dollars. The entry fee is $17.50 per player. Why is this enough to pay for the prizes?
Intuitive solution: Trying to maximize the total prize, we see that the eight players can take home as much as $7^2 + \ldots + 1^2+0^2=\$140$. So $\$140\div8 \text{ players}=\$17.50$ as the entry fee for each player is enough.
What is a rigorous proof for this problem?
|
Denote by $x_k$ the number of victories for player $P_k$, and assume $x_1\leq x_2\leq\ldots\leq x_8$. If $x_8<7$ then $P_8$ has lost at least one game against a player $P_k$ with $k<8$. Since
$$(x_8+1)^2+(x_k-1)^2=x_8^2+x_k^2+2(x_8-x_k)+2\geq x_8^2+x_k^2+2\ ,$$
the total payout would have been greater had $P_8$ won this game as well. It follows that in the case of maximal payout, player $P_8$ wins all his games. Therefore we may assume this from now on and start arguing about player $P_7$: if $x_7<6$, then $P_7$ has lost at least one game against a player $P_k$ with $k<7$, and so on.
(Of course this could and should be converted to a full-fledged induction proof.)
It follows that the total payout $W$ is maximal if $x_k=k-1$ $\>(1\leq k\leq 8)$, so that
$$W_\max=\sum_{j=1}^7 j^2={7\cdot 8\cdot 15\over 6}=140\ ,$$
which leads to an entrance fee of ${140\over8}=17.50$ dollars.
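For small tournaments the extremal configuration can be confirmed by brute force. A Python sketch (assuming each match is a single set won by one of the two players; the helper name max_payout is mine):

```python
from itertools import combinations, product

def max_payout(n):
    """Maximum of sum(wins_i^2) over all outcomes of a round-robin on n players."""
    games = list(combinations(range(n), 2))
    best = 0
    for outcome in product((0, 1), repeat=len(games)):
        wins = [0] * n
        for (i, j), w in zip(games, outcome):
            wins[i if w else j] += 1
        best = max(best, sum(x * x for x in wins))
    return best

for n in range(2, 7):
    print(n, max_payout(n), sum(j * j for j in range(n)))   # last two columns agree
```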
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the remainder when $x^7-12x^5+23x-132$ is divided by $2x-1$? (Hint: Long division need not be used.)
The Hint is confusing!
|
You can use the Horner scheme to evaluate it. For an explanation of this algorithm, read this. In your case you have
\begin{array}{|c|c|c|c|c|c|c|c|c|}\hline &1&0&-12&0&0&0&23&-132 \\\hline\frac12 &0&\frac12&\frac14&-\frac{47}{8}&-\frac{47}{16}&-\frac{47}{32}&-\frac{47}{64}&\frac{1425}{128}\\\hline &1&\frac12&-\frac{47}{4}&-\frac{47}{8}&-\frac{47}{16}&-\frac{47}{32}&\frac{1425}{64}&\color{red}{-\frac{15471}{128}}\\\hline\end{array}
which, by the way, uses the same idea as the other answers, but spares you many calculations with high powers.
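The same computation is easy to script; a Python sketch with exact rational arithmetic, using the fact that the remainder on division by $2x-1$ is the constant $f(1/2)$:

```python
from fractions import Fraction

coeffs = [1, 0, -12, 0, 0, 0, 23, -132]   # x^7 - 12x^5 + 23x - 132
x0 = Fraction(1, 2)

acc = Fraction(0)
for c in coeffs:           # Horner: ((...(1*x0 + 0)*x0 - 12)*x0 + ...)
    acc = acc * x0 + c

print(acc)                 # -15471/128, matching the table
```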
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Series proof $\sum_1^\infty|a_n|<\infty$ then show that $\sum_1^\infty{a_n^2}<\infty$ I am stuck with this proof; there seems to be no property that helps.
If $\sum_1^\infty|a_n|<\infty$ then show that $\sum_1^\infty{a_n^2}<\infty$ and that the reverse isn't true.
|
$\sum_1^\infty{a_n^2} \leq (\sum_1^\infty|a_n|)(\sum_1^\infty|a_n|) < \infty$, since expanding the product on the right gives every $a_n^2$ plus other nonnegative cross terms $|a_n||a_m|$.
An example to show the reverse is untrue:
take $a_n = \frac {1}{n}$; then $\sum \frac1n$ diverges while $\sum \frac1{n^2}$ converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that this sequence of integers is on average equal to zero. Consider the sequence $\{a(n)\}_{n\in\mathbb{N}^*}$ that is defined by the Dirichlet series:
$$\zeta (s)^2\cdot\left(1-\frac{1}{2^{s-1}}-\frac{1}{3^{s-1}}+\frac{1}{6^{s-1}}\right)=\sum_{n\geq 1}\frac{a(n)}{n^s}$$
Prove or disprove that on average it is equal to zero.
The sequence starts:
$$a(n)=1,0,-1,-1,2,0,2,-2,-3,0,2,1,2,0,-2,-3,2,0,2,-2,-2,0,2,2,3,0,-5,-2,2,0,2,-4,...$$
To be more precise:
Prove or disprove that:
$$\lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N}a(n)=0.$$
Edit: To generate this sequence:
1. Multiply the red and green matrix $\zeta(s)$ with the matrix on the left (matrix multiplication).
2. Repeat.
3. The sequence $a(n)$ is found in the first column in the last matrix.
Edit:
After reading the answer below I tried to verify this in a Mathematica program:
a = {+1, -1, -2, -1, +1, +2}
Monitor[a =
Table[Sum[
Sum[If[Mod[n, k] == 0, a[[1 + Mod[k - 1, 6]]], 0], {k, 1,
n}], {n, 1, kk}]/kk, {kk, 1, 2000}], kk]
ListLinePlot[a]
This is the plot from the program:
which looks as if it is converging to zero based on the scale on the y-axis. 2000 terms used.
To confirm that the program generates the sequence $a(n)$:
a = {+1, -1, -2, -1, +1, +2}
Monitor[a =
Table[Sum[
If[Mod[n, k] == 0, a[[1 + Mod[k - 1, 6]]], 0], {k, 1, n}], {n, 1,
32}], n]
{1, 0, -1, -1, 2, 0, 2, -2, -3, 0, 2, 1, 2, 0, -2, -3, 2, 0, 2, -2, \
-2, 0, 2, 2, 3, 0, -5, -2, 2, 0, 2, -4}
aNumberThatIsAsBigAsPossible = 1000; a = {+1, -1, -2, -1, +1, +2};
Monitor[a =
Table[Sum[
Sum[If[Mod[n, k] == 0, a[[1 + Mod[k - 1, 6]]], 0], {k, 1,
n}], {n, 1, kk}]/kk, {kk, 1,
aNumberThatIsAsBigAsPossible}];, kk]; Show[
ListLinePlot[a, PlotRange -> {-0.5, +0.5}],
Graphics[Line[{{0, 0.3088626596}, {aNumberThatIsAsBigAsPossible,
0.3088626596}}]]]
The black line is at the value 0.3088626596
|
We have:
$$ \zeta(s)=\prod_{p}\left(1-\frac{1}{p^s}\right)^{-1},\qquad \zeta(s)^2=\sum_{n\geq 1}\frac{d(n)}{n^s} $$
hence, assuming that the sequence $\{a(n)\}_{n\in\mathbb{N}^*}$ is the sequence of coefficients of the Dirichlet series associated with $f(s)$:
$$ f(s)=\zeta(s)^2\left(1-\frac{1}{2^{s-1}}\right)\left(1-\frac{1}{3^{s-1}}\right)=\sum_{n\geq 1}\frac{a(n)}{n^s}$$
we have:
$$ \zeta(s)^2\left(1-\frac{2}{2^{s}}\right)\left(1-\frac{3}{3^{s}}\right)=\sum_{n\geq 1}\frac{d(n)}{n^s}-\sum_{n\geq 1}\frac{2 d(n)}{(2n)^s}-\sum_{n\geq 1}\frac{3 d(n)}{(3n)^s}+\sum_{n\geq 1}\frac{6 d(n)}{(6n)^s} $$
so:
$$ \sum_{n=1}^{N}a(n) = \sum_{n=1}^{N}d(n)-2\sum_{\substack{n=1\\2\mid n}}^{N} d(n/2)-3\sum_{\substack{n=1\\3\mid n}}^{N} d(n/3)+6\sum_{\substack{n=1\\6\mid n}}^{N} d(n/6).\tag{1}$$
Since the average order of the divisor function is given by:
$$ \sum_{n=1}^{N} d(n) = N \log N + (2\gamma-1)N + O(\sqrt{N}) \tag{2}$$
by the Dirichlet hyperbola method. Plugging $(2)$ into $(1)$, the $N\log N$ and $(2\gamma-1)N$ main terms cancel exactly (note $1-2\cdot\tfrac12-3\cdot\tfrac13+6\cdot\tfrac16=0$ and $\log 2+\log 3-\log 6=0$), leaving
$$ \frac{1}{N}\sum_{n=1}^{N}a(n) = O\left(\frac{1}{\sqrt{N}}\right)\tag{3}$$
so the limit average is zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Rotating a point in space about another via quaternion I have a system that is giving me a point in 3D space (call it (x, y, z)) and a quaternion (call it (qw, qx, qy, qz)). I want to create a point at (x+1, y, z), and then rotate that point using the quaternion. How would I do this?
The specific application that I am using this for is with a Kinect. The Kinect produces the x,y,z points and provides the quaternion data. What I am trying to do is add axis arrows to the Kinect output so that we can visually see how "noisy" the Kinect output is.
|
The answer to this question starts with the answer in this thread. Specifically, the formula posted as P' = Q(P-G)Q'+G, where P is the coordinates of the point being rotated, G is the point around which P is being rotated, Q is the quaternion, Q' is the quaternion inverse, and P' is the new location of the point after rotation.
The next step is to break the formula into components.
Q = <a, b, c, d>, where a is the scalar component of the quaternion, b is the x component, c is the y component, and d is the z component.
Q' = <a/(a^2+b^2+c^2+d^2), -b/(a^2+b^2+c^2+d^2), -c/(a^2+b^2+c^2+d^2), -d/(a^2+b^2+c^2+d^2)>
G = (x, y, z)
P = (x+m, y+n, z+o)
By rewriting all of the above with i, j, k, we get expressions that we can use with the formula at the start. Specifically, i denotes the x coordinate, j the y coordinate, and k the z coordinate. Therefore, we end up with the following components:
Q = a+bi+cj+dk
Q' = (a-bi-cj-dk)/(a^2+b^2+c^2+d^2)
G = xi+yj+zk
P = (x+m)i+(y+n)j+(z+o)k
After that, everything else is just algebra.
P' = (a+bi+cj+dk)((x+m)i+(y+n)j+(z+o)k-(xi+yj+zk))(a-bi-cj-dk)/(a^2+b^2+c^2+d^2)+xi+yj+zk
P' = 1/(a^2+b^2+c^2+d^2) * (a+bi+cj+dk)(mi+nj+ok)(a-bi-cj-dk)+xi+yj+zk
After multiplying everything out and condensing, we get the following result:
P' = 1/(a^2+b^2+c^2+d^2) * (((a^2+b^2-c^2-d^2)m+2(bc-ad)n+2(ac+bd)o)i + (2(ad+bc)m+(a^2-b^2+c^2-d^2)n+2(cd-ab)o)j + (2(bd-ac)m+2(ab+cd)n+(a^2-b^2-c^2+d^2)o)k)+xi+yj+zk
Breaking this into component form, P' = (x', y', z'), we have:
x' = x + ((a^2+b^2-c^2-d^2)m + 2(bc-ad)n + 2(ac+bd)o)/(a^2+b^2+c^2+d^2)
y' = y + (2(ad+bc)m + (a^2-b^2+c^2-d^2)n + 2(cd-ab)o)/(a^2+b^2+c^2+d^2)
z' = z + (2(bd-ac)m + 2(ab+cd)n + (a^2-b^2-c^2+d^2)o)/(a^2+b^2+c^2+d^2)
From this, we can simply plug in our point of rotation, our offset from the point of rotation, and the incoming quaternion data to determine the offset point's new position after rotation.
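As a sanity check, the component formulas can be compared against a direct quaternion-product computation. A minimal Python sketch (the helper names qmul and rotate_about are mine, not from any particular library):

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def rotate_about(point, pivot, q):
    """P' = Q (P - G) Q' + G, with Q' = conjugate / squared norm."""
    a, b, c, d = q
    norm2 = a*a + b*b + c*c + d*d
    m, n, o = (pc - gc for pc, gc in zip(point, pivot))
    _, x, y, z = qmul(qmul(q, (0.0, m, n, o)), (a, -b, -c, -d))
    return tuple(gc + v / norm2 for gc, v in zip(pivot, (x, y, z)))

# Rotate (x+1, y, z) about (x, y, z) by 90 degrees around the z axis:
q = (math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4))
print(rotate_about((2.0, 2.0, 3.0), (1.0, 2.0, 3.0), q))   # ~ (1.0, 3.0, 3.0)
```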
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Polynomial proof exercise $P(x)=x^n + a_1x^{n-1} +\dots+a_{n-1}x + 1$ with non-negative coefficients has $n$ real roots. Prove that $P(2)\ge 3n$ I don't have an idea how to do that; I'm in my 4th year of high school. You don't have to solve it for me, I'd just be happy to get a clue. Also, if someone could provide me with some similar exercises with explanations I'd be happy. Thanks.
|
Since $a_j\geq 0$ $(1\leq j\leq n-1)$ and $P(0)=1$, we have $P(x)\geq 1$ for all $x\geq 0$, so all zeros are negative. Hence
$$ P(x)=(x+r_1)\cdots (x+r_n)\qquad r_j>0, 1\leq j\leq n$$
and $r_1\cdots r_n=1$. It follows from this last equality that at least one $r_j$ is greater than or equal to $1$, which gives $r_1+\cdots+r_n\geq 1$.
It follows that
$$ P(2)=(2+r_1)\cdots (2+r_n)\geq 2^n+(r_1+\cdots+r_n)2^{n-1}+1\geq 2^n+2^{n-1}+1\geq 3n.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do you show that the degree of an irreducible polynomial over the reals is either one or two? The degree of irreducible polynomials over the reals is either one or two.
Is it possible to prove it without using complex numbers? Or without using the fundamental theorem of algebra?
|
1) If you know that every irreducible polynomial over $\mathbb R$ has degree $1$ or $2$, you immediately conclude that $\mathbb C$ is algebraically closed:
Else there would exist a simple algebraic extension $\mathbb C\subsetneq K=\mathbb C(a)$ with $[K:\mathbb C]=\operatorname {deg}_\mathbb C a=d\gt 1$.
Then $K=\mathbb C(a)=\mathbb R(i,a)=\mathbb R(b)$ for some $b\in K$ by the primitive element theorem
But then the minimal polynomial $f(X)\in \mathbb R[X]$ of $b$ over $\mathbb R$ would be irreducible over $\mathbb R$ and have degree $\operatorname {deg} f(X)=2d\gt 2$, a contradiction to our hypothesis.
2) That said it is possible to prove that every irreducible polynomial over $\mathbb R$ has degree $1$ or $2$ without using the Fundamental Theorem of Algebra for $\mathbb C$.
The method is due to Lagrange and is described in Samuel's Algebraic Theory of Numbers, pages 44-45.
The method consists in inducting on the largest power $r$ of $2$ dividing the degree $d=2^rl$ ($l$ odd) of an irreducible real polynomial, the result being clear for $r=0$, i.e. for odd $d$.
The proof (highly non trivial) proceeds by a clever application of Viète's formulas expressing the coefficients of a polynomial as symmetric functions of the roots of that polynomial.
3) Another real methods proof uses Galois theory and Sylow $2$-groups.
It can be found in Fine-Rosenberg's Theorem 7.6.1
That elementary and pedagogical book is entirely devoted to all kinds of proofs of the Fundamental Theorem of Algebra.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1116965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
}
|
Finding all possible combination **patterns** - as opposed to all possible combinations I start off with trying to find the number of possible combinations for a 5x5 grid (25 spaces), where each space could be a color from 1-4 (so 1, 2, 3, or 4)
I do 4^25 = 1,125,899,906,842,624 different combinations
However, now I'm trying to change the number of combinations to account for grids with the same number pattern, for example:
{ 1 1 1 1 1 }
{ 3 3 3 3 3 }
{ 4 4 2 2 3 }
{ 4 3 2 1 1 }
{ 2 2 1 2 3 }
1 is now 2, 2 is now 4, 3 is now 1, 4 is now 3
{ 2 2 2 2 2 }
{ 1 1 1 1 1 }
{ 3 3 4 4 1 }
{ 3 1 4 2 2 }
{ 4 4 2 4 1 }
I'm having trouble trying to come up with an equation I can use to solve this for a (x * y) grid where each space could be a color from 1 to (c).
|
$\sum _{n=1}^c \mathcal{S}_{x y}^{(n)}$ is what you're after, with $x, y$ the two dimensions and $c$ the number of colors. Here $\mathcal{S}_{x y}^{(n)}$ is the Stirling number of the second kind $S(xy, n)$, which counts the partitions of the $xy$ cells into $n$ nonempty classes.
This generalizes to arbitrary dimensions.
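A quick way to evaluate this, sketched in Python with the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$ (the helper name stirling2 is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Number of ways to partition n labeled cells into k nonempty classes."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

cells, colors = 5 * 5, 4
print(sum(stirling2(cells, n) for n in range(1, colors + 1)))   # number of patterns
```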
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
What class would this be covered in? Would the material in this section of a wikipedia article be covered in a standard course on Differential Geometry, or should I look elsewhere to learn those sorts of things? Specifically, topics like solid angles, line/surface/volume elements, and so forth. I have taken vector calculus, which did not get into these topics in any detail.
|
A standard differential geometry class would certainly contain this material, but it would probably not spend significant time on them. This is more the flavor of a multivariable calculus/analysis class; if this wasn't covered to your satisfaction in yours, you might want to study these on your own.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Paul Erdős showed a simple estimate for $\pi(x) \ge \frac{1}{2}\log_2 x$; is it possible to tweak his argument to improve the estimate? Paul Erdős gave a simple argument to show that $\pi(x) \ge \dfrac{1}{2}\log_2 x$.
Is it possible to tweak the argument and get a better estimate? I am wondering how good an estimate for $\pi(x)$ can be achieved using a variation of his reasoning.
I explored the possibility of changing $m^2$ to $m^3$ so that for any $y \le x$ we have:
$y = p_1^{e_1}p_2^{e_2}\ldots p_n^{e_n}m^3$ and $e_i\in\left\{0,1,2\right\}$ and $m \in \mathbb{Z}$
But this gets us to: $m \le \sqrt[3]{x}$ so that $3^n\times \sqrt[3]{x} \ge x$ and $\pi(x) \ge \dfrac{2}{3}\log_3 x$ which is weaker than $\pi(x) \ge \dfrac{1}{2}\log_2 x$
Has anyone thought of other creative tweaks that can improve the result?
|
The same idea also gives $p_n\leq 4^n$ and a rather trivial improvement of that is:
$n\in\Bbb Z_+,\;1\leq x\leq p_n\implies x=p_1^{e_1}\cdots p_n^{e_n}\cdot m^2,\;e_i\in\{0,1\},\;m^2\le p_n$.
If $e_n=1$ then $0=e_1=\cdots=e_{n-1}$, and if $e_n=0$ there are at most $2^{n-1}$ ways to choose the exponents. As in Erdős's proof, $p_n\leq(2^{n-1}+1)\sqrt {p_n}$, which gives
$p_n\leq (2^{n-1}+1)^2$.
A challenge would be to prove $p_n\leq 2^{n-1}+1$ without using the prime number theorem.
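An empirical look at both bounds, using sympy's prime(n) for the $n$-th prime (a sketch, not a proof):

```python
from sympy import prime

for n in range(1, 16):
    p = prime(n)
    print(n, p, p <= (2**(n - 1) + 1)**2, p <= 2**(n - 1) + 1)
    # the proven bound holds, and the challenge bound holds in this range too
```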
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Given a graph on $n$ vertices find the maximum number of edges so it can be colored with no monochromatic $K_m$ I invented a problem and I wanted to share: What is the maximum number of edges a graph on $n$ vertices can have if it can be edge-colored with $k$ colors so that it does not have a monochromatic $K_m$?
I think I have a solution, I would like a verification. Generalizations would also be greatly appreciated.
Thank you very much in advance.
Regards.
|
Such a graph cannot contain a complete subgraph of size $j := R\underbrace{(m,m,\dots, m)}_{k\text{ times}}$. What is the graph on $n$ vertices that has the most edges and does not contain a complete subgraph of size $j$? It is the Turán graph $T(n,j-1)$, so if $T(n,j-1)$ works we are done.
By the definition of the Ramsey number, $K_{j-1}$ admits a coloring $C$ with $k$ colors and no monochromatic $K_m$. So what we do is take $T(n,j-1)$ and regard each of the $j-1$ parts of $T(n,j-1)$ as a vertex of $K_{j-1}$. Given two adjacent vertices of $T(n,j-1)$, we color that edge using the color used in $C$ between the vertices of $K_{j-1}$ corresponding to their parts. This coloring gives us no monochromatic $K_m$: if we have $m$ vertices which are all pairwise connected, they must all be in different parts, and then the colors of the edges between them are the same as the colors of the edges between the corresponding vertices of $K_{j-1}$ under the coloring $C$.
Hence the maximum number of edges is the number of edges of $T(n,j-1)$, which is $\lfloor\frac{(j-2)n^2}{2(j-1)}\rfloor$. It also follows from Turán's theorem that the extremal graph is unique.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to understand the concept behind the equation $\boldsymbol{Ax}=\boldsymbol{b}$ As is known to all, the equation $\boldsymbol{Ax}=\boldsymbol{b}$ can be understood as finding the linear combination coefficients of the column vectors of the matrix $A$. At the same time, it can also be explained as finding the vectors orthogonal to all the row vectors. So I think there must be some connection between these two problems; maybe we can call them dual problems. Can anyone explain the connection between the two explanations for me?
|
As far as the equation $Ax=b$ goes, the left-hand side can be thought of in different ways.
One way is that if $a_1,\ldots,a_n$ are the columns of $A$, and $x=(x_1,\ldots,x_n)$, then
$Ax = a_1x_1+\cdots+a_nx_n,$
which is a linear combination of the columns of $A$. Thus to solve the equation, you need to find the values of $x_1,x_2,\ldots,x_n$ such that this linear combination is equal to $b$.
But the left-hand-side $Ax$ can also be thought of in the following way, which is the usual "row times column" way of thinking about matrix products. If the rows of $A$ are $c_1,\ldots,c_n$, then $Ax$ is the vector whose $i$'th entry is the dot product $c_i\cdot x$. In the special case of the equation $Ax=0$, you get that $c_i\cdot x=0$ for all $i$, and thus that $x$ is orthogonal to all row vectors of $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
class equation of order $10$ Is $10=1+1+1+2+5$ a valid class equation for a group of order $10$?
As far as I know, for this to be a class equation, each summand on the RHS has to divide $10$ and there should be at least one $1$ on the RHS, which is the case here. So I think it is a valid class equation. Is that correct?
|
This cannot be the class equation of a group of order 10, because the $1$'s in the class equation correspond to the elements of the center $Z(G)$ of the group. But $Z(G)$ is a subgroup, so by Lagrange its order (which in this case would be $3$) must divide $|G|=10$. This is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Expectation value of absolute value of difference of two random variables I do not really know how to prove the following statement:
If $E(|X-Y|)=0$ then $P(X=Y)=1$.
The main problem is how to handle the absolute value $|X-Y|$.
If I say that $|X-Y| \geq 0$ it follows that $E(X)=E(Y)$ which is
also the result for $|X-Y|<0$. But then the expectation values are equal
and you can show that $P(X=Y)=1$ is not true for all $X,Y$.
|
Simple proof by contradiction:
Assume that $P(X=Y)<1$. Then (for discrete random variables, as in the sums below) there have to exist $x',y'$ such that $x'\neq y'$ and $P(X=x'\wedge Y=y')>0$.
But then if we look at the random variable $Z=|X-Y|$, we get that
$$E(Z)=\sum_{x,y}P(X=x\wedge Y=y)\cdot|x-y|$$
$$=P(X=x'\wedge Y=y')\cdot|x'-y'|+\sum_{x,y\neq(x',y')}P(X=x\wedge Y=y)\cdot|x-y|$$
Since $$P(X=x'\wedge Y=y')\cdot|x'-y'|$$ is strictly positive and
$$\sum_{x,y\neq(x',y')}P(X=x\wedge Y=y)\cdot|x-y|$$
Is non-negative, it follows that $E(Z)>0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Differential equation degree doubt $$\frac{dy}{dx} = \sin^{-1} (y)$$ The above equation is a form of $\frac{dy}{dx} = f(y)$, so degree should be $1$. But if I write it as $$y = \sin\left(\frac{dy}{dx}\right)$$ then degree is not defined as it is not a polynomial in $\frac{dy}{dx}$. Please explain?
|
To make sense of the definition of the order of a differential equation, you need to first isolate the highest-order derivative. For example,
$\frac{dy}{dx} = y, \frac{dy}{dx} = y^2 + 1$ are of the first order.
$ \frac{d^2 y}{dx^2} = y, \frac{d^2 y}{dx^2} = (\frac{dy}{dx})^3 + 1$ are of second order.
Your differential equation $\frac{dy}{dx} = \sin^{-1}y$ is first order even if it can be written in the equivalent form $y = \sin(\frac{dy}{dx}).$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Building a proper homomorphism between groups. Suppose I have a cyclic group $G$ of order $6$. I want to show that it is isomorphic to $\Bbb {Z}_6$. So $G=\{e,g,g^2,g^3,g^4,g^5\}=\langle g\rangle$. Can I build a homomorphism $f:G \to \Bbb{Z}_6$ that way?
$f(x)=f(g^m)=mf(g)=m$ where $f(g)=1$ and $f(e)=0$. It is problematic because I might get that $f(g^6)=f(e)=6=0$, i.e. $6$ is somehow $0$; but is this construction plausible? Would appreciate your reply.
|
It is much better to build the isomorphism in the other direction, as
$$
\varphi([i]) = g^{i},
$$
if $[i]$ is the class of $i$ in $\mathbb{Z}_{6}$. You have to prove it is well-defined, but this is immediate.
Even better, start with the homomorphism
$$
\mathbb{Z} \to G, \qquad i \mapsto g^{i},
$$
and use the first isomorphism theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Four different positive integers $a, b, c$, and $d$ are such that $a^2 + b^2 = c^2 + d^2$
What is the smallest possible value of $abcd$?
I just need a few hints, nothing else. How should I begin? Number theory?
|
The number of representations of a positive integer as a sum of two squares depends on the number of prime divisors of the form $4k+1$ (see Cox, Primes of the Form $x^2+ny^2$). If we take the first two primes of such a form, $5$ and $13$, we have that $5\cdot 13$ can be represented as:
$$ 65 = 1^2+8^2 = 4^2+7^2 $$
so we have a solution with $abcd = 224$. You can complete the proof by exhaustive search (it is quite easy to check that the first $64$ positive integers do not have a double representation in terms of positive integers, and we only have to check up to $n=225$ or so to find the minimum $abcd$), or by proving that the number of representations of $n$ as a sum of two squares is given by the number of divisors of $n$ of the form $4k+1$ minus the number of divisors of the form $4k+3$.
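A brute-force search, sketched in Python, confirms the minimum. Searching $n\le 225$ suffices: any representation has $ab\ge\sqrt{n-1}$ and $cd\ge\sqrt{n-1}$, so $abcd\ge n-1>224$ once $n\ge 226$.

```python
from itertools import combinations

best = None
for n in range(1, 226):
    reps = [(a, b) for a in range(1, int(n**0.5) + 1)
                   for b in range(a, int(n**0.5) + 1) if a*a + b*b == n]
    for (a, b), (c, d) in combinations(reps, 2):
        if len({a, b, c, d}) == 4:            # four different positive integers
            p = a * b * c * d
            if best is None or p < best[0]:
                best = (p, (a, b, c, d), n)

print(best)   # (224, (1, 8, 4, 7), 65)
```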
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I solve this Olympiad question with floor functions?
Emmy is playing with a calculator. She enters an integer, and takes its square root. Then she repeats the process with the integer part of the answer. After the third repetition, the integer part equals 1 for the first time. What is the difference between the largest and the smallest number Emmy could have started with?
$$
\text{(A)} \; 229 \qquad
\text{(B)} \; 231 \qquad
\text{(C)} \; 239 \qquad
\text{(D)} \; 241 \qquad
\text{(E)} \; 254
$$
This was Problem 19 from the first round of the Norwegian Mathematical Olympiad, 2014–15.
I think the floor function applies well here.
The integer part obviously is the floor part of it.
Let $x_1$ be the initial.
$x_2 = \left \lfloor{{\sqrt{x_1}}}\right \rfloor $
$x_3 = \left \lfloor{{\sqrt{x_2}}}\right \rfloor$
$x_4 = \left \lfloor{{\sqrt{x_3}}}\right \rfloor = 1$ <-- Last one.
So we need to find $b \le x_1 \le c$ such that $b$ is the smallest number to begin with and $c$ is the largest number to begin with.
$\sqrt{b} \le \sqrt{x_1} \le \sqrt{c}$
|
Clearly we must have $x_3<4$, $x_2<16$, and $x_1<256$, so the largest possible value of $x_1$ is $255$. Since $x_3>1$, we know that $x_3\ge 2$ and hence that $x_2\ge 4$. This implies that $x_1\ge 16$. Thus, the range is from $16$ through $255$, the difference being $239$.
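This is easy to confirm by brute force; a small Python sketch (math.isqrt is the integer square root, i.e. the integer part of the square root):

```python
from math import isqrt

def steps_to_one(x):
    """How many floor-square-root steps until the result is 1."""
    n = 0
    while x > 1:
        x = isqrt(x)
        n += 1
    return n

valid = [x for x in range(2, 100_000) if steps_to_one(x) == 3]
print(min(valid), max(valid), max(valid) - min(valid))   # 16 255 239
```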
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1117948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Let $F$ be a linear operator such that $F^2 - F + I = 0$, show that $F$ is invertible and $F^{-1} = I - F$ I didn't understand this exercise. I tried working with
$$F^2 - F + I = 0\implies (F-I)(F) + I =0$$
but I really don't understand how to prove that $F$ is invertible, nor how to find the inverse. Any hints? Thank you so much!
|
Hint 1: can $0$ be an eigenvalue of $F$?
Hint 2: can you multiply $F^2-F+I=0$ by $F^{-1}$ once you know it exists?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find the number of children, given that the estate was divided evenly between them Problem of the Week at University of Waterloo:
A man died leaving some money in his estate. All of this money was to be
divided among his children in the following manner:
*
*$x$ to the first born plus $1/16$ of what remains, then
*$2x$ to the second born plus $1/16$ of what then remains, then
*$3x$ to the third born plus $1/16$ of what then remains, and so on.
When the distribution of the money was complete, each child received the same amount and no money was left over. Determine the number of children.
Can anyone hint at a strategy I could use to solve this? I've tried everything I could think of. All I want is a hint, though. No solutions please.
|
Hint
Let $y$ be the total amount he left. How much did the first born receive? How much did the second born receive?
Set them equal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Order of any element divides the largest order. Let $A$ be a finite Abelian group and let $k$ be the largest order of elements in $A$. Prove that the order of every element divides $k$. This is my attempt; I sense there is something wrong/incorrect in it, but I can't figure out what... Also, I didn't use the fact that $A$ is abelian... I would appreciate your reply.
$Attempt$: $|G|=n$. Suppose there is an element $a\in A$ of order $m$ that does not divide $k$. Since $k$ is the largest order of elements in $A$, there exists $g\in A$ such that $o(g)=k$. $(ag)^m=g^m\ne 1$ and $(ag)^k=a^k\ne 1$ (this follows from the commutativity of $A$). The order of $ag$ has to be some $p$ with $p \mid \operatorname{lcm}(m,k)$. If $\operatorname{lcm}(m,k)=p$ then $(ag)^p=(ag)^{mt}=1$ where $mt=p$, $t\in \Bbb{N}$. But $(ag)^{mt}=((ag)^m)^t=(g^m)^t=g^{mt}=1$, and since $t$ is the smallest natural number that fulfills this, $mt=k$, i.e. $m$ divides $k$. A contradiction.
|
Let $x\in A$ be an element of largest order $m$ in $A$. Assume there is $g\in A$ such that $|g|$ does not divide $m$. WLOG, assume that $|g|=p$ where $p$ is a prime; then $\gcd(p,m)=1$. Now, as $A$ is abelian, $g$ and $x$ commute and $|gx|=\operatorname{lcm}(p,m)=pm$. Note that $\operatorname{lcm}(p,m)=pm$ as $p$ is a prime and $p\nmid m$. Hence, $|gx|=pm > m$, which gives a contradiction, as $x$ has largest order $m$. (I hope.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Can the cube of every perfect number be written as the sum of three cubes? I found an amazing conjecture: the cube of every perfect number can be written as the sum of three positive cubes. The equation is
$$x^3+y^3+z^3=\sigma^3$$
where $\sigma$ is a perfect number
(well it holds good for the first three perfect numbers that is):
$${ 3 }^{ 3 }+{ 4 }^{ 3 }+{ 5 }^{ 3 }={ 6 }^{ 3 }$$
$${ 21 }^{ 3 }+{ 18 }^{ 3 }+{ 19 }^{ 3 }={ 28 }^{ 3 }$$
$${ 495 }^{ 3 }+{ 82 }^{ 3 }+{ 57 }^{ 3 }={ 496 }^{ 3 }$$
Is what I am proposing, namely that the cube of any perfect number can be expressed as the sum of three positive cubes, true?
If it is then can we prove it?
|
This is more of a comment than an answer.
There is a formula for finding values of $a, b, c, d$ in the following equation: $$a^3 + b^3 + c^3 = d^3$$ where $$\forall x, y\in \mathbb{Z}, \ \begin{align} a &= 3x^2 + 5y(x - y), \ b = 2\big(2x(x - y) + 3y^2\big) \\ c &= 5x(x - y) - 3y^2, \ d = 2\big(3x^2 - 2y(x - y)\big) \end{align}$$
Therefore if we prove this conjecture, we prove that every perfect number $d$ must be even! I would also like to expand on this with a theorem of mine:
If $$\forall\{x, y, z\}\subset \mathbb{N}, \ 6^3 + (2x)^3 + (2y - 1)^3 = z^3$$ then $$z \equiv \pm 1 \pmod 6$$ where $z$ is a prime number. If $z$ is not a prime number, then $z\equiv3\pmod 6$.
Examples: $$\begin{align} 6^3 + 8^3 + 1^3 &= 9^3 \qquad \ \ \ \ \mid9 &\equiv 3 \pmod 6 \\ 6^3 + 32^3 + 33^3 &= 41^3 \qquad \ \ \mid41 &\equiv 5 \pmod 6 \\ 6^3 + 180^3 + 127^3 &= 199^3 \qquad \mid199 &\equiv 1 \pmod 6 \\ 6^3 + 216^3 + 179^3 &= 251^3 \qquad \mid251 &\equiv 5 \pmod 6 \\ 6^3 + 718^3 + 479^3 &= 783^3 \qquad \mid783 &\equiv 3 \pmod 6 \\ 6^3 + 768^3 + 121^3 &= 769^3 \qquad \mid769 &\equiv 1 \pmod 6 \end{align}$$
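The parametric identity quoted above can be verified symbolically; a sympy sketch (the expanded difference should be identically $0$):

```python
from sympy import symbols, expand

x, y = symbols('x y')
a = 3*x**2 + 5*y*(x - y)
b = 2*(2*x*(x - y) + 3*y**2)
c = 5*x*(x - y) - 3*y**2
d = 2*(3*x**2 - 2*y*(x - y))

print(expand(a**3 + b**3 + c**3 - d**3))   # 0
# e.g. x=1, y=0 gives the classic 3^3 + 4^3 + 5^3 = 6^3
```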
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 1
}
|
What does 'forms a right-handed set' mean? In a question I am reading, the following question appears.
What if $\vec{A},\vec{B}$, and $\vec{C}$ are mutually perpendicular and form a right-handed set?
What exactly does "form a right-handed set" mean? That $\hat{A}\times\hat{B}=\hat{C}$? Or does it mean the stronger equality $\vec{A}\times \vec{B}=\vec{C}$?
|
There are several equivalent definitions, but here is one that is often convenient:
The ordered triplet $(\vec A,\vec B,\vec C)$ of vectors in $\mathbb R^3$ is called right-handed if $\det(\vec A,\vec B,\vec C)>0$, where $(\vec A,\vec B,\vec C)$ is the square matrix formed by taking the three vectors as columns.
(Note that the determinant is necessarily nonzero since you assumed the vectors to be orthogonal and (hopefully also) nonzero.)
Similarly the triplet is left-handed if the determinant is negative.
The triplet is linearly dependent if the determinant is zero.
This generalizes to any dimension: we can easily define what it means for an $n$-tuple of vectors in $\mathbb R^n$ to be right-handed.
If you multiply each vector in the multiplet by the same invertible square matrix, handedness is preserved if the matrix has positive determinant and reversed if negative.
Some intuitive properties are easy to observe from this point of view:
If you interchange any two vectors in the multiplet, the handedness is reversed.
If you permute the vectors cyclically, handedness is preserved.
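A small numeric illustration with numpy, using both the determinant test above and the equivalent scalar-triple-product test $(\vec A\times\vec B)\cdot\vec C>0$:

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

print(np.linalg.det(np.column_stack([A, B, C])) > 0)    # True: right-handed
print(np.dot(np.cross(A, B), C) > 0)                    # same test, triple product

# Interchanging two vectors reverses the handedness:
print(np.linalg.det(np.column_stack([B, A, C])) > 0)    # False: left-handed
```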
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Partition of fractional parts where each sum of them has to be at least 1 Let $ a_1,\ldots,a_t \in \mathbb{Q} \setminus \mathbb{Z} $ be with $ \sum_{i=1}^t \lbrace a_i \rbrace \in \left[k,k+1\right) $ for some $ k \in \mathbb{N} $ with $ k \ge 4 $. Here $ \lbrace x \rbrace $ stands for the fractional part of $ x $.
Are there always $ k-2 $ disjoint subsets $ A_j \subseteq \lbrace 1,\ldots,t \rbrace $ ($j =1,\ldots,k-2 $) with $ \sum_{i \in A_j} \lbrace a_i \rbrace \ge 1 $ for all $ j $?
|
No. Let $a_i=0.99$ for all $i$ and $t=8$. Then $k=7$. But if $ A_j \subseteq \lbrace 1,\ldots,t \rbrace $ with $ \sum_{i \in A_j} \lbrace a_i \rbrace \ge 1 $ then $|A_j|\ge 2$, so there can be at most $4<5=k-2$ such mutually disjoint subsets.
PS. Instead of fractional parts we can simply consider the rational numbers $a_i\in (0,1).$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to solve second order congruence equation if modulo is not a prime number the equation is $x^2 = 57 \pmod{64}$
I know how to solve equations like
(*) $ax^2 +bx +c = 0 \pmod{p}$, where $p$ is prime
and I know all the definitions, like Legendre's symbol and all of the other quadratic residue terminology, and that (*) can be rewritten in the form $x^2 = c \pmod{p}$, but here $64$ is $2^6$. I know that if $a = b \pmod{n}$ and $d|n$ then $a = b \pmod{d}$, but I'm not sure if I can apply this here.
Can someone solve it without explaining related terminology like what a quadratic residue is, et cetera?
Thanks!
|
First, it is known from the theory of quadratic residues modulo powers of $2$ that $\;x^2=z\pmod{2^k}\;$ has a solution for $\;k\ge 3\;$ iff $\;z=1\pmod 8\;$, which is the case here.
Now:
$$x_k^2=z\pmod{2^k}\implies x_{k+1}=\begin{cases}x_k+2^{k-1}&,\;\;\frac{x_k^2-z}{2^k}=1\pmod 2\\{}\\x_k&,\;\;\frac{x_k^2-z}{2^k}=0\pmod 2\end{cases}$$
and you can check easily $\;x_{k+1}\;$ is a solution to the equation modulo $\;2^{k+1}\;$
In our case:
$$x_3=1\;,\;\;\text{because}\;\;1^2=57\pmod 8\;,\;\;\text{and since}\;\;\frac{1^2-57}8=1\pmod 2$$
we get that
$$x_4=1+2^2=5\;\;\text{fulfills}\;\;5^2=57\pmod{16},\;\;\text{and since}\;\;\frac{5^2-57}{16}=0\pmod 2$$
then also
$$x_5=5\;\;\text{fulfills}\;\;5^2=57\pmod{32},\;\;\text{and since}\;\;\frac{5^2-57}{32}=1\pmod 2$$
we get that
$$x_6=5+2^4=21\;\;\;\text{fulfills}\;\;\;21^2=57\pmod{64}$$
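The lifting procedure is easy to automate; a Python sketch (the function name sqrt_mod_2k is mine):

```python
def sqrt_mod_2k(z, k):
    """One solution of x^2 = z (mod 2^k), assuming z = 1 (mod 8) and k >= 3."""
    assert z % 8 == 1 and k >= 3
    x = 1                                  # x^2 = z (mod 8)
    for j in range(3, k):                  # lift mod 2^(j+1)
        if ((x*x - z) >> j) & 1:           # is (x^2 - z)/2^j odd?
            x += 1 << (j - 1)              # then x + 2^(j-1) works mod 2^(j+1)
    return x % (1 << k)

print(sqrt_mod_2k(57, 6))    # 21
print(pow(21, 2, 64))        # 57
```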
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
An easy example of a non-constructive proof without an obvious "fix"? I wanted to give an easy example of a non-constructive proof, or, more precisely, of a proof which states that an object exists, but gives no obvious recipe to create/find it.
Euclid's proof of the infinitude of primes came to mind, however there is an obvious way to "fix" it: just try all the numbers between the biggest prime and the constructed number, and you'll find a prime in a finite number of steps.
Are there good examples of simple non-constructive proofs which would require a substantial change to be made constructive? (Or better yet, can't be made constructive at all).
|
There's a classic example from point-set topology. The statement is this: suppose $X$ is a topological space and $Y \subseteq X$ is such that for all $y \in Y$ there exists an open set $U$ such that $y \in U$ and $U \subseteq Y$. Then $Y$ is open.
Most introductory textbooks will tacitly use the axiom of choice along the following lines. For each $y$, choose $U_y$ satisfying the above properties. Then $Y = \bigcup_{y \in Y} U_y$ is a union of open sets, so $Y$ is open.
In fact the axiom of choice is not needed at all. Instead, let $S$ be the set of open sets contained in $Y$ and simply note that $\bigcup_{U \in S} U = Y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57",
"answer_count": 15,
"answer_id": 14
}
|
Evaluate the integral $\int_0^{\ln(2)} \sqrt{(e^x-1)}dx$
Why is it wrong to...
$$\int_0^{\ln(2)} \sqrt{(e^x-1)} dx= \int_0^{\ln(2)} (e^x-1)^{1/2} dx= \frac{2}{3}(e^x-1)^{3/2} |_0^{\ln(2)}$$
|
You forgot to account for the derivative of $e^x - 1$: differentiating $\frac{2}{3}(e^x-1)^{3/2}$ produces an extra factor $e^x$ by the chain rule, so that antiderivative is wrong. Substitute instead.
Put $e^x - 1 = u$. Then $e^x = u+1$.
Then $du = e^x\,dx\iff dx = \frac{du}{u+1}$
This gives us $$\int_0^1 \frac{u^{1/2}}{u+1} \,du$$
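For completeness, this last integral evaluates to $2-\pi/2$; a sympy check, with a numeric cross-check of the original $x$-integral via mpmath (a sketch):

```python
import mpmath
from sympy import symbols, integrate, sqrt, simplify

u = symbols('u', positive=True)
print(simplify(integrate(sqrt(u) / (u + 1), (u, 0, 1))))   # 2 - pi/2

# numeric cross-check against the original integral in x:
print(mpmath.quad(lambda x: mpmath.sqrt(mpmath.e**x - 1), [0, mpmath.log(2)]))
# both are about 0.42920
```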
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to prove the identity $\prod_{n=1}^{\infty} (1-q^{2n-1}) (1+q^{n}) =1 $ for $|q|<1$? Euler's product identity is as follows:
\begin{align}
\prod_{n=1}^{\infty} (1-q^{2n-1}) (1+q^{n}) =1
\end{align}
How can one explicitly prove this identity?
Note that here $q$ denotes a complex number satisfying $|q|<1$.
|
For $\lvert q\rvert < 1$, define
$$f(q) = \prod_{n=1}^\infty (1-q^{2n-1})(1+q^n).$$
The product is absolutely and locally uniformly convergent on the open unit disk, hence $f$ is holomorphic there, and the product can be reordered and regrouped as desired. It is clear that $f(0) = 1$, and by reordering and regrouping, we have
$$\begin{aligned}
f(q) &= \prod_{n=1}^\infty (1-q^{2n-1})(1+q^n)\\
&= \prod_{n=1}^\infty(1-q^{2n-1})(1+q^{2n-1})\cdot \prod_{n=1}^\infty (1+q^{2n})\\
&= \prod_{n=1}^\infty \bigl(1-(q^2)^{2n-1}\bigr)\bigl(1+(q^2)^n\bigr)\\
&= f(q^2).
\end{aligned}$$
Since for $0 < \lvert q \rvert < 1$ the set $\{ q^{2^k} : k \in \mathbb{N}\}$ has an accumulation point in the unit disk (namely $0$), it follows that $f$ is constant; by continuity, $f(q)=\lim_{k\to\infty}f(q^{2^k})=f(0)=1$, so the product is identically $1$.
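A numeric sanity check of the identity with numpy, truncating the product at $N$ factors (a sketch):

```python
import numpy as np

def f(q, N=200):
    n = np.arange(1, N + 1)
    return np.prod((1 - q**(2*n - 1)) * (1 + q**n))

for q in [0.3, -0.5, 0.9, 0.5j]:
    print(q, f(q))   # all values are (numerically) 1
```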
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1118993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
A question about a notation Let $A$ be a non-singular square matrix. Which of the following notations is correct?
$${A^2}^{-1} \qquad \text{or} \qquad A^{-2}$$
|
I assume that the first one is interpreted as $(A^2)^{-1}$. Note that $A^{-2}$ is defined as $(A^{-1})^2$. Hence, if you want to interpret the two as meaning the same thing you have to think about it:
Exercise: Determine whether or not $(A^2)^{-1}=(A^{-1})^2$ for all non-singular square matrices $A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
showing a (B,p) is complete Let $(A,d)$ be a complete metric space and let $B$ be closed in $A$. Then prove that $(B,p)$ is complete, with $p = d|_{B\times B}$ (i.e. the restriction of $d$ to $B\times B$).
thoughts:
Since $B$ is closed in $A$ and $(A,d)$ is complete, I figured that $(B,p)$ is complete. From here I know I have to show that for $x_n,x_m \in B$, $p(x_n,x_m) < \epsilon$ for $n,m\geq N$, but I am not sure how to deal with the restriction of $d$ to $B\times B$ - I mean, how would I go about evaluating $p(x_n,x_m)$?
edit: progress:
$d|_{B\times B}(x_n,x_m) = p(x_n,x_m)$ for $x_n, x_m \in B$; now if I let $(x_n)$ be Cauchy I can see that $p<\epsilon$, but I have no idea how to prove it.
|
Let $X$ be a complete metric space and $F$ a closed set in $X$. If $x_n$ is a Cauchy sequence in $F$, then it has a limit $x\in X$. Since $F$ is closed, $x\in F$. Hence $F$ is complete.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does the dot product angle formula work for $\Bbb{R}^n$? Whenever I have seen this formula discussed
\begin{equation}
\textbf{A} \cdot \textbf{B} = \|\textbf{A} \| \|\textbf{B} \| \cos\theta
\end{equation}
I have always seen it using vectors in $\Bbb{R}^2$.
I was wondering if this property works if $\dim \textbf{A}= \dim \textbf{B} = n$. I feel like it wouldn't because for spherical coordinates we need more than one angle. We use $\phi$ and $\theta$. But I have no idea. Maybe I am confusing the meaning of $\theta$ in the dot product angle formula. It isn't a dimension right?
My Question:
Does the dot product angle formula apply to $\Bbb{R}^n$?
|
Yes it does. If you accept that this holds in $\mathbb R^2$, the fact that it holds in $\mathbb R^n$ follows fairly easily - notice that for any choice of two vectors $A$ and $B$, it is always possible to choose a two-dimensional subspace (i.e. a plane) containing both vectors (the span of the two vectors works whenever they are linearly independent), and the dot product on this subspace is the same as in $\mathbb R^2$ - so we're measuring the angle between $A$ and $B$ as if they were just two-dimensional vectors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Elliptic differential operator I am given the differential operator $D(f):=-(fg)'+hf$ and $D^* (f) = g \cdot f' + hf$ where $h,g$ are some smooth functions and want to find out under which conditions, these two operators are elliptic. Does anybody know how to do this, I am really puzzled by the wikipedia definition of elliptic differential operators.
|
The heat equation is parabolic:
$$
\frac{\partial f}{\partial t} = \frac{\partial^{2}f}{\partial^{2}x}+\frac{\partial^{2}f}{\partial y^{2}}+\frac{\partial^{2}f}{\partial z^{2}}
$$
Laplace's equation is elliptic:
$$
\frac{\partial^{2}f}{\partial^{2}x}+\frac{\partial^{2}f}{\partial y^{2}}+\frac{\partial^{2}f}{\partial z^{2}} = 0
$$
The Wave equation is hyperbolic:
$$
\frac{\partial^{2}f}{\partial t^{2}}=\frac{\partial^{2}f}{\partial^{2}x}+\frac{\partial^{2}f}{\partial y^{2}}+\frac{\partial^{2}f}{\partial z^{2}}
$$
These designations are related to $y-x^{2}=C$, $x^{2}+y^{2}=C$ and $y^{2}-x^{2}=C$, whose level curves are parabolas, ellipses and hyperbolas, respectively.
Your equation is never classified as elliptic because such designations are applied only to Partial Differential Equations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Tensoring the exact sequence by a faithfully flat module I have problems to do the exercise 1.2.13 from Liu's "Algebraic Geometry and Arithmetic curves".
First, Liu defines that if $M$ is a flat module over a ring $A$, then $M$ is faithfully flat over $A$ if it satisfies one of the following three equivalent conditions:
Let $M$ be a flat module. Then $M$ is $\textit{faithfully flat}$ if
*
*$M\ne \mathfrak mM$ for every maximal ideal $\mathfrak m$ of $A$.
*Let $N$ be an $A$-module. If $M\otimes_AN=0$, then $N=0$.
*Let $f:N_1\to N_2$ be a homomorphism of $A$-modules. If $f_M:N_1\otimes M\to N_2\otimes M$ is an isomorphism, then so is $f$.
In the text Liu proves the equivalence of these three conditions.
Now, the exercise 1.2.13 asks the following:
Let $M$ be a faithfully flat $A$-module. Let $N'\to N \to N''$ be a sequence of $A$-modules. Why is it exact if and only if $N'\otimes M\to N\otimes M\to N''\otimes M$ is exact?
|
Although this is the usual property that defines faithful flatness, the one of Liu's definitions that is helpful here is the following:
Let $M$ be a flat $A$-module. Then $M$ is faithfully flat iff for any $A$-module $N$ such that $M\otimes_AN=0$ we have $N=0$.
Recall that $N'\stackrel{u}\to N\stackrel{v}\to N''$ is exact iff $\operatorname{im}u=\ker v$.
Suppose that $N'\otimes M\stackrel{u\otimes 1_M}\to N\otimes M\stackrel{v\otimes 1_M}\to N''\otimes M$ is exact. Then $(v\otimes 1_M)\circ(u\otimes 1_M)=0$, that is, $(v\circ u)\otimes 1_M=0$. This implies that $v(u(N'))\otimes M=0$, so $v(u(N'))=0$, that is, $v\circ u=0$. Thus $\operatorname{im}u\subseteq\ker v$.
But we know more, namely $\ker(v\otimes 1_M)=\operatorname{im}(u\otimes 1_M)$. Let's use it. If $n\in\ker v$, then $n\otimes x\in\ker(v\otimes 1_M)$ for all $x\in M$, so $(\ker v)\otimes M\subseteq\ker(v\otimes 1_M)=\operatorname{im}(u\otimes 1_M)=(\operatorname{im}u)\otimes M$. It follows $(\ker v/\operatorname{im}u)\otimes M=0$, so $\ker v=\operatorname{im}u$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Universal Cover of $\mathbb{R}P^{2}$ minus a point I've already calculated the fundamental group of $\mathbb{R}P^{2}$ minus a point as $\mathbb{Z}$, since we can think of real projective space as a quotient of the unit square, and puncturing it we can show it is homotopy equivalent to the boundary, which is homotopy equivalent to the unit circle.
I am having some trouble finding the universal cover of punctured real projective space, however. I'm not sure how to construct one. I know that punctured real projective space is homeomorphic to the mobius band, but I'm not sure if that helps me here. Thank you for any help.
|
Letting $M$ denote the Mobius band, there is a universal covering map $\mathbb{R}^2 \to M$ with infinite cyclic deck transformation group $\langle r \rangle$ generated by the "glide reflection"
$$r : (x,y) \to (x+1,-y)
$$
As with any universal covering map, $M$ is the orbit space of this action. One can verify this by noticing that the vertical strip $[0,1] \times \mathbb{R}$ is a fundamental domain for $r$, its left side $0 \times \mathbb{R}$ being glued to its right side $1 \times \mathbb{R}$ by identifying $(0,y)$ to $(1,-y)$. This is exactly the gluing pattern for constructing the Mobius band that we all learned on our mother's knee.
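As a quick check that $\langle r \rangle$ acts as claimed, note that
$$r^{2}(x,y)=r(x+1,-y)=(x+2,y),$$
so $r^{2}$ is the horizontal translation by $2$; in particular no positive power of $r$ is the identity, the deck group is infinite cyclic, and this matches the fundamental group $\mathbb{Z}$ computed in the question.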
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
If $\mu(E)=0$, show that $\mu(E\cup A)=\mu(A\setminus E)=\mu(A)$.
If $\mu(E)=0$, show that $\mu(E\cup A)=\mu(A\setminus E)=\mu(A)$.
I just started learning about measure this week, so I don't know any theory about measure except the definition of outer measure and that a countable set has measure zero.
Remark: $\mu(\cdot)$ denotes the outer measure
Attempt.
Suppose $\mu(E)=0$. Then $\mu(A\cup E)\leq\mu(A)+\mu(E)=\mu(A)$.
If I can also show that $\mu(A \cup E)\geq \mu (A)$, then equality would follow.
I know that if $A$ and $E$ were disjoint compact sets, then $\mu(A)+\mu(E)\leq \mu(A \cup E)$, so maybe I can say something like a compact set in $\mathbb{R}$ is closed and bounded by Heine-Borel, and somehow use the fact that $\mu([a,b])=\mu((a,b))$ to help.
For $\mu(A\setminus E)=\mu(A)-\mu(E)=\mu(A)$
A hint in the right direction would be appreciated. Thanks.
|
Note that if $S \subset T$, then $\mu(S) \leq \mu(T)$: outer measure is monotone for arbitrary sets, since every countable cover of $T$ is also a cover of $S$. So, $\mu(A) \leq \mu(A \cup E)$, and you already have the other inequality.
Similar idea will work for the next equality as well.
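Spelling the second equality out, since $A\setminus E\subseteq A\subseteq (A\setminus E)\cup E$, monotonicity and subadditivity of the outer measure give
$$\mu(A\setminus E)\le\mu(A)\le\mu\big((A\setminus E)\cup E\big)\le\mu(A\setminus E)+\mu(E)=\mu(A\setminus E),$$
so all of these are equalities.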
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Moment generating function gives an undefined first moment, but first moment still exists? Let's say we have a probability density function given by
$f_X(x) = 2x$ for $0 \leq x \leq 1$.
(Note $\int_0^1 f_X(x)\,dx = 1$.)
The moment generating function is
$$\int_0^1 e^{tx}\cdot2x \,dx$$
which wolfram alpha gives as
$$\frac{2(e^t(t - 1) + 1)}{t^2}$$
Trying to get the first moment (which we know to be $\int_0^1 x \cdot 2x \, dx = 2/3$), we take the first derivative of the mgf at $t = 0$:
$$\frac{d}{dt}\frac{2(e^t(t - 1) + 1)}{t^2}$$
which wolfram alpha gives as
$$\frac{2e^t(t^2-2t+2) - 4}{t^3}$$
which is clearly not defined for $t = 0$. If I didn't know better, I would conclude that the first moment for $f_X(x) = 2x$ does not exist (but it's clearly 2/3).
Why is this happening? Is it possible for a moment generating function to not give a moment, but for the moment to exist? Or am I calculating the moments incorrectly?
EDIT: I've noticed that the limit of the first derivative of the mgf that I gave above as t approaches 0 is indeed 2/3. So is the definition of the kth moment then not "the kth derivative of the mgf evaluated at t = 0", but rather "the limit as t approaches 0 of the kth derivative of the mgf?"
|
The singularity of the MGF at $t=0$ is removable. Use the Maclaurin series of the exponential function. The MGF is actually defined everywhere.
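To make this concrete: from $e^t(t-1)+1=\sum_{n\geq 2}\frac{n-1}{n!}t^n$ one gets
$$\frac{2\left(e^t(t - 1) + 1\right)}{t^2}=\sum_{k=0}^{\infty}\frac{2(k+1)}{(k+2)!}\,t^{k}=1+\frac{2}{3}t+\frac{1}{4}t^{2}+\cdots,$$
so the MGF extends analytically through $t=0$ and its first derivative there is $2/3$, as expected. In general the $k$th moment is $k!\cdot\frac{2(k+1)}{(k+2)!}=\frac{2}{k+2}$, which agrees with $\int_0^1 x^k\cdot 2x\,dx$.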
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
On average, how many times must I roll a dice until I get a $6$? On average, how many times must I roll a dice until I get a $6$?
I got this question from a book called Fifty Challenging Problems in Probability.
The answer is $6$, and I understand the solution the book has given me. However, I want to know why the following logic does not work: The chance that we do not get a $6$ is $5/6$. In order to find the number of dice rolls needed, I want the probability of there being a $6$ in $n$ rolls being $1/2$ in order to find the average. So I solve the equation $(5/6)^n=1/2$, which gives me $n=3.8$-ish. That number makes sense to me intuitively, where the number $6$ does not make sense intuitively. I feel like on average, I would need to roll about $3$-$4$ times to get a $6$. Sometimes, I will have to roll less than $3$-$4$ times, and sometimes I will have to roll more than $3$-$4$ times.
Please note that I am not asking how to solve this question, but what is wrong with my logic above.
Thank you!
|
The probability of the time of first success is given by the Geometric distribution.
The distribution formula is:
$$P(X=k) = pq^{k-1}$$
where $q=1-p$.
It's very simple to explain this formula. Let's say that getting a $6$ when rolling the die counts as a success. Then the probability of getting a success at the first try is
$$P(X=1) = p = pq^0= \frac{1}{6}$$
To get a success at the second try, we have to fail once and then get our 6:
$$P(X=2)=qp=pq^1=\frac{1}{6}\frac{5}{6}$$
and so on.
The expected value of this distribution answers this question: how many tries do I have to wait before getting my first success, as an average? The expected value for the Geometric distribution is:
$$E(X)=\displaystyle\sum^\infty_{n=1}npq^{n-1}=\frac{1}{p}$$
or, in our example, $6$.
Edit: We are assuming multiple independent tries with the same probability, obviously.
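The closed form of that expected value comes from differentiating the geometric series: for $0<q<1$,
$$\sum_{n=0}^{\infty}q^{n}=\frac{1}{1-q}\quad\Longrightarrow\quad\sum_{n=1}^{\infty}nq^{n-1}=\frac{1}{(1-q)^{2}},$$
hence
$$E(X)=p\sum_{n=1}^{\infty}nq^{n-1}=\frac{p}{(1-q)^{2}}=\frac{p}{p^{2}}=\frac{1}{p}.$$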
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 5,
"answer_id": 3
}
|
When is a curve parametrizable? Is there a way in general to tell whether a given curve is parametrizable?
|
Suppose you are considering the level set $$f(x,y)=c$$ and we want to study whether we can parametrize this curve near the point $(x_0,y_0)$, which is a point in it, $f(x_0,y_0)=c$. If one of the partial derivatives is non-zero at that point, say $\frac{\partial f}{\partial y}(x_0,y_0)\neq0$ then we can use the Implicit function theorem to argue that there is such a local parametrization.
The implicit function theorem can be used for any number of variables.
Example: Suppose we study $$\sin(y)+y-xe^x=0$$ and we want to see if there is local parametrization at the origin $(0,0)$. We see that $$\frac{\partial}{\partial y}(\sin(y)+y-xe^x)|_{(0,0)}=(\cos(y)+1)|_{(0,0)}=2\neq0$$ Therefore, by the Implicit function theorem there is a function $g(x)$ such that $$\sin(g(x))+g(x)-xe^x=0$$ for all $x$ in a neighborhood of $x=0$.
When $f(x,y)$ is a polynomial, sometimes the implicit function theorem is not applicable because the conditions on the derivatives are not met.
Example: If we look at $x^2-y^3=0$ at the origin $(0,0)$ both partial derivatives are zero. Still, a parametrization exists $t\mapsto(t^3,t^2)$.
It is a very hard-to-prove theorem that in the case of polynomials nice parametrizations like this always exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1119954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Poincare duality isomorphism problem in the book "characteristic classes" This is the problem from the book, "characteristic classes" written by J.W. Milnor.
[Problem 11-C] Let $M = M^n$ and $A = A^p$ be compact oriented manifolds with smooth embedding $i : M \rightarrow A$. Let $k = p-n$.
Show that the Poincare duality isomorphism
$\cap \mu_A : H^k(A) \rightarrow H_n(A)$
maps the cohomology class $u^{'}|A$ dual to $M$ to the homology class $(-1)^{nk} i_{*} (\mu_M)$.
[We assume that the normal bundle $v^k$ is oriented so that $\tau_M \oplus v^k$ is orientation-preserving isomorphic to $\tau_A|M$.]
The proof makes use of the commutative diagram where $N$ is a tubular neighborhood of $M$.
I think that it suffices to show that the right vertical map sends the fundamental cohomology class of $H^k(N,N-M)$ to the fundamental homology class of $H_n(N) \cong H_n(M)$, because the right vertical map is an isomorphism. However, I cannot see why the right vertical map is an isomorphism.
Can anybody give me a hint?
Thank you.
|
Start picking in the upper left angle the element $u' \otimes \mu_A \in H^k(A,A\setminus M)\otimes H_p(A)$ (using Milnor's notation $u'$ is the dual cohomology class of $M$), then you have to chase this element a little bit. The left "path" $( \downarrow$ and then $\rightarrow)$ is straightforward. You end up with $u'|A\cap \mu_A$.
For the "right" path $(\rightarrow$ and then $\downarrow$ and then $\leftarrow )$ use the so called coherency of the fundamental class $\mu_A$ and Corollary 11.2 in Milnor's Characteristic Classes to prove that you end up in $H^k(N,N\setminus M)\otimes H_p(N,N\setminus M)$ with $th(\nu)\otimes \mu_{N,N\setminus M}$ (work with compactly supported sections of the orientation bundle of $A$, which we know being trivial since $A$ is orientable, for more details see Bredon's Topology and Geometry chapter VI.7).
Since the vertical right arrow (after choosing the Thom class of the normal bundle as the cohomology class in $H^k(N,N\setminus M)\cong H^k(E(\nu),E(\nu)_0)$) is the so called homological Thom isomorphism (tom Dieck's Algebraic Topology Theorem 18.1.2 page 439), it's an isomorphism so it maps $\mu_{N,N\setminus M}$ to $\mu_M$ (up to a sign and using $H_n(N)\cong H_n(M)$). The conclusion follows at once.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Decide the smooth function $r : \mathbb R \rightarrow \mathbb R$ of the equation $r(t)^2 + r'(t)^2 = 1$. Suppose $r:\mathbb R \rightarrow \mathbb R$ is a smooth function and suppose $r(t)^2 + r'(t)^2 = 1$.
I want to determine the function $r(t)$.
I see that $r(t)^2 + r'(t)^2 = 1$, so I could take $r(t) = \pm \cos(t), \pm \sin(t)$.
However, are these all the solutions ? How do I see that the list above is exhaustive, if it is ?
|
This is one of the rare occasions where it actually helps to differentiate the given ODE! In this way we obtain
$$2r'(t)\bigl(r(t)+r''(t)\bigr)=0\qquad\forall t\ .\tag{1}$$
This is satisfied when $r'(t)\equiv0$, or $r(t)=c$ for some $c\in{\mathbb R}$. From the original ODE it then follows that $c^2=1$, so that we obtain the two solutions $$r(t)\equiv\pm1\quad(-\infty<t<\infty)\ .$$ It will turn out that these are "special" solutions.
But there have to be more interesting solutions of $(1)$! If $r(\cdot)$ is a solution with $r'(t_0)\ne0$ for some $t_0$ then in a neighborhood $U$ of $t_0$ this solution has to satisfy the well known ODE
$$r(t)+r''(t)=0\qquad(t\in U)\ .\tag{2}$$
The general solution of $(2)$ is
$$r(t)=A\cos t+B\sin t\tag{3}$$
with $A$, $B$ arbitrary real constants. From the original ODE it then follows that necessarily $A^2+B^2=1$, and it is easily checked that this condition is also sufficient for $(3)$ to be a solution. In terms of graphs this means that the solution curves are arbitrary sine waves of amplitude $1$. The "extra solutions" found earlier are the envelopes of these waves. Note that an IVP with initial point $(t_0,\pm1)$ has several solutions, four different solution germs in all.
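Note that $A^{2}+B^{2}=1$ lets one write $A=\cos t_{0}$ and $B=\sin t_{0}$, so the solutions $(3)$ are exactly the phase shifts
$$r(t)=\cos t_{0}\cos t+\sin t_{0}\sin t=\cos(t-t_{0});$$
the four special cases $t_{0}=0,\pi,\pi/2,3\pi/2$ recover the $\pm\cos t$ and $\pm\sin t$ from the question.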
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to determine significant figures involving radicals and exponents How do you determine the significant figures for solving equations with radicals and exponents? For example, how do you evaluate $x = \sqrt{4.56^2 +1.23^2}$?
|
Here is an abbreviation of the explanation I use in my 11th grade Chemistry and 12th grade Physics classes. This uses precision as is often used in American secondary schools, though it is not usually explained in quite this way.
There are two ways to measure precision: significant figures and decimal places. Significant figures (also called significant digits) are used in multiplication, division, powers, roots, and some other operations. Decimal places are used in addition and subtraction. In any operation, the proper precision of the answer equals the lowest precision of the operands. If multiple operations of the same kind are done consecutively, do rounding after doing all the operations. If two consecutive operations are of different kinds, you must round after each operation.
In your case, you have squaring, followed by addition, followed by a square root. This is significant figures followed by decimal places followed by significant figures, so we must round at each step.
First is squaring, which uses significant figures. If you think of this as powers, the $2$ is exact and does not affect the precision of the answer. In either case, each square has three significant figures, so we round each to three sig figs, getting
$$x=\sqrt{20.8+1.51}$$
Next we add one decimal place to two decimal places, giving one decimal place.
$$x=\sqrt{22.3}$$
Finally we take the square root of three significant figures, giving three sig figs.
$$x=4.72$$
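The same staged rounding can be scripted; here is a small illustrative sketch in Python (the helper round_sig and all names are mine, purely for illustration):
import math
def round_sig(value, sig):
    # Round `value` to `sig` significant figures (value assumed nonzero).
    exponent = math.floor(math.log10(abs(value)))
    return round(value, sig - 1 - exponent)
a = round_sig(4.56**2, 3)        # 20.7936  -> 20.8 (3 sig figs)
b = round_sig(1.23**2, 3)        # 1.5129   -> 1.51 (3 sig figs)
s = round(a + b, 1)              # 22.31    -> 22.3 (1 decimal place)
x = round_sig(math.sqrt(s), 3)   # 4.7222.. -> 4.72 (3 sig figs)
print(a, b, s, x)                # 20.8 1.51 22.3 4.72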
Is all that clear?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Use integration by substitution I'm trying to evaluate integrals using substitution. I had
$$\int (x+1)(3x+1)^9 dx$$
My solution: Let $u=3x+1$ then $du/dx=3$
$$u=3x+1 \implies 3x=u-1 \implies x=\frac{1}{3}(u-1) \implies x+1=\frac{1}{3}(u+2) $$
Now I get $$\frac{1}{3} \int (x+1)(3x+1)^9 (3 \,dx) = \frac{1}{3} \int \frac{1}{3}(u+2)u^9 du = \frac{1}{9} \int (u+2)u^9 du \\
= \frac{1}{9} \int (u^{10}+2u^9)\,du = \frac{1}{9}\left( \frac{u^{11}}{11}+\frac{2u^{10}}{10} \right) + C$$
But then I get to this one
$$\int (x^2+2)(x-1)^7 dx$$
and the $x^2$ in brackets is throwing me off.
I put $u=x-1\implies x=u+1,$ hence $x^2+2 =(u+1)^2 +2 = u^2+3$. So
$$ \int(x^2+2)(x-1)^7\, dx = \int (u+1)u^7 du = \int (u+u^7) du = \frac{u^2}{2}+\frac{u^8}{8} + C $$
Is this correct or have I completely missed the point?
|
$$\frac{19683 x^{11}}{11}+\frac{39366 x^{10}}{5}+15309 x^9+17496 x^8+13122 x^7+6804 x^6+\frac{12474 x^5}{5}+648 x^4+117 x^3+14 x^2+x$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Is $\text{Hom}(\prod_p \Bbb Z/p\Bbb Z, \Bbb Q) = 0$ possible without choice? That divisible abelian groups are precisely the injective groups is equivalent to choice; indeed, there are some models of ZF with no injective groups at all. Now, given that $\Bbb Q$ is injective, one immediately has that $\text{Hom}(\prod_p \Bbb Z/p\Bbb Z, \Bbb Q)$ is nontrivial (pick an element of infinite order in the former group; then the definition of injective gives a nonzero map to $\Bbb Q$ factoring the inclusion $\Bbb Z \hookrightarrow \Bbb Q$.)
Now, assuming that $\Bbb Q$ is not injective does not obviously prove that $\text{Hom}(\prod_p \Bbb Z/p\Bbb Z, \Bbb Q)$ is trivial, whence: is there a model of ZF in which this is true?
|
A related question is the following: does there exist a model of ZF where choice fails but the hom-set is nontrivial? The following is an answer by Andreas Blass.
The answer to that is yes. The reason is that the failure of choice can be such as to involve only very large sets, much larger than the groups in the question. That is, one can start with a model of ZFC, where the Hom-set in question is nonzero, and then one can force over this model to add new subsets of some cardinal kappa that is much bigger than the cardinal of the continuum, and then one can pass to a symmetric inner model to falsify the axiom of choice; all of this can be done in such a way as not to adjoin any new elements in the product of Z_p's, so that product is the same in the new model as in the original, and all its non-zero homomorphisms to Q in the original ZFC model remain perfectly good nonzero homomorphisms in the new model that violates choice.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
}
|
Convergence of Uniformly Distributed Random Variables (n-dimensional) Suppose that ${U_n} = ({U_{n1}},{U_{n2}},...,{U_{nn}})$ is uniformly distributed over the n-dimensional cube ${C_n}={[0,2]^n}$ for each $n=1,2,...$ That is, the distribution of ${U_n}$ is ${Q_n}(dx) = {2^{-n}} \chi_{C_{n}}(x)\,{m_n}(dx)$, where ${m_n}$ is n-dimensional Lebesgue measure. Let ${X_n} = {U_{n1}}{U_{n2}}\cdots{U_{nn}}$, $n \geq 1$. Show that:
a) ${X_n} \rightarrow 0$ in probability as $n \rightarrow \infty$, and
b) $\{{X_n} : n > 1\}$ is not uniformly integrable.
Attempt at a solution:
We want to show that, for each $\epsilon > 0$, $P(|{X_n} - X|>\epsilon) \rightarrow 0$ as $n \rightarrow \infty$. Now, $$P(|{X_n}|>\epsilon) = 1-\int {\frac{1}{{{2^n}}}} {\chi _{[{X_n} < \varepsilon ]}}{m_n}(dx)$$
so then $$P(|{U_{n1}}{U_{n2}}...{U_{nn}}|>\epsilon) = 1- \int {\frac{1}{{{2^n}}}} {\chi _{[{X_n} < \varepsilon ]}}{m_n}(dx)$$
$$=1- \frac{{{\varepsilon ^n}}}{{{2^n}}}$$
which is where I'm stuck. Is the math up to here correct? I'm just beginning to study probability, so I know that there are some similar questions on this site but the answers involve martingales and other concepts that we haven't reached yet. I'm also pretty shaky on my measure theory, so there is probably a nuance somewhere in the integral calculation that I'm missing.
For part b, I'm really not even sure where to start.
|
A way to show convergence in probability to $0$ is to show that $\mathbb E\left[\sqrt{X_n}\right]\to 0$. This can be done in the following way: we observe that
$$
\mathbb E\left[\sqrt{X_n}\right]=2^{-n}\int_{[0,2]^n}\prod_{i=1}^n\sqrt{x_i}\mathrm dx_1\dots\mathrm dx_n=\left(\frac 12\int_0^2\sqrt u\mathrm du\right)^n
$$
and since $\frac 12\int_0^2\sqrt u\mathrm du=2^{3/2}/3\lt 1$, it follows that $\mathbb E\left[\sqrt{X_n}\right]\to 0$.
Observe that $\mathbb E[X_n]=1$. If the sequence $\left(X_n\right)_{n\geqslant 1}$ was uniformly integrable, then by item 6 of this answer, we would deduce from the convergence in probability to $0$ that $\mathbb E[X_n]\to 0$, which is not the case as $\mathbb E[X_n]=1$.
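For completeness, the computation of $\mathbb E[X_n]$: the coordinates are independent under $Q_n$, so
$$\mathbb E[X_n]=\prod_{i=1}^{n}\mathbb E[U_{ni}]=\left(\frac 12\int_0^2 u\,\mathrm du\right)^{n}=1^{n}=1.$$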
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $2^{3^n} + 1$ is divisible by 9, for $n\ge1$
Prove that $2^{3^n} + 1$ can be divided by $9$ for $n\ge 1$.
Work of OP: The thing is I have no idea; everything I tried led nowhere.
Third party commentary: Standard ideas to attack such problems include induction and congruence arithmetic. (The answers will illustrate, among others, that in this case both approaches work nicely.)
|
Hint $\rm \,\ {\rm mod}\,\ A^{\large B} + 1\!:\,\ \color{#c00}{A^B\equiv -1}\ \Rightarrow\ \color{}{A}^{\large BC}\equiv (\color{#c00}{A^{\large B}})^{\large C}\equiv (\color{#c00}{-1})^{\large C} $
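Carried through with $A=2$, $B=3$, $C=3^{n-1}$: since $2^{3}\equiv-1\pmod 9$ and $3^{n-1}$ is odd,
$$2^{3^{n}}=\left(2^{3}\right)^{3^{n-1}}\equiv(-1)^{3^{n-1}}=-1\pmod 9,$$
hence $9\mid 2^{3^{n}}+1$ for every $n\ge1$.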
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Determine the limit of a series, involving trigonometric functions: $\sum \frac{\sin(nx)}{n^3}$ and $\frac{\cos(nx)}{n^2}$ I have $$\sum^\infty_{n=1} \frac{\sin(nx)}{n^3}.$$
I did prove convergence:
$0<\theta<1$
$$\left|\frac{\sin((n+1)x)n^3}{(n+1)^3\sin(nx)}\right|< \left|\frac{n^3}{(n+1)^3}\right|<\theta$$
Now I want to determine the limit. I found a similar proof but I need help understanding it; it goes like this:
$$ F(x):=\sum^\infty_{n=1} \frac{\cos(nx)}{n^2}$$
As for this series we have uniform convergence. The series of derivatives $$-\sum^\infty_{n=1} \frac{\sin(nx)}{n}$$ converges, for every $\delta >0$, uniformly on the interval $[\delta, 2\pi-\delta]$ to $\frac{x-\pi}{2}$,
so for every $x \in\,]0,2\pi[$ we get $F'(x) = \frac{x-\pi}{2}$, hence $F(x) = \left(\frac{x-\pi}{2}\right)^2+c$ for some $c\in \mathbb{R}$.
To determine the constant we calculate:
$$ \int^{2\pi}_0F(x)dx=\int^{2\pi}_0\left(\frac{x-\pi}{2}\right)^2dx+\int^{2\pi}_0cdx=\frac{\pi^3}{6}+2\pi c$$
(Question: Why can we do this do get the constant?)
Because $\int^{2\pi}_0\cos(nx)\,dx= 0$ for all $n\geq1$, we have:
$$\int^{2\pi}_0F(x)\,dx = \sum^\infty_{n=1}\int^{2\pi}_0\frac{\cos(nx)}{n^2}\,dx=0,$$ so $c = -\frac{\pi^2}{12}$. (Question: How does he get to that term $\frac{\pi^2}{12}$?) With that we have proven that
$$\sum^\infty_{n=1} \frac{\cos(nx)}{n^2}=\left(\frac{x-\pi}{2}\right)^2-\frac{\pi^2}{12}$$
If you can explain one of the questions about this proof, or if you know how to calculate the limit in my situation above, it would be cool if you leave a quick post here, thanks!
|
We can follow the proof in the post indicated by Marko Riedel. Rewrite $$\underset{n=1}{\overset{\infty}{\sum}}\frac{\sin\left(nx\right)}{n^{3}}=x^{3}\underset{n=1}{\overset{\infty}{\sum}}\frac{\sin\left(nx\right)}{\left(nx\right)^{3}}$$
and use the fact that the Mellin transform identity for harmonic sums with base function $g(x)$ is
$$\mathfrak{M}\left(\underset{k\geq1}{\sum}\lambda_{k}g\left(\mu_{k}x\right),\, s\right)=\underset{k\geq1}{\sum}\frac{\lambda_{k}}{\mu_{k}^{s}}\, g\left(s\right)^{*}$$
where $g\left(s\right)^{*}$ is the Mellin transform of $g\left(x\right)$. So in this case we have $$\lambda_{k}=1,\,\mu_{k}=k,\, g\left(x\right)=\frac{\sin\left(x\right)}{x^{3}}$$
and so its Mellin transform is $$g\left(s\right)^{*}=\Gamma\left(s-3\right)\sin\left(\frac{1}{2}\pi\left(s-3\right)\right).$$
Observing that $$\underset{k\geq1}{\sum}\frac{\lambda_{k}}{\mu_{k}^{s}}=\zeta\left(s\right)$$
we have $$x^{3}\underset{n=1}{\overset{\infty}{\sum}}\frac{\sin\left(nx\right)}{\left(nx\right)^{3}}=\frac{x^{3}}{2\pi i}\int_{\mathbb{C}}\Gamma\left(s-3\right)\sin\left(\frac{1}{2}\pi\left(s-3\right)\right)\zeta\left(s\right)x^{-s}ds=\frac{x^{3}}{2\pi i}\int_{\mathbb{C}}Q\left(s\right)x^{-s}ds.$$
Note that the sine term cancels the poles at odd negative integers and the zeta function cancels the poles at even negative integers, so we have poles only at $s=0,1,2.$
Computing the residues gives $$\underset{s=0}{\textrm{Res}}\left(Q\left(s\right)x^{-s}\right)=\frac{1}{12}$$
$$\underset{s=1}{\textrm{Res}}\left(Q\left(s\right)x^{-s}\right)=-\frac{\pi}{4x}$$
$$\underset{s=2}{\textrm{Res}}\left(Q\left(s\right)x^{-s}\right)=\frac{\pi^{2}}{6x^{2}}$$
so$$\underset{n=1}{\overset{\infty}{\sum}}\frac{\sin\left(nx\right)}{n^{3}}=\frac{\pi^{2}x}{6}-\frac{\pi x^{2}}{4}+\frac{x^{3}}{12}.$$
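This also settles the two questions left in the post: integrating the two representations of $F$ over $[0,2\pi]$ must give the same number, which pins down the constant via $\frac{\pi^3}{6}+2\pi c=0$, i.e. $c=-\frac{\pi^2}{12}$. And as a quick sanity check of the closed form here, at $x=\pi/2$ the left side is $\sum_{k\ge0}\frac{(-1)^{k}}{(2k+1)^{3}}=\frac{\pi^{3}}{32}$, while the right side is
$$\frac{\pi^{2}}{6}\cdot\frac{\pi}{2}-\frac{\pi}{4}\cdot\frac{\pi^{2}}{4}+\frac{1}{12}\cdot\frac{\pi^{3}}{8}=\frac{(8-6+1)\pi^{3}}{96}=\frac{\pi^{3}}{32},$$
as it should be.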
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1120939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
How can one know every Cauchy sequence in a complete metric space converges? I am new to Cauchy sequences. I stumbled onto them in the process of learning what a Hilbert space is. As I understand it, a Cauchy sequence is a sequence whose elements become arbitrarily close to each other as the sequence progresses.
But consider the following statement:
A metric space $X$ in which every Cauchy sequence converges to an element of $X$ is called complete.
How can we know every Cauchy sequence converges?
|
As I see it, the question is, "How can the criterion in the definition of completeness ever be verified (in principle, to say nothing of in practice), since the definition makes an assertion about arbitrary Cauchy sequences, and most spaces admit A Lot of Cauchy sequences?"
If one examines the proofs for, say, compact metric spaces or Euclidean spaces, there's a common theme: Show that every Cauchy sequence in $(X, d)$ has a convergent subsequence. It's a general fact in a metric space $(X, d)$ that a Cauchy sequence with a convergent subsequence is itself convergent in $(X, d)$.
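That general fact has a short $\varepsilon/2$ proof: if $(x_n)$ is Cauchy and $x_{n_k}\to L$, pick $N$ with $d(x_m,x_n)<\varepsilon/2$ for $m,n\ge N$ and then $n_k\ge N$ with $d(x_{n_k},L)<\varepsilon/2$; for every $n\ge N$,
$$d(x_n,L)\le d(x_n,x_{n_k})+d(x_{n_k},L)<\varepsilon,$$
so the whole sequence converges to $L$.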
Since a compact metric space is sequentially compact (every sequence has a convergent subsequence), a compact metric space is complete.
Since every real sequence has a monotone subsequence,[*] and every Cauchy sequence of reals is bounded (in the usual metric), and every bounded, monotone sequence of reals converges to a real limit, we conclude that every Cauchy sequence of reals has a convergent subsequence, and therefore converges.
And so forth. The point is, one doesn't explicitly inspect every Cauchy sequence; one proves that an arbitrary Cauchy sequence converges (e.g., by constructing a convergent subsequence), using nothing but the Cauchy property and properties of the "ambient" space $(X, d)$.
[*] Spivak's Calculus has a beautiful, elementary proof based on the concept of a peak point of a sequence $(x_{k})$, an index $n$ such that $x_{n+k} < x_{n}$ for all $k > 0$. If a sequence has finitely many peak points, one inductively constructs a non-decreasing subsequence; if instead there are infinitely many peak points, one constructs a strictly decreasing subsequence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
How to prove $L_{f}(P) \leq L_{f}(Q)$ when $Q$ and $P$ are partitions of $[a,b]$ and $Q \supseteq P$ I'm having trouble proving this idea.
Suppose that $f$ is bounded on the interval $[a, b]$. $P$ and $Q$ are partitions of $[a, b]$, and $Q \supseteq P$.
$$
L_{f}(P) \leq L_{f}(Q)
$$
I know that this makes sense, because by adding points to a partition, the subintervals get smaller which makes the minima $m_i$ larger, therefore making the lower sums bigger. But I'm not sure if this is a valid proof.
|
First, note we can simply prove this for $P$ and $Q$ where $Q$ is obtained from $P$ by adding only one number $t^*$, and then go by induction on the number of extra points $Q$ has.
So suppose $P=\{t_0,\ldots,t_i,t_{i+1},\ldots,t_n\}$ and $Q=\{t_0,\ldots,t_i,t^*,t_{i+1},\ldots,t_n\}$. Let $m_i$ be the infimum of $f$ on $[t_i,t_{i+1}]$ and $m',m''$ the infimum of $f$ on $[t_i,t^*]$ and $[t^*,t_{i+1}]$ respectively. Then $L_f(Q)$ and $L_f(P)$ differ by $$m''(t_{i+1}-t^*)+m'(t^*-t_i)-m_i(t_{i+1}-t_i)$$
Can you show this is nonnegative? Remember that if $A\subseteq B$ then $\inf A\geqslant \inf B$.
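In case the full argument is wanted: since $[t_i,t^{*}]$ and $[t^{*},t_{i+1}]$ are subsets of $[t_i,t_{i+1}]$, we get $m'\geqslant m_i$ and $m''\geqslant m_i$, so
$$m''(t_{i+1}-t^*)+m'(t^*-t_i)-m_i(t_{i+1}-t_i)\;\geqslant\; m_i\big((t_{i+1}-t^*)+(t^*-t_i)-(t_{i+1}-t_i)\big)=0.$$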
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is this a valid way to prove that $\frac{d}{dx}e^x=e^x$? $$e^x= 1+x/1!+x^2/2!+x^3/3!+x^4/4!\cdots$$
$$\frac{d}{dx}e^x= \frac{d}{dx}1+\frac{d}{dx}x+\frac{d}{dx}x^2/2!+\frac{d}{dx}x^3/3!+\frac{d}{dx}x^4/4!+\cdots$$
$$\frac{d}{dx}e^x=0+1+2x/2!+3x^2/3!+4x^3/4!\cdots$$
$$\frac{d}{dx}e^x= 1+x/1!+x^2/2!+\cdots=e^x$$
Of course this proof is circular because in order to find the expansion requires knowing the derivative, but ignoring that fact, is this proof valid?
|
It is valid after you have shown (or defined) that $e^x$ is indeed equal to that series and that differentiating the series is the same as differentiating each term individually (under suitable conditions, of course). It is also not circular and works also for complex numbers.
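The relevant fact is the standard theorem on power series: if $\sum_{n\ge0}a_nx^{n}$ has radius of convergence $R$, then on $|x|<R$
$$\frac{d}{dx}\sum_{n=0}^{\infty}a_nx^{n}=\sum_{n=1}^{\infty}na_nx^{n-1},$$
with the same radius of convergence. Since $\sum x^{n}/n!$ has $R=\infty$, the term-by-term computation above is legitimate everywhere.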
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Surprising applications of topology Today in class we got to see how to use the Brouwer Fixed Point theorem for $D^2$ to prove that a $3 \times 3$ matrix $M$ with positive real entries has an eigenvector with a positive eigenvalue. The idea is like this: consider $T = \{ (x,y,z) \mid x + y + z = 1, x, y, z \geq 0 \}$. This is a triangle in $\mathbb R^3$. Take a point $\overline x \in T$, and consider $\lambda_x M \overline x \in T$, for some $\lambda_x \in \mathbb R$. This is a vector which is equal to some $y \in T$. In particular, $\lambda_x M$ is a homeomorphism $T \to T$. Hence it has a fixed point $\overline x$. So $\lambda_x M \overline x =\overline x \implies M \overline x = \frac{1}{\lambda_x} \overline x$. So $\overline x$ is an eigenvector with eigenvalue $\frac{1}{\lambda _x}$, which is certainly positive.
I was extremely surprised when this question came up, as we are studying fundamental groups at the moment and that doesn't seem the least bit related to eigenvalues at first. My question is: what are some other example of surprising applications of topology?
|
How about Furstenberg's proof of the infinitude of prime numbers?
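For context, the topology there is generated by the two-sided arithmetic progressions $a+b\mathbb Z$ ($b\neq 0$) as basic open sets; each such progression is also closed, so if there were only finitely many primes the set $\mathbb Z\setminus\{-1,1\}=\bigcup_{p}p\mathbb Z$ would be closed, making $\{-1,1\}$ open, which is impossible since every nonempty open set is infinite.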
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35",
"answer_count": 10,
"answer_id": 9
}
|
Where can I find linear algebra described in a pointfree manner? Clearly, some of linear algebra can be described in a pointfree fashion. For example, if $X$ is an $R$-module and $A : X \leftarrow X$ is an endomorphism of $X$, then we can define that the "eigenspace function of $A$" is the map $\mathrm{Eig}_A : \mathrm{Sub}(X) \leftarrow R$ described by the following equalizer.
$$\mathrm{Eig}_A(\lambda) = \mathrm{Eq}(A,\lambda \cdot \mathrm{id}_X)$$
In fact, this makes sense in any $R$-$\mathbf{Mod}$ enriched category with equalizers.
Anyway, I did some googling for "pointfree linear algebra" and "pointless linear algebra" etc., and nothing really came up, except for an article called "Point-Free, Set-Free Concrete Linear Algebra" which really isn't what I'm looking for. So anyway...
Question. Where can I find linear algebra described in a pointfree manner?
|
A commutative algebra textbook.
Here is a better description of how eigenspaces work from this perspective: a pair consisting of an $R$-module and an endomorphism of it is the same thing as an $R[X]$-module. If $R$ is a field, then finitely generated $R[X]$-modules are classified by the structure theorem for finitely generated modules over a PID. They consist of a torsion-free part and a torsion part which is a direct sum of modules of the form $R[X]/f(x)^d$ where $f(X) \in R[X]$ is irreducible.
If $R$ is an algebraically closed field, the only $f(X)$ that occur are the linear polynomials $f(X) = X - \lambda$, and the $(X - \lambda)$(-power) torsion submodule is precisely the (generalized) $\lambda$-eigenspace. But if $R$ is not algebraically closed or, worse, not a field, then more complicated things can happen, and accordingly the idea of eigenspaces is less useful.
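A standard example of the last point: over $R=\mathbb R$, the module $\mathbb R[X]/(X^{2}+1)$ is $\mathbb R^{2}$ with the endomorphism given (in the basis $1,X$) by
$$\begin{pmatrix}0&-1\\1&0\end{pmatrix},$$
a rotation by $90^{\circ}$; the polynomial $X^{2}+1$ is irreducible over $\mathbb R$, and correspondingly this endomorphism has no eigenvectors at all.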
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
}
|
Is there a notation for being "a finite subset of"? I would gladly use a notation for “$A$ is a finite subset of $B$”, like
$$A\sqsubset B \text{ or } A\underset{fin}{\subset} B,$$
but I have never seen a notation for that. Are there any?
EDIT: While waiting for a future standard, I will use Joffan’s $\ddot{\subset}$ coded as
$\newcommand{\finsub}[0]{\mathrel{\ddot{\subset}}}$
$A\finsub B$
$\newcommand{\finsub}[0]{\mathrel{\ddot{\subset}}}$
I will paste the new command in the first row, and then use \finsub, resulting in $A\finsub B$ which I will explain after first use in each text. I guess that is satisfying enough.
And really, as you define sets and functions in a text, you could as well define relations without standard notations.
|
$A\in[B]^{\lt\omega}$ where $[B]^{\lt\omega}$ denotes the set of all finite subsets of $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 2
}
|
Does Intermediate Value Theorem $\rightarrow $ continuous? I try to understand the Intermediate Value Theorem and wonder whether it works in the opposite direction. I mean, if we know that $\forall c$ with $f\left(a\right)\le c\le f\left(b\right)$, $\exists x_0\in \left[a,b\right]$ such that $f\left(x_0\right)=c$, is $f$ then continuous on $\left[a,b\right]$? Thanks!
EDITED: Continuity $\Rightarrow$ Intermediate Value Property. Why is the opposite not true? There is an absolutely fantastic answer for this!
|
No, what about $f(x)=x$ if $0\le x<1$ and $f(1)=1/2$. Clearly this has the property you stated with $a=0,b=1$ but is not continuous in $[0,1]$.
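To see the stated property explicitly: here $f(0)=0$ and $f(1)=1/2$, and for any $c$ with $0\le c\le 1/2$ we can take $x_0=c$ (note $c<1$), since then $f(x_0)=x_0=c$; yet $\lim_{x\to 1^-}f(x)=1\neq \frac12=f(1)$, so $f$ is not continuous at $1$.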
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How do I find the error of nth iteration in Newton's Raphson's method without knowing the exact root In our calculus class, we were introduced to the numerical approximation of root by Newton Raphson method. The question was to calculate the root of a function up to nth decimal places.
Assuming that the function is nice and our initial value does lead to convergence, our teacher terminated the algorithm when two successive iterations had the same first n digits, and told us that the approximation was correct up to the nth digit!
I feel that the termination step is valid if $f(x_n)$ and $f(x_{n+1})$ have different signs, but my teacher disagrees. How do I sort this out?
Furthermore, how do I find the error in the nth iteration without knowing the exact root?
|
This is a complement to Pp.'s answer.
Newton's method converges, and with the rate given in Pp.'s answer, only if certain assumptions are met. In simple numerical examples these assumptions are usually not tested in advance. Instead, one chooses a reasonable starting point $x_0$ and will then soon find out whether the process converges to the intended root.
Simple sufficient conditions are the following: You have made out an interval $I:=[a,b]$ such that $f(a)f(b)<0$, and that $f'(x)\ne0$, $f''(x)\ne0$ for all $x\in I$. Depending on the signs of $f$, $f'$, $f''$ at the endpoints of $I$ you should choose either $x_0:=a$ or $x_0:=b$, and can then be sure that the $x_n$ converge to the unique root $\xi\in I$. E.g., if $f(b)>0$, $f'(x)>0$ and $f''(x)>0$ $\>(x\in I)$ you should choose $x_0:=b$. Note that in this case $x_n>\xi$ for all $n$ (draw a figure!), so that the lower estimate $a$ of the root is never improved.
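To make this concrete, here is a minimal Python sketch (the function newton_from_b and the example are mine, purely for illustration) of the case $f(b)>0$, $f'>0$, $f''>0$ discussed above:
import math
def newton_from_b(f, fprime, b, tol=1e-12, max_iter=100):
    # Plain Newton iteration started at the endpoint b.
    # Under the stated sign conditions the iterates decrease
    # monotonically toward the unique root in [a, b].
    x = b
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
# Example: f(x) = x**2 - 2 on I = [1, 2]; f(1) < 0 < f(2), f' > 0, f'' > 0.
root = newton_from_b(lambda x: x * x - 2, lambda x: 2 * x, 2.0)
print(root, math.sqrt(2))  # both print 1.4142135623730951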
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Check my answer - complex analysis, using residue and rouche's theorem I was asked the following questions and I am unsure of my solutions, any advice would be appreciated, maybe there is a better way of doing this.
Question:
We are given $f(z)=2z-\sinh (z)$ defined on the circle $|z|=1$. Our goal is to calculate
$\int_{|z|=1}\frac{1}{f(z)}dz$
What I did:
The first thing we need to take care of is finding how many roots does $f(z)$ have in the given domain. If we can prove that $|2z-f(z)| \leq |2z|$ in the circle then from Rouche's theorem they have the same number of roots, and we know that $2z$ has a root only at $z=0$.
Here I also used a known inequality: $|\sinh(z)| \leq |z| \cosh (|z|)$:
$|2z-f(z)|=|\sinh(z)|\leq |z|\cosh(|z|)=\cosh(1)=\frac{e+e^{-1}}{2}<2=|2z|$ on the circle $|z|=1$.
We proved the inequality, so by Rouché's theorem $f(z)$ has only $1$ root in the circle; we can see that $z=0$ is that root (since $f(0)=0$), and $f'(0) \neq 0$, so the root has multiplicity $1$.
We can use Taylor series to separate $f$ into a function with roots multiplied by a function without roots (in the circle of course):
$$f(z)=2z-\sinh(z)=2z-\sum_{n=0}^{\infty}\frac{z^{2n+1}}{(2n+1)!}=z(2-\sum_{n=0}^{\infty}\frac{z^{2n}}{(2n+1)!})=z(1-\sum_{n=1}^{\infty}\frac{z^{2n}}{(2n+1)!})$$
Let $\psi(z)=1-\sum_{n=1}^{\infty}\frac{z^{2n}}{(2n+1)!}$, so overall we have $f(z)=z\psi(z)$. Notice that $\psi(0)=1$ (that's the residue) and that $\psi$ has no roots in $|z|=1$ (otherwise, $f(z)$ would have had more roots, and we already showed that $0$ is its only root).
So to calculate the integral:
$$\int_{|z|=1} \frac{1}{f(z)}dz=\int_{|z|=1} \frac{1}{z}\frac{1}{\psi(z)}dz=\frac{2i\pi}{\psi(0)}=2i\pi$$ from residue theorem.
Is this result correct?
|
As @Mhenni Benghorbal said, you can do this in a simpler way; that is, you don't need to use Rouché's Theorem. To find the poles, we only need to determine when
$$
2z - \sinh(z) = 0.
$$
Using the series representation of $\sinh(z) = \sum_{n=0}^{\infty}\frac{z^{2n+1}}{(2n+1)!}$, we have
$$
2z - \sum_{n=0}^{\infty}\frac{z^{2n+1}}{(2n+1)!} = z\biggl[2 - \sum_{n=0}^{\infty}\frac{z^{2n}}{(2n+1)!}\biggr] = z\biggl[2 - 1 - \sum_{n=1}^{\infty}\frac{z^{2n}}{(2n+1)!}\biggr]
$$
As you have found, the only simple pole occurs when $z=0$ which is inside $\lvert z\rvert = 1$. By the residue theorem, we have
\begin{align}
\oint_{\gamma}\frac{dz}{2z-\sinh(z)}&=2\pi i\sum\text{Res}\\
&= 2\pi i\lim_{z\to 0}(z-0)\frac{1}{z\Bigl[1 - \sum_{n=1}^{\infty}\frac{z^{2n}}{(2n+1)!}\Bigr]}\\
&=\lim_{z\to 0}\frac{2\pi i}{1 - \sum_{n=1}^{\infty}\frac{z^{2n}}{(2n+1)!}}\\
&= 2\pi i
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Investigate the convergence of $\int_1^\infty \frac{\cos x \ln x}{x\sqrt{x^2-1}} \, dx$
Investigate the convergence of $$\int_1^\infty \frac{\cos x \ln x}{x\sqrt{x^2-1}} \, dx$$
so first of all let's split the integral into:
$$I_1 = \int_1^2 \frac{\cos x \ln x}{x\sqrt{x^2-1}} \, dx, \hspace{10mm} I_2 = \int_2^\infty \frac{\cos x \ln x}{x\sqrt{x^2-1}} \, dx$$
I already showed that the limit of the integrand is $0$ when $x\to 1^+$. Therefore, $I_1$ converges, since its integrand extends to a continuous function on $[1,2]$.
Regarding $I_2$: at first I tried to prove it converges absolutely (eliminating the $\cos x$) but it didn't work out. Anyway, I think it diverges but I don't know how to demonstrate it.
|
For $x \ge 2$ we have $\sqrt{x^2-1}\ge x/2$, hence $$\left| \frac{\cos x \ln x}{x\sqrt{x^2-1}} \right| \le \frac{2\ln x}{x^2}.$$ You should have little difficulty showing that $$\int_1^\infty \frac{\ln x}{x^2} \, dx < \infty.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1121929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Geometry: What is the height of a solid pyramid So the question is OPQRS is a right pyramid whose base is a square of sides 12 cm each. Given that the the slant height of the pyramid is 15cm. And now I need to find the Height of the pyramid.
I did my equation like this:
QR 1/2 = 6cm
HEIGHT^2 = 6^2+12^2
= 180
So height is root over 180 and thus I got 13.4 cm. But the answer at the back of my book says 13.7cm. Can anyone tell me what or why my answer is wrong?
|
The correct calculation is $\sqrt{15^2-\left(\frac{12}{2}\right)^2}=\sqrt{189}\approx 13.748$: the slant height is the hypotenuse of the right triangle whose legs are the height of the pyramid and the segment from the centre of the base to the midpoint of a side, so you subtract rather than add.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Sum of nth roots of unity Question: If $c\neq 1$ is an $n^{th}$ root of unity then, $1+c+...+c^{n-1} = 0$
Attempt: So I have established that I need to show that $$\sum^{n-1}_{k=0} e^{\frac{i2k\pi}{n}}=\frac{1-e^{i2\pi}}{1-e^{\frac{i2\pi}{n}}} =0$$ by use of the sum of geometric series. My issue is proceeding further to show that this is indeed true.
|
If a finite set of complex numbers is symmetric about a line passing through the origin, then its sum must lie on that line; if it is symmetric about two different lines through the origin, then its sum must be zero. The $n^{\text{th}}$ roots of unity are the vertices of a regular $n$-gon centered at the origin, which has $n$ lines of symmetry. Hence, for $n\ge2,$ the sum is zero.
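To finish the algebraic route from the question as well: with $c\neq 1$ an $n^{\text{th}}$ root of unity,
$$1+c+\dots+c^{n-1}=\frac{1-c^{n}}{1-c}=\frac{1-1}{1-c}=0,$$
since $c^{n}=1$ and the denominator is nonzero.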
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 0
}
|
A relation between the Jacobson radicals of a ring and those of a certain quotient ring Let $R$ be a ring and $J(R)$ the Jacobson radical of $R$, which we define for this problem to be the intersection of all the maximal left ideals of $R.$ I'm trying to prove the following proposition with only the definition (i.e. not using anything about simple modules etc.)
Let $I\subset J(R)$ be an ideal of $R.$ Then $J(R/I)\cong J(R)/I.$
Attempt: Since $I\subset J(R)$, it is contained in every maximal ideal, and hence the maximal ideals of $R$ are in bijection with the maximal ideals of $R/I.$ Then the map $\phi:J(R)\to J(R/I)$ given by $x\mapsto x+I$ is an $R$-module hom with kernel $I.$ So $\phi(J(R))\cong J(R)/I.$ But I am having a hard time showing this map is onto. I don't see a good reason why every $x\in J(R/I) = \bigcap( M_{k}/I)$ must look like $a + I$ where $a\in J(R)$.
|
Lemma. Let $\phi : R\to S$ be an epimorphism. Then $\phi (J(R))\subset J(S)$:
proof. Let $x\in \phi (J(R))$. By Atiyah–Macdonald, Proposition 1.9, we need to prove that for all $s\in S$, $1-sx$ is a unit in $S$. Let $x= \phi (t)$ for some $t\in J(R)$, and $s=\phi (r)$. We have $1-sx=\phi (1-rt)$. Note that $1-rt$ has an inverse, say $u$. So $\phi (u) (1-sx)=1$.
In this case $S =R/I$ and $(J(R)+I)/I\subset J(R/I)$.
If $I\subset J(R)$ we have equality:
Let $x+I\in J(R/I)$ and $m$ an arbitrary maximal ideal of $R$. We should prove that $x\in m$. Note that $I\subset m$, so $m/I$ is a maximal ideal of $R/I$. By assumption $x+I\in m/I$, and therefore $x\in m$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Beginner Simplex problem Good evening,
I have started studying the simplex method for some examinations I would like to take, and to be perfectly honest, I am stuck really bad. The basic examples and exercises are simple, and I can generally solve them without problem. But once I get to the actual exercise part of the book, I am given the following:
max(-x1+2x2-3x3)
x1-x2+x3+2x4=10 (1)
2x2-x3<=1 (2) => 2x2-x3+x5=1
x2+2x4<=8 (3) => x2+2x4+x6=8
The (1),(2),(3) are my attempts at standard form, which have not helped me. The problem I have is I cannot create the needed Identity matrix. I know that it should be x1,x5,x6, but all my calculations have come back really wrong. The rest of the exercises are of this form, and since I fail at one I fail at all. I have finished all other examination material so getting stuck here is not only a major hurdle, but a huge disappointment for me. Please, if anyone is able, give me a hint. I do not ask for a full solution. Also if someone could point me to a decent site with exercises and examples for methodologies, I would be grateful.
Last attempt for today gives me:
B c b P1 P2 P3 P4 P5 P6
P5 0 1 0 2 -1 0 1 0
P4 0 4 0 1/2 0 1 0 1/2
P1 -1 2 1 -2 1 0 0 -1
- z -2 0 0 2 0 0 1 <--- All zi - cj >= 0.
Final Edit: This is indeed the answer, and the problem has infinitely many solutions due to P2=0 (which means there is another optimal basic solution).
|
And for house-keeping reasons, the answer as edited above is:
B c b P1 P2 P3 P4 P5 P6
P5 0 1 0 2 -1 0 1 0
P4 0 4 0 1/2 0 1 0 1/2
P1 -1 2 1 -2 1 0 0 -1
- z -2 0 0 2 0 0 1
The problem has infinitely many solutions due to P2=0 (which means there is another optimal basic solution).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving $\lim_{n\to\infty} a^{\frac{1}{n}}=1$ by definition of limit
given a sequence $a_n=a^{\frac{1}{n}}$ for $n\in\mathbb{N}^*$, $a\in\mathbb{R},a>1$, prove that $\lim\limits_{n\to+\infty}a_n=1$ by definition.
proof:
given $a_n=a^{\frac{1}{n}}$ for $a\in\mathbb{R},a>1$.
for $n\in\mathbb{N}^*$, $n+1>n\Rightarrow \frac{1}{n+1}<\frac{1}{n}$, and because $a>1$ we get $a^{\frac{1}{n+1}}<a^{\frac{1}{n}}$; since $\frac{1}{n+1}>0$, we have $1=a^{0}<a^{\frac{1}{n+1}}<a^{\frac{1}{n}}\Rightarrow 1<a_{n+1}<a_{n}$.
then to prove that $\lim\limits_{n\to+\infty}a_n=1$ we need to show that $\forall\epsilon>0,\exists N,\forall n>N,|a_n-1|<\epsilon$
then for $\epsilon>0$, choose $N=\frac{1}{\log_a(\epsilon+1)}$; since $a>1$ we get that
$$\forall n>N,1<a^{\frac{1}{n}}<a^\frac{1}{N}\Rightarrow0<a^{\frac{1}{n}}-1<a^{\frac{1}{N}}-1\Rightarrow |a^{\frac{1}{n}}-1|<|a^{\frac{1}{N}}-1|=|a^{\log_a(\epsilon+1)}-1|=|\epsilon+1-1|=\epsilon\Rightarrow |a^{\frac{1}{n}}-1|<\epsilon$$
which implies that $\lim\limits_{n\to+\infty}a_n=1\;\square$
Is my proof of the limit correct?
|
You didn't need the first paragraph in your proof, just the second paragraph. In the second paragraph, you cannot write $N = \frac{1}{\log_a(\epsilon + 1)}$ because $\frac{1}{\log_a(\epsilon + 1)}$ is not necessarily an integer. However, you can let $N > \frac{1}{\log_a(\epsilon + 1)}$. The rest of the proof is fine.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Stuck on an integration question… $$\int x^{-\frac{1}{2}}\cosh^{-1}(\frac{x}{2}+1)dx$$
The answer I should get is $$2x^{\frac{1}{2}}\cosh^{-1}(\frac{x}{2}+1)-4(x+4)^{\frac{1}{2}}$$
but I keep going wrong.
Can someone show me how to get this solution?
Thanks.
|
The form of the answer suggests integration by parts with the choice $$u = \cosh^{-1} \left( \frac{x}{2} + 1 \right), \quad dv = x^{-1/2} \, dx.$$ Then compute the derivative $$du = \ldots?$$ and the integral $$v = \ldots?$$ If you have trouble computing $du$, you can obtain it by writing $$x = \cosh u, \quad \frac{dx}{du} = \sinh u,$$ hence $$\frac{du}{dx} = \frac{1}{\sinh u} = \ldots.$$
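Carrying this through (worth verifying by differentiation at the end): since $\left(\frac{x}{2}+1\right)^{2}-1=\frac{x(x+4)}{4}$, one finds
$$du=\frac{dx}{\sqrt{x(x+4)}},\qquad v=2x^{1/2},$$
and integration by parts gives
$$2x^{1/2}\cosh^{-1}\!\left(\frac{x}{2}+1\right)-\int\frac{2\,dx}{\sqrt{x+4}}=2x^{1/2}\cosh^{-1}\!\left(\frac{x}{2}+1\right)-4(x+4)^{1/2}+C,$$
which is the stated answer.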
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
For which $q\in\mathbb Q$ is $\sin(\frac\pi2q)$ rational?
Do there exist rational numbers $q \in (0,1) \cap \mathbb Q$ such that $$\sin\left(\frac{\pi}{2}q\right) \in \mathbb Q\;?$$
Clearly if $q \in \mathbb Z$, yes. But what about the case $0 < q < 1$?
As $\sin(\pi/6) = 1/2$ we have $q = 1/3$ is a solution. Are there any others?
|
The only rationals $r$ such that $\sin(\pi r)$ is rational are those for which $ \sin(\pi r)$ is in $\{-1,-1/2,0,1/2,1\}$. This is because $2 \sin(\pi r)$ is an algebraic integer, and algebraic integers that are rational are ordinary integers.
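Applied to the question: for $q\in(0,1)$ we have $\frac{\pi}{2}q\in(0,\frac{\pi}{2})$, so $\sin\left(\frac{\pi}{2}q\right)\in(0,1)$ and the only admissible value from the list is $1/2$; then
$$\sin\left(\frac{\pi}{2}q\right)=\frac12\iff \frac{\pi}{2}q=\frac{\pi}{6}\iff q=\frac13,$$
so $q=1/3$ is the only solution in $(0,1)$.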
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Probability of one stock price rising, given probabilities of several prices rising/falling So this is the problem:
An investor is monitoring stocks from Company A and Company B, which
each either increase or decrease each day. On a given day, suppose
that there is a probability of 0.38 that both stocks will increase in
price, and a probability of 0.11 that both stocks will decrease in
price. Also, there is a probability of 0.16 that the stock from
Company A will decrease while the stock from Company B will increase.
What is the probability that the stock from Company A will increase
while the stock from Company B will decrease? What is the probability
that at least one company will have an increase in the stock price?
Things I've written down
If the probability for the price of both companies' stocks to go up is 0.38, then the probability for this not to happen will be 0.62, and if this does not happen, would that mean at least one will decrease?
Same thing for the probability for both to decrease: since it's .11, the probability for this not to happen, or in other words for at least one to increase, will be .89?
I know the respective answers should be .35 and .89, with .89 being the same as the second thing I wrote down, but this seems rather semantic to me.
I can also get the first answer by adding .38+.11+.16 = .65 then 1-.65 = .35 but I can't work out in my head why that would work.
Some help please?
|
The probabilities of all the possible cases need to add up to one.
Since the stocks must go up or down (not stay the same) there are four possible outcomes for two stocks: {$A\uparrow B \uparrow, A\uparrow B \downarrow, A\downarrow B \uparrow, A\downarrow B \downarrow $}.
If you're given three (disjoint) probabilities of the four, subtract from one to get the probability of the fourth: $1 - 0.38 - 0.16 -0.11 = 0.35.$
Your last statement is the correct interpretation because the cases don't overlap at all.
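The same bookkeeping answers the second part: "at least one increases" is the complement of "both decrease", so
$$P(\text{at least one increases})=1-P(A\downarrow B\downarrow)=1-0.11=0.89,$$
exactly as reasoned in the question.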
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Applying the Stone-Weierstrass Theorem to approximate even functions
Let $f:[-1,1] \rightarrow \mathbb{R}$ be any even continuous function on $[-1,1]$ (i.e. $f(-x)=f(x)$ $\forall x \in [-1,1]$). Let $\epsilon>0$. Prove that there exists an even polynomial $p$ such that $$|f(x)-p(x)|< \epsilon$$ $$\forall x \in [-1,1]$$
Here, "even polynomial" means that $p(-x)=p(x)$, not simply that it has even degree.
I think I should use the Stone-Weierstrass theorem to show that the subalgebra of even polynomials, call it $\mathcal{A}$, over this interval is dense, from which the result follows immediately.
For this to work I require that $\mathcal{A}$ contains the constants (obviously true) and separates points...which is not true, unfortunately. Anyone have any hints? I would prefer hints only, rather than solutions.
Oh yes, and I should mention that the version of the Stone-Weierstrass theorem that I can use says that if a subalgebra of $C(\mathbb{R})$ contains the constants and separates points, then it is dense in $C(\mathbb{R})$.
|
Hint: If $\sup_{x \in [-1,1]} |\,p(x) - f(x)|<\epsilon$ then
$\sup_{x \in [-1,1]} |\,\frac{1}{2}(\,p(x)+ p(-x)\,)- f(x)\,|<\epsilon$
$\bf{Added:}$
Let $p(x)$ a polynomial so that for every $x \in [-1,1]$ we have $|\,p(x) - f(x)|<\epsilon$. Note that if $x \in [-1,1]$ then also $-x \in [-1,1]$. So we have
$$|\,p(x) - f(x)|<\epsilon\\
|\,p(-x) - f(-x)|<\epsilon$$
Add up the inequalities and divide by $2$:
$$\frac{1}{2} ( |\,p(x) - f(x)| + |\,p(-x) - f(-x)|) < \epsilon$$
Note that we have $|a+b| \le |a|+|b|$ for all numbers. Therefore we get
$$\frac{1}{2} \cdot |(p(x) + p(-x) ) - (f(x) + f(-x) ) | < \epsilon$$
or
$$ |\frac{1}{2}\cdot (p(x) + p(-x) ) - \frac{1}{2}\cdot(f(x) + f(-x) ) | < \epsilon$$
$\tiny{\text{ the averages also satisfy the inequality }}$.
Now since $f$ is even we have $\frac{1}{2}\cdot(f(x) + f(-x) ) = f(x)$.
Moreover, $\frac{1}{2}\cdot (p(x) + p(-x) )$ is an even polynomial already. We are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
What is the integral of $\frac{\sqrt{x^2-49}}{x^3}$ I used trig substitution and got
$\displaystyle \int \dfrac{7\tan \theta}{343\sec ^3\theta}d\theta$
Then simplified to sin and cos functions, using U substitution with a final answer of:
$\dfrac{-7}{3x^3}+C$
Which section did I go wrong in. Any help would be appreciate!
|
If you let $x=7 \sec \theta$ (so that $\sqrt{x^2-49}=7\tan\theta$, by $\sec^2\theta-\tan^2\theta=1$), the integrand is not just $\dfrac{7\tan\theta}{343\sec^3\theta}$: you must also substitute $dx=7\sec\theta\tan\theta\,d\theta$.
When the sub is done correctly you'll get an integral of the form $\displaystyle \frac{49}{343} \int \frac{\tan^2\theta}{\sec^2\theta}\,d\theta = \frac{1}{7} \int \sin^2\theta\ d\theta = \frac{1}{14}\int (1-\cos 2\theta)\ d\theta$ and the rest should be easy.
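Finishing from there: $\frac{1}{14}\int(1-\cos 2\theta)\,d\theta=\frac{\theta}{14}-\frac{\sin\theta\cos\theta}{14}+C$, and with $\sec\theta=\frac{x}{7}$ we have $\cos\theta=\frac{7}{x}$ and $\sin\theta=\frac{\sqrt{x^{2}-49}}{x}$, so
$$\int\frac{\sqrt{x^{2}-49}}{x^{3}}\,dx=\frac{1}{14}\operatorname{arcsec}\frac{x}{7}-\frac{\sqrt{x^{2}-49}}{2x^{2}}+C.$$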
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Showing that $\mathcal{A}$ being countable $\Rightarrow f(\mathcal{A})$ is countable - (Algebras/Sigma Algebras)
For the first question my idea was to show that $\sigma(f(\mathcal{A})) \subseteq \sigma(\mathcal{A})$ and $\sigma(\mathcal{A}) \subseteq \sigma(f(\mathcal{A}))$. As for the second question I am at a loss of what to do there. I have been tinkering with it for awhile now and have not got anywhere. Help on both questions would be much appreciated!
|
It is clear that $\sigma(\mathcal A)\subset\sigma(f(\mathcal A))$, as $\mathcal A\subset f(\mathcal A)$. For the reverse inclusion, $\sigma(\mathcal A)$ is a $\sigma$-algebra that contains $f(\mathcal A)$, so as $\sigma(f(\mathcal A))$ is the intersection of all such $\sigma$-algebras, $\sigma(f(\mathcal A))\subset\sigma(\mathcal A)$.
For the second question, there is a clever argument given in this answer by @Robert Israel: Algebra generated by countable family of sets is countable?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum $(1-x)^n$ $\sum_{r=1}^n$ $r$ $n\choose r$ $(\frac{x}{1-x})^r$ The question is to find the value of:
$\binom{n}{1}x(1-x)^{n-1} + 2\binom{n}{2}x^2(1-x)^{n-2} + 3\binom{n}{3}x^3(1-x)^{n-3} + \cdots + n\binom{n}{n}x^n.$
I wrote the general term and tried to sum it as:
$$S=(1-x)^n\sum_{r=1}^n r\binom{n}{r}\left(\frac{x}{1-x}\right)^r.$$
I got stuck here.
What do I do after this?
|
HINT: $r\binom{n}r=n\binom{n-1}{r-1}$; this is easy to see if you expand into factorials, and it also has a straightforward combinatorial proof.
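With that identity the sum collapses to a binomial expansion:
$$\sum_{r=1}^{n}r\binom{n}{r}x^{r}(1-x)^{n-r}=n\sum_{r=1}^{n}\binom{n-1}{r-1}x^{r}(1-x)^{n-r}=nx\sum_{j=0}^{n-1}\binom{n-1}{j}x^{j}(1-x)^{(n-1)-j}=nx\big(x+(1-x)\big)^{n-1}=nx.$$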
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1122958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is $a_n={\{\dfrac{1}{n^2}+\dfrac{(-1)^n}{3^n}}\}$ monotonically decreasing? Is $a_n={\{\dfrac{1}{n^2}+\dfrac{(-1)^n}{3^n}}\}$ monotonically decreasing?
In the process of solving this problem, I ran into the problem of proving that
$A$: $\dfrac{2k+1}{(k(k+1))^2}-\dfrac{4}{3^{k+1}}\geq 0$ for every positive odd integer $k>1$,
which would complete the proof that ${\{a_n}\}$ is monotonically decreasing.
Even the induction method makes it more complicated. Does statement $A$ hold?
Thank you.
|
Note first that $$\dfrac{2k+1}{(k(k+1))^2}\geqslant \dfrac{2k}{k^2(k+1)^2}= \dfrac{2}{k(k+1)^2}\geqslant \dfrac{2}{(k+1)^3}$$
If we prove that
$$\dfrac{1}{n^3}\geqslant \dfrac{2}{3^n} \tag{1}$$
which is equivalent to
$$\dfrac{3^n}{n^3} \geqslant 2 \tag{2}$$
then we can conclude that
$$\dfrac{2k+1}{(k(k+1))^2}-\dfrac{4}{3^{k+1}}\geqslant 0.$$
The inequality $(2)$ holds for sufficiently large $n,\;\;(n\geqslant{6})$ and can be proved by induction.
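The induction for $(2)$ is short: the base case is $\frac{3^{6}}{6^{3}}=\frac{729}{216}>2$, and for the step, if $\frac{3^{n}}{n^{3}}\geqslant 2$ with $n\geqslant 6$ then
$$\frac{3^{n+1}}{(n+1)^{3}}=3\left(\frac{n}{n+1}\right)^{3}\frac{3^{n}}{n^{3}}\geqslant 3\left(\frac{6}{7}\right)^{3}\cdot 2>2,$$
since $3\left(\frac{6}{7}\right)^{3}=\frac{648}{343}>1$.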
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Struggling with connection between Clifford Algebra (/GA) and their matrix generators As I thought I understood things, the Gamma matrices behave as the 4 orthogonal unit vectors of the Clifford algebra $\mathcal{Cl}_{1,3}(\mathbb C)$ (also the Pauli matrices are for the 3 of $\mathcal{Cl}_{3}(\mathbb C)$??).
But, I'm not getting results that are intuitive when I perform algebra using multivectors built out of these.
The dot and wedge products are $A\cdot B=\frac{1}{2}(AB+BA)$ and $A\wedge B=\frac{1}{2}(AB-BA)$, respectively. I should be able to wedge all 4 unit vectors together to obtain the unit pseudoscalar, but the result is zero when I attempt to use these definitions to matrix-multiply $\gamma_0\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3$.
I think I'm missing something large...
|
Those formulas for dot and wedge product are only valid when $A$ and $B$ are vectors, not if they are bivectors, trivectors, etc.
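As a concrete check on the pseudoscalar computation (a sketch; the sign and factor conventions for $\gamma^5$ vary between texts): for mutually anticommuting orthogonal generators, the wedge product of all four vectors coincides with their geometric product, so it can be computed by ordinary matrix multiplication,
$$\gamma_0\wedge\gamma_1\wedge\gamma_2\wedge\gamma_3=\gamma_0\gamma_1\gamma_2\gamma_3,$$
which is (up to a convention-dependent factor of $i$) the usual $\gamma^5$ matrix; in particular it is nonzero.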
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Divide matrix using left division In matlab, I defined
a=[1;2;3]
b=[4;5;6]
both a and b are not square matrix.
and execute a\b will return 2.2857
From various sources, we understand that a\b ~= inv(a)*b, but in my case, a is not a square matrix, thus we can't perform an inverse operation on it.
I am actually translating matlab code to c#, and I need to know how do left division between non-square matrix calculated.
Thanks!
|
In Matlab, a\b $=(a'a)^{-1}a'b$ in this (non-square) case, i.e. $b$ is projected onto $a$.
A little bit more information: Consider,
$$
b \approx ax
$$
where $x$ is unknown and there are no exact solutions (i.e. $b$ and $a$ have many rows and few columns), then the typical way to solve this problem is to solve for $x$ by projecting $b$ onto $a$, that is, by taking the point in the column space of $a$ which is closest to $b$: $b \approx P_ab=a(a'a)^{-1}a'b=ax$.
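For the C# translation, the normal-equations formula above is all that is needed. Here is a minimal sketch (in Python/NumPy rather than C#, purely for brevity) reproducing the example from the question:
import numpy as np
a = np.array([[1.0], [2.0], [3.0]])  # column vectors, as in the question
b = np.array([[4.0], [5.0], [6.0]])
# least-squares solution of a*x ~= b:  x = (a'a)^{-1} a'b
x = np.linalg.solve(a.T @ a, a.T @ b)
print(x)  # [[2.2857...]] = 32/14, matching Matlab's a\b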
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\int_0^1 \frac{\ln x}{x-1}\, dx$ converges.
Prove that $\int_0^1 \frac{\ln x}{x-1} dx$ converges.
We cannot apply Abel's/Dirichlet's tests here (for example, Dirichlet's test demands that for $g(x)=\ln x$, $\int_0^1 g(x)dx < \infty$, which isn't true).
I also tried to compare the integral to another;
Since $x>\ln x$ I tried to look at $\int_0^1 \frac{x}{x-1} dx$ but this integral diverges.
What else can I do?
EDIT:
Apparently I also need to show that the integral equals $\sum_{n=1}^\infty \frac{1}{n^2}$.
I used WolframAlpha and figured out that the expansion of $\ln x$ at $x=1$ is $\sum_{k=1}^\infty \frac{(-1)^k(-1+x)^k}{k}$. Would that be helpful?
|
Since:
$$ \int_{0}^{1}\frac{dt}{1-xt}=-\frac{\log(1-x)}{x} $$
and:
$$ I=\int_{0}^{1}\frac{\log x}{x-1}\,dx = -\int_{0}^{1}\frac{\log(1-x)}{x}\,dx $$
we have:
$$ I = \iint_{(0,1)^2}\frac{1}{1-xt}\,dt\,dx = \sum_{n\geq 0}\iint_{(0,1)^2}(xt)^n\,dt\,dx = \sum_{n\geq 0}\frac{1}{(n+1)^2}=\color{red}{\zeta(2)}$$
as wanted; interchanging the sum and the integral is justified by Tonelli's theorem, since all terms are nonnegative on $(0,1)^2$. In particular the integral converges, to $\zeta(2)=\sum_{n=1}^\infty \frac{1}{n^2}$.
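For the convergence question on its own there is also a direct argument (a sketch): the integrand extends continuously to $x=1$, since
$$\lim_{x\to 1^-}\frac{\ln x}{x-1}=1,$$
while near $x=0$ it behaves like $-\ln x$ (the denominator tends to $-1$), and $\int_0^1(-\ln x)\,dx=\left[x-x\ln x\right]_0^1=1<\infty$.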
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Find $f^{(n)}(1)$ where $f(x)={1\over x(2-x)}$. Find $f^{(n)}(1)$ where $f(x)={1\over x(2-x)}$.
What I did so far:
$f(x)=(x(2-x))^{-1}$, $f'(x)=-(x(2-x))^{-2}[2-2x]$, $f''(x)=2(x(2-x))^{-3}[2-2x]^2+2(x(2-x))^{-2}$. It confuses me a lot. I know I have to determine where I have an expression multiplied by $2-2x$ and where not, and what form it takes. I would appreciate your help.
|
Since for any $a\neq 0$:
$$\frac{1}{x(a-x)} = \frac{1}{a}\left(\frac{1}{x}+\frac{1}{a-x}\right)\tag{1}$$
we have:
$$ \frac{d^n}{dx^n}\frac{1}{x(a-x)} = \frac{n!}{a}\left(\frac{(-1)^n}{x^{n+1}}+\frac{1}{(a-x)^{n+1}}\right).\tag{2}$$
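In particular, taking $a=2$ and evaluating at $x=1$ (where $x=2-x=1$):
$$f^{(n)}(1)=\frac{n!}{2}\left((-1)^n+1\right)=\begin{cases}n! & n \text{ even},\\ 0 & n \text{ odd}.\end{cases}$$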
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Math and geometry software to create instructional videos I am looking into starting an amateur project for online tutoring. I need a math/geometry program that I can use to create shapes, graph functions, and create animation. I know of many types of software (such as geogebra) but they have restrictions for commercial use. In other words, I can't use the material or features (even screenshots) to gain a profit. Does anyone know of any programs/ software that I can use for my project?
Note: I don't mind software or programs that I can purchase for a one time fee.
|
You can take a look at GNU Dr. Geo a free software of mine, there is no commercial limitation and you can even redistribute it along your work.
You can do classic interactive geometry with it, or use its programming features to design very original sketches.
Moreover, it is very easy to modify Dr. Geo from within itself, in case of need!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
The quotient map and isomorphism of cohomology groups Let $X$ be a closed $n$-manifold, $B$ an open $n$-disc in $X$.
Suppose $p:X\rightarrow X/(X-B)$ is a quotient map. Notice that $X/(X-B)$ is homeomorphic to the sphere $\mathbb{S}^n$.
My question is whether the quotient map induces an isomorphism mapping $H^n(X;\mathbb{Z}_2)$ to $H^n(X/(X-B);\mathbb{Z}_2)$? Maybe it is known. But I can not find it in the literature.
If it is easy, can somebody show it to me? Thanks a lot.
|
Consider the map of pairs $(X,X-B)\to(S^n,*)$ with $S^n=X/(X-B)$ and $*$ the point resulting from collapsing $X-B$, and the diagram you get from it in the long exact sequences for cohomology. The map $H^n(S^n,*)\to H^n(S^n)$ is an isomorphism, and so is $H^n(S^n,*)\to H^n(X,X-B)$, so by the commutativity of one of the squares in the diagram you want to know if $H^n(X,X-B)\to H^n(X)$ is an isomorphism. Is it?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
I have a question about Viete's formulas If I have a polynomial $a_n x^n + a_{n-1}x^{n-1}+ \cdots + a_1 x + a_0$, and the roots of the polynomial is $r_1,r_2,\ldots,r_n$, then I can rewrite the polynomial as,
$a_n x^n + a_{n-1}x^{n-1}+ \cdots + a_1 x + a_0 = (x-r_1)\cdots(x-r_n)$. Now if I wanted to express each coefficient of the polynomial in terms of its roots, how would I go about that?
For example, is $a_0 = (-1)^n(r_1\cdots r_n)$? What about $a_1,a_2,\ldots,a_n$?
Would $a_1 = (-1)^{n-1}(r_1 r_2\cdots r_{n-1} + r_1 r_2\cdots r_{n-2} r_n + \cdots)$?
|
The answer to both questions is yes, provided that $a_n=1$.
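For reference, the general statement for arbitrary leading coefficient $a_n$: comparing coefficients in $a_n\prod_{i=1}^n(x-r_i)$ gives
$$a_{n-k}=(-1)^k\, a_n\, e_k(r_1,\dots,r_n),\qquad k=0,1,\dots,n,$$
where $e_k$ is the $k$-th elementary symmetric polynomial; e.g. $a_0=(-1)^n a_n\, r_1\cdots r_n$.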
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
To show that $X = (0,1]$ is complete with respect to the metric $e $ where $e(x,y) = |\frac{1}{x} - \frac{1}{y}|$. Show that $X = (0,1]$ is complete with respect to the metric $e $ where $e(x,y) = |\frac{1}{x} - \frac{1}{y}|$.
My proof: let $(x_n)$ be Cauchy in $(X,e)$ and set $t_n := \frac{1}{x_n}$. Then $(t_n)$ is Cauchy in $[1, \infty )$. Hence there exists $t \in [1, \infty )$ such that $t_n \to t$, and this implies that $x_n \to 1/t$ in $(X,e)$. Thus....
Is this line of proof OK?
|
The idea is OK, but should be formalised a bit more.
Let $Y = [1,\infty)$ with the Euclidean metric $d$. Clearly, $Y$ is complete
as it is a closed subset of the complete $\mathbb{R}$ in the Euclidean metric.
Define $f(x) = \frac{1}{x}$, a map from $(X,e)$ to $(Y,d)$.
Then $d(f(x), f(y)) = |f(x) - f(y)| = |\frac{1}{x} - \frac{1}{y}| = e(x,y)$ for all $x,y \in X$. So $f$ is an isometry; in particular, it is continuous. Going the other way, the same formula $g(y) = \frac{1}{y}$ defines an isometry from $(Y,d)$ to $(X,e)$.
So if $(x_n)$ is Cauchy in $(X,e)$, then $(t_n) = (f(x_n))$ is Cauchy in $(Y,d)$, because $f$ is an isometry. And $(Y,d)$ is complete, so $(t_n)$ has a limit $t$, and the continuity of $g$ then guarantees that $g(t_n) = x_n$ converges to $g(t)$ in $(X,e)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1123924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Easy: Graphs of Straight Line
I can't exactly figure out how to work this out.
Well I know the equation for a straight line is $y = mx + c$
$c = gradient$
Therefore if I multiply $3$ by the number $x$ to get the gradient $6$ and $2$ I can work out which line is which...
With $y = 3(x + 2)$ I tried to expand but then I got confused.
Can someone explain in easy terms? Thanks guys.
|
Syntactical side note: $m$ is called the gradient, $c$ is called the $y$-intercept.
To find the correct equation for $A$, just look at the $y$-intercept of $A$: it's $6$.
This means that $A$ must be of the form $y = mx + 6$, which only one of your equations satisfies. Note that
$$a\cdot(b+c) = a\cdot b + a\cdot c$$
for any real numbers (also variables) $a,b$ and $c$.
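So, for instance, expanding $y = 3(x + 2)$ from the question gives
$$y = 3x + 6,$$
i.e. gradient $3$ and $y$-intercept $6$, which is exactly the required form for $A$.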
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Subring of a field extension is a subfield For the first part,
I am trying to show that any subring of $E/F$ is a subfield where $E$ is an extension of field $F$ and all elements of $E$ are algebraic over $F$.
My solution is to just show that every nonzero element of the subring has an inverse, and we will be done. Let $R$ be the subring and $r\in R$, $r\neq 0$. Since $r$ is algebraic over $F$ we can find a polynomial $p(x)=a_0+a_1x+a_2x^2+\cdots + a_nx^n$ such that $p(r)=0$, where $a_i \in F$. Using this expression I explicitly computed what an inverse of $r$ must look like.
However, the second part of the question asks to prove that any subring of a finite dimensional extension field $E/F$ is a subfield.
Is the second part of the question redundant? I didn't use whether the extension was finite or infinite for my solution in the first part. Will both the parts have the same solution that I wrote above?
|
Let $R$ be a ring such that $F\subset R\subset E$ and $\alpha\neq 0$ an element of $R$. We can as well suppose $R=F[\alpha]$. Then multiplication by $\alpha$ is an endomorphism of the finite dimensional $F$-vector space $R$. This endomorphism is injective because $R$ is an integral domain (subring of a field), hence it is surjective. In particular, $1$ is attained, i.e. there exists an element $\beta\in R$ such that $\beta\alpha=1$ – in other words, $\alpha$ has an inverse in $F[\alpha]$.
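For comparison with the first part, here is the explicit inverse (a sketch, assuming as usual that $R$ contains $F$): choose $p$ of minimal degree with $p(r)=0$. Then $a_0\neq 0$ (otherwise one could divide $p$ by $x$), and rearranging $a_0=-r\left(a_1+a_2r+\cdots+a_nr^{n-1}\right)$ gives
$$r^{-1}=-a_0^{-1}\left(a_1+a_2r+\cdots+a_nr^{n-1}\right)\in F[r]\subseteq R.$$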
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find $\lim\limits_{x\to 1^-}\ln x\cdot \ln(1-x)$. Find $\lim\limits_{x\to 1^-}\ln x\cdot \ln(1-x)$. I can't even start because I don't really know what $x\to 1^-$ means. If you know what it means it would really help me, and I would also appreciate help with finding the limit, whether a hint or a full solution.
|
$$
\lim_{x\uparrow1} \ln x \ln(1-x) = \lim_{x\uparrow1} \frac{\ln x}{x-1}\cdot\frac{\ln(1-x)}{1/(x-1)}.
$$
Apply L'Hopital's rule to both fractions and you've got it. (In the first one, the numerator and denominator both approach $0$; in the second, each approaches $-\infty$.) As for the notation: $x\to 1^-$ (also written $x\uparrow 1$) means that $x$ approaches $1$ from the left, through values less than $1$; this is needed here so that $1-x>0$ and $\ln(1-x)$ is defined.
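Carrying it out (a quick sketch):
$$\lim_{x\uparrow 1}\frac{\ln x}{x-1}=\lim_{x\uparrow 1}\frac{1/x}{1}=1,\qquad \lim_{x\uparrow 1}\frac{\ln(1-x)}{1/(x-1)}=\lim_{x\uparrow 1}\frac{-1/(1-x)}{-1/(x-1)^2}=\lim_{x\uparrow 1}(1-x)=0,$$
so the original limit is $1\cdot 0=0$.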
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Recover the inverse after interative solution of a linear system I have solved the linear system $\mathbf{A} \mathbf{x} = \mathbf{b}$ with an iterative solver. The problem is well-posed ($\mathbf{A}$ is invertible, $\mathbf{b} \ne \mathbf{0}$, blah blah blah).
The question: Is it possible to combine $\mathbf{A}$, $\mathbf{x}$, and $\mathbf{b}$ (all known), to get the inverse $\mathbf{A}^{-1}$?
The context: I am optimizing a function of $\mathbf{x}$, and the parameters that I am optimizing with respect to show up in $\mathbf{A}$ and $\mathbf{b}$. I need to know the inverse of $\mathbf{A}$ to get the derivatives, i.e.,
$$
\frac{\partial \mathbf{x}}{\partial p} = \mathbf{A}^{-1} \left( \frac{\partial \mathbf{b}}{\partial p} - \frac{\partial \mathbf{A}}{\partial p} \mathbf{x} \right) ,
$$
where $p$ is one of my parameters, and so on for the Hessian. A direct solution is out of the question because the problem dimension is very large, so I can't store and re-use one of the triangular factors. I'm not coming up with anything on my own or in my textbooks, so here I am.
|
Each columns of the inverse is the solution to $Ax = b$ when $b = e_1,e_2,\dots,e_n$, where $e_i$ are the standard basis vectors.
At the very least, finding $A^{-1}$ from the $x$'s would require that you choose $n$ linearly independent $b$s. Perhaps you can optimize on which $b$'s you select.
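A minimal sketch of that column-by-column idea (in Python; the choice of conjugate gradients assumes $\mathbf{A}$ is symmetric positive definite, so substitute whatever iterative solver you are already using):
import numpy as np
from scipy.sparse.linalg import cg
def inverse_by_columns(A):
    # the i-th column of A^{-1} is the solution of A x = e_i
    n = A.shape[0]
    cols = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        x, info = cg(A, e)
        cols.append(x)
    return np.column_stack(cols)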
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convert 1-5 Grading Scale to 1-100 Grading System I am creating a formula in Excel to convert 1-5 Grading Scale to 1-100 Grading System Suppose that I have the following table:
97-100 = 1.00
94 - 96 = 1.25
91-93 = 1.50
88-90 = 1.75
85-87 = 2.00
82-84 = 2.25
79-81 = 2.50
76-78 = 2.75
75 = 3.00
My problem is this: What if the grade is 1.47, what will be the exact numerical equivalent from 91-93? And how do I compute it?
|
This is an answer based more on intuition; hopefully it will be easier to understand.
You can multiply the score by the ratio of the respective ranges, which here is $\frac{100}{5}=20$.
Let $S_5$ and $S_{100}$ be the respective scores.
Therefore,
$$S_{100}=S_5 \times 20$$
However, this map includes $0$. To avoid $0$ values, we essentially shift to a $0$-to-$4$ scale, rescale it onto $0$ to $99$, and add $1$:
$$S_{100}=(S_5-1)\times\frac{99}{4}+1$$
Now you can just plug in the values to find whatever you like.
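A minimal sketch of that formula (note it maps the scales in the same direction, low to high; since the table in the question runs the other way, with $1.00$ as the top grade, you may want to reverse it in practice):
def to_100(s5):
    # linear map from the 1-5 scale onto 1-100, per the formula above
    return (s5 - 1) * 99 / 4 + 1
print(to_100(1.47))  # 12.6325 on the ascending scale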
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Evaluating $\int_0^1 \frac {x^3}{\sqrt {4+x^2}}\,dx$ How do I evaluate the definite integral $$\int_0^1 \frac {x^3}{\sqrt {4+x^2}}\,dx ?$$ I used trig substitution, and then a u substitution for $\sec\theta$.
I tried doing it and got an answer of: $-\sqrt{125}+12\sqrt{5}-16$, but apparently its wrong.
Can someone help check my error?
|
A way to compute this is as follows:
\begin{align*}
\int_0^1 \frac {x^3}{\sqrt {4+x^2}}\mathrm d x &=\int_0^1\frac{4x+x^3-4x}{\sqrt{4+x^2}}\mathrm d x\\
&=\int_0^1x\sqrt{4+x^2}\mathrm d x -2\int_0^1\frac{2x}{\sqrt{4+x^2}}\mathrm d x\\
&=\left.\frac{1}{3}(4+x^2)^{3/2}\right|_0^1-4\left.\left(4+x^2\right)^{1/2}\right|_0^1\\
&=\frac{5\sqrt{5}-8}{3}-4\left(\sqrt{5}-2\right)\\
&=\boxed{\color{blue}{\dfrac{16-7\sqrt{5}}{3}}}
\end{align*}
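As a quick numerical sanity check of the boxed value (a sketch):
import numpy as np
from scipy.integrate import quad
val, err = quad(lambda x: x**3 / np.sqrt(4 + x**2), 0, 1)
print(val, (16 - 7 * np.sqrt(5)) / 3)  # both approximately 0.11587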
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Clarification of some doubts: working with the restriction of a quadratic form Let $q:\mathbb{R^3}\to\mathbb{R}$ such that
$$q(x,y,z)=2x^2+3y^2+4xy-2xz.$$
I have to determine rank and signature of $q$, and so far it should be fine: I got $\operatorname{rk}(q)=3$ and $\operatorname{sgn}(q)= (2,1)$. The second question is: let $V=\langle(1,-1,0),(1,0,0)\rangle$, vector subspace of $\mathbb{R^3}$, determine rank, signature and the set of isotropy vectors of the restriction of $q$ to $V$.
Here I'm a bit confused, also because we have a two-dimensional subspace of a three-dimensional space. Could you show me how to work this example step by step (also showing how to calculate the matrix associated with the restriction of $q$)? Could you also add some general remarks on how to work with restrictions of quadratic forms?
|
That plane $V$ is described by $z=0$, hence your restriction is just
$$q(x,y,0)=2x^2+3y^2+4xy$$
This is then
$$q'(x,y)=2x^2+3y^2+4xy$$
which is represented by the matrix
$$\begin{pmatrix} 2 & 2 \\ 2 & 3\end{pmatrix}$$
The eigenvalues are ${5\over 2}\pm {\sqrt{17}\over 2}$, which are both positive.
so the new signature is $(2,0)$, in particular it's positive definite, so that the form is anisotropic and the rank is $2$.
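Equivalently, one can restrict without eliminating variables, by computing the Gram matrix $B^{T}MB$, where $M$ is the symmetric matrix of $q$ and the columns of $B$ are the given basis vectors of $V$ (a quick cross-check):
$$M=\begin{pmatrix}2&2&-1\\2&3&0\\-1&0&0\end{pmatrix},\qquad B=\begin{pmatrix}1&1\\-1&0\\0&0\end{pmatrix},\qquad B^{T}MB=\begin{pmatrix}1&0\\0&2\end{pmatrix},$$
again positive definite of rank $2$ (the matrix differs from the one above only because a different basis of $V$ is used).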
In general the idea is that in an ambient space, every linear space is the intersection of hyperplanes. So say you're in $\Bbb R^m$ and you want the restriction of the form to the subspace spanned by the (linearly-independent) $v_1,\ldots, v_k$. Then we know any $k$-dimensional subspace can be determined as the zero-set of $m-k$ linear forms. In our specific example, the linear form was $0x+0y+z=0$.
I used that to replace the variable $z$ by a function of $x$ and $y$, i.e. $z=0-0x-0y=0$ by moving the $x$ and $y$ data to the other side.
In a general setting you'd pick your preferred variables, and solve for the other ones in terms of those variables using the linear forms, then make substitutions into the form to reduce the number of variables.
Example
say we did your original form, but the space was spanned by $\langle 1, -1, 0\rangle$ and $\langle 1, 0, -1\rangle$ so that the linear form describing the space is $x+y+z=0$, then if I like $x$ and $y$ best I do $z=-(x+y)$ then I go back to the original form and plug in
$$q(x,y,-(x+y))=2x^2+3y^2+4xy+2x(x+y)= 4x^2+3y^2+6xy$$
so $q'(x,y)$ is represented by the matrix
$$\begin{pmatrix} 4 & 3 \\ 3 & 3\end{pmatrix}$$
If, instead, we wanted to restrict to the $1$-dimensional linear subspace spanned by $\langle 1, 1,3\rangle$ we note this is the zero set of the two linear forms: $x-y$ and $3x-z$. This means $x=y$ and $3x=z$ so that we have the new form
$$q(x,x,3x)=2x^2+3x^2+4x^2-6x^2=3x^2$$
so $q'(x)=3x^2$ is represented by the matrix $3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$F(x,y)=2x^4-3x^2y+y^2$. Show that $(0,0)$ is a local minimum of the restriction of $F$ to every line that passes through $(0,0)$. First of all I checked that $(0,0)$ is a critical point:
$DF(x,y)=(8x^3-6xy,\,-3x^2+2y)$, which equals $(0,0)$ at the origin.
Now my idea was to replace $y$ with $kx$, since that gives the restriction of $F$ to a line through the origin,
and then to check the second-order behaviour to prove that the point is a local minimum.
Is this the right way ?
|
Your approach is the right one. Note first that $F(0,0)=0$ and that $F$ factors:
$$
F(x,y)=2x^4-3x^2y+y^2=(y-x^2)(y-2x^2),
$$
so $F$ is negative between the parabolas $y=x^2$ and $y=2x^2$; in particular the origin is not a local minimum of $F$ itself. (This is the classical example showing that being a minimum along every line through a point is strictly weaker than being a local minimum.) For the restrictions to lines: along $y=kx$,
$$
F(x,kx)=k^2x^2-3kx^3+2x^4,
$$
which for $k\neq 0$ is positive for all sufficiently small $x\neq 0$, since the $k^2x^2$ term dominates (equivalently, its second derivative at $0$ is $2k^2>0$); for $k=0$ we get $F(x,0)=2x^4>0$, and on the vertical line $x=0$ we get $F(0,y)=y^2>0$. Hence $(0,0)$ is a strict local minimum of the restriction of $F$ to every line through the origin.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Conditions for a supremum of a set. Suppose a function $f(x)$ is continuous on $[a, b]$ and there exists $x_0 \in (a, b)$ such that $f(x_0) > 0$. Then define the set
$$A = \{ a \le x < x_0 \mid f(x) = 0 \}.$$
We say $c = \sup A$
Does $c$ necessarily satisfy $f(c) = 0$? Why/why not?
I am confused about sets so this will be helpful!
|
Yes, provided that $A$ is not empty: Since $c$ is the supremum of $A$, we find a sequence $(x_n)\subset A$ of elements of $A$ converging to $c$. Since each number $x_n$ is an element of $A$, we must have $\color{blue}{f(x_n)=0}$. Since $f$ is $\color{green}{\text{continuous}}$, we conclude
\begin{align*}
f(c)=f\left(\lim_{n\to\infty} x_n\right)\color{green}=\lim_{n\to\infty}f(x_n)\color{blue}=\lim_{n\to\infty}0=0.
\end{align*}
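A toy example to make the set concrete: take $[a,b]=[-1,1]$, $f(x)=x$ and $x_0=\tfrac12$, so that $f(x_0)>0$. Then $A=\{-1\le x<\tfrac12 \mid f(x)=0\}=\{0\}$, so $c=\sup A=0$ and indeed $f(c)=0$. If instead $f>0$ on all of $[a,x_0)$ (e.g. $f(x)=x+2$ on $[-1,1]$), then $A=\emptyset$ and $c$ does not exist; this is why the nonemptiness proviso is needed.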
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1124924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Completing simplification step when solving a recurrence I am trying to understand a simplification step in one of the recurrence examples solved by repeated substitution in a book of algorithms problems I found on Github. I am using it for extra practice in solving recurrences. Anyways, here's the step I am stuck on:
I don't understand how to get from this:
$$3^{i-1}(3T(n-i)+2) + 2\sum_{j=0}^{i-2}3^j$$
To this:
$$3^iT(n-i)+2\sum_{j=0}^{i-1}3^j$$
The part I am particularly confused about is how the summation at the end of the recurrence changes from the first part above to the second. I don't understand how the upper limit in the summation changes from $i-2$ to $i-1$. I imagine I am forgetting something simple about how summations work. Any help is appreciated. I also appreciate excruciatingly detailed steps in showing the algebra or simplification math, if it's not too much trouble. This is often how I get stuck; when a simple step is omitted for brevity.
|
$$3^{i-1}\color{green}(3T(n-i)+2\color{green}) + 2\sum_{j=0}^{i-2}3^j$$
$$\color{green}{3^{i-1}3}T(n-i)+3^{i-1}\color{green}2 + \color{green}2\sum_{j=0}^{i-2}3^j$$
$$3^iT(n-i) + 2(\color{green}{3^{i-1}+\sum_{j=0}^{i-2}3^j})$$
$$3^iT(n-i) + 2\sum_{j=0}^{i-1}3^j$$
The $3^{i-1}$ produced by distributing the $2$ is exactly the missing $j=i-1$ term of the sum, which is why the upper limit rises from $i-2$ to $i-1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Find all planes which are tangent to a surface I'm given the surface $z=1-x^2-y^2$ and must find all planes tangent to the surface and contain the line passing through the points $(1, 0, 2)$ and $(0, 2, 2).$ I know how to calculate tangent planes to the surface given one point, but how would I do so given two points? Any push in the right direction would be appreciated. Thank you.
|
Given $p_1,p_2$ and a surface $G(p) = 0$ with $p = (x,y,z)$, we seek a plane $\Pi:\ (p-p_1)\cdot\vec n=0$ with $p_1,p_2\in \Pi$ and such that $\Pi$ is tangent to the surface at some point $p^*$.
Writing $\nabla G(p^*)$ for the normal to the surface $G(p)=0$ at the point $p^*$, we have the conditions
$$
\cases{
\nabla G(p^*) = \lambda\vec n\\
(p_2-p_1)\cdot \vec n = 0\\
G(p^*) = 0\\
(p^*-p_1)\cdot\vec n = 0\\
\|\vec n\| = 1
}
$$
seven equations and seven $(p^*,\vec n,\lambda)$ unknowns.
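For this particular surface, the general system can be shortcut by parameterising the tangent plane by its point of tangency $(x_0,y_0)$. A small sympy sketch (the two tangency points in the final comment were verified by substitution):
import sympy as sp
x0, y0 = sp.symbols('x0 y0', real=True)
# tangent plane to z = 1 - x^2 - y^2 at (x0, y0)
def plane_z(X, Y):
    return 1 - x0**2 - y0**2 - 2*x0*(X - x0) - 2*y0*(Y - y0)
# require the plane to contain (1, 0, 2) and (0, 2, 2)
sols = sp.solve([plane_z(1, 0) - 2, plane_z(0, 2) - 2], [x0, y0], dict=True)
print(sols)  # two tangency points: (2, 1) and (-2/5, -1/5)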
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
A relationship between central-by-finite groups and FC-groups A group $G$ is said to be an FC-group if for every $x\in G$ the conjugacy class $x^G$ is finite. Equivalently, $G$ is an FC-group if $|G:C_G(x)|$ is finite for all $x \in G$.
A group $G$ is said to be central-by-finite if the center of $G$ has finite index in $G$.
It is clear that if $G$ is a central-by-finite group then $G$ is an FC-group (in fact, $G$ is a BFC-group; for the definition of a BFC-group see D. Robinson, Finiteness Conditions and Generalized Soluble Groups).
It is also true that if $G$ is a finitely generated FC-group, then $G/Z(G)$ and $\operatorname{Tor}(G)$ are both finite.
So here is my question:
Let $G$ be a locally finite group. Suppose that $G$ is an FC-group. Is $G$ then necessarily central-by-finite?
If not, I would like a counterexample.
|
A central product of countably infinitely many copies of $D_8$ (or more generally of any extraspecial group) is a counterexample. This group has the presentation
$$\langle x_i,y_i,z\ (i \in {\mathbb N}) \mid z^2=1,\ x_i^2=y_i^2=1,\ [x_i,y_i]=z\ (i \in {\mathbb N}),\
{\rm all\ other\ pairs\ of\ generators\ commute}\ \rangle.$$
Its centre is the finite subgroup $\langle z \rangle$, which has infinite index, so the group is not central-by-finite. The conjugacy classes all have size $1$ or $2$, so it is a BFC-group. It is also locally finite: finitely many elements involve only finitely many of the generators $x_i,y_i$, hence lie in a central product of finitely many copies of $D_8$, which is a finite $2$-group.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How do I prove that if $p$ is prime then $p$ divides $2^{p}-2$? I know that if $p$ divides $2^{p}-2$ can be written as $2^p - 2 \equiv 0 \bmod p$, but then I get stuck. Im not sure how to take an approach on this.
|
$$(2\cdot1)\cdot(2\cdot 2)\cdots(2\cdot (p-1))=1\cdot2\cdots (p-1)\ \ \ \ (\text{ mod } p ),$$
where the factors on the left and right are equal, though not necessarily in the order written. (For $p=2$ the claim is immediate, since $2^2-2=2$, so assume $p$ is odd; then $2$ is invertible mod $p$.)
The reason is that the remainders of $2\cdot i$ mod $p$ are all different (and different from zero) for the different values $1\leq i\leq p-1$, and are therefore exactly the numbers $1,2,\ldots,p-1$.
Then
$$2^{p-1}\cdot(p-1)!=(p-1)!\ \ \ \ (\text{ mod }p )$$
$$2^{p-1}=1\ \ \ \ (\text{ mod }p )$$
$$2^{p}=2\ \ \ \ (\text{ mod }p )$$
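A quick empirical check of the congruence (a sketch; the right-hand side is written as $2 \bmod p$ so that the prime $p=2$ is covered too):
from sympy import primerange
assert all(pow(2, p, p) == 2 % p for p in primerange(2, 1000))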
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
What is the derivative of this? I have a function of the following form:
$J = \|W^TW-I\|_F^2$
Where, $W$ is a matrix and $F$ is the Frobenius Norm.
How can I find the derivative of $\frac{\partial J}{\partial W}$ ?
|
Assuming the norm comes from the real inner product, the derivative of the norm can be found as
$$\left.\frac{d}{dt}\right|_{0}\|(W+tH)^T(W+tH)-I\|^{2}\\
=\left.\frac{d}{dt}\right|_{0}\langle (W+tH)^T(W+tH)-I,(W+tH)^T(W+tH)-I\rangle\\
=2\langle W^TW-I,H^TW+W^TH\rangle$$
We now use the fact that, for real matrices, the adjoint with respect to this inner product is the plain transpose, $\langle Ax,y\rangle=\langle x,A^Ty\rangle$ (and similarly for operators acting from the right), together with the symmetry of the real inner product, to see that the two $H$-linear terms are in fact the same:
$$
\langle W^TW-I,W^TH\rangle=\langle W(W^TW-I),H\rangle
$$
$$
\langle W^TW-I,H^TW\rangle=\langle H(W^TW-I),W\rangle\\=\langle H,W(W^TW-I)^T\rangle\\
=\langle H,W(W^TW-I)\rangle\\
=\langle W(W^TW-I),H\rangle
$$
Summing up, the derivative is the linear map
$$
4\langle W(W^TW-I),H\rangle
$$
Then, and only because we have the inner product, we can write the gradient vector: $$\frac{\partial J}{\partial W}=4W(W^TW-I)$$
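A quick numerical check of this gradient (a sketch, comparing a central finite difference of $J$ along a random direction $H$ with $\langle \nabla J, H\rangle$):
import numpy as np
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
H = rng.standard_normal((4, 3))
J = lambda M: np.linalg.norm(M.T @ M - np.eye(M.shape[1]), 'fro')**2
grad = 4 * W @ (W.T @ W - np.eye(W.shape[1]))
eps = 1e-6
directional = (J(W + eps*H) - J(W - eps*H)) / (2*eps)
print(directional, np.sum(grad * H))  # should agree to high precision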
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Two questions about discrete valuation rings of varieties Let $X$ be a proper, normal variety over $\mathbb{C}$, and $k(X)$ be its field of rational functions. I think the following two statements are true, but I was unable to give a proof or find the references:
(1) For any $f \in k(X), f \neq 0$, there are only finitely many discrete valuations $v$ of $k(X)$ such that $v(f) \neq 0$.
(2) For any discrete valuation $v$ of $k(X)$, there exists a variety $Y$, birational to $X$, and a Weil divisor $E$ such that the valuation given by this divisor is the same as $v$.
Any suggestion for either problem is welcome!
|
$\textbf{Wrong answer. See Comments!}$
$\def\OO{{\mathcal O}}\def\CC{{\mathbb C}}$I think the following argument shows that in fact $v$ is the valuation associated to a prime Weil divisor on $X$ from which part $(1)$ follows as well.
Let $v$ be such a discrete valuation. Consider the valuation ring $\mathcal{O}_v \subset k(X)$ corresponding to $v$. Now by the valuative criterion of properness, we have a lift of the map $\operatorname{Spec} \OO_v \to \operatorname{Spec} \CC$ to a morphism $\operatorname{Spec} \OO_v \to X$ that commutes with the morphism from $\operatorname{Spec} k(X)$. So the generic point of $\OO_v$ is sent to the generic point of $X$ and the closed point of $\OO_v$ is sent to a point $p \in X$.
This induces a morphism of local rings $\OO_p \to \OO_v$ commuting with the inclusions into $k(X)$. Thus $\OO_p$ includes into $\OO_v$ as subrings of $k(X)$; but $X$ is normal, so $\OO_p$ is integrally closed, and so is $\OO_v$ since it is a DVR; thus $\OO_p = \OO_v$. The dimension of $\OO_v$ is $1$ and so $p$ is a height one prime. That is, $p$ is the generic point of a prime Weil divisor $D$ on $X$. But the valuation associated to a divisor is exactly the valuation of the local ring at the generic point of that divisor.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Applying the Cauchy Schwarz Inequality Let $A = (a_{ij})$ be an $n \times n$ real matrix, $I = (\delta_{ij})$ the $n \times n$ identity matrix, $b \in \mathbb{R}^n$. Suppose that $$\|A - I\|_2 = \left(\sum_{i=1}^n \sum_{j=1}^n (a_{ij} - \delta_{ij})^2\right)^{\frac{1}{2}} < 1$$ 1) Use the Cauchy-Schwarz Inequality to show that the map $T: \mathbb{R}^n \to \mathbb{R}^n$ defined by $T(x) = x - Ax + b$, satisfies $\rho(Tx, Ty) \leq \alpha \rho(x,y)$ for some $\alpha \in (0,1)$, where $\rho$ stands for the unique standard Euclidean distance in $\mathbb{R}^n$.
I know that the CSI in $\mathbb{R}^n$ states that $$\left(\sum_{i=1}^n x_i y_i\right)^2 \leq \left(\sum_{i=1}^n x_i^2\right) \left(\sum_{j=1}^n y_j^2\right)$$ but I'm not really sure how we can apply this to our problem. We have the map $T$ defined above with a certain metric property. What can we do with it?
|
Write $Tx = b - (A - I)x$. Then you can see that $Tx - Ty = (A - I)(y - x)$, which implies $\rho(Tx,Ty) \le \alpha \rho(x,y)$, with $\alpha = \|A - I\|_2 < 1$.
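To see exactly where Cauchy-Schwarz enters, write $M=A-I$ and $v=y-x$; the $i$-th entry of $Mv$ is the inner product of the $i$-th row of $M$ with $v$, so
$$\|Mv\|^2=\sum_{i=1}^n\left(\sum_{j=1}^n m_{ij}v_j\right)^2\leq\sum_{i=1}^n\left(\sum_{j=1}^n m_{ij}^2\right)\left(\sum_{j=1}^n v_j^2\right)=\|A-I\|_2^2\,\|v\|^2,$$
which is precisely the estimate $\rho(Tx,Ty)\leq\alpha\,\rho(x,y)$.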
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Alternative proof of a transpose property I am asked to prove; $$(AB)^T=B^TA^T$$ although it is very simple to prove it by the straight forward way, in the exercise I am asked to prove it without using subscripts and sums, directly from the following property of inner product of real vectors:$$\langle A\textbf{x},\textbf{y}\rangle =\langle \textbf{x},A^T\textbf{y}\rangle$$
Where $A$ is an $m\times n$ real matrix, $x \in \mathbb{R}^n$,$y \in \mathbb{R}^m$
I don't know how to approach this problem, any suggestions?
|
Use the nondegeneracy of the inner product: if
$$\langle x,y_1\rangle=\langle x,y_2\rangle$$
holds for every vector $x\in\mathbb{R}^n$, then $y_1=y_2$ (take $x=y_1-y_2$ to get $\langle y_1-y_2,y_1-y_2\rangle=0$). Fix $y\in\mathbb{R}^m$. From the property you have quoted, for every $x$,
$$\langle ABx,y\rangle=\langle Bx,A^Ty\rangle=\langle x,B^TA^Ty\rangle$$
and
$$\langle ABx,y\rangle=\langle x,(AB)^Ty\rangle.$$
Hence $B^TA^Ty=(AB)^Ty$. Since $y$ was arbitrary, $B^TA^T=(AB)^T.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1125835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|