Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Closest point to a unit circle from a point inside it I'm working on some code at the moment that calculates the closest point on a circle to a point inside it. Let's say we have the point $(x_0, x_1)$. To find the closest point on the unit circle, the code first calculates $\rho = \sqrt{x_0^2 + x_1^2}$, and then the closest point is $(x_0 / \rho, x_1 / \rho)$.
I don't know much about geometry and I don't understand how this works. I'd really appreciate it if someone could explain to me why and how this works.
| The radius that passes through the center and your point $(x_0,y_0)$ also gives the direct path to the closest point on the circle. So the vector $(x_0,y_0)$ needs to be rescaled until its magnitude is $1$, which is accomplished by taking $(x_0,y_0)/\rho$. (And this also applies to points outside the circle.)
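Since the question arose from code, here is a minimal sketch of the computation in Python (the function name and the zero-vector check are my own additions; the formula is exactly the one quoted above):

```python
import math

def closest_point_on_unit_circle(x0, x1):
    """Radially project (x0, x1) onto the unit circle.

    Works for points inside or outside the circle; the origin is
    excluded because every point of the circle is equally close to it.
    """
    rho = math.hypot(x0, x1)  # distance from the origin
    if rho == 0:
        raise ValueError("the origin has no unique closest point")
    return (x0 / rho, x1 / rho)

# A point inside the unit circle (rho = 0.5, so the vector is scaled up):
px, py = closest_point_on_unit_circle(0.3, 0.4)
print(px, py)  # approximately (0.6, 0.8)
```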
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is a topological group action continuous if and only if all the stabilizers are open? Let $G$ be a topological group and $(X,\mu)$ be a $G$-set, i.e. $\mu$ defines an action $X \times G \rightarrow X$.
Is it then true that $\mu$ is continuous if and only if for every $x \in X$ the stabilizer subgroup $G_x$ is open in $G$?
If yes, how does one prove this?
| Neither implication is true.
1) Let $X$ be a topological space admitting a discontinuous bijection $f: X \rightarrow X$. (E.g. we may take $X = \mathbb{R}$.) Let $G = \mathbb{Z}$ with the discrete topology. There is a unique set-theoretic action of $G$ on $X$ such that $1 \cdot x = f(x)$. Because $1 \cdot = f$ is discontinuous, the action is not continuous. However, since $\mathbb{Z}$ is discrete, all subgroups are open. In particular the point stabilizers $G_x$ are open.
2) Let $G$ be a nondiscrete topological group [Pierre-Yves Gaillard suggests $\mathbb{R}$ in his comment above], and view the group law $G \times G \rightarrow G$ as a left action of $G$ on itself. By definition this action is continuous, but the point stabilizers are $\{e\}$, which is not open since $G$ is not discrete.
Added: Wait! Upon closer reading, it was not explicitly said that $X$ is a topological space, only a $G$-set. In fact, if we view $X$ as a "naked set" and endow it with the discrete topology, then the equivalence becomes true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
} |
Circle and Line segment intersection I have a line segment (begin $(x_1,y_1)$, end $(x_2,y_2)$, with $D=5$, let’s say) and a circle (radius $R$, center $(x_3,y_3)$)
How can I check whether my line segment intersects my circle?
picture
http://kepfeltoltes.hu/120129/inter_www.kepfeltoltes.hu_.png
| You can do this analytically:
*
*write the equation of the circle: $(x- x_R)^2 + (y - y_R)^2 = R^2$
*write the equation of the line supporting the segment: $ax + by + c = 0$, with $a, b, c$ depending on the endpoints of your segment
*compute the intersection by solving a second-order polynomial equation: there are at most 2 points
*check whether each intersection point lies within the segment
Enjoy ;-)
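The steps above can be sketched in code. This is an illustration rather than the only way to do it: it parametrises the segment instead of using the implicit line equation $ax+by+c=0$, which folds the final in-segment check into filtering the roots $t$ to $[0,1]$. All names are my own.

```python
import math

def segment_circle_intersections(x1, y1, x2, y2, x3, y3, r):
    """Intersect the segment (x1,y1)-(x2,y2) with the circle of
    radius r centred at (x3,y3).

    The segment is P(t) = P1 + t*(P2 - P1) for t in [0, 1]; plugging
    this into |P(t) - C|^2 = r^2 gives a quadratic a*t^2 + b*t + c = 0.
    """
    dx, dy = x2 - x1, y2 - y1
    fx, fy = x1 - x3, y1 - y3
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return []  # degenerate segment, or the full line misses the circle
    roots = sorted({(-b - math.sqrt(disc)) / (2 * a),
                    (-b + math.sqrt(disc)) / (2 * a)})
    # Keep only the roots that actually lie on the segment:
    return [(x1 + t * dx, y1 + t * dy) for t in roots if 0 <= t <= 1]

# A horizontal segment through the centre of the unit circle:
print(segment_circle_intersections(-2, 0, 2, 0, 0, 0, 1))
# [(-1.0, 0.0), (1.0, 0.0)]
```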
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Why is $S^1 \times S^1$ a Torus? Call a "torus" that geometric shape that "looks like" a doughnut. Frequently, one encounters the assertion that $S^1 \times S^1$ is a "torus" where $S^1$ is the unit circle. Now, if I think about this, I can understand the justification for calling this a torus, but I'm trying to understand how one would go about actually proving this. Indeed, there exist analytical descriptions of the torus such as this one provided by Wikipedia. So one could theoretically, with enough inspiration, find a homeomorphism $h: S^1 \times S^1 \rightarrow G(T)$ where $G(T)$ denotes the graph of the torus as realized by the analytical description. This approach, assuming it works, uses coordinates and in any event wouldn't be very enlightening.
So, my question is, is there a coordinate-free way to prove that $S^1 \times S^1$ is homeomorphic to this thing we call a doughnut?
My thoughts on this are: I believe what is key is how one chooses to define a torus. I am familiar with constructing a torus by identifying opposite sides of a rectangle and this seems like a pretty natural definition. It is intuitively clear that if the rectangle is configured as
$$
\begin{align*}
A---- & B \\
|\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;&| \\
C---- & D
\end{align*}
$$
then one can map one circle homeomorphically onto the identification of AB and CD and then the other circle onto the identification of AC and BD. However, this really isn't a very precise argument and it seems to me that making it precise would eventually involve coordinates. Am I onto the right track with this approach or is there a better way of looking at the problem?
| Wow, it's been a long time since the last post.
Well, there's a nice way to show that it is a homeomorphism; it's given in the book "Basic Topology" by M.A. Armstrong (pp. 67-68).
I'll give a rough sketch :
Take $X = I \times I$ where $I = [0,1]$ and partition $X$ into:
1) The set {(0,0), (1,0), (0,1), (1,1)} (four corner points)
2) Sets consisting of pairs of points $(x, 0), (x, 1)$ with $x \in (0,1)$
3) Sets consisting of pairs of points $(0, y), (1, y)$ with $y \in (0,1)$
4) Sets consisting of a single interior point $(x, y)$ where $x, y \in (0,1)$
Then the resulting identification space is the torus. If you think of $S^1$ as sitting inside the complex plane, define a map $f : I \times I \rightarrow S^1 \times S^1$ by
\begin{align*}
(x,y) \mapsto (e^{2\pi ix}, e^{2\pi iy})
\end{align*}
Now look at $f^{-1}(z)$ for $z \in S^1 \times S^1$. These fibers partition $I \times I$, and they are exactly the sets mentioned above.
$\Rightarrow f$ is an identification map. (If $f : X\rightarrow Y$ is a surjective (continuous) map, $X$ is compact and $Y$ is Hausdorff, then $f$ is an identification map)
Thus, the two are homeomorphic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 1
} |
Are "axioms" in topology theory really axioms? If I understand correctly, axioms are those statements that we assume to be true, instead of proving to be true.
I have seen that in topology theory, various axioms of countability and separation axioms seem to be definitions of some concepts, instead of being assumptions. So I was wondering if "axioms" in topology theory are really axioms?
If not, is this kind of naming convention common in other branches of mathematics?
Thanks and regards!
| Axioms are merely background assumptions defining the objects that are to be under consideration. If you’re studying Abelian groups, you’ll want to take commutativity of the group operation as an axiom. If you’re studying manifolds, you may want to take second countability as an axiom. For most mathematical purposes you probably want to take the axiom of choice as an assumption. What axioms you choose for your geometry depends on what kind of geometry you want. And so on. The notion that axioms are true is, as André said in the comments, very dated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Counterexample of G-Set
If every element of a $G$-set is left fixed by the same element $g$ of $G$,
then $g$ must be the identity $e$.
I believe this to be true, but the answers say that it's false. Can anyone provide a counter-example? Thanks!
| For a nontrivial example, consider the action of a group $G$ on itself by conjugation, that is, let $g \circ h=g^{-1}hg$. Now $g \circ h=h$ for all $h \in G$ means that $g$ and $h$ commute for all $h \in G$, that is $g$ is in the center $Z(G)$. And $Z(G)$ need not be just the identity element.
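A quick computational sanity check of this kind of counterexample (a sketch with my own helper names): take the dihedral group of the square, whose centre is known to be $\{e, r^2\}$, generate it as permutations, and collect every $g$ with $g^{-1}hg = h$ for all $h$.

```python
from itertools import product

def compose(p, q):
    """(p o q)(i) = p(q(i)), permutations as tuples on {0,1,2,3}."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

# Dihedral group of the square, acting on the vertices {0,1,2,3}:
e = (0, 1, 2, 3)
r = (1, 2, 3, 0)   # rotation by a quarter turn
s = (0, 3, 2, 1)   # a reflection
G = {e}
frontier = {r, s}
while frontier:     # naive closure under composition
    G |= frontier
    frontier = {compose(a, b) for a, b in product(G, G)} - G

# g fixes every h under the conjugation action iff g^{-1} h g = h for
# all h, i.e. iff g lies in the centre Z(G).
centre = {g for g in G
          if all(compose(inverse(g), compose(h, g)) == h for h in G)}
print(len(G), sorted(centre))  # 8 elements; the centre is {e, r^2}
```

Here the non-identity element $r^2$ (the half turn) fixes every group element under conjugation, exactly as the answer describes.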
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Tensor products: proving that $I \otimes_R M \cong IM$ Assume if necessary that the ring has a $1$ or is commutative (I'm not sure whether it's needed).
Given a ring $R$, an ideal $I$ of $R$, and an $R$-module $M$, prove that:
$
I \otimes _R M \cong IM
$
where $
IM = \left\{ {x \in M:x = \sum\limits_{finite} {i_k m_k \,\,\,i_k \in I\,\,m_k \in M} } \right\}
$
This is what I did. First I defined the obvious function $
\varphi\colon I\times M \to \,IM
$ which is bilinear, so it induces an $R$-module homomorphism $$
\varphi ^ \bullet \colon I \otimes _R M \to IM
$$
and satisfies $
\varphi ^ \bullet \left( {i \otimes m} \right) = \varphi \left( {i,m} \right) = im
$
I proved that $
\varphi ^ \bullet
$ is surjective since, given $
\sum\limits_{finite} {i_k m_k } \in IM
$ clearly $
\varphi ^ \bullet \left( {\sum\limits_{finite} {i_k \otimes m_k } } \right) = \sum\limits_{finite} {i_k m_k }
$
But how can I prove the injectivity?
| "But how can I prove the injectivity?"
Dear Susuk, you can't because it is false and it is a good omen for you that you couldn't prove it!
Here is a counterexample:
Let $k$ be any field. Consider $R=k[X]/(X^2)=k[\epsilon]$ and let $I$ be the ideal $I=(\epsilon)=k\cdot \epsilon \subset R$.
Take $M=I$. We have $I\cdot M=I^2=(0)$ and in order to show that your map $
\varphi ^ \bullet :I \otimes _R I \to I^2$ is not injective, it suffices to prove that $I \otimes _R I\neq 0$.
Now, since $I$ is killed by $\epsilon$, we have $I \otimes _R I=I \otimes _{R/(\epsilon)} I=I \otimes _k I$, and the latter vector space is one-dimensional over the field $k$, hence non-zero.
(Of course if $M$ is flat over $R$, the isomorphism $I \otimes _R M \stackrel {\simeq} {\to}IM $ holds)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Proof that $\mathbb{Q}$ is dense in $\mathbb{R}$ I'm looking at a proof that $\mathbb{Q}$ is dense in $\mathbb{R}$, using only the Archimedean Property of $\mathbb{R}$ and basic properties of ordered fields.
One step asserts that for any $n \in \mathbb{N}$, $x \in \mathbb{R}$, there is an integer $m$ such that $m - 1 \leq nx < m$. Why is this true? (Ideally, this fact can be shown using only the Archimedean property of $\mathbb{R}$ and basic properties of ordered fields...)
| If $\mathbb{Q}$ were not dense in $\mathbb{R}$, then there would be two distinct members $x,y\in\mathbb{R}$ such that no member of $\mathbb{Q}$ is between them. I claim that the distance $\varepsilon=|x-y|$ between $x$ and $y$ is an infinitesimal. By the Archimedean property, this implies $\varepsilon=0$, contradicting $x\neq y$.
If
$$
\underbrace{\varepsilon+\cdots+\varepsilon}_{n\text{ terms}} > 1
$$
then $\varepsilon>1/n$, so some rational number of the form $k/n$ is between $x$ and $y$.
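The argument can also be made constructive. A sketch in Python (names mine; I use exact `Fraction` endpoints because float endpoints can be off by one unit in the last place, which matters for strict inequalities): pick $n$ with $1/n < y - x$ by the Archimedean property, then take $m = \lfloor nx\rfloor + 1$, the integer from the question's asserted step $m-1 \le nx < m$; the rational $m/n$ lies strictly between $x$ and $y$.

```python
from fractions import Fraction
from math import floor

def rational_between(x, y):
    """Given x < y, construct a rational in (x, y): pick n with
    1/n < y - x (Archimedean property); the first multiple of 1/n
    strictly above x is then still below y."""
    assert x < y
    n = floor(1 / (y - x)) + 1   # guarantees 1/n < y - x
    m = floor(n * x) + 1         # smallest integer m with m/n > x
    return Fraction(m, n)

# Exact inputs sidestep floating-point rounding at the endpoints:
q = rational_between(Fraction(141, 100), Fraction(142, 100))
print(q)  # 143/101
```

Why $m/n < y$: since $m \le nx + 1 < nx + n(y-x) = ny$.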
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 1
} |
If a coin is flipped 10 times, what is the probability that it will land heads-up at least 8 times? I absolutely remember learning this is middle school, yet I cannot remember how to solve it for the life of me. Something to do with nCr, maybe? ...
Thanks for any help.
| Let's split it into cases: 8 heads, 9 heads, and 10 heads. There are $\binom{10}{8}=45$ sequences with 8 heads, $\binom{10}{9}=10$ sequences with 9 heads, and of course 1 sequence with 10 heads.
$45+10+1=56$ favorable sequences. Since all $2^{10}=1024$ sequences are equally likely, the probability is $56/1024=7/128$.
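Dividing the favourable count by the $2^{10}$ equally likely sequences gives the probability. A short sketch in Python:

```python
from fractions import Fraction
from math import comb

favourable = comb(10, 8) + comb(10, 9) + comb(10, 10)  # 45 + 10 + 1
probability = Fraction(favourable, 2 ** 10)            # 56 / 1024
print(favourable, probability)  # 56 7/128
```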
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Cartesian Product in mathematics When using the Cartesian product with three collections, do you take two collections at a time OR all three together to calculate the product?
My question is if you have more than two collections, let's say A, B and C
A = {1,2,3}
B = {4,5,6}
C = {7,8}
A x B x C
{1,2,3} x {4,5,6} x {7,8}
With the Cartesian product, do you calculate A x B, then B x C? And maybe A x C? That would mean you take only two collections at a time.
OR
Do you take all three collections at the same time A x B x C?
| For $n \in \mathbb{N}$, the $n$-ary Cartesian product of $n$ sets $A_1, \dots, A_n$, denoted $A_1 \times \cdots \times A_n$, is defined to be the set of all $n$-tuples $(a_1, \dots, a_n)$ for which $a_i \in A_i$ for each $i$.
So in particular
$$A \times B \times C = \{ (a,b,c)\, :\, a \in A,\ b \in B,\ c \in C \}$$
This is distinct from
$$(A \times B) \times C = \{ ((a,b),c)\, :\, a \in A,\ b \in B,\ c \in C \}$$
each of whose elements is an ordered pair, the first 'coordinate' of which is itself an ordered pair.
Nonetheless, there is a very natural bijection
$$\begin{align}
A \times B \times C & \to (A \times B) \times C \\
(a,b,c) &\mapsto ((a,b),c) \end{align}$$
and similarly for $A \times (B \times C)$.
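In code, the two notions look like this. Python's `itertools.product` computes the $n$-ary product in one go, while the nested version pairs things up two at a time (a sketch using the question's sets):

```python
from itertools import product

A = [1, 2, 3]
B = [4, 5, 6]
C = [7, 8]

# Ternary product: all triples (a, b, c) taken at once.
triples = list(product(A, B, C))
print(len(triples))   # 18 = 3 * 3 * 2
print(triples[0])     # (1, 4, 7)

# (A x B) x C: two at a time, giving nested pairs ((a, b), c).
nested = [(ab, c) for ab in product(A, B) for c in C]
print(nested[0])      # ((1, 4), 7)
```

The map `(a, b, c) -> ((a, b), c)` is the natural bijection mentioned above; both lists have the same length.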
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/103959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Suppose $A \subseteq C$ and $B$ and $C$ are disjoint. Prove that $x \in A \rightarrow x \notin B$ Suppose $A \subseteq C$ and $B$ and $C$ are disjoint. Prove that $x \in A \rightarrow x \notin B$.
Basically I need to prove this.
| In proving these things you need to show that no matter what $A,B,C$ look like, if $A\subseteq C$ and $B\cap C=\varnothing$ hold, then $x\in A\rightarrow x\notin B$ is true.
For this we want to take an arbitrary element $x$ of $A$, use the assumption $A\subseteq C$ to deduce more about $x$, and use the assumption $B\cap C=\varnothing$ to deduce that if $x$ were in $B$ then the intersection would not be empty; therefore $x\notin B$.
I leave the formal and technical details to you, since this is your homework assignment after all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Pythagorean Theorem Proof Without Words 6 Your punishment for awarding me a "Nice Question" badge for my last question is that I'm going to post another one from Proofs without Words.
How does the attached figure prove the Pythagorean theorem?
P.S. No, I will not go through the entire book page-by-page asking for help.
P.P.S. No, I am not a shill for the book. Just a curious math student.
| I think the diagram ought to include lines that connect all the three points on the large circle into a triangle. It is then (I think) supposed to be known that an inscribed triangle that has a diameter as one of its sides is right, and that an altitude towards the hypotenuse divides a right triangle into two similar triangles. The proportion $\frac{c+a}{b}=\frac{b}{c-a}$ then comes from these two similar triangles. Cross multiplying with the denominators produces $c^2-a^2=b^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 3
} |
Groups of symmetries What are the symmetries of a solid rectangular box whose length, width and height are all different?
I get a group of order 4: rotation by $180^\circ$, flips about a vertical and a horizontal axis, and the identity.
| If you are only allowing yourself "physical symmetries" (symmetries that can be realized by manipulation of the box in our familiar 3-space):
Place the center of the box at the origin. You get one symmetry by rotation by half a turn using the $z$-axis as pivot; you get one symmetry by rotation by half a turn using the $x$-axis as a pivot; you get one symmetry by rotation of 180 degrees with the $y$-axis as pivot.
If we number the corners of the box with $1,2,3,4$ on top and $5,6,7,8$ in the bottom, with $i+4$ under $i$, then we can identify the four rotations with permutations of the vertices. One of the rotations corresponds to $\sigma= (1,3)(2,4)(5,7)(6,8)$; another to $\tau=(1,8)(4,5)(2,7)(3,6)$; and the third one as $\rho=(1,6)(2,5)(4,7)(3,8)$.
Since
$$\begin{align*}\ \sigma\circ\tau=(1,3)(2,4)(5,7)(6,8)&\circ(1,8)(4,5)(2,7)(3,6)\\
& = (1,8)(4,5)(2,7)(3,6)\circ(1,3)(2,4)(5,7)(6,8)\\ &= (1,6)(2,5)(3,8)(4,7)\\&=\rho,\end{align*}$$
these three together with the identity form a Klein $4$-group. These are the four you've described.
Are there any others? Once you decide where vertices 1 and 4 map, everything else gets forced. 1 has only four possible locations, since orientation cannot change: either 1 maps to itself and 4 to itself; or 1 maps to 3 and 4 maps to 2; or 1 maps to 8 and 4 maps to 5; or 1 maps to 6 and 4 to 7. These are the four permutations given above, so that is all there is; it's not a cyclic group, because it has three elements of order two, so it is the Klein 4-group.
However: 'Symmetries' often includes reflections as well (so the dihedral groups include not only the rotations, but also the reflections). If we include reflections, then in addition to the four possibilities above, we can also exchange 1 and 4; map 1 to 2 and 4 to 3; map 1 to 5 and 4 to 8; or map 1 to 7 and 4 to 6. The first three are accomplished by reflecting about the coordinate planes (the $xy$-plane, the $xz$-plane, and the $yz$-plane, in some order depending on how you are picturing your box), the fourth one by reflection about the origin. This gives four other symmetries, and so the group corresponds to a group of order $8$. It cannot be cyclic, since it has the Klein $4$-group as a subgroup; it is a semidirect product of the Klein $4$-group and the cyclic group of order $2$.
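The composition $\sigma\circ\tau=\rho$ above is easy to verify mechanically. A small sketch (helper names mine), encoding each permutation as a dict on the vertices $1,\dots,8$:

```python
def cycles_to_map(cycles, n=8):
    """Build the permutation of {1..n} given by disjoint cycles."""
    perm = {i: i for i in range(1, n + 1)}
    for cycle in cycles:
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            perm[a] = b
    return perm

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return {i: p[q[i]] for i in p}

sigma = cycles_to_map([(1, 3), (2, 4), (5, 7), (6, 8)])
tau   = cycles_to_map([(1, 8), (4, 5), (2, 7), (3, 6)])
rho   = cycles_to_map([(1, 6), (2, 5), (4, 7), (3, 8)])
identity = cycles_to_map([])

print(compose(sigma, tau) == rho)         # True
print(compose(tau, sigma) == rho)         # True: sigma and tau commute
print(compose(sigma, sigma) == identity)  # True: each element has order <= 2
```

The three checks confirm the Klein $4$-group structure: the non-identity elements commute, each squares to the identity, and any two compose to the third.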
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
prove for all $n\geq 0$ that $3 \mid n^3+6n^2+11n+6$ I'm having some trouble with this question and can't really see how to prove this.
I have to prove $n^3+6n^2+11n+6$ is divisible by $3$ for all $n \geq 0$.
I have tried doing $\dfrac{m}{3}=n$ and then did $m=3n$
then I said $3n=n^3+6n^2+11n+6$ but now I am stuck.
| If you know what "mod 3" means then argue as follows:
$$n^3 + 6n^2 + 11n + 6 \equiv n^3 - n = (n-1)n(n+1) \equiv 0 \pmod 3 .$$
If you don't, then write this as:
$$ n^3 - n + 12n + 6n^2 + 6 = n(n+1)(n-1) + 3(2n^2 + 4n + 2), $$
and you're left with showing that both terms are divisible by $3$.
Now $n(n+1)(n-1)$ is always a multiple of $3$, because if a number is not a multiple of 3, then either its predecessor or its successor must be.
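A brute-force check of both the algebraic identity used above and the divisibility claim itself (a finite check is not a proof, but it is a good sanity test):

```python
# Check n^3 + 6n^2 + 11n + 6 = n(n+1)(n-1) + 3(2n^2 + 4n + 2)
# and the divisibility by 3, for a range of n.
for n in range(200):
    value = n**3 + 6 * n**2 + 11 * n + 6
    assert value == n * (n + 1) * (n - 1) + 3 * (2 * n**2 + 4 * n + 2)
    assert value % 3 == 0
print("verified for n = 0..199")
```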
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 9,
"answer_id": 2
} |
Rotations and the parallel postulate. If we take a full rotation to be $360^\circ$, then it seems that we can prove the following
Starting from the red point, we walk clockwise along the triangle. At each vertex, we must turn through the green angles marked to proceed down the adjacent sides of the triangle. When we return to the red point, we will have turned through one full rotation. This means that the sum of the exterior angles is $360^\circ$, implying the interior angles of the triangle sum to $180^\circ$.
The fact that the angles of a triangle sum to $180^\circ$ is well known to be equivalent to the parallel postulate and this made me wonder whether if the fact that a full rotation being $360^\circ$ is also equivalent to the parallel postulate?
I avoided stating the question using "exterior angles of a triangle sums to $360^\circ$" and instead used the more ambiguous term "rotations" to emphasize the fact that rotations seem to be more general. We can for example show that the interior angles of a heptagram sum to $180^\circ$ by noting that three full rotations are made while "walking" the heptagram. This should generalize to arbitrary closed polygons and seems stronger than the fact that the exterior angles of a triangle sum to $360^\circ$.
In summary, I would be interested in knowing the connections that this technique has to the parallel postulate as well as if this technique is a "rigorous" way of finding the internal angles of more complex shapes such as the heptagram.
| Good question. To begin with, the first part of the question is false. The "fact" that a full rotation is 360 degrees, is in fact a definition of the angular measure of a full rotation, and thus is not equivalent to the parallel postulate. You can think of it as defining what a degree is, namely, one part in 360 of a full rotation. We could have defined it to be say 100 "new degrees", but as it was, the ancients defined it as 360 degrees, presumably as a good approximation to one rotation of the earth around the sun. Thus one degree of rotation corresponds to (roughly) one day along earth's annual orbit around the sun.
For the last part of your question, indeed, this technique can be used to prove the connection between the parallel postulate and the interior angles sum of a triangle (and by extension to other polygons etc). The point is that to make your "walk clockwise along the triangle...the sum of the exterior angles is given as 360∘" statement rigorous, you need to use the parallel postulate to put all these exterior angles to the same point (ie to parallel transport all the exterior angles to one local point, and see that all the exterior angles add up to one full rotation), just that this idea is now so innate that you had implicitly assumed its validity, and as the other answers have shown, this is only true on the Euclidean plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Hofstadter's TNT: b is a power of 2 - is my formula doing what it is supposed to? If you've read Hofstadter's Gödel, Escher, Bach, you must have come across the problem of expressing 'b is a power of 2' in Typographical Number Theory. An alternative way to say this is that every divisor of b is a multiple of 2 or equal to 1. Here's my solution:
b:~Ea:Ea':Ea'':( ((a.a')=b) AND ~(a=(a''.SS0) OR a=S0) )
It is intended to mean: no divisor of b is odd or not equal to 1. E, AND and OR are to be replaced by the appropriate signs.
Is my formula OK? If not, could you tell me my mistake?
| I would express this as follows using the original notation:
$$
\forall c:\forall d: \mathtt{\sim} b=(S(Sc \cdot SS0) \cdot d)
$$
Or equivalently
$$
\mathtt{\sim} \exists c:\exists d: b=(S(Sc \cdot SS0) \cdot d)
$$
Please note that negation in the first formula applies to the whole equality and that the negation in the second formula applies to the whole formula.
From the Fundamental Theorem of Arithmetic we know that a power of two cannot have an odd divisor greater than $1$. This is exactly what we are using in the formulas above, where $S(Sc \cdot SS0)$ stands for $2(c+1)+1$, i.e. any odd number greater than $1$.
$Sc$ is used in $S(Sc \cdot SS0)$ in order to avoid the situation where $c=0$ and $d=b$ in the formula $S(c \cdot SS0)$ would turn the equality into $b=b$, which holds for any $b$.
These formulas detect all the powers of two: $2^0, 2^1, 2^2, \ldots$
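Interpreting the formulas over the natural numbers, $b$ satisfies them exactly when $b$ has no odd divisor greater than $1$. A small sketch checking that this criterion picks out precisely the powers of two in a range (function name mine):

```python
def has_no_odd_divisor_above_one(b):
    """True iff there are no c, d with b = (2(c+1)+1) * d, i.e. no
    odd divisor >= 3.  Note b = 0 fails, since 0 = m * 0 for any odd m."""
    if b <= 0:
        return False
    return all(b % odd != 0 for odd in range(3, b + 1, 2))

detected = [b for b in range(1, 600) if has_no_odd_divisor_above_one(b)]
print(detected)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```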
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
Equivalence Class for Abstract Algebra Class Let
$$R_3= \{(a,b)\mid a,b \in \mathbb{Z}\text{ and there exists }k \in \mathbb{Z} \text{ such that }a-b=3k\}.$$
I know there is an equivalence relation but I'm not 100% on what it means to be an equivalence class for this problem. In class we got 3: $\{0,3,6,9,\ldots\}$ and $\{1,4,7,10,-2,-5,\ldots\}$ and $\{2, 5, 8, 11, -1, -4,\ldots\}$.
I don't understand where these cells came from. Help?
| This question is old, but I'd like to share my view of it. I hope this offends no one.
What you are dealing with are equivalence classes modulo an integer. That is, you are dealing with equivalence classes defined by $[a]=\{b \in \mathbb{Z}:b \equiv a \pmod 3\}$ for an element $a$.
In the case of the relation of congruence modulo $n$, there are $n$ equivalence classes: $[0], [1], [2], \dots, [n-1]$. You can see that writing $[n]$ or $[n+1]$ would add nothing new, since $b \equiv n \pmod n$ reduces to $b \equiv 0 \pmod n$ and $b \equiv n+1 \pmod n$ reduces to $b \equiv 1 \pmod n$ (and, hence, $[n]=[0]$ and $[n+1]=[1]$). In other words, $[0], [1], [2],\dots, [n-1]$ are all of the equivalence classes.
For congruence modulo $3$, we thus have $[0], [1], $ and $[2]$. That is,
\begin{align}
[0]&=\{b \in \mathbb{Z}: b\equiv 0 \pmod 3\}\\
&=\{\dots,-3,0,3,6,9,\dots\}.\\
[1]&=\{b \in \mathbb{Z}: b\equiv 1 \pmod 3\}\\
&=\{\dots,-2,1,4,7,10,\dots\}.\\
[2]&=\{b \in \mathbb{Z}: b\equiv 2 \pmod 3\}\\
&=\{\dots,-1,2,5,8,11,\dots\}.
\end{align}
In other words, the three equivalence classes are all numbers that leave remainder $0$ when divided by $3$, all numbers that leave remainder $1$, and all numbers that leave remainder $2$.
This is the beauty of equivalence classes: They partition the set they are defined on into disjoint sets which as a whole form the entire set. i.e. For any given equivalence relation $E$ on set $S$ and the equivalence classes $[a]=\{b \in S: aEb\}$, $$\bigcup_{a \in S}[a]=S.$$
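A short sketch of this partition in Python, using the fact that $a - b = 3k$ for some integer $k$ exactly when $a$ and $b$ leave the same remainder modulo $3$ (Python's `%` always returns $0$, $1$ or $2$ for modulus $3$, even on negative inputs):

```python
from collections import defaultdict

classes = defaultdict(list)
for n in range(-6, 13):
    classes[n % 3].append(n)   # n % 3 is the class representative

print(classes[0])  # [-6, -3, 0, 3, 6, 9, 12]
print(classes[1])  # [-5, -2, 1, 4, 7, 10]
print(classes[2])  # [-4, -1, 2, 5, 8, 11]

# The classes are disjoint and their union is the whole range:
print(sum(len(c) for c in classes.values()))  # 19 = len(range(-6, 13))
```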
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is a sum of equidistributed sequences $\sum \{n \alpha\}$ equidistributed? We have two equidistributed sequences {n a} and {n b} (mod 1), in which a and b are irrational.
1) Is it true that the sum {na} + {nb} is equidistributed?
and
2) Is it true that {na} + {nb} = {n(a+b)} ?
For (1), n*a and n*b are both polynomials satisfying Weyl's criteria, and so the sum of these is I think also a polynomial satisfying the criteria (but this does not necessarily mean {na} + {nb} is equidistributed). For (2), well, I just tried a few calculations, so I am not sanguine about it but it seems true. Note: I do see that a + b might not be irrational, and in that case we would not satisfy Weyl's criteria in (2).
Thanks for any insights.
| (1): Not necessarily.
*
*We could take $a$ to be any irrational number in $(0,1)$ and $b=1-a$. In this case, for all $n>0$, $na+nb=n$ and $\{na\}+\{nb\}=1$.
*If 1, $a$ and $b$ are linearly independent over $\Bbb Q$, then by the Weyl equidistribution criterion, the sequence of points $(\{na\},\{nb\})$ will be equidistributed in the unit square. This means that $\{na\}+\{nb\}$ will not be equidistributed in the interval $[0,2]$ but will have a triangular probability density function, with maximum density at 1.
On the other hand, if $a=b$ and $a$ is irrational, then $\{na\}+\{nb\}=2\{na\}$ will be equidistributed in $[0,2]$, because $\{na\}$ is equidistributed in $[0,1]$. Also,
it is possible for $\{na\}+\{nb\}$ to be equidistributed in a subinterval of $[0,2]$. For example, if $a=\sqrt{2}$, $b=-1/\sqrt{2}$, set $z:=\{nb\}$; then $na=-2nb$, so
$$\{na\}+\{nb\}=\{-2z\}+z=\left\{
\begin{array}{l}
1-z, \qquad z\in(0,\frac12],\\
2-z, \qquad z\in(\frac12,1].
\end{array}
\right.$$
Since $z$ is equidistributed in $[0,1]$, it follows that $\{na\}+\{nb\}$ is equidistributed in $[\frac12,\frac32]$.
(2): Both $\{\{na\}+\{nb\}\}$ and $\{n(a+b)\}$ are integer translates of $na+nb$ which are in $[0,1)$, so $\{\{na\}+\{nb\}\}=\{n(a+b)\}$. However, $\{na\}+\{nb\}$ and $\{n(a+b)\}$ may differ by 1. For example, if you take $a=1/\sqrt{2}$, $b=1/\sqrt{2}$ and $n=1$, then $\{na\}+\{nb\}=2/\sqrt{2}=\sqrt{2}$, but $\{n(a+b)\}=\{\sqrt{2}\}=\sqrt{2}-1$.
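The $a=\sqrt{2}$, $b=-1/\sqrt{2}$ example is easy to check numerically. A sketch (tolerances are mine; floating point cannot resolve the open endpoints exactly, but all sampled values should land inside $[\frac12,\frac32]$):

```python
import math

def frac(t):
    """Fractional part in [0, 1), valid for negative t as well."""
    return t - math.floor(t)

a = math.sqrt(2)
b = -1 / math.sqrt(2)      # so that a = -2b
values = [frac(n * a) + frac(n * b) for n in range(1, 20000)]

# Per the case analysis above, every value lies in [1/2, 3/2]:
print(min(values), max(values))
```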
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that affine transformations are associative I'm having trouble with the associativity part of showing that $\text{Aff}(\mathbb{R}^2)$ is a group.
Recall that if $\left[ A, \overline{r} \right]$ denotes an affine transformation (where $A$ is the matrix representation of the linear transformation and $\overline{r}$ is the translation vector), then multiplication in $\text{Aff}(\mathbb{R}^2)$ is defined by $$\left[ A, \overline{r} \right] \cdot \left[ B, \overline{s} \right] := [AB,\; B^{-1}\overline{r} + \overline{s} ].$$
We must show that $$\left(\left[ A, \overline{r} \right] \cdot \left[ B, \overline{s}\right] \right) \cdot [ C, \overline{t} ] = \left[ A, \overline{r} \right] \cdot \left([ B, \overline{s}] \cdot [ C, \overline{t}] \right)$$
The expansion of the left side is $[ABC,\; C^{-1}(B^{-1}\overline{r} + \overline{s}) + \overline{t}]$. I've tried working from the other side and making them meet in the middle, but I can't seem to massage them the right way so that they're equal. What's the trick here?
| Just basic properties of matrices:
$$\begin{align*}
\Bigl( [A,\overline{r}][B,\overline{s}]\Bigr)[C,\overline{t}] &= [AB,B^{-1}\overline{r}+\overline{s}][C,\overline{t}]\\
&= [(AB)C,C^{-1}(B^{-1}\overline{r}+\overline{s})+\overline{t}]\\
&= [ABC, C^{-1}B^{-1}\overline{r} + C^{-1}\overline{s} + \overline{t}].\\
\ [A,\overline{r}]\Bigl([B,\overline{s}][C,\overline{t}]\Bigr) &= [A,\overline{r}][BC,C^{-1}\overline{s}+\overline{t}]\\
&= [A(BC), (BC)^{-1}\overline{r} + C^{-1}\overline{s} + \overline{t}].
\end{align*}$$
Now, from linear algebra, how can you express $(BC)^{-1}$ in terms of $B^{-1}$ and $C^{-1}$?
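That identity is what makes the two expansions agree, and the whole computation is easy to sanity-check numerically. A sketch with my own helpers, using matrices of determinant $\pm 1$ so the floating-point inverses are exact:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def mat_inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def vec_add(u, v):
    return [u[0] + v[0], u[1] + v[1]]

def aff_mul(x, y):
    """[A, r] . [B, s] := [A B, B^{-1} r + s], as defined in the question."""
    A, r = x
    B, s = y
    return (mat_mul(A, B), vec_add(mat_vec(mat_inv(B), r), s))

x = ([[1, 2], [3, 5]], [1, 0])
y = ([[2, 1], [1, 1]], [0, 1])
z = ([[1, 1], [0, 1]], [2, 3])

left = aff_mul(aff_mul(x, y), z)    # ([A,r][B,s])[C,t]
right = aff_mul(x, aff_mul(y, z))   # [A,r]([B,s][C,t])
print(left == right)  # True
```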
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
If $f(x) = h(x)g(x)$, is $h$ differentiable if $f$ and $g$ are? I know that if I have two differentiable functions $f, g$ then the functions $(f + g)$ and $fg$ are also differentiable.
I would like to find a way how to argue about the function $h$ where
\begin{equation}
f(x) = (hg)(x) := h(x)g(x) \quad \text{and } f,g \text{ are differentiable}
\end{equation}
For a start I can conclude $h$ is differentiable at all points where $g(x) \neq 0$ since there I can express $h$ as
\begin{equation}
h = \frac{f}{g}
\end{equation}
But for the remaining points I am not sure, my guess is that $h$ is differentiable, any hints how I can make this into a formal argument ? Or am I probably wrong ? In that case, would it help to impose further smoothness on $f$ and $g$, say both are $C^\infty$ ?
Many thanks!
| No. Let $f(x)=g(x)=0$ for all $x$; no functions are more regular than constant functions. Then $f=gh$ holds for any function $h$, even if $h$ is nowhere differentiable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
basic calculus proof - using theorems to prove stuff A function $f(x)$ is defined and continuous on the interval $[0,2]$ and $f(0)=f(2)$.
Prove that the numbers $x,y$ on $[0,2]$ exist such that $y-x=1$ and $f(x) = f(y)$.
I can already guess this is going to involve the intermediate value theorem.
So far I've defined things as such:
I'm looking to satisfy the following conditions for values x, y:
*
*$f(x) = f(x+1)$
*$f(x) = f(y)$
I've defined another function, $g(x)$ such that $g(x) = f(x+1) - f(x)$
If I can show that there exists an $x$ such that $g(x) = 0$ then I've also proven that $f(x) = f(x+1)$.
since I'm given the interval [0,2], I can show that:
$g(1) = f(2) - f(1)$,
$g(0) = f(1) - f(0)$
I'm told that $f(2) = f(0)$ so I can rearrange things to show that $g(1) = f(0) - f(1) = -g(0)$.
Ok, So i've shown that $g(0) = -g(1)$
How do I tie this up? I'm not able to close this proof. I know I need to incorporate the intermediate value theorem which states that if there's a point c in $(a,b)$ then there must be a value $a<k<b$ such that $f(k) = c $ because there's nothing else.
I thought maybe to use Rolle's theorem to state that since $f(0) = f(2)$ I know this function isn't monotonic. And if it's not monotonic it must have a "turning point" where $f'(x) = 0$ but it's not working out.
Anyway I need help with this proof in particular and perhaps some advice on solving proofs in general since this type of thing takes me hours.
Thanks.
| Hint: You are told that $f(2)=f(0)$ (not $f(1)$ as you say). This gives you that $g(1)=-g(0)$
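A quick numerical illustration of the hint (the $f$ below is just a sample choice satisfying $f(0)=f(2)$; any continuous one works): once $g(1)=-g(0)$ is known, the intermediate value theorem gives a root of $g$ in $[0,1]$, which bisection can locate.

```python
import math

# Sample continuous f on [0, 2] with f(0) == f(2); chosen only for illustration.
def f(x):
    return math.sin(math.pi * x) + x * (2 - x)

def g(x):  # g(x) = f(x + 1) - f(x), defined for x in [0, 1]
    return f(x + 1) - f(x)

# g(1) = -g(0), so g(0) * g(1) <= 0 and the intermediate value theorem
# gives a root of g in [0, 1].
assert g(0) * g(1) <= 0

lo, hi = 0.0, 1.0
for _ in range(60):  # bisection
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

x = (lo + hi) / 2
print(abs(f(x) - f(x + 1)))  # essentially 0, i.e. f(x) = f(y) with y = x + 1
```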
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Is the set $\{\sin (p_{i})| p_{i} \,\,\mbox{is a prime number, for all}\,\,i\in \mathbb{N}\}$ linearly independent? Is the set $\{\sin (p_{i})| p_{i} \,\,\mbox{is a prime number, for all}\,\,i\in \mathbb{N}\,\,\mbox{and}\,\, p_{i}\neq p_{j}\,\,\mbox{if}\,\,i\neq j\}$ linearly independent when $\mathbb{R}$ is regarded as a vector space over $\mathbb{Q}$?
Thanks for any help.
| I take the question to be whether $\{\sin p\mid p\hbox{ prime}\}$ is linearly independent over $\Bbb Q$. The answer is yes. Let $i:=\sqrt{-1}$. Since $\sin n = (2i)^{-1} (e^{in} - e^{-in})$, it suffices to prove, more generally, that $\{e^{in}\mid n\in{\Bbb Z}\}$ is linearly independent over $\Bbb Q$. This follows from the transcendence of $e^i$, which is proved by the Lindemann-Weierstrass theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Characters and permutation matrices Suppose I represent the group $S_{\,n}$ using $n \times n$ permutation matrices. This is a valid group representation. Let $\chi$ be its character. Since $\chi(g)$ is complex and since $1=\chi(gg^{-1})=\chi(g)\chi(g^{-1})=\chi(g)\chi(g)^*=|\chi(g)|^2$, the value of $\chi(g)$ lies on the unit circle. However, for certain permutations $g$, the permutation matrix has 0's down the diagonal and the value of $\chi(g)$, being the trace of this matrix, is 0. Am I missing something here?
| Call the representation $\rho$, then $\chi$ is the trace of $\rho$. The trace is a homomorphism of the additive group, not the multiplicative; your $\chi(gg^{-1})=\chi(g)\chi(g^{-1})$ is false. Only characters of degree 1 are multiplicative.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to analyze (sum and convergence) step-by-step this series: $\sum_{n=1}^\infty\frac{4}{(n+1)(n+2)}$? I have this series:
$$\sum_{n=1}^\infty\frac{4}{(n+1)(n+2)}$$
and $$\sum_{n=1}^\infty\frac{4}{(n+1)(n+3)}$$
and I would like to know how to analyze, very step-by-step.
thanks
| One way to show convergence of the first series, is to first note that
$$ \frac{4}{(n+1)(n+2)} < \frac{4}{(n+1)^2}$$ for all $n$. Then write
$$ \sum_{n = 1}^{\infty} \frac{4}{(n+1)^2} = 4\sum_{n=2}^{\infty} \frac{1}{n^2}. $$ Presumably, you already know that the latter series converges, and so you can apply the comparison test, to show that
$$ \sum_{n = 1}^{\infty} \frac{4}{(n+1)(n+2)} $$ converges. The other one can be treated in a similar way.
To find the sum, write
$$ \frac{1}{(n+1)(n+2)} = \frac{A}{n+1} + \frac{B}{n+2},$$ and solve for $A$ and $B$ (this is called a partial fractions decomposition). Then you should be able to determine the $n$'th partial sum of this series explicitly, and easily evaluate the sum (these types of series will be called telescopic in your textbook, for obvious reasons). The other one can be treated in a similar way, but the computations will be a bit more complicated.
Note: Just realized that the first part of my answer is made redundant by the second, but I'll leave it - it can't hurt to know more than one way of accomplishing something.
Edit: After seeing your comment above, I'll add some more details on the next step too:
$$ s_k = \sum_{n = 1}^{k} \frac{4}{(n+1)(n+2)} = 4\sum_{n = 1}^{k} \left( \frac{1}{n+1} - \frac{1}{n+2}\right).$$
Let us look more closely at the second sum
$$\sum_{n = 1}^{k} \left( \frac{1}{n+1} - \frac{1}{n+2}\right) = \left( \frac{1}{2} - \frac{1}{3} \right) + \left( \frac{1}{3} - \frac{1}{4} \right) + \left( \frac{1}{4} - \frac{1}{5} \right) + \ldots = \frac{1}{2} - \frac{1}{k+2}.$$
So $s_k = 4(1/2 - 1/(k+2))$ and hence the sum of the series is $s = \lim_{k \to \infty} s_k = 4/2 = 2$.
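The partial-sum formula and the value $2$ are easy to sanity-check numerically against the telescoping computation above:

```python
# Partial sums of sum_{n>=1} 4/((n+1)(n+2)) versus the closed form 4*(1/2 - 1/(k+2)).
def s(k):
    return sum(4 / ((n + 1) * (n + 2)) for n in range(1, k + 1))

for k in (10, 100, 1000):
    assert abs(s(k) - 4 * (0.5 - 1 / (k + 2))) < 1e-12

print(s(10**6))  # 1.999996..., approaching the sum 2
```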
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/104964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
show if $n=4k+3$ is a prime and ${a^2+b^2} \equiv 0 \pmod n$ , then $a \equiv b \equiv 0 \pmod n$ $n = 4k + 3 $
We start by letting $a \not\equiv 0\pmod n$ $\Rightarrow$ $a \equiv k\pmod n$ .
$\Rightarrow$ $a^{4k+2} \equiv 1\pmod n$
Now, I know that the contradiction will arrive from the fact that if we can show $a^2 \equiv 1 \pmod n $ then we can say $b^2 \equiv -1 \pmod n $ then it is not possible since solution exists only for $n=4k_2+1 $ so $a \equiv b\equiv 0 \pmod n $
So from the fact that $(a^2)^{2k+1} \equiv 1 \pmod n$ I have to conclude something.
| Assume that $a$ and $b$ are coprime. If $a$ and $b$ are odd, replace
$a$ and $ b$ by $(a-b)/2$ and $(a+b)/2$. Then $a^2 + b^2 = pm$, and by reducing $a$ and $b$
modulo $p$ you can make sure that $m < p$. Since $p = 4n+3$ and $a^2 + b^2 = 4k+1$,
we must have $m = 4j+3$. But then $m$ is divisible by a prime number $q = 4r+3$ strictly
smaller than $p$. Repeat until you find that $3$ is a sum of two squares.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
roots of complex polynomial - tricks What tricks are there for calculating the roots of complex polynomials like
$$p(t) = (t+1)^6 - (t-1)^6$$
$t = 1$ is not a root. Therefore we can divide by $(t-1)^6$. We then get
$$\left( \frac{t+1}{t-1} \right)^6 = 1$$
Let $\omega = \frac{t+1}{t-1}$ then we get $\omega^6=1$ which brings us to
$$\omega_k = e^{i \cdot k \cdot \frac{2 \pi}{6}}$$
So now we need to get the values from t for $k = 0,...5$.
How to get the values of t from the following identity then?
$$
\begin{align}
\frac{t+1}{t-1} &= e^{i \cdot 2 \cdot \frac{2 \pi}{6}} \\
(t+1) &= t\cdot e^{i \cdot 2 \cdot \frac{2 \pi}{6}} - e^{i \cdot 2 \cdot \frac{2 \pi}{6}} \\
1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}} &= t\cdot e^{i \cdot 2 \cdot \frac{2 \pi}{6}} - t \\
1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}} &= t \cdot (e^{i \cdot 2 \cdot \frac{2 \pi}{6}}-1) \\
\end{align}
$$
And now?
$$
t = \frac{1+e^{i \cdot 2 \cdot \frac{2 \pi}{6}}}{e^{i \cdot 2 \cdot \frac{2 \pi}{6}}-1}
$$
So I've got six roots for $k = 0,...5$ as follows
$$
t = \frac{1+e^{i \cdot k \cdot \frac{2 \pi}{6}}}{e^{i \cdot k \cdot \frac{2 \pi}{6}}-1}
$$
Is this right? But how can it be that the bottom equals $0$ for $k=0$?
I don't exactly know how to simplify this:
$$\frac{ \frac{1}{ e^{i \cdot k \cdot \frac{2 \pi}{6}} } + 1 }{ 1 - \frac{1}{ e^{i \cdot k \cdot \frac{2 \pi}{6}} }}$$
| Note that
$$(t+1)^6 - (t-1)^6=((t+1)^3-(t-1)^3)((t+1)^3+(t-1)^3)$$
(difference of squares).
When you simplify the first term in the product on the right, there is no $t^3$ term and no $t$ term! The second term in the product simplifies to $2t^3+6t$.
Remark: The solution by Arhabhata is the right one, it works if we replace $6$ by $n$. And when we set $\frac{t-1}{t+1}=e^{2\pi i k/n}$, where $k=1,2,\dots,n-1$, and solve for $t$, we get $-i$ times cotangents.
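Multiplying the two factors out gives $p(t) = (6t^2+2)(2t^3+6t) = 4t(3t^2+1)(t^2+3)$, so the five roots are $0$, $\pm i/\sqrt3$, $\pm i\sqrt3$; the $t^6$ terms cancel, so $p$ has degree $5$, which is also why $k=0$ yields no root. A quick numerical check:

```python
import cmath

def p(t):
    return (t + 1)**6 - (t - 1)**6

# (t+1)^6 - (t-1)^6 = (6t^2 + 2)(2t^3 + 6t) = 4t(3t^2 + 1)(t^2 + 3),
# a degree-5 polynomial, so five roots.
roots = [0, 1j / cmath.sqrt(3), -1j / cmath.sqrt(3),
         1j * cmath.sqrt(3), -1j * cmath.sqrt(3)]
assert max(abs(p(r)) for r in roots) < 1e-9
```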
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Diagonal triple limit Let $1>f(k+1,n)>f(k,n)>0$ and $0<f(k,n+1)<f(k,n)<1$.
And $f(k,\cdot)\to1, f(\cdot,n)\to0$.
Is there an $f$ for every $0<c<1$, such that $f(x,x)_{x \to \infty}\to c$?
| $f(k,n)=c^{n/k}$ works for the original question.
For the $f(k,\cdot)\to\infty$ version mentioned in a (now deleted) comment, $f(k,n)=2^{k-n}c^{n/k}$ works.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Union of uncountably many subspaces Supposing, $\{V_t\}, t > 0$ are an uncountable number of linear subspaces of $\mathbb{R}^n$. If $\bigcup_{t>0} V_t = \mathbb{R}^n$ is it true that $V_T = \mathbb{R}^n$ for some $T>0$?
Any help is appreciated. Thanks.
EDIT: I have forgot to add the condition that $V_t$ are increasing.
| Consider a bijection $\phi:(0,+\infty)\to\mathbb R^n$ and for each $t>0$ let $V_t$ be the subspace of $\mathbb R^n$ spanned by $\phi(t)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Find the background position by mouse pos and its hit area I'm trying to creating a jQuery plugin where the user can interact with a div element background image.
Basically, the background-image is larger than div element container, so if the user move the mouse over the div element, the image is automatically scrolled to be seen completely.
I thought to do that by working with this relation, but it is imprecise.
mX : aW = bX(?) : bW
so
-((mX * bW) / aW) / 2 = bX
this does something near but it isn't precise, how can I find the right way to do the proportion?
| I'm going to give your "?" the name X. When mX = 0, you want X = 0. When mX = aW, you want X = bW - aW. So mX : aW :: X : bW - aW. Solving, we get X = (bW - aW)*mX/aW.
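A minimal sketch of this linear map (names follow the question's `mX`, `aW`, `bW`; whether X needs a sign flip depends on how the CSS background-position is applied):

```python
# Linear map from mouse position mX in [0, aW] (container width) to background
# offset X in [0, bW - aW] (image overflow), per the proportion
# mX : aW :: X : bW - aW. Negate the result if your CSS needs a negative offset.
def bg_offset(mX, aW, bW):
    return (bW - aW) * mX / aW

assert bg_offset(0, 300, 1200) == 0      # mouse at left edge -> no scroll
assert bg_offset(300, 300, 1200) == 900  # mouse at right edge -> full overflow
assert bg_offset(150, 300, 1200) == 450  # halfway
```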
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to define the sign of a function $$y=\arctan\frac{x+1}{x-3} + \frac{x}{4}$$
I know that is necessary to put the function $>$ than $0$, but then?
$$\arctan\frac{x+1}{x-3} + \frac{x}{4}>0$$
It's a sum, so I can't set up a "false system" putting the two factors $>0$. In this case which is the rule to study the sign of this function?
Note: These are not homeworks.
| You want to determine the sign of $y$ given $x$, correct? In other words, you want to figure out when $y > 0$, which means, as you said, that we have to solve the inequality
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4} > 0.$$
The first step in solving this inequality is finding its "edges"—where the inequality goes from true to false, or vice versa. It turns out that there are only two places where this can happen: where the equation
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4} = 0$$
is true, and where the expression
$$\arctan \frac{x + 1}{x - 3} + \frac{x}{4}$$
is undefined. So, begin by making a list of all the points where the equation is true, along with all the points where that expression is undefined. You will use this list to determine the sign of $y$ given $x$ at all points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How many associative binary operations there are on a finite set? I am studying Scott's book Group Theory. In the Exercise $1.1.17$ he asks us to show that if $S$ is a set and $|S|=n$, then there are $n^{\frac{n^{2}+n}{2}}$ commutative binary operations on $S$. But he doesn't talk about how many associative binary operations there are on a finite set.
Is there an answer to that question? I mean, how many associative binary operations there are on a finite set?
| Unlike the example you give (of commutative binary operations), there is no closed formula for the number of associative binary operations on a finite set.
A semigroup is a set with an associative binary operation. In what follows I will write "semigroup" rather than "associative binary operation".
It is shown in
Kleitman, Daniel J.; Rothschild, Bruce R.; Spencer, Joel H. The number of semigroups of order n.
Proc. Amer. Math. Soc. 55 (1976), no. 1, 227–232.
that almost all semigroups of order $n$ are $3$-nilpotent, and in Theorem 2.1(i) of
Distler, Andreas; Mitchell, J. D.
The number of nilpotent semigroups of degree 3.
Electron. J. Combin. 19 (2012), no. 2, Paper 51, 19 pp.
that the number of $3$-nilpotent semigroups of order $n$ is:
\begin{equation}
\sigma(n)=\sum_{m=2}^{a(n)}{n \choose m}m\sum_{i=0}^{m-1}(-1)^i{m-1 \choose
i}(m-i)^{\left((n-m)^2\right)}
\end{equation}
where $a(n)=\left\lfloor n+1/2-\sqrt{n-3/4}\,\right\rfloor$.
So, $\sigma(n)$ is approximately the number of distinct associative binary operations on a set of size $n$.
The value $\sigma(n)$ appears to converge very quickly to the number $\tau(n)$ of semigroups with $n$ elements:
\begin{equation}
\begin{array}{l|llllllll}
n&1&2&3&4&5&6&7&8\\\hline
\tau(n)& 1& 8& 113& 3492& 183732& 17061118& 7743056064& 148195347518186\\
\sigma(n)&0& 0& 6& 180& 11720& 3089250& 5944080072& 147348275209800
\end{array}
\end{equation}
So, by $n=8$, the ratio $\sigma(n)/\tau(n)>0.994$.
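The Distler–Mitchell formula is straightforward to evaluate; the following sketch reproduces the $\sigma$ row of the table:

```python
from math import comb, floor, sqrt

def sigma(n):
    # Number of 3-nilpotent semigroups of order n (Distler-Mitchell formula).
    a = floor(n + 0.5 - sqrt(n - 0.75))
    return sum(
        comb(n, m) * m * sum((-1)**i * comb(m - 1, i) * (m - i)**((n - m)**2)
                             for i in range(m))
        for m in range(2, a + 1)
    )

assert [sigma(n) for n in range(1, 7)] == [0, 0, 6, 180, 11720, 3089250]
```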
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 1,
"answer_id": 0
} |
Uniform continuity allows pushing limit inside integral? Let's say I have a (uniformly) continuous functions $f:[a,b] \to \mathbb{R}$ and an arbitrary function $h:\mathbb{R}^2 \to [a,b]$ such that
$$
\lim_{t\to 0} ~h(t,x) = h(0,x) = x
$$
for all $x$.
I would like to be able to conclude that
$$
\lim_{t\to 0}\int_a^bf(h(t,x))dx = \int_a^bf(x)dx.
$$
Is pushing the limit inside the integral justified in this situation?
| As you state, if we define $$y_n = f_n(x) $$ and $$y = f(x) = \lim_{n \to \infty} f_n(x)$$
then it is legitimate to state $$\mathop {\lim }\limits_{n \to \infty } \int\limits_a^b {{f_n}\left( x \right)dx = } \int\limits_a^b {\mathop {\lim }\limits_{n \to \infty } {f_n}\left( x \right)dx = \int\limits_a^b {f\left( x \right)dx} } $$
if the convergence is uniform over $[a,b]$.
I leave here a theorem I got from Piskunov's or Apostol's Calculus (don't remember which):
THEROEM Let $\sum u(x)_k$ be a series of functions, uniformly convergent in a closed interval $I$. Then if $x, \alpha\in I$ $$\int_{\alpha}^x s(t) dt = \sum \int_{\alpha}^x u_k(t) dt$$ where s(x) is the sum of the series (i.e. the limit as $n \to \infty$). This is usually stated as "an uniformly convergent series can be integrated term-wise".
Moreover, if $\sum u(x)_k$ and $\sum u'(x)_k$ are U.C. then you have that $$s'(x) = \sum u'(x)_k$$
EDIT: I'll add two examples of the notation $f_n(x)$
Let $f_n(x) = \arctan(nx)$. Then it is clear that $f_n(x)$ converges to $0$ at $x=0$ and to $\frac{\pi}{2}$ for $x>0$. Thus you have a sequence of continuous functions that converges to a discontinuous function.
Let $f_n(x) = \displaystyle \sum_{k=0}^{n} \frac{xk^2}{2^k}$. Then the sequence converges to $y = 6 x$ (note that $x$ is not indexed). There are more complex cases you might want to look at in books and webs.
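Taking the summand in the second example to be $x\,k^2/2^k$ (so that the sum is $6x$, using $\sum_{k\ge 0} k^2/2^k = 6$), the convergence is easy to check numerically:

```python
# Partial sums of sum_{k>=0} x * k^2 / 2^k, which converges to 6x
# because sum_{k>=0} k^2 / 2^k = 6.
def f_n(x, n):
    return sum(x * k**2 / 2**k for k in range(n + 1))

for x in (0.5, 1.0, 2.0):
    assert abs(f_n(x, 100) - 6 * x) < 1e-12
```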
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Integrable function Please help me prove this.
Suppose $f$ is defined in$\left [ a,b \right ]$ ,$f\geq 0$, $f$ is integrable in$ [a,b]$ and $\displaystyle\int_{a}^{b}fdx=0$
prove:
$\displaystyle\int_{a}^{b}f(x)^2dx=0$
Thanks a lot!
| Intuitively $f$ must be zero almost everywhere (term from Lebesgue theory). Now, if $f$ is zero almost everywhere then so is $f^2$, and then $\int f^2=0$ too.
That this is really the case is in fact trivial because $$0=\int_a^b f(x) dx =\int_a^b |f(x)|dx=\|f\|_{L^1(a,b)}$$
in other words the $L^1$-norm of $f$ is zero and then (because it is a norm) $f$ must be the $L^1$-zero function, that is $f$ must be zero almost everywhere.
If we did not know that $f\mapsto\|f\|_{L^1(a,b)}$ is a norm, it is sufficient to prove that the set $A=\{x: \, f(x)>0\}$ has measure zero $\mu(A)=0$ (where $\mu$ is the Lebesgue measure). In order to do that we need some argument where we use $\int f = 0$ in simpler terms, one such method is to split the set into a countable number of disjoint sets that we can control (similar to robjohn's argument). Having said that, let us consider
$$A= \bigcup_{n=1}^\infty A_n$$
where $A_n=\{x: 1/n\le f(x) < 1/(n-1)\}$ (where in the exceptional case $n=1$ we mean $A_1=\{x : \,f(x)\ge1\}$ ).
Then since the sets are disjoint we have
$$\mu(A)=\sum_{n=1}^\infty\mu(A_n)=\sum_{n=1}^\infty\int_{A_n}1dx$$
and since $f(x)\ge 1/n$ on $A_n$ we have
$$\int_{A_n}1dx\le n\int_{A_n}f(x)dx\le n\int_a^bf(x)dx=n\cdot0=0$$
That is each $A_n$ has measure 0, and hence so has $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Proving that the "real part" function is continuous I want to use the definition of a limit,
$|f(z) - w_0| < \varepsilon$ whenever $0 < |z - z_0| < \delta$
to prove
$$\lim_{z \to z_0} \mathop{\rm Re}(z) = \mathop{\rm Re}(z_0)$$
By intuition this is obvious but I dont know how to show it using the defn. of a limit. This is question 1(a) from the book Complex Variables and Applications.
Here's the basic manipulation I have made going by an example in the book, I dont know where to go from here...
$$|\mathop{\rm Re}(z)-\mathop{\rm Re}(z_0)| = |x - x_0| = |x| - |x_0| = x - x_0$$
| If you have a complex number $z = a + ib$ where $a$ and $b$ are real, then
$$|z| = \sqrt{a^2 + b^2} \ge \sqrt{a^2} = |a|.$$
From this we see that for any complex number $z$,
$$|z| \ge |\Re(z)| \ge \Re(z).$$
Now apply this to $z - z_0$ and your limit follows right away.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find all ideals of $F[x]$ and $F[[x]]$. Let $F$ is a field. I know that $F[x]$ is a principal ideal domain. I know examples of ideals in $F[x]$ like $\{a_0+a_1x+\dots+a_nx^n|a_0=0\}$. I need to know all ideals. Same with $F[[x]]$: I know some examples of ideals but can't say that I know all ideals of the ring. Is there some technique to find all ideals of the rings?
| The following answer was given by yoyo in the comments:
The ideals of $F[x]$ are all principal, as you say, so every ideal is uniquely determined by the monic polynomial of minimal degree it contains. $F[[x]]$ is local, the unique maximal ideal is $(x)$ so every ideal is contained in $(x)$ (power series with non-zero constant term are invertible).
And as a consequence of this we easily see that any non-zero ideal $I$ of $F[[x]]$ is of the form $(x^m)$ for some non-negative integer $m$. This is easily seen as follows. Let $m$ be the least degree of a non-zero term appearing in any power series in $I$, and let $a(x) \in I$ attain this minimum. Then $a(x)=x^mu$, where $u$ is a unit of $F[[x]]$. Thus $(x^m)\subseteq I$. The reverse inclusion is trivial.
Essentially the same argument works for all DVRs (=discrete valuation rings).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The completion of a Boolean algebra is unique up to isomorphism Jech defines a completion of a Boolean algebra $B$ to be a complete Boolean algebra $C$ such that $B$ is a dense subalgebra of $C$.
I am trying to prove that given two completions $C$ and $D$ of $B$, then the mapping $\pi: C \rightarrow D$ given by $\pi(c) = \sum^D \{ u \in B \ \vert \ u \le c \} $ is an isomorphism. I understand that $c \not= 0 \implies \pi(c) \not= 0$, and that given any $d \in D$, I can write it as $d = \sum^D \{ u \in B \ \vert \ u \le d \} $, but I am unsure how to proceed. Any help would be appreciated.
| I am afraid that it doesn't make sense that $\pi (c)\leq c$ because both elements live in different lattices $C$ and $D$, so they are not comparable. So it's better to denote differently both orders as $\leq_C$ and $\leq_D$.
So let $\pi (c)= \sum^D${$u\in B:u\leq_C c$}, and let $\rho: D\to C$ be such that $\rho(d)=\sum^C${$u\in B:u\leq_D d$}.
For any $c\in C$ and $u\in B$ we have $u\leq_C c\Rightarrow u\leq_D\pi(c)$, by the property of the supremum. Conversely, let $u\in B$ be such that $u\leq_D\pi(c)$. Then $u=u\wedge\pi(c)=u\wedge\sum^D${$b\in B:b\leq_C c$}$=\sum^D${$u\wedge b:b\leq_C c$}, since $D$ is a complete Boolean algebra, and so it has the general distributivity. Now, if $b\leq_C c$ then $u\wedge b\leq_Cu\wedge c\leq_C c$. Therefore $u\leq_C c$. We have shown that $u\leq_C c\Leftrightarrow u\leq_D\pi(c)$.
Therefore $c=\sum^C${$u\in B:u\leq_C c$}$=\sum^C${$u\in B:u\leq_D \pi(c)$} $=\rho(\pi(c))$.
Similarly $\pi(\rho(d))=d$ for each $d\in D$. So $\rho=\pi^{-1}$, and $\pi$ is an isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What does integration do? I know that integrals are used to compute the area under a curve. Let's say I have $y = x^2$. It creates smaller rectangles and then add up the sum (assuming that rectangles are going infinitely in number and is like going to a limit).
But I recently encountered a problem in my mind. Suppose we have a function, $y = x^2$. If we integrated it, we simply get the anti derivative of it which is $x^3/3$, assuming that the area is not of concern. What is the correlation of $x^3/3$ to $x^2$? I mean, it simply likes transforms a function into another function, but I can't get a clearer picture. When we graph $x^2$ and $x^3/3$, there is no connection visually. They are simply different graphs.
Thanks and I hope your comments can clear up my mind.
It may be helpful to divorce (somewhat) the ideas of the integral and the antiderivative in your mind. The definite integral is simply the concept of signed area. The integral exists even for functions with breaks, corners, and other points generally considered 'not nice' (provided there are only countably many such points).
The antiderivative, on the other hand, is a function $F(x)$ such that $F'(x) = f(x)$.
The connection between them is the First Fundamental Theorem of Calculus, namely that: $$F(x) = \int_a^x f(t)dt$$ Without the fundamental theorem, there is no connection between antiderivative and integral. While this way of thinking is perhaps a bit extreme, it provides a good way of thinking integral and antiderivative in a more rigorous way.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/105937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
How does $\exp(x+y) = \exp(x)\exp(y)$ imply $\exp(x) = [\exp(1)]^x$? In Calculus by Spivak (1994), the author states in Chapter 18 p. 341 that
$$\exp(x+y) = \exp(x)\exp(y)$$ implies
$$\exp(x) = [\exp(1)]^x$$
He refers to the discussion in the beginning of the chapter where we define a function $f(x + y) = f(x)f(y)$; with $f(1) = 10$, it follows that $f(x) = [f(1)]^x$. But I don't get this either. Can anyone please explain this? Many thanks!
| $$
\begin{gathered}
\exp (x + y) = \exp (x)\exp (y)\quad \Rightarrow \hfill \\
\Rightarrow \quad \exp \left( {\sum\limits_j {x_j } } \right) = \prod\limits_j {\exp (x_j )} \quad \Rightarrow \hfill \\
\Rightarrow \quad \exp \left( {\sum\nolimits_{k\, = \,0}^{\,x} 1 } \right) = \prod\nolimits_{k\, = \,0}^{\,x} {\exp (1)} \quad \Rightarrow \hfill \\
\Rightarrow \quad \exp \left( x \right) = \exp (1)^{\,x} \hfill \\
\end{gathered}
$$
where
*
*$\sum\limits_j {} $ and $\prod\limits_j {} $ indicate the sum and product over a set of values for $j$
*$\sum\nolimits_{k\, = \,0}^{\,x} {}$ and $\prod\nolimits_{k\, = \,0}^{\,x} {}$ indicate the indefinite sum/product, computed between the given limits
So the above holds also for $x$ and $y$ complex
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
If $H$ is a cyclic subgroup of $G$ and $H$ is normal in $G$, then every subgoup of $H$ is normal in $G$. Exercise 11, page 45 from Hungerford's book Algebra.
If $H$ is a cyclic subgroup of $G$ and $H$ is normal in $G$, then every
subgroup of $H$ is normal in $G$.
I am trying to show that $a^{-1}Ka\subset K$, but I got stuck. What I am supposed to do now?
Thanks for your kindly help.
| Suppose $H = \langle h \rangle$ is normal in $G$ and that $K$ is a subgroup of $H$. Any subgroup of a cyclic group is cyclic, so $K = \langle h^d \rangle$ for some integer $d$.
Let $g \in G$. Since $H$ is normal, $g^{-1}hg = h^i$ for some integer $i$. Then for any integer $k$ you get $g^{-1}(h^d)^kg = (g^{-1}hg)^{dk} = (h^i)^{dk} = (h^d)^{ik}$. This shows that for any $k \in K$, the element $g^{-1}kg$ is in $K$. Therefore $K$ is normal.
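A concrete check in the dihedral group of the square (a sketch; permutations are encoded as tuples, and the names `r`, `s`, `G`, `H`, `K` are ad hoc): $H=\langle r\rangle$ is a cyclic normal subgroup, and its subgroup $K=\langle r^2\rangle$ comes out normal in $G$, as the argument predicts.

```python
from itertools import product

def compose(p, q):  # (p . q)(i) = p(q(i)); permutations of {0, 1, 2, 3} as tuples
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)  # rotation of a square
s = (0, 3, 2, 1)  # a reflection
G = {e}
while True:  # close {e, r, s} under composition to get the dihedral group
    new = {compose(a, b) for a, b in product(G | {r, s}, repeat=2)} - G
    if not new:
        break
    G |= new
assert len(G) == 8

H = {e, r, compose(r, r), compose(r, compose(r, r))}  # cyclic, normal in G
K = {e, compose(r, r)}                                # a subgroup of H
# K is normal in G: g^{-1} k g lies in K for every g in G, k in K.
assert all(compose(inverse(g), compose(k, g)) in K for g in G for k in K)
```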
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 6,
"answer_id": 0
} |
Prove that $n! > \sqrt{n^n}, n \geq 3$
Problem Prove that $n! > \sqrt{n^n}, n \geq 3$.
I'm currently have two ideas in mind, one is to use induction on $n$, two is to find $\displaystyle\lim_{n\to\infty}\dfrac{n!}{\sqrt{n^n}}$. However, both methods don't seem to get close to the answer. I wonder is there another method to prove this problem that I'm not aware of? Any suggestion would be greatly appreciated.
| To show that $(n!)^2>n^n$ for all $n\geq 3$ by induction, you first check that $(3!)^2>3^3$. Then to get the inductive step, it suffices to show that when $n\geq 3$, $(n+1)^2\geq\frac{(n+1)^{n+1}}{n^n}=(n+1)\left(1+\frac{1}{n}\right)^n$. This is true, and in fact $\left(1+\frac{1}{n}\right)^n<3$ for all $n$. It would be more than enough to use $\left(1+\frac{1}{n}\right)^n<n$, which is proved in another thread.
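Both inequalities used above are easy to sanity-check numerically:

```python
from math import factorial

# (n!)^2 > n^n for n >= 3, and (1 + 1/n)^n < 3 (the sequence increases to e).
for n in range(3, 60):
    assert factorial(n)**2 > n**n
    assert (1 + 1 / n)**n < 3
```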
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 3
} |
Finding limit of fraction with square roots: $\lim_{r\to 9} \frac {\sqrt{r}} {(r-9)^4}$ I have been looking at this for five minutes, no clue what to do.
$$\lim_{r\to 9} \frac {\sqrt{r}} {(r-9)^4}$$
Substituting $r - 9 \mapsto r$, this becomes
$$\lim_{r\to 0} \frac {\sqrt{r+9}} {r^4}.$$
As was said, the numerator goes to $3$ and the denominator goes to $0$ through positive values, so the quotient goes to $+\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Closure of quadratic extensions under taking powers I'll give a proof of the following.
THEOREM: Let $x = p+\sqrt{q}$, with $p,q \in \mathbb{Q}$, and $m$ an integer. Then $x^m = a+b\sqrt{q}$ with $a,b \in \mathbb{Q}$.
PROOF
Using the Binomial Theorem
$${x^m} = C_0^m{p^m} + C_1^m{p^{m - 1}}{q^{\frac{1}{2}}} + C_2^m{p^{m - 2}}{q^{\frac{2}{2}}} + \cdots + C_{m - 2}^m{p^2}{q^{\frac{{m - 2}}{2}}} + C_{m - 1}^mp{q^{\frac{{m - 1}}{2}}} + C_m^m{q^{\frac{m}{2}}}$$
Let $m= 2·j$ then
$${x^{2j}} = C_0^{2j}{p^{2j}} + C_1^{2j}{p^{2j - 1}}{q^{\frac{1}{2}}} + C_2^{2j}{p^{2j - 2}}{q^{\frac{2}{2}}} + \cdots + C_{2j - 2}^{2j}{p^2}{q^{\frac{{2j - 2}}{2}}} + C_{2j - 1}^{2j}p{q^{\frac{{2j - 1}}{2}}} + C_{2j}^{2j}{q^{\frac{{2j}}{2}}}$$
Grouping produces
$${x^{2j}} = \sum\limits_{k = 0}^j {C_{2k}^{2j}{p^{2j - 2k}}{q^k}} + \sum\limits_{k = 1}^j {C_{2k - 1}^{2j}{p^{2j - 2k + 1}}{q^{k - 1}}\sqrt q } $$
But since every binomial coefficient is integer, and every power of $p$ and $q$ is rational then one has
$${x^{2j}} = a+b\sqrt{q} \text{ ; and } a,b \in \mathbb{Q}$$ where $b$ and $a$ are the sums.
If $m = 2j+1$ then
$$x^{2j+1} =(a+b\sqrt{q}) (p+\sqrt{q}) = c+d\sqrt{q}$$
which is also in our set.
(I don't know if the ring-theory tag is appropiate.)
| More conceptually and more generally, suppose $\alpha$ is the root of a monic polynomial over a ring $R$
$$\alpha^n\ =\ r_{n-1}\ \alpha^{n-1} +\:\cdots\:+r_1\ \alpha + r_0$$
Multiplying the above by $\alpha$, and using the above as a rewrite rule to replace $\alpha^n$ by the lower powers on the RHS, we deduce that $\alpha^{n+1}$ may also be expressed as on the RHS. By induction so too can every power of $\alpha$, hence $R[\alpha] = R + \alpha\ R +\:\cdots\:+ \alpha^{n-1}\ R$, i.e. every polynomial in $\alpha$ with coefficients in $R$ is equal to one of degree $< n$.
Equivalently, if monic $g(\alpha)= 0 $ for $g(x)\in R[x]$ of degree $n$, then, by the Division Algorithm, any polynomial $f(\alpha)$ is equal to an $h(\alpha)$ of degree $< n$, where $h(x)$ is the remainder of $f(x)$ mod $g(x)$
$$ f(x)\ =\ q(x)\ g(x) + h(x)\ \ \Rightarrow\ \ f(\alpha)\ =\ h(\alpha)\ \ by\ \ g(\alpha) = 0 $$
Recall that the high-school long Division (with Remainder) Algorithm works for any monic polynomial over any ring, so the above normal-form degree-reduction algorithm works for any element $\alpha$ that is a root of a monic polynomial.
Such $\alpha$ generalize pure radicals $\sqrt[n]{r},\ r\in R$ and are known as algebraic integers over $R$, a fundamental concept in number theory and algebra.
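The degree reduction can be made concrete for the original question by representing $a + b\sqrt q$ as a coefficient pair $(a, b)$ with the multiplication rule $(a,b)(c,d) = (ac + bdq,\ ad + bc)$, so every power of $p + \sqrt q$ visibly stays of the form $a + b\sqrt q$:

```python
import math
from fractions import Fraction

def mul(u, v, q):
    # (a + b*sqrt(q)) * (c + d*sqrt(q)) = (a*c + b*d*q) + (a*d + b*c)*sqrt(q)
    (a, b), (c, d) = u, v
    return (a * c + b * d * q, a * d + b * c)

def power(p, q, m):
    # (p + sqrt(q))^m as an exact pair (a, b) of rationals
    result = (Fraction(1), Fraction(0))
    base = (Fraction(p), Fraction(1))
    q = Fraction(q)
    for _ in range(m):
        result = mul(result, base, q)
    return result

a, b = power(3, 2, 5)  # (3 + sqrt(2))^5 = a + b*sqrt(2)
assert abs((a + b * math.sqrt(2)) - (3 + math.sqrt(2))**5) < 1e-6
```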
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
How to integrate $ \int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx $?
Possible Duplicate:
Solving the integral $\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$?
How can I do this integration using only calculus?
(not Laplace transforms or complex analysis)
$$
\int_{-\infty}^{+\infty} \frac{\sin(x)}{x} \,dx
$$
I searched for solutions not involving Laplace transforms or complex analysis but I could not find.
| Putting rigor aside, we may do like this:
$$\begin{align*}
\int_{-\infty}^{\infty} \frac{\sin x}{x} \; dx
&= 2 \int_{0}^{\infty} \frac{\sin x}{x} \; dx \\
&= 2 \int_{0}^{\infty} \sin x \left( \int_{0}^{\infty} e^{-xt} \; dt \right) \; dx \\
&= 2 \int_{0}^{\infty} \int_{0}^{\infty} \sin x \, e^{-tx} \; dx dt \\
&= 2 \int_{0}^{\infty} \frac{dt}{t^2 + 1} \\
&= \vphantom{\int}2 \cdot \frac{\pi}{2} = \pi.
\end{align*}$$
The defects of this approach are as follows:
*
*Interchanging the order of two integral needs justification, and in fact this is the hardest step in this proof. (There are several ways to resolve this problem, though not easy.)
*It is nothing but a disguise of Laplace transform method. So this calculation contains no new information on the integral.
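Numerically the value $\pi$ is easy to confirm without any of the machinery above, e.g. Simpson's rule on $[0,N]$ plus the integration-by-parts tail estimate $\int_N^\infty \frac{\sin x}{x}\,dx \approx \frac{\cos N}{N} + \frac{\sin N}{N^2}$:

```python
import math

def sinc(x):
    return math.sin(x) / x if x else 1.0

N, steps = 200.0, 200000  # composite Simpson's rule on [0, N]
h = N / steps
simpson = sinc(0) + sinc(N) + sum((4 if i % 2 else 2) * sinc(i * h)
                                  for i in range(1, steps))
# Add the tail estimate cos(N)/N + sin(N)/N^2 for the integral over [N, oo).
half = simpson * h / 3 + math.cos(N) / N + math.sin(N) / N**2

assert abs(2 * half - math.pi) < 1e-4  # the full integral is pi
```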
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that the order of convergence must be $\geq 1$ Given a sequence $\{x_n\}_n$ and real numbers $c > 0$ and $L$, such that $\displaystyle\lim_{n \to \infty}x_n - L = 0$ and $\displaystyle\lim_{n \to \infty} \frac{| x_{n+1} - L |}{|x_n - L|^p} = c$, prove that $p \geq 1$.
This is assumed without proof in my textbook and I'd like a rigorous one, but I can't come up with it.
| If we allow $c = 0$, then this is not true.
Take $$x_n = \frac{1}{n}$$
where we can pick $p = 0.5$
If $c \gt 0$, then I believe it is true.
We use the following fact:
If $f_n \to \infty$ and if $\dfrac{f_{n+1}}{f_n} \to p$, then $p \ge 1$.
(because otherwise $\sum f_n$ would be absolutely convergent, by the ratio test!)
Assuming $x_n \neq L$ for any $n$ (otherwise problem is meaningless I suppose).
Picking $f_n = -\log |x_n - L|$ will give the result, as I believe we can then show that $\dfrac{f_{n+1}}{f_n} \to p$.
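A small numerical illustration of that last step (mine, not part of the proof): Newton's method for $\sqrt 2$ satisfies $|x_{n+1}-L|\approx c\,|x_n-L|^2$ for some $c>0$, and the ratio $f_{n+1}/f_n$ with $f_n=-\log|x_n-L|$ indeed tends to the order $p=2$.

```python
import math

# Newton iteration for sqrt(2): known quadratic convergence (p = 2)
L = math.sqrt(2.0)
x = 1.0
errors = []
for _ in range(4):                 # 4 steps; more would hit machine precision
    x = 0.5 * (x + 2.0 / x)
    errors.append(abs(x - L))

# f_n = -log|x_n - L|; the ratio f_{n+1}/f_n estimates the order p
ratios = [math.log(errors[i + 1]) / math.log(errors[i])
          for i in range(len(errors) - 1)]
assert abs(ratios[-1] - 2.0) < 0.15   # tends to p = 2
```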
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can the natural numbers be uncountable? Definition of a countable set, from Stanford, as I didn't want to quote Wikipedia:
Definition. A set S is countable if |S| = |N|.
Thus a set S is countable if there is a one-to-one mapping of $\mathbb{N}$ onto S, that is, if S is the range of an infinite one-to-one sequence.
So it seems that if we can define a set of numbers that does not map one-to-one to the natural numbers, then it is not a countable set. The natural numbers quite obviously map one-to-one to the natural numbers, so can they possibly be uncountable?
Let's say we have a list containing all of the natural numbers. Excerpt:
...
000099
000100
000101
000102
...
We can define a number that is different from each element in this list as follows: for the ith number in the list, the ith digit is one of the 8 (or 9) non-zero alternatives that make our new number different from the number on the list. For example:
...
00009 9
0001 0 0
000 1 01
00 0 102
...
12 3456
As we keep going, we will end up with a sequence of non-zero digits, which forms a valid Natural number, that is not on our list of all natural numbers, so our mapping of the natural numbers to the natural numbers breaks.
Does this make sense, or is there something I'm missing?
| Your error is in the assertion:
... we will end up with a sequence of non-zero digits, which forms a valid Natural number...
Natural numbers have only finitely many nonzero digits in their decimal expansion. This can be proven by induction: it is true of $1$; and the number of nonzero digits in the decimal expansion of $n+1$ is at most one more than that of $n$ (you need at most one more digit before you start padding with $0$s on the left).
An infinite list which contains infinitely many nonzero digits does not yield a natural number. In fact, one can prove that your procedure will produce a sequence of digits with infinitely many nonzero digits, hence not correspond to a natural number.
Let's say that your procedure selects $0$ whenever possible (that is, whenever the $i$th digit, from right to left, of the $i$th natural number, is nonzero; you really want this, because if you insist, as in your post, that you always select a nonzero digit, then you guarantee what you get is not a natural number). Can the list be arranged in such a way that the $i$th digit of the $i$th number is nonzero for all $i\gt N$ for some $N$? No: for any given $N$, there are $10^{N-1}$ natural numbers that require fewer than $N$ digits to write; since there are only $N$ positions prior to $N$, at least one of these $10^{N-1}$ numbers must be listed in a position below $N$; but if it is listed in position $j\gt N$, then its $j$th digit will be $0$, so the constructed sequence will necessarily have a nonzero entry at $j\gt N$. Since this holds for all $N$, the list obtained is never eventually $0$, and so does not correspond to a natural number.
Now, if you want to extend your notion of numbers to include expressions with infinitely many nonzero digits, then your argument is correct: the set of all such "numbers" is not countable. However, those are not the natural numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Min. number of vertices in graph as function of $\kappa(G)$ and $\operatorname{diam}(G)$ Question I found in "Introduction to Graph Theory" by Douglas B. West:
Let $G$ be a graph on $n$ vertices with connectivity $\kappa(G)=k \geq 1$. Prove that $$n \geq k(\operatorname{diam}(G)-1)+2$$ where $\operatorname{diam}(G)$ is the maximal (edge) distance between a pair of vertices in $G$
Seems easy, and I've seen the answer too, which goes the same way I thought: take the two end vertices of a path of length $\operatorname{diam}(G)$ and apply Menger's Theorem. I'm having a hard time understanding one part - it seems like all the paths need to be of the same length for $n \geq k(\operatorname{diam}(G)-1)+2$ to apply. Am I missing some part here?
| Let's write $d$ for the diameter of $G$. Then there is some pair of vertices such that one path between them has exactly $d$ edges, and each path between them has at least $d$ edges. Menger says there are $k$ independent paths. So the number of vertices is at least $k(d-1)+2$.
I'm not sure why you think the paths all have to be the same length. They just have to have length at least $d$ - and, they do.
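A concrete check (my own): on the even cycle $C_n$ we have $\kappa=2$ and $\operatorname{diam}=n/2$, so the bound holds with equality. The sketch below computes the diameter by breadth-first search.

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph given as an adjacency dict."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# cycle C_n: connectivity k = 2, diameter d = n // 2
n = 8
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
d = max(max(bfs_dist(adj, s).values()) for s in adj)
k = 2
assert d == n // 2
assert n == k * (d - 1) + 2      # the bound, with equality for even cycles
```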
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional convergence of $\int_{0}^{\infty}(-1)^{[ x^2]}dx$ I'm trying to prove that the integral $\int_{0}^{\infty}(-1)^{[ x^2]}dx$ converges conditionally (the brackets stand for the floor function). I can't use the integral test since this is clearly not a positive function, but I'd like to somehow draw an analogy to alternating (Leibniz) series and use that; I'm just not sure how.
Any suggestions?
Thank you!
| Note that
$$
\int\limits_{[0,+\infty)}(-1)^{[x^2]}dx=
\sum\limits_{k=0}^\infty\int\limits_{[\sqrt{k},\sqrt{k+1})}(-1)^{[x^2]}dx=
\sum\limits_{k=0}^\infty\int\limits_{[\sqrt{k},\sqrt{k+1})}(-1)^{k}dx=
$$
$$
\sum\limits_{k=0}^\infty(-1)^{k}(\sqrt{k+1}-\sqrt{k})=
\sum\limits_{k=0}^\infty\frac{(-1)^{k}}{\sqrt{k+1}+\sqrt{k}}
$$
Using the Leibniz test we see that this sum converges, and hence so does the integral.
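A quick numerical check of that alternating series (mine, not the answerer's): consecutive partial sums bracket the limit, the bracket width is the current term size $\sim 1/(2\sqrt k)$, and averaging consecutive partial sums gives an estimate that stabilizes much faster.

```python
import math

def partial_sum(N):
    """Sum of the first N terms (-1)^k / (sqrt(k+1) + sqrt(k))."""
    s = 0.0
    for k in range(N):
        s += (-1) ** k / (math.sqrt(k + 1) + math.sqrt(k))
    return s

# consecutive partial sums bracket the limit (Leibniz test)
a, b = partial_sum(10000), partial_sum(10001)
assert abs(b - a) < 0.006              # term size ~ 1/(2*sqrt(10000)) = 0.005

# averaging them is already stable to many digits
est = (a + b) / 2
a2, b2 = partial_sum(40000), partial_sum(40001)
assert abs((a2 + b2) / 2 - est) < 1e-4
```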
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does $\int_{1}^{\infty}\sin(x\log x)dx $ converge? I'm trying to find out whether $\int_{1}^{\infty}\sin(x\log x)dx $ converges. I know that $\int_{1}^{\infty}\sin(x)dx $ diverges but $\int_{1}^{\infty}\sin(x^2)dx $ converges; more than that, $\int_{1}^{\infty}\sin(x^p)dx $ converges for every $p>1$, so since $x\log x$ grows faster than $x$, this integral should converge too. I'd really love your help with this.
Thanks!
| This is a version of the Van der Corput lemma, basically.
Note that it's enough to find some $n$ for which $\int_n^{\infty} \sin(x\log(x))\,dx$ converges. The key facts about $f(x) = x\log(x)$ that allow this are a) $\lim_{x \rightarrow \infty} f'(x) = \infty$ and b) $f''(x) > 0$ for $x$ large enough. Specifically, we write
$$\int_n^{\infty} \sin(f(x))\,dx = \int_n^{\infty} f'(x) {\sin(f(x)) \over f'(x)}\,dx$$
$$= \lim_{N \rightarrow \infty} \int_n^{N} f'(x) \sin(f(x)){1 \over f'(x)}\,dx$$
Integrating the integral on the right by parts you get
$$\int_n^{N} f'(x) \sin(f(x)){1 \over f'(x)}\,dx = -{\cos(f(N)) \over f'(N)} + {\cos(f(n)) \over f'(n)} + \int_n^N \cos(f(x)){d \over dx}{1 \over f'(x)}\,dx$$
$$= -{\cos(f(N)) \over f'(N)} + {\cos(f(n)) \over f'(n)} - \int_n^N \cos(f(x)) {f''(x) \over (f'(x))^2}\,dx$$
As $N$ goes to infinity the first term goes to zero since $f'(x)$ goes to $\infty$ as $x$ goes to $\infty$ and $|\cos(f(N))| \leq 1$. The third term is bounded in absolute value by
$$\int_n^N\bigg|{f''(x) \over (f'(x))^2}\bigg|\,dx$$
Since $f''(x) > 0$ we can just take off the absolute values to get
$$\int_n^N{f''(x) \over (f'(x))^2}\,dx$$
Integrating this becomes
$${1 \over f'(N)} - {1 \over f'(n)}$$
Since $f'(N) \rightarrow \infty$ as $N$ goes to $\infty$ this converges as $N$ goes to infinity. Hence the integral $\int_n^{\infty} \cos(f(x)) {f''(x) \over (f'(x))^2}$ converges absolutely, and thus converges.
Hence we have shown the original integral converges.
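The proof above also quantifies how slowly the integral settles: the tail beyond $N$ is at most $2/f'(N)=2/(\log N+1)$, the sum of the boundary term and the absolutely convergent remainder. A rough numerical sketch of that bound (Simpson's rule; all names are mine):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

g = lambda x: math.sin(x * math.log(x))
I200 = simpson(g, 1.0, 200.0, 80000)    # step ~ 0.0025, far below the local
I400 = simpson(g, 1.0, 400.0, 160000)   # oscillation period 2*pi/f'(x)

# |I(N) - limit| <= 2 / f'(N) with f'(x) = log(x) + 1, so the two partial
# integrals can differ by at most the sum of the two tail bounds
bound = 2 / (math.log(200) + 1) + 2 / (math.log(400) + 1)
assert abs(I400 - I200) < bound + 0.01
```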
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 3,
"answer_id": 2
} |
Cartesian product of n sets I'm currently writing a little application in which I would like to create all permutations of n sets. For example I have the following sets (arrays):
s1 = { A, B, C }
s2 = { B, A }
s3 = { X, Y, Z }
...
What would be the most efficient way (algorithm) to get all the permutations?
With all permutations, I mean:
ABX, ABY, ABZ, AAX, AAY, AAZ, etc.
I already created a primitive algorithm using recursion, however I have no idea how to put this in maths and since I would like to optimize it and document it, I'd appreciate any help!
| The appropriate term would be cartesian product, rather than permutations.
An algorithm for that would be to interpret each set as a set of digits, and the result as a number.
You can now enumerate through those in a manner similar to what happens with an odometer.
You increment the units digit first. When it hits its limit, you reset it and increment the tens place, carrying further left whenever a digit overflows, and continue in this manner till you reach the largest possible number.
This should give you an algorithm which would require very little space (as compared to a recursive solution), as you can compute the next result from the current one.
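Here is a sketch of that odometer idea in Python (the function name is mine; Python's own `itertools.product` yields the same sequence):

```python
def cartesian_product(*sets):
    """Yield tuples from the Cartesian product, odometer-style."""
    pools = [list(s) for s in sets]
    digits = [0] * len(pools)           # one "digit" position per set
    while True:
        yield tuple(pool[d] for pool, d in zip(pools, digits))
        # increment like an odometer: rightmost digit first
        i = len(digits) - 1
        while i >= 0:
            digits[i] += 1
            if digits[i] < len(pools[i]):
                break                   # no carry needed
            digits[i] = 0               # roll over and carry to the left
            i -= 1
        else:
            return                      # every digit rolled over: done

s1, s2, s3 = ["A", "B", "C"], ["B", "A"], ["X", "Y", "Z"]
combos = ["".join(t) for t in cartesian_product(s1, s2, s3)]
assert combos[:4] == ["ABX", "ABY", "ABZ", "AAX"]
assert len(combos) == 3 * 2 * 3
```

Only the current digit vector is stored, so the memory use is constant in the number of results, unlike a naive recursive solution that builds them all up the call stack.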
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to directly prove that $M$ is maximal ideal of $A$ iff $A/M$ is a field? An ideal $M$ of a commutative ring $A$ (with unity) is maximal iff $A/M$ is a field.
This is easy with the correspondence of ideals of $A/I$ with ideals of $A$ containing $I$, but how can you prove it directly? Take $x + M \in A/M$. How can you construct $y + M \in A/M$ such that $xy - 1 \in M$? All I can deduce from the maximality of $M$ is that $(M,x) = A$.
| Since $(M,x)=A$, you have that $1\in (M,x)$. The elements of $(M,x)$ are expressions of the form $$am+bx$$
where $a,b\in A$ and $m\in M$ (of course, since $M$ is an ideal, we in fact have $am\in M$ as well.)
Thus, if $1\in (M,x)$, there exists a $m\in M$ and an $y\in A$ such that $$1=m+xy$$
and modding out by $M$, we get that
$$1+M = (x+M)(y+M).$$
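For a concrete instance of this argument (my own illustration): in $A=\mathbb{Z}$ with the maximal ideal $M=(7)$, the extended Euclidean algorithm produces exactly the identity $1=m+xy$ with $m\in M$, exhibiting the inverse of $x+M$.

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

p, x = 7, 3                      # M = (7) is maximal in Z; x is not in M
g, s, t = extended_gcd(p, x)     # 1 = s*p + t*x, i.e. 1 = m + x*y
assert g == 1
m, y = s * p, t                  # here m = 7 and y = -2
assert m % p == 0 and (x * y) % p == 1   # so (x+M)(y+M) = 1+M
```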
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Why does $\sum_n\binom{n}{k}x^n=\frac{x^k}{(1-x)^{k+1}}$? I don't understand the identity $\sum_n\binom{n}{k}x^n=\frac{x^k}{(1-x)^{k+1}}$, where $k$ is fixed.
I first approached it by considering the identity
$$
\sum_{n,k\geq 0} \binom{n}{k} x^n y^k = \sum_{n=0}^\infty x^n \sum_{k=0}^n \binom{n}{k} y^k = \sum_{n=0}^\infty x^n (1+y)^n = \frac{1}{1-x(1+y)}.
$$
So setting $y=1$, shows $\sum_{n,k\geq 0}\binom{n}{k}x^n=\frac{1}{1-2x}$. What happens if I fix some $k$ and let the sum range over just $n$? Thank you.
| Start from the Taylor series of $\frac1{1-x}$, differentiate it $k$ times, and then multiply everything by $\frac{x^k}{k!}$.
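The identity is easy to sanity-check numerically (a quick sketch of mine) by comparing truncated partial sums against the closed form at a point $|x|<1$:

```python
from math import comb

def lhs(k, x, N=2000):
    """Truncation of sum_n C(n, k) x^n; comb(n, k) = 0 for n < k."""
    return sum(comb(n, k) * x ** n for n in range(N))

def rhs(k, x):
    """Closed form x^k / (1 - x)^(k+1)."""
    return x ** k / (1 - x) ** (k + 1)

for k in range(6):
    assert abs(lhs(k, 0.3) - rhs(k, 0.3)) < 1e-9
```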
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/106978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Rittner equation I would like to know if the Rittner equation :
$$\partial_{t}\varPhi(x,t)=k\,\partial_{xx}\varPhi(x,t)-\alpha\,\partial_{x}\varPhi(x,t)-\beta\,\varPhi(x,t)+g(x,t)$$
can be solved using the Lax pair method or the Fokas method.
Thanks
| With $\alpha$ and $\beta$ being constants, this equation can be reduced to a heat equation and solved exactly. The first step is to remove the $\beta$ term. We do this by setting
$$\Phi(x,t)=e^{-\beta t}\Phi_1(x,t).$$
The equation becomes
$$\partial_t\Phi_1(x,t)=k\partial_{xx}\Phi_1(x,t)-\alpha\partial_x\Phi_1(x,t)+g(x,t)e^{\beta t}.$$
The next step is to remove the drift term. This can be done by setting
$$\Phi_1(x,t)=e^{ct}e^{ax}\Phi_2(x,t)$$
with $a$ a constant to be determined. We just note that $\partial_x(e^{ax}\Phi_2(x,t))=e^{ax}(\partial_x+a)\Phi_2(x,t)$ and so one gets
$$\partial_t\Phi_2(x,t)+c\Phi_2(x,t)=k(\partial_x+a)^2\Phi_2(x,t)-\alpha(\partial_x+a)\Phi_2(x,t)+g(x,t)e^{\beta t}e^{-ct}e^{-ax}$$
and so, choosing $a=\frac{\alpha}{2k}$ and $c=-\frac{\alpha^2}{4k}$, the equation becomes a heat equation
$$\partial_t\Phi_2(x,t)=k\partial_{xx}\Phi_2(x,t)+g(x,t)e^{\left(\beta+\frac{\alpha^2}{4k}\right) t}e^{-\frac{\alpha}{2k}x}$$
This equation can be solved exactly by the kernel
$$\Delta(x,t)=\frac{e^{-\frac{x^2}{4kt}}}{\sqrt{4\pi kt}}$$
Then, a full solution is given by
$$\Phi_2(x,t)=\tilde\Phi_2(t,x)+\int dx'dt'\,\Delta(x-x',t-t')\,g(x',t')e^{\left(\beta+\frac{\alpha^2}{4k}\right) t'}e^{-\frac{\alpha}{2k}x'}$$
with $\tilde\Phi_2(t,x)$ a solution of the homogeneous equation. Then, it is easy to recognize that the kernel for the initial equation is just
$$\Delta_0(x,t)=\Delta(x,t)e^{-\left(\beta+\frac{\alpha^2}{4k}\right) t}e^{\frac{\alpha}{2k}x}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Ring of dyadic rationals The ring of dyadic rationals is $R=\{\frac{a}{b}:a,b\in\mathbb{Z}, b=2^n, n\in\mathbb{N}\}$.
I want to be able to say that if we have a rational $\frac{a}{b}$ in which $a=2^{j}e$ where $e$ is odd and $b=2^i$ then $\frac{a}{b}$ is dyadic if and only if $j<i$, because otherwise we could reduce to $\frac{a}{b}=2^{j-i}e$ and it would not be in R.
However, if we insist upon writing things in this "simplest form" then I don't see how we could have that $1 \in R$, i.e. that $R$ is a ring with identity.
I've been asked to discuss the units (among other things) in this ring, but I can't make sense of that task for the reason I laid out above.
| First, the definition as given does not make it clear whether $\mathbb{N}$ includes $0$ or not; if it does, then this immediately gives you that denominators equal to $1$ are allowable. But even if it doesn't:
Second, nowhere in the definition does it require $\gcd(a,b)=1$. Note that a fraction can be written with a denominator a power of $2$, whether or not it is in least terms, if and only if, when written in least terms, the denominator is a nonnegative power of $2$.
Third: what you "want to say" is incorrect. Even setting aside the issue of $1$ not being in the ring, such a set is not even closed under sums: $\frac{3}{2}$ and $\frac{1}{2}$ would both be in the set, but their sum would not be.
So: a rational number is "dyadic" if and only if it can be written as $\frac{a}{b}$, with $a$ and $b$ integers, and $b$ a power of $2$. We do not require the fraction to be in least terms, only that it can be written as a fraction whose denominator is a power of $2$. In particular, this includes numbers like $\frac{2}{2}$, $\frac{7}{2}$, $\frac{1024}{8}$, $\frac{3072}{8}$, etc. It also includes $\frac{9}{6}$, since $\frac{9}{6}$ can be written as $\frac{3}{2}$, which is of the desired form.
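In code (a small sketch of mine, using Python's `Fraction` to reduce to least terms): a rational is dyadic exactly when its reduced denominator is a power of $2$.

```python
from fractions import Fraction

def is_dyadic(a, b):
    """True iff a/b can be written with denominator a power of 2."""
    q = Fraction(a, b).denominator     # denominator in least terms, q >= 1
    return q & (q - 1) == 0            # bit trick: q is a power of two

assert is_dyadic(9, 6)        # 9/6 = 3/2
assert is_dyadic(1024, 8)     # an integer; denominator 1 = 2^0
assert is_dyadic(2, 2)
assert not is_dyadic(1, 6)    # reduced denominator 6 is not a power of 2
```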
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to integrate $1-\tan^4(x)$ without using secant? How do I integrate the following:
$\int_{-\pi/3}^{\pi/3}\left(1-\tan^4(x)\right)dx$
I already found a solution using the trigonometric function secant. It looks like this:
$$u=\tan(x),\quad \frac{du}{dx}=\sec^2(x)$$
$$
\begin{align}
\int 1-\tan^4(x) & = \int 1 \;dx - \int\tan^4(x) \; dx \\
& = x-\int\tan^4(x) \; dx \\
& = x-\int\tan^2(x) \tan^2(x) \; dx \\
& = x-\int\tan^2(x) (\sec^2(x) - 1) \; dx \\
& = x-\int\tan^2(x) \sec^2(x) \; dx + \int\tan^2(x) \; dx \\
& = x-\int u^2 \; du + (\tan(x)-x) \\
& = x-\frac{1}{3}u^3 + \tan(x)-x \\
& = \tan(x)-\frac{1}{3}\tan^3(x)
\end{align}
$$
The only problem now is that I have to find a solution without using secant. Can someone tell me how to solve this without secant?
| Sasha's solution is very elegant. However, if you can't see a trick like this, you can always mechanically integrate any rational function of sine and cosine using the Weierstraß substitution.
With
$$\tan x=\frac{2t}{1-t^2}\;,$$
$$\frac{\mathrm dx}{\mathrm dt}=\frac2{1+t^2}\;,$$
$$t=\tan\frac x2\;,$$
the integral of $\tan^4 x$ becomes
$$
\begin{eqnarray}
&&\int\tan^4x\mathrm dx
\\
&=&
\int\left(\frac{2t}{1-t^2}\right)^4\frac2{1+t^2}\mathrm dt
\\
&=&
\int\frac{32t^4}{(1-t^2)^4(1+t^2)}\mathrm dt
\\
&=&
\int\left(\frac2{t^2+1}-\frac1{(t-1)^2}-\frac1{(t+1)^2}+\frac1{(t-1)^3}-\frac1{(t+1)^3}+\frac1{(t-1)^4}+\frac1{(t+1)^4}\right)\mathrm dt
\\
&=&
2\arctan t+\frac1{t-1}+\frac1{t+1}-\frac1{2(t-1)^2}+\frac1{2(t+1)^2}-\frac1{3(t-1)^3}-\frac1{3(t+1)^3}+ \color{gray}{\text{const}}
\\
&=&
2\arctan t+\frac{2t}{t^2-1}-\frac{8t^3}{3(t^2-1)^3}+ \color{gray}{\text{const}}
\\
&=&
x-\tan x+\frac13\tan^3 x+ \color{gray}{\text{const}}\;,
\end{eqnarray}
$$
where I used Wolfram|Alpha for pulling apart the fractions and putting them back together again.
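As a numerical cross-check (my own addition): from the antiderivative above, $\int(1-\tan^4x)\,dx = \tan x-\frac13\tan^3x+C$, which vanishes at both $\pm\pi/3$ since $\tan(\pi/3)=\sqrt3$. So the definite integral in the question is $0$, and midpoint quadrature agrees.

```python
import math

f = lambda x: 1 - math.tan(x) ** 4
F = lambda x: math.tan(x) - math.tan(x) ** 3 / 3   # antiderivative of f

a, b = -math.pi / 3, math.pi / 3
n = 100000
h = (b - a) / n
midpoint = sum(f(a + (i + 0.5) * h) for i in range(n)) * h

assert abs(F(b) - F(a)) < 1e-12   # F = 0 at both endpoints: tan(pi/3)=sqrt(3)
assert abs(midpoint) < 1e-6       # quadrature agrees with the exact value 0
```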
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Essay about the art and applications of differential equations? I teach a high school calculus class. We've worked through the standard derivatives material, and I incorporated a discussion of antiderivatives throughout. I've introduced solving "area under a curve" problems as solving differential equations. Since it's easy to see the rate at which area is accumulating (the height of the function), we can write down a differential equation, take an antiderivative to find the area function, and solve for the constant.
Anyhow, I find myself wanting to share with my students what differential equations are all about. I don't know so much about them, but I have a sense that they are both a beautiful pure mathematics subject and a subject that has many, many applications.
I'm hoping to hear suggestions for an essay or a chapter from a book--something like 3 to 10 pages--that would discuss differential equations as a subject, give some interesting examples, and point to some applications. Any ideas?
| I strongly recommend reading a review paper with many interesting references therein:
PDE as a Unified Subject by Sergiu Klainerman.
An essay on partial differential equations written by a leading expert in the field, for anyone attempting to learn more about the subject as well as for those who would like to get a grasp of the interactions between Mathematics and Physics.
Another very interesting article dealing with certain aspects of differential equations (to some extent) and teaching is V.I. Arnold's On teaching mathematics (it may be worth to read if you are a teacher). A bit more detailed but still very clear is his
essay “Mathematics and physics: mother and daughter or sisters?” (check other sites if you can't download it).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How to create a vector field whose Curl and Divergence are zero at any point? What is the mathematical procedure to derive a vector field whose curl and divergence are zero at any point at any time?
Edit: Please explain it by solving the differential equations of curl and divergence.
| Take any harmonic function $\phi$, and set your vector field to be $v = \nabla \phi$, the gradient.
Edit On any simply connected domain (if your space is just the entire Euclidean space, for example), any curl free vector field can be written as the gradient of a function. You can see this explicitly by verifying that given a curl free vector field $v$, the function $\phi(x) = \int_{C} v(s)\cdot ds$ where $C$ is a curve starting at the origin and ending at the point $x$ is independent of the choice of the curve $C$, and hence $\phi$ is well-defined. (This is generally treated in any introductory textbook on vector calculus, and is a special instance of Stokes' theorem.) Once you have that, plugging $v = \nabla \phi$ into the divergence equation you automatically get that $\nabla^2\phi = 0$.
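A quick numerical illustration (mine; the choice of $\phi$ and the step size are arbitrary): take the harmonic function $\phi(x,y)=x^3-3xy^2$, set $v=\nabla\phi=(3x^2-3y^2,\,-6xy)$, and check by finite differences that the scalar curl $\partial_x v_y-\partial_y v_x$ and the divergence $\partial_x v_x+\partial_y v_y$ both vanish.

```python
# v = grad(phi) for the harmonic phi(x, y) = x^3 - 3*x*y^2
vx = lambda x, y: 3 * x ** 2 - 3 * y ** 2
vy = lambda x, y: -6 * x * y

def curl_and_div(x, y, h=1e-5):
    """Central finite differences for the 2D scalar curl and divergence."""
    curl = ((vy(x + h, y) - vy(x - h, y)) / (2 * h)
            - (vx(x, y + h) - vx(x, y - h)) / (2 * h))
    div = ((vx(x + h, y) - vx(x - h, y)) / (2 * h)
           + (vy(x, y + h) - vy(x, y - h)) / (2 * h))
    return curl, div

for (x, y) in [(0.3, -1.2), (2.0, 0.5), (-1.1, 1.7)]:
    c, d = curl_and_div(x, y)
    assert abs(c) < 1e-6 and abs(d) < 1e-6
```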
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 1
} |
If two vectors are orthogonal to a non-zero vector in $\mathbb{R}^2$ then are the two vectors scalar multiples of one another? If two vectors $\bf{u}$ and $\bf{v}$ in $\mathbb{R}^2$ are orthogonal to a non-zero vector $\bf{w}$ in $\mathbb{R}^2$, then are $\bf{u}$ and $\bf{v}$ scalar multiples of one another? Prove your claim.
Attempt: From a geometric point of view it seems obvious that they must be scalar multiples of one another but I am having difficulties trying to prove it.
My approach was to use the Cauchy-Schwarz Inequality by assuming $|\bf{u}\cdot \bf{v}| < ||\bf{u}|| ||\bf{v}|| $ and somehow reaching a contradiction but I can't seem to obtain one. Maybe I need to try a different approach? It would be great (if possible) if someone can continue using my approach or show that it won't work (Assuming my answer is correct in the first place).
| If either $\mathbf{u}$ or $\mathbf{v}$ are zero, then it is a scalar multiple of the other and you are done. So you may assume that $\mathbf{u}\neq\mathbf{0}$ and $\mathbf{v}\neq\mathbf{0}$.
Note that $\mathbf{u}$ and $\mathbf{w}$ are linearly independent. Hence they span $\mathbb{R}^2$, so we can write $\mathbf{v}=\alpha\mathbf{u}+\beta\mathbf{w}$. So... what is $\langle \mathbf{v},\mathbf{w}\rangle$ going to be, then, and what should it be?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
badly approximated numbers on the real line form a meagre set Let $S$ be the set of real numbers $x$ such that there exist infinitely many (reduced) rational numbers $p/q$ such that $$\left\vert x-\frac{p}{q}\right\vert <\frac{1}{q^8}.$$
I would like to prove that $\mathbb{R}\setminus S$ is a meagre set (i.e. union of countably many nowhere dense sets).
I have no idea about how to prove this, as I barely visualise the problem in my mind. I guess that the exponent $8$ is not just a random number, as it seems to me that with lower exponents (perhaps $2$?) the inequality holds for infinitely many rationals for every $x\in\mathbb{R}$.
Could you help me with that?
Thanks.
| Here is a more detailed proof. (BTW, 8 is completely arbitrary. It could be any number greater than or equal to 2. For generality, I will replace 8 by $n \ge 2$ from now on.)
Let $$T_q = \mathbb{R} - \bigcup_{p \text{ is relatively prime to }q} \left(\frac{p}{q}-\frac{1}{q^n}, \frac{p}{q}+\frac{1}{q^n}\right).$$
Clearly, $T_q$ is closed for any $q$.
Now for any natural number $Q$, define $$R_Q = \bigcap_{q\ge Q} T_q.$$
$R_Q$ is clearly closed. It is further nowhere dense, as shown here.
Now observe that $$\mathbb{R} - S = \bigcup_{Q=1}^\infty R_Q$$ and we get the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
From natural log to log base 10 The constraints of this question are related to a programming problem, but I must get the math right in order for it to be applied to code. The actual problem is I need a function that evaluates to log base 10 but all I have at my disposal is addition, subtraction, multiplication, division, a powering function, and a natural log function. Is there a way to effectively evaluate log base 10 with the given operations?
On a side note, I seem to remember having to do some division with logs in order to change the base, but it's been forever ago (who knew those college math classes would come in handy!)
| You do not have to memorize the change of base formula:
Say you want to evaluate $\log_{a}(x)$. Let's call it $y$. Then:
$$\begin{align}
y = \log_{a}(x) &\iff a^y = x
\\
& \iff \ln(a^y) = \ln(x)
\\
& \iff y \ln(a) = \ln(x)
\\
&\iff y = \frac{\ln(x)}{\ln(a)}.
\end{align}$$
This shows that $\log_{a}(x) = \frac{\ln(x)}{\ln(a)}$, but hopefully this also emphasizes that you need not memorize the change of base formula in order to use it.
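In code this is a one-liner (Python shown; the same works with any language's natural-log function):

```python
import math

def log10(x):
    """Log base 10 built from only the natural log and division."""
    return math.log(x) / math.log(10.0)

assert abs(log10(1000.0) - 3.0) < 1e-12
assert abs(log10(2.0) - 0.30102999566398) < 1e-12
```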
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Computing index of a subgroup of a free abelian group We looked briefly at this example in class but I'm not quite sure how to proceed, and I can't find examples of this in any textbooks I have (Dummit & Foote and Nicholson).
Suppose we have $H = \langle(1,1) , (1,-1)\rangle \le G = \mathbb{Z}^2$ for groups $H$ and $G$. Find $|G:H|$.
I think I'd have to take the standard basis for $G$ and then express the elements in $H$ as some combination of this basis. The goal (from what I understood, at least) seems to be to express $G$ and $H$ as direct products and then look at the order of the quotient (since that's equal to $|G:H|$).
Am I on the right track here? How would I go about actually showing all the work for this question? Thanks for reading.
| Let's restrict to the case where we have a subgroup of $\mathbb{Z}^n$ generated by at least $n$ given vectors (if it is generated by fewer than $n$ vectors, then the index is infinite).
Write the vectors as columns of an $n\times m$ matrix, $n\leq m$. Then you can compute the Smith normal form of the matrix, which amounts to finding an automorphism of $\mathbb{Z}^n$ and a generating set for $B$ in which the generators of $B$ are scalar multiples of the standard basis vectors of $\mathbb{Z}^n$. In that situation, one can read off the index from the Smith normal form: it will be the product of the (absolute values of the) first $n$ diagonal entries if they are all nonzero, and the index will be infinite if the matrix has more than $m-n$ zeros in the entries.
If $n=m$ (so $B$ is generated by $n$ vectors from $\mathbb{Z}^n$ and the original matrix is a square), then the Smith normal form of the matrix has the same determinant as the original matrix; this determinant is just the product of the diagonal entries in the Smith normal form... which happens to be the index of $B$ in $\mathbb{Z}^n$. So computing the determinant of $B$ will do the trick: if the determinant is $0$, then the index is infinite; if the determinant is nonzero, then the (absolute value of the) determinant is the index of $B$.
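For the specific subgroup in the question, $H=\langle(1,1),(1,-1)\rangle\le\mathbb{Z}^2$, the determinant shortcut gives the index at once, and a brute-force coset count agrees (my own sketch; the membership test solves $m(1,1)+n(1,-1)=(x,y)$ for integers $m,n$):

```python
def in_H(x, y):
    """(x, y) = m(1,1) + n(1,-1) needs m = (x+y)/2, n = (x-y)/2 integral."""
    return (x + y) % 2 == 0 and (x - y) % 2 == 0

# determinant of the generator matrix [[1, 1], [1, -1]]
det = 1 * (-1) - 1 * 1                     # = -2, so the index is |det| = 2
index = abs(det)

# brute force: collect coset representatives (a, b) from a small box
reps = []
for a in range(4):
    for b in range(4):
        if not any(in_H(a - c, b - d) for (c, d) in reps):
            reps.append((a, b))

assert index == 2
assert len(reps) == 2                      # cosets of (0,0) and (0,1)
```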
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Prove that a finite union of closed sets is also closed Let $X$ be a metric space. If $F_i \subset X$ is closed for $1 \leq i \leq n$, prove that $\bigcup_{i=1}^n F_i$ is also closed.
I'm looking for a direct proof of this theorem. (I already know a proof which first shows that a finite intersection of open sets is also open, and then applies De Morgan's law and the theorem "the complement of an open set is closed.") Note that the theorem is not necessarily true for an infinite collection of closed $\{F_\alpha\}$.
Here are the definitions I'm using:
Let $X$ be a metric space with distance function $d(p, q)$. For any $p \in X$, the neighborhood $N_r(p)$ is the set $\{x \in X \,|\, d(p, x) < r\}$. Any $p \in X$ is a limit point of $E$ if for all $r > 0$ the set $N_r(p) \cap E$ contains a point other than $p$. Any subset $E$ of $X$ is closed if it contains all of its limit points.
| Let $F$ and $G$ be two closed sets and let $x$ be a limit point of $F\cup G$. Now, if $x$ is a limit point of $F$ or $G$ it is clearly contained in $F\cup G$. So suppose that $x$ is not a limit point of $F$ and $G$ both. So there are radii $\alpha$ and $\beta$ such that $N_\alpha(x)$ and $N_\beta(x)$ don't intersect with $F$ and $G$ respectively except possibly for $x$. But then if $r=min (\alpha,\beta)$ then $N_r(x)$ doesn't intersect with $F\cup G$ except possibly for $x$, which contradicts $x$ being a limit point. This contradiction establishes the result. The proof can be extended easily to finitely many closed sets. Trying to extend it to infinitely many is not possible as then the "min" will be replaced by "inf" which is not necessarily positive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
} |
Virtual nilpotent groups and lower central series By definition, a nilpotent group is one whose lower central series terminates in the trivial subgroup after finitely many steps. Now we want to consider the relation between virtually nilpotent groups and their lower central series. Since a virtually nilpotent group is a group which has a nilpotent subgroup of finite index, I want to ask whether the lower central series of a virtually nilpotent group can terminate in a finite subgroup after finitely many steps. If not, what about the case where we add the condition that the group is finitely generated?
| Proposition: The cardinality of the intersection of the lower central series of a virtually nilpotent group can be any arbitrary infinite cardinality, and any finite cardinality not of the form $4k+2$ (or 0).
Proof: Let K be an infinite field of characteristic not 2 or 5, and let G be generated by the following matrices over K, where t ranges over a group generating set of K:
$$x = \begin{bmatrix} 1&0&0&0&0\\0&0&1&0&0\\0&1&0&0&0\\-1&-1&-1&-1&0\\0&0&0&0&1\end{bmatrix}, \quad
y = \begin{bmatrix} 0&1&0&0&0\\0&0&0&1&0\\0&0&1&0&0\\1&0&0&0&0\\0&0&0&0&1\end{bmatrix}, \quad
z(t) = \begin{bmatrix} 1&0&0&0&t\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}$$
G is the semi-direct product of a simple group of order 60 generated by x and y, and an abelian group V generated by the conjugacy classes of $z(t)$. The intersection of the lower central series of G is G itself, because G is perfect. G has the same cardinality as K. (This is an explicit example of the kind mentioned by Derek Holt.)
If k is an odd number, then the dihedral group of order $2k$ has the intersection of its lower central series cyclic of order k. If k ≥ 4 is a power of 2, then the affine general linear group AGL(1, k) has the intersection of its lower central series elementary abelian of order k. The direct product of two such groups has the intersection of its lower central series of size the product of the cardinalities, so any positive integer not of the form $4k+2$.
If G is a group and N is a normal subgroup of size $4k+2$, then N has a characteristic subgroup K of order $2k+1$ by Cayley's theorem, and so G centralizes $N/K$; in other words $[G,N] \leq K$, and $N$ is not the last group in the lower central series of G. Obviously no subgroup has cardinality 0, so the claim has been shown.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Confused by Calc II question regarding derivative of rational integrals So here's the question:
If $f$ is a quadratic function such that $f(0) = 1$ and $\int \frac{f(x)}{x^2(x+1)^3}\,dx$ is a rational function, find the value of $f'(0)$.
What I've done so far is try to solve the integral using partial fractions i.e.
$\frac{f(x)}{x^2(x+1)^3} = \frac{A}{x} + \frac{B}{x^2} + \frac{C}{(x+1)} + \frac{D}{(x+1)^2} + \frac{E}{(x+1)^3}$
Multiply out the denominator from the LHS to get:
$f(x) = Ax(x+1)^3 + B(x+1)^3 + Cx^2(x+1)^2 + Dx^2(x+1) + Ex^2$
when $x = 0$ I get that $B=1$.
At this point I'm stuck. I tried solving for the other variables but it gets insanely complicated. Wondering if anyone has a better strategy to solving the problem.
Thank you.
| You have
$f(x)=ax^2+bx +c$. That $f(0)=1$ gives you $c=1$. We have $f'(x)=2ax+b$; and so $f'(0)=b$.
The integrand can be written as
$$
{ax^2+ f'(0)x+1\over x^2(x+1)^3} = {A\over x\vphantom{ )^2}}+{B\over x^2\vphantom{ )^2}}+ {C\over (x+1)\vphantom{ )^2}}+{D\over (x+1)^2}+{E\over (x+1)^3}.
$$
Here's the important observation:
If the antiderivative of the above is a rational function, then $A=C=0$ (otherwise, it will contain logarithms).
Thus,
$$
{ax^2+ f'(0)x+1\over x^2(x+1)^3} = {B\over\vphantom{(^2} x^2}+ {D\over (x+1)^2}+{E\over (x+1)^3};
$$
or,
$$
{ax^2+ f'(0)x+1 } = {B }(x+1)^3+ {D }x^2(x+1)+{E }x^2.
$$
Setting $x=0$ in the above gives you $B=1 $.
Setting $x=-1$ in the above gives you $E=a-f'(0)+1$.
Also, comparing the $x^3$ terms, $B=-D$.
So:
$$
\eqalign{
& { ax^2+ f'(0)x+1 }\ =\ (x+1)^3- x^2(x+1)+{ (a-f'(0)+1)}x^2\cr
\iff& \color{maroon}{ax^2}+ f'(0)x+1\ =\ (\color{darkgreen}{x^3}+\color{darkblue}{3x^2}+3x+1) \color{darkgreen}{-x^3}\color{darkblue}{-x^2}+ \color{maroon}{ax^2} +\bigl(1-f'(0)\bigr)x^2 \cr
\iff&{ \hphantom{ax^2+} f'(0)x+1 }\ =\ 2x^2+3x+1+ \bigl (1 -f'(0)\bigr)x^2 \cr
\iff&
{ \hphantom{ax^2+} f'(0)x+1 }\ =\ \bigl(3 -f'(0)\bigr)x^2 +3x+1;
}
$$
whence, $f'(0)=3$.
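A numeric sanity check (not part of the original solution): taking $a=1$, so $f(x)=x^2+3x+1$, the partial fractions give $B=1$, $D=-1$, $E=a-f'(0)+1=-1$, hence the candidate rational antiderivative $F(x)=-\frac1x+\frac1{x+1}+\frac1{2(x+1)^2}$, whose derivative should reproduce the integrand.

```python
def integrand(x):
    # f(x)/(x^2 (x+1)^3) with f(x) = x^2 + 3x + 1  (a = 1, f'(0) = 3)
    return (x**2 + 3*x + 1) / (x**2 * (x + 1)**3)

def F(x):
    # candidate rational antiderivative from the partial fractions
    # B = 1, D = -1, E = a - f'(0) + 1 = -1
    return -1/x + 1/(x + 1) + 1/(2*(x + 1)**2)

def dF(x, h=1e-5):
    # central-difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2*h)

max_err = max(abs(dF(x) - integrand(x)) for x in (0.5, 1.0, 2.0, 3.7))
```

The error is pure finite-difference noise, confirming that no logarithmic term is needed.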
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Diagonalization method by Cantor (2) I asked a while ago a similar question about this topic. But doing some exercises and using this stuff, I still get stuck. So I have a new question about this topic. (Here is the link for the previous question: Cantor diagonalization method for subsequences).
Now my new question: Suppose I have a sequence $(x_n)$, a set $K$ and a function $f$ and we define for all $l \in \mathbb{N}$ the set $M_l:= K\cap \{x|f(x)\le l\}$ with $f< \infty$ on $K$.
I've proved that for a fixed $l$ there's a subsequence $(x_{n_k})$ converging on $M_l$, denote this by $(x^l_{n_k})$.
First it's clear that $M_l \subset M_{l+1}$ and I want to show that there's a subsequence which converges on $\cup_{l\ge1} M_l = K$.
So I have different subsequences $(x^1_{n_k}),\dots,(x^p_{n_k}),\dots$ and define the diagonal sequence as ${x^{\phi(l)}_{n_{\phi(l)}}}$ where $\phi(l)$ is the $l$-th element of $n_k$ (just the diagonal sequence).
Is it enough to say, since $(M_l)$ is increasing the sequence converges on $K=\cup_{l\ge1} M_l$?
I'm sorry, but I'm just confused about picking the right sequence and proving that it is the right sequence using Cantor's argument. Thank you for your help.
| Denote by $(x_{\varphi_l(k)})$ a subsequence which works for $M_l$. In fact, you have to construct these subsequences by induction, in order to make $(x_{\varphi_{l+1}(k)})$ a subsequence of $(x_{\varphi_l(k)})$. Then we put $x_{n_k}=x_{\varphi_k(k)}$. Now we are sure that, for each $j$, the sequence $(x_{n_k})_{k\geq N(j)}$ is a subsequence of $(x_{\varphi_j(k)})_{k\geq N(j)}$ for some integer $N(j)$.
It's important that the subsequences are nested; otherwise it may not work. For example, suppose that for $l$ even only subsequences of the form $(x_{2k})$ (even indices) work, and for $l$ odd only subsequences of the form $(x_{2k+1})$ (odd indices) work. Then the diagonal sequence $x_{\varphi_k(k)}$ keeps alternating between even and odd indices, so it is not eventually a subsequence of any single $(x_{\varphi_l(k)})_k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Poisson equation with one boundary condition at infinity Given the one-dimensional Poisson equation:
$${d^2\over dx^2}V(x)=\exp(-\beta V(x))$$
it's possible to find a solution given the initial conditions. But, if we have these boundary conditions:
$$V(0)=V_0$$
$$\lim_{x\to\infty}V(x)=0$$
can someone help me to find an analytical solution if any?
For the sake of simplicity, you can consider the easier ODE:
$${d^2\over dx^2}V(x)=\exp(-V(x))$$
| $\dfrac{d^2V}{dx^2}=e^{-\beta V}$
$2\dfrac{dV}{dx}\dfrac{d^2V}{dx^2}=2e^{-\beta V}\dfrac{dV}{dx}$
$\int2\dfrac{dV}{dx}\dfrac{d^2V}{dx^2}dx=\int2e^{-\beta V}\dfrac{dV}{dx}dx$
$\int2\dfrac{dV}{dx}d\left(\dfrac{dV}{dx}\right)=\int2e^{-\beta V}~dV$
$\left(\dfrac{dV}{dx}\right)^2=\dfrac{2C_1^2}{\beta}-\dfrac{2e^{-\beta V}}{\beta}$
$\dfrac{dV}{dx}=\pm\sqrt{\dfrac{2}{\beta}}\sqrt{C_1^2-e^{-\beta V}}$
$\dfrac{dV}{\sqrt{C_1^2-e^{-\beta V}}}=\pm\sqrt{\dfrac{2}{\beta}}dx$
$\int\dfrac{dV}{\sqrt{C_1^2-e^{-\beta V}}}=\int\pm\sqrt{\dfrac{2}{\beta}}dx$
For $\int\dfrac{dV}{\sqrt{C_1^2-e^{-\beta V}}}$ ,
Let $u=e^{-\beta V}$ ,
Then $V=-\dfrac{\ln u}{\beta}$
$dV=-\dfrac{du}{\beta u}$
$\therefore\int\dfrac{dV}{\sqrt{C_1^2-e^{-\beta V}}}=-\dfrac{1}{\beta}\int\dfrac{du}{u\sqrt{C_1^2-u}}=C+\dfrac{2}{\beta C_1}\tanh^{-1}\dfrac{\sqrt{C_1^2-u}}{C_1}=C+\dfrac{2}{\beta C_1}\tanh^{-1}\dfrac{\sqrt{C_1^2-e^{-\beta V}}}{C_1}$
Hence $\dfrac{2}{\beta C_1}\tanh^{-1}\dfrac{\sqrt{C_1^2-e^{-\beta V}}}{C_1}=\pm\sqrt{\dfrac{2}{\beta}}x+c$
$\tanh^{-1}\dfrac{\sqrt{C_1^2-e^{-\beta V}}}{C_1}=\pm\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)$
$\dfrac{\sqrt{C_1^2-e^{-\beta V}}}{C_1}=\pm\tanh\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)$
$\sqrt{C_1^2-e^{-\beta V}}=\pm~C_1\tanh\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)$
$C_1^2-e^{-\beta V}=C_1^2\tanh^2\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)$
$e^{-\beta V}=C_1^2\,\text{sech}^2\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)$
$V=-\dfrac{1}{\beta}\ln\left(C_1^2\,\text{sech}^2\left(C_1\sqrt{\dfrac{\beta}{2}}\,x+C_2\right)\right)$
$V(0)=V_0$ :
$-\dfrac{1}{\beta}\ln\left(C_1^2\,\text{sech}^2C_2\right)=V_0$
$C_1^2\,\text{sech}^2C_2=e^{-\beta V_0}$
$C_1^2=e^{-\beta V_0}\cosh^2C_2$
$C_1=\pm~e^{-\frac{\beta V_0}{2}}\cosh C_2$
$\therefore V=-\dfrac{1}{\beta}\ln\left(e^{-\beta V_0}\cosh^2C_2~\text{sech}^2\left(\pm~x\,e^{-\frac{\beta V_0}{2}}\cosh C_2\,\sqrt{\dfrac{\beta}{2}}+C_2\right)\right)$
difficult to find $C_2$ when $V(\infty)=0$
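For the simplified ODE $V''=e^{-V}$ suggested in the question (the case $\beta=1$), the resulting family $V(x)=-\ln\bigl(C_1^2\operatorname{sech}^2(C_1x/\sqrt2+C_2)\bigr)$ can be checked numerically; the constants below are arbitrary sample values, and the check is illustrative, not part of the original answer.

```python
import math

C1, C2 = 1.3, 0.4   # arbitrary sample constants

def V(x):
    # candidate solution of V'' = exp(-V)  (the beta = 1 case)
    sech = 1.0 / math.cosh(C1 * x / math.sqrt(2) + C2)
    return -math.log(C1**2 * sech**2)

def Vpp(x, h=1e-3):
    # central second-difference approximation of V''(x)
    return (V(x + h) - 2 * V(x) + V(x - h)) / h**2

max_err = max(abs(Vpp(x) - math.exp(-V(x))) for x in (-1.0, 0.0, 0.5, 2.0))
```

The residual is only finite-difference noise, so the family does solve the equation; the remaining difficulty is purely the boundary condition at infinity.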
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/107926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I refer to a certain element in a set? I have an idea about how to do this. I've seen this before so this is what I think. Is this how you refer to a single element in a set?
$$a = \{5, 7, 3, 4\};a_2 = 7$$
Is this correct?
| You can just refer to it as 7. To say 7 is an element of the set, we write $7 \in \{5,7,3,4 \}$. Usually the order you write elements of sets in doesn't matter.
You can have very big (uncountable) sets where it is not easy to assign a number to each element of the set, for example the set of real numbers (i.e. decimal numbers) $\mathbb{R}$. So there is no (immediate) sensible idea of a "2nd element" of this set, but there is for your example.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Basic question: closure of irreducible sets Let $(X,\tau)$ be a topological space, $U \subseteq X$ a non-empty open subset of $X$ and let $W$ be an irreducible non-empty closed subset of $X$. Assuming $W \cap U \neq \emptyset$ why is this intersection irreducible in $U$ and its closure (taken in $X$) equal $W$?
Clearly $W \cap U$ is dense in $W$ because it is an open subset of $W$ and $W$ is irreducible so its closure, taken in $W$ is $W$ itself. I don't see why its closure taken in $X$ is equal $W$. Why is this? Also, why is it irreducible in $U$?
EDIT: OK $W=cl_{X}(W)=cl_{X}(W \cap U)$ I think. Why is it irreducible in $U$ though?
| To say that $W\neq \emptyset$ is irreducible means that it is not the union of two closed subsets both $\neq W$.
Passing to complements, it means that any two non-empty open subsets of $W$ have non-empty intersection.
But then every open subset $V\subset W$ has the same property (since opens of $V$ are in particular opens of $W$) and is thus irreducible.
This applies in particular to $V=W\cap U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
For $\sum_{0}^{\infty}a_nx^n$ with an infinite radius of convergence, $a_n \neq0$ , the series does not converge uniformly in $(-\infty , \infty)$. I'd like your help with the following question:
Let $\sum_{0}^{\infty}a_nx^n$ be a power series with an infinite
radius of convergence, such that $a_n \neq 0 $ , for infinitely many values of
$n$. I need to show that the series does not converge uniformly in
$(-\infty , \infty)$.
I quite don't understand the question, since I know that within the radius of convergence for a power series, there's uniform convergence, and since I know that the radius is "infinite", it says that the uniform convergence is infinite, no?
Thanks!
| First, your argument is flawed. I assume the theorem you are referring to is that you have uniform convergence on $[−a,a]$ for any real number $a$ with $a<R$, where $R$ is the radius of convergence of the power series.
But you can not take $a=\infty$ in your problem...
What you could use is the following, very useful, fact:
A series $\sum\limits_{k=1}^\infty f_k(x)$ converges uniformly on $I$
if and only if the sequence of partial sums $S_n=\sum\limits_{m=1}^n
f_m$ is uniformly Cauchy. That is, for every $\epsilon>0$,
there is an integer $N$ such that
$$|S_m(x)-S_n(x)|=\Bigl|\sum_{k=n+1}^m f_k(x)\Bigr|<\epsilon,\quad\text{ for all }x\in I\ \ \text{and all }m\ge n \ge N.$$
An immediate result of this (and a result that, by itself, is easily proven, as in Davide Giraudo's answer) is:
If $\sum\limits_{k=1}^\infty f_k(x)$ converges uniformly on $I$, then
for every $\epsilon>0$, there is an integer $N$ such that $$|f_n(x)|
=| S_{n }(x)-S_{n-1}(x)|<\epsilon,\quad\text{ for all }x\in I\ \ \text{ and all }n \ge N;$$ that is, the sequence $(f_n)$ converges uniformly
to the zero function on $I$.
One can use the first result I mentioned directly to show that your series is not uniformly Cauchy over $\Bbb R$:
Since $a_n\ne 0$ for infinitely many $n$, given any positive integer $N$, you can choose $m>n\ge N$ such that $\sum\limits_{k=n+1}^m a_k x^k$ is not a constant polynomial. Taking the limit as $x$ tends to infinity will verify that this sum is not uniformly close to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Question on trigonometric linear equations While doing an exercise on complex analysis that began by asking me to solve some equations for $u_x$ and $u_y$, I got stuck and looked up the answer.
$$u_x\cos\theta+u_y\sin\theta=u_r,$$
$$-u_xr\sin\theta+u_yr\cos\theta=u_\theta.$$
Solving these simultaneous linear equations for $u_x$ and $u_y$, we find that
$$u_x=u_r\cos\theta-u_\theta\frac{\sin\theta}{r}\quad\text{ and }\quad u_y=u_r\sin\theta+u_\theta\frac{\cos\theta}{r}.$$
(Here's the original image of this quoted text.)
The answer says 'solving these simultaneous linear equations'...but surely these are not linear equations when they use cos and sin?
And how do you solve these equations anyway? I can see you can divide across the bottom equation, but I can't see what to do after that.
| The equations are linear in $u_x$ and $u_y$—that is, if $\theta$ is a constant, $\sin\theta$ and $\cos\theta$ are constants, so they're just like having numbers there.
If you multiply the first equation by $\sin\theta$ and the second equation by $\frac{\cos\theta}{r}$, you'll have
$$u_x\cos\theta\sin\theta+u_y\sin^2\theta=u_r\sin\theta$$
$$-u_x\cos\theta\sin\theta+u_y\cos^2\theta=u_\theta\frac{\cos\theta}{r}$$
Adding these two equations gives
$$u_y(\sin^2\theta+\cos^2\theta)=u_r\sin\theta+u_\theta\frac{\cos\theta}{r}$$
and since $\sin^2\theta+\cos^2\theta=1$,
$$u_y=u_r\sin\theta+u_\theta\frac{\cos\theta}{r}.$$
You can find $u_x$ from this $u_y$, or by applying a similar technique to the original equations.
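The solved formulas can be verified by plugging them back into the original pair of equations; the numbers below are arbitrary sample values, just for illustration.

```python
import math

r, t = 2.0, 0.7        # arbitrary sample r and theta
u_r, u_t = 1.5, -0.8   # arbitrary sample values of u_r and u_theta

# the solved formulas:
u_x = u_r * math.cos(t) - u_t * math.sin(t) / r
u_y = u_r * math.sin(t) + u_t * math.cos(t) / r

# plug back into the original pair of equations:
eq1 = u_x * math.cos(t) + u_y * math.sin(t)           # should equal u_r
eq2 = -u_x * r * math.sin(t) + u_y * r * math.cos(t)  # should equal u_theta
```

Both equations are recovered to machine precision, which is exactly what "linear in $u_x$ and $u_y$" buys you.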
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the equation of an ellipse that is not aligned with the axis? I have an ellipse with semi-minor axis of length $x$ and semi-major axis of length $4x$. However, it is oriented $45$ degrees from the axes (but is still centred at the origin). I want to do some work with such a shape, but don't know how to express it algebraically. What is the equation for this ellipse?
| Sorry I didn’t see this one much earlier. Your ellipse has its major axis aligned with (I’m assuming) the line $y=x$, in other words it has symmetry $x\leftrightarrow y$. This means that it has an equation of the shape
$$
a(x^2+y^2) + bxy+c(x+y)=1\,,
$$
and since you’ve specified that the center is at the origin, you also need $c=0$. Thus your equation will simply be $a(x^2+y^2)+bxy=1$. You say that the semimajor axis is to be $4d$ and the semiminor axis is to be $d$ in length. So the points you know to be on the ellipse are $(\frac{4d}{\sqrt2},\frac{4d}{\sqrt2})=(2\sqrt2d,2\sqrt2d)$ at the end of the major axis and $(-\frac d{\sqrt2},\frac d{\sqrt2})$ on the minor axis.
Substituting these values into the equation and solving for $a$ and $b$, you get
$$
a=\frac{17}{32d^2}\,,\quad b=-\frac{15}{16d^2}\,,
$$
at least if I have not made a computational error. And there’s the equation for your ellipse.
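A quick numeric check of these values of $a$ and $b$ (the value of $d$ is an arbitrary sample; the check is not part of the original answer):

```python
import math

d = 1.7                       # arbitrary sample value
a = 17 / (32 * d**2)
b = -15 / (16 * d**2)

def lhs(x, y):
    # left-hand side of the ellipse equation a(x^2 + y^2) + b*x*y = 1
    return a * (x**2 + y**2) + b * x * y

v_major = lhs(2 * math.sqrt(2) * d, 2 * math.sqrt(2) * d)  # end of major axis
v_minor = lhs(-d / math.sqrt(2), d / math.sqrt(2))         # end of minor axis
```

Both endpoint values come out to $1$, so there was no computational error.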
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
An application of the General Lebesgue Dominated convergence theorem I came across the following problem in my self-study:
If $f_n, f$ are integrable and $f_n \rightarrow f$ a.e. on $E$, then $\int_E |f_n - f| \rightarrow 0$ iff $\int_E |f_n| \rightarrow \int_E |f|$.
I am trying to prove (1) and the book I am using suggests that it follows from the Generalized Lebesgue Dominated Convergence Theorem:
Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions on $E$ that converge pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of integrable functions on $E$ that converge pointwise a.e. on $E$ to $g$ such that $|f_n| \leq g_n$ for all $n \in \mathbb{N}$. If $\lim\limits_{n \rightarrow \infty} \int_E g_n = \int_E g$, then $\lim\limits_{n \rightarrow \infty} \int_E f_n = \int_E f$.
I suspect that I need the right inequalities to help satisfy the hypothesis of the GLDCT, but I am not certain about what these inequalities should be.
| Suppose that $$\int |f_n|\to \int |f|.$$ The function $$|f_n|+|f|- |f_n-f|$$ is nonnegative, and it converges a.e. to $2|f|$. (The following is just a technique used in other answers on this site.) By Fatou's Lemma
$$\begin{align*}
2\int |f| = \int\liminf\bigl(|f_n|+|f|- |f_n-f|\bigr) &\leq \liminf \int\bigl(|f_n|+|f|- |f_n-f|\bigr)\\
&= \lim\int |f_n|+\int |f|-\limsup\int |f_n-f|\\
&= 2\int |f| - \limsup\int |f_n-f|,
\end{align*}$$
and the last line leads to
$$\limsup\int |f_n-f|\leq 0,$$
therefore $$\lim_{n\to\infty}\int |f_n-f|= 0.$$
For the other implication see Martin's answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Construction of a partial order on a quotient of a coproduct [NB: Throughout this post, let the subscript $i$ range over the set $\unicode{x1D7DA}
\equiv \{0, 1\}$.]
Let $(Y, \leqslant)$ be a poset, and $X\subseteq Y$. Let $\iota_i$ be the canonical inclusions $Y \hookrightarrow (Y \amalg Y)$. Define $Z = (Y \amalg Y)/\sim$, where $\sim$ is the smallest equivalence relation on $Y\amalg Y$ that identifies $\iota_0(x)$ and $\iota_1(x)$, for all $x \in X \subseteq Y$. Finally, define functions $f_i = \pi\;{\scriptstyle \circ}\;\iota_i$, where $\pi$ is the canonical projection $(Y \amalg Y) \to Z = (Y \amalg Y)/\sim$.
I'm looking for a construction of a partial order $\leqslant_Z$ on $Z = (Y \amalg Y)/\sim$ such that both $f_0$ and $f_1$ are order-preserving wrt $\leqslant_Z$.
This is what I have so far:
Since the $f_i$ are injections, the $\leqslant_i$ given by
$$
\leqslant_i \;\;=\;\; \{(f_i(y), f_i(y\,')) \;\;|\;\; (y, y\,' \in Y) \wedge (y \leqslant y\,')\} \;\;\cup\;\; I_Z.
$$
...(where $I_Z$ is the identity on $Z$) are well-defined partial orders on $Z$.
Now, let $T$ be the transitive closure of the relation on $Z$ given by $\leqslant_0 \cup \leqslant_1$. I.e. $T = \bigcap_{V \in \mathscr{T}} V$, where $\mathscr{T} \neq \varnothing$ is the family of transitive relations containing $\leqslant_0 \cup \leqslant_1 $. $T$ is obviously transitive, and it is also reflexive, since $I_Z \subseteq V, \forall \; V \in \mathscr{T}$.
The question therefore reduces to whether $T$ is antisymmetric (i.e. $T\cap T^{-1} = I_Z$).
I'm having a hard time finding a halfway reasonable-looking approach to this last point.
| Suppose that $z\leq w$ and $w\leq z$. This means that there are finite chains $z=z_0\leq z_1\leq \dots\leq z_n=w$ and $w=w_0\leq w_1\leq \dots\leq w_m=z$ where each consecutive pair lies in $\leq_0\cup \leq_1\cup I_Z$. Concatenating the two chains gives a cycle in $Z$. Each class in $Z$ determines a unique underlying element of $Y$, and every non-identity step of the cycle projects to a relation $y\leqslant y'$ in $Y$; antisymmetry in $Y$ then forces all the underlying elements to coincide. Finally, two distinct classes over the same $y\notin X$ lie in different copies of $Y$ and cannot be joined by a non-identity step, so in fact $z=w$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
4 point quadratic curve I can define a curve that passes through 3 points using a quadratic equation:
$ax^2 + bx + c = 0$
I would like to know is it possible to define a curve that passes through 4 points using:
$ax^3 + bx^2 + cx + d = 0$
Cheers
| The answer was already in the comments upon migration: Use a Lagrange polynomial. The restriction "in most cases" is unnecessary; the Lagrange polynomial is completely general and yields a polynomial which interpolates the points as long as no two of them have the same $x$ coordinate; if they do, there can be no univariate function, polynomial or otherwise, that interpolates them.
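A minimal sketch of Lagrange interpolation through 4 points (the sample points below are arbitrary):

```python
def lagrange(points):
    # unique polynomial of degree < len(points) through the given points,
    # assuming all x coordinates are distinct
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0), (4.0, 5.0)]
p = lagrange(pts)
interp_ok = all(abs(p(x) - y) < 1e-9 for x, y in pts)
```

The same construction works for any number of points, which is why no "in most cases" caveat is needed as long as the $x$ coordinates are distinct.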
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$S(x)=\sum_{n=1}^{\infty}a_n \sin(nx) $, $a_n$ is monotonic decreasing $a_n\to 0$: Show uniformly converges within $[\epsilon, 2\pi - \epsilon]$ $S(x)=\sum_{n=1}^{\infty}a_n \sin(nx) $, $a_n$ is monotonically decreasing and $a_n\to 0$ as ${n \to \infty}$. I need to prove that for every $\epsilon >0$, the series converges uniformly on $[\epsilon, 2\pi - \epsilon]$. Can I use Dirichlet and say that $\sum_{0}^{M} \sin (nx)< M$ for every $x$ in the interval, and since $a_n$ converges uniformly to $0$ (uniformly, since it does not depend on $x$), the series converges uniformly on this range?
In addition I need to prove that if $\sum_{n=1}^{\infty} a_n^2 = \infty$ then the series is not uniformly convergent on $[0, 2 \pi]$. Since I know that from some $n_0$ on, $a_n^2< a_n$, I used that inequality, but again I'm not sure about it.
On the other hand, maybe I need to use Fourier series somehow.
Thanks for the help!
| For your first question:
You can use Dirichlet's Test$^\dagger$ as long as you can show that
$$D_n(x)=\sum\limits_{k=1}^n \sin(kx)$$
is indeed uniformly bounded on $[\epsilon, 2\pi-\epsilon]$.
Dirichet himself did this as follows:
Using the formula $$2\sin(v)\sin(u)=\cos(v-u)-\cos (v+u),$$ for any $n$:
$$\eqalign{
2\sin(x/2) D_n(x)&= \sum_{k=1}^n\bigl[\, 2\sin(x/2)\sin(kx)\,\bigr]\cr
&=\sum_{k=1}^n\Bigl[\, \cos \bigl(\,( k-{\textstyle{1\over2}})x\,\bigr) - \cos \bigl(\,(k+{\textstyle{1\over 2}})x\,\bigr)\,\Bigr]\cr
&=\cos (x/2)-\cos \bigl(\,(n+\textstyle{1\over2})x\bigr).
}
$$
So:
$$\tag{1}
|D_n(x)|= \biggl| {{\cos (x/2)-\cos \bigl(\,(n+{1\over2})x\,\bigr)}\over 2\sin(x/2) }
\biggr|
\le {{1\over |\sin(x/2)|} }.
$$
Now, if $x\in [\epsilon, 2\pi-\epsilon]$, then $x/2\in[\epsilon/2, \pi-\epsilon /2]$
and it follows from inequality $(1)$ that
$$
|D_n(x)|\le {1\over \sin(\epsilon/2)}.
$$
Dirichlet's Test:
Let $E\subset\Bbb R$ be a non-empty set and let $f_k$, $g_k$ be functions from $E$ to $\Bbb R$.
If $$\biggl|\sum\limits_{k=1}^n f_k(x)\,\biggr|\le M<\infty$$ for all positive integers $n$ and all $x\in E$, and if $g_k\searrow 0$ uniformly on $E$, then $\sum\limits_{k=1}^\infty f_k g_k$ converges uniformly on $E$.
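Both the closed form for $D_n(x)$ and the resulting bound are easy to check numerically; the snippet below (not part of the original answer) samples $x\in[\epsilon,2\pi-\epsilon]$ with $\epsilon=0.3$.

```python
import math

def D(n, x):
    # partial sum D_n(x) = sum_{k=1}^n sin(kx)
    return sum(math.sin(k * x) for k in range(1, n + 1))

def closed_form(n, x):
    # (cos(x/2) - cos((n + 1/2)x)) / (2 sin(x/2)), from the telescoping above
    return (math.cos(x / 2) - math.cos((n + 0.5) * x)) / (2 * math.sin(x / 2))

eps = 0.3
xs = [eps + k * (2 * math.pi - 2 * eps) / 20 for k in range(21)]
form_ok = all(abs(D(n, x) - closed_form(n, x)) < 1e-9
              for n in (1, 5, 40) for x in xs)
bound_ok = all(abs(D(n, x)) <= 1 / abs(math.sin(x / 2)) + 1e-9
               for n in (1, 5, 40) for x in xs)
```

The bound is independent of $n$, which is exactly the uniform boundedness Dirichlet's Test requires.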
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Why are $\log$ and $\ln$ being used interchangeably? A definition for complex logarithm that I am looking at in a book is as follows -
$\log z = \ln r + i(\theta + 2n\pi)$
Why is it $\log z = \ldots$ and not $\ln z = \ldots$? Surely the base of the log will make a difference to the answer.
It also says a few lines later $e^{\log z} = z$.
Yet again I don't see how this makes sense. Why isn't $\ln$ used instead of $\log$?
| This is one of those cases where two different notations developed in slightly different applications, and both are commonly used enough that one has not been adopted over the other. In some fields of engineering, $\log$ means $\log_{10}$, in math it usually means $\ln$, and in computer science it often means $\log_2$ (when it matters). Another example of this kind of notational difference is found in boolean algebra. While a mathematician might write an expression as $(a\land b)\lor(\lnot c\land d)$, an electrical engineer would probably write this as $ab+\overline{c}d$. Take also the Leibniz versus Newton calculus notations. It is what it is; I find that the best practice is just to know various different notations and gauge what is most appropriate in context.
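For what it's worth, Python's standard library follows the base-$e$ convention: `cmath.log` is the natural logarithm, and it inverts `cmath.exp` up to the $2\pi i$ branch ambiguity from the question. A small illustration:

```python
import cmath
import math

z = 3 + 4j
w = cmath.log(z)   # principal value: ln|z| + i*arg(z), natural log base e

modulus_ok = abs(w.real - math.log(abs(z))) < 1e-12
arg_ok = abs(w.imag - cmath.phase(z)) < 1e-12
inverse_ok = abs(cmath.exp(w) - z) < 1e-9
# other branches (theta + 2*n*pi) exponentiate back to z as well:
branch_ok = abs(cmath.exp(w + 2j * math.pi) - z) < 1e-9
```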
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
General solution using Euclidean Algorithm
I was able to come up with the integer solution that they also have in the textbook using the same method they used, but I am really puzzled about how they come up with a solution for all the possible integer combinations... how do they come up with that notation/equation that represents all the integer solutions? I am talking about the very last line.
| I solved the problem... joriki, your solution is correct, but I found out what I didn't know after reading some text in the book....
It basically says that if you want the complete solution of an equation $ax+by=c$, you first find one particular solution, i.e. an $x$ and $y$ that work. Then take that $x$ and add to it $(b/\gcd(a,b))\,n$, and take the $y$ you found and subtract from it $(a/\gcd(a,b))\,n$. That is the complete solution for all integers $n$.
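The recipe can be sketched in code; the equation $12x+42y=6$ below is an arbitrary example, not the one from the textbook.

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 12, 42, 6                      # arbitrary example with gcd(a,b) | c
g, x0, y0 = extended_gcd(a, b)
x0, y0 = x0 * (c // g), y0 * (c // g)    # one particular solution

# complete solution: x = x0 + (b//g)*n, y = y0 - (a//g)*n for all integers n
family_ok = all(a * (x0 + (b // g) * n) + b * (y0 - (a // g) * n) == c
                for n in range(-5, 6))
```

The shifts $(b/g)\,n$ and $-(a/g)\,n$ cancel each other exactly, which is why every member of the family still satisfies the equation.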
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
$f_{n}=\frac{1}{1+z^{n}}$ uniform convergence $\newcommand{\Z}{\mathbb{Z}}$
$\newcommand{\C}{\mathbb{C}}$
$\newcommand{\R}{\mathbb{R}}$
$\newcommand{\s}{\sigma}$
$\newcommand{\Q}{\mathbb{Q}}$
$\newcommand{\F}{\mathbb{F}}$
I am trying to show and disprove uniform convergence in the following example:
$$D=\{z\in \C | |z| < 1 \} \ and \ f_{n}:D\rightarrow \C : f_{n}(z)=\frac{1}{1+z^{n}} $$
Proposition 1:
$f_{n}$ converges uniformly in all $B_r(0)$ with $0<r<1$
Proof 1: With $f_{n}(z)= (1+z^n)^{-1}$ put
$$ f'_{n}(z)=-nz^{n-1}(1+z^n)^{-2}=0 \Rightarrow z=0$$
(I wanted to use that $\lim sup |f_{n} - f | = 0 $ but I fail at finding the supremum of $|f_{n}-f|$) How does one find the supremum?
So I try finding an estimate instead: $$|f_{n} - f| = |\frac{1}{1+z^{n}}-1| = |\frac{-z^{n}}{1+z^{n}}| \le \frac{r^{n}}{|1-r^{n}|} =: \epsilon$$
Is this done correctly?
Proposition 2: $f_{n}$ does not converge uniformly in D
Proof 2: How can one show that something does not converge uniformly??
Thanks for suggestions.
| If $|z|<1$, then $z^n \to 0$ as $n \to \infty$, so $f_n$ converges pointwise to the constant function $1$.
On the other hand $\sup_{z\in D} |f_n| = +\infty$. (To see this, for a fixed $n$, take a sequence of points in $D$ converging to an $n$-th root of $-1$.)
On $D_r = \{ z : |z| < r \}$, (if $r < 1$)
$$ \left| \frac{1}{1+z^n} - 1 \right| = \left| \frac{z^n}{1+z^n} \right| < \frac{r^n}{1-r^n} \to 0\quad\text{as $n\to\infty$}.$$
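These two facts, the blow-up on $D$ and the tail bound on $D_r$, can be illustrated numerically; the values of $r$ and $n$ below are arbitrary, and this is only an illustration, not part of the proof.

```python
import cmath
import math

def f(n, z):
    return 1 / (1 + z**n)

r, n = 0.8, 10
bound = r**n / (1 - r**n)

# on |z| < r, the n-th term is within r^n/(1 - r^n) of the limit 1:
zs = [0.99 * r * cmath.exp(1j * k) for k in range(8)]
bound_ok = all(abs(f(n, z) - 1) <= bound for z in zs)

# ...but inside the full unit disc, f_n blows up near an n-th root of -1:
z_bad = 0.999 * cmath.exp(1j * math.pi / n)
big = abs(f(n, z_bad))   # of order 1/(1 - 0.999**n), i.e. large
```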
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding conditional density $p_{x \vert S}(x \vert s)$ when $S = X + Y$ So in this problem I have two i.i.d random variables $X$ and $Y$, which are uniformly distributed over interval $[0,1]$.
I know that $S = X + Y$. To find the density of $S$, I find the convolution of $X$ and $Y$. However I am struggling conceptually to understand what $p_{X \vert S}(x \vert s)$ is, for a given real value of $S$.
The conflict is that $X$ is defined as a uniform over [0,1], so it seems that the pdf should remain just that. However clearly if we condition on $S$ being 0.5 for example, then $X$ cannot sample values greater than that.
I am inclined to say
$p_{X \vert S}(x \vert s) = \begin{cases}
\frac{1}{s} & \mathrm{for}\ 0 \le x \le s, \\[8pt]
0 & \mathrm{otherwise}\
\end{cases} $
But this is clearly wrong if s > 2, or s < 0, and I am not sure it is correct. Could someone offer advice on how $p_{x \vert S}(x \vert s)$ would be derived?
| 
* Given $S = X+Y = s$ where $0 \leq s \leq 1$, the random point $(X,Y)$ is constrained to be somewhere on the straight-line segment with endpoints $(0,s)$ and $(s,0)$.
* Given $S = X+Y = s$ where $1 < s \leq 2$, the random point $(X,Y)$ is constrained to be somewhere on the straight-line segment with endpoints $(1,s-1)$ and $(s-1,1)$.
In both cases, $(X,Y)$ is uniformly distributed on the line segment.
Note that conditioned on $S$, $X$ and $Y$ are both continuous random
variables uniformly distributed
on $[0,s]$ (or $[s-1,1]$ as the case may be), but they are not
jointly continuous; they do not have a joint density function.
In fact, given $S=X+Y=s$, we have that $Y = s-X$ and the two
random variables
are conditionally linearly related.
So your conditional density $p_{X|S}(x|s) = 1/s$ for $0 \leq x \leq s$ is
correct for $0 \leq s \leq 1$, but not for $1 < s \leq 2$. The conditional density is undefined for $s < 0$ or $s > 2$ because $S$ cannot take on values outside the interval $[0,2]$.
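A quick Monte Carlo illustration (not a proof; the target value $s=0.5$, tolerance, and sample size are arbitrary): conditioning on $S$ falling near $0.5$, the retained samples of $X$ stay in $[0,s]$ and average about $s/2$.

```python
import random

rng = random.Random(0)   # seeded for reproducibility
s, tol = 0.5, 0.005
xs = []
while len(xs) < 5000:
    x, y = rng.random(), rng.random()   # X, Y i.i.d. uniform on [0,1]
    if abs(x + y - s) < tol:            # condition on S = X + Y near s
        xs.append(x)

mean_x = sum(xs) / len(xs)   # should be close to s/2 = 0.25
x_max = max(xs)              # cannot exceed s + tol
```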
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
expected value of $Y= aX + b$
Given a normal random variable X with parameters $\mu$ and $\sigma^2$, find the $E(Y)$ of $Y=aX + b$.
So I started with $E(Y)=E(aX+b)=\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty}(ax+b)\,e^{-(ax+b-\mu)^2/2\sigma^2}\,dx$ but this seems a bit unwieldy. Is this the correct approach, and if so, are there any useful substitutions I can make?
| It's probably easier to use the general linearity of expectation than to try to integrate. So $$E[Y] = aE[X]+b = a \mu + b$$
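A simulation sketch of the same fact (the parameter values below are arbitrary samples):

```python
import random

rng = random.Random(42)   # seeded for reproducibility
mu, sigma, a, b = 2.0, 3.0, 1.5, -4.0
n = 200_000

# sample mean of Y = aX + b for X ~ N(mu, sigma^2)
mean_y = sum(a * rng.gauss(mu, sigma) + b for _ in range(n)) / n
expected = a * mu + b   # linearity of expectation, no integral needed
```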
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Number of 5 letter words over a 4 letter group using each letter at least once Given the set $\{a,b,c,d\}$ how many 5 letter words can be formed such that each letter is used at least once?
I tried solving this using inclusion - exclusion but got a ridiculous result:
$4^5 - \binom{4}{1}\cdot 3^5 + \binom{4}{2}\cdot 2^5 - \binom{4}{3}\cdot 1^5 = 2341$
It seems that the correct answer is:
$\frac{5!}{2!}\cdot 4 = 240$
Specifically, the sum of the number of permutations of aabcd, abbcd, abccd and abcdd.
I'm not sure where my mistake was in the inclusion - exclusion approach. My universal set was all possible 5 letter words over a set of 4 letters, minus the number of ways to exclude one letter times the number of 5 letter words over a set of 3 letters, and so on.
Where's my mistake?
| I am writing this answer in case you also need to know about the second way: (Ignore if you know the details of the second method.)
Since you need to use each letter at least once, you really have a choice only over one of the letters. This can be chosen in $4$ ways, and the letters can be permuted in $\dfrac{5!}{2!}$ ways.
Hence the number of words is $\dfrac{5!}{2!} \cdot 4 =240$. $\blacksquare$
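A brute-force check (not part of the original answer) confirms the count of 240. It also shows that the inclusion-exclusion expression from the question evaluates to 240 as well, so the discrepancy there was an arithmetic slip rather than a wrong method.

```python
from itertools import product
from math import comb

# direct count: 5-letter words over {a,b,c,d} using every letter at least once
brute = sum(1 for w in product("abcd", repeat=5) if set(w) == set("abcd"))

# inclusion-exclusion over how many letters are missing entirely
incl_excl = sum((-1)**k * comb(4, k) * (4 - k)**5 for k in range(5))
```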
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Computing $\sum_{1}^{\infty}\frac{1}{2^{2n+1}n} $ with a power series- What did I do wrong? The requested sum: $\sum_{1}^{\infty}\frac{1}{2^{2n+1}n}=
\frac{1}{2}\sum_{0}^{\infty}\frac{1}{4^nn}$
My helper- this power series: $
\sum_{0}^{\infty}\frac{x^{n}}{4^n}=\frac{1}{1-\frac{4}{x}}
$
Integration due to uniform convergence: $ \int \sum_{0}^{\infty}\frac{x^{n}}{4^n}=\int \frac{1}{1-\frac{4}{x}}$
This is what I get:$\sum_{0}^{\infty}\frac{x^{n+1}}{4^{n+1}(n+1)}=-4\ln(4-x)$
Or: $
\sum_{1}^{\infty}\frac{x^{n}}{4^{n}n}=-4\ln(4-x)
$
Finally what we need is: $\frac{1}{2}\sum_{1}^{\infty}\frac{x^{n}}{4^{n}n}=-2\ln(4-x)$
Now I want to plug-in $x=1$ and get the requested result, but what bothers me is that this is a positive series and I get a negative number in the RHS, obivously something's wrong.
Please tell me what's wrong with the steps described above.
Thanks! :)
| It would be best to proceed as Peter does starting with the series representation of $\ln(1-x)$.
However, to address your argument:
You wish to compute
$$
\sum_{n=1}^\infty {1\over 2^{2n+1} n}=
{1\over2} \sum_{n=1}^\infty { 1\over n \, 4^n }
$$
(this was your first error, the sum starts at $n=1$).
Using the Geometric series:
$$\tag{1}
\sum_{n=1}^\infty (x/4)^n= {x/4\over 1-(x/4)}= {1\over 1-(x/4)}-1
$$
(your sum of the series was your second error).
If $|x|<4$:
$$
\sum_{n=1}^\infty\int_0^x (t/4)^n\,dt= \int_0^x {1\over 1-(t/4)}-1\, dt
$$
(note, you need to take definite integrals).
Whence
$$
\sum_{n=1}^\infty{x^{n+1}\over 4^n (n+1)}= -4\ln|1-(x/4)|-x
$$
Substituting $x=1$ gives:
$$
\sum_{n=1}^\infty{1\over 4^n (n+1)}= -4\ln(3/4)-1\approx0.15073
$$
So
$$\eqalign{
{1\over2} \sum_{n=1}^\infty { 1\over n \, 4^n }
&={1\over2} \sum_{n=0}^\infty { 1\over( n +1)\, 4^{n+1} }\cr
&={1\over8} \sum_{n=0}^\infty { 1\over( n +1)\, 4^{n } }\cr
&={1\over8} [{1} -4\ln(3/4)-1]\cr
&={1\over8} [ -4\ln(3/4) ]\cr
&= { \ln(4/3)\over 2 }\cr
}
$$
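A quick numerical check of the final value (a sketch; not part of the original answer):

```python
import math

# Partial sum of sum_{n>=1} 1/(2^(2n+1) n) versus the closed form ln(4/3)/2.
partial = sum(1 / (2 ** (2 * n + 1) * n) for n in range(1, 60))
closed_form = math.log(4 / 3) / 2
print(partial, closed_form)  # both approximately 0.143841
```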
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/108925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Convergence/Divergence of infinite series $\sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$ $$ \sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$$
Does it converge or diverge?
Can we have a rigorous proof that is not probabilistic?
For reference, this question is supposedly a mix of real analysis and calculus.
| I propose the following heuristic argument that the series converges:
The natural numbers $n$ are uniformly distributed ${\rm mod}\ 2\pi$. Therefore the expected value of the $n$-th term of the series is
$$a_n:={1\over n}\int_{-\pi}^\pi\left({2+\cos\phi\over 3}\right)^n\ d\phi\ .$$
Now a look at the graphs shows that
$${2+\cos\phi\over 3}\leq e^{-\phi^2/9}\qquad(-\pi\leq\phi\leq\pi)\ .$$
Therefore
$$a_n\leq{1\over n}\int_{-\pi}^\pi e^{-n\phi^2/9}\ d\phi<{1\over n} \int_{-\infty}^\infty e^{-n\phi^2/9}\ d\phi={\sqrt{3\pi}\over n^{3/2}}\ ,$$
which leads to convergence.
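The pointwise bound and the slow growth of the partial sums are easy to probe numerically (a heuristic sanity check only, since no finite computation proves convergence):

```python
import math

# Check (2 + cos(phi))/3 <= exp(-phi^2/9) on a fine grid over [-pi, pi].
grid_ok = all((2 + math.cos(p)) / 3 <= math.exp(-p * p / 9) + 1e-15
              for p in (k * math.pi / 2000 for k in range(-2000, 2001)))
print(grid_ok)  # True; equality holds only at phi = 0

# Partial sums of the series itself, with each term written as a ratio
# to avoid overflowing (sin n + 2)^n.
s = sum(((math.sin(n) + 2) / 3) ** n / n for n in range(1, 20001))
print(s)  # a modest finite value, consistent with convergence
```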
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 5,
"answer_id": 2
} |
conditional probability chain rule? I am aware of the general conditional probability rule which says that
$P(ABCD) = P(A|BCD)P(B|CD)P(C|D)P(D)$
But is there any situation where one can write
$P(A|D) = P(A|B)P(B|C)P(C|D)$ where $A,B,C,D$ are random variables.
Thanks
| This works with Markov chains. It's essentially the definition of a Markov chain.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Graph-Minor Theorem for Directed Graphs? Suppose that $\vec{G}$ is a directed graph and that $G$ is the undirected graph obtained from $\vec{G}$ by forgetting the direction on each edge. Define $\vec{H}$ to be a minor of $\vec{G}$ if $H$ is a minor of $G$ as undirected graphs and direction on the edges of $\vec{H}$ are the same as the corresponding edges in $\vec{G}$.
Does the Robertson-Seymour Theorem hold for directed graphs (where the above definition of minor is used and our graphs are allowed to have loops and multiple edges)?
| No. Digraphs are not well-quasi-ordered under minor containment. However a subclass of digraphs, namely semi-complete digraphs (a class that includes all tournaments, i.e. orientations of complete graphs), is WQO under minor containment. Source: recent (2011-2012) paper by Seymour.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Solid of Revolution Problem (Washer and Shell Method) Consider the region bounded by $x + y = 0$ and $x = y^2 + 3y$.
a) With the washer method, set up an integral of the solid that is rotated about the line x = 4
b) with the shell method, set up an integral expression for the solid rotated about the line y = 1
Solution Provided
a)
$$V = \pi \int_{-4}^{0} (|y^2 +3y| +4)^2 - (4+y)^2 dy$$
www.wolframalpha.com/input/?i=Pi*Integrate[(4+%2B+|y^2+%2B3y|)^2+-++(4%2By)^2%2C{y%2C-4%2C0}]
The solution I thought would be
$$V = \pi \int_{-4}^{0} (4 - (y^2 +3y))^2 - (4+y)^2 dy$$
http://www.wolframalpha.com/input/?i=Pi*Integrate[%284+-+%28y^2+%2B3y%29%29^2+-++%284%2By%29^2%2C{y%2C-4%2C0}]
Doesn't the absolute value sign in the integral actually reflect the region to the fourth quadrant and hence make the $+4$ meaningless? More importantly, why is my integral wrong?
Solution Provided
b) $$V = 2 \pi \int_{-4}^{0} (-y - (y^2 + 3y))(y+1) dy$$
I thought it should be $$V = 2 \pi \int_{-4}^{0} (-y - (y^2 + 3y))(1-y) dy$$
The region is below the x-axis, yet when they have y + 1, wouldn't that give me negative radius?
| You are correct.
Consider the crude drawing below:
Using the washer method:
A typical washer, generated by revolving the line segment $\color{orange}{\ell_y}$ about the line $\color{gray}{x=4}$, is shown in gray above.
The outer radius, $\color{darkgreen}{r_o}$ of this washer is
$$\eqalign{
\color{green}{r_o}&= 4-(\color{maroon}{y^2+3y} )
}
$$
and the inner radius, $\color{darkblue}{r_i}$ is
$$\eqalign{
\color{darkblue}{r_i}&= 4-(\color{pink}{-y})
}
$$
It is important to realize that the above expressions work for all washer elements.
The area of the washer element at $y$ is
$$\eqalign{
\pi (r_o^2-r_i^2 ) =\pi\bigl[ \bigl(4-(y^2+3y)\bigr)^2- (4+y )^2 \bigr]
}
$$
Since the washers "start" at $y=-4$ and "end" at $y=0$, the volume of the solid of revolution is
$$
\int_{-4}^0 \pi\bigl[ \bigl(4-(y^2+3y)\bigr)^2- (4+y )^2 \bigr]\, dy,
$$
as you have.
Your solution to part b) is correct as well.
With the shell method, you are revolving the horizontal line segment $\color{orange}{\ell_y}$ about the line $\color{gray}{y=1}$. The length of $\color{orange}{\ell_y}$ is
$\color{pink}{-y} -(\color{maroon}{y^2+3y})$, and the distance from $\color{orange}{\ell_y}$ to the line $\color{gray}{y=1}$ is $1-y$.
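As a numerical cross-check (a sketch; not part of the original answer), one can evaluate both part (a) integrands with a midpoint rule and see that the posted solution and the asker's integral genuinely disagree, so at most one of them can be right:

```python
import math

# Washer volumes for part (a): the posted solution's outer radius is
# |y^2 + 3y| + 4, the asker's is 4 - (y^2 + 3y); they differ on [-4, -3].
def washer_volume(outer, n=20000):
    dy = 4.0 / n
    total = 0.0
    for k in range(n):
        y = -4.0 + (k + 0.5) * dy  # midpoint rule on [-4, 0]
        total += (outer(y) ** 2 - (4 + y) ** 2) * dy
    return math.pi * total

v_posted = washer_volume(lambda y: abs(y * y + 3 * y) + 4)
v_asker = washer_volume(lambda y: 4 - (y * y + 3 * y))
print(v_posted - v_asker)  # roughly 92, so the two integrals disagree
```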
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Positive semi-definite matrix Suppose a square symmetric matrix $V$ is given
$V=\left(\begin{array}{ccccc}
\sum w_{1s} & & & & \\
& \ddots & & -w_{ij} \\
& & \ddots & & \\
& -w_{ij} & & \ddots & \\
& & & & \sum w_{ns}
\end{array}\right) \in\mathbb{R}^{n\times n},$
with values $w_{ij}> 0$, hence with only positive diagonal entries.
Since the above matrix is diagonally dominant, it is positive semi-definite. However, I wonder if it can be proved that
$a\cdot diag(V)-V~~~~~a\in[1, 2]$
is also positive semi-definite. ($diag(V)$ denotes a diagonal matrix whose entries are those of $V$, hence all positive) In case of $a=2$, the resulting
$2\cdot diag(V)-V$
is also diagonally dominant (positive semi-definite), but is it possible to prove for $a\in[1,2]$?
.........................................
Note that the above proof would facilitate my actual problem; is it possible to prove
$tr[(X-Y)^T[a\cdot diag(V)-V](X-Y)]\geq 0$,
where $tr(\cdot)$ denotes matrix trace, for $X, Y\in\mathbb{R}^{n\times 2}$ and $a\in[1,2]$ ?
Also note that
$tr(Y^TVY)\geq tr(X^TVX)$ and $tr(Y^Tdiag(V)Y)\geq tr(X^Tdiag(V)X)$.
(if that facilitates the quest., assume $a=1$)
.....................................................
Since positive semi-definiteness cannot generally be guaranteed for $a<2$, the problem reduces to: for which restrictions on $a$ does the positive semi-definiteness of $a\cdot \operatorname{diag}(V)-V$ still hold?
Note the comment from DavideGiraudo, and his claim for the case $w_{ij}=1$ for all $i,j$. Could something similar be derived for general $w_{ij}\geq 0$?
| In the case $w_{ij}=1$ we have $V=\pmatrix{n&-1&-1&\cdots &-1 \\\
-1&n&-1&\ldots &-1 \\\
\vdots&\vdots&\ddots& &-1\\\
-1&-1&-1&\ldots&n}$ and $$M_a:=a\operatorname{diag}(V)-V=\pmatrix{n(a-1)&1&1&\cdots &1 \\\
1&n(a-1)&1&\ldots &1 \\\
\vdots&\vdots&\ddots& &1\\\
1&1&1&\ldots&n(a-1)}.$$
We can compute the determinant $\det(M_a-XI_n)$ by adding all the other rows to the first row, which makes every entry of the first row equal to $n(a-1)+(n-1)-X$. Factoring this out, we get
$$\det(M_a-XI_n)=\bigl(n(a-1)+(n-1)-X\bigr)\bigl(n(a-1)-1-X\bigr)^{n-1},$$
and if we want $M_a$ positive semi-definite we should have $n(a-1)-1\geq 0$ so $a-1\geq\frac 1n$.
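The eigenvalues of $M_a$ are $n(a-1)-1$ (with multiplicity $n-1$) and $n(a-1)+(n-1)$, which is easy to spot-check numerically (a sketch using $n=4$ and $a=3/2$, so $n(a-1)=2$; the determinant is computed by the Leibniz formula to keep the arithmetic exact):

```python
from itertools import permutations

def det(m):
    # Leibniz-formula determinant; exact for integer matrices.
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = sign
        for i in range(n):
            prod *= m[i][perm[i]]
        total += prod
    return total

n, d = 4, 2  # d = n*(a-1) with a = 3/2
M = [[d if i == j else 1 for j in range(n)] for i in range(n)]
eig_dets = {}
for X in (d - 1, d + n - 1):  # the two claimed eigenvalues
    shifted = [[M[i][j] - (X if i == j else 0) for j in range(n)] for i in range(n)]
    eig_dets[X] = det(shifted)
print(det(M), eig_dets)  # 5 {1: 0, 5: 0}
```

The determinant vanishes exactly at both claimed eigenvalues, and $\det M = 5$ matches $(d+n-1)(d-1)^{n-1}$.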
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
The set of limit points of an unbounded set of ordinals is closed unbounded. Let $\kappa$ be a regular, uncountable cardinal. Let $A$ be an unbounded set, i.e. $\operatorname{sup}A=\kappa$. Let $C$ denote the set of limit points $< \kappa$ of $A$, i.e. the non-zero limit ordinals $\alpha < \kappa$ such that $\operatorname{sup}(A \cap \alpha) = \alpha$. How can I show that $C$ is unbounded? I cannot even show that $C$ has any points let alone that it's unbounded.
(Jech page 92)
Thanks for any help.
| Since $\kappa$ is regular and $A$ is unbounded in $\kappa$, the order type of $A$ is $\kappa$.
Now suppose toward a contradiction that all limit points of $A$ lie below some $\beta<\kappa$. Discarding $A\cap\beta$ (a bounded set) changes neither hypothesis, so without loss of generality $A\cap\beta=\varnothing$, and then $A$ has no limit points below $\kappa$ at all. Define a regressive function on the limit ordinals $\delta$ above $\min(A)$:
$$\delta\mapsto\max\{A\cap\delta\}$$
This is indeed well defined: $\sup(A\cap\delta)<\delta$ because $\delta$ is not a limit point of $A$, and the supremum is attained because otherwise it would itself be a limit point of $A$. The limit ordinals above $\min(A)$ form a club, therefore a stationary set, so by Fodor's lemma this function is constant, say with value $\gamma$, on a stationary subset.
In turn this means that $A\cap\delta\subseteq\gamma+1$ for unboundedly many $\delta$; in particular $A\subseteq\gamma+1$ is bounded below $\kappa$, in contradiction!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proving a complex function is constant Let $u: \mathbb{R}^2 \to \mathbb{R}$ be a differentiable function. Prove that if the complex function
$f(x + iy) = u(x,y) + iu(x,y)$
is analytic in $\mathbb{C}$ then it is a constant function.
Answer:
If $f$ is a analytic it satisfies the Cauchy Riemann equations.
So $u_x = u_y$ and $u_x=-u_y$
This can only happen when $u_x$ and $u_y$ are both equal to $0$.
As the partial derivatives of $u$ vanish on the (connected) plane, $u$, and hence $f$, must be a constant function.
Is that correct?
| Yes.
Another way to think of it: $f(x+iy)=(1+i)u(x,y)$ has range contained in the line $\{t(1+i):t\in\mathbb R\}$, so $f$ is constant by the open mapping theorem, or by Liouville's theorem applied to $\frac{1}{f-1}$.
Or $g(x+iy)=(1+i)^3f(x+iy)=-4u(x,y)$ (or simply $\frac{1}{1+i}f = u$) is real valued, which makes applying the Cauchy-Riemann equations to $g$ a little more immediately show that $g$ is constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Can a finitely generated group have infinitely many torsion elements? I'm reading the proof of a theorem that says
If $G$ is a finitely generated FC-group, then its set of torsion elements $T(G)$ is a finite normal subgroup of $G$.
I understand everything the proof says, but I don't understand why it doesn't explain why $T(G)$ is finite. Is it obvious that a finitely generated group has only finitely many torsion elements?
EDIT: Thanks to Jim Belk's answer, I know that being finitely generated isn't enough to have finitely many torsion elements for groups. Why is that true for FC-groups then? I know and can prove that the commutator subgroup of a finitely generated FC-group is finite, but in such groups $G'\subseteq T(G)\subseteq G$, so this doesn't immediately imply the finiteness of $T(G)$...
| You want to look up Dietzmann's lemma. Basically, $T(G)$ is finitely generated, and each generator has finitely many conjugates. So $T(G)$ is generated by a finite set of generators, closed under conjugation, and each generator has finite order. Dietzmann's lemma implies $T(G)$ is finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ A logarithm of base $b$ for $x$ is defined as the number $u$ such that $b^u=x$. Thus, the logarithm with base $e$ gives us a $u$ such that $e^u=x$.
In the presentations that I have come across, the author starts with the fundamental property $f(xy) = f(x)+f(y)$ and goes on to construct the natural logarithm as $\ln(x) = \int_1^x \frac{1}{t} dt$.
It would be surprising if these two definitions ended up the same, as is the case. How do we know that they are? The best that I can think of is that they share the property $f(xy) = f(x)+f(y)$, and coincide at certain obvious values (e.g. $x=1$). This seems weak. Is there a proof?
| Put $l(x) = \int_1^x dt/t.$ Then $l'(x) = 1/x$ if $x > 0$ by the fundamental theorem of calculus. Since $l' > 0$ on $(0,\infty)$, $l$ is 1-1 and therefore has an inverse. Denote its inverse by $m$. We have $l(m(x))= x;$ differentiate to see that
$$1 = l'(m(x))m'(x) = m'(x)/m(x).$$
Multiply to get $m'(x) = m(x).$ It is not at all hard to see that $m(0) = 1.$ Therefore, $m(x) = e^x$.
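The identity $l=\ln$ can also be sanity-checked numerically (a sketch: midpoint-rule quadrature of $\int_1^x dt/t$ against the library logarithm):

```python
import math

def l(x, n=100000):
    # Midpoint-rule approximation of the integral of 1/t from 1 to x
    # (dt is negative when x < 1, which handles that case too).
    dt = (x - 1) / n
    return sum(dt / (1 + (k + 0.5) * dt) for k in range(n))

for x in (0.5, 2.0, 10.0):
    print(x, l(x), math.log(x))  # the last two columns agree closely
```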
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 3
} |
Totally bounded, complete $\implies$ compact
Show that a totally bounded complete metric space $X$ is compact.
I can use the fact that sequentially compact $\Leftrightarrow$ compact.
Attempt: Complete $\implies$ every Cauchy sequence converges. Totally bounded $\implies$ $\forall\epsilon>0$, $X$ can be covered by a finite number of balls of radius $\epsilon$. I'm trying to show that all sequences in $X$ have a subsequence that converges to an element in $X$. I don't see how to go from convergent Cauchy sequences and totally bounded to subsequence convergent $in$ $X$.
| You need to show that if $X$ is totally bounded, every sequence in $X$ has a Cauchy subsequence. Let $\sigma=\langle x_n:n\in\mathbb{N}\rangle$ be a sequence in $X$. For each $n\in\mathbb{N}$ let $D_n$ be a finite subset of $X$ such that the open balls of radius $2^{-n}$ centred at the points of $D_n$ cover $X$. $D_0$ is finite, so there is a point $y_0\in D_0$ such that infinitely many terms of $\sigma$ are in $B(y_0,1)$. Let $$A_0=\{n\in\mathbb{N}:x_n\in B(y_0,1)\}\;,$$ so that $A_0$ is infinite. Now $D_1$ is finite, so there is a $y_1\in D_1$ such that $$A_1=\{n\in A_0:x_n\in B(y_1,2^{-1})\}$$ is infinite. Repeat: if $A_k$ is an infinite subset of $\mathbb{N}$, there must be a $y_{k+1}\in D_{k+1}$ such that $$A_{k+1}=\{n\in A_k:x_n\in B(y_{k+1},2^{-(k+1)})\}$$ is infinite, and the process can continue.
Now choose a strictly increasing sequence $\langle n_k:k\in\mathbb{N}\rangle$ of natural numbers in such a way that $n_k\in A_k$ for every $k\in\mathbb{N}$. Can you show that $\langle x_{n_k}:k\in\mathbb{N}\rangle$ is Cauchy?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 1,
"answer_id": 0
} |
Determinant of matrix $(x_j^{n-i}- x_j^{2n-i})_{i,j=1}^{n}$ Good evening all, I am determined to determine this determinant:
$$D = \det{\left[x_j^{n-i} - x_j^{2n-i}\right]_{i,j=1}^{n}}$$
Looking at the smaller cases, leads me to believe that
$$D = \prod_{1 \leq i < j \leq n}\left(x_i-x_j\right)\prod_{i=1}^n \left(1-{x_i}^n\right)$$
although I am having trouble showing this. I know that, since the determinant is an alternating function in the variables $x_1,\dots x_n$ it follows that
$$ \frac{D}{\displaystyle\prod_{1 \leq i < j \leq n}\left(x_i-x_j\right)} $$
is a symmetric polynomial of degree $n^2$ (the degree of D minus the degree of the Vandermonde part).
How can I show that this symmetric polynomial is exactly $\prod_{i=1}^n \left(1-{x_i}^n\right)$ ?
Your help is, as always, much appreciated.
| We would like to know the determinant
$\det M$ of the $n\times n$ matrix
$$M_{ij} = x_j^{n-i} - x_j^{2n-i}.$$
Note that $$M_{ij} = (1- x_j^n) x_j^{n-i} = (1-x_j^n) V_{ij}.$$ It is easy to see (column operation) that
$$ \det M = \det V \,\prod_{j=1}^n (1-x_j^n).$$
Now $V_{ij} = x^{n-i}_j$ is a Vandermonde matrix whose determinant is known (and can be proven by induction) to be
$$\det V= \prod_{1 \leq i < j \leq n}\left(x_i-x_j\right).$$
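The identity can be verified exactly at sample points (a sketch using rational arithmetic and a Leibniz-formula determinant, here with $n=3$):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz-formula determinant; exact over the rationals.
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(sign)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += prod
    return total

x = [Fraction(2), Fraction(3), Fraction(5)]  # arbitrary sample points
n = len(x)
# M_{ij} = x_j^{n-i} - x_j^{2n-i} with 1-based i, j as in the text.
M = [[x[j] ** (n - 1 - i) - x[j] ** (2 * n - 1 - i) for j in range(n)]
     for i in range(n)]

lhs = det(M)
rhs = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        rhs *= x[i] - x[j]
for xi in x:
    rhs *= 1 - xi ** n
print(lhs == rhs)  # True
```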
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Examples of non-constructive results I'm giving a talk on constructive mathematics, and I'd like some snappy examples of weird things that happen in non-constructive math.
For example, it would be great if there were some theorem claiming $\neg \forall x. \neg P(x)$, where no $x$ satisfying $P(x)$ were known or knowable.
But other examples are welcome as well.
| The existence of a Hamel Basis, that is, a basis for $\mathbb R$ as a vector space over $\mathbb Q$. No one knows a Hamel basis; it's probably unknowable in some sense.
The existence of a basis for every vector space is equivalent to the axiom of choice, which is the non-constructive piece of math par excellence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Line segments intersecting Jordan curve I have thought about this problem for a week without success.
Is there a set $A\subset \mathbb{R}^2$ such that
*
*The boundary of $A$, $\partial A$, is a Jordan curve and
*For any $B\in \operatorname{int} A\ne\emptyset $, $C\in \operatorname{ext} A\ne\emptyset$ , the line segment $BC$ intersects $\partial A$ infinitely many times?
Any ideas?
| This is not an answer, only some ideas.
Let $A$ be a Jordan domain, and let $f\colon \mathbb D\to A$ be a conformal map of the unit disk onto $A$. By Caratheodory's theorem $f$ extends to a homeomorphism of closures. One says that $f$ is twisting at a point $\zeta\in\partial\mathbb D$ if for every curve $\Gamma\subset \mathbb D$ ending at $\zeta$ the following holds:
$$\liminf_{z\to\zeta}\ \arg(f(z)-f(\zeta))=-\infty, \quad
\limsup_{z\to\zeta}\ \arg(f(z)-f(\zeta))=+\infty,\quad (z\in\Gamma) $$
(This definition is from the book Boundary behaviour of conformal maps by Pommerenke. Some sources have different wording. The definition is unchanged if one replaces $\Gamma$ by a radial segment.)
If $f$ is twisting at $\zeta$, there is no line segment that crosses $\partial A$ only at $f(\zeta)$. Indeed, the preimage of such a line segment would be a curve along which $\arg(f(z)-f(\zeta))$ is constant.
Pommerenke's book presents several results on twisting points and gives pointers to literature. The message is that $f$ can have a lot of twisting points. Of course, it is impossible for $f$ to be twisting at every point of $\partial\mathbb D$. (Consider any disk contained in $A$ whose boundary touches $\partial A$.) But what we want is for every point of $\partial A$ to be twisting either on the inside or on the outside (i.e., for the conformal map onto interior or onto exterior).
My profound lack of knowledge of complex dynamics suggests that the Julia set of the quadratic polynomial $p(z)=z^2+\lambda z$ could have this property when $|\lambda|<1$ and $\mathrm{Im}\,\lambda\ne 0$. Indeed, in this case the polynomial has two Fatou components, and the boundary between them is a Jordan curve, indeed a quasicircle. The curve appears to be twisting as expected (see this applet, which takes polynomials in the form $z^2+c$. Here $c=\lambda/2-\lambda^2/4$ with $|\lambda|<1$, so we are in the main cardioid of the Mandelbrot set). Maybe someone who understands complex dynamics can tell if this Julia set is indeed an example.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Confused about characteristic equation of a linear ODE Reading the Wikipedia page on linear ODEs, I've stumbled upon something I don't understand. In the section Homogeneous equations with constant coefficients, it says the following:
If the coefficients $A_i$ of the differential equation are real, then
real-valued solutions are generally preferable. Since non-real roots $z$
then come in conjugate pairs, so do their corresponding basis
functions $x^ke^{zx}$, and the desired result is obtained by replacing each
pair with their real-valued linear combinations $Re(y)$ and $Im(y$), where
$y$ is one of the pair.
I mean, why can you do this, that is, replace a pair of conjugate basis functions with their real and imaginary parts? The article gives as an example the equation $y'' -4y' +5y = 0$. The roots of the characteristic equation are $2+i$ and $2-i$. Therefore, any solution of the equation will be of the form $c_1 e^{(2+i)x} + c_2 e^{(2-i)x}$ with $c_1, c_2 \in \mathbb{C}$. So far, so good. Then, one takes the real and imaginary parts of $e^{(2+i)x}$, so that any solution of the differential equation will be a linear combination of them, that is, $y = a_1e^{2x}\cos x + a_2e^{2x}\sin x$, with $a_1, a_2 \in \mathbb{R}$.
I don't understand the last step. You have a pair of complex functions, wich are conjugates of each other (because the characteristic equation has real coefficients), that form a basis of the space of solutions of the equation. Then, you claim that you can instead take the real and imaginary parts of one of them (it doesn't matter which one, since they are conjugates), and they will form a basis. Why is this true?
| Hint: it's simply a change of basis $[f,\bar f]\:\mapsto\: [\frac{1}2 (f-\bar f),\: \frac{1}2 (f+\bar f)]$
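To complement the hint, one can check directly that the real and imaginary parts of $e^{(2+i)x}$ solve the example equation $y''-4y'+5y=0$ (a sketch with the derivatives written out by hand):

```python
import math

# y1 = Re(e^{(2+i)x}) = e^{2x} cos x, y2 = Im(e^{(2+i)x}) = e^{2x} sin x.
def residuals(x):
    e = math.exp(2 * x)
    c, s = math.cos(x), math.sin(x)
    y1, dy1, ddy1 = e * c, e * (2 * c - s), e * (3 * c - 4 * s)
    y2, dy2, ddy2 = e * s, e * (2 * s + c), e * (3 * s + 4 * c)
    return ddy1 - 4 * dy1 + 5 * y1, ddy2 - 4 * dy2 + 5 * y2

for x in (-1.0, 0.0, 0.7, 2.0):
    r1, r2 = residuals(x)
    print(abs(r1) < 1e-9, abs(r2) < 1e-9)  # True True on every line
```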
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Showing $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$ Given that $n$ is a positive integer, show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$.
I'm thinking that I should be using the property of gcd that says if $a$, $b$, and $c$ are integers then $\gcd(a,b) = \gcd(a+cb,b)$. So I can do things like deduce that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$ and then using Bezout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$, and I can expand this to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$. However, after some time of chasing this path using various substitutions and factorings I've gotten nowhere.
Can anybody provide a hint as to how I should be looking at this problem?
| As you note, $\gcd(n^3+1,n^2+2) = \gcd(1-2n,n^2+2)$.
Now, continuing in that manner,
$$\begin{align*}
\gcd(1-2n, n^2+2) &= \gcd(2n-1,n^2+2)\\
&= \gcd(2n-1, n^2+2+2n-1)\\
&= \gcd(2n-1,n^2+2n+1)\\
&= \gcd(2n-1,(n+1)^2).
\end{align*}$$
Consider now $\gcd(2n-1,n+1)$. We have:
$$\begin{align*}
\gcd(2n-1,n+1) &= \gcd(n-2,n+1) \\
&= \gcd(n-2,n+1-(n-2))\\
&=\gcd(n-2,3)\\
&= 1\text{ or }3.
\end{align*}$$
Therefore, the gcd of $2n-1$ and $(n+1)^2$ is either $1$, $3$, or $9$. Hence the same is true of the original gcd.
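An exhaustive check over the first few thousand values of $n$ (a quick sanity check, not a proof):

```python
from math import gcd

# All gcd values that actually occur for n = 1, ..., 4999.
values = {gcd(n ** 3 + 1, n ** 2 + 2) for n in range(1, 5000)}
print(sorted(values))  # [1, 3, 9] -- each case occurs, and nothing else
```

For instance $n=1$ gives $1$, $n=2$ gives $\gcd(9,6)=3$, and $n=5$ gives $\gcd(126,27)=9$.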
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 3
} |
Syntax to calculate sum with maxima or wxmaxima using different variables So I'm not sure if I'm right here but I can try:
So we are having a statistics class at uni. The question is:
$x_{1}= 5$, $x_{2}= 3$, $x_{3}= 7$
$y_{1}= 6$, $y_{2}= 4$, $y_{3}= 12$
Calculate following:
$\sum_{i=2}^{3}(x_{i}+y_{i}) =$
I know it's actually quite trivial to calculate this by hand but I would really like to know how I can calculate such stuff using wxMaxima.
Can anybody help me?
| (Give a man a fish)
x:[5,3,7];
y:[6,4,12];
sum(x[i]+y[i], i, 2, 3);
(Teach a man to fish ;)
At the wxmaxima command line, type describe(sum) or example(sum)
(The parenthetical remarks refer to an old saying, my favorite version of which is, "Build a man a fire and he'll be warm for a night. Set a man on fire and he'll be warm for the rest of his life.")
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/109958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
An "atom" in Boolean algebra Could someone explain what an atom in Boolean algebra means? I am acquainted with ring theory and group theory but not Boolean algebra. As far as I can tell from browsing around, it is something like a generator, but then again not exactly... Your help will be very much appreciated. Thanks in advance.
Added: It would be particularly helpful if it could be placed in the context of rings or groups. I think there are such entities as Boolean rings...
| If $B$ is a Boolean algebra then a nonzero $x \in B$ is an atom if, for all $y \in B$, either $x \wedge y = x$ or $x \wedge y = 0$. Intuitively, it's a sort of minimal element: the way to think is "if $x$ is an atom and $0 \le y \le x$ then either $y=0$ or $y=x$".
In a Boolean ring $R$, we can think of $\wedge$ as multiplication, so we have that if $x$ is an atom then $xy=x$ or $xy=0$ for all $y \in R$.
For example, let $X$ be a set, let $R = \mathcal{P}X$ be its power set and define set multiplication by intersection and addition by symmetric difference. Then the zero element is $\varnothing$, and the atoms are the nonempty sets $A \subseteq X$ such that whenever $Y \subseteq X$ either $A \cap Y = A$ or $A \cap Y = \varnothing$. That is, the atoms of $\mathcal{P}X$ are precisely the singletons $\{ x \}$.
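A tiny enumeration illustrates this on a three-element set (a sketch; it uses the usual convention that atoms are nonzero):

```python
from itertools import chain, combinations

# Atoms of the power-set Boolean algebra on X = {0, 1, 2}: nonempty A
# with (A & Y) in {A, empty} for every subset Y.
X = {0, 1, 2}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]
atoms = [a for a in subsets
         if a and all((a & y) in (a, frozenset()) for y in subsets)]
print(sorted(map(sorted, atoms)))  # [[0], [1], [2]] -- exactly the singletons
```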
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 0
} |
Inverse function that takes connected set to non-connected set I've been struggling with providing examples of the following:
1) A continuous function $f$ and a connected set $E$ such that $f^{-1}(E)$ is not connected
2) A continuous function $g$ and a compact set $K$ such that $g^{-1}(K)$ is not compact
| For an example of an invertible function for the second part, map $[0,2\pi)$ to the unit circle in $\Bbb R^2$ via $t\rightarrow(\cos t, \sin t)$. $f$ is continuous, the unit circle $S$ in $\Bbb R^2$ is compact, but $f^{-1}(S)$ is not.
For the other example, take the space to be $[0,1)\cup[2,3]$ and the map to be
$$
f(x)=\cases{x,& $x\in[0,1)$\cr x-1,&$x\in [2,3]$ }
$$
Then $f$ is continuous, invertible, $[0,2]$ is connected, but $f^{-1}([0,2])$ is not.
(This actually furnishes an example for both parts.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Minimal polynomial of intermediate extensions under Galois extensions.
Let $K$ be a Galois extension of $F$, and let $a\in K$. Let $G=\mathrm{Gal}(K/F)$, $n=[K:F]$, $r=[F(a):F]$, and $H=\mathrm{Gal}(K/F(a))$. Let $z_1, z_2,\ldots,z_r$ be left coset representatives of $H$ in $G$. Show that
$$\min(F,a)=\prod_{i=1}^r\left( x - z_i(a)\right).$$
This product has degree $r$, and so does $\min(F,a)$, since $[F(a):F]=r$; that follows from the fundamental theorem. And if one of the $z_i$ is the identity, then $a$ satisfies the polynomial.
My question is:
What is the guarantee that $z_i(a)$ belongs to $F$ for all $i$?
What if I choose a representative which is not the identity of $\mathrm{Gal}(K/F)$?
| *
*There is no guarantee that $z_i(a)\in F$ for all $i$; in fact, it will never be the case unless $a\in F$. But you don't need each to be in $F$, you need the coefficients of the product to be in $F$.
In order to show that the coefficients of the product are in $F$, it suffices to show that for every $\sigma\in G$ we have
$$\sigma\left(\prod_{i=1}^r(x-z_i(a))\right) = \prod_{i=1}^r\left(x - \sigma(z_i(a))\right) = \prod_{i=1}^r(x-z_i(a)),$$
because the coefficients lie in $F$ if and only if the coefficients are invariant under the action of the Galois group.
*If, say $z_iH = \mathrm{id}_GH$, then that means that $z_i\in H$. But $H$ is precisely the subgroup of elements that fix $F(a)$. So what is $z_i(a)$ then?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/110122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |