Q | A | meta |
|---|---|---|
wave separation Say there is a wave of sines and cosines. (<- one can think of Fourier theory.)
A) There is a wave that has the same frequency all the time. However, the amplitude (i.e. the shape) of each period differs. Is it possible to separate this wave into a combination of waves, each of which has the same amplitude in every period?
B) Say there is a sum of two waves that have the same frequency but different amplitudes. The amplitude of each wave is constant over time. Can the signal be decomposed into the two signals that were combined?
Thanks.
| Your concept of "wave" is a little vague to me. But:
A) A "wave with constant frequency but varying amplitude" is (informally) what one have in AM : amplitude modulation. It can be shown that if the amplitud varies "slowly" (as compared with the main period), the signal can be expressed as a (in general, complicated) combination of sinusoids of frequencies near the main frequency.
In the wikipedia article it's shown the simplest case,
$ y(t)=[1 + M \cdot \cos(\omega_m t + \phi)]\cdot \sin(\omega_c t)$ Here we have a sinuosid of "central frequency" $\omega_c$ and its amplitude varies by a sinusoid of frequency $\omega_m$ ("modulation frequency"). It's easy to show (oops) that this signal can be expressed as the sum of three sinusoids of frequencies $\omega_c$, $\omega_c+\omega_m$ and $\omega_c -\omega_m$.
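For completeness, here is a sketch of that computation, using the product-to-sum identity $\sin B\cos A=\tfrac12[\sin(B+A)+\sin(B-A)]$:
$$y(t)=\sin(\omega_c t)+\frac M2\sin\big((\omega_c+\omega_m)t+\phi\big)+\frac M2\sin\big((\omega_c-\omega_m)t-\phi\big),$$
i.e. the carrier plus two "sidebands" at frequencies $\omega_c\pm\omega_m$.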
B) The sum of two sinusoids of same frequency and distinct amplitude (and perhaps phase) results in another sinusoid of the same frequency - this is easy, and it's fundamental property of sinusoids.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/138853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Advection diffusion equation The advection diffusion equation is the partial differential equation $$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x}$$ with the boundary conditions $$\lim_{x \to \pm \infty} C(x,t)=0$$ and initial condition $$C(x,0)=f(x).$$ How can I transform the advection diffusion equation into a linear diffusion equation by introducing new variables $x^\ast=x-vt$ and $t^\ast=t$?
Thanks for any answer.
| However, if the boundary conditions are for finite $x$, then for the transformed
equation we have boundary conditions depending on time! For example, if $C$ denotes
a pollutant concentration, we observe its value at fixed points in space, and those points are moving in the $x^\ast$ coordinate.
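For reference, a sketch of the chain-rule computation the question asks about: writing $C(x,t)=\tilde C(x^\ast,t^\ast)$ with $x^\ast=x-vt$, $t^\ast=t$,
$$\frac{\partial C}{\partial t}=\frac{\partial \tilde C}{\partial t^\ast}-v\frac{\partial \tilde C}{\partial x^\ast},\qquad \frac{\partial C}{\partial x}=\frac{\partial \tilde C}{\partial x^\ast},\qquad \frac{\partial^2 C}{\partial x^2}=\frac{\partial^2 \tilde C}{\partial {x^\ast}^2},$$
so the advection terms cancel and the equation becomes the linear diffusion (heat) equation $\frac{\partial \tilde C}{\partial t^\ast}=D\frac{\partial^2 \tilde C}{\partial {x^\ast}^2}$, with initial condition $\tilde C(x^\ast,0)=f(x^\ast)$.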
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/138919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Convergence of a Sequence by its Subsequences Question Given the sequence $\{a_n\}_{n=0}^\infty$ and the subsequences: $a_{3n}, a_{2n+1}, a_{2n}$ which converge. Prove that $a_n$ is a convergent sequence.
Thank you very much.
| Hint: first show that the three subsequences have the same limit. (The subsequence $(a_{3n})$ has a further subsequence that is a subsequence of $(a_{2n})$, for instance.)
Then note that given $n>1$, $a_n$ is a term of one of the subsequences $(a_{2n})$, $(a_{2n+1})$. (So, given $\epsilon>0$, choose $N$ so that for any $n\ge N$, each of $a_{2n}$ and $a_{2n+1}$ is within $\epsilon$ of the common limit. Then... .)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/138987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Commutativity between diagonal and unitary matrices? Quick questions:
*if you have a diagonal matrix $A$ and a unitary matrix $B$. Do $A$ and $B$ commute?
*if $A$ and $B$ are positive definite matrices. If $a$ is an eigenvalue of $A$ and $b$ is an eigenvalue of $B$, does it follow that $a+b$ is an eigenvalue of $A+B$?
| Perhaps you know of so-called normal matrices, those complex matrices $M$ that commute with their transpose conjugate $M^{\dagger}$?
If the answer to your first question were yes, there would be only "trivial", i.e. diagonal, normal matrices. Indeed, any normal matrix is unitarily equivalent to a diagonal one (all normal endomorphisms of a hermitian vector space are diagonalisable in an orthonormal basis).
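If a quick numerical counterexample helps for both questions, here is a sketch (the matrices below are chosen purely for illustration):

    import numpy as np

    # Question 1: a diagonal matrix and a unitary (permutation) matrix need not commute.
    A = np.diag([1.0, 2.0])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])          # a permutation matrix, hence unitary
    print(np.allclose(A @ B, B @ A))    # False

    # Question 2: sums of eigenvalues of A and B need not be eigenvalues of A + B.
    A = np.diag([1.0, 2.0])             # positive definite, eigenvalues 1 and 2
    B = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # positive definite, eigenvalues 1 and 3
    print(np.linalg.eigvalsh(A + B))    # approx. 2.38 and 4.62; none of 1+1, 1+3, 2+1, 2+3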
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Independent, Normally Distributed R.V. Working on this:
A shot is fired at a circular target. The vertical and the horizontal
coordinates of the point of impact (with the origin sitting at the
target’s center) are independent and normally distributed with $\nu(0, 1)$. Show that the distance of the point of impact from the center is
distributed with PDF $$p(r) = re^{-r^2/2}, r \geq 0.$$ Find the median of this
distribution.
So I'm guessing this would be graphed on an X and Y axis. I can intuit that I need to take the integral of the PDF from the lower bound to $m$ (or from $m$ to the upper bound), but I don't know what the normal distribution with $\nu(0, 1)$ means.
Also, how would I show that the point of impact has the desired PDF?
Thank you.
| Here's an intuitive approach: On the circle of radius $r$, the value of the joint density is $ce^{-(x^2+y^2)/2} = ce^{-r^2/2}$. The size of that region where the density has that value is the circumference of the circle, which is $(\text{constant}\cdot r)$ (not the $2$-dimensional size, of course; that is $0$). So the probability in that region is $\text{constant}\cdot re^{-r^2/2}$.
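For the median asked about in the question, one can integrate the stated PDF:
$$\mathrm P(R\le r)=\int_0^r se^{-s^2/2}\,ds=1-e^{-r^2/2},$$
so the median $m$ satisfies $1-e^{-m^2/2}=\tfrac12$, i.e. $m=\sqrt{2\ln 2}$.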
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What are the vertices of a regular tetrahedron embeded in a sphere of radius R Imagine you had a sphere of radius R centered at the origin. What are the coordinates of the vertices of the regular tetrahedron which is circumscribed by the sphere? One of the vertices of the tetrahedron is (0,0,R) and one of the vertices lies in the z,x plane.
| A hint rather than a proper answer: alternating vertices of a cube (e.g. the vertices $(-1, -1, -1), (-1, 1, 1), (1, -1, 1),$ and $(1, 1, -1)$ of the cube $[-1..1]^3$) form the vertices of a regular tetrahedron. This allows you to easily calculate the internal angle of the tetrahedron (i.e., the angle between the lines from the center to any two vertices), and that internal angle provides the position of the vertex in the $xz$ plane. Once you have that vertex, you can find the others by rotating its position $\pm120$ degrees about the $z$ axis.
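Carrying the hint through, here is a sketch of the explicit coordinates (taking the vertex in the $zx$ plane to have positive $x$): the central angle $\theta$ between two vertices satisfies $\cos\theta=-\tfrac13$, so the three remaining vertices lie on the circle $z=-R/3$ of radius $\tfrac{2\sqrt2}{3}R$, namely
$$(0,0,R),\qquad\left(\tfrac{2\sqrt2}{3}R,\;0,\;-\tfrac R3\right),\qquad\left(-\tfrac{\sqrt2}{3}R,\;\pm\tfrac{\sqrt6}{3}R,\;-\tfrac R3\right).$$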
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proving identities using Pythagorean, Reciprocal and Quotient Back again, with one last identity that I cannot solve:
$$\frac{\cos \theta}{\csc \theta - 2 \sin \theta} = \frac{\tan\theta}{1-\tan^2\theta}$$.
The simplest I could get the left side to, if at all simpler, is $$\frac{\cos\theta}{\csc^2\theta-2}$$
As for the right side, it has me stumped, especially since the denominator is so close to an identity yet so far away. I've tried rationalizing the denominators (both sides) to little success. From my last question, where multiplying by a '1' worked, I didn't see the association here.
Thanks!
| Write everything in terms of sines and cosines, then multiply through by the inner denominators so that you have a single fraction on both sides; those two fractions should turn out to be the same. By the way, note that what they turn out to be is $\frac12\tan(2\theta)$.
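A sketch of where that leads: writing everything in sines and cosines, both sides reduce to the same fraction,
$$\frac{\cos \theta}{\csc \theta - 2 \sin \theta}=\frac{\sin\theta\cos\theta}{1-2\sin^2\theta}=\frac{\sin\theta\cos\theta}{\cos^2\theta-\sin^2\theta}=\frac{\tan\theta}{1-\tan^2\theta},$$
and the middle expression is exactly $\frac12\tan(2\theta)$.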
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Prove that if $n \in \mathbb{N}$ and $p$ is prime such that $p\mid(n!)^2+1$, then $(p-1)/2$ is even Can anyone help me prove that if $n \in \mathbb{N}$ and $p$ is prime such that $p\mid(n!)^2+1$, then $(p-1)/2$ is even?
I'm attempting to use Fermat's little theorem; so far I have only shown that $p$ is odd.
I want to show that $p \equiv 1 \pmod 4$
| If $p$ divides $(n!)^2+1$, then $(n!)^2 \equiv -1 \pmod p$, so $n!$ has order $4$ in $\mathbb F_p^\times$. By Lagrange's theorem, 4 divides the order of $\mathbb F_p^\times$ which is $p-1$, hence $p \equiv 1 \pmod 4$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Expected return from casino "Keno" game Keno is a popular game in many gambling casinos. In one version of this game the casino selects 20 numbers at random from the set of numbers 1 through 80. A player selects 10 numbers. A win occurs if at least one of the player's chosen numbers match any of the 20 numbers selected by the casino. The payoffs are as follows:
Keno Payoffs
Number of matches Dollars won for each bet
----------------- ------------------------
0-4 0
5 1
6 17
7 179
8 1299
9 2599
10 24999
What is the expected net gain for the player? Is this a fair game?
Attempt of a solution: The probability that the player will select 1 winning number is $.05 \div \binom{80}{20}$, where $.05$ is for the number he selects (1 number out of 20), divided by $\binom{80}{20}$ (the draws available to the casino); this is very unlikely, about $1.41\cdot10^{-20}$.
So take this probability and multiply by 1, 2, ..., 10 (for the ten chances that he will get 1, 2, ..., 10 numbers correct). Now we just multiply each of these by the payoff in dollars and add to get his expected net gain. The game is NOT fair because the odds are stacked so much against him.
Can this be correct? I'm getting answers which converge toward 0, so I think I'm off somewhere.
| You are right that the number of possible draws is $\binom{80}{20}$. With the player's $10$ numbers fixed, the number of draws that match exactly $1$ of them is $\binom{10}{1}$ (which of the player's numbers is matched) $\times\binom{70}{19}$ (choosing the rest of the casino's draw from the other $70$ numbers). This is much higher than you got. My memory from calculating this (maybe with different numbers) was the house edge was around 21%, which explains why they work so hard to get you to play Keno.
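If it helps, here is a short script (a sketch; the payouts are taken from the table in the question) that computes the exact expected payout with the hypergeometric probabilities:

    from math import comb

    def p_matches(k):
        # Casino draws 20 of 80 numbers; exactly k of the player's 10 picks are drawn.
        return comb(10, k) * comb(70, 20 - k) / comb(80, 20)

    payoff = {5: 1, 6: 17, 7: 179, 8: 1299, 9: 2599, 10: 24999}

    expected_payout = sum(payoff.get(k, 0) * p_matches(k) for k in range(11))
    print(expected_payout)   # compare with the amount bet to get the expected net gain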
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In the history of mathematics, has there ever been a mistake? I was just wondering whether or not there have been mistakes in mathematics. Not a conjecture that ended up being false, but a theorem which had a proof that was accepted for a nontrivial amount of time before someone found a hole in the argument. Does this happen anymore now that we have computers? I imagine not. But it seems totally possible that this could have happened back in the Enlightenment.
Feel free to interpret this how you wish!
| One of the classic examples surely is the Perko pair of knots. For 75 years people thought that these two knots were distinct, even though they had found no invariants to distinguish between them. Then in 1974 Kenneth Perko (a lawyer!) discovered that they were actually the same knot. Even Conway, apparently, in compiling his table, had missed this.
It is not by any means a significant error, but it is an intriguing one nonetheless.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "297",
"answer_count": 28,
"answer_id": 2
} |
Solve $\frac{\cos x}{1+\sin x} + \frac{1+\sin x}{\cos x} = 2$ I am fairly good at solving trig equations yet this one equation has me stumped. I've been trying very hard but was unable to solve it. Can anyone help please? Thank you.
$$\frac{\cos x}{1+\sin x} + \frac{1+\sin x}{\cos x} = 2$$
solve for $x$ in the range of $[-2\pi, 2\pi]$
I do know we have to do a difference of squares, yet after that, I don't know what to do and I get lost.
Thank you.
| Hint: if you put the two fractions over a common denominator you get a nice cancellation.
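For the record, a sketch of where the hint leads:
$$\frac{\cos^2 x+(1+\sin x)^2}{(1+\sin x)\cos x}=\frac{2+2\sin x}{(1+\sin x)\cos x}=\frac{2}{\cos x}=2\ \Longrightarrow\ \cos x=1,$$
so on $[-2\pi,2\pi]$ the solutions are $x=-2\pi,\,0,\,2\pi$ (at these points $\cos x\ne 0$ and $\sin x\ne -1$, so the original expression is defined).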
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 1
} |
How to find the minimum variance portfolio? I am doing some revision questions on my Portfolio Theory module, and have come across the following question:
Consider an investor who has constructed a risky portfolio from N securities. The investment opportunity set is described by the equation:
$$\sigma^2 = 10 - 5{\times}E(r) + 0.5\times(E(r))^2$$
Find the minimum variance portfolio.
I can't find any info in my notes, but my intuition says differentiate, set to zero and rearrange for E(r)?
| If you are trying to minimize sigma-squared, then the points where the derivative is zero will be at least local minima or maxima. If this is not intuitive, imagine a parabola and calculate the derivative at various points.
Another step would be to prove that the function is globally convex, so that the local minimum is in fact global, but your prof probably won't require that. In contrast with the parabola example, finding where $dy/dx$ is zero in $y = x^3$ won't find a global minimum or maximum.
I'm not sure what you mean by rearrange for E(r).
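For what it's worth, a sketch of the computation the question's intuition suggests:
$$\frac{d\sigma^2}{dE(r)}=-5+E(r)=0\ \Longrightarrow\ E(r)=5,\qquad \frac{d^2\sigma^2}{dE(r)^2}=1>0,$$
so the parabola is convex and the critical point is the global minimum; substituting $E(r)=5$ back into the given equation yields the corresponding $\sigma^2$.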
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$f(f(x))$ has no fixed points if $f(x)$ has no fixed points
Assume that $f(x)=x$ has no real roots where
$$f(x) = ax^2+bx+c$$ Prove that $f(f(x))=x$ has no real roots as
well.
What I've done is, calculating $f(f(x))$:
$$f(f(x))=a(ax^2+bx+c)^2+b(ax^2+bx+c)+c$$
and putting $\Delta=b^2-4ac<0$ which seems quite time consuming. Is that the right thing to do?
| Write $g(x)=f(x)-x$. Then $g$ is continuous and never zero, so it must be either always positive or always negative.
Now $f(f(x))-x = g(f(x))+g(x)$, which is always positive if $g$ is always positive, or always negative if $g$ is always negative. In either case, it's never zero, so $f(f(x))$ is never equal to $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Prove that every positive integer $n$ is a unique product of a square and a squarefree number I am trying to prove that for every integer $n \ge 1$, there exists uniquely determined $a > 0$ and $b > 0$ such that $n = a^2 b$, where $b$ is squarefree.
I am trying to prove this using the properties of divisibility and GCD only. Is it possible?
Let me assume that $n = a^2 b = a'^2b'$ where $a \ne a'$ and $b \ne b'$. Can we show a contradiction now?
| For existence, let $a$ be the largest integer, in the usual ordering, such that $a^2$ divides $n$. If $n=a^2q$, then $q$ must be square-free.
For uniqueness, call a positive integer bad if it has two different decompositions $a^2 c$ and $b^2 d$, where $c$ and $d$ are square-free, and $a$ and $b$ are positive. If there are bad positive integers, let $M$ be the smallest bad one.
If $a$ and $b$ are not relatively prime, we can produce a bad positive integer smaller than $M$. So $a$ and $b$ are relatively prime.
We show that $a^2$ and $b^2$ are relatively prime. There are various approaches. One I like is that there exist integers $x$ and $y$ such that $ax+by=1$. Cube both sides. We get
$$a^2(ax^3+3x^2by)+b^2(3axy^2+by^3)=1,$$
which says that $a^2$ and $b^2$ are relatively prime.
Since $a^2c=b^2d$ and $a^2$ and $b^2$ are relatively prime, we have $a^2\mid d$. This contradicts the fact that $d$ is square-free, unless $a=1$. Similarly, $b=1$, and therefore $M$ cannot be bad.
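A small computational illustration of the existence part (a sketch in Python, not part of the proof): take out square factors until what remains is squarefree.

    def square_times_squarefree(n):
        """Return (a, b) with n = a**2 * b and b squarefree."""
        a = 1
        d = 2
        while d * d <= n:
            while n % (d * d) == 0:   # pull out the square factor d**2
                n //= d * d
                a *= d
            d += 1
        return a, n

    print(square_times_squarefree(360))   # (6, 10): 360 = 6**2 * 10 and 10 is squarefree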
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Understanding the Leontief inverse What I remember from economics about input/output analysis is that it basically analyses the interdependencies between business sectors and demand. If we use matrices we have $A$ as the input-output matrix, $I$ as an identity matrix and $d$ as final demand. In order to find the final input $x$ we may solve the Leontief Inverse:
$$
x = (I-A)^{-1}\cdot d
$$
So here's my question: Is there a simple rationale behind this inverse? Especially when considering the form:
$$
(I-A)^{-1} = I+A + A^2 + A^3\ldots
$$
What happens if we change an element $a_{i,j}$ in $A$? How is this transmitted within the system? And is there decent literature about this behaviour around? Thank you very much for your help!
| This question has languished. At the level the question was asked, there is now a short, useful lecture available:
https://www.youtube.com/watch?v=-1jT5NOk93w
If this information is insufficient, perhaps a followup question would be appropriate.
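Since the question centers on the series $(I-A)^{-1}=I+A+A^2+\cdots$, here is a small numerical illustration (a sketch; the $2\times2$ matrix $A$ and the demand vector below are made up for the example):

    import numpy as np

    A = np.array([[0.2, 0.3],
                  [0.1, 0.4]])      # hypothetical input-output coefficients
    d = np.array([100.0, 50.0])     # hypothetical final demand

    # Direct solution x = (I - A)^{-1} d
    x = np.linalg.solve(np.eye(2) - A, d)

    # Neumann series: A^k d is the extra output required k production rounds upstream
    series = sum(np.linalg.matrix_power(A, k) @ d for k in range(50))

    print(x)         # total output
    print(series)    # converges to x because the spectral radius of A is below 1

Changing a single entry $a_{ij}$ changes every power $A^k$, so the effect propagates through every round of the series; that is one way to read how a shock in one sector is transmitted to the whole system.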
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
How to prove $\left\{ \omega|X(\omega)=Y(\omega)\right\} \in\mathcal{F}$ is measurable, if $X$ and $Y$ are measurable? Given a probability space $(\Omega ,\mathcal{F} ,\mu)$. Let $X$ and $Y$ be $\mathcal{F}$-measurable real valued random variables. How would one proove that $\left\{ \omega|X(\omega)=Y(\omega)\right\} \in\mathcal{F}$ is measurable.
My thoughts: Since $X$ and $Y$ are measurable, it is true, that for each $x\in\mathbb{R}:$ $\left\{ \omega|X(\omega)<x\right\} \in\mathcal{F}$ and $\left\{ \omega|Y(\omega)<x\right\} \in\mathcal{F}$.
It follows that $\left\{ \omega|X(\omega)-Y(\omega)\leq x\right\} \in\mathcal{F}$
Therefore $\left\{ \omega \mid -\frac{1}{n}\leq X(\omega)-Y(\omega)\leq \frac{1}{n} \right\} \in\mathcal{F}$, for $n\in\mathbb{N}$.
Therefore $\left\{ \omega\mid X(\omega)=Y(\omega)\right\}=\bigcap_{n\in\mathbb{N}}\left\{ \omega\mid -\frac{1}{n}\leq X(\omega)-Y(\omega)\leq \frac{1}{n} \right\} \in\mathcal{F}$.
Am working towards the correct direction? I appreciate any constructive answer!
| Let $Z\colon \Omega\to\mathbb R^2$ defined by $Z(\omega)=(X(\omega),Y(\omega))$, where $\mathbb R^2$ is endowed with the Borel $\sigma$-algebra $\mathcal B(\mathbb R^2)$.
*Show that the fact that the projections are measurable implies that so is $Z$.
*Show that the set $\{(x,x) : x\in\Bbb R\}\subseteq\Bbb R\times \Bbb R$ is $\mathcal B(\mathbb R^2)$-measurable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Is 1100 a valid state for this machine? A room starts out empty. Every hour, either 2 people enter or 4 people leave. In exactly a year, can there be exactly 1100 people in the room?
I think there can be because 1100 is even, but how do I prove/disprove it?
| Exactly a year is $24*365=8760$ hours (or $8784$ for a leap year-maybe you need to try both). If there are $x$ times that two people enter and $y$ times that four people leave, you want $x+y=8760,\ 2x-4y=1100$. Two equations in two unknowns, and if the solution is integral you are there.
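Carrying that computation through (a sketch, for the $8760$-hour count): adding $4(x+y)=35040$ to $2x-4y=1100$ gives $6x=36140$, which is not divisible by $6$, so there is no integral solution; with $8784$ hours one gets $6x=36236$, again not divisible by $6$. So on this approach the answer is that exactly $1100$ people is not possible.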
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/139931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 4
} |
A characterization of open sets
Let $(M,d)$ be a metric space. Then a set $A\subset M$ is open if, and only if, $A \cap \overline X \subset \overline {A \cap X}$ for every $X\subset M$.
This is a problem from metric spaces, but I think that it only requires topology. I don't know how to do it.
| Suppose $A$ is open, let $X\subseteq M$, and let $a\notin \overline{A\cap X}$. Therefore, there exists an open set $U$ such that $a\in U$ and $U\cap (A\cap X)=\varnothing$. If $a\in A$, then $(U\cap A)$ is an open set that contains $a$, and whose intersection with $X$ is empty, so $a\notin \overline{X}$, hence $a\notin A\cap\overline{X}$. If $a\notin A$, then $a\notin A\cap\overline{X}$. Thus, we have shown that
$$M-\overline{A\cap X} \subseteq M-(A\cap \overline{X}).$$ Taking complements proves the desired inclusion.
Conversely, suppose that for every $X\subseteq M$, we have $A\cap\overline{X}\subseteq \overline{A\cap X}$. We wish to show that $A$ is open. Let $X=M-A$. Then $A\cap\overline{X} \subseteq \overline{A\cap X} = \overline{A\cap(M-A)}=\varnothing$, so $\overline{X}\subseteq M-A = X$. Therefore, $X$ is closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Balls and Boxes – Combinatorics This is studying for my final, not homework.
Six balls are dropped at random into ten boxes. What is the probability that no box will contain two or more balls?
So I know that to select 6 balls from ten we do: ${10\choose 6}$, but I'm not sure how to check whether or not each box contains less than two balls. Is it just:
${6\choose 1}\over{10\choose 6}$
| Drop a ball into a box. Now drop the next ball. There are $9$ empty boxes, so the probability of no collision is $\frac{9}{10}$. Then drop the next one. The probability of no collision is $\frac{8}{10}$. And so on.
So the overall probability of no collision is
$$\frac{9}{10}\cdot\frac{8}{10}\cdot\frac{7}{10}\cdot\frac{6}{10}\cdot\frac{5}{10}.$$
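For reference, this product evaluates to $\frac{9\cdot8\cdot7\cdot6\cdot5}{10^5}=\frac{15120}{100000}=0.1512$.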
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Brownian motion Convergence If $X$ is a standard 1d brownian motion and $M_t = \max\{X_s: 0 \le s \le t\}$, what can we say about $M_t/t$ as $t \rightarrow \infty$?
Mainly, what can we say about the behavior of this martingale?
My attempt: $\mathrm P(M_t > a) = 2\,\mathrm P(B_t > a)$, but what integral does this fit in with?
| Since $B_t$ is symmetric, the result recalled in the post reads $\mathrm P(M_t\gt at)=\mathrm P(|B_t|\gt at)$ $(\ast)$.
Since $B_t$ equals $\sqrt{t}B_1$ in distribution, $(\ast)$ is $\mathrm P(|B_1|\gt a\sqrt{t})$. Since $|B_1|$ is almost surely finite and $a\sqrt{t}\to+\infty$ for every $a\gt0$, the last probability goes to zero, hence $\mathrm P(M_t/t\gt a)\to0$ for every $a\gt0$. That is, $M_t/t\to0$ in probability.
One can strengthen this convergence, noting that the series $\sum\limits_n\mathrm P(|B_1|\gt a\sqrt{n})$ converges since $\mathrm P(|B_1|\gt a\sqrt{n})$ is $\mathrm e^{-\frac12a^2n+o(n)}$. By the first Borel-Cantelli lemma, $M_n/n\to0$ almost surely when the integer $n$ goes to infinity. For every $t\geqslant1$ there exists an integer $n\geqslant1$ such that $n\leqslant t\lt n+1$, and $(M_t)_t$ is nondecreasing hence $M_t/t\leqslant M_{n+1}/n\leqslant 2M_{n+1}/(n+1)$. This proves that $M_t/t\to0$ almost surely when the real number $t$ goes to infinity.
The same identity $(\ast)$ shows the convergence in $L^p$ for every finite $p\geqslant1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimum tiles for a grid given a few conditions Today, I came across an exercise in Problem Solving Strategies by Johnson and Herr which I was not sure was the best way to solve it. The problem given was:
Below I drew up a quick sketch of a diagram.
Note that each row and column should be the same height and width, which is not apparent by my sketch (sorry!). Also, there should be a horizontal line ending the last row of tiles.
My line of thinking was that since there were $12$ vertical lines and $9$ horizontal lines, there are $11$ horizontal tiles per row and $8$ vertical tiles per column. Also note that the total number of intersections is given by $12(9) = 108$. (Note I am unsure whether to count the very top horizontal line as one of the nine lines and the far left line as one of the twelve vertical lines.) Each tile has $4$ intersections. Thus the minimum number of tiles to cover the area would be $\frac{108}{4} = 27$ tiles. However, due to there being an odd number of lines ($9$), $24$ tiles would only cover $8$ lines ($96$ intersections). So, to account for those last $12$ intersections, we need to add $6$ more tiles - giving us a total min number of tiles needed being $30$.
Is this the type of logic you would use to solve similar problems to this - or is there a sneakier way perhaps?
| Here is a formal proof:
Picture in red the lowest right corner, and every other intersection of the form $(2i,2j)$. Two different red points are at distance at least $11*2$, so for every red point one tile is needed (the farthest two points in a tile are $\sqrt2*12$ apart).
How many red points do you have?
$$\lceil\frac{m}{2}\rceil\times\lceil\frac{n}{2}\rceil$$
So this is a lower bound on the number of tiles used. But you have already shown a configuration for which $30$ (and generally $\lceil\frac{m}{2}\rceil\times\lceil\frac{n}{2}\rceil$) tiles cover each point (so an upper bound).
These numbers are the same, so you are done.
For this class of problem, the solution is always in two parts: show a solution that works, and show that you can't do better, so it has to be optimal. Which part is hardest depends on the problem; sometimes there are ugly constructions, but most of the time finding the proof is the hard part that requires more work.
A good rule of thumb is to never use induction for demonstrating that the solution is optimal; usually the way is to find some sort of invariant that holds and reduces the problem to a simpler one (like here, every red point needs one tile, so tiles $\geq$ red points).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can every continuous function that is curved be expressed as a formula? By "curved", I mean that there is no portion of a graph of a function that is a straight line. (Let us first limit the case into 2-dimensional cases.. and if anyone can explain the cases in more than 2-dimensional scenarios I would appreciate.. but what I really want to know is the 2-dimensional case.)
Let us also say that a function is surjective (domain-range relationship).
So, can every continuous and curved function be described as a formula, such as sines, cosines, combinations of them, or whatever formula?
thanks.
| The answer depends a little on what you consider a formula. The function $f(x)=-x^2$ for $x<0$, $f(x)=x^3$ for $x\ge 0$ is nicely curvy, and continuous everywhere. The function is also surjective, and injective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Lagrange form and differences
*For a function $f$ and distinct points $\alpha$, $\beta$, $\gamma$; what is meant by $f[\alpha,\beta,\gamma]$?
*Find the Lagrange form for the polynomial $P(x)$ that interpolates $f(x) = \frac{4x}{x+1}$ at $0$, $1$ and $3$.
For (1), I can say that we have a divided difference: $$ \frac{f[\gamma]-f[\beta]}{\gamma - \beta}$$ but I am a little lost on handling three points.
For (2):
$$(x-0) \times f(1) \times f(3) + (x-1) \times f(0) \times f(3) + (x-3) \times f(0) \times f(1)$$ Will this work?
| It is:
$$f[\alpha,\beta,\gamma]=\frac{f[\alpha,\beta]-f[\beta,\gamma]}{\alpha-\gamma}$$
Indeed, it works like that for $n$ points as well.
As for (2),
$$P(x)=f(0) \times L_0(x) + f(1) \times L_1(x)+ f(3) \times L_2(x)=f(1) \times L_1(x)+ f(3) \times L_2(x) $$
Where:
$$L_j(x)=\frac{(x-x_0)(x-x_1) \dots (x-x_{j-1})(x-x_{j+1}) \dots (x-x_n)}{(x_j-x_0)(x_j-x_1) \dots (x_j-x_{j-1})(x_j-x_{j+1}) \dots (x_j-x_n)}$$
And $n=2$ (the number of points). Therefore:
$$L_0(x)=\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}=\frac{(x-1)(x-3)}{(0-1)(0-3)}=\frac{(x-1)(x-3)}{3}$$
(Actually, you don't need to calculate $L_0$)
$$L_1(x)=\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}=\frac{(x-0)(x-3)}{(1-0)(1-3)}=\frac{x(x-3)}{-2}$$
$$L_2(x)=\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}=\frac{(x-0)(x-1)}{(3-0)(3-1)}=\frac{x(x-1)}{6}$$
Finally, we have:
$$P(x)=f(1) \times L_1(x)+ f(3) \times L_2(x)= 2 \times \frac{x(x-3)}{-2} + 3 \times \frac{x(x-1)}{6}=\frac{x(x-1)}{2} -x(x-3)$$
$$P(x)=\frac{-x^2+5x}{2}$$
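A quick numerical check of this interpolant (a sketch):

    f = lambda x: 4 * x / (x + 1)
    P = lambda x: (-x**2 + 5 * x) / 2

    for x in (0, 1, 3):
        print(x, f(x), P(x))   # both give the values 0, 2, 3 at the nodes x = 0, 1, 3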
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compact operator? self adjoint operator? Stirling's formula Define a pair of operators $S\colon\ell^2 \rightarrow \ell^2$ and $M\colon\ell^2 \to\ell^2$ as follows: $$S(x_1,x_2,x_3,x_4,\ldots)=(0,x_1,x_2,x_3,\ldots) $$ and $$M(x_1,x_2,x_3,x_4,\ldots)=(x_1,\frac{x_2}{2},\frac{x_3}{3},\frac{x_4}{4},\ldots).$$ Let $T=M \circ S$.
Then how to prove that $T$ has no eigenvalues? Is $T$ a compact operator? And self-adjoint?
Also, let $T^m$ denote the composition of $T$ with itself $m$ times. Then how does one show directly from the formula for $T$ that $\lim_{m\rightarrow \infty} \|T^m\|^{1/m}=0$?
I think I need to use the Stirling's formula, but I don't know what to do. Please help me. Thank you.
| Denote $x=(x(k))_{k\geq 1}$ a sequence in $\ell^2$. We have
$$T^{j+1}(x)=M\left(\big((T^jx)(k+1)\big)_{k\geq 1}\right)=\left(\frac{(T^jx)(k+1)}k\right)_{k\geq 1}$$
hence we have $(T^{j+1}(x))(k)=\frac{(T^j(x))(k+1)}k$. By induction, we get that
$$(T^j(x))(k)=\begin{cases}
0&\mbox{ if }j\geq k\\\
\frac{x(k+j)}{\prod_{l=k}^{k+j}l}&\mbox{ if }j<k.
\end{cases}$$
We can see that $$\lVert T^j\rVert=\frac 1{\prod_{l=j+1}^{2j+1}l}=\frac{j!}{(2j+1)!}\sim\frac{\sqrt{2j\pi}(j/e)^j}{\sqrt{2(2j-1)\pi}((2j+1)/e)^{2j+1}}=\frac{\sqrt jj^j}{\sqrt{2j-1}2^{2j+1}(j+1/2)^{2j+1}}$$
therefore
$$\lVert T^j\rVert^{1/j}\sim \frac{j}{4(j+1/2)^2},$$
which converge to $0$. In particular, the spectral radius of $T$ is $0$, so the only possible eigenvalue of $T$ is $0$ (but we can see it's not the case because both $S$ and $M$ are injective). So $T$ has no eigenvalues.
We can show that $M$ is compact, for example writing it as the limit of the finite rank operators $M_n$ given by $M_n(x)(k)=\begin{cases}\frac 1kx(k)&\mbox{ if }k\leq n\\
0&\mbox{ otherwise}
\end{cases}$. So $T$ is compact.
For the adjoint, write $T^*=S^*\circ M^*=S^*\circ M$, and look for example at the value at $(1,0,0,\ldots)$.
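A numerical sanity check of $\lim_m\lVert T^m\rVert^{1/m}=0$ (a sketch; the finite matrix below only truncates the operator):

    import numpy as np

    N = 200
    T = np.zeros((N, N))
    for k in range(1, N):
        T[k, k - 1] = 1.0 / (k + 1)   # since T = M∘S, component k+1 of Tx is x(k)/(k+1)

    for m in (5, 10, 20, 40):
        norm = np.linalg.norm(np.linalg.matrix_power(T, m), 2)   # spectral norm
        print(m, norm ** (1.0 / m))   # tends to 0 as m grows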
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
asymptotic limit at the integral I would like to get an asymptotic limit of the following integral: for $p\ge 2$, $n \in \mathbb{N}$, $t \ge 0$
$$
\int_{0}^{\frac 12 \sqrt{(n+1)!}}\left(1-\frac{t^2}{2^2(n+1)!}\right)^p \mathrm{d} t
$$
I think the substitution $t=\frac 12 \sqrt{(n+1)!}\,y$ should work. But after the substitution, I don't know what to do.
Thank you for your help.
| Performing the substitution $ t = \frac{y}{2} \sqrt{(n+1)!}$, the integral becomes:
$$
\frac{\sqrt{(n+1)!}}{2} \int_0^1 \left( 1 - \frac{y^2}{16} \right)^p \mathrm{d} y = \frac{\sqrt{(n+1)!}}{2} \sum_{k=0}^\infty \frac{1}{2k+1} \binom{p}{k} \frac{(-1)^k}{16^k}
$$
So as $n$ grows, so does the magnitude of the integral.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Determining the dimension and a basis for a vector space I have the following problem:
Let $W$ be a vector space of all solutions to these homogenous equations:
$$\begin{matrix} x &+& 2y &+& 2z &-& s &+& 3t &=& 0 \\
x &+& 2y &+& 3z &+& s &+& t &=& 0 \\
3x &+& 6y &+& 8z &+& s &+& 5t &=& 0\end{matrix}$$
Find the dimension of the space $W$ and determine a basis for $W$.
I tried solving the above kernel to get the solutions.
The matrix:
$$\left(\begin{matrix} 1 & 2 & 2 & -1 & 3 \\ 1 & 2 & 3 & 1 & 1 \\ 3 & 6 & 8 & 1 & 5\end{matrix}\right)$$
When performing Gauss-Jordan on it, I get the matrix rank to be $3$:
$$\left(\begin{matrix} 1 & 0 & -1 & 0 & 0 \\ 0 & 5 & 2 & 0 & 0 \\ 4 & 10 & 0 & 0 & 0\end{matrix}\right)$$
So I get lost at this point. I don't know how to get the dimension nor how to determine a basis for it.
Can anyone point out the next thing I should do and whether I started off good?
| After clearing the bottom row, beneath the pivots $1$ and $5$, you'll have all zeros there. Thus you can take any $z,s,t$ and then $x$ and $y$ will be determined by $z$. Thus your basis is $\{e_1-\tfrac25e_2+e_3,e_4,e_5\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
About the localization of a UFD
I was wondering, is the localization of a UFD also a UFD?
How would one go about proving this? It seems like it would be kind of messy to prove if it is true.
If it is not true, what about localizing at a prime? Or what if the UFD is Noetherian?
| As far as I can tell, the localization of a UFD is always a UFD. Let $R$ be a UFD and $S \subseteq R$ multiplicatively closed.
A ring is a UFD if and only if every height 1 prime ideal is principal. So, let $P$ be a height 1 prime ideal of $S^{-1}R$. Then there is a prime ideal $I \lhd R$ such that $P = S^{-1}I$. Now, localization does not change height, so $I$ has height 1, hence it is principal as $R$ is a UFD, say $I = \langle a \rangle$. Then $P = S^{-1}I = S^{-1}\langle a \rangle = \langle \frac{a}{1} \rangle$, so $P$ is principal. Hence, all height 1 prime ideals of $S^{-1}R$ are principal, hence $S^{-1}R$ is a UFD.
Edit: As Arturo mentions, this requires $0 \not\in S$. Also, it requires that $R$ is noetherian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 4,
"answer_id": 2
} |
Using modular arithmetic, how can one quickly find the natural number n for which $n^5 = 27^5 + 84^5 + 110^5 + 133^5$? Using modular arithmetic, how can one quickly find the natural number n for which $n^5 = 27^5 + 84^5 + 110^5 + 133^5$?
I tried factoring individual components out, but it seemed really tedious.
| Tabulating the expression with respect to low primes:
$\bmod 2: 27^5 + 84^5 + 110^5 + 133^5 \equiv 1^5 + 0^5 + 0^5 + 1^5 \equiv 0 \implies n\equiv 0$
$\bmod 3: 27^5 + 84^5 + 110^5 + 133^5 \equiv 0^5 + 0^5 + -1^5 + 1^5 \equiv 0 \implies n\equiv 0$
$\bmod 5: 27^5 + 84^5 + 110^5 + 133^5 \equiv 2^5 + (-1)^5 + 0^5 + (-2)^5 \equiv -1 \implies n^5 \equiv n\equiv -1$
Collecting these gives $n\equiv 24\bmod 30$, which already points at $144$
$\bmod 7: 27^5 + 84^5 + 110^5 + 133^5 \equiv (-1)^5 + 0^5 + (-2)^5 + 0^5 \equiv -1+3 \equiv 2 \equiv n^5\equiv n^{-1} \implies n \equiv 4$
This confirms that $144$ is the only possible solution in range; the next modular equivalence solution would be far out of range at $144+210 = 354$.
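A one-line check of the conclusion:

    print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)   # True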
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Group of finite ideles Let $K$ be a number field. Let $\mathbb{I}_f$ denote the group of finite ideles, and let $\phi: K^{\times} \rightarrow \mathbb{I}_f$ be the diagonal embedding. On page 167 of his notes on Class Field Theory, Milne states the following result without proof:
"The induced topology on $K^{\times}$ has the following description: $U_K = \mathcal{O}_K^{\times}$ is open, and a fundamental system of neighborhoods of $1$ is formed by the subgroups of $U_K$ of finite index."
Can someone point me to a source where this is proved, or outline an argument for a proof of this theorem if it is not too much trouble.
| For the first claim: The subset of the finite ideles given by $\prod_v \mathcal{O}_{K_v}^\times$ is open by the definition of the topology on the restricted direct product, and an element in $K^\times$ is in this subset iff it is a unit in each completion, which is true iff it's in $\mathcal{O}^\times$.
The second claim is just a generalization of this first argument. A basis of open neighborhoods of 1 in the finite ideles is given by picking any open subgroup you choose of $K_v^\times$ in finitely many of the $v$-spots and picking $\mathcal{O}_v^\times$ in all the rest of them. If we just want a basis of open subsets, we may as well only look at really small ones, so we can assume that in the finitely many $v$-spots where we didn't pick $\mathcal{O}_v^\times$, we picked some small open subset of $\mathcal{O}_v^\times$. Note that subgroups of $\mathcal{O}_v^\times$ are all finite index and generated by a power of the uniformizer. Note also that all these open subsets of the ideles are subsets of the one we discussed in checking the first claim.
What happens when we intersect such a neighborhood with the diagonal image of $K^\times$? We get the subgroup of elements in $\mathcal{O}_K^\times$ which are in all the open subgroups of $\mathcal{O}_v^\times$ that we picked at the special places $v$. The index is just the product of the indices of those subgroups, so that's a finite index subgroup. Conversely, any finite index subgroup of $\mathcal{O}_K^\times$ is determined by the index of its image in each completion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
About single-valued function I was reading a paper about compression algorithm:
In order to optimality fit the line segments to the curve, Bellman's
algorithm assumes that the input data is a valid (i.e., single-valued)
function; thus, the trajectory cannot contain any loops.
What does it means with valid function? I suppose he intends to have a single-valued function, but: what's a single-valued function? According to wikipedia definition:
A single-valued function is an emphatic term for a mathematical
function in the usual sense. That is, each element of the function's
domain maps to a single, well-defined element of its range.
But this sounds to me like an injective function. Can you confirm that both (single-valued function and injective function) mean the same thing?
| Of course not! A map $f \colon X \to Y$ is injective when $f(x_1)=f(x_2)$ implies $x_1=x_2$, while "single-valued" just says that each input has exactly one output; for example, $f(x)=x^2$ on $\mathbb{R}$ is single-valued but not injective. But, strictly speaking, you have to know what a function is, i.e. what a single-valued function is.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does there exist a bijection between empty sets?
Does there exist a bijection between empty sets?
What I think is:
Since 'for every $x\in \emptyset$, there exists $y\in \emptyset$ such that $(x,y)\in f$' is false and the negation of this statement is also false because of 'existence' sentence..
But I think empty sets are equipotent intuitively.
Am I wrong? Help
| Note that there are not several "empty sets": the axiom of extensionality tells us that all empty sets are equal, as they have the same elements.
Now it is obvious that there is a bijection of a set to itself: the identity. Furthermore, if you think about it a little bit you will see that the empty set itself is a function whose domain and range are empty, and therefore it is a bijection between that set and itself.
Further reading:
*Why is an empty function considered a function?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 0
} |
Cardinal number subtraction We all know that $|\mathbb{N}| = \aleph_0$. Since $|\{-1\} \cup \mathbb{N}| = \aleph_0$ as well, I guess you could say that $\aleph_0 + 1 = \aleph_0$.
You can go on to derive that $\aleph_0 + \aleph_0 = \aleph_0$, and by induction $n \times \aleph_0 = \aleph_0$ (assuming that $n$ is finite, anyway).
Now, here's a thing: What is $\aleph_0 - \aleph_0$?
Well, $|\mathbb{P}| = \aleph_0$, and $|\mathbb{P} \backslash \mathbb{N}| = \aleph_0$, so perhaps $\aleph_0 - \aleph_0 = \aleph_0$?
No, wait. Consider the set $S = \{n \in \mathbb{N}: n > 5\}$. Now we have $|S| = \aleph_0$, and yet $|\mathbb{N} \backslash S| = 5$. So maybe $\aleph_0 - \aleph_0 = 5$?
But that is absurd, since we can redefine $S$ to make the result any finite number we wish. Or we can define $S$ such that the result is countably infinite.
Does this mean that the notion of $\aleph_0 - \aleph_0$ simply has no definite answer? Or am I just being too simplistic here?
| Let $\kappa_1$ and $\kappa_2$ be cardinals. It makes sense to say that a cardinal $\kappa_3$ is $\kappa_1 - \kappa_2$ if $\kappa_2 + \kappa_3 = \kappa_1$. But if at least one of $\kappa_2$ and $\kappa_3$ is infinite, then (assuming the Axiom of Choice, as usual) we have
$\kappa_2 + \kappa_3 = \max(\kappa_2,\kappa_3)$.
Now $\max(\kappa_2,\kappa_3) = \kappa_1$ iff either ($\kappa_2 < \kappa_1$ and $\kappa_3 = \kappa_1$) or ($\kappa_2 = \kappa_1$ and $\kappa_3 \leq \kappa_1$).
Thus if $\kappa_2 < \kappa_1$, $\kappa_1 - \kappa_2$ is uniquely determined: it is $\kappa_1$.
However, if $\kappa_2 = \kappa_1$, then $\kappa_3$ can be any cardinal less than or equal to $\kappa_1$, i.e., subtraction of a cardinal from itself is not uniquely determined.
(Finally, if $\kappa_2 > \kappa_1$, then there is no cardinal $\kappa_3$ with
$\kappa_3 = \kappa_1 - \kappa_2$.)
In particular $\aleph_0 - \aleph_0$ is not uniquely defined: by performing this subtraction in the above sense one can get every finite cardinal and also $\aleph_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/140930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Minimal modulus for the finite field NTT I need your support.
Suppose I am performing an NTT in a finite field $GF(p)$. I assume it contains the needed primitive root of unity.
I am using it to compute the convolution of two vectors of length $n=2^m, m\in \mathbb{N}$. As usual, I double the length of the vectors to $2n=2^{m+1}$ to make the convolution in fact acyclic (right? may be wrong here) and thus guarantee the exact interpolation of the product during INTT.
These vectors are in fact long integers, with all coefficients less than $BASE\in\mathbb{N}$. I wanted to come up with an inequality for the prime number $p$ so that the convolution is recoverable (i.e. equal to the real product of polynomials).
Here are my considerations:
The worst case is when all coefficients of the vectors are equal to $BASE-1$. Then the convolution coefficients are
$$c_0 = a_0 b_0 = (BASE-1)^2$$
$$c_1 = a_0 b_1 + a_1 b_0 = 2\cdot (BASE-1)^2$$
$$...$$
$$c_{n-1} = a_0 b_{n-1} + ... + a_{n-1} b_0 = n\cdot (BASE-1)^2$$
$$c_n = \underbrace{a_0 b_n}_{=0} + a_1 b_{n-1} + ... + a_{n-1} b_1 + \underbrace{a_n b_0 }_{=0} = (n-1)\cdot (BASE-1)^2$$
...
$$c_{2n-1}=0$$.
Thus I have come up with an inequality for the prime $p$:
$$n \cdot (BASE-1)^2 < p.$$
Does it guarantee that the convolution is computed correctly or have I missed something?
| The answer is yes. $n \cdot (BASE-1)^2 < p$ is indeed the binding inequality that needs to be satisfied.
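A small sanity check of that worst case (a sketch; the values of $n$ and $BASE$ here are made up):

    def max_acyclic_coefficient(n, base):
        # Worst case: every digit equals base - 1.
        a = [base - 1] * n
        return max(sum(a[i] * a[k - i] for i in range(k + 1)
                       if i < n and k - i < n)
                   for k in range(2 * n - 1))

    n, base = 8, 10
    print(max_acyclic_coefficient(n, base), n * (base - 1) ** 2)   # both print 648

So the largest convolution coefficient really is $n\cdot(BASE-1)^2$, and any modulus strictly larger than it (with the required roots of unity) lets the coefficients be read off exactly.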
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is Mazzola's "Topos of Music" about? Disclaimers: I am neither a musician, nor I want to discredit Mazzola's work. Corollary of the first point: please use a plain style, without technical terms in the area of Music Theory. Corollary of the second: don't take my disbelief in Mazzola's work as an offense. ;)
So, the question is: what is Mazzola's "Topos of music" about? Is the considerable required amount of advanced Mathematics a necessary tool to achieve the goals of the book? Can a decent knowledge of those mathematical prerequisites shed some light on the mathematical approach to music theory? Has somebody out there managed to apply Mazzola's ideas (if there are some sensible to be applied) to a "concrete" situation?
Thanks!
| The theory has been applied in my composition software Presto for Atari (google it, it is still available for PC emulation), and in the universal software Rubato for composition, analysis, and performance. These programs were also used to compose music; see Mazzola's homepage www.encyclospace.org, and go to the CV there. Best, Guerino Mazzola
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 2,
"answer_id": 1
} |
What did Gauss think about infinity? I have someone who is begging for a conversation with me about infinity. He thinks that Cantor got it wrong, and suggested to me that Gauss did not really believe in infinity, and would not have tolerated any hierarchy of infinities.
I can see that a constructivist approach could insist on avoiding infinity, and deal only with numbers we can name using finite strings, and proofs likewise. But does anyone have any knowledge of what Gauss said or thought about infinity, and particularly whether there might be any justification for my interlocutor's allegation?
| Your interlocutor seems to oppose infinity (and attribute similar views to Gauss) on finitist or constructivist grounds. If this is the case, he would probably similarly oppose infinitesimals. This is because specifying an infinitesimal typically involves an infinite amount of data, at least in modern theories.
Here he would be wrong to assume similar beliefs on Gauss's part because Gauss specifically and routinely used infinitesimals in his development of differential geometry. A detailed discussion of this may be found in the book by Michael Spivak on Differential Geometry, Third edition, volume 2, chapter 4. The discussion starts on page 62 as follows: "Gauss now nonchalantly introduces infinitely small quantities..."
Your interlocutor also mentioned hierarchies of infinities. On page 75 in Spivak's translation of Gauss, one finds products of infinitesimals, and an expression for curvature in terms of these. These are second order infinitesimals. Thus Gauss dealt with a hierarchy of infinitesimals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 3,
"answer_id": 1
} |
Highest power of a prime $p$ dividing $N!$ How does one find the highest power of a prime $p$ that divides $N!$ and other related products?
Related question: How many zeros are there at the end of $N!$?
This is being done to reduce abstract duplicates. See
Coping with *abstract* duplicate questions. and List of Generalizations of Common Questions for more details.
| Walking down the street I realized how to prove the formula in a very simple way. The key idea is to count how many numbers contain a given prime to exactly the $q$-th power.
Let us denote by $M^p_q(n)$ the set of all numbers up to $n$ (inclusive) that are divisible by the $q$-th power of the prime $p$ but not by any greater ($q+1, q+2, \dots$) power of that prime:
$$
M^p_q(n) := \{ m : m \le n, p^q | m, p^{q+1} \nmid m\}
$$
How many numbers are in that set? Well, obviously, the number of those that are divisible by $p^q$ minus the number of those which are divisible by $p^{q+1}$:
$$
|M^p_q(n)| = \bigg\lfloor \frac{n}{p^q}\bigg\rfloor - \bigg\lfloor \frac{n}{p^{q+1}}\bigg\rfloor
$$
OK. Each of the numbers in $M^p_q$ contributes exactly the $q$-th power of $p$ to $n!$, so we just need to add them all up:
$$
s_p(n!)=\sum_{q=1}^{\infty}q|M^p_q(n)|
$$
Which is the desired result.
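To connect this with the usual closed form: rearranging the sum (the $q$-th floor ends up with coefficient $q-(q-1)=1$) gives
$$s_p(n!)=\sum_{q=1}^{\infty}q\left(\Big\lfloor \frac{n}{p^q}\Big\rfloor-\Big\lfloor \frac{n}{p^{q+1}}\Big\rfloor\right)=\sum_{q=1}^{\infty}\Big\lfloor \frac{n}{p^q}\Big\rfloor,$$
which is Legendre's formula.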
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 9,
"answer_id": 5
} |
What is difference between a ring and a field? The ring axioms require that addition is commutative, addition and multiplication are associative, multiplication distributes over addition.
A field can be thought of as two groups linked by an extra distributivity law.
A ring is more complex: an abelian group together with a semigroup, linked by an extra distributivity law.
Is a ring a more basic structure than a field, or vice versa? What's the relation between them? What's the background why people study them?
| There's a whole range of algebraic structures. Perhaps the 5 best known are semigroups, monoids, groups, rings, and fields.
*A semigroup is a set with a closed, associative, binary operation.
*A monoid is a semigroup with an identity element.
*A group is a monoid with inverse elements.
*An abelian group is a group where the binary operation is commutative.
*A ring is an abelian group (under addition, say) that happens to have a second closed, associative, binary operation as well. And these two operations satisfy a distribution law. (You may or may not require rings to have an identity with the second operation)
*A field is a ring where both operations commute, where every element has an additive (i.e. the first operation) inverse and every nonzero element has a multiplicative (i.e. the second operation) inverse (and thus there is a multiplicative identity), and with the extra requirement that if $xy = 0$ for some $x \not = 0$, then we must have $y = 0$ (we call this having no zero-divisors).
People study these, and maps between them, because it is stunning how often things can be given a group or ring-like structure. So knowing how these things behave carries a lot of information about many things.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62",
"answer_count": 3,
"answer_id": 1
} |
How to make sense of $\int_0^a f'(x)x dx$ for monotone, not necessarily continuous, functions $f$ I have a monotone real function $f$ defined on [0,1] with values in [0,1]. The function need not be continuous. I need to make sense of the expression:
$$F(a)=\int_0^a f'(x)x dx,$$ in the greatest possible generality and in the most elementary way. What is the best way to go about it?
| Let us pretend for a moment that $f$ is regular, then an integration by parts yields
$$
F(a)=\left[xf(x)\right]_{0}^a-\int_0^af(x)\mathrm dx=af(a)-\int_0^af(x)\mathrm dx.
$$
Since the RHS is meaningful for every monotonic function $f$ (and still others), one can define $F(a)$ by the expression in the RHS for every such function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
When a field extension $E\subset F$ has degree $n$, can I find the degree of the extension $E(x)\subset F(x)?$ This is not a problem I've found stated anywhere, so I'm not sure how much generality I should assume. I will try to ask my question in such a way that answers on different levels of generality could be possible. I'm also not sure that this question isn't trivial.
Let $E\subset F$ be a fields (both can be finite if needed), and let $n$ be the degree of the field extension ($n$ can be finite if needed). Can we find the degree of the extension $E(x)\subset F(x)$ of rational function fields?
Say $E=\mathbb F_2$ and $F=\mathbb F_4$. Then $(F:E)=2.$ I can take $\{1,\xi\}$ to be an $E$-basis of $F$. Now let $f\in F(x),$ $$f(x)=\frac{a_kx^k+\cdots +a_0 } {b_lx^l+\cdots+b_0 }$$ for $a_0,\ldots a_k,b_0,\ldots,b_l\in F$
I can write $a_0,\ldots a_k,b_0,\ldots,b_l$ in the basis: $$\begin{eqnarray}&a_i&=p_i\xi+q_i\\&b_j&=r_j\xi+s_j\end{eqnarray}$$
But all I get is $$f(x)=\frac{p_k\xi x^k+\cdots+p_0+q_kx^k+\cdots+q_0} { r_k\xi x^k+\cdots+r_0+s_kx^k+\cdots+s_0},$$
and I have no idea what to do with this. On the other hand, my intuition is that the degree of the extension of rational function fields should only depend on the degree of the extension $E\subset F,$ even regardless of any finiteness conditions.
| Suppose $F = E(\alpha)$, where $\alpha$ has minimal polynomial $f(t)$.
Then
$$F(x) = E(\alpha)(x) = E(\alpha, x) = E(x)(\alpha)$$
and $\alpha$ still has minimal polynomial $f(t)$. If you're not sure on that point, the roots of $f(t)$ in $\overline{F}$ are in $\overline{F}$. The roots of $f(t)$ in $\overline{F(x)}$ are still in $\overline{F}$. (you may want to prove a theorem that says this reasoning makes sense, and implies what we want it to imply!)
Every finite example is either this one, or can be formed by iterating this one finitely many times.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
First Variation Formula I have a riemannian manifold $M$ and a smooth curve $\alpha$. I want to take a variation of $\alpha$ and apply the first variation formula of arc length but I want to know if it is possible to take the curves of the variation to be geodesics.
| It is not possible in general. For example, if you take a Hadamard manifold, which is a complete Riemannian manifold for which the exponential map is a diffeomorphism at some point $p$, there will exist just one geodesic joining two given points, hence you will not be able to take variations (with fixed endpoints) by geodesics. In some cases you will be able to: on the sphere, a geodesic joining two opposite points can be varied through geodesics.
More generally, it is not true that you can approximate a given smooth curve by a geodesic, for example in the plane $\mathbb{R}^2$: if you take a general curve joining two given points, its length can be much greater than the distance between these two points, and if you have a curve near another (for example in the $C^1$ topology), the arc lengths must be near too. On the other hand, you can approximate a given smooth curve by broken geodesics, making an argument with totally convex neighborhoods.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show: If the adjoint of T is -T, all eigenvalues are purely imaginary Homework question.
Let $V$ be a finite dimensional inner-product space over $\mathbb{C}$. Let $T \in L(V,V)$ satisfy $T^*=-T$. Show that all eigenvalues of $T$ are purely imaginary, i.e., if $\lambda$ is an eigenvalue of $T$, then $\lambda = ia$ with $a \in \mathbb{R}$.
I can recall a proof we did in class that if $T$ is self-adjoint then all eigenvalues are real, and the basic gist of it was let $\lambda$ be an eigenvalue, then lambda is equal to its complex conjugate. I understand this; it implies that if $\lambda = a+bi = a-bi$ then $b = 0$.
It follows (in my mind, at least) that I need to show that given that $T^* = -T$ that $\lambda = -\lambda$.
I'm kind of stuck here, though. I've tried adapting the proof I have for self-adjoint operators, but I'm unable to do it without having the conjugate in the result, which is not what I want. (that seems to imply that all eigenvalues are 0, I don't think that's true).
| An alternative proof of this, probably not as enlightening but I like it: if $T^* = -T$, then $iT$ is self-adjoint (why?) and so the eigenvalues of $iT$ are real. It then follows that the eigenvalues of $T$ are purely imaginary. (again, why?)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Result of the product $0.9 \times 0.99 \times 0.999 \times ...$ My question has two parts:
*
*How can I nicely define the infinite sequence $0.9,\ 0.99,\ 0.999,\ \dots$? One option would be the recursive definition below; is there a nicer way to do this? Maybe put it in a form that makes the second question easier to answer.
$$s_{i+1} = s_i + 9\cdot10^{-i-2},\ s_0 = 0.9$$
Edit: Suggested by Kirthi Raman:
$$(s_i)_{i\ge1} = 1 - 10^{-i}$$
*Once I have the sequence, what would be the limit of the infinite product below? I find the question interesting since $0.999... = 1$, so the product should converge (I think), but to what? What is the "last number" before $1$ (I know there is no such thing) that would contribute to the product?
$$\prod_{i=1}^{\infty} s_i$$
| See: "Dedekind eta function".
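The answer above points at the Euler function / Dedekind eta connection; if you just want to see numerically what the product converges to, here is a rough sketch (the cutoff of 60 factors is an arbitrary choice, made because the later factors are already extremely close to 1):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
product = Decimal(1)
for i in range(1, 61):
    product *= Decimal(1) - Decimal(10) ** (-i)

# Partial product (1 - 10^-1)(1 - 10^-2)...(1 - 10^-60); the remaining
# factors change it by less than 10^-60, so this is essentially the limit.
print(product)   # approximately 0.8900100999...
```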
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 3,
"answer_id": 1
} |
Convergence of this integral
Possible Duplicate:
Some questions about the gamma function
My statistics textbook prescribed by my school states that the integral $$\Gamma(n)=\int_{0}^{\infty}e^{-x}x^{n-1}dx$$ is convergent for $n>0$. It does not prove the statement. So can anyone please help me prove it? Thanks again!
| I assume that $n$ is a real number. Split the gamma improper integral $$\Gamma(n)=\int_{0}^{\infty}e^{-x}x^{n-1}dx\tag{0}$$ into $I_1+I_2$, where $$I_1=\int_{0}^{1}e^{-x}x^{n-1}dx\tag{1}$$
and
$$I_2=\int_{1}^{\infty}e^{-x}x^{n-1}dx\tag{2}$$
*
*To prove that the integral $I_2$ is always convergent use the fact that for any real number $\alpha $ the integral $$
\int_{1}^{\infty }e^{-x}x^{\alpha }dx\tag{3}$$ is convergent, by the limit comparison test
$$\lim_{x\rightarrow \infty }\frac{e^{-x}x^{\alpha }}{x^{-2}}=0\tag{4}$$
with the convergent integral $$\displaystyle\int_{1}^{\infty }\dfrac{dx}{x^{2}}\tag{5}.$$
*As for $I_1$ consider two cases. (a) If $n\geq 1$ observe that $e^{-x}x^{n-1}$ stays bounded as $x\rightarrow 0$ (the limit is $0$ for $n>1$ and $1$ for $n=1$), so $I_1$ is a proper integral. (b) If $0<n<1$, the integrand $e^{-x}x^{n-1}$ behaves like $x^{n-1}$ near $x=0$,
because $e^{-x}\rightarrow 1$ as $x\rightarrow 0$. Since $$\displaystyle\int_{0}^{1}\dfrac{dx}{x^{1-n}}\tag{6}$$ is convergent if and only if $1-n<1$, i.e. $n>0$, so is $I_1$.
It follows that $\Gamma(n)=I_1+I_2$ is convergent for $n>0.$
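If you want to convince yourself of the splitting argument numerically, here is a rough check with scipy (the sample values of $n$ are arbitrary; quad may warn about the integrable singularity at $0$ when $n<1$):

```python
import numpy as np
from scipy import integrate, special

def split_gamma(n):
    f = lambda x: np.exp(-x) * x ** (n - 1)
    I1, _ = integrate.quad(f, 0, 1)        # the piece near 0
    I2, _ = integrate.quad(f, 1, np.inf)   # the tail, dominated by e^{-x}
    return I1 + I2

for n in (0.5, 1, 2.5, 5):
    print(n, split_gamma(n), special.gamma(n))   # the two columns agree
```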
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
The maximum number of nodes in a binary tree of depth $k$ is $2^{k}-1$, $k \geq1$. I am confused with this statement
The maximum number of nodes in a binary tree of depth $k$ is $2^k-1$, $k \geq1$.
How come this is true. Lets say I have the following tree
1
/ \
2 3
Here the depth of the tree is 1. So according to the formula, it will be $2^1-1 = 1$. But we have $3$ nodes here. I am really confused with this depth thing.
Any clarifications?
| According to the convention behind your formula, the depth is not equal to 1 here: the depth is counted starting from 1, while the levels of the tree are counted starting from 0.
So here the depth is 2,
and you get $2^2 - 1 = 3$ nodes; here $k = 2$.
In some books you will find the depth starting from 1 and the levels of the tree starting from 0, and in other books you find it the other way round. But in the source you have taken this formula from, the depth starts from 1 (i.e. the minimum depth is 1).
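A quick way to see the convention (root at depth 1, level $i$ holding at most $2^{i-1}$ nodes) is to sum the geometric series directly; a tiny sketch:

```python
def max_nodes(depth):
    """Maximum number of nodes in a binary tree whose root is at depth 1."""
    # Level i (root = level 1) holds at most 2^(i-1) nodes,
    # so the total is 1 + 2 + 4 + ... + 2^(depth-1) = 2^depth - 1.
    return sum(2 ** (i - 1) for i in range(1, depth + 1))

for k in range(1, 6):
    print(k, max_nodes(k), 2 ** k - 1)   # the last two columns always agree
```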
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 5,
"answer_id": 4
} |
Please help with derivative question Here's my question:
A metal bar is heated to a certain temperature and then the heat source is removed. At time t minutes after the heat source is removed, the temperature, x degrees Celcius, of the metal bar is given by $x = \dfrac{280}{1+0.02t}$ At what rate is the temperature decreasing 100 minutes after the removal of the heat source?
I'm guessing the chain rule needs to be employed; as in $\dfrac{dx}{dt} = \dfrac{dx}{du}\dfrac{du}{dt}$ but couldn't figure out exactly how? Help appreciated, thanks!
| I guess that if you want to apply the chain rule then what you want to do is to write your function as
$$
x(t) = 280(1 + 0.02t)^{-1}
$$
and then its derivative can be found by applying the chain rule as follows. Following your notation you can let $u = 1 + 0.02t$ and then you have $x = 280u^{-1}$. Therefore
$$
\frac{dx}{dt} = \color{green}{\frac{dx}{du}} \color{red}{\frac{du}{dt}} = \color{green}{-280u^{-2}} \color{red}{(0.02)} = \frac{-280(0.02)}{u^2} = \frac{-5.6}{(1 + 0.02t)^2}
$$
Then with the derivative in hand I guess you can figure out the rate at which the temperature is decreasing 100 minutes after.
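If you want to check the computation, here is a small sympy sketch (not part of the original answer); the numeric value at $t=100$ is what the exercise asks for:

```python
import sympy as sp

t = sp.symbols('t')
x = 280 / (1 + sp.Rational(2, 100) * t)   # temperature as a function of time

dxdt = sp.diff(x, t)
print(sp.simplify(dxdt))   # -14000/(t + 50)**2, algebraically equal to -5.6/(1 + 0.02t)^2
print(dxdt.subs(t, 100))   # -28/45, i.e. the temperature drops at about 0.62 degrees per minute
```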
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proposition about curves in $S^2$ Let $\gamma_1,\gamma_2:(a,b)\to S^2$ be unit speed curves in $S^2=\{\vec{v}\in\mathbb{R^3}:\vec{v}\cdot\vec{v}=1\}$. Then the following two statements are equivalent:
(1) There is a $3\times 3$ orthogonal matrix $M$ such that $\gamma_2\equiv M\gamma_1.$
(2) $\beta_j:(a,b)\to \mathbb{R}, \beta_j=\det(\gamma_j,\gamma_j',\gamma_j'')$ satisfy $\beta_1\equiv \beta_2$ or $\beta_1\equiv -\beta_2.$
I saw this proposition last night, but I still don't know how to prove it. Could you help me with it? Thanks in advance.
| 1=>2 is easy, just compute (and use $|\det(M)|=1$)
$$
\det(M\gamma_1,M\gamma_1',M\gamma_1'')=\det(M)\det(\gamma_1,\gamma_1',\gamma_1'')
$$
2=>1 is not so easy
Fix $s_0\in (a,b)$.
Since $\gamma_1(s_0)$ and $\gamma_2(s_0)$ have unit length, there is a $M\in O(3)$ such that
$\gamma_2(s_0)=M\gamma_1(s_0)$. Now compare $\gamma_2$ and $M\gamma_1$
and use http://en.wikipedia.org/wiki/Frenet%E2%80%93Serret_formulas
Edit: better link: http://en.wikipedia.org/wiki/Fundamental_theorem_of_curves
Edit (sorry, this is not a full solution!): Since $\gamma(s)$ is parametrized by arclength, the tangent is
$$
T(s)=\gamma'(s)
$$
and the curvature $\kappa$ and the normal $N$ are given by
$$
\kappa(s)=|T'(s)|=|\gamma''(s)| \qquad T'(s)=\kappa(s) N(s)
$$
The Binormal is
$$
B(s)=T(s)\times N(s) \quad\text{and the Torsion $\tau$ is given by } B'(s)=\tau (s) N(s)
$$
Now $\det(\gamma,\gamma',\gamma'')=\det(\gamma,T,\kappa N)=\kappa\det(\gamma,T,N)$ fixes $\kappa$
Another hint: Differentiate $\gamma(s)^2=1$ several times to fix $\tau$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Divergent series and $p$-adics If we naïvely apply the formula $$\sum_0^\infty a^i = {1\over 1-a}$$ when $a=2$, we get the silly-seeming claim that $1+2+4+\ldots = -1$. But in the 2-adic integers, this formula is correct.
Surely this is not a coincidence? What is the connection here? Do the $p$-adics provide generally useful methods for summing divergent series? I know very little about either divergent series or $p$-adic analysis; what is a good reference for either topic?
| Qiaochu Yuan points out that it is not at all a coincidence: “The argument that justifies the convergence of geometric series applies in any complete normed field when $|a|<1$, and in the 2-adics this is true when $a=2$.” In the 2-adics, the series does indeed converge, and the only thing it can converge to is $\frac{1}{1-a}$.
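Here is a concrete way to see the 2-adic convergence (an illustrative sketch, not part of the quoted argument): the partial sums $1+2+\dots+2^{n-1}=2^n-1$ are congruent to $-1$ modulo $2^n$, so 2-adically they get arbitrarily close to $-1$:

```python
# The partial sums of 1 + 2 + 4 + ... agree with -1 modulo ever larger powers of 2,
# which is exactly what "converging to -1 in the 2-adic metric" means.
for n in (5, 10, 20):
    s = sum(2 ** i for i in range(n))     # equals 2^n - 1
    print(n, s, (s + 1) % 2 ** n)         # last column is 0, i.e. s ≡ -1 (mod 2^n)
```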
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/141971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 1
} |
consistency of large cardinal axiom It is known that if ZFC is consistent, ZFC+"no such large cardinal exists" is consistent. Then, is ZFC+"such large cardinal exists" known to be consistent? This would imply that proving large cardinal axiom is independent from ZFC...
Thanks.
| ZFC cannot prove its own consistency, this is a result due to Godel's incompleteness theorems.
If, however, $\kappa$ is an inaccessible cardinal then $V_\kappa$ which is the collection of all sets whose von Neumann rank is $<\kappa$ is a model of ZFC. Due to the completeness theorem of Godel, this implies that ZFC is consistent.
So we cannot prove the existence of an inaccessible cardinal in ZFC.
Furthermore, suppose ZFC + "there is an inaccessible cardinal" is consistent; let $\kappa$ be the least inaccessible cardinal. Then $V_\kappa$ is a model of ZFC, but there are no inaccessible cardinals in $V_\kappa$.
(Almost all large cardinals are inaccessible, and those that are not imply that there are inaccessible cardinals below them; it might be that a cardinal is only weakly inaccessible but "going down" to $L$ makes it strongly inaccessible, and we are only interested in consistency results anyway.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
example of weakly inaccessible cardinal that is not a strongly inaccessible cardinal As I study through inaccessible cardinals, I find many examples that show some cardinal being both weakly inaccessible and strongly inaccessible.
So, can anyone show me the example of weakly inaccessible cardinal that is not strongly inaccessible?
Thanks.
| Suppose that $\kappa$ is a strongly inaccessible cardinal, now force the continuum to be $\kappa^+$.
In the generic extension $\kappa$ is still a limit cardinal and still regular. However it is not a strong limit, so it is just weakly inaccessible.
Note that if we have a weakly inaccessible cardinal then in some inner model it is strongly inaccessible, so this really covers all the examples you can ask for which do not involve even stronger large cardinals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Periodical reflection conditions in a sphere. A perfect mirror covers the inside surface of a sphere (assumption: there is no loss during reflection and the reflections continue endlessly), and there is a very small laser at a point $A$ on the surface of the sphere; the laser light is directed inward and reflects from another point $B$ on the surface of the sphere.
What are all the possible mathematical conditions to get periodic reflections (passing again from $A$ to $B$)?
Thanks a lot for answers
| Without loss of generality, we can rotate such that $A$ is at the top and the original laser beam lies in the $x$-$z$ plane. Then all the reflected beams will also lie in that plane, and the reflections will all take place on a great circle. Thus the problem is in fact two-dimensional, and we're just asking how a laser beam must be pointed in a circle such that it hits the same spot again. Since all the reflections are at the same angle, the laser beam traverses the same angle $\alpha$ between any two reflections, so the answer is that it must traverse a rational multiple of $\pi$ between any two reflections. Since the angle of incidence it forms with the surface normal on the sphere is $(\pi-\alpha)/2$, an equivalent condition is that the angle of incidence must be a rational multiple of $\pi$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Graphic software Can somebody recommend me some software (preferably free) to draw a graphic like this:
I have downloaded GeoGebra, but it seems I can't use it to draw such a graphic.
| Inkscape is a general purpose open source vector graphics tool, available for all the standard platforms. I have used it for a few years. It works moderately well with LaTeX, and the price is right. Using Inkscape to produce a picture like the one you posted would be routine. Some fiddling might need to be done to get good positioning of labels.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Gradient vector function using sum and scalar Could someone take a look on my attempt to compute the gradient for:
$$f(x) = \lambda \sum_{i = 1}^n g(x_i)$$
Where $x \in \mathbb{R^d}$, $\lambda \in \mathbb{R}$ and
$$g(x_i) = \begin{cases}
x_i - \varepsilon/2 & \textbf{if } |x_i| \geq \varepsilon\\
x_i^2 / (2\varepsilon) & \textbf{if } |x_i| < \varepsilon\\
\end{cases}$$
This is what I have done so far:
The function $g(x_i)$ is not differentiable if $x = -\varepsilon$, for the rest:
$$
\frac{\partial}{\partial x_i}\sum_{i=1}^n g(x_i)=
\begin{cases}
1&|x_i|\ge\epsilon\;,\\
x_i/\epsilon&|x_i|\lt\epsilon\;.
\end{cases}
$$
For $f(x)$ I would apply the product rule:
\begin{align*}
\frac{\partial}{\partial x} f(x) &= (\frac{\partial}{\partial x} \lambda) \cdot \sum_{i = 1}^n g(x_i) + \lambda \cdot (\frac{\partial}{\partial x} \sum_{i = 1}^n g(x_i))\\
&= 0 \cdot \sum_{i = 1}^n g(x_i) + \lambda \cdot (\frac{\partial}{\partial x} \sum_{i = 1}^n g(x_i))\\
&= \lambda \cdot \frac{\partial}{\partial x_i} \sum_{i = 1}^n g(x_i)
\end{align*}
If this is correct, then my question is of which domain is then $\frac{\partial}{\partial x} f(x)$?
Either it is $\mathbb{R}^n$ or $\mathbb{R}$. I am not sure, for the fact, that this is a gradient I would say $\mathbb{R}^n$, but how are then the components of the resulting vector computed?
$$\begin{pmatrix}
???\\
???\\
\vdots\\
???
\end{pmatrix}$$
| First, $\frac{\partial f(x)}{\partial x_i} = \lambda g'(x_i)$, so the gradient is given by $\nabla f(x) = \lambda (g'(x_1),...,g'(x_n))^T$
Second, $g'(t) = 1$, when $|t|>\epsilon$, and $g'(t) = \frac{t}{\epsilon}$when $|t| < \epsilon$.
Hence the gradient is either $(\lambda,...,\lambda)^T$ if $||x||_{\infty} < \epsilon$, or $\frac{\lambda}{\epsilon} x$ if $x_i > \epsilon$, $\forall i$. For other $x$, you need to use the appropriate value depending on $x_i$.
No product rule is needed.
And $\nabla f(x) \in \mathbb{R}^n$.
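A small numpy sketch of that gradient may help; the names `grad_f`, `lam`, `eps` are mine, and the value assigned at the non-differentiable point $t=-\varepsilon$ is an arbitrary choice:

```python
import numpy as np

def grad_f(x, lam, eps):
    """Gradient of f(x) = lam * sum_i g(x_i) for the piecewise g above."""
    x = np.asarray(x, dtype=float)
    # g'(t) = 1 where |t| >= eps and t/eps where |t| < eps
    # (g is not differentiable at t = -eps; the choice made there is arbitrary).
    gprime = np.where(np.abs(x) >= eps, 1.0, x / eps)
    return lam * gprime          # a vector in R^n, one component per x_i

print(grad_f([0.1, 2.0, -3.0], lam=0.5, eps=1.0))   # [0.05, 0.5, 0.5]
```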
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interval bisection to find a root of f(x) I'm attempting to understand Interval bisection. I'm given a simple question in my textbook, and I can do the process easily, I just don't know when to stop. The question is "Use Interval bisection to find the positive root of $x^2 - 7 = 0$, correct to one decimal place" (basically find the square root of 7 to 1 dp)
This is the solution I'm given:
How is it known that it is 2.6?
The last line shows that the root is between 2.640625(from (a+b)/2) and 2.65625(from b).
2.640625 rounds to 2.6 but
2.65625 rounds to 2.7
Surely I would have to keep going until both the upper and lower limit of the interval round to 2.6?
If it's just simple truncation why didn't the solution stop on the second last line?
(this is just a simple question, so it is as if you can't just do root 7 on a calculator)
| I would share your concern that this is not complete. As David Mitra says, one further iteration is needed.
In fact since at the third line you know $$2.625^2-7 \lt 0 \lt 2.75^2-7,$$ and so you know the rounded answer is $2.6$ or $2.7$ and could save some effort by checking $$2.65^2-7=+0.0225,$$ so $2.6$ is the rounded solution.
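For completeness, here is a short bisection sketch (function and variable names are mine); it keeps halving until the interval is so small that both endpoints round to the same first decimal, which addresses the concern in the question:

```python
def bisect(f, a, b, tol=1e-3):
    """Interval bisection for a root of f in [a, b], assuming f(a)*f(b) < 0."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m          # the sign change is in [a, m]
        else:
            a = m          # the sign change is in [m, b]
    return a, b

lo, hi = bisect(lambda x: x ** 2 - 7, 2, 3)
print(lo, hi)                      # both close to sqrt(7) = 2.6457...
print(round(lo, 1), round(hi, 1))  # both endpoints now round to 2.6
```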
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove that [ContextFreeLanguage - RegularLanguage] is always a context free language, but the opposite is false Let L be a context-free language and R a regular language. Show that L-R is always context-free, but R-L is not. (Hint: try to connect both automata.)
The above hint did not help me :(
| The harder part is showing that $L\setminus R$ is always context-free. The hint actually is useful, once you figure out what to do with it. Let me point you in the right direction by talking about a simpler situation. Suppose that you have finite-state automata $M_1$ and $M_2$ for two regular languages, with state sets $S_1$ and $S_2$, respectively. Make a new FSA $M$ whose state set is $S_1\times S_2$; in $M$ there will be a transition from a state $\langle s_1,s_2\rangle$ to $\langle s_1',s_2'\rangle$ on input $\alpha$ iff $M_1$ has a transition $s_1\stackrel{\alpha}\longrightarrow s_1'$ and $M_2$ has a transition $s_2\stackrel{\alpha}\longrightarrow s_2'$. In essence, $M$ runs $M_1$ and $M_2$ simultaneously on the same input. That makes it easy to adjust the acceptor states of $M$ so that it accepts exactly those words that are accepted by $M_1$ but not by $M_2$; I'll leave the details for you to think about. (You'll also have to figure out what the initial state should be, but that's very easy once you get the idea of how $M$ works.) Then you just have to modify the idea so that one of the automata is a pushdown automaton.
The second part is much easier. Two hints:
*
*If $\Sigma$ is the alphabet, is $\Sigma^*$ a regular language?
*Is the class of context-free languages closed under complementation?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Always a prime between $x$ and $x+cf(x)$ What is the asymptotically slowest growing function $f(x)$, such that there exists constants $a$ and $b$, such that for all $x>a$, there is always a prime between $x$ and $x+bf(x)$?
$f(x)=x$ works, does $\sqrt{x}$ work, $\log(x)$, or $\log\log(x)$?
| What you are looking at is an upper bound for prime gaps. Bertrand's postulate states that there is always a prime between $x$ and $2x$, but this has been improved significantly. The most recent result, due to Baker, Harman and Pintz, states that $$\pi\left(x+x^{0.525}\right)
-\pi(x)\gg \frac{x^{0.525}}{\log x}.$$ This means that for sufficiently large $x$, there is always a prime between $x$ and $x+x^{0.525}$. This implies for example that there is always a prime between consecutive cubes, that is there is always a prime in the interval $(x^3,(x+1)^3)$.
As for the expected result, the Wikipedia article covers this as well, see Conjectures about prime gaps. In particular, the Riemann Hypothesis tells us that for any $\epsilon$, we will have a prime in the interval $(x,x+x^{\frac{1}{2}+\epsilon})$ for sufficiently large $x$. Cramer made the much stronger conjecture that there is always be a prime between $x$ and $x+\log^2x$ for sufficiently large $x$.
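The results above are asymptotic ("for sufficiently large $x$"), so the following is only an illustration, but it is easy to spot-check the interval $(x, x+x^{0.525}]$ for a few values of $x$ with sympy:

```python
import sympy

# For each sample x, the next prime should fall inside (x, x + x^0.525].
for x in (10 ** 3, 10 ** 6, 10 ** 9):
    p = sympy.nextprime(x)
    print(x, p, p - x, round(x ** 0.525, 1), p - x < x ** 0.525)
```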
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A sequence of measurable sets I want to find a sequence of measurable sets $A_k$.
such that $A_k \subset [0,1]$, $\lim \lambda(A_k) =1$, but $\liminf A_k = \varnothing$.
There are some examples with functions such as ${\sin x \over x}$, but I can't see how to apply them to a sequence of sets $A_k$.
Please give me a simple example.
| One technique you can use is a "running window". The sequence of intervals $I_1=[0,1/2),I_2=[1/2,1],I_3=[0,1/3),I_4=[1/3,2/3),I_5=[2/3, 1],I_6=[0,1/4),\ldots$ has the property that their measure goes to $0$ but every element of $[0,1]$ is in infinitely many of these intervals. So you can simply let $A_k=[0,1]\backslash I_k$.
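A short sketch of the "running window" (just to make the bookkeeping concrete; the generator below uses half-open intervals throughout, a harmless variation on the list above):

```python
from fractions import Fraction

def windows():
    """Yield I_1=[0,1/2), I_2=[1/2,1), I_3=[0,1/3), I_4=[1/3,2/3), ..."""
    n = 2
    while True:
        for j in range(n):
            yield Fraction(j, n), Fraction(j + 1, n)
        n += 1

gen = windows()
for k in range(1, 10):
    a, b = next(gen)
    print(k, (a, b), float(b - a))   # lengths 1/2, 1/2, 1/3, 1/3, 1/3, 1/4, ...
```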
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there any good example about Lie algebra homomorphisms? My textbook gave an example of the trace, but I think to get a better comprehension, more examples are still needed.
Any example will be helpful ~
| Let $V_k$ be the space of homogeneous polynomials of degree $k$ on the unknowns $x_1,x_2,\ldots,x_n$. The differential operators $a_{ij}=x_i\dfrac{\partial}{\partial x_j}$ act on the space $V_k$. It is an easy exercise to show that
$$
[a_{ij},a_{kl}]=\delta_{jk}a_{il}-\delta_{il}a_{kj}.
$$
Therefore the mapping $e_{ij}\mapsto a_{ij}$ is a homomorphism of Lie algebras from $gl_n$ to $gl(V_k)$.
Addendum: Here $\delta_{ij}$ is the Kronecker delta, and by $e_{ij}$ I mean the matrix with 1 at the intersection of row $i$ and column $j$ and zeros elsewhere. This is, of course, an example of a simple representation of $gl_n$, but I hope it will be helpful also, when you reach that point in your studies.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Understanding Fatou's lemma I want to prove that
(without using Fatou's lemma)
for every $k \in N$ let $f_k$ be a nonnegative sequence $f_k(1),f_k(2),\ldots$
$$\sum^\infty_{n=1}\liminf_{k \to \infty} f_k(n) \le \liminf_{k \to \infty} \sum^\infty_{n=1}f_k(n)$$
Can you give me some hint about that?
| Hints:
*
*Fix an integer $N$, and show that $$\sum_{n=1}^N\liminf_{k\to +\infty}f_k(n)\leq \liminf_{k\to +\infty}\sum_{n=1}^Nf_k(n).$$
*Show that $\sum_{n=1}^Nf_k(n)\leq \sum_{n=1}^{+\infty}f_k(n)$.
*Conclude, still using that all the terms are non-negative.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Every embedded hypersurface is locally a regular surface? I'm reading do Carmo's Riemannian Geometry; in ex. 6.11 d) he wrote that "every embedded hypersurface $M^{n} \subset \bar{M}^{n+1}$ is locally the inverse image of a regular value". Could anyone comment on how to show this?
To be more specific, let $\bar{M}^{n+1}$ be an $n+1$ dimensional Riemannian manifold, let $M^{n}$ be some $n$ dimensional embedded submanifold of $\bar{M}$, then locally we have that $M=f^{-1}(r)$, where $f: \bar{M}^{n+1} \rightarrow \mathbb{R}$ is a differentiable function and $r$ is a regular value of $f$.
Thank you very much!
| Seems like you should be able to do this globally.
Define $f: \overline M \to \mathbb R$ by
$$f(x) = \textrm{dist}(x, M)$$
Then the function $\phi:\mathbb R \times M \to \overline M$ given by
$$(t,p) \mapsto \exp_p(t\nabla f)$$
is, for some small $t$, a diffeomorphism (this follows from existence and uniqueness results for ODEs since by definition $\exp_p$ gives the geodesic from $p$ in the normal direction $\nabla f$) and therefore $0$ is a regular value for $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to sketch $y=2\cos\,2x+3\sin\,2x$ , $x$ for $[-\pi,\pi]$.
Use addition of ordinates to sketch the graph of $y=2\cos\,2x+3\sin\,2x$ for $x \in [-\pi,\pi]$.
I know that there will be three lines in the graph; the example shows values such as
$x=0$, $x=\frac{\pi}{4}$, $x=\frac{\pi}{2}$ and so on, but I have no clue how to do it. Can you please explain it step by step, so that I'll be able to do other questions.
Answer look like this.
Thanks.!
| I assume you meant $\theta \in [-\pi,\pi]$, rather than $x$.
You might notice that $y = \sqrt{13} (\frac{2}{\sqrt{13}} \cos 2 \theta+\frac{3}{\sqrt{13}} \sin 2 \theta)$. Let $\alpha$ be the angle such that $\sin(\alpha) = \frac{2}{\sqrt{13}}$, and $\cos(\alpha) = \frac{3}{\sqrt{13}}$ (you should convince yourself that such an angle exists). Then the angle-addition formula for $\sin$ gives:
$$y = \sqrt{13} (\sin\alpha \cos 2 \theta+\cos\alpha \sin 2 \theta) = \sqrt{13} \sin(\alpha+2\theta).$$
You can compute $\alpha \approx 33.69^{\circ}$, plotting the function should be straightforward after that.
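If you want to double-check the identity numerically before sketching (not part of the original answer), a tiny numpy check:

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 1000)
alpha = np.arctan2(2, 3)     # the angle with sin(alpha)=2/sqrt(13), cos(alpha)=3/sqrt(13)

lhs = 2 * np.cos(2 * theta) + 3 * np.sin(2 * theta)
rhs = np.sqrt(13) * np.sin(2 * theta + alpha)
print(np.allclose(lhs, rhs))    # True
print(np.degrees(alpha))        # about 33.69 degrees
```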
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Definition of Separability Degree For an assignment, I am trying to determine the separability degree of some algebraic field extension $L/K$. The definition of the separability degree of polynomial is not difficult to find at all, namely it is the degree of the unique irreducible, separable polynomial we can associate with any polynomial. As of yet, I have been unable to find the definition of the separability degree of a field extension.
Could someone give this definition or point me in the right direction to a definition?
Based on the fact that if $L$ is the splitting field for $K$, then $|Aut(L/K)|\leq [L:K]$ with equality if and only if $L$ is separable over $K$, I am tempted to guess that the separability degree of $L/K$ has something to do with $Aut(L/K)$. Is this justified?
| The definition can be found in this Wikipedia page, in the paragraph "Separable extensions within Algebraic extensions". I will synthesize it here.
Given an algebraic extension $L/K$ we consider the field:
$S=\{ \alpha\in L : \alpha \; \text{is separable over} \;K\}$
It is clearly an algebraic (separable) extension of $K$, and the separable degree of $L/K$ is simply $[S:K]$, the degree of the field extension $S/K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/142979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Can the product of non-zero ideals in a unital ring be zero? Let $R$ be a ring with unity and $0\neq I,J\lhd R.$ Can it be that $IJ=0?$
It is possible in rings without unity. Let $A$ be a nontrivial abelian group made into a ring by defining a zero multiplication on it. Then any subgroup $S$ of $A$ is an ideal, because for any $a\in A,$ $s\in S,$ we have $$sa=as=0\in S.$$ Then if $S,T$ are nontrivial subgroups of $A,$ we have $ST=0.$
A non-trivial ring with unity cannot have zero multiplication, so this example doesn't work in this case. So perhaps there is no example? I believe there should be, but I can't find one. If there isn't one, is it possible in non-unital rings with non-zero multiplication?
| Even simpler: Take any direct product of two nonzero rings $R\oplus S$. Then $R\oplus 0$ and $0\oplus S$ are nonzero ideals multiplying to zero.
One definition for prime ring is a ring in which this does not happen, so any non-prime ring will suffice.
Also, if you believe in nonzero nilpotent ideals, then you would easily find an example: If $I^n=\{0\}$ with $n$ minimal, then $I I^{n-1}=\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Coproduct in the category of (noncommutative) associative algebras For the case of commutative algebras, I know that the coproduct is given by the tensor product, but how is the situation in the general case? (for associative, but not necessarily commutative algebras over a ring $A$). Does the coproduct even exist in general or if not, when does it exist? If it helps, we may assume that $A$ itself is commutative.
I guess the construction would be something akin to the construction of the free products of groups in group theory, but it would be nice to see some more explicit details (but maybe that would be very messy?) I did not have much luck in finding information about it on the web anyway.
| Quillen and Cuntz describe this in the book "Cyclic Homology in Non-commutative Geometry" on the first page, without proof.
The construction goes as follows:
$A\star B \cong A \oplus B \oplus (A \otimes B) \oplus (B \otimes A) \oplus (A \otimes A \otimes B) \oplus (A \otimes B \otimes A) \oplus (B \otimes A \otimes A) \oplus ...$ (and keeps cycling though tensors).
It's a construction known as the free product of associative algebras. Note, though, that there should be no canonical embedding of $A$ (or $B$) into any of the tensor products above, since any arrow $A \rightarrow A \otimes B$ cannot be free of relations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 3
} |
What is the method to compute $\binom{n}{r}$ in a recursive manner? How do you solve this?
Find out which recurrence relation involving $\dbinom{n}{r}$ is valid, and thus prove that we can compute $\dbinom{n}{r}$ in a recursive manner.
I appreciate any help. Thank You
| To prove:
$$\dbinom{n}{k}=\dbinom{n-1}{k-1}+\dbinom{n-1}{k}$$
We have:
$$\dbinom{n-1}{k-1}=\frac{(n-1)!}{(k-1)!(n-k)!}=\frac{(n-1)!}{(k-1)!(n-k-1)!}\frac{1}{n-k}$$
$$\dbinom{n-1}{k}=\frac{(n-1)!}{(k)!(n-k-1)!}=\frac{(n-1)!}{(k-1)!(n-k-1)!}\frac{1}{k}$$
Adding the two together:
$$\dbinom{n-1}{k-1}+\dbinom{n-1}{k}=\frac{(n-1)!}{(k-1)!(n-k-1)!}(\frac{1}{n-k}+\frac{1}{k})$$
$$=\frac{(n-1)!}{(k-1)!(n-k-1)!}\frac{n}{k(n-k)}$$
$$=\frac{n!}{k!(n-k)!}$$
$$=\dbinom{n}{k}$$
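Since the question also asks how to compute $\binom{n}{k}$ recursively, here is a minimal sketch using exactly this recurrence (the memoisation via lru_cache is just to avoid recomputing subproblems):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    """C(n, k) computed recursively via Pascal's rule."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print(binom(10, 3))   # 120
```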
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Show that $\lim_{R \to{+}\infty}{I_{R}}= 0$.
Consider $$\displaystyle I_{R}=\int_{C_{R}}^{} \frac{e^{iz}}{z^{2}}\, dz,$$ where $C_{R}$ is the semicircle with radius R in the upper half plane with endpoints $(-R,0)$ and $(R,0)$ $(C_{R}$ is open, it does not include the x axis). Show that $$\displaystyle \lim_{R \to{+}\infty}{I_{R}}= 0.$$
Could someone help me through this problem?
On the semicircle $C_R$, we have $z = R e^{i \theta}$. Now try to bound the integrand $\displaystyle \frac{e^{iz}}{z^2}$ using a function decaying faster than $\displaystyle \frac1R$ on this circle. Then note that $$\displaystyle \left \lvert \int_{C_R} \frac{e^{iz}}{z^2} dz \right \rvert \leq \left \lvert \frac{e^{iz}}{z^2} \right \rvert_{\max \text{ on } C_R} \times \text{length of }C_R$$ and let $R \rightarrow \infty$. A detailed solution follows.
On the semicircle $C_R$, we have $z = R e^{i \theta}$. Hence, the integrand is$$\displaystyle \frac{e^{iz}}{z^2} = \frac{e^{iR(\cos(\theta) + i \sin(\theta))}}{R^2 e^{2i \theta}}.$$
Hence, $$\displaystyle \left \lvert \frac{e^{iz}}{z^2} \right \rvert = \left \lvert \frac{e^{iR(\cos(\theta) + i \sin(\theta))}}{R^2 e^{2i \theta}} \right \rvert = \left \lvert \frac{e^{-R \sin(\theta) + iR\cos(\theta)}}{R^2 e^{2i \theta}} \right \rvert = \frac{e^{-R \sin(\theta)}}{R^2}.$$ Note that $R \sin(\theta) > 0$ since $\theta \in \left (0,\pi \right )$. Hence, we get that $\displaystyle \frac{e^{-R \sin(\theta)}}{R^2} < \frac1{R^2}$.
Now we can bound the integral as shown below (the semicircle $C_R$ has length $\pi R$): $$\displaystyle \left \lvert I_R \right \rvert = \left \lvert \int_{C_R} \frac{e^{iz}}{z^2} dz \right \rvert \leq \left \lvert \frac{e^{iz}}{z^2} \right \rvert_{\max \text{ on } C_R} \times \text{length of }C_R < \frac{1}{R^2} \cdot \pi R = \frac{\pi}{R}.$$ Now take the limit as $R \rightarrow \infty$ to get that $I_R \rightarrow 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can one solve the equation $\sqrt{x\sqrt{x} - x} = 1-x$? $$\sqrt{x\sqrt{x} - x} = 1-x$$
I know the solution but have no idea how to solve it analytically.
| For the left hand side to be defined, you need $x\geq 1$ or $x=0$. Zero is not a solution. The left hand side will equal 0 for $x=1$ and will be strictly positive if $x>1$.
For $x\geq 1$, the right hand side is equal to 0 if $x=1$ and will be strictly negative if $x>1$. This shows that the only real solution is $x=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
How do I accurately count the integers(1-1000) that are not divisible by 3,4,5,6? I have the general algorithm here that my teacher gave us( see full at http://i.imgur.com/pbzQb.png) )
To count we just divide, correct?
like - 1000/3 = 333 ?
What is the sigma notation used here (it's for the principle of inclusion–exclusion, I think)? Also, I am wondering about the boundary cases: near 1000.
Update: I am close I think i can do this :)
| $\sum_{i=1}^4 N_i$ is shorthand for $N_1+N_2+N_3+N_4$.
Similarly, $\sum_{i,j=1}^4N_{ij}$ means write down all the numbers of the form $N_{ij}$ where $i$ and $j$ take on the values from 1 to 4, and add them up. However, that's not really what you want: you really want $\sum_{1\le i\lt j\le4}N_{ij}$, which is $N_{12}+N_{13}+N_{14}+N_{23}+N_{24}+N_{34}$.
EDIT: Actually, it appears from your $N_{3,4,5,6}$ that what you really want is for your sums to go from 3 to 6, not from 1 to 4, e.g., you want $\sum_{i=3}^6N_i=N_3+N_4+N_5+N_6$.
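To tie the notation to the actual count, here is a brute-force check against inclusion–exclusion (requires Python 3.9+ for math.lcm; the divisor list and N mirror the question):

```python
from math import lcm
from itertools import combinations

N, divisors = 1000, (3, 4, 5, 6)

# Brute force: integers in 1..N divisible by none of the divisors.
brute = sum(1 for n in range(1, N + 1) if all(n % d for d in divisors))

# Inclusion-exclusion over the nonempty subsets of the divisors.
divisible = 0
for r in range(1, len(divisors) + 1):
    for subset in combinations(divisors, r):
        divisible += (-1) ** (r + 1) * (N // lcm(*subset))

print(brute, N - divisible)   # the two counts agree
```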
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does composition affect eigendecomposition? What relationship is there between the eigenvalues and vectors of linear operator $T$ and the composition $A T$ or $T A$? I'm also interested in analogous results for SVD.
| Denote the eigenvalues of $T,A,TA$ by $\lambda_1,\lambda_2, \ldots, \lambda_n,
\mu_1,\mu_2, \ldots, \mu_n, \nu_1,\nu_2, \ldots, \nu_n$, respectively. What relations exist between them? There are several easy constraints:
(1) We must have $\prod_{k=1}^{n} \lambda_k\mu_k=\prod_{k=1}^{n}\nu_k$ because ${\sf det} (TA)={\sf det} (T) {\sf det} (A)$.
(2) We must have $k_{\nu} \geq {\sf max }(k_{\lambda},k_{\mu})$, where we denote by $k_{\lambda},(k_{\mu},k_{\nu})$ the number of indices $i$ such that $\lambda_i$ (or $\mu_i,\nu_i$ respectively), is zero (this is because ${\sf rank}(AT) \leq {\sf min}({\sf rank}(A),{\sf rank}(T))$ and ${\sf rank}(T)=n-k_{\lambda}$).
(3) If one of $A,T$ or $AT$ is a homothety, then $\lbrace \nu_{k} \rbrace_{1 \leq k \leq n}=\lbrace \lambda_k\mu_k \rbrace_{1 \leq k \leq n}$.
I think there are no other constraints besides the three above.
I can actually prove this for a "generic enough" example : assume $n=2$, and $\lambda_1\mu_1\lambda_2\mu_2=\nu_1\nu_2$ (to ensure (1)), and $\lambda_1 \neq \lambda_2, \mu_1 \neq \mu_2, \nu_1 \neq \nu_2$ (to ensure (3)).
Then consider the two matrices
$$
P=\bigg(
\begin{matrix}
(\nu_1-\lambda_1\mu_2) & (\nu_1-\lambda_1\mu_1)\\
(\nu_1-\lambda_2\mu_2) & (\nu_1-\lambda_2\mu_1)\\
\end{matrix}
\bigg),
$$
$$
Q=\bigg(
\begin{matrix}
(\nu_1-\lambda_2\mu_1)(\nu_1-\lambda_2\mu_2) & -(\nu_1-\lambda_1\mu_1)(\nu_1-\lambda_1\mu_2) \\
(\nu_2-\lambda_2\mu_1)(\nu_1-\lambda_2\mu_2) & -(\nu_2-\lambda_1\mu_1)(\nu_1-\lambda_1\mu_2) \\
\end{matrix}
\bigg)
$$
We have
$$
{\sf det}(P)=\nu_2(\lambda_2-\lambda_1)(\mu_2-\mu_1),
$$
$$
{\sf det}(Q)=\mu_1(\lambda_2-\lambda_1)(\nu_2-\nu_1)(\nu_1-\lambda_1\mu_2)(\nu_1-\lambda_2\mu_2)
$$
so that $P$ and $Q$ are both inversible whenever $\mu_1\neq 0,\nu_2\neq 0, \nu_1 \not\in \lbrace \lambda_1\mu_1, \lambda_1\mu_2 \rbrace$. If we put
$$
L={\sf diag}(\lambda_1,\lambda_2), M={\sf diag}(\mu_1,\mu_2), N={\sf diag}(\nu_1,\nu_2)
$$
then $QLPM=NQP$ (GP can check this), so that $LPMP^{-1}=Q^{-1}NQ$. If we put $T=L$, $A=PMP^{-1}$, then $T$ has eigenvalues $\lambda_1,\lambda_2$, $A$ has eigenvalues $\mu_1,\mu_2$, $TA$ has eigenvalues $\nu_1,\nu_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
rank one update Given a matrix $X$, we can compute its matrix exponential $e^X$. Now one entry of $X$ (say $x_{i,j}$) is changed to $b$, the updated matrix is denoted by $X'$. My problem is how to compute $e^{X'}$ from $e^X$ in a fast way?
PS: I know that if our goal is to calculate the matrix inversion (not matrix exponential), we can use Sherman–Morrison formula to compute ${X'}^{-1}$ from $X^{-1}$ easily, but currently I have not found a way to deal with the matrix exponential. Hope you can give me a hand. Thanks!
| To expand on my comment, observe that
$$ vw^* A^n vw^* = \left( w^* A^n v \right) vw^* $$
so in any product of $A$'s and $vw^*$'s with more than one copy of $vw^*$, we can convert the middle part to a scalar and extract it.
Applying this and grouping like terms gives the formula
$$\begin{align}
(A + vw^*)^n = A^n + \sum_{i=0}^{n-1} A^i v w^* A^{n-1-i}
+ \sum_{i=0}^{n-2} \sum_{j=0}^{n-2-i} A^i v w^* A^j \left( w^* (A + vw^*)^{n-2-i-j} v \right)
\end{align}$$
Summing this to get $\exp(A + vw^*)$, the second term yields
$$ \sum_{n=1}^{+\infty} \frac{1}{n!} \sum_{i=0}^{n-1} A^i v w^* A^{n-1-i}
= \sum_{i=0}^{+\infty} A^i v w^* \sum_{n=i+1}^{+\infty} \frac{1}{n!}A^{n-1-i} $$
I'm not particularly inclined to deal with truncated exponentials of $A$. :( The third term also involves truncated exponentials of $A + vw^*$.
The path forward with this idea is not clear. I only see two ideas, and both promise to be irritating:
*
*Try to come up with a simplified formula for all truncated exponentials, hoping the complicated terms cancel or otherwise collect together or have a nice recursion
*Use combinatorics to further simplify $w^* (A + vw^*)^{n-2-i-j} v$ and hope something nice falls out.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finding $\lim\limits_{x\to 0}\frac{\sin{3x}}{x}$ I am trying to find the limit of
$$\lim_{x\to 0}\frac{\sin{3x}}{x}$$
I have no idea what I am supposed to do. I know the identity that,
$$\lim_{x\to0}\frac{\sin{x}}{x} = 1$$
but that will not be good enough on a test and I am not sure why that is true anyways. I do not know how I am supposed to proceed with this problem.
| In general,
$$\lim_{x\to 0}\frac{\sin{Ax}}{x} = A$$
Rewriting $\lim_{x\to 0}\frac{\sin{Ax}}{x}$ as
$$ A\lim_{x\to 0}\frac{\sin{Ax}}{Ax}$$ (which is legal since an $A$ term would cancel out from the denominator leaving us our original.)
Letting a variable, say, $s = Ax$, we have:
$$A\lim_{x\to 0}\frac{\sin{s}}{s}$$
From here, note that as $x$ goes to $0$, so does $s$. Using the well-known fact that
$$\lim_{x\to 0}\frac{\sin{x}}{x} = 1$$
We have $$A\cdot1$$ which concludes that $$\lim_{x\to 0}\frac{\sin{Ax}}{x} = A$$
So, your limit is $3.$
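If you ever want to sanity-check such a limit symbolically, sympy can do it in one line (purely a verification, not a substitute for the argument above):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit(sp.sin(3 * x) / x, x, 0))   # 3
```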
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
LFP - shortest path problem Curious question:
Can anyone show me how to describe shortest path problem using LFP + first order logic?
I am just getting lost on how to describe the problem, though I know that LFP + first-order logic matches to the complexity class $P$.
Thanks.
| It is only known to match $\mathsf{P}$ on ordered structures. On unordered structures the language is quite weak (and IIRC provably cannot even count, e.g. cannot state that two vertices have the same number of neighbors).
In the presence of order it is not difficult to solve the problems: we can use a relation to encode the computation of an arbitrary polynomial-time machine (and this can be expressed in $\mathsf{FO}$, which corresponds to $\mathsf{AC^0}$) and then use $\mathsf{LFP}$ to find the relation that satisfies this formula, but usually there is a simpler way.
I am not sure what you mean by "shortest path problem" exactly. I interpret it as
$$\{\langle E,s,t,k \rangle \mid \text{ the length of the shortest path from $s$ to $t$ in $G$ is $k$ }\}$$
In presence of order, we can express this as follows:
Define $\varphi(E,R)$ as
$$\forall x,y,i \ \left[R(x,y,i) \leftrightarrow
\left((0=i \land x=y) \lor (0<i \land \exists z \, (E(x,z) \land R(z,y,i-1)))\right)\right]$$
Then
$\mathsf{LFP}_R(\varphi(R,E))(s,t,k)$ is true if there is a path of length at most $k$ from $s$ to $t$. The shortest path problem is then given by
$$\mathsf{LFP}_R(\varphi(R,E))(s,t,k) \land \lnot \mathsf{LFP}_R(\varphi(R,E))(s,t,k-1)$$
Other versions of the shortest path problem can be solved in a similar fashion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
There exists a unique function $u\in C^0[-a,a]$ which satisfies this property The problem:
Let $a>0$ and let $g\in C^0([-a,a])$. Prove that there exists a unique function $u\in C^0([-a,a])$ such that $$u(x)=\frac x2u\left(\frac x2\right)+g(x),$$ for all $x\in[-a,a]$.
My attempt At first sight I thought to approach this problem as a fixed point problem from $C^0([-a,a])$ to $C^0([-2a,2a])$, which are both Banach spaces if equipped with the maximum norm. However I needed to define a contraction, because as it stands it is not clear whether my operator $$(Tu)(x)=\frac x2u\left(\frac x2\right)+g(x)$$ is a contraction or not. Therefore I tried to slightly modify the operator and I picked a $c>a>0$ and defined $$T_cu=\frac 1cTu.$$ $T_cu$ is in fact a contraction, hence by the contraction lemma I have the existence and the uniqueness of a function $u_c\in C^0([-a,a])$, which is a fixed point for $T_cu.$ Clearly this is not what I wanted and it seems difficult to me to finish using this approach. Am I right? Is all that I have done useless? And if this were the case, how to solve this problem?
Thanks in advance.
| As noted already, your approach works for $a<2$.
Now suppose that $a\geq 2$. We will first construct $u$ on [-1,1] and then double the width of this interval iteratively. Given a function $g$, the equation
$$\tag{$\ast$} u(x) = \frac{x}2 u\left(\frac{x}2\right) + g(x)$$
uniquely determines a function $u_0$ on [-1,1]. Now define $u_1$ by
$$u_1(x)=\begin{cases} u_0(x) &\text{if $x\in[-1,1]$} \\ \frac{x}2 u_0\left(\frac{x}2\right) + g(x) &\text{otherwise}.\end{cases}$$
We now have a function $u_1$ on [-2,2] satisfying $(\ast)$. As long as $2^{k+1}<a$, continue this way defining
$$u_{k+1}(x) = \begin{cases} u_{k}(x) &\text{if $x\in[-2^{k},2^{k}]$} \\ \frac{x}2 u_{k}\left(\frac{x}2\right) + g(x) &\text{otherwise}.\end{cases}$$
At a certain point, $2^{k+1}\geq a$. Define $u_{k+1}$ only on $[-a,a]$, then stop. Note that all the $u_k$ are continuous.
It is clear that the last function you constructed is the unique solution to the problem (solution by construction, uniqueness because $u_0$ is unique and because $(\ast)$ forces all the other function values).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
$C^1(\mathbb{R}^n)$ implies locally Lipschitz in $\mathbb{R}^n$ According to wikipedia a function $f\colon \mathbb{R}^n\to\mathbb{R}^n$ that is continuously differentiable, is also locally Lipschitz.
I there someone who knows a good reference which contains a proof of this statement?
| The proof on $\mathbb{R}^n$ is fairly straightforward.
Choose some ball $B(\hat{x},\epsilon)$. The closure is compact, so the derivative $\frac{\partial f}{\partial x}$ is bounded by some $L$ on the ball. Now suppose $x,y \in B(\hat{x},\epsilon)$, then using Taylor's formula, we have:
$$f(x)-f(y) = \int_0^1 \frac{\partial f (y+t(x-y))}{\partial x}(x-y)\;dt.$$
Hence we can get the bound:
$$\|f(x)-f(y) \| \leq \int_0^1 \|\frac{\partial f (y+t(x-y))}{\partial x}\|\; \|(x-y)\| \;dt \leq L \|x-y\|.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
least squares regression in 3space robjohn is giving me a hand with this, but in case anybody else knows...
I need to do a least-squares regression for linearity on a set of coordinates in 3space. If the dataset is linear, I need to see if it is close to vertical or horizontal. How could I do this?
Many thanks in advance
Joe Stavitsky
| Typically, vertical would say all the $(x,y)$ coordinates are the same and horizontal would say all the $z$ coordinates are the same. So you could just look at the standard deviations of the coordinates of all the points to assess vertical or horizontal. That doesn't check if the points lie on an arbitrary line.
The discussion you cite was indeed to give a plane-that was the hypothesis. If you find a relation like $ax+by+cz=k$ you get a plane, as one equation reduces the dimension of the space by one. If you believe your points lie on a line in 3-space, you have two options. A standard linear regression (where one coordinate is fixed and you minimize the sum squared error in the other direction) may well work for you, and you can just do two regressions, one $x$ vs $y$ and another $x$ vs $z$. If the equations you get are $y=m_yx+b_y, z=m_zx+b_z$, the line is then $(0,b_y,b_z)+k(1,m_y,m_z)$. If you want orthogonal distance regression, I believe (but didn't follow the derivation to confirm) the link you have is applicable, but you want the line to be in the direction of the maximum singular value, still through the centroid.
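Here is a rough numpy sketch of the two-regression idea with some made-up noisy data (the true line and noise level are arbitrary choices):

```python
import numpy as np

# Noisy points near the line (0, 1, 2) + k*(1, 0.5, -0.3).
k = np.linspace(0, 10, 50)
x = k
y = 1 + 0.5 * k + 0.01 * np.random.randn(50)
z = 2 - 0.3 * k + 0.01 * np.random.randn(50)

# Two ordinary regressions, with x as the independent coordinate.
m_y, b_y = np.polyfit(x, y, 1)
m_z, b_z = np.polyfit(x, z, 1)

# Recovered line: (0, b_y, b_z) + k*(1, m_y, m_z), as in the answer above.
print((b_y, b_z), (m_y, m_z))
```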
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
List Proper Primitive Groups of a certain degree by GAP As J.D.Dixon noted in his great books, there are just 5 proper primitive groups of degree 8, $P(8)=5$. I wanted to examine it with GAP, so wrote the following small program:
> G:=[];;
> for k in [1..7] do G[k]:=PrimitiveGroup( 8,k ); od;
> G[1]; G[2]; G[3]; G[4]; G[5]; G[6]; G[7];
Clearly, since $S_8$ and $A_8$ are not proper, it works as we want. My question is how to "list all groups", for example as I called them above, without writing the third line of the program? I examined some List commands in GAP, but couldn't list the groups other than in the form I did above ( G[1]; G[2]; G[3]; G[4]; G[5]; G[6]; G[7]; ). Thanks for any help.
| Thanks to m_l, this has been answered in the comments. I'm posting a formal answer in order to clean up unanswered questions with the gap tag in view of "http://meta.math.stackexchange.com/questions/1559/dealing-with-answers-in-comments?" discussion on Meta.
I will try to cover a bit wider topic for a reader who will discover this page in a search for an answer on a similar question. GAP contains several Data Libraries listed on this page. In GAP 4.6, Group Libraries are covered in Chapter 50 of the Reference Manual (chapter numbering may change in future versions), and this includes the Small Groups Library, Primitive and Transtive Permutation Groups libraries, and more. From the GAP command line this chapter may be viewed by entering ?Group Libraries. Look there for functions like AllLibraryGroups and OneLibraryGroup, where "Library" denotes the appropriate library.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Exponential group? Considering a suitable set of numbers, you can construct a group using addition, and you can construct another group using multiplication. My question: Can you construct a group using exponentiation?
*
*$(\mathbb{R}, +)$ is a group.
*$(\mathbb{R} \backslash 0, \times)$ is another group.
*Does there exist some $S \subseteq \mathbb{R}$ such that $(S, \uparrow)$ is a group?
(Here $x \uparrow y = x^y$.)
I'm going to guess "no", since exponentiation is asymmetric and hence needs two inverses (roots and logarithms). So maybe you can have some other group-like structure? (But which one?)
| This isn't quite what you want, but it's a nice example and deserves to be widely known: the reals exceeding 1 form a (commutative!) group under the operation given by $a*b=a^{\log b}$.
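A throwaway numerical check of the group axioms for this operation (just illustration; `star` is my name for it):

```python
import math

def star(a, b):
    return a ** math.log(b)          # equivalently exp(ln(a) * ln(b))

a, b, c = 2.0, 5.0, 11.0
print(math.isclose(star(star(a, b), c), star(a, star(b, c))))  # associative
print(star(a, math.e), star(math.e, a))                        # e acts as the identity
print(star(a, math.e ** (1 / math.log(a))))                    # the inverse of a returns e
```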
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/143918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to find the order of elliptic curve over finite field extension? I want to find the order of elliptic curve over the finite field extension $\mathbb{F}_{p^2}$, where $E(\mathbb{F}_{p^2}):y^2=x^3+ax+b $
I am using the method illustrated by John J. McGee in his thesis 2006. Where $\#E(\mathbb{F}_{p^n})=p^n+1-(s_{n})$, with, $s_0=2$, $s_1=t$ and $s_{n+1}=t s_n - ps_{n-1}$.
Finding $t$ is easy by using Weil's theorem, where $\#E(\mathbb{F}_p)=p+1-t.$
McGee had put $s_0=2$, but he did not say why, nor did he give a reference. Therefore my question is: What is the condition that determines $s_0$? Is it supposed to be $2$ in all cases, and why? I am asking this because I worked on a few examples where I found that the number of points does not match the order obtained when $s_0=2$.
It is worth saying that the method I am using to find the points of $E(\mathbb{F}_{p^2})$ is the same method used to find the points of an elliptic curve over $\mathbb{F}_p$.
Thank you in advance.
| This is best understood if you know something about the Frobenius endomorphism $F_p$, which sends $(x,y)$ to $(x^p, y^p)$ on the elliptic curve. Take any prime $\ell\neq p$. This endomorphism of the elliptic curve is also an endomorphism of the $2$-dimensional $\mathbb{F}_\ell$-vector space given by the points of order $\ell$ and the point at infinity (in fancy terms, the $\ell$-division points $E[\ell]$.). The characteristic polynomial of $F_p$ on $E[\ell]$ is $x^2-tx+p \mod\ell$. Note that $t=s_1$ becomes the trace of Frobenius ($\operatorname{Tr}(F_p)$) in this setting.
Similarly one defines $F_{p^n}$ by $(x,y)\mapsto (x^{p^n}, y^{p^n})$ and we have $\#E(\mathbb{F}_{p^n})=p^n+1-\operatorname{Tr}(F_{p^n})$ hence $s_n= \operatorname{Tr}(F_{p^n})$. Now note that
*
*The characteristic polynomial of $F_p$ is independent of $\ell$, so can be viewed as a polynomial in $\mathbb{Z}[x]$,
*$F_{p^n} = F_p^n$.
Calling $\alpha, \bar\alpha$ the roots in $\mathbb{C}$ of the characteristic polynomial of $F_p$ (i.e. its eigenvalues), we therefore have $s_n=\alpha^n+\bar\alpha^n$ by Point 2. This sequences satisfies the recurrence
$$
s_0= 2,\quad s_1=t,\quad s_{n+1} -t s_n +p s_{n-1}=0
$$
as you can show by induction. So the $s_0=2$ comes from $\alpha^0+\bar\alpha^0$, but it took me a while to explain it from scratch.
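To see the recursion in action, here is a small brute-force sketch (the curve $y^2=x^3+2x+3$ over $\mathbb{F}_{97}$ is an arbitrary choice, and the point count over $\mathbb{F}_p$ is done naively, so keep $p$ small):

```python
def count_points(a, b, p):
    """#E(F_p) for y^2 = x^3 + a*x + b, counted by brute force."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    n = 1                                      # the point at infinity
    for x in range(p):
        n += squares.get((x ** 3 + a * x + b) % p, 0)
    return n

a, b, p = 2, 3, 97
t = p + 1 - count_points(a, b, p)              # trace of Frobenius over F_p

s0, s1 = 2, t                                  # s_0 = 2, s_1 = t
s2 = t * s1 - p * s0                           # s_2 = t*s_1 - p*s_0 = t^2 - 2p
print(p ** 2 + 1 - s2)                         # #E(F_{p^2})
```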
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Sum from $0$ to $n$ of $ n \choose i $?
Possible Duplicate:
Algebraic Proof that $\sum\limits_{i=0}^n \binom{n}{i}=2^n$
Evaluation $\sum\limits_{k=0}^n \binom{n}{k}$
Is there a simple proof for this equality:
$$\sum_{i=0}^n {n \choose i} = 2^n$$
thanks and sorry I forgot the basics
| The standard combinatorial proof is that
*
*The LHS counts the number of ways to choose $0$, $1$, $2, \ldots , $ or $ n$ things from a total of $n$ objects.
*The RHS counts the number of ways to go through each of $n$ objects and mark them as "choose" or "don't choose".
With a little thought, these are equal.
An algebraic proof has been posted by Siminore.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
} |
Signs in binomial expansions Edit the title as seems fit.
$$\begin{align}
(a^3+b^3)
&= (a+b)(a^2 -ab+b^2) \\
&= (a+b)^3 -3ab(a+b)
\end{align}$$
And so on and so forth. Right now, I only need these expansions in solving quadratic equations. But why do signs vary in the expansions? (asterisk). What controls this? I see that something similar comes in $a^2-b^2 = (a+b)(a-b)$ to allow the intermediate term(s) to cancel but how does this translate to other (higher-order) forms?
Level: US Grade-10 equivalent.
| One way of looking at this is to see that you need the intermediate terms to cancel, so taking out a factor of $(a+b)$ you will need alternating signs for the cancellation to work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there any situations where you can only memorize rather than understand? I realize that you should understand theorems, equations etc. rather than just memorizing them, but are there any circumstances where memorizing in necessary? (I have always considered math a logical subject, where every fact can be deducted using logic rather than through memory)
| Mathematics is rife with things that we remember without "understanding." Every definition whose genesis is not explained before it is stated is one. Every name that does not suggest what it refers to is one. Every arbitrary symbol is one. You used the word "fact" and thereby hangs a point of order. Is it not a "fact" that we call a specified or indicated relationship between two quantities a "function?" There seems to be a great deal to mathematics besides deductive and inductive (and abductive) statements of theorems and proofs.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Lebesgue Integral Questions I'd like to show some properties of the Lebesgue integral.
I'd like to show that if $f$ is a simple function which is zero almost everywhere, then the Lebesgue integral $\int f(x) dx = 0$.
Similarly, I'd like to show this is also true for a measurable function $f$ which is zero almost everywhere.
I'm working through a real analysis textbook on my own, and I'm not quite sure what to do about this "almost everywhere." Do I have to consider separately a set of measure zero? Thank you as always.
Attempt for simple function:
If $f$ is a simple function that is zero almost everywhere, then $f = \sum_{i=1}^{n}a_{i}\chi_{E_i} = 0$.
Then, for each $i$, either $a_i = 0$ or else $m(E_i)= 0$.
By definition, $\int f(x)\,dx = \sum_{i=1}^{n}a_{i}m(E_i)$.
This summation is the sum of zeros. Thus, $\int f(x) = 0$ as desired.
Attempt for measurable function:
Assume $f$ is a non-negative measurable function that is zero almost everywhere.
By definition, $\int f(x) dx = \lim_{n \to \infty}\int f_n(x) dx$ where $\{f_n\}$ is a sequence of increasing, non-negative, simple functions that are all less than $f$.
Since $f$ is zero almost everywhere, then each non-negative, simple $f_n$ must also be zero almost everywhere.
Now, from above, we know that for each $n \in \mathbb{N}$, $\int f_n(x) dx = 0$.
Then, $\int f(x) dx = \lim_{n \to \infty} 0 = 0$.
Thus, $\int f(x) dx = 0$.
Now, for the general case...
We can write any measurable function $f$ in terms of its positive and negative parts. So, $f(x) = f^+(x) - f^-(x)$.
Both $f^+(x)$ and $f^-(x)$ are non-negative. Now if $f$ is zero almost everywhere, then both $f^+(x)$ and $f^-(x)$ are non-negative and zero almost everywhere.
Then, by above, $\int f^+(x) dx= \int f^-(x) dx= 0$
And by definition, $\int f(x) dx = \int f^+(x) dx - \int f^-(x) dx$.
So, $\int f(x) dx = 0 - 0 = 0$ as desired.
| The Lebesgue integral is monotone, that is if $f$ and $g$ are integrable and $f(x)\geq g(x)$ for all $x$, then $\int f\geq\int g$.
So take any nonnegative meaurable function $f$ that is zero almost everywhere and let $S$ be the set $S=\{x:f(x)\neq 0\}$. It is easily seen that $S$ is measurable and by assumption, $S$ has measure $0$. Define a function $f'$ that takes the value $\infty$ on $S$ and $0$ everywhere else. Let $f_0$ be the function that is constantly $0$. Then $f_0(x)\leq f(x)\leq f'(x)$ for all $x$ and hence $0=\int f_0\leq\int f\leq\int f'=0$. For $\int f'=0$, we use the fact that the sequence of simple functions $(f_n)$ defined so that $f_n$ takes the value $n$ on $S$ and $0$ everywhere else is an increasing sequence converging to $f'$.
For a general measurable function, you apply the argument to both the positive part and the negative part.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Is it wrong to tell children that $1/0 =$ NaN is incorrect, and should be $∞$? I was on the tube and overheard a dad questioning his kids about maths. The children were probably about 11 or 12 years old.
After several more mundane questions he asked his daughter what $1/0$ evaluated to. She stated that it had no answer. He asked who told her that and she said her teacher. He then stated that her teacher had "taught it wrong" and it was actually $∞$.
I thought the Dad's statement was a little irresponsible. Does that seem like a reasonable attitude? I suppose this question is partly about morality.
| This has been treated extensively both in this question and in others. But I think the clearest formulation possible is this : for me the most correct answer is "NaN", Not a Number.
It is not wrong to say that $1/0 = \infty$ from a topological point of view (be it an Alexandrov compactification, or a projective space), and it's actually very useful and natural in some contexts.
However, as pointed out by everyone, it cannot be made coherent with arithmetic laws, addition and multiplication. So it's "not a number".
That said, for pedagogical purposes I think it's best to not go into these things and just illustrate the arithmetic problems arising when defining division by 0 (and maybe say something like "one might define 1/0 in some contexts, but we will not because of these problems"). Indeed arithmetic notions are much more intuitive than abstract, topological constructions at this stage.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 12,
"answer_id": 1
} |
are fibres of a flat bijective map reduced? Let $f: X \to Y$ be a flat map of algebraic varieties or of complex analytic spaces which is bijective on closed points (or just bijective in the second case). Suppose both $X$ and $Y$ are reduced. Is it true that $f$ has reduced fibres?
If it is true, I would be most grateful for a reference.
| Let $f: X\to Y$ be a bijective flat morphism of reduced algebraic varieties over $\mathbb C$ (or any algebraically closed field $k$ of characteristic $0$), then $f$ is an isomorphism.
First $f$ is quasi-finite, hence finite and étale (because characteristic $0$) above some dense open subset $V$ of $Y$. As we work over an algebraically closed field, $f^{-1}(V)\to V$ is then an isomorphism.
Let $x\in X$ and $y=f(x)$. Then $O_{Y,y}\to O_{X,x}$ is flat, hence faithfully flat, therefore injective. This implies that the quotient $O_{X,x}/O_{Y,y}$ is flat over $O_{Y,y}$. But the total rings of fractions of $O_{Y,y}$ and $O_{X,x}$ coincide because $X\to Y$ is birational by the above. So $O_{X,x}/O_{Y,y}$ is of torsion over $O_{Y,y}$, hence equal to $0$. So $f$ is an open immersion. But $f$ is surjective, it is an isomorphism.
The proof should work for reduced complex analytic spaces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Mixed-strategy Nash equilibria I didn't find this in books, so I'm asking: for a given game, is there always exactly one mixed-strategy Nash equilibrium, or can there be none? I know that there can be several pure-strategy Nash equilibria (or none at all).
| Pure strategies can be seen as special cases of mixed strategies, in which some strategy is played with probability $1$. In a finite game, there is always at least one mixed strategy Nash equilibrium. This has been proven by John Nash[1].
There can be more than one mixed (or pure) strategy Nash equilibrium and in degenerate cases, it is possible that there are infinitely many. In a well-defined sense (open and dense in payoff-space), almost every finite game has a finite and odd number of mixed strategy Nash equilibria.
A typical example of a game with more than one equilibrium is Battle of the Sexes, which has two pure strategy equilibria and one completely mixed equilibrium, meaning every strategy is played with positive probability.
[1]: J.Nash. Non-Cooperative Games. http://www.cs.upc.edu/~ia/nash51.pdf
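As a concrete illustration of the Battle of the Sexes equilibrium mentioned above (my own addition, with an assumed standard payoff matrix: $(2,1)$ if both choose Opera, $(1,2)$ if both choose Football, $(0,0)$ otherwise), the completely mixed equilibrium follows from the indifference conditions:

from fractions import Fraction

# Row player mixes Opera with probability p so that the column player is
# indifferent between her pure strategies: p*1 = (1-p)*2  =>  p = 2/3.
p = Fraction(2, 3)
# Column player mixes Opera with probability q so that the row player is
# indifferent: q*2 = (1-q)*1  =>  q = 1/3.
q = Fraction(1, 3)
assert p * 1 == (1 - p) * 2 and q * 2 == (1 - q) * 1
print("row plays Opera with probability", p, "; column plays Opera with probability", q)

Both players put positive probability on each of their strategies, as described above.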
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Multiple choice question from general topology Let $X =\mathbb{N}\times \mathbb{Q}$ with the subspace topology of $\mathbb{R}^2$ and $P = \{(n, \frac{1}{n}): n\in \mathbb{N}\}$ .
Then in the space $X$
Pick out the true statements
1 $P$ is closed but not open
2 $P$ is open but not closed
3 $P$ is both open and closed
4 $P$ is neither open nor closed
What can we say about the boundary of $P$ in $X$?
I always struggle to figure out subspace topologies. Though I am aware of the basic definition and theory of the subspace topology, I need a bit of explanation here about how to work out the subspace topology and decide the status of $P$ in it.
Thanks for your help.
| Draw a picture: $P$ is a set of points on the positive half of the graph of the hyperbola $y=\frac1x$. Let’s look at one of those points, say $\left\langle 4,\frac14\right\rangle$. Is $\{4\}$ an open set in $\Bbb N$? Is $\Bbb Q$ an open set in $\Bbb Q$? If the answers to these questions are both yes, then $$\{4\}\times\Bbb Q=\{\langle 4,y\rangle:y\in\Bbb Q\}$$ is an open set in $X$. Call this set $U$; what is $U\cap P$?
This should help in deciding whether $P$ is open in $X$. To decide whether $P$ is closed in $X$, you need to consider a point $\langle n,q\rangle\in X\setminus P$ and ask whether this point can possibly be a limit point of $P$. It will help to realize that sets of the form $\{n\}\times(q-\epsilon,q+\epsilon)$ are open in $X$, because $\{n\}$ is open in $\Bbb N$, and $(q-\epsilon,q+\epsilon)$ is open in $\Bbb Q$. (Here I’m taking the interval in $\Bbb Q$, not in $\Bbb R$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Prove there exists a triangle without using Euclidean Parallel Postulate Let $a$ and $b$ be real numbers where $0 < a< b<180$. Let $A$, $B$, $D$ be points so $A$-$B$-$D$. Part 1: Prove there exists a triangle $ABC$ where measure of angle $CAB$ is $a$ and measure of angle $CBD$ is $b$.
How do I prove part 1 without making use of the Euclidean Parallel Postulate?
I know that d(A,B)+d(B,D)= d(A,D) due to definition of betweenness so it is on a straight line. To establish the side of AC, I could construct a ray AC by Angle Construction Postulate on a half plane on the same side of ray AB that makes angle $a$ and perhaps draw a line parallel to ray AC to show that angle is B is greater than angle A. I'm not sure how to show this is a triangle. Should I show the overlap of the half planes using definition for interior in order to show that or prove that the triangle exists? Or is just the definition of the interior of any angle enough to show that the triangle exists?
| This sounds exactly like the parallel postulate (in the form given by Euclid).
If angle $a$ is smaller than angle $b$, then the two lines meet somewhere if we keep following them upward.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
stereographic projection in a projective curve It seems that there is a standard technique based on an idea similar to the stereographic projection, but I don't know how I can use it. For example, how can I use it in this exercise? I'm really sorry for being so stupid....
Define a birational map from an irreducible quadric hypersurface $X \subset P^3$ to $P^2$, by analogy with the stereographic projection, and find the open sets $U \subset X$, $V \subset P^2$, that are isomorphic.
| View $\mathbb{P}^2$ as a hyperplane $H \subset \mathbb{P}^3$ and choose a point $P_0$ that lies on $X$ but not on $H$. Consider the rational map $X \dashrightarrow H$ that sends $P \in X$ to the point where the line through $P$ and $P_0$ meets $H$. You need to check: (1) this is actually a rational map; (2) this map is invertible as a rational map (the inverse has a similarly concrete geometric description); (3) which subsets of $X$, resp. $H$, are the loci where the map, resp. its inverse, is not well-defined.
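To make this concrete, here is one possible choice (a sketch of my own, assuming the smooth quadric $X: xw - yz = 0$ with homogeneous coordinates $[x:y:z:w]$, the hyperplane $H = \{w = 0\}$, and the centre of projection $P_0 = [0:0:0:1] \in X$): the projection is
$$[x:y:z:w] \longmapsto [x:y:z],$$
and intersecting the line through $[x:y:z:0]$ and $P_0$ with $X$ gives the inverse
$$[x:y:z] \longmapsto [x^2:xy:xz:yz].$$
One checks, using the relation $xw = yz$ on $X$, that these are mutually inverse on the open sets $U = X \cap \{x \neq 0\}$ and $V = \{x \neq 0\} \subset \mathbb{P}^2$.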
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Determinant of symmetric Matrix with non negative integer element Let \begin{equation*}
M=%
\begin{bmatrix}
0 & 1 & \cdots & n-1 & n \\
1 & 0 & \cdots & n-2 & n-1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
n-1 & n-2 & \cdots & 0 & 1 \\
n& n-1 & \cdots & 1 & 0%
\end{bmatrix}%
\end{equation*}
How can you prove that $\det(M)=(-1)^n\cdot n \cdot 2^{n-1}$?
I just guessed the formula on the right-hand side by observing the calculation for small $n$, but I can't prove it for arbitrary $n$. Thanks, everyone.
| Let's take a $4\times 4$ matrix (I don't want to type much).
$$\begin{vmatrix}
0 & 1 & 2 & 3 \\
1 & 0 & 1 & 2 \\
2 & 1 & 0 & 1 \\
3 & 2 & 1 & 0
\end{vmatrix} $$
Since adding a multiple of one row to another does not change the determinant, subtract the $i$-th row from the $(i+1)$-th row, starting from the bottom so that each subtraction uses an original row.
$$\begin{vmatrix}
0 & 1 & 2 & 3 \\
1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 \\
1 & 1 & 1 & -1
\end{vmatrix} $$
Repeat the process with columns: subtract the $i$-th column from the $(i+1)$-th column, again working from the last column backwards.
$$\begin{vmatrix}
0 & 1 & 1 & 1 \\
1 & -2 & 0 & 0 \\
1 & 0 & -2 & 0 \\
1 & 0 & 0 & -2
\end{vmatrix} = \frac{1}{2}\begin{vmatrix}
0 & 1 & 1 & 1 \\
2 & -2 & 0 & 0 \\
2 & 0 & -2 & 0 \\
2 & 0 & 0 & -2
\end{vmatrix} = \frac{1}{2}\begin{vmatrix}
3 & 1 & 1 & 1 \\
0 & -2 & 0 & 0 \\
0 & 0 & -2 & 0 \\
0 & 0 & 0 & -2
\end{vmatrix}$$
Now what can you say about its determinant?
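As a quick sanity check of the conjectured formula (my own addition, not part of the original answer), one can verify $\det(M)=(-1)^n\cdot n\cdot 2^{n-1}$ numerically for small $n$, where $M$ is the $(n+1)\times(n+1)$ matrix with entries $|i-j|$:

import numpy as np

for n in range(1, 8):
    # Build the (n+1) x (n+1) matrix with entries |i - j|.
    M = np.abs(np.subtract.outer(np.arange(n + 1), np.arange(n + 1)))
    det = round(np.linalg.det(M.astype(float)))
    assert det == (-1) ** n * n * 2 ** (n - 1), (n, det)
print("formula holds for n = 1, ..., 7")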
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/144902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How are the "real" spherical harmonics derived? How were the real spherical harmonics derived?
The complex spherical harmonics:
$$
Y_l^m( \theta, \phi ) = K_l^m P_l^m( \cos{ \theta } ) e^{im\phi}
$$
But the "real" spherical harmonics are given on this wiki page as
$$
Y_{lm} =
\begin{cases}
\frac{1}{\sqrt{2}} ( Y_l^m + (-1)^mY_l^{-m} ) & \text{if } m > 0 \\
Y_l^m & \text{if } m = 0 \\
\frac{1}{i \sqrt{2}}( Y_l^{-m} - (-1)^mY_l^m) & \text{if } m < 0
\end{cases}
$$
Note: $Y_{lm}$ is the real spherical harmonic function and $Y_l^m$ is the complex-valued version (defined above).
What's going on here? Why are the real spherical harmonics defined this way and not simply as $ \Re{( Y_l^m )} $ ?
| The page actually suggests the answer when it says "The harmonics with $m > 0$ are said to be of cosine type, and those with $m < 0$ of sine type." Recall how one switches between the complex exponential functions $\{e^{imx}\colon m\in \mathbb Z\}$ and the trigonometric functions: it's done with the formulas $$\cos mx=\frac{e^{imx}+e^{-imx}}{2}$$ and $$\sin mx=\frac{e^{imx}-e^{-imx}}{2i}$$
Taking only real parts would not give you the sines.
Since $\cos (-mx)=\cos mx$ and $\sin(-mx)=-\sin mx$, we don't need all values of $m$ in both families. We can remove the redundant functions and enumerate the entire trigonometric basis by $m\in\mathbb Z$ as follows:
$\{\cos mx\colon m\ge 0\}\cup \{\sin mx\colon m<0\}$. This is essentially what the wiki page does.
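A quick numerical way to see this (my own addition; it assumes SciPy's sph_harm uses the Condon–Shortley convention, so that $\overline{Y_l^m} = (-1)^m Y_l^{-m}$) is to plug the wiki combinations into the complex harmonics and check that the imaginary parts vanish:

import numpy as np
from scipy.special import sph_harm

l, m = 3, 2
theta, phi = 1.1, 0.7              # azimuthal and polar angles in SciPy's ordering
Yp = sph_harm(m, l, theta, phi)    # complex Y_l^m
Ym = sph_harm(-m, l, theta, phi)   # complex Y_l^{-m}

cos_type = (Yp + (-1) ** m * Ym) / np.sqrt(2)          # the "m > 0" real harmonic
sin_type = (Yp - (-1) ** m * Ym) / (1j * np.sqrt(2))   # the "m < 0" real harmonic, index -m
print(cos_type.imag, sin_type.imag)  # both ~ 0: the combinations are real
print(np.sqrt(2) * Yp.real - cos_type.real, np.sqrt(2) * Yp.imag - sin_type.real)  # both ~ 0

So the cosine-type harmonic is $\sqrt2\,\Re(Y_l^m)$ and the sine-type one is $\sqrt2\,\Im(Y_l^m)$, which is exactly the information that taking only $\Re(Y_l^m)$ would throw away.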
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
Multiplication of cardinal numbers Let $a_i$ be a cardinal number for every $i \in\ I$.
Let $\{A_i\}$ and $\{A'_i\}$ be families of sets and let $A_i$, $A'_i$ and $a_i$ be equipotent for every $i \in I$.
Then show that $\prod_i\ A_i $ is equipotent with $\prod_i\ A'_i $.
This seems obviously true but I don't know how to actually show the bijection between them..
| For each $i\in I$ you have a bijection $\varphi_i:A_i\to A'_i$. Define $$\varphi:\prod_{i\in I}A_i\to\prod_{i\in I}A'_i:\langle a_i:i\in I\rangle\mapsto\Big\langle\varphi_i(a_i):i\in I\Big\rangle$$ and prove that it’s a bijection.
This is what I call a follow-your-nose proof: there really is only one reasonable thing to try. All you’re given is the equipotence of $A_i$ and $A'_i$ for $i\in I$. All that gives you is the existence of the bijections $\varphi_i$, so either the result is very hard (unlikely) or somehow it must be possible to use those bijections to get the one that you want. Since a typical element of $\prod_iA_i$ is just a function $\langle a_i:i\in I\rangle$ from $I$ to $\bigcup_iA_i$, about the only thing to try is to apply the bijections $\varphi_i$ to the components of $\langle a_i:i\in I\rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Examples of mathematical induction What are the best examples of mathematical induction available at the secondary-school level---totally elementary---that do not involve expressions of the form $\bullet+\cdots\cdots\cdots+\bullet$ where the number of terms depends on $n$ and you're doing induction on $n$?
Postscript three years later: I see that I phrased this last part in a somewhat clunky way. I'll leave it there but rephrase it here:
--- that are not instances of induction on the number of terms in a sum?
| Does proving statements like $f(n) \leq g(n)$ fit your bill? For instance, prove that $2^n \leq 2n!$.
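For completeness, a sketch of the induction step for this particular inequality (my own addition, not part of the original answer): once the base case $2^1 \le 2\cdot 1!$ is checked, assume $2^n \le 2\,n!$ for some $n \ge 1$; then
$$2^{n+1} = 2\cdot 2^n \le 2\cdot 2\,n! \le 2\,(n+1)!,$$
where the last step uses $2 \le n+1$. No sum with a variable number of terms appears anywhere.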
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51",
"answer_count": 19,
"answer_id": 14
} |
Are questions of convergence important in real life? In the real world, do we ever need to worry about convergence and what not? I am not talking about whether recursive functions and such terminate, but about convergence in analysis. It seems like the finitude of the universe makes questions like that meaningless. I ask because it often seems like physicists and statisticians are very lax about convergence. I know physicists might seem to care about it every once in a while (wave functions must be normalizable, i.e. in $L^2$), but it doesn't appear to be truly important.
So what are some real world reasons for concerning ourselves with convergence?
| I would argue that the most important reason for being concerned about convergence in the 'real world' is that statements proved in the absence of concerns about convergence can be outright false! The simplest example that comes to mind is the geometric series; the fact that $\sum_{n=0}^{\infty}x^n = \frac{1}{1-x}$ is incredibly useful and has plenty of applications, both directly to the real world and in doing other mathematics that then gets applied to real-world problems — but you have to be careful not to conclude that $1+2+4+8+\cdots = -1$ from it!
(and I'm well aware that even this 'absurd' conclusion can make sense in certain circumstances — but it's not true in $\mathbb{R}$ as it stands, and there are much more insidious versions of the same error where the interpretations that can be applied here make much less sense.)
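A quick numerical illustration of the point (my own addition): the partial sums of the geometric series approach $\frac{1}{1-x}$ only when $|x|<1$, and for $x=2$ they race off to infinity rather than toward $-1$.

def partial_sum(x, n_terms):
    # Sum of the first n_terms terms of the geometric series 1 + x + x^2 + ...
    return sum(x ** k for k in range(n_terms))

for x in (0.5, 2.0):
    print(x, [partial_sum(x, n) for n in (5, 10, 20)], "claimed limit:", 1 / (1 - x))
# x = 0.5: the sums approach 2.0; x = 2.0: they blow up and never approach -1.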
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Proving that surjective endomorphisms of Noetherian modules are isomorphisms and a semi-simple and noetherian module is artinian. I am revising for my Rings and Modules exam and am stuck on the following two questions:
$1.$ Let $M$ be a noetherian module and $ \ f : M \rightarrow M \ $ a surjective homomorphism. Show that $f : M \rightarrow M $ is an isomorphism.
$2$. Show that if a semi-simple module is noetherian then it is artinian.
Both these questions seem like they should be fairly straightforward to prove but I cannot seem to solve them.
| $1.$ Let me give you a proof of the following astonishing result due to Vasconcelos:
Theorem:
Let $M$ be a finitely generated $R$-module, Noetherian or not, and let $ \ f : M \rightarrow M \ $ be a surjective homomorphism. Then $f : M \rightarrow M $ is injective (hence is an isomorphism).
Proof:
We use the standard trick of converting $M$ into an $R[X]$-module by defining $X\cdot m=f(m)$.
For the ideal $I=XR[X]$ we have $M=IM$ since for any $m\in M$ we can write by surjectivity of $f$ : $m=f(n)=X\cdot n$ and $X\in I$.
Since Nakayama's lemma says that $$M=IM\implies \text{there exists } i\in I \text{ with } m=i\,m \text{ for all } m\in M,$$ there exists $i=P(X)X\in I$
with $m=P(X)X\cdot m=P(f)(f(m))$ for all $m\in M$.
So finally $f(m)=0\implies m=P(f)(f(m))=P(f)(0)=0$: injectivity of $f$ has been proved.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 6,
"answer_id": 3
} |
The topology on $\mathbb{R}$ with sub-basis consisting of all half open intervals $[a,b)$.
Let $\tau$ be the topology on $\mathbb{R}$ with sub-basis consisting of all half open intervals $[a,b)$.
How would you find the closure of $(0,1)$ in $\tau$?
I'm trying to find the smallest closed set containing $(0,1)$ in that topology but then I realised I don't fully understand what an 'open' interval is. Is an open interval in this topology one that is half open like in the sub-basis?
| Recall that since the collection $S$ of all elements of the form $[a,b)$ is a sub-basis for your topology $\tau$ it follows that the collection $\mathscr{B}$ of all finite intersections of elements of $S$ is a basis for your topology $\tau$. Now it is easy to see that the finite intersection of elements of $S$ is either the empty set or another element of $S$.
Call $A = (0,1)$. Then since your topology is now given in terms of a basis, an element $x \in \Bbb{R}$ is in $\overline{A}$ iff every basis element about $x$ intersects $A$. Now it is clear from this definition that no $x < 0$ can be in the closure of $A$ because given any $x<0$, there exists $\epsilon \in \Bbb{R}$ such that $x < \epsilon < 0$ and hence $x$ is in the basis element $[x,\epsilon)$ that clearly does not intersect $A$.
We see similarly that no real number $x > 1$ can be in the closure of $A$. Now $1$ cannot be in the closure because there exists a basis element such as $[1,2)$ that contains $1$ and is completely disjoint from $A$. From these results we deduce immediately that the closure of $(0,1)$ with respect to the topology $\tau$ is the interval $[0,1)$.
Edit: You're making the mistake of assuming that being closed/open are mutually exclusive. Let us show that $[0,1)$ is closed by showing its complement is open. Now the complement of $[0,1)$ is $(-\infty,0) \cup [1, \infty)$. Now $(-\infty,0)$ is open because we can write it as
$$(-\infty,0) = \ldots \cup [-1,-0.5)\cup [-0.6,0)$$
while $[1,\infty)$ is clearly open. The union of two open sets is open from which it follows that $[0,1)$ is closed.
Now let us justify the existence of sets that are open and closed at the same time. This comes from the fact that $\Bbb{R}$ with your topology $\tau$ is disconnected. We can write $\Bbb{R}$ as $C \cup D$ where
$$C = \cdots \cup [0,1) \cup [2,3) \cup [4,5) \cup \cdots$$ and $$D = \cdots \cup [-1,0) \cup [1,2) \cup [3,4) \cup \cdots$$
and $C,D$ are clearly open with $C\cap D = \emptyset$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Solve equations using the $\max$ function How do you solve equations that involve the $\max$ function? For example:
$$\max(8-x, 0) + \max(272-x, 0) + \max(-100-x, 0) = 180$$
In this case, I can work out in my head that $x = 92$. But what is the general procedure to use when the number of $\max$ terms is arbitrary? Thanks for the help; here is a Python solution for the problem if anyone is interested.
def solve_max(y, a):
    # Solve sum(max(y_i - x, 0) for y_i in y) == a for x.
    # Try each suffix of the sorted y as the set of "active" terms (those with
    # y_i > x); the first candidate x that is consistent is the solution.
    y = sorted(y)
    for idx, y1 in enumerate(y):
        y_left = y[idx:]
        y_sum = sum(y_left)
        x = (y_sum - a) / len(y_left)
        if x <= y1:
            return x

print(solve_max([8, 272, -100], 180))
| Hint: You can think of $\max(8-x,0)$ as a piecewise-defined function:
$$
\max(8-x,0) = \begin{cases} 0 & \text{if } x\geq 8 \\ 8-x & \text{if } x < 8 \end{cases}
$$ Apply this idea to the other $\max$ functions as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Links between difference and differential equations? Does there exist any correspondence between difference equations and differential equations?
In particular, can one cast some classes of ODEs into difference equations or vice versa?
| (Adding to user26872's answer as this was not obvious to me so it might help someone else going through his derivation)
The identity
$$x_{n+1} = Ex_n = \sum\limits_{k=0}^\infty \frac{h^k}{k!}D^k x_n=e^{hD}x_n$$
is true if we consider the following. Let $x_n = x(t_n)$ and assume that $x(t)$ is analytic around the point $t_n$; then its Taylor series is
$$x(t) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}(t-t_n)^k.$$
If we now perform the same expansion at $t' = t_n+h$ we have
$$x(t_n+h) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}h^k = \sum\limits_{k=0}^\infty \frac{h^k D^k}{k!}x(t_n) = e^{hD} x(t_n)$$
and so
$$x_{n+1} = Ex_n \leftrightarrow x(t_n + h) = E x(t_n)$$
thus giving
$$E = e^{hD}.$$
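A small numerical illustration of $E = e^{hD}$ (my own addition): truncating the exponential series reproduces the shifted value of a smooth signal. Here $x(t)=\sin t$, whose $k$-th derivative is $\sin(t + k\pi/2)$.

import math

t, h = 0.3, 0.5
# Truncated e^{hD} x(t) = sum_k h^k x^{(k)}(t) / k!  with 15 terms.
approx = sum(h ** k / math.factorial(k) * math.sin(t + k * math.pi / 2) for k in range(15))
print(approx, math.sin(t + h))  # the two values agree to many decimal places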
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48",
"answer_count": 5,
"answer_id": 0
} |
What are the generators for $\mathbb{Z}_p^*$ with $p$ a safe prime? Let's consider $\mathbb{Z}_p^*$ with $p = 2 \cdot q + 1$ a safe prime (both $p$ and $q$ prime).
Then $\varphi\left(p\right) = 2 \cdot q$ is the order of $\mathbb{Z}_p^*$, and $\varphi\left(\varphi\left(p\right)\right) = q-1$ the number of generators in $\mathbb{Z}_p^*$.
Also, there are exactly $\frac{p-1}{2} = q$ quadratic residues and $q$ quadratic non-residues in $\mathbb{Z}_p^*$.
Now my question:
Is every quadratic non-residue (except $-1$, if it is a non-residue) a generator of $\mathbb{Z}_p^*$? (This would fit with the fact that we have $q-1$ generators and quadratic residues cannot be generators.)
A confirmation, or compelling arguments why I am wrong, would be very much appreciated. Thanks!
| Indeed that is true, your counting argument does it. Since $p\equiv 3\pmod{4}$, we have that $-1$ is a non-residue. It is the only non-residue which is not a primitive root of $p$.
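A quick brute-force check of this for the small safe prime $p=23$, $q=11$ (my own addition): every quadratic non-residue except $-1$ has multiplicative order $p-1$, i.e. is a generator.

p = 23

def order(a, p):
    # Multiplicative order of a modulo the prime p.
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

residues = {pow(x, 2, p) for x in range(1, p)}
nonresidues = [a for a in range(1, p) if a not in residues]
assert all(order(a, p) == p - 1 for a in nonresidues if a != p - 1)
assert order(p - 1, p) == 2  # -1 is the only non-residue that is not a generator
print("checked for p =", p)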
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Choosing squares from a grid so that no two chosen squares are in the same row or column
How many ways can 3 squares be chosen from a 5x5 grid so that no two
chosen squares are in the same row or column?
Why is this not simply $\binom{5}{3}\cdot\binom{5}{3}$?
I figured that there were $\binom{5}{3}$ ways to choose $3$ different "$x$-axis" coordinates and then the same for the "$y$-axis", so I would multiply them.
Thanks
| You can choose the first square arbitrarily, so there are $25$ options for it.
There are now $4$ rows and $4$ columns that the second square can be chosen from, so there are $16$ options for it (alternatively, if one just wants to count, there are $25$ squares total, minus the chosen first square, minus the $4$ other squares in the same row, minus the other $4$ squares in the same column, for $25 - 1 - 4 - 4 = 16$ options).
Similarly, there are $9$ options for the third square.
Dividing by $3!$ to account for the fact that the same three squares can be chosen in that many ways, the correct answer is
$$\frac{25\cdot 16\cdot 9}{3!} = 600$$
not
$$\binom{5}{3}\cdot\binom{5}{3}=10\cdot 10=100.$$
The latter count only records which three rows and which three columns are used; it forgets the $3!$ ways of matching the chosen rows to the chosen columns, and indeed $\binom{5}{3}^2\cdot 3!=600$.
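A brute-force check of the count (my own addition):

from itertools import combinations

cells = [(r, c) for r in range(5) for c in range(5)]
count = sum(
    1
    for trio in combinations(cells, 3)
    if len({r for r, c in trio}) == 3 and len({c for r, c in trio}) == 3
)
print(count)  # 600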
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that $\log(n) = O(\sqrt{n})$ How to prove $\log(n) = O(\sqrt{n})$?
How do I find the $c$ and the $n_0$?
I understand that to start, I need to find something that $\log(n)$ is smaller than, but I'm having a hard time coming up with the example.
| Consider $n=2^{100}$.
Then $\log(n)=\log(2^{100}) = 100\cdot\log 2 = 100$ (taking logarithms base $2$).
Now apply this to $\sqrt{n}$: $\sqrt{n}=\sqrt{2^{100}} = 2^{50}$.
We can clearly see that $\sqrt{n}$ is far larger than $\log n$ for such $n$.
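To actually exhibit constants (my own addition, not part of this answer): with the natural logarithm one can take $c=1$ and $n_0=1$, since $\ln n \le \sqrt n$ for all $n \ge 1$ (this can be proved, e.g., by minimizing $\sqrt n - \ln n$); for logarithms base $2$, $c=2$ works because $\log_2 n = \ln n/\ln 2 < 2\ln n$ for $n>1$. A quick numerical sanity check of the first inequality:

import math

# Sanity check (not a proof): ln(n) <= sqrt(n) for n = 1, ..., 10^6 - 1.
assert all(math.log(n) <= math.sqrt(n) for n in range(1, 10 ** 6))
print("ln(n) <= sqrt(n) holds on the tested range")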
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 4,
"answer_id": 3
} |
Cantor set: Lebesgue measure and uncountability I have to prove two things.
The first is that the Cantor set has Lebesgue measure $0$. We regard the supersets $C_n$, where $C_0 = [0,1]$, $C_1 = [0,\frac{1}{3}] \cup [\frac{2}{3},1]$, and so on; each $C_n$ consists of $2^n$ intervals of length $3^{-n}$ by construction. The Lebesgue measure of each such interval is $\lambda([x, x + 3^{-n}]) = 3^{-n}$, therefore the measure of $C_n$ is $\frac{2^n}{3^n} = e^{(\ln(2)-\ln(3))n}$, which goes to zero as $n \rightarrow \infty$. But does this prove it?
The other thing I have to prove is that the Cantor set is uncountable. I found that I should construct a surjective function onto $[0,1]$, but I'm totally puzzled about how to do this.
Thanks for the help.
| Hint:
For the measure: there is a theorem that if $C_n$ is a descending sequence of measurable sets with $m(C_1)<\infty$ and
$C = \bigcap C_n$, then $\lim_{n \rightarrow \infty}m(C_n)= m(C)$.
Here you know that $C_n$ is a descending sequence with $m(C_n) = (2/3)^n$, and you want to know $m(C)$.
To prove $C$ is uncountable:
Express a number $x \in [0,1]$ in base $3$:
$x = 0.x_1x_2x_3\ldots$
In the first step, we remove the middle third from $[0,1]$.
We write $0 = .0$, $\frac{1}{3} = .1$, $\frac{2}{3} = .2$, $1 = .2222\ldots$
We have three intervals, $[.0, .1]$, $(.1, .2)$, $[.2, .2222\ldots]$, and we remove the middle one. The removed interval $(.1, .2)$ consists of numbers with $x_1 = 1$; the only surviving number with $x_1 = 1$ is the endpoint $.1$ of $[.0, .1]$, but we can re-express $.1$ as $.0222\ldots$
So we use the rule: whenever a number of the form $0.x1$ (where $x$ is a finite string of digits $0,1,2$) appears as the endpoint of an interval, we write it as $0.x0222\ldots$ With this convention, in this step we remove exactly the numbers with $x_1 = 1$. The remaining intervals are $[.0, .0222\ldots]$ and $[.2, .222\ldots]$.
Similarly, in the $n$-th step we keep only those numbers with $x_n = 0$ or $2$:
So the Cantor set consists of all numbers of the form $.x_1x_2\ldots$ with every $x_i = 0$ or $2$.
There exists a bijection between $E$ and $[0,1]$:
Consider the new set $E$ whose members are the members of the Cantor set with every digit divided by $2$; $E$ consists of all sequences with each $x_i = 0$ or $1$, and $|E| = |C|$.
There exists an injective map from $[0,1]$ to this new set $E$ (via binary expansions), so you can prove it's uncountable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/145803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |