Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Why are finite morphisms between schemes defined with a different sense of "local" than morphisms of finite type? A finite morphism $f:X\rightarrow Y$ first requires an affine cover $V_i\subset Y$ such that the $f^{-1}(V_i)$ are affine open sets. However, a morphism of (locally) finite type $f:X\rightarrow Y$ only involves a cover $V_i\subset Y$ such that each $f^{-1}(V_i)$ can be covered by affine open sets.
The latter sense of "local" is broader than the former. Why are these parts of the two definitions so different? If I define a finite morphism by requiring only that the preimages be covered by affine open sets such that the induced maps $f^*$ are integral, will undesirable things happen? (By the way, is it right that there is actually no difference when $X$ is affine?)
| A finite morphism $f : X\to Y$ has a strong topological property: it is proper ($f$ is a closed map, and even universally closed: for any $Z\to Y$, the projection $X\times_Y Z\to Z$ is closed). If you only ask the $f^{-1}(V_i)$ to be "locally finite", then you only get the so-called quasi-finite morphisms. For example, any open immersion is quasi-finite, but not finite.
If $X$ is affine and $Y$ is separated, then it is true that $f^{-1}(V)$ is affine for any affine open subset $V$ of $Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/81974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there an easy way to determine when this fractional expression is an integer? For $x,y\in \mathbb{Z}^+,$ when is the following expression an integer?
$$z=\frac{(1-x)-(1+x)y}{(1+x)+(1-x)y}$$
The associated Diophantine equation is symmetric in $x, y, z$, but I couldn't do anything more with that. I tried several factoring tricks without luck. The best I could do was find three solutions such that $0<x\le y\le z$. They are: $(2,5,8)$, $(2,4,13)$ and $(3,3,7)$.
The expression seems to converge pretty quickly to some non-integer value between 1 and 2 as $x$ and $y$ grow.
| We write out a solution, to check whether something like it can be "borrowed" for a problem set.
The numerator is $1-x-y-xy$, which can be written as $-[(x+1)(y+1)-2]$. The denominator is $-[(x-1)(y-1)-2]$. So our fraction is
$$\frac{(x+1)(y+1)-2}{(x-1)(y-1)-2}.$$
We can take $x=1$ or $y=1$. The ratio is then an integer, albeit negative. This gives infinitely many trivial solutions. We will list the non-trivial solutions.
By symmetry we can assume that $x \le y$. Let $x=2$. Then we want $(-1-3y)/(3-y)$ to be an integer. But
$$\frac{3y+1}{y-3}=3+\frac{10}{y-3}.$$
So $y-3$ must divide $10$, giving (since $y \ge x$), the solutions $y=2$, $4$, $5$, $8$, and $13$.
Next we deal with $x=3$. A calculation similar to the previous one (but shorter) gives that the only $y\ge 3$ are given by $y=3$ and $y=7$.
Next we deal with $x=4$. We need $\dfrac{5y+3}{3y-5}$ to be an integer. Any common divisor of these two numbers must divide $3(5y+3)-5(3y-5)$, which is $34$. The only divisor of $34$ which is of the form $3y-5$, where $y \ge 4$, is given by $y=13$.
Similarly, if $x=5$, we get the solution $y=8$.
For the rest, we use essentially the analysis of @Ragib Zaman. One needs to verify that $2x+2y< xy-x-y-1$, or equivalently that $(x-3)(y-3)>10$. This is true if $x\ge 6$ and $y\ge 7$. (It fails at $x=y=6$, but that is not a solution.)
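In case it helps, the finite search above can be confirmed with a short brute-force script (my own sketch; by the last inequality nothing new appears once $(x-3)(y-3)>10$, so a small bound suffices):

```python
# Search 0 < x <= y for positive integer z = ((x+1)(y+1)-2)/((x-1)(y-1)-2),
# recording each solution as a sorted triple.
def nontrivial_solutions(limit=60):
    found = set()
    for x in range(2, limit + 1):
        for y in range(x, limit + 1):
            num = (x + 1) * (y + 1) - 2
            den = (x - 1) * (y - 1) - 2
            if den > 0 and num % den == 0:
                found.add(tuple(sorted((x, y, num // den))))
    return found

print(nontrivial_solutions())  # expect (2, 5, 8), (2, 4, 13), (3, 3, 7)
```

Each triple is found several times (e.g. $(5,8)$ gives $z=2$, which sorts back to $(2,5,8)$), which matches the symmetry of the Diophantine equation.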
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Counting Eulerian trails I have an assignment to count all nonequivalent Eulerian trails in the famous kindergarten graph:
I think that there are $12$ distinct trails in total, but I have reached this conclusion by drawing them all. On top of that, I think that I can reason like this:
The graph has 2 odd vertices, so the trail has to start from one of them and end in the other one. If I start from vertex $A$, I can go in $3$ different routes and each one then splits one more time to $2$ other. So that is $3 \times 2 = 6$ different trails. And another $6$ if I start from $B$, using the same reasoning.
My question is - is my reasoning correct? And is there any general algorithm for counting Eulerian trails in undirected graphs? I know it exists for directed graphs.
Thanks in advance!
| According to Wikipedia,
Counting the number of Eulerian circuits on undirected graphs is much more difficult [than on directed graphs]. This problem is known to be #P-complete. In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968) is believed to give a sharp approximation for the number of Eulerian circuits in a graph.
A couple of references are given.
I know that circuits are not trails, but I expect the two problems to be roughly equally difficult.
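That said, even though exact counting is #P-complete in general, for a graph as small as yours you can simply backtrack over all edge sequences. The sketch below is my own (the triangle at the end is just an illustrative test case, not your kindergarten graph):

```python
from collections import defaultdict

def count_eulerian_trails(edges):
    """Count Eulerian trails (as directed edge sequences) over all starting
    vertices by exhaustive backtracking; only feasible for tiny graphs."""
    adj = defaultdict(set)
    for i, (u, v) in enumerate(edges):
        adj[u].add(i)
        adj[v].add(i)

    def other_end(i, u):
        a, b = edges[i]
        return b if a == u else a

    def walk(u, used):
        if len(used) == len(edges):
            return 1
        return sum(walk(other_end(i, u), used | {i})
                   for i in adj[u] if i not in used)

    return sum(walk(v, frozenset()) for v in adj)

# A triangle has 3 starting vertices x 2 directions = 6 Eulerian trails.
print(count_eulerian_trails([(0, 1), (1, 2), (2, 0)]))  # 6
```

Running it on your graph would let you check the count of $12$ against your hand-drawn enumeration (mind what you count as "nonequivalent": this counts directed trails from every start vertex).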
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A circle with infinite radius is a line I am curious about the following diagram:
The image implies a circle of infinite radius is a line. Intuitively, I understand this, but I was wondering whether this problem could be stated and proven formally? Under what definition of 'circle' and 'line' does this hold?
Thanks!
| Algebraic description
A generic algebraic curve of degree $2$ is the set of points satisfying
$$ax^2 + by^2 + cxy + dx + ey + f = 0$$
You can define circles to be those curves with $a=b\neq 0$ and $c=0$. Completing the square, you can compute the radius as
$$r=\sqrt{\frac{d^2 + e^2}{4a^2} - \frac{f}{a}}$$
For the case of $a=b=0$ your equation describes a line, and the radius becomes infinite due to the division by zero.
Note that the definition also includes circles with imaginary radius, which have no points in the real plane. You may exclude these by restricting the range of parameters.
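As a quick numerical sanity check (my own sketch, using the completed-square formula $r^2=\frac{d^2+e^2}{4a^2}-\frac{f}{a}$), you can watch the radius blow up as $a\to 0$:

```python
import math

def circle_radius(a, d, e, f):
    """Radius of a*(x^2 + y^2) + d*x + e*y + f = 0 with a != 0, via
    completing the square: r^2 = (d^2 + e^2)/(4a^2) - f/a."""
    return math.sqrt((d * d + e * e) / (4 * a * a) - f / a)

# x^2 + y^2 - 2x - 4y + 1 = 0 is the circle with center (1, 2), radius 2.
print(circle_radius(1, -2, -4, 1))  # 2.0

# Fixing d, e, f and letting a shrink toward 0 flattens the curve
# a*(x^2 + y^2) - y = 0 toward the line y = 0; the radius is 1/(2a):
for a in (1.0, 0.1, 0.01):
    print(a, circle_radius(a, 0, -1, 0))
```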
Complex plane and cross ratios
Interpreting the points of your plane as complex numbers, you can define that four points lie on a common circle or line if their cross ratio is a real number. Using this definition, a line is just a circle, and the only reasonable value to assign to its radius is $\infty$.
Möbius Geometry
A Möbius transformation will map circles and lines in the complex plane (or complex line, depending on your use of the word) $\mathbb C$ to other circles and lines. So the above definition of a circle suits these transformations as well. There is a field called Möbius geometry which uses 4-dimensional vectors to describe both lines and circles. In this setup, again, a line is a special case of a circle, and performing any radius computation will lead to infinity.
You may extend Möbius geometry a bit further to obtain Lie geometry, where even points are special cases of circles, namely those with zero radius. Lie transformations may transform generic circles to lines or points and vice versa.
Constant curvature
You may define a circle as a line of constant curvature, and without endpoints. You may imagine curvature as the inverse of the radius, but you can define it in other ways as well. Obviously, for the case of zero curvature and therefore infinite radius, you will obtain a line.
Projective geometry
You can define a circle to be a conic incident with the two complex circle points
$$\begin{pmatrix}1\\i\\0\end{pmatrix}
\text{ and }
\begin{pmatrix}1\\-i\\0\end{pmatrix}$$
where the points are given in homogeneous coordinates in $\mathbb{CP}^2$. There are conics with real coefficients which degenerate into a pair of lines. One of them is the line at infinity, incident with the two points above, and the other may be a finite line. So unless your conic degenerates to a double line at infinity, you get a single line as the finite portion of a conic which by definition is a circle. You can compute the center of that circle using the points above, and the result will be an infinite point, indicating the infinite radius.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 8,
"answer_id": 2
} |
Distance saved by shortening the road A long-distance driver's technique for saving time is to use the full width of the road you are on: you drive so that your car stays on the inside of every turn as much as possible. I want to know how much distance is saved.
As a math problem, it could be presented as two parallel curves in 3-space forming a ribbon, and you want to know the difference in length between the edges compared to a radius-minimizing path. The problem has a simplified presentation with two curves in a plane.
I am not sure of the best equation form for the edges to make it possible to calculate the radius-minimizing line between them.
| Let $\gamma(s)$ be the arc-length parameterized center of a road of width $2\epsilon$. Then the inner shoulder is given by
$$\psi(s) = \gamma(s) + \epsilon \gamma'(s)^{\perp}.$$
Here we orient $\gamma'^{\perp}$ so that it points inside the turn. Notice that $\psi$ has a discontinuous jump whenever curvature vanishes: we assume the truck can essentially "teleport" from one side of the road to the other whenever the direction of curvature changes.
We now have $\psi'(s) = \gamma'(s) + \epsilon \gamma''(s)^{\perp}$, so
$$\begin{align*}\|\psi'(s)\|^2 &= \gamma' \cdot \gamma' + 2\epsilon \gamma''^{\perp}\cdot \gamma' + \epsilon^2 \gamma''^{\perp} \cdot \gamma''^{\perp}\\
&= 1 - 2\epsilon \kappa + \epsilon^2 \kappa^2\\
&= (1-\epsilon \kappa)^2,
\end{align*}$$
where $\kappa$ is (unsigned) curvature. So the difference in distance traveled for the segment of road between $s=a$ and $s=b$ is
$$\int_a^b (\|\gamma'(s)\|-\|\psi'(s)\|)\ ds = \epsilon \int_a^b \kappa(s)\ ds.$$
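A quick numerical check of this formula on a quarter circle (my own sketch; the inner shoulder of a circular arc of radius $R$ is just the concentric arc of radius $R-\epsilon$, and $\epsilon\int\kappa\,ds = \epsilon\theta$ on an arc of angle $\theta$):

```python
import math

# Sanity check on a quarter circle of radius R: compare the polyline
# lengths of the center line and the inner shoulder with eps * theta.
R, eps, theta, N = 10.0, 0.5, math.pi / 2, 100000

def poly_length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

ts = [t * theta / N for t in range(N + 1)]
center = [(R * math.cos(t), R * math.sin(t)) for t in ts]
inner = [((R - eps) * math.cos(t), (R - eps) * math.sin(t)) for t in ts]

saved = poly_length(center) - poly_length(inner)
print(saved, eps * theta)  # both ~0.7853981...
```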
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Sketch the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$ I need help sketching the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$.
I see that the domain is all real numbers except $1$ and $-1$ as $x^2 - 1 = (x + 1)(x - 1)$. I can also determine that between $-1$ and $1$, the graph lies below the x-axis.
What is the next step? In previous examples I have determined the behavior near x-intercepts.
| There can be no $x$-intercepts since the numerator is never $0$. There are vertical asymptotes at $\pm1$ since the denominator there is $0$ and the numerator is not. There is a horizontal asymptote at $4$, in both directions, because when $x$ is large in absolute value, the lower-degree terms are negligible by comparison to $x^2$. And the curve never touches the horizontal asymptote because if the fraction equals $4$, then $4x^2 + 1 = 4(x^2-1)$ and that implies $1=-4$ by canceling the $4x^2$ from both sides.
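A few sample values confirm these features (a throwaway sketch, not a substitute for the sketching steps):

```python
def y(x):
    return (4 * x * x + 1) / (x * x - 1)

print(y(0))     # -1.0: below the x-axis between the asymptotes
print(y(0.99))  # large and negative: approaching x = 1 from the left
print(y(1.01))  # large and positive: approaching x = 1 from the right
print(y(100))   # ~4.0005: near the horizontal asymptote y = 4
```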
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A counterexample to theorem about orthogonal projection Can someone give me an example of noncomplete inner product space $H$, its closed linear subspace of $H_0$ and element $x\in H$ such that there is no orthogonal projection of $x$ on $H_0$. In other words I need to construct a counterexample to theorem about orthogonal projection when inner product space is not complete.
| As I did not succeed in coming up with an example by myself, I tried searching Google Books for
"not complete" "orthogonal projection".
I found a counterexample in the book Linear Operator Theory in Engineering and Science
by Arch W. Naylor and George R. Sell, pages 289 and 302.
(BTW, this was the first book on functional analysis I ever read as a student. Now I wish I hadn't stopped somewhere halfway back then; it seems that it contains a lot of interesting things, and I remember the style of the book as very readable.)
I will reproduce a brief version of their example below, but it is probably better if you have a look in the book.
I've kept the notation from the book, so $X$ is the noncomplete space (your $H$) and $M$ is the subspace (your $H_0$).
Suppose that $H$ is any Hilbert space and $X$ is a dense subspace
of $H$ that is not closed. Let $z\in H\setminus X$. We define
$$M=\{y\in X; \langle y,z \rangle=0\}.$$
This set $M$ is a closed subspace of $X$.
Now choose $x_0\in X\setminus M$. We will show that $x_0$
has no orthogonal projection on $M$.
Suppose that $y_0\in M$ is the orthogonal projection of $x_0$.
This means $y_0-x_0$ is orthogonal to $M$ and thus it is a scalar
multiple of $z$. This would imply $z\in X$, a contradiction.
So now we only ask whether it is possible to find $H$, $M$, $z$,
$x_0$ as above.
The choice from the book I mentioned is the following:
$H=\ell_2$,
$X=$ the set of all sequences that have only finitely many nonzero terms,
$z=(\frac1{k^2})_{k=1}^\infty$, and
$x_0$ can be chosen arbitrarily such that $\langle x_0,z \rangle \ne 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Root or zero...which to use when? This may seem like a very basic question, but:
What exactly is the difference between a root of a polynomial, and a zero? Of course I realise that they are technically exactly the same thing, but there seem to be subtle rules as to when to use each term, and a couple of times in the past I have been told I am using "root" where I should be using "zero".
Is it generally accepted that one should use "root" in an algebraic context, and "zero" in a analytic context? If not, when should one use one or the other...and does it really matter?!
| In general, one speaks of a zero of a function and a root of an equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
} |
Proof of Convergence: Babylonian Method $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ a) Let $a>0$ and the sequence $x_n$ fulfills $x_1>0$ and $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ for $n \in \mathbb N$. Show that $x_n \rightarrow \sqrt a$ when $n\rightarrow \infty$.
I have done it in two ways, but I guess I'm not allowed to use the first one and the second one is incomplete. Can someone please help me?
*
*We already know $x_n \rightarrow \sqrt a$, so we do another step of the iteration and see that $x_{n+1} = \sqrt a$.
*Using limits: $x_n \rightarrow x$, $x_{n+1} \rightarrow x$ (this is the part I think is incomplete: don't I have to show $x_{n+1} \rightarrow x$, and how?), we have that
$$x = \frac x 2 (1 + \frac a {x^2}) \Rightarrow 1 = a/x^2 \Rightarrow x = \sqrt a$$
b) Let the sequence $x_n$ be defined as $x_{n+1}= 1 + \frac 1 {x_n} (n \in \mathbb N), x_1=1$. Show that it converges and calculate its limit.
"Tip: Show that sequences $x_{2n}$ and $x_{2n+1}$ monotone convergent to the limit."
I didn't understand the tip, how can this help me? Does it make a difference if the number is odd or even?
Thanks in advance!
| For b):
Consider the function,
$F(x) = 1 + 1/x$.
If you set $F(x)$ equal to $x$ and solve for $x$ using the quadratic formula, you will see that $F$ has two fixed points, one positive and one negative.
With a little thought you might guess that our sequence is converging to the positive fixed point,
$\alpha=\frac{1+\sqrt5}{2}$.
Notice that we can restrict the domain of $F(x)$ and analyze the decreasing function
$F: [1.5,\, 2] \to [1.5,\, 2]$
Let $u$ and $v$ be any two distinct points in the closed interval $[1.5,\, 2]$.
Then
$\frac{(F(v) - F(u))}{(v - u)}$
is equal to (algebra)
$\frac {-1}{uv}$
The absolute value of this number can be no larger than
$\frac {1}{1.5\cdot 1.5}$, which is equal to $\frac {4}{9}$.
Applying the Banach Fixed Point Theorem,
we see that our sequence $x_{n+1}= 1 + \frac 1 {x_n} (n \in \mathbb N^*), x_1=1$ converges to $\alpha$, since $F(x_1)$ is 2.
Addendum
Actually, for our situation, we don't have to use Banach's theorem. One can easily prove the following, which can also be used to prove the convergence of our sequence $x_1, x_2, x_3, \ldots$
Proposition: Let $f(x)$ be a continuous strictly monotonic function mapping a closed interval
$I = [a,\, b]$
into itself.
Assume that there is exactly one fixed point $c$ for $f$ with $a < c < b$.
Then for all $n > 0$, $f^n(I)$ is a closed interval that contains $c$, with $c$ not an endpoint.
Suppose also that there is a positive $\lambda < 1$ such that for any $\alpha, \beta$ with $a < \alpha < \beta < b$, the absolute value of
$\frac{(f(\beta) - f(\alpha))}{(\beta - \alpha)}$
is less than $\lambda$.
Then the intersection $\bigcap_{n > 0} f^n(I)$ contains only one point, $c$.
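If it helps to see both iterations numerically, here is a small sketch (the iteration counts and tolerances are my own choices):

```python
import math

# Part a): the Babylonian iteration x -> (x + a/x)/2 converges to sqrt(a).
a, x = 7.0, 1.0
for _ in range(50):
    x = 0.5 * (x + a / x)
print(x, math.sqrt(7.0))  # both ~2.6457513...

# Part b): the iteration x -> 1 + 1/x converges to the golden ratio.
y = 1.0
for _ in range(100):
    y = 1.0 + 1.0 / y
print(y, (1.0 + math.sqrt(5.0)) / 2.0)  # both ~1.6180339...
```

The first iteration converges quadratically once it is close to $\sqrt a$; the second contracts by roughly the factor $4/9$ per step found above.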
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 7,
"answer_id": 6
} |
Question on Real Numbers and Limit Point Prove that every uncountable subset of $R$ (real numbers), has a limit point.
I tried using the Baire category theorem, which deals with uncountability, but I'm at sea.
If anyone can please help me with this problem I'll be glad. Thanks in advance
| Write
$$\mathbb{R} = \bigcup\limits_{n=-\infty}^\infty [n,n+1).$$
Then think about whether every term in this union could have only finitely many points of the uncountable set in question, and what happens if one of them has more than finitely many.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to find a moment estimator? Suppose you have $X_1,\ldots,X_{20}$ with density $f(x|\theta) = 0.1(1+\theta x)$ for $-1<x<1$, $-1<\theta<1$. How would you find the moment estimator of $\theta$?
Attempt: Isn't the moment estimator of $\theta$ just the first moment, which is the mean?
| $$
\int_{-1}^1 1+\theta x\;dx = 2,
$$
so you need $f(x|\theta) = \frac 12 (1+\theta x)$.
If you mean what I think you mean, I would call it a method-of-moments estimator of $\theta$ rather than a "moment estimator", since the latter term might be mistaken for "estimator of the moment(s)".
The first moment of this distribution is
$$
\int_{-1}^1 x f(x \mid \theta) \; dx,
$$
which by my reckoning is $\theta/3$. The first moment of the sample is $(X_1+\cdots+X_{20})/20$. You need to equate the first moment of the distribution with the first moment of the sample and then solve for $\theta$.
The method-of-moments estimator of $\theta$ would be equal to the sample mean only if $\theta$ were the population mean.
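A small simulation sketch of the resulting estimator $\hat\theta = 3\bar X$ (the sampler and all names below are my own choices; rejection sampling works here because the density is bounded by $1$ on $(-1,1)$):

```python
import random

# Method-of-moments sketch for f(x|theta) = (1 + theta*x)/2 on (-1, 1):
# the population mean is theta/3, so theta_hat = 3 * (sample mean).
def mom_estimate(xs):
    return 3 * sum(xs) / len(xs)

# Rejection sampler for the density: propose uniformly, accept with
# probability proportional to (1 + theta*x)/2.
def sample(theta, n, rng):
    out = []
    while len(out) < n:
        x = rng.uniform(-1, 1)
        if rng.uniform(0, 1) < (1 + theta * x) / 2:
            out.append(x)
    return out

rng = random.Random(0)
xs = sample(0.5, 20000, rng)
print(mom_estimate(xs))  # should land near the true theta = 0.5
```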
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
a continuous function satisfying $f(f(f(x)))=-x$ other than $f(x)=-x$ Is there a non-trivial solution to the functional equation $f(f(f(x)))=-x$ where $f$ is a continuous function defined on $\mathbb{R}$ ? Also, what about the general one $f^n(x)=-x$ where $f^n$ is $f$ composed with itself $n$ times and $n$ is odd.
In general, is there a theory about continuous solutions to $f^n(x)=g(x)$ where $g$ is a fixed continuous function? The only thing that I found was about the solutions of $f^2(x)=g(x)$, i.e., "square roots" in the sense of composition.
Thanks.
Edit : I forgot to mention that $n$ is supposed to be odd, sorry for the inconvenience.
| The only solution is $f(x)=-x$ for all odd $n$.
We have $f(0)=0$: if $f(0)=a$, then $f(f(a))=0$, and so $-a=f(f(f(a)))=f(0)=a$, which gives $a=0$.
Since $f^{2n}(x)=x$ for all $x$, the function $f$ permutes sets of at most $2n$ elements $\{\pm a_1,\pm a_2,\ldots,\pm a_n\}$. Fix one of these cycles, assume without loss of generality that all the $a_i$ are nonzero, and denote by $a>0$ the least positive number in this cycle. Then $f((0,a))$ is an open interval with endpoints $f(0)=0$ and $f(a)=b$. Assume that $b\neq \pm a$. Since $|b|>a$, the interval $f((0,a))$ must contain either $a$ or $-a$, so there is an $x$ with $|x|<a$ such that $f(x)=a$ or $f(x)=-a$. This is a contradiction, because $f$ is a bijection and $\pm a$ are the images of some $a_i$ with $|a_i|\geq a$. Thus $|f(a)|=a$ for every $a$; and since $f(a)=a$ would force $f^n(a)=a\neq -a$, we conclude $f(a)=-a$ for all $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Is $f(x)=f(x+y)+f(x+z)-f(x+y+z)$ a functional equation of a line? Let $f$ be a real-valued function satisfying the functional equation $$f(x)=f(x+y)+f(x+z)-f(x+y+z)$$ for all $x,y,z\in\mathbb{R}$. Is it true that $f$ must be the equation of a line, with no additional assumptions? Can one use calculus to see this without any a priori constraints on $f$ (that it be continuous, differentiable, etc.)?
| No, it is not true. Define $f$ arbitrarily on a basis of the vector space of real numbers over the rational numbers and extend $\mathbb Q$-linearly; every such function satisfies your equation, but in general it is not of the form $f(x)=ax+b$ (indeed, it need not even be continuous).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/82936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Maximum of the Variance Function for Given Set of Bounded Numbers Let $ \boldsymbol{x} $ be a vector of $n$ numbers in the range $ \left[0, c \right] $, where $ c $ is a positive real number.
What is the maximum of the variance of these $n$ numbers?
That is, what spread of the numbers maximizes the variance?
And what would be a tighter bound under other assumptions on the spread of the numbers?
The variance of the vector $ \boldsymbol{x} $ is given by:
$$ \operatorname{var} (\boldsymbol{x}) = \frac{1}{n} \sum_{i = 1}^{n} {\left( {x}_{i} - \overline{\mathbf{x}} \right )}^2 $$
Where the mean $\overline{\boldsymbol{x}}$ is given by:
$$ \overline{\boldsymbol{x}} = \frac{1}{n} \sum_{i = 1}^{n} {x}_{i} $$
| Since $x_i \leq c$,
$\displaystyle \sum_i x_i^2 = \sum_i x_i\cdot x_i \leq \sum_i c\cdot x_i = cn\bar{x}.$
Note also that $0 \leq \bar{x} \leq c$. Then,
$$
\begin{align*}
n\cdot \text{var}(\mathbf{x}) &= \sum_i (x_i - \bar{x})^2= \sum_i x_i^2 - 2x_i\bar{x} + \bar{x}^2\\
&= \sum_i x_i^2 - 2\bar{x}\sum_i x_i + n\bar{x}^2= \sum_i x_i^2 - n\bar{x}^2\\
&\leq cn\bar{x} - n\bar{x}^2 = n\bar{x}(c-\bar{x})
\end{align*}
$$
and thus $$\text{var}(\mathbf{x}) \leq \bar{x}(c-\bar{x}) \leq \frac{c^2}{4}.$$
Added note: (second edit)
The result $\text{var}(X) \leq \frac{c^2}{4}$ also applies to random variables
taking on values in $[0,c]$, and, as my first comment on the question says, putting half the mass at $0$ and the other half at $c$ gives the maximal variance of $c^2/4$. For the vector $\mathbf x$, if $n$ is even, the maximal variance $c^2/4$ occurs when $n/2$ of the $x_i$ have value $0$ and the
rest have value $c$.
Someone else posted an answer -- it has since been deleted -- which said the
same thing and added that if $n$ is odd, the variance is maximized when
$(n+1)/2$ of the $x_i$ have value $0$ and $(n-1)/2$ have value $c$,
or vice versa. This gives a variance of $(c^2/4)\cdot(n^2-1)/n^2$ which
is slightly smaller than $c^2/4$. Putting the "extra" point at $c/2$
instead of at an endpoint gives a slightly smaller variance of
$(c^2/4)\cdot(n-1)/n$, but both choices have variance approaching
$c^2/4$ asymptotically as $n \to \infty$.
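The extremal configurations above are easy to verify numerically (a small sketch with $c=2$, so the bound $c^2/4$ is $1$):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

c = 2.0
# n even (n = 10): half the points at 0, half at c attains the bound c^2/4.
print(variance([0.0] * 5 + [c] * 5))  # 1.0
# n odd (n = 9): the best achievable is (c^2/4)*(n^2-1)/n^2, just below it.
print(variance([0.0] * 5 + [c] * 4))  # ~0.98765
print((c * c / 4) * (9 * 9 - 1) / (9 * 9))
```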
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Zero divisor in $R[x]$ Let $R$ be commutative ring with no (nonzero) nilpotents. If $f(x) = a_0+a_1x+\cdots+a_nx^n$ is a zero divisor in $R[x]$, how do I show there's an element $b \ne 0$ in $R$ such that $ba_0=ba_1=\cdots=ba_n=0$?
| It is true over any commutative ring, and is sometimes called McCoy's theorem. Below is a proof sketch
from my sci.math post on May 4, 2004:
Theorem $\ $ Let $ \,F \in R[X]$ be a polynomial over a commutative ring $ \,R.\,$
If $ \,F\,$ is a zero-divisor then $ \,rF = 0\,$ for some nonzero $ \,r \in R.$
Proof $\ $ Suppose not. Choose $ \,G \ne 0\,$ of min degree with $ \,FG = 0.\,$
Write $ \,F =\, a +\,\cdots\,+ f\ X^k +\,\cdots\,+ c\ X^m\ $
and $ \ \ \ \ G = b +\,\cdots\,+ g\ X^n,\,$ where $ \,g \ne 0,\,$ and $ \,f\,$ is the highest deg coef of $ \,F\,$ with $ \,fG \ne 0\,$ (note that such an $ \,f\,$ exists else $ \,Fg = 0\,$ contra supposition).
Then $ \,FG = (a +\,\cdots\,+ f\ X^k)\ (b +\,\cdots\,+ g\ X^n) = 0.$
Thus $\ \,fg = 0\ $ so $\: \deg(fG) < n\,$ and $ \, FfG = 0,\,$ contra minimality of $ \,G.\ \ $ QED
Alternatively it follows by Gauss's Lemma
(Dedekind-Mertens form) or related results.
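A toy illustration of the theorem (my own example, not from the proof): in $R=\mathbb Z/6\mathbb Z$ the polynomial $F=2x+2$ is a zero divisor, since $3\cdot(2x+2)=0$, and a single ring element annihilates all its coefficients:

```python
# Find the nonzero elements r of Z/6Z that kill every coefficient of
# F = 2 + 2x, i.e. r * c = 0 (mod 6) for each coefficient c.
n = 6
F = [2, 2]  # coefficients of 2 + 2x

annihilators = [r for r in range(1, n) if all(r * c % n == 0 for c in F)]
print(annihilators)  # [3]
```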
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 5,
"answer_id": 1
} |
Fourier cosine series and sum help I have been having some problems with the following problem:
Find the Fourier cosine series of the function $\vert\sin x\vert$ in the interval $(-\pi, \pi)$. Use it to find the sums
$$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{1}{4n^2-1}$$ and $$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{(-1)^n}{4n^2-1}$$
Any help is appreciated, thank you.
Edit: I have gotten as far as working out the Fourier cosine series using the equations for cosine series
$$\phi (X) = 1/2 A_0 + \sum_{n\: =\: 1}^{\infty}\:\ A_n \cos\left(\frac{n\pi x}{l}\right)$$ and
$$A_m = \frac{2}{l} \int_{0}^{l} \phi (X) \cos\left(\frac{m\pi x}{l}\right) dx $$
I have found $$A_0 = \frac{4}{l}$$ but the rest of the question is a mess on my end and then I don't know how to relate the rest of it back to those sums.
| There is an alternative way to calculating the Fourier series as follows. The Fourier series for a periodic function $f(x)$ of period $2\pi$ (written $f(x) \sim \sum a_ne^{inx}$) is the sum
$$\sum a_n e^{inx}$$
where the Fourier coefficients $a_n$ are defined by $a_n = \frac{1}{2\pi}\int_{-\pi}^\pi f(x)e^{-inx} dx$. Now in your case, you can split the integral up into
$$\frac{1}{2\pi} \left(\int_{-\pi}^0 -\sin x e^{-inx} dx + \int_0^\pi \sin x e^{-inx} dx\right).$$
Now recall that $\sin x = \frac{e^{ix} - e^{-ix}}{2i}$ and chuck this into the integral to get the Fourier coefficient $a_n$, and hence you will obtain your Fourier series. You should get (I'm not spoiling anything because you can just chuck it into Mathematica) that
$$f(x) \sim \sum_{n \in \Bbb{Z}} \frac{(1 + (-1)^n)e^{inx}}{\pi (1- n^2)} $$ Now to complete your problem in evaluating that sum, I suggest the following: The fourier series of $f(x)$ is absolutely convergent so we can say that
$$f(x) = \sum_{n \in \Bbb{Z}} \frac{(1 + (-1)^n)e^{inx}}{\pi (1- n^2)}. $$
For $n$ odd the terms are zero, so actually you can write
$$f(x) = \sum_{n \in \Bbb{Z}} \frac{2e^{2inx}}{\pi (1- (2n)^2)}.$$
Now put $x = \pi$ so that $f(\pi) =0.$ Then we get that
$$\begin{eqnarray*} 0 = \sum_{n \in \Bbb{Z}} \frac{2e^{2in\pi}}{\pi (1- (2n)^2)} &=& \sum_{n\in\Bbb{Z}} \frac{2}{\pi (1- (2n)^2)} \\
&=&\frac{2}{\pi} + \sum_{n\in\Bbb{Z}, n \geq 1}\frac{4}{\pi (1- (2n)^2)} \\
\implies \sum_{n\in\Bbb{Z}, n \geq 1}\frac{1}{4n^2 - 1} &=& \frac{2}{\pi}\cdot \frac{\pi}{4}\\
&=& \frac{1}{2}.
\end{eqnarray*}$$
The evaluation of the second sum is similar except that instead of substituting in $x = \pi$ you substitute in $x = \pi/2$.
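Both closed forms are easy to check with partial sums; note the second value, $(2-\pi)/4$, is my own evaluation of the $x=\pi/2$ substitution, so verify it yourself:

```python
import math

# Partial sums of the two series in the question.
N = 100000
s1 = sum(1.0 / (4 * n * n - 1) for n in range(1, N + 1))
s2 = sum((-1) ** n / (4 * n * n - 1) for n in range(1, N + 1))

print(s1)                     # ~0.5
print(s2, (2 - math.pi) / 4)  # both ~-0.2853981...
```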
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Integral of product of two functions in terms of the integral of the other function Problem:
Let $f$ and $g$ be two continuous functions on $[ a,b ]$ and assume $g$ is positive. Prove that $$\int_{a}^{b}f(x)g(x)dx=f(\xi )\int_{a}^{b}g(x)dx$$ for some $\xi$ in $[ a,b ]$.
Here is my solution:
Since $f(x)$ and $g(x)$ are continuous, then $f(x) g(x)$ is continuous. Using the Mean value theorem, there exists a $\xi$ in $[ a,b ]$ such that $\int_{a}^{b}f(x)g(x)dx= f(\xi)g(\xi) (b-a) $ and using the Mean value theorem again, we can get $g(\xi) (b-a)=\int_{a}^{b}g(x)dx$ which yields the required equality.
Is my proof correct? If not, please let me know how to correct it.
| Actually, this theorem is called the first mean value theorem for integration, and there is a neat proof on Wikipedia. (One caveat about your argument: each application of the mean value theorem produces its own point $\xi$, and there is no reason the two points coincide, so the two steps cannot be combined as written.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Spectral sequence for Ext If I have a morphism of schemes $f:X\rightarrow Y$ and sheaves $\mathcal F,G$ on $X$, then
is there a spectral sequence which relates the Ext-groups
$\mathrm{Ext}(f_* \mathcal F, f_*\mathcal G)$ on $Y$ and $\mathrm{Ext}(\mathcal F, \mathcal G)$ on $X$?
I should add, that if this is not possible in general, that my morphism $f$ is actually an affine morphism and that I want to compare the two $\mathrm{Ext}$-groups in any way. The first thing I thought of was a spectral sequence, but perhaps there are other ways you know.
| Well, a useless thing to say is that in general you have the Grothendieck spectral sequence (cf. the answer by Matt Emerton to "Contravariant Grothendieck Spectral Sequence"); I'm thinking of $A, B$ there as your $Rf_*\mathcal F, Rf_*\mathcal G$.
But as your morphism is affine then this does not help one bit.
I guess it's impossible a priori to give any comparison: essentially because $f_*$ and $\underline{Hom}$ aren't compatible (which, via $E^\vee \otimes F = \underline{Hom}(E,F)$ is another way of saying that $f_*$ and $\otimes$ aren't compatible).
But I would very much like to be proven wrong, as I'm similarly stuck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
What exactly happens when I do a bit rotation? I'm sitting in my office and having difficulty working out what exactly happens when I do a bit rotation of a binary number.
An example:
I have the binary number 1110. By doing bit rotation, I get 1101, 1011, 0111 ...
so I have 14 and get 13, 11, and 7, and then 14 again.
I can't get a rule out of it... can somebody help me?
Excuse my bad English, I'm just so excited about this problem.
| If you have an $n$-bit binary number $m$, possibly with leading zeroes, a circular left shift gives you $$2\left(m-2^{n-1}\left\lfloor \frac{m}{2^{n-1}}\right\rfloor\right)+\left\lfloor \frac{m}{2^{n-1}}\right\rfloor=2m-(2^n-1)\left\lfloor\frac{m}{2^{n-1}}\right\rfloor,\tag{1}$$ and a circular right shift gives you $$\left\lfloor\frac{m}2\right\rfloor+2^{n-1}\left(m-2\left\lfloor\frac{m}2\right\rfloor\right)=2^{n-1}m-(2^n-1)\left\lfloor\frac{m}2\right\rfloor.\tag{2}$$
Changing the base from $2$ to $b$ merely requires changing $2$ to $b$ everywhere in $(1)$ and $(2)$.
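Formulas $(1)$ and $(2)$ can be checked directly on your example $1110_2 = 14$ (a small sketch):

```python
def rotl(m, n):
    """Circular left shift of an n-bit number m, following formula (1)."""
    h = m >> (n - 1)                    # the bit rotated out on the left
    return 2 * m - (2 ** n - 1) * h

def rotr(m, n):
    """Circular right shift of an n-bit number m, following formula (2)."""
    return 2 ** (n - 1) * m - (2 ** n - 1) * (m // 2)

x, seq = 0b1110, []                     # x = 14
for _ in range(4):
    x = rotl(x, 4)
    seq.append(x)
print(seq)  # [13, 11, 7, 14]
```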
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
A fair coin is tossed $n$ times by two people. What is the probability that they get the same number of heads?
Say we have Tom and John, each of whom tosses a fair coin $n$ times. What is the probability that they get the same number of heads?
I tried to do it this way: individually, the probability of getting $k$ heads for each is equal to $$\binom{n}{k} \Big(\frac12\Big)^n.$$ So, we can do $$\sum^{n}_{k=0} \left( \binom{n}{k} \Big(\frac12\Big)^n \cdot \binom{n}{k}\Big(\frac12\Big)^n \right)$$ which results in something very ugly. This ugly thing is equal to the 'simple' answer in the back of the book, $\binom{2n}{n}\left(\frac12\right)^{2n}$; the equality was verified by WolframAlpha, but it's not obvious when you look at it. So I think there's a much easier way to solve this; can someone point it out? Thanks.
| The probability John gets $k$ heads is the same as the probability John gets $n-k$ heads since the coin is fair.
So the answer to the original question is equal to the probability that the sum of Tom's and John's heads is $n$.
That is the probability of $n$ heads from $2n$ tosses which is indeed $\frac{1}{2^{2n}}{2n \choose n}$.
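The equality between the "ugly" sum and the closed form (an instance of Vandermonde's identity) can also be sanity-checked numerically; this is just an illustrative sketch:

```python
from math import comb

def p_same_heads(n):
    """Probability Tom and John get equal head counts, summed directly."""
    return sum(comb(n, k)**2 for k in range(n + 1)) / 4**n

def p_closed_form(n):
    """The book's closed form: C(2n, n) / 2^(2n)."""
    return comb(2 * n, n) / 4**n

# The two expressions agree for every n checked:
for n in range(1, 15):
    assert abs(p_same_heads(n) - p_closed_form(n)) < 1e-12
```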
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
} |
Whether $f(x)$ is reducible in $ \mathbb Z[x] $? Suppose that $f(x) \in \mathbb Z[x] $ has an integer root. Does it mean $f(x)$ is reducible in $\mathbb Z[x]$?
| If the degree of the polynomial is 1, then it is irreducible by definition. But if the degree is greater than one and it has an integer root $r$, then the polynomial is reducible, because it can be written in the form $f(x)=(x-r)g(x)+c$, where $c$ is a constant (by the remainder theorem). Plugging in $x=r$ yields $c=0$ and therefore $f(x)=(x-r)g(x)$.
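The division step can be made concrete with synthetic division (Horner's scheme); the polynomial below is a hypothetical example, not from the question:

```python
def divide_by_root(coeffs, r):
    """Divide f(x) (coefficients, highest degree first) by (x - r) via
    Horner's scheme.  Returns (quotient coefficients, remainder)."""
    q, acc = [], 0
    for c in coeffs:
        acc = acc * r + c
        q.append(acc)
    return q[:-1], q[-1]

# f(x) = x^3 - x^2 - 4x + 4 has the integer root 2, so it factors:
quot, rem = divide_by_root([1, -1, -4, 4], 2)
assert rem == 0          # the remainder vanishes at a root
print(quot)              # [1, 1, -2], i.e. f(x) = (x - 2)(x^2 + x - 2)
```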
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Showing $|\mathrm{Gal}(f(x))| = \deg(f)$ How can I prove that if $f(x)$ is a separable irreducible poly. and $\mathrm{Gal}(f(x))$ is abelian then $|\mathrm{Gal}(f(x))| = \deg(f)$?
| Let $K$ be the ground field and let $a$ be a zero of $f$. Then $K(a)$ is a subfield of the splitting field $L$ of $f$. Since $\mathrm{Gal}(L/K)$ is abelian the subgroup fixing $K(a)$ is normal, so $K(a)/K$ is a normal extension. Since $f$ is irreducible and it has a zero in $K(a)$, it splits completely in $K(a)$. So $L=K(a)$. Therefore $[L:K]=\deg(f)$, which means $|\mathrm{Gal}(L/K)|=\deg(f)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the square root of 7 is irrational using the Division Algorithm and case reasoning I proved this previously using proof by contradiction like so:
I am not seeing where to start to prove it using the Quotient Remainder theorem or case reasoning however. Can anyone see the best way to go about doing this? Thanks!
| If the use of division algorithm is not compulsory, then a simple argument using prime factorization theorem will suffice to prove that $7|a^2 \implies 7|a$.
Consider the contra-positive statement $7\nmid a \implies 7\nmid a^2$. The fact that $7$ does not divide $a$ means that $a$ does not have $7$ as its prime factor. Thus $a^2$ will not have any $7$ as prime factor too. Hence $a^2$ is not divisible by $7$ either.
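If the division algorithm must be used, the case reasoning is a finite check: write $a = 7q + r$ with $0 \le r \le 6$, so that $a^2 \equiv r^2 \pmod 7$. A sketch enumerating the seven cases:

```python
# By the division algorithm, a = 7q + r with r in {0,...,6}, and
# a^2 = 49q^2 + 14qr + r^2, so a^2 is congruent to r^2 mod 7.
squares_mod_7 = {r: (r * r) % 7 for r in range(7)}
print(squares_mod_7)   # {0: 0, 1: 1, 2: 4, 3: 2, 4: 2, 5: 4, 6: 1}

# Only the case r = 0 gives a square divisible by 7, so 7 | a^2 => 7 | a.
assert [r for r in range(7) if squares_mod_7[r] == 0] == [0]
```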
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Proving residue formula with one pole around $0$
$f(z)$ is a function which has a simple pole at $z=0$, has finite amount of poles in the upper halfplane (not on the real axis) and for which holds $\lim \limits _{|z| \to \infty, \Im(z) \ge 0}|zf(z)|=0 $. We want to show by choosing a path, that : $$\int_{-\infty}^{\infty} f(z)dz = 2\pi i( \sum \operatorname{Res} z_{i}) + (\operatorname{Res} 0)\pi i .$$
The plan was to use a "spectacles" contour: the large upper semicircular arc over $(-R,R)$, indented by a small arc of radius $\epsilon$ around the origin. Then we can pick $\epsilon$ so that $0$ doesn't lie inside the contour but all the other poles do. So from all those poles we get $2\pi i (\sum \operatorname{Res} z_{i})$.
Then for the $0$ I don't see how to continue.
Does anybody see a way ? Please do tell.
| Let $c$ be the residue at $0$. Your main contour going around the origin in an $\epsilon$-arc introduces an error, which you can correct for by adding another integral which goes backwards around the $\epsilon$-arc and then straight through the pole along the real axis.
To evaluate this auxiliary integral, write $f(z) = c\frac{1}{z} + g(z)$. Now $g$ has a removable singularity at $0$ and no singularities inside the auxiliary contour, so its integral is $0$. You can now handle the auxiliary integral of $\frac{c}{z}$ by symmetry -- the pieces along $[-\epsilon,0)$ and $(0,\epsilon]$ cancel out, and the $\epsilon$-arc is half of a full circle around the pole, so it contributes half of $2\pi i c$, i.e. $\pi i c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is a real world application of polynomial factoring? The wife and I are sitting here on a Saturday night doing some algebra homework. We're factoring polynomials and had the same thought at the same time: when will we use this?
I feel a bit silly because it always bugged me when people asked that in grade school. However, we're both working professionals (I'm a programmer, she's a photographer) and I can't recall ever considering polynomial factoring as a solution to the problem I was solving.
Are there real world applications where factoring polynomials leads to solutions? or is it a stepping-stone math that will open my mind to more elaborate solutions that I actually will use?
Thanks for taking the time!
| If one has a 2x2 two-person zero-sum matrix game M (where neither row dominates the other, nor column dominates the other) where Row can play row I or II and Column can play column 1 or 2, what is the optimal mixed strategy for each player? If Row plays Row I with probability p and Row II with probability 1-p and Column plays Column 1 with probability q and Column 2 with probability 1-q then one can compute the expected value from Row's point of view.

This expected value is a polynomial in p and q (and constants from the matrix M). One can factor this polynomial into the form C(p-s)(q-t) + V. V is the value of the game (from Row's point of view) and s and t are the optimal mixed strategies and C is a constant.

The beautiful result is that if Row plays optimally (p = s) then whatever Column does not matter (similarly reasoning for Column about what Row does), since the term (p-s)(q-t) will be 0. So if V is positive Row can get a gain of V on average with each play of the game in the long run. (When V is negative the game is "biased" towards Column; when V is 0 the game is fair.) If Column wants to keep losses to a minimum (V positive case) the optimal play is q = t.

This is not a "standard" high school factoring problem but it is a very nifty way to see a not obvious lovely result by factoring a polynomial.
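A sketch of the factoring described above, with hypothetical payoff entries $a,b,c,d$. The closed forms $s=(d-c)/C$, $t=(d-b)/C$, $V=(ad-bc)/C$ with $C=a-b-c+d$ are assumptions of this sketch, obtained by expanding $E(p,q)$ and matching coefficients with $C(p-s)(q-t)+V$:

```python
from fractions import Fraction as F

def solve_2x2_game(a, b, c, d):
    """Factor Row's expected payoff as E(p, q) = C(p - s)(q - t) + V
    for the payoff matrix [[a, b], [c, d]]."""
    C = F(a) - b - c + d               # the constant in the factored form
    s = F(d - c) / C                   # Row's optimal weight on row I
    t = F(d - b) / C                   # Column's optimal weight on column 1
    V = (F(a) * d - F(b) * c) / C      # the value of the game
    return s, t, V

def E(p, q, a, b, c, d):
    """Row's expected payoff under mixed strategies (p, 1-p) and (q, 1-q)."""
    return p * q * a + p * (1 - q) * b + (1 - p) * q * c + (1 - p) * (1 - q) * d

# Hypothetical payoff entries with no dominating row or column:
a, b, c, d = 2, -1, -3, 4
s, t, V = solve_2x2_game(a, b, c, d)
# If Row plays p = s, Column's choice of q no longer matters:
assert E(s, F(0), a, b, c, d) == E(s, F(1), a, b, c, d) == V
```

Here $s = 7/10$, $t = 1/2$ and $V = 1/2$: the factored form makes Row's guarantee visible at a glance.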
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70",
"answer_count": 17,
"answer_id": 14
} |
A metric space in which every infinite set has a limit point is separable I am struggling with one problem.
I need to show that if $X$ is a metric space in which every infinite subset has a limit point then $X$ is separable (has countable dense subset in other words).
I am trying to use the result I have proven prior to this problem, namely every separable metric space has a countable base (i.e. any open subset of the metric space can be expressed as a sub-collection of the countable collection of sets).
I am not sure this is the right way, can anyone outline the proof?
Thanks a lot in advance!
| A proof based on Rudin's Hints (Page 45, Qn 24)
Step 1: Fix $ \delta >0$, and pick $x_{1}\in X$. Having chosen $x_{1},...,x_{j}\in X$, choose $x_{j+1}\in X$, if possible, so that $d(x_{i},x_{j+1})\geq \delta $ for $i =1,...,j.$ This process must stop after a finite number of steps, otherwise $\{x_{i}: i\in \mathbb{N}\}$ is an infinite set in X, so it should have a limit point, say $x\in X$. Then any neighborhood of $x$ with radius less than $\frac{\delta}{2}$ contains at most one term of the sequence (remember, any two distinct terms of the sequence are at least $\delta$ apart). A contradiction.
Thus X can be covered by finitely many neighborhoods of radius $\delta$.
Step 2: Take $\delta = \frac{1}{n}$ ($n = 1,2,3,...$). Let $\{x_{n_{1}},...x_{n_{k(n)}}\}$ be the finite set obtained from step 1 corresponding to $\delta=\frac{1}{n}$. Let $D = \cup_{n=1}^{\infty} \{x_{n_{1}},...x_{n_{k(n)}}\}$. Then D is countable. Next we prove D is dense in X, which will prove the result.
If $D=X$ nothing to prove, otherwise let $x\in X \setminus D$ and take an $\epsilon$ - neighborhood of $x$. Choose n such that $\frac{1}{n}<\epsilon$. Neighborhoods of $x_{n_{1}},...x_{n_{k(n)}}$ with radius $\frac{1}{n}$ will cover X. So $x$ will be in one of such neighborhoods, say neighborhood of $x_{n_{i}}$, hence $d(x,x_{n_{i}})<\frac{1}{n}<\epsilon$. Thus $x$ is a limit point of D.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Dense subseries of divergent series Suppose $\sum_{n>1} a_n=\infty$ and $0<a_{n+1}<a_n$.
Let $b_k=a_k$ or $b_k=0$ for all integers $k$.
Let $R=\lim_{n\rightarrow\infty}((1/n)\sum_{q=1}^{q=n} b_q/a_q)$
If $R>0$, how to show that $\sum_{n>1}b_n=\infty$?
If $0<\lim_{n\rightarrow\infty}((1/\sqrt n)\sum_{q=1}^{q=n} b_q/a_q)$, must $\sum_{n>1}b_n=\infty$?
| Per OP's request, posting the above comment as an answer.
I believe the first part of your question is answered by Theorem 2 in the paper Tibor Šalát: On subseries, Mathematische Zeitschrift, Volume 85, Number 3, 209-225.
For the second part $a_n=\frac1n$ and
$b_n=
\begin{cases}
a_n,& n=k^2 \text{ for some } k\in\mathbb{N}\\
0,& \text{otherwise}
\end{cases}$
should work as a counterexample.
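A quick numerical sanity check of this counterexample: with $a_n = 1/n$ and $b_n$ supported on the perfect squares, $\sum_{q\le n} b_q/a_q$ simply counts the squares up to $n$, i.e. equals $\lfloor\sqrt n\rfloor$, while $\sum b_n = \sum 1/k^2$ stays bounded:

```python
from math import isqrt

# a_n = 1/n;  b_n = a_n exactly when n is a perfect square, else 0.
def ratio_sum(n):
    """sum_{q <= n} b_q / a_q  =  number of perfect squares up to n."""
    return isqrt(n)

# (1/sqrt(n)) * ratio_sum(n) -> 1 > 0 ...
assert abs(ratio_sum(10**6) / 10**3 - 1) < 1e-3
# ... while sum b_n = sum 1/k^2 stays bounded (by pi^2/6 < 2):
partial = sum(1.0 / k**2 for k in range(1, 10**4))
assert partial < 2
```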
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
commutativity of non-scalar matrices Problem:
Prove that commutativity on the set of non-scalar $2\times2$ matrices is an equivalence relation. (That is, for all $A$, $B$, and
$C$: if $AB = BA$ and $BC = CB$, then $AC = CA$.)
For commutativity to be an equivalence relation, we have to show it is reflexive, symmetric and transitive. The first two properties are obvious. Any help on how to prove the third property?
| This is a write-up of the comments adding some thoughts that came to my mind:
*
*The first step is to show that $A$ and $B$ commute iff $S^{-1}AS$ and $S^{-1}BS$ commute. Therefore it follows that for the proof of transitivity, i.e. that we have $A$, $B$ and $C$ with $A$ and $B$ commuting and $B$ and $C$ commuting, we can assume that $B$ is in Jordan normal form. Since $B$ is not a scalar multiple of the identity, there are now two cases, treated in the two points below.
*The matrix $B$ is a diagonal matrix with two distinct eigenvalues. In this case, as the OP already remarked it follows that also $A$ and $C$ are diagonal matrices and hence they commute.
*The matrix $B$ consists of one Jordan block to a single eigenvalue. In this case, both $A$ and $C$ also have to be Jordan blocks with a single eigenvalue and one checks by direct computation that these commute.
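A concrete instance of the Jordan-block case, with hypothetical matrices: all three are upper triangular with equal diagonal entries, i.e. polynomials in the same nilpotent part, so they commute pairwise:

```python
def matmul(A, B):
    """2x2 matrix product, matrices given as tuples of row tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def commute(A, B):
    return matmul(A, B) == matmul(B, A)

# B is a single Jordan block (case 3); A and C are also "Jordan-block
# shaped" (upper triangular, equal diagonal entries) -- hypothetical picks:
A = ((2, 3), (0, 2))
B = ((1, 1), (0, 1))
C = ((5, -4), (0, 5))
assert commute(A, B) and commute(B, C)   # the hypotheses of transitivity
assert commute(A, C)                     # the conclusion
```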
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/83965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $O(\frac{1}{n}) = o(1)$? Sorry about yet another big-Oh notation question, I just found it very confusing.
If $T(n)=\frac{5}{n}$, is it true that $T(n)=O(\frac{1}{n})$ and $T(n) = o(1)$? I think so because (if $h(n)=\frac{1}{n}$)
$$
\lim_{n \to \infty} \frac{T(n)}{h(n)}=\lim_{n \to \infty} \frac{\frac{5}{n}}{\frac{1}{n}}=5>0 ,
$$
therefore $T(n)=O(h(n))$.
At the same time (if $h(n)=1$)
$$
\lim_{n \to \infty} \frac{T(n)}{h(n)}=\lim_{n \to \infty}\frac{5/n}{1}=0,
$$
therefore $T(n)=o(h(n))$.
Thanks!
| $\frac{1}{n}=o(1)$ and therefore if $f=O(\frac{1}{n})$ then $f=o(1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
On the Origin and Precise Definition of the Term 'Surd' So, in the course of last week's class work, I ran across the Maple function surd() that takes the real part of an nth root. However, conversation with my professor and my own research have failed to produce even an adequate definition of the term, much less a good reason for why it is used in that context in Maple. Various dictionaries indicate that it refers to certain subsets (perhaps all of?) the irrationals, while the Wikipedia reference link uses it interchangeably with radical. However, neither of those jive with the Maple interpretation, as $\mbox{Surd}(3,x) \neq\sqrt[3]{x}$ for $x<0$.
So, the question is: what is a good definition for "surd"?
For bonus points, I would be fascinated to see an origin/etymology of the word as used in mathematical context.
| Surds originated from the Latin word surdus, which meant "mute". This "muted" sound is largely thought to represent irrational numbers, whereas rational numbers would be a pure, clear sound. Go to https://www.google.com.tr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwj0zp78rafJAhXEVywKHfiwDCIQFggbMAA&url=http%3A%2F%2Fwww.mathsisgoodforyou.com%2FAS%2Fsurds.htm&usg=AFQjCNEnoI88dgh2NOoZQoDtFVUn-nRHiw&sig2=qr8bEci4rb7QDnnfheZSkQ for more information.
Also, the definition I found at http://www.bbc.co.uk/schools/gcsebitesize/maths/number/surdsrev1.shtml says that a surd is a square root that cannot be simplified into a whole number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 3
} |
Problem about $[x]$
$$[x]-2[x/2]\leq 1$$
Equivalently, $[x]-2[x/2]$ assumes only the values 0 and 1.
It seems easy, but I don't know how to prove it...
| Let $f(x)=[x]-2[x/2]$.
Then
$$f(x+2)= [x+2]-2[(x+2)/2]=[x]+2-2[x/2+1]=[x]+2-2[x/2]-2=f(x) \,.$$
Thus $f$ is periodic with period $2$. It suffices to prove that $f(x) \leq1$ for all $x \in [0,2)$. But this is obvious since then $[x] \leq 1, [x/2] \geq 0$.
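A numerical sanity check of the periodicity and of the claim that $f(x)=[x]-2[x/2]$ takes only the values $0$ and $1$:

```python
from math import floor

def f(x):
    """f(x) = [x] - 2[x/2], with [.] the floor function."""
    return floor(x) - 2 * floor(x / 2)

# f is 2-periodic, and it takes only the values 0 and 1:
xs = [k / 100 for k in range(-500, 500)]
assert all(f(x) in (0, 1) for x in xs)
assert all(f(x + 2) == f(x) for x in xs)
```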
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
A Question about the Boundedness of the Conditional Expectation of a Random Variable Assume you are given a probability space $ ( \Omega, \mathcal{ F}, P ) $, a bounded random variable $ X $ on $ ( \Omega, \mathcal{ F}, P) $, and a sub-$\sigma$-algebra $ \mathcal{A} $ of $ \mathcal{F} $.
Is it true that the conditional expectation $ E[X | \mathcal{A}] $ of $ X $ given $ \mathcal{A} $ is again a bounded random variable?
Thanks a lot for your help!
Regards,
Si
| Yes. The result follows from the fact that, if $X_1\le X_2$ a.s., then $E(X_1|\mathcal{A})\le E(X_2|\mathcal{A})$.
Let $-B\le X\le B$ for some constant $B$ and apply the above result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Monotonicity of a length in a figure Here is something I have been wondering about today... It seems graphically obvious, but I haven't been able to find a "clean" proof of it.
If $x_1>x_2$, then $d_1>d_2$ in the following figure:
Would someone have a clue ?
| As $BC=BD$ the perpendicular bisector $m$ of $CD$ goes through $B$. The assumption $x_1>x_2$ means that the point $A$ is to the right of $m$, and the same is then true for the point $E$. It follows that $d_1>d_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Do calculators have floating point error? As a programmer, I have been told about floating point errors on computers. Do calculators have floating point errors too?
Example.
0.1 (display) = .0999999998603016 (actual value used) on computers
Not really 0.1 But you can see it is close.
| Calculators are computers, too; they're just smaller. Surely if we knew how to represent arbitrary real numbers inside calculators, we could do the same thing with desktop computers.
That said, it's possible—both on a calculator and on a computer—to represent some real numbers exactly. No computer I know of would represent $\frac12$ inexactly, since its binary expansion (0.1) is short enough to put inside a floating point register. More interestingly, you can also represent numbers like $\pi$ exactly, simply by storing them in symbolic form. In a nutshell, instead of trying to represent $\pi$ as a decimal (or binary) expansion, you just write down the symbol "$\pi$" (or, rather, whatever symbol the computer program uses for $\pi$).
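In Python (assuming IEEE-754 double precision) the contrast between the two cases is easy to exhibit:

```python
from decimal import Decimal
from fractions import Fraction

# 1/2 has the short binary expansion 0.1, so it is stored exactly ...
assert Fraction(0.5) == Fraction(1, 2)

# ... but 1/10 has an infinite binary expansion and must be rounded:
assert Fraction(0.1) != Fraction(1, 10)
print(Decimal(0.1))   # 0.1000000000000000055511151231257827...
```

`Decimal(0.1)` prints the exact value of the nearest double to $1/10$, making the rounding visible.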
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Geometric proof for inequality While on AOPS, I saw this interesting problem. I was wondering how many different approaches could be used to tackle the problem.
In other words I am looking for interesting and unique ways to solve the question:
Show geometrically that $\sqrt{2}+\sqrt{3}>\pi$
| The tools we need to do this were provided to us in 1654 by Christiaan Huygens in his De Circuli Magnitudine Inventa. I'll rely on his Proposition IX, as well as this derivation at Wikipedia, both of which are entirely geometrical in nature.
The former states that the circumference of a circle is less than two thirds the perimeter of an inscribed regular polygon plus one third the perimeter of a circumscribed regular polygon with the same number of sides, while the latter gives the perimeter of an inscribed regular polygon as the geometric mean of the perimeters of a circumscribed regular polygon with the same number of sides and an inscribed regular polygon with half the number of sides, and the perimeter of a circumscribed regular polygon as the harmonic mean of the perimeters of inscribed and circumscribed regular polygons with half the number of sides.
Thus, denoting the perimeter of a regular $n$-gon inscribed into the unit circle by $u_n$ and the perimeter of a regular $n$-gon circumscribed around the unit circle by $U_n$, we have:
$$u^2_{2n}=u_nU_{2n}\;,$$
$$\frac2{U_{2n}}=\frac1{u_n}+\frac1{U_n}\;,$$
$$2u_n+U_n\gt6\pi\;.$$
If we seed the recurrence with regular hexagons and write $\sigma=2-\sqrt3$, we obtain
$$
\begin{array}{c|c|c|l}
n&u_n&U_n&2u_n+U_n\\
\hline\\
6&6&4\sqrt3&12+4\sqrt3\\
12&12\sqrt{\sigma}&24\sigma&24(\sqrt\sigma+\sigma)
\end{array}
$$
Thus $4(\sqrt\sigma+\sigma)\gt\pi$. A couple of elementary squaring steps yield $\sqrt2+\sqrt3\gt4(\sqrt\sigma+\sigma)$, and thus $\sqrt2+\sqrt3\gt\pi$.
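None of this is needed for the geometric proof, but the hexagon-to-dodecagon step and the final chain of inequalities can be sanity-checked numerically:

```python
from math import pi, sqrt, tan, sin

# Perimeters of regular n-gons inscribed in / circumscribed about the
# unit circle (for cross-checking the recurrences, not part of the proof):
u = lambda n: 2 * n * sin(pi / n)
U = lambda n: 2 * n * tan(pi / n)

sigma = 2 - sqrt(3)
# The hexagon -> dodecagon row of the table:
assert abs(u(12) - 12 * sqrt(sigma)) < 1e-9
assert abs(U(12) - 24 * sigma) < 1e-9

# Huygens' Proposition IX with n = 12, and the concluding inequalities:
assert 2 * u(12) + U(12) > 6 * pi
assert sqrt(2) + sqrt(3) > 4 * (sqrt(sigma) + sigma) > pi
```

Numerically $4(\sqrt\sigma+\sigma)\approx 3.14235$, squeezed between $\pi\approx 3.14159$ and $\sqrt2+\sqrt3\approx 3.14626$.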
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 0
} |
Does the family of series have a limit? For $r<1$ define $F(r)=\sum_{n\in\mathbb N}(-1)^nr^{2^n}$. Does $F$ have a limit as $r\nearrow 1$?
| Note that
$$
F(r)=r-F(r^2)\tag{1}
$$
Thus, if $a=\lim\limits_{r\to1^-}F(r)$ exists, then
$$
a=\lim_{r\to1^-}F(r)=\lim_{r\to1^-}r-\lim_{r\to1^-}F(r^2)=1-a\tag{2}
$$
Therefore, if the limit exists then it is $a=\frac{1}{2}$.
Applying equation $(1)$ twice, we get
$$
F(r)=r-r^2+F(r^4)\tag{3}
$$
As $r\to1$, $(3)$ indicates $F$ tends toward being periodic in $-\log(-\log(r))$ with period $\log(4)$. Note that as $r\to1^-$, $-\log(-\log(r))\to\infty$. $F(r)$ is the sum of the lengths of the intervals in the following animation
The value of the sum oscillates between $0.49728$ and $0.50272$ over each period. Therefore, $\lim\limits_{r\to1^-}F(r)$ does not exist.
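The oscillation can be observed numerically by sampling $F$ over one period of $-\log(-\log r)$ close to $r=1$; the bound $0.49<F<0.51$ below is a loose envelope around the stated range $[0.49728, 0.50272]$:

```python
from math import exp

def F(r, terms=60):
    """Partial sum of F(r) = sum (-1)^n r^(2^n); for r bounded away
    from 1 the tail beyond `terms` underflows to 0."""
    return sum((-1)**n * r**(2**n) for n in range(terms))

# The exact functional identity F(r) = r - F(r^2):
assert abs(F(0.9) - (0.9 - F(0.9**2))) < 1e-12

# Sample F over one period of -log(-log r) (length log 4) near r = 1:
vals = [F(exp(-1e-6 * 4 ** (-k / 40))) for k in range(41)]
assert all(0.49 < v < 0.51 for v in vals)      # F hugs 1/2 ...
assert max(vals) - min(vals) > 0.004           # ... but keeps oscillating
```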
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
proof of the Cauchy integral formula
$ D=D_{r}(c), r > 0 .$ Show that if $f$ is continuous in $\overline{D}$ and holomorphic in $D$, then for all $z\in D$: $$f(z)=\frac{1}{2\pi i} \int_{\partial D}\frac{f(\zeta)}{\zeta - z} d\zeta$$
I don't understand this question because I don't see how it is different to the special case of the Cauchy Integral formula. I would be very glad if somebody could tell me what the difference is and how to show that it is true.
| I am not sure what the "usual Cauchy integral formula" is for you but I am assuming it is the following result:
Theorem If $f$ is holomorphic in an open set containing the closure of $D_{r}(c)=\{z\in\mathbb{C}:\left|z-c\right|<r\}$ for some $r>0$ and $c\in\mathbb{C}$, then $f(z)=\frac{1}{2\pi i}\int_{\left|\zeta-c\right|=r} \frac{f(\zeta)}{\zeta - z}d\zeta$ for all $z\in D_{r}(c)$.
Note that the hypothesis of the theorem is that $f$ is holomorphic in an open set containing the closure of $D_{r}(c)$ (in particular, there is a smoothness condition on the restriction of $f$ to the boundary of $D_{r}(c)$), whereas in the result stated in your question, we are only given continuity on the closure of $D_{r}(c)$.
In any case, the general result stated in your question is easy to prove. I will give a hint: fix $z\in D_{r}(c)$ and apply the usual Cauchy integral formula (i.e., the theorem stated above) to conclude that:
$f(z)=\frac{1}{2\pi i}\int_{\left|\zeta-c\right|=\epsilon} \frac{f(\zeta)}{\zeta-z}d\zeta$
for all $\epsilon < r$ sufficiently close to $r$. You can now take the limit $\epsilon\to r$ in the above equality and apply the Lebesgue dominated convergence theorem to conclude the proof of the general result. (Please do not forget to verify that the hypothesis of the dominated convergence theorem is satisfied!)
I hope this helps!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Solve $f(f(n))=n!$ What am I doing wrong here: ( n!=factorial )
Find $f(n)$ such that $f(f(n))=n!$
$$f(f(f(n)))=f(n)!=f(n!).$$
So $f(n)=n!$ is a solution, but it does not satisfy the original equation except for $n=1$, why?
How to solve $f(f(n))=n!$?
| What you have to do is partition the natural numbers into chains of iterates of the factorial function:
$0, 1, 1, 1, \dots$
$2,2,\dots$.
$3,3!,3!!,3!!!,\dots$
$4, 4!, 4!!, \dots $
You go on by starting the next chain with the smallest natural number that has not yet been used.
Except for the first two chains, no chain will have repeated values and since the function $n!$ is injective for positive $n$, the chains will be disjoint.
Now, to find $f$, setting $f(0)=f(1)=1$ and $f(2)=2$ will give the desired relation for these values.
For all the other values, you pair up the chains, and for a pair of chains
$a_1,a_2,\dots$
$b_1,b_2,\dots$
you set $f(a_k)=b_k$ and $f(b_k)=a_{k+1}$.
This ensures that $f\circ f$ just moves one step along each chain as it should to satisfy your equation.
As you can see, you have a lot of choice in pairing up the chains.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Complex equation solution. How can I solve it? I have this complex equation $|z+2i|=| z-2 |$. How can I solve it? Please help me
| We have $|z+2i|^2=| z-2 |^2$, which implies that
$(z+2i)\overline{(z+2i)}=(z-2)\overline{(z-2)}$, that is
$(z+2i)(\overline{z}-2i)=(z-2)(\overline{z}-2)$. This implies that
$$z\overline{z}-2iz+2i\overline{z}+4=z\overline{z}-2z-2\overline{z}+4,$$
that is
$-iz+i\overline{z}=-z-\overline{z}$, or
$$i(z-\overline{z})=z+\overline{z}.$$
Now if we write $z=a+bi$, we get
$i(2bi)=2a$, or $b=-a$.
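Geometrically, $|z+2i|=|z-2|$ says that $z$ is equidistant from $-2i$ and $2$, i.e. $z$ lies on the perpendicular bisector of the segment joining them — which is exactly the line $b=-a$. A quick numerical check:

```python
# Points with b = -a, i.e. z = a - a*i, are equidistant from -2i and 2:
for a in [-3, -1, 0, 0.5, 2, 10]:
    z = complex(a, -a)
    assert abs(abs(z + 2j) - abs(z - 2)) < 1e-12

# A point off the line b = -a fails the equation:
assert abs((1 + 1j) + 2j) != abs((1 + 1j) - 2)
```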
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/84898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Namesake of Cantor's diagonal argument There are two results famously associated with Cantor's celebrated diagonal argument. The first is the proof that the reals are uncountable.
This clearly illustrates the namesake of the diagonal argument in this case. However, I am told that the proof of Cantor's theorem also involves a diagonal argument. Given a set $S$, suppose there exists a bijection $f:S\longrightarrow\ P(S)$ from $S$ to its powerset. The construction of the set
$$B=\{b\in S\mid b\notin f(b)\}$$
is said to be a diagonal argument due to the dual occurrence of $b$ in $b\notin f(b)$. Now I am not exactly sure why this is called a diagonal argument. Is there a geometric representation of this argument like the picture above? Or is it simply an analogy to the first proof using the idea of constructing a witness to show $f$ is not surjective?
| Diagonalization is a common method in mathematics. Essentially it means "write it in an infinite matrix and then walk along a coordinate line which approaches infinity on both axes".
The "Cantor diagonal argument" says write the numbers in a matrix, and take the $n$-th number's $n$-th digit, and change it.
The Cantor theorem is a form of diagonalization because what you actually do is write the function in matrix form:
$$\begin{array}{|c|c|c|c|c}
&x_1 & x_2 & x_3 & \ldots\\ \hline
f(x_1)\\ \hline
f(x_2)\\ \hline
f(x_3)\\ \hline
\vdots
\end{array}$$
Now we write $1$ into the blank cells if $x_i\in f(x_j)$, and $0$ otherwise. The proof takes the set of all $x\in X$ such that $x\notin f(x)$, That is walk along the diagonal and take the coordinates which give $0$. This defines a subset of $X$ which is not in the range of $f$, much like the diagonal argument gives a number which is not in the enumeration.
Similar, but different arguments are given when showing the real numbers can be defined as Cauchy sequences of rationals up to some equivalence, and that this is a complete space. We take a Cauchy sequence of Cauchy sequences and we take the $k$-th element of the $k$-th sequence.
Of course this is not exactly what we do, but it is close enough. We need to toy with $\epsilon-\delta$ and define the diagonal we are going to walk on (for the $k$-th element we want to take some $x^k_j$ for $j$ large enough so and so...)
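The matrix picture of Cantor's theorem is short enough to execute on a finite "attempted surjection" (a hypothetical toy example):

```python
def diagonal_witness(f, domain):
    """Given f mapping elements of `domain` to subsets (as Python sets),
    return B = {x : x not in f(x)}.  B differs from every f(x) at x itself,
    so B cannot lie in the image of f."""
    return {x for x in domain if x not in f[x]}

# A hypothetical attempt at a surjection {0,1,2} -> P({0,1,2}):
f = {0: {0, 1}, 1: set(), 2: {1, 2}}
B = diagonal_witness(f, range(3))
print(B)                                  # {1}
assert all(B != f[x] for x in range(3))   # B is missed by f
```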
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
} |
$f\colon\mathbb{R}\to\mathbb{R}$ such that $f(x)+f(f(x))=x^2$ for all $x$? A friend came up with this problem, and we and a few others tried to solve it. It turned out to be really hard, so one of us asked his professor. I came with him, and it took me, him and the professor about an hour to say anything interesting about it.
We figured out that for positive $x$, assuming $f$ exists and is differentiable, $f$ is monotonically increasing. (Differentiating both sides gives $f'(x)\cdot[\text{positive stuff}]=2x$.) So $f$ is invertible there. We also figured out that $f$ becomes arbitrarily large, and we guessed that it grows faster than any linear function. Plugging in $f^{-1}(x)$ for $x$ gives $x+f(x)=[f^{-1}(x)]^2$. Since $f(x)$ grows faster than $x$, $f^{-1}$ grows slower and therefore $f(x)=[f^{-1}(x)]^2-x\le x^2$.
Unfortunately, that's about all we know... No one knew how to deal with the $f(f(x))$ term. We don't even know if the equation has a solution. How can you solve this problem, and how do you deal with repeated function applications in general?
| Some observations (without assuming differentiability):
First, monotonicity in the left and right half line does not require differentiability. Observe that if there exists $x_0, y_0\in\mathbb{R}$ such that $f(x_0) = f(y_0)$, then necessarily $f(f(x_0)) = f(f(y_0))$ and hence $x_0^2 = y_0^2$. So this implies that $f$ is injective among the positive (negative) numbers. If you assume we are looking for continuous solutions, this also implies that $f$ is monotonic in the region concerned. Also, one gets that $f(x)$ cannot be bounded from above trivially because $x^2$ is not bounded from above.
Next, observe that a fixed point of $f$ can only be $f(x) = x \implies 2x = x^2$, which means that the only possible fixed points of $f$ are $0$ and $2$. Along the same lines, we get that for a continuous solution, $f(0) \geq 0$: assume the contrary, then $f(0) = y < 0$. We have by the functional equation $f(y) = -y > 0$, so by continuity there exists some $y' < 0$ such that $f(y') = 0$. But this means that $f(y') + f(f(y')) = f(0) = y'^2 > 0$, a contradiction.
Third, still assuming that $f$ is continuous, we ask whether $f$ can be bounded on either of the half lines. The answer is no, as it will necessarily contradict the functional equation. Hence $f(x)$ must be unbounded in each half line. Using that $f(x)$ must be unbounded from above, we have that there are three cases: $f(+\infty) < 0$, $f(-\infty) < 0$, or neither. (Cannot be both, because of monotonicity.)
The second case can be ruled out, as for very large negative $M$, we would get $f(M) < 0$, so $f(f(M)) < f(0)$, contradicting the functional equation. The third case will also be ruled out if $f(0) \neq 0$. By the previous arguments $f(0)$, if non-zero, must be positive; this implies that $f(f(0)) < 0$ and so $f$ cannot be monotonically increasing on the right half line.
In the first case, we get that monotonicity and unboundedness imply there exists $M > 0$ such that if $|x| > M$, $x f(x) < 0$ (in fact, using $f(0) + f(f(0)) = 0$, one can take $M= |f(0)|$). In particular this means that for $x < -M$, $f(f(x)) < f(0) < f(x) \implies f(x) \geq x^2 / 2$. But we can assume (by choosing a larger $M$ if necessary) that $M > 2$, which implies that $f(x) > M$, which implies that in fact $f(f(x)) < 0$ for $x < -M$. And hence $f(x) \geq x^2$ if $x < -M$. This implies that for sufficiently large and positive $x$, $x^2 = f(f(x))+ f(x) \geq f(x)^2 + |f(x)|$. This implies that for sufficiently large and positive $x$, $|f(x)| < x$.
In the third case, monotonicity and positivity guarantees that $f(x) < x^2$ for $x > 0$. And given a solution on the right half line, setting $f(x) = f(-x)$ on the left half line gives automatically a continuous solution. Furthermore, we can show that $f(x)$ cannot be $O(x^\alpha)$ for any $\alpha < \sqrt{2}$. (Assume the contrary, for all sufficiently large $x$ we have $f(x) + f(f(x))\leq C x^{(\alpha^2)}$ for a universal constant $C$, and so contradicts the functional equation.) Similarly $f(x)$ cannot be bounded below, asymptotically, by any $\beta x^\alpha$ with $\alpha > \sqrt{2}$.
BTW, I don't think your argument that "if $f$ is differentiable that $f(x)$ must be increasing for positive $x$" is correct. You used the fact that $f(x)$ is arbitrarily large (and positive) somewhere. But it doesn't have to be so for positive $x$: in that step you are making the assumption that $f:\mathbb{R}_+ \to \mathbb{R}_+$, which is not necessarily true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 4
} |
Inverse function of a polynomial and its derivative I know it is a simple problem but I am having trouble. Here is what I have so far:
Let $f(x) = x^5 + 2x^3 + x - 1$
a) Find $f(1)$ and $f'(1)$
I have a) done. $f(1)$ is $3$ and $f'(1)$ is $12$
b) Find $f^{-1}(3)$ and $(f^{-1})'(3)$
I need help with the first part. I think the way to find the inverse is to switch the $x$'s with $y$'s and then solve for $y$. But I am having trouble completing this. I have the following:
$$
x = y^5 + 2y^3 + y -1
$$
$$
x - 1 = y^5 + 2y^3 + y
$$
What am I supposed to do here?
| I don't think you need to find $f^{-1}$ explicitly. Remember that $f^{-1}(3)$ in this case is the number $x$ such that $f(x)=3$, and use what you've found in part (a).
For $(f^{-1})' (3)$, remember that by the inverse function theorem,
$$
(f^{-1})'(b)=\frac{1}{f'(a)}
$$
where $f(a)=b$, and again use what you've found in part (a).
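Since $f'(x) = 5x^4 + 6x^2 + 1 > 0$ everywhere, $f$ is strictly increasing and can be inverted numerically by bisection. A quick sketch checking both parts of (b) (the bisection bounds are my choice):

```python
def f(x):
    return x**5 + 2*x**3 + x - 1

def fprime(x):
    return 5*x**4 + 6*x**2 + 1

def finv(y, lo=-10.0, hi=10.0):
    # f is strictly increasing, so bisection recovers f^{-1}(y)
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = finv(3)
print(x)              # f^{-1}(3), i.e. 1 (up to machine precision)
print(1 / fprime(x))  # (f^{-1})'(3) = 1/12, about 0.08333
```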
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Can every group be represented by a group of matrices?
Can every group be represented by a group of matrices?
Or are there any counterexamples? Is it possible to prove this from the group axioms?
| It is not true that every group can be represented by a group of finite-dimensional matrices (say over $\mathbb{C}$). The groups that can are called linear. There are many examples of non-linear groups; here is a relatively simple one.
Claim: The group $(\mathbb{Z}/2\mathbb{Z})^{\infty}$ is not linear.
Proof. Suppose to the contrary that there exists a faithful representation $(\mathbb{Z}/2\mathbb{Z})^{\infty} \to \text{GL}_n(\mathbb{C})$ for some $n$. In particular, for arbitrarily large $m$, there exists a faithful representation $(\mathbb{Z}/2\mathbb{Z})^m \to \text{GL}_n(\mathbb{C})$. We can conjugate this to a representation into $U(n)$ and then simultaneously diagonalize to obtain a representation into $\mathbb{T}^n$. But the subgroup of elements of $\mathbb{T}^n$ of order $2$ is $(\mathbb{Z}/2\mathbb{Z})^n$, so the representation cannot be faithful if $m > n$; contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56",
"answer_count": 5,
"answer_id": 1
} |
$f:A \to B$ and $f$ is meromorphic function then does it follow that $f: \partial A \to \partial B$? Let $f:\mathcal{M} \to \hat{\mathbb{C}}$ where $\mathcal{M}$ is a arbitrary Riemann surface and $f$ is a meromorphic function.
Let $A \subset \mathcal{M}$.
If $f:A \to B$ then $f:\partial A \to \partial B$.
Does this result hold? Does $A$ have to be compact? What's a nice concise but clear proof?
Thanks
| No. Let $B=\hat{\mathbb{C}}$, and $A$ be anything with nonempty boundary. Then $f$ certainly maps $A$ into $B$, but since $\partial B=\varnothing$, $f$ can't map $\partial A$ into $\partial B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solving $A(x) = 2A(x/2) + x^2$ Using Generating Functions Suppose I have the recurrence:
$$A(x) = 2A(x/2) + x^2$$ with $A(1) = 1$.
Is it possible to derive a closed form using generating functions? I know Generatingfunctionology shows how to solve recurrences like $A(x) = 2A(x-1) + x$. But is it possible to solve the above recurrence as well?
| As per Qiaochu's comment on the answer so far, consider
$$a_n = 2 a_{\lfloor n/2 \rfloor} + n^2$$
with $a_1 = 1$ and $a_n = 1, 6, 11, 28, 37, 58, 71, 120,\dots$ for $n = 1,2,\dots$. Then
$$a_{2n} = 2 a_n + 4n^2 \quad\quad\text{and}\quad\quad a_{2n+1} = 2 a_n + 4n^2 + 4n + 1$$
where both recurrences are valid for $n\ge 1$. Working with each recurrence we can use generating functions to obtain a system of functional equations. Let
$$f(z) = \sum_{n=1}^{\infty}a_n z^n$$
be the generating function for the sequence of $a_n$'s. Working with the first equation we multiply by $z^{2n}$, sum over all $n\ge 1$
$$\sum_{n=1}^{\infty}a_{2n}z^{2n} = 2\sum_{n=1}^{\infty}a_n(z^2)^n + \sum_{n=1}^{\infty}4n^2z^{2n}$$
and obtain
$$ \frac{f(z) + f(-z)}{2} = 2f(z^2) + \frac{4 z^2 \left(1+z^2\right)}{\left(1-z^2\right)^3}$$
Working with the second equation, we multiply by $z^{2n+1}$, sum over all $n\ge 1$
$$\sum_{n=1}^{\infty}a_{2n+1}z^{2n+1} = 2z\sum_{n=1}^{\infty}a_n(z^2)^n + \sum_{n=1}^{\infty}(4n^2+4n+1)z^{2n+1}$$
and obtain
$$\frac{f(z)-f(-z)}{2} -z = 2zf(z^2)+\frac{z^3 \left(z^4-2 z^2+9\right)}{\left(1-z^2\right)^3}$$
We can obtain a solution by solving these functional equations.
EDIT: Adding the two equations together and simplifying we obtain
$$f(z) = \frac{z+z^2}{(1-z)^3} + 2(1+z)f(z^2)$$
We can iterate this equation to obtain better and better approximation of $f(z)$. I believe that if we iterate enough times to have $f(z^{2^t})$ as part of the approximation, then the approximation will be exact for the first $2^t-1$ coefficients.
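The iteration in the EDIT can be carried out on truncated power series. A quick sketch (the list representation is mine) confirming that the iterated coefficients match the recurrence:

```python
N = 16
base = [n * n for n in range(N + 1)]   # (z+z^2)/(1-z)^3 = sum_{n>=1} n^2 z^n

f = [0] * (N + 1)
for _ in range(6):                     # iterate f(z) = base(z) + 2(1+z) f(z^2)
    g = [0] * (N + 1)
    for n, c in enumerate(f):          # 2(1+z) f(z^2): coefficient c of z^n
        if 2 * n <= N:                 # feeds z^{2n} and z^{2n+1}
            g[2 * n] += 2 * c
        if 2 * n + 1 <= N:
            g[2 * n + 1] += 2 * c
    f = [u + v for u, v in zip(base, g)]

a = [0, 1]                             # a_n = 2 a_{floor(n/2)} + n^2, a_1 = 1
for n in range(2, N + 1):
    a.append(2 * a[n // 2] + n * n)

print(f[1:9])                          # [1, 6, 11, 28, 37, 58, 71, 120]
print(f[1:] == a[1:])                  # True
```

After $t$ iterations the coefficients of $z^n$ are exact for $n < 2^t$, which is consistent with the remark above.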
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Module homomorphisms $\prod \mathbb{Z} \rightarrow \mathbb{Z}$ Let $f\colon \prod \mathbb{Z} \rightarrow \mathbb{Z}$ be a $\mathbb{Z}$-module homomorphism.
I want to show that $f(e_i) = 0$ for almost all $i$ where $e_i$ are the standard unit vectors.
I assume $f(e_i) \neq 0$ for infinitely many $i$. We can assume without restriction that
$f(e_i) > 0$ for all $i \geq 1$.
Now is the following a valid argument and if not why?
It is $f(\sum e_i) = \sum (f(e_i)) = \infty$, a contradiction.
Or do I have to use a trickier argument?
| Suppose $f : \prod \mathbb{Z} \to \mathbb{Z}$ is a group morphism. (I assume the product is indexed by $\mathbb{N}$.)
Let $(x)$ be a sequence of strictly positive numbers $(x_0, \ldots, x_n, \ldots)$ such that for all $n\ge 0$, $x_n > 2 |f(x_0, \ldots, x_{n-1}, 0, \ldots)|$ and for all $n \ge 1$, $x_n$ is a multiple of $x_{n-1}$ strictly larger than $x_{n-1}$.
Now, $(x_n)$ is increasing, so there exists $n_0$ such that $x_{n_0} > 2 |f(x)|$.
Suppose $n\ge n_0$
$f(x) - f(x_0, \ldots, x_{n-1}, 0, \ldots) = f(0, \ldots, 0, x_n, \ldots) \equiv 0 \pmod {x_n}$
But $|f(x) - f(x_0, \ldots, x_{n-1}, 0, \ldots)| \le |f(x)| + |f(x_0, \ldots, x_{n-1}, 0, \ldots)| \lt x_n/2 + x_{n_0}/2 \le x_n$.
A number which is a multiple of something strictly bigger than itself has to be zero, so $f(x) = f(x_0, \ldots, x_{n-1}, 0, \ldots)$ for all $n \ge n_0$.
In particular, for all $n\ge n_0$, $f(x_n e_n) = f(x)-f(x) = 0$, and since $x_n > 0$, $f(e_n) = 0$
A similar trick can show that in fact, $f(x) = \sum_{k=0}^{n_0} x_k f(e_k)$ for all sequences $(x)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Zero Dim Topological group I have this assertion which looks rather easy (or as always I am missing something):
We have $G$ topological group which is zero dimensional, i.e it admits a basis for a topology which consists of clopen sets, then every open nbhd that contains the identity element of G also contains a clopen subgroup.
I naively thought that if I take $\{e\}$, i.e the trivial subgroup, it's obviously closed, so it's also open in this topology, i.e clopen, and it's contained in every nbhd that contains $e$, but isn't it then too trivial.
Missing something right?
:-)
| First of all, in a general topological space, a single point need not be closed. But even if you are assuming that points are closed (e.g., this is a Hausdorff space), it does not necessarily follow from the zero-dimensional assumption that $\{e\}$ is open:
$\{e\}$ closed $\implies$ complement of $\{e\}$ open $\implies$ complement of $\{e\}$ is the union of clopen sets
But you cannot conclude from here (as you may want to) that the complement of $\{e\}$ is closed, because there may be infinitely many clopen sets involved in the union, and the infinite union of closed sets need not be closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
proving a function satisfies a Lipschitz condition
Let $F$ be a closed set of $ \mathbb{R} $ whose complement has finite measure. Let $\delta(x) = d (x, F) =\inf \{ |x - z| \mid z \in F\}$.
Prove $ \delta$ continuous by proving $| \delta(x) - \delta(y) | \leq |x - y|$
I appreciate any kind of hint.
Thanks
| I don't think we need $F$ to be a closed set whose complement has finite measure etc. The result holds for any nonempty $F$.
Hint: $|x-z|\le |x-y|+|y-z|$ and take $\inf$ over $z\in F$.
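A quick numerical illustration of the hint (the finite closed set $F$ is my choice, just for testing; any nonempty set works):

```python
import random

def dist(x, F):
    # d(x, F) = inf over z in F of |x - z|
    return min(abs(x - z) for z in F)

F = [-3.0, 0.5, 2.0, 7.5]
random.seed(0)
ok = True
for _ in range(10_000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # the Lipschitz bound, with a tiny slack for float rounding
    ok = ok and abs(dist(x, F) - dist(y, F)) <= abs(x - y) + 1e-12
print(ok)   # True
```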
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Geometry problem: Line intersecting a semicircle Suppose we have a semicircle that rests on the negative x-axis and is tangent to the y-axis. A line intersects both axes and the semicircle. Suppose that the points of intersection create three segments of equal length. What is the slope of the line?
I have tried numerous tricks, none of which work sadly.
| Let the line meet the axes at $A(-a,0)$ and $B(0,b)$, and the semi-circle at $A^\prime$ and $B^\prime$ (with $A^\prime$ the closer to $A$, and $B^\prime$ the closer to $B$). Let $O$ be the origin, and define $d := |AA^\prime|=|A^\prime B^\prime|=|B^\prime B| > 0$ as the common length of the segments.
The Power of a Point Theorem, applied to point $B$, tells us that
$$\begin{eqnarray*}
|BB^\prime|\cdot|BA^\prime| &=& |BO|^2 \\
\implies d \cdot (2d) &=& b^2 \\
\implies 2 d^2 &=& b^2
\end{eqnarray*}$$
Also, Pythagoras tells us that
$$\begin{eqnarray*}
a^2 + b^2 = ( 3 d )^2 = 9 d^2
\end{eqnarray*}$$
Eliminating $b$, we have
$$a^2 = 7 d^2$$
so that the slope is
$$\frac{b}{a} = \frac{\sqrt{2}d}{\sqrt{7}d}=\sqrt{\frac{2}{7}}$$
NOTE. If we cared for the actual value of $d$, we could leverage the Power of Point $A$ (writing $P$ for the point $(-2,0)$, so the semicircle has radius $1$; note that $a = d\sqrt{7} > 2$ below, so $A$ lies outside the circle):
$$\begin{eqnarray*}
|AP|\cdot|AO| &=& |AA^\prime|\cdot|AB^\prime|\\
\implies (a-2)\cdot a &=& 2 d^2 \\
\implies (d\sqrt{7}-2)\cdot d\sqrt{7} &=& 2 d^2 \\
\implies \frac{2\sqrt{7}}{5} &=& d
\end{eqnarray*}$$
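The slope can be double-checked numerically. Assuming a semicircle of radius $1$ on $[-2,0]$ and writing the line through $A(-a,0)$ and $B(0,b)$ as $P(t)=A+t(B-A)$, substitution into $(x+1)^2+y^2=1$ gives the quadratic $(a^2+b^2)t^2+2a(1-a)t+(a^2-2a)=0$. Three equal segments mean the roots are $t=1/3$ and $t=2/3$, i.e. they sum to $1$ and multiply to $2/9$; Vieta's formulas then force $5a=14$. A sketch (variable names mine):

```python
from math import sqrt

a = 14 / 5                         # from Vieta: t1 + t2 = 1 and t1*t2 = 2/9
b = sqrt(2 * a * (a - 1) - a * a)  # from t1 + t2 = 1

# roots of (a^2+b^2) t^2 + 2a(1-a) t + (a^2 - 2a) = 0
A2, A1, A0 = a * a + b * b, 2 * a * (1 - a), a * a - 2 * a
disc = sqrt(A1 * A1 - 4 * A2 * A0)
t1, t2 = (-A1 - disc) / (2 * A2), (-A1 + disc) / (2 * A2)

print(t1, t2)        # 1/3 and 2/3: the circle cuts AB into equal thirds
print((b / a) ** 2)  # 2/7, so the slope is sqrt(2/7)
```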
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to use the Extended Euclidean Algorithm manually? I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?
This is more a comment on the method explained by Bill Dubuque than a proper answer in itself, but I think there is a remark so obvious that I don't understand why it is hardly ever made in texts discussing the extended Euclidean algorithm. This is the observation that you can save yourself half of the work by computing only one of the Bezout coefficients. In other words, instead of recording for every new remainder $r_i$ a pair of coefficients $k_i,l_i$ so that $r_i=k_ia+l_ib$, you need to record only $k_i$ such that $r_i\equiv k_ia\pmod b$. Once you have found $d=\gcd(a,b)$ and $k$ such that $d\equiv ka\pmod b$, you can then simply put $l=(d-ka)/b$ to get the other Bezout coefficient. This simplification is possible because the relation that gives the next pair of intermediate coefficients is perfectly independent for the two coefficients: say you have
$$
\begin{aligned} r_i&=k_ia+l_ib\\ r_{i+1}&=k_{i+1}a+l_{i+1}b\end{aligned}
$$
and Euclidean division gives $r_i=qr_{i+1}+r_{i+2}$, then in order to get
$$
r_{i+2}=k_{i+2}a+l_{i+2}b
$$
one can take $k_{i+2}=k_i-qk_{i+1}$ and $l_{i+2}=l_i-ql_{i+1}$, where the equation for $k_{i+2}$ does not need $l_i$ or $l_{i+1}$, so you can just forget about the $l$'s. In matrix form, the passage is from
$$
\begin{pmatrix} r_i&k_i&l_i\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix}
\quad\text{to}\quad
\begin{pmatrix} r_{i+2}&k_{i+2}&l_{i+2}\\ r_{i+1}&k_{i+1}&l_{i+1}\end{pmatrix}
$$
by subtracting the second row $q$ times from the first, and it is clear that the last two columns are independent, and one might as well just keep the $r$'s and the $k$'s, passing from
$$
\begin{pmatrix} r_i&k_i\\ r_{i+1}&k_{i+1}\end{pmatrix}
\quad\text{to}\quad
\begin{pmatrix} r_{i+2}&k_{i+2}\\ r_{i+1}&k_{i+1}\end{pmatrix}
$$
instead.
A very minor drawback is that the relation $r_i=k_ia+l_ib$ that should hold for every row is maybe a wee bit easier to check by inspection than $r_i\equiv k_ia\pmod b$, so that computational errors could slip in a bit more easily. But really, I think that with some practice this method is just as safe and faster than computing both coefficients. Certainly when programming this on a computer there is no reason at all to keep track of both coefficients.
A final bonus is that in many cases where you apply the extended Euclidean algorithm you are only interested in one of the Bezout coefficients in the first place, which saves you the final step of computing the other one. One example is computing an inverse modulo a prime number $p$: if you take $b=p$, and $a$ is not divisible by it, then you know beforehand that you will find $d=1$, and the coefficient $k$ such that $d\equiv ka\pmod p$ is just the inverse of $a$ modulo $p$ that you were after.
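For concreteness, here is the one-coefficient ("half-extended") version described above in code (the function name is mine):

```python
def half_ext_gcd(a, b):
    # track only (r, k) with r = k*a (mod b); recover l once at the end
    r0, k0, r1, k1 = a, 1, b, 0
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        k0, k1 = k1, k0 - q * k1
    d, k = r0, k0
    l = (d - k * a) // b      # the other Bezout coefficient
    return d, k, l

d, k, l = half_ext_gcd(240, 46)
print(d, k, l)                # 2 -9 47
print(k * 240 + l * 46 == d)  # True
```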
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85",
"answer_count": 3,
"answer_id": 2
} |
Expected value of smallest value Let's say we have a discrete random variable $X$ uniformly distributed on $[1, n]$. Now we make $n$ experiments to get $X$ and record the minimal value of $X$ as $X_\min$. My question is, what is the expected value of $X_\min$?
| For clarity, I prefer to segregate the sample size from the support of the distribution. That is, I will assume that $X$ is uniform over $\{ 1, 2, \ldots, U \}$. The special case that the question is concerned with is not much simpler anyway. $\newcommand{\e}{\mathrm{e}} \newcommand{\Xmin}{X_\min}$
For $1 \leqslant r \leqslant U$, the probability that the minimum $\Xmin$ is at least $r$ is equal to
$$
\Pr(\Xmin \geqslant r) = \frac{(U-r+1)^n}{U^n}.
$$
This is because the event $\Xmin \geqslant r$ happens iff $X_i \geqslant r$ for all $i \in \{ 1, 2, \ldots, n \}$. For any fixed $i$, the probability of the latter event occurring is $\frac{U-(r-1)}{U}$. And as usual, by independence, the probability of all the $n$ events occurring simultaneously is just the product of the individual probabilities.
Therefore (e.g., see this wikipedia page for the formula used),
$$
\mathbf{E}(\Xmin) = \sum_{r \geqslant 1} \Pr(\Xmin \geqslant r) = \sum_{1 \leqslant r \leqslant U} \frac{(U-r+1)^n}{U^n} = \frac{1}{U^n} \sum_{s = 1}^{U} s^n, \tag{$\ast$}
$$
by re-indexing the sum.
The final expression $(\ast)$ is essentially the sum of the $n^{th}$ powers of the first $U$ natural numbers. I don't think this can be simplified much further. :-)
Some asymptotics. We can say a bit more about the original question assuming $n \to \infty$.
In this case, for constant $r$, the probability that $\Xmin \geqslant r$ is equal to
$$
\Big( 1 + \frac{1-r}{n} \Big)^n \to \e^{1-r}.
$$
Of course, if $r$ is not a constant but grows with $n$ (i.e., $r \to \infty$ as $n \to \infty$), then this probability goes to $0$. Thus the probability that the minimum is exactly $r$ is
$$
\e^{1-r} - \e^{-r} = (\e-1) \cdot \e^{-r},
$$
which is a geometric distribution (starting at $1$). The expected value of the distribution approaches
$$
\sum_{r \geqslant 1} \e^{1-r} = \frac{1}{1 - \frac{1}{\e}} = \frac{\e}{\e - 1}.
$$
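The formula $(\ast)$ and the asymptotic limit are easy to check numerically (the function name is mine):

```python
import math

def expected_min(U, n):
    # E[X_min] = (1/U^n) * sum_{s=1}^{U} s^n, which is (*)
    return sum((s / U) ** n for s in range(1, U + 1))

print(expected_min(2, 1))      # 1.5, the mean of a single draw from {1, 2}
for n in (10, 100, 1000):      # the question's special case U = n
    print(n, expected_min(n, n))
print(math.e / (math.e - 1))   # the limit, about 1.58198
```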
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Calculate gross salary when knowing the net salary and tax brackets Assume the annually gross salary is $100,000.
Tax brackets:
*
*0 - 50K - 10% = 5K in taxes
*50K - 70K - 20% = 4K in taxes
*70K - $90K - 30% = 6K in taxes
*90K and up - 40% = 4K in taxes
The income tax on 100K would be 19K.
So the net salary would be 81K.
However, knowing only the net salary (81K) and the tax brackets (1-4) how can I find out what was the gross salary?
Thank you very much.
UPDATE: Would the same rules apply if I have 2 levels of brackets? For instance Federal and State income tax. Let's assume that the brackets are the same and Federal and State charge you the same amount on each tax bracket.
UPDATE 2: Actually I will combine Federal and State tax into one set of brackets and apply the same rule as it was one set of brackets.
Consider the net salary $x$; we will find the gross salary $y$:

def gross(x):
    if x < 45000:                      # 45000 = 0.9 * 50000
        y = x / 0.9
    elif x < 61000:                    # 61000 = 45000 + 0.8 * 20000
        y = 50000 + (x - 45000) / 0.8
    elif x < 75000:                    # 75000 = 61000 + 0.7 * 20000
        y = 70000 + (x - 61000) / 0.7
    else:
        y = 90000 + (x - 75000) / 0.6
    return y
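As a round-trip sanity check with the question's numbers (the function names are mine), one can apply the brackets forward and then invert them case by case:

```python
def net_from_gross(g):
    # brackets: 10% up to 50K, 20% to 70K, 30% to 90K, 40% above
    brackets = [(0, 50000, 0.10), (50000, 70000, 0.20),
                (70000, 90000, 0.30), (90000, float("inf"), 0.40)]
    tax = sum((min(g, hi) - lo) * rate for lo, hi, rate in brackets if g > lo)
    return g - tax

def gross_from_net(x):
    # the net amounts at the bracket edges are 45K, 61K, 75K
    if x <= 45000:
        return x / 0.9
    if x <= 61000:
        return 50000 + (x - 45000) / 0.8
    if x <= 75000:
        return 70000 + (x - 61000) / 0.7
    return 90000 + (x - 75000) / 0.6

print(net_from_gross(100000))   # 81000, i.e. 19K of tax as in the question
print(gross_from_net(81000))    # 100000 (up to float rounding)
```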
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/85957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
An unintuitive probability question Suppose you meet a stranger in the street walking with a boy. He tells you that the boy is his son, and that he has another child. Assuming equal probability for boy and girl, and equal probability for the stranger to walk with either child, what is the probability that the second child is a male?
The way I see it, we get to know that the stranger has 2 children, meaning each of these choices are of equal probability:
(M,M)
(M,F)
(F,M)
(F,F)
And we are given the additional information that one of his children is a boy, thus removing option 4 (F,F). Which means that the probability of the other child to be a boy is 1/3. Am I correct?
I believe you're correct: the answer 1/3 is right based on the wording of your problem. According to your problem, there is no specification on whether the boy is the oldest or youngest, and therefore the boy's sibling can be an older sister, a younger sister, or a brother. The three remaining family types (M,M), (M,F), (F,M) are equally likely, so having a sister is twice as likely as having a brother.
It also seems like in this problem, according to the wording, you're CHOOSING, beforehand, all the two-children families, and then EXCLUDING all the girl-girl families, leaving you with families in which at least one child is a boy.
You're not choosing a boy, and then saying that he is part of a two-sibling family. If that were the case, it would be the same question as asking: What is the probability that the boy's to-be-born sibling will be a boy? The answer is obviously 1/2, because it's either a boy or girl.
However, in your problem you are choosing a family rather than a particular boy; since there is no specification, the siblings can come in any order, leaving you with twice the probability of the sibling being a sister rather than a brother.
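The distinction between the two readings can be seen in a quick simulation (the sampling scheme in each variant is my interpretation of the two scenarios):

```python
import random

random.seed(1)
a_num = a_den = b_num = b_den = 0

for _ in range(200_000):
    kids = [random.choice("MF"), random.choice("MF")]

    # Variant 1: keep every family with at least one boy (the question's reading)
    if "M" in kids:
        a_den += 1
        a_num += kids == ["M", "M"]

    # Variant 2: observe one random child; condition on that child being a boy
    seen = random.randrange(2)
    if kids[seen] == "M":
        b_den += 1
        b_num += kids[1 - seen] == "M"

print(a_num / a_den)   # close to 1/3
print(b_num / b_den)   # close to 1/2
```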
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Trig reciprocal function nomenclature? The fact that the reciprocal of $\sin\theta$ is $\csc\theta$, and the reciprocal of $\cos\theta$ is $\sec\theta$ messed with my head for the longest time when I was taking trig. Why are the functions named this way, when an alliterative scheme would seemingly be more sensible?
While I'm at it, what's the reason for choosing the names sine, cosine, and tangent? The words sinusoidal and tangental come to mind, but perhaps these words could have come from the function names, not the other way around.
| Here is a nice explanation of the origin of all these names. In summary:
Most of the words come from Latin descriptions of the geometry involved. Sine comes from the Latin word 'sinus', tangent from the Latin 'tangens', and secant from the Latin 'secans'. The origin of the co-functions actually makes quite a lot of sense; cosine was originally co-sine, referring to the sine of the complementary angle. Similarly, cotangent and cosecant are the tangent and secant of the complementary angle.
I recommend reading the webpage if you want to know more detail, it's quite interesting.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How to find the GCD of two polynomials How do we find the GCD $G$ of two polynomials, $P_1$ and $P_2$ in a given ring (for example $\mathbf{F}_5[x]$)? Then how do we find polynomials $a,b\in \mathbf{F}_5[x]$ so that $P_1a+ P_2b=G$? An example would be great.
| The (extended) Euclidean algorithm works over any Euclidean domain, roughly, any domain enjoying a division algorithm producing "smaller" remainders, e.g. polynomial rings over fields, where the division algorithm yields smaller degree remainders (vs. smaller absolute value in $\mathbb Z$).
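To make this concrete in $\mathbf{F}_5[x]$, here is a minimal sketch (the coefficient-list representation and helper names are mine). It computes the monic $\gcd$; the Bezout polynomials $a, b$ with $P_1a+P_2b=G$ come from the same coefficient bookkeeping as in the integer case, using this polynomial division in place of integer division. Polynomials are lists of coefficients, lowest degree first.

```python
P = 5  # we work in F_5[x]

def pdivmod(f, g):
    # long division of f by g (g nonzero, no trailing zeros); returns (q, r)
    f = f[:]
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], P - 2, P)          # inverse of the leading coefficient
    while len(f) >= len(g):
        c = (f[-1] * inv) % P
        s = len(f) - len(g)
        q[s] = c
        for i, gi in enumerate(g):      # subtract c * x^s * g
            f[i + s] = (f[i + s] - c * gi) % P
        while f and f[-1] == 0:
            f.pop()
    return q, f

def pgcd(f, g):
    # Euclidean algorithm; the result is normalized to be monic
    while any(g):
        _, r = pdivmod(f, g)
        f, g = g, r
    inv = pow(f[-1], P - 2, P)
    return [(c * inv) % P for c in f]

# gcd((x+1)(x+2), (x+1)(x+3)) = x + 1 in F_5[x]
print(pgcd([2, 3, 1], [3, 4, 1]))   # [1, 1], i.e. 1 + x
```

Here $[2,3,1]$ encodes $2+3x+x^2=(x+1)(x+2)$ and $[3,4,1]$ encodes $3+4x+x^2=(x+1)(x+3)$.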
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Example of a non-algebraic $\ell^2$-function in two variables Let's call an $\ell^2$-function $\mathbb{N} \times \mathbb{N} \to \mathbb{C}$ algebraic if it is in the image of the natural algebra homomorphism $\ell^2(\mathbb{N}) \otimes \ell^2(\mathbb{N}) \to \ell^2(\mathbb{N} \times \mathbb{N})$, where on the left hand side we consider the usual, non-completed tensor product. In other words, $f(m,n)$ is algebraic iff it may be written as $\sum_{i=1}^{k} g_i(m) h_i(n)$ for some $k \in \mathbb{N}$ and $\ell^2$-functions $g_i,h_i$. Probably there are abstract reasons for the existence of non-algebraic functions. But I would like to know an explicit example of an $\ell^2$-function together with a concise and complete proof that it is not algebraic. For example:
Question. Can you give a proof that the $\ell^2$-function $(n,m) \mapsto \dfrac{1}{2^{n \cdot m}}$ is not algebraic?
| An algebraic function has bounded rank (either as a tensor, or in this case just as a matrix). So let $f(n,m)$ be $1/2^n$ if $n=m$ and zero otherwise.
For all $p$ the composition $\{1,\cdots,p\}\times\{1,\cdots,p\}\to \mathbb{N}\times \mathbb{N}\to \mathbb{C}$ is a $p\times p$ matrix of full rank. If $f$ is algebraic then this matrix is just $\sum_{i=1}^k g_i(m)h_i(n)$ which has rank at most $k$.
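The same criterion also settles the question's specific function $f(n,m)=2^{-nm}$: its $p\times p$ upper-left blocks have entries $x_n^m$ with distinct $x_n=2^{-n}$, so each block is a generalized Vandermonde matrix and is nonsingular for every $p$; the rank is unbounded and $f$ is not algebraic. This can be checked in exact arithmetic (a sketch; the helper name is mine):

```python
from fractions import Fraction

def det(M):
    # exact Gaussian elimination; no pivoting is needed for these matrices,
    # whose leading principal minors are all nonzero
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        d *= M[i][i]
        for j in range(i + 1, n):
            fac = M[j][i] / M[i][i]
            for k in range(i, n):
                M[j][k] -= fac * M[i][k]
    return d

for p in range(1, 7):
    M = [[Fraction(1, 2 ** (n * m)) for m in range(1, p + 1)]
         for n in range(1, p + 1)]
    print(p, det(M) != 0)   # True for every p: the blocks have full rank
```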
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Infinite product of connected spaces may not be connected? Let $X$ be a connected topologoical space. Is it true that the countable product $X^\omega$ of $X$ with itself (under the product topology) need not be connected? I have heard that setting $X = \mathbb R$ gives an example of this phenomenon. If so, how can I prove that $\mathbb R^\omega$ is not connected? Do we get different results if $X^\omega$ instead has the box topology?
The first part of your question - about connectedness of $\mathbb R^\omega$ with the usual product topology - has already been answered.
To show that the box product $\mathbb R^\omega$ is not connected we only need to find a clopen subset $U$ of this topological space (different from $\emptyset$ and the whole space).
Here are two examples of such sets:
*
*$U=$ set of all sequences that converge to $0$, as suggested by Henning's comment; see also here.
Indeed, if $x_n\to 0$ then $V=\prod(x_n-1/2^n,x_n+1/2^n)$ is a neighborhood of $x$ such that $V\subseteq U$, therefore $U$ is open. A similar argument shows that the complement of $U$ is open in the box topology.
*$U=$ set of all sequences that are bounded; see e.g. Example 10.16 here.
The argument is similar; here we can even use open intervals of the same length on each coordinate.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Form of rational solutions to $a^2+b^2=1$? Is there a way to determine the form of all rational solutions to the equation $a^2+b^2=1$?
| If you know some field theory, it's possible to find the form without much messy algebra. The condition that $a^2+b^2=1$ for rational $a$ and $b$ is equivalent to the fact that $N_{\mathbb{Q}(i)/\mathbb{Q}}(a+bi)=1$.
Since $\text{Gal}(\mathbb{Q}(i)/\mathbb{Q})\cong\mathbb{Z}/2\mathbb{Z}$, the classical statement of Hilbert's Theorem 90 implies that $a+bi=y/\tau(y)$ for some $y\in\mathbb{Q}(i)$, where $\tau$ is just the complex conjugation map in this case. So for some $m+ni\in\mathbb{Q}(i)$,
$$
a+bi=\frac{m+ni}{\tau(m+ni)}=\frac{m+ni}{m-ni}=\frac{(m^2-n^2)+(2mn)i}{m^2+n^2},
$$
which implies $a=\dfrac{m^2-n^2}{m^2+n^2}$ and $b=\dfrac{2mn}{m^2+n^2}$.
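A quick exact check of the resulting parametrization (the helper name is mine):

```python
from fractions import Fraction

def circle_point(m, n):
    d = m * m + n * n
    return Fraction(m * m - n * n, d), Fraction(2 * m * n, d)

for m, n in [(2, 1), (3, 2), (5, 2), (7, 4)]:
    a, b = circle_point(m, n)
    print(a, b)   # each pair satisfies a^2 + b^2 = 1, e.g. 3/5 and 4/5
```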
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Reference for "It is enough to specify a sheaf on a basis"? The wikipedia article on sheaves says:
It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. Thus a sheaf can often be defined by giving its values on the open sets of a basis, and verifying the sheaf axioms relative to the basis.
However, it does not cite a specific reference for this statement. Does there exist a rigorous proof for this statement in the literature?
| This is an excellent question and to tell the truth it is often handled in a cavalier fashion in the literature. This is a pity because it is a fundamental concept in algebraic geometry.
For example the structural sheaf $\mathcal O_X$ of an affine scheme $X=Spec(A)$ is defined by saying that over a basic open set $D(f)\subset X \;(f\in A)$ its value is $\Gamma(D(f),\mathcal O_X)=A_f$ and then relying on the mechanism of sheaves on a basis to extend this to a sheaf on $X$.
The same procedure is also followed in defining the quasi-coherent sheaf of modules $\tilde M$ on $X$ associated to the $A$-module $M$.
However there are happy exceptions on the net, like Lucien Szpiro's notes, where sheaves on a basis of open sets are discussed in detail on pages 14-16.
You can also find a careful treatment in de Jong and collaborators' Stacks Project, Chapter 6 "Sheaves on Spaces", Section 30, "Bases and sheaves".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 2
} |
Proving an exponential bound for a recursively defined function I am working on a function that is defined by
$$a_1=1, a_2=2, a_3=3, a_n=a_{n-2}+a_{n-3}$$
Here are the first few values:
$$\{1,1,2,3,3,5,6,8,11,\ldots\}$$
I am trying to find a good approximation for $a_n$. Therefore I tried to let Mathematica diagonalize the problem; it seems to have a closed form but Mathematica doesn't like it, and every time I simplify it gives:
a_n = Root[-1 - #1 + #1^3 &, 1]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 1] +
Root[-1 - #1 + #1^3 &, 3]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 2] +
Root[-1 - #1 + #1^3 &, 2]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 3]
I used this to get a numerical approximation of the biggest root: $$\text{Root}\left[x^3-x-1,1\right]=\frac{1}{3} \sqrt[3]{\frac{27}{2}-\frac{3 \sqrt{69}}{2}}+\frac{\sqrt[3]{\frac{1}{2}
\left(9+\sqrt{69}\right)}}{3^{2/3}}\approx1.325$$
Looking at the function I set
$$g(n)=1.325^n$$
and plotted the first 100 values of $\ln(g),\ln(a)$ ($a_n=:a(n)$) in a graph (blue = $a$, red = $g$).
It seems to fit quite nicely, but now my question:
How can I show that $a \in \mathcal{O}(g)$, if possible without using the closed form but just the recursion? If there were some bound for $a$ that's slightly worse than my $g$ but easier to prove correct, I would be fine with that too.
| It might not be exactly what you are asking for but you can dominate the series by a rough exponential estimate without finding any closed forms like so:
Given $$a_n = a_{n-2} + a_{n-3} = a_{n-3} + a_{n-4} + a_{n-5}$$ it can be seen the sequence is increasing (i.e. $a_{n-1} < a_n$) when $0 < a_{n-5}$, which holds when $n > 5$ and it can be checked directly that we can strengthen this to $n \ge 5$.
This implies that $$a_n < 3 a_{n-3}$$ for $n \ge 6$, and on this basis we split the sequence into three parts:
*
*$b_n = a_{3n}$
*$b'_n = a_{3n+1}$
*$b''_n = a_{3n+2}$
Induction using $a_m < 3 a_{m-3}$ gives $b_n < 3^n$ and $b'_n < 3^n$ for $n \ge 2$ (base cases $a_6 = 6 < 9$ and $a_7 = 8 < 9$), and $b''_n < 2\cdot 3^n$ for $n \ge 1$ (base case $a_5 = 5 < 6$; note $b''_2 = a_8 = 11 > 9$, so the extra constant $2$ is needed here). Since $2 < 3^{2/3}$, dividing $n$ by $3$ (and checking $n = 4$ directly: $a_4 = 3 < 3^{4/3}$) we find that for $n \ge 4$:
$$a_n < \sqrt[3]{3}^n.$$
which is roughly $1.442\ldots$
P.S. Thanks to Srivatsan for the idea behind this estimate!
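Both the rough bound $\sqrt[3]{3}\approx 1.442$ and the sharper growth rate $\approx 1.3247$ (the real root of $x^3-x-1$ appearing in the question's closed form) can be confirmed numerically:

```python
a = {1: 1, 2: 2, 3: 3}
for n in range(4, 201):
    a[n] = a[n - 2] + a[n - 3]

c = 3 ** (1 / 3)                 # the rough bound from the argument above
rho = 1.324717957244746          # real root of x^3 - x - 1
print(all(a[n] < c ** n for n in range(4, 201)))    # True
ratios = [a[n] / rho ** n for n in range(4, 201)]
print(min(ratios), max(ratios))  # stays in a bounded band: a_n grows like rho^n
```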
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Calculation of prime numbers - why so difficult? As I read more and more about advanced mathematics, the more complex and obscure topics seem to be tougher to bend the rules of math to describe. However, the simple (and undoubtedly very useful) subject of prime number identification remains an enigma (to me, at least). For a system with rules so simple, calculation isn't. Even though the rules of the calculation change as larger and larger values are tested, because of additional primes to take into consideration when testing a given value, how is there not a computationally cheap and easy way of identifying primes? I've got pages and pages full of attempts to find patterns (with a knowledge base not too well suited for this sort of research), and as soon as I have a large enough value that I'm testing, the pattern falls apart in quite an ugly manner.
| Depends on what you call "cheap and easy". Adleman-Pomerance-Rumely-Cohen-Lenstra and Elliptic Curve Primality testing have been used to prove primality of numbers (not of special form) with thousands of digits.
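APR-CL and ECPP are involved algorithms, but even a much simpler test illustrates how cheap primality checking is in practice. The sketch below is Miller-Rabin (not one of the algorithms named above); with this fixed set of 12 prime bases it is known to be correct for all $n$ below roughly $3\times 10^{23}$, and it is a fast probabilistic test beyond that:

```python
def is_probable_prime(n):
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2^s with d odd
        d, s = d // 2, s + 1
    for a in small:                  # Miller-Rabin witnesses
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False             # a proves n composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```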
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Relation between bounded quadratic and linear complementary problems In a paper, it says that
Given a bounded quadratic problem (BQP) $$ \min_{x \in \mathbb{R}^n}
\frac{1}{2} x^T A x + b^T x $$ subject to $$ x \geq 0, \quad i.e. \quad x_i \geq 0, i=1,...,n$$
and a linear complementary problem (LCP) $$ x .* (Ax+b) = 0, \quad i.e. \quad x_i \times (A(i,:)x+b_i) =
0, i=1,...,n $$ $$ Ax+b \geq 0, \quad i.e. \quad A(i,:)x+b_i \geq 0, i=1,...,n $$ $$ x \geq 0, \quad i.e. \quad x_i \geq 0,
i=1,...,n$$
*
*if $A$ is symmetric positive definite, then $x$ is minimizer of BQP
iff x is a solution to LCP;
*if $A$ is symmetric, then $x$ is a first order solution to BQP iff
$x$ is solution to LCP;
*if $A$ is general, then there is no convenient relationship between
solutions of BQP and of LCP.
*
*I am trying to understand the above statements based on the KKT
conditions for the BQP, which are $$ \frac{1}{2} (A^T+A) x + b - \mu=0 $$ $$ \mu .* x =0, \quad i.e. \quad \mu_i \times
x_i=0, i=1,...,n $$ $$ \mu \geq 0 , \quad i.e. \quad \mu_i \geq 0, i=1,...,n $$ $$ x \geq 0, \quad i.e. \quad x_i \geq 0,
i=1,...,n, $$ but not sure how to go from here, or I just think in the wrong direction?
*What does "first order solution to BQP" in statement 2 mean? Is it defined as a
solution to the BQP that also satisfies KKT conditions?
Thanks and regards!
| You are on the right track I think. You can re-write your first condition as:
$$\mu = Ax +b$$
Substituting in the second condition it follows that:
$$x.* (Ax+b) = 0$$
Using the fact that $\mu \ge 0$, we have:
$$Ax+b \ge 0$$
Thus, in essence, you need to eliminate $\mu$ from the KKT conditions to get to the LCP.
In response to your edit, note the following:
(A). If $A$ is symmetric then $A^T = A$ and hence KKT reduces to the LCP as far as 1 and 2 are concerned.
(B). If $A$ is positive definite in addition to being symmetric then the objective function is convex and hence KKT is both necessary and sufficient for optimality. Hence, 1 follows from above.
(C) If $A$ is only symmetric, then KKT is necessary but not sufficient, which is what may be meant by 'first order solution'.
(D) If $A$ is general matrix not necessarily symmetric, positive definite then the KKT conditions do not map one-to-one with the LCP and hence there may not be any relationship between your original program and LCP.
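A tiny numerical illustration of case 1 (my own toy example, not from the paper): for a symmetric positive definite $A$, the BQP minimizer satisfies all three LCP conditions, and no feasible point does better.

```python
A = [[2.0, 0.0],
     [0.0, 2.0]]              # symmetric positive definite
b = [-2.0, 1.0]

def q(x1, x2):                 # objective 0.5*x^T A x + b^T x
    return 0.5*(A[0][0]*x1*x1 + 2*A[0][1]*x1*x2 + A[1][1]*x2*x2) + b[0]*x1 + b[1]*x2

# Unconstrained minimizer is (1, -1/2); since A is diagonal, projecting the
# negative coordinate onto x >= 0 gives the BQP minimizer (1, 0).
x_star = (1.0, 0.0)
w = (A[0][0]*x_star[0] + A[0][1]*x_star[1] + b[0],   # w = A x* + b = (0, 1)
     A[1][0]*x_star[0] + A[1][1]*x_star[1] + b[1])

lcp_ok = (all(wi >= 0 for wi in w)
          and all(xi >= 0 for xi in x_star)
          and all(abs(xi*wi) < 1e-12 for xi, wi in zip(x_star, w)))

# Sanity check optimality on a feasible grid over [0, 3]^2.
grid = [0.05*i for i in range(61)]
best_grid = min(q(u, v) for u in grid for v in grid)
```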
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Second derivative using implicit differentiation with respect to $x$ of $x = \sin y + \cos y$ I am running into trouble with this question:
I get as far as
$$1 = \cos y\frac{dy}{dx} - \sin y\frac{dy}{dx}$$
$$1 = \frac{dy}{dx} (\cos y - \sin y)$$
$$\frac{dy}{dx} = \frac{1}{\cos y-\sin y}$$
Second derivative:
Unsure how to continue here
| You have
$$\frac{dy}{dx}=\frac{1}{\cos y-\sin y}.$$
Take the derivative of both sides using one of the derivative rules:
$$\frac{d^2y}{dx^2}=\frac{(1)'(\cos y - \sin y)-(1)(\cos y-\sin y)'}{(\cos y-\sin y)^2}=?,$$
$$\mathrm{or},\quad (x^{-1})'=-x^{-2}\implies\frac{d^2y}{dx^2}=\frac{-1}{(\cos y-\sin y)^2}\cdot(\cos y-\sin y)'=?$$
Above are the beginnings to (i) the quotient rule and (ii) the power rule and chain rules.
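Carrying either route to completion gives $\frac{d^2y}{dx^2}=\frac{\sin y+\cos y}{(\cos y-\sin y)^3}$ (my completion of the "?" above, so check it yourself too). It can be verified numerically by inverting $x=\sin y+\cos y$ with Newton's method, which converges locally away from $\cos y=\sin y$:

```python
from math import sin, cos

def y_of(x, y_guess):
    # Solve sin(y) + cos(y) = x for y by Newton's method.
    y = y_guess
    for _ in range(50):
        y -= (sin(y) + cos(y) - x) / (cos(y) - sin(y))
    return y

y0 = 0.3
x0 = sin(y0) + cos(y0)
h = 1e-4
yp = y_of(x0 + h, y0)
ym = y_of(x0 - h, y0)
d2_numeric = (yp - 2*y0 + ym) / h**2      # central second difference of y(x)
d2_formula = (sin(y0) + cos(y0)) / (cos(y0) - sin(y0))**3
```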
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
One-to-one mapping from $\mathbb{R}^4$ to $\mathbb{R}^3$ I'm trying to define a mapping from $\mathbb{R}^4$ into $\mathbb{R}^3$ that takes the flat torus to a torus of revolution.
Where the flat torus is defined by $x(u,v) = (\cos u, \sin u, \cos v, \sin v)$.
And the torus of revolution by $x(u,v) = ( (R + r \cos u)\cos v, (R + r \cos u)\sin v, r \sin u)$.
I think an appropriate map would be:
$f(x,y,z,w) = ((R + r x)z, (R + r x)w, r y)$ where $R$, $r$ are constants greater than $0$.
But now I'm having trouble showing this is one-to-one.
| Do you need to explicitly give the map, or are you asked to prove both sets have the same cardinality?
If it is the latter, it is a good exercise to prove $\mathbb{R}$ and $\mathbb{R}^2$ have the same cardinality (Schroeder-Bernstein theorem - consider decimal expansions to exhibit a surjection); then via induction, and the fact that both sets are subsets of $\mathbb{R}^n$, we see there is a bijection between them. Explicitly finding a bijection is not clear to me; perhaps it is to someone else.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Calculating a Taylor Polynomial of a mystery function I need to calculate a taylor polynomial for a function $f:\mathbb{R} \to \mathbb{R}$ where we know the following
$$f''(x)+f(x)=e^{-x} \quad \forall x$$ $$f(0)=0$$ $$f'(0)=2$$
How would I even start?
| Hint: Assume that
$$
f(x)=\sum\limits_{n\geqslant0}a_n\frac{x^n}{n!},\qquad g(x)=\sum\limits_{n\geqslant0}b_n\frac{x^n}{n!},
$$
and that $g=f''+f$.
Then
$$
f''(x)=\sum\limits_{n\geqslant0}a_{n+2}\frac{x^n}{n!},
$$
hence, for every $n\geqslant0$, $b_n$ is...
To do: Identify $b_n$ for every $n\geqslant0$ and translate the initial conditions on $f$ in terms of $a_0$ and $a_1$.
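Following the hint through numerically (my addition; the closed form $f(x)=\tfrac12 e^{-x}-\tfrac12\cos x+\tfrac52\sin x$ comes from solving the ODE separately and is used here only as a cross-check): since $g=e^{-x}$ gives $b_n=(-1)^n$, the recurrence is $a_{n+2}=(-1)^n-a_n$ with $a_0=0$, $a_1=2$.

```python
from math import cos, exp, factorial, sin

# Recurrence from matching coefficients of f'' + f = e^{-x}:
# a_{n+2} + a_n = (-1)^n, with a_0 = f(0) = 0 and a_1 = f'(0) = 2.
a = [0.0, 2.0]
for n in range(30):
    a.append((-1)**n - a[n])

def taylor(x):
    return sum(a[n] * x**n / factorial(n) for n in range(len(a)))

def exact(x):
    # Closed-form solution of f'' + f = e^{-x}, f(0) = 0, f'(0) = 2.
    return 0.5*exp(-x) - 0.5*cos(x) + 2.5*sin(x)

vals = [(taylor(x), exact(x)) for x in (-1.0, 0.5, 1.0, 2.0)]
```

The first few coefficients are $a_2=1$, $a_3=-3$, matching $f''(0)=e^0-f(0)=1$ and $f'''(0)=-e^0-f'(0)=-3$.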
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/86981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Example of a real-life graph with a "hole"? Anyone ever come across a real non-textbook example of a graph with a hole in it?
In Precalc, you get into graphing rational expressions, some of which reduce to a non-rational. The cancelled factors in the denominator still identify discontinuity, yet can't result in vertical asymptotes, but holes.
Thanks!
| I guess the derivative of the absolute value (on the reals) comes up in certain "actual" applications. It is undefined at $0$, and no way of plugging the hole makes it continuous. This doesn't prevent one from arbitrarily assigning it a value at $0$, for instance $0$, but it seems better to just leave the hole.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Maximizing symmetric matrices v.s. non-symmetric matrices Quick clarification on the following will be appreciated.
I know that for a real symmetric matrix $M$, the maximum of $x^TMx$ over all unit vectors $x$ gives the largest eigenvalue of $M$. Why is the "symmetry" condition necessary? What if my matrix is not symmetric? Isn't the maximum of $x^TMx$ still the largest eigenvalue of $M$?
Thanks.
| The reason why the maximum of $x^T M x$ (I think you meant "over the set of $x$'s such that $\|x\|_2 = 1$", because if you don't add that condition your maximum is not defined) is the largest eigenvalue of $M$ is because any real square matrix admits a singular value decomposition
$$
M = Q_1^T D Q_2
$$
where $Q_1$ and $Q_2$ are orthogonal and $D$ is a diagonal matrix containing the singular values of $M$ on its diagonal. When $M$ is symmetric, the spectral theorem lets us take $Q_1 = Q_2 = Q$ with the eigenvalues of $M$ on the diagonal of $D$, thus
$$
x^T M x = x^T Q^T D Q x = (Qx)^T D (Qx)
$$
and since $Q$ is orthogonal, not only is it invertible but it also preserves distances, so that
$$
\max_{\|x\| = 1} x^T M x = \max_{\| x \| = 1} (Qx)^T D (Qx) = \max_{\|x\| = 1} x^T D x.
$$
Now I believe you said that you knew the largest eigenvalue of $M$ was this maximum, so I assume you understand how to compute this last maximum to be the biggest eigenvalue of $M$ (a simple Lagrange multiplier argument shows that extrema are attained where $x$ is an eigenvector of $D$).
Now if $M$ is not symmetric, none of these things apply because things started to look better when I assumed $Q_1 = Q_2 = Q$. But you can show that
$$
\|M\|_2 = \sqrt{ \rho(M^T M) }
$$
where $\rho(M^T M)$ is the spectral radius of $M^TM$, i.e. $\rho(M^TM)$ is the largest eigenvalue (in absolute value) of $M^TM$. When $M$ is symmetric, $M^T = M$ and this spectral radius is just the square of the largest eigenvalue (in absolute value) of $M$, so we know it still works. But in general the eigenvalues of $M^T M$ are not the squares of the eigenvalues of $M$, because $M^T$ and $M$ do not have the same eigenvectors if $M$ is not symmetric, so the same arguments don't apply in general.
As an example, take
$$
M =
\begin{bmatrix}
0 & 2 \\
1 & 0 \\
\end{bmatrix}.
$$
You can readily see that its characteristic polynomial is just $\lambda^2 - 2$, thus that its eigenvalues are $\pm \sqrt 2$. But if you use the above formula,
$$
M^T M =
\begin{bmatrix}
1 & 0 \\
0 & 4 \\
\end{bmatrix}.
$$
Therefore the largest eigenvalue of $M^T M$ is $4$, hence $\|M\|_2 = 2$, which is strictly larger than the absolute value $\sqrt 2$ of the eigenvalues of $M$. Perhaps that was more convincing.
Hope that helps,
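To complement the example numerically (my addition): for non-symmetric $M$, the maximum of $x^TMx$ over unit vectors is a third quantity again, namely the top eigenvalue of the symmetric part $(M+M^T)/2$, which for this $M$ is $3/2$, different from both $M$'s eigenvalues $\pm\sqrt2$ and the operator norm $2$.

```python
from math import cos, pi, sin, sqrt

# M = [[0, 2], [1, 0]] from the answer; on the unit circle x = (cos t, sin t),
# the quadratic form is x^T M x = 2*x1*x2 + x2*x1 = 3*x1*x2.
def quad_form(theta):
    x1, x2 = cos(theta), sin(theta)
    return x1 * 2*x2 + x2 * x1

qmax = max(quad_form(2*pi*k/200000) for k in range(200000))

# Top eigenvalue of the symmetric part (M + M^T)/2 = [[0, 3/2], [3/2, 0]].
sym_top = 1.5
```

So the three natural "sizes" of $M$ (largest $|$eigenvalue$|$, operator norm, max of the quadratic form) all disagree once symmetry is dropped.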
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
What are the Properties of eigenvalues and eigenfunctions of periodic Sturm Liouville problem? I've read that for a regular Sturm-Liouville problem, to each eigenvalue there corresponds a unique eigenfunction (up to constant multiples).
For periodic Sturm Liouville problem Which of the following are true?
Each eigenvalue of (periodic Sturm Liouville problem) corresponds to
1. one eigenfunction
2. two eigenfunctions
3. two linearly independent eigenfunctions
4. two orthogonal eigenfunctions
What are the Properties of eigenvalues and eigenfunctions of periodic Sturm Liouville problem?
Do these depend on the boundary conditions, or are they the same for all periodic Sturm-Liouville problems?
| This isn't an answer, but evidently I don't have enough points to leave a "comment." I don't have a reference on Sturm-Liouville problems handy. However option (2) doesn't make sense. If there is one eigenfunction, there are infinitely many for the same eigenvalue: just take your eigenfunction $u$ and multiply by any non-zero real $\alpha$. Also, (3) implies (4): if there are two linearly independent eigenfunctions, you can produce two orthogonal functions (spanning the same subspace) using a Gram-Schmidt type argument.
Hope this clarifies a couple things.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proving Integral Inequality I am working on proving the below inequality, but I am stuck.
Let $g$ be a differentiable function such that $g(0)=0$ and $0<g'(x)\leq 1$ for all $x$. For all $x\geq 0$, prove that
$$\int_{0}^{x}(g(t))^{3}dt\leq \left (\int_{0}^{x}g(t)dt \right )^{2}$$
| It's straightforward: since $g(0)=0$ and $g'(x)>0$, the function $g$ is strictly increasing, hence positive for all $x>0$. Therefore $g'(t)\leq 1$ implies
$$2 g(t)g'(t)\leq 2 g(t)\qquad(t>0)\ ,$$
and integrating this with respect to $t$ from $0$ to $y>0$ we get
$$g^2(y)\leq 2\int_0^y g(t)\ dt\qquad(y>0)\ .$$
Multiplying with $g(y)$ again we have
$$g^3(y)\leq 2 g(y)\ \int_0^y g(t)\ dt ={d\over dy}\left(\Bigl(\int_0^y g(t)\ dt\Bigr)^2\right) \qquad(y>0)\ ,$$
and the statement follows by integrating the last inequality with respect to $y$ from $0$ to $x>0$.
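A numerical illustration (my addition) with the concrete choice $g(t)=1-e^{-t}$, which satisfies $g(0)=0$ and $0<g'(t)=e^{-t}\le 1$:

```python
from math import exp

def g(t):
    return 1.0 - exp(-t)          # g(0) = 0, 0 < g'(t) = e^{-t} <= 1

def simpson(f, a, b, n=1000):     # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

checks = []
for x in (0.5, 1.0, 3.0, 10.0):
    lhs = simpson(lambda t: g(t)**3, 0.0, x)   # integral of g^3
    rhs = simpson(g, 0.0, x)**2                # (integral of g)^2
    checks.append(lhs <= rhs + 1e-9)
```

At $x=10$, for instance, the left side is about $8.2$ while the right side is about $81$, so the inequality holds with plenty of room.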
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
More Theoretical and Less Computational Linear Algebra Textbook I found what seems to be a good linear algebra book. However, I want a more theoretical as opposed to computational linear algebra book. The book is Linear Algebra with Applications 7th edition by Gareth Williams. How high quality is this? Will it provide me with a good background in linear algebra?
| 1) Linear Algebra by Hoffman and Kunze,
2) Linear Algebra by G. Strang,
3) Linear Algebra by Helson,
4) Introduction to linear algebra by V. Krishnamurthy
5) University Algebra by N. S. Gopalakrishnan
6) A First Course in Abstract Algebra by Fraleigh, J. B.
If you explain what exactly you want, then I can suggest something better.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 9,
"answer_id": 6
} |
Solving the differential equation $y' - \frac{1}{x} y = x^2\sqrt{y} $ Which technique should I use for solving the follwoing DE?
$$
y' - \frac{1}{x} y = x^2\sqrt{y}
$$
I have tried some algebraic manipulations but I could not recognize any pattern.
| First, $z=y/x$ yields $z'=x\sqrt{y}=x^{3/2}\sqrt{z}$.
Then $u=\sqrt{z}$ yields $u'=\frac12x^{3/2}$ hence $u=\frac15x^{5/2}+c$. Finally, $y=xz=xu^2$ hence
$$
y=x\left(\frac15x^{5/2}+c\right)^2.
$$
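A quick finite-difference check of this solution (my addition; the constant $c=0.3$ is arbitrary): the claimed $y$ should satisfy $y'-\frac1x y = x^2\sqrt y$ at every $x>0$.

```python
from math import sqrt

C = 0.3                                   # arbitrary integration constant

def y(x):
    return x * (x**2.5 / 5 + C)**2        # claimed solution y = x*(x^{5/2}/5 + c)^2

h = 1e-6
errs = []
for xv in (0.5, 1.0, 2.0, 5.0):
    dydx = (y(xv + h) - y(xv - h)) / (2*h)        # central difference
    errs.append(abs(dydx - y(xv)/xv - xv**2 * sqrt(y(xv))))
```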
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Continued fraction: Show $\sqrt{n^2+2n}=[n; \overline{1,2n}]$ I have to show the following identity ($n \in \mathbb{N}$):
$$\sqrt{n^2+2n}=[n; \overline{1,2n}]$$
I had a look about the procedure for $\sqrt{n}$ on Wiki, but I don't know how to transform it to $\sqrt{n^2-2n}$.
Any help is appreciated.
EDIT:
I tried the following:
$\sqrt{n^2+2n}>n$, so we get $\sqrt{n^2+2n}=n+\frac{n}{x}$, and $\sqrt{n^2+2n}-n=\frac{n}{x}$ and further $x=\frac{n}{\sqrt{n^2+2n}-n}$.
So we get $x=\frac{n}{\sqrt{n^2+2n}-n}=\frac{n(\sqrt{n^2+2n}+n)}{(\sqrt{n^2+2n}-n)(\sqrt{n^2+2n}+n)}$. I don't know if it's right and how to go on.
| The first few convergents of $[3;\overline{1,6}]$ are $3,4,\frac{27}7,\frac{31}8,\frac{213}{55}$, and $\frac{244}{63}$; $\frac{213}{55}=3.8\overline{72}$, and $\frac{244}{63}$ is a little over $3.873$, so $[3;\overline{1,6}]$ is clearly not $\sqrt{3^2-2\cdot3}=\sqrt3$.
In fact, $n^2-2n=(n-1)^2-1$, so the integer part and first convergent of $\sqrt{n^2-2n}$ will be $n-2$. But $\sqrt{n^2-2n}$ isn’t $[n-2;\overline{1,2n}]$, either, since, as you can see here, $\sqrt3=[1;\overline{1,2}]$.
$\sqrt8=[2;\overline{1,4}]$, so your identity should probably be $\sqrt{n^2-2n}=[n-2;\overline{1,2(n-2)}]$ for $n\ge 3$.
Now let $m=n-2$ and consider the continued fraction $x=[m;\overline{1,2m}]$.
$$\begin{align*}
x+m&=[2m;\overline{1,2m}]\\
&=2m+\frac1{1+\frac1{[2m;\overline{1,2m}]}}\\
&=2m+\frac1{1+\frac1{x+m}}\\
&=2m+\frac{x+m}{x+m+1}\\
&=\frac{(2m+1)x+2m^2+3m}{x+m+1}\;,
\end{align*}$$
and you can finish it off by solving for $x$ in terms of $m$ (and then in terms of $n$).
If the identity was supposed to have $\sqrt{n^2+2n}$ on the lefthand side, the same basic approach will work, though the details will obviously be different, and you won’t need $m$.
Added: I see that you have edited the problem statement to make the lefthand side $\sqrt{n^2+2n}$; I’ll leave my solution to the original version as an extended hint for the corrected version.
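A numerical check of the corrected identity $\sqrt{n^2+2n}=[n;\overline{1,2n}]$ (my addition, truncating the periodic expansion after many periods):

```python
from math import sqrt

def periodic_cf(n, periods=25):
    # Evaluate [n; 1, 2n, 1, 2n, ...] from the bottom up.
    x = 0.0
    for a in reversed([1, 2*n] * periods):
        x = 1.0 / (a + x)
    return n + x

errs = [abs(periodic_cf(n) - sqrt(n*n + 2*n)) for n in range(1, 20)]
```

For $n=1$ this is the familiar $\sqrt3=[1;\overline{1,2}]$, and for $n=2$ it is $\sqrt8=[2;\overline{1,4}]$ quoted above.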
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Convergence of a characteristic function This is the last part of a three part problem on characteristic functions, and it's been driving me crazy over the last few days. Any help would be most appreciated.
$X_1,X_2, \ldots, X_n$ are independent, with $P(X_j=j)=P(X_j=-j)=1/2j, P(X_j=0)=1-1/j$ . Show that $S_n/n$ converges to a distribution with characteristic function $ \exp \left( -\int_0^1 \frac{1-\cos(xt)}{x} dx \right) $.
I have made some progress, but I end up with $$\log \psi(t)=\sum_{j=1}^n \frac{\cos(tj/n)-1}{j}.$$ This is close, but still wrong. Any thoughts?
| You are almost done.
$$\log (\psi_n(t)) = \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j} \right) = \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j/n} \right) \frac1n $$
This is nothing but a Riemann sum. In the limit as $n \rightarrow \infty$, we get
$$\lim_{n \rightarrow \infty} \log (\psi_n(t)) = \lim_{n \rightarrow \infty} \sum_{j=1}^n \left(\frac{\cos(tj/n)-1}{j/n} \right) \frac1n = \int_0^1 \frac{\cos(tx) - 1}{x} dx$$
Hence,
$$\log (\psi(t)) = \log \left( \lim_{n \rightarrow \infty} \psi_n(t) \right) = \lim_{n \rightarrow \infty} \log (\psi_n(t)) = \int_0^1 \frac{\cos(tx) - 1}{x} dx$$
For the sake of completeness, a sequence of random variables $X_n$, converges in distribution to a random variable $X$, iff the sequence of characteristic functions of $X_n$, converges to the characteristic function of the random variable $X$ point-wise.
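The Riemann-sum convergence can also be observed numerically (my addition; the integrand $(\cos(tx)-1)/x$ extends continuously by $0$ at $x=0$):

```python
from math import cos

def log_psi_n(t, n):
    # The finite-n expression: sum_{j=1}^n (cos(t j/n) - 1)/j.
    return sum((cos(t*j/n) - 1.0)/j for j in range(1, n + 1))

def limit_integral(t, steps=100000):
    # Midpoint rule for the limiting integral over (0, 1).
    h = 1.0 / steps
    return sum((cos(t*(i + 0.5)*h) - 1.0)/((i + 0.5)*h) for i in range(steps)) * h

gaps = [abs(log_psi_n(t, 100000) - limit_integral(t)) for t in (0.5, 1.0, 2.0)]
```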
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Must a weakly divergence-free vector function be continuous? Let $F:{\mathbb R}^n \to {\mathbb R}^n$ have coordinate functions in $L^2$. Suppose $F$ is weakly divergence-free, that is, $\int_{{\mathbb R}^n} F \cdot \nabla \varphi = 0$ for all $\varphi \in C_0^\infty({\mathbb R}^n)$. Must $F$ be continuous?
| Consider a vector field defined on a square in $\mathbb{R}^2$: on the upper half ($y>0$) define $F = \left(\begin{matrix} y-1\\x \end{matrix} \right)$, while on the lower half ($y<0$) define $F = \left(\begin{matrix} 1-y\\x \end{matrix} \right)$. You can check $F$ is weakly divergence free, and $F\cdot \vec{n}$ is continuous across the $x$-axis, where $\vec{n} = \left(\begin{matrix} 0\\1 \end{matrix} \right)$ is the normal vector to the $x$-axis.
However, $\displaystyle \lim_{y\to 0^+} F\times \vec{n} = -\lim_{y\to 0^-} F\times \vec{n}$, which basically says $F$ is not continuous along $x$-axis.
The idea behind this is: pick any domain $\Omega \subset \mathbb{R}^n$, on any $(n-1)$-hypersurface $S$ within the domain which cuts it into two parts: $\Omega_1$ and $\Omega_2$, integrate by parts separately we have
$$
0 = \int_{\Omega} F\cdot \nabla \phi = \int_{\Omega_1} \phi\,\mathrm{div} F + \int_{\Omega_2} \phi\,\mathrm{div} F + \int_{S} (F\cdot \vec{n}_1+F\cdot \vec{n}_2)\phi\,ds + \int_{\partial \Omega} F\cdot \vec{n}\,\phi\,ds
$$
For the integration by parts formula to hold, we must have $F\cdot \vec{n}_1+F\cdot \vec{n}_2=0$, i.e., continuity of the normal component, but for a divergence-integrable field you can't really tell how its tangential trace $F\times \vec{n}$ behaves on the hypersurfaces.
You could learn more of this by Googling the keywords: tangential trace for $H(\mathrm{div})$ and $H(\mathbf{curl})$, normally the analysis is done in 3D setting.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The letters ABCDEFGH are to be used to form strings of length four
How many strings contain the letter A if repetitions are not allowed?
The answer that I have is :
$$ \frac{n!}{(n-r)!} - \frac{(n-1)!}{(n-r)!} = \frac{8!}{4!} - \frac{7!}{4!} = 8 \times 7 \times 6 \times 5 - (7 \times 6 \times 5) = 1470 $$ strings.
If you could confirm this for me or kindly guide in me the right direction, please do let me know.
| There are ${7\choose 3}=35$ ways to choose 3 letters from BCDEFGH. Add an "A" to this set and arrange these four distinct letters in $4!$ ways. This gives $35\cdot 24=840$ strings in total.
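Both counts can be confirmed by brute force with the standard library (my addition); this also shows where the attempt in the question goes wrong: the complementary count of A-free strings is $7\cdot6\cdot5\cdot4=840$, not $7\cdot6\cdot5$.

```python
from itertools import permutations

letters = "ABCDEFGH"
# All length-4 strings without repetition that contain the letter A.
with_a = [p for p in permutations(letters, 4) if "A" in p]
total = sum(1 for _ in permutations(letters, 4))          # P(8, 4) = 1680
without_a = sum(1 for _ in permutations("BCDEFGH", 4))    # P(7, 4) = 840
```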
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
reference for "compactness" coming from topology of convergence in measure I have found this sentence in a paper of F. Delbaen and W. Schachermayer with the title: A compactness principle for bounded sequences of martingales with applications. (can be found here)
On page 2, I quote: "If one passes to the case of non-reflexive Banach spaces there is—in general—no
analogue to theorem 1.2 pertaining to any bounded sequence $(x_n )_{n\ge 1} $ , the main
obstacle being that the unit ball fails to be weakly compact. But sometimes there
are Hausdorff topologies on the unit ball of a (non-reflexive) Banach space which
have some kind of compactness properties. A noteworthy example is the Banach
space $ L^1 (Ω, F, P) $ and the topology of convergence in measure."
So I'm looking for a good reference for topology of convergence in measure and this property of "compactness" for $ L^1 $ in probability spaces.
Thx
math
| If it's any help, convergence in measure is a metrizable criterion. For the case of a finite measure space, see Exercise 2.32 in Folland, "Real Analysis", 2nd. ed., or for a general measure space see Exercise 3.22 in my lecture notes here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/87983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Does $\mathbb{R}^\mathbb{R}$ have a basis? I'm studying linear algebra on my own and saw basis for $\mathbb{R}, \mathbb{R}[X], ...$ but there is no example of $\mathbb{R}^\mathbb{R}$ (even though it is used for many examples). What is a basis for it? Thank you
| For this space (or simpler the vector space of infinite sequences of reals) one cannot actually write down a basis, even though one can prove using the axiom of choice that every vector space has a basis. Proofs of existence that essentially need the axiom of choice provide no means to actually construct the objects whose existence they establish.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 2
} |
Order of growth of $(s-1)\zeta(s)$ Again, order of growth problems.
Show that the function $(s-1)\zeta(s)$ is an entire function of growth order $1$; or equivalently,
$$|(s-1)\zeta(s)| \leq A_{\epsilon} \; \exp \left(a_{\epsilon}|s|^{1+\epsilon} \right).$$
Of course, $\zeta (\cdot)$ denotes the Riemann zeta function.
Many thanks in advance.
| Another approach is through the Laurent series of the Riemann zeta function at $s=1$,
$$\zeta(s)=\frac1{s-1}+\sum_{n=0}^\infty\frac{(-1)^n}{n!}\gamma_n(s-1)^n\;,$$
where the $\gamma_n$ are the Stieltjes constants. Multiplying by $s-1$ yields
$$(s-1)\zeta(s)=1+\sum_{n=0}^\infty\frac{(-1)^n}{n!}\gamma_n(s-1)^{n+1}\;.$$
Theorem 2.2.2 of Entire Functions by Ralph Philip Boas expresses the order $\mu$ of an entire function given by a power series
$$f(z)=\sum_{n=0}^\infty a_nz^n$$
in terms of the coefficients:
$$\mu=\limsup_{n\to\infty}\frac{n\log n}{\log (1/|a_n|)}\;.$$
To evaluate the limit superior, we can use bounds found by Matsuoka: For all $n\ge10$,
$$|\gamma_n|\le\frac{\exp(n\log\log n)}{10000}\;,$$
and for infinitely many n
$$|\gamma_n|\gt\exp(n\log\log n-n\epsilon)\;.$$
By substituting Stirling's approximation for the factorial,
$$\log n!\sim n\log n -n\;,$$
we can see that the limit superior is $1$: The upper bound on $\gamma_n$ ensures that the quotient is eventually below $1+\delta$, and the lower bound ensures that it is infinitely often above $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Counterexample to $f_ng_n \not\to fg$ in measure I'm looking for a pair of sequences $f_n \to f$ in measure, $g_n \to g$ in measure, where $f_ng_n \not\!\to fg$ in measure.
I've tried a number of things with characteristic functions that move around with $n$, and nothing seems to pan out. I've also tried looking at non-Lebesgue measures, like the counting measure. At least, I realize that whatever measure one uses must not be finite, or else $f_ng_n \to fg$ is always true.
Any hints?
| The classic counterexample I'm aware of is with the Lebesgue measure on $\mathbb{R}$, $f_n=g_n=x+\frac{1}{n}$, and $f=g=x$. Then obviously we have
$f_n\to f$ and $g_n\to g$ in measure, but
$$f_ng_n=x^2+\tfrac{2}{n}x+\tfrac{1}{n^2}\not\to x^2=fg$$
in measure, because for any $\epsilon>0$ and $n\in\mathbb{N}$,
$$\{x\in \mathbb{R}\mid \tfrac{2}{n}x+\tfrac{1}{n^2}\geq\epsilon\}=[\tfrac{n^2\epsilon-1}{2n},\infty)$$
has infinite measure, so for any $\epsilon>0$,
$$\lim_{n\to\infty}\mu(\{x\in\mathbb{R}\mid |f_ng_n(x)-fg(x)|\geq\epsilon\})=\lim_{n\to\infty}\infty=\infty$$
I'm sorry I don't know how to give a good hint for this one; I didn't see it on my own either, someone pointed it out to me. In retrospect the intuition is that we set things up so that the gap between the $f_n$'s and $f$, and $g_n$'s and $g$, when multiplied, becomes "amplified" (since $\frac{2}{n}x+\frac{1}{n^2}$ depends on $x$, whereas $\frac{1}{n}$ doesn't).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Cardinality of a set Consider four variables $x,y,z,w$.
Also consider the set $S=\{x^{i_1+j_1}y^{i_2+j_2}z^{j_1+j_3}w^{j_2+j_4}\}$ where
$i_1,i_2,j_1,j_2,j_3,j_4$ are nonnegative integers such that
$i_1+i_2+j_1+j_2 =m$ and $0\leq j_3,j_4 \leq m_1$, where $m,m_1$ are positive integers.
What is the cardinality of the set $S$ in terms of $m,m_1$?
| Since $i_1+i_2+j_1+j_2 =m$, $S=\{x^{i_1+j_1}y^{i_2+j_2}z^{j_1+j_3}w^{j_2+j_4}\}$ can be written as $S=\{x^{m-i_2-j_2}y^{i_2+j_2}z^{j_1+j_3}w^{j_2+j_4}\}$. Since $i_1,i_2,j_1,j_2,j_3,j_4$ are nonnegative integers, $m-i_2-j_2=i_1+j_1\geq0$, we have $m\geq i_2+j_2\geq0$. So there are $m+1$ distinct $x^{m-i_2-j_2}y^{i_2+j_2}$ when $i_2+j_2=0,1,..., m$.
Now fix $k=0,1,..., m$. If $i_2+j_2=k$, then the possible values for $j_2$ are $0,1,2,..., k$. On the other hand, since $i_1+j_1=m-i_2-j_2=m-k$, the possible values for $j_1$ are $0,1,2,..., m-k$. Therefore, if $i_2+j_2=k$, $0\leq j_1+j_3\leq m-k+m_1$ and $0\leq j_2+j_4\leq k+m_1$, and every pair in these ranges is attained. That is, if $x^{m-i_2-j_2}y^{i_2+j_2}=x^{m-k}y^k$, the number of possible $z^{j_1+j_3}w^{j_2+j_4}$ is $(m-k+m_1+1)(k+m_1+1)$.
Combining all these, the cardinality of the set $S$ is
$$|S|=\sum_{k=0}^m(k+m_1+1)(m-k+m_1+1).$$
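The count can be verified by brute-force enumeration for small parameters (my addition):

```python
from itertools import product

def brute(m, m1):
    # Enumerate all exponent tuples (i1+j1, i2+j2, j1+j3, j2+j4).
    mono = set()
    for i1, i2, j1, j2 in product(range(m + 1), repeat=4):
        if i1 + i2 + j1 + j2 != m:
            continue
        for j3, j4 in product(range(m1 + 1), repeat=2):
            mono.add((i1 + j1, i2 + j2, j1 + j3, j2 + j4))
    return len(mono)

def formula(m, m1):
    return sum((k + m1 + 1) * (m - k + m1 + 1) for k in range(m + 1))

pairs = [(brute(m, m1), formula(m, m1)) for m in range(1, 5) for m1 in range(1, 4)]
```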
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the quotient of a $K(\pi,1)$ by a discrete subset still an Eilenberg-MacLane space? Let $X$ be a discrete subset of a $K(\pi,1)$. Is the quotient space $K(\pi,1)/X$ still a $K(\pi',1)$? If so, what is $\pi'$?
| The quotient of a path-connected space $A$ by an $n$-point subspace is homotopy equivalent to $A\vee \bigvee\limits_{n-1} S^1$. And $K(\pi,1)\vee S^1=K(\pi*\mathbb Z,1)$. So the answer is yes, with $\pi'=\pi*F_{n-1}$.
Upd. To make the answer more... self-contained, let me also sketch the proof of the first claim for $n=2$. The crucial lemma: if $X$ is a CW-complex, $Y\subset X$ -- a contractible subcomplex, then $X/Y\cong X$ (the proof can be found in any AT textbook -- for example, see Proposition 0.17 of Hatcher's book).
Now for any 2-point subset $\{a,b\}$ of $A$ take $X=A\cup[0,1]/(0\sim a,1\sim b)$ and apply the lemma for $Y_1=[0,1]$ and for $Y_2$ equal to path $\gamma$ from $a$ to $b$ in $X$ (actually, one has to be slightly more careful here -- because sometimes such path is not a subcomplex etc, but let's ignore this for now): $A/\{a,b\}=X/Y_1\cong X\cong X/Y_2=(A/\gamma)\vee S^1\cong A\vee S^1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving the "differentiablity" of $x^{1/n}$ I am solving a problem that is below
Let $n \in \mathbb{N}$ and let $f(x) = x^{1/n}$, $x \gt 0$. Prove that $f$ is
differentiable on $(0,\infty)$ and find $f\,'$. (Hint: Note that $f = g^{−1}$, where
$g(y) = y^n$, $y \gt 0$.)
My question is
1) I should use induction, correct?
2) I am not sure how to get $f\,'$ via the hint. I know that $(f^{-1})'=1/f'(f^{-1})$, though I am not sure how to apply this...
| The Inverse Function Theorem says that if $g(x)$ is differentiable at $a$, and $g'(a)\neq 0$, then $g^{-1}$ is differentiable at $g(a)$ and
$$(g^{-1})'(g(a)) = \frac{1}{g'(a)}.$$
Since $f(x) = x^{1/n}$, in order to show that $f(x)$ is differentiable at $a^n$ it is enough to show that $g(x)=x^n$ is differentiable at $a$ and that $g'(a)\neq 0$, by the Inverse Function Theorem, since $f(x) = g^{-1}(x)$.
Now, if you know that $g(x)=x^n$ is differentiable for all $n$, then you don't need to use induction, just use the Inverse Function Theorem, noting that every value in $(0,\infty)$ is the image of an $a$ in $(0,\infty)$ under $g$, and that $g'(a)\neq 0$ if $a\neq 0$.
If you don't yet know that $g(x)=x^n$ is differentiable, then you can use induction: $g(x)=x$ is differentiable; if $h(x)=x^n$ is differentiable, then $g(x)=x^{n+1} = h(x)x$, and the product of differentiable functions is differentiable, hence $g(x)$ is differentiable. By induction, $x\mapsto x^n$ is differentiable for all $n\in\mathbb{N}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integration by Parts with Trigonometric Functions Trying to evaluate this indefinite integral:
$$ \int (x^2 + 1)\cos2xdx$$
So far I have the following: $u=x^2 + 1 \Rightarrow du = 2xdx$ and $dv=\cos2x \Rightarrow v = \frac {\sin2x}{2}$. So the integral is equal to:
$$\int (x^2+1)\cos2xdx = (x^2+1)\frac{\sin2x}{2}-\int {\frac{\sin2x}{2}}2xdx$$
Next, I make another substitution for the integral on the right hand side; let $ u = x \Rightarrow du = dx$ and let $dv = \sin2x \Rightarrow v = \frac {-\cos2x}{2}$. Now I have the following:
$$\int (x^2+1)\cos2xdx = (x^2+1)\frac{\sin2x}{2}-\left (-\frac {x\cos2x}{2} - \int -\frac {\cos2x}{2}dx\right)$$
Which after integrating becomes:
$$\int (x^2+1)\cos2xdx = (x^2+1)\frac{\sin2x}{2}-\left(-\frac {x\cos2x}{4} + \frac {\sin2x}{4}\right)$$
But when solving with the integrator on my calculator, I get a different answer (it looks like I am getting closer, but still off). What am I doing wrong here??
| I see three mistakes in your calculations:
*As Andre pointed out in the comments, in the first substitution $du = 2x dx$.
*There is a sign error in the second integration by parts:
$$-\left(-\frac{x\cos 2x}{2} - \int \frac{-\cos 2x}{2} dx\right) = \frac{x\cos 2x}{2} - \int \frac{\cos 2x}{2} dx.$$
*An integration constant should appear as early as the first integration by parts.
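Putting the three fixes together gives the antiderivative $(x^2+1)\frac{\sin 2x}{2}+\frac{x\cos 2x}{2}-\frac{\sin 2x}{4}+C$ (my completion, worth double-checking against a CAS); a finite-difference check that its derivative recovers the integrand:

```python
from math import cos, sin

def F(x):
    # Corrected antiderivative (integration constant omitted).
    return (x*x + 1)*sin(2*x)/2 + x*cos(2*x)/2 - sin(2*x)/4

def f(x):
    return (x*x + 1)*cos(2*x)   # the original integrand

h = 1e-6
errs = [abs((F(x + h) - F(x - h))/(2*h) - f(x)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0)]
```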
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Proving a function is continuous for all real numbers This a homework question:
Prove $f(x) = 2x^3 - 4x^2 + 5$ is continuous for all real numbers.
Which proof technique do I use for this? I don't really know where to start.
| Every polynomial is locally Lipschitz. More explicitly, take your example $f(x) = 2x^3 - 4x^2 + 5$. Then
$$
|f(x)-f(x_0)| = |(x-x_0) 2(x^2+x_0 x+x_0^2) - (x-x_0)4(x+x_0)| \le |x-x_0| L
$$
for a suitable value of $L$ obtained by using $|x|<|x_0|+\delta$ and the triangle inequality. You can then take $\delta=\varepsilon/L$.
In general, use that $x^n-x_0^n= (x-x_0)(x^{n-1}+x_0 x^{n-2} + \cdots + x_0^{n-2}x+x_0^{n-1})$ to factor $x-x_0$ out of $f(x)-f(x_0)$.
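To make the bound concrete (my numbers, not from the answer): take $x_0=2$ and $\delta=1$, so $|x|<3$; the triangle inequality gives $|2(x^2+x_0x+x_0^2)-4(x+x_0)|\le 2(9+6+4)+4(3+2)=58$, and this Lipschitz constant can be checked on a grid:

```python
def f(x):
    return 2*x**3 - 4*x**2 + 5

x0, delta, L = 2.0, 1.0, 58.0     # |x| < |x0| + delta = 3 yields L = 58
xs = [x0 - delta + i * (2*delta) / 2000 for i in range(2001)]
ok = all(abs(f(x) - f(x0)) <= L * abs(x - x0) + 1e-9 for x in xs)
```

With $L=58$ in hand, $\delta'=\min(\delta,\varepsilon/L)$ completes the $\varepsilon$-$\delta$ argument at $x_0=2$.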
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
A few questions about dual bases and bilinear forms I'm on my journey to understanding linear algebra. I've reached the point where I have to understand the dual basis and bilinear forms. This is something I didn't find in the book that I learn from (David Poole's A Modern Introduction to Linear Algebra). My teacher at the university did some examples so I want to understand them.
I know what a basis is. I also know what the geometrical interpretation is. I also know what a quadratic form is (is it related in any way to bilinear forms?)
Could you please summarize what a bilinear form and a dual basis are? Also, if there is any geometrical interpretation, please list it. Thank you a lot!
| To answer just one of your questions, let me work over a field $k$ of characteristic $\neq 2$, such as $\mathbf R$. Then one can pass back and forth between quadratic forms and symmetric bilinear forms as follows. A quadratic form $f\colon k^n \to k$ on $k^n$ (thought of as column vectors) corresponds to a (unique) symmetric $n \times n$ matrix $A$ such that
$$
f(x) = {}^txAx. \qquad \text{[${}^tx$ is the transpose of the column vector $x$]}
$$
But we can also define a map $g\colon k^n \times k^n \to k$ by
$$
g(x, y) = {}^txAy,
$$
and this is a symmetric bilinear form (take the transpose of the right-hand side). Can you see how to go in the other direction?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can you give an example of a complex math problem that is easy to solve? I am working on a project presentation and would like to illustrate that it is often difficult or impossible to estimate how long a task would take. I’d like to make the point by presenting three math problems (proofs, probably) that on the surface look equally challenging. But…
*
*One is simple to solve (or prove)
*One is complex to solve (or prove)
*And one is impossible
So if a mathematician can't simply look at a problem and say, "I can solve that in a day, or a week, or a month," how can anyone else who is truly solving a problem? The very nature of problem solving is that we don't know where the solution lies and therefore we don't know how long it will take to get there.
Any input or suggestions would be greatly appreciated.
| Sometimes a problem which seems very hard turns out to be "easy" because someone was clever enough to look at it in just the correct way. Two examples of this:
a. Construct lots of non-hamiltonian planar 3-valent 3-connected graphs (such graphs can be realized by convex 3-dimensional polyhedra). The Grinberg condition provides a nifty approach to finding such graphs easily: http://en.wikipedia.org/wiki/Grinberg%27s_theorem
b. Klee's art gallery problem: find the number of vertex guards that are sometimes necessary and always sufficient to "see" all of the interior of a plane simple polygon with n vertices. V. Chvatal and Steve Fisk found simple ways to answer this question: http://en.wikipedia.org/wiki/Art_gallery_theorem
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114",
"answer_count": 14,
"answer_id": 2
} |
What are the rules used for this integral representation? If we have $f(w)= w^{-\frac{\alpha}{\beta}}\displaystyle\int_0^w \frac{z^{\frac{\alpha}{\beta}-1}}{1-z} \mathrm{d} z $, what are the rules used to form $f(w)=\displaystyle\int_0^1 \frac{u^{\frac{\alpha}{\beta}-1}}{1-w u} \mathrm{d} u$?
Thanks a lot.
| For fixed $w$, we consider $w^{-\frac{\alpha}{\beta}}\displaystyle\int_0^w \frac{z^{\frac{\alpha}{\beta}-1}}{1-z} \mathrm{d} z$, which you called $f(w)$.
Let $z=wu$. Hence, if $z=w$, then $u=1$; if $z=0$, then $u=0$. Also, $\mathrm{d} z=w\,\mathrm{d}u$. On the other hand,
$$\frac{z^{\frac{\alpha}{\beta}-1}}{1-z}=\frac{w^{\frac{\alpha}{\beta}-1}u^{\frac{\alpha}{\beta}-1}}{1-wu}.$$
Now putting all these calculations together, we obtain
$$w^{-\frac{\alpha}{\beta}}\displaystyle\int_0^w \frac{z^{\frac{\alpha}{\beta}-1}}{1-z} \mathrm{d} z=w^{-\frac{\alpha}{\beta}}\int_0^1\frac{w^{\frac{\alpha}{\beta}-1}u^{\frac{\alpha}{\beta}-1}}{1-wu}\cdot w\,\mathrm{d}u=\int_0^1\frac{u^{\frac{\alpha}{\beta}-1}}{1-wu} \mathrm{d}u,$$
as required.
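A numeric spot-check of the substitution (Simpson's rule; the sample values $w=1/2$ and $\alpha/\beta=2$ are my own choices):

```python
def simpson(g, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

w, r = 0.5, 2.0  # sample values; r stands for alpha/beta

# left-hand form: w^(-r) * integral of z^(r-1)/(1-z) over [0, w]
lhs = w**(-r) * simpson(lambda z: z**(r - 1) / (1 - z), 0.0, w)
# right-hand form after z = w*u: integral of u^(r-1)/(1-w*u) over [0, 1]
rhs = simpson(lambda u: u**(r - 1) / (1 - w * u), 0.0, 1.0)
```

Both evaluate to $4(\ln 2 - \tfrac12)\approx 0.77259$ for these sample values.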
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Function which has no fixed points Problem:
Can anyone come up with an explicit function $f \colon \mathbb R \to \mathbb R$ such that $| f(x) - f(y)| < |x-y|$ for all $x,y\in \mathbb R$ and $f$ has no fixed point?
I could prove that such a function exists, for example a hyperbolic function which stays below the line $y=x$ and doesn't intersect it. But I am looking for an explicit function that satisfies that.
| For an example that doesn't involve defining the function piecewise, how about
$$f(x) = x \Phi(x) + \Phi'(x)$$
where $\Phi$ is the normal cdf and $\Phi'$ is its derivative, the normal pdf. You can view some of its properties on Wolfram Alpha.
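A numeric sanity check of this example (a sketch using `math.erf` for $\Phi$): on a grid of sample points, $f(x)>x$ (no fixed point) and nearby values contract, consistent with $f'(x)=\Phi(x)\in(0,1)$.

```python
import math

def Phi(x):
    # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi(x):
    # standard normal pdf, i.e. Phi'(x)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f(x):
    return x * Phi(x) + phi(x)
```

Since $f'(x)=\Phi(x)+x\varphi(x)-x\varphi(x)=\Phi(x)$ lies strictly between $0$ and $1$, the map is a strict contraction on bounded sets, while $f(x)-x=\varphi(x)-x(1-\Phi(x))>0$ everywhere, so there is no fixed point.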
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Why does $\int \limits_{0}^{1} \frac{1}{e^t}dt $ converge? I'd like your help to see why $$\int^1_0 \frac{1}{e^t} \; dt $$ converges.
As far as I can see, it is supposed to be:
$$\int^1_0 \frac{1}{e^t}\;dt=|_{0}^{1}\frac{e^{-t+1}}{-t+1}=-\frac{e^0}{0}+\frac{e}{1}=-\infty+e=-\infty$$
Thanks a lot!
| Your calculation is incorrect. Instead, you should have
$$\int_0^1\frac{1}{e^t}dt=\int_0^1e^{-t}dt=-e^{-t}\big|_0^1=(-e^{-1}-(-e^{0}))=1-\frac{1}{e}$$
You got mixed up with the rule for powers,
$$\int x^n\,dx=\frac{x^{n+1}}{n+1}+C$$
but for exponentials we have
$$\int e^x\,dx=e^x+C$$
You can also see that $e^{-t}$ is bounded above by the constant function $1$ on the interval $[0,1]$:
so that the area underneath the curve $e^{-t}$ from $0$ to $1$ has to be less than the area underneath the curve $1$ from $0$ to $1$ (which is $1$). So this tells you that the value of $\int_0^1\frac{1}{e^t}dt$ is less than $1$, which we've confirmed by showing that it is in fact equal to $1-\frac{1}{e}$.
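A quick numeric confirmation of the value (a plain Riemann sum, just as a sanity check):

```python
import math

n = 10000
h = 1.0 / n
# left Riemann sum for the integral of e^(-t) over [0, 1]
approx = sum(math.exp(-i * h) for i in range(n)) * h
exact = 1 - 1 / math.e
```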
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
} |
Correspondences between Borel algebras and topological spaces Though tangentially related to another post on MathOverflow (here), the questions below are mainly out of curiosity. They may be very-well known ones with very well-known answers, but...
Suppose $\Sigma$ is a sigma-algebra over a set, $X$. For any given topology, $\tau$, on $X$ denote by $\mathfrak{B}_X(\tau)$ the Borel algebra over $X$ generated by $\tau$.
Question 1. Does there exist a topology, $\tau$, on $X$ such that $\Sigma = \mathfrak{B}_X(\tau)$?
If the answer to the previous question is affirmative, it makes sense to ask for the following too:
Question 2. Denote by ${\frak{T}}_X(\Sigma)$ the family of all topologies $\tau$ on $X$ such that $\Sigma = \mathfrak{B}_X(\tau)$ and let $\tau_X(\Sigma) := \bigcap_{\tau \in {\frak{T}}_X(\Sigma)} \tau$. Is $\Sigma = \mathfrak{B}_X(\tau_X(\Sigma))$?
Updates. Q2 was answered in the negative by Mike (here).
| For Q1. How about this. $X = \{0,1\}^A$ for uncountable $A$, and $\Sigma$ is the product $\sigma$-algebra. So each element of $\Sigma$ depends on only countably many coordinates.
Now we just need a proof that this cannot be the Borel algebra of any topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/88916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 4,
"answer_id": 1
} |
Mirroring a set of Points Let's say I have a cloud of points, and I know the equation of the symmetry plane. I'd like to mirror every single point with respect to this plane. It might be much simpler than I think, but I have some difficulties finding a way to do that in Java.
I have the $x,y,z$ position of each point, its distance from the symmetry plane, the equation of the plane. How can I find the $x,y,z$ of the mirrored point ? I was trying to find the point of intersection between the symmetry plane and the line that passes through the point I have to mirror and with its normal being the symmetry plane.
I think it's more an Algebra problem than a Java one. But I still don't know how to do it without Java.
I was trying to calculate a linear system :
Line: $ax+by+cz+d=0$
Plane $a'x+b'y+c'z+d' =0$
But it looks like some information is missing and I can't solve it (only two equations). I hope I explained myself properly.
You can solve this problem using vector algebra.
Now suppose we have a point $P$ and a plane with normal vector $\vec{n}(|\vec{n}|=1)$ passing through point $P_0$. To find the symmetric point $P'$, we find the projection $K$ of $P$ on the plane first: $$ \vec{KP} = |\vec{KP}|\cdot \vec{n} = (\vec{P_0 P} \cdot \vec{n})\vec{n}$$
Note that $\vec{P_0 P} \cdot \vec{n}$ is actually equal to $$\frac{ax + by + cz + d}{\sqrt{a^2+b^2+c^2}} $$
You can use this form or just use the $P_0$ method in your program.
Finally, we get $P'$ easily:
$$ \vec{OP'} = \vec{OP} - 2\vec{KP}$$
where $O$ is the origin point $(0, 0, 0)$.
Here I suppose you use a vector library in your Java program, so your functions can handle vectors. The following pseudocode shows how to find $P'$, though it's not Java.
function findSymmetricPoint(p:vector3D, n:vector3D, p0:vector3D) -> p1:vector3D
n = n.normalized()
return p - n * (2 * (p - p0).dot(n))
end
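If it helps, here is the same recipe written out in Python with plain tuples (my own sketch, not the asker's Java code; a vector library would make it shorter):

```python
def mirror_point(p, normal, p0):
    # reflect p across the plane through p0 with normal vector `normal`;
    # all arguments are (x, y, z) tuples
    nx, ny, nz = normal
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    # signed distance from the plane: (p - p0) . n
    d = (p[0] - p0[0]) * nx + (p[1] - p0[1]) * ny + (p[2] - p0[2]) * nz
    return (p[0] - 2 * d * nx, p[1] - 2 * d * ny, p[2] - 2 * d * nz)
```

Mirroring twice returns the original point, which is a convenient correctness check.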
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Formula or code to compute number of subgroups of a certain order of an abelian $p$-group Given a finite abelian $p$-group and its factorization into groups of the form $\mathbb{Z}/p^k\mathbb{Z}$, does anyone know of a formula that gives the number of subgroups of a certain index/order? As I'm sure such a formula would contain some nasty product or sum, is there a computer algebra system out there that knows how to compute this?
| As pointed out in the comments, this question was answered by Greg Martin on MathOverflow. I include their answer below for completeness:
I had to look this up as well at some point in my research. The answer is yes, and a Google search for "number of subgroups of an abelian group" leads to several downloadable papers, not all of them easy to read. The paper "On computing the number of subgroups of a finite abelian group" by T. Stehling, in Combinatorica 12 (1992), contains the following formula and (I think) references to where it has appeared earlier in the literature.
Let $\alpha = (\alpha_1,\dots,\alpha_\ell)$ be a partition, so that $\alpha_1\ge\cdots\ge\alpha_\ell$. (In this formula it is convenient to allow some of the parts of the partition at the end to equal 0.) Define the notation
$$
{\mathbb Z}_\alpha = {\mathbb Z}/p^{\alpha_1}{\mathbb Z} \times \cdots \times {\mathbb Z}/p^{\alpha_\ell}{\mathbb Z}
$$
for a general $p$-group of type $\alpha$. Define similarly a partition $\beta$, and suppose that $\beta\preceq\alpha$, meaning that $\beta_j\le\alpha_j$ for each $j$. We want to count the number of subgroups of ${\mathbb Z}_\alpha$ that are isomorphic to ${\mathbb Z}_\beta$.
Let $a=(a_1,\dots,a_{\alpha_1})$ be the conjugate partition to $\alpha$, so that $a_1=\ell$ for example; similarly, let $b$ be the conjugate partition to $\beta$. Then the number of subgroups of ${\mathbb Z}_\alpha$ that are isomorphic to ${\mathbb Z}_\beta$ is
$$
\prod_{i=1}^{\alpha_1} \genfrac{[}{]}{0pt}{}{a_i-b_{i+1}}{b_i-b_{i+1}}p^{(a_i-b_i)b_{i+1}},
$$
where
$$
\genfrac{[}{]}{0pt}{}nm = \prod_{j=1}^m \frac{p^{n-m+j}-1}{p^j-1}
$$
is the Gaussian binomial coefficient.
To answer your specific question, you'd want to sum over subpartitions $\beta\preceq\alpha$ such that $\beta_1$ equals the exponent in question.
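Stehling's formula is easy to mechanize; here is a small Python sketch (my own translation of the formula above, with partitions given as non-increasing lists):

```python
def conjugate(part):
    # conjugate (transposed) partition of a non-increasing list
    if not part:
        return []
    return [sum(1 for x in part if x >= i) for i in range(1, part[0] + 1)]

def gaussian_binomial(n, m, p):
    # [n choose m]_p = prod_{j=1..m} (p^(n-m+j) - 1) / (p^j - 1)
    num = den = 1
    for j in range(1, m + 1):
        num *= p ** (n - m + j) - 1
        den *= p ** j - 1
    return num // den

def count_subgroups(alpha, beta, p):
    # number of subgroups of Z_alpha isomorphic to Z_beta (formula above)
    a, b = conjugate(alpha), conjugate(beta)
    total = 1
    for i in range(1, alpha[0] + 1):
        ai = a[i - 1]
        bi = b[i - 1] if i - 1 < len(b) else 0
        bi1 = b[i] if i < len(b) else 0
        total *= gaussian_binomial(ai - bi1, bi - bi1, p) * p ** ((ai - bi) * bi1)
    return total
```

For example, `count_subgroups([1, 1], [1], p)` returns $p+1$, the number of order-$p$ subgroups of $(\mathbb Z/p)^2$; summing over the subpartitions $\beta\preceq\alpha$ of a fixed size counts all subgroups of that order.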
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Compute lambda of Poisson distribution $p = \lambda e^{ -\lambda }$
If $p$ and $e$ are known, how can I calculate $\lambda$?
I tried taking logs on both sides but it did not help. Any suggestions?
| You say that $e$ is known -- does that mean you're using $e$ as a variable name rather than to denote Euler's number? That would be rather confusing.
In either case, what you need is the Lambert W function. Using the defining relation
$$z=W(z)\mathrm e^{W(z)}$$
(where $\mathrm e$ as usual denotes Euler's number) and
$$-p=-\lambda\mathrm e^{-\lambda}\;,$$
you get
$$\lambda =-W(-p)\;.$$
Note that this is a) multivalued and b) only defined for $p\le1/e$.
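If you don't have a Lambert W implementation handy, $\lambda$ can also be recovered numerically; here is a bisection sketch (my own, stdlib only) that finds either branch of $p=\lambda\mathrm e^{-\lambda}$:

```python
import math

def solve_lambda(p, branch_small=True):
    # solve p = lam * e^(-lam) for lam by bisection.
    # for 0 < p < 1/e there are two real solutions (the two branches of W):
    # one in (0, 1) and one in (1, infinity)
    f = lambda lam: lam * math.exp(-lam) - p
    if branch_small:
        lo, hi = 0.0, 1.0
    else:
        lo, hi = 1.0, 2.0
        while f(hi) > 0:  # expand until the large root is bracketed
            hi *= 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The two return values correspond to the two real branches $W_0$ and $W_{-1}$ of the Lambert W function.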
If you really did intend to use $e$ as a variable name, we can replace it by $a$ to keep things clear and then write
$$
\begin{eqnarray}
p
&=&
\lambda a^{-\lambda}
\;,
\\
p
&=&
\lambda \mathrm e^{-\lambda\log a}
\;,
\\
-p\log a
&=&
-\lambda\log a \mathrm e^{-\lambda\log a}
\;,
\\
\lambda
&=&
-\frac{W(-p\log a)}{\log a}
\;.
\end{eqnarray}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $A$ homeomorphic to $B$ imply $f^{-1}(A)$ is homeomorphic to $f^{-1}(B)$? Let $X$ and $Y$ be topological spaces, and $f:X\to Y$ a continuous map. Is the following true:
If $A$ and $B$ are two homeomorphic subspaces of $Y,$ then $f^{-1}(A)$ and $f^{-1}(B)$ are homeomorphic subspaces of $X$.
| This is false.
Let $C$ and $D$ be any two non-homeomorphic topological spaces, let $X=C\coprod D$ with the disjoint union topology, let $Y=\{0,1\}$ with the discrete topology, let $A=\{0\}$ and $B=\{1\}$, and define the function $f:X\to Y$ by
$$f(x)=\begin{cases}0\text{ if }x\in C,\\ 1\text{ if }x\in D.\end{cases}$$
Then $f$ is continuous, and $A$ is homeomorphic to $B$, but $f^{-1}(A)=C$ is not homeomorphic to $f^{-1}(B)=D$.
This is false even if we assume that $Y=X/G$ where $G$ is a topological group acting continuously on $X$, and $f:X\to Y$ is the quotient map. For example:
Let $D=\{0,1\}$ with the discrete topology, let $C$ be any non-empty space, let $X=C\coprod D$ with the disjoint union topology, let the group $G=\mathbb{Z}/2\mathbb{Z}=\{\overline{0},\overline{1}\}$ with the discrete topology act on $X$ by
$$\begin{align*}\overline{0}\cdot x&=x\text{ for all }x\in X,\\\\
\overline{1}\cdot x&=\begin{cases}x\text{ if }x\in C,\\
0\text{ if }x=1,\\1\text{ if }x=0,\end{cases}\end{align*}$$
which is continuous. Then $X/G\cong C\coprod\{\star\}$, where $\star$ represents the orbit $D$ of the action of $G$. For any $c\in C$, we have that $A=\{c\}$ and $B=\{\star\}$ are homeomorphic, but $f^{-1}(A)=\{c\}$ and $f^{-1}(B)=D$ are not homeomorphic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Are there any R-complete problems? Many complexity classes have complete problems. For example, NP has the NP-complete problems (using polynomial-time reductions), and RE has some RE-complete problems like the halting problem (using many-to-one mapping reductions).
Are there any R-complete problems, where R is the class of recursive languages? That is, is there a problem L in R such that any problem in R can be reduced to L? If not, is there some reason why not?
Thanks!
Assuming that you impose no restrictions on the reduction, any nontrivial problem $L$ in R is complete for the class. By nontrivial, I mean that the language should contain at least one "yes" and at least one "no" instance. The reduction is very simple: let $L'$ be any language in R.
*
*We fix a canonical "yes" instance $x_1$ and a canonical "no" instance $x_0$ in $L$.
*Since $L'$ is in R, it is decided by some algorithm. Solve the given instance using this algorithm.
*If the result is "yes", output $x_1$; otherwise output $x_0$.
It is clear the above reduction works.
This situation has analogues in complexity theory as well: any nontrivial language in P is P-complete under polynomial time reductions. To overcome such arguably silly conclusions, while reducing a problem $A$ to another problem $B$, the usual understanding is that the reduction is allowed less resources than the algorithms solving either $A$ or $B$. For example, while logspace reductions make sense for P, polytime reductions do not; on the other hand, polytime reductions are useful while studying NP or PSPACE.
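The three-step reduction above can be sketched in a few lines of Python (an illustration of mine; the names are arbitrary):

```python
def reduction_to(decide_l_prime, yes_instance, no_instance):
    # trivial many-one reduction from a decidable language L' to any
    # nontrivial language L with canonical instances yes_instance / no_instance:
    # decide the input outright, then emit the matching canonical instance
    def reduce(instance):
        return yes_instance if decide_l_prime(instance) else no_instance
    return reduce
```

The point is that an unrestricted reduction may do all the work of deciding $L'$ itself, which is exactly why every nontrivial language in R ends up complete under such reductions.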
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
prove that if $f(x) \ge x^2$ and $f(x)$ is continuous then $f([0,\infty))$ has a minimum I have a homework question to prove that if $f(x) \ge x^2$ and $f(x)$ is continuous then $f([0,\infty))$ has a minimum.
It is fairly obvious why it's true, but I am having trouble writing it formally (mainly, the problem is selecting the minimizing $x$).
Can someone help me please? Thanks :)
| You need to assume that $f$ is continuous; otherwise, there are counterexamples.
You also need to specify the domain. Is it $(0,\infty)$? In this case, the statement isn't true.
If the domain is $[0,\infty)$:
Since $f(x)\ge x^2$, there is an $M>0$ such that
$$\tag{1}f(x)\ge f(0)\ \text{ for all }\ x\ge M$$
(for instance $M=\sqrt{|f(0)|}+1$ works: if $x\ge M$ then $f(x)\ge x^2\ge M^2>|f(0)|\ge f(0)$).
Assuming $f$ is continuous, it has a global minimum on the closed, bounded interval $[0,M]$ by the extreme value theorem.
By (1), this is also the global minimum of $f$ on $[0,\infty)$.
Note that you just have to prove that there is a minimum of $f$, you don't have to explicitly find it (with the information given, this would be impossible to do).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Trick to find multiples mentally We all know how to recognize numbers that are multiples of $2, 3, 4, 5$ (and others). Some other divisors are a bit more difficult to spot. I am thinking about $7$.
A few months ago, I heard a simple and elegant way to find multiples of $7$:
Cut the digits into pairs from the end, multiply the last group by $1$, the previous by $2$, the previous by $4$, then $8$, $16$ and so on. Add all the parts. If the resulting number is multiple of $7$, then the first one was too.
Example:
$21553$
Cut digits into pairs:
$2, 15, 53$
Multiply $53$ by $1, 15$ by $2, 2$ by $4$:
$8, 30, 53$
Add:
$8+30+53=91$
As $91$ is a multiple of $7$ ($13 \cdot 7$), then $21553$ is too.
This works because $100-2$ is a multiple of $7$. Every hundred, the last two digits are $2$ less than a multiple of $7$ ($105 \to 05 = 7 - 2$, $112 \to 12 = 14 - 2$, $\cdots$)
I figured out that if it works like that, maybe it would also work if we consider $7=10-3$ and multiply each digit by a power of $3$ instead of each pair of digits by a power of $2$.
Example with $91$:
$91$
$9, 1$
$9\cdot3, 1\cdot1$
$27, 1$
$28$
My question is: can you find a rule that works with any divisor? I can find one with divisors from $1$ to $19$ ($10-9$ to $10+9$), but I have problems with bigger numbers. For example, how can we find multiples of $23$?
| I will try to explain a general rule using modular congruence. We can show that any integer in base $10$ can be written as $$ z = a_0 + a_1 \times 10 + a_2 \times 10^2 + a_3 \times10^3 + \cdots + a_n \times 10^n$$
Let's say we have to find a divisibility rule for $7$. For congruence modulo $7$ we have,
$$ 10 \equiv 3, 10^2 \equiv 2, 10^3 \equiv -1, 10^4 \equiv -3, 10^5 \equiv -2, 10^6 \equiv 1,$$
The successive remainders then repeat. Thus our integer $z$ is divisible by $7$ iff the remainder expression $$ r= a_0 + 3a_1 +2a_2 -a_3-3a_4-2a_5+a_6+3a_7+\cdots$$ is divisible by $7$.
To understand why the divisibility of $r$ indicates the divisibility of $z$, consider $z-r$, which is given by: $$z-r = a_1 \times (10-3) + a_2 \times (10^2-2) + a_3 \times (10^3+1) + \cdots + a_6 \times (10^6-1)
+ \cdots $$
Since all these numbers $ (10-3),(10^2-2),(10^3+1),\cdots$ are congruent to $0$ modulo $7$, $z-r$ is also, and therefore $z$ leaves the same remainder on division by $7$ as $r$ does.
Using this approach we can derive a divisibility rule for any integer.
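The same congruence idea can be mechanized (a sketch of mine, not the hand trick itself): weight the $i$-th digit by $10^i \bmod d$, reduced to a small symmetric representative, and test the weighted sum.

```python
def divisible_by(z, d):
    # weight the i-th digit (from the least significant end) by 10^i mod d,
    # reduced to a symmetric representative in (-d/2, d/2], and sum
    digits = [int(c) for c in str(z)][::-1]
    r, w = 0, 1 % d
    for a in digits:
        s = w if w <= d // 2 else w - d
        r += a * s
        w = (w * 10) % d
    return r % d == 0
```

For $d=7$ the weights cycle through $1,3,2,-1,-3,-2$, exactly the expression $r$ above; for $d=23$ the same procedure produces the analogous (longer) cycle, which answers the question about $23$.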
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 3
} |
Computing element of fundamental group of Möbius strip How does one go about computing the element of the fundamental group of a Möbius strip represented by the loop $(\cos 10\pi t, \sin 10\pi t)$.
| HINT: You know that $\pi_1(M)\cong \mathbb{Z}$ via the isomorphism which takes $1\in\mathbb{Z}$ to the loop which goes around the Möbius strip once. Find how to express your loop as a product of the once-around loop and just pull back.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $2 \arctan\sqrt{x} = \arcsin \frac{x-1}{x+1} + \frac{\pi}{2}$ I want to prove that$$2 \arctan\sqrt{x} = \arcsin \frac{x-1}{x+1} + \frac{\pi}{2}, x\geq 0$$
I started from the arcsin part and tried to work toward the arctan one, but I failed.
Can anyone help me solve it?
| It's kind of fun to "unveil" these mysterious formulas using only trigonometry, by which they all appear to be very simple angle relationships.
For $x>1$ consider the Figure below.
[figure omitted: the construction described below]
Start with a right-angled triangle with hypotenuse $\overline{AC} = x+1$ and side $\overline{BC}=x-1$, so that
$$\alpha = \arcsin\left(\frac{x-1}{x+1}\right),$$
and, by Pythagorean Theorem,
$$\overline{AB} = 2\sqrt x.$$
Extend $CB$ beyond $B$ to a point $D$ with $\overline{CD} = x+1.$ Then $\overline{BD} = \overline{CD}-\overline{CB} = 2$ and
$$\beta = \arctan \sqrt x.$$
Now use the fact that $\triangle ACD$ is isosceles and $\triangle ABD$ is right-angled to write
$$ \beta + (\beta - \alpha) = \frac{\pi}{2},$$
i.e.
$$2\arctan\sqrt x = \arcsin\left(\frac{x-1}{x+1}\right) + \frac{\pi}{2}.$$
For $0<x<1$ use the Figure below.
[figure omitted: the construction described below]
Here $\overline{AC} = x+1$ and $\overline{BC} = 1-x$, so that
$$\alpha = -\arcsin\left(\frac{x-1}{x+1}\right).$$
Again we have $\overline{AB}=2\sqrt x$.
Extend $BC$ to a segment $\overline{BD} = 2$, so that $\overline{AC} = \overline{CD} = x+1$ and
$$\beta = \arctan\sqrt{x}.$$
Since $\triangle ACD$ is isosceles and $\triangle ABD$ is right-angled, we have, this time
$$(\alpha + \beta) + \beta = \frac{\pi}{2}.$$
Once the replacement is done, this yields again the desired relationship, which therefore is valid for $x>0$. $\blacksquare$
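The identity is also easy to verify numerically (a quick sanity check of mine; the sample points are arbitrary):

```python
import math

def lhs(x):
    # left-hand side: 2 * arctan(sqrt(x))
    return 2 * math.atan(math.sqrt(x))

def rhs(x):
    # right-hand side: arcsin((x-1)/(x+1)) + pi/2
    return math.asin((x - 1) / (x + 1)) + math.pi / 2
```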
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
How to graph a cumulative subtraction on Wolfram Alpha I'd like to graph a timing algorithm that I'd been using (that did essentially the opposite of what I wanted). I apologize if I'm not using the correct name to refer to it.
Essentially it was timer = 100, timer = timer - level, where level increases from 0 to 50 etc.
For example: 100-1 = 99, 99-2 = 97, 97-3 = 94, 94-4 = 90, etc...
To give you some examples of the kind of graph I'd like, input the following two which were used to create the correct algorithm (a sort of exponential decay);
y=100\cdot e^{-0.05x} with x from 0 to 50
y=100*0.95^{x} with x from 0 to 50
(I'm writing a little article about the improved timer, and I'd like to include graphs of the old and new algorithms for comparison.) Thank you in advance. :)
| FIRST I changed it to going from $500$
Secondly, it actually gives an alternate interpretation for the summation, and once THAT is punched in, guess what ;)
y=500-(0.5x(x+1)) from x=0 to x=31
Wolframalpha input
It's too bad that I couldn't get the $x\gt0$ plotting to work with sigma notation, but this works :)
y=100-( sum j, j=1 to x)
Wolframalpha input
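For completeness, the old timer rule computed directly (a sketch; it agrees with the closed form `start - level * (level + 1) // 2` used in the Wolfram Alpha input above):

```python
def timer_after(level, start=100):
    # cumulative subtraction: start - 1 - 2 - ... - level
    t = start
    for k in range(1, level + 1):
        t -= k
    return t
```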
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/89838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |