| Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict) |
|---|---|---|
Group actions on manifolds - exponential map Let $M$ be a smooth manifold. Suppose $K$ is a Lie group (with Lie algebra $\mathfrak{k}$) acting transitively on $M$ from the left, and $G$ is a Lie group (with Lie algebra $\mathfrak{g}$) acting on $M$ from the right. Suppose further that these actions commute, i.e. $k(pg)=(kp)g=kpg$. Fix a point $x\in M$, and suppose $\phi(t)$ is a smooth curve in $M$ such that $\phi(0)=x$.
Is it possible to find $X\in \mathfrak{k}$ and $A\in\mathfrak{g}$ such that
\begin{align*}
\frac{d}{dt}\bigg|_{t=0}\exp(tX)\,x\exp(tA)=\phi^{\prime}(0)\,?
\end{align*}
And if so, how could you prove such a fact?
|
Yes, it is always possible. In fact, you can always take $A=0\in\mathfrak g$.
Fix $x\in M$, and consider the smooth map $F\colon K\to M$ given by $F(k) = kx$. Because $K$ acts transitively, $F$ is surjective. Moreover, $F$ is equivariant with respect to the (transitive) left actions of $K$ on $K$ and $M$: For all $k,k'\in K$,
$$
k'F(k) = k'(kx) = (k'k)x = F(k'k).
$$
The equivariant rank theorem (Theorem 7.25 in my Introduction to Smooth Manifolds) shows that $F$ has constant rank, and because it's surjective, the global rank theorem (Theorem 4.14 in ISM) shows that it's a smooth submersion. This means that $dF_e\colon T_eK \to T_xM$ is surjective (where $e$ is the identity of $K$). Identifying $T_eK$ with $\mathfrak k$, we can choose $X\in \mathfrak k$ such that $dF_e(X) = \phi'(0)$. With $A=0\in \mathfrak g$, it then follows that
\begin{align*}
\left.\frac d {dt}\right|_{t=0}\exp(tX)x\exp(tA) &=
\left.\frac d {dt}\right|_{t=0}F(\exp(tX))
=
dF_e (d(\exp)_e(X)) =
dF_e (X) = \phi'(0).
\end{align*}
(In the second-to-last equation, I used the fact that $d(\exp)_e$ is the identity map on $\mathfrak k$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1369624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Lipschitz-like behaviour of quartic polynomials I have observed the following phenomenon:
Let the biquadratic $q(x)=x^4-Ax^2+B$ have four real roots, and perturb it by a linear term, $p(x)=q(x)+mx$, with $m$ not too large relative to $A$ and $B$.
Then the roots of $p(x)$ (assuming they are real) are very close to the roots of $q(x)$.
Example:
$q(x)=x^4-15x^2+20$ has roots $\pm 1.22,\pm 3.68$. The perturbation $q(x)-4x$ has roots $-3.5,-1.4,1.065,3.83$ (I rounded to 1-2 digits for clarity).
The difference between corresponding roots of $q(x)$ and $p(x)$ does not exceed $0.185$ in this case, which I consider quite close (good enough for my purposes).
Is there an explanation for this phenomenon?
I found two relevant papers; however, one seems to have unspecified constants $C$, and the other goes into deep theory. I am looking for a simple, usable result, or at least a simple explanation, if there is one.
|
You should look at that source of all wisdom: Kato's Perturbation theory of linear operators, which, in the first chapter, discusses the perturbation of eigenvalues of a matrix (before plunging into infinite dimensions in subsequent sections). Apply that discussion to the companion matrix of your polynomial.
Of course, for quartics, there is an explicit formula in radicals, so you can write down the perturbation as explicitly as you like using that.
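Before reaching for Kato or the explicit quartic formula, the phenomenon in the question is easy to confirm numerically. The following sketch uses plain bisection on the example $q(x)=x^4-15x^2+20$ and $p(x)=q(x)-4x$; the search interval and grid resolution are ad-hoc choices for this example:

```python
def real_roots(f, lo=-5.0, hi=5.0, steps=2000, tol=1e-12):
    """Find real roots of f on [lo, hi] by scanning for sign changes and bisecting."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) < 0:
            while b - a > tol:
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots

q = lambda x: x**4 - 15 * x**2 + 20   # roots ~ +-1.216, +-3.677
p = lambda x: q(x) - 4 * x            # the perturbed quartic
max_shift = max(abs(u - v) for u, v in zip(real_roots(q), real_roots(p)))
# corresponding roots move by at most ~0.185, as observed in the question
assert 0.1 < max_shift < 0.2
```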
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1369716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Frog jumping on leaves $N$ leaves are arranged around a circle. A frog sits on the first leaf and repeatedly jumps $K$ leaves forward. How many leaves can the frog reach?
|
A couple of hints to get you started: recall Bezout's identity and think about the greatest common divisor of $N$ and $K$.
Full solution:
Let $d$ be the greatest common factor of $N$ and $K$.
Label the positions $0,1,2,\dots,N-1$. The frog starts at position $0$, jumps $K$ places to the position congruent to $K\bmod N$, then to the position congruent to $2K\bmod N$, and so on. So the frog can reach exactly the positions of the form $sK\bmod N$, and every such position is a multiple of $d$. Conversely, by Bézout's identity there exist $s$ and $t$ such that $sK+tN=d$, in other words $sK\equiv d\bmod N$, so after $s$ jumps the frog reaches position $d$; from there it reaches position $2d$ after $2s$ jumps, $3d$ after $3s$ jumps, and so on up to position $N-d$ and finally position $0$. So the frog can reach exactly $\frac{N}{d}$ positions.
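The gcd count is easy to confirm by brute-force simulation; a small Python sketch (the sample values of $N,K$ are arbitrary):

```python
from math import gcd

def reachable(N, K):
    """Positions visited by a frog starting at 0 and jumping K leaves on a circle of N."""
    seen, pos = set(), 0
    while pos not in seen:
        seen.add(pos)
        pos = (pos + K) % N
    return seen

for N, K in [(12, 8), (10, 3), (9, 6), (7, 7)]:
    d = gcd(N, K)
    r = reachable(N, K)
    assert r == set(range(0, N, d))   # exactly the multiples of gcd(N, K)
    assert len(r) == N // d
```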
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1369866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How many words can be formed using all the letters of "DAUGHTER" so that vowels always come together? How many words can be formed using all the letters of "DAUGHTER" so that vowels always come together?
I understood that there are 6 letters if we consider "AUE" as a single letter, and that part of the answer would be $6!$; again, for "AUE" itself it is $3!$. But I didn't get why we compute $6! \cdot 3!$.
Can't we just add (6! + 3!) to get final result?
|
Imagine your three vowels as a block, and ignore the ordering inside the block for the moment. Now you have 6 remaining "letters": the original consonants plus this "vowel block". There are $6!$ ways of ordering them, which is the standard permutation formula. Now, for *each* of these orderings, you can internally order the three vowels in the block. So you multiply by $3!$, which is the number of ways to order the vowels.
EDIT: The intuition for multiplication can be strengthened with some visualisation. Take a sheet of paper. Imagine all the "external" orderings of the consonants and vowel block listed vertically. Now, for each of these, write horizontally the $3!=6$ corresponding "internal" orderings of the vowels. You get a rectangular grid, each cell containing exactly one ordering. The number of cells is clearly the grid's breadth times its height. Hence $6!\cdot3!$.
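The count $6!\cdot3!$ (rather than $6!+3!$) can also be verified by brute force over all $8!$ arrangements of the distinct letters:

```python
from itertools import permutations
from math import factorial

VOWELS = set("AUE")
together = 0
for word in permutations("DAUGHTER"):         # all 8! = 40320 arrangements
    positions = [i for i, ch in enumerate(word) if ch in VOWELS]
    if max(positions) - min(positions) == 2:  # the three vowels are adjacent
        together += 1

assert together == factorial(6) * factorial(3)   # 4320, not 6! + 3! = 726
```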
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1369974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Find the Limit $\lim_{n \rightarrow \infty}\frac{1}{(n+1) \log (1+\frac{1}{n})}$ Find the limit
$$\lim\limits_{n \rightarrow \infty}\frac{1}{(n+1) \log (1+\frac{1}{n})}$$
|
Set $1/n=h\implies h\to0^+$
$$\lim\limits_{n \rightarrow \infty}\frac{1}{(n+1) \log\left(1+\frac1n\right)}$$
$$=\lim_{h\to0^+}\dfrac h{(h+1)\ln(1+h)}$$
$$=\dfrac1{\lim_{h\to0^+}\dfrac{\ln(1+h)}h}\cdot\dfrac1{\lim_{h\to0^+}(1+h)}=?$$
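Filling in the two standard limits gives the value $1$, which a quick numerical check confirms (since $(n+1)\log(1+\tfrac1n)\approx 1+\tfrac{1}{2n}$, the convergence is visible already for moderate $n$):

```python
import math

def u(n):
    """The sequence 1 / ((n+1) * log(1 + 1/n))."""
    return 1 / ((n + 1) * math.log(1 + 1 / n))

# u(n) ~ 1 - 1/(2n), so the limit is 1
assert abs(u(10**3) - 1) < 1e-3
assert abs(u(10**6) - 1) < 1e-5
```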
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Geometry question about lines Suppose I have two points in Euclidean space (or the Cartesian plane), and both points lie on the same side of a straight line: both above it, or both below it. How can I show that the segment connecting the two points also lies above or below the line, respectively? That is, every point on the segment is above or below the line, respectively. This is so obviously true. Is one supposed to take it as axiomatically true, or can it be proved?
|
Given an equation for the line, $f(x) = ax+b$, parametrize the segment between the two points as the graph of an affine function $g:[c,d]\rightarrow\mathbb{R}$. The signed vertical distance $g(x)-f(x)$ is again affine in $x$, so it has no interior local extrema; its extreme values on $[c,d]$ are attained at the endpoints $c,d$, which correspond to the original two points. Since the signed distance has the same sign at both endpoints, it keeps that sign on all of $[c,d]$, so every point on the segment is on the same side of the line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
What is $0\div0\cdot0$? We all know that multiplication is the inverse of division, and therefore
$x\div{x}\cdot{x}=x$
But what if $x=0$? $0\div0$ is undefined so $0\div0\cdot0$ should be too, but whatever happens when we divide that first $0$ by $0$ should be reversed when we multiply it by $0$ again, so what is the right answer? 0, undefined, or something else entirely?
|
Any expression involving "$\div 0$" is undefined because $0$ is not in the domain of the binary operation of division (of real numbers), no matter how it is combined with other mathematical symbols. You can't "pretend" it is and cancel it -- the expression is already meaningless, so you can't proceed further.
Your statement that "$x\div x \cdot x = x$" is not true as it stands -- it carries the implicit proviso that $x\neq 0$.
Amendment: I should have said that $(a,0)$ is not in the domain for any value of $a$, of course, because the domain consists of certain pairs of numbers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 17,
"answer_id": 6
}
|
Finding $\frac {a}{b} + \frac {b}{c} + \frac {c}{a}$ where $a, b, c$ are the roots of a cubic equation, without solving the cubic equation itself Suppose that we have a equation of third degree as follows:
$$
x^3-3x+1=0
$$
Let $a, b, c$ be the roots of the above equation, such that $a < b < c$ holds. How can we find the answer of the following expression, without solving the original equation?
$$
\frac {a}{b} + \frac {b}{c} + \frac {c}{a}
$$
|
If you multiply out the expression $(x-a)(x-b)(x-c)$ and compare the coefficients with $x^3-3x+1$, then put $\frac ab+\frac bc+\frac ca$ over a common denominator and compare again, all will become clear.
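For reference, one can check numerically what the Vieta-style computation should produce. With the ordering $a<b<c$ the cyclic sum comes out as $-6$; note also that the two cyclic sums together satisfy $\left(\frac ab+\frac bc+\frac ca\right)+\left(\frac ba+\frac cb+\frac ac\right)=(a+b+c)\left(\frac1a+\frac1b+\frac1c\right)-3=0-3=-3$. A sketch with plain bisection (the bracketing intervals are read off from sign changes of the cubic):

```python
def bisect(f, lo, hi, tol=1e-13):
    """Locate a root of f in [lo, hi], assuming a sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**3 - 3 * x + 1
a, b, c = bisect(f, -2, -1), bisect(f, 0, 1), bisect(f, 1, 2)   # a < b < c
s = a / b + b / c + c / a
t = b / a + c / b + a / c
assert abs(s - (-6)) < 1e-6
assert abs((s + t) - (-3)) < 1e-6   # (a+b+c)(1/a+1/b+1/c) - 3 = 0 - 3
```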
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 2
}
|
Solving recurrence equation with floor and ceil functions I have a recurrence equation that would be very easy to solve (without ceil and floor functions) but I can't solve them exactly including floor and ceil.
\begin{align}
k (1) &= 0\\
k(n) &= n-1 + k\left(\left\lceil\frac{n}{2}\right\rceil\right) + k\left(\left\lfloor\frac{n}{2}\right\rfloor\right) \qquad n \in \mathbb{N}_+
\end{align}
How can I solve such an equation?
|
SKETCH: It’s often useful to gather some numerical data:
$$\begin{array}{rcc}
n:&1&2&3&4&5&6&7&8&9&10&11&12&13&14&15&16&17&18\\
k(n):&0&1&3&5&8&11&14&17&21&25&29&33&37&41&45&49&54&59\\
k(n)-k(n-1):&&1&2&2&3&3&3&3&4&4&4&4&4&4&4&4&5&5
\end{array}$$
Notice that the gaps in the bottom line are a single $1$, two $2$s, four $3$s, and eight $4$s. This suggests that if $2^{m-1}<n\le 2^m$, then $k(n)-k(n-1)=m$. Assuming this to be the case, we must have
$$k(2^m)=\sum_{\ell=1}^m\ell2^{\ell-1}=(m-1)2^m+1\;.$$
This can now be proved by induction on $m$, since $k(2n)=2n-1+2k(n)$. Now let $n=2^m+r$, where $0\le r<2^m$. Then the obvious conjecture is that
$$k(n)=k(2^m)+(m+1)r=(m-1)2^m+1+(m+1)r\;,$$
which again can be verified by induction, though the argument is a bit messier.
Note that $m=\lfloor\lg n\rfloor$, and $r=n-2^m$, so $m$ and $r$ are both readily obtained from $n$.
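The conjectured closed form can be checked against the recurrence directly; a Python sketch (using that $\lceil n/2\rceil=\lfloor(n+1)/2\rfloor$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def k(n):
    """The recurrence k(1) = 0, k(n) = n - 1 + k(ceil(n/2)) + k(floor(n/2))."""
    if n == 1:
        return 0
    return n - 1 + k((n + 1) // 2) + k(n // 2)   # ceil(n/2) = (n+1)//2

def closed(n):
    m = n.bit_length() - 1        # m = floor(lg n)
    r = n - (1 << m)              # n = 2^m + r with 0 <= r < 2^m
    return (m - 1) * (1 << m) + 1 + (m + 1) * r

assert all(k(n) == closed(n) for n in range(1, 5000))
```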
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove the Supremum is attained. Let $F$ denote the set of real-valued functions on $[0,1]$ such that:
1) $ \; |f(x)| \leq 1 \; \forall x \; \in [0,1]$
2) $ \; |f(x)-f(x')| \leq |x-x'| \; \: \forall x,x' \: \in [0,1] $
Prove that the following supremum is attained. $$\sup_{f \in F} \int_0^1 f(x) \sin(\frac{1}{x})\,dx$$
Thoughts :
Conditions 1) and 2) jointly imply that $F \subset C[0,1]$ is equicontinuous and bounded. Hence, by the Arzelà–Ascoli theorem, $F$ has compact closure. I think $F$ is in fact closed, although I'm not sure how to show this.
For each $n \in \mathbb{N} \; \exists \; \; f_n(x) \in F$ such that, $$\sup_{f \in F} \int_0^1 f(x) \sin(\frac{1}{x})\,dx -\frac{1}{n} \: < \int_0^1 f_n(x) \sin(\frac{1}{x})\,dx \leq \; \:\sup_{f \in F} \int_0^1 f(x) \sin(\frac{1}{x})\,dx$$
Then $f_n(x)$ is a sequence in $F$ and so has a convergent subsequence $f_{n_{k}}(x) \rightarrow f$ by compactness.
I think I've almost got the answer except I can't justify that $$\lim_{k\rightarrow \infty} \int_0^1 f_{n_{k}}(x) \sin(\frac{1}{x})\,dx = \int_0^1 f(x) \sin(\frac{1}{x})\,dx$$
I've seen this work in Lebesgue theory ( Dominated Convergence ) but not sure how it really works with Riemann integrals.
|
As you already noted, $F$ is equicontinuous and (pointwise) bounded, so Arzela-Ascoli implies its closure is compact. Let's verify $F$ is closed: Suppose $f_n$ is a sequence in $F$ which converges uniformly to some continuous function $g$. Then $f_n$ converges pointwise, hence
\begin{align*}
1)&|g(x)|=\lim |f_n(x)|\leq 1;\qquad\text{and}\\
2)&|g(x)-g(x')|=\lim|f_n(x)-f_n(x')|\leq|x-x'|;
\end{align*}
so $g\in F$. Therefore $F$ is compact (with respect to the uniform norm $\Vert f\Vert_\infty=\sup_x|f(x)|$).
Let's verify that the function $f\in F\mapsto \int_0^1 f(x)\sin(1/x)dx$ is continuous (with respect to the uniform norm). Indeed, for all $f$ and $g$ in $F$,
\begin{align*}
\left|\int_0^1 f(x)\sin(1/x)dx-\int_0^1 g(x)\sin(1/x)dx\right|&\leq\int_0^1|f(x)-g(x)||\sin(1/x)|dx\\
&\leq\int_0^1|f(x)-g(x)|dx\\
&\leq\int_0^1\Vert f-g\Vert_\infty dx=\Vert f-g\Vert_\infty,
\end{align*}
so in fact the map $f\mapsto \int_0^1f(x)\sin(1/x)dx$ is Lipschitz, hence continuous.
Since a continuous function on a compact set attains its supremum, we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Tangent line parallel to another line At what point of the parabola $y=x^2-3x-5$ is the tangent line parallel to $3x-y=2$? Find its equation.
I don't know what the slope of the tangent line will be. Is it the negative reciprocal?
|
The tangent must be parallel to $3x-y=2$, i.e. $y=3x-2$, so its slope is $3$ (parallel lines have equal slopes; the negative reciprocal would give a perpendicular line). If you solve the curve simultaneously with a line $y=3x+c$ to get a quadratic equation in $x$, then this quadratic must have a double root at the point of tangency. This condition gives the value of $c$, and the required $x$ value is $x=-\frac{b}{2a}$.
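Carrying the double-root method out for this curve gives $c=-14$ and the point of tangency $(3,-5)$; the arithmetic can be verified step by step:

```python
# Substituting y = 3x + c into y = x^2 - 3x - 5 gives x^2 - 6x - (5 + c) = 0.
# Tangency means a double root: discriminant 36 + 4*(5 + c) = 0, so c = -14.
c = -14
disc = 36 + 4 * (5 + c)
assert disc == 0

x = 6 / 2                      # double root x = -b/(2a) = 3
y = x**2 - 3 * x - 5           # y = -5, so the tangent line is y = 3x - 14
assert (x, y) == (3.0, -5.0)
assert 2 * x - 3 == 3          # dy/dx at x = 3 equals the line's slope 3
assert y == 3 * x + c          # the point lies on the tangent line
```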
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1370933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Proving a theorem about Fourier coefficients I need to prove this:
Let $f$ be a $C^1$ function on $[-\pi, \pi]$. Prove that the Fourier coefficients of $f$ satisfy $|a_n| \leq \frac{K}{n}$ for some constant $K$.
Can someone please let me know if I would be on a right track if I said:
Let $||f(x)||_\infty$ = C, then $a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x) \cos(nx) dx \leq \frac{2}{\pi} C \int_{0}^{\pi} \cos(nx) dx = \frac{2}{\pi} C \frac{\sin(nx)}{n} \leq C \frac{2}{\pi}\frac{1}{n} = \frac{K}{n}$.
Any suggestions would be greatly appreciated.
|
Integrate by parts: since $\sin(\pm n\pi)=0$, the boundary term vanishes, and $$\int_{-\pi}^{\pi}f(x) \cos nx\, dx=\left.\frac{1}{n}f(x)\sin nx\right|_{-\pi}^\pi-\frac1n\int_{-\pi}^\pi\frac{df}{dx}\sin nx\, dx=-\frac1n\int_{-\pi}^\pi\frac{df}{dx}\sin nx\, dx.$$ Hence $$\left|\int_{-\pi}^{\pi}f(x) \cos nx\, dx\right|=\frac{1}{n}\left|\int_{-\pi}^\pi\frac{df}{dx}\sin nx\, dx\right|\le \frac{1}{n}\max_{x\in [-\pi,\pi]}\left|\frac{df}{dx}\right|\int_{-\pi}^\pi |\sin nx|\, dx\le\frac{2\pi}{n}\max_{x\in [-\pi,\pi]}\left|\frac{df}{dx}\right|.$$ Since $f\in \mathcal{C}^1$, the derivative $\frac{df}{dx}$ is continuous and hence attains a finite maximum over $[-\pi,\pi]$ (since this set is compact); dividing by $\pi$ gives $|a_n|\le \frac{K}{n}$ with $K=2\max_{[-\pi,\pi]}|f'|$.
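One usable constant coming out of integration by parts is $K=2\max|f'|$. A numerical sanity check with the concrete $C^1$ function $f(x)=x^2$, whose cosine coefficients are exactly $a_n=4(-1)^n/n^2$, using a midpoint rule:

```python
import math

def a_n(f, n, steps=50_000):
    """Midpoint-rule approximation of (1/pi) * integral_{-pi}^{pi} f(x) cos(nx) dx."""
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * h
        total += f(x) * math.cos(n * x)
    return total * h / math.pi

f = lambda x: x * x                  # C^1 on [-pi, pi]; exact a_n = 4*(-1)^n / n^2
K = 2 * (2 * math.pi)                # K = 2 * max|f'| on [-pi, pi]
for n in range(1, 20):
    an = a_n(f, n)
    assert abs(an - 4 * (-1) ** n / n**2) < 1e-4
    assert abs(an) <= K / n          # the claimed bound |a_n| <= K/n
```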
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Intuition on the Representable Functor Given a locally small category C, and an object $C$, the functor:
\begin{equation}
\mbox{Hom}_\textbf{C}(C,-):\textbf{C} \longrightarrow \textbf{Sets}
\end{equation}
that sends objects to hom-sets and arrows $f:A\rightarrow B$ to functions:
\begin{align}
f_*:\mbox{Hom}_\textbf{C}(C,A) &\longrightarrow \mbox{Hom}_\textbf{C}(C,B) \\
g&\longmapsto f\circ g
\end{align}
is called the Representable Functor.
I am looking for some intuition of why is that so, past the fact that it is an obvious functor from C to Sets.
|
First off, a functor $F : \mathscr C → \mathrm{Set}$ is called representable (by $C$) if it is isomorphic (not necessarily equal) to $\mathrm{Hom}(C, -)$ for some object $C$ of $\mathscr C$. As for the term: an abstract functor $F$ is represented by the very concrete action of $\mathrm{Hom}(C, -)$.
Take for example the functor $L : \mathrm{Top} → \mathrm{Set}$ sending a topological space $X$ to the set of all loops in $X$. A loop is just a continuous function $S^1 → X$, so we have that $LX ≅ \mathrm{Hom}(S^1, X)$.
In fact, it kind of makes sense to say that a loop is an $S^1$-shaped element of $X$, right? This really generalizes the classical elements of $X$, since those correspond to functions $* → X$, where $*$ is the one-point space.
Note in particular that every continuous function $f : X → Y$ extends to these generalized elements: if $l : S^1 → X$ is a loop, then its image in $Y$ is exactly $f∘l = \mathrm{Hom}(S^1, f)(l)$.
Now compare: we have no idea how an arbitrary functor $F: \mathrm{Top} → \mathrm{Set}$ might act. But if $F$ is representable and $F ≅ \mathrm{Hom}(C, -)$, then we know that $FX$ are just $C$-shaped elements of $X$, and that $Ff$ simply maps the $C$-elements of $X$ to $C$-elements of $Y$ in the most obvious way. So you represented something possibly completely abstract with a very simple idea.
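The "most obvious way" in which $\mathrm{Hom}(C,f)$ acts, namely postcomposition, can be made completely concrete for finite sets. A small Python sketch checking functoriality of $\mathrm{Hom}(C,-)$ (the particular sets and maps are arbitrary choices):

```python
from itertools import product

def hom(C, X):
    """All functions C -> X between finite sets, encoded as dicts."""
    return [dict(zip(C, values)) for values in product(X, repeat=len(C))]

def hom_map(C, f):
    """Hom(C, f): postcomposition g |-> f o g."""
    return lambda g: {c: f[g[c]] for c in C}

C = ["c1", "c2"]
X, Y = [0, 1], ["a", "b", "b2"]
f = {0: "a", 1: "b"}                      # f : X -> Y
h = {"a": True, "b": False, "b2": True}   # h : Y -> Z
hf = {x: h[f[x]] for x in X}              # the composite h o f

# functoriality: Hom(C, h o f) = Hom(C, h) o Hom(C, f) on every C-shaped element
for g in hom(C, X):
    assert hom_map(C, hf)(g) == hom_map(C, h)(hom_map(C, f)(g))
```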
For contravariant functors, a somewhat higher-level perspective would be that the Yoneda embedding $C ↦ \mathrm{Hom}(-, C)$ embeds $\mathscr C$ into the functor category $[\mathscr C^\mathrm{op}, \mathrm{Set}]$. Looking at it that way, you could say that a representable contravariant functor $F$ literally is (isomorphic to) the element of $\mathscr C$ it's represented by.
Disclaimer: this is why the term seems very sensible and apt to me; I don't know why it was chosen by the person who named it. For what it's worth, I can't find anything in Mac Lane's book, which sometimes has these kinds of historical comments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Projections: Orthogonality Given a unital C*-algebra $1\in\mathcal{A}$.
Consider projections:
$$P^2=P=P^*\quad P'^2=P'=P'^*$$
Define orthogonality by:
$$P\perp P':\iff\sigma(\Sigma P)\leq1\quad(\Sigma P:=P+P')$$
Then equivalently:
$$P\perp P'\iff 0=PP'=P'P\iff(\Sigma P)^2=\Sigma P=(\Sigma P)^*$$
How can I check this?
(Operator algebraic proof?)
|
I'm assuming that by $\sigma(\Sigma P)\leq1$ you mean that $\|\Sigma P\|\leq1$.
*Suppose that $\|P+Q\|\leq1$, so $0\leq P+Q\leq 1$. Then $(P+Q)^2\leq P+Q$ (just conjugate with $(P+Q)^{1/2}$). That is,
$$
P+Q+QP+PQ\leq P+Q,
$$
or $QP+PQ\leq0$. If we conjugate this inequality with $Q$, we get $QPQ+QPQ\leq0$. But $QPQ\geq0$, so $QPQ=0$. Then
$$
0=QPQ=(PQ)^*PQ,
$$
and then $PQ=0$. By taking adjoints, $QP=0$.
*If $PQ=0$, it follows by taking adjoints that $QP=0$. And
$$
(P+Q)^2=P^2+Q^2+QP+PQ=P+Q.
$$
*If $(P+Q)^2=P+Q$, then by the C$^*$-identity
$$
\|P+Q\|^2=\|(P+Q)^2\|=\|P+Q\|,
$$
so either $\|P+Q\|=0$ (which by positivity would force $P=Q=0$) or $\|P+Q\|=1$.
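A minimal matrix sanity check of these equivalences, with the orthogonal projections $P=\mathrm{diag}(1,0)$ and $Q=\mathrm{diag}(0,1)$ (so $PQ=QP=0$ and $P+Q=1$):

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 0], [0, 0]]
Q = [[0, 0], [0, 1]]
S = [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

zero = [[0, 0], [0, 0]]
assert matmul(P, Q) == zero and matmul(Q, P) == zero   # P and Q are orthogonal
assert matmul(S, S) == S                               # P + Q is again a projection
assert S == [[1, 0], [0, 1]]                           # here P + Q = 1, of norm 1
```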
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
$U_n=\int_{n^2+n+1}^{n^2+1}\frac{\tan^{-1}x}{\sqrt{x}}\,dx$. Find $\lim_{n\to \infty} U_n$ without evaluating the integral.
I don't know how to start
|
We have (see here) $$\frac{\pi}{2}-\frac{1}{x}\leq\arctan\left(x\right)\leq\frac{\pi}{2}-\frac{1}{x}+\frac{1}{3x^{3}}
$$ then $$\frac{\pi}{2}\int_{n^{2}+n+1}^{n^{2}+1}\frac{1}{\sqrt{x}}dx-\int_{n^{2}+n+1}^{n^{2}+1}\frac{1}{x\sqrt{x}}dx\leq\int_{n^{2}+n+1}^{n^{2}+1}\frac{\arctan\left(x\right)}{\sqrt{x}}dx\leq\frac{\pi}{2}\int_{n^{2}+n+1}^{n^{2}+1}\frac{1}{\sqrt{x}}dx-\int_{n^{2}+n+1}^{n^{2}+1}\frac{1}{x\sqrt{x}}dx+\int_{n^{2}+n+1}^{n^{2}+1}\frac{1}{3x^{3}\sqrt{x}}dx
$$ and obviously $$\int_{n^{2}+n+1}^{n^{2}+1}\frac{dx}{\sqrt{x}}=\left.2\sqrt{x}\right|_{n^{2}+n+1}^{n^{2}+1}\underset{n\rightarrow\infty}{\longrightarrow}-1
$$ while the other two integrals go to $0$, so the result is $-\frac{\pi}{2}$.
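Note that the limits as written run from $n^2+n+1$ down to $n^2+1$, which is why the value is negative. A midpoint-rule check that $U_n\to-\pi/2$:

```python
import math

def U(n, steps=4000):
    """Midpoint-rule approximation of the integral; the lower limit exceeds the upper."""
    a, b = n * n + n + 1, n * n + 1
    h = (b - a) / steps                  # h is negative, matching the orientation
    return h * sum(math.atan(a + (i + 0.5) * h) / math.sqrt(a + (i + 0.5) * h)
                   for i in range(steps))

# U(n) ~ -pi/2 * (1 - O(1/n)), so the convergence is slow but visible
assert abs(U(500) + math.pi / 2) < 1e-2
assert abs(U(5000) + math.pi / 2) < 1e-3
```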
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Prove that there are infinitely many composite numbers of the form $2^{2^n}+3$.
There are infinitely many composite numbers of the form $2^{2^n}+3$.
[Hint: Use the fact that $2^{2n}=3k+1$ for some $k$ to establish that $7\mid2^{2^{2n+1}}+3$.]
If $p$ is a prime divisor of $2^{2^n}+3$ then $2^{2^n}+3\equiv0\pmod{p}$. But I don't think this is useful. I don't see how the fact that $2^{2n}=3k+1$ helps to show that $7\mid2^{2^{2n+1}}+3$.
Any hints on how to start solving this problem?
|
Let $a_n=2^{2^n}+3$. Then $a_{n+1}=(a_n-3)^2+3 = a_n^2-6a_n+12$. Since:
$$ p(x)=x^2-6x+12 \equiv (x-1)(x+2) \pmod{7} $$
maps $0$ to $5$ and $5$ to $0$, we have that $7\mid a_n$ iff $n$ is odd, since $a_1=7\equiv 0\pmod{7}$.
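The parity pattern (and hence the infinitude of composites, since $2^{2^n}+3>7$ for $n\ge2$) is easy to confirm with modular exponentiation:

```python
# 2^(2^n) + 3 is divisible by 7 exactly when n is odd
for n in range(1, 14):
    assert ((pow(2, 2**n, 7) + 3) % 7 == 0) == (n % 2 == 1)

# for odd n >= 3 the number exceeds 7, hence it is composite
assert all(2**(2**n) + 3 > 7 and (2**(2**n) + 3) % 7 == 0 for n in (3, 5, 7))
```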
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
p-simplex spanned by elements in the boundary of the unit ball of Thurston norm is in the boundary of this unit ball. In completing my thesis I have reached a momentary impasse.
I am trying to solve an exercise given in the book "Foliations II" by Candel and Conlon. In particular, Exercise 10.4.1, and I can't seem to get through it.
Here is what I have to solve and can't seem to manage:
Let $V=\{v_1,\dots,v_p\}\subset\partial B_\xi$ be an affinely independent set, where $\xi$ is the Thurston norm, $B_\xi=\{w\in H_2(M,\partial M;\mathbb{R})\mid\xi(w)\leq1\}$ is its unit ball, and $\partial B_\xi=\{w\mid\xi(w)=1\}$ is the boundary of this ball. Prove that the affine $p$-simplex $\Delta_p$, spanned by $V$, is a subset of $B_\xi$. Generally, $\Delta_p$ is not a subset of $\partial B_\xi$, but if an interior point $\lambda$ of $\Delta_p$ has norm $\xi(\lambda)=1$, prove that $\Delta_p\subset\partial B_\xi$.
I will be using this for the case where $M$ is the complement in $3$-space of a link or knot, but I think this should work in general for $3$-manifolds. Any advice, or elegant solutions??
Thanks in advance,
Paul
|
Using subadditivity of $\xi$, for $x=\sum_i \lambda_i v_i$ in $\Delta_p$ (with $\lambda_i\ge0$ and $\sum_i\lambda_i=1$) we get $\xi(x) = \xi\left(\sum_i \lambda_i v_i\right) \le \sum_i \lambda_i\, \xi(v_i) = \sum_i \lambda_i = 1$.
For the second part, note that if we restrict $\xi$ to the interior, $\xi|_{\operatorname{int}\Delta_p}\colon \operatorname{int}\Delta_p \to \mathbb R$, the set $\xi^{-1}(1)$ is closed in the interior and non-empty. It is also open (use that a point of the interior can only be written with all coefficients $\lambda_i$ non-zero, together with the first part). Since $\operatorname{int}\Delta_p$ is connected, $\xi\equiv1$ on the interior, and by continuity $\xi\equiv1$ on all of $\Delta_p$, i.e. $\Delta_p\subset\partial B_\xi$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the area of the shaded region in the figure Find the area of the shaded region in the figure
What steps should I do? I tried following the steps listed here https://answers.yahoo.com/question/index?qid=20100305030526AAef8nZ
But I got 150.7 which is wrong.
|
Hint: Break the shaded region up into two shapes. One is a portion of the circle (you know the portion because of the given angle), and the other is a triangle (which is equilateral). Find the area of each shape and then add them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Sunflower Lemma - Allow Duplicates? The sunflower lemma states that if we have a family of sets $S_1, S_2, \cdots, S_m$ such that $|S_i| \leq l$ for each $i$, then $m > (p-1)^{l+1}l!$ implies that the family contains a sunflower with $p$ petals. My question is, do the $S_i$'s need to be distinct sets or can there be duplicates (i.e $S_i = S_j$ for some $i \neq j$). Would both of these sets be a part of the sunflower in such a case?
|
The usual statement of the lemma requires only that $m>(p-1)^\ell\ell!$, and in that case the sets must be distinct. Your version allows duplicates. To see this, suppose that $m>(p-1)^{\ell+1}\ell!$, and we have sets $S_1,\ldots,S_m$ such that $|S_k|\le\ell$ for $k=1,\ldots,m$. If there are $p$ or more duplicates of some set $S$, any $p$ of those duplicates form a sunflower with kernel $S$, each of the $p$ petals also being $S$. Otherwise, there are at most $p-1$ copies of each distinct set in the family, and $$m>(p-1)\cdot(p-1)^\ell\ell!\;,$$ so there must be more than $(p-1)^\ell\ell!$ distinct members of the family. We can apply the usual form of the sunflower lemma to these distinct members to get a sunflower with $p$ petals.
In other words, requiring that $m>(p-1)^{\ell+1}\ell!$ allows you to apply the lemma to multisets rather than just to sets, but the resulting sunflower may also be a multiset of petals rather than a set of petals.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Evaluating the limit: $\lim_{x\rightarrow -1^+}\sqrt[3]{x+1}\ln(x+1)$ I need to solve this question: $$\lim_{x\rightarrow -1^+}\sqrt[3]{x+1}\ln(x+1)$$
I tried the graphical method and observed that the graph was approaching $0$ as $x$ approached $-1$ but I need to know if there's a way to calculate this.
|
$$\lim\limits_{x\to -1^+}\sqrt[3]{x+1}\ln(x+1)$$
Let $h=x+1$. Since $x\to -1^+$, then $h\to 0^+$. So now
$$\lim\limits_{h\to 0^+}\sqrt[3]{h}\ln h$$
$$=\lim\limits_{h\to 0^+}h^{\frac13}\ln h$$
$$=\lim\limits_{h\to 0^+}\exp\left(\ln h^{\frac13}\right)\ln h$$
$$=\lim\limits_{h\to 0^+}\exp\left(\frac13\ln h\right)\ln h$$
$$=3\lim\limits_{h\to 0^+}\left(\frac13\ln h\right)\exp\left(\frac13\ln h\right)$$
Let $k=\frac13\ln h$. Since $h\to 0^+$, then $k\to -\infty$. So now
$$3\lim\limits_{k\to -\infty}ke^k$$
$$=3\lim\limits_{k\to -\infty}\frac{k}{e^{-k}}$$
Let $m=-k$. Since $k\to -\infty$, then $m\to\infty$. So now
$$-3\lim\limits_{m\to \infty}\frac{m}{e^m}=0$$
At this point it should be clear that this limit is zero.
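Numerically, $h^{1/3}\ln h\to 0$ as $h\to0^+$, although the convergence is slow (the decay is only like $h^{1/3}|\ln h|$):

```python
import math

# sample h = 10^-k for increasing k and watch h^(1/3) * ln(h) shrink toward 0
values = [(10.0 ** -k) ** (1 / 3) * math.log(10.0 ** -k) for k in range(1, 16)]

assert all(abs(u) > abs(v) for u, v in zip(values, values[1:]))  # |values| decreasing
assert abs(values[-1]) < 1e-3                                    # close to 0 by h = 1e-15
```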
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Find the $\int \frac{(1-y^2)}{(1+y^2)}dy$ $\int \frac{(1-y^2)}{(1+y^2)}dy$. First I tried to divide; I got $1-\frac{2y^2}{1+y^2}$, and I still can't integrate it.
|
Hint:
$$ \int \frac{1-y^2}{1+y^2}dy = \int \frac{1}{1+y^2}dy-\int \frac{y^2}{1+y^2}dy$$
Note (using long division or otherwise): $$\int \frac{y^2}{1+y^2}dy = \int dy -\int \frac{1}{1+y^2}dy$$
Therefore:$$ \int \frac{1-y^2}{1+y^2}dy = 2\int\frac{1}{1+y^2}dy -\int dy$$
The solution should be straightforward from here.
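Carrying the hint out gives $\int\frac{1-y^2}{1+y^2}\,dy = 2\arctan y - y + C$, which can be verified by numerical differentiation:

```python
import math

F = lambda y: 2 * math.atan(y) - y                 # candidate antiderivative
integrand = lambda y: (1 - y * y) / (1 + y * y)

h = 1e-6
for y in (-3.0, -0.5, 0.0, 0.7, 2.5):
    central_diff = (F(y + h) - F(y - h)) / (2 * h)  # F'(y) numerically
    assert abs(central_diff - integrand(y)) < 1e-6
```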
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1371981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
The class equation of the octahedral group I know that the class equation of the octahedral group is this:
$$1 + 8 + 6 + 6 + 3$$
I think the $8$ stands for the $8$ vertices, the $6$ could be $6$ faces and $6$ pairs of edges. Then what is the $3$ for?
|
Following the Wikipedia list of conjugacy classes given by Zev Chonoles, we obtain:
24 = 1 (identity from the whole octahedron) + 8 (rotation by 120° about a face, an axis with 3-fold symmetry) + 6 (rotation by 180° about an edge, an axis with 2-fold symmetry) + 6 (rotation by 90° about a vertex, an axis with 4-fold symmetry) + 3 (rotation by 180° about a vertex, an axis with 4-fold symmetry)
This is the exact relation between the conjugacy classes and the rotations about the octahedron's axes of symmetry.
For details see M. Artin, Algebra, 2nd edition, Section 7.4, 'The Class Equation of the Icosahedral Group'.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Find $\lim\limits_{x\to -2}{f(x)}$ Let $f:\mathbb{R}\to{\mathbb{R}}$ be an odd function such that: $$\lim_{x\to 2}{(f(x)-3\cdot x+x^2)}=5$$
Find $\lim\limits_{x\to -2}{f(x)}$, if it exists (so also prove its existence).
|
Since $\lim_{x\to 2}(3x-x^2)=6-4=2$ exists, we have
$$\lim_{x\to 2}f(x)=\lim_{x\to 2}\left((f(x)-3x+x^2)+(3x-x^2)\right)=5+2=7.$$
We know $f(-x)=-f(x)$, as $f$ is an odd function, so
$$\lim_{x\to -2}f(x)=\lim_{x\to 2}f(-x)=\lim_{x\to 2}(-f(x))=-7.$$
The limit exists because the domain of an odd function is a symmetric interval: if $|x-2|<\delta$ describes a neighbourhood of $x=2$, then $|-x-2|<\delta$, i.e. $|x+2|<\delta$, describes a neighbourhood of $x=-2$.
To complete the proof:
$$\forall \varepsilon >0\ \exists \delta >0 :|x-2| <\delta \Rightarrow |f(x)-7|<\varepsilon,$$
so
$$\forall \varepsilon >0\ \exists \delta >0 :|x-(-2)| <\delta \Rightarrow |f(-x)-(-7)|<\varepsilon,$$
and since $f(-x)=-f(x)$ this becomes
$$\forall \varepsilon >0\ \exists \delta >0 :|x+2| <\delta \Rightarrow |f(x)+7|<\varepsilon.$$
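A concrete sanity check with one hypothetical odd function meeting the hypothesis, $f(x)=\tfrac{7x}{2}$, for which $\lim_{x\to2}(f(x)-3x+x^2)=7-6+4=5$:

```python
f = lambda x: 7 * x / 2             # an odd function with limit 7 at x = 2
assert f(-1.3) == -f(1.3)           # oddness at a sample point
assert f(2) - 3 * 2 + 2**2 == 5     # matches the given limit condition at x = 2
assert f(-2) == -7                  # the limit asked for at x = -2
```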
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How to find the Summation S Given function $f(x)=\frac{9x}{9x+3}$.
Find S:
$$
S=f\left(\frac{1}{2010}\right)+f\left(\frac{2}{2010}\right)+f\left(\frac{3}{2010}\right)+\ldots+f\left(\frac{2009}{2010}\right)
$$
|
We can write the sum in terms of the digamma function. I don't know if this is what you want, but it is certainly a closed form. We have $$f\left(x\right)=1-\frac{1}{3x+1}
$$ then $$f\left(\frac{k}{2010}\right)=1-\frac{2010}{3k+2010}
$$ then we have $$\sum_{k=1}^{2009}\left(1-\frac{2010}{3k+2010}\right)=2009-\frac{2010}{3}\sum_{k=1}^{2009}\frac{1}{k+2010/3}=
$$ $$=2009-\frac{2010}{3}\left(\psi^{(0)}\left(\frac{2010}{3}+2009+1\right)-\psi^{(0)}\left(\frac{2010}{3}+1\right)\right)=1080.80766...
$$
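The closed form can be compared against direct summation; both the raw sum of $f(k/2010)$ and the rewritten sum agree with the quoted value $\approx1080.80766$:

```python
f = lambda x: 9 * x / (9 * x + 3)

S_direct = sum(f(k / 2010) for k in range(1, 2010))
S_rewritten = sum(1 - 2010 / (3 * k + 2010) for k in range(1, 2010))

assert abs(S_direct - S_rewritten) < 1e-9    # the algebraic rewrite is exact
assert abs(S_direct - 1080.80766) < 1e-3     # matches the digamma evaluation
```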
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Initial Value Problem $dy/dx = (y+1)^{1/3}$ Consider the differential equation $$\frac{dy}{dx} = (y+1)^{1/3}$$
(a) State the region of the $xy$-plane in which the conditions of the existence and uniqueness theorem are satisfied (using any appropriate theorem).
(b) Let $S$ be the region of the $xy$-plane where the conditions of the existence and uniqueness theorem are NOT satisfied. State whether the given equation with the initial condition $y(x_0)=y_0$, where $(x_0, y_0)$ is an element of $S$, has a solution.
(c) solve this equation
(d) Using the result of (c), find whether the given equation with the initial condition $y(x_0) = y_0$, where $(x_0, y_0)$ is an element of $S$, has a unique solution.
Struggling massively with this question. Currently studying initial value problems but haven't been able to come across any text or vid explaining the technique to approach this. Would very much appreciate any help.
I can only assume for part (a) that the region of the $xy$-plane is $-\infty < x < \infty$ and $-1 < y < \infty$.
For (b) I don't know the technical method of approaching this question even after reading through a differential equations textbook, because it was explained in riddles.
For (c) I arrived at $y = \left(\frac{2}{3}(x+c)\right)^{3/2}-1$. Don't know if evaluating it further is necessary or not?
For (d) again I used a book which didn't explain the steps to doing this
|
You probably have to use Picard–Lindelöf: the right-hand side $(y+1)^{1/3}$ is continuous everywhere, but it fails to be Lipschitz in $y$ in any neighbourhood of $y = -1$, so existence and uniqueness are only guaranteed away from the line $y=-1$.
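To see the failure of uniqueness concretely: both the constant solution $y\equiv-1$ and the branch $y(x)=\left(\tfrac{2}{3}x\right)^{3/2}-1$ pass through $(0,-1)$. A small numerical sketch of mine checking the second one:

```python
# Two distinct solutions through (0, -1): y ≡ -1 and y = ((2/3)x)^(3/2) - 1.
# Check that the second satisfies y' = (y+1)^(1/3) at sample points.
def y(x):  return ((2/3) * x) ** 1.5 - 1
def dy(x): return ((2/3) * x) ** 0.5   # exact derivative of y above
for x in (0.5, 1.0, 2.0):
    assert abs(dy(x) - (y(x) + 1) ** (1/3)) < 1e-12
# The constant solution works trivially: y' = 0 = (-1 + 1)^(1/3).
```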
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Constructing DVR's from arbitrary UFD's Is the following statement true?
Let $A$ be a UFD and $p\in A$ prime; then $A_{(p)}$ is a discrete valuation ring.
I think yes: For every nonzero element $x$ of $Q(A_{(p)})=Q(A)$, there is a unique $k\in\mathbb{Z}$ such that I can write $x$ as $p^k\cdot\frac{a}{b}$ for some $a,b\in A$ with $p\nmid a,b$. This should give me the well-defined valuation $\nu\colon x\mapsto k$ from $Q(A_{(p)})$ to $\mathbb{Z}$ such that $A_{(p)}$ is the valuation ring associated to $\nu$. Then $\nu$ is surjective, because for every $k\in\mathbb{Z}$ we have $p^k\mapsto k$.
But now I am confused since this would imply that $A_{(p)}$ is Noetherian without further assumptions, which for me seems to come out of nowhere. So is the above argument correct? And if so, is there a more direct way to see that $A_{(p)}$ should always be Noetherian?
|
Let me complement the nice and abstract existing answer by a concrete one:
Yes, the argument is correct. To see directly that the ring $A_{(p)}$ is Noetherian, use an argument like the one you may know from Euclidean domains.
Let $I$ be a non-zero ideal and let $a \in I$ be non-zero with minimal valuation, say $k$; then show $I = (a)$: for $b \in I$, the element $a^{-1}b$ has non-negative valuation and thus lies in the ring.
Thus this is a PID, and hence Noetherian.
Or, show further that $(a)= (p^k)$ so all non-zero ideals are given by $(p^k)$ with $k$ a natural number and there cannot be an infinite ascending chain as there is no infinite descending chain of natural numbers.
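For a concrete sanity check of the valuation $\nu$ (my own sketch with $A=\mathbb{Z}$ and $p=5$; the helper name `v` is mine):

```python
from fractions import Fraction

# p-adic valuation v on Q = Frac(Z_(5)) for a nonzero rational x = p^k * a/b.
def v(x, p=5):
    x = Fraction(x)
    num, den, k = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return k

a, b = Fraction(50), Fraction(3, 25)
assert v(a) == 2 and v(b) == -2          # 50 = 2 * 5^2,  3/25 = 3 * 5^(-2)
assert v(a * b) == v(a) + v(b)           # v is multiplicative-to-additive
assert v(a + b) >= min(v(a), v(b))       # ultrametric inequality
```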
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Computing $\max_{1/2 \leq x \leq 2} ( \min_{1/3 \leq y \leq 1} f(x,y) )$ where $f(x,y) = x(y \log y - y) - y \log x$.
Let $f(x,y)=x(y\ln y-y)-y\ln x.$ Find $\max_{1/2\le x\le 2}(\min_{1/3\le y\le1}f(x,y))$.
This problem is quite easy and it is from Spivak; it is the part $c)$ of the general exercise 2-41 page 43 Calculus on manifolds; here it is:
Let $f:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ be twice continuously differentiable. For each $x\in \mathbb{R}$ define $g_x(y)=f(x,y)$. Suppose that for each $x$ there is a unique $y$ with $g'_x(y)=0$; let $c(x)$ be this $y$.
$a)$: If $D_{2,2}f(x,y)\ne0$ for all $(x,y)$ show that $c$ is differentiable and $c'(x)=-\frac{D_{2,1}f(x,c(x))}{D_{2,2}f(x,c(x))}$
$b)$: Show that if $c'(x)=0$, then for some $y$ we have $D_{2,1}f(x,y)=0$, $D_2f(x,y)=0$.
I cannot visualize how part c) relates to the previous ones. Can you give me a hint?
|
I have given the complete solution below; the step where we compute $c'(x)$ is where the result of part (a) is used. In short, we use part (a) to compute the derivative of the critical point of $g_x(y)$ for the given function $f(x,y)$. This is then used in finding the critical points when maximising $\min_y f(x,y)$ with respect to $x$.
Let $f : (0,\infty) \times (0,\infty) \to \mathbb{R}$ be the function given by $$f(x,y) = x(y \log y - y) - y \log x.$$
Clearly $f$ is $C^\infty$. For each $x \in (0,\infty)$ define $g_x : (0,\infty) \to \mathbb{R}$ by
$$g_x(y) = f(x,y).$$
To find points where $g_x$ is minimised, we first find the critical points:
$$
{g_x}'(y) = x\log y - \log x,\\
\therefore {g_x}'(y) = 0 \iff y = x^{1/x}.
$$
So, for each $x \in (0,\infty)$ there is a unique $y \in (0,\infty)$ such that ${g_x}'(y)=0$. So, let $c(x) = x^{1/x}$ be this critical point. To check the nature of this critical point, we evaluate ${g_x}''(c(x))$ and check its sign.
$$
{g_x}''(y) = \frac{x}{y} \implies {g_x}''(c(x)) = \frac{x}{x^{1/x}}.
$$
Hence, ${g_x}''(c(x)) > 0$ for all $x \in (0,\infty)$, so $g_x(y)$ has a global minimum at $y=c(x)$. Therefore, we would like to conclude that
$$
\min_{1/3 \leq y \leq 1} \{ f(x,y) \} = f(x,c(x)) = -x \cdot x^{1/x},
$$
but this would be a bit hasty, for it is not necessary that $c(x) \in [1/3,1]$ for all $x \in (0,\infty)$. So, let $\alpha \in (0,\infty)$ be the unique element such that $c(\alpha) = 1/3$. Define $h : (0,\infty) \to \mathbb{R}$ by
$$
h(x) =
\begin{cases}
f(x,1/3), & x \in (0,\alpha);\\
f(x,c(x)), & x \in [\alpha,1];\\
f(x,1), & x \in (1,\infty)
\end{cases}
=
\begin{cases}
-(x(1+\log 3)+\log x)/3, & x \in (0,\alpha);\\
-x \cdot x^{1/x}, & x \in [\alpha,1];\\
-x-\log x, & x \in (1,\infty).
\end{cases}
$$
Then,
$$
\min_{ 1/3 \leq y \leq 1 } \{ f(x,y) \} = h(x).
$$
Now, note that the hypothesis of part (a) of the problem is satisfied, because $D_{2,2} f(x,y) = {g_x}''(y) \neq 0$ for all $x,y \in (0,\infty)$. We compute
$$
D_{2,1} f(x,y) = \log y - \frac{1}{x},
$$
and
$$
\begin{align}
c'(x) = -\frac{D_{2,1}f(x,c(x))}{D_{2,2}f(x,c(x))}
= \frac{x^{1/x}(1 - \log x)}{x^2}.
\end{align}
$$
To find points where $h$ is maximised, we find its critical points. In the interval $(0,\alpha)$,
$$
h'(x) = -\frac{\left(1+\log 3 + \frac{1}{x}\right)}{3}
$$
which is negative for all $x \in (0,\alpha)$. Hence, $h(x)$ is decreasing on this interval. In the interval $(1,\infty)$,
$$
h'(x) = -1-\frac{1}{x},
$$
which is negative for all $x \in (1,\infty)$. Hence, $h(x)$ is decreasing on this interval as well. In the interval $[\alpha,1]$,
$$
h'(x) = -c(x) - x c'(x) = -c(x)\left( 1 + \frac{1-\log x}{x} \right),
$$
which is negative for all $x \in [\alpha,1]$. Hence, $h$ is decreasing on this interval as well. One can check that $h(x)$ is continuous at $x = \alpha$ and $x = 1$, so $h$ is continuous everywhere, and thus it is decreasing on the entire domain.
Lastly, $c(1/2) = 1/4 < 1/3$, so $1/2 < \alpha$. Therefore, we have
$$
\begin{align}
\max_{1/2 \leq x \leq 2} \left\{ \min_{1/3 \leq y \leq 1} \{ f(x,y)\} \right\} &= \max_{1/2 \leq x \leq 2} \{ h(x) \}\\
&= h(1/2)\\
&= f(1/2,1/3)\\
&= -\frac{1+\log \frac{3}{4}}{6}.
\end{align}
$$
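A brute-force numerical confirmation of the final value (my own sketch; the grid sizes are arbitrary). Numerically the max-min is about $-0.11872$:

```python
import math

# Grid search over [1/2, 2] x [1/3, 1] to confirm the max-min value.
def f(x, y):
    return x * (y * math.log(y) - y) - y * math.log(x)

xs = [0.5 + i * 1.5 / 400 for i in range(401)]
ys = [1/3 + j * (2/3) / 400 for j in range(401)]
best = max(min(f(x, y) for y in ys) for x in xs)
print(best)  # ≈ -0.11872
```

The grid contains the extremal corner $(x,y)=(1/2,1/3)$, so the search reproduces the max-min essentially exactly.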
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Trying to understand Bienaymé formula In Bienaymé formula, it states that $var(\bar X) = \large\frac{\sigma^2}{n}$.
However, when I was going through the proof here, it says the variances of $X_1,X_2,X_3,\ldots,X_n$ are the same (assuming they are all independent). Can anyone explain the reason behind this?
I am confused about how different random variables can have the same variance.
|
Taking a random sample $X_1, X_2, \dots, X_n$ from a population with mean $\mu$ and
variance $\sigma^2$ means that the $X_i$ are independent and that
$E(X_i) = \mu$ and $V(X_i) = \sigma^2.$ All of these random variables
have the same variance because they represent observations from
the same population.
Consequently, defining
$$\bar X = \frac{1}{n}\sum_{i=1}^n X_i = \frac{X_1 + X_2 + \cdots + X_n}{n},$$
one has $E(\bar X) = \mu$ and $V(\bar X) = \sigma^2/n.$
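A quick simulation illustrating $V(\bar X)=\sigma^2/n$ for uniform samples (my own sketch; the sample sizes are arbitrary, and $\sigma^2 = 1/12$ for $U(0,1)$):

```python
import random

# Monte Carlo: variance of the mean of n iid U(0,1) draws is (1/12)/n.
random.seed(0)
n, trials = 10, 100_000
means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
m = sum(means) / trials
var = sum((x - m) ** 2 for x in means) / trials
print(var, (1/12) / n)  # both ≈ 0.00833
```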
It seems you are trying to understand the proof for $V(\bar X)$, which uses the assumption of independence.
This proof is given in your link. (If there is a step in that
you don't understand, please leave a Comment.)
I see that two other Answers have been posted while I was typing
this. The Answer by @ConradoCosta shows two RVs with the same variance (but not because of random sampling); I up-voted it and left a Comment there with yet another one. I hope one of the three Answers is helpful.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
4th Isomorphism Theorem applied to normalizers I'm reading a proof showing that a proper subgroup $Q$ of a $p$-group $P$ is properly contained in its normalizer.
It applies the 4th Isomorphism Theorem to assert $\frac{Q}{Z(P)} < N_{\frac{P}{Z(P)}}(\frac{Q}{Z(P)})$ implies $Q < N_P(Q)$. How is this?
|
Hint: $N_{P/Z(P)}(Q/Z(P))=N_P(Q)/Z(P)$.
(And proper inclusions remain proper under lattice correspondence.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculating in closed form $\int_0^1 \log(x)\left(\frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}}\right)^2 \,dx$ What real tools excepting the ones provided here Closed-form of $\int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx $ would you like to recommend? I'm not against them, they might be great, but it seems they didn't lead anywhere for the version $\displaystyle \int_0^1 \frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}} \,dx$. Perhaps we can find an approach that covers both cases, also
$$\int_0^1 \log(x) \left(\frac{\operatorname{Li}_2\left( x \right)}{\sqrt{1-x^2}}\right)^2 \,dx$$
that I would like to calculate.
Might we possibly expect a nice closed form as in the previous case? What do you propose?
EDIT: Thanks David, I had to modify it a bit to fix the convergence issue. Also, for the previous question there is already a 300 points bounty offered for a full solution with all steps clearly explained.
|
We have:
$$\sum_{k=1}^{n-1}\frac{1}{k^2(n-k)^2}=\frac{1}{n^2}\sum_{k=1}^{n-1}\left(\frac{1}{k}+\frac{1}{n-k}\right)^2=\frac{2H_{n-1}^{(2)}}{n^2}+\frac{4H_{n-1}}{n^3}$$
so:
$$ \text{Li}_2(x)^2 = \sum_{n\geq 2}\left(\frac{2H_{n-1}^{(2)}}{n^2}+\frac{4H_{n-1}}{n^3}\right) x^n\tag{1}$$
and since:
$$ \int_{0}^{1}\frac{x^n \log x}{1-x^2}\,dx = -\sum_{m\geq 0}\frac{1}{(n+2m+1)^2}\tag{2}$$
we have:
$$ \int_{0}^{1}\log(x)\left(\frac{\text{Li}_2(x)}{\sqrt{1-x^2}}\right)^2\,dx = -\sum_{n\geq 2}\left(\frac{2H_{n-1}^{(2)}}{n^2}+\frac{4H_{n-1}}{n^3}\right)\sum_{m\geq 0}\frac{1}{(n+2m+1)^2}\tag{3}$$
and the problem boils down to the computation of a complicated Euler sum.
In order to perform partial summation, it is useful to recall that:
$$ \sum_{n=1}^{N}\frac{2H_{n-1}^{(2)}}{n^2}=\left(H_{N}^{(2)}\right)^2-H_{N}^{(4)},$$
$$\sum_{n=1}^{N}\frac{H_{n-1}}{n^3} = H_N^{(3)}H_{N-1}-\sum_{n=1}^{N-1}\frac{H_{n}^{(3)}}{n}.\tag{4}$$
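As a numerical sanity check of the series $(1)$ at the sample point $x=1/2$ (my own sketch):

```python
# Verify Li2(x)^2 = sum_{n>=2} (2 H2[n-1]/n^2 + 4 H[n-1]/n^3) x^n at x = 0.5.
x, N = 0.5, 200
li2 = sum(x ** n / n ** 2 for n in range(1, 2000))   # Li2(x) by its series
H  = [0.0] * (N + 1)   # harmonic numbers H_n
H2 = [0.0] * (N + 1)   # generalized harmonic numbers H_n^(2)
for n in range(1, N + 1):
    H[n]  = H[n - 1] + 1 / n
    H2[n] = H2[n - 1] + 1 / n ** 2
series = sum((2 * H2[n - 1] / n ** 2 + 4 * H[n - 1] / n ** 3) * x ** n
             for n in range(2, N + 1))
assert abs(series - li2 ** 2) < 1e-12
```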
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $T$ be the set of full binary trees. In what way $T^7 \cong T$? I was reading the slides of a talk by Tom Leinster.
I have trouble understanding the last line of page 17 (pages 1-15 are irrelevant and can be skipped). Could someone please explain it to me?
If I translate the images correctly, $T$ is defined to be the set of all full binary trees. A full binary tree is either:
*
*a single vertex, $\bullet$
*a graph formed by taking two full binary trees, adding a vertex, and adding an edge directed from the new vertex to the root of each binary tree.
Then $T \cong \{ \bullet \} \sqcup (T \times T)$. Thus $$|T| = 1 + |T|^2.$$ (Note: I think the last equation also holds if we defined $T$ to be the set of all (not necessarily full, and possibly empty) binary trees since then $T \cong \{\emptyset\} \sqcup (T \times T)$.)
Forgetting what $|T|$ stands for, we could solve the above equation for $|T|$ and find that $|T|=e^{\pm i\pi/3}$. Hence $|T|^7 = |T|$. This suggests that $T^7 \cong T$. Tom Leinster writes "This really is true!". Why is that?
|
The paper Seven Trees in One exhibits a "very explicit bijection" $T^7\cong T$. It is perhaps a bit cumbersome because it requires separating into five cases based on how the seven trees look in the first four levels of depth. A proof is present too. Disclaimer: I haven't read it.
Another paper Objects of Categories as Complex Numbers discusses the relationship between algebra (in particular, manipulating elements of polynomial semirings modulo relations) and natural isomorphisms. Indeed, if we consider an initial set of relations satisfied by an object to be a class of prototypical isomorphisms, then we can "do algebra" utilizing the relations and translate that into a sequence of isomorphisms. For example the authors write that one can do
$$\begin{array}{ll} T & \cong 1+T^2 \\ & \cong 1+T+T^3 \\ & \cong 1+T+T^2+T^4 \\ & \cong 2T+T^4 \\ & \cdots \\ & \cong T^7 \end{array} $$
with $18$ isomorphisms in total. (This is not the bijection exhibited in 7 trees in 1.)
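The fixed-point equation $T \cong \{\bullet\} \sqcup (T\times T)$ can at least be checked at the level of counting: classifying full binary trees by number of leaves, the counts satisfy the corresponding convolution and are the Catalan numbers. A small sketch of mine:

```python
# t[n] = number of full binary trees with n leaves; T ≅ 1 + T^2 gives the
# convolution t[n] = sum_k t[k] * t[n-k] (Catalan numbers, shifted by one).
N = 10
t = [0] * (N + 1)
t[1] = 1  # the single-vertex tree
for n in range(2, N + 1):
    t[n] = sum(t[k] * t[n - k] for k in range(1, n))
print(t[1:])  # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
```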
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1372984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Relation of gamma function to a factorial mimic function The gamma function is an analytic extension of the factorial function.
For real and positive $x$, write $x=\operatorname{int}(x)+\operatorname{frac}(x)$ and consider the function
$$f(x)=\prod\limits_{i=1}^{\operatorname{int}(x)} (\operatorname{frac}(x)+i)$$
defined for $\operatorname{frac}(x)\in[0,1)$.
We need to start from $i=1$ to avoid issues with $\operatorname{frac}(x)=0$. This function mimics $x!$ in that $f(x)=x!$ for $x\in\mathbb{Z^+}$.
How can $\Gamma(x)-f(x)$ be explained?
|
I think that you can try something with the Bohr-Mollerup Theorem.
Theorem (Bohr-Mollerup): Let $f:(0,+\infty)\to \Bbb{R}$ be such that:
*
*$f(1)=1$ and $f(x)>0$, for every $x>0.$
*$f(x+1)=xf(x)$, for every $x>0.$
*$\log{(f(x))}$ is a convex function.
Then $f\equiv \Gamma\big|_{(0,+\infty)}.$
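To see the relation concretely: at integers $f(n)=n!=\Gamma(n+1)$, but off the integers the two differ; in fact $f(x)=\Gamma(x+1)/\Gamma(\operatorname{frac}(x)+1)$, since the product telescopes ratios of gamma values. A quick sketch of mine (the helper name `f` is my own):

```python
import math

# f mimics x!: the product of (frac(x) + i) for i = 1..int(x).
def f(x):
    fr, n = x - math.floor(x), math.floor(x)
    p = 1.0
    for i in range(1, n + 1):
        p *= fr + i
    return p

for n in (1, 2, 5, 7):
    assert abs(f(n) - math.gamma(n + 1)) < 1e-9   # agreement at the integers
x = 2.5
assert abs(f(x) - math.gamma(x + 1) / math.gamma(x - math.floor(x) + 1)) < 1e-9
print(f(x), math.gamma(x + 1))  # 3.75 vs ≈ 3.3234: they differ off the integers
```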
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding the expected number of trials in an experiment. Given a uniform probability distribution over $[0, 1]$, numbers are repeatedly drawn from this distribution. We have to find the expected number of trials needed for the sum of the picked numbers to be $\geq 1$.
I have been told that the answer to this question is $ e $ but I'm not sure how to solve this.
|
Let $E(x)$ be the expected number of trials for reaching a sum of $\ge1$ starting from a sum of $x$. Then
$$
E(x)=1+\int_x^1E(t)\mathrm dt\;.
$$
Differentiating with respect to $x$ yields
$$
E'(x)=-E(x)\;,
$$
with the solutions $E(x)=c\mathrm e^{-x}$, and the condition $E(1)=1$ yields $c=\mathrm e$, so the solution is $E(x)=\mathrm e\cdot\mathrm e^{-x}=\mathrm e^{1-x}$, and the value $E(0)=\mathrm e$ is the desired expected number of trials.
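A Monte Carlo sanity check (my own sketch; the sample size is arbitrary):

```python
import random, math

# Average number of U(0,1) draws needed for the running sum to reach 1.
random.seed(1)
def draws_needed():
    s, n = 0.0, 0
    while s < 1:
        s += random.random()
        n += 1
    return n

N = 100_000
avg = sum(draws_needed() for _ in range(N)) / N
print(avg)  # ≈ e ≈ 2.71828
```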
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
In a group $G$, prove the following result Let $G$ be a group in which $a^5=e$ and $aba^{-1}=b^m$ for some positive integer $m$, and some $a,b\in G$. Then prove that $b^{m^5-1}=e$.
Progress
$$aba^{-1}=b^m\Rightarrow ab^ma^{-1}=b^{m^2}$$
What will be the next step?
|
We have $$b=ebe=a^5ba^{-5}=a^4b^ma^{-4}=a^3b^{m^2}a^{-3}=a^2b^{m^3}a^{-2}=ab^{m^4}a^{-1}=b^{m^5}$$
Thus, multiplying by $b^{-1}$, we have $e=b^{m^5-1}$.
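A concrete instance (my own sketch): in the semidirect product $\mathbb{Z}_{11}\rtimes\mathbb{Z}_5$ with $a$ of order $5$ acting on $b$ of order $11$ by $aba^{-1}=b^3$ (so $m=3$), the relation $a^5=e$ forces $3^5\equiv1\pmod{11}$, and indeed $11 \mid 3^5-1$:

```python
# m = 3, ord(b) = 11: a^5 = e forces m^5 ≡ 1 (mod ord(b)), hence b^(m^5-1) = e.
m, order_b = 3, 11
assert pow(m, 5, order_b) == 1       # 3^5 = 243 ≡ 1 (mod 11)
assert (m ** 5 - 1) % order_b == 0   # so b^(m^5 - 1) = e
```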
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
When does this sum of combinatorial coefficients equal zero? $p>2$ is a prime number, $n\in \mathbb{N}$. Is the following statement true or false? Thanks.
$$\sum_{i=0}^{\lfloor n/p\rfloor}(-1)^i {n\choose ip}=0$$ iff $n=(2k-1)p$ for some $k\in \mathbb{N}$.
|
Let $\omega=\exp\left(\frac{2\pi i}{p}\right)$. Since $f(n)=\frac{1}{p}\left(1+\omega^n+\ldots+\omega^{(p-1)n}\right)$ equals one if $n\equiv 0\pmod{p}$ and zero otherwise, we have:
$$S(n)=\sum_{i\equiv 0\pmod{p}}\binom{n}{i}(-1)^i=\frac{(1-1)^n+(1-\omega)^n+\ldots+(1-\omega^{p-1})^n}{p}\tag{1}$$
so $p\cdot S(n)$ is the sum of the $n$-th powers of the roots of $q(x)=1-(1-x)^p$. (Since $p$ is odd, $(-1)^{jp}=(-1)^j$, so $S(n)$ agrees with the sum $\sum_j(-1)^j\binom{n}{jp}$ in the question.)
Assuming that $M$ is the companion matrix of $q(x)$, it follows that:
$$ S(n) = \frac{1}{p}\cdot\text{Tr}(M^n)=\frac{1}{p}\,\text{Tr}\,\begin{pmatrix}1 & -1 & 0&0&\ldots\\0 & 1 & -1 & 0 & \ldots\\\vdots & 0 & \ddots &\ddots&\vdots\\0&\ldots&\ldots&1&-1\\-1&0&\ldots&\ldots&1\end{pmatrix}^n. \tag{2}$$
By the Cayley-Hamilton theorem, $\{S(n)\}_{n\geq 0}$ is a linear recurring sequence with the same characteristic polynomial as $M$, i.e. $1-(1-x)^p$. On the other hand, $(1)$ gives that $S(n)$ cannot be zero if $n\not\equiv 0\pmod{p}$, and:
$$\begin{eqnarray*} S(mp)&=&\frac{(1-\omega)^{mp}+\ldots+(1-\omega^{p-1})^{mp}}{p}=\frac{1}{p}\sum_{x\in Z}(1-x)^{mp}\\&=&\frac{1}{p}\sum_{k=0}^{mp}\binom{mp}{k}(-1)^{k}\sum_{x\in Z}x^{k}=\sum_{k=0}^{m}\binom{mp}{kp}(-1)^{kp}\tag{3}\end{eqnarray*}$$
where $Z=\{\omega,\omega^2,\ldots,\omega^{p-1}\}$; the last equality holds because $\sum_{x\in Z}x^k$ equals $p-1$ when $p\mid k$ and $-1$ otherwise, while $\sum_{k=0}^{mp}\binom{mp}{k}(-1)^k=0$.
Now, it is not difficult to prove the only if part of the statement.
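For a concrete sanity check at $p=3$ (my own brute-force sketch): the sum vanishes exactly at the odd multiples of $3$, as claimed.

```python
from math import comb

# S(n) = sum_i (-1)^i C(n, i*p) for p = 3; its zeros among 1..39.
p = 3
def S(n):
    return sum((-1) ** i * comb(n, i * p) for i in range(n // p + 1))

zeros = [n for n in range(1, 40) if S(n) == 0]
print(zeros)  # [3, 9, 15, 21, 27, 33, 39] — the odd multiples of 3
```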
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Range of an inverse trigonometric function Find the range of $f(x)=\arccos\sqrt {x^2+3x+1}+\arccos\sqrt {x^2+3x}$
My attempt: I first found the domain:
$x^2+3x\geq0$
$x\leq-3$ or $x\geq0$...........(1)
$x^2+3x+1\geq0$
$x\leq\frac{-3-\sqrt5}{2}$ or $x\geq \frac{-3+\sqrt5}{2}$...........(2)
From (1) and (2),
domain is $x\leq-3$ or $x\geq0$
but could not solve further. Any help will be greatly appreciated.
|
You do not have the correct domain. We must also have $-1\leq\sqrt{x^2+3x+1}\leq1$ and $-1\leq\sqrt{x^2+3x}\leq1$. In other words, $\sqrt{x^2+3x+1}\leq1$ and $\sqrt{x^2+3x}\leq1$, since they are positive.
Thus $x^2+3x+1\leq1$ (squaring is allowed since both are positive), or $x^2+3x\leq0$, this gives $-3 \leq x \leq 0$. Together with $x \leq -3$ or $x \geq 0$, we get that the only numbers that give a well defined value are $x=0$ or $x=-3$.
This gives $\arccos(\sqrt{1})+\arccos(\sqrt{0}) = 0 + \frac{\pi}{2}$ in both cases, so the range is $\left\{\frac{\pi}{2}\right\}$.
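A quick numerical confirmation at the two admissible points $x=0$ and $x=-3$ (my own sketch):

```python
import math

# Only x = 0 and x = -3 are admissible; both give arccos(1) + arccos(0) = pi/2.
for x in (0, -3):
    val = (math.acos(math.sqrt(x * x + 3 * x + 1))
           + math.acos(math.sqrt(x * x + 3 * x)))
    assert abs(val - math.pi / 2) < 1e-12
```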
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Why the determinant of a matrix with the sum of each row's elements equal 0 is 0? I'm trying to understand the proof of a problem, but I'm stuck. My book asserts that if every row of a matrix sums to 0, then its determinant is also 0. I checked some random examples and it's true, but I couldn't prove it. Could you help me?
|
Another way to derive this result:
When every row sums to zero, each row vector is orthogonal to $(1,1,\ldots,1)$, so all $n$ rows lie in the same $(n-1)$-dimensional hyperplane and are therefore linearly dependent. Hence the determinant is zero.
In fact, if any nonempty subset of the rows adds up to the zero vector, the rows are linearly dependent and the determinant is zero as well.
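A tiny numerical illustration of the row-sum-zero fact (my own sketch):

```python
# 3x3 example: every row sums to 0, and the determinant is 0.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[1, 2, -3], [4, -5, 1], [-2, 0, 2]]
assert all(sum(row) == 0 for row in M)
assert det3(M) == 0
```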
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
}
|
problem of number theory N. Sato Can someone help me solve this problem?
Sato, 4.2.
For an odd positive integer $n>1$, let $S$ be the set of integers $x$ with $1 \leq x \leq n$ such that both $x$ and $x+1$ are relatively prime to $n$. Show that $$\prod_{x\in S}x \equiv 1 \mod n$$
Thanks Giorgio Viale
|
Wilson's theorem says that if $n$ is prime, then $(n-1)! \equiv -1 \pmod n$. The element $x=n-1$ is not in $S$, because $x+1=n$ is not relatively prime to $n$; since $n-1\equiv-1\pmod n$, dividing this factor out of $(n-1)!$ shows that the product of the remaining elements is $(n-2)!\equiv 1 \pmod n$.
If $n$ isn't prime, let its prime power factors be $p_1$, $p_2$, ..., $p_y$, with prime bases $b_1,\ldots,b_y$, and let the respective remainders of a candidate $x$ modulo these factors be $r_1$, $r_2$, ..., $r_y$. No $r_a$ can be divisible by $b_a$ or be one less than a multiple of $b_a$ (otherwise $x$ or $x+1$ would share a factor with $n$), so the admissible remainders run $1, \ldots, b_a-2,\ b_a+1, \ldots, 2b_a-2, \ldots$ for each $p_a$. By the Chinese remainder theorem every combination of admissible remainders actually occurs, so each admissible value of $r_a$ is attained by the same number of elements of $S$. Therefore the product is congruent to $1$ modulo $p_a$, and since this holds for every prime power factor, it is congruent to $1$ modulo $n$.
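The claim is easy to confirm numerically for small odd $n$ (a brute-force sketch of mine):

```python
from math import gcd

# Product over S = {x in 1..n : gcd(x, n) = gcd(x+1, n) = 1}, reduced mod n.
def product_over_S(n):
    prod = 1
    for x in range(1, n + 1):
        if gcd(x, n) == 1 and gcd(x + 1, n) == 1:
            prod = prod * x % n
    return prod

assert all(product_over_S(n) == 1 for n in (3, 9, 15, 21, 35, 45, 105))
```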
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Power series of $f'/f$ It seems that I'm [censored] blind in searching for the power series expansion of $$f(x):=\frac{2x-2}{x^2-2x+4}$$ at $x=0$.
I've tried a lot, e.g., partial fraction decomposition, or regarding $f(x)=\left(\log((x-1)^2+3)\right)'$ -- without success.
I'm sure that I'm overlooking a tiny little missing link; dear colleagues, please give me a hint.
|
Given any function $f$, if we restrict to where $f$ is nonzero (so that either $\log f$ or $\log (-f)$ exists), we can write $\frac{f'}{f}=\frac{d}{dx}\log |f|$. If $f=c \prod (x-\alpha_i)^{k_i}$ is a rational function, $\log f = \log c + \sum k_i \log (x-\alpha_i)$ and so
$$\frac{f'}{f}=\frac{d}{dx}\log |f| = \sum \frac{k_i}{x-\alpha_i}.$$
Now, use the fact that $\frac{1}{1-x}=\sum x^i$, and the factorization $(2x-2)/(x^2−2x+4)= 2(x-1)\,(x-1+i\sqrt{3})^{-1}(x-1-i\sqrt{3})^{-1}$.
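With $\alpha = 1+i\sqrt{3} = 2e^{i\pi/3}$, expanding the two simple poles in geometric series yields the coefficients $a_n = -2^{-n}\cos\frac{(n+1)\pi}{3}$ (my own computation). A small sketch checking this against plain long division of the series:

```python
import math

# Taylor coefficients of f(x) = (2x-2)/(x^2-2x+4) two ways:
# (i) long division: (4 - 2x + x^2) * sum(c_k x^k) = 1, then multiply by 2x-2;
# (ii) the pole formula a_n = -cos((n+1)*pi/3) / 2^n.
N = 12
c = [0.0] * N          # series of 1/(x^2 - 2x + 4)
c[0] = 1 / 4
c[1] = 2 * c[0] / 4
for k in range(2, N):
    c[k] = (2 * c[k - 1] - c[k - 2]) / 4
a = [-2 * c[0]] + [2 * c[k - 1] - 2 * c[k] for k in range(1, N)]
for n in range(N):
    closed = -math.cos((n + 1) * math.pi / 3) / 2 ** n
    assert abs(a[n] - closed) < 1e-12
```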
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Using the definition of derivative to find $\tan^2x$ The instructions: Use the definition of derivative to find $f'(x)$ if $f(x)=\tan^2(x)$.
I've been working on this problem, trying every way I can think of. At first I tried this method:
$$\lim_{h\to 0} {\tan^2(x+h)-\tan^2(x)\over h}$$
$$\lim_{h\to 0} {\tan(x+h)-\tan(x)\over h}\cdot\lim_{h\to 0} {\tan(x+h)-\tan(x)\over h}$$
And then I went on from there, but I was never able to get rid of the $h$.
So then I tried this:
$$\lim_{x\to y} {\tan(x)-\tan(y)\over x-y}\cdot\lim_{x\to y} {\tan(x)-\tan(y)\over x-y}$$
$$\lim_{x\to y} {\tan(x-y)[1+\tan(x)\tan(y)]\over x-y}\cdot\lim_{x\to y} {\tan(x-y)[1+\tan(x)\tan(y)]\over x-y}$$
$$\lim_{x\to y} {\sin(x-y)\over (x-y)}\cdot{1\over \cos(x-y)}\cdot[1+\tan(x)\tan(y)]\cdot\lim_{x\to y} {\sin(x-y)\over (x-y)}\cdot{1\over \cos(x-y)}\cdot[1+\tan(x)\tan(y)]$$
I then put the ${\frac{\sin(x-y)}{x-y}}=1$, and I traded all of the $y$ values for $x$, which gave me $\frac{1}{\cos(\theta)}=1$
$$1+\tan^2(x)\cdot1+\tan^2(x)=1+2\tan^2(x)+\tan^4(x)$$
I know that the derivative of $\tan^2(x)=2\tan(x)\sec^2(x)$, so this is obviously wrong.
I found this link from a previous Stack Exchange post on finding the limit by definition of $\tan(x)$, so I tried using the answers given there. But, even with that, I haven't been able to get this correct.
What do I need to do?
|
Using first principle, the derivative of any function $f(x)$ is given as $$\frac{d(f(x))}{dx}=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ Hence, derivative of $\tan^2 x$ is given as $$\frac{d(\tan^2 x)}{dx}=\lim_{h\to 0}\frac{\tan^2(x+h)-\tan^2(x)}{h}$$ $$=\lim_{h\to 0}\frac{(\tan(x+h)-\tan(x))(\tan(x+h)+\tan(x))}{h}$$$$=\lim_{h\to 0}\frac{\tan(x+h)-\tan(x)}{h}\times \lim_{h\to 0}(\tan(x+h)+\tan(x))$$ $$=\lim_{h\to 0}\frac{\frac{\sin(x+h)}{\cos(x+h)}-\frac{\sin(x)}{\cos(x)}}{h}\times \lim_{h\to 0}(\tan(x+h)+\tan(x))$$ $$=\lim_{h\to 0}\frac{\sin(x+h)\cos x-\cos(x+h)\sin x}{h\cos(x+h)\cos x}\times (\tan x+\tan x)$$ $$=2\tan x\lim_{h\to 0}\frac{\sin(x+h-x)}{h}\times \lim_{h\to 0}\frac{1}{\cos(x+h)\cos x}$$ $$=2\tan x\lim_{h\to 0}\frac{\sin h}{h}\times \frac{1}{\cos^2 x}$$ $$=2\tan x \times 1\times \sec^2 x$$ $$=\color{blue}{2\tan x \sec^2x}$$
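A quick numerical cross-check of the result (my own sketch, using central differences):

```python
import math

# Central difference of tan^2(x) vs the closed form 2 tan(x) sec^2(x).
def f(x):
    return math.tan(x) ** 2

h = 1e-6
for x in (0.3, 0.7, 1.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    closed = 2 * math.tan(x) / math.cos(x) ** 2
    assert abs(numeric - closed) < 1e-6
```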
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How to find: $\int^{2\pi}_0 (1+\cos(x))\cos(x)(-\sin^2(x)+\cos(x)+\cos^2(x))~dx$? How to find: $$\int^{2\pi}_0 (1+\cos(x))\cos(x)(-\sin^2(x)+\cos(x)+\cos^2(x))~dx$$
I tried multiplying it all out but I just ended up in a real mess and I'm wondering if there is something I'm missing.
|
You have to linearise the integrand. The simplest way to do it is with Euler's formulae. To shorten the computation, set $u=\mathrm e^{\mathrm ix}$. Starting with $\cos x =\dfrac{u+\bar u}2$ and taking into account the relations
$$u\bar u=1,\quad u^n+\bar u^n=2\cos nx,$$
we have
\begin{align*}
(1+\cos x)&\cos x(-\sin^2x+\cos x+\cos^2x)=(1+\cos x)\cos x(\cos x+\cos 2x)\\
&=\Bigl(1+\frac{u+\bar u}2\Bigr)\frac{u+\bar u}2\Bigl(\frac{u+\bar u}2+\frac{u^2+\bar u^2}2\Bigr)\\
&=\frac18(2+u+\bar u)(u+\bar u)\bigl(u+\bar u+u^2+\bar u^2\bigr)\\
&=\frac18\bigr(u^4+\bar u^4+3(u^3+\bar u^3)+4(u^2+\bar u^2)+5(u+\bar u)+6\bigl)\\
&=\frac14(\cos 4x+3\cos 3x+4\cos 2x+5\cos x+3)
\end{align*}
Consequently the integral from $0$ to $2\pi$ is equal to
$$\frac{3\pi}2.$$
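A midpoint-rule check of the value (my own sketch; for smooth periodic integrands the midpoint rule over a full period converges extremely fast):

```python
import math

# Midpoint rule over [0, 2*pi] for the original integrand.
N = 10_000
total = 0.0
for k in range(N):
    x = 2 * math.pi * (k + 0.5) / N
    c, s = math.cos(x), math.sin(x)
    total += (1 + c) * c * (-s * s + c + c * c)
total *= 2 * math.pi / N
assert abs(total - 3 * math.pi / 2) < 1e-9
```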
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1373977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Uniform convergence of $f_n(x) = x^n$ on $[0,c]$
Let $c \in (0,1)$ be fixed. Let $$f_n(x) = x^n,\quad x \in [0,1)$$and$$f(x) = 0,\quad x \in [0,1)$$ Show that $f_n$ converges uniformly to $f$ on $[0,c]$.
So, we have, $f_n(0) = 0, f_n( c) = c^n,f_n'(x) = nx^{n-1}, f_n'(0) = 0, f_n'( c) = nc^{n-1}.$ Since I don't know which one is supremum of $f_n(x)$, I've divided this into two cases:
Case 1: If $f_n'(c ) > f_n(c )$, then $$\lim_{n \to \infty} \sup_{\{0\leq x \leq c\}}\ | f_n(x) - f(x)|= | nc^{n-1} - 0|= 0,$$ since $0<c<1$.
Case 2: If $f_n(c ) > f_n'(c )$, $$\lim_{n \to \infty} \sup_{\{0\leq x \leq c\}}|f_n(x) - f(x)| = | c^n - 0|= 0,$$ again because $0<c<1$.
Thus, $f_n$ converges uniformly to $f$. What do you think? Is it correct?
|
An explicit argument: if $x \in (0,1)$ and $\varepsilon>0$ then $x^n<\varepsilon$ is equivalent to $n \ln(x) < \ln(\varepsilon)$ or $n>\frac{\ln(\varepsilon)}{\ln(x)}$. Now check that if $n>\frac{\ln(\varepsilon)}{\ln(c)}$ then $x^n<\varepsilon$ for every $x \in [0,c]$. This will necessarily work because the limit is zero and $x^n$ is increasing (so the convergence is slowest at the rightmost point).
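To make the explicit bound concrete (my own sketch, with the arbitrary sample values $c=0.9$ and $\varepsilon=10^{-3}$):

```python
import math

# For n > ln(eps)/ln(c), sup over [0, c] of x^n (attained at x = c) is < eps.
c, eps = 0.9, 1e-3
n = math.ceil(math.log(eps) / math.log(c)) + 1
assert c ** n < eps
print(n, c ** n)  # n is independent of x, so the convergence is uniform
```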
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
$n$-th term of an infinite sequence Determine the $n$-th term of the sequence $1/2,1/12,1/30,1/56,1/90,\ldots$. I have not been able to find the explicit formula for the $n$-th term of this infinite sequence. Can someone solve this problem and tell me how to find the answer?
|
To find a formula you need to first find the pattern.
There are a number of ideas to use when trying to find such patterns in a sequence:
*
*if all numbers are composite, try to factorise - if this works nicely a pattern may emerge at this stage, especially for numbers with one prime factor, e.g. $56 = 7 \times 8$
*often a sequence will be built on the natural numbers $1,2,3,\dots$ somehow
*use trial and error, and look at the context around the numbers for any clues
Using these ideas, you could come up with
*
*$2 = 1 \times 2$
*$12 = 3 \times 4$
*$30 = 5 \times 6$
*$56 = 7 \times 8$
If you want to pair these with the natural numbers to develop a formula, you will need to recognise that you are dealing with pairs of consecutive numbers that ascend by two from the previous pair.
$n: 1,2,3,4,5,\dots$
$2n: 2,4,6,8,10,\dots$
Hence, each pair is $\{2n-1,2n\}$.
Then $s_n = \dfrac{1}{2n-1} \cdot \dfrac{1}{2n}$
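Checking the formula against the listed terms (my own sketch):

```python
from fractions import Fraction

# s_n = 1 / ((2n-1)(2n)) reproduces 1/2, 1/12, 1/30, 1/56, 1/90, ...
terms = [Fraction(1, (2 * n - 1) * (2 * n)) for n in range(1, 6)]
assert terms == [Fraction(1, 2), Fraction(1, 12), Fraction(1, 30),
                 Fraction(1, 56), Fraction(1, 90)]
```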
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that there is a metric space that has a limit point, and each open disk in it is closed. Collecting examples
Show that there is a metric space that has a limit point, and each open disk in it is closed.
This question is from the 39th Iranian mathematics competition. Here is one solution:
Suppose that $X=\{\frac{1}{n}: n\in \mathbb{N}\} \cup \{0\}$
and:
$d(x,y) =
\left\{
\begin{array}{ll}
x+y & \mbox{if } x \neq y \\
0 & \mbox{if } x = y
\end{array}
\right.$
It is clear that $(X,d)$ is a metric space and $0$ is a limit point of this space. And for every $x\in X$ and $r > 0$ the open disk $B_r(x)$ either has one element or, for some $N\geq1$, equals $\{\frac{1}{n}: n\geq N\} \cup \{0,x\}$. In each case it is closed.
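As a machine check that $d$ really is a metric (my own sketch, on a finite initial segment of $X$):

```python
from fractions import Fraction
from itertools import product

# d(x, y) = x + y for x != y, d(x, x) = 0, on the points {1, 1/2, ..., 1/10, 0}.
pts = [Fraction(1, n) for n in range(1, 11)] + [Fraction(0)]
def d(x, y):
    return Fraction(0) if x == y else x + y

for x, y, z in product(pts, repeat=3):
    assert d(x, y) == d(y, x)               # symmetry
    assert (d(x, y) == 0) == (x == y)       # positivity
    assert d(x, z) <= d(x, y) + d(y, z)     # triangle inequality
```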
I am looking for other solutions for this question.
|
The p-adic numbers are an example of this phenomenon and there are a lot of similar examples, since any discrete valuation ring or its quotient field have the property that open disks are closed and vice versa. This includes finite extensions of the p-adics as well as rings of the form $K((X))$ (Laurent series with finite principal part over a field).
Note that these are more or less algebraic examples, which might not be what you are looking for since the question is tagged real-analysis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Argue that $\binom{n}{n_1,n_2,...,n_r} = \binom{n-1}{n_1-1,n_2,...,n_r} + \binom{n-1}{n_1,n_2-1,...,n_r}+...+\binom{n-1}{n_1,n_2,...,n_r-1} $ Argue that
$\binom{n}{n_1,n_2,...,n_r} = \binom{n-1}{n_1-1,n_2,...,n_r} + \binom{n-1}{n_1,n_2-1,...,n_r}+...+\binom{n-1}{n_1,n_2,...,n_r-1} $
Each term on the right-hand side is the number of ways of dividing $n-1$ distinct objects into $r$ distinct groups. I can't really make a start on this.
I have thought about it in terms of apples to try to get a grasp of the problem: if I were to have 10 apples labeled 1–10 and 3 groups of sizes 2, 3, 5, then the math works out.
If I have $n$ apples labeled $1$ through $n$, if I choose the first(labelled 1) apple and fix it to the 1st group then divide the remaining $n-1$ apples into the $r$ groups, there are $\binom{n-1}{n_1-1,n_2,...,n_r}$ ways to do this. If I repeat this process with apple 2 and place the apple in group 2 instead, we obtain $\binom{n-1}{n_1,n_2-1,...,n_r}$ continuing in this fashion when we fix the $r$th apple to the $r$th group and count the number of ways of distributing the remaining $n-1$ apples to the $r$ groups we get $\binom{n-1}{n_1,n_2,...,n_r-1}$. Summing the terms on the right will then give us the number of ways of dividing the $n$ distinct apples into $r$ distinct groups.
|
Your argument using apples is very close, maybe only a bit awkwardly written.
We give a combinatorical argument of equality, showing the the rhs and lhs count the same thing.
Now, the lhs is easy: it counts the number of ways to partition $n$ apples into $r$ groups with sizes $n_1, n_2, \ldots, n_r$.
We want to show that the rhs counts this as well. Let's pick a fixed apple.
The essence of the proof is noting that:
(the number of ways to partition apples into $r$ groups of specified sizes)
=
(the number of ways we can partition apples into $r$ groups of specified sizes if the fixed apple is in the first group)
+
(the number of ways we can partition apples into $r$ groups of specified sizes if the fixed apple is in the second group)
+
...
+
(the number of ways we can partition apples into $r$ groups of specified sizes if the fixed apple is in the last group)
These options are disjoint, so we can freely sum the number of cases in each option. Now, let's count the number of ways in each of the options.
If we put it in the first group, then to get partitions into $r$ groups with sizes $n_1, n_2, \ldots, n_r$, we need to arrange the remaining $n-1$ apples into groups of sizes $n_1-1$, $n_2$, ..., $n_r$, as we already have one apple in the first group. We can do this in $\binom{n-1}{n_1-1, n_2, \ldots, n_r}$ many ways. Similarly, we can put our apple in the second group, which yields $\binom{n-1}{n_1, n_2-1, \ldots, n_r}$ ways, and so forth until the $r$-th group.
Rewriting the equation in the text above in symbols, we get the desired equality.
PS:
The case-by-case distinction intuitively reminds me of the law of total probability, if it helps you to think about it that way.
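As a quick numerical sanity check of the identity (Python; the helper names below are my own, not from the question):

```python
from math import factorial

def multinomial(n, parts):
    # number of ways to divide n distinct objects into groups of the given sizes
    assert sum(parts) == n
    result = factorial(n)
    for p in parts:
        result //= factorial(p)
    return result

def rhs(n, parts):
    # sum over which group a fixed object lands in, as in the argument above
    total = 0
    for i, p in enumerate(parts):
        if p >= 1:  # the fixed object needs room in group i
            smaller = parts[:i] + [p - 1] + parts[i + 1:]
            total += multinomial(n - 1, smaller)
    return total

assert multinomial(10, [2, 3, 5]) == rhs(10, [2, 3, 5]) == 2520
assert multinomial(9, [1, 3, 5]) == rhs(9, [1, 3, 5])
```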
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
find total integer solutions for $(x-2)(x-10)=3^y$ I found this question in a past-year maths competition in my country. I've tried every possible way to solve it, but it is just way too hard.
How many integer solutions ($x$, $y$) are there of the equation $(x-2)(x-10)=3^y$?
(A)1 (B)2 (C)3 (D)4 (E)5
If let $y=0$, we had
$$x^2 - 12x + 20 = 1$$
$$x^2 - 12x + 19 = 0$$
no integer solution for x
let $y=1$, we had $x^2 - 12x + 17 = 0$, no integer solution too.
let $y=2$, we had $x^2 - 12x + 11 = 0$, we had $x = 1, 11$.
let $y=3$, we had $x^2 - 12x - 7 = 0$, no integer solution.
let $y=4$, we had $x^2 - 12x - 61 = 0$, no integer solution.
and going on....
Is there any other, more efficient way to find it? "Brute-forcing" it wastes a lot of time.
|
Just as Bob1123 commented, compute the discriminant for the equation $$x^2-12 x+(20-3^y)=0$$ and the roots $$x_{1,2}=6\pm\sqrt{3^y+16}$$ So $(3^y+16)$ must be a perfect square.
I only see one possible value, the one you already found ($y=2$, for which $3^2+16=25=5^2$).
Now, to prove that this is the unique solution is beyond my skills. Fortunately, you have Alex G.'s nice answer.
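A quick brute-force confirmation that $y=2$ is the only small exponent making $3^y+16$ a perfect square (Python, my own check, not part of the answer):

```python
from math import isqrt

# necessary condition from the discriminant: 3^y + 16 must be a perfect square
hits = [y for y in range(0, 200) if isqrt(3 ** y + 16) ** 2 == 3 ** y + 16]
print(hits)  # [2], i.e. 3^2 + 16 = 25 = 5^2, giving x = 6 +/- 5 = 1, 11
```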
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to prove the group $G$ is abelian?
Question: Assume $G$ is a group of order $pq$, where $p$ and $q$ are primes (not necessarily distinct) with $p\leqslant q$. If $p$ does not divide $q-1$, then $G$ is Abelian.
I know that if the order of $Z(G)$ is not equal to $1$, then I can prove $G$ is Abelian. However, suppose $|Z(G)|=1$, how can I know $G$ is Abelian too?
|
Assume first that $\vert G \vert = p^2$. Since $Z(G) < G$, we have that $\vert Z(G) \vert \mid \vert G \vert$. One can show that the center of a group $G$ whose order is a prime power is non-trivial - it's not too hard, but too long to state here. You might want to look that up in an Algebra book.
If $\vert Z(G) \vert = p$, the quotient $G / Z(G)$ is of prime order by Lagrange's theorem, hence cyclic and generated by - say - $g \in G$. Let $a, b \in G$ and $a \in g^n Z(G)$, $b \in g^m Z(G)$. Then $ab = g^n z g^m z' = g^n g^m z z' = g^{m + n} z z' = g^m z' g^n z = ba$ by choosing $z$, $z'$ to be elements of the center. We've just shown that $G$ is abelian, implying $Z(G) = G$, contradiction.
Hence $\vert Z(G) \vert = p^2$, proving the first statement.
A sketch of a solution of the second statement may look like this:
If now $\vert G \vert = pq$ for $p, q$ distinct primes and $p \nmid q - 1$,
we get by Sylow's theorems that the number $n_p$ of Sylow $p$-subgroups satisfies $$n_p \equiv 1 \pmod p$$ and the number $n_q$ of Sylow $q$-subgroups satisfies $$n_q \equiv 1 \pmod q\,,$$ and also $n_p \mid q$, $n_q \mid p$. But by what we just said, $n_p$ is of the form $1 + kp$, and as $p \nmid q - 1$, we have $k = 0$. As we assumed $p \leq q$ and $n_q \mid p$, we get $n_q = 1$ also.
The group $H_p \cdot H_q < G$ will now be a subgroup of order $pq$ for $H_p$ a $p$-subgroup and $H_q$ a $q$-subgroup, hence $G = H_p \cdot H_q \simeq H_p \times H_q$. Both subgroups are cyclic, so is their product (they are of relatively prime order).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to find a function that satisfies 2 conditions I am solving a partial differential equation by separation of variables. Part of the solution requires finding a function that meets the following criteria.
$$f(L,t)=C$$
$$f(0,t)=A\cos(at+b)$$
I was wondering whether such a function exists and how to find it. (I am guessing and checking; I wonder if there is a more direct method.)
|
Presumably you want a continuous (or better) such function, because otherwise the problem's trivial. :) Also, I assume $L, C, A, a, b$ are all constants.
Recall the situation in one variable (that is, if I'm given the values of a one-variable function at two points): if I want a function $g$ such that $g(a)=u$ and $g(b)=v$, one easy way to get such a $g$ is to plot a line from $(a, u)$ to $(b, v)$: $$g(x)={x-b\over a-b}u+{x-a\over b-a}v.$$ Now, in your problem you're given the values of a two-variable function at two lines. This might seem harder, but in this case there's a simple way to reduce it to the one-variable situation.
HINT: for a fixed $t$, consider the single-variable function $f_t: x\mapsto f(x, t)$. We are given desired values for $f_t(0)$ and $f_t(L)$. What should each $f_t$ be? OK, now how do we get from the many, single-variable functions $f_t$ to the specific, two-variable function $f$?
By the way, this is not always the best way to find a function with given boundary values, but you haven't set out any requirements on $f$.
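To illustrate the one-variable interpolation formula in the hint numerically (Python, my own addition; the sample values are arbitrary):

```python
def g(x, a, b, u, v):
    # linear interpolant through (a, u) and (b, v), as in the formula above
    return (x - b) / (a - b) * u + (x - a) / (b - a) * v

a, b, u, v = 0.0, 3.0, 2.5, -1.0  # arbitrary sample data
assert abs(g(a, a, b, u, v) - u) < 1e-12
assert abs(g(b, a, b, u, v) - v) < 1e-12
```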
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1374996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is this:$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$ a convergent series? Can someone show me how to evaluate this sum: $$\sum_{n=1}^{\infty}{(-1)}^{\frac{n(n-1)}{2}}\frac{1}{n}$$
Note: Wolfram Alpha shows this result, but at the same time the ratio test is not a convincing way to judge whether this series converges.
Thank you for any help
|
I post this answer because Dirichlet's test has not been mentioned in any of the previous answers. Let $a_n=(-1)^{n(n-1)/2}$ and $b_n=1/n$. The partial sums of $a_n$ are bounded and $b_n$ is decreasing and converging to $0$. Dirichlet's test implies the series is convergent.
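The hypotheses of Dirichlet's test can be checked numerically (Python, my own addition): the sign pattern of $a_n$ is $+,-,-,+$ with period $4$, so the partial sums of $a_n$ cycle through $1,0,-1,0$ and stay bounded. As an aside not claimed in this answer, the numerical value of the series agrees with $\pi/4-\tfrac{1}{2}\ln 2$.

```python
import math

# partial sums of a_n = (-1)^{n(n-1)/2} cycle through 1, 0, -1, 0
A = 0
for n in range(1, 10001):
    A += (-1) ** (n * (n - 1) // 2)
    assert -1 <= A <= 1  # bounded, as Dirichlet's test requires

# the series converges; numerically it matches pi/4 - ln(2)/2
S = sum((-1) ** (n * (n - 1) // 2) / n for n in range(1, 200001))
assert abs(S - (math.pi / 4 - math.log(2) / 2)) < 1e-6
```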
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Find $f'(x)$ when $f(x) = x^2 \cos(\frac{1}{x})$ Find $f'(x)$ when
$$
f(x) = \begin{cases}
x^2\cos(\frac{1}{x}),& x \neq 0\\
0, &x = 0\end{cases}$$
Ok, I know the derivative of $f$ and it is $2x\cos(\frac{1}{x})+\sin(\frac{1}{x})$. My question is, am I only supposed to find the derivative here or evaluate this at $0$? I mean what exactly is the reason behind this piecewise function if I don't evaluate this at $0$? Does it make sense? Also, when a question says to find out whether or not $f$ is differentiable for a piecewise function like this, what are we supposed to do, find $f'(x)$ or evaluate at $f'(0)$? I am sorry if these are very elementary level questions. But, I always get confused on terms like these. Any explanation would be much appreciated. Thanks.
|
Let $$
f(x) = \begin{cases}
x^2\cos(\frac{1}{x}),& x \neq 0\\
0, &x = 0\end{cases}$$
As you said in your question, when $x$ is not 0, we have $$f^{'}(x)=2x\cos(\frac{1}{x})+\sin(\frac{1}{x})$$
When $x$ is $0$, we must use the limit definition of the derivative. That is, $$f^{'}(0)=\lim_{h\rightarrow0}\dfrac{f(0+h)-f(0)}{h}$$ Since when evaluating this limit $$0+h\neq 0\,,$$ we have that $$f(0+h)=f(h)=h^2\cos(\frac{1}{h})$$
Therefore, $$f^{'}(0)=\lim_{h\rightarrow 0}h\cos{\dfrac{1}{h}}=0$$
In conclusion, $$f^{'}(x)= \begin{cases}
2x\cos(\frac{1}{x})+\sin(\frac{1}{x}),& x \neq 0\\
0, &x = 0\end{cases}$$
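Numerically, the difference quotient at $0$ behaves as derived (Python, my own check):

```python
import math

def f(x):
    return x * x * math.cos(1 / x) if x != 0 else 0.0

# (f(h) - f(0)) / h = h * cos(1/h), squeezed to 0 since |h cos(1/h)| <= |h|
for h in [1e-3, 1e-6, 1e-9, -1e-6]:
    q = (f(h) - f(0)) / h
    assert abs(q) < 2 * abs(h)
```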
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
A sequence inequality for $x_{2^n}$, the binary partition function.
Define the sequence $\{x_{n}\}$ recursively by $x_{1}=1$ and
$$\begin{cases}
x_{2k+1}=x_{2k}\\
x_{2k}=x_{2k-1}+x_{k}
\end{cases}$$
Prove that
$$x_{2^n}>2^{\frac{n^2}{4}}$$
I have computed some terms: $x_{2}=2,x_{3}=2,x_{4}=4,x_{5}=4,x_{6}=6,x_{7}=6,x_{8}=10,\cdots,x_{16}=36,x_{32}=202$.
I am unsure what to do from here. I know I somehow need to compare an expression of this form to $x_{2^n}$, but how? Am I on the right lines? I don't have any idea how to start proving it.
|
In this answer I will prove that $$\log_2 \left(x_{2^k}\right) \sim \frac{k^2}{2}$$ by providing the explicit inequalities $$2^{k^{2}/2-k/2-k\log_{2}k}\leq x_{2^k} \leq2^{k^{2}/2+k/2+1}.$$ This not only shows that we can do better and achieve $k^2/2$ in the exponent, but also that this is optimal.
As martin mentions in the comments, this sequence $x_{n}$
is the binary partition function, which I will denote by $p_{B}(n).$ This functions counts the number of ways to write $$n=\sum_{i=0}^{\lfloor\log_{2}n\rfloor}a_{i}2^{i}$$
where the $a_{i}$ are nonnegative integers. In what follows we'll provide some fairly sharp upper and lower bounds for $p_{B}\left(2^{k}\right)$.
Upper Bounds: When $n=2^{k}$, $a_{i}$ can be at most $2^{k-i}$. This yields $k+1$ trivial partitions where $a_{i}=2^{k-i}$ for some $i$, and at most $2^{k-i}$ remaining choices for each $a_i$. Thus the number of possible choices is at most $$\prod_{i=0}^{k-1}2^{k-i}=2^{\sum_{i=1}^{k}i}=2^{k(k+1)/2},$$
and so $$p_{B}\left(2^{k}\right)\leq2^{k^{2}/2+k/2}+(k+1)\leq2^{k^{2}/2+k/2+1}.$$
Lower Bounds: To bound $p_{B}(2^{k})$ from below, we split up $2^{k}$ into $k$ parts and assign each of these parts to the coefficients $a_{1},a_{2},\dots,a_{k}.$ There are at least $2^{k-i}/k$ choices available for each of the $a_{i}$ given above, and for any such choice we can complete it into a full partition by choosing $a_{0}=2^{k}-\sum_{i=1}^{k}a_{i}2^{i}.$
(Note that if $2^{k-i}/k<1$, there is still always one choice, namely $a_{i}=0$.) Thus we have given the lower bound $$p_{B}(2^{k})\geq\prod_{i=1}^{k}\frac{2^{k-i}}{k}=k^{-k}2^{k(k-1)/2},$$ and so $$p_{B}(2^{k})\geq2^{k^{2}/2-k/2-k\log_{2}k}.$$
Remark: The inequality $x_{2^k}>2^{k^2/4}$ follows from a numerical check along with the fact that $$\frac{k^{2}}{2}-\frac{k}{2}-k\log_{2}k>\frac{k^2}{4}$$ for all $k\geq 19$.
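The recurrence and both bounds are easy to check numerically (Python, my own addition; `binary_partitions` is a name I chose):

```python
import math

def binary_partitions(N):
    # x[n] from the question's recurrence, computed iteratively
    x = [0] * (N + 1)
    x[1] = 1
    for n in range(2, N + 1):
        x[n] = x[n - 1] + (x[n // 2] if n % 2 == 0 else 0)
    return x

x = binary_partitions(2 ** 12)
assert [x[n] for n in (2, 3, 4, 8, 16, 32)] == [2, 2, 4, 10, 36, 202]

for k in range(1, 13):
    v = x[2 ** k]
    lower = 2 ** (k * k / 2 - k / 2 - k * math.log2(k))  # log2(1) = 0 for k = 1
    assert lower <= v <= 2 ** (k * k / 2 + k / 2 + 1)
    assert v > 2 ** (k * k / 4)  # the inequality to be proved
```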
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
}
|
Finding $P(X < Y)$ where $X$ and $Y$ are independent uniform random variables Suppose $X$ and $Y$ are two independent uniform variables in the intervals $(0,2)$ and $(1,3)$ respectively. I need to find $P(X < Y)$.
I've tried in this way:
$$
\begin{eqnarray}
P(X < Y) &=& \int_1^3 \left\{\int_0^y f_X(x) dx\right\}g_Y(y) dy\\
&=& \frac{1}{4} \int_1^3 \int_0^y dx dy\\
&=& \frac{1}{4} \int_1^3 y dy\\
&=& \frac{1}{8} [y^2]_1^3\\
&=& 1
\end{eqnarray}
$$
But I'm suspicious about this result. It implies that $X<Y$ is a sure event, which is not at all true.
|
Answer:
Divide the regions of X with respect to Y for the condition $X<Y$.
For $0<X<1$ (an event of probability $\frac{1}{2}$), we always have $X<Y$ since $Y>1$, so this region contributes $\frac{1}{2}$.
For $1<X<2$ and $1<Y<2$ $P(X<Y) = \int_{1}^{2}\int_{x}^{2} \frac{1}{2}\frac{1}{2}dydx = \frac{1}{8}$
For $1<X<2$ and $2<Y<3$, we always have $X<Y$, so this region contributes $P(1<X<2)\cdot P(2<Y<3) = \frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$.
Thus $P(X<Y) = \frac{1}{2}+ \frac{1}{8}+\frac{1}{4} = \frac{7}{8}$
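A Monte Carlo simulation agrees (Python, my own check):

```python
import random

random.seed(1)
N = 200_000
hits = sum(random.uniform(0, 2) < random.uniform(1, 3) for _ in range(N))
print(hits / N)  # close to 7/8 = 0.875
```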
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
}
|
Is the Green-Tao theorem valid for arithmetic progressions of numbers whose Möbius value $\mu(n)=-1$? I am reading the basic concepts of the Green-Tao theorem (and also reading the previous questions at MSE about the corollaries of the theorem). According to the Wikipedia, the theorem can be stated as: "there exist arithmetic progressions of primes, with $k$ terms, where $k$ can be any natural number". There is also an extension of the result to cover polynomial progressions.
My question is very basic, but I am not sure about the answer of this:
If the Green-Tao theorem is true for prime numbers, then would it automatically be true for those numbers whose Möbius function value is $\mu(n)=-1$?
Something like (1):"there exist arithmetic progressions of numbers whose Möbius function value $\mu(n)=-1$, with $k$ terms, where $k$ can be any natural number"
My guess is that since all prime numbers $p$ satisfy $\mu(p)=-1$, the theorem is also true for the numbers with $\mu(n)=-1$ (a set including the primes among other numbers).
At least if the prime numbers are included (1), then I suppose that is (trivially?) true, but I am not sure if it could be said (2) that the theorem would be true for non-prime numbers whose $\mu(n)=-1$ only (something like: "it is possible to find any kind of arithmetic progression of non-primes whose $\mu(n)=-1$" ).
I guess that (1) could be correct, but (2) would be wrong. Is that right?
Thank you!
|
I will add here Steven Stadnicki's comments as an answer, so the question will be closed in some days (if other answers do not come):
(1) is trivially implied, as you suggest, but (2) is not trivially implied. That said, it would be very surprising if the proof of Green-Tao could not be extended to handle your case - in fact, probably even the special case (which implies your question) which covers only integers that are the products of three distinct primes. –
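For what it's worth, short arithmetic progressions of composite numbers with $\mu(n)=-1$ are easy to find by brute force, which supports (2) at least for small $k$ (Python; all helper names are my own):

```python
def mobius(n):
    # mu(n): 0 if n is not squarefree, else (-1)^(number of prime factors)
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# look for a 4-term AP of COMPOSITE numbers with mu = -1
good = [n for n in range(2, 2000) if mobius(n) == -1 and not is_prime(n)]
good_set = set(good)
found = None
for a in good:
    for d in range(1, 500):
        if all(a + i * d in good_set for i in range(1, 4)):
            found = [a + i * d for i in range(4)]
            break
    if found:
        break
print(found)  # e.g. [30, 66, 102, 138], products of three distinct primes
```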
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Euler's Phi function, elementary number theory Show that the equation $\phi(n)=2p$, where $p$ is prime and $2p+1$ is composite, has no solutions. Using the formula for $\phi$ it's quite easy to prove that $n$ cannot have more than two prime factors in its factorization (and we can do even better), so we must split this problem into cases; however, I'm not that familiar with number theory, so I'm not even sure which cases I must include. Can you help me?
|
Recall that $\phi(p^e)=p^{e-1}(p-1)$ for a prime $p$ and that $\phi$ is multiplicative (though not completely multiplicative).
If $n=2p+1$ were prime, then $\phi(n)=2p$ but the question states that $2p+1$ is composite so this is not possible.
Now let $l$ be a prime such that $l\mid n$.
If $l>3$, then $l-1$ would divide $\phi(n)=2p$, which is again impossible (why?).
If $l=2$, then we can have at most $l^3\mid n$ (depending on whether $p=2$ or not).
If $l=3$, then we can have at most $l^2\mid n$ (depending on whether $p=3$ or not).
Putting this all together, we only need to check values of $n$ in the set $\{2,3,4,6,8,9,12,18,24,36,72\}$ (we could reduce this list further by splitting into whether $p=2$ or $p=3$, but it's not too bad, so I won't).
Calculating $\phi(n)$ for each of these, we find no solutions of the desired form.
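The claim can also be confirmed by an exhaustive computer search over a small range (Python, my own addition):

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, m, d = 1, n, 2
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            result *= (d - 1) * d ** (e - 1)
        d += 1
    if m > 1:
        result *= m - 1
    return result

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# search for n with phi(n) = 2p, p prime and 2p + 1 composite
solutions = []
for n in range(1, 10000):
    v = phi(n)
    if v % 2 == 0 and is_prime(v // 2) and not is_prime(v + 1):
        solutions.append(n)
print(solutions)  # [] -- no solutions, as claimed
```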
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does the inverse image sheaf have a left adjoint for $\mathsf{Set}$-valued sheaves? It's known that for sheaves with values in modules, the inverse image sheaf functor $j^\ast$ for $j:U\subset X$ an inclusion of an open set has a left adjoint which is extension by zero.
Is there any way to carry this over to $\mathsf{Set}$-valued sheaves where no zero object is available?
|
Let $j : U \to X$ be the inclusion of an open subspace. Then $j^* : \mathbf{Sh} (X) \to \mathbf{Sh} (U)$ has a left adjoint $j_! : \mathbf{Sh} (U) \to \mathbf{Sh} (X)$: given a sheaf $F$ on $U$,
$$j_! F (V) = \begin{cases}
F (V) & \text{if } V \subseteq U \\
\emptyset & \text{otherwise}
\end{cases}$$
the idea being that $(j_! F)_x = F_x$ for $x \in U$ and $(j_! F)_x = \emptyset$ for $x \notin U$.
The easiest way to see this is to use the espace étalé definition of "sheaf". Then $j_!$ is just postcomposition with $j : U \to X$, while $j^*$ is pullback along $j : U \to X$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $T_n(x)={}_2F_1\left(-n,n;\tfrac 1 2; \tfrac{1}{2}(1-x)\right) $ Prove that, for Chebyshev polynomials of the first kind,
\begin{align}
T_n(x) & = \tfrac{n}{2} \sum_{k=0}^{\left \lfloor \frac{n}{2} \right \rfloor}(-1)^k \frac{(n-k-1)!}{k!(n-2k)!}~(2x)^{n-2k} && n>0 \\
& = {}_2F_1\left(-n,n;\tfrac 1 2; \tfrac{1}{2}(1-x)\right) \\
\end{align}
where
$${}_2F_1(a,b;c;z) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}$$
is the hypergeometric function, and
$$(x)_{n}=x(x+1)(x+2)\cdots(x+n-1)$$ is the rising factorial (Pochhammer symbol).
The main difficulty is to understand why $z=\tfrac{1}{2}(1-x)$. In this way, I have
$$\sum_{k=0}^\infty \ldots (1-x)^k$$
and not
$$\sum_{k=0}^\infty \ldots (2x)^k$$
Any suggestions please?
|
$y=\phantom{}_2 F_1(a,b;c;z)$ is the regular solution of the ODE:
$$z(1-z)y''+\left[c-(a+b+1)z\right]y'-ab y=0 $$
hence $y=\phantom{}_2 F_1(-n,n;1/2;z)$ is the regular solution of the ODE:
$$z(1-z)y''+\left(\frac{1}{2}-z\right)y'+ n^2 y=0 $$
and $y=\phantom{}_2 F_1\left(-n,n;\frac{1}{2};\frac{1-z}{2}\right)$ is the regular solution of the ODE:
$$(1-z^2)\,y'' - z y'+ n^2\,y=0 \tag{1}$$
so to prove our claim we just need to prove that $T_n(z)$ fulfills the same ODE. Since:
$$ T_n(\cos\theta) = \cos(n\theta) $$
by differentiating twice that identity we have:
$$ -\sin(\theta)\, T_n'(\cos\theta) = -n\sin(n\theta),$$
$$ \sin^2(\theta)\, T_n''(\cos\theta) -\cos(\theta)\, T_n'(\cos\theta) = -n^2\cos(n\theta) $$
and $(1)$ just follows from replacing $\cos(\theta)$ with $z$.
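Since the hypergeometric series terminates for $a=-n$, the identity is also easy to verify numerically (Python, my own check; the helper names are mine):

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind via T_n(cos t) = cos(n t)
    return math.cos(n * math.acos(x))

def hyp2f1_terminating(a, b, c, z):
    # 2F1 with a a nonpositive integer: the series has finitely many terms;
    # the rising Pochhammer symbols are built into the term ratio
    term, total, k = 1.0, 1.0, 0
    while a + k != 0:
        term *= (a + k) * (b + k) / (c + k) * z / (k + 1)
        total += term
        k += 1
    return total

for n in range(8):
    for x in (-0.9, -0.3, 0.2, 0.7):
        assert abs(T(n, x) - hyp2f1_terminating(-n, n, 0.5, (1 - x) / 2)) < 1e-9
```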
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding a path in a graph by its hash value Assume there is a graph $G = (V, E)$ and a hash function $H: V^n \rightarrow \{0,1\}^m$. Given a path $p = (v_1, v_2, ..., v_n)$ from the graph $G$, compute its hash value $H(p) = h_p$.
Question: Given only the value $h_p$ for any path in the graph (or any "non-path" in the graph), can one prove or disprove that the path exists in the graph?
Possible solution: Expand the graph $G$ into a (possibly) infinite Merkle tree considering every vertex in $V$ as the respective root of the tree (designate these $m_v$). Try to match the hash value $h_p$ with all of the $m_v$ trees. If at least one tree can authenticate the $h_p$ value, then the path exists.
Remark: A non-path would be a sequence of vertices that does not correspond to the edges in the graph.
|
If it is a finite graph, the answer is yes, because for fixed $n$ there is only a finite number of paths of length $n$ in the graph. For example, you can enumerate all $n$-tuples of distinct vertices in the graph, check whether they form a path, compute the hash value of that path if they do, and check if that hash occurs in your list of hashes.
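A minimal sketch of this enumeration, instantiating $H$ as SHA-256 of the vertex sequence (an assumption; the original $H$ is arbitrary):

```python
import hashlib
from itertools import permutations

def path_hash(path):
    # H applied to a vertex sequence; SHA-256 is just a stand-in here
    return hashlib.sha256(",".join(map(str, path)).encode()).hexdigest()

def find_path_with_hash(vertices, edges, n, target):
    # enumerate all n-tuples of distinct vertices and keep those that are paths
    edge_set = set(edges) | {(b, a) for (a, b) in edges}
    for tup in permutations(vertices, n):
        if all((tup[i], tup[i + 1]) in edge_set for i in range(n - 1)):
            if path_hash(tup) == target:
                return tup
    return None

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (1, 3)]
assert find_path_with_hash(V, E, 4, path_hash((1, 2, 3, 4))) == (1, 2, 3, 4)
assert find_path_with_hash(V, E, 4, path_hash((1, 4, 2, 3))) is None  # non-path
```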
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$f(x) = \frac{2}{x}$ is not uniformly continuous on $(0,1]$ Show that $f(x) = \frac{2}{x}$ is not uniformly continuous on $(0,1]$
WLOG suppose $0< \delta \leq 1$. Let $\epsilon = 1$ and $x = \frac{\delta}{2}$, $y = x + \frac{\delta}{3}$, so that $x,y \in (0,1]$.
Skipping all the details: $\mid f(x) - f(y)\mid =...= \mid \frac{8}{5\delta}\mid \geq 1 = \epsilon$, since $0<\delta\leq 1$.
I have seen people restricting the $\delta$ like I did here, but I have never done this before and I thought the only condition applies to $\delta$ is $\delta >0$(though restricting the $\delta$ does make things easier). So, since this is one of my HW question, am I allowed to do it like this (just to make sure I don't make any stupid mistake)? Do you think my approach is correct? Please let me know. Thanks.
|
Let us see what you do. You assume a function is uniformly continuous. You take $\epsilon = 1$. Then you have some $\delta$ such that for all $x,y$ with $|x-y| < \delta$ something should hold. But if this something holds for all $|x-y| < \delta$ then to say it holds for all $|x-y| < \delta'$ for some $\delta' < \delta$ is a weaker assumption.
Thus you can always assume a smaller delta and thus in particular you can assume $\delta$ is less than $1$ if it helps.
You could write:
Assume for a contradiction that $f$ is uniformly continuous, and chose $\epsilon = 1$. There exists a $\delta > 0 $ such that for all $x,y$ with $|x-y|< \delta $ one has $|f(x)-f(y)| < 1$. WLOG we can assume $\delta < 1$. Now let $x= \delta/2$ ...
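Numerically, the construction in the question indeed defeats every such $\delta$ for $\epsilon=1$ (Python, my own addition):

```python
f = lambda x: 2 / x

# with x = delta/2 and y = x + delta/3 we get |x - y| < delta, yet
# |f(x) - f(y)| = 8/(5*delta) >= 8/5 whenever delta <= 1
for delta in (1.0, 0.1, 0.01, 0.001):
    x = delta / 2
    y = x + delta / 3
    assert abs(x - y) < delta and 0 < x <= 1 and 0 < y <= 1
    assert abs(f(x) - f(y)) >= 1
```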
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
when is the cokernel of a map of free modules free? Let $R$ be a commutative ring (noetherian if needed) and $n,m$ be two nonnegative integers. Consider a map
$\varphi: R^n\rightarrow R^m$
Is there a characterisation, e.g. in terms of the matrix representation of $\varphi$ of the cokernel of this map being free?
remark: this question seems to be quite similar, and I just learned from it that there is a criterion for when the module is projective in terms of minors of the matrix representing $\varphi$. So maybe this question is too strong, as it essentially asks to classify the free modules among the projectives, but is there at least a sufficient criterion in nice situations?
|
You need ALL $(k+1)\times (k+1)$ minors of the matrix of $\varphi$ to be zero for some $k$, and some $k\times k$ minor to be a unit; the cokernel is then free of rank $m-k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1375943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Find $\int_0^1(\ln x)^n\hspace{1mm}dx$ I am not a big fan of induction, it's just a personal preference.
Is there a method other than induction?
Answer is $n!$ by the way
|
Let $\ln x=t\implies \frac{dx}{x}=dt\implies dx=e^tdt$; then we have $$\int_{0}^{1}(\ln x)^ndx=\int_{-\infty}^{0}(t)^ne^tdt$$ $$=\int_{0}^{\infty}(-t)^ne^{-t}dt$$ $$=(-1)^n\int_{0}^{\infty}e^{-t}t^ndt$$ Now, using the Laplace transform $\color{blue}{\int_{0}^{\infty}e^{-st}f(t)dt=L[f(t)]}$ & $\color{blue}{L[t^n]=\frac{n!}{s^{n+1}}}$, we get $$(-1)^n\int_{0}^{\infty}e^{-t}t^n dt=(-1)^n L[t^n]_{s=1}$$ $$=(-1)^n\left[\frac{n!}{s^{n+1}}\right]_{s=1}$$ $$=(-1)^n\left[\frac{n!}{(1)^{n+1}}\right]=(-1)^n(n!)$$ $$\implies \color{blue}{\int_{0}^{1}(\ln x)^ndx=(-1)^n(n!)}$$ If $n$ is an even integer, then we have $$ \color{blue}{\int_{0}^{1}(\ln x)^ndx=(-1)^n(n!)=n!}$$
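A numerical check of the final value via the same substitution $x=e^{-t}$ (Python, my own addition; the names are mine):

```python
import math

def integral_ln_pow(n, T=60.0, steps=200000):
    # after x = e^{-t}: integral = (-1)^n * int_0^infty t^n e^{-t} dt,
    # approximated by the trapezoidal rule on [0, T]
    h = T / steps
    s = 0.5 * (0.0 ** n + T ** n * math.exp(-T))  # 0.0**0 == 1.0 handles n = 0
    for i in range(1, steps):
        t = i * h
        s += t ** n * math.exp(-t)
    return (-1) ** n * s * h

for n in range(7):
    assert abs(integral_ln_pow(n) - (-1) ** n * math.factorial(n)) < 1e-3
```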
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
Extending the Riemann zeta function using Euler's Theorem. Euler's theorem states that if the real part of a complex number $z$ is larger than 1, then $\zeta(z)=\displaystyle\prod_{n=1}^\infty \frac{1}{1-p_n^{-z}}$, where $\zeta(z)=\displaystyle\sum_{n=1}^\infty n^{-z}$ is the Riemann zeta function and $\{p_n\}$ is the set of primes in increasing order.
After reading the argument on extending $\zeta(z)$ to a meromorphic function for those complex numbers $z$ whose real part is larger than $0$, I am wondering whether it is possible to write
$\zeta(z)=\displaystyle\prod_{n=1}^\infty f_n(z)\cdot\frac{1}{1-p_n^{-z}}, \hspace{0.1in} 0< \Re z <1,$
for some holomorphic functions $f_n$. I am new to the Riemann zeta function, so if this is a silly question forgive me. Thank you.
|
Yes and no, if you want something nontrivial. The analytically extended zeta function admits a Weierstrass factorization (called the Hadamard factorization in this case):
$$\zeta(s)=\pi^{s/2}\frac{\prod_{\rho}(1-s/\rho)}{2(s-1)\Gamma(1+s/2)},$$
where the product is over the nontrivial zeros. You can also see the trivial zeros at $s=-2k$ from the Gamma function. You can further expand the product via:
$$\Gamma(1+s/2)=e^{-\gamma s/2}\prod_{k=1}^\infty \left(1+\frac{s}{2k}\right)^{-1}e^{s/(2k)}.$$
This is about as good a product as you'll get and it'll agree with the Euler formula for $s>1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Given an acute triangle ABC with altitudes AH, BK. Let M be the midpoint of AB Given an acute triangle $ABC$ with altitudes $AH$, $BK$. Let $M$ be the midpoint of $AB$. The line $CM$ intersects $HK$ at $D$. Draw $AL$ perpendicular to $BD$ at $L$. Prove that the circle through $C$, $K$ and $L$ is tangent to the line through $B$ and $C$.
|
This proof does not assume that $ABC$ is acute. Suppose $a:=BC$, $b:=CA$, and $c:=AB$. Likewise, $\alpha:=\angle BAC$, $\beta:=\angle ABC$, and $\gamma:=\angle BCA$, so $\alpha+\beta+\gamma=\pi$. Let $x:=\angle CKL$ and $z:=\angle AMC$. Note that $A$, $K$, $L$, $H$, and $B$ lie on the circle centered at $M$ with radius $\frac{c}{2}$.
Note that $$\angle KBL=\angle ABL-\angle ABK=\angle CKL-\left(\frac{\pi}{2}-\alpha\right)=x+\alpha-\frac{\pi}{2}\,,$$
$$\angle LBC=\angle ABC-\angle ABL=\beta-\angle CKL=\beta-x\,,$$
$$\angle BCM=\angle AMC-\angle ABC=z-\beta\,,$$
$$\angle MCK=\pi-\angle CAM-\angle AMC=\pi-\alpha-z\,,$$
$$\angle CKH=\angle ABC=\beta\,,$$
and
$$\angle HKB=\angle LAB=\frac{\pi}{2}-\beta\,.$$
The lines $BL$, $CM$, and $KH$ concur. By the trigonometric version of Ceva's Theorem on the triangle $BCK$, we have
$$1=\frac{\sin(\angle KBL)}{\sin(\angle LBC)}\cdot\frac{\sin(\angle BCM)}{\sin(\angle MCK)}\cdot\frac{\sin(\angle CKH)}{\sin(\angle HKB)}=-\frac{\cos(x+\alpha)}{\sin(\beta-x)}\cdot\frac{\sin(z-\beta)}{\sin(\alpha+z)}\cdot\frac{\sin(\beta)}{\cos(\beta)}\,.$$
That is,
$$
\begin{align}
\frac{\cos(\beta)\tan(x)-\sin(\beta)}{\cos(\alpha)-\sin(\alpha)\tan(x)}&=-\frac{\sin(\beta-x)}{\cos(x+\alpha)}=\frac{\sin(z-\beta)}{\sin(\alpha+z)}\cdot\frac{\sin(\beta)}{\cos(\beta)}
\\
&=\frac{\sin(\beta)}{\cos(\beta)}\cdot\frac{\tan(z)\cos(\beta)-\sin(\beta)}{\sin(\alpha)+\cos(\alpha)\tan(z)}\,.
\end{align}$$
Let $F$ be the foot of the perpendicular from $C$ to $AB$. We have $CF=b\sin(\alpha)$ and $MF=\frac{c}{2}-b\cos(\alpha)$ ($MF$ is taken to be a signed length, so $MF$ is negative if $\alpha<\beta$). That is, $$\tan(z)=\frac{b\sin(\alpha)}{\frac{c}{2}-b\cos(\alpha)}=\frac{2\sin(\alpha)\sin(\beta)}{\sin(\gamma)-2\cos(\alpha)\sin(\beta)}\,,$$
since $\frac{b}{\sin(\beta)}=\frac{c}{\sin(\gamma)}$ due to the Law of Sines on the triangle $ABC$. Now, $\gamma=\pi-\alpha-\beta$, so we get $\sin(\gamma)=\sin(\alpha+\beta)=\sin(\alpha)\cos(\beta)+\cos(\alpha)\sin(\beta)$. Therefore,
$$\tan(z)=\frac{2\sin(\alpha)\sin(\beta)}{\sin(\alpha)\cos(\beta)-\cos(\alpha)\sin(\beta)}\,.$$
Consequently,
$$
\begin{align}
\frac{\cos(\beta)\tan(x)-\sin(\beta)}{\cos(\alpha)-\sin(\alpha)\tan(x)}&=
\frac{\sin(\beta)}{\cos(\beta)}\cdot\frac{\tan(z)\cos(\beta)-\sin(\beta)}{\sin(\alpha)+\cos(\alpha)\tan(z)}
\\
&=\frac{\sin(\beta)}{\cos(\beta)}\cdot\frac{2\sin(\alpha)\sin(\beta)\cos(\beta)-\sin(\beta)\big(\sin(\alpha)\cos(\beta)-\cos(\alpha)\sin(\beta)\big)}{\sin(\alpha)\big(\sin(\alpha)\cos(\beta)-\cos(\alpha)\sin(\beta)\big)+2\sin(\alpha)\cos(\alpha)\sin(\beta)}
\\
&=\frac{\sin^2(\beta)}{\sin(\alpha)\cos(\beta)}\cdot\frac{\sin(\alpha)\cos(\beta)+\cos(\alpha)\sin(\beta)}{\sin(\alpha)\cos(\beta)+\cos(\alpha)\sin(\beta)}=\frac{\sin^2(\beta)}{\sin(\alpha)\cos(\beta)}\,.
\end{align}$$
(Technically, we have to worry about the case $\alpha=\beta$, but we can argue by continuity that in the limit $\alpha=\beta$, the above equality still holds.)
Ergo,
$$\sin(\alpha)\cos^2(\beta)\tan(x)-\sin(\alpha)\sin(\beta)\cos(\beta)=\cos(\alpha)\sin(\beta)^2-\sin(\alpha)\sin^2(\beta)\tan(x)\,,$$
leading to
$$\sin(\alpha)\tan(x)=\sin(\beta)\big(\sin(\alpha)\cos(\beta)+\cos(\alpha)\sin(\beta)\big)=\sin(\beta)\sin(\alpha+\beta)=\sin(\beta)\sin(\gamma)\,.$$
That is,
$$\tan(x)=\frac{\sin(\beta)\sin(\gamma)}{\sin(\alpha)}=\frac{c\sin(\beta)}{a}=\frac{AH}{BC}\,,$$
where we have once again used the Law of Sines $\frac{a}{\sin(\alpha)}=\frac{c}{\sin(\gamma)}$.
Now, as $\angle ABL=\angle CKL=x$, we have
$$\frac{AL}{BL}=\tan(x)=\frac{AH}{BC}\,.$$
The triangles $AHL$ and $BCL$ have $\angle LAH=\angle LBH=\angle LBC$ and $\frac{AL}{BL}=\frac{AH}{BC}$. Therefore, $AHL$ and $BCL$ are similar triangles, whence $\angle BCL=\angle AHL$. However, since $A$, $K$, $L$, $H$ lie on the circle centered at $M$ and $A$, $K$, $C$ are collinear, $\angle AHL=\pi-\angle AKL=\angle LKC=x$. Thus, $\angle BCL=\angle LKC=x$. By the tangent-chord angle, this means $BC$ is tangent to the circumscribed circle of the triangle $CKL$.
P.S. As a result of this problem, we can also show that the circumscribed circle of $BCL$ is tangent to $AB$ and that, if $N$ is the midpoint of $CH$, then $MN$ is a perpendicular bisector of $HL$, and that the circle with diameter $CH$ passes through $L$.
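The final claim can be sanity-checked numerically with coordinates; the triangle below is an arbitrary choice (Python, my own addition):

```python
import math

A, B, C = (0.0, 0.0), (6.0, 0.0), (1.5, 4.0)  # an arbitrary acute triangle

def foot(P, Q, R):
    # foot of the perpendicular from P onto line QR
    (qx, qy), (rx, ry), (px, py) = Q, R, P
    dx, dy = rx - qx, ry - qy
    t = ((px - qx) * dx + (py - qy) * dy) / (dx * dx + dy * dy)
    return (qx + t * dx, qy + t * dy)

def line_inter(P1, P2, P3, P4):
    # intersection of lines P1P2 and P3P4
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = P1, P2, P3, P4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / den,
            (a * (y3 - y4) - (y1 - y2) * b) / den)

def circumcenter(P, Q, R):
    (ax, ay), (bx, by), (cx, cy) = P, Q, R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

H = foot(A, B, C)                           # foot of the altitude from A
K = foot(B, A, C)                           # foot of the altitude from B
M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)  # midpoint of AB
D = line_inter(C, M, H, K)                  # CM meets HK
L = foot(A, B, D)                           # AL perpendicular to BD at L

O = circumcenter(C, K, L)
r = math.dist(O, C)
# tangency to line BC <=> distance from O to line BC equals the circumradius
assert abs(math.dist(O, foot(O, B, C)) - r) < 1e-7
```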
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
remainder of $a^2+3a+4$ divided by 7
If the remainder when $a$ is divided by $7$ is $6$, find the remainder when $a^2+3a+4$ is divided by $7$.
(A)$2$ (B)$3$ (C)$4$ (D)$5$ (E)$6$
if $a = 6$, then $6^2 + 3(6) + 4 = 58$, and $a^2+3a+4 \equiv 2 \pmod 7$
if $a = 13$, then $13^2 + 3(13) + 4 = 212$, and $a^2+3a+4 \equiv 2 \pmod 7$
Thus, we can say that for any number $a$ that leaves remainder $6$ when divided by $7$, the remainder of $a^2 + 3a + 4$ when divided by $7$ is $2$.
Is there any other way to calculate it? (Say we are given $b$ as the remainder of $a$ divided by $7$, not necessarily $6$.)
|
If the remainder when $a$ divided by $7$ is $b$, then $a = 7n+b$ for some integer $n$.
Hence, $a^2+3a+4 = (7n+b)^2+3(7n+b)+4$ $= 49n^2 + 14nb + b^2 + 21n + 3b + 4$ $= 7(7n^2+2nb+3n) + (b^2+3b+4)$.
So, the remainder when $a^2+3a+4$ is divided by $7$ will be the same as the remainder when $b^2+3b+4$ is divided by $7$.
For the specific case when $b = 6$, we get that $a^2+3a+4 = 7(7n^2+12n+3n)+58$ $= 7(7n^2+12n+3n+8)+2$.
So the remainder when $a^2+3a+4$ is divided by $7$ is $2$.
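A quick check over many values of $a$ (Python, my own addition):

```python
# whenever a % 7 == 6, (a^2 + 3a + 4) % 7 == 2, and in general the
# remainder only depends on b = a % 7, exactly as derived above
for a in range(1, 500):
    b = a % 7
    assert (a * a + 3 * a + 4) % 7 == (b * b + 3 * b + 4) % 7
    if b == 6:
        assert (a * a + 3 * a + 4) % 7 == 2
```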
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Can $\mathrm{PGL}_2$ be viewed as an affine algebraic group? I was just wondering whether or not it is possible to view $\mathrm{PGL}_2$ as an affine algebraic group.
|
We may view $PGL_{2}$ over $\mathbb{Z}$ as an affine scheme given by an open subset of $\mathbb{P}^{3}_{\mathbb{Z}}$. More precisely, let $\{z_{11},z_{12},z_{21},z_{22}\}$ be homogeneous coordinates on $\mathbb{P}^3$. Then $PGL_2$ is given by the distinguished open set $D(f)$ where $f=z_{11}z_{22}-z_{12}z_{21}$. The coordinate ring $\mathcal{O}(PGL_2)$ is the degree $0$ component of the graded ring $\mathbb{Z}[z_{11},z_{12},z_{21},z_{22}][f^{-1}]$.
$PGL_2$ is a group object in the category of schemes. Giving a group object is equivalent to giving a functorial group structure on $PGL_2(R)$ for each ring $R$. Observe that a map $s:Spec(R)\to PGL_2$ corresponds to the data (up to mutliplication by a unit in $R$):
$(P,s_{11},s_{12},s_{21},s_{22})$ where $P$ is a projective $R$-module of rank $1$, $s_{ij}\in P$ such that $R\to P\otimes_R P$ given by $1\mapsto (s_{11}s_{22}-s_{12}s_{21})$ is an isomorphism of $R$-modules.
Given another map $t:Spec(R)\to PGL_2$ with data $(Q,t_{11},t_{12},t_{21},t_{22})$ we define the product morphism $s.t$ as the one associated to the data
$(P\otimes_R Q, w_{11},w_{12},w_{21},w_{22})$ where the $w_{ij}$ are the entries obtained by multiplying the matrix $[s_{ij}]$ with $[t_{ij}]$.
When $(R,\mathfrak{m})$ is a local ring we have $P=R$ and map to $PGL_2$ is just a 4 tuple $(s_{11},s_{12},s_{21},s_{22})$ with discriminant a unit. By virtue of it being a map to $PGL_2$ and in particular $\mathbb{P}^4$ it is implicit that we identify any two such tuples which differ by multiplication by $R^*=R\setminus \mathfrak{m}$. This is the classical description.
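For a concrete instance of the functor of points, one can count $PGL_2(R)$ for the finite field $R=\mathbb{F}_3$ by brute force; the enumeration strategy below is mine:

```python
from itertools import product

p = 3  # work over the field F_3

def det(m):
    a, b, c, d = m
    return (a * d - b * c) % p

classes = set()
for m in product(range(p), repeat=4):
    if det(m) != 0:
        # identify matrices that differ by a nonzero scalar
        orbit = {tuple((s * x) % p for x in m) for s in range(1, p)}
        classes.add(min(orbit))

print(len(classes))  # 24 = |GL_2(F_3)| / |F_3^*| = 48 / 2
```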
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proposition: $\sqrt{x + \sqrt{x + \sqrt{x + ...}}} = \frac{1 + \sqrt{1 + 4x}}{2}$. Proposition: $$\sqrt{x + \sqrt{x + \sqrt{x + ...}}} = \frac{1 + \sqrt{1 + 4x}}{2}$$
I believe that this is true, and, using Desmos Graphing Calculator, it seems to be true.
I will add how I derived the formula in a moment, if you would like.
Working
I will be honest; all that I did was use the Desmos Graphing Calculator, and let $y = \sqrt{x + \sqrt{x + \sqrt{x + ...}}},$ let $x = 1, 2, 3, ...$, looked at the point at which the two graphs meet, and searched for the number in Google.
It turned up an interesting website, which you may access here, which seemed to show a pattern.
I used this pattern to derive the formula that I stated earlier.
|
First, I would like to mention that such an expression is not automatically well-defined; you can't just append a "$\ldots$" and expect the resulting quantity to be well-defined. For example, consider:
$$S=1+2+4+8+16+\ldots$$
Then,
$$2S=2+4+8+16+\ldots$$
so:
$$2S+1=S$$
and you get $S=-1$, which is clearly absurd. The issue here is that $S$ is not well defined; you have to define $S$ as the limit as $n \rightarrow +\infty$ of $1+2+\ldots+2^n$, and one can show that this quantity is $+\infty$. Similarly, you must instead ask for the limit of the sequence $a_n$, where $a_0=\sqrt{x}$ and $a_{n+1}=\sqrt{x+a_n}$. Now one can show that this sequence is increasing and is upper bounded, so it must converge (for $x>0$). Only once you have shown that it has converged can you use Nemo's approach (the comments about the quadratic formula's two solutions are easily resolved since the limit has to be positive).
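A numeric sketch of the sequence $a_{n+1}=\sqrt{x+a_n}$ converging to the claimed closed form (the iteration count is an arbitrary choice):

```python
import math

def nested_sqrt(x, iterations=60):
    """Iterate a_{n+1} = sqrt(x + a_n) starting from a_0 = sqrt(x)."""
    a = math.sqrt(x)
    for _ in range(iterations):
        a = math.sqrt(x + a)
    return a

for x in (1.0, 2.0, 5.0, 10.0):
    closed_form = (1 + math.sqrt(1 + 4 * x)) / 2
    assert abs(nested_sqrt(x) - closed_form) < 1e-12
```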
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Is the inequality $| \sqrt[3]{x^2} - \sqrt[3]{y^2} | \le \sqrt[3]{|x -y|^2}$ true? I'm having some trouble deciding whether this inequality is true or not...
$| \sqrt[3]{x^2} - \sqrt[3]{y^2} | \le \sqrt[3]{|x -y|^2}$ for $x, y \in \mathbb{R}.$
|
$|x - y|^2 = (x - y)^2 $
Let $\sqrt[3]{x} = a , \sqrt[3]{y} = b$
I will study this
$$| a^2 - b^2 | \leq \sqrt[3]{|a^3-b^3|^2} $$
L.H.S
$$|a^2 - b^2|^3 = |a-b|^3 \cdot |a+b|^3= \color{red}{|a-b|^2}\cdot |a^2-b^2|\cdot |a^2 +2ab+b^2|$$
R.H.S
$$|a^3 - b^3|^2 = \color{red}{|a-b|^2} \cdot |a^2 +ab + b^2|^2 $$
So our problem reduced into studying if
$$|a^2-b^2|\cdot |a^2 +2ab+b^2| \leq |a^2 +ab + b^2|^2$$
If $a,b >0$. Then L.H.S
$$\color{red}{a^4 +2a^3b} - 2ab^3 -b^4$$
R.H.S
$$\color{red}{a^4 +2a^3b} + 3a^2b^2 +2ab^3 + b^4$$ which is clearly larger than the L.H.S.
Hope it will help you.
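A random spot-check of the original inequality (writing $\sqrt[3]{x^2}=|x|^{2/3}$ for real $x$; the sample range is arbitrary):

```python
import math, random

# Numeric spot-check of |cbrt(x^2) - cbrt(y^2)| <= cbrt(|x - y|^2).
random.seed(0)
for _ in range(100_000):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    lhs = abs(abs(x) ** (2 / 3) - abs(y) ** (2 / 3))
    rhs = abs(x - y) ** (2 / 3)
    assert lhs <= rhs + 1e-12
```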
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Question about probability distributions I've recently came across this question:
You are trying to hitch-hike. Cars pass at random intervals, at an average rate of 1
per minute. The probability of a car giving you a lift is 1%. What is the probability
that you will still be waiting after one hour?
My first thought was to use the Binomial distribution. After one hour an average of 60 cars will pass, so the probability of all 60 cars not giving a lift is $\mathcal{P}_{binom}(60;0.99,60)=0.99^{60}\approx 0.547$.
To check this I then retried the calculation using the Poisson distribution, If cars pass at a rate of 1 per minute then $\lambda=0.01\times60=0.6$, so the probability of not getting a lift is $\mathcal{P}_{pois}(0,\lambda=0.6)=e^{-\lambda}=0.549$.
Both answers are very close to each other, but why aren't they exactly equal?
|
Because they model two different things.
A Binomial Distribution, $\mathcal{Bin}(n, p)$, models the count of successes in $n$ Bernoulli trials, each trial having independent and identical probability of success $p$. The count of such successes can range from $0$ to $n$.
A Poisson Distribution, $\mathcal{Pois}(\lambda, \Delta t)$, models the count of events occurring within a fixed interval $\Delta t$, when events occur independently of the time since the prior event, at a constant rate $\lambda$ per unit interval. The count of such events can range from $0$ upward with no upper bound.
Only for large $n$ (with $np$ held fixed) do the two distributions approach the same 'shape'.
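A small numeric comparison of the two models from the question (the second pair of numbers illustrates the large-$n$ limit with $np$ fixed):

```python
import math

# Compare P(no success): Binomial(n, p) with n*p = 0.6 vs Poisson(0.6).
lam = 0.6
poisson_zero = math.exp(-lam)          # 0.5488...
binom_zero_60 = (1 - 0.01) ** 60       # 0.5472...
assert abs(binom_zero_60 - poisson_zero) < 0.01

# With more, rarer cars at the same overall rate, binomial approaches Poisson:
binom_zero_6000 = (1 - 0.0001) ** 6000
assert abs(binom_zero_6000 - poisson_zero) < 1e-4
```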
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Are there an infinite number of open balls in an open set in a metric space? Let's start off by recalling the definition of an open set in a metric space:
A set $A$ in a metric space $(X,d)$ is open if for each point $x\in A$ there is a number $r\gt0$ such that $B_r(x)\subset A$
Where $B_r(x)$ denotes the open ball of radius $r$ at a point $x$,
$$B_r(x)=\{y\in X:d(x,y)\lt r\}$$
Supposing that our space is some form of the reals, $\mathbb R^n$, does this not mean that there are an infinite number of open balls, and hence an infinite number of points, within A?
This is my reasoning so far: take any point $x\in A$, and say that for some $r$ we have some $B_r(x)$ centered at $x$. Then, surely, there would be some point $x_1\in B_r(x)$ such that $d(x,x_1)\lt r$. But then, must not $x_1$ be in $A?$ And so would there not also exist some other open ball, $B_{r\,'}(x_1)$ with radius $r'$ centered at $x_1$, and then so on and so forth for the other points within that radius?
|
In any metric space, there are an infinite number of ways to write down balls with a given center. But some of the balls might actually be the same. For instance, in the "discrete metric" $d(x,y)=0$ if $x=y$ and $1$ otherwise, all balls $B_r(x)$ for $r \leq 1$ are the same (they are just $\{ x \}$) while all balls $B_r(x)$ for $r>1$ are also the same (they are the whole space). In particular, if we put the discrete metric on a finite set, this gives a counterexample to your claim.
In $\mathbb{R}^n$, balls of different radii are distinct. But this might not be true in a subset of $\mathbb{R}^n$. It certainly isn't true in a discrete subset of $\mathbb{R}^n$. But it also fails in other subsets of $\mathbb{R}^n$. For instance it fails for bounded sets (since all balls whose radius is larger than the diameter are the same). It would also fail for a lot of sets with a bounded connected component, such as $\{ x \in \mathbb{R}^n : \| x \| \leq 1 \text{ or } \| x \| \geq 2 \}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Proving by induction $\sum\limits_{k=1}^{n}kq^{k-1} = \frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}}$ The context is as follows: I am asking this question because I would like feedback; I am a beginner to mathematical proofs.
We wish to show $\sum\limits_{k=1}^{n}kq^{k-1} = \frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}}$ for arbitrary $q>0$, $q\neq 1$. My attempt:
Let us first consider the trivial base case $n=1$: the series is clearly equal to one, and upon simplifying the right side using basic algebra, we see that the fraction is also equal to one. The base case has been shown.
Now, we must show the inductive step, meaning that if the equality holds for some $n$ it necessarily holds for $n+1$. Let $r=n+1$. We add the next term of the series, $r\cdot q^{r-1} = (n+1)q^{n}$, to both sides of the equation. Thus,
$\frac{1-(n+1)q^{n} + nq^{n+1}}{(1-q)^{2}} + (n+1)q^{n}$
$=$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (1-q)^2(n+1)(q^n)}{(1-q)^{2}}$
$=$ $\frac{1-(n+1)q^{n} + nq^{n+1} + (n+1)(q^n - 2q^{n+1} + q^{n+2})}{(1-q)^{2}}$
$=$ $\frac{1 + nq^{n+1} + (n+1)(q^{n+2} - 2q^{n+1})}{(1-q)^{2}}$
$= \frac{1 - (n+2)q^{n+1} + (n+1)(q^{n+2})}{(1-q)^{2}}$
Substituting $r=n+1$ the final expression can be simplified to
$\frac{1 - (r+1)q^{r} + (r)(q^{r+1})}{(1-q)^{2}}$
Therefore it's clear that if the original equality is true for some number $n$, it is also true for the number $r=n+1$ and this process can be repeated ad infinitum. The "first" case was the base case and the rest follows logically.
|
The finite geometric series is given by
$$
G(q,n)=\sum_{k=0}^nq^k=\frac{1-q^{n+1}}{1-q}
$$
for constant $q$ with $q \neq 1$ (the infinite series requires $|q|<1$, but this finite identity needs only $q\neq1$).
Now $\frac{d}{dq}G(q,n)$ is the series you are looking for...
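The identity itself can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

def lhs(q, n):
    return sum(k * q ** (k - 1) for k in range(1, n + 1))

def rhs(q, n):
    return (1 - (n + 1) * q ** n + n * q ** (n + 1)) / (1 - q) ** 2

# exact check for several n and rational q != 1 (including q > 1 and q < 0)
for n in range(1, 20):
    for q in (Fraction(1, 2), Fraction(3, 1), Fraction(-2, 5)):
        assert lhs(q, n) == rhs(q, n)
```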
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1376981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Choose $\rho$ such that $\rho$-norm minimizes the matrix condition number I'm solving questions from am exam that I failed miserably, so I would love it if someone can take a look at my proof and make sure I'm not making any gross mistakes.
Question
Let $A$ a symmetric matrix. Which $\rho$-norm minimizes $A$'s condition number: $\kappa(A,\rho)$?
Edit
Well, my solution was clearly wrong.
I would love any suggestion or direction for solution you might have.
Thanks! :)
|
$k(A,2)=\|A\|_2\|A^{-1}\|_2= \max | \lambda_A|\cdot\max|\lambda_{A^{-1}}|\leq\|A\|\,\|A^{-1}\|=k(A,\|\cdot\|)$ for each operator norm $\|\cdot\|$.
That $\max|\lambda_A|\leq \|A\|$ follows from:
Let $x$ be an eigenvector of $A$ for the eigenvalue $\lambda$, where $\max\limits_{i}|\lambda_i|=|\lambda|$. Then for arbitrary matrix norm $\|.\|$, subordinate to the vector norm $\|.\|$, we have $\|A\|=\max\limits_{y\neq 0} \frac{\|Ay\|}{\|y\|}\ge \frac{\|Ax\|}{\|x\|}=\frac{\|\lambda x\|}{\|x\|}=|\lambda|$
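A pure-Python spot-check on random symmetric $2\times2$ matrices, using that $\kappa(A,2)=\max|\lambda|/\min|\lambda|$ for symmetric $A$ (helper names and the sampling range are mine):

```python
import math, random

def norm1(m):  # operator 1-norm: max absolute column sum
    (a, b), (c, d) = m
    return max(abs(a) + abs(c), abs(b) + abs(d))

def inverse(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def eigvals_sym(m):  # eigenvalues of a symmetric 2x2 matrix
    (a, b), (_, d) = m
    t, disc = a + d, math.sqrt((a - d) ** 2 + 4 * b * b)
    return (t + disc) / 2, (t - disc) / 2

random.seed(1)
for _ in range(1000):
    a, b, d = (random.uniform(-5, 5) for _ in range(3))
    A = ((a, b), (b, d))
    lam = eigvals_sym(A)
    if min(abs(v) for v in lam) < 1e-6:
        continue  # skip near-singular samples
    k2 = max(abs(v) for v in lam) / min(abs(v) for v in lam)
    k1 = norm1(A) * norm1(inverse(A))
    assert k2 <= k1 * (1 + 1e-9)   # kappa_2 never exceeds kappa_1
```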
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A question about bijective functions Suppose that $f: [1,3]$$\rightarrow$$[0,8]$ is continuous. Show that there is some $x$$\in$$[1,3]$ such that $f(x)+4=4x$.
I need some advice on how to get started. Do i need to use IVT for this problem? Do I need to show that f is increasing while x is increasing?
|
Let $h(x) = f(x) - 4x + 4$. $h$ is continuous on $[1,3]$. Moreover:
$h(1) = f(1) \ge 0$ and $h(3) = f(3) - 8 \le 0$, since $f$ takes values in $[0,8]$.
Thus, by the IVT, there is some $x\in[1,3]$ with $h(x)=0$, i.e. $f(x)+4=4x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why do remainders show cyclic pattern? Let us find the remainders of $\dfrac{6^n}{7}$,
Remainder of $6^0/7 = 1$
Remainder of $6/7 = 6$
Remainder of $36/7 = 1$
Remainder of $216/7 = 6$
Remainder of $1296/7 = 1$
This pattern of $1,6,1,6,\ldots$ keeps on repeating. Why is it so? I'm asking in general: why does the remainder of $a^n/b$ keep repeating as we increase $n$?
P.S: This is a follow up question of my previous question.
|
Looking at remainders after division by 7 is called arithmetic modulo 7.
You are regarding powers of 6, modulo 7. But $6$ is $-1$, modulo $7$. This is written:
$$6 \equiv -1 \pmod 7$$
But the powers of $-1$, that is the numbers $(-1)^n$, simply alternate: $1,-1,1,-1,1,\ldots$.
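The cycling is easy to observe mechanically; a small sketch (when $\gcd(a,b)\neq1$ there can be a short pre-period before the cycle proper):

```python
def remainder_cycle(a, b):
    """Successive remainders of a^n mod b for n = 0, 1, 2, ... until one repeats."""
    seen, r = [], 1 % b
    while r not in seen:
        seen.append(r)
        r = (r * a) % b
    return seen

assert remainder_cycle(6, 7) == [1, 6]           # the 1, 6, 1, 6, ... pattern
assert remainder_cycle(2, 7) == [1, 2, 4]
assert remainder_cycle(3, 7) == [1, 3, 2, 6, 4, 5]
```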
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 4
}
|
Sum of cosines of complementary/suplementary angles Why are $(\cos(2^{\circ})+\cos(178^{\circ})), (\cos(4^{\circ})+\cos(176^{\circ})),.., (\cos(44^{\circ})+\cos(46^{\circ}))$ all equal zero?
Could you prove it by some identity?
|
Since $\cos(180^\circ - x) = -\cos(x)$, we get
$$\cos (2^\circ) +\cos(178^\circ) =\cos(2^\circ)+\cos(180^\circ-2^\circ)=0,\\ \cos (4^\circ) +\cos(176^\circ) =\cos(4^\circ)+\cos(180^\circ-4^\circ)=0,\\ \ldots$$
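A numeric confirmation of all the listed pairs:

```python
import math

# cos(x deg) + cos((180 - x) deg) vanishes because cos(180 deg - x) = -cos(x)
for deg in range(2, 46, 2):
    total = math.cos(math.radians(deg)) + math.cos(math.radians(180 - deg))
    assert abs(total) < 1e-12
```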
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 0
}
|
Poisson distribution and price reduction A store owner has certain existence of an item and he decides to use the following scheme to sell it:
The item has a price of \$100. The owner will reduce the price in half for every customer that buys the item on a given day. That way, the first customer will pay \$50, the next one will pay $25, and so on...
Suppose that the number of customers who buy the item during a day follow a Poisson distribution, with $\mu = 2$ Find the expected price of the item at the end of a day.
I defined a "final price" function like this:
$$g(x) = \frac{100}{2^x}$$
where $x$ is the random variable of the Poisson distribution:
$$f(x)=\frac{\lambda^xe^{-\lambda}}{x!}$$
Since $\mu = \lambda = 2$, I can rewrite $f(x)$ like this:
$$f(x) = \frac{2^x}{e^2x!}$$
And there's where I'm stuck. I see two possible roads:
|
A: Is solving for $x$ the way to go? If so, how can I deal with $x!$ ?
B: Should I go with $E(g(x))$?
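Road B is the standard one: $E[g(X)]=\sum_x g(x)f(x)$, and the factorial causes no trouble because the series converges very fast. The probability generating function $E[t^X]=e^{\lambda(t-1)}$ with $t=\tfrac12$ even gives a closed form, $100e^{-1}\approx 36.79$. A sketch (the truncation point is my choice):

```python
import math

lam = 2.0
# Road B: E[g(X)] = sum over x of g(x) * P(X = x)
expected = sum((100 / 2 ** x) * math.exp(-lam) * lam ** x / math.factorial(x)
               for x in range(100))

# closed form via the probability generating function E[t^X] = exp(lam*(t-1))
closed = 100 * math.exp(lam * (0.5 - 1))   # = 100 / e

assert abs(expected - closed) < 1e-9
assert abs(closed - 100 / math.e) < 1e-12
```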
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Solving Equations system question We get this equation and need to solve
Solve in $\mathbb{Z} $ the given equation
$ y(y -x )(x+1) = 12\ $
|
If you factor $12$ as $3 \times 2 \times 2$ and then put $y = 3$, $y - x = 2$, $x + 1 = 2$, you get the integer solution $x = 1$ and $y = 3$. This is by using common sense. Another one can be obtained by writing $12$ as $-2 \times -3 \times 2$, in which case $x = 1$ and $y = -2$ is another solution.
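A brute-force search confirms these; since $x+1$ and $y$ must both divide $12$, a window of $|x|,|y|\le 20$ is guaranteed to contain every integer solution:

```python
# Brute-force search for integer solutions of y*(y - x)*(x + 1) = 12.
# x + 1 | 12 forces -13 <= x <= 11, and y | 12 forces |y| <= 12,
# so the window below is exhaustive.
solutions = [(x, y)
             for x in range(-20, 21)
             for y in range(-20, 21)
             if y * (y - x) * (x + 1) == 12]

print(solutions)  # [(-4, -2), (1, -2), (1, 3)]
```

Besides $(1,3)$ and $(1,-2)$, the search also surfaces $(x,y)=(-4,-2)$, coming from $12=(-2)\cdot2\cdot(-3)$.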
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is there a name for the closed form of $\sum_{n=0}^{\infty} \frac{1}{1+ a^n}$? I hope this is not a duplicate question. If we modify the well known geometric series, with $a>1$, to
$$
\sum_{n=0}^{\infty} \frac{1}{1+a^n}
$$
is there a closed form with a name?
I suspect strongly that the answer is not in terms of elementary functions but will be some special function defined as such (as e.g. of the likes of the Hurwitz zeta function)
|
We can write
$$\frac{1}{1+a^n}=\left[\frac{\partial}{\partial t}\ln\left(1+ta^{-n}\right)\right]_{t=1},$$
which implies that
$$\sum_{n=0}^{\infty}\frac{1}{1+a^n}=\left[\frac{\partial}{\partial t}\ln\prod_{n=0}^{\infty}\left(1+ta^{-n}\right)\right]_{t=1}=
\left[\frac{\partial}{\partial t}\,\ln\left(-t;a^{-1}\right)_{\infty}\right]_{t=1},$$
where $(z,q)_{\infty}$ denotes the q-Pochhammer symbol.
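The identity can be sanity-checked numerically by comparing the series with a central finite difference of $\ln\prod_n(1+ta^{-n})$ at $t=1$ (the truncation and step size are my choices):

```python
import math

def series(a, terms=200):
    return sum(1 / (1 + a ** n) for n in range(terms))

def log_product(t, a, terms=200):
    # truncated log of the infinite product defining the q-Pochhammer symbol
    return sum(math.log(1 + t * a ** (-n)) for n in range(terms))

for a in (2.0, 3.0, 10.0):
    h = 1e-6
    derivative = (log_product(1 + h, a) - log_product(1 - h, a)) / (2 * h)
    assert abs(series(a) - derivative) < 1e-6
```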
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Inverse of partitioned matrices A matrix of the form
$$A=\begin{bmatrix} A_{11} & A_{12}\\ 0 & A_{22} \end{bmatrix}$$
is said to be block upper triangular. Assume that $A_{11}$ is $p \times p$, $A_{22}$ is $q \times q$ and $A$ is invertible. Find a formula for $A^{-1}$.
Could anyone help on this?
|
Hint:
You may consider the matrix $$B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix},$$
such that:
$$\begin{bmatrix} A_{11} & A_{12}\\ 0 & A_{22} \end{bmatrix} \cdot \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} I_{p\times p} & 0 \\ 0 & I_{q\times q} \end{bmatrix},$$
where $B_{11}$ is a $p\times p$ matrix, $B_{12}$ is a $p\times q$ matrix, $B_{21}$ is a $q \times p$ matrix and $B_{22}$ is a $q \times q$ matrix.
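Working the hint through (solving the four block equations) leads to $B_{21}=0$, $B_{11}=A_{11}^{-1}$, $B_{22}=A_{22}^{-1}$ and $B_{12}=-A_{11}^{-1}A_{12}A_{22}^{-1}$; an exact check on a small concrete example (the helper routines are mine):

```python
from fractions import Fraction as F

def matmul(A, B):
    n, m, k = len(A), len(B[0]), len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)] for i in range(n)]

def inv2(M):  # exact inverse of a 2x2 matrix over the rationals
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def block(A11, A12, A21, A22):  # assemble a 4x4 matrix from 2x2 blocks
    top = [r1 + r2 for r1, r2 in zip(A11, A12)]
    bot = [r1 + r2 for r1, r2 in zip(A21, A22)]
    return top + bot

A11 = [[F(2), F(1)], [F(1), F(1)]]
A12 = [[F(1), F(0)], [F(2), F(1)]]
A22 = [[F(1), F(1)], [F(0), F(1)]]
Z = [[F(0), F(0)], [F(0), F(0)]]

A = block(A11, A12, Z, A22)
A11i, A22i = inv2(A11), inv2(A22)
B12 = [[-x for x in row] for row in matmul(matmul(A11i, A12), A22i)]
B = block(A11i, B12, Z, A22i)

I4 = [[F(int(i == j)) for j in range(4)] for i in range(4)]
assert matmul(A, B) == I4 and matmul(B, A) == I4
```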
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Asymptotic Estimates for a Strange Sequence Let $a_0=1$. For each positive integer $i$, let $a_i=a_{i-1}+b_i$, where $b_i$ is the smallest element of the set $\{a_0,a_1,\ldots,a_{i-1}\}$ that is at least $i$. The sequence $(a_i)_{i\geq0}=1,2,4,8,12,20,28,36\ldots$ is A118029 in Sloane's Online Encyclopedia. It is easy to show that $a_i$ is strictly increasing. My question is about whether there are any ways to deduce asymptotic estimates for $a_i$. Alternatively, we could define a sequence $(c_i)_{i\geq0}$ by letting $c_i$ be the largest integer $m$ such that $a_m\leq i$. For example, $c_{13}=4$ because $a_4=12\leq 13<a_5=20$. Could we find asymptotic estimates for $c_i$?
|
If $b_i$ were exactly $i$ (it is defined to be at least $i$, so this gives a lower bound) then you would have the simple recurrence:
$$
a_i - a_{i-1} = i \qquad a_0 = 1
$$
so that:
$$
a_i = 1 + \frac{i (i + 1)}{2}
$$
This gives at least quadratic growth for $a_i$, and correspondingly $c_i = O(\sqrt{i})$.
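Generating the sequence directly confirms both the OEIS data and the quadratic lower bound:

```python
def sequence(count):
    a = [1]
    for i in range(1, count):
        b_i = min(v for v in a if v >= i)   # smallest earlier term that is >= i
        a.append(a[-1] + b_i)
    return a

a = sequence(40)
assert a[:8] == [1, 2, 4, 8, 12, 20, 28, 36]   # matches A118029
# since b_i >= i, the sequence dominates 1 + i*(i+1)/2
for i in range(1, 40):
    assert a[i] >= 1 + i * (i + 1) // 2
```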
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate $\iint_{R}(x^2+y^2)dxdy$ $$\iint_{R}(x^2+y^2)dxdy$$
$$0\leq r\leq 2 \,\, ,\frac{\pi}{4}\leq \theta\leq\frac{3\pi}{4}$$
My attempt :
Jacobian=r
$$=\iint_{R}(x^2+y^2)dxdy$$
$$x:=r\cos \theta \,\,\,,y:=r\sin \theta$$
$$\sqrt{x^2+y^2}=r$$
$$\int_{\theta=\pi/4}^{\theta=3\pi/4}\bigg[\int_{r=0}^{r=2}\bigg(r^2\bigg)dr\bigg]d\theta$$
$$.....=\boxed{\frac{4\pi}{3}}$$
Is it correct?
|
Switching to polar coordinates, the Jacobian is given by $ |J|$ where $$ J = \dfrac{\partial(x,y)}{\partial(r,\theta)} = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial y}{\partial r} \\ \dfrac{\partial x}{\partial \theta} & \dfrac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{vmatrix} = r$$ Therefore, your double integral is given by $$ \begin{aligned} \iint_{R} \left( x^2 + y^2 \right) \text{ d}x \text{ d}y & = \int_{\pi/4}^{3\pi/4} \int_{0}^{2} \left( (r\cos\theta)^2 + (r\sin\theta)^2 \right) |J| \text{ d}r \text{ d}\theta \\ & = \int_{\pi/4}^{3\pi/4} \int_{0}^{2} r^2 |r| \text{ d}r \text{ d}\theta \end{aligned}$$ and since $r \in \left[0,2\right]$, $|r| = +r$ so the integrand is $r^3$. I leave the rest to you.
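A numeric check that the corrected integrand $r^3$ yields $2\pi$ (the question's $\frac{4\pi}{3}$ comes from forgetting the Jacobian):

```python
import math

# The polar integrand is r^2 (from x^2 + y^2) times the Jacobian r, i.e. r^3.
# Midpoint rule for the radial integral, then multiply by the theta-interval pi/2.
n = 2000
dr = 2.0 / n
radial = sum(((i + 0.5) * dr) ** 3 * dr for i in range(n))
value = radial * (math.pi / 2)

assert abs(radial - 4.0) < 1e-5          # integral of r^3 from 0 to 2 is 4
assert abs(value - 2 * math.pi) < 1e-4   # the double integral is 2*pi, not 4*pi/3
```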
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Generalization of Cantor Pairing function to triples and n-tuples Is there a generalization for the Cantor Pairing function to (ordered) triples and ultimately to (ordered) n-tuples? It's however important that the there exists an inverse function: computing z from (w, x, y) and also computing w, x and y from z. In other words:
*
*project(w, x, y) = z
*unproject(z) = (w, x, y)
Thinking about it in terms of a three-/n-dimensional coordinate system it should be possible to generalize from ordered pairs to at least ordered triples and most probably also to ordered n-tuples. Is anyone aware of any resources (papers, books, websites...) where such a function is described?
|
Ok, I think I got it. Your idea is to create recursive functions for both pair and unpair and simply "assemble" the results instead of computing them with an algebraic formula. Of course this works due to the nature of the pairing function. If I have time I will add the code in here.
Just one more question: Assuming I actually knew the pairing function for, let's say, triples (or n-tuples). In terms of processing speed on a regular computer, do you think it would be faster than a recursive solution? I assume it would be, because recursion requires to repeatedly create an internal stack for each recursive function call, whereas a mathematical function would just be computed once. So, if it's really about getting the most out of your processor, it would be worth trying to find the algebraic solution to this problem.
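For completeness, the recursive/nesting approach in runnable form: pair twice to project a triple, unpair twice to invert (the integer-sqrt inverse is the standard one for the Cantor pair):

```python
import math

def pair(a, b):
    """Cantor pairing of two non-negative integers."""
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z):
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # diagonal index
    b = z - w * (w + 1) // 2
    return w - b, b

def pair3(w, x, y):        # project(w, x, y)
    return pair(pair(w, x), y)

def unpair3(z):            # unproject(z)
    wx, y = unpair(z)
    w, x = unpair(wx)
    return w, x, y

for triple in [(0, 0, 0), (1, 2, 3), (10, 0, 7), (123, 456, 789)]:
    assert unpair3(pair3(*triple)) == triple
```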
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1377929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
}
|
How to prove $\frac{2^a+3}{2^a-9}$ is not a natural number How can I prove that
$$\frac{2^a+3}{2^a-9}$$
for $a \in \mathbb N$ is never a natural number?
|
$$1+\frac{12}{2^a-9}$$
which is a natural number only if $2^a-9$ divides $12$. The positive divisors of $12$ force $2^a$ to equal $10,11,12,13,15$ or $21$, and none of these is a power of $2$; among the negative divisors only $2^a-9=-1$, i.e. $a=3$, is possible, and that gives $1-12=-11$, not a natural number. So the expression is never a natural number.
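For small exponents this is easy to confirm exhaustively; for larger $a$ the fraction lies strictly between $1$ and $2$ (since $0<\frac{12}{2^a-9}<1$ once $2^a-9>12$), so it can never be an integer:

```python
# (2^a + 3) / (2^a - 9) is never a positive integer; for a <= 3 the value is
# negative, and for a >= 5 it lies strictly between 1 and 2.
for a in range(0, 60):
    num, den = 2 ** a + 3, 2 ** a - 9   # den is never 0 since 2^a != 9
    assert num % den != 0 or num // den <= 0
```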
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Solve the equation $4\sqrt{2-x^2}=-x^3-x^2+3x+3$ Solve the equation in $\Bbb R$:
$$4\sqrt{2-x^2}=-x^3-x^2+3x+3$$
Is there a unique solution $x=1$? I have trouble when I try to prove it.
I really appreciate if some one can help me. Thanks!
|
It can be seen that $4 \sqrt{2 - x^2} = (1+x)(3 - x^2)$ leads to the expanded form, after squaring both sides and combining terms,
$$ x^6 + 2 x^5 - 5 x^4 - 12 x^3 + 19 x^2 + 18 x - 23 = 0 .$$
It is readily identified that $x=1$ is a solution and can then be factored out leading to
$$ (x-1)( x^5 + 3 x^4 - 2 x^3 - 14 x^2 + 5x + 23) = 0.$$
The polynomial of order $5$ has one real root and $4$ complex roots. Since squaring both sides can introduce extraneous solutions, that real root (near $x\approx-1.41$) must still be checked against the original equation; it is extraneous, because there the right-hand side $(1+x)(3-x^2)$ is negative while the left-hand side is non-negative. Hence $x=1$ is the unique real solution.
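A numeric scan of the original equation over its natural domain $[-\sqrt2,\sqrt2]$ confirms that $x=1$ is the only real solution (grid size and bracketing interval are my choices):

```python
import math

def g(x):
    # original equation rearranged: 4*sqrt(2 - x^2) - (-x^3 - x^2 + 3x + 3)
    return 4 * math.sqrt(max(0.0, 2 - x * x)) + x ** 3 + x ** 2 - 3 * x - 3

# scan the domain for sign changes of g
lo, hi, n = -math.sqrt(2), math.sqrt(2), 100_000
xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if g(u) * g(v) < 0)
assert sign_changes == 1

# bisection confirms the single root is x = 1
a, b = 0.9, 1.1
for _ in range(60):
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
assert abs((a + b) / 2 - 1) < 1e-9
```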
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Curve fit minimizing the sum of the deviation I'm fitting a curve taking the smaller sum of deviations for each parameter tested, the smaller sum returns me the parameter that gives the best fit. Here is the algorithm for a test $f(x, parameter)$:
function fit(function:function, list:sample_y, list:x_values, list:parameter_values):
sums = []
for each parameter in parameter_values:
y = function(x_values, parameter)
deviations = sum(abs(sample_y - y))
sums.push(deviations)
end for
smaller_deviation = min(sums)
parameter_index = sums.index_of(smaller_deviation)
return parameter_values[parameter_index]
end function
This method is working. I can minimize the error, giving a nice band of parameters.
I'd like to know whether it is valid to fit data in this way and, if it is valid, whether this method has a name?
Obs.: I'm avoiding using the Levenberg-Marquardt and least-squares methods...
|
Translation:
There is some function $f : A \times B \to C$ where $A$, $B$ and $C$ are some sets of objects, where $C$ supports addition, subtraction, taking the absolute value and determining a minimal element.
*
*$x$ is a countable subset of $A$.
*$p$ is a countable subset of $B$, called parameters.
*$y$ is a countable subset of $C$, called samples.
For all $i$ that index $p$, the values
$$
D_i = \sum_j \left\lvert y_j - f(x_j, p_i) \right\rvert
$$
are calculated and from this the minimum
$$
d = \min_i D_i
$$
The result is
$$
\DeclareMathOperator*{\argmin}{arg\,min}
p_k
$$
with
$$
k = \argmin_i D_i
$$
Interpretation:
Let $n$ be the number of different $p$ values, then this leads to the calculation of $n$ vectors $\tilde{y}_i = (f(x_j, p_i))$ which each have a distance $D_i$ to the sample vector $y$ in the 1-norm:
$$
D_i = \lVert y - \tilde{y}_i \rVert_1
$$
The algorithm returns the parameter $p_k$ associated to a closest vector $\tilde{y}_k$ to $y$.
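The question's pseudocode, written as runnable Python; minimising the 1-norm of the residuals over a grid of candidates is usually called least-absolute-deviations fitting combined with a grid search (the toy model below is mine):

```python
def fit(f, sample_y, x_values, parameter_values):
    """Grid search: return the parameter minimising the 1-norm deviation."""
    def deviation(p):
        return sum(abs(s - f(x, p)) for x, s in zip(x_values, sample_y))
    return min(parameter_values, key=deviation)

# toy model f(x, p) = p * x with true parameter 2.5 and one gross outlier
def f(x, p):
    return p * x

xs = [0, 1, 2, 3, 4]
ys = [2.5 * x for x in xs]
ys[2] += 40                               # the outlier barely moves the L1 fit
grid = [i / 10 for i in range(0, 51)]     # candidate parameters 0.0 .. 5.0
assert fit(f, ys, xs, grid) == 2.5
```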
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the sum of all primes smaller than a big number I need to write a program that calculates the sum of all primes smaller than a given number $N$
($10^{10} \leq N \leq 10^{14} $).
Obviously, the program should run in a reasonable time, so $O(N)$ is not good enough.
I think I should find the sum of all the composite numbers smaller than $N$ and subtract it from $1+2+...+N$, but I'm trying that for a long time with no progress.
|
You could try programming a sieve to mark all the composites, then add up the unmarked numbers. To do that, you'll need a list of primes up to $10^7$. In order to get that, you could program a sieve...
This method is obviously pretty memory-intensive, but it's certainly faster than prime-testing each integer from $10^{10}$ to $10^{14}$.
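One standard way to keep the memory bounded is a segmented sieve: sieve the base primes up to $\sqrt N$, then sweep $[2,N)$ in fixed-size windows. A sketch (the segment size is arbitrary):

```python
import math

def sum_primes_below(n, segment=1 << 16):
    """Sum of all primes p < n via a segmented sieve: O(sqrt(n)) memory for
    the base primes plus one fixed-size window, instead of one flag per
    number up to n."""
    if n <= 2:
        return 0
    root = math.isqrt(n - 1)
    # plain sieve for the base primes up to sqrt(n)
    is_prime = bytearray([1]) * (root + 1)
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(root) + 1):
        if is_prime[i]:
            is_prime[i * i:: i] = bytearray(len(is_prime[i * i:: i]))
    base = [i for i in range(2, root + 1) if is_prime[i]]

    total = 0
    for lo in range(2, n, segment):
        hi = min(lo + segment, n)
        mark = bytearray([1]) * (hi - lo)
        for p in base:
            start = max(p * p, (lo + p - 1) // p * p)  # first multiple >= lo
            mark[start - lo:: p] = bytearray(len(mark[start - lo:: p]))
        total += sum(lo + i for i, m in enumerate(mark) if m)
    return total

assert sum_primes_below(10) == 17            # 2 + 3 + 5 + 7
assert sum_primes_below(100) == 1060
assert sum_primes_below(2_000_000) == 142913828922   # well-known check value
```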
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Showing that a functions derivative is not bounded on $\mathbb{R}$ Suppose that $f$ is differentiable but not uniformly continuous on $\mathbb{R}$. Prove that $|f'|$ is not bounded on $\mathbb{R}$.
So I know that to show that $|f'|$ isn't bounded you would have to show that for any constant B, $|f'(a)|$ $>$ $B$, for some $a$ $\in$ $\mathbb{R}$. How would I go about doing this?
|
Hint: $A\Rightarrow B \iff \neg B \Rightarrow \neg A$.
Assume that for $f:\mathbb R\rightarrow\mathbb R$ differentiable and $|f'|$ bounded it follows that $f$ is uniformly continuous (to be more precise, it is Lipschitz continuous).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the sum of series $\sum_{n=2}^\infty\frac{1}{n(n+1)^2(n+2)}$ How to find the sum of series $\sum_{n=2}^\infty\frac{1}{n(n+1)^2(n+2)}$ in the formal way? Numerically its value is $\approx 0.0217326$ and the partial sum formula contains the first derivative of the gamma function (by WolframAlpha).
|
Partial fractions: $$\frac{1}{n(n+1)^2(n+2)} = \frac{1}{2} \left( \frac{1}{n} - \frac{1}{n+2} \right) - \frac{1}{(n+1)^2},$$ the first part being telescoping, and the last part being related to $\zeta(2)$.
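Carrying this through: the telescoping part sums to $\frac12\left(\frac12+\frac13\right)=\frac5{12}$ and the remaining part to $\frac{\pi^2}{6}-1-\frac14$, giving the closed form $\frac53-\frac{\pi^2}{6}\approx0.0217326$. A numeric check:

```python
import math

partial = sum(1 / (n * (n + 1) ** 2 * (n + 2)) for n in range(2, 200_000))
closed = 5 / 3 - math.pi ** 2 / 6   # = 5/12 - (pi^2/6 - 1 - 1/4)

assert abs(partial - closed) < 1e-10
assert abs(closed - 0.0217326) < 1e-6
```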
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Modelling interest with differential equations (Interpretation) I am having trouble interpreting the meaning of this differential equation model for interest on an account. The problem is as follows:
Assume you have a bank account that grows at an annual interest rate of r and every year you withdraw a fixed amount from the account (denoted w). Assuming continuous compounding interest and continuous withdrawal, the described account follows the differential equation:
${\frac{dP}{dt} = rP - w}$
Where P(t) is the amount of money in the account at time t.
I am confused as to why we subtract the entire amount ${w}$ in the equation? My interpretation (which is obviously incorrect) is that ${\frac{dP}{dt}}$ is the change in the amount of money in the account with respect to time for any given time ${t}$. If this is so, wouldn't the above equation mean that at every instant ${t}$ we are adding ${rP}$ to the account and subtracting the entire annual deduction ${w}$ so that at the end of an entire year we've deducted more than ${w}$ from the account?
|
$w$ is the (continuous, constant) rate of withdrawal, and $rP$ is the (continuous, proportional) rate of interest accrual.
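A small simulation makes the interpretation concrete: integrating $dP/dt=rP-w$ over one year removes exactly $w$ in total, not more (the parameter values are arbitrary):

```python
import math

r, w, P0 = 0.05, 3.0, 50.0

def exact(t):
    # closed-form solution of dP/dt = r*P - w
    return w / r + (P0 - w / r) * math.exp(r * t)

# forward Euler over one year, tracking the total amount withdrawn
P, dt, withdrawn = P0, 1e-4, 0.0
for _ in range(int(1 / dt)):
    P += (r * P - w) * dt
    withdrawn += w * dt

assert abs(P - exact(1.0)) < 1e-2     # Euler matches the closed form
assert abs(withdrawn - w) < 1e-9      # exactly w is withdrawn per year
```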
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
How to calculate the integral? How to calculate the following integral?
$$\int_0^1\frac{\ln x}{x^2-x-1}\mathrm{d}x=\frac{\pi^2}{5\sqrt{5}}$$
|
You can start writing $$x^2-x-1=(x-r_1)(x-r_2)$$ where $r_{1,2}=\frac{1}{2} \left(1\pm\sqrt{5}\right)$ and use partial fraction decomposition. So,$$\frac 1{x^2-x-1}=\frac{1}{{r_2}-{r_1}} \Big(\frac{1}{x-r_2}-\frac{1}{x-r_1}\Big)$$ and use $$\int \frac{\log(x)}{x+a}\,dx=\text{Li}_2\left(-\frac{x}{a}\right)+\log (x) \log \left(1+\frac{x}{a}\right)+C$$
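A direct numeric check of the stated value (midpoint rule; it never samples the integrable $\ln x$ singularity at $0$):

```python
import math

n = 400_000
total = 0.0
for k in range(n):
    x = (k + 0.5) / n                       # midpoints of [0, 1]
    total += math.log(x) / (x * x - x - 1) / n

# claimed closed form: pi^2 / (5 * sqrt(5)) ~ 0.882766
assert abs(total - math.pi ** 2 / (5 * math.sqrt(5))) < 1e-3
```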
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Compact support vs. vanishing at infinity? Consider the two sets
$$ C_0 = \{ f: \mathbb R \to \mathbb C \mid f \text{ is continuous and } \lim_{|x|\to \infty} f(x) = 0\}$$
$$ C_c = \{ f: \mathbb R \to \mathbb C \mid f \text{ is continuous and } \operatorname{supp}{(f)} \text{ is bounded}\}$$
Aren't these two sets the same? What am I missing?
|
Note that $C_c \subset C_0$, but $C_c \neq C_0$. For example, $f(x) = \dfrac{1}{x^2+1}$ belongs to $C_0$ but not $C_c$.
What you seem to be assuming is that $\lim_{|x|\to\infty}f(x) = 0$ implies that there is some $N > 0$ with $f(x) = 0$ for all $|x| > N$. This is not true, as the above example demonstrates. That is, a function can limit to zero at $\pm\infty$ without ever being zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
}
|
Continuity Must Hold in an Entire Open Set? Claim: If a function $\mathbb{R}^n \rightarrow \mathbb{R}^m$ is continuous at $\vec a \in \mathbb{R}^n$, it is continuous in some open ball around $\vec a$.
Is this claim false? In other words, is it possible for a function to be continuous at a single point $\vec a$ only, but not in the points around $\vec a$?
|
Yes, consider the function $f\colon \mathbb R \to \mathbb R$ given by $f(x)=x$ if $x\in \mathbb Q$ and $f(x)=-x$ otherwise; it is continuous at $0$ and at no other point. You can even improve this example to obtain a function (e.g. $x^2$ on the rationals, $0$ elsewhere) that is differentiable at a point but not continuous at any other point.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limit to infinity with natural logarithms $\lim_{x\to \infty } \left(\frac{\ln (2 x)}{\ln (x)}\right)^{\ln (x)} $ I found the following problem in my calculus book:
Solve:
$$\lim_{x\to \infty } \left(\frac{\ln (2 x)}{\ln (x)}\right)^{\ln (x)} $$
I tried to solve it using log rules and l'Hôpital's rule with no success, can someone give me any hints on how to go about this?
|
$$\lim_{x\to \infty } \Big(\frac{\ln(2x)}{\ln(x)}\Big)^{\ln(x)} $$
$$=\lim_{x\to \infty } \Big(\frac{\ln(x)+\ln 2}{\ln(x)}\Big)^{\ln(x)} $$ $$=\lim_{x\to \infty } \Big(1+\frac{\ln(2)}{\ln(x)}\Big)^{\ln(x)} $$ $$=\lim_{x\to \infty }\exp\Big(\ln(x)\,\ln\Big(1+\frac{\ln(2)}{\ln(x)}\Big)\Big) $$ $$=\lim_{x\to \infty }\exp\Big(\ln (x)\frac{\ln(2)}{\ln(x)}\Big) $$ $$=\exp\big(\ln 2\big)=e^{\ln 2}=2, $$ using $\ln(1+u)\sim u$ as $u\to 0$.
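A numeric sanity check of the limit:

```python
import math

def f(x):
    return (math.log(2 * x) / math.log(x)) ** math.log(x)

# the convergence is slow (error ~ (ln 2)^2 / ln x), but visible:
for x in (1e6, 1e12, 1e100):
    assert abs(f(x) - 2) < 0.2
assert abs(f(1e300) - 2) < 0.01
```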
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Computing the intersection of two arithmetic sequences $(a\mathbb{Z} + b) \cap (c \mathbb{Z} + d)$ I am getting stuck writing a general formula for the intersection of two arithmetic sequences.
$$ (a\mathbb{Z} + b) \cap (c \mathbb{Z} + d) = \begin{cases}
\varnothing & \text{if ???} \\
?\mathbb{Z} + ? & \text{otherwise}\end{cases} $$
For any two arithmetic sequences, I would know how to compute their (possibly null) intersection, but I don't know how to write a formula in general.
|
It amounts to solving the system of simultaneous congruences:
$$\begin{cases}
x\equiv b\mod a\\x\equiv d\mod c
\end{cases}$$
This system has a solution if and only if $b\equiv d \pmod{\gcd(a,c)}$, and the solution is unique modulo $\operatorname{lcm}(a,c)$.
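A small Python sketch of this recipe; the common residue is found by brute force over one period, which is fine for illustration (a real implementation would use the extended Euclidean algorithm):

```python
from math import gcd

def intersect(a, b, c, d):
    """Intersection of aZ+b and cZ+d: returns (m, r) meaning mZ+r, or None if empty."""
    g = gcd(a, c)
    if (b - d) % g != 0:
        return None            # x ≡ b (mod a), x ≡ d (mod c) is unsolvable
    m = a * c // g             # lcm(a, c)
    # brute-force the unique residue modulo lcm(a, c)
    for x in range(m):
        if x % a == b % a and x % c == d % c:
            return (m, x)

print(intersect(4, 1, 6, 3))   # → (12, 9): {…, 9, 21, 33, …}
print(intersect(4, 1, 6, 2))   # → None: 1 - 2 is not divisible by gcd(4, 6) = 2
```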
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1378976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Number of points of discontinuity Find the number of points where
$$f(\theta)=\int_{-1}^{1}\frac{\sin\theta dx}{1-2x\cos\theta +x^2}$$ is discontinuous where $\theta \in [0,2\pi]$
I am not able to find $f(\theta)$ in terms of $\theta$: I can take the $\sin\theta$ out of the numerator, but the $\cos\theta$ in the denominator is troublesome. Can someone suggest a good way to integrate it, or another way to solve the problem?
|
Let
$$ f(\theta)=\int_{-1}^{1}\frac{\sin\theta dx}{1-2x\cos\theta +x^2}. $$
If $\theta=0,\pi$ or $2\pi$, there is nothing to do since $f(\theta)=0$. Otherwise, using
$$ \int\frac{1}{(x-a)^2+b^2}dx=\frac{1}{b}\arctan\frac{x-a}{b}, $$
we have
\begin{eqnarray}
f(\theta)&=&\sin\theta\int_{-1}^{1}\frac{dx}{(x-\cos\theta)^2+\sin^2\theta}\\
&=&\arctan\frac{x-\cos\theta}{\sin\theta}\big|_{x=-1}^{x=1}\\
&=&\arctan\frac{1-\cos\theta}{\sin\theta}-\arctan\frac{-1-\cos\theta}{\sin\theta}\\
&=&\arctan\Big(\tan\frac{\theta}{2}\Big)+\arctan\Big(\cot\frac{\theta}{2}\Big)\\
&=&\begin{cases}\phantom{-}\dfrac{\pi}{2}, & 0<\theta<\pi,\\ -\dfrac{\pi}{2}, & \pi<\theta<2\pi,\end{cases}
\end{eqnarray}
using the identity $\arctan t+\arctan\frac{1}{t}=\frac{\pi}{2}\operatorname{sgn}(t)$ (note that $\tan\frac{\theta}{2}<0$ for $\pi<\theta<2\pi$). Since $f=0$ at $\theta=0,\pi,2\pi$ while $f=\pm\frac{\pi}{2}$ on either side, $f$ is discontinuous at exactly these three points.
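For $\theta\in(0,\pi)$ the constant value $\frac{\pi}{2}$ can be sanity-checked numerically; a midpoint-rule sketch in Python (the node count is an arbitrary choice, ample for this smooth integrand):

```python
import math

def f(theta, n=20000):
    # midpoint-rule approximation of ∫_{-1}^{1} sinθ / (1 - 2x cosθ + x²) dx
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        s += math.sin(theta) / (1 - 2 * x * math.cos(theta) + x * x)
    return s * h

# the integral is π/2 for every θ in (0, π)
for theta in [math.pi / 3, math.pi / 2, 2 * math.pi / 3]:
    assert abs(f(theta) - math.pi / 2) < 1e-4
```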
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding the equation of the straight line $y=ax+b$?
If I have a circle $x^2+y^2=1$ and a line that passes through $(0,0)$,
and I know the angle between the line and the axis. If, for example, the angle is $\frac{\pi}{3}$, how can I find the equation of the straight line $y=ax+b$?
|
You do know that $b$ is called the "$y$-intercept", right?
There is a reason for that: $b$ is the value of $y$ where the line
crosses the $y$-axis.
Now look at the figure and see where the line crossed the axis and what
the value of $y$ is at that point.
Recall that $a$ is the slope of the line, which you can get from
the coordinates of two points on the line $(x_0,y_0)$, $(x_1,y_1)$
like so:
$$ a = \frac{y_1 - y_0}{x_1 - x_0}.$$
You already have one point, $(x_0,y_0) = (0,0)$.
So you just need to find one other point, for example
one of the points on the circle where the line intersects it,
or draw a right triangle with one leg from $(0,0)$ to $(1,0)$
(along the $x$-axis) and see where the third point is if the
angle at $(0,0)$ is $\frac\pi3$.
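The recipe above can be checked in a few lines of Python (a sketch; the intersection point $(\cos\theta,\sin\theta)$ on the unit circle serves as the second point):

```python
import math

theta = math.pi / 3
# the line through the origin at angle θ meets the unit circle at (cos θ, sin θ)
x1, y1 = math.cos(theta), math.sin(theta)
a = (y1 - 0) / (x1 - 0)   # slope from the two points (0,0) and (x1,y1)
b = 0                     # y-intercept: the line crosses the y-axis at the origin

# for a line through the origin, the slope is exactly tan(θ)
assert abs(a - math.tan(theta)) < 1e-12
print(f"y = {a:.6f}x + {b}")   # slope tan(π/3) = √3 ≈ 1.732051
```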
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
About the elements of a finite subgroup of $\mathrm{GL}(\mathbb{R}^{n})$ Let $G$ be a finite subgroup of $\mathrm{GL}(\mathbb{R}^{n})$. I would like to prove that for every $g \in G$, $\det(g) \in \lbrace -1,1 \rbrace$.
Here are my ideas: since $G$ is a finite subgroup of $\mathrm{GL}(\mathbb{R}^{n})$, the elements of $G$ satisfy $X^{e} - \mathrm{Id} = 0$ (for some $e \in \mathbb{N}^{\ast}$). Therefore, the eigenvalues of the elements of $G$ are roots of unity in $\mathbb{C}$. For a given element $g \in G$, we can also note that if $\lambda \in \mathbb{C}$ is an eigenvalue for $g$, then $\overline{\lambda}$ is also an eigenvalue for $g$. Therefore, the determinant of $g$ will be either $-1$ or $1$. Is this correct ?
|
I'd like to show a somewhat simpler proof, without using eigenvalues theory and stuff.
Since $\det$ is a group homomorphism $\det : \mathrm{GL}(\mathbb{R}^{n}) \to \mathbb{R}^{\times}$ (the codomain being the multiplicative group of reals), the image of $G$ under $\det$ must be a finite subgroup of $\mathbb{R}^{\times}$. In $\mathbb{R}$ only two elements generate finite subgroups, namely $1$ and $-1$, so $\mathbb{R}^{\times}$ has only two finite subgroups: $\{1\}$ (the trivial one) and $\{1, -1\}$ (the cyclic group of order two). So the image of $G$ under $\det$ must be one of those two groups, which means that $\det$ takes only the values $1$ and $-1$ on $G$.
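To see this concretely, here is a small Python check with the dihedral group $D_4$ (the $8$ symmetries of the square), a finite subgroup of $\mathrm{GL}(\mathbb{R}^{2})$: every determinant comes out as $\pm 1$.

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

r = [[0, -1], [1, 0]]      # rotation by 90 degrees
s = [[1, 0], [0, -1]]      # reflection across the x-axis

# build the 8 elements: r^k and r^k s for k = 0, 1, 2, 3
group = []
for k in range(4):
    rk = [[1, 0], [0, 1]]
    for _ in range(k):
        rk = matmul(rk, r)
    group.append(rk)
    group.append(matmul(rk, s))

assert len(group) == 8
assert all(det2(g) in (1, -1) for g in group)   # rotations give +1, reflections -1
```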
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Limit behavior of a definite integral that depends on a parameter. Let $A>0$ and $0\le \mu \le 2$. Consider a following integral.
\begin{equation}
{\mathcal I}(A,\mu) := \int\limits_0^\infty e^{-(k A)^\mu} \cdot \frac{\cos(k)-1}{k} dk
\end{equation}
By substituting for $k A$ and then by expanding the cosine in a Taylor series about zero I have shown that:
\begin{equation}
{\mathcal I}(A,\mu) = \frac{1}{\mu} \sum\limits_{n=1}^\infty \frac{(1/A)^n}{n!} \cos\left(\frac{\pi}{2} n\right) \cdot \Gamma\left(\frac{n}{\mu}\right)
\end{equation}
Unfortunately the series on the right hand side above does not converge for small values of $A$. My question is therefore how do we find the small-$A$ behavior of ${\mathcal I}(A,\mu)$ ?
|
Here we provide an answer for $\mu=2q/p$ where $p$ is a positive integer and $p+1\le 2q \le 2 p$. By using the multiplication theorem for the Gamma function we have shown that:
\begin{eqnarray}
&&{\mathcal I}(A,\mu) =\\
&&\frac{1}{\mu} \sqrt{\frac{(2\pi)^{2q-p}}{p 2q}} \sum\limits_{r=1}^q \left(\frac{p^{p/q}}{(2q)^2} \frac{(-1)}{A^2}\right)^r \cdot
\frac{\prod\limits_{k=0}^{p-1} \Gamma(\frac{r}{q}+\frac{k}{p})}{\prod\limits_{k=0}^{2q-1} \Gamma(\frac{2r+1}{2q} + \frac{k}{2q})}
\cdot F_{p+1,2q} \left[\begin{array}{rr} 1 & \left\{\frac{r}{q}+\frac{k}{p} \right\}_{k=0}^{p-1} \\ & \left\{\frac{2r+k}{2q}\right\}_{k=1}^{2q} \end{array}; \left(\frac{p^{p/q}}{(2q)^2} \frac{(-1)}{A^2}\right)^q\right]
\end{eqnarray}
Now we use the asymptotic behaviour of hypergeometric functions, see for example Wolfram's site. To make things simpler we plug the expression above into Mathematica and expand it using the Series[] command. The result is the following:
\begin{eqnarray}
{\mathcal I}(A,\frac{2}{1}) &=& \log(A) - \frac{1}{2} \gamma + A^2 + O(A^4)\\
{\mathcal I}(A,\frac{3}{2}) &=& \log(A) - \frac{1}{3} \gamma + A^{3/2} \frac{1}{2} \sqrt{\frac{\pi}{2}} + O\left((A^{3/2})^2\right) \\
{\mathcal I}(A,\frac{4}{3}) &=& \log(A) - \frac{1}{4} \gamma + A^{4/3} \frac{9 \sqrt{3} \pi }{280 \Gamma \left(-\frac{10}{3}\right)} + O\left((A^{4/3})^2\right)
\end{eqnarray}
As it seems, the function in question is a linear combination of $\log(A)$ and some function that is analytic in $A^\mu$. It would be nice to find the full series expansion of that latter function.
Since using Mathematica is cheating, here we explicitly treat the case of $\mu=2$. In this case we have:
\begin{eqnarray}
{\mathcal I}(A,2) &=& -\frac{\, _2F_2\left(1,1;\frac{3}{2},2;-\frac{1}{4 A^2}\right)}{4 A^2} \\
&=& -\frac{1}{8 A^2} \int\limits_0^1 \int\limits_0^1 (1-t)^{-1/2} \exp\left(-\frac{t \cdot t_1}{4 A^2}\right) dt dt_1 \\
&=&-\frac{1}{2} \int\limits_0^{\frac{1}{4 A^2}} \left(1 - 4 A^2 t\right)^{-1/2} \left(\frac{1- e^{-t}}{t}\right) dt
\end{eqnarray}
Now we expand the first factor in the integrand in a Taylor series. Therefore we have:
\begin{eqnarray}
-2 {\mathcal I}(A,2) &=& \log \left(\frac{1}{4 A^2}\right)+\Gamma \left(0,\frac{1}{4 A^2}\right)+\gamma +\\
&&\sum\limits_{n=1}^\infty \binom{-1/2}{n} (-1)^n \left[\frac{1}{n} - (4 A^2)^n \gamma(n,\frac{1}{4 A^2})\right]
\end{eqnarray}
Now, since
\begin{equation}
\sum\limits_{n=1}^\infty \binom{-1/2}{n} \frac{(-1)^n}{n} = \log(4)
\end{equation}
we are getting the final answer:
\begin{eqnarray}
{\mathcal I}(A,2) = \log(A) - \frac{\gamma}{2} - \frac{1}{2} \Gamma(0,\frac{1}{4 A^2}) - \frac{1}{2} \sum\limits_{n=1}^\infty \binom{-1/2}{n} (-1)^n (4 A^2)^n \gamma(n,\frac{1}{4 A^2})
\end{eqnarray}
Note that our conjecture was not quite true. Indeed, our function (at least in the case $\mu=2$) is a linear combination of a natural logarithm and a function that has a nontrivial Laurent expansion about the origin (i.e., the expansion contains both positive and negative powers).
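As a brute-force sanity check of the leading small-$A$ behavior $\log A - \frac{\gamma}{2}$ found above, a midpoint-rule evaluation of the defining integral at $A=0.05$ (Python sketch; the cutoff $k_{\max}$ and step are arbitrary choices, and the tolerance is loose since the $O(A^2)$ correction is ignored):

```python
import math

def I(A, kmax=300.0, h=0.001):
    # midpoint-rule approximation of ∫_0^∞ e^{-(kA)²} (cos k - 1)/k dk;
    # the integrand behaves like -k/2 near 0, so there is no singularity,
    # and the Gaussian factor makes the tail beyond kmax negligible here.
    n = int(kmax / h)
    s = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        s += math.exp(-(k * A) ** 2) * (math.cos(k) - 1) / k
    return s * h

gamma = 0.5772156649015329   # Euler–Mascheroni constant
A = 0.05
# leading small-A behavior: I(A, 2) ≈ log(A) - γ/2, with corrections O(A²)
assert abs(I(A) - (math.log(A) - gamma / 2)) < 0.01
```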
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Laplace Transform of a Heaviside function Find the Laplace transform.
$$g(t)= (t-1) u_1(t) - 2(t-2) u_2(t) + (t-3) u_3(t)$$
I understand that $\mathcal{L}\{u_c(t) f(t-c)\} = e^{-cs}F(s)$.
Finding $F(s)$ is the hard part for me. My professor has used, for example,
$$f(t-2)=t^2$$
let $$s = t-2$$
$$t= s+2$$
$$f(s) = (s+2)^2$$
therefore $f(t) = (t+2)^2$
But then he said that $f(t-2) = 1$ therefore $f(t) = 1$. But why/how?
By the previous logic if you let $s = t-2$ then $t= s+2$, and $f(s) = s+2$, so $f(t) = t+2$ not $1$.
I'm having a tough time figuring this out.
|
Your function is $f(t)=t$, not $f(t)=1$.
You need to use the formula
$$\mathcal{L}\{u_c(t) f(t-c)\} = e^{-cs}F(s)$$
three times, with $c=1,2,3$, to get the answer.
The answer derived by Leucippus looks correct.
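With $f(t)=t$, so $F(s)=1/s^2$, the three applications give $G(s) = (e^{-s}-2e^{-2s}+e^{-3s})/s^2$. A numerical cross-check of that result against the defining Laplace integral (Python sketch; $g$ is a triangular pulse supported on $[1,3]$, so a finite integration range suffices):

```python
import math

def g(t):
    # (t-1)u_1(t) - 2(t-2)u_2(t) + (t-3)u_3(t); identically 0 for t >= 3
    val = 0.0
    if t >= 1: val += t - 1
    if t >= 2: val -= 2 * (t - 2)
    if t >= 3: val += t - 3
    return val

def laplace_numeric(s, tmax=10.0, n=100000):
    # midpoint-rule approximation of ∫_0^∞ e^{-st} g(t) dt
    h = tmax / n
    return sum(math.exp(-s * ((i + 0.5) * h)) * g((i + 0.5) * h)
               for i in range(n)) * h

def laplace_formula(s):
    # the shift formula applied three times with F(s) = 1/s²
    return (math.exp(-s) - 2 * math.exp(-2 * s) + math.exp(-3 * s)) / s**2

for s in [0.5, 1.0, 2.0]:
    assert abs(laplace_numeric(s) - laplace_formula(s)) < 1e-4
```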
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Strongly equivalent metrics are equivalent I have two metrics $d,d'$ on $X$ that are strongly equivalent. In my case, this means that:
$\exists\alpha,\beta\in\mathbb{R}_{++}$ so that $\alpha d<d'<\beta d$
I want to show that they are equivalent by proving that:
$\forall x\in X,\forall\epsilon>0,\exists\delta>0:B_d(x,\delta)\subseteq B_{d'}(x,\epsilon)$
and vice-versa.
My argument is: take $x\in X,\epsilon>0$ and build the ball $B_{d'}(x,\epsilon)$. If we take the ball $B_{d}(x,\frac{\epsilon}{\alpha})$, then :
$$y\in B_{d}(x,\frac{\epsilon}{\alpha})\Rightarrow d(x,y)<\epsilon/\alpha\Rightarrow\alpha d(x,y)<\epsilon$$
but, using the strong equivalence property:
$$\alpha d(x,y)<d(x,y)<\epsilon\Rightarrow y\in B_{d'}(x,\epsilon)$$
There's something wrong in this argument. What is it? How can I understand this properly?
Thanks for your time and help!
|
Two metrics are (topologically) equivalent if they give rise to the same convergent sequences.
In this sense we want to prove that $$d(x_n,x ) \to 0 \Leftrightarrow d'(x_n,x ) \to 0$$
Assume that for every $B_{d'}(x,\epsilon)$ there is a $\delta$ such that $B_{d}(x,\delta) \subset B_{d'}(x,\epsilon)$
If $d(x_n,x)\to 0$, then for large $n$ we have $x_n \in B_{d}(x,\delta)$, and therefore $x_n \in B_{d'}(x,\epsilon)$; hence $d'(x_n,x) \to 0$.
the other implication is analogous.
What you are trying to prove might be seen more easily as follows:
If $\alpha d <d'<\beta d$ then for every $B_d(x,\epsilon)$ consider $B_{d'}(x, \delta)$
$$d'(y,x)<\delta \Rightarrow d(y,x)< \delta/\alpha $$
So $\delta \leq \epsilon\alpha$ implies that $$B_{d'}(x,\delta) \subset B_{d}(x,\epsilon) $$
the other implication is analogous, take $B_{d'}(x, \delta)$ and consider
$$d(y,x)<\gamma \Rightarrow d'(y,x)< \beta \gamma$$
therefore for $\gamma \leq \frac{\delta}{\beta}$
$$B_{d}(x,\gamma) \subset B_{d'}(x,\delta) \subset B_{d}(x,\epsilon) $$
$$B_{d}(x,\frac{\epsilon}{\alpha \beta}) \subset B_{d'}(x,\frac{\epsilon}{\alpha}) \subset B_{d}(x,\epsilon) $$
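A randomized sanity check of these ball inclusions (Python sketch) for the concrete pair $d(x,y)=|x-y|$ and $d'=2d$ on $\mathbb{R}$, where $\alpha=1$, $\beta=3$ satisfy $\alpha d \le d' \le \beta d$:

```python
import random

d  = lambda x, y: abs(x - y)        # base metric on R
dp = lambda x, y: 2 * abs(x - y)    # d' = 2d, so α = 1, β = 3 work
alpha, beta = 1.0, 3.0

x, eps = 0.0, 1.0
random.seed(0)
for _ in range(10000):
    y = random.uniform(-2, 2)
    # B_d(x, ε/(αβ)) ⊂ B_{d'}(x, ε/α) ⊂ B_d(x, ε)
    if d(x, y) < eps / (alpha * beta):
        assert dp(x, y) < eps / alpha
    if dp(x, y) < eps / alpha:
        assert d(x, y) < eps
```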
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
why does the reduced row echelon form have the same null space as the original matrix? What is the proof for this and the intuitive explanation for why the reduced row echelon form have the same null space as the original matrix?
|
Say we have an $n \times n$ matrix, $A$, and are going to row reduce it. Every time we do a row operation, it is the same as multiplying on the left side by an invertible matrix corresponding to the operation. So at the end of the process, we can conclude something like $B = L_1L_2...L_kA$, where $B$ is the row reduced matrix, and the $L_i$ are the matrices corresponding to the row operations.
The null space of $A$ is the set $\left\{\vec{x} \in \mathbb{R}^n |\space A\vec{x} = 0\right\}$, so for any $\vec{x}$ in this set:
$B\vec{x} = L_1L_2...L_kA\vec{x} = L_1L_2...L_k\vec{0} = \vec{0}$.
Conversely, if $x$ is in the null space of $B$ ($B\vec{x} = \vec{0}$) then
$A\vec{x} = L_k^{-1}...L_2^{-1}L_1^{-1}L_1L_2...L_kA\vec{x} = L_k^{-1}...L_2^{-1}L_1^{-1}B\vec{x} = L_k^{-1}...L_2^{-1}L_1^{-1}\vec{0} = \vec{0}$
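A concrete check of this fact (Python sketch; Gauss–Jordan elimination over exact rationals so no floating-point issues arise): a vector killed by $A$ is also killed by its RREF.

```python
from fractions import Fraction

def rref(M):
    # standard Gauss-Jordan elimination over exact rationals
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols, lead = len(M), len(M[0]), 0
    for r in range(rows):
        if lead >= cols:
            break
        i = r
        while M[i][lead] == 0:           # find a pivot row for this column
            i += 1
            if i == rows:
                i, lead = r, lead + 1
                if lead == cols:
                    return M
        M[i], M[r] = M[r], M[i]
        M[r] = [v / M[r][lead] for v in M[r]]          # scale pivot to 1
        for i in range(rows):
            if i != r:                                  # clear the column
                M[i] = [a - M[i][lead] * b for a, b in zip(M[i], M[r])]
        lead += 1
    return M

def apply(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]            # rank 2: nontrivial null space
x = [Fraction(-1), Fraction(-1), Fraction(1)]    # A x = 0
assert apply(A, x) == [0, 0, 0]
assert apply(rref(A), x) == [0, 0, 0]            # the same vector kills the RREF
```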
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 1
}
|
Why $\lim_{\Delta x\to 0} \cfrac{\int_{x}^{x+\Delta x}f(u) du}{\Delta x}=\cfrac{f(x)\Delta x}{\Delta x}$? I'm reading Nahin's: Inside Interesting Integrals.
I've been able to follow it until:
$$\lim_{\Delta x\to 0} \cfrac{\int_{x}^{x+\Delta x}f(u) du}{\Delta x}=\cfrac{f(x)\Delta x}{\Delta x}$$
I don't know what justifies the passage of the limit here. I've tried to write it as:
$$\lim_{\Delta x\to 0} \left[ \cfrac{1}{\Delta x}\cdot \int_{x}^{x+\Delta x}f(u) du\right]=\left[\lim_{\Delta x\to 0} \cfrac{1}{\Delta x}\right]\cdot \left[ \lim_{\Delta x\to 0} \int_{x}^{x+\Delta x}f(u) du \right]$$
But it seemed to create more non-sense than I previously had.
|
What he says is not actually rigorous... but what he means is that, when $\Delta x$ approaches $0$, $f$ is approximately constant on the interval of integration (assuming it is continuous). What you can do is, for instance:
Fix $\epsilon >0$. There exists $\delta>0$ such that: $f(x)-\epsilon<f(u)< f(x)+\epsilon$ for all $u$ in the $\delta$-neighbourhood of $x$. Therefore:
$ (f(x)-\epsilon) \Delta x \leq \int_x^{x+\Delta x} (f(x)- \epsilon) du \leq \int_x^{x+\Delta x} f(u)du \leq \int_x^{x+\Delta x} (f(x)+ \epsilon) du =(f(x)+\epsilon) \Delta x $
which implies:
$$\displaystyle f(x)-\epsilon \leq \lim_{\Delta x \rightarrow 0}\frac{\int_x^{x+\Delta x} f(u)du}{\Delta x}\leq f(x)+\epsilon$$
Since this holds for every $\epsilon>0$, we have that
$$\displaystyle \lim_{\Delta x \rightarrow 0}\frac{\int_x^{x+\Delta x} f(u)du}{\Delta x}=f(x)$$
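The same squeeze can be watched numerically (Python sketch with the arbitrary choice $f=\cos$, $x=1$): the average of $f$ over $[x, x+\Delta x]$ tends to $f(x)$ as $\Delta x \to 0$.

```python
import math

def avg_integral(f, x, dx, n=1000):
    # midpoint rule for (1/Δx) ∫_x^{x+Δx} f(u) du
    h = dx / n
    return sum(f(x + (i + 0.5) * h) for i in range(n)) * h / dx

x = 1.0
for dx in [0.1, 0.01, 0.001]:
    print(dx, avg_integral(math.cos, x, dx))
# the averages approach f(x) = cos(1) ≈ 0.540302 as Δx shrinks
assert abs(avg_integral(math.cos, x, 0.001) - math.cos(1.0)) < 1e-3
```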
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1379789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|