Let $0\leq a<b$ be real numbers. Prove that there is no continuous function $f:[a,b]\rightarrow\mathbb{R}$ such that $\int\limits_{a}^{b}f(x)x^{2n}dx>0$ and $\int\limits_{a}^{b}f(x)x^{2n+1}dx<0$ for all integers $n\geq0$.
I am trying to use the Weierstrass approximation theorem, but I am not getting anywhere. Please give me some idea.
| We can give an elementary proof as follows: for $q>0$, the change of variables $y=qx$ gives $\int\limits_{a}^{b}f(x)x^{n}dx=\frac{1}{q^{n+1}}\int\limits_{qa}^{qb}f(y/q)y^{n}dy$, so with $g(y)=f(y/q)$ the integral $\int\limits_{qa}^{qb}g(y)y^{n}dy$ has the same sign as $\int\limits_{a}^{b}f(x)x^{n}dx$. Choosing $q$ with $qb \le 1$, one can therefore assume $b \le 1$.
Let $0<c<1$ and consider the polynomial $P(x)=1+\eta-(x+\epsilon)(x-c)^2=(1+\eta-\epsilon c^2)-(c^2-2c\epsilon)x+(2c-\epsilon)x^2-x^3$. Note that for $\epsilon, \eta>0$ small enough we have $P(c)=1+\eta$ and $0<P(x)<1$ for $0<x<1$ with $x \notin [c-\delta, c+\delta]$, where $\delta \to 0$ as $\eta, \epsilon \to 0$; moreover its coefficients alternate in sign, so $P(x)=a_0-a_1x+a_2x^2-a_3x^3$ with $a_0,\ldots,a_3 >0$.
Now it is easy to see by induction that $P(x)^k$ has the same alternating-coefficient property as $P$, since the product of two polynomials with alternating coefficients again has alternating coefficients.
Now, coming back to our problem: since $\int_a^b f(x)x\,dx<0$, $f$ must take a negative value, so there is $0<c<1$ for which $f(c)=-d<0$; in particular there exists a small $\delta>0$ such that $f(x)<-d/2$ for $x \in (c-\delta, c+\delta)$. Choose $\eta, \epsilon>0$ such that for the above polynomial one has $P \ge 1$ on $[c-\delta/2, c+\delta/2]$ and $0<P<1$ outside $[c-\delta, c+\delta]$.
However, $\int_a^b f(x)P^k(x)\,dx>0$ for all $k \ge 0$: writing $P^k(x)=\sum_j(-1)^j a_j x^j$ with all $a_j\ge0$ (and $a_0>0$), the integral equals $\sum_n a_{2n}\int_a^b f(x)x^{2n}dx-\sum_n a_{2n+1}\int_a^b f(x)x^{2n+1}dx$, which is positive by the hypotheses.
But $\left|\left(\int_a^{c-\delta}+\int_{c+\delta}^b\right)f(x)P^k(x)\,dx\right| \to 0$ as $k \to \infty$ (since $0<P<1$ there), while $\int_{c-\delta}^{c+\delta}f(x)P^k(x)\,dx<-\delta d /2$ (the integrand is negative on all of $(c-\delta,c+\delta)$, and $f<-d/2$, $P^k\ge 1$ on $[c-\delta/2,c+\delta/2]$), so we get our contradiction for large $k$ and we are done!
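As a quick sanity check of the construction, here is a small sketch using sympy (the concrete values $c=1/2$ and $\epsilon=\eta=1/100$ are my own choices for illustration, not from the argument above):

```python
import sympy as sp

x = sp.symbols('x')
c, eps, eta = sp.Rational(1, 2), sp.Rational(1, 100), sp.Rational(1, 100)

# P(x) = 1 + eta - (x + eps)*(x - c)^2, expanded
P = sp.expand(1 + eta - (x + eps) * (x - c) ** 2)
print(sp.Poly(P, x).all_coeffs())   # [-1, 99/100, -6/25, 403/400]: signs alternate

# P(c) > 1, while away from c on (0, 1) the values lie in (0, 1)
print(P.subs(x, c))                                              # 101/100
print(P.subs(x, sp.Rational(1, 10)), P.subs(x, sp.Rational(9, 10)))
```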
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4421775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does $\int_{\Gamma}\frac{x^2}{1+x^2}\,\Lambda_1(dx)=\int_{\Gamma}\frac{x^2}{1+x^2}\,\Lambda_2(dx)$ suffice to prove $\Lambda_1=\Lambda_2$? Suppose we have two measures $\Lambda_1$ and $\Lambda_2$ on $\mathcal{B}\left(\mathbb{R}\right)$.
They may be infinite measures, but suppose $\Lambda_1\left(\left\{0\right\}\right)=\Lambda_2\left(\left\{0\right\}\right)=0$ and that for all $\Gamma\in \mathcal{B}\left(\mathbb{R}\right)$ we have
$$
\int_{\Gamma}\frac{x^2}{1+x^2}\,\Lambda_1(dx)=\int_{\Gamma}\frac{x^2}{1+x^2}\,\Lambda_2(dx)<\infty.
$$
Does it follow that $\Lambda_1=\Lambda_2$?
| Yes, it does.
First note that
$$ \infty>\int_{\mathbb{R}\setminus\left(\frac{-1}{n},\frac{1}{n}\right)}\frac{x^2}{1+x^2}\Lambda_i(dx)\ge\frac{\Lambda_i\left(\mathbb{R}\setminus\left(\frac{-1}{n},\frac{1}{n}\right)\right)}{n^2+1}\ ,
$$
so both $\ \Lambda_i\ $ must be $\sigma$-finite.
If $\ \Gamma\ $ is any Borel set with $\ \Lambda_2(\Gamma)=0\ $, then
\begin{align}
0&=\int_\Gamma\frac{x^2}{1+x^2}\Lambda_2(dx)\\
&=\int_\Gamma\frac{x^2}{1+x^2}\Lambda_1(dx)\\
&\ge\int_{\Gamma\setminus(-a,a)}\frac{x^2}{1+x^2}\Lambda_1(dx)\\
&\ge\frac{a^2\Lambda_1\big(\Gamma\setminus(-a,a)\big)}{1+a^2}\ ,
\end{align}
for any $ a>0\ $. Therefore $\ \Lambda_1\big(\Gamma\setminus(-a,a)\big)=0\ $ for any $\ a>0 $, and
\begin{align}
\Lambda_1(\Gamma)&=\Lambda_1\big(\Gamma\cap\{0\}\big)+\Lambda_1\left(\bigcup_{n=1}^\infty\Gamma\setminus\left(\frac{-1}{n},\frac{1}{n}\right)\right)\\
&=0
\end{align}
by countable subadditivity. This shows $\ \Lambda_1\ll\Lambda_2\ $, so from the Radon–Nikodym theorem it follows that $\ \Lambda_1\ $ has a density $\ \varphi\ $ with respect to $\ \Lambda_2\ $:
$$
\Lambda_1(\Gamma)=\int_\Gamma\varphi(x)\Lambda_2(dx)
$$
for any $\ \Gamma\in\mathcal{B}(\mathbb{R})\ $. Then
\begin{align}
\int_\Gamma\frac{x^2}{1+x^2}\Lambda_1(dx)&=\int_\Gamma\frac{x^2\varphi(x)}{1+x^2}\Lambda_2(dx)\\
&=\int_\Gamma\frac{x^2}{1+x^2}\Lambda_2(dx)
\end{align}
for any $\ \Gamma\in\mathcal{B}(\mathbb{R})\ $, from which it follows that $\ \varphi(x)=1\ $ for $\ \Lambda_2$-a.e. $\ x\ $ (here we use that $\ \frac{x^2}{1+x^2}>0\ $ for $\ x\ne0\ $ and $\ \Lambda_2(\{0\})=0$). Therefore
\begin{align}
\Lambda_1(\Gamma)&=\int_\Gamma\Lambda_2(dx)\\
&=\Lambda_2(\Gamma)
\end{align}
for all $\ \Gamma\in\mathcal{B}(\mathbb{R})\ $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4421922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dimension of the symmetric/alternating $k$-tensors over an $n$-dimensional vector space. I want to solve this question:
Suppose $V$ is a vector space of dimension $n$ over a field $F$ of characteristic not equal to 2. Calculate $\dim \operatorname{Sym}^{k}(V)$ (the space of symmetric $k$-tensors).
I know that $(\operatorname{Sym}^k(V))^*$, where $*$ denotes the dual, is isomorphic to the space $F[x_1, \dots, x_n]_k$ of homogeneous polynomials of degree $k$ in $n$ variables, and I know that in the case of $F[x_1, x_2]_k$ its dimension is $k+1$, but I do not know how to generalize this when we have $n$ variables. Could someone clarify this for me please?
Also, how can I calculate dim of $\wedge^k(V)$ (skew symmetric forms)
| For symmetric tensors you can look at this answer (there the dimension works out to $\dbinom{n+k-1}{k}$, by a stars-and-bars count of the monomials of degree $k$ in $n$ variables). I'll provide an outline for alternating tensors, as I cannot find a proper link right now.
Given a basis $\{e_{1},...,e_{n}\}$ of $V$.
Define $\{E^{1},...,E^{n}\}$ the dual basis corresponding to $\{e_{1},...,e_{n}\}$.
For a multi-index $I=(i_{1},...,i_{k})$ such that $1\leq i_{j}\leq n$ for all $j=1,2,...,k$,
define $$E^{I}(v_{1},...,v_{k})=\begin{vmatrix} E^{i_{1}}(v_{1})& E^{i_{2}}(v_{1})&\cdots& E^{i_{k}}(v_{1}) \\ \vdots&\vdots&\cdots & \vdots \\ E^{i_{1}}(v_{k})&E^{i_{2}}(v_{k})&\cdots &E^{i_{k}}(v_{k})\end{vmatrix}$$
Then you can prove that $\{E^{I}:i_{1}<i_{2}<...<i_{k}\}$ forms a basis for the space of alternating $k$-forms.
That is, you can express any alternating form as $$\sum^{\text{increasing}}_{I}c_{I}E^{I},$$ where the sum runs over all increasing multi-indices,
and the above set is linearly independent.
The cardinality is precisely $\dbinom{n}{k}$, as there are precisely $\dbinom{n}{k}$ ways to pick $k$ numbers out of $n$ and arrange them in increasing order.
For more details (a complete proof), see John M. Lee's Introduction to Smooth Manifolds, Chapter 14.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4422119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Approximating $f(x) = x$ with a certain class of polynomials. Let
$$P = \left\{\sum_{j=0}^{n}a_{j}x^{2j} \;\middle|\; n \in \mathbb{N}\cup \{0\},\ a_{0},...,a_{n} \in [0,\infty)\right\},$$ so that $P$ consists of polynomials in $x^2$ with non-negative coefficients. For example $1+x^2+\frac{1}{4}x^4+\pi x^{8} \in P$, $x^2 \in P$, $0 \in P$.
Given an $\epsilon > 0$ is there $p_{\epsilon} \in P$ so that
$$\max_{x \in [0,1]} |p_{\epsilon}(x) - x| \leq \epsilon \text{ }?$$
This question is inspired by Let $0\leq a<b$ be real numbers. Prove that there is no continuous function $f:[a,b]\rightarrow\mathbb{R}$
If the identity function can be well approximated by functions in $P$, then I can prove the linked question.
One cannot simply apply the Stone-Weierstrass theorem to my question, as $P$ contains only polynomials with non-negative coefficients (a more restrictive class).
| The answer is no; assume the opposite and let $P_n(x)=a_{0n}+a_{1n}x^2+...$ as in the OP be such that $|P_n(x)-x| \le 1/n$ for $x \in [0,1]$. Taking $x=0$ gives $a_{0n} \le 1/n$, so $Q_n(x)=P_n(x)-a_{0n}$ satisfies $|Q_n(x)-x| \le 2/n$ for $x \in [0,1]$, i.e. $|x|\,|\sum a_{kn}x^{2k-1} -1| \le 2/n$.
For $x=1$ this means $1-2/n \le \sum a_{kn} \le 1+2/n$.
But then for $x=1/2$ one has $1-\sum a_{kn}(1/2)^{2k-1} \ge 1-\sum a_{kn}/2 \ge 1/2-1/n$, so $|x|\,|\sum a_{kn}x^{2k-1} -1| \ge 1/4-1/(2n)$, contradicting the above estimate for $n \ge 20$, say.
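To see the obstruction numerically, here is a minimal sketch of my own using scipy's nonnegative least squares: even with many even powers available, the best nonnegative fit to $x$ on a grid over $[0,1]$ keeps a sup-norm error bounded away from $0$, consistent with the estimate above.

```python
import numpy as np
from scipy.optimize import nnls

x = np.linspace(0.0, 1.0, 201)
for n in (2, 5, 10, 20):
    # columns 1, x^2, x^4, ..., x^(2n); nnls enforces a_j >= 0
    A = np.column_stack([x ** (2 * j) for j in range(n + 1)])
    coef, _ = nnls(A, x)
    print(n, np.abs(A @ coef - x).max())   # sup-norm error does not tend to 0
```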
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4422750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Which approximation expression is more precise? Consider the expression
$$\lim_{h\rightarrow 0} \frac{f(x+h, y+h) -f(x,y)}{h} $$
Is the term $f(x+h, y+h)$ in the numerator better approximated as
$$f(x+h, y+h)\approx f(x,y)+h\partial_xf(x,y)+h\partial_yf(x,y)$$
or
$$f(x+h, y+h)\approx f(x,y)+h\partial_xf(x,y+h)+h\partial_yf(x+h,y)$$
They seem to be equivalent at first order on paper after further expanding around $f(x,y)$, but the numerical behaviour might be a bit different. (For example, the Runge–Kutta methods did perform better in some cases.)
Would the second approximation "formally" be more precise?
(A reference for the multivariable Taylor expansion around $f(\vec a)$.)
The question came up because one wanted to figure out the value of a "derivative" where the two arguments are coupled,
$$\frac{\partial f(x,y)}{\partial (x+y)},$$
where an additional condition is imposed on $x$ and $y$, with $x\ll y$.
| Let $f_0=f(x+h, y+h)$, $f_1=f(x,y)+h\partial_xf(x,y)+h\partial_yf(x,y)$, and $f_2=f(x,y)+h\partial_xf(x,y+h)+h\partial_yf(x+h,y)$.
Then $f_0=f_1+\frac{h^2}{2} \partial^2_{xx} f(x,y)+\frac{h^2}{2} \partial^2_{yy} f(x,y)+h^2 \partial^2_{xy} f(x,y)+o(h^2)$ and $f_2=f_1+2 h^2 \partial^2_{xy} f(x,y)+o(h^2)$.
Which of $f_1$ and $f_2$ is more precise depends on the function itself. Suppose $f(x,y)=(x-y)^2$; then $\partial^2_{xy} f(x,y)=-2$ and $\partial^2_{xx} f(x,y)=\partial^2_{yy} f(x,y)=2$, so $f_0=f_1+o(h^2)$ while $f_2=f_1-4h^2+o(h^2)$.
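A minimal numerical check of these expansions (my own illustration) for $f(x,y)=(x-y)^2$ at an arbitrary point:

```python
f = lambda x, y: (x - y) ** 2
fx = lambda x, y: 2 * (x - y)      # partial_x f
fy = lambda x, y: -2 * (x - y)     # partial_y f

x0, y0 = 0.3, 1.7
for h in (1e-1, 1e-2, 1e-3):
    f0 = f(x0 + h, y0 + h)
    f1 = f(x0, y0) + h * fx(x0, y0) + h * fy(x0, y0)
    f2 = f(x0, y0) + h * fx(x0, y0 + h) + h * fy(x0 + h, y0)
    # f0 - f1 vanishes exactly for this quadratic f; f2 - f1 equals -4h^2
    print(h, f0 - f1, f2 - f1)
```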
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4422942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\pi$ is a representation of a Lie algebra $\mathfrak{g}$, why is $\pi(x)$ not required to be invertible? The definition of a representation of a group $G$ is a homomorphism $\pi: G\to GL(V)$. So here $\pi(x)$ is an invertible linear map $V \to V$.
The definition of a representation of a Lie algebra $\mathfrak{g}$ is a homomorphism $\pi: \mathfrak{g} \to End(V)$. So here $\pi(x)$ is just a linear map.
What is the motivation for not requiring invertibility? Is this something to do with category theory?
| If $\pi: G \rightarrow \operatorname{GL}(V)$ is a representation of a Lie group $G$ on a finite dimensional vector space $V$, then the differential $d \pi: \mathfrak g \rightarrow \operatorname{End}(V)$ is a Lie algebra representation of the Lie algebra $\mathfrak g$ into the space of all linear maps from $V$ to itself.
Originally, Lie algebra representations showed up as differentials (tangent space maps) of ordinary group representations, which are invertible. These differentials are, in particular, linear maps $\mathfrak g \to \operatorname{End}(V)$; being linear, they send $0 \in \mathfrak g$ to the zero map on $V$, and therefore you are forced to deal with non-invertible elements of $\operatorname{End}(V)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How many associative binary operations on the integers does $+$ distribute over? I am interested in binary operations $\mid: \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$ which satisfy:
* Associativity: $a \mid (b \mid c) = (a \mid b) \mid c$
* $+$ distributes over $\mid$: $(a \mid b) + c = (a + c) \mid (b + c)$.
I know of the following such operations: $\max, \min, (x, y) \mapsto x,$ and $(x, y) \mapsto y$. Are there others?
Over the reals, there is also an infinite family of such operations $(x, y) \mapsto \log_a (a^x + a^y)$, for any base $a$. But this operation doesn't restrict to the integers.
| Answering my own question: I prove that there are only the four such operations: $\max(x, y)$, $\min(x, y)$, $\text{first}(x, y) = x$, and $\text{last}(x, y) = y$.
In the following lemmas, we first show idempotence $x \mid x = x$, with some work. Then we look at chains of sums of $(0 \mid 1)$ to find $0 \mid n = n (0 \mid 1)$, and finally we determine that $(0 \mid 1)$ is either $0$ or $1$.
Combining this with the symmetric observation that $(1 \mid 0)$ is either $0$ or $1$ will give us the four cases.
Lemma 1: $x \mid x = x$.
Since $x \mid x = x + (0 \mid 0)$, it suffices to show $0 \mid 0 = 0$.
Consider the map $\phi: \mathbb{Z}^+ \to \mathbb{Z}$ defined by $\phi(n) = \underbrace{0 \mid 0 \mid \cdots \mid 0}_{n \text{ zeros}}$.
Then I claim
$$
\phi(mn) = \phi(m) + \phi(n)
$$
This is by rewriting the right-hand side as
$$
\underbrace{0 \mid 0 \mid \cdots \mid 0}_{m} + \phi(n)
= \underbrace{\phi(n) \mid \cdots \mid \phi(n)}_{m}
$$
and expanding.
In particular, we have that
$\phi(p^a) = a \phi(p)$ for any prime $p$.
Now I claim that $\phi(x) = \phi(y)$ for some positive integers $x < y$.
If $\phi(p) = 0$ for any prime, then $\phi(1) = \phi(p)$.
Otherwise, there must be primes $p_1$ and $p_2$ such that $\phi(p_1)$
and $\phi(p_2)$ have the same sign. Examining
$\phi(p_1^a) = a \phi(p_1)$ and $\phi(p_2^b) = b \phi(p_2)$, we see
that we can choose $a, b > 0$ to make these equal: in particular, $a = |\phi(p_2)|$ and $b = |\phi(p_1)|$.
To complete the proof of Lemma 1,
$\phi(x) = \phi(y)$ implies that $\phi(n)$ is periodic for $n$ sufficiently large, since $\phi(n) = \phi(n - x) \mid \phi(x) = \phi(n - x) \mid \phi(y) = \phi(n - x + y)$, and hence bounded.
But $\phi(2^a) = a \phi(2)$ is not bounded unless $\phi(2) = 0$,
so $\phi(2) = 0 \mid 0 = 0$, and in fact, $\phi(n) = 0$ for all $n$.
Lemma 2: for $n \ge 0$, $0 \mid 1 \mid \cdots \mid n = n (0 \mid 1)$.
Proof by induction:
\begin{align*}
0 \mid 1 \mid \cdots \mid (n + 1)
&= 0 \mid (1 \mid 1) \mid (2 \mid 2) \mid \cdots \mid (n \mid n) \mid (n + 1) \quad\text{by Lemma 1} \\
&= (0 \mid 1) \mid (1 \mid 2) \mid \cdots \mid (n \mid (n + 1)) \\
&= (0 + (0 \mid 1)) \mid (1 + (0 \mid 1)) \mid (2 + (0 \mid 1)) \mid \cdots \\
&= (0 \mid 1 \mid \cdots \mid n) + (0 \mid 1).
\end{align*}
Lemma 3: for $n \ge 0$, $0 \mid n = n (0 \mid 1)$.
Note that:
\begin{align*}
(0 \mid n) + (0 \mid 1 \mid 2 \mid ... \mid n)
&= (0 + (0 \mid 1 \mid 2 \mid ... \mid n)) \mid (n + (0 \mid 1 \mid 2 \mid ... \mid n)) \\
&= 0 \mid 1 \mid 2 \mid ... \mid (2n) \quad\text{by Lemma 1: } n \mid n = n
\end{align*}
Applying Lemma 2, $(0 \mid n) + n (0 \mid 1) = 2n (0 \mid 1)$,
and the result follows.
Lemma 4: $(0 \mid 1) \in \{0, 1\}$.
Let $k = (0 \mid 1)$. By idempotence, $0 \mid 1 = 0 \mid 0 \mid 1 \mid 1$, and we consider different ways to evaluate this associatively.
First, $(0 \mid 1) \mid 1 = k \mid 1$, and second, $0 \mid (0 \mid 1) = 0 \mid k$. Thus,
$$
k = k \mid 1 = 0 \mid k
$$
Now we have two cases. If $k \ge 0$, then
$$
0 \mid k = k (0 \mid 1) = k^2,
$$
so $k = k^2$ and $k \in \{0, 1\}$. Second, if $k < 0$, then
subtracting $k$ from $k = k \mid 1$ we get
$$
0 = (k - k) \mid (1 - k) = 0 \mid (1 - k)
= (1 - k) (0 \mid 1) = (1 - k) k
$$
so again $k = 0$ or $k = 1$ (actually a contradiction since $k < 0$), and we are done.
Putting things together
All of Lemmas 2–4 can be proven identically for the symmetric case of $b \mid a$ instead of $a \mid b$, from which we get that $1 \mid 0 \in \{0, 1\}$.
So there are two cases for $0 \mid 1$ and two cases for $1 \mid 0$. Together with Lemma 3 we can then calculate $m \mid n$ for any $m, n$:
$$
m \mid n = \begin{cases}
m + (0 \mid (n - m)) = m + (n - m) (0 \mid 1) \text{ if } n \ge m \\
n + ((m - n) \mid 0) = n + (m - n) (1 \mid 0) \text{ if } m \ge n.
\end{cases}
$$
In particular:
* If $0 \mid 1 = 1 \mid 0 = 1$, this gives the max operation $\max(x, y)$.
* If $0 \mid 1 = 1 \mid 0 = 0$, this gives the min operation $\min(x, y)$.
* If $0 \mid 1 = 0$ and $1 \mid 0 = 1$, this gives the first projection $\text{first}(x, y) = x$.
* Finally, if $0 \mid 1 = 1$ and $1 \mid 0 = 0$, this gives the second projection $\text{last}(x, y) = y$.
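As a brute-force sanity check of this classification (my own sketch, separate from the proof), one can verify that the four operations satisfy both axioms on a small range of integers:

```python
from itertools import product

ops = {
    "max": max,
    "min": min,
    "first": lambda x, y: x,
    "last": lambda x, y: y,
}
R = range(-3, 4)
for name, op in ops.items():
    assoc = all(op(a, op(b, c)) == op(op(a, b), c) for a, b, c in product(R, repeat=3))
    dist = all(op(a, b) + c == op(a + c, b + c) for a, b, c in product(R, repeat=3))
    print(name, assoc, dist)   # all four print True True
```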
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Do we extend the geometrically constructible numbers in a 3D space, where lines, circles, spheres and planes can be constructed? Do we extend the geometrically constructible numbers in a 3D space, where lines, circles, spheres and planes can be constructed?
In a 2D plane, we construct lines and circles only with compass and straightedge, and through this we give rise to the constructible numbers: without arbitrary placement of points, any point that may be constructed onto the real number axis (a line defined by points representing 0 and 1) is a constructible real number. This set of numbers is contained in the algebraic numbers but contains the rationals.
Now, just imagine this in 3D - suppose we have three or four points defining 0, 1, and perhaps an imaginary $i$ or whatever significant point designators, etc. We have the ability to draw circles, lines, spheres and planes defined through any known points.
I'm concerning myself with real numbers, specifically, and am curious what points can be plotted onto the real number line. I'm curious whether this 3D extension and new ruleset in any way extends the constructible numbers beyond what they already are.
Many tools have been used to extend the constructible numbers. These are called neusis tools, but non-constructible curves drawn in the plane, etc. can also accomplish much of the same. I'm wondering if we may include higher dimensions among our neusis tools.
| You can extend the range of constructible length ratios in this way, and the Greeks actually explored such extensions -- but with cones instead of spheres. Such constructions were called solid constructions. Cones are superior to spheres because spheres can intersect any plane only in circles, whereas cones generate the full range of conic sections (circles, ellipses, parabolas, hyperbolas). The conic sections are then used to generate the constructions on paper.
The method solves all equations up to degree $4$ in integers or previously constructed quantities, thereby allowing angle trisections, cube root extractions and regular $n$-gons when the Euler totient of $n$ has no prime factors other than $2$ and $3$. This capability may even be generated from one properly chosen conic section, such as the parabola $y=x^2$, plus Euclidean construction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $\mathbf{A}$ is real symmetric, prove $\mathbf{A}^n=\mathbf{0}$ implies $\mathbf{A}=\mathbf{0}$ For all $\mathbf{x}$, we see that $\mathbf{A}^n\mathbf{x}=\mathbf{0}$. Therefore we can construct a linearly independent basis of eigenvectors (each with eigenvalue $0$). Following from this, the eigenvectors of $\mathbf{A}$ must also form a basis, with eigenvalues $0^{1/n}=0$. Since a square matrix is diagonalizable iff there is a basis of its eigenvectors, there exists a frame in where $\mathbf{A} = \mathrm{diag}(0,\cdots,0) = \mathbf{0}$. However $\mathbf{0}\mathbf{M}=\mathbf{0}$ for all matrices $\mathbf{M}$ so therefore $\mathbf{A} = \mathbf{0}$ in any frame.
Is this proof complete? Where have I used the fact that $\boldsymbol{A}$ is real symmetric? The question actually asks you to consider the quadratic form $Q = \mathbf{x^T}\mathbf{A}\mathbf{x}$ but how does this help? Are there any other proofs?
| Notice that $\text{tr}(A^TA)=0$ implies $A=0$, since $\text{tr}(A^TA)$ is the sum of the squares of the entries of $A$. Since $A$ is symmetric, $A^TA=A^2$, so it
follows from $A^2=0$ that $A=0$.
Moreover, if $A$ is symmetric and $A^n=0$, there exists $k\in\mathbb{N}$ such that $2^k\ge n$ and then $A^{2^k}=A^{n}A^{2^k-n}=0$. By induction we obtain
$$A^{2^k}=0\Longrightarrow A^{2^{k-1}}=0\Longrightarrow\cdots\Longrightarrow A=0.$$
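To see concretely where symmetry enters (a small numpy illustration of my own): $\operatorname{tr}(A^TA)$ is the sum of the squared entries of $A$, so it vanishes only for $A=0$, and a non-symmetric nilpotent matrix shows the hypothesis cannot be dropped.

```python
import numpy as np

# tr(A^T A) equals the sum of squares of the entries of A
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.trace(A.T @ A), (A ** 2).sum())   # 30.0 30.0

# Without symmetry the claim fails: N is nilpotent but nonzero
N = np.array([[0.0, 1.0], [0.0, 0.0]])
print(N @ N)                 # the zero matrix, yet N != 0
print(np.trace(N.T @ N))     # 1.0: here N^T N is not N^2
```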
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show $Z(H)Z(G)=(HZ(G))\cap C_G(H)$ for $H\le G$ Suppose that $H$ is a subgroup of $G$, then I would like to show that the following follows:
$$Z(H)Z(G)=(HZ(G))\cap C_G(H)$$
where $C_G(H)=\{ x\in G \mid \forall h\in H: xh=hx\}$ and $Z(G)=\{ z\in G\mid \forall g\in G, zg=gz\}$.
What I showed thus far is the following:
Let $y\in Z(H)Z(G)$, so there exist $z_1\in Z(H), z_2\in Z(G)$ such that $y=z_1z_2$. We get that $y\in C_G(H)$ since $\forall h\in H: \ \ yh=z_1z_2h=z_1hz_2=hz_1z_2=hy$.
Now I find it difficult to show that: $y\in HZ(G)$.
Here's how I started my proof:
$$y=z_1z_2=hz_1h^{-1}z_2=h(z_1h^{-1}z_2)$$
where $h\in H$ is some arbitrary element of the subgroup $H$. So, I need to show that: $z_1h^{-1}z_2\in Z(G)$, which is where I am stuck? I need to show that for each $g\in G$ we have: $gz_1h^{-1}z_2=z_1h^{-1}z_2g$, how to show that? something trivial I am missing here?
Thanks!
BTW, is this proof of one direction reversible, i.e. is it actually bi-directional, or is the other direction different from this one?
| You could almost do this hands down if you are familiar with Dedekind's Modular Law: if $A,B,C$ are subgroups of a group $G$ with $A \subseteq B$ then (as sets) $(B \cap C)A = B \cap CA$. Hence, since, $Z(G) \subseteq C_G(H)$ we get
$$Z(H)Z(G)=C_H(H)Z(G)=(H \cap C_G(H))Z(G)=HZ(G) \cap C_G(H).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is $[a,b] \subset B$ necessarily true? I am currently working on a problem, and in order to continue, I need to first make sure if the following statement is true:
Suppose I have a set $A$ which is dense in the interval $[a,b]$. Now let $B$ be a closed set with $A \subseteq B$. Is it necessarily true that $[a,b]\subseteq B$?
My thinking is yes, but I’m struggling to convince myself why it is true. Can anyone please give me an indication if this is true and why?
| Denote the complement of $B$ by $B^c$.
Suppose that $[a,b] \not\subseteq B$. Then there exists a point $x \in [a,b]$ with $x \in B^c$. Since $B^c$ is open, there exists an open interval $I$ with $x \in I \subset B^c$.
Since $A$ is dense in $[a,b]$ and $x \in [a,b]$, the interval $I$ must contain a point of $A$. This implies $A \cap B^c \not= \emptyset$, contrary to hypothesis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4423993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find all integers $n$ such that $\frac{3n^2+4n+5}{2n+1}$ is an integer. Find all integers $n$ such that $\frac{3n^2+4n+5}{2n+1}$ is an integer.
Attempt:
We have
\begin{equation*}
\frac{3n^2+4n+5}{2n+1} = \frac{4n^2+4n+1 - (n^2-4)}{2n+1} = 2n+1 - \frac{n^2-4}{2n+1}.
\end{equation*}
So, we must have $(2n+1) \mid (n^2-4)$, so $n^2-4 = k(2n+1)$ for some $k \in \Bbb Z$. But I was not able to find $n$ from here.
Any ideas? Thanks in advance.
| Write $k=2n+1$ then $n=(k-1)/2$ so $$3n^2+4n+5= {3(k^2-2k+1) + 8(k-1)+20\over 4} ={3k^2+2k+15\over 4}$$ and thus $$4k\mid 3k^2+2k+15\implies k\mid 15$$
Now you have only few values of $k$...
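A quick brute-force confirmation (my own sketch): enumerating the divisors $k \mid 15$ and scanning a range of $n$ independently give the same solution set.

```python
# candidates from k = 2n+1 with k | 15, i.e. k in {±1, ±3, ±5, ±15}
cands = sorted((k - 1) // 2 for k in (1, -1, 3, -3, 5, -5, 15, -15))
print(cands)   # [-8, -3, -2, -1, 0, 1, 2, 7]

# independent scan of the original condition
sols = [n for n in range(-1000, 1000) if (3 * n * n + 4 * n + 5) % (2 * n + 1) == 0]
print(sols == cands)   # True
```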
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4424423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Functional inequality $\sqrt{f(x+y)}\leqslant\sqrt{f(x)}+\sqrt{f(y)}$ The following problem was posted on another forum but I wonder about the validity of its claim:
Let $f:\mathbb{R}\to [0, +\infty)$ satisfy the functional equation
$$f\left(\frac{x+y}{2}\right) +f\left(\frac{x-y}{2}\right) =\frac{f( x) +f( y)}{2}$$
Prove that
$$\sqrt{f( x+y)} \leqslant \sqrt{f( x)} +\sqrt{f( y)} ,\forall x,y\in [0, +\infty)$$
It seems the problem was motivated by the quadratic function $f(x)=x^2$. Indeed, it is easy to show that for any rational number $r$, we have $f(rx) = r^2f(x)$. So, if the function was continuous, we'd get the quadratic function and the inequality follows. However, no additional conditions are provided. My question is: is the statement of the problem correct and why, or if it is not correct, could you provide a counterexample?
| The claim is in fact true. Consider a function $ f : \mathbb R \to \mathbb R _ { 0 + } $ satisfying
$$ f \left ( \frac { x + y } 2 \right ) + f \left ( \frac { x - y } 2 \right ) = \frac { f ( x ) + f ( y ) } 2 \tag 0 \label 0 $$ for all $ x , y \in \mathbb R $. Substitute $ x + y $ for $ x $ and $ x - y $ for $ y $ in \eqref{0} to get
$$ f ( x + y ) + f ( x - y ) = 2 f ( x ) + 2 f ( y ) \tag 1 \label 1 $$
for all $ x , y \in \mathbb R $. \eqref{1} is known as the quadratic functional equation (as it resembles the identity that holds for quadratic functions of the form $ f ( x ) = a x ^ 2 $ where $ a $ is a constant, with $ a \ge 0 $ here) or the parallelogram law functional equation (as it resembles the parallelogram law). You can take a look at the post "Parallelogram law functional equation: $ f ( x + y ) + f ( x - y ) = 2 \big( f ( x ) + f ( y ) \big) $" to check out some of the properties of the functions satisfying \eqref{1}. Although that post is about $ f : \mathbb R \to \mathbb R $, many of the things done there can be repeated here. The important property of use here is that there must exist a symmetric biadditive $ B : \mathbb R ^ 2 \to \mathbb R $ such that $ f ( x ) = B ( x , x ) $ for all $ x \in \mathbb R $. This lets us mimic the proof of the triangle inequality for inner product spaces by means of the Cauchy–Schwarz inequality. To do this, first note that since $ B $ is biadditive, it must be bilinear over $ \mathbb Q $. This is because fixing one of the arguments of $ B $ and varying the other, we get a function that satisfies the well-known Cauchy functional equation, which you can read about in "Overview of basic facts about Cauchy functional equation". Knowing this, one can define $ p : \mathbb Q \to \mathbb R $ with $ p ( t ) = f ( t x + y ) $ for all $ t \in \mathbb Q $, given any fixed $ x , y \in \mathbb R $, and observe that
\begin{align*}
p ( t ) & = f ( t x + y ) \\
& = B ( t x + y , t x + y ) \\
& = t ^ 2 B ( x , x ) + 2 t B ( x , y ) + B ( y , y ) \\
& = t ^ 2 f ( x ) + 2 t B ( x , y ) + f ( y )
\end{align*}
for all $ t \in \mathbb Q $. Therefore, $ p $ is in fact a quadratic polynomial on $ \mathbb Q $, and we can continuously extend it to $ \mathbb R $; i.e. defining $ q : \mathbb R \to \mathbb R $ with $ q ( s ) = s ^ 2 f ( x ) + 2 s B ( x , y ) + f ( y ) $ for all $ s \in \mathbb R $, $ q $ will be a continuous quadratic polynomial over $ \mathbb R $ with $ q | _ { \mathbb Q } = p $. Note that $ p $ only takes nonnegative values by definition, and since $ \mathbb Q $ is dense in $ \mathbb R $, $ q $ only takes nonnegative values, too. As this is only possible when the discriminant of $ q $ is nonpositive, we must have
$$ \bigl ( 2 B ( x , y ) \bigr ) ^ 2 - 4 f ( x ) f ( y ) \le 0 \text , $$
or equivalently
$$ | B ( x , y ) | \le \sqrt { f ( x ) f ( y ) } \text . \tag 2 \label 2 $$
Note that $ x $ and $ y $ were arbitrary, and therefore \eqref{2} holds for all $ x , y \in \mathbb R $. Finally, note that
\begin{align*}
f ( x + y ) & = B ( x + y , x + y ) \\
& = B ( x , x ) + 2 B ( x , y ) + B ( y , y ) \\
& = f ( x ) + 2 B ( x , y ) + f ( y ) \\
& \stackrel { \eqref{2} } \le \sqrt { f ( x ) } ^ 2 + 2 \sqrt { f ( x ) } \sqrt { f ( y ) } + \sqrt { f ( y ) } ^ 2 \\
& = \left ( \sqrt { f ( x ) } + \sqrt { f ( y ) } \right ) ^ 2
\end{align*}
for all $ x , y \in \mathbb R $, which by taking square roots of both sides proves the claim (even better, as it holds for all $ x , y \in \mathbb R $, rather than all $ x , y \in \mathbb R _ { 0 + } $).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4424611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to evaluate the definite integral $\int _{0}^{\frac{\pi }{2}}\frac{\ln(\tan x)}{1-\tan x+\tan^{2} x}\mathrm{d} x$? I am struggling with this integral:
$\displaystyle \int _{0}^{\frac{\pi }{2}}\frac{\ln(\tan x)}{1-\tan x+\tan^{2} x}\mathrm{d} x$
What I tried so far:
$\displaystyle \int _{0}^{\frac{\pi }{2}}\frac{\ln(\tan x)}{1-\tan x+\tan^{2} x}\mathrm{d} x$
$\displaystyle =\int _{0}^{\frac{\pi }{2}}\frac{\cos^{2} x\ln(\tan x)}{1-\sin x\cos x}\mathrm{d} x$
$\displaystyle =\int _{0}^{\frac{\pi }{2}}\frac{-\sin^{2} x\ln(\tan x)}{1-\sin x\cos x}\mathrm{d} x$
$\displaystyle =\frac{1}{2}\int _{0}^{\frac{\pi }{2}}\frac{\cos 2x\ln(\tan x)}{1-\sin x\cos x}\mathrm{d} x$
The answer should come out to be $\dfrac{-7\pi^2}{72}$.
Any help will be appreciated.
| Since
$$
\displaystyle \int _{0}^{\frac{\pi }{2}}\frac{\ln(\tan x)}{1-\tan x+\tan^{2} x}\mathrm{d} x=\int_0^{\infty} \frac{\ln t}{(1-t+t^2)(1+t^2)}dt,
$$
consider
$$
\begin{aligned}
\mathscr{I}(s)&=\int_0^{\infty} \frac{t^s}{(1-t+t^2)(1+t^2)}dt
\\&=\int_0^{\infty} \frac{t^{s-1}}{1-t+t^2}dt-\int_0^{\infty} \frac{t^{s-1}}{1+t^2}dt
\\&=\int_0^{\infty} \frac{t^{s-1}}{1+t^3}dt+\int_0^{\infty} \frac{t^{s}}{1+t^3}dt-\int_0^{\infty} \frac{t^{s-1}}{1+t^2}dt.
\end{aligned}
$$
With Beta function, we have
$$
\int_{0}^{\infty}\frac{t^{s-1}}{1+t^{a}}dt=\frac{\pi \csc(\frac{\pi s}{a})}{a}
$$
thus
$$
\mathscr{I}(s)=\frac{\pi \csc(\frac{\pi s}{3})}{3}+\frac{\pi \csc(\frac{1}{3} \pi (s+1))}{3}-\frac{\pi \csc(\frac{\pi s}{2})}{2}.
$$
In conclusion,
$$
\begin{aligned}
\int_0^{\frac{\pi}{2}}\frac{\ln \tan x}{1-\tan x+\tan^2 x}dx&=\lim_{s\to 0}\frac{\partial }{\partial s}\mathscr{I}(s)
\\&=\lim_{s\to 0}\left(-\frac{\pi^{2} \csc(\frac{\pi s}{3}) \cot(\frac{\pi s}{3})}{9}-\frac{\pi^{2} \csc(\frac{1}{3} \pi s+\frac{1}{3} \pi) \cot(\frac{1}{3} \pi s+\frac{1}{3} \pi)}{9}+\frac{\pi^{2} \csc(\frac{\pi s}{2}) \cot(\frac{\pi s}{2})}{4}\right)
\\&=-\frac{7 \pi^{2}}{72}
\end{aligned}
$$
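A numerical cross-check of the closed form (my own sketch using scipy):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.log(np.tan(x)) / (1 - np.tan(x) + np.tan(x) ** 2)
val, err = quad(integrand, 0, np.pi / 2, limit=200)
print(val, -7 * np.pi ** 2 / 72)   # both approximately -0.9595
```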
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4424853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Combinatorial proof of $\sum_{k=0}^{n} k \binom{n+1}{k+1} n^{n-k} = n^{n+1}$ Show :
$$\sum_{k=0}^{n} k \binom{n+1}{k+1} n^{n-k} = n^{n+1}$$
for natural number $n$. I randomly discovered this identity, and managed to prove it using simple algebra. I tried a combinatorial proof of this, but it seems too difficult for me.
The RHS is basically distributing $n+1$ people to $n$ different groups where an empty group is possible, but I could not show that the LHS is the same. Picking $k+1$ people out of $n+1$ gives $\binom{n+1}{k+1}$, and distributing the others ($n-k$ people) gives $n^{n-k}$; and now I am stuck with that $k$. Also I have no idea what to do with the $k+1$ people I just picked; if I distribute them to $n$ groups then it will overlap with other terms of the sum.
A proof using algebra is also welcome, just in case.
| Suppose you are choosing a team consisting of $1$ or more people among $n+1$ people(enumerated as person 1, person 2, etc). You also want to choose a captain in the team. The selection process is the following: You score each person with a number from $1$ to $n+1$ and you want to choose the team to be those who scored $n+1$. Among them, you want to select the captain. You can do this by first selecting who were the ones to score $n+1$ (say $k$ of them) and then the captain, this will yield:
$$\sum _{k=0}^{n+1}\binom{n+1}{k}\cdot k\cdot n^{(n+1)-k}=\sum _{k=0}^{n}\binom{n+1}{k+1}\cdot (k+1)\cdot n^{(n+1)-(k+1)},$$
or you could have chosen the captain and then score the other people in $(n+1)\cdot (n+1)^n$ ways.
Now suppose you do not want the captain to be the tallest of the team, where height is given proportional to their number, i.e., person 1 is smaller than person 2, etc. (perhaps this is not a basketball team). We can choose the team, consisting of $k$ people, and then the captain in the following way:
$$\sum _{k=0}^{n}\binom{n+1}{k+1}\cdot k\cdot n^{(n+1)-(k+1)},$$
or we could have chosen the captain first. By the above problem, we know that there are in total $(n+1)^{n+1}$ ways to do this. Consider the opposite problem: let's choose the captain to be the tallest in the team; we claim that this can be done in $(n+1)^{n+1}-n^{n+1}$ ways, and so we would have at the end $(n+1)^{n+1}-((n+1)^{n+1}-n^{n+1})=n^{n+1}$ ways.
Naively, we can represent the choosing of the captain by saying it is the $s-$th person and then choosing the rest of the team $k$ people in $\binom{s-1}{k}$ ways in the following way
$$\sum _{s=1}^{n+1}\sum _{k=0}^{s-1}\binom{s-1}{k}n^{n+1-(k+1)},$$
but we could have chosen the score of the non-selected people that are smaller than $s$ giving us
$$\sum _{s=1}^{n+1}\sum _{k=0}^{s-1}n^{n+1-s}\binom{s-1}{k}n^{s-(k+1)}=\sum _{s=1}^{n+1}n^{n-(s-1)}(n+1)^{s-1}=n^n\sum _{s=0}^n\left (\frac{n+1}{n}\right )^s=(n+1)^{n+1}-n^{n+1},$$
where the last step is the geometric sum. Combinatorially, the middle step corresponds to letting elements below $s$ to be in the team (having score $= n+1$ and not allowing people bigger than $s$). The last step by considering where is the last score $=n+1$.
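A quick check of the identity for small $n$ (my own sketch):

```python
from math import comb

for n in range(1, 9):
    lhs = sum(k * comb(n + 1, k + 1) * n ** (n - k) for k in range(n + 1))
    print(n, lhs == n ** (n + 1))   # True for each n
```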
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4425206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
A pattern of periodic continued fractions I am interested in continued fractions in which consecutive $1$s appear.
For example, consider the following values.
$$
\sqrt{7} = [2;\overline{1,1,1,4}] \\
\sqrt{13} = [3;\overline{1,1,1,1,6}]
$$
In this article, let us denote $n$ consecutive $1$s by $1_n$.
In this notation, the above numbers are written as follows.
$$
\sqrt{7} = [2;\overline{1_3,4}] \\
\sqrt{13} = [3;\overline{1_4,6}]
$$
While investigating these numbers, the following pattern was found experimentally.
$$ \sqrt{F(n)^2m^2-(F(n)^2-L(n))m+\frac{(F(n)-1)(F(n)-3)}{4}-\frac{F(n-3)-1}{2}}
=\left[F(n)m-\frac{F(n)-1}{2};\ \overline{1_{n-1},\ 2\left(F(n)m-\frac{F(n)-1}{2}\right)}\right] $$ ($m,n \in \mathbb{N},\ n\equiv\pm1\ (mod3),\ n > 3,\ $$F(n)$ is Fibonacci number, $L(n)$ is Lucas number)
I have confirmed that it works correctly when $n$ and $m$ are single digits.
If you find a proof or a counterexample, please let me know.
(2022/04/13 edit)
A general expression was derived. I think the expression I found is a special case of it. The condition is that the inside of the square root is always an integer.
Here are some concrete examples.
$$\begin{array}{|c|c|}
\hline
n & pattern \\ \hline
4 & \sqrt{9m^2-2m} = [3m-1;\overline{1_3,2(3m-1)}] \\ \hline
5 & \sqrt{25m^2-14m+2} = [5m-2;\overline{1_4,2(5m-2)}] \\ \hline
7 & \sqrt{169m^2-140m+29} = [13m-6;\overline{1_6,2(13m-6)}] \\ \hline
8 & \sqrt{441m^2-394m+88} = [21m-10;\overline{1_7,2(21m-10)}] \\ \hline
\end{array}$$
| I think your claims are correct but quite needlessly complicated. The theorem at the end of this answer shows a result which is both simpler to write out and more general.
Let $(F_n)_{n\geq 0}$ be the standard Fibonacci sequence, defined by $F_0=0$, $F_1=1$ and $F_{n+2}=F_n+F_{n+1}$ for $n\geq 0$.
Let $f(x)=\frac{1}{1+x}$, and $f^n=f\circ f \circ \ldots \circ f$ ($n$ times). It is easy to check by induction that
$$
f^{n-1}(x)=\frac{\big(F_{n+1}-F_n\big)+\big(2F_n-F_{n+1}\big)x}{F_n+(F_{n+1}-F_n)x} \tag{1}
$$
(equivalently, $f^m(x)=\frac{F_m+F_{m-1}x}{F_{m+1}+F_mx}$; the form above uses $F_{n-1}=F_{n+1}-F_n$ and $F_{n-2}=2F_n-F_{n+1}$).
Now let $a\geq 1$ be an integer. A purely periodic continued fraction $x=[\overline{1_{n-1},\ a}]$ satisfies $x=g(x)$, where $g(x)=f^{n-1}\big(\frac{1}{a+x}\big)$, that is,
$$
g(x)=\frac{\big(F_{n+1}-F_n\big)(x+a)+\big(2F_n-F_{n+1}\big)}{F_n(x+a)+F_{n+1}-F_n} \tag{2}
$$
The roots of $g(x)=x$ are therefore defined by the equation
$$
x^2+ax-\bigg(\frac{F_{n+1}}{F_n}(a-1)+2-a\bigg)=0 \tag{3}
$$
This is a quadratic whose roots are $-\frac{a}{2}\pm \sqrt{\Delta}$, where
$$
\Delta = \frac{a^2}{4} + \bigg(\frac{F_{n+1}}{F_n}(a-1)+2-a\bigg) \tag{4}
$$
For $n\geq 3$, we have $F_{n+1}=\frac{3}{2}F_{n}+\frac{1}{2}F_{n-3}$ and hence $F_{n+1}\geq \frac{3}{2}F_{n}$. It follows from (4) that $\Delta \geq \frac{a^2}{4} + \frac{3}{2}(a-1)+2-a = \frac{a^2}{4}+\frac{a+1}{2} \gt \frac{a^2}{4}$, so that the largest root $\alpha$ of $g(x)=x$ is positive. Thus:
Theorem. For any $n\geq 3$ and $a\geq 2$, there is a unique positive number whose continued fraction is $[\overline{1_{n-1},\ a}]$. This number is
$$
\alpha = -\frac{a}{2} + \sqrt{\frac{a^2}{4} + \bigg(\frac{F_{n+1}}{F_n}(a-1)+2-a\bigg)} \tag{5}
$$
Update. When $a$ is of the form $a=F_n(2m+1)+1$, it is straightforward to compute that
$$
\begin{array}{lcl}
\Delta &=& \frac{4F_n^3m^2 + (8F_nF_{n+1} + 4F_{n}^3 - 4F_n^2)m + (4F_nF_{n+1} + F_n^3 - 2F_n^2 + 5F_n)}{4F_n} \\
&=& F_n^2 m^2 + \left(2F_{n+1} + F_{n}^2 - F_n\right)m + F_{n+1} + \frac{F_n^2-2F_n+5}{4}
\end{array}
$$
So $\Delta$ is an integer iff $F_n^2-2F_n+5$ is divisible by $4$, i.e. iff $F_n$ is odd. This is easily seen to be the case exactly when $n\not\equiv 0$ modulo $3$.
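The patterns in the question's table can also be checked mechanically with the standard integer recurrence for the continued fraction of $\sqrt D$ (a sketch of my own; here it confirms the $n=4$ and $n=5$ rows):

```python
from math import isqrt

def cf_sqrt(D, terms):
    """First partial quotients [a0; a1, a2, ...] of sqrt(D), D not a square."""
    a0 = isqrt(D)
    m, d, a, out = 0, 1, a0, [a0]
    for _ in range(terms):
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

for mm in (1, 2, 3):
    print(cf_sqrt(9 * mm * mm - 2 * mm, 8))         # [3m-1; 1,1,1, 2(3m-1), ...]
    print(cf_sqrt(25 * mm * mm - 14 * mm + 2, 10))  # [5m-2; 1,1,1,1, 2(5m-2), ...]
```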
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4425432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Probability of two people having the same birthday in a class. Problem: there are $n$ persons in a room; what is the probability that no two of them celebrate the same birthday in a year?
Here is my thought process,
The sample space is $|\{(b_1,b_2,\dots,b_n): b_i\in\{1,2,\dots, 365\}\text{ for all }i\}|=(365)^n$, and I got stuck at counting the event: $|\{(b_1,b_2,\dots,b_n):b_i\neq b_j\ \forall i\neq j \}|=365\cdot 364\cdots(365-n+1)$. But what if $n>365$? How do I count that?
| * Your first calculation is $365$,
* your second calculation $365\times 364$,
* ...,
* your $365$th calculation $365\times 364\times \cdots\times 1$,
* and your $366$th calculation $365\times 364\times \cdots\times 1\times 0$, which is $0$.
You can carry on further, but you will always have the $\times 0$ term with more people. So whenever you have more people than possible birthdays, you never get them each having a different birthday.
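A small sketch of my own computing the probability for general $n$: the product acquires a factor $0$ as soon as $n$ exceeds $365$.

```python
def p_all_distinct(n, days=365):
    """Probability that n people all have different birthdays."""
    p = 1.0
    for i in range(n):
        p *= (days - i) / days   # each new person must avoid the i earlier birthdays
    return p

for n in (10, 23, 50, 365, 366, 400):
    print(n, p_all_distinct(n))   # n = 366 and n = 400 give exactly 0.0
```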
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4425633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Quadratic equation: understanding how the absolute values in the derivation correspond to the $\pm$ symbol in the classic quadratic formula expression I'd like it if someone could help me understand the typical form of the quadratic formula, which, for the equation $ax^2+bx+c=0$, reads as $x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$, where $x \in \mathbb R$ and $a \neq 0$. Throughout this derivation, I will use the definition $\sqrt{x^2}=|x|$.
Here is my derivation, and I have placed a $\color{red}{\dagger}$ next to the part that I would like some clarification about:
$ax^2+bx+c =0 \iff x^2+\frac{bx}{a}+\frac{c}{a}=0 \iff (x+\frac{b}{2a})^2+(\frac{c}{a}-\frac{b^2}{4a^2})=0 \iff (x+\frac{b}{2a})^2+(\frac{4ac-b^2}{4a^2})=0 $
Bringing the right summand over to the right side of the equation:
$(x+\frac{b}{2a})^2=(\frac{b^2-4ac}{4a^2}) \iff \left| x+\frac{b}{2a}\right|=\sqrt{b^2-4ac}\cdot\sqrt{\frac{1}{4a^2}} \iff \left| x+\frac{b}{2a}\right|=\left|\frac{1}{2a} \right| \cdot \sqrt{b^2-4ac} \quad \quad \color{red}{\dagger}$
My confusion stems from how the final expression above is equivalent to the syntax "$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$" .
For $\color{red}{\dagger}$, we have 4 total cases:
* $ a \lt 0$ and $x+\frac{b}{2a} \lt 0$
* $ a \lt 0$ and $x+\frac{b}{2a} \geq 0$
* $ a \gt 0$ and $x+\frac{b}{2a} \lt 0$
* $ a \gt 0$ and $x+\frac{b}{2a} \geq 0$
Case 1: the conditions imply that $x=\frac{-b+\sqrt{b^2-4ac}}{2a}$
Case 2: the conditions imply that $x=\frac{-b-\sqrt{b^2-4ac}}{2a}$
Case 3: the conditions imply that $x=\frac{-b-\sqrt{b^2-4ac}}{2a}$
Case 4: the conditions imply that $x=\frac{-b+\sqrt{b^2-4ac}}{2a}$
From the four scenarios, is the way we get to "$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$" simply by noting that for a fixed $a$, we have $x=\frac{-b+\sqrt{b^2-4ac}}{2a} \text{ or } x=\frac{-b-\sqrt{b^2-4ac}}{2a}$? ...where the '$\text{ or }$' here is denoting the logical or.
At this point, we define "$x=\pm \alpha$" as meaning $x = \alpha \text { or } x=-\alpha$...therefore meaning that $x=\beta\pm \alpha$ is equivalent to $x=\beta+\alpha \text{ or } x=\beta - \alpha$.
Is that the proper understanding?
| An appealing way of understanding why $\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}$ can be written as $x=(\pm)\sqrt{\frac{b^2-4ac}{4a^2}}-\frac{b}{2a}$ is to take a set perspective.
The statement $\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}$ can be thought of as saying:
$x \in S$ where $S:=\left\{x\in \mathbb R:\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} \right\}$
The statement $x=(\pm)\sqrt{\frac{b^2-4ac}{4a^2}}-\frac{b}{2a}$ can be thought of as saying:
$ x \in T$ where $T:=\left\{x\in \mathbb R:x=(\pm)\sqrt{\frac{b^2-4ac}{4a^2}}-\frac{b}{2a}\right\}$
The objective is to show that $x \in S \rightarrow x \in T$ and $x \in T \rightarrow x \in S$.
There is a lemma that follows directly from the definition of $|\cdot|$, which reads as:
$|x|=C \iff x=C \text{ OR } x= -C \quad (*_1)$
To prove the $\rightarrow$ direction, simply exhaust all possible values of $x \in \mathbb R$ by splitting it into the two cases of $x \geq 0$ and $x \lt 0$. The $\leftarrow$ direction is trivial.
Prove:$\quad x \in S \rightarrow x \in T$
By assumption, we have that $\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2} $. Taking the square root of both sides, and applying the definition of $\sqrt{\cdot}$, we have:
$$\left |x + \frac{b}{2a} \right|=\sqrt{\frac{b^2 - 4ac}{4a^2}}$$
Applying our lemma $(*_1)$, we have the logical statement:
$$x+\frac{b}{2a}=\sqrt{\frac{b^2 - 4ac}{4a^2}} \quad\text { OR }\quad x+\frac{b}{2a}=-\sqrt{\frac{b^2 - 4ac}{4a^2}} \quad (*_2)$$
The symbol $\pm$ is defined to capture the meaning of $(*_2)$ and is equivalently written as:
$x+\frac{b}{2a}=(\pm)\sqrt{\frac{b^2 - 4ac}{4a^2}}$
Subtraction gives us: $x=(\pm)\sqrt{\frac{b^2 - 4ac}{4a^2}}-\frac{b}{2a}$, which means that $x \in T$.
Prove:$\quad x \in T \rightarrow x \in S$
By assumption, we have that $x=(\pm)\sqrt{\frac{b^2 - 4ac}{4a^2}}-\frac{b}{2a}$. This means that $x= \sqrt{\frac{b^2 - 4ac}{4a^2}}-\frac{b}{2a}\text { OR } x=-\sqrt{\frac{b^2 - 4ac}{4a^2}}-\frac{b}{2a}$.
In the first case, we have that $\left(x+\frac{b}{2a}\right)^2=\left(\sqrt{\frac{b^2 - 4ac}{4a^2}} \right)^2=\frac{b^2 - 4ac}{4a^2}$, which means that $x \in S$.
In the second case, we have that $\left(x+\frac{b}{2a}\right)^2=\left(-\sqrt{\frac{b^2 - 4ac}{4a^2}} \right)^2=\frac{b^2 - 4ac}{4a^2}$, which means that $x \in S$. Therefore, in all cases, we have that $x \in S$.
In conclusion, we have: $x \in S \iff x \in T$, which means that the two statements:
(1) $\left(x + \frac{b}{2a}\right)^2 = \frac{b^2 - 4ac}{4a^2}$
(2) $x=(\pm)\sqrt{\frac{b^2-4ac}{4a^2}}-\frac{b}{2a}$
are equivalent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4425762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is $\mathbb{P}_n^{-1}[x]$ a vector space?
Let $\mathbb{P}_n^{-1}[x]=\{a_0+a_1x+a_2x^2+\cdots+a_nx^n:a_0-a_1+\cdots+(-1)^na_n=0\}$.
Is $\mathbb{P}_n^{-1}[x]$ a vector space?
I took different values of $a_i$ for different polynomials such that they satisfy the condition $a_0-a_1+\cdots+(-1)^na_n=0$, and added them; the sums seem to satisfy the conditions for a vector space. Am I correct?
| If $P(x)$ is a polynomial with $\deg P(x)\leqslant n$, then$$P(x)\in\Bbb P_n^{-1}[x]\iff P(-1)=0.$$So,
* $\Bbb P_n^{-1}[x]$ is not empty, since the null polynomial belongs to it.
* If $P(x),Q(x)\in\Bbb P_n^{-1}[x]$, $(P+Q)(-1)=P(-1)+Q(-1)=0$.
* If $P(x)\in\Bbb P_n^{-1}[x]$ and $\lambda$ is a scalar, $(\lambda P)(-1)=\lambda P(-1)=0$.
So, yes, $\Bbb P_n^{-1}[x]$ is a vector space, since it is a subspace of the space of all polynomials $P(x)$ such that $\deg P(x)\leqslant n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4425935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Does $S\subset R$ imply $A\otimes_{R} A \subset A\otimes_{S} A$? I'm currently trying to prove Proposition 3.2 from the paper "Galois correspondence for Hopf Bigalois Extensions" by Peter Schauenburg, focusing on the last two implications it declares as "clear". The details of the proof are irrelevant, I just assumed something that now I'm not sure if holds:
Let $A$ be an $R$-module, and let $S\subset R$ ($A$ is also an
$S$-module). Then $A\otimes_RA\subset A\otimes_S A$.
Does that "$\subset$" make sense? Or should it be "$A\otimes_RA$ is a quotient of $A\otimes_S A$"? If not, is there any relation I can find between two ordered sets $S\subset R$ and their corresponding $\otimes_S$, $\otimes_R$? Any help will be appreciated, thanks in advance.
| Let $R$ be a ring, $M$ a right $R$-module, and $N$ a left $R$-module. The usual construction of the tensor product $M \otimes_R N$ is that we take the free abelian group on $M \times N$ and quotient by the minimum relations necessary for bilinearity: for all $m, m' \in M$, $r \in R$, and $n, n' \in N$,
* $(m, n + n') - (m, n) - (m, n') = 0$,
* $(m + m', n) - (m, n) - (m', n) = 0$,
* $(m \cdot r, n) - (m, r \cdot n) = 0$.
If $S \subseteq R$ is a subring, then $M \otimes_S N$ is constructed as the quotient of the same free abelian group by the same relations, except the last set of relations is smaller because we are only quantifying over all $r \in S$ rather than all $r \in R$. Thus, $M \otimes_R N$ is a quotient of $M \otimes_S N$. (You can also deduce this from the universal properties, but I think it's a little easier to see this from the construction.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4426142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $f$ is Riemann integrable and that $\int_a^b f (x)dx = 0$ Let $k \in \mathbb{R}$, $x_0 \in [a, b]$, and define $f:[a, b] \to \mathbb{R}$ via
$$
f(x)=\begin{cases}
0 & x \ne x_0\\
k & x= x_0
\end{cases}
$$
Show that $f$ is Riemann integrable and that $\int_a^b f (x)dx = 0$.
I know that I need to show that the lower and upper Riemann sums must be equal, but in my mind they are not equal, because at the point $x=x_0$, it will have an upper sum of $k$ but a lower sum of $0$. However, I understand and can visualize how the entire integral is equal to $0$.
Edit: I find the upper Riemann sum by
$$
U(f,P_n)=\sum_{i=1}^n M_i\Delta x_i.
$$
I know that $\Delta x_i =\frac{b-a}{n}$ for a uniform partition, but I don't know how to write $M_i$ because the function suddenly jumps up to $k$ at one point.
| The upper and lower sums will not be equal, but the upper and lower integrals will be. I'm not sure what notation you use, so I will use the one I am most familiar with. For a partition $P=\{x_0,x_1,\ldots,x_n\}$, the upper and lower sums are respectively \begin{align*}
U(f,P) &= \sum^n_{i=1} \left(\sup_{x\in[x_{i-1},x_i]} f(x)\right)(x_i-x_{i-1}), \\
L(f,P) &= \sum^n_{i=1} \left(\inf_{x\in[x_{i-1},x_i]} f(x)\right)(x_i-x_{i-1}).
\end{align*} Note that the width of the intervals comes into play here. You are right that the supremum or infimum of $f$ will take the value $k$ on one (or maybe two) of those intervals, but the width of those intervals can be made very small, so that contribution can be made as small as you like.
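A small numerical illustration of this (my own sketch): for the function in the question with, say, $[a,b]=[0,1]$, $x_0=0.3$ and $k=5$ (taking $k>0$ for the sup computation), the upper sums over uniform partitions shrink like $k(b-a)/n$, while every lower sum is $0$.

```python
def upper_sum(n, a=0.0, b=1.0, x0=0.3, k=5.0):
    """U(f, P_n) for f = k at x0 and 0 elsewhere, uniform n-part partition of [a, b]."""
    w = (b - a) / n
    total = 0.0
    for i in range(n):
        lo, hi = a + i * w, a + (i + 1) * w
        sup_f = k if lo <= x0 <= hi else 0.0   # sup of f on the i-th subinterval
        total += sup_f * w
    return total

for n in (10, 100, 1000, 10000):
    print(n, upper_sum(n))   # tends to 0, so upper and lower integrals agree
```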
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4426293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Solve a triangle given one angle of the median, and a side and an angle of its external triangle This is a modified version of this question, and as such I'm using similar wording and visuals. Given:
*
*$α_2$, one of the two angles which the vertex $A$ is split by the median $m$;
*$\overline{AD}$, the length of the segment of a triangle external to $ABC$;
*$\overline{BM}$, the length of half of the side split by the median;
*$θ$, the angle opposite to the shared side $\overline{AB}$;
Find the value of the angles $γ$ and $b_2$.
Similarly to the original question, the proportion $\frac{\sin{α_1}}{\sin{α_2}}=\frac{\sin{β_1}}{\sin{γ}}$ holds true. And it's still true that $β_1 = 180 - α_1 - α_2 - γ$. However, unlike the original scenario, we don't know the value of $α_1$.
With the external angle theorem, we know that $α_1 = θ + β_2 - α_2$, but now we have the $β_2$ variable to resolve.
I figured we could use the law of sines to establish $\frac{\sin{β_2}}{\overline{AD}}=\frac{\sin{θ}}{\overline{AB}}$, but after a few variable swaps the problem ultimately seems to loop back to $α_1$.
I feel that this problem is solvable, as the given parameters uniquely identify the two triangles. But I can't find out how to take it further than I have.
| To better highlight the unknown terms, let us set $\alpha_1=x$, $\beta_1=y$, $\gamma=z$. The known quantities are $\alpha_2$, $AD$, $BM$, $\theta$.
The first two equations, already reported in the OP, are:
$$
\frac{\sin{x}}{\sin{α_2}}=\frac{\sin{y}}{\sin{z}}
\tag{1}$$
$$y = 180 - x - α_2 - z \tag{2}$$
As we have $3$ unknowns, we need a third equation.
From the triangle $\triangle{ABD}$ we have:
$$\frac{AB}{\sin \theta}=\frac{AD}{\sin \beta_2}$$
Here we can try to express $AB$ and $\beta_2$ as functions of $x,y,z$ and the other known terms. Considering the triangle $\triangle{ABC}$, we have
$$AB=\frac{2BM}{\sin(x+\alpha_2)} \,\sin z$$
Also, by the exterior angle theorem, $$\beta_2=x+\alpha_2-\theta$$
Substituting, we obtain our third equation, which complete the system:
$$\frac{2\,BM \, \sin z }{\sin(x+\alpha_2)} \,= \frac{AD \, \sin \theta }{\sin (x+\alpha_2-\theta)} \tag{3}$$
The system can then be solved by the usual methods. Once we obtained $x,y,z$, we can also calculate $\beta_2$ using the formula above.
To provide an example of how the system works: let us consider the very simple case where the triangle $\triangle{ABC}$ is right and isosceles with hypotenuse equal to $2$, and the triangle $\triangle{ABD}$ is right with $\theta=60°$. In this case, the known terms are $\alpha _2=\pi/4$, $\theta=\pi/3$, $BM=1$, and $AD=1/\sqrt{6}$. By construction, this case corresponds to the trivial case where $x=y=z=\pi/4$ and $\beta_2=\pi/6$.
As expected, the system provides the correct $x,y,z$ solutions, as shown here. From this, we also easily get $\beta_2=\pi/4+\pi/4-\pi/3$ $=\pi/6$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4426514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
In the category of schemes, what conditions on a closed monomorphism make it a closed immersion? If the question can be simplified, we can work on smaller categories, such as the category of varieties or schemes of finite type over a field, etc.
By a closed morphism of schemes I mean it is closed as a continuous map, not a closed immersion.
| Closed immersions are exactly the proper monomorphisms, see Stacks 04KV. You'll also find a few other conditions there. Any of the following conditions in addition to being a monomorphism will imply your morphism is a closed immersion:
* proper (i.e. universally closed, separated, finite type)
* universally closed + unramified
* universally closed + locally of finite type
This implies, for instance, that a universally closed monomorphism of varieties (schemes of finite type over a field) is a closed immersion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4426715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\text{rank}(\mathbf{I}-\mathbf{X}(\mathbf{X}^\intercal\mathbf{X})^{-1}\mathbf{X}^\intercal)=n-k-1$, where $\mathbf{X}$ is $n\times k+1$ This is required to show that $\text{SSR}/\sigma^2$ is $\chi^2(n-k-1)$, where SSR is the residual sum of squares $(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}})^\intercal(\mathbf{y}-\mathbf{X}\boldsymbol{\hat{\beta}}) = \mathbf{y}'(\mathbf{I}-\mathbf{X}(\mathbf{X}^{\intercal}\mathbf{X})^{-1}\mathbf{X}^\intercal)\mathbf{y}$.
My attempt
(Let $\mathbf{H} = \mathbf{X}(\mathbf{X}^{\intercal}\mathbf{X})^{-1}\mathbf{X}^\intercal$)
$\mathbf{I}-\mathbf{H}$ is symmetric and idempotent. Therefore, it has $\text{rank}(\mathbf{I}-\mathbf{H})=r$ eigenvalues equal to 1, and $n-r$ eigenvalues equal to 0. Hence,
$\text{rank}(\mathbf{I}-\mathbf{H}) = \sum_{i=1}^n \lambda_{1i} = n - \sum_{i=1}^n \lambda_{2i}$, where $\lambda_{1i}$ are the eigenvalues of $\mathbf{I}-\mathbf{H}$ and $\lambda_{2i}$ are the eigenvalues of $\mathbf{H}$. But $\mathbf{H}$ is symmetric and idempotent as well, so $\sum_{i=1}^n \lambda_{2i} = \text{rank}(\mathbf{H})$. I could start from $\text{rank}{(\mathbf{X}^\intercal \mathbf{X})^{-1}}= k+1$ and apply some theorem "If $\mathbf{A}$ is $p\times p$ and symmetric of rank $p$ and $\mathbf{B}$ is $n\times p$ of rank $p$ then $\text{rank}(\mathbf{BAB^\intercal}) = p$" to show that $\text{rank}(\mathbf{H})=k+1$ as well, but is there such a theorem? Is there a shorter way?
| We know that $H^2 = H \cdot H = H$, and using this equation we get
$(I-H)^2 = I-H$, which shows that $I-H$ is an idempotent matrix.
Using the fact that the rank of an idempotent matrix equals its trace, together with $tr(H) = tr\big(X(X^\intercal X)^{-1}X^\intercal\big) = tr\big((X^\intercal X)^{-1}X^\intercal X\big) = tr(I_{k+1}) = k+1$ (by the cyclic property of the trace),
$$
rank(I-H) = tr(I - H) = tr(I) - tr(H) = n - (k+1).
$$
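A quick numerical verification of both the trace identity and the rank (my own sketch, with a random full-rank design matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 4
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])  # n x (k+1)
H = X @ np.linalg.inv(X.T @ X) @ X.T                            # hat matrix

print(np.linalg.matrix_rank(np.eye(n) - H))   # 45 = n - k - 1
print(round(np.trace(H)))                     # 5  = k + 1
```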
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4426892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Absolute integrability and limits Suppose that $h \in L_1(\mathbb{R})$ and define
$$
g(t) = \int_{t}^{\infty}h(\tau)\,\mathrm{d}\tau.
$$
Is it true that $\lim_{t\to\infty}g(t) = 0$? Seems 'obviously' true to me, but what's the formal way to argue it?
| Let's first prove it with the assumption that $h\geq 0$. Consider the function $h_t(\tau) = h(\tau)\chi_{[0,t]}(\tau)$, where $\chi$ is the indicator function. Clearly $h_t\leq h$ and $h_t\to h$ pointwise as $t\to\infty$. Thus the dominated convergence theorem implies
$$\lim_{t\to\infty}\int_0^t h(\tau)\,d\tau = \int_0^\infty h(\tau)\,d\tau.$$
Now, since $h\in L^1(\mathbb{R})$ we can rearrange to find that
$$g(t) = \int_t^\infty h(\tau)\, d\tau = \int_0^\infty h(\tau)\,d\tau - \int_0^t h(\tau)\,d\tau.$$
Taking the limit $t\to\infty$ above proves the claim.
Now for general $h$ we can decompose it into positive and negative parts: $h =h^+ - h^-$ where $h^+,h^-\geq 0$ and both are $L^1$. Linearity of integration and limits now gives us the result in this case.
One word of caution: technically the dominated convergence theorem requires us to use a sequence of functions, rather than a continuously parameterized family like $h_t$. On the other hand, checking a limit $t\to\infty$ is equivalent to checking it for all sequences $t_n\to\infty$, so we can move between the two without issue.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4427249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is $A=\{ (x,y)\mid x>0,\ 0<x^2+y^2<5\}$ an open set? I know that if, say, we omit the condition "$x>0$" in the set $A$ and call the result $A'$, we can apply the theorem "a criterion for continuity of mappings in terms of open sets": we can define $f:\mathbb R^2 \to \mathbb R$ by
$f(x,y)= x^2 + y^2$
then since $f$ is a polynomial it is continuous, and we know that $(0,5)$ is an open set in $\mathbb R$, so we can conclude that $f^{-1}((0,5))$, which is the set $A'$, is open. Firstly, what do they mean by taking the inverse $f^{-1}$? It is not even a one-to-one function: for example, the value $1$ in $\mathbb R$ is attained at both $(1,0)$ and $(-1,0)$ in $\mathbb R^2$.
I also doubt whether we can do the same thing for the set $A$, because we have an extra restriction there (i.e. $x>0$). Can someone explain? My guess is that we cannot, since $f$ is not even one-to-one: for example, the value $1$ is attained at both $(1,0)$ and $(-1,0)$, and $(-1,0)$ is outside the set $A$.
| The function $f(x,y)=x^2+y^2$ is continuous from $\Bbb R^2$ to $\Bbb R,$ and $(0,5)$ is open in $\Bbb R.$ So $B=f^{-1}(0,5)$ is open in $\Bbb R^2.$ (Here $f^{-1}(U)$ denotes the preimage $\{p\in\Bbb R^2 : f(p)\in U\}$, which makes sense even when $f$ is not one-to-one.)
The function $g(x,y)=x$ is continuous from $\Bbb R^2$ to $\Bbb R.$ And $\Bbb R^+$ is open in $\Bbb R.$ So $C=g^{-1}(\Bbb R^+)$ is open in $\Bbb R^2.$
So $A=B\cap C$ is the intersection of two open sets, so $A$ is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4427449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Concatenation of two oriented curves I've encountered a question with a notation I've never seen before, and I was wondering if I could get some clarification as to what it could mean. Here's an excerpt from the question:
Given two oriented curves $\mathbf{C}_1 : R_1(t) = (\sin t, \cos t, 0), 0 \leq t \leq \pi$ and $\mathbf{C}_2 : R_2(t) = (0, -\cos t, \sin t), 0 \leq t \leq \pi$, let $\mathbf{C}$ be the concatenation $\mathbf{C}_1 \cdot \mathbf{C}_2$.
Is the dot notation here ($\cdot$) effectively being used as a union ($\cup$) of the two curves? I noticed that the end points of the two curves match to form a loop, so I'm wondering if that's relevant here too.
Thank you!
| What you noticed is exactly correct.
Here's the general definition of concatenation. Suppose that you have two parameterized curves of the form
$$c_1 : [0,a_1] \to \mathbb R^n, \qquad c_2 : [0,a_2] \to \mathbb R^n
$$
Suppose also that $c_1(a_1)=c_2(0)$, so the terminal endpoint of curve $c_1$ is the same as the initial endpoint of curve $c_2$. The concatenation of $c_1$ with $c_2$, which I will denote $c_1 * c_2$ instead of re-using that over-used "dot" notation, is the curve
$$c_1 * c_2 : [0,a_1 + a_2] \to \mathbb R^n
$$
defined by
$$c_1 * c_2(t) = \begin{cases}
c_1(t) &\quad\text{if $0 \le t \le a_1$} \\
c_2(t-a_1) &\quad\text{if $a_1 \le t \le a_1 + a_2$}
\end{cases}
$$
One thing that's nice about this notation is that you can then use it in path integral settings, and you get equations like this:
$$\int_{c_1 * c_2} \text{[something]} \, dt = \int_{c_1} \text{[something]}\, dt + \int_{c_2} \text{[something]}\, dt
$$
I'll mention also that this concatenation notation becomes very important in more advanced courses, particularly topology.
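For concreteness, here is a small Python sketch of this definition applied to the curves from the question; `concatenate` is a hypothetical helper, not standard library code:

```python
import numpy as np

def concatenate(c1, a1, c2, a2):
    """Concatenation c1 * c2 of c1: [0,a1] -> R^n and c2: [0,a2] -> R^n,
    assuming c1(a1) == c2(0)."""
    def c(t):
        return c1(t) if t <= a1 else c2(t - a1)
    return c, a1 + a2

# The curves C1 and C2 from the question, both parameterized on [0, pi].
R1 = lambda t: np.array([np.sin(t), np.cos(t), 0.0])
R2 = lambda t: np.array([0.0, -np.cos(t), np.sin(t)])

C, total_param = concatenate(R1, np.pi, R2, np.pi)
print(C(np.pi / 2))          # a point on C1
print(C(np.pi + np.pi / 2))  # a point on C2
```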
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4427618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
$L^p_{\text{loc}}$ convergence implies almost everywhere convergence Suppose $\{f_n\}\subset L^p(B)$ for the unit ball $B\subset\mathbb{R}^n$ converges in $L^p_{\text{loc}}$, i.e. $\int_V\lvert f_n(x)-f(x)\rvert^p dx\rightarrow 0$ for all $V\subset\subset B$. Can we say that $f_{n_i}(x)\rightarrow f(x)$ almost everywhere in $B$ for some subsequence $\{f_{n_i}\}$?
My idea for a proof was to consider a sequence of balls $B_{r_k}$ with $r_k\rightarrow 1$ and denote by $\{f^k_n\}$ the subsequence that converges almost everywhere in $B_{r_k}$. Then I thought I could pick a diagonal subsequence that converges almost everywhere in $B$. However I'm not sure how to show that the diagonal sequence converges almost everywhere.
Maybe this is not true, in that case can we find a counter-example?
| You need to be careful about how exactly your subsequences are constructed.
Take a sequence $r_k\nearrow 1$ as $k\to\infty$, and let $B_k := B(0,r_k)$.
Let us start with $B_1$. By the assumptions there is a subsequence $f_{\sigma(1,1)}, f_{\sigma(1,2)}, f_{\sigma(1,3)},\dotsc$ which converges pointwise a.e. in $B_1$. Let $Z_1\subset B_1$ be the measure zero set where the pointwise convergence fails, so $(f_{\sigma(1,n)})$ converges pointwise everywhere in $B_1\setminus Z_1$ as $n\to\infty$.
Now, for $B_2$, we let $(f_{\sigma(2,n)})$ be a further subsequence of $(f_{\sigma(1,n)})$ that converges pointwise a.e. in $B_2$. This is possible since $(f_{\sigma(1,n)})$ certainly converges in $L_{\rm loc}^p$ as well. Continuing in this manner, we let $(f_{\sigma(k,n)})$ be a subsequence of the $(f_n)$ with the properties that
*
*$(f_{\sigma(k,n)})$ is a subsequence of $(f_{\sigma(k-1,n)})$
*$(f_{\sigma(k,n)})$ converges pointwise a.e. in $B_k$, with exceptional measure zero set $Z_k\subset B_k$
Let's arrange everything into a grid:
\begin{equation*}
\begin{array}{cccc}
f_{\sigma(1,1)} & f_{\sigma(1,2)} & f_{\sigma(1,3)} &\cdots \\
f_{\sigma(2,1)} &f_{\sigma(2,2)} & f_{\sigma(2,3)} & \cdots \\
f_{\sigma(3,1)} &f_{\sigma(3,2)} & f_{\sigma(3,3)} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{array}
\end{equation*}
It's then not hard to see that the diagonal sequence $g_n = f_{\sigma(n,n)}$ is such that $(g_n)_{n\geq k}$ is a subsequence of $(f_{\sigma(k,n)})_{n\geq 1}$.
Let $Z = \bigcup_i Z_i$, and note that $Z$ is measure zero as the countable union of measure zero sets. Let $x\in B\setminus Z$. Then $x\in B_k\setminus Z_k$ for some $k$. For this $k$, we see that $g_n(x)\to f(x)$ as $n\to\infty$ since, for $n\geq k$, $g_n(x) = f_{\sigma(n,n)}(x)$ is a subsequence of $(f_{\sigma(k,n)}(x))$, the latter of which has limit $f(x)$ by construction. Thus the $(g_n)$ converge pointwise everywhere in $B\setminus Z$, and is thus the desired subsequence of the original $(f_n)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4427829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How many ordered quadruples $(a,b,c,d)$ satisfy $a+2b+3c+4d=420$ where $a,b,c,d$ natural numbers or $\mathbb{N}^+$ How many ordered quadruples $(a,b,c,d)$ satisfy $a+2b+3c+4d=420$ where $a,b,c,d$ natural numbers or $\mathbb{N}^+$
I know we need to find the coefficient of $x^{420}$ in
$$
\frac{x}{(1-x)} \cdot \frac{x^2}{(1-x^2)} \cdot \frac{x^3}{(1-x^3)} \cdot \frac{x^4}{(1-x^4)}
$$
Could someone help me to finish this?
| Besides the fact that I don't know anything about generating functions, one thing that I dislike about that approach is that you simply seem to be transferring the computational difficulty.
That is, I have seen people set up an answer, based on generating functions, and then use computer software to calculate the pertinent coefficient. It seems to me that you could just as readily have the computer directly count all such solutions.
The following approach suffers from the same difficulty. That is, I can set up the computations, but then there won't be any elegant way of finding the final answer. My understanding is that absent generating functions, the approach given below is standard.
How many ordered quadruples $(a,b,c,d)$ satisfy $a+2b+3c+4d=420$ where $a,b,c,d$ natural numbers or $\mathbb{N}^+$
I am assuming that $a,b,c,d$ must each be a positive integer.
For any real number $r$, let $\lfloor r\rfloor$ (i.e. the floor function) denote the largest integer $\leq r.$
Work backwards.
Suppose that you have the equation
$a + 2b = n,$ where $n$ is a positive integer,
and you want to know how many solutions there are in positive integers $a,b$.
Here, $b$ can take on any value from $1$ through
$\displaystyle \left\lfloor \frac{n-1}{2}\right\rfloor.$
So, let $f_2(n)$ denote
$\displaystyle \left\lfloor \frac{n-1}{2}\right\rfloor ~: ~n \in \Bbb{Z^+}.$
Now, consider the equation
$a + 2b + 3c = n.$
Here, $c$ can take on any value from $1$ through
$\left\lfloor \frac{n-1}{3}\right\rfloor.$
For each such value for $c$, there will be $f_2(n-3c)$ solutions.
So, the number of solutions to
$a + 2b + 3c = n$
is
$$f_3(n) = \sum_{c=1}^{\left\lfloor \frac{n-1}{3}\right\rfloor} f_2(n - 3c).$$
In a very similar fashion, the final computation will be
$$\sum_{d=1}^{\left\lfloor \frac{420-1}{4}\right\rfloor} f_3(420 - 4d). \tag1 $$
The difficulty with the answer given in (1) above is: so what? Again, all that has been done is to organize the very cumbersome calculations. You still wouldn't want to try calculating this manually.
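That said, the calculation in (1) is easy to hand off to a computer; the following Python sketch implements $f_2$, $f_3$ and the sum in (1), together with an independent brute-force check:

```python
def f2(n):   # number of positive-integer solutions of a + 2b = n
    return (n - 1) // 2

def f3(n):   # number of positive-integer solutions of a + 2b + 3c = n
    return sum(f2(n - 3*c) for c in range(1, (n - 1) // 3 + 1))

total = sum(f3(420 - 4*d) for d in range(1, (420 - 1) // 4 + 1))

# independent brute-force count over (b, c, d), with a = 420 - 2b - 3c - 4d >= 1
brute = sum(1 for d in range(1, (420 - 1) // 4 + 1)
              for c in range(1, (420 - 4*d - 1) // 3 + 1)
              for b in range(1, (420 - 4*d - 3*c - 1) // 2 + 1))
print(total, brute)  # the two counts agree
```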
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4428004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that $\sup_n(|X_n|) < \infty$? Suppose that $X_1,X_2,....$ is a martingale satisfying $E[X_1]=0$ and $E[X_n^2] <\infty$. Assume that $\sum_n E[(X_n-X_{n-1})^2] < \infty$. Prove that $X_n$ converges with probability $1$.
I am using the martingale convergence theorem to solve this problem and it is stated as:
Let $X_1,X_2,...$ be a submartingale. If $K=\sup_n E(|X_n|) <
\infty$, then $X_n \rightarrow X$ with probability $1$.
I am trying to show that $\sup_n E(|X_n|) < \infty$. In this case,
\begin{eqnarray*}
X_n &=& (X_n - X_{n-1})+(X_{n-1}-X_{n-2})+\cdots+(X_2-X_1)+X_1\\
|X_n| & \leq& \sum_{i=2}^n |X_i-X_{i-1}|+|X_1|
\end{eqnarray*}
I am confused here, how to use $\sum_n E[(X_n-X_{n-1})^2] < \infty$ to show that $\sup_n E(|X_n|) < \infty$.
Anyone can suggest some direction to prove this condition?
A stronger result is true: $\sup_n EX_n^{2}<\infty$. To see this note that for $n \geq m$ we have $E[(X_{n+1}-X_n)(X_m-X_{m-1})]=E[E((X_{n+1}-X_n)(X_m-X_{m-1})\mid\mathcal F_n)]=0$ by the martingale property. Let $Y_n=X_n-X_{n-1}$. Then $(Y_n)$ is an orthogonal sequence in $L^{2}$ and $\sum \|Y_n\|^{2} <\infty$. [The norm here is the $L^{2}$ norm]. This implies that the series $\sum Y_n$ converges in $L^{2}$. In particular, $\|Y_1+Y_2+\cdots+Y_n\|$ is bounded and this shows that $\sup_n E|X_n|^{2} <\infty$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4428239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's the group extension corresponding to the sum of 2 classes of $H^2(G,A)$? I'm learning Galois/group cohomology and I've just seen that the second cohomology group $H^2(G,A)$ (constructed as factor systems modulo split factor systems) classifies the group extensions of $G$ with abelian kernel $A$ up to equivalence.
My question is, given 2 extensions $$1\to A\to E\to G\to 1$$ and $$1\to A\to E'\to G\to 1$$ with cohomology classes $[c]$ and $[c']$ respectively ($[c]\ne [c']$), what is the extension (or the class of equivalent extensions) with cohomology class $[c]+[c']$ (assuming it is non-trivial)? Does it have a general form or does it depend on the specific case?
I have not found it anywhere so I would appreciate some book where I can find it.
| The extension corresponding to the sum of cohomology classes represented by the two original extensions is called the Baer sum of the two extensions.
It can be defined using just group theory without cohomology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4428518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Intuitively, what is the difference between a "simply connected" set and a "locally connected" set? I was looking into the Mandelbrot Set and saw a note that said it has been proven that the Mandelbrot Set is "simply connected" but it is still an open question of whether or not it is "locally connected" (MLC).
I can easily understand what simply connected means intuitively--you can draw a line between any two points in the set, it has no holes, etc.
But I'm having trouble understanding visually what "locally connected" means, and what the difference is between simple connection and local connection. Every page I see on local connectedness only describes it in topological jargon that goes over my head.
Is there any way to intuitively or visually describe what the difference is between a locally and simply connected set?
Locally connected intuitively means that when you "make a sufficient amount of zoom" in any part of the object, you are always going to "see" one total piece rather than a lot of small pieces. For example the famous "topologist's sine curve" is a space that is NOT locally connected; no matter how much you zoom in on the $Y$-axis you are always going to see a lot of separated bars, rather than "one" object.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4428688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Geometric problem about incenters
As shown below, $BC$ is a diameter of $\odot F$. If $AG$ bisects $\angle DAE$ and $FG$ bisects $\angle DFE$, show that $G$ is the incenter of $\triangle DEF$.
$\because FD=FE,FG$ bisects $\angle DFE$.
$\therefore GD=GE$.
$\because AG$ bisects $\angle DAE$.
$\therefore ADGE$ is concyclic or $\triangle ADG\cong\triangle AEG$.
Apparently it’s the former not the latter.
$\therefore\angle GDE=\angle GAE$.
Now we need $\angle FDG=\angle GAE$. I’m not sure whether this contributes to the problem solving.
| Please note
$$ \begin {align}
\angle DFE &= 180^\circ - (\angle BFD + \angle CFE) \\
& = 180^\circ - (180^\circ - 2 \angle B + 180^\circ - 2 \angle C)\\
& = 2 (\angle B + \angle C) - 180^\circ\\
& = 2 (180^\circ - \angle A) - 180^\circ\\
& = 180^\circ - 2 \angle A \\
\text {So, }\angle FDE &= \angle DEF = \angle A\\
\end {align}
$$
Alternatively note that $\angle ADE = \angle C$ and so $\angle FDE = 180^\circ - \angle B - \angle C = \angle A$. Similarly, $\angle DEF = \angle A$.
Now given $G$ is intersection of perpendicular bisector of $DE$ and angle bisector of $\angle A$, it must be on the circumcircle of $\triangle ADE$.
That leads to $~\angle GDE = \angle DEG = \angle A/2$
So $DG$ and $GE$ are angle bisector of $\angle FDE$ and $\angle DEF$ respectively.
Therefore we conclude that $G$ is the incenter of $\triangle DEF$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4428879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What does an exponent of a differential form mean? I know that the wedge product is alternating, $\omega \wedge \omega =0$, but I still see people talking about $\omega^{\wedge n}$, like when a power of a symplectic form is considered as the volume form. I assume that this volume form is meant to be non-zero in higher dimensions but I can't see how. Is this just some weird shorthand for something else?
| If $\omega$ is a $p$-form and $\eta$ is a $q$-form, then $\omega\wedge\eta = (-1)^{pq}\eta\wedge\omega$. In particular, if $p$ is odd, then $\omega\wedge\omega = -\omega\wedge\omega$ and hence $\omega\wedge\omega = 0$. However, if $p$ is even, then $\omega\wedge\omega$ need not be zero. For example, if $\omega$ is a symplectic form on a $2n$-dimensional manifold with $n > 1$, then $\omega\wedge\omega \neq 0$; in fact, $\omega^n = \underbrace{\omega\wedge\dots\wedge\omega}_{n\ \text{times}}$ is non-zero.
For an explicit example, consider $\omega = dx^1\wedge dx^2 + dx^3\wedge dx^4$ on $\mathbb{R}^4$. Note that
$$\omega\wedge\omega = 2dx^1\wedge dx^2\wedge dx^3\wedge dx^4 \neq 0.$$
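Here is a small self-contained Python check of that computation, representing a constant-coefficient $k$-form on $\mathbb{R}^4$ as a dictionary from sorted index tuples to coefficients (an ad hoc encoding, just for this sanity check):

```python
from itertools import product

def wedge(w1, w2):
    """Wedge product of constant-coefficient forms given as {index tuple: coeff}."""
    out = {}
    for (i1, c1), (i2, c2) in product(w1.items(), w2.items()):
        idx = i1 + i2
        if len(set(idx)) < len(idx):
            continue                      # a repeated dx kills the term
        # sign of the permutation sorting idx = parity of the inversion count
        inv = sum(1 for a in range(len(idx)) for b in range(a + 1, len(idx))
                  if idx[a] > idx[b])
        key = tuple(sorted(idx))
        out[key] = out.get(key, 0) + (-1)**inv * c1 * c2
    return {k: v for k, v in out.items() if v != 0}

omega = {(1, 2): 1, (3, 4): 1}            # dx1^dx2 + dx3^dx4
print(wedge(omega, omega))                # {(1, 2, 3, 4): 2}
```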
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4429077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Assume that there are infinitely many Mersenne primes, prove there are infinitely many positive integers m and n such that $\phi(m)=\sigma(n)$. Conjecture 3 of chapter 1 states that there are infinitely many Mersenne prime numbers. Mersenne prime numbers are integers of the form $2^p-1$ where $p$ is prime. The first 3 Mersenne prime numbers are $2^2-1=3$, $2^3-1=7$, $2^5-1=31$. Let $2^p-1$ be a Mersenne prime; then $\phi(2^p-1)=2^p-1-1=2^p-2$ and $\sigma(2^p-1)=2^p-1+1=2^p$
I feel like I'm on the right lines but I just can't put the last bit together. Could someone give me some ideas, please?
($\phi(m)$ is the number of integers relatively prime to $m$ and $\sigma(n)$ is the sum of the divisors of $n$.)
| Hint: If you're supposed to use the existence of infinitely many Mersenne primes, it's likely that the construction of either $m$ or $n$ will involve Mersenne primes, but not necessarily both. Can you find a number with $\sigma(n)=2^p-2$? How about a number with $\phi(m)=2^p$? It may be useful to remember how $\phi$ and $\sigma$ relate to prime factorizations, to find a number $m$ or $n$ with a nice prime factorization and the desired property.
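As a sanity check of one way to instantiate the hint (taking $m = 2^{p+1}$ and $n = 2^p - 1$, so that $\phi(m) = 2^p = \sigma(n)$ whenever $2^p-1$ is a Mersenne prime; this is my own instantiation, not necessarily the intended one):

```python
from sympy import totient, divisor_sigma, isprime

for p in [2, 3, 5, 7, 13]:
    assert isprime(2**p - 1)          # these are Mersenne prime exponents
    m, n = 2**(p + 1), 2**p - 1
    print(p, totient(m), divisor_sigma(n), totient(m) == divisor_sigma(n))
```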
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4429286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration of a piecewise defined discontinuous function In a proof I posted recently on this site (link) I made the mistake of thinking that a function $f$, which is bounded on a closed interval $[a,b]$, would assume its infimum at a certain $x$ in the domain of $f$. I was presented with a counterexample which, for my ability, seemed rather complicated. So I tried to find an easier counterexample. I found the following function:
$$ f(x) = \begin{cases}
x & x > 0 \\
1 & x = 0
\end{cases} $$
Here, $ inf \{ f(x) : 0 \le x \le 1 \} = 0 $ but $f(x) \neq 0$ for all $x$.
I guess this is suitable as a counterexample. Then I came up with the idea of trying to integrate this function. So I came to the following proposition:
Proposition: Let $f$ be a function defined on $[0,b]$ as follows:
$$ f(x) = \begin{cases}
x & x > 0 \\
1 & x = 0
\end{cases} $$
This function is integrable with:
$$ \int_0^b f = \frac{b^2}{2}$$
Proof: I use a partition $ P = \{t_0, ... , t_n \} $ of $[0,b]$ with
$$ t_i - t_{i-1} = \frac{b}{n} $$
$$ t_i = \frac{ib}{n} $$
$$ t_{i-1} = \frac{(i-1)b}{n} $$
$$ m_i = \inf \{ f(x) : t_{i-1} \le x \le t_i \} = \frac{(i-1)b}{n} \quad (i \neq 1) $$
$$ m_1 = \inf \{ f(x) : t_0 \le x \le t_1 \} = 0 $$
$$ M_i = \sup \{ f(x) : t_{i-1} \le x \le t_i \} = \frac{ib}{n} \quad (i \neq 1) $$
$$ M_1 = \sup \{ f(x) : t_0 \le x \le t_1 \} = 1 $$
Then we have
$$ L(f, P) = \sum_{i=2}^n m_i \cdot \frac{b}{n} + m_1 \cdot \frac{b}{n} = \sum_{i=2}^n \frac{(i-1)b}{n} \cdot \frac{b}{n} + 0 \cdot \frac{b}{n} = \frac{b^2}{n^2} \cdot \sum_{j=1}^{n-1} j = \frac{b^2}{2} \cdot \frac{n-1}{n}$$
and
$$ U(f, P) = \sum_{i=2}^n M_i \cdot \frac{b}{n} + M_1 \cdot \frac{b}{n} = \sum_{i=2}^n \frac{ib}{n} \cdot \frac{b}{n} + 1 \cdot \frac{b}{n} = \Biggl[ \frac{b^2}{n^2} \cdot \sum_{j=1}^{n-1} (j+1) \Biggr] + \frac{b}{n} = \frac{b^2}{2} \cdot \frac{n+1}{n} + \frac{b}{n}$$
For the difference of the upper and lower sums this results in
$$ U(f, P) - L(f, P) = \frac{b^2}{2} \cdot \frac{n+1}{n} + \frac{b}{n} - \frac{b^2}{2} \cdot \frac{n-1}{n} = \frac{b^2}{2} \cdot \frac{n+1}{n} - \frac{b^2}{2} \cdot \frac{n-1}{n} + \frac{b}{n} = \frac{b^2}{2} \cdot \frac{2}{n} + \frac{b}{n} = \frac{b^2+b}{n}$$
So in order get $ U(f, P) - L(f, P) < \epsilon$ we can choose $ n > \frac{b^2+b}{\epsilon} $. Thus $f$ is integrable and since
$$ \frac{b^2}{2} \cdot \frac{n-1}{n} \le \frac{b^2}{2} \le \frac{b^2}{2} \cdot \frac{n+1}{n} + \frac{b}{n}$$
and the integral is unique, if it exists, we have
$$ \int_0^b f = \frac{b^2}{2}$$
as required. $ \blacksquare $
All these equations and manipulations are quite complex regarding my ability. So might anyone tell me if this is correct or point me towards my mistakes? Thanks in advance.
In retrospect I think I found a mistake in my proof. I used $\sum_{j=1}^{n-1} (j+1) = \frac{n(n+1)}{2}$ which is false, since $\sum_{j=1}^{n-1} (j+1) = \frac{n^2 + n -2}{2}$.
With this discovery and the hint of @Andrea S. i found
$$ U(f, P) = \sum_{i=2}^n M_i \cdot \frac{b}{n} + M_1 \cdot \frac{b}{n} = \sum_{i=2}^n \frac{ib}{n} \cdot \frac{b}{n} + 1 \cdot \frac{b}{n} = \Biggl[ \frac{b^2}{n^2} \cdot \sum_{j=1}^{n-1} (j+1) \Biggr] + \frac{b}{n} = \frac{b^2}{2} \cdot \frac{(n-1)(n+2)}{n^2} + \frac{b}{n} = \frac{b^2}{2} \cdot \frac{(n-1)}{n} \cdot \frac{n+2}{n} + \frac{b}{n}$$
and
$$ L(f, P) = \frac{b^2}{2} \cdot \frac{n-1}{n}$$
Now, since $\lim_{n\to \infty} \frac{(n-1)}{n} = \lim_{n\to \infty} \frac{n+2}{n} = 1$ and $\lim_{n\to \infty} \frac{b}{n} = 0$ we have
$$\lim_{n\to \infty}L(f,P_n)=\lim_{n\to \infty}U(f,P_n)=\frac{b^2}{2}$$
as required. $\blacksquare$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4429665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How can I find the eigenvalues of this $2n \times 2n$ matrix? The matrix I am dealing with is of the form below.
$$\begin{pmatrix} 1-\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & \cdots & -\frac{1}{\sqrt{2^n}}\\ -\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}}& \cdots & -\frac{1}{\sqrt{2^n}} \\ -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}}& \cdots & -\frac{1}{\sqrt{2^n}}\\-\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}}& -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & \cdots & -\frac{1}{\sqrt{2^n}}\\ \vdots & \vdots& \vdots& \vdots & \ddots & \vdots\\ -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & -\frac{1}{\sqrt{2^n}} & \cdots & -\frac{1}{\sqrt{2^n}} \end{pmatrix}$$
It's a $2n \times 2n$ matrix whose $(1,1)$-th entry is $1 - \frac{1}{\sqrt{2^n}}$ and all the others $- \frac{1}{\sqrt{2^n}}$. How can I solve this eigenvalue problem?
| Here's a similar approach to what KBS suggests. Begin with the observation that we can write $M = e_1e_1^T - \alpha 11^T$ for some $\alpha \in \Bbb R$. From there, we can write $M = AB$, with
$$
A = \pmatrix{e_1 & \alpha \mathbf 1}, \quad B = \pmatrix{e_1 & - \mathbf 1}^T.
$$
From the fact that $AB$ and $BA$ have the same non-zero eigenvalues, conclude that every eigenvalue of $M$ is either equal to $0$ or is an eigenvalue of
$$
BA = \pmatrix{e_1 & - \mathbf 1}^T\pmatrix{e_1 & \alpha \mathbf 1} = \pmatrix{1 & \alpha\\ -1 & -2\alpha n}.
$$
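A quick numerical sketch of this reduction (here the ambient dimension is $2n$, so the bottom-right entry $-\alpha\mathbf 1^T\mathbf 1$ equals $-2\alpha n$):

```python
import numpy as np

n = 3
N = 2 * n                      # the matrix is 2n x 2n
alpha = 1 / np.sqrt(2**n)

e1 = np.zeros((N, 1)); e1[0] = 1
ones = np.ones((N, 1))

M = e1 @ e1.T - alpha * (ones @ ones.T)
BA = np.array([[1, alpha],
               [-1, -alpha * N]])

print(np.sort(np.linalg.eigvalsh(M)))   # N-2 zeros plus two nonzero eigenvalues
print(np.sort(np.linalg.eigvals(BA)))   # the same two nonzero eigenvalues
```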
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4429780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving That A Sequence Does Not Obey The Weak Law Of Large Numbers I have a sequence $f_n$ of random variables such that $\mu( \{ f_n = n \}) = \mu( \{f_n = -n \}) = \frac{1}{2}$, and need to prove that it doesn't satisfy WLLN.
Clearly, $E(f_n) = 0$, so I have to disprove that $$ \frac{1}{n} \sum_{j=1}^n f_j (\omega) \to 0$$ in measure. In other words, $$\lim_{n \to \infty} \mu \left( \omega: | \frac{1}{n} \sum_{j=1}^n f_j (\omega)| \ge \delta \right) \ne 0$$ $\forall \delta > 0.$
In my textbook, I haven't really learned much about how to do these types of questions. As a hint, I was told to show that if $\frac{1}{n} \sum_{j=1}^n f_j \to 0$ in measure $\mu$, then also $\frac{1}{n} \sum_{j=1}^{n-1} f_j \to 0$ in measure $\mu$.
However, I don't know how to show this and even if I were able to, I don't see why that would mean $f_n$ doesn't obey the Weak Law of Large Numbers.
I would gladly appreciate any help.
| Take the difference, i.e.,
$$
\frac{1}{n}\sum_{i=1}^n f_i-\frac{1}{n}\sum_{i=1}^{n-1} f_i=\frac{f_n}{n}.
$$
The LHS converges to $0$ (in $\mu$-measure). Thus, $f_n/n$ must converge to $0$ as well. However,
$$
\mu(|f_n/n|>1/2)=1
$$
for each $n$.
The observation in the hint follows from the fact that if $\sum_{i=1}^n f_i/n\to 0$, then
$$
\frac{1}{n}\sum_{i=1}^{n-1}f_i=\frac{n-1}{n}\times \frac{1}{n-1}\sum_{i=1}^{n-1}f_i\to 0
$$
in $\mu$-measure.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4429928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing that $(a^2 - b^2)^2$ $ \ge $ $4ab(a-$ $b)^2$ An inequality problem from Beckenbach and Bellman:
Show that $(a^2 - b^2)^2 \ge 4ab(a-b)^2$
The given answer is simply
Equivalent to $(a - b)^4 \ge 0$
I have tried two approaches, one which agrees with the given answer, and the other which does not.
Approach one. (Agrees with answer)
\begin{align}
(a^2 - b^2)^2 & \ge 4ab(a-b)^2\\
(a^2 - b^2)^2 - 4ab(a-b)^2 & \ge 0\\
((a+b)(a-b))^2 - 4ab(a-b)^2 & \ge 0\\
(a+b)^2(a-b)^2 - 4ab(a-b)^2 & \ge 0\\
(a-b)^2((a+b)^2 - 4ab) & \ge 0 \\
(a-b)^2 (a^2 -2ab + b^2) &\ge 0 \\
(a-b)^2 (a-b)^2 & \ge 0\\
(a - b)^4 & \ge 0
\end{align}
Approach Two
\begin{align}
(a^2 - b^2)^2 & \ge 4ab(a-b)^2\\
((a+b)(a-b))^2 & \ge 4ab(a-b)^2\\
(a+b)^2(a-b)^2 & \ge 4ab(a-b)^2\\
(a+b)^2 & \ge 4ab\\
(a^2 -2ab + b^2) &\ge 0 \\
(a-b)^2 & \ge 0
\end{align}
Could someone point out where the second approach is going wrong?
| Note that
$$
a^2 - b^2 = (a + b) (a - b)
$$
Thus,
$$
(a^2 - b^2)^2 = (a + b)^2 (a - b)^2
$$
Thus, the given inequality
$$
(a^2 - b^2)^2 \geq 4 a b (a - b)^2
$$
is equivalent to
$$
(a + b)^2 (a - b)^2 \geq 4 a b (a - b)^2
$$
or
$$
(a - b)^2 \left[ (a + b)^2 - 4 a b \right] \geq 0
$$
or
$$
(a - b)^2 \left[ a^2 + b^2 - 2 a b \right] \geq 0
$$
or
$$
(a - b)^2 (a - b)^2 \geq 0
$$
or
$$
(a - b)^4 \geq 0
$$
which is always true.
Hence, we proved the given inequality.
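A one-line symbolic confirmation (a convenience check, not needed for the proof):

```python
from sympy import symbols, expand

a, b = symbols('a b')
print(expand((a**2 - b**2)**2 - 4*a*b*(a - b)**2) == expand((a - b)**4))  # True
```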
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4430076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Proving the factorial integral using induction I'm currently reviewing some mathematics for a thermal physics course. My textbook claims that the factorial integral
$$
n! = \int_0^\infty x^n\text{e}^{-x}\text{d}x$$
can be proved via induction. My current work:
*
*n = 0:
$$\int_0^{\infty}x^0\text{e}^{-x}\text{d}x = \int_0^{\infty}e^{-x}\text{d}x = 1$$
which is 0!.
*Assuming that the case $n = k$ is true
$$k! = \int_0^{\infty}x^k\text{e}^{-x}\text{d}x$$
we now wish to show that the $n = k+1$ case follows. The textbook then gives a hint to integrate the $k+1$ case by parts. First rewriting the integral
$$(k+1)! = \int_0^{\infty}x^{k+1}\text{e}^{-x}\text{d}x = -\int_0^{\infty}x^{k+1}\frac{\text{d}}{\text{d}x}(e^{-x})\text{d}x$$
and then swapping over the derivative
$$
-\int_0^{\infty}x^{k+1}\frac{\text{d}}{\text{d}x}(e^{-x})\text{d}x = \int_0^{\infty}\frac{\text{d}}{\text{d}x}(x^{k+1})\text{e}^{-x}\text{d}x + \left[x^{k+1}\text{e}^{-x}\right]_0^{\infty}
$$
The proof would be complete if the boundary term (second term on the right-hand side) comes out to be 0; however, I'm getting $-x$. What am I doing wrong? Thanks in advance for any help.
| $x^k e^{-x} \to 0$ as $ x \to \infty$
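A quick numerical check of the full identity (and implicitly of the vanishing boundary term):

```python
import math
from scipy.integrate import quad

for n in range(6):
    val, _ = quad(lambda x, n=n: x**n * math.exp(-x), 0, math.inf)
    print(n, round(val, 6), math.factorial(n))   # the integral equals n!
```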
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4430280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find number of functions such that $f(f(x)) = f(x)$, where $f: \{1,2,3,4,5\}\to \{1,2,3,4,5\}$
Consider set $A=\{1,2,3,4,5\}$, and functions $f:A\to A.$ Find the number of functions such that $f\big(f(x)\big) = f(x)$ holds.
I tried using recursion but could not form a recursion relation. I tried counting the cases manually, but it was too exhaustive and impractical. I tried drawing/mapping but that too included counting by making cases. Random variable was another approach I tried but couldn't make a summation.
A general idea for such problems is needed.
Thanks.
| If a variable $x$ is in the range of $f(x)$, then $f(x)=x$. This means that we can effectively categorize the potential functions by their range. The number of functions that have a given range is $n^{5-n}$, with $n$ being the number of values in the range. The number of ranges that have $n$ values in them is $\frac{5!}{n!(5-n)!}$. This means that the total number of functions is $$\sum_{n=1}^5\frac{5!}{n!(5-n)!}n^{5-n}$$which comes out to be $196$.
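Both the closed-form sum and the raw count are easy to confirm by brute force over all $5^5 = 3125$ functions:

```python
from itertools import product
from math import comb

count = sum(1 for f in product(range(5), repeat=5)
            if all(f[f[x]] == f[x] for x in range(5)))
formula = sum(comb(5, n) * n**(5 - n) for n in range(1, 6))
print(count, formula)  # 196 196
```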
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4430492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Limit of sum/difference condition which are not being followed here This was given as an example in a book : If $$\lim _{x \rightarrow a}[f(x)+g(x)]=2$$ and
$$\lim _{x \rightarrow a}[f(x)-g(x)]=1,$$ then find the value of $$\lim _{x \rightarrow a} f(x) g(x)$$
Solution : $$\lim _{x \rightarrow a}[f(x)+g(x)]=2$$
or $$\lim _{x \rightarrow a} f(x)+\lim _{x \rightarrow a} g(x)=2$$ $\tag{1}$
$$\lim _{x \rightarrow a}[f(x)-g(x)]=1$$
or $$\lim _{x \rightarrow a} f(x)-\lim _{x \rightarrow a} g(x)=1$$ $\tag{2}$
Adding (1) and (2),
$$2 \lim _{x \rightarrow a} f(x)=3$$ or $$\lim _{x \rightarrow a} f(x)=\frac{3}{2}$$
Subtracting (2) from (1),
$$2 \lim _{x \rightarrow a} g(x) = 1$$ or $$\lim _{x \rightarrow a} g(x)=\frac{1}{2}$$
or $$\lim _{x \rightarrow a} f(x) g(x)=\lim _{x \rightarrow a} f(x) \lim _{x \rightarrow a} g(x)=\frac{3}{2} \times \frac{1}{2}=\frac{3}{4}.$$
My query: isn't the limit of a sum equal to the sum of the individual limits only when we already know that the individual limits exist? Here we don't actually know whether they exist, so shouldn't this method be wrong?
| You are right that the way it is written is slightly problematic. That $\lim_{x\to a} f(x)+g(x)$ exists doesn't mean that $\lim_{x\to a} f(x)$ and $\lim_{x\to a} g(x)$ exist. However, there is no actual reason to split the limits and try writing
\begin{align*}
\lim _{x \rightarrow a} f(x)+\lim _{x \rightarrow a} g(x)=2\\
\lim _{x \rightarrow a} f(x)-\lim _{x \rightarrow a} g(x)=1
\end{align*}
as they have. Instead, work directly with
\begin{align*}
\lim_{x\to a} f(x)+g(x)=2\\
\lim_{x\to a} f(x)-g(x)=1
\end{align*}
and add these two together (as the limit of a sum is the sum of the limits), and get
$$\lim_{x\to a} f(x)+g(x)+f(x)-g(x)=3$$
i.e.
$$\lim_{x\to a} 2f(x)=3$$
and indeed $\lim_{x\to a} f(x)$ exists, while no unjustified splitting of limits has occurred.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4430670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How far does one have to zoom into an image that was rotated a certain amount of degrees in order to only see only the original pixels again? This question was asked by a work colleague of mine, but my days as a mathematician are long gone unfortunately. It does sound like a pretty basic geometry problem to me, doesn't it?
I'm not expecting an extremly detailed answer here, but does anyone have any resources on this? I'm pretty sure this was already answered somewhere.
Please let me know if the question is formulated too vaguely. Thanks a lot in advance!
For clarification: I imagine that the original image looks something like this:
And then it gets rotated like this:
Only the red pixels are considered the "original pixels" while the white pixels are not. So the question would be how far do I have to zoom into the second picture in order to only see red pixels again?
|
The figure above shows the original image in the standard orientation with its edges parallel to the $x$ and $y$ axis, and then it shows the same image but rotated by its center by an angle $\theta$.
What you need to do is find the intersection points between the diagonals and the rotated edges of the image, and choose the one that is closest to the center of the image to construct your zoom rectangle (the blue rectangle) in the figure above.
If the image center is the origin, and the image extends over the rectangle $[-a, a] \times [-b, b ]$, then the equation of the right edge is $ x = a $, and the equation of the top edge is $ y = b$. When rotating the image by an angle $\theta$ counter clockwise, then the point $(x, y)$ is mapped to $(x',y') = R (x,y) $
where
$ R = \begin{bmatrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} $
It follows that $(x, y) = R^T (x', y')$. Therefore, the equation of the rotated right edge is obtained by setting the first component of $R^T (x', y')$ equal to $a$, which is
$ \cos(\theta) x' + \sin(\theta) y' = a $
Similarly the equation of the rotated top edge is
$ -\sin(\theta) x' + \cos(\theta) y' = b $
Now the equations of the red diagonals are $y' = \pm \dfrac{b}{a} x' $
Intersect these diagonals with the two rotated edges to obtain $(x_1, y_1)$ and $(x_2, y_2)$. Choose the point that is closer to the origin, i.e. having the smaller $\sqrt{ x_i^2 + y_i^2 }$, and call this point $(x_0, y_0)$
Now the zoom rectangle (shown in blue) extends over $[- |x_0|, |x_0| ] \times [-|y_0| , |y_0| ] $
The zoom factor is the following ratio
$\text{Zoom Factor} = \dfrac{a}{|x_0| } = \dfrac{b}{| y_0 | } $
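Here is a small Python sketch of the whole procedure (`zoom_factor` is a hypothetical helper implementing the steps above; the image half-widths are $a$ and $b$):

```python
import math

def zoom_factor(a, b, theta):
    best = None
    # rotated right edge: cos(t) x' + sin(t) y' = a; rotated top edge: -sin(t) x' + cos(t) y' = b
    for (nx, ny), c in [((math.cos(theta), math.sin(theta)), a),
                        ((-math.sin(theta), math.cos(theta)), b)]:
        for slope in (b / a, -b / a):          # the two diagonals y' = +/- (b/a) x'
            denom = nx + ny * slope
            if abs(denom) > 1e-12:
                x = c / denom
                y = slope * x
                if best is None or math.hypot(x, y) < math.hypot(*best):
                    best = (x, y)
    x0, y0 = best
    return a / abs(x0)                          # equals b / abs(y0) on the diagonal

print(zoom_factor(4.0, 3.0, math.radians(20)))
```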
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4430987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Find limit of $a_{n+1} = \frac{2a_n}{a_n+1}$.
Determine if the following sequence $(a_n)_{n \in \mathbb{N}}$ converges and if so, find the limit. $a_1 \geq 0$, $a_{n+1} = \frac{2a_n}{a_n+1}$.
What I've done so far:
First we can notice that if $a_1 = 0 \Rightarrow a_n = 0, n \in \mathbb{N}$ and thus $\lim\limits_{n \to \infty} a_n=0$.
If $a_1 > 0, a_n > 0$ by induction and $a_{n+1} = 2 - \frac{2}{a_n + 1}$ which gives $a_{n+1}a_n + a_{n+1}-2a_n = 0$. I'm not familiar with recurrence equations, but WolframAlpha gives an equation such that a limit of it is easy to evaluate and equals $1, n \rightarrow \infty$.
Is there another way to find the limit of this sequence?
| Let $f(x) = \frac{2x}{x+1}$ for $x\geq 0$, so that $a_{n+1}= f(a_n)$.
Then, we have :
$$\forall x\geq 0, f(x)-x = \frac{x(1-x)}{x+1}$$
Therefore $1>f(x) > x$ when $x\in (0,1)$ and $1<f(x)<x$ when $x>1$.
*
*If $a_1$ is equal to $0$ or $1$, then $f(a_1) = a_1$ and the sequence is constant.
*If $a_1 \in (0,1)$, then you can show by induction that for all $n\in \mathbb N$,
$$0<a_n<a_{n+1}<1$$
Therefore, $(a_n)$ converges to some $\ell \in (0,1]$. Taking the limit in $a_{n+1} = f(a_n)$, we find $\ell = f(\ell)$, whose only solution on $(0,1]$ is $\ell = 1$.
*If $a_1 \in (1,+\infty)$, then by induction :
$$\forall n\in\mathbb N, 1<a_{n+1}<a_n $$
Therefore $a_n \to \ell \geq 1$. Again we have $f(\ell) = \ell$ which implies $\ell =1$.
Conclusion
If $a_1 = 0$, then $(a_n)$ converges to $0$. If $a_1>0$, then $(a_n)$ converges to $1$.
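A few iterations in Python illustrate the conclusion:

```python
def iterate(a, steps=40):
    for _ in range(steps):
        a = 2 * a / (a + 1)
    return a

for a1 in [0.0, 0.01, 0.5, 1.0, 7.0, 100.0]:
    print(a1, iterate(a1))   # 0 stays at 0; every positive start tends to 1
```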
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4431217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Existence of an integer matrix with maximal subdeterminants $a_1, \ldots, a_n$ Given $n \geq 2$ and integers $a_1, \ldots, a_n$, does there exist an integer $(n-1) \times n$ matrix whose maximal subdeterminants are $a_1, \ldots, a_n$ (with fixed ordering)?
Example: $n = 3$, $(a_1, a_2, a_3) = (19, 4, 22)$. The matrix
$$\begin{pmatrix}0 & 11 & 2 \\
-2 &95&19\end{pmatrix}$$
has $i$th subdeterminants (with $i$th column removed) equal to $(19, 4, 22)$.
Context:
*
*The $n=3$ case is precisely this question from the newsletter: Is every vector in $\mathbb Z^3$ a cross product?. (This is where the example comes from.)
*The general case would give an alternative proof for this question: Can the determinant of an integer matrix with a given row be any multiple of the gcd of that row? by taking $(a_1, \ldots, a_n)$ to be coefficients in Bézout's theorem.
| This can be proven inductively as in the second linked question, and it generalizes the construction in the first linked question (which is for $n=3$).
When $n = 2$ we can take the matrix $\begin{pmatrix}a_2 & a_1\end{pmatrix}$. Now take $n \geq 3$ and $a_1, \ldots, a_n$ integers, and let $d = \gcd(a_2, \ldots, a_n)$. Construct (using the induction hypothesis) a working $(n-2) \times (n-1)$ matrix for $a_2/d, \ldots, a_n/d$ and call it $M$. The maximal subdeterminants of $M$ are coprime, so there exist integers $c_2, \ldots, c_n$ with
$$\det\begin{pmatrix}c_2&\cdots&c_n\\\\
&\large M\end{pmatrix}=a_1 \,.$$
We can then take the matrix
$$\begin{pmatrix}-d & c_2&\cdots&c_n\\
0 \\
\vdots &&\large M \\
0\end{pmatrix} \,.$$
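The $n = 3$ example from the question is easy to check numerically:

```python
import numpy as np

M = np.array([[0, 11, 2],
              [-2, 95, 19]])

# determinant of M with the i-th column removed, for i = 1, 2, 3
print([round(np.linalg.det(np.delete(M, i, axis=1))) for i in range(3)])  # [19, 4, 22]
```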
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4431415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does every infinite summation with real terms converge to a real number? The title says it all, I'm curious if there are convergent infinite series whose terms are all real but which converge to a value that is not real. I want to phrase this as "are the reals closed under infinite summation" but I'm sure someone would on this site would fuss about that phrasing.
Two facts that I know make me wonder this:
First, not all series which are defined on rational numbers converge to a rational number. For example, the Basel problem wherein the sum of the squares of the reciprocal of natural numbers involves pi.
Second, I know that there are types of numbers on the number line that are not reals. The surreal numbers are the only case I know of, not to say there are not others.
These two facts together make me wonder if the reals can be used in this way (infinite summation) to define non-real numbers.
| The answer is no (to your post's question, "yes" to your title's implied question), because the real numbers are a complete metric space. In other words, every summation that converges, converges to a real number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4431607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Finding rational solutions to $0\neq a^3c + b^3c^2\in\mathbb{Z}$ but $abc\notin\mathbb{Z}$ I'm trying to find $a,b,c\in\mathbb{Q}$ such that $0\neq a^3c + b^3c^2\in\mathbb{Z}$ but that $abc\notin\mathbb{Z}$.
I tried to pick specific values for $a, b$
$$
b^3x^2 + a^3x - n= 0
$$
where I tried different integers $n\neq 0$ and then evaluate the roots of this polynomial. However, this proved unsuccessful and now I'm stuck.
COMMENT.-Certainly you do want to have $a,b,c\in\mathbb Q\setminus\mathbb Z$, so I am afraid you are dealing with very particular elliptic curves of the form $AX^3+BY^3=CZ^3$, which were first seriously studied by E. S. Selmer (The Diophantine equation $ax^3+by^3+cz^3=0$, Acta Math. (Stockh.) 85 (1951), pp. 203-362). It is a topic in which the difficulty is very hard, mainly if you want to impose restrictive conditions like yours.
In fact, write your rationals as $\dfrac{a}{\alpha}, \dfrac{b}{\beta},\dfrac{c}{\gamma}$ with $\dfrac{abc}{\alpha\beta\gamma}\notin\mathbb Z$; then your condition becomes
$$\gamma c(\beta a)^3+c^2(\alpha b)^3=n\alpha^3\beta^3\gamma^2$$
However, you have at hand some divisibility conditions you can maybe use to get diverse classes of solutions. In any case you do have to work with a Selmer curve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4431754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Gauss map on trace of curve Suppose we have a curve $\gamma : I \to S$ where $I \subseteq \mathbb{R}$ is an interval and $S$ is a surface in $\mathbb{R}^3$ (not necessarily orientable). We know that the Gauss map $N$ can be locally defined at all points of $S$, but there is not necessarily a way to define it continuously on the whole surface. Can we define a map $N_I : I \to S^2$ such that $N_I(t)$ is normal to the surface $S$ at $\gamma(t)$?
Notice that $N_I$ is not necessarily $N|_{\gamma(I)}$, since for two points $t_0, t_1$ with $\gamma(t_0) = \gamma(t_1)$ it is possible that $N_I(t_0) \neq N_I(t_1)$ (indeed, this must happen necessarily if the manifold is non orientable). This question came to me as a variation of the problem 2.6.6 of Do Carmo Differential Geometry of Curves and Surfaces.
| Yes, this is possible. Intuitively, at every point $s \in I$ there are two choices of unit normal to $S$ at $\gamma(s)$. Since there are no loops in $I$, it is possible to make a consistent global choice.
Using the language of vector bundles, the pullback of the normal bundle $TS^\perp$ to the interval $I$, $\gamma^*(TS^\perp) := \{(s,\nu) : s\in I, \, \nu \in T_{\gamma(s)}S^\perp\}$, is isomorphic to the trivial bundle $I \times \mathbb{R}$. Therefore we may choose a global nowhere-zero section of $\gamma^*(TS^\perp)$, corresponding to a consistent choice of normal along the curve; normalizing it gives the desired map $N_I : I \to S^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4431902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solution to this beta function integral-within-an-integral (Continued) Continuing on from my earlier post, here, I have re-evaluated all the work leading up to the integral and managed to reformulate everything to be slightly different. Now, the aim is to solve
$$
I = \int_0^\infty \frac{t^{\frac{3}{\alpha}-1}}{(1+t)^8}\left(\int_0^t
\frac{t'^{\frac{2}{\alpha}-1}}{(1+t')^8}dt'\right)d t.$$
Once again, we have the condition that $2 \leq \alpha \leq 4$, where $\alpha$ is real. I'm not sure whether the same problem arises as in my previous question, namely that the inner integral diverges if $\alpha < 1/2$, or whether the different exponent has somehow helped here.
Thanks!
Edit:
This once again can be written out as the integral of a hypergeometric function,
$$
I = \frac{\alpha}{2}\int_0^\infty \frac{t^{\frac{5}{\alpha}-1}}{(1+t)^8}\, {}_2F_1\!\left(8,\frac{2}{\alpha};\frac{\alpha+2}{\alpha};-t\right) dt.
$$
| If $\operatorname{Re}(a) > 0$ and $- 1 < \operatorname{Re}(b) < \min (\operatorname{Re}(c)+7,\operatorname{Re}(a) + \operatorname{Re}(c) - 1)$, then
\begin{align*}
& \int_0^{ + \infty } {\frac{{t^b }}{{(1 + t)^c }}{}_2F_1 \left( {8,a;a + 1; - t} \right)dt} = \int_0^{ + \infty } {\frac{{t^b }}{{(1 + t)^{c + 8} }}{}_2F_1 \!\left( {8,1;a + 1;\frac{t}{{t + 1}}} \right)dt}
\\ & = \frac{{\Gamma (a + 1)}}{{7!}}\sum\limits_{n = 0}^\infty {\frac{{\Gamma (8 + n)}}{{\Gamma (a + 1 + n)}}\int_0^{ + \infty } {\frac{{t^{b + n} }}{{(1 + t)^{c + 8 + n} }}dt} }
\\ & = \frac{{\Gamma (a + 1)\Gamma (c - b + 7)}}{{7!}}\sum\limits_{n = 0}^\infty {\frac{{\Gamma (8 + n)\Gamma (b + 1 + n)}}{{\Gamma (a + 1 + n)\Gamma (c + 8 + n)}}}
\\ & = \frac{{\Gamma (b + 1)\Gamma (c - b + 7)}}{{\Gamma (8 + c)}}{}_3F_2 (1,8,b + 1;a + 1,c + 8;1).
\end{align*}
In your case $a=\frac{2}{\alpha}$, $b=\frac{5}{\alpha}-1$, $c=8$ and $2\leq\alpha\leq 4$ fulfill the conditions.
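As a numerical sanity check of the identity with mpmath, using the questioner's case $\alpha = 2$ (so $a = 1$, $b = 3/2$, $c = 8$); this assumes mpmath's adaptive quadrature handles the semi-infinite integral, which it does comfortably here since the integrand decays rapidly:

```python
from mpmath import mp, quad, gamma, hyp2f1, hyp3f2, inf

mp.dps = 20
a, b, c = mp.mpf(1), mp.mpf(3) / 2, mp.mpf(8)

lhs = quad(lambda t: t**b / (1 + t)**c * hyp2f1(8, a, a + 1, -t), [0, inf])
rhs = gamma(b + 1) * gamma(c - b + 7) / gamma(8 + c) * hyp3f2(1, 8, b + 1, a + 1, c + 8, 1)
print(lhs, rhs)   # the two values agree
```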
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4432075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the series $\sum_{n=1}^{\infty} (-1)^n(\sqrt{n+1}-\sqrt{n})$ converge or diverge? $$\sum_{n=1}^{\infty} (-1)^n(\sqrt{n+1}-\sqrt{n})$$
I know that $\sum_{n=1}^{\infty} \sqrt{n+1}-\sqrt{n}$ diverges since the kth partial sum of the telescoping series can be written as:
$(\sqrt{2}-\sqrt{1})+(\sqrt{3}-\sqrt{2})+(\sqrt{4}-\sqrt{3})+...+(\sqrt{k+1}-\sqrt{k})=\sqrt{k+1}-1$
Then taking the limit as $k\rightarrow\infty$ of the kth partial sum gives me
$\lim_{k\rightarrow\infty}(\sqrt{k+1}-1)=\infty$
So this series is not absolutely convergent. But how can I check if it is conditionally convergent?
Intuitively I'm thinking if $\sqrt{n+1}-\sqrt{n}$ diverges then
$\sum_{n=1}^{\infty} (-1)^n(\sqrt{n+1}-\sqrt{n})$ must diverge too. Is my logic correct here?
The solution says that this series is conditionally convergent but I'm not sure how they get that? Do I need to use the Alternating series test?
| A slightly more general technique (though the one by Átila Correia is the simplest and probably best here):
Suppose you have a (positive) sequence $(a_n)_n$, not necessarily monotone, such that
$$
a_n = b_n + c_n + O(c_n)
$$
where $(b_n)_n$ is non-increasing with limit $0$, and $(c_n)_n$ is absolutely convergent. Then $\sum_n (-1)^n a_n$ converges.
Indeed, you have
$$
\sum_{n=1}^N (-1)^n a_n = \sum_{n=1}^N (-1)^n b_n + \sum_{n=1}^N (-1)^n (c_n+O(c_n))
$$
and on the RHS the first term will converge (by Leibniz's criterion) and the second will converge (absolutely) (by comparison with $\sum_n c_n$).
Now, in your case, $$\begin{align}a_n &= \sqrt{n+1}-\sqrt{n}
= \sqrt{n}\left(\sqrt{1+\frac{1}{n}}-1\right) = \sqrt{n}\left(\frac{1}{2n}-\frac{1}{8n^2} + O\!\left(\frac{1}{n^2}\right)\right) \\&= \frac{1}{2\sqrt{n}}-\frac{1}{8n^{3/2}} + O\!\left(\frac{1}{n^{3/2}}\right)\end{align}$$
and you can apply the above with $b_n = \frac{1}{2\sqrt{n}}$, $c_n = -\frac{1}{8n^{3/2}}$.
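Numerically the alternating partial sums indeed settle down (a rough illustration, not a proof):

```python
import math

s = 0.0
for n in range(1, 200001):
    s += (-1)**n * (math.sqrt(n + 1) - math.sqrt(n))
    if n % 50000 == 0:
        print(n, s)   # the partial sums stabilize
```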
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4432430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Changing $\Delta x$ to $(\Delta x)^2$ in the derivative definition. We know the definition of the derivative is $\frac{\Delta f(x)}{\Delta x}$ as $\Delta x$ approaches 0. But what happens when we replace $\Delta x$ with $(\Delta x)^2$:
$\lim\limits _{\Delta x\to0}\frac{f(x+\Delta x)-f(x)}{(\Delta x)^2}$.
Does this rate-of-change formula have any meaning?
$(\Delta x)^2$ is what appears in the definition of the second derivative.
P.S. Beyond that, could you recommend a book which explores such formulas?
| Assume that the second derivative of $f$ exists at the point $x$. Recall that then we have the formula
$$
f''(x) = \lim_{h \to 0} \frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}.
$$
for the second derivative. Now note that
$$
\frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}
= \frac{f(x + h) - f(x)}{h^2}
+ \frac{f(x - h) - f(x)}{h^2}.
$$
Since we are assuming that the limit $\lim_{h \to 0} \frac{f(x + h) - f(x)}{h^2}$ exists also note that $\lim_{h \to 0} \frac{f(x - h) - f(x)}{h^2}$ exists too, because it is obtained by the change of variables $h \mapsto -h$. Limit laws now mean that
$$
f''(x) = 2 \lim_{h \to 0} \frac{f(x + h) - f(x)}{h^2}.
$$
So your limit turns out to be twice the second derivative of $f$ at $x$.
On the other hand, existence of your limit is not equivalent to the second derivative of $f$ existing at the point $x$. For starters, recall that that the limit in the formula for $f''(x)$ above can exist even when the second derivative of $f$ does not exist. Next, your limit $ \frac{f(x - h) - f(x)}{h^2}$ exists less often than $\frac{f(x + h) - 2 f(x) + f(x - h)}{h^2}$, for the following reason: if your limit
$
\lim_{h \to 0} \frac{f(x + h) - f(x)}{h^2}
$
exists, then by using limit laws we must have that the limit
$$
\lim_{h \to 0} \frac{f(x + h) - f(x)}{h}
= \left(\lim_{h \to 0} h\right) \cdot \left(\lim_{h \to 0} \frac{f(x + h) - f(x)}{h^2}\right) = 0
$$
exists, too. That means that the existence of your limit at $x$ implies that the derivative of $f$ at $x$ exists and is zero. Of course, there are plenty of functions for which the second derivative at a point exists, but nonetheless the derivative at the same point is nonzero.
An intuitive way to think about this when everything is smooth is to note that around the point $x$ the function $f$ is given by its Taylor series there:
$$
f(x + h) = f(x) + f'(x) h + \frac{1}{2} f''(x) h^2 + O(h^3).
$$
Your limit then simplifies into
$$
\lim_{h \to 0} \left(\frac{f'(x)}{h} + \frac{1}{2} f''(x) + O(h) \right).
$$
The only way this limit can exist is if $f'(x) = 0$, in which case it becomes
$$
\lim_{h \to 0} \left(\frac{1}{2} f''(x) + O(h) \right) = \frac{1}{2} f''(x),
$$
like we saw above.
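A tiny numerical illustration with $f = \cos$ at $x = 0$ (so $f'(0) = 0$, $f''(0) = -1$, and the limit should be $-1/2$):

```python
import math

f, x = math.cos, 0.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, (f(x + h) - f(x)) / h**2)   # tends to f''(0)/2 = -0.5
```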
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4432594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Linear Programming - Motivation behind the Dual Simplex Method I am trying to understand the motivation behind the Dual Simplex Method. However, I have run into some roadblocks while understanding the rationale behind the Dual Simplex Method. This is my current understanding of the Simplex, the Primal and Dual problem:
$1$. For a minimization problem, the Simplex Algorithm proceeds with first a basic feasible solution; then it replaces individual basis columns with an external column until $c_j - C_B B^{-1} A_j \ge 0~\forall~j$, where $c_j$ is the $j$th cost coefficient, $C_B$ is the cost vector corresponding to the feasible basis $B$, and $A_j$ is the $j$th column external to $B$.
$2$. If $x_0$ be the primal feasible solution and $y_0$ be the dual feasible solution and both satisfy the complementary slackness conditions, then $x_0$ is the primal optimal solution and $y_0$ is the dual optimal solution.
$3$. If $c^T x_0 = b^T y_0$ where $B$ is the feasible basis, then $x_0$ is the primal optimal solution and $y_0$ forms the dual optimal solution.
Using this, my professor has tried to implement the Dual Simplex Algorithm by first accounting for a component of the RHS with $b_r < 0$ and then proceeding ahead.
However, I do not quite understand the need to consider $b_r < 0$, nor the algorithm from there. My confusion is: the simplex method is applied to the dual formulation without explicitly finding the dual. How is that done? Could someone help me build the dual simplex algorithm from here?
| After adding the slack variable, consider a standard-form linear programming problem:
$$
\begin{align*}
&\text{max} \quad &&c^Tx \\
&\text{subject to}\quad &&Ax = b \\
&&& x\geq 0
\end{align*}
$$
where the matrix $A$ is a partitioned-matrix form of $A = [B\ \ N]$. We can write
$$
x = \begin{pmatrix} x_1 \\
\vdots \\
x_n \\
w_1 = x_{n+1} \\
\vdots \\
w_m = x_{n+m}
\end{pmatrix} \quad \text{and}\quad z = \begin{pmatrix} z_1 \\
\vdots \\
z_n \\
y_1 = z_{n+1} \\
\vdots \\
y_m = z_{n+m}
\end{pmatrix}
$$
where $x_i$ are the primal variables, $w_i$ are the primal slacks, $z_i$ are the dual slacks, and $y_i$ are the dual variables. Let the subscript $N$ indicate the non-basis, and $B$ indicate the basis. Now the primal dictionary updates as:
$$
\begin{align*}
\zeta &= \bar{\zeta} - z_N^Tx_N \\
x_B &= \bar{x}_B - B^{-1}Nx_N
\end{align*}
$$
and the dual dictionary actually updates as
$$
\begin{align*}
-\zeta &= -\bar{\zeta} - \bar{x}_B^Tz_B \\
z_N &= \bar{z}_N + (B^{-1}N)^Tz_B
\end{align*}
$$
Notice that here $(B^{-1}N)^T$ is the negative transpose of $-B^{-1}N$.
Take a look at https://vanderbei.princeton.edu/542/lectures/lec6.pdf or Chapter 6 of the book Linear Programming by Vanderbei for a detailed explanation of the negative transpose property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4432821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Meromorphic function with given principal part Need to find a meromorphic function with poles at $z=\sqrt{n}$ and principal part $\frac{1}{(z-\sqrt{n})^4}+\frac{1}{z-\sqrt{n}}$. It's clear that the sum of the first terms converges uniformly on compact subsets of $\mathbb{C}$ by comparison with $\sum\frac{1}{n^2}$. But adding $\frac{1}{\sqrt{n}}$ does not change the convergence of the second term. Any hints regarding how to establish the convergence of the series?
The general idea is to develop the wanted principal parts into a Taylor series and take "sufficiently many" terms so that the rest converges locally uniformly.
In our case we have
$$
\frac{1}{z-\sqrt{n}} = -\frac{1}{\sqrt{n}} \left( 1 + \frac{z}{\sqrt n} + \frac{z^2}{n} + \cdots\right)
$$
so that
$$
\frac{1}{z-\sqrt{n}} + \frac{1}{\sqrt n} + \frac{z}{n} = O\left( \frac{z^2}{n^{3/2}}\right)
$$
This shows that
$$
\sum_{n=1}^\infty \frac{1}{z-\sqrt{n}} + \frac{1}{\sqrt n} + \frac{z}{n}
$$
converges uniformly on each compact set not containing any of the poles. Combining that with your result we get
$$
f(z) = \sum_{n=1}^\infty \frac{1}{(z-\sqrt{n})^4} + \frac{1}{z-\sqrt{n}} + \frac{1}{\sqrt n} + \frac{z}{n}
$$
as a meromorphic function with the desired poles and principal parts.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4432971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to find the largest possible minimum eigenvalue for a matrix? Let $\mathcal{X}$ be an arbitrary set and let $\phi: \mathcal{X} \to \mathbb{R}^d$ and $\lVert \phi(x)\rVert_2 \leq 1, \ \forall x \in \mathcal{X}$. Suppose $x$ is drawn from $\mathcal{X}$ according to $P$ and define a matrix $A$ as $$A = \underset{{x \sim P}}{\mathbb{E}}\left[ \, \,\phi(x) \phi(x)^\top \right]$$ I watched a talk in which it was said that the largest possible minimum eigenvalue is $\frac1d$. Can anybody help me to figure out why the following holds?
$$\max_{P \in \Delta(\mathcal{X})} \sigma_{\min}(A) = \frac{1}{d}$$
Hint: What's the trace of $\phi(x)\phi(x)^T$? With the assumption given, you can bound it by $1$. The trace of a matrix is the sum of its eigenvalues, so you can show that the sum of the eigenvalues of $A$ is less than or equal to $1$. You have $d$ eigenvalues whose sum is at most $1$. Can the smallest be strictly bigger than $1/d$?
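A quick numerical look at the extremal case: if $\phi(x)$ ranges over the standard basis of $\mathbb{R}^d$ and $P$ is uniform, then $A = I/d$ and the smallest eigenvalue is exactly $1/d$ (an illustrative choice of $\phi$ and $P$, not claimed to be the only maximizer):

```python
import numpy as np

d = 4
basis = np.eye(d)                                # d unit-norm feature vectors
A = sum(np.outer(v, v) for v in basis) / d       # expectation under uniform P
print(np.linalg.eigvalsh(A).min(), 1 / d)        # both equal 0.25
```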
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4433179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that a matrix can be written as a sum of permutation matrices
Given a square matrix $A$ of size $n$ whose entries are non-negative integers and where the sum of each column and row is equal to $k$, prove that $A$ can be written as a sum of $k$ permutation matrices.
First, it is obvious to see that a sum of $k$ permutation matrices will result in a matrix where the rows and columns sum to $k$. By Birkhoff's theorem, any doubly stochastic matrix can be written as a convex combination of permutation matrices,
$$\frac{1}{k}A = \sum_{i=1}^{r}{c_i P_i}$$
$$A = k \left(\sum_{i=1}^{r}{c_i P_i}\right)$$
where $c_i > 0$ is a real coefficient and $P_i$ is a permutation matrix. However, my solution can have more than $k$ permutation matrices (because the $c_i$ can be less than $1$) and each one with a real coefficient and question asks for just a sum of $k$ permutation matrices (so $c_i = 1$).
Some related questions that guided me:
*
*Characterizing sums of permutation matrices
*Is it possible to solve for values in a matrix such that all rows and columns have equal sum?
*Prove the existence of a permutation for a matrix
| You can use induction on $k$. The base case $k=1$ is clear. If $k>1$, then by Birkhoff's theorem you cited (or by Hall's marriage lemma), there is some permutation matrix $P$
so that $P_{ij}>0$ implies $A_{ij}>0$ (any $P$ that is part of the Birkhoff decomposition of $A/k$ will do.) For this $P$, the $n \times n$ matrix $A-P$ has non-negative integer entries, and the sum of each column and row of $A-P$ is equal to $k-1$.By the inductive hypothesis,
$A-P$ can be written as a sum of $k-1$ permutation matrices, whence $A$
can be written as a sum of $k$ permutation matrices.
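Here is a Python sketch of this inductive peeling, using a maximum matching on the support of $A$ (found via `scipy.optimize.linear_sum_assignment`) to produce the permutation at each step; the example matrix is mine:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def decompose(A):
    A = A.copy()
    perms = []
    while A.sum() > 0:
        # a permutation supported on positive entries exists by Birkhoff/Hall
        rows, cols = linear_sum_assignment((A > 0).astype(float), maximize=True)
        assert (A[rows, cols] > 0).all()
        P = np.zeros_like(A)
        P[rows, cols] = 1
        perms.append(P)
        A -= P                       # row/column sums drop from k to k-1
    return perms

A = np.array([[2, 1, 0],
              [1, 1, 1],
              [0, 1, 2]])            # all row and column sums equal k = 3
perms = decompose(A)
print(len(perms), np.array_equal(sum(perms), A))   # 3 True
```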
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4433341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Banach Circle Fractal: Find the scale factor $s$ such that the smaller circles do not overlap at the $n$-th step as $n\to\infty$
Source: from stackoverflow
I want to draw a Banach Fractal as in the picture above.
*
*First step: draw a circle with radius $r$.
*Second step: draw 9 smaller circles with radius $s\cdot r$ ($s$ is the
scale factor, $0<s<1$) [one circle in the center and 8 circles along the
circle's side].
*Third step: inside each of the 9 smaller circles from the second step, draw 9
smaller circles with radius $s^2\cdot r$ [one circle in the center and 8 circles along the
circle's side].
*etc.
In the picture above, from the third step on, the smaller circles overlap. Now, I want to calculate the scale factor $s$ such that the smaller circles do not overlap at the $n$-th step as $n\to\infty$.
I try to calculate as follows.
Summing from the first step to infinity, the total of the smaller radii is
$$sr+s^2 r+s^3 r+\ldots = \dfrac{sr}{1-s}.$$
So, summing from the first step to infinity, the total diameter of the smaller circles is
$$T_d=2\dfrac{sr}{1-s}.$$
The circumference of the first-step circle is $2\pi r$.
Because we draw 8 circles along the side of the first-step circle, I now divide the circumference by 8.
So, the arc length available per small circle on the "first circle side" is
$$\dfrac{2\pi r}{8}=\dfrac{\pi r}{4}$$
Now, we have equality
$$2\dfrac{sr}{1-s}=\dfrac{\pi r}{4}.$$
Solving the equality:
\begin{align}
&2\dfrac{sr}{1-s}=\dfrac{\pi r}{4}\\
\iff &\dfrac{s}{1-s}=\dfrac{\pi }{8}\\
\iff &8s=\pi-\pi s\\
\iff &(8+\pi) s=\pi \\
\iff & s=\dfrac{\pi}{8+\pi}.
\end{align}
Now I have the scale factor $s=\dfrac{\pi}{8+\pi}$ such that the smaller circles do not overlap at the $n$-th step as $n\to\infty$.
Is it correct answer?
I have thought about it for over an hour and I'm not sure about my answer.
| I guess you could just draw it and see:
Close, but not quite. The correct value of
$$
s = \frac{\sin(\pi/8)}{\sin(\pi/8) + \sin(5\pi/8)} = \frac{1}{2+\sqrt{2}}
$$
should yield the so-called octa-gasket:
You can see why this should work once you see how the following lines perfectly partition the pieces:
Once you get to that point, finding $s$ is a simple application of trigonometry.
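For completeness, a quick numerical check that the two expressions for $s$ agree:

```python
import math

s_trig = math.sin(math.pi / 8) / (math.sin(math.pi / 8) + math.sin(5 * math.pi / 8))
s_closed = 1 / (2 + math.sqrt(2))
print(s_trig, s_closed, math.isclose(s_trig, s_closed))   # both ~0.2928932, True
```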
You can find the code for the pictures on Observable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4433743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Strict inclusion in subdifferential sum rule $\partial{f(x)}+\partial{g(x)}\subseteq{\partial(f+g)(x)}$. I wish to find an example to show that the inclusion in the subdifferential sum rule
$$\partial{f(x)}+\partial{g(x)}\subseteq{\partial(f+g)(x)}$$
is strict. However, I have a problem understanding the way this inclusion is stated.
This inclusion in the subdifferential sum rule is taken from the book Convex Analysis by R. Tyrrell Rockafellar, Page 223, Theorem 23.8. This theorem says:
Let $f_1,f_2,\dots,f_m$ be proper convex functions on $\mathbb{R}^n$, and $f=f_1+f_2+\cdots+f_m$, then $\partial{f_1(x)}+\cdots+\partial{f_m(x)}\subset{\partial{f(x)}}~~~\forall{x}$.
It also says that if the domains of the $f_i$ have a point in common (more precisely, Rockafellar requires their relative interiors to intersect), then the inclusion becomes an equality. From what I understand, if the domains of the $f_i$ do not have a common point, then the strict inclusion should hold. One of the many examples that I tried was the pair $\sqrt{1-x}$ and $\sqrt{x-2}$. These two functions do not have any point in common in their domains. However, I do not understand how we can add two functions whose domains do not overlap.
I have two main issues:
*
*Intuitively, the subset sign should be the other way around. In the last part of the answer to this question, there is an example where the inclusion is true for the case where the subset sign is the other way around. In addition, the example is in contradiction to the inclusion statement in the question.
*This might be an easy question, but how can we add two functions whose domains do not intersect?
Any insight and example that can help me understand this are really appreciated!
| Here is an example in $\mathbb R^1$: Define
$$
f(x) =
\begin{cases}
-\sqrt{x} & \text{for } x \ge 0, \\
+\infty & \text{for } x < 0,
\end{cases}
$$
and
$$
g(x) =
\begin{cases}
0 & \text{for } x \le 0, \\
+\infty & \text{for } x > 0.
\end{cases}
$$
Thus, $f+g$ is the indicator function of $\{0\}$.
Then,
\begin{align*}
\partial f(0) &= \emptyset, \\
\partial g(0) &= [0,\infty), \\
\partial f(0) + \partial g(0) &= \emptyset, \\
\partial (f+g)(0) &= \mathbb R.
\end{align*}
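To spell out the first of these (an added detail): a subgradient $s\in\partial f(0)$ would need $f(x)\ge f(0)+s\,x$, i.e. $-\sqrt{x}\ge s\,x$, for all $x\ge 0$; dividing by $x>0$ forces $s\le -1/\sqrt{x}\to-\infty$ as $x\to0^+$, so $\partial f(0)$ is indeed empty, and hence so is the sum $\partial f(0)+\partial g(0)$.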
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4433966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Diophantine Equation solved with elliptic curves I want to know how to find all solutions in $\mathbb{Z}$ for
$$
2a^2 -3ab +5c^2 =0.
$$
I already solved it and I will post my solution soon.
One solution for example is $(15,11,3).$
| We substitute $x=\frac{a}{b}$ and $y=\frac{c}{b}$ and find all the intersections of $2 x^{2}-3 x+5 y^{2}=0$ and $y=t x$. We find
$$
0=2 x^{2}-3 x+5 t^2 x^{2}=x\left(\left(2+5 t^2\right) x-3\right) .
$$
So the intersection points are $(0,0)$ and
$$
\left(\frac{3}{2+5t^2}, \frac{3t}{2+5t^2}\right).
$$
The second intersection is in $\mathbb{Q}^{2}$ exactly when $t \in \mathbb{Q}$. Now we write $t=\frac{n}{m}$ with $m, n \in \mathbb{Z}$ and $\operatorname{gcd}(m, n)=1$, and get
$$
\left(\frac{3 m^{2}}{2 m^{2}+5 n^{2}}, \frac{3mn}{2 m^{2}+5 n^{2}}\right).
$$
Since $x=\frac{a}{b}$ and $y=\frac{c}{b},$ we see that each solution $(a, b, c)$ is a (rational) multiple of
$$
\left(3 m^{2}, 2 m^{2}+5 n^{2}, 3 m n\right),
$$
where $m, n \in \mathbb{Z}$ and $\operatorname{gcd}(m, n)=1$.
What are the primitive solutions? We have $\operatorname{gcd}\left(3 m^{2}, 3 m n\right)=3 m$.
Can 3 be a divisor of $2 m^{2}+5 n^{2}$? We have
$$2 m^{2}+5 n^{2}\equiv 2\left(m^{2}+n^{2}\right) \pmod 3$$
and squares satisfy
$$
\begin{aligned}
m^{2} &\equiv 0 \text{ or } 1 \pmod 3,\\
n^{2} &\equiv 0 \text{ or } 1 \pmod 3,
\end{aligned}
$$
so $m^{2}+n^{2}\equiv 0 \pmod 3$ only if $m^2\equiv n^2\equiv 0 \pmod 3$. Hence 3 divides $2 m^{2}+5 n^{2}$ exactly when $3 \mid m, n$; but $\operatorname{gcd}(m, n)=1$, a contradiction.
Now $\operatorname{gcd}\left(3 m, 2 m^{2}+5 n^{2}\right)=\operatorname{gcd}\left(m, 5 n^{2}\right)=\operatorname{gcd}(m, 5)$ holds. So the primitive solutions with $b \neq 0$ are :
$$
(a, b, c)=\frac{1}{d}\left(3 m^{2}, 2 m^{2}+5 n^{2}, 3 m n\right) \text { for } \operatorname{gcd}(m, n)=1 \text { and } d=\operatorname{gcd}(5, m) \text {. }
$$
Note: For $(m, n)=(0,1)$, $d=5$ and $(a, b, c)=(0,1,0)$.
For $b=0$ there is only the solution $(a, b, c)=(0,0,0)$, i.e. $(m, n)=(0,0)$.
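As a consistency check against the example given in the question (an added remark): $(m,n)=(5,1)$ gives $\left(3m^2,\,2m^2+5n^2,\,3mn\right)=(75,55,15)$ and $d=\gcd(5,m)=5$, so $(a,b,c)=(15,11,3)$.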
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4434353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question about a step in a derangement proof I read a proof about derangement and I didn't understand this step:
$d_n - nd_{n-1} = -(d_{n-1}-(n-1)d_{n-2}) \implies d_n=nd_{n-1}+(-1)^n$
I see that we have $S_n = -S_{n-1}$ if $S$ is the stuff on one side of the equation but I don't get what happened then. Where does $(-1)^n$ come from?
| The second equation doesn't follow from just the first equation you wrote here. That equation has to be understood as a recurrence, and then it implies
$S_n = (-1)^n K$ for some constant $K$ and $n$ in the range where the recurrence works.
Now you need an appropriate $K,$ which you can read off from a single value: $S_1 = d_1 - 1\cdot d_0 = 0 - 1 = -1 = (-1)^1 K,$ so $K = 1.$
With $K = 1$ you have $S_n = (-1)^n,$ i.e. $d_n - nd_{n-1} = (-1)^n,$ which is exactly the second equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4434498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Multiplicative property of trace Let $T_1 \in \mathcal{L}(V)$ and $T_2 \in \mathcal{L}(V)$ be positive operators. Prove that the trace of their product is non-negative i.e., tr($T_1 T_2) \geq 0$
Attempt 1: Obviously, a positive operator has a positive trace because all of its eigenvalues are positive. However, the product of two positive operators is not necessarily positive.
Attempt 2: Finding a pattern between the positivity of eigenvalues during multiplication
Both of my attempts fizzled out and I am pretty stuck. How would one solve this?
| The product of two positive matrices $A,B \ge 0$ always has nonnegative eigenvalues. Indeed, since $B$ is positive it has a unique positive square root $B^{1/2}$ so
$$\sigma(AB) = \sigma(AB^{1/2}B^{1/2}) = \sigma(B^{1/2}AB^{1/2}).$$
Notice that $B^{1/2}AB^{1/2}$ is a positive matrix since it is self-adjoint and for all $x \in V$ holds
$$\langle B^{1/2}AB^{1/2}x, x\rangle = \langle AB^{1/2}x, B^{1/2}x\rangle \ge 0$$
because $A \ge 0$. In particular, $B^{1/2}AB^{1/2}$ has nonnegative spectrum and hence the same holds for $AB$.
Since the trace is the sum of eigenvalues, it follows that $\operatorname{Tr}(AB) \ge 0$.
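A quick numerical illustration of this similarity argument (a sketch assuming NumPy/SciPy; not part of the proof):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = X @ X.T, Y @ Y.T  # two random positive (semi)definite matrices

# B^{1/2} A B^{1/2} is self-adjoint with nonnegative spectrum
# and shares its spectrum (hence its trace) with A B.
Bh = sqrtm(B).real
C = Bh @ A @ Bh
print(np.trace(A @ B), np.trace(C))           # equal up to rounding
print(np.linalg.eigvalsh(C).min() >= -1e-10)  # spectrum is nonnegative
```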
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4434784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Objective function with inverse matrix Let $\mathbf{v}\in \mathbb{R}^{p}$ be a known vector and $\mathbf{A}\in\mathbb{R}^{p\times p}$, $\mathbf{B}\in \mathbb{R}^{n \times p}$ known matrices. In this setting, $\mathbf{A}$ is symmetric and invertible. My objective is to determine whether the optimization problem
$$\min_{a > 0} \mathbf{v}^\top(a\mathbf{A}+ \mathbf{B}^\top \mathbf{B})^{-1}\mathbf{v}$$
is convex and reformulate it in a friendly manner.
Straightforwardly, the objective function can be rewritten as
$$\mathbf{v}^\top(a\mathbf{A}+ \mathbf{B}^\top \mathbf{B})^{-1}\mathbf{v} = \mathbf{v}^\top\left(\mathbf{I}+ \frac{1}{a}\mathbf{A}^{-1}\mathbf{B}^\top \mathbf{B}\right)^{-1}(a\mathbf{A})^{-1}\mathbf{v}.$$
I was not able to follow from here in the general case... However, if $\mathbf{A}^{-1}$ and $\mathbf{B}^\top \mathbf{B}$ commute, and since they are symmetric matrices, the product will be also symmetric, so it can be expressed as $\mathbf{A}^{-1}\mathbf{B}^\top \mathbf{B} = \mathbf{U}^{\top}\boldsymbol{\Sigma}\mathbf{U}$, where $\mathbf{U}^\top = \mathbf{U}^{-1}$ and $\boldsymbol{\Sigma} = \text{diag}(d_1, \ldots, d_p)$. In this case,
$$\mathbf{v}^\top(a\mathbf{A}+ \mathbf{B}^\top \mathbf{B})^{-1}\mathbf{v} = \mathbf{v}^\top\left(\mathbf{U}^{\top}\mathbf{U}+ \frac{1}{a}\mathbf{U}^{\top}\boldsymbol{\Sigma}\mathbf{U}\right)^{-1}(a\mathbf{A})^{-1}\mathbf{v} = \frac{1}{a}\mathbf{v}^\top \mathbf{U}\left(\mathbf{I}+ \frac{1}{a}\boldsymbol{\Sigma}\right)^{-1}\mathbf{U}^{\top} \mathbf{A}^{-1}\mathbf{v}.$$
Since $\mathbf{I}+ \frac{1}{a}\boldsymbol{\Sigma}$ is diagonal, its inverse has entries of the form $\frac{1}{1+ \frac{d_i}{a}}$, and multiplying by the $\frac{1}{a}$ factor we get
$$\min_{a > 0} \mathbf{r}^\top\mathbf{D}\mathbf{s}$$
where $\mathbf{r}^\top = \mathbf{v}^\top \mathbf{U}$, $\mathbf{s}=\mathbf{U}^{\top} \mathbf{A}^{-1}\mathbf{v}$ and $\mathbf{D} = \text{diag}\left(\frac{1}{a + d_1}, \ldots, \frac{1}{a+d_p}\right)$.
I was wondering if I could get rid of this strong assumption (the matrices commute). Any help will be appreciated.
| The map $\phi: a\mapsto a\mathbf A + \mathbf{B^\top B}$ is an affine map, and as such it preserves convexity; it also takes values in the set of SDP matrices. Now, your objective function can be seen as $\operatorname{tr}(\phi(a)^{-1}\mathbf{vv^\top})$, which is convex in $\phi(a)$.
To convince yourself of this using an elementary proof, you can adapt the one from this question:
Is the trace of inverse matrix convex?
to the function $\mathbf{S} \mapsto \operatorname{tr}(\mathbf S^{-1}\mathbf{vv^\top})$ defined over the set of SDP matrices.
As for the friendly reformulation, you can try
$$
\min_{a,\mathbf S} \mathbf{v^\top S^{-1}v},
\ \mathrm{s.t.} \ \mathbf S = a\mathbf{A + B^\top B}.
$$
and maybe find the dual.
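A minimal numerical sketch of the convexity claim (illustrative only; it additionally assumes $\mathbf A$ is positive definite so that $\phi(a)$ is SPD for $a>0$, and all names/sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 3, 5
M = rng.standard_normal((p, p))
A = M @ M.T + np.eye(p)          # assumed positive definite
B = rng.standard_normal((n, p))
v = rng.standard_normal(p)

def obj(a):
    # v^T (aA + B^T B)^{-1} v, computed via a linear solve
    return v @ np.linalg.solve(a * A + B.T @ B, v)

# midpoint-convexity check at a pair of positive points
a1, a2 = 0.5, 4.0
print(obj((a1 + a2) / 2) <= (obj(a1) + obj(a2)) / 2 + 1e-12)  # True
```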
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4434960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to calculate $\int_0^\pi\frac{dx}{2+\sin^2x}$ without residue theorem I came across the following integral:
Calculate $\int_0^\pi\frac{dx}{2+\sin^2x}$.
I know that you probably can solve it using residue theorem, but since we haven't proved it yet, I tried another approach:
I know that $\sin x=\frac{e^{ix}-e^{-ix}}{2i}$ and therefore $\sin^2x=\frac{e^{2ix}-2+e^{-2ix}}{-4}$. So the integral becomes: $$\int_0^\pi\frac{dx}{2+\frac{e^{2ix}-2+e^{-2ix}}{-4}}=-4\int_0^\pi \frac{dx}{e^{2ix}+e^{-2ix}-10}$$ Denoting by $\gamma:[0,\pi]\to\mathbb{C}$ the path $\gamma(x)=e^{2ix}$ and using change of variables $z=\gamma(x)$, we have: $$-4\int_0^\pi \frac{dx}{e^{2ix}+e^{-2ix}-10}=-4\int_\gamma\frac{1}{z+\frac{1}{z}-10}\frac{dz}{2iz}=\frac{-2}{i}\int_\gamma\frac{1}{z^2-10z+1}dz$$
But I couldn't solve the last one (I figured that I must find the zeroes of the function in the denominator and then use Cauchy's integral formula somehow).
Any help would be appreciated.
| You could use a double angle identity to write $\sin^2 x = \frac 12 (1-\cos 2x)$, then apply a Weierstrass substitution. The final step will be of the form $\int \frac 1{1+t^2}dt$ which will give a simple arctangent result.
You will need a couple of linear substitutions along the way, but those are trivial.
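Carrying the hint through explicitly (a worked version of the suggestion above, not in the original answer): with $\sin^2 x=\tfrac12(1-\cos 2x)$ and then $u=2x$,
$$\int_0^\pi\frac{dx}{2+\sin^2x}=\int_0^\pi\frac{2\,dx}{5-\cos 2x}=\int_0^{2\pi}\frac{du}{5-\cos u}=2\int_0^{\pi}\frac{du}{5-\cos u},$$
where the last step uses the symmetry of $\cos u$ about $u=\pi$. The Weierstrass substitution $t=\tan(u/2)$ then gives
$$2\int_0^{\infty}\frac{2\,dt}{5(1+t^2)-(1-t^2)}=4\int_0^\infty\frac{dt}{4+6t^2}=\frac{4}{\sqrt{24}}\cdot\frac{\pi}{2}=\frac{\pi}{\sqrt6}.$$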
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4435147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Why is minimizing the norm of a vector ($|w|$) the same as minimizing its sum of squares ($0.5 |w|^2$)? Like the title says, I'm not sure why some popular machine learning textbooks say this. Is there something I am missing? I can elaborate if needed.
Similar to this question: Minimize the norm of $w$.
But i was not able to follow the answer.
| In general, if $f$ is a strictly increasing function then minimizing $f(g(x))$ is equivalent to minimizing $g(x)$. In your case, set $f: \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}$ so that $f(x) = .5x^2$ and set $g: V \rightarrow \mathbb{R}$ so that $g(x) = |x|$. Then $f(x)$ is strictly increasing, so minimizing $|x|$ is equivalent to minimizing $.5|x|^2$.
Intuitively, just imagine if you're playing a game where there is some dollar payout for getting a high score, and the higher your score the more the payout. No matter what the relationship is between the score and the payout, the strategy to make the most money is to get the highest score.
More formally:
Lemma. Let $g: A \rightarrow B$ be a function, with $A$ any set and $B \subseteq \mathbb{R}$ and let $f: B \rightarrow \mathbb{R}$ be an increasing function, so that $x \leq y \implies f(x) \leq f(y)$. Then, if $x_0 \in A$ is such that $g(x_0)$ minimizes $g(x)$ then $f(g(x_0))$ minimizes $f(g(x))$.
Proof: Suppose $g(x_0)$ is a minimum. Then for any $x$, $g(x_0) \leq g(x)$, and so since $f$ is increasing, $f(g(x_0)) \leq f(g(x))$. $\square$
Note also that if $f$ is strictly increasing, with $x < y\implies f(x) < f(y)$, then $f$ "preserves uniqueness" in the sense that if $x_0$ is the unique minimizer of $g$ then $x_0$ is also the unique minimizer of $f\circ g$. [exercise]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4435315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all of the extreme values using Lagrange Multipliers I need to find all of the extreme values of the function
$x^2+y^2+z^2$
constrained to $x^2+2y^2-z^2-1=0$
the problem is that I get this system that I have no idea how to solve:
$2x=2\lambda x$
$2y=4\lambda y$
$2z=-2\lambda z$
$x^2+2y^2-z^2-1=0$
All I can think of is that $x=y=z=0$ but then in the last equation I would get $-1=0$ which of course can't happen.
Any idea what do I need to do here?
| There are other candidate solutions, if you select an appropriate value for $\lambda$.
Suppose you solve your first equation for $\lambda$, getting $\lambda=1$. This does not fit with your second or third equation, so you must set $y=z=0$; but you can adjust $x$ to match your final equation and thus get candidates for an extreme point with the zero derivative. You find $x=\pm1, y=z=0$.
You then have the function, with the Lagrange multiplier built in:
$x^2+y^2+z^2-1\cdot(x^2+2y^2-z^2-1)=-y^2+2z^2+1.$
This evidently has a saddle point, not a maximum or minimum, at $(\pm1,0,0)$, so this choice does not work.
But ... you can try a different value of $\lambda$ by solving the second equation first, intending to render $x=z=0$ to fit with that value of $\lambda$. Or you start with the $\lambda$ value obtained from the third equation and put $x=y=0$ to fit with this possible $\lambda$ value.
One of these cases will ultimately lead to a real solution that actually does correspond to an extremum, which you would expect on geometric grounds to be a minimum. (The case from the third equation, $\lambda=-1$ with $x=y=0$, forces $-z^2=1$ and so has no real solution.)
The true solution is obtained from solving the second equation $2y=4\lambda y$, thus $\lambda=1/2$ leading to $y=\pm\sqrt{1/2}$, $x=z=0$, and a minimum function value of $1/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4435461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
First homology group of Klein bottle (without using the Hurewicz Theorem) The Klein bottle $K$ is the connected sum of two real projective planes $K = \mathbb{RP}^2\#\mathbb{RP}^2$ and has fundamental group $\pi_1(K) = \pi_1(\mathbb{RP}^2\#\mathbb{RP}^2) = \langle a, b\mid a^2b^2 = 1\rangle$. Then by the Hurewicz Theorem, we should have
$$
H_1(K ;\mathbb{Z}) = \pi_1(K)/(aba^{-1}b^{-1}).
$$
I know that $H_1(K;\mathbb{Z}) = \mathbb{Z}\oplus\mathbb{Z}_2$. But why is the following true?
$$
\langle a, b\mid a^2b^2 = 1\rangle/(aba^{-1}b^{-1}) = \mathbb{Z}\oplus\mathbb{Z}_2
$$
| First note that $\langle a, b \mid a^2b^2, aba^{-1}b^{-1}\rangle \cong \langle a, b \mid (ab)^2, aba^{-1}b^{-1}\rangle$ and that a presentation for $\mathbb{Z}\oplus\mathbb{Z}_2$ is $\langle c, d \mid d^2, cdc^{-1}d^{-1}\rangle$. From these presentations, it seems that an isomorphism should map $ab$ to $d$. As $a$ and $c$ have infinite order, we can also try mapping $a$ to $c$. In order to define a homomorphism, we see that $b$ must be mapped to $c^{-1}d$.
We first need to check that
\begin{align*}
\phi : \langle a, b\mid a^2b^2, aba^{-1}b^{-1}\rangle &\to \langle c, d \mid d^2, cdc^{-1}d^{-1}\rangle\\
a &\mapsto c\\
b &\mapsto c^{-1}d
\end{align*}
is a well-defined group homomorphism, and then verify that it is an isomorphism.
Consider the homomorphism
\begin{align*}
\Phi : \langle a, b\rangle &\to \langle c, d\rangle\\
a &\mapsto c\\
b &\mapsto c^{-1}d.
\end{align*}
Note that $\Phi((ab)^2) = \Phi(ab)^2 = (\Phi(a)\Phi(b))^2 = (cc^{-1}d)^2 = d^2$ and
\begin{align*}
\Phi(aba^{-1}b^{-1}) &= \Phi(a)\Phi(b)\Phi(a^{-1})\Phi(b^{-1})\\
&= \Phi(a)\Phi(b)\Phi(a)^{-1}\Phi(b)^{-1}\\
&= cc^{-1}dc^{-1}(c^{-1}d)^{-1}\\
&= dc^{-1}d^{-1}c\\
&= c^{-1}(cdc^{-1}d^{-1})c.
\end{align*}
As $\Phi((ab)^2)$ and $\Phi(aba^{-1}b^{-1})$ are in the kernel of the natural projection $\langle c, d\rangle \to \langle c, d \mid d^2, cdc^{-1}d^{-1}\rangle$, the homomorphism descends to a homomorphism, namely $\phi : \langle a, b\mid a^2b^2, aba^{-1}b^{-1}\rangle \to \langle c, d \mid d^2, cdc^{-1}d^{-1}\rangle$ as defined above.
Suppose $w \in \langle a, b \mid (ab)^2, aba^{-1}b^{-1}\rangle$ is in the kernel of $\phi$. Note that $w = a^mb^n$ for some $m, n \in \mathbb{Z}$. As $\phi(a^mb^n) = \phi(a)^m\phi(b)^n = c^m(c^{-1}d)^n = c^{m-n}d^n$ we must have $m - n = 0$ and $n \in 2\mathbb{Z}$ - to see this, note that the isomorphism $\langle c, d \mid d^2, cdc^{-1}d^{-1}\rangle \to \mathbb{Z}\oplus\mathbb{Z}_2$ is given by $c^rd^s \mapsto (r, s)$. Therefore $m = n = 2k$ for some $k \in \mathbb{Z}$. Now note that $w = a^mb^n = a^{2k}b^{2k} = (ab)^{2k} = ((ab)^2)^k = 1$, so $\phi$ is injective. On the other hand, $\phi$ is surjective since $\phi(a) = c$ and $\phi(ab) = d$. Therefore $\phi$ is an isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4435629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A natural deduction proof of $\neg (A \leftrightarrow \neg A ) $. I want to prove $\neg (A \leftrightarrow \neg A ) $ in natural deduction: I tried first
But I can't figure out how to discharge the hypotheses $A$ and $\neg A$. I then tried
Here I just need to discharge $A \wedge \neg A$, but I can't figure out how to do that either.
Edit
Following the help in the answers and comments I came out with
where $\begin{array}{c} (*) \\ \vdots \\ A \vee \neg A \end{array}$ is a standalone natural deduction proof (requiring RAA) of $A \vee \neg A$.
Second method
Without a proof of $A \vee \neg A$:
| Once you have shown that $A$ leads to a contradiction, then presumably you can infer $\neg A$ using $\neg$ Intro, and that will discharge $A$. That then also means that you don't have to assume $\neg A$ ... you can just continue working with that $\neg A$. So, not having assumed $\neg A$ in the first place, there is no $\neg A$ to discharge.
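For reference, here is one way the whole derivation can be assembled (a sketch in Fitch style; the rule names may differ from your system, and note that no RAA or excluded middle is needed):

1. $A \leftrightarrow \neg A$ — assumption
2. $\quad A$ — assumption
3. $\quad \neg A$ — from 1 and 2 ($\leftrightarrow$ Elim)
4. $\quad \bot$ — from 2 and 3 ($\neg$ Elim)
5. $\neg A$ — $\neg$ Intro, discharging 2
6. $A$ — from 1 and 5 ($\leftrightarrow$ Elim)
7. $\bot$ — from 5 and 6 ($\neg$ Elim)
8. $\neg(A \leftrightarrow \neg A)$ — $\neg$ Intro, discharging 1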
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4436035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
An Extension of Gambler's Ruin Suppose a gambler who has $n$ dollars gambles infinitely many times. The payoff of the $t^{th}$ round is $X_t$, which is an integer between $-1000$ and $1000$. We know that $\mathbb{E}[X_t|X_{t-1},\cdots,X_1]\leq c$. Here, $c$ is a negative constant. Suppose the probability of bankruptcy is $p(n)$. Do we have $\lim_{n\rightarrow \infty}p(n)=0$?
I have thought about this problem for a couple of days. I am not familiar with martingale theory, so I do not know whether this problem is difficult.
Supplement: what about the case that $\mathbb{E}[X_t|X_{t-1},\cdots,X_1]\geq c$ where $c$ is a positive constant? Thank you very much!
| $p(n)=1$ for each $n$.
Let $S_k= X_1+\ldots X_k$, so the gambler's fortune after $k$ rounds is $\max\{n+S_k,0\}$. Write $b=-c>0$. The hypothesis implies that $M_k=S_k+bk$ is a supermartingale, so by the Azuma-Hoeffding inequality
$$P(n+S_k >0)=P(M_k>bk-n) \le \exp\Bigl(\frac{-(bk-n)^2}{2 \cdot 1000^2 k}\Bigr)$$
which decays exponentially in $k$. By the Borel-Cantelli Lemma, the event $\{S_k>0\}$ will occur only finitely often almost surely, so the gambler will go bankrupt with probability 1.
https://en.wikipedia.org/wiki/Azuma%27s_inequality
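For intuition, a toy simulation of the conclusion (illustrative only, with a made-up payoff distribution satisfying the hypotheses; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy payoff: uniform on {-2, -1, 0, 1}, so E[X_t] = -0.5 = c < 0,
# and the payoffs are bounded (well within the stated +/-1000).
def goes_bankrupt(n, max_rounds=1_000_000):
    fortune = n
    for _ in range(max_rounds):
        fortune += rng.integers(-2, 2)  # draws from {-2, -1, 0, 1}
        if fortune <= 0:
            return True
    return False

# With negative drift, every run should hit 0, regardless of n.
print(all(goes_bankrupt(n) for n in (10, 100, 1000)))  # True
```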
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4436204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deriving with respect to arc length In $x$-$y$ coordinates, $dy/dx$ gives the slope of the tangent to a curve at a point.
If this curve were parametrized by its arc length $s$, then we would have $dy/ds$ and $dx/ds$ at each point.
How can we interpret $dy/ds$ and $dx/ds$? What do they give us?
| $dy/dx = \tan \phi$ is the slope of the tangent to a curve at any point.
We interpret $\sin \phi =dy/ds$ and $\cos \phi=dx/ds$ via the infinitesimal (differential) right triangle whose legs are $dx$ and $dy$ and whose hypotenuse $ds$ lies along the instantaneous tangent.
In mechanics/dynamics we can view them physically as the $x,y$ components not only of displacement along the curve but also of velocity (and of acceleration, when the acceleration is directed along the tangent). That is, if dots denote time derivatives,
$$ \tan \phi = \frac{dy}{dx} =\frac{\dot y}{\dot x}. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4436556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Irreducible representations of a Hecke algebra have dimension 1 or 2 First, denote by $H(2)$ the algebra generated by the elements $Y_{1}, Y_{2}$, and $s$ subject to the following relations:
$$
s^{2}=1, \quad Y_{1} Y_{2}=Y_{2} Y_{1}, \quad s Y_{1}+1=Y_{2} s
$$
My goal is to show that finite-dimensional irreducible representations of $H(2)$ are either one dimensional or two dimensional.
Since $Y_{1}$ and $Y_{2}$ commute, we know that they have a common eigenbasis.
I would show that an irreducible representation of $H(2)$ is generated by $(v, sv)$ where $v$ is an eigenvector of $Y_{1}$ and $Y_{2}$, but I don't know how to do it. Thanks.
| It is not necessary that $Y_1$ and $Y_2$ have a common eigenbasis, because $Y_1$ and $Y_2$ might not be diagonalizable at all! However, you are right that this is a key observation, and what is true is that $Y_1$ and $Y_2$ will have a common eigenvector.
For simplicity, suppose that $V$ is an $H(2)$-module such that $s$ acts by the identity. Then $Y_2$ acts by $Y_1+1$. In this case, can you show that, if $v$ is an eigenvector of $Y_1$, then the line spanned by $v$ is an $H(2)$-submodule of $V$? What about if $s$ acts by $-1$?
Can you see how to generalize this to the case of $V$ where $s$ does not act by a scalar? Perhaps you can guess what a basis of a two-dimensional $H(2)$-submodule of $V$ could be in this case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4436725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving this identity using summation by parts This problem was left as an exercise in the notes I am self-studying, and I am not able to come up with any ideas on how to solve it.
Problem: Let $$\pi(x;q,a)=\#\{p \leq x : p\equiv a \pmod q\}$$ and $$\psi(x; q,a)= \sum_{n\leq x ,\; n\equiv a \pmod q} \Lambda(n)$$
The prime number theorem for arithmetic progressions states that if $\gcd(a,q)=1$, then as $x\to \infty$ ($q$ fixed),
$$\pi (x;q,a) = \frac{1} {\varphi(q)} \text{Li}(x) + O(x e^{-c (\log x)^{1/2}})$$ and $$\psi(x;q,a) = \frac{x} {\varphi(q)} +O( x e^{-c (\log x)^{1/2}})$$
Applying summation by parts, $$\sum_{ p\leq x ,\; p\equiv a \pmod q } \frac{\log p}{p} = \frac{1} {\varphi(q)} \log x +O(1) $$
I am not able to deduce how applying summation by parts will give the required result. I thought of using Abel's identity and the Euler summation formula, but I am not able to understand how to deal with the condition $p\equiv a \pmod q$ in the summation.
Can you please help me in proving this?
| It follows from the second identity that
$$
\vartheta(x;q,a)=\sum_{\substack{p\le x\\p\equiv a(q)}}\log p={x\over\varphi(q)}+O(xe^{-c(\log x)^{1/2}}),
$$
allowing us to perform partial summation as follows:
\begin{aligned}
\sum_{\substack{p\le x\\p\equiv a(q)}}{\log p\over p}
&=\int_{2^-}^x{\mathrm d\vartheta(t;q,a)\over t} \\
&={1\over\varphi(q)}\int_2^x{\mathrm dt\over t}+\int_{2^-}^x\frac1t\mathrm d\left\{O(te^{-c(\log t)^{1/2}})\right\} \\
&={\log x\over\varphi(q)}+O(1).
\end{aligned}
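To justify the final $O(1)$ (an added detail): integrating the error term by parts gives a boundary contribution $O(e^{-c(\log x)^{1/2}})$ plus $\int_{2}^{x} O\big(e^{-c(\log t)^{1/2}}\big)\,\frac{\mathrm dt}{t}$, and the substitution $u=\log t$ shows
$$\int_{2}^{\infty} e^{-c(\log t)^{1/2}}\,\frac{\mathrm dt}{t}=\int_{\log 2}^{\infty}e^{-c\sqrt u}\,\mathrm du<\infty.$$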
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4436895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Points stabilised by conjugate of a group Let $G$ be a reductive group acting on an affine variety $X$. Let $x\in X$ such that the stabiliser of $x$ in $G$ is a subgroup $H$ which is itself a reductive group. Then, it is easy to see that any point in the orbit $Gx$ is stabilised by some conjugate of the group $H$ in $G$. Now, as the orbit $Gx$ isn’t necessarily closed, we can consider an arbitrary point $y$ in the closure of this orbit. Is the point $y$ necessarily stabilised by some conjugate of $H$? To simplify things, we can assume that the orbit $Gy$ of $y$ is closed in $X$, and then, does the question have a positive answer in this case?
As a reality check, this statement is trivially true when $H=G$ or if $H$ is trivial. The question is essentially asking if being closed by a conjugate of $H$ is an algebraic condition, but I don’t know how to see it.
| See Proposition 4.19 here for when $y$ has a closed orbit: https://web.northeastern.edu/iloseu/Jose_S16.pdf.
The idea of the proof is to use Luna's slice theorem. Assuming that $y$ has a closed orbit, there exists a slice $S$ through $y$ and a map:
$$\phi:G \times^{G_y} S \rightarrow X,$$
given by $(g,s)\mapsto gs$. The theorem tells us, among other things, that the image contains an open neighbourhood of $y$. Then, for any $y'$ in the neighbourhood of $y$, we compute the stabiliser $G_{y'}$ by computing the stabiliser of a point in its pre-image under $\phi$, say $(g,s)$.
Then, it is a straightforward computation to check that $G_{y'}=G_{(g,s)} = g(G_y)_sg^{-1}\subseteq gG_yg^{-1},$ or $g^{-1}G_{y'}g \subseteq G_y$. As this is true for all points $y'$ in a neighbourhood of $y$, the claim follows.
When the orbit isn’t closed, one can repeat the argument with a slight modification. Consider $X \setminus (\overline{Gy} \setminus Gy)$. Then, we see that $Gy$ is closed in this subvariety and still lies in the closure of $Gx$, and so, we can repeat the argument above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Regarding the derivative of Euclidean L2 norm, Definition of differentiation in Rudin. I am trying to understand the answer posted by hemanth in this post.
I understand how he derived the derivative of $f:\Bbb R^n \rightarrow \Bbb R$ defined as $f(x)=\| x \|$,
$$
Df(x) = \nabla_x\| x\|_2= \frac{x}{\| x \|}.
$$
So we have $Df:\Bbb R^n \rightarrow L(\Bbb R^n,\Bbb R)$, but according to Rudin, $Df(x)$ must be a linear transformation from $\Bbb R^n$ to $\Bbb R$, while I do not see why $Df(x)h \in \Bbb R$ for $h \in \Bbb R^n$.
I am trying to derive $Df(x)$ such that: $$
\lim_{h\rightarrow0}\frac{\| f(x+h)-f(x)-Df(x)h \|}{\| h \|}=0,$$ so $Df(x)h$ must be a real number.
Could it be that $Df(x)h$ is a product of a column vector and a row vector?
If so, how does
$$
\begin{split}
\frac{\| f(x+h)-f(x)-Df(x)h \|}{\| h \|} &= \frac{\big|\, \| x+h \| - \| x \| -Df(x)h \,\big|}{\| h\|}\\
&\le\frac{\big|\, \| x \|+\| h \| - \| x \| -Df(x)h \,\big|}{\| h\|}=\frac{\big|\, \| h \| -\frac{x}{\| x \|}h \,\big|}{\| h \|}\rightarrow 0\;,
\end{split}
$$ as $h \rightarrow 0$ ?
Anyone can help me with this confusion?
It is my first time asking the question here, so my formatting might be bad.
Thank you for understanding.
| I would simply write
$$
Df(\mathbf{x})[\mathbf{h}] =
\frac{\partial f}{\partial \mathbf{x}}:\mathbf{h}
$$
where the colon operator denotes the inner product in $\mathbb{R}^N$. In your example $\frac{\partial f}{\partial \mathbf x}=\frac{\mathbf x}{\|\mathbf x\|}$, so $Df(\mathbf{x})[\mathbf h]=\frac{\mathbf x\cdot\mathbf h}{\|\mathbf x\|}$ is indeed a real number, and $\mathbf h\mapsto Df(\mathbf x)[\mathbf h]$ is precisely a linear map $\mathbb R^n\to\mathbb R$, as Rudin's definition requires.
This says how much (up to first order) $f$ will change when you move from
$\mathbf{x}$ to $\mathbf{x+h}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What's the fastest way to solve a system of equations a million times when the coefficient matrix is the same but the constant vector is different? I need an efficient way to solve $Ax=C$ a million times such that the coefficient matrix $A$ is always the same but the constant vector $C$ is different for each of the million problems.
To solve the problem one time I can evaluate the determinants $A,A_x,A_y,\ldots$ (as in Cramer's rule) using Gaussian elimination, which would take about $O(n\cdot n^3\log\|A\|)$ time,
where $n$ is the order of $A$. But for a million cases that would be too much redundant calculation.
I am looking for an approach which is $<= O(n*n*million)$
@littleO suggested we can do it efficiently by LU Decomposition.
| If $A$ is an $n\times n$ matrix, then you can solve the $n$ equations
$$Az_i=e_i$$
where $e_i$ is all zeroes except for a $1$ in the $i$th position. (Here the subscripted terms $z_i,e_i$ denote column vectors.)
Then the solution to $Ax=C$ is simply $$x=\sum_{i=1}^n c_iz_i$$
(Here the $z_i$'s are column vectors, but the $c_i$'s are the elements of $C$. Sorry if that is confusing!)
This is really just @Ilya's suggestion of computing the inverse $$A^{-1}=[z_1\; z_2\; \ldots\; z_n]$$
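If one follows @littleO's LU route instead (a sketch assuming SciPy; sizes and names are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))

lu, piv = lu_factor(A)   # O(n^3), done once

# each subsequent solve is O(n^2); loop shown for clarity
for _ in range(3):       # stand-in for the million right-hand sides
    C = rng.standard_normal(n)
    x = lu_solve((lu, piv), C)

# if many right-hand sides are available up front, one vectorized call works:
Cs = rng.standard_normal((n, 1000))   # 1000 C's as columns
Xs = lu_solve((lu, piv), Cs)
```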
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $f$ and $g$ are two non-zero linear functionals and $\ker(f)=\ker(g)$ This question was left as an exercise in course notes on smooth manifolds and I am struck on this.
Question: Let $f: V\to \mathbb{R}$ be a non-zero linear on an $n$-dimensional vector space $V$.
(a) Prove that $\dim(\ker (f))= n-1$.
(b) If $f, g : V \to \mathbb{R}$ are two non-zero linear functionals such that $\ker(f)=\ker(g)$, prove that $g=cf$ for some constant $c\in \mathbb{R}$.
Attempt: (a) I have done this.
(b) Let $P(x) = g(x)-cf(x)$ where $c\in \mathbb{R}$ is a constant. If $x\in\ker(f)=\ker(g)$, then $P(x)=0$, so $g(x)=cf(x)$ there for any $c$. But when $x\notin\ker(f)$, I am unable to think how I should proceed.
Can you please help me with this?
| Denote $W=\ker f=\ker g$ we may form the quotient map $\pi:V\to V/W\cong \mathbb{R}$. The mappings $f$ and $g$ factor through $\pi$ and we get $\bar{f},\bar{g}:\mathbb{R}\cong V/W\to \mathbb{R}$ so $\bar{f}$ and $\bar{g}$ differ by a constant of multiple. Write $\bar{f}=c\bar{g}$ we get $f=\bar f\circ \pi=c\bar g\circ \pi=cg$
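For a more hands-on variant of the same argument (an added remark): pick $x_0$ with $f(x_0)\neq 0$ and set $c=g(x_0)/f(x_0)$. By part (a), $W$ has codimension $1$, so every $v\in V$ decomposes as $v=w+\lambda x_0$ with $w\in W$ and $\lambda\in\mathbb R$; then $g(v)=\lambda g(x_0)=c\,\lambda f(x_0)=c\,f(v)$.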
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What are some good, elementary and maybe also interesting proofs by induction? I am hosting a one-time class/talk on the concept of infinity for some (talented) high-school students. I want to teach them about proof by induction and I want them to do some exercises (you learn math by doing!). I am therefore looking for easy, elementary and maybe also intersting exercises for someone with little to no experience in proving statements. Some examples that came to mind:
*
*Proving $n \leq n^2$.
*Proving $n! \leq n^n $.
*Proving that the angle sum in an $n$-sided polygon is $(n-2)180^\circ$ for $n \geq3$.
*Proving Bernoulli's Inequality: $(1+x)^n \geq 1+nx$ for all $x \geq -1$.
I have looked at the thread Examples of mathematical induction but most if not all of the examples given here I think are too difficult for the audience.
Any inputs are welcome and appreciated! The result and its induction proof need not be 100% rigorous; the point is to illustrate the induction proof in simple settings.
| When I first studied Proof by induction in highschool, the very simple but interesting proof of $\sum_{i=1}^ni = \frac{n(n+1)}{2}$ was presented to me. I thought this to be very intuitive and quite straightforward. I believe this is quite well suited for your audience.
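For instance, the induction step written out (assuming the formula for $n$):
$$\sum_{i=1}^{n+1} i=\frac{n(n+1)}{2}+(n+1)=\frac{(n+1)(n+2)}{2},$$
which is the formula with $n$ replaced by $n+1$; together with the base case $1=\frac{1\cdot 2}{2}$ this completes the proof.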
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 2
} |
If I define summation recursively, how do I prove formally that both of the following definitions are equivalent? First definition:
$\sum_{x=a}^{b} f(x)=f(b)+\sum_{x=a}^{b-1} f(x)$ if $a\leq b$, and equals $0$ otherwise
Second definition:
$\sum_{x=a}^{b} f(x)=f(a)+\sum_{x=a+1}^{b} f(x)$ if $a\leq b$, and equals $0$ otherwise
Intuitively, I have no doubt that both statements are always equal, but I am struggling to come up with a formal proof. I tried induction over $b$, but only got this far working from the second definition:
$\sum_{x=a}^{b+1} f(x)=f(b+1)+\sum_{x=a}^{b} f(x)$
| Let's prove it formally, by strong induction on $b - a$. If $b - a < 0$, then both definitions produce $0$, so we consider only the case where $b - a \ge 0$, allowing us to use strong induction.
Let's be clear about the predicate we're proving. Let $R(f,a,b)$ and $L(f, a, b)$ be the result of applying the two respective definitions to $\sum_{x=a}^b f(x)$ (in case it's not entirely obvious, $R$ is for "right" and $L$ is for "left", to represent from where we're stripping off terms). We are claiming that:
$$P(n) : (\forall a)(\forall b)(\forall f)\Big(b - a = n \implies R(f, a, b) = L(f, a, b)\Big).$$
The base cases I wish to start from are $P(0)$ and $P(1)$. First, we tackle $P(0)$. Assuming $b - a = 0$,
$$R(f, a, b) = R(f, a, a) = R(f, a, a - 1) + f(a) = 0 + f(a),$$
and
$$L(f, a, a) = f(a) + L(f, a + 1, a) = f(a) + 0.$$
Next, if $b - a = 1$, then
$$R(f, a, b) = R(f, a, a + 1) = R(f, a, a) + f(a + 1) = f(a) + f(a + 1) + 0,$$
and similarly for $L(f, a, b)$.
Now, let's fix $n \ge 2$, and suppose that $P(k)$ is true for all $k < n$. To prove $P(n)$, let us suppose $a$ and $b$ are such that $b - a = n$, and $f$ is some function. Now,
$$L(f, a, b) = f(a) + L(a + 1, b).$$
Because $b - (a + 1) = n - 1 < n$, we therefore know that $L(f, a + 1, b) = R(f, a + 1, b)$, so
$$L(f, a, b) = f(a) + R(f, a + 1, b) = f(a) + R(f, a + 1, b - 1) + f(b).$$
A similar argument shows
$$R(f, a, b) = f(a) + L(f, a + 1, b - 1) + f(b),$$
which is equal to the quantity above, since $b - 1 - (a + 1) = n - 2 < n$ (note: we needed the assumption that $n \ge 2$ to ensure that two terms can be pulled out this way, hence the need for double base case). Thus, the induction step is completed, and the result is proven by strong induction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4437863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
approximation of integral of $|\cos x|^p$ Let $p\in [1,2)$. Let
$$
\beta = \frac{1}{2\pi}\int_0^{2\pi} |\cos x|^p\, dx = \frac{\Gamma(\frac{p+1}{2})}{\sqrt{\pi}\Gamma(1+\frac{p}{2})}.
$$
Consider the following approximation to the integral definition of $\beta$:
$$
S_n = \frac{1}{2n}\sum_{k=0}^{2n-1} \left| \cos\left(2\pi \cdot \frac{k}{2n}\right)\right|^p.
$$
We are interested in the asymptotics of the approximation error
$$
|S_n - \beta|.
$$
I have found empirically that
$$
|S_n - \beta| \leq \frac{c_p}{n^{p+1}}
$$
where $c_p$ is a constant that depends only on $p$ and $c_p\to 0$ as $p\to 2$. But I have no idea at all how to prove this. (This bound is actually tight, that is, the correct order should be $1/n^{p+1}$.)
I tried to split the integral from $0$ to $2\pi$ into $2n$ pieces and sum up the approximation errors in each piece, which only gives an error of $O(1/n^2)$ instead of $O(1/n^{p+1})$. Can anyone shed some light on how to deal with the $p$-th power or how $p$ enters the exponent in the approximation error?
Edit: The following seems to be a proof, up to tiny boundary sloppiness that can be easily fixed.
Assume that $n$ is even and so we replace $n$ with $2n$. Note that $\cos x$ is decreasing and positive on $[0,\pi/2]$. By symmetry, it suffices to bound
\begin{align*}
E &:= \frac{1}{2n}\sum_{k=0}^{2n-1} \left|\cos\frac{\pi k}{2n}\right|^p - \frac{1}{\pi}\int_0^\pi |\cos x|^p dx\\
&= \frac{1}{2n}\sum_{k=0}^{n-1} \left(\cos^p\left(\frac{\pi k}{2n}\right) + \cos^p\left(\frac{\pi (k+1)}{2n}\right)\right) - \frac{1}{\pi}\sum_{k=0}^{n-1} 2\int_{k\pi/(2n)}^{(k+1)\pi/(2n)} \cos^p xdx \\
&= \frac{1}{n}\sum_{k=0}^{n-1}\left[ \frac{1}{2}\left(\cos^p\left(\frac{\pi k}{2n}\right) + \cos^p\left(\frac{\pi (k+1)}{2n}\right)\right) - \frac{1}{\pi/(2n)}\int_{k\pi/(2n)}^{(k+1)\pi/(2n)} \cos^p xdx\right]\\
&=: \frac{1}{n}\sum_{k=0}^{n-1} E_k
\end{align*}
Recall the following mean-value result for the error of trapezoidal rule:
$$
\frac{1}{b-a}\int_a^b f(x)dx - \frac{f(a)+f(b)}{2} = -\frac{(b-a)^2}{12}f''(\xi)
$$
Let $f(x) = \cos^p x$, then $f'(\pi/2-\delta)\asymp \delta^{p-1}$ and $f''(\pi/2-\delta)\asymp 1/\delta^{2-p}$.
Let $\epsilon \geq 2/n$ be a parameter to be determined, and let $K = [n\epsilon]$, so $K \geq 2$. Applying the mean-value result to the intervals corresponding to $k=0,\dots,n-K-1$, we have
$$
E = \frac{1}{n}\sum_{k=0}^{n-K-1} \frac{1}{12}\left(\frac{\pi}{2n}\right)^2 f''(\xi_k) + \frac{1}{n}\sum_{k=n-K}^{n-1} E_k
=: A + B.
$$
where $\xi_k \in [k\pi/(2n), (k+1)\pi/(2n)]$.
Write $A$ as
$$
A = \frac{\pi}{24n^2} \sum_{k=0}^{n-K-1} f''(\xi_k)\frac{\pi}{2n} =: \frac{\pi}{24n^2} A'
$$
Note that $A'$ is the Riemann sum of $\int_0^{\pi/2-\epsilon} f''(x)dx$. Also note that $f''(x)$ has a unique positive root $x_0$ in $(0,\pi/2)$ and $f''(x)$ is positive and increasing when $x\geq x_0$. We can upper bound
$$
\begin{aligned}
A' &\leq \int_0^{\pi/2-\epsilon} f''(x)dx + \frac{C_1}{n}\max_{x\in [0,x_0]}|f'''(x)| + \frac{C_2}{n}f''\left(\frac{\pi}{2}-\epsilon\right) \\
&\leq f'\left(\frac{\pi}{2}-\epsilon\right) + \frac{C_3}{n} + \frac{C_4}{n\epsilon^{2-p}} \\
&\lesssim \epsilon^{p-1} + \frac{1}{n\epsilon^{2-p}}.
\end{aligned}
$$
Next, we deal with the last term $B$. Invoke the integral estimation error from Theorem 3 of this paper, which states that, if $f'$ is absolutely continuous on $[a,b]$ and $f''\in L^\alpha(a,b)$ for some $\alpha\geq 1$, then
$$
\left|\frac{1}{b-a}\int_a^b f(x)dx - \frac{f(a)+f(b)}{2}\right| \leq C_\alpha (b-a)^{2-\frac{1}{\alpha}} \|f''\|_\alpha.
$$
Since $f''(\pi/2-\epsilon)\asymp 1/\epsilon^{2-p}$, $f''$ is $L^1$ integrable. It is clear that $E_k > 0$ when $k\geq n-K$, as $f''(\xi_k) > 0$ in this case. Applying the error bound above to $E_k$ ($k\geq n-K$), we obtain that
$$
B \lesssim \frac{1}{n^2}\int_{\pi/2-\epsilon}^{\pi/2} f''(x)dx \lesssim\frac{1}{n^2}\int_0^{\epsilon} \frac{1}{x^{2-p}}dx \lesssim \frac{\epsilon^{p-1}}{n^2}.
$$
Therefore we conclude that
$$
E \lesssim \frac{\epsilon^{p-1}}{n^2} + \frac{1}{n^3\epsilon^{2-p}}.
$$
Taking $\epsilon = \Theta(1/n)$ gives the desired result.
| A "Fourier-analytic" approach, with the analysis of $n^{p+1}|S_n-\beta|$ as $n\to\infty$. Let $$
T_N(f):=\frac1N\left(\frac{f(0)+f(2\pi)}{2}+\sum_{k=1}^{N-1}f\Big(\frac{2k\pi}{N}\Big)\right),\\\Delta_N(f):=T_N(f)-\frac1{2\pi}\int_0^{2\pi}f(x)\,dx.$$ Suppose that $f(x)=\sum_{n\in\mathbb{Z}}c_n e^{inx}$ with $\sum_{n\in\mathbb{Z}}|c_n|<\infty$, then $$T_N(f)=\frac1N\sum_{k=0}^{N-1}\sum_{n\in\mathbb{Z}}c_n e^{2\pi ikn/N}=\sum_{d\in\mathbb{Z}}c_{Nd},$$ because $\sum_{k=0}^{N-1}e^{2\pi ikn/N}=0$ if $n$ is not a multiple of $N$; thus $\color{blue}{\Delta_N(f)=\sum_{d\neq 0}c_{Nd}}$. In particular, if $c_n=O(|n|^{-\alpha})$ as $n\to\pm\infty$ with $\alpha>1$, then $\Delta_N(f)=O(N^{-\alpha})$ as $N\to\infty$.
Take $f(x)=|\cos x|^p$ with $\color{blue}{p>0}$, then $c_n=0$ if $n$ is odd, and (see e.g. {1} or {2}) $$c_{2n}=\frac2\pi\int_0^{\pi/2}\cos^p x\cos 2nx\,dx=\frac{\Gamma(p+1)}{2^p\Gamma(p/2+1+n)\Gamma(p/2+1-n)},$$ so that, using the reflection formula for $\Gamma$ and that $\Gamma(x+a)/x^a\Gamma(x)\to1$ as $x\to\infty$, $$\lim_{n\to\infty}(-1)^{n-1}n^{p+1}c_{2n}=\lambda_p:=\frac1{2^p\pi}\Gamma(p+1)\sin\frac{p\pi}{2}.$$
Now, denoting $a_n:=n^{p+1}(S_n-\beta)=n^{p+1}\Delta_{2n}(f)$, we get easily $$\lim_{n\to\infty}a_{2n}=-2\lambda_p\zeta(p+1)=2\pi^p\zeta(-p);\\\lim_{n\to\infty}a_{2n+1}=-(1-2^{-p})\lim_{n\to\infty}a_{2n}.$$
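A numerical check of the claimed constant for one value of $p$ (illustrative only, assuming mpmath; the convergence in $n$ is slow):

```python
import mpmath as mp

p = mp.mpf('1.5')
beta = mp.gamma((p + 1) / 2) / (mp.sqrt(mp.pi) * mp.gamma(1 + p / 2))

def S(n):
    # the midpoint-free Riemann sum from the question, 2pi*k/(2n) = pi*k/n
    return mp.fsum(abs(mp.cos(mp.pi * k / n))**p for k in range(2 * n)) / (2 * n)

predicted = 2 * mp.pi**p * mp.zeta(-p)   # claimed limit of a_{2n}
for n in (50, 100, 200):                  # even n
    print(n, n**(p + 1) * (S(n) - beta))
print('predicted:', predicted)
```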
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
} |
Limit of sum $\sum_{x=1}^{\infty} \frac{1}{\sqrt{x}\sqrt{x+1}}$ I am having a hard time proving if this sum converges or diverges:
$$\sum_{x=1}^{\infty} \frac{1}{\sqrt{x}\sqrt{x+1}}$$
I tried proving it by the ratio test but $q = 1$.
I couldn’t proceed further and would like some help.
| Dunno if it helps, but $\sqrt x<\sqrt {x+1}$ because the square root is increasing, and therefore
$$\frac1{\sqrt x\sqrt{x+1}}>\frac1{\sqrt {x+1}\sqrt {x+1}}=\frac1{x+1}$$
because $x\mapsto 1/x$ is decreasing for positive $x$. Thus your terms dominate those of the harmonic series (minus its first term), which diverges, so by comparison your series diverges too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What breaks down when generalising from sets to classes? I have been familiarising myself recently with some basic definitions in category theory, where we work with classes of objects and morphisms, as opposed to just sets. It feels like I am long overdue trying to understand exactly what differences there are between sets and proper classes i.e. which properties we take for granted with sets no longer hold in this more general setting.
Since I don't have any rigorous definition of what a class is (or even a set for that matter, I have not studied axiomatic set theory), what should I know about classes to avoid making false assumptions in category theory?
For example:
*
*Do two classes $A$ and $B$ have a well-defined product class $A\times B$?
*Given a class $A$, can we form a 'power class' $\mathcal{P}(A)$ consisting of its subclasses?
*Can we define partial orders on classes in the same way we can for sets? (I remember being told that 'isomorphism' is an equivalence relation as an undergrad.)
I suspect the answer to all of the above is 'yes'. The only practical difference I have encountered so far seems to be that proper classes are simply too 'large' to be sets i.e. they do not have a well-defined cardinality. But can we still compare the relative sizes of two classes in some meaningful way?
In short, what things should I look out for when working with non-small categories?
| An alternative to the NBG theory espoused in another answer is the theory of Grothendieck universes, according to which "class" is literally just a name for an unusually large set, we never consider such a collection as "all sets whatsoever", and all constructions available for sets are (thus) available for "classes".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Uniform converges of $f_n(x)=x-\frac{x^n}{n!} , x\in [-a,a] ,a>0$ I have 2 questions :
1.with the given following function series :
$f_n(x)=x-\frac{x^n}{n!}$
I need to show that for $x\in [-a,a]$ $ ,a>0 $ there is $P,Q>0$ such that :
$|f_n(x)-f_{n-1}(x)|\leq PQ^{n-1}\frac{|x|^n}{n!}$
and I am stuck at this point:
$$\begin{align} |f_n(x)-f_{n-1}(x)|&=\left|x-\frac{x^{n+1}}{(n+1)!}-\left(x-\frac{x^{n}}{n!}\right)\right|\\&=\left|x-\frac{x^{n+1}}{{n+1}!}-x+\frac{x^{n}}{n!}\right|\\&=\left|\frac{x^n}{n!}-\frac{x^{n+1}}{(n+1)!}\right|\\&=\left|\frac{x^n}{n!}\left(1-\frac{x}{n+1}\right)\right|\\&=\left|\frac{x^n}{n!}\right|\left|1-\frac{x}{n+1}\right|\\&=\frac{|x|^n}{n!}\left|1-\frac{x}{n+1}\right|\end{align} $$
so I need to prove that :
$|1-\frac{x}{n+1}|\leq PQ^{n-1}$
then I tried :
$|1-\frac{x}{n+1}|\leq 1+|\frac{x}{n+1}|\leq 1+\frac{|x|}{n+1} \leq 1+\frac{a}{n+1} $
From here I am stuck.
My second question is to show that $f_n(x)$ converges uniformly on $x\in [-a,a]$ $ ,a>0 $
| Fixed $x\in [-a, a]$
$\lim_{n\to\infty} f_n(x)=\lim_{n\to\infty}\Big(x-\frac{x^n}{n!}\Big)=x $
Since
$[\lim_{n\to \infty }\frac{ x^n}{n!}=0]$
Hence, $f_n\to f$ pointwise on $[-a, a]$ where $f:[-a, a]\to \Bbb{R}$ defined by $f(x) =x$
Claim : $f_n\to f$ uniformly on $[-a, a]$, i.e. $(f_n) \to f$ in the space $(C[-a, a], \|\cdot\|_{\infty}) $
$\begin{align}\|f_n-f\|_{\infty}&=sup\{|f_n(x)-f(x)|:x\in [-a, a]\}\\&=sup\{|x-\frac{x^n}{n!}-x| : x\in[-a,a]\}\\&=sup\{\frac{|x|^n}{n!}:x\in[-a,a]\}\\&=\frac{a^n}{n!} \to 0 [\text{ as $n\to\infty$}]\end{align}$
Hence, $f_n\to f$ uniformly on $[-a, a]$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A specific sequence such that there is no three-term arithmetic progression in the sequence: does the corresponding series of reciprocals diverge? Define the sequence of natural numbers $(a_n)_n$ recursively as follows:
For each $k\geq 0,\ $ define $a_{k+1}$ to be the least natural number greater than $a_k$ such that it doesn't form a three-term arithmetic progression with any two previous terms of the sequence.
So if I haven't made a mistake, this sequence is: $1,2,4,5,10,11,13,14,28,29, 31, 32, 37, 38, 40, 41,\ldots.$
My question is, does $\sum \frac{1}{a_k}$ converge or diverge?
I believe this question is related to Erdos' conjecture on arithmetic progressions, which I have been reading about.
| If I'm not mistaken, $a_n$ grows roughly as $n^\alpha$ where $\alpha = \log_2(3) \approx 1.585$. Since $\alpha > 1$, $\sum_n 1/a_n$ converges.
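One way to see this (an added remark): this greedy construction is the classical Stanley sequence, and $a_k-1$ runs exactly through the numbers whose base-$3$ expansion uses only the digits $0$ and $1$ (e.g. $a_5-1=9=100_3$). Hence $a_{2^m+1}-1=3^m$, which gives the growth rate $a_n\asymp n^{\log_2 3}$.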
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A vector perpendicular to two other vectors Let $u,v \in \mathbb{R}^{3}$ be linearly independent. Find a third vector in $\mathbb{R}^{3}$ that is perpendicular to both ${v}^{\perp}$ and $u$, where ${v}^{\perp}$ is the orthogonal projection of $v$ onto $u$.
We know that if two vectors $a, b$ are orthogonal then
${\|a+b\|}^{2}$ = ${\|a\|}^{2}$ + ${\|b\|}^{2}$
Now, assume there exists a vector $w$ such that $w$ is orthogonal to both ${v}^{\perp}$ and $u$; then
${\|w+u\|}^{2}$ = ${\|w\|}^{2}$ + ${\|u\|}^{2}$
${\|w+{v}^{\perp}\|}^{2}$ = ${\|w\|}^{2}$ + ${\|{v}^{\perp}\|}^{2}$
Subtracting the two equations, we get
${\|w+u\|}^{2}$ - ${\|w+{v}^{\perp}\|}^{2}$ = ${\|u\|}^{2}$ - ${\|{v}^{\perp}\|}^{2}$
$(2w+u+{v}^{\perp}).(u-{v}^{\perp})$ = $({v}^{\perp}+u).(u-{v}^{\perp})$
this simplifies to
$w=0$
but how can the zero vector be orthogonal to both of the vectors?
| All that it means for two vectors $v$ and $w$ to be orthogonal is for their dot product to be $0$. In particular, the zero vector is orthogonal to all vectors.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4438820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $7^6 \equiv 2^7 \pmod{223}$. Given $7 \cdot2^5 \equiv 1\pmod{223}$. Show that $7^6 \equiv 2^7 \pmod{223}$.
I know there must be some clever way to show this congruence. I can't seem to figure it out. I've considered that $224=7\cdot2^5$ and tried multiplying both sides of the first congruence by various terms, but with little success. Any solutions or hints are greatly appreciated.
| A not clever method?
$$
\begin{aligned} 7^6 &\equiv (7^3)^2 \\
&\equiv 343^2 \\
&\equiv 120^2 \\
&\equiv 240 \times 60 \\
&\equiv 17 \times 60 \\
&\equiv 60 + 4 \times 4 \times 60 \\
&\equiv 60 + 4 \times 17 \\
&\equiv 128 \ ({\rm mod}\ 223)
\end{aligned}
$$
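For reference, here is a route that uses the hint directly (my own addition): from $7\cdot 2^5\equiv 1$ we get $7\equiv 2^{-5}$, so $7^6\equiv 2^{-30}$, and the claim $7^6\equiv 2^7$ is equivalent to $2^{37}\equiv 1 \pmod{223}$. Repeated squaring gives $2^8\equiv 33$, $2^{16}\equiv 33^2=1089\equiv -26$, $2^{32}\equiv(-26)^2=676\equiv 7$, hence $2^{37}=2^{32}\cdot 2^5\equiv 7\cdot 32=224\equiv 1 \pmod{223}$.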
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4439033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A problem in graph coloring, inspired by the 4CT Let $G = (V, E) $ be a simple $k$-chromatic graph. A coloring of $G$ here means a proper $k$-coloring of the vertices.
We call a set of vertices $V'\subset V$ fully chromatic if, for any coloring of $G$, all $k$ colors are present in $V'$. If $V'$ is minimal, in the sense that no proper subset of it is fully chromatic, then we call $V'$ critically fully chromatic.
Now I believe that a $k$-chromatic graph can't have a critically fully chromatic subset of size $k+1$. In the case $k=4$, this would simplify the proof of the 4CT by a lot.
Can someone come up with such a critically fully chromatic set of vertices of size $k+1$?
| For an example of this when $k=3$, take the $6$-vertex graph whose vertices are arranged in a $2 \times 3$ grid, and vertices are adjacent if they are in the same row or column. Here are some $3$-colorings of this graph (where A, B, and C are the colors):
A B C A B C
B C A C A B
Up to permuting the colors, these are the only two possible colorings.
In particular, it is impossible to color a $2 \times 2$ subgrid with just $2$ colors, so the vertices of a $2\times 2$ subgrid are a fully chromatic set. However, for every $3$-vertex subset of a $2\times 2$ subgrid, one of the colorings above shows that it can be colored with just $2$ colors. So there is a $(k+1)$-vertex minimal fully-chromatic set.
A straightforward way to get examples for all $k>3$ from here is to add $k-3$ new vertices adjacent to every old vertex and each other, and include them in $V'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4439147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the set of all functions from any set into R closed under addition and multiplication? I am doing a bit of self-study in Linear Algebra and I am having difficulty understanding the following:
*
*If $S$ is a set, then $\textbf{F}^S$ denotes the set of functions from $S$ to $\textbf{F}$.
*For $f, g \in \textbf{F}^S$, the sum $f + g \in \textbf{F}^S$ is the function defined by
$$(f + g)(x) = f(x) + g(x)$$
for all $x \in S$.
*For $\lambda \in \textbf{F}$ and $f \in \textbf{F}^S$, the product $\lambda f \in \textbf{F}^S$ is the function defined by
$$(\lambda f)(x) = \lambda f(x)$$
for all $x \in S$.
Does this mean that, for sums and products defined as above, the set of all functions from any set $S$ to $\mathbb{R}$ is closed under addition and scalar multiplication?
If so, could I use this to quickly prove, for example, that the set of all continuous functions on $[0,1] \subset \mathbb{R}$ is closed under addition and scalar multiplication?
| Yes, it is true that the set of functions from any set $S$ to any field $\mathbb{F}$ has the structure of an $\mathbb{F}$-vector space. The proof of this fact is contained in your question (you don't check the axioms, but they are extremely easy to check).
It does not follow immediately that the set of continuous functions from $[0,1]$ to $\mathbb{R}$ is a $\mathbb{R}$-vector space, though. If $f$ and $g$ are two functions in this space, you know only that $f+g$ and $\lambda f$ are again functions from $[0,1]$ to $\mathbb{R}$. You need then prove that $f+g$ and $\lambda f$ are continuous. You probably saw this proof in a first calculus class.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4439337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
finding a closed formula for $\sum_{k=0}^{n} k{2n \choose k}$ my attempt:
$\sum_{k=0}^{n} k{2n \choose k}=\sum_{k=0}^{2n} k{2n \choose k}-\sum_{k=n+1}^{2n} k{2n \choose k}$
the first term in the right hand side
suppose there are $2n$ people, and we have to choose a committee from them and then choose a president from the chosen committee. If we choose 1 person there is one possibility for choosing the president, if we choose 2 people there are two possibilities for choosing the president, and so on, so the total number of ways is $\sum_{k=0}^{2n} k{2n \choose k}$. We can also count it another way: we can choose 1 person (the president) and then choose any committee from the remaining $2n-1$; the number of ways to do this is ${2n \choose 1} 2^{2n-1}$
the second term in the right hand side
suppose there are $2n$ people, and we have to choose a committee of at least $n+1$ members, and then choose a president from the chosen committee. If we choose $n+1$ people there are $n+1$ possibilities for choosing the president, and so on, so the total number of ways is $\sum_{k=n+1}^{2n} k{2n \choose k}$. We can also count it another way: choose $n+1$ people from the $2n$, choose a president from those $n+1$ people, and then choose any subset of the remaining people to add; that means $(n+1){2n \choose n+1} 2^{2n-(n+1)}=(n+1){2n \choose n+1} 2^{n-1}$
so finally;$\sum_{k=0}^{n} k{2n \choose k}=\sum_{k=0}^{2n} k{2n \choose k}-\sum_{k=n+1}^{2n} k{2n \choose k}$
$={2n \choose 1} 2^{2n-1}-(n+1){2n \choose n+1} 2^{n-1}$
Is my attempt correct?
| Combinatorics is not my strong suit, but I believe the issue lies with your second method for choosing a set of more than $n+1$ from $2n$, where you possibly double count.
You first choose $n+1$ people, then choose a president from this set, then choose additional people to add on. But consider the set $\{a,b,c,d,e,f\}$, choosing $\{a,b,c,f\}$ with $a$ as president and $\{a,b,d,f\}$ with $a$ as president are covered as separate cases in the first two choices. But then they are counted as the same choice if the additional people are chosen as $\{d,e\}$ and $\{c,e\}$ respectively.
Unfortunately, I cannot think of a way to mend your argument to finish the proof. Perhaps someone else can figure something out?
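For what it's worth, a closed form can be reached directly (an addition, independent of the double-counting attempt): using $k\binom{2n}{k}=2n\binom{2n-1}{k-1}$ and the symmetry $\binom{2n-1}{j}=\binom{2n-1}{2n-1-j}$, which pairs $j$ with $2n-1-j$ so that $\sum_{j=0}^{n-1}\binom{2n-1}{j}=2^{2n-2}$, we get
$$\sum_{k=0}^{n} k\binom{2n}{k}=2n\sum_{j=0}^{n-1}\binom{2n-1}{j}=2n\cdot 2^{2n-2}=n\,2^{2n-1}.$$
(Check: $n=2$ gives $0+4+12=16=2\cdot 2^3$.)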
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4439517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Let $u(x,y)=e^{-y} \cos x$. Find all functions $v(x,y)$ such that $f(x+iy) = u(x, y) + iv(x, y)$ is complex differentiable for all $z ∈ \mathbb C$.
Let $u(x,y) = e^{-y} \cos x$. Find all functions $v(x,y)$ such that $f(x+iy) = u(x, y) + iv(x, y)$ is complex differentiable for all $ z ∈ \mathbb C$.
I did this:
$\operatorname{Re}\big(e^{-y}\cos x + iv(x,y)\big) = e^{-y}\cos x$
$u_x = -e^{-y} \sin x = \frac{\partial v}{\partial y}$
$u_y = -e^{-y}\cos x = - \frac{\partial v}{\partial x}$
$\implies v = \int (-e^{-y} \sin x )\, dy = e^{-y} \sin x +f(x) $
$v= \int (e^{-y} \cos x )\, dx = e^{-y} \sin x +g(y) $
hence $v(x,y)= e^{-y} \sin x$
Is my solution correct?
And to add: $f(z) = e^{-y} \cos x + i e^{-y}\sin x = e^{-y}\cdot e^{ix} = e^{ix-y} $
Am I right?
| You're basically correct, with a minor glitch. I'd organize the exercise in a different way. The Cauchy-Riemann equations are
$$
\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}
\qquad
\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}
$$
Since
$$
\frac{\partial u}{\partial x}=-e^{-y}\sin x
\qquad
\frac{\partial u}{\partial y}=-e^{-y}\cos x
$$
we need to solve
$$
\begin{cases}
\dfrac{\partial v}{\partial x}=e^{-y}\cos x \\[6px]
\dfrac{\partial v}{\partial y}=-e^{-y}\sin x
\end{cases}
$$
Integrating the first equation with respect to $x$ yields
$$
v(x,y)=e^{-y}\sin x+\varphi(y)
$$
and differentiating with respect to $y$ yields
$$
-e^{-y}\sin x+\varphi'(y)=-e^{-y}\sin x
$$
so we conclude that $\varphi$ is a (real) constant, say $C$.
Hence
$$
v(x,y)=e^{-y}\sin x+C
$$
Your function is therefore
\begin{align}
f(x+iy)
&=e^{-y}\cos x+ie^{-y}\sin x+iC \\[6px]
&=e^{-y}(\cos x+i\sin x)+iC \\[6px]
&=e^{-y}e^{ix}+iC \\[6px]
&=e^{ix-y}+iC
\end{align}
If $z=x+iy$, then $ix-y=iz$ and the function is
$$
f(z)=e^{iz}+iC
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4439698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\lim_{n\to\infty}\int_1^n\frac{\ln x}{c_n+x\ln x}\,dx$ Let $c_n$ be an unbounded sequence: $c_n\to\infty$ when $n\to\infty$
Find the value of the limit:
$$\lim_{n\to\infty}\int_1^n\frac{\ln x}{c_n+x\ln x}\,dx$$
What I managed to find was the fact that
$$\lim_{n\to\infty}\frac{\ln x}{c_n+x\ln x}=0$$
I was thinking that the limit is going to be $0$, but I still have to prove it. Maybe it is not, and I am wrong; I do not know for sure.
I denoted
$$I(n)=\int_1^n\frac{\ln x}{c_n+x\ln x}\,dx$$
I was thinking of finding a recurrence formula for this integral; however, I was not able to make any significant progress.
What should I do?
| As @Ryszard Szwarc mentioned in the comments, if $\frac{c_n}{n\ln n}\to\infty$ then $\lim_{n→∞} \int _1^n\frac{\ln\:x}{c_n\:+\:x\:\ln x}dx=0$, and the most interesting case is $c_n=n\ln n$.
So, we want to find $\lim_{n→∞} I(n)=\lim_{n→∞}\int _1^n\frac{\ln\:x}{n\ln n\:+\:x\:\ln x}dx$. Making the substitution $x=e^t$
$$I(n)=\int _1^n\frac{\ln\:x}{n\ln n\:+\:x\:\ln x}dx=\int_0^{\ln n}\frac{te^t}{n\ln n+e^t t}dt$$
Making another substitution $t=s\ln n$
$$I(n)=\frac{\ln n}{n}\int_0^1\frac{sn^s}{1+\frac{s}{n^{1-s}}}ds$$
Since $\frac{s}{n^{1-s}}\leqslant1$ for $0\leqslant s\leqslant1$, we can expand the denominator into a geometric series:
$$I(n)=\frac{\ln n}{n}\int_0^1n^ss\big(1-sn^{s-1}+s^2n^{2s-2}-s^3n^{3s-3}+-...\big)ds$$
Integrating by parts, we get terms with different rates of convergence, for example:
$$\int_0^1sn^sds=\frac{1}{\ln^2n}\int_0^{\ln n}se^sds=\frac{1}{\ln^2n}\big(n\ln n-n+1\big)=\frac{n}{\ln n}-\frac{n-1}{\ln^2 n}$$
The same story happens to all the terms; only the first ones give a non-zero limit as $n\to\infty$.
Therefore, keeping only these main terms
$$I(n)=\frac{\ln n}{n}\Big(\frac{1}{\ln^2n}\int_0^{\ln n}e^xxdx-\frac{1}{n\ln^3n}\frac{1}{2^3}\int_0^{2\ln n}e^xx^2dx+\frac{1}{n^2\ln^4n}\frac{1}{3^4}\int_0^{3\ln n}e^xx^3dx-+...\Big)$$
The pattern is clear. Integrating by parts and keeping only the first term of every integral,
$$I(n)=\frac{\ln n}{n}\Big(\frac{1}{\ln^2n}n\ln n-\frac{1}{n\ln^3n}\frac{1}{2^3}2^2n^2\ln^2n+\frac{1}{n^2\ln^4n}\frac{1}{3^4}3^3n^3\ln^3n-+...\Big)+O\Big(\frac{1}{\ln n}\Big)$$
$$\Rightarrow\,\,\boxed{\,\,\lim_{n\to\infty} I(n)=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+-...=\ln 2=0.693147...\,\,}$$
The numeric evaluation of the limit confirms this conclusion:
$$\int _1^n\frac{\ln\:x}{n\ln n\:+\:x\:\ln x}dx\,\Big|_{n=10^{300}}=0.692143...$$
though the rate of convergence, of course, is monstrously low :)
It is not difficult to evaluate in the same way the second and third terms of the asymptotics. I got
$$\boxed{\,\,I(n)=\ln2-\frac{\ln2}{\ln n}+\Big(\ln2-\frac{\pi^2}{12}\Big)\frac{1}{\ln^2n}+O\Big(\frac{1}{\ln^3n}\Big)\,\,}$$
$\mathbf{PS}$
I checked numerically the revised asymptotics. It seems it works!
$$\int _1^n\frac{\ln\:x}{n\ln n\:+\:x\:\ln x}dx\,\Big|_{n=1000}=0.590148...$$
Asymptotics gives
$$I(n)\sim\ln2-\bigg(\frac{\ln2}{\ln n}+\Big(\ln2-\frac{\pi^2}{12}\Big)\frac{1}{\ln^2n}\bigg)\,\bigg|_{n=1000}=0.590093...$$
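As an independent numerical cross-check, here is a sketch assuming SciPy, using the substituted form $I(n)=\ln n\int_0^1\frac{s\,n^{s-1}}{1+s\,n^{s-1}}\,ds$ (equivalent to the integral after the substitution $t=s\ln n$ above):

```python
# Numerical check that I(n) -> ln 2 for c_n = n ln n.
import numpy as np
from scipy.integrate import quad

def I(n):
    L = np.log(n)
    f = lambda s: L * s * n**(s - 1) / (1.0 + s * n**(s - 1))
    return quad(f, 0.0, 1.0, limit=200)[0]

for n in [1e3, 1e6, 1e9, 1e12]:
    print(f"{n:.0e}  {I(n):.6f}")   # n = 1e3 gives ~0.590148, as quoted above
print("ln 2  =", np.log(2))          # 0.693147...
```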
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4440078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Showing that a function of two brownian motions is a martingale. Let $B$ be a standard Brownian motion, let $f$ be a smooth function taking values in $[a,b]$ where $0<a<b<\infty$ and assume that the derivative $f^\prime$ is bounded. For $t\in[0,1]$ and $x\in\mathbb{R}$, let $$U(t,x)=\mathbb{E}\{f(x+B_{1-t})^2\}.$$ Let $M_t$ = $U(t,W_t)$ where $W$ is a Brownian motion independent of $B$.
I am tasked with showing that $M$ is a martingale with respect to the filtration generated by $W$. The question suggests that I should do this by "directly computing conditional expectations and the definition of Brownian motion". I am quite unsure how to do this - particularly the part regarding the filtration generated by W. Any advice would be greatly appreciated! Thank you.
| Let $\{{\mathcal F}_t\}_{t \ge 0}$ denote the Brownian filtration determined by $W$ and suppose that the Brownian motion $B$ is independent of $W$. The definition of conditional expectation implies that
$$M_t={\mathbb E}[f(W_t+B_{1-t})^2 | W_t]= {\mathbb E}[f(W_t+B_{1-t})^2 |{\mathcal F}_t] \,.$$
Therefore, for $s<t$ we have
$$ {\mathbb E}[M_t |{\mathcal F}_s] ={\mathbb E}[f(W_t+B_{1-t})^2 |{\mathcal F}_s] ={\mathbb E}[f(W_s+W_t-W_s+B_{1-t})^2|{\mathcal F}_s] \,.$$
The Gaussian variable $W_t-W_s+B_{1-t}$ is independent of ${\mathcal F}_s$ and has the same $N(0,1-s)$ distribution as $B_{1-s}$, which is also independent of ${\mathcal F}_s$. Thus
$$ {\mathbb E}[M_t |{\mathcal F}_s] = {\mathbb E}[f(W_s+ B_{1-s})^2 |{\mathcal F}_s] =M_s\,.$$
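As a sanity check, here is a Monte Carlo sketch with the arbitrary choice $f(x)=2+\sin x$ (smooth, valued in $[1,3]$, bounded derivative): for a fixed value $W_s=w$ it compares an estimate of ${\mathbb E}[M_t\mid W_s=w]$ with $U(s,w)$.

```python
# Monte Carlo check that E[M_t | W_s = w] = U(s, w) for s < t.
import numpy as np

rng = np.random.default_rng(0)

def U(t, x, n):
    # U(t, x) = E[ f(x + B_{1-t})^2 ],  B_{1-t} ~ N(0, 1 - t)
    B = rng.normal(0.0, np.sqrt(1.0 - t), size=n)
    return np.mean((2.0 + np.sin(x + B)) ** 2)

s, t, w = 0.2, 0.7, 0.5
Wt = w + rng.normal(0.0, np.sqrt(t - s), size=2000)   # W_t given W_s = w
lhs = np.mean([U(t, x, n=20_000) for x in Wt])        # estimate of E[M_t | F_s]
rhs = U(s, w, n=200_000)                              # M_s = U(s, W_s = w)
print(lhs, rhs)   # agree up to Monte Carlo error (~1e-2)
```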
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4440239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Understanding the difference between $\partial f, \mathcal{D}_{f^{*}}, f^{*}, \partial f^{*}$ for a specific set of functions I am trying to calculate the following quantities: $\partial f, \mathcal{D}_{f^{*}}, f^{*}, \partial f^{*}$ for the following functions:
* $f(x) = x^2, \mathcal{D}_{f} = \mathbb{R}$.
* $f(x) = |x|, \mathcal{D}_{f} = \mathbb{R}$.
* $f(x) = e^x, \mathcal{D}_{f} = \mathbb{R}$.
* $f(x) = \dfrac{|x|^{p}}{p}, \mathcal{D}_{f} = \mathbb{R}, p>1$.
* $f(x) = x \log x-x+1, f(0) = 1, \mathcal{D}_{f} = [0, + \infty)$.
I am completely lost: so far in my studies I have not been taught how to "calculate" these quantities, other than via the well-known definitions.
Any pointers would be more than welcome.
| Any book with Convex Analysis in the title will help you. The function in 2 is sublinear so its conjugate must be an indicator function. I find the following formula very useful for computing the subdifferential:
$$\partial f(x) = \mathbb{R} \cap [f'_-(x),f'_+(x)],$$
where $f'_{\pm}$ are the one-sided derivatives from Calculus I (the intersection with $\mathbb{R}$ handles points where a one-sided derivative is $\pm\infty$).
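For instance, for $f(x)=|x|$ at the kink $x=0$ this formula gives
$$\partial f(0)=[f'_-(0),\,f'_+(0)]=[-1,1],\qquad f^*(y)=\sup_{x\in\mathbb{R}}\big(xy-|x|\big)=\begin{cases}0,&|y|\le 1\\+\infty,&|y|>1,\end{cases}$$
consistent with the remark that the conjugate of a sublinear function is an indicator function.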
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4440808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Formula for the number of ways to choose $n$ digits from $k$ possible digits such that the selection contains no repeats under permutation An example to illustrate:
Say I have 3 digits, to be taken from the set $\{0,1,2\}$.
I want to choose from this set of digits, with repetition allowed, to make triplets.
However, no two triplets should be attainable through permutation of the triplet - e.g:
$001$ can be permuted to $010$
$201$ can be permuted to $012$
but $011$ and $001$ are "permutationally independent" (any help on the proper name for this appreciated)
In this case, the set of all possible triplets is $\{000, 001, 011, 111, 002, 022, 222, 012, 112, 122\}$, and there are $10$ possibilities.
How could I have worked out this number without manually enumerating the possibilities? And how could this be done for forming n-tuples from a set of k possible digits?
I have tagged this question with some group theory tags, as am expecting from my own thinking that the answer can probably be quite neatly given in terms of dihedral group actions and orbits, though I may be wrong.
| To elaborate more on angryavian's answer: your general "$n$-tuples from a set of $k$ digits" question is equivalent to counting the non-negative integer solutions of $x_1+\cdots+x_k=n$, where $x_i$ is the number of times digit $i$ is chosen. Each solution $(x_1,\dots,x_k)$ corresponds to one type of tuple together with all of its permutations, so by stars and bars the total number of tuples is $\binom{n+k-1}{k-1}$.
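A quick brute-force confirmation, as a Python sketch using only the standard library (`count_multisets` is just an illustrative name):

```python
# Stars and bars vs. direct enumeration of multisets of size n from k digits.
from itertools import combinations_with_replacement
from math import comb

def count_multisets(n, k):
    # non-negative integer solutions of x_1 + ... + x_k = n
    return comb(n + k - 1, k - 1)

n, k = 3, 3   # triplets from {0, 1, 2}
brute = sum(1 for _ in combinations_with_replacement(range(k), n))
print(count_multisets(n, k), brute)   # 10 10
```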
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4440925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
20 cities, how many ways to build 187 roads Question:
There are 20 cities in the country. The government allocated money for the construction of 187 roads. Each road connects two different cities and does not pass through other cities. There cannot be more than one road between any two cities. How many ways are there to build roads?
Attempt
This is a graph with 20 vertices, 187 edges and no multi-edges. There are $187 \choose 2$ ways to plot 187 edges, which is 17391. However that's not the right answer and I'm not sure where I'm going wrong with my reasoning.
| There are $\binom{20}{2}$ possible roads, we must select $187$ of those to build. The number of ways to do this is $$\binom{ \binom{20}{2}}{187}$$.
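For the record, the resulting number is easy to compute with the standard library:

```python
# Number of ways to choose 187 of the C(20, 2) = 190 possible roads.
from math import comb

possible = comb(20, 2)        # 190 possible roads
print(comb(possible, 187))    # comb(190, 187) = comb(190, 3) = 1125180
```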
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4441151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Polar decomposition of a linear combination of unitary matrices Consider a complex-valued square matrix $M$ of the form
$$M = \frac{1}{2}\left(U_1 + e^{-i\phi}U_2\right),$$
where $U_1$ and $U_2$ are unitary matrices and $\phi$ is a real number. Moreover, consider the polar decomposition of $M$:
$$M = U P,$$
where $U$ is a unitary matrix and where $P$ is a positive semi-definite Hermitian matrix.
I want to know if the matrices $U$ and $P$ can be directly expressed in closed form as a function of $U_1$, $U_2$, and $\phi$. Is that possible? I somehow doubt it, but I thought I'd ask.
We can assume that $M$ is nonsingular for simplicity.
EDIT: Shortly after posting this question, I discovered that there are at least expressions for the polar decomposition of 2 by 2 real matrices and of 2 by 2 complex matrices. Though I'm still wondering if something more can be said for the structure above in any dimension.
| There exists a unitary matrix $V$ such that $V^2 = U_1^\dagger e^{-i\phi} U_2$ and $V+V^\dagger$ is positive semi-definite (it can be computed by diagonalizing $U_1^\dagger e^{-i\phi} U_2$ in an orthonormal basis and taking a square root of each eigenvalue, choosing the square root with non-negative real part).
Then, we have :
$$M = \frac 12 U_1V(V+V^\dagger)$$
so the polar decomposition is :
$$U = U_1 V \text{ and } P= \frac 12 (V+V^\dagger)$$
This does not give a closed-form expression for $U$ and $P$, but I don't think one exists: I believe there is no closed-form expression for $V$, and if $P$ is non-degenerate, then the uniqueness of the polar decomposition means that finding $V$ is equivalent to finding $(U,P)$.
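Here is a numerical sketch of the construction (assumes NumPy and SciPy; for generic random inputs):

```python
# Numerically build V with V^2 = U1^† e^{-iφ} U2 and recover U, P.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR of a complex Gaussian matrix, with column phases fixed
    Q, R = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

n, phi = 4, 0.7
U1, U2 = random_unitary(n), random_unitary(n)
M = 0.5 * (U1 + np.exp(-1j * phi) * U2)

W = U1.conj().T @ U2 * np.exp(-1j * phi)     # unitary; we need V with V^2 = W
T, Q = schur(W, output='complex')            # W normal => T numerically diagonal
half = np.exp(0.5j * np.angle(np.diag(T)))   # angle in (-pi, pi] => Re(half) >= 0
V = Q @ np.diag(half) @ Q.conj().T

U = U1 @ V                                   # unitary factor
P = 0.5 * (V + V.conj().T)                   # PSD Hermitian factor
print(np.allclose(M, U @ P))                 # True up to round-off
```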
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4441310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
finding sets of integers whose subsets do not sum to a prime: how to address the problem? This is sort of a variation of the Hasler sequence, which requires that the sum of a certain number of directly previous integers (in the sequence) isn't prime.
I started to wonder: suppose the rule is that, for a set $S$ containing the sequence of non-prime counting numbers $[1,4,\dots,N]$, is there a limit to $N$ such that the sum of any subset of $S$ containing $K$ elements of $S$ is not prime?
I'm going to write some code to test this for small-ish N and K, but could use some help learning how this would be proved/disproved analytically.
| So for $K=2$ the property already fails at $N=2$, because the sum of the first two non-primes $1, 4$ is $5$, which is prime. Similarly for $K=3$ it fails at $N=3$ because $1+4+6=11$ is also prime. One can compute a few more terms and find that the largest admissible $N$ is at most 1 or 2 bigger than $K$.
For larger values of $K$ one should still expect an $N$ that is only marginally bigger than $K$, by a heuristic argument: given $K$, if $N=K+m$ for some value of $m$, then there are $\binom{N}{m}$ ways to pick a subset of size $K$ (which is the same as choosing the $m$ numbers not picked), and you want every single one of these sums to not be prime. This count grows very quickly; for $K$ large and $m$ small it is on the order of $N^m$.
Getting a good explicit bound for $N$ in terms of $K$ is probably hard. Some version of the pigeon hole principle might give you a rough bound like $N < 2K$ or $N < K + \sqrt{K}$. Using the prime number theorem in the sense that the probability of a number of size $n$ being prime is $1/{\ln(n)}$ might give a heuristic argument for the correct asymptotic size of $N$ in terms of $K$ but making this into a rigorous upper bound is usually very hard.
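Since the question mentions writing code, here is a small brute-force sketch (standard library only) that, for each $K$, finds the largest $N$ for which the property holds; note that if the property fails for some $N$ it fails for all larger $N$, so the search can stop early.

```python
# For each K, largest N such that every K-subset of the first N
# non-primes 1, 4, 6, 8, 9, ... has a non-prime sum.
from itertools import combinations

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def max_N(K, cap=30):
    xs = [m for m in range(1, 200) if not is_prime(m)][:cap]
    best = K - 1   # vacuously true when N < K
    for N in range(K, cap + 1):
        if any(is_prime(sum(c)) for c in combinations(xs[:N], K)):
            break
        best = N
    return best

for K in range(2, 8):
    print(K, max_N(K))
```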
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4441479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The converse of the Heine–Borel property I have been reading about the Heine–Borel theorem, the Heine–Borel property, and their relation to topological vector spaces.
The Heine–Borel theorem states that a subset of Euclidean space $\mathbb{R}^n$ is closed and bounded if and only if it is compact.
A topological vector space is said to have the Heine–Borel property if each closed and bounded set in it is compact.
From this I understand that not every TVS has the Heine–Borel property. However, what about the converse? That is, is every compact subset of a TVS closed and bounded?
| In a topological vector space, one point sets (which are clearly compact) are closed if and only if the space is Hausdorff (see, for example, Rudin's book on functional analysis). In a Hausdorff space, every compact set is closed.
If $C$ is compact and $V$ is an open neighborhood of $0$, which we may take to be balanced (so that $nV \subseteq mV$ whenever $n \le m$), then the family of sets of the form $nV$ is an open cover of $C$: by continuity of scalar multiplication, every $x$ lies in $nV$ for $n$ large enough. If a finite subcover exists, there must exist a cover given by a single such $nV$, so every compact set in a topological vector space is bounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4441852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An inequality about non negative numbers Question:
Let $x_1, x_2, ... , x_m$ be non negative numbers, and let $\sum_{i=1}^{m}x_i=k.$ If $s>1,$ then
$$\sum_{i=1}^{m}x_i^s \geq \frac{k^s}{m^{s-1}}.$$ Equality holds iff $x_i=k/m, i=1 , 2 , ... , m.$
I am trying to prove the given inequality using induction or the AM–GM inequality, but I am not able to.
Any hint or solution is appreciated
Thank you!
|
Jensen's inequality:
If $\lambda_1,$ $\lambda_2,$ $...,$ $\lambda_n$ are non-negative real numbers such that $\lambda_1+\lambda_2+...+\lambda_n=1,$ and $f$ is a convex function, then $f(\lambda_1x_1+\lambda_2x_2+...+\lambda_nx_n) \leq \lambda_1f(x_1)+\lambda_2f(x_2)+...+\lambda_nf(x_n)$ for any $x_1,x_2,...,x_n.$
Let $\phi :$ $x \to x^s$, which is a convex function for $x>0$ and $s>1$. Then,
by Jensen's inequality with $\lambda_1=\lambda_2=...=\lambda_m=1/m$,
$\phi((1/m)x_1+(1/m)x_2+...+(1/m)x_m) \leq (1/m) \phi(x_1)+(1/m) \phi(x_2)+...+(1/m) \phi(x_m)$
$\implies ((1/m)x_1+(1/m)x_2+...+(1/m)x_m)^s \leq (1/m)x_1^s+(1/m)x_2^s+...+(1/m)x_m^s$
$\implies (1/m)^s(x_1+x_2+...+x_m)^s \leq (1/m)(x_1^s+x_2^s+...+x_m^s)$
$\implies (x_1^s+x_2^s+...+x_m^s) \geq \frac{(x_1+x_2+...+x_m)^s}{m^{s-1}} = \frac{k^s}{m^{s-1}},$
since $x_1+x_2+...+x_m=k.$
This is my approach to proving the given inequality using the convex function and Jensen's inequality.
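A quick random sanity check of the inequality (a NumPy sketch, not part of the proof):

```python
# Random check of sum x_i^s >= k^s / m^(s-1) for non-negative x_i, s > 1.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    m = int(rng.integers(2, 10))          # number of terms
    s = 1.0 + 3.0 * rng.random()          # exponent s > 1
    x = rng.random(m)                     # non-negative numbers
    k = x.sum()
    print((x ** s).sum() >= k ** s / m ** (s - 1))   # True every time
```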
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4442284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is the image of $H_1(X,\mathbb{Z})\rightarrow (\Omega^1)^*$ a lattice in $(\Omega^1)^*$? (for Albanese varieties) Albanese varieties are described here, and provide motivation for this question, although are not central to it.
Let $X$ be an algebraic variety and $\Omega^1$ the space of everywhere regular differential $1$-forms on $X$. In the construction of Albanese varieties, one defines the map $\phi: H_1(X,\mathbb{Z})\rightarrow (\Omega^1)^*$ given by $\phi: \gamma\mapsto (\omega\mapsto \int_\gamma \omega)$ for each $\gamma\in H_1(X,\mathbb{Z})$ and $\omega\in \Omega^1$. It is stated that $\phi(H_1(X,\mathbb{Z}))$ is a lattice in $(\Omega^1)^*$. Why is this the case?
Recall that being a lattice implies $\phi(H_1(X,\mathbb{Z}))\otimes_\mathbb{Z} \mathbb{R}\simeq (\Omega^1)^*$.
Note: some sources such as chapter 5 of Beauville note that this is due to Hodge theory, but I don't see how
| It suffices to show that the map $H_1(X,\mathbb R)\to(\Omega_X^1)^*$ is surjective. Using Serre duality we have $(\Omega_X^1)^*=H^0(X,\Omega_X^1)^* = H^n(X,\Omega^{n-1}_X).$ Using Poincare duality we have a map $$\tag{$*$} H_1(X,\mathbb R)\cong H^{2n-1}(X,\mathbb R)\to H^{2n-1}(X,\mathbb C) = H^n(X,\Omega^{n-1}_X)\oplus H^{n-1}(X,\Omega^n_X)$$ where the last equality uses the Hodge decomposition. Now since the Hodge decomposition satisfies symmetry under conjugation, we must have that, inside $H^{2n-1}(X,\mathbb C),$ $$H^{2n-1}(X,\mathbb R)=\{\alpha+\overline\alpha\mid \alpha\in H^n(X,\Omega^{n-1}_X)\}.$$ Under this description the map $H_1(X,\mathbb R)\to (\Omega_X^1)^*$ is just the map $(*)$ composed with projection onto the first factor and is clearly surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4442514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving that $f_n(x)=\frac{(x+1)^n-x^n-1}{x(x+1)}$ is strictly positive for odd integers $n\geq 5$
Let $n\geq 5$ be an odd integer. I'd like to prove that the function $$f_n(x)=\frac{(x+1)^n-x^n-1}{x(x+1)}$$ is strictly positive on $\mathbb{R}$.
(I already figured out that the numerator is divisible by the denominator, so $f_n(x)$ is in fact a polynomial of degree $n-3$.)
I tried to find the minimum by differentiation, but I have difficulty finding the location and the value of the minimum. Another thing I know is that $f_n(x)$ is palindromic, but that doesn't seem to give any inspiration. Does anyone have ideas?
Thanks in advance!
| First, we have
$$f_5(x) = 5(x^2 + x + 1) > 0, \, \forall x \in \mathbb{R}.$$
Second, for $k\ge 2$, let
\begin{align*}
g_k(x) &:= f_{2k+3}(x) - f_{2k+1}(x)\\[5pt]
&= \frac{(x + 1)^{2k + 3} - (x + 1)^{2k + 1} - x^{2k + 3} + x^{2k + 1}}{x(x + 1)}\\
&= \frac{(x + 1)^{2k}[(x + 1)^3 - (x + 1)] - x^{2k}(x^3 - x)}{x(x + 1)}\\
&= \frac{(x + 1)^{2k}x(x + 1)(x + 2) - x^{2k}x(x - 1)(x + 1)}{x(x + 1)}\\
&= (x + 1)^{2k}(x + 2) - x^{2k}(x - 1).
\end{align*}
We can prove that $g_k(x) \ge 0$ for all $x\in \mathbb{R}$.
If $-2 < x < 1$, clearly $g_k(x) \ge 0$.
If $x \le -2$, we have $g_k(x) = (1 - x)(-x)^{2k} - (-x - 2)(-x - 1)^{2k} \ge 0$ since $1 - x > - x - 2 \ge 0$
and $- x > - x - 1 \ge 1$.
If $x \ge 1$, we have $g_k(x) \ge 0$
since $x + 1 > x \ge 1$
and $x + 2 > x - 1 \ge 0$.
We are done.
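A SymPy sanity check (not a substitute for the proof): for each odd $n$ it forms $f_n$ by polynomial division and verifies that $f_n$ has no real roots while $f_n(0)>0$, which together give $f_n>0$ on all of $\mathbb{R}$.

```python
# Form f_n = ((x+1)^n - x^n - 1) / (x(x+1)) and check positivity.
import sympy as sp

x = sp.symbols('x')

def f(n):
    num = sp.expand((x + 1)**n - x**n - 1)
    q, r = sp.div(num, x * (x + 1), x)
    assert r == 0                      # numerator divisible by x(x+1)
    return sp.Poly(q, x)

for n in [5, 7, 9, 11, 13]:
    p = f(n)
    # degree n-3, empty list of real roots, and f_n(0) > 0
    print(n, sp.degree(p), sp.real_roots(p), p.eval(0) > 0)
```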
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4442702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Asymptotic for a binomial sum Is it possible to derive an asymptotic expression as $n \to \infty$ for the following sum:
$$S_n(a,x) = \sum_{k=1}^n \binom{n}{k} \frac{(k!)^2 x^k}{k \cdot (a)_k} $$
$(a)_k$ is the rising factorial (Pochhammer symbol).
Euler-Maclaurin seems way too complicated here, and I'm not sure what else to try.
| Not an answer.
$$S_n(a,x) = \sum_{k=1}^n \binom{n}{k} \frac{(k!)^2 }{k \, (a)_k}x^k$$
$$\frac{\partial S_n(a,x)}{\partial x} = \sum_{k=1}^n \binom{n}{k} \frac{(k!)^2 }{ \, (a)_k}x^{k-1}$$
$$\frac{\partial S_n(1,x)}{\partial x}=\frac{e^{\frac{1}{x}}\, x^n\, \Gamma \left(n+1,\frac{1}{x}\right)-1}{x} \quad \implies \quad S_n(1,x)=\color{red}{\large ???}$$
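The closed form for the derivative can at least be checked numerically; here is an mpmath sketch, where `gammainc(z, a)` is mpmath's upper incomplete gamma $\Gamma(z,a)$:

```python
# Check: sum_{k=1}^n C(n,k) k! x^(k-1) == (e^(1/x) x^n Gamma(n+1, 1/x) - 1)/x
from mpmath import mp, binomial, factorial, gammainc, exp, mpf

mp.dps = 30
n, x = 8, mpf('0.3')
lhs = sum(binomial(n, k) * factorial(k) * x**(k - 1) for k in range(1, n + 1))
rhs = (exp(1/x) * x**n * gammainc(n + 1, 1/x) - 1) / x
print(lhs, rhs)   # identical to working precision
```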
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4442892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Prove that if the sum $S=\sum_{n=1}^{\infty}\frac{1}{n^2}$ then, $1+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\dots=\frac{3}{4}S$ Prove that if the sum $S=\sum_{n=1}^{\infty}\frac{1}{n^2}$ then, $1+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\dots=\frac{3}{4}S$
Attempt:
I did figure out that $1+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\dots = \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$
I checked on Google for $\sum_{n=1}^\infty \frac{1}{n^2}$, and I am not supposed to know that it equals $\frac{\pi^2}{6}$.
I also checked how the sum $\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}$ can be given explicitly,
and the result I found needs Fourier series, which we haven't learned yet.
So, I have no idea how to prove that it equals $\frac{3}{4}S$ without knowing the exact value of the sum.
Thanks.
| Hint: Note that you get to this series by subtracting the even $n$ terms from the original.
Further, note that for any $n$,
$$
\frac{1}{(2n)^2}=\frac{1}{4}\cdot\frac{1}{n^2}.
$$
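Spelling the hint out: the even-indexed terms sum to a quarter of the whole series, so
$$\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}=\sum_{n=1}^{\infty}\frac{1}{n^2}-\sum_{n=1}^{\infty}\frac{1}{(2n)^2}=S-\frac{1}{4}S=\frac{3}{4}S.$$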
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4443051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |