Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
How do you scale along a non canonical direction? Silly question but I am not figuring it out on my own.
In 3D you can scale a point / shape / field along a canonical direction quite easily:
$p = (x_0, x_1, x_2)$ becomes $p = (ax_0, x_1, x_2)$ for example if we wanted to scale along the first axis. This map has the property that distances in any plane parallel to the plane $(x_1, x_2)$ remain unaffected.
Now given an arbitrary vector $N$ I want to induce the same transformation along the direction of $N$.
One possible solution, of course, is to find a rotation that transforms $N$ into one of the canonical directions, scale that direction, then invert the transformation. i.e. $RSR^{-1}$.
However this is inelegant, I am trying to do it in a way that involves only $N$ and requires no change of basis. i.e. I want to scale points along $N$ without changing the coordinate space, and perhaps better, in a coordinate free way.
| The way to construct a matrix that does this is as follows:
Assume $N$ is a unit vector. The component of $P$ parallel to $N$ is $(N\cdot P)\,N = (N^TP)N = NN^TP$.
Thus $-NN^T P + P = (-NN^T + I)P$ is $P$ minus its component parallel to $N$, i.e. the component of $P$ orthogonal to $N$. Thus $(-NN^T + I)P + sNN^TP$ is $P$ scaled by a factor $s$ along the direction $N$, as described in Chris Eagle's answer.
We get:
$$(-NN^T + I)P + sNN^TP = (-NN^T + I + sNN^T)P$$
$$ = (NN^T(-I + sI) + I)P$$
$$ = (NN^T(s - 1) + I)P$$
Thus the matrix $((s - 1)NN^T + I)$ represents a linear scaling by $s$ along the direction $N$, for any $P$, in a coordinate-free way.
Note that this matrix is necessarily symmetric.
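The matrix can be sanity-checked numerically; the sketch below (with an arbitrarily chosen direction and scale factor) verifies that the component along $N$ is scaled by $s$ while the orthogonal component is untouched:

```python
import numpy as np

# Illustrative values: scale by s = 3 along the non-canonical direction (1,1,1).
N = np.array([1.0, 1.0, 1.0])
N /= np.linalg.norm(N)            # the formula assumes N is a unit vector
s = 3.0

M = (s - 1.0) * np.outer(N, N) + np.eye(3)

P = np.array([2.0, -1.0, 0.5])
par = N.dot(P) * N                # component of P parallel to N
perp = P - par                    # component of P orthogonal to N

assert np.allclose(M @ P, s * par + perp)  # N-component scaled, rest fixed
assert np.allclose(M, M.T)                 # the matrix is symmetric
```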
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is PID cyclic? I cannot understand quotient ring is cyclic In the textbook, it says
“If $R$ is a principal ideal domain and $r$ is an element of $R$, then the quotient ring $R/(r)$ is a cyclic $R$-module.”
I don't know why $R/(r)$ is cyclic.
Thank you in advance.
| I am just elaborating on Berci's comment.
Maybe you are confusing two different notions of being cyclic:
*
*A cyclic group is a group $G$ such that there exists $g \in G$ with $G = \langle g \rangle$, i.e., all elements of $G$ are of the form $g^m$ for $m \in \mathbb Z$.
*A cyclic $R$-module (for a commutative unital ring $R$) is an $R$-module $M$ such that there exists an $m \in M$ with $M = \langle m \rangle_R$, i.e., $M$ is generated by $m$ as an $R$-module. By definition, $\langle m \rangle_R = \{rm \ | \ r \in R\}$.
Here, obviously, we are speaking about the notion of a cyclic $R$-module.
Now, as noted in the comments, $R/(r)$ is generated by $1 + (r)$ as an $R$-module. Here, one does not need to use that $R$ is a principal ideal domain (and it's hard to guess why you require this, because you don't provide much context).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove $E[\frac{1}{1+e^{t X}}] = 1/2$ for $X\sim N(0,1)$. I'm trying to evaluate an expectation of the form
$$
E\left[\frac{\exp(t X_1)}{\sum_{i=1}^n \exp(t X_i)}\right],
$$
where each $X_i$ is independent standard normally distributed.
In the specific case $n=2$ this can be rewritten as
$$E\left[\frac{1}{1+e^{\sqrt{2} t X}}\right],$$
which numerically appears to be equal to $1/2$ for all real $t$.
I wonder whether this can be proved, and whether it could shed some light on the original integral.
If I series expand (formally; the resulting series does not converge pointwise) I get the sum
$$E\left[\frac{1}{1+e^{t X}}\right]
=\sum_{k\ge 0}(-1)^k e^{(kt)^2/2},
$$
which Mathematica evaluates as $\frac{1}{2} \left(\vartheta _4\left(0,e^{\frac{t^2}{2}}\right)+1\right)$.
I don't really know what this EllipticTheta[4, 0, E^(t^2/2)] function is, but if it is 0 it at least fits with my conjecture.
| Let $M = E\left[\frac{\exp(t X_1)}{\sum_{i=1}^n \exp(t X_i)}\right]$. Since the $X_j$ are i.i.d., $M = E\left[\frac{\exp(t X_j)}{\sum_{i=1}^n \exp(t X_i)}\right]$ for every $j = 1,\dots,n$. Summing over $j = 1,\dots,n$ gives $nM = E[1] = 1$, so $M = 1/n$.
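A quick Monte Carlo sanity check of $M = 1/n$ (the values of $n$, $t$ and the sample size below are arbitrary):

```python
import numpy as np

# Illustrative parameters; any n >= 1 and real t should give M close to 1/n.
rng = np.random.default_rng(0)
n, t, trials = 4, 1.3, 200_000
X = rng.standard_normal((trials, n))
E = np.exp(t * X)
M = np.mean(E[:, 0] / E.sum(axis=1))
assert abs(M - 1.0 / n) < 0.01
```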
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$z$ is a complex number such that $z^7=1$, where $z\not =1$. Find the value of $z^{100}+z^{-100} + z^{300}+z^{-300} + z^{500}+z^{-500}$ Let $z=e^{i\frac{2\pi}{7}}$
Then the expression, after simplification turns to
$$2[\cos \frac{200\pi}{7} +\cos \frac{600 \pi}{7} +\cos \frac{1000\pi}{7}]$$
How do I solve from here?
| Since $z^7=1$, the expression equals
$$z^2+z^5+z^6+z+z^3+z^4$$
$$=z^6+z^5+z^4+z^3+z^2+z$$ because $z^{7k+1}=z,\ z^{7k+2}=z^2,\dots$. This equals $-1$: since $z^7-1=(z-1)(z^6+\cdots+z+1)=0$ and $z-1\neq 0$, we must have $z^6+\cdots+z+1=0$.
Alternatively, we can use the sum of a geometric progression:
$\frac{a(r^n-1)}{r-1}=\frac{z(z^6-1)}{z-1}=\frac{z^7-z}{z-1}=\frac{1-z}{z-1}=-1$
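Both computations are easy to confirm numerically with a primitive 7th root of unity:

```python
import cmath

z = cmath.exp(2j * cmath.pi / 7)   # z^7 = 1, z != 1
expr = sum(z**k + z**(-k) for k in (100, 300, 500))
assert abs(expr + 1) < 1e-9        # the expression equals -1

# The reduced exponents mod 7 are exactly 1, ..., 6, as in the rewriting above:
assert {100 % 7, -100 % 7, 300 % 7, -300 % 7, 500 % 7, -500 % 7} == {1, 2, 3, 4, 5, 6}
```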
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What change of variables is this? I have come across this integral:
$$\int \frac{f(x+tv)-f(x)}{t}g(x)dx = \int -\frac{g(x)-g(x-tv)}{t}f(x)dx$$
which the author claims is justified by a change of variables, but I cannot see what they did. Would anyone be able to elaborate?
| I think he is breaking the difference apart, doing a change of variables in one term only, and then recombining (below, $\eta$ plays the role of your $g$).
$$\int_{\mathbb R^n}\frac{f(x+tv) - f(x)}{t}\eta(x)\;dx = \int_{\mathbb R^n}\frac{f(x+tv)}{t}\eta(x)\;dx - \int_{\mathbb R^n}\frac{ f(x)}{t}\eta(x)\;dx$$
$$= \int_{\mathbb R^n}\frac{f(x)}{t}\eta(x-tv)\;dx - \int_{\mathbb R^n}\frac{f(x)}{t}\eta(x)\;dx$$
$$= \int_{\mathbb R^n}f(x)\frac{\eta(x-tv)-\eta(x)}{t}\;dx $$
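A numerical sanity check of the identity in one dimension (taking $v=1$, $t=0.3$, and rapidly decaying functions so the shifts cause no boundary contributions; all the choices are illustrative):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, to stay independent of the NumPy version
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

f = lambda x: np.exp(-x**2)
g = lambda x: np.exp(-(x - 0.5)**2)
x = np.linspace(-20.0, 20.0, 400001)
t = 0.3

lhs = trapezoid((f(x + t) - f(x)) / t * g(x), x)
rhs = trapezoid(-(g(x) - g(x - t)) / t * f(x), x)
assert abs(lhs - rhs) < 1e-5
```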
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
First Fundamental Theorem of Calculus question regarding F'(x) The first fundamental theorem is defined:
$F(x) = \int^x_c f(t) dt$ where $c$ is constant.
The corollary:
$\int^b_a f(t) dt = F(b) - F(a)$
My question is: how can the integral over $[c, x]$ in the first theorem be calculated without the corollary? Otherwise, would $c$ be assumed to be $0$?
As an example:
$f(t) = t^2$,
$F(x) = \int^x_c f(t)\, dt = \int^x_c t^2\, dt$
Antiderivative:
$F(x) = \frac {x^3} 3$
However, my understanding is the antiderivative is the area under the curve from $t = 0$.
Appreciate any guidance.
| You are skipping a step. The corollary can be stated:
For a function $F$ with $F'(x)=f(x)$ (such a function is called an antiderivative of $f$),
$$\int_{a}^{b}f(t)dt=F(b)-F(a)$$
Hence, to calculate the definite integral, you need to find an antiderivative $F$; any antiderivative will do, since an additive constant cancels in the difference $F(b)-F(a)$.
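A small numeric illustration of why the choice of $c$ does not matter (the values are arbitrary): $c$ only shifts the antiderivative by a constant, and that constant cancels in $F(b)-F(a)$.

```python
# F(x) = integral of t^2 from c to x, i.e. x^3/3 - c^3/3
F = lambda x, c: x**3 / 3 - c**3 / 3

a, b = 1.0, 2.0
for c in (0.0, -1.0, 5.0):
    # F(b) - F(a) is independent of c and equals (b^3 - a^3)/3
    assert abs((F(b, c) - F(a, c)) - (b**3 - a**3) / 3) < 1e-9
```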
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3945975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Roots of polynomial/exponential function Let $n\in \mathbb{N}$ and $\sigma \in (0,1]$ be fixed.
I am interested in finding the roots of the function
\begin{equation}
f(x) = \big(x^2-n^2-\sigma n^2 x\big)e^{x} - \big(x^2-n^2+\sigma n^2 x\big)e^{-3x};
\end{equation}
on $[0,n^2]$, namely solving $f(x) = 0$ for $x \in [0,n^2]$.
I would be happy with an approximation formula, which clearly indicates the dependence on $n \in \mathbb{N}$.
Clearly one root is $x=0$.
A way to proceed for finding the other root(s?) could of course be to factor the polynomials appearing in $f$; namely,
\begin{equation}
f(x) = \Big(x-\frac{n^2\sigma-n\sqrt{n^2\sigma^2+4}}{2}\Big)\Big(x-\frac{n^2\sigma+n\sqrt{n^2\sigma^2+4}}{2}\Big)e^x - \Big(x-\frac{-n^2\sigma-n\sqrt{n^2\sigma^2+4}}{2}\Big)\Big(x-\frac{-n^2\sigma+n\sqrt{n^2\sigma^2+4}}{2}\Big)e^{-3x}.
\end{equation}
Though, I don't see clearly how one may proceed from here.
| There is a formal analytical solution to the equation.
Rewrite it as
$$e^{-4x}=\frac {x^2-\sigma n^2 x-n^2 } {x^2+\sigma n^2 x-n^2 }=\frac{(x-a)(x-b) }{(x-c)(x-d)}$$ and the solution is given in terms of the generalized Lambert function (have a look at equation $(4)$ in the paper).
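For concrete parameters one can also locate the nonzero root numerically; the sketch below (with the arbitrary choices $n=2$, $\sigma=1/2$) uses plain bisection, and confirms that for these values the $e^{-3x}$ term is negligible at the root, which therefore sits very close to the zero $\frac{n^2\sigma+n\sqrt{n^2\sigma^2+4}}{2}$ of the first quadratic factor:

```python
import math

def f(x, n=2, sigma=0.5):
    return ((x*x - n*n - sigma*n*n*x) * math.exp(x)
            - (x*x - n*n + sigma*n*n*x) * math.exp(-3*x))

lo, hi = 0.1, 4.0            # brackets the nonzero root on (0, n^2]
assert f(lo) < 0 < f(hi)
for _ in range(100):         # bisection
    mid = (lo + hi) / 2
    if f(mid) <= 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2

assert abs(f(root)) < 1e-6
# Compare with the quadratic zero (n^2*sigma + n*sqrt(n^2*sigma^2 + 4))/2 = 1 + sqrt(5):
assert abs(root - (1 + math.sqrt(5))) < 1e-3
```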
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Extremal distribution of random variable that averages to a given value Denote by $\mathcal M$ the set of probability measures over $[0,1]$ that average to $p$. This is a convex set and I am trying to characterize its extreme points. For any $x$, $y$ such that $0\leq x< p< y\leq 1$ define
\begin{align*}
\lambda_x^y = \frac{p-y}{x-y}\cdot\delta_x+\frac{x-p}{x-y}\cdot\delta_y
\end{align*}
where for any measurable $A\subseteq [0,1]$ and $a\in[0,1]$, $\delta_a(A)=\mathbf 1(a\in A)$. Since $\delta_x$ (resp. $\delta_y$) averages to $x$ (resp. $y$) we get that $\lambda_x^y$ averages to $\frac{p-y}{x-y}\cdot x+\frac{x-p}{x-y}\cdot y=p$ and so is in $\mathcal M$.
I am very tempted to say that the set of extreme points of $\mathcal M$ is $\{ \lambda_x^y : 0\leq x< p < y \leq 1 \}\cup \{ \delta_p \}$, without being able to prove it. If I am not mistaken, I am supposed to prove that (ignoring the $\delta_p$ component)
*
*If for some $x$ and $y$ and $a\in[0,1]$, $\lambda_x^y=a \mu+(1-a) \nu$ then $\mu=\nu=\lambda_x^y$.
*For any $\mu\in\mathcal M$ there is a probability measure $\kappa$ over $[0,p]\times[p,1]$ such that $\mu(A)=\int \lambda_x^y(A) d\kappa(x,y)+\mu(\{ p \})\delta_p(A)$ for all measurable $A\subseteq [0,1]$.
For the first point, it is quite clear by positivity of the measures $\mu$ and $\nu$ that their supports are contained in $\{ x,y \}$; then there is only one probability measure with such a support that averages to $p$, and it is $\lambda_x^y$.
The second statement, however, is less clear to me. Maybe one way to proceed is to compute the integral
\begin{align*}
\int \lambda_x^y(A) d\kappa(x,y)&=\int \left( \frac{p-y}{x-y}\cdot\delta_x(A)+\frac{x-p}{x-y}\cdot\delta_y(A) \right) d\kappa(x,y)\\
&=\int_{\left(A\cap [0,p]\right)\times [p,1]}\frac{p-y}{x-y} d\kappa(x,y)+\int_{[0,p]\times\left(A\cap [p,1]\right)}\frac{x-p}{x-y}d\kappa(x,y)
\end{align*}
It now feels easier to separate the cases $A\subseteq [0,p]$ and $A\subseteq [p,1]$, so that one of the two integrals vanishes, but I still don't know how to proceed.
Actually, thinking about how I could use this in my research, it would be much more powerful not to enforce $x\leq p \leq y$ but only that $p$ is in the convex hull of $\{ x,y \}$; then we get the repetition $\lambda_x^y=\lambda_y^x$, but the set of extreme points is the same. This makes $\kappa$ a probability measure over $[0,1]^2$, where we have to be careful to assign probability $0$ to sets $A\times B\subseteq [0,1]^2$ for which the convex hull of $A\cup B$ does not contain $p$.
Here is the last thing I tried :
Denote $a_x^y=\frac{p-y}{x-y}$, for $\mu \in\mathcal M$ and $A$ measurable
\begin{align*}
\mu(A)&=\int_{[0,p)} \delta_x(A) d\mu(x)+\int_{(p,1]} \delta_y(A) d\mu(y)+\mu(\{ p \})\delta_p(A)\\
&=\int_{[0,p)\times (p,1]} \left(\frac{a_x^y}{a_x^y}\delta_x(A) + \frac{1-a_x^y}{1-a_x^y}\delta_y(A) \right)d\mu\otimes\mu(x,y)+\mu(\{ p \})\delta_p(A)\\
\end{align*}
It feels like we can define $\kappa$ as a function of $\mu\otimes \mu$ and $a_x^y$, I cannot finish the argument though.
| Your identification of the extreme points of $\mathcal M$ is correct. A proof of the difficult part can be based on a theorem of R.G. Douglas. See my recent answer to Two term martingales and their extreme points, with a reference to a paper of A.F. Karr containing such a proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Showing that an arithmetic function is multiplicative Let $f$ be an arithmetic function that counts the number of consecutive integers between $1$ and $n$ (inclusive) such that both integers are coprime to $n$. More formally,
$$ f(n) = \sum_{\substack{1 \leq t \leq n \\ (t,n)=1 \\ (t+1,n)=1}}1. $$
An immediate observation is that $f(n)=0$ when $n$ is even. After playing around with this function, I suspect that it is multiplicative. That is, if $(a,b)=1$, then $f(ab)=f(a)f(b)$. However, I am unable to prove this.
Is this function really multiplicative or are there counterexamples?
Also, what happens when we modify $f$ so that it counts the number of consecutive triplets that are all coprime to $n$?
| Let $m$ and $n$ be coprime positive integers. The Chinese remainder theorem gives us a ring isomorphism
$$
g: \mathbb{Z}/mn\mathbb{Z} \to \mathbb{Z}/m\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z} \\
x\pmod{mn} \mapsto (x\pmod m, x\pmod n).
$$
For integer $k$, define $A_k \subset \mathbb{Z}/k\mathbb{Z}$ as
$$
A_k = \{t \in \mathbb{Z}/k\mathbb{Z} \, | \, \gcd(t, k) = 1 \land \gcd(t + 1, k) = 1\}.
$$
Note that $x \in A_k$ if and only if both $x$ and $x + 1$ are units in the ring $\mathbb{Z}/k\mathbb{Z}$ i.e. $x \in A_k$ if and only if there exist $y \in \mathbb{Z}/k\mathbb{Z}$ and $z \in \mathbb{Z}/k\mathbb{Z}$ such that
$$
xy = 1 \pmod k \\
(x + 1)z = 1 \pmod k.
$$
Since $g$ preserves inverses, we see that $t \in A_{mn}$ if and only if $g(t) = (t_1, t_2)$ with $t_1 \in A_m$ and $t_2 \in A_n$. Thus, $g$ restricted to $A_{mn}$ is a bijection between $A_{mn}$ and $A_m \times A_n$. Therefore, $|A_{mn}| = |A_m| |A_n|$.
Multiplicativity of $f$ follows since $f(k) = |A_k|$.
Using the result above and the observation that $f(p^k) = p^{k-1}(p - 2)$ for prime $p$, one can derive a formula for $f(n)$ where $n = p_1^{k_1}p_2^{k_2}\dots p_r^{k_r}$
$$
f(n) = \prod_{i=1}^r p_i^{k_i-1} (p_i - 2) = n \prod_{i=1}^r \left(1 - \frac{2}{p_i}\right)
$$
which is reminiscent of a similar formula for the Euler totient function.
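A brute-force check of both the multiplicativity and the product formula (the parameter choices below are arbitrary):

```python
from math import gcd

def f(n):
    # count t in [1, n] with gcd(t, n) = gcd(t+1, n) = 1
    return sum(1 for t in range(1, n + 1)
               if gcd(t, n) == 1 and gcd(t + 1, n) == 1)

# multiplicativity on coprime pairs
for a, b in [(3, 5), (5, 7), (9, 25), (7, 15)]:
    assert gcd(a, b) == 1 and f(a * b) == f(a) * f(b)

# product formula for n = 3^2 * 5 * 7
n, expected = 315, 1
for p, k in [(3, 2), (5, 1), (7, 1)]:
    expected *= p**(k - 1) * (p - 2)
assert f(n) == expected == 45
```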
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How many "words" of length n is it possible to create from {a,b,c,d} without any "a"s after a "b"? I think this is a hard question and I didn't come up with a solution. Could you please give me a hint or approach?
For instance, "bdca" is not ok, but "aabcd" is ok.
It is somehow similar to this question, but with some differences.
Thanks in advance.
| Hint
$\sum_{k=1}^n [f(k) \times g(k)].$
$f(k)$ is the # of possible sequences of length $k$, where the very first occurrence of the letter $b$ is on the $k$-th letter.
Note that this categorization facilitates mutually exclusive groupings.
$g(k)$ is the # of possible sequences of length $(n-k)$ where the letter $a$ is excluded from this sequence.
Two items need special handling:
What happens if the first $b$ in the sequence is on the last (i.e. $n$-th) letter?
What happens if the sequence does not contain any $b$'s?
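Following the hint's decomposition by the position of the first $b$, one arrives (if I am not mistaken) at the closed form $3^n + n\cdot 3^{n-1}$: either the word contains no $b$ at all, or the first $b$ is preceded by a $b$-free prefix over $\{a,c,d\}$ and followed by an $a$-free suffix over $\{b,c,d\}$. A brute-force check:

```python
from itertools import product

def brute(n):
    # count words over {a,b,c,d} with no 'a' after the first 'b'
    def ok(w):
        return 'b' not in w or 'a' not in w[w.index('b'):]
    return sum(ok(w) for w in product('abcd', repeat=n))

closed = lambda n: 3**n + n * 3**(n - 1)

for n in range(1, 8):
    assert brute(n) == closed(n)   # e.g. n = 2 gives 15 (only "ba" is bad)
```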
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
For the following question show that locus is a circle. Two circles touch one another internally at A, and a variable chord PQ of the outer circle touches the inner circle. Show that the locus of the incentres of the triangle APQ is another circle touching the given circles at A.
First I found the equation of the chord, which is also tangent to the smaller circle. Then I thought a lot about how to continue further, but I really have no idea how to do it. Even if we use the formula for the incentre of a triangle, we would have to find the lengths of the sides of the triangle, which is quite tedious.
Please help me with shorter method.
| Let $H$, $K$ be the points where chords $AP$, $AQ$ intersect the inner circle (see figure). The homothety of centre $A$ carrying the outer circle to the inner circle also carries $P$ to $H$ and $Q$ to $K$, with
$$
{AP\over AH}={AQ\over AK}={R\over r},
$$
where $r$ and $R$ are the radii of inner and outer circle respectively.
Hence $HK$ is parallel to the tangent $PQ$, and the tangency point $T$ divides arc $HK$ into two equal parts. It follows that the inscribed angles $\angle HAT$ and $\angle KAT$ are equal too, and $AT$ is the bisector of $\angle PAQ$, so that the incenter $I$ lies on $AT$.
Moreover, by the angle bisector theorem we have $IT:IA=PT:PA$, that is:
$$
{IT^2\over IA^2}={PT^2\over PA^2}={PH\cdot PA\over PA^2}=
{PH\over PA}={PA-HA\over PA}=1-{HA\over PA}=1-{r\over R}.
$$
Hence $AT/AI$ is a fixed ratio:
$$
{AT\over AI}={AI+IT\over AI}=1+{IT\over AI}=1+\sqrt{1-{r\over R}}
$$
and incenter $I$ is the image of $T$ under the homothety of centre $A$ and ratio given above. Hence its locus is a circle, the image of the inner circle under the same homothety.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Projective norm for Banach spaces The projective norm of tensors from (algebraic) tensor product of Banach spaces $X,Y$ is defined as
$$\|t\|_\wedge = \inf\left\{ \sum\limits_{j=1}^N \|x_j\|\|y_j\| \, : \, t=\sum\limits_{j=1}^N x_j\otimes y_j\right\}.$$
In one place I found slightly different definition:
$$\|t\|_\wedge '=\inf\left\{\left(\sum\limits_{j=1}^N \|x_j\|^2\right)^{1/2}\left(\sum\limits_{j=1}^N \|y_j\|^2\right)^{1/2} \, : \, t=\sum\limits_{j=1}^N x_j\otimes y_j\right\}.$$
Are they indeed equal? It is easy to see one inequality, but why do we have the opposite one?
| The two norms are indeed equal. On the one hand, by the Cauchy-Schwarz inequality we have
$$\sum_{j = 1}^{N} \lVert x_j\rVert \lVert y_j\rVert \leqslant \Biggl(\sum_{j = 1}^{N} \lVert x_j\rVert^2\Biggr)^{1/2} \Biggl(\sum_{j = 1}^{N} \lVert y_j\rVert^2\Biggr)^{1/2}$$
which yields $\lVert t\rVert_{\wedge} \leqslant \lVert t\rVert'_{\wedge}$. I suppose that's the inequality you saw.
On the other hand, consider when we have equality in the Cauchy-Schwarz inequality. A particular case where we have equality is when $\lVert x_j\rVert = \lVert y_j\rVert$ for $1 \leqslant j \leqslant N$.
We can always achieve that by moving a scalar factor from $x_j$ to $y_j$, unless one of $x_j$ and $y_j$ is zero. But excluding all representations
$$t = \sum_{j = 1}^{N} x_j \otimes y_j \tag{1}$$
where at least one of the $x_j$ or $y_j$ is zero doesn't change either norm. For the first, including or excluding such terms doesn't change $\sum \lVert x_j\rVert \lVert y_j\rVert$ at all, for the second, the value for the sum without zero factors is smaller (or equal, if $x_j$ and $y_j$ are both zero for such terms), so the terms we exclude can't make the infimum, i.e. $\lVert t\rVert'_{\wedge}$ smaller.
Thus for a representation $(1)$ where no $x_j$ or $y_j$ vanishes, define
$$c_j = \sqrt{\frac{\lVert x_j\rVert}{\lVert y_j\rVert}}$$
and $x_j' = c_j^{-1}\cdot x_j$, $y_j' = c_j \cdot y_j$. Then we have
$$t = \sum_{j = 1}^{N} x_j' \otimes y_j'$$
and
$$\sum_{j = 1}^{N} \lVert x_j\rVert \lVert y_j\rVert = \sum_{j = 1}^{N} \lVert x_j'\rVert \lVert y_j'\rVert = \Biggl(\sum_{j = 1}^{N} \lVert x_j'\rVert^2\Biggr)^{1/2} \Biggl(\sum_{j = 1}^{N} \lVert y_j'\rVert^2\Biggr)^{1/2}$$
which shows
$$\lVert t\rVert'_{\wedge} \leqslant \lVert t\rVert_{\wedge}\,.$$
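The rebalancing step can be illustrated numerically with scalars in place of Banach-space vectors (so the norms are absolute values; the values below are arbitrary): the products $x_j y_j$, i.e. the elementary tensors, are unchanged, while the Cauchy-Schwarz bound becomes an equality.

```python
import math

xs = [3.0, 0.5, 2.0]
ys = [1.0, 4.0, 0.25]

s1 = sum(x * y for x, y in zip(xs, ys))                  # sum ||x_j|| ||y_j||
cs = (math.sqrt(sum(x*x for x in xs)) * math.sqrt(sum(y*y for y in ys)))
assert s1 <= cs + 1e-12                                  # Cauchy-Schwarz

# rebalance so that ||x_j'|| = ||y_j'||
c = [math.sqrt(x / y) for x, y in zip(xs, ys)]
xs2 = [x / cj for x, cj in zip(xs, c)]
ys2 = [y * cj for y, cj in zip(ys, c)]

# each elementary tensor x_j' (x) y_j' equals x_j (x) y_j (here: products)
assert all(abs(a*b - x*y) < 1e-12 for a, b, x, y in zip(xs2, ys2, xs, ys))
cs2 = (math.sqrt(sum(x*x for x in xs2)) * math.sqrt(sum(y*y for y in ys2)))
assert abs(cs2 - s1) < 1e-12                             # equality achieved
```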
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3946927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
For what a will this series converge? $\sum_{n=1}^{\infty}n^a\left(\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}}\right)$
I tried to use the D'Alembert ratio test and it didn't quite work out. The series inside is telescoping but I don't know if that information will be useful. Can someone help?
| We have
$$
\frac{1}{\sqrt{n}} - \frac{1}{\sqrt{n+1}} = \frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n}\sqrt{n+1}} = \frac{(\sqrt{n+1} - \sqrt{n})(\sqrt{n+1} + \sqrt{n})}{\sqrt{n}\sqrt{n+1}(\sqrt{n+1} + \sqrt{n})} \sim \frac{1}{2n^{3/2}}.
$$
Hence $a_n = n^{a} \Bigl( \frac{1}{\sqrt{n}} - \frac{1}{\sqrt{n+1}} \Bigr) \sim \frac{1}{2n^{3/2-a}}$. The sum $\sum_n a_n$ converges iff $\sum_n \frac{1}{n^{3/2-a}}$ converges. Using the integral test for convergence, we get that $\sum_n \frac{1}{n^{3/2-a}}$ converges iff $3/2-a > 1$.
So, the sum $\sum_n a_n$ is convergent iff $a<\frac12$.
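The asymptotic step is easy to check numerically: $2n^{3/2}\bigl(\frac{1}{\sqrt n}-\frac{1}{\sqrt{n+1}}\bigr)\to 1$.

```python
n = 10**6
diff = 1 / n**0.5 - 1 / (n + 1)**0.5
ratio = diff * 2 * n**1.5
assert abs(ratio - 1) < 1e-5   # the difference behaves like 1/(2 n^{3/2})
```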
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Folland Question 6.4 trouble with estimating norm The question is as follows:
If $1\leq p<r\leq \infty$, prove that $L^p+L^r$ is a Banach space with norm $\lVert f\rVert= \inf\{\lVert g\rVert_p+\lVert h\rVert_r\,|\, f=g+h\in L^p+L^r\}$, and prove that for $p<q<r$, the inclusion map $L^q\to L^p+L^r$ is continuous.
So, I managed to prove $\lVert \cdot\rVert$ really is a norm, and that the result is a Banach space with the given norm, but I'm having trouble with the continuity part. Given $f\in L^q$, I considered the set $A=\{x\in X\, | \, |f(x)|>1\}$, and the functions $f\cdot 1_A\in L^p$ and $f\cdot 1_{A^c}\in L^r$ (I've already shown these inclusions). What I'm having trouble with is estimating the norms appropriately. I found that (assuming $r<\infty$)
\begin{align}
\lVert f\rVert \leq \lVert f\cdot 1_A\rVert_p + \lVert f\cdot 1_{A^c}\rVert_r \leq \lVert f\cdot 1_A \rVert_q^{q/p} + \lVert f\cdot 1_{A^c} \rVert_q^{q/r}
\end{align}
From here, I'm not sure how to get an upper bound of the form $C\lVert f\rVert_q$, for some constant $C$. Note that I've seen this answer, but I'm not sure how the last few estimates arise (particularly, why $|f\cdot 1_A|^p\leq |f\cdot 1_A|^q$ implies $\lVert f\cdot 1_A\rVert_p\leq \lVert f\cdot 1_A\rVert_q$, and likewise for the $r$ term). Any help is appreciated.
| You are already quite far!
Recall that to show continuity of a linear map, you only need to show that it is continuous at $0$.
If you estimate
$\lVert f\cdot 1_A \rVert_q^{q/p} + \lVert f\cdot 1_{A^c} \rVert_q^{q/r}$
by
$\lVert f \rVert_q^{q/p} + \lVert f\rVert_q^{q/r}$,
then we have
$$
\| f \|_{L^p+L^r} \to 0
\quad\text{as}\; \|f\|_q\to0.
$$
Thus, the inclusion is continuous at $0$ and therefore continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\int_a^b\sum_{n=1}^\infty f_n(x) dx=\sum_{n=1}^\infty \int_a^b f_n(x) dx$ $\{f_n\}$ is a sequence of continuous (so also integrable) real functions on $[a,b]$ that is uniformly convergent and uniformly equicontinuous. I want to show $$\int_a^b\sum_{n=1}^\infty \frac{f_n(x)}{{\color{red}{n^2}}} dx=\sum_{n=1}^\infty \frac{1}{{\color{red}{n^2}}}\int_a^b f_n(x) dx,$$
and I cannot figure out how to do this. My idea is to take the limit as $n\to\infty$ and use uniform convergence to the limit, which should somehow allow the sum and integral to be switched, but I cannot make it work. Help would be greatly appreciated.
Edit: In red.
| Let $f(x) = \sum\limits_{n=1}^\infty f_n(x)$. We need $$\int_a^b f(x)dx = \lim\limits_{N \to \infty} \int_a^b \sum\limits_{n=1}^N f_n(x)dx$$ Fix $\epsilon>0$, for $N$ big enough, $|f(x)-\sum\limits_{n=1}^N f_n(x)|\leq\epsilon$. Therefore $$|\int_a^b f(x)dx -\int_a^b \sum\limits_{n=1}^N f_n(x)dx| \leq \int_a^b |f(x) - \sum\limits_{n=1}^N f_n(x)|dx \leq \epsilon (b-a)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Best book for tensor products I'm trying to improve my understanding of tensor of products of modules, my algebra background is one semester of undergrad algebra and one semester of graduate algebra, in which I struggled through the section on tensor products. In my graduate course, we used Dummit and Foote and I generally liked their exposition, but found the tensor product section very confusing, so I'm looking for other sources. I'm not interested in anything specific, i.e. not just limited to vector spaces, or $\mathbb{Z}$ modules, although I don't really care about the non-commutative case. Does anyone have any recommendations?
| You can read about tensor products all over the place.
The wikipedia page (https://en.wikipedia.org/wiki/Tensor_product_of_modules) is reasonably good and complete if you're looking for a free resource.
Personally, I'm partial to Atiyah and Macdonald's treatment in Chapter 2 of their Commutative Algebra. It has the benefit of being quite to the point and there are some useful and pleasant exercises.
You could also try Algebra by Lang.
Keith Conrad's notes: https://kconrad.math.uconn.edu/blurbs/linmultialg/tensorprod.pdf also appear quite complete, and I've heard good things about the writing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show the limit of the sequence is $\sqrt{\alpha}$ Given: $\sqrt{\alpha}<x_1, \alpha >0, x_{n+1}=\frac{1}{2}\left(x_n+\frac{\alpha}{x_n}\right)$. Show $\lim_{n\to \infty}x_n=\sqrt{\alpha}.$
I have already shown that this is a monotonically decreasing sequence. Intuitively, I see that in the end I get $\frac{1}{2}\left(\sqrt{\alpha}+\sqrt{\alpha}\right)$, but I'm not sure if I should keep going using $\lim$ notation or switch to the definition of a convergent sequence.
| Rewrite $$ x_{n+1}=\frac{1}{2}\left(x_n+\frac{\alpha}{x_n}\right)$$
$$ x_{n+1}= x_{n}-\Big[x_n-\frac{1}{2}\left(x_n+\frac{\alpha}{x_n}\right)\Big]$$ that is to say
$$ x_{n+1}= x_{n}-\frac {x_n^2-\alpha}{2x_n}$$ and recognize the Newton iterative scheme for finding the zero of function $f(x)=x^2-\alpha$.
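A quick numerical illustration of the scheme (with the arbitrary choices $\alpha=2$, $x_1=5$): the iterates decrease monotonically to $\sqrt{\alpha}$.

```python
alpha, x = 2.0, 5.0             # x_1 > sqrt(alpha)
seq = [x]
for _ in range(20):
    x = 0.5 * (x + alpha / x)   # Newton step for f(x) = x^2 - alpha
    seq.append(x)

# monotone decreasing (up to floating-point rounding) and convergent
assert all(a >= b - 1e-12 for a, b in zip(seq, seq[1:]))
assert abs(seq[-1] - alpha**0.5) < 1e-12
```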
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Give an example to show that if T is the product topology on X, then $\pi_\beta$ need not be closed (Wayne Patty 2.3.8) This question is from page 85 of Wayne Patty's topology.
Let {$(X_\alpha ,T_\alpha): \alpha \in \Lambda$} be an indexed family of topological spaces, let $X= \prod_{\alpha \in \Lambda}X_\alpha $ , let T be a topology on X, and for each $\beta \in \Lambda$, let $\pi_{\beta} : X \to X_{\beta}$ be the projective map. Then, Give an example to show that if T is the product topology on X, then $\pi_\beta$ need not be closed.
I am unable to construct such an example, so I am requesting your help.
| The projection onto the first coordinate on $\mathbb R^{2}$ is not a closed map: $\{(a,b):ab=1\}$ is closed but its projection is not. The projection is $\mathbb R \setminus \{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3947951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a closed-form expression for $\prod_{n=1}^{\infty}(1-\frac{x}{n^3})$? I would like to ask if for $|x|<1$, we can express the product $\prod_{n=1}^{\infty}(1-\frac{x}{n^3})$ as a function $f(x)$. I tried to use Weierstrass factorization theorem, but without much success.
I would really appreciate a reference or a solution.
| Comment:
A bound on this product can be found using the Weierstrass inequality:
If $a_1, a_2, a_3, \ldots,a_n$ are positive real numbers less than unity, and:
$S_n=(a_1+a_2+a_3+ \cdots+a_n)<1$
then:
$1-S_n<(1-a_1)(1-a_2)(1-a_3) \cdots (1-a_n)<\frac 1 {1+S_n}$
where we can let (for $0 < x < 1$):
$a_n=\frac x {n^3}$
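For $0<x<1$ the two bounds can be checked on a truncated product (the value of $x$ and the truncation length are illustrative):

```python
x, N = 0.5, 2000
a = [x / n**3 for n in range(1, N + 1)]
S = sum(a)                    # approximately x * zeta(3), which is < 1
prod = 1.0
for t in a:
    prod *= 1 - t

assert S < 1
assert 1 - S < prod < 1 / (1 + S)   # Weierstrass inequality
```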
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3948129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
What is the chance that an infinite series of random numbers will, at some point, repeat all previously generated numbers in that order? The main reason of this post is to provide a writeup of what I think is a correct solution to this problem from the r/math subreddit. I'll quote the poster to phrase the problem:
What is the chance that an infinite series of random numbers will, at some point, repeat all previously generated numbers in that order?
While there was some confusion in the comments, I believe what the poster means is something like the sequences below:
$$7,7,...$$
$$3,2,3,2,...$$
$$7,8,6,7,8,6,...$$
Where the dots afterward are arbitrary as the condition has already been fulfilled. If we start writing out a sequence of digits in this way, where the probability of selecting any digit is equal, what is the probability that something like this will happen?
I'll post my solution below. Feel free to comment or correct me.
| EDIT: This answer is incorrect, but I'll leave it up as it was my original approach.
First, some definitions. Let the digits of the sequence be defined as the list $a_0,a_1,a_2,...$ Let the blocks of digits be defined as $b_0,b_1,b_2,...$ What I mean by this is more easily illustrated by looking at the decimal expansion of $e$:
$$ \underbrace{2.7}_{b_0}\underbrace{18}_{b_1}\underbrace{2818}_{b_2}\underbrace{28459045}_{b_3}\underbrace{2353602874713526}_{b_4}...$$
$b_n$ is the $n$th block of $2^n$ numbers, save for the first one. I will also define $c_n$ as the $n$th partial sequence; e.g., for $\pi$,
$$c_5=3,1,4,1,5$$
With that out of the way, the solution is rather simple. Let the probability of such a sequence occurring be $p$. Then we can write
$$p=\Pr(a_1=a_0)+\Pr(b_1=c_2)\Pr(a_1\neq a_0)+\Pr(b_2=c_4)\Pr(b_1\neq c_2)+\Pr(b_3=c_8)\Pr(b_2\neq c_4)+...$$
$$p=0.1+0.01\cdot0.9+\sum_{n=2}^\infty \Pr(b_n=c_{2^n})\Pr(b_{n-1}\neq c_{2^{n-1}})$$
$$p=0.109+\sum_{n=2}^\infty \left(\frac{1}{10}\right)^{2^n}\left(\frac{9}{10}\right)^{2^{n-1}}$$
$$p\approx 0.1090810065610000431...$$
Perhaps someone can find a closed form for the above sum.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3948306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Derivation of a log of an integral I've come across the following derivative
$$
\frac{\partial}{\partial \alpha} \log_2 \left(\int p(x)^\alpha \, \mathrm{d}x \right)
$$
but cannot figure out what tricks are required to evaluate it nicely. Definitely some chain rule, but the log of the integral looks scary.
($p(x)$ is probability density; an alternative version with probability mass function, with sum instead of integral, was easy)
| $$\frac{\partial \log _2\left(\int p(x)^{\alpha } \, dx\right)}{\partial \alpha }=\frac{\partial \frac{\log \left(\int p(x)^{\alpha } \, dx\right)}{\log 2}}{\partial \alpha }=\frac{1}{\log 2}\frac{\partial \log \left(\int p(x)^{\alpha } \, dx\right)}{\partial \alpha }=$$
$$=\frac{1}{\log 2}\frac{\int p(x)^{\alpha } \log (p(x)) \, dx}{\int p(x)^{\alpha } \, dx}$$
Example
$$\log _2\left(\int x^{\alpha } \, dx\right)=\frac{\log \left(\frac{x^{\alpha +1}}{\alpha +1}\right)}{\log 2}+C$$
$$\frac{\partial}{\partial\alpha}\log _2\left(\int x^{\alpha } \, dx\right)=\frac{\partial}{\partial\alpha}\frac{\log \left(\frac{x^{\alpha +1}}{\alpha +1}\right)}{\log 2}=\frac{1}{\log 2}\frac{\partial}{\partial\alpha}\left((\alpha+1)\log x-\log(\alpha+1)\right)=\frac{1}{\log 2}\left(\log x-\frac{1}{\alpha+1}\right)$$
Using the formula above
$$\frac{\partial}{\partial\alpha}\log _2\left(\int x^{\alpha } \, dx\right)=\frac{1}{\log 2}\frac{\int \left(x^{\alpha}\log x\right)\,dx}{\int x^{\alpha}\,dx}=$$
$$=\frac{1}{\log 2}\frac{\frac{x^{\alpha +1} ((\alpha +1) \log x-1)}{(\alpha +1)^2}}{\frac{x^{\alpha +1}}{\alpha +1}}=\frac{1}{\log 2}\frac{x^{\alpha +1} ((\alpha +1) \log x-1)}{(\alpha +1)^2}\cdot \frac{\alpha +1}{x^{\alpha +1}}=\frac{1}{\log 2}\frac{(\alpha +1) \log (x)-1}{\alpha +1}$$
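Not part of the original answer — a quick numerical cross-check of the formula (simple Simpson quadrature and a central difference; helper names are mine), taking $p(x)=x$ on $[1,2]$ as in the example (not a probability density, but the differentiation identity does not need that):

```python
import math

def simpson(g, a, b, n=200):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

def F(alpha):
    # log2 of the integral of p(x)^alpha, here with p(x) = x over [1, 2]
    return math.log2(simpson(lambda x: x ** alpha, 1, 2))

alpha, eps = 1.5, 1e-5
numeric = (F(alpha + eps) - F(alpha - eps)) / (2 * eps)   # central difference
formula = (simpson(lambda x: x ** alpha * math.log(x), 1, 2)
           / simpson(lambda x: x ** alpha, 1, 2)) / math.log(2)
print(numeric, formula)   # the two values agree to many digits
```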
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3948452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$xy''-(2x^2+1)y'=x^5e^{x^2}$ $xy''-(2x^2+1)y'=x^5e^{x^2}$
Hi there,
I need help with this second-order ODE.
I've tried (for the homogeneous part) to substitute $\frac{dy}{dx} =u$ to get $ u'-(2x+\frac{1}{x})u=0$,
and solving this gives the first nontrivial solution
$y_1=C_1 e^{x^2}.$
However, with $y_2$ and the particular solution I am stuck.
Any help is appreciated.
| $$y'(x)=u(x)$$
$$x u'(x)-\left(2 x^2+1\right) u(x)=0$$
$$u(x)=c_1 e^{x^2} x$$
$$y(x)=\int c_1 e^{x^2} x\,dx=\frac{c_1 e^{x^2}}{2}+c_2$$
To solve the non homogeneous equation $$xy''-(2x^2+1)y'=x^5e^{x^2}$$
we guess the particular solution as $y_1(x)=(a x^4+b x^3+c x^2+d x+e)e^{x^2}$
$$y_1'(x)=2 e^{x^2} x \left(a x^4+b x^3+c x^2+d x+e\right)+e^{x^2} \left(4 a x^3+3 b x^2+2 c x+d\right)=\\=e^{x^2} \left(2 a x^5+4 a x^3+2 b x^4+3 b x^2+2 c x^3+2 c x+2 d x^2+d+2 e x\right)$$
$$y_1''(x)=2 e^{x^2} \left(2 a x^6+9 a x^4+6 a x^2+2 b x^5+7 b x^3+3 b x+2 c x^4+5 c x^2+c+2 d x^3+3 d x+2 e x^2+e\right)$$
So plugging this into the equation and cancelling the common factor $e^{x^2}$ we get
$$8 a x^5+6 b x^4+ x^3 (8 a+4 c)+x^2 (3 b+2 d)-d\equiv x^5$$
Matching coefficients gives $a=\frac18,\;b=d=0,\;c=-\frac{1}{4}$ (and we may take $e=0$), so the particular solution is
$$y_1=\frac{1}{8} e^{x^2} x^2 \left(x^2-2\right)$$
The general solution of the DE is
$$y=\frac{c_1 e^{x^2}}{2}+c_2+\frac{1}{8} e^{x^2} x^2 \left(x^2-2\right)$$
$$y=\frac{1}{8} e^{x^2} \left(x^4-2 x^2+4 c_1\right)+c_2$$
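A finite-difference sanity check of the particular solution (step size and sample points are my choice):

```python
import math

def y1(x):
    # the particular solution found above: (1/8) e^{x^2} x^2 (x^2 - 2)
    return math.exp(x * x) * x * x * (x * x - 2) / 8

def residual(x, h=1e-4):
    # central differences for y1' and y1'', plugged into x y'' - (2x^2+1) y' - x^5 e^{x^2}
    d1 = (y1(x + h) - y1(x - h)) / (2 * h)
    d2 = (y1(x + h) - 2 * y1(x) + y1(x - h)) / (h * h)
    return x * d2 - (2 * x * x + 1) * d1 - x ** 5 * math.exp(x * x)

for x in (0.5, 1.0, 1.7):
    print(x, residual(x))   # residuals near 0
```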
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3948835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Graph Theory - Can two vertices have two distinct edges? My question is simple:
Can two vertices in a graph have two distinct edges connecting them? Why or why not?
| I put this answer up only because OP asked for it in a comment, and also to present the other two terms I've heard of besides the widely used "simple graph".
In a graph, two or more edges connecting two distinct vertices are called parallel edges,
and an edge connecting only one vertex to itself is called a loop.
According to some sources: a graph allowing no loop or parallel edges is a "simple graph", a graph allowing parallel edges but no loops is a "multigraph", and if both parallel edges and loops are allowed the graph is a "pseudograph". [It's not clear to me whether a multigraph must have at least one case of a parallel edge, or whether a pseudograph which happens not to have loops would qualify as a multigraph.]
I also don't know whether there is general agreement about the definitions of multigraph and pseudograph. (Nor do I think it matters much provided a text/article makes the definitions used clear.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3948960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Is there a notation for repeatedly applying a function? There is notation for applying both multiplication and addition a set amount of times (can be infinite): $\sum$, $\prod$ but there is no notation (to my knowledge) of repeating a function. Whenever I need this I use this notation: $R_{n}^{k} f(a) $ where $k$ is the amount of times it is repeated, $a$ is where the function is repeated, and $n$ is the input. For example: $$R_{n=\sqrt{2}}^{\infty} n^{a}=2$$
If anyone knows of the proper notation for this it would be greatly appreciated.
| A common notation is $f^k(a)$, so
$$f^0(a)=a, f^1(a)=f(a), f^2(a)=f(f(a)), f^3(a)=f(f(f(a))),\dots.$$
You can see this notation used for $k=-1$ to denote the inverse of a function, e.g. $\sin^{-1}(x):=\arcsin(x)$.
For some functions, this notation is not very good; for example, $\sin^2(x)$ is usually defined as $\sin(x)^2$ instead of $\sin(\sin(x))$, since the former is a more useful function. It's a good idea to define this notation anywhere you might want to use it, and if there could be confusion, use something related like $f^{(k)}(a)$ (which can also be confused with the $k$th derivative of $f$) or $f^{\circ k}(a)$ (with the latter denoting "$f$ composed with itself $k$ times").
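In code, repeated application is just a loop; a minimal sketch (names mine):

```python
import math

def iterate(f, k):
    """Return g with g(a) = f applied k times to a (k >= 0)."""
    def g(a):
        for _ in range(k):
            a = f(a)
        return a
    return g

square = lambda x: x * x
print(iterate(square, 3)(2))   # ((2^2)^2)^2 = 256
# The question's example: repeatedly applying a -> sqrt(2)^a tends to 2
print(iterate(lambda a: math.sqrt(2) ** a, 60)(1.0))   # ≈ 2.0
```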
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3949057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Compare $\ln(\pi)$ and $\pi-2$ without calculator Unlike the famous question of comparing $e^\pi$ and $\pi^e$, which I solved almost instantly, I am stuck with this problem. My thought was the following.
Since exponential function is order-preserving, we exponentiate both terms and get $\pi$ and $e^{\pi-2}$. Then we study the function $f(x) = e^{x-2} - x$ or the function $g(x) = \frac{e^{x-2}}{x}$, and compare them with zero and one respectively. I tried both. But both involve solving the equation
$$e^{x-2} = x.$$
I tried Lagrange error terms and have
$$f(x) = -1 + \frac{(x-2)^2}{2!} + R_2(x-2),$$
where
$$\frac{(x-2)^3}{3!} \le R_2(x-2) \le \frac{e^{x-2}}{3!} (x-2)^3.$$
It is easy to see that the equation has a root between $3$ and $2 + \sqrt2$. But I don't know how close it is to $\pi$. It is easy to provide lower bounds, since we can plug in some values and calculate to show that $f(x) > 0$ for such values. But for the upper bound, it is hard to calculate by hand since it has the $e^{x-2}$ factor. At my best attempt by hand, I showed that $f(3.15) > 0$. All that entails is that for all $x \ge 3.15$, $e^{x-2}$ is greater than $x$. But it tells nothing about the other side.
Then I looked at the calculator and find that $e^{\pi-2} < \pi$.
I also tried Newton-Raphson iteration, but it involves a lot of exponentiation which is hard to calculate by hand and also involves approximation by themselves. And I don't know how fast and close the iteration converges to the true root of the equation.
Any other hint for comparing these two number purely by hand?
| By hand I took $e^{\pi -2}<e\cdot e^{0.1416}<(2.7183)e^{0.1416}.$ Using $B_1=0.1416=0.1+0.04(1+0.04)$ for manual calculation, I computed, to $5$ decimal places, an upper bound $B_2$ for $(B_1)^2/2$ and an upper bound $B_3$ for $B_1B_2/3 $ and an upper bound $B_4$ for $B_1B_3/4,$ etc., until I was sure that the sum of the remaining terms was less than $0.00005,$ to obtain an upper bound $B$ to $4$ decimal places for $e^{0.1416}.$ Then I multiplied $B\times 2.7183$ and got less than $\pi.$
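The hand computation can be replayed in code; the tail bound below is my own (for $0<x<1$, the omitted tail of the series for $e^x$ is at most the first omitted term times $\frac{1}{1-x}$):

```python
from math import factorial, pi, exp

x, N = 0.1416, 8
partial = sum(x ** k / factorial(k) for k in range(N))
tail = x ** N / factorial(N) / (1 - x)   # upper bound on the omitted tail
upper = 2.7183 * (partial + tail)        # upper bound for e * e^0.1416 > e^(pi-2)
print(upper, pi)                         # upper < pi, hence ln(pi) > pi - 2
```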
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3949524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Solving the double integral with $|y-2x|\leq 0.1$ as boundary Question:
Solve the integral of $x$ with boundaries $|y-2x|\leq 0.1,
0 \leq x \leq 1,
0 \leq y \leq 1 $
What gave me trouble is the absolute value, I tried graphing it online and got a noncontinuous line, how can I proceed?
| Consider $y \geq 2x$ and $2x \geq y$ separately.
Please see the sketch. You have to integrate over the area shaded in black.
Integrate over $dx$ first and then over $dy$ so you have to split it into two integrals otherwise you will have to split it into three.
$D1: 0 \leq x \leq \frac{y + 0.1}{2}, 0 \leq y \leq 0.1$
$D2: \frac{y - 0.1}{2} \leq x \leq \frac{y + 0.1}{2}, 0.1 \leq y \leq 1$
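Since the inner integral of $x$ is a quadratic polynomial in $y$ on each piece, Simpson's rule evaluates both iterated integrals exactly; a sketch (helper names mine):

```python
def inner(a, b):
    # inner integral of x dx from a to b
    return (b * b - a * a) / 2

def simpson(g, a, b, n=100):
    # composite Simpson rule; exact here since g is quadratic in y
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

I1 = simpson(lambda y: inner(0, (y + 0.1) / 2), 0, 0.1)              # region D1
I2 = simpson(lambda y: inner((y - 0.1) / 2, (y + 0.1) / 2), 0.1, 1)  # region D2
print(I1 + I2)   # 601/24000 ≈ 0.0250416...
```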
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3949652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$0^\infty$ and indeterminate forms I have actually two related questions:
1. Should indeterminate forms be able to attain an infinite number of values to be considered indeterminate?
I’m asking because Wikipedia says:
The expression
1/0 is not commonly regarded as an indeterminate form, because there is not an infinite range of values that
f/g could approach.
2. I know that $0^\infty$ is not indeterminate.
But i want to know why this is wrong:
$\lim_{(x,y)\to(0,\infty)}x^y=e^{y\cdot \ln(x)}$ and if we take the path $y=\frac{a}{\ln(x)}$ it becomes $e^a$ and could attain infinite number of values and therefore is indeterminate.
| The issue is that as $x\to 0$ then $y:=a/\ln(x)\to 0$, so what you have written is a justification of $0^0$ being indeterminate.
Whatever path we take, $0^\infty$ will give $0$: this is because if $y\geq 1$ and $0<x<1$ then $0<x^y\leq x$.
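A quick numerical illustration (the constant $a$ is mine): along the path $y=a/\ln x$ the exponent tends to $0^-$ rather than $+\infty$, while $x^y$ sits at $e^a$ — so it is a $0^0$ path:

```python
import math

a = 5.0
for x in (1e-2, 1e-4, 1e-8, 1e-16):
    y = a / math.log(x)                                   # negative, tending to 0
    print(f"x={x:.0e}  y={y:+.4f}  x**y={x ** y:.6f}")    # x**y stays at e^a
print(math.exp(a))   # ≈ 148.413
```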
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3949841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Exercise in FOL Section 4.1, ex. $3.(2)$ of Introduction to mathematical Logic by Richard Hodel.
Express the following statements:
(2) For every natural number $a$ greater than 1, there is a prime $p$ that divides $a$.
Is the following, in language of FOL, correct:
$\forall a\in \Bbb N \left[a>1\Rightarrow\exists p\in \Bbb N\left(\left(\forall b\in \Bbb N\, \lnot(b \vert p)\right)\; \land\; p \vert a\right) \right].$
Edit
Reply to @Bram28:
Writing instead of $(\forall b\in \Bbb N \lnot(b \vert p))$ this:
$(\forall b\in \Bbb N(b>1\; \land b\neq p))$
would work?
| No.
You always have that $p|p$, as well as $1|p$, whether $p$ is prime or not.
Hence, the $\forall b\in \Bbb N \lnot(b \vert p)$ part will always be false.
It'll take a bit more work to express that $p$ is a prime ... but take into consideration what I just said, and try again!
Also, the rest of the expression has the right format, good job!
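One way to see whether a candidate formula captures primality is to brute-force the whole statement over an initial segment of $\mathbb N$ (Python; the range bound is mine). Here primality is spelled out as "$p>1$ and every divisor of $p$ is $1$ or $p$", in line with the hint above:

```python
LIMIT = 100

def divides(b, a):
    # "b divides a"
    return a % b == 0

def is_prime(p):
    # p > 1 and every divisor of p is 1 or p itself
    return p > 1 and all(b == 1 or b == p
                         for b in range(1, LIMIT) if divides(b, p))

# for every a > 1 there is a prime p dividing a
assert all(any(is_prime(p) and divides(p, a) for p in range(2, a + 1))
           for a in range(2, LIMIT))
print("statement verified for all 1 < a <", LIMIT)
```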
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3949976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f: X \rightarrow Y$ is one-to-one, if and only if : $g: P(Y)\rightarrow P(X)$ , $g(Y)= \{ x\in X : f(x)\in Y \} $ is onto My attempt (for first direction), I said that suppose that $f$ is one-to-one and $g$ is not onto,
then there exists $X'\subset X$ such that for all $Y'\subset Y$: $g(Y')\ne X'$
then if $f$ is one to one : $\forall x\in X, \exists! y\in Y : f(x)=y $
hence there must be a subset $Y'\subset Y$ that satisfies $g(Y')=X'$
which contradicts that $g$ is not onto.
but I really think I have made some mistakes since looking at it again I could have proved it in the same way without $f$ being one to one.
Any help or corrections or answers are appreciated.
Thanks in advance!
| The fact that $f$ is one-to-one does not mean that for each $x\in X$ there is a unique $y\in Y$ such that $f(x)=y$: that’s true simply because $f$ is a function. The fact that $f$ is one-to-one means that for each $y\in Y$ there is at most one $x\in X$ such that $f(x)=y$.
You’re right that your argument doesn’t work: it would apply even if $f$ were a constant function, and in that case it’s clear that $g$ is not onto.
Suppose that $f$ is one-to-one, and let $A\subseteq X$; we want to find an $S\in\wp(Y)$ such that $g(S)=A$. The natural candidate is $f[A]$, i.e., $\{f(x):x\in A\}$, so let $S=f[A]$. Certainly $g(S)=f^{-1}\big[f[A]\big]\supseteq A$, so we need only show that $g(S)\subseteq A$. Let $x\in g(S)$; then $f(x)\in S$. Let $y=f(x)$; $y\in S$, so there is an $a\in A$ such that $f(a)=y$. But then $f(x)=f(a)$, and $f$ is one-to-one, so $x=a\in A$. This shows that $g(S)\subseteq A$ and hence that $g(S)=A$, as desired, so $g$ is onto.
Now suppose that $f$ is not one-to-one. Then there are distinct $x_0,x_1\in X$ such that $f(x_0)=f(x_1)=y$, say. Let $A=\{x_0\}$, and let $S\in\wp(Y)$ be such that $x_0\in g(S)$. By definition this means that $f(x_0)\in S$, so $y\in S$. But $f(x_1)=y$, too, so $x_1\in g(S)$ as well. Thus, $g(S)\supseteq\{x_0,x_1\}\supsetneqq A$, and we see that there is no $S\in\wp(Y)$ such that $g(S)=A$. That is, $g$ is not onto. This shows that if $g$ is onto, $f$ must be one-to-one and completes the proof.
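For small finite sets the equivalence can be brute-forced; a sketch (helper names mine) checking every $f:X\to Y$ with $|X|=2$, $|Y|=3$:

```python
from itertools import combinations, product

X = [0, 1]
Y = ['a', 'b', 'c']

def wp(s):
    # power set as a list of sets
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def g(f, S):
    # g(S) = {x in X : f(x) in S}
    return frozenset(x for x in X if f[x] in S)

for values in product(Y, repeat=len(X)):     # every function f : X -> Y
    f = dict(zip(X, values))
    one_to_one = len(set(values)) == len(X)
    onto = len({g(f, S) for S in wp(Y)}) == 2 ** len(X)
    assert one_to_one == onto
print("equivalence verified for |X| = 2, |Y| = 3")
```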
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Bertrand's postulate Strong proof Bertrand's postulate
claim :
for all $ 5< n \in \mathbb{N}$ there are at least two distinct prime numbers in the open interval $ (n,2n)$.
Use this strengthening of Bertrand's postulate in order to prove that:
for all $10<n\in \mathbb{N} $ there exist at least two prime factors in the factorization of $n!$ which appear with exponent $1$.
Example: for $n = 11$ the primes $7,11$ are such primes meeting the conditions.
But for $n=10$ the conditions are not met, because only $7$ appears with exponent $1$ in the factorization of $10!$.
Hint : Consider two cases, $n$ even or $n$ odd
Attempt:
we need to use the claim in order to Implement the solution
Case (1): $n$ even
if $n$ is even we can rewrite $n = 2t$
if we use the claim we can get $(2t,4t)$
Case (2): $n$ odd
if $n$ is odd we can rewrite $n = 2t + 1$
if we use the claim we can get $(2t+1,2(2t+1))$
| You want to find primes that divide $n!$ exactly once, i.e., primes such that $p\le n$ but $2p>n$. On the other hand, Bertrand will give us primes with $t<p<2t$ for suitable $t>5$.
So in our situation, we want $2t\ge n$ (to make $2p>n$) and $2t\le n+1$ (because $p<n+1$ means the same as $p\le n$). This suggests taking $t=\lceil \frac n2\rceil$. This will make $t>5$ as soon as $n>10$.
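The statement is easy to machine-check with Legendre's formula for the exponent of a prime $p$ in $n!$ (helper names and range bound mine):

```python
def exponent_in_factorial(p, n):
    # Legendre's formula: sum of floor(n / p^k) over k >= 1
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

for n in range(11, 300):
    exact_once = [p for p in range(2, n + 1)
                  if is_prime(p) and exponent_in_factorial(p, n) == 1]
    assert len(exact_once) >= 2
print("holds for all 10 < n < 300")
```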
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Vieta's formulae - looking for a plain English explanation of the general-case formulae I am working with a set of equations within which arise functions of the coefficients of a polynomial expressed in terms of the roots. The expressions get rather complicated as the order of the polynomial goes above 3, and so I want to write these coefficients using Vieta's formulae:
\begin{equation}
\sum_{1\leq i_1 < i_2 < \cdots < i_k \leq n}\left(\prod_{j=1}^k \alpha_{i_j}\right)
\end{equation}
However, I am struggling to disentangle this notation, in particular the expression beneath the summation and the notation subscripts for $\alpha$. I know what the result should be for a polynomial of order 1, 2, 3 4 and so on, but I would appreciate a plain-language description of how the notation describes this - and any suggestions as to how the expression could be made more straightforward to interpret?
| This particular formula is for a particular $k$.
The summation is for every strictly increasing sequence of $k$ elements, in which each element ranges from $1$ to $n$ inclusive. The inequality $1\le i_1 < i_2 < \ldots < i_k \le n$ denotes that the sequence $(i_1, i_2, \ldots, i_k)$ should have $k$ elements, the elements are strictly increasing, and the elements are between $1$ and $n$ inclusive.
There are $\binom n k$ such sequences, so there are $\binom nk$ summands. Each summand is a product, which may be written as
$$\prod_{j=1}^k \alpha_{i_j} = \prod_{i\in\{i_1, i_2,\ldots,i_k\}}\alpha_i.$$
The product means that for each unique sequence, the sequence would choose $k$ of the $n$ roots, and take their product.
For example, $k = 4$ and $n=6$, one of the sequences is $(1,2,3,5)$, and the sequence corresponds to the product $\prod_{i\in\{1,2,3,5\}} \alpha_i = \alpha_1\alpha_2\alpha_3\alpha_5$. There are $\binom64 = 15$ such products to sum.
So I may reword the summation and product sign as
$$\sum_{S : k-\text{sequence which is strictly increasing between }1\text{ and }n} \left(\prod_{i\in S}\alpha_i\right)$$
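In code, the strictly increasing index sequences are exactly the $k$-element combinations, so the $k$-th elementary symmetric polynomial is one line (helper name mine):

```python
from itertools import combinations
from math import prod

def elem_sym(roots, k):
    """Sum, over all k-subsets {i_1 < ... < i_k}, of the product of those roots."""
    return sum(prod(c) for c in combinations(roots, k))

roots = [1, 2, 3]   # (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print([elem_sym(roots, k) for k in range(1, 4)])   # [6, 11, 6]
print(len(list(combinations(range(6), 4))))        # C(6,4) = 15 summands
```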
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve $\frac{d^2y}{dx^2}+y=\frac{1}{y^3}$ Solve the equation,
$$\frac{d^2y}{dx^2}+y=\frac{1}{y^3}$$
We have $$y^3\frac{d^2y}{dx^2}+y^4=1$$
I tried using change of dependent variable
Let $z=y^3\frac{dy}{dx}$
Then we get
$$y^3\frac{d^2y}{dx^2}+3y^2\left(\frac{dy}{dx}\right)^2=\frac{dz}{dx}$$
But i could not get an equation completely involving $z,x$
| Another possibility
Switch variables to make
$$-\frac {x''}{[x']^3}+y=\frac 1 {y^3}$$ Reduction of order $p=x'$ gives a separable equation
$$-\frac {p'}{p^3}=\frac 1 {y^3}-y\implies p=\pm\frac{y}{\sqrt{c_1 y^2-y^4-1}}$$ Now
$$x+c_2=\pm \int \frac{y}{\sqrt{c_1 y^2-y^4-1}}\,dy=\pm \frac 12 \int \frac{dt}{\sqrt{c_1 t-t^2-1}}\qquad (t=y^2)$$ and you will arrive at some arctangent.
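A quick consistency check: the separation step above says $(y')^2+y^2+1/y^2=c_1$ is a first integral of the equation. The RK4 sketch below (step size and initial data mine) confirms it is conserved along a numerical solution:

```python
def rhs(y, v):
    # y' = v, v' = 1/y^3 - y
    return v, 1.0 / y ** 3 - y

y, v, h = 2.0, 0.0, 1e-3
c1 = v * v + y * y + 1.0 / (y * y)
for _ in range(5000):              # classical RK4 steps
    k1 = rhs(y, v)
    k2 = rhs(y + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = rhs(y + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = rhs(y + h * k3[0], v + h * k3[1])
    y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
drift = abs(v * v + y * y + 1.0 / (y * y) - c1)
print(drift)   # tiny: the first integral is conserved
```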
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Examples of when Newton's Method will fail? I'm currently working on Newton's Method, and my instructor gave four instances where Newton's Method will fail.
(A) Newton's method converges to another solutions x=b such that f(b)=0 instead of converging to the desired solution x=a.
(B) Newton's method eventually gets into the never ending cycle, bouncing between the same two approximations $x_i$ and $x_{i+1}$.
(C) Eventually, each next approximation $x_{i+1}$ falls further from desired solution $x_a$ than the previous approximation $x_i$ determined by the Newton's method.
(D) Newton's method is not able to find the next approximation $x_{i+1}$ because f'($x_i$)=0 or f'($x_i$) Does Not Exist.
However, there aren't any examples of when this happens. Would anyone be willing to provide examples of these instances?
| Example for Case (A): $$f(x) = \frac{1}{1+x^2} - \frac{1}{2},$$ which has roots at $x \in \{-1,1\}$. The initial choice $x_0 = 2$ converges to the negative root.
Example for Case (B): $$f(x) = \begin{cases}\sqrt{x}, & x \ge 0 \\ -\sqrt{-x}, & x < 0 \end{cases}$$ has the peculiar property that for any initial guess $x_0 \ne 0$, the orbit is trapped in a cycle of period $2$, with $x_k = -x_{k-1}$. This is quite easy to prove and is left as an exercise for the reader.
Example for Case (C): $$f(x) = x^{1/3}.$$ The Newton's method recursion has no fixed point except for the initial guess $x_0 = 0$.
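Case (C) is easy to watch numerically: for $f(x)=x^{1/3}$ the Newton update simplifies to $x-3x=-2x$, so the iterates double in magnitude and alternate in sign (helper names mine):

```python
import math

def cbrt(x):
    # real, sign-preserving cube root
    return math.copysign(abs(x) ** (1 / 3), x)

def newton_step(x):
    # f(x) = x^(1/3), f'(x) = (1/3) |x|^(-2/3); update reduces to -2x
    return x - cbrt(x) / ((1 / 3) * abs(x) ** (-2 / 3))

xs = [0.1]
for _ in range(6):
    xs.append(newton_step(xs[-1]))
print(xs)   # 0.1, -0.2, 0.4, -0.8, ... doubling each step
```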
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Solving $x^{\log_{10}(x^2)} = 100$ $x^{\log(x^2)}=100$
Edit: I am writing $\log$ for $\log_{10}$.
I tried to solve this and can get the solutions $10$ and $10^{-1}$. But how can I get the solutions $-10$ and $-10^{-1}$? And other solutions (including complex), if exist? I'm not sure how to use the complex logarithm in this problem.
My attempt:
Note that $x \neq 0$, because we have in the equation $\log(x^2)$.
Assume $x > 0$. Then:
$x^{\log(x^2)} > 0$
$\log (x^{\log(x^2)}) = 2$
$\log(x^2)\log(x)=2$
$2\log(x)\log(x)=2$
$\log^2(x) = 1$
$\log(x) = \pm 1 \implies x=10$ or $x = 10^{-1}$.
But how about $x<0$?
| This may help :
$$x^{\log_{10} x^2}=100$$
$$\log_{10} x^{\log_{10} x^2}=2$$
$$2\log_{10}^2 x=2$$
$$\log_{10} x=\pm 1$$
$$|x|=10^{\pm 1}$$
$$x=\pm 10,\pm \frac{1}{10}$$
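A numeric confirmation of all four real solutions (the `round` call is mine, to keep the exponent an exact even integer so the negative bases are unproblematic):

```python
import math

for x in (10.0, -10.0, 0.1, -0.1):
    t = round(math.log10(x * x))   # depends only on |x|; equals ±2 here
    print(x, x ** t)               # each ≈ 100 (up to float rounding)
```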
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3950881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
How to use linear approximation to estimate $2\ln(1.1)+3$? Can anyone please share how to solve this?
The question is to use the linear approximation to estimate $2\ln(1.1)+3$
The hint given to me was to find f(x) first and find a and f'(a) but I can't follow the hint and don't know how to find those.
Thanks.
I have not tried anything because I am lost at how to find $f(x)$ first, or $a$.
We have only been doing answers where the original f(x) is given then I find the linear approximation and estimate certain values afterwards. This is the first time the f(x) is not provided to me.
| First, we have the following formula for the linear approximation of f(x) at a point $x=a$:
$f(x) ≈f(a) + f'(a)(x - a)$
The nearest value to $1.1$ that is easy to compute is $x=1$ as we know $\ln(1)=0$
If $f(x) = \ln (x)$ and $a = 1$, we get:
$\ln(x) \approx \ln (1) + (x - 1)=x-1$
If we set $x=1.1$ we can approximate $\ln(1.1)$ by $1.1-1=0.1$, we can multiply it by $2$ now and get $0.2$, then we add $3$ to it as we need $2\ln(1.1)+3$ and get $3.2$ as our approximation.
Actual value is $2\ln(1.1)+3=3.1906...$ so we are pretty close.
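For completeness, comparing the linearization against the true value in Python:

```python
import math

a = 1.0                               # expansion point, where ln(1) = 0 and f'(1) = 1
approx = 2 * (1.1 - a) + 3            # uses ln(x) ≈ x - 1 near a = 1
exact = 2 * math.log(1.1) + 3
print(approx, exact, approx - exact)  # 3.2 vs 3.1906..., error ≈ 0.0094
```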
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Better method to show $-x^3 + 1 = (-x + 1)(x^2 + x + 1)$ Is there a rule, algorithm, or theorem used for this equality below:
$$-x^3 + 1 = (-x + 1)(x^2 + x + 1)$$
I know the RHS can be established using trial and error distributing the first term through the second, or alternatively same with FOIL. I'm curious if there is a better method?
Image below from a CAS.
| Let $\zeta$ be a primitive cube root of unity, then $\zeta^3=1,\zeta^2+\zeta+1=0$ and hence $1-x^3=-(x^3-1)=-(x-1)(x-\zeta)(x-\zeta^2)=(1-x)(x^2-(\zeta+\zeta^2)x+\zeta^3)=(1-x)(x^2+x+1)$
Note: FOIL, which is the distributive property of multiplication and addition, is unavoidable in pretty much any proof. To try to avoid using FOIL in a proof would be like trying to avoid using the fact that $1+1=2\ne 0$ in a field of characteristic $\gt 2$ or $0$.
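Mechanically the identity is a single polynomial multiplication, i.e. a convolution of coefficient lists; a tiny sketch (helper name mine):

```python
def polymul(p, q):
    # convolution of coefficient lists (index = power): (p*q)[k] = sum_{i+j=k} p[i]*q[j]
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# (1 - x) * (1 + x + x^2) -> 1 - x^3
print(polymul([1, -1], [1, 1, 1]))   # [1, 0, 0, -1]
```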
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
Order of Convergence Proof with p>1 and M>0 So I am struggling with a problem on my homework, the problem statement is Assuming $x_{n}\rightarrow x^*$, show that any sequence that satisfies $$\lim\limits_{n \to \infty} \frac{|x_{n+1}-x^*|}{|x_{n}-x^*|^p}= M$$ with $p>1$ for some $M>0$ also satisfies $$\lim\limits_{n \to \infty} \frac{|x_{n+1}-x^*|}{|x_{n}-x^*|}= 0$$.
I looked at this problem for a bit and think it is as simple as seeing that $x_{n+1}$ goes to $x^*$. This would make the numerator of the limit $= 0$, but if I use this methodology, then I would have the bottom of the limit $=0$ as well which would cause an error in my methodology. I would appreciate any hints or nudges in the correct direction. This is my first experience with this so a lighter explanation would help.
| First of all note that $\lim_{n\to+\infty}|x_{n}-x^{*}|=0$ since the sequence is convergent. Then $\forall p>1$ the following holds:
$$\lim_{n\to+\infty}|x_{n}-x^{*}|^{p-1}=0$$
Now multiply and divide by $|x_n-x^{*}|^p$ the argument of the limit you want to show to hold to get
$$\lim_{n\to+\infty}\frac{|x_{n+1}-x^{*}|}{|x_{n}-x^{*}|^p}|x_{n}-x^{*}|^{p-1}=M\cdot 0=0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Assume $f:[0,1] \rightarrow \mathbb{R}$ is continuous and that $f(x) \geq 2 \quad \text{ if } 0 \leq x \lt 1$. Show that $f(1) \geq 2$ This question comes from Advanced Calculus, Fitzpatrick. Section 3.1 exercise 7
Suppose that the function $f:[0,1] \rightarrow \mathbb{R}$ is continuous and that $$f(x) \geq 2 \quad \text{ if } 0 \leq x \lt 1$$
Show that $f(1) \geq 2$
Fitzpatrick uses the sequence definition of continuity; there is a similar question here: "Suppose that the function $f:[0,1]\rightarrow \mathbb{R}$ is continuous and that $f\left(x\right)>2$".
I wanted to do this using epsilon-delta method. Is the following valid? Is there a simpler way to write this proof?
My attempt
First, assume $f(1) < 2$. Then there exists some $\varepsilon > 0$, call it $\varepsilon_0$, such that $2 - f(1) > \varepsilon_0$. Show this creates a contradiction.
Let $x_n = 1 - 1/n$ for all $\mathbb{N}$. Then $x_n$ is a sequence in $[0,1]$ that converges to 1. By definition, this is
$$\forall \delta_1 > 0 \; \exists N \in \mathbb{N}, \forall n \geq N \quad \vert x_n - 1 \vert < \delta_1$$
Since $f$ is continuous on $[0,1]$ it is continuous at $1$, therefore we have
$$\forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x \in [0,1]\quad \vert x - 1 \vert < \delta \rightarrow \vert f(x) - f(1) \vert < \varepsilon$$
If we let $\varepsilon = \varepsilon_0$ and $\delta = \delta_1$. Then we know $\vert x_n - 1 \vert < \delta_1$, therefore we can conclude $\vert f(x_n) - f(1) \vert < \varepsilon_0$.
Furthermore, we know $f(x_n) \geq 2$ for all $n$, so
$$\vert 2 - f(1) \vert \leq \vert f(x_n) - f(1) \vert < \varepsilon_0$$
$$\vert 2 - f(1) \vert < \varepsilon_0$$
$$-\varepsilon_0 < 2 - f(1) < \varepsilon_0$$
However this contradicts our assumption that $2 - f(1) > \varepsilon_0$
Therefore $f(1) \geq 2$.
| You haven't exactly done this using the $\epsilon-\delta$ definition, though. Someone else has already commented on your proof so I'll just present my own argument.
Suppose that $f(1) < 2$. Since $f$ is continuous on $[0,1]$:
$$\lim_{x \to 1^-} f(x) = f(1) < 2$$
So, for each $\epsilon > 0$, there is a $\delta > 0$ such that:
$$1-x < \delta \implies |f(x)-f(1)| < \epsilon$$
Let $\epsilon = 2-f(1)$. Then, for a sufficiently small $\delta > 0$, we have:
$$1-x < \delta \implies f(1)-\epsilon < f(x) < f(1)+\epsilon$$
which implies that $f(x) < 2$ whenever $x \in (1-\delta,1]$. That's a contradiction. $\Box$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What are the extreme points of this polytope? Let $n, k$ be positive integers with $n\geq k$.
Let $P_{n,k}$ be the set of vectors $x$ in $[0,1]^n$ for which
$$ \sum_{i=1}^n x_i = k $$
$P_{n,k}$ is defined by linear equations, so it is a polytope. What are its corners (extreme points)?
My guess is that the corners are the "$k$-binary" vectors - the vectors with exactly $k$ ones and $n-k$ zeros. To prove this, it is sufficient to prove (I think) that every vector in $P_{n,k}$ is a convex combination of $k$-binary vectors. Is this correct?
Note: when $k=1$, $P_{n,1}$ is just the standard simplex in $\mathbb{R}^n$. Does it have a name when $k > 1$?
| We prove that every $x$ in $P_{n,k}$ -- i.e. with $x_i\in[0,1]$ and $x\cdot \vec{1}=k$ -- is in the convex hull of the integer vectors in $P_{n,k}$, by induction on the number of non-integer components of $x$ (of course all integer components have values $0$ or $1$).
If all $x_i$ are integers we are done. Now suppose some component of $x$ is non-integer. Then since the sum of them is integer, there are at least two non-integer components, $x_i$ and $x_j$. First, let's decrease $x_i$ and increase $x_j$ until one of them becomes integer. Call the resulting vector $x_l$. Then let's increase $x_i$ and decrease $x_j$ until one of them becomes integer. Call the resulting vector $x_u$. Then $x$ is a convex combination of $x_l$ and $x_u$ (Proof: Denoting by $d$ the vector with $1$ in $i$th coordinate and $-1$ at $j$the coordinate, $x_u=x+t_ud$ and $x_l=x-dt_l$ for some positive $t_u, t_l$ so $x$ is on the segment between $x_u$ and $x_l$, as wanted). Of course, both $x_l$ and $x_u$ are in $P_{n,k}$.
Now by induction hypothesis, $x_l$ and $x_u$ are both convex combinations of integer points in $P_{n,k}$, and hence so is $x$.
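The inductive step is constructive, so it can be coded; the sketch below (function name and the specific point are mine) performs one split and checks $x=\lambda x_u+(1-\lambda)x_l$:

```python
def split(x):
    """Write x (entries in [0,1], integer sum) as a convex combination of two
    points in the same polytope, each with at least one fewer fractional entry."""
    frac = [m for m, v in enumerate(x) if 0 < v < 1]
    i, j = frac[0], frac[1]            # at least two fractional entries exist
    t_u = min(1 - x[i], x[j])          # raise x_i / lower x_j until one is integral
    t_l = min(x[i], 1 - x[j])          # lower x_i / raise x_j until one is integral
    xu = list(x); xu[i] += t_u; xu[j] -= t_u
    xl = list(x); xl[i] -= t_l; xl[j] += t_l
    lam = t_l / (t_l + t_u)            # then x = lam*xu + (1-lam)*xl
    return lam, xu, xl

x = [0.5, 0.75, 0.75, 0.0]             # a point of P_{4,2}
lam, xu, xl = split(x)
assert all(abs(x[m] - (lam * xu[m] + (1 - lam) * xl[m])) < 1e-12 for m in range(4))
print(lam, xu, xl)
```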
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Question on sums of products of divisors Let us consider some positive integer $n$ with divisors (other than $1$) $d_1, d_2, ..., d_n$. Lately, it came to me the following question:
Does there exist some positive integer $n$ such that a sum of $k$ products of its divisors greater than $1$ equals $n$ itself, where $k$ is not a divisor of $n$?
My "gut feeling" is that it is not possible, but I have not been able to produce any proof of it. If all the products added are equal, it is clear that their sum is not equal to $n$ unless $k$ divides $n$, but I am having trouble when considering distinct products of divisors.
It can be noticed easily that if $n$ is some prime number, a perfect power, or a semiprime, then a sum as the defined is not possible, as either there are no possible products of divisors, or all the products of divisors are equal, or all the possible sums of products have a number of terms $k$ which is a divisor of $n$.
Any hint or a sketch of a proof would be really welcomed. Thanks!
| If I understand your question correctly, the answer is yes.
For $n=54=2\times 3^3$, we can have
$$54=2\times 3+2\times 6+2\times 9+3\times 6$$
where $2,3,6$ and $9$ are its divisors larger than $1$.
There are $k=4$ terms in the sum, and $4$ does not divide $54$.
Added :
There are infinitely many such $n$.
Proof :
For $n=2\cdot 3^p$ where $p$ is a prime number larger than $3$, we can have
$$2\cdot 3^p=2\times (2\times 3^1)+2\times (2\times 3^2)+\cdots +2\times (2\times 3^{p-1})+2\times 3$$
There are $k=p$ terms in the sum, and $p$ does not divide $n=2\cdot 3^p$.
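Both the $n=54$ example and the infinite family can be machine-checked (loop bound mine):

```python
# n = 54: four products of divisors (> 1) of 54 summing to 54, with 4 not dividing 54
terms = [(2, 3), (2, 6), (2, 9), (3, 6)]
assert sum(a * b for a, b in terms) == 54 and 54 % len(terms) != 0
assert all(54 % d == 0 and d > 1 for pair in terms for d in pair)

# family n = 2 * 3^p, p prime > 3: p products of divisors > 1 summing to n, p not dividing n
for p in (5, 7, 11):
    n = 2 * 3 ** p
    products = [2 * (2 * 3 ** i) for i in range(1, p)] + [2 * 3]
    assert len(products) == p and sum(products) == n and n % p != 0
    assert all(n % (2 * 3 ** i) == 0 for i in range(1, p))   # factors are divisors
print("verified for p in (5, 7, 11)")
```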
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3951939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
k-NN average distance bound I need to show the following inequality concerning the k-NN algorithm:
Data $ S = \{X_1, X_2, ..., X_n\} $ is split into $k$ parts:
$$
S_j = \{ X_i : i = (j-1) \lfloor \tfrac{n}{k} \rfloor + 1,\ \ldots\ , j\lfloor\tfrac{n}{k} \rfloor\}, \quad 1 \leq j \leq k
$$
So for example for $ S = \{X_1, X_2, X_3, X_4, X_5, X_6, X_7, X_8, X_9, X_{10}\}$ and $k=2$ we got $S_1 = \{X_1, X_2, X_3, X_4, X_5\}$ and $S_2 = \{X_6, X_7, X_8, X_9, X_{10}\}$
Let $X_{1, j}(X)$ denote the 1st nearest neighbour of $X$ in $S_j$, for $j \leq k$.
I need to show that:
$$
\frac{1}{k}\sum_{j=1}^{k}{||X_{j}(X) - X||} \leq \frac{1}{k}\sum_{j=1}^{k}{||X_{1, j}(X) - X||}
$$
$X_{j}(X)$ stands for the $j$-th nearest neighbour of $X$.
As far as I understand, this inequality states that on average the distance between $X$ and its $j$-th nearest neighbour is bounded by the average of the distances between $X$ and its first nearest neighbour in the $j$-th partition of the data set. Does that make sense? If not, what is the correct intuition? I'd be extremely thankful for any hints on how to approach this proof.
| Your use of $j$ to index both the partitions and the nearest neighbors within each partition is confusing in terms of notation, but the key issue is that you've misstated the core inequality. It should read:
$\frac{1}{k}\sum_{j=1}^{k}{||X_{j}(X) - X||} \geq \frac{1}{k}\sum_{j=1}^{k}{||X_{1, j}(X) - X||}$
That's because, within any given partition, the distance to $X$'s closest neighbor, ${||X_{1, j}(X) - X||}$, must, by definition, be less than or equal to the distance between $X$ and all of its more distant neighbors (whose distances will be equal only if there are multiple points equidistant from $X$):
${||X_{j}(X) - X||} \geq {||X_{1, j}(X) - X||}$
Just as that definitional inequality establishes a lower bound on how close the points within some particular partition lie to $X$, so too does the average across all those minimal, nearest-neighbor distances establish a more generalized bound. For example, if you take the second closest point from each partition and average across each of those distances to $X$, you will necessarily get an average distance that is equal to or greater than the average across the nearest neighbor distances. (Again, those two averages would be equal only if there was a tie between the first and second nearest point within every individual partition). More generally, there is no way to draw a point from each partition such that the average of the selected points' distances to $X$ is less than the average of the distances between $X$ and its nearest neighbor within each partition. That's the point of the inequality and the sense in which the average of the nearest neighbor distances from each partition establishes a lower bound across all the partitions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3952317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An exercise about field extension Let $K/F$ be a field extension, $L$ an intermediate field, $\alpha \in K$ is algebraic over $F$ with minimal polynomial $p(x) \in F[x] $. If $p(x)$ is irreducible in $L[x]$, then $F(\alpha)\cap L=F$.
Could you please help me with this exercise? Thank you in advance.
| Take an element $\beta \in F(\alpha) \cap L$. Then you have that by the Tower Law
$$\deg p(x) = [F(\alpha) : F] = [F(\alpha) : F(\beta)][F(\beta) : F]$$
The key thing here is that $\beta \in L$ so $F(\beta) \subseteq L$. $p$ is irreducible in $L$ so it must be irreducible in $F(\beta)$ and $[F(\alpha) : F(\beta)] = \deg p(x)$.
From the above equality, $[F(\beta) : F] = 1$ so $\beta \in F$. This shows $F(\alpha) \cap L \subseteq F$. But clearly it contains $F$ so equality follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3952486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Do we have this symbol in mathematics?
It is the symbol I used to show that the left-hand side of an inequality isn't greater than the right-hand side.
More precisely, I used this to show that the matrix below is not diagonally dominant:
$$\begin{bmatrix}5&6&7\\
2&-4&2\\
3&2&-5\\
\end{bmatrix}$$
So at the first row I want to emphasize that $|5|$ is not greater than $|6|+|7|$; therefore, instead of writing $|5|<|6|+|7|$ I used the symbol in the image above.
Is it ok to use this symbol in mathematics?
| To me, it is OK. Putting a bar/cross on a pre-existing maths symbol just means the negation of it.
I'm not sure it is standardized, though.
Edit
As shown by @Dietrich Burde in his answer, these symbols are $\LaTeX$ symbols, so it's clearly OK to use them.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3952632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can anyone help me with this modular manipulation in a system of equations? So I have to solve this system but I got stuck while computing and I need help.
m = 928377461;
x0 = 21380413;
x1 = 32564732;
x2 = 803330610;
a*x0+c === x1 mod m
a*x1+c === x2 mod m
I know the x's and m but I can't figure out a reasonable equation to get a and c, because they have to be integers.
I considered:
a = ((x1 - (x1 * x1) - (x2 * x0)) / (x1 - x0)) mod m
c = (((x1 * x1) - (x2 * x0)) / (x1 - x0)) mod m
But they are not giving the correct result (it's taken from a linear congruential generator).
I would be really thankful if anyone could give me the correct formulas, because I don't have a clue how I am supposed to solve it, and everywhere I searched this part was described as basic arithmetic, which seems to be too hard for me.
| Ooookay...
$x_2 - x_1 \equiv ax_1 - ax_0 \equiv a(x_1-x_0)\pmod m$.
If $\gcd(x_1-x_0, m) =1$ then $x_1-x_0$ is invertible. So
$a \equiv (x_2 - x_1)(x_1-x_0)^{-1} \pmod m$
And then $c \equiv x_1-ax_0 \pmod m$.
So...
$x_1-x_0 = 11184319$ now use Euclid's Algorithm to find $\gcd(x_1-x_0, m)$ and solve $(x_1-x_0)^{-1}$.
$m = 928377461$
$928377461= 83*11184319 + 78984$
$11184319 = 141*78984+ 47575$ etc....
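The whole computation the answer describes — the modular inverse via the extended Euclidean algorithm — can be carried out with Python's built-in `pow(x, -1, m)` (available since Python 3.8; it requires $\gcd(x_1-x_0,m)=1$, which holds here). A quick sketch:

```python
m = 928377461
x0, x1, x2 = 21380413, 32564732, 803330610

# a ≡ (x2 − x1)·(x1 − x0)⁻¹ (mod m); pow(x, -1, m) needs gcd(x, m) = 1
a = (x2 - x1) * pow(x1 - x0, -1, m) % m
c = (x1 - a * x0) % m

# the recovered (a, c) must reproduce the observed outputs
assert (a * x0 + c) % m == x1
assert (a * x1 + c) % m == x2
```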
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find curve at the intersection of two level surfaces. Let C be the curve at the intersection of two level surfaces M(x, y, z) = 5 and N(x,y,z) = 0, passing through a point P (1,1,1). Let $M(x, y, z) = 2x^2 + y^2 + 2z^2$ and $N(x, y, z) = xy-z$. Find curve C in the parametric form <x(t), y(t), z(t)>.
I am quite stuck on this one. I could let $z = xy$ and then plugging that into M I'd have $2x^2 + y^2 + 2x^2y^2 = 5$, and I have no idea what would be good ways to parametrize that.
Another way is to let $y=z/x$, in which case I end up with $2x^4 + (2z^2-5)x^2+z^2 = 0$, but again, I'm quite stuck. Any help would be really appreciated!
Or, if it is impossible to find the curve C, the question I'm working on actually asks for the tangent line to curve C at P if x'(0) = 3. If this makes things easier, how exactly is using this tangent more useful?
| In the equation $2x^2 + y^2 + 2 x^2 y^2 = 5$ put $x = r \cos t$, $y = \sqrt{2} \,r \sin t$, get $r = \frac{\sqrt{5} }{\sqrt{1 + \sqrt{1 + 5\sin^2 2 t} } }$, and the parametrization
$$\left( \frac{\sqrt{5} \cos t}{\sqrt{1 + \sqrt{1 + 5\sin^2 2 t} } } , \frac{\sqrt{10}\sin t}{\sqrt{1 + \sqrt{1 + 5\sin^2 2 t} } } \right)$$
Note that $t= \tan^{-1} (\frac{1}{\sqrt{2}})$ maps to $(1,1)$.
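A quick numerical sanity check of this parametrization, lifting back to the 3D curve via $z=xy$ (the question's first substitution):

```python
import math

def point(t):
    # parametrization of the projected curve 2x² + y² + 2x²y² = 5
    s = math.sin(2 * t) ** 2
    denom = math.sqrt(1 + math.sqrt(1 + 5 * s))
    x = math.sqrt(5) * math.cos(t) / denom
    y = math.sqrt(10) * math.sin(t) / denom
    return x, y, x * y          # lift back to 3D via z = xy (from N = 0)

for t in (0.3, 0.9, math.atan(1 / math.sqrt(2))):
    x, y, z = point(t)
    assert abs(2 * x**2 + y**2 + 2 * z**2 - 5) < 1e-9   # lies on M = 5
    assert abs(x * y - z) < 1e-12                       # lies on N = 0

x, y, z = point(math.atan(1 / math.sqrt(2)))            # this t gives P = (1, 1, 1)
assert max(abs(x - 1), abs(y - 1), abs(z - 1)) < 1e-9
```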
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Solving surface integral using divergence theorem Here is the problem:
Use the divergence theorem to compute the flux of $\mathbf F$ over
$S$, the boundary of the solid bounded above by the sphere $x^2+y^2+z^2 = 1$ and below by the cone $z = \sqrt{x^2+y^2}$.
Here is my solving process:
When setting up the integral, the bounds I chose are
*
*$ 0 < r < \frac{\sqrt{2}}{2}$ , upper bound from the intersection of the two surfaces

*$ 0< \theta <2\pi$

*$ 0 < z <\sqrt{1-r^2}$, upper bound comes from the equation of the sphere
Then the $\nabla \cdot F = x^2+y^2+z^2$, and $dV = rdzdrd\theta$.
Here is my confusion:
The result I've got is incorrect, and the only problem I can think of is the lower bound of $z$. It might be $r$ (coming from the equation of the cone) instead of $0$, but why? The bounded solid $E$ starts from the origin, doesn't it? Hope someone will answer my doubt.
Plus, are there any other problems in my solving process? Thanks!
| Your divergence is correct. The problem as you mentioned is with the lower bound of $z$. In cylindrical coordinates, the integral should be set up as -
$\displaystyle \int_0^{2\pi} \int_0^{1/ \sqrt 2}\int_r^{\sqrt{1-r^2}} r(r^2+z^2) \, dz \, dr \, d\theta$
On your question as to why the lower bound should be $r$ and not $0$, first deal with the cone with base as $z = \frac{1}{\sqrt2} \,$ and vertex at the origin, without the spherical cap on top. For a given $r$, can $z$ freely vary between $0$ and $1 / \sqrt2$? That will give you the volume of the cylinder instead of the cone. In a cone, $z$ can vary from the base to the points on its rays, ($r \leq z \leq \frac{1}{\sqrt2}$). Now the upper bound of $z$ changes because we are just adding the spherical cap on top.
In spherical coordinates which is easier in this case,
$\displaystyle \int_0^{2\pi} \int_0^{\pi/4}\int_0^1 r^4 \sin \phi \, dr \, d\phi \, d\theta$
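As a numerical cross-check that the corrected cylindrical bounds agree with the spherical setup, here is a sketch using the closed form of the inner $z$-integral and a midpoint rule in $r$:

```python
import math

# spherical-coordinate value: 2π · (1 − √2/2) · (1/5)
exact = 2 * math.pi * (1 - math.sqrt(2) / 2) / 5

def inner(r):
    # closed form of the inner integral ∫_r^{√(1−r²)} r(r² + z²) dz
    top = math.sqrt(1 - r * r)
    return r**3 * (top - r) + r * (top**3 - r**3) / 3

n = 100_000
R = 1 / math.sqrt(2)
h = R / n
cyl = 2 * math.pi * sum(inner((k + 0.5) * h) for k in range(n)) * h

assert abs(cyl - exact) < 1e-6   # the two coordinate systems agree
```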
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
simple and uniform convergence of $ \sum ( x+\frac{1}{n})^{n+\frac{x}{n}} $ It's my first post in this forum! Hello! (I have very bad English, sorry in advance...) I come with an exercise that I cannot solve. I hope you can help me...
Q1 : simple and uniform convergence of $ \sum ( x+\frac{1}{n})^{n+\frac{x}{n}} $
Here is my guess and what I have done: the convergence holds for $|x|< 1$ (which I didn't prove, just guessed).
I tried a Taylor expansion but it didn't work.
Thank you for reading !
| For the convergence, using the ratio test, let
$$a_n=\left( x+\frac{1}{n}\right)^{n+\frac{x}{n}}\implies \log(a_n)=\left(n+\frac{x}{n}\right) \log \left(x+\frac{1}{n}\right)$$ Now, using Taylor series for large values of $n$
$$\log(a_n)=n \log (x)+\frac{1}{x}+\frac{2x^3 \log (x)-1}{2x^2n}+O\left(\frac{1}{n^2}\right)$$ Apply it twice and continue with Taylor series
$$\log(a_{n+1})-\log(a_n)=\log (x)+\frac{1-2x^3 \log(x)}{2x^2n^2}+O\left(\frac{1}{n^3}\right)$$
$$\frac{a_{n+1} } {a_n}=\exp\big(\log(a_{n+1})-\log(a_n) \big)=x+\frac{1-2 x^3 \log (x)}{2 n^2 x}+O\left(\frac{1}{n^3}\right)$$
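A quick numerical check of this ratio expansion at $x=1/2$, $n=1000$ (the ratio test then gives convergence for $0<x<1$, as guessed in the question):

```python
import math

def log_a(n, x):
    # log aₙ = (n + x/n)·log(x + 1/n)
    return (n + x / n) * math.log(x + 1 / n)

x, n = 0.5, 1000
ratio = math.exp(log_a(n + 1, x) - log_a(n, x))
approx = x + (1 - 2 * x**3 * math.log(x)) / (2 * n**2 * x)
assert abs(ratio - approx) < 1e-7   # agreement up to the O(1/n³) remainder
assert abs(ratio - x) < 1e-3        # so the ratio-test limit is x
```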
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why does $\pi$ appear in the probability of a number being square-free? The probability of a number being square-free (i.e., the number is not divisible by any perfect square greater than $1$) is $6/\pi^2$. I have seen many appearances of $\pi$, and this is also similar to them. For all of them, it can be explained intuitively why $\pi$ appears. But I don't see any connection between square-free numbers and $\pi$. So my question is:
Why $\pi$ appears here?
Note: I don't want rigorous proofs, I want intuitive explanations.
| The most intuitive thing I can think of would be the Riemann Zeta function. You are working with squares, and you will be adding a bunch of squares. In a similar way, if you wanted cube free numbers, you would get Apéry's constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solving $\cos(40+\theta) = 3\sin(50+\theta)$ Question
Solve the following equation for $0≤\theta≤180$
$$
\cos(40+\theta) = 3\sin(50+\theta)
$$
Hint: $\cos(40) = \sin(50)$
Solution
$$
\cos(40+\theta) = 3\sin(50+\theta)
$$
$$
\cos(40)\cos\theta -\sin(40)\sin\theta = 3(\sin(50)\cos\theta +\sin\theta \cos(50))
$$
$$
\sin(50)\cos\theta-\cos(50)\sin\theta = 3(\sin(50)\cos\theta+\sin\theta \cos(50))
$$
$$
2\sin(50)\cos\theta+4\sin\theta \cos(50) = 0
$$
What do I do next?
Observe that $\cos(\theta)=0$ cannot give a solution. Rearranging from your last line as suggested by comments, we get $\tan(-50)=2\tan(\theta)$. Recall that $\tan(-150)=\dfrac1{\sqrt 3}$ and the trigonometric identity $\tan(3\phi)=\dfrac{3\tan(\phi)-\tan^3(\phi)}{1-3\tan^2(\phi)}$. So $\dfrac{6\tan(\theta)-8\tan^3(\theta)}{1-12\tan^2(\theta)}=\dfrac1{\sqrt 3}\implies \dfrac1{\sqrt 3}-6\tan(\theta)-4\sqrt3\tan^2(\theta)+8\tan^3(\theta)=0$. We transform it into a monic depressed cubic in $\tan(\theta)$ as follows:
$\dfrac1{8\sqrt 3}-\dfrac34\tan(\theta)-\dfrac{\sqrt3}2\tan^2(\theta)+\tan^3(\theta)=0$. Let $x+\dfrac{\sqrt3}6=\tan(\theta),$ then the equation may be rewritten as $f(x)=x^3-x-\dfrac{\sqrt3}9=0$.
The discriminant of $f(x)$ is then $\delta^2=-4(-1)^3-27(-\dfrac{\sqrt3}9)^2=3$, and by Cardano formula, $\chi=\sqrt[3]{\dfrac{\sqrt3}{18}+\sqrt{\dfrac{-3}{108}}}+\sqrt[3]{\dfrac{\sqrt3}{18}-\sqrt{\dfrac{-3}{108}}}=\sqrt[3]{\dfrac{\sqrt3}{18}+\dfrac{\sqrt{-1}}6}+\sqrt[3]{\dfrac{\sqrt3}{18}-\dfrac{\sqrt{-1}}6}$ is a root of $f$.
Observe that $f(-\tfrac12)\gt 0,f(1)\lt 0, f(2)\gt 0, $ so by the intermediate value theorem, $f$ has at least two real roots. However since $f(x)\in \Bbb R[x]$, if $\alpha$ is a non real root of $f$, then so is its complex conjugate $\bar{\alpha}\ne\alpha$. Therefore $f$ has three real roots, and from this we see that $\chi=\sqrt[3]{\dfrac{\sqrt3}{18}+\dfrac{\sqrt{-1}}6}+\sqrt[3]{\dfrac{\sqrt3}{18}-\dfrac{\sqrt{-1}}6}\in \Bbb R$.
Write $p=\sqrt[3]{\dfrac{\sqrt3}{18}+\dfrac{\sqrt{-1}}6}, q=\sqrt[3]{\dfrac{\sqrt3}{18}-\dfrac{\sqrt{-1}}6}, $ then $\chi=p+q\implies \chi^3=(p+q)^3=p^3+q^3+3pq(p+q)$. Now, $p^3+q^3=\dfrac {\sqrt 3}9, pq=\sqrt[3]{(\dfrac{\sqrt 3}{18})^2+\dfrac 1{36}}$. Suppose that $p+q<0$, then $|p+q|^3\lt 3pq|p+q|\implies (p+q)^2=p^2+2pq+q^2\lt 3pq \implies p^2+q^2\lt pq$, but by the A.M.-G.M. inequality we have $p^2+q^2\gt 2|pq|=2pq$, which gives $0\lt 2pq\lt p^2+q^2\lt pq$, a contradiction.
Hence $\chi=p+q\gt 0$ and $\tan(\theta)=\chi+\dfrac{\sqrt3}6=\sqrt[3]{\dfrac{\sqrt3}{18}+\dfrac{\sqrt{-1}}6}+\sqrt[3]{\dfrac{\sqrt3}{18}-\dfrac{\sqrt{-1}}6}+\dfrac{\sqrt3}6\gt 0$, so $\theta=\arctan(\chi+\dfrac{\sqrt3}6)\gt 0$ is a root of the cubic. One caveat: the triple-angle step can introduce extraneous roots, so each root of the cubic must still be checked against the original relation $\tan(-50)=2\tan(\theta)$ before being accepted as a solution of the given equation.
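One point worth checking numerically: the triple-angle step turns $\tan(-50^\circ)=2\tan(\theta)$ into a cubic whose three real roots satisfy $2\tan(\theta)\in\{\tan(-50^\circ),\tan(10^\circ),\tan(70^\circ)\}$, and only the first of these solves the original equation. A quick check in the range $0^\circ\le\theta\le 180^\circ$:

```python
import math

deg = math.pi / 180
residual = lambda th: math.cos((40 + th) * deg) - 3 * math.sin((50 + th) * deg)

# the root with tan(θ) = −tan(50°)/2, shifted into 0° ≤ θ ≤ 180°
theta = math.degrees(math.atan(-math.tan(50 * deg) / 2)) % 180
assert 0 <= theta <= 180
assert abs(residual(theta)) < 1e-10        # solves the original equation

# the two other cubic roots are extraneous for the original equation
for phi in (10, 70):
    bad = math.degrees(math.atan(math.tan(phi * deg) / 2)) % 180
    assert abs(residual(bad)) > 0.1
```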
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Help me prove $K(k) = \frac{1}{1+k}K\left(\frac{2\sqrt{k}}{1+k}\right)$ where $K(k) = \int^{\pi/2}_0 \frac{du}{\sqrt{1-(k\sin{u})^2}}$ Hi this problem is posed in Sean M. Stewart's How to Integrate It book pg.143
$$K(k) = \int^{\pi/2}_0 \frac{du}{\sqrt{1-(k\sin{u})^2}}$$ where $ 0 \leq k < 1$
Based on the substitution:
$$ \tan{u} = \frac{\sin{2\theta}}{k+k\cos{2\theta}} =\frac{\tan{\theta}}{k} $$
Show that:
$$K(k) = \frac{1}{1+k}K\left(\frac{2\sqrt{k}}{1+k}\right)$$
$K(k) $ is some function defined in term of definite-integral called complete elliptic integral which is not easily computable, so attacking $K(k)$ gave no fruitful result.
Well then proceeding with substitution ( It is easy to lose track as it involved a lot of trigonometric manipulations )
I reduced the integral into this form :
$$K(k) = k\int^{\pi/2}_0 \frac{\sec^2{\theta}d\theta}{\sqrt{k^2 + \tan^2{\theta}}\sqrt{k^2 + \tan^2{\theta} - k^2\tan^2{\theta}}}$$
Using substitution : $\tan{\theta} = p$
$$K(k) = k\int^{\infty}_0 \frac{dp}{\sqrt{k^2 +p^2}\sqrt{k^2 +p^2 - (kp)^2}}$$
To my naive eyes this looked manageable but WolframAlpha says otherwise.
Any hint or ideas on how to attack this problem?
I thought the trig manipulations to find a more suitable form but in the I end up transform $\sin {u}$ term into $\tan{u}$ to use the given substitution
EDIT: It has been suggested in the comments that there is a typo in the substitution (given in the book); it should be:
$$ \tan{u} = \frac{\sin{2\theta}}{k+\cos{2\theta}} $$
| I think that you are on the right track. But you have to be aware that the completed elliptic integral of this type can be defined by any of three integrals:
$$
K(k)=\int_0^{\pi/2}\frac{1}{\sqrt{1-k^2\sin^2\theta}}\ d\theta\\=\int_0^1\frac{1}{\sqrt{(1-t^2)(1-k^2t^2)}}\ dt\\
=\int_0^\infty \frac{1}{\sqrt{(1+t^2)(1+(1-k^2)t^2)}}\ dt
$$
This is from K. Oldham, J. Myland, & J. Spanier, An Atlas of Functions, Springer. Chapter 61 covers the Complete Elliptic Integrals. Can you take it from here?
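As a sanity check, the target identity (a descending Landen transformation) can be verified numerically straight from the first integral form, e.g. at $k=0.4$, with a simple midpoint rule:

```python
import math

def K(k, n=50_000):
    # midpoint rule for K(k) = ∫₀^{π/2} du / √(1 − k² sin²u)
    h = (math.pi / 2) / n
    return sum(h / math.sqrt(1 - (k * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))

k = 0.4
lhs = K(k)
rhs = K(2 * math.sqrt(k) / (1 + k)) / (1 + k)
assert abs(lhs - rhs) < 1e-7   # K(k) = K(2√k/(1+k)) / (1+k)
```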
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3953893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
How do I determine all complex solutions of $|z|z=-i(\bar{z})$, for all $z\in \mathbb{C}$
Let $z\in \mathbb{C}$. How do I determine all complex solutions of $|z|z=-i(\bar{z})$?
My approach: We can see that $$|x+yi|(x+yi)=y-ix$$ Then I came up with the real part $$-y^2+x^2+x^4-y^4$$ and the imaginary part $$2ixy+x^2(2iyx)+y^2(2xiy)$$
I don't know how to go on and I think that is wrong anyway.I would appreciate every attempt to help!
| Taking the modulus of both sides gives $|z|^2=|z|$, so either $z=0$ (which clearly satisfies the equation) or $|z|=1$. If $|z|=1$, then $\overline{z} = 1/z$ and the equation becomes
$$
z = -i/z,
\\
z^2 = -i
$$
with two solutions
$$
z = \frac{-1+i}{\sqrt{2}},\qquad z=\frac{1-i}{\sqrt{2}}
$$
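A quick numerical verification of these roots (together with the trivial solution $z=0$, which also satisfies the equation since taking moduli forces $|z|\in\{0,1\}$):

```python
import math

# the two unit-modulus roots of z² = −i, plus the trivial solution z = 0
sols = [0j, (-1 + 1j) / math.sqrt(2), (1 - 1j) / math.sqrt(2)]
for z in sols:
    assert abs(abs(z) * z - (-1j) * z.conjugate()) < 1e-12
```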
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3954012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Adjoint of the linear transformation over a vector space over complex numbers. Consider the vector space $V:=P_2(\mathbb{C})$ consisting of polynomials of degree at most $2$ with complex coefficients together with the following inner product $$\langle f, g\rangle=\int_{-1}^1f(t)\overline{g(t)}\, dt$$
Evaluate the adjoint of the linear transformation $T:V\to V$ by $$T(f)=if'+2f.$$
My effort:
I know that $T^*$ is a linear transformation which satisfies
$$\langle Tf, g\rangle=\langle f, T^*g\rangle.$$
Above implies that $$i\int_{-1}^1f'(t)\overline{g(t)}\, dt+2\int_{-1}^1f(t)\overline{g(t)}\, dt=\langle f, T^*g\rangle.$$
Please tell how to proceed next?
| It is enough to consider the transformation $D(f)=f'$, since $T=iD+2\operatorname{id}$ yields $T^*=-iD^*+2\operatorname{id}$.
Instead of constructing an orthonormal basis for $P_2(\mathbb C)$ first, we can do the computation in our favorite basis $B=(1,t,t^2)$ using the Gram matrix of the given inner product.
The Gram matrix of $\langle -,-\rangle$ with respect to $B$ is obtained from the integrals $\int_{-1}^1 t^k\,\mathrm dt$ for $k=0,1,2,3,4$ as
$$
G_B = (\langle t^i, t^j\rangle)_{i,j=0,1,2} =
\begin{pmatrix}
2 & 0 & 2/3 \\
0 & 2/3 & 0 \\
2/3 & 0 & 2/5
\end{pmatrix}.
$$
Denoting the coordinate vector of $f\in P_2(\mathbb C)$ with respect to $B$ by $[f]_B$ this describes our inner product as
$$
\langle f,g\rangle = [f]_B^T \ G_B \ \overline{[g]_B}.
$$
The matrix of $D$ with respect to $B$ is given by
$$
[D]_B =
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 2 \\
0 & 0 & 0
\end{pmatrix}.
$$
Now by comparing the descriptions
\begin{align*}
\langle Df,g\rangle &= [f]_B^T\ [D]_B^T \ G_B \ \overline{[g]_B} \qquad\text{and} \\
\langle f,D^*g\rangle &= [f]_B^T\ G_B \ \overline{[D^*]_B}\ \overline{[g]_B},
\end{align*}
one obtains the general formula for the matrix of the adjoint in non-orthonormal bases:
$$
[D^*]_B = \overline{G_B^{-1} [D]_B^T \ G_B} =
\begin{pmatrix}
0 & -5/2 & 0 \\
3 & 0 & 1 \\
0 & 15/2 & 0
\end{pmatrix}.
$$
Hence $D^*$ is given by
$$
D^*(1) = 3t,\quad D^*(t) = -\frac 5 2+ \frac{15}2 t^2,\quad D^*(t^2) = t
$$
which translates to $T^*$ as
$$
T^*(1) = -3it+2,\quad T^*(t) = \frac 5 2i - \frac{15}2 i t^2 + 2t,\quad T^*(t^2) = -it+2t^2.
$$
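Since all the data here are real, the conjugation in the formula is a no-op, and the whole computation $G_B^{-1}[D]_B^T G_B$ can be reproduced in exact rational arithmetic (a sketch with `fractions.Fraction` and a small Gauss–Jordan inverse):

```python
from fractions import Fraction as F

# Gram matrix of ⟨·,·⟩ in the basis B = (1, t, t²)
G = [[F(2), F(0), F(2, 3)],
     [F(0), F(2, 3), F(0)],
     [F(2, 3), F(0), F(2, 5)]]
D = [[F(0), F(1), F(0)],     # matrix of f ↦ f' in the same basis
     [F(0), F(0), F(2)],
     [F(0), F(0), F(0)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inverse3(A):
    # Gauss–Jordan elimination in exact rational arithmetic
    M = [row[:] + [F(int(i == j)) for j in range(3)] for i, row in enumerate(A)]
    for c in range(3):
        p = next(r for r in range(c, 3) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [x / piv for x in M[c]]
        for r in range(3):
            if r != c and M[r][c] != 0:
                fac = M[r][c]
                M[r] = [x - fac * y for x, y in zip(M[r], M[c])]
    return [row[3:] for row in M]

# G⁻¹ Dᵀ G (no conjugation needed: everything is real)
Dstar = matmul(matmul(inverse3(G), transpose(D)), G)
assert Dstar == [[F(0), F(-5, 2), F(0)],
                 [F(3), F(0), F(1)],
                 [F(0), F(15, 2), F(0)]]
```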
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3954161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
injective map between the semi-unit circle and a parabola How can we injectively map the semi-unit circle to a parabola bounded between two points?
I earnestly think this can be done, as the curves are homeomorphic (one can be "bent" to the other), but the explicit map is out of my imagination. Any hints? Thanks beforehand.
| Thanks to the hint provided by @AdamRubinson, supposing for the sake of concreteness that we wish to find a map between the upper unit semicircle and the parabolic segment $x=y^2$ bounded between $(0,0)$ and $(1,1)$, the map $f(x,y)=\left(\frac{(x+1)^2}{4},\frac{x+1}{2}\right)$ does the job: the image point $(X,Y)$ satisfies $X=Y^2$, and the map is injective since each point of the upper semicircle is determined by its $x$-coordinate. For other cases, appropriate scaling can be done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3954433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof that if $n^2/r^2 = p$, with $p$ prime, then $n$ is divisible by $p$ Given positive integers $n, r$, with $p$ a prime, and $n^2/r^2 = p$, prove that $n$ is divisible by $p$. Use proof by contradiction (I am sure there are easier ways to prove this, but this is what the book requests, since the chapter is exercises on proof by contradiction).
I started off by assuming that $n$ is not divisible by $p$, therefore:
$n = ap + b$ for a non-negative integer $a$ and a positive integer $b$.
$n^2 = pr^2$, so $(ap+b)^2 = pr^2$
I expanded this out to get a quadratic polynomial in $b$:
$b^2 + 2apb + a^2p^2-r^2p = 0$
Using the quadratic formula, I found:
$b = -ap \pm r\sqrt{p}$
Since $a$, $r$, and $p$ are integers, and $p$ is prime, then both of these values of $b$ would therefore be irrational, violating the condition that $b$ must be a positive integer.
I'm not sure if this proof works or not. It doesn't feel very elegant (compared to some of the examples in the book). Could you please critique it?
| Your proof does not work because you did not use the fact that $n$ is not divisible by $p$ (you could have $b=0$ for instance). Also, when defining $b$, I think it would be more convenient to say that $n=ap+b$ is the Euclidean division of $n$ by $p$ (this also implies that $0<b<p$). What is wrong in your proof is that the initial equation is false. Indeed, you found a contradiction without using the fact that $n$ is not divisible by $p$, so the initial hypothesis is false, that is, $n^2=pr^2$ has no solutions. You can see that by writing $2v_p(n)=1+2v_p(r)$, which is not possible because the LHS is even and the RHS is odd.
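The parity argument $2v_p(n)=1+2v_p(r)$ can be illustrated directly by computing $p$-adic valuations (a small sketch):

```python
def v(p, n):
    # p-adic valuation v_p(n): the exponent of p in n
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

p = 7
for n in range(1, 500):
    assert v(p, n * n) % 2 == 0        # v_p(n²) = 2·v_p(n) is always even
for r in range(1, 500):
    assert v(p, p * r * r) % 2 == 1    # v_p(p·r²) = 1 + 2·v_p(r) is always odd
```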
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3954568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Dimension Notation for Topological Spaces Lots of families of topological spaces get superscripts denoting dimension: $\mathbb{R}^n$, $B^n$, $D^n$, $\Delta^n$, $S^n$, $\mathbb{R}P^n$, $T^n$. There's a niceness to this: $S^n$ is the boundary of $D^{n+1}$, which is a bit awkward at first, especially if you think about how you would define their dimensions as subsets of $\mathbb{R}^n$ as a vector space, but under any applicable topological definition of dimension, their superscripts represent their dimensions. For once, there is a nice continuity of notation. However, there are multiple topological notions of dimension: topological dimension, both inductive dimensions, dimension of CW-complexes, dimension of manifolds... So are there any explicit rules for what the superscript $n$ is actually telling us about a space, or is it just a guide that should wind up being equivalent under different definitions for sufficiently nice spaces?
| I would say there are no explicit rules. For me the point is that you have a sequence of “similar” spaces naturally indexed by $n$. For the examples you mentioned it feels natural to align the indexes with the dimension (where for these spaces various notions of dimension coincide). If one wants to directly discuss some dimension of a space, I'd suggest being explicit: $\operatorname{somedim}(X) = n$ while $\operatorname{otherdim}(X) = n'$.
There are more examples of the general pattern not directly related to dimension: $ℤ_n$ for the cyclic group of order $n$, $S_n$ and $A_n$ for the symmetric and alternating group; $\ell^p$ and $L^p$ (also denoted by $\ell_p$ and $L_p$) for the Lebesgue spaces.
Note that in $ℝ^n$ the superscript is also a genuine operation – the Cartesian power. (Maybe the similarity with $ℝ^n$ is the reason why $S^n$ instead of $S_n$ is used for spheres, etc, but that is just a speculation.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3954797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $F(t)=\int_{0}^{\infty}\frac{\arctan(tx)-\arctan(x)}{x}dx$ Prove $F$ is $C^1(0,\infty)$ and find $F'(t)$
Let $F(t)=\int_{0}^{\infty}\frac{\arctan(tx)-\arctan(x)}{x}dx$
Prove $F$ is $C^1(0,\infty)$ and find $F'(t)$
I've thought that maybe I should prove that $f(t,x)=\frac{\arctan(tx)-\arctan(x)}{x}$ is Riemann integrable and therefore Lebesgue integrable on $(0,\infty)$, and then use differentiation under the integral sign (in the real analysis sense) and show that $\frac{\partial f}{\partial t}$ is bounded.
| Fix $t>0$, for $\Delta t$ such that $|\Delta t|<\frac{t}{2}$, there is
$\theta\in(0,1)$ such that
$$ \arctan((t+\Delta t)x)-\arctan(tx)=\frac{1}{1+(t+\theta\Delta t)^2x^2}\Delta t x. $$
So
\begin{eqnarray}
&&\frac{F(t+\Delta t)-F(t)}{\Delta t}\\
&=&\int_{0}^{\infty}\frac{\arctan((t+\Delta t)x)-\arctan(tx)}{\Delta t\,x}\,dx\\
&=&\int_{0}^{\infty}\frac{1}{1+(t+\theta\Delta t)^2x^2}dx.\\
\end{eqnarray}
Since
$$ |t+\theta\Delta t|\ge t-\theta |\Delta t|>\frac{t}{2} $$
we have
$$ \bigg|\frac{1}{1+(t+\theta\Delta t)^2x^2}\bigg|<\frac4{4+t^2x^2}. $$
Since
$$ \int_{0}^{\infty}\frac4{4+t^2x^2}dx $$
converges, by the DCT,
\begin{eqnarray}
\lim_{\Delta t\to0}\frac{F(t+\Delta t)-F(t)}{\Delta t}=\int_{0}^{\infty}\lim_{\Delta t\to0}\frac{1}{1+(t+\theta\Delta t)^2x^2}dx=\int_{0}^{\infty}\frac{1}{1+t^2x^2}dx=\frac{\pi}{2t}\equiv f(t).
\end{eqnarray}
which implies $F(t)$ is differentiable and $F'(t)=f(t)$. Since $f(t)$ is continuous for $t>0$, we conclude $F(t)\in C^1(0,\infty)$.
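Since $F(1)=0$ and $F'(t)=\pi/(2t)$, it follows that $F(t)=\frac{\pi}{2}\ln t$; a numerical check of this closed form (midpoint quadrature after the substitution $x=\tan s$, which maps $(0,\infty)$ to $(0,\pi/2)$):

```python
import math

def F(t, n=100_000):
    # F(t) = ∫₀^∞ (arctan(tx) − arctan(x))/x dx via the substitution x = tan(s)
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        x = math.tan((i + 0.5) * h)
        total += (math.atan(t * x) - math.atan(x)) / x * (1 + x * x) * h
    return total

for t in (0.5, 2.0, 3.0):
    assert abs(F(t) - (math.pi / 2) * math.log(t)) < 1e-4
```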
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3955116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For $s(t) = a_2t^2 + a_1t + a_0$ and $r(t) = b_2t^2 + b_1t + b_0$ in $P_2[t],$ define $\langle s, r\rangle = 2a_2b_2 + a_1b_1 + 3a_0b_0$ EDIT: I'm still trying to figure it out! Will ask for help if I can't answer it still by the end of the day. Thank you :)
Online classes hasn't been as easy as face-to-face classes and with that my professor is quite old and isn't very familiar with the online methods of teaching these days.
He said that the above is an inner product on $P_2[t].$
In attempting to prove other $\langle\cdot\,,\cdot\rangle$ examples, I would like to know how to show that a given $\langle\cdot\,,\cdot\rangle$ is an inner product on $P_2[t].$ You may use the example I provided. Thank you so much!
| An orthonormal basis is given by
$$ \frac{t^2}{\sqrt 2}, \; \; \; t, \; \; \; \frac{1}{\sqrt 3} $$
If you have real numbers with $$e^2 + f^2 + g^2 = 1,$$
a unit vector is given by
$$ \frac{e t^2}{\sqrt 2} + ft + \frac{g}{\sqrt 3} $$
The quadratic form given by the squared norm of your $s(t)$ is
$$ 3 a_0^2 + a_1^2 + 2 a_2^2 $$
which becomes $g^2 + f^2 + e^2 $ when $a_0 = g/ \sqrt 3, \; \; a_1 = f, \; \; a_2 = e/ \sqrt 2$
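A quick check that this basis really is orthonormal for the given inner product, representing $s(t)=a_2t^2+a_1t+a_0$ as the coefficient triple $(a_0,a_1,a_2)$:

```python
import math

def ip(s, r):
    # ⟨s, r⟩ = 2·a2·b2 + a1·b1 + 3·a0·b0, polynomials stored as (a0, a1, a2)
    return 2 * s[2] * r[2] + s[1] * r[1] + 3 * s[0] * r[0]

basis = [(0, 0, 1 / math.sqrt(2)),   # t²/√2
         (0, 1, 0),                  # t
         (1 / math.sqrt(3), 0, 0)]   # 1/√3
for i, u in enumerate(basis):
    for j, w in enumerate(basis):
        assert abs(ip(u, w) - (i == j)) < 1e-12   # ⟨eᵢ, eⱼ⟩ = δᵢⱼ
```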
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3955353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can you switch integrals if you don't care about the infinite part? So, schematically, suppose one is doing some real integral
$$\mathcal{I}=\int_a^\Lambda \int_b^cf(x,y)\,dx\,dy,$$
where $\Lambda$ is some cutoff that one would like to take to infinity and suppose the inner integral is finite and $\int_b^c|f(x,y)|\,dx<\infty$ for any $y$. Further suppose that the whole integral diverges as $\Lambda\to\infty$ (for example, the case I have in mind is $\mathcal{I}\sim \alpha\Lambda + \beta\log\Lambda+$ finite). Obviously, switching the order of integration is not okay. However, if I am interested only in the non-divergent part of the integral, is switching the order of integration allowed? Or are there conditions under which it is allowed if I only want the part that is finite as $\Lambda\to\infty$?
| Without going too far afield, e.g., theorems of Tonelli and Fubini, if $f$ is continuous on a bounded rectangle $[a,b]\times [c,d]$, then it must hold that
$$\int_a^b \int_c^d f(x,y) \, dy \, dx =\int_c^d \int_a^b f(x,y) \, dx \, dy $$
This can be proved easily using the uniform continuity of $f$ on the compact rectangle.
A counterexample when $f$ is not continuous is
$$ \int_0^1\int_0^1 \frac{x-y}{(x+y)^3} \, dx\, dy = -\frac{1}{2}, \quad \int_0^1 \int_0^1 \frac{x-y}{(x+y)^3} \, dy\, dx = \frac{1}{2}$$
The integrals are evaluated easily by noticing that
$$\frac{x-y}{(x+y)^3} = -\frac{\partial}{\partial x} \left( \frac{x}{(x+y)^2} \right) =\frac{\partial}{\partial y} \left( \frac{x}{(x+y)^2} \right),$$
and, hence,
$$ \int_0^1\int_0^1 \frac{x-y}{(x+y)^3} \, dx\, dy = -\int_0^1\int_0^1\frac{\partial}{\partial x} \left( \frac{x}{(x+y)^2} \right) \,dx\,dy \\ = -\int_0^1\frac{1}{(1+y)^2} \, dy = \left.\frac{1}{1+y} \right|_0^1 = - \frac{1}{2}$$
| {
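The two iterated integrals of the counterexample can be confirmed numerically using the closed-form inner integrals derived above:

```python
n = 100_000
h = 1.0 / n
mid = [(k + 0.5) * h for k in range(n)]

# closed-form inner integrals: ∫₀¹ f dx = −1/(1+y)²  and  ∫₀¹ f dy = +1/(1+x)²
dx_inner_first = sum(-1 / (1 + y) ** 2 for y in mid) * h
dy_inner_first = sum(+1 / (1 + x) ** 2 for x in mid) * h

assert abs(dx_inner_first + 0.5) < 1e-6   # dx-then-dy order gives −1/2
assert abs(dy_inner_first - 0.5) < 1e-6   # dy-then-dx order gives +1/2
```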
"language": "en",
"url": "https://math.stackexchange.com/questions/3955547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Orthogonal complement in $ L_2 $ Find the orthogonal complement of a subspace
$$ M = \{ x \in L_2(-1, 1):x(t)=-x(-t), \int_0^1 x(t)t^2dt=0 \} $$
in $L_2(-1, 1).$
As I understand, $M$ can be described as all odd functions which are orthogonal to $ \lambda t^2 $ on $(0, 1)$. But I don't know how to find the orthogonal complement.
| Let $f(t)=t^{2}\chi_{(0,1)}(t)$. Then $M$ consists of functions which are odd and orthogonal to $f$. This means $M$ is precisely the orthogonal complement of the span of the even functions and $f$. Thus, $M^{\perp} =\{g+cf:c \in \mathbb R, g \, \text{is even} \}$
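A numerical spot check with a concrete element of $M$: the odd function $h(t)=t-\frac32 t^3$ satisfies $\int_0^1 h(t)\,t^2\,dt=0$, so it lies in $M$ and should be orthogonal to even functions and to $f$:

```python
def integrate(fn, a, b, n=100_000):
    # simple midpoint rule
    h = (b - a) / n
    return sum(fn(a + (k + 0.5) * h) for k in range(n)) * h

h_ = lambda t: t - 1.5 * t**3            # odd, with ∫₀¹ h(t)·t² dt = 0, so h ∈ M
f = lambda t: t**2 if t > 0 else 0.0     # f = t²·χ_(0,1)

# h should be orthogonal to every element g + c·f of M^⊥ (g even)
for w in (lambda t: 1.0, lambda t: t**2, f):
    assert abs(integrate(lambda t: h_(t) * w(t), -1, 1)) < 1e-6
```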
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3955797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Stability of equilibrium points in Gradient Systems, Lyapunov functions and the Hartman-Grobman Theorem So I have learned about Lyapunov theory to study the stability of equilibrium points, and now we want to apply it to the study of gradient systems. So suppose we have $x'=-\nabla V(x)$ and we have that $a$ is an equilibrium point for this equation, that is, $\nabla V(a)=0$. Now if we have that $a$ is an isolated local minimum we can use the Lyapunov function $H(x):=V(x)-V(a)$ to see that this is an asymptotically stable point. If $a$ is an isolated local maximum we can use $-H(x)$ to see that it is unstable, but what happens if $a$ is an isolated saddle point? How can we study the stability in this case? One way I thought about it would be to use the Hartman-Grobman theorem: we know that the linearization of this dynamical system will be unstable, and so since they have homeomorphic flows I guess this would also be unstable, but I am not completely sure this works, or if there is another way to see this.
I guess my biggest doubt is whether we can use the Hartman-Grobman theorem to study the stability of the system from the linearized equation.
Any help is appreciated, thanks in advance.
| You could use Hartman–Grobman (if $a$ is hyperbolic), but it's perhaps overkill. There's the simpler Lyapunov instability theorem which says that if there is a differentiable function $H$ which is defined in a neighbourhood of $a$ and does not have a local minimum at $a$, and $\dot H<0$ on a punctured neighbourhood of $a$, then $a$ is unstable.
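As a toy illustration (my own example, not from the question): for $V(x,y)=x^2-y^2$ the origin is a saddle of $V$, and a forward-Euler simulation of $x'=-\nabla V$ shows trajectories starting arbitrarily close to the equilibrium escaping, i.e. instability:

```python
# gradient flow x' = -∇V for V(x, y) = x² − y² (saddle at the origin):
# componentwise, x' = −2x and y' = +2y, integrated with forward Euler
def flow(p, steps=8000, dt=1e-3):
    x, y = p
    for _ in range(steps):
        x, y = x - dt * 2 * x, y + dt * 2 * y
    return x, y

x, y = flow((1e-6, 1e-6))      # start arbitrarily close to the equilibrium...
assert x * x + y * y > 1.0     # ...yet the trajectory leaves any small ball: unstable
```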
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3955974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Difficult Probability Question! (involving independent or dependent events)
Each time a button is pressed on a particular machine a random integer between 1 and 2Y is displayed on a screen, where Y is a certain positive integer. All numbers between 1 and 2Y are equally likely and the same number can appear more than once. The button is pressed n times and the displayed numbers are recorded. Consider the following three events: | A: the first number displayed is either 1 or 2.
B: the product of the n displayed numbers is even.
C: the sum of the n displayed numbers is n + 2.
(a) Find the probabilities P(A), P(B) and P(C).
(b) Are the events A and B independent?
(c) Are the events B and C independent?
Would P(A) simply be (1/Y)?
For P(B) will it be 1 - [(Y^n)/((2Y)^n)]?
(Essentially the probability that the probability that product of the displayed numbers is odd which mean all n displayed numbers have to be odd so the chance of picking an odd number (1/2) to the power of n times. Then that probability taken away from 1 will give the chance that one or more of the n displayed numbers are even and therefore the product is even)
For P(C) will it be [(n!)/(2!)((n-2)!)][(1/2Y)^2][(1/2Y)^(n-2)] + [n][(1/2Y)][(1/2Y)^(n-1)]?
(Essentially for the sum of the n displayed numbers to be n+2 you need (n-2) displayed numbers to be 1 and 2 displayed numbers to be 2 OR (n-1) displayed numbers to be 1 and 1 displayed number to be 3.) So I did NC2 multiplied by the probability of the number 2 being generated twice and then the number 1 being generated (n-2) times. And then plus NC1 multiplied by the probability of the number 3 being generated once multiplied by the number 1 being generated (n-1) times.
Are the probabilities P(A), P(B) and P(C) correct?
If so how do I calculate P(B∩C)for part(c)
Any help would be much appreciated!
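To convince myself, I tried checking these formulas by simulation for small values (here $Y=3$, $n=4$; the $m$ in my $P(B)$ formula should be read as $n$), and they all agree:

```python
import random
from math import comb

random.seed(1)
Y, n, T = 3, 4, 300_000
cntA = cntB = cntC = 0
for _ in range(T):
    draws = [random.randint(1, 2 * Y) for _ in range(n)]
    cntA += draws[0] <= 2                    # event A
    cntB += any(d % 2 == 0 for d in draws)   # event B: product even
    cntC += sum(draws) == n + 2              # event C

pA = 1 / Y
pB = 1 - 0.5 ** n
pC = (comb(n, 2) + n) / (2 * Y) ** n         # two 2s, or one 3, rest all 1s
assert abs(cntA / T - pA) < 0.01
assert abs(cntB / T - pB) < 0.01
assert abs(cntC / T - pC) < 0.002
```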
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3956119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Struggling with intuition about this probability question. Symmetry argument of two balls drawn from an urn. So the question is as follows:
An urn contains m red balls and n blue balls. Two balls are drawn uniformly at random
from the urn, without replacement.
(a) What is the probability that the first ball drawn is red?
(b) What is the probability that the second ball drawn is red?*
The answer to a) quite clearly works out to be $\frac{m}{(m+n)}$, but the answer to b turns out to be the same, and my tutor said this is intuitive by a symmetry argument.
i.e. that $P(A_1)$ = $P(A_2)$ where $A_i$ is the event that a red ball is drawn on the ith turn. However I am struggling to see how this is evident, can anyone explain this?
| Part (b) doesn't give any information about the first ball; it is just asking for the probability that the second ball in the line is red.
Now red balls (or those of any other color!) don't have any preference for positions in the line, hence if you randomly pick up any ball from the line, its probability of being red will be the same.
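A quick numeric confirmation (not in the original answer): conditioning on the first draw and computing exactly with rationals reproduces m/(m+n) for a few arbitrary choices of m and n:

```python
from fractions import Fraction

def p_second_red(m, n):
    """Exact P(second ball red): first red then red, plus first blue then red."""
    total = m + n
    return (Fraction(m, total) * Fraction(m - 1, total - 1)
            + Fraction(n, total) * Fraction(m, total - 1))

for m, n in [(3, 5), (7, 2), (1, 9)]:
    assert p_second_red(m, n) == Fraction(m, m + n)
```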
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3956392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Need to prove or disprove an alternative group definition. Let the non-empty set $G$ with the operation $*$ satisfy the following 3 conditions:
*
*$a(bc) = (ab)c$, for all $a, b, c \in G$ (associative law).
*For every $a,b$ there is $c$ such that $ac = b$. ($c$ is the "path" from $a$ to $b$).
*For every $a,b$ there is $d$ such that $bd = a$. ($d$ is the "path back" from $b$ to $a$).
Prove or disprove: $G,*$ is a group.
Associativity:
Proof:
from 1)
Identity element:
Proof:
According to 2) For every $a,a$ there is $e$ such that $ae = a$.
Let us prove $ea=a$.
According to 2) For every $e,e$ there is $X$ such that $eX= e$.
Let's check the two possibilites:
*
*$X=a,$
*$X\neq a$
If 1) then $ea=a$ proved.
If 2): $eX=e$ then what?
Every attempt failed. Can someone do it?
| If the third condition is written as
For every $a,b \in G$ there is $d \in G$ such that $db=a$.
then the answer is that $G$ is a group, and the existence of the identity element can be shown as follows:
Fix $a \in G$. Then there exists $x_a$ and $y_a$ in $G$ such that $ax_a=a=y_aa$. We will prove that $gx_a=g=y_ag$ for any other $g \in G$. Indeed, if $g \in G$, since we know that there exists $x,y \in G$ with $ax=g=ya$, then $$gx_a = (ya)x_a = y(ax_a) = ya = g$$ and similarly $y_ag = g$. In particular $x_a=y_a$ (why?) and then $e := x_a (=y_a)$ is the identity element.
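This is not part of the answer, but the claim (with the corrected third condition $db=a$) can be brute-force checked on tiny carriers: every binary operation on a 2- or 3-element set satisfying associativity plus both divisibility conditions turns out to be a group table. A sketch, not a proof:

```python
from itertools import product

def is_group(t, n):
    # find a two-sided identity, then check two-sided inverses
    ids = [e for e in range(n)
           if all(t[e][a] == a == t[a][e] for a in range(n))]
    if not ids:
        return False
    e = ids[0]
    return all(any(t[a][b] == e == t[b][a] for b in range(n)) for a in range(n))

for n in (2, 3):
    for flat in product(range(n), repeat=n * n):
        t = [flat[i * n:(i + 1) * n] for i in range(n)]
        assoc = all(t[t[a][b]][c] == t[a][t[b][c]]
                    for a in range(n) for b in range(n) for c in range(n))
        # condition 2: a*c = b solvable; corrected condition 3: d*b = a solvable
        divis = all(any(t[a][c] == b for c in range(n)) and
                    any(t[d][b] == a for d in range(n))
                    for a in range(n) for b in range(n))
        if assoc and divis:
            assert is_group(t, n)
```

Note that with the question's literal third condition (again $bd=a$, i.e. right division only), the table $a*b=b$ would slip through and is not a group, which is exactly why the correction matters.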
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3956604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Gammas division Can anyone explain to me how to calculate this expression?
$$\frac{\Gamma(n/2)}{\Gamma((n-1)/2)}$$
Thank you. I tried with the double factorial, but I don't really know how to continue.
| Almost as @J.G. answered, for large values of $n$, using Stirling's approximation twice and continuing with Taylor series
$$\frac{\Gamma \left(\frac{n}{2}\right)}{\Gamma \left(\frac{n-1}{2}\right)}=\sqrt{\frac n 2}\left(1-\frac{3}{4 n}-\frac{7}{32 n^2}-\frac{9}{128 n^3}+\frac{59}{2048 n^4}+O\left(\frac{1}{n^5}\right)\right)$$ which shows a relative error lower than $0.1$% as soon as $n>2$, lower than $0.01$% as soon as $n>3$, lower than $0.001$% as soon as $n>5$.
If you wish a very good approximation
$$\frac{\Gamma \left(\frac{n}{2}\right)}{\Gamma \left(\frac{n-1}{2}\right)}\sim\sqrt{\frac n 2}\,\frac{1-\frac{441823}{287784 n}+\frac{119909}{54816 n^2}-\frac{1473029}{877056 n^3}}{1-\frac{225985}{287784 n}+\frac{697315}{383712 n^2}-\frac{7699031}{18418176 n^3}}$$
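The quoted expansion is easy to sanity-check numerically with the standard library (this check is mine, not part of the answer):

```python
import math

def ratio(n):
    return math.gamma(n / 2) / math.gamma((n - 1) / 2)

def approx(n):
    # leading terms of the asymptotic expansion quoted above
    return math.sqrt(n / 2) * (1 - 3 / (4 * n) - 7 / (32 * n ** 2) - 9 / (128 * n ** 3))

for n in [4, 10, 50]:
    rel_err = abs(ratio(n) - approx(n)) / ratio(n)
    assert rel_err < 1e-3          # already well under 0.1% at n = 4
```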
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3956971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Calculus involving a Physics problem This is the equation that occurs in a particular problem in physics: how long the Earth would take to fall into the Sun if it suddenly stopped orbiting.
$\mathrm{d}r = \frac{GM}{2r^2} \mathrm{d}t^2$
This is my attempted solution. Is it correct? Also can someone explain how on one side we have only one integral and on the other side we have two? If we integrate one side of the equation, surely we must also integrate the other side as well. No? So how come we integrate one side once and the other side twice.
$\int_{0}^{R}\frac{2r^2}{GM}\mathrm{d}r=\int_{0}^{T}\int\mathrm{d}t^2$
This becomes $\frac{T^2}{2}=\frac{2R^3}{3GM}$
| You've misunderstood how to set up second-order differential equations.
A body that falls vertically has $\frac{d^2r}{dt^2}=-\frac{GM}{r^2}$. This has a number of implications you were considering that I review in the paragraph below, but it is this equation alone that we use to solve for $r$ as a function of $t$, e.g. as in @GerryMyerson's link.
Over short enough falls with initially zero speed for $r$ to change little (this is the simplest case to analyze), the acceleration is approximately constant, so $r\approx r_0-\frac{GM}{2r_0^2}t^2$, where $r_0$ is the $t=0$ value of $r$. On the other hand, at small times $dt$ we have$$r\approx r_0+\underbrace{\dot{r}_0}_0dt+\tfrac12\ddot{r}_0dt^2=r-\tfrac{GM}{2r_0^2}dt^2,$$i.e. the change in $r$ is $-\tfrac{GM}{2r_0^2}dt^2$. However, at a later time when $r$ has reduced, $\dot{r}$ has become negative and the elapsed time is no longer negligible, the result is rather different.
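To see the quadratic approximation at work, one can integrate $\ddot r=-GM/r^2$ numerically over a short time and compare with $r_0-\frac{GM}{2r_0^2}t^2$. This sketch is my addition, not part of the answer; the units $GM=r_0=1$ and the step size are arbitrary choices:

```python
GM, r0 = 1.0, 1.0
dt, steps = 1e-5, 2000                 # total time t = 0.02, a short fall
r, v = r0, 0.0
for _ in range(steps):                 # semi-implicit Euler integration
    v += -GM / r ** 2 * dt
    r += v * dt
t = dt * steps
approx = r0 - GM / (2 * r0 ** 2) * t ** 2
assert r < r0                          # the body has fallen a little
assert abs(r - approx) < 1e-6          # the quadratic law holds at small t
```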
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why $\omega\left[\begin{pmatrix} 0 &Q\\ R &0 \end{pmatrix}\right]\leq\omega\left[\begin{pmatrix} P &Q\\ R &S \end{pmatrix}\right]$? Let $E$ be a complex Hilbert space, with inner product $\langle\cdot\;, \;\cdot\rangle$ and the norm $\|\cdot\|$ and let $\mathcal{L}(E)$ the algebra of all operators on $E$. The numerical radius of an operator $T\in\mathcal{L}(E)$ is given by
$$\omega(T)=\sup_{\|x\|=1}|\langle Tx, x\rangle|.$$
Let $P,Q,R,S\in\mathcal{L}(E)$. I want to prove that
$\omega\left[\begin{pmatrix}
0 &Q\\
R &0
\end{pmatrix}\right]\leq\omega\left[\begin{pmatrix}
P &Q\\
R &S
\end{pmatrix}\right].$
Here $\begin{pmatrix}
0 &Q\\
R &0
\end{pmatrix},\begin{pmatrix}
P &Q\\
R &S
\end{pmatrix}\in \mathcal{L}(E\oplus E)$.
My attempt: One can remark that
$$
\begin{pmatrix}
0 &Q\\
R &0
\end{pmatrix} = \frac{1}{2} \begin{pmatrix}
P &Q\\
R &S
\end{pmatrix} + \frac{1}{2} \begin{pmatrix}
-P &Q\\
R &-S
\end{pmatrix},
$$
This implies that
$$
\omega\left[\begin{pmatrix}
0 &Q\\
R &0
\end{pmatrix}\right] \leq \frac{1}{2} \omega\left[\begin{pmatrix}
P &Q\\
R &S
\end{pmatrix}\right] + \frac{1}{2} \omega\left[\begin{pmatrix}
-P &Q\\
R &-S
\end{pmatrix}\right].
$$
But I'm facing difficulties to prove that
$$\omega\left[\begin{pmatrix}
P &Q\\
R &S
\end{pmatrix}\right]=\omega\left[\begin{pmatrix}
-P &Q\\
R &-S
\end{pmatrix}\right].$$
Let $T=\begin{pmatrix}
P &Q\\
R &S
\end{pmatrix}$. I want to find an unitary operator $U$ such that
$$U^*TU=\begin{pmatrix}
-P &Q\\
R &-S
\end{pmatrix}.$$
In this case we get the desired result since
$$\omega(U^*TU)=\omega(T).$$
| You might not be able to find the desired $U$. E.g. if $P=S=I$, and $Q=R=0$, then
$$
\pmatrix{P & Q \cr R & S}
$$
is not conjugate to
$$
\pmatrix{-P & Q \cr R & -S},
$$
since they have different eigenvalues. However, letting
$$
V=\pmatrix{-I & 0 \cr 0 & I},
$$
one has
$$
V\pmatrix{P & Q \cr R & S}V^{-1} = \pmatrix{P & -Q \cr -R & S} = -\pmatrix{-P & Q \cr R & -S}
$$
so you get what you want because $\omega (-T)=\omega (T)$.
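(My addition.) For the scalar case $E=\Bbb C$, where $P,Q,R,S$ are numbers and the blocks are $2\times2$ matrices, the conjugation identity is a short numerical check; the sample entries are arbitrary:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P, Q, R, S = 1.3, -0.7, 2.1, 0.4       # arbitrary scalar entries
T = [[P, Q], [R, S]]
V = [[-1, 0], [0, 1]]                  # note V is its own inverse
lhs = matmul(matmul(V, T), V)          # V T V^{-1}
rhs = [[P, -Q], [-R, S]]               # equals -[[-P, Q], [R, -S]]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```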
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Numbers of the form $\sum_{n=0}^{N} 2^{a_n} 3^n$ Just out of curiosity.
Do the numbers of the form
\begin{align}
\sum_{n=0}^{N} 2^{a_n} 3^n \text{,}
\end{align}
with $a_n \ge 0$ and $N \ge 0$, cover all integers not divisible by 3?
These numbers can never be divisible by three.
It is also easy to discover integers with multiple representations (e.g., $4 = 2^2 3^0 = 2^0 3^0 + 2^0 3^1$).
I am however stuck on how to prove that they cover all integers not divisible by 3.
Second question: The numbers of the form
\begin{align}
\sum_{n=0}^{N} 5^{a_n} 3^n \text{,}
\end{align}
with $a_n \ge 0$ and $N \ge 0$, are numbers congruent to $\{1, 4, 5, 8\} \pmod{12}$.
These are the numbers not divisible by 3 and not $\{ 2, 3 \} \pmod{4}$.
Many other such forms may be considered.
What is the name of such numbers (numbers of such a form)?
| Erdős asked a similar question some time ago (about “3-smooth representations” of the natural numbers). See here for a discussion of both of your questions.
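(My check, not part of the linked discussion.) The first question can at least be tested by brute force: writing $n=2^{a_0}+3\cdot\text{rest}$ and recursing on rest,

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def representable(n):
    """Is n = sum_{k=0}^{N} 2^{a_k} 3^k for some N >= 0 and a_k >= 0?"""
    if n & (n - 1) == 0:            # n is a power of 2: the N = 0 case
        return True
    p = 1
    while p <= n - 3:               # try n = 2^a + 3 * rest with rest >= 1
        if (n - p) % 3 == 0 and representable((n - p) // 3):
            return True
        p *= 2
    return False

# every n <= 500 is representable exactly when 3 does not divide n
assert all(representable(n) == (n % 3 != 0) for n in range(1, 500))
```

The "only if" direction is easy (every such sum is $2^{a_0}$ plus a multiple of $3$, so is never divisible by $3$); the check above is consistent with the "if" direction holding in general.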
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Jacobian linearization of trigonometric functions If I have to linearize a nonlinear trigonometric system around the origin $(0,0)$:
$$\dot{x_1} = x_2$$
$$\dot{x_2} = \cos(x_1)$$
I can apply the small angle approximation to find the matrix A:
$$ A = \pmatrix{0&1\\1&0} $$
However, if I apply Jacobian linearization and take the partial derivatives of $\dot{x}$, I get:
$$ A = \pmatrix{0&1\\-\sin(x_1)&0}$$
Evaluated at the origin, I get:
$$ A = \pmatrix{0&1\\0&0} $$
Seeing as how the two different linearization techniques yield different results, is one more valid than the other?
| "Approximation" in our case means Taylor's theorem, i.e.
$$
f(x) = f(0) + Df(0)x
$$
where $f(x) := (x_2, \cos(x_1))$. This is where $Df(0)$, i.e. your second matrix appears. So this is a linearization in the normal sense. This type of linearization is actually useful when we want to determine stability of an equilibrium point.
The first approach I would call incoherent, especially since you leave $x_2$ untouched. This is not how linearization in analysis is understood. Linear maps have the property that they vanish at $0$. So what we do is always approximate
$$
x \mapsto f(x) - f(0)
$$
by a linear map, since the former vanishes at $0$ too. This is the idea behind Taylor's theorem. We just accept that this is actually an affine approximation of $f(x)$, since it simply has no major ramifications in practice.
But what you did is find a linear map that approximates $f(x)$ only. But since $f(0) \neq 0$ and $A0 = 0$, this might yield huge errors. To my knowledge (which is of course limited), the first approach has no application whatsoever, but I would love to be corrected if I am wrong.
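As a numerical illustration (not in the original answer), the Taylor linearization $f(0)+Df(0)x$ of $f(x_1,x_2)=(x_2,\cos x_1)$ has only second-order error near the origin:

```python
import math

def f(x1, x2):                         # the nonlinear right-hand side
    return (x2, math.cos(x1))

def taylor(x1, x2):
    # f(0,0) + Df(0,0) x  with  f(0,0) = (0, 1)  and  Df(0,0) = [[0, 1], [0, 0]]
    return (x2, 1.0)

for x1, x2 in [(0.01, 0.02), (-0.03, 0.005)]:
    fx, tx = f(x1, x2), taylor(x1, x2)
    err = max(abs(fx[0] - tx[0]), abs(fx[1] - tx[1]))
    assert err < 1e-3                  # error is second order in the state
```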
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing that the maximum value of $\sin x+\sin y\sin z$, where $x+y+z=\pi$, is the golden ratio
Find the maximum of
$$\sin x+\sin y\sin z$$
if $x+y+z=\pi$.
By using Lagrange multipliers, concluded that $y=z$, further plug $x=\pi-2y$, I've reduced the problem to single-variable expression.
$$\sin^2 y+\sin(2y)$$
Then by taking first derivative and using formulas for $\sin(\arctan x)$ and $\cos(\arctan x)$, finally we can obtain the maximum
$$\frac{\sqrt5+1}{2}$$
As one can see this is precisely the golden ratio $\phi$!
But this solution takes some time, so I'm interested in different no calculus solution of this problem, especially considering that it is related to the golden ratio.
| Substituting
$z=\pi-x-y$
into the objective and using the product-to-sum identity $\sin y\sin(x+y)=\tfrac12\left[\cos x-\cos(x+2y)\right]$, we get:
$\frac{\sqrt{5}}{2}\sin\left(\arctan\left(\frac{1}{2}\right)+x\right)-\frac{\cos(x+2y)}{2}$
Setting
$\arctan\left(\frac{1}{2}\right)+x=\frac{\pi}{2}$
and
$x+2y=\pi$
to make the $\sin$ term equal to $1$ and the $\cos$ term equal to $-1$, we get:
$x=\arctan\left(\frac{1}{3}\right)+\frac{\pi}{4}$
and
$y=\frac{3\pi}{8}-\frac{\arctan\left(\frac{1}{3}\right)}{2}$,
which, substituted into the initial function, give
$\frac{\sqrt{5}+1}{2}$,
the golden ratio.
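A coarse grid search (my addition, not part of the answer) confirms the maximum numerically:

```python
import math

phi = (1 + math.sqrt(5)) / 2
best, steps = 0.0, 400
for i in range(steps):
    for j in range(steps):
        x = math.pi * i / steps
        y = math.pi * j / steps
        z = math.pi - x - y            # enforce the constraint x + y + z = pi
        best = max(best, math.sin(x) + math.sin(y) * math.sin(z))

assert best <= phi + 1e-9              # phi is an upper bound ...
assert abs(best - phi) < 1e-3          # ... and it is attained (up to grid error)
```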
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Be $\tau_{1}$ and $\tau_{2}$ topologies over $\mathbb{N}$ Be $\tau_{1}$ and $\tau_{2}$ topologies over $\mathbb{N}$ defined by:
$\tau_{1}=\{\{m\in \mathbb{N}:m<n\}:n\in \mathbb{N}\}\cup \{\mathbb{N}\}$ and $\tau_{2}=\{A\subseteq \mathbb{N}: 0\in A\}\cup \{\emptyset\}$.
Identify these topologies and prove which one is finer.
| The topology consisting of all subsets containing a fixed point $x$ is called the particular/included point topology. In your case $x=0$. You can simply say that $\tau_1\subset \tau_2$ i.e. $\tau_2$ is strictly finer than $\tau_1$ since:
*
*For every nonempty $S\in\tau_1$ we have $0\in S\subseteq\Bbb N$ and thus $S\in\tau_2$, so $\tau_1\subseteq \tau_2$.
*$\{0,2\}\in\tau_2$ but does not belong to $\tau_1$. Hence $\tau_1\ne\tau_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3957907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve THIS ODE $ (2x-y\sin(2x))dx = (\sin^2x-2y)dy $ The following problem is from Mathematical Methods in the Physical Sciences, Ch. 8, miscellaneous problems.
$$
(2x-y\sin(2x))dx = (\sin^2x-2y)dy
$$
It isn't an exact equation; the only difference is a $-$ sign.
Because it is not an exact equation, I tried to rearrange it to produce a linear first-order equation, but that did not work.
I don't know what to do.
I assume there is an integrating factor or some method to make such a problem exact.
Also I tried to use Mathematica DSolve function to find a solution
DSolve[{y'[x] == (2 x - y[x] sin (2 x))/(sin^2 x - 2 y[x])}, y[x], x]
This what it produced
DSolve::deqn: Equation or list of equations expected instead of True in the first argument {True}.
DSolve[{True}, y[x], x]
thank you
| Recall we say a differential equation of type
$$N(x,y)dx+M(x,y)dy=0$$ is exact if $$\frac{\partial N}{\partial y}=\frac{\partial M}{\partial x}$$
So re-writing the above differential equation as
$$
(2x-y\sin(2x))dx - (\sin^2x-2y)dy
=0$$
we can see that $$\frac{\partial N}{\partial y}=-\sin(2x)=\frac{\partial M}{\partial x}$$
so it is in fact exact.
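One can confirm the exactness numerically via central finite differences (a quick check of mine; the sample points are arbitrary):

```python
import math

def N(x, y):                           # coefficient of dx
    return 2 * x - y * math.sin(2 * x)

def M(x, y):                           # coefficient of dy, after moving all to one side
    return -(math.sin(x) ** 2 - 2 * y)

h = 1e-6
for x, y in [(0.3, 0.7), (1.1, -0.4), (-0.8, 2.0)]:
    dN_dy = (N(x, y + h) - N(x, y - h)) / (2 * h)
    dM_dx = (M(x + h, y) - M(x - h, y)) / (2 * h)
    assert abs(dN_dy - dM_dx) < 1e-6   # both partials equal -sin(2x)
```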
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Interesting topologies on $\mathbb{R}$? It is evident in with respect to the usual topology, the sequence $0.9,0.99,0.999,0.9999,\ldots$ converges to $1$.
I am also aware that the same sequence does not converge to $1$ with respect to the lower limit topology.
My question is: Does there exist a topology on $\mathbb{R}$ with respect to which the aforementioned sequence converges to a real number that is NOT $1$?
| Take into account that a topology on a set defines its structure; without it, $\mathbb R$ is the same as any other set of the same cardinality. Take any real point $a\neq 1$ and define the distance in $\mathbb R$ as $d(x,y)=|x-y|$ if $x,y\not\in\{1,a\}$, $d(1,y)=d(y,1)=|a-y|$ and $d(a,y)=d(y,a)=|1-y|$ for $y\not\in\{1,a\}$, and $d(1,a)=d(a,1)=|1-a|$. It is clear that under this topology any sequence that converges to $1$ in the usual topology converges to $a$ in this one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Find the Maclaurin series of $\ln{(x+\sqrt{x^2+1})}$ with accuracy up to $o(x^{2n})$ I need to find the Maclaurin series of $\ln{(x+\sqrt{x^2+1})}$ with accuracy up to $o(x^{2n})$ and present the series as a sum of members with the same formula. Since we haven't studied integrals yet, I'd prefer suggestions without them. For the Maclaurin series, we studied the formula $f(x)=f(0)+\frac{f'(0)x}{1!}+\frac{f''(0)x^{2}}{2!}+...+\frac{f^{(n)}(0)x^{n}}{n!}+o(x^{n})$. From this formula, we derived $\ln{(1+x)}=\sum_{k=1}^{n}{\frac{(-1)^{k-1}x^{k}}{k}}+o(x^{n})$. So, trying to use one of these, I thought of two approaches:
*
*$\ln{(x(1+\frac{\sqrt{x^2+1}}{x}))}=\ln{x}+\ln{(1+\frac{\sqrt{x^2+1}}{x})}$ but then that root fraction is not suitable for the $\ln$ formula
*the derivative of $\ln{(x+\sqrt{x^2+1})}$ is $(1+x^{2})^{-\frac{1}{2}}$. But then I will have to find the n-th derivative for the basic formula, which I wouldn't be able to. How should I procede?
| You don't want to use integration, but I don't think you can avoid it entirely. First note that
$$f(x) = \ln(x+\sqrt{x^2+1}) = \mbox{arcsinh } x, $$
and as you note
$$f'(x) = (1+x^2)^{-1/2}.$$
The generalized binomial theorem
https://en.wikipedia.org/wiki/Binomial_theorem#Newton's_generalized_binomial_theorem
gives you
$$f'(x) = \sum_{n=0}^{\infty} \frac{ \left(-\frac{1}{2}\right)_n}{n!}x^{2n} = \sum_{n=0}^{\infty} \frac{(-1)^n(2n)!}{2^{2n}(n!)^2}x^{2n}.$$
Since you know how to differentiate polynomials, then it's just barely "integration" to un-differentiate this series and get your answer. Realize that any constant could be added to your answer and still differentiate to the above. So you can plug in $x=0$ at the end and work out that constant. (It'll be $0$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Let $G$ be a group that acts on $P(G)$ via conjugation.
Let $G$ be a group that acts on a set $X.$ Let $\alpha, \beta \in X$ be in the same orbit. If $G$ acts on $P(G)$ via conjugation, then $G_{\alpha}$ and $G_{\beta}$ are in the same orbit.
$P(G)$ is the power set of $G$ i.e the set of all subset of $G$
$G_{\alpha}$ = {$g\in G: \alpha .g=\alpha$}
In the same way, $G_{\beta}$ = {$g\in G: \beta .g=\beta$}
How can I prove this statement?
Recall that the orbit of $G_{\alpha}$ is equal to $\mathcal O_{G_{\alpha}} = \{g^{-1} h g \,|\, g \in G \text{ and } \alpha \cdot h = \alpha \}.$ To show $G_{\alpha}$ and $G_{\beta}$ are in the same orbit, then for some $ g \in G,$ we have that $G_{\alpha} \cdot g = G_{\beta}.$ Any hints?
| If $\alpha$ and $\beta$ are in the same orbit of some action $\cdot$ , then $G_\alpha$ and $G_\beta$ (stabilizers) are conjugate in $G$. In fact, $\beta\in O(\alpha)\Rightarrow\exists \tilde g\in G\mid \beta=\tilde g\cdot \alpha$; but then, by action properties:
\begin{alignat}{1}
G_\beta &= \{g\in G\mid g\cdot \beta=\beta\} \\
&= \{g\in G\mid g\cdot (\tilde g\cdot \alpha) =\tilde g\cdot \alpha\} \\
&= \{g\in G\mid (g\tilde g)\cdot \alpha =\tilde g\cdot \alpha\} \\
&= \{g\in G\mid \tilde g^{-1}\cdot((g\tilde g)\cdot \alpha) =\tilde g^{-1}\cdot(\tilde g\cdot \alpha)\} \\
&= \{g\in G\mid (\tilde g^{-1}(g\tilde g))\cdot \alpha =(\tilde g^{-1}\tilde g)\cdot \alpha\} \\
&= \{g\in G\mid (\tilde g^{-1}g\tilde g)\cdot \alpha =\alpha\} \\
&= \{\tilde gg'\tilde g^{-1}\in G\mid g'\cdot \alpha =\alpha\} \\
&= \tilde g\{g'\in G\mid g'\cdot \alpha =\alpha\}\tilde g^{-1} \\
&= \tilde gG_\alpha\tilde g^{-1} \\
\tag 1
\end{alignat}
So, $G_\alpha$ and $G_\beta$ are conjugate in $G$. This precisely means that they lie on the same orbit of the action of $G$ by conjugation on $\mathcal{P}(G)$, say $\star$ . In fact: $\tilde g\star G_\alpha=\tilde g G_\alpha\tilde g^{-1}\stackrel{(1)}{=}G_\beta$.
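A concrete instance (my illustration, not in the answer): $S_3$ acting on $\{0,1,2\}$ with $\alpha=0$, $\beta=2$; conjugating the stabilizer of $\alpha$ by $\tilde g$ gives exactly the stabilizer of $\beta$:

```python
from itertools import permutations

G = list(permutations(range(3)))                   # S_3 acting on {0, 1, 2}

def act(g, x):
    return g[x]

def compose(g, h):                                 # (g h)(x) = g(h(x))
    return tuple(g[h[i]] for i in range(3))

def inv(g):                                        # inverse permutation
    return tuple(sorted(range(3), key=lambda i: g[i]))

alpha, beta = 0, 2
g_tilde = next(g for g in G if act(g, alpha) == beta)   # beta = g~ . alpha

def stab(x):
    return {g for g in G if act(g, x) == x}

conj = {compose(compose(g_tilde, g), inv(g_tilde)) for g in stab(alpha)}
assert conj == stab(beta)                          # G_beta = g~ G_alpha g~^{-1}
```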
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solution to an inexact differential equation with the difference between partial derivatives not single variable How can I find the general solution to the equation $$\left(x^2+xy+\frac{y^2}{x}\right)dx+(x^2+xy-y)dy=0$$
Note that, by using exact differentials, I could reduce it to $(x+y)d(x+y)=yd\left(\frac{y}{x}\right)$, but could not solve it. The difference between $M_y-N_x=\frac{2y}{x}-x-y$, where $M=x^2+xy+\frac{y^2}{x}$, $N=x^2+xy-y$. Any hints? Thanks beforehand.
| You already reduced it to $(x+y) \,d(x+y) = y \,d(\frac{y}{x})$.
Now you can just use change of variable -
$\displaystyle x + y = u, \frac{y}{x} = v$
$ \displaystyle \implies x = \frac{u}{v+1}, y = \frac{uv}{v+1}$
So $d(x+y) = \frac{y}{(x+y)} \,d(\frac{y}{x})$ becomes
$\displaystyle du = \frac{v}{v+1} dv \,$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Precalculus in reverse? In real analysis, I am aware of James Propp's "Real Analysis in Reverse" which does "naive reverse mathematics", showing the equivalence of various theorems of analysis to the completeness property of the real line.
Recently I've been thinking about the various geometric results that are typically covered in a precalculus course, such as:
*
*The Pythagorean theorem
*The law of cosines
*The equivalence between algebraic and geometric dot product ($u\cdot v = \|u\|\|v\|\cos \theta$)
*The formula for projections: $\operatorname{proj}_u(v) = \frac{v\cdot u}{\|u\|^2}u$
*The sine and cosine angle addition formulas
*The geometric interpretation of complex multiplication ("add the angles and multiply the lengths")
*Determinant of a $2\times 2$ matrix as the signed area of the parallelogram spanned by its columns
...and so on. It seems to me that these results are all saying "basically the same thing" (whatever that means); in particular it's often possible to prove one of them using some of the others.
I am wondering if there is a textbook or paper that does something like "naive reverse mathematics of precalculus". What would the "base axioms" be in this setting? What is the relationship between these theorems? Why are they so closely related? Are there other "equivalent" theorems that are missing from this list? Are there any keywords I should be using in my search?
| There are many algebraic statements like the ones you list that are equivalent to the Pythagorean theorem.
That is essentially because the Pythagorean theorem is exactly what you need to show that Euclid's geometric plane can be modeled as $\mathbb{R}^2$.
Even more interesting is the fact that there are many geometric equivalences. Among those are
*
*The parallel postulate.
*The angles of a triangle sum to $\pi$.
*Similar triangles that are not congruent exist.
See
Is Pythagoras' Theorem a theorem?
which axiom(s) are behind the Pythagorean Theorem
https://www.cut-the-knot.org/triangle/pythpar/PTimpliesPP.shtml
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Derive Rodrigues’ formula for Laguerre polynomials
Derive Rodrigues’ formula for Laguerre polynomials
$$
L_n(x)=\frac{e^x}{n!}.\frac{d^n}{dx^n}(x^ne^{-x})
$$
The Rodrigues’ formula for Hermite polynomials can be obtained by taking $n^{th}$ order partial derivatives of its generating function
$$
g(x,t)=\sum_{n=0}^\infty H_n(x)\frac{t^n}{n!}=1+tH_1(x)+\frac{t^2}{2!}H_2(x)+\cdots\cdots\cdots+\frac{t^n}{n!}H_n(x)+\cdots\cdots\cdots\\
\frac{\partial^n}{\partial t^n}\Big(e^{2xt-t^2}\Big)=H_n(x)+\frac{(n+1)n(n-1)\cdots2}{(n+1)!}tH_{n+1}(x)+\cdots\\
H_n(x)=\Bigg[\frac{\partial^n}{\partial t^n}\Big(e^{2xt-t^2}\Big)\Bigg]_{t=0}=e^{x^2}\Bigg[\frac{\partial^n}{\partial t^n}\Big(e^{-(x-t)^2}\Big)\Bigg]_{t=0}\\
=(-1)^ne^{x^2}\Bigg[\frac{\partial^n}{\partial x^n}\Big(e^{-(x-t)^2}\Big)\Bigg]_{t=0}=(-1)^ne^{x^2}\Bigg[\frac{\partial^n}{\partial x^n}\Big(e^{-x^2}\Big)\Bigg]
$$
Generating function for laguerre polynomials is $g(x,t)=\dfrac{e^{-\frac{xt}{1-t}}}{1-t}=\sum_{n=0}^\infty L_n(x) t^n$
I do not think the same technique applies in the case of Laguerre polynomials. So how do I derive that for Laguerre polynomials ?
| Note that the Taylor-Maclaurin formula cannot be used directly (as in the case of Hermite polynomials), as there is some $n$ dependency inside the $n^{th}$ derivative. How can we get around this? ... Let us use a coefficient extractor
\begin{eqnarray*}
x^n= [u^0]: \frac{u^{-n}}{1-xu}.
\end{eqnarray*}
We have
\begin{eqnarray*}
\sum_{n=0}^{\infty} L_n(x) t^n &=&[u^0]: e^x \sum_{n=0}^{\infty} \left(\frac{t}{u} \right)^n \frac{1}{n!} \frac{d^n}{dx^n} \frac{e^{-x}}{1-xu} \\
&=&[u^0]: e^x \frac{e^{-(x+t/u)}}{1-u(x+t/u)} \\
&=& \frac{1}{1-t} [u^0]: \frac{e^{-t/u}}{1-\frac{ux}{1-t}}. \\
\end{eqnarray*}
Expand these two functions and observe that the central term is ... what we want
\begin{eqnarray*}
\left( \sum_{i=0}^{\infty} \frac{(-t/u)^i}{i!} \right) \left( \sum_{j=0}^{\infty} \left( \frac{ux}{1-t} \right)^j \right) =
\cdots+ \sum_{k=0}^{\infty} \frac{1}{k!} \left( \frac{-xt}{1-t} \right)^k + \cdots
\end{eqnarray*}
Now reverse engineer all this ... and your result follows.
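The final formula can also be verified with exact rational arithmetic: differentiate the polynomial part of $x^ne^{-x}$ $n$ times using $\frac{d}{dx}\big(p(x)e^{-x}\big)=(p'-p)e^{-x}$, and compare with the coefficients $\binom{n}{k}\frac{(-1)^k}{k!}$ read off from the generating function. This sketch (Python 3.8+) is my addition, not part of the derivation:

```python
from fractions import Fraction
from math import comb, factorial

def rodrigues(n):
    """Coefficients of (e^x / n!) d^n/dx^n (x^n e^{-x}), lowest degree first."""
    p = [Fraction(0)] * n + [Fraction(1)]                    # p(x) = x^n
    for _ in range(n):
        dp = [Fraction(k) * p[k] for k in range(1, len(p))]  # p'
        p = [a - b for a, b in zip(dp + [Fraction(0)], p)]   # p <- p' - p
    return [c / factorial(n) for c in p]

def closed_form(n):
    """L_n(x) = sum_k C(n,k) (-x)^k / k!, extracted from the generating function."""
    return [Fraction((-1) ** k * comb(n, k), factorial(k)) for k in range(n + 1)]

for n in range(8):
    assert rodrigues(n) == closed_form(n)
```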
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3958878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find all eigenvalues of $S$ Let $V$ be the vector space of all convergent real sequences, that is
$$V = \lbrace (a_1,a_2,a_3,\ldots) |\forall i \in \mathbb {N}: a_i \in\ \mathbb{R}\;\mathrm{and}\; \exists\! \lim_{n\to\infty}a_n \rbrace$$
We are given this linear transformation:
$$ S:V \rightarrow V,\; S(a_1,a_2,a_3,\ldots) = (a_2,a_3,a_4,\ldots)$$
Find all of the eigenvalues of $S$.
MY METHOD: In fact, $S$ "moves left". The linear transformation
deletes the first member in the sequence. I tried to find any
$\lambda$ $\in\ \mathbb{R}$, such that there is a sequence with at
least one $a_i \neq 0$ that will satisfy:
$S(a_1,a_2,a_3,a_4,\ldots) = (\lambda a_1, \lambda a_2, \lambda a_3, \lambda a_4,\ldots)$,
but without success to this moment
| The equation $Sa = \lambda a$, for $a = (a_i)_{i \in \mathbb N} \in V$, implies that
$$\lambda a_i = a_{i+1}, \quad \forall i \in \mathbb N.$$
Therefore every eigenvector is a scalar multiple of $(1, \lambda, \lambda^2, \ldots)$, and this sequence converges (i.e. lies in $V$) if and only if $-1 < \lambda \le 1$. Hence the eigenvalues of $S$ are exactly the $\lambda \in (-1, 1]$.
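(My addition.) The eigenvector relation itself is immediate to check on truncations of $(1,\lambda,\lambda^2,\ldots)$; the real content of the answer is the convergence requirement $-1<\lambda\le1$ for membership in $V$:

```python
def shift_is_scaling(lam, length=12, tol=1e-12):
    seq = [lam ** k for k in range(length)]        # candidate (1, lam, lam^2, ...)
    # shifting left multiplies the sequence by lam termwise
    return all(abs(seq[k + 1] - lam * seq[k]) < tol for k in range(length - 1))

for lam in [0.0, 0.5, -0.9, 1.0]:                  # all inside (-1, 1]
    assert shift_is_scaling(lam)
```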
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
First year french college. Is there a finite set of points of the plane A? Is there a finite set A of at least 3 points in the plane, such that no 3 points of A are aligned, and such that for every circle going through 3 or more points of A, its center is in A?
I believe that there can't be such a set. So I tried to show that for any n points of the plane you can't arrange them to form such a set A.
Based on the number n, I also tried a proof by mathematical induction to show that there can't be such a set A with n points.
I'm a French student in my first year of college. I'm sorry in advance if my English is a bit wacky.
edit: forgot that it's a set of at least 3 points sorry
| In the plane draw a circle and take three points on the circle (I think you can take any finite number of points, but 3 will do). Call these three points your set A. These 3 points are not aligned (not on the same line), and there is only one circle that goes through them, and by construction this circle lies in the plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Does the limit exist if a function approaches a limit where it is discontinuous??
If the function is discontinuous at $x=0$ with $f(0)=4.5$, and as $x$ approaches $0$ the value of the function nears $3$:
Does the limit in this case exist?
Imho, it does and equals $3$, but one of my friends disagreed.
I'd be grateful if someone could even hint at my error.
Thank you.
| The limit exists and is 3. As I recall from my first year of calculus (Analysis?): if a function $f$ is real-valued, then the limit of $f$ at $p$ is $L$ if and only if both the right-handed limit and left-handed limit of $f$ at $p$ exist and are equal to $L$. $\lim_{x\rightarrow0^+}f(x)=\lim_{x\rightarrow 0^-}f(x)=3$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why can ellipses be expressed as $y=Ax$? I am reading a linear algebra textbook. One question says that for vectors on the unit circle, $||x||=1$. I can understand this. However, it says the vectors $y=Ax$ on the ellipse will have $||A^{-1}y||=1$. Why is it an ellipse?
I only know the ellipse can be expressed in this way:
$\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1 $
| A partial answer: your formula for an ellipse holds if the (semi-minor and semi-major) axes of the ellipse are horizontal and vertical. More generally, the ellipses can point in any two perpendicular directions.
With that in mind, suppose that $\vec u,\vec v$ are orthogonal unit vectors. The component of $(x,y)$ along $\vec u = (u_1,u_2)$ is the dot-product $(u_1,u_2)\cdot (x,y) = \vec u \cdot \vec x$. We can produce the equation of an ellipse with axes in the direction of $u$ and $v$ by using the component of the vector $\vec x$ along $\vec u$ instead of the $x$-value and the component along $\vec v$ instead of the $y$-value. Doing this gives us the equation
$$
\frac{(\vec u \cdot \vec x)^2}{a^2} + \frac{(\vec v \cdot \vec x)^2}{b^2} = 1
$$
Now, let $M$ denote the matrix
$$
M = \pmatrix{u_1/a & u_2/a\\ v_1/b & v_2/b}.
$$
Verify that $M \vec x$ = $((\vec u\cdot \vec x)/a, (\vec v \cdot \vec x)/b)$. It follows that the equation of our ellipse is given by
$$
\|M \vec x\|^2 = 1 \implies \|M\vec x\| = 1.
$$
In other words, we see that every ellipse can be written in the form $\|M \vec x\| = 1$ for some matrix $M$.
The original question was why it is that $\|A^{-1}x\| = 1$ defines an ellipse for every (invertible) matrix $A$. So far, my answer only settles this in the case that the rows of $A^{-1}$ happen to be orthogonal to each other.
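A numerical check of the construction (my addition; the direction and semi-axes below are arbitrary choices): parametrizing the ellipse as $x=a\cos t\,\vec u+b\sin t\,\vec v$, the points satisfy $\|Mx\|=1$:

```python
import math

u = (math.cos(0.5), math.sin(0.5))     # orthogonal unit vectors
v = (-math.sin(0.5), math.cos(0.5))
a, b = 3.0, 2.0                        # semi-axes

def Mx(x):                             # M x = ((u.x)/a, (v.x)/b)
    return ((u[0] * x[0] + u[1] * x[1]) / a,
            (v[0] * x[0] + v[1] * x[1]) / b)

for k in range(12):
    t = 2 * math.pi * k / 12
    x = (a * math.cos(t) * u[0] + b * math.sin(t) * v[0],
         a * math.cos(t) * u[1] + b * math.sin(t) * v[1])
    m = Mx(x)
    assert abs(m[0] ** 2 + m[1] ** 2 - 1) < 1e-12  # the ellipse equation
    assert abs(math.hypot(m[0], m[1]) - 1) < 1e-12  # i.e. ||M x|| = 1
```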
One approach to answering this question is to combine our analysis above with the existence of the singular value decomposition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How did this shorter method of finding $x^r$ in equation come?
Q)What is coefficient of $x^{30}$ in $$\left(\frac{1-x^{11}}{1-x}\right)^3\left(\frac{1-x^{21}}{1-x}\right)$$
I did this question by expanding $\frac{1}{1-x}$ and multiplying, and got a large equation which in the end came to the correct solution of 1111. However, in the book the following steps are provided:
=Coefficient of $x^{30}$ in $(1-x^{11})^3(1-x^{21})(1-x)^{-4}$
=Coefficient of $x^{30}$ in $(1-3x^{11}+3x^{22})(1-x^{21})(1-x)^{-4}$
=Coefficient of $x^{30}$ in $(1-3x^{11}+3x^{22}-x^{21})(1-x)^{-4}$. Here (according to my understanding) the terms greater than $x^{30}$ are excluded, as only the terms lower than $x^{30}$ can give the desired result (expansion of $\frac{1}{1-x}=1+x+x^2+x^3+\ldots$). However, in the next step, it equates the above to:
$$\frac{33!}{30!3!} +(-3)\frac{22!}{19!3!}+(3)\frac{11!}{8!3!}+(-1)\frac{12!}{9!3!}$$which also provides the correct solution. How did this equation come?
| $(1-x)^{-n}=1+nx+\frac{n(n+1)}{2}x^2+\frac{n(n+1)(n+2)}{6}x^3+...$
Basically, the coefficient of $x^r$ in $(1-x)^{-n}$ is $\binom{n+r-1}{r}$, where $n$ is a natural number.
In your case $n=4$
What we want is:
(Coefficient of $x^{30}$ in $(1-x)^{-4}) -(3 \cdot $ Coefficient of $x^{19}$ in $(1-x)^{-4})+(3 \cdot$ Coefficient of $x^8$ in $(1-x)^{-4}) -($ Coefficient of $x^9$ in $(1-x)^{-4}$)
You can take it from here.
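Not part of the original answer, but both routes can be cross-checked by brute force; a quick Python sketch (the polynomial multiply is a plain convolution truncated at degree 30):

```python
from math import comb

# Expand (1 + x + ... + x^10)^3 * (1 + x + ... + x^20) up to degree 30
# and read off the coefficient of x^30 directly.
def poly_mul(p, q, cap):
    r = [0] * (cap + 1)
    for i, a in enumerate(p):
        if i > cap:
            break
        for j, b in enumerate(q):
            if i + j > cap:
                break
            r[i + j] += a * b
    return r

cap = 30
prod = [1]
for block in ([1] * 11, [1] * 11, [1] * 11, [1] * 21):
    prod = poly_mul(prod, block, cap)

brute = prod[30]

# The book's closed form: C(33,3) - 3*C(22,3) + 3*C(11,3) - C(12,3).
closed = comb(33, 3) - 3 * comb(22, 3) + 3 * comb(11, 3) - comb(12, 3)
```

Both quantities come out to 1111, matching the book.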
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A question about winding numbers and the Global Version of Cauchy's Theorem In his book, Introduction to Complex Analysis at a Graduate Level, Serge Lang starts the chapter on the Global Version of Cauchy's Theorem with the following discussion
However, I fail to follow what he means. He talks about how if a path winds around some point outside of $U$ then we will have an integral not equal to zero, but this can't be true: take the integral of $\frac{1}{z-5}$ along the unit circle. Furthermore, he mentions that functions such as $f(z)=\frac{1}{z - \alpha}$, where $\alpha$ is a point not in $U$, have an integral $\neq 0$ for closed paths in $U$, which again seems false (for the integral to be nonzero, doesn't the point need to be in $U$? For example, $f(z)=\frac{1}{z}$ along the unit circle has the value $2\pi i$, and the point $0$ and the closed path are both in the open set, say the disc of radius $2$ centered at the origin).
I think I am probably missing something. Is whats written in the discussion by Serge Lang correct? And if yes could someone please help me understand where I have gone wrong?
All help is really appreciated.
| Perhaps that you are missing that $U$ is the domain of $f$. So, in your example with $f(z)=\frac1z$, the natural domain of $f$ is $U=\Bbb C\setminus\{0\}$. And so the only point outside $U$ is $0$. And the integral of $f$ along $\gamma\colon[0,2\pi]\longrightarrow\Bbb C$ defined by $\gamma(t)=Re^{it}$ $(R>0$) is indeed $2\pi i$, which is different from $0$.
On the other hand, note that Lang wrote that if the path winds around some point outside of $U$, then we can find functions whose integral is not equal to $0$. He does not claim that the integral is always different from $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3959972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The uniform convergence of $\int_0^\infty \mathrm{e}^{-ax^2}x^{2n}\mathrm{d}x \quad w.r.t \ a$
For fixed $n\in \mathbb{N}$, study the uniform convergence of
$$\int_0^\infty \mathrm{e}^{-ax^2}x^{2n}\mathrm{d}x \quad w.r.t \ a\in(0,\infty)$$.
Actually I want to compute the integral by differentiating
$$
\int_0^\infty \mathrm{e}^{-ax^2}\mathrm{d}x=\dfrac{1}{2}\sqrt{\dfrac{\pi}{a}},
$$
but we need to prove $$\int_0^\infty \mathrm{e}^{-ax^2}x^{2n+2}\mathrm{d}x $$
converges uniformly w.r.t. $a>0$; then the procedure is legal. The Weierstrass, Dirichlet and Abel criteria all seem not to work.
Appreciate any help!
| Hint
$$I_n=\int_0^\infty {e}^{-ax^2}x^{2n}\,dx$$
$$x=\sqrt t \implies I_n= \frac{1}{2}\int_0^\infty e^{-a t} t^{n-\frac{1}{2}}\,dt=\frac{1}{2} a^{-(n+\frac{1}{2})} \Gamma \left(n+\frac{1}{2}\right)$$
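The closed form in the hint is easy to sanity-check numerically; a sketch (composite Simpson's rule in plain Python; the upper cutoff 10 is an arbitrary choice, harmless since the integrand decays like $e^{-ax^2}$):

```python
from math import exp, gamma, pi, sqrt

def I_closed(a, n):
    # (1/2) * a^{-(n + 1/2)} * Gamma(n + 1/2), as in the hint.
    return 0.5 * a ** (-(n + 0.5)) * gamma(n + 0.5)

def I_numeric(a, n, upper=10.0, steps=100_000):
    # Composite Simpson's rule for \int_0^upper e^{-a x^2} x^{2n} dx.
    h = upper / steps
    f = lambda x: exp(-a * x * x) * x ** (2 * n)
    s = f(0.0) + f(upper)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3
```

For example, $n=0$, $a=1$ recovers the classic Gaussian value $\frac{\sqrt\pi}{2}$.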
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $\mathcal B_1\subseteq\tau_2$ is an analytic basis for $\tau_1$, does it follow $\tau_1\subseteq\tau_2$? Let $E$ be a set, $\tau_i$ be a topology on $E$ and $\mathcal B_1\subseteq\tau_2$ be an analytic basis for $\tau_1$.
Are we able to conclude $\tau_1\subseteq\tau_2$?
Intuitively this should be trivially true, but how can we prove it? Please bear with me if this is rather simple, but I'm still trying to get familiar with general topology.
| Yes. This is trivial (he said after leaving the room for half an hour). $\tau_2$ is closed under unions (because it's a topology), so since everything in $\tau_1$ is a union of elements of $\tau_2$, we have $\tau_1\subseteq\tau_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The special function $P(s)=\int^\infty_0 \frac{\ln(x)dx}{1+x^s}$
Evaluate the integral
$$P(s)=\int^\infty_0 \frac{\ln(x) \,dx}{1+x^s}$$
I tried integration by parts, using $u=\ln(x),\ du=\frac{dx}{x},\ v=\frac{1}{1+x^s}$, but I did not manage to get anything.
I thought about using a power series, but I did not find one that converges on the entire domain.
Using the substitution $u=\ln(x)$, $du=\frac{dx}{x}$, we get $$\int^\infty_{-\infty} \frac{ue^{u}}{1+e^{su}}\,du ,$$
which doesn't seem useful either.
The previous similar question Integral of $\int^{\infty}_0 \frac{x^n}{x^s+1}dx$ gives
$$\int^\infty_0 \frac{x^n \,dx}{1+x^s}=\frac{\Gamma\left(\frac{n+1}{s}\right)\Gamma\left(1-\frac{n+1}{s}\right)}{s} .$$
Another possibly relevant question is Integral of $\int^{\infty}_0 \frac{e^{-x}}{x^s+1}\,dx$.
| The $p$-test implies that this integral diverges for $s \leq 1$, so we assume that $s > 1$.
Hint This integral is a standard application of the Residue Theorem. In this case, we can take the contours $\Gamma_R$ to be the boundaries of the sectors, centered at the origin, of radius $R$ and central angle $\frac{2 \pi}{s}$. (A convenient choice is to take one boundary line segment along the positive real axis and the other along the ray through $e^{2 \pi i / s}$.) Then, the contour contains a single pole, at $e^{\pi i / s}$. Proceeding as usual by rewriting the contour integral as a sum of three integrals, taking the limit as $R \to \infty$ (which eliminates one of the integrals), rearranging, and taking real and imaginary parts gives values both of the given integral,
$$\int_0^\infty \frac{\log x \,dx}{1 + x^s} ,$$
and, as a welcome bonus, the related integral,
$$\int_0^\infty \frac{\,dx}{1 + x^s} .$$
Carrying out the above procedure gives that the relevant residue is $$\operatorname{Res}\left(\frac{\log z}{1 + z^s}, z = e^{\pi i / s}\right) = -\frac{\pi}{s^2} \exp \left(\frac{s + 2}{2 s} \pi i\right)$$ and then that the integral has value $$\int_0^\infty \frac{\log x \,dx}{1 + x^s} = -\frac{\pi^2}{s^2} \cot \frac{\pi}{s} \csc \frac{\pi}{s} .$$
The above technique is essentially robjohn's approach in his answer to this question, which treats the special case $s = 3$. Ron Gordon's approach there, that is, using instead a keyhole contour, applies at least in the special case that $s$ is an integer (necessarily $\geq 2$). Marko Riedel's approach there is similar in spirit to J.G.'s answer to this question.
Remark This integral takes on special values where $\frac{\pi}{s}$ does, including at various rational numbers with small numerator and denominator. In particular for $s = 2$ the integral vanishes, which can be shown using a slick but easier argument.
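As a quick numerical check of the final formula (not part of the derivation): the substitution $x=e^u$ tames both endpoints, and the transformed integrand decays exponentially, so the window $[-50,50]$ below is an arbitrary but safe cutoff.

```python
from math import exp, pi, tan, sin

def closed_form(s):
    # -pi^2/s^2 * cot(pi/s) * csc(pi/s), the value derived above.
    return -(pi ** 2 / s ** 2) / tan(pi / s) / sin(pi / s)

def numeric(s, lo=-50.0, hi=50.0, steps=100_000):
    # x = e^u turns the integral into \int u e^u / (1 + e^{s u}) du;
    # plain trapezoid rule is plenty accurate for this smooth integrand.
    h = (hi - lo) / steps
    f = lambda u: u * exp(u) / (1.0 + exp(s * u))
    total = 0.5 * (f(lo) + f(hi))
    for k in range(1, steps):
        total += f(lo + k * h)
    return total * h
```

For $s=3$ this gives $\approx -0.7311 = -\frac{2\pi^2}{27}$, and for $s=2$ it vanishes, as the Remark below observes.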
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Prove that $\frac1{1+a}+\frac1{1+b}+\frac1{1+c}\leq\frac14(a+b+c+3)$ if $abc=1$ I found the following exercise in a problem book (with no solutions):
Given $a,b,c>0$ such that $abc=1$ prove that
$$\frac{1}{1+a}+\frac{1}{1+b}+\frac{1}{1+c}\leq\frac{a+b+c+3}{4}$$
I tried AM-GM for the fraction on the LHS but got stuck from there.
| Let $f(x) = \frac{1}{1 + x} - \frac{x+1}{4} + \frac{1}{2}\ln x$.
We have $f'(x) = \frac{(1-x)(x^2+x+2)}{4x(1+x)^2}$.
Thus, $f'(x) > 0$ on $(0, 1)$, and $f'(x) < 0$ on $(1, \infty)$.
Thus, $f(x)$ is strictly increasing on $(0, 1)$, and strictly decreasing on $(1, \infty)$.
Also $f(1) = 0$. Thus, $f(x) \le 0$ on $(0, \infty)$.
Thus, $f(a) + f(b) + f(c) \le 0$ which results in $\frac{1}{1+a} + \frac{1}{1+b} + \frac{1}{1+c}
\le \frac{a+b+c+3}{4}$ (using $\ln (abc) = 0$).
We are done.
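A quick randomized sanity check of the inequality under the constraint $abc=1$ (a sketch; the log-uniform sampling range is an arbitrary choice):

```python
import random

random.seed(0)

def lhs(a, b, c):
    return 1 / (1 + a) + 1 / (1 + b) + 1 / (1 + c)

def rhs(a, b, c):
    return (a + b + c + 3) / 4

violations = 0
for _ in range(10_000):
    # Sample a, b log-uniformly, then force abc = 1.
    a = 10 ** random.uniform(-3, 3)
    b = 10 ** random.uniform(-3, 3)
    c = 1 / (a * b)
    if lhs(a, b, c) > rhs(a, b, c) + 1e-12:
        violations += 1
```

No violations occur; equality holds only at $a=b=c=1$.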
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Round Table Combinatorics Problem I have an issue with two problems with round table.
I have $n$ families; each family has a father, a mother, a kid and a dog.
First problem: I need to seat the families so that every kid sits between his parents.
Second problem: I need to seat the families so that the two parents of each family sit together and no two dogs sit next to each other.
In the first problem I tried to stick the parents and the kid together like this:
I have a total of $2n$ places at the table, so to seat the parents with the kid between them I have $n!$ options (multiplied by $2$ because of the side the parents sit on) and $(n-1)!$ options for the dogs to sit.
$$n!\cdot 2\cdot (n-1)!$$
but I'm not really sure about it.
And in the second problem I thought to stick the parents together and the kid and the dog together, but this seems to be wrong because I need the dog to be alone: there is the possibility that a dog ends up between two couples of parents.
Thank you very much for your help!
| For the first problem, the factor $2$ applies to each family, so that would give $2^n$. Presumably the dogs need to go between the families, but dogs can be together so you are ordering $2n$ (families plus dogs). The result is $2^n(2n-1)!$
For the second, you have $n$ couples, $n$ kids, and $n$ dogs to seat. You again get a factor $2^n$ for the couples' order. Then attach a couple or kid to the right of each dog. You have $2n$ choices for the first dog, $2n-1$ for the second, and so on. This gives a factor $\frac {(2n)!}{n!}$. Now you have $2n$ groups to arrange around the circle, which you can do in $(2n-1)!$ ways, for a final result of $2^n\frac {(2n)!(2n-1)!}{n!}$
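The first formula can be verified by brute force for small $n$; a Python sketch (assuming, as the answer does, that seatings are counted up to rotation, so the fixed-seat count is divided by the number of seats $4n$):

```python
from itertools import permutations
from math import factorial

n = 2
people = [(role, i) for i in range(n) for role in "FMKD"]  # father, mother, kid, dog

def kid_between_parents(seating):
    # Each kid's two cyclic neighbours must be exactly its own parents.
    m = len(seating)
    for i in range(n):
        k = seating.index(("K", i))
        neighbours = {seating[(k - 1) % m], seating[(k + 1) % m]}
        if neighbours != {("F", i), ("M", i)}:
            return False
    return True

fixed_seat_count = sum(kid_between_parents(p) for p in permutations(people))
circular_count = fixed_seat_count // (4 * n)   # quotient out rotations
formula = 2 ** n * factorial(2 * n - 1)        # the answer's 2^n (2n-1)!
```

For $n=2$ both counts come out to $24$.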
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Efficient way to find the remainder when $2001+ 2002+ 2003+ … + 2015+ 2016$ is divided by $2017$? I can think of a couple of ways
*
*Notice that unit digit of the first $8$ terms can be added in the last $8$ terms to make them $2017$. Now add the first $8$ terms without their unit digits (i.e. $2000*8$) and find a remainder on that.
*Sum of AP series = $4017*8$, now find the remainder
But both would take a lot of computation.
Is there any more efficient way of doing it manually?
| Welcome to MSE:
$2001 \equiv -16 \mod (2017)$
$2002 \equiv -15 \mod (2017)$
$2003 \equiv -14 \mod (2017)$
$\vdots$
$2016 \equiv -1 \mod (2017)$
Summing these relations we get:
$2001+2002+2003+ \cdots +2016 \equiv-\frac{16(1+16)}2=-136 \mod(2017)\equiv 1881 \mod(2017)$
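The arithmetic is easy to confirm directly (both the straight sum and the negative-residue shortcut):

```python
total = sum(range(2001, 2017))        # 2001 + 2002 + ... + 2016 = 32136
remainder = total % 2017

# Same thing via negative residues: 2001 ≡ -16, ..., 2016 ≡ -1 (mod 2017),
# so the sum is ≡ -(1 + 2 + ... + 16) = -136 (mod 2017).
neg_sum = (-sum(range(1, 17))) % 2017
```

Both give $1881$.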
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Showing linear independence using matrices.
If $\mathbf{v}_1,...,\mathbf{v}_m \in F^n$ are written as rows of an $m\times n$ matrix $A$ and $B$ is the row-reduced echelon form of $A$, then $\{\mathbf{v}_1,...,\mathbf{v}_m\}$ is linearly independent if and only if $B$ has no all-zero rows
I thought that a good way of showing this may be to show that linear dependence implies row of zeroes in reduced row echelon form and row of zeroes in reduced row echelon form implies linear dependence.
So I start from assuming that $B$ has a row of zeroes. I'm able to show that each row of $B$ is a linear combination of the rows of $A$ by considering the elementary row operations which reduce $A$ to $B$, and so since the rows of $A$ are $\mathbf{v}_1,...,\mathbf{v}_m$, I have that each row of $B$ is a linear combination of $\mathbf{v}_1,...,\mathbf{v}_m$.
So if $B$ has a row of zeroes, then we have that the zero vector is a linear combination of $\mathbf{v}_1,...,\mathbf{v}_m$. However, my issue is that I am unsure whether this linear combination could possibly be $0\mathbf{v}_1+0\mathbf{v}_2+...+0\mathbf{v}_m = \mathbf{0}$ as in this case, having a row of zeroes wouldn't imply linear dependence of $\mathbf{v}_1,...,\mathbf{v}_m$. Also, I don't know how I would prove in the opposite direction: That if $\mathbf{v}_1,...,\mathbf{v}_m$ are linearly dependent then $B$ has a row of zeroes.
Can anyone help me with this? Is there a better way to approach this question?
| Nice problem :-)
This result is useful for solving problems in linear algebra. First we will give an example of the statement, and then a pointer to a proof.
Example: Suppose that we need to decide whether the set $$S=\left\{\begin{pmatrix} 2 \\ 0 \\ 0\end{pmatrix}, \begin{pmatrix} 0 \\ 3 \\ 0\end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 4\end{pmatrix} \right\}$$ is linearly independent.
I think it's clear that $S$ is linearly independent, but we will use the statement anyway.
Informally, your result says that if, after putting my vectors as the rows of a matrix and then row-reducing, no row is transformed into a row filled with zeros, then I can state that the set I used to build the rows was linearly independent.
Let $$A=\begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4\end{pmatrix}$$
using Gauss-Jordan's elimination we have $$A=\begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4\end{pmatrix}\sim \cdots \sim \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}=B $$
Since no row in $B$ is filled with zeros, by the statement we can conclude that the set is linearly independent.
Proof of the statement: read here; the linked answer addresses your question.
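As an illustration of the statement (not part of the quoted proof), the row reduction can be done mechanically with exact arithmetic; a plain-Python sketch using `Fraction`:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination over the rationals; returns the reduced form."""
    rows = [list(map(Fraction, r)) for r in rows]
    n_rows, n_cols = len(rows), len(rows[0])
    lead = 0
    for r in range(n_rows):
        if lead >= n_cols:
            break
        pivot = next((i for i in range(r, n_rows) if rows[i][lead] != 0), None)
        while pivot is None:                      # skip all-zero columns
            lead += 1
            if lead >= n_cols:
                return rows
            pivot = next((i for i in range(r, n_rows) if rows[i][lead] != 0), None)
        rows[r], rows[pivot] = rows[pivot], rows[r]
        inv = rows[r][lead]
        rows[r] = [v / inv for v in rows[r]]      # scale pivot row to 1
        for i in range(n_rows):
            if i != r:                            # clear the pivot column
                factor = rows[i][lead]
                rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        lead += 1
    return rows

B = rref([[2, 0, 0], [0, 3, 0], [0, 0, 4]])       # the example above
no_zero_row = all(any(v != 0 for v in row) for row in B)

C = rref([[1, 2], [2, 4]])                        # a dependent pair, for contrast
```

The example reduces to the identity (no zero rows, hence independent), while the dependent pair produces a zero row.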
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find $a$ so a quadratic expression describes a straight line (problem from a Swedish 12th grade ’Student Exam’ from 1931)
$x$ and $y$ are coordinates in an orthogonal coordinate system. If the constant $a$ is suitably chosen the equation
$$9x^2-3xy+ay^2+15x-16y-14=0$$
describes two straight lines. Find the value of $a$ and the equations for the two lines.
I tried to complete the square (using Mathematica) but the resulting expression was not helpful. Any idea where to start? TIA.
(N.B.: The problem is quoted from a Swedish 12th grade ’Student Exam’ from 1931. I find these problems rather fun to solve in the evenings.)
| You can solve it by polynomial coefficient identification by doing
$$
(a_1 x+b_1 y + c_1)(a_2 x+b_2 y+c_2)=9 x^2 - 3 x y + a y^2 + 15 x - 16 y - 14
$$
resulting in
$$
\left\{
\begin{array}{l}
a_1 a_2-9=0 \\
a_1 b_2+a_2 b_1+3=0 \\
a_1 c_2+a_2 c_1-15=0 \\
b_1 b_2-a=0 \\
b_1 c_2+b_2 c_1+16=0 \\
c_1 c_2+14=0 \\
\end{array}
\right.
$$
solving for $a_1, b_1, c_1, a_2, b_2, a$ we get (up to the overall scale factor $c_2$)
$$
\left(3x+y+7\right)\left(3x-2y-2\right) = 0,
$$
so $a=-2$, and the two lines are $3x+y+7=0$ and $3x-2y-2=0$.
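Expanding the candidate factorization confirms the coefficient identification; a sketch with a minimal bivariate-polynomial multiply (it shows the suitable constant is $a=-2$):

```python
# Polynomials as dicts {(i, j): coeff} for the monomial x^i y^j.
def mul(p, q):
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            r[key] = r.get(key, 0) + c1 * c2
    return {k: v for k, v in r.items() if v}     # drop cancelled terms

line1 = {(1, 0): 3, (0, 1): 1, (0, 0): 7}        # 3x + y + 7
line2 = {(1, 0): 3, (0, 1): -2, (0, 0): -2}      # 3x - 2y - 2
product = mul(line1, line2)

target = {(2, 0): 9, (1, 1): -3, (0, 2): -2,     # a = -2 in a y^2
          (1, 0): 15, (0, 1): -16, (0, 0): -14}
```

The product matches the original quadratic term by term.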
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3960886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
How to find the least in $E^{\circ}=\frac{5S^g}{162}+\frac{C^\circ}{50}+\frac{2\pi^2}{360R}\textrm{rad}$? The problem is as follows:
Let $S^{\circ}$, $C^{g}$ and $R\,\textrm{rad}$ be the measures of a positive angle in sexagesimal degrees, centesimal degrees and radians respectively, such that:
$$E^{\circ}=\frac{5S^g}{162}+\frac{C^\circ}{50}+\frac{2\pi^2}{360R}\textrm{rad}$$
Using this information find the least $E$.
The alternatives in my book are as follows:
$\begin{array}{ll}
1.&\textrm{6}\\
2.&\textrm{8}\\
3.&\textrm{10}\\
4.&\textrm{12}\\
\end{array}$
How exactly should I find the least angle here?
So far what I remember is that the proportions between each unit are the same in the sense of:
$\frac{S}{360}=\frac{C}{400}=\frac{R}{2\pi}$
Using this information you may find a relationship between the variables.
Plugging these into the above equation I'm getting:
However, I find it confusing that the degree symbols for $S$ and $C$ appear swapped; then I figured that what might be intended is that the letter represents the digits of the angle, while the attached degree symbol is contrary to what its own unit mentions.
For $\frac{5S^g}{162}$:
$\frac{5S^g}{162}\times\frac{360^\circ}{400^g}=\frac{1S}{36}^{\circ}$
For $\frac{C^\circ}{50}$:
Since;
$\frac{S}{360}=\frac{C}{400}$
$C=\frac{10}{9}S$
$\frac{C^\circ}{50}=\frac{10S^\circ}{9}\times \frac{1}{50}=\frac{1S^\circ}{45}$
For: $\frac{2\pi^2}{360R}\textrm{rad}$
Since this case $R$ matches with the degree symbol $\textrm{rad}$ then I would apply the equation:
$\frac{S^{\circ}}{180}=\frac{R\,\textrm{rad}}{\pi}$
$R \textrm{rad}=\frac{S^\circ \pi}{180}$
$\frac{2\pi^2}{360R}\textrm{rad}=\frac{2\pi^2 \times 180}{360S^\circ \pi}=\frac{\pi}{S^\circ}$
Then:
$E^{\circ}=\frac{5S^g}{162}+\frac{C^\circ}{50}+\frac{2\pi^2}{360R}\textrm{rad}$
$E^{\circ}=\frac{1S}{36}^{\circ}+\frac{1S^\circ}{45}+\frac{\pi}{S^\circ}$
$E^{\circ}=\frac{S^2+20\pi}{20S}$
Then I assume that this latter expression must be minimized, but should calculus be used for this?
If so:
$E^{\circ}\,'=\frac{1}{20}-\frac{\pi}{S^2}$
Equating this to zero:
$S=2\sqrt{5\pi}$
But I don't know if this is what was intended.
Can someone help me here? I'm stuck: why doesn't it match any of the given alternatives, or could it be that I made a mistake? Please help.
| While it is true that $R$ rad $=\dfrac {S^\circ \pi}{180^\circ} $, it is not true that $\dfrac {2\pi^2}{360R}$rad $= \dfrac {2\pi^2 \times 180^\circ}{360S^\circ\pi}$; the latter is actually equal to $\dfrac {2\pi^2}{360R \text{ rad}}$. To resolve this, we consider the numerical value of $R$ and the conversion of rad to degrees (sexagesimal) separately:
$$R = \frac {S \pi}{180}, \quad 1 \text{ rad }=\frac {180}{\pi}^\circ$$
Now we have:
$$\frac {2\pi^2}{360R} \text{ rad } = \frac {2\pi^2 \times 180}{360S\pi} \times \frac {180^\circ}{\pi}=\frac {180^\circ}{S}$$
$$E^\circ = \frac{5S^g}{162}+\frac{C^\circ}{50}+\frac{2\pi^2}{360R}\textrm{rad} = \frac {S^\circ}{36} + \frac {S^\circ}{45} + \frac {180^\circ }S = \left(\frac S{20}+\frac{180}S\right)^\circ$$
To minimize this we may use calculus, or better yet, the 2-term form of AM $\ge$ GM:
$$a+b\ge 2\sqrt{ab}$$
$$\frac S{20}+\frac{180}S\ge2\sqrt{\frac S{20}\times \frac {180}S} =6$$
Hence the minimum of $E^\circ$ is $6^\circ$ (which occurs at $S = 60$).
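A numeric check of the minimum (a sketch; the evaluation grid is an arbitrary choice):

```python
def E(S):
    # E in degrees as a function of S, from the answer: S/20 + 180/S.
    return S / 20 + 180 / S

# AM-GM equality case: S/20 = 180/S  =>  S^2 = 3600  =>  S = 60.
values = [E(k / 10) for k in range(10, 5000)]   # S from 1.0 to 499.9
```

The grid minimum is $6$, attained at $S=60$, matching alternative 1.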
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f$ is continuous at $c$ and $f ′(c) = 0$, then there exists an $h > 0$ such that $f$ is differentiable in the interval $(c – h, c + h)$. My book states the following: if $f$ is continuous at $c$ and $f ′(c) = 0$, then there exists an $h > 0$ such that $f$ is differentiable in the interval $(c – h, c + h)$. But I don't understand this. It is not as if $f'$ is given to be continuous; rather, $f$ is given to be continuous and differentiable at $x=c$. So how can we possibly comment on the existence of $f'$ in a neighborhood of $c$ too? Note that $f$ is defined on an open interval $I$ and $c$ belongs to $I$.
| Just start with the function $Q$ that’s the characteristic function of $\Bbb Q$, i.e., $Q(x)=1$ if $x$ is rational, zero otherwise.
Then take $f(x)=x^2Q(x)$, with $c=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why does knowing that the MLE of the mean = the arithmetic mean of the sample uniquely determine the Normal sampling distribution? Currently reading through E.T Jayne on Bayesian inference and in one chapter dedicated to the Normal distribution he argues that Gauss proved the following:
Given a sample from a population distribution with unknown mean, our MLE for the mean is the arithmetic mean of our sample if and only if the sampling distribution of the mean is Normal
I’m really struggling to see the intuition behind this, can anyone help out? Thanks!
| You're referring to an argument Jaynes attributes to Gauss in Sec. 7.3. The log-likelihood of $n+1$ observations $\{x_i|0\le i\le n\}$ when the estimator is unbiased is of the form $\sum_i\ln f(x_i|\theta)$, which we want to maximise when $\theta=\bar{x}$.
Jaynes actually adds an assumption, which as @paulinho notes is inapplicable for the Bernoulli distribution, namely that the likelihood depends only on the $\theta-x_i$. If this seems like sleight of hand, bear in mind the point of his Chapter 7 is only to motivate why sampling errors would be Gaussian. (In particular, sampling means being approximately Gaussian for large samples is a completely unrelated issue!) It is not an "everything is Gaussian" argument. (In fact, Sec. 7.12 explores why sometimes even errors would be non-Gaussian.) The assumption we're adding is reasonable for sampling errors.
Compare $\sum_i(\bar{x}-x_i)=0$ to $\left.\sum_i\frac1f\frac{\partial f}{\partial\theta}\right|_{\theta=\bar{x}}=0$. For $\frac1f\frac{\partial f}{\partial\theta}$ to be proportional to $\theta-x$ (which is equivalent to $\ln f$ being quadratic in $\theta-x$ and peaked at $x=\theta$, making $\theta$ the mode of a Gaussian distribution) is clearly sufficient, and the case $x_i=-\tfrac1nx_n$ for all $i<n$, when $n\ge1$, proves necessity.
As for an intuition rather than a proof, note that if $\frac1f\frac{\partial f}{\partial\theta}$ is nonlinear you should be able to tweak initially constant $x_i$, without changing $\bar{x}$, so that $\frac1f\frac{\partial f}{\partial\theta}$ changes in value. The above example shows you can; see Jaynes for the full proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Definition of normed and inner product space I was reading some Wikipedia pages about Normed Vector spaces and Inner product spaces and, in the definitions, they always talk about vector spaces over either $\Bbb R$ or $\Bbb C$.
Is this because most of the useful normed and inner product spaces are over $\Bbb R$ or $\Bbb C$ or is those spaces only defined for vector spaces over those specific fields?
Edit: After debating this topic in the comments of this post I want to rephrase my question:
Let $V$ be a vector space over a field $\mathbb F$. What condition should $\Bbb F$ verify if we want $V$ to be able to be an inner product space? How about a normed vector space?
| I believe it works over any normed field (at least for normed spaces; for inner product spaces I'm not sure, since you'd need some generalisation of complex conjugation). A normed field $k$ is a field equipped with a norm $||\cdot||: k\to \mathbb{R}_{\ge0}$ such that
*
*$||x||=0\Leftrightarrow x=0$
*$||a+b|| \le ||a|| + ||b||$
*$||a\cdot b|| = ||a||\cdot||b||$
If your field $k$ has a discrete valuation $\nu$, then you can build a norm by defining $||x||:=\exp(-a\nu(x))$ for any positive $a$...
In any case, I am sure that Bourbaki will provide you with the most general definition.
And if you want to relax the condition that the norm map to $\mathbb{R}_{\ge0}$, I think there is also a way to do that, and just have it map to some kind of totally ordered semiring...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Nice geometry question to prove tangency Let $\triangle ABC$ be a scalene triangle with circumcircle $\omega$. The tangents to $\omega$ at $B$ and $C$ meet at a point $P$. Let $AP$ meet $BC$ at a point $K$ and let $M$ be the midpoint of $BC$. Let $X$ be a point on $AC$ such that $AB$ is tangent to the circumcircle of $\triangle BXC$. Let $BX$ meet $AM$ at a point $T$. Let the line through $C$ and parallel to $AM$ meet $TK$ at a point $V$.
Prove that $AV$ is tangent to $\omega$.
I tried the question with $\sqrt{bc}$ inversion with reflection over angle bisector. This approach does not seem to help much. Please provide hint?
| First, $BX$ is parallel to the tangent to $\omega$ at $A$. So if $L=BX\cap AK$, we have $$-1=A(B,C;AK\cap\omega,A)=(B,X;L,\infty_{BX}).$$ So $L$ is the midpoint of $BX$, hence $L$ lies on $MN$, where $N$ is the midpoint of $AB$. Then $$-1=T(M,L;N,TK\cap MN)=(A,B;N,TK\cap AB),$$ so $TK\parallel AB$. Reflect $A$ over $M$ to $D$, so $CD\parallel AB\parallel TK$. Then $$-1=C(A,D;M,\infty_{AM})=(CA\cap TK,\infty_{TK};K,V)=(AC,AB;AK,AV).$$ This implies the result, as $(AC,AB;AK,AA)=-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Central limit theorem for weighted random variable Let $(X_n)_n$ be a sequence of i.i.d random variable, $(x_n)_n$ be sequence of $\mathbb{R}^*.$ Let $y^2_n=\sum_{k=1}^nx^2_k.$
Suppose that $E[X_1]=0,E[X_1^2]=1,x_n=o(y_n),y_n \to +\infty.$ Prove that the sequence of weighted random variables $(x_nX_n)_n$ fulfills the central limit theorem: $$\frac{1}{y_n}\sum_{k=1}^nx_kX_k \implies N(0;1).$$
Remark (optional): more generally, the following holds: $E[X_1^2]<+\infty$ if and only if there exists a sequence of real numbers $(w_n)_n$ such that $\frac{1}{y_n}\sum_{k=1}^nx_kX_k-w_n$ converges in distribution to an arbitrary random variable $Y.$ In this case, $Y$ is normally distributed.
A way to prove that the CLT is fulfilled, is to prove that Lindeberg condition holds. So let $\epsilon>0.$
$$\frac{1}{y_n^2}\sum_{k=1}^nx_k^2E[X_k^2 1_{|x_kX_k|>\epsilon y_n}]=\frac{1}{y_n^2}\sum_{k=1}^nx_k^2E[X_1^2 1_{|x_kX_1|>\epsilon y_n}],$$ but I can't see how to continue from here; in particular, how can I remove $x_k$ from $1_{|x_kX_1|>\epsilon y_n}$?
Any suggestions are welcomed.
| Part 1:
Counterexample.
Consider $X_k$ -i.i.d., $X_k \sim U[-1,1]$, $x_1 = 1$, $x_k = \frac1{2^{k+2}}$, $k \ge 2$.
$$S_n = \frac{ X_1 + \eta_n}{ \sqrt{1 + \sum_{k=2}^n \frac1{4^{k+2}} }}$$
where $|\eta_n| = |\sum_{k = 2}^n \frac1{2^{k+2}} X_k| \le \sum_{k = 2}^n \frac1{2^{k+2}} \le \frac12$.
As $\eta_n = \sum_{k = 2}^n \frac1{2^{k+2}} X_k $ we have $\exists \eta:$
$\eta_n \to \eta$ a.s. (Kolmogorov's two-series theorem). As $|\eta_n| \le \frac12$ we have $|\eta| \le \frac12$.
Put $c = \sqrt{1 + \sum_{k=2}^{\infty} \frac1{4^{k+2}} }$. Hence
$$S_n \to \frac{X_1 + \eta}{c} $$ a.s. But $|\frac{X_1 + \eta}{c} | \le \frac{2}c$ and hence
$$\lim_n S_n = \frac{X_1 + \eta}{c} \ne N(0,1).$$
The claim is false. Hence we need some stronger assumption, and the assumption $\max_{1 \le k \le n}|x_k| = o(y_n)$ is sufficient, as is shown below.
Part 2: Suppose that we don't have the condition "$X_n$ are i.i.d."; in this case the problem is even more interesting.
Claim: instead of the condition $x_n = o(y_n)$ we need the stronger condition $\max_{1 \le k \le n}|x_k| = o(y_n)$, otherwise there's no convergence to $N(0,1)$. Let us prove it.
Theorem (from Probability 1 by A.N. Shiryaev): if $\xi_{n1}, \ldots, \xi_{nn}$ is a sequence of independent r.v., $E \xi_{nk}=0$, $DS_n = 1$, where $S_n = \xi_{n1} + \ldots + \xi_{nn}$ then $S_n \to N(0,1)$ iff $\max_{1 \le k \le n}E\xi_{kn}^2 \to 0$, $n \to \infty$.
Put $\xi_{nk} = \frac{x_k X_k}{y_n}$. We have $E \xi_{nk}=0$, $DS_n = 1$. So $S_n \to N(0,1)$ iff
$$\max_{1 \le k \le n}E\xi_{kn}^2 \to 0, n \to \infty.$$
As $\max_{1 \le k \le n}E\xi_{kn}^2 = \max_{1\le k\le n}\frac{x_k^2}{y_n^2}$ we have that $S_n \to N(0,1)$ iff $\max_{1 \le k \le n}|x_k| = o(y_n)$.
For example, we have convergence to $N(0,1)$ for $x_n$ such that $|x_n|$ is nondecreasing, but not in general case.
Conclusion: if $X_i$ are not necessarily i.i.d., then the condition $\max_{1 \le k \le n}|x_k| = o(y_n)$ is necessary and sufficient to guarantee the convergence to $N(0,1)$ for all independent $X_i$ with $EX_i = 0$ and $DX_i = 1$.
Let us notice also, that if $X_i \sim N(0,1)$ then $S_n = N(0,1)$ for any $x_k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What's wrong with this proof of $3=0$ starting from $x^2+x+1=0$? What am I missing?
$$ x^2 + x + 1 = 0$$
Then
$$ x^2 + x = -1$$
$$ x(x+1) = -1$$
$$ x(-x^2) = -1$$
$$x^3 = 1 $$
$$x = 1 $$
but
$$(1)^2 + (1) + 1 = 3 $$
So
$$ 3 = 0$$
| You start with a solution to the equation $x^2 + x + 1 = 0$ and correctly show that it is also a solution to the equation $x^3 = 1$. You then observe that the only real solution to the second equation is $1$ (this is true).
You then conclude that $x = 1$ is a solution to the first equation (this is not true, there may be no real solutions to your original equation, as is in fact the case).
Analogy: if you started with $x^2 = -1$ and squared both sides, you would get $x^4 = 1$, but you can't conclude that $x = 1$.
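The resolution is also visible numerically: the actual roots of $x^2+x+1$ are the two non-real cube roots of unity, which satisfy $x^3=1$ without being $1$. A quick `cmath` sketch:

```python
import cmath

# Primitive cube root of unity.
w = cmath.exp(2j * cmath.pi / 3)

original = w ** 2 + w + 1   # ≈ 0: w really solves x^2 + x + 1 = 0
cube = w ** 3               # ≈ 1: so w also solves x^3 = 1 ...
# ... yet w != 1, which is exactly where the "proof" of 3 = 0 breaks down.
```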
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3961981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 3
} |
Show that $\left(\sum_{n=1}^{\infty} a_n^q \right)^{1/q}$ is a decreasing function of $q$ for $q>0$
Let $\{a_n\}$ be a sequence of non-negative real numbers such that $\sum_{n=1}^{\infty} a_n^q$ is convergent. Then show that $\left(\sum_{n=1}^{\infty} a_n^q \right)^{1/q}$ is a decreasing function of $q$ for $q>0$
Since it is given that $\sum_{n = 1}^{\infty} a_n^q$ is convergent, we have $\sum_{n=1}^{\infty} a_n^q=c$ for some $c>0$, and then $\left(\sum_{n = 1}^{\infty} a_n^q \right)^{1/q}=c^{1/q}$.
Now if $c>1$ then $c^{1/q}$ is a decreasing function of $q$, but if $0<c<1$ then $c^{1/q}$ is an increasing function of $q$.
| As D F said in the comment,
$$\sum_{j = 1}^\infty a_j^q = c(q),$$
so your argument doesn't work. For $p < q$, we have
$$\frac{|a_j|}{\|a\|_{q}} \le 1, \quad \text{for }~\|a\|_{q} = \left(\sum_{j = 1}^\infty a_j^q\right)^{1/q}.$$
Therefore,
$$\left|\frac{a_j}{\|a\|_q}\right|^q \le \left|\frac{a_j}{\|a\|_q}\right|^p,$$
and you conclude the statement by summation.
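A concrete illustration of the monotonicity with a geometric sequence $a_k = 2^{-k}$ (a sketch; truncating the sum at 200 terms makes the tail negligible):

```python
def lq_norm(q, terms=200):
    # ||a||_q = (sum_k a_k^q)^(1/q) for a_k = 2^{-k}, k = 1..terms.
    return sum(2 ** (-k * q) for k in range(1, terms + 1)) ** (1 / q)

qs = [0.5, 1, 2, 3, 10]
norms = [lq_norm(q) for q in qs]
```

The computed norms decrease strictly as $q$ grows, as the answer proves in general.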
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A relationship between $a$ and $b$ related to algorithm Rock(1)-Scissor(0)-Paper(2)
This is related to algorithm Rock(1)-Scissor(0)-Paper(2)
I need to find a relationship between $a$ and $b$ so that
$$\begin{matrix}
a & b \\
1 & 0\\
0 & 2\\
2 & 1
\end{matrix}$$
is always true and
$$\begin{matrix}
a & b \\
1 & 2\\
0 & 1\\
2 & 0
\end{matrix}$$
is always false. I read about this in a book involving binary and computers. I'm quite sure that we must do some analysis among
$$\begin{matrix}
01 & 00 \\
00 & 10
\end{matrix}$$
and
$$\begin{matrix}
00 & 10 \\
10 & 01
\end{matrix}$$
and
$$\begin{matrix}
10 & 01 \\
01 & 00
\end{matrix}$$
What should I do next? I would also like to know more about this topic. Thanks a lot!
Edit: another solution is
$$a- b+ 2\equiv 0\mod 3$$
|
I want some new observations for this problem like system of modulo-equations
This answer shows three systems.
Let us define $f(a,b):=pa+qa^2+rb+sb^2$ where $p,q,r,s$ are integers. (This is because $x^3\equiv x\pmod 3$ always holds.)
We want $f(a,b)$ to satisfy
$$f(1,0)\equiv f(0,2)\equiv f(2,1)\pmod 3,$$
i.e.
$$p+q\equiv -r+s\equiv -p+q+r+s\pmod 3,$$
i.e.
$$r\equiv -p+q\pmod 3,\qquad s\equiv -q\pmod 3$$
So, $f(a,b)$ can be written as
$$f(a,b)\equiv pa+qa^2+(-p+q)b-qb^2\pmod 3$$
*
*For $(p,q)\equiv (0,1)\pmod 3$, the only solutions of $f(a,b)\equiv a^2+b-b^2\equiv 1\pmod 3$ are $(a,b)\equiv (1,0),(0,2),(2,1),(1,1),(2,0)\pmod 3$.
*For $(p,q)\equiv (1,1)\pmod 3$, the only solutions of $f(a,b)\equiv a+a^2-b^2\equiv 2\pmod 3$ are $(a,b)\equiv (1,0),(0,2),(2,1),(0,1),(2,2)\pmod 3$.
*For $(p,q)\equiv (1,2)\pmod 3$, the only solutions of $f(a,b)\equiv a-a^2+b+b^2\equiv 0\pmod 3$ are $(a,b)\equiv (1,0),(0,2),(2,1),(0,0),(1,2)\pmod 3$.
From these observations, we get the following three systems whose solutions are $(a,b)=(1,0),(0,2),(2,1)$.
(1)$$\begin{cases}a^2+b-b^2\equiv 1\pmod 3
\\\\a+a^2-b^2\equiv 2\pmod 3\end{cases}$$
(2)$$\begin{cases}a^2+b-b^2\equiv 1\pmod 3
\\\\a-a^2+b+b^2\equiv 0\pmod 3\end{cases}$$
(3)$$\begin{cases}a+a^2-b^2\equiv 2\pmod 3
\\\\a-a^2+b+b^2\equiv 0\pmod 3\end{cases}$$
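All three systems can be checked exhaustively over $(\mathbb{Z}/3)^2$; a short sketch:

```python
from itertools import product

want = {(1, 0), (0, 2), (2, 1)}     # the pairs that must satisfy the system
forbid = {(1, 2), (0, 1), (2, 0)}   # the pairs that must not

systems = [
    # (1): a^2 + b - b^2 ≡ 1  and  a + a^2 - b^2 ≡ 2  (mod 3)
    lambda a, b: (a*a + b - b*b) % 3 == 1 and (a + a*a - b*b) % 3 == 2,
    # (2): a^2 + b - b^2 ≡ 1  and  a - a^2 + b + b^2 ≡ 0  (mod 3)
    lambda a, b: (a*a + b - b*b) % 3 == 1 and (a - a*a + b + b*b) % 3 == 0,
    # (3): a + a^2 - b^2 ≡ 2  and  a - a^2 + b + b^2 ≡ 0  (mod 3)
    lambda a, b: (a + a*a - b*b) % 3 == 2 and (a - a*a + b + b*b) % 3 == 0,
]

solutions = [
    {(a, b) for a, b in product(range(3), repeat=2) if ok(a, b)}
    for ok in systems
]
```

Each system's solution set is exactly $\{(1,0),(0,2),(2,1)\}$, as claimed.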
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |