Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Proof about $z\cot z=1-2\sum_{k\ge1}z^2/(k^2\pi^2-z^2)$ In Concrete Mathematics, it is said that
$$z\cot z=1-2\sum_{k\ge1}\frac{z^2}{k^2\pi^2-z^2}\tag1$$
and proved in EXERCISE 6.73
$$z\cot z=\frac z{2^n}\cot\frac z{2^n}-\frac z{2^n}\tan\frac z{2^n}+\sum_{k=1}^{2^{n-1}-1}\frac z{2^n}\left(\cot\frac{z+k\pi}{2^n}+\cot\frac{z-k\pi}{2^n}\right)$$
The trigonometric identity is not hard, but I cannot understand the rest:
It can be shown that term-by-term passage to the limit is justified, hence equation (1) is valid.
How can we conclude that? Thanks for help!
| This identity is also proven in this answer, but the limit of the trigonometric identity is a cute trick, too.
Concrete Mathematics claim:
For the limit claimed in Concrete Mathematics, we need a few things.
First, by inspecting the graph of $\frac{1-x\cot(x)}{x^2}$ for $-\frac{3\pi}{4}\le x\le\frac{3\pi}{4}$, we have
$$
\left|\frac1x-\cot(x)\right|\le|x|\tag{1}
$$
Next, the Mean Value Theorem says
$$
\begin{align}
|\cot(\delta+x)+\cot(\delta-x)|
&=|\cot(x+\delta)-\cot(x-\delta)|\\
&\le2\delta\sup_{[x-\delta,x+\delta]}\csc^2(\xi)\\
&\le\color{#C00000}{8\delta\,\csc^2(x)}\\
&\le\color{#C00000}{2\pi^2\delta/x^2}\tag{2}
\end{align}
$$
if $\color{#C00000}{2\delta\le|x|\le\frac{\pi}{2}}$.
Finally, note that since $0\le k< 2^{n-1}$, $0\le\frac{k\pi}{2^n}<\frac{\pi}{2}$
Using $(1)$, we get
$$
\begin{align}
&\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)-\left(\frac{z}{z+k\pi}+\frac{z}{z-k\pi}\right)\right|\\
&\le2\left|\frac{z}{2^n}\right|\frac{|z|+k\pi}{2^n}\tag{3}
\end{align}
$$
Using $(2)$, we get, for $2|z|\le k\pi$,
$$
\begin{align}
\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)\right|
&\le2\pi^2\left|\frac{z^2}{2^{2n}}\right|\left(\frac{2^n}{k\pi}\right)^2\\
&\le2\pi^2\left(\frac{z}{k\pi}\right)^2\tag{4}
\end{align}
$$
Estimate $(3)$ is used to control the difference between the series for small $k$, and $(4)$ to control the remainder in the sum of the cotangents for large $k$.
Pick an $\epsilon>0$, and find $m$ large enough so that $2|z|\le m\pi$ and
$$
\sum_{k=m}^\infty\frac{1}{k^2}\le\epsilon\tag{5}
$$
Then we have the following estimate for the tail of the sum
$$
\sum_{k=m}^\infty\frac{z^2}{k^2\pi^2-z^2}\le\frac43z^2\epsilon\tag{6}
$$
Combining $(4)$ and $(5)$ yields
$$
\sum_{k=m}^{2^{n-1}-1}\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)\right|\le2z^2\epsilon\tag{7}
$$
Summing $(3)$ gives
$$
\begin{align}
&\sum_{k=1}^{m-1}\left|\frac{z}{2^n}\left(\cot\left(\frac{z+k\pi}{2^n}\right)+\cot\left(\frac{z-k\pi}{2^n}\right)\right)-\left(\frac{z}{z+k\pi}+\frac{z}{z-k\pi}\right)\right|\\
&\le2\left|\frac{z}{2^n}\right|\frac{m|z|+m^2\pi/2}{2^n}\tag{8}
\end{align}
$$
Just choose $n$ big enough so that $(8)$ and $\displaystyle\left|\frac z{2^n}\cot\frac z{2^n}-\frac z{2^n}\tan\frac z{2^n}-1\right|$ are each less than $\epsilon$ and we get that the term-by-term absolute difference is less than
$$
\left(\frac{10}{3}z^2+2\right)\epsilon\tag{9}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
$\gcd(n!+1,(n+1)!)$ The recent post didn't really provide sufficient help. It was too vague, most of it went over my head.
Anyway, I'm trying to find the $\gcd(n!+1,(n+1)!)$.
First I let $d=ab\mid(n!+1)$ and $d=ab\mid(n+1)n!$ where $d=ab$ is the GCD.
From $ab\mid(n+1)n!$ I get $a\mid(n+1)$ and $b|n!$.
Because $b\mid n!$ and $ab\mid(n!+1)$, $b$ must be 1.
Consequently, $a\mid(n!+1)$ and $a\mid(n+1)$.
So narrowing down options for $a$ should get me an answer. At this point I've tried to somehow bring it around and relate it to Wilson's theorem as this problem is from that section of my textbook, but I seem to be missing something. This is part of independent study, though help of any kind is appreciated.
| I think the previous posts have carefully explained why the gcd is $1$ if $n+1$ is composite. It comes down to this: if $q$ is a prime that divides $(n+1)!$ and $n+1$ is composite, then $q \lt n+1$, and therefore $q \le n$. But then $q$ divides $n!$, and therefore $q$ cannot divide $n!+1$.
You have shown that any common divisor of $n!+1$ and $(n+1)!$ must divide $n+1$.
Suppose now that $n+1$ is prime, say $n+1=p$. Then by Wilson's Theorem, $(p-1)!\equiv -1 \pmod p$. This says that $p$ divides $(p-1)!+1$, meaning that $n+1$ divides $n!+1$.
It follows that if $n+1$ is prime, then $n+1$ is a common divisor of $n!+1$ and $(n+1)!$. It is the greatest common divisor, since all common divisors must divide $n+1$, and nothing bigger than $n+1$ can divide $n+1$.
We conclude that $\gcd(n!+1,(n+1)!)=1$ if $n+1$ is composite, and $\gcd(n!+1,(n+1)!)=n+1$ if $n+1$ is prime.
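The dichotomy is easy to check numerically; a small Python sketch (not part of the original answer) verifying it for the first few $n$:

```python
from math import gcd, factorial

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

for n in range(1, 15):
    g = gcd(factorial(n) + 1, factorial(n + 1))
    # expected value from the conclusion above
    expected = n + 1 if is_prime(n + 1) else 1
    print(n, g, expected)
    assert g == expected
```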
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Distribution of the product of an exponential and a uniform distribution I'm trying to show that $U(X+Y) = X$ in distribution, where $X$ and $Y$ are independent $\exp(\lambda)$ distributed and $U$ is uniformly distributed on $(0,1)$, independent of $X+Y$.
I've been able to show that $X+Y$ has a $\Gamma(2, \lambda)$ distribution, but how do I calculate the distribution of this product?
To clarify: The answer by Sasha works, but I was looking for a solution that does something like integrate over a suitable area.
| One way to show equality in distribution is to establish point-wise equality of characteristic functions:
$$
\phi_X(t) = \mathbb{E}\left(\mathrm{e}^{i t X} \right) = \frac{\lambda}{\lambda- i t}
$$
For the left-hand-side:
$$
\mathbb{E}\left(\mathrm{e}^{i t U (X+Y)} \right) \stackrel{\text{tower law}}{=} \mathbb{E}\left( \mathbb{E}\left( \mathrm{e}^{i t U (X+Y)} | U\right) \right) \stackrel{\text{cond. indep.}}{=} \mathbb{E}\left( \mathbb{E}\left( \mathrm{e}^{i t U X} | U\right) \mathbb{E}\left( \mathrm{e}^{i t U Y} | U\right) \right) = \mathbb{E}\left( \left(\frac{\lambda}{\lambda - i U t}\right)^2\right) = \int_0^1 \left(\frac{\lambda}{\lambda - i u t}\right)^2 \mathrm{d} u = \left. \frac{\lambda}{i t} \frac{\lambda}{\lambda - i t u} \right|_0^1 = \frac{\lambda}{\lambda - i t}
$$
N.B.: Incidentally, your conclusion that $X+Y$ follows $\exp(2\lambda)$ is not correct. $X+Y$ follows gamma distribution $\Gamma(2,\lambda)$.
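For a further sanity check one can simulate; a short Python sketch (standard library only, $\lambda = 1$, not part of the original answer) comparing sample moments of $U(X+Y)$ and $X$:

```python
import random

random.seed(0)
lam, N = 1.0, 200_000
u_xy = [random.random() * (random.expovariate(lam) + random.expovariate(lam))
        for _ in range(N)]
x = [random.expovariate(lam) for _ in range(N)]

m1_uxy = sum(t for t in u_xy) / N       # should be near E[X]   = 1/lam   = 1
m2_uxy = sum(t * t for t in u_xy) / N   # should be near E[X^2] = 2/lam^2 = 2
m1_x = sum(x) / N
print(m1_uxy, m2_uxy, m1_x)
```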
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sequence of square integrable functions Let $\{f_{n}\}$ be a sequence of nonzero continuous functions on $\mathbb{R}$ which is uniformly bounded and uniformly Lipschitz on $\mathbb{R}$, whose derivative sequence $\{f_{n}'\}$ is also uniformly Lipschitz on $\mathbb{R}$, and with $f_{n}\in L^{2}(\mathbb{R})$ for all $n$. If $\{f_{n}\}$ converges uniformly on every closed interval $I\subset \mathbb{R}$ to a continuous function $f$, does this imply that $f\in L^{2}(\mathbb{R})$? If not, what condition(s) must the sequence $\{f_{n}\}$ satisfy to get such a result?
| You need a bound on the $L^2$ norms of the $f_n$. Since you know that your sequence converges pointwise (even better) the same is true for $|f_n|^2$. You can then use Fatou's lemma applied to $|f_n|^2, |f|^2$ to conclude that the limit is in $L^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Optimization: Simplifying the function from an expression in derivative and again taking derivative I am trying to maximize a function. When I took the first derivative, I found an expression very similar to one in the original function. My question is: can I use that expression in the original function to simplify it, and then take the derivative again to find the optimum? I am not sure if it is mathematically correct. Thanks!
Following is the function and its derivative (when $Q<\frac{au}{2}$):
$$\pi_{1}=\int_{0}^{t^{*}}(\frac{a-\frac{Q}{u-vt}}{b})Qdt+\int_{t^{*}}^{T}\frac{a}{2b}(\frac{a}{2})(u-vt)dt-cQT$$
So I took derivative with respect to Q as follow ($t^{*}$ is a function of Q with $u-vt^{*}=\frac{2Q}{a}$ and $\frac{\partial t^{*}}{\partial Q}=-\frac{2}{av}$):
$$ \frac{\partial \pi_{1}}{\partial Q}=\frac{\partial t^{*}}{\partial Q}(\frac{a-\frac{Q}{u-vt^{*}}}{b})Q+\int_{0}^{t^{*}}(\frac{a-\frac{2Q}{u-vt}}{b})dt-\frac{\partial t^{*}}{\partial Q}(\frac{a^2}{4b})(u-vt^{*})-cT$$
$$ \int_{0}^{t^{*}}(\frac{a-\frac{2Q}{u-vt}}{b})dt-cT=0 $$
Then I used this expression that holds for optimal Q in the original function to simplify and got the following and again I took derivative.
$$\pi_{1}=\int_{0}^{t^{*}}\frac{Q^2}{b(u-vt)}dt+\int_{t^{*}}^{T}(\frac{a^2}{4b})(u-vt)dt$$
$$\frac{\partial \pi_{1}}{\partial Q}=\frac{\partial t^{*}}{\partial Q}\frac{Q^2}{b(u-vt^{*})}+\int_{0}^{t^{*}}\frac{2Q}{b(u-vt)}dt-\frac{\partial t^{*}}{\partial Q}(\frac{a^2}{4b})(u-vt^{*})=0$$
| No, this won't work in general. Consider this example:
$$
f(x) = x^2 + e^x
$$
At the minimum $x^* \approx -0.351733$, we have:
$$
f'(x^*) = 2x^* + e^{x^*} = 0 \Rightarrow e^{x^*} = -2x^*
$$
There is no algebraic way to solve this, so one might be tempted to do what you're suggesting: plug the condition into the original function to simplify it. This gives:
$$
f(x^*) = {x^*}^2 - 2x^*
$$
This equation is true for the specific value $x^*$, but it is not really meaningful to differentiate this expression. If we did differentiate it, we would get the (incorrect) equation $f'(x) = 2x - 2$, leading to the erroneous conclusion that the minimum is at $x=1$; but clearly $f'(1) = 2 + e \ne 0$.
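A short numerical sketch of this example (Python, a simple bisection on $f'$; not part of the original answer) makes the point concrete:

```python
import math

def fprime(x):
    """Derivative of f(x) = x^2 + e^x."""
    return 2 * x + math.exp(x)

# fprime(-1) < 0 < fprime(0) and fprime is increasing, so bisect on [-1, 0]
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < 0:
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2
print(x_star)        # about -0.35173, matching the value quoted above
print(fprime(1.0))   # 2 + e, far from 0: x = 1 is not a critical point
```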
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does "+ complete" mean? I'm reading notes about Liapunov stability, and in the book of Abraham, Marsden and Ratiu I found the next definition:
Let $m$ be a critical point of $X$. Then
$m$ is stable (or Liapunov stable) if for any neighborhood $U$ of $m$, there is a neighborhood $V$ of $m$ such that if $m'$ $\in$ $V$, then $m'$ is $+$ complete and $F_{t}(m') \in U$ for all $t \geq 0$ .
I want to know what "$+$ complete" means.
Thanks!
| It's defined a few pages earlier (2.1.13). It means that the integral curve starting at $m'$ at $t=0$ is defined for all $t>0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continued fraction question I have been given a continued fraction for a number x:
$$x = 1+\frac{1}{1+}\frac{1}{1+}\frac{1}{1+}\cdots$$
How can I show that $x = 1 + \frac{1}{x}$? I played around some with the first few convergents of this continued fraction, but I don't get close.
| Doesn't this immediately follow from the definition of the $n+\frac1{a+}\frac1{b+}\cdots$ notation you are using? Specifically, I thought that $\frac1{a+}Z\ldots$ was defined to be exactly the same as $\frac1{a+Z\ldots}$.
Then if $x=1+\frac1{1+}\frac1{1+}\cdots$ then $\frac1x = \frac1{1+}\frac1{1+}\cdots $ and $1+\frac1x = 1+\frac1{1+}\frac1{1+}\cdots = x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Integration of $\int\frac{1}{x^{4}+1}\mathrm dx$ I don't know how to integrate $\displaystyle \int\frac{1}{x^{4}+1}\mathrm dx$. Do I have to use trigonometric substitution?
| There are two (three) ways to go. One, assume
$$x^4+1=(x^2+ax+1)(x^2-ax+1)$$
You'll get that
$${x^4} + 1 = {x^4} + \left( {2 - {a^2}} \right){x^2} + 1$$
Then $a=\sqrt 2$ (or the other, by symmetry)
$${x^4} + 1 = \left( {{x^2} + \sqrt 2 x + 1} \right)\left( {{x^2} - \sqrt 2 x + 1} \right)$$
The other approach is the substitution ${x^2} = \tan \theta $, but it might get messy, unless you know how to use the Weierstrass substitution, for example.
$$\int {\frac{{dx}}{{{x^4} + 1}}} = \int {\frac{{\left( {{{\tan }^2}\theta + 1} \right)d\theta }}{{{{\tan }^2}\theta + 1}}} \frac{1}{{2\sqrt {\tan \theta } }} = \int {\sqrt {\frac{{\cos\theta }}{{\sin\theta }}} \frac{{d\theta }}{2}} $$
$$\int {\sqrt {\frac{{\frac{{1 - {u^2}}}{{1 + {u^2}}}}}{{\frac{{2u}}{{1 + {u^2}}}}}} \frac{{du}}{{1 + {u^2}}}} = \int {\sqrt {\frac{{1 - {u^2}}}{{2u}}} \frac{{du}}{{1 + {u^2}}}} $$
However, Chandrasekar's is the best way to go, if you can figure it out.
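One can verify the quadratic factorization by multiplying out the two factors; a small Python sketch (coefficient lists, lowest degree first; not part of the original answer):

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

a = math.sqrt(2)
# (x^2 + a x + 1)(x^2 - a x + 1) = x^4 + (2 - a^2) x^2 + 1
prod = poly_mul([1, a, 1], [1, -a, 1])
print(prod)   # approximately [1, 0, 0, 0, 1], i.e. x^4 + 1
```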
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 20,
"answer_id": 13
} |
Dirichlet's Test Remark in Apostol Dirichlet's Test is theorem $10.17$ in Apostol's Calculus Vol. $1$.
The theorem itself says that if the partial sums of $\{a_n\}$ (can be complex numbers, not just reals) form a bounded sequence and $\{b_n\}$ is a (monotone?) decreasing function converging to $0$, then $\sum a_n b_n$ converges.
The part of the proof I am stuck on says that, letting $A_n=\sum_{k=1}^{n} a_k$
"The series $\sum (b_k - b_{k+1})$ is a convergent telescoping series which dominates $\sum A_k(b_k - b_{k+1})$. This implies absolute convergence..."
How does this imply absolute convergence? Does it have to do with the fact that $\{b_n\}$ is decreasing? By decreasing, should I automatically think monotone?
| Note that boundedness of the partial sums of $\{a_n\}$ means that $\lvert A_k \rvert \leq M$ for all $k$ and some $M > 0$. Hence, we have that
\begin{align}
\left \lvert \sum_{k \leq n} A_k(b_k - b_{k+1}) \right \rvert & \leq \sum_{k \leq n} \left(\left \lvert A_k(b_k - b_{k+1}) \right \rvert \right) & (\because \text{By triangle inequality})\\
&= \sum_{k \leq n} \left \lvert A_k \right \rvert \left \lvert (b_k - b_{k+1}) \right \rvert & \because \lvert z_1 z_2 \rvert = \lvert z_1 \rvert \lvert z_2 \rvert\\
& \leq \sum_{k \leq n} M \lvert(b_k - b_{k+1}) \rvert & (\because A_k \text{ is bounded by }M)\\
& = M \sum_{k \leq n} (b_k - b_{k+1}) & (\because \{b_n\}\text{ form a decreasing sequence})\\
& = M (b_1 - b_{n+1}) & (\because \text{By telescoping})\\
& \leq Mb_1 & (\because b_n \downarrow 0 \implies b_{n+1} \geq 0)
\end{align}
Hence, $\displaystyle \sum_{k \leq n} A_k(b_k - b_{k+1})$ converges absolutely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$ I'm supposed to calculate:
$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$
By using WolframAlpha, I might guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.
| I do not know how much this will help you.
For a given $n$, the result is $\dfrac{\Gamma(n+1,n)}{n\ \Gamma(n)}$ which has a limit equal to $\dfrac12$ as $n\to\infty$.
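One can watch the convergence numerically; a Python sketch (terms built recursively to avoid overflowing $n^k$ and $k!$; the deviation from $1/2$ is known to decay like $n^{-1/2}$), not part of the original answer:

```python
import math

def poisson_cdf_at_n(n):
    """e^{-n} * sum_{k=0}^{n} n^k / k!, accumulated term by term."""
    term = math.exp(-n)   # k = 0 term
    total = term
    for k in range(1, n + 1):
        term *= n / k     # n^k/k! = (n^{k-1}/(k-1)!) * n/k
        total += term
    return total

for n in (50, 500):
    print(n, poisson_cdf_at_n(n))   # approaches 1/2 from above as n grows
```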
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "257",
"answer_count": 9,
"answer_id": 8
} |
$k$th power of the ideal of germs Well, we denote the set of germs at $m$ by $\bar{F_m}$. A germ $f$ has a well-defined value $f(m)$ at $m$, namely the value at $m$ of any representative of the germ. Let $F_m\subseteq \bar{F_m}$ be the set of germs which vanish at $m$. Then $F_m$ is an ideal of $\bar{F_m}$; let $F_m^k$ denote its $k$th power. Could anyone tell me what the elements of the ideal $F_m^k$ look like? It is said that they are all finite linear combinations of $k$-fold products of elements of $F_m$, but I don't get this. It is also said that these form a chain $\bar{F_m}\supsetneq F_m\supsetneq F_m^2\supsetneq\dots$
| Let $R$ be a commutative ring and $I\subseteq R$ an ideal of $R$. The $k$th power $I^k$ of that ideal is defined to be the set of all elements of $R$ that can be written as finite sums of elements of the form $a_1\cdot a_2\cdot\ldots\cdot a_k$ with $a_i\in I$. One can easily check that $I^k$ is itself an ideal of $R$. It is then also clear that for all $k\in\mathbb{N}$ one has $I^{k+1}\subseteq I^k$, while in general the inclusion need not be strict. For the latter one needs further assumptions on the ring $R$ or the ideal $I$.
A simple guess in the case you are considering is this one: on a smooth manifold there exist functions $f$ that are smooth in a neighborhood of $m$ and possess a simple zero at $m$. The germ defined by such a function then lies in $F_m$ but not in any $F_m^k$, $k>1$. Similarly for the germ defined by $f^2$, etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Topologies for $2^\mathbb{N}$ Let $X = 2^\mathbb{N}$ and $Y = \mathbb{R}^+$ (i.e. the non-negative numbers). Is there a topology in which functions similar to $f : X \to Y$,
$$ f(A) = \begin{cases}
\frac{1}{|A|}, & |A| < \infty \\ \\
\ \ 0, & |A| \not<\infty
\end{cases}$$
or $$g(A) = f(A \cap \{2k \mid k \in \mathbb{N}\})$$
would be continuous, but
$$ h(A) = \begin{cases}
\frac{1}{|A|}, & |A| < \infty \\ \\
\ \ 1, & |A| \not<\infty
\end{cases}$$
would not? (For $Y$ take the standard topology on $\mathbb{R}$.)
Is it possible for $X$ to be compact with such topology?
The context is proving that some functions defined on $X$ attain
the minimum/maximum value and I am wondering if it could be done via topology.
The functions I am talking about are similar to $$F(A) = \sum_k [\text{if }B_k \subseteq A\text{ then }b_k\text{ else }0]$$
where $(B_k)$ is some countable family of finite sets, and we know that $F$ is bounded, i.e. there exists $M$ such that $|F(A)| < M$ for every $A$. I will appreciate comments on other approaches too.
Thanks in advance!
| Presumably you'll want to define $f$ differently in the case $A = \emptyset$, but that shouldn't be much of an issue.
If any topology will work, then the smallest topology such that $f$ is continuous in particular will work. This is given by defining $U \subset 2^\mathbb{N}$ to be open iff $U = f^{-1}(V)$ for $V$ open in $Y$. A basis for this topology is $\{\{A : |A|=n \} : n \in \mathbb{N} \} \cup \{ \{A : |A| > n \} : n \in \mathbb{N} \}$.
In this topology, $f$ is indeed continuous, while $h$ is not, since $\{1\}$ is open in the image of $h$ and $h^{-1}(\{1\}) = \{A : |A| =1$ or $|A| = \infty \}$ is not open. You can check for yourself that this topology is compact.
I haven't thought about the optimization part, but hopefully this gives you somewhere to start on that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Are test/bump functions always bounded? A bump function is an infinitely differentiable function with compact support. I guess that such functions are always bounded, in particular because the set where they are nonzero has compact closure, and since they are continuous they should attain a maximum value on that set. Or am I wrong? I am wondering because nowhere in the literature I am using is it said that such functions are bounded, and I guess this is an important property that should be mentioned if it holds. So maybe it's not the case?
| Hint: The image of a compact set under a continuous function is always compact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Consecutive non square free numbers I was thinking of solving this with a computer program, but I would prefer a mathematical solution.
How can one obtain a list of $3$ consecutive non-square-free positive integers? In general, how can one obtain such a list with $k$ elements? Thanks.
| Let $n$ be the first number. It will work if we can arrange for the following:
$$
n\equiv 0\pmod{4}
$$
$$
n+1\equiv 0\pmod{9}
$$
$$
n+2\equiv 0\pmod{25}
$$
Using the Chinese Remainder Theorem, the first two congruences are equivalent to requiring that $n\equiv 8 \pmod{36}$. Combining this with the third congruence gives that $n\equiv 548\pmod{900}$. Thus, three such numbers are $548$, $549$, and $550$.
A similar algorithm works for $k$ consecutive non-square-free numbers.
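The CRT solution need not be the smallest; a brute-force Python sketch (not part of the original answer) finds the first such triple and also confirms $548$, $549$, $550$:

```python
def squarefree(n):
    """True if no prime square divides n (trial division by squares)."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# the CRT answer is indeed a triple of non-square-free numbers
assert 548 % 4 == 0 and 549 % 9 == 0 and 550 % 25 == 0

# brute-force the first run of 3 consecutive non-square-free numbers
n = 2
while squarefree(n) or squarefree(n + 1) or squarefree(n + 2):
    n += 1
print(n, n + 1, n + 2)   # 48, 49, 50  (48 = 16*3, 49 = 7^2, 50 = 2*25)
```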
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Probably simple factoring problem I came across this in a friend's 12th grade math homework and couldn't solve it. I want to factor the following trinomial:
$$3x^2 -8x + 1.$$
How to solve this is far from immediately clear to me, but it is surely very easy. How is it done?
| There's an approach called (by some) the "$ac$ method". Suppose $\alpha,\beta,\gamma,\delta$ are some constants, and consider the expansion $$(\alpha x+\beta)(\gamma x+\delta)=\alpha\gamma x^2+(\alpha\delta+\beta\gamma)x+\beta\delta=ax^2+bx+c.$$ Note that $\alpha\delta$ and $\beta\gamma$ are factors of $ac=\alpha\beta\gamma\delta$ and that $b=\alpha\delta+\beta\gamma$. In fact, they are paired factors of $ac$, in that $\alpha\delta\cdot\beta\gamma=ac$. The idea of the $ac$ method of factoring is to find such paired factors of $ac$ whose sum is $b$.
To see how this can be useful, consider the example $3x^2+13x-10$. Here, $ac=-30$ and $b=13$. We need to find two numbers that add up to $13$ and multiply to get $-30$. The only pair that works is $15$ and $-2$. Thus, we can rewrite $13$ as $15-2$ or $-2+15$. How is this useful? Well, $$3x^2+13x-10=3x^2+15x-2x-10=3x(x+5)-2(x+5)=(3x-2)(x+5)$$ and $$3x^2+13x-10=3x^2-2x+15x-10=x(3x-2)+5(3x-2)=(x+5)(3x-2),$$ so either way, we obtain our factorization without too much difficulty.
In this case, unfortunately, we have $ac=3$ and $b=-8$, so there is no obvious pair of factors! Such a pair does exist, namely $-4+\sqrt{13}$ and $-4-\sqrt{13}$, but even knowing the pair may not make the factorization simple in all cases (in this case it is, because $c=1$).
In general, if trying to factor something intractable to the $ac$ method, I recommend completing the square, as Marvis has demonstrated so well.
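The search for the paired factors can be automated; a small Python sketch (the helper name `ac_pairs` is mine, not from the answer) that looks for integer pairs only:

```python
def ac_pairs(a, b, c):
    """Integer factor pairs (p, q) of a*c with p + q == b, listed with p <= q."""
    ac = a * c
    pairs = []
    for p in range(-abs(ac), abs(ac) + 1):
        if p == 0 or ac % p != 0:
            continue
        q = ac // p
        if p + q == b and p <= q:
            pairs.append((p, q))
    return pairs

print(ac_pairs(3, 13, -10))   # [(-2, 15)] -- the pair used in the example
print(ac_pairs(3, -8, 1))     # [] -- no integer pair, as noted above
```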
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
What transforms under SU(2) as a matrix under SO(3)? A vector $\boldsymbol{r}$ in $\mathbb{R}^3$ transforms under rotation $\boldsymbol{A}$ to $\boldsymbol{r}'=\boldsymbol{Ar}$. It is equivalent to an SU(2) "rotation" as
$$\left( \boldsymbol{r}'\cdot\boldsymbol{\sigma} \right) = \boldsymbol{h} \left( \boldsymbol{r}\cdot\boldsymbol{\sigma} \right) \boldsymbol{h}^{-1},$$
where $\boldsymbol{h}$ is the counterpart of $\boldsymbol{A}$ in SU(2) given by the homomorphism between these two groups.
Now the question is, what would be the equivalent transformation in SU(2) of the rotation of a matrix in $\mathbb{R}^3$? In other words, what is the equivalent in SU(2) of $\boldsymbol{M}'=\boldsymbol{A}\boldsymbol{M}\boldsymbol{A}^{-1}$.
| Firstly, we need to map $\mathbb{R}^3$ to the representation space $V$ for $\mathrm{SU}(2)$. One possible map is given by the following formula:
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto
x \mathbf{I} +
y \mathbf{J} +
z \mathbf{K}$$
where
\begin{align}
\mathbf{I} & = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} &
\mathbf{J} & = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} &
\mathbf{K} & = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}
\end{align}
$\mathrm{SU}(2)$ acts on $V$ by conjugation: so for each $X$ in $V$ and each $A$ in $\mathrm{SU}(2)$, the ordinary matrix product $A X A^{-1}$ is in $V$. This is linear in $X$ and is indeed a linear representation of $\mathrm{SU}(2)$. Indeed, if
$$A = \begin{pmatrix} r e^{i \theta} & s e^{-i \phi} \\ -s e^{i \phi} & r e^{-i \theta} \end{pmatrix}$$
where $r, s, \theta, \phi$ are real numbers and $r^2 + s^2 = 1$, then $A \in \mathrm{SU}(2)$, and
\begin{align}
A \mathbf{I} A^{-1} & = (r^2 - s^2) \mathbf{I} + 2 r s \sin (\theta - \phi) \mathbf{J} - 2 r s \cos (\theta - \phi) \mathbf{K} \\
A \mathbf{J} A^{-1} & = 2 r s \sin (\theta + \phi) \mathbf{I} + (r^2 \cos 2 \theta + s^2 \cos 2 \phi) \mathbf{J} + (r^2 \sin 2 \theta - s^2 \sin 2 \phi) \mathbf{K} \\
A \mathbf{K} A^{-1} & = 2 r s \cos (\theta + \phi) \mathbf{I} - (r^2 \sin 2 \theta + s^2 \sin 2 \phi) \mathbf{J} + (r^2 \cos 2 \theta - s^2 \cos 2 \phi) \mathbf{K}
\end{align}
Thus, the induced action of $\mathrm{SU}(2)$ on $\mathbb{R}^3$ is given by the group homomorphism below,
$$\begin{pmatrix} r e^{i \theta} & s e^{-i \phi} \\ -s e^{i \phi} & r e^{-i \theta} \end{pmatrix} \mapsto
\begin{pmatrix}
r^2 - s^2 & 2 r s \sin (\theta + \phi) & 2 r s \cos (\theta + \phi) \\
2 r s \sin (\theta - \phi) & r^2 \cos 2 \theta + s^2 \cos 2 \phi & -r^2 \sin 2 \theta - s^2 \sin 2 \phi \\
-2 r s \cos (\theta - \phi) & r^2 \sin 2 \theta - s^2 \sin 2 \phi & r^2 \cos 2 \theta - s^2 \cos 2 \phi
\end{pmatrix}$$
and one may verify that the RHS is a matrix in $\mathrm{SO}(3)$.
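One can check this numerically; a Python sketch (standard library only, not part of the original answer) that conjugates the basis $\mathbf{I},\mathbf{J},\mathbf{K}$ by a sample $A$ and verifies the resulting $3\times3$ matrix is orthogonal:

```python
import cmath, math

def mat_mul(A, B):
    """Product of two 2x2 complex matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1j, 0], [0, -1j]]
J2 = [[0, 1], [-1, 0]]
K2 = [[0, 1j], [1j, 0]]

def coords(X):
    """Read off (x, y, z) from X = x*I + y*J + z*K."""
    return (X[0][0].imag, X[0][1].real, X[0][1].imag)

r, theta, phi = 0.6, 0.7, 1.1
s = math.sqrt(1 - r * r)
A = [[r * cmath.exp(1j * theta), s * cmath.exp(-1j * phi)],
     [-s * cmath.exp(1j * phi), r * cmath.exp(-1j * theta)]]
A_inv = [[A[0][0].conjugate(), A[1][0].conjugate()],   # A^{-1} = A^H for SU(2)
         [A[0][1].conjugate(), A[1][1].conjugate()]]

# rows of R are the images of I, J, K under conjugation
R = [coords(mat_mul(mat_mul(A, B), A_inv)) for B in (I2, J2, K2)]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
print([[round(dot(R[i], R[j]), 10) for j in range(3)] for i in range(3)])
```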
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compute: $\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$ I try to solve the following sum:
$$\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$$
I'm very curious about the possible approaching ways that lead us to solve it. I'm not experienced with these sums, and any hint, suggestion is very welcome. Thanks.
| I think one way of approaching this sum would be to use the partial fraction
$$ \frac{1}{k^2n+2nk+n^2k} = \frac{1}{kn(k+n+2)} = \frac{1}{2}\Big(\frac{1}{k} + \frac{1}{n}\Big)\Big(\frac{1}{k+n} - \frac{1}{n+k+2}\Big)$$
to rewrite you sum in the form
$$\sum_{n=1}^{\infty}\sum_{k=1}^{\infty} \frac{1}{k^2n+n^2k+2kn} = \frac{1}{2}\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \Big( \frac{1}{n(k+n)} - \frac{1}{n(k+n+2)} + \frac{1}{k(k+n)} - \frac{1}{k(k+n+2)}\Big)$$
Since the sum on the right will telescope in one of the summation variables it should be straightforward to find the answer from here (it ends up being $7/4$ I think).
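A quick numerical check of the $7/4$ value (Python, not part of the original answer; truncating both indices at $N$ leaves a positive tail of order $\log N/N$):

```python
N = 2000
total = sum(1.0 / (k * n * (k + n + 2))
            for k in range(1, N + 1)
            for n in range(1, N + 1))
print(total)   # approaches 7/4 = 1.75 from below as N grows
```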
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Dedekind complete ⇒ Sequentially complete Let F be an ordered field with least upper bound property.
1. Let $\alpha: \mathbb{N} \to F$ be a Cauchy sequence. Since $F$ is an ordered field and $\alpha$ is Cauchy, $\alpha$ is bounded both above and below.
2. By assumption and its dual, $A=\{\alpha(n)\mid n\in \mathbb{N}\}$ has an infimum $a_0$ and a supremum $b_0$.
3. $F$ is Archimedean.
4. If a subsequence of a Cauchy sequence converges to $a\in F$, then the whole Cauchy sequence converges to $a\in F$.
These are all I know. How do I prove that all Cauchy sequences converge in $F$?
Please consider my level. I want a quite direct proof, not mentioning any topology or Cauchy nets.
*The comment button is not available to me now (I don't know why), so I write this here.
I just proved it with the facts that (i) every Cauchy sequence is convergent in the set of Cauchy reals, (ii) there exists a bijective homomorphism between two Dedekind complete fields, and (iii) the set of Cauchy reals is Dedekind complete.
Let $x:i→x(i):\mathbb{N}→F$ be a Cauchy sequence in a Dedekind complete field $F$. Then use the bijective homomorphism $f$ to show that $x':i→f(x(i))$ is a Cauchy sequence in the set of Cauchy reals. By fact (i), $x'$ is convergent. Since the inverse of $f$ is also a homomorphism, use this to show that $x$ is convergent.
| Let $c_0 = \frac{a_0 + b_0}2$ (Note that an ordered field has characteristic 0, so $2 = 1+1$ is invertible in $F$). If $[a_0, c_0]$ contains infinitely many $\alpha(n)$, then let $a_1 = a_0$, $b_1 = c_0$, otherwise $a_1 = c_0$, $b_1 = b_0$. Now set $c_1 = \frac{a_1 + b_1}2$.
Continuing inductively, we obtain sequences $(a_n)$, $(b_n)$ such that:
*
*$[a_n, b_n]$ contains infinitely many $\alpha(k)$.
*$b_n - a_n = \frac{b_0 - a_0}{2^n}$
*$a_{n} \le a_{n+1} \le b_{n+1} \le b_n$.
hold for each $n$.
Now let $a^* = \sup_n a_n$ (note that $a_n \le b_0$, so the sup exists) and $b_* = \inf_n b_n$. Then $a_n \le a^* \le b_* \le b_n$ for each $n$, so $b_* - a^* \le b_n - a_n = \frac{b_0 - a_0}{2^n}$. As $F$ is Archimedean, $b_* - a^* = 0$, i.e. $b_* = a^*$.
Set $k_0 := 1$, and for each $n$, given $k_n$ choose $k_{n+1} > k_n$ with $\alpha(k_{n+1}) \in [a_n, b_n]$ (which is possible since the latter interval contains infinitely many $\alpha(k)$).
We will prove that $\alpha(k_n) \to a^*$ (which, by 4., suffices). So let $\epsilon > 0$. As $F$ is Archimedean, there is some $N$ with $2^{-N}(b_0 - a_0) \le \epsilon$. For $n > N$ we now have $\alpha(k_n) \in [a_N, b_N]$ and $a^* \in [a_N, b_N]$, hence $|\alpha(k_n) - a^*| \le b_N - a_N = 2^{-N}(b_0 - a_0) \le \epsilon$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
System of equations of 3rd degree I need help with the following system of equations:
$
2y^3 +2x^2+3x+3=0 $
$
2z^3 + 2y^2 + 3y + 3= 0 $
$2x^3 + 2z^2 + 3z + 3 = 0$
| The only real solution is $x = y = z = -1$.
Claim 1: $x,y,z \ge -1$.
Proof. Suppose that $x < -1$. Since $2x^2 + 3x + 3 > 2$ for $x < -1$ (indeed $2x^2+3x+1=(2x+1)(x+1)>0$ there), the first equation gives $0 = 2y^3 + 2x^2 + 3x + 3 > 2y^3 + 2$, so that $y < -1$ also. Similarly it follows that $z < -1$. Hence if one of $x,y,z$ is smaller than $-1$, all of them are. But then if for example $x<z$, we have
$$0 = 2x^3 + 2z^2 + 3z + 3 < 2z^3 + 2z^2 + 3z + 3 = (z+1)(2z^2 + 3) < 0,$$
and we see that necessarily $x=y=z$, which implies that $x=y=z=-1$, contradiction.
Claim 2: $x,y,z \le -1$.
Proof. Suppose that $x > -1$ is the largest of $x,y,z$. So $z \le x$ and
$$0 = 2x^3 + 2z^2 + 3z + 3 \ge 2z^3 + 2z^2 + 3z + 3 = (z+1)(2z^2 + 3),$$
which implies that $z \le - 1$. By Claim 1, $z = -1$, and hence also $x = -1$ and $y = -1$, contradicting $x > -1$.
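A quick check of the claimed solution and of the factorization $(z+1)(2z^2+3)=2z^3+2z^2+3z+3$ used above (a Python sketch, not part of the original answer):

```python
def system(x, y, z):
    """Left-hand sides of the three equations."""
    return (2 * y**3 + 2 * x**2 + 3 * x + 3,
            2 * z**3 + 2 * y**2 + 3 * y + 3,
            2 * x**3 + 2 * z**2 + 3 * z + 3)

print(system(-1, -1, -1))   # (0, 0, 0)

# the factorization identity used in both claims
for z in (-2.0, -0.5, 1.3):
    assert abs((z + 1) * (2 * z * z + 3) - (2 * z**3 + 2 * z**2 + 3 * z + 3)) < 1e-9
```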
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is the matrix representing a non-degenerate sesquilinear form invertible? Let's consider a finite-dimensional vector space $E$ on the field $\mathbb{K}$ (where $\mathbb{K}=\mathbb{C} \ \text{or}\ \mathbb{R}$) and a sesquilinear (or bilinear if $\mathbb{K}=\mathbb{R}$) form $q:E\times E \rightarrow \mathbb{K}$.
The definition for a non-degenerate form is that $q(x,y)=0\ \forall y\in E$ implies $x=0$.
Now if we represent $q(x,y)$ with a matrix, so $q(x,y) =x^HAy$, why does the condition that the form be non-degenerate impose that $A$ is non-singular?
I tried to see it using the dual space as $M(x,A)=x^HA\in E^*$, so that $M:E\times L(E,E)\rightarrow E^*$, where $L(E,E)$ is the vector space of all linear transformations from $E$ to $E$ and playing with the nullspace of $A$, but I just can't see it
| Let $q$ be a sesquilinear form on a vector space $E$, given by a matrix $A$. The following statements are equivalent:
*
*$q$ is degenerate.
*There exists a nonzero vector $x\in E$ so that $q(x,y)=0$ for all $y\in E$.
*There exists a nonzero vector $x\in E$ so that $x^H A y = 0$ for all $y\in E$.
*There exists a nonzero vector $x\in E$ so that $x^H A$ is the zero (row) vector.
*The left nullspace of $A$ is non-trivial.
*The matrix $A$ is singular.
It should be clear that $(1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4)\Leftrightarrow(5)\Leftrightarrow(6)$.
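A small numerical illustration of $(4)\Leftrightarrow(6)$, with a singular matrix and vectors of my own choosing: a nonzero left-null vector $x$ of $A$ makes $q(x,y)=x^HAy$ vanish for every $y$.

```python
import numpy as np

A = np.array([[1, 2], [2, 4]], dtype=complex)   # singular: det = 0
x = np.array([2, -1], dtype=complex)            # left null vector: x^H A = 0
assert abs(np.linalg.det(A)) < 1e-12
assert np.allclose(x.conj() @ A, 0)
# q(x, y) = x^H A y vanishes for every sampled y
for y in (np.array([1, 0]), np.array([3 + 1j, -2])):
    assert abs(x.conj() @ A @ y) < 1e-12
```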
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Probability of an exact number of duplicate pairs when choosing X from Y. Here is the problem I'm faced with, as best as I can describe it.
There is a set of 256 values (a byte), and 108 values are chosen from this set. Each choice may be any value from 0 to 255. What is the probability that once the values are chosen, there will be six distinct pairs of duplicate values, and all other 96 values will be unique?
| First let us compute the number of choices in a canonical order. We list the pairs first in increasing order, then the single numbers. There are $\binom {256}{6}$ ways to pick the pairs and $\binom {250}{96}$ ways to pick the singles. Given such a set of numbers, it can be ordered in $\frac {108!}{2^6}$ ways, as interchanging the two copies within any pair gives the same sequence. The total probability is then $\frac {\binom {256}{6}\binom {250}{96}\,108!}{256^{108}\,2^6}\approx 6.5\times 10^{-6}$
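The closed form can be evaluated exactly with rational arithmetic as a sanity check, using only the Python standard library:

```python
from math import comb, factorial
from fractions import Fraction

# C(256,6) * C(250,96) * 108! / (2^6 * 256^108), computed exactly
p = Fraction(comb(256, 6) * comb(250, 96) * factorial(108),
             2**6 * 256**108)
print(float(p))  # on the order of 1e-6 to 1e-5
```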
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Null space of a matrix I was referring to this lecture http://www.stanford.edu/class/ee364a/videos/video05.html (about 0:38:10) related to convex optimization and for optimization it had a certain affine function equality constraint like
$$Ax=b$$
The lecturer then obtained the equivalent optimization problem removing the equality constraint and replacing $x$ with $Fz+x_0$
$$Ax=b \iff x=Fz+x_0\text{ for some }z.$$
According to the lecturer $x_0$ is a solution of the equality $Ax=b$ and $F$ is obtained from the null space of matrix $A$. I didn't get this null space thing and how this
$$x = Fz+x_0$$
is derived. Can anyone please explain?
| The null space of the matrix $A$ is the set of all vectors $z$ with $Az=0$, where $0$ is the zero vector (same dimensions as $x$ and $b$).
These vectors form a vector space which can be described by a basis $B$. That basis $B$ can be used to form a matrix $F$: take the basis vectors as columns and pad with zero columns until $F$ has the same shape as $A$.
Now $Fz$ is, for any vector $z$, just a linear combination of the basis vectors of $A$'s null space, so in short $A(Fz)=0$.
If for $x_0$ we know $Ax_0=b$ we can add a zero and are done
$Ax_0=b\Leftrightarrow0+Ax_0=b\Leftrightarrow A(Fz)+Ax_0=b\Leftrightarrow A(\underbrace{Fz+x_0}_{=:x})=b$
No matter what $z$ is, an $x$ defined like that is a solution.
EDIT: Example:
$$A=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]$$
Now to find the null space one has to solve $Ax=0$. For that use row reduction until the matrix is in an "easily readable" form.
$$A\longrightarrow\left[\begin{matrix}1&0&0\\0&2&-2\\0&3&-3\end{matrix}\right]\longrightarrow\left[\begin{matrix}1&0&0\\0&2&-2\\0&0&0\end{matrix}\right]=:\bar A$$
Row reduction ensures $Ax=0\Leftrightarrow\bar Ax=0$ and in the latter matrix the rows "read" as follows:
1. $1x_1+0x_2+0x_3=0\quad$ aka $x_1$ the first component of $x$ must be $0$.
2. $2x_2-2x_3=0\quad$ aka $x_2=x_3$.
3. Basically: whatever — the zero row imposes no constraint.
So the single base vector for $A$'s null space is
$$B_1=\left[\begin{matrix}0\\1\\1\end{matrix}\right]$$
To get $F$ fill up with null vectors
$$F=\left[\begin{matrix}0&0&0\\1&0&0\\1&0&0\end{matrix}\right]$$
For an arbitrary $z$ we get
$$A(Fz)=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]\left(\left[\begin{matrix}0&0&0\\1&0&0\\1&0&0\end{matrix}\right]\left[\begin{matrix}z_1\\z_2\\z_3\end{matrix}\right]\right)=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]\left[\begin{matrix}0\\z_1\\z_1\end{matrix}\right]=\left[\begin{matrix}0\\2z_1-2z_1\\3z_1-3z_1\end{matrix}\right]=\left[\begin{matrix}0\\0\\0\end{matrix}\right]$$
Again, if for $x_0$ we know $Ax_0=b$ it follows for any $z$ that an $x:=Fz+x_0$ yields $Ax=b$.
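The example can be checked numerically. The particular solution $x_0$ below is my own choice, with $b$ defined from it so that $Ax_0=b$ holds by construction:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [3., 2., -2.],
              [3., 3., -3.]])
F = np.array([[0., 0., 0.],   # single null-space basis vector (0,1,1),
              [1., 0., 0.],   # padded with zero columns
              [1., 0., 0.]])
x0 = np.array([1., 2., 3.])   # an arbitrary particular solution (my choice)
b = A @ x0                    # so A x0 = b by construction
rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.standard_normal(3)
    assert np.allclose(A @ (F @ z + x0), b)   # every x = Fz + x0 solves Ax = b
```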
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Tangent line of parametric curve I have not seen a problem like this so I have no idea what to do.
Find an equation of the tangent to the curve at the given point by two methods: without eliminating the parameter and with.
$$x = 1 + \ln t,\;\; y = t^2 + 2;\;\; (1, 3)$$
I know that $$\dfrac{dy}{dx} = \dfrac{\; 2t\; }{\dfrac{1}{t}}$$
But this gives a very wrong answer. I am not sure what a parameter is or how to eliminate it.
| Method 1 Eliminating. I think they want to write everything in terms of x first.
$x = 1 + ln(t) \iff e^{x - 1} = t$
$y = t^2 + 2 = e^{2x -2} + 2 \implies y' = 2e^{2x -2}$
At $(1,3)$, $y' = 2$. So the tangent line is $y = 2(x-1) + 3$, or parametrically: let $x - 1 = t \iff x = 1 + t$ and $y = 2t + 3$.
Method 2 No eliminating.
Let $r = (1 + ln(t),2 + t^2) \implies r' = (1/t, 2t)$. At (1,3), $t = 1$, therefore $r'(1) = (1,2)$
$r = (1,3) + s(1,2)$ which gives you $x = 1 + s$ and $y = 3 + 2s$
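A quick numerical cross-check that both methods give the same slope at $(1,3)$ (where $t=1$): the parametric formula $(dy/dt)/(dx/dt)$ and a finite difference both return $2$.

```python
from math import log

t = 1.0
slope_param = (2*t) / (1/t)            # (dy/dt)/(dx/dt) = 2t^2 = 2 at t = 1
x = lambda t: 1 + log(t)
y = lambda t: t**2 + 2
h = 1e-6
slope_fd = (y(t+h) - y(t-h)) / (x(t+h) - x(t-h))   # finite-difference slope
assert abs(slope_param - 2.0) < 1e-12
assert abs(slope_fd - 2.0) < 1e-6
```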
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
A $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ Could anyone help me show that a $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ cannot be injective?
| If $f$ is constant then $f$ is trivially not injective.
Let $f$ be non-constant, and let $C = \{\text{critical points}\} = \{p : df(p) \text{ is singular}\}$.
We know by Sard's Theorem that $f(C)$ has measure $0$. Hence there is a regular value $p$. Its inverse image is either empty or consists of regular points.
Suppose that for every regular value $p$, $f^{-1}(p)$ were empty. Then the image of $f$ equals $f(C)$, which has measure $0$. Since $f$ is continuous and $\mathbb{R}^2$ is connected, the image must also be a connected subset of $\mathbb{R}$. The only connected subsets of $\mathbb{R}$ are intervals, so $f(C)$ must be a single point, i.e., $f$ is constant, a contradiction. Therefore there is some regular value $p$ with a nonempty preimage; that preimage is a $1$-manifold, which cannot be a single point.
Therefore $f$ is not injective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
$\int f(x) dx $ is appearing as $\int dx f(x)$. Why? A few of us over on MITx have noticed that $\int f(x) dx $ is appearing as $\int dx f(x)$.
It's not the maths of it that worries me. It's just I recently read a justification (analytical?) of the second form somewhere but can't recall it or where I saw it.
Can anyone give me a reference?
| The second form sometimes makes it easier for the reader to match variables of integrations with their limits. Compare
$$
\int_0^1\int_{-\infty}^{\infty}\int_{-\eta}^{\eta}\int_{0}^{|t|} \Big\{\text{some long and complicated formula here}\Big\}\,ds\,dt\,d\zeta\,d\eta
$$
and
$$
\int_0^1 d\eta\int_{-\infty}^{\infty}d\zeta\int_{-\eta}^{\eta}dt\int_{0}^{|t|} ds\,\Big\{\text{some long and complicated formula here}\Big\}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Limit of exponentials Why is $n^n (n+m)^{-{\left(n+m\over 2\right)}}(n-m)^{-{\left(n-m\over 2\right)}}$ asymptotically equal to $\exp\left(-{m^2\over 2n}\right)$ as $n,m\to \infty$?
| By Stirling's approximation we have $$ \binom{2n}{n+m}= \frac{(2n)!}{(n+m)!(n-m)!} \sim \frac{\sqrt{2\pi n} (2n/e)^{2n}}{\sqrt{2\pi(n+m)} \left( \frac{n+m}{e} \right)^{n+m} \sqrt{2\pi(n-m)} \left( \frac{n-m}{e} \right)^{n-m} }= \frac{1}{\sqrt{2\pi (n^2-m^2)}} \cdot \frac{(2n)^{2n} }{ (n+m)^{n+m} (n-m)^{n-m}} .$$
Now if $m$ is "small" compared to $n$, then $$n^n (n+m)^{-(n+m)/2} (n-m)^{-(n-m)/2} \sim \frac{1}{2^n} \sqrt{ \sqrt{2\pi (n^2-m^2)}\binom{2n}{n+m} }.$$
We make the assumption that $m$ is small compared to $n$ precise in the sense that we take $m=o(n^{2/3})$, so that we can apply the refined entropy formula found at equation 8 of the link, which gives the result.
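A numerical illustration of the asymptotic, working with logarithms to avoid overflow; for $n=10^4$ and $m=50$ (well inside the $m=o(n^{2/3})$ regime) the two sides already agree to several digits:

```python
from math import log

# log of n^n (n+m)^{-(n+m)/2} (n-m)^{-(n-m)/2}
def log_lhs(n, m):
    return n*log(n) - (n+m)/2*log(n+m) - (n-m)/2*log(n-m)

n, m = 10_000, 50
# compare against log of exp(-m^2/(2n)) = -m^2/(2n)
assert abs(log_lhs(n, m) - (-m**2 / (2*n))) < 1e-4
```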
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $a^{\phi(b)}+b^{\phi(a)} \equiv 1 (\text{mod }ab)$ , if a and b are relatively prime positive integers. Show that
$a^{\phi(b)}+b^{\phi(a)} \equiv 1 (\text{mod } ab)$,
if a and b are relatively prime positive integers.
Note that $\phi(n)$ counts the number of positive integers not exceeding n which are relatively prime with n.
| Since $\gcd(a,b)=1$, by Euler's theorem, $a^{\phi(b)}\equiv1 \pmod{b}$.
Now, $b^{\phi(a)}\equiv 0 \pmod{b}$ $(\because b\mid b^{\phi(a)})$.
So now we have, $a^{\phi(b)}+b^{\phi(a)}\equiv 1 \pmod{b}.\tag{1}$
Again by Euler's theorem,
$b^{\phi(a)}\equiv 1 \pmod{a}.$
And $a^{\phi(b)}\equiv 0 \pmod{a}$ $(\because a\mid a^{\phi(b)})$.
From this we have,
$a^{\phi(b)}+b^{\phi(a)}\equiv 1 \pmod{a}\tag{2}$
From $(1)$ and $(2)$, we have,
$a^{\phi(b)}+b^{\phi(a)}\equiv 1 \pmod{ab} \quad (\because \gcd(a,b)=1)$
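A brute-force check of the statement for a few coprime pairs, with a naive totient (fine for small numbers):

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for a, b in [(15, 28), (9, 20), (35, 6)]:
    assert gcd(a, b) == 1
    assert (pow(a, phi(b), a*b) + pow(b, phi(a), a*b)) % (a*b) == 1
```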
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
} |
Does an injective endomorphism of a finitely-generated free R-module have nonzero determinant? Alternately, let $M$ be an $n \times n$ matrix with entries in a commutative ring $R$. If $M$ has trivial kernel, is it true that $\det(M) \neq 0$?
This math.SE question deals with the case that $R$ is a polynomial ring over a field. There it was observed that there is a straightforward proof when $R$ is an integral domain by passing to the fraction field.
In the general case I have neither a proof nor a counterexample. Here are three general observations about properties that a counterexample $M$ (trivial kernel but zero determinant) must satisfy. First, recall that the adjugate $\text{adj}(M)$ of a matrix $M$ is a matrix whose entries are integer polynomials in those of $M$ and which satisfies
$$M \,\text{adj}(M) = \det(M)\, I.$$
If $\det(M) = 0$ and $\text{adj}(M) \neq 0$, then some column of $\text{adj}(M)$ lies in the kernel of $M$. Thus:
If $M$ is a counterexample, then $\text{adj}(M) = 0$.
When $n = 2$, we have $\text{adj}(M) = 0 \Rightarrow M = 0$, so this settles the $2 \times 2$ case.
Second observation: recall that by Cayley-Hamilton $p(M) = 0$ where $p$ is the characteristic polynomial of $M$. Write this as
$$M^k q(M) = 0$$
where $q$ has nonzero constant term. If $q(M) \neq 0$, then there exists some $v \in R^n$ such that $w = q(M) v \neq 0$, hence $M^k w = 0$ and one of the vectors $w, Mw, M^2 w,\dots, M^{k-1} w$ necessarily lies in the kernel of $M$. Thus if $M$ is a counterexample we must have $q(M) = 0$ where $q$ has nonzero constant term.
Now for every prime ideal $P$ of $R$, consider the induced action of $M$ on $F^n$, where $F = \overline{ \text{Frac}(R/P) }$. Then $q(\lambda) = 0$ for every eigenvalue $\lambda$ of $M$. Since $\det(M) = 0$, one of these eigenvalues over $F$ is $0$, hence it follows that $q(0) \in P$. Since this is true for all prime ideals, $q(0)$ lies in the intersection of all the prime ideals of $R$, hence
If $M$ is a counterexample and $q$ is defined as above, then $q(0)$ is nilpotent.
This settles the question for reduced rings. Now, $\text{det}(M) = 0$ implies that the constant term of $p$ is equal to zero, and $\text{adj}(M) = 0$ implies that the linear term of $p$ is equal to zero. It follows that if $M$ is a counterexample, then $M^2 \mid p(M)$. When $n = 3$, this implies that
$$q(M) = M - \lambda I = 0$$
where $\lambda$ is nilpotent, so $M = \lambda I$ is nilpotent and thus must have nontrivial kernel. So this settles the $3 \times 3$ case.
Third observation: if $M$ is a counterexample, then it is a counterexample over the subring of $R$ generated by the entries of $M$, so
We may assume WLOG that $R$ is finitely-generated over $\mathbb{Z}$.
| Here is an elementary proof of the fact that if the determinant $D$ of an $n \times n$ matrix $M$ is a zero-divisor, then there is a nonzero vector $X$ such that $MX = 0$.
Let $a$ be a nonzero scalar such that $aD = 0$. Let $M'$ be a square submatrix of $M$ of maximum size $r$ such that $a \det M' \ne 0$. (If there is no such submatrix, let $r = 0$.) We have $r < n$ by the definition of $a$. After permuting the rows and columns of $M$ if necessary, we may assume that $M'$ is located in the top left corner of $M$.
Let $M''$ be the $r \times (r + 1)$ matrix in the top left corner of $M$, and let $d_j$, for $j = 1, \dots, r+1$, be the minor of $M''$ obtained by deleting its $j$th column.
Now define $X = aX_0$, where
$$X_0 = (d_1, -d_2, \dots, (-1)^r d_{r+1}, 0, \dots, 0).$$
We have $ad_{r+1} = a\det M' \ne 0$, so $X \ne 0$.
I claim that $MX = 0$. If $i > r$, then the $i$th coordinate of $MX_0$ is, up to sign, the minor of $M$ obtained from its first $r + 1$ columns and rows $1, 2, \dots, r, i$. By the definition of $r$, this minor becomes zero upon multiplication by $a$. When $i \leq r$, the $i$th coordinate of $MX_0$ is zero because it is, up to sign, the determinant obtained by extending $M''$ with its own $i$th row. This proves the claim.
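Here is the construction carried out over $R=\mathbb Z/6$ for a small example matrix of my own choosing, $M=\begin{pmatrix}2&1\\0&3\end{pmatrix}$ with $D=\det M=6\equiv0$: take $a=1$ (any nonzero $a$ kills $D=0$), $r=1$ with $M'=(2)$ in the top-left corner, $M''=(2\ \ 1)$, minors $d_1=1$, $d_2=2$, giving $X=a(d_1,-d_2)=(1,4)\pmod 6$.

```python
# Worked instance of the construction over Z/6 (example matrix is my own).
MOD = 6
M = [[2, 1],
     [0, 3]]                              # det = 6 = 0 in Z/6
d1, d2 = 1, 2                             # minors of M'' = (2 1)
X = [(1 * d1) % MOD, (1 * -d2) % MOD]     # X = a * (d1, -d2) with a = 1
MX = [sum(M[i][j] * X[j] for j in range(2)) % MOD for i in range(2)]
assert X != [0, 0] and MX == [0, 0]       # nonzero kernel vector found
```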
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 1
} |
What is the total number of combinations of 5 items together when there are no duplicates? I have 5 categories - A, B, C, D & E.
I want to basically create groups that reflect every single combination of these categories without there being duplicates.
So groups would look like this:
*
*A
*B
*C
*D
*E
*A, B
*A, C
*A, D
*A, E
*B, C
*B, D
*B, E
*C, D
.
.
.
etc.
This sounds like something I would use the binomial coefficient $n \choose r$ for, but I am quite fuzzy on calculus and can't remember exactly how to do this.
Any help would be appreciated.
Thanks.
| Think about this from another angle. You want some number of these five categories without repetition.
Well each category is either chosen by you or not chosen by you. Each such choice bears no relationship with the other choices.
Thus there are $2^5 = 32$ possibilities. However, you are not counting the choice of none of the five categories, so we subtract $1$ to get $31$ possibilities.
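The count can be confirmed by enumerating the groups directly:

```python
from itertools import combinations

cats = ['A', 'B', 'C', 'D', 'E']
# all non-empty subsets, from size 1 up to size 5
groups = [c for r in range(1, 6) for c in combinations(cats, r)]
assert len(groups) == 2**5 - 1 == 31
```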
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 2
} |
What is the distribution of a random variable that is the product of the two normal random variables ? What is the distribution of a random variable that is the product of the two normal random variables ?
Let $X\sim N(\mu_1,\sigma_1), Y\sim N(\mu_2,\sigma_2)$
and $Z=XY$
That is, what is its probability density function, its expected value, and its variance ?
I'm kind of stuck and I can't find a satisfying answer on the web.
If anybody knows the answer, or a reference or link, I would be really thankful...
| Given the densities $\varphi$ and $\psi$ of two independent random variables, the probability that their product is less than $z$ is
$$
\iint_{xy< z}\varphi(x)\psi(y)\,\mathrm{d}x\,\mathrm{d}y\tag{1}
$$
Letting $w=xy$ so that $x=w/y$ (the change of variable reverses orientation when $y\lt0$, which accounts for the absolute value) yields
$$
\iint_{w< z}\varphi\left(\frac{w}{y}\right)\psi(y)\,\mathrm{d}\frac{w}{y}\,\mathrm{d}y=\iint_{w< z}\varphi\left(\frac{w}{y}\right)\psi(y)\,\mathrm{d}w\,\frac{\mathrm{d}y}{|y|}\tag{2}
$$
Taking the derivative of $(2)$ with respect to $z$ gives the density of the product of the random variables to be
$$
\phi(z)=\int\varphi\left(\frac{z}{y}\right)\psi(y)\,\frac{\mathrm{d}y}{|y|}\tag{3}
$$
We can compute the expected value using this distribution as
$$
\begin{align}
\mathrm{E}(Z)
&=\int z\phi(z)\,\mathrm{d}z\\
&=\iint z\,\varphi\left(\frac{z}{y}\right)\psi(y)\,\frac{\mathrm{d}y}{|y|}\,\mathrm{d}z\\
&=\iint xy\,\varphi(x)\psi(y)\,\mathrm{d}y\,\mathrm{d}x\tag{4}
\end{align}
$$
which is exactly what one would expect when computing the expected value of the product directly.
In the same way, we can also compute
$$
\begin{align}
\mathrm{E}(Z^2)
&=\int z^2\phi(z)\,\mathrm{d}z\\
&=\iint z^2\,\varphi\left(\frac{z}{y}\right)\psi(y)\,\frac{\mathrm{d}y}{|y|}\,\mathrm{d}z\\
&=\iint x^2y^2\,\varphi(x)\psi(y)\,\mathrm{d}y\,\mathrm{d}x\tag{5}
\end{align}
$$
again getting the same result as when computing this directly.
The variance is then, as usual, $\mathrm{E}(Z^2)-\mathrm{E}(Z)^2$.
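A Monte-Carlo sanity check with parameters of my own choosing. The closed forms $\mathrm E(XY)=\mu_1\mu_2$ and $\mathrm{Var}(XY)=\mu_1^2\sigma_2^2+\mu_2^2\sigma_1^2+\sigma_1^2\sigma_2^2$ are the standard identities for independent variables (not derived above):

```python
import numpy as np

rng = np.random.default_rng(0)        # fixed seed for reproducibility
mu1, s1, mu2, s2 = 1.0, 0.5, 2.0, 0.5
z = rng.normal(mu1, s1, 1_000_000) * rng.normal(mu2, s2, 1_000_000)
assert abs(z.mean() - mu1*mu2) < 0.02
assert abs(z.var() - (mu1**2*s2**2 + mu2**2*s1**2 + s1**2*s2**2)) < 0.05
```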
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 5,
"answer_id": 1
} |
Uncountability of $\overline{\mathbb{F}_p}$. In the following MathOverflow question, it has been pointed out that $\overline{\mathbb{F}_p}$ is an uncountable set. Whereas according to http://press.princeton.edu/chapters/s9103.pdf (see page 4 theorem 1.2.1) the closure $\overline{\mathbb{F}_p}$ is $\cup_{n=1}^{\infty}\mathbb{F}_{p^n}$, which I think is a countable union of finite sets and hence countable. Where am I going wrong in this?
Also, in the same document before the same theorem its mentioned that if $\mathbb{F}_q$ has characteristic $p$ then its closure is same as that of $\mathbb{F}_p$ but I think that the set $\cup_{n=1}^{\infty}\mathbb{F}_{q^n}$ is a proper subset of $\cup_{n=1}^{\infty}\mathbb{F}_{p^n}$ since $q$ is a power of $p$, thus they are not the same. Again where is the fault in my reasoning?
| The field discussed on MO is $\mathbb{F}_p((t))$, the field of formal Laurent series over $\mathbb{F}_p$. This has uncountable algebraic closure. The algebraic closure of $\mathbb{F}_p$ is countable, as you have correctly stated in the question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Important papers in arithmetic geometry and number theory Having been inspired by this question I was wondering, what are some important papers in arithmetic geometry and number theory that should be read by students interested in these fields?
There is a related wikipedia article along these lines, but it doesn't mention some important papers such as Mazur's "Modular Curves and the Eisenstein ideal" paper or Ribet's Inventiones 100 paper.
| Answer posted by Zev Chonoles in the comments:
Pete Clark recommends the "1958 paper of Lang and Tate, on Galois cohomology of abelian varieties", which I believe refers to the paper Principal Homogeneous Spaces over Abelian Varieties. I assume this is classified as arithmetic geometry?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 3,
"answer_id": 0
} |
Sum inequality: $\sum_{k=1}^n \frac{\sin k}{k} \le \pi-1$ I'm interested in finding an elementary proof for the following sum inequality:
$$\sum_{k=1}^n \frac{\sin k}{k} \le \pi-1$$
If this inequality is easy to prove, then one may easily prove that the sum is bounded.
| Let's first observe that $\sum_{k=1}^\infty u^k/k=-\ln(1-u)$.
If we're concerned about the convergence radius, we can always replace $u$ with $ue^{-\epsilon}$ and let $\epsilon\rightarrow0$. The branch of $\ln$ we're using is the one defined on $\mathbb{C}\setminus(-\infty,0]$: i.e. $\ln(re^{i\theta})=\ln r+i\theta$ where $r>0$ and $\theta\in(-\pi,\pi)$.
Inserting $\sin x=(e^{ix}-e^{-ix})/2i$, we get
$$\sum_{k=1}^\infty \frac{\sin kx}{k}
=\sum_{k=1}^\infty \frac{e^{ikx}-e^{-ikx}}{2ki}
=\frac{\ln(1-e^{-ix})-\ln(1-e^{ix})}{2i}
$$
At this point, I have two alternative solutions. In either case, I assume $x\in[0,\pi)$ to help stay within the selected branch of the logarithm.
You can look at the triangle with corners $O=0$, $I=1$ and $A=1-e^{-ix}$: this has $IO=IA$ and $\angle OIA=x$, so $\angle AOI=\frac{\pi-x}{2}$. This makes the imaginary part of $\ln(1-e^{-ix})=\angle AOI=\frac{\pi-x}{2}$; for $\ln(1-e^{ix})$ it is $-\frac{\pi-x}{2}$. The real part of the logarithm cancels out, and what remains is $\frac{\pi-x}{2}$.
Alternatively, while ensuring we stay within the branch of the logarithm, we get
$$\sum_{k=1}^\infty \frac{\sin kx}{k}
=\frac{1}{2i}\ln\frac{1-e^{-ix}}{1-e^{ix}}
=\frac{\ln(-e^{-ix})}{2i}
=\frac{\ln(e^{i(\pi-x)})}{2i}
=\frac{\pi-x}{2}.
$$
Thus, not only is the sum less than $\pi-1$: at $x=1$ it is exactly $\frac{\pi-1}{2}$. And the more general sum
$$\sum_{k=1}^\infty \frac{\sin kx}{k}
=\frac{\pi-x}{2}
$$
for $x\in[0,\pi]$: if $x=\pi$, the sum becomes zero (either by limit or because all the terms are zero).
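Numerically, the partial sums at $x=1$ indeed stay below $\pi-1$ and converge to $\frac{\pi-1}{2}\approx1.0708$ (the tail is $O(1/N)$ by Abel summation):

```python
from math import sin, pi

run, mx = 0.0, 0.0
for k in range(1, 200_001):
    run += sin(k)/k
    mx = max(mx, run)                  # largest partial sum seen
assert mx <= pi - 1                    # the claimed inequality, for these N
assert abs(run - (pi - 1)/2) < 1e-3    # the exact limit (pi-1)/2
```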
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
} |
distributional derivative I want to calculate the first and second distributional derivative of the $2\pi$-periodic function $f(t) = \frac{\pi}{4} |t|$; it is
$$
\langle f', \phi \rangle = - \langle f, \phi' \rangle = -\int_{-\infty}^{\infty} f \phi' \mathrm{d}x
$$
and
$$
\langle f'', \phi \rangle = \langle f, \phi'' \rangle = \int_{-\infty}^{\infty} f \phi'' \mathrm{d}x
$$
so I have to evaluate those integrals, but other than that $\phi$ is smooth and has compact support, I know almost nothing about $\phi$. It is enough to integrate over finite bounds, but how do I use the definition of $f$? How can I calculate those integrals?
| Let $G(t)=\max(t,0)$. Then $G'=H$, where $H$ is the Heaviside function ($H(t)=1$ if $t>0$ and $0$ otherwise). Indeed, for any test function $\phi$ we have
$$\langle G, \phi'\rangle = \int_0^{\infty} t\phi'(t)\,dt = - \int_0^{\infty} \phi(t)\,dt = - \langle H, \phi\rangle$$
in agreement with the definition of the distributional derivative. Also, $H'=\delta_0$ by a similar argument:
$$\langle H, \phi'\rangle = \int_0^{\infty} \phi'(t)\,dt = -\phi(0) = -\langle \delta_0, \phi\rangle$$
Any piecewise linear function can be written in terms of the translates of $G$. To do this for your function, you can begin with $f_0(t)=\frac{\pi}{4}G(t)-\frac{\pi}{2}G(t-\pi)+\frac{\pi}{4}G(t-2\pi)$ which agrees with $f$ on $[0,2\pi]$ and is zero elsewhere. Note that $f(t)=\sum_{n\in\mathbb Z}f_0(t+2\pi n )$, where the sum converges in a very strong sense: for any finite interval $I$, there exists $N$ such that $f(t)=\sum_{|n|\le N}f_0(t+2\pi n )$ for $t\in I$. This kind of convergence implies convergence of all distributional derivatives, since test functions have compact support.
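The defining identity $\langle G,\phi'\rangle=-\langle H,\phi\rangle$ can be checked numerically. Here $\phi(t)=e^{-t^2}$ is a rapidly decaying stand-in for a compactly supported test function, and a home-made trapezoid rule serves as quadrature:

```python
from math import exp

def trapz(f, a, b, n=100_000):
    # simple trapezoid-rule quadrature
    h = (b - a) / n
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + i*h) for i in range(1, n)))

phi  = lambda t: exp(-t*t)
dphi = lambda t: -2*t*exp(-t*t)
lhs = trapz(lambda t: t*dphi(t), 0.0, 10.0)   # <G, phi'> = int_0^inf t phi'(t) dt
rhs = -trapz(phi, 0.0, 10.0)                  # -<H, phi> = -int_0^inf phi(t) dt
assert abs(lhs - rhs) < 1e-6                  # both equal -sqrt(pi)/2
```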
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$(\mathbb R^2,+)$ as vector space over $\mathbb R$ It is not hard to see that $(\mathbb R^2,+)$ with this product
$
{r\cdot(x,y)=(rx,ry) }
$
is vector space over field $\mathbb R$.
I'm looking for another product that $(\mathbb R^2,+)$ is vector space over $\mathbb R$. I know
$n*(x,y)=\underbrace{(x,y)\oplus (x,y) \oplus \cdots \oplus (x,y)}_n=(nx,ny)$ but I have no idea for arbitrary element of $\mathbb R$.
Any Suggestion.
Thanks
| If you are allowed to use the axiom of choice, then the abelian groups $(\mathbb R, +), (\mathbb R^2, +), (\mathbb R^3,+), \ldots$ are isomorphic. In fact they are all isomorphic to a direct sum of continuum-many copies of $(\mathbb Q,+)$.
So giving a vector space structure to any of them is the same thing. In particular, we know how to give $(\mathbb R^3,+)$ a structure of $\mathbb R$-vector space which is of dimension $3$, thus by transporting it to $\mathbb R^2$, we can give a vector space structure to $\mathbb R^2$ making it into a 3-dimensional $\mathbb R$-vector space (and similarly for any dimension you wish, as long as it's not more than the continuum cardinality).
In fact, even without the axiom of choice, giving a finite dimensional $\mathbb R$-vector space structure to any abelian group $G$ is the same as finding a group isomorphism from $G$ to $\mathbb R^n$ and transporting the natural vector space structure of $\mathbb R^n$ back to $G$. So the real question is how to find those group isomorphisms.
Sadly, I don't think it is possible to find any nontrivial one without the axiom of choice, so all the nontrivial vector space structure you can put on $\mathbb R^2$ need you to use it, and are not constructive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Compute the limit of $\frac1{\sqrt{n}}\left(1^1 \cdot 2^2 \cdot3^3\cdots n^n\right)^{1/n^2}$ Compute the following limit:
$$\lim_{n\to\infty}\frac{{\left(1^1 \cdot 2^2 \cdot3^3\cdots n^n\right)}^\frac{1}{n^2}}{\sqrt{n}} $$
I'm interested in almost any approach to this limit. Thanks.
| Let's begin
$$
\lim\limits_{n\to\infty}\frac{\left(\prod\limits_{k=1}^n k^k\right)^{\frac{1}{n^2}}}{\sqrt{n}}=
\lim\limits_{n\to\infty}\exp\left(\frac{1}{n^2}\sum\limits_{k=1}^n k\log k - \frac{1}{2}\log n\right)=
$$
$$
\lim\limits_{n\to\infty}\exp\left(\frac{1}{n^2}\sum\limits_{k=1}^n k\log\left(\frac{k}{n}\right)+\frac{1}{n^2}\sum\limits_{k=1}^n k\log n - \frac{1}{2}\log n\right)=
$$
$$
\lim\limits_{n\to\infty}\exp\left(\sum\limits_{k=1}^n \frac{k}{n}\log\left(\frac{k}{n}\right)\frac{1}{n}+\frac{1}{2}\log n\left(\frac{n^2+n}{n^2}-1\right)\right)=
$$
$$
\exp\left(\lim\limits_{n\to\infty}\sum\limits_{k=1}^n \frac{k}{n}\log\left(\frac{k}{n}\right)\frac{1}{n}+\frac{1}{2}\lim\limits_{n\to\infty}\frac{\log n}{n}\right)=
$$
$$
\exp\left(\int\limits_{0}^1 x\log x dx\right)=\exp\left(-1/4\right)
$$
And now we are done!
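A numerical check of the result, computed via logarithms since the product itself overflows; the expression converges to $e^{-1/4}\approx0.7788$:

```python
from math import log, exp

n = 100_000
# log of (prod k^k)^(1/n^2) / sqrt(n) = (1/n^2) sum k log k - (1/2) log n
val = exp(sum(k*log(k) for k in range(1, n+1))/n**2 - 0.5*log(n))
assert abs(val - exp(-0.25)) < 1e-3
```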
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 3,
"answer_id": 0
} |
Properties of Topological Groups I'm working through William Basener's Topology and Its Applications and I have come across a problem I can't solve. The book defines a topological group as a group equipped with a topology where for each element $a$, $L_{a}(x) = a + x$ and $R_{a}(x) = x + a$ are both continuous. I need to prove that if the topology underlying the group is Hausdorff then $f(x, y) = x - y$ is continuous iff all such functions $L_a$ and $R_a$ are continuous. Any ideas?
| The definition of topological group which you wrote is highly non-standard (I never saw it before) and the claim which you tried to prove is false. The situation is described at the first page of my paper. :-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Graph of a continuous function $f:[a,b]\to(0,\infty)$ is measurable only if $f$ is differentiable? I am studying measure theory, and I came across the following problem:
Let $f: [a, b]\to (0, \infty)$ be continuous, and let $ G = \{ (x, y): y = f(x)\}.$ Prove that $G$ is measurable only if $f$ is differentiable in $(a, b)$.
| That claim seems false. In fact, I think it proves itself false, as follows: on the interval $[a,b]$, the function $f:[a,b]\to(0,\infty)$ defined by $f(x)=|x-\frac{a+b}{2}|+1$, which isn't differentiable at $\frac{a+b}{2}$, has a graph of
$$G=\{(x,\tfrac{a+b}{2}-x+1)\in\mathbb{R}^2\mid x\in[a,\tfrac{a+b}{2}]\}\cup \{(x,x-\tfrac{a+b}{2}+1)\in\mathbb{R}^2\mid x\in[\tfrac{a+b}{2},b]\}$$
each of which ought to be measurable by the claim, and therefore their union should be too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Polynomial of same degree
Let $p(z)$ and $q(z)$ be two polynomials of the same degree whose zeros all lie inside the open unit disc, with $|p(z)|=|q(z)|$ on the unit circle. Show that $p(z)=\lambda q(z)$ where $|\lambda|=1$.
Please just give a hint not the whole solution. Thank you.
| Let $d$ be the common degree of $P$ and $Q$. Define $P_1(z):=z^dP(1/z)$ and $P_2(z):=z^dQ(1/z)$. These polynomials do not vanish in the closed unit disk. We apply the maximum modulus principle to $P_1/P_2$ and $P_2/P_1$; together these show $|P_1/P_2|\equiv1$ on the unit disk, hence $P_1/P_2$ is constant (a holomorphic function with constant modulus on a connected open set is itself constant).
Note that the fact that $P$ and $Q$ have the same degree is necessary. Indeed, $P(z)=z$ and $Q(z)=z^2$ have their roots in the open unit disk and the same modulus on the unit circle, but of course are not equal up to a constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
how to change polar coordinate into cartesian coordinate using transformation matrix I would like to change $(3,4,12)$ in $xyz$ coordinates to spherical coordinates using the following relation
It is from this link. I do not understand the significance of this matrix (if not for coordinate transformation) or how it is derived. Also, please check my previous question building transformation matrix from spherical to cartesian coordinate system. I need your insight to build my understanding.
Thank you.
EDIT::
I understand that $ \left [ A_x \sin \theta\cos \phi \hspace{5 mm} A_y \sin \theta\sin\phi \hspace{5 mm} A_z\cos\theta\right ]$ gives $A_r$, but how are the other components $(A_\theta, A_\phi)$ equal to their respective rows from the matrix multiplication?
| This is not the Matrix you're looking for. For a simple co-ordinate switch you can just use the relations:
$$\begin{align*}x &= \rho\sin\theta\cos\phi\\
y &= \rho\sin\theta\sin\phi \\
z &= \rho\cos\theta\end{align*}$$
And the inverse operations:
$$\begin{align*}\rho &= \sqrt{x^2 + y^2 + z^2}\\
\phi &= \arctan\dfrac yx\\
\theta &= \arctan\left(\frac{\sqrt{x^2 + y^2}}z\right)\end{align*}$$
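These relations applied to the point $(3,4,12)$ from the question give $\rho=13$, and mapping back recovers the point; `atan2` is used here instead of a raw arctangent ratio so the quadrants come out right:

```python
from math import sqrt, atan2, sin, cos

x, y, z = 3.0, 4.0, 12.0
rho   = sqrt(x*x + y*y + z*z)            # 13
theta = atan2(sqrt(x*x + y*y), z)        # polar angle
phi   = atan2(y, x)                      # azimuth
assert abs(rho - 13) < 1e-12
# forward relations recover the original point
back = (rho*sin(theta)*cos(phi), rho*sin(theta)*sin(phi), rho*cos(theta))
assert all(abs(u - v) < 1e-9 for u, v in zip(back, (x, y, z)))
```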
However the matrix you've found is for mapping a vector between the co-ordinate systems. For example (using a textbook, Engineering Electromagnetics by Demarest. Example 2-6, p34)
Need to do an integration of $\int( r^3\cos\phi\sin\theta\cdot Ar) d\theta d\phi$
Where $Ar$ is a unit vector in the radial direction. The integral is over phi and theta but also dependent on phi and theta, therefore it's much easier to do this by switching back to cartesian coordinates by the relation:
$$Ar = \sin\theta\cos\phi\cdot Ax + \sin\theta\sin\phi\cdot Ay + \cos\theta\cdot Az$$
Once we substitute that straight in for $Ar$ the integral looks longer, but we've removed the dependence inside the integrand, so we can do the integration in a straightforward way.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Simple functional equation. Find $f:\mathbb Q\longrightarrow\mathbb Q$ (own) Find the functions $f : \mathbb{Q} \to \mathbb{Q}$ knowing that $$2f\left(f\left(x\right)+f\left(y\right)\right)=f\left(f\left(x+y\right)\right)+x+y,\ \forall x,\ y\in\mathbb{Q}
$$
| First of all, to make things easier to write, let $u:=f(0)$. In the text that follows, I shall use the word "equation" to refer to the functional equation from the question. We start by proving:
Lemma 1. Suppose $f$ is a solution of the equation. Then $f$ is injective.
Proof. Suppose $f(x_0)=f(y_0)=a_0$. Then the equation tells us: $$f(a_0)+x_0=2f(a_0+u)=f(a_0)+y_0$$ For the first equality, we simply plug $x=x_0,y=0$ into the equation and for the second equality, we plug in $x=0,y=y_0$. But this immediately gives us $x_0=y_0$ and we are done. $\square$
Next, we notice:
Lemma 2. Suppose $f$ is a solution of the equation. Then for all $x,y\in\Bbb Q$: $$f(x+y)+u=f(x)+f(y)$$ holds.
Proof. Let $x_0,y_0\in\Bbb Q$. Then the equation tells us: $$2f(f(x_0)+f(y_0))=f(f(x_0+y_0))+x_0+y_0=2f(f(x_0+y_0)+u)$$
Here, we get the first equality by plugging $x=x_0,y=y_0$ into the equation, and the second one by plugging in $x=x_0+y_0,y=0$. But since $f$ must be injective by the first lemma, this means precisely that $$f(x_0)+f(y_0)=f(x_0+y_0)+u$$ which is what we wanted to show. $\square$
Lemma 3. Suppose $f$ is a solution of the equation. Then there exists a $c\in\Bbb Q$ such that $f(x) = cx+u$ for $x\in\Bbb Q$.
Proof. Define another function $h$ by $h(x)=f(x)-u$ for $x\in\Bbb Q$. Then for all $x,y\in\Bbb Q$ we have $$h(x+y)=f(x+y)-u=f(x)-u+f(y)-u=h(x)+h(y)$$ where the second equality follows by the second lemma. So $h$ is an additive function. By the standard argument, there is a constant $c\in\Bbb Q$ such that $h(x)=cx$ for all $x\in\Bbb Q$. But then $f(x)=cx+u$. $\square$
Now, since every solution must be of this form, we can now simply substitute this in the functional equation, which gives us: $$2(c(cx+u+cy+u)+u)=c(c(x+y)+u)+u+x+y$$ for all $x,y\in\Bbb Q$. Now we plug $x=0,y=0$ and $x=0,y=1$ into this to get the following equations (after simplifying): $$(3c+1)u=0\\3cu+c^2+u-1=0$$
First of these tells us that in order for $f$ to be a solution, we must either have $u=0$ or $c=-\frac13$. But plugging $c=-\frac13$ into the second equation gives us $1 = \frac19$, which is false, so $c=-\frac13$ is not a possibility. Therefore $u$ must indeed be $0$.
But plugging this into the second equation gives us $c^2=1$, so either $c=1$ or $c=-1$. So the only possible solutions are $f=\pm\operatorname{id}_{\Bbb Q}$ and as these are indeed solutions, we have proved:
Proposition. A function $f$ is a solution of the equation if and only if $f=\pm\operatorname{id}_{\Bbb Q}$.
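A quick numerical sanity check of the proposition (my own sketch, using `fractions.Fraction` to stay inside $\Bbb Q$):

```python
from fractions import Fraction
from itertools import product

def satisfies_equation(f):
    """True iff 2 f(f(x)+f(y)) = f(f(x+y)) + x + y on a grid of rationals."""
    grid = [Fraction(p, q) for p in range(-4, 5) for q in (1, 2, 3)]
    return all(2*f(f(x) + f(y)) == f(f(x + y)) + x + y
               for x, y in product(grid, repeat=2))

assert satisfies_equation(lambda x: x)        # f = id works
assert satisfies_equation(lambda x: -x)       # f = -id works
assert not satisfies_equation(lambda x: 2*x)  # other linear maps fail
```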
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Series Expansion An old problem from Whittaker and Watson I'm having issues with. Any guidance would be appreciated.
Show that the function
$$
f(x)=\int_0^\infty \left\{ \log u +\log\left(\frac{1}{1-e^{-u}} \right) \right\}\frac{du}{u}e^{-xu}
$$
has the asymptotic expansion
$$
f(x)=\frac{1}{2x}-\frac{B_1}{2^2x^2}+\frac{B_3}{4^2x^4}-\frac{B_5}{6^2x^6}+\;... \;,
$$
where
$$B_1, B_3, ...$$
are Bernoulli's numbers.
Show also that f(x) can be developed as an absolutely convergent series of the form
$$
f(x)=\sum_{k=1}^\infty\frac{c_k}{(x+1)(x+2)...(x+k)}
$$
| Note that:
$$\frac{d}{du} \left\{ \ln u + \ln \left( \frac{1}{1- e^{-u}}\right) \right\} = \frac{1}{u} - \frac{1}{e^u - 1} = -\sum_{n = 1}^{+\infty} \frac{B_n u^{n-1}}{n!}$$
then:
$$\ln u + \ln \left( \frac{1}{1- e^{-u}}\right) = -\sum_{n=1}^{+\infty} \frac{B_n}{n! \cdot n} u^n $$
so we can rewrite the integral as:
$$-\int_0^{+\infty} \sum_{n=1}^{+\infty} \frac{B_n}{n! \cdot n} u^n \cdot \frac{e^{-xu}}{u} \, du = - \sum_{n=1}^{+\infty} \frac{B_n}{n! \cdot n} \cdot \frac{(n-1)!}{x^n} = - \sum_{n=1}^{+\infty} \frac{B_n}{n^2 x^n}$$
In other words it's equal to:
$$f(x) = \frac{1}{2x} - \frac{B_2}{2^2 \cdot x^2} - \frac{B_4}{4^2 \cdot x^4} - \frac{B_6}{6^2 \cdot x^6} - \ldots$$
Here I've used the modern convention for Bernoulli numbers, as at MathWorld ($B_1=-\frac12$, $B_2=\frac16$, $B_4=-\frac1{30}$, and the odd-indexed ones beyond $B_1$ vanish). Since the $B_{2k}$ alternate in sign, this agrees with the alternating expansion in the question, where $B_1, B_3, \ldots$ denote the older, all-positive Bernoulli numbers.
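The series identity $\ln u + \ln\frac{1}{1-e^{-u}} = -\sum_{n\ge1} \frac{B_n}{n!\,n}u^n$ used above can be checked numerically (my own sketch; Bernoulli numbers in the modern convention, generated by the standard recursion $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$):

```python
import math
from fractions import Fraction

N = 20
B = [Fraction(1)]                       # B_0 = 1
for m in range(1, N + 1):
    # sum_{j=0}^{m} C(m+1, j) B_j = 0, solved for B_m
    B.append(-sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
assert B[1] == Fraction(-1, 2) and B[2] == Fraction(1, 6)

u = 0.5
lhs = math.log(u) + math.log(1 / (1 - math.exp(-u)))
rhs = -sum(float(B[n]) / (math.factorial(n) * n) * u**n for n in range(1, N + 1))
assert abs(lhs - rhs) < 1e-12           # truncation error is far below this
```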
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Calculate the limit at x=0 Find the limit of $f(x)=\frac{\sqrt{a^2-ax+x^2}-\sqrt{a^2+ax+x^2}}{\sqrt{a-x}-\sqrt{a+x}}$ (at x=0) so that $f(x)$ becomes continuous for all $x$. My answer is $2\sqrt{a}$. Am I right?
Sorry to state the question incorrectly. We have to define $f(x)$ at $x=0$ such that $f(x)$ is continuous for all $x$. In that case my answer is $2\sqrt{a}$.
Let $f(x)=[x]+[-x]$ be a function, where $[\cdot]$ stands for the greatest integer not greater than $x$. For any integer $m$, what can we say about $\lim_{x \to m} f(x)$? Is $f(x)$ continuous at $x=m$? Sorry for asking such a vague question, but I am forgetting the exact wording and the options given. This was a question asked in a class test.
| EDIT::
Direct substitution of $x=0$ gives an indeterminate form, not $2\sqrt a$:
$$ \frac{\sqrt{a^2}-\sqrt{a^2}}{\sqrt a - \sqrt a} = \frac{0}{0} $$
You have to multiply by the conjugates of both the numerator and the denominator, and you get the following.
$$ \frac{\sqrt{a^2-ax+x^2}-\sqrt{a^2+ax+x^2}}{\sqrt{a-x}-\sqrt{a+x}} \\
= \frac{(a^2-ax+x^2)-(a^2+ax+x^2)}{(a-x)-(a+x)} \times \frac{\sqrt{a-x}+\sqrt{a+x}}{\sqrt{a^2-ax+x^2}+\sqrt{a^2+ax+x^2}} \\
= \frac{-2ax}{-2x}\times \frac{\sqrt{a-x}+\sqrt{a+x}}{\sqrt{a^2-ax+x^2}+\sqrt{a^2+ax+x^2}}
$$
Taking the limit $x \rightarrow 0$ we get
$$a\cdot \frac{2\sqrt a}{2 a} = \sqrt a.$$
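A numerical check of this limit (my own sketch, with the arbitrary choice $a=4$, so the limit should be $\sqrt a = 2$, not $2\sqrt a = 4$):

```python
import math

def f(x, a):
    num = math.sqrt(a*a - a*x + x*x) - math.sqrt(a*a + a*x + x*x)
    den = math.sqrt(a - x) - math.sqrt(a + x)
    return num / den

a = 4.0
for x in (1e-3, 1e-5, 1e-7):
    # f(x) -> sqrt(a) = 2 as x -> 0
    assert abs(f(x, a) - math.sqrt(a)) < 1e-2
```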
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Convergent operatorial series An exercise I was doing asks (among other things) for the values of $z\in\mathbb{C}$ for which the following (operatorial) series converges absolutely:
$$\sum_{n=0}^{\infty}z^nA^n$$
where $A$ is an operator in the Hilbert space $L^2(0,2\pi)$ such that
$$(Af)(x)=\frac{1}{\pi}\int_0^{2\pi}[\cos(x)\cos(y)+\sin(x)\sin(y)]f(y)dy$$
I understand that $A$ is basically a projection operator in the form $Af = c\langle c,f\rangle+s\langle s,f\rangle$, where $s=\frac{1}{\sqrt{\pi}}\sin (x)$ and $c=\frac{1}{\sqrt{\pi}}\cos (x)$, so $||A||=1$ and $A^n = A$.
I also understand that, if I interpreted well, you should apply the Cauchy-Hadamard theorem to $\sum_{n=0}^{\infty}z^nA^n$ and search if it converges absolutely in the Banach space of all bounded operators between $L^2(0,2\pi)$ and itself. But with the Cauchy-Hadamard theorem you can conclude that the radius of convergence is $(\limsup||A^n||^{1/n})=1$. The answer to the exercise, however, is different and more cryptic:
"The series converges in norm when $||zA||\le 1$ ($|z|\le1$)"
How could you include the case $|z|=1$?
Your solution is correct and the solution in your book is wrong, because of the simple case $z=1$. In that case we have an infinite sum of identical non-zero operators; more precisely, we have
$$
\sum\limits_{n=1}^\infty 1^nA^n=\sum\limits_{n=1}^\infty A= ???
$$
Since $A^n=A$ our series is of the form
$$
\sum\limits_{n=0}^\infty z^n A
$$
which is absolutely convergent for $|z|<1$, because convergence depends only on the series
$$
\sum\limits_{n=0}^\infty z^n
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Describing multivariable functions So I am presented with the following question:
Describe and sketch the largest region in the $xy$-plane that corresponds to the domain of the function:
$$g(x,y) = \sqrt{4 - x^2 - y^2} \ln(x-y).$$
Now I can find different restrictions like $4 - x^2 - y^2 \geq 0$... but I'm honestly not even sure where to begin with this question! Any help?
| When is $g(x,y)$ defined and takes a real value? Well, the square root has to be of a nonnegative number, so $x^2+y^2\le4$. Also, when you are taking $\log(x-y)$, for the logarithm to be defined, $x-y$ must be greater than $0$. Thus, the domain of $g(x,y)$ consists of all points $(x,y)$ such that
(1) $x^2+y^2\le4$ and
(2) $y<x$
Sketch (1) and (2) on an $xy$ plane. (1) is a filled-in circle with radius $2$ centered around the origin (all points on the boundary are points in the domain). (2) is all points strictly below the line $y=x$ (all points on the boundary are not points in the domain). For $g(x,y)$ to be defined, both (1) and (2) must be satisfied, so we must take the intersection of these two figures.
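The two conditions translate directly into a membership test (my own sketch; `in_domain` is a hypothetical helper name):

```python
def in_domain(x, y):
    """g(x, y) = sqrt(4 - x^2 - y^2) * ln(x - y) is real-valued exactly here."""
    return x*x + y*y <= 4 and y < x

assert in_domain(1, 0)         # inside the disk, below the line y = x
assert not in_domain(0, 1)     # above the line y = x
assert not in_domain(3, 0)     # outside the disk
assert in_domain(2, 0)         # circle boundary is included...
assert not in_domain(1, 1)     # ...but the line y = x is not
```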
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A fifth degree polynomial $P(x)$ with integral coefficients. A fifth degree polynomial $P(x)$ with integral coefficients takes on values $0,1,2,3,4$ at $x=0,1,2,3,4$, respectively.
Which of the following is a possible value for $P(5)$?
A) $5$
B) $24$
C) $125$
D)None of the above
| Any polynomial of degree at most five, fulfilling the given interpolation conditions, has the form
$$
P(x) = x + a\prod_{i=0}^4 (x-i)
$$
for some $a \in \mathbb Z$ (we can see this if we write $P$ in the Newton basis, for example). So $P(5) = 5 + 5!a = 5 + 120a$. Now A) corresponds to $a = 0$, B) to $a = \frac{19}{120}$ and C) to $a=1$. As we want $P$ to be of fifth degree, we need $a \ne 0$, and as $\frac{19}{120} \not\in\mathbb Z$, the correct answer is C), giving $P(x) = x + \prod_{i=0}^4(x-i)$.
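A quick check of the $a=1$ case (my own sketch):

```python
from math import prod

def P(x, a=1):
    return x + a * prod(x - i for i in range(5))

assert [P(x) for x in range(5)] == [0, 1, 2, 3, 4]  # interpolation conditions hold
assert P(5) == 125                                  # answer C)
assert P(5) == 5 + 120 * 1                          # matches P(5) = 5 + 5! a
```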
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
A number has 101 composite factors. A number has 101 composite factors. How many prime factors at max A number could have ?
| Suppose $m = p_1^{a_1} ... p_n^{a_n}$ has exactly $101$ composite factors.
Then $101 + (1+n) = (a_1 + 1)(a_2+1) ... (a_n+1)$, since the RHS counts all divisors of $m$, and these consist of $1$, the $n$ primes $p_i$, and the $101$ composite factors.
But the RHS is at least $2^n$ and it is easily checked that the inequality:
$102 + n \geq 2^n$
fails for $n \geq 7$. So there can be at most $6$ primes in the factorisation of $m$.
We now try to decompose the numbers $101 + (1+n)$ into a product of exactly $n$ integers for $n=1,2,3,4,5,6$, in order to see whether the $a_i$ can actually exist in each case.
We see that:
$108 = 2^2 \times 3^3$
$107$ is prime
$106 = 2\times 53$
meaning that the cases for $n=4,5,6$ cannot work.
However the number $105 = 3\times 5\times 7$ does have such a representation as a product of three numbers. Hence the biggest number of primes you may have in $m$ is $3$ in order to have exactly 101 composite factors.
Such a number is given by $m = p_1^2 p_2^4 p_3^6$ for any three different primes you wish.
As an aside, all such numbers $m$ must be of one of the following forms:
$p_1^2 p_2^4 p_3^6$
$p_1^7 p_2^{12}$
$p_1^3 p_2^{25}$
$p_1 p_2^{51}$
$p_1^{102}$
Where $p_1,p_2,p_3$ are distinct primes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Finding solutions to $(4x^2+1)(4y^2+1) = (4z^2+1)$ Consider the following equation with integral, nonzero $x,y,z$
$$(4x^2+1)(4y^2+1) = (4z^2+1)$$
What are some general strategies to find solutions to this Diophantine?
If it helps, this can also be rewritten as $z^2 = x^2(4y^2+1) + y^2$
I've already looked at On the equation $(a^2+1)(b^2+1)=c^2+1$
| Let $a$ be a positive integer.
Then
\begin{align}
(4a^2+1)(4((2a)^2)^2+1) &= 256a^6 + 64a^4 + 4a^2 + 1 \\
& = 4(64a^6 + 16a^4 + a^2) + 1 \\
&= 4(a^2(8a^2+1)^2)+1 \\
&= 4((8a^2+1)a)^2+1
\end{align}
so $(a, (2a)^2, (8a^2+1)a)$ is always a solution.
There are others as well.
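The family can be verified directly (my own sketch):

```python
def lhs(x, y):
    return (4*x*x + 1) * (4*y*y + 1)

# (x, y, z) = (a, (2a)^2, (8a^2+1)a) solves (4x^2+1)(4y^2+1) = 4z^2+1
for a in range(1, 200):
    x, y, z = a, (2*a)**2, (8*a*a + 1)*a
    assert lhs(x, y) == 4*z*z + 1
```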
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Finding $\frac{d^2 y}{dx^2}$ I am not sure how to do this but I need to find $\frac{dy}{dx}$ and $\frac{d^2 y}{dx^2}$
For $x = t^2 + 1, y= t^2+t$
And then show what t values gives a concave upward.
I know the simple formula to find $\frac{dy}{dx}$
I get $$\frac{dy}{dx} = \frac{y'}{x'}$$
$$\frac{dy}{dx} = \frac{2t+1}{2t}$$
$$\frac{d^2 y}{dx^2} = \frac{\frac{dy}{dx}}{dx}$$
$$\frac{\frac{2t+1}{2t}}{2t}$$
This is wrong and I am not sure why, they end with a negative number which makes no sense to me.
| You have $$\frac{dy}{dx}=\frac{2t+1}{2t}=1+\frac1{2t}\;.$$
To differentiate this again with respect to $x$, you must repeat what you did to get this: calculate
$$\frac{\frac{d}{dt}\left(\frac{dy}{dx}\right)}{dx/dt}\;.$$
You forgot to do the differentiation in the numerator. When you do it, you get
$$\frac{\frac{d}{dt}\left(1+\frac1{2t}\right)}{2t}=\frac{-\frac1{2t^2}}{2t}=-\frac1{4t^3}\;.$$
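One can confirm $-\frac1{4t^3}$ with a finite-difference check (my own sketch). Since this is positive exactly when $t<0$, those are the parameter values giving a concave-upward curve:

```python
def x(t): return t*t + 1
def slope(t): return (2*t + 1) / (2*t)   # dy/dx as a function of t

t0, h = 1.0, 1e-5
# d/dx (dy/dx) = [d/dt (dy/dx)] / (dx/dt), via central differences
second = (slope(t0 + h) - slope(t0 - h)) / (x(t0 + h) - x(t0 - h))
assert abs(second - (-1 / (4 * t0**3))) < 1e-8

assert -1 / (4 * (-2.0)**3) > 0          # t < 0: concave upward
```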
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Closest point of line segment without endpoints I know of a formula to determine shortest line segment between two given line segments, but that works only when endpoints are included. I'd like to know if there is a solution when endpoints are not included or if I'm mixing disciplines incorrectly.
Example : Line segment $A$ is from $(1, 1)$ to $(1, 4)$ and line segment $B$ is from $(0, 0)$ to $(0, 2)$, so shortest segment between them would be $(0, 1)$ to $(1, 1)$. But of line segment $A$ did not include those end points, how would that work since $(1, 1)$ is not part of line segment $A$?
| In your example, if the endpoints are not included in $A$ and $B$, then there is no shortest line segment connecting the two, since for any supposedly shortest path someone gives you, you can just slide the top end a teeny bit more downward in the direction of $(1,1)$.
In general, if the endpoints of the given line segments are not included, you can either still get a segment connecting the two that is as short as the one you would be able to get had the endpoints been included, or you can get arbitrarily close to that distance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
First order ordinary differential equations involving powers of the slope
Are there any general approaches to differential equations like
$$x-x\ y(x)+y'(x)\ (y'(x)+x\ y(x))=0,$$
or that equation specifically?
The problem seems to be the term $y'(x)^2$. Solving the equation for $y'(x)$ like a qudratic equation gives some expression $y'(x)=F(y(x),x)$, where $F$ is "not too bad" as it involves small polynomials in $x$ and $y$ and roots of such object. That might be a starting point for a numerical approach, but I'm actually more interested in theory now.
$y(x)=1$ is a stationary solution. Plugging in $y(x)\equiv 1+z(x)$ and taking a look at the new equation makes me think functions of the form $\exp{(a\ x^n)}$ might be involved, but that's only speculation. I see no symmetry whatsoever and dimensional analysis fails.
| The usual existence-and-uniqueness theory has problems because $y'$ is not a Lipschitz function of $x$ and $y$. In particular, besides $y(x)\equiv1$, the initial value $y(0)=1$ seems to admit a series solution of the form
$$y = 1-\frac{1}{2}\,{x}^{2}+\frac{1}{6}\,{x}^{3}+{\frac {7}{48}}\,{x}^{4}+{\frac {1}{240}}
\,{x}^{5}+{\frac {1}{2160}}\,{x}^{6}+{\frac {787}{30240}}\,{x}^{7}+{
\frac {20047}{645120}}\,{x}^{8}+{\frac {370693}{10450944}}\,{x}^{9} + \ldots
$$
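The series can be verified exactly with rational arithmetic (my own sketch): substituting $y=1+z$ turns the equation into $-xz + z'^2 + xz' + xzz' = 0$, and the residual of the truncated series vanishes identically through order $x^9$.

```python
from fractions import Fraction as F

# coefficients of z = y - 1 = sum c[k] x^k, read off the series above
c = {2: F(-1, 2), 3: F(1, 6), 4: F(7, 48), 5: F(1, 240), 6: F(1, 2160),
     7: F(787, 30240), 8: F(20047, 645120), 9: F(370693, 10450944)}
d = {k - 1: k * v for k, v in c.items()}          # coefficients of z'

def conv(p, q, nmax):
    """Product of two coefficient dicts, truncated at degree nmax."""
    r = {}
    for i, pi in p.items():
        for j, qj in q.items():
            if i + j <= nmax:
                r[i + j] = r.get(i + j, F(0)) + pi * qj
    return r

nmax, res = 9, {}
for k, v in c.items():                            # -x z
    res[k + 1] = res.get(k + 1, F(0)) - v
for k, v in d.items():                            # + x z'
    res[k + 1] = res.get(k + 1, F(0)) + v
for k, v in conv(d, d, nmax).items():             # + (z')^2
    res[k] = res.get(k, F(0)) + v
for k, v in conv(c, d, nmax - 1).items():         # + x z z'
    res[k + 1] = res.get(k + 1, F(0)) + v

assert all(res.get(k, F(0)) == 0 for k in range(nmax + 1))
```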
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is it true that $\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}$? I came across an equality, which states that
If $D\subset\mathbb{R}^n, n\geq 2$ is compact, for each $ f\in C(D)$, we have the following equality
$$\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}.$$
Actually I can not judge if it is right. Can anyone tell me if it is right, and if so, how to prove it?
Thanks a lot.
| Since $f$ is continuous on a compact set $D$, the minimum and maximum of $f(x)$ over $D$ exist. Suppose that the minimum is $a$ and the maximum is $b$.
There are two cases, depending on the relative size of $|a|$ and $|b|$.
Case 1: $|a| \le |b|$. Then $\max |f(x)|=|b|$. (Note that $|a|\le|b|$ with $b<0$ forces $a=b$, so either $b\ge0$ or $|a|=|b|$; in both situations the formula $\max|f(x)|=|b|$ holds.)
But $|\max f(x)|=|b|$ and $|\min f(x)|=|a|$, so $\max\{|\max f(x)|, |\min f(x)|\}=|b|$.
Case 2: $|a| \gt |b|$. This can only happen if $a$ is negative. Then $\max|f(x)|=|a|$. But $|\max f(x)|=|b|$ and $|\min f(x)|=|a|$, so $\max\{|\max f(x)|, |\min f(x)|\}=|a|$.
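The case analysis amounts to a one-line identity, easy to spot-check on finite samples of $f$ (my own quick sketch):

```python
import math

def identity_holds(values):
    return max(abs(v) for v in values) == max(abs(max(values)), abs(min(values)))

samples = [math.sin(t / 10) - 0.3 for t in range(-40, 41)]
assert identity_holds(samples)
# the two cases: |min| dominates, |max| dominates, mixed signs
assert identity_holds([-5, -2, -1]) and identity_holds([1, 2, 5]) and identity_holds([-3, 4])
```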
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Sum of Natural Number Ranges? Given a positive integer $n$, some positive integers $x$ can be represented as follows:
$$1 \le i \le j \le n$$
$$x = \sum_{k=i}^{j}k$$
Given $n$ and $x$ determine if it can be represented as the above sum (if $\exists{i,j}$), and if so determine the $i$ and $j$ such that the sum has the smallest number of terms. (minimize $j-i$)
I am not sure how to approach this. Clearly closing the sum gives:
$$x = {j^2 + j - i^2 + i \over 2}$$
But I'm not sure how to check if there are integer solutions, and if there are to find the one with smallest $j-i$.
| Some pointers:
Factor the numerator of your fraction to get
$$x = {j^2 + j - i^2 + i \over 2}=\frac{(j-i+1)(j+i)}2\;.$$
Let $d=j-i+1$ and $s=j+i$; you have $2x=ds$, and you want to minimize $d$. Moreover, you have $d+s=2j+1$, so $d$ and $s$ are of opposite parity: one is odd, and one is even.
Now suppose that $ds$ is any factorization of $2x$ such that $d$ and $s$ have opposite parity. Then $d+s-1$ is even, so you can set $j=\frac12(d+s-1)$, and so is $s-d+1$, so you can set $i=\frac12(s-d+1)$. I leave it to you to check that if you do this, you really will have $x=\sum_{k=i}^jk$.
Does $2x$ always have such a factorization? Yes: even if $x$ is a power of $2$, it can be written as $1\cdot(2x)$. The question is whether it has one that yields $i$ and $j$ in the required range. Note that it’s enough to get $j\le n$, i.e., $d+s-1\le 2n$, or $d+s\le 2n+1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
for what value of $a$ has equation rational roots? Suppose that we have following quadratic equation containing some constant $a$
$$ax^2-(1-2a)x+a-2=0.$$
We have to find all integers $a$,for which this equation has rational roots.
First I have tried to determine for which $a$ this equation has a real solution, so I calculated the discriminant (also I guessed that $a$ must not be equal to zero, because in this situation it would be a linear form, namely $-x-2=0$ with the trivial solution $x=-2$).
So $$D=(1-2a)^2-4a(a-2)$$ if we simplify,we get $$D=1-4a+4a^2-4a^2+8a=1+4a$$
So we have the roots $x_{1,2}=\dfrac{(1-2a)\mp\sqrt{1+4a}}{2a}$.
Because we are interested in rational numbers, it is clear that $1+4a$ must be a perfect square. Because $a=0$, I think, is not included in the solution set, what I have tried is to use $a=2$, $a=6$, $a=12$; but are these the only solutions I need, or how can I find all values of $a$?
| Put $4a+1=m^2$ for some integer $m$; then $4a=(m+1)(m-1)$. Now $m$ must be odd, otherwise the RHS won't be divisible by $4$. Let $m=2k+1$; this gives $4a+1=4k^2+4k+1 \implies a=k(k+1)$. So put $k=1,2,3,$ and so on, and you will get all the required values of $a$.
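A check with exact arithmetic (my own sketch): for $a=k(k+1)$ the discriminant is $(2k+1)^2$ and both roots are rational.

```python
from fractions import Fraction as F

for k in range(1, 50):
    a = k * (k + 1)
    m = 2 * k + 1
    assert m * m == 1 + 4 * a                  # perfect-square discriminant
    for sign in (1, -1):
        x = F((1 - 2*a) + sign * m, 2 * a)     # rational root
        assert a * x * x - (1 - 2*a) * x + (a - 2) == 0
```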
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Resources for kids education Most of the questions on MSE about educational resources are for high school. At least I haven't found questions which deals with sites on very elementary mathematics.
I will be grateful if someone provides me with list of educational resources for kids - e.g. of the level of kindergarten, primary school and junior high school.
If you think that this question is not for MSE, tell me were is it appropriate to ask?
| Though not a direct answer to your question, another resource you may be able to use is a university sponsored Math Circle program. They really are amazing and I am extremely jealous of those kids (I wish I got to go to a Math Circle when I was their age!). If you're not familiar with what it is, it's basically a program where your kids can go and learn mathematics they wouldn't normally be exposed to in school, with peers of their age.
To give you an example, at UCLA (where I attend) the fifth grade age group learned basic abstract algebra, cryptography, and fundamentals of proofs.
Check your local schools out! Math Circle Main Website
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Single-digit even natural number solutions to the equation $a+b+c+d = 24$ such that $a+b > c+d$
Possible Duplicate:
Two algebra questions
How to approach the below question:
How many single-digit even natural number solutions are there for the equation $a+b+c+d = 24$ such that $a+b > c+d$?
| The single-digit even natural numbers are $2$, $4$, $6$, $8$. The sum is $24$, quite big. So the number of possibilities is not large. Almost any careful listing will do the job. But here is a possible systematic approach.
The average of our numbers must be $6$. If they are all $6$, we violate $a+b\gt c+d$.
So there is at least one $8$. There can't be three $8$'s, that makes the sum too big. So the number of $8$'s is $1$ or $2$.
Now listing should be straightforward. Do (i) one $8$ and (ii) two $8$'s.
(i) There is only one $8$, and the average is $6$, so we must have one $8$, two $6$'s, and one $4$. Where can they be? Because $a+b\gt c+d$, one of $a$ or $b$ must be $8$, and the other $6$. And therefore one of $c$ or $d$ is $6$ and the other $4$. List all cases. You should get $4$.
(ii) All yours!
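Both cases, and the total, can be checked by brute force (my own sketch):

```python
from itertools import product

digits = (2, 4, 6, 8)
sols = [(a, b, c, d) for a, b, c, d in product(digits, repeat=4)
        if a + b + c + d == 24 and a + b > c + d]

one_eight = [s for s in sols if s.count(8) == 1]
two_eights = [s for s in sols if s.count(8) == 2]
assert len(one_eight) == 4       # case (i), as derived above
assert len(two_eights) == 7      # case (ii)
assert len(sols) == 11           # every solution has one or two 8's
```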
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Transitivity of uncorrelated random variables? Suppose $cov(X,Y)=0\;$ and $\;cov(Y,M)=0$. Does this imply $cov(X,M)=0\;$, if all distinct RV are normal?
| In general, no. Take $X = M$ ;)
It also does not hold if the three variables are jointly normal, i.e. follow a multivariate normal $(\mu,\Sigma)$.
Necessary and sufficient conditions for a matrix to be a covariance matrix $(\Sigma)$ are presented here. A matrix is a covariance matrix if and only if it is positive semi-definite.
Hence, take the variables in the order $(Y,X,M)$ with
$$\Sigma =\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 1 \\
0 & 1 & 1 \end{array} \right)$$
and observe that $Cov(X,Y) = Cov(Y,M) = 0$ but $Cov(X,M)=1$.
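The degenerate case $X=M$ is easy to simulate (my own sketch):

```python
import random

random.seed(0)
n = 50_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(0, 1) for _ in range(n)]
M = X                                    # take X = M

def cov(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

assert abs(cov(X, Y)) < 0.05             # ~ 0: X and Y are independent
assert abs(cov(Y, M)) < 0.05             # ~ 0: both hypotheses hold...
assert cov(X, M) > 0.9                   # ...yet Cov(X, M) = Var(X) ~ 1
```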
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$p=4n+3$ never has a Decomposition into $2$ Squares, right? Primes of the form $p=4k+1\;$ have a unique decomposition as sum of squares $p=a^2+b^2$ with $0<a<b\;$, due to Thue's Lemma.
Is it correct to say that primes of the form $p=4n+3$ never have a decomposition into $2$ squares, because squares are congruent to $0$ or $1$ modulo $4$, so that for $a,b\in \Bbb{N}$
$$
a^2 \bmod 4 + b^2 \bmod 4 \le 2 < 3 \equiv p \pmod 4\,?
$$
If so, are there alternate ways to prove it?
| Yes.
If $p=a^2+b^2$, then $p\nmid b$ (otherwise $p\mid a$ as well, forcing $p^2\mid p$), and $a^2 \equiv -b^2 \pmod p$. Then raise to the $\frac{p-1}{2}$ power (which is odd, since $p\equiv3\pmod4$) and use Fermat's little Theorem: you get $1\equiv-1\pmod p$, i.e. $p\mid2$, a contradiction.
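A brute-force confirmation (my own sketch):

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def is_sum_of_two_squares(p):
    return any(isqrt(p - a*a)**2 == p - a*a for a in range(isqrt(p) + 1))

for p in range(3, 2000):
    if is_prime(p) and p % 4 == 3:
        assert not is_sum_of_two_squares(p)
```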
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Finding $L^2$ norm of solution of ODE I have a linear differential equation with real constant coefficients
$$
\sum\limits_{i=0}^3 a_i y^{(i)}(x)=0
$$
with initial conditions $y^{(i)}(0)=y_i\in\mathbb{R}$ where $i=0,1,2$. I need to find $L^2(\mathbb{R}_+)$ norm of $y(x)$ assuming that $\lim\limits_{x\to+\infty}y(x)=0$.
How can I solve it? The chracteristic equation is of the third order with arbitrary coefficients!
| Presumably the $a_j$ are real. Let $p(z) = \sum_{j=0}^3 a_j z^j$. Let $r_j$ be the roots of this polynomial (counted by multiplicity).
If all $r_j$ have real part $\ge 0$, the only solution $y$ with $y \to 0$ at $+\infty$ is $0$.
If two $r_j$ have real part $\ge 0$ and one (say $r_3$) is negative, the only solution with $y(0) = y_0$ and
$y \to 0$ at $\infty$ is $y = y_0 e^{r_3 t}$, and the $L^2({\mathbb R}_+)$ norm of this is
$|y_0|/\sqrt{-2 r_3}$.
If a complex conjugate pair of $r_j$ have real part $< 0$ (say $\alpha \pm \beta i$ with $\beta > 0$ and $\alpha < 0$) and the other root is nonnegative, the real solutions with $y \to 0$ at $\infty$ are $e^{\alpha t} \left(y_0 \cos(\beta t) + \dfrac{y_1 - \alpha y_0}{\beta} \sin(\beta t)\right)$, and their $L^2({\mathbb R}_+)$ norms are $$\sqrt{\frac{5y_0^2\alpha^2 + y_0^2\beta^2 - 4\alpha\, y_0 y_1 + y_1^2}{-4\left(\alpha^2+\beta^2\right)\alpha}}$$
Other cases may be more complicated.
EDIT: if all $r_j$ have real part $< 0$, so all solutions go to $0$ as $t \to \infty$,
then (with help from Maple) I get
$$\|y\|^2 = \frac{-2a_1a_2^2\,y_0y_1 - 2a_2^2a_3\,y_1y_2 + \left(a_0a_1a_3 - a_0a_2^2 - a_1^2a_2\right)y_0^2 - \left(a_2^3 + a_0a_3^2\right)y_1^2 - a_2a_3^2\,y_2^2 + 2a_3\left(a_0a_3 - a_1a_2\right)y_0y_2}{2a_0\left(a_0a_3 - a_1a_2\right)}
$$
EDIT: This was assuming real $y_i$.
The general solution of the differential equation is
$ y(t) = \sum_{r} c_r e^{rt}$, the sum being over the roots of $p(z)$. The initial conditions are $y(0) = \sum_{r} c_r = y_0$, $y'(0) = \sum_{r} r c_r = y_1$, $y''(0) = \sum_r r^2 c_r = y_2$. Write these three equations as $Y = V C$ where $$V = \pmatrix{1 & 1 & 1\cr r_1 & r_2 & r_3\cr r_1^2 & r_2^2 & r_3^2\cr}$$
If $y_i$ are real, so is $y(t)$, and the square of its $L^2$ norm is
$$ \int_0^\infty y(t)^2\ dt = \sum_r \sum_s c_r c_s \int_0^\infty e^{(r+s)t}\ dt
= \sum_r \sum_s \dfrac{-c_r c_s}{r+s} = - C^T M C$$
where $$M = \pmatrix{\dfrac{1}{2r_1} & \dfrac{1}{r_1+r_2} & \dfrac{1}{r_1+r_3}\cr
\dfrac{1}{r_2+r_1} & \dfrac{1}{2r_2} & \dfrac{1}{r_2+r_3}\cr
\dfrac{1}{r_3+r_1} & \dfrac{1}{r_3+r_2} & \dfrac{1}{2r_3}\cr}$$
Now $-C^T M C = -Y^T (V^{-1})^T M V^{-1} Y$, so what we need to do is express each entry of $(V^{-1})^T M V^{-1}$ in terms
of the coefficients $a_j$. These entries are symmetric rational functions in $r_1$, $r_2$, $r_3$, so that should be possible: every symmetric polynomial in $k$ variables can be expressed as a polynomial in the elementary symmetric polynomials. But doing it by hand looks rather daunting.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Examples of logs with other bases than 10 From a teaching perspective, sometimes it can be difficult to explain how logarithms work in Mathematics. I came to the point where I tried to explain binary and hexadecimal to someone who did not have a strong background in Mathematics. Are there some common examples that can be used to explain this?
For example (perhaps this is not the best), but we use tally marks starting from childhood. A complete set marks five tallies and then a new set is made. This could be an example of a log with a base of 5.
| Probably not helpful to what you want, but the energy release of earthquakes is measured on the Richter scale to base $\sqrt {1000}$
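For classroom demonstrations, most programming languages let you compute a logarithm in any base directly, which makes the "base of the tally/number system" idea concrete (a small sketch of my own):

```python
import math

assert math.isclose(math.log(8, 2), 3)      # binary: 2**3 = 8
assert math.isclose(math.log(256, 16), 2)   # hexadecimal: 16**2 = 256
assert math.isclose(math.log(125, 5), 3)    # tally-style base 5: 5**3 = 125

# Richter-style: the energy ratio of quakes one magnitude apart is sqrt(1000)
assert abs(1000**0.5 - 10**1.5) < 1e-9
```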
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
A sensible/systematical way to deal with the following equation Given that $y=x\varphi(z)+\psi(z)$ where $z$ is an implicit function of $x,y$, and $x\cdot\varphi'(z)+\psi'(z)\neq0$. Try to prove that
$$\frac{\partial^2z}{\partial x^2}\cdot\left(\frac{\partial z}{\partial y}\right)^2-2\cdot\frac{\partial z}{\partial x}\cdot\frac{\partial z}{\partial y}\cdot\frac{\partial^2 z}{\partial x\partial y}+\frac{\partial^2 z}{\partial y^2}\cdot\left(\frac{\partial z}{\partial x}\right)^2=0$$
The outline of the proof from the book Григорий Михайлович Фихтенгольц:
Differentiating $y=x\varphi(z)+\psi(z)$ with $\dfrac{\partial^2z}{\partial x^2}$, $\dfrac{\partial^2z}{\partial x\partial y}$, $\dfrac{\partial^2z}{\partial y^2}$, and then multiplying special coefficients, we can get the answer.
I wonder whether there's some sensible ways to check these equations, or even more, some sysmatical ways to produce such equations.
Any help? Thanks a lot!
| Let $\begin{pmatrix}z_{xx}&z_{xy}\\z_{xy}&z_{yy}\end{pmatrix}$ be the Hessian matrix of $z(x,y)$. For any unit vector $v$ the expression $v^THv$ is the second directional derivative of $z$ along $v$. The expression we are given is exactly of this form, with $v=\begin{pmatrix}-z_y \\ z_x\end{pmatrix}$, which we recognize as the gradient $\nabla z$ rotated by 90 degrees.
In other words, the formula we are asked to prove simply says that the second directional derivative of $z(x,y)$ vanishes in the direction tangent to the level curve of $z$. This sounds like the level curve is not allowed to have positive curvature, and indeed, a glance at the implicit equation tells us that the level curves of $z(x,y)$ are lines. Naturally, all directional derivatives of $z$ vanish along these lines, including the second one.
Additional comments:
*
*The graph of $z$ is a special kind of a ruled surface: it is ruled by horizontal lines. Not sure if these have a name.
*If we instead require that the second derivative vanishes in the direction of $\nabla z$ (without rotating the gradient by 90 degrees), we get $(\nabla z)^T\, H\, \nabla z=0$, the $\infty$-Laplace equation, a recently popular subject in PDE.
*Григорий Михайлович Фихтенгольц died almost exactly 53 years ago, June 26th. I'm sure that he expected the students to actually carry out the computations, and that the students did.
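A quick numerical illustration (my own sketch) with the concrete choice $\varphi(z)=z$, $\psi(z)=z^2$, so $y=xz+z^2$ and $z=\frac{-x+\sqrt{x^2+4y}}{2}$; finite differences confirm that the rotated-gradient second derivative vanishes:

```python
import math

def z(x, y):
    return (-x + math.sqrt(x*x + 4*y)) / 2   # solves y = x z + z^2

h = 1e-4
x0, y0 = 1.0, 2.0
zx  = (z(x0 + h, y0) - z(x0 - h, y0)) / (2*h)
zy  = (z(x0, y0 + h) - z(x0, y0 - h)) / (2*h)
zxx = (z(x0 + h, y0) - 2*z(x0, y0) + z(x0 - h, y0)) / h**2
zyy = (z(x0, y0 + h) - 2*z(x0, y0) + z(x0, y0 - h)) / h**2
zxy = (z(x0 + h, y0 + h) - z(x0 + h, y0 - h)
       - z(x0 - h, y0 + h) + z(x0 - h, y0 - h)) / (4*h**2)

expr = zxx*zy**2 - 2*zx*zy*zxy + zyy*zx**2
assert abs(expr) < 1e-5                      # the PDE from the question holds
```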
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sum of series $1 - nx + n(n-1)x^2 - n(n-1)(n-2) x^3+\cdots+ (-1)^n n! x^n$ I am having a hard time summing the seemingly simple finite series:
$$1 - nx + n(n-1)x^2 - n(n-1)(n-2) x^3+\cdots+ (-1)^n n! x^n$$
Thanks for your help in advance!
| Hint $\ $ If you divide by $\rm\:x^n,\:$ differentiate, multiply by $\rm\:x^{n+2},\:$ you'll find that it satisfies the ODE
$$\rm\ x^2 y' - (1+nx) y\, =\, -1$$
which yields the "closed form"
$$\rm\: y = -x^n {\it e}^{-1/x} \int {\it e}^{1/x}x^{-n-2}$$
which is expressible in closed form in terms of the incomplete gamma function.
Alternatively, one may use operator methods or a computer algebra system (taking the integration constant $c_1 = 0$).
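The ODE can be confirmed exactly for a concrete $n$ (my own sketch, $n=5$, using integer arithmetic on the polynomial coefficients $c_k = (-1)^k\, n!/(n-k)!$):

```python
from math import factorial

n = 5
c = [(-1)**k * factorial(n) // factorial(n - k) for k in range(n + 1)]

# residual of x^2 y' - (1 + n x) y, as a polynomial in x
res = [0] * (n + 3)
for k in range(1, n + 1):
    res[k + 1] += k * c[k]        # x^2 * (k c_k x^{k-1})
for k in range(n + 1):
    res[k] -= c[k]                # - y
    res[k + 1] -= n * c[k]        # - n x y

assert res[0] == -1 and all(v == 0 for v in res[1:])   # x^2 y' - (1+nx) y = -1
```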
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all ordered pair of integers $(x,y)$
Obtain all ordered pair of integers $(x,y)$ such that $$x(x + 1) = y(y + 1)(y + 2)(y + 3)$$
I'm getting 8,
(0, 0), (0, -1), (0, -2), (0, -3)
(-1, 0), (-1, -1), (-1, -2), (-1, -3)
Please confirm my answer.
| Hint: It is easily proved that the product of four consecutive integers, plus $1$, is a perfect square. But $x(x+1)+1$ is hardly ever a perfect square!
Added: To prove that $y(y+1)(y+2)(y+3)+1$ is a perfect square, note that
$$y(y+1)(y+2)(y+3)=y(y+3)(y+1)(y+2)=(y^2+3y)(y^2+3y+2)=z(z+2),$$
where $z=y^2+3y$. And clearly $z(z+2)+1=(z+1)^2$.
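As a sanity check (a small brute-force sketch, not part of the original answer), one can verify the square identity and search a finite window for solutions:

```python
def P(y):
    # product of four consecutive integers starting at y
    return y * (y + 1) * (y + 2) * (y + 3)

# The hint's identity: P(y) + 1 = (y^2 + 3y + 1)^2 for every integer y.
for y in range(-50, 51):
    assert P(y) + 1 == (y * y + 3 * y + 1) ** 2

# Exhaustive search over a small window: only the eight trivial pairs appear.
solutions = [(x, y) for x in range(-100, 101) for y in range(-100, 101)
             if x * (x + 1) == P(y)]
print(len(solutions))  # 8
```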
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Non connected surfaces and Gauss Bonnet Theorem In general, I have seen that a consequence of the Gauss-Bonnet Theorem is the following:
Theorem. If S is a CONNECTED smooth compact oriented surface in $R^3$, then S is diffeomorphic to a $g$-torus for some $g=0,1,2,...$, and the characteristic of S is $\chi(S)=2(1-g)$.
My question is: what happens when we have a NON CONNECTED surface S?
For example, if $S=S_1\cup S_2$ for connected disjoint surfaces $S_1,S_2$,
can we say that $\chi(S)=\chi(S_1)+\chi(S_2)$?
Can we obtain thus, surfaces (non-connected of course) with $\chi(S)> 2$?
Thanks
| If $S$ is not connected, then it is a disjoint union of connected surfaces, each one a $g_i$-torus by your quoted theorem (where the $g_i$ can vary). Since the Euler characteristic is additive over disjoint unions, $\chi(S)=\chi(S_1)+\chi(S_2)$ does hold — and taking $S_1$ and $S_2$ to be two disjoint spheres gives $\chi(S)=4>2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Which person will leave in last? Can anyone give me the quickest idea to solve the question below?
Read the information below and answer the question that follows.
In a mathematical game, one hundred people are standing in a line and they are required to count off in fives as "one, two, three, four five, one, two, three, four, five," and so on from the first person in the line. The person who says "five" is taken out of the line. Those remaining repeat this procedure until only four people remain in the line.
What was the original position in the line of the last person to leave?
A. 93
B. 96
C. 97
D. 98
If any one can elaborate with more similar kind of example then it will be very nice.
| After one round, 98 becomes next-to-last in a field of 80; after the second round, 98 becomes last in a field of 64. He will remain last unless he is thrown out, that is, unless the total number of people is divisible by 5.
The number of people left after each of the successive rounds is: 52 ($64-12$); 42 ($52-10$); 34 ($42-8$); 28 ($34-6$); 23 ($28-5$); 19 ($23-4$); 16 ($19-3$); 13; 11; 9; 8; 7; 6; 5.
So person 98 is not thrown out until he reaches position 5, which means he's the last person to leave, since he is last in line.
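The argument above can be cross-checked by simulating the game directly: 100 people count off in fives round after round, every "five" leaves, and we record who is removed last.

```python
def last_to_leave(n=100, k=5, stop=4):
    line = list(range(1, n + 1))   # original positions in the line
    removed = []
    while len(line) > stop:
        kept, count = [], 0        # the count restarts each round
        for p in line:
            count += 1
            if count == k:          # this person says "five" and leaves
                removed.append(p)
                count = 0
            else:
                kept.append(p)
        line = kept
    return removed[-1], line

last, remaining = last_to_leave()
print(last)   # 98
```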
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Asymptotic behavior of the expression: $(1-\frac{\ln n}{n})^n$ when $n\rightarrow\infty$ The well known results states that:
$\lim_{n\rightarrow \infty}(1-\frac{c}{n})^n=(1/e)^c$ for any constant $c$.
I need the following limit: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n$.
Can I prove it in the following way? Let $x=\frac{n}{\ln n}$, then we get: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\lim_{x\rightarrow \infty}(1-\frac{1}{x})^{x\ln n}=(1/e)^{\ln n}=\frac{1}{n}$.
So, $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\frac{1}{n}$.
I see that this is wrong to have an expression with $n$ after the limit. But how to show that the asymptotic behavior is $1/n$?
Thanks!
| The idea you used could have worked. Exactly as you wrote, we have
$$\left(1-\frac{\ln n}{n}\right)^n=\left(\left(1-\frac{1}{x}\right)^x\right)^{\ln n},$$
where $x=\frac{n}{\ln n}$.
Note that $x\to \infty$ as $n\to\infty$. Note also that $\ln n=g(x)$ for some function $g(x)$ such that $g(x)\to\infty$ as $x\to \infty$. Our limit is
$$\lim_{x\to\infty}\left(1-\frac{1}{x}\right)^{g(x)}.$$
Since $\left(1-\frac{1}{x}\right)^x$ approaches $\frac{1}{e}$, and $g(x)\to\infty$, our limit is $0$.
Remark: Your impossible answer $\frac{1}{n}$ contained a large kernel of truth. When $n$ is large, $(1-1/x)^x$ is close to $1/e$, so the original expression is close to $1/n$. And of course the quantity $1/n$ approaches $0$.
Let $E(n)$ be the original expression. We can adapt your argument to show that $\lim_{n\to\infty} nE(n)=1$, which proves in a very informative way that $E(n)$ has limit $0$, by giving quite exact information about the rate of approach to $0$.
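A quick numeric illustration (not a proof) of that remark: $nE(n)$ creeps up toward $1$ as $n$ grows, consistent with $E(n)$ behaving like $1/n$.

```python
from math import log

def E(n):
    return (1 - log(n) / n) ** n

for n in (10**3, 10**5, 10**7):
    print(n, n * E(n))   # tends to 1 (slowly) as n grows
```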
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
} |
How to find $\lim\limits_{n\rightarrow \infty}\frac{(\log n)^p}{n}$ How to solve $$\lim_{n\rightarrow \infty}\frac{(\log n)^p}{n}$$
| Let $A=\lim_{n\to\infty} \frac{(\log n)^p}{n}$. Taking $\log$ of both sides gives
$$\log A=\lim_{n\to\infty} \left(p\log \log n -\log n\right).$$
Substituting $y=1/\log n$ (so that $y\to0^+$ as $n\to\infty$) gives
$$\log A=\lim_{y\to0^+} \left(-p\log y -\frac{1}{y}\right)=\lim_{y\to0^+} \frac{-py\log y -1}{y}.$$
Since $y\log y\to0$ as $y\to0^+$, the numerator tends to $-1$ while the denominator tends to $0^+$, so this limit is $-\infty$. Hence $\log A=-\infty\implies A=0$. Thus the limit is $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 5
} |
Find max $x\cdot y$ where $x\in S= \{(x_1,\ldots, x_n)\in \mathbb{R}^n$ Let $S= \{(x_1,\ldots, x_n)\in \mathbb{R}^n$; $|x_1|^p+\ldots+|x_n|^p=1\}$, where $p>1$ is real(and fixed), consider a fixed $y\in\mathbb{R}^n$ and $T:\mathbb{R}^n\rightarrow\mathbb{R}$ such that $T(x) = x\cdot y$, where $x\cdot y = x_1y_1+\ldots+x_ny_n$.
I'm having a hard time to find $\max_{x\in S}\ T(x)$.
I already noticed a few things but its still really difficult to do something useful.
1) $\forall (x_1\ldots, x_n)\in S, |x_1|^p+\ldots+|x_n|^p\leq |x_1|+\ldots+|x_n|$;
2) Taking the norm $\Vert(x_1,\dots, x_n)\Vert=(|x_1|^p+\ldots+|x_n|^p)^{\frac{1}{p}}$ and the ball $B(0,1)$, with center $0\in\mathbb{R}^n$ and radius $1$, we have $S=\partial B$;
3) $\forall i=1\ldots n, T(e_i) = y_i$.
Also, I'm trying to avoid Holder's inequality.
| Define $\newcommand{\sgn}{\operatorname{sgn}}$
$$
F(x)=\sum_{k=1}^n|x_k|^p\tag{1}
$$
then
$$
\nabla F(x)=\left(p\sgn(x_k)|x_k|^{p-1}\right)_{k=1}^n\tag{2}
$$
You want to find a point on the surface where $\nabla F\,||\,y$. Therefore,
$$
x=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{-\frac{1}{p}}\left(\sgn(y_k)|y_k|^{\frac{1}{p-1}}\right)_{k=1}^n\tag{3}
$$
should be the point where $T(x)$ is the greatest. Computing $T(x)$ yields
$$
T(x)=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{\frac{p-1}{p}}\tag{4}
$$
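A numeric spot-check of $(3)$ and $(4)$ (an illustrative sketch, not a proof; `check` is a name introduced here): the candidate point lies on $S$, attains the value in $(4)$, and beats randomly sampled points of $S$.

```python
import random
from math import copysign

def check(p, y, trials=2000):
    q = p / (p - 1)
    # normalizing constant from (3)
    c = sum(abs(t) ** q for t in y) ** (1 / p)
    x = [copysign(abs(t) ** (1 / (p - 1)), t) / c for t in y]
    assert abs(sum(abs(t) ** p for t in x) - 1) < 1e-9      # x lies on S
    best = sum(a * b for a, b in zip(x, y))
    target = sum(abs(t) ** q for t in y) ** ((p - 1) / p)   # the value in (4)
    assert abs(best - target) < 1e-9
    for _ in range(trials):
        v = [random.gauss(0, 1) for _ in y]
        norm = sum(abs(t) ** p for t in v) ** (1 / p)
        v = [t / norm for t in v]                            # random point of S
        assert sum(a * b for a, b in zip(v, y)) <= best + 1e-9
    return best

print(check(3.0, [2.0, -1.0, 0.5]))
```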
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculating the MLE for mu(x) in a regression model Say we have the following regression model:
$$Y_i = \alpha + \beta(x_i - \mathrm{mean}(x)) + R_i$$
where $R_1,\ldots,R_{20} \sim G(0, \sigma)$
If we have $\mu(x) = \alpha + \beta(x - \mathrm{mean}(x))$, how do I go about finding the MLE of $\mu(5)$?
I have a given data set with some calculations done for me, but not sure how to approach this?
| https://instruct1.cit.cornell.edu/courses/econ620/reviewm5.pdf
Look at the document above and search for "functional invariance". If the MLE for $\alpha$ is $\hat\alpha$ then the MLE for $\cos\alpha$ is $\cos\hat\alpha$, and so on. So if $\hat\alpha$ and $\hat\beta$ are the respective MLEs of $\alpha$ and $\beta$, then $8\hat\alpha+6\hat\beta$ is the MLE for $8\alpha+6\beta$, etc. That's the sort of function you have here.
Here's another source: http://books.google.com/books?id=5OLlwXg6r9kC&pg=PA487&dq=functional+invariance+of+mle&hl=en&sa=X&ei=Zt3sT5ryFITiqgGQ_KGeAg&ved=0CFsQ6AEwBw#v=onepage&q=functional%20invariance%20of%20mle&f=false
This property of MLEs is quite easy to prove. You don't need calculus; you just need to know definitions of things like "increasing function" and "maximum".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Difference between power law distribution and exponential decay This is probably a silly one, I've read in Wikipedia about power law and exponential decay. I really don't see any difference between them. For example, if I have a histogram or a plot that looks like the one in the Power law article, which is the same as the one for $e^{-x}$, how should I refer to it?
| How is a power law different from an exponential? (I'm putting this answer here mainly for my own future reference. Hopefully someone else may find it useful.)
Power Law function
(notice the exponent, $k,$ is a constant)
$$
y = x^k
$$
Exponential function
(notice the exponent is a variable)
$$
y = a^x
$$
Technical definition of Power Law:
A power law is any polynomial relationship that exhibits the property of scale invariance.
Scale invariance (from Wikipedia)
One attribute of power laws is their scale invariance. Given a relation $f(x) = ax^k,$ scaling the argument $x$ by a constant factor $c$ causes only a proportionate scaling of the function itself. That is,
$$
f(c x) = a(c x)^k = c^k f(x) \propto f(x)
$$
That is, scaling by a constant $c$ simply multiplies the original power-law relation by the constant $c^k.$
Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both $f(x)$ and $x,$ and the straight-line on the log-log plot is often called the signature of a power law.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 5,
"answer_id": 4
} |
Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$ I want to expand and test this $\{(n^3+1)^{1/3} - n\}$ for convergence/divergence.
The edited version is: Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$
| By direct inspection, for every pair of real numbers $A$ and $B$,
$$
A^3 - B^3 = (A-B) (A^2+AB+B^2).
$$
Choose now $A=\sqrt[3]{n^3+1}$ and $B=n$. Then
$$
(n^3+1)^{1/3} - n = \frac{n^3+1-n^3}{(n^3+1)^{2/3} + n (n^3+1)^{1/3}+n^2} \sim \frac{1}{3n^2}
$$
as $n \to +\infty$. In particular the general term is $O(1/n^2)$, so the series converges by comparison with $\sum_{n\ge1} \frac{1}{n^2}$.
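Numerically (an illustrative check, not part of the proof): the terms behave like $\frac{1}{3n^2}$ and the partial sums stabilize.

```python
def term(n):
    return (n**3 + 1) ** (1 / 3) - n

for n in (10, 100, 1000):
    print(n, 3 * n * n * term(n))   # tends to 1, i.e. term(n) ~ 1/(3n^2)

partial = sum(term(n) for n in range(1, 100001))
print(partial)                       # ~ 0.471, the series converges
```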
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\sum_{n=1}^\infty\frac{\sin(nz)}{2^n}$ is analytic on $\{z\in\mathbb{C}:|\operatorname{Im}(z)|<\log(2)\}$
Prove that $f(z)=\sum_{n=1}^\infty\frac{\sin(nz)}{2^n}$ is analytic on $A=\{z\in\mathbb{C}:|\operatorname{Im}(z)|<\log(2)\}$
I tried expanding $\sin(nz)$ in terms of $e^{inz}$ but that did not help me unless I am doing something wrong. I know Weierstrass's M-test comes in to play.
| Using $\sin nz=\frac{1}{2i}(e^{inz}-e^{-inz})$ works. The constant is irrelevant for the convergence.
Deal with the two exponentials separately. Let $z=x+iy$. Then $|e^{inz}|=e^{-ny}$, and $|e^{i(n+1)z}|=e^{-(n+1)y}$.
Thus, remembering about the $2^n$ in the denominator, we see that the norm of the ratio of two consecutive terms is $\frac{e^{-y}}{2}$. This norm is $\lt 1$ precisely if $e^{-y} \lt 2$, that is, if $y\gt -\log 2$.
In the same way, for the terms in $e^{-inz}$, the norm of the ratio of two consecutive terms is $\lt 1$ precisely if $y \lt \log 2$. Thus, by the Weierstrass $M$-test, we have analyticity if $-\log 2\lt y\lt \log 2$.
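To see the estimate concretely (a numeric illustration with `cmath`; the point $z$ is my own choice): inside the strip, the moduli $|\sin(nz)|/2^n$ eventually decay geometrically, with ratio near $e^{|\operatorname{Im} z|}/2<1$.

```python
import cmath, math

z = 1.0 + 0.5j                            # |Im z| = 0.5 < log 2 ~ 0.693
terms = [abs(cmath.sin(n * z)) / 2**n for n in range(1, 40)]
ratios = [terms[i + 1] / terms[i] for i in range(20, 38)]
print(max(ratios))                         # close to e^{0.5}/2 ~ 0.824 < 1
```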
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Derivative of $f(x)= (\sin x)^{\ln x}$ I am just wondering if i went ahead to solve this correctly?
I am trying to find the derivative of $f(x)= (\sin x)^{\ln x}$
Here is what i got below.
$$f(x)= (\sin x)^{\ln x}$$
$$f'(x)=\ln x(\sin x) \Rightarrow f'(x)=\frac{1}{x}\cdot\sin x + \ln x \cdot \cos x$$
Would that be the correct solution?
| Five different answers, and all of them using exponentials and logarithms? While logs are indeed convenient, it's certainly possible to solve this problem without what one might consider a detour into logarithm-land.
We know that $(x^n)'=nx^{n-1}$ and $(a^x)'=a^x\log a$, which means that
$$(y^z)'=zy^{z-1}y'$$
when $z$ is constant, and
$$(y^z)'=y^{z}\log y\cdot z'$$
when $y$ is constant. Therefore, when both $y$ and $z$ vary, we have
$$(y^z)'=zy^{z-1}y'+y^{z}\log y\cdot z'.$$
Substitute $y = \sin x$ and $z = \log x$, and you're done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 1
} |
Show $x+\epsilon g(x)$ is 1-1 if $g'$ is bounded and $\epsilon$ is small enough. Problem: Suppose $g$ is a real function on $\mathbb{R}$ with bounded derivative (say $|g'|<M$). Fix $\epsilon>0$, and define $f(x)=x+\epsilon g(x)$. Prove that $f$ is one-to-one if $\epsilon$ is small enough.
(A set of admissible values of $\epsilon$ can be determined which depends only on $M$.)
Source: W. Rudin, Principles of Mathematical Analysis, Chapter 5, exercise 3.
| Suppose $f(a)=f(b)$ for some $a\neq b$. Note that $f$ is differentiable, as it is the sum of two differentiable functions, so by the mean value theorem (Rolle's theorem) there exists $x\in(a,b)$ with $f'(x)=0$. For such an $x$, we have
$$f'(x)=1+\epsilon g'(x)=0 \Rightarrow g'(x)=-\frac{1}{\epsilon},$$
so $|g'(x)|=\frac{1}{\epsilon}$. If $\epsilon<\frac{1}{M}$, this contradicts $|g'|<M$. Hence $f$ is one-to-one for every $\epsilon<\frac{1}{M}$, a set of admissible values depending only on $M$.
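A concrete illustration (the choice $g(x)=\sin x$ is mine, not from the problem): here $|g'|\le M=1$, so $f'(x)=1+\epsilon\cos x>0$ whenever $\epsilon<1$, forcing $f$ to be strictly increasing, hence one-to-one; for large $\epsilon$ monotonicity fails.

```python
from math import sin

def f(x, eps):
    return x + eps * sin(x)

def strictly_increasing(eps, lo=-10.0, hi=10.0, steps=4000):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return all(f(a, eps) < f(b, eps) for a, b in zip(xs, xs[1:]))

print(strictly_increasing(0.5))  # True: eps < 1/M forces f' = 1 + eps*cos(x) > 0
print(strictly_increasing(3.0))  # False: with eps > 1/M injectivity can fail
```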
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Nakayama Conjecture. The commutative case.
Is the Nakayama conjecture solved in the commutative case? It states that "if all the modules of a minimal injective resolution of an Artin algebra $R$ are injective and projective, then $R$ is self-injective".
I tried to look up but could not find if it is solved or not solved in the commutative case. Can someone provide a reference if it is solved in the commutative case? The Wikipedia page does not say if it is solved in the commutative case.
Thanks.
| The following remark can be found in Morita Contexts, Idempotents, and Hochschild Cohomology
— with Applications to Invariant Rings by Ragnar-Olaf Buchweitz (arXiv):
Remarks 3.2. (1) The conjectures (INC’), (INC) trivially hold if the algebras
C,B involved are already commutative noetherian rings. However, there seems to
be no real advantage gained in either (SNC) or (GNC) if one assumes that A is
already commutative. In this sense, the aforementioned conjectures truly belong to
the realm of (slightly) noncommutative algebra.
Here, SNC denotes the strong Nakayama conjecture and GNC the generalized Nakayama conjecture. For their meaning, see loc. cit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Evaluation of $\sum_{n=1}^\infty \frac{1}{\Gamma (n+s)}$ I want to try and evaluate this interesting sum:
$$\sum_{n=1}^\infty \frac{1}{\Gamma (n+s)}$$
where $0 \le s < 1$
WolframAlpha evaluates this sum to be
$$\sum_{n=1}^\infty \frac{1}{\Gamma (n+s)} = e\left(1-\frac{\Gamma(s, 1)}{\Gamma(s)}\right)$$
Some notable cases of this sum would be when $s=0$ (producing the Taylor's series for $e$) and when $s=\frac{1}{2}$:
$$\sum_{n=1}^\infty \frac{1}{\Gamma (n+\frac{1}{2})} = e \operatorname {erf}(1)$$
I would be very interested to know the steps of how one would evaluate this interesting sum.
| I guess one starts by considering a more general sum:
$$E_{\alpha,\beta}(z) = \sum_{n=0}^\infty \frac{z^n}{\Gamma(\alpha n+\beta)}$$
which is known as a Mittag-Leffler function. For the special case of $\alpha=1$, the function satisfies a differential equation:
$$
z \frac{d}{d z} E_{1,s}(z) + (s-1) E_{1,s}(z) = \sum_{n=0}^\infty \frac{(n+s-1)z^n}{\Gamma(n+s)} =z \sum_{n=0}^\infty \frac{z^{n-1}}{\Gamma(n-1+s)} = \frac{1}{\Gamma(s-1)} + z E_{1,s}
$$
This is an inhomogeneous equation of the first order
$$
z y^\prime(z) + (s-1-z) y(z) = \frac{1}{\Gamma(s-1)}
$$
$$
z \frac{\mathrm{d}}{\mathrm{d} z} \left( z^{s-1} \mathrm{e}^{-z} y(z) \right) = \frac{1}{\Gamma(s-1)} z^{s-1} \mathrm{e}^{-z}
$$
Hence
$$
y(z) = \frac{1}{z^{s-1} \mathrm{e}^{-z}} \left( C - \frac{1}{\Gamma(s-1)} \int_z^\infty t^{s-2} \mathrm{e}^{-t} \mathrm{d} t \right)
$$
The integral on the right hand-side is known as incomplete Gamma function.
Incidentally, the original series is also a hypergeometric series, meaning that $E_{1,s}(z)$ represents a hypergeometric function. Indeed:
$$
E_{1,s}(z) = \frac{1}{\Gamma(s)}{}_1F_1\left(1; s; z\right) = \frac{1}{\Gamma(s)} \sum_{n=0}^\infty \frac{(1)_n}{(s)_n} z^n = \sum_{n=0}^\infty \frac{z^n}{\Gamma(n+s)}
$$
where $(s)_n = \frac{\Gamma(n+s)}{\Gamma(s)}$ was used.
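The special case $s=\tfrac12$ quoted in the question can be checked with the standard library alone:

```python
from math import gamma, erf, e

total = sum(1 / gamma(n + 0.5) for n in range(1, 60))   # tail beyond n=60 is negligible
print(total)        # ~ 2.2907
print(e * erf(1))   # the closed form e*erf(1); the two agree
```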
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
A (potential) projection operator Let $I$ be a 3-by-3 identity matrix and $\hat n$ be a unit vector orthonormal to some surface. What then does $I-\hat n\hat n$ mean geometrically? Also what does it mean to multiply this matrix/operator by a vector $v$? Some sort of projection?
Also, I don't understand what $\hat n\hat n$ means. Dimensionally it should be a 3-by-3 matrix because $I$ is such, right?
| If $n$ is a column vector (as usual in most linear algebra textbooks), then $\hat n\hat n$ would probably mean the outer product $\hat n \hat n^{T},$ which is the orthogonal projection onto the line spanned by $\hat n$ — and, being an outer product, it is indeed a 3-by-3 matrix, matching $I$.
Then $(I - \hat n \hat n^T) v$ is the orthogonal projection of $v$ onto the plane with normal $\hat n$. You can see it if you expand $(I - \hat n \hat n^T) v$: you get $v - v_n$, where $v_n = (\hat n^T v)\,\hat n$ is the projection of $v$ onto the direction of $\hat n.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Burnside's Lemma I've been trying to understand what Burnside's Lemma is, and how to apply it, but the wiki page is confusing me. The problem I am trying to solve is:
You have 4 red, 4 white, and 4 blue identical dinner plates. In how
many different ways can you set a square table with one plate on each
side if two settings are different only if you cannot rotate the table
to make the settings match?
Could someone explain how to use it for this problem, and if its not too complicated, try to explain to me what exactly it is doing in general?
| Burnside's Lemma states that the number of orbits $|X/G|$ of a set $X$ under the action of a group $G$ is given by:
\begin{equation}
|X/G| = \frac{1}{|G|}\sum_{g \in G}|X^g|
\end{equation}
where $X^g$ denotes the set of elements in $X$ fixed under the action of $g$.
For the example given, your set $X$ is all the possible ways to arrange three different coloured plates around a square table and $G$ is the set of rotations of the square.
You can think of an arrangement as a string of length four, e.g. the string $RRWB$ indicates the first plate is red, the plate opposite is white, etc. There are $3^4$ arrangements of this kind, so we expect our answer to be smaller than this as, for instance,
$RRWB,RWBR,WBRR \text{ and } BRRW$ are all considered to be equivalent as each may be obtained from another by rotation.
We say that the set $\{RRWB, RWBR, WBRR, BRRW\}$ is the orbit of the element $RRWB \in X$ under $G$, and the orbit of $X$ is the set of orbits of each element $x \in X$.
Now, $G$ contains $4$ elements, namely rotation by $0,90,180 \text{ and } 270$ degrees respectively. Clearly, all elements in $X$ are fixed under rotation by $0$ degrees, so $|X^0|=|X|=3^4$.
The arrangements fixed under rotation by $90$ degrees require that each plate be the same colour as the one next to it and thus include $RRRR,WWWW$ and $BBBB$ only. By symmetry, $$|X^{90}|=|X^{270}|=3$$
A similar counting argument shows that $|X^{180}|=3^2$ and thus by Burnside's Lemma:
$$|X/G|=\frac{1}{4}\left( |X^0|+|X^{90}|+|X^{180}|+|X^{270}|\right) $$
$$~~ = \frac{1}{4}(81+3+9+3)=\frac{96}{4}=24$$
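The Burnside count can be cross-checked by brute force: enumerate all $3^4$ colour strings and count the orbits under the four rotations directly.

```python
from itertools import product

def rotations(s):
    # the four cyclic rotations of a length-4 arrangement
    return {s[i:] + s[:i] for i in range(4)}

seen, orbits = set(), 0
for s in product("RWB", repeat=4):
    if s not in seen:
        orbits += 1
        seen |= rotations(s)
print(orbits)  # 24
```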
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Deriving the Formula for Average Speed (Same distance). Let me start of by specifying the question:
A and B are two towns. Kim covers the distance from A to B on a scooter at 17Km/hr and returns to A on a bicycle at 8km/hr.What is his average speed during the whole journey.
I solved this problem by using the formula (since the distances are same):
$$ \text{Average Speed (Same distance)} = \frac{2xy}{x+y} = \frac{2\times17\times8}{17+8} =10.88 \text{Km/hr}$$
Now I actually have two questions:
Q1- I know that $$ Velocity_{Average}= \frac{\Delta S }{\Delta T} $$
Now here does $$\Delta S$$ represent $$ \frac{S_2+S_1 }{2} \,\text{or}\, S_2-S_1 ?$$
Where S2 is the distance covered from point A to point B and S1 is the distance covered from point B to point A
Q2. How did they derive the equation:
$$ Velocity_{Average(SameDistance)} = \frac{2xy}{x+y} $$
Could anyone derive it by using
$$ Velocity_{Average}= \frac{\Delta S }{\Delta T} $$
| If one traveled distance $d_k$ at speed $v_k$, this took time $t_k=\dfrac{d_k}{v_k}$. It took time $T=\sum\limits_kt_k$ to travel distance $D=\sum\limits_kd_k$ and the average speed $V$ solves $D=VT$, hence $$V=\frac{\sum\limits_kd_k}{\sum\limits_k\frac{d_k}{v_k}}.
$$
In the particular case when there are $n$ distances which are all equal, one gets $V=\dfrac{n}{\sum\limits_{k=1}^n\frac1{v_k}}$, or
$$\frac1V=\frac1n\sum\limits_{k=1}^n\frac1{v_k}.
$$
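Checking the worked example numerically: total distance over total time reproduces the harmonic-mean formula, whatever the common distance is.

```python
x, y = 17.0, 8.0
d = 1.0                           # common one-way distance; it cancels out
avg = 2 * d / (d / x + d / y)     # total distance divided by total time
print(avg)                        # 10.88
print(2 * x * y / (x + y))        # the harmonic-mean formula gives the same value
```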
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
} |
Computing this fundamental group What is the fundamental group of
$$X = \left\{\left(\sqrt{x^2+y^2}-2\right)^2 + z^2 = 1\right\}\cup \left\{(x,y,0)\;\; :\;\; x^2 + y^2 \leq 9\right\}\subset\mathbb R^3\,?$$
I would say that it is $\,\mathbb Z\,$ cause you can deform one of "the class of paths" that usually would make the fundamental group of $\,S^1\times S^1\,$ be $\,\mathbb Z\oplus \mathbb Z\,$ in the constant path.
I used the Seifert van Kampen Theorem but I'm not sure if I used it correctly.
Thanks a lot!
| Following Neal's suggestion in the comments. The "top" ($z \geq 0$) and "bottom" ($z \leq 0$) are each themselves homeomorphic to a torus with a disk "plugging in the hole," which has fundamental group $\mathbb{Z}$, and the intersection has trivial $\pi_1$, so you should get the free group on two generators.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Hellinger distance between Beta distributions I am interested in calculating the Hellinger distance $H(f,g)$ between two Beta distributions $f$ and $g$ of which I already know the parameters for. I am aware that you can calculate it directly using the 2-norm of discrete distributions. But it would be nicer to have full analytical expression.
The wikipedia page that I link to give nice expression for Gaussian, exponential, Weibull, and Poisson distributions. But how to derive similar expressions for:
*
*the 2-parameter Beta distribution defined on $[0,1] \in \mathbf{R}$?
*the 4-parameter Beta distribution having arbitrary support in $\mathbf{R}$?
The last option is also known (atleast for me) as the Pearson Type I distribution (was this the origin of the Beta distribution?). I have used the pearsrnd function in MATLAB and much of my data seems to fit a type I distribution.
I just need it for univariate statistics. Please remember the factor $1/2$ as I need the distance in the range $[0,1]$. Whether or not the expression gives me the squared distance is not so important.
Addendum:
I tried to solve it directly in Mathematica 7 usintg Integrate. I created two functions: (1) hellingerDistanceA that implements the integration directly and (2) hellingerDistanceB that evaluates the expression given by Sasha below. The answer by Sasha seems to be correct:
hellingerDistanceA[a_, b_, c_, d_] :=
 1 - Integrate[
   Sqrt[PDF[BetaDistribution[a, b], x] PDF[BetaDistribution[c, d], x]],
   {x, 0, 1}, Assumptions -> {Element[x, Reals]}]

hellingerDistanceB[a_, b_, c_, d_] :=
 1 - Beta[(a + c)/2, (b + d)/2]/Sqrt[Beta[a, b] Beta[c, d]]

hellingerDistanceA[1/2, 1/2, 5, 1] // N (* gives 0.251829... *)
hellingerDistanceB[1/2, 1/2, 5, 1] // N (* also gives 0.251829... *)

hellingerDistanceA[2, 2, 2, 5] // N (* gives 0.148165... *)
hellingerDistanceB[2, 2, 2, 5] // N (* also gives 0.148165... *)
| Let $X$ and $Y$ be independent beta random variables, such that $X \sim \operatorname{Beta}(a_1, b_1)$ and $Y \sim \operatorname{Beta}(a_2, b_2)$. Then
$$ \begin{eqnarray}
H(X, Y) &=& 1 - \int_0^1 \sqrt{ f_X(t) f_Y(t) } \mathrm{d} t \\ &=& 1 - \frac{1}{\sqrt{B(a_1,b_1) B(a_2,b_2)}} \int_0^1 t^{ (a_1 + a_2)/2 -1} (1-t)^{(b_1 + b_2)/2-1} \mathrm{d} t \\ &=& 1 - \frac{B\left(\frac{a_1+a_2}{2}, \frac{b_1+b_2}{2}\right)}{\sqrt{B(a_1,b_1) B(a_2,b_2)}}
\end{eqnarray}
$$
Calculations with 4-parameter beta random variables will be similar. The integrals might be expressible in terms of incomplete beta functions.
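For readers without Mathematica, here is a standard-library numeric cross-check of the closed form (a midpoint-rule sketch of my own; no scipy assumed):

```python
from math import gamma

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def hellinger_closed(a1, b1, a2, b2):
    # 1 - B((a1+a2)/2, (b1+b2)/2) / sqrt(B(a1,b1) B(a2,b2)), as derived above
    return 1 - beta_fn((a1 + a2) / 2, (b1 + b2) / 2) / (beta_fn(a1, b1) * beta_fn(a2, b2)) ** 0.5

def hellinger_numeric(a1, b1, a2, b2, n=100000):
    # midpoint rule for 1 - integral of sqrt(f g); the integrand simplifies to
    # t^{(a1+a2)/2 - 1} (1-t)^{(b1+b2)/2 - 1} / sqrt(B(a1,b1) B(a2,b2))
    h = 1.0 / n
    a, b = (a1 + a2) / 2 - 1, (b1 + b2) / 2 - 1
    s = sum(((k + 0.5) * h) ** a * (1 - (k + 0.5) * h) ** b for k in range(n))
    return 1 - s * h / (beta_fn(a1, b1) * beta_fn(a2, b2)) ** 0.5

print(hellinger_closed(2, 2, 2, 5))    # ~ 0.148165, matching the Mathematica runs
print(hellinger_numeric(2, 2, 2, 5))   # agrees
```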
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove if a function is bijective? I am having problems being able to formally demonstrate when a function is bijective (and therefore, surjective and injective). Here's an example:
How do I prove that $g(x)$ is bijective?
\begin{align}
f &: \mathbb R \to\mathbb R \\
g &: \mathbb R \to\mathbb R \\
g(x) &= 2f(x) + 3
\end{align}
However, I fear I don't really know how to do such. I realize that the above example implies a composition (which makes things slighty harder?). In any case, I don't understand how to prove such (be it a composition or not).
For injective, I believe I need to prove that different elements of the codomain have different preimages in the domain. Alright, but, well, how?
As for surjective, I think I have to prove that all the elements of the codomain have one, and only one preimage in the domain, right? I don't know how to prove that either!
EDIT
f is a bijection. Sorry I forgot to say that.
| To prove a function is bijective, you need to prove that it is injective and also surjective.
"Injective" means no two elements in the domain of the function gets mapped to the same image.
"Surjective" means that any element in the range of the function is hit by the function.
Let us first prove that $g(x)$ is injective. If $g(x_1) = g(x_2)$, then we get that $2f(x_1) + 3 = 2f(x_2) +3 \implies f(x_1) = f(x_2)$. Since $f(x)$ is bijective, it is also injective and hence we get that $x_1 = x_2$.
Now let us prove that $g(x)$ is surjective. Consider $y \in \mathbb{R}$ and look at the number $\dfrac{y-3}2$. Since $f(x)$ is surjective, there exists $\hat{x}$ such that $f(\hat{x}) = \dfrac{y-3}2$. This means that $g(\hat{x}) = 2f(\hat{x}) +3 = y$. Hence, given any $y \in \mathbb{R}$, there exists $\hat{x} \in \mathbb{R}$ such that $g(\hat{x}) = y$. Hence, $g$ is also surjective.
Hence, $g(x)$ is bijective.
In general, if $g(x) = h(f(x))$ and if $f(x) : A \to B$ and $h(x): B \to C$ are both bijective then $g(x): A \to C$ is also bijective.
In your case, $f(x)$ was bijective from $\mathbb{R} \to \mathbb{R}$ and $h(x) = 2x+3$ is also bijective from $\mathbb{R} \to \mathbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 4,
"answer_id": 2
} |
When will these two trains meet each other I cant seem to solve this problem.
A train leaves point A at 5 am and reaches point B at 9 am. Another train leaves point B at 7 am and reaches point A at 10:30 am.When will the two trains meet ? Ans 56 min
Here is where i get stuck.
I know that when the two trains meets the sum of their distances travelled will be equal to the total sum , here is what I know so far
Time traveled from A to B by Train 1 = 4 hours
Time traveled from B to A by Train 2 = 7/2 hours
Now if S=Total distance from A To B and t is the time they meet each other then
$$\text{Distance}_{\text{Total}}= S =\frac{St}{4} + \frac{2St}{7} $$
Now is there any way i could get the value of S so that i could use it here. ??
| By 7 am, when the second train starts, the first train has already been travelling for $2$ hours and has covered $2\cdot\frac{S}{4}=\frac{S}{2}$, so only half the distance remains.
From that moment both trains move toward each other, so the general meeting equation in $t$ (hours after 7 am) is:
$$v_{1}t + v_{2}t = \frac{S}{2}$$
Then we get that:
$$ \frac{S t}{4}+\frac{2S t}{7}=\frac{S}{2} \longrightarrow \frac{ t}{4}+\frac{2t}{7}=\frac{1}{2} \longrightarrow t=\frac{14}{15} (\text{56 minutes})$$
Q.E.D.
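The same answer drops out of a direct numeric formulation (times in hours measured from 5 am, distance normalized to $S=1$):

```python
v1, v2 = 1 / 4, 2 / 7          # speeds, in units of S per hour

def gap(t):
    # distance still separating the trains at time t >= 2 (i.e. after 7 am)
    return 1 - v1 * t - v2 * (t - 2)

t = (1 + 2 * v2) / (v1 + v2)   # solves gap(t) = 0
print((t - 2) * 60)            # 56 minutes after the second train starts
```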
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Square units of area in a circle I'm studying for the GRE and came across the practice question quoted below. I'm having a hard time understanding the meaning of the words they're using. Could someone help me parse their language?
"The number of square units in the area of a circle '$X$' is equal to $16$ times the number of units in its circumference. What are the diameters of circles that could fit completely inside circle $X$?"
For reference, the answer is $64$, and the "explanation" is based on $\pi r^2 = 16(2\pi r).$
Thanks!
| I'm currently doing this, and how I see the problem is:
$\pi r^2 = (2\pi r) \cdot 16$
If you solve this, you get $r = 32$, and since the diameter is $d = 2r$, $d = 64$.
So, any circle with a smaller diameter fits in this circle $(d<64)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find the largest integer $p$ such that $p+2013$ perfectly divides $[(p^{2013})+(p^{2012})+ .... +(p^2)+p+1]$ Find the largest integer $p$ such that $p+2013$ perfectly divides $[(p^{2013})+(p^{2012})+ .... +(p^2)+p+1]$
My approach:
Remainder when $p^{2012} + p^{2011} + \ldots + p + 1$ is divided by $p + 2012$ is
$R = 2012^{2012} - 2012^{2011} + 2012^{2010} - \ldots - 2012 + 1$.
This should be divisible by $p + 2012$.
So maximum value of $p$ will be $R - 2012$
$$
\max p = {(2012^{2013} + 1)/2013} - 2012.
$$
| The solution below is very close to the solution described in the post. It seems useful to describe what's going on in terms of polynomial division.
For brevity, and increased generality, let $a=2013$, and let $n=2013$. Consider the polynomial
$$P(x)=x^n+x^{n-1}+x^{n-2}+\cdots+x+1.$$
Divide $P(x)$ by the monic polynomial $x+a$. So we have $P(x)=Q(x)(x+a)+r$, where $Q(x)$ has integer coefficients. Putting $x=-a$ we conclude that $r=P(-a)$.
It so happens that $P(-a)$ is negative, so it is convenient to let $K=-P(-a)$.
Thus
$$\frac{P(x)}{x+a}=Q(x)-\frac{K}{x+a}.$$
It follows that for any integer $x$, the number $x+a$ divides $P(x)$ iff $x+a$ divides $K$.
We want the largest $x$ such that $x+a$ divides $K$. This is $K-a$. We can describe $K$ in various ways, not very compactly as $a^{n}-a^{n-1}+\cdots +a^2-a+1$, and more compactly by summing the geometric series.
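A small-scale numerical check of the argument (using $a = n = 5$ in place of $2013$; the names $a$, $n$, $P$, $K$ follow the answer's notation, and the specific values are illustrative only):

```python
a = n = 5  # small stand-in for 2013

def P(x):
    # P(x) = x^n + x^(n-1) + ... + x + 1
    return sum(x**j for j in range(n + 1))

K = -P(-a)  # the remainder argument gives P(x) = Q(x)(x + a) - K

# x + a divides P(x) exactly when x + a divides K
for x in range(1, 3 * K):
    assert (P(x) % (x + a) == 0) == (K % (x + a) == 0)

largest = K - a  # the largest such x
print(K, largest)  # 2604 2599
```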
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Your favourite application of the Baire Category Theorem I think I remember reading somewhere that the Baire Category Theorem is supposedly quite powerful. Whether that is true or not, it's my favourite theorem (so far) and I'd love to see some applications that confirm its neatness and/or power.
Here's the theorem (with proof) and two applications:
(Baire) A non-empty complete metric space $X$ is not a countable union of nowhere dense sets.
Proof: Suppose, for contradiction, that $X = \bigcup U_i$ where $\mathring{\overline{U_i}} = \varnothing$. We construct a Cauchy sequence as follows. First pick any point $x_1 \in (\overline{U_1})^c$. Such a point exists: $X$ itself is a non-empty open set, while $\mathring{\overline{U_1}} = \varnothing$ says that $\overline{U_1}$ contains no non-empty open set, so $\overline{U_1} \ne X$ and the open set $(\overline{U_1})^c$ is non-empty. Hence we can pick $x_1$ and $\varepsilon_1 > 0$ such that $B(x_1, \varepsilon_1) \subset (\overline{U_1})^c \subset U_1^c$.
Next we make a similar observation about $U_2$ so that we can find $x_2$ and $\varepsilon_2 > 0$ such that $B(x_2, \varepsilon_2) \subset \overline{U_2}^c \cap B(x_1, \frac{\varepsilon_1}{2})$. We repeat this process to get a sequence of balls such that $B_{k+1} \subset B_k$ and a sequence $(x_k)$ that is Cauchy. By completeness of $X$, $\lim x_k =: x$ is in $X$. But $x$ is in $B_k$ for every $k$ hence not in any of the $U_i$ and hence not in $\bigcup U_i = X$. Contradiction. $\Box$
Here is one application (taken from here):
Claim: $[0,1]$ contains uncountably many elements.
Proof: Assume that it contains countably many. Then $[0,1] = \bigcup_{x \in [0,1]} \{x\}$ is a countable union, and since each $\{x\}$ is nowhere dense, $[0,1]$ is a countable union of nowhere dense sets. But $[0,1]$ is complete, so we have a contradiction. Hence $[0,1]$ has to be uncountable.
And here is another one (taken from here):
Claim: The linear space of all polynomials in one variable is not a Banach space in any norm.
Proof: "The subspace of polynomials of degree $\leq n$ is closed in any norm because it is finite-dimensional. Hence the space of all polynomials can be written as countable union of closed nowhere dense sets. If there were a complete norm this would contradict the Baire Category Theorem."
| Let $I=[0,1]$ and $\mathcal{C}(I)= \{ f : I \to \mathbb{R} \ \text{continuous} \}$ with the topology of uniform convergence. Then the set of nowhere differentiable functions over $I$ is dense in $\mathcal{C}(I)$.
The same thing holds in $\mathcal{C}(I)$ for the set of nowhere locally monotonic functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "248",
"answer_count": 28,
"answer_id": 4
} |
Something weaker than the Riesz basis I have some function $f$, real valued and continuous.
I formed functions $\{f_{m,k}, k \in \mathbb{Z}, m>0\}$ such that $\mathrm{span}\{f_{m,k}, k \in \mathbb{Z}, m>0\}$ is dense in $L_p(\mathbb{R})$.
Now I would like to construct a Riesz basis from $\{f_{m,k}, k \in \mathbb{Z}, m>0\}$, but unfortunately, my $\{f_{m,k}\}$ does not generate a Riesz basis.
I am wondering if there is some weaker/stronger structure than the Riesz basis, such that $\{f_{m,k}, k \in \mathbb{Z}, m>0\}$ can possibly generate?
Thank you.
| The term you're looking for is probably Schauder basis, although for that you also need some independence properties (as usual with bases).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Trying to find the name of this Nim variant Consider this basic example of subtraction-based Nim before I get to my full question:
Let $V$ represent all valid states of a Nim pile (the number of stones remaining):
$V = \{0,1,2,3,4,5,6,7,8,9,10\}$
Let $B$ be the bound on the maximum number of stones I can remove from the Nim pile in a single move (minimum is always at least 1):
$B = 3$
Optimal strategy in a two-player game is then to ensure that, at the end of your turn, the number of stones left in the pile is a number in $V$ that is congruent to $0$ modulo $(B+1)$. Starting from the full pile of $10$, on my first move I remove 2 stones, leaving $8$, because $8$ modulo $4$ is $0$.
My opponent removes anywhere from 1-3 stones, but it doesn't matter because I can then reduce the pile to $4$ because $4$ modulo $4$ is $0$. Once my opponent moves, I can take the rest of the pile and win.
This is straightforward, but my question is about a more advanced version of this, specifically when $V$ does not include all the numbers in a range. Some end-states of the pile are not valid, which implies that I cannot find safe positions by applying the modulo $(B+1)$ rule.
Does this particular variant of Nim have a name that I can look up for further research? Is there another way to model the pile?
| You have analyzed the game correctly. Wikipedia lists it as "the subtraction game" under "Other variations of Nim".
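A sketch of the second modelling idea (my own illustration, not from the answer): when some pile sizes are invalid, the mod-$(B+1)$ shortcut fails, but a direct win/loss table over the valid states still works. The set `V` below is a made-up example.

```python
from functools import lru_cache

def winning_positions(V, B):
    """Win/loss table for a subtraction game whose states are restricted to V."""
    valid = frozenset(V)

    @lru_cache(maxsize=None)
    def win(n):
        # A state is winning iff some move of 1..B stones lands on a
        # valid losing state; n = 0 has no moves and is losing.
        return any((n - take) in valid and not win(n - take)
                   for take in range(1, min(B, n) + 1))

    return {n: win(n) for n in sorted(valid)}

# Sanity check: with every state 0..10 valid, the losing states are
# exactly the multiples of B + 1 = 4.
full = winning_positions(range(11), 3)
print([n for n, w in full.items() if not w])  # [0, 4, 8]

# With gaps in V, the pattern changes and must be computed directly.
gappy = winning_positions({0, 1, 2, 4, 5, 7, 8, 10}, 3)
```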
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Spectrum of a real number Whilst reading Concrete Mathematics, the authors mention something which they refer to as the "spectrum" of a real number (pg. 77):
We define the spectrum of a real number $\alpha$ to be an infinite multiset of integers, $$\operatorname{Spec}{(\alpha)}=\{\left\lfloor\alpha\right\rfloor, \left\lfloor2\alpha\right\rfloor,\left\lfloor3\alpha\right\rfloor,\cdots\}.$$
It then goes on to describe interesting properties of these "spectra", such as the fact that distinct reals have distinct spectra, i.e. given some $\alpha\in\mathbb{R},\space\not\exists\beta\in\mathbb{R}\setminus\{\alpha\}:\operatorname{Spec}{(\alpha)}=\operatorname{Spec}{(\beta)}$.
It also gives a proof regarding the partitioning of the set $\mathbb{N}$ into two spectra, $\operatorname{Spec}{(\sqrt{2})}$ and $\operatorname{Spec}{(\sqrt{2}+2)}$.
These properties intrigued me, so I wondered if there were any other properties of these multisets, however; a google search for "spectrum of a real number" doesn't appear to yield any relevant results, so I wondered if there was any research into these objects, and if there is whether "spectrum" is a non-standard name (if it is, I'd appreciate the common name for these objects).
Thanks in advance!
| This is non-standard (and does not agree well with any of the other meanings of "spectrum" in mathematics so I would not use it). The standard name is Beatty sequence (at least when $\alpha$ is irrational).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Corollary of Cauchy's theorem about finite groups. Cauchy's Theorem: Let $G$ be a finite group and let $p$ divide the order of $G$. Then $G$ has an element of order $p$ and consequently a subgroup of order $p$ (of course the cyclic subgroup generated by the aforementioned element of order $p$).
Corollary: Let $G$ be a finite group, and $p$ be a prime. Then $G$ is a $p$-group if and only if the order of $G$ is a power of $p$.
Relevant Definition: $G$ is a $p$-group (for a prime $p$) if every element of $G$ has order $p^m$ for some $m\geq 1$.
The proof of the theorem was a beautiful application of using group actions to count. Now I'm trying to use the theorem to prove the corollary which is stated in my text leaving the proof as an exercise, but I am stuck because there doesn't seem to be a natural way to apply the theorem. Please accept my 'request for a hint'.
Thanks very much!
| If the order $n$ of the group is not a power of $p$, there is a prime $q\ne p$ that divides $n$. But then $\dots$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Average number of times it takes for something to happen given a chance Given a chance between 0% and 100% of getting something to happen, how would you determine the average amount of tries it will take for that something to happen?
I was thinking that $\int_0^\infty \! (1-p)^x \, \mathrm{d} x$ where $p$ is the chance would give the answer, but doesn't that also put in non-integer tries?
| You are dealing with the geometric distribution. Let the probability of "success" on any one trial be $p\ne 0$. Suppose you repeat the experiment independently until the first success. The mean waiting time until the first success is $\frac{1}{p}$.
For a proof, let $X$ be the waiting time until the first success, and let $e=E(X)$. There are two possibilities. Either you get success right away (probability $p$), in which case your waiting time is $1$, or you get a failure on the first trial (probability $1-p$), in which case your expected total waiting time is $1+e$, since the first trial has been "wasted." Thus
$$e=(1)(p)+(1+e)(1-p).$$
Solving for $e$ we get $e=\frac{1}{p}$.
Alternately, we can set up an infinite series expression for $e$, and sum the series. The probability that $X=1$ is $p$. The probability that $X=2$ is $(1-p)p$ (failure then success). The probability that $X=3$ is $(1-p)^2p$ (two failures, then success). And so on. So
$$E(X)=p+2(1-p)p+3(1-p)^2p +4(1-p)^3p +\cdots.$$
If you wish, I can describe the process of finding a simple expression for the above infinite series.
Remark: For smallish $p$, your integral turns out to be a reasonably good approximation to the mean. However, as you are aware, we have here a discrete distribution, and the right process involves summation, not integration.
Another tool that we can use is the fact that
$$E(X)=\Pr(X \ge 1)+\Pr(X\ge2)+\Pr(X \ge 3)+\cdots.$$
That is the discrete version of your integral.
It gives us the geometric progression $1+(1-p)+(1-p)^2+(1-p)^3+\cdots$, which has sum $\frac{1}{p}$ if $p\ne 0$.
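A quick Monte Carlo check of $E(X)=1/p$ (a simulation sketch; the value $p=0.25$ is just an example):

```python
import random

random.seed(0)
p = 0.25  # chance of success on each trial

def waiting_time():
    # Number of trials up to and including the first success.
    n = 1
    while random.random() >= p:
        n += 1
    return n

trials = 200_000
mean = sum(waiting_time() for _ in range(trials)) / trials
print(mean)  # close to 1/p = 4
```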
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Linear system with positive semidefinite matrix I have a linear system $Ax=b$, where
*
*$A$ is symmetric, positive semidefinite, and entrywise positive. $A$ is a variance-covariance matrix.
*vector $b$ has elements $b_1>0$ and the rest $b_i<0$, for all $i \in \{2, \dots, N\}$.
Prove that the first component of the solution is positive, i.e., $x_1>0$.
Does anybody have any idea?
| Someone bumped up this old question today. For sake of having an answer, we will see that the problem statement arises from a classical property of $M$-matrices.
Let $D=\operatorname{diag}(1,-1,\ldots,-1),\ M=DAD+tI,\ y=Dx$ and $q=Db+ty$. Note that $y_1$ has the same sign as $x_1$ and $DAD$ is positive semidefinite. Pick a sufficiently small $t>0$. Then $q$ is positive and $M$ is positive definite (hence nonsingular). However, as $M=DAD+tI$, all off-diagonal entries of $M$ are negative. This makes $M$ an $M$-matrix. Now the problem boils down to the following known property of $M$-matrices:
Suppose $M$ is a matrix whose eigenvalues have positive real parts and its off-diagonal entries are negative. Then $M^{-1}>0$ (entrywise). Consequently, if $My=q>0$, then $y>0$.
Proof. See e.g. Horn and Johnson's Topics in Matrix Analysis. The usual proof is very easy. By the given assumptions on $M$, when $\alpha>0$ is sufficiently large, $P=\alpha I - M$ is nonsingular and (entrywise) positive and hence
$$
M^{-1}=\frac1\alpha\left(I-\frac1\alpha P\right)^{-1}
=\frac1\alpha\left(I+\frac1\alpha P+\frac1{\alpha^2} P^2+\ldots\right)>0.
$$
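A bare-bones numerical illustration of the claim in the $2\times 2$ case (my own example values, solved by Cramer's rule):

```python
# A: symmetric, positive definite, entrywise positive (a covariance matrix).
A = [[2.0, 1.0],
     [1.0, 2.0]]
# b: first entry positive, the rest negative.
b = [1.0, -3.0]

# Solve Ax = b by Cramer's rule.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
x2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det

print(x1, x2)
assert x1 > 0  # the first component is positive, as claimed
```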
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Determine whether $\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$ is convergent or divergent. If convergent, find the sum. $$\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$$
It's geometric, since the common ratio $r$ appears to be $\frac{-3}{4}$, but this is where I get stuck. I think I need to do this: let $f(x) = \frac{(-3)^{x-1}}{4^x}$.
$$\lim\limits_{x \to \infty}\frac{(-3)^{x-1}}{4^x}$$
Is this how I handle this exercise? I still cannot seem to get the answer $\frac{1}{7}$
| Let $q = \frac{-3}{4}$, $a_n = \frac{(-3)^{n-1}}{4^n}$, $b_0 = 0$, $b_n = b_{n-1} + a_n$.
$a_n = -\frac{1}{3} (\frac{-3}{4})^n = -\frac{1}{3} q^n$, hence $b_n = -\frac{1}{3} c_n$, where $c_0 = 0$, $c_n = c_{n-1} + q^n$.
The $c_n$ limit is equal to $q + q^2 + q^3 + \ldots = \frac{q}{1-q} = \frac{\frac{-3}{4}}{1-\frac{-3}{4}} = \frac{-3}{7}$, thus $b_n$ limit is equal to $\frac{1}{7}$.
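The limit can also be checked numerically with exact rational partial sums (a quick sketch):

```python
from fractions import Fraction

def partial_sum(N):
    # b_N = sum_{n=1}^{N} (-3)^(n-1) / 4^n
    return sum(Fraction((-3) ** (n - 1), 4 ** n) for n in range(1, N + 1))

print(float(partial_sum(60)))  # approaches 1/7 = 0.142857...
```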
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Some link between Laplace equation and heat equation I would like to know if it is true that the solution of $\partial_t u(x,t)=\Delta u(x,t)+f(x)$ for $x\in \mathbb{R}^n$, $t\ge 0$, with initial condition $u(x,0)=0$, converges as $t\to\infty$ to the solution of $\Delta u=-f$ on $\mathbb{R}^n$.
How can I show whether it is true?
What I am thinking is to use Duhamel's principle to find the solution, but what should I do next?
Thank you for your help.
| If $f$ is well behaved, the solution $u(t)$ can be obtained analytically. First, define $w(t,x) \equiv u(t,x)+F(x)$ where $\Delta F(x)=f$. Assuming $f$ is not pathological, such $F$ exists and can be calculated directly, because we know the kernel of Laplace's equation:
$$F(x)=\int_{\mathbb{R}^n} \frac{f(y)}{|x-y|}dy $$
This $w$ obeys a homogeneous equation:
$$\partial_t w= \partial_t (u+F)=\partial_t u=\Delta u+f=\Delta (u+F)= \Delta w$$
The solution for $w$ can also be obtained because we know the kernel of the heat equation:
$$w(t,x)=\frac{1}{(4\pi t)^{n/2}} \int_{\mathbb{R}^n} w(0,y)e^{-\frac{|x-y|^2}{4t}}dy$$
It is seen from this solution that if $w(0,x)$ decays fast enough with $x$, then $w\to 0$ as $t\to\infty$, for all $x$. Therefore, $\Delta u \to -f$ as needed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proving $\mathrm e <3$ Well I am just asking myself if there's a more elegant way of proving $$2<\exp(1)=\mathrm e<3$$ than doing it by induction and using the fact of $\lim\limits_{n\rightarrow\infty}\left(1+\frac1n\right)^n=\mathrm e$, is there one (or some) alternative way(s)?
| First let's consider a simple heuristic argument to show that $2<e<4$. It is easy to prove using the definition of the derivative that if $f(x)=2^x$ then $f'(x) = \text{constant}\cdot 2^x$. The curve gets steeper as $x$ increases, and the average slope between $x=0$ and $x=1$ is $(2^1-2^0)/(1-0)= 1$. Therefore, the slope at $x=0$ is less than $1$; hence the "constant" is less than $1$. Now do the same with $g(x)=4^x$ on the interval from $x=-1/2$ to $x=0$, and conclude that the slope at $x=0$ is more than $1$; hence the "constant" you get there is more than $1$.
So $2$ is too small, and $4$ is too big, to serve as the base of the natural exponential function.
It's messier to do the same with $3$, but the interval from $x=-1/6$ to $x=0$ will do it, and you conclude $3$ is too big to be the base of the natural exponential function.
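The three secant-slope comparisons are easy to check numerically (my own illustration of the answer's heuristic, using convexity: for $b^x$ the slope at $0$ lies below the secant on an interval starting at $0$ and above the secant on an interval ending at $0$):

```python
import math

def secant(b, x0, x1):
    # Average slope of b^x between x0 and x1.
    return (b ** x1 - b ** x0) / (x1 - x0)

# Slope of 2^x at 0 is below the secant on [0, 1]:  ln 2 < 1.
assert abs(secant(2, 0, 1) - 1) < 1e-12
# Slope of 4^x at 0 is above the secant on [-1/2, 0]:  ln 4 > 1.
assert abs(secant(4, -0.5, 0) - 1) < 1e-12
# Slope of 3^x at 0 is above the secant on [-1/6, 0], which exceeds 1:  ln 3 > 1.
assert secant(3, -1/6, 0) > 1
# Hence 2 < e < 3.
assert math.log(2) < 1 < math.log(3)
```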
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 7,
"answer_id": 4
} |
De-arrangement in permutation and combination This article talks about de-arrangement in permutation combination.
Funda 1: De-arrangement
If $n$ distinct items are arranged in a row, then the number of ways they can be rearranged such that none of them occupies its original position is,
$$n! \left(\frac{1}{0!} - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + (-1)^n \frac{1}{n!}\right).$$
Note: De-arrangement of 1 object is not possible.
$\mathrm{Dearr}(2) = 1$; $\mathrm{Dearr}(3) = 2$; $\mathrm{Dearr}(4) = 12 - 4 + 1 = 9$; $\mathrm{Dearr}(5) = 60 - 20 + 5 - 1 = 44$.
I am not able to understand the logic behind the equation. I searched in the internet, but could not find any links to this particular topic.
Can anyone explain the logic behind this equation or point me to some link that does it ?
| These are called derangements. Wikipedia's article on inclusion-exclusion counts the number of permutations where at least one item is where it belongs, which is $n!$ minus the derangement number. OEIS A000166 has many references.
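The formula is easy to verify by brute force for small $n$ (a quick sketch):

```python
from math import factorial
from itertools import permutations

def derangements_formula(n):
    # n! * (1/0! - 1/1! + 1/2! - ... + (-1)^n / n!)
    return round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))

def derangements_brute(n):
    # Count permutations with no element in its original position.
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

for n in range(2, 8):
    assert derangements_formula(n) == derangements_brute(n)

print([derangements_formula(n) for n in range(2, 6)])  # [1, 2, 9, 44]
```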
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is the simplest way to prove that the logarithm of any prime is irrational? What is the simplest way to prove that the logarithm of any prime is irrational?
I can get very close with a simple argument: if $p \ne q$ and $\frac{\log{p}}{\log{q}} = \frac{a}{b}$, then because $q^\frac{\log{p}}{\log{q}} = p$, $q^a = p^b$, but this is impossible by the fundamental theorem of arithmetic. So the ratio of the logarithms of any two primes is irrational. Now, if $\log{p}$ is rational, then since $\frac{\log{p}}{\log{q}}$ is irrational, $\log{q}$ is also irrational. So, I can conclude that at most one prime has a rational logarithm.
I realize that the rest follows from the transcendence of $e$, but that proof is relatively complex, and all that's left to show is that no integer power of $e$ is a prime power (because if $\log p$ is rational, then $e^a = p^b$ has a solution). It is easy to prove that $e$ is irrational ($e = \frac{a}{b} = \sum{\frac{1}{n!}}$, multiply by $b!$ and separate the sum into integer and fractional parts) but I can't figure out how to generalize this simple proof to show that $e^x$ is irrational for all integer $x$; it introduces a $x^n$ term to the sum and the integer and fractional parts can no longer be separated. How to complete this argument, or what is a different elementary way to show that $\log{p}$ is always irrational?
| A proof of the irrationality of rational powers of $e$ is given on page 8 of Keith Conrad's notes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 0
} |
3 consecutive numbers I was playing with some numbers and just realized that:
For any 3 consecutive numbers $X$, $Y$ and $Z$: $Y^2 = XZ + 1$.
For eg: Consider numbers 171, 172 and 173
$172^2 = 29584$
and
$171 \times 173 = 29583$
Can anyone tell me if there is any proof for this and what it is known as?
| Let the middle number be $x$; the other two are $x-1$ and $x+1$. Basic algebra tells us that $(x-1)(x+1)=x^2-1$, and therefore $x^2=(x-1)(x+1)+1$. (This is true even if $x$ is not an integer.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Deducing results in linear algebra from results in commutative algebra Here are two examples of results which can be deduced from commutative algebra:
*
*Any $n\times n$ complex matrix is conjugate to a Jordan canonical matrix (can be proven using the structure theorem for modules over a PID, in this case $\mathbf{C}[T]$ - see for example these course notes).
*Commuting matrices have a common eigenvector (this can be seen as a consequence of Hilbert's Nullstellensatz, according to Wikipedia).
My question is, does anyone know of other examples of results in linear algebra which can be deduced (non-trivially*) from results in commutative algebra?
More precisely: Which results about modules over fields can be obtained via modules over more general commutative rings? (Thanks to Martin's comment below for suggesting this precision of the question).
.* by "non-trivially," I mean you have to go deeper than simply applying module theory to modules over a field.
| Criteria for unitary similarity between two matrices were given by Specht in 1940 in a paper
titled “Zur Theorie der Matrizen” (“Toward Matrix Theory”).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 2
} |
Are there problems that are optimally solved by guess and check? For example, let's say the problem is: What is the square root of 3 (to x bits of precision)?
One way to solve this is to choose a random real number less than 3 and square it.
1.40245^2 = 1.9668660025
2.69362^2 = 7.2555887044
...
Of course, this is a very slow process. Newton-Raphson gives the solution much more quickly. My question is: Is there a problem for which this process is the optimal way to arrive at its solution?
I should point out that information used in each guess cannot be used in future guesses. In the square root example, the next guess could be biased by the knowledge of whether the square of the number being checked was less than or greater than 3.
| Pure guess and check? Probably not; there's always some amount of mathematical cleverness to reduce the problem. But many famous problems, such as the four colour theorem, were ultimately solved by checking a large number of test cases. Solving non-linear differential equations, when they can be solved at all, also amounts to a lot of guess and check.
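To make the contrast concrete, here is a rough sketch (my own illustration, with an arbitrary tolerance and seed) of pure guess-and-check versus Newton-Raphson for $\sqrt 3$:

```python
import random

random.seed(1)
target = 3.0
tol = 1e-3

# Pure guess and check: draw fresh guesses until one squares close enough.
# No information carries over between guesses, per the question's rule.
guesses = 0
while True:
    guesses += 1
    g = random.uniform(0, 3)
    if abs(g * g - target) < tol:
        break

# Newton-Raphson: x <- (x + 3/x) / 2, reusing information from each step.
x, steps = 3.0, 0
while abs(x * x - target) >= tol:
    x = (x + target / x) / 2
    steps += 1

print(guesses, steps)  # guessing typically takes thousands of draws, Newton only a few
```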
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |