Proof of Inverse Function Theorem, class k I was studying the Inverse Function Theorem, and I found this proof on the internet:
http://virtualmath1.stanford.edu/~andras/174A-2.pdf
In the proof, there is this line about $C^k$ functions:
If $F$ is $C^k$, $k > 1$, then $DF$ is $C^{k-1}$, hence $(DF)^{-1}$ is $C^{k-1}$, hence $F^{-1}$ is $C^k$.
Now what I don't get is the last "hence" part, since $(DF)^{-1}\neq D(F^{-1})$. Is there any reasoning in why this is true?
| You can express the derivative $D(F^{-1})$ of $F^{-1}$ in terms of $F^{-1}$ and the derivative of $F$: by the chain rule, $D(F^{-1})=\big((DF)\circ F^{-1}\big)^{-1}$, and matrix inversion is a smooth map on the invertible matrices. So if $F^{-1}$ is known to be $C^j$ for some $1\le j<k$, then $(DF)\circ F^{-1}$ is the composition of a $C^{k-1}$ map with a $C^j$ map, hence $C^j$; therefore $D(F^{-1})$ is $C^j$, which means $F^{-1}$ is $C^{j+1}$. Starting from $j=1$ (the classical $C^1$ case) and inducting up to $j=k-1$ gives that $F^{-1}$ is $C^k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3524900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $f:[0,\infty)\to[0,\infty)$ is continuous & decreasing and $α>0$ s.t. $\int_{0}^\infty x^α f(x)dx<\infty$, $\lim_{x\to\infty}f(x)x^{α+1}=0$ Let $f: [0,\infty)\rightarrow [0,\infty)$ be a continuous and decreasing function. Suppose that exists an $\alpha>0$ such that $\int_{0}^{\infty} x^\alpha f(x) dx < \infty$. Prove that $\lim_{x\rightarrow \infty} f(x)x^{\alpha+1}=0$.
First I tried to integrate using parts formula because I thought that was a good way for arriving to the limit. And then I tried using the definition of convergence but I didn't know how to finish.
| Indeed, suppose that the statement $\lim_{x\rightarrow \infty} f(x)x^{\alpha+1}=0$ is false. Then, by definition of the limit, there exists an $\varepsilon>0$ such that for every $X\in[0,\infty[$, there exists an $x\geq X$ such that $f(x)x^{\alpha+1}\geq\varepsilon$.
Hence we can construct a sequence $x_1,x_2,x_3,\dots$ in $[0,\infty[$ satisfying:
* $f(x_i) x_i^{\alpha+1}\geq\varepsilon$ for all $i$, and
* $\frac{x_{i}}{x_{i-1}}\geq 2^i$ for all $i$.
Now comes the main idea: Since $f$ is decreasing, we have $f(x)\geq\frac{\varepsilon}{x_i^{\alpha+1}}$ for all $i$ and $x\le x_i$.
Hence, for all $i\geq 2$, $$\int_{x_{i-1}}^{x_i} f(x) x^\alpha\,\mathrm dx\geq\varepsilon\int_{x_{i-1}}^{x_i} \frac{x^\alpha}{x_i^{\alpha+1}}\,\mathrm dx=\frac\varepsilon{\alpha+1}\left(1-\left(\frac{x_{i-1}}{x_i}\right)^{\alpha+1}\right)\geq\frac{\varepsilon}{\alpha+1}-\frac{\varepsilon}{2^i},$$ where the last step uses $\frac{x_{i-1}}{x_i}\leq\frac{1}{2^i}$ and $\alpha+1>1$.
It follows that $$\int_0^\infty f(x) x^\alpha\,\mathrm dx\geq \sum_{i=2}^\infty \int_{x_{i-1}}^{x_i} f(x) x^\alpha\,\mathrm dx\geq\sum_{i=2}^\infty\frac{\varepsilon}{\alpha+1}-\frac{\varepsilon}{2^i},$$ but the last sum is clearly divergent. Contradiction. $\square$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3525021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Finding limit using polar coordinates Polar coordinates are often useful in finding the limits in the case where $(x,y) $ approaches (0,0). Is it possible to use polar coordinates to find
$$\lim_{(x,y)\rightarrow (1,1)} \frac {xy-y-2x+2}{x-1} $$
This limit is $-1$ (the numerator factors as $(x-1)(y-2)$, so the quotient equals $y-2$ away from $x=1$); can I use polar coordinates?
| For this case you would have to use $x=r\cos\theta+1,y=r\sin\theta+1$, giving
$$\lim_{r\to0}\frac{(r\sin\theta-1)r\cos\theta}{r\cos\theta}$$ where $\theta$ can vary arbitrarily. This indeed shows that the limit exists and is $-1$, but there is no benefit to switch to polar.
An alternative approach would be to use translated Cartesian coordinates, $u=x-1,v=y-1$, and
$$\lim_{(x,y)\to(1,1)}\frac{xy-y-2x+2}{x-1}=\lim_{(u,v)\to(0,0)}\frac{(v-1)u}u=\lim_{(u,v)\to(0,0)}(v-1)=-1.$$
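As a numerical sanity check (my addition, not part of the original answer), evaluating the quotient at points approaching $(1,1)$ along many directions gives values settling at $-1$; directions along which $x$ stays equal to $1$ must be skipped, since the quotient is undefined there:

```python
import math

def f(x, y):
    return (x * y - y - 2 * x + 2) / (x - 1)

# approach (1, 1) along rays x = 1 + r cos(t), y = 1 + r sin(t), r small
vals = []
for k in range(1, 12):
    t = k * math.pi / 12.5
    if abs(math.cos(t)) < 1e-6:   # skip directions with x identically 1
        continue
    r = 1e-6
    vals.append(f(1 + r * math.cos(t), 1 + r * math.sin(t)))

print(max(abs(v + 1) for v in vals))  # tiny: every sampled direction gives ≈ -1
```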
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3525417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $x^d \equiv 1 \pmod{p}$ has full number of roots, $d\mid p-1$? Let $p$ be a prime.
What I know is that if $d\mid p-1$, then $x^d \equiv 1 \pmod{p}$ has the maximum possible number of roots, that is, $d$.
I am wondering if the converse holds.
I have searched for $d\not\mid p-1$ that still has $d$ many roots in the equation, but I could not find one up to $p=23$ (unless my calculation was not complete). Is there a counter-example to the converse?
| Yes, the converse holds.
We have $x^m \equiv 1 \bmod{p}$ iff $x^d \equiv 1 \bmod{p}$, where $d=\gcd(m,p-1)$.
Moreover, $x^d \equiv 1 \bmod{p}$ has exactly $d$ roots because $\mathbb F_p^\times$ is cyclic.
Therefore, $x^d \equiv 1 \bmod{p}$ has $d$ roots iff $d$ divides $p-1$.
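A brute-force check (my addition, extending the question's search past $p=23$): since $(\mathbb Z/p)^\times$ is cyclic, the root count is always $\gcd(d,p-1)$, and so it equals $d$ exactly when $d\mid p-1$:

```python
from math import gcd

def num_roots(d, p):
    """Number of solutions of x^d ≡ 1 (mod p) with 1 ≤ x ≤ p-1."""
    return sum(1 for x in range(1, p) if pow(x, d, p) == 1)

for p in (3, 5, 7, 11, 13, 17, 19, 23, 29):
    for d in range(1, 2 * p):
        # the count is always gcd(d, p-1), since (Z/p)^x is cyclic ...
        assert num_roots(d, p) == gcd(d, p - 1)
        # ... so it is the full count d exactly when d divides p-1
        assert (num_roots(d, p) == d) == ((p - 1) % d == 0)

print("checked all primes up to 29")
```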
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3525595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Positive matrix with integer eigenvalues Is there any way of creating a positive matrix which has integer eigenvalues? Each entry $a_{ij}$ of the matrix must be strictly greater than $0$. I get how to create a matrix with certain eigenvalues using diagonal matrices, but I do not know how to make sure the matrix is strictly positive
| The $n \times n$ matrix with diagonal entries $b$ and off-diagonal entries $a$ has eigenvalues $b-a$ (with multiplicity $n-1$) and $b + (n-1) a$.
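To see the claim concretely (my own verification sketch, not from the original answer): the all-ones vector is an eigenvector for $b+(n-1)a$, and any difference of two standard basis vectors is an eigenvector for $b-a$:

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

n, a, b = 4, 2, 5   # a, b > 0 gives strictly positive entries
M = [[b if i == j else a for j in range(n)] for i in range(n)]

ones = [1] * n                       # eigenvector for b + (n-1)a
assert matvec(M, ones) == [(b + (n - 1) * a) * c for c in ones]

diff = [1, -1] + [0] * (n - 2)       # eigenvector for b - a
assert matvec(M, diff) == [(b - a) * c for c in diff]

print(b - a, b + (n - 1) * a)        # integer eigenvalues: 3 and 11
```

Choosing integers $a,b>0$ therefore gives a strictly positive matrix with integer eigenvalues.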
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3525859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Let f be a real valued continuous function such that for all real x and for all t≥0, f(x)=f(x•e^t). Show that f is a constant function. My approach to solving the problem stated in the title is as follows:
f(x)=f(x•e^t)=y(say)
Differentiating both sides w.r.t x, I got
f'(x)=e^t•f'(x•e^t)
This implies
e^t=1 [since f(x)=f(x•e^t) implies their derivatives are equal as well]
The above equation holds true for the case when t=0, which would trivially mean f(x)=f(x). To explain the above equality for all t≥0,
f'(x)=f'(x•e^t)=0, which means f is a constant function.
However, I'm concerned with the rationality of my last step, which makes me doubt my approach altogether. Can anyone explain if my approach is correct?
My background: High school level calculus. Once again, apologies for not using MathJax :)
| The function $e^t$ maps $[0,\infty)$ onto $[1,\infty)$. So given any $0<x<y$ we can choose $t\ge 0$ such that $y=xe^t$, and then we have $f(x)=f(y)$. In other words, $f(x)$ equals some constant $c_1$ on the positive reals (this covers all cases $x=y$, $x<y$, or $y<x$).
Similarly, $f(x)$ is constant $c_2$ on the negative reals. But we are told it is continuous, so we must have $c_1=f(0)=c_2$ and $f$ is constant on the reals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Closed unit ball of $X^*$ has extreme points. Let $X$ be a Banach space. I have to show that the closed unit ball of the dual space $X^*$ has extreme points.
I just used the Banach-Alaoglu and Krein-Milman theorems to prove this, but I'm not sure if this is correct because we haven't specified a topology on $X^*$. This argument obviously only holds when we consider the weak* topology, so if anyone could clarify this, that would be greatly appreciated.
| The concept of extreme points doesn't depend on which topology you choose. So we could restate Krein-Milman as follows:
Let $V$ be a vector space, and $K \subset V$ a nonempty convex set. Suppose there exists a locally convex topology $\tau$ on $V$ such that $K$ is compact with respect to $\tau$. Then $K$ has extreme points (and indeed $K$ is equal to the $\tau$-closure of the convex hull of its extreme points).
Thus, taking $V = X^*$ and $K$ the closed unit ball, it suffices to come up with some locally convex topology $\tau$ in which $K$ is compact. Thanks to the Banach-Alaoglu theorem, an obvious choice is to take $\tau$ to be the weak-* topology, and then everything works.
So your proof is fine.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$I_n=\int_0^1{\frac{x^n}{x^n+1}}$. Prove $I_{n+1} \le I_n$ for any $n \in \mathbb N$ $$I_n=\int_0^1{\frac{x^n}{x^n+1}}$$
Prove $\lim_{n\to\infty}{I_n} = 0$
Here is what I tried.
First, I rewrite $I_n$.
$$I_n=\int_0^1{1-\frac{1}{x^n+1}}=1 - \int_0^1{\frac{1}{x^n+1}}$$
Now the limit becomes:
$$L=1-\lim_{n\to\infty}\int_0^1{\frac{1}{x^n+1}}$$
Next I try to solve the limit of the integral using Squeeze Theorem, with no success.
Using the fact $0\le x \le 1$ I get to the following double inequality:
$$\int_0^1\frac{1}{x^{n-1}+1} \le \int_0^1\frac{1}{x^{n}+1} \le 1, $$
that is, $1-I_{n-1} \le 1-I_n \le 1$, i.e. $I_n \le I_{n-1}$.
I don't know what to do next, I would greatly appreciate some help.
| For any $\epsilon \gt 0$, $\int_0^1\frac{1}{x^n+1}dx\gt K_n(\epsilon)=\int_0^{1-\epsilon}\frac{1}{x^n+1}dx$.
For every $\delta \gt 0$, there exists an $N$ such that for all $n\gt N$ and all $x\in[0,1-\epsilon]$, $x^n\lt \delta$ (since $x^n\le(1-\epsilon)^n\to 0$).
In this case, $K_n(\epsilon)\gt\frac{1-\epsilon}{1+\delta}$. Since $\delta$ is arbitrarily small, $\lim_{n\to \infty}K_n(\epsilon)\ge 1-\epsilon$ so $\lim_{n\to \infty}\int_0^1\frac{1}{x^n+1}dx \ge 1-\epsilon$. Because $\epsilon $ is arbitrary, $\lim_{n\to\infty}\int_0^1\frac{1}{x^n+1}dx=1$ and $I_n\to 0$.
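As an illustration (my addition, independent of the proof), a simple midpoint-rule computation shows $I_n$ decreasing to $0$; as an aside, for large $n$ it behaves like $\frac{\ln 2}{n}$:

```python
import math

def I(n, m=20000):
    """Midpoint-rule approximation of I_n = ∫₀¹ xⁿ/(xⁿ+1) dx."""
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        t = x ** n
        total += t / (t + 1.0)
    return h * total

for n in (1, 5, 25, 125):
    print(n, I(n), math.log(2) / n)   # I_n → 0; for large n, I_n ≈ ln(2)/n
```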
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Simplification of a sum of factorials I am having trouble simplifying the following series. The sum is as follows: $$\sum_{k=0}^{\infty} \frac{x^k}{k^2!}.$$
Is it possible to simplify it? My intuition is telling me that a simplification involving the exponential is possible. Help would be appreciated, thank you.
| I think I have a partial answer. Begin by recognizing the square factorial function can be broken up into two components:
$$
k^2 ! = k! \prod_{j=k+1}^{k^2} j
$$
The series is now:
$$
\sum_{k=0}^\infty \frac{x^k}{k!} \prod_{j=k+1}^{k^2} \frac{1}{j}
$$
The product can be expressed as
$$ \frac{\Gamma(k+1)}{\Gamma(k^2+1)} = \frac{\Gamma(k)}{k\Gamma(k^2)}
$$
The second equality is obtained by $z\Gamma(z) = \Gamma(z+1)$ (alternatively written $\Gamma(z+1) = \Pi(z)$). Hence the series can be written as:
$$
\sum_{k=0}^\infty \frac{x^k}{k!} \frac{\Gamma(k)}{k\Gamma(k^2)} = \sum_{k=0}^\infty \frac{x^k}{k!} \frac{\Pi(k)}{\Pi(k^2)}
$$
I have looked through many references and I cannot find an identity that simplifies the ratio of the Pi or Gamma functions. Unfortunately duty calls and I cannot spend any more time on this problem, but I do believe this is the best that can possibly be done. However, in this form it is easy to ascertain the following:
1) The series converges (to what, we still don't know).
2) The value is at most $e^x$ for $x \ge 0$, because $\Pi(k)/\Pi(k^2) = k!/(k^2)! \leq 1$ for every integer $k \ge 0$.
For the hell of it I checked Wolfram's infinite series calculator. I was only told the series converges (as expected), but no specific answer was given. Of course, these types of calculators aren't 100% reliable and don't replace a proof. If anyone has further suggestions please chime in. (A graph of $\Gamma(z+1) / \Gamma(z^2+1)$ accompanied the original post.)
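Though a closed form seems out of reach, the series is trivial to evaluate numerically, because $(k^2)!$ grows so fast that a handful of terms suffice. Here is a small check (my addition) of the convergence and of the $e^x$ bound above:

```python
from math import factorial, exp

def S(x, terms=8):
    """Partial sum of Σ_{k≥0} x^k / (k²)!  (terms beyond k ≈ 6 are negligible)."""
    return sum(x ** k / factorial(k * k) for k in range(terms))

for x in (1.0, 2.0, 5.0):
    print(x, S(x))
    assert S(x) < exp(x)                        # the e^x bound from the answer
    assert abs(S(x) - S(x, terms=12)) < 1e-12   # already converged at 8 terms
```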
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Inequality that holds for all $n\in \mathbb N$ and $x\in [0,1]$. I want to prove that
$$x(1-x)^n+x^n(1-x)\le \frac1{2n}$$
for $x\in[0,1]$ and any $n\in \mathbb N$ ($n\ge1$). I have plotted the function $$f(x)=x(1-x)^n+x^n(1-x)-\frac1{2n}$$ and (if I am not mistaken) I know that it holds with equality for $n=1,2$ and strict inequality for $n\ge3$ but I cannot prove it rigorously. I have also tried to exploit symmetries, e.g., $g(x)=x(1-x)^n$, then I can write it as $g(x)+g(1-x)\le \frac{1}{2n}$ or maximize each summand independently (I know that $x^n(1-x)<1/ne$ with the max attained at $x=\frac{n}{n+1}$) or to write it as $$nx^{n-1}+n(1-x)^{n-1}\le \frac{1}{2x(1-x)}$$ and again exploit some kind of AM-GM inequality or $\ln$ type inequality (the RHS in the last inequality is $\left(\ln{\sqrt{\frac{x}{1-x}}}\right)'$, or do induction but nothing has worked.
Edit: My best shot up to now is to write it as $$g(x)+g(1-x)\le0$$ with $g(x)=2nx^{n-1}-\frac1x$ and $x\in[0,1/2]$. The LHS is already less than $-1$ for $n\ge3$ so there seems to be enough leeway (at least more than in the original formulation). Also, the maximum of $g(x)+g(1-x)$ is attained at $1/2$ (which is convenient) for all $n\ge0$ except for $4\le n\le 14$.
| We claim that
$$x^n(1-x)\leq \frac{x}{2n}.$$
This is equivalent to
$$x^{n-1}(1-x)\leq \frac{1}{2n}.$$
However, the maximum of the left side is attained when its derivative is $0$, as it is $0$ at both $0$ and $1$. This means
$$(n-1)x_0^{n-2}-nx_0^{n-1}=0\implies x_0=\frac{n-1}{n}.$$
So
$$x^{n-1}(1-x)\leq \left(\frac{n-1}{n}\right)^{n-1}\frac{1}{n}\leq \left(\frac12\right)^1\frac1n=\frac1{2n},$$ where the second inequality holds because $\left(\frac{n-1}{n}\right)^{n-1}$ is decreasing in $n$ and equals $\frac12$ at $n=2$. (For $n=1$ the intermediate claim fails pointwise, but the original inequality reads $2x(1-x)\le\frac12$, which holds since $x(1-x)\le\frac14$.)
Now we can finish by noting that
$$x^n(1-x)+x(1-x)^n\leq \frac{x}{2n}+\frac{1-x}{2n}=\frac1{2n}.$$
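A quick grid search (my own sanity check, separate from the proof) confirms the bound and the equality cases $n=1,2$ noted in the question:

```python
def lhs_max(n, m=10000):
    """Maximum of x(1-x)^n + x^n(1-x) over a uniform grid on [0, 1]."""
    return max((k / m) * (1 - k / m) ** n + (k / m) ** n * (1 - k / m)
               for k in range(m + 1))

for n in range(1, 30):
    assert lhs_max(n) <= 1 / (2 * n) + 1e-12

print(lhs_max(1), lhs_max(2))  # 0.5 and 0.25: equality cases n = 1, 2
```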
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Third order differential equation with variable coefficients I am wondering if there were known methods for solving this type of differential equations ?
$$x^2y'''+3xy''+2xy'+2y=0$$
Thank you in advance
| You can reduce the order of the DE:
$$x^2y'''+3xy''+2xy'+2y=0$$
$$(x^2y'')'+(xy')'-y'+2(xy)'=0$$
Integrate:
$$x^2y''+xy'+(2x-1)y=C_1$$
Now it's a second order DE. Then use reduction of order or series solution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3526779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Order of a power series Let $f$ be a power series over $\mathbb{C}$ with infinite radius of convergence. The order $\rho(f)$ of $f$ is defined to be the infimum of the following set:
$$ \{ A \ge 0 \mid \exists r_0\ge 0, \forall r\ge r_0, M_f(r) \le \exp(r^A)\}$$
Where $M_f(r)$ is the supremum of $|f|$ over all complex numbers of norm $r$.
How does one show the following identity:
$$ 1/\rho(f) = \liminf_{n\to\infty}\left(- \dfrac{\ln|a_n|}{n\ln n}\right)$$
Where $f(z) = \sum_{n\ge 0} a_n z^n$
I managed to show an inequality.
Let $A > \rho(f)$, then for $r$ big enough, $M_f(r) \le \exp(r^A)$. We then use the identity
$$2\pi r^n a_n = \int_0^{2\pi} f(re^{it}) e^{-int}\mathrm{d}t $$
To obtain
$$ |a_n| \le r^{-n}M(r)$$
For $r$ big enough, we therefore have
$$ | a_n|\le \exp( r^A - n\ln r)$$
$$ \rightarrow \dfrac{-\ln |a_n|}{n\ln n} \ge \frac{\ln(r)}{\ln n} - \dfrac{r^A}{n\ln n}$$
For $n$ big enough, one can set $r= n^{1/A}$ and use the previous inequality :
$$ \dfrac{-\ln |a_n|}{n\ln n} \ge \frac{1}{A} - \frac{1}{\ln n}$$
Taking the lower limit, we obtain
$$\liminf_{n\to\infty}\left(- \dfrac{\ln|a_n|}{n\ln n}\right) \ge \frac{1}{A}$$
Then, by letting $A\to \rho(f)$ I get
$$\liminf_{n\to\infty}\left(- \dfrac{\ln|a_n|}{n\ln n}\right) \ge \dfrac{1}{\rho(f)} $$
I'm having trouble showing the other side now; my idea was to take $A < \liminf_{n\to\infty}\left(- \dfrac{\ln|a_n|}{n\ln n}\right)$. Then there exists $n_0\ge 0$ such that $n\ge n_0$ implies
$$ -\dfrac{\ln |a_n|}{n\ln n} \ge A $$
$$ \rightarrow \ln |a_n| \le - A n\ln n$$
$$ |a_n| r^n \le \exp( n\ln r - A n\ln n)$$
$$ |a_n| r^n \le \left( \frac{r^{1/A}}{n}\right)^{nA} $$
At first sight, this looks good as the right hand side is close to
$$ \dfrac{ r^{n/A}}{n^n} \le \dfrac{r^{n/A}}{n!} $$
And the latter sums to $\exp(r^{1/A})$ and the other inequality follows. But I don't see how I can achieve that inequality. I tried taking $B< A$ but it doesn't get rid of the exponent over the $n^n$. I also tried splitting the sum in two according to whether $r^{1/A}/n \le 1$ or $\ge 1$ and using an inequality on each part, but the splitting depends on $n$.
So I am stuck, and starting to feel like I should try to pursue another idea, even though this seems to be the "natural" path. But I can't think of anything.
Any ideas to solve this problem ? Thanks.
| You're on the right track; just the final estimate is missing. This is easier to do with the whole series than termwise. Let me however replace your $A$ with $1/\vartheta$ so that at the end I get exponents of $\vartheta$ instead of $1/A$. What we want to show is
$$\frac{1}{\vartheta} < \liminf_{n \to \infty} \biggl(\frac{-\ln \lvert a_n\rvert}{n\ln n}\biggr) \implies \vartheta \geqslant \rho(f)\,.$$
You already found
$$\lvert a_n\rvert < \frac{1}{n^{n/\vartheta}}$$
for $n \geqslant n_0$. Hence there is a $b_1 > 0$ such that
$$\lvert a_n\rvert \leqslant b_1\cdot \frac{1}{n^{n/\vartheta}}$$
holds for all $n$. Then we have
$$M(r) \leqslant \sum_{n = 0}^{\infty} \lvert a_n\rvert r^n \leqslant b_1\cdot \sum_{n = 0}^{\infty} \frac{r^n}{n^{n/\vartheta}}$$
for all $r \geqslant 0$. For fixed $r > 0$ the expression
$$\frac{r^t}{t^{t/\vartheta}}$$
attains its maximum at $t = r^{\vartheta}/e$, and the maximum is
$$\exp \biggl(\frac{r^{\vartheta}}{e}\log r - \frac{r^{\vartheta}}{e} \log \frac{r}{e^{1/\vartheta}}\biggr) = \exp \biggl(\frac{r^{\vartheta}}{e\vartheta}\biggr)\,.$$
We use this bound for the terms with index $n < (2r)^{\vartheta}$. For $n \geqslant (2r)^{\vartheta}$ we have
$$\biggl(\frac{r}{n^{1/\vartheta}}\biggr)^n \leqslant \frac{1}{2^n}$$
and thus we obtain
\begin{align}
M(r) &\leqslant b_1\cdot\sum_{n = 0}^{\infty} \frac{r^n}{n^{n/\vartheta}} \\
&\leqslant b_1\biggl(\bigl(1 + (2r)^{\vartheta}\bigr)\exp \biggl(\frac{r^{\vartheta}}{e\vartheta}\biggr) + 2\biggr) \\
&\leqslant 4b_1(2r)^{\vartheta}\exp \biggl(\frac{r^{\vartheta}}{e\vartheta}\biggr)
\end{align}
for $r \geqslant 1/2$. This implies $\vartheta \geqslant \rho(f)$, as needed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Surface area of an oblate spheroid using gaussian quadrature I want to compute the surface area of an oblate spheroid using gaussian quadrature, the parametrization of the oblate spheroid is given by:
$$x = a \cdot \sin\theta \cdot \cos \phi \\
y = a \cdot \sin\theta \cdot \sin \phi \\
z = b \cdot \cos\theta $$
Where $b<a$, in order to compute this integral I am using a Gauss-legendre quadrature to compute the points and the weights for the integral for the first octant on the spheroid so $\theta = [0,\pi/2]$, $\phi = [0,\pi/2]$. So using the weights and the nodes of a Gauss-Legendre quadrature of order $n$ I can define the weights and the nodes in $\theta$ and $\phi$ by:
$$X_{\theta/\phi} = \frac12 \cdot(X_{\text{Gauss-Leg}} + 1)\cdot \frac{\pi}2 \\
W_{\theta/\phi} = \frac12\cdot W_{\text{Gauss-Leg}} \cdot\frac{\pi}2$$
So having this defined, I can compute the surface integral as:
$$\int_{0}^{\pi/2}\int_{0}^{\pi/2} dS = \sum\sum W_{\theta}\cdot W_{\phi}\cdot dS $$
Where I think $dS = a^2\cdot b \cdot\rho^2 \cdot \sin\theta \cdot d\theta \cdot d\phi$. I find a bit confusing the $\rho$ here but I have tried using the standard definition of $\rho$ as:
$$\rho = \sqrt{x^2 + y^2 + z^2} = \sqrt{a^2\cdot \sin^2\theta + b^2 \cdot \cos^2\theta} $$
So the expresion that I am using for solving this integral is:
$$\int_{0}^{\pi/2}\int_{0}^{\pi/2} dS = \sum\sum W_{\theta}\cdot W_{\phi}\cdot a^2\cdot b \cdot(a^2\cdot \sin^2\theta + b^2 \cdot \cos^2\theta) \cdot \sin\theta $$
So I am doing something wrong since the surface of the oblate spheroid can be computed using:
$$S = 2\pi \cdot \left(a^2 + \frac{b^2}{\sin(ae)} \ln\Bigl(\frac{1 + \sin(ae)}{\cos(ae)} \Bigr) \right)$$
with $ae = \arccos(b/a)$. And the obtained result is not the same than the analytical surface. I have made an small python script that computes both results and prints them:
import numpy as np
from scipy.special import roots_legendre
#Define a and b
b = 2.
a = 100.
#Compute the Weights and nodes
x_phi, w_phi = roots_legendre(150)
x_theta, w_theta = roots_legendre(100)
#Translate them
x_phi = 0.5 * (x_phi + 1.) * np.pi/2.
x_theta = 0.5 * (x_theta + 1.) * np.pi/2.
w_phi = 0.5 * w_phi * np.pi/2.
w_theta = 0.5 * w_theta * np.pi/2.
#Compute the integral
integral = 0
for i in range(len(x_phi)):
    for j in range(len(x_theta)):
        integral += w_phi[i] * w_theta[j] * a**2 * b * (a**2 * np.sin(x_theta[j])**2 + b**2 * np.cos(x_theta[j])**2) * np.sin(x_theta[j])
print("Estimated int: %f" %(8*integral))
ae = np.arccos(b/a)
surface = 2*np.pi*(a**2 + b**2/np.sin(ae) * np.log((1+np.sin(ae))/np.cos(ae)))
print("Real int: %f" %(surface))
So what am I doing wrong? (I have to mention that this is just a simple test, what I really want to do is to compute the surface integral of any arbitrary function on this spheroid)
| Even though you eventually got the right areal element you seem to have an unsteady method of deriving it. Since on the surface we have parameterized
$$\vec r=\langle x,y,z\rangle=\langle a\sin\theta\cos\phi,a\sin\theta\sin\phi,b\cos\theta\rangle$$
We can take the differential to get
$$d\vec r=\langle a\cos\theta\cos\phi,a\cos\theta\sin\phi,-b\sin\theta\rangle\,d\theta+\langle-a\sin\theta\sin\phi,a\sin\theta\cos\phi,0\rangle\,d\phi$$
We can form the cross product to get the areal element
$$\begin{align}d^2\vec A&=\pm\langle a\cos\theta\cos\phi,a\cos\theta\sin\phi,-b\sin\theta\rangle\,d\theta\times\langle-a\sin\theta\sin\phi,a\sin\theta\cos\phi,0\rangle\,d\phi\\
&=\pm\langle ab\sin^2\theta\cos\phi,ab\sin^2\theta\sin\phi,a^2\sin\theta\cos\theta\rangle\,d\theta\,d\phi\end{align}$$
And we could take the $+$ sign to get the outward normal. Then we can take its magnitude to get the scalar areal element:
$$d^2A=\lVert d^2\vec A\rVert=\sqrt{b^2\sin^2\theta+a^2\cos^2\theta}\,a\sin\theta\,d\theta\,d\phi$$
It's equivalent to the areal element you eventually arrived at but I don't see how you could have gotten such a convoluted expression for it.
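To make the corrected computation concrete, here is a self-contained sketch (my own, not the OP's script; the helper names `leggauss`, `spheroid_area_quadrature`, `spheroid_area_analytic` are mine). It computes Gauss-Legendre nodes and weights in pure Python via Newton iteration instead of `scipy.special.roots_legendre`, and, since the areal element $a\sin\theta\sqrt{b^2\sin^2\theta+a^2\cos^2\theta}\,d\theta\,d\phi$ does not depend on $\phi$, it carries out the $\phi$ integral analytically as a factor $2\pi$:

```python
import math

def leggauss(n):
    """Gauss-Legendre nodes and weights on [-1, 1] via Newton iteration."""
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))   # initial guess for i-th root
        for _ in range(100):
            p0, p1 = 1.0, x                   # P_0(x), P_1(x)
            for k in range(2, n + 1):         # three-term recurrence up to P_n
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)   # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x * x) * dp * dp))
    return xs, ws

def spheroid_area_quadrature(a, b, n=40):
    """Area of x = a sinθ cosφ, y = a sinθ sinφ, z = b cosθ.

    Uses dS = a sinθ sqrt(b² sin²θ + a² cos²θ) dθ dφ; the φ integral gives 2π.
    """
    nodes, weights = leggauss(n)
    total = 0.0
    for t, w in zip(nodes, weights):
        theta = 0.5 * (t + 1.0) * math.pi     # map [-1, 1] -> [0, π]
        f = a * math.sin(theta) * math.sqrt(
            b**2 * math.sin(theta)**2 + a**2 * math.cos(theta)**2)
        total += 0.5 * math.pi * w * f
    return 2.0 * math.pi * total

def spheroid_area_analytic(a, b):
    """Closed form for an oblate spheroid (b < a); e is the eccentricity."""
    e = math.sqrt(1.0 - (b / a) ** 2)
    return 2.0 * math.pi * a**2 + 2.0 * math.pi * b**2 / e * math.atanh(e)

print(spheroid_area_quadrature(2.0, 1.0))   # ≈ 34.6875
print(spheroid_area_analytic(2.0, 1.0))     # same value
```

The analytic formula here is the question's formula rewritten with $\operatorname{artanh}$, using $\sin(ae)=e$ and $\cos(ae)=b/a$.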
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Sum (sigma) notation disparity This might be a stupid question, but I've googled it and I can't find an answer that I "trust". I am dealing with PCA (I guess that's not relevant, but just in case it is) and I am seeing a lot of Sigma notation that I'm not used to. My whole life I've always seen the following:
$\sum_{i=1}^{10}$
But now I am seeing this:
$\sum_{i}$
Does it mean exactly the same, supposing it is known that n=10 ??
| Basically, yes. It means that the index you are summing over is $i$, and presumably the domain for $i$ is stated elsewhere.
You might also see things like
$$ \sum_{a\in A} $$
or
$$ \int_A $$
etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Given a recurrence formula, evaluate $\lim\limits_{n\to \infty} n^2 x_n^3$
Define a sequence $(x_n)_{n\geq 0}$ with a fixed initial term $x_0 > 0$ such that:
$$x_0 + x_1+\ldots+x_n=\frac{1}{\sqrt{x_{n+1}}}$$
Evaluate
$$\lim_{n\to \infty} n^2 x_{n}^3$$
My attempt: I should define a new sequence $s_n = \displaystyle\sum_{i = 0}^nx_i$ with the recurrence formula:
$$s_{n+1} = s_n+\frac{1}{s_n^2}$$
$(s_n)_{n\geq 0}$ is increasing and divergent, and I want the limit of:
$$n^2x_n^3 = \frac{n^2}{s_{n-1}^6}$$
So, I can look instead for the limit:
$$\lim_{n\to \infty} \frac{s^3_n}{n}$$
For $s_n^3$, the recurrence formula gives
$$s_{n+1}^3 = \left(\frac{1+s_n^3}{s_n^2}\right)^3$$
Looking at the function $f:(0, \infty) \to (0, \infty),\ f(x) = \dfrac{(x+1)^3}{x^2}$ this is increasing from a certain point forward and it feels that I should squeeze it between $3n$ and $3n+\text{something negligible}$, but I can't give a solid argument for this.
| Start with
$$
x_n=\left(\sum_{k=0}^{n-1}x_k\right)^{-2}\tag1
$$
Let $x_n=u_n^{-2}$, then we get
$$
u_n=\sum_{k=0}^{n-1}u_k^{-2}\tag2
$$
From $(2)$, we get
$$
u_{n+1}-u_n=u_n^{-2}\tag3
$$
Equation $(3)$ indicates that $u_n$ is increasing. If $u_n$ were bounded above, it would approach a finite limit, $\bar u$, and then we would have $\bar u=\bar u+\bar u^{-2}$, which is impossible. Thus, $u_n\to\infty$.
Furthermore,
$$
\begin{align}
u_{n+1}^3-u_n^3
&=\left(u_{n+1}^2+u_{n+1}u_n+u_n^2\right)\left(u_{n+1}-u_n\right)\\
&=\left(\frac{u_{n+1}}{u_n}\right)^2+\frac{u_{n+1}}{u_n}+1\\
&=\left(1+u_n^{-3}\right)^2+\left(1+u_n^{-3}\right)+1\tag4
\end{align}
$$
Equation $(4)$ and Stolz–Cesàro then says that
$$
\lim_{n\to\infty}\frac{u_n^3}{n}=3\tag5
$$
which is equivalent to
$$
\lim_{n\to\infty}n^2x_n^3=\frac19\tag6
$$
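A direct simulation of the recurrence (my addition) is consistent with $(6)$:

```python
x0 = 1.0
s = x0                      # s_n = x_0 + ... + x_n, so x_{n+1} = s_n^{-2}
n_steps = 200000
for n in range(1, n_steps + 1):
    x = s ** -2             # x_n
    s += x                  # s_n
val = n_steps ** 2 * x ** 3
print(val)                  # ≈ 1/9 ≈ 0.111...
```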
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Counting (Combinations) Suppose there are 10 people in a club: A, B, C, D, E, F, G, H, I, and J. They decide to go to a restaurant for a club outing, but there isn't one table that can seat all of them, so they decide to take one table that seats four people and two tables that seat three people each. Based on this, what is the probability that Person G and Person J sit at the same table?
* Ways for both Person G and Person J to sit at the 4-person table: (4 choose 2) = 6
* Ways for both Person G and Person J to sit at one 3-person table: (3 choose 2) = 3
* Ways for both Person G and Person J to sit at the other 3-person table: (3 choose 2) = 3
Total of $6+3+3=12$ ways for Person G and Person J to both sit at the same table.
I've determined the number of ways that Person G and Person J can sit together at each table, but I don't know how to properly count the total different ways (i.e., the denominator) to determine the probability.
| By conditional probability, we have
$$P(J\text{ is at $G$’s table}) =
P(G\text{ at $4$-table})P(J\text{ at $G$’s table} | G\text{ at $4$-table}) + P(G\text{ at a $3$-table})P(J\text{ at $G$’s table}| G\text{ at a $3$-table})$$
$$= \left(\frac4{10}\right)\left(\frac39\right)+\left(\frac6{10}\right)\left(\frac29\right) = \frac2{15} + \frac2{15} = \frac4{15}.$$
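A brute-force enumeration (my addition) confirms $\frac4{15}$. It fixes which 3-person table is "first", which overcounts numerator and denominator by the same factor, so the probability is unaffected:

```python
from fractions import Fraction
from itertools import combinations

people = "ABCDEFGHIJ"
total = favorable = 0
# choose the 4-person table, then one 3-person table; the rest is the other
for t4 in combinations(people, 4):
    rest = [p for p in people if p not in t4]
    for t3 in combinations(rest, 3):
        other = [p for p in rest if p not in t3]
        total += 1
        tables = [set(t4), set(t3), set(other)]
        if any({"G", "J"} <= t for t in tables):
            favorable += 1

print(favorable, total, Fraction(favorable, total))  # 1120 4200 4/15
```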
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
complete square of summation of odd numbers Suppose we have an integer (which may be odd or even), say for example $x=20$, and a known odd number, say $y=39$.
Is there any formula that tells us how many consecutive odd numbers starting from $y$ must be added to $x$ so that the result is a perfect square?
In our case, the perfect square is $perfect=100$, namely $perfect=20+39+41$,
so the answer is $2$ odd numbers (including the starting one).
Any suggestions?
| Not an answer, but too much for a comment:
The sum of all the odd numbers below $y$ is $\left(\frac{y-1}2\right)^2$. If you add $n$ odd numbers starting with $y$ the greatest is $y+2n-2$ and the sum of all the odd numbers up to that is $\left(\frac{y+2n-1}2\right)^2$. You are then looking for
$$\left(\frac{y+2n-1}2\right)^2-\left(\frac{y-1}2\right)^2+x=n(y+n-1)+x$$
to be a perfect square. I don't have any good ideas for that in the general case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proving that there is no negative integer with $n^2+n<0$ I'm trying to prove the statement: There is no negative integer with $n^2+n<0$. For this, I went with proving it by contradiction. Here's what I have so far:
Let us assume there is a negative integer $n$ with $n^2+n<0$. Since $n$ is negative, this means that $n<0$. Then, $n^2\ge 0$, where $n^2=n*n$. However, this is a contradiction because $n^2\ge 0$ and we assumed $n^2+n<0$, $n^2+n$ will at least be $0$ and $0 \nless 0$.
I'm not sure if my proof is correct or even going in the right direction, I think my negated statement may be wrong, which makes my proof wrong too. Any feedback or help is appreciated.
| In your argument it is not clear how you conclude that $n^{2}+n $ is at least $0$.
Let $m =-n$. Then $m >0$ and $n^{2}+n <0$ gives $m^{2} <m$. But then you can cancel $m$ and get $m <1$. There is no positive integer $m$ such that $m <1$, so we have a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3527937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Every compact Hausdorff space can be expressed as a disjoint union of finitely many open sets. Let $X$ be a compact Hausdorff space.
Can we express it as a disjoint union of finitely many open sets?
I got a proof for that.
Since $X$ is compact, there exists a finite collection of open sets $\{W_{i}\}_{i=1}^{n}$ that covers $X$. Construct disjoint open sets $W_{i}'$ for $i=1, \cdots,n$ as follows:
\begin{eqnarray*}
W_{1}'= & W_{1} \\
W_{k}'= & W_{k}\setminus\displaystyle \bigcup_{j=1}^{k-1}\overline{W_{j}'}
\end{eqnarray*}
Then by the construction $W_{i}'$ are disjoint.
If $x$ belongs to exactly one $W_{i}$, then $x$ belongs to that $W_{i}'$. Now let $x$ be an element of more than one $W_{i}$, and let $i$ be the least index for which $x \in W_{i}$; then $x$ belongs to that $W_{i}'$ and does not belong to any other $W_{j}'$. Thus the collection $\{ W_{i}' \}_{i=1}^{n}$ covers $X$.
Is this proof correct ? If it is other questions are immaterial. I know that I have not used Hausdorff condition in the proof.
Can we have a counter example ?
Can we apply some conditions on compact Hausdorff space so that it can be expressed as disjoint union of open sets ?
| You can always do it with $n=1$ and $W_1=X$: if $X$ is connected, this is the one and only possibility. Otherwise, your procedure fails in general, your error lying in the fact that $\{W_i'\}_{i=1}^{n}$ may not be a covering. Also, the entire idea of a non-trivial case $n\ge2$ fails catastrophically: again, see the concept of connected space, and related concepts such as connected components of a topological space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3528099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
What is the difference between equality and logical identity? I'm reading the book Introduction to Logic and to the Methodology of the Deductive Sciences by Alfred Tarski and he states:
"In this book we consider the notion of equality among numbers always as a special case of the general concept of logical identity. One should add, however, that there have been mathematicians who —as opposed to the standpoint adopted here— did not identify the symbol "=" occurring in arithmetic with the symbol of logical identity; they did not consider equal numbers to be necessarily identical, and therefore looked upon the notion of equality among numbers as a specifically arithmetical concept."
So what is the difference between "=" and "logical identity"?
| I expect that Tarski means by "logical identity" something close to "literally the same thing", which could be applied to numbers in arithmetic, sets in set theory, or anything else.
Now imagine a mathematician X (not Tarski) who might say something like "a rational number is a pair of integers called numerator and denominator", so that "2/4" and "1/2" are different "rational numbers" because they have different numerators and denominators. Then X would still write "2/4=1/2" (they're a mathematician, after all), but might say the reason is that the criteria for rational numbers "a/b" and "c/d" to be equal in the sense of "=" is the arithmetic property that $a*d=b*c$ (for whatever equality means for integers).
Tarski is saying "I'm not doing that mathematician X stuff. There's no arithmetic in my intended meaning of '='."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3528250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
If $|z-\frac 3z|=2$, Find the greatest value of $|z|$ $$|z^2-3|=2|z|$$
And $$|z^2-3|\le |z|^2+3$$
$$2|z|\le |z|^2+3$$ which isn’t a valid equation. What’s going wrong?
| Your inequality $2|z|\le|z|^2+3$ is very much valid. For example, $|z|=2$ satisfies it.
The maximum value of $|z|$ that allows this inequality is the maximum root of the corresponding equality $2|z|=|z|^2+3$, which can be solved by the usual methods for quadratic equations to give $|z|=3$.
Thus $|z|=3$ is an upper bound. To prove that it is the sharp upper bound, meaning the true maximum, simply try $z=3$ itself in your original equation $|z-(3/z)|=2$.
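As a quick numerical sanity check (not part of the argument above), $z=3$ indeed satisfies the original equation, so the bound is attained:

```python
# check that z = 3 satisfies |z - 3/z| = 2, so the upper bound |z| = 3 is sharp
z = 3 + 0j
value = abs(z - 3 / z)
print(value)  # 2.0
assert abs(value - 2.0) < 1e-12 and abs(z) == 3.0
```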
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3528430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Calculating $\lim_{n \to \infty}\frac{1*2+2*3+...+n(n+1)}{1*3+3*5+...+(2n-1)(2n+1)} $ This might seem pretty straightforward, as both the numerator and the denominator seem to resemble some well-known sums, however the numerator is the only one which can actually be rewritten as $\frac{n(n+1)(n+2)}{3}$. Could you point me to the right direction as to how I'd be able to continue the exercise in this manner, or is there perhaps a different way of dealing with this limit?
| Method one
We can write the summation formula for both the numerator and denominator which is not my favorite :)
Method two
We use calculus (and the definition of integral) to find the limit.
$$
\lim_{n\to\infty}\frac{1*2+2*3+...+n(n+1)}{1*3+3*5+...+(2n-1)(2n+1)}
\\=
\lim_{n\to\infty}\frac{\sum_{i=1}^ni(i+1)}{\sum_{i=1}^n(4i^2-1)}
\\=
\lim_{n\to\infty}\frac{\sum_{i=1}^ni^2+\sum_{i=1}^ni}{-n+\sum_{i=1}^n4i^2}
\\=
\lim_{n\to\infty}\frac{{1\over n}\sum_{i=1}^n\left({i\over n}\right)^2+{1\over n^2}\sum_{i=1}^n\left({i\over n}\right)}{-{1\over n^2}+{1\over n}\sum_{i=1}^n4\left({i\over n}\right)^2}
\\=
\frac{\int_0^1 x^2dx}{\int_0^1 4x^2dx}
\\={1\over 4}
$$
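The answer $1/4$ can also be sanity-checked numerically with partial sums (a quick script, not part of the derivation):

```python
def ratio(n):
    # partial sums of the numerator and denominator of the original limit
    num = sum(i * (i + 1) for i in range(1, n + 1))
    den = sum((2 * i - 1) * (2 * i + 1) for i in range(1, n + 1))
    return num / den

# the ratio approaches 1/4 as n grows
assert abs(ratio(10 ** 5) - 0.25) < 1e-4
```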
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3528528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Equivalence classes and subgroups of bijections of $S_3$
Consider the group $S_3 = Sym({1,2,3})$, which we recall is the group of all bijections from $\{1,2,3\}$ to $\{1,2,3\}$ and where the group operation is composition. We have seen that this group has order 6 and is not abelian. We let
$S_3 = \{1, a_1, a_2, a_3, a_4, a_5\}$.
For instance, $a_5$ is the function $a_5 : \{1,2,3\} \to \{1,2,3\}$ with $a_5(1) = 2, a_5(2) = 3,$ and $a_5(3) = 1$.
(a) Explain in one or two sentences why $H = \{1,a_1\}$ is a subgroup of $S_3$.
(b) Recall that there is an equivalence relation ~ defined in $S_3$ by $u$ ~ $v$ iff $uv^{-1} \in H$. Find all the distinct equivalence classes for ~.
(c)Which of the obtained equivalence classes are subgroups of $S_3$?
So for (a), I know that composing any element $x$ of the group with the identity element yields $x$, so I know that $1$ belongs in the subset $\{1, a_1\}$, and the same can be applied for $a_1$. But I do not know how to summarize that into a sentence.
For (b) and (c), I'm having trouble exactly what equivalence classes are... Are they just subsets that partition the group in 2?
| Let's write out the elements of $S_3$. They are $\{(12), (23), (13), (123), (132),e\}$. (In the question's notation they are $a_1=(12), a_2=(23), a_3=(13), a_4=(132)$ and $a_5=(123)$.)
Now, it's clear that there are $3$ equivalence classes. They are $[(12)]=\{e, (12)\}, [(13)]=\{(123),(13)\}$ and $[(23)]=\{(132), (23)\}$. Only the first is a subgroup.
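For readers who want to verify this by brute force, here is a short script (an illustration, not part of the answer) that computes the classes of $u \sim v \iff uv^{-1}\in H$ directly and finds exactly three classes of size two:

```python
from itertools import permutations

def compose(u, v):
    # (u o v)(x) = u(v(x)); permutations of {0, 1, 2} as tuples of images
    return tuple(u[v[x]] for x in range(3))

def inverse(u):
    w = [0, 0, 0]
    for x in range(3):
        w[u[x]] = x
    return tuple(w)

e, a1 = (0, 1, 2), (1, 0, 2)   # identity and the transposition (12)
H = {e, a1}

# u ~ v  iff  u v^{-1} in H; collect the distinct equivalence classes
classes = {frozenset(v for v in permutations(range(3))
                     if compose(u, inverse(v)) in H)
           for u in permutations(range(3))}

assert len(classes) == 3 and all(len(c) == 2 for c in classes)
assert frozenset(H) in classes   # the class of the identity is H itself
```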
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3529013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Definition of a limit between functions in metric spaces: Rudin vs Amann-Escher - excluding a limit point Amann-Escher in Analysis I define a limit of a function between metric spaces as follows:
Let $X,Y$ be metric spaces, $D \subseteq X$ and $a \in X$ a limit point of $D$. Let $f\colon D\to Y$ be a function. We write $\lim_{x\to a} f(x) = y$ if for any sequence $(x_n)$ in $D$ which converges to $a$ in $X$, the sequence $(f(x_n))$ converges to $y$ in $Y$.
They then prove that this is equivalent to the following:
For each neighborhood $V$ of $y$ in $Y$, there is a neighborhood $U$ of $a$ such that $f(U\cap D) \subseteq V$.
Rudin's definitions are slightly different:
Let $X,Y$ be metric spaces, $D \subseteq X$ and $a \in X$ a limit point of $D$. Let $f\colon D\to Y$ be a function.
We write $\lim_{x\to a} f(x) = y$ if for any $\epsilon > 0$ there exists $\delta > 0$ such that $d_Y(f(x),y) < \epsilon$ for all $x \in D$ such that $0 < d_X(x,a) < \delta$.
As far as Amann-Escher definition goes, this would be equivalent to the following:
For every neighborhood $V$ of $y$ in $Y$, there is a neighborhood $U$ of $a$ such that $f(U\cap (D\setminus\{a\})) \subseteq V$.
For the sequential version of a limit definition, Rudin requires that $(f(x_n))$ converges to $y$ not for all $(x_n)$ in $D$ which converge to $a$, but for all $(x_n)$ in $D\setminus\{a\}$ which converge to $a$.
I understand that Rudin's version is standard. But what is really wrong with Amann-Escher's version that it can't be used? Why do we need to exclude $a$?
| In the Amann-Escher definition, if $x_n \to a$ (and $a \in D$), then the sequence $(x_1,a,x_2,a,x_3,a,\dots)$ also converges to $a$. So this forces the limit of $f(x_n)$ to be $f(a)$, and the definition becomes a definition of continuity at $a$ rather than of existence of the limit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3529140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there a closed form for the sequences mentioned in the body of this question? Is there any hope for closed forms for expressions like: $$\sum_{k=0}^{\infty}\frac{x^k}{(3k+i)!}\text{ and/or }\sum_{k=0}^{\infty}\frac{kx^k}{(3k+i)!}$$where $i\in\{0,1,2\}$?
I am interested because I am trying to find an answer to this question.
Thank you in advance and sorry if this is a duplicate.
| Considering
$$f_i=\sum_{k=0}^{\infty}\frac{x^k}{(3k+i)!}\qquad \text{and} \qquad g_i=\sum_{k=0}^{\infty}\frac{kx^k}{(3k+i)!}$$
a CAS gives
$$f_i=\frac{1}{i!}\,\,
_1F_3\left(1;\frac{i+1}{3},\frac{i+2}{3},\frac{i+3}{3};\frac{x}{27}\right)$$
$$g_i=\frac{x}{(i+3)!}\,\,
_1F_3\left(2;\frac{i+4}{3},\frac{i+5}{3},\frac{i+6}{3};\frac{x}{27}\right)$$
Only for the specific cases you asked for $(i=0,1,2)$, we can write the results in a nice form, defining
$$F_i=3 e^{\frac{t}{2 \sqrt{3}}}\left(\frac{t}{\sqrt3}\right)^{i}\,f_i-e^{\frac{\sqrt{3} }{2}t}\qquad \text{where} \qquad \color{red}{t=\sqrt{3} \sqrt[3]{x}}$$
$$F_0=2 \cos \left(\frac{t}{2}\right)\qquad
F_1=-2 \sin \left(\frac{\pi}{6} -\frac{t}{2}\right)\qquad F_2=-2 \sin \left(\frac{\pi}{6} +\frac{t}{2}\right)$$
It seems that for $f_i$ there is no problem for the expansion for any $i$. For $g_i$, it does not seem to be the same story at all except for $i=0$ (in such a case $g_0=x f_0'$).
$$g_0=\frac{t}{9 \sqrt{3}}e^{\frac{t}{\sqrt{3}}}\left(1-2 e^{-\frac{\sqrt{3}}{2}t} \sin \left(\frac{\pi}{6} +\frac{t}{2}
\right)\right)$$
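For the $i=0$ case one can cross-check numerically against the classical roots-of-unity filter identity $\sum_{k\ge0}\frac{u^{3k}}{(3k)!}=\frac13\left(e^u+2e^{-u/2}\cos\frac{\sqrt3\,u}{2}\right)$ with $u=\sqrt[3]{x}$ (a standard identity; the check below is not part of the answer):

```python
import math

def f0_series(x, terms=40):
    # partial sum of sum_k x^k / (3k)!
    total, power, fact = 0.0, 1.0, 1.0   # fact holds (3k)!
    for k in range(terms):
        total += power / fact
        power *= x
        m = 3 * k
        fact *= (m + 1) * (m + 2) * (m + 3)
    return total

def f0_closed(x):
    u = x ** (1.0 / 3.0)
    return (math.exp(u) + 2 * math.exp(-u / 2) * math.cos(math.sqrt(3) * u / 2)) / 3

for x in (0.5, 2.0, 7.0):
    assert abs(f0_series(x) - f0_closed(x)) < 1e-9
```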
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3529254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Better method for solving ODE $yy''+ (y')^2-(y')^3\ln (y)=0$ What's a better way to solve this equation?
$$yy''+ (y')^2-(y')^3\ln (y)=0 $$
so far, I've tried:
*
*reducing the order of the equation ($p=y'$): $yp'+ p^2-p^3\ln(y)=0 $
*dividing everything by $p^3$: $y\frac{p'}{p^3} + \frac{1}{p}=\ln(y)$
*solving the homogeneous equation: $y\frac{p'}{p^3} + \frac{1}{p}=0 \implies p=(\frac{1}{3\ln(cy)})^\frac{1}{3}$
but using constant variation to solve this from this point on seems unnecessarily complicated. Is there a better method I'm not seeing?
| As I stated in the comments, use the fact that
$$y y'' + y'^2 = \frac{d}{dx} (y y')$$
Then
$$\frac{d}{dx} (y y') = y'^3 \log{y} $$
This may be rewritten as
$$\frac{d(y y')}{(y y')^2} = \frac{\log{y}}{y^2} y' dx = \frac{\log{y}}{y^2} dy$$
This equation may be integrated to produce
$$-\frac1{y y'} + C_1 = -\frac{1+\log{y}}{y} $$
Rearrange to get
$$y' = \frac1{1+\log{y}-C_1 y}$$
which may be expressed as
$$(1+\log{y}-C_1 y) dy = dx$$
which may be integrated again to produce
$$x = y \log{y} - \frac12 C_1 y^2 + C_2$$
which is about as far as I can take this.
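The implicit solution can be verified directly: along $x = y\log y - \tfrac12 C_1 y^2 + C_2$ one has $y' = 1/s$ with $s = 1+\log y - C_1 y$, and $y'' = \frac{d(y')}{dy}\,y'$ by the chain rule, so the residual of the original ODE should vanish identically (a quick check, not part of the derivation):

```python
import math

def residual(y, C1):
    # along x = y*log(y) - C1*y^2/2 + C2 we have dx/dy = 1 + log(y) - C1*y =: s
    s = 1 + math.log(y) - C1 * y
    yp = 1 / s                         # y' = dy/dx
    ypp = -(1 / y - C1) / s ** 2 * yp  # y'' = d(y')/dy * y'
    return y * ypp + yp ** 2 - yp ** 3 * math.log(y)

for y in (0.5, 2.0, 5.0):
    for C1 in (-1.0, 0.0, 0.7):
        if abs(1 + math.log(y) - C1 * y) > 0.1:   # stay away from the singular locus
            assert abs(residual(y, C1)) < 1e-9
```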
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3529705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove Quotient Ring is Isomorphic to Product of Fields
Problem: (a) Show $\mathbb{Z}[i]/\langle 5\rangle$ is a product of two fields. (b) Show $\mathbb{Z}[i]/\langle 3\rangle$ is a field. (c) Show $\mathbb{Z}[i]/\langle 2\rangle$ is neither a field nor a product of two fields.
My Attempt: Now at first glance, it is clear (a) and (c) will not be fields since both $5$ and $2$ are not irreducible in $\mathbb{Z}[i]$ and thus will not generate maximal ideals, and hence the quotient ring cannot be a field. It can also be shown that $\langle 3\rangle$ forms a maximal ideal, thus (b) is a field.
What I am having difficultly with is showing (a) is a product of 2 fields and that (c) is not a product of 2 fields.
For (a), I initially thought of the homomorphism, $\phi: \mathbb{Z}[i] \rightarrow \mathbb{Z}_5 \times \mathbb{Z}_5$ given by $a+bi \rightarrow (a \mod (5), b \mod (5))$ and then thought to show $\ker (\phi ) = \langle 5\rangle$ to prove $\mathbb{Z}[i]/\langle 5\rangle \cong \mathbb{Z}_5 \times \mathbb{Z}_5$. However I soon realised that $\phi$ is not a ring homomorphism since: $$\phi(a+bi)\star \phi(c+di) \neq \phi((a+bi)\cdot (c+di))$$
Question: Could anyone please point me in the right direction for parts (a) and (c).
Also more generally, is there any statement that can be made regarding if $\mathbb{Z}[i]/\langle a\rangle$ (where $a$ is an integer) can be written as some product of field?
Thank you.
| Hint:
a) In $\mathbf Z[i]$, one has $5=(2+i)(2-i)$ and these factors are irreducible since their norm is equal to $5$. Furthermore, $\mathbf Z[i]$ is a P.I.D., so each generates a maximal ideal, and you can apply the Chinese remainder theorem:
$$\mathbf Z[i]/5\mathbf Z[i]\simeq \mathbf Z[i]/(2+i)\times\mathbf Z[i]/(2-i).$$
c) $$\mathbf Z[i]/2\mathbf Z[i]\simeq\mathbf Z[X]/(X^2+1)\!\Bigm/\!\!2\cdot\mathbf Z[X]/(X^2+1)\\\simeq \mathbf Z/2\mathbf Z[X]/(X^2+1)\simeq \mathbf Z/2\mathbf Z[X]/\bigl((X+1)^2\bigr).$$
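For part (a) the isomorphism can even be made explicit: since $2^2\equiv-1\pmod 5$, the map $a+bi\mapsto(a+2b\bmod 5,\;a-2b\bmod 5)$ is a candidate ring isomorphism $\mathbf Z[i]/5\to\mathbf F_5\times\mathbf F_5$, which can be verified by brute force (an illustration, not part of the hint above):

```python
def phi(a, b):
    # a + b*i  ->  (a + 2b, a - 2b) mod 5, using i -> +-2 since 2^2 = -1 (mod 5)
    return ((a + 2 * b) % 5, (a - 2 * b) % 5)

elements = [(a, b) for a in range(5) for b in range(5)]

# bijective onto Z/5 x Z/5
assert len({phi(a, b) for a, b in elements}) == 25

# additive and multiplicative: (a+bi)(c+di) = (ac - bd) + (ad + bc)i
for a, b in elements:
    for c, d in elements:
        p, q = phi(a, b)
        r, s = phi(c, d)
        assert phi((a + c) % 5, (b + d) % 5) == ((p + r) % 5, (q + s) % 5)
        assert phi((a * c - b * d) % 5, (a * d + b * c) % 5) == ((p * r) % 5, (q * s) % 5)
```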
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3529873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing that $\frac{ 1- \sin\frac{5\pi}{18}}{\sqrt{3} \sin \frac{5\pi}{18}}= \tan\frac{\pi}{18} $ It's easy to verify on WolframAlpha, but I have difficulty deriving it.
It's easy to see
$$
\tan\left(\frac{\pi}{18}\right)=\frac{\sqrt{3}-\tan(5/18 \pi)}{1+\sqrt{3}\tan(5/18 \pi)}
$$
| We need to prove that $$(1-\sin50^{\circ})\cos10^{\circ}=2\sin50^{\circ}\sin10^{\circ}\cos30^{\circ}$$ or
$$2\cos10^{\circ}-\sin60^{\circ}-\sin40^{\circ}=2(\cos40^{\circ}-\cos60^{\circ})\cos30^{\circ}$$ or
$$2\cos10^{\circ}-\sin40^{\circ}=2\cos40^{\circ}\cos30^{\circ}$$ or
$$2\cos10^{\circ}-\sin40^{\circ}=\cos70^{\circ}+\cos10^{\circ}$$ or
$$\cos10^{\circ}=\cos70^{\circ}+\cos50^{\circ}.$$
Can you end it now?
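Both the original identity and the final reduction are easy to confirm numerically (a sanity check, not a proof):

```python
import math

deg = math.pi / 180

# the original identity: (1 - sin 50°) / (sqrt(3) * sin 50°) = tan 10°
lhs = (1 - math.sin(50 * deg)) / (math.sqrt(3) * math.sin(50 * deg))
rhs = math.tan(10 * deg)
assert abs(lhs - rhs) < 1e-12

# the final reduction: cos 10° = cos 70° + cos 50°
assert abs(math.cos(10 * deg) - math.cos(70 * deg) - math.cos(50 * deg)) < 1e-12
```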
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Evaluate $ \lim_{n\to\infty} \frac{n^n}{3^n} a_n$ For
$$ f(z)= \sum_{n=0}^{\infty } a_n z^n $$
with
$$ |f(z)| \leq M e^{|z|}, $$
calculate
$$ \lim_{n\to\infty} \frac{n^n}{3^n} a_n$$
Tried several methods but I don't know how to approach this correctly, thanks for your time.
| This is for $f$ being an entire function. By Cauchy's formula
$$
a_n = \frac{1}{{2\pi i}}\oint_{\left| t \right| = r} {\frac{{f(t)}}{{t^{n + 1} }}dt} ,
$$
with some $r>0$. Thus,
$$
\left| {a_n } \right| \le \frac{1}{{2\pi }}\oint_{\left| t \right| = r} {\frac{{\left| {f(t)} \right|}}{{\left| t \right|^{n + 1} }}\left| {dt} \right|} \le \frac{1}{{2\pi }}\oint_{\left| t \right| = r} {\frac{{Me^{\left| t \right|} }}{{\left| t \right|^{n + 1} }}\left| {dt} \right|} = \frac{1}{{2\pi }}\oint_{\left| t \right| = r} {\frac{{Me^r }}{{r^{n + 1} }}\left| {dt} \right|} = M\frac{{e^r }}{{r^n }}.
$$
Since $f$ is entire, $r$ can take any positive value. For $a_n$, take $r=n$. Hence,
$$
\frac{{n^n }}{{3^n }}\left| {a_n } \right| \le \frac{{n^n }}{{3^n }}M\frac{{e^n }}{{n^n }} = M\left( {\frac{e}{3}} \right)^n \to 0,
$$
since $e/3<1$.
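As a concrete illustration (not part of the proof), take $f(z)=e^z$, so $M=1$ and $a_n = 1/n!$; the bound $M(e/3)^n$ from the last display indeed dominates $n^n a_n/3^n$, and both tend to $0$:

```python
import math

# f(z) = e^z satisfies |f(z)| <= e^{|z|} with M = 1 and has a_n = 1/n!
for n in (5, 10, 20, 40):
    value = n ** n / (3 ** n * math.factorial(n))
    assert value < (math.e / 3) ** n   # the Cauchy-estimate bound from the answer
```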
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof review/explanation: Let $\text{char}(\mathbb{K}) = 0$. It then follows that $AB-BA \ne 1 \, (A, B \in \Bbb K^{n \times n})$
Let $\text{char}(\mathbb{K}) = 0$. It then follows that $AB-BA \ne 1 \, (A, B \in \Bbb K^{n \times n})$.
I first showed that $\text{trace}(AB) = \text{trace}(BA)$ for every $A, B \in \Bbb K^{n \times n}$:
Let $A=(a_{ij}), B=(b_{ij}) \in \Bbb K^{n \times n}$, so $AB=(c_{ij}) \in \Bbb K^{n \times n}$ where $c_{ij}= \sum_{k=1}^na_{ik}b_{kj}$. One now gets:
$$\text{trace}(AB) = \sum_{i=1}^n \sum_{k=1}^n a_{ik}b_{ki} = \sum_{k=1}^n \sum_{i=1}^n a_{ik}b_{ki} = \sum_{k=1}^n \sum_{i=1}^n b_{ki}a_{ik} = \text{trace}(BA)\tag{*}.$$
Now, assume $AB-BA=1$. Consider
$$AB-BA=1 \Rightarrow \text{trace}(AB)-\text{trace}(BA)=\text{trace}(1) \xrightarrow{(*)} \\ 0 = n,$$
which is a contradiction, so the assumption $AB-BA=1$ was wrong.
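The key identity (*) is easy to spot-check on random integer matrices (a quick script, not part of the proof):

```python
import random

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
    B = [[random.randint(-9, 9) for _ in range(3)] for _ in range(3)]
    assert trace(matmul(A, B)) == trace(matmul(B, A))
```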
My question is: Where exactly do you need $\text{char}(\Bbb K)=0$ in this proof?
Thanks in advance!
| If $\text{char}(\mathbb{K})=p$ and $p \mid n$, then you get $0=0$, which doesn't give you a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is $\det(I + A^{50}) = 4$ here? I’m trying to understand the reasoning behind the following:
Let $A \in \mathbb{R}^{3\times3}$ with eigenvalues $1$, $-1$, $0$. What is $\det \left(I + A^{50} \right)$?
The given answer is 4.
A is obviously singular due to eigenvalue 0. Moreover, spectral decomposition of $A$ exists:
$$
A = T diag(1, -1, 0) T^{-1}\\
A^{50} = T diag(1, -1, 0)^{50} T^{-1}
$$
But I can’t see why $\det(I + T diag(1, -1, 0)^{50}T^{-1})$ would be 4 either, due to the limited knowledge about $T$. Can somebody shed light on this for me?
| As you said, you can diagonalize $A=PDP^{-1}$ with $D=\operatorname{diag}(1,-1,0)$. Then $A^{50}=PD^{50}P^{-1}$. $D^{50}=\operatorname{diag}(1,1,0)$.
So,
$$\det(I+A^{50})=\det(P^{-1}(I+A^{50})P)=\det(I+D^{50})=\det(\operatorname{diag}(2,2,1))=4.$$
The first equality follows from the fact that determinants are invariant under change of basis. The second follows from distributing.
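One can also confirm this with exact rational arithmetic for a concrete change-of-basis matrix $P$ (an illustration; the particular $P$ below is an arbitrary invertible choice, and any invertible $P$ works):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

P    = [[F(1), F(1), F(0)], [F(0), F(1), F(1)], [F(1), F(0), F(1)]]
Pinv = [[F(1, 2), F(-1, 2), F(1, 2)],
        [F(1, 2), F(1, 2), F(-1, 2)],
        [F(-1, 2), F(1, 2), F(1, 2)]]
D    = [[F(1), F(0), F(0)], [F(0), F(-1), F(0)], [F(0), F(0), F(0)]]

A = matmul(matmul(P, D), Pinv)   # A has eigenvalues 1, -1, 0
M = A
for _ in range(49):              # M = A^50
    M = matmul(M, A)

IplusM = [[M[i][j] + (F(1) if i == j else F(0)) for j in range(3)] for i in range(3)]
print(det3(IplusM))              # 4
```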
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 1
} |
Finding a constant to make this function periodic So I have the function $$ f(x) = x^2 + b $$ which is $ 2\pi $ periodic for $ 0 \leq x < 2\pi $
If $ F $ is the antiderivative of $ f $ with $ F(0) = 1 $, for what value of $ b $ is $ F $ periodic?
So I found that $$ F(x) = \frac{1}{3}x^{3}+bx+1 $$ so in order to prove something is periodic I need to show that:
$$ F(x) = F(x+2\pi) $$
So I plugged in $ (x+2\pi) $ in $ F $:
$$ \frac{1}{3}x^{3}+bx+1 = \frac{1}{3}(x+2\pi)^{3}+b(x+2\pi)+1 $$
and then I solved for $ b $ to get:
$$ b = -x^{2}-2\pi x-\frac{4}{3}\pi^2 $$
$ b $ is a real constant, my question is how do I solve for this $ b $ to get a real constant?
| Note that $f(x)=x^2+b$ only in the interval $0\leq x<2\pi$. For all other $x$ we have to compute first the "fractional part mod $2\pi$ of $x$". In other words we have
$$f(x):=\left(x-2\pi\left\lfloor{x\over2\pi}\right\rfloor\right)^2+b\qquad(-\infty<x<\infty)\ .$$
The antiderivative $F$ is obtained by inserting this $f$ into
$$F(x):=1+\int_0^x f(t)\>dt\qquad(x\in{\mathbb R})\ ,$$
and $F$ is in a way staircase like again. For $F$ to be periodic we need $F(2\pi)=F(0)=1$, which amounts to the condition
$$\int_0^{2\pi}f(t)\>dt=\left({x^3\over3}+bx\right)\biggr|_0^{2\pi}=0\ ,$$
hence
$$b=-{4\pi^2\over3}\ .$$
This value of $b$ makes $F$ in fact periodic.
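Numerically (a quick check, not part of the argument): with this $b$, the integral of $f$ over one period vanishes, which is exactly the periodicity condition $F(2\pi)=F(0)$:

```python
import math

b = -4 * math.pi ** 2 / 3

# F(2*pi) - F(0) = integral of x^2 + b over [0, 2*pi] must vanish
integral = (2 * math.pi) ** 3 / 3 + 2 * math.pi * b
assert abs(integral) < 1e-9
```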
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Cubic equation with three distinct roots Let
$$g(x)=ax^3+bx^2+cx+d$$
be a polynomial of degree $3$ with $a,b,c,d\in\mathbb{R}$ and $a>0$. Suppose that the cubic equation $g(x)=0$ has three distinct real roots, i.e. the discriminant $\Delta>0$.
Let $f(x)=\frac{a}{4}x^4+\frac{b}{3}x^3+\frac{c}{2}x^2+dx$. Can we express $M:=\min(f(r_1),f(r_2),f(r_3))$ in terms of the parameters $a,b,c,d$ without directly inserting $r_1$, $r_2$ and $r_3$ (given by solution formula) into $f(x)$ and compare them? In other words, we have explicit solution as shown, for instance in here. Can we give conditions on $a,b,c,d$ and thus determine which $r_i$ is the one that satisfies $M$?
Any reference, suggestion, idea, or comment is welcome. Thank you!
| So, (lots of this is on the Wikipedia page linked in above comments), if we set $$x=t-\frac{b}{3a} ..[1]$$ then $g(x)$ becomes $G(t)=t^3+pt+q$ where $p=\frac{3ac-b^2}{3a^2}$ and $q=\frac{2b^3-9abc+27a^2d}{27a^3}$.
Provided $4p^3+27q^2<0$, it has three real roots given by setting $k=0, 1, 2$ in $$t_k=2\sqrt{\frac{-p}{3}}\cos\left(\frac13\cos^{-1}\left(\frac{3q}{2p}\sqrt{\frac{3}{-p}}\right)-\frac{2\pi k}{3}\right) ...[2]$$
Then the integral of G(t) becomes $$F(t)=\frac{t^4}{4}+\frac{pt^2}{2}+qt$$ which will only differ from f(x) by a constant. Assuming $a>0$, any deduction about which root gives the minimum value for F(t) will translate to the same conclusion for f(x).
So it remains to evaluate F(t) at each of the roots of G(t). As we are only concerned with the minimum value, we need only consider the lowest and highest root which are respectively given by $k=2$ and $k=0$ in [2]. (This can be established by recognising that if $\theta = \frac13\cos^{-1}\left(\frac{3q}{2p}\sqrt{\frac{3}{-p}}\right)$ then $0< \theta < \frac{\pi}{3}$).
By definition, at a root of G(t), $t^3 = -pt-q$ and so at the roots, we can show $$F(t_k)=\frac{t_k}{4}(pt_k+3q).$$ The two values of interest are $F(t_2)$ and $F(t_0)$. Once you have established which of these is lower, you can translate the relevant value of t into a value for x using [1] and evaluate f(x) there.
Given $0< \theta < \frac{\pi}{3}$, it is possible to show that $$F(t_2)>F(t_0)$$ $$\iff \cos(\theta-\frac{4\pi}{3})+\cos(\theta)+\cos(3\theta)>0$$ $$\iff \theta<\frac{\pi}{6}$$ $$\iff q<0.$$
So, putting it all together, if $q<0$ ie if $$2b^3-9abc+27a^2d<0 $$ then the minimum value is given by $$f(t_0-\frac{b}{3a})$$ and if $$2b^3-9abc+27a^2d>0 $$ then the minimum value is given by $$f(t_2-\frac{b}{3a})$$
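Formula [2] is easy to exercise on a concrete depressed cubic (a sketch, not part of the answer; the example $t^3-3t+1$ is an arbitrary choice with three real roots):

```python
import math

def roots_depressed(p, q):
    # three real roots of t^3 + p*t + q = 0 when 4p^3 + 27q^2 < 0 (formula [2])
    assert 4 * p ** 3 + 27 * q ** 2 < 0
    r = 2 * math.sqrt(-p / 3)
    theta = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p)) / 3
    return [r * math.cos(theta - 2 * math.pi * k / 3) for k in range(3)]

p, q = -3.0, 1.0                     # t^3 - 3t + 1
for t in roots_depressed(p, q):
    assert abs(t ** 3 + p * t + q) < 1e-9
```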
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sum of two perfect squares is also a perfect square. Proof that one of these numbers is divisible by 3 At first, I tried a proof by contradiction: I considered two numbers that are not divisible by $3$. Then I tried to write consecutive perfect squares that are divisible by $3$ to see some pattern, but it didn't get me anywhere. Then I tried to express each whole number $a$, $b$ and $c$ under the condition that $a^2+b^2=c^2$ in terms of two arbitrary integers. I found out that $a$ has to be of the form $2mn$ and $b$ has to be of the form $m^2-n^2$, where $m$ and $n$ are whole numbers. So I have to prove that either $2mn$ or $m^2-n^2$ is divisible by $3$. I got stuck at this point and don't know how to prove that, so help me please
| HINT. Write $c^2=a^2+b^2$ and consider the equation $\mod 3$, remembering that $x \equiv 0 \mod 3$ implies that $x$ is divisible by $3$.
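The claim itself can be brute-forced for reassurance before writing the proof (a check, not a proof):

```python
import math

# in every Pythagorean triple a^2 + b^2 = c^2 found below, 3 divides a or b
for a in range(1, 100):
    for b in range(a, 100):
        c = math.isqrt(a * a + b * b)
        if c * c == a * a + b * b:
            assert a % 3 == 0 or b % 3 == 0
```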
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3530934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
write down the transition probability matrix for a certain pattern Suppose we're generating a sequence of 0s and 1s. We want a specific pattern 0011, and we are given this transition matrix with arbitrary probabilities
$$
\begin{bmatrix}
p & q \\
q & p \\
\end{bmatrix}
$$
where the first row and column are for 0, and the second row and column are for 1.
Using this transition matrix, how do we end up with a transition matrix for pattern 0011:
(Our states are {1,0,00,001,0011})
$$
\begin{bmatrix}
p&q&0&0&0 \\
q&0&p&0&0 \\
0&0&p&q&0 \\
0&q&0&0&p \\
0&0&0&0&1 \\
\end{bmatrix}
$$
I really have no idea how they arrived to this matrix. I am very lost. Any help is appreciated.
| the first 2x2 matrix tells you that you switch symbols (from 0 to 1 or from 1 to 0) with probability $q$.
The second 5x5 matrix provides a path towards state 0011. Let us analyze it more carefully:
*
*from state "1", you switch to "0" with probability $q$
*from state "0", you switch to state "00" with probability $p$
*from state "00", you switch to state "001" with probability $q$
*from state "001", you switch to state "0011" with probability $p$
*once in state "0011" you remain there forever
To obtain the 5x5 matrix algorithmically from the 2x2 matrix, note that:
*
*first block: the 2x2 leading principal minor in the upper left corner of the 5x5 matrix equals the 2x2 matrix, except for one entry which is replaced from $p$ to 0. This entry is now used to represent a transition to the second block
*second block: the next 2x2 principal minor of the 5x5 matrix again equals the 2x2 matrix, except for two entries which are now replaced from $q$ and $p$ to 0 and 0. These entries are now used to represent a transition back to the first block or to the last block
*last block: the last entry of the 5x5 matrix is a self-transition
In summary: the 5x5 matrix contains 3 blocks -- the first block captures the transitions to generate "00", the second block captures the transitions to generate "11" and the last block is a self transition.
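The construction can be checked mechanically, e.g. that every row of the 5x5 matrix is a probability distribution and that the pattern state is absorbing (a quick script, not part of the answer; $p=0.6$ is an arbitrary choice):

```python
def transition_matrix(p):
    q = 1 - p
    # states in order: 1, 0, 00, 001, 0011 (absorbing)
    return [[p, q, 0, 0, 0],
            [q, 0, p, 0, 0],
            [0, 0, p, q, 0],
            [0, q, 0, 0, p],
            [0, 0, 0, 0, 1]]

T = transition_matrix(0.6)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in T)  # rows are distributions
assert T[4][4] == 1                                   # state 0011 is absorbing
```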
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is length often a unitless quantity in mathematics? This is something that's bothered me for a long time: Why is length often not given any units (e.g. inches or meters) in mathematics? The unit circle is said to have a "radius of 1," with no unit given. A vector is said to have a magnitude of, say, 27, with no unit given. A triangle is said to have a hypotenuse length of, say, 10, again with no unit given.
Is the reason no unit is given simply because one could substitute any unit, so long as the proportions between quantities remain the same? I searched around for a little on here but couldn't find any questions directly addressing my concern here. I'd appreciate any clarification.
| Distance in a Cartesian coordinate plane is a function which inputs two points $p_1 = (a_1,b_1)$ and $p_2 = (a_2,b_2)$ and outputs their distance $d(p_1,p_2) = \sqrt{(a_1-a_2)^2 + (b_1-b_2)^2}$, which is a real number. So you can think of the unit of distance abstractly as the numeral $1$, which is the multiplicative identity of the real number system. This abstraction is actually useful, because it enforces a focus on mathematical concepts that are independent of the real world units that may occur in applications (e.g. inches or meters).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Prove or disprove the statements about linear map Let $m,n\in \mathbb{N}$ and let $V, W$ be $\ \mathbb{R}$-vector spaces with $\dim V=n$ and $\dim W=m$.
Let $f:V\rightarrow W$ be a linear map.
I want to prove or disprove the following:
There is a linear map $f:V\rightarrow W$ with the following properties:
*
*$f$ is injective, $n=3$, $m=4$.
*$\text{Rank}(f)=6$, $\text{Nullity}(f)=5$, $m=11$.
*$f$ is surjective, $n=4$, $m=3$.
*$f$ is injective, $n=2$, $\text{Rank}(f)=3$.
*$\text{Nullity}(f)=0$, $n=3$, $m=5$.
I have done the following:
*
*Since $f$ is injective, then $\text{ker}(f) = \{0\}$ and so $\dim\text{ker}(f) = 0$.
It holds that $\text{Im}(f)\subseteq W$. Therefore $\dim \text{Im}(f)\leq \dim W$.
From the Rank–nullity theorem we have that \begin{equation*}\dim V = \dim \text{ker}(f) + \dim \text{Im}(f)=\dim \text{Im}(f)\leq \dim W\end{equation*}
Therefore it cannot be that $n=4$ and $m=3$, right?
*From the Rank–nullity theorem we have that \begin{equation*}\dim V=\text{Nullity}(f)+\text{Rank}(f)=5+6=11 \Rightarrow n=11\end{equation*} How do we use the information about $m$?
*Since $f$ is surjective, it holds that $\text{Im}(f)=W$, and so $\text{Rank}(f)=\dim W$.
From the Rank–nullity theorem we have that \begin{equation*}\text{Rank}(f)=\dim V-\text{Nullity}(f)\leq \dim V \Rightarrow \dim W\leq \dim V \Rightarrow m\leq n\end{equation*}
Therefore there can be a linear surjective map with $n=4$ and $m=3$, right?
*Since $f$ is injective, it holds that $\text{ker}(f) = \{0\}$ and so $\dim\text{ker}(f) = 0$.
It holds that $\text{Nullity}(f)=\dim \text{ker}(f)=0$.
From the Rank–nullity theorem we have that \begin{equation*}\text{Rank}(f)=\dim V-\text{Nullity}(f) \Rightarrow 3=2-0 \Rightarrow 3=2\end{equation*}
A contradiction.
Therefore there cannot be an injective linear map with $n=2$ and $\text{Rank}(f)=3$, right?
*Do we apply here the Rank–nullity theorem? But how?
| Since these are linear maps between finite dimensional vector spaces $V_n \longrightarrow W_m$, each of them (if it exists) can be represented by an $m \times n$ matrix $A$.
For (1): Here $A$ must be a $4 \times 3$ matrix. For an injective map, there are no free columns in $A$, so take
$$A_{4 \times 3}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\\0&0&0\end{bmatrix}$$
For (2): $\text{dim} V=\text{rank}+\text{nullity}=6+5=11$, so $A$ should have $6$ pivot columns and $5$ free columns. For example,
$$A_{11 \times 11}=\begin{bmatrix}\mathbf{e_1} & \mathbf{e_2} & \ldots & \mathbf{e_6} & \mathbf{0} & \ldots & \mathbf{0}\end{bmatrix}.$$
For (3): Here $A_{3 \times 4}$. So max rank of $A$ is $3$. For surjective map, we need full rank. Thus
$$A_{3 \times 4}=\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{bmatrix}$$
For (4): Here $A_{m \times 2}$. $\text{rank}=3$, so $m \geq 3$. For injective map $A$ should not have free columns. But maximum rank of this matrix can be $2$ and $m$ being at least three, there will always be a free column. So not possible.
For (5): Here $A_{5 \times 3}$. Nullity $0$ means injective map, so no free columns in $A$. Maximum rank of this matrix can be $3$. Thus
$$A_{5 \times 3}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\\0&0&0\\0&0&0\end{bmatrix}$$
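The claimed ranks can be double-checked with a small row-reduction routine (`rank` here is a helper written for this check, not a library function):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]              # case (1): rank 3 = n
A3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]                # case (3): rank 3 = m
A5 = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0], [0, 0, 0]]   # case (5): rank 3, nullity 0

assert rank(A1) == 3 and rank(A3) == 3 and rank(A5) == 3
```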
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
For any ring $R$, prove that $R$-$\mathbf{Mod}$ has no subobject classifier. This is Exercise I.3 of Mac Lane and Moerdijk's, "Sheaves in Geometry and Logic [. . .]".
The Question:
For any ring $R$, prove that the category $R$-$\mathbf{Mod}$ of all left $R$-modules has no subobject classifier.
I assume that the morphisms of $R$-$\mathbf{Mod}$ are module homomorphisms; that is, $M\stackrel{f}{\rightarrow} N$ is given by, for all $x,y\in M$ and all $r\in R$,
$$\begin{align}
f(x+y)&=f(x)+f(y)\\
f(rx)&=rf(x).
\end{align}$$
I'm guessing that rings are intended to have a $1$ and are not necessarily commutative.
A definition of a subobject classifier is given on page 32, ibid.
Definition: In a category $\mathbf{C}$ with finite limits, a subobject classifier is a monic, ${\rm true}:1\to\Omega$, such that to every monic $S\rightarrowtail X$ in $\mathbf{C}$ there is a unique arrow $\phi$ which, with the given monic, forms a pullback square
$$\begin{array}{ccc}
S & \to & 1 \\
\downarrow & \, & \downarrow {\rm true}\\
X & \stackrel{\dashrightarrow}{\phi} & \Omega.
\end{array}$$
Thoughts:
Following the answers to my previous question on the nonexistence of a subobject classifier in $\mathbf{FinSets}^{\mathbf{N}}$, I have considered using the Yoneda Lemma; however, I'm not sure how or whether it applies: the "target category," so to speak, for the Lemma is $\mathbf{Sets}$.
Also, I ask myself, "what would a subobject classifier in $R$-$\mathbf{Mod}$ look like?"
To answer this, I considered first the existence of a terminal object in the category. My guess is that it's $I=(\{0_R, 1_R\}, \times_R, +_R)$, since, for any $R$-module $M$, we have $!: M\to I$ given by
$$!(m)=\begin{cases}
0_R &: m=0_M, \\
1_R &: \text{ otherwise}.
\end{cases}$$
But I don't think this is right. Perhaps my problem is my understanding of left $R$-modules.
Please help :)
| The terminal and initial object is the $0$-module, $\{0\}$, with addition and multiplication by elements of $R$ given in the only way possible. Consider $S = 0$. Then we get that $\ker (\phi) = 0$ and every $X$ embeds into $\Omega$. A morphism with zero kernel in $R$-$\mathbf{Mod}$ has to be a monomorphism, and because right adjoints preserve monomorphisms and the forgetful functor $R$-$\mathbf{Mod} \rightarrow \mathbf{Set}$ is a right adjoint, $\phi$ has to be injective on the set level. There are no size restrictions on $R$-modules, thus we arrive at a contradiction: there can't be an injection $X \rightarrow \Omega$ for every $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Linear isomorphism $f:U_1\to U_2$ with $U_2\subset U_1$ Let $V$ be a finite-dimensional vector space and $U_1,U_2\subset V$ be subspaces of $V$ such that $U_2\subset U_1$ and let $f:U_1\to U_2$ be a linear isomorphism. Prove (or give a counterexample) whether the following holds: $$U_1=U_2$$
I can't think of a counterexample but showing this also seems difficult since if $B_1$ is a Basis of $U_1$, all I know is that $f(B_1)$ is a basis of $U_2\subset U_1$ but that doesn't seem to suffice.
Thank you very much in advance.
| Hint:
Since we have finite dimensional spaces,
$$\dim (U_1/U_2)=\dim U_1 -\dim U_2. $$
Now, since $U_1$ and $U_2$ are isomorphic, they have the same dimension.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Improper Integral - If there is an asymptote, does it necessarily have to diverge? Context: After learning that the harmonic series diverges, I have begun to doubt merely looking at graphs.
Here's an improper integral with two vertical asymptotes: $x=0$ and $x=0.5$.
$$\int_{0}^1\frac{1}{2x^2-x}dx$$
It diverges.
My line of reasoning: The function is not bounded at $x=0$ and $x=0.5$, and is not integrable, despite being continuous almost everywhere from $0$ to $1$.
Another possible line of reasoning:
The integral can be split up into: $\int_{0^+}^{0.5^-}\frac{1}{2x^2-x}dx$ + $\int_{0.5^+}^{1}\frac{1}{2x^2-x}dx$
Each one of these integrals, when evaluated, diverge. So the improper integral diverges.
Which line of reasoning is more logical?
Is there a function that has vertical asymptote $x=k$, but when evaluated with limits at $x=k^-$ and $x=k^+$, converges?
(If this is true, then the second line of reasoning might be correct.)
| I assume by "diverges" you mean is not absolutely (and so Lebesgue) integrable? Because in this case you can just compute the integral of its absolute value
$$ \int_0^\frac{1}{2} \frac{1}{x-2x^2} + \int_\frac{1}{2}^2 \frac{1}{2x^2-x} $$
and notice that, for instance, the second integrand is $+\infty$. Because the integral of its negative part is also $+\infty$, it's Lebesgue integral simply makes no sense.
Do not split your integral into two different parts before knowing it's absolute value is integrable (or the argument does not change sign), because that operation makes no sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3531918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\prod_{k=1}^\infty (1-1/2^k)$ converge to zero? I know this product converge
$$\prod_{k=1}^\infty (1-1/2^k),$$
but I don't know how to prove that this limit is different from zero. This is equivalent to proving that
$$\sum^{\infty}_{k=1} \log(1-1/2^k)$$ converges. I can't get it. Could you give me any hint? Thanks!
I'm trying to prove that
$$\frac{|GL_n(\mathbb{F}_2)|}{2^{n^2}} \to \alpha,$$
where $\alpha > 0$
| $\log(1-x)$ is a concave function on $[0,1)$, so for any $x\in[0,1/2]$ we have
$$ \log(1-x)\geq -2\log(2) x $$
immediately implying
$$ \sum_{k\geq 1}\log\left(1-\frac{1}{2^k}\right) \geq -2\log(2)\sum_{k\geq 1}\frac{1}{2^k} = -2\log(2) $$
and
$$ \prod_{k\geq 1}\left(1-\frac{1}{2^k}\right) \geq \frac{1}{4}.$$
Much better approximations can be derived through the Mellin transform, as done by Marko Riedel here.
Creative telescoping is also a chance. Numerically the LHS is $\approx 0.288788095$.
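As a sanity check, the partial products converge quickly (each new factor differs from $1$ by only $2^{-k}$), and plain Python reproduces both the numerical value quoted above and the lower bound $\geq \frac14$:

```python
# Partial products of prod_{k>=1} (1 - 1/2^k); the factors approach 1
# geometrically, so 60 of them already give full double precision.
p = 1.0
for k in range(1, 61):
    p *= 1.0 - 0.5 ** k

print(p)  # ≈ 0.288788095, comfortably above the bound 1/4
```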
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Prove $\sin x + \arcsin x > 2x$ using Maclaurin series My teacher asked us to solve this problem using the Maclaurin series, but I could not figure out how to approach..
Prove that the inequality $\sin x + \arcsin x > 2x$ holds for all values of $x$ such
that $0 < x \leq 1$.
I know that the Maclaurin series of
sin(x) = x - $\frac{x^3}{3!}$ + $\frac{x^5}{5!}$ - $\frac{x^7}{7!}$ + ...
arcsin(x) = x + $\frac{1}{2}\cdot\frac{x^3}{3}$ + ($\frac{1}{2}\cdot\frac{3}{4}$)$\cdot\frac{x^5}{5}$ + ...
However, I do not know how to prove this using these series... Could anyone give some ideas?
Thank you!
| For $0\le x\le1$ we have
$$\sin x\ge x-{1\over6}x^3\ge0\quad\text{and}\quad\arcsin x\ge x+{1\over6}x^3+{3\over40}x^5\ge0$$
which imply
$$\sin x\arcsin x\ge x^2+\left({3\over40}-{1\over36} \right)x^6-{1\over80}x^8=x^2+\left(34-9x^2\over720\right)x^6\ge x^2$$
By AGM we have
$${\sin x+\arcsin x\over2}\ge\sqrt{\sin x\arcsin x}\ge x$$
Remark: As Martin R astutely observes in comments, as soon as you have $\sin x\ge x-{1\over6}x^3$ and $\arcsin x\ge x+{1\over6}x^3$, you have $\sin x+\arcsin x\ge2x$, so tacking another (nonnegative) term onto the arcsine series, taking the product and using AGM is wholly unnecessary. I failed to notice this because I was approaching things backwards: I had decided to see if AGM could be used and then worked out how much of the two series were needed to arrive at the desired inequality.
The inequality $\sin x\ge x-{1\over6}x^3$ for $0\le x\le1$ can be seen from the fact that the series for $\sin x$ is an alternating series of decreasing terms.
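For a numerical sanity check of the target inequality and of the two truncated-series bounds used above, a simple grid on $(0,1]$ suffices:

```python
import math

# Grid check on (0, 1] of the target inequality and of the two
# truncated-series bounds used in the answer.
xs = [i / 100 for i in range(1, 101)]
margin = min(math.sin(x) + math.asin(x) - 2 * x for x in xs)
sin_ok = all(math.sin(x) >= x - x**3 / 6 for x in xs)
asin_ok = all(math.asin(x) >= x + x**3 / 6 + 3 * x**5 / 40 for x in xs)
print(margin > 0, sin_ok, asin_ok)  # True True True
```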
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
What's the probability that you wait $2$ hours for the train when you have already waited $1$ hour?
Let's say there are independent waiting times of trains which are
exponentially distributed with mean value $\frac{1}{2}$ hours. If you
have already waited $1$ hour for the train, what's the probability
that you will wait $2$ hours?
We have that mean value $\mu = \frac{1}{2}$. It's known that $\mu = \frac{1}{\lambda} \Leftrightarrow \frac{1}{2} = \frac{1}{\lambda} \Leftrightarrow \lambda = 2$
Now comes the part I'm not sure about: when you have already waited $1$ hour and now wait an additional $2$ hours, how is the probability calculated correctly here? I have looked this up on the internet and found that memorylessness applies to the exponential distribution, with formula:
$$P(X > r+t \mid X > r) = P(X>t)$$
where $r$ is the time you have previously waited, so $r=1$ and where $t$ is the time you have waited afterwards, thus $t=2$. Putting this into the formula we have
$$P(X>3 \mid X>1) = P(X>2) = 1-P(X<2) = 1-\left(1-e^{-2 \cdot2}\right) \approx 0.0183$$
So when you have already waited $1$ hour for the train, there is a probability of $0.0183$ that you will be waiting additional $2$ hours for the train?
Can you please tell me if it's correct like that because that's how I would do it in the exam :c
| Let $T_n\stackrel{\mathrm{i.i.d.}}\sim\mathrm{Expo}(\lambda)$. Define $S_0=0$ and $S_n=\sum_{i=1}^n T_n$. Then $S_n$ is a renewal process, in fact a Poisson process, with associated counting process $N(t) = \sup\{n: S_n\leqslant t\}$. Define the age process by $A_t = t - S_{N(t)}$ and the residual process by $R_t = S_{N(t)+1} - t$ for $t\geqslant 0$. Define the renewal function $M(t) = \mathbb E[N(t)]$. Since $N(t)$ is Poisson distributed with mean $\lambda t$, it readily follows that $M(t)=\lambda t$. Let $Z_t\stackrel{\mathrm{def}}=(A_t,R_t)$. We derive the distribution of $Z_t$ by considering when $A_t=t$, that is, when $t$ lies within the first renewal interval. Then we have
$$
f_{Z_t}(t,y) = f_{T_1}(t+y) = \lambda e^{-\lambda(t+y)}\cdot\mathsf 1_{[0,\infty)}(y).
$$
The other case where $A_t<t$ corresponds to when $t$ occurs after the first renewal. We have
\begin{align}
f_{Z_t}(x,y) &= \sum_{n=1}^\infty \frac{\lambda^n(t-x)^{n-1}e^{-\lambda(t-x)}}{(n-1)!}\lambda e^{-\lambda(x+y)}\\
&= \lambda^2 e^{-\lambda(x+y)}\cdot\mathsf 1_{[0,t)\times[0,\infty)}(x,y).
\end{align}
Combining these two cases yields
$$
f_{Z_t}(x,y) = \lambda^2e^{-\lambda(x+y)}\mathsf 1_{[0,t)\times[0,\infty)}(x,y) + \lambda^2e^{-\lambda(t+y)}\mathsf 1_{\{x=t\}\times[0,\infty)}(x,y)
$$
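Returning to the asker's direct computation, memorylessness is easy to check by simulation. The script below is only a sanity check (fixed seed, loose tolerance on the Monte Carlo error), estimating $P(X>3\mid X>1)$ and comparing it with $P(X>2)=e^{-4}$:

```python
import math
import random

random.seed(12345)
lam = 2.0  # rate; mean waiting time 1/2 hour

# Estimate P(X > 3 | X > 1) from simulated exponential waiting times and
# compare with the memoryless prediction P(X > 2) = e^{-4} ≈ 0.0183.
samples = [random.expovariate(lam) for _ in range(400000)]
past_one = [x for x in samples if x > 1.0]
est = sum(1 for x in past_one if x > 3.0) / len(past_one)
exact = math.exp(-lam * 2.0)
print(est, exact)
```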
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Help Proving Linear Regression Model in Matrix Notation I need to show that:
$\hat \mu ^{'} \hat \mu = \hat \mu ^{'} y$
Given the matrix notation of the linear regression model: $y= X\beta + \mu$
Also given: $\hat y = X \hat \beta $ and $\hat \mu = y - X\hat \beta $
I have tried:
$(y-X \hat \beta)^{'} (y-X\hat \beta)$ = $ (y-X \hat \beta)^{'} y $
I'm not sure if there's a property of transpose that I'm not aware of but I don't see how we could be saying $(y-X \hat \beta) = y $
Are we assuming that $X\hat\beta = 0 $ ?
| We know that $$X'(y-X\hat{\beta})=0\tag{1}$$
We want to show that $$\hat{\mu}'\hat{\mu}=\hat{\mu}'y$$
which is equivalent to
$$(y-X\hat{\beta})'X\hat{\beta}=0$$
Let's take the transpose,
$$\hat{\beta}' \color{red}[X'(y-X \hat{\beta})\color{red}]=0$$
From $(1)$, we can see that the result is true.
Regarding your attempt:
For matrices, if $AB=AC$, we can't conclude that $B=C$ unless there are other conditions such as $A$ is a non-singular matrix.
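A small numeric illustration (made-up data; the $2\times 2$ normal equations are solved directly rather than with a library) shows the residual identity $\hat\mu'\hat\mu=\hat\mu'y$ holding up to rounding:

```python
# Numeric check of mu_hat' mu_hat = mu_hat' y on made-up data.
# Fit y = b0 + b1*x by solving the 2x2 normal equations X'X b = X'y.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
sx, sxx = sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
det = n * sxx - sx * sx
b0 = (sxx * sy - sx * sxy) / det
b1 = (n * sxy - sx * sy) / det

mu = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]   # residuals
lhs = sum(m * m for m in mu)                       # mu' mu
rhs = sum(m * y for m, y in zip(mu, ys))           # mu' y
print(lhs, rhs)  # equal up to floating-point rounding
```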
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Logistic map confusion The logistic map is a simple example of a discrete dynamical system, defined as $$x_{i+1}=\lambda x_{i}(1-x_{i}).$$
It is known that for $\lambda=4$ this map shows chaotic behavior for $x\in (0,1)$.
The question:
How could it be that the logistic map at $\lambda=4$ is chaotic, while, starting with $x_0=0.5$, the iteration goes to $1$ and then stays fixed at $0$?
| What is actually meant with "chaotic" is that almost all start values lead to a (theoretically) non-periodical sequence and the sequence shows no structure which means that it can be used as a pseudo-random-generator.
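A tiny sketch of the point: $x_0=0.5$ is one of the exceptional start values that collapses, while a generic start value does not.

```python
# Logistic map x_{i+1} = lam * x_i * (1 - x_i) at lam = 4.
lam = 4.0

def orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

# x0 = 0.5 is a pre-image of the fixed point 0: it escapes chaos in two steps.
print(orbit(0.5, 4))  # [0.5, 1.0, 0.0, 0.0, 0.0]

# A generic start value shows no such collapse.
print(orbit(0.3, 4))
```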
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that :
$f(1)+f(-1)-2(f(0)+1)\equiv 0\pmod{2f(-1)}$ Problem :
$$f(t)=t^{3}+\alpha t^{2}+\beta t+\gamma, \qquad \alpha, \beta, \gamma \in\mathbb{Z},$$
with roots $x_{1},x_{2},x_{1}x_{2}$.
Question is :
Prove that :
$$f(1)+f(-1)-2(f(0)+1)\equiv 0\pmod{2f(-1)}$$
My try:
We have :
$$\begin{cases}x_{1}+x_{2}+x_{1}x_{2}=-\alpha \\x_{1}x_{2}+x_{1}^{2}x_{2}+x_{1}x_{2}^{2}=\beta \\x_{1}^{2}x_{2}^{2}=-\gamma\end{cases}$$
Also :
$$f(1)+f(-1)-2(f(0)+1)=2(\alpha -1)$$
$$2f(-1)=2(\alpha -\beta +\gamma -1)$$
So :
$$f(1)+f(-1)-2(f(0)+1)\equiv 2(\beta -\gamma)\pmod{2f(-1)}$$
But we have from system equation
$$\beta -\gamma =x_{1}x_{2}(1+x_{1}+x_{2}+x_{1}x_{2})=x_{1}x_{2}(1-\alpha )$$
so the problem is done; the only question left is how we prove that $x_{1}x_{2}\in\mathbb{Z}$?
| COMMENT.- I am afraid your question is not true. Choose, for instance, $(x_1,x_2)=(3,5)$, so $$f(t)=t^3-23t^2+135t-225$$ and $f(1)+f(-1)-2(f(0)+1)=-48$ while $2f(-1)=-768$. However, if you interchange the modulus you do have that $48$ divides $768$, so maybe that would give a correct question.
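One can check this counterexample mechanically (note the constant term is $-225=-3\cdot 5\cdot 15$):

```python
# Check the counterexample: roots x1 = 3, x2 = 5, x1*x2 = 15, so
# f(t) = (t - 3)(t - 5)(t - 15) = t^3 - 23 t^2 + 135 t - 225.
def f(t):
    return t**3 - 23 * t**2 + 135 * t - 225

expr = f(1) + f(-1) - 2 * (f(0) + 1)
mod = 2 * f(-1)
print(expr, mod)  # -48 -768

# 2 f(-1) does not divide the expression, but the expression divides 2 f(-1).
print(expr % mod == 0, mod % expr == 0)  # False True
```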
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $\int_0^1\ln^2\Gamma(x)\,\mathrm{d}x$ I want to solve the following integral but after some work I didn't find a way to go. Could anyone give me a hint?
\begin{equation}
I=\int_{0}^{1}\ln^2\Gamma(x)\,\mathrm{d}x
\end{equation}
The answer is
\begin{equation}
I=\frac{\ln^2 (2\pi)}{3}+\frac{\pi^2}{48}+\frac{\gamma\ln(2\pi)}{6}+\frac{\gamma^2}{12}+\frac{\zeta''(2)}{2\pi^2}-\frac{\zeta'(2)\ln (2\pi)}{\pi^2}-\frac{\gamma\zeta'(2)}{\pi^2}
\end{equation}
They only give a hint (using the Fourier Series) which I looked up at https://de.wikipedia.org/wiki/Gammafunktion.
\begin{equation}
\ln\Gamma(x) = \left(\tfrac{1}{2}-x\right) \bigl(\gamma + \ln(2\pi)\bigr) + \frac{1}{2} \ln\frac{\pi}{\sin(\pi x)} + \frac{1}{\pi} \sum_{k=2}^\infty \frac{\ln k}{k} \sin(2\pi k x)
\end{equation}
What I have tried so far:
*
*squared the series
*integration by parts and then the Fourier series
| Use Parseval's Theorem as @James Arathoon mentioned and use the Fourier series given here:
Integral that arises from the derivation of Kummer's Fourier expansion of $\ln{\Gamma(x)}$.
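If you want to check the closed form numerically before deriving it, a midpoint rule handles the integrable $(\ln x)^2$-type singularity at $0$; the values of $\zeta'(2)$ and $\zeta''(2)$ below are hardcoded to commonly quoted numerical values (an assumption, not derived here):

```python
import math

# Midpoint-rule estimate of I = int_0^1 ln^2 Gamma(x) dx.  Near 0 the
# integrand behaves like (ln x)^2, which is integrable, so a fine grid
# is accurate to a few units in the fourth decimal place.
n = 200000
I_num = sum(math.lgamma((i + 0.5) / n) ** 2 for i in range(n)) / n

# Closed form quoted in the question; zeta'(2) and zeta''(2) are
# hardcoded numerical constants (assumed values, not computed here).
g = 0.5772156649015329        # Euler-Mascheroni constant
l = math.log(2 * math.pi)
zp2 = -0.937548254316         # zeta'(2)
zpp2 = 1.989280234            # zeta''(2)
pi2 = math.pi ** 2
I_closed = (l * l / 3 + pi2 / 48 + g * l / 6 + g * g / 12
            + zpp2 / (2 * pi2) - zp2 * l / pi2 - g * zp2 / pi2)
print(I_num, I_closed)  # the two should agree to roughly 1e-3
```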
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3532877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Wronskian of $x|x|$ and $x^2$. Wikipedia says the Wronskian of $x|x|$ and $x^2$ is identically zero.
But it is not LD.
I know why these two are LI and not LD.
Since $x|x|$ is not a differentiable function, how do I find their Wronskian?
Also, please suggest ways to check linear independence and dependence when the functions are not differentiable.
Thanks in advance.
| The function $f(x)=x|x|$ is differentiable everywhere. For $x>0$, you have $f(x)=x^2$, differentiable. For $x<0$ you have $f(x)=-x^2$, differentiable. At $0$, you have
$$
\frac{f(h)-f(0)}h=\frac{h|h|}h=|h|\to0,
$$
so the derivative exists and is zero.
For two functions, using the Wronskian is overkill. Linear dependence for two functions means that one is a multiple of the other: this is trivial to check for yes or for no. In your example, for instance, if $x|x|=cx^2$ for all $x$, then evaluate at $1$ to get $c=1$, and at $-1$ to get $c=-1$; so such $c$ cannot exist and the functions are linearly independent.
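A quick numerical sketch of both facts (the Wronskian vanishing identically, and the failure of proportionality):

```python
# f(x) = x|x| and g(x) = x^2: f is differentiable with f'(x) = 2|x|,
# and the Wronskian f g' - f' g vanishes identically.
def f(x): return x * abs(x)
def fp(x): return 2 * abs(x)
def g(x): return x * x
def gp(x): return 2 * x

pts = [-2.0, -0.5, 0.0, 0.3, 1.7]
wronskians = [f(x) * gp(x) - fp(x) * g(x) for x in pts]
print(wronskians)  # all zeros

# Yet f is not a scalar multiple of g: the ratio flips sign with x.
print(f(1.0) / g(1.0), f(-1.0) / g(-1.0))  # 1.0 -1.0
```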
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
All squarefree semiprime numbers less than a certain number How can I find all numbers $N$ where $N=p\cdot q$? (Here, $p$ and $q$ are primes and $p<q$.)
For example: I want a formula that help me to find all $N<1000$
| Apply the logic of the sieve of Sundaram: $$(2a+1)(2b+1)=2(2ab+a+b)+1.$$ One of $a,b$ must be less than $16$, because if not, already $a=b=16$ gives $$4a^2+4a+1>1000.$$ Then, applying it again, you'll need $c$ less than $3$, because you can sieve out all values that would not create primes in the product. Finally, you would double all primes less than $500$ (to get the semiprimes $2q$).
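For a direct (if less clever) route, a short sieve-based enumeration lists every such $N<1000$; the helper `primes_upto` is just a standard Sieve of Eratosthenes:

```python
# Enumerate all squarefree semiprimes N = p*q (p < q primes) below 1000.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

limit = 1000
ps = primes_upto(limit // 2)  # q < limit/p <= limit/2 since p >= 2
semis = sorted({p * q
                for i, p in enumerate(ps)
                for q in ps[i + 1:]
                if p * q < limit})
print(semis[:6])  # [6, 10, 14, 15, 21, 22]
print(len(semis))
```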
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Volume between surfaces
Find $V(T),$ where $T$ is the region bounded by the surfaces $y = kx^2+kz^2$ and $z=kx^2+ky^2,$ where $k\in\mathbb{R}, k > 0.$
I tried solving for the area over which these curves intersect, which gave me $y-kz^2 = z-ky^2.$ Solving gives $y+ky^2 - (z+kz^2) =0\Rightarrow (y-z)(1 + k(y+z)) = 0.$ Since $y$ and $z$ are nonnegative, this implies $y=z.$ We thus obtain $y = kx^2 + ky^2\Rightarrow ky^2 - y +kx^2 = 0\Rightarrow (y -\dfrac{1}{2k})^2 +x^2 = \dfrac{1}{4k^2},$ which is a circle centered at $(0,\dfrac{1}{2k})$ with radius $\dfrac{1}{2k}.$
I am able to solve for $A(x),$ the area in terms of $x$ at a given value of $x,$ and I know $x$ ranges from $-\dfrac{1}{2k}$ to $\dfrac{1}{2k},$ but the resulting integral I have to evaluate is absolutely disgusting!
Is there a "cleaner" integral I can use?
| Observe that the enclosed volume by $y = kx^2+kz^2$ and $z=kx^2+ky^2$ are symmetric with respect to the plane $y=z$. So, the total volume is twice the volume between the surfaces $y=z$ and $z=kx^2+ky^2$.
Recognize that the integration region in $xy$-coordinates is the circle given by
$$x^2+ \left(y -\frac{1}{2k}\right)^2 = \dfrac{1}{4k^2}$$
and re-center the circle with the variable changes $x=u$ and $v = y - \frac{1}{2k}$. Then, the integration region becomes $u^2+ v^2 = \frac{1}{4k^2}$ and the two enclosing surfaces are respectively,
$$z_1 = v+ \frac 1{2k},\>\>\>\>\> z_2=ku^2+k\left(v+\frac1{2k}\right)^2$$
As a result, the volume integral can be expressed as,
$$V= 2\int_{u^2+v^2\le \frac1{4k^2}} (z_1-z_2)dudv
= 2\int_{u^2+v^2\le \frac1{4k^2}} (\frac1{4k}-ku^2-kv^2)dudv$$
Then, integrate in polar coordinates to obtain,
$$V=2\int_0^{2\pi} \int_0^{\frac1{2k}}(\frac1{4k}-kr^2)rdrd\theta
=4\pi\int_0^{\frac1{2k}}(\frac1{4k}-kr^2)rdr =\frac\pi{16k^3} $$
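As a sanity check of $V=\pi/(16k^3)$, a midpoint grid over the disk (here $k=1$, so the exact value is $\pi/16$) agrees well; the integrand vanishes on the boundary circle, which keeps the boundary-cell error small:

```python
import math

# Midpoint-grid check of V = pi/(16 k^3) for k = 1: integrate the height
# gap 2*(1/(4k) - k(u^2 + v^2)) over the disk u^2 + v^2 <= (1/(2k))^2.
k = 1.0
R = 1.0 / (2.0 * k)
n = 400
h = 2.0 * R / n
V = 0.0
for i in range(n):
    u = -R + (i + 0.5) * h
    for j in range(n):
        v = -R + (j + 0.5) * h
        if u * u + v * v <= R * R:
            V += 2.0 * (1.0 / (4.0 * k) - k * (u * u + v * v)) * h * h

print(V, math.pi / (16.0 * k ** 3))  # both close to 0.19635
```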
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Given a positive, finite, regular measure $\lambda$ and $g\in L^1(\lambda)$, the measure $\mu$ given by $\mu(E)=\int_E g~d\lambda$ is regular Suppose $\lambda$ is a positive, finite, regular measure, and suppose $g \in L^1(\lambda)$. Define a measure $\mu$ by letting $\mu(E)=\int_E g~d\lambda$. Then is it true that $\mu$ is regular(i.e., $|\mu|$ is regular)?
This should be true, but I have no idea to show this. This question arises from the proof of Theorem 6.19 of Rudin's Real and Complex Analysis. He asserted that $\mu$ is regular without proof, but I cannot see why.
| Let $\epsilon >0$. Since $\mu \ll \lambda$ there exists $\delta >0$ such that $\lambda (E) <\delta$ implies $\int_E |g|\,d\lambda <\epsilon$. Let $E$ be any Borel set and $K$ be a compact subset of $E$ with $\lambda (E\setminus K) <\delta$. Then $|\mu (E\setminus K)| \leq \int_{E\setminus K} |g|\, d\lambda <\epsilon$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many words can be formed out of the letters of the word GRANDMOTHER, such that each word starts with G and ends with R? This is a problem from a specimen question paper:
"How many words can be formed out of the letters of the word GRANDMOTHER, such that each word starts with G and ends with R?"
My problem is that the question does not make it clear whether the letters can be repeated or not, and therefore, I assumed that repetition of letters is NOT allowed and this is my working:
Vowels : {A, O, E}
Consonants: {G, R, N, D, M, T, H, R}
For first and last letter, we have only two choices: G and R
For remaining 9 letters in between, we have 9P9 choices.
Therefore, total possible words= 9P9= 362880
I would like to know if my approach and answer are correct. Also, what should I assume in regard to repetition of letters, incase questions are ambiguous as the above one?
Thanks!
| As you say, the problem is not particularly clear. What you have is correct if you are supposed to use all the letters: once you fix the G and R at the start and end, all the $9$ internal letters are different, so it is just the number of ways to permute them.
However, another possible interpretation is that you have to use letters from "GRANDMOTHER", but not necessarily all of them, so e.g. "GOMER" is an acceptable word.
In general I would assume that you are allowed to use at most as many copies of a given letter as there are in the original. This is especially the case here where there are two Rs and you are presumably allowed to use both - if letters couldn't be repeated, or if any letter can be used as many times as you want, there would be no need to give a word with repeated letters in the first place. Also, in this case there is no restriction given on the length of the words, so if you were allowed to repeat letters ad lib there would be infinitely many possibilities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Approach the covariance matrix I have this problem:
Let the random vector $\binom{X}{Y}\sim
N_{2}(\binom{0}{0},\bigl(\begin{smallmatrix} 1 & \rho\\ \rho & 1 \end{smallmatrix}\bigr))$
1) Find the distribution of $Z=X+Y$.
2) Find the distribution of $W=X^2$, the mean and the variance.
3) Calculate $Cov(X,W)$ and $Cov(Z,W)$.
It's the first time that I have found myself analysing this type of function. What the covariance matrix involves? How do I manage it? Thanks in advance for any clarification!
| It is given that $X$ and $Y$ are jointly normal, $EX=EY=0$, $EX^{2}=EY^{2}=1$ and $cov(X,Y)=\rho$ which gives $EXY=\rho$.
$X+Y$ is normal with mean $0$ and its variance is $E(X+Y)^{2}=EX^{2}+EY^{2}+2EXY=1+1+2\rho$.
$P(W \leq w)=P(-\sqrt w \leq X \leq \sqrt w)=\int_{-\sqrt w} ^{\sqrt w} f(x)\,dx$ where $f$ is the standard normal density.
$cov(X,W)=EX^{3}-EXEX^{2}=EX^{3}=0$ by symmetry of standard normal distribution. I will leave the calculation of $cov (Z,W)$ to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3533935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
An expression for $\lim_{n\to\infty}\frac1{2^n}\left(1 + x^{1/n}\right)^n$
I am looking for a closed form answer to the limit ($x<1$): $$\lim_{n\to\infty}\frac1{2^n}\left(1 + x^{1/n}\right)^n$$
For context, I was studying weighted averages and considered $$\left(\frac12(x^{1/n} + y^{1/n})\right)^{n}$$ to be a good way to weight averages in favour of the lower number (similar to root mean squared, but kinda reversed). I was then studying what happens for different values of $n$. For $x=3$, $y=5$, I found that this limit seems to converge to $3.78962712197\dots$ but I do not recognise where this number comes from. Rearranging the above average formula gives $$\frac{y}{2^n}\left(1+\left(\frac xy\right)^{1/n}\right)^n$$ and this is what inspired the question. I see it looks similar to some kind of exponential, but it isn't quite there. I also tried exponentiating and logging the whole expression to bring down the $n$, but I didn't know how to deal with the power inside the brackets then. My main issue is that since $1/n$ goes to $0$, the thing that's raised to this power goes to $1$ (and so is not small) so series expansions can't be used.
Thanks!
| This is a bit of a hand-wavy argument but I thought it was fun so I'll share it. Let $f_n(x)=\dfrac{\left(1+x^{\frac{1}{n}}\right)^{n}}{2^{n}}$, and assume $\lim\limits_{n\to\infty}f_n(x)=f(x)$ exists, and that the limit of derivatives, $\lim\limits_{n\to\infty}f_n'(x)$, exists and is equal to $f'(x)$. Then,
$$\begin{align}f_n'(x)&=\underbrace{\frac{(1+x^{\frac{1}{n}})^n}{2^n}}_{f_n(x)}\cdot\frac{x^{\frac{1}{n}}}{x\left(1+x^{\frac{1}{n}}\right)}
\\
\lim_{n\to\infty}f_n'(x)&=\lim_{n\to\infty}f_n(x)\cdot \lim_{n\to\infty}\frac{x^{\frac{1}{n}}}{x\left(1+x^{\frac{1}{n}}\right)}
\\
f'(x)&=\frac{f(x)}{2x}
\\
\int \frac{f'(x)}{f(x)}\ \mathrm{d}x&=\int\frac{1}{2x}\ \mathrm{d}x
\\
\ln |f(x)|&=\frac{1}{2}\ln |x|+C
\\
f(x)&=\pm C_2\sqrt{\left|x\right|}
\end{align}$$
Then, given $f(1)=\lim\limits_{n\to\infty}f_n(1)=1$ and that $f,x\ge 0$, we see $f(x)=\sqrt{x}$.
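The conclusion $f(x)=\sqrt x$ is easy to test numerically; for large $n$ the convergence error behaves roughly like $(\ln x)^2/(8n)$ (a rough estimate, not derived above), so $n=10^6$ already gives several correct digits:

```python
import math

# Numerical check that (1 + x^(1/n))^n / 2^n -> sqrt(x) on (0, 1).
def f_n(x, n):
    return ((1.0 + x ** (1.0 / n)) / 2.0) ** n

results = {x: f_n(x, 10 ** 6) for x in (0.1, 0.5, 0.9)}
for x, approx in results.items():
    print(x, approx, math.sqrt(x))
```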
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Is this alternative representation of $f(x)=xe^x$ as Maclaurin series correct? Let $f(x)=xe^x$. I know
$$e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$$
and so
$$f(x)=x\sum_{n=0}^\infty \frac{x^n}{n!}=x(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...)=x+x^2+\frac{x^3}{2!}+\frac{x^4}{3!}+...=\sum_{n=0}^\infty \frac{x^{n+1}}{n!}$$
But shouldn't this other representation also be correct?
$f'(x)=e^x+xe^x, f''(x)=2e^x+xe^x, f'''(x)=3e^x+xe^x,... \implies f^{(n)}(x)=ne^x+xe^x \implies f^{(n)}(0)=n$
Because I want $f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n$ then
$$f(x)=\sum_{n=0}^\infty \frac{n}{n!}x^n$$
If this other representation is also correct, then how $\sum_{n=0}^\infty \frac{n}{n!}x^n=\sum_{n=0}^\infty \frac{x^{n+1}}{n!}$?
| Indeed $f^{(n)}(0)=n$. To make the connection, note that in the first case the first term is zero, so you have $\sum_{n=1}^\infty \frac{1}{(n-1)!} x^n$, and now you can shift the index to start at $0$ again by replacing $n$ with $n+1$ everywhere. This gives $\sum_{n=0}^\infty \frac{1}{n!} x^{n+1}$ as you expected.
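Both series can be compared numerically; as noted above, the second is just the first with the index shifted, so their partial sums line up one term apart:

```python
import math

# Partial sums of the two series for x e^x; the n = 0 term of the
# second series vanishes, so its N-term sum equals the first series'
# (N-1)-term sum.
def s1(x, N):  # sum_{n=0}^{N-1} x^(n+1) / n!
    return sum(x ** (n + 1) / math.factorial(n) for n in range(N))

def s2(x, N):  # sum_{n=0}^{N-1} (n / n!) x^n
    return sum(n * x ** n / math.factorial(n) for n in range(N))

x = 0.7
print(s1(x, 30), s2(x, 30), x * math.exp(x))  # all essentially equal
```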
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Let $V$ be a vector space. If $U \leq V$, $Z \leq V$, $U \cap Z = 0$ and $Z \simeq V/U$, then $V = U \oplus Z$. Let $V$ be a vector space (possibly of infinite dimension).
I know that given a subspace $U$ of $V$, we can always write $V = U \oplus Z$, where $Z$ is some subspace of $V$ such that $Z \simeq V/U$.
Let $U$ and $Z$ be subspaces of $V$. I also know that, if $V = U \oplus Z$, then $Z \simeq V/U$.
Now I’m trying to prove the following statement: If $U \cap Z = 0$ and $Z \simeq V/U$, then $V = U \oplus Z$. Any help is appreciated.
| The issue here is which map is giving you the isomorphism $Z\simeq V/U$. Let $\pi:V\to V/U$ denote the projection map.
Since every short exact sequence of vector spaces splits, there is a map $i:V/U\to V$ such that its composition with the projection above satisfies $\pi\circ i=id_{V/U}$. This gives you a way to identify $V/U$ with the subspace $im(i)=Z$ of $V$, i.e., an isomorphism $i:V/U\to Z\subset V$ which yields $V= U\oplus Z$.
To detect a failure as in the example of the other answer you'd need to verify if the given isomorphism $f:Z\to V/U$ is compatible with the inclusion maps $j:Z\hookrightarrow V$ and $i: V/U\hookrightarrow V$: your claim holds if and only if $j=i\circ f$, i.e., both spaces are embedded as the same subspace of $V$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many 7-digit phone numbers are possible, assuming that the first digit can’t be a 0 or a 1 I use the multiplication rule.
For the first digit I have 8 choices. For the last 6 digits I have 10 choices for each. So answer is $8 \cdot 10 ^6$.
Is there any other way to solve this problems. I usually gain a lot of insight from solving the problems in different ways. Please write which theorems etc. you have used.
number of 7-digit numbers (including a leading 0): 10,000,000
number of 7-digit numbers with a leading 0 or 1: - 2,000,000
number of 7-digit numbers not led by 0 or 1:       8,000,000
This is just taking the complement of a set (so a simple form of inclusion-exclusion).
You could also realize all $7$ positions have at least $8$ choices, getting $8^7$; then realize each of the last $6$ positions has $2$ more choices (with the first still having $8=2^3$), and have fun adding up all $64=2^6$ combinations of positions that use the extra options, since $8\cdot 10^6=8\cdot(8+2)^6$ expands into those terms. This is more a property of the power set, which relates to combinations, as the total number of combinations of all sizes is the number of distinguishable states, which includes all subsets. (Also tedious.)
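Brute force also confirms the count on a shrunk analog (3 digits instead of 7, so full enumeration is fast):

```python
from itertools import product

# Brute-force count on a 3-digit analog: strings over 0-9 whose first
# digit is not 0 or 1.
count = sum(1 for d in product(range(10), repeat=3) if d[0] not in (0, 1))
print(count)                   # 800
print(8 * 10 ** 2)             # multiplication rule
print(10 ** 3 - 2 * 10 ** 2)   # complement count, as in the answer
```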
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Summation of $\log\left(\frac i2\right)$ Could someone just briefly explain why this summation is true?
$$\sum_{i = 1}^n\log\left(\frac i2\right) = \frac n2(\log n - 1)$$
I'm having a hard time wrapping my head around this. Any help is appreciated, thanks.
| $$\sum_{i=1}^n \log(i/2) = \log \prod_{i=1}^n (i/2) = \log \frac{n!}{2^n} = \log(n!) - n \log 2.$$
Stirling's approximation (applied somewhat crudely) gives $\log(n!) \approx n \log n - n$, so
$$\sum_{i=1}^n \log(i/2) \approx n \log n - (1 + \log 2) n = n(\log n - (1+\log 2)).$$
If the original quantity is instead slightly different (the expression in your original screenshot is ambiguous), then
$$\frac{1}{2}\sum_{i=1}^n \log i = \frac{1}{2} \log (n!) \approx \frac{1}{2} (n \log n - n) = \frac{n}{2} (\log n - 1).$$
Again, both of these are approximations (with an error term on the order of $\log n$), not equalities.
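A quick numeric check separates the exact identity from the Stirling approximation; the gap is a few units for $n=1000$, consistent with the $O(\log n)$ error term mentioned above:

```python
import math

# The identity sum log(i/2) = log(n!) - n log 2 is exact; Stirling's
# approximation then leaves an O(log n) gap.
n = 1000
s = sum(math.log(i / 2) for i in range(1, n + 1))
exact = math.lgamma(n + 1) - n * math.log(2)      # log(n!) - n log 2
approx = n * (math.log(n) - (1 + math.log(2)))    # Stirling version
print(s - exact)   # ~0 (floating-point noise)
print(s - approx)  # a few units, order log n
```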
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a standard name for the function $f_m(x,y) = mx + y$? When formalizing the base-$m$ positional numeral system, the function $$f_m : \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$$ $$x,y \mapsto mx + y$$ is extremely useful. For example, observe that $$365 = f_{10}(f_{10}(3,6),5).$$
Question. Is there an accepted name for the function $f_m$ either in general, or else in the the case $m=2$, or in the case $m = 10$, or else in the case $m=2^{64}$?
Motivation. I have plans to use the function $f_{2^{64}}$ in the context of a term-rewriting system that's designed to implement basic arithmetic on a standard $64$-bit computer. More generally, the idea is to specify a whole family of term-rewriting systems based on $f_m$, one for each possible value of $m$. By choosing $m := 2^n$, where $n$ is the native word size of the target architecture, it should be possible to obtain a system that implements arithmetic with large integers relatively efficiently, and whose correctness is extremely easy to prove (because every step involves rewriting some terms based on accepted and provably correct principles of mathematics.) To do this, I have to pick a name for this function, and I guess it's better to use the standard name if there is one.
| The inverse of the function you gave, $$\mathbb{N} \to \mathbb{N} \times \mathbb{N},\qquad a \mapsto (b,r) \text{ such that } a=bm + r \text{ and } 0\le r<m,$$
is called Euclidean division, so logically your function should be called Euclidean multiplication. Though I've never heard that term used.
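For what it's worth, here is the function and its Euclidean-division inverse in code, including the $365$ example from the question and a sketch of the intended limb-based use (the $2^{64}$ radix and the limb values are illustrative assumptions):

```python
# f_m(x, y) = m*x + y, the shift-and-add step of base-m notation;
# divmod is its (Euclidean division) inverse when 0 <= y < m.
def f(m, x, y):
    return m * x + y

assert f(10, f(10, 3, 6), 5) == 365   # digits 3, 6, 5 in base 10
assert divmod(365, 10) == (36, 5)     # peel off the last digit

# Sketch of the intended big-integer use: fold 64-bit limbs, most
# significant first (the limb values here are illustrative).
m = 2 ** 64
limbs = [1, 2, 3]
acc = 0
for limb in limbs:
    acc = f(m, acc, limb)
print(acc == 1 * m ** 2 + 2 * m + 3)  # True
```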
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $H,K$ are subgroups of $G$ s.t. $o(H), o(K)$ are relativily prime $\implies H \cap K = \{ e \}$.
If $H,K$ are subgroups of $G$ s.t. $o(H), o(K)$ are relativily prime
$\implies H \cap K = \{ e \}$.
Here $o(H)$ means the order of $H$.
Below is my attempt but I am afraid I might be jumping some hoops here:
Suppose $a \in H \cap K$ is an arbitrary element. Then we know $o(a)$ must divide $o(H)$ and $o(K)$.
Then $\exists m,n \in \mathbb{N}$ s.t. $m \cdot o(a) = o(H)$ and $n \cdot o(a) = o(K)$.
But $o(H)$ and $o(K)$ are relatively prime, so for
$m \cdot o(a)$ and $n \cdot o(a)$ to be relatively prime, $o(a)$ must be equal to $1$, which means $a=e$, the identity element.
$\Box$
*
*Am I on the right track?
*Now my problem is with the last statement. How do I effectively argue the last statement?
*Any alternative proof?
| Note that $H\cap K$ is a group, and moreover a subgroup of both $H$ and $K$ (since the intersection of two subgroups of the same group is again a subgroup). So $o(H\cap K)\mid o(H)$ and $o(H\cap K)\mid o(K)$; but since $o(K)$ and $o(H)$ are coprime, $1$ is their only common divisor.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3534910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why do we need branch cuts? [Continuity and analyticity on branch cuts] Why do we need branch cuts in complex analysis? It is just my guess, but a complex function can itself be a multi-valued function, so to define a one-valued function we take a branch cut, right?
Also, what about continuity and analyticity on the branch cut for a function $f(z)$, $z \in \mathbb{C}$? Is $f(z)$ always both discontinuous and non-analytic on the branch cut and at the branch point? (The example below is the reason I thought that.)
e.g.) $f(z) = z^{1\over2}$ for $D = \{z \mid -{3\pi\over2} \leq \arg(z) \lt {\pi\over 2} \}$ (it is not analytic on $\{z \mid \arg(z) = -{3\pi\over2} \}$)
Also: does an entire function have no branch cut?
| You are right: "to define a one-valued function". A branch cut has to start at a branch point and then follow an arbitrary path to infinity. In this way the Riemann surface is cut into Riemann sheets. On a given sheet the function is discontinuous and non-analytic on the cut. Of course, the function remains analytic if you go from one sheet to the next one. It has to be non-analytic at the branch point because the Taylor expansion (as a criterion for analyticity) cannot take the "multiple-valued" character of the function into account; a Taylor expansion is always single-valued. The same argument applies to the cut: imagine the function on a single Riemann sheet were analytic at the cut. How would it be continued to a different sheet? There would be only one sheet and no cut.
By definition an entire function has neither a branch cut nor a branch point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How many numbers are required to define a sequence without stating a rule/function for generating the next term in the sequence? I'm wondering if there is some minimum number of numbers required to define a sequence, without explicitly stating the rule that generates the next term in the sequence. For instance if I write $(1,a_2,a_3,...)$, and hide the remaining numbers in the sequence behind $(a_2,a_3,...)$, we don't know what the sequence is or what rules define it. If I then write $(1,2,a_3,...)$, it still isn't clear. Is the rule for determining the next number in the sequence $a_{i+1}=2 a_i$? Is it $a_{i+1}=a_i+1$?
If I write $(1,2,4,8,16)$, it's clear the rule is $a_{i+1}=2a_i=2^{i-1}$. Could I even shorten this to $(1,2,4,...)$ and figure this out? Is this an example of the minimum number of numbers required to define the sequence of powers of $2$. As J.W. Tanner says in the comments, you can come up with a polynomial whose first terms are $1,2,4,8,16,23$, so apparently not.
How about the Fibonacci sequence? I think it's clear what the rule is if I write $(0,1,1,2,3,5,8,...)$, even if I hadn't learned of this sequence before. I can't learn anything from $(0,1)$. What about $(0,1,1)$? It's hard to decide if I can learn the rule from this or if I need more numbers from the sequence. Typically you would just say $a_0=0,a_1=1,$ and $a_{i} = a_{i-1} + a_{i-2}$ for $i>1$. But that defeats the point of the question. The point is to ask how many numbers we need in order to define/learn the sequence without explicitly stating the rule that generates the next term in the sequence, and writing $a_{i} = a_{i-1} + a_{i-2}$ is explicitly stating the rule.
How does this idea generalise?
| Consider the sequence $1,1,2,3.$ At first look, it seems to be the first terms of the Fibonacci numbers, but it is not true that the only sequence starting with $1,1,2,3$ is the Fibonacci sequence.
Here we can say that these are the first terms of triangle read by rows in which row n lists A000041(n-1) 1's followed by the list of juxtaposed lexicographically ordered partitions of n that do not contain 1 as a part.
Or we can say these are the numbers of rooted trees with n vertices in which vertices at the same level have the same degree.
Or these are the terms generated by $a_n=\lfloor 3^n / 2^n\rfloor$.
In your example the terms of $1,2,4,8,16$ are not necessarily generated by $a_n=2^n$
For example we can say these are the coefficients in the expansion of $$\frac{1-x}{1-2x}$$ in powers of $x$.
Or these are the numbers of positive divisors of $n!$.
Or these are Pentanacci Numbers.
For more information, look at oeis.org.
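The point about $1,2,4,8,16$ not pinning down a rule can also be checked directly. Below is my own illustration (the `interpolate` helper is a name I made up): exact Lagrange interpolation produces a degree-$5$ polynomial that matches $1,2,4,8,16$ and then takes any sixth value we like, such as the $23$ mentioned in the question.

```python
from fractions import Fraction

def interpolate(points):
    """Return p(x) for the Lagrange polynomial through the given points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# Force the first five values 1, 2, 4, 8, 16 and an arbitrary sixth value 23.
p = interpolate([(1, 1), (2, 2), (3, 4), (4, 8), (5, 16), (6, 23)])
print([int(p(n)) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 23]
```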
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
A question about the proof of Godel's Incompleteness Theorem The article A Computability Proof of Gödel’s First Incompleteness Theorem, by Jørgen Veisdal on Cantor's Paradise contains the following passage:
The second property regards the complement of a set $E$, that is, all the strings which are not in set $E$. First, notice that if $E$ is decidable, so is the complement of $E$ (we can construct a set $F$ of all the strings that are shown to not belong to $E$). As such, if the set $E$ can be constructed by a mechanical process (is computably enumerable), so too must its complement.
If a set $E$ is enumerable, why does its complement also have to be enumerable? What if its complement is infinite? Moreover, if a set $E$ is decidable, why does its complement have to be decidable?
| Decidable means there is a decision procedure for determining whether an element is in the set or not. By symmetry, this holds for the complement too: the same procedure decides whether an element is in the complement.
However, it is false that the complement of a computably enumerable set is always computably enumerable. In fact, this failure is central to Gödel's theorem: one example of such a set is the set of all Gödel numbers of theorems. (Another famous one is the halting set.)
If a set $S$ is computably enumerable and so is its complement, then it is decidable, since we can decide whether $x$ is in $S$ by enumerating both $S$ and its complement and seeing which list $x$ appears in. The converse is clearly true as well, so one characterization of the decidable sets is: those that are c.e. and whose complements are also c.e.
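The dovetailing argument in the last paragraph can be sketched concretely. Below is my own toy illustration (the sets and function names are my invention): `S` and its complement are each given only by an enumerator — here, the even and the odd numbers — and membership is decided by interleaving the two enumerations, which always halts because every input eventually shows up in exactly one list.

```python
from itertools import count

def enum_S():               # computable enumeration of S (here: even numbers)
    for n in count():
        yield 2 * n

def enum_complement():      # computable enumeration of the complement (odd numbers)
    for n in count():
        yield 2 * n + 1

def decide(x):
    """Decide x in S by dovetailing both enumerations.
    Halts for every nonnegative integer x, since x appears in exactly one list."""
    a, b = enum_S(), enum_complement()
    while True:
        if next(a) == x:
            return True
        if next(b) == x:
            return False

print(decide(4), decide(7))  # True False
```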
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How strongly does $\int_1^X \frac{\exp(B/\sqrt x)}{\sqrt x} dx$ depend on $B$? As part of analysing an algorithm I stumbled upon the integral $\displaystyle \int_1^X \frac{\exp(B/\sqrt x)}{\sqrt x} dx$ as an approximation for the corresponding sum. Wolfram alpha gives the antiderivative in terms of non-elementary functions. This seems more trouble than it's worth since I'm only interested in a readable bound for the integral.
One very poor bound comes from rounding the exponential up to $e^B$ and get $\displaystyle e^B \int_1^X \frac{dx}{\sqrt x} = 2e^B(\sqrt X - 1) = O(e^B \sqrt X) $.
One better bound comes from breaking the integral into two parts like this:
$$\displaystyle \int_1^X \frac{\exp(B/\sqrt x)}{\sqrt x} dx= \int_1^{B^2}\frac{\exp(B/\sqrt x)}{\sqrt x} dx + \int_{B^2}^X \frac{\exp(B/\sqrt x)}{\sqrt x} dx$$
For the first part do the same rounding up to get
$$ \int_1^{B^2}\frac{\exp(B/\sqrt x)}{\sqrt x}\le e^B \int_1^{B^2}\frac{dx}{\sqrt x} \le 2e^B \sqrt {B^2} = 2B e^B.$$
For the second part the denominator is small and we get
$$ \int_{B^2}^X \frac{\exp(B/\sqrt x)}{\sqrt x} dx \le \int_{B^2}^X \frac{e}{\sqrt x} dx = 2 e (\sqrt X - \sqrt {B^2}) \le 2e \sqrt X.$$
Putting it back together, we can bound the original integral by $2Be^B + 2e \sqrt X$. We still have the exponential term, but now it is a constant, not multiplied by $\sqrt X$.
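As a numerical sanity check (my own script, not part of the analysis above), a midpoint-rule approximation of the integral stays below the bound $2Be^B + 2e\sqrt X$ for a few sample parameter pairs:

```python
from math import exp, sqrt, e

def integrand(x, B):
    return exp(B / sqrt(x)) / sqrt(x)

def midpoint_integral(B, X, steps=200_000):
    # Midpoint rule on [1, X]; the integrand is smooth, so this is accurate enough.
    h = (X - 1) / steps
    return h * sum(integrand(1 + (k + 0.5) * h, B) for k in range(steps))

for B, X in [(1, 10), (3, 1000), (5, 10_000)]:
    approx = midpoint_integral(B, X)
    bound = 2 * B * exp(B) + 2 * e * sqrt(X)
    assert approx < bound
    print(f"B={B}, X={X}: integral ~ {approx:.2f}, bound = {bound:.2f}")
```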
I wonder: can we do any better than this? Can we replace the dependence on $B$ with something weaker?
| I do not know if this will answer your question.
Effectively, after one integration by parts, we end up with
$$I=\int_1^X\frac{e^{\frac{B}{\sqrt{x}}}}{\sqrt{x}}\, dx=-2 \left(B \,\text{Ei}\left(\frac{B}{\sqrt{X}}\right)-B\, \text{Ei}(B)-\sqrt{X}
\, e^{\frac{B}{\sqrt{X}}}+e^B\right)$$
What is doable is to expand as a series around $B=0$; this would give
$$I=2\left( \sqrt{X}-1\right)+B\, \log (X)+
\left(1-\frac{1}{\sqrt{X}}\right)B^2+O\left(B^3\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Parameter $d$ that makes the probability of the graph $G(n,\frac{d}{n})$ being $k$-colorable tend to $0$ as $n \to \infty$, for $k \geq 2$. This is an exercise that I'm doing.
Let $\epsilon > 0$ and $d > 0$ be fixed. Prove that for $k \geq 2$, if $d \geq (1 + \epsilon)2k(\log k + 1)$, then $\lim\limits_{n \to \infty} \mathbb{P}\left[\text{G}\left(n,\frac{d}{n}\right) \text{ is $k$-colorable} \right] = 0$.
I'm not sure I understand the statement. Wouldn't it be the case that as $n \to \infty$, $\frac{d}{n} \to 0$, the graph becomes more sparse, and thus easier to be properly $k$-colored? I'd like some hint if possible.
| Whether the graph becomes "more sparse" or not is a question of how you measure sparseness; certainly the fraction $\frac{\text{total edges}}{\text{possible edges}}$ goes to $0$, but on the other hand, the average degree remains fixed at $d$. (Well, almost; if we were pickier, we'd set the edge probability to $\frac{d}{n-1}$, but asymptotically that's the same.)
The reason that the random graph has high chromatic number is the lower bound $$\chi(G) \ge \frac{n}{\alpha(G)}$$ which comes from the fact that every color class is an independent set, and therefore contains at most $\alpha(G)$ vertices. To color all $n$ vertices, we need at least $\frac{n}{\alpha(G)}$ colors.
Show that with high probability, $G(n, \frac dn)$ has no independent set of size $\frac nk$, and you'll have shown that with high probability, it's not $k$-colorable.
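The bound $\chi(G)\ge n/\alpha(G)$ is easy to verify by brute force on a toy graph. Below is my own check (using the $5$-cycle, where $\alpha=2$ and $\chi=3$); both invariants are computed exhaustively:

```python
from itertools import combinations, product

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # the 5-cycle C_5

def independent(S):
    return all(not (u in S and v in S) for u, v in edges)

# Largest independent set, by checking every vertex subset.
alpha = max(
    len(S)
    for r in range(n + 1)
    for S in map(set, combinations(range(n), r))
    if independent(S)
)

def colorable(k):
    # Try every assignment of k colors to the n vertices.
    return any(all(c[u] != c[v] for u, v in edges) for c in product(range(k), repeat=n))

chi = next(k for k in range(1, n + 1) if colorable(k))

assert chi >= n / alpha          # 3 >= 5/2
print(alpha, chi)                # 2 3
```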
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does population growth relate to $e$? Say that there is a population of $10,000$. From year $0$ to year $1$, we know that the population has grown to $20,000$. How might I model this growth?
It clearly isn't sensible to say that the population suddenly 'jumped' from $10,000$ to $20,000$ as the year drew to a close. A better model might be to split the year up into $365$ segments, where on each day the population is multiplied by $\sqrt[365]{2}$ or approximately $1.0019$. However, even this feels unsatisfying, as the population would have grown during that day, which would have (very slightly) affected the rate of growth. The discrete model used in this example does not take into account the fact that the population would be slightly higher in the afternoon compared to the morning, which (at least in theory) would have an effect on population growth. If I use a continuous model, then I expect to see $e$ crop up. However, I can't seem to find it. Why is this? And, more generally, how does continuous modelling work—is it premised on splitting the year up into $n$ segments, where $n \to \infty$? (An answer that does not use university-level maths would be appreciated, but I understand if this is not possible.)
| Your expectation is not unfounded.
We would model it as such: let $f(t)$ be the population at time $t$.
Then $f(0) = 10000$ and $f(1) = 20000$.
We implicitly assume that the growth of the population is proportional to the population, and moreover that this proportion is constant in time.
This would translate symbolically into
$$f'(t) = cf(t) \tag{$*$},$$
where $c$ is the constant of proportionality, to be determined.
The solution to the differential equation $(*)$ is
$$f(t) = ae^{ct},$$
where $a$ is a constant to be determined.
From $f(0) = a$, we find that $a = 10000$.
For $c$, we compute
$$f(1) = 10000e^c = 20000 \implies e^c = 2 \implies c = \log 2.$$
Wrapping up, we find that the function that continuously describes the growth of the population through time is
$${f(t) = 10000\cdot 2^t}.$$
$e$ need not show up in the final answer, but it's intrinsically tied to the solution of $(*)$ which is the heart of our problem's modeling.
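A quick numerical check (my own, not part of the derivation) confirms both boundary values and the proportional-growth property $f'(t) = (\log 2)\,f(t)$, which is where $e$ hides inside $2^t = e^{t\log 2}$:

```python
from math import log, isclose

def f(t):
    return 10000 * 2 ** t

assert f(0) == 10000 and f(1) == 20000

# Central-difference approximation of f'(t) matches (log 2) * f(t).
h = 1e-6
for t in [0.0, 0.5, 1.0]:
    deriv = (f(t + h) - f(t - h)) / (2 * h)
    assert isclose(deriv, log(2) * f(t), rel_tol=1e-6)
```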
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's the best way to solve $1 = A(x^2+1) + (Bx+C)(x+1)$ What's the best way to solve $$1 = A(x^2+1) + (Bx+C)(x+1)$$
I let $x=-1$ and got $A=\frac{1}{2}$
But what substitution is ideal to find $B$ and $C$?
This gets messy quick, I think.
Instead, I started over, and I grouped like terms
$$1 = Ax^2+A+Bx^2+Bx+Cx+C$$
This led to $$A+B=0$$ $$B+C=0$$ $$A+C=1$$
This led to $B=-\frac{1}{2}$
$C=\frac{1}{2}$
and solved the system using my $A=\frac12$.
But, I did not like that I had to mix techniques.
| At each step take advantage of your previous findings.$$1 = A(x^2+1) + (Bx+C)(x+1)$$
$$x=-1 \implies A=\frac {1}{2}$$
$$ x=0 \implies A+C=1 $$
Thus $$C =1/2$$
$$x=1 \implies 2A+2(B+C)=1$$
Thus $$B+C=0 \implies B=-\frac {1}{2}$$
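With exact rational arithmetic one can confirm that $A=\frac12$, $B=-\frac12$, $C=\frac12$ satisfy the identity for every $x$ (my own check; since both sides are polynomials of degree at most $2$, agreement at three or more points already proves the identity):

```python
from fractions import Fraction as F

A, B, C = F(1, 2), F(-1, 2), F(1, 2)

# Check the identity 1 = A(x^2 + 1) + (Bx + C)(x + 1) at many integer points.
for x in range(-10, 11):
    assert A * (x * x + 1) + (B * x + C) * (x + 1) == 1
```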
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3535890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the number of ways you can invite $3$ of your friends on $5$ consecutive days. Find the number of ways you can invite $3$ of your friends on $5$ consecutive days, exactly one friend a day, such that no friend is invited on more than two days.
My approach: Let $d_A,d_B$ and $d_C$ denote the total number of days $A, B$ and $C$ were invited respectively. According to the question we must have $0\le d_A,d_B,d_C\le 2.$ Also, we must have $$d_A+d_B+d_C=5.$$
Now let $d_A+c_A=2, d_B+c_B=2, d_C+c_C=2,$ for some $c_A, c_B, c_C\ge 0$.
This implies that $c_A+c_B+c_C=1$.
Therefore the problem translates to finding the number of non-negative integer solutions to the equation $$c_A+c_B+c_C=1.$$
By the stars and bars method the total number of required solutions is equal to $$\dbinom{1+3-1}{3-1}=3.$$
But the number of ways to invite the friends will be higher than this, since the friends are distinguishable and we have assumed them to be indistinguishable while applying the stars and bars method.
How to proceed after this?
| The fact that the only way to achieve this is by inviting one friend over once and the other two twice will make this problem simpler.
How many ways are there to pick the one friend (from $3$) that will be only visiting one day instead of two?
Now if we assume the days are Monday through Friday, how many ways are there to pick the day that the one-day friend will visit?
Finally, for the remaining four days, how many ways are there to pick two of them? This will be the number of ways to arrange the remaining two friends.
Now multiply the three of these numbers together for the final answer
A side proof as to why we must have one friend on one day and the other two on $2$ days each: we obviously can't have any friend arrive on three or more of the five days, as given in the question. If all three come twice, this would require $6$ days, so this isn't possible either. And if two friends come only one day each, then (with the third coming at most twice) we can't fill all $5$ days.
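A brute-force count over all $3^5$ daily assignments (my own verification) agrees with the product $3\cdot5\cdot\binom42$ described above:

```python
from itertools import product
from math import comb

count = sum(
    1
    for days in product("ABC", repeat=5)            # who is invited on each day
    if all(days.count(f) <= 2 for f in "ABC")       # no friend on more than two days
)

assert count == 3 * 5 * comb(4, 2) == 90
print(count)  # 90
```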
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove that $G=$SL$(2, \mathbb{F}_5)$ is an extension of $\mathbb{Z}_2$ by $A_5$ which is not a semidirect product. Question:
Prove that $G=$SL$(2, \mathbb{F}_5)$ is an extension of $\mathbb{Z}_2$ by $A_5$ which is not a semidirect product.
(This is a question from Rotman's Advanced Modern Algebra which I am trying to self-learn.)
I guess that we can send $\mathbb{Z}_2$ to
$$K = \{
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}
,
\begin{bmatrix}
-1 & 0 \\
0 & -1 \\
\end{bmatrix}
\} $$
which is normal in $G$. and then I read online that $G/K$ is isomorphic to $PSL(2, \mathbb{F}_5)$ which is isomorphic to $A_5$.
So I can create the short exact sequence
$$ 1 \to \mathbb{Z}_2 \xrightarrow{\;i\;} G \xrightarrow{\;p\;} A_5 \to 1$$
Does this suffice to show that $G$ is an extension of $\mathbb{Z}_2$ by $A_5$?
Next, I want to show that this is not a semidirect product, which means I need to show that the extension is not split, which means I need to show there does not exist any homomorphism $j :A_5 \to G$ such that $pj$ is the identity on $A_5$. But how do I do this? Is it not possible to use the isomorphism $G/K \cong A_5$ to construct some homomorphism?
| Your work is totally fine.
Now, for your last question: if the sequence were split, then $A_5$ would be isomorphic to a subgroup of index $2$ of $G$, and you would get a surjective morphism $G\to \mathbb{Z}/2\mathbb{Z}$ with kernel isomorphic to $A_5$.
Now since $\mathbb{Z}/2\mathbb{Z}$ is abelian, this morphism sends every element of $[G,G]$ to $0$ (because every commutator is mapped to $0$). It is a well-known fact that $[SL_n(K),SL_n(K)]=SL_n(K)$ except when $n=2$ and $K=\mathbb{F}_2$ or $\mathbb{F}_3$. Using this classical fact, you see that your morphism is trivial, and you get a contradiction.
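The orders involved can be verified by brute force over the $5^4$ candidate matrices (my own script; a matrix is represented as a $4$-tuple $(a,b,c,d)$ of entries mod $5$):

```python
from itertools import product

p = 5
# All of SL(2, F_5): 2x2 matrices over F_5 with determinant 1.
G = [m for m in product(range(p), repeat=4) if (m[0] * m[3] - m[1] * m[2]) % p == 1]

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

center = [z for z in G if all(mul(z, g) == mul(g, z) for g in G)]

assert len(G) == 120                                   # |SL(2, F_5)|
assert sorted(center) == [(1, 0, 0, 1), (4, 0, 0, 4)]  # K = {I, -I}
assert len(G) // len(center) == 60                     # |PSL(2, F_5)| = |A_5|
```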
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
When does $2n-1$ divide $16(n^2-n-1)^2$? Find all integers $n$ such that
$\dfrac{16(n^2-n-1)^2}{2n-1}$
is an integer.
| Starting off with the initial expression:
$$\frac{16(n^2-n-1)^2}{2n-1}$$
Completing the square:
$$=\frac{4^2(n^2-n-1)^2}{2n-1}$$
$$=\frac{(4n^2-4n-4)^2}{2n-1}$$
$$=\frac{((2n-1)^2-5)^2}{2n-1}$$
Some manipulation:
$$=(2n-1)^3(1-\frac{5}{(2n-1)^2})^2$$
Expanding the expression:
$$=(2n-1)^3-10(2n-1)+\frac{25}{2n-1}$$
Since the first two terms are always integers, the expression is an integer exactly when
$$2n-1 \mid 25.$$
The divisors of $25$ are $\pm 1, \pm 5, \pm 25$, and each is odd, so
$$n=\frac{1+d}{2},\qquad d\in\{\pm1,\pm5,\pm25\},$$
giving
$$n=1,\,0,\,3,\,-2,\,13,\,-12.$$
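A brute-force scan over a large range (my own check) confirms that these are the only integer solutions:

```python
# n such that (2n - 1) divides 16(n^2 - n - 1)^2; Python's % handles negative moduli.
solutions = [n for n in range(-1000, 1001)
             if (16 * (n * n - n - 1) ** 2) % (2 * n - 1) == 0]

assert solutions == [-12, -2, 0, 1, 3, 13]
print(solutions)
```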
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
combinatorics, you can win prizes from different categories but not from the same one. The question goes like this:
There is a conference for psychology in which $12$ researchers are participating. In the conference, two different companies are giving out prizes in two different categories. Three researchers will receive an award for "the most innovative research" (TV, DVD, or a radio), while two researchers will receive an award "the best presentation" (cash prize overall 10,000 dollars). If you know that every researcher can win in more than one category but can't win more than one prize in each category, how many different ways are there to divide the prizes?
choose one:
A) $14520$
B) $87120$
C) $95040$
D) $174240$
What I did was say each one of the researchers can win one prize of the first category, that's $C(12,3)$, then each one of the researchers can also win the second category, which is $C(12,2)$, which got me to $14520$, but apparently the answer is $87120$.
| The first three prizes are distinct (TV, DVD, radio), so we use permutations instead of combinations. There are $P(12,3)=1320$ ways to give out the first three prizes.
The prizes in the second category are identical, so a combination is right here. There are $C(12,2)=66$ ways to give out the second prizes.
Finally, there are $P(12,3)\times C(12,2)=87120$ ways to give out all the prizes.
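Python's `math.perm` and `math.comb` (available in the standard library since Python 3.8) confirm the count:

```python
from math import perm, comb

assert perm(12, 3) == 1320      # ordered: three distinct prizes (TV, DVD, radio)
assert comb(12, 2) == 66        # unordered: two identical cash prizes
assert perm(12, 3) * comb(12, 2) == 87120
```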
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Quadratic function with roots in $[0,1]$. Prove that $f(0) \geq \frac49$ or $f(1) \geq \frac49$ Let $a,b$ in $[0,1]$ be such that the polynomial $f(x) = (x-a)(x-b)$ satisfies $f(\tfrac12) \geq \frac1{36}$.
I have found a quite complicated proof of the following inequality using calculus:
$$f(0) \geq \frac49 \quad\text{or}\quad f(1) \geq \frac49.$$
Can you find a simple argument? (with geometric flavour if possible)
| I'm gonna prove this algebraically. First some observations:
$$f\left(\frac{1}{2}\right) \geq \frac{1}{36}\Leftrightarrow 2ab-(a+b)+\frac{4}{9} \geq 0 \ \ \ \ \ \ \ (1)$$
$$f(0) \geq \frac{4}{9} \Leftrightarrow ab \geq \frac{4}{9}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$$
$$f(1) \geq \frac{4}{9} \Leftrightarrow ab-a-b + \frac{5}{9} \geq 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (3)$$
Assume for the sake of contradiction that both $(2)$ and $(3)$ are false; that means:
$$ab < \frac{4}{9}\text{ and } ab+\frac{5}{9} < a+b$$
Notice that from $(1)$ and AM-GM, we have:
$$0 \leq 2ab-(a+b)+\frac{4}{9} \leq 2ab-2\sqrt{ab}+\frac{4}{9}=2\left(\sqrt{ab}-\frac{1}{3}\right)\left(\sqrt{ab}-\frac{2}{3}\right)$$
However, since $ab < \dfrac{4}{9}$, we have $\sqrt{ab} < \dfrac{2}{3}$, so the second factor is negative. For the product to be nonnegative, the first factor must be nonpositive, i.e. $\sqrt{ab} \leq \dfrac{1}{3}$. But combining the negation of $(3)$ with $(1)$, we get:
$$ab+\frac{5}{9}< a+b \leq 2ab+\frac{4}{9}\Rightarrow ab > \frac{1}{9}\Rightarrow \sqrt{ab} > \frac{1}{3}$$
a contradiction. Therefore, at least one of $(2)$ or $(3)$ is true.
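An exhaustive search over a rational grid (my own numerical illustration, not a proof) finds no pair $(a,b)$ violating the claim; exact `Fraction` arithmetic avoids any floating-point edge cases, including the tight case $a=b=\frac13$, where $f(1)=\frac49$ exactly:

```python
from fractions import Fraction as F

N = 120  # grid includes the tight point a = b = 1/3 (i = 40)
for i in range(N + 1):
    for j in range(N + 1):
        a, b = F(i, N), F(j, N)
        f = lambda x: (x - a) * (x - b)
        if f(F(1, 2)) >= F(1, 36):
            assert f(0) >= F(4, 9) or f(1) >= F(4, 9)
```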
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Probability of Broken Computers A school orders 7 new computers for a classroom, but they are told 3 will not work properly when they receive them. The school begins to turn on each computer, one by one, to figure out which computers do not work.
There are quite a few questions; most of them I am just having trouble setting up, and I feel they are mostly similar in setup.
What is the probability that no more than five computers need to be turned on to find the three computers that don't work?
For this question, $N$ is the event that the computer does not work and $W$ means it works (with a multiplier counting the orderings of each pattern):
\begin{align}&\color{white}=P(NNN)+P(NWNN)+P(NWWNN)\\&=\frac{3}{7}\cdot\frac{2}{6}\cdot\frac{1}{5}+3\cdot\frac{3}{7}\cdot\frac{2}{6}\cdot\frac{4}{5}\cdot\frac{1}{4}+6\cdot\frac{3}{7}\cdot\frac{2}{6}\cdot\frac{4}{5}\cdot\frac{3}{4}\cdot\frac{1}{3}\\
&=\frac{1}{35}+\frac{3}{35}+\frac{6}{35}=\frac{2}{7}\end{align}
The next couple of questions are very similar; I just don't know how to set them up. They don't specify the exact order in which the computers are picked, only that the broken computers turn up in certain possible spots. These are the remaining questions; if somebody can explain how to set them up, that would be appreciated.
Given that exactly one of the computers not working was found within the first three computers, what is the probability that the other two computers that aren't wokring are found within the next three computers turned on?
Given that exactly two of the computers that don't work were found within the first three computers, what is the probability the last computer that doesn't work is found within the next two computers turning on?
Given that exactly two of the computers that don't work were found within the computers 1, 3, 5, what is the probability the other computer not working was found on tests 6 or 7?
Given the last computer doesn't work was found within the last two tests, what is the probability that the first two computers that don't work were found within the first three computers?
| For the first question, it helps to consider the opposite event. What is the probability that more than five computers need to be turned on to find the three that do not work? In other words, what is the probability that after knowing the state of five computers, you still don't know the state of the two remaining computers?
Note that it really doesn't matter in which order the first five computers work or do not work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
If $a, b > 0$ and $ b \neq 1$ prove that $\displaystyle {\int_1^b a^{\log_b x} dx > \ln b}$ Now, $a^{\log_b x} = x^{\log_b a},$
therefore $$ {\int_1^b a^{\log_b x}dx} = {\int_1^b x^{\log_b a}dx} = {\frac {ab - 1}{\log_b ab}} = {\frac {ab - 1}{\ln ab - \ln 1} \ln b} = c\,\ln b$$ where, by the mean value theorem applied to $\ln$, $c = \frac{ab-1}{\ln ab} \in (1 , ab)$ when $ab > 1$, or $c \in (ab , 1)$ when $ab < 1$.
Here, I have proved the inequality in the case $c \in (1 , ab)$, since the slope of $\ln x$ at any $x>1$ is less than its slope at $x=1$. But I cannot prove it when $c \in (ab , 1)$.
Is there any other easy approach to this problem?
| Consider the quantity
$$\tag1
\frac{ab-1}{\ln ab},
$$
defined for all $a,b>0$ with $ab\ne1$.
The function $g(t)=t-1-\ln t$ has derivative $g'(t)=1-\tfrac1t$, so $g$ is decreasing on $(0,1)$ and increasing on $(1,\infty)$, with its only critical point, a minimum, at $t=1$. Thus $g(t)\geq g(1)=0$, and the inequality is strict when $t\ne1$; that is,
$$\tag2
t-1>\ln t\qquad(t\ne1).
$$
Dividing $(2)$ by $\ln t$ preserves the direction of the inequality when $\ln t>0$ and reverses it when $\ln t<0$. Hence
$$
\frac{ab-1}{\ln ab}>1\ \text{ if } ab>1,\qquad\frac{ab-1}{\ln ab}<1\ \text{ if } ab<1.
$$
Since the integral equals $\frac{ab-1}{\ln ab}\,\ln b$, the claimed inequality holds whenever $b>1$ and $ab>1$. When $b>1$ but $ab<1$ (for instance, $a$ very small), the first factor is less than $1$ and the claim fails, so a hypothesis such as $ab>1$ is needed. Finally, when $b<1$ and $ab>1$, we have $\ln b<0$, and the inequality that holds is
$$
\int_1^b a^{\log_b x}\,dx < \ln b.
$$
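These cases can be explored numerically (my own script): the closed form $\frac{ab-1}{\ln ab}\ln b$ is checked against a direct midpoint Riemann sum of $x^{\log_b a}$ for a few sample pairs.

```python
from math import log

def closed_form(a, b):
    # Integral of a^(log_b x) = x^(log_b a) from 1 to b equals (ab - 1)/ln(ab) * ln(b).
    return (a * b - 1) / log(a * b) * log(b)

def riemann(a, b, steps=200_000):
    c = log(a) / log(b)                 # the exponent log_b(a)
    h = (b - 1) / steps                 # negative when b < 1: oriented integral
    return h * sum((1 + (k + 0.5) * h) ** c for k in range(steps))

for a, b in [(2.0, 2.0), (0.5, 3.0), (3.0, 0.5)]:
    assert abs(closed_form(a, b) - riemann(a, b)) < 1e-6

assert closed_form(2.0, 2.0) > log(2)   # the case ab > 1, b > 1
```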
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3536917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Dimension of connected components of $O(n)$ The special orthogonal group $SO(n)=O_+(n)$ is the subgroup of the orthogonal group $O(n)$ containing matrices with determinant $+1.$ The other connected component of $O(n)$, call it $O_-(n)$, has matrices with determinant -1.
I always thought $O_+(n)$ and $O_-(n)$ had the same dimension and were in fact diffeomorphic. However, the following consideration shook this belief when $n$ is even.
The eigenvalues of $g\in O(2m)$ are either real or come in complex conjugate pairs. If no eigenvalue is real, then the determinant is positive and $g\in O_+(2m)$. The only chance for $g$ to belong to $O_-(2m)$ is if some eigenvalues are real and an odd number of them are $-1$. This stringent condition seems to imply that $O_-(2m)$ has lower dimension than $O_+(2m)$.
Is that true? In even dimension almost all orthogonal matrices are in fact special?
| Actually, if $M\in O_-(n)$, then$$\begin{array}{ccc}O_+(n)&\longrightarrow&O_-(n)\\N&\mapsto&MN\end{array}$$is a diffeomorphism. So, yes $\dim O_+(n)=\dim O_-(n)$.
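The map $N\mapsto MN$ can be illustrated with $2\times2$ matrices (my own toy check): fixing the reflection $M=\mathrm{diag}(1,-1)\in O_-(2)$, left multiplication sends rotations to reflections, and since $M^2=I$ it is its own inverse, hence a bijection $O_+(2)\to O_-(2)$.

```python
from math import cos, sin, isclose

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

M = [[1, 0], [0, -1]]                           # a fixed element of O_-(2)

for t in [0.0, 0.7, 2.0]:
    N = [[cos(t), -sin(t)], [sin(t), cos(t)]]   # an element of O_+(2)
    MN = matmul(M, N)
    assert isclose(det(N), 1) and isclose(det(MN), -1)
    back = matmul(M, MN)                        # M(MN) = N, since M^2 = I
    assert all(isclose(back[i][j], N[i][j], abs_tol=1e-12)
               for i in range(2) for j in range(2))
```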
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3537063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Integral with binomial to a power $\int\frac{1}{(x^4+1)^2}dx$ I have to solve the following integral:
$$\int\frac{1}{(x^4+1)^2}dx$$
I tried expanding it and then by partial fractions but I ended with a ton of terms and messed up. I also tried getting the roots of the binomial for the partial fractions but I got complex roots and got stuck. Is there a trick for this kind of integral or some kind of helpful substitution? Thanks.
EDIT:
I did the following:
Let $x^2=\tan\theta$, then $x = \sqrt{\tan\theta}$ and $dx=\frac{\sec^2\theta}{2x}d\theta$
Then:
$$I=\int\frac{1}{(x^4+1)^2}dx = \int\frac{1}{(\tan^2\theta+1)^2} \frac{\sec^2\theta}{2x}d\theta=\int\frac{1}{\sec^4\theta} \frac{\sec^2\theta}{2x}d\theta$$
$$I=\frac{1}{2}\int{\frac{1}{\sec^2\theta \sqrt{\tan\theta}}}d\theta$$.
After this I don't know how to proceed.
| Hints:
$$\frac1{(x^4+1)^2}=\frac{x^4+1-x^4}{(x^4+1)^2}=\frac1{x^4+1}-\frac{x^4}{(x^4+1)^2}$$ and by parts
$$4\int\frac{x^3x}{(x^4+1)^2}dx=-\frac x{x^4+1}+\int\frac{dx}{x^4+1}.$$
This way we can get rid of the square at the denominator, and we are left with
$$\frac1{x^4+1}.$$
Now using the factorization of the quartic binomial,
$$\frac{\sqrt8}{x^4+1}=\frac{x+\sqrt2}{x^2+\sqrt2x+1}-\frac{x-\sqrt2}{x^2-\sqrt2x+1}.$$
Here, by completing the square, we can handle the terms $\sqrt2x$ in the denominators, and solve with terms $\log(x^2\pm\sqrt2x+1)$ and $\arctan(\sqrt2x\pm1)$.
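The quartic partial-fraction identity can be spot-checked numerically (my own check); the quadratic denominators have negative discriminant, so they never vanish for real $x$:

```python
from math import sqrt, isclose

r2 = sqrt(2)

# sqrt(8)/(x^4 + 1) = (x + sqrt2)/(x^2 + sqrt2 x + 1) - (x - sqrt2)/(x^2 - sqrt2 x + 1)
for x in [-3.0, -0.4, 0.0, 0.7, 2.5]:
    lhs = sqrt(8) / (x ** 4 + 1)
    rhs = (x + r2) / (x * x + r2 * x + 1) - (x - r2) / (x * x - r2 * x + 1)
    assert isclose(lhs, rhs, rel_tol=1e-12)
```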
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3537167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Find the number of group isomorphisms from the group $(\mathbb{Z}_3, +)$ to itself. I have to find the number of isomorphisms from the group $( \mathbb{Z}_3, + )$ to itself.
I don't know of any procedure to do this, so I basically just tried to guess functions until I could not find anything else. I found that the functions:
$$f(x) = x$$
$$f(x) = \hat{2}x$$
are both bijective and they hold the equality:
$$f(x + y) = f(x) + f(y)$$
true for any $x, y \in \mathbb{Z}_3$. So I concluded that these are all the isomorphism from the group $(\mathbb{Z}_3, +)$ to itself, since I couldn't find any more functions to be bijective and satisfy that condition.
I checked the answer of the exercise and it agrees with me, saying that the correct answer is $2$ (meaning $2$ isomorphisms, I guess the ones I found).
My question is this: Is there a more organized, general way of finding the answer to this question.
Guessing all the possible functions seems a bit weird; there are infinitely many possibilities.
How do I know that I found the maximum number of isomorphisms and that I can stop? Is there a better strategy than guessing the functions and coming up with the answer?
| Hint: $${\rm Aut}(\Bbb Z_n)\cong U(n),$$ where $U(n)$ is the group of units modulo $n$. In particular, ${\rm Aut}(\Bbb Z_3)\cong U(3)=\{\hat1,\hat2\}$, which has order $2$.
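For $\mathbb{Z}_3$ specifically, one can avoid guessing entirely by enumerating all $3^3=27$ functions $f:\mathbb{Z}_3\to\mathbb{Z}_3$ and keeping the bijective homomorphisms (my own brute-force check):

```python
from itertools import product

n = 3
isos = [
    f
    for f in product(range(n), repeat=n)        # f[x] = image of x
    if sorted(f) == list(range(n))              # bijective
    and all(f[(x + y) % n] == (f[x] + f[y]) % n # homomorphism property
            for x in range(n) for y in range(n))
]

assert len(isos) == 2
print(isos)   # [(0, 1, 2), (0, 2, 1)]  -- the identity and x -> 2x
```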
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3537331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can a regular tetrahedron cast a square shadow given parallel light rays? How do you prove this? And, more generally, can a $n$-simplex cast a hypercube shadow in one lower dimension?
| Yes.
The viewing direction is along the common perpendicular of two opposite (skew) edges, projecting orthographically onto a plane perpendicular to it. This direction also passes through the tetrahedron's center, and the viewing point is at infinity.
By symmetry, the four remaining edges project to four connected segments of equal length, which form a square for such a viewing direction. The Mathematica rendering is slightly off because of the finite viewing distance and the small angle it makes with this direction.
"language": "en",
"url": "https://math.stackexchange.com/questions/3537498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Existence of a number field containing $\alpha$ where $p$ is unramified I want to better understand the structure of algebraic number fields and for this purpose I am thinking of various problems. One of them I cannot solve, is the following:
Let $p$ be some prime number and $\alpha \in \mathbb{C}$ integral over $\mathbb{Z}$. Is there always some algebraic number field $K$ such that $\alpha$ is in the ring of integers $O_K$ of $K$ and $p$ is unramified in $O_K$, i.e. if $pO_K = \prod_{i=1}^n P_i^{e_i}$ where $P_i$ are different prime ideals of $O_K$, then $e_i = 1$ for all $i$.
| This is not always possible. We have:
Let $L/K/F$ be a tower of number fields and $p \subset F$ a prime. TFAE:
*
*$p$ is unramified in $L$;
*$p$ is unramified in $K$ and all primes of $K$ above $p$ are unramified in $L$.
This is a consequence of unique factorization of ideals.
In particular, in the situation of your question, if $p$ is unramified in some field containing $\alpha$ then $p$ is unramified in $\mathbb Q(\alpha)$. So to give a counterexample, it suffices to give an example of a number field $\mathbb Q(\alpha)$ and a prime that is ramified in it! Example:
$\alpha = i$ and $p = 2$.
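The ramification of $2$ in $\mathbb{Z}[i]$ is visible already in ordinary complex arithmetic: $2=-i\,(1+i)^2$ with unit $-i$, so $2\mathcal{O}_K=(1+i)^2$ is a square of a prime ideal. A quick check of the identity using Python's complex numbers (my own sketch):

```python
pi = 1 + 1j                      # the prime (1 + i) of Z[i]
assert pi ** 2 == 2j             # (1 + i)^2 = 2i
assert -1j * pi ** 2 == 2 + 0j   # 2 = -i (1 + i)^2, with unit -i
```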
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3537687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bank vaults and probability
A burglar breaks into a bank with the intention to open the vaults and steal some gold coins.
He knows that each vault contains between 1 and 100 such coins, with equal probability for each number. Since he is what we call an "ethical burglar", he will only take about 100 coins to help some people in need. How many vaults must he breach on average?
This is really confusing because we are not given how many vaults we have. Is it safe to say that since the numbers from 1 to 100 have equal probability, then we have 100 vaults?
Then, if vault number 1 has 1 coin, vault 2 has 2, etc., do we need to open as many as will give a sum of 100, and then take the average?
I need help with the interpretation and solution please!
Ok after having asked several people, here is my interpretation:
We may have any number of vaults, not necessarily 100 (maybe more, maybe less) but the number of coins in them has been placed randomly, with equal probability for each number from 1 to 100.
The burglar will stop once he has collected 100 or more coins: That is, if, after the last vault, he has 99, he may open another one, with 23 coins, so he will have a total of 122 and this is OK.
Any ideas for the solution?
Thank you!
| Very nice problem! You can safely assume that there are exactly $100$ vaults since this is the maximal number of vaults the burglar may have to open. (See below for an argument.)
My take on the interpretation is as follows: Number the vaults $V_1, V_2, \ldots, V_{100}$ and say that the burglar opens $V_1$ first, then $V_2$ if necessary, and then $V_3$ and so on. Suppose that the number of coins in vault $V_i$ is $C_i$. There are $N = 100^{100}$ different possible configurations of coins in the vaults, all of which are equally likely if we assume that the contents of different vaults are independent. That is, for any $(n_1, \ldots, n_{100}) \in S:= \lbrace 1, \ldots, 100 \rbrace^{100}$,
\begin{align*}
P\left( C_1 = n_1, \ldots, C_{100} = n_{100} \right) = \frac{1}{N}.
\end{align*}
For $\textbf{n} = (n_1, \ldots, n_{100}) \in S$, corresponding to specific configuration of coins, let
\begin{align*}
b(\textbf{n}) = \min \left\lbrace 1 \leqslant x \leqslant 100 \mid n_1 + \cdots + n_x \geqslant 100 \right\rbrace.
\end{align*}
The problem is to compute
\begin{align*}
P_{100} := \frac{1}{\# S} \sum_{\textbf{n} \in S} b(\textbf{n}) = \frac{1}{N} \sum_{\textbf{n} \in S} b(\textbf{n}).
\end{align*}
To see why you can assume that there are only $100$ vaults, consider $100 + k$ vaults for some $k \geqslant 1$. Then (extending our notation in an obvious way) we have
\begin{align*}
P\left( C_1 = n_1, \ldots, C_{100 + k} = n_{100 + k} \right) = \frac{1}{100^k N}.
\end{align*}
But for $\textbf{n} = (n_1, \ldots, n_{100+k})$, it is clear that $b (\textbf{n})$ is independent of $n_{101}, \ldots, n_{100+k}$ since $n_i \geqslant 1$ for all $i$. Letting $S_{100+k} = \lbrace 1, \ldots, 100 \rbrace^{100 + k} = S \times \lbrace 1, \ldots, 100 \rbrace^k$, we now see that the average number of vaults the burglar needs to open in the case of $100 + k$ vaults is
\begin{align*}
P_{100+k} := \frac{1}{100^k N} \sum_{\textbf{n} \in S_{100 + k}} b(\textbf{n}) = \frac{100^k}{100^k N} \sum_{\textbf{n} \in S} b(\textbf{n}) = P_{100},
\end{align*}
since there are precisely $100^k$ ways to choose the last $k$ entries of the $(100+k)$-tuple $\textbf{n}$, and $b(\textbf{n})$ is independent of these entries.
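As a sanity check on this interpretation, here is a small Monte Carlo simulation (a sketch, with made-up variable names). One can also get the exact mean: with $S_j$ the sum of the first $j$ vaults, $E[\text{vaults}]=\sum_{j\ge 0}P(S_j\le 99)=\sum_{j=0}^{99}\binom{99}{j}/100^j=\left(1+\tfrac1{100}\right)^{99}\approx 2.678$.

```python
import random

def vaults_needed(rng):
    # open vaults until the running total reaches 100 coins
    total, opened = 0, 0
    while total < 100:
        total += rng.randint(1, 100)  # uniform on {1, ..., 100}
        opened += 1
    return opened

rng = random.Random(0)
trials = 200_000
avg = sum(vaults_needed(rng) for _ in range(trials)) / trials
expected = (1 + 1 / 100) ** 99  # closed form derived above, approx 2.678
print(avg, expected)
```

The simulated average should agree with the closed form to about two decimal places.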
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3537885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
} |
Increasing and decreasing function doubt This is a question of increasing and decreasing functions.
$f(x)=\sin x + \cos x$, $x$ belongs to $[0, 2\pi]$
Derivative of this function $f'(x) = \cos x - \sin x$
For increasing function we put $f'(x) > 0$.
I tried to solve it this way:
$\cos x - \sin x > 0$
$\cos x > \sin x$
$\tan x < 1$
$x$ belongs to $\left(0, \dfrac{\pi}{4}\right)\cup \left(\dfrac{\pi}{2}, \dfrac{5\pi}{4}\right) \cup \left(\dfrac{3\pi}{2}, 2\pi\right)$
But the answer given in the book is $x$ belongs to $\left(0, \dfrac{\pi}{4}\right)\cup \left(\dfrac{5\pi}{4}, 2\pi\right)$.
Any help.
| What you did up to$$\cos x>\sin x\tag1$$is fine. After this point, you need to consider two possibilities:
*
*$x\in\left(0,\frac\pi2\right)\cup\left(\frac{3\pi}2,2\pi\right)$: then $\cos x>0$ and what you did is fine;
*$x\in\left(\frac\pi2,\frac{3\pi}2\right)$: then $\cos x<0$ and what you deduce from $(1)$ is that $\tan x>1$.

Another possibility is to use the fact that\begin{align}\cos(x)+\sin(x)&=\sqrt2\left(\cos\left(\frac\pi4\right)\cos(x)+\sin\left(\frac\pi4\right)\sin(x)\right)\\&=\sqrt2\cos\left(x-\frac\pi 4\right).\end{align}Since you know where $\cos$ is increasing and decreasing…
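A quick numerical check of the book's intervals (a sketch; the grid step $0.001$ is arbitrary): on $(0,2\pi)$, $f'(x)=\cos x-\sin x$ is positive exactly on $\left(0,\frac\pi4\right)\cup\left(\frac{5\pi}4,2\pi\right)$.

```python
import math

f_prime = lambda x: math.cos(x) - math.sin(x)

for i in range(1, 6282):               # grid over (0, 2*pi)
    x = i * 0.001
    increasing = f_prime(x) > 0
    in_book_answer = (0 < x < math.pi / 4) or (5 * math.pi / 4 < x < 2 * math.pi)
    assert increasing == in_book_answer
```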
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find $P(A \cap B)$ with $P(A)=0.4$, $P(B) = 0.3$ and $P(A \cup B) = 0.6$
Find $P(A \cap B)$ with $P(A)=0.4$, $P(B) = 0.3$ and $P(A \cup B) =
0.6$.
My professor doesn't specify whether these events are mutually exclusive. If I solve this as if they are, I get
$$P(A\cap B) = P(A)*P(B) = 0.12$$
If I solve this as if they are not, I get
$$P(A\cup B) = P(A)+P(B)-P(A\cap B) \\
\Leftrightarrow 0.6 = 0.4+0.3 -P(A\cap B)\Leftrightarrow \\
0.6 -0.4-0.3 = -P(A \cap B) \Leftrightarrow \\
0.1 = P(A\cap B)$$
The values are close, kind of. My professor solved this as if they are not mutually exclusive. Is there a way of telling, or did he simply forget to mention that?
| When $A$ and $ B$ are mutually exclusive, $P(A \cup B) = P(A) + P(B)$.
In this case, that would mean $0.6=0.4+0.3$, which is clearly not so.
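Two quick numeric checks (note also that $P(A\cap B)=P(A)P(B)$ is the formula for *independent* events, not mutually exclusive ones; mutual exclusivity would force $P(A\cap B)=0$):

```python
P_A, P_B, P_AuB = 0.4, 0.3, 0.6

# mutual exclusivity would require P(A) + P(B) == P(A u B), which fails: 0.7 vs 0.6
assert abs((P_A + P_B) - P_AuB) > 0.05

# inclusion-exclusion gives the intended answer
P_AiB = P_A + P_B - P_AuB
print(P_AiB)  # approx 0.1
```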
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Behaviour of $L^1$ function at $\infty$ I have the following doubt..
If $f$ is $L^1(\mathbb{R}),$ then can we show $\lim\limits_{k \rightarrow \infty} \int\limits_k^\infty f(s)\,ds=0$
If so please outline the proof..
| Let $f \in L^1(\mathbb{R})$. Then $\int_{\mathbb{R}} f(s) ds = I < \infty$ and furthermore $\forall k \in \mathbb{N}$:
$$\int_{\mathbb{R}} f(s) ds = \int f(s) 1_{(- \infty, k]} ~ ds + \int f(s) 1_{[k, \infty)} ~ ds$$
Now we have
\begin{align*}
\lim_{k \rightarrow \infty} \Big\vert \int_{k}^{\infty} f(s) \, ds \Big\vert &\leq \lim_{k \rightarrow \infty} \int \vert f(s) \vert 1_{[k, \infty)}(s) ~ ds \\
&= \int \lim_{k \rightarrow \infty} \vert f(s) \vert 1_{[k, \infty)}(s) ~ ds \\
&= \int \vert f(s) \vert \cdot 0 ~ ds = 0
\end{align*}
where in the middle step we used dominated convergence via $\vert f(s)\vert 1_{[k, \infty)}(s) \leq \vert f(s) \vert$.
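A numerical illustration (a sketch with the arbitrary choice $f(s)=\frac{1}{1+s^2}\in L^1$, whose tail integral is $\frac\pi2-\arctan k$): the tails shrink to $0$ as $k$ grows.

```python
import math

def tail(k, n=200_000, upper=10_000.0):
    # midpoint-rule approximation of the integral of 1/(1+s^2) over [k, upper]
    h = (upper - k) / n
    return sum(1.0 / (1.0 + (k + (i + 0.5) * h) ** 2) for i in range(n)) * h

for k in (1, 10, 100):
    exact = math.pi / 2 - math.atan(k)   # exact tail integral from k to infinity
    assert abs(tail(k) - exact) < 1e-3
assert tail(100) < tail(10) < tail(1)    # tails decrease toward 0
```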
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration of $\exp(-z^2)/(z-z_0)$ I am interested into the following integral
$I = \int_{-\infty}^{+\infty} \frac{f(z)}{z-z_0} dz$
where $f$ is typically a Gaussian function $e^{-z^2}$.
I am tempted to simply use Cauchy formula to get
$I = 2\pi i f(z_0) = 2\pi i \ e^{-z_0^2}$
But I am not sure that the contribution of the half circle containing $z_0$ goes to $0$ as the radius of the circle $R$ goes to $+\infty$ and I could not find a description of the analytic properties of the complex Gaussian to justify this integration on the internet. On top of that, Mathematica gives me $0$ as a result too..
Can the integral be carried out like written or is not so simple? Thank you.
|
One can deform the real line contour in the complex plane, but it is rather dubious that such a deformation facilitates evaluation of the integral of interest via Cauchy's Integral Theorem. However, we can use Feynman's trick to evaluate the integral in terms of the Imaginary Error Function. To that end we now proceed.
We first note that
$$\begin{align}
\int_{-\infty}^\infty \frac{e^{-z^2}}{z-z_0}\,dz&=2z_0 \int_0^\infty \frac{e^{-x^2}}{x^2-z_0^2}\,dx\tag1
\end{align}$$
Second, let $I(a)$ be defined as
$$\begin{align}
I(a)&=2z_0 \int_0^\infty \frac{e^{-ax^2}}{x^2-z_0^2}\,dx\tag2
\end{align}$$
Observe that $I(1)$ is simply the integral of interest in $(1)$.
Third, differentiate $(2)$ to find
$$\begin{align}
I'(a)&=-2z_0 \int_0^\infty \frac{x^2\,e^{-ax^2}}{x^2-z_0^2}\,dx\\\\
&=-2z_0 \int_0^\infty \frac{(x^2-z_0^2+z_0^2)\,e^{-ax^2}}{x^2-z_0^2}\,dx\\\\
&=-2z_0\int_0^\infty e^{-ax^2}\,dx-z_0^2I(a)\\\\
I'(a)+z_0^2I(a)&=-z_0\sqrt{\pi/a}\tag3
\end{align}$$
Now, solving the ODE in $(3)$ for $I(a)$ we find that
$$I(a)=I(0)e^{-az_0^2}-2\sqrt{\pi}e^{-az_0^2}\int_0^{\sqrt{a}z_0}e^{x^2}\,dx\tag4$$
Using the initial condition $I(0)=i\pi\,\text{sgn}\left(\text{Im}\left(z_0\right)\right)$ and setting $a=1$ in $(4)$ yields the coveted result
$$\begin{align}
\int_{-\infty}^\infty \frac{e^{-z^2}}{z-z_0}\,dz&=i\pi\,\text{sgn}\left(\text{Im}\left(z_0\right)\right)e^{-z_0^2}-2\sqrt{\pi}e^{-z_0^2}\int_0^{z_0}e^{x^2}\,dx\\\\
&=\bbox[5px,border:2px solid #C0A000]{i\pi\,\text{sgn}\left(\text{Im}\left(z_0\right)\right)e^{-z_0^2}-\pi e^{-z_0^2}\text{erfi}(z_0)}\tag5
\end{align}$$
where $\text{sgn}(x)$ is the Sign Function and $\text{erfi}(z)$ is the Imaginary Error Function.
EXAMPLES:
As an example, if $z_0=2+i5$, the result is $i\pi e^{21-20i}-\pi e^{21-20i} \text{erfi}(2+i5)$, which is verified using Wolfram Alpha (See Here).
And if $z_0=2-i5$ the result is $-i\pi e^{21+20i}-\pi e^{21+20i} \text{erfi}(2-i5)$, which was also verified using Wolfram Alpha.
Thus, we see that the sign of the imaginary part of the answer depends on the sign of $\text{Im}(z_0)$.
Note that if $z_0$ is purely real, we can evaluate $(1)$ as a principal value. Setting $z_0$ to $x_0\in \mathbb{R}$ in $(5)$ shows that
$$\text{PV}\int_{-\infty}^\infty \frac{e^{-z^2}}{z-x_0}\,dz=-\pi e^{-x_0^2}\text{erfi}(x_0)$$
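The principal-value formula can be verified numerically (a sketch using SciPy, assuming it is available; the symmetric form $\text{PV}\int \frac{f(z)}{z-x_0}\,dz=\int_0^\infty\frac{f(x_0+t)-f(x_0-t)}{t}\,dt$ removes the singularity):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi

x0 = 0.5
f = lambda z: np.exp(-z ** 2)

def integrand(t):
    # [f(x0 + t) - f(x0 - t)] / t, with the t -> 0 limit 2 f'(x0) filled in
    if t == 0.0:
        return -4.0 * x0 * np.exp(-x0 ** 2)
    return (f(x0 + t) - f(x0 - t)) / t

pv, _ = quad(integrand, 0.0, 50.0)
closed_form = -np.pi * np.exp(-x0 ** 2) * erfi(x0)
print(pv, closed_form)  # both approx -1.5046
```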
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A real-analytic function at $p$ is $C^\infty$ at $p$ (in "An Introduction to Manifolds" by Loring W. Tu.) I am reading "An Introduction to Manifolds" by Loring W. Tu.
There is the following sentence in this book.
I cannot understand what the sentence is saying.
A real-analytic function is necessarily $C^\infty$, because as one learns in real analysis, a convergent power series can be differentiated term by term in its region of convergence.
By definition we can expand a function which is real-analytic at $p$ in its Taylor series at $p$.
So obviously a function which is real-analytic at $p$ must be $C^\infty$ at $p$.
I think we don't need "because as one learns in real analysis, a convergent power series can be differentiated term by term in its region of convergence".
I cannot understand what Loring W. Tu is saying.
Please explain.
| You are confusing
for all $K$, $f(x)= \sum_{|a|\le K} c_a x^a+O(|x|^{K+1})$
with
for all $K$ and for all $y$ close to $0$, $f(x)= \sum_{|a|\le K} c_a(y) (x-y)^a+O(|x-y|^{K+1})$, where the $c_a(y)$ are continuous
$C^\infty$ at (near) $0$ is the latter.
That a (convergent) power series is $C^\infty$ follows from the fact that (for $|x-y|$ small enough) $$\sum_a c_a x^a= \sum_a c_a (y+(x-y))^a=\sum_a c_a \sum_{b\le a} {a\choose b}y^{a-b} (x-y)^b=\sum_b (x-y)^b\sum_{a\ge b} c_a {a\choose b}y^{a-b}$$ is a convergent power series in $x-y$.
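The re-expansion step can be checked numerically (a sketch with the arbitrary one-variable choice $c_a = 1/a!$, i.e. $f=e^x$): the binomial theorem gives new coefficients $\sum_{a\ge b} c_a \binom{a}{b} y^{a-b}$, which should reproduce the Taylor coefficients $e^y/b!$ of $e^x$ about $y$.

```python
import math

K = 30                                                 # truncation order
c = [1 / math.factorial(a) for a in range(K + 1)]      # e^x = sum_a c_a x^a
y = 0.3

# coefficients of the series re-expanded about y: sum_{a >= b} c_a C(a, b) y^(a - b)
d = [sum(c[a] * math.comb(a, b) * y ** (a - b) for a in range(b, K + 1))
     for b in range(K + 1)]

for b in range(10):
    # they match the direct Taylor coefficients of e^x about y, namely e^y / b!
    assert abs(d[b] - math.exp(y) / math.factorial(b)) < 1e-10
```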
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that a function which is defined by integral is entire by using Morera’s theorem.
Let $f : [-1,1] \rightarrow \mathbb{C}$ be a continuous function and we define $F$ on $\mathbb{C}$ by
$$\displaystyle F(z) = \int_{-1}^{1} f(t) e^{itz} dt $$
1) Show that $F$ is well-defined and continuous on $\mathbb{C}$.
2) Show that $F$ is entire by using Morera’s theorem.
Since $f$ is a single-variable complex-valued function, I could show that $F$ is well-defined and continuous on $\mathbb{C}$ easily.
But I have difficulty with second part of the above problem,
What I tried is as follows:
Since the problem asks me to use Morera’s theorem,
I have to show that for every closed contour $C$, $\int_{C} F(z)\, dz=0$.
Then
$$\begin{align}
\Bigl \vert \int_{C}^{\ }F(z)dz \Bigr \vert &= \Bigl \vert \int_{C}^{\ } \Bigl ( \int_{-1}^{1}\,f(t)e^{itz}dt \Bigr ) dz \Bigr \vert \\ &= \Bigl \vert \int_{C}^{\ } e^{itz}\Bigl(\int_{-1}^{1} f(t) dt\Bigr)dz \Bigr \vert \qquad \cdots (1) \\ &= \Bigl \vert \int_{C}^{\ } e^{itz} \Bigl( const. \Bigr) dz \Bigr \vert \\ &= 0
\end{align}$$
I’m not sure at the equation (1), is it okay to pull $e^{itz}$ out from inner integral.
If it is allowed to do, then how exactly it is allowed?
And if not, then how do I proceed?
Edit:
I guess it is okay to change the order of integrals provided that the integral still gives you finite value even if you change the integrand by the absolute value of the integrand.
So, for any contour $C$,
$$\begin{align}
\int_{C}^{\ } \Bigl (\int_{-1}^{1} \vert \, f(t)e^{itz}\, \vert dt \Bigr ) dz &= \int_{C}^{\ } \Bigl (\int_{-1}^{1} \vert \, f(t) \, \vert dt \Bigr ) dz \\ &= (const.)
\end{align}$$
This shows that it is okay to interchange the order of integration, then
$$\begin{align}
\int_{C}^{\ } \Bigl (\int_{-1}^{1} f(t)e^{itz} dt \Bigr ) dz &= \int_{-1}^{1} \Bigl (\int_{C}^{\ } f(t)e^{itz} dz \Bigr ) dt \\ &= \int_{-1}^{1} 0 dt \\&=0
\end{align}$$
This shows that $F$ is entire by Morera’s theorem.
Is this proof right?
| Your intuition on separating integrals is correct, but you have pulled out the wrong thing :)
Let's do it in this manner:
$$\int_{C}^{\ } \Bigl ( \int_{-1}^{1}\,f(t)e^{itz}dt \Bigr ) dz {=\int_{C} \int_{-1}^{1}f(t)e^{itz}dtdz\\=\int_{-1}^{1}\int_{C}f(t)e^{itz}dzdt\\=\int_{-1}^{1}f(t)\int_{C}e^{itz}dzdt}$$and $\int_{C}e^{itz}dz=0$ since $e^{\imath tz}$ is entire.
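A numerical sanity check of the conclusion (a sketch; here $f(t)\equiv 1$, for which $F(z)=\int_{-1}^1 e^{itz}\,dt=2\sin(z)/z$ in closed form, and $C$ is the unit circle): the contour integral of $F$ vanishes to machine precision.

```python
import cmath, math

def F(z):
    # F(z) = integral of e^{itz} over [-1, 1] = 2 sin(z) / z for f == 1 (entire; F(0) = 2)
    return 2.0 if z == 0 else 2.0 * cmath.sin(z) / z

N = 2000
total = 0.0
for j in range(N):                        # trapezoid rule on z = e^{i theta}
    theta = 2.0 * math.pi * j / N
    z = cmath.exp(1j * theta)
    total += F(z) * 1j * z * (2.0 * math.pi / N)

print(abs(total))  # approx 0
```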
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Two Lie Group homomorphisms are equal if their induced Lie algebra homomorphisms are equal and $G$ is connected Let $f, g$ be two Lie Group homomorphisms from G to H (smooth group homomorphisms).
Let the induced Lie Algebra homomorphisms $Df(e)$ and $Dg(e)$ be equal. Then if $G$ is connected show that $f=g$.
I know that the set $S=\{x | f(x)=g(x)\}$ is closed. If I show it is open I am done. Now I know $e$ is in $S$. How do I show a nbd is also there?
| Let $\varphi:G\to H$ a Lie group homomorphism and $d\varphi:\mathfrak{g}\to\mathfrak{h}$ the induced Lie algebra homomorphism, given by $(d\varphi X)_{e_H}=(d\varphi)_{e_G} X_{e_G}$.
Given $\varphi$, let $\Gamma_{\varphi}$ be the graph of $\varphi$, namely $\Gamma_{\varphi}=\{(g,\varphi(g))\mid g\in G\}$. You can verify that $\Gamma_{\varphi}$ is an abstract subgroup of $G\times H$ and that $(\Gamma_{\varphi}, \iota)$ (where $\iota$ is the inclusion) is an embedding into $G\times H$, so it is a Lie subgroup of $G\times H$.
What is the Lie algebra of $\Gamma_{\varphi}$ as a subalgebra of the Lie algebra of $G\times H$? Well, you can see that $\mathrm{Lie}(\Gamma_{\varphi})\cong \Gamma_{d\varphi}$, i.e. the graph of the homomorphism $d\varphi$.
Now the proof of your statement:
We have $\mathrm{Lie}(\Gamma_f)\cong \Gamma_{df}=\Gamma_{dg}\cong\mathrm{Lie}(\Gamma_g)$, and if $G$ is connected, then $\Gamma_f$ and $\Gamma_g$ are two connected subgroups which have the same Lie algebra. By uniqueness we must have $\Gamma_f=\Gamma_g$, i.e. $f(a)=g(a)\,\forall a\in G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3538919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
determination coefficients of quadratic equation using sum and product of its solutions? let $x_1$ and $x_2$ are real solutions of the quadratic equation $ f(x)=ax^2+bx+c=0$ with $a, b, c$ are real such that its solutions satisfy the following system :
$$\begin{cases}x_1+x_2=16, \\ x_1 x_2=55.\end{cases}$$
Now my question here is : what are $a, b, c$ ?
I have got $a=1,b=-16, c=55$ but someone said me that I am wrong , $a$ should be arbitrary as a reason the system have infinitely many solutions
| Let's assume the solutions are $x_1$ and $x_2$. Now, we can write:
\begin{align*}
f(x) &= ax^2 + bx + c \qquad (1)\\
f(x) &= (x - x_1)(x - x_2) = x^2 - x(x_1 + x_2) + x_1 x_2 \qquad (2)
\end{align*}
Since $(2)$ is monic while the leading coefficient in $(1)$ is $a$, we scale equation $(2)$ into:
$$
f(x) = a(x - x_1)(x - x_2) = ax^2 - xa(x_1 + x_2) + ax_1 x_2 \qquad (3)
$$
We can use the relations $x_1 + x_2 = 16$, $x_1x_2 = 55$ in equation $(3)$ to arrive at:
$$
f(x) = ax^2 -16ax + 55a
$$
which means we have an (infinite) family of polynomials $f(x)$ for different choices of $a$.
EDIT: showing that the roots of $f(x)$ are the same as asked:
Note that we can find the roots of $x^2 - 16x + 55$ using (say) the quadratic equation:
$$
x_{1, 2} = \frac{16 \pm \sqrt{16^2 - 4 \cdot 1 \cdot 55}}{2 \cdot 1} = \frac{16 \pm 6}{2} = 8\pm3 = \{ 11, 5\}
$$
Hence, $$f(x) = a(x^2 - 16x + 55) = a(x - 11)(x - 5)$$
which means that the roots are $\{11, 5\}$
EDIT: An attempt to clarify the confusion between roots and solutions
We have two "solutions" floating around in this question.
*
*The collection of polynomials of the form $f_a(x) = a(x - 11)(x - 5)$
*The roots of each $f_{a}(x)$, which are $x_1 = 11, x_2 = 5$ for every choice of $a$.
Given a fixed $a = a_0$, the solution of $f_{a_0}(x)$ is $x_1 = 11, x_2 = 5$.
Given that the roots obey $x_1 + x_2 = 16$ and $x_1 x_2 = 55$ (equivalently, given that the roots are $x_1 = 11$ and $x_2 = 5$), there are infinitely many polynomials $f_a(x)$ with these roots, one for each choice of $a$.
So the fact that, for example, $f_1(x) = (x - 11)(x - 5)$ and $f_2(x) = 2(x- 11)(x - 5)$ both have the same roots ($x_1 = 11, x_2 = 5$) should not contradict the fact that there are two such polynomials, one called $f_1(x)$ and one called $f_2(x)$.
Indeed, this is analogous to this situation which might be easier to think about:
$$
a(x) \equiv x - 10 \quad
b(x) \equiv 2x - 20 \quad
c(x) \equiv 3x - 30
$$
All of $a(x), b(x), c(x)$ have the root $(x = 10)$, but $a(x) \neq b(x) \neq c(x)$.
Similarly, all of the $f_a(x)$ have roots $(x_1 = 11, x_2 = 5$), but they are different polynomials, one for each choice of $a$.
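Numerically (a sketch using NumPy, assuming it is available): every member of the family $f_a(x)=ax^2-16ax+55a$ has the same two roots.

```python
import numpy as np

for a in (1.0, 2.0, -0.5):
    # roots of a x^2 - 16 a x + 55 a for several choices of a
    r = sorted(np.roots([a, -16 * a, 55 * a]).real)
    assert abs(r[0] - 5.0) < 1e-9 and abs(r[1] - 11.0) < 1e-9
    # sum and product match the given relations
    assert abs(sum(r) - 16.0) < 1e-9 and abs(r[0] * r[1] - 55.0) < 1e-9
```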
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Preference relation $\succsim$ continuous if and only if the upper and lower contour sets are both closed I'm trying to show that a preference relation $\succsim$ is continuous if and only if the upper and lower contour sets are closed.
The direction $\Rightarrow$, i.e. continuity implies the upper and lower contour sets are closed, is trivial, but I'm really struggling to show the other direction. Here is my attempt so far:
Assume that all upper and lower contour sets are closed. Let $x^n$ be a sequence converging to $x$, $y^n$ a sequence converging to $y$ with $x^n \succsim y^n$ for each $n$. Suppose, for a contradiction, that we do not have $x \succsim y$, i.e. $y$ is not contained in $L(x)$, the lower contour set of $x$.
Since $L(x)$ is closed, we have $\bar{L(x)} = L(x)$ and so this means there exists an $\varepsilon>0$ such that $B(y,\varepsilon)\subset X \setminus L(x)$. $y^n \rightarrow y$ so there exists an $N$ such that $y^n\in B(y,\varepsilon)$ for all $n\geq N$. Then I don't know how to proceed.
Note: We have also been given an alternative definition of continuity, which states that if $x^n \rightarrow x$ and $x^n \succsim y$ for each $n$ then $x \succsim y$. It is clear that this follows from the first definition, but I don't know if the other direction is true. I'm also unsure how this definition would imply closedness of the lower contour sets (the closedness of the upper contour sets is clear).
| As you have argued, if the claim is false, then $y^{n} \succ x$ for all but finitely many $n$.
Analogously, because $U(y)$ is closed, $y \succ x^{n}$ for all but finitely many $n$.
Therefore, for all $n$ larger than some $n_{0}$, we have $y \succ x^{n} \succeq y^{n} \succ x$.
In particular, $y \succ y^{n_{0}} \succ x$.
See if you can finish the proof from here by using the fact that the upper and lower contour sets of $y^{n_{0}}$ are closed. (Just to be sure, there is no special reason for using $y^{n_{0}}$ instead of $x^{n_{0}}$ to complete the proof. The argument given below only requires that there be some point $z$ in $X$ that satisfies $y \succ z \succ x$.)
Using closedness of the upper and lower contour sets again, we infer that there is an open neighborhood $O_{y}$ of $y$ contained in $X \setminus L(y^{n_{0}})$, and an open neighborhood $O_{x}$ of $x$ contained in $X \setminus U(y^{n_{0}})$.
Since the two sequences are assumed to converge to $x$ and $y$ respectively, there exists a number $n_{1}$ such that for all $n$ greater than $n_{1}$,
\begin{align*}
y^{n} \in O_{y} \subseteq X \setminus L(y^{n_{0}}) \quad \text{and}\quad
x^{n} \in O_{x} \subseteq X \setminus U(y^{n_{0}}).
\end{align*}
But now we have reached a contradiction, since $y^{n} \succ y^{n_{0}} \succ x^{n}$ is impossible if $x^{n} \succeq y^{n}$ for all $n$.
The second definition does not imply the first.
For a counterexample, let the relation to be such that it is represented by a function $f$ which is upper-semicontinuous, but not lower-semicontinuous.
For instance, let $X = [0, 1]$ and let $f\colon[0, 1]\to\mathbb{R}$ be the function
$$
f(x) = \begin{cases}
-x \quad \text{, if $x \in [0, 1/2)$},\\
1 + x \quad \text{, else}.
\end{cases}
$$
Then $x^{n} \succeq y$ and $x^{n} \to x$ imply $x \succeq y$ since $f$ is upper-semicontinuous.
On the other hand, the lower contour sets are not closed. To see this, note that $0 \succeq 1/2 - 1 / n$ for every $n$ since $f(0) \geq f(1/2 - 1/n)$. However, $1/2 \succ 0$ since $f(1/2) > f(0)$.
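The counterexample can be checked mechanically (a sketch; `f` below is the utility function above):

```python
def f(x):
    # upper-semicontinuous, but not lower-semicontinuous, at x = 1/2
    return -x if 0 <= x < 0.5 else 1 + x

# 0 is weakly preferred to each point 1/2 - 1/n ...
for n in range(2, 1000):
    assert f(0) >= f(0.5 - 1 / n)

# ... yet the limit point 1/2 is strictly preferred to 0,
# so the lower contour set L(0) is not closed
assert f(0.5) > f(0)
```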
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that if $ (x+y)$ is even, then $(x−y)$ is even, for integers. Did I prove correctly? is this a direct proof? For $x+y$ to be even, either $x$ and $y$ are both even, or $x$ and $y$ are both odd.
If $x$ and $y$ are both even we obtain: $x=2k$ and $y=2j$.
substituting into $x-y$ we get $2k-2j$.
$2(k-j)$ is even, so we have proved the first case.
Now if $x$ and $y$ are both odd, we obtain: $x=2k+1$ and $y=2j+1$.
substituting into $x-y$ we get $(2k+1)-(2j+1)$,
$2k+1-2j-1= 2k-2j= 2(k-j)$ which is even, and we have proved the remaining case.
| Actually, you can have a direct proof that $x+y$ and $x-y$ have the same parity, since $(x+y)-(x-y)=2y$ is even.
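An exhaustive check of this fact over small integers (since $(x+y)-(x-y)=2y$ is even, $x+y$ and $x-y$ always share a parity):

```python
for x in range(-20, 21):
    for y in range(-20, 21):
        # x + y is even exactly when x - y is even
        assert ((x + y) % 2 == 0) == ((x - y) % 2 == 0)
print("ok")
```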
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove $4\sin^{2}\frac{\pi}{9}-2\sqrt{3}\sin\frac{\pi}{9}+1=\frac{1}{4}\sec^{2}\frac{\pi}{9}$. While attempting to algebraically solve a trigonometry problem in (Question 3535106), I came across the interesting equation
$$
4\sin^{2}\frac{\pi}{9}-2\sqrt{3}\sin\frac{\pi}{9}+1=\frac{1}{4}\sec^{2}\frac{\pi}{9}
$$
which arose from the deduction that $\frac{1}{4}\sqrt{\frac{256\sin^{4}40^{\circ}-80\sin^{2}40^{\circ}+12-\ 8\sqrt{3}\sin40^{\circ}}{\left(16\sin^{4}40^{\circ}-4\sin^{2}40^{\circ}+1\right)}}=\cos50^{\circ}$. Despite the apparent simplicity of the relationship, it seems quite tricky to prove. I managed to prove it by solving the equation as a quadratic in $(\sin\frac{\pi}{9})$ and then using the identity $\sqrt{\sec^2 x-1}=|\tan x|$, the double angle formulae and finally that $\frac{\sqrt{3}}{2}\cos x-\frac{1}{2}\sin x$ can be written in the form $\sin\left(x+\frac{2\pi}{3}\right)$.
But it seems like quite a neat problem. So, does anyone have a better way of proving it?
| We need to prove that:
$$4\sin^220^{\circ}-4\sin60^{\circ}\sin20^{\circ}+1=\frac{1}{4\cos^220^{\circ}}$$ or
$$4\sin^240^{\circ}-8\sin60^{\circ}\sin40^{\circ}\cos20^{\circ}+4\cos^220^{\circ}=1$$ or
$$2-2\cos80^{\circ}-4\sin60^{\circ}(\sin60^{\circ}+\sin20^{\circ})+2+2\cos40^{\circ}=1$$ or
$$\cos40^{\circ}-\cos80^{\circ}-2\sin60^{\circ}\sin20^{\circ}=0,$$ which is obvious.
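A floating-point check of the identity (not a proof, just a sanity check):

```python
import math

x = math.pi / 9                        # 20 degrees
lhs = 4 * math.sin(x) ** 2 - 2 * math.sqrt(3) * math.sin(x) + 1
rhs = 1 / (4 * math.cos(x) ** 2)       # (1/4) sec^2(pi/9)
print(lhs, rhs)  # both approx 0.283118
```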
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Maximum likelihood estimation in $d$-dimensional Euclidean space of a ball I have a set of points in $d$-dimension Euclidean space drawn from a ball centered at point $c$ and with radius $r$ which are unknown. I want help in formulating the maximum likelihood estimator of $r$ and $c$.
| If they're uniformly distributed in the ball, then the value of the probability density at every point in the ball is equal to the reciprocal of the volume of the ball. The values of $c$ and $r$ that maximize that density, subject to the constraint that all of the observed points lie within the ball, are the values of $c$ and $r$ that give you the smallest possible ball containing the observed points.
That reduces the problem to one of geometry. Possibly more that is of interest could be said about the geometry problem.
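In $d=1$ the ball is an interval and the smallest enclosing ball is immediate, which makes the estimator easy to illustrate (a sketch with made-up parameters; for $d\ge 2$ one would need a minimum-enclosing-ball routine such as Welzl's algorithm):

```python
import random

random.seed(1)
c_true, r_true = 2.0, 3.0
# n points uniform on the interval [c - r, c + r]
xs = [c_true + r_true * (2 * random.random() - 1) for _ in range(10_000)]

c_hat = (min(xs) + max(xs)) / 2   # center of the smallest enclosing interval
r_hat = (max(xs) - min(xs)) / 2   # its radius; note r_hat <= r_true always
print(c_hat, r_hat)
```

Note that the MLE of the radius is biased downward, since the sample can never reach past the true boundary.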
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can there exist $\{X_n\}_{n\ge 1}$ such that $X_n \to -\infty$ a.s. and $EX_n\to 0$? Can there exist variables $\{X_n\}_{n\ge 1}$ such that $X_n \to -\infty$ a.s. and $EX_n\to 0$?
How can we prove or disprove this analytically?
Let $X_n$ be $-n$ with probability $p_n$ and $e^{n}$ with probability $1-p_n$, where $p_n=e^{n}/(n+e^{n})$. To show that this example works, note that $\sum_n (1-p_n)<\infty$ and use the Borel–Cantelli Lemma.
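Two quick checks of this construction (a sketch; exact algebra gives $EX_n=0$ for every $n$, not merely in the limit):

```python
import math

for n in range(1, 30):
    p = math.exp(n) / (n + math.exp(n))
    mean = -n * p + math.exp(n) * (1 - p)   # E[X_n]; algebraically exactly 0
    assert abs(mean) <= 1e-9 * math.exp(n)  # zero up to floating-point rounding

# the probabilities 1 - p_n = n / (n + e^n) are summable, so Borel-Cantelli applies
s = sum(n / (n + math.exp(n)) for n in range(1, 200))
print(s)  # approx 0.74, a finite sum
```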
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3539892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Find $A\subseteq [0,1]$ such that $\lim\limits_{\varepsilon\to0}\frac{m(A\cap[0,\varepsilon])}{\varepsilon}=\frac{1}{2}$ Find a measurable set $A\subseteq [0,1]$ such that
$$\lim\limits_{\varepsilon\to0}\frac{m(A\cap[0,\varepsilon])}{\varepsilon}=\frac{1}{2}$$
I'm also interested in a set $B\subseteq [0,1]$ such that
$$\forall\alpha\in(0,1)\quad \frac{m(B\cap[0,\alpha])}{\alpha}=\frac{1}{2}$$
My Attempt
I thought to try constructing:
*
*Fat Cantor set
This does not answer the second question, and I'm not sure if $0$ is a density point for fat cantor set, or not.
*Splitting $[0,1]$ by the binary expansion of every element.
Maybe something like:
$\forall x\in[0,1]$ define $i(x)$ to the the first index in the binary expansion of $x$ in which $1$ appear.
Then, $$x\in A\iff \sum\limits_{j=i(x)}^{2i(x)} x_j = 1 \mod 2$$
Intuitively, It seems that $m(A)=m([0,1]\setminus A)$, but I'm not sure how to prove it formally.
Thanks in advance for any help.
A hint for the first part: let $\{x_k\}\downarrow 0$ such that $x_0=1$ and $x_k>x_{k+1}$ for all $k\in \Bbb N_{\geqslant 0} $, also choose some $r\in [0,1]$. Now set $m_k:=r\ell _k$ for $\ell _k:=x_k-x_{k+1}$, and set $E_k:=[x_{k+1},x_{k+1}+m_k)$. Then $|E_k|=m_k$, and setting $E:=\bigcup_{k\geqslant 0}E_k$ we find that $|E\cap [0,x_k)|/x_k=r$ for all $k$. Pick $r=1/2$ in your case and try to see how to choose $(x_k)$ to make the density of $E$ at zero equal to $1/2$.
A hint for the second part: use the Lebesgue differentiation theorem to show that it cannot be possible.
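The first hint can be made concrete numerically (a sketch with the particular choice $x_k=\frac1{k+1}$, which satisfies $x_{k+1}/x_k\to1$, and $r=\frac12$; this choice is mine, not forced by the hint):

```python
r = 0.5
K = 200_000
x = [1.0 / (k + 1) for k in range(K + 1)]    # x_0 = 1, x_k decreasing to 0

def density(eps):
    # Lebesgue measure of E intersect [0, eps], divided by eps, where E = union of E_k
    # the part of E below x_K has measure exactly r * x_K (telescoping sum)
    total = r * x[K] if x[K] <= eps else 0.0
    for k in range(K):
        lo = x[k + 1]                        # E_k = [x_{k+1}, x_{k+1} + r(x_k - x_{k+1}))
        if lo >= eps:
            continue
        hi = lo + r * (x[k] - x[k + 1])
        total += min(hi, eps) - lo
    return total / eps

print([round(density(e), 4) for e in (1e-2, 1e-3, 1e-4)])  # each approx 0.5
```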
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3540055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limit as $n\to+\infty$ of $\prod_{k=1}^{n} \frac{2k}{2k+1}$ I'm trying to evaluate
$$\lim_{n\to+\infty} \prod_{k=1}^{n} \frac{2k}{2k+1}$$
First I notice that since $k\geq1$ it is $\frac{2k}{2k+1}>0$ for all $k\in\{1,...,n\}$; so
$$0\leq\lim_{n\to+\infty} \prod_{k=1}^{n} \frac{2k}{2k+1}$$
Then I notice that
$$\prod_{k=1}^{n} \frac{2k}{2k+1}=\exp{\ln\left(\prod_{k=1}^{n} \frac{2k}{2k+1}\right)}=\exp{\sum_{k=1}^{n}\ln\left(\frac{2k}{2k+1}\right)}=$$
$$=\exp{\sum_{k=1}^{n}\ln\left(1-\frac{1}{2k+1}\right)}$$
Since $\ln(1+x)\leq x$ for all $x>-1$ and since $\exp$ is an increasing function it follows that
$$\exp{\sum_{k=1}^{n}\ln\left(1-\frac{1}{2k+1}\right)}\leq\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}$$
So
$$\lim_{n\to+\infty}\prod_{k=1}^{n} \frac{2k}{2k+1}\leq\lim_{n\to+\infty}\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}$$
Since $\exp$ is a continuous function it follows that
$$\lim_{n\to+\infty}\exp{\sum_{k=1}^{n}-\frac{1}{2k+1}}=\exp{\sum_{k=1}^{+\infty}-\frac{1}{2k+1}}=e^{-\infty}=0$$
So by the comparison test we deduce that the limit is $0$.
Is this correct? Thanks for your time.
| Another way:
Using the arithmetic–geometric mean inequality on $k$ and $k+1$:
$$\frac{k+(k+1)}{2}> \sqrt{k(k+1)}\Rightarrow \frac{2k+1}{2k}>\sqrt{\frac{k+1}{k}}$$
Taking reciprocals,
$$\frac{2k}{2k+1}<\sqrt{\frac{k}{k+1}}\Rightarrow \prod^{n}_{k=1}\frac{2k}{2k+1}<\prod^{n}_{k=1}\sqrt{\frac{k}{k+1}}=\frac{1}{\sqrt{n+1}}$$
$$\Longrightarrow 0<\prod^{n}_{k=1}\frac{2k}{2k+1}<\frac{1}{\sqrt{n+1}}$$
Applying the limit $n\rightarrow \infty$ and using the Squeeze Theorem, we have $$\lim_{n\to+\infty}\prod^{n}_{k=1}\frac{2k}{2k+1}=0$$
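Numerically, the product $\prod_{k=1}^n\frac{2k}{2k+1}$ indeed stays below $\frac1{\sqrt{n+1}}$ and tends to $0$ (a quick check, independent of the algebra):

```python
import math

prod = 1.0
for n in range(1, 100_001):
    prod *= 2 * n / (2 * n + 1)
    if n in (10, 1_000, 100_000):
        assert prod < 1 / math.sqrt(n + 1)   # the squeeze bound

print(prod)  # approx 0.0028 at n = 100000, consistent with a limit of 0
```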
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3540188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 3
} |
Prove that $\int_a^b f(t)f'(t)dt=\frac{1}{2}[(f(b))^2 - (f(a))^2]$ Prove that If $f(t)=u(t)+iv(t)$ is function that derivative continuous on $[a,b]$ then $\displaystyle\int_a^b f(t)f'(t)dt=\frac{1}{2}[(f(b))^2 - (f(a))^2]$
I know If $\displaystyle F'(t)dt=U'(t)dt+iV'(t)dt , \text{then}$
$$\int_a^bF'(t)dt=\int_a^bU'(t)dt+i\int_a^bV'(t)dt =U(b)-U(a)+i(V(b)-V(a))=F(b)-F(a) $$
I'll try to proof the same way but I can't do it.
Edited
Since $f(t)f'(t)=\frac{1}{2}(f(t)^2)'$
I have $\displaystyle \int_a^b \frac{1}{2}(f(t)^2)'dt=\frac{1}{2}\int_a^b (f(t)^2)'dt=\frac{1}{2}[\int_a^b (u(t)^2)'dt+i\int_a^b (v(t)^2)'dt]=\frac{1}{2}[(u(b)^2)-(u(a)^2)+i\big((v(b)^2)-(v(a)^2)\big)]=\frac{1}{2}[(u(b)^2)+i(v(b)^2)-\big((u(a)^2)+i(v(a)^2)\big)]=\frac{1}{2}[(f(b))^2 - (f(a))^2]$
Please help me to prove this proof , Thank you.
$$\int_a^bf(t)f'(t)\,dt=\int_{f(a)}^{f(b)}u\,du=\left[\frac{u^2}{2}\right]_{f(a)}^{f(b)}=\frac{1}{2}\left[(f(b))^2 - (f(a))^2\right]$$
using the substitution $u=f(t)\Rightarrow du=f'(t)\,dt$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3540413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Prove that if $0\leq a_k \leq b_k$ then $\prod_{k=1}^n a_k \leq \prod_{k=1}^{n} b_k$ Let $0\leq a_k\leq b_k$ for all $k\in\mathbb{N}$, then show
$$\prod_{k=1}^{n} a_k \leq \prod_{k=1}^{n} b_k$$
My attempt: since $0\leq a_k\leq b_k$ for all $k\in\mathbb{N}$ we have that
$$0 \leq a_0\leq b_0, 0 \leq a_1 \leq b_1,..., 0\leq a_k \leq b_k$$
Multiplying for $a_1$ the first inequality we have
$$0\leq a_0a_1 \leq b_0a_1$$
But $a_1 \leq b_1$, so we have
$$0\leq a_0a_1 \leq b_0a_1 \leq b_0b_1$$
Doing this $n$ times we have
$$\prod_{k=1}^{n} a_k \leq \prod_{k=1}^{n} b_k$$
Two questions:
(1) Is this correct? I've done this by myself and even if this seems obvious I want to be sure or learn for my mistakes;
(2) If I want to remove the hypothesis $a_k \geq 0$ and $b_k \geq 0$, should I assume $|a_k|\leq|b_k|$ for all $k\in\mathbb{N}$ and prove
$$\prod_{k=1}^{n} |a_k| \leq \prod_{k=1}^{n} |b_k|$$
Thanks for your time.
| 1: Yes, what you've done is correct.
2: This follows immediately from the first part you have proven: Just define $\hat{a}_k = \left|a_k\right|$ and $\hat{b}_k = \left|b_k\right|$, then $\hat{a}_k$ and $\hat{b}_k$ satisfy the assumptions from your first point and you get
$$\prod_{k=1}^{n} \hat{a}_k \leq \prod_{k=1}^{n} \hat{b}_k$$ and by inserting the definition of $\hat{a}_k$ and $\hat{b}_k$ you get
$$\prod_{k=1}^{n} \left|a_k\right| \leq \prod_{k=1}^{n} \left|b_k\right|$$
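A randomized check of the second statement (a sketch; the ranges and seed are arbitrary):

```python
import math, random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.uniform(-5, 5) for _ in range(n)]
    # inflate each |a_k| to get |a_k| <= |b_k|, with signs chosen arbitrarily
    b = [math.copysign(abs(v) + random.uniform(0, 5), random.choice((-1, 1)))
         for v in a]
    assert math.prod(abs(v) for v in a) <= math.prod(abs(v) for v in b)
print("ok")
```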
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3540672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does this statement hold true about square matrices $A$ and $B$: Does this statement hold true about square matrices $A$ and $B$:
If $\det (A) \neq 0$ and $\det(B) \neq 0$, then $\det(A+B) \neq 0$ or $\det(A-B) \neq 0$.
I tried researching about this but it seems not to be asked online.
| It is not the case. Counterexample:
If
$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \tag 1$
and
$B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \tag 2$
then
$\det(A) = -1 = -\det(B), \tag 3$
but
$A + B = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}, \tag 4$
and
$A - B = \begin{bmatrix} 0 & 0 \\ 0 & -2 \end{bmatrix}, \tag 5$
so that
$\det (A + B) = \det (A - B) = 0. \tag 6$
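The counterexample is easy to verify by direct computation (a plain-Python sketch, using the $2\times 2$ determinant formula rather than any library):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[p, q], [r, s]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 0], [0, -1]]
B = [[1, 0], [0, 1]]

add = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
sub = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

print(det2(A), det2(B), det2(add), det2(sub))  # -1 1 0 0
```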
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3540880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Assume that $\int_{a}^{ab} f(x) dx$ is independent of $a$. Prove $f(x)=\frac{c}{x}$ Assume $f$ is integrable on $[0,\infty)$ and assume that for $a,b>0$, the value of $\int_{a}^{ab} f(x) dx$ is independent of $a$.
Prove that $f(x)=\frac{c}{x}$, where $c$ is a constant.
I have tried several things, like showing $g(x)=xf(x)$ must have derivative zero, but I am unsure how to use the integral assumption.
Thanks!
| I suppose the assumption holds for all $b$. If it holds for just $b=1$ we cannot prove that $f$ has the desired form.
By Lebesgue's theorem on differentiation of indefinite integrals we can differentiate with respect to $a$ and get $bf(ab)-f(a)=0$ a.e. Setting $g(x)=xf(x)$ and multiplying through by $a$, we get $g(ab)=g(a)$ a.e. Can you finish?
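A numerical illustration that $f(x)=c/x$ indeed makes $\int_a^{ab}f(x)\,dx=c\ln b$ independent of $a$ (the constants $c=2$, $b=3$, the sample values of $a$, and the midpoint rule are all arbitrary illustrative choices):

```python
import math

def integral(f, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

c, b = 2.0, 3.0

def f(x):
    return c / x

values = [integral(f, a, a * b) for a in (0.5, 1.0, 4.0, 10.0)]
# every value should be close to c * ln(b), regardless of a
assert all(abs(v - c * math.log(b)) < 1e-6 for v in values)
print(values)
```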
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3541092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
} |
$\sum_{k=1}^{n}\frac{\ln k}{(2k-1)(2k+1)}<\frac{1}{4}$
Show that $$\sum_{k=1}^{n}\frac{\ln k}{(2k-1)(2k+1)}<\frac{1}{4}$$
holds for all $n\in\mathbb{N^+}$.
Since the terms are nonnegative, it suffices to show that $$\sum_{k=1}^{\infty}\frac{\ln k}{(2k-1)(2k+1)}<\frac{1}{4},$$which numerical computation supports. WA gives that
$$\sum_{k=1}^{\infty}\frac{\ln k}{(2k-1)(2k+1)}\approx 0.231051.$$
| By summation by parts (splitting $\ln(k)=\ln(k-1)-\ln(1-1/k)$ and telescoping),
$$\begin{align}
S_n:&=\sum_{k=1}^{n}\frac{\ln(k)}{(2k-1)(2k+1)}\\
&=\frac{1}{2}\sum_{k=2}^{n}\left(\frac{\ln(k)}{2k-1}-\frac{\ln(k)}{2k+1}\right)\\
&=\frac{1}{2}\sum_{k=2}^{n}\frac{\ln(k-1)}{2k-1}-\frac{1}{2}\sum_{k=2}^{n}\frac{\ln(k)}{2k+1}
+\frac{1}{2}\sum_{k=2}^{n}\frac{-\ln(1-1/k)}{2k-1}\\
&\leq-\frac{\ln(n)}{2(2n+1)}+\frac{\ln(2)}{6}+\frac{1}{2}\sum_{k=3}^{\infty}\frac{-\ln(1-1/k)}{2k-1},
\end{align}$$
where the first two sums telescope after shifting the index, the $k=2$ term of the last sum equals $\frac{\ln(2)}{6}$, and the remaining (positive) terms were extended to an infinite sum.
Now, since $x\mapsto-\ln(1-x)$ is convex, it lies below its chord on $[0,1/3]$, giving $-\ln(1-x)\leq 3\ln(3/2)\,x$ for $x\in(0,1/3]$; dropping the nonpositive term $-\frac{\ln(n)}{2(2n+1)}$, we therefore get
$$S_n< \frac{\ln(2)}{6}+3\ln(3/2)\sum_{k=3}^{\infty}\frac{1}{(2k)(2k-1)}
=\frac{\ln(2)}{6}+3\ln(3/2)\left(\ln(2)-\frac{7}{12}\right)<\frac{1}{4}$$
where we used the fact that $\ln(2)=\sum_{j=1}^{\infty}\frac{(-1)^{j-1}}{j}=\sum_{k=1}^{\infty}\left(\frac{1}{2k-1}-\frac{1}{2k}\right)$.
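The partial sums can also be checked numerically against the bound (a sketch; the truncation point is arbitrary, and the value is consistent with the $\approx 0.231051$ quoted in the question):

```python
import math

# partial sum of ln(k) / ((2k-1)(2k+1)); the k = 1 term vanishes since ln(1) = 0
S = 0.0
for k in range(2, 200_000):
    S += math.log(k) / ((2 * k - 1) * (2 * k + 1))
print(S)  # ~0.231, comfortably below 1/4
```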
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3541416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |