Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Constructing 6-fold cover of $S^1 \vee S^1$ with deck transformation group $\cong S_6$ So I'm thinking that this will be a covering space of maximum possible symmetry. Will a "necklace" of 6 circles work? Any tips appreciated.
| The comments are correct that this construction is impossible; however, the reasoning given there is flawed.
First of all, the action of the deck transformation group $G$ of a connected covering map $f : X \to Y$ is a free action, meaning that a nontrivial element of $G$ has no fixed points --- so no, a deck transformation cannot take $y_2$ to $y_3$ whilst fixing $y_1$.
Second, given $x \in X$ and $y=f(x) \in Y$, the action of $G$ on the subset $f^{-1}(y) \subset X$ need not be transitive. In fact, that action is transitive if and only if the covering map $f : X \to Y$ is regular, if and only if the image of the induced homomorphism $\pi_1(X,x) \to \pi_1(Y,f(x))$ is a normal subgroup of $\pi_1(Y,f(x))$. However, this only makes the construction more impossible (if that means anything), because it implies that the cardinality of $f^{-1}(y)$ is greater than or equal to the order of the group $G$.
So, in a covering map of degree $6$, the set $f^{-1}(y)$ has cardinality $6$, and there is no free action of a group of order $6! = 720$ on a set of cardinality $6$.
For proofs of the various things discussed in this answer, I suggest sitting down with a good solid course on the relationship between fundamental groups and covering spaces, as in Hatcher.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to adjust the diagonal so that a matrix is on the stability threshold? I am working on the stability of food webs, which can be represented by a Jacobian matrix showing the interaction strengths between species. I know that a matrix is locally stable if all real parts of the eigenvalues are negative, and I know how to find these eigenvalues.
I now want to adjust the diagonal of my matrix so that the matrix is on the threshold of stability: it is stable, but any loss of intraspecific interaction strength - these are the values on the diagonal - will result in an unstable matrix. Mathematically speaking, I think I want a matrix with an adjusted diagonal so that all eigenvalues are zero - does this make sense?
Is there a method to find out what my diagonal should look like if I want all my eigenvalues to be zero? The starting point for the transformation can be any diagonal (i.e. all values are -1, or all values differ according to some biological data).
In the literature I found "An eigenvalue $\lambda_{i}$ can be linearly transformed by the amount that must be subtracted from $a_{ii}$ to allow the eigenvalue to be 0 (by $-|\lambda_{i}-a_{ii}|$)", but I'm not sure how to implement this.
I hope this is the right way of asking the question; first time I use this platform!
| To be on the boundary of instability, you don't need all eigenvalues to be zero, you just need the largest occurring real part to be zero (so that the “rightmost” eigenvalue(s) is (are) on the imaginary axis).
This you can do by adding $cI$ to your matrix (where $I$ is the identity matrix and $c$ is a suitable constant); this will add $c$ to all the eigenvalues, i.e., shift them $c$ steps to the right in the complex plane. Taking $c$ to be minus the largest real part of the original eigenvalues puts the rightmost eigenvalue exactly on the imaginary axis.
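For instance, here is a minimal numeric sketch with a hypothetical $2\times 2$ Jacobian (the entries are made up for illustration; for a real food web you would plug in your own matrix and, for $n>2$, a numerical eigenvalue routine):

```python
import cmath

def eigvals2(a, b, c, d):
    # Eigenvalues of the 2x2 matrix [[a, b], [c, d]] via the quadratic formula.
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

# Hypothetical 2-species Jacobian (stable: all real parts negative).
a, b, c, d = -1.0, 0.5, -0.3, -2.0
lams = eigvals2(a, b, c, d)
shift = -max(l.real for l in lams)              # the constant c from above
shifted = eigvals2(a + shift, b, c, d + shift)  # add c*I, i.e. adjust the diagonal
print(max(l.real for l in shifted))             # ~0: on the stability threshold
```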
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
I'm facing a problem with an inhomogeneous equation. Let $u(x,t)$ be a function that satisfies the PDE $$u_{xx}-u_{t t} = e^x+6t,$$ $x\in \mathbb{R}, t>0,$ and the initial conditions $$u(x,0)=\sin(x) ,u_{t}(x,0)=0$$ for every $x\in \mathbb{R}.$
Then what is the value of $u(\pi/2,\pi/2)?$
I'm not getting how to solve an inhomogeneous wave equation; can anyone give a hint?
| Let $\begin{cases}p=x+t\\q=x-t\end{cases}$ ,
Then $u_x=u_pp_x+u_qq_x=u_p+u_q$
$u_{xx}=(u_p+u_q)_x=(u_p+u_q)_pp_x+(u_p+u_q)_qq_x=u_{pp}+u_{pq}+u_{pq}+u_{qq}=u_{pp}+2u_{pq}+u_{qq}$
$u_t=u_pp_t+u_qq_t=u_p-u_q$
$u_{tt}=(u_p-u_q)_t=(u_p-u_q)_pp_t+(u_p-u_q)_qq_t=u_{pp}-u_{pq}-u_{pq}+u_{qq}=u_{pp}-2u_{pq}+u_{qq}$
$\therefore u_{pp}+2u_{pq}+u_{qq}-(u_{pp}-2u_{pq}+u_{qq})=e^\frac{p+q}{2}+6\times\dfrac{p-q}{2}$
$4u_{pq}=e^\frac{p+q}{2}+3(p-q)$
$u_{pq}=\dfrac{e^\frac{p+q}{2}}{4}+\dfrac{3(p-q)}{4}$
$u(p,q)=f(p)+g(q)+e^\frac{p+q}{2}+\dfrac{3(p^2q-pq^2)}{8}$
$u(p,q)=f(p)+g(q)+e^\frac{p+q}{2}+\dfrac{3pq(p-q)}{8}$
$u(x,t)=f(x+t)+g(x-t)+e^x+\dfrac{3(x^2-t^2)t}{4}$
$u(x,0)=\sin x$ :
$f(x)+g(x)+e^x=\sin x$
$f(x)+g(x)=\sin x-e^x......(1)$
$u_t(x,t)=f_t(x+t)+g_t(x-t)+\dfrac{3x^2}{4}-\dfrac{9t^2}{4}=f_x(x+t)-g_x(x-t)+\dfrac{3x^2}{4}-\dfrac{9t^2}{4}$
$u_t(x,0)=0$ :
$f_x(x)-g_x(x)+\dfrac{3x^2}{4}=0$
$f(x)-g(x)=c-\dfrac{x^3}{4}......(2)$
$\therefore f(x)=\dfrac{\sin x-e^x}{2}-\dfrac{x^3}{8}+\dfrac{c}{2}$ , $g(x)=\dfrac{\sin x-e^x}{2}+\dfrac{x^3}{8}-\dfrac{c}{2}$
Hence $u(x,t)=\dfrac{\sin(x+t)-e^{x+t}}{2}-\dfrac{(x+t)^3}{8}+\dfrac{c}{2}+\dfrac{\sin(x-t)-e^{x-t}}{2}+\dfrac{(x-t)^3}{8}-\dfrac{c}{2}+e^x+\dfrac{3(x^2-t^2)t}{4}$
$u(x,t)=\dfrac{\sin(x+t)+\sin(x-t)+2e^x-e^{x+t}-e^{x-t}}{2}+\dfrac{(x-t)^3-(x+t)^3}{8}+\dfrac{3(x^2-t^2)t}{4}$
$u\left(\dfrac{\pi}{2},\dfrac{\pi}{2}\right)=\dfrac{2e^\frac{\pi}{2}-e^\pi-1}{2}-\dfrac{\pi^3}{8}$
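The closed form can be sanity-checked numerically (sample point and step size chosen arbitrarily), using central differences for the second derivatives:

```python
import math

def u(x, t):  # the solution derived above
    return ((math.sin(x + t) + math.sin(x - t) + 2 * math.exp(x)
             - math.exp(x + t) - math.exp(x - t)) / 2
            + ((x - t) ** 3 - (x + t) ** 3) / 8
            + 3 * (x ** 2 - t ** 2) * t / 4)

x, t, h = 0.7, 0.3, 1e-4
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
print(u_xx - u_tt, math.exp(x) + 6 * t)  # both sides of the PDE agree
print(u(x, 0.0), math.sin(x))            # initial value u(x,0) = sin x
print((u(x, h) - u(x, -h)) / (2 * h))    # initial velocity u_t(x,0) ~ 0
```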
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
normal subgroup basis of $\mathrm{GL}(n, \mathbb{Q}_{p})$ Consider the group $G=\mathrm{GL}(n, \mathbb{Q}_p)$. This group is locally compact and totally disconnected, and we have a basis of open subgroups given by
$$
K(p^m) = \left\{ g \in \mathrm{GL}(n, \mathbb{Z}_{p}) : g \equiv I \pmod{p^m}\right\}
$$
for $m\geq 0$. These are even normal subgroups of $\mathrm{GL}(n, \mathbb{Z}_{p})$, but not in $\mathrm{GL}(n, \mathbb{Q}_{p})$. Can we find any other basis of open subgroups that is even normal in $\mathrm{GL}(n, \mathbb{Q}_p)$? If not, how can we prove it? Such normality holds for any totally disconnected compact group, so non-compactness of $\mathrm{GL}(n, \mathbb{Q}_p)$ will be a problem. However, I can't prove this.
| $\DeclareMathOperator{GL}{GL}\DeclareMathOperator{SL}{SL}\DeclareMathOperator{PSL}{PSL}$Here is another, more abstract proof. It uses some more powerful facts, but on the upside, requires pretty much no computations (if you use these facts as black boxes).
Instead of showing that $\GL_n$ does not have a basis of open normal subgroups, we will show that $\SL_n$ does not have one (which is slightly stronger, at least a priori). It works for any non-discrete, Hausdorff topological field, not just for the $p$-adics (just like the other answer).
Suppose $K$ is a non-discrete topological field and $n>1$. We will show that every open normal subgroup of $\SL_n(K)$ has index at most $n$. It follows that there is a minimal open normal subgroup $N_0\unlhd \SL_n(K)$ (also of index at most $n$). Since $K$ is non-discrete, neither is $\SL_n(K)$, so $N_0$ is nontrivial, and so if $K$ is Hausdorff, it follows that there is a neighbourhood of the identity in $\SL_n(K)$ which does not contain $N_0$ --- so the open normal subgroups cannot form a neighbourhood basis at the identity.
Let $N\unlhd \SL_n(K)$ be an open normal subgroup. Since $K$ is not discrete, it follows that $N$ is nontrivial and contains a non-central element (e.g. a non-scalar diagonal matrix).
It follows that $N/Z(\SL_n(K))$ is a nontrivial normal subgroup in $\PSL_n(K)$, which is a simple group. Thus, $N\cdot Z(\SL_n(K))=\SL_n(K)$. But the center of $\SL_n(K)$ has at most $n$ elements (corresponding to $n$-th roots of unity in $K$). The result follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
For which $a>0$ is the series convergent?
For which $a>0$ is the series $$\sum { \left(2-2 \cos\frac{1}{n} -\frac{1}{n}\cdot \sin\left( \sin\frac{1}{n} \right) \right)^a } $$ $(n \in \mathbb N)$ convergent?
My try: From Taylor's theorem I know that $$a_{n}={ \left(2-2 \cos\frac{1}{n} -\frac{1}{n}\cdot \sin\left( \sin\frac{1}{n} \right) \right)^a } = \left(\frac{1}{n^{4}}-\frac{7}{72n^{6}}+o\left(\frac{1}{n^{8}}\right)\right)^{a}$$
Then I have:
$$(\frac{1}{n^{4}}-\frac{7}{72n^{6}}+o(\frac{1}{n^{8}}))^{a} \le (\frac{1}{n^{4}}+o(\frac{1}{n^{8}}))^{a}$$At this point, my problem is that if I had:
$$\left(\frac{1}{n^{4}}-\frac{7}{72n^{6}}+o\left(\frac{1}{n^{8}}\right)\right)^{a} \le \left(\frac{1}{n^{4}}\right)^{a}$$ I could say that $0 \le a_{n} \le \left(\frac{1}{n^{4}}\right)^{a}$, so for $a>\frac{1}{4}$ this series is convergent. However, in this task I also have the $o\left(\frac{1}{n^{8}}\right)^{a}$ term and I don't know what I can do with it to finish my solution. Can you help me?
| The term that is $o(n^{-8})$ is smaller, for all sufficiently large $n$, than $Cn^{-8}$, for at least some well-chosen value of $C$.
Making use of that $C$, you can finish the proof by noting that for sufficiently large $n$, $\frac7{72n^6} > Cn^{-8}$ since for large enough $n$, $$n^2 > \frac{72C}{7}$$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Finding canonical names for equivalence classes Given $x,y\subset \omega$, define the equivalence relation
$$x=^*y\iff x\Delta y\text{ is finite.}$$
Let $M$ be a ctm of ZFC and $G$ an $M$-generic filter over $P$=Fn$(\omega\times\omega,2)$. Put $g=\bigcup G:\omega\times\omega\to2$ and let
$$a_i=\{n<\omega: g(i,n)=1\}$$
be the $i^\text{th}$ Cohen real.
I know that each $a_i$ has a canonical name, namely (no pun intended)
$$
\dot a_i:=\{(\check n,p):p(i,n)=1\}$$
Can we find any similarly easy names for the equivalence class $[a_i]=\{x\subset\omega: x=^*a_i\}$?
The context for this question is: I'm considering automorphisms of $P$, and I need names for which I can simply describe the action of the automorphism on them.
| Yes, of course. Let's focus on adding just a single Cohen real $\dot a$.
If $s$ is a finite binary sequence, $\dot a^s$ would be $\{(\check n,p)\mid p\subseteq s\lor s\subseteq p\land p(n)=1\}$.
Now $[\dot a]=\{\dot a^s\mid s\in 2^{<\omega}\}^\bullet$, where $\{\dot x_i\mid i\in I\}^\bullet$ is the name given by $\{(\dot x_i,1)\mid i\in I\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3163943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integers $n$ satisfying $\frac{1}{\sin \frac{3\pi}{n}}=\frac{1}{\sin \frac{5\pi}{n}}$
If $\displaystyle \frac{1}{\sin \frac{3\pi}{n}}=\frac{1}{\sin \frac{5\pi}{n}},n\in \mathbb{Z}$, then the number of $n$ satisfying the given equation is
What I tried:
Let $\displaystyle \frac{\pi}{n}=x$ and equation is $\sin 5x=\sin 3x$
$\displaystyle \sin (5x)-\sin (3x)=2\sin (4x)\cos (x)=0$
$\displaystyle 4x= m\pi$ and $\displaystyle x= 2m\pi\pm \frac{\pi}{2}$
$\displaystyle \frac{4\pi}{n}=m\pi\Rightarrow n=\frac{4}{m}\in \mathbb{Z}$ for $m=\pm 1,\pm 2\pm 3,\pm 4$
put into $\displaystyle x=2m\pi\pm \frac{\pi}{2}$
How do I solve this in some easy way? Help me please.
| Since $-n$ is a solution if $n$ is a solution, it suffices to look for solutions with $n\gt1$. Since $\sin x$ is strictly increasing for $0\le x\le\pi/2$, we cannot have $\sin(3\pi/n)=\sin(5\pi/n)$ if $n\ge10$, so it suffices to consider $2\le n\le9$.
From $\sin x=\sin(\pi-x)$, we have
$$\sin\left(5\pi\over n\right)=\sin\left(\pi-{5\pi\over n}\right)=\sin\left((n-5)\pi\over n\right)$$
For $2\le n\le5$, the signs of $\sin(3\pi/n)$ and $\sin((n-5)\pi/n)$ do not agree (e.g., $\sin(3\pi/2)=-1$ while $\sin(-3\pi/2)=1$). For $6\le n\le9$, we have $0\le{3\pi\over n},{(n-5)\pi\over n}\le{\pi\over2}$, in which case
$$\sin\left(3\pi\over n\right)=\sin\left((n-5)\pi\over n\right)\iff{3\pi\over n}={(n-5)\pi\over n}\iff3=n-5\iff n=8$$
Thus we have two solutions, $n=8$ and $n=-8$.
Remark: The cases $n=3$ and $n=5$ could have been rejected outright, since $\sin(3\pi/n)\sin(5\pi/n)=0$ in those cases, guaranteeing a forbidden $0$ in one of the denominators in the original expression.
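A brute-force scan over a finite range of integers (the bound and tolerance are arbitrary) agrees with this count:

```python
import math

def ok(n):
    s3, s5 = math.sin(3 * math.pi / n), math.sin(5 * math.pi / n)
    # exclude n that make a denominator in the original expression vanish
    return abs(s3) > 1e-12 and abs(s5) > 1e-12 and abs(s3 - s5) < 1e-12

sols = [n for n in range(-100, 101) if n != 0 and ok(n)]
print(sols)  # [-8, 8]
```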
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3164257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Proving an alternating infinite series to be divergent I am trying to prove $$\sum_{n=0}^\infty \frac{(-4)^{3n}}{5^{n-1}}$$ is a divergent series. I know that the terms increase in magnitude as the series progresses regardless of sign, so it must be divergent, but I'm not sure how to prove it.
Usually with an alternating infinite series, as long as the term not involving the minus sign doesn't tend to zero, it is easy to prove; however, here that term is $$\frac{1}{5^{n-1}}$$ (I think), so it makes it a little harder. Maybe I'm taking the wrong approach to this; any help?
| Since $\lim_{n\to\infty}\left\lvert\frac{(-4)^{3n}}{5^{n-1}}\right\rvert=\infty$, you don't have $\lim_{n\to\infty}\frac{(-4)^{3n}}{5^{n-1}}=0$, and therefore the series diverges.
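Numerically, the terms are $5\cdot(-64/5)^n$, so they grow geometrically in magnitude with ratio $-12.8$:

```python
# nth term of the series: (-4)**(3n) / 5**(n-1) = 5 * (-64/5)**n
terms = [(-4) ** (3 * n) / 5 ** (n - 1) for n in range(6)]
print(terms)   # 5.0, -64.0, 819.2, -10485.76, ...
ratios = [terms[i + 1] / terms[i] for i in range(len(terms) - 1)]
print(ratios)  # constant -12.8, so |terms| -> infinity and the series diverges
```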
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3164387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Evaluating: $\lim_{n \to 0} \prod_{\substack{i=nk \\k \in \Bbb Z_{\geq 0}}}^{2-n} \left( 2-i \right) $ How would we evaluate:
$$\lim_{n \to 0} \prod_{\substack{i=nk \\k \in \Bbb Z_{\geq 0}}}^{2-n} \left( 2-i \right) $$
Is it possible to evaluate this manually? Or do we have to make a program to get an approximation?
EDIT:
Since people are getting confused in the comments, here is an example:
If we take n=0.5:
$$\prod_{\substack{i=nk \\k \in \Bbb Z_{\geq 0}}}^{2-n} \left( 2-i \right) = \prod_{\substack{i=0.5k \\k \in \Bbb Z_{\geq 0}}}^{1.5} \left( 2-i \right)=(2-0)(2-0.5)(2-1)(2-1.5) $$
Hope this clears up the misunderstanding.
| The product can be written in an equivalent form as:
$$\Pi(n)=\prod_{k=0}^{\frac{2-n}{n}} (2-kn)=\prod_{k=0}^{\frac{2-n}{n}} \left(-n\right)\left(k-\frac{2}{n}\right)=\left(\prod_{k=0}^{\frac{2-n}{n}} (-n)\right)\left(\prod_{k=0}^{\frac{2-n}{n}} \left(k-\frac{2}{n}\right)\right)=$$
$$=2(-n)^{\frac{2-n}{n}} \left(\frac{n-2}{n}\right)_{\frac{2-n}{n}}$$
This may be helpful.
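Numerically, assuming $n$ runs through the values $2/m$ for positive integers $m$ (so the product has exactly $m$ factors $n\cdot 1, n\cdot 2, \dots, n\cdot m$ and equals $n^m\, m!$), the values decay rapidly, suggesting the limit is $0$:

```python
def P(m):
    # Pi(n) for n = 2/m: the product of (2 - k*n) over k = 0..m-1
    n = 2 / m
    prod = 1.0
    for k in range(m):
        prod *= 2 - k * n
    return prod

for m in (4, 10, 20, 40, 80):
    print(m, P(m))
# P(4) = 1.5 reproduces the n = 0.5 example above; the values then shrink fast
```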
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3164509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Can $5^n+1$ be a sum of two squares? I want to determine whether or not $5^n+1$ , $n\in\mathbb{N}$, can be written as a sum of two squares.
Obviously, the real problem is when $n$ is odd. I am aware of the known results about numbers written as sums of squares, but I couldn't apply them here. We can see that when $n$ is odd, $5^n+1$ is divisible by 6 and gives remainder 2 when divided by 4, which means that the two squares we would have to add must end in 1 and 9 or 5 and 5.
| Say $x^2 + y^2 = 5^n + 1$.
Looking mod 2, we get that $x^2 + y^2 \equiv 0 \pmod 2$, so $x$ and $y$ have the same parity, and looking mod 4, we get that $x^2 + y^2 \equiv 2 \pmod 4$, so $x$ and $y$ must both be odd.
Say $x = 2r + 1$ and $y = 2s+ 1$, so $$x^2 + y^2 = (2r + 1)^2 + (2s+1)^2 = 4(r^2 + r + s^2 + s) + 2 = 5^n + 1$$
or
$$4(r(r+1) + s(s+1)) = 5^n - 1$$
Now, the left side must be divisible by $8$, so $n$ must be even.
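The parity pattern can be checked by brute force; note that for even $n$ a representation always exists, since $5^n+1=(5^{n/2})^2+1^2$:

```python
import math

def is_sum_of_two_squares(m):
    # try every x up to sqrt(m) and test whether m - x^2 is a perfect square
    return any(math.isqrt(m - x * x) ** 2 == m - x * x
               for x in range(math.isqrt(m) + 1))

results = {n: is_sum_of_two_squares(5 ** n + 1) for n in range(1, 12)}
print(results)  # True exactly for even n (e.g. 26 = 5**2 + 1**2)
```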
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3164742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Proof Verification: $\epsilon(\sigma)=\epsilon(\sigma^{-1}) \, \, \forall{}\sigma\in{}S_n$ Not certain whether my proof is right; I would appreciate it if I could get some feedback on it. Also, the epsilon here is the sign function of the permutation, so $\epsilon=\operatorname{sgn}$.
Proof:
Since the mapping $ \, \, \epsilon:S_n\rightarrow\{\pm{}1\}$ is a group homomorphism, I'll use the fact that
\begin{equation}
\epsilon(\sigma\tau)=\epsilon(\sigma)\cdot\epsilon(\tau)
\end{equation}
And let $\tau=\sigma^{-1}$ which will give us $\epsilon(\sigma\sigma^{-1})=\epsilon(e)$ with $e$ the identity permutation. And since the identity permutation has sign $1$ we have that $\epsilon(\sigma)\cdot\epsilon(\sigma^{-1})=1$.
Now since $\epsilon(\sigma)=(-1)^{\text{number of inversions of} \, \sigma}$ and $\epsilon(\sigma^{-1})=(-1)^{\text{number of inversions of} \, \sigma^{-1}}$ let $n$ and $m$ denote those powers respectively (to avoid cumbersome notation) we then have that
\begin{align}
&(-1)^n\cdot(-1)^m=1\\
&(-1)^{n+m}=(-1)^2\\
&n=m-2
\end{align}
And since $m-2$ doesn't alter the sign of $(-1)$ (there's a better way of saying this) we have that the number of both inversions is the same and hence the sign of both permutations is also the same.
How does this look?
| When you say $n=m-2$, that should be $n\equiv m-2\mod 2$. The value of $\epsilon(\sigma)$ doesn't tell you what that exponent is, only that it's even or odd.
In fact, nothing about this depends on that "number of inversions" formula. It's entirely a consequence of $\epsilon$ being a homomorphism to the two-element group.
The way I would phrase it? $\epsilon(\sigma^{-1})=(\epsilon(\sigma))^{-1}$ since $\epsilon$ is a homomorphism. Then, in the two-element group $\{1,-1\}$, every element is its own inverse, so $(\epsilon(\sigma))^{-1}=\epsilon(\sigma)$. Done.
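An exhaustive check over a small symmetric group, using the inversion-count definition of $\epsilon$ from the question:

```python
from itertools import permutations

def sign(p):
    # (-1) raised to the number of inversions of the permutation tuple p
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return (-1) ** inv

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

assert all(sign(p) == sign(inverse(p)) for p in permutations(range(5)))
print("sign(p) == sign(p^{-1}) for every p in S_5")
```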
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3164833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find angles in trapezoid with sides' lengths equal $15$, $7$, $7$, $8$
What are the interior angles in the trapezoid with sides whose lengths equal $15$, $7$, $7$, $8$ (sides with lengths $15$ and $7$ are parallel)?
I found this problem in an elementary school problem book. We should not use any trigonometry. Is there some simple solution, or does the problem have a typo?
| If we slide the two slant sides together, reducing the two parallel bases by equal amounts, we end up with a triangle with sides $7$, $8$, and $8$ - the two slanted sides and the difference between the parallel sides.
The angles in this isosceles triangle are $\arccos\left(\frac{3.5}{8}\right)\approx 64^\circ$, $\arccos\left(\frac{3.5}{8}\right)\approx 64^\circ$, and $2\arcsin\left(\frac{3.5}{8}\right)\approx 52^\circ$. Going back to the original trapezoid, the angles on the long base are the $52^\circ$ and one of the $64^\circ$ angles, leaving the supplementary angles $128^\circ$ and $116^\circ$ on the short base
Yeah, we're not getting those without trigonometry.
On the other hand, that same method suggests a fix - if that reduced triangle had sides $8,8,8$, we could just read off the $60^\circ$ angles of an equilateral triangle. That would require changing the length $7$ slant side into a length $8$ side. It's quite likely that this mistake was made.
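The numbers quoted above can be reproduced with the law of cosines in a few lines:

```python
import math

a, b, c = 7.0, 8.0, 8.0  # sides of the reduced isosceles triangle
A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))  # angle opposite 7
B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))  # angle opposite 8
print(round(A, 2), round(B, 2), round(A + 2*B, 2))  # ~51.87, ~64.06, 180.0
# Trapezoid angles: 51.87 and 64.06 on the long base,
# 128.13 and 115.94 on the short base.
```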
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3165191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Evaluate $\sum_{k=0}^n {{2n + 1}\choose {2k + 1}}$ Evaluate $$ \sum_{k=0}^n {{2n + 1}\choose {2k + 1}} $$
I'm really stuck on this one; no idea how to progress. My best guess is to somehow get it into the form of $n\choose k$ and then work with that summation. Or maybe the binomial theorem, but I'm not very experienced with that. If you could give a breakdown of how to tackle these problems, that'd be great!
| Hint:
$$2\sum_{k=0}^n\binom{2n+1}{2k+1}a^{2n-2k}b^{2k+1}=(a+b)^{2n+1}-(a-b)^{2n+1}=?$$
Set $a=b=1$
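With $a=b=1$ the right-hand side becomes $2^{2n+1}$, so the sum equals $4^n$; a quick check confirms this:

```python
from math import comb

for n in range(8):
    # odd-index binomial coefficients of row 2n+1 sum to 4**n
    assert sum(comb(2*n + 1, 2*k + 1) for k in range(n + 1)) == 4 ** n
print("sum_{k=0}^{n} C(2n+1, 2k+1) = 4**n verified for n < 8")
```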
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3165338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
An integral that Wolfram cannot help me with. $$\int e^{x\sin x+\cos x}\frac{x^4\cos^3 x-x\sin x+\cos x}{x^2\cos^2x}dx$$
I noted the fact that $\frac{d(x\cos x)}{dx}=-x\sin x+\cos x$ but I cannot apply the substitution on it.
| Here is the best solution I can do
Compute the following:
\begin{align}
& \int e^{x\sin x+\cos x}\frac{x^4\cos^3 x-x\sin x+\cos x}{x^2\cos^2x}dx \\
& =\int e^{x\sin x+\cos x} \cdot x^2 \cos x dx+ \int e^{x\sin x+\cos x} \cdot \frac{-x\sin x+\cos x}{x^2\cos^2x}dx \\
\end{align}
Remark:
$$\frac{d(x\sin x+\cos x)}{dx}=x \cos x$$
$$\frac{d(x\cos x)}{dx}=-x\sin x+\cos x$$
Therefore, using integration by parts, the first term is
\begin{align}
\int e^{x\sin x+\cos x} \cdot x^2 \cos x dx &= \int x d(e^{x\sin x+\cos x}) \\
&= x \cdot e^{x\sin x+\cos x} - \int e^{x\sin x+\cos x} dx \qquad (1)
\end{align}
Also, the second term is
\begin{align}
\int e^{x\sin x+\cos x} \cdot \frac{-x\sin x+\cos x}{x^2\cos^2x}dx &= \int \frac{e^{x\sin x+\cos x}}{x^2\cos^2x}d(x \cos x) \\
&= \frac{e^{x\sin x+\cos x}}{x \cos x}-\int x \cos x d(\frac{e^{x\sin x+\cos x}}{x^2\cos^2x}) \qquad (2)
\end{align}
And derive:
\begin{align}
\int x \cos x d(\frac{e^{x\sin x+\cos x}}{x^2\cos^2x}) &= \int \frac{e^{x\sin x+\cos x} \cdot (-2 \cos x+x^2 \cos^2 x+2x \sin x)}{x^2 \cos^2 x}dx \\
&= \int e^{x\sin x+\cos x} dx - 2 \cdot \int \frac{(-x\sin x+\cos x)e^{x\sin x+\cos x}}{x^2 \cos^2 x} dx
\end{align}
Thus, the equation (2) becomes:
\begin{align}
\int e^{x\sin x+\cos x} \cdot \frac{-x\sin x+\cos x}{x^2\cos^2x}dx = \frac{e^{x\sin x+\cos x}}{x \cos x} - \int e^{x\sin x+\cos x} dx \\ + 2 \cdot \int \frac{(-x\sin x+\cos x)e^{x\sin x+\cos x}}{x^2 \cos^2 x} dx
\end{align}
By rearranging the terms in the equation above, we have:
\begin{align}
-\int e^{x\sin x+\cos x} \cdot \frac{-x\sin x+\cos x}{x^2\cos^2x}dx &= \frac{e^{x\sin x+\cos x}}{x \cos x} - \int e^{x\sin x+\cos x} dx \\
\int e^{x\sin x+\cos x} \cdot \frac{-x\sin x+\cos x}{x^2\cos^2x}dx &= -\frac{e^{x\sin x+\cos x}}{x \cos x} + \int e^{x\sin x+\cos x} dx \qquad (3)
\end{align}
Combine the equation (1) and (3):
\begin{align}
& \int e^{x\sin x+\cos x}\frac{x^4\cos^3 x-x\sin x+\cos x}{x^2\cos^2x}dx \\
& = x \cdot e^{x\sin x+\cos x} - \int e^{x\sin x+\cos x} dx -\frac{e^{x\sin x+\cos x}}{x \cos x} + \int e^{x\sin x+\cos x} dx \\
& = x \cdot e^{x\sin x+\cos x} -\frac{e^{x\sin x+\cos x}}{x \cos x}
\end{align}
as above
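The antiderivative can be sanity-checked numerically by comparing its difference quotient against the integrand (sample point arbitrary, kept away from zeros of $x\cos x$):

```python
import math

def g(x):
    return x * math.sin(x) + math.cos(x)

def F(x):  # the antiderivative obtained above
    return x * math.exp(g(x)) - math.exp(g(x)) / (x * math.cos(x))

def integrand(x):
    return math.exp(g(x)) * (x**4 * math.cos(x)**3 - x * math.sin(x)
                             + math.cos(x)) / (x**2 * math.cos(x)**2)

x, h = 0.8, 1e-6
dF = (F(x + h) - F(x - h)) / (2 * h)  # central difference quotient of F
print(dF, integrand(x))               # the two values agree to many digits
```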
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3165482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Idempotents and cyclic codes Let $C_1 = \langle e_1(x) \rangle$, $C_2 = \langle e_2(x) \rangle$ cyclic codes, where $e_1(x)$ and $e_2(x)$ are idempotents.
I know what cyclic codes and idempotents are, but why can one deduce the following: $C_1 \subset C_2 \Leftrightarrow e_1(x) e_2(x) = e_1(x)$?
| I believe you are talking about an idempotent $e=e(x)\in F[x]/(x^n-1)$. The fact you are speaking of is actually true for idempotents in any commutative ring with identity.
Suppose $e,f$ are two idempotents in a commutative ring $R$.
Then it is elementary to show that $(e)\cap (1-e)=\{0\}$ and $(f)\cap (1-f)=\{0\}$.
Now if $(e)\subseteq (f)$, then $e-ef=e(1-f)\in (f)\cap (1-f)=\{0\}$. Therefore $e=ef$.
In the other direction, suppose $e=ef$: then clearly $e\in (f)$ and $(e)\subseteq (f)$.
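To see the fact concretely, here is a brute-force check of the equivalence for idempotents in $\mathbb{Z}/n$ (a stand-in for the code ring $F[x]/(x^n-1)$; the same commutative-ring argument applies):

```python
def ideal(e, n):
    # the principal ideal (e) in Z/n
    return {(e * r) % n for r in range(n)}

for n in range(2, 40):
    idems = [e for e in range(n) if (e * e) % n == e]
    for e in idems:
        for f in idems:
            # (e) is contained in (f) exactly when e*f = e
            assert (ideal(e, n) <= ideal(f, n)) == ((e * f) % n == e)
print("(e) <= (f)  iff  ef = e, for all idempotents of Z/n, n < 40")
```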
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3165758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Solve: $\|u+v\| \le \|u\| + \|v\|$ with $\|x\| = \left( \sqrt{|x_1|} + \sqrt{|x_2|} \right)^2$ I was given the following task:
Check if $x\rightarrow \left(\sqrt{|x_1|} + \sqrt{|x_2|}\right)^2$ is a norm on $\mathbb{R}^2$.
I've already shown that
$$\|x\| \ge 0\qquad \|x\| = 0 \Leftrightarrow x = 0$$
$$\|\alpha x\| = |\alpha| \cdot \|x\|$$
The last thing that I need to show is the triangle inequality using $\sqrt{|a|+|b|}\le \sqrt{|a|} + \sqrt{|b|}$
$$\|u+v\| = \left(\sqrt{|u_1+v_1|} + \sqrt{|u_2+v_2|}\right)^2$$
$$\le \left(\sqrt{|u_1|} + \sqrt{|v_1|}+ \sqrt{|u_2|} + \sqrt{|v_2|} \right) ^2$$
$$= \left(\sqrt{|u_1|} + \sqrt{|u_2|}+ \sqrt{|v_1|} + \sqrt{|v_2|}\right)^2$$
$$\dots$$
I have no idea how to continue at this point. I've also tried a different approach:
$$\|u+v\| = \left(\sqrt{|u_1+v_1|} + \sqrt{|u_2+v_2|}\right)^2$$
$$\ge {\sqrt{|u_1|+|u_2|+|v_1|+|v_2|}}^2 = |u_1|+|u_2|+|v_1|+|v_2|$$
but this didn't get me any further as well.
I am thankful for any help
| Let $u(a^2,b^2)$ and $v(c^2,d^2),$ where $a$, $b$, $c$ and $d$ are positives.
Thus, we need to prove that
$$(a+b)^2+(c+d)^2\geq\left(\sqrt{a^2+c^2}+\sqrt{b^2+d^2}\right)^2$$ or
$$ab+cd\geq\sqrt{(a^2+c^2)(b^2+d^2)},$$ which is false in general: the Cauchy–Schwarz inequality gives the opposite bound $ab+cd\leq\sqrt{(a^2+c^2)(b^2+d^2)}$, with equality only when $(a,c)$ and $(b,d)$ are proportional. So the triangle inequality fails and this is not a norm.
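Concretely, $u=(1,0)$ and $v=(0,1)$ (a non-proportional pair) already violate the triangle inequality:

```python
from math import sqrt

def N(x1, x2):
    return (sqrt(abs(x1)) + sqrt(abs(x2))) ** 2

u, v = (1, 0), (0, 1)
lhs = N(u[0] + v[0], u[1] + v[1])  # N(1, 1) = 4
rhs = N(*u) + N(*v)                # 1 + 1 = 2
print(lhs, rhs)  # 4.0 > 2.0: the triangle inequality fails
```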
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3165963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Leibniz convergence test to compute the limit Can we compute the limit of a series using Leibniz test?
This is the problem I am struggling with:
" Let $(a_{n})_{n \ge 1}$ be a sequence of natural numbers, $a_{n} \ge 2$ ,
Let $b_{n} = 1 - \frac{1}{a_{1}} + \frac{1}{a_{1}a_{2}} - \dots + (-1)^n\frac{1}{a_1a_2...a_n}$ , $n = 1,2,3, \dots$
Prove that:
a) $(b_n)_{n \ge 1}$ is convergent
b) if $(a_n)_{n \ge 1}$ is unbounded, then the limit of $(b_n)_{n \ge 1} \in \mathbb{R\setminus Q}$ "
(Source: Romanian National Olympiad Shortlist)
It's easy to prove point a) using the Leibniz test. I may be wrong, but I believe the test can be used in point b) to prove the irrationality. Is there any way to compute the limit?
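For part a), a small numeric illustration (with the hypothetical constant choice $a_k=2$, for which the limit is the geometric sum $\sum_k (-1)^k 2^{-k} = 2/3$):

```python
def b(n, a):
    # partial sum b_n = 1 - 1/a(1) + 1/(a(1)a(2)) - ... + (-1)^n / (a(1)...a(n))
    prod, s, sign = 1.0, 1.0, 1
    for k in range(1, n + 1):
        prod *= a(k)
        sign = -sign
        s += sign / prod
    return s

print(b(30, lambda k: 2))  # -> 0.666..., i.e. 2/3
```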
| Here is my solution, but I am not sure if it is correct.
Suppose that $b_n \rightarrow \frac{p}{q}$ , where $p,q \in \mathbb{Z^*}$.
For any $\epsilon > 0$ , we have $|b_n - \frac{p}{q}| < \epsilon$ for all sufficiently large $n$ , and by choosing $\epsilon := \frac{\epsilon}{|q|}$ , we get:
$|1 - \frac{1}{a_1} + \frac{1}{a_1a_2} + \dots (-1)^n\frac{1}{a_1a_2...a_n} - \frac{p}{q}| < \frac{\epsilon}{|q|}$
Multiplying both sides with $|q|a_1a_2...a_n$ , we get:
$|q(a_1a_2...a_n - a_1a_2...a_{n-1} + \dots (-1)^n) - p| < \epsilon$
By setting $d_n = a_1a_2...a_n - a_1a_2...a_{n-1} + \dots (-1)^n$, we get that $d_n \rightarrow \frac{p}{q} \in \mathbb{Q}$
Since $a_n \ge 2$ for any $n \ge 1$ we get that $a_1a_2...a_n \ge 2^n$ , which implies $a_1a_2...a_n \rightarrow \infty$
Because $d_n = a_1a_2...a_{n-1}(a_n - 1) + a_1a_2...a_{n-3}(a_{n-2}-1) + \dots$ and $a_n - 1 \ge 1$ ,
we get that $d_n \rightarrow \infty$, contradiction
Can anybody tell me if this is correct?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3166174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Which is the maximum value of $k^2 \binom{n}{k}$, for $k$ and $n$ integers?
Given an integer $n$, what is the maximum value that $k^2 \binom{n}{k}$ can take, for $k$ integer?
I've done the case of which $\binom{n}{k}$ is maximum, but for this one I don't see where to begin with. Any hint?
| Hint: if $f(k) = k^2 {n \choose k}$, then
$$ \frac{f(k+1)}{f(k)} = \frac{(k+1)(n-k)}{k^2} $$
When is this $> 1$ or $< 1$?
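A quick brute-force check of where the maximum lands for a sample $n$ (here $n=10$, chosen arbitrarily):

```python
from math import comb

n = 10
f = [k * k * comb(n, k) for k in range(n + 1)]  # f(k) = k^2 C(n, k)
print(f.index(max(f)), max(f))  # k = 6 maximizes, with f(6) = 36 * 210 = 7560
```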
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3166299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Independence of coin flips
A fair coin is tossed three times in succession. If at least one of
the tosses has resulted in Heads, what is the probability that at
least one of the tosses resulted in Tails?
My argument and answer: The coin was flipped thrice, and one of them was heads. So we have two unknown trials. The coin flips are all independent of each other, and so there is no useful information to be derived from the fact that one of them was heads. The probability of getting at least one Tails in these two trials is $ \frac 12 + \frac 12 - \frac 14 = \frac 34 $.
The given answer: $ \frac 67 $. The answer proceeds as follows: Initially the sample space consists of 8 events. We now know that one of those events can't happen (TTT can't happen because at least one of the tosses was heads). 6 of the remaining 7 events have at least one tail, and so the probability is $ \frac 67 $.
Why is my answer wrong? What am I missing?
| Well, let's look at the problem. It's asking the odds of flipping any tail given that you flipped at least one head, or $P(T>0|X>0)$, where $T$ and $X$ count the tails and heads respectively. Using the definition of conditional probability, we can say that $P(T>0|X>0)=P(T>0,X>0)/P(X>0)$. Then, using some identities, we have that $P(T>0,X>0)=1-P(T=0)-P(X=0)$ and $P(X>0)=1-P(X=0)$.
To put it in words, the probability that our coin toss triple contains one head and one tail is 1 minus the odds of it containing no heads or no tails, and the probability that it held at least one head is 1 minus the odds that it contained no heads.
Then, doing the remaining math:
$$\frac{1-P(T=0)-P(X=0)}{1-P(X=0)}=\frac{1-\frac{1}{8}-\frac{1}{8}}{1-\frac{1}{8}}=\frac{\frac{6}{8}}{\frac{7}{8}}=\frac{6}{7}$$
Your problem is that you treated the condition as if a specific toss had been revealed to be heads; "at least one head" does not identify which toss, so the remaining tosses are not simply two fresh independent flips.
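The exact count is easy to confirm by enumerating the $8$ equally likely outcomes:

```python
from itertools import product

triples = list(product("HT", repeat=3))        # all 8 outcomes
with_head = [t for t in triples if "H" in t]   # condition: at least one head
with_both = [t for t in with_head if "T" in t]
print(len(with_both), "/", len(with_head))     # 6 / 7
```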
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3166419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
The direct sum of two finitely generated algebras is finitely generated Let $X$ and $Y$ be sets of variables. Let $R$ be a commutative ring. Suppose $S$ and $T$ are finitely generated $R$-algebras. Then $S\cong R[X]/I$ and $T\cong R[Y]/J$ for some $R[X]$-ideal $I$ and $R[Y]$-ideal $J$.
Why is $S\bigoplus T$ a finitely generated $R$-algebra? I wanted to say "just combine the generators of $S$ and $T$" together, but isn't that saying that
$$\frac{R[X]}{I}\oplus\frac{R[Y]}{J}\cong \frac{R[X,Y]}{IR[X,Y]+JR[X,Y]}?$$
But if that's the case, then I'm confused because I thought
$$\frac{R[X]}{I}\otimes\frac{R[Y]}{J}\cong \frac{R[X,Y]}{IR[X,Y]+JR[X,Y]}?$$
But I thought the direct sum and tensor product were very different operations?
| What do you mean by $\oplus$ when you are working with $R$-algebras, or more generally with commutative rings? See this wiki section.
If you are writing $\oplus$ to mean the coproduct (in the category of $R$-algebras), then you ought to write $\otimes_R$ instead, and your equation is correct. If you mean the product, then you ought to write
$\times$ instead, and your equation is incorrect (e.g., with $I=J=0$, $\mathbb{C}[X]\times \mathbb{C}[Y]$ is not a domain, so it can't be $\mathbb{C}[X,Y]$), but see @Lubin's answer for more in this case. If you mean neither of those, you should clarify what it is that you mean (and your equation is probably incorrect).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3166592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Duality between Ideals and Filters on a poset I am studying ideals and filters on a poset, and it is clear to me that these are dual notions, in the sense that by reversing inclusions in the definition we can get one from the other, or in the sense that by taking complements of subsets belonging to a structure we obtain the associated dual one.
I was wondering if there is more to this duality concept, particularly in relation to general topology: filters are used directly to assign neighbourhoods; do ideals directly play a role as well?
| There is a nice duality between filters and ideals in topology, but then we have to go via Boolean algebras and Stone spaces:
If we have a Boolean algebra (BA) $B$ (a bounded distributed and complemented lattice essentially, so with operations $\land, \lor, \lnot$ and a $0$ and $1$.) on the set $S(B)$ of its ultrafilters we can put a natural topology that makes it into a compact Hausdorff space with a base of clopen (closed-and-open) sets.
On the other hand, if we have a compact Hausdorff space with a base of clopen sets (a so-called Stone space) $X$, then $CO(X)$, the set of clopen subsets of $X$, is a natural Boolean algebra (with the standard set operations of union, intersection and complementation, and $\emptyset$, $X$ as bottom and top).
It turns out these are "natural" inverses: the Stone space of the BA $CO(X)$ is homeomorphic to $X$ again and $CO(S(B))$ is isomorphic to the BA $B$ for all Boolean algebras.
Now a filter $\mathcal{F}$ in a BA $B$ corresponds to a closed set $C \subseteq S(B)$: define $C$ to be the set of all ultrafilters on $B$ (i.e. points of $S(B)$) that contain $\mathcal{F}$. And vice versa, if $C$ is a closed subset of $S(B)$, then the corresponding filter in $B$ is the intersection $\bigcap \{\mathcal{U} : \mathcal{U} \in C\}$ of all the ultrafilters (points of $S(B)$) that belong to $C$.
Dually an open subset $O$ of $S(B)$ corresponds to an ideal of $B$ and vice versa.
So in Stone spaces we see that the open vs. closed duality is the mirror image of the filter and ideal duality.
For more info on this duality and proofs, see these notes by KP Hart.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3166852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why is this map $S^2\to S^1$ nullhomotopic? I know that $\pi_2(S^1)=0$ since $S^1$ has $\mathbb{R}$ as universal cover, which is contractible. However, I have a map $S^2\to S^1$ that I can't intuitively see why it is nullhomotopic (it has to be, since otherwise it would represent a nontrivial element of $\pi_2(S^1)$).
Take $S^2$ to be the standard sphere centered at the origin of $\mathbb{R}^3$. Project onto the $XY$-plane by $p_1: (x,y,z)\mapsto (x,y,0)$. Then, do the same with the disc obtained, $p_2:(x,y,0)\mapsto (x,0,0)$. Now we have an interval, that we can send homeomorphically (say by $h$) to $[0,1]$. Now, I can choose non-nullhomotopic maps $[0,1]\to S^1$, such as the quotient map $q$ (not nullhomotopic because it represents a generator of singular homology) or $\phi(t)=e^{2\pi i t}$ (not nullohomotpic because it represents a generator of the fundamental group).
All the maps are continuous. The resulting map $f=q\circ h\circ p_2\circ p_1$ (or $g=\phi\circ h\circ p_2\circ p_1$) should be nullhomotopic, but I can't figure out a homotopy or a geometric intuition for how that is possible, since what I see is that in the end we're just performing the classical loop around $S^1$, which is not nullhomotopic.
| Imagine poking the sphere inwards from $x=0$ and $x=1$ so that under projection to the $x$-axis it doesn't reach all the way around the interval $[0,1]$. Performing this in $\mathbb R^3$ exhibits a homotopy between two embeddings of $S^2$. From this point it should be clear that following this homotopy with the map you describe yields a nullhomotopic map.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
What are the chances of my server giving an error given $N$ daily users? This probability question is based on a real problem: My server gives an error if it gets hit by more than $5$ requests in one second. If I have $N$ daily users, and each one sends an average of $M$ requests to the server per day (assuming each request takes exactly one second), what are the chances that the server will give an error that day?
Specifically, for an average day, I'd like to know $P(\geq 1 \space error)$ (the probability that the server gives at least one error that day), as well as $E[\#errors]$ (the expected value of the total number of errors that day)--so that I can then calculate the expected number of errors over the course of one year, for example.
What I have so far: For any given user, whenever they send a request to the server, what are the chances another one of the $N-1$ users is doing it at the same time? There are $86,400$ seconds in one day, and each user is sending a request for $M$ of those seconds, so the chances are:
$$1 - \left(\frac{86,400-M}{86,400}\right)^{N-1}$$
Is that correct? If so, what are the chances that this happens to any user, not just a given one?
| The probability that a specific user sends a request in a specific second is $p=\frac{M}{86,400}$. Then the number of requests received by the server in a specific second, noted $X$, follows a binomial law, $X\sim\mathcal{B}(N,p)$.
So $\mathbb{P}\{X> 5\}$ gives you the probability of an error in that specific second (the server errors on more than $5$ requests). I'll let you compute this probability, and then continue the computation for each second of the day.
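As a concrete sanity check of this recipe, here is a small Python sketch (the function names and any sample figures are mine, not from the question):

```python
from math import comb

def p_error_second(n_users, m_requests, threshold=5, day_seconds=86_400):
    """P(X > threshold) for X ~ Binomial(n_users, M/86400):
    the chance that one given second receives too many requests."""
    p = m_requests / day_seconds
    return 1.0 - sum(comb(n_users, k) * p**k * (1 - p)**(n_users - k)
                     for k in range(threshold + 1))

def expected_error_seconds(n_users, m_requests, threshold=5, day_seconds=86_400):
    """Expected number of seconds per day with more than `threshold` requests."""
    return day_seconds * p_error_second(n_users, m_requests, threshold, day_seconds)
```

By linearity of expectation the per-second probabilities simply add up over the $86{,}400$ seconds, even though nearby seconds are not independent.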
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Why is $\cos(36°) = \frac{\phi}{2}$ where $\phi$ is the golden ratio? A recent question had a comment which intrigued me, so I went to Wolfram Alpha and put in $\cos(36°)$ and it was half the golden ratio? Why is $\cos(36^\circ) = \frac{\phi}{2}$? Is there a nice geometric proof? I have never heard this before and it's quite fascinating.
Is it a coincidence?
Is there something special about the angle $\frac{\pi}{5}$?
| Because it's $\frac{\sin 72^\circ}{2\sin 36^\circ}=\frac{\sin 108^\circ}{2\sin 36^\circ}$, which by the sine rule, applied to a side-side-diagonal isosceles triangle of a regular pentagon, is $\frac{\varphi}{2}$.
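A quick numeric confirmation of both the claim and the sine-rule identity (a sketch; nothing here goes beyond the identities above):

```python
from math import cos, sin, pi, sqrt

phi = (1 + sqrt(5)) / 2                      # golden ratio
lhs = cos(pi / 5)                            # cos 36 degrees
rhs = sin(2 * pi / 5) / (2 * sin(pi / 5))    # sin 72 / (2 sin 36), as in the answer
```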
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding system of linear equations starting from parametric solution I need to find a system of two linear equations in variables $x_1$, $x_2$ and $x_3$ from a solution vector of the form $x_1=t$, $x_2=1+t$ and $x_3=2-t$, but I'm not sure where to start.
| Each individual implicit equation $ax_1+bx_2+cx_3+d=0$ represents a plane with normal $\mathbf n = (a,b,c)^T$. A line $t\mathbf v+\mathbf p_0$ that lies on this plane is perpendicular to $\mathbf n$, i.e., $\mathbf n\cdot\mathbf v=0$, which in this case produces the constraint $a+b-c=0$. Obviously, any point on the line must lie on the plane, too. We know that $(0,1,2)^T$ is on the line, which generates the equation $b+2c+d=0$. Any two independent solutions to this system of equations will give you the pair of equations that you seek. To put it slightly differently, any two linearly independent elements of the null space of $$\begin{bmatrix}1&1&-1&0\\0&1&2&1\end{bmatrix}$$ give the coefficients of the two required equations.
If you interpret the rows of this matrix as homogeneous coordinates of points, the first row is the “point at infinity” that corresponds to the direction of the line, and the second is the known point on the line from its definition. This method works in general: if you know any two points on the line (finite or not), assemble them into a matrix and find its null space. If the point is finite, append a $1$ to that row; if the point is at infinity (i.e., is a direction vector for the line), append a $0$.
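As a sketch, one concrete independent pair of null-space solutions is $(3,-2,1,0)$ and $(1,-1,0,1)$ (this particular basis is my choice; any independent pair works), and a few lines of Python confirm that both resulting planes contain the given line:

```python
# coefficients (a, b, c, d) of a*x1 + b*x2 + c*x3 + d = 0;
# both rows solve a + b - c = 0 and b + 2c + d = 0
planes = [(3, -2, 1, 0),
          (1, -1, 0, 1)]

def on_plane(coeffs, point):
    a, b, c, d = coeffs
    x1, x2, x3 = point
    return a * x1 + b * x2 + c * x3 + d == 0

def line(t):                      # the given parametric solution
    return (t, 1 + t, 2 - t)

contains_line = all(on_plane(p, line(t)) for p in planes for t in range(-10, 11))
```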
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
OR equality constraint for binary integer program I am trying to find a way to implement an OR equality constraint in a Binary Integer Program. For example, say I want to add the following logical condition to the program:
$$x_1+x_2+x_3+x_4+x_5 = 1\; \text{OR}\; x_1+x_2+x_3+x_4+x_5 = 3$$
$$\textbf{x}\in \mathbb{B}^5$$
The big-M method does not seem to work here because we are dealing with equalities rather than inequalities. I also have not seen any literature on this after a search. My only idea is to come up with all the possible partitions of the five variables and individually assign constraints accordingly. For example...
$$\text{IF } x_1 \geq 1 \rightarrow x_{2,...,5} \leq 0 \text{ OR } x_1 \geq 1 \rightarrow x_2 \geq1, x_3 \geq 1, x_4 \leq 0, x_5 \leq 0 \text{ OR } ...$$
And then use the big-M method. But this is obviously very tedious and the constraints grow exponentially for such a simple OR statement. Do you have any ideas/hints on how to approach this? Thanks
| You can do it with one additional binary variable $y$:
$$\sum_{i=1}^5 x_i = 1 + 2y.$$
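A brute-force check over all $2^5$ binary vectors confirms that the single extra variable captures exactly the OR of the two equalities (a sketch; variable names are mine):

```python
from itertools import product

# feasible set of the original OR condition: sum is 1 or 3
wanted = {x for x in product((0, 1), repeat=5) if sum(x) in (1, 3)}

# feasible set of the reformulation: sum = 1 + 2y for some binary y
modelled = {x for x in product((0, 1), repeat=5)
            if any(sum(x) == 1 + 2 * y for y in (0, 1))}
```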
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Alternating series test question. I'm reading this in my text:
So the alternating series test says:
i) is about the sequence decreasing
ii) is about the limit of the b term going to 0
I'm confused about why we need to do anything more once we find out that ii) isn't satisfied in example 2. What does it mean that we're looking at the limit of the nth term of the series? Don't we know that it diverges already? Also, can someone show me how they determine the limit of $a_n$?
| If conditions (i) and (ii) are satisfied, then you conclude that the series ${\bf converges}$.
If one of the conditions fails, then you cannot conclude that the series ${\bf diverges}$
The limit of $(a_n)$ does not exist because the sequence has two different subsequential limits. In fact, if $n$ is even then $a_n$ converges to $3/4$, while along the odd terms it converges to $-3/4$, and thus it diverges. In particular, $\lim a_n \neq 0$, and so the series diverges by the divergence test.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3167910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
If A is an m by n matrix, prove that the set of vectors b that are not in C(A) forms a subspace. If A is an m by n matrix, prove that the set of vectors b that are not in C(A) forms a subspace.
I would like to first understand if I am interpreting the question correctly. My understanding is that I need to check the subspace axioms: that the set contains the zero vector, that if $b_1$ and $b_2$ are members of the set then $b_1+b_2$ is also a member, and that any scalar multiple $cb_1$ is a member. I just don't understand how to actually prove this.
| Your claimed result is wrong. Maybe this picture can help you figure it out.
Every subspace must contain the zero vector, denoted by $\underline{0}$.
We know that $C(A)$ is a subspace of $\mathbb{R}^m$, so $\underline{0}\in C(A) \subseteq \mathbb{R}^m$; that means the zero vector is inside the column space and also inside $\mathbb{R}^m$.
But if we consider $\mathbb{R}^m\setminus C(A)$, by the picture this means we are removing the red circle, so the zero vector is no longer inside this "space".
Therefore, it cannot form a subspace.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving $\;\ln k \geq \int_{k-\frac{1}{2}}^{k+ \frac{1}{2}}\ln x dx$ I'm trying to prove $$\ln k \geq \int_{k-\frac{1}{2}}^{k+ \frac{1}{2}}\ln x dx$$
In other words, I'm trying to show why the area of the rectangle with height $\ln k$ and width $1$ bounds the area under the graph of $f(x)=\ln x$ in the interval $[k-\frac{1}{2},k+\frac{1}{2}].$
I tried to integrate but got stuck. Any ideas for an elegant proof for this?
| The logarithm is a concave function, so by Jensen's inequality,
$$\ln E\left( U\right) \ge E\left( \ln (U)\right)$$
where $U \sim Uni\left( k-\frac12, k+\frac12\right)$. Since $E(U)=k$ and $U$ has density $1$ on this interval, this reads
$$\ln k \ge \int_{k-\frac12}^{k+\frac12} \ln (x)\, dx$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Weak convergence in $\ell_p$ I have failed to prove the following statement:
Let $1<p< \infty$. Then $f_n \to f$ weakly in $\ell_p$ if and only if $\mathrm{sup} \| f_n \|_p < \infty$ and $f_n \to f$ pointwise.
Any help will be appreciated.
| Suppose $f_n \rightharpoonup f$: use the principle of uniform boundedness to prove boundedness, and to prove the pointwise convergence, test against the standard basis vectors $e_j = (0,0, \ldots 0, 1, 0,0, \ldots)$ (the $1$ in the $j$-th slot).
Suppose boundedness and pointwise convergence. Fix any $\xi \in \ell^q$, where $q$ is the Holder conjugate of $p$. The action of $\xi$ on $\ell^p$ is (as you know)
$$\langle \xi, g\rangle = \sum_i \xi^i g^i.$$
Let $g^m$ denote the truncation of $g$ at height $m$ (after the $m$-th coordinate, all zeroes). Then
$$\langle \xi, f_n - f\rangle = \langle \xi, f_n^m - f^m \rangle + \sum_{i>m}\xi^i (f_n^i - f^i).$$
We can bound the last term using Holder's inequality by
$$(\sup_n \|f_n\|_{\ell^p} + \|f\|_{\ell^p})\|\xi - \xi^m\|_{\ell^q}.$$
So, we can pick $m$ large so that $\|\xi - \xi^m\|_{\ell^q}< \varepsilon$, and then with this $m$ fixed, pass to a pointwise limit in $\langle \xi, f_n^m - f^m \rangle$, since this is a finite sum. Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Integration by parts for $u \in H^{1}$ and $v\in H^{1}_{0}$ Let $\Omega$ be a smoothly, open bounded domain in $\mathbb{R}^{n}$. Assume that $u\in W^{1,2}\left(\Omega\right)$ and $v\in W^{1,2}_{0}\left(\Omega\right)$. Is the integration by parts always true, that is
$$ \int_{\Omega}\left(\partial_{i}u\right) v + \left(\partial_{i}v\right) u dx=\int_{\partial \Omega} uv\tau_{i} d\sigma.$$
Here $\tau_{i}$ is the i-th component of the outward normal vector.
Thanks.
| The formula above is an application of the Gauss-Green (divergence) theorem to the vector
$$
\overline{uv}=\left.
\begin{pmatrix}
uv\\
uv\\
\vdots\\
uv
\end{pmatrix}\quad\right\}\text{ $n$ rows}
$$
As a matter of fact,
$$
\begin{split}
\nabla\cdot\overline{uv}&=
\left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2},\cdots,\frac{\partial}{\partial x_n}\right)\cdot
\begin{pmatrix}
uv\\
uv\\
\vdots\\
uv
\end{pmatrix}\\
&=\frac{\partial (uv)}{\partial x_1}+ \frac{\partial (uv)}{\partial x_2}+\cdots+\frac{\partial (uv)}{\partial x_n}\\
&=\sum_{i=1}^n\big[(\partial_iu)v+u(\partial_iv)\big]
\end{split}
$$
thus,
$$
\begin{split}
\int\limits_\Omega \nabla\cdot\overline{uv}\,\mathrm{d}x &=\int\limits_{\partial\Omega} \overline{uv}\cdot\boldsymbol\tau\,\mathrm{d}\sigma_x\\
&\Updownarrow\\
\int\limits_\Omega (\partial_iu)v+u(\partial_i&v)\,\mathrm{d}x=\int\limits_{\partial\Omega} uv\tau_i\,\mathrm{d}\sigma_x\\
\end{split}\label{1}\tag{1}
$$
Since $u\in W^{1,2}(\Omega)$ and $v\in W_0^{1,2}(\Omega)$ and $\Omega$ is a nice, smooth and bounded domain,
$$
\begin{split}
(\partial_iu)v+u(\partial_iv)&\in L^1(\Omega)\\
\operatorname{tr}_{\partial\Omega}(uv\tau_i)=\operatorname{tr}_{\partial\Omega}(u)\operatorname{tr}_{\partial\Omega}(v)\tau_i&\in L^1(\partial\Omega)
\end{split}\quad \forall i=1,\ldots,n
$$
all integrals in \eqref{1} are well defined and finite: note also that, since $\operatorname{tr}_{\partial\Omega}(v)\equiv 0$ for all $v\in W_0^{1,2}(\Omega)$, the boundary integral on the right-hand side of \eqref{1} vanishes, so the formula reduces to
$$
\int\limits_\Omega \nabla\cdot\overline{uv}\,\mathrm{d}x=\int\limits_\Omega(\partial_iu)v+u(\partial_iv)\,\mathrm{d}x= 0 \label{2}\tag{2}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finding the fundamental group of a subspace of $\mathbb{R}^4$ Let $C^{\times} := \mathbb{C} \backslash \{0\}$ and $Z := \{(w, z) \in \left(\mathbb{C}^{\times}\right)^2 \ | \ w^n = z\}$ for some fixed $n \geq 1$. I'm trying to find the fundamental group of $Z$. My toolbox consists pretty much of Seifert-van Kampen and deformation retracting to a CW-complex, but I don't know how to do that here because $Z$ is 4 dimensional. Here is my work so far:
For a given $z$, the solutions are $\{w_1, ..., w_n\}$, where each $w_i$ has modulus $\sqrt[n]{|z|}$ and the arguments differ by multiples of $\frac{2\pi}{n}$, and the fundamental group of $\mathbb{C}^\times$ is $\mathbb{Z}$ because it deformation retracts onto $\{z \in \mathbb{C} \ | \ |z| = 1\}$.
| We can write $Z = \{(w, w^n) \mid w \in \mathbb{C}^{\times} \}$. Let $\alpha : \mathbb{C}^{\times} \to \mathbb{C}^{\times}, \alpha(w) = w^n$. We see that $Z$ is nothing else than the graph of $\alpha$.
But for any continuous map $\phi : X \to Y$ between topological spaces $X,Y$ the graph $G(\phi) = \{ (x,\phi(x)) \mid x \in X \} \subset X \times Y$ is homeomorphic to $X$. In fact, define $f : X \to G(\phi), f(x) = (x,\phi(x))$ and $g : G(\phi) \to X, g(x,\phi(x)) = x$. These are continuous maps (note that $g$ is the restriction of the projection $X \times Y \to X$) such that $g \circ f = id_X$ and $f \circ g = id_{G(\phi)}$.
Hence $Z \approx \mathbb{C}^{\times}$ and $\pi_1(Z) \approx \mathbb{Z}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove tautology using propositional equivalence and the laws of logic determine q ∧ ( p → ¬q) → ¬p
q ∧ ( ¬p ∨ ¬q) → ¬p
(q ∧ ¬p)∨ (q ∧ ¬q) → ¬p
(q ∧ ¬p)∨ F → ¬p
I don't know how to solve this further; it kind of leaves me confused. What would be the next step?
| First, a term like $P \lor F$ is equivalent to just $P$. So, as the next step you get:
$(q \land \neg p) \to \neg p$
And now rewrite this second implication just as you did the first. That is, the next step is:
$\neg (q \land \neg p) \lor \neg p$
Now do DeMorgan and you're almost there!
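Once the remaining steps are done, a truth-table check confirms the formula is a tautology; a small Python sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def formula(p, q):
    # q AND (p -> NOT q)  ->  NOT p
    return implies(q and implies(p, not q), not p)

# a tautology is true under every assignment of p and q
is_tautology = all(formula(p, q) for p, q in product((False, True), repeat=2))
```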
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partial fractions, disagreement with Wolfram Alpha On my math homework, I have this problem, and WolframAlpha says that $A=-\frac{1}{7}$ and $B=\frac{1}{7}$. However, while solving the problem on my own, I found a restriction that $A \neq -B$. How is it that this answer works? The problem in question is:
$$\dfrac{1}{x^2 - 3 x - 10} = \dfrac{A}{x+2} + \dfrac{B}{x - 5}$$
| Here's the full derivation: $$\frac{1}{x^2-3x-10}=\frac{A}{x+2}+\frac{B}{x-5}\implies$$ $$(x+2)(x-5)\left(\frac{1}{x^2-3x-10}\right)=(x+2)(x-5)\left(\frac{A}{x+2}+\frac{B}{x-5}\right)\implies$$ $$1=A(x-5)+B(x+2)=A(x)-5A+B(x)+2B=x(A+B)+1(2B-5A)$$
Hence we need $A+B=0\space\text{and}\space2B-5A=1$. So substitute $A=-B$ into the second equation: $$2B-5(-B)=1\implies7B=1\implies B=\frac{1}{7}$$
But we have the condition that $A+B=0$ and we know $B=\frac{1}{7},$ hence $$A+\frac{1}{7}=0\implies A=-\frac{1}{7}$$ Thus $$\frac{1}{x^2-3x-10}=\frac{A}{x+2}+\frac{B}{x-5}=\frac{1}{7(x-5)}-\frac{1}{7(x+2)}$$
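A numeric spot check of the final decomposition (the sample points are my choice, avoiding the poles $x=5$ and $x=-2$):

```python
from math import isclose

def lhs(x):
    return 1 / (x**2 - 3*x - 10)

def rhs(x):
    return 1 / (7 * (x - 5)) - 1 / (7 * (x + 2))

samples = [-4.0, -1.0, 0.0, 1.0, 3.5, 7.0, 10.5]
matches = all(isclose(lhs(x), rhs(x)) for x in samples)
```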
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Show that $f(z)=\frac{1}{2\pi}\int\limits_0^{2\pi}f\left(\frac{e^{i\theta}+z}{1+\overline{z}e^{i\theta}}\right)d\theta$
Let $f$ be analytic on domain $\Omega$ which contains the closed unit
disk $\overline{\mathbb{D}}$. Show that
(a)
$$f(0)=\frac{1}{2\pi}\int_0^{2\pi}f(e^{i\theta})d\theta$$
(b) Use part (a) to show that whenever $z\in\mathbb{D}$,
$$f(z)=\frac{1}{2\pi}\int_0^{2\pi}f\left(\frac{e^{i\theta}+z}{1+\overline{z}e^{i\theta}}\right)d\theta$$
Hint: Consider the conformal self maps of the unit disk
For part (a) I used the Cauchy integral formula on the unit disk and then substituted the parameterization $z(\theta)=e^{i\theta}$ where $0<\theta<2\pi$
But for part (b) I don't see how it is related with part (a) and how the automorphisms on the unit disk comes into play.
Anyway I know that $\Phi_{\alpha}:\mathbb{D}\rightarrow\mathbb{D}$ defined as $\Phi_\alpha(z)=\frac{\alpha-z}{1-\overline{\alpha}z}$ is an automorphism when $|\alpha|<1$
Appreciate your help
| Consider the function $g_z(w) = \frac{z+w}{1+\overline{z}w}$ for $|z|<1$ and $|w|\leq 1$. When $|w| = 1$, observe that
$$|g_z(w)| = \left| \frac{z+w}{1+\overline{z}w} \right| = \left| \frac{z+w}{1+\overline{z}w} \right| \cdot \left| \frac{1}{\overline{w}} \right| = \left| \frac{z+w}{\overline{w} + \overline{z}} \right| = \left| \frac{z+w}{\overline{z+w}} \right| = 1.$$
When $|w| < 1$ observe that, for example, $|g_z(0)| = |z| < 1$. By the maximum principle, $|g_z(w)| < 1$ in this case.
Therefore, we see that $f \circ g_z$ is analytic on a domain containing the closed unit disc.
So by part (a), we have
$$(f\circ g_z)(0) = \frac{1}{2 \pi} \int_{0}^{2 \pi} (f \circ g_z)(e^{i \theta}) d\theta,$$
and the result follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3168949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using Rolle's theorem to show $e^x=1+x$ has only one real root
Applying Rolle's Theorem, prove that the given equation has only one root:
$$e^x=1+x$$
By inspection, we can say that $x=0$ is one root of the equation. But how can we use Rolle's theorem to prove this root is unique?
| Let $f(x) = e^x - 1 - x$, and we observe that $f(0)=0$. $f$ is also obviously continuous and differentiable over the real numbers (if you wish to verify that in detail, you can do that separately).
Suppose there exists a second root $b \neq 0$ such that $f(0) = f(b) = 0$. Then there exists some $c \in (0,b)$ (or $(b,0)$ if $b<0$) such that $f'(c) = 0$ by Rolle's theorem.
$f'(x) = e^x - 1$, however, which satisfies $f'(x) = 0$ only when $x=0$, which is not in any interval $(0,b)$ (or $(b,0)$).
Thus, since no satisfactory $c$ exists, we conclude the equation only has one real root.
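The argument can be spot-checked numerically: $f(x)=e^x-1-x$ vanishes at $0$ and, consistent with the uniqueness proof, stays strictly positive elsewhere (a sketch on a sample grid):

```python
from math import exp

def f(x):
    return exp(x) - 1 - x

grid = [k / 10 for k in range(-100, 101) if k != 0]   # [-10, 10] without 0
positive_away_from_zero = all(f(x) > 0 for x in grid)
```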
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3169097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
$X$=number of successes before 2nd failure in a seq of independent Bernoulli trials. pmf of $X$ and $\mathbb E[X]$
Let random variable $X$ denote the number of successes before the 2nd failure of a sequence of independent Bernoulli(p) trials. I need to describe the pmf of $X$ and calculate the expected value $\mathbb E[X]$
I tried the following:
$$f_X(x)=P(X=x)= \binom{x+2-1}x p^x (1-p)^{2-1} \cdot (1-p) = \binom{x+1}x p^x (1-p)^2$$
Is that correct?
How can i calculate $\mathbb E[X]$?
The hint given is that $$\sum_{k=1}^{\infty}kx^{k-1}=\frac{1}{(1-x)^2}, |x|<1$$ and $$\sum_{k=1}^{\infty}k^2x^{k-1}=\frac{x+1}{(1-x)^3}, |x|<1$$
| Let $X_{1}$ denote the number of successes before the first failure
and let $X_{2}$ denote the number of successes between the first
failure and the second failure.
Then $X_{1}$and $X_{2}$ are independent and identically distributed
with $P\left(X_{i}=k\right)=p^{k}\left(1-p\right)$ for $i=1,2$.
Using the first hint for $i=1,2$ we find: $$\mathbb{E}X_{i}=\sum_{k=0}^{\infty}kp^{k}\left(1-p\right)=\left(1-p\right)p\sum_{k=1}^{\infty}kp^{k-1}=\frac{p}{1-p}$$
For a fixed nonnegative integer $k$ there are $k+1$ configurations
for a sequence that contains $k$ successes, $2$ failures and ends with a failure.
This allows us to find the PMF (not PDF): $$P\left(X=k\right)=\left(k+1\right)p^{k}\left(1-p\right)^{2}$$
Now we could go apply both hints and find: $$\mathbb{E}X=\sum_{k=0}^{\infty}k\left(k+1\right)p^{k}\left(1-p\right)^{2}=\left(1-p\right)^{2}p\left[\sum_{k=1}^{\infty}k^{2}p^{k-1}+\sum_{k=1}^{\infty}kp^{k-1}\right]=\frac{2p}{1-p}$$
But it is much more handsome to apply linearity of expectations: $$\mathbb{E}X=\mathbb{E}\left(X_{1}+X_{2}\right)=\mathbb{E}X_{1}+\mathbb{E}X_{2}=\frac{2p}{1-p}$$
Doing so we do not even need the second hint.
Actually, even the first hint can be avoided, because it is obvious that:$$\mathbb{E}X_1=p\left(1+\mathbb{E}X_1\right)+\left(1-p\right)0=p+p\mathbb{E}X_1$$(do you see why?)
leading directly to: $$\mathbb{E}X_1=\frac{p}{1-p}$$
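Both routes to $\mathbb{E}X$ can be verified numerically from the PMF (truncating the series; $p=0.3$ is an arbitrary sample value of mine):

```python
p = 0.3

def pmf(k):                     # P(X = k) = (k+1) p^k (1-p)^2
    return (k + 1) * p**k * (1 - p)**2

total = sum(pmf(k) for k in range(2000))        # should be ~1
mean = sum(k * pmf(k) for k in range(2000))     # should be ~2p/(1-p)
```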
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3169296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to understand fibres of morphisms of schemes. Let $f:X\to Y$ be a morphism of schemes, and let $k(y)$ be the residue field of the point $y$. The fibre of the morphism $f$ over the point $y$ is defined to be the scheme $X_y=X\times_Y Spec(k(y))$.
It's said that $X_y$ is homeomorphic to $f^{-1}(y)$. If we consider affine schemes, then the point $y$ corresponds to a prime ideal $p$ in $\mathcal{O}_y$, then $f^{-1}(y)$ should correspond to a prime ideal $q$ in $\mathcal{O}_x$, which is a point. But sometimes $f^{-1}(y)$ can be more than one point, so I am not sure what $X_y$ looks like.
Besides, it seems that if $H$ is a subscheme of $Y$, then $f^{-1}(H)$ can be regarded as $X\times_Y H$. I hope someone can show me a clear picture. Thanks!
| The question is of a local nature, so let's assume $X=Spec(B)$ and $Y=Spec (A)$ are affine schemes. Then $f: Spec$ $ B\rightarrow Spec$ $A$ corresponds to a ring homomorphism $ g: A \rightarrow B$. Let $y$ correspond to the prime ideal $p$ in $A$. Then $X_y= X \times_{ Y} Spec (k(y)) = Spec ( B \otimes_A \frac { A_p}{pA_p})$. Suppose $g^{-1}(q)=p$ where $q\in Spec(B)$. Then $A \rightarrow B \rightarrow B_q$ factors uniquely through $A_p \rightarrow B_q$, and hence we have a unique ring homomorphism $ B \otimes _A A_p/pA_p \rightarrow B_q/qB_q$, which is surjective, and $B_q/qB_q$ is a field. Thus $\exists ! z\in X_y $ which is the image of the point $Spec (B_q/qB_q)$.
Now back to geometry.
Consider the commutative diagram
$\require{AMScd}
\begin{CD}
X_y@>{}>> Spec (k(y));\\
@VVV @VVV \\
X@>{}>> Y;
\end{CD}
$
The maps are all continuous and follow from the properties of the fibre product. $\pi_1(X_y) \subset f^{-1} (y)$ by commutativity, and the algebra above shows it is a bijection. So you have a continuous bijection.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3169457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $g \circ f=g$, prove that $g$ is constant Let $f:\mathbb{R} \to [0,1]$ be a monotonic function so that $|f(x) - f(y) | <|x-y|$, $\forall x, y \in \mathbb{R} $, $x\neq y$. If $g:\mathbb{R} \to \mathbb{R} $ is a continuous function and $g \circ f=g$, prove that $g$ is constant.
This problem also previously asked to prove that $f$ has a unique fixed point and I could show this by proving that $f$ is continuous and considering the function $h(x) =f(x) - x$. Yet, I don't know how to use this to prove that $g$ is constant.
| Denote by $c$ the fixed point of $f$.
Let $a \in \mathbb R$ be arbitrary. Then
$$g(a)=g(f(a))=g(f^2(a))=....=g(f^n(a))=...$$
The sequence $x_n =f^n(a)$ converges to the fixed point $c$ of $f$: the distances $|x_n - c|$ strictly decrease by the contractive property, the $x_n$ eventually lie in the compact set $[0,1]$, and any subsequential limit $x^*$ satisfies $|f(x^*)-c|=|x^*-c|$, which forces $x^*=c$. Therefore by continuity of $g$
$$g(a)=g(x_n) \to g(c)$$
This shows that $g(a)=g(c)$ for all $a \in \mathbb R$.
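The key step — $f^n(a)$ converging to the fixed point regardless of $a$ — can be illustrated with a concrete $f$ satisfying the hypotheses. Here $f(x)=\frac{1+\tanh x}{2}$ is my choice: it is monotonic, maps $\mathbb{R}$ into $(0,1)\subset[0,1]$, and $|f'|\le\frac12<1$:

```python
from math import tanh

def f(x):
    return (1 + tanh(x)) / 2

def iterate(a, n=200):
    for _ in range(n):
        a = f(a)
    return a

# iterates from very different starting points all land on the same c
limits = [iterate(a) for a in (-50.0, 0.0, 3.7, 50.0)]
```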
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3169756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $a^{2x-1} = b^{1-3y}$ and $a^{3x-1} = b^{2y-2}$, show $13xy = 7x +5y -3.$ If $a^{2x-1} = b^{1-3y}$ and $a^{3x-1} = b^{2y-2}$, show $13xy = 7x +5y -3.$
I apologize in advance if this forum finds this question trivial, but I am desperate for any help and will appreciate it. Thanks.
| As suggested in J. W. Tanner's comment to the question, assuming that $2x - 1 \neq 0$ and $3x - 1 \neq 0$, then taking appropriate roots of both sides gives that
$$a^{2x-1} = b^{1-3y} \; \Rightarrow a = b^{\frac{1-3y}{2x-1}} \tag{1}\label{eq1}$$
$$a^{3x-1} = b^{2y-2} \; \Rightarrow a = b^{\frac{2y-2}{3x-1}} \tag{2}\label{eq2}$$
Assuming that $b$ is not $-1,0$ or $1$, the powers of $b$ must be equal in \eqref{eq1} and \eqref{eq2}, so
$$\frac{1-3y}{2x-1} = \frac{2y-2}{3x-1} \tag{3}\label{eq3}$$
You can now cross-multiply and simplify to get the requested equality, plus you will also need to show, for the cases not covered by this solution technique, that the equality still holds, or that you can't determine that it does (e.g., if $a = b = 0$ or $a = b = 1$).
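The cross-multiplication can be spot-checked numerically: solving \eqref{eq3} for $y$ gives $y=\frac{3-7x}{5-13x}$, and the target identity then holds for any admissible $x$ (a sketch; the sample points avoid $x=\frac12,\frac13,\frac{5}{13}$):

```python
from math import isclose

def y_from_x(x):
    # solve (1 - 3y)/(2x - 1) = (2y - 2)/(3x - 1) for y
    return (3 - 7 * x) / (5 - 13 * x)

checks = []
for x in (-2.0, 0.1, 0.7, 4.0):
    y = y_from_x(x)
    checks.append(isclose(13 * x * y, 7 * x + 5 * y - 3, abs_tol=1e-9))
```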
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3169927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to integrate a product of three terms? (includes an exponential and trig. function) Given a periodic signal with $T=2$, I need to find the complex Fourier coefficient $C_n$, using the following formula:
$$C_n = \frac 1 T \int_{-T/2}^{T/2} f(t) \, e^{-jn\omega t}\, dt$$
where
$$f(t)=t^2 \cos(3 \pi t)$$
My integral becomes a product of three terms. I tried to look this integral up in integral tables online but could not find anything. Closest I could find is #107 from http://integral-table.com/downloads/single-page-integral-table.pdf Does a formula exist for this particular integral?
| In my humble opinion, I think that the easiest is to consider two integrals
$$A=\int t^2 \cos(3 \pi t) \, e^{-inwt}\, dt\qquad \text{and}\qquad B=\int t^2 \sin(3 \pi t) \, e^{-inwt} \,dt$$ and use
$$C=A+iB=\int t^2 e^{i 3 \pi t} \, e^{-inwt}\, dt=\int t^2 e^{i (3 \pi -n \omega )t}\,dt$$
To make life easier, let $$x=i (3 \pi -n \omega )t\implies C=\frac{i }{(3 \pi -n \omega )^3}\int{ x^2 e^x}\,dx$$
Now, two integrations by parts to get the integral; when done, substitute back for $x$, recover $A$ (and $B$) from $C$, and finally apply the given integration bounds.
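The two integrations by parts give $\int x^2 e^x\,dx=(x^2-2x+2)e^x + C$; a quick numerical differentiation confirms this antiderivative (a sketch):

```python
from math import exp

def F(x):                        # claimed antiderivative of x^2 e^x
    return (x**2 - 2 * x + 2) * exp(x)

def num_deriv(g, x, h=1e-6):     # central difference
    return (g(x + h) - g(x - h)) / (2 * h)

ok = all(abs(num_deriv(F, x) - x**2 * exp(x)) < 1e-4
         for x in (-2.0, 0.0, 1.5, 3.0))
```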
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integral roots of cubic equation $x^3-27x+k=0$ The number of integers $k$ for which the equation $x^3-27x+k=0$ has at least two distinct integer roots is
(A)$1$
(B)$2$
(C)$3 $
(D)$4$
My Attempt: The condition for the cubic $x^3+ax+b=0$ to have $3$ real roots happens to be $4a^3+27b^2\leq0$. But how do I go about finding a condition for integer roots?
| Suppose $x^3 - 27x + k = 0$ has distinct integer roots $a$ and $b$; then
$$
a^3 - 27a = b^3 - 27b,
$$
or
$$
a^3 - b^3 = 27(a - b).
$$
Since, by hypothesis, $a\ne b$, a factor of $a-b$ can be removed, resulting in
$$
a^2 + ab + b^2 = 27.
$$
After multiplying by $4$, this can be rearranged into
$$
(2a + b)^2 + 3b^2 = 108.
$$
It follows that the integer $2a+b$ is a multiple of $3$, and has a square $\le 108$; thus $2a+b = 0,\pm3,\pm6$ or $\pm9$.
- If $2a+b = 0$, then $b^2 = 36$, so $b = \pm6$.
- If $2a+b = \pm3$, then $b^2 = 33$, so this has no integral solution.
- If $2a+b = \pm6$, then $b^2 = 24$, so this has no integral solution.
- If $2a+b = \pm9$, then $b^2 = 9$, so $b = \pm3$.
In the first case, we find $(a,b) = (-3,6)$ or $(3,-6)$. In the fourth case,
the four possible combinations of signs result in $(a,b) = (3,3), (6,-3), (-3,-3)$ or $(-6,3)$. Rejecting the cases with $a=b$, $(a,b) = (-3,6)$ or $(6,-3)$ results in $k = 27a - a^3 = -54$ and $(a,b) = (3,-6)$ or $(-6,3)$ results in $k = 54$. Thus there are two possible values of $k$.
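A brute-force search confirms the count (for $|k|\le 500$ any integer root of $x^3-27x+k=0$ satisfies $|x|\le 9$, so the window below is exhaustive):

```python
good_k = set()
for k in range(-500, 501):
    roots = {x for x in range(-10, 11) if x**3 - 27 * x + k == 0}
    if len(roots) >= 2:            # at least two DISTINCT integer roots
        good_k.add(k)
```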
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
On the integrals $\int_{-1}^0 \sqrt[2n+1]{x-\sqrt[2n+1]x} \mathrm dx$ Playing with integrals of type,
$$
I(n)=\int_{-1}^0 \sqrt[2n+1]{x-\sqrt[2n+1]x} \mathrm dx,
$$
$$
n \in \mathbb{N}
$$
I got two interesting results for the limiting cases $n=1$ and $n \to \infty$:
$$
\lim_{n \to \infty} I(n) = 1
$$
The second result is perhaps more interesting,
$$
I(1)=\int_{-1}^0 \sqrt[3]{x-\sqrt[3]x} \mathrm dx \approx \frac {\pi}{\sqrt {27}}
$$
The approximation is valid to $12$ places of decimal.
My question is straight, can we prove these results analytically? Any help would be appreciated.
| For $n \in \mathbb{N}$ we have
\begin{align}
I (n) &= \int \limits_{-1}^0 \left[x - x^{\frac{1}{2n+1}}\right]^{\frac{1}{2n+1}} \mathrm{d} x \stackrel{x = -y}{=} \int \limits_0^1 \left[y^{\frac{1}{2n+1}} - y\right]^{\frac{1}{2n+1}} \mathrm{d} y = \int \limits_0^1 y^{\frac{1}{(2n+1)^2}}\left[1 - y^{\frac{2n}{2n+1}}\right]^{\frac{1}{2n+1}} \mathrm{d} y \\
&\hspace{-10pt}\stackrel{y = t^{\frac{2n+1}{2n}}}{=} \frac{2n+1}{2n} \int \limits_0^1 t^{\frac{n+1}{n (2n+1)}} (1-t)^{\frac{1}{2n+1}} \mathrm{d}t = \frac{2n+1}{2n} \operatorname{B}\left(\frac{n+1}{n(2n+1)}+1,\frac{1}{2n+1} + 1\right) \, .
\end{align}
Using $\Gamma(x+1) = x \Gamma(x)$ we can rewrite this result to find
$$ I (n) = \frac{\operatorname{B} \left(\frac{1}{n} - \frac{1}{2n+1}, \frac{1}{2n+1}\right)}{2 (2n+1)} = \frac{1}{2(2n+1)} \frac{\operatorname{\Gamma}\left(\frac{1}{n} - \frac{1}{2n+1}\right) \operatorname{\Gamma}\left(\frac{1}{2n+1}\right)}{\operatorname{\Gamma}\left(\frac{1}{n}\right)}$$
for $n \in \mathbb{N}$. In particular,
$$ I(1) = \frac{\operatorname{\Gamma}\left(\frac{2}{3}\right) \operatorname{\Gamma}\left(\frac{1}{3}\right)}{6} = \frac{\pi}{6 \sin \left(\frac{\pi}{3}\right)} = \frac{\pi}{3 \sqrt{3}} \,.$$
Moreover, we obtain
$$ \lim_{n \to \infty} I (n) \stackrel{\Gamma(x) \, \stackrel{x \to 0}{\sim} \, \frac{1}{x}}{=} \lim_{n \to \infty} \frac{1}{2(2n+1)} \frac{\frac{n(2n+1)}{n+1} (2n+1)}{n} = \lim_{n \to \infty} \frac{2n+1}{2(n+1)} = 1 \, .$$
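A quick numerical sanity check of the closed form (the integral is evaluated in its substituted form over $[0,1]$ from the first line of the derivation; `math.gamma` stands in for $\Gamma$, and the helper names are mine):

```python
import math

def I_closed(n):
    """I(n) = Gamma(1/n - 1/(2n+1)) * Gamma(1/(2n+1)) / (2(2n+1) Gamma(1/n))."""
    return (math.gamma(1/n - 1/(2*n + 1)) * math.gamma(1/(2*n + 1))
            / (2*(2*n + 1) * math.gamma(1/n)))

def I_numeric(n, steps=200_000):
    """Midpoint rule for the substituted integral over [0, 1]."""
    p, h = 1/(2*n + 1), 1/steps
    return h * sum((((i + 0.5)*h)**p - (i + 0.5)*h)**p for i in range(steps))

print(I_closed(1), math.pi / (3*math.sqrt(3)))   # both ≈ 0.604599788
print(I_closed(2), I_numeric(2))                 # should agree to ~4 decimals
```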
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Level-set of constant rank It is well known that the regular value theorem holds. Does this generalization also hold? I cannot think of a counter-example.
$f:M\to N$ be a smooth map of two smooth manifolds of arbitrary dimension $m,n$. For a level set $S:=f^{-1}(q)\neq\emptyset$, if $\text{rank}(f)_p\equiv r$ for all $p\in S$, then S is a smooth sub-manifold of dimension $m-r$.
The regular value theorem is just a special case of it. But do we really need the surjectivity?
It does not hold: consider $S^1=\{(x,y)\in\mathbb{R}^2;x^2+y^2=1\}$ and the map $p_1:S^1\to\mathbb{R}$ defined by $p_1(x,y)=y$. Then $p_1^{-1}(1)=\{(0,1)\}$ and $p_1$ has rank $0$ at $(0,1)$ (if you parametrize by $\theta\mapsto(\cos(\theta),\sin(\theta))$, then $(0,1)$ has coordinate $\frac{\pi}{2}$ and
$$\frac{\partial}{\partial\theta}\hat{p_1}(\theta)\Big|_{\frac{\pi}{2}}=\frac{\partial}{\partial \theta}\sin(\theta)\Big|_{\frac{\pi}{2}}=0,$$
where $\hat{p_1}$ denotes $p_1$ read in these coordinates). But $\{(0,1)\}$ is not a manifold of dimension $1-0=1$.
What fails here is that the rank of $p_1$ is not constant in a neighborhood of $p_1^{-1}(1)$, and so we can't apply the constant rank theorem, which would give local coordinates of the expected form around your points. That's why we require that the map be a submersion at a point: by openness of being of maximal rank, you know that you will stay of maximal rank in a neighborhood of your preimage, and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The canonical open set which is equal to a set with the BP modulo meager sets is regular open (Kechris) A set $U$ in a topological space $X$ is called regular open iff $U=(\overline{U})^°$.
Exercise $(8.30)$ (Kechris, "Classical Descriptive Set Theory")
Prove that
$$U(A)=\bigcup \{U\,\text{open}\mid U\Vdash A\}$$
is regular open.
Moreover, if $X$ is a Baire space and $A$ has the BP (Baire property), then $U(A)$ is the unique regular open set $U$ with $A=^*U$.
Actually, the first part of this question has already been asked, but the answer doesn't seem to work (at least for me): indeed, what the hint allows to prove is that $A$ is comeager in $(\overline{U(A)})^°$, but how does the emptyness of $(\overline{U(A)})^°\setminus U(A)$ follow?
For the second part, assume that $A=^*V$ for some open regular open set in $X$. I don't see how the assumption should help.
NOTE: I decided to ask for it once again because the user Brian M. Scott seems to be no more active on this site .
The aforementioned hint is that $U(A)\Vdash A$. As you indicate, this allows one to show $(\overline{U(A)})^°\Vdash A$.
By the hint and its definition, $U(A)$ is the largest open set $U$ such that $U\Vdash A$. On the other hand, $U(A)\subseteq (\overline{U(A)})^°$. Hence $U(A) = (\overline{U(A)})^°$ and therefore it is regular open. (I fail to see which emptiness should be relevant here.)
For the second part, assume $A$ has the BP and $A=^*V$ with $V$ regular open. Then $V\Vdash A$ and hence $V\subseteq U(A)$.
From $A=^*V$ we also conclude that $A\setminus V$ is meager, and hence $(U(A)\setminus A) \cup (A\setminus V) \supseteq U(A) \setminus V$ is meager; since $X$ is a Baire space, $U(A)\setminus V$ has empty interior. In particular the open set $U(A)\setminus \overline{V}$ is empty, so $U(A)\subseteq\overline{V}$, and therefore $U(A) \subseteq (\overline{V})^° = V$. We have both inclusions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$2$-dimensional Runge-Kutta for system of polynomial ODEs I have just started getting into ODEs, and have come across the Runge-Kutta method for numerically solving them. However, in playing around with them to model hypothetical situations, I came across the equations:
$$\begin{aligned} \dot x &= x - x^2 - y\\ \dot y &= xy - y^2 \end{aligned}$$
I was trying to think of how one would use Runge-Kutta methods to do this, and I couldn't figure it out. What is this type of differential equation called, and how can I simulate/solve it?
| The Runge-Kutta formulas for a system of differential equations are really the same as for a single equation, it's just that your dependent variable is a vector rather than a scalar. Write your system as
$$\dfrac{dX}{dt} = F(t, X(t))$$
where $X = (x, y)$. If you're using the classical fourth-order Runge-Kutta with step size $h$, your iteration is
$$ \eqalign{K_1 &= h F(t_n, X_n)\cr
K_2 &= h F(t_n + h/2, X_n + K_1 /2)\cr
K_3 &= h F(t_n + h/2, X_n + K_2 /2)\cr
K_4 &= h F(t_n + h, X_n + K_3)\cr
t_{n+1} &= t_n + h\cr
X_{n+1} &= X_n + (K_1 + 2 K_2 + 2 K_3 + K_4)/6\cr}$$
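Here is a minimal, dependency-free sketch of the iteration above applied to the system in the question ($\dot x = x - x^2 - y$, $\dot y = xy - y^2$); the initial condition and step size are arbitrary choices for illustration:

```python
def F(t, X):
    """Right-hand side of the system, with X = (x, y)."""
    x, y = X
    return (x - x*x - y, x*y - y*y)

def rk4_step(t, X, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    K1 = tuple(h * f for f in F(t, X))
    K2 = tuple(h * f for f in F(t + h/2, tuple(x + k/2 for x, k in zip(X, K1))))
    K3 = tuple(h * f for f in F(t + h/2, tuple(x + k/2 for x, k in zip(X, K2))))
    K4 = tuple(h * f for f in F(t + h,   tuple(x + k   for x, k in zip(X, K3))))
    X_new = tuple(x + (k1 + 2*k2 + 2*k3 + k4) / 6
                  for x, k1, k2, k3, k4 in zip(X, K1, K2, K3, K4))
    return t + h, X_new

t, X, h = 0.0, (0.5, 0.1), 0.01   # arbitrary initial condition and step size
for _ in range(100):              # integrate up to t = 1
    t, X = rk4_step(t, X, h)
print(t, X)
```

Halving $h$ and re-integrating should change the result only at roughly the $h^4$ level, which is a convenient correctness check.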
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\sum_{n=0}^{\infty}\frac{x^n}{n!}$ is not uniformly convergent on $(0, +\infty)\ni x$.
Prove that $\sum_{n=0}^{\infty}\frac{x^n}{n!}$ is not uniformly
convergent on $(0, +\infty)\ni x$.
I wanted to do it by applying Cauchy's test and coming to a contradiction (i.e. $\epsilon>$ positive constant):
$$\epsilon>|\frac{x^{n+1}}{(n+1)!}+\frac{x^{n+2}}{(n+2)!}+\cdots+\frac{x^{n+p}}{(n+p)!}|$$
$$\epsilon>|\frac{x^{n+1}}{(n+1)!}\cdot(1+\frac{x}{n+2}+\cdots+\frac{x^{p-1}}{(n+2)\cdots(n+p)})|.$$
The expressions within the ||'s reminded me of $e^x=1+x+\cdots$, but after several tries I still wasn't able to evaluate. I've also tried standard method of 'find the smallest one in the brackets and take it $p$ times', but that didn't lead anywhere.
Another idea where I took $x$ to be an expression involving $n$ worked, but is it even legal? $x$ is supposed to be fixed, and $n$ is arbitrarily large, so I don't know.
Hints or full answers, I will be very grateful.
Much easier if we can use that the sum is the exponential: the partial sum is a polynomial and the exponential grows faster than any polynomial, so for any $n$
$$\sup_{x\in(0,\infty)}\left|e^x - \sum_{k=0}^n\frac{x^k}{k!}\right| = \infty$$
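One way to justify the last display quantitatively: for any fixed $n$ and $x>0$, keeping only the first term of the tail gives

```latex
e^x-\sum_{k=0}^n\frac{x^k}{k!}=\sum_{k=n+1}^\infty\frac{x^k}{k!}\;\ge\;\frac{x^{n+1}}{(n+1)!}\xrightarrow[x\to\infty]{}\infty,
```

so the supremum is infinite for every $n$, and the uniform Cauchy criterion fails on $(0,\infty)$.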
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3170836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Calculating the Kernel, dimension of linear equations, real numbers and galois field Given these problems below, how would one calculate the result?
By intuition, I have managed to solve two of them but cannot crack the last one.
Btw, I am not sure that the approach of my intuition is the right one.
I am interested in learning about how to solve these (formulas and approach).
Problems Below:
1) Let $f: ℝ^7 \rightarrow ℝ^4$ be a surjective (onto) linear function. What is the dimension of $Ker \space f$? (answer = 3)
2) Let $f: GF(2)^8 \rightarrow GF(2)^9$ be a linear function with $dim \space Im \space f=5$. How many vectors are in $Ker f$? (answer = 8)
3) Let $f: ℝ^7 \rightarrow ℝ^{14}$ be a injective (one-to-one) linear function. Determine $dim \space Im \space f$? (answer = 7)
Could determine the answers for problem 1 and 3 by intuition (but not by formula). Here is the intuition below:
1 - Intuition: The function is surjective so the dimension must be $7 - 4 = 3$
3 - Intuition: The function is injective so the dimension must be $14 - 7 = 7$
(First question on site, so open for constructive feedback regarding the question.)
| All you need here is the dimension theorem: if $f:U\to V$ is any linear map, then
$$\dim\ker f+\dim\mathrm{im} f\,=\, \dim U$$
Choose bases $u_1,\dots, u_k$ for $\ker f$ and $v_1,\dots, v_r$ for $\mathrm{im} f$ and arbitrary preimages $w_j$ of $v_j$. Then show $u_1,\dots, u_k, w_1,\dots, w_r$ is a basis of $U$.
This resolves your uncertainty for 1) and 3), and gives $\dim\ker f=3$ for 2).
So $\ker f\cong GF(2)^3$ (by choosing coordinates for the vectors), which has $2^3 = 8$ elements.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$F(x) = L(1, \chi ) \log x + O(1)$ I wish to prove $$F(x) = L(1, \chi ) \log x + O(1)$$
when $A(n) = \sum_{d|n} \chi (d)$ and $F(x) = \sum_{n \leq x} \frac{A(n)}{n}$
I started of course by substituting $A(n)$ in $F(x)$, which becomes a horrible double sum.
Knowing that: $L(1, \chi) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n}$
Any help appreciated.
| Let $\chi(n)$ be $q$-periodic and $\sum_{n=1}^q \chi(n)=0$. For any $a \le q$ $$ \sum_{n \le x, n \equiv a \bmod q} \frac1n = \frac{\log(x)}q+C(a)+O(1/x)$$
$$\sum_{n \le x} \frac{\chi(n)}{n} = \sum_{a=1}^q \chi(a)\sum_{n \le x, n \equiv a \bmod q} \frac1n = \sum_{a=1}^q \chi(a) ( \frac{\log(x)}q+C(a)+O(1/x)) \\= \sum_{a=1}^q \chi(a) C(a)+ O(1/x)$$
Letting $x \to \infty$ gives
$$ \sum_{a=1}^q \chi(a) C(a) = L(1,\chi)$$
Therefore
$$\sum_{n \le x} \frac{\sum_{d | n} \chi(d)}{n} = \sum_{md \le x}\frac{\chi(d)}{md}= \sum_{m \le x} \frac1m\sum_{d \le x/m} \frac{\chi(d)}{d} =\sum_{m \le x} \frac1m (L(1,\chi)+O(\frac{m}x))\\= L(1,\chi) \log x+O(1)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Showing that $ \int_0^\pi \frac{\sin{(kx)}}{\sin(x)} d x = \pi $ for odd integer $k$ I am trying to show that
$$ \int_0^\pi \frac{\sin{(kx)}}{\sin(x)} d x = \pi $$
for odd integer $k$.
It seems like this could be done using multiple angle formulae, but I'm stuck.
I can get to
$$ \int_0^\pi \frac{\sin{((2N+1)x)}}{\sin(x)} d x $$
$$ = \sum_{\ell = 0} ^N (-1)^\ell \frac{(2N+1)!}{(2\ell+1)!(2(N-\ell))!} \int_0 ^\pi (\cos(x))^{2(N-\ell)} (\sin(x))^{2\ell} d x, $$
using a multiple angle formula due to Viete (from the trig formulae Wikipedia page), but I don't know what to do from here. Is there a simpler way?
| Note that $$\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$$
Then $$\frac{\sin(kx)}{\sin(x)}=\frac{e^{ikx}-e^{-ikx}}{e^{ix}-e^{-ix}}=\frac{(e^{ix})^k-(e^{-ix})^k}{e^{ix}-e^{-ix}}$$
Using (n is odd)$$a^n-b^n=(a-b) \sum _{j=0}^{n-1} a^j b^{n-1-j}$$
We have $$\frac{\sin(kx)}{\sin(x)}=\sum _{j=0}^{k-1} (e^{ix})^j (e^{-ix})^{k-1-j}=\sum _{j=0}^{k-1} e^{i(2j-(k-1))x}$$
$$\sum _{j=0}^{k-1} e^{i(2j-(k-1))x}=\sum _{j=0}^{k-1} \frac{e^{i(2j-(k-1))x}+ e^{-i(2j-(k-1))x}}{2}=\sum _{j=0}^{k-1}\cos((2j-(k-1))x)$$
It should be easy to move on now
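To spell out the remaining step: $\int_0^\pi\cos(mx)\,dx=\frac{\sin(m\pi)}{m}=0$ for every nonzero integer $m$, and since $k$ is odd the frequency $2j-(k-1)$ vanishes exactly for $j=\frac{k-1}{2}$, so

```latex
\int_0^\pi\frac{\sin(kx)}{\sin(x)}\,dx
=\sum_{j=0}^{k-1}\int_0^\pi\cos\bigl((2j-(k-1))x\bigr)\,dx
=\int_0^\pi 1\,dx=\pi.
```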
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 2
} |
How to show that the given matrix has non-zero determinant Let $p,q$ be primes where $p<q$.
Show that the following matrix has non-zero determinant:
\begin{bmatrix}
1&2 & 2 & 2 &\dotso & 2\\
2&q-p+1 & 1 & 1 &\dotso & 1\\
2& 1 & q-p+1 & 1 & \dotso & 1\\2&1 & 1 & q-p+1 &\dotso & 1 \\ \dotso &\dotso & \dotso & \dotso & \dotso \\ \dotso & \dotso & \dotso & \dotso & \dotso \\ \dotso & \dotso & \dotso & \dotso &\dotso
\\2&1 &1 &1 &\dotso & q-p+1
\end{bmatrix}
I am able to show that the submatrix of this matrix \begin{bmatrix}q-p+1 & 1 & 1 & 1 & \dotso & 1\\ 1 &q-p+1& 1 &\dotso & \dotso &1 \\ \dotso &\dotso & \dotso & \dotso & \dotso \\ \dotso & \dotso & \dotso & \dotso & \dotso \\ \dotso & \dotso & \dotso & \dotso &\dotso
\\1 &1 &1 &\dotso & \dotso &q-p+1\end{bmatrix}
has determinant non-zero.
How I can show that the original matrix has determinant non-zero?
I tried using Laplace Expansion but not getting anything.
Please help.
If you subtract $2$ times the first row from all other rows, then you see that the determinant of the full matrix is equal to the determinant of a matrix with diagonal entries $q-p-3$ and off-diagonal entries $-3$. This matrix can be written as
$$
(q-p)I - 3 E,
$$
where $E$ is the matrix with all entries one. The matrix $E$ of dimension $(n-1)\times (n-1)$ has eigenvalues $n-1$ (multiplicity 1) and $0$ (multiplicity $n-2$). Thus the matrix $(q-p)I - 3 E$ has eigenvalues
$q-p-3(n-1)$ (with multiplicity $1$) and $q-p$ (with multiplicity $n-2$).
Hence the determinant of the original $n\times n$ matrix as product of the eigenvalues is
$$
\det = (q-p-3(n-1))(q-p)^{n-2}.
$$
This matrix is singular for, e.g., $n=2$, $p=2$, $q=5$, where the matrix is equal to
$\pmatrix{1&2\\2& q-p+1}=\pmatrix{1&2\\2& 4}$
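An exact-arithmetic check of the formula $\det = (q-p-3(n-1))(q-p)^{n-2}$ for a few values of $n,p,q$; the helper functions below are mine, not part of the original argument:

```python
from fractions import Fraction

def det(M):
    """Exact determinant via Fraction-based Gaussian elimination with pivoting."""
    M = [[Fraction(v) for v in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f*b for a, b in zip(M[r], M[c])]
    return sign * d

def build(n, p, q):
    """The n x n matrix from the question: first row (1,2,...,2),
    then rows (2, 1, ..., q-p+1, ..., 1) with q-p+1 on the diagonal."""
    rows = [[1] + [2]*(n - 1)]
    for i in range(1, n):
        rows.append([2] + [q - p + 1 if j == i else 1 for j in range(1, n)])
    return rows

for n, p, q in [(2, 2, 5), (3, 2, 7), (4, 3, 11), (5, 2, 13)]:
    formula = (q - p - 3*(n - 1)) * (q - p)**(n - 2)
    print((n, p, q), det(build(n, p, q)), formula)   # the two values agree
```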
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
About the Radius of Convergence of $ \sum_{n\ge 0}a_n z^n $
Fix $ \delta>0 $ and let
$$ \Omega=\{ z\in\mathbb C:|z|<1 \}\cup\{ z\in\mathbb C:|z-1|<2\delta \} .$$
Assume that $ f(z) $ is a holomorphic function on $ \Omega $ which has a Taylor series expansion $ \sum_{n\ge 0}a_nz^n $ at $ z=0 $ such that $ a_n $ is a non-negative real number for all $ n\ge 0 $.
(A) Prove that the derivatives $ f^{(k)}(1) $ are real for all $ k\ge 0 $ and, moreover, $$ f^{(k)}(1)\ge\frac{m!}{(m-k)!}a_m $$ for all $ 0\le k\le m $.
(B) Prove that $ \sum_{n\ge 0}a_n z^n $ has radius of convergence strictly greater than $ 1 $.
My attempt:
(A) Since $ f(z)=\sum_{n\ge 0}a_nz^n $ when $ |z|<1 $, we have
\begin{align}
f(z)&=\sum_{n\ge 0}a_n[(z-1)+1]^n\\
&=\sum_{n\ge 0}a_n\sum_{m=0}^n\binom{n}{m}(z-1)^m\\
&=\sum_{k\ge 0}\left[\sum_{n\ge k}a_n\binom{n}{k}\right](z-1)^k\\
&=\sum_{k\ge 0}\frac{f^{(k)}(1)}{k!}(z-1)^k
\end{align}
for $$ z\in\{z:|z|<1\}\cap\{ z:|z-1|<2\delta \} .$$
\begin{align}
&\implies \frac{f^{(k)}(1)}{k!}=\sum_{n\ge k}a_n\binom{n}{k}\\
&\implies\frac{f^{(k)}(1)}{k!}=\sum_{n\ge k}a_n\frac{n!}{k!(n-k)!}\\
&\implies f^{(k)}(1)=\sum_{n\ge k}a_n\frac{n!}{(n-k)!}
\end{align}
Hence $ f^{(k)}(1) $ are real for all $ k\ge 0 $ and since $ a_n\ge 0 $, we have
$$ f^{(k)}(1)\ge\frac{m!}{(m-k)!}a_m\quad\text{for all}\ 0\le k\le m .$$ So we have proved (A).
(B) Since there must exist at least one singular point on the boundary of the disk of convergence, it suffices to prove that there exists a singular point on $ \{ z\in\mathbb C:|z|=1 \} $. Then I am stuck...... Any hint?
| Note that if $|z| < 1+\delta$, \begin{align}\sum{|a_nz^n|} &\le \sum{a_n(1+\delta)^n}\\
&=\sum_{n \ge 0}\left(a_n\sum_{k=0}^{n}\binom{n}{k}{\delta}^k\right)\\
&=\sum_{k\ge 0}\left({\delta}^k\sum_{n\ge k}a_n\binom{n}{k}\right)\\
&=\sum_{k=0}^\infty\frac{f^{(k)}(1)}{k!}\,\delta^k < \infty,\end{align} hence $f(z)$ has a Taylor series defined in the (open) disc with radius $1+\delta$
The switching of the double sum is allowed by the non-negativity of all terms, as $a_{mn}\ge 0$ implies $$\sum_{m}\Big(\sum_{n}a_{mn}\Big)=\sum_{n}\Big(\sum_{m}a_{mn}\Big)=\sup_{I,J}\sum_{m\in I,\,n\in J}a_{mn}$$ with the supremum taken over all finite sets $I,J$ of natural numbers; the double sums are then either both finite and equal, or both infinite.
Note that this result is also known as the power series version of Landau's Theorem (Landau's Theorem being much better known for Dirichlet series, where it is slightly more difficult to prove than here), stating that if a power series with radius of convergence precisely $r>0$ has non-negative coefficients (for all $n$ high enough), then it must have a singularity at $z=r$. In particular, here $r=1$ cannot be the radius of convergence of $f$, as $1$ is not a singular point by hypothesis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What does 1+1≠0 mean? I am using Schaum's Outline of Linear Algebra, in which there is a result that the notions of alternating and skew-symmetric bilinear forms are equivalent provided that 1+1≠0, i.e. a bilinear form f satisfying
f(v,v)=0 also satisfies f(u,v)=-f(v,u), and
one satisfying f(u,v)=-f(v,u) also satisfies f(v,v)=0,
provided that 1+1≠0.
I am confused with this sentence "1+1≠0"
Can anybody tell me what does it mean?
Thanks in advance
| If you are working over the field $\mathbb F_2$, then you'll have $1+1=0$. More generally, the fields for which this equality holds are called fields with characteristic $2$. So, that book assumes that we are not working over such a field.
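For reference, here is where the hypothesis enters (the first implication uses $f(u,u)=f(v,v)=0$ and needs no assumption on the field; the second genuinely needs $1+1\neq 0$, i.e. that $2$ is invertible):

```latex
\begin{aligned}
0 &= f(u+v,\,u+v) = f(u,u)+f(u,v)+f(v,u)+f(v,v) = f(u,v)+f(v,u)
  \;\Longrightarrow\; f(u,v)=-f(v,u),\\
f(v,v) &= -f(v,v) \;\Longrightarrow\; (1+1)\,f(v,v)=0
  \;\Longrightarrow\; f(v,v)=0 \quad\text{if } 1+1\neq 0.
\end{aligned}
```

Over a field of characteristic $2$, such as $\mathbb F_2$, the second implication fails.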
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3171874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Weak solutions to $\Delta u=f$ are in $W^{2,2}$ I believe the following statement is true.
Let $\Omega$ be a smoothly, bounded domain in $\mathbb{R}^{n}$.
The statement:
Let $u\in H^{1}(\Omega)$ so that
there exists $f\in L^{2}(\Omega) \;s.t.\int_{\Omega}\nabla u\nabla \varphi=\int_{\Omega} f\varphi, \forall \varphi\in H^{1}(\Omega)$.
Then $u\in H^{2}(\Omega)$.
I have searched in the book by Evans and Brezis but not so certain. Could anyone provide a reference for that? It can be seen in Evans's book that $u\in H^{2}_{loc}(\Omega)$.
Thanks so much.
| Please check a general elliptic regularity result ($L^p$ version) on:
Dauge, Monique. Elliptic boundary value problems on corner domains: smoothness and asymptotics of solutions, 1988
Theorem 20.10 (together with the explanation of notation above the theorem), essentially we have the following regularity result:
$$
\|u\|_{H^2} \leq C \|\Delta u\|_{L^2}.
$$
i.e., in your case, $\Delta u = f$, which is implied by the fundamental lemma of the calculus of variations. For a more specific version in the $L^2$ sense, please refer to Grisvard's book Elliptic Problems in Nonsmooth Domains, $\S 2.3.3$, where the whole chapter 2 deals with weak solutions.
BTW: please be careful with the test function you choose, if your test function is in $H^1(\Omega)$, not in $H^1_0(\Omega)$, this is a Neumann problem and your $u$ and $f$ should satisfy certain compatibility condition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Mathematical model equation PDE I am studying models in PDE and the professor gave us this problem:
$\begin{cases}u_t+au_x=f(x,t)\\ u(x,0)=g(x)\end{cases}$
I've studied the transport equation and Burgers' equation, and a little about the heat equation. However, this one is not included in those kinds of problems.
I would be very grateful if someone could help me with indications toward the solution.
Thanks.
| We can find the characteristics with this system of ODE's:
$\dfrac{dt}{1}=\dfrac{dx}{a}=\dfrac{du}{f(x,t)}$
From the first proportion, $x=c_1+at$. Now:
$\dfrac{dt}{1}=\dfrac{du}{f(at+c_1,t)}$ or
$f(at+c_1,t)dt=du$ Integrating,
$$u(x,t)=\int_0^tf(ar+c_1,r)dr+c_2=\int_0^tf(ar+x-at,r)dr+c_2$$
For the general solution we have to consider that $c_1$ and $c_2$ are somehow related: $c_2=h(c_1)$, with $h$ some single-variable differentiable function to be determined from the initial conditions: $c_2=h(x-at)$:
$$u(x,t)=\int_0^tf(ar+x-at,r)dr+h(x-at)$$
Finally we can impose the i.c. $u(x,0)=g(x)$
$u(x,0)=g(x)=h(x)$ leading to
$$u(x,t)=\int_0^tf(ar+x-at,r)dr+g(x-at)$$
Added
I just finished and see the link in the comments to your post. There is the answer, but it is proposed and checked, not deduced, so I post mine as it shows a way to get the solution using the method of characteristics.
$$u(x,t)=g(x-at)+\int_0^tf(x-a(t-r),r)dr$$
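The formula can be sanity-checked numerically; the sample data $a=1$, $f(x,t)=x+t$, $g(x)=\sin x$ below are arbitrary choices of mine (for them the formula reduces to $u=\sin(x-t)+xt$):

```python
import math

a = 1.0
f = lambda x, t: x + t       # arbitrary sample source term
g = lambda x: math.sin(x)    # arbitrary sample initial data

def u(x, t, steps=2000):
    """u(x,t) = g(x - a t) + integral_0^t f(x - a(t - r), r) dr (midpoint rule)."""
    h = t / steps
    integral = h * sum(f(x - a*(t - (i + 0.5)*h), (i + 0.5)*h) for i in range(steps))
    return g(x - a*t) + integral

# check u_t + a u_x = f by central finite differences at a sample point
x0, t0, eps = 0.7, 0.9, 1e-4
u_t = (u(x0, t0 + eps) - u(x0, t0 - eps)) / (2*eps)
u_x = (u(x0 + eps, t0) - u(x0 - eps, t0)) / (2*eps)
print(u_t + a*u_x, f(x0, t0))   # both ≈ 1.6
```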
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding values for K such that the roots of the quadratic are strictly imaginary: $x^2+\left(K^2+3K-7\right)x+K$ $x^2+\left(K^2+3K-7\right)x+K$
I'd like to know the general approach to finding the values of $K$ for which the roots of this equation have a specific property, such as being strictly imaginary, or having only a positive real part, or only a negative real part.
Bonus points if the approach applies to a general quadratic of the form $ax^2+bx+c$, what equations should be satisfied if I want specific properties for the roots? Extra bonus points for the same question, but for cubic and quartic equations.
| Hint: The roots of a monic quadratic are strictly imaginary iff it is of the form $(x-bi)(x+bi)=x^2+b^2$.
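Applied to the polynomial in the question: the coefficient of $x$ must vanish, so $K^2+3K-7=0$, and for the roots $\pm i\sqrt{K}$ to be nonzero the constant term must satisfy $K>0$. A quick numeric check (variable names are mine):

```python
import math

disc = 3**2 - 4*1*(-7)                     # discriminant of K^2 + 3K - 7, i.e. 37
candidates = [(-3 + math.sqrt(disc)) / 2, (-3 - math.sqrt(disc)) / 2]
valid = [K for K in candidates if K > 0]   # need K > 0 so the roots are ±i*sqrt(K)
print(valid)                               # [(-3 + sqrt(37))/2 ≈ 1.5414]
```

So exactly one value of $K$ works here.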
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Baby Rudin Theorem 8.8: how did he find the two inequalities ((56) and the last one)?
Thanks!
| The last inequality is achieved as follows.
$ |Q(re^{i\theta})|$
$=|1+b_k r^k e^{ik\theta}+b_{k+1}r^{k+1}e^{i(k+1)\theta}+ \dots +b_{n}r^{n}e^{in\theta}|$
$\le|1+b_k r^k e^{ik\theta}|+|b_{k+1}r^{k+1}e^{i(k+1)\theta}|+ \dots +|b_{n}r^{n}e^{in\theta}|$
$=1-r^k |b_k|+|b_{k+1}|r^{k+1}+ \dots +|b_{n}|r^{n}$ (using the choice of $\theta$ in Rudin's proof, for which $b_k r^k e^{ik\theta}=-|b_k|r^k$ with $r^k|b_k|\le 1$)
$=1-r^k\left\{|b_k|-r|b_{k+1}|-\dots-r^{n-k}|b_{n}|\right\}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Grillet's "Abstract Algebra", p. 148, ex. 3: Noetherian subring of a ring
Let $R$ be a Noetherian subring of a commutative ring $S$. Suppose that $S = (R\cup\{b_1,...,b_m\})$ for some $b_1,...,b_m \in S$. Then $S$ is Noetherian.
I'm not sure how to approach this exercise. One idea was to take an ideal $J$ of $S$ and to look at the ideal $I = \{r \in R \mid s_1r + s_2b_1 + \cdots + s_{m+1}b_m$ for some $s_1,s_2,...,s_{m+1} \in S \}$ of $R$. More generally, it seems we need to represent an ideal of $J$ of $S$ as an ideal $I$ of $R$ plus some additional data.
Of course, $S = (1)$, hence $(1) = (R\cup\{b_1,...,b_m\})$, hence there are $r_1,...,r_n \in R$ and $s_1,...,s_{n+m}$ so that $1 = s_1r_1 + \cdots + s_nr_n + s_{n+1}b_1 + \cdots + s_{n+m}b_m$, but I'm not sure how we can use this.
| Hint: try finding a surjective ring homomorphism $R[x_1,\dots,x_m]\to S$, and then use the fact that $R[x_1,\dots,x_m]$ is Noetherian by the Hilbert basis theorem
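A sketch of how the hint plays out (notation is mine): consider the evaluation homomorphism

```latex
\varphi : R[x_1,\dots,x_m] \longrightarrow S, \qquad \varphi(p) = p(b_1,\dots,b_m).
```

Its image is a subring of $S$ containing $R$ and each $b_i$, hence equal to $S$; so $S \cong R[x_1,\dots,x_m]/\ker\varphi$, and a quotient of a Noetherian ring is Noetherian.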
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using a generating function for piggy bank problem A piggy bank contains 45 loonies and 25 toonies. How
many ways can the coins be divided so that Jamie gets no loonies, Julie gets no toonies but
at least 10 loonies, and Brenda gets an odd number of toonies? Use generating functions to
solve the problem. (Note that you are only interested in how many of each type of coin each
sibling gets, not which specific coins they get).
So far, my only idea is to split the problem up into 2 generating functions, and then multiply the two answers:
For the loonies,
$g(x)=x^{10}(1+x+\cdots+x^{35})(1+x+x^2+\cdots+x^{45})
= x^{10}\frac{(x^{36}-1)}{x-1}\frac{(x^{46}-1)}{x-1}$
And I would do similarly for the toonies. My problem is that I'm not sure if this is even a way to do it, and my biggest problem is that I still don't understand how to find the coefficient of $x^{45}$ in the generating function I have shown, despite reading through all of my notes.
Thanks very much in advance for any help!!
| To answer your first question, your method is essentially correct. If you have $a$ ways of distributing the loonies and $b$ ways of distributing the toonies, then you have $ab$ ways of doing both. See also the rule of product and this video.
To answer your biggest question, you know that each loony will go to one of two people, and you know that 10 of the unlabeled loonies have a predetermined destination, so the question is: how can the other 35 be distributed into two groups? You want to know how many ways there are to write 35 as the sum of two nonnegative integers. Well, there's $0+35$, and $1+34$... surely you can count them all.
More generally, with a generating function you can take the Taylor series expansion centered at 0 to find the coefficient you want, but that's not really reasonable to do by hand with the 45th coefficient.
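The coefficient extraction can also be done mechanically. The sketch below (helper names are mine; it assumes every coin is handed out) multiplies the coefficient lists of the two generating functions, one for the loonies and one for the toonies, and reads off the coefficients of $x^{45}$ and $x^{25}$:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = exponent)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Loonies: Jamie gets none, Julie gets 10..45, Brenda gets the rest.
julie  = [1 if k >= 10 else 0 for k in range(46)]
brenda = [1] * 46
n_loonies = poly_mul(julie, brenda)[45]    # coefficient of x^45

# Toonies: Julie gets none, Brenda gets an odd number, Jamie gets the rest.
brenda_t = [k % 2 for k in range(26)]
jamie    = [1] * 26
n_toonies = poly_mul(brenda_t, jamie)[25]  # coefficient of x^25

print(n_loonies, n_toonies, n_loonies * n_toonies)   # 36 13 468
```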
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
probability of subset contained in dice rolls of custom dice with duplicate symbols I play a game where you roll 24 dice, 6 each of 4 different colors, with each color having a different set of numbers and math operations.
These are the dice colors and possible symbols; there are 6 dice of each color, 24 total:
red = 0, 1, 2, 3, +, -
blue = 0, 1, 2, 3, x, /
green = 4, 5, 6, -, x, ^
black = 7, 8, 9, +, /, √
In the game you set a goal of up to 6 dice and the rest of the game involves manipulating the dice to find something equivalent to the goal, but I'm only interested in the probability of setting a goal.
My question:
If I have a multiset of symbols that I need, say (x, 1, 1, 4, +, 3), call it $D$: what is the probability that all the symbols of $D$ appear when the 6 red, 6 blue, 6 green, and 6 black dice are rolled?
This problem is particularly tricky because some symbols repeat, like x is on blue and green cubes, 1 is on red and blue cubes, + is on red and black cubes.
I've tried to solve this problem myself by brute forcing it in python, but computing 6^24 dice rolls is not feasible for me, and I also tried computing each color as a separate instance, but I couldn't figure out how to find the probability.
I don't know much probability theory, I've taken statistics a while ago, and have looked up numerous questions on this forum; but I either don't know the right question to ask, or I don't find it already asked.
| The probability is $$\frac{635545571166992339}{2369190669160808448} \approx 0.268.$$
You can compute this by dynamic programming.
For a given multiset of symbols $S$ and a set of dice $D$, we will compute the probability $p(S,D)$ that all symbols in $S$ come out in a throw of $D$ as follows:
*
*Base case: $D$ is empty. If $S$ is empty then the probability is 1, otherwise it's 0.
*If $D$ is not empty, choose $d \in D$ arbitrarily. Go over all possible realizations $x$ of $d$. For each of them, if $x \in S$, let $q_x = p(S \setminus \{x\}, D \setminus \{d\})$; otherwise, let $q_x = p(S, D \setminus \{d\})$. The answer is $\sum_x q_x/|d|$ (in your case, $|d| = 6$).
The easiest way to compute this efficiently is using memoization, which is how I computed the number above.
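A memoized implementation of this recursion with exact rational arithmetic; the face encoding below is my own reading of the question's symbol lists (and `√` is a single character in the strings):

```python
from fractions import Fraction
from functools import lru_cache

RED   = ('0', '1', '2', '3', '+', '-')
BLUE  = ('0', '1', '2', '3', 'x', '/')
GREEN = ('4', '5', '6', '-', 'x', '^')
BLACK = ('7', '8', '9', '+', '/', '√')
DICE  = (RED,)*6 + (BLUE,)*6 + (GREEN,)*6 + (BLACK,)*6   # 24 dice in a fixed order

TARGET = ('x', '1', '1', '4', '+', '3')

@lru_cache(maxsize=None)
def prob(needed, k):
    """P(every symbol of the multiset `needed`, a sorted tuple, shows up
    when the dice DICE[k:] are rolled)."""
    if not needed:
        return Fraction(1)
    if k == len(DICE):
        return Fraction(0)
    total = Fraction(0)
    for face in DICE[k]:
        if face in needed:
            rest = list(needed)
            rest.remove(face)            # this die covers one needed symbol
            total += prob(tuple(rest), k + 1)
        else:
            total += prob(needed, k + 1)
    return total / 6

p = prob(tuple(sorted(TARGET)), 0)
print(p, float(p))   # ≈ 0.268
```

Processing the dice in a fixed order keeps the memo table tiny (remaining dice are determined by `k`, and `needed` ranges over sub-multisets of the target).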
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3172794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why does $\frac{X - aZ}{Z}$ have a double pole at the point $(0 : 1 : 0)$ and not just one (Divisors)? If I have an Elliptic Curve E and the function $\frac{X - aZ}{Z}$, I would have expected the divisor to be, defining a point $P = (a,b)$ and $-P = (a,-b)$, $div(f) = [P] + [-P] - [\infty]$.
Instead the correct solution would be $div(f) = [P] + [-P] - 2[\infty]$.
Where does this double pole come from? It would have made sense if the denominator would have been $Z^2$, but it is not.
| A rational function $f(x) := x - a$, for example, expressed in homogeneous coordinates is $f(X/Z) = (X - aZ)/Z$ which has a single zero in the numerator and a single pole given by the denominator. Thus a simple zero and a simple pole always appear together. In general, when there are multiple zeros "up to multiplicity", there will be an equal number of poles "up to multiplicity".
For example, if $f(x) := (x-a)(x-b)$, when it is expressed in homogeneous coordinates, then it becomes $$f(X/Z) = (X - aZ)(X - bZ)/Z^2$$ which has two simple zeros in the numerator and one double pole given by the denominator. Thus, $div(f) = [a] + [b] - 2[\infty].$ The key idea is that both zeros and poles can have "multiplicity" and this has to be taken into account so that they balance each other in all cases. When two simple zeros merge they become one double zero, and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there an orientable $3$-manifold with non-vanishing $w_2$? In the case that $M$ is a closed orientable $3$-manifold, using Wu's formula we can show $w_1(M) =0 \implies w_2(M) =0$, and so $w_3 = w_1w_2 + Sq^1 w_2 = 0$ (or you can use the fact that $\chi(M)=0$ for closed orientable manifolds with odd dimension). It can then be shown that in fact $M$ is parallelizable and orientedly null-bordant.
If $M$ is compact and orientable, then its boundary is an orientable surface and therefore bounds so we can complete $M$ to a closed manifold $\bar{M}$ where the same argument applies, and we can again compute $w_2(M) = 0$.
Therefore if we want an example of an orientable $3$-manifold with $w_2(M)\neq 0$ it needs to be non-compact. Does anyone know an example?
| All orientable three-manifolds $M$ are parallelizable. If you just want to deduce the noncompact case from the closed one, this requires little machinery.
Basically, you first find an exhaustion of $M$ by connected compact manifolds with boundary $M_k$. Then you inductively construct linearly independent vector fields $X,Y,Z$ on each $M_k$. Extending them from $M_k$ to $M_{k+1}$ is not trivial, and can be done e.g. by taking an appropriate harmonic extension. See my answer to a very similar question here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Second Order Differential Equation Assistance Hi Maths stack exchange!
I’m doing this question for homework,
$$y′′+4y′+4y=0$$
I managed to find the auxiliary equation.
$$y=(A+Bx)e^{-2x}$$
The issue is when I was looking at the solutions of what to do next, it said this should be the next line.
$$y'(x)=Be^{-2x}+(-2)(A+Bx)e^{-2x}=(B-2A-2Bx)e^{-2x}$$
I have no Idea how they got answer, I was wondering if anyone could shed some light on how this answer was obtained.
Thanks
~Neamus
| You have the correct idea so far; they got that line by differentiating with the product rule (together with the chain rule for $e^{-2x}$). If we take your progress so far $y(x)=(A+Bx)e^{-2x}$ and differentiate it with respect to $x$ we get: $$\frac{d}{dx}(y(x))=\frac{d}{dx}(Ae^{-2x}+Bxe^{-2x})=\frac{d}{dx}(Ae^{-2x})+\frac{d}{dx}(Bxe^{-2x})$$ From this we can split it into
$$\frac{d}{dx}(Ae^{-2x})=-2Ae^{-2x}$$ The crucial step here is the product rule: $$\frac{d}{dx}(Bxe^{-2x})=Bx\frac{d}{dx}(e^{-2x})+e^{-2x}\frac{d}{dx}(Bx)$$ This simply resolves to $$\frac{d}{dx}(Bxe^{-2x})=-2Bxe^{-2x}+Be^{-2x}$$ So when we put it all back together it gives
$$\frac{d}{dx}(y(x))=y'(x)=-2Ae^{-2x}-2Bxe^{-2x}+Be^{-2x}$$
Thus by factorising this you get the answer in the form given.
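A quick numerical sanity check of the final formula (the constants $A=1$, $B=2$ are arbitrary, chosen only for the check, not taken from the problem): compare the claimed derivative $(B-2A-2Bx)e^{-2x}$ against a central finite difference, and confirm that $y$ satisfies the original ODE.

```python
import math

A, B = 1.0, 2.0          # arbitrary constants, just for the check

def y(x):
    return (A + B * x) * math.exp(-2 * x)

def y_prime(x):
    # the derivative from the solution: (B - 2A - 2Bx) e^{-2x}
    return (B - 2 * A - 2 * B * x) * math.exp(-2 * x)

x0 = 0.7
h = 1e-5
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
assert abs(numeric - y_prime(x0)) < 1e-8

# y also satisfies the original ODE y'' + 4y' + 4y = 0
h2 = 1e-4
second = (y(x0 + h2) - 2 * y(x0) + y(x0 - h2)) / h2 ** 2
assert abs(second + 4 * y_prime(x0) + 4 * y(x0)) < 1e-5
```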
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If A is nowhere dense, then its complement X \ A is dense in X. Let $X$ be a topological space. $A$ is nowhere dense in $X$
if the interior of the closure of $A$ is empty.
I have to prove that if A is nowhere dense, then its complement X \ A is dense in X.
I tried to prove it, even looking a similar question but I could not prove it.
Could anyone give me a valid proof??
| If $A$ is nowhere dense, then $\operatorname{int}(\bar{A})=\emptyset$. This means that $\bar{A}$ contains no non-empty open set. Let $U\subseteq X$ be a non-empty open set. Then $U\not\subseteq\bar{A}$ (otherwise $U\subseteq\operatorname{int}(\bar{A})=\emptyset$), so in particular $U\not\subseteq A$, which means $U\cap A^c\neq\emptyset$. Thus $A^c$ meets every non-empty open set, and is therefore dense in $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Graphing the function $(-2)^x$ When I wanted to graph $y=(-2)^x$ many graphing calculator apps refused to plot it. TI-Nspire CAS plotted it as shown in the first picture. I think the plot is not correct as only the envelopes should be there with no values between the envelopes as shown in the second picture and the $(-2)^x$ should look like $2^x$ and $-(2^x)$ plotted on the same graph but of course with many discontinuities as explained in my analysis below. Am I right?
$(-2)^x$ as graphed by the TI-Nspire
this is what I think it should be
Here’s my analysis of the function:
f(x)=(-2)^x
First: when x >=0
* if x is an integer, x>=0
(-2)^0=1
(-2)^1=-2
(-2)^2=4
(-2)^3=-8
(-2)^4=16
(-2)^x oscillates back and forth
When x is an even integer (-2)^x is positive.
When x is an odd integer (-2)^x is negative.
* when x is a rational number, x>0
let x=p/q , p>0, q>0
(-2)^x=(-2)^(p/q)=((-2)^p)^(1/q)
if p is even and q is odd, (-2)^x is a positive real value
Example: (-2)^(100/51)= 3.8927
if p is odd and q is odd, (-2)^x is a negative real value.
Example: (-2)^(99/51)=-3.8402
if p is odd and q is even, (-2)^x is an imaginary value (not defined in the set of real numbers)
Example: (-2)^(99/50)=i 3.9449
(Also in all the above cases, if you extend your analysis to include complex, we get q complex roots.)
In the domain of rational numbers, (-2)^x oscillates or is undefined (imaginary or complex)
* When x is irrational, x>0
(-2)^x has no real value. It has an infinite number of complex roots.
Second: when x<0
(-2)^x=1/(-2)^|x|
Use the same approach above to analyze the behavior of the function.
| The graph of $f(x)=(-2)^x$ is problematic for real numbers $x$. Think about what happens when $x=\frac{1}{2}$. Then $f(x)=(-2)^{\frac 1 2}=\sqrt{-2}$. Can you see why this is a problem to graph?
The graph of $f(x)=(-2)^x$ only makes sense for integer values of $x$. Also, as zwim pointed out in the comments, your second graph is not the graph of a function, as it is multivalued, i.e. one input of $x$ gives two outputs of $f(x)$.
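To see this concretely, one can evaluate the principal value of $(-2)^x$ with complex arithmetic, via $(-2)^x = e^{x\log(-2)}$ where $\log(-2)=\ln 2 + i\pi$; for non-integer $x$ the imaginary part is nonzero, which is why a real-valued plot fails:

```python
import cmath

def neg2_pow(x):
    # principal branch: (-2)^x = exp(x * Log(-2)), with Log(-2) = ln 2 + i*pi
    return cmath.exp(x * cmath.log(-2))

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(x, neg2_pow(x))
# x = 0.5 gives ~1.414i (purely imaginary) and x = 1.5 gives ~-2.828i:
# between the integer points there is simply no real value to plot
```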
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
An interesting inequality with condition If $a,b,c$ are positive reals and $\frac{a}{b+c} \ge 2$, I have to prove that
$(ab+bc+ca)\left(\frac{1}{(b+c)^2}+\frac{1}{(c+a)^2}+\frac{1}{(a+b)^2} \right)\geq \frac{49}{18}$
We may assume that $a\geq b \geq c.$ Firstly, let's show that
$\frac{1}{(b+c)^2}+\frac{1}{(c+a)^2}+\frac{1}{(a+b)^2}\geq \frac{1}{4ab}+\frac{2}{(a+c)(b+c)}.$ This can be rewritten as
$\left(\frac{1}{a+c}-\frac{1}{b+c}\right)^2\geq \frac{(a-b)^2}{4ab(a+b)^2},$ or equivalently $4ab(a+b)^2\geq (a+c)^2(b+c)^2.$ This is obvious, since $4ab\geq (b+c)^2$ and $(a+b)^2\geq (a+c)^2.$
Thus, it remains to prove that
$(ab+bc+ca)\left(\frac{1}{4ab}+\frac{2}{(a+c)(b+c)} \right)\geq \frac{49}{18}.$ Using the identities
$\frac{ab+bc+ca}{4ab}=\frac{1}{4}+\frac{c(a+b)}{4ab}, \quad \frac{2(ab+bc+ca)}{(a+c)(b+c)}=2 -\frac{2c^2}{(a+c)(b+c)},$ this becomes
$\frac{c(a+b)}{4ab}\geq \frac{2c^2}{(a+c)(b+c)}+\frac{17}{36}.$
Then I stuck. Any idea please?
| This can be solved in a brute force way:
$$\frac{a}{b+c}\ge2\implies a=2b+2c+x$$
..where $x$ is some non-negative value. The inequality:
$$(ab+bc+ca)\left(\frac{1}{(b+c)^2}+\frac{1}{(c+a)^2}+\frac{1}{(a+b)^2} \right)-\frac{49}{18}\ge0$$
...becomes:
$$((2b+2c+x)b+bc+c(2b+2c+x))\left(\frac{1}{(b+c)^2}+\frac{1}{(2b+3c+x)^2}+\frac{1}{(3b+2c+x)^2} \right)-\frac{49}{18}\ge0$$
This can be written as:
$$\frac AB\ge 0\tag{1}$$
where
$$A=654 b^5 c+462 b^5 x+2783 b^4 c^2+3620 b^4 c x+851 b^4 x^2+4276 b^3 c^3+8748 b^3 c^2 x+4260 b^3 c x^2+572 b^3 x^3+2783 b^2 c^4+8748 b^2 c^3 x+6854 b^2 c^2 x^2+1932 b^2 c x^3+167 b^2 x^4+654 b c^5+3620 b c^4 x+4260 b c^3 x^2+1932 b c^2 x^3+352 b c x^4+18 b x^5+462 c^5 x+851 c^4 x^2+572 c^3 x^3+167 c^2 x^4+18 c x^5$$
$$B=18 (b+c)^2 (3 b+2 c+x)^2 (2 b+3 c+x)^2$$
$A,B$ are positive so (1) is obviously true.
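A randomized numerical check of the same substitution (not a proof; the sign argument above is the proof). Note that equality is approached as $c\to 0$, so the margin can be small, but it stays positive:

```python
import random

def diff(b, c, x):
    a = 2 * b + 2 * c + x          # encodes the constraint a/(b+c) >= 2
    s = a * b + b * c + c * a
    return s * (1 / (b + c) ** 2 + 1 / (c + a) ** 2 + 1 / (a + b) ** 2) - 49 / 18

rng = random.Random(1)
worst = min(diff(rng.uniform(0.01, 10), rng.uniform(0.01, 10), rng.uniform(0, 10))
            for _ in range(20000))
print(worst)   # smallest observed difference: small, but positive
assert worst > 0
```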
There is a similar problem which I find to be much more interesting:
For all positive $a,b,c$:$$(ab+bc+ca)\left(\frac{1}{(b+c)^2}+\frac{1}{(c+a)^2}+\frac{1}{(a+b)^2} \right)\ge\frac94$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3173875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the probability of rolling ten different, standard 6-sided dice simultaneously and obtaining a sum of 30? Find the probability of rolling ten different, standard 6-sided dice simultaneously and obtaining a sum of 30?
I started to answer this question by setting up an equation like this:
x1+x2+...+x10=30
with 0 less than or equal to xi less than or equal to 6
But we know the die will have values 1-6 so we have
y=x+1 so that
y1+y2+...+y10=20
Now I know we can have at most 4 6's, 5 5's, 6 4's, 10 3's, 10 2's, 10 1's.
I was not sure if I am heading in the right direction or should I just find all possible outcomes of the die and use the inclusion-exclusion method to find all the permutations in which the sum will be 30?
| Say that $n$ represents the number of dice, $x$ the total sum and $f(n, x)$ the total number of different ways in which this sum can be obtained.
We have the following recurrence relation:
$$f(n,x)=\sum_{i=1}^6 f(n-1, x-i)\tag{1}$$
...which basically says that you can calculate $f$ by assuming that the first die rolls 1, 2, 3, 4, 5 or 6 and by adding the number of ways in which you can roll the sums $x-1,x-2,\dots,x-6$ using one die fewer.
We have the following exit conditions:
$$x \lt 0 \implies f(n,x)=0\tag{2}$$
$$f(0,0)=1\tag{3}$$
In this particular problem you want to calculate $f(10, 30)$ and then divide the result by the total number of different rolls (which is $6^{10}$). You can complete the task with this minuscule Python script:
cache = dict()  # memoisation: (n, x) -> number of ways

def f(n, x):
    if x < 0: return 0                 # overshot the target sum
    if x == 0 and n == 0: return 1     # sum reached exactly, no dice left
    key = (n, x)
    if key in cache: return cache[key]
    # the first die shows i = 1..6; recurse on the remaining n - 1 dice
    cache[key] = sum([f(n - 1, x - i) for i in range(1, 7)])
    return cache[key]

count = f(10, 30)
prob = count / (6 ** 10)
print("Number of ways to roll 30 with 10 dices:", count)
print("Probability:", prob)
And the result is:
Number of ways to roll 30 with 10 dices: 2930455
Probability: 0.048464367913724195
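The recursion can be cross-checked against exhaustive enumeration for a small number of dice; here it is rewritten with `functools.lru_cache` doing the memoisation:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def ways(n, x):
    if x < 0:
        return 0
    if n == 0:
        return 1 if x == 0 else 0
    return sum(ways(n - 1, x - i) for i in range(1, 7))

def brute(n, x):
    # enumerate all 6^n rolls directly
    return sum(1 for roll in product(range(1, 7), repeat=n) if sum(roll) == x)

assert all(ways(4, s) == brute(4, s) for s in range(0, 26))
print(ways(10, 30), ways(10, 30) / 6 ** 10)
```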
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the maximum likelihood estimator for Pareto distribution and a unbiased estimator Let $X_1,...X_n$ be a random sample from the Pareto distribution with parameters $\alpha$ and $\theta$, where $\alpha$ is known.
Find the maximum likelihood estimator for $\theta$ and say if it is unbiased, if not find an unbiased estimator
My Approach:
$$f(x;\alpha, \theta) = \alpha \theta^\alpha x^{-(\alpha +1)},\quad x \ge \beta$$
$$L(\theta) = \alpha^n \theta^{\alpha n} \left(\prod_{i=1}^n x_i\right)^{-(\alpha+1)}$$
Taking log for $L(\alpha)$ gives
$$\ln L(\theta) = n \ln(\alpha) + \alpha n \ln(\theta) + \sum_{i=1}^n -(\alpha+1) \ln(x_i)$$
Then since $\ln L(\theta)$ is an increasing function if $\theta$ increases, and for a Pareto distribution we have that $\theta \le x$ we conclude that the maximum likelihood estimator is $\hat\theta=\min {x_i}$ (the first order statistic)
Am I right?
Then to prove that it is an unbiased statistic we have to prove that $E(\hat{\theta}) = \theta$. I do not know how to do it; I just thought of using the p.d.f. of the first order statistic and integrating from $\theta$ to infinity, but I'm not sure about this.
Any ideas?
| You've got some notation errors and the work is a bit sloppy, but it is essentially the correct idea. You should have written
$$f(x; \alpha, \theta) = \alpha \theta^\alpha x^{-(\alpha+1)}, \quad x \ge \color{red}{\theta},$$ and $$\ell(\theta) = \log \mathcal L(\theta) = n \log \alpha + \alpha n \log \theta - (\alpha + 1) \sum_{i=1}^n \log x_i.$$ In fact, I would have dispensed with this altogether and noted that when $\alpha$ is known, the likelihood is proportional to $$\mathcal L(\theta) \propto \theta^\alpha \mathbb 1(x_{(1)} \ge \theta),$$ hence for $\alpha > 0$, $\mathcal L$ is monotone increasing on the interval $\theta \in (0, x_{(1)}]$ and the MLE is $\hat\theta = x_{(1)}$. No need to take log-likelihoods.
$\hat \theta = x_{(1)}$ is necessarily biased because $\Pr[X_{(1)} > \theta] > 0$ but $\Pr[X_{(1)} < \theta] = 0$. That is to say, the sample minimum can never be less than $\theta$, whereas being greater than it is certainly possible; so taking the expected value of the sample minimum, you can never hope to be equal to $\theta$ on average.
Formally, though, you would need to compute $\operatorname{E}[X_{(1)}]$ by first computing the probability density of the first order statistic. This in turn can be found by considering $$\Pr[X_{(1)} > x] = \Pr[(X_1 > x) \cap (X_2 > x) \cap \ldots \cap (X_n > x)] = ?$$
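As a numerical illustration (with arbitrary values $\alpha=3$, $\theta=2$, $n=5$, and using the standard fact that the minimum of $n$ i.i.d. Pareto variables is again Pareto with shape $n\alpha$ and the same scale), a small simulation shows the sample minimum overshoots $\theta$ on average:

```python
import random

alpha, theta, n = 3.0, 2.0, 5      # illustrative values, not from the question
rng = random.Random(0)

def pareto():
    # inverse-CDF sampling: F(x) = 1 - (theta/x)^alpha  =>  x = theta * u^(-1/alpha)
    return theta * rng.random() ** (-1.0 / alpha)

trials = 20000
mean_min = sum(min(pareto() for _ in range(n)) for _ in range(trials)) / trials

# X_(1) is Pareto with shape n*alpha and scale theta, so
# E[X_(1)] = n*alpha*theta / (n*alpha - 1) > theta: the MLE overestimates
expected = n * alpha * theta / (n * alpha - 1)
print(mean_min, expected, theta)
```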
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to simplify the binomial coefficient in the binomial series? Use binomial series to expand the function $\frac{5}{(6+x)^3}$ as a power series.
I understand the process to get the following summation:
$\frac{5}{6^3}\sum_{n=0}^{\infty} {-3 \choose n} (\frac{x}{6})^n $
However, I am stuck on seeing what's going on with ${-3 \choose n}$.
From the Stewart Calculus textbook, it says that ${k \choose n} = \frac{k(k-1)(k-2)...(k-n+1)}{n!}$.
By applying that, I would get:
${-3 \choose n}=\frac {(-3)(-4)(-5)...[-(n+2)]}{n!}$.
I think the next step would be to extract out the negative, so I will get $(-1)^n$.
The summation would then be:
$\frac{5}{6^3}\sum_{n=0}^{\infty} {\frac {(-1)^n(3)(4)(5)...[(n+2)]}{n!}} (\frac{x}{6})^n $
The solution to this problem is $\frac{5}{2}\sum_{n=0}^{\infty} {\frac {(-1)^n(n+1)(n+2)x^n}{6^{n+3}}}$
I am not sure what has happened to the factorial. Was it cancelled out due to the (3)(4)(5)... in the numerator? Where did (n+1) come from?
| $$\frac {(3)(4)(5)\cdots(n+2)}{n!} = \frac {(3)(4)(5)\cdots(n)(n+1)(n+2)}{(1)(2)(3)(4)(5)\cdots(n)} = \frac {(n+1)(n+2)}{(1)(2)}$$
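The closed form can be confirmed numerically by comparing a truncated partial sum with $5/(6+x)^3$ inside the interval of convergence $|x|<6$:

```python
def f(x):
    return 5 / (6 + x) ** 3

def series(x, terms=80):
    # (5/2) * sum over n of (-1)^n (n+1)(n+2) x^n / 6^(n+3)
    return 2.5 * sum((-1) ** n * (n + 1) * (n + 2) * x ** n / 6 ** (n + 3)
                     for n in range(terms))

for x in [0.0, 1.0, -2.0, 3.0]:
    assert abs(f(x) - series(x)) < 1e-12
print(f(1.0), series(1.0))
```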
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Opposite real number identities (why $\cos(-x)=\cos x$)
I've been studying opposite real number identities and I've been stuck on this question of why $\cos(-x)=\cos x$.
Okay so if we consider that the given circle is a unit circle and triangle $pom$ and triangle $qom$ are congruent then how $\cos(-x)=\cos x$?
As I see it, when we take base/hypotenuse for triangle $qom$ we get $om/oq$, which should give $-\cos x$ since $oq$ is negative, right?
Please help me through this.
| A non-geometry approach would be to consider the series definition for cosine. With this, for all $x\in\mathbb R$ the series $\sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(2n)!}$ converges and
$$\cos(x)=\sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(2n)!}.$$
For every $x\in\mathbb R$, you can now trivially see that
$$\cos(-x)=\sum_{n=0}^\infty\frac{(-1)^n(-x)^{2n}}{(2n)!}=\sum_{n=0}^\infty\frac{(-1)^n(-1)^{2n}x^{2n}}{(2n)!}=\sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(2n)!}=\cos(x)$$
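Both the evenness and the series itself are easy to check numerically with a truncated partial sum:

```python
import math

def cos_series(x, terms=25):
    # partial sum of sum over n of (-1)^n x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

for x in [0.3, 1.7, -2.5]:
    assert abs(cos_series(x) - cos_series(-x)) < 1e-12   # evenness
    assert abs(cos_series(x) - math.cos(x)) < 1e-9        # agrees with cos
```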
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What is the easiest way to get: $2+ \sqrt{-121} = (2+ \sqrt{-1})^3$ I was reading the book Seventeen equations have changed the world.
At some point, while the book was talking about complex numbers, I see this equation:
$2+ \sqrt{-121} = (2+ \sqrt{-1})^3$
Even if it's easy to prove this equivalence (it is enough to expand both sides),
I can't find an easy/good/fast way to obtain the identity directly.
Can you help me? Does there exist a mathematical property that I'm missing?
| If it is to prove:
$$2+11i=2+i+10i=2+i+(2+i)(2+4i)=(2+i)(3+4i)=(2+i)(2+i+1+3i)=(2+i)(2+i+(2+i)(1+i))=(2+i)(2+i)(2+i).$$
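This is straightforward to confirm with complex arithmetic (using the principal square roots, so $\sqrt{-121}=11i$ and $\sqrt{-1}=i$):

```python
import cmath

lhs = 2 + cmath.sqrt(-121)        # 2 + 11i
rhs = (2 + cmath.sqrt(-1)) ** 3   # (2 + i)^3
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12
```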
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Proving that $ \tan 2x \cdot (1 + \tan x) \cdot \cot x = \frac{2}{1 - \tan(x)} $
Given the following expression,
$$ \tan(2x) \cdot (1 + \tan(x)) \cdot \cot(x) $$
the exercise asks to simplify the expression and
$$ \frac{2}{1 - \tan(x)} $$
should be the simplified expression.
I have tried everything I possibly could, including letting WolframAlpha eat it to show alternative forms of the expression – nothing worked.
What do you think? How could I go about simplifying this expression? Thank you.
| $$\tan(2x) (1+\tan(x)) \cot(x) = \frac{2\sin(x)\cos(x)}{\cos^2(x)-\sin^2(x)}\left(\frac{\sin(x)+\cos(x)}{\cos(x)} \right)\frac{\cos(x)}{\sin(x)}$$
Simplifying you get
$$\tan(2x) (1+\tan(x)) \cot(x) = \frac{2(\sin(x)+\cos(x))\cos(x)}{(\cos(x)-\sin(x))(\cos(x)+\sin(x))} = \frac{2\cos(x)}{\cos(x)-\sin(x)}$$
i.e.
$$\tan(2x) (1+\tan(x)) \cot(x) =\frac{2}{1-\tan(x)}$$
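A numerical spot-check of the identity at a few sample points (avoiding the values where $\tan x$ is $0$ or $1$, where one side or the other is undefined):

```python
import math

def lhs(x):
    # tan(2x) * (1 + tan x) * cot x, with cot x = 1/tan x
    return math.tan(2 * x) * (1 + math.tan(x)) * (1 / math.tan(x))

def rhs(x):
    return 2 / (1 - math.tan(x))

for x in [0.1, 0.5, 1.0, -0.3]:
    assert abs(lhs(x) - rhs(x)) < 1e-9
```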
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Continuous function which is always rational Let $f:\mathbb{R} \to \mathbb{R} $ be a continuous function such that $f(x) \in \mathbb{Q} $, $\forall x\in \mathbb{R} $. Is it true that the only functions with this property are the constant functions? Intuitively, I believe it is, but I am not sure.
EDIT: I had a typo, the relation holds $\forall x\in \mathbb{R} $. I am sorry for my mistake.
| No, $f(x)$ could be any polynomial function with rational coefficients. Polynomial functions are continuous, and a polynomial with rational coefficients evaluated at a rational number is rational. (Note that this answers the original version of the question, where $f(x)\in\mathbb{Q}$ was required only for rational $x$. Under the edited condition $f(x)\in\mathbb{Q}$ for all $x\in\mathbb{R}$, the conjecture is true: by the intermediate value theorem, a non-constant continuous function takes every value in some interval, hence some irrational value, so $f$ must be constant.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3174853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Symmetrizability of shallow water equations Consider the shallow water equation
\begin{equation}h_t+(hu)_x=0\\
(hu)_t+\left(hu^2+\frac{g}{2}h^2 \right)_x=0
\end{equation}
What is the entropy of this system?
I understood that if there exists a change of variables which symmetrizes the system, then the system admits a strictly convex entropy.
But I am unable to proceed... Please help
| Follow the steps in Sec 3.2 of (1). Let's subtract $u$ times the first equation from the second one. After division by $h$, we get the following conservation law for $u$:
$$
u_t + (\tfrac12 u^2 + gh)_x = 0 \, .
$$
Now, multiply the conservation law for $h$ by $\frac12 u ^2 + gh$, multiply the conservation law for $u$ by $hu$, and add the results. We have the additional conservation law
$$
\eta_t + G_x \leq 0
$$
in the weak sense, where $\eta = \tfrac12 h u^2 + \tfrac12 g h^2$ is a convex entropy and $G = (\tfrac12 h u^2 + gh^2)\, u$ is the corresponding entropy flux (cf. Eqs. (1.25) and (1.27) of (1)). You may be able to conclude, see (2).
(1) F Bouchut: Nonlinear Stability of Finite Volume Methods for Hyperbolic Conservation Laws — and Well-Balanced Schemes for Sources, Birkhäuser, 2000. doi:10.1007/b93802
(2) KO Friedrichs, PD Lax: "Systems of conservation equations with a convex extension", Proc Natl Acad Sci U S A 68.8 (1971), 1686-8. doi:10.1073/pnas.68.8.1686
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Can natural deduction prove it's own rules, as my logic book says? Is there a level confusion there? I'm currently studying John Nolt's Outline of Logic ( Schaum's series).
According to the author, one can use natural deduction to prove some rules of natural deduction itself, for example the absorption rule ( chap. 4, Solved problem 4.33).
Example. Consider the following proof
(1) P --> Q hypothesis ( For conditional proof)
(2) ~ P v Q DF -->
(3) Q v ~ P Commut. v
(4) ~ ~ Q v ~ P DN
(5) ~ Q --> ~ P Df -->
(6) (P --> Q) --> ( ~Q --> ~P) Cond. Proof (1 - 5)
What did I prove here: a "rule"? Or simply the "sentence":
(P--> Q) --> ( ~Q --> ~P)
| Natural Deduction consists of a set of fundamental rules, which are each independent, and justified by the semantics of the connectives. The fundamental rules can be used to prove sentences which may be used to justify derived rules. Sometimes these sentences may be called Tautological Consequences (TautCon).
Here is the Natural Deduction proof for $\vdash (p\to q)\to(\lnot q\to\lnot p)$ using only the usual fundamental rules. (Note: in most Natural Deduction systems, implication equivalence is not actually considered a fundamental rule.)
$$\def\fitch#1#2{\quad\begin{array}{|l} #1 \\ \hline #2 \end{array}}
\fitch{}
{\fitch{1.~p\to q\hspace{14.75ex}\text{Assumption}}
{\fitch{2.~\lnot q\hspace{14.5ex}\text{Assumption}}
{\fitch{3.~p\hspace{12.5ex}\text{Assumption}}
{4.~q\hspace{12.5ex}\text{1,3,Conditional Elimination}
\\5.~\bot\hspace{11.75ex}\text{2,4,Negation Elimination}}
\\6.~\lnot p\hspace{14.25ex}3{-}5,\text{Negation Introduction}}
\\7.~\lnot q\to\lnot p\hspace{11.5ex}2{-}6,\text{Conditional Introduction}}
\\8.~(p\to q)\to(\lnot q\to\lnot p)\hspace{2ex}1{-}7,\text{Conditional Introduction}}$$
Now, because this sentence is provable, we don't need to repeat all the above typesetting to apply conditional elimination. We can just cite such a proof to justify deriving $(\lnot \psi\to\lnot \phi)$ from $(\phi\to\psi)$. We can call doing this applying a derived rule of inference and, in this case, name it Contraposition .
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Magnitude of Complex Numbers Let $\alpha \neq 1$ be a complex number such that the distance from $\alpha^2$ to 1 is twice the distance from $\alpha$ to 1, while the distance from $\alpha^4$ to 1 is four times the distance from $\alpha$ to 1. Enter all possible values of $\alpha,$ separated by commas.
I have no idea how to do this. Can someone help me?
| An alternative method:
Use the first condition to find$$|\alpha^2-1|=2|\alpha-1|\\|\alpha+1|=2\\\alpha=-1+2e^{i\theta}$$
Use the second condition to get $$|\alpha^4-1|=4|\alpha-1|\\|\alpha^3+\alpha^2+\alpha+1|=4\\|(2e^{i\theta}-1)^3+(2e^{i\theta}-1)^2+2e^{i\theta}|=4\\\left|8e^{3i\theta}-12e^{2i\theta}+6e^{i\theta}-1+4e^{2i\theta}-4e^{i\theta}+1+2e^{i\theta}\right|=4\\\left|2e^{2i\theta}-2e^{i\theta}+1\right|=1\\2e^{2i\theta}-2e^{i\theta}+1=e^{i\phi}\\2\cos2\theta-2\cos\theta+1=\cos\phi\in[-1,1]\\2\sin2\theta-2\sin\theta=0$$This second equation tells us $$(4\cos\theta-2)\sin\theta=0$$So $\theta=\pi,\pm\frac\pi3$. If $\theta=\pi$, then the first equation gives $5=\cos\phi$ which is not solvable for real $\phi$, so this is not a solution. Therefore have $\theta=\pm\frac\pi3$.
$$\alpha=-1+2\left(\frac12\pm i\frac{\sqrt3}2\right)=\pm\sqrt3i$$
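Both values can be checked directly against the two distance conditions from the problem statement:

```python
for alpha in (3 ** 0.5 * 1j, -(3 ** 0.5) * 1j):
    d = abs(alpha - 1)                              # distance from alpha to 1
    assert abs(abs(alpha ** 2 - 1) - 2 * d) < 1e-9  # |a^2 - 1| = 2|a - 1|
    assert abs(abs(alpha ** 4 - 1) - 4 * d) < 1e-9  # |a^4 - 1| = 4|a - 1|
    print(alpha, "satisfies both conditions")
```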
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Addition formula for elliptic integral of second kind Let $k\in(0,1)$ and the incomplete elliptic integral integral $E(u, k) $ be defined by $$E(u, k) =\int_{0}^{u}\operatorname {dn} ^2(t,k)\,dt\tag{1}$$ where $\operatorname {dn} (u, k) $ represents one of the Jacobian elliptic functions. When the value of $k$ is evident from context the parameter $k$ is dropped and one writes $E(u)$ instead of $E(u, k) $.
The function $E(u) $ satisfies the following addition formula $$E(u) +E(v) - E(u+v) =k^2\operatorname {sn} (u, k) \operatorname {sn} (v, k) \operatorname {sn} (u+v, k) \tag{2}$$ where $\operatorname {sn} (u, k) $ is another Jacobian elliptic function.
Dr. Bruce C. Berndt mentions in his Ramanujan Notebooks vol 3 that the above formula $(2)$ is equivalent to $$\frac{qf(-a, - q^2/a)f(-b,-q^2/b)f(-ab,-q^2/ab)} {abf(-aq, - q/a) f(-bq, - q/a) f(-abq, - q/ab) }=\frac{\varphi(-q)} {f^{3}(-q^2)}\sum_{n=1}^{\infty}\frac{q^n} {1-q^{2n}}\left(\frac{1}{a^nb^n}-\frac{1}{a^n}-\frac{1}{b^n}+a^n+b^n-a^nb^n\right) \tag{3}$$ where $|q|<1$ and
\begin{align*}
f(a, b) & =\sum_{n\in\mathbb {Z}} a^{n(n+1)/2}b^{n(n-1)/2},|ab|<1\\
\varphi(q) &= f(q, q) =\sum_{n\in \mathbb {Z}} q^{n^2}=\vartheta_{3}(q)\\
f(-q)&=f(-q, - q^2)=\prod_{n=1}^{\infty} (1-q^n)
\end{align*} are Ramanujan's theta functions.
The formula $(2)$ is famous and proved in both Jacobi's Fundamenta Nova and Whittaker & Watson's A Course of Modern Analysis.
How does one show that it is equivalent to formula $(3)$?
My own try is to deal with the fraction $\varphi(-q) /f^{3}(-q^2)$. We have via the theory of theta functions and elliptic integrals $$\varphi(-q) =\vartheta_{4}(q)=\sqrt{\frac{2k'K}{\pi}}$$ where $$K=K(k) =\int_{0}^{\pi/2}\frac{dx}{\sqrt{1-k^2\sin^2x}},k'=\sqrt{1-k^2}$$ and $$q^{1/12}f(-q^2)=q^{1/12}\prod_{n=1}^{\infty} (1-q^{2n})=2^{-1/3}\sqrt{\frac{2K}{\pi}}(kk')^{1/6}$$ so that $$q^{1/4}f^{3}(-q^2)=\frac{K}{\pi}\sqrt{\frac{2kk'K}{\pi}}$$ Thus we have $$\frac{\varphi(-q)} {f^3(-q^2)}=\frac{q^{1/4}}{\sqrt{k}}\frac{\pi}{K}$$ Next we have the product expansion $$\operatorname {sn} (u, k) = \dfrac{2q^{1/4}}{\sqrt{k}}\sin z\prod_{n = 1}^{\infty}\dfrac{1 - 2q^{2n}\cos 2z + q^{4n}}{1 - 2q^{2n - 1}\cos 2z + q^{4n - 2}}, z=\frac{\pi u} {2K}$$ which can be written as $$\frac{2q^{1/4}}{\sqrt{k}}\sin z\prod_{n=1}^{\infty}\frac{(1-q^{2n}e^{2iz})(1-q^{2n}e^{-2iz})}{(1-q^{2n-1}e^{2iz})(1-q^{2n-1}e^{-2iz})}\tag{4}$$ Using the Jacobi triple product (written in terms of Ramanujan theta function) $$f(a,b)=\prod_{n=1}^{\infty} (1-(ab)^n)(1+a(ab)^{n-1})(1+b(ab)^{n-1})\tag{5}$$ and the expression $(4)$ for $\operatorname{sn } (u, k)$ (and similar expressions for $\operatorname {sn} (v, k)$ and $\operatorname {sn} (u+v, k) $) it appears that the RHS of $(2)$ can be written like LHS of $(3)$ (via substitution $a=e^{2iz},b=e^{2iw},w= \dfrac{\pi v} {2K}$). However I don't see how to handle the LHS of $(2)$ (or RHS of $(3)$).
| On searching further in Fundamenta Nova I found the key to the problem. Not only did Jacobi find the Fourier series for elliptic functions, but he also found such series for their integer powers, using a purely algebraic approach. Here one needs the following Fourier series for $\operatorname {dn} ^2(u,k)$ $$\left(\frac{2K}{\pi}\right) ^2\operatorname {dn} ^2(u,k)=\frac{2K}{\pi}\cdot\frac{2E}{\pi}+8\sum_{n=1}^{\infty} \frac{nq^n}{1-q^{2n}}\cos 2nz,\,E=E(K,k),z=\frac{\pi u} {2K}\tag{1}$$ Integrating the above with respect to $u$ we get $$\frac{2K}{\pi}\cdot E(u,k)=\frac{2E}{\pi}\cdot u+4 \sum_{n=1}^{\infty} \frac{q^n} {1-q^{2n}}\sin 2nz\tag{2}$$ Using this formula we can see that $$E(u) +E(v) - E(u+v) =\frac{2\pi}{K}\sum_{n=1}^{\infty}\frac{q^n}{1-q^{2n}}\{\sin 2nz+\sin 2nw-\sin 2n(z+w)\}, \, w=\frac{\pi v} {2K}\tag{3}$$ Now we can set $$a=e^{2iz},b=e^{2iw}$$ and then the RHS of $(3)$ above resembles the RHS of equation $(3)$ in question. I have checked that the other factors match exactly as specified in the question.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Can a Cauchy sequence converge for one metric while not converging for another? Is there an easy example of one and the same space $X$ with two different metrics $d$ and $e$ such that one and the same sequence $\{x_n\}$ is a Cauchy sequence for both metrics, but converges only for one of them?
| You can always have some artificial example where you just "move the limit elsewhere". For example, let $X=\mathbb{R_{\ge 0}}$, $d_1$ be the Euclidean metric and $d_2(x, y)=|\hat{x}-\hat{y}|$, where $ \hat{x}=-1$ if $x=0$ and $ \hat{x}=x$ otherwise. Then the sequence $x_n=\frac 1n$ converges in $(X, d_1)$ but not in $(X, d_2)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Boundary condition inserted in the local PDE Let us consider the following problem:
$$
\begin{align}
-&u_{xx}=0&&\forall x\in(0,L)&&\tag{1}\\
&u(0)=0\tag{2}\\
&u_x(L)=\alpha\tag{3}
\end{align}
$$
It is possible to insert (3) in (1) as follows:
$$
\begin{align}
-&u_{xx}=\alpha\delta (x-L)&&\forall x\in(0,L]\tag{4}\\
&u(0)=0 \tag{5}\\
&u_x(L)=0 \tag{6}
\end{align}
$$
where $\delta(x-L)=\delta_L$ is the Dirac distribution. Are there results showing that these two formulations are equivalent? It is not clear to me whether (6) should be kept or not.
| I am suggesting a solution below (but I am not convinced). From (1), (2) and (3), it is clear that the sought solution is $u(x)=\alpha x$. Let us try to solve (4), (5) and (6) in the sense of distributions. Integrating (4) twice yields:
$$-u(x)=ax+b+\alpha (x-L)H(x-L)$$
Condition (5) implies $b=0$ and condition (6) implies
$$a+\alpha H(0)=0 \tag{7}$$
If we consider that $H(0)=1$ by definition, (7) becomes $a=-\alpha$ and the exact solution is retrieved on the interval $[0,L]$. However, another definition of $H$ would generate erroneous results, which seems annoying. We also realize that $u(x)=\alpha x$ no longer satisfies (6), which looks strange.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Determine the real numbers $a$, $b$, $c$ such that $1$, $\frac1{1+\omega}$ and $\frac1{1+\omega^*}$ are zeroes of the polynomial $p(z)=z^3+az^2+bz+c$ I am stuck on this question:
Let $1$, $\omega$ and $\omega^*$ be the cube roots of unity.
a. Show that $\dfrac1{1+\omega}=-\omega$ and $\dfrac1{1+\omega^*}=-\omega^*$.
b. Determine the real numbers $a$, $b$, $c$ such that $1$, $\dfrac1{1+\omega}$ and $\dfrac1{1+\omega^*}$ are zeroes of the polynomial $p(z)=z^3+az^2+bz+c$.
c. Hence, find $p(\omega)$ and $p(\omega^*)$.
So I was able to do part a by finding the roots in Cartesian forms, but I am not sure how to approach part b.
| For those three numbers to be roots of the cubic equation means that if you set $z$ equal to any one of them, then the cubic is zero. Therefore you can write $$z^3+az^2+bz+c\equiv(z-1)\left(z-\frac1{1+\omega}\right)\left(z-\frac1{1+\omega^*}\right)$$ To determine $a,b,c$, simply multiply this out and equate the coefficients of different powers of $z$.
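As a numerical illustration of this procedure (it does give away part b: the coefficients come out as $a=-2$, $b=2$, $c=-1$), one can expand the product via Vieta's formulas:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)                     # primitive cube root of unity
roots = [1, 1 / (1 + w), 1 / (1 + w.conjugate())]

# Vieta: z^3 + a z^2 + b z + c = (z - r1)(z - r2)(z - r3)
a = -sum(roots)
b = roots[0] * roots[1] + roots[0] * roots[2] + roots[1] * roots[2]
c = -roots[0] * roots[1] * roots[2]
print(round(a.real, 10), round(b.real, 10), round(c.real, 10))
```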
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3175872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find the symmetrical matrix $A$ so that $Q(\vec x) = \vec x^TA \vec x$ $Q(\vec x) = x_1^2+x_1x_2+x_2^2$
The matrix $A=\begin{bmatrix}1 & 0.5 \\ 0.5 & 1\end{bmatrix}$ seems to do the job. But what's the general procedure for finding a solution?
I can just think of setting it up like this for more clarity:
$\begin{bmatrix}x_1 & x_2\end{bmatrix}\begin{bmatrix}? & ?\\ ? & ?\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} =\begin{bmatrix}x_1^2+x_1x_2+x_2^2\end{bmatrix}$
But after that I'm lost, there surely must be some concepts I can apply.
| $A$ is called the matrix associated with the quadratic form $Q$. The general procedure is rather simple: put the coefficient of $x_i^2$ in the diagonal entry $a_{ii}$, and divide the coefficient of $x_ix_j$ by $2$, writing it twice in $A$: once in $a_{ij}$ and once in $a_{ji}$.
In your example, the coefficient of $x_1^2$ is $1$ so $a_{11}=1$, the coefficient of $x_2^2$ is $1$ so $a_{22}=1$. The coefficient of $x_1x_2$ is $1$ so $a_{12}=a_{21}=\frac{1}{2}$, resulting in:
$$
A=\begin{pmatrix}
1 & 0.5 \\ 0.5 & 1
\end{pmatrix}
$$
Here is another example. Consider $Q(\underline{x})=x_1^2+2x_2^2+x_3^2+2x_1x_2+x_3x_2$. Then the matrix associated with $Q$ is:
$$
\begin{pmatrix}
1 & 1 & 0 \\
1 & 2 & 0.5\\
0 & 0.5 & 1
\end{pmatrix}
$$
For further information see here.
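The recipe can be verified by evaluating $\vec x^T A \vec x$ directly; plain Python is enough here:

```python
def Q(x1, x2):
    return x1 ** 2 + x1 * x2 + x2 ** 2

A = [[1.0, 0.5],
     [0.5, 1.0]]

def quad_form(A, x):
    # computes x^T A x
    return sum(x[i] * A[i][j] * x[j]
               for i in range(len(x)) for j in range(len(x)))

for x in [(1, 0), (0, 1), (2, 3), (-1, 4)]:
    assert abs(Q(*x) - quad_form(A, x)) < 1e-12
```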
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
how to check a property using RStudio I have to check that the following property
if $Z \sim N(0,1)$ and $U\sim \chi ^{2}(10)$ then $ Z/\sqrt{U/10} \sim T(10)$
is true using RStudio. If anyone can help, much appreciated.
| One approach could be simulation of thousands of values:
* Simulate $Z$ using rnorm
* Simulate $U$ using rchisq
* Do the division $Y = Z / \sqrt{U / 10}$
* Simulate the same number of $T$ from the hypothesised $t$-distribution using rt
* Sort $Y$ and $T$ and plot them against each other - you want to see a diagonal straight line essentially $y=x$ with a little noise; this is visual demonstration though not a proof that the distributions are the same
You can do similar things with the qqplot function if you know what you are doing
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
What does $*$ mean in equivalence relations? The notation of "$*$" started being used in my proof textbook in the section of equivalence relations and partitions yet it never once said what it means.
An example from the textbook:
Let $\mathbb{Z}^* = \mathbb{Z} - \{0\}.$ Define the relation on $\mathbb{Z} \times \mathbb{Z^*}$ by, for all $a,c \in \mathbb{Z}$ and all $b,d \in \mathbb{Z}$
What does "$*$" mean?
| In an algebraic context, many authors use $A^*$ to denote the set $A$ without the zero element. In your specific example, the author uses $\mathbb Z^*$ to denote the set $\mathbb Z$ without $0$, i.e. $\mathbb Z-\{0\}$. So, $*$ is just used in a context of notation and does not denote any particular operation.
In a similar manner, it also common to write $\mathbb R^*$ for $\mathbb R-\{0\}$, $\mathbb C^*$ for $\mathbb C-\{0\}$, etc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $f$ is continuous at $a$ and $f' < 0$ on $(b, a)$, then $a$ is a minimum on $(b, a)$. Could you please verify my proof? I feel like it's terribly overwrought and I'm missing a much simpler explanation. (It may even be invalid, see the bold).
If $f$ is continuous at $a$ and $f' < 0$ on $(b, a)$, then $a$ is a minimum on $(b, a)$.
$\forall y, x \ \ \ \ \ \ \ \ \ a-\delta < y < x < a \Rightarrow f(a) < f(x) + \epsilon$
Since $y, x$ are in $(a-\delta, a)$, then $y,x $ are in $(b, a)$. Hence $f(y) > f(x)$, and $f(y) - f(x) > 0$.
Therefore,
$\forall y, x \ \ \ \ \ \ \ \ \ a-\delta < y < x < a \Rightarrow f(a) < f(y)$
Showing that $a$ is a minimum on $(a - \delta,a)$, and consequently $(b,a)$, since for any $y$ < $a$ in the former interval, $f(y)$ > $f(a)$.
| You have $f(a)<f(x)+\epsilon$ and $f(x)<f(y).$ This does NOT imply $f(a)<f(y). $
(1). Suppose $c\in (b,a)$ and $f(c)< f(a).$
Let $e= (f(a)-f(c))/2.$ Let $d\in (c,a)$ such that $f(d)-f(a)>-e.$ We know that $d$ exists because $f$ is continuous at $a.$ By the MVT there exists $d'\in (c,d)$ such that $$f'(d')=\frac {f(d)-f(c)}{d-c}>\frac {f(a)-e-f(c)}{d-c}=\frac {e}{d-c}>0$$ contrary to $\forall d'\in (b,a)\;(f'(d')<0).$
Therefore $\forall c\in (b,a)\;(f(c)\ge f(a)\,).$
(2). Suppose $c'\in (b,a)$ and $f(c')=f(a).$ Let $c=(a+c')/2.$ By (1) we have $f(c)\ge f(a).$ By the MVT there exists $c''\in (c,c')$ such that $$f'(c'')=\frac {f(c)-f(c')}{c-c'}\ge \frac {f(a)-f(c')}{c-c'}=\frac {f(c')-f(c')}{c-c'}=0$$ contrary to $\forall c''\in (b,a)\;(f'(c'')<0).$
Therefore $\forall c'\in (b,a)\;(f(c')\ne f(a)\,).$
(3). By (1) and (2) we have $\forall x\in (b,a)\;(f(x)>f(a)\,).$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $B$ satisfies $A B = B A^{-1}$, show that $B^2$ is diagonalisable
Let $A$ be a $n \times n$ non-singular matrix having distinct eigenvalues. If $B$ is a matrix satisfying $A B = B A^{-1}$, show that $B^2$ is diagonalisable.
Answer:
Let $\lambda_i, \ i=1,2,3, \cdots, n$ be the $n$ distinct eigenvalues.
Now, $AB=BA^{-1} \Rightarrow B=ABA $
But then how to proceed?
| We can show that $AB^2=ABABA=B^2A$. So $B^2$ and $A$ commute. Then the claim follows from these duplicates (replacing $B$ by $B^2$):
$AB=BA$. Prove $B$ is diagonalizable.
If $AB=BA$, show that $B$ is diagonalizable.
Indeed, we have $AB^2=(AB)B=ABABA$ and $B^2A=B(BA)=ABABA$.
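A concrete instance may help (hypothetical, chosen only for illustration): $A=\operatorname{diag}(2,\tfrac12)$ is non-singular with distinct eigenvalues, and the swap matrix $B$ satisfies $AB=BA^{-1}$; as predicted, $B^2$ then commutes with $A$ (here $B^2=I$, trivially diagonalizable).

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A     = [[2.0, 0.0], [0.0, 0.5]]   # distinct eigenvalues 2 and 1/2
A_inv = [[0.5, 0.0], [0.0, 2.0]]
B     = [[0.0, 1.0], [1.0, 0.0]]   # swaps the two coordinates

assert matmul(A, B) == matmul(B, A_inv)   # AB = BA^{-1}
B2 = matmul(B, B)                          # B^2 = I here
assert matmul(A, B2) == matmul(B2, A)      # AB^2 = B^2A
print("AB = BA^{-1} and AB^2 = B^2A both hold")
```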
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Integral as infimum of integrals I am trying to understand if the following formula holds. I cannot prove it but cannot find a counterexample either.
For $\mu$ a probability measure on $\mathbb{R}^d$ and $p \geqslant 1$ does it hold that
$\int |x|^p d\mu(x) = \inf \limits_{y \in \mathbb{R}^d } \int |x+y|^p d\mu(x)$ ?
| Not always. For example, if $\mu$ is a Dirac measure at a point $x \neq 0$.
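A tiny numerical sketch of this counterexample (the point mass at $x_0=2$ and the exponent $p=2$ are arbitrary illustrative choices):

```python
x0, p = 2.0, 2            # Dirac mass at x0 = 2, exponent p = 2 (arbitrary choices)

def integral(y):
    """Integral of |x+y|^p against mu = delta_{x0}: it is just |x0 + y|^p."""
    return abs(x0 + y) ** p

assert integral(0.0) == 4.0      # left-hand side: integral of |x|^p is |x0|^p = 4
assert integral(-x0) == 0.0      # y = -x0 already makes the right-hand integral 0
print("inf over y is 0, strictly less than", integral(0.0))
```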
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3176876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the last root of $p(x) = x^5 + a_3 x^3 + a_2 x^2 + a_1x + a_0$ given that...
The polynomial $p(x) = x^5 + a_3 x^3 + a_2 x^2 + a_1x + a_0$ has real coefficients and has 2
roots of $x = -3$, and two roots of $x=4$. What is the last root, and
how many times does it occur?
At first I expanded $(x+3)^2(x-4)^2$ to get a divisor to divide the polynomial with, but I could not get around the fact that the coefficients are not given, so I can't get any concrete number. Here's the expansion:
$$x^4 - 2x^3 - 23x^2 + 24x + 144$$
I don't quite get how to use it to help me find the last root.
| If $(x-r)$ is a repeated root of $p(x)$ then $p(r) =p'(r) =0$.
So $p(-3)=p(4)=p'(-3)=p'(4)=0$, allowing you to form a system of four linear equations in terms of four unknowns. Solve for the coefficients and find the last root (which has to be a single root as the polynomial is quintic).
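Alternatively (an observation not in the original answer): since $p$ has no $x^4$ term, the sum of the roots is $0$, so $-3-3+4+4+r=0$ gives $r=-2$, occurring once. A short Python sketch confirms this via the expansion of $(x+3)^2(x-4)^2(x+2)$:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x+3)^2 (x-4)^2 (x+2)
coeffs = [1]
for factor in [[1, 3], [1, 3], [1, -4], [1, -4], [1, 2]]:
    coeffs = poly_mul(coeffs, factor)

print(coeffs)            # [1, 0, -27, -22, 192, 288]
assert coeffs[1] == 0    # no x^4 term, matching p(x) = x^5 + a3 x^3 + ...
```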
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Prove atleast one of the equations $x^2+b_1x+c_1=0$ and $x^2+b_2x+c_2=0$ has real roots if $b_1b_2=2(c_1+c_2)$
If $b_1b_2=2(c_1+c_2)$, then prove that atleast one of the equations $x^2+b_1x+c_1=0$ and $x^2+b_2x+c_2=0$ has real roots.
$$
\Delta_1.\Delta_2=(b_1^2-4c_1)(b_2^2-4c_2)=b_1^2b_2^2-4b_2^2c_1-4b_1^2c_2+16c_1c_2\\
=4(c_1+c_2)^2-4b_2^2c_1-4b_1^2c_2+16c_1c_2\\
=4c^2_1+4c_2^2+8c_1c_2-4b_2^2c_1-4b_1^2c_2+16c_1c_2\\
=4\big[c^2_1+c_2^2+2c_1c_2-b_2^2c_1-b_1^2c_2+4c_1c_2\big]\\
=4\big[c^2_1+c_2^2+6c_1c_2-b_2^2c_1-b_1^2c_2\big]
$$
Why am I not reaching any conclusion from the product of the discriminants?
| We can assume both $c_1,c_2$ are greater than zero: if some $c_i\le 0$, then $b_i^2-4c_i\ge b_i^2\ge 0$ and that equation already has real roots.
$b_1b_2=2(c_1+c_2) = 4\dfrac{(c_1+c_2)}{2}$
$b^2_1b^2_2=16\dfrac{(c_1+c_2)^2}{4}\geq 16c_1c_2$, since $\left(\dfrac{c_1+c_2}{2}\right)^2\geq c_1c_2$ by AM-GM
$b^2_1b^2_2 \geq 4c_14c_2$
So if $b^2_1 \lt 4c_1$ , then $b^2_2 \gt 4c_2$ and vice versa.
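A randomized sanity check of this conclusion (a sketch, not a proof):

```python
import random

random.seed(0)
for _ in range(10_000):
    b1 = random.choice([-1, 1]) * random.uniform(0.1, 5.0)
    c1 = random.uniform(-5.0, 5.0)
    c2 = random.uniform(-5.0, 5.0)
    b2 = 2 * (c1 + c2) / b1            # enforce the constraint b1*b2 = 2(c1 + c2)
    disc1 = b1 * b1 - 4 * c1
    disc2 = b2 * b2 - 4 * c2
    assert disc1 >= 0 or disc2 >= 0    # at least one equation has real roots
print("held in all 10000 random trials")
```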
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Evaluate $\lim_{(x,y) \to (0,0), x+y \neq 0}{\frac{\ln(1-x-y)}{x+y} } $ This was the question of a test. My question is if my attempt to solve it is correct, and if it is, why is it correct.
$$\lim_{(x,y) \to (0,0), x+y \neq 0}{\frac{\ln(1-x-y)}{x+y} } $$
My attempt:
Let $\xi = -x-y $. Then $\xi \to 0$ whenever $(x,y) \to (0,0)$ and $x+y \neq 0 \iff \xi \neq 0$. (Is it then correct to say that the previous limit exists and is equal to the following iff the following exists? And why?):
$$\lim_{\xi \to 0, \xi \neq 0}{\frac{\ln(1+\xi )}{-\xi}}$$
If it is correct, then the limit exists and is $-1$. If it is correct, why is it correct?
| We want to show that $\forall \epsilon \gt 0 : \exists \delta \gt 0 : ||(x,y)||< \delta \implies |\frac{\ln(1-x-y)}{x+y}+1| \lt \epsilon$.
Fix $\epsilon \gt 0$. We know that $\lim_{\phi \to 0} \frac{\ln(1+\phi)}{-\phi} = -1$. So there's $\delta_1 \gt 0$ such that $|\phi| \lt\delta_1 \implies |\frac{\ln(1+\phi)}{-\phi}+1| \lt \epsilon$. Let $\delta = \delta_1$ and $\xi = -x-y$. Suppose $||(x,y)|| \lt \delta $. Since all norms in $\mathbb{R}^n$ are equivalent we can use the sum norm. Then:
$||(x,y)|| = |x|+|y| \geq |x+y| = |\xi|$. Since $|\xi| < \delta_1 $, then $|\frac{\ln(1+\xi)}{-\xi}+1| < \epsilon$, therefore $|\frac{\ln(1-x-y)}{x+y}+1| < \epsilon $. $\square$
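A numeric illustration of the one-variable limit used above (purely a sketch):

```python
import math

def f(x, y):
    return math.log(1 - x - y) / (x + y)

# approach (0,0) along several directions keeping x + y != 0
for t in [1e-2, 1e-4, 1e-6]:
    for (x, y) in [(t, t), (t, -t / 2), (-t, -t)]:
        assert abs(f(x, y) + 1) < 10 * t    # since ln(1+s)/(-s) = -1 + O(s)
print("f(x, y) -> -1 as (x, y) -> (0, 0)")
```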
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics problems that can be solved via infinite descent I'm looking for high school problems that can be solved with the method of infinite descent. Usually, those problems are from number theory, but I would be very happy if someone could provide a problem(s) from combinatorics and/or any other field of mathematics. Here are some problems from number theory:
Prove that a following equations have no nontrivial solutions in $\mathbb{Z}$:
*
*$a^3+2b^3 = 4c^3$
*$2a^2+3b^2 = c^2+6d^2$
*$x^2 + y^2 + z^2 = 2xyz$
*$x^4+y^4 = z^2$
| What about this one :
Let $a,b,c \in \mathbb{N}$ be such that $(a^2+b^2)/(1+ab) = c$.
Prove that $c = p^2$ for some $p \in \mathbb{N}$.
I don't have a proof though...
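For reference, this is the well-known IMO 1988 Problem 6, classically proved by Vieta jumping, which is itself a descent argument. A brute-force Python sketch over small $a,b$ supports the claim:

```python
import math

quotients = []
for a in range(1, 60):
    for b in range(a, 60):
        num, den = a * a + b * b, 1 + a * b
        if num % den == 0:
            quotients.append(num // den)

# e.g. (a, b) = (2, 8) gives (4 + 64)/17 = 4
assert all(math.isqrt(q) ** 2 == q for q in quotients)   # all perfect squares
print(sorted(set(quotients)))
```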
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 8,
"answer_id": 5
} |
Algebra. Solving for a given gamma function: $\ln L = n \ln(\Gamma(a+1)) - n \ln (\Gamma(a)) + (a-1) \sum_{i=1}^{n} \ln x_i$ $\ln L = n \ln(\Gamma(a+1)) - n \ln (\Gamma(a)) + (a-1) \sum_{i=1}^{n} \ln x_i$
Given this, I want to differentiate with respect to $a$, set the derivative equal to $0$, and solve for $a$:
$0 = \frac{n\,\Gamma'(a+1)}{\Gamma(a+1)} - \frac{n\,\Gamma'(a)}{\Gamma(a)} + \sum_{i=1}^{n} \ln x_i$
Apparently this equals $\frac{n}{a} + \sum_{i=1}^{n} \ln x_i$, giving $a = -\frac{n}{\sum_{i=1}^{n} \ln x_i}$. Not sure how they got to that, though.
Not sure how to do the rest.
| It's simply that
$$ \frac{\Gamma(a + 1)}{\Gamma(a)} = a,$$
by a standard property of the Gamma function.
So
$$ \ln L = n \ln a + (a - 1) \sum_{i=1}^n \ln x_i.$$
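The identity $\Gamma(a+1)=a\,\Gamma(a)$ behind this step can be spot-checked numerically (illustrative only):

```python
import math

for a in [0.5, 1.0, 3.5, 7.2]:
    ratio = math.gamma(a + 1) / math.gamma(a)
    assert abs(ratio - a) < 1e-9    # Gamma(a+1)/Gamma(a) = a
print("Gamma(a+1)/Gamma(a) = a on all sample points")
```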
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
union of non-regular language and finite language Is the union of a non-regular language and a finite language necessarily non-regular?
My suspicion is that it is, and I am yet to think of a counterexample, but am not sure how one might set out a proof.
| Say $L_1$ is nonregular and $L_2$ is finite and so regular. Note that $L_1\cap L_2$ is also regular.
If $L_1\cup L_2$ were regular then we would have $$L_1=((L_1\cup L_2)-L_2)\cup (L_1\cap L_2)$$ so that $L_1$ is also regular, a contradiction. Therefore $L_1\cup L_2$ must be nonregular.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the degree of a polynomial give the number of roots? I am aware of the fundamental theorem of algebra, i.e., the degree of a polynomial is the number of roots of the polynomial. For example, $x^2 - 9 = 0$ would have two solutions: $x=3$ and $x=-3$. However, sometimes I come across quadratic polynomials that only have one root, e.g.,
$$t^2 - 2 t + 1 = (t-1)(t-1) = 0$$
which only has the solution $1$, or so I think. Is there some underlying concept that I am overlooking?
| Two main things overlooked as far as I can tell:
*
*Multiplicities, the number of times a root occurs.
*Complex roots: $x^2+1=0$ has no real roots, but two complex roots $i$ and $-i$.
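Both points can be illustrated with the quadratic formula over $\mathbb{C}$ (a small sketch):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a x^2 + b x + c = 0, allowing complex values."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# t^2 - 2t + 1 = (t-1)^2: the root 1 occurs with multiplicity 2
r1, r2 = quadratic_roots(1, -2, 1)
assert r1 == r2 == 1

# x^2 + 1 = 0: no real roots, but complex roots i and -i
s1, s2 = quadratic_roots(1, 0, 1)
assert {s1, s2} == {1j, -1j}
print("counted with multiplicity over C, each quadratic has exactly 2 roots")
```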
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is the total differential divided by the norm of h bounded? I'm trying to proof the product rule for functions $f: R^{n} \to R$. There is already a good thread on this - total differential of $f+g$, $fg$ and $\frac fg$.
However, I am not sure about the last step, i.e. showing that
$$\lim \limits_{h \to 0} \frac{(f(x+h)-f(x))dg_{x}(h)}{\lVert h \rVert}=0$$
As the other thread mentions in the comments
\begin{align} & \lim \limits_{h \to 0} \frac{(f(x+h)-f(x))dg_{x}(h)}{\lVert h \rVert} \\
& =\lim \limits_{h \to 0} (f(x+h)-f(x)) \lim \limits_{h \to 0} \frac{dg_{x}(h)}{\lVert h \rVert} \\
&
\end{align}
and the first limit converges to $0$ by continuity of $f$ (since f is totally differentiable). So it remains to show that the second term is bounded.
But I am not sure how to do this.
| I have given this another thought and have come up with a solution.
Since $dg_{x}$ is a linear map on a finite-dimensional vector space, it is a bounded linear operator, which means that there is a constant $C>0$ with $\lvert dg_{x}(h)\rvert\le C \|h\|$ for all $h$ (see https://en.wikipedia.org/wiki/Discontinuous_linear_map).
Therefore, $\frac{dg_{x}(h)}{\|h\|} \leq \frac{\lvert dg_{x}(h)\rvert}{\|h\|}\le C$, so the second factor is bounded.
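For intuition (an illustrative sketch only, with an arbitrary $2\times 2$ matrix standing in for a general linear map): using the $1$-norm, the maximum absolute column sum is such a constant $C$.

```python
# an arbitrary 2x2 matrix standing in for the linear map (illustrative only)
M = [[3.0, -1.0],
     [0.5,  2.0]]

# with the 1-norm, the induced operator norm is the max absolute column sum
C = max(sum(abs(M[i][j]) for i in range(2)) for j in range(2))

def apply_map(M, h):
    return [sum(M[i][j] * h[j] for j in range(2)) for i in range(2)]

for h in [(1.0, 0.0), (0.3, -0.7), (-2.0, 5.0)]:
    norm_h = sum(abs(t) for t in h)
    norm_Mh = sum(abs(t) for t in apply_map(M, h))
    assert norm_Mh <= C * norm_h + 1e-12    # |M h|_1 <= C |h|_1
print("bound |M h| <= C |h| holds with C =", C)
```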
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$8^n-3^n$ Divisible by 5 - Proof Verification. Statement: $\frac{8^k-3^k}{5}=M, M\in\mathbb{N}$
Base case: $P(1): \frac{8-3}{5}=1\in\mathbb{N}$
Assume $P(n): \frac{8^n-3^n}{5}=N$
Then, $P(n+1): 8^{n+1}-3^{n+1}=5K$, where $K$ is in terms of $N$
Writing LHS in terms of $N$:
$8^n-3^n=5N \to 8\cdot8^n-8\cdot3^n=40N$
$8^{n+1}-3^{n+1}=40N+5\cdot3^{n}$
Dividing through by $5$:
$\frac{8^{n+1}-3^{n+1}}{5}=8N+3^{n}=K$
, where $K$ is clearly an integer, as $N$ is defined to be an integer.
———————————————————————————————
Questions:
*
*Is this proof valid?
*Adding to that, is that last deduction of $K\in\mathbb{N}$ fair?
| Yes, your proof is correct. Below I explain how to view the arithmetical essence of the matter more conceptually as the result of a product rule, first using congruences, and later using bare divisibility (in case you don't know congruences).
Conceptually the induction follows very simply by multiplying the first two congruences below using CPR = Congruence Product Rule, $ $
$$\begin{align}\bmod 5\!:\qquad \color{#c00}{8}\ &\equiv\ \color{#c00}{3}\\
8^{\large n}&\equiv 3^{\large n}\quad\ \ \ P(n)\\
\Rightarrow\ \ \color{#c00}{8}\,8^{\large n}&\equiv 3^{\large n}\color{#c00}{3}\quad\ P(n\!+\!1),\ \ \rm by \ CPR\end{align}\qquad $$
i.e. the proof is a special case of the (inductive) proof of the Congruence Power Rule. Note how the use of congruences highlights innate arithmetical structure allowing us to reduce the induction to an easy one $\,a\equiv b\,\Rightarrow\, a^n\equiv b^n,\,$ with obvious inductive step: multiply by $\,a\equiv b\,$ via the product rule.
If you don't know congruences we can preserve this arithmetical essence by using an analogous divisibility product rule (DPR), $ $ where $\ m\mid n\ $ means $\,m\,$ divides $\,n,\,$ namely
$\!\!\begin{align}
5&\mid\ \color{#c00}{8\,\ \ -\ 3}\\
5&\mid\ \ \ 8^{\large n} -\ 3^{\large n}\quad\ P(n)\\
\Rightarrow\ \ 5&\mid\ \color{#c00}{8}8^{n}\! -\!\color{#c00}33^{\large n}\quad\ \ P(n\!+\!1),\ \ \rm by\ the\ rule\ below\\[.8em]
{\bf Divisibility\ Product\ Rule}\ \ \ \
m&\mid\ a\ -\ b\qquad {\rm i.e.}\quad \ a\,\equiv\, b\\
m&\mid \ \ A\: -\: B\qquad\qquad \ A\,\equiv\, B\\
\Rightarrow\ \ m&\mid aA - bB\quad \Rightarrow\quad aA\equiv bB\!\pmod{\!m}\\[.4em]
{\bf Proof}\,\ \ m\mid (\color{#0a0}{a\!-\!b})A + b(\color{#0a0}{A\!-\!B}) \ \ \ &\!\!\!\!=\, aA-bB\ \ \text{by $\,m\,$ divides $\rm\color{#0a0}{green}$ terms by hypothesis.}\end{align}$
Remark $ $ The proof in Jose's answer is nothing but a (numerical) special case of the prior proof - see here where I explain that at length. Further discussion on related topics is in many prior posts.
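The divisibility itself is of course easy to machine-check for many $n$ (a trivial sketch):

```python
# check 5 | 8^n - 3^n, i.e. 8^n ≡ 3^n (mod 5), for many n
for n in range(1, 200):
    assert (8 ** n - 3 ** n) % 5 == 0
    assert pow(8, n, 5) == pow(3, n, 5)
print("5 divides 8^n - 3^n for n = 1..199")
```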
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3177988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Extending $\|H^{\frac{1}{2}}XK^{\frac{1}{2}}\|\leq\frac{1}{2}\|HX+XK\|$ from matrices to operators I saw in some literature that many author works in finite dimensional (matrix) is because it can be extended into infinite dimensional (operator). The case is as follows:
If the following inequality
$$\|H^{\frac{1}{2}}XK^{\frac{1}{2}}\|\leq\frac{1}{2}\|HX+XK\|$$ holds for any $n\in\mathbb{N}$ and for any $H,K,X\in M_{n}(\mathbb{C})$ with $H,K>0,$ then so is for any $H,K,X\in\mathcal{B}(\mathcal{H})$ with $H,K>0.$
Here $M_{n}(\mathbb{C})$ denotes the collection of $n\times n$ matrix with complex entries, $H>0$ denotes positive definite matrix (operator), $\mathcal{B}(\mathcal{H})$ denotes the collection of bounded linear operator on Hilbert space $\mathcal{H}.$
What I'm thinking is by involving (finite dimensional) projection and take the limit.
$$\lim_{n\rightarrow\infty}\|P_{n}H^{\frac{1}{2}}XK^{\frac{1}{2}}P_{n}\|\leq\frac{1}{2}\lim_{n\rightarrow\infty}\|P_{n}(HX+XK)P_{n}\|$$
where $\lim_{n\rightarrow\infty} P_{n}=I,$ ($I\in\mathcal{B}(\mathcal{H})$).
But I'm not sure, whether I'm on the right track or not.
Any help would be appreciated. Thank you
| I don't think you are on the right track. The limit $\lim_nP_n=I$ occurs in the strong operator topology (and other weak topologies on $B(H)$) but not in the norm topology.
If you are looking for the particular inequality mentioned in your question, the usual proof works the same for operators, you gain nothing by going to matrices first.
If your question is general, I don't think you can expect to have a general method that allows you to prove that any norm inequality for matrices holds for arbitrary operators.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Partition of space I have a problem with the following exercise:
Let $\left(A_{k}\right)_{k=1...n}$ be a sequence of subsets of space $\Omega$. Introduce the notation $A^{0} = \Omega \setminus A$ and $A^{1}=A$.
For $\epsilon \in \{0,1\}^{n}$, we put $$A_{\epsilon} = \bigcap^{n}_{k=1} A_{k}^{\epsilon_{k}}.$$
1. Show that if $\epsilon$, $\eta\in \{0,1\}^{n}$ and $\epsilon \neq \eta$ then $A_{\epsilon}\cap A_{\eta} = \emptyset$.
2. Show that $\bigcup \{A_{\epsilon}:\epsilon \in \{0,1\}^{n}\} = \Omega$
3. Conclude that $\{A_{\epsilon}:\epsilon \in \{0,1\}^{n}\}$ is a partition of the space $\Omega$.
| I have successfully proven part 1 of the exercise by considering $\epsilon$ and $\eta$ such that $\left(\exists i\in \{1,2,...,n\}\right) \left(\eta_{i} \neq \epsilon_{i}\right)$
. The second part seems intuitive but hard for me to write formally. I have tried a direct computation but it failed. I would be grateful for any hint that will point me in the right direction.
| An attempt with a direct computation was a step in the right direction. I suspect, however, you may have tried to do the computation for the general case. Generally, what is recommended is first to try the computation on elementary special cases.
So, first, let
$$n = 2, \quad \Omega = \{ \omega_1, \omega_2, \omega_3 \}, \quad
A_1 = \{\omega_1, \omega_2\}, \quad A_2 = \{\omega_2, \omega_3\},
$$
and try proving the required statements for this case by direct computation.
Once you are done, if the result is not yet instructive enough to enable you to "see" how to do the general case, try a slightly bigger special case; e.g., $n = 3$.
And so on.
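Following this suggestion, the $n=2$ special case can even be checked mechanically (a sketch):

```python
from itertools import product

Omega = frozenset({'w1', 'w2', 'w3'})
A1 = frozenset({'w1', 'w2'})
A2 = frozenset({'w2', 'w3'})

def A_eps(sets, eps):
    """Intersection over k of A_k^{eps_k}, with A^1 = A and A^0 = Omega minus A."""
    result = Omega
    for Ak, e in zip(sets, eps):
        result &= Ak if e == 1 else Omega - Ak
    return result

pieces = {eps: A_eps([A1, A2], eps) for eps in product([0, 1], repeat=2)}

# part 1: pairwise disjoint
for e1 in pieces:
    for e2 in pieces:
        if e1 != e2:
            assert not (pieces[e1] & pieces[e2])

# part 2: the union is Omega
assert frozenset().union(*pieces.values()) == Omega
print({eps: sorted(s) for eps, s in pieces.items()})
```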
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Show that for any integer a and prime p, $(a+1)^p \equiv a^p+ 1 \pmod{p}$. I believe that this may require the use of Fermat's Little Theorem. I rewrote it as $(a+1)^p - a^p \equiv 1 \pmod{p}$ because the right-hand side looks similar to Fermat's Little Theorem, but I was unable to figure out how I can get the left-hand side to become $a^{(p-1)}$.
| Hint 1:
$$(a+1)^p=\sum_{i=0}^{p}a^i\binom{p}{i}$$
Hint 2:
What is $\binom{p}{i}~\text{mod}~p$?
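Both hints can be verified computationally for small primes (a sketch; not a substitute for the binomial-theorem proof):

```python
from math import comb

for p in [2, 3, 5, 7, 11, 13]:
    # Hint 2: p divides C(p, i) for 0 < i < p (p prime)
    assert all(comb(p, i) % p == 0 for i in range(1, p))
    # resulting congruence: (a+1)^p ≡ a^p + 1 (mod p)
    for a in range(-10, 11):
        assert ((a + 1) ** p - a ** p - 1) % p == 0
print("verified for primes up to 13 and |a| <= 10")
```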
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
How is the relation "the smallest element is the same" reflexive? Let $\mathcal{X}$ be the set of all nonempty subsets of the set $\{1,2,3,...,10\}$. Define the relation $\mathcal{R}$ on $\mathcal{X}$ by: $\forall A, B \in \mathcal{X}, A \mathcal{R} B$ iff the smallest element of $A$ is equal to the smallest element of $B$. For example, $\{1,2,3\} \mathcal{R} \{1,3,5,8\}$ because the smallest element of $\{1,2,3\}$ is $1$ which is also the smallest element of $\{1,3,5,8\}$.
Prove that $\mathcal{R}$ is an equivalence relation on $\mathcal{X}$.
From my understanding, the definition of reflexive is:
$$\mathcal{R} \text{ is reflexive iff } \forall x \in \mathcal{X}, x \mathcal{R} x$$
However, for this problem, you can have the relation with these two sets:
$\{1\}$ and $\{1,2\}$
Then wouldn't this not be reflexive since $2$ is not in the first set, but is in the second set?
I'm having trouble seeing how this is reflexive. Getting confused by the definition here.
| Why are you testing reflexivity by looking at two different elements of $\mathcal{X}$? The definition of reflexivity says that a relation is reflexive iff each element of $\mathcal X$ is in relation with itself.
To check whether $\mathcal R$ is reflexive, just take one element of $\mathcal X$, let's call it $x$. Then check whether $x$ is in relation with $x$. Because $x=x$, the smallest element of $x$ is equal to the smallest element of $x$. Thus, by definition of $\mathcal R$, $x$ is in relation with $x$. Now, prove that this is true for all $x \in \mathcal X$. Of course, this is true because $\min(x) = \min(x)$ is always true, which is intuitive. In other words, $x \mathcal{R} x$ for all $x \in \mathcal X$, which is exactly what you needed to prove that $\mathcal R$ is reflexive.
You must understand that the definition of reflexivity says nothing about whether different elements (say $x,y$, $x\neq y$) can be in the relation $\mathcal R$. The fact that $\{1\}\mathcal R \{1,2\}$ does not contradict the fact that $\{1,2\}\mathcal R \{1,2\}$ as well.
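Since $\mathcal X$ has only $2^{10}-1$ elements, the whole setup can also be checked exhaustively (a sketch):

```python
from itertools import combinations

ground = range(1, 11)
# X: all nonempty subsets of {1,...,10}
X = [frozenset(c) for r in range(1, 11) for c in combinations(ground, r)]
assert len(X) == 2 ** 10 - 1

# reflexivity is immediate: min(A) == min(A) for every A in X
assert all(min(A) == min(A) for A in X)

# grouping by smallest element gives the equivalence classes of R
classes = {m: [A for A in X if min(A) == m] for m in ground}
assert sum(len(cl) for cl in classes.values()) == len(X)   # the classes partition X
# the class with minimum m has 2^(10-m) members:
# m is forced in, and each element of {m+1,...,10} is freely in or out
assert all(len(classes[m]) == 2 ** (10 - m) for m in ground)
print([len(classes[m]) for m in ground])   # [512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
```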
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Finding the equivalence classes of a relation Define a relation on the set of all real numbers $x, y \in \mathbb{R}$:
$x ≃ y$ if and only if $x − y \in \mathbb{Z}$
Prove that this is an equivalence relation, and find the equivalence class of the number $1/3$.
I proved that the relation is:
reflexive
$$x-x = 0 \in \mathbb{Z}$$
symmetric
$$x-y = y-x \in \mathbb{Z}$$
transitive
$$x-y \in \mathbb{Z}$$
$$y-z \in \mathbb{Z}$$
$$x-z \in \mathbb{Z}$$
Are these proofs enough?
I'm stuck on the step where I need to find the equivalence class.
| Oops, careful: in general $$x-y\ne y-x$$
You should write: if $x\sim y$ then $x-y\in \mathbb{Z}$, so $-(x-y) = y-x\in \mathbb{Z}$, so $y\sim x$.
Also, if $x\sim y$ and $y\sim z$ then $x-y,y-z\in \mathbb{Z}$, so $(x-y)+(y-z) \in \mathbb{Z}$, so $x-z\in \mathbb{Z}$, so $x\sim z$.
And the equivalence class is $$\mathbb{Z}+{1\over 3} = \{\dots,-{5\over 3},-{2\over 3},{1\over 3}, {4\over 3},{7\over 3},\dots\}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3178651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |