Containment of Unitary Representations
Definition (Weak Containment): Let $G$ be a locally compact group, and let $\pi, \rho$ be unitary representations of $G$ into Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively. Then $\pi$ is weakly contained in $\rho$ if for every $x \in \mathcal{H}$, for every compact set $K \subseteq G$, and for every $\epsilon > 0$, there exist $y_1,y_2,...,y_n \in \mathcal{K}$ such that for all $g \in K$, we have
$$\left| \langle \pi (g)x,x \rangle - \sum_{i=1}^{n} \langle \rho(g)y_i,y_i \rangle \right| < \epsilon$$
I've searched for the definition of containment of unitary representations, but I wasn't able to find it. Does anyone know what it means to say that $\pi$ is contained in $\rho$?
| To say that $\pi$ is contained in $\rho$ is the same as saying that $\pi$ is (unitarily equivalent to) a subrepresentation of $\rho$, i.e., there is an isometry $V:\mathcal H\to\mathcal K$ such that $V\pi(g)=\rho(g)V$ for all $g\in G$; equivalently, $\pi(g)=V^*\rho(g)V$ for all $g\in G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof that if the limit of a function exists then the function is bounded in a neighborhood. The question is: Let $f:D\to\mathbb{R}$ and let $c$ be an accumulation point of $D$. Suppose that $f$ has a limit at $c$. Prove that $f$ is bounded on a neighborhood of $c$. That is, prove that there exists a neighborhood $U$ of $c$ and a real number $M$ such that $\left|f(x)\right|\leq M$ for all $x\in U\cap D$.
Is the following proof valid?
Since $\lim_{x\to c} f(x)$ exists, we can conclude that for any neighborhood $V$ such that $\lim_{x\to c} f(x)\in V$, there exists a deleted neighborhood of $c, U^*$ such that $f(U^*\cap D)\subseteq V$. Let $V$ be a neighborhood of $\lim_{x\to c} f(x)$ such that for all $y\in V, \left|y\right|\leq K$ with $K\in\mathbb{R}$. Which implies that there exists a deleted neighborhood $U^*$ such that $f(U^*\cap D)\subseteq V$. Thus for all $x\in U^*\cap D, \left|f(x)\right|\leq K$. Let $M=\max(K, f(c))$ then we can conclude that for all $x\in U\cap D, \left|f(x)\right|\leq M$
| Assume $f$ is unbounded in every neighborhood of $c$. Then its absolute value does not tend toward a limit at $c$, and therefore $f$ does not tend toward a limit at $c$. This is the contrapositive of the claim: if the limit exists, $f$ must be bounded on some neighborhood of $c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Greatest common divisor of $(x+1)^{4n+3} + x^{2n}$ and $x^3-1$. I have to find the greatest common divisor of
$$(x+1)^{4n+3} + x^{2n}$$
and
$$x^3-1$$
I know I can express the second polynomial as:
$$x^3-1 = (x-1)(x^2+x+1)$$
So I would have to check if the first polynomial is divisible by $(x^3-1)$, $(x^2+x+1)$ or $(x-1)$ and if it is not divisible by any of those, then the two polynomials do not have a common divisor except for $1$. But I don't know how I can divide the polynomial
$$(x+1)^{4n+3} + x^{2n}$$
by those $3$ other polynomials and therefore can't check the greatest common divisor.
| Hint $\,\ x\!-\!1\nmid f(x)\,$ by $\,f(1)\neq 0,\,$ but $\ x^2\!+\!x\!+\!1\mid f(x)\,$ by
$\!\!\!\begin{align}\bmod\, \color{#0a0}{x^2\!+\!x\!+\!1}\!:\,\ f(x)\,\equiv\ &x^{\large 2n}+(\color{#0a0}{x\!+\!1})^{\large 4n+3}\\[.2em]
\equiv\ &x^{\large 2n}+({\color{#0a0}{-x^{\large 2}}})^{\large 4n+3}\ \ {\rm thus\ reducing\ using}\ \ x^{\large\color{#c00}3}\equiv 1\\[.2em]
\equiv\ &x^{\large 2n}\, -\, x^{\large\color{#90f}{2n}}\equiv 0,\ \, {\rm by}\ \ \color{#0a0}2(4n\!+\!3)\equiv\color{#90f}{2n}\!\!\!\pmod{\!\color{#c00}3}\end{align}$
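The hint can be sanity-checked computationally. The following sketch (not part of the original answer; sympy assumed available) verifies that the monic gcd over $\mathbb Q$ is $x^2+x+1$ for several values of $n$:

```python
import sympy as sp

x = sp.symbols('x')

for n in range(5):
    f = sp.expand((x + 1)**(4*n + 3) + x**(2*n))
    g = x**3 - 1
    # x - 1 is not a factor, since f(1) != 0
    assert f.subs(x, 1) != 0
    # the monic gcd over Q is x^2 + x + 1, as the hint claims
    assert sp.expand(sp.gcd(f, g) - (x**2 + x + 1)) == 0
```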
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that $\left\lfloor{\frac{n}{2}}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rfloor+\cdots=n-1$.
Prove that, for $n\in \Bbb{Z}^+$,
$$\left\lfloor{\frac{n}{2}}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rfloor+\left\lfloor\frac{\left\lceil\frac{\left\lceil\frac{n}{2}\right\rceil}{2}\right\rceil}{2}\right\rfloor+\cdots = n - 1\,,$$
where there are $\lceil{\log_2n}\rceil$ addends on the left-hand side.
I don't know how I could prove this. Any ideas? There is an intimate relationship here with a binary tree where each addend is the number of nodes on that layer, and $n$ is the number of leaves.
| Any positive integer $n$ satisfies the following equation, where $a_i\in\{0,1\}$ are the binary digits of $n$:
$$
n=\sum_{i=0}^{\left\lfloor\log_{2}{n}\right\rfloor}{\left(a_{i}2^{i}\right)}
$$
Substitute it into your equation to obtain:
$$
\begin{aligned}
\text{LHS}&=\sum_{i=0}^{\left\lfloor\log_{2}{n}\right\rfloor}{\left(a_{i}\left(2^{0}+\sum_{j=0}^{i-1}{2^{j}}\right)\right)}-a_{0}\\
&=\sum_{i=0}^{\left\lfloor\log_{2}{n}\right\rfloor}{\left(a_{i}2^{i}\right)}-a_{0}\\
&=n-1
\end{aligned}
$$
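The tournament-bracket intuition in the question also gives a direct check: at every layer $\lfloor m/2\rfloor+\lceil m/2\rceil=m$, so the sum telescopes to $n-1$. A short illustrative script (my own, not from the post):

```python
import math

def halving_sum(n):
    """Add floor(m/2) while replacing m by ceil(m/2), until m == 1."""
    total, m, terms = 0, n, 0
    while m > 1:
        total += m // 2       # floor(m/2): nodes paired off on this layer
        m = (m + 1) // 2      # ceil(m/2): nodes advancing to the next layer
        terms += 1
    return total, terms

for n in range(1, 2000):
    total, terms = halving_sum(n)
    assert total == n - 1                      # the claimed identity
    assert terms == math.ceil(math.log2(n))    # the claimed number of addends
```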
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Decompose Poisson random variable as sum of Poisson random variables If $X,Y$ are independent Poisson random variables with parameters $\lambda_1, \lambda_2$, then $X+Y$ is a Poisson random variable with parameter $\lambda_1+\lambda_2$. I am wondering whether the converse is true: given a Poisson random variable on a probability space $(\Omega, \mathcal{F},P)$, can we always decompose it into independent Poisson random variables with parameters $\lambda_1,\lambda_2$ such that their sum is the given random variable?
| Suppose $W\sim\operatorname{Poisson}(\lambda).$
Suppose $0<\lambda_1 <\lambda,$ and let $\lambda_2 = \lambda - \lambda_1.$
Let $p = \dfrac{\lambda_1}\lambda = \dfrac{\lambda_1}{\lambda_1+\lambda_2}.$
Let $X\mid W \sim\operatorname{Binomial}(W,p),$ i.e. this is the number of successes in $W$ independent trials with probability $p$ of success on each trial. Let $Y= W-X.$ Then $Y\mid W \sim\operatorname{Binomial}(W,1-p).$
Given all of this, one can conclude that
*
*$W=X+Y.$
*$X\sim\operatorname{Poisson}(\lambda_1).$
*$Y \sim\operatorname{Poisson}(\lambda_2).$
*$X,Y$ are independent.
Proving this is a standard exercise. How to do it is a question that has been posted here a number of times.
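A quick seeded simulation (my own sketch, assuming numpy is available) illustrates the thinning construction described above:

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 3.0
reps = 200_000

W = rng.poisson(lam1 + lam2, size=reps)        # W ~ Poisson(lam1 + lam2)
X = rng.binomial(W, lam1 / (lam1 + lam2))      # X | W ~ Binomial(W, p)
Y = W - X                                      # Y | W ~ Binomial(W, 1 - p)

# X and Y behave like independent Poisson(lam1) and Poisson(lam2):
print(X.mean(), X.var())          # both close to lam1 = 2
print(Y.mean(), Y.var())          # both close to lam2 = 3
print(np.corrcoef(X, Y)[0, 1])    # close to 0, consistent with independence
```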
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3577047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a primitive element in a field with 27 elements. I am trying to construct a field with 27 elements, and find a primitive element in that field. I considered the irreducible polynomial $f(x)=x^3+2x+1$ in $\mathbb{Z}_3[x]$. Then I considered
$$\mathbb{Z}_3[x]/\langle f\rangle.$$
This is a field with $3^{\deg f}=3^3=27$ elements. I know that the unique elements of this field are given by
$$\{a_0+a_1t+a_2t^2:a_i\in\mathbb{Z}_3\}$$
where $t=x+\langle f\rangle$. Now my question is, what is an efficient way (for beginners) to find a primitive element of this field? One could argue that it suffices to find an element $u\in\mathbb{Z}_3[x]/\langle f\rangle$ with $\text{ord}(u)\neq 1,2,13$, but finding such a $u$ is computationally tedious (at least to me). Any comments or advice appreciated.
| From Arthur's answer we know that a guessed element $at^2+bt+c$ will likely be primitive. We have to choose at least one of $a$ and $b$ non-zero, so trying $t$ itself first is a good start. I wanted to add how the computation reduces to taking powers of matrices, i.e., linear algebra.
Identifying a polynomial $at^2+bt+c\in\mathbb Z_3[x]/\langle f\rangle$ with the vector $(a,b,c)^T\in\mathbb Z_3^3$, the multiplicative action of $t$ is given by the matrix
$$
Z =
\begin{pmatrix}
0 & 1 & 0 \\
1 & 0 & 1 \\
2 & 0 & 0
\end{pmatrix}.
$$
This is obtained from $t(at^2+bt+c) = bt^2 + (a+c)t + 2a$.
Obviously $Z^2\neq I$, since $t^2\neq 1$ and using your favorite method of computing powers of matrices you get $Z^{13} = 2I \neq I$ as well. Hence $t$ is a primitive element in $\mathbb Z_3[x]/\langle f\rangle$.
If this didn't work out, you could now try other matrices $aZ^2+bZ+cI$ and compute their powers.
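The matrix computation is easy to automate. A small sketch (mine, not the answerer's) confirming $Z^2\neq I$, $Z^{13}=2I$, and $Z^{26}=I$ over $\mathbb Z_3$:

```python
# Multiplication-by-t matrix from the answer, with entries in Z_3
Z = [[0, 1, 0],
     [1, 0, 1],
     [2, 0, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def matmul3(A, B):
    """3x3 matrix product reduced mod 3."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % 3
             for j in range(3)] for i in range(3)]

def matpow3(A, e):
    R = I
    for _ in range(e):
        R = matmul3(R, A)
    return R

assert matpow3(Z, 2) != I                                    # ord(t) != 2
assert matpow3(Z, 13) == [[2, 0, 0], [0, 2, 0], [0, 0, 2]]   # t^13 = 2, not 1
assert matpow3(Z, 26) == I                                   # so ord(t) = 26
```

Since the multiplicative group has order $26$ and the order of $t$ is neither $1$, $2$, nor $13$, this confirms that $t$ is primitive.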
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3577198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Probability of choosing envelopes
Suppose that you have 20 different letters and 10 distinctly addressed envelopes. The 20 letters consist of 10 pairs, where each pair belongs inside one of the 10 envelopes. Suppose that you place the 20 letters inside the 10 envelopes, two per envelope, but at random.
What is the probability that exactly 3 of the 10 envelopes will contain both of the letters which they should contain?
I have seen similar questions to this one but they always assign only one letter to one envelope. Also, the scenario is usually how to choose AT LEAST one right envelope. I am not too clear as to how to adapt to this new scenario.
| Let $S_i$ be the arrangements where envelope $i$ has both of its intended letters. The number of intersections of $k$ of the $S_i$ is
$$
N_k=\overbrace{\ \ \binom{10}{k}\ \ }^{\substack{\text{number of ways}\\\text{to choose the}\\\text{$k$ envelopes}}}\ \ \overbrace{\frac{(20-2k)!}{2^{10-k}}\vphantom{\binom{10}{k}}}^{\substack{\text{ways to arrange}\\\text{$20-2k$ letters in}\\\text{$10-k$ envelopes}}}
$$
The Generalized Principle of Inclusion-Exclusion says that the number of arrangements with exactly $3$ envelopes properly filled is
$$
\begin{align}
\sum_{k=3}^{10}(-1)^{k-3}\binom{k}{3}\binom{10}{k}\frac{(20-2k)!}{2^{10-k}}
&=\binom{10}{3}\sum_{k=3}^{10}(-1)^{k-3}\binom{7}{k-3}\frac{(20-2k)!}{2^{10-k}}\\[6pt]
&={75718299600}
\end{align}
$$
The number of ways to arrange $20$ letters in $10$ envelopes is
$$
\frac{20!}{2^{10}}={2375880867360000}
$$
Thus, the requested probability is
$$
\begin{align}
\frac{75718299600}{2375880867360000}
&=\frac{21032861}{659966907600}\\[6pt]
&\approx0.0000318695691523
\end{align}
$$
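The count and the probability can be reproduced exactly with integer arithmetic. A sketch (not from the original answer):

```python
from fractions import Fraction
from math import comb, factorial

def arrangements(m):
    # ways to place 2m distinct letters into m distinct envelopes, two per envelope
    return factorial(2 * m) // 2**m

# generalized inclusion-exclusion for "exactly 3 envelopes correctly filled"
good = sum((-1)**(k - 3) * comb(k, 3) * comb(10, k) * arrangements(10 - k)
           for k in range(3, 11))
total = arrangements(10)

assert good == 75718299600
assert total == 2375880867360000
assert Fraction(good, total) == Fraction(21032861, 659966907600)
print(float(Fraction(good, total)))   # ~3.18696e-05
```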
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3577383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
How to compute $\int_0^\infty \frac{\log(2+x^2)}{4+x^2}\,\mathrm dx$ Evaluate the integral
$$\int_0^\infty \frac{\log(2+x^2)}{4+x^2}dx$$
--
I started by stating that the integral from 0 to infinity should be the same as half the integral from $-\infty$ to $\infty$, that is:
$$\int_0^\infty \frac{\log(2+x^2)}{4+x^2}dx = \frac{1}{2}\int_{-\infty}^\infty \frac{\log(2+x^2)}{4+x^2}dx$$
and then by stating that this must be equal to
$$\pi i \cdot \sum(\text{residues in upper plane})$$
noting that there is a singularity at $z=2i$ in the upper half plane, and an "issue" (I don't know if it's strictly a singularity) with the log function at $z=\sqrt{2}i$.
The residue at $z=2i$ is easily dealt with, but when I try to deal with the log issue, I can't make any headway. I decided to make my branch cut between $z=\sqrt{2}i$ and $z=-\sqrt{2}i$, and form a contour that goes around this cut and the point, but it doesn't seem to be working out for me.
Suggestions would be appreciated!
| Without residues.
$$ \frac{\log(2+x^2)}{4+x^2}=\frac{\log(x+i\sqrt2)+\log(x-i\sqrt2)}{(x+2i)(x-2i)}$$
$$\frac{1}{(x+2i)(x-2i)}=\frac{i}{4}\left(\frac{1}{x+2 i}-\frac{1}{x-2 i}\right)$$ So, we face four integrals looking like
$$\int \frac{\log(x+a)}{x+b}\,dx=\text{Li}_2\left(\frac{x+a}{a-b}\right)+\log (x+a) \log \left(\frac{b+x}{b-a}\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3577535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How many number of permutations Here’s a question I’m struggling with:
There are 10 books, consisting of 4 biographies and 6 novels. A person stacks four of the books together. In the stack of four books, at least 2 books must be biographies. How many possible permutations are there for stacking the four books?
I thought of two ways to do this problem:
(# permutation for biographies with r=2) * (# permutations for 8 remaining books with r=2) * (# of possible positions where the two biographies can occupy)
$$ ^4 P_2 * ^8 P_2 * ^4 C_2=
\\\dfrac{4!}{2!} * \dfrac{8!}{6!} *\dfrac{4!}{2!*2!}=
\\4*3*8*7*6=
4032$$
The rationale behind this method is as follows:
To satisfy the requirement, you first pick two biography books out at random, which has $4\times 3$ permutations. Then, out of the 8 remaining books, you pick 2 random books, which has $8\times 7$ permutations. In terms of order, there are 6 total combinations for ordering of biography B and other books X (BBXX, BXBX, BXXB, XBBX, XBXB, XXBB). Thus the solution should be $4\times 3\times 8\times 7\times 6$.
The other method I used is as follows:
(# of total permutations) - (# of permutations with no biography) - (# of permutations with exactly 1 biography)
$$
^{10} P_4 - ^6 P_4 - ^6 P_3 * ^4 P_1 * ^4 C_1
\\
\dfrac{10!}{6!}-\dfrac{6!}{2!}-\dfrac{6!}{3!}*4*4=
\\ 5040-360-1920=2760
$$
The rationale behind this is simpler: from the total number of permutations, I subtract away permutations where no biography book appears and permutations where exactly one biography book appears, leaving only permutations with 2 or more biography books.
The two methods both make logical sense to me, so I'm lost as to why they give different results. I'm struggling to see what went wrong that would cause the two to have differing solutions.
| The first method is miscounting. Consider your first method of counting, and suppose you have all four biographies in the stack: $A,B,C,D$.
You are choosing $A,B$ from the biographies, then choosing $C,D$ from the remaining eight books, then permuting them. This is the same as choosing $C,D$ from the biographies, then choosing $A,B$ from the remaining eight books, and them permuting them.
You can modify the first method by counting the number of ways to stack the four with exactly 2 biographies plus exactly three biographies, plus exactly 4 biographies.
This calculation is the number of ways to choose four books for the stack, then permute the books. So, I am adding together the number of ways to choose 2 biographies and 2 novels, plus 3 biographies and 1 novel, plus 4 biographies, and then, finally, permuting them.
$$\left((^4C_2)(^6C_2)+(^4C_3)(^6C_1) + (^4C_4)(^6C_0)\right)4! = 2760$$
This gives the same answer you already found by your second counting method.
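With only $P(10,4)=5040$ possible stacks, brute force also confirms the count. An illustrative check (labels are my own choice):

```python
from itertools import permutations

# books 0-3 are the biographies, books 4-9 are the novels
count = sum(1 for stack in permutations(range(10), 4)
            if sum(1 for b in stack if b < 4) >= 2)

assert count == 2760   # agrees with both corrected methods
```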
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3577711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Net in $\mathcal{B}^*$ converging to unbounded functional Let $\varphi$ be an unbounded functional on a Banach space $\mathcal{B}$. Can we always find a net of bounded functionals (i.e. in $\mathcal{B}^*$) converging to $\varphi$ in w*-topology?
Any proof or counterexample?
| Let $V$ be the directed set of finite dimensional subspaces of $\mathcal B$. For each $v\in V$ choose a continuous projection onto $v$ and denote it with $P_v$ (this works with Hahn-Banach and uses that $v$ is finite dimensional). Now define a functional $\varphi_v := \varphi\lvert_{v}\circ P_v$. Note that $P_v$ is continuous and $\varphi\lvert_v$ is a linear functional on a finite dimensional space and as such continuous. Then $\varphi_v$ is continuous.
Now consider $x\in \mathcal B$. If $x\in v$ then $\varphi_v(x) = \varphi(P_v(x))=\varphi(x)$. Now for any $v\in V$ there is a $w\in V$ with $w\supseteq v$ and $x\in w$. Further for any $u\supseteq w$ you have $x\in u$ also. This implies then that
$$\lim_{v\to\mathcal B}\varphi_v(x) = \varphi(x), $$
and $\varphi_v$ converges pointwise to $\varphi$, which is the weak* convergence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can someone explain why this is true? $\int f(x) g'(x)\,dx = \int f(x)\,dg$ I'm a software engineer (have been for 20+ years) and have over the past few years taken quite an interest in math. I would appreciate it if anyone is able to help me out with my question.
I understand the following:
$$\int g’(x)\,dx = \int dg$$
This makes sense to me.
$$\int g’(x)\,dx = g(x)+c$$
$$\int dg = g(x) + c$$
That being the case, $\displaystyle \int g'(x)\,dx = \int dg$.
As far as I know, this is also true:
$$\int f(x)g’(x)\,dx = \int f(x)\,dg$$
What I don't completely understand is why you can put $f(x)$ in each of these integrals and they continue to be equal. I mean, I understand that if $\int g’(x)\,dx = \int dg$, then I should be able to swap $g’(x)\,dx$ for $dg$ (or vice versa), but I'm trying to visualize how $f(x)$ doesn't somehow throw this off.
Just reading it, I am looking for the integral of $f(x)$ multiplied by $g’(x)$ with respect to $x$ and the integral of $f(x)$ with respect to $g$.
Can someone explain the logic of this to me and/or point me to an applicable proof? I've Googled around to see if I can find a good explanation, but I must not be describing my question in the search query well enough (hopefully I am doing an OK job describing it here).
| As Yves Daoust writes, this is from the simple fact:
$$\frac{dg}{dt} \cdot dt = dg \qquad (\text{cancel } dt)$$
This is independent of integration.
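A concrete instance can be checked symbolically. In this example (my own, with sympy assumed): take $f(x)=x^2$ and $g(x)=x^3$, so that, viewed as a function of $g$, $f = g^{2/3}$:

```python
import sympy as sp

x, g = sp.symbols('x g', positive=True)

lhs = sp.integrate(x**2 * sp.diff(x**3, x), x)   # integral of f(x) g'(x) dx
rhs = sp.integrate(g**sp.Rational(2, 3), g)      # integral of f dg, with f = g^(2/3)
rhs_in_x = rhs.subs(g, x**3)                     # back-substitute g = x^3

assert sp.simplify(lhs - rhs_in_x) == 0          # both give 3*x**5/5
```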
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
In a function formula, can the independent variable $x$ (ranging over $\mathbb{N}$) be used as a single digit inside a number? E.g. $ f(x)= 4,5x975$ Suppose $x$ ranges over {0,1,2,3,4,5,6,7,8,9}.
Is $f(x) = 4,5x975$ a valid function formula ?
What about the case where $x$ ranges over the set of natural numbers.
In that case, $x$ would not represent necessarily the hundredth, 9 would not either necessarily represent the $10^{-3} $th, etc.
Which equivalent general formula would give us the value of $f(x)$ in case $x\gt9$? ( I mean, the case where $x$ has 2 digits or more?)
| Yes, as it can be expressed as $f(x)= 450975+1000x$ when $x \in \{0,1,\dots,9\}$.
Over the set of natural numbers (for $x\ge 1$), we still have a formula:
$f(x)= 45\cdot 1000\cdot 10^{\lfloor\log_{10}x\rfloor+1}+975+1000x$
Proof: How many digits does a number have? $\lfloor \log_{10} n \rfloor +1$
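A check of the formula, using the fact that $x\ge 1$ has $\lfloor\log_{10}x\rfloor+1$ decimal digits (which equals `len(str(x))`); the helper names below are mine:

```python
def f_digit(x):
    # place the decimal digits of x between "45" and "975"
    return int("45" + str(x) + "975")

# single-digit case: f(x) = 450975 + 1000*x
for x in range(10):
    assert f_digit(x) == 450975 + 1000 * x

def f_formula(x):
    d = len(str(x))   # = floor(log10(x)) + 1 for x >= 1
    return 45 * 1000 * 10**d + 975 + 1000 * x

# general case: the formula shifts "45" past however many digits x has
for x in range(1, 5000):
    assert f_formula(x) == f_digit(x)
```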
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How many Steiner Symmetrizations does it take to make an arbitrary set convex? I have not seen this question investigated before but I might be wrong:
*
*Can any subset of $\mathbb{R}^d$ be turned into a convex set by finitely many steiner symmetrizations?
*If yes, is the number of symmetrizations necessarily bounded, i.e. does there exist a dimensional constant such that it will always take at most $m_d$ symmetrizations (this would be interesting)
*If no, what is a counterexample, i.e. a set that can not be made convex within a finite number of symmetrizations?
I hope somebody knows something; this is a super interesting situation for me.
Thanks for any answers.
| The Koch snowflake needs an infinite number of Steiner symmetrizations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to do a quick estimation if $x_2 \ll x_1$ holds for the roots of a quadratic equation - to apply quick and easy root-finding formula? Wikipedia provides an interesting method of (approximately) solving a quadratic equation:
Vieta's formulas provide a useful method for finding the roots of a
quadratic in the case where one root is much smaller than the other.
If $|x_2|\ll|x_1|$, then $x_1+x_2\approx x_1$ and we have the estimate $$x_1\approx-\frac ba$$
But how can I quickly estimate that one root is much larger than the other?
| One root is much smaller than the other when $|ac| \ll b^2$ because then the square root in the quadratic formula is very close to $b$. The approximation given comes from replacing the square root by $b$ and taking the minus sign so the two terms add. This is also the time that the calculation of the other root suffers from roundoff error in a computer because $-b+\sqrt{b^2-4ac}$ is the difference of two similar numbers.
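A numeric illustration of both points above, the $x_1\approx -b/a$ estimate and the roundoff-safe computation of the small root via Vieta; the values are my own choices:

```python
import math

a, b, c = 1.0, 1e8, 1.0   # |ac| << b^2, so one root is much smaller than the other

# Large root: essentially -b/a. Use the sign of b so the two terms add
# (avoiding the catastrophic cancellation mentioned in the answer).
x1 = (-b - math.copysign(math.sqrt(b*b - 4*a*c), b)) / (2*a)

# Small root from Vieta's product x1 * x2 = c/a, with no cancellation.
x2 = c / (a * x1)

assert abs(x1 - (-b / a)) < 1e-14 * abs(b / a)   # x1 is ~ -b/a, as claimed
assert abs(x1 * x2 - c / a) < 1e-12              # Vieta's product holds
```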
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Circumcircle of a square and an arbitrary point inside it; prove: $|A_1B_1|\cdot|C_1D_1|=|A_1D_1|\cdot|B_1C_1|$
Point $T$ is inside the square $ABCD$. Let $A_1,B_1,C_1,D_1$ be the second intersection points of the lines $AT,BT,CT,DT$, respectively, with the circumcircle of the square $ABCD$. Prove:
$$|A_1B_1|\cdot|C_1D_1|=|A_1D_1|\cdot|B_1C_1|$$
My attempt:
I was looking for the inscribed angles of the same measure:
$$\measuredangle ABB_1=\measuredangle AA_1B_1\;\&\;\measuredangle BTA=\measuredangle B_1TA_1\implies\;\Delta ABT\;{\sim}\;\Delta A_1B_1T$$
Analogously:
$$\Delta TAD_1{\sim}\Delta TA_1D$$$$\;\Delta C_1D_1T\;{\sim}\Delta CDT$$$$\Delta D_1A_1T{\sim}\Delta DAT$$$$\Delta B_1C_1T{\sim}\Delta CBT$$
Also, $\measuredangle DB_1B=\measuredangle BA_1D$, so $DB_1B$ and $BA_1D$ are right triangles.
However, I couldn't find any triangles with useful information.
May I ask for advice on solving the problem? Thank you in advance!
| Because $$\frac{A_1D_1\cdot B_1C_1}{A_1B_1\cdot C_1D_1}=\frac{\frac{A_1D_1}{AD}\cdot\frac{B_1C_1}{BC}}{\frac{A_1B_1}{AB}\cdot\frac{C_1D_1}{CD}}=\frac{\frac{A_1T}{DT}\cdot\frac{C_1T}{BT}}{\frac{A_1T}{BT}\cdot\frac{C_1T}{DT}}=1$$
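A numeric spot-check of the identity (my own sketch, using the square with vertices $(\pm1,\pm1)$ and circumcircle $x^2+y^2=2$):

```python
import math, random

random.seed(1)
A, B, C, D = (1, 1), (-1, 1), (-1, -1), (1, -1)

def second_hit(P, T):
    """Second intersection of line P->T with the circle x^2 + y^2 = 2."""
    dx, dy = T[0] - P[0], T[1] - P[1]
    # |P + s*d|^2 = 2 reduces to a*s^2 + b*s = 0 since |P|^2 = 2
    a = dx*dx + dy*dy
    b = 2 * (P[0]*dx + P[1]*dy)
    s = -b / a                       # the nonzero root
    return (P[0] + s*dx, P[1] + s*dy)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

for _ in range(100):
    T = (random.uniform(-0.99, 0.99), random.uniform(-0.99, 0.99))
    A1, B1, C1, D1 = (second_hit(P, T) for P in (A, B, C, D))
    lhs = dist(A1, B1) * dist(C1, D1)
    rhs = dist(A1, D1) * dist(B1, C1)
    assert abs(lhs - rhs) < 1e-9 * max(lhs, 1.0)
```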
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3578871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $(ay-bx)^2+(az-cx)^2\ge (bz-cy)^2$ Let be $a,b,c,x,y,z>0$ such that $ax\ge \sqrt{(b^2+c^2)(y^2+z^2)}$. Prove that
$$(ay-bx)^2+(az-cx)^2\ge (bz-cy)^2$$
I tried to expand
$$a^2(y^2+z^2)+x^2(b^2+c^2)+2bcyz\ge b^2z^2+c^2y^2+2abxy+2acxz$$
Here my idea was to use the condition after the means inequality:
$$a^2(y^2+z^2)+x^2(b^2+c^2) \ge 2ax\sqrt{(b^2+c^2)(y^2+z^2)}$$
$$\ge 2(b^2+c^2)(y^2+z^2)$$
but it's not good enough to prove the question
$$2(b^2+c^2)(y^2+z^2)+2bcyz\ge b^2z^2+c^2y^2+2abxy+2acxz$$
is not true when $a$ and $x$ can be very big.
Thank you for your help.
| Using Cauchy-Schwarz:
$$
\begin{aligned} \left[\left(\frac{c}{a}\right)^2+\left(\frac{b}{a}\right)^2\right]\cdot \left[(ay-bx)^2+(cx-az)^2\right]&\geq \left[\frac{c}{a}(ay-bx)+\frac{b}{a}(cx-az)\right]^2\\
&=(cy-bz)^2\\
\end{aligned}
$$
and similarly
$$
\begin{aligned} \left[\left(\frac{z}{x}\right)^2+\left(\frac{y}{x}\right)^2\right]\cdot \left[(bx-ay)^2+(az-cx)^2\right]&\geq (bz-cy)^2\\
\end{aligned}
$$
Multiplying the two inequalities:
$$\left[\left(\frac{c}{a}\right)^2+\left(\frac{b}{a}\right)^2\right]\cdot \left[\left(\frac{z}{x}\right)^2+\left(\frac{y}{x}\right)^2\right]\cdot \left[(ay-bx)^2+(cx-az)^2\right]^2 \geq (bz-cy)^4$$
and notice that using the condition:
$$\left[\left(\frac{c}{a}\right)^2+\left(\frac{b}{a}\right)^2\right]\cdot \left[\left(\frac{z}{x}\right)^2+\left(\frac{y}{x}\right)^2\right]=\frac{(b^2+c^2)(y^2+z^2)}{a^2x^2}\leq 1$$
It follows that:
$$\left[(ay-bx)^2+(cx-az)^2\right]^2 \geq (bz-cy)^4$$
which is equivalent to the inequality to be proved.
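A randomized check of the implication, with the hypothesis $ax\ge\sqrt{(b^2+c^2)(y^2+z^2)}$ enforced by construction (my own sketch):

```python
import math, random

random.seed(0)
for _ in range(10_000):
    a, b, c, y, z = (random.uniform(0.1, 10) for _ in range(5))
    # choose x so that a*x >= sqrt((b^2+c^2)(y^2+z^2)) holds
    x = math.sqrt((b*b + c*c) * (y*y + z*z)) / a * random.uniform(1.0, 3.0)
    lhs = (a*y - b*x)**2 + (a*z - c*x)**2
    rhs = (b*z - c*y)**2
    assert lhs >= rhs - 1e-9 * (1 + rhs)
```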
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How many integer solutions are there for the equation $c_1 + c_2 + c_3 + c_4 = 25$, where $c_i \ge 0$ for all $1 \le i \le 4$ Question Statement:
How many integer solutions are there for the equation $c_1 + c_2 + c_3 + c_4 = 25$, where $c_i \ge 0$ for all $1 \le i \le 4$.
I would like to solve this problem using combinatorics and I've read generating functions can be used as a method to find the solution. However, I have no idea how to do this.
My first attempt at solving this problem is below,
Observe the missing constraint $c_i \le 21$. The solution can be obtained by reasoning using Principle of Exclusion and Inclusion.
Applying the theorem to the above problem yields,
$N(\bar{c_1}\bar{c_2}\bar{c_3}\bar{c_4}) = N - \sum N(c_i) + \sum N(c_i c_j) - \sum N(c_i c_j c_k) + \sum N(c_1 c_2 c_3 c_4)$
For all $i,j,k = 1,...,4$.
Since, $N=H(4,25)=C(28,25)$, $N(c_i)=H(4,4)=C(7,4)$ and $N(c_i c_j) = N(c_i c_j c_k) = N(c_1 c_2 c_3 c_4) = 0$. Hence, the result is 3248.
| Generating Function Method
Associate to each variable the polynomial $p(x) = \sum_{i=0}^{25} x^i$. Then the product
$$ \left(p(x)\right)^4 = 1 + 4 x + 10 x^2 + \cdots + 3276 x^{25} + \cdots $$
exhibits the fact that there are $3276$ solutions to the equation. It also exhibits the number of solutions of \begin{align*}
c_1 + c_2 + c_3 + c_4 &= 0 & :& & 1 \\
c_1 + c_2 + c_3 + c_4 &= 1 & :& & 4 \\
c_1 + c_2 + c_3 + c_4 &= 2 & :& & 10 \\
& & \vdots& &
\end{align*}
Our polynomial encodes the choices for the variable in the powers of $x$, so we have one term for each of the integers $0$ through $25$. When you multiply two of these polynomials, you get generic terms $x^i x^j$ for $0\leq i,j \leq 25$. But consider the terms we get for $i+j = 5$, for instance, they are
$$ x^0 x^5, x^1 x^4, x^2 x^3, x^3 x^2, x^4 x^1, x^5 x^0, $$
that is, we have one term in the product for each way to write $5$ as a sum of two nonnegative integers, so the resulting product of two polynomials records the number of ways to produce $n$ as the sum of two nonnegative integers in the coefficient of $x^n$. Multiplying in the other two polynomials, the coefficient of $x^n$ records the number of ways to write $n$ as the sum of four nonnegative integers (each at most $25$).
(One might wonder how to compute that massive product. You don't, exactly. You only need terms of degree up to $25$ throughout the computation, so you only keep track of the leading terms and ignore the rest. For me, this computation went as \begin{align*}
p^1 &= 1 + x + x^2 + \cdots + x^{25} + \text{(don't care)} \\
p^2 &= 1 + 2x + 3x^2 + \cdots + 26 x^{25} + \text{(don't care)} \\
p^3 &= 1 + 3x + 6x^2 + \cdots + 351 x^{25} + \text{(don't care)} \\
p^4 &= 1 + 4x + 10x^2 + \cdots + 3276 x^{25} + \text{(don't care)}
\end{align*}
It helped that I am familiar with figurate numbers and recognized the coefficients were, successively, constantly one, sequential integers, sequental triangular numbers, and sequential tetrahedral numbers.)
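The truncated-product computation described in the parenthetical can also be written out directly; a sketch (mine, not the answerer's):

```python
from math import comb

# p(x) = 1 + x + ... + x^25; compute p(x)^4, keeping only degrees <= 25
p = [1] * 26
prod = [1] + [0] * 25
for _ in range(4):
    nxt = [0] * 26
    for i, coeff in enumerate(prod):
        if coeff:
            for j in range(26 - i):   # discard terms of degree > 25
                nxt[i + j] += coeff * p[j]
    prod = nxt

assert prod[:3] == [1, 4, 10]   # matches the small cases in the answer
assert prod[25] == 3276         # number of solutions of c1+c2+c3+c4 = 25
assert prod[25] == comb(28, 3)  # stars and bars gives the same count
```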
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove $x^6-6x^4+12x^2-11$ is irreducible over $\mathbb{Q}$ Extracted from Pinter's Abstract Algebra, Chapter 27, Exercise B1:
Let $p(x) = x^6-6x^4+12x^2-11$, which we can reduce modulo $3$ to a polynomial in $\Bbb{Z}_3[x]$:
\begin{align*}
x^6+1
\end{align*}
Since none of the three elements $0,1,2$ in $\Bbb{Z}_3$ is a root of the polynomial, the polynomial has no factor of degree 1 in $\Bbb{Z}_3[x]$. So the only possible factorings into non constant polynomials are
\begin{align*}
x^6+1 &= (x^3+ax^2+bx+c)(x^3+dx^2+ex+f)
\end{align*}
or
\begin{align*}
x^6+1 &= (x^4+ax^3+bx^2+cx+d)(x^2+ex+f)
\end{align*}
From the first equation, since corresponding coefficients are equal, we have
\begin{align}
x^0:\qquad & cf &= 1 \tag{1} \\
x^1:\qquad & bf + ce &= 0 \tag{2} \\
x^2:\qquad & af + be + cd &= 0 \tag{3} \\
x^3:\qquad & c + f + bd + ae &= 0 \tag{4} \\
x^5:\qquad & a + d &= 0 \tag{5} \\
\end{align}
From (1), $c = f = \pm1$, and from (5), $a + d = 0$. Consequently, $af + cd = c(a + d) = 0$, and by (3), $eb = 0$. But from (2) (since $c = f$), $b + e = 0$, and therefore $b = e = 0$. It follows from (4) that $c + f = 0$, which is impossible since $c = f = \pm1$. We have just shown that $x^6 + 1$ cannot be factored into two polynomials each of degree 3.
For the second equation, however, $x^6+1=(x^2+1)^3$ in $\Bbb{Z}_3[x]$. So we cannot say $p(x)$ is irreducible over $\Bbb{Q}$ because $x^6+1$ is irreducible over $\Bbb{Z}_3$. What am I missing here?
| Update: The answer is wrong but see my comment!
| I think the reasoning should be like the following. As $p(x)$ has integer coefficients and is monic, every zero of $p$ that lies in $\mathbb{Q}$ is also an integer. But every integer zero of $p$ must divide the constant term, which is $-11$; therefore it could only be $\pm1$ or $\pm11$, and neither is a root. The explanation above shows that $p$ can't be the product of two cubic polynomials. So if $p$ were reducible it would have an irreducible monic quadratic polynomial as a factor, which would have two zeros of the form $r\pm\sqrt{q}$ whose square is in $\mathbb{Q}$. But $p$ has only even powers of $x$, so substituting $x^2\rightarrow y$ gives a cubic polynomial which is either irreducible or has a linear factor. Then the same reasoning as above applies, and since neither $\pm1$ nor $\pm11$ is a zero of that polynomial, we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Show that distribution of function of LRT statistic for normal mean hypothesis testing is normally distributed Suppose $X_1, \dots, X_n \overset{\text{iid}}{\sim} N(\mu, \sigma)$, with $\sigma$ known. What is the distribution of $-2\ln(\lambda)$ where $\lambda$ is the LRT statistic for testing $H_0:\mu = \mu_0, H_1:\mu \neq \mu_0$?
So we know $\lambda = \left(\frac{\hat{\sigma^2}}{\hat{\sigma^2_0}}\right)^{\frac{n}{2}},$ with $\hat{\sigma^2} = \frac{1}{n}\sum(X_i-\bar{X})^2, \hat{\sigma^2_0} = \frac{1}{n}\sum(X_i-\mu_0)^2$.
Correct answer: $N(\mu=0, \sigma=\sigma)$
My work:
$-2\ln(\lambda) = -n\ln\left(\frac{\sum(X_i-\bar{X})^2}{\sum(X_i-\mu_0)^2}\right) = -n\ln\left(\frac{\sum(X_i-\bar{X})^2}{\sigma^2}\frac{\sigma^2}{\sum(X_i-\mu_0)^2}\right) = -n\ln\left(\chi_{n-1}^2 \cdot \frac{1}{\sum\left(\frac{X_i-\mu_0}{\sigma}\right)^2}\right)$,
since the square of a standard normal variable $Z$ is $\chi_1^2$ distributed, and we have each $X_i$ iid,
$ = -n\ln\left(\chi_{n-1}^2 \cdot \frac{1}{\sum{Z_i^2}}\right) = -n\ln\left(\chi_{n-1}^2 \cdot \frac{1}{\sum{\chi_1^2}}\right) = -n\ln\left(\frac{\chi_{n-1}^2}{\chi_n^2} \right)$.
From here I fail to see the relation to the normal distribution?
| For unknown $\sigma^2$
$$\lambda=\left(\frac{\sum(X_i-\bar{X})^2}{\sum(X_i-\mu_0)^2}\right) =\left(\frac{\sum(X_i-\bar{X})^2}{\sum(X_i-\bar{X})^2+n(\bar{X}-\mu_0)^2}\right)$$
$$=\left(\frac{1}{1+\frac{n(\bar{X}-\mu_0)^2}{\sum(X_i-\bar{X})^2}}\right)$$
$$=\left(\frac{1}{1+\frac{n(\bar{X}-\mu_0)^2}{(n-1)\frac{1}{n-1}\sum(X_i-\bar{X})^2}}\right)$$
$$=\left(\frac{1}{1+\frac{T^2}{n-1}}\right)$$
where
$T^2=\frac{n(\bar{X}-\mu_0)^2}{\frac{1}{n-1}\sum(X_i-\bar{X})^2}$
Now reject $H_0$ if $\lambda \leq \lambda_0 $ $\Leftrightarrow$ $T^2>c$ $\Leftrightarrow$ $|T|>k$, $T\sim t(n-1)$
For known $\sigma^2$
$$\lambda=\frac{(2\pi \sigma^2)^{-n/2}\, e^{-\frac{1}{2\sigma^2}\sum (X_i -\mu_0)^2}}{(2\pi \sigma^2)^{-n/2}\, e^{-\frac{1}{2\sigma^2}\sum (X_i -\bar{X})^2}} $$
$$=e^{-\frac{1}{2\sigma^2} \left( \sum (X_i -\mu_0)^2 -\sum (X_i -\bar{X})^2 \right)}$$
$$=e^{-\frac{1}{2\sigma^2}\, n(\bar{X}-\mu_0)^2 }$$
$$=e^{-\frac{1}{2} \left( \frac{\bar{X}-\mu_0}{\sigma / \sqrt{n}}\right)^2 }$$
now $\lambda \leq \lambda_0$ $\Leftrightarrow$
$$\left(\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\right)^2 >c$$
$\Leftrightarrow$
$$|Z|=\left|\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\right| >\sqrt{c}$$
where $Z=\frac{\bar{X}-\mu_0}{\sigma/ \sqrt{n}}\sim N(0,1)$ under $H_0$.
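As a quick numerical sanity check (my addition, not part of the original answer): under $H_0$ the known-$\sigma$ statistic $-2\ln\lambda = Z^2$ should be $\chi^2_1$-distributed, with mean $1$. A rough Monte Carlo sketch in Python, where the choices of $\mu_0$, $\sigma$, $n$, the number of trials, and the seed are all arbitrary:

```python
import math
import random

random.seed(0)

mu0, sigma, n = 5.0, 2.0, 30     # H0 true: data drawn with mean mu0
trials = 20000

stats = []
for _ in range(trials):
    xbar = sum(random.gauss(mu0, sigma) for _ in range(n)) / n
    # known-sigma case: -2 ln(lambda) = ((xbar - mu0)/(sigma/sqrt(n)))^2 = Z^2
    stats.append(((xbar - mu0) / (sigma / math.sqrt(n))) ** 2)

mean_stat = sum(stats) / trials   # chi^2_1 has mean 1
```

The empirical mean of the simulated statistics should land close to $1$, consistent with a $\chi^2_1$ distribution.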
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In a set of 5 bottles, 1 has a fracture. If you select a pair of bottles, the probability that the fractured bottle is chosen is? It's also mentioned that this question is an example of sampling without replacement.
My question is , by the general method of how I do these kind of problems , I would assume that there are two possibilities
*
*You choose a defective bottle first and then a non-defective one, which has probability
1/5 x 4/4 = 4/20
OR
*Choosing a non-defective one first and then a defective one, which has probability
4/5 x 1/4 = 4/20
Total probability is 8/20= 2/5 which doesn't make sense at all, since there is just one bottle to begin with
But the problem is I can't find a fault in my logic. Though I feel both options are redundant, aren't they two different ways of selecting the pair? Or is the fact that the phrase "one after the other" is absent a reason why this might be a wrong approach?
Thank you in advance
| Here is another viewpoint.
Suppose you don't know there is a flawed bottle. You choose two of the five bottles. You are then told that one of the five bottles contains a crack. It is equally likely to be any of the five bottles, so each bottle has a probability of $\frac{1}{5}$ of being the cracked one.
You hold two bottles, so the probability of either one of them being the cracked one is $\frac{1}{5}+\frac{1}{5}=\frac{2}{5}$. Note that you can add these two probabilities because they are mutually exclusive events - they cannot both be cracked as there is only one cracked bottle.
Edit: I see now that JMP already briefly mentioned this at the end of his answer.
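The value $\frac 25$ can also be confirmed by brute-force enumeration of all $\binom 52 = 10$ equally likely pairs — a small illustrative Python script (which bottle is labelled as cracked is an arbitrary choice):

```python
from fractions import Fraction
from itertools import combinations

bottles = range(5)
cracked = 0  # say bottle 0 is the cracked one

pairs = list(combinations(bottles, 2))         # 10 equally likely unordered pairs
hits = sum(1 for p in pairs if cracked in p)   # pairs containing the cracked bottle

prob = Fraction(hits, len(pairs))              # 4/10 = 2/5
```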
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Identity related to $\sum_{k=0}^{n}\frac{x^k}{\binom{n}{k}}$ How can it be shown that:
$$\sum_{k=0}^{n}\frac{x^k}{\binom{n}{k}}=\left(n+1\right)\left(\frac{x}{x+1}\right)^{n+1}\sum_{k=1}^{n+1}\frac{1+x^k}{\left(1+x\right)k}\left(\frac{1+x}{x}\right)^{k}$$
for $x \ne-1$
I tried Additive Forms of Reciprocal Pascal's Identity, but could not derive it.
| Your result is a kind of generalization of Newton's binomial identity with binomial coefficients replaced by their inverses.
You will find on page 2 of this reference (by Toufik Mansour, University of Haifa) the more general expression:
$$\sum_{k=0}^{n}\frac{a^kb^{n-k}}{\binom{n}{k}}=\frac{n+1}{(a+b)\left(\tfrac{1}{a}+\tfrac{1}{b}\right)^{n+1}}\sum_{k=1}^{n+1}\frac{(a^k+b^k)\left(\tfrac{1}{a}+\tfrac{1}{b}\right)^{k}}{k}\tag{1}$$
with its proof. Nice expression...
It suffices now to take $a=x$ and $b=1$...
I have to admit that the proof is long... and uses generating functions.
Remark : If in the following identity for Beta integrals (see here):
$$B(x,y)=\int_{t=0}^{t=1}t^{x-1}(1-t)^{y-1}dt=\frac{(x-1)!(y-1)!}{(x+y-1)!}\tag{2}$$
one takes $x-1=k$ and $y-1=n-k$, we deduce that :
$$\frac{1}{\binom{n}{k}}=(n+1)\int_{t=0}^{t=1}t^k(1-t)^{n-k}dt\tag{3}$$
(besides, (3) is mentioned in the referenced article).
It is likely that a (simpler) proof of your identity can be deduced from (3) by multiplying it by $x^k$ and summing from $k=0$ to $k=n$.
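The identity itself is easy to test numerically. A sketch in Python using exact rational arithmetic (the ranges of $n$ and the test values of $x$ are arbitrary choices of mine):

```python
from fractions import Fraction
from math import comb

def lhs(n, x):
    x = Fraction(x)
    return sum(x**k / comb(n, k) for k in range(n + 1))

def rhs(n, x):
    # (n+1) (x/(x+1))^{n+1} * sum_{k=1}^{n+1} (1+x^k)/((1+x)k) * ((1+x)/x)^k
    x = Fraction(x)
    s = sum((1 + x**k) / ((1 + x) * k) * ((1 + x) / x) ** k
            for k in range(1, n + 2))
    return (n + 1) * (x / (x + 1)) ** (n + 1) * s

checks = [(lhs(n, x), rhs(n, x))
          for n in range(1, 6)
          for x in (2, 3, Fraction(1, 2))]
```

Every pair agrees exactly, e.g. for $n=1$, $x=2$ both sides equal $3$.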
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3579866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Given $x_{n} \to x_{0}$ as $n \to \infty$, and $e^{x}=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}$, prove that $\lim_{n \to \infty}e^{x_{n}} = e^{x_{0}}$ Problem: Given a convergent sequence $x_{n} \to x_{0}$ as $n \to \infty$, and that e is defined as $e^{x}=\sum_{k=0}^{\infty}\frac{x^{k}}{k!}$, prove that $\lim_{n \to \infty}e^{x_{n}} = e^{x_{0}}$.
Now I know I could simply use limit rules, and say that $\lim_{n \to \infty}e^{x_{n}} = e^{\lim_{n \to \infty}x_{n}} = e^{x_{0}}$. However I would like (and the question suggests) to use that infinite series definition of $e^{x}$ to arrive at the answer.
So far I have $\lim_{n \to \infty}e^{x_{n}} = \lim_{n \to \infty}\sum_{k=0}^{\infty}\frac{x_{n}^k}{k!} =\sum_{k=0}^{\infty}\frac{x_{0}^k}{k!}$, but again without just distributing the limit inside the summation, I am stuck as to how to proceed from there and find the limit of that series. Any help would be appreciated! (It could be there is no way to do it without simply distributing the limit, I'm not sure)
| Suppose we can prove that $x_n\to 0\Rightarrow e^{x_n}\to 1.$ Then, if $x_n\to x_0$, $y_n:=x_n-x_0\to 0$, so $e^{y_n}=e^{x_n-x_0}\to 1$; since $e^{x_n}=e^{x_n-x_0}\,e^{x_0}$ (the identity $e^{a+b}=e^a e^b$ follows from the Cauchy product of the series), this implies that $e^{x_n}\to e^{x_0}$.
So, it suffices to prove the result for $x_0= 0.$ But this is easy: choose $N$ large enough so that $n>N\Rightarrow |x_n|<\epsilon<1.$ Then, for such $n$,
$$\left|\sum_{k=0}^\infty \frac{x_n^k}{k!} - 1\right|= \left|\sum_{k=1}^\infty \frac{x_n^k}{k!}\right|\le \sum_{k=1}^\infty \frac{\epsilon^k}{k!}\le\sum_{k=1}^\infty \epsilon^k=\frac{\epsilon}{1-\epsilon}$$
The result follows (since $\epsilon$ is arbitrary).
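The key estimate here, $\left|e^{x}-1\right|\le \frac{|x|}{1-|x|}$ for $|x|<1$, can be spot-checked numerically — a small Python sketch of my own (the truncation at 60 terms and the sample values of $x$ are arbitrary):

```python
import math

def series_exp(x, terms=60):
    # partial sum of sum_{k>=0} x^k / k!
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)
    return total

# the bound used above: |e^x - 1| <= |x| / (1 - |x|) for |x| < 1
checks = [abs(series_exp(x) - 1.0) <= abs(x) / (1.0 - abs(x)) + 1e-12
          for x in (0.5, 0.1, -0.3, 0.01, -0.9)]
```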
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving a function (with two variables) is continuous I am having difficulty solving this problem. First of all, I'm sorry if the problem isn't well written, but I am new to typing out math problems, so I hope it's at least understandable. I've tried solving this using polar coordinates and also by substituting $y=kx$ and $y=kx^2$, but it didn't work. The problem says: find the parameter $a$ so that the function is continuous. (I tried to translate it correctly into English.) I hope someone can help me solve this problem; I would be so grateful.
$$
f(x,y) =
\begin{cases}
\dfrac{5 - \sqrt{25-x^2-y^2}}{7 - \sqrt{49-x^2-y^2}} & (x,y)\neq (0,0) \\
\\
a & (x,y)=(0,0)
\end{cases}
$$
| In these problems with roots, a typical strategy is that of “rationalizing” the fraction:
If you write:
$$\frac{5-\sqrt{25-x^2-y^2}}{7-\sqrt{49-x^2-y^2}}\cdot \frac{5+\sqrt{25-x^2-y^2}}{5+\sqrt{25-x^2-y^2}}\cdot \frac{7+\sqrt{49-x^2-y^2}}{7+\sqrt{49-x^2-y^2}}= \frac{x^2+y^2}{x^2+y^2}\cdot\frac {7+\sqrt{49-x^2-y^2}} {5+\sqrt{25-x^2-y^2}}=\frac {7+\sqrt{49-x^2-y^2}} {5+\sqrt{25-x^2-y^2}}$$
So your limit is $\frac 75$!
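One can confirm the value $\frac 75$ numerically by approaching $(0,0)$ along a few paths — a quick Python check (the particular paths and step sizes are my own arbitrary choices):

```python
import math

def f(x, y):
    r2 = x * x + y * y
    return (5 - math.sqrt(25 - r2)) / (7 - math.sqrt(49 - r2))

# approach (0, 0) along several paths; every value should tend to 7/5 = 1.4
vals = [f(t, t) for t in (1e-2, 1e-3, 1e-4)] + [f(1e-3, 0.0), f(0.0, 1e-3)]
```

So the continuous extension requires $a = \frac 75$.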
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
I get a contradiction in the theory of free abelian groups. What am I doing wrong? Hi: The definition I'll use is this: Let $F$ be an abelian group and $X$ a subset of $F$. Then $F$ is a free abelian group on $X$ if for every abelian group $G$ and every function $f$ from $X$ to $G$ there is a homomorphism $\phi$ from $F$ to $G$ that extends $f$.
Let $G$ be a finite group and $X$ a subset of $G$. Let $F$ be the free abelian group on $X$. Then $F=\langle X\rangle$ and so $F\subseteq G$. That is, every finite group has an infinite subgroup. What am I doing wrong?
EDIT: It will be easier to make myself clear working with free groups. I'll quote from Derek Robinson, A Course in the Theory of Groups, 2nd ed.
From this, a free group is not only always free on a subset; additionally, that subset generates it. If $G$ is a group and $X$ is a subset, there will indeed exist a free group on $X$, but I am unable to show it will be generated by $X$ based on the above quote. Which is very natural, of course. Thanks for the posts. Honestly, none of the feedback so far sheds light on the paradox (a paradox for me, of course).
| You are using $\langle X \rangle$ to mean two different things, and conflating them:
*
*You are using it to mean the free abelian group on $X$.
*You are using it to mean the subgroup of $G$ generated by $X$.
These are not the same thing, but you assume that they are.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Unclear Answer Book on Calculus by Michael Spivak (3rd edition) Question 11-26. The question goes as follows
Suppose that $f'(x)\geq M>0$ $\forall x\in [0,1]$. Show that there is an interval of length $\frac{1}{4}$ on which $|f|\geq M/4$.
and the answer book states
Note that $f$ is increasing. If $f(1/2)\geq 0$, then $f(3/4)\geq M/4$, so certainly $f\geq M/4$ on the interval $[3/4,1]$. On the other hand, if $f(1/2)\leq 0$, then $f(1/4)\leq -M/4$, so $f\leq -M/4$ on the interval $[0,1/4]$.
How does $f(1/2)\geq 0\Rightarrow f(3/4)\geq M/4$ given that $f$ is increasing on $[0,1]$? (the same question goes for $f(1/2)\leq 0$).
| Essentially, you can "integrate" the expression $f'(x)\geq M$ to deduce for $x>a$ $$f(x) \geq M \cdot (x-a) + f(a) \space \space [*].$$ To prove this, use the mean value theorem: assuming the usual conditions are met if $x>a$ then $\exists c $ with $a \leq c \leq x$ such that $$\frac {f(x)-f(a)}{x-a} = f'(c)$$ which rearranges to $$f(x)=f'(c) \cdot (x-a) + f(a).$$ Given $f'(c) \geq M$ and $x-a > 0$, the result $[*]$ follows.
Applying $[*]$ to the question, we have:
using $a = 1/2$, for $x \geq 3/4$, $x-a \geq 1/4$ so $$f(x) \geq M \cdot \frac 14 + f(1/2)$$ or using $x=1/2$, for $a \leq 1/4$, $x-a\geq 1/4$ so $$f(1/2) \geq M \cdot \frac 14 + f(a)$$ From these, we can deduce respectively that
if $f(1/2) \geq 0$ then $f(x) \geq \frac M4$ for $x \geq 3/4$
if $f(1/2) < 0$ then $f(a) < - \frac M4$ for $a \leq 1/4$.
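The conclusion can be sanity-checked on concrete functions with $f'\ge M$ on $[0,1]$ — a small Python sketch (the sample functions and the grid of $1001$ points are my own choices):

```python
def satisfies_claim(f, M):
    # check |f| >= M/4 on [3/4, 1] or on [0, 1/4], sampling a grid of points
    pts = [i / 1000 for i in range(1001)]
    left = all(abs(f(x)) >= M / 4 - 1e-12 for x in pts if x <= 0.25)
    right = all(abs(f(x)) >= M / 4 - 1e-12 for x in pts if x >= 0.75)
    return left or right

# sample functions with f'(x) >= M everywhere on [0, 1]
results = [
    satisfies_claim(lambda x: x - 0.5, 1),          # f' = 1      >= 1
    satisfies_claim(lambda x: 2 * x - 0.3, 2),      # f' = 2      >= 2
    satisfies_claim(lambda x: x * x + x - 0.9, 1),  # f' = 2x + 1 >= 1
]
```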
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Permutation representation contains trivial representation Let $G$ be a finite group and $H \vartriangleleft G$ a normal subgroup.
Let $(V,\rho)$ be the permutation representation (over $\mathbb{C})$ of $G$ acting on the set $G/H$ (we think of the quotient group as a set) in the natural way, i.e. for $s,t \in G$: $s \cdot (t\ (\textrm{mod}\ H)) = st\ (\textrm{mod}\ H)$.
Show that $(V,\rho)$ contains the trivial representation of $G$ with multiplicity 1.
My idea was to show that $G$ acts doubly transitively on $G/H$ and then apply this result.
But $G$ in particular need not act doubly transitively on itself, so we cannot apply this result.
| Another way to approach this is with Frobenius Reciprocity and induction of characters. The permutation character equals $(1_H)^G$ and $[(1_H)^G, 1_G]=[1_H,1_H]=1$.
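Equivalently, by Burnside's orbit-counting applied to the permutation character, the multiplicity of the trivial representation equals the number of orbits, which is $1$ since the action on $G/H$ is transitive. A tiny check for $G=S_3$, $H=A_3$ (my own choice of example), sketched in Python:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def parity(p):
    return sum(1 for i in range(len(p))
                 for j in range(i + 1, len(p)) if p[i] > p[j]) % 2

G = list(permutations(range(3)))          # S3
H = [p for p in G if parity(p) == 0]      # A3 (normal in S3)

cosets = {frozenset(compose(g, h) for h in H) for g in G}   # G/H as a set

def fixed(g):
    # number of cosets fixed by left multiplication by g
    return sum(1 for C in cosets
               if frozenset(compose(g, t) for t in C) == C)

# Burnside / Frobenius: multiplicity of the trivial rep = average fixed-point count
multiplicity = sum(fixed(g) for g in G) / len(G)
```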
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
the upper Riemann integral is equal to the upper Riemann sum (Analysis 1 by Tao) Proposition 11.3.12. (in Analysis 1 by Tao) Let $f: I \to \mathbb{R}$ be a bounded function on a bounded interval $I$. Then
$$\overline{\int}_If = \inf\{U(f, P): \text{$P$ is a partition of $I$}\}.$$
I know from the previous exercise that
$$p.c. \int_I g \ge U(f,P),$$
where $g$ is piecewise constant function and majorizes $f$ ($p.c. \int$ denotes the piecewise constant integral).
In this text, $\overline{\int}_If$ is defined as $\inf\{p.c.\int_I g: \text{$g$ is a piecewise constant function on $I$ that majorizes $f$}\}$, and $U(f, P) = \sum_{J \in P:\, J\not= \emptyset} \left(\sup_{x \in J}f(x)\right) |J|$.
From above, we can easily get $\overline{\int}_If \ge \inf\{U(f, P): \text{$P$ is a partition of $I$}\}.$
But how can we show the opposite inequality?
| Let $P$ be a partition of $I$ and let $U(f, P) = \sum_{J \in P; J \neq \emptyset} \: (\sup_{x \in J} f(x)) \cdot|J|$. Define a piecewise constant function $g$ with respect to the same partition $P$: for each non-empty $J \in P$, set $g(t) = c_J := \sup_{x \in J} f(x)$ for all $t \in J$. Clearly, $g$ majorizes $f$ and so $$\overline\int_{I} f \leq p.c. \: \int_{I} \: g = \sum_{J \in P; J \neq \emptyset} \: c_{J} \cdot |J| = \sum_{J \in P; J \neq \emptyset} \: (\sup_{x \in J} f(x)) \cdot|J| = U(f, P)$$
Therefore, $\overline\int_{I} f$ is a lower bound for $\{U(f, \: P) : P$ is a partition of $I\}$, which means $\overline\int_{I} f \leq$ inf$\{U(f, \: P) : P$ is a partition of $I\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why does an orthogonal matrix have to be square? I understand intuitively why this has to be the case (otherwise you could lose a dimension / gain a dimension which changes the length), but what is the formal proof that an orthogonal matrix has to be square?
| Just to sum up the comments, your book says a linear transformation $T:\mathbb R^n\to\mathbb R^n$ is orthogonal if it preserves the length of vectors. The matrix of a transformation from a vector space to a vector space of the same dimension is necessarily square, so this is baked into the definition of an orthogonal matrix.
If the book said "A linear transformation $T:\mathbb R^n\to\mathbb R^n$ is said to be blah if it blahs", you'd still know that its matrix is square.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What is the inverse of the *divergence* operator? The inverse of differentiation is integration.
But what is the inverse of the divergence operator? Does it exist?
| The answer by Keith is close, except note that the divergence operator is not invertible, just like the derivative. Its "inverse" would also have some degrees of freedom.
In particular, when inverting the derivative $F'=f$, we have $F(y)=\int_{x=0}^{y} f(x) dx +C$.
If instead we want to solve $\nabla \cdot \boldsymbol{F}=f$, we have $\boldsymbol{F}(r)=\boldsymbol{F}_0(f)(r)+\boldsymbol{C}(r)$, where $\boldsymbol{F}_0$ is an operator that takes $f$ and applies Coulomb's integral.
$\boldsymbol{C}$ can be any vector field with divergence identically $0$. Written in unconstrained form, $\boldsymbol{C}=\nabla \times \boldsymbol{J}$ for any vector field $\boldsymbol{J}(r)$.
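That adding $\nabla\times\boldsymbol J$ does not change the divergence rests on the identity $\nabla\cdot(\nabla\times\boldsymbol J)=0$ for smooth $\boldsymbol J$, which can be checked numerically. A finite-difference sketch in Python (the field $\boldsymbol J$, the evaluation point, and the step size are all arbitrary choices of mine):

```python
import math

def J(x, y, z):
    # an arbitrary smooth vector field, chosen just for this check
    return (y * z, x * x + z, math.sin(x * y))

def curl_J(x, y, z, h=1e-3):
    # central-difference approximation of (nabla x J) at (x, y, z)
    dJ = lambda i, dx, dy, dz: (J(x + dx, y + dy, z + dz)[i]
                                - J(x - dx, y - dy, z - dz)[i]) / (2 * h)
    return (dJ(2, 0, h, 0) - dJ(1, 0, 0, h),   # dJz/dy - dJy/dz
            dJ(0, 0, 0, h) - dJ(2, h, 0, 0),   # dJx/dz - dJz/dx
            dJ(1, h, 0, 0) - dJ(0, 0, h, 0))   # dJy/dx - dJx/dy

def div_curl_J(x, y, z, h=1e-3):
    # central-difference divergence of the (numerically computed) curl
    return ((curl_J(x + h, y, z)[0] - curl_J(x - h, y, z)[0])
            + (curl_J(x, y + h, z)[1] - curl_J(x, y - h, z)[1])
            + (curl_J(x, y, z + h)[2] - curl_J(x, y, z - h)[2])) / (2 * h)

residual = div_curl_J(0.3, -0.7, 1.1)   # should be ~0 up to discretization error
```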
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3580926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
First-order Logic with infinite conjunction I have an infinite set of variables X and I want to state that the property that there is a unique variable in X with value 2.
For a finite set, I would write the first-order logic formula:
$$
(x_0 = 2 \wedge x_1 \neq 2 \wedge ... \wedge x_n\neq2)
\vee
(x_0 \neq 2 \wedge x_1 = 2 \wedge ... \wedge x_n\neq2)
\vee
...
$$
For the infinite set X, I could do the same but using infinite conjunction/disjunction.
Alternatively, using second-order logic, I could (if I understand correctly) quantify over X stating.
$$
\exists x\in X. (x=2 \wedge (\forall y\in X. (x=y \vee y\neq 2)))
$$
*
*Is there a name for first-order logic with infinite conjunction/disjunction?
*Does the example demonstrate that this logic is then more expressive than first-order logic with finite/binary conjunction/disjunction?
I am clearly not a logician so please don't overwhelm me too much :D
| The name you are looking for is infinitary logic.
And yes, this is a simple example to show that first order logic cannot do everything.
To makes things a bit more formal: in logic "$X$" is a set of "constant symbols", or "0-ary function symbol", each element of $X$ is an element of your language, and not some set/number.
If you have a set $Y$ of elements of the model (for example, $Y$ is a set of integers), then FOL is enough: $∃y∈Y(y=2)$. But because $X$ is a set of elements of the language, and not of the model you are working in, you cannot talk about $X$ directly in FOL.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Joint Probability Density Function with Function Bounds I have a question about joint CDFs. My understanding was given a joint PDF, the joint CDF was the integral of the joint PDF from -inf to +inf for all the random variables defined. This joint CDF should be equal to 1. However, in the question below I see a contradiction. When I integrate, the joint CDF equals 1. However, calculating the area under the curve just looking at the bounds gives me an answer of 1/4. I am not sure why I am getting different answers. Additionally, if the area under the curve is equal to 1/4 does this make the joint PDF I was given not a valid PDF?
Joint PDF problem described above:
| You just missed the factor of $4$ in the second calculation. The area of the region is not the value of the integral of $f_{x,y}$ since the value of $f_{x,y}$ is $4$ in the region. You have to multiply the area by $4$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the conditional probability that the second card is a Spade given that the second-to-last card is a Spade? What is the conditional probability that the second card is a Spade given that the second-to-last card is a Spade? Cards are dealt without replacement.
I know conditional probability is $$P(A \mid B) = \frac{P(A \cap B)}{P(B)}$$ My question is how do we find $P(B)$ and $P(A \cap B)$. Finding $P(B)$ means finding the probability that the 51st card dealt is a spade. Would $$P(B) = \frac{\binom{51}{13}}{\binom{52}{13}} \cdot \frac{13}{51}$$ and $$P(A \cap B) = \left(\frac{13}{52} + \frac{12}{51} + \frac{1}{2}\right)?$$
Any hint or advice will help. Thanks.
| Let $A=$ {2nd card is a spade} and $B=$ {penultimate card is a spade}.
All the condition tells you is that for each of the other 51 positions, you have one fewer spade that it could be. So $P(A)=P(B)=\frac{13}{52}=\frac{1}{4}$, but $P(A|B)=P(B|A)=\frac{12}{51}=\frac{4}{17}$.
From there, it's easy to get $P(A \cap B)$ using the formula $P(A \cap B) = P(A|B) \cdot P(B)$.
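Working this out with exact fractions — a small Python sketch of the computation described above:

```python
from fractions import Fraction

# By symmetry, (card at position 2, card at position 51) is a uniformly
# random ordered pair of distinct cards from the 52-card deck.
P_B = Fraction(13, 52)                            # P(position 51 is a spade)
P_A_and_B = Fraction(13, 52) * Fraction(12, 51)   # both positions are spades
P_A_given_B = P_A_and_B / P_B                     # = P(A|B) = 12/51
```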
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show a sequence $(x_n)^{\infty}_{n=1}$ converges to a point of S if and only if it is eventually constant Suppose a set $S$ is given the discrete metric $d_0$. Show a sequence $(x_n)^{\infty}_{n=1}$ converges to a point of $S$ if and only if it is eventually constant; there exists $N \in \mathbb{N}$ such that $x_n=x_N$ for all $n \geq N$.
pf: If $(x_n)_{n\in \Bbb N}$ converges to an element $x \in M$, there exists a positive integer $k$ such that $d_0(x_n, x) < 1/2$ for all $n \ge k$. How to show $x_n=x$ for all $n \geq k$?
Conversely, if $(x_n)_{n\in \Bbb N}$ is eventually constant, there exists an $x \in M$ and a positive integer $k$ such that $x_n = x$ for all $n \ge k$. Thus, for any $\varepsilon > 0$, $d_0(x_n, x) = 0 < \varepsilon$ for all $n \ge k$. Consequently, $(x_n)_{n\in \Bbb N}$ converges to $x$.
| The discrete metric is defined by
$$d(x, y) = \begin{cases} 0 & \text{if } x = y \\ 1 & \text{otherwise.}\end{cases}$$
So, if $d(x, y) < 1/2$, then $d(x, y) \neq 1$, and hence $x = y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partial sum $\{\frac{s_m}{s_n}\}$ converge to $1$ implies series converge? Let $s_n$ and $s_m$ be the partial sum of the series $\sum\limits_{k=0}^\infty a_k$ with $m<n$ and $a_k > 0$ for all k. If $\{\frac{s_m}{s_n}\}$ converge to $1$, does it imply that the series converges?
| If $\frac {s_m} {s_n} \to 1$ as $n,m \to \infty$ then $\ln s_m -\ln s_n \to 0$ which means $(\ln s_n)$ is a Cauchy sequence. Hence it converges to some number $c$. It follows that $s_n \to e^{c}$ so the series is convergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two Polynomials with a Common Quadratic Factor Let $f(x)=x^3-ax^2-bx-3a$ and $g(x)=x^3+(a-2)x^2-bx-3b$. If they have a common quadratic factor, then find the value of $a$ and $b$.
My Attempt
Let $h(x)$ be the common quadratic factor. Then $h(x)$ also the factor of $g(x)-f(x)$, that is
$$(2a-2)x^2+(3a-3b)$$
Since $h(x)$ a quadratic factor, then
$$(2a-2)x^2+(3a-3b)=k \cdot h(x)$$
Where $k$ is a constant.
But, i don't know how to continue, because there are many possible values for $k$.
Any advice?
| Since $h(x)$ divides $(2a-2)x^2+(3a-3b)$ and both are quadratics, the (monic) common factor must look like $h(x) = x^2+\dfrac 32 \cdot \dfrac{a-b}{a-1}$ (assuming $a \neq 1$).
This is ugly. So let's try and avoid the direction that this is taking us. We can assume that $h(x) = x^2 - \alpha$ where $\alpha = -\dfrac 32 \cdot \dfrac{a-b}{a-1}$ and, for some $u$ and for some $v$
\begin{align}
f(x) &= (x-u)(x^2-\alpha) \\
x^3-ax^2-bx-3a &= x^3 - ux^2 - \alpha x +\alpha u \\
u &= a \\
\alpha &= b \\
\alpha u &= -3a
\end{align}
\begin{align}
g(x) &= (x-v)(x^2-\alpha) \\
x^3+(a-2)x^2-bx-3b &= x^3 - vx^2 - \alpha x +\alpha v \\
v &= 2-a \\
\alpha &= b \\
\alpha v &= -3b
\end{align}
Since $\alpha = b$, then $\alpha v = -3b \implies b v = -3b \implies b(v+3)=0$.
So, either $b=0$ or $v=-3$
If $b=0$, then we must also have $a=0$.
In which case,
\begin{align}
f(x) &=x^3 \\
g(x) &=x^3-2x^2 \\
h(x) &=x^2
\end{align}
If $v=-3$, then $a=5$, $u=5$, $\alpha = -3$, and $b=-3$.
In which case
\begin{align}
f(x) &= x^3-5x^2+3x-15 \\
g(x) &= x^3+3x^2+3x+9 \\
h(x) &= x^2+3
\end{align}
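Both cases can be verified by polynomial long division — a small Python sketch (the helper `divmod_poly` is my own, not from the original post):

```python
def divmod_poly(num, den):
    # long division of polynomials given as coefficient lists, highest degree first
    num, q = list(num), []
    while len(num) >= len(den):
        c = num[0] / den[0]
        q.append(c)
        for i, d in enumerate(den):
            num[i] -= c * d
        num.pop(0)
    return q, num   # (quotient, remainder)

# case a = 5, b = -3: f and g should both be divisible by h = x^2 + 3
f = [1, -5, 3, -15]   # x^3 - 5x^2 + 3x - 15
g = [1, 3, 3, 9]      # x^3 + 3x^2 + 3x + 9
h = [1, 0, 3]         # x^2 + 3

qf, rf = divmod_poly(f, h)   # quotient x - 5, remainder 0
qg, rg = divmod_poly(g, h)   # quotient x + 3, remainder 0
```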
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3581830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Surprising fact about a certain number-theoretic function Ante suggested the following function :
For natural number $n$ we can observe the $n$ remainders $b_1,...,b_n$ by writing $n$ as $n=a_k \cdot k+b_k$ for $1 \leq k \leq n$
Because of the familiar division-with-remainder-theorem we have $0 \leq b_k <n$
Now we can study the sum $$r(n)=\sum_{k=1}^{\lfloor \frac{n-1}{2} \rfloor}b_k$$
After playing around with some values with support of Haran and Ante, we noticed that $$r(b)=r(b+1)$$ seems to hold if and only if $b+1$ is a power of $2$ or $3$ , including $2$ and $3$
There is no counterexample up to $10^4$:
? r={(p)->su=0;for(j=2,(p-1)/2,su=su+lift(Mod(p,j)));su}
%94 = (p)->su=0;for(j=2,(p-1)/2,su=su+lift(Mod(p,j)));su
? for(j=1,10^4,if(r(j)==r(j+1),print(j," ",factor(j+1))))
1 Mat([2, 1])
2 Mat([3, 1])
3 Mat([2, 2])
7 Mat([2, 3])
8 Mat([3, 2])
15 Mat([2, 4])
26 Mat([3, 3])
31 Mat([2, 5])
63 Mat([2, 6])
80 Mat([3, 4])
127 Mat([2, 7])
242 Mat([3, 5])
255 Mat([2, 8])
511 Mat([2, 9])
728 Mat([3, 6])
1023 Mat([2, 10])
2047 Mat([2, 11])
2186 Mat([3, 7])
4095 Mat([2, 12])
6560 Mat([3, 8])
8191 Mat([2, 13])
?
Is this in fact true, and if yes, why ?
| We have:
$$r(b)=r(b+1)$$
$$\sum_{k=1}^{\lfloor \frac{b-1}{2} \rfloor} (b \bmod{k}) =\sum_{k=1}^{\lfloor \frac{b}{2} \rfloor} ((b+1) \bmod{k}) $$
since $n \equiv b_k \pmod{k}$. Now, we take two cases:
Case $1$ : When $b$ is odd
We have:
$$\sum_{k=1}^{\frac{b-1}{2}} (b \bmod{k}) =\sum_{k=1}^{\frac{b-1}{2}} ((b+1) \bmod{k})$$
$$\sum_{k=1}^{\frac{b-1}{2}} \bigg((b \bmod{k})-((b+1) \bmod{k})\bigg)=0$$
We can see that:
$$ (b \bmod{k})-((b+1) \bmod{k}) =
\begin{cases}
-1 & \text{if $k \nmid (b+1)$} \\
k-1 & \text{if $k \mid (b+1)$}
\end{cases}$$
Essentially, we are to substitute $-1$ for all the values $1 \leqslant k \leqslant \frac{b-1}{2}$ and add $k$ when it is a divisor of $b+1$. Here $k$ ranges over all the divisors of $b+1$ except $b+1$ and $\frac{b+1}{2}$, since $k \leqslant \frac{b-1}{2}$. We have:
$$\sum_{k=1}^{\frac{b-1}{2}} \bigg((b \bmod{k})-((b+1) \bmod{k})\bigg)=\bigg( \sum_{k=1}^{\frac{b-1}{2}} (-1) \bigg) + \sigma(b+1)-(b+1)-\bigg(\frac{b+1}{2}\bigg)$$
$$ \implies -\bigg(\frac{b-1}{2}\bigg)+\sigma(b+1)-(b+1)-\bigg(\frac{b+1}{2}\bigg)=0 \implies \sigma(b+1)=2b+1$$
Since $b+1$ is even, we have $b+1=x$ for all even $x$ satisfying:
$$\sigma(x)=2x-1$$
Clearly, $x=2^k$ works for all non-negative integers $k$. Such $x$ are known as 'almost perfect numbers'. It is unknown whether powers of $2$ are the only such solutions. For your claim to be true, we need all even almost perfect numbers to be powers of $2$.
Case $2$ : When $b$ is even
This works similarly:
$$\sum_{k=1}^{\frac{b}{2}-1} (b \bmod{k}) =\sum_{k=1}^{\frac{b}{2}} ((b+1) \bmod{k})$$
We can easily see that since $\frac{b}{2} \mid b$, we have $\sum_{k=1}^{\frac{b}{2}-1} (b \bmod{k})=\sum_{k=1}^{\frac{b}{2}} (b \bmod{k})$. Now, we have:
$$\sum_{k=1}^{\frac{b}{2}} (b \bmod{k}) =\sum_{k=1}^{\frac{b}{2}} ((b+1) \bmod{k})$$
$$\sum_{k=1}^{\frac{b}{2}} \bigg((b \bmod{k})-((b+1) \bmod{k})\bigg)=0$$
Using the same logic as in the first case (but we are only to exclude the divisor $b+1$):
$$\sum_{k=1}^{\frac{b}{2}} \bigg((b \bmod{k})-((b+1) \bmod{k})\bigg) = \bigg( \sum_{k=1}^{\frac{b}{2}} (-1) \bigg)+\sigma(b+1)-(b+1)=0$$
$$\sigma(b+1)=\frac{3b+2}{2}$$
We have $b+1=x$ for all odd $x$ satisfying:
$$\sigma(x)=\frac{3x-1}{2}$$
It is clear that $x=3^k$ works for all powers of $3$. However, I doubt it is possible to prove whether these are the only odd solutions, with elementary methods. I wasn't able to find any literature on this subject.
Link:
Almost Perfect Numbers : https://mathworld.wolfram.com/AlmostPerfectNumber.html
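The observation is easy to re-check in Python (a translation of the PARI/GP experiment above, with the smaller bound $3000$ chosen for speed):

```python
def r(n):
    # sum of remainders n mod k for 1 <= k <= floor((n-1)/2)
    return sum(n % k for k in range(1, (n - 1) // 2 + 1))

LIMIT = 3000
powers = {2**i for i in range(1, 13)} | {3**i for i in range(1, 9)}

hits = [b for b in range(1, LIMIT + 1) if r(b) == r(b + 1)]
expected = sorted(p - 1 for p in powers if p - 1 <= LIMIT)
```

The list of `hits` coincides exactly with $\{2^i-1\}\cup\{3^i-1\}$ up to the bound, matching the conjecture.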
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3582249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Bound on finite dimensional space Let $V$ be a finite dimensional normed space over $\mathbb{R}$, with norm $||.||$
Show that there exists $C>0$ such that for all $x\in V$, $\sum_{i=1}^n|x_i|\leq C||x||$.
My attempt:
Suppose $dimV=n$. Let $\{$ $e_1,e_2...,e_n$ $\}$ be a basis for $V$. Consider the unit ball,
$K=\{$ $x\in \mathbb{R}^n$ $:$ $||x||_1=1$ $\}$, where $||.||_1$ is the sum of the absolute values of the coordinates. Observe, by Heine–Borel and equivalence of norms, $K$ is compact.
Define $f:\mathbb{R}^n \rightarrow \mathbb{R}$ by $f(x_1,x_2....,x_n) =||x_1e_1+....+x_ne_n||$
$f$ is indeed Lipschitz, and thus continuous. Restricting the domain to $K$ yields a continuous map defined on a compact space. Hence $f|_K$ attains its minimum.
So, there exists $k \in K$ such that $\forall y\in K$, $f(k)\leq f(y)$.
i.e. $||k_1e_1+...+k_ne_n||\leq ||y_1e_1+...+y_ne_n ||$ for all $(y_1,y_2...,y_n)$ satisfying $\sum_{i=1}^n|y_i|=1$.
What should I do next?
| In the following by $x$, I mean $\sum_k x_k e_k$.
Note that $\|x\| \le \sum_k |x_k| \|e_k\| \le K_1 \|x\|_1$ where $K_1 = \max_k \|e_k\|$.
This is the 'easy' direction.
In particular, $\|\cdot\|$ is continuous with respect to $\|\cdot\|_1$.
Let $K_2 = \min_{\|x\|_1 = 1} \|x\|$. Since the $\| \cdot\|_1$ sphere is compact the $\min$ is
attained and since $\|\cdot\|$ is a norm, $K_2 >0$. Since $\| {x \over \|x\|_1} \|\ge K_2$, it follows that $\|x\| \ge K_2 \|x\|_1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3582359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
An example to show that convergence of Cesaro sum to $0$ does not imply the original sequence converges to $0$. I'd like construct a non-negative sequence $\{a_{k}\}_{k=0}^{\infty}$ with $a_{0}=0$ such that the Cesaro sum $\frac{1}{n}\sum_{k=0}^{n-1}a_{k}\longrightarrow 0$ but $a_{n}$ does not converge to $0$.
I have a really bad attempt:
Define $a_{0}=a_{1}=0$, and $a_{k}:=\log(\log(k))$ for $k\geq 2$; then $a_{k}\longrightarrow\infty$. But $$\dfrac{1}{n}\sum_{k=0}^{n-1}a_{k}=\dfrac{\log\Big(\prod_{k=2}^{n-1}\log(k)\Big)}{n}$$ and WolframAlpha told me that this sequence converges to $0$ as $n\longrightarrow\infty$.
However, I have no idea about how to show this convergence to $0$. Also, I wish to have a sequence as simple as possible, since at some stage I need too how $f(k):=a_{k}$ is positive semi-definite.
Is there any simpler example?
Thank you!
| How about $a_k=(-1)^k$: the Cesàro sums tend to $0$, but $a_k$ does not converge. If you need the sequence to be non-negative with $a_0=0$ (as the question requires), take instead $a_k=1$ when $k\geq 1$ is a perfect square and $a_k=0$ otherwise: then $\frac1n\sum_{k=0}^{n-1}a_k=\frac{\lfloor\sqrt{n-1}\rfloor}{n}\longrightarrow 0$, yet $a_k$ does not converge to $0$.
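A non-negative example meeting the question's constraints is the indicator of the positive perfect squares — a quick Python check (the particular cut-offs are arbitrary choices of mine):

```python
import math

def a(k):
    # 1 exactly at the positive perfect squares, 0 otherwise (so a_0 = 0)
    return 1 if k >= 1 and math.isqrt(k) ** 2 == k else 0

def cesaro_mean(n):
    return sum(a(k) for k in range(n)) / n

means = [cesaro_mean(n) for n in (10, 1000, 100000)]             # tends to 0
tail_hits = any(a(k) == 1 for k in range(10**5, 10**5 + 1000))   # a_k keeps hitting 1
```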
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3582494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is this data skewed or symmetrical? So I have a piece of data, however, I am having a disagreement with others whether it is symmetric or skewed.
The mean of the data is 430, and the median is 433.
The data would be skewed if the mean > median, or mean < median. However the data would be symmetric if mean ≈ median.
Because the data looks symmetrical, is it skewed due to the difference between mean and median, or would the two values be considered close enough for the data to be at least “approximately symmetrical”?
Attached is a sketch of the data I am working with.
histogram
| The data itself is definitely skewed, by the definition you give, albeit only slightly.
However if you introduce the idea that the data graphed is only a sample from a larger population, and ask whether the sample indicates the population as a whole is skewed, this is a different question.
The size of any sample is divisible by the smallest difference in height between two bars.
The fact the bars are of common heights suggests this is probably a small sample, and the shape is sufficiently close to symmetric for such a small sample that the hypothesis that the parent population is not skewed is a reasonable one.
However, the very slight discrepancy in heights between the bars to the left and right of the centre would indicate this is a very large sample size. For such a large sample, the hypothesis that the population is not skewed is much weaker, because large samples more closely approximate their parent populations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3582615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $\sum (a_n)^2$ converges and $\sum (b_n)^2$ converges, does $\sum (a_n+b_n)/n$ converge? Could someone help me to solve this or at least give me a hint? I've tried a few criterions and still can't really prove this, and I don't know what should I try. Any help would be appreciated
| Applying Cauchy-Schwarz twice, you get
\begin{align*}
\Big(\sum_n \frac{a_n+b_n}n\Big)^2 &\leqslant \Big(\sum_n {a_n}^2+2\sum_{n}a_nb_n+\sum_n{b_n}^2\Big)\sum_n\frac 1{n^2}\\[5pt]
&\leqslant\Big(\sum_n {a_n}^2+2\sqrt{\sum_n{a_n}^2\sum_n{b_n}^2}+\sum_n{b_n}^2\Big)\sum_n\frac 1{n^2},
\end{align*}
where each sum clearly converges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If 7 dice are thrown simultaneously, what is the probability that all six digits appear on the upper faces? I've approached the problem the following way:
Out of the 7 dice, I select any 6 which will have distinct numbers : 7C6.
In the 6 dice, there can be 6! ways in which distinct numbers appear.
And lastly, the last dice will have 6 possible ways in which it can show a number.
So the required answer should be : 7C6 * 6! * 6/(6^7) which on simplifying becomes : 70/(6^3 * 3).
However, the answer given is 35/(6^3 * 3).
Where exactly am I going wrong?
| You probably noticed that your answer differs from the correct answer by a factor 2, so apparently you count everything twice.
Suppose your dice are labeled A, B, C, D, E, F, G and you throw:
A:1
B:2
C: 3
D:4
E: 5
F: 6
G: 1
Then you count this throw twice: one time with ABCDEF as the 'special' dice showing 6 different figures and G as the redundant die, and once with BCDEFG as the special dice showing 6 different figures and A as the redundant die.
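The double counting can also be confirmed exactly by brute force — a short sketch that enumerates all $6^7 = 279{,}936$ outcomes and compares with both candidate answers:

```python
from fractions import Fraction
from itertools import product
from math import comb, factorial

# Count outcomes of 7 dice in which all six faces appear.
favourable = sum(1 for roll in product(range(6), repeat=7) if len(set(roll)) == 6)
p = Fraction(favourable, 6 ** 7)

assert favourable == comb(7, 2) * factorial(6)          # 15120 surjective rolls
assert p == Fraction(35, 6 ** 3 * 3)                    # the book's answer, 35/648
assert comb(7, 6) * factorial(6) * 6 == 2 * favourable  # the OP's count is exactly double
```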
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Sections of the exceptional divisor of a blowup Let $C$ be a smooth curve in a smooth threefold $X$. Denote by $Y$ the blowup of $X$ along $C$ with exceptional divisor $E$. Then $E \rightarrow C$ is a $\mathbb{P}^1$-bundle over $C$.
Is it true that sections of $E \rightarrow C$ correspond to smooth surfaces $S \subset X$ containing $C$?
To be more specific: If $S \subset X$ is a smooth surface containing $C$ then the strict transform of $S$ intersects $E$ along a section $\sigma$. Is the converse true? That is, given $\sigma$ a section of $E \rightarrow C$ can we always find a smooth surface $S$ such that $\sigma = \tilde S \cap E$?
| No. For example, let $C \subset \mathbb{P}^3$ be a twisted cubic curve. Then
$$
N_{C/X} \cong \mathcal{O}_C(5) \oplus \mathcal{O}_C(5),
$$
and a surface $S$ smooth along $C$ corresponds to a section of the sheaf $I_C(d)$ such that the composition
$$
\mathcal{O}_{\mathbb{P}^3} \to I_C(d) \to I_C/I_C^2 \otimes \mathcal{O}_{\mathbb{P}^3}(d) = N_{C/X}^\vee \otimes \mathcal{O}_C(3d) = \mathcal{O}_C(3d-5) \oplus \mathcal{O}_C(3d-5).
$$
does not vanish at any point of $C$.
If
$$
\phi \colon N_{C/X} = \mathcal{O}_C(5) \oplus \mathcal{O}_C(5) \to \mathcal{O}_C(3d)
$$
is the dual map, the corresponding section $\sigma$ is determined by $\mathrm{Ker}(\phi)$. In particular,
$$
\deg(\mathrm{Ker}) = 10 - 3d \equiv 1 \bmod 3.
$$
So, if you take, for instance, a section of $N_{C/X}$ that corresponds to an embedding $\mathcal{O}_C(5) \to N_{C/X}$, it does not extend to a smooth surface (not even to a surface smooth along $C$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If you equip two isomorphic groups with homeomorphic topologies, are they isomorphic as topological groups? I'm wondering if anyone has any insight regarding the truth of the above statement. Intuitively, if I have two topological groups whose algebraic group structures are the same up to relabelling, and topological spaces that behave the same, it seems as though as topological groups they would have the same structure and behaviours, up to relabelling of course. Or is there an obvious counterexample that I'm missing?
I've seen a similar question posted with a counterexample, however I believe that the counterexample proposed did not accurately satisfy the hypothesis.
| If $G$ is a finite topological group and $N$ is the connected component of the identity, then $N$ is normal and the cosets of $N$ form a basis for the topology. Conversely, one can create any finite topological group given a choice of finite group $G$ and normal subgroup $N$ to be the connected component. (See this question for the result.)
Thus, if we can find a finite group $G$ with two normal subgroups $N_1$ and $N_2$ that are not related by any automorphism but are nonetheless the same size, we can have $(G,\tau_1)$ and $(G,\tau_2)$ homeomorphic, but a continuous isomorphism would have to preserve identity's connected component, i.e. send $N_1$ to $N_2$, which is impossible, and we would have a counterexample.
For this, we can pick $N_1$ and $N_2$ to simply be nonisomorphic. For instance, if $G=\mathbb{Z}_2\times\mathbb{Z}_4$ then we can use the subgroups $N_1=\mathbb{Z}_2\times\mathbb{Z}_2$ and $N_2=\mathbb{Z}_4$.
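A small computational check of the last step (a sketch; elements of $\mathbb Z_2\times\mathbb Z_4$ are written as pairs): both candidate subgroups have order $4$, but only $N_2$ contains an element of order $4$, so they cannot be isomorphic.

```python
def add(x, y):
    # group operation in Z_2 x Z_4
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 4)

def order(x):
    e, k, acc = (0, 0), 1, x
    while acc != e:
        acc = add(acc, x)
        k += 1
    return k

N1 = {(0, 0), (1, 0), (0, 2), (1, 2)}            # ~ Z_2 x Z_2
N2 = {(0, 0), (0, 1), (0, 2), (0, 3)}            # ~ Z_4

def is_subgroup(H):
    # a finite subset closed under the operation is a subgroup
    return all(add(x, y) in H for x in H for y in H)

assert is_subgroup(N1) and is_subgroup(N2)
assert len(N1) == len(N2) == 4
assert max(order(x) for x in N1) == 2            # every element of N1 has order <= 2
assert max(order(x) for x in N2) == 4            # N2 has an element of order 4
```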
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
How to obtain a formula for $f(z)$ given this recurrence I am trying to figure out how to derive a formula for $f(z)$ that is a function of $z$ and maybe $k \in \mathbb{N}$:
$$
f(z) = 1+z f \bigg(\frac{z}{1+z} \bigg)
$$
As an attempt, I tried a change of variable $z=\frac{1}{x}$, and I get:
$$
f\bigg(\frac{1}{x}\bigg) = 1 + \frac{1}{x}f\bigg(\frac{1/x}{1+1/x}\bigg)
$$
which evaluates to the following after simplifying the terms inside $f$:
$$
f\bigg(\frac{1}{x}\bigg) = 1 + \frac{1}{x}f\bigg(\frac{1}{1+x}\bigg)
$$
Now, given the above, is the following telescoping operation valid?
$$
f\bigg(\frac{1}{x}\bigg) = 1 + \frac{1}{x}\bigg[ 1 + \frac{1}{x} + f\bigg(\frac{1}{2+x}\bigg) \bigg] = 1 + \frac{1}{x} + \frac{1}{(x+1)^2} + ... + \frac{1}{(x+k-3)^{k-2}} + \frac{1}{(x+k-2)^{k-1}}f\bigg(\frac{1}{k+x}\bigg)
$$
Also, how do I transform the above to a solution for $f(z)$?
| If we plug in $z=0$ to the original functional equation we get $f(0)=1$. Then we set $g(x)=f(1/x)$. We have, as you showed, that $$g(x)=1+\frac1{x}g(x+1).$$
Thus for integer $m>0$ we have
$$g(x)=g(x+m+1)\prod_{r=0}^{m}\frac1{x+r}+1+\sum_{k=0}^{m-1}\prod_{j=0}^{k}\frac1{x+j}$$
(the constant term $1$ is the $k=-1$ "empty product" term; with it, the case $m=0$ reduces to the recurrence itself). Taking the limit as $m\to\infty$ on both sides, we have $$g(x)=1+\sum_{k\ge0}\prod_{j=0}^{k}\frac1{x+j},$$
because $\lim_{m\to\infty}g(x+m)=\lim_{m\to\infty}f(\tfrac1{x+m})=f(\lim_{m\to\infty}\tfrac1{x+m})=f(0)=1$, while $\prod_{k\ge0}\frac1{x+k}=0$ for all $x\in\Bbb R\setminus \Bbb Z_{\le 0}$. Thus we have
$$f(x)=1+\sum_{k\ge0}\prod_{j=0}^{k}\frac1{1/x+j}.$$
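A numeric sanity check of the solution (a sketch; note the leading constant term $1$ — the empty-product term needed for $g(x)=1+\frac1x g(x+1)$ to hold — and that the sample point $z=0.3$ and truncation $K$ are arbitrary):

```python
def f(z, K=400):
    # f(z) = 1 + sum_{k>=0} prod_{j=0..k} 1/(1/z + j), truncated after K terms
    x = 1.0 / z
    total, prod = 1.0, 1.0
    for j in range(K):
        prod /= (x + j)          # running product 1/((x)(x+1)...(x+j))
        total += prod
    return total

z = 0.3
# the original functional equation: f(z) = 1 + z f(z/(1+z))
assert abs(f(z) - (1 + z * f(z / (1 + z)))) < 1e-9
```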
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Integral of $\int \sin^2x\cos^4xdx$ $$\int \sin^2x\cos^4xdx$$
I tried
$$I = \int (1-\cos^2x)\cos^4xdx = \int \frac{\sec^2x-1}{\sec^6x}dx = \int \frac{\tan^2x}{\sec^6x}dx$$
Take $\tan x = t \implies \sec^2xdx = dt$
$$I = \int \frac{t^2}{(t^2+1)^4}dt$$
And I could not proceed further from here.
| $$I=\int \sin^2 x \cos^4 x dx =\frac{1}{8} \int \sin^2 2x (1+\cos 2x) dx=\frac{1}{8}\int \sin ^2 2x dx+\frac{1}{8}\int (t^2/2) dt$$
Here $\sin 2x=t$
$$\implies I=\frac{1}{16}\int (1-\cos 4x) dx +\frac{(\sin 2x)^3}{48} =\frac{1}{16} x-\frac{1}{64} \sin 4x+\frac{(\sin 2x)^3}{48} $$
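The antiderivative can be sanity-checked numerically (a sketch — comparing it against a simple Simpson-rule quadrature on an arbitrary interval):

```python
import math

def F(x):
    # antiderivative found above: x/16 - sin(4x)/64 + sin(2x)^3/48
    return x / 16 - math.sin(4 * x) / 64 + math.sin(2 * x) ** 3 / 48

def integrand(x):
    return math.sin(x) ** 2 * math.cos(x) ** 4

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

a, b = 0.0, 1.2                        # arbitrary test interval
assert abs((F(b) - F(a)) - simpson(integrand, a, b)) < 1e-10
```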
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Calculating $\mathbb E$ and $\mathbb V$ of a random variable. $\begin{pmatrix}&&&&\mathrm{payout}\\\mathrm{age,sex}&&1&&2&&4\\68, female&&1&&-&&1\\67,male&&-&&2&&-\end{pmatrix}$
I don't know how to properly format matrices, so let's explain it:
We got two $68$ year old females. If female_1 dies we have to pay $1$. If female_2 dies we have to pay $4$.
We got two $67$ year old males. If one of them dies we have to pay $2$. If both die we have to pay $2+ 2=4$.
(If noone dies we don't have to pay anything)
Let $S$ be our total payoff. I want to calculate $\mathbb E(S)$ and $\mathbb V(S)$.
$q_{68,f}$ and $q_{67m,}$ are the probabilities that a 68 year old female (67 year old male respectively) dies.
So $\mathbb E(S)=q_{68,f}\cdot 1+q_{68,f}\cdot 4+q_{67,m}2+q_{67,m}2=q_{68,f}\cdot 5+q_{67,m}4$
For $\mathbb E(S^2)$ which of these numbers do I have to square? 1, 4, 2, 2 or 5, 4?
Or do I have to do it like this: $\mathbb E(S)=\sum_{k=0}^9k\cdot\mathbb P(S=k)$ and $\mathbb E(S^2)=\sum_{k=0}^9k^2\cdot\mathbb P(S=k)$
| Assuming the deaths are independent, write $S=X_1+4X_2+2Y_1+2Y_2$, where $X_1,X_2$ are independent Bernoulli indicators with success probability $q_{68,f}$ and $Y_1,Y_2$ with success probability $q_{67,m}$. Since an indicator satisfies $X^2=X$, expanding $S^2$ gives the squared terms plus the cross terms:
$$\mathbb E(S^2)=\underbrace{q_{68,f}\cdot 1^2+q_{68,f}\cdot 4^2+q_{67,m}2^2+q_{67,m}2^2}_{=\,17q_{68,f}+8q_{67,m}}+2\left(4q_{68,f}^2+20\,q_{68,f}\,q_{67,m}+4q_{67,m}^2\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3583911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $\overline{U}\cap V\subseteq\overline{U\cap V}$? In particular, if the inclusion is false in general, is it true if $V$ is open?
Could someone help me, please?
| In general this is false:
Consider $U := [0,1)$ and $V:= [1,2]$. Then we have $\overline{U} = [0,1]$ and thus $\overline{U} \cap V = \{1\}$, but $U \cap V = \emptyset$, and thus its closure is empty as well.
However, if $V$ is open, this is true: if $x\in\overline{U}\cap V$ and $W$ is any neighbourhood of $x$, then $W\cap V$ is also a neighbourhood of $x$ (as $V$ is open), so it meets $U$; hence $W$ meets $U\cap V$, which shows $x\in\overline{U\cap V}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $m_n + n$ is even, then prove that $m_{n+1} + n + 1$ is even. Algebraic breakdown help I'm a 46 year old Discrete Math student, and with all of the gaps in my math education, remembering the algebra to do the last step of my induction proofs has been the hardest part for me. How do I deal with the subscript and where do I even start to break the right side of this down to prove that it is equal to the left? Looking forward to learning this. I have added some example pics of the program that I have created to run the Truth Table examples. It sums up the total of each row and proves that my theory is correct.
Let $m_n$ be the number of true $p_i$ in the chained bi-conditional $(p_1 \leftrightarrow p_2 ... \leftrightarrow p_n)$
Claim: The chained bi-conditional is true if and only if $m_n + n$ is even for all positive integers. This formula covers both odd and even cases.
$m_n + n$ even $\rightarrow$ $m_{n+1} + n + 1$ even
| So if I understand correctly you want to see the truth of the biconditional
$$P \leftrightarrow p_{n+1}$$ where $P$ itself is a complex chain of smaller statements with arrows between them. The beauty of the situation is that you discovered a different way of describing the truth or falsehood of $P$: $P$ is true if $m_n + n$ is even and false if it is odd.
This is great. Now we don't have to worry about the inner structure of $P$ and can just roll with this.
We know that there are just 4 possible situations:
*
*$P$ is true and $p_{n+1}$ is true
*$P$ is true and $p_{n+1}$ is false
*$P$ is false and $p_{n+1}$ is true
*$P$ is false and $p_{n+1}$ is false
In each of the four situations you can compute two things:
*
*Whether the 'big' statement $P \leftrightarrow p_{n+1}$ is true or not.
*Whether $(n+1) + m_{n+1}$ is even or odd
Once you have done that you can check if the relation between truth/falsehood of the big statement and even/oddness (parity) of $(n+1) + m_{n+1}$ that you conjectured holds in all four cases, and if it does you are done.
Now I think you don't have any trouble with item 1. So let's talk about 2.
First: why $(n+1) + m_{n+1}$? Well, because there are $n + 1$ terms in the big statement $P \leftrightarrow p_{n+1}$.
As I said you want to compute this quantity in four situations. I'll do the first, you can do the other three.
$P$ is true so $n + m_n$ is even.
Let's distinguish two more cases: $n$ even and $n$ odd.
If $n$ is even then $m_n$, the number of true statements among $p_1, \ldots, p_n$, must be even as well. Now we add the additional true (IN THIS CASE) statement $p_{n+1}$, so we get that $m_{n+1} = m_n + 1$: the total number of true statements is one more, so $m_{n+1}$ is odd. On the other hand, since $n$ is even, $n+1$ is odd, and hence $(n+1) + m_{n+1}$ is odd + odd = even.
If $n$ is odd then, since $n + m_n$ is even, $m_n$ is odd as well. We add one more true statement ($p_{n+1}$) so we find (IN THIS CASE) that $m_{n+1}$ is even. But $n + 1$ is even as well (since $n$ is odd), so again $(n+1) + m_{n+1}$ is even.
Conclusion: in the scenario of the first bullet point we have that $(n+1) + m_{n+1}$ is even. One down, three to go.
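The base claim — the chained biconditional $(p_1\leftrightarrow p_2\leftrightarrow\cdots\leftrightarrow p_n)$, evaluated left to right, is true iff $m_n+n$ is even — can also be checked exhaustively for small $n$, a sketch of the truth-table program described in the question:

```python
from itertools import product
from functools import reduce

def chained_biconditional(ps):
    # evaluate (((p1 <-> p2) <-> p3) <-> ...) left to right
    return reduce(lambda a, b: a == b, ps)

for n in range(1, 8):
    for ps in product([True, False], repeat=n):
        m = sum(ps)                      # number of true p_i
        assert chained_biconditional(ps) == ((m + n) % 2 == 0)
```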
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Generalized Union and Intersection by Induction Our teacher, told us to prove,
$$\left( \bigcap_{i=1}^n A_i\right)^c = \bigcup_{i=1}^n (A_i^c) $$
By induction. He told us that it has something to do with DeMorgan.
So my question is on knowing what's on the sets. I think that the left one has all the numbers to n except for the number 1, but in the right I get lost.
Can you explain how I could know the elements on the right-handed set? Also, if you could tell me another hint I would really thank you.
| \begin{align}
\left( \bigcap_{i=1}^{n+1} A_i\right)^c & = \left(\left( \bigcap_{i=1}^n A_i \right) \cap A_{n+1} \right)^c \\[8pt]
& = (B\cap A_{n+1})^c & & \text{where } B := \bigcap_{i=1}^n A_i \\[8pt]
& = B^c \cup A_{n+1}^c & & \text{de Morgan} \\[8pt]
& = \left( \bigcap_{i=1}^n A_i \right)^c \cup A_{n+1}^c \\[8pt]
& = \left( \bigcup_{i=1}^n (A_i^c) \right) \cup A_{n+1}^c & & \text{by the induction hypothesis} \\[8pt]
& = \bigcup_{i=1}^{n+1} (A_i^c)
\end{align}
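For intuition about which elements live in each side, a quick check with concrete finite sets may help (a sketch; the universe $U$ and the sets $A_i$ are arbitrary choices):

```python
U = set(range(10))                                  # ambient universe
A = [{1, 2, 3, 4}, {2, 3, 5}, {0, 2, 3, 7, 9}]      # arbitrary A_1, A_2, A_3

def complement(S):
    return U - S

lhs = complement(set.intersection(*A))              # (A_1 ∩ ... ∩ A_n)^c
rhs = set.union(*[complement(Ai) for Ai in A])      # A_1^c ∪ ... ∪ A_n^c
assert lhs == rhs
```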
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compounding more frequently seems to decrease total amount when using APYs? Interest rates are often given in terms of annual growth, even when compounding happens more often than once a year. To account for this, I read that we can use the following transformation to get the per-period growth factor, $r$.
$$ r = (1 + \text{apy})^{1/n} $$
Where apy is the percentage annual growth, and n is the number of compounding periods.
Though approximations exist, my understanding is that this is justified by the fact that, for a balance $B$, one year of growth produces
\begin{align}
B \cdot \underbrace{r \cdot r \cdot \ldots \cdot r}_{n \text{ times}}
& = B (1 + \text{apy})^{1/n} (1 + \text{apy})^{1/n} \ldots (1 + \text{apy})^{1/n} \\
& = B(1 + \text{apy})
= B + B(\text{apy})
\end{align}
Which is the annual rate applied once, as we should expect.
But problems arise when we consider the fact that people periodically contribute to their accounts. When I do the math, assuming contributions are made at the start of each compounding period, compounding more frequently decreases the overall growth. This, surely, cannot be right.
My thinking can be expressed two ways. First, mathematically, then as an equivalent computer program.
*
*Let $a$ be the contribution amount.
*Let $t$ be the number of contributions per year.
*Let $P$ be the initial balance.
*Let $n$ be the number of compounding periods. Assume $n$ divides $t$ for simplicity.
*Let $y$ be the number of years.
$$ P ( 1 + r)^y + a {t \over n} {(1+r)^{1/n} \over {(1+r)^{1/n}-1}} ((1+r)^y - 1)$$
Alternatively, as a C program, we see the same results.
#include <stdio.h>
#include <math.h>

int main() {
    double apy = 7.2;      // annual growth rate (percent)
    double start = 5000;   // starting amount
    double add = 2000;     // amount to add each contribution
    int compounds = 4;     // number of times to compound per year
    int additions = 12;    // number of times to contribute per year
    int years = 20;        // number of years to grow

    double balance = start;
    double rate = pow(1 + apy / 100.0, 1.0 / compounds) - 1;
    for (int i = 0; i < years; i++) {
        for (int c = 0; c < compounds; c++) {
            for (int m = 0; m < additions / compounds; m++)
                balance += add;
            balance *= 1 + rate;
        }
    }
    printf("Balance: %.2lf\n", balance);
    return 0;
}
In either case, we can tabulate the following when $P = 5000$, $a = 2000$, $\text{apy} = 7.2$, $t = 12$ and $y = 20$,
The balance is $1,098,139.14$ when $n = 1$.
The balance is $1,070,593.39$ when $n = 4$.
The balance is $1,064,536.20$ when $n = 12$.
| The reason is that with less compounding the money is in the account longer. Let us take a one year term and compare annual vs. semiannual compounding at $10\%$. If you deposit $1$ at the start of the year, annual compounding gets you $1.1$ while semiannual gets you $1.1025$ as you would expect. But if you contribute $1$ split over the periods, for semiannual interest the second $0.5$ is only in the account for half the year, so it only draws half a year's interest. Again, $1$ in the annual account gives $1.1$, but in the semiannual account it only gives $1.07625$
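The answer's one-year example can be reproduced directly (a sketch using the answer's nominal-rate convention, i.e. $5\%$ credited per half-year, and its convention that annual compounding credits a full year's interest on the whole contributed balance):

```python
semi_one_deposit = 1.05 ** 2                   # 1 deposited at the start: 1.1025

# split contribution: 0.5 at the start of each half-year
annual_split = (0.5 + 0.5) * 1.10              # annual interest hits the whole balance
semi_split = 0.5 * 1.05 ** 2 + 0.5 * 1.05      # second deposit earns only half a year

assert abs(semi_one_deposit - 1.1025) < 1e-12
assert abs(semi_split - 1.07625) < 1e-12
assert semi_split < annual_split               # less frequent compounding wins here
```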
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are these fields equal?
Let $\zeta_3$ be the third root of unity.
1) Does it hold that: $\mathbb{Q}(\sqrt{2},\zeta_3)=\mathbb{Q}(\sqrt{2}+\zeta_3)$ ?
2) Does it hold that $\mathbb{Q}(\sqrt[3]{2},\zeta_3)=\mathbb{Q}(\sqrt[3]{2}\zeta_3)$?
My attempt for 1) is to compute the minimal polynomial of $\sqrt{2}+\zeta_3$ as $p(x)=x^4+2x^3-x^2-2x+7$
which is of degree 4. But since $|\mathbb{Q}(\sqrt{2},\zeta_3):\mathbb{Q}|=6$ these can't be equal. Is my proof correct?
For the second one I am not sure.
| $\zeta_3$ is a root of $x^2 + x + 1$ since $x^3 - 1 = (x - 1)(x^2 + x + 1)$. So its degree is $2$ not $3$.
For 2) you can do the same kind of thing. The minimal polynomial of $\sqrt[3]2 \zeta_3$ is $x^3 - 2$ so $\mathbb{Q}(\sqrt[3]2 \zeta_3) \ne \mathbb{Q}(\sqrt[3]2, \zeta_3)$. It's a 3-dimensional subspace of a 6-dimensional $\mathbb{Q}$-vector space.
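Both degree claims are easy to sanity-check numerically in floating point (a quick sketch):

```python
import cmath

zeta3 = cmath.exp(2j * cmath.pi / 3)         # primitive third root of unity
alpha = 2 ** (1 / 3) * zeta3                 # cube root of 2 times zeta_3

assert abs(zeta3 ** 2 + zeta3 + 1) < 1e-12   # zeta_3 is a root of x^2 + x + 1
assert abs(alpha ** 3 - 2) < 1e-12           # alpha is a root of x^3 - 2
```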
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Show $AA^+$ is symmetric Can somebody show me how $AA^+$ is symmetric if $A^+$ is the pseudoinverse of $A$?
All I can muster is:
$(AA^+)^T = (A^+)^TA^T$
I know: $(A^+)^T = (A^T)^+$ but that doesn't really seem like it gets us anywhere.
Thanks!
| The comment from user759562 is correct, it is Hermitian by definition. But in the spirit of the question, lets do the computation with the definition provided here. That is, when $A$ has linearly independent columns, $A^+$ can be expressed as $A^+ = (A^*A)^{-1}A^*.$ Note that for an invertible matrix $B$, we have $(B^*)^{-1} = (B^{-1})^*.$ Also, if you are working in the reals, just replace $B^*$ with $B^T$.
We wish to show that $(AA^+)^* = AA^+.$
We have \begin{equation}
\begin{split}
(AA^+)^* &= (A^+)^*A^* \\
&= [(A^*A)^{-1}A^*]^*A^* \\
&= A[(A^*A)^{-1}]^*A^* \\
&= A[(A^*A)^*]^{-1}A^* \\
&= A(A^*A)^{-1}A^* \\
&= AA^+.
\end{split}
\end{equation}
This can be shown for the 'other' definition (ie. when A has linearly independent rows) using a similar sequence of steps.
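The computation can be mirrored numerically for a concrete real matrix with linearly independent columns (a sketch in pure Python; the matrix `A` is an arbitrary example):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # 3x2, linearly independent columns
At = transpose(A)
A_pinv = matmul(inv2(matmul(At, A)), At)        # A+ = (A^T A)^{-1} A^T

P = matmul(A, A_pinv)                           # A A+ should be symmetric
assert all(abs(P[i][j] - P[j][i]) < 1e-12
           for i in range(3) for j in range(3))
```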
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3584960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does a vector b in the column space come from a vector in the row space? I'm working through Gilbert Strang's Introduction to Linear Algebra book and am really confused by a paragraph from chapter 4.1 titled 'Orthogonality of the Four Subspaces'. The paragraph is as follows:
Every vector goes to the column space! Multiplying by A cannot do anything else. More than that: Every vector $b$ in the column space comes from one and only one vector $x_r$ in the row space. Proof: If $Ax_r = Ax'_r$, the difference $x_r - x'_r$ is in the nullspace. It is also in the row space, where $x_r$ and $x'_r$ came from. This difference must be the zero vector, because the nullspace and row space are perpendicular. Therefore $x_r = x'_r$.
Further on in the book an exercise is given, where we have to demonstrate this using the following figure: Two pairs of orthogonal subspaces, with the following matrix: $A = \begin{bmatrix}1 & 2\\3 & 6\end{bmatrix}$. The column space of the matrix is: $(1, 3)$, and its row space is: $(1, 2)$. If I multiply A with the randomly chosen $x$ vector: $(1, 1)$, I arrive at $b = (3, 9)$. However, this $b$ seems unable to be recreated using a multiple of the row space vector: $(1, 2)$. I'm really confused by this. I also feel like I'm missing the meaning of the proof and am not familiar with the $'$ symbol in $Ax'_r$. Does it mean the transpose?
Any help would be greatly appreciated!
| Careful — the theorem is about $x_r$, not about $b$: it says that every $b$ in the column space can be written as $b=\mathbf{A}x_r$ for exactly one $x_r$ in the row space. It does not say that $b$ itself is a multiple of a row-space vector.
Also note that in your example the two rows $(1,2)$ and $(3,6)=3\,(1,2)$ are linearly dependent, so the row space really is the line
$$\text{span}\left(\begin{bmatrix}1\\2\end{bmatrix}\right),$$
and likewise the column space is the line spanned by $\begin{bmatrix}1\\3\end{bmatrix}$.
The underlying meaning of "every vector goes to the column space" is:
$$\mathbf{A}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\mathbf{A}_1x_1+\mathbf{A}_2x_2$$
That means that regardless of what you put in as $x$, you will get a linear combination of the columns of $\mathbf{A}$, so $\mathbf{A}x$ is in the column space of $\mathbf{A}$.
Now take your $b=\mathbf{A}\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}3\\9\end{bmatrix}$. It does lie in the column space, since $(3,9)=3\,(1,3)$. To find the row-space vector that produces it, try $x_r=c\begin{bmatrix}1\\2\end{bmatrix}$: then $\mathbf{A}x_r=c\begin{bmatrix}5\\15\end{bmatrix}$, and $c=\tfrac35$ gives $b$. So $x_r=\begin{bmatrix}3/5\\6/5\end{bmatrix}$ is the one and only vector in the row space with $\mathbf{A}x_r=b$. Your $x=(1,1)$ also maps to $b$; the difference $x-x_r=\tfrac15\begin{bmatrix}2\\-1\end{bmatrix}$ lies in the nullspace of $\mathbf{A}$, which is orthogonal to the row space — exactly the orthogonality that makes $x_r$ unique.
Finally, $x'_r$ does not mean the transpose: it is just a second candidate vector ("$x_r$ prime"). The proof assumes two row-space vectors $x_r$ and $x'_r$ with $\mathbf{A}x_r=\mathbf{A}x'_r$ and shows they must be equal.
I hope that cleared some things up. If not, please ask!
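To make the question's own example concrete (a sketch): $b=(3,9)$ is produced by the row-space vector $x_r=\tfrac35(1,2)$.

```python
A = [[1, 2], [3, 6]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

x = [1, 1]
b = apply(A, x)                       # lands in the column space: 3*(1, 3)
assert b == [3, 9]

x_r = [3 / 5, 6 / 5]                  # (3/5)*(1, 2), lies in the row space
b_r = apply(A, x_r)
assert all(abs(b_r[i] - b[i]) < 1e-12 for i in range(2))
```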
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Show convergence of $\sum \frac{z^n}{n}$ Show that the series $\displaystyle\sum \frac{z^n}{n}$ for $z=1$ diverges, but for all $z \ne 1$ with $|z|=1$ converges. Hint: Estimate $(1-z) \displaystyle\sum_{n=k}^{m} \frac{z^n}{n}$.
The first case $z=1$ is easy, this is just the harmonic series. But I am really stuck with the second part, which is the actually interesting one. First I tried to write $z=re^{i \phi}$, but the hint does not look like this could be useful. Specifically, I don't know how to bring the series in a form such that the hint is even useful.
Edit: To clarify some questions from the comments. The hint has no misprint and I don't see how the question about the uniform convergence of this series on the open unit disk that was posted multiple times in the comments helps me to solve this problem. However, thank you already for your help! There might be a good strategy by applying the Dirichlet test as one of the answers suggests.
| Hint:
$$\sum_{k=1}^n\frac{z^k}{k}-\sum_{k=1}^n\frac{z^{k+1}}k=\sum_{k=1}^n\frac{z^k}{k}-\sum_{k=2}^{n+1}\frac{z^k}{k-1}=z-\sum_{k=2}^{n}\frac{z^k}{k(k-1)}-\frac{z^{n+1}}n.$$
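Numerically, the hint's identity is easy to confirm at a point on the unit circle (a sketch; $z=i$ and $n=30$ are arbitrary choices):

```python
z, n = 1j, 30

lhs = (sum(z ** k / k for k in range(1, n + 1))
       - sum(z ** (k + 1) / k for k in range(1, n + 1)))
rhs = z - sum(z ** k / (k * (k - 1)) for k in range(2, n + 1)) - z ** (n + 1) / n
assert abs(lhs - rhs) < 1e-12
```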
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do we find the answer of the derivative when we are not even given a point in the function? In almost all derivative problems I came across, we were just given the function (for example, $x^3$) and told to find its derivative. As I understand it, the derivative is the slope of the tangent at a point. So if we are not given a point in the function, how will we find the derivative of the function?
| When someone says to find the derivative of a function in the manner you speak of, they are wanting you to find the derivative at an arbitrary point. This ends up being another function. For example:
$f(x) = x^2$
We know that at any point $x$ the derivative is $2x$. Therefore the derivative is $f’(x) = 2x$. The evaluation of that function gives you the value of the derivative at $x$. You didn’t pick a value of $x$ so you just leave it as “$x$”.
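A numeric sketch of the same idea: the central difference quotient of $f(x)=x^2$ approximates $2x$ at whatever point you happen to pick.

```python
def f(x):
    return x ** 2

def numeric_derivative(f, x, h=1e-6):
    # central difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-2.0, 0.0, 1.5, 10.0]:          # arbitrary sample points
    assert abs(numeric_derivative(f, x) - 2 * x) < 1e-6
```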
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An inequality for the mgf using Jensen’s inequality Given non-negative random variables $X_1,X_2,...$ how to show that
$$\mathbb{E}\exp(t\max\limits_{1\leq i\leq n}X_i)\leq \sum\limits_{1\leq i\leq n}\mathbb{E}\exp(tX_i).$$
I think we should start with
$$\max\limits_{1\leq i\leq n}X_i\leq \sum\limits_{1\leq i\leq n}X_i$$
and the apply Jensen's inequality, but I need help with clarification of the details
| If $t<0$ then $t \max_k X_k(\omega) \le t X_i(\omega)$ for all $i$ and if
$t \ge 0$ then $ t \max X_k(\omega) \le tX_i(\omega)$ for some $i$.
Hence $e^{t \max_k X_k(\omega) } \le \sum_k e^{t X_k(\omega)}$ and hence taking expectations
we have the desired result.
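The pointwise bound $e^{t\max_k X_k}\le\sum_k e^{tX_k}$ is easy to spot-check on random nonnegative samples (a sketch; the sample distribution and the values of $t$ are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0, 5) for _ in range(4)]    # nonnegative X_1..X_4
    for t in (-2.0, -0.5, 0.0, 0.7, 3.0):
        assert math.exp(t * max(xs)) <= sum(math.exp(t * x) for x in xs) + 1e-12
```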
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Do $\{x\mid x\in\mathbb{R},x<k\}$ and $\{x\in\mathbb{R}\mid x<k\}$ mean the same thing? Short question about set-builder notation.
Do
$$D=\{x \mid x \in \mathbb{R}, x < k\}$$
and
$$D=\{x \in \mathbb{R} \mid x < k\}$$
mean the same thing?
I see both of them used in different contexts and was wondering if they are interchangeable.
| They mean the same thing. I prefer $\{x \in \mathbb{R} \mid x < k \}$, because it is a clear separation between the domain ($\mathbb{R}$) and the condition ($x < k$). So I think it is easier to read, definitely when the condition gets more complicated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Powers of roots in terms of polynomial coefficients Suppose we have a monic polynomial of degree $n$ with coefficients $c_1, c_2, c_3, \cdots, c_n$, and roots $r_1, r_2, r_3, \cdots, r_n$:
$$
x^n+c_1 x^{n-1} + c_2 x^{n-2} + c_3x^{n-3} + \cdots + c_n
$$
I'm looking to find expressions such as
$$
r_1^2 + r_2^2 + r_3^2 + \cdots + r_n^2 \\
r_1^3 + r_2^3 + r_3^3 + \cdots + r_n^3 \\
r_1^4 + r_2^4 + r_3^4 + \cdots + r_n^4 \\
$$
in terms of the coefficients $c_k$.
I already know how to do the first few on a case by case basis, so I'm looking for a more general solution or method for handling higher powers and higher degree polynomials, if they exist.
I suspect there's some simple inductive method I'm just not seeing.
| You can use Newton's identities.
This process would be inductive. The coefficient of $x^{n-k}$ is $(-1)^ke_k$ by the notation in the article on Newton's identities. Your desired sums are
$$p_k=r_1^k+r_2^k+\cdots+r_n^k$$
Then the formula says
$$ke_k=e_{k-1}p_1-e_{k-2}p_2+e_{k-3}p_3-\cdots+(-1)^{k-1}p_k$$
Substituting in the coefficients, you can solve for $p_k$. For example,
$$p_1=e_1=-c_{1}$$
$$2e_2=2c_{2}=e_1p_1-p_2=c_{1}^2-p_2$$
so
$$p_2=c_{1}^2-2c_{2}$$
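The recursion is easy to run in code; a sketch that recovers the power sums of the roots of $(x-1)(x-2)(x-3)=x^3-6x^2+11x-6$ directly from its coefficients:

```python
def power_sums(c, kmax):
    """Power sums p_k of the roots of x^n + c[0] x^{n-1} + ... + c[n-1],
    via Newton's identities: e_k = (-1)^k c[k-1], e_0 = 1, e_k = 0 for k > n."""
    n = len(c)
    e = [1.0] + [(-1) ** k * c[k - 1] for k in range(1, n + 1)] + [0.0] * kmax
    p = []
    for k in range(1, kmax + 1):
        # k e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i ; solve for p_k
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k))
        p.append((-1) ** (k - 1) * (k * e[k] - s))
    return p

# roots 1, 2, 3 give c = (-6, 11, -6); p_k = 1^k + 2^k + 3^k
assert power_sums([-6, 11, -6], 4) == [6, 14, 36, 98]
```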
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Disproving equality of cartesian products We are to disprove the statement $X \times Y = Y \times X \iff X = Y$ but I can't think of an example where this would be false. If $X = Y$, then wouldn't the Cartesian product be the same in either direction?
| The important point is that a Cartesian product is a set of ordered pairs. So if $X$ (say) contains an element $a$ which is not in $Y$, then in $X \times Y$ the element $a$ will appear in ordered pairs only as the first item of the pair, while in $Y \times X$ it will appear only as the second item of the pair. So no pair containing $a$ in $X \times Y$ can ever equal any pair in $Y \times X$, even one also containing $a$.
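A concrete check of this argument with small sets (a sketch; $X$ and $Y$ are arbitrary unequal sets, with $1\in X\setminus Y$):

```python
from itertools import product

X, Y = {1, 2}, {2, 3}
XY = set(product(X, Y))
YX = set(product(Y, X))

assert X != Y and XY != YX
assert all(p[0] == 1 for p in XY if 1 in p)   # 1 only as a first coordinate
assert all(p[1] == 1 for p in YX if 1 in p)   # 1 only as a second coordinate
```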
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3585903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Compute $\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x}\ln\left(\frac{1+x}{2}\right)\ dx$ How to prove that
$$\int_0^1\frac{\ln(1-x)\ln(1+x)}{1+x}\ln\left(\frac{1+x}{2}\right)\ dx$$
$$=2\text{Li}_4\left(\frac12\right)-2\zeta(4)+\frac{15}8\ln(2)\zeta(3)-\frac12\ln^2(2)\zeta(2)$$
where $\text{Li}_r$ is the polylogarithm function and $\zeta$ is Riemann zeta function.
I managed to prove the equality above using the following harmonic series,
$$\sum_{n=1}^\infty\frac{(-1)^nH_n^3}{n}, \ \sum_{n=1}^\infty\frac{(-1)^nH_n^{(2)}H_n}{n},\ \sum_{n=1}^\infty\frac{(-1)^nH_n^2}{n^2}\ \text{and }\ \sum_{n=1}^\infty\frac{(-1)^nH_n}{n^3}$$
so definitely this approach is pretty boring. Is it possible to solve it in a different way? Thank you.
| Set $x=2t-1$
$$\begin{align}
& =\int_{\frac{1}{2}}^{1}{\frac{\ln \left( t \right)\ln \left( 2t \right)}{t}\ln \left( 2-2t \right)dt} \\
& =\int_{\frac{1}{2}}^{1}{\frac{\ln \left( t \right)\ln \left( 2t \right)}{t}\left( \ln \left( 2 \right)-\sum\nolimits_{n=1}^{\infty }{\frac{{{t}^{n}}}{n}} \right)dt} \\
& =\int_{\frac{1}{2}}^{1}{\left\{ \frac{\ln \left( t \right)\ln \left( 2t \right)\ln \left( 2 \right)}{t}-\sum\nolimits_{n=1}^{\infty }{\frac{{{t}^{n-1}}\ln \left( t \right)\ln \left( 2t \right)}{n}} \right\}dt} \\
& =\int_{\frac{1}{2}}^{1}{\frac{\ln \left( t \right)\ln \left( 2t \right)\ln \left( 2 \right)}{t}dt-}\sum\nolimits_{n=1}^{\infty }{\frac{1}{n}\int_{\frac{1}{2}}^{1}{{{t}^{n-1}}\ln \left( t \right)\ln \left( 2t \right)}dt} \\
& =-\frac{1}{6}{{\ln }^{4}}\left( 2 \right)-\sum\nolimits_{n=1}^{\infty }{\left( \frac{2}{{{n}^{4}}}-\frac{2}{{{2}^{n}}{{n}^{4}}}-\frac{\ln \left( 2 \right)}{{{n}^{3}}}-\frac{\ln \left( 2 \right)}{{{2}^{n}}{{n}^{3}}} \right)} \\
& \vdots \\
& \vdots \\
\end{align}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
show that $\sqrt{n+1}-\sqrt{n} \rightarrow 0$ as $n \rightarrow \infty$ show that $\sqrt{n+1}-\sqrt{n} \rightarrow 0$ as $n \rightarrow \infty$
Here is the algebraic proof:
We have $a_n=\sqrt{n+1}-\sqrt{n}$, and we want to show that $\lim a_n=0$.
$$\sqrt{n+1}-\sqrt{n}=\frac{(\sqrt{n+1}-\sqrt{n})(\sqrt{n+1}+\sqrt{n})}{\sqrt{n+1}+\sqrt{n}}=\frac{n+1-n}{\sqrt{n+1}+\sqrt{n}}=\frac{1}{\sqrt{n+1}+\sqrt{n}}$$
So, when $n\to\infty$, we get $\frac{1}{\sqrt{n+1}+\sqrt{n}}\to 0$.
Question: I am wondering does episilon-delta method work here as an alternative proof?
| The epsilon-delta method requires you to work out how small a $\delta$ is sufficient for a sought $\epsilon$, so you need your calculation anyway. You want to prove$$\forall\epsilon>0\exists\delta>0\left(\forall n>\frac{1}{\delta}\left(\frac{1}{\sqrt{n+1}+\sqrt{n}}<\epsilon\right)\right).$$It suffices to take $\delta=4\epsilon^2$. Or if we use the more typical $\epsilon$-$N$ definition, take $N=\frac{1}{\delta}=\frac{1}{4\epsilon^2}$.
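A quick numeric sketch that the choice $N=\frac1{4\epsilon^2}$ works, for a few sample values of $\epsilon$:

```python
import math

for eps in (0.1, 0.01, 0.001):
    N = 1 / (4 * eps ** 2)
    n = int(N) + 1                      # smallest integer beyond N
    term = math.sqrt(n + 1) - math.sqrt(n)
    assert term < eps
```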
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Use contour Integration to establish $\int\limits_{-\infty}^\infty\frac1{(x^2+a^2)(x^2+b^2)}{\rm d}x=\frac\pi{ab(a+b)}$ for $a,b>0$ Can someone help me figure this out please? This is the question along with what I have so far.
Use contour integration to establish
$$\int_{x=-\infty}^\infty\frac1{(x^2+a^2)(x^2+b^2)}{\rm d}x=\frac\pi{ab(a+b)},~~~\text{where}~a,b>0$$
Assume $R>1$. Then, the improper integral can be written as
$$\lim_{R\to\infty}\int_{-R}^R\frac1{(x^2+a^2)(x^2+b^2)}{\rm d}x$$
Let $\gamma$ be the positively oriented contour consisting of $\gamma_1\cup\gamma_2$, where $\gamma_1=\{z=x+iy\in\Bbb C\mid-R\le x\le R\}$ and $\gamma_2=\{z=Re^{i\theta}\in\Bbb C\mid0\le\theta\le\pi\}$. Define the functions $f(z)=\frac1{(z^2+a^2)(z^2+b^2)}$ and...
| Hint: The poles are simple, and located at $\pm ai,\pm bi$. You could use the residue theorem, if you prove the integral on the part of the contour off the real axis goes to zero.
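Once the arc contribution is shown to vanish, the residue theorem yields the stated value. A crude numerical cross-check (midpoint rule on a long finite interval; the cutoff $L$ and step count are arbitrary choices of mine) agrees with $\frac{\pi}{ab(a+b)}$ for, say, $a=1$, $b=2$:

```python
import math

def integral(a, b, L=200.0, n=400_000):
    # crude midpoint rule on [-L, L]; the tail decays like 1/x^4
    h = 2 * L / n
    s = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        s += 1.0 / ((x * x + a * a) * (x * x + b * b))
    return s * h

approx = integral(1.0, 2.0)
exact = math.pi / (1.0 * 2.0 * (1.0 + 2.0))  # pi/6
```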
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the notation of a set of $n$ binary numbers where all of them are 0 except for one? From this post I found out we can define a set of $n$ binary numbers mathematically like: $\mathbb Z_2^n$.
But what if I want to further restrict this set such that all the bits must be zero except for one? For example, elements of the set look like:
$$[1,0,0,\dotsc,0] \quad\text{or}\quad [0,1,0,0,\dotsc,0] \quad\text{or}\quad [0,0,\dotsc,0,1], \quad\text{etc.}$$
What would be mathematical notation for this restriction?
| I don't know of any formal notation for this, but the set you are describing is precisely the powers of $2$ up to $2^{n-1}$; i.e. $\{2^a\mid a\in\mathbb Z,0\le a<n\}$. If you refer to such sets regularly, you may denote them by $A_n$ or whatever notation is convenient for you.
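For illustration (variable names are my own), a small Python sketch showing that the one-hot vectors of $\mathbb Z_2^n$, read as binary numerals, are exactly the powers of $2$ below $2^n$:

```python
n = 8
# one-hot vectors of Z_2^n as tuples, e.g. (1,0,...,0)
one_hot = {tuple(1 if i == j else 0 for i in range(n)) for j in range(n)}
# reading each vector as a binary numeral gives exactly the powers of 2 below 2^n
as_ints = {sum(bit << i for i, bit in enumerate(v)) for v in one_hot}
powers = {2**a for a in range(n)}
```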
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Trig substitution for $\sqrt{9-x^2}$ I have an integral that trig substitution could be used to simplify.
$$ \int\frac{x^3dx}{\sqrt{9-x^2}} $$
The first step is where I'm not certain I have it correct. I know that, say, $\sin \theta = \sqrt{1-\cos^2 \theta}$, but is it correct in this case that $3\sin \theta = \sqrt{9 - (3\cos \theta)^2}$?
Setting then $x = 3\cos \theta; dx = -3\sin \theta d\theta$
$$-\int \frac{(3\cos\theta)^3}{3\sin\theta}3\sin\theta d\theta$$
$$-27\int\cos^3\theta d\theta$$
$$-27\int(1-\sin ^2\theta)\cos \theta d\theta$$
Substituting again, $u=\sin \theta; du=\cos \theta d\theta$
$$-27\int(1-u^2)du $$
$$-27u + 9u^3 + C$$
$$-27\sin \theta + 9 \sin^3 \theta + C$$
$$-9\sqrt{9-x^2} + 3\sin\theta\cdot 3\sin\theta\cdot \sin \theta + C$$
$$-9\sqrt{9-x^2} + (\sqrt{9-x^2})^2 \cdot \frac{\sqrt{9-x^2}}{3} + C$$
$$-9\sqrt{9-x^2} + \frac{1}{3}(9-x^2)(9-x^2)^{\frac{1}{2}} + C$$
$$-9\sqrt{9-x^2} + \frac{1}{3}(9-x^2)^\frac{3}{2} + C $$
I guess I have more doubts that I've done the algebra correctly than the substitution, but in any case I'm not getting the correct answer. Have I calculated correctly? Is the answer simplified completely?
EDIT
Answer needed to be simplified further:
$$-9\sqrt{9-x^2} + \frac{1}{3}(\sqrt{9-x^2}^2 \sqrt{9-x^2}) + C$$
$$-9\sqrt{9-x^2} + \frac{1}{3}((9-x^2)\sqrt{9-x^2}) + C$$
$$\sqrt{9-x^2} \left (-9 + \frac{1}{3}(9-x^2) \right ) + C$$
$$\sqrt{9-x^2} \left (-6 - \frac{x^2}{3} \right ) + C$$
$$ \bbox[5px,border:2px solid red]
{
- \left ( 6+ \frac{x^2}{3} \right ) \sqrt{9-x^2}
}
$$
This is the answer the assignment was looking for.
| What you have done is absolutely correct, except where you forgot to mention that $\theta$ is in $(0, \pi)$, but you can simplify your answer further.
The book's answer might be something like $-\frac{1}{3} \sqrt{9-x^2} (x^2+18)$, which you can get by factoring out a factor of $\sqrt{9-x^2}$:
$$-9\sqrt{9-x^2} + \frac{1}{3}(9-x^2)(9-x^2)^{\frac{1}{2}} + C$$
$$= \sqrt{9-x^2} \left(-9 + \frac{1}{3}(9-x^2) \right)+ C$$
and you can surely continue from here.
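As a final sanity check on the antiderivative, one can differentiate $F(x) = -\left(6 + \frac{x^2}{3}\right)\sqrt{9-x^2}$ numerically and compare with the integrand at a few interior points (a rough finite-difference sketch, not a proof):

```python
import math

def F(x):
    # candidate antiderivative
    return -(6 + x * x / 3) * math.sqrt(9 - x * x)

def integrand(x):
    return x**3 / math.sqrt(9 - x * x)

def dF(x, h=1e-6):
    # central difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

errs = [abs(dF(x) - integrand(x)) for x in (0.5, 1.0, 2.0, 2.5)]
```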
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
How to give a combinatorial proof for: If $n$ and $k$ are positive integers with $n=2k$ then $\frac{n!}{2^k}$ is an integer How can I give a combinatorial proof that if $n$ and $k$ are positive integers with $n=2k$, then $\dfrac{n!}{2^k}$ is an integer?
| If $n=2k$, $\dfrac{n!}{2^k}$ can be written as the multinomial $\dbinom{n}{2,\dots,2}$ (with $k$ $2$'s), and is therefore an integer.
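The multinomial interpretation guarantees integrality, but the underlying divisibility claim is also easy to verify directly by machine (an illustrative check, not part of the combinatorial argument):

```python
from math import factorial

# (2k)!/2^k is the multinomial (2k choose 2,...,2); verify integrality directly
checks = [factorial(2 * k) % 2**k == 0 for k in range(1, 15)]
values = [factorial(2 * k) // 2**k for k in (1, 2, 3)]  # 1, 6, 90
```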
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How could we plot the KL divergence? I was trying to write a blogpost on information theory and I think it would be a good idea (if possible) to plot the KL divergence in a 3D-plot in order to show graphically its convexity, but I wouldn't know how to define the pdf space. How would you do it?
$$
KL(f||g)=\sum_{x \in X} f(x)\log \frac{f(x)}{g(x)}
$$
| One idea would be to use the fact that a function is convex if and only if its restriction to a line is convex.
In the case of KL divergence, we can pick any two pairs of distributions $(f, g)$ and $(f', g')$ and plot
$$
\mathrm{KL}(\lambda f + (1 - \lambda) f' \, || \, \lambda g + (1 - \lambda) g')
$$
as a function of $\lambda$, with $\lambda \in [0, 1]$.
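A minimal numerical sketch of this idea (the two distribution pairs are arbitrary examples of mine): compute the KL divergence along the line and confirm midpoint convexity of the sampled curve; plotting `vals` against `lams` then gives the 1-D picture.

```python
import math

def kl(f, g):
    return sum(fi * math.log(fi / gi) for fi, gi in zip(f, g) if fi > 0)

def mix(p, q, lam):
    return [lam * pi + (1 - lam) * qi for pi, qi in zip(p, q)]

# two arbitrary pairs of distributions on three points
f, g = [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]
f2, g2 = [0.2, 0.5, 0.3], [0.4, 0.4, 0.2]

lams = [i / 20 for i in range(21)]
vals = [kl(mix(f, f2, l), mix(g, g2, l)) for l in lams]
# midpoint convexity of the sampled curve (plot vals vs lams for the picture)
convex = all(vals[i] <= (vals[i - 1] + vals[i + 1]) / 2 + 1e-12
             for i in range(1, 20))
```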
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Physics Question, Math Related: When do physicists ever use the following expressions, in whatever context:
\begin{align}
\frac{d}{dt}[r_1(t)\ \cdot \ r_2(t)] = r_1 \ \cdot \ \frac{dr_2}{dt} + \frac{dr_1}{dt} \ \cdot \ r_2 \\
\frac{d}{dt}[r_1(t)\ \times \ r_2(t)] = r_1 \ \times \ \frac{dr_2}{dt} + \frac{dr_1}{dt} \ \times \ r_2
\end{align}
For an answer, I'd like an example I can visualize, such as one from linear momentum, or wherever applicable. Any thoughts would be helpful, or even a migration to Physics.SE; I posted here since it is about the math component.
| 2) Angular momentum of a solid object is defined as $\vec{L} = \vec{r} \times \vec{p}$, where $\vec{p}$ is the ordinary (linear) momentum of that object, and $\vec{r}$ is the position vector from the point with respect to which the angular momentum is calculated. Thus, a change in angular momentum can come either from a change in the linear momentum or from a change of the "arm". Taking the derivative of $\vec{L}$ shows that the result is the sum of these two effects.
1) Your factory produces $\vec{r}_1 = (3/5 \text{ bottles}, 4/5 \text{ cans})$ per minute. You should be producing $\vec{r}_2 = (4/5 \text{ bottles}, 3/5 \text{ cans})$ per minute. Thus your efficiency is $\vec{r}_1 \cdot \vec{r}_2$. Your efficiency can change due to two factors: you yourself produce a different ratio of bottles and cans, or your goal changes. Again, the net effect is the sum of both.
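As a quick numerical illustration of the first rule (the sample curves $r_1, r_2$ are my own arbitrary choices), a central-difference check confirms that the derivative of the dot product equals the sum of the two terms:

```python
import math

def r1(t): return (t, t * t, 1.0)
def r2(t): return (math.cos(t), math.sin(t), t)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def num_d(f, t, h=1e-6):
    # central difference for a scalar function
    return (f(t + h) - f(t - h)) / (2 * h)

def d_vec(r, t, h=1e-6):
    # central difference for a vector-valued curve
    p, m = r(t + h), r(t - h)
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

t = 0.7
left = num_d(lambda s: dot(r1(s), r2(s)), t)
right = dot(r1(t), d_vec(r2, t)) + dot(d_vec(r1, t), r2(t))
```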
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3586931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
For fixed hypotenuse, can the number of primitive Pythagorean triples exceed the number of non-primitive ones? For the equation,
$$a^2+b^2=c^2$$
if $c$ is fixed and the number of natural solutions for $a, b$ is greater than $1$, then can the number of primitive solutions (solutions in which $a, b, c$ are coprime) exceed the number of non-primitive ones?
After testing a large number of cases I believe that the number of non-primitive solutions will always exceed the number of primitive ones although I have no proof for this.
If false, what is the smallest counter-example for $c$?
| For some hypotenuse $c$, let the number of primitive solutions be greater than the number of non-primitive solutions.
Assume that $p \mid c$ for some prime $p$. Clearly, there is at least one primitive solution $(a,b,c)$. Then, we have:
$$p^2 \mid c^2 \implies p^2 \mid (a^2+b^2)$$
It is easy to see that since $p \nmid a$, we must have $p \neq 2$ and $p \not\equiv 3 \pmod{4}$. Thus, any prime $p \mid c$ satisfies $p \equiv 1 \pmod{4}$.
Now, let the prime factorization of $c$ be:
$$c=\prod_{k=1}^n p_k^{x_k}$$
where all $p_k$ are $1 \bmod{4}$. Let $p_k=(a_k+b_ki)(a_k-b_ki)$ where $(a_k+b_ki)$ and $(a_k-b_ki)$ are Gaussian primes (since all primes in naturals which are $1 \bmod{4}$ are products of two Gaussian primes). We clearly have $\gcd(a_k,b_k)=1$. Then:
$$c=\prod_{k=1}^n (a_k+b_ki)^{x_k}(a_k-b_ki)^{x_k}$$
Now, we have $c^2=a^2+b^2=(a+bi)(a-bi)$. We will write $a+bi$ as the product of some of these Gaussian Primes and $a-bi$ as the product of the rest.
For all solutions to $c^2=a^2+b^2$ (including negative), we have to split the Gaussian primes equally, i.e. since we need to maintain the fact that $a+bi$ and $a-bi$ are conjugates, whenever we write $a_k+b_ki$ in the product of $a+bi$, we are to write $a_k-b_ki$ in the product of $a-bi$ and vice versa.
For each $p_k$, we have $2x_k+1$ choices for this process since $p_k^2 \mid (a+bi)(a-bi)$ and we have to divide $(a_k+b_ki)^{2x_k}(a_k-b_ki)^{2x_k}$ (so $a+bi$ can have $a_k+b_ki$ for $t$ number of times for $0 \leqslant t \leqslant 2x_k$). Finally, we can multiply by units $i,-i,1,-1$ which is $4$ choices. Thus, the number of solutions is:
$$4\prod_{k=1}^n (2x_k+1) \geqslant 4 \cdot 3^n$$
Since we need to remove $(c,0),(0,c),(-c,0),(0,-c)$, we subtract $4$. Furthermore, we divide by $4$ since both $a$ and $b$ must be positive, and then by $2$ since $(a,b)$ is the same as $(b,a)$, giving:
$$T_{\text{all}} \geqslant \frac{4\cdot3^n-4}{8} = \frac{3^n-1}{2}$$
For primitive solutions alone, we need to segregate either all of the $a_k+b_ki$ or all of the $a_k-b_ki$ to $a+bi$ since $\gcd(a,b)=1$. This only gives $2$ choices per $p_k$. Multiplying by units, total choices are $4 \cdot 2^n$.
Again, we are to do the necessary removal. $(c,0)$ won't work as primitive, so we are to only divide by $8$. Thus:
$$T_{\text{primitive}} = \frac{4 \cdot 2^n}{8}= 2^{n-1}$$
We need to have:
$$2T_{\text{primitive}} > T_{\text{all}}$$
$$2^n>\frac{3^n-1}{2} \implies 2^{n+1}>3^n-1$$
Clearly, we have $n=1$. Thus:
$$T_{\text{all}}=\frac{(2x+1)-1}{2}=x$$
$$T_{\text{primitive}}=2^{n-1}=1$$
Since we have $2T_{\text{primitive}} > T_{\text{all}}$, we have $x=1$ showing $c=p$ is prime.
Thus, only all odd prime hypotenuse of the form $4k+1$ are exceptions.
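A brute-force count for small hypotenuses (the helper below is my own sketch) matches the conclusion: a prime $c \equiv 1 \pmod 4$ such as $c=5$ has a single triple and it is primitive, $c=25$ has one primitive and one non-primitive, and $c=65=5\cdot13$ has $T_{\text{primitive}}=2$ against $T_{\text{all}}=4$:

```python
from math import gcd, isqrt

def counts(c):
    # (primitive, total) counts of positive solutions a <= b with a^2 + b^2 = c^2
    total = prim = 0
    for a in range(1, c):
        b2 = c * c - a * a
        b = isqrt(b2)
        if b > 0 and b * b == b2 and a <= b:
            total += 1
            if gcd(a, b) == 1:
                prim += 1
    return prim, total
```

So $2T_{\text{primitive}} > T_{\text{all}}$ indeed holds only in the prime case among these.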
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Upper Bound on a en exponential function I am trying to upper bound the following function and find its growth rate:
\begin{equation}
\psi(y) \stackrel{\triangle}{=} \int_{0}^{\infty}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)f(x)\,dx,~y>0,
\end{equation}
where $f(x)>0$ satisfies $\int_{0}^{+\infty}f(x)\,dx = 1$ (it is a pdf of the random variable $X$). I would like to find the growth rate of $\psi(y)$ for large values of $y$, i.e., would like to know if $\psi(y)$ grows like $O(y^{-k})$, for some $k>1$.
Here are my attempts so far:
It is straightforward to show that the function $\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)\in[0,1]$ is increasing in $x$ for $y>x>0$ and is decreasing in $x$ for $0<y<x$.
Then one can break the bounds of the integral to $[0,y/2], \, [y/2,3y/2], \, [3y/2, y^{3/2}],\,[y^{3/2},\infty)$ and then evaluate each integral. Doing so will give the following:
\begin{align}
\int_{0}^{y/2}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)f(x)\,dx &= O(y^{-k}), \,k>1\\
\int_{y/2}^{3y/2}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)f(x)\,dx &\stackrel{?}{=}O(y^{-k}),\, k>1\\
\int_{3y/2}^{y^{3/2}}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)f(x)\,dx &= O(y^{-k}), \,k>1\\
\int_{y^{3/2}}^{\infty}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)f(x)\,dx &= O(y^{-k}), \,k>1
\end{align}
Therefore, if we can prove that the second integral also grows like $O(y^{-k}),\,k>1$, then we are done. However, I have not been able to show this and am stuck. Any help would be highly appreciated!
| First, let us see an example in which $\lim_{y\to \infty} y\psi(y) = \infty$.
Let (log-Cauchy distribution)
$$f(x) = \frac{1}{\pi x (1 + (\ln x)^2)}, \ x > 0.$$
We have, for sufficiently large $y$,
\begin{align}
\psi(y) &= \int_{0}^{\infty}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)
\frac{1}{\pi x (1 + (\ln x)^2)}\,\mathrm{d} x\\
&\ge \int_{y - \sqrt[4]{y}}^{y + \sqrt[4]{y}}\exp\left(-\frac{(y-x)^2}{2(\sigma_0^2 + \sigma_1^2 x)}\right)
\frac{1}{\pi x (1 + (\ln x)^2)}\,\mathrm{d} x \\
&\ge \int_{y - \sqrt[4]{y}}^{y + \sqrt[4]{y}}\exp\left(-\frac{\sqrt{y}}{2(\sigma_0^2 + \sigma_1^2 y/2)}\right)
\frac{1}{\pi x (1 + (\ln x)^2)}\,\mathrm{d} x \\
&\ge \int_{y - \sqrt[4]{y}}^{y + \sqrt[4]{y}} \frac{1}{2}\cdot
\frac{1}{\pi x (1 + (\ln x)^2)}\,\mathrm{d} x \\
&= \frac{\arctan(\ln(y + \sqrt[4]{y})) - \arctan(\ln(y - \sqrt[4]{y}))}{2\pi}\\
&= \frac{1}{2\pi}
\arctan \frac{\ln(y + \sqrt[4]{y}) - \ln(y - \sqrt[4]{y})}{1 + \ln(y + \sqrt[4]{y})\ln(y - \sqrt[4]{y})}
\end{align}
where we have used $\tan (x-y) = \frac{\tan x - \tan y}{1 + \tan x \tan y}$.
Thus, we have
\begin{align}
\lim_{y\to \infty} y\psi(y) &= \lim_{y\to \infty} \frac{1}{2\pi} y
\arctan \frac{\ln(y + \sqrt[4]{y}) - \ln(y - \sqrt[4]{y})}{1 + \ln(y + \sqrt[4]{y})\ln(y - \sqrt[4]{y})}\\
&= \lim_{y\to \infty} \frac{1}{2\pi} y \frac{\ln(y + \sqrt[4]{y}) - \ln(y - \sqrt[4]{y})}{1 + \ln(y + \sqrt[4]{y})\ln(y - \sqrt[4]{y})}\\
&= \infty
\end{align}
where we have used $\lim_{z \to 0}\frac{\arctan z}{z} = 1$.
Second, if $\mathbb{E}[X]$ and $\mathbb{E}[X^2]$ are both finite, one can prove that $\psi(y) = O(y^{-2})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many times must I apply L’Hopital? I have this limit:
$$\lim _{x\to 0}\left(\frac{e^{x^2}+2\cos \left(x\right)-3}{x\sin \left(x^3\right)}\right)=\left(\frac 00\right)=\lim _{x\to 0}\frac{\frac{d}{dx}\left(e^{x^2}+2\cos \left(x\right)-3\right)}{\frac{d}{dx}\left(x\sin \left(x^3\right)\right)}$$
$$\lim_{x\to0}\frac{2e^{x^2}x-2\sin \left(x\right)}{\frac{d}{dx}\left(x\right)\sin \left(x^3\right)+\frac{d}{dx}\left(\sin \left(x^3\right)\right)x}=\lim_{x\to0}\frac{2e^{x^2}x-2\sin \left(x\right)}{\sin \left(x^3\right)+3x^3\cos \left(x^3\right)}$$
but yet we have $(0/0)$. If I apply L’Hopital again, I obtain
$$=\lim_{x\to0}\frac{2\left(2e^{x^2}x^2+e^{x^2}\right)-2\cos \left(x\right)}{12x^2\cos \left(x^3\right)-9x^5\sin \left(x^3\right)}$$
again giving $(0/0)$. But if I apply L’Hopital a thousand times I'll go on tilt. What is the best solution in these cases? Standard limits, or applying upper bounds?
| Let's first attack the numerator alone, repetitively differentiating until we no longer get zero. Let $N = \mathrm{e}^{x^2} + 2 \cos x - 3$.
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} N &= 2 x \mathrm{e}^{x^2} - 2 \sin x \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^2}{\mathrm{d}x^2} N &= (4 x^2 +2)\mathrm{e}^{x^2} - 2 \cos x \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^3}{\mathrm{d}x^3} N &= (8 x^3 +12 x)\mathrm{e}^{x^2} + 2 \sin x \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^4}{\mathrm{d}x^4} N &= (16 x^4 +48x^2+12)\mathrm{e}^{x^2} + 2 \cos x \xrightarrow{x \rightarrow 0} 14 \text{.}
\end{align*}
Now let $D = x \sin x^3$. If any of its first three derivatives are nonzero, our limit is zero. Otherwise, the fourth derivative will resolve the value of the limit.
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} D &= 3 x^3 \cos x^3 + \sin x^3 \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^2}{\mathrm{d}x^2} D &= 12 x^2 \cos x^3 - 9x^5 \sin x^3 \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^3}{\mathrm{d}x^3} D &= (24x - 27 x^7) \cos x^3 - 81 x^4 \sin x^3 \xrightarrow{x \rightarrow 0} 0 \text{,} \\
\frac{\mathrm{d}^4}{\mathrm{d}x^4} D &= (24 - 432 x^6) \cos x^3 + (-396 x^3 + 81 x^9) \sin x^3 \xrightarrow{x \rightarrow 0} 24 \text{.}
\end{align*}
So the limit is $\frac{14}{24} = \frac{7}{12}$.
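A direct numerical check (no symbolic machinery needed) confirms $\frac{7}{12} \approx 0.5833$:

```python
import math

def f(x):
    return (math.exp(x * x) + 2 * math.cos(x) - 3) / (x * math.sin(x**3))

vals = [f(0.1), f(0.01)]  # should approach 7/12 = 0.58333...
```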
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Exercise and proof given in a Number Theory textbook
Prove that $(\forall m\in\mathbb N)(\exists$ $x,y \in \mathbb N)$, s.t. $x-y \geq m$ and $\sigma(x^2)=\sigma(y^2)$
$\sigma(x):=\displaystyle\sum_{d\mid x}d$, the sum of all positive divisors of $x$
Proof (in text book):
Let $n \in \mathbb{N}$ with $n > m$ and $(n,10)=1$ . For $x=5n$, $y=4n$, we have $x-y=n>m$ and $\sigma(x^2)=\sigma(y^2)=31 \sigma(n^2)$
I don't think this is true. For example, let $n=17$, $m=12$
| General framework. It's interesting to study natural numbers $\ s<t\ $ such that $\ \sigma(s)=\sigma(t).\ $ In particular, it's a tough challenge to find natural numbers $\ a<b\ $ such that $\ \sigma(a^2)=\sigma(b^2).\ $ Indeed, this last equation is the difficult part of the given exercise.
Once you have such $\ a<b\ $ then routine considerations solve the problem. Namely, you can always find a prime number $\ p\ $ such that it divides neither $\ a\ $ nor $\ b. $ Every prime $\ p>b\ $ would do. Then we would have:
$$ \sigma(a^2\cdot p^2)\ =\ \sigma(b^2\cdot p^2) $$
In particular, $\ b^2\cdot p^2 - a^2\cdot p^2\ $ can be arbitrarily large -- just take an appropriately huge prime $p$.
Thus, the main challenge is finding solutions for
$\ \sigma(a^2)=\sigma(b^2)\ $ (where $a\ne b$). The textbook mentioned
in OP's question provided
$$ \sigma(4^2)\ =\ \sigma(5^2)\ =\ 31 $$
When you play with the perfect and multi-perfect numbers (I call them
baroque numbers) then you run into more examples.
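The key identity $\sigma(4^2)=\sigma(5^2)=31$, and its extension to $x=5n$, $y=4n$ with $\gcd(n,10)=1$, is easy to confirm by machine (a naive divisor sum; helper name is mine):

```python
def sigma(n):
    # naive divisor-sum; fine for small n
    return sum(d for d in range(1, n + 1) if n % d == 0)

base = (sigma(16), sigma(25))  # both equal 31
# check sigma((4n)^2) == sigma((5n)^2) for a few n coprime to 10
checks = [sigma((4 * n)**2) == sigma((5 * n)**2) for n in (1, 3, 7, 9, 17)]
```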
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $\lim_{x\to\infty}\left(\frac{4^x+5^x}{4}\right)^{1/x}$ $$\lim_{x\to\infty}\left(\frac{4^x+5^x}{4}\right)^{\frac{1}{x}}=?$$
I have tried a lot but I am stuck when I try to solve this using these hints: $a^x=\exp(\ln(a^x))=\exp(x\ln a)$, with $a=\frac{4^x+5^x}{4}$. The above expression becomes
$$\lim_{x\to\infty}\left(\frac{4^x+5^x}{4}\right)^{\frac{1}{x}}=\lim_{x\to\infty}\exp\left(\ln\left(\left(\frac{4^x+5^x}{4}\right)^{\frac{1}{x}}\right)\right)=\exp\left(\lim_{x\to\infty}\frac{1}{x}\ln\left(\frac{4^x+5^x}{4}\right)\right).$$
Now I am stuck: how can I deal with this indeterminate expression?
| Let's assume that the required limit to be calculated is $L$
And by taking the natural logarithm both sides, we would have:
$$\ln L =\lim_{ x\to \infty }\frac{\ln(\frac{4^x+5^x}{4})}{x}$$
This is an indeterminate form of $\frac{\infty}{\infty}$ and can be resolved by applying L'Hopital's Rule.
Applying the rule we get:
$$\ln L = \lim_{x\to\infty}\frac{\ln 5\cdot 5^x+\ln 4\cdot 4^x }{5^x+4^x}$$
Dividing the numerator and the denominator by $5^x$ we have:
$$\ln L = \lim_{x\to \infty} \frac{\ln 5+ \ln 4\cdot (\frac45)^x}{1+(\frac{4}{5})^x}$$
Since a number with absolute value less than one, raised to a power tending to infinity, tends to $0$, we have :
$$\ln L = \ln 5$$
Which means :
$$L =5$$
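Evaluating the expression for increasingly large $x$ (in log-space to avoid overflow; the helper is my own) shows the slow approach to $5$ from below:

```python
import math

def h(x):
    # evaluate ((4^x + 5^x)/4)^(1/x) in log-space to avoid overflow for large x
    m = max(x * math.log(4), x * math.log(5))
    s = math.exp(x * math.log(4) - m) + math.exp(x * math.log(5) - m)
    return math.exp((m + math.log(s) - math.log(4)) / x)

vals = [h(x) for x in (10, 100, 10_000)]  # slowly increasing toward 5
```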
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find $a, b, c$ such that element $x=a\alpha^2+b\alpha+c \in \mathbb{Q}[x]/\langle x^3+x+11 \rangle$ I'm trying to generate a fundamental unit of the number field $K=\mathbb{Q}(\alpha)$, where $\alpha^3+\alpha+11$.
I found a non-trivial unit and I need to find $a,b,c\in\Bbb{Q}$ such that
$$\frac{(-5\alpha^2-4\alpha+8)(6\alpha^2+6)(10\alpha^2+10\alpha)}{(8\alpha^2+8\alpha)^6(9\alpha^2+9\alpha)^{11}(10\alpha^2)}= a\alpha^2+b\alpha+c.$$
Does anyone know how to compute $a, b, c$ on Wolfram alpha or Sagemath?
| A quick google search leads to the "Number field element" page of the Sage documentation, which shows that the following code does the trick in SageMath:
K.<a> = NumberField(x^3 + x + 11)
f = a.coordinates_in_terms_of_powers()
f((-5*a^2-4*a+8)*(6*a^2+6)*(8*a^2+8*a)^(-6)*(9*a^2+9*a)^(-11)*(10*a^2)^(-1)*(10*a^2+10*a))
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\sin(x) - \sqrt3 \sin(3x) + \sin(5x) < 0$ for $0<x<\pi$ My attempt at solving this:
$\sin(x) - \sqrt3\sin(3x) + \sin(5x) < 0$
$2\sin \left(\frac{5x+x}2\right) + \cos\left(\frac{5x-x}2\right) - \sqrt 3\sin(3x) < 0$
I divide everything with 2:
$\sin(3x) + \frac12\cos(2x) - \frac {\sqrt 3}2\sin(3x) < 0$
I think I have gone the wrong way at solving this problem. Please advise.
| $$\sin (x)+\sin (5x) - \sqrt3\sin(3x) <0\Rightarrow 2\sin(3x)\cos (2x) - \sqrt{3}\sin(3x) < 0 \\\Rightarrow \sin(3x)\left(\cos(2x) -\frac{\sqrt{3}}{2}\right)<0 $$
Case $1$ (on $0<x<\frac\pi3$, where $\sin(3x)>0$):
$$ \sin(3x) > 0 \text{ and } \cos (2x) < \frac{\sqrt3}2 \implies \frac{\pi}{12}<x<\frac\pi3$$
Case $2$ (on $\frac{2\pi}3<x<\pi$, where again $\sin(3x)>0$):
$$ \sin (3x) > 0 \text{ and } \cos (2x) < \frac{\sqrt3}2 \implies \frac{2\pi}{3} <x< \frac{11\pi}{12}$$
(The opposite sign pattern, $\sin(3x)<0$ with $\cos(2x)>\frac{\sqrt3}2$, gives no solutions in $(0,\pi)$.)
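A numerical spot-check of the two solution intervals, sampling the original expression at interior points (helper names are mine):

```python
import math

def g(x):
    return math.sin(x) - math.sqrt(3) * math.sin(3 * x) + math.sin(5 * x)

def sample(lo, hi, k=200):
    # midpoints of k subintervals of (lo, hi)
    return [g(lo + (hi - lo) * (i + 0.5) / k) for i in range(k)]

neg1 = all(v < 0 for v in sample(math.pi / 12, math.pi / 3))
neg2 = all(v < 0 for v in sample(2 * math.pi / 3, 11 * math.pi / 12))
```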
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3587885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Let $X$ and $Y$ be topological spaces with indiscrete topologies; prove that the product space $X\times Y$ is indiscrete. Now $\tau_X=\{X,\emptyset\}$ and $\tau_Y=\{Y,\emptyset\}$, so their product topology will look like $\tau_{X\times Y}=\{X \times Y , \emptyset \times Y , X \times \emptyset , \emptyset\}$, which is clearly not indiscrete. Please help me see where I am going wrong. Thanks in advance.
| It turns out that $X\times\emptyset=\emptyset\times Y=\emptyset$. So, yes, the product topology on $X\times Y$ is the indiscrete topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3588040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Easy way to find the partial fraction I always have trouble trying to find the partial fraction, especially for complicated ones.
For example, this is what I will do to find the partial fraction of $\displaystyle \frac{8x^3+35x^2+42x+27}{x(2x+3)^3}$
*
*$\displaystyle \frac{A}{x}+\frac{B}{2x+3}+\frac{C}{(2x+3)^2}+\frac{D}{(2x+3)^3}$
*$\displaystyle A(2x+3)^3+Bx(2x+3)^2+Cx(2x+3)+Dx = 8x^3+35x^2+42x+27 $
*When x = $0$ -> $\displaystyle 27A = 27, A=1$
*$\displaystyle (2x+3)^3+Bx(2x+3)^2+Cx(2x+3)+Dx = 8x^3+35x^2+42x+27$
After what I can do is to expand all the elements and group them based on their exponent, and solve the system of equation.
However, I remember seeing there exists an easier solution. Also given that there will be no calculator available on the exam, doing this way will take a long time and results in possible errors.
Does anyone have an easier way to solve this question or similar ones?
Thanks!
| A suggestion: for $$\displaystyle \frac{8x^3+35x^2+42x+27}{x(2x+3)^3}$$
note that $(2x+3)^3=8x^3+36x^2+54x+27$, so we can rewrite $$\displaystyle \frac{8x^3+35x^2+42x+27}{x(2x+3)^3}=\\\frac{(2x+3)^3-x^2-12x}{x(2x+3)^3}=\\
\frac{(2x+3)^3}{x(2x+3)^3}-\frac{x(x+12)}{x(2x+3)^3}=\\
\frac{1}{x}-\frac{x+12}{(2x+3)^3}$$ Can you take over?
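Numerically comparing the original fraction with the rewritten form at a few sample points is a quick consistency check (not a derivation; the points are arbitrary, avoiding the poles at $x=0$ and $x=-\frac32$):

```python
def orig(x):
    return (8 * x**3 + 35 * x**2 + 42 * x + 27) / (x * (2 * x + 3)**3)

def rewritten(x):
    return 1 / x - (x + 12) / (2 * x + 3)**3

errs = [abs(orig(x) - rewritten(x)) for x in (0.5, 1.0, 2.0, -0.5)]
```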
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3588481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Impulse/Delta Function--homework help I need to solve the initial value problem:
I took the Laplace transform of both sides and this is what I have thus far:
I now need to take the inverse Laplace transform to find x(t). I can't simplify the denominator by completing the square, so I am stuck here. Is this an example of Duhamel's principle? If so, how do I solve that? Any help would be greatly appreciated!
| What you have looks correct to me. Now decompose the fraction this way:
$$G(s)=\frac 1 {s^2+s-2}=\frac 1 {(s+2)(s-1)}$$
$$G(s)=\frac 1 3 \left (\frac 1 {(s-1)}-\frac 1 {(s+2)} \right )$$
Then take Inverse Laplace Transform.
$$g(t)=\frac 13 (e^{t}-e^{-2t})$$
Do the same for the other fraction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3588646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If A and B are $n \times n$ matrices where each column sums to p. Then for what values of p will the matrix AB also have all columns that sum to p? I have no idea how to approach this question. I've tried working through it with sum notation but it became jumbled. I assume there's another property of matrices that I can use to make this simpler? Since using the basic properties of matrix multiplication seems convoluted. Using generic 2x2 matrices, I was able to find that for p=0 and p=1 AB has columns that add to p. But I'm unsure on how to do this working for a generic nxn matrix.
| Nice question.
You can go for the following approach : note that if $A,B$ are $n \times n$ matrices, each having columns summing to $p$, then the sum of all entries of $A$ and $B$ are both $np$ (number of columns times sum of each column).
Now, we calculate the sum of all entries of $AB$.
$$
\sum_{i,j=1}^n (AB)_{ij} = \sum_{i,j,k=1}^n A_{ik}B_{kj} = \sum_{j,k=1}^n B_{kj} \sum_{i=1}^n A_{ik} \\ = p \sum_{j,k = 1}^n B_{kj} = np^2
$$
where we note that $\sum_{i=1}^n A_{ik}$ is the sum of the $k$th column of $A$ which is $p$, and that $\sum_{j,k=1}^n B_{jk}$ is the sum of every entry of $B$, which is $np$.
Finally, suppose every column of $AB$ summed to $q$. Note that the sum of all entries of $AB$ is then $nq$. But we've seen it is $np^2$ above.
Therefore, $q = p^2$. In particular, if all columns of $AB$ summed to $p$, then $p = p^2$.
Which forces $p=0$ or $p=1$. I leave you to find matrices $A,B$ such that
*
*$A,B,AB$ have every column summing to $0$.
*$A,B,AB$ have every column summing to $1$.
Think simple, the examples are easy!
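For instance (simple $2\times 2$ choices of mine, which slightly spoil the exercise), plain-Python checks for $p=0$ and $p=1$, plus a $p=2$ contrast showing that the product's columns sum to $p^2$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def colsums(M):
    return [sum(row[j] for row in M) for j in range(len(M))]

# p = 1: identity times a matrix whose columns sum to 1
AB = colsums(matmul([[1, 0], [0, 1]], [[0.5, 0.25], [0.5, 0.75]]))
# p = 0: columns of the product also sum to 0
E0 = colsums(matmul([[1, -1], [-1, 1]], [[1, -1], [-1, 1]]))
# p = 2 for contrast: columns of the product sum to p^2 = 4
CD = colsums(matmul([[1, 1], [1, 1]], [[1, 1], [1, 1]]))
```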
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3588791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
If the system of inequalities $3x^2+2x-1<0$ and $(3a-2)x-a^2x+2<0$ possesses a solution, find the least natural number $a$ If the system of inequalities $3x^2+2x-1<0$ and $(3a-2)x-a^2x+2<0$ possesses a solution, find the least natural number $a$
My attempt is as follows:-
$$3x^2+3x-x-1<0$$
$$3x(x+1)-1(x+1)<0$$
$$x\in\left(-1,\dfrac{1}{3}\right)$$
$$(-a^2+3a-2)x<-2$$
$$(a^2-3a+2)x>2$$
Case $1$: $a^2-3a+2<0$
$$a\in(1,2)$$
In this interval there is no natural number, hence no need to proceed further.
Case $2$: $a^2-3a+2\ge0$
$$a\in(-\infty,1]\cup[2,\infty)$$
Checking for $a=1$:
$0>2$, which is not possible
Checking for $a=2$
$(4-6+2)x>2$
$0>2$, which is not possible
$$x>\dfrac{2}{a^2-3a+2}$$
$$x\in\left(\dfrac{2}{a^2-3a+2},\infty\right)$$
If the system of inequalities possesses a solution, then $\dfrac{2}{a^2-3a+2}<\dfrac{1}{3}$
$$6<a^2-3a+2$$
$$a^2-3a-4>0$$
$$a^2-4a+a-4>0$$
$$a(a-4)+(a-4)>0$$
$$(a+1)(a-4)>0$$
$$a\in(-\infty,-1)\cup(4,\infty)$$
So $a=5$ should be the answer, but actual answer is $2$
| The inequalities can be summarized in this system:
$-1<x<\frac{1}{3}$
$a\le\frac{3\sqrt{x}-\sqrt{x+8}}{2\sqrt{x}}$ or
$a\ge\frac{3\sqrt{x}+\sqrt{x+8}}{2\sqrt{x}}$ (for $x>0$)
The least natural number is $a=4$ for $x=\frac{1}{3}$.
Other values are:
$a=5$ for $x=\frac{1}{6}$;
$a=6$ for $x=\frac{1}{10}$;
$a=7$ for $x=\frac{1}{15}$;
$a=8$ for $x=\frac{1}{21}$;
$a=9$ for $x=\frac{1}{28}$;
$a=10$ for $x=\frac{1}{36}$,
etc…
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3588930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rayleigh as exponential family - compute $\mathbb{E}(Y)$ and Var$(Y)$ where Y is a sum of independent squared Rayleigh's distribution Prove that X of Rayleigh distribution with pdf $f(x, \sigma) = \frac{x}{\sigma^2}e^{-\frac{x^2}{2\sigma^2}}\mathbb{1}_{(0, \infty)}(x)$ comes from the exponential family and then compute $\mathbb{E}(Y)$ and Var$(Y)$ where $Y = \sum_{k=1}^nX^2_k$ (all $X_k$ are independent).
I did the first part:
$$
f(x, \sigma) = \frac{x}{\sigma^2}e^{-\frac{x^2}{2\sigma^2}}\mathbb{1}_{(0, \infty)}(x) =
xe^{-\frac{1}{2\sigma^2}x^2 - 2\ln(\sigma)}\mathbb{1}_{(0, \infty)}(x)
$$
so
$$
h(x) = x; \quad \eta_1(\theta) = -\frac{1}{2\sigma^2}; \quad T_1(x)=x^2; \quad B(\theta)=2\ln(\sigma)
$$
Why is the fact that Rayleigh's comes from an exponential family useful? Are there any equations to quickly solve the second part of the task? I appreciate any help.
| The Rayleigh distribution is a single parameter exponential family if we can write it in the form
$$ f(x: \sigma) = h(x) \exp\left( \eta(\sigma) T(x) - A(\sigma) \right)$$
Here we have $$f(x: \sigma) = x \mathbb{1}_{[0,\infty)} (x) \exp \left( \frac{-1}{2\sigma^2} x^2 - 2 \log \sigma \right)$$
so it is indeed an exponential family, with $h(x) = x \mathbb{1}_{[0,\infty)} (x), \ T(x) = x^2, \ \eta(\sigma) = \frac{-1}{2\sigma^2}$ and $A(\sigma) = 2 \log \sigma.$
The moment generating function of a member of an exponential family has a particularly nice form:
$$ M_T(t) = \mathbb{E}[\exp(t T(x))] = \exp\left( A(\eta + t) - A(\eta) \right) $$
Here, since $\eta = \frac{-1}{2\sigma^2}$ and $A(\sigma) = 2 \log \sigma,$ we can write $A(\eta) = - \log( -2 \eta).$
This gives $M_{T}(t) = \frac{\eta}{\eta + t},$ and differentiating this gives
$$ \frac{d}{dt} M_{T}(t) = \frac{-\eta}{(\eta + t)^2} \ , \ \frac{d^2}{dt^2} M_{T}(t) = \frac{2\eta}{(\eta + t)^3}.$$
Plugging in $t=0$ into these (and remembering $T = X^2$) we have
$$ \mathbb{E}[X^2] = \frac{-1}{\eta} = 2\sigma^2 \ , \ \mathbb{E}[X^4] = \frac{2}{\eta^2} = 8 \sigma^4$$
From these we deduce $\text{Var}(X^2) = \mathbb{E}[X^4] - \mathbb{E}[X^2]^2 = 4 \sigma^4$ and therefore
$$ \mathbb{E}[Y] = 2n \sigma^2 \ , \ \text{var}(Y) = 4n \sigma^4. $$
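A quick numerical cross-check of these moments by direct integration against the Rayleigh density (midpoint rule; $\sigma=1.5$ is an arbitrary test value and the helper is my own):

```python
import math

def moment(k, sigma, upper=20.0, n=100_000):
    # midpoint rule for E[X^k] under the Rayleigh(sigma) density
    h = upper / n
    s = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        s += x**k * (x / sigma**2) * math.exp(-x * x / (2 * sigma**2))
    return s * h

sigma = 1.5
m2 = moment(2, sigma)   # predicted 2*sigma^2 = 4.5
m4 = moment(4, sigma)   # predicted 8*sigma^4 = 40.5
var = m4 - m2**2        # predicted 4*sigma^4 = 20.25
```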
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3589074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Unable to prove an assertion with induction I need to prove:
$ \displaystyle \sum_{k=1}^n\frac{1}{(5k + 1) (5k + 6)} = \frac{1}{30} - \frac{1}{5(5n + 6)} $ with mathematical induction for all $n \in \mathbb{N}$.
After successfully proving it for n = 1, I try to prove it in the Induction-Step for n + 1:
$ \displaystyle \sum_{k=1}^{n+1}\frac{1}{(5k + 1) (5k + 6)} = \frac{1}{30} - \frac{1}{5(5(n+1) + 6)} $
which can be summarised to:
$ \displaystyle \sum_{k=1}^{n}\frac{1}{(5k + 1) (5k + 6)} + \frac{1}{(5(n+1) + 1) (5(n+1) + 6)} = \frac{1}{30} - \frac{1}{5(5(n+1) + 6)} $
Using the Induction Hypothesis this turns to:
$ ( \frac{1}{30} - \frac{1}{5(5n + 6)} ) + \frac{1}{(5n + 6 ) (5n+6+5)} $
when we let $s = 5n + 6$ we get:
$ \frac{1}{30} - \frac{1}{5s} + \frac{1}{(s) (s+5)} = \frac{1}{30} - \frac{1}{5(s + 5)} $
substracting both sides by $\frac{1}{30}$ yields:
$- \frac{1}{5s} + \frac{1}{(s) (s+5)} = - \frac{1}{5(s + 5)}$ and this is where I'm stuck. I'm working on this for more than 6h but my mathematical foundation is too weak to get a viable solution. Please help me.
| Let $ n $ be a positive integer.
Observe that : $ \left(\forall k\in\mathbb{N}\right),\ \frac{1}{\left(5k+1\right)\left(5k+6\right)}=\frac{1}{5}\left(\frac{1}{5k+1}-\frac{1}{5k+6}\right) \cdot $
Thus, \begin{aligned} \sum\limits_{k=1}^{n}{\frac{1}{\left(5k+1\right)\left(5k+6\right)}}&=\frac{1}{5}\left(\sum\limits_{k=1}^{n}{\frac{1}{5k+1}}-\sum\limits_{k=1}^{n}{\frac{1}{5k+6}}\right)\\ &=\frac{1}{5}\left(\sum\limits_{k=0}^{n-1}{\frac{1}{5k+6}}-\sum\limits_{k=1}^{n}{\frac{1}{5k+6}}\right)\\ &=\frac{1}{5}\left(\frac{1}{6}-\frac{1}{5n+6}\right)\\ \sum\limits_{k=1}^{n}{\frac{1}{\left(5k+1\right)\left(5k+6\right)}}&=\frac{1}{30}-\frac{1}{5\left(5n+6\right)} \end{aligned}
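The closed form is also easy to confirm exactly with rational arithmetic (an independent check, not part of either proof):

```python
from fractions import Fraction

def lhs(n):
    return sum(Fraction(1, (5 * k + 1) * (5 * k + 6)) for k in range(1, n + 1))

def rhs(n):
    return Fraction(1, 30) - Fraction(1, 5 * (5 * n + 6))

checks = [lhs(n) == rhs(n) for n in range(1, 30)]
```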
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3589245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Trying to visualize a polygon in a space $X$ From Rotman's Algebraic Topology:
A polygon in a space $X$ is a $1$-chain $\pi = \sum\limits_{i=0}^k \sigma_i$ where $\sigma_i(e_1) = \sigma_i(e_0)$ for all $i$.
By a theorem proven in the book, all polygons are $1$-cycles.
From the book: a $1$-cycle is essentially "a sum (union) of oriented $1$-simplices in $X$ that ought to constitute the boundary of some union of $2$-simplices in $X$."
But a polygon must have equal endpoints. So (as a simple example) is three disjoint circles in $\Bbb R^2$ considered a polygon? As in $\gamma = \sigma_1 + \sigma_2 + \sigma_3$ where each $\sigma_i \colon \Delta^1 \rightarrow \Bbb R^2$ is the $i$th circle.
Am I misunderstanding the idea or is there a simpler visual for this?
| You may have made a transcription error when copying the definition of polygon. The definition Rotman gives (at least by the 4th corrected printing, 1998) says
A polygon in a space $X$ is a $1$-chain $\pi = \sum_{i=0}^k \sigma_i$, where $\sigma_i(e_1) = \sigma_{i+1}(e_0)$ for all $i$ (indices are read mod$(k + 1)$).
In other words, the end of $\sigma_i$ is the start of $\sigma_{i+1}$. By "indices are read mod$(k+1)$" Rotman just means that $\sigma_k(e_1) = \sigma_0(e_0)$. Intuitively this should remind you of a cycle in a graph.
Rotman's proof that a polygon is a cycle is terse, and he doesn't mention that the sum is telescoping. Specifically I mean
$$\begin{align}
\partial \pi &= \sum_{i=0}^k \partial\sigma_i\\
&= \sum_{i=0}^k(\sigma_i(e_1) - \sigma_i(e_0))\\
&=\sigma_0(e_1) - \sigma_0(e_0) + \sigma_1(e_1) - \sigma_1(e_0) + \dots + \sigma_k(e_1) - \sigma_k(e_0)\\
&=(\sigma_0(e_1) - \sigma_1(e_0)) + (\sigma_1(e_1) - \sigma_2(e_0)) + \dots + (\sigma_k(e_1) - \sigma_0(e_0))\\
&= 0
\end{align}$$
The definition of polygon implies that $\cup_i im(\sigma_i)$ is connected, and therefore three disjoint circles is not a polygon.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3589612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Would duplicates matter in cartesian product of a set? For example:
\begin{align}
A &= \{1, 1, 2\}
\\
B &= \{3, 3, 3, 2, 2, 4\}
\end{align}
Would $A$ cross $B$ equate to $\{(1,3),(1,2),(1,4),(2,3),(2,2),(2,4)\}$ without the dupes of $(1,3)$, etc.
| No, because the Cartesian product of sets is itself a set. For sets in general, we consider a set, and a set with the same entries but some duplicates, to be precisely the same.
For example, let $A=\{1,2\},B=\{3,4\},A'=\{1,1,2,2\},B'=\{3,3,4,4,4,4,4\}$. Under these conditions, $A=A',B=B',$ and in turn $A \times B = A' \times B'$.
Thus, the inclusion of duplicate elements does not matter here. Why you would want to include them is beyond me, unless you're working with objects like multisets (which do admit duplicate elements). In such an instance, obviously the opposite holds true. But unless stated otherwise I imagine the first instance holds for you, i.e. you should be removing duplicates from sets.
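The point about duplicates collapsing can be illustrated in any language with a set type; a small Python sketch (my own example values):

```python
from itertools import product

A, A_prime = {1, 2}, {1, 1, 2, 2}            # equal as sets
B, B_prime = {3, 4}, {3, 3, 4, 4, 4, 4, 4}   # equal as sets

assert A == A_prime and B == B_prime

# The Cartesian product is itself a set of pairs, so duplicates in any
# written-out description collapse automatically.
prod1 = set(product(A, B))
prod2 = set(product(A_prime, B_prime))
assert prod1 == prod2 == {(1, 3), (1, 4), (2, 3), (2, 4)}
```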
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3589844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Show that if $(b^n-1)/(b-1)$ is a power of a prime number, where $b,n>1$ are positive integers, then $n$ must be a prime number.
Show that if $(b^n-1)/(b-1)$ is the power of a prime number, where $b,n>1$ are positive integers, then $n$ must be a prime number.
My solution:
If $n$ is composite, then let $n=mk$, $m,k>1$,
\begin{align*}
\frac{b^n-1}{b-1} &= 1+b+\cdots+b^{n-1} \\
&=(1+b+\cdots+b^{k-1} )+(b^k+b^{k+1}+\cdots+b^{2k-1}) \\ &\quad\,+\cdots+(b^{(m-1)k}+b^{(m-1)k+1}+\cdots+b^{mk-1}) \\
&=(1+b+\cdots+b^{k-1})(1+b^k+\cdots+b^{(m-1)k})
\end{align*}
which is composite (a product of two distinct factors, each greater than $1$); thus, for $(b^n-1)/(b-1)$ to be a prime power, $n$ must not be composite, i.e., it must be prime.
However, $(1+b+\cdots+b^{k-1})(1+b^k+\cdots+b^{(m-1)k})$ might be equal to $p^x \times p^y$, where $p$ is prime.
Is there any better solution?
| Let $(b^n-1)/(b-1)=p^x$ where $p$ is a prime and $x> 0$. If $n$ is composite, there are two cases.
*
*There exists a prime $q$ such that $n=q^m$ for some $m>1$. Note
$$p^x=\frac{b^n-1}{b-1}=\frac{b^{q^m}-1}{b^{q^{m-1}}-1}\cdot \frac{b^{q^{m-1}}-1}{b-1},$$
we can assume
$$\frac{b^{q^{m-1}}-1}{b-1}=p^y$$
for some $0< y< x$. Then we have
\begin{align}
1+q(b-1)p^y+\sum_{i=2}^q\binom{q}{i}\left((b-1)p^y\right)^i&=\left((b-1)p^y+1\right)^q\\
&=\left(b^{q^{m-1}}\right)^q\\
&=b^{q^m}\\
&=(b-1)p^x+1,
\end{align}
i.e.,
$$q+\sum_{i=2}^q\binom{q}{i}\left((b-1)p^y\right)^{i-1}=p^{x-y}.$$
Hence, $p\mid q$. Recall that $p$ and $q$ are both primes, so $p=q$; we further have
$$1+\binom{p}{2}(b-1)p^{y-1}+\sum_{i=3}^p\binom{p}{i}(b-1)^{i-1}p^{y(i-1)-1}=p^{x-y-1}.$$
Note the left-hand side is no less than $2$, so both sides are divisible by $p$. Every summand with $i\geq 3$ is divisible by $p$, so the term $\binom{p}{2}(b-1)p^{y-1}$ must be congruent to $-1$ modulo $p$, hence not divisible by $p$; since $\binom{p}{2}=\frac{p(p-1)}{2}$ is divisible by $p$ for odd $p$, this forces $p=2$ and $y=1$. We further have $b=p^{x-2}$, i.e., $p\mid b$ (recall that $b>1$). However, note
$$p^x=\frac{b^n-1}{b-1}=1+b+\cdots+b^{n-1},$$
it is impossible that $p\mid b$.
*There exist two co-prime numbers $s,t>1$ such that $n=st$. In this case, we have
$$p^x=\frac{b^n-1}{b-1}=\frac{b^{st}-1}{b^s-1}\cdot\frac{b^s-1}{b-1},$$
which means $(b^s-1)/(b-1)$ is divisible by $p$. Similarly, $(b^t-1)/(b-1)$ is also divisible by $p$. Since $s$ and $t$ are co-prime, there exist integers $w_s,w_t$ such that $w_ss+w_tt=1$. Without loss of generality, we assume $w_s>0$ and $w_t<0$. Then we have
$$\frac{b^{w_ss}-1}{b^s-1}\cdot\frac{b^s-1}{b-1}-b\cdot\frac{b^{-w_tt}-1}{b^t-1}\cdot\frac{b^t-1}{b-1}$$
is also divisible by $p$. Note the expression above is exactly
$$\left(1+b+\cdots+b^{w_ss-1}\right)-b\left(1+b+\cdots+b^{-w_tt-1}\right)=1,$$
which is impossible.
As a conclusion, $n$ must be a prime.
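Before proving the statement, it can be probed numerically; a brute-force Python sketch (the helper functions are my own, not from the answer):

```python
def prime_factors(m):
    # Distinct prime factors of m, by trial division.
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def is_prime(m):
    return m > 1 and prime_factors(m) == {m}

# Whenever (b^n - 1)/(b - 1) is a prime power (a single distinct prime
# factor), n itself turns out to be prime; e.g. b=3, n=5 gives 121 = 11^2.
for b in range(2, 11):
    for n in range(2, 11):
        value = (b**n - 1) // (b - 1)
        if len(prime_factors(value)) == 1:
            assert is_prime(n)
```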
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Calculate $\mathbb{E}(X-Y\mid 2X+Y).$ if $X\sim N(0,a)$ and $Y\sim N(0,b)$
Question: Given that $X$ and $Y$ are two random variables satisfying $X\sim N(0,a)$ and $Y\sim N(0,b)$ for some $a,b>0$. Assume that $X$ and $Y$ have correlation $\rho.$
Calculate
$$\mathbb{E}(X-Y \mid 2X+Y).$$
I tried to use the fact that if $A$ and $B$ are independent, then $\mathbb{E}(A\mid B) = \mathbb{E}(A)$ and uncorrelated implies independence in jointly normal distribution.
So, I attempted to express $X-Y$ as a linear combination of $2X+Y$ and $Z$ where $\operatorname{Cov}(2X+Y,Z) = 0.$
But I am not able to do so.
Any hint is appreciated.
| The joint distribution of $(Z_1,Z_2)\equiv(X-Y,2X+Y)$ is $\mathcal{N}(0,\Sigma)$, where
$$
\Sigma=\begin{bmatrix}
a+b-2\rho\sqrt{ab} & 2a-b-\rho\sqrt{ab} \\
2a-b-\rho\sqrt{ab} & 4a+b+4\rho\sqrt{ab}
\end{bmatrix}.
$$
Then the conditional distribution of $Z_1$ given $Z_2$ is
$$
Z_1\mid Z_2=z\sim \mathcal{N}(\Sigma_{12}\Sigma_{22}^{-1}z,\,\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}).
$$
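The entries of $\Sigma$ (and hence the regression coefficient $\Sigma_{12}\Sigma_{22}^{-1}$) can be sanity-checked by simulation; a rough Monte Carlo sketch in Python, with parameter values of my own choosing:

```python
import math
import random

random.seed(0)
a, b, rho = 2.0, 1.5, 0.4     # arbitrary choice of variances and correlation
N = 200_000

# Correlated normals X ~ N(0, a), Y ~ N(0, b) with corr(X, Y) = rho.
xs, ys = [], []
for _ in range(N):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(math.sqrt(a) * u)
    ys.append(math.sqrt(b) * (rho * u + math.sqrt(1 - rho**2) * v))

def cov(p, q):
    mp, mq = sum(p) / N, sum(q) / N
    return sum((x - mp) * (y - mq) for x, y in zip(p, q)) / N

z1 = [x - y for x, y in zip(xs, ys)]      # Z1 = X - Y
z2 = [2*x + y for x, y in zip(xs, ys)]    # Z2 = 2X + Y

s = math.sqrt(a * b)
assert abs(cov(z1, z1) - (a + b - 2*rho*s)) < 0.1
assert abs(cov(z1, z2) - (2*a - b - rho*s)) < 0.1
assert abs(cov(z2, z2) - (4*a + b + 4*rho*s)) < 0.3
```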
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Stokes Theorem application question Below is an excerpt from the book "Partial Differential Equations" by Evans. The underlined equation confuses me. Clearly it is an application of Stokes theorem, and the implication seems to be that if $f$ is any compactly supported smooth function (for simplicity say on all of $\mathbb{R}^n$) then $$\int_{\mathbb{R}^n-B_\epsilon(0)} f_{x^i} d{x}=\int_{\partial B_\epsilon(0)} f\cdot \frac{(-x^i)}{\epsilon} d{S}$$
(here i am just replacing $u\phi$ with $f$ and $\nu^i$ with $ \frac{(-x^i)}{\epsilon} $). But I can't get this equation to come out of stokes theorem. E.g. assume for simplicity $n=2$ (set $(x^1,x^2)=(x,y)$), and $\epsilon =1$. Let $d\theta$ be the 1-form gotten by pulling back (via $\mathbb{R}^2-0\rightarrow S^1, v\mapsto v/|v|$) the volume form on $S^1$. Then stokes gives
$$\int_{S^1}f\cdot(-x) d\theta=\int_{\mathbb{R}^2-B_1(0)}d(f\cdot(-x) d\theta)=\int_{\mathbb{R}^2-B_1(0)}\frac{\partial f\cdot(-x)}{\partial x}dx\wedge d\theta+\int_{\mathbb{R}^2-B_1(0)}\frac{\partial f\cdot(-x)}{\partial y}dy\wedge d\theta.$$
Now it seems $dx\wedge d\theta = \frac{x}{x^2+y^2}~~ dx\wedge dy$ and $dy\wedge d\theta = \frac{y}{x^2+y^2}~~ dx\wedge dy$ so this gives
$$\int_{\mathbb{R}^2-B_1(0)}\frac{\partial f\cdot(-x)}{\partial x} \cdot \frac{x}{x^2+y^2} ~~dx dy+\int_{\mathbb{R}^2-B_1(0)}\frac{\partial f\cdot(-x)}{\partial y} \cdot \frac{y}{x^2+y^2}~~dx dy=$$ $$\int_{\mathbb{R}^2-B_1(0)}- \frac{x}{x^2+y^2}\cdot f~~dx dy+\int_{\mathbb{R}^2-B_1(0)}- \frac{x^2}{x^2+y^2}\cdot f_x ~~dx dy+\int_{\mathbb{R}^2-B_1(0)}- \frac{xy}{x^2+y^2}\cdot f_y~~dx dy.$$
I see no cancellations here or any way to make this look like $$\int_{\mathbb{R}^2-B_1(0)}f_x~~dx dy.$$
| I think it's easier to go the other direction:
$$
\begin{split}
\int_{\mathbb{R^2}-B_1(0)}f_x\,dx\wedge dy &= \int_{\mathbb{R}^2-B_1(0)}d(f\,dy) = \int_{S^1}f\,dy \\
&= -\int_0^{2\pi} f\,d(\sin\theta) = -\int_0^{2\pi}f\cos\theta\,d\theta = -\int_{\partial B_1(0)} fx\, dS.
\end{split}
$$
The additional minus sign after the third equality is because $S^1$ has the orientation induced by the outward pointing normal, which points towards the origin.
The reason you're not seeing this in your original calculation is that you need to add an additional term that vanishes when pulled back to $S^1$.
In $\mathbb{R}^2-B_1(0)$,
$$
\begin{split}
&f(-x)\,d\theta = f\left(-\frac{x^2\,dy + xy\,dx}{x^2+y^2}\right) = -f\,dy + \frac{1}{2}y\,d(\ln(x^2+y^2))\\
\implies &f\,dy = f x\,d\theta + \frac{1}{2}y\,d(\ln(x^2+y^2)).
\end{split}
$$
The second term vanishes when pulled back to $S^1$.
Edit: to address the more general question: taking $i=1$ for simplicity
$$
\int_{\mathbb{R}^n-B_\epsilon(0)} f_{x^1} dx^1\wedge\ldots\wedge dx^n = \int_{\mathbb{R}^n-B_\epsilon(0)} d(fdx^2\wedge\ldots\wedge dx^n) = -\int_{\partial B_\epsilon(0)} f\,dx^2\wedge\ldots\wedge dx^n.
$$
However $dS = i_{-\nu}dV$ (since $\nu$ is the inward pointing normal), so
$$
\begin{split}
f\nu^1 dS &= -f\frac{x^1}{\epsilon^2}(x^1 dx^2\wedge\ldots\wedge dx^n - x^2 dx^1\wedge dx^3\wedge\ldots\wedge dx^n + x^3 dx^1\wedge dx^2\wedge\ldots\wedge dx^n\ldots)\\
&= -\frac{f}{\epsilon^2}\big\lbrace(x^1)^2 dx^2\wedge\ldots\wedge dx^n \\
&\qquad\quad+ \frac{1}{2}d(x^1)^2\wedge(-x^2\widehat{dx^2}\wedge dx^3\wedge\ldots\wedge dx^n+x^3dx^2\wedge\widehat{dx^3}\wedge\ldots\wedge dx^n-\ldots)\big\rbrace\\
&= -\frac{f}{\epsilon^2}\big\lbrace ((x^1)^2+(x^2)^2+\ldots+(x^n)^2)\,dx^2\wedge\ldots\wedge dx^n\big\rbrace \\
& \qquad+ \frac{1}{2}d((x^1)^2+(x^2)^2+\ldots+(x^n)^2)\wedge(-x^2\widehat{dx^2}\wedge dx^3\wedge\ldots\wedge dx^n+x^3dx^2\wedge\widehat{dx^3}\wedge\ldots\wedge dx^n-\ldots)\big\rbrace
\end{split}
$$
Pulling back to the surface of the sphere, the first term simplifies to $-f\,dx^2\wedge\ldots\wedge dx^n$, while the second term vanishes. So
$$
\int_{\partial B_\epsilon(0)}f\nu^1 dS = - \int_{\partial B_\epsilon(0)} f\,dx^2\wedge\ldots\wedge dx^n = \int_{\mathbb{R}^n-B_\epsilon(0)}f_{x^1}dx^1\wedge\ldots\wedge dx^n
$$
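For $n=2$, $\epsilon=1$ the boundary/exterior identity can be checked numerically; a crude quadrature sketch in Python, using a rapidly decaying (rather than compactly supported) test function of my own choosing, which is enough for the check:

```python
import math

# Test function f(x, y) = exp(-(x^2 + y^2)) * (1 + x) and its x-derivative.
def f(x, y):
    return math.exp(-(x*x + y*y)) * (1 + x)

def fx(x, y):
    return math.exp(-(x*x + y*y)) * (1 - 2*x*(1 + x))

# Exterior side: integral of f_x over the complement of the unit disk,
# in polar coordinates (midpoint rule), truncated at r = 6 (tail ~ e^-36).
Nr, Nt = 1500, 360
dr, dt = 5.0 / Nr, 2 * math.pi / Nt
exterior = 0.0
for i in range(Nr):
    r = 1.0 + (i + 0.5) * dr
    for j in range(Nt):
        t = (j + 0.5) * dt
        exterior += fx(r * math.cos(t), r * math.sin(t)) * r * dr * dt

# Boundary side: integral of f * nu^1 over the unit circle, where
# nu = -(cos t, sin t) is the normal pointing toward the origin.
boundary = 0.0
for j in range(Nt):
    t = (j + 0.5) * dt
    boundary += f(math.cos(t), math.sin(t)) * (-math.cos(t)) * dt

assert abs(exterior - boundary) < 1e-3
```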
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Does the map from a set to the free object on the set have to be injective? I've seen the following definition of a free object in category theory.
Let $\mathcal C$ be a concrete category. Denote by $U\colon\mathcal C\to\mathrm{Set}$ the forgetful functor. Let $X$ be a set. Then an object $F(X)\in\mathcal C$ equipped with an arrow $f_X\colon X\to U(F(X))$ is called the free object of $\mathcal C$ on $X$ if: for all $A\in\mathcal C$ and any function $g\colon X\to U(A)$ in the category of sets, there exists a unique "extension" $g'\colon F(X)\to A$ in the category $\mathcal C$ such that $U(g')\circ f_X=g$.
I looked at some concrete examples of free objects in the category of groups and the category of modules. In each case, the arrow $f_X$ was injective. Does this follow from the definition? Why isn't it included in the definition?
| No, it does not follow from the definition. Indeed, it does not even follow from the definition in the case of modules in general. Suppose $R$ is the zero ring (the ring with one element). Then every module over $R$ has one element, and this single module is free on every possible set via every possible map.
If you assume there exists an object $A$ of $\mathcal{C}$ such that $U(A)$ has more than one element, then it does follow that $f_X$ must be injective for any free object. Indeed, suppose $F$ is a free object on $X$ via a map $f_X:X\to U(F)$ and $f_X(x)=f_X(y)$ for some distinct $x,y\in X$. Since $U(A)$ has more than one element, there is a function $g:X\to U(A)$ such that $g(x)\neq g(y)$. Taking the unique $g':F\to A$ such that $U(g')\circ f_X=g$, we get a contradiction, since $U(g')(f(x))=U(g')(f(y))$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Hilbert Polynomial at a point? One of the review problems in my final review is the following:
Let $X\subset\mathbb P_{\mathbb C}^n$ be a hypersurface, and $P\in X$ a singular point. Let $L$ be a line not contained in $X$ that intersects $X$ at $P$.
Prove that $h_{X\cap L}(P)\geq2$, the intersection multiplicity of $X$ and $L$ at $P$, where $h_{X\cap L}(P)$ refers to the hilbert polynomial of $X\cap L$ at $P$.
From my readings, it doesn't seem like the hilbert polynomial accepts a point as input, but rather a natural number. So what does this even mean?
| The following is a community wiki answer recording the discussion in the comments so that this question might be marked as answered (once this answer is upvoted or accepted).
Hm yeah I am not sure that this makes sense? Not an expert though. Could they mean like intersection number of $X\cap L$ at $P$? – user113102
Oh it even says that in the question lol. Yeah I think the stuff after the semicolon is just a typo or they forgot to delete it or something. – user113102
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\lim_{x\to 0}\frac{x\sin x-2+2\cos x}{x\ln(1+x)-x^2}$ using L'Hôpital Considering this limit, assigned to my high school students,
$$\lim_{x\to 0}\frac{x\sin x-2+2\cos x}{x\ln \left(1+x\right)-x^2}=\left(\frac00\right)=\lim_{x\to 0}\frac{\frac{d}{dx}\left(x\sin \left(x\right)-2+2\cos \left(x\right)\right)}{\frac{d}{dx}\left(x\ln \left(1+x\right)-x^2\right)} \tag 1$$
After some steps, using L'Hôpital, I find:
$$\lim_{x\to 0}\frac{\left(x\cos \left(x\right)-\sin \left(x\right)\right)\left(1+x\right)}{-2x^2-x+x\ln \left(x+1\right)+\ln \left(x+1\right)}=\left(\frac00\right)$$
Should I continue to apply L'Hôpital? :-(
| A quick estimate shows that a Taylor expansion yields only even-degree terms in the numerator (the function is even), with the constant and quadratic ones cancelling each other, so the leading term has order $x^4$. In the denominator, the $x^2$ terms cancel, leaving a leading cubic term.
Hence the limit will be zero, but you will need three successive applications of L'Hospital to establish this.
$$x\sin x-2+2\cos x\to x\cos x-\sin x\to-x\sin x\to-\sin x-x\cos x\to0$$
vs.
$$x\log(1+x)-x^2\to\log(x+1)+\frac x{x+1}-2x\to\frac1{x+1}+\frac1{(x+1)^2}-2
\\\to-\frac1{(x+1)^2}-\frac2{(x+1)^3}\not\to0.$$
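One can confirm numerically that the ratio tends to $0$ (by my own Taylor estimate, not stated in the answer, it behaves like $x/6$ near $0$); a quick Python sketch:

```python
import math

def ratio(x):
    num = x * math.sin(x) - 2 + 2 * math.cos(x)   # vanishes to order x^4
    den = x * math.log(1 + x) - x * x             # vanishes to order x^3
    return num / den

samples = [ratio(10.0**(-k)) for k in (1, 2, 3)]
# The ratio keeps shrinking as x -> 0, so the limit is 0.
assert all(abs(v) < 0.1 for v in samples)
assert abs(samples[0]) > abs(samples[1]) > abs(samples[2])
```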
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3590839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
the equivalent definition of the interval in $\mathbb{R}$ Let $X$ be a subset of the real line $\mathbb{R}$. Then the following statements are equivalent.
(b) Whenever $x,y \in X$ and $x < y$, the interval $[x, y]$ is also contained in $X$.
(c) $X$ is an interval (in the sense of Definition 9.1.1).
Definition 9.1.1. Let $a, b \in \mathbb{R}^*$ be extended real numbers. We define the closed interval $[a, b]$ by $[a, b] : = \{x \in \mathbb{R}^* : a \le x \le b\}$. The half-open intervals and the open intervals are defined in a similar fashion.
(c) $\implies$ (b) is easy.
If $X$ is closed (which I don't know), $\inf X, \sup X \in X$. This implies that $[\inf X, \sup X] = X$. Letting $a = \inf X$ and $b= \sup X$, we have the desired result. But, without knowing that $X$ is closed, how can we show that (b) $\implies$ (c)?
| Hint: To show that $X$ is one of the intervals $[a,b], (a,b),[a,b),(a,b]$ (where $a =\inf X, b=\sup X$) you only have to show that $x \in X$ whenever $a <x<b$. So it makes no difference as to whether $X$ is closed or not.
Use definitions of infimum and supremum to show that $c<x<d$ for some $c,d \in X$. Then use b) to conclude that $x \in X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Simpson's rule — where did the coefficients come from? I am reading how Simpson's Rule works for numerical integration. So I understand that given the two endpoints $x_0$ and $x_2$, and one intermediate point $x_1$, we can connect these points to make a parabolic function as an approximation to the original function which we want to integrate.
The book then proceeds to give the following formula, which I do not understand.
Where did the coefficients of $f(x_0), f(x_1), f(x_2)$ come from?
| The interpolating polynomial can be written as
\begin{align*}
p_2(x)= &\sum_{i=0}^2 L_i(x) f(x_i)=\sum_{i=0}^2 \frac{\prod_{j \ne i}(x-x_j)}{\prod_{j \ne i}(x_i-x_j)} f(x_i)\\
=& \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}f(x_0)+\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}f(x_1)+\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}f(x_2)
\end{align*}
Simpson's formula just uses, as you mention, the approximation
$$
\int_{x_0}^{x_2} f(x)dx \approx \int_{x_0}^{x_2} p_2(x) dx.
$$
Note: this works because $L_i(x_j) = \delta_{ij}$.
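The Simpson coefficients drop out of integrating each $L_i$ exactly; a short Python sketch with exact rationals, using the nodes $x_0=0$, $x_1=1$, $x_2=2$ (my normalization, i.e. $h=1$):

```python
from fractions import Fraction

nodes = [Fraction(0), Fraction(1), Fraction(2)]   # x0, x1, x2 with h = 1

def lagrange_poly(i):
    # Coefficient list of L_i (index = degree), built by multiplying
    # out (x - x_j)/(x_i - x_j) over all j != i.
    poly = [Fraction(1)]
    for j, xj in enumerate(nodes):
        if j == i:
            continue
        poly = [Fraction(0)] + poly            # multiply by x
        for k in range(len(poly) - 1):
            poly[k] -= xj * poly[k + 1]        # ... then subtract xj * poly
        poly = [c / (nodes[i] - xj) for c in poly]
    return poly

def integral_0_to_2(poly):
    # Exact integral of the polynomial over [x0, x2] = [0, 2].
    return sum(c * Fraction(2) ** (d + 1) / (d + 1)
               for d, c in enumerate(poly))

weights = [integral_0_to_2(lagrange_poly(i)) for i in range(3)]
# Simpson's weights h/3, 4h/3, h/3 with h = 1:
assert weights == [Fraction(1, 3), Fraction(4, 3), Fraction(1, 3)]
```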
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Conjecture $\frac{a}{a^r+b^r}+\frac{b}{b^r+c^r}+\frac{c}{c^r+a^r}\geq \frac{a}{a^r+c^r}+\frac{c}{c^r+b^r}+\frac{b}{b^r+a^r}$ following this kind of inequality One of my old inequality (very sharp) I propose this because I don't see it on the forum :
Let $a,b,c>0$ and $a+b+c=1$ with $r\in(\frac{1}{2},1)$ and $a\geq b \geq c$ then we have :
$$\frac{a}{a^r+b^r}+\frac{b}{b^r+c^r}+\frac{c}{c^r+a^r}\geq \frac{a}{a^r+c^r}+\frac{c}{c^r+b^r}+\frac{b}{b^r+a^r}$$
First of all, it's a conjecture for which I don't find counter-examples.
Secondly, when $r\in(0,\frac{1}{2})$ the inequality is reversed; I used Pari/GP to check that. Furthermore (if it's true) I think it's really not new, so I add the reference-request tag. We have an equality case at $r=0.5$ for all $a,b,c>0$.
So if you have idea to prove it or disprove it...
Thanks a lot .
| If $\prod\limits_{cyc}(a-b)=0$, so it's obvious.
Let $a>b>c.$
Thus, we need to prove that:
$$\sum_{cyc}\left(\frac{a}{a^r+b^r}-\frac{a}{a^r+c^r}\right)\geq0$$ or
$$\sum_{cyc}\frac{a(c^r-b^r)}{(a^r+b^r)(a^r+c^r)}\geq0$$ or
$$\sum_{cyc}a(c^r-b^r)(b^r+c^r)\geq0$$ or
$$\sum_{cyc}a(c^{2r}-b^{2r})\geq0$$ or
$$a^{2r}(b-c)+c^{2r}(a-b)-b^{2r}(a-b+b-c)\geq0$$ or
$$\frac{a^{2r}-b^{2r}}{a-b}\geq\frac{b^{2r}-c^{2r}}{b-c}.$$
Now, use Lagrange's mean value theorem for $f(x)=x^{2r}$ together with the fact that $f'$ is increasing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\frac{1}{2\pi}\int_0^{2\pi}\frac{R^2-r^2}{R^2-2Rr\cos\theta+r^2}d\theta=1$ Let $C=\{z:|z|=r|\}$ with $r<R$ oriented in + sense. calcule:
$$\int_{C}\frac{R+z}{z(R-z)}dz$$
and deduce that
$$\frac{1}{2\pi}\int_0^{2\pi}\frac{R^2-r^2}{R^2-2Rr\cos\theta+r^2}d\theta=1$$
My attempt
I proved that $$\int_{C}\frac{R+z}{z(R-z)}dz=2\pi i$$
Using the residue theorem, because the residue of the function at its simple pole $z=0$ is $a_{-1}=1$.
For the other part I'm a little stuck; can someone help me?
| Define : \begin{aligned} f:\mathbb{C}\setminus\left\lbrace\frac{R}{r},\frac{r}{R}\right\rbrace&\rightarrow\mathbb{C}\\ z&\mapsto\frac{R^{2}-r^{2}}{\left(R-rz\right)\left(Rz-r\right)} \end{aligned}
Since $ r<R $, the residue theorem allows us to write : $$ \oint_{\left|z\right|=1}{f\left(z\right)\mathrm{d}z}=2\pi\,\mathrm{i}\,\mathrm{Res}\left(f,\frac{r}{R}\right) $$
Calculating the residue : $ \mathrm{Res}\left(f,\frac{r}{R}\right)=\lim\limits_{z\to \frac{r}{R}}\left(z-\frac{r}{R}\right)f\left(z\right)=\lim\limits_{z\to\frac{r}{R}}{\frac{R^{2}-r^{2}}{R^{2}-rRz}}=1 $, setting $ z=\mathrm{e}^{\mathrm{i}\,\theta} $ gives the following : $$ \frac{1}{2\pi}\int_{0}^{2\pi}{f\left(\mathrm{e}^{\mathrm{i}\,\theta}\right)\mathrm{e}^{\mathrm{i}\,\theta}\,\mathrm{d}\theta}=1 $$
Since $ f\left(\mathrm{e}^{\mathrm{i}\,\theta}\right)\mathrm{e}^{\mathrm{i}\,\theta}=\frac{R^{2}-r^{2}}{\left(R-r\,\mathrm{e}^{\mathrm{i}\,\theta}\right)\left(R-r\,\mathrm{e}^{-\mathrm{i}\,\theta}\right)}=\frac{R^{2}-r^{2}}{R^{2}-2rR\cos{\theta}+r^{2}} $, we get : $$ \frac{1}{2\pi}\int_{0}^{2\pi}{\frac{R^{2}-r^{2}}{R^{2}-2rR\cos{\theta}+r^{2}}\,\mathrm{d}\theta}=1 $$
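The final identity is easy to verify numerically: the integrand is smooth and $2\pi$-periodic, so even a plain equally spaced rule converges extremely fast. A Python sketch with sample values of $r<R$ chosen by me:

```python
import math

def poisson_mean(R, r, N=2000):
    # (1/2pi) * integral over [0, 2pi) of (R^2 - r^2)/(R^2 - 2Rr cos t + r^2),
    # by an equally spaced rule (= trapezoidal rule for a periodic integrand).
    h = 2 * math.pi / N
    total = sum((R*R - r*r) / (R*R - 2*R*r*math.cos(k*h) + r*r)
                for k in range(N))
    return total * h / (2 * math.pi)

for R, r in [(1.0, 0.3), (2.0, 1.9), (5.0, 0.1)]:
    assert abs(poisson_mean(R, r) - 1.0) < 1e-9
```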
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Why is it true that if $H$ is a subgroup of a group $G$, then $1_H=1_G$? I am studying about subgroup. My definition of subgroup is that:
Let a set $G$, with a binary operation$
×:G×G→G,(a,b)↦×(a,b)=:a×b$ be a group. Then $H⊂G$ is a subgroup iff $H$ with a restriction of $×$ to $H×H$, that is,$×|_{H×H}$ is also a group.
And my book states that if $H$ is a subgroup of a group $G$, then $1_H=1_G$. Why is this true?
| Suppose $b\in G$ satisfies $ab=a$ for some $a\in G$. Multiplying by $a^{-1}$ on the left, we get $b=1_G$. Now $1_H$ is an element of $G$ which satisfies $a1_H=a$ for every $a\in H$; taking any such $a$, the observation above gives $1_H=1_G$.
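A concrete toy check (my example, not from the answer): in a finite group one can find the identity of a subgroup by brute force and compare it with the group identity.

```python
G = set(range(12))                      # Z_12 under addition mod 12
op = lambda x, y: (x + y) % 12
H = {0, 3, 6, 9}                        # the subgroup generated by 3

def identity_of(S):
    # The unique e in S with op(e, s) = op(s, e) = s for all s in S.
    return next(e for e in S if all(op(e, s) == s == op(s, e) for s in S))

assert identity_of(H) == identity_of(G) == 0
```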
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Converting a regular expression to its complement via automata I'm supposed to convert a regular expression $r = (\alpha\beta + \beta\alpha)^\ast$ into its complement via automata. I started out by first constructing the individual DFAs that recognize $\alpha\beta$ and $\beta\alpha$:
I then combined and closed these with empty transitions, in order to generate the NFA that would recognize the language $(\alpha\beta + \beta\alpha)^\ast$:
After this, I wrote the state transition table in order to make the $\newcommand{\Pset}[1]{\mathit{2}^{#1}}\Pset Q$-algorithm (a.k.a. the power set algorithm) easier to deal with.
It turned out as follows:
Next I wrote out the $\Pset Q$-algorithm in order to turn the NFA into a DFA:
\begin{align*}\newcommand{\pa}[1]{\left( #1 \right)}\newcommand{\set}[1]{\left\{#1\right\}}
\delta\pa{ \set{t_0} }^\epsilon
&= \set{ t_0, a_0, b_0 }\\
\delta\pa{ \set{ t_0, a_0, b_0 }, \alpha }^\epsilon
&= \set{a_1}^\epsilon
= \set{a_1}\\
\delta\pa{ \set{ t_0, a_0, b_0 }, \beta }^\epsilon
&= \set{b_1}^\epsilon = \set{b_1}\\
\delta\pa{ \set{a_1}, \alpha }^\epsilon
&= \varnothing^\epsilon
= \varnothing \\
\delta\pa{ \set{a_1}, \beta }^\epsilon
&= \set{a_2}^\epsilon
= \set{a_2, t_0} \\
\delta\pa{ \set{b_1}, \alpha }^\epsilon
&= \set{b_2}^\epsilon
= \set{b_2, t_0} \\
\delta\pa{ \set{b_1}, \beta }^\epsilon
&= \varnothing^\epsilon
= \varnothing \\
\delta\pa{ \set{a_2, t_0}, \alpha }^\epsilon
&= \varnothing^\epsilon
= \varnothing \\
\delta\pa{ \set{a_2, t_0}, \beta }^\epsilon
&= \varnothing^\epsilon
= \varnothing
\end{align*}
The resulting DFA would look something like this:
The complement of this DFA would then be the automaton that has its accepting and non-accepting states swapped as follows:
At this stage I realized I'm missing something: the repetition resulting from $(\cdot)^\ast$. This DFA only recognizes the complement of $(\alpha\beta+\beta\alpha)$, not the complement of $(\alpha\beta+\beta\alpha)^\ast$. My first question then is, how do I take that into account. Second, I'm aware of how to transform linear and branching automata into regular expressions by ''eating up'' pairs of states from left to right, and concatenating or taking unions of symbols, as long as the automaton ends up in an accepting state in each of its branches. But how do I transform automata that
*
*do not end up in an accepting state and maybe even begin with an accepting state or
*are the complement of some automaton
into regular expressions? In my head, in case 2 I should be swapping the alphabet in the transitions as well as the states as I move along the directed graph while eliminating states... I guess if I run into an accepting state while reading a graph, I could introduce an empty string there. So for example an initial accepting state plus something else might be expressed with $\epsilon + \cdots$, but I'm not sure.
| Since typing $\alpha$ and $\beta$ is time consuming, let me take the alphabet $A = \{a, b\}$ instead.
Your language $L = (ab + ba)^*$ is the star of the prefix code $P = \{ab, ba\}$ and there is a standard algorithm to compute the minimal automaton of $P^*$ when $P$ is a finite prefix code. Here you get the automaton ${\cal A} = (Q, A, \cdot, 1, F)$ with $Q = \{0, 1, 2, 3\}$, $F = \{1\}$ and the following transition function
\begin{array}{c|c|c|c|c|}
&1&2&3&0\\
\hline
a&2&0&1&0\\
\hline
b&3&1&0&0\\
\hline
\end{array}
The minimal automaton of the complement $L^c$ of $L$ is obtained by changing $F$ to $Q - F$. A possible regular expression for $L^c$ is $(ab + ba)^*(a + b + aaA^* +bbA^*)$.
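The transition table can be cross-checked by brute force over all short words; a Python sketch (the direct membership test for $(ab+ba)^*$ is my own):

```python
from itertools import product

# Transition table from the answer: states 1, 2, 3, 0 with 0 the sink.
delta = {
    (1, 'a'): 2, (2, 'a'): 0, (3, 'a'): 1, (0, 'a'): 0,
    (1, 'b'): 3, (2, 'b'): 1, (3, 'b'): 0, (0, 'b'): 0,
}

def run(word):
    state = 1
    for c in word:
        state = delta[(state, c)]
    return state

def in_L(word):
    # Direct membership in (ab + ba)*: even length, and every
    # two-letter block is 'ab' or 'ba'.
    if len(word) % 2:
        return False
    return all(word[i:i+2] in ('ab', 'ba') for i in range(0, len(word), 2))

# Accepting with F = {1} recognizes L; flipping to Q - F recognizes L^c.
for n in range(8):
    for tup in product('ab', repeat=n):
        w = ''.join(tup)
        assert (run(w) == 1) == in_L(w)          # automaton for L
        assert (run(w) != 1) == (not in_L(w))    # complement automaton
```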
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3591854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Relation between Weierstrass $\wp$-functions
Let $\Lambda=[\lambda_1,\lambda_2]$ be a lattice with associated Weierstrass function $\wp$, and consider the Weierstrass function $\wp_2$ associated to the lattice $\Lambda_2=[\tfrac{1}{2}\lambda_1,\lambda_2]$. Prove the identities $$\wp_2(z)=\wp(z)+\wp(z+\tfrac{1}{2}\lambda_1)-\wp(\tfrac{1}{2}\lambda_1)$$
Recall that $\wp(z)=\dfrac{1}{z^2}+\sum_{0\neq \omega \in \Lambda}\left( \dfrac{1}{(z-\omega)^2}-\dfrac{1}{\omega^2}\right)$
Every $\omega \in \Lambda$ has the form $\omega=n_\omega \lambda_1 + m_\omega \lambda_2$. Let $\omega_2 \in \Lambda_2$ we have $\omega_2=\omega-\tfrac{1}{2}n_\omega\lambda_1$. Therefore
$$\wp_2(z)=\dfrac{1}{z^2}+\sum_{0\neq \omega \in \Lambda}\left( \dfrac{1}{(z-\omega+\tfrac{1}{2}n_\omega\lambda_1)^2}-\dfrac{1}{(\omega-\tfrac{1}{2}n_\omega\lambda_1)^2}\right)$$
$$\wp(\tfrac{1}{2}\lambda_1)=\dfrac{4}{\lambda_1^2}+\sum_{0 \neq \omega \in \Lambda}\left(\dfrac{1}{(\tfrac{1}{2}\lambda_1-\omega)^2}-\dfrac{1}{\omega^2}\right)$$
$$\wp(z+\tfrac{1}{2}\lambda_1)=\dfrac{1}{(z+\tfrac{1}{2}\lambda_1)^2}+\sum_{0 \neq \omega \in \Lambda} \left( \dfrac{1}{(z-\omega+\tfrac{1}{2}\lambda_1)^2}- \dfrac{1}{\omega^2}\right)$$
I'm stuck here because I don't know how to deal with these infinite sum.
| Given a lattice $\,\Lambda,\,$ the Weierstrass $\wp$ function is characterized by being a
meromorphic doubly periodic function with period
lattice $\,\Lambda\,$ whose only poles are at points in $\,\Lambda,\,$
and whose Laurent series at the origin is
$\,\wp (z) = z^{-2} + O(z^2)$. In your first equation, note
that the right side satisfies the poles and Laurent series
properties for the lattice $\,\Lambda_2.$
The Wikipedia article Weierstrass elliptic functions states
Further development of the theory of elliptic functions shows that Weierstrass's function is determined up to addition of a constant and multiplication by a non-zero constant by the position and type of the poles alone, amongst all meromorphic functions with the given period lattice.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to deal with binomial expansion within floor function as in $\lfloor(a+\sqrt{b})^n\rfloor$? In questions involving floor functions containing binomial coefficients, like example 368 in the posted image, where it asks
for $n$, a nonnegative integer, show that the integers $\lfloor(1+\sqrt{2})^n\rfloor$ are alternately even and odd.
The solution starts with "By the binomial theorem, $(1+\sqrt{2})^n + (1-\sqrt{2})^n$..."
I would like some clarifications about the beginning of the solution steps.
My questions are as follows:
1) How does adding the term $(1-\sqrt{2})^n$ follow from the binomial theorem?
2) Is it because the coefficients of the binomial expansions are in the form of $a+\sqrt{b}$?
3) How did the author know to use $(1-\sqrt{2})^n$ to get the fractional part of $(1+\sqrt{2})^n$?
4) If I change the terms to something different from $(a+\sqrt{b})$, say to $(m + n)$, where $m$ and $n$ have some other kind of values, like transcendental functions evaluated at particular values, fractions of different values, $n$-th roots of different values, etc., I don't think I could easily say that $(m-n)^n$ is the fractional part of $(m+n)^n$. Basically, would the same technique work for them all? Thank you in advance.
| The point is that if you expand $(1+\sqrt 2)^n+(1-\sqrt 2)^n$ by the binomial theorem, the terms with $\sqrt 2$ raised to an odd power cancel, while the ones with $\sqrt 2$ raised to an even power are equal in the two terms. The $k$ in the summation is half the power of $\sqrt 2$ in the terms we are considering. The leading factor of $2$ comes from the fact that the terms match.
The author uses $1-\sqrt 2$ because it is the conjugate of $1+\sqrt 2$ and makes the cancellation work.
To make this work with $(m+k)^n$ (please do not reuse $n$ in the expression when they are not the same) you need $(m+k)^n+(m-k)^n$ to be an integer and $|m-k| \lt 1$. To get the sum to be an integer you want $m$ an integer and $k$ a square root, so the cancellation gets rid of the square roots. Then if $m$ is the integer on one side or the other of $\sqrt k$, the magic works.
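The parity claim can be verified exactly in integer arithmetic: writing $(1+\sqrt 2)^n = p_n + q_n\sqrt 2$ with integers $p_n, q_n$, the floor is $p_n + \lfloor\sqrt{2q_n^2}\rfloor$. A Python sketch (the recurrence and the `isqrt` trick are my own packaging):

```python
from math import isqrt

def floor_power(n):
    # (1 + sqrt 2)^n = p + q*sqrt(2) with integers p, q, using
    # (p + q s)(1 + s) = (p + 2q) + (p + q) s  where s = sqrt(2).
    p, q = 1, 0
    for _ in range(n):
        p, q = p + 2 * q, p + q
    return p + isqrt(2 * q * q)   # exact floor of p + q*sqrt(2)

parities = [floor_power(n) % 2 for n in range(1, 40)]
# Alternately even and odd: consecutive parities always differ.
assert all(parities[i] != parities[i + 1] for i in range(len(parities) - 1))
```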
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why are these two definite integrals equal? How can one prove that, for $0< z<1$, the two integrals
$$\int_0^\infty \frac{u^{z-1}}{1+u}du$$
and
$$\int_0^\infty \frac{u^{-z}}{1+u} du$$
are equal?
From the integral representation of the beta function
$$B(z,w)=\frac12\int_{0}^\infty \frac{u^{z-1}+u^{w-1}}{(1+u)^{z+w}} du$$
If we replace $w$ with $1-z$, the left-hand side equals $\pi/\sin(\pi z)$, while the right-hand side becomes (using $z+w=1$)
$$\frac12\int_{0}^\infty \frac{u^{z-1}+u^{-z}}{1+u} du$$
this is the reason of my question.
| If you know, say, that the result of the first integral is $\pi/\sin(\pi z)$, you can see that the second one follows from the first by a mapping $z \mapsto 1-z$, which means the result of the second one is just $\pi/\sin(\pi - \pi z) = \pi/\sin(\pi z)$.
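Numerically, the substitution $u=e^t$ turns both integrals into $\int_{-\infty}^{\infty}e^{zt}/(1+e^t)\,dt$ with $z$ and $1-z$ swapped, which is easy to check; a rough Python sketch (truncation bounds are my choice):

```python
import math

def I(z, T=100.0, N=50_000):
    # integral over (0, inf) of u^(z-1)/(1+u) du; substituting u = e^t
    # turns it into the integral over R of e^(z t)/(1 + e^t) dt,
    # truncated to [-T, T] and evaluated by the trapezoidal rule.
    h = 2 * T / N
    total = 0.0
    for k in range(N + 1):
        t = -T + k * h
        w = 0.5 if k in (0, N) else 1.0          # trapezoidal end weights
        total += w * math.exp(z * t) / (1 + math.exp(t))
    return total * h

for z in (0.2, 0.35, 0.5, 0.8):
    assert abs(I(z) - I(1 - z)) < 1e-5           # the two integrals agree
    assert abs(I(z) - math.pi / math.sin(math.pi * z)) < 1e-5
```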
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Determine the validity of the argument. I have this question and was hoping I could get some help on it:
p∧q∧r → s
u →s
p∧u∧~r
∴q
I have found:
p is true
u is true
~r is true
r is false.
But I am unsure what to do to find the validity of the statement.
My thinking is that premise 1 is equivalent to (~p ∨ ~q ∨ ~r) ∨ s where s is true, but I get stuck here.
Hopefully this made sense and thank you for your assistance!
| No, $q$ does not follow. You can try it with $q$ both ways and see.
As you said, the third premise means $p$ is True, $u$ is True, and $r$ is False. Then the second premise means $s$ is True.
If $q$ is True, the first premise says $T \wedge T \wedge F \to T$, which is fine.
If $q$ is False, the first premise says $T \wedge F \wedge F \to T$, which is fine.
So we can conclude nothing about $q$.
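The "try it with $q$ both ways" check can be brute-forced over all truth assignments (a sketch in Python; `implies` is a helper defined here, not a standard-library function):

```python
from itertools import product

def implies(a, b):
    # material implication: a -> b is false only when a is true and b is false
    return (not a) or b

models = []
for p, q, r, s, u in product([False, True], repeat=5):
    premises = (
        implies(p and q and r, s)   # p∧q∧r → s
        and implies(u, s)           # u → s
        and (p and u and not r)     # p∧u∧~r
    )
    if premises:
        models.append((p, q, r, s, u))

# the premises force p, u, s True and r False, but leave q free,
# so the conclusion q does not follow: the argument is invalid
q_values = {q for (p, q, r, s, u) in models}
assert q_values == {False, True}
```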
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the space $L^p(X) \cap L^q(X)$ with the norm $\lVert f\rVert_{L^p \cap L^q}=\lVert f\rVert_p+\lVert f\rVert_q$ is a Banach space. Here $X$ is a space with a positive measure and $1\le p<q\le +\infty$. I have to prove that $L^p(X) \cap L^q(X)$ is a complete space, i.e. every Cauchy sequence converges in this space with the norm $\lVert f\rVert_{L^p \cap L^q}=\lVert f\rVert_p+\lVert f\rVert_q$.
Can I use the fact that if I have a sequence in $L^p$ that converges to some $f \in L^p$, then there exists a subsequence $f_{n_{k}}$ that converges a.e.?
| You know that both $L^p$ and $L^q$ are Banach spaces. Fix a Cauchy sequence $(f_n)_{n\geq 1}$ in $L^p\cap L^q$.
* Since $(f_n)_{n\geq 1}$ is a Cauchy sequence in $L^p$, it converges in $L^p$. Let $f\in L^p$ be the limit.
* Since $(f_n)_{n\geq 1}$ is a Cauchy sequence in $L^q$, it converges in $L^q$. Let $g\in L^q$ be the limit.
You now have to show that $f=g$. If that's true, then you're done (can you see why?). To do that, use whatever tool you want/can: for instance, you can use the fact that $L^p$ convergence implies convergence in measure, and use uniqueness of that limit.
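A sketch of the identification step in LaTeX, assuming the a.e.-convergent-subsequence fact mentioned in the question:

```latex
% f_n -> f in L^p   ==>  some subsequence f_{n_k} -> f a.e.
% f_{n_k} -> g in L^q  ==>  a further subsequence f_{n_{k_j}} -> g a.e.
% Hence f = g a.e., so f \in L^p \cap L^q, and completeness follows from
\|f_n - f\|_{L^p \cap L^q} \;=\; \|f_n - f\|_{p} + \|f_n - f\|_{q}
\;\xrightarrow[\,n \to \infty\,]{}\; 0 .
```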
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |