Solve $(x^2-2018^2)^2 - 8072x - 1 = 0$ I already know the answer using WolframAlpha; however, I don't know how to tackle this by hand.
I've tried using the difference of squares on the first term, didn't get much though, I tried using $x^2-2018^2 = \sqrt{8072x+1}$, but I didn't know how to continue without squaring again, and I've noticed $8072 = 4 \cdot 2018$.
| I get $$ \left(x^2 + n^2 \right)^2 - \left(2nx+1 \right)^2 = (x^2 - n^2)^2 - 4nx-1 $$
Once written as a difference of squares, note that $A^2 - B^2 = 0$ means $(A+B)(A-B) = 0,$ so that $A+B=0$ or $A-B = 0,$ possibly both.
Here $$ n = 2018 $$
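As a quick numeric sanity check (my own addition, not part of the original answer): the difference of squares gives $x^2+n^2 = \pm(2nx+1)$, i.e. $x^2-2nx+(n^2-1)=0$ or $x^2+2nx+(n^2+1)=0$; only the first has real roots, $x=n\pm1$.

```python
# Exact integer check of the identity and of the real roots x = n - 1, n + 1.
n = 2018

def f(x):
    return (x**2 - n**2)**2 - 4*n*x - 1

# identity check at a few integer points (exact integer arithmetic)
for x in range(-5, 6):
    assert (x**2 + n**2)**2 - (2*n*x + 1)**2 == f(x)

# the real roots of the original equation
assert f(n - 1) == 0 and f(n + 1) == 0
print(n - 1, n + 1)  # 2017 2019
```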
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Polar coordinates with arc-length instead of angle Is it possible to define coordinates on the 2d Cartesian plane with arc length and radius instead of angle and radius? For example I could have
$$
\begin{split}
s(x,y) &= \sqrt{x^2 + y^2 } \arctan(y/x)\\
r(x,y) &= \sqrt{x^2 + y^2 }
\end{split}
$$
Are there any subtle points about coordinate systems that prevent this from meeting the definition of a proper coordinate system?
| I am guessing that the coordinates you want actually are
\begin{split}
s(\rho,\theta) &= \rho\theta,\\
r(\rho,\theta) &= \rho,
\end{split}
where $(\rho,\theta)$ are the standard polar coordinates.
(I used $\rho$ for the polar-coordinates radius in order to be able to make sense of the expression of your radius in terms of the polar coordinates.)
As pointed out in the comments, the arc tangent function is not good for such a definition because its range is only the angles $-\frac\pi2 < \theta < \frac\pi2.$
There is no value for $\arctan(y/x)$ when $x = 0$,
and the value of $\arctan(y/x)$ duplicates the value of $\arctan(-y/-x),$
hence coordinates in the first quadrant would be duplicated in the third.
You can improve your form of the definition a little by means of
the two-parameter arc tangent function,
$\DeclareMathOperator{\atan}{atan2}\atan(y,x)$,
whose range is $(-\pi,\pi]$, so at least it covers all points of the plane uniquely:
\begin{split}
s(x,y) &= \left(\sqrt{x^2 + y^2}\right) \atan(y,x),\\
r(x,y) &= \sqrt{x^2 + y^2},
\end{split}
But there is still a discontinuity along the negative $x$ axis in this definition.
For a point $(x,y)$ where $x < 0$ and $y\approx 0,$ if $y$ is positive the $s$ coordinate is approximately $\pi|x| = -\pi x,$ but if $y$ is negative the $s$ coordinate is approximately $-\pi|x| = \pi x.$
Note that standard polar coordinates finesse the discontinuity by allowing multiple values of the polar coordinates at any given point.
For example, the point with Cartesian coordinates $(-1,0)$ has polar coordinates
$(1,\pi)$, $(1,-\pi)$, $(1,3\pi)$, $(1,-3\pi)$, $(-1,0)$, $(-1,2\pi)$, and many other choices. We don't define $\rho$ and $\theta$ as functions of $x$ and $y$, and this is what allows us to move along a differentiable path anywhere in the plane while varying $\rho$ and $\theta$ continuously.
However, there is another feature polar coordinates have, which is that we can define local coordinate vectors $\hat \rho$ and $\hat \theta$ which are orthogonal anywhere except at the origin.
That is, if you are anywhere except at the origin, a small change in $\rho$ (while holding $\theta$ constant) will take you in a direction perpendicular to the direction you would go with a small change in $\theta$ (while holding $\rho$ constant). Your coordinates do not have that property. (Note that this is not a fatal flaw; there are useful coordinate systems, such as oblique coordinates, that do not have orthogonal coordinate vectors.)
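A small sketch (my own illustration, not from the answer) of the proposed coordinates $s = r\,\operatorname{atan2}(y,x)$, $r=\sqrt{x^2+y^2}$, showing the jump in $s$ across the negative $x$ axis:

```python
import math

def coords(x, y):
    r = math.hypot(x, y)
    return r * math.atan2(y, x), r

s_above, r_above = coords(-1.0, 1e-9)   # just above the negative x axis
s_below, r_below = coords(-1.0, -1e-9)  # just below

assert abs(s_above - math.pi) < 1e-6    # s ~ +pi (= pi*|x|)
assert abs(s_below + math.pi) < 1e-6    # s ~ -pi
assert abs(r_above - 1.0) < 1e-6 and abs(r_below - 1.0) < 1e-6
```

The $r$ coordinate is continuous there, but $s$ jumps by roughly $2\pi|x|$.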
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find a complex analytic function equivalent to $|z-1|$ on a circle. Let $f$ be the complex function $f(z) = |1-z|$ and let $C$ be the circle centered at the origin with radius 1. $f$ is not analytic at 1, but I'm trying to find a function $g$ which is analytic (on a domain containing $C$) and takes the same values on the curve $C$. That is to say, $f(z) = g(z)$ for all $z \in C$.
Intuitively $|1-z|$ represents the distance from $z$ to 1. When $z = 1$ this is 0, when $z = i$ this is $\sqrt 2$, and when $z = -1$ this is 2. It behaves like $2\sin(\theta/2)$ as $z = e^{i\theta}$ traces the circle, but I don't see how to write that as a function of $z$.
For any given $z$ I know it has an argument and radius, so $z = re^{i\theta}$. And with that $\theta$, I could then do some trig and get an expression for $f(z)$. But it would be in terms of $\theta$ and I don't think that's legitimate, if we're looking for a function of $z$.
Anyone have any ideas?
[Edit: If it helps to clarify, $C$ here is the set of all points satisfying $|z|=1$.]
| As Funktoriality explained, one cannot do this in a neighbourhood of $1$, but can we
do this if we avoid $z=1$?
On the unit circle,
$$f(z)^2=|1-z|^2=(1-z)(1-\overline z)=(1-z)(1-z^{-1})=-\frac1z(1-z)^2.$$
Let's take then $g(z)$ to be $1-z$ times a suitable branch of $\sqrt{-1/z}$
defined on the complex plane slit along the positive real axis. This will give
a holomorphic extension of $f$ to a neighbourhood of $C-\{1\}$ which is
the best one can hope for.
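A numerical sanity check of the construction (my own addition): the principal square root has its branch cut on the negative reals, and $-1/z$ maps the positive real $z$-axis onto the negative reals, so `cmath.sqrt(-1/z)` realizes exactly the branch described above. With that choice, $g(z)=(1-z)\sqrt{-1/z}$ agrees with $|1-z|$ on the unit circle:

```python
import cmath, math

def g(z):
    # (1 - z) times the principal branch of sqrt(-1/z)
    return (1 - z) * cmath.sqrt(-1 / z)

for k in range(1, 40):                  # skip k = 0, i.e. z = 1 itself
    theta = 2 * math.pi * k / 40
    z = cmath.exp(1j * theta)
    assert abs(g(z) - abs(1 - z)) < 1e-9
```

On the circle one can check by hand that $g(e^{i\theta}) = 2\sin(\theta/2)$ for $\theta\in(0,2\pi)$, which is exactly $|1-e^{i\theta}|$.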
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3836863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proof that $a(a+1)(2a+1)$ is divisible by $6$ for every integer a This is from the book Elementary Number Theory by Jones & Jones
Example 3.6
Let us prove that $a(a+1)(2a+1)$ is divisible by $6$ for every integer $a$.
By taking least absolute residues $\bmod 6$ we see that $a \equiv 0,\pm1,\pm2$, or $3$.
If $a \equiv 0$ then $a(a+1)(2a+1) \equiv 0 \cdot 1 \cdot 1 \equiv 0$, if $a \equiv 1$, then $a(a+1)(2a+1) \equiv 1 \cdot 2 \cdot 3 = 6 \equiv 0$, and similar calculations (which you should try for yourself) show that $a(a+1)(2a+1) \equiv 0$ in the other 4 cases, so $6 \vert a(a+1)(2a+1)$ for all $a$.
I don't understand the proof at all starting with the first line - "By taking least absolute residues $\bmod 6$ we see that $a \equiv 0,\pm1,\pm2$, or $3$." - How does taking least absolute residues $\bmod 6$ give $a \equiv 0,\pm1,\pm2$, or $3$?
|
I don't understand the proof at all starting with the first line - "By taking least absolute residues $\bmod 6$ we see that $a \equiv 0,\pm1,\pm2$, or $3$." - How does taking least absolute residues $\bmod 6$ give $a \equiv 0,\pm1,\pm2$, or $3$?
Let us prove that $f(a) := a(a+1)(2a+1)$ is divisible by $6$ for every integer $a$.
By taking least absolute residues $\bmod 6$ we see that $a \equiv 0,\pm1,\pm2$, or $3$.
Shifting the standard system of residues (remainders) $\,0,1,\ldots,5\pmod{\!6}\,$ shows that any sequence $\,R\,$ of $\,6\,$ consecutive integers forms a complete system of residues (or remainders), i.e. every integer $\,a\,$ is congruent to a unique $\,r_i\in R.\,$ Now $\!\bmod 6\!:\ a\equiv r_i\,\Rightarrow\, f(a)\equiv f(r_i)\,$ by the Polynomial Congruence Rule. Hence if we prove that $\,f(r_i)\equiv 0\,$ for all $\,r_i\in R\,$ then we can conclude that for all integers $a$ we have $\,f(a)\equiv 0,\,$ i.e. $\,6\mid f(a).\,$ They prove $\,f(r_i)\equiv 0\,$ when $\,r_i = 1\,$ and leave to the reader the proofs for the remaining five elements of $R$.
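The book's case analysis can be run mechanically (my own check, mirroring the six residue cases):

```python
# f(r) = 0 (mod 6) for each least absolute residue r, hence 6 | f(a) for all a.
def f(a):
    return a * (a + 1) * (2 * a + 1)

for r in (0, 1, -1, 2, -2, 3):        # the residues 0, +-1, +-2, 3 mod 6
    assert f(r) % 6 == 0

# spot-check the conclusion on a range of integers
assert all(f(a) % 6 == 0 for a in range(-100, 101))
```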
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Where is the gambler's fallacy in this coin flipping binomial distribution? Suppose we have a biased coin where:
$p(Heads) = 0.6$
$p(Tails) = 0.4$
If $X$ is the number of heads obtained in 10 flips, the binomial distribution says:
$p(X = 9) = 0.04$
$p(X = 10) = 0.006$
Suppose we are now flipping this biased coin, and we have flipped it 9 times so far. On all 9 flips, the coin landed on heads. Is heads or tails more likely on the next flip?
Answer 1: Since $p(X = 9) > p(X = 10)$, $X = 9$ is the more likely outcome. Therefore, the next flip is more likely to be tails.
Answer 2: Since $p(Heads) > p(Tails)$, the next flip is more likely to be heads.
I think the first answer is wrong because it looks like the gambler's fallacy, but I can't explain it in mathematical terms. Can someone explain how the reasoning in the first answer is faulty? How do I refute the reasoning given in the first answer?
| The term you are looking for is independence.
In probability, independent events are ones where the occurrence of one event does not affect the probability of occurrence of the other. For example, flipping heads on a coin does not make it more or less likely to roll a $6$ on a die. We say that these events are independent.
To refute the first answer, we want to rephrase the question to "is the next flip more likely to be Heads or Tails [given the first nine flips were Heads]?" We can now apply what we know about conditional probability.
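A short computation (mine, not from the answer) making the conditional-probability point concrete: the event "$X = 9$ or $X = 10$" given nine heads already flipped reduces to the single tenth flip, whose heads probability is still $0.6$.

```python
from math import comb

p = 0.6
p_9_heads_then_head = p**9 * p          # P(first 9 heads AND 10th heads)
p_9_heads = p**9                        # P(first 9 flips all heads)

# conditional probability of heads on flip 10 given nine heads: still 0.6
assert abs(p_9_heads_then_head / p_9_heads - 0.6) < 1e-12

# the binomial probabilities quoted in the question:
assert abs(comb(10, 9) * 0.6**9 * 0.4 - 0.0403) < 1e-3   # P(X = 9)
assert abs(0.6**10 - 0.006) < 1e-3                       # P(X = 10)
```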
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Does the axiom imply its dual? Suppose we have the following axiom for the projective plane:
Axiom: If a projectivity leaves invariant each of three distinct points on a line, it leaves invariant every point of the line.
The dual of this axiom is the following statement:
Dual: If a projectivity leaves invariant each of three distinct lines, which are concurrent at a point, it leaves invariant every line passing through that point.
Now I need to prove that Axiom $\Rightarrow$ Dual.
My wrong attempt at the proof: Let the three lines $a,b,c$, concurrent at $O$ be the lines left invariant by the projectivity, and $l$ be any other line through $O$. If we can prove that three points on $l$ are left invariant by this projectivity, then we will have that all the points on $l$ are left invariant by the projectivity, which would imply that $l$ itself is left invariant, and we will be done, because we have taken any arbitrary line through $O$.
Now we know that $O$ is left invariant by the projectivity. For any other point, say $P$, on $l$, let a line passing through $P$ other than $l$, intersect $a, b$ and $c$ at the points $A,B$ and $C$, respectively. Now if $A,B,C$ are left invariant by the projectivity, then $P$, a point on the line $AB$, will also be left invariant. Since $P$ is any arbitrary point on $l$, this is true for all points on $l$ and we are done.
However the lines $a,b,c$ being invariant does not imply that the points $A,B,C$ will be left invariant by the projectivity. That is where I am stuck.
Please share any insight to nudge me in the right direction.
| You can think of a projectivity as an alternating sequence of points and lines and the sequences of perspectivities they represent. For a simple example, $(\ell_1,P_2,\ell_3)$ represents the perspectivity between lines $\ell_1$ and $\ell_3$ with center $P_2$. And ($P_2,\ell_3,P_4$) is the perspectivity with axis $\ell_3$ between pencils at $P_2$ and $P_4$. ($\ell_1,P_2,\ell_3,P_4,\ell_5$) is a projectivity $\ell_1\rightarrow\ell_5$ that is the composition of two perspectivities $\ell_1\rightarrow\ell_3\rightarrow\ell_5.$
Let $(P_1,\ell_2,\dots,P_1)$ be a projectivity that leaves three lines in the pencil at $P_1$ invariant, but not the entire pencil. Then consider $(\ell_2,\dots,P_1,\ell_2)$. This is possibly already more than the nudge you've requested, but I'll leave the final step as a spoiler.
$(\ell_2,\dots,P_1,\ell_2)$ is a projectivity that violates the Axiom.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve this absolute value inequality? I am new to absolute value inequalities. I was looking through a book and I found this inequality,
I know a little bit about absolute value inequalities.
The inequality is given below:
$$
\left| \frac{n+1}{2n+3} - \frac{1}{2} \right| > \frac{1}{12}, \qquad n \in \mathbb{Z}
$$
| As $2n+3\ne0$ and
$$\left|\frac{n+1}{2n+3}-\frac12\right| = \left|\frac{2(n+1)-(2n+3)}{2(2n+3)}\right| = \frac{1}{2\,|2n+3|},$$
the inequality
$$\left|\frac{n+1}{2n+3}-\frac12\right|>\frac1{12}$$
can be written (multiplying by $12|2n+3|$)
$$6>|2n+3|.$$
The odd numbers below $6$ are $1,3,5$ (in absolute value), from which we draw
$$n=-4,-3,-2,-1,0,1.$$
As the number of solutions is small, I preferred to exhaust them rather than work out the inequalities.
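The exhaustion can be double-checked mechanically (my own addition) with exact rational arithmetic:

```python
from fractions import Fraction

# Brute-force check that the integer solutions are exactly n = -4,...,1.
# (2n + 3 is odd, so the denominator never vanishes.)
def holds(n):
    return abs(Fraction(n + 1, 2 * n + 3) - Fraction(1, 2)) > Fraction(1, 12)

solutions = [n for n in range(-1000, 1001) if holds(n)]
assert solutions == [-4, -3, -2, -1, 0, 1]
```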
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
construct a convergent positive series which $a_{n_k}\geq \frac{1}{n_k}$ How can one construct a convergent positive series $\sum\limits_{n=1}^{\infty}a_n$ that has infinitely many terms satisfying $a_{n_k}\geq \frac{1}{n_k}$?
I constructed a series: $1,\dfrac{1}{4},\dfrac{1}{4},\dfrac{1}{4},\dfrac{1}{9},\dfrac{1}{9},\dfrac{1}{9},\dfrac{1}{9},\dfrac{1}{9}\cdots$, but I am not sure whether it's true.
If I'm wrong, please give me a correct example, otherwise help me prove it. Thanks!
| Your series diverges because we can find the harmonic series inside it:
$$
1 + \frac14 + \underbrace{\frac14 + \frac14}_{1/2} + \frac19 + \frac19 + \underbrace{\frac19 + \frac19 + \frac19}_{1/3} + \frac1{16} + \frac1{16} + \frac1{16} + \underbrace{\frac1{16} + \frac1{16} + \frac1{16} + \frac1{16}}_{1/4} + \dots
$$
In general, if the terms of the series have to be non-increasing, then there is no solution.
If they don't, then morally speaking we can just modify your example to take
$$
1 + 0 + 0 + \frac14 + 0 + 0 + 0 + 0 + \frac19 + 0 + 0 + 0 + 0 + 0 + 0 + \frac1{16} + \dots
$$
This violates positivity, but we can replace the $0$ terms with something arbitrarily small (such as the terms of any convergent series).
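A concrete positive instance in the spirit of the answer (my own choice of the "arbitrarily small" filler terms): take $a_n = 1/n$ when $n$ is a perfect square and $a_n = 1/n^2$ otherwise. Then $a_{n_k} = 1/n_k$ on the infinitely many indices $n_k = k^2$, and the series converges because both sub-series are dominated by convergent $p$-series.

```python
import math

def a(n):
    r = math.isqrt(n)
    return 1.0 / n if r * r == n else 1.0 / n**2

partial = sum(a(n) for n in range(1, 200001))

# sum of 1/n^2 is pi^2/6 ~ 1.645; the spikes on squares add another
# sum of 1/k^2 at most, so the total stays bounded (around 2.2 here):
assert partial < 3.5
assert a(9) == 1.0 / 9 and a(10) == 1.0 / 100
```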
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probability that a random pairing of numbered people is "good" There are two people numbered $1$, two people numbered $2$, ..., two people numbered $n$. The $2n$ people are randomly paired up by choosing a random permutation of the people. So for $n=3$, if the randomly chosen permutation is $121332$, then the pairings are $(12)(13)(32)$.
A "good" pair is a pair such that both people's numbers are either the same, or one apart. $(11)$ and $(12)$ are good pairs, while $(31)$ is not. What is the probability that for a random pairing, all pairs are good?
My attempt: Let $p(n)$ be the probability that all pairs are good for the $n$ case. Then $p(1)=p(2)=1$. For $p(n)$, choose one of the people labeled $n$. The probability is $\frac{1}{2n-1}$ that they're paired with the other person labeled $n$, and $\frac{2}{2n-1}$ that they're paired with someone labeled $n-1$. If they're paired with someone labeled $n-1$, then the other person labeled $n$ must also be paired with the other person labeled $n-1$, which has probability $\frac{1}{2n-3}$ of happening. So the recursion I'm getting is $p(n) = \frac{1}{2n-1}\cdot p(n-1) + \frac{2}{2n-1}\cdot \frac{1}{2n-3}\cdot p(n-2)$.
The problem is I can't confirm whether this is correct, and even if it is I don't know how to find an explicit formula.
| Let's imagine all items are distinguishable. The total number of permutations is $z_n = (2n)!$
Let $x_n$ count the good permutations.
By a similar reasoning we get the recurrence
$$x_n = 2 \, n \, x_{n-1} + {n \choose 2} 16 \, x_{n-2} = 2 \, n \, x_{n-1} + 8\, n(n-1) \, x_{n-2} \tag1$$
with $x_0=1$, $x_1=z_1=2$.
Letting $x_n = w_n \, 2^{n} \,n!$ we can also write the simpler
$$w_n = w_{n-1} + 2 \, w_{n-2} \tag 2$$
with $w_0=1$, $w_1=1$.
And the desired probability is then
$$p_n = \frac{x_n}{z_n}= w_n \, 2^n \frac{ n!}{(2n)!} \tag 3 $$
The Fibonacci-like recursive equation $(2)$ can be solved
by the usual methods. Its characteristic polynomial is $x^2-x-2=(x+1)(x-2)$, hence the general term has the form
$$w_n = k_1 2^n + k_2(-1)^n \tag 4$$
Plugging the initial conditions we get the solution
$$ w_n = \frac{2^{n+1}+(-1)^n}{3} \tag 5$$
These are also known as Jacobsthal numbers (with an index shift). See also
sequence https://oeis.org/A001045. Finally
$$ p_n = \frac{ n!}{(2n)!} \left( \frac23 4^n + \frac13(-2)^n \right) \approx (e/n)^n \frac{\sqrt{2}}{3} \tag 6$$
The latter asymptotic is quite precise already for $n \ge 5$.
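The closed form can be verified by brute force for small $n$ (my own check, $n=3$): enumerate all $(2n)! = 720$ orderings of the labels $1,1,2,2,3,3$ and pair up consecutive positions; the closed form predicts $p_3 = w_3\,2^3\,3!/6! = 1/3$ with $w_3 = (2^4-1)/3 = 5$.

```python
from itertools import permutations
from math import factorial

labels = [1, 1, 2, 2, 3, 3]
good = sum(
    all(abs(t[i] - t[i + 1]) <= 1 for i in (0, 2, 4))  # pairs at (0,1),(2,3),(4,5)
    for t in permutations(labels)                       # positions are distinguishable
)
total = factorial(6)                    # 720
assert good == 240                      # so p_3 = 240/720 = 1/3

w3 = (2**4 + (-1)**3) // 3              # the Jacobsthal-type formula (5)
assert good / total == w3 * 2**3 * factorial(3) / factorial(6)
```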
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reading a first order logic definition, with for all I'm trying to read the following FOL for a functional binary relation:
$R \subset X \times Y$
$\forall x \in X, \forall y \in Y, \forall z \in Y, (x,y) \in R \land (x,z) \in R \implies y = z$
I'm confused on how to read this. It's my understanding that "$\forall x$, __ is true if the value of __ is true for all values of $x$" and that "$\forall x$, __ is false if the value of __ is false for any value of $x$".
So above, $\forall z \in Y$ will bind $z$ to values where $y \ne z$ and the entire statement will be false. Am I reading this wrong?
| You are misunderstanding the bracketing here:
$\forall x \in X, \forall y \in Y, \forall z \in Y, (x,y) \in R \land (x,z) \in R \implies y = z$
Should be:
$\forall x \in X, \forall y \in Y, \forall z \in Y, ((x,y) \in R \land (x,z) \in R) \implies y = z$
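The corrected bracketing can be made concrete (my own illustration; the sample sets and relations are invented for the demo): both pairs must be in $R$ *jointly* before we are obliged to conclude $y = z$, so a functional relation satisfies the formula while a one-to-many relation does not.

```python
from itertools import product

def is_functional(R, X, Y):
    # forall x, y, z: ((x,y) in R and (x,z) in R) -> y == z
    return all(
        not ((x, y) in R and (x, z) in R) or y == z
        for x, y, z in product(X, Y, Y)
    )

X, Y = {1, 2}, {"a", "b"}
assert is_functional({(1, "a"), (2, "b")}, X, Y)        # a function
assert not is_functional({(1, "a"), (1, "b")}, X, Y)    # 1 maps to both a and b
```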
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3837972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Radius of convergence of a complex series Good evening, people.
I am trying to find the radius of convergence of the series $\sum\limits_{n=0}^{\infty} (4+(-1)^n )^n z^n$
I tried to use the theorem which states that $ r = \lim\limits_{n\rightarrow \infty} |\frac{a_{n}}{a_{n+1}}|$. However, I could not solve this limit.
So I did the following:
$$ \sum\limits_{n=0}^{\infty} (4+(-1)^n )^n z^n = \sum\limits_{k=0}^{\infty} 3^{2k+1}z^{2k+1} + \sum\limits_{k=0}^{\infty}5^{2k}z^{2k}$$
And said that the radius of convergence of the series on the left side is the smaller of the radii of the two series on the right side.
Is this right? Is there an easy way to find the radius of convergence?
Thank you.
| Yes, that is correct, except that in English the word "series" is either singular or plural, i.e. one writes "This series is..." or "These series are..."
There is a theorem that asserts that power series converge absolutely at every point in the interior of their circles of convergence (but may converge conditionally at a point on the boundary; however, that is of no concern here).
If convergence is absolute, then the series can be "rearranged" without altering the sum. In particular,
$$
\sum_{n\,=\,0}^\infty t_n = \sum_{n\,\in\,A} t_n + \sum_{n\,\in\,B} t_n
$$
if $A,B$ are disjoint sets whose union is $\{0,1,2,\ldots\}.$
If both of the series on the right converge absolutely, then so does the series on the left.
If the series on the left converges absolutely, then so do the two series on the right.
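For the radius itself (my own addition, since the answer addresses only the splitting argument): the root test gives $\limsup |a_n|^{1/n} = 5$, because the even-index subsequence contributes $5$ and the odd-index one only $3$; so the radius is $1/5$, the smaller of the radii $1/3$ and $1/5$ of the two sub-series.

```python
# limsup |a_n|^(1/n) for a_n = (4 + (-1)^n)^n: 5 on even n, 3 on odd n.
vals = [abs((4 + (-1) ** n) ** n) ** (1.0 / n) for n in range(1, 50)]
assert abs(max(vals) - 5.0) < 1e-9   # the limsup, giving radius 1/5
assert abs(min(vals) - 3.0) < 1e-9   # the odd-index subsequence
```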
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question regarding step 8 in appendix of chapter 1, Baby Rudin This question refers to the construction of R from Q using Dedekind cuts, as presented in Rudin's "Principles of Mathematical Analysis", pp. 17-21.
Specifically, I cannot prove (b) in step 8, bottom of p. 20. To be more precise, I'm not able to show that ${(rs)^*\subset r^*s^*}$ when $r>0$ and $s>0$; could somebody prove it for me?
Here are the original texts in Baby Rudin.
Step 8
We associate with each $r\in Q$ the set $r^*$ which consists of all $p\in Q$ such that $p < r$. It is clear that each $r^*$ is a cut; that is, $r^* \in R$. These cuts satisfy the following relations :
(a) $ r^ * + s^* = (r + s)^*$,
(b) $r^*s^* = (rs)^*$,
(c) $r^* < s^*$ if and only if $r < s$.
| You want to prove that if $q\in\Bbb Q$, if $q>0$, and if $q<rs$, then $q=q_1q_2$, with $q_1,q_2>0$, $q_1,q_2\in\Bbb Q$, $q_1<r$ and $q_2<s$. Take a rational number $q_1\in\left(\frac qs,r\right)$; this makes sense, since $\frac qs<r$. Now, take $q_2=\frac q{q_1}$. Then $q_2<s$, since\begin{align}q_2<s&\iff\frac q{q_1}<s\\&\iff q_1>\frac qs.\end{align}And, of course, $q=q_1q_2$.
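The construction in the answer can be run on concrete rationals (my own demo; the midpoint is one convenient choice of a rational strictly between $q/s$ and $r$):

```python
from fractions import Fraction

def split(q, r, s):
    # given 0 < q < r*s, return q1 in (q/s, r) and q2 = q/q1 < s
    lo = q / s
    q1 = (lo + r) / 2          # a rational strictly between q/s and r
    q2 = q / q1
    return q1, q2

q, r, s = Fraction(5, 2), Fraction(2), Fraction(3)   # q = 5/2 < rs = 6
q1, q2 = split(q, r, s)
assert 0 < q1 < r and 0 < q2 < s and q1 * q2 == q
```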
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Iteration on sequences I am trying to find a general formula for this sequence, in order to find its limit (has to be this way):
$$X_{k+1} = \frac {X_k} 2 + \frac 1 {X_k}$$
I cannot get a grasp on it; after $2$ or $3$ iterations, the sequence gets huge and I don't see the pattern.
Any help is greatly appreciated!
| If $X_0=\sqrt 2$, then the sequence is constant, so we can assume $X_0\not = \sqrt 2$. We may also assume that $X_0>0$, as the negative case is the same.
Let $X_0=\sqrt 2\cdot\frac {r+1}{r-1} $, or $r=\frac{X_0+\sqrt2}{X_0-\sqrt2}$, then the closed form is $X_k=\sqrt 2\cdot\frac {r^{2^k}+1}{r^{2^k}-1}$. The way to get the closed form is as follows.
Dividing by $\sqrt 2$:
$$\frac {X_{k+1}}{\sqrt 2}=\frac 12(\frac {X_{k}}{\sqrt 2}+\frac {\sqrt 2}{X_{k}})$$
Substituting $\frac {X_{k}}{\sqrt 2}=S_k+1$:
$$S_{k+1}+1=\frac 12 \cdot(S_k+1+\frac 1{S_k+1})$$
$$S_{k+1}=\frac 12 \cdot(S_k-1+\frac 1{S_k+1})$$
$$S_{k+1}=\frac 12 \cdot\frac {S_k^2}{S_k+1}$$
Taking the inverse:
$$\frac 1{S_{k+1}}=2 \cdot\frac {S_{k}+1}{S_k^2}$$
$$\frac 1{S_{k+1}}= 2\cdot (\frac 1{S_{k}})^2+2\cdot \frac 1{S_{k}}$$
Completing the square:
$$\frac 1{S_{k+1}}+\frac 12=2\cdot(\frac 1{S_{k}}+\frac 12)^2$$
$$\frac 2{S_{k+1}}+1=(\frac 2{S_{k}}+1)^2$$
This means each term is the square of the previous term, which we can denote as $\frac 2{S_{k}}+1=r^{2^{k}}$. Substituting everything back into $X_k$, we then have the closed form.
Proving that the closed form is actually correct:
\begin{align}
\frac {X_k}2+\frac 1{X_k}
&=\frac 1{\sqrt2} \cdot (\frac {r^{2^k}+1}{r^{2^k}-1}+\frac {r^{2^k}-1}{r^{2^k}+1})\\
&=\frac 1{\sqrt2} \cdot \frac {(r^{2^k}+1)^2+({r^{2^k}-1})^2}{(r^{2^k}-1)({r^{2^k}+1})} \\
&=\frac 1{\sqrt2}\cdot \frac {r^{2^{k+1}}+2r^{2^k}+1+r^{2^{k+1}}-2r^{2^k}+1}{r^{2^{k+1}}-1}\\
&=\sqrt 2\cdot \frac {r^{2^{k+1}}+1}{r^{2^{k+1}}-1}\\
&=X_{k+1}
\end{align}
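A numerical check of the closed form (my own addition, with an arbitrary starting value $X_0 = 3$), which also exhibits the limit $\sqrt2$:

```python
import math

s2 = math.sqrt(2)
X0 = 3.0
r = (X0 + s2) / (X0 - s2)               # r from the answer's substitution

X = X0
for k in range(1, 6):
    X = X / 2 + 1 / X                   # the recurrence
    p = r ** (2 ** k)
    closed = s2 * (p + 1) / (p - 1)     # the closed form X_k
    assert abs(X - closed) < 1e-9

assert abs(X - s2) < 1e-9               # the limit is sqrt(2)
```

(The recurrence is Newton's iteration for $\sqrt2$, which is why convergence is so fast.)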
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
For continuous optimizations, what is the condition that says an optimal value lies on the boundary and not in the interior? Suppose I want to solve the problem
\begin{equation}
\min_{x \in \mathcal{X}} f(x)
\end{equation}
where $f$ is assumed to be continuously differentiable, and $\mathcal{X}$ is closed and possibly compact.
I know that an interior optimal point $x^*$ satisfies $\nabla f(x^*) = 0$
And in general, optimal points satisfy $\nabla f(x^\star)^T (x - x^*) \geq 0, \forall x \in \mathcal{X}$
But what about points that only lie on the boundary $\partial \mathcal{X}$? What is a condition that characterizes these points?
I am thinking of a condition that reads,
$\nabla f(x^\star)^T (x - x^*) \geq 0, \forall x \in \mathcal{X}$ and $\nabla f(x^\star) \neq 0$
However, $\nabla f$ cannot be evaluated on the boundary. So this condition doesn't actually make any sense.
Does anyone know how to reconcile the non-existence of $\nabla f$ and characterization of boundary optimal points?
| For me personally, this setting only makes sense if $ \nabla f$ can be evaluated on the boundary of $\mathcal{X}$ for the following reason:
If $\mathcal{X}$ is a closed set then we need to be able to evaluate $f$ for all $x \in \mathcal{X}$ to be able to solve the optimization problem. Hence, the domain of $f$ should be a superset of $\mathcal{X}$. Further, if $f$ is continuously differentiable then (by definition) its domain is some open set $D$ and we get $f \colon D \supseteq \mathcal{X} \to \mathbb{R}$. For such a function $f$ we can evaluate $\nabla f$ on the boundary of $\mathcal{X}$.
If we don't know anything about $\mathcal{X}$ then a necessary optimality condition would be as follows: If $\bar{x} \in \mathcal{X}$ is an optimal solution of the optimization problem, it holds that $\nabla f(\bar{x})^\top d \geq 0 \;\forall d \in T(\mathcal{X},\bar{x})$ where $T$ stands for the tangent cone.
If $\mathcal{X}$ is given by some equality and inequality constraints, then under certain regularity assumptions the Karush-Kuhn-Tucker (KKT) conditions would be a necessary optimality criterion.
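A toy numerical illustration (my own; the function and feasible set are invented for the demo): minimize $f(x)=x$ over $\mathcal X=[0,1]$. The minimizer $x^*=0$ lies on the boundary with $\nabla f(x^*)=1\neq 0$, yet the variational inequality $\nabla f(x^*)(x-x^*)\geq 0$ holds for all $x\in\mathcal X$.

```python
def grad_f(x):
    return 1.0                            # f(x) = x, so f'(x) = 1 everywhere

x_star = 0.0
xs = [i / 100 for i in range(101)]        # sample points of X = [0, 1]
assert min(xs, key=lambda x: x) == x_star # boundary minimizer
assert all(grad_f(x_star) * (x - x_star) >= 0 for x in xs)
```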
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If a curve has velocity and acceleration of constant magnitude, then its curvature is constant. If a curve has velocity and acceleration of constant magnitude, then its curvature is constant.
I think it's too easy a problem, but I'm not sure my solution is correct:
Let $\alpha (t) = (x (t), y (t)) $ be the curve, having constant velocity and acceleration implies that $ x' (t) = c_1 $, $ y' (t) = c_2 $, $ x'' (t) = c_3 $, $ y'' (t) = c_4 $.
Using the following equation for curvature in which the parametrization does not matter and assuming that the curve is regular:
$$
\kappa=\dfrac{x'y''-x''y'}{(x'^2+y'^2)^{\frac{3}{2}}}=\dfrac{c_1c_4-c_3c_2}{(c_1^2+c_2^2)^{\frac{3}{2}}}.
$$
So with this we conclude that the curvature is constant since the denominator is different from 0.
Is the proof correct?
| Hint 1. Show that the angle between $\alpha''$ and $J(\alpha')= (-y',x')$ is constant. Subhint: For this, show that the angle between $\alpha''$ and $\alpha'$ is constant.
Hint 2. Note that
$$
\kappa = \frac{\alpha'' \cdot J\alpha'}{\|\alpha'\|^3}.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is it possible to solve exponential equation analytically? I'm trying to solve the following equation:
$$e^{3x}-e^{2x}\left(e^2-\frac{1}{e^4}\right)-1=0$$
I know the solution is 2, as the equation above is simply a rearranged version of this initial statement:
$$e^{x}-\frac{1}{e^{2x}}=e^2-\frac{1}{e^4}$$
I assumed I could forge a cubic by letting $t=e^{x}$ and then using the cubic formula, but I get into a hideous mess with terms being "trapped" inside cube roots and nothing really falls together nicely.
My question is, how would one go about solving this equation analytically (if it's at all possible)?
| With $t:=e^x$ and multiplying by $e^4$,
$$e^4t^3-t^2\left(e^6-1\right)-e^4=0$$
indeed has the solution $t=e^2$. By long division,
$$e^4t^3-t^2\left(e^6-1\right)-e^4=(t-e^2)\left(e^4t^2+ t+e^2\right)=0.$$
Now you can solve the quadratic.
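A quick numerical follow-up (my own): the quadratic factor $e^4t^2+t+e^2$ has discriminant $1-4e^6<0$, so it has no real roots, and $x=2$ is the only real solution of the original equation.

```python
import math

e = math.e

def cubic(t):
    return e**4 * t**3 - t**2 * (e**6 - 1) - e**4

assert abs(cubic(e**2)) < 1e-8          # t = e^2 is a root
assert 1 - 4 * e**6 < 0                 # quadratic factor has no real roots

# and x = 2 satisfies the original equation
x = 2.0
assert abs(math.exp(3 * x) - math.exp(2 * x) * (e**2 - e**-4) - 1) < 1e-8
```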
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3838976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Basic question about open and closed sets Let $A$ and $B$ be open sets in a metric space $X$ and suppose that $B\subseteq \overline{A}$ (where $\overline{A}$ denotes the closure of $A$). Is it true that $B\subseteq A$ ?
| Minimal counterexample. Take the Sierpiński space. It is the $2$-element set $X = \{0, 1\}$ whose open sets are $\emptyset$, $\{1\}$ and $X$. Now take $A = \{1\}$ and $B = X$. Then $A$ and $B$ are open and $B \subseteq \overline{A}$ (since $\overline{A} = X$), but $B \not\subseteq A$. (The Sierpiński space is not metrizable; for a counterexample in a metric space, take $X = \{0\} \cup \{1/n : n \geq 1\}$ with the usual metric, $A = X \setminus \{0\}$ and $B = X$. Both are open, since every point $1/n$ is isolated, and $B \subseteq \overline{A} = X$, but $B \not\subseteq A$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Using the Fourier Transform to prove the convolution of two gaussians is gaussian I've got two distributions
$$
p_1(x) = \cfrac{1}{\sqrt{2\pi}\sigma_1}\cdot e^{-x^2/2\sigma_1^2}
$$
and similarly
$$
p_2(x) = \cfrac{1}{\sqrt{2\pi}\sigma_2}\cdot e^{-x^2/2\sigma_2^2}
$$
I'm told that using the convolution theorem is the way to go so we'll start from there.
I know that $F(p_1(x)) = \int\limits_{-\infty}^\infty p_1(x) e^{2\pi ikx} dx$
so
$$
\begin{align}
F(p_1(x))F(p_2(x))
&=
\int\limits_{-\infty}^\infty \cfrac{1}{\sqrt{2\pi}\sigma_1}\cdot e^{-x^2/2\sigma_1^2} e^{2\pi ikx} dx
\cdot
\int\limits_{-\infty}^\infty \cfrac{1}{\sqrt{2\pi}\sigma_2}\cdot e^{-x^2/2\sigma_2^2} e^{2\pi ikx} dx \\
&=
\cfrac{1}{\sqrt{2\pi}\sigma_1}\cdot
\cfrac{1}{\sqrt{2\pi}\sigma_2}
\int\limits_{-\infty}^\infty e^{-x^2/2\sigma_1^2} e^{2\pi ikx} dx
\cdot
\int\limits_{-\infty}^\infty e^{-x^2/2\sigma_2^2} e^{2\pi ikx} dx \\
&=
\cfrac{1}{2\pi\sigma_1 \sigma_2}
\int\limits_{-\infty}^\infty e^{-x^2/2\sigma_1^2} e^{2\pi ikx} dx
\cdot
\int\limits_{-\infty}^\infty e^{-x^2/2\sigma_2^2} e^{2\pi ikx} dx \\
\end{align}
$$
And then I'm not sure what to do from here... I know that I'll take the inverse Fourier transform at the end and that will reveal the final gaussian distribution but I'm not sure how to evaluate these next couple steps... I've never used Fourier before and haven't taken a class that uses integrals in a complex space.
| Let's be even more general by computing the convolution of $f_j:=\frac{1}{\sigma_j\sqrt{2\pi}}\exp\frac{-(x-\mu_j)^2}{2\sigma_j^2},\,j\in\{1,\,2\}$. First note $f_j$ has Fourier transform$$\begin{align}\int_{\Bbb R}\frac{1}{\sigma_j\sqrt{2\pi}}\exp\frac{-(x-\mu_j)^2+4\pi\sigma_j^2ikx}{2\sigma_j^2}dx&=\Bbb E[\exp2\pi ikX|X\sim N(\mu_j,\,\sigma_j^2)]\\&=\exp(2\pi ik\mu_j-2\pi^2k^2\sigma_j^2).\end{align}$$Now use$$\begin{align}f_1\ast f_2&=\mathcal{F}^{-1}(\mathcal{F}f_1\cdot\mathcal{F}f_2)\\&=\mathcal{F}^{-1}\exp(2\pi ik(\mu_1+\mu_2)-2\pi^2k^2(\sigma_1^2+\sigma_2^2))\\&=\frac{1}{\sqrt{2\pi(\sigma_1^2+\sigma_2^2)}}\exp\frac{-(x-\mu_1-\mu_2)^2}{2(\sigma_1^2+\sigma_2^2)}.\end{align}$$
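The result can be confirmed numerically (my own addition, zero-mean case): a direct Riemann-sum convolution of the two Gaussians matches the Gaussian with variance $\sigma_1^2+\sigma_2^2$.

```python
import math

def gauss(x, s):
    return math.exp(-x * x / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

s1, s2 = 1.0, 0.5
s12 = math.hypot(s1, s2)               # sqrt(s1^2 + s2^2)

dt, T = 0.01, 8.0
ts = [i * dt for i in range(int(-T / dt), int(T / dt) + 1)]
for x in (0.0, 0.3, 1.0):
    # (p1 * p2)(x) = integral of p1(t) p2(x - t) dt, by Riemann sum
    conv = sum(gauss(t, s1) * gauss(x - t, s2) for t in ts) * dt
    assert abs(conv - gauss(x, s12)) < 1e-6
```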
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Chain rule using the Jacobian for general spaces If $f:X \rightarrow Y$ is differentiable at $x$ and $g:Y \rightarrow Z$ is differentiable at $y = f(x)$, then $g \circ f: X \rightarrow Z$ is differentiable at $x$ and $$D(g\circ f)(x) = Dg(f(x)) Df(x)$$
I am able to prove it for univariate functions or for $\mathbb{R}^n \rightarrow \mathbb{R}$ functions, but not for more general spaces such as $X, Y$ and $Z$. Can somebody help me with that?
| I like the Weierstraß formulation of differentiation for such cases:
$f$ is differentiable at $x$ if there is a linear function $L$ and a remainder function $r$ such that $f(x+v)=f(x)+L(v)+r(v)$ with $\lim_{v\to 0}\frac{r(v)}{\|v\|}=0.$
This has the advantage that you do not have to bother with coordinates or specific directions. This way we get matrix multiplication for the derivatives and only have to work on the remainders a bit.
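As a sketch of how the composition works under this formulation (my own, with $L_f, L_g$ the two linear maps and $r_f, r_g$ the remainders, taking $y=f(x)$ and $w = L_f(v)+r_f(v)$):

```latex
\begin{aligned}
(g\circ f)(x+v) &= g\bigl(f(x) + L_f(v) + r_f(v)\bigr) \\
&= g(f(x)) + L_g\bigl(L_f(v)\bigr)
   + \underbrace{L_g\bigl(r_f(v)\bigr) + r_g\bigl(L_f(v)+r_f(v)\bigr)}_{=:R(v)}
\end{aligned}
```

So the candidate derivative is the linear map $L_g\circ L_f$, and it remains to check that $R(v)/\|v\|\to 0$, using boundedness of the (continuous) linear maps together with the remainder conditions on $r_f$ and $r_g$.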
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Logical Equivalence for $p \lor q$ I have to prove that
$$p \vee q \equiv (p\wedge q) \vee (\neg p\wedge q) \vee (p\wedge \neg q)$$
Based on the truth table, they are equivalent, but I couldn't figure out how to use logic statements to prove they are equivalent. I have tried many ways but they all go weird.
$(p\wedge q) \vee (\neg p\wedge q) \vee (p\wedge \neg q)$
$\equiv (p\wedge q) \vee ((\neg p\wedge q)\vee p) \wedge ((\neg p\wedge q)\vee \neg q)$
$\equiv (p\wedge q) \vee ((T \wedge (q\vee p)) \wedge (T\wedge \neg(p \wedge q))$
$\equiv (p\wedge q) \vee (q\vee p) \wedge \neg(p \wedge q)$
I couldn't figure out what I'm supposed to do from this point. Did I do anything wrong?
Thanks
| To prove $p \vee q \equiv (p\wedge q) \vee (\neg p\wedge q) \vee (p\wedge \neg q)$, let's start from RHS as follows.
$(p\wedge q) \vee (\neg p\wedge q) \vee (p\wedge \neg q)$
$\equiv q \wedge (p\vee \neg p) \vee (p\wedge \neg q)$
$\equiv (q \wedge \text{T}) \vee (p\wedge \neg q)$
$\equiv q \vee (p\wedge \neg q)$
$\equiv (q \vee p)\wedge(q\vee \neg q)$
$\equiv (q \vee p)\wedge \text{T}$
$\equiv (q \vee p)$
Q.E.D.
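The claimed equivalence is also easy to confirm mechanically; here is a tiny exhaustive truth-table check (illustrative script, not part of the original answer):

```python
from itertools import product

lhs = lambda p, q: p or q
rhs = lambda p, q: (p and q) or ((not p) and q) or (p and (not q))

# Exhaustive check over all four truth assignments
equivalent = all(
    bool(lhs(p, q)) == bool(rhs(p, q))
    for p, q in product([False, True], repeat=2)
)
print(equivalent)  # True
```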
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Definite Integral involving logarithm and tangent function
Show that
$$\mathcal{I}:=\int_0^{\frac{\pi}2} \log |1-a^2\tan^2\theta| d\theta= \pi\log\sqrt{a^2+1}.$$
I tried to use the substitution $\tan\theta=z$, to get that
$$\mathcal{I}:=\int_0^{\infty} \frac{\log|1-a^2z^2|}{z^2+1}dz$$
This integral is quite similar to this one:
Evaluating $\int_0^{\infty}\frac{\ln(x^2+1)}{x^2+1}dx$
However, note that the sign inside the logarithm term is different, and it seems none of the ideas there are applicable to this setting. Maybe there is a way to use the result from the link to prove my integral, but I don't know.
Edit: Okay, I think I have figured out one way to do it via contour integration. We can take a contour that looks like a large semicircle on the upper half plane, so that it encloses only the pole at $z=i$. The contour should also have two holes around $z=\pm \frac1a$ to avoid the singularities coming from $\log$. Then residue calculus gives the desired result.
It would be nice to see a proof without residue calculus as well.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
Let $\ds{\mathcal{I}\pars{\beta} \equiv \int_{0}^{\pi/2}\ln\pars{\verts{1 - \beta\tan\pars{\theta}}}\,\dd\theta}$ such that
$$
\underbrace{\bbox[5px,#ffd]{\int_{0}^{\pi/2}\ln\pars{\verts{1 - a^{2}\tan^{2}\pars{\theta}}}\,\dd\theta}}
_{\ds{\vphantom{\LARGE A}\Large ?}}\ =\
\mathcal{I}\pars{a} + \mathcal{I}\pars{-a}
$$
\begin{align}
\mathcal{I}'\pars{\beta} &\equiv \int_{0}^{\pi/2}
{-\tan\pars{\theta}\over 1 - \beta\tan\pars{\theta}}\,\dd\theta =
-\int_{0}^{\pi/2}
{\sin\pars{\theta}\over \cos\pars{\theta} - \beta\sin\pars{\theta}}\,\dd\theta
\\[5mm] & =
\left.-\int_{0}^{\pi/2}
{\sin\pars{\theta}\over \cos\pars{\theta} - \tan\pars{\phi}\sin\pars{\theta}}\,\dd\theta
\,\right\vert_{\large\ \color{red}{\phi\ \equiv\ \arctan\pars{\beta}}}
\\[5mm] = &\
-\cos\pars{\phi}\int_{0}^{\pi/2}
{\sin\pars{\theta}\over \cos\pars{\theta + \phi}}\,\dd\theta
\\[5mm] & =
-\cos\pars{\phi}\int_{\phi}^{\pi/2 + \phi}
{\sin\pars{\theta - \phi}\over \cos\pars{\theta}}\,\dd\theta
\\[5mm] & =
-\cos^{2}\pars{\phi}\int_{\phi}^{\pi/2 + \phi}\tan\pars{\theta}\dd\theta
+ {\pi \over 2}\sin\pars{\phi}\cos\pars{\phi}
\\[5mm] & =
{1 \over \tan^{2}\pars{\phi} + 1}
\ln\pars{\verts{\cos\pars{\phi + \pi/2}} \over \verts{\cos\pars{\phi}}} + {\pi \over 2}
{\tan\pars{\phi} \over \tan^{2}\pars{\phi} + 1}
\\[5mm] & =
{\ln\pars{\verts{\beta}} + \pi\beta/2 \over \beta^{2} + 1}
\end{align}
Then
$\ds{\pars{~\mbox{with}\ \mathcal{I}\pars{0} = 0~}}$,
\begin{align}
&\bbox[5px,#ffd]{\int_{0}^{\pi/2}\ln\pars{\verts{1 - a^{2}\tan^{2}\pars{\theta}}}\,\dd\theta}
\\[5mm] = &\
\int_{0}^{a}{\ln\pars{\verts{\beta}} + \pi\beta/2 \over \beta^{2} + 1}\,\dd\beta +
\int_{0}^{-a}{\ln\pars{\verts{\beta}} + \pi\beta/2 \over \beta^{2} + 1}\,\dd\beta
\\[5mm] = &\
\pi\int_{0}^{a}{\beta \over \beta^{2} + 1}\,\dd\beta =
\pi\,{1 \over 2}\ln\pars{a^{2} + 1} =
\bbx{\pi\ln\pars{\root{a^{2} + 1}}} \\ &
\end{align}
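As a sanity check on the final boxed identity (an editorial addition, not part of the original answer), a midpoint-rule quadrature that sidesteps the integrable singularities agrees with $\pi\ln\sqrt{a^2+1}$:

```python
import math

def I(a, n=200_000):
    # Midpoint rule for the integral over (0, pi/2) of ln|1 - a^2 tan^2(theta)|;
    # midpoints avoid the integrable singularities at tan(theta) = 1/a
    # and at theta = pi/2
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        v = abs(1 - (a * math.tan((k + 0.5) * h)) ** 2)
        if v > 0:
            total += math.log(v)
    return total * h

a = 1.3  # arbitrary sample value
approx = I(a)
exact = math.pi * math.log(math.sqrt(a * a + 1))
print(approx, exact)  # should agree to a few decimal places
```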
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Trailing zeroes of $\dfrac{n!}{m!}$ for $n>m$ I (as a teacher) saw in a book for $8^{th}$ grade students that the number of trailing zeroes of ${n!}\times{m!}$ is the sum of the trailing zeroes of $n!$ and $m!$. It was also noted there that the number of trailing zeroes of $\dfrac{n!}{m!}$ ($m<n$) is their difference, i.e.
$$(\left\lfloor \frac{n}{5}\right\rfloor+ \left\lfloor \frac{n}{5^2}\right\rfloor+
\left\lfloor \frac{n}{5^3}\right\rfloor+\cdots)-(\left\lfloor \frac{m}{5}\right\rfloor+ \left\lfloor \frac{m}{5^2}\right\rfloor+
\left\lfloor \frac{m}{5^3}\right\rfloor+\cdots).$$
But I think this is wrong because for example $\dfrac{15!}{14!}=15$ but $3-2=1$.
Can one prove that this statement is correct if $n>m-1$? If so why this restriction is necessary?
Of course it is obvious that $\dfrac{(n+1)!}{n!}=n+1$ and the number of trailing zeroes depend on the number of trailing zeroes of the number $n+1$.
Where does this strange behavior come from? That is, for a product of factorials we may simply add the numbers of trailing zeroes, but for a quotient we have to be careful?
Note: I always make mistakes in simple math calculations. Am I wrong here?
| Let $\lfloor r\rfloor$ denote the floor of $r$.
For prime $p$ and positive integer $n$, let
$V_p(n)$ denote the largest exponent $\alpha$
such that $p^{\alpha} | n.$
Note that under this definition
$p^{(\alpha + 1)} \not | n.$
The formula for $V_p(n!)~$ is $~\lfloor \frac{n}{p^1}\rfloor ~+~
\lfloor \frac{n}{p^2}\rfloor ~+~
\lfloor \frac{n}{p^3}\rfloor ~+~
\lfloor \frac{n}{p^4}\rfloor ~+~ \cdots
$
Using this formula, the precise formula for the # of trailing zeros that
a number will have is $\min\{V_2(n), V_5(n)\}.$
It's easy to see that in general, $V_5(n!) \leq V_2(n!).$
I recommend against any attempt to shortcut the formula.
For example, if $n > m$, there is no way to guarantee that the number of trailing zeros of
$n!/m!$ equals $V_5(n!) - V_5(m!)$.
In fact, one of the ways of showing that
$\binom{n}{k}$ is an integer,
when $~n ~\in ~\mathbb{Z^+}$ and $k \in \{0,1,\cdots, n\}$
is by showing that for any prime $p$,
$V_p(n!) \geq \{V_p(k!) + V_p([n-k]!)\}.$
Addendum
I really didn't focus on the internals of where the OP's computation could
lead to the wrong result.
Therefore, see also Michael Burr's subsequent comment to this answer.
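The valuation formula and the $15!/14!$ counterexample from the question can be checked mechanically (illustrative script, not part of the original answer):

```python
def v_p(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    count, q = 0, p
    while q <= n:
        count += n // q
        q *= p
    return count

def quotient_trailing_zeros(n, m):
    """Trailing zeros of n!/m! (n > m): the 2s can run out before the 5s."""
    return min(v_p(n, 2) - v_p(m, 2), v_p(n, 5) - v_p(m, 5))

# The naive 5-count rule predicts 3 - 2 = 1 zero for 15!/14!, but
# 15!/14! = 15 ends in no zeros: V_2 contributes 11 - 11 = 0.
print(v_p(15, 5) - v_p(14, 5), quotient_trailing_zeros(15, 14))  # 1 0
```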
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
3 category car insurance probability I couldn't find any 3-category car insurance examples. I was able to deduce part a, but I could use some help trying to figure out part b) of the following:
An insurance company believes that people can be classified into three groups: good risk,
average risk or bad risk. Their statistics show that the probabilities of good, average and bad
risk individuals being involved in an accident in any one year are $0.04$, $0.12$ and $0.3$ respectively.
Assume that $20$% of the population can be classified as good risk, $55$% as average risk and $25$% as
bad risk.
a. Find the proportion of policy holders having accidents in any one year.
b. Suppose that a new policy holder has an accident within a year of purchasing a policy. Find
the probability that the policy holder is an average risk.
For part a, I think it was just asking for the total probability of an accident which was P(A) = P(good)P(A|good) + P(avg)P(A|avg) + P(bad)P(A|bad), which is: $$.2(.04)+.55(.12)+.25(.3) = .149.$$
I think part b) involves Bayes' theorem, where P(avg|A) = (P(A|avg)P(avg))/P(A); the denominator is the answer from part a, and the factors in the numerator are given in the statement.
Thanks in advance! :)
| 20% is $0.2$. Since the probability of a good-risk person having an accident is $0.04$, the proportion of the total population that is good-risk and has an accident is $0.2\cdot 0.04 = 0.008.$
Similarly, $0.55\cdot 0.12=0.066$ and $0.25\cdot 0.3 = 0.075$ are the proportions for average- and bad-risk people having an accident in one year.
$$0.008+0.066+0.075 = 0.149.$$
This means $$\boxed{14.9\%}$$
had accidents in one year.
The average people contribute $6.6\%$, so $\frac{6.6\%}{14.9\%} \approx 0.4429.$
Therefore, for b the answer is $$\boxed{44.29\%}$$
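The same arithmetic as a short script (an editorial addition, not part of the original answer):

```python
priors = {"good": 0.20, "average": 0.55, "bad": 0.25}
p_acc = {"good": 0.04, "average": 0.12, "bad": 0.30}

# Part a: total probability of an accident (law of total probability)
p_a = sum(priors[g] * p_acc[g] for g in priors)

# Part b: Bayes' theorem, P(average | accident)
p_avg_given_a = priors["average"] * p_acc["average"] / p_a

print(p_a, p_avg_given_a)  # ~0.149 and ~0.443
```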
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3839987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $\int_0^\infty \frac{\ln^2x}{(x+1)^2+1} \, dx=\frac{5\pi^3}{64}+\frac\pi{16}\ln^22$ Tried to evaluate the integral
$$I=\int_0^\infty \frac{\ln^2x}{(x+1)^2+1} \, dx$$
and managed to show that
\begin{align}
I &= \int_0^1 \frac{\ln^2x}{(x+1)^2+1} \, dx + \int_0^1 \frac{\ln^2x}{(x+1)^2+x^2} \, dx\\
&= \int_0^1 \frac{\ln^2x}{(x+1+i)(x+1-i)} \, dx
+ \int_0^1 \frac{\ln^2x}{(x+1+ix )(x+1-ix )} \, dx\\
&= -2\operatorname{Im}\operatorname{Li}_3\left(-\frac{1+i}2\right)
-2\operatorname{Im} \operatorname{Li}_3(-1-i)
\end{align}
which is equal to $ \frac{5\pi^3}{64}+\frac\pi{16}\ln^22$.
It is perhaps unnecessary, though, to resort to evaluation in complex space. I would like to work out an elementary derivation of this integral result.
| $$I(a)=\int_0^\infty\frac{x^{-a}}{x^2+2x+2}dx\overset{x\to 1/x}{=}\int_0^\infty\frac{x^a}{2x^2+2x+1}dx$$
$$=\Im\int_0^\infty\frac{(1+i)x^a}{1+(1+i)x}dx\overset{(1+i)x=u}{=}\Im \frac{1}{(1+i)^a}\int_0^\infty\frac{u^a}{1+u}du$$
$$=-\Im\frac{\pi\csc(a\pi)}{(1+i)^a}=2^{-a/2}\csc(a\pi)\sin(\frac{\pi}{4}a),$$
and your integral is $I''(0).$
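As a numerical cross-check of the stated closed form (an editorial addition, not part of the original answer), split the integral at $x=1$ and fold the tail into $(0,1]$ via $x\mapsto 1/x$, exactly as in the question:

```python
import math

def head(x):   # ln^2(x) / ((x+1)^2 + 1) on (0, 1)
    return math.log(x) ** 2 / ((x + 1) ** 2 + 1)

def tail(u):   # the x in (1, inf) part, mapped by x -> 1/u onto (0, 1)
    return math.log(u) ** 2 / (2 * u * u + 2 * u + 1)

n = 200_000
h = 1.0 / n
approx = sum(h * (head((k + 0.5) * h) + tail((k + 0.5) * h)) for k in range(n))

exact = 5 * math.pi ** 3 / 64 + math.pi * math.log(2) ** 2 / 16
print(approx, exact)  # both ~2.517
```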
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 1
} |
Find the probability that $(a-c)^2+(b-d)^2<1/4$ I was trying to solve for $P((a-c)^2+(b-d)^2<1/4)$ where $a,b,c,d$ are independent and uniformly distributed on $[0,1]$. One thing that I found online was that $a-c$ has a triangular distribution.
I think that you have to find the cumulative distribution function, but I don't know how to do that, though I am willing to look through some resources which could point to how to solve these kinds of problems. I briefly looked through a probability and statistics textbook, but couldn't find what I needed.
| The quantity $(a-c)^2 + (b-d)^2$ is the square of the distance between the coordinates $(a,b)$ and $(c,d)$, where we clearly have both points independent and identically distributed uniformly in the unit square $[0,1]^2$. Thus, your probability is geometrically equivalent to asking for the probability that two points, picked uniformly at random in the unit square, will be less than $1/2$ units apart. How might you reason from this point forward?
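One way to continue is a Monte Carlo estimate, compared against the known closed form for the distance between two uniform points in the unit square (both the script and the quoted formula are editorial additions, not part of the original answer):

```python
import math
import random

random.seed(12345)
trials = 200_000
hits = 0
for _ in range(trials):
    a, b = random.random(), random.random()
    c, d = random.random(), random.random()
    if (a - c) ** 2 + (b - d) ** 2 < 0.25:
        hits += 1
estimate = hits / trials

# Known closed form for two uniform points in the unit square (0 <= r <= 1),
# quoted here without proof: P(distance < r) = pi r^2 - 8 r^3 / 3 + r^4 / 2
r = 0.5
closed_form = math.pi * r ** 2 - 8 * r ** 3 / 3 + r ** 4 / 2
print(estimate, closed_form)  # both should be ~0.483
```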
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
When is the $\lim\sup(a_n+b_n)$ strictly less than $\lim \sup (a_n)+\lim\sup(b_n)$ So I recently proved this inequality in my real analysis class:
$$\lim\sup(a_n+b_n)\leq \lim\sup(a_n) + \lim\sup(b_n)$$
However I am wondering when this inequality is strictly less than. I've tried out a bunch of sequences and here is my thought process so far:
To get a view of when this inequality is strict, look at the two sequences:
$$a_n=(0,1,0,1,0,....)$$
$$b_n=(1,0,1,0,1,....)$$
Clearly:
$$\lim \sup (a_n+b_n)=1$$
However:
$$\lim \sup (a_n)+\lim\sup(b_n)=2$$
This seems to imply that if the limit of a sequence does not exist then we have a strict inequality. Furthermore, what if:
$$a_n=(1,1,1,1,....)$$
$$b_n=(0,1,0,1,....)$$
Well then clearly:
$$\lim \sup (a_n+b_n)=2$$
$$\lim \sup(a_n)+\lim \sup (b_n)=2$$
And we have equality, which contradicts our earlier hypothesis. Perhaps then it is if both limits do not exist, well let $b_n$ be defined as previously and $a_n=cb_n$, for some $c\in\mathbb{R}$ both these limits of the sequence do not exist however we obtain:
$$\lim \sup (a_n+b_n)=c+1$$
$$\lim\sup(a_n)+\lim\sup(b_n)=c+1$$
And we have equality again. Well then perhaps it must be that both limits do not exist, and that the sequences cannot be scalar multiples of each other. Well then define:
$$a_n=(0,1,0,1,0,....)$$
$$b_n=(0,0,0,1,0,0,0,1...)$$
Clearly:
$$\lim \sup (a_n+b_n)=2$$
$$\lim \sup(a_n)+\lim \sup (b_n)=2$$
From here I tried testing some cases where $\sup(a_n)\neq \lim\sup(a_n)$ but I still continued to get equalities. So I really can't figure out under what conditions strict inequality holds. I guess if $a_n$ and $b_n$ look periodic but are shifted by a non-even $n$ then that would make sense, but I feel like there's more to it than that.
| Set $$a=\limsup a_n, \quad b=\limsup b_n,$$ so that the inequality above reads
$$
\limsup \ (a_n + b_n) \ \leq \ a+b . \tag{*}
$$
Then, (*) holds with equality if and only if there exists a sequence of indices $n_k$ such that
$$
\lim_k a_{n_k}=a \quad \text{and} \quad \lim_k b_{n_k}=b.
$$
Proof.
From RHS to LHS:
$$
\limsup_n \ (a_n+b_n) \ \geq \ \lim_k \ (a_{n_k}+b_{n_k}) \ = \ a+b.
$$
From LHS to RHS:
as $\limsup_n \ (a_n+b_n) \ = \ a+b$,
$$ \text{there exists a subsequence of } a_n+b_n \text{ such that} \\
\lim_k \ (a_{n_k}+b_{n_k}) \ = \ a+b .
$$
Now, $\limsup_k a_{n_k} = a$; otherwise
$$
a+b \ = \ \lim_k \ (a_{n_k}+b_{n_k}) \ = \ \limsup_k \ (a_{n_k}+b_{n_k})
\leq \limsup_k \ a_{n_k} + \limsup_k \ b_{n_k} \ < \ a+b,
$$
a contradiction.
Therefore, there exists a sub-subsequence $a_{n_{k_l}}$ converging to $a$.
Finally, $b_{n_{k_l}}\ =\ (a_{n_{k_l}}+b_{n_{k_l}}) - a_{n_{k_l}}$ is the difference of two convergent sequences with limits, respectively, $a+b$ and $a$.
Note that if at least one of the sequences is convergent, then equality is achieved.
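The strict case from the question's first example can also be illustrated mechanically (an editorial addition; for eventually periodic sequences the limsup is just the maximum over a late tail):

```python
# a_n = 0,1,0,1,...  and  b_n = 1,0,1,0,...  (the question's first example)
a = [n % 2 for n in range(1000)]
b = [(n + 1) % 2 for n in range(1000)]
s = [x + y for x, y in zip(a, b)]

# For eventually periodic sequences the limsup is the max over a late tail
limsup = lambda seq: max(seq[500:])
print(limsup(s), limsup(a) + limsup(b))  # 1 2: the inequality is strict
```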
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
$8$ distinct balls are randomly distributed among $4$ boxes. What is the probability that each box has exactly two balls? I made up a question in my head about probability but I am not sure about the solution.
Question: There are $8$ distinct balls and $4$ distinct boxes. An individual distributes the balls into the boxes randomly. What is the probability that each box has exactly two balls?
My solution: The sample space is ${4}^{8}$ and the number of ways of distributing exactly two balls into each of the four boxes is: $ C(8,2).C(6,2).C(4,2).C(2,2).P(4,4)$
$\therefore \frac {C(8,2).C(6,2).C(4,2).C(2,2).P(4,4)}{{4}^{8}} $
Is my solution correct? I feel that I am wrong somewhere. If it is not correct, can you give hints or the solution? Thanks for your help.
| A natural interpretation is that you have a sequence of eight steps: at step $n$ you take ball number $n$ and put it in some numbered box.
You can then choose two steps among the eight at which a ball will be placed in box number $1$. There are $C(8,2)$ ways to choose the two steps, which is tantamount to choosing which two balls go in box $1$.
Next you choose two steps among the remaining six at which a ball will be placed in box number $2$. There are $C(6,2)$ ways to choose these two steps.
Next you have $C(4,2)$ ways to choose the two steps at which to put a ball in box $3$, and finally $C(2,2) = 1$ way to choose the two steps at which to put a ball in box $4$.
Notice that among the $C(8,2) C(6,2)$ ways the balls could go into boxes $1$ and $2$, one way is to put balls $1$ and $2$ in box $1$ and to put balls $3$ and $4$ into box $2$.
Another way is to put balls $3$ and $4$ into box $1$ and to put balls $1$ and $2$ in box $2$.
That is, by the time you have multiplied $C(8,2) C(6,2)$, you not only have counted the number of ways to choose two pairs of balls from the original eight balls, you have also counted the number of ways those two pairs could be placed in the two boxes by switching which pair goes in which box.
You do not have to multiply by any additional factors to account for the ways to distribute the pairs of balls between the two boxes after selecting the pairs.
In fact, if you do multiply by any additional factor other than $1$, you will produce a wrong answer.
Similarly, by the time you have multiplied $C(8,2) C(6,2) C(4,2) C(2,2)$,
you have already counted all the ways to assign the pairs of balls to specific boxes, for example, balls $1$ and $2$ in box $1$, balls $3$ and $4$ in box $2$,
balls $5$ and $6$ in box $3$, and balls $7$ and $8$ in box $4$.
Given any set of four disjoint pairs of balls, you have already counted each permutation of the pairs.
If you now start shuffling pairs of balls among the boxes, you will be multiply counting outcomes that you have already counted.
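The count can be double-checked by brute force over all $4^8$ assignments (illustrative script, not part of the original answer):

```python
from itertools import product
from math import comb

# Closed form from the answer: choose the two steps for each box in turn
favorable = comb(8, 2) * comb(6, 2) * comb(4, 2) * comb(2, 2)
probability = favorable / 4 ** 8

# Brute force over all 4^8 assignments of balls to boxes as a cross-check
count = 0
for assignment in product(range(4), repeat=8):
    if all(assignment.count(box) == 2 for box in range(4)):
        count += 1

print(favorable, count, probability)  # 2520 2520 ~0.0385
```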
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expected number of games so that every player has been an Imposter in Among Us Imagine you have $n$ players. In each game, $k$ $(k\leq n)$ players are chosen randomly to do whatever.
* By game $G$, what is the probability that every player has been chosen at least once?
* What is the expected number of games that have to be played so that every player was chosen at least once?
Of course, when $G < n/k$ the probability is zero.
This idea came from the game Among Us, where there are $n$ players ($n\leq10$) and in each game you have $k$ imposters (usually $1$, $2$ or $3$).
| This can be found using the principle of inclusion-exclusion. First estimate the probability that someone has not been picked, by adding up, for each player, the probability that that player was never picked; the result is $n\big(\binom{n-1}k/\binom{n}k\big)^G$. Subtracting this from $1$ overshoots, because outcomes in which two players were both missed are subtracted twice, so you must add back in, for each pair of players, the probability that they were both not picked, which is $\binom{n}2\big(\binom{n-2}k/\binom{n}k\big)^G$. Continuing in this fashion, the probability that everyone was picked is
$$
\mathbb P(\text{all players were picked in $G$ rounds})=\sum_{i=0}^n(-1)^i\binom{n}i\left[\binom{n-i}k\big/\binom{n}k\right]^G
$$
Finally, let $X$ be the number of rounds. Note that the event $\{X> G\}$ occurs if and only if not everyone was picked in $G$ rounds. Using the "layer cake formula" for expected value,
\begin{align}
\mathbb E[X]
&=\sum_{G=0}^\infty P(X>G)
\\&=\sum_{G=0}^\infty\sum_{i=1}^n(-1)^{i+1}\binom{n}i\left[\binom{n-i}k\big/\binom{n}k\right]^G
\\&=\sum_{i=1}^n(-1)^{i+1}\binom{n}i\frac{1}{1-\binom{n-i}k\big/\binom{n}k}
\end{align}
Here is the formula in Wolfram Alpha so you can see specific values. As a special case, when $k=1$, this is just the Coupon Collector's problem, for which the expected number of trials is $n(1+1/2+\dots+1/n)$.
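The closed form can be checked against the coupon-collector special case (illustrative script, not part of the original answer):

```python
from math import comb

def expected_games(n, k):
    """Expected number of games until all n players have been chosen,
    k chosen uniformly at random each game (formula from the answer)."""
    total = comb(n, k)
    return sum(
        (-1) ** (i + 1) * comb(n, i) / (1 - comb(n - i, k) / total)
        for i in range(1, n + 1)
    )

# Special case k = 1 is the coupon collector's problem: n * H_n
n = 10
coupon = n * sum(1 / j for j in range(1, n + 1))
print(expected_games(n, 1), coupon)  # both ~29.29
```

As an extra sanity check, with $k=n$ every player is an imposter in the first game, and the formula indeed gives $1$.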
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Suppose that a group $G$ contains two elements $m$ and $n$ such that $mn=nm$ and $\langle m\rangle\cap\langle n\rangle=\{e\}$. Suppose that a group $G$ contains two elements $m$ and $n$ such that $mn=nm$ and $\langle m\rangle\cap\langle n\rangle=\{e\}$.
Show that if $m^s\cdot n^t = e$, that $m^s=e$ and $n^t = e$.
Does $\langle m\rangle\cap\langle n\rangle=\{e\}$ mean that the two subgroups are relatively prime? To be honest I am having trouble on where to go with this.
| Hint Show that $m^s \in \langle m \rangle \cap \langle n \rangle$.
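For completeness, the hint can be spelled out in one line of algebra (this is an editorial fill-in, not part of the original answer; note that commutativity is not even needed for this particular step):

```latex
m^s n^t = e \;\Longrightarrow\; m^s = n^{-t} \in \langle n \rangle,
\qquad\text{while trivially } m^s \in \langle m \rangle, \\
\text{so } m^s \in \langle m \rangle \cap \langle n \rangle = \{e\}
\;\Longrightarrow\; m^s = e
\;\Longrightarrow\; n^t = m^s n^t = e .
```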
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why is the set of all polynomial functions of degree at most n a subspace of the set of all functions? I am given that:
Let $F(\mathbb R)$ be the set of all functions $f: \mathbb R \rightarrow \mathbb R$ and let $P_n(\mathbb R)$ be the set of all polynomial functions from $\mathbb R$ to $\mathbb R$ of degree at most $n$. $P_n(\mathbb R)$ is a subspace of $F(\mathbb R)$
However I wouldn't think that the zero function was an element of $P_n(\mathbb R)$ since the zero function is a polynomial function of undefined degree and so does not have degree of at most $n$. And so $P_n(\mathbb R)$ cannot be a subspace of $F(\mathbb R)$
Is it assumed here that the zero function is included in $P_n(\mathbb R)$? If so, why?
| $P_n(\Bbb R)$ satisfies the subspace criterion. Namely, it is closed under addition and scalar multiplication.
Thus we have a vector subspace (of $\Bbb F(\Bbb R)$).
I would think that the zero polynomial would have degree zero.
Looking at the other answer, the comments, and again at your question, you don't explicitly say "vector space", so I am a little confused. Why it would help to consider the zero polynomial to have degree $-\infty$ is beyond me. On second perusal, @Qiaochu Yuan has given a couple of intelligent reasons for doing that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
General solution of $x^{ 2 }\left( y-x\frac { dy }{ dx } \right) =y{ \left( \frac { dy }{ dx } \right) }^{ 2 }$ Find the general solution of the differential equation
$${ x }^{ 2 }\left( y-x\dfrac { dy }{ dx } \right) =y{ \left( \dfrac { dy }{ dx } \right) }^{ 2 }.$$
I tried it by first finding $\dfrac { dy }{ dx } $ using the quadratic formula, but I get a complicated differential equation:
$\dfrac { dy }{ dx } =\dfrac { -{ x }^{ 3 }\pm \sqrt { { { x }^{ 6 }+ }4{ x }^{ 2 }{ y }^{ 2 } } }{ 2y } $
| Hint: Since $y\equiv0$ is a solution, we can as well multiply the equation by $y$. Then, with $$z=\frac12y^2+\frac18x^4,$$ we arrive at $$2x^2z=z'^2,$$ after an elementary calculation.
Can you take it from there?
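The "elementary calculation" behind the hint can be written out explicitly (editorial fill-in, not part of the original answer):

```latex
\text{Multiplying the ODE by } y:\qquad x^2 y^2 - x^3 y y' = y^2 (y')^2 . \\
\text{With } z = \tfrac12 y^2 + \tfrac18 x^4 \text{ one has } z' = y y' + \tfrac12 x^3,
\text{ hence} \\
(z')^2 = y^2 (y')^2 + x^3 y y' + \tfrac14 x^6
       = \bigl(x^2 y^2 - x^3 y y'\bigr) + x^3 y y' + \tfrac14 x^6
       = x^2 y^2 + \tfrac14 x^6 = 2 x^2 z .
```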
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3840996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find $\lim\limits_{x\to -\infty} (e^{-x} \cos{x})$ $$\lim\limits_{x\to -\infty} (e^{-x} \cos{x})=\lim\limits_{x\to -\infty} \left(\dfrac{\cos{x}}{e^x}\right)$$
From there, I see that $e^x$ approaches $0$ while $\cos{x}$ oscillates between $-1$ and $1$.
My answer is that the limit does not exist. What is the proper reasoning to explain this? Does the limit oscillate forever, approach $\pm\infty$, etc.?
| Yes, your idea is right. To show it rigorously, consider $x_n= -2\pi n \to -\infty$ as $n\to \infty$, for which
$$e^{-x_n} \cos{x_n}=e^{-x_n}\to+\infty,$$
while for $x_n= -(2n+1)\pi$ we have $\cos x_n=-1$, so
$$e^{-x_n} \cos{x_n}=-e^{-x_n}\to-\infty;$$
therefore the limit doesn't exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Extension of Definition of Concavity Question
Suppose that the graph of a function $f$ is concave up on an open interval $I$. Show that, for any $a, b \in I$, where $a < b$ and $0 < \lambda < 1$, $$(1 - \lambda)f(a) + \lambda f(b) > f((1 - \lambda)a + \lambda b).$$
My working
From the definition of a function being concave up, $$f(b) - f(a) > (b - a)f'(a).$$
When $b = (1 - \lambda)a + \lambda b$,
$$f((1 - \lambda)a + \lambda b) - f(a) > ((1 - \lambda)a + \lambda b - a)f'(a)$$
$$\implies f((1 - \lambda)a + \lambda b) - f(a) > \lambda (b - a)f'(a)$$
This is where I am currently stuck at. I have a hunch that I am supposed to use the definition once more by substituting another set of values for $b$ and $a$ in order to get rid of the $f'(a)$, but I cannot see what they are. Any intuitions will be greatly appreciated!
| A function $f: I \to \Bbb R$ is strictly convex (aka strictly “concave up”) if
$$ \tag 1
f((1 - \lambda)a + \lambda b) < (1 - \lambda)f(a) + \lambda f(b)
$$
for all $a, b \in I$ with $a \ne b$ and all $\lambda \in (0, 1)$. For a differentiable function this is equivalent to
$$ \tag 2
f(y) - f(x) > (y - x)f'(x)
$$
for all $x, y \in I$ with $x \ne y$.
To see that $(2)$ implies $(1)$, set $c = (1 - \lambda)a + \lambda b$. Then
$$
\begin{align}
f(a) &> f(c) + (a-c) f'(c) = f(c) + \lambda (a-b) f'(c) \\
f(b) &> f(c) +(b-c) f'(c) = f(c) +(1-\lambda) (b-a) f'(c)
\end{align}
$$
and therefore
$$
(1-\lambda)f(a) + \lambda f(b) > (1-\lambda)f(c) + \lambda f(c) = f(c) \, .
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
When we substitute variables do we compose functions? Suppose that we have the function $f(u)=2u+3$ and we define $u=2x$. Which of the expressions below is correct?
$$f(u(x))=4x+3 \ \text{or} \ f(x)=4x+3 $$
Does the "definition" serve only as a substitution (shorthand), i.e. is "$f$" still the same function, or is it the composition $f \circ u$, irrespective of the fact that we didn't write $u(x)=2x$ at first (abuse of notation)?
| The second of your suggestions is the correct interpretation. We often abuse notation and write $f(x)$, although one actually defines a new function by composition $\hat f(x)=f\circ u(x)$. The most common application is substitution when calculating integrals. The error cancels at the end of those calculations by resubstitution (for indefinite integrals) or because you also substitute the boundaries (for definite integrals).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Epsilon-Delta proof of a quadratic limit I just started learning about limits and their definition. I'm practicing with some proofs that involve the epsilon-delta definition of a limit. While I absolutely have no problem with linear functions, I really don't know how to prove quadratic limits. This is the original limit that I'm trying to prove:
$\lim \limits_{x\to2} (12x^2-3x+8)=50$
I started with the epsilon- delta definition.
Given an $\epsilon$ > 0, there is always a $\delta$ >0 such that:
If $|x-2|<\delta$,then $|12x^2-3x+8-50| = |12x^2-3x-42|<\epsilon$
I found out that $12x^2-3x-42 =(x-2)(12x+21)$. So, by starting from the hypothesis:
$|x-2|<\delta$
$-\delta<x-2<\delta$
$-\delta(12x+21)<(x-2)(12x+21)<\delta(12x+21)$
$-\delta(12x+21)<(12x^2-3x-42)<\delta(12x+21)$
$|12x^2-3x-42|<\delta(12x+21)$. This form is very similar to $|12x^2-3x-42|<\epsilon$, which suggests
that I should take $\delta=\frac{\epsilon}{12x+21}$, so that $\delta(12x+21)=\epsilon$
Now I have a big problem. $\delta$ cannot depend on x, so I must find a bound, but I do not know how.
I thought that maybe I could first consider values of x that are greater than 2
if $x>2$
$12x>24$, which means that $12x+21>45$
If $12x+21>45$,then $\frac{\epsilon}{12x+21}<\frac{\epsilon}{45}$, so $|x-2|<\delta=\frac{\epsilon}{12x+21}<\frac{\epsilon}{45}$, which implies that $|x-2|<\frac{\epsilon}{45}$. So a new choice for $\delta$ could be $\frac{\epsilon}{45}$, but it is valid only for $x>2$.
At this point I thought that I should also consider values of x that are less than 2.
So for $x<2$, I have that $12x+21<45$, so now $\frac{\epsilon}{12x+21}>\frac{\epsilon}{45}$,,so $|x-2|<\frac{\epsilon}{12x+21}>\frac{\epsilon}{45}$. I'm blocked right here because I cannot find a new delta for x < 2. How should I proceed?
| Take $\varepsilon>0$. Note that\begin{align}|x-2|<1&\iff1<x<3\\&\iff12<12x<36\\&\iff33<12x+21<57\\&\implies|12x+21|<57.\end{align}So, take $\delta=\min\left\{1,\frac\varepsilon{57}\right\}$. Then$$|x-2|<\delta\implies\bigl|(x-2)(12x+21)\bigr|<\frac\varepsilon{57}\times57=\varepsilon.$$
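One can also confirm numerically that the chosen $\delta$ works (illustrative script, not part of the original answer):

```python
f = lambda x: 12 * x * x - 3 * x + 8

def delta(eps):
    # The answer's choice: delta = min(1, eps/57)
    return min(1.0, eps / 57)

ok = True
for eps in (1.0, 0.1, 0.001):
    d = delta(eps)
    for i in range(1, 2000):            # sample x strictly inside (2 - d, 2 + d)
        x = 2 - d + 2 * d * i / 2000
        if not abs(f(x) - 50) < eps:
            ok = False
print(ok)  # True: |f(x) - 50| < eps whenever |x - 2| < delta
```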
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Show that the solutions to the equation $ax^2 + 2bx + c =0$ are given by $x = -\frac{b}{a} \pm \sqrt{\frac{b^2-ac}{a^2}}$
Show that the solutions to the equation $ax^2 + 2bx + c =0$ are given by $x = -\frac{b}{a} \pm \sqrt{\frac{b^2-ac}{a^2}}$
Hint: Start by dividing the whole equation by $a$
At first I tried solving the equation without using the hint provided in my exercise, directly completing the square, and I get $x = -\frac{b}{a} \pm \sqrt{\frac{b^2-c}{a}}$. If I use the hint instead, I obtain the appropriate answer. But I wonder, if I am asked the same question in my exam, where the hint will not be provided, how am I supposed to answer?
I would like to know how one should approach this kind of question and how to realise when to divide the whole equation by, in this case, $a$, or whether there are other ways so that I can avoid dividing the whole equation by $a$. Thanks in advance for any help you are able to provide!
EDIT: Here are my steps. Please see where I have gone wrong.
\begin{align}
ax^2+2bx+c&=0 \\
a\left[\left(x+\frac{b}{a}\right)^2-\frac{b^2}{a^2}\right] + c&=0 \\
a\left(x+\frac{b}{a}\right)^2-\frac{b^2}{a} + c&=0 \\
\left(x+\frac{b}{a}\right)^2&=\left(\frac{b^2}{a}-c\right)\left(\frac{1}{a}\right)\\
\left(x+\frac{b}{a}\right)^2&=\frac{a(b^2-c)}{a^2}\\
\left(x+\frac{b}{a}\right)^2&=\frac{b^2-c}{a} \\
x+\frac{b}{a}&=\pm\sqrt{\frac{b^2-c}{a}} \\
\implies x&=-\frac{b}{a}\pm\sqrt{\frac{b^2-c}{a}}\\
\end{align}
| HINT: Complete the square:
$$ax^2+2bx+c=0$$
$$a\left(x^2+\frac{2b}{a}x\right)+c=0$$
Try the rest from here.
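A quick numeric check that the completed-square roots really satisfy $ax^2+2bx+c=0$ (illustrative script with made-up sample coefficients, not part of the original answer):

```python
import math

def roots(a, b, c):
    """Solutions of a x^2 + 2 b x + c = 0 via (x + b/a)^2 = (b^2 - a c)/a^2,
    assuming a != 0 and b^2 >= a*c."""
    disc = (b * b - a * c) / (a * a)
    return (-b / a + math.sqrt(disc), -b / a - math.sqrt(disc))

a, b, c = 3.0, 5.0, 2.0                 # i.e. 3x^2 + 10x + 2 = 0
for x in roots(a, b, c):
    print(a * x * x + 2 * b * x + c)    # ~0 for both roots
```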
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Expected value conditioned on subset of domain I have a random variable $X$ with pdf $f(\cdot)$ and support $[0, b]$. As part of some results I am generating, I am left with the integral $$\int_a^b x f(x) \, dx$$ where $0<a<b.$ I am wondering how to think about this integral - can this be expressed $E[X\mid X>a]$ or something like this? Seems like a straightforward question I know but I couldn’t think of the right search terms to use. I’d welcome a link to another post asking the same thing if it exists.
| If $S$ is any subset of $(a,b),$ then you have
\begin{align}
& \Pr(X\in S\mid X>a) = \frac{\Pr(X\in S\ \&\ X>a)}{\Pr(X>a)} \\[8pt]
= {} & \frac{\Pr(X\in S)}{\Pr(X>a)} = \frac{\int_S f(x)\,dx}{\int_a^b f(x)\, dx}.
\end{align}
Therefore the conditional probability density of $X$ given that $X>a$ is
$$
f_{X\,\mid\,X\,>\,a}(x) = \frac{f(x)}{\int_a^b f(u)\,du}.
$$
So you integrate $x$ times that to get the conditional expected value.
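A worked numerical example of these formulas (hypothetical density chosen only for illustration, not part of the original answer):

```python
# Concrete density f(x) = 2x on [0, 1], with a = 0.5
a, b = 0.5, 1.0
f = lambda x: 2 * x

n = 100_000
h = (b - a) / n
mids = [a + (k + 0.5) * h for k in range(n)]

num = sum(x * f(x) for x in mids) * h   # integral of x f(x) over [a, b]
den = sum(f(x) for x in mids) * h       # P(X > a)
cond_mean = num / den                   # E[X | X > a]

print(cond_mean)  # exact value is 7/9, about 0.7778
```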
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3841906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Partition defined by orbits is trivial? So this may be a dumb question. I've stumbled across a potentially good false proof except I can't find the error. According to this reasoning, every partition defined by the orbits of the elements of G is the trivial partition (the whole of G):
So let the eq. relation $\sim$ be defined by $a \sim b$ iff $b \in orb(a)$. Obviously, the identity element $e$ is in every element's orbit, but if $a \sim b$ and $c \sim b$ then $a \sim c$ therefore, since every element is equivalent to $e$, all the elements are equivalent among themselves, therefore they are all in the same equivalence class. Another way of proving the same thing would be to say that in a group, every element can be 'made' from the combination of any other element with a specific element (ie the eq-n $a = xb$ always has a solution (trivially, $x = ab^{-1}$)), therefore the equivalence classes are all equal... my question is, where did I go wrong, if I did.
| You say that $a\sim b $ iff $b\in orb(a)=\{ax| \ x \in X\}$ where $G$ acts on $X$.
So I suppose you have considered a very specific action, namely $G$ acts on $G$ by left multiplication. This action has indeed one orbit (i.e. one equivalence class)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
It is given that $(1+x)(1+{x}^{2})(1+{x}^{3})(1+{x}^{4})\cdots(1+{x}^{100})$; find the coefficient of ${x}^{9}$ in the expansion. I saw a question in my textbook but I got stuck on it. The question is:
It is given that $(1+x)(1+{x}^{2})(1+{x}^{3})(1+{x}^{4})......(1+{x}^{100})$
a-)Find the coefficient of ${x}^{9}$ in the expansion.
b-)Find the coefficient of ${x}^{28}$ in the expansion.
Apart from this question, I wonder what would happen if the question were in the form $(1+a_0x)(1+a_1{x}^{2})(1+a_2{x}^{3})(1+a_3{x}^{4})\cdots(1+a_{99}{x}^{100})$ where $a_0,\ldots,a_{99}$ are rational numbers.
a-)Find the coefficient of ${x}^{9}$ in the expansion.
b-)Find the coefficient of ${x}^{28}$ in the expansion.
I am open to hints, shortcuts, and full solutions for both of these two questions. Thanks for your help.
| Hint: The coefficient of $x^9$ is the number of ways you can decompose $9$ as a sum of distinct positive integers.
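These decompositions can also be counted by machine, multiplying out the product degree by degree (a Python sketch I added; the DP truncates at the target degree):

```python
def coeff(target, max_part):
    """Coefficient of x^target in (1+x)(1+x^2)...(1+x^max_part)."""
    poly = [0] * (target + 1)
    poly[0] = 1
    for k in range(1, max_part + 1):
        for d in range(target, k - 1, -1):  # multiply by (1 + x^k), truncated
            poly[d] += poly[d - k]
    return poly[target]

assert coeff(9, 100) == 8   # 9, 8+1, 7+2, 6+3, 5+4, 6+2+1, 5+3+1, 4+3+2
assert coeff(5, 100) == 3   # 5, 4+1, 3+2
```

Part b) is the same call with `target=28`.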
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
By first expanding $(\cos^2x + \sin^2x)^3$, otherwise, show that $ \cos^6x + \sin^6x = 1 - (3/4)\sin^2(2x)$ By first expanding $(\cos^2x + \sin^2x)^3$, otherwise, show that
$$ \cos^6x + \sin^6x = 1 - (3/4)\sin^2(2x)$$
Here's what I've done so far (starting from after expansion):
$\cos^6x + (3\cos^4x\sin^2x) + (3\cos^2x\sin^4x) + \sin^6x$
$\cos^6x + (3\cos^2x\sin^2x)(\cos^2x+\sin^2x) + \sin^6x$
$\cos^6x + (3\cos^2x\sin^2x) + \sin^6x$
$\cos^6x + \sin^6x = -3\cos^2x\sin^2x$
$\cos^6x + \sin^6x = (-3/2)(2\cos^2x\sin^2x)$
$\cos^6x + \sin^6x = (-3/2)(\sin^22x)$
How can I get it into $ 1 - (3/4)\sin^2(2x)$?
| It should be
$(\cos^2x+\sin^2x)^3=1$
$\cos^6x + (3\cos^4x\sin^2x) + (3\cos^2x\sin^4x) + \sin^6x\color{red}{=1}$
$\cos^6x + (3\cos^2x\sin^2x)(\cos^2x+\sin^2x) + \sin^6x=1$
$\cos^6x + (3\cos^2x\sin^2x) + \sin^6x=1$
$\cos^6x + \sin^6x = 1-3\cos^2x\sin^2x$
$\cos^6x + \sin^6x = 1-\dfrac3{\color{red}4}(4\cos^2x\sin^2x)$
$\cos^6x + \sin^6x = 1-\dfrac3{\color{black}4}(2\cos x\sin x)^2$
$\cos^6x + \sin^6x = 1-\dfrac34(\sin^22x)$
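As a numerical sanity check of the final identity (my own addition; the sample points are arbitrary):

```python
import math

for x in [0.1, 0.7, 1.3, 2.9, -1.1]:
    lhs = math.cos(x) ** 6 + math.sin(x) ** 6
    rhs = 1 - 0.75 * math.sin(2 * x) ** 2
    assert abs(lhs - rhs) < 1e-12  # identity holds to machine precision
```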
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 9,
"answer_id": 5
} |
Let $S$ be a subspace of a vector space $V$. Show that the additive identity of $S$ is the additive identity of $V$. Working on the book: Robert Messer. "Linear algebra - The gateway to mathematics" (p. 55)
16. Suppose $S$ is a subspace of a vector space $V$.
a. Show that the additive identity of $S$ is the additive identity of $V$.
This is my attempt to prove it:
*
*$0$ is the additive identity of $V$ and $0'$ is the additive identity of $S$.
*
*Assume $v \in S$
*$v + 0' = v$ ($0'$ is the additive identity of S)
*$v + 0 = v$ ($0$ is the additive identity of V)
*$\forall x(x \in V \to \exists! y(x+y=x))$ (0 is the unique additive identity of $V$)
*$v \in V \to \exists! y(v+y=v)$
*$v \in V$ (since $S \subseteq V$)
*$\exists! y(v+y=v)$
*
*$v+z=v \land \forall z'(z' \in V \land v+z'=v \to z'=z)$
*$\forall z'(z' \in V \land v+z'=v \to z'=z)$
*$0 \in V \land v+0=v \to 0=z$
*$0' \in V \land v+0'=v \to 0'=z$
*$0=0'$ (by transitivity)
*$\vdots$
Is my proof skeleton correct ?
Would it suffice to show
*
*If $0$ is the additive identity of $V$ then $0'$ is the additive identity of $S$ and,
*If $0'$ is the additive identity of $V$ then $0$ is the additive identity of $S$ ?
Would appreciate some insight from a logic point of view.
| My spiel is not too heavy on the logic, but I think it conveys the essential ideas:
Consider:
$\forall v \in V, \; 0_V + v = v + 0_V = v; \tag 1$
now let
$s \in S \subset V; \tag 2$
then
$s \in V, \tag 3$
whence, via (1),
$0_V + s = s; \tag 4$
also, by (2),
$0_S + s = s; \tag 5$
combining (4) and (5) we find
$0_V + s = 0_S + s; \tag 6$
we may now write
$(0_V + s) + (-s) = (0_S + s) + (-s), \tag 7$
from which
$0_V + (s + (-s)) = 0_S + (s + (-s)); \tag 8$
now,
$s + (-s) = 0_V, \tag 9$
so (8) becomes
$0_V + 0_V = 0_S + 0_V, \tag{10}$
and thus by virtue of (1),
$0_V = 0_V + 0_V = 0_S + 0_V = 0_S. \tag{11}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
For what values of $x$ is $-({10^{10}})^{-j}\leq x \leq ({10^{10}})^{-j}$ true, with $j=1,2,3,\ldots$? Can you give me any suggestions?
I understand that it is the same as
$|x|\leq({10^{10}})^{-j}$
but I don't know how to conclude
It depends on how you interpret the final phrase "with $j=1,2,3,\cdots.$" If it means just pick one of them, then your answer is OK. If it means it should be true for every $j$, you need to intersect all the answers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is it possible to find the expression of the antiderivative $\int \frac{dx}{\cosh(x)+\sqrt{\cosh(2x)}}$ I have been asked to express the integral $$\int \frac{dx}{\cosh(x)+\sqrt{\cosh(2x)}}$$
I thought about the substitution $$t=e^x$$
but it gave me a more complicated function. So, any idea will be appreciated.
| Using $\cosh(x)=t$ we end with
$$I=\int \frac{dt}{\sqrt{t^2-1} \left(t+\sqrt{2 t^2-1}\right)}$$ for which a CAS gives
$$I=\frac{-2 t^3+\sqrt{2 t^2-1}+\sqrt{2-4 t^2} \sqrt{1-t^2} \left(F\left(\sin
^{-1}\left(\sqrt{2} t\right)|\frac{1}{2}\right)-E\left(\sin ^{-1}\left(\sqrt{2}
t\right)|\frac{1}{2}\right)\right)+t}{ \sqrt{(t^2-1)(2 t^2-1)}}$$ Back to $x$
$$I=\text{csch}(x)-\sqrt{\cosh (2 x)} \coth (x)-i \left(F(i x|2)+E(i x|2)\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3842901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Identity for the CDF of Poisson Random Variable
My question is: how does the left-hand side of the equation in part (a) equal the probability $P(W_n >1)$, where $W_n$ is the waiting time of the $n$th event of a Poisson process on the unit interval with rate $\lambda$? The method I used to prove part (a) is using the gamma and beta functions.
| *
*The waiting time until the $n$th event can be written as $W_n = T_1 + T_2 + \cdots + T_n$, where $T_i$ is the waiting time between the $(i-1)$th event and the $i$th event (and where $T_1$ is the waiting time until the first event).
*For a Poisson process with rate $\lambda$, the $T_i$ are i.i.d. exponential random variables with rate $\lambda$.
*The sum of $n$ independent $\text{Exponential}(\lambda)$ random variables can be shown to follow a $\text{Gamma}(n, \lambda)$ distribution, whose density appears in the left-hand side of the equation you are asking about.
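The last point can be checked by simulation (a sketch I added; the parameters $\lambda = 2$, $n = 3$ are arbitrary):

```python
import random

random.seed(0)
lam, n, trials = 2.0, 3, 50_000
# W_n as a sum of n i.i.d. Exponential(lam) interarrival times
samples = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
assert abs(mean - n / lam) < 0.05      # Gamma(n, lam) mean: n/lam = 1.5
assert abs(var - n / lam ** 2) < 0.05  # Gamma(n, lam) variance: n/lam^2 = 0.75
```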
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expected squared distance between two jointly gaussian distributed random variables (dependent, with covariance) This is question basically asks for a generalization of the answer to this question:
Expected distance between two vectors that belong to two different Gaussian distributions . The difference here is that I know my variables have covariance.
If I have two $N$-dimensional random variables $X$ and $Y$ which are jointly gaussian distributed and not independent, i.e. the combined vector $Z=[X_1,\ldots,X_N,Y_1,\ldots,Y_N]$ is distributed according to $Z\sim \mathcal{N}(\mu_Z, \Sigma_Z)$, where
$\Sigma_Z = \begin{bmatrix}\Sigma_X & \Sigma_{XY}\\ \Sigma_{YX} & \Sigma_Y\end{bmatrix}$,
and $\Sigma_{XY}$, $\Sigma_{YX}$ are not just zero matrices.
What is the expected value of the squared euclidean distance between $X$ and $Y$?
I would highly appreciate help on this one. Also, please let me know if I can ask the question in a better way.
| Assuming that everything is $0$ mean.
You can reexpress the distance as
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \mathbb E[(X-Y)^T(X-Y)]\\
&= \mathbb E[X^TX] + \mathbb E[Y^TY]-2 \mathbb E[X^TY]
\end{align*}
To treat terms like $\mathbb E[X^T X]$ you can use the trick $X^T X = \text{Tr}(X^TX) = \text{Tr}(XX^T)$. This, together with the fact that the trace is linear, yields
\begin{align*}
\mathbb E[X^TX] &= \text{Tr} (\Sigma_X )\\
\mathbb E[Y^TY] &= \text{Tr} (\Sigma_Y )\\
\mathbb E[X^TY] &= \text{Tr} (\Sigma_{YX} )
\end{align*}
So in the end you obtain
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \text{Tr} (\Sigma_X ) + \text{Tr} (\Sigma_Y ) -2 \text{Tr} (\Sigma_{YX} )
\end{align*}
If $X$ now has mean $\mu_X$ and $Y$ has mean $\mu_Y$, then
\begin{align*}
&\| X-Y \|^2\\=& \| (X-\mu_X)-(Y-\mu_Y)+\mu_X-\mu_Y \|^2\\
=& \| (X-\mu_X)-(Y-\mu_Y) \|^2 + 2 \langle (X-\mu_X)-(Y-\mu_Y),\mu_X-\mu_Y\rangle +\| \mu_X-\mu_Y \|^2
\end{align*}
taking the expectation of this, the second term is $0$ since both $X-\mu_X$ and $Y-\mu_Y$ have mean $0$. So we get that
\begin{align*}
\mathbb E[\| X-Y \|^2] &= \text{Tr} (\Sigma_X ) + \text{Tr} (\Sigma_Y ) -2 \text{Tr} (\Sigma_{YX} ) + \| \mu_X-\mu_Y \|^2
\end{align*}
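A Monte Carlo sanity check of the final formula in the scalar case $N=1$ (my own sketch; unit variances, correlation $\rho$, and means chosen arbitrarily):

```python
import math
import random

random.seed(1)
rho, mu_x, mu_y = 0.5, 0.0, 1.0
n = 200_000
acc = 0.0
for _ in range(n):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu_x + g1
    y = mu_y + rho * g1 + math.sqrt(1 - rho ** 2) * g2  # Corr(x, y) = rho
    acc += (x - y) ** 2
mc = acc / n
# Tr(Sigma_X) + Tr(Sigma_Y) - 2 Tr(Sigma_YX) + |mu_X - mu_Y|^2 = 1 + 1 - 2*rho + 1
formula = 1 + 1 - 2 * rho + (mu_x - mu_y) ** 2
assert abs(mc - formula) < 0.05
```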
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Basis of $\{0\}$ set I am solving Linear Algebra and having a trivial doubt .
Is $W=\{\emptyset\}$, i.e. the empty set, a basis of $V=\{0\}$?
I have read some solutions regarding the above and they imply that since W contains no vector , W by definition is linearly independent .
I am not sure how W spans V. Can anyone explain this to me ?
"Every vector space has a basis."
Is the above statement true ?
| The standard convention for any binary operation with a neutral element is that the "empty operation" gives the neutral element. Some specific examples:
*
*The empty sum is $0$: $\sum_{x\in\emptyset}x=0$.
*The empty product is $1$: $\prod_{x\in\emptyset}x=1$.
*The empty union is the empty set: $\bigcup_{A\in\emptyset}A=\emptyset$.
Important for your question is the first one in the list: A linear combination of vectors in the empty set is the zero vector:
$$\sum_{v\in W}a_vv=\sum_{v\in\emptyset}a_v v=0.$$
And thus $W=\emptyset$ (not $\{\emptyset\}$!) spans the trivial vector space. It is also linearly independent, since there is no non-trivial linear combination which results in the zero vector.
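The empty-operation conventions in the list can even be observed in Python (my own aside):

```python
import math

assert sum([]) == 0               # empty sum = additive identity
assert math.prod([]) == 1         # empty product = multiplicative identity
assert set().union(*[]) == set()  # empty union = the empty set
```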
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Simplifying equation using trigonometric sum and difference identities Recently started learning trigonometric identities in school, having problems with this one. I tried solving it using the sum and difference identities but I keep going in circles. Thanks.
$\frac{-\cos(x)\sin(x)\pm \cos(y)\sin(y)}{\sin^2(y)-\cos^2(x)}=\tan(x\mp y)$
| We have
$$LHS = \frac{-\cos x \sin x + \cos y \sin y}{\sin^2y-\cos^2x} = \dfrac{1}{2}\dfrac{\sin(2y)-\sin(2x)}{\sin^2y-\cos^2x} = \dfrac{\sin(y-x)\cos(y+x)}{\sin^2y-\cos^2x} $$
$\sin(-z) = -\sin(z)$ So $\sin(y-x) = -\sin(x-y)$
$$\begin{align}& LHS = \dfrac{\sin(x-y)\cos(y+x)}{\color{blue}{\cos^2x-\sin^2y}} = \dfrac{\sin(x-y)\cos(x+y)}{\color{blue}{\cos(x+y)\cos(x-y)}}\\
& \ = \dfrac{\sin(x-y)}{\cos(x-y)} =\tan(x-y) =RHS\end{align}$$
Use $y = -y$ to get the other one.
For the blue highlighted part (a short proof)
$\cos(x+y)\cos(x-y) = \dfrac{1}{2}\left[\cos(2x)+\cos(2y)\right] = \dfrac{1}{2}\left[(2\cos^2x -1) + (1-2\sin^2y)\right]\\ = \cos^2x-\sin^2y$
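Both the main identity and the highlighted product identity can be spot-checked numerically (a sketch I added; the test points avoid zeros of the denominator):

```python
import math

def lhs(x, y):
    return (-math.cos(x) * math.sin(x) + math.cos(y) * math.sin(y)) / (
        math.sin(y) ** 2 - math.cos(x) ** 2
    )

for x, y in [(0.3, 1.1), (1.0, 0.2), (2.0, 0.5)]:
    assert abs(lhs(x, y) - math.tan(x - y)) < 1e-10
    prod = math.cos(x + y) * math.cos(x - y)  # the product identity for the denominator
    assert abs(prod - (math.cos(x) ** 2 - math.sin(y) ** 2)) < 1e-12
```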
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Closed form solution to a divergence condition I've been stuck on a problem and I was wondering if anyone would be able to help me. I am trying to solve the following divergence problem explicitly for $\vec{A}$
$$\nabla \cdot \vec{A} = \nabla \phi \cdot \nabla (\nabla^{2} \phi) - (\nabla^{2} \phi)^{2} \tag{1}$$
where $\nabla = (\partial_{x}, \partial_{y})$ and $\phi$ (and hence $\vec{A}$) is periodic in space. I was wondering if there was a 'nice' factorisation of the RHS such that it can be written in the form $\nabla \cdot \vec{\Phi}$? I doubt that there is, as
$$\nabla \phi \cdot \nabla (\nabla^{2} \phi) - (\nabla^{2} \phi)^{2} = -(\nabla^{2} \phi)^{2} \nabla \cdot \left( \frac{\nabla \phi}{\nabla^{2} \phi} \right) \tag{2}$$
where the quotient is to be interpreted as elementwise division and the RHS of $(2)$, as far as I can tell, has no further factorisation that would put it in the form I'm after. Of course, I could just define
$$\vec{A} = (A_{1},A_{2}) = \left(c \int \nabla \phi \cdot \nabla (\nabla^{2} \phi) - (\nabla^{2} \phi)^{2} dx, \ (1 - c) \int \nabla \phi \cdot \nabla (\nabla^{2} \phi) - (\nabla^{2} \phi)^{2} dy \right)$$
for some $c \in \mathbb{R}$, which would satisfy $(1)$. However, I was hoping to find a nice simplification without integrals. We can also note that the first term on the RHS of $(1)$ can be interpreted as $\mathcal{L}_{\nabla \phi} \nabla^{2} \phi$, the directional derivative of $\nabla^{2} \phi$ in the direction of $\nabla \phi$. I'm not sure if this is of much use though.
Also, if anyone else has seen the RHS of $(1)$ in any other contexts, please let me know.
Thanks.
| The problem as you wrote it has no solutions unless $\phi\equiv c$. Observe that we would have
$$\text{div} A=\text{div}(\Delta\phi\nabla\phi)-2(\Delta\phi)^2.$$ Since your space has no boundary, we can average and find $$\int_{\mathbb T^d}(\Delta\phi)^2=0$$ and we conclude.
If you fix this issue by, say, subtracting off the average of the right-hand side you will have an infinite-dimensional space of solutions. At this point, inverting the divergence in various spaces is a well-studied problem, see eg. here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is "$--2$" by itself a valid Math expression? I inserted the following in the calculator:
$$--2$$
and the calculator gave me a result of $2$.
Is $--2$ by itself a valid math expression, or did the calculator add $0$ or something before it (e.g. the calculator might have turned $--2$ into $0--2$)?
| $--2$ is equivalent to $-(-2)$ or $-1 \cdot (-1 \cdot 2)$.
$-1 \cdot -1$ is just $1$, so we get $1 \cdot 2$, which is equal to $2$.
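Python's parser treats the expression exactly the same way, applying unary minus twice (my own aside):

```python
assert --2 == 2          # parsed as -(-2), no implicit leading 0 needed
assert -(-2) == 2
assert -1 * (-1 * 2) == 2
```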
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Any "shortcuts" to proving that $\frac{\sin(x)}2+\sin^2(\frac x2)\tan(\frac x2)\to\tan(\frac x2)$ I was working on simplifying some trig functions, and after a while of playing with them I simplified $$\frac{\sin(x)}{2}+\sin^2\left(\frac{x}{2}\right)\tan\left(\frac{x}{2}\right) \rightarrow \tan\left(\frac{x}{2}\right)$$
The way I got that result, however, was with what I think a very "roundabout" way. I first used the half-angle formulas, then used $x=\pi/2-\beta$, and that simplified to $$\frac{\cos(\beta)}{1+\sin(\beta)}$$ where I again used the coordinate change to get $$\frac{\sin(x)}{1+\cos(x)}\rightarrow\tan\left(\frac{x}{2}\right)$$
I tried using the online trig simplifiers but none succeeded. Of course, after you know the above identity, it's easy to prove by proving that $$\frac{\sin(x)}{2}=\tan\left(\frac{x}{2}\right)-\sin^2\left(\frac{x}{2}\right)\tan\left(\frac{x}{2}\right)$$
Is there a more direct way to get the identity? I guess what I'm asking is, am I missing any "tricks" or software that I could have on my toolbelt so that next time I don't spend hours trying to simplify trig identities?
| Use $ \sin(x) = 2 \sin(x/2) \cos(x/2)$ then we have
\begin{eqnarray*}
\frac{\sin(x)}{2}+\sin^2\left(\frac{x}{2}\right)\tan\left(\frac{x}{2}\right) &=& \frac{\sin(x/2)}{\cos(x/2)} \underbrace{\left( \cos^2(x/2) + \sin^2(x/2) \right)}_{=1} \\
&=& \tan (x/2).
\end{eqnarray*}
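A quick numerical check of the identity (my own addition; points are chosen away from the poles of $\tan(x/2)$):

```python
import math

for x in [0.3, 1.0, 2.5, -1.7]:
    lhs = math.sin(x) / 2 + math.sin(x / 2) ** 2 * math.tan(x / 2)
    assert abs(lhs - math.tan(x / 2)) < 1e-12
```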
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3843961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Propagation of Sobolev Regularity to Endpoints in Local Well-Posedness Theory Let $F:H^s\rightarrow H^s$ and suppose I have proved LWP for some PDE
\begin{align*}
\partial_t u &= Fu \qquad \text{on }\, \mathbf{R}\times [0,\infty) \\
u(x,0)&=u_0(x) \qquad \text{on }\, \mathbf{R}
\end{align*}
and in particular, given $u_0\in H^s$, there is a time $T$ such that $u\in C([0,T);H^s(\mathbf{R}))$. Furthermore, I have proven that
\begin{align*}
\sup_{0\leqslant t < T}\lvert\lvert u(\cdot,t)\rvert\rvert_{H^s} < \infty
\end{align*}
Is it the case that $u(x,T)\in H^s(\mathbf{R)}$?
I have argued yes, since letting $u(x,T):=\lim_{t\rightarrow T}u(x,t)$ means that by Fatou's lemma
\begin{align*}
\lvert\lvert u(\cdot,T)\rvert\rvert_{H^s} \leqslant \liminf_{t\rightarrow T}\,
\lvert\lvert u(\cdot,t)\rvert\rvert_{H^s} \leqslant \sup_{0\leqslant t < T}\lvert\lvert u(\cdot,t)\rvert\rvert_{H^s} < \infty
\end{align*}
However, this doesn't make much sense to me, because in this case couldn't we easily get LWP for $u\in C_t([0,T];H^s(\mathbf{R}))$ and by a similar argument keep extending the regularity to the rest of the space? I always thought we lost LWP regularity since we could no longer control the $H^s$ norm at $T$ (via Grönwall or some similar argument).
If it helps I have also proved that there are solutions with $u_0\in H^s$ which lose their $H^s$ regularity in finite time.
Where have I made a mistake? And is my intuition about losing control on a Sobolev norm giving us the endpoint for LWP correct?
| Solved:
I have abused Fatou's lemma in a subtle way. Fatou's lemma requires setting $u(x,T):=\liminf_{t\rightarrow T}u(x,t)$ which does not equal the limit in general. All I have proved is that the limit inferior is finite... which is true. The pointwise convergence in this case is suspect.
It is also instructive to see why this fails to be true in general. The condition that $\sup_{t<T}\lvert\lvert u(\cdot, t)\rvert\rvert_{H^s}$ is finite says that $u(x, t)$ is a bounded sequence in the space $C_t H^s$, which of course does not imply convergence. In fact, since I know there are solutions which lose their regularity at time $T$, this set is guaranteed not to be compact; hence we cannot conclude that $u(x,T)\in H^s$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Two Uncountable Sets Making an Infinitely Countable Set I am trying to solve the following: $A$ and $B$ are uncountable sets; find an example where $A \oplus B$ is countably infinite, where $\oplus$ is the symmetric difference.
My solution:
Define $A= \mathbb R \setminus \mathbb Q$ and $B= \mathbb R$
Thus, $A \oplus B = \mathbb Q$ which is countably infinite.
As you can tell, I am having a hard time justifying my answers and grasping what the symmetric difference does in certain steps. Any help is greatly appreciated.
| Remember that $A\oplus B=(A\cup B)\setminus(A\cap B)$.
If you want $A$ and $B$ to be uncountable, but $A\oplus B$ to be countable, that means that $A\cup B$ is "almost" $A\cap B$. But if $A=B$, we get $A\oplus B=\varnothing$, which is not infinite enough.
What you can do is reverse engineer this. Start with $C=A\oplus B$, which is countably infinite, for example $\Bbb N$, and then pick your favourite uncountable set $X$, and let $A=X$ and $B=X\cup C$. Or, split $C$ into two parts, $C_0$ and $C_1$, and let $A=C_0\cup X$ and $B=C_1\cup X$.
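Both constructions can be illustrated with Python's set symmetric-difference operator `^` (finite stand-ins of my own, of course, since uncountable sets cannot be represented):

```python
X = set(range(100, 200))            # stands in for the uncountable common part
C = set(range(10))                  # stands in for the countably infinite difference
C0 = {c for c in C if c % 2 == 0}   # split C into two disjoint parts
C1 = {c for c in C if c % 2 == 1}

A, B = X, X | C
assert A ^ B == C                   # first construction: A = X, B = X ∪ C

A, B = C0 | X, C1 | X
assert A ^ B == C                   # split construction: A = C0 ∪ X, B = C1 ∪ X
```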
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Elementary Symmetric Means as Quasi-Arithmetic Means The elementary symmetric polynomial of degree $k$ in $n$ variables is
$$e_{n,k}(\mathbf{x})=\sum_{1 \leq i_1 < i_2 < \dots < i_k \leq n} x_{i_1} x_{i_2} \cdots x_{i_k} $$
and for positive data $\mathbf{x}$, the corresponding elementary symmetric mean is
$$s_{n,k}(\mathbf{x}):=\sqrt[k]{\frac{e_{n,k}(\mathbf{x})}{\binom{n}{k}}} .$$
Quasi-arithmetic means are defined in terms of an invertible function $f$, and the arithmetic mean $A$
$$M_f(\mathbf{x}) := (f^{-1} \circ A \circ f)(\mathbf{x}) = f^{-1} \left( \frac{f(x_1)+f(x_2)+\cdots+f(x_n)}{n} \right) .$$
For any $n$, the elementary symmetric mean in $n$ variables of degree $n$ is the geometric mean, which is a quasi-arithmetic mean:
$$s_{n,n}(x_1,\dots,x_n)=\sqrt[n]{x_1x_2\cdots x_n} = \exp\left(\frac{\log(x_1)+\log(x_2)+\cdots+\log(x_n)}{n}\right). $$
My question is: are the other elementary symmetric means (with $k<n$) also quasi-arithmetic? If so, can the conjugating functions $f_{n,k}$ be described explicitly?
My attempt: Obviously $s_{n,1}=A$ is quasi-arithmetic, so the first nontrivial case is $s_{3,2}$, with the functional equation
$$f_{3,2} \circ s_{3,2} = A \circ f_{3,2}, $$ or
$$f_{3,2} \left( \sqrt{ \frac{x_1 x_2+x_1 x_3+x_2 x_3}{3}} \right) = \frac{f_{3,2}(x_1)+f_{3,2}(x_2)+f_{3,2}(x_3)}{3}. $$
I tried pre-composing $f$ with a logarithm and differentiation, but it didn't seem to get me anywhere.
I would greatly appreciate any help on this. Thank you.
| If you require that $f$ is, say, $C^1$, then differentiating $M_f$ with respect to any $x_i$ gives
$$\frac{\partial}{\partial x_i} M_f = \frac{1}{f' \left( \frac{\sum f(x_i)}{n} \right)} \frac{f'(x_i)}{n}$$
which means that for any $i \neq j$ we have
$$\frac{ \frac{\partial}{\partial x_i} M_f }{ \frac{\partial}{\partial x_j} M_f } = \frac{f'(x_i)}{f'(x_j)}.$$
So we can check whether $s_{3, 2}$ has this property. Actually it will be slightly more convenient for the purposes of calculating derivatives to check this property after conjugating $s_{3, 2}$ by $f(x) = x^2$ to remove the outer square root, giving a modified mean
$$t_{3, 2}(x_1, x_2, x_3) = \frac{\sqrt{x_1 x_2} + \sqrt{x_2 x_3} + \sqrt{x_3 x_1}}{3}.$$
(Conjugating preserves the property of being quasi-arithmetic so this is fine.) We get
$$\frac{\partial}{\partial x_1} t_{3, 2} = \frac{ \sqrt{x_2} + \sqrt{x_3} }{6 \sqrt{x_1}}$$
and similarly for $x_2, x_3$, which gives
$$\frac{ \frac{\partial}{\partial x_1} t_{3, 2} }{ \frac{\partial}{\partial x_2} t_{3, 2} } = \frac{ (\sqrt{x_2} + \sqrt{x_3}) \sqrt{x_2} }{ (\sqrt{x_1} + \sqrt{x_3}) \sqrt{x_1} }.$$
In particular, this quotient depends nontrivially on $x_3$, so it's not of the form $\frac{f'(x_1)}{f'(x_2)}$ for any function $f$. A similar but more annoying calculation can be done for the other elementary symmetric means. Intuitively this is saying that the elementary symmetric means "mix" the $x_i$ too much to be quasi-arithmetic.
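The two key claims, that the analytic ratio matches the partial derivatives of $t_{3,2}$ and that it moves with $x_3$, can be verified numerically (my own sketch):

```python
import math

def t(x1, x2, x3):
    return (math.sqrt(x1 * x2) + math.sqrt(x2 * x3) + math.sqrt(x3 * x1)) / 3

def ratio(x1, x2, x3):
    # the analytic (dt/dx1)/(dt/dx2) from the answer
    return ((math.sqrt(x2) + math.sqrt(x3)) * math.sqrt(x2)) / (
        (math.sqrt(x1) + math.sqrt(x3)) * math.sqrt(x1)
    )

x1, x2, h = 1.0, 2.0, 1e-6
fd = (t(x1 + h, x2, 3.0) - t(x1 - h, x2, 3.0)) / (
    t(x1, x2 + h, 3.0) - t(x1, x2 - h, 3.0)
)
assert abs(fd - ratio(x1, x2, 3.0)) < 1e-4          # formula matches central differences
assert abs(ratio(x1, x2, 1.0) - ratio(x1, x2, 9.0)) > 0.1  # nontrivial x3 dependence
```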
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Relation of two Functions and Logic For two functions, $f \colon \mathbb{R} \to \mathbb{R}$ and $g \colon \mathbb{R} \to \mathbb{R}$, we define the set, $\Omega^g_f=\{x \in \mathbb{R}: f(x)<g(x) \}$. We say that the function $f$ loves $g$ when
$\forall x \in \Omega^g_f, \exists y \in \Omega^f_g $ such that $x<y$.
a) If $f$ loves $g$, and $g$ loves $h$, prove that $f$ loves $h$.
b) For any $t \in \mathbb{R}$, let $f_t(x)=f(x)+t$. For every function $f$ there exist a function $g$ such that, for every $t \in \mathbb{R}$, $g$ loves $f_t$.
I already think a) is not true. I found a counterexample. Take $g(x)=\sin(x)$, $h(x) = 0.5$, $f(x) = -0.5$. For b) I am not sure how to go about it since the question says 'there exists a function' so I have to consider all possible functions and only one has to work. I understand that $f_t$ is just translated up/down by $t$ units.
| Statement a) is false and your counterexample is perfect.
Statement b) is true. Let us prove it. Essentially, the proof is based on the fact that the total order on $\mathbb{R}$ has no greatest element.
Let $f \colon \mathbb{R} \to \mathbb{R}$ be a function. Consider the function $g \colon \mathbb{R} \to \mathbb{R}$ defined by $g(x) = f(x) + x$.
We have to show that, for every $t \in \mathbb{R}$, $g$ loves $f_t$, that is, for every $x \in \Omega_{g}^{f_t}$ there exists $y \in \Omega_{f_t}^{g}$ such that $x < y$.
So, let us fix $ t \in \mathbb{R}$. Let $x \in \Omega_{g}^{f_t}$, that is, $x \in \mathbb{R}$ and $f(x) + x = g(x) < f_t(x) = f(x) + t$.
Thus, $x < t$.
Take $y = t + 1$. Hence, $x < t < y$ and $f_t(y) = f(y) + t < f(y) + y = g(y)$.
Therefore, $y \in \Omega_{f_t}^{g}$. $\qquad$QED
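The two steps of the proof can be spot-checked numerically for a sample $f$ (my own sketch; $f = \sin$ and $t = 2.5$ are arbitrary choices):

```python
import math

f = math.sin                   # any sample function
g = lambda x: f(x) + x         # the g constructed in the proof
t = 2.5
f_t = lambda x: f(x) + t

for k in range(61):            # grid from t-3 to t+3
    x = t - 3 + 0.1 * k
    assert (g(x) < f_t(x)) == (x < t)  # membership in the Omega set is exactly x < t
y = t + 1
assert f_t(y) < g(y)           # so y lies in the other Omega set and exceeds every such x
```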
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
PRMO level question of functions Consider the functions $f(x)$ and $g(x)$ which are defined as $f(x)=(x+1)(x^2+1)(x^4+1)\cdots\left(x^{2^{2007}}+1\right)$ and $g(x)\left(x^{2^{2008}}-1\right)= f(x)-1$.
Find $g(2)$
This is a PRMO-level question on functions. I tried substituting values, but to no avail, and the solution to this question is not available, though the answer is given as $2$.
| Some hints:
Note that
$$f(x)(x-1) = x^{2^{2008}}-1$$
This gives $f(2)$.
Now, use this in
$$g(x)\left(x^{2^{2008}}-1\right) = g(x)f(x)(x-1)= f(x)-1$$
to evaluate $g(2)$.
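A scaled-down Python check of the telescoping hint, with $2007$ replaced by a small $m$ (my own analogue, not the contest values):

```python
from fractions import Fraction

def f(x, m):
    """f(x) = (x+1)(x^2+1)...(x^(2^m)+1), the analogue with 2007 replaced by m."""
    out = 1
    for j in range(m + 1):
        out *= x ** (2 ** j) + 1
    return out

for m in range(1, 6):
    for x in [2, 3, 5]:
        assert f(x, m) * (x - 1) == x ** (2 ** (m + 1)) - 1  # telescoping product

m = 3
g2 = Fraction(f(2, m) - 1, 2 ** (2 ** (m + 1)) - 1)  # g(2) in the small analogue
assert g2 == Fraction(f(2, m) - 1, f(2, m))          # since (2-1)*f(2) = 2^(2^(m+1)) - 1
```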
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
How to solve a system of two inequalities where one is quadratic and the other is linear? As I am self isolating as of now I am having to use a google meet for my maths lessons. As such, the quality is not the best and at some points illegible (Please see below). Actual question is below the first blurry picture!
Actual Question:
As a result, since I have no idea how to solve double inequalities as it's my first time, could someone please work through this example question below (step by step please!).
Use set notation to describe the set of values of x for which:
$$
x^2 - 7x + 10 <0 \qquad \text{and}\qquad 3x+5<17
$$
Query:
I have tried googling this type of question but I keep getting joint inequalities such as
$$
-16 \leq 3x+5 \leq 20
$$
Are they the same compared to the question above?
Thank you and any help is much appreciated.
| Solving $3x+5<17$ we have $x<4$.
Then $x^2-7x+10=(x-2)(x-5)<0$ is satisfied when $2<x<5$ (draw a sketch!).
But we know that $x<4$. Thus the required solution set is $\{x \in \mathbb{R} : 2<x<4\}$.
See examples here and here.
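The answer can be double-checked by brute force on a fine grid (a sketch I added):

```python
# Test both inequalities on a grid from -2 to 8 and compare with (2, 4).
step = 0.001
sol = [k * step for k in range(-2000, 8001)
       if (k * step) ** 2 - 7 * (k * step) + 10 < 0 and 3 * (k * step) + 5 < 17]
assert 2 < min(sol) and max(sol) < 4        # all solutions lie in (2, 4)
assert min(sol) < 2.01 and max(sol) > 3.99  # and they fill the interval out
```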
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Restriction of local diffeomorphism to a neighbourhood of a submanifold is injective Question. Let $\varphi : M \to N$ be a local diffeomorphism of smooth manifolds and $S \subset M$ a smooth submanifold such that $\varphi|_S: S \to N$ is an embedding. Is there a neighbourhood $U$ of $S$ in $M$ such that $\varphi|_U$ is injective?
My guess is that we can achieve this by taking a neighbourhood $U$ which deformation retracts onto $S$. This can be done, for example, by taking a tubular neighbourhood, i.e. a diffeomorphism from a neighbourhood of the zero section in the normal bundle of $S$ to a neighbourhood of $S$ in $M$, and contracting by scalar multiplication. Then, there is perhaps an argument using path liftings, but I'm unsure how to do this. If $\varphi(p) = \varphi(q)$, we could maybe argue that there is a path from $p$ to $q$ which maps to a contractible loop in $N$ and somehow use this to show that $p = q$? Since $\varphi$ is a priori not a covering map, I'm not sure if this is a good approach.
| As you've suggested, this can be proven using tubular neighborhoods.
We can choose a Riemannian metric $g$ on $N$ and equip $M$ with the pullback metric $\varphi^*g$. With these metrics, we can consider normal bundles as subbundles $NS\subseteq TM|_S$ and $N\varphi(S)\subseteq TN|_{\varphi(S)}$, and $d\varphi|_{NS}:NS\to N\varphi(S)$ is then an isomorphism.
By the tubular neighborhood theorem, there are open neighborhoods $U\subseteq NS$ and $V\subseteq M$ (with $U$ fiberwise star-shaped) and a diffeomorphism $\psi:U\to V$ defined by $\psi(v)=\gamma_v(1)$, where $\gamma_v$ is the geodesic with initial velocity $v\in NS$. Note that $\psi$ maps the zero section of $NS$ to $S$.
There likewise exist $U'\subseteq N\varphi(S)$, $V'\subseteq N$ and $\psi':U'\to V'$ with the same properties, and since $\varphi$ maps geodesics onto geodesics, $\psi'(d\varphi(v))=\varphi(\psi(v))$ where both sides are well defined.
Since $U'\cap d\varphi(U)$ contains the zero section of $N\varphi(S)$, the map
$$
\psi'|_{U'\cap d\varphi(U)}\circ d\varphi|_{U\cap d\varphi^{-1}(U')}\circ\psi^{-1}|_{V\cap\psi(d\varphi^{-1}(U'))}
$$
is a diffeomorphism between open sets containing $S$ and $\varphi(S)$, and is equal to $\varphi$ on its domain.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3844899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Conditions for these vectors to be linearly dependent Consider $m<n$ positive definite matrices $A_1,\dots,A_m\in\mathbb{R}^{n\times n}$ which are linearly independent. That is, there are no $c_1,\dots,c_m\in\mathbb{R}$ such that $\sum_{i=1}^m c_iA_i = 0$ unless $c_i=0, \forall i=1,\dots,m$.
How can I find vectors $x\in\mathbb{R}^n$ different from $0$ such that the set of vectors
$$
A_1x,\dots,A_mx
$$
become linearly dependent? That is, there exist $d_1,\dots,d_m\in\mathbb{R}$, not all $0$, such that $\sum_{i=1}^m d_iA_ix = 0$.
This question is motivated by an example as the following. Consider the matrices
$$
A_i = \begin{bmatrix}
M & 0_{3 \times 1}\\
0_{1\times 3} & a_i
\end{bmatrix}
$$
where $a_i\in\mathbb{R}$ and $M\in\mathbb{R}^{3\times 3}$, and $a_1,\dots,a_m$ are different from each other. Hence, if I take
$$
x = \begin{bmatrix}
x' \\
0
\end{bmatrix}
$$
with some $x'\in\mathbb{R}^3$, then $A_ix = Mx', \forall i=1,\dots, m$. Thus, in this case (which is something like a "trivial example") we have that the vectors $A_ix$ are linearly dependent.
| Not a full answer, but too long for a comment.
Essentially you are asking if you can find a nontrivial combination of the matrices that is singular. (If you can do that, then the combination $(\sum \alpha_i A_i) x=0$ will have a solution for some $x$.)
This seems to be related: Vector subspace of $M_n(\mathbb{R})$ with invertible matrices
In general, you can not do it for $m<n$.
Without the positive definiteness, a counter-example is trivial: take, in $\mathbb{R}^4$, the identity matrix and the matrix that rotates $x-y$ plane by $90$ degrees and simultaneously rotates the $z-w$ plane by $90$ degrees.
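That counter-example is easy to verify directly: writing $J$ for the double rotation, $(\alpha I+\beta J)(\alpha I-\beta J)=(\alpha^2+\beta^2)I$ since $J^2=-I$, so no nontrivial real combination of $I$ and $J$ is singular. A dependency-free sketch:

```python
import random

# J rotates the x-y plane by 90 degrees and the z-w plane by 90 degrees
J = ((0, -1, 0, 0),
     (1,  0, 0, 0),
     (0,  0, 0, -1),
     (0,  0, 1,  0))
I = tuple(tuple(int(i == j) for j in range(4)) for i in range(4))

def mat(alpha, beta):
    """The combination alpha*I + beta*J."""
    return [[alpha * I[i][j] + beta * J[i][j] for j in range(4)]
            for i in range(4)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

random.seed(0)
for _ in range(50):
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    P = matmul(mat(a, b), mat(a, -b))
    # P = (a^2 + b^2) I because J*J = -I, so alpha*I + beta*J is
    # invertible for every (alpha, beta) != (0, 0)
    for i in range(4):
        for j in range(4):
            assert abs(P[i][j] - (a * a + b * b) * (i == j)) < 1e-9
```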
With positive definite symmetric matrices, I did a few experiments in python and here is a counter-example for 2 matrices:
$$
A = \mathrm{diag}(4,3,2,1)
$$
$$
R = \begin{pmatrix}
-2.00850073 & -1.60957843 & -0.90369738 & -0.50794833\\
2.38498604 & 0.57307146 & -0.07249097 & 1.52387285\\
-0.57167226 & 0.40667149 & -1.85512324 & 0.44869258\\
-0.21978557 & 1.07929074 & -1.82720749 & -1.99403596
\end{pmatrix}
$$
a random matrix, and $B = RAR^{-1}$.
A plot of $\det(A + \lambda B)$ over $\lambda$ shows it is always positive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
If $n$ is odd, then $n/2 + 1/2$ is always even?
If $n$ is odd, prove that $n/2 + 1/2$ is even.
Context: I'm a Statistician and the term $n/2 + 1/2$ showed up in the index of a summation when deriving the pdf of some Order Statistic:
$$
\sum_{j = (n+1)/2}^{n}...
$$
I realized that $n/2 + 1/2$ is always even if $n$ is odd, but I couldn't prove the result to myself (well, I have no training in Number Theory).
What I've tried:
Suppose $n$ is odd. Then $n + 1$ is even (by the successor function?). Then $n + 1 = 2k$ for $k \in \mathbb{N} \Rightarrow (n+1)/2 = k $. But this doesn't show that $k$ is even.
| No.
Consider $n=1$.
I add this sentence because my answer is too short.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
In a deck of cards that are turned up one at a time until the first A appears. Is the next card more likely to be the A of spades or the 2 of clubs? This is Example 5j, from Sheldon Ross's First Course in Probability 8th ed, page 38. I don't understand why the following is true.
Solution. To determine the probability that the card following the first ace is the ace of spades, we need to calculate how many of the $52!$ possible orderings of the cards have the ace of spades immediately following the first ace. To begin, note that each ordering of the 52 cards can be obtained by first ordering the 51 cards different from the ace of spades and then inserting the ace of spades into that ordering.
I don't see how this (sentence in italics) can be true. For example if we have $S=\{ 1, 2, 3\} $ the number of orderings that can be obtained are $3!=6$. Following the solution's reasoning we could calculate the orderings for $S$ by ordering the cards different from $3$ and then inserting it into that ordering, that is $2!$.
What am I missing? perhaps the sentence in italics does not mean what I think it does?
Also, the solution given is a probability of $ \frac{1}{52} $ for both, I understand why but I have a different solution that also seems valid:
My solution
*
*Ordering in which the card following the first ace is the ace of spades;
We have 3 other aces so we put $A_i A_s $, with $i = c, d, h $, together as one unit and count the number of permutations $ = 51! $. As we have three of these such pairs $$ P(N_a) = \dfrac{3\cdot 51!}{52!} $$
*Ordering in which the card following the first ace is the two of clubs. By a similar argument we put $A_i 2_c$, with $i = c, d, h, s$, so
$$ P(N_c) = \dfrac{4\cdot 51!}{52!} $$
Can someone tell me what is the error in this reasoning?
| Your argument (the second one) seems to miss the FIRST ace aspect of the situation. The $3*51!$ orderings in the numerator have the spade ace immediately following another ace but some have yet a different ace occurring before both of them, so they shouldn't be counted.
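The book's answer of $\frac{1}{52}$ for both events (and the overcount in the $3\cdot 51!$ argument) can be checked by simulation; the card encoding below is my own:

```python
import random

random.seed(1)
ACES, ACE_SPADES, TWO_CLUBS = {0, 1, 2, 3}, 0, 4  # arbitrary encoding
deck = list(range(52))
trials = 100_000
after_as = after_2c = 0
for _ in range(trials):
    random.shuffle(deck)
    i = next(j for j, c in enumerate(deck) if c in ACES)  # first ace
    nxt = deck[i + 1] if i + 1 < 52 else None
    after_as += (nxt == ACE_SPADES)
    after_2c += (nxt == TWO_CLUBS)

p_as, p_2c = after_as / trials, after_2c / trials
# both relative frequencies hover around 1/52 ≈ 0.0192
```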
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
conjugacy in symmetric group I was reading Dummit and Foote and encountered the following statement: any two elements in $S_n$ are conjugate if and only if they have the same cycle types.
However, I am able to produce a counter example:
Let $(1 2 3)$ and $(4 5 6) (7 8)$ be in $S_{10}$, then $(4 5 6) (7 8) =(1 3 2) (4 5 6) (7 8) (1 2 3)$, which shows that these two are conjugate.
What am I misunderstanding here?
| Of course a permutation is conjugate to itself. It has the same cycle type as itself, as well. You could just as well have conjugated by the identity.
Conjugation takes $k$-cycles to $k$-cycles: $\pi^{-1}(a_1\dots a_k)\pi=(\pi(a_1)\dots\pi(a_k))$.
Also, conjugation is a homomorphism. So, under conjugation, a product of cycles is the product of the conjugates. To finish, use that any permutation has a representation as a product of disjoint cycles.
Thus there can be no counterexample.
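Both directions of the statement can be verified exhaustively in $S_5$ (an illustrative script; permutations are tuples in one-line notation on $\{0,\dots,4\}$):

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

n = 5
sigma = (1, 2, 0, 4, 3)  # the permutation (0 1 2)(3 4), cycle type (2, 3)
conjugates = {compose(pi, compose(sigma, inverse(pi)))
              for pi in permutations(range(n))}
same_type = {tau for tau in permutations(range(n))
             if cycle_type(tau) == cycle_type(sigma)}
# the conjugacy class of sigma = all permutations with its cycle type
assert conjugates == same_type
```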
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Recurrence relation of binomial sum. I'm trying to find a closed-form solution to the sum
$$
a(n):= \sum_{k=0}^{\lfloor n/3 \rfloor} \binom{n}{3k}.
$$
In my attempt, I found the first few values of $a(n)$ and entered them into the OEIS and got a hit for sequence A024493. In the notes there I saw that there was a recurrence relation given, namely
$$
a(n) = 3a(n-1)-3a(n-2)+2a(n-3)
$$
or perhaps more illuminatingly
$$
a(n)-3a(n-1)+3a(n-2)-a(n-3) = a(n-3)
$$
where we can see that the coefficients on the left-hand side are $(-1)^i \binom{3}{i}$ for $0\leq i \leq 3$.
I've tried proving this relation by induction, but the result seems to depend on the value of
$n\mod 3$ more than on the previous terms.
Any thoughts on how I can prove that $a(n)$ satisfies the given recursion?
| Bearing in mind that for $ k > n$ or $ k < 0$, ${ n \choose k } =0 $, we can write $a_n = \sum_{k= - \infty } ^\infty { n \choose 3k}$. This allows us to avoid the "have to consider $n \pmod{3}$ cases".
Then use the identity $ { n\choose k } = { n-1 \choose k-1 } + { n - 1 \choose k }$ (which is still true when $k > n$ or $k < 0$) to iteratively reduce $a_n - 3 a_{n-1} + 3a_{n-2} - 2 a_{n-3}$, i.e.
$= \left[ \sum_{k} { n \choose 3k} \right] - 3 a_{n-1} + 3a_{n-2} - 2 a_{n-3} $
$ = \left[ \sum_{k} { n-1 \choose 3k-1} + {n-1 \choose 3k }\right] - 3 a_{n-1} + 3a_{n-2} - 2 a_{n-3}$
$ = \left[ \sum_{k} { n-1 \choose 3k-1} - 2 {n-1 \choose 3k }\right] + 3a_{n-2} - 2 a_{n-3}$
$ = \ldots $
Can you complete this to show that it equals 0?
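Both the recurrence and the closed form obtained from the roots-of-unity filter, $a(n)=\frac{2^n+2\cos(n\pi/3)}{3}$ (an extra fact, easy to derive from $3a(n)=\sum_{j}(1+\omega^j)^n$ over the cube roots of unity), can be checked directly:

```python
from math import comb

def a(n):
    return sum(comb(n, 3 * k) for k in range(n // 3 + 1))

# the OEIS recurrence
for n in range(3, 40):
    assert a(n) == 3 * a(n - 1) - 3 * a(n - 2) + 2 * a(n - 3)

# exact closed form: a(n) = (2^n + 2*cos(n*pi/3)) / 3,
# where 2*cos(n*pi/3) cycles through 2, 1, -1, -2, -1, 1
cycle = [2, 1, -1, -2, -1, 1]
for n in range(40):
    assert a(n) == (2 ** n + cycle[n % 6]) // 3
```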
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Speed of convergence for the secant method I am studying the following sequence, which comes from the secant method applied to $f : x \mapsto x^3$ between $\left[-\frac{1}{2},1\right]$ :
$$x_{n+1}=x_n - \frac{x_{n}-x_{n-1}}{x_n^3-x_{n-1}^3} x_n^3$$
with $x_0=-\frac{1}{2}$ and $x_1=1$.
I proved that $(x_n)_n$ is increasing and is always strictly negative. I also know that $(x_n)_n$ converges toward $0$.
I want to study the speed of convergence $\underset{n \rightarrow +\infty}{\lim} \frac{|x_{n+1}|}{|x_{n}|}$.
I have the following hint: set $y_n=-\frac{1}{x_n}$ and study the sequence $(a_n)_n=(y_{n+1}^2-y_n^2)_n$. Then use Cesàro's Lemma.
However, I don't understand this hint, because if I manage to show the convergence of $(a_n)_n$, I will be able to use Cesàro's Lemma on $(a_n)_n$ and get a limit for $(y_n^2)_n$ by telescopic summation... which seems weird, because $y_n^2=\frac{1}{x_n^2}$ should explode since $x_n \underset{n \rightarrow +\infty}{\longrightarrow} 0$.
Any help or hints are welcome!
Update: I understood how I'll be able to study the speed of convergence using Cesàro's lemma. However, I still need to show that $(a_n)_n$ is convergent.
| I'm not so certain about the hints, but you can observe the ratio of consecutive terms, writing the secant method in symmetric form:
$$x_{n+1}=\frac{f(x_n)x_{n-1}-f(x_{n-1})x_n}{f(x_n)-f(x_{n-1})}=\frac{x_n^3x_{n-1}-x_{n-1}^3x_n}{x_n^3-x_{n-1}^3}$$
Let $t_n=x_{n+1}/x_n,~x_{n+1}=t_nx_n$ to get
$$t_n=\frac{t_{n-1}^2-1}{t_{n-1}^3-1}=\frac{t_{n-1}+1}{t_{n-1}^2+t_{n-1}+1}$$
which converges to $t\approx0.755$, the unique solution of
$$t^3+t^2-1=0$$
and hence
$$\lim_{n\to\infty}\frac{x_{n+1}}{x_n}=t\approx0.755$$
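Iterating the original recurrence confirms this numerically (an added sanity check):

```python
def f(x):
    return x ** 3

x_prev, x = -0.5, 1.0   # x_0, x_1
for _ in range(60):
    x_prev, x = x, x - (x - x_prev) / (f(x) - f(x_prev)) * f(x)

ratio = x / x_prev
# the ratio of consecutive terms settles at the root of t^3 + t^2 - 1
assert abs(ratio ** 3 + ratio ** 2 - 1) < 1e-9
assert abs(ratio - 0.755) < 1e-2
```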
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3845964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Methods for showing that 4 points lie on the same circle? I'm looking for commonly used methods in contest geometry to show that 4 points lie on the same circle. Are there any tricks besides using the fact that opposite angles add up to 180°?
| Let the points be considered as lying on the complex plane, say, $z_1,z_2,z_3,z_4$. Then, consider the map (also called the cross ratio of $z,z_2,z_3,z_4$) given by,
$$
M(z)=\left(\frac{z-z_3}{z-z_4}\right)/\left(\frac{z_2-z_3}{z_2-z_4}\right).
$$
This is a Mobius transformation which sends $z_2,z_3,z_4$ to $1,0,\infty$ respectively. Since Mobius transformations preserve circles (including straight lines considered as circles), it maps the circle made by $z_2,z_3,z_4$ to the real axis. Also, Mobius transformations are invertible, hence, no other point which is not on the above circle will be mapped to the real axis.
Hence, if $M(z_1)$ is real, $z_1$ lies on the circle formed by $z_2,z_3,z_4$.
In conclusion,
if
$$
\left(\frac{z_1-z_3}{z_1-z_4}\right)/\left(\frac{z_2-z_3}{z_2-z_4}\right)
$$
is real, then $z_1,z_2,z_3,z_4$ lie on a circle.
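A numerical illustration with points on the unit circle (the sample angles are arbitrary):

```python
import cmath

z2, z3, z4 = (cmath.exp(1j * t) for t in (1.1, 2.0, 4.5))  # on the unit circle

def cross_ratio(z1):
    return ((z1 - z3) / (z1 - z4)) / ((z2 - z3) / (z2 - z4))

on_circle = cross_ratio(cmath.exp(1j * 0.3))         # z1 on the same circle
off_circle = cross_ratio(1.5 * cmath.exp(1j * 0.3))  # z1 pushed off the circle

assert abs(on_circle.imag) < 1e-12     # real: the four points are concyclic
assert abs(off_circle.imag) > 1e-3     # not real: z1 is off the circle
```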
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Show that this sequence converges to $0$ The Question
For any fixed $k'\in\mathbb{N}$, for any $a\in\mathbb{R}^+$ and for any $n\in\mathbb{N}$, define the function $f:\mathbb{N}\to\mathbb{R}$ given by
\begin{equation*}
x_n=f(n)=\frac{n^{k'}}{(1+a)^n}.
\end{equation*}
I want to show that $(x_n)$ converges to $0$.
For clarification, I don't include $0$ in $\mathbb{N}$.
Attempts at Solution
For any $\varepsilon>0$, I need to find an $N\in\mathbb{N}$ such that for any integer $n \geq N$, the following holds:
\begin{equation*}
\left | \,\frac{n^{k'}}{(1+a)^n}-0 \, \right | <\varepsilon.
\end{equation*}
Since $x_m>0$ for any $m\in\mathbb{N}$, I can drop the absolute value signs and we get
\begin{equation*}
\frac{n^{k'}}{(1+a)^n}<\varepsilon.
\end{equation*}
So, I considered $x^{k'}=\varepsilon(1+a)^x$ for any $x\in\mathbb{R}^+$. That equation does not have a closed form solution I think, so I will denote the bigger root of that as $x^*$. Now $N=\lceil x^* \rceil$ should be a candidate for the convergence definition. This is where I get stuck: how do I put that $N$ back to the convergence definition and show that $N$ really is a good candidate?
I wonder if there are any elegant proofs for this; mine is too ugly.
Also, I have a simple inequality that should play a role in this but I do not see how it fits. (My "proof" did not use the inequality)
For any fixed $k'\in\mathbb{N}$ and for any $n\geq k'$, consider $(1+a)^n$.
\begin{align*}
(1+a)^n&=\sum_{k=0}^{n}\binom{n}{k}a^k1^{n-k}\\
&=\sum_{k=0}^{k'-1}\binom{n}{k}a^k+\binom{n}{k'}a^{k'}+\sum_{k=k'+1}^{n}\binom{n}{k}a^k\\
&>\sum_{k=0}^{k'-1}\binom{n}{k}0^k +\binom{n}{k'}a^{k'}+ \sum_{k=k'+1}^{n}\binom{n}{k}0^k\\
&=\binom{n}{k'}a^{k'}.
\end{align*}
Thanks in advance!!
| Let me construct an elementary proof.
Let $\sqrt[2k']{1+a}=1+b$. Then, $b>0$ since $a>0$,
and
$$
x_n=\frac{n^{k'}}{(1+a)^n}=\frac{n^{k'}}{(1+b)^{2k'n}}=\left(\frac{\sqrt{n}}{(1+b)^{n}}{}\right)^{2k'}
$$
But
$$
(1+b)^n\ge 1+bn>bn,
$$
and thus
$$
\frac{1}{(1+b)^n}<\frac{1}{bn},
$$
and finally
$$
x_n=\left(\frac{\sqrt{n}}{(1+b)^{n}}{}\right)^{2k'}<\left(\frac{\sqrt{n}}{bn}\right)^{2k'}=b^{-2k'}\cdot\frac{1}{n^{k'}}
$$
Now, it suffices to show that the right hand side tends to zero, as $n$ tends to infinity.
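The chain of inequalities can be sanity-checked numerically (the parameter values are my own choice):

```python
k, a = 3, 0.5                      # arbitrary k' and a > 0
b = (1 + a) ** (1 / (2 * k)) - 1   # so that (1 + b)^(2k) = 1 + a, b > 0

for n in (10, 100, 1000):
    x_n = n ** k / (1 + a) ** n
    bound = b ** (-2 * k) / n ** k  # the final estimate derived above
    assert 0 < x_n <= bound
```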
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
What does bijection mean with regards to Combinatorics and Sets? I think I understand the 2 conditions that are necessary for a function to be bijective, but in a book of Combinatorics I am reading it talks about Bijections with sets and with combinatorial problems, and doesn't explain what it is very well in my opinion. It simply says it's a one-to-one correspondence between different items... What does a bijection mean then with regards to Combinatorics and sets?
For example, earlier in the book it gave this problem:
Suppose that $a,~b,~c,~d$ and $e$ are positive integers. How many solutions are there to the equation $$a+b+c+d+e=11$$?
This can be found by considering $11$ items and $4$ gaps between different items out of the $10$ possible gaps, so the solution is $\binom{10}{4}$. I understand this perfectly.
Later, when talking briefly about bijections it says of the aforementioned problem that the coding (of $4$ objects that can be chosen of one and $6$ of the same type of object that can't be chosen (in this case, gaps)) is 'unique and reversible, or, in other words, that it represented a bijection.'
What does it mean by this? What bijection is it talking about? I have no idea. Also, I don't understand what it means by reversible.
Thank you for your help in this very basic question.
| Let me see if I can show you in detail what’s going on behind the scenes, so to speak, in that problem and the book’s explanation.
Let
$$S=\{\langle x_1,x_2,x_3,x_4,x_5\rangle\in\Bbb Z^+:x_1+x_2+x_3+x_4+x_5=11\}\,;$$
we want to know $|S|$. The idea behind the solution is to find a set $A$ whose cardinality is easier to determine and show that $|A|=|S|$ by showing that there is a bijection between $A$ and $S$.
In this case we imagine lining up $11$ items: $c_1,c_2,c_3,c_4,c_5,c_6,c_7,c_8,c_9,c_{10},c_{11}$. We let $G$ be the set of gaps between adjacent items; clearly $|G|=10$. Finally, we let $A=\{X\subseteq G:|X|=4\}$, the set of $4$-element subsets of $G$; we know that $|A|=\binom{10}4$. If we can find a bijection between $A$ and $S$, we’ll have shown that $|S|=\binom{10}4$.
And you already know what the bijection is: if $s=\langle x_1,x_2,x_3,x_4,x_5\rangle\in S$, we let $f(s)=\{g_1,g_2,g_3,g_4\}\in A$, where $g_1$ is the gap between $c_{x_1}$ and $c_{x_1+1}$, $g_2$ is the gap between $c_{x_1+x_2}$ and $c_{x_1+x_2+1}$, $g_3$ is the gap between $c_{x_1+x_2+x_3}$ and $c_{x_1+x_2+x_3+1}$, and $g_4$ is the gap between $c_{x_1+x_2+x_3+x_4}$ and $c_{x_1+x_2+x_3+x_4+1}$, so that there are $x_1$ items before the gap $g_1$, $x_2$ items between gaps $g_1$ and $g_2$, $x_3$ items between gaps $g_2$ and $g_3$, $x_4$ items between $g_3$ and $g_4$, and $x_5$ items after gap $g_4$. Clearly this set of gaps is completely determined by the solution $s$: given $s$, there is a unique set of $4$ gaps described in this way by $s$. This simply means that $f$ is a function from $S$ to $A$ and is what the book’s unique is getting at.
Reversible simply means that the function $f$ has an inverse, i.e., that it is a bijection: it is a surjection, because every set of $4$ gaps is $f(s)$ for some solution $s\in S$, and it is an injection, because if we are given a set of $X=\{g_1,g_2,g_3,g_4\}$ of $4$ gaps, we can determine the unique $s\in S$ such that $f(s)=X$.
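The bijection can be made completely concrete in code: a solution maps to the set of its partial sums, which is exactly a $4$-element subset of the $10$ gaps (an illustrative sketch):

```python
from itertools import accumulate, combinations
from math import comb

solutions = [(a, b, c, d, 11 - a - b - c - d)
             for a in range(1, 12) for b in range(1, 12)
             for c in range(1, 12) for d in range(1, 12)
             if 11 - a - b - c - d >= 1]

# f(s): the 4 chosen gaps are the partial sums a, a+b, a+b+c, a+b+c+d
gap_sets = {tuple(accumulate(s[:4])) for s in solutions}

# f is injective (no two solutions share a gap set) and surjective
# (every 4-subset of {1,...,10} occurs), hence a bijection
assert len(gap_sets) == len(solutions) == comb(10, 4)
assert gap_sets == set(combinations(range(1, 11), 4))
```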
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is the representation of any prime of the form $6n+1$ as $a^2+3b^2$ essentially unique? Two well-known results in number theory are:
Fermat's $4n+1$ theorem: Every prime of the form $4n+1$ can be represented as $a^2+b^2 (a,b \in\mathbb{N})$.
Euler's $6n+1$ theorem: Every prime of the form $6n+1$ can be represented as $a^2+3b^2 (a,b \in\mathbb{N})$.
Looking at the Mathworld entries on these theorems here and here, I notice that representation of primes of the form $4n+1$ is stated to be unique (up to order), but that there is no mention of uniqueness in respect of representation of primes of the form $6n+1$. Uniqueness does however seem to hold at least for small primes of this form.
Question: Is the representation of any prime of the form $6n+1$ as $a^2+3b^2$ essentially unique?
If this is the case then a reference to a proof would be appreciated.
| This follows from very old results on representations of integers by quadratic forms. In particular it is a special case of a result of Euler which shows that two essentially distinct representations of $\,m\,$ imply $\,m\,$ is composite (the proof constructs a proper factor of $\,m\,$ via a quick gcd computation).
Appended below is a classic elementary proof of Euler's result that requires no knowledge of ideal theory of quadratic number fields. It is excerpted from Wieb Bosma's thesis (1990) pp. 14-16 (which has a nice concise historical introduction to primality testing). It deserves strong emphasis that the arithmetical essence of this proof is much clearer when it is translated into the language of quadratic number fields and their ideal theory - as is often the case for such results. The use of ideals essentially simplifies such (nonlinear) quadratic (form) arithmetic by linearizing it into arithmetic of ideals (modules) - making available analogs of the powerful tools of linear algebra.
As hinted in the final few paragraphs below, this result was part of Euler's research on idoneal numbers for primality testing. For more on such see Ernst Kani's paper.
[10] Z.I. Borevich, I. R. Shafarevich, Number Theory, Orlando: Academic Press 1966.
[159] A. Weil, Number theory, an approach through history, Boston: Birkhauser 1984.
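For primes below $2000$ the existence and uniqueness are easy to confirm by brute force (an added check; trial-division primality test):

```python
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

reps_of = {}
for p in range(7, 2000, 6):          # candidates ≡ 1 (mod 6)
    if is_prime(p):
        reps_of[p] = [(a, b) for b in range(1, isqrt(p // 3) + 1)
                             for a in range(1, isqrt(p) + 1)
                             if a * a + 3 * b * b == p]

# every such prime has exactly one representation a^2 + 3b^2 with a, b >= 1
assert all(len(r) == 1 for r in reps_of.values())
```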
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
} |
Calculations with complex numbers So I encountered this one question today in my math book, and I don't know how to get the right answer even though it seems really easy, I just wanna know how to do it so i can get some sleep.
This is the question:
Given the complex number $z=r(\cos\theta + i \sin\theta)$, show that $z-\frac{1}{z}=i(2r\sin\theta)$
I haven't done much for this question other than do this using the actual complex number:
$$\frac{z^2-1}{z}$$
Can anyone show me how to do prove this question, any help is appreciated.
| The identity is not true, indeed we have that
$$z-\frac1z=z-\frac{\bar z}{|z|^2}=r(\cos\theta + i \sin\theta)-\frac1r(\cos\theta - i \sin\theta)=\left(r-\frac1r\right)\cos \theta+\left(r+\frac1r\right)i\sin \theta$$
the identity is true only for $\left(r-\frac1r\right)=0$ and $\left(r+\frac1r\right)=2r$ that is $r=1$ or $z=e^{i\theta}$.
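A quick numerical check of both formulas (the values of $r$ and $\theta$ are arbitrary):

```python
import cmath, math

theta = 0.7

# r = 1: here z - 1/z = 2 i sin(theta), so the book's identity holds
z = cmath.exp(1j * theta)
assert abs((z - 1 / z) - 2j * math.sin(theta)) < 1e-12

# r = 2: the general formula above holds, the claimed identity does not
r = 2.0
z = r * cmath.exp(1j * theta)
general = (r - 1 / r) * math.cos(theta) + (r + 1 / r) * math.sin(theta) * 1j
assert abs((z - 1 / z) - general) < 1e-12
assert abs((z - 1 / z) - 2j * r * math.sin(theta)) > 0.1
```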
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Simple Proof - ∃n∈ Z: ∀k ∈ Z: n < k How to prove these 2 (one is true, the other one is false):
*
*∃n∈ Z: ∀k ∈ Z: n < k
*∀n∈ Z: ∃k ∈ Z: k < n
where Z = {0,±1,±2,...}
EDIT1: What I have so far:
.
∀n ∈ Z: ∃k ∈ Z: k < n
Set n = x and k = x - 1.
For any x ∈ Z, there will always be an integer less than x; for example:
if n = -665, then k = -665 - 1 = -666, where k < n
Indeed, ∀n ∈ Z: ∃k ∈ Z: k < n is true.
.
∃n ∈ Z: ∀k ∈ Z: n < k
Set n = x and k = y, for any x.
There would have to exist x < y.
But Z is unbounded (it extends to ±∞),
so for any x there will always exist y = x - 1 < x.
This proposition is false.
.
I know this is not working; I'm new to this
EDIT2:
To prove the second part, I would have to prove the opposite.
The negation of ∃n ∈ Z: ∀k ∈ Z: n < k is ¬(∃n ∈ Z: ∀k ∈ Z: n < k) = ∀n ∈ Z: ∃k ∈ Z: k ≤ n
| Observe that $$t\in \Bbb Z \iff (t+1 \in \Bbb Z) \wedge (t-1 \in \Bbb Z)\tag 1$$
An implication of this is that $\Bbb Z$ has no greatest or least element.
We can see this by assuming $t$ is the greatest (*least) element of $\Bbb Z$, and noticing that $(1)\implies t+1(^*t-1)\in\Bbb Z$ and that $t+1>t \ (^*t-1<t)$ and so $t$ is not the greatest (*least) element. By contradiction, there must be no such element.
The first statement asks whether there exists an $n\in\Bbb Z$ such that every $k\in\Bbb Z$ is greater than it. Such an $n$ would be the least element of $\Bbb Z$, but as stated earlier, it doesn't have one, and this statement is false!
The second statement asks whether for every $n\in \Bbb Z$, there exists a $k\in\Bbb Z$ less than it. Indeed, since there is no least element of $\Bbb Z$, we may easily find such an element. $(1)$ identifies that picking $k=n-1$ works fine here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3846826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove or disprove the recursively defined sequence is convergent. The sequence $\{a_n\}$ is defined by $a_1=1, a_2=0$ and $a_{n+2}=a_{n+1}+\displaystyle\frac{a_n}{n^2}$ for $n\in \mathbb{N}$.
Since $\displaystyle\frac{1}{n^2}$ is summable, when $n$ is large, the sequence is something like $a_n=a_{n-1}+\displaystyle\sum_{i\leq n-2}\frac{a_i}{i^2}$, so I think the sequence should be convergent.
Then I want to use the Monotone convergent theorem, i.e. to show $\{a_n\}$ is monotonic and bounded.
For monotonic, it is easy to see that $\{a_n\}$ is increasing.
But for the upper bound, assuming $\{a_n\}$ converges and taking the limit $n\to \infty$ does not give any hints for me to find a suitable upper bound. I have also used computer programs to compute up to the 10000th term, but it seems that $\{a_n\}$ is still increasing and does not converge to a certain number.
So I wonder if it is convergent or not.
| Well this took longer than I thought. I feel like there must be an easier solution...
Claim 1: $a_n\le\sqrt n$ for all $n$. This holds for $n=1$ and $n=2$. Actually, we will want to assume $n\ge 3$ later, so we can also check $a_3=1\le\sqrt3$. Now if $a_n\le\sqrt n$ and $a_{n+1}\le\sqrt{n+1}$, then
$$a_{n+2}=a_{n+1}+\frac{a_n}{n^2}\le\sqrt{n+1}+\frac{\sqrt n}{n^2},$$
and it suffices to show $\sqrt{n+1}+\frac{\sqrt n}{n^2}\le \sqrt{n+2}$. Note
$$\sqrt{n+1}+\frac{\sqrt n}{n^2}\le\sqrt{n+1}+\frac{\sqrt{n+1}}{n^2}=\sqrt{n+1}\left(1+\frac{1}{n^2}\right)$$
and the inequality
$$\sqrt{n+1}\left(1+\frac{1}{n^2}\right)\le\sqrt{n+2}$$
is equivalent to
$$(n+1)\left(1+\frac{1}{n^2}\right)^2\le n+2.$$
With some elbow grease this is equivalent to
$$n^4\ge 2n^3+2n^2+n+1.$$
Now since $n\ge 3$,
$$n^4\ge 3n^3=2n^3+n^3\ge 2n^3+3n^2=2n^3+2n^2+n^2\ge 2n^3+2n^2+n+1.$$
This establishes Claim 1.
Claim 2: $a_n=\sum_{i=1}^{n-2}\frac{a_i}{i^2}$ for $n\ge 3$. This holds for $n=3$, and if $a_n=\sum_{i=1}^{n-2}\frac{a_i}{i^2}$, then
$$a_{n+1}=a_n+\frac{a_{n-1}}{(n-1)^2}=\frac{a_{n-1}}{(n-1)^2}+\sum_{i=1}^{n-2}\frac{a_i}{i^2}=\sum_{i=1}^{n-1}\frac{a_i}{i^2}.$$
Finishing up: we now have $a_n=\sum_{i=1}^{n-2}\frac{a_i}{i^2}\le\sum_{i=1}^{n-2}i^{-\frac32}.$ Pick your favorite way to show this is the partial sum of a convergent $p$-series, and we're done!
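Both claims (and the convergence itself) are easy to check numerically for the first couple hundred terms (an added sanity check):

```python
import math

a = [None, 1.0, 0.0]          # 1-indexed: a_1 = 1, a_2 = 0
for n in range(1, 200):
    a.append(a[n + 1] + a[n] / n ** 2)   # a_{n+2} = a_{n+1} + a_n / n^2

# Claim 1: a_n <= sqrt(n)
assert all(a[n] <= math.sqrt(n) for n in range(1, len(a)))

# Claim 2: a_n = sum_{i=1}^{n-2} a_i / i^2 for n >= 3
for n in range(3, len(a)):
    assert abs(a[n] - sum(a[i] / i ** 2 for i in range(1, n - 1))) < 1e-12
```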
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Is it true that all homotopy groups of a manifold are countable? It is well-known that the first homotopy group (fundamental group) of a manifold is countable. I would like to know this for higher homotopy groups of a manifold. i.e.
Question: Is it true that all homotopy groups of a manifold are countable?
I think one can carry out the same process as in the proof of countability of the fundamental group for higher homotopy groups, i.e. using the Lebesgue Number Lemma (see J. M. Lee's Smooth Manifolds). Am I right? If not, is there any proof or reference for the proof?
| Every second-countable ANR has the homotopy type of a countable CW complex. Thus every second-countable (i.e. metrisable) manifold, being an ANR, has the homotopy type of a countable CW complex.
Let $M$ be a second-countable manifold with a chosen basepoint. Note that $M$ is therefore separable and metrisable. Let $C_*(S^n,M)$ be the set of pointed maps $S^n\rightarrow M$. If $C_*(S^n,M)$ is given the uniform topology, then it becomes a separable metric space, and in particular is second-countable.
On the other hand, $C_*(S^n,M)$ in the compact-open topology is an ANR, and so homotopy equivalent to a CW complex. But since $S^n$ is compact, the compact-open and uniform topologies on $C_*(S^n,M)$ coincide. Therefore by the above $C_*(S^n,M)$ is homotopy equivalent to a countable CW complex. Thus it has countably many path-components, each of which is open. In particular
$$\pi_0(C_*(S^n,M))\cong \pi_n(M)$$
is countable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
If $n^k$ is odd, $n$ is odd (proof by contra-positive) I started with the claim that: If $n$ is even, $n^k$ is even.
But I have only gotten this far:
$$n = 2a,$$
$$n^k = (2a)^k,$$
$$n^k = 2^k a^k.$$
I am trying to reach the format where I can show that
$$n^k = 2a,$$
so that $n^k$ is even.
| I will use your solution. Let's assume that $n$ is even, then we can write
$$n=2a.$$
Powering the both sides we obtain
$$n^k=(2a)^k=2^ka^k.$$
Taking $a'=2^{k-1}a^k$ we have
$$n^k=(2a)^k=2^ka^k=2\cdot 2^{k-1}a^k=2a',$$
so $n^k$ is even.
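The claim (and its contrapositive) is easy to spot-check numerically:

```python
# parity of n^k matches the parity of n (for k >= 1):
# n even => n^k even, equivalently n^k odd => n odd
for k in range(1, 8):
    for n in range(1, 21):
        assert (n ** k) % 2 == n % 2
```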
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Given a fraction with $ x $ is a real number, judge the scope of $ x $. From an ACT Math test:
Suppose that $ x $ is a real number and $ \frac { 4 x } { 6 x ^ 2 } $ is a rational number. Which of the following statements about $ x $ must be true?
*
*$ x $ is rational
*$ x $ is irrational
*$ x = 1 $
*$ x = \frac 2 3 $
*$ x = \frac 3 2 $
The answer says it must be a rational number, but how about an irrational number, say, $ \frac 4 3 $, which can also satisfy $ \frac { 4 x } { 6 x ^ 2 } $ a rational number?
| A rational number is any number that can be expressed as a ratio of two integers $\frac{p}{q}$ with $q\neq 0$. That is $\mathbb Q:= \{\frac{p}{q}|p\in\mathbb Z,q\in\mathbb Z, q\neq0\}.$ Moreover $\frac{4}{3}$ is a rational number.
Since $x$ is a nonzero real number we have $$\frac{4x}{6x^2}=\frac{4}{6x}=\frac{2}{3x}$$
which we are told is a rational number, say $\frac{2}{3x}=\frac{p}{q}$ with $p,q$ integers, $q\neq 0$ (and $p\neq 0$, since $\frac{2}{3x}\neq 0$). Solving gives $x=\frac{2q}{3p}$, a ratio of integers, so $x$ is rational.
So statement $A$ must be true.
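The two directions can be illustrated with exact rational arithmetic (a sketch using Python's `fractions`; the sample values are arbitrary):

```python
from fractions import Fraction

for p in range(-5, 6):
    for q in range(1, 6):
        if p == 0:
            continue
        x = Fraction(p, q)
        value = 4 * x / (6 * x ** 2)
        assert value == Fraction(2, 3) / x      # the simplification above
        assert Fraction(2, 3) / value == x      # x is determined rationally
```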
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
The partial derivative of the gradient function I feel ashamed asking this basic question but am still confused.
Given function $f \in C^2$, show that $g:=\text{grad} \, f$ is a $C^1$ function.
I understand we have to show that the 1st order partial derivatives of $g$ exist and are continuous, but since $g : U \to \mathbb{R}^n$ is a vector valued function.. How do I figure out the first order partial derivatives?
Edit: I tried to apply the chain rule and deduced that $$D_1(g\circ f)(x)=D_1 D_{1}^2f + \dots + D_1 D_{n}^2f$$ and since $f$ is a $C^2$ function, the second order derivatives of $f$ are continuous and thus the first order derivative of $g$ is also continuous. Am I on the right path?
| Firstly, $g$ is differentiable precisely when its components are, so we can consider each component of $g$ individually. Let's consider the $i$th component
$$
g_i = \frac{\partial f}{\partial x_i}.
$$
Now $f$ is twice differentiable, thus we can differentiate the right hand side, and by extension the left hand side. The expression for the derivative of $g_i$ is
$$
\frac{\partial g_i}{\partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}
$$
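Concretely, one can approximate $\partial g_1/\partial y$ by nested finite differences and compare it with the hand-computed mixed partial (a numerical sketch with a sample $f$ of my choosing):

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)  # a sample C^2 function

h = 1e-4
def partial_diff(fun, i, pt):
    """Central finite difference in coordinate i."""
    up = list(pt); up[i] += h
    dn = list(pt); dn[i] -= h
    return (fun(*up) - fun(*dn)) / (2 * h)

g1 = lambda x, y: partial_diff(f, 0, (x, y))   # first component of grad f
pt = (0.4, -0.3)
mixed = partial_diff(g1, 1, pt)                # ∂g1/∂y, i.e. ∂²f/∂y∂x
exact = math.cos(0.4) * math.exp(-0.3)         # computed by hand
assert abs(mixed - exact) < 1e-6
```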
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of separable stochastic process. Which is the "intuition" behind such a definition? Herebelow, I quote Kuo (2006)
Definition: A stochastic process $X(t,\omega)$, $0\leq t\leq 1$, $\omega\in\Omega$ is called separable if there exist $\Omega_0$ with $P(\Omega_0)=1$ and a countable dense subset $S$ of $[0,1]$ such that for any closed set $F\subset\mathbb{R}$ and any open interval $I\subset [0,1]$, the set difference
$$\bigg\{\omega\in\Omega;\ X(t,\omega)\in F,\ \forall\, t\in I\cap S\bigg\}\setminus\bigg\{\omega\in\Omega;\ X(t,\omega)\in F,\ \forall\, t\in I\bigg\}$$
is a subset of the complement $\Omega_0^c$ of $\Omega_0$. The set $S$ is called a separating set.
First, isn't $\Omega_0^c=\emptyset$?
Secondly, what does the fact that this set difference is a subset of the complement $\Omega_0^c$ of $\Omega_0$ mean? What does $S$ "separate"? From what?
More generally, could you please explain the intuition behind such a definition, even in very rough terms? What is it useful for?
| Separability means that the behavior of the process is essentially determined by its values on a countable ("separating") set. This is actually not a stochastic but rather an analytic notion. One may similarly define separability of a non-random function:
$f\colon A\to B$ is separable, if there exists a countable dense subset $S\subset A$ with the property: for any closed $F\subset B$ and any open $I\subset A$, if $f(t)\in F$ for all $t\in I\cap S$ , then $f(t)\in F$ for all $t\in I$.
And then one may call a process separable if $X(\cdot,\omega)$ is separable for almost all $\omega\in \Omega$ (which is precisely the usual definition).
Concerning the term itself, it originates from "separable set/separable space", where it is also not very suitable: there is nothing being "separated" (see also a relevant discussion here).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3847845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $\lim_{x\to n}f(x) = 0$ and $f(n)=0$, does $\lim_{x\to n}\frac{\sin(f(x))}{f(x)} = 1$ always? If $\lim_{x\to n}f(x) = 0$ and $f(n)=0$, does $\lim_{x\to n}\frac{\sin(f(x))}{f(x)} = 1$ always?
I have been playing around with some graphs on desmos and there's always the indication that the limit equals $1$. I know that $\sin x$ becomes very linear, is there any function with the potential to "un-linearize" it?
| hint
Let $ \epsilon>0$.
we know that
$$\lim_{X\to 0,\ne}\frac{\sin(X)}{X}=1$$
Thus
$\exists \eta>0$ such that
$$\color{red}{0<}|X|<\eta \implies |\frac{\sin(X)}{X}-1|<\epsilon$$
But
$$\lim_{x\to 0}f(x)=0$$
then
$\exists \alpha>0 $ such that
$$|x|<\alpha \implies |f(x)|<\eta$$
The red condition $ \color{red}{0<|f(x)|}$ is not always satisfied to insure the existence of $\frac{\sin(f(x))}{f(x)}$.
Your conclusion will be correct if you put$$\frac{\sin(0)}{0}=1$$
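As a numerical sanity check of the hint (a sketch of mine, using the convention $\frac{\sin 0}{0}=1$ and the sample choice $f(x)=x^2$, which vanishes at $0$):

```python
import math

def sinc_ext(u):
    """sin(u)/u, extended by continuity with the value 1 at u = 0."""
    return 1.0 if u == 0 else math.sin(u) / u

# With f(x) = x**2 (so f(0) = 0 and f -> 0 as x -> 0),
# sin(f(x))/f(x) approaches 1 along any sequence x -> 0.
samples = [sinc_ext(x * x) for x in (0.5, 0.1, 0.01, 0.0)]
```

The values approach $1$, and the extended quotient is exactly $1$ at the point itself.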
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Obtaining linear functionals on $B(H)$ using ultrafilters. Suppose $H$ is a Hilbert space with orthonormal basis $\{e_i\}_{i\in \mathbb N}$ and $X=H^*\otimes^\pi H$ is the projective tensor product. We have a natural isometry $$J:X\to X^{**}=B(H)^*$$ given by $J_{\sum k_ix_i\otimes y_i}(T)=\sum k_i\left<T(y_i),x_i\right>=\sum k_iT(x_i\otimes y_i)$ for all $T\in (H^*\otimes^\pi H)^*=B(H)$
I vaguely remember reading somewhere that $\psi:B(H)\to \mathbb C$ defined as $$\psi(T)=\lim_{n, U}\left<Te_n,e_n\right>$$ where $U$ is any non-principal ultrafilter on $\mathbb N$, is a well-defined bounded linear functional which does not lie in the image of the map $J$. How do we prove that $\psi$ is not in $J(X)$?
Further, is it correct to write $$\lim_{n, U}T(e_n\otimes e_n)=T(\lim_{n, U}e_n\otimes e_n)$$
| $\psi$ vanishes on compact operators, while this is not the case for the members of $J(X)$.
Regarding the last question, I see no meaning in the RHS.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose v,u,w are distinct points on one line in $R^3$. The line need not pass through the origin. Prove that {v, u, w} is linearly dependent. There is a solution to an exercise from a linear algebra textbook written below. I am new to writing proof and I would like to see if my proof is correct. Thank you.
Suppose v,u,w are distinct points on one line in $ℝ^3$. The line need not pass through the origin. Prove that {v, u, w} is linearly dependent.
PROOF:
We can write v,u,w as a $3 \times 3$ matrix $A=[v\ u\ w]$
v,u,w are distinct points on one line so the columns of $A$ do not span $ℝ^3$. Therefore, the corresponding linear transformation $T(x)=Ax$ cannot be onto.
But, there is a pivot position in every row of $A$ IFF $T(x)$ is onto. Therefore we have less than 3 pivot positions and so the RREF of $A$ must have free variables.
So, the homogeneous system of equations $Ax=0$ must have non-trivial solutions. But the columns of $A$ are linearly independent IFF the corresponding homogeneous system has only the trivial solution.
Therefore, v,u,w are linearly dependent.
| A straight line is given by the formula $\mathbf{r}(t)=\mathbf{a}+\mathbf{b}t$. So if $\mathbf{u},\mathbf{v},\mathbf{w}$ are on the line, \begin{align}
\mathbf{u}&=\mathbf{a}+\mathbf{b}t_1\\
\mathbf{v}&=\mathbf{a}+\mathbf{b}t_2\\
\mathbf{w}&=\mathbf{a}+\mathbf{b}t_3\end{align}
Clearly, $\mathbf{u},\mathbf{v},\mathbf{w}\in Span(\mathbf{a},\mathbf{b})$ which is a 2-dimensional space, hence the three vectors must be linearly dependent. (Theorem: Every spanning set has more/equal elements than every linearly independent set of vectors.) This can also be seen directly by eliminating $\mathbf{a}$ and $t$, \begin{align}\mathbf{u}-\mathbf{v}&=\mathbf{b}(t_1-t_2)\\
\mathbf{v}-\mathbf{w}&=\mathbf{b}(t_2-t_3)\end{align}
$$\therefore \frac{\mathbf{u}-\mathbf{v}}{t_1-t_2}-\frac{\mathbf{v}-\mathbf{w}}{t_2-t_3}=\mathbf{0}$$ ($t_1\ne t_2\ne t_3$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The Sorgenfrey plane and the Niemytzki plane are Baire spaces A space $X$ is called a Baire space if every countable intersection of open dense sets is dense. By the Baire category theorem, every complete metric space is Baire and every locally compact Hausdorff space is Baire.
The Sorgenfrey line is an example of a Baire space (shown here) that is not metrizable and not locally compact.
The Sorgenfrey plane and the Niemytzki/Moore plane are also not metrizable and not locally compact, and are not even normal.
For reference, I'd like a proof that the Sorgenfrey plane and the Niemytzki plane are Baire spaces. Sketch of proof is fine.
| Here is a direct proof for the Sorgenfrey plane $X$. The topology for $X$ admits as a basis the collection of all half-open rectangles $[a,b)\times[c,d)$ with $a<b$ and $c<d$. A simple but useful fact is that any such half-open rectangle contains a closed rectangle $[a',b']\times[c',d']$ with nonempty interior in $X$.
To show that $X$ is Baire, assume we have a sequence of dense open sets $(U_n)_n$. Given an arbitrary nonempty set $O\subseteq X$, we have to show that $O\cap\bigcap_n U_n\ne\varnothing$. The intersection $O\cap U_1$ is open and nonempty by density of $U_1$. So it contains a half-open rectangle, which itself contains a closed rectangle $R_1=[a_1,b_1]\times[c_1,d_1]$ with nonempty interior as explained above. The interior of $R_1$ meets $U_2$ by density of $U_2$, so their intersection contains a half-open rectangle, which itself contains a closed rectangle $R_2$ with nonempty interior, etc. Continuing in this way, we get a nested sequence of closed rectangles $R_n=[a_n,b_n]\times[c_n,d_n]\subseteq U_n$. By compactness with respect to the usual topology of the plane the intersection $\bigcap_n R_n$ is nonempty, and is contained in $O\cap\bigcap_n U_n$.
The proof above is entirely self-contained, not requiring the knowledge that the plane with the Euclidean topology is Baire.
If instead one wants to assume the result that the Euclidean plane is Baire, one can use the following (repeating the comments of David Hartley and the answer by Daniel Wainfleet).
Suppose we have a set $X$ and two topologies $\sigma$ and $\tau$ on it, not necessarily comparable, but satisfying the condition:
(*) Every nonempty member of $\sigma$ contains a nonempty member of $\tau$ and vice versa.
Lemma 1: The dense sets in $X$ are the same for the two topologies. In other words, given $A\subseteq X$, $\operatorname{cl}_\sigma(A)=X$ if and only if $\operatorname{cl}_\tau(A)=X$.
Lemma 2: If $U\subseteq X$ is open and dense in $(X,\sigma)$, then $O=\operatorname{int}_\tau(U)$ is open and dense in $(X,\tau)$.
Proposition: Under assumption (*) above, if $(X,\tau)$ is a Baire space, so is $(X,\sigma)$.
(The proofs are not difficult.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Solving a system of equations with constraints on the values we want to find In one paper I find these set of equations:
$$ u_1 = b( \omega_1^2 + \omega_2^2 + \omega_3^2 + \omega_4^2)$$
$$ u_2 = b(\omega_1^2 + \omega_2^2 - \omega_3^2 - \omega_4^2)$$
$$ u_3 = b(\omega_1^2 - \omega_2^2 + \omega_3^2 - \omega_4^2)$$
$$ u_4 = b(\omega_1^2 - \omega_2^2 - \omega_3^2 + \omega_4^2) $$
Where we put numerical values in $\omega_i$ and $\omega_i>0$ (the constraint is based on physical meaning) and you can obtain $u_i$ solving these equations.
But my objective is the inverse, I would like to give values to $u_i$ and obtain $\omega_i$. The problem is that I don't know the range or the specific values I need in $u_i$ to obtain meaningful (physically plausible) values of $\omega_i$ ($\omega_i>0$).
How could I calculate these values using Mathematica? Is there a Python library that I can use? Up until now I have only seen examples of $Y=AX$ where one obtains $Y$ with constraints on $X$, not with constraints on $Y$.
---edit: possible solution---
I don't know why I thought it was a harder problem. At least in Mathematica it is straightforward. The steps I followed were: convert to matrix form ($U=AW$), invert the matrix to get $W=A^{-1}U$, and also write the $w_i>0$ conditions into the equations (wolfram alpha solutions) (in the link I used $x,y,z,t$ instead of $\omega_i$).
The only problem now is to know, numerically, for a range of $\omega_i$ what range I get in $u_i$. But as seen in the solution, it is not that easy: it depends on the relations between the different $u_i$.
| Define
$$v_i=\frac {u_i}b \qquad \text{and}\qquad x_i=\omega_i^2$$ and you face four linear equations for four unknowns.
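Following this substitution, the four equations become a linear system for $x_i=\omega_i^2$. A minimal sketch (the sign matrix is read off the posted equations; the value of $b$ and the sample data are illustrative, not from the paper):

```python
# Sign matrix taken from the four equations u_i = b * (±w1² ± w2² ± w3² ± w4²).
A = [[1, 1, 1, 1],
     [1, 1, -1, -1],
     [1, -1, 1, -1],
     [1, -1, -1, 1]]
# A is symmetric and satisfies A @ A = 4*I, so its inverse is A/4.

def omegas_from_u(u, b):
    """Invert u = b * A @ (w_i^2); return the w_i if all squares are positive."""
    x = [sum(A[i][j] * u[j] for j in range(4)) / (4 * b) for i in range(4)]
    if all(xi > 0 for xi in x):
        return [xi ** 0.5 for xi in x]
    return None  # these u are not realizable with all w_i > 0

# Round trip: pick positive w, build u, recover w.
b = 2.0
w = [1.0, 2.0, 3.0, 4.0]
x = [wi ** 2 for wi in w]
u = [b * sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]
w_back = omegas_from_u(u, b)
```

When the recovered squares are not all positive, the chosen $u$ values are rejected as not physically plausible.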
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
2 urns, blue and orange balls Urn 1 has 1 orange ball and 6 blue balls. Urn 2 has 2 orange balls and 5 blue balls.
Suppose you draw 3 balls from one urn. To decide which urn to use you roll a fair 6-sided die. Draw from urn 1 if you roll an even number, urn 2 if you roll an odd number. What's the probability of drawing exactly one orange ball?
I understand that you have a $0.5$ chance of drawing from urn 1 and $0.5$ chance of drawing from urn 2. I initially drew out a tree diagram for this question which led me to the answer of $P(exactly\ 1\ orange) = 0.5(3/7 + 4/7)$. My issue is the other solution which involves combinations.
$$P(1\ orange | Urn_1) = \frac{6 \choose 2}{7 \choose 3} = 15 / 35 = \frac{3}{7}$$
and
$$P(1\ orange | Urn_2) = 2 \left(\frac{5 \choose 2}{7 \choose 3} \right) = 20/35 = 4/7$$
My mind simply can't understand why the above works. I have a tree diagram in front of me where I manually calculate each of the options but I can't relate the two together.
I do know in the end you would just do
$$P(1\ orange) = 0.5 \left(P(1\ orange | Urn_1) + P(1\ orange | Urn_2) \right) = 0.5$$
| I think the way you wrote the two cases makes it harder to see the rule. If you are drawing three balls, and want exactly $1$ of them to be orange, then the other two must be blue. When drawing from an urn with $N_o$ orange balls and $N_b$ blue balls, then, you can choose the single orange ball in $N_o$ ways and the blue balls in ${N_b}\choose{2}$ ways; to obtain the probability, the product of these should be divided by the total number of ways to choose $3$ balls, which is ${N_o+N_b}\choose{3}$:
$$
P=\frac{N_o\cdot{{N_b}\choose{2}}}{{N_o+N_b}\choose{3}}.
$$
More generally, if you wanted to find the probability of drawing exactly $n_o$ orange balls and $n_b$ blue balls when drawing $n_o+n_b$ without replacement, it would be
$$
P=\frac{{{N_o}\choose{n_o}}{{N_b}\choose{n_b}}}{{N_o+N_b}\choose{n_o+n_b}}.
$$
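The general rule at the end can be checked directly in code for this problem (a small sketch; the function name is mine):

```python
from math import comb

def p_exact(N_o, N_b, n_o, n_b):
    """P(exactly n_o orange and n_b blue) when drawing without replacement."""
    return comb(N_o, n_o) * comb(N_b, n_b) / comb(N_o + N_b, n_o + n_b)

p_urn1 = p_exact(1, 6, 1, 2)        # urn 1: 1 orange, 6 blue; want 1 orange of 3
p_urn2 = p_exact(2, 5, 1, 2)        # urn 2: 2 orange, 5 blue; want 1 orange of 3
p_total = 0.5 * (p_urn1 + p_urn2)   # fair-die choice of urn
```

This reproduces the $3/7$, $4/7$, and $0.5$ figures from the question.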
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Connection Circles with Piegonhole Principle I'm working on a Pigeon Hole problem as part of my homework and I'm having difficulty coming up with the actual mathematical rationale behind my explanation. The problem is as follows:
There are 6 circles and each circle is directly connected to zero or more other circles. Show that there are at least two circles that are connected to the same amount of circles, i.e. have the same amount of connections.
Visually, I can see how this works, but I'm having trouble verbalizing. Would the pigeonholes here be the amount of connections (0 connection, 1 connection, 2 connection ... up to 5) and the pigeons the actual connections themselves after attempting to connect the circle accordingly? Any clarification would be great and I've provided an example below:
Two examples, where the numbers inside represent the amount of connections
| The six pigeons (circles) could match the six holes (counts from $0$ to $5$), but then every hole must be occupied. In particular one circle is connected to $0$ and another to $5$ (i.e., all) circles - contradiction.
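The underlying statement, that in any simple graph on $6$ vertices two vertices share a degree, can be brute-force checked on random graphs (a sketch of mine, a sanity check rather than a proof):

```python
import itertools
import random

def degrees(n, edges):
    """Degree of each of the n vertices given an edge list."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

random.seed(0)
pairs = list(itertools.combinations(range(6), 2))
# Sample many random simple graphs on 6 circles; a degree must always repeat.
all_have_repeat = all(
    len(set(degrees(6, [e for e in pairs if random.random() < 0.5]))) < 6
    for _ in range(2000)
)
```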
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3848997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinations: Why is this wrong? I want to find the number of different 5 digit numbers where 3 of the digits are different. Each digit can be from the set (1,2,3,4,5). Suppose I have the three digits $a,b,c$. We have to cases
$$a,b,c,a,a \text{ or } a,b,c,a,b$$
In the first case we have $\frac{5!}{3!}$ different combinations and in the second we have $\frac{5!}{2!2!}$. Since $a,b,c$ are different integers from the set $(1,2,3,4,5)$, one will be able to take $5$ values, one will be able to take $4$ values, and one will be able to take $3$ values. Hence, the total number of combinations is $(\frac{5!}{3!}+ \frac{5!}{2!2!})\times5\times4\times3=3000$.
However, the answer to the question is given to be $1500$. Does anybody know where I have gone wrong?
It is worth noting that I have asked something very similar here, however, the comment section has gotten too large and also the answers seem to be suggesting I use a different method, but I want to know what is wrong with this.
| You have two cases: 3-1-1 or 2-2-1.
Case 3-1-1:
*
*pick the three slots in which to place the repeated digit: $5\choose 3$.
*choose the duplicated digits: 10
*choose the two others and assign them to the two remaining (ordered) slots: $9 \times 8 = 72$
This gives $10 \cdot 10 \cdot 72 = 7200$ possibilities.
Deal similarly with case 2-2-1.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3849156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Quaternion algebras over a non-Archimedean local field $K$, up to isomorphism I want to know the number of non-isomorphic quaternion algebras over a non-Archimedean local field $K$. What is the number of non-isomorphic central simple algebras of dimension $n^2$ over a non-Archimedean local field $K$?
I know the Brauer group of $K$ is isomorphic to $\dfrac{\mathbb{Q}}{\mathbb{Z}}$. I know the structure of the group $\dfrac{\mathbb{Q}}{\mathbb{Z}}$ very well, and it has only one element of order $2$.
Let $n \in \mathbb{N}$ be arbitrary. Is there any relation between the elements of order $n$ (or elements of order dividing $n$) in the group $\dfrac{\mathbb{Q}}{\mathbb{Z}}$, and the central simple algebras of dimension $n^2$?
| The elements of order $n$ in $\frac{\mathbb{Q}}{\mathbb{Z}}$ correspond bijectively to the isomorphism classes of central simple algebras over $K$ of dimension $n^2$. In particular there is a unique isomorphism class of (non-split) quaternion algebra. See Remark 4.4 on p. 110 here for an explicit construction.
https://www.jmilne.org/math/CourseNotes/CFT310.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3849290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Using Squeeze Theorem to compute $\lim_{(x,y)\to(0,0)} \frac{x^2y}{x^2+xy+y^2} = 0$ Can you help me to show that $\lim_{(x,y)\to(0,0)} \frac{x^2y}{x^2+xy+y^2} = 0$ with the squeeze theorem?
I'm running out of ideas to bound $\left|\frac{x^2y}{x^2+xy+y^2}\right|$. I was thinking of using $|xy|\leq |x^2+xy+y^2|$, which seems to be a correct inequality, but I don't know how to show it.
| $2xy=(x+y)^2-x^2-y^2,$ so $$|f(x,y)|=\frac{x^2|y|}{|x^2+xy+y^2|}=\frac{2x^2|y|}{(x+y)^2 + x^2 + y^2} \le \frac{2x^2|y|}{x^2+y^2}= |x|\frac{2|xy|}{x^2+y^2} \le |x| \frac{x^2+y^2}{x^2+y^2}=|x|$$
where we also used $2|xy| \le x^2+y^2$ which follows from $(|x|-|y|)^2\ge 0$. Sending $(x,y)\to 0$ gives the result.
A now deleted comment informed me that there is a shorter proof:
$$ |f(x,y)|=\frac{x^2|y|}{|x^2+xy+y^2|}=\frac{2x^2|y|}{(x+y)^2 + x^2 + y^2} \le 2|y|.$$
This follows because $x^2 \le (x+y)^2 + x^2 + y^2$. Together with the above, we have the improved bound
$$ |f(x,y)| \le \min(|x|,2|y|).$$
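The combined bound can be stress-tested numerically (a random-sampling sketch of mine, not a proof):

```python
import random

def f(x, y):
    return x**2 * y / (x**2 + x*y + y**2)

random.seed(1)
violations = 0
for _ in range(10_000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if x**2 + x*y + y**2 == 0:   # only at the origin; skip if sampled
        continue
    # Check |f(x, y)| <= min(|x|, 2|y|), with a tiny float tolerance.
    if abs(f(x, y)) > min(abs(x), 2 * abs(y)) + 1e-12:
        violations += 1
```

No sampled point violates the bound, matching the derivation above.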
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3849458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of a 'simple' rules in a combinatorial game I have looked on the internet for this for some time. Unfortunately, when I make a search that includes the words "definition" and "simple", the search engine is certain I am looking for a simple definition rather than a definition of simple.
Apparently Elwyn Berlekamp asked if:
there is a game that has simple, playable rules, an intricate explicit
solution, and is provably NP or harder.
Note I have left words in bold that were bold in the source of the quote (a list of open problems in combinatorial game theory).
So what does simple mean in this context?
| I agree with the comments that "simple" probably doesn't refer to a precisely defined technical notion - certainly not a standard one.
I think "simple" is intended to mean something like "can be described without too much text, implemented by a human easily, and, crucially, doesn't explicitly encode any computationally hard problems". If you make the determination of whether a move is legal require solving an $\mathsf{NP}$-hard problem, that defeats the point.
As an aside, I'm curious/concerned about the "explicit solution" part. Presumably "write out the game tree (easy to do since the rules are simple) and mark every position as a win for the appropriate player" doesn't count as an explicit solution. But then I'd want to define "explicit solution" as something that doesn't take computational power to work through, which seems at odds with the solution being $\mathsf{NP}$-hard.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3849623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integrate $2x(2x-3)^\frac{1}{2} dx$. I am fairly new to integration.
I need to find the integral of
$$2x(2x-3)^\frac{1}{2} dx.$$
I will be using substitution, right? I tried using $u = 2x - 3$, but I'm not sure what to do with the $2x$. If I find $\frac{du}{dx}$, it turns out to be $2$ (so $du = 2dx$). There's that $x$ left over.
| There is no need to do a substitution here. Do it by parts:\begin{align}\int2x\sqrt{2x-3}\,\mathrm dx&=\frac23x(2x-3)^{3/2}-\int\frac23(2x-3)^{3/2}\,\mathrm dx\\&=\frac23x(2x-3)^{3/2}-\frac2{15}(2x-3)^{5/2}\\&=\frac23\left(x-\frac15(2x-3)\right)(2x-3)^{3/2}\\&=\frac25(x+1)(2x-3)^{3/2}.\end{align}
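The antiderivative can be sanity-checked without a CAS by comparing a central-difference derivative of the result against the integrand (a sketch; the sample points are arbitrary, chosen so that $2x-3>0$):

```python
import math

def F(x):
    """Candidate antiderivative (2/5)(x+1)(2x-3)^{3/2}."""
    return 0.4 * (x + 1) * (2 * x - 3) ** 1.5

def integrand(x):
    return 2 * x * math.sqrt(2 * x - 3)

# F'(x) approximated by a central difference should match the integrand.
h = 1e-6
max_err = max(
    abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x))
    for x in (2.0, 3.5, 10.0)
)
```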
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3849761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
property $P$ type $a$ and irreducibility with respect to $P$
Let $(Y, \tau)$ be a second-countable topological space. We say a property $P$ is of type $a$ if for each decreasing sequence of closed sets $\{A_n : n \in \mathbb{N}\}$ such that $A_n$ has property $P$ for each $n \in \mathbb{N}$, the intersection $\cap_{n \in \mathbb{N}} A_{n}$ also has property $P$. We say that $A \subseteq Y$ is irreducible with respect to $P$ if no proper closed subset of $A$ has property $P$. Show that if $P$ is of type $a$ and some closed subset of $Y$ has property $P$, then there exists a closed subset of $Y$ with property $P$ that is irreducible with respect to $P$.
Suppose, for contradiction, that no such set exists. Then for every closed $A \subseteq Y$ with property $P$ there is a proper closed subset $B_1 \subset A$ that has property $P$. Furthermore, since $B_1$ is also closed in $Y$ and has property $P$, there is a proper closed subset $B_2 \subset B_1$ with property $P$. Inductively we can create a decreasing sequence of closed sets. Is it possible to construct such a sequence so that $\cap_{n \in \mathbb{N}} B_n$ does not have property $P$? Is there another way to prove it?
| Suppose that there is no such irreducible set.
Let $A_0\subseteq Y$ be a closed set with property $P$. Suppose that $\alpha<\omega_1$, and for $\xi<\alpha$ we have constructed closed sets $A_\xi$ with property $P$ such that $A_\xi\subsetneqq A_\eta$ whenever $\eta<\xi<\alpha$. If $\alpha=\beta+1$, by hypothesis there is a closed $A_\alpha\subsetneqq A_\beta$ with property $P$. If $\alpha$ is a limit ordinal, there is a strictly increasing sequence $\langle\xi_n:n\in\omega\rangle$ of ordinals less than $\alpha$ such that $\alpha=\sup_{n\in\omega}\xi_n$, and we set
$$A_\alpha=\bigcap_{n\in\omega}A_{\xi_n}=\bigcap_{\xi<\alpha}A_\xi\,;$$
clearly $A_\alpha$ is closed, and it has property $P$ because $P$ is type $a$. Thus, the recursion goes through to $\omega_1$ to give us sets $A_\xi$ for $\xi<\omega_1$ such that each is closed and has property $P$, and $A_\xi\subsetneqq A_\eta$ whenever $\eta<\xi<\omega_1$.
Let $\mathscr{B}$ be a base for $Y$. For each $\xi<\omega_1$ there is a point $y_\xi\in A_\xi\setminus A_{\xi+1}$, and there is a $B_\xi\in\mathscr{B}$ such that $y_\xi\in B_\xi\subseteq Y\setminus A_{\xi+1}$. It follows that if $\eta<\xi<\omega_1$, then $y_\xi\in B_\xi\setminus B_\eta$, so $B_\xi\ne B_\eta$ and hence that $\mathscr{B}$ is uncountable, contradicting the second countability of $Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3850067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Optimized Chernoff Bound I'm trying to find the optimized Chernoff bound of a sum of i.i.d. discrete random variables Xi, 0$\leq$ i < n, with support {-1,0,1}. I also know that pX0(1) > pX0(-1). However, I'm not sure if I optimized the bound correctly over t.
My attempt looks like this:
I computed the moment generating function of X0 as
MX0(t) = E[$e^{tX0}$] = $\sum_{k=1}^{\infty} e^{tX0(k)}$pX0(k) = $e^{t(-1)}$pX0(-1) + $e^{t(1)}$pX0(1) + pX0(0)
I then found the moment generating function of the sum of the random variables from 0 to n-1 by multiplying their respective moment generating functions:
MSn = M[$\sum_{i=0}^{n-1}$ Xi] = MX0 MX1 ... MXn-1 = [$e^{-t}$pX0(-1) + $e^{t}$pX0(1)+pX0(0)]$^{n}$
The optimized Chernoff bound for Pr[Sn$\le$a] is given by
Pr[Sn$\le$a] $\le$ min $e^{ta}$ $\prod _ { i = 0 } ^ n E[e^{-tXi}]$
Pr[Sn$\le$0] $\le$ [$e^{-t}$pX0(-1) + $e^{t}$pX0(1) + pX0(0)]$^{n}$
Letting c1 = pX0(-1) and c2 = pX0(1), I then minimize the above expression by taking its derivative and setting it equal to 0 to obtain
-c1$e^{-t}$ + c2$e^{t}$ = 0
c2$e^{t}$ = c1$e^{-t}$
$e^{2t}$ = c1/c2
2t = ln(c1/c2)
t = $\frac 1 { 2 }$ln$\frac {c1} {c2}$
Substituting the optimized value of t into the generic Chernoff bound I obtain
Pr(Sn$\leq$0) $\leq$ [($\frac {pX0(-1)} {pX0(1)}$) $^{-1/2}$ pX0(-1) + ($\frac {pX0(-1)} {pX0(1)}$) $^{1/2}$ pX0(1) + pX0(0)]$^n$
Is this the correct optimized Chernoff bound for Pr[Sn$\le$0]? Thanks for your help and insights.
| Your calculation seems to be correct, but your final value of $t$ is less than $0$, since the probability of getting a $1$ is greater than that of getting a $-1$, as you stated earlier. So you can't use that value. If you actually try to minimize the expression over the admissible range, you have to put $t$ equal to $0$, which gives a trivial bound and is of no use.
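A quick numerical illustration of this point (the probabilities and $n$ below are sample values of mine, not from the question): with $p_{X_0}(1) > p_{X_0}(-1)$ the stationary point is negative, and on the admissible range $t \ge 0$ the asker's expression only grows, so $t=0$ yields the trivial bound $1$:

```python
import math

def bound(t, p_minus, p_plus, n):
    """The asker's expression [e^{-t} p(-1) + e^{t} p(1) + p(0)]^n."""
    p_zero = 1.0 - p_minus - p_plus
    return (math.exp(-t) * p_minus + math.exp(t) * p_plus + p_zero) ** n

p_minus, p_plus, n = 0.2, 0.5, 10          # sample values with p(1) > p(-1)
t_star = 0.5 * math.log(p_minus / p_plus)  # stationary point from the question
grid = [bound(0.1 * k, p_minus, p_plus, n) for k in range(51)]  # t in [0, 5]
```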
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3850411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can Cyclic groups be infinite I am a little confused about how a cyclic group can be infinite.
To provide an example, look at $\langle 1\rangle$ under the binary operation of addition. You can never make any negative numbers with just $1$ and the addition operation.
When we declare a cyclic group $\langle a\rangle $, does it go without saying that even if $a^n \neq a^{-1}, \forall n \in \mathbb{N}$ that $a^{-1} \in \langle a\rangle $?
If the inverse element can not be made with the generator and the operator, how can it be in the group? Do all groups come with an inverse operation such that $a \in S$ and $b \in S$, $a \circ^{-1}b \in S$?
|
Do all groups come with an inverse operation such that $a \in S$ and $b \in S$, $a \circ^{-1}b \in S$?
Do you mean "$a^{-1} b \in S$"? If so, then yes, the existence of inverses is literally one of the group axioms!
I think a precise meaning of the word "generated" will help answer this question.
Let $G$ be a group, and let $S$ be any subset of $G$. The subgroup of $G$ generated by $S$, sometimes denoted $\langle S \rangle_G$, is defined to be the intersection of all subgroups of $G$ which contain $S$ as a subset.
From this definition, we see that $\langle S \rangle_G$ is the unique smallest subgroup of $G$ which contains $S$ as a subset, and from this it's not too hard to prove that $\langle S \rangle_G$ is also the set of elements of $G$ of the form $x_1 x_2 \cdots x_n$ where each $x_1, \dots, x_n$ is either an element of $S$ or the inverse of an element of $S$. So the inverses aren't coming out of nowhere: they arise naturally from this construction of the subgroup generated by a subset!
When we say that $G$ is generated by a subset $S$, we mean that $\langle S \rangle_G = G$; i.e. every element of $G$ can be written as a finite product of elements of $S$ and their inverses. "$G$ is cyclic" means that $G$ is generated by some singleton subset, i.e. there is some $a \in G$ such that every element of $G$ is a finite product of the terms $a$ and $a^{-1}$. In other words, "$G$ is cyclic" is equivalent to saying "there exists some $a \in G$ such that every element of $G$ is equal to $a^n$ for some integer $n$".
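To make the "products of generators and their inverses" description concrete, here is a small closure computation in the additive group $\mathbb{Z}_{12}$ (a hypothetical example of mine, not from the answer; the operation is addition mod $n$ and inverses are negatives mod $n$):

```python
def generated_subgroup(gens, n):
    """Close a subset of Z_n under addition mod n and negation -> <gens>."""
    step = {g % n for g in gens} | {(-g) % n for g in gens}
    elems = {0}
    while True:
        new = {(a + g) % n for a in elems for g in step} - elems
        if not new:
            return elems
        elems |= new
```

For instance, the subgroup generated by $8$ in $\mathbb{Z}_{12}$ is $\{0,4,8\}$: the element $4$ only appears because the inverse $-8 \equiv 4$ is included in the closure.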
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3850689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Prove this formula $1+\cos\theta+\cos2\theta+...+\cos n\theta=\frac{1}{2}+\frac{\sin(n+\frac{1}{2})\theta}{2\sin\frac{\theta}{2}}$ This is homework but I’m really stuck.
The question is to prove a fromula which states:
$$1+\cos\theta+\cos2\theta+...+\cos n\theta=\frac{1}{2}+\frac{\sin(n+\frac{1}{2})\theta}{2\sin\frac{\theta}{2}}$$
I want to solve it using complex numbers
So I came to this
(I missed Re in last one)
Can you guys please help me finish this ?
| You can refer to this:
This way you can easily prove Lagrange's trigonometric identity:
http://faculty.bard.edu/belk/math362/Homework1Solutions.pdf (original answer)
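Independently of the complex-number derivation, the identity from the question can be verified numerically (a sketch with arbitrary sample values of $\theta$ and $n$):

```python
import math

def lhs(theta, n):
    """1 + cos(theta) + cos(2*theta) + ... + cos(n*theta)."""
    return sum(math.cos(k * theta) for k in range(n + 1))

def rhs(theta, n):
    """1/2 + sin((n + 1/2)*theta) / (2*sin(theta/2))."""
    return 0.5 + math.sin((n + 0.5) * theta) / (2 * math.sin(theta / 2))

# Compare both sides at several (theta, n) pairs with sin(theta/2) != 0.
max_err = max(
    abs(lhs(theta, n) - rhs(theta, n))
    for theta in (0.3, 1.0, 2.5)
    for n in (1, 5, 20)
)
```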
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3850838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of recieving an offer Your friend tells you that he had two job interviews last week. He says that based on how the interviews went, he thinks he has a 20% chance of receiving an offer from each of the companies he interviewed with. Nevertheless, since he interviewed with two companies, he is 50% sure that he will receive at least one offer. Is he right?
My Approach:
since he had 2 interviews, 20% each, the total probability of getting an offer should be 40%?
| Hello and welcome to Math.SE.
Your intuition is not serving you correctly here.
If we assume that the events of getting offers from the different companies are independent we find
$$
\begin{align}
\Pr(\text{Friend receives at least 1 offer})
&= 1 - \Pr(\text{Friend receives no offer}) \\
&= 1 - \Pr(\text{No offer from C1 and no offer from C2}) \\
&= 1 - \Pr(\text{No offer from C1}) \cdot \Pr(\text{No offer from C2}) \\
&= 1 - 0.8^2 \\
&= 0.36
\end{align}
$$
Hence the chance of receiving at least one offer would be $36\%$.
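A Monte Carlo sketch confirming the $36\%$ figure (the simulation parameters are arbitrary; independence of the two offers is assumed, as in the calculation above):

```python
import random

random.seed(42)
trials = 200_000
hits = 0
for _ in range(trials):
    offer1 = random.random() < 0.2   # company 1 makes an offer
    offer2 = random.random() < 0.2   # company 2 makes an offer
    if offer1 or offer2:
        hits += 1
estimate = hits / trials   # exact value: 1 - 0.8**2 = 0.36
```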
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3850995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Subgroup element composition Please validate the following proof.
Let $H$ be a subgroup of the Abelian group $G$ with elements $h \in H$ and $g \in G$ but $g \notin H$.
Theorem
The product $gh \notin H$.
Proof
Let $x = gh$. Assume $x \in H$, then by multiplying by $h^{-1} \in H$ we have that $xh^{-1} = gh(h^{-1}) = g(h h^{-1}) = g$. So we have composed two elements of $H$ and obtained an element $g \notin H$ and this violates the closure of the subgroup, hence $gh \notin H$. $\square$
| In fact
If $H$ is a subgroup of $G$ and $h \in H$, then for all $g \in G$ we have $gh \in H \iff g \in H$.
Indeed, $gh \in H \implies g \in Hh^{-1} \subseteq H$.
Conversely, $g \in H \implies gh \in H$. Both claims follow because $H$ is a subgroup and so is closed under products and inverses.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3851128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
prove that in case of externally touching circles the radical axis is the transverse common tangent between them
Let $S_1$ and $S_2$ be two circles with centers $O_1$ and $O_2$ respectively. By definition, the radical axis of two circles is the locus of points from which the lengths of the tangents to the two circles are equal. In the case of externally touching circles, I read that the transverse common tangent is the radical axis, but how do we prove it? How can we prove that $AP=AQ$ or $BR=BS$ in the above diagram? I tried it by congruency, but triangles $O_1PA$ and $O_2PA$ are not congruent.
Thanks in advance
| It is well-known that the radical axis is a line that, in this case, passes through the tangency point, say $T$. If $A$ is a point on the common tangent of circles $\omega_1$ and $\omega_2$, it suffices to prove that $$\text{Power}_{\omega_1}(A)=\text{Power}_{\omega_2}(A) \iff AO_1^2-r_1^2=AO_2^2-r_2^2$$ which is a consequence of the Pythagorean theorem in $\triangle AO_1T$ and $\triangle ATO_2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3851517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Support on Probability Functions I would like to ask a somewhat general question: How do I determine the correct support for a given function? I'm thinking in terms of piecewise functions, and with a focus on probability. I will give some examples to illustrate what I mean.
Consider the joint probability function:
$$f(x,y) = \begin{cases}
xy, & -1 < x < 1,\ 0 < y < a \\
0, & \text{else}
\end{cases}$$
where "a" is chosen to make the function a valid pdf. If I want to find the joint CDF, I would evaluate the integral:
$$F(x,y) = \int_{-\infty}^{y} \int_{-\infty}^{x} uv\,du\,dv$$
where $u$ and $v$ are dummy variables. However, as I understand it, since $f(x,y)$ is zero outside the box prescribed by the given bounds, it follows that the correct integral would have the bounds:
$$F(x,y) = \int_{0}^{y} \int_{-1}^{x} uv\,du\,dv \qquad \text{for } y<a \text{ and } x<1$$
Since any other part would be zero. Then suppose I want to find $$F_X(x)$$ in which case I would want to find F(x,y) where x is a constant and the limit of y goes to infinity. But since 0<y<a, does this mean that I actually would need to evaluate:
$$F_X(x) = F(x,a)$$
I've become considerably confused since my professor's notes are... rather jumbled and messy. I'm looking for resources that would be able to explain support and provide explicit examples of how to evaluate various integrals in the context of probability.
| Given the posted density:
$$f_{XY}(x,y)=xy\mathbb{1}_{(-1;1)}(x)\mathbb{1}_{(0;a)}(y)$$
there is no value of $a$ for which it is a valid joint density.
But worse: if you try to calculate $f_X$ you get
$$f_X(x)=\int_0^a xy dy=\frac{a^2}{2}x$$
Which is NEGATIVE in $x \in(-1;0)$
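The sign problem can also be seen numerically (a sketch of mine; $a=1$ is an arbitrary sample value and the marginal is computed with a midpoint rule):

```python
# Midpoint-rule marginal f_X(x) = integral of x*y over 0 < y < a, with a = 1.
a = 1.0

def f_X(x, steps=1000):
    dy = a / steps
    return sum(x * ((i + 0.5) * dy) * dy for i in range(steps))

neg_value = f_X(-0.5)   # the "density" comes out negative for -1 < x < 0
pos_value = f_X(0.5)    # analytically (a**2 / 2) * x = 0.25 here
```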
I'm looking for resources that would be able to explain support and provide explicit examples of how to evaluate various integrals in the context of probability.
The best book I have ever read on this matter is the Papoulis
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3851786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $x$ and $y$ be real numbers. If $xy$ is irrational, then $x$ is irrational or $y$ is irrational Let $x$ and $y$ be real numbers. If $xy$ is irrational, then $x$ is irrational or $y$ is irrational.
Prove by contrapositive
I believe the contrapositive is if x is rational or y is rational, then xy is rational
How do I go about proving this?
| The contrapositive of ($xy$ is irrational $\implies x$ is irrational or $y$ is irrational) is:
$x$ is rational and $y$ is rational $\implies xy$ is rational.
Pf: Let $x = \frac ab;\ a,b\in \mathbb Z,\ b \neq 0$, and $y = \frac cd;\ c,d\in \mathbb Z,\ d \neq 0$.
Then $xy = \frac ab \frac cd = \frac {ac}{bd}$. And $ac\in \mathbb Z$ and $bd\in \mathbb Z$ with $bd \neq 0$, so $\frac {ac}{bd}$ is rational.
======
FWIW
It's worth having the following under your belt:
$rational \times rational = rational$. Pf: $\frac mn \frac jk = \frac {mj}{nk}$.
$\underbrace{rational}_{\text{not equal to zero}} \times irrational = irrational$. Pf: $\frac ab\times irrational=?????? \implies irrational = ????? \times \frac ba=?????\times rational$. If $?????$ is rational then we have $irrational = rational\times rational$ which we just proved was impossible.
$irrational \times irrational = impossible\ to\ tell$. Knowing what each of the multiplicands can't be doesn't tell us what the product can or can't be. Example: $\sqrt 2 \times \sqrt 8=\sqrt{16} =4$. But $\sqrt 2\times \sqrt 3 = \sqrt 6$.
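The first closure fact, rational × rational = rational, can be illustrated with Python's exact `Fraction` type (a sketch I've added, not part of the original answer; the irrational cases can't be checked this way, since floats are themselves rational approximations):

```python
from fractions import Fraction

# x = a/b and y = c/d with integer numerators and denominators;
# the product (a*c)/(b*d) is again a ratio of integers.
x = Fraction(3, 7)   # a/b
y = Fraction(-5, 2)  # c/d
product = x * y
print(product)                              # -15/14
print(product == Fraction(3 * -5, 7 * 2))   # True
```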
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3851929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove that $P\left(A_{1}+A_{2}+\ldots+A_{n}\right) \leq P\left(A_{1}\right)+P\left(A_{2}\right)+\ldots+P\left(A_{n}\right)$ Question: For $n$ events $A_{1}, A_{2}, \ldots . ., A_{n}$ in a probability space show that
$$
P\left(A_{1} A_{2} \ldots A_{n}\right) \geq P\left(A_{1}\right)+P\left(A_{2}\right)+\ldots+P\left(A_{n}\right)-(n-1)
$$
Hence deduce $P\left(A_{1}+A_{2}+\ldots+A_{n}\right) \leq P\left(A_{1}\right)+P\left(A_{2}\right)+\ldots+P\left(A_{n}\right)$.
Progress: Proof :
We have by Boole's inequality
$$
P\left(\bar{A}_{1}+\bar{A}_{2}+\cdots+\bar{A}_{n}\right) \leqslant P\left(\bar{A}_{1}\right)+P\left(\bar{A}_{2}\right)++\cdots+P\left(\bar{A}_{n}\right)
$$
or, $\quad P\left(\overline{A_{1} A_{2} \ldots A_{n}}\right) \leqslant P\left(\bar{A}_{1}\right)+P\left(\bar{A}_{2}\right)+\ldots+P\left(\bar{A}_{n}\right)$
by De Morgan's law.
or, $\quad 1-P\left(A_{1} A_{2} \ldots A_{n}\right) \leqslant P\left(\bar{A}_{1}\right)+P\left(\bar{A}_{2}\right)+\cdots+P\left(\bar{A}_{n}\right)$
or, $\quad P\left(A_{1} A_{2} \ldots A_{n}\right) \geqslant 1-\sum_{i=1}^{n} P\left(\bar{A}_{i}\right)$.
Now $P\left(\bar{A}_{i}\right)=1-P\left(A_{i}\right)\implies 1-\sum_{i=1}^{n} P\left(\bar{A}_{i}\right)=1-\sum_{i=1}^{n}\left\{1-P\left(A_{i}\right)\right\}=\sum_{i=1}^{n} P\left(A_{i}\right)-(n-1)$.
From these we can prove the first part.
How can I deduce the last part from first part?
| Hint $:$ Apply the first part on $P\left (A_1^c A_2^c \cdots A_n^c \right )$ and use De Morgan's law to deduce the second part.
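Both inequalities can be spot-checked numerically on a small finite sample space (a sketch I've added; since they are theorems, they hold for any choice of events):

```python
import random

# Three random events on a 12-point equally likely sample space.
random.seed(0)
omega = list(range(12))
events = [set(random.sample(omega, random.randint(3, 9)))
          for _ in range(3)]

def P(s):
    return len(s) / len(omega)

inter, union = set(omega), set()
for A in events:
    inter &= A   # A1 A2 ... An  (intersection)
    union |= A   # A1 + A2 + ... + An  (union)

n = len(events)
# First part: P(A1 A2 ... An) >= sum P(Ai) - (n - 1)
assert P(inter) >= sum(P(A) for A in events) - (n - 1) - 1e-12
# Deduced part (Boole): P(A1 + A2 + ... + An) <= sum P(Ai)
assert P(union) <= sum(P(A) for A in events) + 1e-12
print("both inequalities verified")
```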
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are there different words for a circle, and the edge of a circle, which are topologically distinct? The following shape, we would refer to as a circle:
First circle
The following shape we would also refer to as a circle:
Second circle
But these two circles are topologically distinct from one another, are they not? The first circle has a massive "hole" in the middle, and so is really more of a loop in two dimensions. The second circle is a "true circle". But we would refer to both as a circle. In fact, the wikipedia page on circles shows images that mirror the first circle: https://en.wikipedia.org/wiki/Circle
But the first circle is less of a circle object than the second - it's really a loop holding a circular form, or the edge of a circle, or a circle with a hole punched in it, rather than a circle proper.
I was thinking about this because of Nietzsche's quote: "time is a flat circle". Would he mean to say that time is a normal, two dimensional circle, akin to the second picture of a circle above? Or that time is a loop, like a piece of flat ribbon, folded back on itself? I tend towards interpreting it as the second option, as that makes more sense: he's saying that in the end everything repeats and there are no beginnings nor endings, just an eternal cycle. But that's more metaphysical; the specific question for this post is whether there are different words for these two, clearly topologically distinct, 2D objects, which we refer to as circles?
| In addition to the answers given on "circle" being the boundary of a "disk" in a 2-dimensional plane: In arbitrary dimensions one usually calls the set of all $x$ in $\mathbb R^n$ with $\|x\|\le 1$ the (closed) $n$-dimensional unit ball and its boundary, the set of all $x$ with $\|x\|=1$, the $(n-1)$-dimensional unit sphere. Here $\|x\|$ denotes the distance from the origin.
So the first figure would be a $1$-dimensional sphere and the second a $2$-dimensional (closed) ball.
The names have their origin in the case $n=3$ where the $3$-ball actually is a solid ball as you think of it and the $2$-sphere is just the surface of the ball.
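In code, the distinction between the two definitions is just $\le$ versus $=$ on the norm (an illustrative sketch I've added, not part of the answer):

```python
import math

def in_closed_ball(x, tol=1e-12):
    """Closed unit n-ball: points with ||x|| <= 1."""
    return math.sqrt(sum(c * c for c in x)) <= 1 + tol

def on_unit_sphere(x, tol=1e-12):
    """(n-1)-sphere: only the points with ||x|| == 1."""
    return abs(math.sqrt(sum(c * c for c in x)) - 1) <= tol

print(in_closed_ball((0.3, 0.4)))   # True: inside the disk (2-ball)
print(on_unit_sphere((0.6, 0.8)))   # True: on the circle (1-sphere)
print(on_unit_sphere((0.3, 0.4)))   # False: interior point, not on the edge
```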
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find x so that $\left(\frac{3}{2}+ \sum_{i=1}^{x} 3^{i}\right)^2 = \frac{3^{22}}{4}$ Find $x$ so that $$\left(\frac{3}{2}+ \sum_{i=1}^{x} 3^{i}\right)^2 = \frac{3^{22}}{4}$$
I've tried with simpler values for $x$ such as $0, 1$ and $2$. But I can't seem to find any pattern I can take advantage of. How do I solve it? Where would I learn how to solve things like these?
| We have that
$$\left(\frac{3}{2}+ \sum_{i=1}^{x} 3^{i}\right)^2 = \frac{3^{22}}{4} \iff \frac{3}{2}+ \sum_{i=1}^{x} 3^{i} = \frac{3^{11}}{2} \iff \sum_{i=1}^{x} 3^{i} = \frac{3^{11}-3}{2}$$
(the first equivalence takes the positive square root, which is valid because the left-hand side is positive) then use that
$$\sum_{i=1}^{x} a^{i}=\frac{a^{x+1}-1}{a-1}-1$$
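Following the answer's reduction through numerically (my addition): with $a=3$, the sum is $\frac{3^{x+1}-3}{2}$, so the equation becomes $\frac{3^{x+1}-3}{2} = \frac{3^{11}-3}{2}$, giving $x = 10$.

```python
# Brute-force check of the reduction: find x with
# sum_{i=1}^{x} 3^i = (3^11 - 3) / 2, and verify the closed form.
def geom_sum(a, x):
    return sum(a**i for i in range(1, x + 1))

def closed_form(a, x):
    return (a**(x + 1) - 1) // (a - 1) - 1   # exact for integer a > 1

assert all(geom_sum(3, k) == closed_form(3, k) for k in range(1, 15))

target = (3**11 - 3) // 2
x = next(k for k in range(1, 30) if geom_sum(3, k) == target)
print(x)  # 10

# Plug back into the original equation.
assert (3 / 2 + geom_sum(3, x)) ** 2 == 3**22 / 4
```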
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove that if $A$ is an $n \times n$ matrix such that $A^{4} = 0$ then $(I_n - A)^{-1}=I_n+A+A^2+A^3$ Prove that if $A$ is an $n \times n$ matrix such that $A^{4}$ = 0 then:
$$(I_n - A)^{-1}=I_n+A+A^2+A^3$$
My proof is as follows:
$$(I_n - A)(I_n - A)^{-1}=I_n$$
$$(I_n - A)^{-1}=I_n/(I_n - A)$$
$$I_n/(I_n - A)=I_n+A+A^2+A^3$$
$$I_n=(I_n - A)(I_n+A+A^2+A^3)$$
$$I_n=I_n+A+A^2+A^3-A-A^2-A^3-A^4$$
$$I_n=I_n-A^4$$
because we know that: $$A^4=0$$
therefore:
$$I_n=I_n$$
Is this an acceptable justification or have I made an error in my logic?
*I apologize for any poor formatting
| Hint: Let $B=I+A+A^2+A^3$. Compute $AB$.
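Following the hint numerically (a sketch I've added): a strictly upper-triangular $4\times 4$ matrix satisfies $A^4=0$, so we can check the product $(I-A)(I+A+A^2+A^3)$ directly with plain lists, no external libraries.

```python
# Verify (I - A)(I + A + A^2 + A^3) = I for a nilpotent A with A^4 = 0.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

n = 4
I = [[int(i == j) for j in range(n)] for i in range(n)]
# Superdiagonal shift matrix: strictly upper triangular, so A^4 = 0.
A = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

A2 = matmul(A, A)
A3 = matmul(A2, A)
assert matmul(A3, A) == [[0] * n for _ in range(n)]   # A^4 = 0

B = matadd(matadd(I, A), matadd(A2, A3))              # I + A + A^2 + A^3
IminusA = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]
assert matmul(IminusA, B) == I                        # telescoping: I - A^4
print("(I - A)(I + A + A^2 + A^3) == I")
```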
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3852512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |