If $\sum a_n$ converges then $\sum a_n^3$ converges Prove or contradict: if $\sum a_n$ converges then $\sum a_n^3$ converges.
I was able to prove that if $a_n \geq 0$ then the statement is true. But I couldn't prove nor contradict the general case.
|
False, counterexample: $$a_n = \frac{\epsilon_n}{\sqrt[3]{\lceil n/3 \rceil}}
\quad\text{ where }\quad \epsilon_n = \begin{cases}+2,& n \equiv 1 \pmod 3\\
-1, & n \not\equiv 1 \pmod 3\end{cases}$$
It is easy to see
$\displaystyle\;\left| \sum_{n=1}^N a_n \right| \le \frac{2}{\sqrt[3]{\lceil N/3 \rceil}}
\quad\implies\quad
\sum_{n=1}^\infty a_n$ exists and equals $0$.
However $\displaystyle\;\sum_{n=1}^{3N} a_n^3 = 6\sum_{n=1}^N \frac{1}{n} \approx 6( \log N + \gamma )\;$
diverges logarithmically.
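A quick numerical sanity check of this counterexample (a Python sketch; the truncation point $N$ is an arbitrary choice, not part of the proof):
```python
import math

def a(n):
    # eps_n = +2 if n = 1 (mod 3), else -1
    eps = 2 if n % 3 == 1 else -1
    return eps / math.ceil(n / 3) ** (1 / 3)

N = 3 * 10**5
print(sum(a(n) for n in range(1, N + 1)))      # ~0: the series converges
print(sum(a(n)**3 for n in range(1, N + 1)))   # grows like 6(log(N/3) + gamma)
print(6 * (math.log(N / 3) + 0.5772156649))
```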
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1826774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Very very simple matrix multiplication formula, don't go harsh on me please :) So I'm studying from "Linear Algebra and Its Applications, 3rd edition" - Gilbert Strang.
He gave out this formula to find
$$
\sum_{j=1}^n a_{i,j}x_j
$$
matrix multiplication of the matrices below
$$
\left[
\begin{array}{ccc}
1 & 4 & 7\\
2 & 5 & 8\\
3 & 6 & 9
\end{array}
\right]
$$
$$
\begin{bmatrix}
u\\
v\\
w\\
\end{bmatrix}
$$
Let the first matrix be $A$, the second matrix be $x$, and the solution be $b$:
$$
Ax = b
$$
once we apply the summation formula to find
$$
Ax
$$
We see that for each row of $A$ ($i$ in this case), $x$ is multiplied by that row.
For instance,
$$
a_{i,1}
$$
will get us the first column of $A$. So here (according to my reading of this summation formula) we would multiply the first column with the first row of $x$,
and wouldn't this produce:
$$
6u = b_{1}\\
15v = b_{2}\\
24w = b_{3}\\
$$
I know the above is wrong, but I believe this reading would produce those equations. The equations below are what should be produced, however:
$$
u+4v+7w = b_{1}\\
2u+5v+8w= b_{2}\\
3u+6v+9w = b_{3}\\
$$
So I don't believe this summation formula in the textbook is flawed; what I'm asking is how it is actually carried out.
Normally while multiplying matrices I would multiply the first row of the matrix on the left with each column of the matrix on the right, and that would give the first row of the result matrix.
BUT IN THIS CASE --->
Here something different seems to be done: the first column of the matrix on the left is multiplied with the first row of the matrix on the right?
I've never heard of a matrix multiplication like that. What would the output of such a multiplication be, and how would it be produced?
My question is very easy, but I'm really bad with matrices. I'm not bad at math in general, but I've always used the way I described for multiplying matrices for a long time, so I got a little confused here. Any help will be appreciated :)
|
Example: Take $i = 2$, i.e. row number $2$. Then
$$\sum\limits_{j=1}^n a_{i,j} x_j = \sum\limits_{j=1}^3 a_{2,j} x_j = a_{2,1}x_1 + a_{2,2}x_2 + a_{2,3}x_3 = 2u + 5v + 8w. $$
This is because $(x_1,x_2,x_3) = (u,v,w)$ and $n = 3$ is the number of columns of the matrix $A$.
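If it helps, here is the row formula written out as a small Python sketch (the test values for $u,v,w$ are arbitrary):
```python
# b_i = sum_j a[i][j] * x[j]: each row of A is dotted with the whole vector x.
A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
u, v, w = 1, 2, 3          # arbitrary test values
x = [u, v, w]

b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(b)  # [30, 36, 42] = [u+4v+7w, 2u+5v+8w, 3u+6v+9w]
```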
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1826867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find stationary points of the function $f(x,y) = (y^2-x^4)(x^2+y^2-20)$ I have a problem finding some of the stationary points of the function above. I proceeded in this way: the gradient of the function (after dropping a common factor of $2$) is:
$$ \nabla f = \left( xy^2-3x^5-2x^3y^2+40x^3 \,,\; x^2y+2y^3-x^4y-20y \right) $$
So in order to find the stationary points, I must resolve the system:
$$ \begin{cases}
xy^2-3x^5-2x^3y^2+40x^3 = 0 \\
x^2y+2y^3-x^4y-20y = 0
\end{cases} $$
So far I've found the points:
$$ (0,0) \qquad \left(\pm 2 \sqrt{10 \over 3} , 0 \right) \qquad (0, \pm \sqrt{10}) $$
But I'm still stuck when I have to find the points arising from the system:
$$
\begin{cases}
2x^6 + 3x^4 +x^2 -20 = 0 \\
y^2 = \frac{1}{2} \left( x^4 - x^2 + 20 \right) \end{cases}
$$
This I don't know how to solve. Can someone help me? Thanks.
|
WolframAlpha gets
$$
\DeclareMathOperator{grad}{grad}
\grad((x^2+y^2-20) (y^2-x^4)) = (-6 x^5-4 x^3 (y^2-20)+2 x y^2, 2 y (-x^4+x^2+2 y^2-20))
$$
(link) and nine real critical points (link).
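For readers without the links, a short SymPy sketch (my own, not WA's output) recovers the same nine real critical points:
```python
import sympy as sp

x, y = sp.symbols('x y')
f = (y**2 - x**4) * (x**2 + y**2 - 20)
sols = sp.solve([f.diff(x), f.diff(y)], [x, y], dict=True)
real_sols = [s for s in sols if s[x].is_real and s[y].is_real]
print(len(real_sols))   # 9
for s in real_sols:
    print(s[x], s[y])   # (0,0), (0,±sqrt(10)), (±2*sqrt(30)/3, 0), (±2, ±4)
```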
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1826973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Nested Hypergeometric series Is it possible to express the following series as a hypergeometric function:
$$\sum_{n=0}^\infty (a)_n \sum_{j_1+j_2+\cdots+j_k=n} \frac{1}{(b)_{j_1} (b)_{j_2}\cdots (b)_{j_k}} z^n $$
where $(a)_n, (b)_n$ are Pochhammer symbols.
Intuitively, if the inner sum can be expressed as a Pochhammer symbol, we obtain a hypergeometric series.
Any ideas, suggestions and clues are much appreciated.
|
$$\sum_{j_1+\ldots+j_k=n}\frac{z^{n}}{(b)_{j_1}\cdots (b)_{j_k}}\tag{1} $$
is the coefficient of $x^n$ in the product:
$$\left(\sum_{m\geq 0}\frac{z^m x^m}{(b)_m}\right)^k = \left(\int_{0}^{1}x z e^{txz}(1-t)^{b-1}\,dt\right)^k=x^k z^k\left(\int_{0}^{1}t^{b-1} e^{(1-t)xz}\,dt\right)^k\tag{2}$$
I wrote $(b)_m=\frac{\Gamma(b+m)}{\Gamma(b)}=\frac{\Gamma(m)}{B(b,m)}$, exploited the integral definition of the Beta function and switched $\sum$ and $\int$. So we have:
$$\sum_{n\geq 0}(a)_n \sum_{j_1+\ldots+j_k=n}\frac{z^{n}}{(b)_{j_1}\cdots (b)_{j_k}}= \frac{z^k}{\Gamma(a)}\sum_{n\geq 0}\Gamma(a+n)\cdot [x^{n-k}]\left(\int_{0}^{1}t^{b-1} e^{(1-t)xz}\,dt\right)^k\tag{3}$$
and the RHS of $(3)$ should be easy to rearrange in a hypergeometric format.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$f:S^1 \to \mathbb R$ be continuous, is the set $\{(x,y) \in S^1 \times S^1 : x \ne y , f(x)=f(y)\}$ infinite? Let $f:S^1 \to \mathbb R$ be a continuous function. I know that $\exists y \in S^1 : f(y)=f(-y)$, where $y \ne -y$ (since $\|y\|=1$), so the set $A:=\{(x,y) \in S^1 \times S^1 : x \ne y , f(x)=f(y)\}$ is non-empty. My question is: is the set $A$ infinite?
|
If $f$ is constant, then the answer is clearly yes.
Suppose that $f$ is not constant, so there are $p,q\in S^1$ with $f(p) < f(q)$.
There are two paths joining $p$ and $q$ in $S^1$; call them $I$ and $J$. Applying the Intermediate Value Theorem to $f|_I$ and $f|_J$, we see that every value in the open interval $(f(p),f(q))$ is achieved at least twice: once in $I$ and once in $J$. Since that interval contains infinitely many values, this produces infinitely many pairs in $A$, so $A$ is infinite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Suppose that $a$ and $b$ satisfy $a^2b|a^3+b^3$. Prove that $a=b$.
Suppose that $a$ and $b \in \mathbb{Z}^+$ satisfy $a^2b|a^3+b^3$. Prove that $a=b$.
I have reduced the above formulation to these two cases. Assuming $b = a + k$. Proving that any of the below two implies that $a=b$ will be enough.
$$a^2b|(a+b)^3 - 3ab^2$$
$$a^2b|2a^3+3a(a+k)+k^3$$
I can't proceed from here. How should I proceed from here?
Thanks.
|
By hypothesis $\ n = \dfrac{a^3\!+b^3}{a^2b} = \dfrac{a}b + \left(\dfrac{b}a\right)^2\! =\, x+x^{-2}\,\overset{\large {\times\, x^2}}\Longrightarrow\,x^3-n\,x^2 + 1 = 0$
By the Rational Root Test $\ a/b\, =\, x\, = \pm 1\ \ $ QED
Generally applying RRT as above yields the degree $\,j+k\,$ homogeneous generalization
$$a,b,c_i\in\Bbb Z,\,\ a^{\large j}b^{\large k}\mid \color{#c00}1\:\! a^{\large j+k}\! + c_1 a^{\large j+k-1} b + \cdots + c_{\large j+k-1} a b^{\large j+k-1}\! + \color{#c00}1\:\!b^{\large j+k}\Rightarrow\, a = \pm b \qquad $$
$\qquad\qquad\ \ \ \ \ \ $ e.g. $\ a^2b \mid a^3 + c_1 a^2b + c_2 ab^2 + b^3\,\Rightarrow\, a = \pm b $
Alternatively the statement is homogeneous in $\,a,b\,$ so we can cancel $\,\gcd(a,b)^{\large j+k}$ to reduce to the case $\,a,b\,$ coprime. The dividend $\,c\,$ has form $\,a^{\large n}\!+b^{\large n}\! + abm\,$ so by Euclid it is coprime to $a,b$ thus $\,a,b\mid c\,\Rightarrow\, a,b = \pm1$.
Remark $\ $ The proof in lhf's answer is precisely the standard proof of the Rational Root Test specialized to this particular polynomial. The Rational Root Test concisely encapsulates all divisibility results of this (homogeneous) form.
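A brute-force check of the statement over a small range (a sketch, obviously not a proof):
```python
# Search for pairs with a^2*b | a^3 + b^3 among small positive integers.
hits = [(a, b) for a in range(1, 300) for b in range(1, 300)
        if (a**3 + b**3) % (a**2 * b) == 0]
print(all(a == b for a, b in hits))  # True: only a = b occurs
```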
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
A sum of squared binomial coefficients I've been wondering how to work out the compact form of the following.
$$\sum^{50}_{k=1}\binom{101}{2k+1}^{2}$$
|
$$\begin{align}\sum_{k=0}^m \binom {2m+1}{2k+1}^2
&=\sum_{k=0}^m \binom {2m+1}{2k+1}\binom {2m+1}{2m-2k}
\color{lightgrey}{=\sum_{j=0}^m\binom {2m+1}{2(m-j)+1}\binom {2m+1}{2j}\quad \scriptsize (j=m-k)}\\
&=\frac 12 \sum_{k=0}^m \left[\binom {2m+1}{2k}\binom {2m+1}{2(m-k)+1}+\binom {2m+1}{2k+1}\binom{2m+1}{2m-2k}\right]\\
&=\frac 12 \sum_{i=0}^{2m+1}\binom {2m+1}i\binom {2m+1}{2m+1-i}\\
&=\frac 12 \binom {4m+2}{2m+1}\\
\sum_{k=1}^m \binom {2m+1}{2k+1}^2&=\frac 12 \binom {4m+2}{2m+1}-\binom {2m+1}{1}^2\\
&=\frac 12 \binom {4m+2}{2m+1}-(2m+1)^2
\end{align}$$
Put $m=50$:
$$\sum_{k=1}^{50}\binom {101}{2k+1}^2=\color{red}{\frac 12 \binom {202}{101}-101^2}\qquad\blacksquare$$
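One can confirm the closed form numerically (a one-line sketch using Python's exact integer binomials):
```python
from math import comb

lhs = sum(comb(101, 2*k + 1)**2 for k in range(1, 51))
rhs = comb(202, 101) // 2 - 101**2   # C(202,101) is even, so // is exact
print(lhs == rhs)  # True
```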
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$S_n$ is an integer for all integers $n$
Let $a$ be a non-zero real number. For each integer $n$, we define $S_n = a^n + a^{-n}$. Prove that if for some integer $k$, the sums $S_k$ and $S_{k+1}$ are integers, then the sums $S_n$ are integers for all integers $n$.
We have $S_{k} = a^k+\frac{1}{a^k} = m_1$ and $S_{k+1} = a^{k+1}+\frac{1}{a^{k+1}} = m_2$ where $m_1,m_2 \in \mathbb{Z}$. Thus raising $S_k$ and $S_{k+1}$ to any positive power results in an integer. Is there a way I can prove the statement from this?
|
Partial stuff:
Lemma: If $b+b^{-1}$ is an integer then $b^n+b^{-n}$ is an integer for all $n$.
The proof is via induction.
Notice that by the binomial theorem (and symmetry of binomial coefficients) $(b+b^{-1})^n=\sum\limits_{i=0}^{\lfloor (n-1)/2\rfloor}\binom{n}{i}\left(b^{i}b^{-(n-i)}+b^{n-i}b^{-i}\right)+A$ (where $A=0$ if $n$ is odd and $A=\binom{n}{n/2}$ otherwise).
Notice every summand except for $b^n+b^{-n}$ is an integer by the inductive hypothesis, and $(b+b^{-1})^n$ is also an integer. We conclude $b^n+b^{-n}$ is an integer.
Setting $b=a^k$ and $a^{k+1}$ we have $a^{nk}+a^{-nk}$ and $a^{n(k+1)}+a^{-n(k+1)}$ are integers for all $n$.
Solution: Let $T_n$ be the polynomial defined by $T_0(x)=2$, $T_1(x)=x$, $T_{n+1}(x)=xT_n(x)-T_{n-1}(x)$. We then have $T_n(a+a^{-1})=a^n+a^{-n}$; it follows from the recursion that if two consecutive values $T_k(x),T_{k+1}(x)$ are integers then $T_n(x)$ is an integer for all $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Pi Approximation: Simpler Solution to Limit? May be a ridiculous question, but I wanted to see if MSE had "simpler" proofs for Viete's approximation (specifically, using an equation derived from Viete's formula) of $\pi$: $$\lim_{x \to \infty}2^x \left(\sqrt{2-\sqrt{2+\sqrt{2 + \sqrt{2 +\sqrt{2}+\cdots} }}}\right) = \pi$$ for $x$ square roots (and only one minus sign; the rest are additions).
By "simpler," I'm sort of looking for a proof not using Viete's formulas at the least, but I would also like to know if it's possible to show this with more basic limit methods (e.g., Taylor Series(?), taking natural log, etc)
Thank you kindly!
|
You can use the fact that
$$\sqrt{2+\sqrt{2+\ldots+\sqrt{2}}}=2\cos\frac{\pi}{2^{n+1}},$$
where $n$ is the number of radicals in LHS. Thus your sequence is
$$2^n\sqrt{2-2\cos\frac{\pi}{2^n}}=2^{n+1}\sin\frac{\pi}{2^{n+1}}\to \pi$$
as $n\to\infty$, since $$\lim_{x\to 0}\frac{\sin\pi x}{x}=\pi.$$
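A numerical sketch of the convergence (the function name is mine; `n` counts the total number of radicals):
```python
import math

def viete_term(n):
    # n radicals total: 2^n * sqrt(2 - sqrt(2 + sqrt(2 + ...)))
    r = 0.0
    for _ in range(n - 1):
        r = math.sqrt(2 + r)       # the inner chain of plus signs
    return 2**n * math.sqrt(2 - r)

for n in (5, 10, 20):
    print(viete_term(n))  # 3.1403..., 3.1415914..., -> pi
```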
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Describing the action of T (linear transformation) on a general matrix I am not familiar with linear transformations in general, and as such, I do not know how to approach this type of question as the examples I'm given/looked up online usually deal with finding the transformation matrix itself.
Suppose $T:M_{2,2}\rightarrow P_{3}$ is a linear transformation whose action on the standard basis for $M_{2,2}$ is as follows:
$$T\begin{bmatrix}
1 & 0\\ 0
& 0
\end{bmatrix}= x^3-3x^2+x-2$$
$$T\begin{bmatrix}
0 & 1\\ 0 & 0\end{bmatrix}= x^3-3x^2+2x-2$$
$$T\begin{bmatrix}
0 & 0\\ 1
& 0
\end{bmatrix}=x^3-x^2+2x$$
$$T\begin{bmatrix}
0 & 0\\ 0
& 1
\end{bmatrix}=3x^3-5x^2-1$$
I am asked to describe the action of $T$ on a general matrix using $x$ as a variable for the polynomial and $a,b,c,d$ as constants. I am assuming that I need to form some sort of expression in polynomial form.
Any help would be greatly appreciated!
|
As it turns out, I have been overthinking this problem.
Taking $T=\begin{bmatrix}
1 & 1 & 1 & 3\\
-3 & -3 & -1 & -5\\
1 & 2 & 2 & 0\\
-2 & -2 & 0 & -1
\end{bmatrix}$ and multiplying it with the vector of constants $\begin{bmatrix} a\\b\\c\\d\end{bmatrix}$ gives me $$\begin{bmatrix} a+b+c+3d\\-3a-3b-c-5d\\a+2b+2c\\-2a-2b-d\end{bmatrix}$$
From here, row 1 corresponds with $x^3$, row 2 with $x^2$, row 3 with $x$, and row 4 with a constant of 1 (given the standard basis for the polynomials to be $1, x, x^2, x^3$, this makes sense).
Therefore, the action of $T$ on the general matrix can be written in polynomial form as $$T\begin{bmatrix} a&b\\c&d\\ \end{bmatrix}=(a+b+c+3d)x^3+(-3a-3b-c-5d)x^2+(a+2b+2c)x+(-2a-2b-d)$$
Alternatively, another method (suggested to me previously) is to simply take $$aT\begin{bmatrix} 1&0\\0&0 \end{bmatrix}+bT\begin{bmatrix} 0&1\\0&0 \end{bmatrix}+cT\begin{bmatrix} 0&0\\1&0 \end{bmatrix}+dT\begin{bmatrix} 0&0\\0&1 \end{bmatrix}$$ such that you get
$$a(x^3-3x^2+x-2)+b(x^3-3x^2+2x-2)+c(x^3-x^2+2x)+d(3x^3-5x^2-1)$$
Combining like terms, we end up with the same result as above. Both methods are essentially the same.
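For what it's worth, the matrix method is easy to check with NumPy (a sketch; the test values of $a,b,c,d$ are arbitrary):
```python
import numpy as np

# Columns = coefficient vectors (x^3, x^2, x, 1) of T on the standard basis.
M = np.array([[ 1,  1,  1,  3],
              [-3, -3, -1, -5],
              [ 1,  2,  2,  0],
              [-2, -2,  0, -1]])

a, b, c, d = 2, -1, 3, 5            # arbitrary test values
print(M @ np.array([a, b, c, d]))   # [19 -31 6 -7], matching the formula above
```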
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Prove $\sum \frac{\cos nz}{n!}$ converges on compact sets Prove that
$$\sum_{n=1}^{\infty} \displaystyle\frac{\cos nz}{n!}$$
is an entire function; that is, show the series is uniformly convergent on compact sets.
|
Sketch:
You can use that $\cos(nx + n iy) = \cos nx \cosh ny - i\sin nx \sinh ny$ to obtain that $|\cos nz| \leq e^{|ny|}$. Since factorials grow faster than exponentials, the complex version of the Weierstrass $M$-test will imply the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Any unit has an irreducible decomposition The proposition is the following:
Let $R$ be a principal ideal domain. Then every $a \in R$ with $a \neq 0$ has an irreducible decomposition, that is, there is a unit $u$ and irreducible elements $p_1, \dots, p_n$ such that $a = up_1\dots p_n$.
We learned that an irreducible element is a non-zero non-unit that cannot be written as a product of two non-units, and that a unit is an element that has a multiplicative inverse.
But how come the proof of this proposition says: "Any unit clearly has an irreducible decomposition"? Why? How can a unit have an irreducible decomposition? Isn't an irreducible element a non-unit?
For example, $1$ in $(\mathbb{Z},+, \cdot)$, it is a unit. What is its irreducible decomposition?
|
In the definition of an "irreducible decomposition" $a = up_1\dots p_n$, it is possible to have $n=0$. Then you have no irreducible factors $p_i$ at all and just have a unit $u$, so you are saying $a=u$. So, for instance, the irreducible decomposition of $1$ is just $1=1$ (with $n=0$ and $u=1$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1827953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $3 + 3 \times 5 + 3 \times 5^2 + \cdots+ 3 \times 5^n = [3(5^{n+1} - 1)] / 4$ whenever $n \geq 0$
Use induction to show that $$3 + 3 \times 5 + 3 \times 5^2 + \cdots+ 3 \times 5^n= \frac{3(5^{n+1} - 1)}{4} $$whenever $n$ is a non-negative integer.
I know I need a base-case where $n = 0$:
$$3 \times 5^0 = \frac{3(5^{0+1} - 1)}{4}\\LHS = 3 = \frac{12}{4} = RHS$$
Next I need to show that this is true for the $n + 1$ (next) term through a proof using induction. This is really where I could use a concrete example of a proof; I have yet to find one that I could really understand.
|
I imagine the post on how to write a clear induction proof could be of great service to you. Bob's answer highlights the key points, but I thought I would provide another answer to possibly increase clarity.
You have completed the base case and that's the first part. Great. Now, fix some integer $k\geq 0$ and assume that the statement
$$
S(k) : \color{green}{\sum_{i=0}^k3\cdot5^i=\frac{3(5^{k+1}-1)}{4}}
$$
holds (this is the inductive hypothesis). To be shown is that
$$
S(k+1) : \color{blue}{\sum_{i=0}^{k+1}3\cdot5^i=\frac{3(5^{k+2}-1)}{4}}
$$
follows. Beginning with the left-hand side of $S(k+1)$,
\begin{align}
\color{blue}{\sum_{i=0}^{k+1}3\cdot5^i}&= \color{green}{\sum_{i=0}^k3\cdot5^i}+3\cdot5^{k+1}\tag{by definition of $\Sigma$}\\[1em]
&= \color{green}{\frac{3(5^{k+1}-1)}{4}}+3\cdot5^{k+1}\tag{by inductive hypothesis}\\[1em]
&= \frac{3(5^{k+1}-1)+4\cdot3\cdot5^{k+1}}{4}\tag{common denominator}\\[1em]
&= \frac{3\cdot5^{k+1}-3+12\cdot5^{k+1}}{4}\tag{simplify}\\[1em]
&= \frac{15\cdot5^{k+1}-3}{4}\tag{simplify}\\[1em]
&= \color{blue}{\frac{3(5^{k+2}-1)}{4}},\tag{factor / simplify}
\end{align}
we end up at the right-hand side of $S(k+1)$, completing the inductive step. Thus, the statement $S(n)$ is true for all integers $n\geq0$. $\blacksquare$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Algebraic geometry look on space-filling curves Can space-filling curves be somehow described in terms of algebraic geometry? It appears to me that they shouldn't, but I'm not sure. Does anyone know of interesting papers on space-filling curves?
|
In fact there is an interesting discussion about this in Eisenbud's tome on Commutative Algebra (with a view towards AG). In particular, you may want to read his short and soft introduction to dimension theory chapter. He explains how the discovery of space-filling curves helped algebraists understand the need for more precise notions of dimension than simply "number of free parameters."
(But Crostul's comment is correct, as you will see if you study dimension theory in, for example, Eisenbud.)
(Though possibly your question becomes more "interesting" if you work over a finite field, and you want to cover all of the $F_q$ points with a curve of some bounded degree.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the value of $\frac{a^2}{b+c} + \frac{b^2}{a+c} + \frac{c^2}{a+b}$ if $\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} = 1$? If $$\frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} = 1$$ then find the values of $$\frac{a^2}{b+c} + \frac{b^2}{a+c} + \frac{c^2}{a+b}.$$ How can I solve it? Please help me. Thank you in advance.
|
HINT:
$$\dfrac{a^2}{b+c}+a=\dfrac{a(a+b+c)}{b+c}$$
$$\sum_{\text{cyc}}\left(\dfrac{a^2}{b+c}+a\right)=(a+b+c)\sum_{\text{cyc}}\dfrac a{b+c}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Sum of elements of the inverse matrix (without deriving the inverse matrix) using elementary methods.
I have the matrix
$$\begin{pmatrix}
3&2&2&\\
2&3&2\\
2&2&3
\end{pmatrix}.$$ Find the sum of elements of the inverse matrix without computing the inverse.
I have seen this post, but I need a much more elementary method. I have checked that the inverse exists, and $\Delta=7$. The answer is $\frac37$.
What I did: by (not so) simple computation, I worked out that every invertible matrix of the form
$\begin{pmatrix}
a&b&b&\\
b&a&b\\
b&b&a
\end{pmatrix}$ always achieves an inverse of the form
$$\frac{1}{(a-b)(a+2b)}
\begin{pmatrix}
a+b&-b&-b&\\
-b&a+b&-b\\
-b&-b&a+b
\end{pmatrix}.$$
And by this technique, the expected result follows for my case $a=3, b=2$.
But I don't think this is an elegant method, because I need to prove this lemma before claiming the result. Also, the question told me not to compute the inverse, and here I am computing the inverse in a general case.
|
Notice that $u=(1,1,1)^T$ is an eigenvector with eigenvalue $7$.
So we have $Au=7u$ and hence $A^{-1}u=\frac{1}{7}u$.
But summing the entries of $A^{-1}u$ adds up all the elements of $A^{-1}$, and the entries of $\frac{1}{7}u$ sum to $\frac{3}{7}$. So the answer is $\frac{3}{7}$.
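Against the spirit of the exercise, but useful as a check, NumPy confirms both the eigenvector and the answer (a quick sketch):
```python
import numpy as np

A = np.array([[3, 2, 2],
              [2, 3, 2],
              [2, 2, 3]])
u = np.ones(3)

print(A @ u)                    # [7. 7. 7.]: Au = 7u
print(np.linalg.inv(A).sum())   # 0.42857... = 3/7
```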
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Intuitive reason why the Euler characteristic is an alternating sum? The Euler characteristic of a topological space is the alternating sum of the ranks of the space's homology groups. Since homeomorphic spaces have isomorphic homology groups, however, even the non-alternating sum of the ranks of the homology groups is an invariant. So, is there an intuitive reason why the Euler characteristic should be defined as an alternating sum instead of a non-alternating sum -- aside from the fact that other theorems, such as Gauss-Bonnet, would break (or at least need to be re-worked)? Is this related to rank being additive? If so, then a historical question: what would have been the original motivation of Euler to consider an alternating sum?
|
The Euler characteristic can be computed from the number of cells of each dimension. The non-alternating sum (or other functions of the ranks) cannot.
The Euler characteristic has nice formulas when $X=X_1\cup X_2$, or when $X=X_1\times X_2$, or when $X$ is a bundle, etc. The non-alternating sum (or other functions of the ranks) does not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
}
|
Why does $A$ times its inverse equal to the identity matrix? I was trying to come up with a proof of why: $AA^{-1} = I$.
If we know that: $A^{-1}A = I$, then $A(A^{-1}A) = A \implies (AA^{-1})A = A$.
However I don't like setting $AA^{-1} = I$ for fear that it might be something else at this point, even though we know that $IA=A$. For example, could $A$ times its inverse equal something other than the identity, leading back to the original matrix $A$?
Does anyone have a another proof for why $A$ times its inverse would give you the identity or could explain something I'm missing?
|
We say a matrix $B$ is an inverse for $A$ if $AB = BA = I$, and the notation for $B$ is $A^{-1}$.
So it's by definition $AA^{-1}=I$, you cannot really prove it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Lonely theorems What are some instances of theorems which are especially unique in mathematics, i.e. for which there are not many other theorems of a similar character? An example I have in mind is Pick's theorem, since it is the only theorem I have ever seen concerning geometry of polygons with vertices on a lattice.
There are three reasons I am interested in "lonely theorems" like this:
*
*It's hard to find these results. By virtue of their uniqueness, they tend to not fall within the scope of most traditional math classes. The only reason I found out about Pick's theorem was through math competitions.
*Related to the last remark, lonely theorems allow those who know of them to solve problems which other people cannot (which is presumably why they tend to arise on math comps), because there are generally not alternative approaches to fall back on.
*Sometimes, what begins life as a lonely theorem later becomes the centerpiece of an entire new branch of mathematics. An example that comes to mind here is Mobius inversion, which was initially a trick applying to arithmetic functions, but is now of great importance for lattices & incidence algebras.
|
The Bieberbach conjecture is an example of a lonely result in the sense that, while it generated much interest and almost competition, its ultimate solution by de Branges pretty much closed the field. It turned out that the result does not have many applications, and is a kind of a very high-level olympiad problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Limit of the minimum value of an integral
Let $$f(a)=\frac{1}{2}\int_{0}^{1}|ax^n-1|dx+\frac{1}{2}$$ Here $n$ is a natural number. Let $b_n$ be the minimum value of $f(a)$ for $a>1$. Evaluate $$\lim_{m \to \infty}b_mb_{m+1}\ldots b_{2m}$$
Some starters please. Thanks.
|
$$\begin{eqnarray*}f(a) = \frac{a}{2}\int_{0}^{1}\left| x^n-\frac{1}{a}\right|\,dx+\frac{1}{2}&=&\frac{1}{2}+\frac{a}{2}\int_{0}^{1}(x^n-1/a)\,dx+a\int_{0}^{\frac{1}{\sqrt[n]{a}}}\left(\frac{1}{a}-x^n\right)\,dx\\&=&\frac{1}{2}+\frac{a}{2n+2}-\frac{1}{2}+\frac{1}{\sqrt[n]{a}}-\frac{1}{(n+1)\sqrt[n]{a}}\\&=&\frac{a}{2n+2}+\frac{n}{(n+1)\sqrt[n]{a}}\end{eqnarray*}$$
attains its minimum at $a=2^{\frac{n}{n+1}}$:
$$ b_n = 2^{-\frac{1}{n+1}}.$$
Then consider that:
$$ \sum_{k=m}^{2m}\frac{1}{k+1}\stackrel{m\to +\infty}{\longrightarrow}\log 2.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Probability of getting a pair of socks from a drawer if three are drawn I'm really struggling with this concept, hoping you guys could help me out.
Question: You have been provided with 20 pairs of socks within a box consisting of 4 red pairs, 4 yellow pairs, 4 green pairs, 4 blue pairs and 4 purple pairs.
The pairs have been separated out and you must take out a pair of socks.
Consider these problems and provide a calculation for each:
*
*Probability of drawing a matching pair if you randomly draw 2 socks?
*Probability of drawing a matching pair if you randomly draw 3 socks?
*(Repeats up to randomly drawing 5 socks)
For 2 socks I got the following:
40 possible socks * 39 other possible socks = 1560 possible combinations of socks / 2 (to remove duplicate matches) = 780
For each set of socks, there are 8. 8 * 7 (7 other socks to each being matched) = 56 possible combinations in each set of socks / 2 to remove duplicates = 28 possible combinations of socks in each set.
28 / 780 = 0.036 probability of drawing a pair when drawing 2 socks from the drawer.
I'm completely lost when it comes to drawing three socks from the drawer, however -
Cheers guys!
|
HINT: with 3 socks you must find the probability that all socks have different colors, the probability that 2 socks have the same color, and the probability that all socks have the same color. Name these probabilities P1, P2 and P3, with the number referencing the amount of different colors.
For any number of socks the strategy is the same: find the probability for each possible amount of different colors, i.e. for 5 socks you want to know P1, P2, ..., P5.
After you do that, you must multiply each probability by the probability of getting at least one sock of one of the colors for this group. "At least one" is the opposite of "none", but this last probability is easier to calculate; after you take the complement, that is the probability you are searching for.
After you have done that, you sum all these products and voilà, it's done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
How to find number of solutions of an equation? Given $n$, how to count the number of solutions to the equation $$x + 2y + 2z = n$$ where $x, y, z, n$ are non-negative integers?
|
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\color{#f00}{\sum_{x = 0}^{\infty}\sum_{y = 0}^{\infty}\sum_{z = 0}^{\infty}
\delta_{x + 2y + 2z\,,n}} =
\sum_{x = 0}^{\infty}\sum_{y = 0}^{\infty}\sum_{z = 0}^{\infty}
\oint_{\verts{w} = 1^{-}}{1 \over w^{n + 1 - x - 2y - 2z}}
\,{\dd w \over 2\pi\ic}
\\[3mm] = &\
\oint_{\verts{w} = 1^{-}}{1 \over w^{n + 1}}
\sum_{x = 0}^{\infty}w^{x}\sum_{y = 0}^{\infty}\pars{w^{2}}^{y}
\sum_{z = 0}^{\infty}\pars{w^{2}}^{z}\,{\dd w \over 2\pi\ic}
\\[3mm] = &\
\oint_{\verts{w} = 1^{-}}{1 \over w^{n + 1}}\,{1 \over 1 - w}
\,{1 \over 1 - w^{2}}\,{1 \over 1 - w^{2}}\,{\dd w \over 2\pi\ic} =
\oint_{\verts{w} = 1^{-}}{1 \over w^{n + 1}\pars{1 - w}^{3}\pars{1 + w}^{2}}\,{\dd w \over 2\pi\ic}
\\[3mm] \stackrel{z\ \to\ 1/z}{=}\ &\
\oint_{\verts{w} = 1^{\color{#f00}{+}}}
{w^{n + 4} \over \pars{w - 1}^{3}\pars{w + 1}^{2}}\,{\dd w \over 2\pi\ic} =
\underbrace{{1 \over 2!}\,\lim_{w \to 1}\partiald[2]{}{w}{w^{n + 4} \over \pars{w + 1}^{2}}}
_{\ds{2n^{2} + 10n + 11 \over 16}}\ +\ \underbrace{%
{1 \over 1!}\,\lim_{w \to -1}\partiald{}{w}{w^{n + 4} \over \pars{w - 1}^{3}}}
_{\ds{\pars{-1}^{n}\,{2n + 5 \over 16}}}
\\[5mm] = &\
\color{#f00}{{2n^{2} + 2\bracks{5 + \pars{-1}^{n}}n +
\bracks{11 + 5\pars{-1}^{n}} \over 16}}
\end{align}
$$
\mbox{A few values:}\quad
\left\lbrace\begin{array}{rclcr}
\ds{n} & \ds{=} & \ds{0,1} & \ds{\imp} & \ds{1}
\\
\ds{n} & \ds{=} & \ds{2,3} & \ds{\imp} & \ds{3}
\\
\ds{n} & \ds{=} & \ds{4,5} & \ds{\imp} & \ds{6}
\\
\ds{n} & \ds{=} & \ds{6,7} & \ds{\imp} & \ds{10}
\\
\ds{n} & \ds{=} & \ds{8,9} & \ds{\imp} & \ds{15}
\\
\ds{n} & \ds{=} & \ds{10} & \ds{\imp} & \ds{21}
\end{array}\right.
$$
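As a sanity check, a brute-force count agrees with the closed form (a Python sketch; the helper names are mine):
```python
def brute(n):
    # Count (y, z) with 2y + 2z <= n; then x = n - 2y - 2z >= 0 is forced.
    return sum(1 for y in range(n // 2 + 1)
                 for z in range(n // 2 + 1) if 2*y + 2*z <= n)

def closed(n):
    s = (-1)**n
    return (2*n**2 + 2*(5 + s)*n + 11 + 5*s) // 16

print(all(brute(n) == closed(n) for n in range(100)))  # True
```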
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Permutations of {1 .. n} where {1 .. k} are not adjacent The Problem:
So I was thinking up some simple combinatorics problems, and this one stumped me.
Let N be the set of numbers $\{1 .. n\}$, or any set of cardinality $n$
Let K be the set of numbers $\{1 .. k\}$ where $k < n$, or any subset of N of cardinality $k$
How many permutations of N exist such that no two members of set k are adjacent?
Here was my basic approach:
Solutions = Permutations of N - Permutations of N that contain a pair in K
Permutations of N = nPn = n!
Every item in K will occur in each permutation of N so we loop through:
You place down one item of K
Possibilities where the next is in K = K - 1
You place down an item of K
possibilities where next is in k = k-2
...
You place the last item in K
The total permutations with a pair would therefore be:
$ (k-1) + (k-2) + ... + 0 $
However, many of those entries are duplicates, so we have to rule out instances where two of those pairs occurred in one set, and then again rule out instances that occurred three times, and so on, up to pairs occurring $k$ times.
I think the amount with two pairs would best be found by getting the amount with one pair minus the amount with no more.
To do this I would loop as:
You place two items in K
There are k-2 chances the next item is in k
You place an item in K
There are k-3 chances the next item is in k
...
You place your last item in K
Total permutations with two pairs $= (k - 2) + (k - 3) + ... + 0$
Total permutations with three pairs $= (k - 3) + (k - 4) + ... + 0$
And so on..
And at this point I know I am incorrect, because I would get a total of:
$( (k-1) + (k-2) + ... + 0 ) - ( (k-2) + (k-3) + ... + 0 ) - ( (k-3) + (k-4) + ... + 0 ) + ... + 0$ permutations that contain pairs.
This number is waaay negative...
As I pointed out at the end of that, my solution finds a negative amount of permutations that have pairs in them, and so it is clearly wrong. If someone could explain my error, or else show a better way to approach the problem, I would greatly appreciate it.
Things that I think could be causing my answer to be wrong:
*
*I'm not sure if my generalization for removing the permutations with multiple pairs from the total amount of possible pairs works correctly if the pair does not occur as the first appearance of a member of set K
|
Let's call the elements up to $k$ white and the others black, and consider elements of the same colour to be indistinguishable for now. For arrangements beginning with a white element, glue a black element to the left of all other white elements and choose $k-1$ slots among the resulting $n-k$ objects (excluding the initial white element) for the glued pairs. For arrangements beginning with a black element, glue a black element to the left of all white elements and choose $k$ slots among the resulting $n-k$ objects for the glued pairs. In total, that makes $\binom{n-k}{k-1}+\binom{n-k}k=\binom{n-k+1}k$ arrangements. Each corresponds to $k!$ permutations of the white elements and $(n-k)!$ permutations of the black elements, so the total number of permutations is
$$
\binom{n-k+1}kk!(n-k)!=\frac{(n-k+1)!k!(n-k)!}{(n-2k+1)!k!}=\frac{(n-k+1)!(n-k)!}{(n-2k+1)!}\;.
$$
If $n-2k+1\lt0$, i.e. $k\gt\frac{n+1}2$, the binomial coefficient is zero and there are no such permutations.
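A brute-force comparison for small $n$ supports the formula (an itertools sketch; `special` marks the elements of $K$):
```python
from itertools import permutations
from math import comb, factorial

def brute(n, k):
    special = set(range(k))  # the elements of K
    return sum(1 for p in permutations(range(n))
               if not any(p[i] in special and p[i+1] in special
                          for i in range(n - 1)))

def closed(n, k):
    return comb(n - k + 1, k) * factorial(k) * factorial(n - k)

print(all(brute(n, k) == closed(n, k)
          for n in range(2, 8) for k in range(1, n)))  # True
```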
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1828983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Rings of Krull dimension one I have to write a monograph about commutative rings with Krull dimension $1$, but I can't find results, so I am looking for references and results to pursue. Also, I would greatly appreciate knowing if there is some result of the kind:
$$ \dim(A)=1 \iff ~?$$
Thanks in advance.
|
An integral extension $S$ of a ring $R$ of Krull dimension $1$ has Krull dimension $1$. This is because any 3-chain of prime ideals $P_1 \subsetneq P_2 \subsetneq P_3$ induces an inclusion $p_1 \subset p_2 \subset p_3$ (where we define $p_j := R \cap P_j$ for $j = 1, 2, 3$), of which not all inclusions can be proper because of what we assumed about $R$, and then we only have to use that between two prime ideals lying over the same prime ideal there are no proper inclusions.
This, of course, generalises to show that the Krull dimension is constant when passing to an integral extension.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Assume $r,s \in\mathbb{Q}$. Prove $\frac{r}{s},r-s \in\mathbb{Q}$ I have attempted this proof by contradiction, beginning by assuming to the contrary that each of $a$ and $b$ is irrational, but I was not sure if I did it correctly. Any help would be greatly appreciated.
Assume $r,s \in\mathbb{Q}$.
a) Prove $\frac{r}{s}\in\mathbb{Q}$
b) Prove $r-s \in\mathbb{Q}$
|
$x\in\mathbb Q\iff \exists a,b\in\mathbb Z: b\ne0,\ x=\frac ab$
Let $r=\frac pq$, $s=\frac tu$, where $p,q,t,u\in\mathbb Z$ and $u,q\ne0$. Then:
If $s=0$, $\frac rs$ is not defined. Assuming $s\ne0$, we have $t\ne0$. Now $\frac rs=\frac {pu}{qt}$, where $pu,qt\in\mathbb Z$; as $t,q\ne0$, $qt\ne0$. Thus, $\frac rs\in\mathbb Q$.
$r-s=\frac {pu-tq}{qu}$, where $pu-tq,qu\in\mathbb Z$, and $qu\ne0$ as $q,u\ne0$. Thus, $r-s\in\mathbb Q$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\int_{0}^{\infty}{1\over x^4+x^2+1}dx=\int_{0}^{\infty}{1\over x^8+x^4+1}dx$ Let
$$I=\int_{0}^{\infty}{1\over x^4+x^2+1}dx\tag1$$
$$J=\int_{0}^{\infty}{1\over x^8+x^4+1}dx\tag2$$
Prove that $I=J={\pi \over 2\sqrt3}$
Sub: $x=\tan{u}\rightarrow dx=\sec^2{u}du$
$x=\infty \rightarrow u={\pi\over 2}$, $x=0\rightarrow u=0$
Rewrite $(1)$ as
$$I=\int_{0}^{\infty}{1\over (1+x^2)^2-x^2}dx$$
then
$$\int_{0}^{\pi/2}{\sec^2{u}\over \sec^4{u}-\tan^2{u}}du\tag3$$
Simplified to
$$I=\int_{0}^{\pi/2}{1\over \sec^2{u}-\sin^2{u}}du\tag4$$
Then to
$$I=2\int_{0}^{\pi/2}{1+\cos{2u}\over (2+\sin{2u})(2-\sin{2u})}du\tag5$$
Any hints on what to do next?
Re-edit (Hint from Marco)
$${1\over x^8+x^4+1}={1\over 2}\left({x^2+1\over x^4+x^2+1}-{x^2-1\over x^4-x^2+1}\right)$$
$$M=\int_{0}^{\infty}{x^2+1\over x^4+x^2+1}dx=\int_{0}^{\infty}{x^2\over x^4+x^2+1}dx+\int_{0}^{\infty}{1\over x^4+x^2+1}dx={\pi\over \sqrt3}$$
$$N=\int_{0}^{\infty}{x^2-1\over x^4-x^2+1}dx=0$$
$$J=\int_{0}^{\infty}{1\over x^8+x^4+1}dx={1\over 2}\left({\pi\over \sqrt3}-0\right)={\pi\over 2\sqrt3}.$$
|
$$
\begin{aligned}
I & =\int_0^{\infty} \frac{\frac{1}{x^2}}{x^2+\frac{1}{x^2}+1} d x \\
& =\frac{1}{2} \int_0^{\infty} \frac{\left(1+\frac{1}{x^2}\right)-\left(1-\frac{1}{x^2}\right)}{x^2+\frac{1}{x^2}+1} d x \\
& =\frac{1}{2}\left[\int_0^{\infty} \frac{d\left(x-\frac{1}{x}\right)}{\left(x-\frac{1}{x}\right)^2+3}-\int_0^{\infty} \frac{d\left(x+\frac{1}{x}\right)}{\left(x+\frac{1}{x}\right)^2-1}\right] \\
& =\frac{1}{2}\left[\frac{1}{\sqrt{3}}\left[\tan ^{-1}\left(\frac{x-\frac{1}{x}}{\sqrt{3}}\right)\right]_0^{\infty}-\frac{1}{2}\ln \left|\frac{x+\frac{1}{x}-1}{x+\frac{1}{x}+1}\right|_0^{\infty}\right]\\
& =\frac{\pi}{2 \sqrt{3}}
\end{aligned}
$$
$$
\begin{aligned}
& J=\int_0^{\infty} \frac{1}{x^8+x^4+1} d x=\frac{1}{2}\left(\underbrace{\int_0^{\infty} \frac{x^2+1}{x^4+x^2+1}}_K d x-\underbrace{\int_0^{\infty} \frac{x^2-1}{x^4-x^2+1} d x}_L\right) \\
& K=\int_0^{\infty} \frac{1+\frac{1}{x^2}}{\left(x-\frac{1}{x}\right)^2+3} d x=\int_0^{\infty} \frac{d\left(x-\frac{1}{x}\right)}{\left(x-\frac{1}{x}\right)^2+3} =\frac{1}{\sqrt{3}}\left[\tan ^{-1}\left(\frac{x-\frac{1}{x}}{\sqrt{3}}\right)\right]_0^{\infty}=\frac{\pi}{\sqrt{3}}
\end{aligned}
$$
$$
\begin{aligned}
L & =\int_0^{\infty} \frac{1-\frac{1}{x^2}}{x^2+\frac{1}{x^2}-1} d x =\int_0^{\infty} \frac{d\left(x+\frac{1}{x}\right)}{\left(x+\frac{1}{x}\right)^2-3} =\frac{1}{2 \sqrt{3}}\left[\ln \left(\frac{x+\frac{1}{x}-\sqrt{3}}{x+\frac{1}{x}+\sqrt{3}}\right)\right]_0^{\infty} =0
\end{aligned}
$$
Hence we can conclude that$$\boxed{J=I =\frac{\pi}{2 \sqrt{3}}} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 9,
"answer_id": 7
}
|
A fair coin is flipped 3 times. Probability of all $3$ heads given at least $2$ were heads Here's a problem from Prof. Blitzstein's Stat 110 textbook. Please see my solution below.
A fair coin is flipped 3 times. The toss results are recorded on separate slips of paper (writing “H” if Heads and “T” if Tails), and the 3 slips of paper are thrown into a hat.
(a) Find the probability that all 3 tosses landed Heads, given that at least 2 were Heads.
(b) Two of the slips of paper are randomly drawn from the hat, and both show the letter H. Given this information, what is the probability that all 3 tosses landed Heads?
The binomial distribution of the number of heads (#H) is as follows:
$\mathbb{P}(\#H=0) = 1/8 $
$\mathbb{P}(\#H=1) = 3/8 $
$\mathbb{P}(\#H=2) = 3/8 $
$\mathbb{P}(\#H=3) = 1/8 $
a) Probability of 3 heads given at least 2 heads = $\mathbb{P}(\#H=3)/\mathbb{P}(\#H \ge 2 ) = (1/8)/(1/2)=1/4$
b) But I'm not able to understand how (b) is different from (a). To me it's the same problem and has the same answer.
Can anyone help please?
|
(b) involves two processes: first flip the coins, then read two of the results.
So, for instance, if you flip exactly two heads, what is the (conditional) probability that the two papers read are those of the two heads?
Let $H_f$ be the count of heads flipped and $H_r$ the count of heads read. Use Bayes' Rule and Law of Total Probability.
$$\begin{align}\mathsf P(H_f = 3\mid H_r=2) =&~ \dfrac{\mathsf P(H_r=2\mid H_f=3)~\mathsf P(H_f=3)}{\sum_{x=0}^3\mathsf P(H_r=2\mid H_f=x)~\mathsf P(H_f=x)} \end{align}$$
(Note: several of the terms of the sum will be zero, because it is impossible to read 2 heads if you have thrown fewer than that.)
PS: As Andre mentions in his answer, because each paper belongs to a specific coin, this is practically the same as flipping three coins, noting that the first two looked at are heads, and asking for the probability that the third is also heads.
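A quick simulation illustrates the difference between (a) and (b) (a sketch; the variable names are mine):
```python
import random

trials, both_h, all_h = 10**6, 0, 0
for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(3)]
    drawn = random.sample(flips, 2)   # read two random slips
    if all(drawn):                    # both slips show H
        both_h += 1
        all_h += all(flips)
print(all_h / both_h)  # ~0.5, not the 1/4 from part (a)
```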
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Simplifying this series of Laguerre polynomials I trying to figure out whether a simpler form of this series exists.
$$\sum_{i=0}^{n-2}\frac{L_{i+1}(-x)-L_{i}(-x)}{i+2}\left(\sum_{k=0}^{n-2-i} L_k(x)\right)$$
$L_n(x)$ is the $n$th Laguerre polynomial.
Wikipedia offers several useful recurrence relations in terms of the associated Laguerre polynomials. Two of which show that
$$L_n^{(\alpha+1)}(x)=\sum_{i=0}^n{L_i^{(\alpha)}(x)}$$
and
$$L_n^{(\alpha)}(x)=L_n^{(\alpha+1)}(x)-L_{n-1}^{(\alpha+1)}(x)$$
These allow us to simplify the expression a bit to
$$\sum_{i=0}^{n-2}\frac{L_{i+1}^{(-1)}(-x)L_{n-2-i}^{(1)}(x)}{i+2}$$
Wikipedia also gives us the relation
$$L_n^{(\alpha+\beta+1)}(x+y)=\sum_{i=0}^nL_{i}^{(\alpha)}(x)L_{n-i}^{(\beta)}(y)$$
Removing the denominator, our expression would completely reduce to
$$\sum_{i=0}^{n-2}L_{i+1}^{(-1)}(-x)L_{n-2-i}^{(1)}(x)=L_{n-2}^{(1)}(0)=n-1$$
This is where I'm stuck. How might we evaluate it considering the denominator in the sum? Or is there some other way we can simplify the expression?
|
Let us set $N=n-2$. The generating function of Laguerre's polynomials is:
$$ \sum_{n\geq 0}L_n(x)\,t^n = \frac{1}{1-t}\,\exp\left(-\frac{tx}{1-t}\right)\tag{1} $$
hence:
$$ \sum_{n\geq 0}\left(\sum_{k=0}^n L_k(x)\right)\,t^n = \frac{1}{(1-t)^2}\,\exp\left(-\frac{tx}{1-t}\right)\tag{2} $$
$$ \sum_{n\geq 0}L_n(-x)\,t^n = \frac{1}{1-t}\,\exp\left(\frac{tx}{1-t}\right)\tag{3} $$
$$ \sum_{n\geq 0}\left(L_{n+1}(-x)-L_{n}(-x)\right)t^{n+1} = t\exp\left(\frac{tx}{1-t}\right)-t(1+x)\tag{4} $$
Now we may replace $t$ with $tu$ in $(4)$ and integrate both sides over $(0,1)$ with respect to $u$ to get:
$$ \sum_{n\geq 0}\frac{L_{n+1}(-x)-L_n(-x)}{n+2}\,t^{n} \\= \frac{2-4t+2t^2-t^2x^2-t^2 x^3}{2t^2 x^2}+\frac{(1-t)(tx+t-1)}{t^2 x^2}\,\exp\left(\frac{tx}{1-t}\right)\tag{5}$$
and now:
$$ \sum_{i=0}^{N}\frac{L_{i+1}(-x)-L_i(-x)}{i+2}\left(\sum_{k=0}^{N-i}L_k(x)\right)\tag{6a}$$
is the coefficient of $x^N$ in the product between the RHS of $(2)$ and the RHS of $(5)$.
It can be computed through $(1)$ with a bit of patience.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Closed form of infinite product $\prod\limits_{k=0}^\infty 2 \left(1-\frac{x^{1/2^{k+1}}}{1+x^{1/2^{k}}} \right)$ I encountered this infinite product while solving another problem:
$$P(x)=\prod_{k=0}^\infty 2 \left(1-\frac{x^{1/2^{k+1}}}{1+x^{1/2^{k}}} \right)$$
$$P(x)=P \left( \frac{1}{x} \right)$$
I strongly believe it has a closed form in general, because it has a closed form for all the values I tried (checked numerically by Wolfram Alpha with high precision):
$$P(2)=\frac{14}{9} \ln 2$$
$$P(3)=\frac{13}{12} \ln 3$$
$$P(4)=\frac{14}{15} \ln 4$$
$$P(5)=\frac{31}{36} \ln 5$$
So the general closed form should be:
$$P(x)=R(x) \ln x$$
What is $R(x)$? And how to prove the closed form?
The product looks like it telescopes, but I couldn't find an appropriate form.
Another thought was to make a substitution:
$$x=e^t$$
$$P(t)=\prod_{k=0}^\infty 2 \left(1-\frac{\exp(t/2^{k+1})}{1+\exp(t/2^{k})} \right)$$
I tried series for the exponent, but didn't get telescoping either.
Edit
Turns out, there is a related product which has more simple form (I derived it numerically, I don't know how to prove it either, except in the way @Did did).
$$\prod_{k=0}^\infty \frac{2}{1+x^{1/2^k}}=\frac{2}{x^2-1} \ln x$$
So far @Did's proof looks like magic. Is there any way to derive this product by using the definition and properties of the natural logarithm?
|
You could more or less infer the results from the infinite product you have posted. Denote
$$f_k(t)=\frac{2}{1+t^{1/2^k}},$$
we have
$$\frac{f_k(x)~f_k(x^{1/2})}{f_k(x^{3/2})}=\frac{2}{1+x^{1/2^k}}\frac{1+\left(x^{1/2^{k+1}}\right)^3}{1+x^{1/2^{k+1}}}\\
=\frac{2}{1+x^{1/2^k}}\left(1-x^{1/2^{k+1}}+x^{1/2^k}\right)=2\left(1-\frac{x^{1/2^{k+1}}}{1+x^{1/2^k}}\right),$$
which is the term in your product. Therefore
$$P(x)=\frac{\prod_{k=0}^\infty f_k(x)~\prod_{k=0}^\infty f_k(x^{1/2})}{\prod_{k=0}^\infty f_k(x^{3/2})}=\frac{\frac{2\ln x}{x^2-1}\frac{\ln x}{x-1}}{\frac{3\ln x}{x^3-1}}=\frac{2(x^2+x+1)\ln x}{3(x^2-1)}.$$
To make it a bit more rigorous, consider the following equality
$$x^2-1=2^{n+1}(x^{1/2^n}-1)\prod_{k=0}^{n}\frac{x^{1/2^k}+1}{2}$$
by applying $x^2-y^2=(x-y)(x+y)$ repeatedly, so the equality you posted is equivalent to
$$\lim_{n\to\infty}2^n\left(x^{1/2^n}-1\right)=\lim_{y\to0}\frac{x^y-1}{y}=\lim_{y\to0}~x^y\ln x=\ln x,$$
where $y=1/2^n$ and we have used L'Hôpital's rule. You can get similar expressions for $x^{1/2}$ and $x^{3/2}$. Multiply them together as what has been done above, then take the limit, you would get the same answer.
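One can also check the resulting closed form numerically (a small sketch; 60 factors are plenty, since the factors tend to $1$ rapidly):
```python
import math

def P(x, terms=60):
    p = 1.0
    for k in range(terms):
        t = x ** (1 / 2 ** (k + 1))    # t^2 = x^(1/2^k)
        p *= 2 * (1 - t / (1 + t * t))
    return p

closed = lambda x: 2 * (x*x + x + 1) * math.log(x) / (3 * (x*x - 1))
for x in (2, 3, 4, 5):
    print(P(x), closed(x))  # agree; e.g. P(2) = (14/9) ln 2
```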
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Why is the determinant defined in terms of permutations? Where does the definition of the determinant come from, and is the definition in terms of permutations the first and basic one? What is the deep reason for giving such a definition in terms of permutations?
$$
\text{det}(A)=\sum_{p}\sigma(p)a_{1p_1}a_{2p_2}...a_{np_n}.
$$
I have found this one useful:
Thomas Muir, Contributions to the History of Determinants 1900-1920.
|
I think Paul's answer gets the algebraic nub of the issue. There is a geometric side, which gives some motivation for his answer, because it isn't clear offhand why multilinear alternating functions should be important.
On the real line, the function of two variables $(x,y)$ given by $x-y$ gives you a notion of length. It really gives you a bit more than length, because it is a signed notion of length. It cares about the direction of the line from $x$ to $y$ and gives you a positive or negative value based on that direction. If you swap $x$ and $y$ you get the negative of your previous answer.
In $\mathbb R^n$ it is useful to have a similar function that is the signed volume of the parallelepiped spanned by $n$ vectors. If you swap two vectors, that reverses the orientation of the parallelepiped, so you should get the negative of the previous answer. From a geometric perspective, that is how alternating functions come into play.
The determinant of a matrix with columns $v_1,\dots,v_n$ calculates the signed volume of the parallelepiped given by the vectors $v_1,\dots,v_n$. Such a function is necessarily alternating. It is also necessarily linear in each variable separately, which can also be seen geometrically.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47",
"answer_count": 5,
"answer_id": 4
}
|
Can I always say that any "custom" operation in $Z_n$ is commutative and associative? I have a lot of exercises that say something similiar:
Given, in the set $Z_{15}$ the following binary operation $*$ $$\forall a,b \in Z_{15}, a*b = \overline6(a+b)\ -\ \overline5ab$$
This is just an example; the operation could be any other operation built from additions and multiplications (without involving inverses). Can I always say (write down in the exercise), without manually proving it, that the operation is associative and commutative because $Z_{15}$ is a commutative unitary ring?
|
Something you can generally do, is looking for a bijection with a set having a binary operation $\oplus$ of which you know that it has the desired properties. Here it is rather evident: Let
$$
f\colon \mathbf Z/15\rightarrow \mathbf Z/3\times\mathbf Z/5
$$
be the map that sends $\overline a$ to the pair $(a\bmod 3,a\bmod 5)$. This is well-defined as $3$ and $5$ divide $15$. The Chinese remainder theorem states that $f$ is a bijection, but it can be easily seen: if $a\bmod3=0$ and $a\bmod5=0$ then $a$ is divisible by $3$ and by $5$. Since $3$ and $5$ are relatively prime, $a$ is also divisible by $15$, i.e., $\overline a=0$. This proves that the kernel of the group morphism $f$ is zero. Hence $f$ is injective. Since the domain and codomain of $f$ are finite and of same cardinality, $f$ is a bijection.
Now, since $6\equiv 0\bmod3$, $-5\equiv1\bmod3$, $6\equiv1\bmod5$ and $-5\equiv 0\bmod5$, one has
$$
f(\overline a*\overline b)=f(\overline 6(\overline a+\overline b)-\overline 5\overline a\cdot \overline b)=
(0(\overline a+\overline b)+\overline a\cdot\overline b,\overline a+\overline b-0\overline a\cdot\overline b)=
(\overline a\cdot\overline b,\overline a+\overline b),
$$
writing $\overline{\cdot}$ for reductions modulo $15$, $3$, and $5$ in order to simplify notation.
One notices that
$$
f(x*y)=f(x)\oplus f(y)
$$
where $\oplus$ is the binary operation on $\mathbf Z/3\times\mathbf Z/5$ defined by
$$
(x_1,x_2)\oplus(y_1,y_2)=(x_1y_1,x_2+y_2).
$$
Since $f$ is a bijection, all properties of $\oplus$ are also satisfied by $*$! For example, since one knows that $\oplus$ is associative, $*$ is associative, and so on.
In fact, this is how teachers construct unusual binary operations for you to check commutativity, associativity or other properties!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Are there more quadratics with real roots or more with complex roots? Or the same? Consider all quadratic equations with real coefficients:
$$y=ax^2+bx+c \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,, a,b,c \in \mathbb{R}, \, a≠0 $$
I was wondering whether if more of them have real roots, more have complex roots, or a same number of each?
I thought about this graphically, and wlog considered only the ones with positive $a$. By categorising the quadratics by their minimum point, there are an equal number of possible minima below the $x$-axis as above, so it seems that there are an equal number of quadratics with real roots as complex ones.
However, I then considered this problem algebraically by using the discriminant:
$$b^2-4ac$$
If $a$ and $c$ have opposite signs, then the discriminant will be positive, and so the quadratic will have real roots. This represents $50\%$ of the quadratic equations possible.
However, if $a$ and $c$ have the same sign, then some of these quadratics have real roots, whilst others have complex roots depending on whether $4ac$ is bigger than $b^2$ or not.
This suggests that there are more quadratics with real roots which contradicts the above answer.
Is the reason I've reached contradicting answers something to do with infinites, and that I can't really compare them in the way I've done above?
|
There are two ways to write a quadratic equation.
1) $y = ax^2 + bx + c$ and 2) $y = a(x-h)^2 + k$
Both models have equal numbers of quadratic formulas; i.e. the cardinality of $\mathbb R^3$.
However "random" distribution results in different concentrations when represented as corresponding elements of $\mathbb R^3$ and "randomness" is not at all unambiguously defined.
Model 2 has real roots if and only if $k$ is non-positive (taking $a>0$, as you did). So that's a probability of 1/2.
Model 1 has roots if $a$ and $c$ have opposite signs OR if $b^2 - 4ac \ge 0$ with $a$ and $c$ of the same sign, which is ... I dunno... It's too early and my motivation isn't strong enough to figure it out, although I know it can be done... but it is significantly less than 1/2.
====
Here's a way of thinking about it.
Consider $X = \mathbb R^3$ be euclidean three space and you can pick a point out of it at random.
Now let $f(a,b,c) = (a, -b/2, c-b^2/4)$. This is a bijection.
Let $Y = f(X) = \mathbb R^3$.
Both $X$ and $Y$ are the exact same shape with the exact same number of points. But if you pick a point from one and look at a corresponding point in the other it will seem that the space and concentration has been "warped" and "randomness" in one is not the same as randomness in the other.
The resolution seems to me that "random" and "probability" is not as simple or intuitive as we thought and, as stated, the "probability" of a quadratic having real roots is not actually a well-defined question. ... as yet.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What are linearly dependent vectors like? How are they different from linearly independent vectors?
|
While the above answer is correct, I think some intuitive understanding might help you, too.
Consider $\mathbb{R}^2$. Take the vectors $u= (1,1)$ and $v=(-2,-2)$. These vectors are called linearly dependent because one is a scalar multiple of the other, i.e. $v = (-2) \cdot u$.
This generalizes to $\mathbb{R}^n$ and more generally any kind of vector space using the definition given by Amstell above.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find three different systems of linear equation whose solutions are.. Find three different systems of linear equation whose solutions are $x_1 = 3, x_2 = 0, x_3 = -1$
I'm confused, how exactly can I do this?
|
Note that your system is described by the matrix
$$
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 3 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & -1
\end{array}\right]
$$
Performing any row operation on this matrix yields a system with the same solutions. For example, you could add $\DeclareMathOperator{Row}{Row}\Row_1$ to $\Row_2$ to get the system
$$
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 3 \\
1 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right]
$$
You could then add $\Row_2$ to $\Row_3$ to get
$$
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 3 \\
1 & 1 & 0 & 3 \\
1 & 1 & 1 & 2
\end{array}\right]
$$
To get a third system, you could then add $2\cdot \Row_3$ to $\Row_1$
$$
\left[\begin{array}{rrr|r}
3 & 2 & 2 & 7 \\
1 & 1 & 0 & 3 \\
1 & 1 & 1 & 2
\end{array}\right]
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1829975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Determine the closed form for $\int_{0}^{\infty}\sinh(xe^{-x})dx$ Find the closed form for
$$I=\int_{0}^{\infty}\sinh(xe^{-x})dx\tag1$$
Change to
$$I={1\over 2}\int_{0}^{\infty}\left(e^{xe^{-x}}-e^{-xe^{-x}}\right)dx\tag2$$
Any hints? I can't go further.
|
By expanding the hyperbolic sine as a Taylor series we have:
$$ I = \sum_{n\geq 0}\frac{1}{(2n+1)!}\int_{0}^{+\infty}x^{2n+1} e^{-(2n+1)x}\,dx = \color{red}{\sum_{n\geq 0}\frac{1}{(2n+1)^{2n+2}}} \tag{1}$$
with a series recalling sophomore's dream. I won't bet on a simple closed form, except perhaps in terms of integrals involving the Lambert $W$ function.
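A quick numerical cross-check of $(1)$ (a scipy sketch, not part of the original answer; the series terms decay so fast that a handful suffice):
import math
from scipy.integrate import quad

integral, _ = quad(lambda x: math.sinh(x * math.exp(-x)), 0, math.inf)
series = sum(1 / (2*n + 1)**(2*n + 2) for n in range(10))
print(integral, series)  # both are approximately 1.012410...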
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How can I find the sup, inf, min, and max of $\bigcup_{n\ge 1}\left[\frac{1}{n}, 2-\frac{1}{n}\right]$ $$\bigcup_{n\ge 1}\left[\frac{1}{n}, 2-\frac{1}{n}\right]$$
I'm not sure how to get started with this one. When I graph the two functions I see they intersect at the point $(1,1)$, which I take to be the union of the set.
But how do I use this information to obtain the inf, min, sup, and max, particularly if there is only one element in the set? Is $1$ the inf, min, sup and max?
|
The union consists of all real numbers $x$ such that $\frac{1}{n}\leq x\leq 2-\frac{1}{n}$ for some natural number $n$. This is a much larger set than just the single point $\{1\}$. For instance, taking $n=2$ shows that all real numbers $x$ with $\frac{1}{2}\leq x\leq \frac{3}{2}$ are included in the union.
Some leading questions to get you started: what happens to $\frac{1}{n}$ as $n\to\infty$, and what does this tell you about the union?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find radius of circle (or sphere) given segment area (or cap volume) and chord length The goal is to design a container (partial sphere) of given volume which attaches to a plane via a port of a given radius. I believe this can be done as follows, but the calculation is causing me problems:
A circle of unknown radius is split by a chord of known length L. The larger circular segment has known area A. Given this information, is it possible to calculate the radius of the circle? To clarify, it is also given that L is not a diameter of the circle.
I believe whatever approach is used for the above would also be extensible to a spherical cap.
|
With the notations in this figure (borrowed from https://en.wikipedia.org/wiki/Circular_segment), $c$ is your chord length $L$, $$\theta=2\arcsin(\frac{c}{2R}),$$ the green area is $\frac{R^2}{2}(\theta-\sin(\theta))$. The larger circular segment area (whole area of the disk minus the green area) is given by $$A=R^2(\pi-\theta/2+\sin(\theta)/2).$$ If you replace $\theta$ from the first equation, you obtain a transcendental equation in $R$. It is possible to solve this numerically.
Similar equations for a spherical cap can be found at https://en.wikipedia.org/wiki/Spherical_cap
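A minimal numerical sketch of that last step, with hypothetical values $L=2$ and $A=10$ (these inputs are illustrative only; scipy's brentq solves the transcendental equation):
import math
from scipy.optimize import brentq

L, A = 2.0, 10.0  # hypothetical chord length and larger-segment area

def residual(R):
    theta = 2 * math.asin(L / (2 * R))
    return R**2 * (math.pi - theta/2 + math.sin(theta)/2) - A

R = brentq(residual, L/2 + 1e-9, 1e3)  # R must exceed L/2 for asin to be defined
print(R)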
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help with proving if $s_n$ converges to $0$ and $x_n$ is a bounded sequence, then $\lim(x_ns_n)=0$
Prove: If $s_n$ converges to $0$ and $x_n$ is a bounded sequence, then $\underset{n \to \infty}{\lim}(x_ns_n)=0$
I'm having trouble getting started on this proof.
Since I know $s_n$ converges to $0$ I feel like I should start using the epsilon definition with $x_n$
Since $\underset{n \to \infty}{\lim} s_n=0$
So $\left|x_n-s_n\right| \lt \epsilon \implies (=) \quad \left|x_n\right| \lt \epsilon$
But I'm not sure if the aforementioned steps are correct and where to go from here.
|
HINT: If $x_n$ is bounded, then $\exists M>0: |x_n|<M$ for all $n$. Hence $|x_ns_n|<M\varepsilon$ for large $n$. It should be clear that $x_ns_n$ also tends to 0.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Distribution of ages of 3 children in a family Please consider Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 30, p 88.
A family has 3 children, creatively named A, B, and C.
(a) Discuss intuitively (but clearly) whether the event “A is older than B” is independent
of the event “A is older than C”.
(b) Find the probability that A is older than B, given that A is older than C.
Let X, Y, Z be the ages of A, B, C respectively.
I assumed X, Y, Z are i.i.d. random variables with each following uniform distribution U(0,18). (18 can be changed to any constant, obviously it doesn't affect the probabilities in question here.)
By this logic the events $(X>Y)$ and $(X>Z)$ are independent, each having probability 0.5. The probability that $X$ exceeds both $Y$ and $Z$ is just $P((X>Y)\cap(X>Z))=P(X>Y)\times P(X>Z)=1/4$.
However the author's solution takes a discrete approach, assuming the 6 possible birth orders to be equally likely events with probability 1/6 each. If we make this assumption we can see that A is older than B means A is not the youngest, which takes out 2 of these 6 possibilities. Making the event "A is the oldest" 50% likely instead of 1/3 likely originally. Hence they're not independent. Given (A>B) increases the probability of (A>C).
This is completely logical and understandable, but my question is why are the two approaches not producing the same conclusions? What am I missing? I can spot one obvious difference that the birth order approach ignores the possibilities X=Y, Y=Z etc., but even under the uniform distribution assumption these events have zero probability (by standard continuous distribution properties).
|
The ages of $A,B,$ and $C$ may be independent†, but the events of pairwise orders are not. If you are told that $A$ is one of the two oldest children (because $A$ is older than $C$) it should raise your anticipation that $A$ is also older than $B$.
(† though, actually, they are not independent if they have the same mother; but that is beside the point.)
Assuming that even in the case of twins and triplets the children will still be described as older and younger, there are $6$ equi-plausible arrangements of age. $$\begin{array}{c|c|c|c} & C<A & B<A & \text{Both} \\ \hline A<B<C & & & \\ A<C<B & & & \\ B<A<C & & \checkmark & \\ B<C<A & \checkmark & \checkmark & \checkmark \\ C<A<B & \checkmark & & \\ C<B<A & \checkmark & \checkmark & \checkmark \end{array}$$
The events $C<A$ and $B<A$ are not independent because they are codependent on the age of $A$.
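A quick Monte Carlo sketch (not part of the original answer) supports this: conditioning on $A$ being older than $C$ raises the chance that $A$ is older than $B$ from $1/2$ to roughly $2/3$.
import random

trials = 10**6
a_older_c = a_older_both = 0
for _ in range(trials):
    x, y, z = random.random(), random.random(), random.random()  # ages of A, B, C
    if x > z:
        a_older_c += 1
        if x > y:
            a_older_both += 1
print(a_older_both / a_older_c)  # ≈ 0.666..., not 0.5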
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Linearly independent subset - a simple solution?
Problem: Let $\{ v_1, \ldots, v_n \}$ be a linearly independent subset of $V$,
a vector space. let $$ v= t_1 v_1 + \cdots + t_n v_n $$ where $t_1,
\ldots t_n \in \mathbb{R}$. For which $v$ is the set $\{v_1 + v ,\ldots , v_n + v \}$ linearly independent?
My ideas were:
Firstly, any linear combination of the new set is,
$$ \sum \lambda_i (v_i + v) = \sum \lambda_i v_i + sv = \sum (\lambda_i + st_i) v_i = 0 $$
where $s = \sum \lambda_i$. By linear independence of $v_i$, we must have
$\lambda _i + st_i = 0$ for $i=1, \ldots , n$.
Summing up yields $ s + s \sum t_i = 0 \Rightarrow s( 1 + \sum t_i) = 0 $.
My claim is: $\sum t_i \not = -1 \Leftrightarrow$ set is linearly
independent.
Proof: If $\sum t_i \not = -1$, then $s = 0$, hence $\lambda _i = 0$ for $i=1, \ldots, n$. So the new set is linearly independent.
On the other hand if $\sum t_i = -1$, then regarding the original
system of linear equations, where $$\lambda_i + st_i = \sum_{j \not=i}
\lambda_j t_i + \lambda_i (1+t_i) = 0$$ for $i=1, \ldots, n$. We wish
to find a non trivial set of solutions for $i=1, \ldots, n$. Written
out in matrix form, we have a matrix with all column sums equal to $0$
(column $i$ has sum $\sum t_i +1 = 0$), hence the transpose of this
matrix, has a non trivial solution, eigenvector $(1, \ldots, 1 )^T$.
Therefore, the transpose matrix is not invertible - hence, the matrix
itself is not invertible. So there exists a non trivial solution.
I think this solution is a bit too long winded - I am hoping if anyone could give a more elmentary solution that does not use matrices. Thank you!
|
It is easy to explicitly guess a nontrivial solution for the system of equations $$\lambda_i + st_i = 0$$ By setting $\lambda_i = t_i$ for all $i$, we get $\lambda_i + st_i = t_i-t_i = 0$ for all $i$ since $s=-1$ by assumption. Also since $\sum t_i = -1$, at least one $t_i$ is nonzero, so the solution is nontrivial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Preimage of a simple closed curve under the two-dimensional antipodal map
 Suppose $p:S^2\to P^2$ is the quotient antipodal map, and $J$ is a simple closed curve in $P^2$, then $p^{-1}(J)$ is either a simple closed curve in $S^2$, or two disjoint simple closed curves in $S^2$.
I think I can prove the second case. Clearly $(S^2,p)$ is a 2-fold covering space. Suppose $J$ is totally contained in a basic neighbourhood $U$ of $P^2$; then $p$ restricts to a homeomorphism on each of the two connected components of $p^{-1}(U)$, hence each component contains exactly one homeomorphic preimage of $J$.
Now I have trouble proving the other case where no basic neighbourhood can contain $J$. How to proceed? Thanks in advance.
|
$S^2 \to P^2$ is a 2-sheeted covering map. Now note, that for any space $X\subset P^2$, the map restricts to a 2-sheeted covering map $p^{-1}X\to X$. If we pick $X=J$, then the covering map restricts to
$$ p^{-1}J \stackrel {2:1} \longrightarrow J\cong S^1.$$
So the preimage $p^{-1}J$ might either be homeomorphic to the unique connected covering space of $S^1$ corresponding to $2\mathbb Z\subset \mathbb Z$, or be homeomorphic to the trivial covering space $S^1 \sqcup S^1$. Note that in general (though not in this case, by the Jordan curve theorem) the two curves might be knotted.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Prove that the function is uniformly continuous
 Prove that the function $f: [0, \infty) \ni x \mapsto \frac{x^{2}}{x + 1} \in \mathbb{R}$ is uniformly continuous.
On the internet I found out that a function is uniformly continuous when
$\forall \varepsilon > 0 \ \exists \delta > 0: \left | f(x)-f(x_{0}) \right | < \varepsilon$ whenever $\left | x - x_{0} \right | < \delta .$
Because I don't know how to prove it by calculation, I have drawn the function and illustrated its uniform continuity that way.
But I'd like to know how to do it the other, more professional and efficient way. I've watched some videos but couldn't find a solution, and I also tried myself for almost 2 hours without success.
For the drawing, I think there is actually a mistake: at the beginning the epsilon (the first one, which I haven't drawn) seems smaller than the others, while the others have the same size.
(In addition I skipped the other part of the function because it's trivial).
|
The best way to start these types of problems is to start by messing with the part $|f(x) - f(y)| < \epsilon$ of the definition. Note that, by combining fractions and multiplying everything out we have
$f(x) - f(y) = \frac{x^2}{x+1} - \frac{y^2}{y+1} = \frac{x^2y-y^2x+x^2-y^2}{xy+x+y+1}$.
After playing around with some grouping I found that this can be rewritten as
$\frac{xy(x-y)+(x^2-y^2)}{xy+x+y+1} = \frac{(x-y)(xy+x+y)}{xy+x+y+1}$.
As a hint for where to go from here it is important to remember that $x, y \in [0, \infty)$.
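The grouping can be double-checked symbolically (a small sympy sketch, not part of the original answer):
from sympy import symbols, simplify

x, y = symbols('x y', nonnegative=True)
lhs = x**2/(x + 1) - y**2/(y + 1)
rhs = (x - y)*(x*y + x + y) / (x*y + x + y + 1)
assert simplify(lhs - rhs) == 0  # the two expressions agree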
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Does $|z_1+z_2|=|z_1|+|z_2|\implies z_1=kz_2$? Let $z_1,z_2\in \mathbb C$. I was wondering if
$$|z_1+z_2|=|z_1|+|z_2|\implies z_1=k z_2\ \ or\ \ z_2=0,\quad k\in\mathbb R^+.$$
Is it true?
I am having problems trying to show it. I'm sure it's easy to show using brute force, but if it's true, is there an elegant way to show it?
|
Squaring both sides, we see this is equivalent to:
$$(z_1+z_2)(\overline z_1+\overline z_2)=z_1\overline z_1 + 2|z_1||z_2| + z_2\overline z_2$$
which is equivalent to:
$$\mathrm{Re}(z_1\overline z_2)=\frac{1}{2}\left(z_1\overline z_2+z_2\overline z_1\right)=|z_1 \overline z_2|$$
So $z_1\overline z_2=\alpha$ must be non-negative real. Then you get:
$$z_1|z_2|^2 = \alpha z_2$$
So either $z_2=0$ or $z_1=\frac{\alpha}{|z_2|^2}z_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1830909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Characterizing spaces with no nontrivial covers I know that simply connected locally path-connected spaces have no nontrivial covers. Is there a characterization of spaces with this property?
|
The converse is also true, under mild assumptions on $X$.
Namely, if you assume that your space $X$ is path connected, locally path connected, and semilocally simply connected, then $X$ has no nontrivial path connected covers if and only if $X$ is simply connected.
This follows from the standard theorem giving a bijection between subgroups of $\pi_1(X,p)$ and connected pointed covering spaces of $(X,p)$. Under this bijection, the degree of the covering map equals the index of the corresponding subgroup. So the covering map is nontrivial if and only if the subgroup is proper, and every nontrivial group has a proper subgroup.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Three positive numbers a, b, c satisfy $a^2 + b^2 = c^2$; is it necessarily true that there exists a right triangle with side lengths a,b and c? If so, how could you go about constructing it? If not, why not?
I am new to proofs and I was reading a book where they posed this question. I understand that if we are given any right triangle, the relation between sides a, b, c is given by Pythagoras' theorem; although, given $a^2 + b^2 = c^2$, does it necessarily imply that there exists a triangle with sides a, b, c?
|
Yes, indeed the converse of Pythagoras is also true. When such a construction is made (SSS), the relation is satisfied. The vertices lie on a semicircle, and by Thales' theorem a right angle is enclosed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 5
}
|
Different ways of evaluating $\int_{0}^{\pi/2}\frac{\cos(x)}{\sin(x)+\cos(x)}dx$ My friend showed me this integral (and a neat way he evaluated it) and I am interested in seeing a few ways of evaluating it, especially if they are "often" used tricks.
I can't quite recall his way, but it had something to do with an identity for phase shifting sine or cosine, like noting that $\cos(x+\pi/2)=-\sin(x)$ we get:
$$ I=\int_{0}^{\pi/2}\frac{\cos(x)}{\sin(x)+\cos(x)}dx=\int_{\pi/2}^{\pi}\frac{-\sin(x)}{-\sin(x)+\cos(x)}dx\\
$$
Except for as I have tried, my signs don't work out well. The end result was finding
$$ 2I=\int_{0}^{\pi/2}dx\Rightarrow I=\pi/4
$$
Any help is appreciated! Thanks.
|
You're integrating on $[0, \pi/2]$ so replacing $x$ by $\pi/2 -x$ we see that $$I=\int_0^{\pi/2} \frac{\sin{x}}{\cos{x}+\sin{x}}dx.$$ Now sum this integral with the initial expression and notice that $2I=\pi/2$ hence...
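A numerical sanity check of the result (a scipy sketch, not part of the original answer):
import math
from scipy.integrate import quad

I, _ = quad(lambda x: math.cos(x) / (math.sin(x) + math.cos(x)), 0, math.pi/2)
print(I, math.pi/4)  # both ≈ 0.78539816...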
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
$\lim_{x\to \infty} \ln x=\infty$ I'm reading the following reasoning:
Since $\underset{n\to \infty}{\lim}\ln 2^n=\underset{n \to \infty}{\lim}n\cdot(\ln 2)=\infty$ then necessarily $\underset{x\to \infty}{\lim}\ln x =\infty$.
I don't understand how the generalisation was done from $\lim_{n\to \infty}\ln 2^n=\infty$ to $\lim_{x\to \infty} \ln x=\infty$.
Thank you for your help!
|
For $2\leq n\in N$ and $x\geq n$ we have $$\ln x=\int_1^x(1/y)\;dy\geq \int_1^n(1/y)\;dy=\sum_{j=1}^{n-1}\int_j^{j+1}(1/y)\;dy\geq \sum_{j=1}^{n-1}\int_j^{j+1}(1/(j+1))\;dy=$$ $$=\sum_{j=1}^{n-1}1/(j+1).$$ There are many ways to show that this last sum has no upper bound as $n\to \infty.$ The simplest, I think,is $$1/2+1/3+1/4\;>\;1/2+2(1/4)=1,$$ $$1/2+1/3+...1/8\;>\;1/2+2(1/4)+4(1/8)=3/2,$$ $$1/2+1/3+...+1/16\;>\;1/2+2(1/4)+4(1/8)+8(1/16)=2,$$...... et cetera.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
}
|
How to pull out coefficient from radical in an integral I am in an online Calculus 2 class, and before my professor gets back to me, I was wondering if you guys could help. I was reading through an example:
How was 1/27 pulled out from the coefficient next to u^2? I am probably missing something dumb. Thanks.
|
There's an error in the problem: the numerator in the $u$-substitution should be $u^3$, not $u^2$. The numerator here is $\left(\tfrac{1}{3} u\right)^3$ (compare to $u$), so a factor of $1/27$ drops out into the denominator. The rest is easy to see by careful substitution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Solving for possible orientations of 3 objects on a 3x3 grid Say you have a 3x3 grid, and 3 objects to work with. Each occupies one space. How would I go about solving for the number of ways they can lie on the grid.
Example: (pardon my bad ASCII art)
[][][]
[][][]
[x][x][x]
Any help would be much appreciated, thanks!
|
To reiterate what was said in the comments, imagine temporarily that your grid has its positions labeled.
$\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}$
A specific arrangement of the objects will then correspond to a selection of three numbers from $1$ to $9$.
$\begin{bmatrix}\circ&\times&\circ\\\circ&\circ&\times\\\circ&\circ&\times\end{bmatrix}$ would correspond to the set $\{2,6,9\}$
Assuming you consider $\begin{bmatrix}\circ&\times&\circ\\\circ&\circ&\times\\\circ&\circ&\times\end{bmatrix}$ to be a different arrangement than $\begin{bmatrix}\circ&\times&\times\\\times&\circ&\circ\\\circ&\circ&\circ\end{bmatrix}$, one can move freely between the set of desired arrangements and the set of subsets of size three of $\{1,2,\dots,9\}$ via a natural bijection.
The fact that there is a bijection between the sets in question implies that if we know how to count how many arrangements one of these questions has, we know how to count both.
We have thus boiled the question down to asking the question, "How many size three subsets exist of the set $\{1,2,\dots,9\}$?"
This is one of the many problems that binomial coefficients were designed to answer. The answer being $\binom{9}{3}=\frac{9!}{3!6!}=\frac{9\cdot 8\cdot 7}{3\cdot 2\cdot 1} = 84$ possible arrangements.
If you consider rotations and/or reflections of your grids to be considered "the same", for example the two arrangements pictured above, the question becomes harder since we will have over counted. Approach instead by Burnside's lemma.
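A brute-force sketch (not part of the original answer) confirming the labeled count $\binom{9}{3}=84$:
from itertools import combinations

print(sum(1 for _ in combinations(range(1, 10), 3)))  # 84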
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that order is antisymmetric (for natural numbers) Prove that order is antisymmetric (for natural numbers), i.e.
If $ a \leq b$ and $b\leq a$ then $a=b$.
I do not want a proof based on set theory.
I am following the book Analysis 1 by Tao. It should be based on peano axioms.
I tried $ b=a+n$ where $n$ is a natural number; then the assumption gives $ a+n \leq a$, but subtraction is not yet defined (in the text that I am following).
How should I proceed?
|
If you're allowed to rely on trichotomy you might as well suppose, by way of contradiction, that $a\ne b$. Since $a\leq b$ and $b\leq a$ it follows that $a<b$ and $b<a$, a contradiction to the aforementioned trichotomy property.
Remarks.
*
*For this proof to be valid you'd want $a\leq b$ defined as $a<b\oplus a=b$, where by $\oplus$ we mean an exclusive or
*To understand why we can conclude $a<b$ and $b<a$ from $a\leq b$ and $b\leq a$ take into account that $(p\oplus q)\wedge \sim q\implies p$ where $p$ and $q$ are propositions and $\oplus$ is an exclusive or.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Why $\langle x^2+1\rangle$ is not prime in $\mathbb{Z}_2[x]$? I am reading ring theory (a beginner) and I stumbled upon a problem which I can't understand
The ideal $\langle x^2+1\rangle$ is not prime in $\mathbb{Z}_2[x]$, since it contains $(x+1)^2=x^2+2x+1=x^2+1$ , but does not contain $x+1$ .
$\langle x^2+1\rangle$ denotes the principal ideal generated by $x^2+1$ i.e. $$\langle x^2+1\rangle=\{f(x)(x^2+1)\mid f(x)\in \mathbb{R}[x]\}$$
$\mathbb{R}[x]$ denotes the ring of polynomials with real coefficients.
My doubt:
How can $x^2+1+2x$ be written in the form $f(x)(x^2+1)$ with $f(x)\in \mathbb{R}[x]$?
|
You're comparing apples and oranges. The rings $\mathbb{Z}_2$ and $\mathbb{R}$ are very different and the polynomial $x^2+1$ has different behavior in them and there's no contradiction, because
$$
x^2+1\in\mathbb{Z}_2[x]
$$
and
$$
x^2+1\in\mathbb{R}[x]
$$
are different objects that live in distinct sets.
So it can very well happen that one is reducible and the other one isn't. Indeed so it happens; in $\mathbb{Z}_2[x]$ we have
$$
x^2+1=(x+1)^2
$$
whereas $x^2+1\in\mathbb{R}[x]$ is irreducible.
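This can be checked directly with sympy (a sketch; `modulus=2` requests factorization over $\mathbb{Z}_2$):
from sympy import symbols, factor

x = symbols('x')
print(factor(x**2 + 1, modulus=2))  # (x + 1)**2 -- reducible over Z_2
print(factor(x**2 + 1))             # x**2 + 1   -- irreducible over the rationals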
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1831783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Composition of a unique arrow with the inverse of another Suppose we have the arrows $u:T \rightarrow Q$, $v:T \rightarrow P$ and $f:P \rightarrow Q$. Furthermore, suppose $u$ is unique and $f$ is iso.
I understand that we can say that $v = u;f^{-1}$, but do we have enough information that $v$ is also unique?
I believe that $v$ is unique, but I'm not sure why. I think it is because - by the uniqueness of inverses - $f^{-1}$ is unique, and that when it is composed with another unique arrow ($u$), then the composition $u;f^{-1}$ must also be unique.
Can anyone confirm whether this is the case, please?
|
Yes, this is true. Once you have two arrows (and two unique arrows are in particular two arrows) you can compose them, composition being a function; so the composite $u;f^{-1}$ is uniquely determined.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Area of a circle on sphere On a (flat) Euclidean plane, the area of a circle with a radius $r$ can be described by the function $A(r) = \pi r^2.$
But how can one describe the area of the same circle on a spherical manifold? Assuming that the radius of the sphere is an Euclidean distance of $d,$ how would $A(x)$ look?
I'm assuming this can be found using calculus and/or trigonometric functions, but I'm not exactly sure how to do it.
|
It depends on how you define $r$. If $r$ is the length of the arc on the sphere, then your area is still $\pi r^2$. If $r$ is the radius in the plane, you need to calculate the length of the arc given by a point on the circle, and the intersection between the sphere and the line that goes through the center of the sphere and the center of the circle. For the radius of the sphere $d$, the arc length is $d\theta$, where $\sin(\theta)=r/d$. The area of the "circle" is then $\pi d^2 \theta^2$
My mistake. Here is the solution:
Suppose you call $r$ the length of the arc along the sphere, and $x$ the radius in the plane. At a position $l$ along the arc, $x=d\sin(\theta)$, where $\theta=l/d$. A small strip on the sphere of width $dl$ has area $2\pi x dl$. Then
$$A=2\pi d\int_0^l dl \sin(l/d)=2\pi d^2\int_0^{l/d} dy \sin(y)=2\pi d^2(1-\cos(l/d))$$
For $l=\pi d$, $\cos(\pi)=-1$, so $A=4\pi d^2$
For $d\rightarrow \infty$, we should recover the plane geometry: $\cos(l/d)\approx 1-\frac{l^2}{2d^2}$, so $A\approx\pi l^2$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
$\text{Ext}_R^n(\bigoplus_{i\in I} M_i, N) = \prod_{i\in I}\text{Ext}_R^n(M_i,N)$ Let $(M_i)_{i\in I}$ be a collection of $R$-moduls. Show that for all $N\in \text{Ob}(_R\text{Mod})$ is
$$
\text{Ext}_R^n(\bigoplus_{i\in I} M_i, N) = \prod_{i\in I}\text{Ext}_R^n(M_i,N).
$$
My idea is to make use of the universal property of the direct sum (note that $N\rightarrow I^{\bullet}$ is an injective resolution of $N$):
$$\begin{eqnarray*}
\text{Ext}_R^n(\bigoplus_{i\in I}M_i,N) & = &\mathcal{H}^n(\text{Hom}_R(\bigoplus_{i\in I}, I^{\bullet}))\\
& \cong & \mathcal{H}^n(\prod_{i\in I}\text{Hom}_R(M_i, I^{\bullet}))\\
& \cong & \prod_{i\in I}\mathcal{H}^n(\text{Hom}_R(M_i,I^{\bullet}))\\
& = & \prod_{i\in I}\text{Ext}_R^n(M_i,N)
\end{eqnarray*}$$
Is this correct? I'm not sure if & why $\mathcal{H}^n(\prod_{i\in I}\text{Hom}_R(M_i, I^{\bullet})) \cong \prod_{i\in I}\mathcal{H}^n(\text{Hom}_R(M_i,I^{\bullet}))$ holds....
|
The proof is correct. If $(C_i)_{i\in I}$ is any family of cochain complexes, then by writing out the definitions you immediately see $H^n(\prod C_i)=\prod H^n(C_i)$.
For if $D$ denotes the differential on $\prod C_i$, $d_i$ the differential on $C_i$, then
$H^n(\prod C_i)=\operatorname{Ker} D/\operatorname{Im} D=\prod \operatorname{Ker} d_i/\prod \operatorname{Im} d_i=\prod \left(\operatorname{Ker} d_i/ \operatorname{Im} d_i\right)=\prod H^n(C_i).$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find the minimum, maximum, minimals and maximals of this relation
Tell if the following order relation is total and find the minimum, maximum, minimals and maximals: $$\forall a,b \in\mathbb Z,\ \ a\ \rho\ b \iff a \leq b\ \text{ and }\ \pi(a) \subseteq \pi(b)$$
where $\pi$ is defined as follows: $\forall n \in\mathbb Z: \pi(n) = \{p \in P\ |\ \ p|n\}$ where $P$ is the set of positive prime numbers.
My attempt at solving this exercise:
*
*$\rho$ is not total. In fact, $2\ \not\rho\ 5$ and $5\ \not\rho\ 2$.
*I am confused because we are solving this exercise in $\mathbb Z$ so I guess $1$ can't be minimum because $-1\ \rho\ 1$. So $-1$ is minimum of $\rho$.
*I think there is maximum/maximals and it is $0$ because also negative numbers are in relation with $0$: $-1\ \rho\ 0$. Correct?
Thank you.
|
I think you have not yet got a hold of the difference between minimum and minimal and maximum/maximal. A minimum of a partially ordered set is an element smaller than all other elements. An element is minimal if there is no element that is strictly smaller than it.
For a totally ordered set, these are the same. But for a partially ordered set there can be many minimal elements which are not comparable to each other. In this case, there would be no minimum.
*
*This is correct.
*The number 1 is indeed not minimal, for the reason you said. $-1$ is minimal since nothing can be strictly smaller -- any other negative number will have some prime factors. -1 is not the minimum though since $-1 \!\!\not\operatorname{\!\rho} -2$.
*0 is not larger than every other element so it is not the maximum. $1 \!\!\not\operatorname{\!\rho} 0$. So 0 might be maximal but it is not the maximum.
To complete this exercise, you'll want to have proof that you've found all of the maximal and minimal elements. You are on the right track to single out -1, 0, and 1 and pay careful attention to them.
Then you might want to ask yourself if a regular old number like 3 or 22 or -12 could be maximal or minimal.
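A small brute-force sketch of exactly that hunt (restricting to a finite window of $\mathbb Z$, which is only suggestive; note the convention that every prime divides $0$ is encoded by treating $\pi(0)$ as a special 'all primes' value):
from sympy import primefactors

def pi(n):
    return None if n == 0 else set(primefactors(abs(n)))  # None stands for 'all primes'

def rho(a, b):
    if a > b:
        return False
    pa, pb = pi(a), pi(b)
    return pb is None or (pa is not None and pa <= pb)

window = range(-50, 51)
assert not any(rho(b, -1) for b in window if b != -1)  # nothing strictly below -1
assert not any(rho(0, b) for b in window if b != 0)    # nothing strictly above 0
assert not rho(-1, -2)  # ...but -1 is not the minimum
assert not rho(1, 0)    # ...and 0 is not the maximum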
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How do I find the Integral of $\sqrt{r^2-x^2}$? How can I find the integral of the following function using polar coordinates ?
$$f(x)=\sqrt{r^2-x^2}$$
Thanks!
|
$\displaystyle\int \sqrt{r^2-x^2}\,dx$
Either $x=r\sin\alpha$ or $x=r\cos\alpha$ works as a substitution.
Take $\;x=r\sin\alpha$,
so that $\quad dx=r\cos \alpha \;d\alpha$.
The integral becomes
$\displaystyle\int \sqrt{r^2-x^2}\,dx=\displaystyle\int r\sqrt{1-\sin^2\alpha}\cdot r\cos \alpha \;d\alpha=\displaystyle\int r^2\cos^2\alpha\; d\alpha$
Now use the identity $\quad \cos2\alpha=2\cos^2\alpha-1\longrightarrow \cos^2\alpha=\dfrac{\cos 2\alpha+1}{2}$
Then
$\displaystyle\int r^2\cos^2\alpha\; d\alpha=\displaystyle\int r^2\dfrac{\cos 2\alpha+1}{2}\; d\alpha=\dfrac{r^2}{2}\left[\dfrac{\sin2\alpha}{2}+\alpha\right]+C$
Substituting back via
$\boxed{\;x=r\sin\alpha}$
(so that $\dfrac{\sin2\alpha}{2}=\sin\alpha\cos\alpha=\dfrac{x\sqrt{r^2-x^2}}{r^2}$), the answer is
$\boxed{\boxed{\displaystyle\int \sqrt{r^2-x^2}\,dx=\dfrac{r^2}{2}\left[\dfrac{\sin2\alpha}{2}+\alpha\right]+C=\dfrac{x\sqrt{r^2-x^2}}{2}+\dfrac{r^2}{2}\arcsin\left(\dfrac{x}{r}\right)+C}}$
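One can confirm the antiderivative by differentiating it back with sympy (a sketch, not part of the original answer):
from sympy import symbols, sqrt, asin, diff, simplify

x, r = symbols('x r', positive=True)
F = x*sqrt(r**2 - x**2)/2 + r**2*asin(x/r)/2
assert simplify(diff(F, x) - sqrt(r**2 - x**2)) == 0  # F' recovers the integrand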
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Find the probability that a word with 15 letters (selected from P,T,I,N) does not contain TINT If a word with 15 letters is formed at random using the letters P, T, I, N,
find the probability that it does not contain the sequence TINT.
(I just made up this problem.)
|
The following Python script confirms the answer given by @McFry:
LENGTH = 15
LETTERS = 'PTIN'
SEQUENCE = 'TINT'
def func(word):
    # Depth-first enumeration of every length-15 word over the alphabet;
    # a finished word counts 1 iff it avoids SEQUENCE (4**15 words: exhaustive but slow).
    if len(word) < LENGTH:
        return sum(func(word+letter) for letter in LETTERS)
    else:
        return SEQUENCE not in word

count = func('')
total = len(LETTERS)**LENGTH
print('{}/{}={}%'.format(count, total, 100.0*count/total))
The output is $1024574807/1073741824\approx95.42\%$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Is convexity the most general dividing line between "easy" and "hard" optimization problems? Just got started with Boyd's Convex Optimization. It's great stuff and I see how it directly subsumes the all-important linear programming class of models. However, it seems that if a problem is non-convex, then the only recourse is some form of exhaustive search with smart stopping rules (e.g., branch and bound with fathoming).
Is convex optimization really the last line of demarcation between "easy" and "hard" optimization problems? By "last line" I mean that there does not exist a strict superset of convex problems that are also easily solved and well-behaved with respect to global optima.
|
In my opinion, it is sufficient that the objective is strictly quasi-convex (mere quasi-convexity can allow flat, non-global local minima). Indeed, this ensures that all local minimizers are global minimizers and the set of global minimizers is convex. Thus, you do not have to fight against local minimizers which are not global minimizers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help with proving a 2 by 2 determinant is the area of parallelogram I have proved a large part of this by the following but get stuck at the last step.
To say $A=ad-bc$, we still need $ad>bc$. I have been puzzling over this for hours. Thank you!
|
In general, area satisfies $A \geq 0$, and since $A^2= (ad-bc)^2$ we get $A=\vert ad-bc \vert$; so in fact the area of a parallelogram is the absolute value of the determinant.
In your case, where $(c,d)$ is between 0 and 180 degrees ccw of $(a,b)$, call this angle $\theta \in [0, 180]$. Note $(d,-c)$ is a rotation of $(c,d)$ by 90 degrees cw, so $(a,b)$ and $(d,-c)$ are $90-\theta$ degrees apart. Then $$\cos (90-\theta)= \frac{ad-bc}{\sqrt{(a^2+b^2)(c^2+d^2)}}$$ The denominator is $>0$; given our restriction on $\theta$, $90-\theta \in[-90,90]$, where cosine is $\geq 0$. So $ad-bc \geq 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$5^{th}$ degree polynomial expression
$p(x)$ is a degree-$5$ polynomial such that
$p(1)=1,p(2)=1,p(3)=2,p(4)=3,p(5)=5,p(6)=8;$ find $p(7)$.
$\bf{My\; Try::}$ Here we cannot simply write the given polynomial as $p(x)=x$,
and setting $p(x)=ax^5+bx^4+cx^3+dx^2+ex+f$ leads to a very complicated system of equations.
Please help me see how to solve this question. Thanks.
|
hint: write the polynomial in this form $$f(x)= a(x-1)(x-2)(x-3)(x-4)(x-5)+b(x-1)(x-2)(x-3)(x-4)(x-6) +c(x-1)(x-2)(x-3)(x-5)(x-6)+d(x-1)(x-2)(x-4)(x-5)(x-6)+e(x-1)(x-3)(x-4)(x-5)(x-6)+f(x-2)(x-3)(x-4)(x-5)(x-6)$$ now finding the constants is easy
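Concretely, sympy's interpolate recovers the unique degree-5 polynomial through the six points (a sketch, not part of the original hint):
from sympy import symbols, interpolate

x = symbols('x')
p = interpolate([(1, 1), (2, 1), (3, 2), (4, 3), (5, 5), (6, 8)], x)
print(p.subs(x, 7))  # 8 -- the polynomial does not continue the Fibonacci pattern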
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 3
}
|
How to solve asymptotic expansion: $\sqrt{1-2x+x^2+o(x^3)}$ Determine the best asymptotic expansion for $x \to 0$ of:
$$\sqrt{1-2x+x^2+o(x^3)}$$
How should I proceed?
In other exercises I never had the $o(x^3)$ inside the expression; it was only the maximum order to consider.
|
You have the following asymptotic expansion :
$$\sqrt{1+x}=1+\frac{x}{2}-\frac{x^2}{8}+\frac{x^3}{16}+o(x^3)$$
So :
$$\sqrt{1-2x+x^2+o(x^3)}=1+\frac{-2x+x^2+o(x^3)}{2}-\frac{(-2x+x^2+o(x^3))^2}{8}+\frac{(-2x+x^2+o(x^3))^3}{16}+o((-2x+x^2+o(x^3))^3)\\=1+\frac{-2x+x^2}{2}-\frac{4x^2-4x^3+x^4}{8}+\frac{-8x^3+12x^4-6x^5+x^6}{16}+o(x^3)\\=1-x+\frac{x^2}{2}-\frac{x^2}{2}+\frac{x^3}{2}-\frac{x^3}{2}+o(x^3)=1-x+o(x^3)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1832965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An erroneous application of the Counting Theorem to a regular hexagon? I'm trying to count the unique orbits of a regular hexagon such that each vertex is either Black or White and each edge is either Red, Gree, or Blue. The group I've chosen to act on the hexagon is the dihedral group $D_7$, $$\{e,r,r^2,r^3,r^4,r^5,r^6,s,rs,r^2s,r^3s,r^4s,r^5s,r^6s\}$$ where $r$ is a rotation by $\frac{\pi}{3}$, and $s$ a reflection in the axis connecting two opposite vertices or the midpoints of two opposite edges. When chopped up, I get the following partition into conjugacy classes: $$\{e\} \hspace{0.5cm} \{r,r^6\}\hspace{0.5cm} \{r^2,r^5\}\hspace{0.5cm} \{r^3,r^4\}\hspace{0.5cm}\{s,r^2s,r^4s,r^6s\}\hspace{0.5cm} \{rs,r^3s,r^5s\}$$
Taking the first element of each conjugacy class as the representative, I then go about counting the permutations that are left fixed by that representative. Here's my count (note, $X^g$ denotes the set of all regular hexagons left fixed by group element $g$): $$\begin{align*}
|X^e|&=3^6\times 2^6 & |X^r|&=3\times 2 & |X^{r^2}|&=3^2\times 2^2\\ |X^{r^3}|&=3^3\times 2^3 & |X^s|&=3^4\times 2^3 + 3^3\times 2^4 & |X^{rs}|&=3^3\times 2^4 + 3^4\times 2^3 \end{align*}$$
Notice that the orders of the last two sets, $X^s$ and $X^{rs}$, are sums: one addend counts the reflections through opposite vertices and the other through midpoints of opposite sides. When I apply the Counting Theorem (aka Burnside's Lemma?) I obtain $$\frac{1}{14}[3^6\times 2^6 + 2(3\times 2) + 2(3^2 \times 2^2) + 2(3^3\times 2^3) + 4(3^4\times 2^3 + 3^3 \times 2^4) + 3(3^3\times 2^4 + 3^4 \times 2^3)]$$ and it is here I stumbled when I saw this product is not an integer.
|
For future reference I would like to document how we can do this
calculation using a cycle index. The key observation here is the
following: the cycle structure of a rotation (but not a reflection)
acting on the vertices and edges is the same for edges and
vertices. So we may compute the cycle index by duplicating the cycle
structure of the terms of the ordinary cycle index acting on the
vertices. Do the rotations first. There is the identity which yield
$$a_1^6 b_1^6.$$
A rotation that takes zero to one or five yields
$$2 a_6 b_6.$$
A rotation that takes zero to two or four yields
$$2 a_3^2 b_3^2.$$
The rotation that takes zero to three yields
$$a_2^3 b_2^3.$$
For the reflections we get reflections about an axis passing through
two opposite vertices to get (note the different cycle structure for
the vertices and the edges)
$$3 a_1^2 a_2^2 b_2^3.$$
Then there are reflections about an axis passing through the midpoints
of two opposite edges which yield (once again we have a different
cycle structure for vertices and edges)
$$3 a_2^3 b_1^2 b_2^2.$$
Now we have two colors for the vertices and three for the edges which
by Burnside must be constant on the cycles. This yields
$$\frac{1}{12}
\left(6^6 + 2\times 6 + 2\times 6^2 + 6^3
+ 3 \times 2^4 3^3 + 3 \times 2^3 3^4\right).$$
This yields for the desired end result the value
$$4183.$$
It was not practicable to verify this with Maple as resource
consumption (time, space) was unacceptable. Perl seems to cope quite
well.
#! /usr/bin/perl -w
#
sub convert {
my ($val, $base, $len) = @_;
my @res;
for(my $pos = 0; $pos < $len; $pos++){
my $digit = $val % $base;
push @res, $digit;
$val = ($val - $digit) / $base;
}
return \@res;
}
MAIN : {
my ($idx2, $idx3, $d2, $d3);
my %orbits;
for(my $idx2 = 0; $idx2 < 2**6; $idx2++){
$d2 = convert $idx2, 2, 6;
for(my $idx3 = 0; $idx3 < 3**6; $idx3++){
$d3 = convert $idx3, 3, 6;
my @interl;
for(my $pos = 0; $pos < 6; $pos++){
push @interl,
$d2->[$pos], $d3->[$pos];
}
my (%orbit, $entry, $refent);
for(my $rot=0; $rot<12; $rot+=2){
$entry =
[@interl[$rot..11],
@interl[0..$rot-1]];
$orbit{join('-', @$entry)} = 1;
}
for(my $refl=0; $refl<12; $refl+=4){
$entry =
[@interl[$refl..11],
@interl[0..$refl-1]];
$refent =
[$entry->[0],
reverse(@$entry[1..11])];
$orbit{join('-', @$refent)} = 1;
$refent =
[$entry->[2],
$entry->[1],
$entry->[0],
reverse(@$entry[3..11])];
$orbit{join('-', @$refent)} = 1;
}
$orbits{join('|', sort(keys %orbit))} = 1;
}
}
print scalar(keys %orbits);
printf " (%d)\n",
(6**6
+ 2 * 6
+ 2 * 6**2
+ 6**3
+ 3 * 2**4 * 3**3
+ 3 * 2**3 * 3**4)/12;
1;
}
Remark. We could adapt the join statements above to use empty separators which however reduces readability of the data structure during debugging.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Matrix with orthonormal base I have the two following given vectors:
$\vec{v_{1} }=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$
$\vec{v_{2} }=\begin{pmatrix} 3 \\ 0 \\ -3 \end{pmatrix} $
I have to calculate matrix $B$ so that these vectors in $\mathbb{R}^{3}$ construct an orthonormal basis.
The solution is:
$$B=\begin{pmatrix} 0 & -\frac{\sqrt{2} }{2} & \frac{\sqrt{2} }{2} \\ 1 & 0 & 0 \\ 0 & -\frac{\sqrt{2} }{2} & -\frac{\sqrt{2} }{2} \end{pmatrix}$$
I really don't have any idea how to get this matrix. I'm also confused because I only have 2 vectors.
|
Maybe these calculations would help you.
We need to find a vector $\vec{v}_3$ such that $\vec{v}_3\perp\vec{v}_1$ and $\vec{v}_3\perp \vec{v}_2$, i.e.
$$
\begin{cases}
(\vec{v}_1, \vec{v}_3) = 0, \\
(\vec{v}_2, \vec{v}_3) = 0.
\end{cases}
$$
Here $(\vec{x},\vec{y})$ is a scalar product of vectors $\vec{x}$ and $\vec{y}$.
If we denote $\vec{v}_3$ as $(x_1,x_2,x_3)^T$ we get the system
$$
\begin{cases}
0\cdot x_1 + 1\cdot x_2 + 0\cdot x_3 = 0, \\
3\cdot x_1 +0\cdot x_2 - 3\cdot x _3 = 0
\end{cases} \iff
\begin{cases}
x_2 = 0, \\
3x_1 - 3x_3 = 0
\end{cases}\iff
\begin{cases}
x_2 = 0, \\
x_1 = x_3.
\end{cases}
$$
So vector $\vec{v}_3$ is depends on one parameter $x$ and has form $(x,0,x)^T$.
Then we need to normalize this system, i.e. calculate vectors $\vec{u}_i = \dfrac{\vec{v}_i}{||\vec{v}_i||}$. We get
$$
\vec{u}_1 = \frac{1}{\sqrt{1^2}}
\begin{pmatrix}
0 \\ 1 \\ 0
\end{pmatrix} =
\begin{pmatrix}
0 \\ 1 \\ 0
\end{pmatrix};
$$
$$
\vec{u}_2 = \frac{1}{\sqrt{3^2 + (-3)^2}}
\begin{pmatrix}
3 \\ 0 \\ -3
\end{pmatrix} =
\begin{pmatrix}
\frac{\sqrt{2}}{2} \\ 0 \\ -\frac{\sqrt{2}}{2}
\end{pmatrix};
$$
$$
\vec{u}_3 = \frac{1}{\sqrt{x^2 + x^2}}
\begin{pmatrix}
x \\ 0 \\ x
\end{pmatrix} =
\begin{pmatrix}
\frac{\sqrt{2}}{2} \\ 0 \\ \frac{\sqrt{2}}{2}
\end{pmatrix}.
$$
One may see that system of vectors $(\vec{u}_1,\vec{u}_2,\vec{u}_3)$ is orthonormal.
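A numpy check that these three vectors really are orthonormal (a sketch, not part of the original answer):
import numpy as np

s = np.sqrt(2) / 2
U = np.column_stack([[0, 1, 0], [s, 0, -s], [s, 0, s]])  # u1, u2, u3 as columns
assert np.allclose(U.T @ U, np.eye(3))  # pairwise orthogonal and unit length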
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
What does it mean to have a "different topology"? On a space, I understand the notion of having different metrics on the same space. It is, in layman's terms, different ways of defining a distance but on the same space.
But I often see the term "different topology" being used, for example in this excellent answer. But I do not understand this idea so well.
What does it mean, essentially, to have a "different topology" on the same set say $\mathbb{R}$? Can you provide some simple examples that convey this idea?
|
The topology of a set is essentially a notion about the shape of the set. If you have a topology on a set, you can talk about neighbours, convergence, etc. There are many ways to give a topological structure to a set, as you know, for example taking different metrics may lead to different topologies. For example, the sets $X_1 =\mathbb{N}$ and $X_2 = \{\frac{1}{n}:n\in\mathbb{N} \}\cup\{ 0 \}$ are indistinguishable from the point of view of set theory. On the other hand, as subsets of $\mathbb{R}$, they have a topology induced by the usual metric. Topologically they have a different structure. $X_2$ has a distinguished point, namely, $0$, since every open set $U$ in $\mathbb{R}$ containing $0$ also contains infinitely many other elements of $X_2$. On the other hand, there are no points in $X_1$ having the same property. This is a common example where two set having the same set theoretical properties have different topological properties.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
which natural english interpretation of this symbolic statement is correct? Part of Keith Devlin's Coursera MOOC on mathematical thinking requires the translation of this symbolic statement into natural language:
$$ 5 < x < 7$$
Interpretation 1: $x$ is a single unknown number located somewhere between 5 and 7 on the number line.
Interpretation 2: x is an interval on the number line between 5 and 7. (a line segment).
Which interpretation is correct?
(I will define x to be a REAL NUMBER)
|
The first interpretation is correct.
For the second one, you would write
$$\{x\in \mathbb R ,\ 5<x<7\},$$
or simply
$$\{5<x<7\},$$
or (thanks to a comment) :
$$x\in(5,7).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Constructing an $L^2$ space on the unit ring $\mathcal{S^1}$ Revised Question:
Starting with $L^2[0,2\pi]$, does the canonical map $$[0,2\pi)\ni\theta\mapsto e^{i\theta}\in\mathcal{S^1}$$(with functions going across in the obvious way) turn $L^2[\mathcal{S^1}]$ into a bona fide Hilbert space?
In particular, does the difference in topology between $[0,2\pi]$ and ${S^1}$ have any nasty implications?
Original Question:
Is the Hilbert Space of $L^2$ functions on $[0,2\pi]$ with $f(0)=f(2\pi)$ equivalent to a Hilbert Space of $L^2$ functions defined on the unit ring $\mathcal{S^1}$? Can I even construct the latter?
The reason I ask is that I'm uncertain whether the difference in the measure and topologies between $[0,2\pi]$ and $\mathcal{S^1}$ 'bubbles' its way up into functional analytic results.
Note: The boundary condition given is shorthand for: "The set of equivalence classes of Lebesgue square measurable functions (modulo sets of measure zero) containing a continuous member satisfying the given boundary condition".
|
Of course as Hilbert spaces $L^2(S^1)$ and $L^2\bigl([0,1]\bigr)$ are isomorphic, and you could also say that $L^2\bigl([0,1]\bigr)$ is the prime example of a Hilbert space arising from Lebesgue theory.
But note that $L^2(S^1)$ is one of the most important Hilbert spaces in the world, and there definitely is an essential difference between $L^2\bigl([0,1]\bigr)$ and $L^2(S^1)$ which has not been exposed so far: While $[0,1]$ is just a measure space, the ground set $S^1$ carries additional structure, namely a transitive group of translations. It is not necessary to go into details here. We all have heard of Fourier theory. E.g., there is a famous theorem saying that any function $f\in L^2(S^1)$ is represented (meaning "equal to") almost everywhere by its Fourier series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to find the Summation of series of Factorials? $$1\cdot1!+2\cdot2!+\cdots+x\cdot x! = (x+1)!−1$$
I don't understand what's happening here. The given sum of factorials is generalized into a single term. Could somebody please help me find the logic behind this series.
Thanks in advance!!
|
Michael Hardy’s computational proof is the simplest, but there is also a reasonably straightforward combinatorial argument.
There are $(n+1)!-1$ permutations of the set $[n+1]=\{1,2,\ldots,n+1\}$ other than the increasing permutation $\langle 1,2,\ldots,n+1\rangle$. Let $P$ be the set of all such permutations; we’ll now calculate $|P|$ in another way.
For $k=1,\ldots,n$ let $P_k$ be the set of permutations $\langle p_1,p_2,\ldots,p_{n+1}\rangle\in P$ such that $p_k\ne k$, and $p_\ell=\ell$ for $1\le\ell<k$. If $\langle p_1,\ldots,p_{n+1}\rangle\in P_k$, $p_k$ cannot be any of the positive integers less than or equal to $k$, so there are $(n+1)-k$ possible choices for $p_k$. Once $p_k$ has been set, $\langle p_{k+1},\ldots,p_{n+1}\rangle$ can be any permutation of the remaining $(n+1)-k$ members of $[n+1]$, so
$$|P_k|=(n+1-k)(n+1-k)!\;.$$
Thus,
$$(n+1)!-1=\sum_{k=1}^n|P_k|=\sum_{k=1}^n(n+1-k)(n+1-k)!=\sum_{\ell=1}^n\ell\cdot\ell!\;,$$
where in the last step I just set $\ell=n+1-k$.
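The identity itself is easy to test numerically (a quick sketch, not part of the original answer):
from math import factorial

for n in range(1, 10):
    assert sum(k * factorial(k) for k in range(1, n + 1)) == factorial(n + 1) - 1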
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Integrating factor for a non exact differential form I can't find an integrating factor for the differential form
$$
-b(x,y)\mathrm{d}x + a(x,y)\mathrm{d}y
$$
where
$$
a(x,y) = 5y^2 - 3x
$$
and
$$
b(x,y) = xy - y^3 + y
$$
The problem has origin form the following differential equation
\begin{cases}
x' = a(x,y) \\
y' = b(x,y)
\end{cases}
and my teacher told me that an integrating factor for the associated differential form exists.
I have tried to find an integrating factor of the form $\mu(\phi(x,y))$ where $\mu(s)$ is a single variable function.
Requiring $-b(x,y)\mu(\phi(x,y)) \mathrm{d}x + a(x,y)\mu(\phi(x,y)) \mathrm{d}y$ to be closed, I obtained the differential equation
$$
\frac{\mathrm{d}\mu(\phi)}{\mathrm{d}\phi} = -\frac{\frac{\partial a}{\partial x} + \frac{\partial b}{\partial y}}{a\frac{\partial \phi}{\partial x} + b\frac{\partial \phi}{\partial y}}\mu(\phi)
$$
But I am unable to continue. Any ideas?
|
The integrating factor $\quad \mu=y^2e^x \quad$ can be found by the following method:
the requirement that $\mu\,(-b\,\mathrm{d}x + a\,\mathrm{d}y)$ be closed leads to a first order linear PDE for $\mu$. We don't need to fully solve it with the method of characteristics;
only a part of the solving is sufficient to find an integrating factor. (The detailed derivation was given in an image that is not reproduced here.)
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Non constant analytic function from $\{z\in\mathbb{C}:z\neq 0\}$ to $\{z\in\mathbb{C}:|z|>1\}.$ Does there exist a non-constant analytic function from $\{z\in\mathbb{C}:z\neq 0\}$ to $\{z\in\mathbb{C}:|z|>1\}$? According to me there is no such non-constant analytic function, because if there were such a function, say $f,$ then $f$ could have either a pole or an essential singularity at $z=0$. In the case of a pole, Picard's theorem for meromorphic functions applies, and in the case of an essential singularity we know that the image of any neighbourhood of an essential singularity is dense in $\mathbb{C}$; so in both cases we get a contradiction. Hence there is no such non-constant analytic function. Am I right? Please advise. Thanks.
|
If so then $1/f$ is bounded. Hence $1/f$ has a removable singularity at the origin, giving a bounded entire function.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to determine which of the following transformations are linear transformations?
Determine which of the following transformations are linear transformations
A. The transformation $T_1$ defined by $T_1(x_1,x_2,x_3)=(x_1,0,x_3)$
B. The transformation $T_2$ defined by $T_2(x_1,x_2)=(2x_1−3x_2,x_1+4,5x_2)$.
C. The transformation $T_3$ defined by $T_3(x_1,x_2,x_3)=(x_1,x_2,-x_3)$
D. The transformation $T_4$ defined by $T_4(x_1,x_2,x_3)=(1,x_2,x_3)$
E. The transformation $T_5$ defined by $T_5(x_1,x_2)=(4x_1−2x_2,3|x_2|)$.
I believe that it could be A and E. How can I determine this? If someone could show me one I could figure out the rest.
|
Is T in A a linear transformation?
*
*Check linearity for addition.
Suppose $T:V \rightarrow W$, where $V$ and $W$ are vector spaces over $F$. Let $x_1,x_2,x_3 \in F$ and also let $x_4,x_5,x_6 \in F$, so that $(x_1,x_2,x_3) \in V$ and $(x_4,x_5,x_6) \in V$. We now need to check that $T((x_1,x_2,x_3)) + T((x_4,x_5,x_6)) = T((x_1+x_4, x_2+x_5,x_3+x_6))$.
We have LHS $ = (x_1,0,x_3) + (x_4,0,x_6) = (x_1+x_4,0,x_3+x_6) = $RHS. Hence this holds by definition of vector addition.
*Check linearity for scalar multiplication:
Let $T$ be as above, and suppose $V$ is a vector space over a field $F$. Then let $a\in F$. We want to prove that:
$aT(x_1,x_2,x_3)=T(ax_1,ax_2,ax_3)$. Hence we have:
LHS $ = a(x_1,0,x_3) = (ax_1,0,ax_3) = $RHS. And this holds by vector scalar multiplication and by property of zero in $\mathbb{R}$.
Hence this is a linear transformation by definition. In general you need to show that these two properties hold.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
}
|
Probability of $\max_i \{X_i\} = X_0$ where $X_i$ are iid binomial We have independent Binomial random variables $X_0 \sim $ Bin$(n,p)$ and $X_1,\dots,X_M \sim $ Bin$(n,1/2)$.
Suppose $p > 1/2$. I'm interested in $\mathbb{P}(\max \{X_1,\dots,X_M\} \geq X_0)$. Is this tractable?
If not, is it tightly boundable/approximable? If this is a very difficult question, I'd accept a reference providing some insight on the problem as well.
|
$P( \max \{X_1,...,X_M \} \geq X_0)=P(X_i \geq X_0$ for some i) = $1-P( \text{each }X_i < X_0)$.
Now $P( \text{each }X_i < X_0)=\sum_{k=0}^n P(\text{each }X_i<X_0 \mid X_0=k) \cdot P(X_0 =k) = \sum_{k=0}^n P( \text{each } X_i<k)\cdot P(X_0=k) = \sum_{k=1}^n \big[ P(X_1<k) \big]^M \cdot P(X_0=k)$, using the independence of the $X_i$ from $X_0$ and from each other. Note the $k=0$ term is $0$.
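This sum is easy to evaluate with scipy (a sketch with purely illustrative parameters $n$, $p$, $M$):
from scipy.stats import binom

n, p, M = 20, 0.7, 5  # hypothetical parameters
prob_all_below = sum(binom.cdf(k - 1, n, 0.5)**M * binom.pmf(k, n, p)
                     for k in range(n + 1))  # P(X_1 < k) = cdf(k-1) with p = 1/2
print(1 - prob_all_below)  # P(max{X_1,...,X_M} >= X_0)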
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the value of $ [1/ 3] + [2/ 3] + [4/3] + [8/3] +\cdots+ [2^{100} / 3]$ Assume that [x] is the floor function. I am not able to find any patterns in the numbers obtained. Any suggestions?
$$[1/ 3] + [2/ 3] + [4/3] + [8/3] +\cdots+ [2^{100} / 3]$$
|
Just to better assess the solution given by Kitter Catter, note first that
$$
\eqalign{
& x = \left\lfloor x \right\rfloor + \left\{ x \right\} \cr
& \left\lfloor { - x} \right\rfloor = - \left\lceil x \right\rceil \cr
& \left\lceil x \right\rceil = \left\lfloor x \right\rfloor + \left\lceil {\left\{ x \right\}} \right\rceil \cr}
$$
with $\left\lfloor x \right\rfloor $ being the floor, $\left\lceil x \right\rceil $ the ceiling,
and $\left\{ x \right\}$ the fractional part;
then:
$$
\eqalign{
& \left\lfloor {{{2^{\,n + 1} } \over 3}} \right\rfloor \quad \left| {\;0 \le n} \right.\quad = \left\lfloor {2{{2^{\,n} } \over 3}} \right\rfloor = \left\lfloor {2^{\,n} - {1 \over 3}2^{\,n} } \right\rfloor = \cr
& = 2^{\,n} + \left\lfloor { - {{2^{\,n} } \over 3}} \right\rfloor = 2^{\,n} - \left\lceil {{{2^{\,n} } \over 3}} \right\rceil = 2^{\,n} - \left\lfloor {{{2^{\,n} } \over 3}} \right\rfloor - \left\lceil {\left\{ {{{2^{\,n} } \over 3}} \right\}} \right\rceil = \cr
& = 2^{\,n} - \left\lfloor {{{2^{\,n} } \over 3}} \right\rfloor - 1 \cr}
$$
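With Python's exact integers the original sum can be brute-forced, and it matches the closed form $(2^{101}-152)/3$ obtained from $2^k \bmod 3 = 1$ for even $k$ and $2$ for odd $k$ (a sketch, not part of the original answer):
total = sum(2**k // 3 for k in range(101))  # floor(2^k / 3) for k = 0, ..., 100
assert total == (2**101 - 152) // 3
print(total)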
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
}
|
Show that if A is self-adjoint and $A^{2}x=0$, then $Ax=0$. I feel like I'm overcomplicating this a bit.
Let $X$ be a finite-dimensional inner product space and $A$ be a linear transformation from $X$ to $X$. If A is self-adjoint and if $A^{2}x=0$, show that $Ax=0$.
Here's my thought:
$0=\left<A^{2}x,y\right>=\left<A(A^{*}x),y\right>=\left<A^{*}x,A^{*}y\right>=\left<Ax,A^{*}y\right>$
The first equality just came from the fact that $A^{2}x=0$. So the last equality has to be zero as well. I just don't know how to show that $Ax$ specifically must be zero, because as its written, I think $A^{*}y$ could also be zero while $Ax$ is non-zero. Thanks in advance.
|
It's useful to know that if $X$ and $Y$ are finite dimensional inner product spaces over $F$ (where $F$ is $\mathbb R$ or $\mathbb C$) and $A:X \to Y$ is a linear transformation, then $A$ and $A^* A$ have the same null space. Here's a proof: Clearly $Ax = 0 \implies A^* Ax = 0$. Conversely,
\begin{align*}
& A^* A x = 0 \\
\implies & \langle x, A^* A x \rangle = 0 \\
\implies & \langle Ax, Ax \rangle = 0 \\
\implies & \|Ax \|^2 = 0 \\
\implies & Ax = 0.
\end{align*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1833998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Alternate proof for Vieta's formula (formula for summing the roots of a polynomial) I just saw Vieta's formula for the first time, where it was stated that given some polynomial
$$p(x)=a_nx^n+\cdots+a_0,$$
let $x_1,\ldots,x_n$ denote its roots. Then $$\sum_{i=1}^n x_i=-\frac{a_{n-1}}{a_{n}}.$$
I initially tried to use an inductive argument, starting with a linear base case, and supposing that the formula held for all polynomials of degree $m < n + 1$, and trying to move from here, by factoring out a term of $x$. This failed for what are perhaps obvious reasons.
My second attempt was successful, and I think I've found what is the standard proof, namely using the fundamental theorem of algebra to get linear factors, expanding the right side of the equation, setting the terms equal, and deriving the formula from there. This basic idea can be found in detail at The art of problem solving.
I'm interested in an alternate proof. To clarify, I'm not looking for a proof that doesn't depend on the fundamental theorem of algebra in any way at all, but simply a different method of proving the result.
|
If we let $r_1, .. r_n$ be the roots of $a_nx^n + .. + a_0$.
$(a_nx^n + .. + a_0)/(x - r_1) = a_nx^{n - 1} + (r_1a_n + a_{n - 1})x^{n - 2} + (r_1^2a_n + r_1a_{n - 1} + a_{n - 2})x^{n - 3} + .. + (r^{n - 2}_1a_n + .. + a_2)x + (r_1^{n - 1}a_n + .. + a_1)$
By inductive hypothesis, we have $\sum_{k = 2}^{n} r_k = -\dfrac{r_1a_n + a_{n - 1}}{a_n}$. The result falls out for $\sum_{k = 1}^{n} r_k$.
Likewise, by inductive hypothesis, $\prod_{k = 2}^n r_k = (-1)^{n - 1}\dfrac{r_1^{n - 1}a_n + .. + a_1}{a_n}.$ Then $\prod_{k = 1}^n r_k = (-1)^{n - 1}\dfrac{r_1^na_n + .. + r_1a_1}{a_n} = (-1)^{n - 1}\dfrac{-a_0}{a_n} = (-1)^n\dfrac{a_0}{a_n}$.
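(In the final computation we used $r_1^na_n + \dots + r_1a_1 = p(r_1) - a_0 = -a_0$, which holds because $r_1$ is a root of $p$.)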
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
If $\sin(\pi \cos\theta) = \cos(\pi\sin\theta)$, then show ........ If $\sin(\pi\cos\theta) = \cos(\pi\sin\theta)$,
then show that $\sin2\theta = \pm 3/4$.
I can do it simply by equating $\pi - \pi\cos\theta$ to $\pi\sin\theta$,
but that would be technically wrong as those angles could be in different quadrants. So how to solve?
|
We first rewrite
$$
\cos(\pi/2 - \pi \cos \theta) = \cos(\pi \sin \theta)
$$
(cofunction identity).
We notice that $\cos(x)$ is a periodic function with period $2\pi$, so we need a period offset term to be sure that we find all solutions. We also need to account for the fact that $\cos(x)$ is symmetric, so:
$$
\cos(\pi/2 - \pi \cos \theta) = \cos(\pm \pi \sin \theta + 2 \pi k) \\
\pi/2 - \pi \cos \theta = \pm \pi \sin \theta + 2 \pi k
$$
Then we do basic algebra
$$
1/2 - \cos \theta = \pm \sin \theta + 2k \\
\cos \theta \pm \sin \theta = 1/2 - 2k
$$
We'll get back to this.
Trigonometric trick time!
We can do a neat little trick to all functions of the form $a \cos \theta + b \sin \theta$. We rewrite:
\begin{align*}
a \cos \theta + b \sin \theta &= \sqrt{a^2+b^2} \left(\frac{a}{\sqrt{a^2+b^2}} \cos \theta + \frac{b}{\sqrt{a^2+b^2}} \sin \theta\right) \\
&=: \sqrt{a^2+b^2} (a' \cos \theta + b' \sin \theta) \\
&=: \sqrt{a^2+b^2} (\cos \phi \cos \theta + \sin \phi \sin \theta) \\
&=: \sqrt{a^2+b^2} \cos (\theta - \phi)
\end{align*}
where we can find $\phi=\arctan(b'/a') + \pi n = \arctan(b/a) +\pi n$ (note the $\pi n$ because, like $\sin(x)$ and $\cos(x)$, $\tan(x)$ is also a periodic function, so we have to account for all possible inverse values).
End trigonometric trick
We apply the trick to get
$$
\cos \theta \pm \sin \theta = 1/2 - 2k \\
\sqrt{1^2+1^2} \cos (\theta - \arctan (\pm1/1)) = 1/2 - 2k \\
\sqrt 2 \cos (\theta \mp \pi/4) = 1/2 - 2k
$$
(You can verify that the rewritten forms do indeed evaluate to the original.)
We continue:
$$
\cos (\theta \mp \pi/4) = 1/(2\sqrt{2}) - \sqrt{2} k = \sqrt{2} (1/4 - k)
$$
Notice that $-1 \le \cos(x) \le 1$, so only $k = 0$ is valid. Hence we get
$$
\cos (\theta \mp \pi/4) = \sqrt{2}/4
$$
Next, we notice
\begin{align*}
\sin 2\theta &= \cos (\pi/2 - 2 \theta) \\
&= \cos (2 \theta - \pi/2) \\
&= \cos(2 (\theta - \pi/4)) \\
&= 2 \cos^2 (\theta - \pi/4) - 1 \\
&= 2 (\sqrt{2}/4)^2 - 1 \\
&= 1/4 - 1 \\
&= -3/4
\end{align*}
We notice the other solution
\begin{align*}
\sin 2\theta &= \cos (\pi/2 - 2 \theta) \\
&= \cos (2 \theta - \pi/2) \\
&= -\cos (2 \theta - \pi/2 + \pi) \\
&= -\cos (2 \theta + \pi/2) \\
&= -\cos(2 (\theta + \pi/4)) \\
&= -(2 \cos^2 (\theta + \pi/4) - 1) \\
&= -(2 (\sqrt{2}/4)^2 - 1) \\
&= -(1/4 - 1) \\
&= 3/4
\end{align*}
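As a quick numerical sanity check on the first branch: $\theta = \pi/4 + \arccos(\sqrt{2}/4) \approx 1.9948$ satisfies $\cos(\theta - \pi/4) = \sqrt{2}/4$, and indeed $\sin 2\theta \approx \sin(3.9896) \approx -0.750$.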
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
If $f(x) = \frac{\cos x + 5\cos 3x + \cos 5x}{\cos 6x + 6\cos4x + 15\cos2x + 10}$then.. If $f(x) = \frac{\cos x + 5\cos 3x + \cos 5x}{\cos 6x + 6\cos4x + 15\cos2x + 10}$ then find the value of $f(0) + f'(0) + f''(0)$.
I tried differentiating the given. But it is getting too long and complicated. So there must be a way to simplify $f(x)$. What is it?
|
we can simplify the fraction as $$\frac{2\cos3x\cos2x+5\cos3x}{2\cos^23x-1+6[2\cos3x\cos x]+9\cos2x+10}$$
$$=\frac{(2\cos2x+5)\cos3x}{2\cos^23x+12\cos3x\cos x+18\cos^2x}$$
$$=\frac{(2\cos2x+5)\cos3x}{2(\cos3x+3\cos x)^2}$$
$$=\frac{[2(2c^2-1)+5](4c^3-3c)}{2(4c^3)^2}$$
$$=\frac{(4c^2+3)(4c^2-3)}{32c^5}$$
$$=\frac 12\sec x-\frac{9}{32}\sec^5x$$
Now you can differentiate this twice quite easily
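For the record, a sketch of that remaining computation (worth double-checking): near $0$ we have $\sec x = 1 + \frac{x^2}{2} + O(x^4)$, hence $\sec^5 x = 1 + \frac{5x^2}{2} + O(x^4)$, so
$$f(x) = \frac12 - \frac{9}{32} + \left(\frac14 - \frac{45}{64}\right)x^2 + O(x^4) = \frac{7}{32} - \frac{29}{64}x^2 + O(x^4)$$
giving $f(0) = \frac{7}{32}$, $f'(0) = 0$, $f''(0) = -\frac{29}{32}$, and therefore $f(0) + f'(0) + f''(0) = -\frac{11}{16}$.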
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
Formula for proportion of entropy Let's say we have a probability distribution having 20 distinct outcomes. Then for that distribution the entropy is calculated to be $2.5$, while the maximal possible entropy here is of course $-\ln(\frac{1}{20}) \approx 3$.
How can I describe that $2.5$ is quite a high entropy given the number of possible outcomes? It is hard to get a feel for it just stating that it is $2.5$, my gut tells me it would be ok to simply divide the entropy by the maximum entropy possible resulting in a number between 0 and 1; in this case $\frac{2.5}{-\ln(\frac{1}{20})} \approx 0.83$. Is this a valid way of calculating it (since this is not a linear but logarithmic scale)? Has this been done before?
|
First of all, I would use log base 2 instead of natural log because it's easier to talk about its meaning as the number of yes/no questions on average to guess the value.
Given 20 choices, the maximum-entropy distribution has an entropy of $\log_2 20 \approx 4.322$ bits, while your distribution has $2.5/\ln 2 \approx 3.607$ bits, which is about 83% of the maximum possible value.
Of course you can normalize it, but I do not recommend normalizing it to be between 0 and 1 because it will lose its value. Without a context, it is not clear what this normalized entropy means.
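One remark in its favour, though: the ratio to the maximum is the same in every base, since changing the base of the logarithm rescales numerator and denominator by the same constant:
$$\frac{H_{\text{bits}}}{\log_2 20} = \frac{H_{\text{nats}}}{\ln 20} = \frac{2.5}{\ln 20} \approx 0.83.$$
So the $83\%$ figure does not depend on whether entropy is measured in bits or nats.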
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find curve parametrization I am asked to find the work of the vector field $f(x, y, z) = (x, z, 2y)$ along the curve given by the intersection of two surfaces. I have been doing a series of exercises on this, and my question has simply to do with the parametrization of the curve.
The two surfaces are:
$\{(x, y, z) \in R^3 : x = y^2 + z^2\}$ and
$\{(x, y, z) \in R^3 : x + 2y = 3\}$
Although I managed to calculate a function $g$ such that $g(\alpha) = (3-2\alpha, \alpha, \sqrt{3 - \alpha^2 - 2\alpha})$ gives me points on both those surfaces, I am pretty sure there is a nicer parametrization for proceeding to calculate the integral, involving modified polar coordinates. Even with this one, I could only find that $\alpha \le \frac{3}{2}$, leaving me wondering what the lower bound for $\alpha$ is.
|
The intersection of the two surfaces is given by the equation : $$y^2+z^2-x=3-2y-x$$
or, $$z^2+y^2+2y-3=0,\quad\text{i.e.}\quad z^2+(y+1)^2=4,$$ which is the equation of a circle (of radius $2$, centred at $(y,z)=(-1,0)$).
We have the parametrization $$z=2\cos t,\qquad y=-1+2\sin t,\qquad x=3-2y=5-4\sin t.$$
Finally: $\gamma(t)=(5-4\sin t,\,-1+2\sin t,\,2\cos t)$ for $t\in[0,2\pi]$.
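One can check this against both surfaces: $y^2+z^2 = (-1+2\sin t)^2 + 4\cos^2 t = 5 - 4\sin t = x$, and $x + 2y = (5-4\sin t) + (-2+4\sin t) = 3$, as required.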
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Trigonometric and exp limit
Evaluation of $$\lim_{x\rightarrow \frac{\pi}{2}}\frac{\sin x-(\sin x)^{\sin x}}{1-\sin x+\ln (\sin x)}$$
without using L'Hôpital's rule or series expansion.
$\bf{My\; Try::}$ I have solved it using L'Hôpital's rule and series expansion.
But I do not understand how I can solve it without using L'Hôpital's rule or series expansion.
Help required, Thanks
|
This is not a direct answer.
By the substitution $x-\frac { \pi }{ 2 } =t$: $$f\left( t \right) =\frac { \cos { t } -{ \cos { t } }^{ \cos { t } } }{ 1-\cos { t } +\ln { \cos { t } } } $$
$$\lim _{ t\rightarrow 0 }{ \frac { \sin { \left( \frac { \pi }{ 2 } +t \right) - } { \sin { \left( \frac { \pi }{ 2 } +t \right) } }^{ \sin { \left( \frac { \pi }{ 2 } +t \right) } } }{ 1-\sin { \left( \frac { \pi }{ 2 } +t \right) +\ln { \left( \sin { \left( \frac { \pi }{ 2 } +t \right) } \right) } } } } =\lim _{ t\rightarrow 0 }{ \frac { \cos { t } -{ \cos { t } }^{ \cos { t } } }{ 1-\cos { t } +\ln { \cos { t } } } } $$
Analysing the graph of the function, I think (I hope) we can get an answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Determining a basis for a space of polynomials. Let $V = \mathbb R[x]_{\le 3}$
I have the space of polynomials $U_2 = \{ p = a_0 + a_1x + a_2x^2 + a_3x^3 \in V \mid a_1 - a_2 + a_3 = 0, a_0 = a_1 \}$
I am asked to find a basis, so I proceed by noticing that in $U_2$:
$$a_0 + a_1x + a_2x^2 + a_3x^3 = a_0 (1+x-x^3) + a_2 (x^2 +x^3)$$
So I figure that a basis is $(1+x-x^3,\,x^2+x^3)$. In $\mathbb R^4$ the coordinate vectors would be $(1,1,0,-1)$ and $(0,0,1,1)$, but the solution gives the vectors $(1,1,1,0)$ and $(-1,-1,0,1)$.
What am I doing wrong?
Edit: corrected an error pointed out in the comments.
|
These are both bases. Your basis is $\{1+x-x^3, x^2+x^3\}$; the solution gives the basis $\{1+x+x^2, -1-x+x^3\}$. But each of these is expressible in terms of the other:
\begin{gather*}
1+x+x^2 = (1+x-x^3) + (x^2+x^3),\quad -1-x+x^3 = -(1+x-x^3) \\
1+x-x^3 = -(-1-x+x^3),\quad x^2+x^3 = (1+x+x^2) + (-1-x+x^3).
\end{gather*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is there a trick to drawing cubic equation graphs? This year I started drawing out cubic graphs on graph paper... My teacher has been giving a ton of practice. However, I find that it is very difficult to connect the points of the graph, especially the curves...
So, is there a trick to drawing them out on graph paper easily?
Also, is a French curve (curve ruler) easier for drawing cubic graphs?
|
It's better to locate the point of inflection first:
Say for a graph of $f(x)=ax^3+bx^2+cx+d$,
$$f''\left( -\frac{b}{3a} \right)=0$$
The graph has a rotation symmetry about the point of inflection namely
$$\left( -\frac{b}{3a}, d-\frac{bc}{3a}+\frac{2b^3}{27a^2} \right)$$
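For example, for $f(x) = x^3 - 3x^2 + 2$ (so $a=1$, $b=-3$, $c=0$, $d=2$) these formulas give the point of inflection $\left(1,\ 2 + \frac{2\cdot(-27)}{27}\right) = (1,0)$: sketch one half of the curve around that point and obtain the other half by the rotation symmetry.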
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find $y(2) $ given $y(x)$ given a separable differential equation Find what $y(2)$ equals if $y$ is a function of $x$ which satisfies:
$x y^5\cdot y'=1$ given $y=6$ when $x=1$
I got $y(2)=\sqrt{6\ln(2)-46656}$
but this answer is wrong can anyone help me figure out the right answer and how I went wrong?
|
This equation is again separable (you would benefit from learning how to solve separable equations, as they are the simplest class of differential equations to solve).
$$
\frac{dy}{dx}xy^5=1\Rightarrow y^5dy=x^{-1}dx\Rightarrow \frac{y^6}{6}=\ln(x)+c
$$
Using boundary condition $y(1)=6$ yields
$$
\frac{y^6}{6}=\ln(x)+c\Rightarrow \frac{6^6}{6}=\ln(1)+c\Rightarrow 6^5=c
$$
Then we have solution
$$
\frac{y^6}{6}=\ln(x)+6^5
$$
Now you can plug in whatever value you want.
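In particular, at $x=2$:
$$
\frac{y^6}{6}=\ln(2)+6^5 \implies y(2)=\left(6\ln(2)+6^6\right)^{1/6}=\left(6\ln(2)+46656\right)^{1/6},
$$
taking the positive sixth root so as to match $y(1)=6>0$.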
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
A 'bad' definition for the cardinality of a set My set theory notes state that the following is a 'bad' definition for the cardinality of a set $x:$ $|x|=\{y:y\approx x\}$ $(y\approx x\ \text{iff} \ \exists\ \text{a bijection}\ f:x\rightarrow y )$
The reason this is a 'bad' definition is that if $x\neq \emptyset$ then $|x|$ is a proper class, and I am asked to prove this.
For the moment, let's consider $x$ to be finite. Say $|x|=n$ as $x$ contains a finite number of elements.
But then $x\approx \{n+1,...,n+n\}$ as $|\{n+1,...,n+n\}|=n$ and we can 'keep going up' in this way such that $\bigcup On\subseteq|x|$ and since $\bigcup On = On$ and $On$ is a proper class we must have $|x|$ being a proper class as well.
This argument is certainly not rigorous, and I am unfortunately very much stumped on what to do if $x$ is not finite.
$\underline{\text{Please correct me if $\ $} \bigcup On \neq On}$
Any feedback is very much appreciated.
|
It is true that $\bigcup ON=ON$. However, it is not true that $\bigcup On\subseteq\vert x\vert$ - rather, each ordinal is an element of some element of $\vert x\vert$, that is, $ON\subseteq \bigcup \vert x\vert$. This is still enough to get a contradiction, though, by the union axiom.
If $x$ is not finite - no problem! Just consider sets of the form $\{\alpha+\beta: \beta<\kappa\}$, where $\kappa$ is the cardinality (in the usual sense) of $x$, and $\alpha$ is some fixed ordinal. The same picture works.
In fact, we can do better: show that for any set $a$, there is some $C\in \vert x\vert$ with $a\in C$. This will show that $\bigcup \vert x\vert=V$ (and it's easier to show that $V$ isn't a set than it is to show that $On$ isn't a set, although this is a very minor point).
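For instance (one way to build such a $C$): given any set $a$, if $a \in x$ take $C = x$; otherwise fix some $x_0 \in x$ (possible since $x \neq \emptyset$) and take $C = \{a\} \cup (x \setminus \{x_0\})$. In either case $C \approx x$ and $a \in C$.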
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1834902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
properties of distributions If $$\int_{-\infty}^\infty f\, dx = 1,$$ with $f > 0\ \forall x$, then prove or disprove: $$\int_{-\infty}^\infty \frac{1}{1 + f}\, dx $$ diverges.
The hint I got is to consider the measure of the set $\{x : f(x) > 1\}$. Maybe the measure is zero, thereby ensuring the divergence of the integral?
|
Suppose that $f$ is nonnegative (you say it's a distribution?). Notice that $1=\frac{1}{1+f}+\frac{f}{1+f}$. Also $0\leq \frac{f}{1+f}\leq f$ which means $\frac{f}{1+f}$ is integrable. Since the l.h.s. is not integrable, it follows that $\frac{1}{1+f}$ is not integrable.
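The hint from the question also works: by Markov's inequality the set $\{x : f(x) > 1\}$ has measure at most $\int f = 1$, so its complement has infinite measure, and on that complement $\frac{1}{1+f} \geq \frac12$; hence the integral diverges.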
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
A very curious rational fraction that converges. What is the value? Is there any closed form for the following limit?
Define the sequence
$$ \begin{cases}
a_{n+1} = b_n+2a_n + 14\\
b_{n+1} = 9b_n+ 2a_n+70
\end{cases}$$
with initial values $a_0 = b_0 = 1$. Then $\lim_{n\to\infty} \frac{a_n}{b_n} = ? $
The limit is approximately $0.1376$. My math teacher Carlos Ivorra says that this limit has a closed form involving the sine of an angle. What is the closed form for this limit?
NOTE: I have found this (and another series of converging sequences) by the use of an ancient method for calculating sines recently rediscovered. I'll give the details soon as a more general question.
|
Define
$$ L = \lim_{n \to \infty} \frac{a_n}{b_n} $$
$$ r = \lim_{n \to \infty} \frac{a_{n+1}}{b_n} $$
$$ s = \lim_{n \to \infty} \frac{b_{n+1}}{b_n} $$
then it's not hard to see that $L = r/s$. Also, by substituting in the recursion, since we have $b_n \to \infty$ we can compute
$$ r = \lim_{n \to \infty} \left(1 + 2 \frac{a_n}{b_n} + \frac{14}{b_n}\right) = 1 + 2L $$
$$ s = \ldots = 9 + 2L $$
(the reason to define $r$ and $s$ is precisely because I wanted to simplify the recursions in this fashion)
Solving the system of equations, along with $L>0$, gives
$$ L = \frac{-7 + \sqrt{57}}{4} $$
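Spelling out that last step: $L = \dfrac{r}{s} = \dfrac{1+2L}{9+2L}$ gives $2L^2 + 7L - 1 = 0$, whose positive root is the value above, $\approx 0.1375$, matching the numerical estimate in the question.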
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 2
}
|
Thinking about $ax + by = ab$ This is embarrassing because I know this is fairly simple, but I'm hitting a mental block and not having much luck with any references I'm aware of.
What properties does $ax + by = ab$ have? (a and b are integers)
I guess I'm sort of thinking...
*
*Is there always a solution? (I think so)
*How many solutions are there?
*Is there always a solution where neither $x$ nor $y$ is zero? (not sure)
Does this sort of equation look familiar to anyone else? Does it remind you of anything? I feel like it must have some obvious properties.
|
$1.$ There is always a solution: we can write the greatest common divisor as $ax+by$, hence we just need to scale that solution in order to get a solution for $ab$. (If $a, b$ are zero it is trivial to see there is a solution.)
$2.$ The number of solutions is infinite. Suppose not both of $a, b$ are zero and let $(x_0, y_0)$ satisfy $ax_0+by_0 = ab$. WLOG let $a$ be non-zero; then replacing $x_0$ by $x_0 - \frac{b}{\gcd(a, b)}$ and $y_0$ by $y_0 + \frac{a}{\gcd(a, b)}$ gives another solution, and this can be repeated indefinitely. (Again, the case where both $a, b$ are zero is trivial.)
$3.$ The only time we cannot find a solution where neither $x$ nor $y$ is zero is when exactly one of $a, b$ is zero.
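A small worked example for points $1$ and $2$: take $a=4$, $b=6$, so $\gcd(4,6) = 2 = 4\cdot(-1) + 6\cdot 1$. Scaling by $ab/\gcd(a,b) = 12$ gives the solution $(x,y) = (-12, 12)$, and the full family is $(x,y) = (-12 + 3t,\ 12 - 2t)$ for $t \in \mathbb{Z}$, since $4\cdot 3t + 6\cdot(-2t) = 0$.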
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Summation of series involving factorials. I got this question in a maths contest archive and I am completely clueless over how to start.
$$\sum_{m=0}^q(n-m){(p+m)!\over m!}= {(p+q+1)! \over q!}\left(\frac{ n}{ p+1}-\frac {q}{p+2}\right)$$
I thought of transforming $\frac{(p+m)!}{m!}$ into $p+m\choose m$ by multiplying and dividing by $p!$ but that was surely a bad idea or maybe I couldn't figure it out right.
|
For $q=0$ you have only the term $m=0$, namely $n\frac{p!}{0!}=n\, p!$, on the left side of the equation. On the right side you have $\frac{(p+1)!}{0!}\frac{n}{p+1}$. Noting that $(p+1)!=p!(p+1)$, the two sides agree.
We now prove by induction. We assume that the given equation is valid for $q$, and we want to prove that is valid for $q+1$.
$$\sum_{m=0}^{q+1}(n-m)\frac{(p+m)!}{m!}=\sum_{m=0}^{q}(n-m)\frac{(p+m)!}{m!}+(n-q-1)\frac{(p+q+1)!}{(q+1)!}$$
$$=\frac{(p+q+1)!}{q!}\left(\frac{n}{p+1}-\frac{q}{p+2}\right)+(n-q-1)\frac{(p+q+1)!}{(q+1)!}$$
$$=\frac{(p+q+1)!}{(q+1)!}\left(\frac{n(q+1)}{p+1}-\frac{q(q+1)}{p+2}+(n-q-1)\right)$$
$$=\frac{(p+q+1)!}{(q+1)!}\left(\frac{n(q+1)+n(p+1)}{p+1}-\frac{q(q+1)+(p+2)(q+1)}{p+2}\right)$$
$$=\frac{(p+q+2)!}{(q+1)!}\left(\frac{n}{p+1}-\frac{q+1}{p+2}\right)$$
In the last step we used $n(q+1)+n(p+1)=n(p+q+2)$ and $q(q+1)+(p+2)(q+1)=(q+1)(p+q+2)$, then pulled the common factor $p+q+2$ into $(p+q+1)!$.
Which is exactly what we wanted to show
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
An example of a power series that has a radius of convergence of 3 The problem states "Give an example of a power series $\sum^{\infty}_{n=0}a_{n}z^{n}$ that has a radius of convergence of 3 and that represents an analytic function having no zeroes." I'm sorry if this is a little simplistic but I really can't think of anything and am having a hard time with this.
|
Take any series you are familiar with that has a finite radius of convergence $r$ and whose sum has no zeroes. Then rescale the argument, replacing $z$ by $\dfrac{rz}{3}$. This multiplies the $n$-th coefficient by $\dfrac{r^n}{3^n}$ and yields the desired radius (the rescaling maps the disc $|z|<3$ onto $|z|<r$, so no zeroes are introduced).
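For instance, starting from the geometric series (whose sum $\frac{1}{1-z}$ has radius $1$ and no zeroes), this recipe gives
$$\frac{1}{1-z/3} = \sum_{n=0}^{\infty}\frac{z^n}{3^n},$$
which has radius of convergence $3$ and represents a zero-free analytic function on $|z|<3$.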
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Let $f = 2x^4 + 2(a - 1)x^3 + (a^2 + 3)x^2 + bx + c.$ ,Find out $a, b, c ∈ R$ and its roots knowing that all roots are real.
Let $f = 2x^4 + 2(a - 1)x^3 + (a^2 + 3)x^2 + bx + c.$
Find out $a, b, c ∈ R$ and its roots knowing that all roots are real.
The first thing that came to my mind was to use Vieta's formulas, so let the roots be $\alpha , \beta , \gamma , \delta$
$$ \alpha + \beta +\gamma+\delta=-\frac{b}{a}$$
$$ \Rightarrow \alpha + \beta +\gamma+\delta = 1-a$$
$$ \alpha\beta + \alpha\gamma + \alpha\delta + \beta\gamma+\beta\delta+\gamma\delta=\frac{c}{a}$$
$$ \Rightarrow \alpha\beta + \alpha\gamma + \alpha\delta + \beta\gamma+\beta\delta+\gamma\delta =\frac{a^2+3}{2}$$
$$ \alpha\beta\gamma + \alpha\beta\delta+\alpha\gamma\delta+\beta\gamma\delta=\frac{-d}{a}$$
$$ \Rightarrow \alpha\beta\gamma + \alpha\beta\delta+\alpha\gamma\delta+\beta\gamma\delta= \frac{-b}{2}$$
$$ \alpha\beta\delta\gamma = \frac{e}{a} $$
$$ \Rightarrow \alpha\beta\delta\gamma = \frac{c}{2} $$
But that did not get me anywhere...
Also I took the second derivative and set it equal to zero but that led me to complex roots... any ideas?
|
As you said, $f''$ has complex roots, which shows that the second derivative is always positive. This implies that $f'$ has only one real root, because it is always an increasing function. From this you can conclude that $f$ will have either two pairs of coincident roots ($p,p,q,q$), all 4 roots coincident ($p,p,p,p$), or no real roots. So we neglect the third case, as we want $f$ to have real roots.
Now
$ Case-1 \ \ $ When all the roots are real and coincident -:
Let that root be $\alpha$
From here you will get 4 equations
$$4\alpha=-(a-1)$$
$$6\alpha^2=\frac {a^2+3}{2}$$
$$4\alpha^3=-\frac{b}{2}$$
$$\alpha^4=\frac{c}{2}$$
From here you can get the values of $[a,b,c,\alpha]$ very easily. These values are $[-3,-8,2,1]$ respectively.
$ Case-2 \ \ $ When two of the roots are coincident and real -:
Let the roots be $\alpha,\beta$
Now again write 4 equations using these roots
$$2(\alpha+\beta)=-(a-1)$$
$$\alpha^2+\beta^2+4\alpha\beta=\frac {a^2+3}{2}$$
$$2(\alpha^2\beta+\alpha\beta^2)=-\frac{b}{2}$$
$$\alpha^2\beta^2=\frac{c}{2}$$
From the first two equations, you can get the value of $\alpha\beta=\frac{a^2+2a+5}{8}$, and from the first equation $\alpha+\beta=-\frac{(a-1)}{2}$
So consider a equation $\delta^2+\frac{(a-1)}{2}\delta+
\frac{a^2+2a+5}{8}=0$. Clearly, this equation has roots $\alpha,\beta$
Now the discriminant of this equation is $D=-\frac{(a+3)^2}{4}\le 0$ for all values of $a$, with equality only at $a=-3$ (which collapses back to Case 1). So $\alpha,\beta$ cannot be real and distinct for any value of $a$, and we have to reject this case.
So finally $$[a,b,c]=[-3,-8,2]$$ for roots to be real.
Hope this will help !
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why $f:R^2\to R$, with $\mbox{dom} f=R^2_+$ and $f(x_1,x_2) =x_1x_2$ is quasiconcave? Why $f:R^2\to R$, with $\mbox{dom} f=R^2_+$ and $f(x_1,x_2) =x_1x_2$ is quasiconcave?
I have tried to use the Jensen inequality to check that the superlevel set $\{x\in R^2_+ \mid x_1x_2 \ge \alpha\}$ is convex.
$$\begin{align*}(\theta x_1 + (1-\theta) x_3)(\theta x_2 + (1-\theta) x_4) \\= \theta^2 x_1x_2 + (1-\theta)^2x_3x_4+\theta(1-\theta)x_1x_4+\theta(1-\theta)x_2x_3\end{align*}$$
this must be greater or equal to
$$\alpha$$
But here i don't know how to prove it.
|
Let $(x_1,y_1),\, (x_2,y_2) \in \{(x,y)\in\mathbb{R}_+^2 : xy \ge \alpha\}$. Then $y_1 \ge {\alpha \over x_1}$ and $y_2 \ge {\alpha \over x_2}$. Since ${1\over x}$ is convex, $${\alpha \over \theta x_1 + (1-\theta)x_2} \le \theta {\alpha \over x_1} + (1-\theta){\alpha \over x_2}.$$ Then $${\alpha \over \theta x_1 + (1-\theta)x_2} \le \theta y_1 + (1-\theta)y_2 \implies \alpha \le (\theta x_1 + (1-\theta)x_2)\cdot(\theta y_1 + (1-\theta)y_2),$$ and that means that $\{(x,y)\in\mathbb{R}_+^2 : xy\ge \alpha\}$ is convex.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\int\limits_a^b |f(t)|dt \leq (b-a)\int\limits_a^b|f'(t)|dt$
Let $f:[a,b]\to\mathbb{R}$ be continuously differentiable. Suppose $f(a) = 0$. Show that
$$
\int\limits_a^b|f(t)|dt \leq (b-a)\int\limits_a^b|f'(t)|dt
$$
By the mean value theorem, for every $t\in[a,b]$, there exists $c\in[a,t]$ such that
$$
\frac{f(t) - f(a)}{t-a} = f'(c) \Rightarrow f(t) = (t-a)f'(c) \leq (b-a)f'(c)
$$
So basically, I have that $f(t) \leq (b-a)f'(c)$. But $c$ depends on $t$, so I don't know how to proceed from here.
|
I think the mean value theorem is a no-go here. If you start with (just the fundamental theorem of calculus)
$$
f(t)=f(a)+\int_a^t f'(s)\,ds=\int_a^t f'(s)\,ds,
$$
you find with the triangle inequality that
$$
|f(t)|\leq \int_a^t |f'(s)|\,ds.
$$
Next, integrate this inequality from $a$ to $b$:
$$
\int_a^b |f(t)|\,dt\leq \int_a^b\int_a^t |f'(s)|\,ds\,dt.
$$
In the final integral, change order of integration and estimate $(b-s)\leq(b-a)$. I leave that part to you.
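(For the record, that step reads
$$\int_a^b\int_a^t |f'(s)|\,ds\,dt = \int_a^b (b-s)\,|f'(s)|\,ds \leq (b-a)\int_a^b |f'(s)|\,ds,$$
since the region of integration is $a \le s \le t \le b$.)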
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Finding axis of a cylinder I have to find the axis of a cylinder whose axis passes through the origin and such that the points $A(-5,6,-4),B(-4,-1,2),C(-1,2,4)$ lie on its lateral surface.
Now I know that the points $A,B,C$ have the same distance to the axis, but I don't know how I could find this distance (the radius).
I would really appreciate any help.
|
So, the axis passes through the origin. Place a unit vector there (two parameters to fix) and impose that its cross-products with the position vectors of $A,B,C$ have the same modulus, which will be the distance $r$.
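Concretely (a sketch): if $\hat u$ is a unit vector along the axis, the distance from a point $P$ to the axis is $\lVert \vec{OP} \times \hat u \rVert$, so the conditions are
$$\lVert \vec{OA} \times \hat u \rVert = \lVert \vec{OB} \times \hat u \rVert = \lVert \vec{OC} \times \hat u \rVert = r,$$
two equations for the two free parameters of $\hat u$, after which $r$ can be read off.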
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving $\sum\limits_{k=0}^n \sum\limits_{j=0}^{n-k} \frac{(k-1)^2}{k!} \frac{(-1)^j}{j!} =1$ without character theory
Let $n \geq 2$ be an integer. I would like to prove the following identity in an easy way:
$$\sum\limits_{k=0}^n \left( \frac{(k-1)^2}{k!} \sum\limits_{j=0}^{n-k} \frac{(-1)^j}{j!} \right)=1$$
You can see on WolframAlpha, for $n=3$, that this holds.
Here is a nice proof I've found. However, it seems awfully sophisticated...
Consider the representation $S_n \to \text{GL}(V)$ where $V=\text{span}_{\Bbb C}(e_1-e_2, \dots, e_1-e_n)$ and $\sigma \cdot (e_1-e_i) := e_{\sigma(1)}- e_{\sigma(i)}\,$.
It is well-known that this is an irrep: we can show that if $0\neq u \in V$, then $\text{span}_{\Bbb C}(\{\sigma \cdot u \mid \sigma \in S_n\}) = V$.
In particular, the scalar product of character is
$\langle \chi_{V}, \chi_V \rangle=1$.
But $\chi_V = \chi_{S_n} - \chi_1$ where $\chi_{S_n} $ is the permutation character of $S_n$ and $\chi_{1}$ the trivial character.
$\newcommand{\supp}{\text{supp}}$
Therefore, if $\supp(\sigma)$ denotes the support of a permutation $\sigma$, we have:
$$ \begin{align}
\langle \chi_{V}, \chi_V \rangle &=
\frac{1}{n!} \sum\limits_{\sigma \in S_n} |\chi_{S_n}(\sigma)-1|^2\\&=
\frac{1}{n!} \sum\limits_{\sigma \in S_n}
\left|\text{card}(\{1, \dots, n\} \setminus \supp(\sigma)) \;-\; 1\right|^2\\&=
\frac{1}{n!} \sum\limits_{m=0}^n\; \sum\limits_{\substack{\sigma \in S_n\\ |\supp(\sigma)|=m}}
\left|n-m \;-\; 1\right|^2\\&\stackrel{(*)}{=}
\frac{1}{n!} \sum\limits_{m=0}^n (n-m- 1)^2 {n \choose m} m! \sum\limits_{j=0}^m \frac{(-1)^j}{j!} \\&=
\sum\limits_{m=0}^n (n-m- 1)^2\frac{1}{(n-m)!} \sum\limits_{j=0}^m \frac{(-1)^j}{j!} \\&=
\sum\limits_{k=0}^n \left( \frac{(k-1)^2}{k!} \sum\limits_{j=0}^{n-k} \frac{(-1)^j}{j!} \right)=1
\end{align} $$
The equality $(*)$ holds because to choose $\sigma \in S_n$ with support of cardinality $m$, I choose $m$ numbers out of $n$, and then there are $m!\sum\limits_{j=0}^{m} \frac{(-1)^j}{j!}$ derangements of these $m$ elements.
The following posts are close to my identity, but different: (1), (2). This one only proves $\sum\limits_{k=0}^n \left( \frac{1}{k!} \sum\limits_{j=0}^{n-k} \frac{(-1)^j}{j!} \right)=1$.
In all cases, I would like to see some easy proofs of my identity.
Thank you for your comments!
|
We may start from:
$$ \sum_{k\geq 0} \frac{(k-1)^2}{k!}x^k = (1-x+x^2)\,e^{x}\tag{1} $$
$$ \sum_{k\geq 0}\left(\sum_{j=0}^{k}\frac{(-1)^j}{j!}\right) x^k = \frac{e^{-x}}{1-x}\tag{2} $$
then notice that the original sum is just the coefficient of $x^n$ in the product between the RHSs of $(1)$ and $(2)$, i.e.
$$ [x^n]\left(\frac{1-x+x^2}{1-x}\right)=[x^n]\left(-x+\color{red}{\frac{1}{1-x}}\right)\tag{3}$$
Now the claim is pretty trivial: the original sum equals $1$ for every $n\in\mathbb{N}$, except for $n=1$ where it equals zero.
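For $(1)$, write $(k-1)^2 = k(k-1) - k + 1$ and use $\sum_{k\geq 0}\frac{k(k-1)}{k!}x^k = x^2e^x$, $\sum_{k\geq 0}\frac{k}{k!}x^k = xe^x$, $\sum_{k\geq 0}\frac{x^k}{k!} = e^x$; identity $(2)$ is the Cauchy product of $e^{-x}$ with the geometric series $\frac{1}{1-x}$.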
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1835965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
How can I solve this inequality? Have a nice day, how can I solve this inequality?
$$a<b<-1$$
$$ |ax - b| \le |bx-a|$$
What is the solution set for this inequality?
|
There may be more efficient ways to do these but for clarity I like to break absolute values into cases:
Case 1: $ax -b \ge 0$ and $bx -a \ge 0$.
[This implies $ax \ge b\implies x \le b/a$ and likewise $x \le a/b$ so $x \le \min(a/b,b/a) = b/a < 1$. Let's keep in mind $b/a < 1 < a/b$]
Then $|ax - b| \le |bx -a| \implies ax - b \le bx - a$
So $(a-b)x \le (b-a)$; $a < b$ so $(a - b) < 0$ so
$x \ge (b-a)/(a-b) = -1$
So $-1 \le x \le b/a < 1$. OR
Case 2: $ax -b <0$ and $bx-a < 0$.
[This implies $ax < b\implies x > b/a$ and $x > a/b$ so $x > \max(a/b,b/a) =a/b > 1$]
then $b - ax \le a - bx$ so
$(b - a) \le (a - b)x$ so
$(b-a)/(a-b) \ge x$ so $x \le -1$ which is a contradiction.
OR
Case 3: $ax -b < 0$ and $bx - a \ge 0$
[which implies $ax < b\implies x > b/a$ and $bx \ge a\implies x \le a/b$. So $b/a < x \le a/b$. Remember $b/a < 1 < a/b$]
So $b - ax \le bx - a$ so $(b+a) \le (a + b)x$. Now (a+b) < 0. So
$(b+a)/(a+b) = 1 \ge x$
So $b/a < x \le 1$.
OR
Case 4: $ax - b \ge 0$ and $bx - a < 0$
[which implies $ax \ge b \implies x\le b/a$ and $x > a/b$ so $a/b < x \le b/a$ which is a contradiction.]
So we know $-1 \le x \le b/a$ OR $b/a < x \le 1$, so combining the two results we know $-1 \le x \le 1$.
But there are probably more efficient ways to do this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove that the number: $z = \det(A+B) \det(\overline A-\overline B)$ is purely imaginary. Problem: Let $A_{n\times n}$ and $B_{n\times n}$ be complex unitary matrices, where n is an odd number. Prove that the number:
$$z=\det(A+B) \det(\overline A-\overline B)$$ is purely imaginary.
My idea:
We have that $AA^*=I$ and $BB^*=I$.
We also have that $\det A=\det A^*=(-1)^n \det A^*=-\det A$. Same for matrix $B$.
Am I free to say $A$ and $B$ are skew-symmetric matrices?
$$z=\det(A+B)\,\overline {\det(A-B)}$$
$$z=\det\big((A+B)(A-B)^*\big)$$
$$z=\det\big((A+B)(A^*-B^*)\big)$$
$$z=\det\big(AA^*-AB^*+BA^*-BB^*\big)$$
$$z=\det\big(I-AB^*+BA^*-I\big)$$
$$z=\det\big(BA^*-AB^*\big)$$
This is where I am unsure of how to proceed. Am I free to say since matrices are unitary and skew-symmetric, their eigenvalues are purely imaginary thus $z$ must be as well? Is this statement correct?
Thank you all in advance.
|
Proceeding along the lines given in the hint by @Tsermo, we find that $$ z = \det(BA^{*} -AB^{*}) =\prod_{i=1}^{n} e_i $$ where the $e_i$ are the eigenvalues of $BA^{*}-AB^{*}$, which are purely imaginary (as the matrix $BA^{*}-AB^{*}$ is skew-Hermitian). Since $n$ is odd, the result follows.
By the way, how do you say that $A$ and $B$ are skew-symmetric? In fact $\det (A) = \frac{1}{\det(A^{*})}$.
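In a bit more detail: writing $e_j = it_j$ with $t_j \in \mathbb{R}$ (possible precisely because the matrix is skew-Hermitian),
$$z = \prod_{j=1}^n e_j = i^n \prod_{j=1}^n t_j = \pm\, i \prod_{j=1}^n t_j$$
for odd $n$, which is purely imaginary (or zero).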
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Differentiability of piecewise functions Check whether the function is differentiable:
$$f:\mathbb{R}^2\rightarrow \mathbb{R}$$
$$f= \begin{cases}
\frac{x^3-y^3}{x^2+y^2} & (x,y)\neq (0,0) \\
0 & (x,y) = (0,0) \\
\end{cases}
$$
So what I did is I calculated the partial derivatives of the function in point $(0,0)$. I got:
$$\frac{∂f}{∂x}\left(0,0\right)=lim_{t\rightarrow 0}\left(\frac{f\left(t,0\right)-f\left(0,0\right)}{t}\right)=lim_{t\rightarrow 0}\left(\frac{t^3}{t^3}\right)=1$$and
$$\frac{∂f}{∂y}\left(0,0\right)=lim_{t\rightarrow 0}\left(\frac{f\left(0,t\right)-f\left(0,0\right)}{t}\right)=lim_{t\rightarrow 0}\left(\frac{-t^3}{t^3}\right)=-1$$
And since the answers I got are not equal, that means the function isn't partially derivable in point $(0,0)$ so it isn't differentiable either?
I'm not sure whether what I did was right, differentiability is still a little unclear to me, for multivariable functions. I also asked about it here Differentiability of function definition but have yet to get an answer. Can someone tell me if I'm on the right track at least?
|
This is wrong. Being partially differentiable means that the partial derivatives exist, and you have shown this by showing the limits exist. The partial derivatives need not coincide! To show that $f$ is differentiable, a sufficient condition is that the partial derivatives exist and are continuous. To show that $f$ is not differentiable, it suffices to show that the partial derivatives do not exist.
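For this particular $f$ the partial derivatives at the origin do exist (you computed them: $1$ and $-1$), so one has to check the definition directly. A sketch, worth verifying: the only candidate for the differential is $L(h,k) = h - k$, and along the line $k = -h$ we have $f(h,-h) = h$, so
$$\frac{f(h,-h) - L(h,-h)}{\lVert(h,-h)\rVert} = \frac{h - 2h}{\sqrt{2}\,|h|} = \mp\frac{1}{\sqrt{2}} \not\to 0,$$
hence $f$ is not differentiable at $(0,0)$.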
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
I want to show that $\int_{-\infty}^{\infty}{\left(x^2-x+\pi\over x^4-x^2+1\right)^2}dx=\pi+\pi^2+\pi^3$ I want to show that
$$\int_{-\infty}^{\infty}{\left(x^2-x+\pi\over x^4-x^2+1\right)^2}dx=\pi+\pi^2+\pi^3$$
Expand $(x^2-x+\pi)^2=x^4-2x^3+(1+2\pi)x^2-2\pi x+\pi^2$
Let see (substitution of $y=x^2$)
$$\int_{-\infty}^{\infty}{x\over (x^4-x^2+1)^2}dx={1\over 2}\int_{-\infty}^{\infty}{1\over (y^2-y+1)^2}dy$$
Substituion of $y=x^3$
$$\int_{-\infty}^{\infty}{x^3\over (x^4-x^2+1)^2}dx={1\over 4}\int_{-\infty}^{\infty}{1\over (y^2-y+1)^2}dy$$
As for $\int_{-\infty}^{\infty}{x^2\over (x^4-x^2+1)^2}dx$ and $\int_{-\infty}^{\infty}{x^4\over (x^4-x^2+1)^2}dx$, it is difficult to find a suitable substitution. This is the point where I am stuck: I cannot find a substitution leading to a particular standard integral. I need some help, thanks.
standard integral of the form
$$\int{1\over (ax^2+bx+c)^2}dx={2ax+b\over (4ac-b^2)(ax^2+bx+c)}+{2a\over 4ac-b^2}\int{1\over ax^2+bx+c}dx$$
And
$$\int{1\over ax^2+bx+c}dx={2\over \sqrt{4ac-b^2}}\tan^{-1}{2ax+b\over \sqrt{4ac-b^2}}$$
|
Elaborating user @Dr. MV's answer, we have
\begin{equation}
\int_0^\infty\frac{1}{a^2x^4+bx^2+c^2}\ dx=\frac{\pi}{2c\sqrt{b+2ac}}
\end{equation}
Putting $a=1$, $b=a$, and $c^2=b$, then
\begin{equation}
I(a,b)=\int_0^\infty\frac{1}{x^4+ax^2+b}\ dx=\frac{\pi}{2\sqrt{b\left(a+2\sqrt{b}\right)}}
\end{equation}
Hence
\begin{equation}
-\frac{\partial}{\partial a}I(a,b)=\int_0^\infty\frac{x^2}{\left(x^4+ax^2+b\right)^2}\ dx=\frac{\pi}{4}\sqrt{\frac{1}{b\left(a+2\sqrt{b}\right)^3}}
\end{equation}
and
\begin{equation}
-\frac{\partial}{\partial b}I(a,b)=\int_0^\infty\frac{1}{\left(x^4+ax^2+b\right)^2}\ dx=\frac{\pi\left(a+3\sqrt{b}\right)}{4\sqrt{b^3\left(a+2\sqrt{b}\right)^3}}
\end{equation}
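Applying these with $a=-1$, $b=1$ (so $\sqrt{b}=1$ and $a+2\sqrt{b}=1$) gives
$$\int_0^\infty\frac{dx}{(x^4-x^2+1)^2}=\frac{\pi}{2},\qquad \int_0^\infty\frac{x^2\,dx}{(x^4-x^2+1)^2}=\frac{\pi}{4},$$
and, writing $x^4=(x^4-x^2+1)+x^2-1$,
$$\int_0^\infty\frac{x^4\,dx}{(x^4-x^2+1)^2}=I(-1,1)+\frac{\pi}{4}-\frac{\pi}{2}=\frac{\pi}{4}.$$
Since the odd powers in $(x^2-x+\pi)^2=x^4-2x^3+(1+2\pi)x^2-2\pi x+\pi^2$ integrate to zero over $\mathbb{R}$, the original integral equals
$$2\left[\frac{\pi}{4}+(1+2\pi)\frac{\pi}{4}+\pi^2\cdot\frac{\pi}{2}\right]=\pi+\pi^2+\pi^3.$$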
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 3
}
|
How tounderstand the proof to show if a real number $w>0$ and a real number $b>1$, $b^w>1$? Claim: If there are two real numbers $w>0$ and $b>1$, then $b^w>1$.
One proof goes as follows: for any rational number $0<r<w$ with $r=m/n$, $m,n\in\mathbb Z$ and $n\ne0$, show $(b^m)^{1/n}>1$. To show $b^m>1$ you can use induction, and then use $1^n=1<b^m$ to finally show that $b^r>1$. So the proof is done.
I do not think this proof is complete. If $w$ is a rational number, everything is fine. But if $w$ is an irrational number, I think the proof is incomplete, and I am not sure how to proceed.
===========
Please do not use the continuity or more advanced techniques. Assume you just finished the first chapter of Rudin's Principles of Mathematical Analysis.
|
If $b > 1$, then $b = 1+c$ where $c > 0$, so
$$b^n = (1+c)^n \ge 1+nc \gt 1$$
by Bernoulli's inequality.
If $b^{n/m} = a$, then $b^n = a^m$. Since $b^n > 1$, we get $a^m > 1$, so that $a > 1$.
Then apply the definition
$$b^x = \sup_{r \le x,\ r \in \mathbb{Q}} b^r.$$
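To finish the original claim: given $w>0$, pick any rational $r$ with $0 < r \le w$; the first two parts give $b^r > 1$, and the supremum definition then gives $b^w \ge b^r > 1$.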
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Additivity of trace Let $A$ be a finitely generated abelian group and $\alpha:A\to A$ be an endomorphism. Since $A=A_{free}\oplus A_{torsion}$, we can induce $\bar \alpha:A_{free}\to A_{free}$, i.e. $\bar\alpha$ is a map from $\oplus\mathbb Z$ to itself. Write $\bar\alpha$ as a matrix form and define $tr(\alpha)=tr(\bar\alpha)$ as the trace of the matrix.
Assume we have short exact sequence of finitely generated abelian groups $A,B,C$
and endomorphisms $\alpha,\beta,\gamma$ where the following diagram commutes.
$$\begin{array}{c} 0 & \to & A & \to & B & \to & C & \to & 0 \\ & & \!\downarrow \alpha && \!\downarrow\beta && \!\downarrow\gamma & \\ 0 & \to & A & \to & B & \to & C & \to & 0\end{array}$$
Prove $tr(\beta)=tr(\alpha)+tr(\gamma)$.
|
In the case of free abelian groups, the sequence $$ 0 \to A \to B \to C \to 0 $$
splits and we obtain $ B \cong A \oplus C $.
Choosing bases of $A$ and $C$ yields a basis of $B$ and we see (implicitly using above isomorphism)
$$ \operatorname{tr} (\beta) = \operatorname{tr}(\beta | A) + \operatorname{tr}(\beta | C)$$ as only diagonal entries are relevant for the trace.
However, $\operatorname{tr}(\alpha) = \operatorname{tr}(\beta|A)$ and $\operatorname{tr}(\gamma) = \operatorname{tr}(\beta | C)$ as the squares in the diagram commute.
For extending to the general case, tensor the sequence with $\mathbb{Q}$: since $\mathbb{Q}$ is flat, $0 \to A\otimes\mathbb{Q} \to B\otimes\mathbb{Q} \to C\otimes\mathbb{Q} \to 0$ is still exact, the trace defined above equals the trace of the induced endomorphism on the tensored vector space, and the split argument applies verbatim.
A better understanding of trace in this context may be the coordinate-free definition: https://en.wikipedia.org/wiki/Trace_(linear_algebra)#Coordinate-free_definition
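As a sanity check on the formula, take $0 \to \mathbb{Z} \xrightarrow{\,2\,} \mathbb{Z} \to \mathbb{Z}/2 \to 0$ with $\alpha$ and $\beta$ both multiplication by some integer $m$ and $\gamma$ the induced map: then $\operatorname{tr}(\beta) = m$, $\operatorname{tr}(\alpha) = m$, and $\operatorname{tr}(\gamma) = 0$ since $\mathbb{Z}/2$ has trivial free part, so indeed $m = m + 0$.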
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Finding the limit of a Matrices determinant The problem is as follows:
I've been trying to figure this out with no luck. I'm lost at the $A_{k+1}$ and $A_0$: I'm not sure what they mean and how they would apply in finding the limit.
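(Note: the problem statement was an image that did not survive extraction; judging from the answer below, it appears to set $A_0 = \left[\begin{smallmatrix}1 & 1\\ 1 & 2\end{smallmatrix}\right]$, define $A_{k+1} = \frac{1}{2}\left(A_k + A_k^{-1}\right)$, and ask for $\lim_{n\to\infty}\det\left(A_3^{-1}\right)^n$.)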
|
Note that it can be easily shown that
$$
A^{-1} \;\; =\;\; \left [ \begin{array}{cc}
2 & -1\\
-1 & 1 \\
\end{array} \right ].
$$
This implies that
$$
A_1 \;\; =\;\; \left [ \begin{array}{cc}
3/2 & 0 \\
0 & 3/2 \\
\end{array} \right ]
$$
Implying that
$$
A_2 \;\; =\;\; \left [ \begin{array}{cc}
3/4 & 0 \\
0 & 3/4 \\
\end{array} \right ] + \left [ \begin{array}{cc}
1/3 & 0 \\
0 & 1/3 \\
\end{array} \right ] \;\; =\;\; \left [ \begin{array}{cc}
13/12 & 0 \\
0 & 13/12 \\
\end{array} \right ]
$$
Implying that
$$
A_3 \;\; =\;\; \left [ \begin{array}{cc}
13/24 & 0 \\
0 & 13/24 \\
\end{array} \right ] + \left [ \begin{array}{cc}
6/13 & 0 \\
0 & 6/13 \\
\end{array} \right ] \;\; =\;\; \left [ \begin{array}{cc}
313/312 & 0 \\
0 & 313/312 \\
\end{array} \right ].
$$
Therefore we have $A_3^{-1} = \frac{312}{313}I$, and thus $\det(A_3^{-1})^n = \left (\frac{312}{313}\right )^{2n}$ which tends to zero as $n\to \infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1836624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|