| Q (string, lengths 18 to 13.7k) | A (string, lengths 1 to 16.1k) | meta (dict) |
|---|---|---|
Evaluate $\int_0^x \frac{\sin\left(\left(\frac{1}{2}-n\right) t\right)}{\sin\frac{t}{2}}\,dt$. Evaluate
$$\int_0^x \frac{\sin\left(\left(\frac{1}{2}-n\right) t\right)}{\sin\frac{t}{2}}\,dt$$
for $x\in(0,2\pi)$ and $n\in\mathbb Z$. From trigonometric formulas, we have:
$$\int_0 ^x\cos nt\, dt-\int_0 ^x \frac{\cos\frac{t}{2}}{\sin\frac{t}{2}}\, \sin nt\,dt$$
What can we say for the second integral? I was not able to solve it.
|
Only a partial answer for $n\in\mathbb{N}$, but maybe helpful nonetheless. Call $f_n(x)=\int_0^x dt \sin(nt)\cot(t/2)$ and consider the generating function
$$
F(z,x)=\sum_{n=0}^\infty f_n(x)z^n=\int_0^x dt\cot(t/2)\frac{\mathrm{i} \left(-1+e^{2 \mathrm{i} t}\right) z}{2 \left(-z+e^{\mathrm{i} t}\right) \left(-1+e^{\mathrm{i} t} z\right)}=
$$
$$
=-\frac{z}{2}\int_0^x dt\ \frac{\left(1+e^{\mathrm{i} t}\right)^2 }{\left(-z+e^{\mathrm{i} t}\right) \left(-1+e^{\mathrm{i} t} z\right)}\ ,
$$
using Euler's representation of the cotangent. The real part can be computed with the help of Mathematica, yielding eventually (for $0<x<2\pi$ and $|z|<1$)
$$
F(z,x)=-\frac{(x+\pi ) z+2 (z+1) \arctan\left(\frac{(z-1) \cot \left(\frac{x}{2}\right)}{z+1}\right)-x+\pi }{2 (z-1)}\ .
$$
[This agrees with numerics]. The generating function can be represented as
$$
F(z,x)=\varphi\left(\frac{z-1}{z+1};x\right)+\frac{x}{z-1}\ ,
$$
where
$$
\varphi(\xi,x)=-\frac{1}{\xi}\arctan(\xi\cot(x/2))-\frac{x+\pi}{2\xi}\ .
$$
With any symbolic software, you can obtain a closed form solution for any $n$ by simply Taylor-expanding this result around $z=0$. For example:
$$
f_3(x)=x+2 \sin (x)+\sin (2 x)+\frac{1}{3} \sin (3 x)\ ,
$$
and based on some experiments, one might conjecture a general form as
$$
f_n(x)=x+\sum_{k=1}^n \frac{n_k}{m_k}\sin(k x)\ ,
$$
with $n_k$ and $m_k$ small integers. A closed-form solution might be obtained after some painstaking work on the generating function (using also the great simplification that $\arctan(\cot(x/2))=(\pi-x)/2$). But that's not for me tonight.
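A minimal sympy sketch of the Taylor-expansion step (an addition to this answer, assuming the expression for $F$ above is entered verbatim):

```python
# Sketch: recover f_n(x) as the coefficient of z^n in F(z, x).
import sympy as sp

z, x = sp.symbols('z x')
F = -(((x + sp.pi)*z
       + 2*(z + 1)*sp.atan((z - 1)*sp.cot(x/2)/(z + 1))
       - x + sp.pi) / (2*(z - 1)))

f3 = sp.series(F, z, 0, 4).removeO().coeff(z, 3)
# Simplifying may still require arctan(cot(x/2)) = (pi - x)/2 (valid for
# 0 < x < 2*pi) by hand; the result should then reduce to
# x + 2*sin(x) + sin(2*x) + sin(3*x)/3, as in the example above.
print(sp.simplify(f3))
```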
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Field with four elements If $F=\{0,1,a,b\}$ is a field (where the four elements are distinct), then:
1. What is the characteristic of $F$?
2. Write $b$ in terms of the other elements.
3. What are the multiplication and addition tables of these operations?
|
The addition: $\;a+b=0\implies a=-b=b\;$ (since $\;\text{char}\, F=2\;$), which is impossible as the elements are distinct. It also can't be $\;a+b=a\;$ or $\;a+b=b\;$, else $\;b=0\;$ or $\;a=0\;$. Thus it must be $$\;a+b=1\implies b=1-a=1+a\;$$
Generalize the above and get the addition table.
As for multiplication, $\;ab\neq 0,a,b\;$; else $\;a=0\;$ or $\;b=0\;$, or $\;b=1\;$ or $\;a=1\;$, and none of this is true. Thus it must be $\;ab=1\iff b=a^{-1}\;$. Generalize now for the table.
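For reference, the completed tables the hint leads to (the standard Cayley tables of the field with four elements, written with $b=1+a$):
$$
\begin{array}{c|cccc}
+ & 0 & 1 & a & b\\\hline
0 & 0 & 1 & a & b\\
1 & 1 & 0 & b & a\\
a & a & b & 0 & 1\\
b & b & a & 1 & 0
\end{array}
\qquad
\begin{array}{c|cccc}
\cdot & 0 & 1 & a & b\\\hline
0 & 0 & 0 & 0 & 0\\
1 & 0 & 1 & a & b\\
a & 0 & a & b & 1\\
b & 0 & b & 1 & a
\end{array}
$$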
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Is the space $R^2$ for a ring $R=\mathbb{Z}/p\mathbb{Z}$ the sum of two invariant lines? I didn't do a very good job of summarising the problem in the title, sorry, but here is the full question:
Let us define, for every ring $R$, the set
$S(R)=\{\rho : \mathbb{Z}/2\mathbb{Z} \to GL(2,R) \mid \rho (0+2\mathbb{Z}) \ne \rho (1+2\mathbb{Z}) \}$
of representations of the group $\mathbb{Z}/2\mathbb{Z}$ on $R^2$, where $R^\times$ is the subset of elements of $R$ with a multiplicative inverse, and $GL(2, R)$ is the group of $2 \times 2$ matrices with entries in $R$ which are invertible and whose inverses have coefficients in $R$ as well.
Consider the following property:
$(P(\rho,R))$ The space $R^2$ is a sum of two invariant lines, that is:
$\exists v, w \in R^2$ such that $\forall g \in \mathbb{Z}/2\mathbb{Z}, \rho(g)(R\cdot v)=R \cdot v $ and $\rho(g)(R\cdot w)=R \cdot w$ and that $R \cdot v + R \cdot w = R^2.$
For every $R$ of the form $\mathbb{Z}/p\mathbb{Z}$ with $p$ a prime, does property $(P(\rho,R))$ hold for all $\rho$, some $\rho$ or no $\rho$ in $S(R)$?
I am struggling to make progress with this but I have started by looking at the problem when $p=3$, and letting $\rho(0)=\begin{bmatrix}1 & 0\\1 & 1\end{bmatrix}$, $\rho(1)=\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}$, $v=(v_1,v_2)$ and $w=(w_1,w_2)$. I am not sure how a matrix multiplies a ring so I am not sure how to proceed with the following;
$\begin{bmatrix}1 & 0\\1 & 1\end{bmatrix} R \begin{bmatrix}1 & 0\\1 & 1\end{bmatrix}(v_1,v_2)$
I would greatly appreciate any help.
|
The answer is all if $p\neq 2$.
Consider the representation $\rho(0)=\pmatrix{1& 0\\0& 1}$ and $\rho(1)=\pmatrix{-1& -1\\ 0& 1}$
Then the lines $L_1=\langle\pmatrix{-1\\ 2}\rangle$ and $L_2=\langle\pmatrix{1\\ 0}\rangle$ (these are basically the eigenvectors of $\rho(1)$ with eigenvalues $1$ and $-1$ respectively) are both invariant under both $\rho(0)$ and $\rho(1)$. The only problem is when $R=\mathbb Z/2\mathbb Z$, because both eigenvectors are the same in this ring.
To solve the problem completely, notice that $\rho$ is determined by the order-two element of $GL_2(\mathbb Z/p \mathbb Z )$ that $1$ is mapped to. You need to find all order-two elements of $GL_2(\mathbb Z/p \mathbb Z )$, find their eigenvectors, and show that there are two linearly independent ones. If $\rho(1)=-I$, every line is invariant and we are done; otherwise the minimal polynomial of $\rho(1)$ is exactly $x^2-1$, and since the characteristic polynomial is monic of degree two and divisible by the minimal polynomial, it must be $x^2-1$ itself. This polynomial has identical roots if and only if $1=-1$, which only happens if $R=\mathbb Z/2\mathbb Z$. So if $p\neq 2$, the matrix has distinct eigenvalues and hence two linearly independent eigenvectors whose spans will yield $L_1$ and $L_2$ as above.
If $R=\mathbb Z/2\mathbb Z$, there are only three non-identity elements of order two in $GL_2(\mathbb Z/2\mathbb Z)$. Can you find out which these are and check their eigenvectors to finish?
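A small brute-force sketch (an addition, with hypothetical helper names) that enumerates the three order-two elements of $GL_2(\mathbb Z/2\mathbb Z)$ mentioned above:

```python
# Enumerate the order-two elements of GL_2(Z/2Z) by brute force.
from itertools import product

def mat_mul(A, B, p=2):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)
        if (a*d - b*c) % 2 == 1]          # invertible over Z/2Z
order2 = [A for A in mats if A != I and mat_mul(A, A) == I]
print(len(order2), order2)                # 3 such matrices
```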
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Number of distinct powers of $\cos(\alpha \pi) + i\sin(\alpha \pi)$ How many distinct powers of $\cos(\alpha \pi) + i\sin(\alpha \pi)$ are there if $\alpha$ is rational? Irrational?
Here's my thoughts. The answer is probably some finite number based on $\alpha$ if $\alpha$ is rational and infinitely many if $\alpha$ is irrational. So I'd like to confirm (or disprove) that guess.
First we start with the formula $$(\cos(\alpha \pi) + i\sin(\alpha \pi))^n = \cos(n\alpha \pi) + i\sin(n\alpha \pi)$$
If $\alpha$ is rational, then I can see that some of the powers would be the same. Let $\alpha = \frac pq$, then we'd get $\cos(n\alpha \pi) + i\sin(n\alpha \pi) = \cos(m\alpha \pi) + i\sin(m\alpha \pi) \iff n\alpha \pi = m\alpha \pi + 2\pi k$. This implies that $$(n-m)p=2kq$$
So whenever that condition is satisfied, then the $n$th power and $m$th power will be the same. But I don't see how to count those possibilities.
Then if $\alpha$ is irrational, we'd still need $n\alpha \pi = m\alpha \pi + 2\pi k = (m\alpha + 2k)\pi$, but this time $\alpha \ne \frac pq$. So we'd just get $$\alpha = \frac{2k}{n-m}$$ The right hand side is always rational, so there is no pair $(n,m)$ for which this holds. So there are infinitely many distinct powers if $\alpha$ is irrational.
So it seems I've gotten the irrational case, but I can't see how to count up the solutions in the rational case.
|
Remember that you may write $\Bbb e ^{\Bbb i \alpha} = \cos \alpha + \Bbb i \sin \alpha$.

* If $\alpha = \frac p q \pi$ with $p,q \in \Bbb Z$, $q \ne 0$, $\gcd(p,q)=1$ and $p$ even, then $(\Bbb e ^{\Bbb i \alpha}) ^0, \dots, (\Bbb e ^{\Bbb i \alpha}) ^{q-1}$ are all distinct, the next power being $(\Bbb e ^{\Bbb i \alpha}) ^{q} = \Bbb e ^{\Bbb i p \pi} = 1$ (so starting from here the powers begin to repeat in a cyclical way), so you have $q$ distinct powers.
* If $\alpha = \frac p q \pi$ with $p,q \in \Bbb Z$, $q \ne 0$, $\gcd(p,q)=1$ and $p$ odd, then $(\Bbb e ^{\Bbb i \alpha}) ^0, \dots, (\Bbb e ^{\Bbb i \alpha}) ^{2q-1}$ are all distinct, the next power being $(\Bbb e ^{\Bbb i \alpha}) ^{2q} = \Bbb e ^{2 \Bbb i p \pi} = 1$ (so starting from here the powers begin to repeat in a cyclical way), so you have $2q$ distinct powers.
* If $\alpha \notin \Bbb Q \pi$, then assume there are $m \ne n \in \Bbb Z$ with $(\Bbb e ^{\Bbb i \alpha})^m = (\Bbb e ^{\Bbb i \alpha})^n$ or, equivalently, $(\Bbb e ^{\Bbb i \alpha})^N = 1$ (where $N = m-n \ne 0$). Then $\alpha N = 2 k \pi$ for some $k \in \Bbb Z$, so $\alpha = \frac {2k} N \pi \in \Bbb Q \pi$, which is a contradiction, therefore all the powers are distinct.
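A quick numeric check of the first two cases (an illustrative sketch with arbitrarily chosen $p,q$):

```python
# Count distinct powers of e^{i*pi*p/q} numerically.
import cmath

def distinct_powers(p, q, n_max=500):
    z = cmath.exp(1j * cmath.pi * p / q)
    pts, w = set(), 1 + 0j
    for _ in range(n_max):
        pts.add((round(w.real, 9), round(w.imag, 9)))
        w *= z
    return len(pts)

print(distinct_powers(2, 5))   # p even: q = 5 distinct powers
print(distinct_powers(1, 5))   # p odd: 2q = 10 distinct powers
```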
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Rotation of the torus $T^2$ by irrational numbers linearly dependent over $\mathbb Z$ It is known that the rotation $x \to x + \alpha$ of $S^1 = \mathbb R / \mathbb Z$ with irrational $\alpha$ is ergodic and, in particular, $\alpha n$, $n = 1, 2,\dots$, are dense in $S^1$. In two dimensions, the corresponding result is that if $\alpha_1$ and $\alpha_2$ are irrational numbers, linearly independent over $\mathbb Z$, then the rotation $(x_1,x_2) \to (x_1+\alpha_1,x_2+\alpha_2)$ of $T^2 = \mathbb R^2 / \mathbb Z^2$ is also ergodic. In particular, the points $(n\alpha_1,n\alpha_2)$, $n = 1, 2,\dots$, are dense in $T^2$. My question is whether it is possible to find an explicit example of such irrational numbers $\alpha_1$ and $\alpha_2$, which are linearly dependent over $\mathbb Z$, and for which the distance from the set $\{(n\alpha_1,n\alpha_2), \; n = 1,2,\dots \} \subset T^2$ to zero is strictly positive?
|
No, that is not possible.
If $\alpha_1,\alpha_2$ are linearly dependent irrational numbers over $\mathbb{Z}$ then there exists another irrational number $\alpha'$, which is a $\mathbb{Z}$-linear combination of $\alpha_1$ and $\alpha_2$, such that your original set can be written as multiples of $(m_1\alpha',m_2\alpha')$ for some $m_1,m_2 \in \{1,2,\ldots\}$. That is:
$$\{(n \alpha_1, n \alpha_2) \,\bigm|\, n=1,2,...\} = \{(nm_1\alpha',nm_2\alpha') \,\bigm| \, n=1,2,...\}
$$
Furthermore, this subset is dense in
$$C = \{(m_1 t, m_2 t) \,\bigm|\, t \in \mathbb{R}\}
$$ which is a circle embedded in $T^2$ that contains zero. Therefore, the original set has points arbitrarily close to zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to prove the following binomial identity How to prove that
$$\sum_{i=0}^n \binom{2i}{i} \left(\frac{1}{2}\right)^{2i} = (2n+1) \binom{2n}{n} \left(\frac{1}{2}\right)^{2n} $$
|
Suppose we seek to verify that
$$\sum_{q=0}^n {2q\choose q} 4^{-q} =
(2n+1) {2n\choose n} 4^{-n}$$
using a method other than induction.
Introduce the Iverson bracket
$$[[0\le q\le n]]
= \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{w^q}{w^{n+1}} \frac{1}{1-w}
\; dw$$
This yields for the sum (we extend the sum to infinity because the
Iverson bracket controls the range)
$$\frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} \frac{1}{1-w}
\sum_{q\ge 0} {2q\choose q} w^q 4^{-q}
\; dw
\\ = \frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} \frac{1}{1-w}
\frac{1}{\sqrt{1-w}}
\; dw.$$
We have used the Newton binomial to obtain the square root.
Continuing we have
$$\frac{1}{2\pi i}
\int_{|w|=\epsilon}
\frac{1}{w^{n+1}} \frac{1}{(1-w)^{3/2}}
\; dw.$$
Use the Newton binomial again: the residue is the coefficient of $w^n$ in $(1-w)^{-3/2}$, namely $(-1)^n{-3/2\choose n}={n+1/2\choose n}$, so we obtain
$${n+1/2\choose n}
= \frac{1}{n!} \prod_{q=0}^{n-1} (n+1/2-q)
= \frac{1}{2^n \times n!} \prod_{q=0}^{n-1} (2n+1-2q)
\\ = \frac{1}{2^n \times n!} \frac{(2n+1)!}{2^n n!}
= 4^{-n} \frac{(2n+1)!}{n!\times n!}
= 4^{-n} (2n+1) {2n\choose n}.$$
This is the claim.
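A quick exact sanity check of the identity for small $n$ (separate from the proof):

```python
# Verify sum_{q<=n} C(2q,q)/4^q == (2n+1) C(2n,n)/4^n exactly.
from fractions import Fraction
from math import comb

for n in range(10):
    lhs = sum(Fraction(comb(2*q, q), 4**q) for q in range(n + 1))
    rhs = (2*n + 1) * Fraction(comb(2*n, n), 4**n)
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 0..9")
```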
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
How is $\Bbb Z_0 = \{0, \pm m, \pm2m, \pm3m, \ldots\}$ denoted in set builder notation? $\Bbb Z_0$ has integers as its elements, and its elements are listed as follows:
$\Bbb Z_0 = \{0, \pm m, \pm2m, \pm3m, \ldots\}$
Then how is $\Bbb Z_0$ denoted in set builder notation?
Is there a correct representation for the set $\Bbb Z_0$ between the following two?
$\Bbb Z_0=$ {$x \in \Bbb Z\,|\, x=km \text{ for some } k \in \Bbb Z\ \text{for all integer m}$}
OR
$\Bbb Z_0=$ {$x \in \Bbb Z\,|\, x=km \text{ for some } k \in \Bbb Z\ \text{for some integer m}$}
|
A simple way would be to write it $m\Bbb Z=\{mn\in\Bbb Z:n\in\Bbb Z\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
How to prove abelian group I am struggling answering this question for myself:
How can I prove that a group $G$ is abelian, if $$g\circ g=e \ \forall g \in G $$
A group is abelian if this is true:
$$a\circ b = b\circ a\ \forall a,b \in G$$
But I don't understand how to prove this.
Hope someone can help me out with this!
|
$a$ and $b$ commute iff $a \circ b = b \circ a$ iff $a \circ b \circ a^{-1} \circ b ^{-1} = e$.
However $a^{-1} = a$ and $b^{-1} = b$ so then $a \circ b \circ a^{-1} \circ b ^{-1} = a \circ b \circ a \circ b = (a \circ b) \circ (a \circ b) = e$ by hypothesis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1665945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Book(s) on Algebras (Quaternions)? Well, lately I've been looking for a book on quaternions, but I've realized that quaternions are a particular case of the so-called algebras (I think geometric algebra). Since then, I've found every kind of algebra you can imagine: geometric algebra, Lie algebra, Clifford algebra, commutative and noncommutative algebra... I am very confused. How can I get into quaternions? Do I have to study only geometric algebra or another one? I think all these algebras are quite related to each other.
What books would you recommend?
The reason why I am trying to study quaternions is that I am studying Maxwell's book "A Treatise on Electricity and Magnetism" and it is written in quaternions.
Thanks a lot.
|
The quaternions are a very specific object like the real numbers. There is no reason to plow into entire disciplines to 'understand' them. You can learn all their important properties from the quaternions wiki page.
IMO, despite previous recommendations above to the contrary, I think it is a pretty terrible idea to try to use the literature of that time to learn about quaternions. Even reviewers at the time thought Maxwell's treatise was poorly written, and I know from personal experience how idiosyncratic Hamilton's and Grassmann's works were.
There is simply no reason not to take advantage of the massive clarifications in exposition of mathematics that have happened in the intervening century to learn about quaternions, and then (if you are really bent on doing it) seeing how it as used in the old literature.
Anyone interested in quaternions and physics should also read Lambek's "If Hamilton had Prevailed: Quaternions in Physics" for an infinitely clearer overview.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Intuition behind cross-product and area of parallelogram The cross product in 2D is defined like that: $|(x_1, y_1) \times (x_2, y_2)| = x_1 y_2 - x_2 y_1.$
I perfectly understand the first part of the definition: $x_1 y_2$, which is simply the area of a rectangle:
I am struggling to understand the second part: $- x_2 y_1.$
I feel that the second part, probably, has to do with the rotations of the vectors $(x_1, y_1)$ and $(x_2, y_2)$, because when we rotate the original rectangle we should preserve the area and $- x_2 y_1$ somehow compensates for the excess amount of area that we get from the first term $x_1 y_2$.
I feel that my intuition lacks a lot of details and I would be grateful for an explanation of the second term and its connection to the rotations.
My question is different from Reasoning behind the cross products used to find area, although the titles are almost identical. The orthogonality of my question to that can be seen, by reading these parts of the question:
I do not have any issues with calculating the area between two vectors.
But I have.
...but not why the cross product is used instead of the dot product.
The dot product is out of the scope of my question.
|
* The formula works fine for the standard unit vectors.
* Stretching one of the vectors by a constant $c$ should multiply the (oriented) area by $c$ and does indeed multiply the cross product by $c$.
* Shearing along $(x_1,y_1)$, i.e., replacing $(x_2,y_2)$ with $(x_2+cx_1,y_2+cy_1)$, does not change the area and also does not change the cross product (the extra terms cancel).
As any parallelogram can be obtained from the standard unit vectors by a few steps of shearing/stretching, the cross product tells us the oriented area for all parallelograms.
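A tiny numeric illustration of the shearing point (made-up vectors, an addition to this answer):

```python
# Shearing v2 along v1 leaves the oriented area (2D cross product) unchanged.
def cross2(v, w):
    return v[0]*w[1] - v[1]*w[0]

v1, v2, c = (3.0, 1.0), (1.0, 2.0), 5.0
sheared = (v2[0] + c*v1[0], v2[1] + c*v1[1])
print(cross2(v1, v2), cross2(v1, sheared))   # both print 5.0
```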
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Can there be an unbounded sequence of equicontinuous functions? I am trying to find an equicontinuous sequence of functions $f_n$ on $(a, b)$ that is bounded somewhere but not everywhere.
I am thinking along the lines of
$$f_n=\frac{1}{nx}$$ on $(0, 1)$, but this is obviously not equicontinuous.
Any hints?
|
I understand/interpret your question as follows: You want to find
an equicontinuous sequence $\left(f_{n}\right)_{n\in\mathbb{N}}$
of functions $f_{n}:\left(0,1\right)\to\mathbb{R}$ such that the
set
$$
U:=\left\{ x\in\left(0,1\right)\,\middle|\,\exists M_{x}\in\left(0,\infty\right)\,\forall n\in\mathbb{N}:\,\left|f_{n}\left(x\right)\right|\leq M_{x}\right\}
$$
is not empty, but also not all of $\left(0,1\right)$.
As we will see now, such a sequence does not exist. To see this, we
will show that $U$ is both open and closed in $\left(0,1\right)$.
By connectedness of $\left(0,1\right)$, this implies $U\in\left\{ \emptyset,\left(0,1\right)\right\} $.
* $U$ is open: Let $x_{0}\in U$ be arbitrary. By equicontinuity,
there is $\varepsilon>0$ (potentially depending on $x_{0}$) with
$\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)\subset \left(0,1\right)$ and with
$\left|f_{n}\left(x_{0}\right)-f_{n}\left(y\right)\right|<1$ for
all $y\in\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)$. But this
implies
$$
\left|f_{n}\left(y\right)\right|\leq\left|f_{n}\left(x_{0}\right)\right|+1\leq M_{x_{0}}+1
$$
for all $n\in\mathbb{N}$, so that we get $\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)\subset U$.
* $U$ is closed in $\left(0,1\right)$. To see this, let $x_{0}\in\overline{U}\cap\left(0,1\right)$.
By equicontinuity, there is $\varepsilon>0$ with $\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)\subset \left(0,1\right)$
and with $\left|f_{n}\left(x_{0}\right)-f_{n}\left(y\right)\right|<1$
for all $y\in\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)$. But
since $x_{0}\in\overline{U}$, there is some $y\in U\cap\left(x_{0}-\varepsilon,x_{0}+\varepsilon\right)$.
This implies
$$
\left|f_{n}\left(x_{0}\right)\right|\leq\left|f_{n}\left(y\right)\right|+1\leq M_{y}+1
$$
for all $n\in\mathbb{N}$. Hence, $x_{0}\in U$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Show that if $p(x)$ is a polynomial, $|p(x)|$ attains its minimum.
Show that if $p(x)$ is a polynomial, $|p(x)|$ attains its minimum.
Attempt
Let $p(x) = a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$. Then $|p(x)| = |a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0|$. If $x > 0,$ then $|p(x)| = |a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0| \leq |a_n|x^n+|a_{n-1}|x^{n-1}+\cdots+|a_1|x+|a_0|.$ Then we can take the derivative of $|p(x)|$ to get a bound and find critical points. I am not sure how to deal with the case of $x \leq 0$.
|
Does $|p(x)|$ always attain $p(x)$'s minimum?
Try using a $p(x)$ such that $a_0<0$ and $a_0$ happens to be its minimum [example: $p(x)=x^2-1$]. Then $|p(x)|$ will never attain $a_0$, but $|a_0|$.
Maybe I'm misunderstanding the question. Try to word it better.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Give a big-O estimate of $(x+1)\mathrm{log}(x^2+1) + 3x^2$ I wanted to know if the following solution demonstrates that the function $f(x) = (x+1)\mathrm{log}\, (x^2+1) + 3x^2 \in O(x^2)$, because my answer and the book's answer deviate slightly.
Clearly,
$$3x^2 \in O(x^2) \tag{1}$$
$$x+1 \in O(x)\tag{2}$$
The following inequality is where the book and I differ, but I understand how the author obtained his big-O estimate. I said
$$\mathrm{log}(x^2+1) \in O(\mathrm{log}(x^2+1))\tag{3}$$
Therefore, the product of $(2)$ and $(3)$ renders
$$(x+1)\mathrm{log}(x^2+1) \in O(x \,\mathrm{log}(x^2+1)) \tag{4}$$
Finally, $(1)$ and $(4)$ give us this big-O estimate:
$$(x+1)\mathrm{log}(x^2+1) + 3x^2 \in O(\mathrm{max}(x^2,x\,\mathrm{log}(x^2+1))) = O(x^2) \tag{5}$$
Any problems? Thanks!
|
If this is as $x \to \infty$, then $\log(x^k+a) = O(\log(x))$ for any fixed $k$ and $a$. Also, $\log(x) = O(x^c)$ for any $c > 0$. This should be enough.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Evaluation of $\lim_{x\rightarrow \infty}\left\{\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right\}$
Evaluation of $\displaystyle \lim_{x\rightarrow \infty}\left\{\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right\}$
$\bf{My\; Try::}$ Here $(x+1)\;,(x+2)\;,(x+3)\;,(x+4)\;,(x+5)>0\;,$ when $x\rightarrow \infty$
So Using $\bf{A.M\geq G.M}\;,$ We get $$\frac{x+1+x+2+x+3+x+4+x+5}{5}\geq \left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}$$
So $$x+3\geq \left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}$$
So $$\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\leq 3$$
and equality holds when $x+1=x+2=x+3=x+4=x+5\;,$ where $x\rightarrow \infty$
So $$\lim_{x\rightarrow \infty}\left[\left[(x+1)(x+2)(x+3)(x+4)(x+5)\right]^{\frac{1}{5}}-x\right]=3$$
Can we solve the above limit in that way? If not, then how can we calculate it?
Also, please explain to me where I have gone wrong in the above method.
Thanks
|
$$\lim _{t\to 0}\left(\left[\left(\frac{1}{t}+1\right)\left(\frac{1}{t}+2\right)\left(\frac{1}{t}+3\right)\left(\frac{1}{t}+4\right)\left(\frac{1}{t}+5\right)\right]^{\frac{1}{5}}-\frac{1}{t}\right) = \lim _{t\to 0}\left(\frac{\sqrt[5]{1+15t+85t^2+225t^3+274t^4+120t^5}-1}{t}\right) $$
Now we use the first-order Taylor expansion $(1+u)^{1/5}=1+\frac{u}{5}+o(u)$ with $u=15t+O(t^2)$:
$$= \lim _{t\to 0}\left(\frac{1+3t-1+o(t)}{t}\right) = \color{red}{3}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 2
}
|
Limit of function of hyperbolic How can I - without using derivatives - find the limit of the function
$f(x)=\frac{1}{\cosh(x)}+\log \left(\frac{\cosh(x)}{1+\cosh(x)} \right)$
as $x \to \infty$ and as $x \to -\infty$?
We know that $\cosh(x) \to \infty$ as $x \to \pm \infty$ thus $\frac{1}{\cosh(x)} \to 0$ as $x \to \pm \infty$.
And I imagine that $\frac{\cosh(x)}{1+\cosh(x)} \to 1$ as $x \to \pm \infty$ thus $\log\left(\frac{\cosh(x)}{1+\cosh(x)}\right) \to 0$ as $x \to \pm \infty$.
Is this approach sufficiently formal?
Any help is appreciated.
|
$$
\begin{aligned}
\lim _{x\to \infty }\left(\frac{1}{\cosh \left(x\right)}+\ln\left(\frac{\cosh \left(x\right)}{1+\cosh \left(x\right)}\right)\right)
& = \lim _{x\to \infty }\left(\frac{1+\ln \left(\frac{\left(\frac{e^x+e^{-x}}{2}\right)}{\left(\frac{e^x+e^{-x}}{2}\right)+1}\right)\left(\frac{e^x+e^{-x}}{2}\right)}{\left(\frac{e^x+e^{-x}}{2}\right)}\right)
\\& = \lim _{x\to \infty }\left(\frac{2+\ln \:\left(\frac{e^x+e^{-x}}{e^x+e^{-x}+2}\right)\left(e^x+e^{-x}\right)}{e^x+e^{-x}}\right)
\\& \approx \lim _{x\to \infty }\left(\frac{2+\ln \:\left(\frac{e^x}{e^x+2}\right)\left(e^x\right)}{e^x}\right)
\\& \approx \lim _{x\to \infty }\left(\frac{2+0\cdot \left(e^x\right)}{e^x}\right)
\\& = \color{red}{0}
\end{aligned}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
}
|
What do the (+) and (-) symbols after variables mean? This paper describing an Unscented Kalman Filter implementation uses notation that I am unfamiliar with and cannot find on e.g. https://en.wikipedia.org/wiki/List_of_mathematical_symbols
Xu et al (2008)
An example line is:
$$\chi_{i,k-1}(+) = f(\chi_{i,k-1}), x_k(-)=\sum_{i=0}^{2n}\omega_i\chi_{i,k-1}(+)$$
This notation continues throughout the paper.
Could someone please tell me what the (+) and (-) indicate? At first I thought it was redundant information describing the posterior and prior versions of $x$, however one of them is $\chi$ and the other is $x$. I also thought it could be the positive and negative solutions of a square root, but that isn't obvious to me as being correct either (why (+) of $\chi$ and (-) of $x$, and where are the other solutions?).
|
They seem to refer to the a priori and a posteriori estimates. Compare the equations with these: Wikipedia uses $x_{k∣k−1}$ where your paper uses $x_k(−)$, and $x_{k∣k}$ instead of $x_k(+)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Limit of $x^5\cos\left(\frac1x\right)$ as $x$ approaches $0$ Find the limit of $x^5\cos\left(\frac1x\right)$ as $x$ approaches $0$.
Can I just substitute $0$ into $x^5$? But what would $\cos\left(\frac10\right)$ be?
I could use the sandwich $-x^4\le x^4\cos(1/x)\leq x^4$,
in which the limits of $x^4$ and $-x^4$ are both $0$, so by the sandwich theorem the limit is $0$;
but is there a method to solve it without doing so?
|
Notice, $-1\le \cos \left(\frac 1x\right)\le 1$ for all $x\in \mathbb{R}\setminus\{0\}$; hence, since $x^5\to 0$ as $x\to 0$, the squeeze theorem gives
$$\lim_{x\to 0}x^5\cos\left(\frac 1x\right)=0$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1666981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
If $v_1,v_2,\dots,v_n$ are linearly independent then what about $v_1+v_2,v_2+v_3,\dots,v_{n-1}+v_n,v_n+v_1$? If $v_{1},v_{2},...v_{n}$ are linearly independent then are the following too?
$v_{1}+v_{2},v_{2}+v_{3},...v_{n-1}+v_{n},v_{n}+v_{1}$
I tried summing them to give 0 but had no success.
|
$$a_1(v_1+v_2) + a_2(v_2+v_3) + \cdots + a_{n-1}(v_{n-1}+v_n) + a_n(v_n + v_1) \\ = a_1v_1 + a_1v_2 + a_2v_2 + a_2v_3 + a_3v_3 + \cdots + a_{n-1}v_n + a_nv_n + a_nv_1 \\ = (a_1+a_n)v_1 + (a_1+a_2)v_2 + \cdots + (a_{n-2} +a_{n-1})v_{n-1}+(a_{n-1}+a_n)v_n \\ = b_1v_1 + b_2v_2 + \cdots + b_{n-1}v_{n-1}+b_nv_n$$
Now make conclusions (casewise).
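For instance (a worked case, filling in the hint over $\mathbb R$): when $n$ is even, alternating signs give a nontrivial dependence,
$$(v_1+v_2)-(v_2+v_3)+(v_3+v_4)-(v_4+v_1)=0 \qquad (n=4),$$
while for odd $n$ the equations $b_1=\cdots=b_n=0$ force $a_i=(-1)^{i-1}a_1$ and then $a_n+a_1=2a_1=0$, so all $a_i=0$ and the new vectors are linearly independent.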
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
}
|
Evaluating an integral of a form related to $\int_{-\infty}^{\infty} e^{-ax^2} \cdot e^{-2\pi i k x} dx$ Claim
\begin{equation}
\int_{\mathbb R} \exp\left(-2\pi \cdot \left(\frac{x}{\sqrt 2}\right)^2 \right) \cdot \exp\left(-2i \pi \frac{x}{\sqrt 2} \cdot f\right) \mathrm{d}\left( \frac{x}{\sqrt{2}} \right)
= \frac{1}{\sqrt{2}} \exp(-\pi f^{2}/2).
\end{equation}
where I cannot see how the second power of $f$ arises.
Wolfram alpha also returns correct result here.
However, I do not understand how.
The general formula of Fourier transform of Gaussian is
\begin{equation}
\int_{-\infty}^{\infty} e^{-ax^2} \cdot e^{-2\pi i k x} dx = F_{x}[ e^{-ax^2} ]
\end{equation}
Looking at (1), I cannot see how the $f$ gets squared in the parameter of the result, because $x := (x/\sqrt{2})$ in (1), and the $f$ term apparently corresponds to $2\pi i k$.
I think the general equation from Wolfram does not apply here directly.
Using $a \mapsto 2\pi$, $x \mapsto x/\sqrt{2}$, and $k \mapsto f$,
I really just get the general formula there.
Solving the Fourier transform
\begin{equation}
F_{x} [e^{-ax^2}] = F_{x} [e^{-2\pi x^2}]
\end{equation}
where missing the $f$ term.
So I am still thinking how the $f$ term comes there in the end.
If $x$ was also mapped to $f$, there would be $f$ term but then the original equation cannot hold.
How can you get the result of integration?
|
Hint:
Let$$I=\int_{\mathbb R}e^{-x^2/2}\cos(fx)dx.$$
Then deriving on $f$ and integrating by parts,
$$I'_f=-\int_{\mathbb R}xe^{-x^2/2}\sin(fx)dx=\left.e^{-x^2/2}\sin(fx)\right|_{\mathbb R}-\int_{\mathbb R} fe^{-x^2/2}\cos(fx)dx=-fI.$$
The solution of this differential equation is
$$I=Ce^{-f^2/2}.$$
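To pin down the constant (a step the hint leaves to the reader): setting $f=0$ gives the Gaussian integral, so
$$C=I(0)=\int_{\mathbb R}e^{-x^2/2}\,dx=\sqrt{2\pi},\qquad\text{hence}\qquad I=\sqrt{2\pi}\,e^{-f^2/2}.$$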
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Finding the orthogonal projection of a given vector on the given subspace W of the inner product space V. Let $V = R^3$ with the standard inner product
$u = (2,1,3)$ and $W = \{(x,y,z) : x + 3y - 2z = 0\}$
I came up with the basis $\{(-3,1,0), (2,0,1)\}$ but these are not orthogonal to each other. I'm not exactly sure how to approach this question, any help would be appreciated.
Thanks
|
There are many ways how to find an orthogonal projection.
You seem to want to use an orthogonal (or an orthonormal) basis of $W$ in some way.
If you already have a basis of $W$, you can get an orthogonal basis from it using Gram-Schmidt process.
Another way to do this. Let us choose $\vec b_1=(2,0,1)$ as the first basis vector. Now you want to find another vector which belongs to $W$ (i.e., it satisfies $x+3y-2z=0$) and which is orthogonal to $\vec b_1$ (i.e., it satisfies $2x+z=0$). Can you find a solution of these two equations? Can you use it to get an orthogonal basis of $W$?
Solution using a linear system. Here is another way to find an orthogonal projection. We are given a vector $\vec u=(2,1,3)$. And we want to express it as $\vec u=\vec u_1+\vec u_2$, where $\vec u_1 \in W$ and $\vec u_2\in W^\bot$. We know bases of $W=[(-3,1,0),(2,0,1)]$ and of $W^\bot=[(1,3,-2)]$.
So we simply express the vector $\vec u$ as a linear combination $\underset{\in W}{\underbrace{c_1(-3,1,0)+c_2(2,0,1)}}+\underset{\in W^\bot}{\underbrace{c_3(1,3,-2)}}$.
To find $c_{1,2,3}$ it suffices to solve the system of equations
$$
\left(\begin{array}{ccc|c}
-3 & 2 & 1 & 2 \\
1 & 0 & 3 & 1 \\
0 & 1 &-2 & 3
\end{array}\right)
$$
If you do so, you will find that the only solution is $c_1=\frac{17}{14}$, $c_2=\frac{20}7$, $c_3=-\frac1{14}$.
This gives you $\vec u_1=\underline{\underline{\frac1{14}(29,17,40)}}$ and $\vec u_2=\frac1{14}(-1,-3,2)$.
Projection to $W^\bot$. As mentioned in a comment, since $W^\bot$ is one-dimensional, it is easy to find the projection to $W^\bot$. The vector $\vec a=\frac1{\sqrt{14}}(1,3,-2)$ is a unit vector which spans $W^\bot$. The projection can be found as
$$\vec u_2 = (\vec u\cdot\vec a)\,\vec a =\frac1{14}\left( (2,1,3)\begin{pmatrix}1\\3\\-2\end{pmatrix}\right)(1,3,-2)=-\frac1{14}(1,3,-2).$$
Then the projection $\vec u_1$ onto the subspace $W$ can be computed as $\vec u_1=\vec u-\vec u_2$.
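A quick numpy check of this computation (an added sketch, using the vectors from the answer):

```python
# Verify the projection of u onto W via the unit normal of W.
import numpy as np

u = np.array([2.0, 1.0, 3.0])
a = np.array([1.0, 3.0, -2.0]) / np.sqrt(14.0)   # unit normal of W

u2 = (u @ a) * a        # component in W-perp
u1 = u - u2             # orthogonal projection onto W
print(14 * u1)          # [29. 17. 40.]
print(14 * u2)          # [-1. -3.  2.]
```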
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $2$ is a primitive root modulo $13$. Find all primitive roots modulo $13$.
We show $2$ is a primitive root first. Note that $\varphi(13)=12=2^2\cdot3$. So the order of $2$ modulo $13$ is $2,3,4,6$ or $12$.
\begin{align}
2^2\not\equiv1\mod{13}\\
2^3\not\equiv1\mod{13}\\
2^4\not\equiv1\mod{13}\\
2^6\not\equiv1\mod{13}\\
2^{12}\equiv1\mod{13}
\end{align}
Hence $2$ has order $12$ modulo 13 and is therefore a primitive root modulo $13$.
Now note all even powers of $2$ can't be primitive roots as they are squares modulo $13$. $(*)$
There are $\varphi(12)=4$ primitive roots modulo $13$. These must therefore be
$$2,2^5=6,2^7=11,2^{11}=7\mod{13}.$$
Questions:
* Why do we only check whether the divisors of $\varphi(13)$ lead to $1\mod{13}$?
* Does the line marked $(*)$ mean they can be written in the form $a^2$? Why does this mean they can't be a primitive root?
* I thought $\varphi(12)$ counts the number of coprimes to $12$. Why does this now suddenly tell us the number of primitive roots modulo $13$?
* How have these powers been plucked out of thin air? I understand even powers can't be primitive roots; also we have shown $2^3$ can't be a primitive root above, but what about $2^9$?
|
1 . This is Lagrange's theorem. If $G$ is the group $(\mathbb{Z}/13\mathbb{Z})^{\ast}$ (the group of units modulo $13$), then the order of an element $a$ (that is, the smallest number $t$ such that $a^t \equiv 1 \pmod{13}$) must divide the order of the group, which is $\varphi(13) = 12$. So we only check the divisors of $12$.
2 . Yes, that is a square mod $13$. To say that $a$ is a primitive root mod $13$ means that $a^{12} \equiv 1 \pmod{13}$, but all lower powers $a, a^2, ... , a^{11}$ are not congruent to $1$. Again use Lagrange's theorem: supposing $a^2$ were a primitive root, then $12$ would be the smallest exponent $t$ such that $(a^2)^{t} \equiv 1$. But note that $b^{12} \equiv 1$ for ANY integer $b$ not divisible by $13$. So $(a^2)^{6} = a^{12} \equiv 1$, and $6 < 12$, contradiction.
3 . It's a general result about finite cyclic groups. A cyclic group of order $m$ is a group of the form $H = \{ 1, g, g^2, ... , g^{m-1}\}$. It is basically the same thing as the group $\mathbb{Z}/m\mathbb{Z}$ with respect to addition. In general, if $d \geq 1$, there exist elements in $H$ with order $d$ (that is, their $d$th power is $1$, all lower powers are not $1$) if and only if $d$ is a divisor of $m$, and there are exactly $\varphi(d)$ such elements.
In particular, if $p$ is an odd prime number, the result is that $(\mathbb{Z}/p\mathbb{Z})^{\ast}$ is a cyclic group of order $\varphi(p) = p-1$, and the number of primitive roots (that is, the number of elements with order $p-1$) is exactly $\varphi(p-1) = \varphi(\varphi(p))$.
4 . If you have found a primitive root modulo $p$ (where $p$ is an odd prime), then you can easily find the rest of them: if $a$ is a primitive root mod $p$, then the other primitive roots are $a^k$, where $k$ runs through those numbers which don't have any prime factors in common with $p-1$. It's a good exercise to prove this. So $2^9$ wouldn't work; $9$ has prime factors in common with $12$.
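A short brute-force check (an added sketch) that the primitive roots mod $13$ are exactly $2, 6, 7, 11$:

```python
# Find all primitive roots modulo 13 by computing multiplicative orders.
def order(a, p=13):
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

print([a for a in range(1, 13) if order(a) == 12])   # [2, 6, 7, 11]
```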
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Is the action of $\mathbb Z$ on $\mathbb R$ by translation the only such action? It is well known that $\mathbb Z$ acts on $\mathbb R$ by translation. That is by $n\cdot r=n+r$. The quotient space of this action is $S^1$.
Could someone give me an example where $\mathbb Z$ acts on $\mathbb R$ in some other (non trivial) way to give (possibly) some other quotient? I am only interested in continuous actions.
Specifically, is there an action such that the quotient is a closed interval?
Thank you.
|
Recall that a continuous action of $\Bbb{Z}$ on $\Bbb{R}$ consists of a group morphism $\Bbb{Z} \to \operatorname{Aut}(\Bbb{R})$, where $\operatorname{Aut}(\Bbb{R})$ denotes the group of homeomorphisms of $\Bbb{R}$ into itself.
Now, $\Bbb{Z}$ is cyclic, hence any morphism $\mu:\Bbb{Z} \to \operatorname{Aut}(\Bbb{R})$ is uniquely determined by $\mu(1)$. So, if you fix any homeomorphism $f \in \operatorname{Aut}(\Bbb{R})$, you will get the action
$$n \cdot x = f^n(x)$$
where $f^n$ is the $n$-iterated map $f(f( \dots))$ (for $n < 0$ it's $(f^{-1})^{-n}$).
Now, let's see some examples:
$f(x)=x+1$ leads to the action in your question.
$f(x)=x$ leads to the trivial action.
$f(x)=-x$ leads to the action $n \cdot r = (-1)^nr$, whose quotient is $\Bbb{R}_{\ge 0}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Problem finding Jacobian when computing density I'm having trouble finding the Jacobian when trying to compute a distribution. If $(X,Y)$ is a point on a unit disk with radius $1$, I'd like to find the density of the distance between the point and the centre of the disk.
So the joint density function of $X$ and $Y$ is
$$
f_{XY}(x,y) = \begin{cases}
1/\pi, & x^2 + y^2 \leq 1,\\
0, & \text{otherwise}
\end{cases}.
$$
I put $U = \sqrt{X^2+Y^2}$ and use an auxiliary random variable $V = \mathrm{arctan}(Y/X).$ So there is a 2-to-1 mapping.
Now I wonder what the Jacobian might be. Or, well, the book tells me it's $u$, but I don't know how to get there myself. Using wolfram alpha (admittedly I'm probably just using it wrong) gets me the answer $0$, not $u$.
Normally, I would work with simpler auxiliary variables, which have helped me compute the Jacobian for other problems, but in this case I'm advised to use $\mathrm{arctan}(Y/X)$ and that could be what makes this so hard for me.
|
Full solution, using Dirac delta:
$$
p_U(u)=\int_D dx dy \frac{1}{\pi}\delta\left(u-\sqrt{x^2+y^2}\right)\ .
$$
Making a change to polar coordinates $x=r\cos\theta$ and $y=r\sin\theta$, whose Jacobian is $r$,
$$
p_U(u)=\frac{1}{\pi}\int_0^{2\pi}d\theta\int_0^1 dr\ r\ \delta(r-u)=2u\qquad 0\leq u\leq 1\ ,
$$
which is correctly normalized, $\int_0^1 du\ p_U(u)=1$.
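A small Monte Carlo sanity check of $p_U(u)=2u$ (an added sketch): under this density $\mathbb E[U]=\int_0^1 u\cdot 2u\,du=2/3$, so the sample mean of the distance should be about $2/3$.

```python
# Rejection-sample the uniform unit disk and average the distance to 0.
import random

n, total, count = 200_000, 0.0, 0
while count < n:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x*x + y*y <= 1.0:
        total += (x*x + y*y) ** 0.5
        count += 1
print(total / n)   # approximately 0.6667
```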
About the Jacobian, I am not sure what the problem is [and also not sure about the statement '2-to-1' you made...you had two variables $X,Y$ and introduce two other variables $U,V$]. The Jacobian between the old variables $(X,Y)$ and the new ones $(U,V)$ is
$$
\det\begin{pmatrix}
\frac{\partial u}{\partial x} & \frac{\partial u}{\partial y}\\
\frac{\partial v}{\partial x} & \frac{\partial v}{\partial y}
\end{pmatrix}
=
\det\begin{pmatrix}
\frac{x}{\sqrt{x^2+y^2}} & \frac{y}{\sqrt{x^2+y^2}} \\
-\frac{y}{x^2+y^2} & \frac{x}{x^2+y^2}
\end{pmatrix}=\frac{1}{\sqrt{x^2+y^2}}=\frac{1}{u}
$$
(obviously, then, the Jacobian between the new variables and the old ones is $u$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability - Can't understand hint professor gave You don't really need any expert knowledge in Machine Learning or linear regression for this question, just probability.
Our model is this: We have our input matrix $X \in \mathbb R^{n \times d}$ and our output vector $y \in \mathbb R^n$ which is binary. $y_i$ is either $0$ or $1$.
We have $P(y_i = 1|x_i) = 1-P(y_i = 0|x_i) = g(w^Tx_i)$ where $g(z) = \frac{1}{1+e^{-z}}$ is the sigmoid function, $x_i$ denotes the $i$'th row of the matrix $X$, and $w \in \mathbb R^d$ maximizes our log-likelihood function defined as $l(w) = \frac{1}{n} \sum_{i=1}^{n}\log(P(y_i|x_i))$
I was asked to prove that $\frac{\partial l}{\partial w} = \frac{1}{n} \sum_{i=1}^{n}x_i(y_i - g(w^Tx_i))$
We had a hint: Observe that $P(y_i|x_i) = g(w^Tx_i)^{y_i}(1-g(w^Tx_i))^{1-y_i}$
I don't understand how the hint can be true. It was defined at the beginning that $P(y_i|x_i) = g(w^Tx_i)$. This seems different.
Edit: Here is the specific text the teacher gave
|
It's just a way of writing two identities in one.
If $y_i = 0$ you get what you had before, and similarly if $y_i= 1$: there's always one of the two factors canceling. Since $0$ and $1$ are the only possible values $y_i$ can take, the equality holds.
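For the gradient itself (the standard derivation, sketched here for completeness rather than taken from this answer), write $g_i=g(w^Tx_i)$ and use $g'(z)=g(z)(1-g(z))$:
$$l(w)=\frac1n\sum_{i=1}^n\Big(y_i\log g_i+(1-y_i)\log(1-g_i)\Big)
\;\Longrightarrow\;
\frac{\partial l}{\partial w}=\frac1n\sum_{i=1}^n\Big(y_i(1-g_i)-(1-y_i)g_i\Big)x_i=\frac1n\sum_{i=1}^n x_i\big(y_i-g_i\big).$$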
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why is the delta function the continuous generalization of the kronecker delta and not the identity function? In a discrete $n$ dimensional vector space the Kronecker delta $\delta_{ij}$ is basically the $n \times n$ identity matrix. When generalizing from a discrete $n$ dimensional vector space to an infinite dimensional space of functions $f$ it seems natural to assume that the generalization of the Kronecker delta should be an identity operator
$$
\operatorname{I} f = f
$$
However it is said that the continuous generalization of the Kronecker delta is the Dirac delta function
$$
\int_{-\infty}^\infty \delta(x - y) f(x)\, dx = f(y)
$$
Why is that the case? What is "wrong" with the simple identity operator?
UPDATE: What I mean with "simple" identity operator is: Why is the identity operator not simply the scalar number "1", but the delta function instead?
UPDATE2: To make it more clear: Why is the continuous generalization of the Kronecker delta
$$
\int_{-\infty}^\infty \delta(x - y) f(x)\, dx = f(y)
$$
and not
$$
1 \cdot f(x) = f(x)
$$
?
|
I) Let there be given a linear operator $A:V\to V$, where $V$ is a vector space.

* If $V$ is finite-dimensional, given a choice of basis, the operator $A$ can be represented
$$(Av)^i ~=~ \sum_j a^i{}_j v^j\tag{1}$$
by a matrix $a^i{}_j$.
* If $V$ is infinite-dimensional, of the form of an appropriate function space, the operator $A$ can be represented
$$ (Af)(x)~=~ \int \!dy ~a(x,y) f(y)\tag{2}$$
by an integral kernel $a(x,y)$.
Notice that the discrete index $j$, the summation, and the vector components $v^j$ in eq. (1) have been replaced by a continuous index $y$, an integral, and a function value $f(y)$ in eq. (2), respectively.
II) Now consider the identity operator $A={\rm Id}_V:V\to V$. In both cases (1) and (2), we may view the identity operator ${\rm Id}_V$ as multiplication with the constant $1$.
* In case 1, the matrix $a^i{}_j=\delta^i_j$ is given by the Kronecker delta.
* In case 2, the integral kernel $a(x,y)=\delta(x,y)$ is given by the Dirac delta distribution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Compute the matrix $A^n$, $n \in \mathbb{N}$. Compute $A^n$, $n \in \mathbb{N}$, where $A=\left[\begin{array}{rr} 2&4\\3&13 \end{array}\right]$.
Hi guys, this is the question. I computed the diagonal matrix of $A$ and obtained $D=\left[\begin{array}{rr} 1&0\\0&14 \end{array}\right]$. I need to use the result $(M^{-1}BM)^n = M^{-1}B^nM$ to find $A^n$, and I know how to use it, but my problem is computing the matrix $M$. How can I compute $M$?
|
If a matrix $A$ is diagonalizable, then what that means is there exists a decomposition of the vector space into eigenspaces. Then the form $A = MDM^{-1}$ is merely a change-of-basis from the basis in which $A$ was expressed into this new eigenbasis.
Therefore, the columns of $M$ will be the eigenvectors associated with the eigenvalues.
$\lambda = 1$: The corresponding eigenspace is the nullspace of the matrix $A-1 I = A-I$. So,
$$
A-I=\left[\begin{array}{rr}
1&4\\3&12
\end{array}\right]$$
then row reduce to find a single generator of this nullspace.
$\lambda = 14$: Same thing as above.
Now take the two eigenvectors found in the two cases above and form the $2 \times 2$ matrix $M$.
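A sympy sketch of the whole computation (an addition to this answer):

```python
# Diagonalize A and assemble the closed form A^n = M D^n M^{-1}.
import sympy as sp

A = sp.Matrix([[2, 4], [3, 13]])
M, D = A.diagonalize()                  # columns of M are eigenvectors
print(D)                                # diag(1, 14)

n = sp.symbols('n', integer=True, nonnegative=True)
Dn = sp.diag(D[0, 0]**n, D[1, 1]**n)    # D^n = diag(1, 14^n)
print(sp.simplify(M * Dn * M.inv()))    # entrywise closed form for A^n
```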
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1667948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Negation of a statement without using (symbol ¬)
Write the negation of the following statement (without using the symbol $¬$ ):
$\mathrm P ~=~ (∃x ∈ \Bbb R)\Big(\big((∃y ∈ \Bbb R)(x = (1 − y)^2)\big) ∧ \big((∃z ∈ \Bbb R)(x = −z^2)\big)\Big)$
Which statement is true, $\rm P$ or $\rm ¬P$ ?
P.S. I'm kinda confused as to not being able to use the $¬$ symbol for negating the statement. So, how would I be able to do it?
|
What you are supposed to do is the following: consider $\neg P$ and then iteratively transform it into an equivalent sentence $Q$ in which the symbol $\neg$ doesn't appear. I will show you how to begin:
(I'm assuming that your $R$ is supposed to be $\mathbb R$ - the set of all reals.)
$$
\begin{align}
&\neg \left( \exists x \in \mathbb R \left( \exists y \in \mathbb R \colon x = (1-y)^2 \right) \wedge \left( \exists z \in \mathbb R \colon x = -z^2 \right) \right) \\
\Leftrightarrow\ &\forall x \in \mathbb R\ \neg \left( \left( \exists y \in \mathbb R \colon x = (1-y)^2 \right) \wedge \left( \exists z \in \mathbb R \colon x = -z^2 \right) \right) \\
\Leftrightarrow\ &\forall x \in \mathbb R \left( \neg \left( \exists y \in \mathbb R \colon x = (1-y)^2 \right) \vee \neg \left( \exists z \in \mathbb R \colon x = -z^2 \right) \right) \\
\Leftrightarrow\ &\forall x \in \mathbb R \left( \left( \forall y \in \mathbb R \colon x \neq (1-y)^2 \right) \vee \neg \left( \exists z \in \mathbb R \colon x = -z^2 \right) \right) \\
&\ldots
\end{align}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
De Mere's Martingales In a casino, a player plays a fair game. If he bets $ k $ in one hand in that game, then he wins $ 2k$ with
probability $0.5$, but gets $0$ with probability $0.5$. He adopts the following strategy. He bets $1$ in the
first hand. If this first bet is lost he then bets $2$ at the second hand. If he loses his first $n$ bets, he
bets $2^n$ at the $(n + 1)$-th hand. Moreover, as soon as the player wins one bet, he stops playing (or
equivalently, after he wins a bet, he bets $0$ on every subsequential hand). Denote by $X_n$ the net profit
of the player just after the nth game, and $X_0 = 0$.
(i) Show that $(X_n)_{n≥0}$ is a martingale.
(ii) Show that game ends in a finite time almost surely.
(iii) Calculate the expected time until the gambler stops betting.
(iv) Calculate the net profit of the gambler at the time when the gambler stops betting.
(v) Calculate the expected value of the gambler’s maximum loss during the game.
My approach: Consider $Y_n$ to be i.i.d $Ber(0.5)$, $Y_n = 1$(win a bet) with probability $0.5$ and $Y_n = -1$(lose a bet) with probability $0.5$. Let $Y_n$ denote the outcome of $n$-th bet. The stopping time is defined as
$T = \inf\{ k: Y_k = 1\}$. Then,
$X_k = \sum_{i = 1}^{k}(2^{i}-2^{i-1})Y_i $. Is this definition of $X_k$ correct?
|
For each $n\geqslant 0$ we have
$$X_{n+1} = X_n + 2^{n}\,\mathbf 1_{\{T>n\}}\,Y_{n+1},$$ since the player bets $2^n$ at the $(n+1)$-th hand precisely when his first $n$ bets were all lost (and bets $0$ afterwards). Clearly $\mathbb E[X_0]=0$, and hence $$\mathbb E[X_{n+1}] = \mathbb E[X_n] + 2^{n}\,\mathbb E\left[\mathbf 1_{\{T>n\}}Y_{n+1}\right] = \mathbb E[X_n].$$ Therefore $\mathbb E[X_n]=0$ for all $n$. Further, since $\mathbf 1_{\{T>n\}}$ is $\mathcal F_n$-measurable and $Y_{n+1}$ is independent of $\mathcal F_n$ with mean $0$,
\begin{align}
\mathbb E[X_{n+1}\mid \mathcal F_n] &= \mathbb E[X_n + 2^{n}\,\mathbf 1_{\{T>n\}}Y_{n+1}\mid\mathcal F_n]\\
&= X_n + 2^{n}\,\mathbf 1_{\{T>n\}}\,\mathbb E[Y_{n+1}]\\
&= X_n,
\end{align}
so that $\{X_n\}$ is a martingale.
We have $\mathbb P(T=n) = \left(\frac12\right)^n$ for $n\geqslant 1$, so $$\mathbb P(T=\infty) = 1-\sum_{n=1}^\infty \mathbb P(T=n) = 1 - \sum_{n=1}^\infty \left(\frac12\right)^n = 0,$$ and therefore the game ends in a finite time almost surely.
It is straightforward to see that the expected time until the gambler stops betting is $$\mathbb E[T] = \frac1{1/2}=2. $$
The net profit of the gambler at the time when the gambler stops betting is $$X_T=X_{T-1} +2^{T-1}. $$ Now, $Y_i=-1$ for $i<T$, so
$$X_T = -\sum_{i=1}^{T-1}2^{i-1} +2^{T-1}=-\left(2^{T-1}-1\right)+2^{T-1}=1.$$
The gambler's maximum loss $L$ occurs just after the $(T-1)$-th bet is lost, when he is down $L=2^{T-1}-1$, so the expected maximum loss is given by
\begin{align}
\mathbb E[L] &= \mathbb E\left[2^{T-1}-1 \right]\\
&= \sum_{n=1}^\infty \left(2^{n-1}-1\right)\mathbb P(T=n)\\
&= \sum_{n=1}^\infty \left(2^{n-1}-1\right)\left(\frac12\right)^n\\
&= \sum_{n=1}^\infty \left(\frac12-\frac1{2^n}\right) =\infty.
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Paths of even and odd lengths between cube vertices I have an ordinary cube with the standard 8 vertices and 12 edges. Say I define a path as a journey along the edges from one vertex to another such that no edge is used twice. Then I pick two vertices that are connected by a single edge, i.e. a path of length 1. How would I go about proving that all the paths between these two points are always odd in length, i.e. each such path consists of an odd number of edges.
Ideally I'm after an explanation that could be understood by a high school student. Which is another way of saying I'm trying to help my daughter with her maths homework :)
|
Imagine coloring the vertices of the cube with two colors, say red and blue, so that the two endpoints of every edge get different colors (this is possible: color each vertex by the parity of the sum of its coordinates).
Now imagine you start from a red vertex. Taking one step, in any direction, you arrive at a blue vertex. It is easy to verify that this holds for each red vertex.
Now check the blue vertices. It is easy to see that from one blue vertex, with one step, you can only go to a red one.
So, if you start from a red vertex, at each odd step you will be at a blue vertex, and at each even step you will be again at a red vertex.
This works no matter how long the path is, and the path can even go back and forth.
So, since two adjacent vertices have different colors, only odd-length paths can join them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Liouville numbers and continued fractions First, let me summarize continued fractions and Liouville numbers.
Continued fractions.
We can represent each irrational number as a (simple) continued fraction by $$[a_0;a_1,a_2,\cdots\ ]=a_0+\frac{1}{a_1+\frac{1}{a_2+\frac{1}{\ddots}}}$$
where for natural numbers $i$ we have $a_i\in\mathbb{N}$, and we also have $a_0\in\mathbb{Z}$. Each irrational number has a unique continued fraction and each continued fraction represents a unique irrational number.
Liouville numbers.
An irrational number $\alpha$ is a Liouville number if, for each positive integer $n$, there exist integers $p,q$ (where $q>1$) with $$\left|\alpha-\frac pq\right|<\frac1{q^n}$$
The important thing here is that you can approximate Liouville numbers well, and the side effect is that these numbers are transcendental.
Now if we look at the Liouville's constant, that is, $L=0.1100010\ldots$ (where the $i!$-th digit is a $1$ and the others are $0$), then we can write
$$L=[0;9,1,99,1,10,9,999999999999,1,\cdots\ ]$$
The large numbers in the continued fraction make the convergents very close to the actual value, so that the number it represents is in that sense "well approximatable".
My question now is: can we bound the numbers in the continued fraction from below to be sure that the number it represents is a Liouville number?
|
Bounding the error.
The error between a continued fraction $[a_0;a_1,a_2,\ldots]$ and its truncation to the rational number $[a_0;a_1,a_2,\ldots,a_n]$ is given by
$$
|[a_0;a_1,a_2,a_3,\ldots] - [a_0;a_1,a_2,\ldots,a_n]|=\left|\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots]}\right) - \left(a_0 + \frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\right|=\left|\frac{[a_1;a_2,a_3,\ldots,a_n]-[a_1;a_2,a_3,\ldots]}{[a_1;a_2,a_3,\ldots]\cdot[a_1;a_2,a_3,\ldots,a_n]}\right| \le \frac{\left|[a_1;a_2,a_3,\ldots,a_n]-[a_1;a_2,a_3,\ldots]\right|}{a_1^2},
$$
terminating with $\left|[a_0;a_1,a_2,a_3,\ldots] - [a_0;]\right|\le 1/a_1$; by iterating this recursive bound we conclude that
$$
\left|[a_0;a_1,a_2,a_3,\ldots] - [a_0;a_1,a_2,\ldots,a_n]\right| \le \frac{1}{a_1^2 a_2^2 \cdots a_n^2}\cdot \frac{1}{a_{n+1}}.
$$
Let $D([a_0;a_1,a_2,\ldots,a_n])$ be the denominator of the truncation $[a_0;a_1,a_2,\ldots,a_n]$ (in lowest terms). Then we have a Liouville number if for any $\mu > 0$, the inequality
$$
a_{n+1} \ge \frac{D([a_0;a_1,a_2,\ldots,a_n])^\mu}{a_1^2 a_2^2 \cdots a_n^2}
$$
holds for some $n$. To give a more explicit expression, we need to bound the growth of $D$.
Bounding the denominator.
Let $D(x)$ and $N(x)$ denote the denominator and numerator of a rational number $x$ in lowest terms. Then
$$
D([a_0;a_1,a_2,\ldots, a_n])=D\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\\ =D\left(\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)=N([a_1;a_2,a_3,\ldots,a_n]),
$$
and
$$
N([a_0;a_1,a_2,\ldots, a_n])=N\left(a_0+\frac{1}{[a_1;a_2,a_3,\ldots,a_n]}\right)\\ =N\left(a_0+\frac{D([a_1;a_2,a_3,\ldots,a_n])}{N([a_1;a_2,a_3,\ldots,a_n])}\right) = a_0 N([a_1;a_2,a_3,\ldots,a_n]) + D([a_1;a_2,a_3,\ldots,a_n]) \\ = a_0 D([a_0;a_1,a_2,\ldots, a_n]) + D([a_1;a_2,a_3,\ldots,a_n]).
$$
So
$$
D([a_0;a_1,a_2,\ldots,a_n]) = a_1 D([a_1;a_2,a_3,\ldots,a_n]) + D([a_2;a_3,a_4,\ldots,a_n]),
$$
and the recursion terminates with $D([a_0;])=1$ and $D([a_0;a_1])=D(a_0+1/a_1)=a_1$. Since we have $D([a_0;a_1,a_2,\ldots,a_n]) \ge a_1 D([a_1;a_2,a_3,\ldots,a_n])$, we can say that $D([a_1;a_2,a_3\ldots,a_n]) \le \frac{1}{a_1}D([a_0;a_1,a_2,\ldots,a_n])$, and so
$$
D([a_0;a_1,a_2,\ldots,a_n]) \le \left(a_1 +\frac{1}{a_2}\right) D([a_1;a_2,a_3,\ldots,a_n]) \le (a_1 + 1)D([a_1;a_2,a_3,\ldots,a_n]).
$$
An explicit bound on the size of the denominator is therefore
$$
D([a_0;a_1,a_2,\ldots,a_n]) \le (a_1+1)(a_2+1)\cdots(a_n+1).
$$
Conclusion.
We conclude the following theorem:
The continued fraction $[a_0;a_1,a_2,\ldots]$ is a Liouville number if, for any $\mu > 0$, there is some index $n$ such that $$a_{n+1} \ge \prod_{i=1}^{n}\frac{(a_i + 1)^{\mu}}{a_i^2}.$$
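To see the large partial quotients concretely, here is a small sketch (an addition) computing the continued fraction of a truncation of Liouville's constant with exact rational arithmetic:

```python
# Continued fraction of sum_{k=1..4} 10^(-k!); the first terms should
# match the expansion [0; 9, 1, 99, 1, 10, 9, 999999999999, ...] quoted
# in the question.
from fractions import Fraction
from math import factorial

x = sum(Fraction(1, 10**factorial(k)) for k in range(1, 5))

cf = []
for _ in range(10):
    a = x.numerator // x.denominator
    cf.append(a)
    x -= a
    if x == 0:
        break
    x = 1 / x
print(cf)
```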
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
If the limit of a function when x goes to infinity is infinity, does that mean the integral from 1 to infinity is also infinity? If the limit of a function when x goes to infinity is infinity, does that mean the integral from 1 to infinity is also infinity?
Can we use the above to show that an integral we cannot evaluate explicitly is infinite?
Edit: Can we also say that if the limit of a function when x goes to infinity is a number then the integral is not infinity?
|
The integral
$$\int_{1}^\infty f(x)dx$$
is defined as
$$\lim_{b\to\infty} \int_1^b f(x)dx$$
so it should be simple for you to show that if there exist $x_0\geq 1$ and $M>0$ such that $f(x)>M$ for all $x>x_0$ (this is of course true if $\lim_{x\to\infty} f(x)=\infty$), then $$\int_{1}^\infty f(x)dx = \infty.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Extracting information from the graph of a polynomial Problem: Below is the graph of a polynomial with real coefficients
What can you say about the degree of the polynomial and about the sign of the first three and last three coefficients when written in the usual manner.
My attempt: let $P(x)=a_nx^n+a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+ \cdots a_2x^2+a_1x+a_0$
Clearly the degree is even as the polynomial has even (8) number of roots.
Also $P(0)>0$, $P'(0)=0$, and $P''(0) <0$ (as we can see that $0$ is a local maximum and the tangent lies above the graph at $0$).
This is wrong, as pointed out in the answer: Also, it's obvious that the sum of the roots is positive (from the figure). $a_{n-1} >0$ as $a_n < 0$, since the polynomial has a global maximum and no global minimum.
But I am unable to comment on the sign of $a_{n-2}$.
So, I would request someone to help me.
|
You're right about $a_0$, $a_1$, $a_2$.
It doesn't seem to be stated that all of the real zeroes are in the $x$-range shown on the graph, but if you do assume that, you can certainly also conclude that the degree is even, and that $a_n$ is negative.
You can say more about the degree than that, though. Clearly it must be at least $8$ in order to allow for all of the zeroes. But if you count stationary points, you can see it must be at least $12$ -- and if you count inflection points it must be at least $18$.
You can't say anything about $a_{n-1}$ and $a_{n-2}$, though, even with the assumption that you can see all of the zeroes.
For example, if the degree of the polynomial you see is $2n$, then adding a sufficiently small multiple of $-x^{2n+4}\pm x^{2n+3} \pm x^{2n+2}$ will not modify the graph visibly, and will not create any new zeroes -- so you cannot see whether this has already happened.
It is true that $-\frac{a_{n-1}}{a_n}$ is the sum of the roots, but you don't know what the sum of the roots is. All you can judge from the graph is the sum of the real roots; you can't see where there might be complex roots, whose real parts can easily dominate the sum of the roots.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Fokker Planck and SDE I have the following Fokker-Planck equation in spherical coordinates $(\theta,\phi)$:
$$\frac{\partial f}{\partial t}= D\cot\theta\,\frac{\partial f}{\partial \theta} + \frac{1}{\sin^2\theta}\,\frac{\partial^2 f}{\partial \phi^2} - A\left[\sin\theta\,\frac{\partial f}{\partial\theta} +2\cos\theta\, f\right] \tag{1}$$
where $D$ and $A$ are constants. I want to write it in stochastic differential equation form. I have no experience with stochastic differential equations, but I am reading Gardiner's book ''Handbook of Stochastic Methods'' (2nd edition).
In the book, the Fokker-Planck equation in one dimension:
$$\frac{\partial f}{\partial t}= -\frac{\partial}{\partial x}\left[A(x,t)\, f(x,t)\right]+\frac{1}{2}\frac{\partial^2}{\partial x^2}\left[B(x,t)\, f(x,t)\right] \tag{2}$$
It can be written in Itô SDE form:
$$ dx(t)=A(x,t) dt +\sqrt{B(x,t)}\; dW(t) \tag{3}$$
Note that the Fokker-Planck equation needs to be in a specific format, as in (2), before one can write down its equivalent SDE. Any help will be appreciated.
|
The relation between the Fokker-Planck equation and the associated SDE has been investigated by Figalli (2008) and is known as the (stochastic analogue of the) superposition principle.
Michael Röckner and colleagues extended the superposition principle to many different situations, e.g. for McKean-Vlasov SDEs or non-local Fokker-Planck equations.
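For what it's worth, once the equation is massaged into the form (2), simulating the associated Itô SDE (3) is routine. Here is a minimal Euler-Maruyama sketch (the drift and diffusion functions below are placeholders, not the ones from equation (1)):

    import numpy as np

    def euler_maruyama(A, B, x0, T, nsteps, seed=0):
        # simulate dx = A(x,t) dt + sqrt(B(x,t)) dW on [0, T]
        rng = np.random.default_rng(seed)
        dt = T / nsteps
        x = np.empty(nsteps + 1)
        x[0] = x0
        for n in range(nsteps):
            t = n * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            x[n + 1] = x[n] + A(x[n], t)*dt + np.sqrt(B(x[n], t))*dW
        return x

    # placeholder example: Ornstein-Uhlenbeck, A(x,t) = -x, B(x,t) = 1
    path = euler_maruyama(lambda x, t: -x, lambda x, t: 1.0,
                          x0=1.0, T=10.0, nsteps=10_000)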
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What's the negation of “At least three of the sentence are false”? Following with the question I asked before
What's the negation of "One of the sentence is false"?
The negation of “At least three of the sentence are false” would be "any, one or two of the sentence is/are false"?
|
With first-order logic, the original statement is:
$\exists x \ \exists y \ \exists z \ [(x \ne y \land x \ne z \land y \ne z) \ \land \ (False(x) \land False(y) \land False(z))]$.
Thus, negating it:
$\forall x \ \forall y \ \forall z \ \lnot [(x \ne y \land x \ne z \land y \ne z) \ \land \ (False(x) \land False(y) \land False(z))]$,
i.e.
$\forall x \ \forall y \ \forall z \ [(x \ne y \land x \ne z \land y \ne z) \ \to \ \lnot (False(x) \land False(y) \land False(z))]$,
i.e.
$\forall x \ \forall y \ \forall z \ [(x \ne y \land x \ne z \land y \ne z) \ \to \ ((False(x) \land False(y)) \to \lnot False(z))]$.
The final formula matches with the "informal" negation of the original statement:
"there are at most two false sentences."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1668874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Show that $lim_{n \rightarrow \infty}m(O_n)=m(E)$ when $E$ is compact. Below is an attempt at a proof of the following problem. Any feedback would be greatly appreciated. Thx!
Let $E$ be a set and $O_n = \{x: d(x, E) < \frac{1}{n}\}$.
Show
1. If $E$ is compact then $m(E) = \lim_{n \rightarrow \infty} m(O_n)$.
2. This is false for $E$ closed and unbounded or $E$ open and bounded.
Note that all sets are subsets of Euclidean space, and "measurable" refers to Lebesgue measurable.
Suppose $E$ is compact. Then $m(E) < \infty$. Also, since each $O_n$ is an open set of $\mathbb{R}^d$, $m(O_n) < \infty$. If either of the following is true, we have that $lim_{n \rightarrow \infty}m(O_n)=m(E)$:
$O_n \nearrow E$
$O_n \searrow E$
Now $\bigcap_{n=1}^{\infty} O_n = \{ x:d(x,E) = 0\}$.
That is, $x \in \bigcap_{n=1}^{\infty}O_n \iff \exists n> \frac{1}{\epsilon},\ n \in \mathbb{N} \iff x\in \{x \mid d(x,E) =0\}$.
Then we have that $\bigcap_{n=1}^{\infty}O_n = E$ and $O_n \searrow E$. Thus $ \ m(E) = lim_{n \rightarrow \infty} m(O_n)$.
Suppose $E$ is closed and unbounded. Let $E = \{(x,y): y = 2 \}$ and $O_n = \{(x,y): d(x,E) < \frac{1}{n}\}$.
Since $E$ is a line in $\mathbb{R}^2$, $m(E) = 0$.
Also, since the measure of a rectangle in $\mathbb{R}^d$ is its volume, $m(O_n) = \left| O_n \right| = \infty$, since the rectangle $O_n$ is unbounded.
It is therefore apparent that $m(E) \not = lim_{n \rightarrow \infty} m(O_n)$.
Suppose $E$ is open and bounded.
Let $E=(0,1)$ and $O_n = \{x \mid d(x,E) < \frac{1}{n}\}$.
$\mathbb{R}-E$ is closed and $O_n \in \mathbb{R}-E$.
Since $\mathbb{R}-E$ contains all its limit points $lim_{n \rightarrow \infty}O_n = O \in \mathbb{R}-E$.
Thus $lim_{n \rightarrow \infty} m(O_n) \neq m(E)$.
|
I refer to your three settings as (1), (2), and (3).
(1) What you write here is confusing. You only have to prove that $\bigcap_nO_n = E$. Clearly, $E\subset\bigcap_nO_n$. Now show that $\bigcap_n O_n\subset E$.
(2) This is a correct counterexample.
(3) The counterexample is false. You have $\bigcap_n O_n = [0,1]$ and thus $\lim m(O_n) = m([0,1]) = m(E) + m(\{0,1\}) = m(E)$. The claim can still fail for open bounded sets, though: take $E$ to be an open dense subset of $(0,1)$ with $m(E) < 1$ (e.g. the complement of a fat Cantor set). Then $d(x,E)=0$ for every $x \in [0,1]$, so $O_n \supseteq [0,1]$ for all $n$, and $\lim m(O_n) \geq 1 > m(E)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Finding a number of twin primes less than a certain number I was doing some problems on number theory, and I came across the following question: "How many twin primes less than 100 exist?" I was wondering if anyone could tell me what method would be used to solve this problem and others too.
Thanks!
|
HINT. One idea is to use Wilson's theorem and (since we only need primes less than $100$) to look for simultaneous solutions of the two congruences
$$(p-1)!+1\equiv 0 \pmod p$$ $$(p+1)!+1\equiv 0 \pmod {p+2}$$
With some care this could work, I guess. For larger primes $p$, however, looking them up in tables of primes or twin primes may be the only practical "method".
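For a bound as small as $100$, a direct sieve is of course the simplest approach; a quick Python sketch:

    def primes_below(n):
        sieve = [True]*n
        sieve[0] = sieve[1] = False
        for p in range(2, int(n**0.5) + 1):
            if sieve[p]:
                sieve[p*p::p] = [False]*len(sieve[p*p::p])
        return [p for p in range(n) if sieve[p]]

    ps = set(primes_below(100))
    twins = [(p, p + 2) for p in sorted(ps) if p + 2 in ps]
    print(twins)  # 8 pairs: (3,5), (5,7), (11,13), (17,19), (29,31), (41,43), (59,61), (71,73)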
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find all possible values of $c^2$ in a system of equations. Numbers $x,y,z,c\in \Bbb R$ satisfy the following system of equations:
$$x(y+z)=20$$
$$y(z+x)=13$$
$$z(x+y)=c^2$$
Find all possible values of $c^2$.
To try to solve this, I expanded the equations:
$$xy+xz=20$$
$$yz+xy=13$$
$$xz+yz=c^2$$
Then I subtracted the first equation from the second one to get:
$$xz-yz=7$$
I added and subtracted this equation with the 3rd and got the following equations:
$$2yz=7-c^2$$
$$2xz=7+c^2$$
I then added these equations, factored out $2z$ and divided by $2$ to get:
$$z(x+y)=7$$
So, I found one possible value of $c^2 = 7$. How do I find the other values if they exist or how do I prove that there are no other values if they don't? Thanks!
|
Let $c^2 = s$. Eliminating $x$ and $y$, you get an equation in $s$ and $z$:
$$ s^2 + 2 z^2 s - 66 z^2 - 49 = 0. $$
Thus
$$ z^2 = \dfrac{s^2 - 49}{66 - 2 s}$$
Since $z^2 \ge 0$, we need either $s \le -7$ or $7 \le s < 33$. This corresponds to $\sqrt{7} \le c < \sqrt{33}$. We then have
$$ \eqalign{y &= \dfrac{z (33 - c^2)}{c^2 + 7}\cr
x &= \dfrac{z(33 - c^2)}{c^2-7} \cr}$$
In particular, $c = \sqrt{7}$ is not possible. So the result is that
$\sqrt{7} < c < \sqrt{33}$.
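A quick sanity check of these formulas with sympy, picking e.g. $s = c^2 = 20$, which lies in $(7, 33)$:

    from sympy import Rational, sqrt, simplify

    s = Rational(20)                       # any value with 7 < s < 33
    z = sqrt((s**2 - 49) / (66 - 2*s))
    y = z*(33 - s)/(s + 7)
    x = z*(33 - s)/(s - 7)
    print(simplify(x*(y + z)), simplify(y*(z + x)), simplify(z*(x + y)))  # 20, 13, 20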
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Limits: $\lim_{n\rightarrow\infty}\frac{nx}{1+n^2x^2}$ on $I=[0,1]$ I'm doing an assignment for my analysis course on the uniform convergence. And I have to assess the uniform convergence of the sequence $$f_n(x)=\frac{nx}{1+n^2x^2}$$ on the intervals $I=[1,2]$ and $I=[0,1]$. Before assessing whether there is uniform convergence, we need to provide the pointwise limit of the sequences. Now, for the first interval everything is okay, but I'm running into issues with finding the pointwise limit for the second interval.
It's clear to me that for $0<x\leqslant1$ the limit of the sequence should be zero. But when $x=0$ I'm not sure if this is also the case. Clearly filling in $x=0$ for any finite $n$ would give zero for $f_n(0)=0$, but how can we be sure this is still the case when $n\rightarrow\infty$?
Also, for the uniform convergence part of the assignment, I've gotten as far as seeing that I should use the fact that $f_n(\frac{1}{n})=\frac{1}{2}$, but I'm not really sure yet how to use this.
Any help is much appreciated!
|
For $x\in [1,2]$ we have
$$0 < f_n(x) = \frac{nx}{1+n^2x^2} < \frac{nx}{n^2x^2} = \frac{1}{nx} \le \frac{1}{n}.$$
This shows $f_n \to 0$ uniformly on $[1,2].$ (In fact $f_n \to 0$ on any $[a,\infty),$ where $a>0.$)
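A small numerical illustration of the difference between the two intervals (the supremum on $[0,1]$ is pinned at $1/2$ by the point $x = 1/n$, while on $[1,2]$ it decays like $1/n$):

    import numpy as np

    def f(n, x):
        return n*x/(1 + n**2 * x**2)

    for n in [1, 10, 100, 1000]:
        x01 = np.linspace(0, 1, 100_001)
        x12 = np.linspace(1, 2, 100_001)
        print(n, f(n, x01).max(), f(n, x12).max())  # first sup stays near 0.5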
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How to solve Schrödinger equation numerically with time dependent potential How to solve the Schrödinger equation with time dependent potential in 1D or 3D (if it is easier):
$$i\hbar\dfrac{\partial \Psi}{\partial t}(x,t)=\left(-\dfrac{\hbar^2}{2m}\nabla^2-\frac{e^2}{x+\alpha}-exE(t)\right)\Psi(x,t)$$
where $E(t) = E_0 \exp(-t^2/\tau^2)\sin(\omega_0 t)$ is a Gaussian pulse in time, $\alpha$ is a constant and $e$ is a constant (the electron charge). $\Psi(x,0)$ is the hydrogen ground state.
What would it mean to find the solution in a self consistent manner?
|
This is not an easy topic to discuss, since it would take some writing up; you can find the detailed methods on Google. $1D$ is simpler. You can use the split-operator time propagator, where the $x$-type and $d/dx$-type operators are treated differently. When applying the derivative operators, one usually makes use of the Fourier transform, for which very good open-source software packages are available (FFTW, the Fastest Fourier Transform in the West). I am not sure there is any meaning to "self consistent manner" in this context. You need to specify the initial wave function, though.
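To make the split-operator idea concrete, here is a minimal 1D sketch in Python (atomic units $\hbar = m = 1$; the soft-core smoothing constant and the Gaussian initial state are placeholder assumptions, since in practice one would first obtain the ground state, e.g. by imaginary-time propagation):

    import numpy as np

    N, L = 1024, 200.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    dt, nsteps = 0.01, 1000

    def V(x, t):
        # soft-core Coulomb plus the pulsed field from the question
        E0, tau, w0 = 0.05, 10.0, 0.5
        return -1.0/np.sqrt(x**2 + 2.0) - x*E0*np.exp(-(t/tau)**2)*np.sin(w0*t)

    psi = np.exp(-x**2/2).astype(complex)           # placeholder initial state
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/N))  # normalize

    for n in range(nsteps):
        t = n*dt
        psi *= np.exp(-0.5j*dt*V(x, t))                             # half potential step
        psi = np.fft.ifft(np.exp(-0.5j*dt*k**2) * np.fft.fft(psi))  # full kinetic step
        psi *= np.exp(-0.5j*dt*V(x, t + dt))                        # half potential step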
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Probability clarification $\chi^2$ distribution table I'm having some trouble understanding the solution of this probability question.
Ammeters produced by a manufacturer are marketed under the specification that the standard deviation of gauge readings is no larger than $.2$ amp. One of these ammeters was used to make ten independent readings on a test circuit with constant current. If the sample variance of these ten measurements is $.065$ and it is reasonable to assume that the readings are normally distributed, do the results suggest that the ammeter used does not meet the marketing specifications? [Hint: Find the approximate probability that the sample variance will exceed $.065$ if the true population variance is $.04$.]
The solution is as follows:
At the step $P(9S^2 / 0.04 \ge 14.925)$, where does the $0.10$ come from? I looked at the $\chi^2$ distribution table with $9$ degrees of freedom, but I still don't understand.
|
$\Bbb P\left(\frac{(n-1)S^2}{\sigma^2}\geq \frac{(n-1)s^2}{\sigma^2}\right) = \Bbb P(\chi^2_{9}\geq 14.925) = 0.0930\ldots \approx 0.10$
The value comes from lookup tables or an online calculator.
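For instance, with scipy:

    from scipy.stats import chi2
    print(chi2.sf(14.925, df=9))  # survival function, ~0.093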
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving that the line joining $(at_1^2,2at_1),(at_2^2,2at_2)$ passes through a fixed point based on given conditions on $t_1,t_2$ Problem:If $t_1$ and $t_2$ are roots of the equation $t^2+kt+1=0$ , where $k$ is an arbitrary constant. Then prove that the line joining the points $(at_1^2,2at_1),(at_2^2,2at_2)$ always passes through a fixed point.Also find that point.
I have done this question by a fairly difficult procedure(at least I think so) ..by finding the equation of the line passing through the given points as
$$y(t_2+t_1)-2at_1t_2-2at_1^2=2x-2at_1^2$$
And then using the relation from the quadratic equation given and writing it as $$(x+a)+k(\frac{y}{2})=0$$ which is the equation of family of lines and so which gives the fixed point as $(-a,0)$.
But actually I found this question in exercises of parabola and so I was thinking if it was possible to solve this question without this family of lines and all that (that is by using geometrical properties of parabola)...I have been thinking about this ...and I could only notice that since $(at^2,2at)$ is the parametric equation of a general point on a parabola $y^2=4ax$ so the points $(at_1^2,2at_1),(at_2^2,2at_2)$ can be thought of as two given points on parabola $y^2=4ax$ where $t_1t_2=1$ but I am not able to get that fixed point (which apparently is a point on the directrix of parabola $y^2=4ax$ from the above method that is $(-a,0)$) based on the above information by using the geometrical properties of parabola.
Any help is appreciated on this.
|
Tangents at the ends of a parabola.
With some reference to the above:
When the roots are solved, the tangents drawn at these particular slope pairs all meet on the directrix when the chord between the $t_1$ and $t_2$ points of tangency passes through the focus.
EDIT 1
There appears to be an incorrect tangent/slope-supplying equation at the outset:
$$ t^2 + k t + 1 = 0 $$
whereas the correct equation for the given parametrization ought to be
$$ t^2 + k t - 1 = 0 $$
EDIT 3:
(It is now conceded that the former sign is correct for all rays through $(-a,0)$.)
By a change of sign in the constant term we get two different "foci".
The sum of the roots at either end of a parabola can be arbitrary, but since the tangents at either end of a focal chord are perpendicular (meeting on the directrix, but that is beside the point), the product of their slopes must necessarily be $-1$ and not $+1$, as we can see directly from the coefficients of the quadratic equation.
This property of the parabola is easy to verify, and I shall not do it here.
Taking the correct sign means changing the sign of $a$: $a\rightarrow -a$
$$ (x- a)+k(\frac{y}{2})=0 $$
EDIT 2:
(To find the fixed point by the C-discriminant: partially differentiate w.r.t. $k$ to get $y=0$, and so $(x=a, y=0)$ is the fixed point.)
which is the equation of all focal rays passing through a fixed point, viz., the focus of the parabola.
(BTW, it will also be an interesting problem to see how the (red) directrix line is fixed.)
I am giving a sketch for confirmatory reference.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Direct limits of simple C*-algebras are simple Let $S$ be a non-empty set of simple C$^*$-subalgebras of a C$^*$-algebra $A$. Let us also suppose that $S$ is upwards-directed and that the union of all elements of $S$ is dense in $A$. Then $A$ is simple.
I can't show this. Could you tell me how to show this?
|
Write $S=\{A_j\}$. Then, for any $a\in A$, we have $a=\lim a_j$ with $a_j\in A_j$.
Now fix $I\subset A$, a nonzero ideal. As $A_j\cap I$ is an ideal of $A_j$, we either have $A_j\cap I=0$ or $A_j\cap I=A_j$. We also have that $\{A_j\cap I\}$ is an increasing net of ideals of $A$.
For $a\in I$ and $a=\lim a_j$ with $a_j\in A_j$, we can use an approximate unit $\{e_k\}$ of $I$ to get $a=\lim_kae_k=\lim_k\lim_ja_je_k$. As $a_je_k\in A_jI=A_j\cap I$ (proof here), we get that $I=\overline{\bigcup_jA_j\cap I}$. This shows that, eventually, $A_j\cap I=A_j$. Thus
$$
I=\overline{\bigcup_jA_j\cap I}=\overline{\bigcup_jA_j}=A.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1669927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find $\sum r\binom{n-r}{2}$
Let $A=\{1,2,3,\cdots,n\}$. If $a_i$ is the minimum element of set $A_i$ where $A_i\subset A$ such that $n(A_i)=3$, find the sum of all $a_i$ for all possible $A_i$
Number of subsets with least element $1$ is $\binom{n-1}{2}$
Number of subsets with least element $r$ is $\binom{n-r}{2}$
Sum of all $a_r$ is $r\binom{n-r}{2}$
How do I find $$\sum_{r=1}^{n-2}r\binom{n-r}{2}$$
|
Two possibilities:
1. Write $r=\binom r1$ and apply a variation of the Vandermonde identity.
2. If you add an element $0$ to your set, then $a_i$ counts the number of ways a fourth element can be chosen, less than $a_i$, and therefore less than all elements of $A_i$. One easily checks this gives a bijective correspondence with $4$-element subsets of $\{0,1,\ldots,n\}$ (in which $a_i$ gives the second smallest element of the subset). The number of such subsets is $\binom{n+1}4$.
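A quick numerical check of the resulting identity $\sum_{r=1}^{n-2} r\binom{n-r}{2} = \binom{n+1}{4}$ in Python:

    from math import comb
    for n in range(3, 20):
        assert sum(r*comb(n - r, 2) for r in range(1, n - 1)) == comb(n + 1, 4)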
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Switching limits: $n \rightarrow \infty$ for $n\rightarrow 0$ I feel like this question may have already been asked, but despite my searches, I could not find it.
I am looking to prove that $\lim\limits_{\epsilon \rightarrow 0} \int_{[b, b+\epsilon]} g=0$ for an integrable function $g$. To do so, I would like to use the Dominated Convergence Theorem, as $g$ is my integrable dominating function. However, I realize that the DCT is defined for the limit as $n$ goes to infinity. I am looking to somehow switch this $\epsilon$ for $\frac{1}{k}$ and then take $k$ to infinity and use DCT. However, I feel that it cannot be so simple. How do I go about applying DCT to limits which are not going to infinity?
Any hints would be appreciated.
|
Your approach of setting $\epsilon = 1/k$ can be made to work. However, you need to make some adjustments to your argument:
1. Note that it suffices to show that $\int_{[b,b+\epsilon]}|g|\to 0$.
2. Note that for $\epsilon \in [1/(k+1),1/k]$, we have
$$
\int_{[b,b+1/(k+1)]}|g| \leq
\int_{[b,b+\epsilon]}|g| \leq
\int_{[b,b+1/k]}|g|
$$
Now, use your argument to show that $\int_{[b,b+1/k]}|g| \to 0$, and by the above arguments that's enough.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
is $E(x),E(y) ↦ E(x+y)$ well defined?
Let $x,y\in \mathbb R$ and $x\sim y \iff x-y\in \mathbb Z$. $E(x)$ is
the equivalence class containing $x$.
a) Is $(E(x),E(y)) \mapsto E(x+y)$ well defined?
(Here $\mapsto$ denotes an operation.)
b) Is $(E(x),E(y)) \mapsto E(xy)$?
|
If the notation $(E(x),E(y))$ denotes an operation, then $(E(x),E(y))=(E(z),E(w))$.
If the notation $(E(x),E(y))$ denotes a pair, then $(E(x),E(y))\neq(E(z),E(w))$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
finding the area using integral
Find the area enclosed between the $x$- and $y$-axes, the line $x=\pi$, and the curve $y=\sin x$, computing it with respect to $x$ and with respect to $y$.
According to x: $\int_0^\pi \sin x \, dx=-\cos(\pi)+\cos(0)=2$
According to $y$: $\int_0^1 (\pi- \sin^{-1} y)\,dy=\left[\pi y-\sin^{-1}y + \sqrt{1-y^2}\right]_0^1 = \pi-\frac{2}{\pi}+1=\frac{2}{\pi}+1$
Where did I get it wrong?
|
$\sin^{-1}$ is the inverse of the restriction of the function $\sin$ to the interval $[-\pi/2,\pi/2]$. But you're working with areas outside of that interval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Describing the motion of a particle (sphere) If I have the following position at time t : $\hat{r}(t) = 3\cos(t)\hat{i} + 4\cos(t)\hat{j} + 5\sin(t)\hat{k}$ , then how can I tell if the particle's path lies on a sphere or not? If e.g. the second term was simply $t$ and not a trigonometric function, I know that it would be a spiral, since $t$ will change along (in this case) y-axis on this curve, but if it was a constant then we would simply have an ellipse; but I don't know how to explain or derive a sphere from the given position vector, I tried to use spherical coordinates but to no avail. Could someone explain how one can see if the path of the particle lies on the sphere? If it indeed is a sphere, then from the norm of the position we know that the radius will be 5 units.
|
Let $t = \theta + \frac{\pi}{2}$, so that $\hat{r}(t)$ gives $x = 3\cos(\theta + \frac{\pi}{2}) = -3\sin\theta, y= -4\sin \theta, z = 5\cos\theta$.
Comparing to the spherical coordinates: $x = r\sin\theta \cos\phi, y = r\sin\theta \sin\phi, z = r\cos\theta$.
Then radius of sphere $r = 5$, but we need to find the fixed azimuth $\phi$ such that $5\cos \phi = -3$, and $5 \sin\phi = -4$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
}
|
Difficulty in understanding a part in a proof from Stein and Shakarchi Fourier Analysis book.
Theorem 2.1 : Suppose that $f$ is an integrable function on the circle with $\hat f(n)=0$ for all $n \in \Bbb Z$. Then $f(\theta_0)=0$ whenever $f$ is continuous at the point $\theta_0$.
Proof : We suppose first that $f$ is real-valued, and argue by contradiction. Assume, without loss of generality, that $f$ is defined on $[-\pi,\pi]$, that $\theta_0=0$, and $f(0) \gt 0$.
Since $f$ is continuous at $0$, we can choose $ 0\lt \delta \le \frac \pi2$, so that $f(\theta) \gt \frac {f(0)}2$ whenever $|\theta| \lt \delta$. Let $$p(\theta)=\epsilon + \cos\theta,$$
where $\epsilon \gt 0$ is chosen so small that $|p(\theta)| \lt 1 - \frac \epsilon2$ whenever $\delta \le |\theta| \le \pi$. Then, choose a positive $\eta$ with $\eta \lt \delta$, so that $p(\theta) \ge 1 + \frac \epsilon2$ for $|\theta| \lt \eta$. Finally, let $p_k(\theta)=[p(\theta)]^k$, and select $B$ so that $|f(\theta)| \le B$ for all $\theta$. This is possible since $f$ is integrable, hence bounded.
By construction, each $p_k$ is a trigonometric polynomial, and since $\hat f(n)=0$ for all $n$, we must have $\int_{-\pi}^{\pi} f(\theta)p_k(\theta)\,d\theta=0$ for all $k$.
I understood the first paragraph clearly. But the rest is not making it's way into my head.
1. In the beginning of the second paragraph, how does the given range work for choosing $\delta$? If continuity is used to get the range, then how?
2. How can we choose $\epsilon$ so small that $|p(\theta)| \lt 1 - \frac \epsilon2$ whenever $\delta \le |\theta| \le \pi$?
3. How can we choose a positive $\eta$ with $\eta \lt \delta$ so that $p(\theta) \ge 1+ \frac \epsilon2$ for $|\theta| \lt \eta$?
4. Why must we have $\int_{-\pi}^{\pi}f(\theta)p_k(\theta)\,d\theta=0$ for all $k$?
|
1. Continuity tells us we can choose $\delta>0$ so that $|f(\theta) - f(0)|< \frac{f(0)}{2}$ if $|\theta - 0| < \delta$ (which in particular implies $f(\theta) > \frac{f(0)}{2}$). Once the existence of such a $\delta$ is established, we can assume it is as small as we need; in particular, we're free to take it to be less than $\pi/2$, without affecting the inequality $f(\theta) > \frac{f(0)}{2}$. If you want to see this more concretely, then take the first $\delta$ that we obtained through continuity, and set $\delta' = \min(\delta,\frac{\pi}{2})$. Then $0<\delta' \le \frac{\pi}{2}$, and $|\theta|<\delta'$ implies $|\theta|<\delta$ implies $f(\theta) > \frac{f(0)}{2}$.
2. Here we're using the fact that $\cos\theta < 1$ when $\theta$ is away from $0$. To make this quantitative, $\cos\theta$ ranges from a maximum of $\cos\delta$ to a minimum of $-1 = \cos\pi$ on the set $\{\delta \leq |\theta|\leq\pi\}$. Therefore
$$
\epsilon - 1 \leq p(\theta) \leq \epsilon + \cos\delta
$$
when $\delta \leq |\theta| \leq \pi$. On the left, we can throw out an extra $\epsilon/2$:
$$
\frac{\epsilon}{2} - 1 \leq \epsilon - 1 \leq p(\theta).
$$
On the right, we have
$$
\epsilon + \cos\delta = \epsilon + 1 - (1-\cos\delta) = \epsilon + 1 - \lambda.
$$
Note $\lambda > 0$. If we choose $\frac{3\epsilon}{2}< \lambda$, then $-\lambda < -\frac{3\epsilon}{2}$, and
$$
\epsilon + 1 - \lambda < 1 - \frac{\epsilon}{2}.
$$
Therefore if we choose $\epsilon < \frac{2}{3}(1-\cos\delta)$, and obtain $\delta$ as above, then
$$
\frac{\epsilon}{2} - 1 \leq p(\theta) \leq 1 - \frac{\epsilon}{2},
$$
or equivalently
$$
|p(\theta)| \leq 1 - \frac{\epsilon}{2}
$$
whenever $\delta \leq |\theta| \leq \pi$.
3. This is similar to both 1 and 2. Now we're using the fact that near $\theta = 0$, $p(\theta) \sim \epsilon + 1$. By continuity of $\cos\theta$ at $\theta = 0$, there exists $\eta>0$ such that if $|\theta| < \eta$, then $|1 - \cos\theta| < \frac{\epsilon}{2}$. This inequality implies $\cos\theta > 1 - \frac{\epsilon}{2}$. Therefore
$$
p(\theta) = \epsilon + \cos\theta > 1 + \frac{\epsilon}{2}.
$$
Again, once the existence of such $\eta>0$ is established, then we are free to take it to be as small as we want; in particular we may specify that $\eta<\delta$.
4. For example, look at $p_1(\theta) = p(\theta) = \epsilon + \cos\theta$. Then
$$
\int_{-\pi}^\pi f(\theta)p_1(\theta) d\theta = \epsilon\int_{-\pi}^\pi f(\theta) d\theta + \int_{-\pi}^\pi f(\theta)\cos(\theta) d\theta .
$$
Note that the first integral on the RHS is just $\epsilon\hat{f}(0)$, so this is $0$. The second integral is just the first Fourier cosine coefficient: in fact,
$$
\int_{-\pi}^\pi f(\theta)\cos\theta d\theta = \int_{-\pi}^\pi \text{Re}(f(\theta)e^{i\theta} )d\theta = \text{Re}\left(\int_{-\pi}^\pi f(\theta)e^{i\theta} d\theta\right) = \text{Re}\hat{f}(1) = 0.
$$
(I may be off by a constant prefactor of $\frac{1}{2\pi}$, but this is unimportant.) Here I'm using the assumption that $f$ is real-valued to take $f(\theta)$ under the real part. For higher powers of $p(\theta)$, you'll see the other Fourier coefficients come up just like this, because $p(\theta)^k$ is a trigonometric polynomial (i.e. a linear combination of $\sin(k\theta)$ and $\cos(k\theta)$ over all $k$). So all of these integrals vanish.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Proving a sequence is increasing Prove the sequence defined by $a_0=1$ and $a_{n+1}=\sqrt{3a_n+4}$ is increasing for all $n\ge0$ and $0\le a_n\le4$
I know that a sequence is increasing if $a_n\le a_{n+1}$ but I don't know what information I can use to prove that since all I have is $a_{n+1}$ and a base case of n=0. Am I able to just test integers 0 to 4 to prove it is true? Or is there another method?
|
Observe that if $1 \le x < 4$, then $x^2 - 3x - 4 < 0$, i.e. $x < \sqrt{3x+4}$. Prove by induction that $1 \le a_n < 4$. Then, set $x = a_n$ to get that $\{a_n\}$ is increasing.
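A few iterations make the behaviour plain (increasing, bounded by $4$):

    from math import sqrt
    a = 1.0
    for _ in range(8):
        print(a)
        a = sqrt(3*a + 4)   # 1.0, 2.645..., 3.455..., approaching 4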
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Strangely defined ball compact in $L^p(I)$ or not? Let $I = (0, 1)$ and $1 \le p \le \infty$. Set$$B_p = \{u \in W^{1, p}(I) : \|u\|_{L^p(I)} + \|u'\|_{L^p(I)} \le 1\}.$$When $1 < p \le \infty$, does it necessarily follow that $B_p$ is compact in $L^p(I)$?
|
Note that
$B_p$ compact in $L^p$ $\Rightarrow$ $B_p$ closed in $L^p$ $\Rightarrow$ $B_p$ complete in $L^p$ $\Rightarrow$ $W^{1,p}(I)$ complete w.r.t. $\|\cdot\|_{L^p}$
So, $B_p$ isn't compact in $L^p$.
However, $B_p$ is relatively compact in $L^p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Equivalence Relations and Classes 3 I am studying for a discrete math exam that is tomorrow and the questions on equivalence classes are not making sense to me.
Practice Problem: Let $\sim$ be the relation defined on the set of pairs $(x, y) \in \mathbb{R}^2$ such that $(x, y) \sim (p, q)$ if and only if $x^2 + y^2 = p^2 + q^2$. Find three elements in the equivalence class $[(0, 1)]$.
The example solution shows $(0,1),(1,0),(-1,0)$; can somebody explain why those points belong to this equivalence class? Thank you!
|
The equivalence relation is defined such that $(x,y) \sim (p,q)$ if $x^2 + y^2 = p^2 + q^2$.
If $(x,y) \in[(0,1)]$, then $(x,y) \sim (0,1)$, so it must satisfy
$$
x^2 + y^2 = 0^2 + 1^2 = 1
$$
Which is the equation of the unit circle, which all three of those points are a part of.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Analytic vs. Analytical I am trying write an article.
Little Summary:
I have developed some tools to analyze derivative of some function $f$. This characterization leads to better results than previous works that only studied the function itself.
I am trying to say that:
"Our analytic view of the problem provides a better characterization of blah blah blah "
When I say "analytic view" I mean that we look not only at the function but also at its derivatives.
My question:
"Our analytic view of the problem provides a better characterization of blah blah blah "
or
"Our analytical view of the problem provides a better characterization of blah blah blah "
|
If you have doubts, most probably some other people will have them too!
It is always better to be clear, although it is sometimes a difficult decision where to stop (it depends on whether it is for a paper or for a book, whether it is for an abstract or an introduction, etc.).
Summing up, it is much better to write something like: "By looking not only at the function itself but also at its derivatives, in contrast to what Mr. X did in [Y], we are able to provide a better characterization of".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1670990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Combinatorics problem involving selection of digits How many 10-digit decimal sequences using $(0, 1, 2, . . . , 9)$ are there
in which digits 3, 4, 5, 6 all appear?
What I did to solve this question was this.
The number of ways to select $3,4,5,6$ from $10$ numbers is
$$\binom{10}{4}$$
and the ways to fill the rest of the digits would be $10^{10-4}=10^{6}$
So I thought that the total number of possibilities would be
$$\binom{10}{4}*10^{6}$$
I was wondering what is wrong with this reasoning?
The answer that you get using inclusion-exclusion principle is different.
|
First, your description of $10 \choose 4$ is wrong, because there is only one way to select specifically those digits; what you really should be saying is that there are $10 \choose 4$ ways to select the positions for the $3,4,5,6$. Second, you should multiply by $4!$ for the possible orders of $3,4,5,6$. Third, you are double counting numbers that have additional copies of $3,4,5,$ and/or $6$. For example, $3456311111$ is counted twice: once when the first $3$ is taken as the specified $3$, and again when the second $3$ is.
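For comparison, the inclusion-exclusion count (sequences missing none of $3,4,5,6$) in one line of Python:

    from math import comb
    print(sum((-1)**k * comb(4, k) * (10 - k)**10 for k in range(5)))  # 1425878520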
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Equation for a smooth staircase function I am looking for a smooth staircase equation $f(h,w,x)$ that is a function of the step height $h$, step width $w$ in the range $x$.
I cannot use the unit step or other similar functions since they are just one step. I have been experimenting with various sigmoid curves and while I can get a single smooth step I cannot get to realize the staircase shape. The closest staircase function I have found is given in this paper in equation (18) and depicted in Fig. 4 and it is a close example of what I want (i.e generate a staircase in the range $x$ for arbitrary step heights and widths) but it is not smooth at all.
Regarding smooth steps, a likely starting point I found is here but it gives a smooth function of just a single step. I have been unable to modify the equation to make it into a staircase. I would like to specify arbitrary step heights and widths and generate a smooth staircase in the range $x$ specified.
Edit (Extra info):
The smooth function I mention above has the problem that the upper horizontal line is not equal in length to the lower horizontal line, which is why I have been unable to adapt it into a staircase function.
Edit 2
Including some pictures
Edit 3
Plot of $s$ with a steep slope showing a different width on the first horizontal line
|
Here is an example based on Math536's answer: Wolfram link
$$f(h,w,a,x) = h \left[\frac{\tanh \left( \frac{ax}{w}-a\left\lfloor \frac{x}{w} \right\rfloor-\frac{a}{2}\right)}{2\tanh\left(\frac{a}{2}\right) } + \frac{1}{2} + \left\lfloor \frac{x}{w} \right\rfloor\right]$$
Where h is the step height, w is the period, and a is the smoothness
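A direct numpy implementation of this formula, for anyone who wants to plot it:

    import numpy as np

    def smooth_staircase(x, h=1.0, w=1.0, a=20.0):
        # h: step height, w: step width (period), a: smoothness (larger = sharper steps)
        n = np.floor(x / w)
        return h*(np.tanh(a*x/w - a*n - a/2)/(2*np.tanh(a/2)) + 0.5 + n)

    x = np.linspace(0, 5, 2000)
    y = smooth_staircase(x)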
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 0
}
|
Can I change this summation to a sum of other summations? The form of the summation I have is
$$\sum_{x=0}^{\infty} x a^x$$
I need to somehow remove the $x$ from the original summation in order to obtain a geometric series in each of the other summations. For instance, $$\sum_{x=?}^{\infty} a^x + \sum_{x=?}^{\infty} a^x + \sum_{x=?}^{\infty} a^x + \cdots$$ I have seen this done before, but I forget how to work with the bounds of each new summation. It would be greatly appreciated if anyone had a clue what I was talking about.
|
This is one of my favourite tricks. Multiplying a series by a carefully chosen term. It is especially useful in dealing with arithmetic-geometric series.
$$\begin{align}
S &= a + 2a^2 + 3a^3 + \dots \\
aS &= a^2 + 2a^3 + 3 a^4 + \dots \\
S - aS &= a + a^2 + a^3 + \dots \\
S &= \frac {a + a^2 + a^3 + \dots}{1-a}\\
S &= \frac{a}{(1-a)^2}\\
\end{align}$$
The last step assumes $\left\lvert a \right\rvert \lt 1$. Otherwise, the infinite series does not converge.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Nonlinear optimization: optimizing a matrix to make its square close to a given matrix. I'm trying to solve a minimization problem whose purpose is to optimize a matrix so that its square is close to another given matrix. But I can't find an effective tool to solve it.
Here is my problem:
Assume we have an unknown $Q$ with parameters $q_{11}, q_{12}, q_{14}, q_{21}, q_{22}, q_{23}, q_{32}, q_{33}, q_{34}, q_{41}, q_{43}, q_{44}$, and a given matrix $G$; that is,
$Q=\begin{pmatrix}
q_{11}&q_{12} &0 &q_{14} \\q_{21}&q_{22}& q_{23}&0\\ 0&q_{32}& q_{33}&q_{34}\\ q_{41}&0& q_{43}&q_{44}\\
\end{pmatrix} $, $G=\begin{pmatrix}
0.48&0.24 &0.16 &0.12 \\ 0.48&0.24 &0.16 &0.12\\0.48&0.24 &0.16 &0.12\\0.48&0.24 &0.16 &0.12
\end{pmatrix} $.
The problem is how to find values of the $q_{ij}$ such that the square of $Q$ is very close to the matrix $G$.
I choose to minimize the Frobenius norm of their difference, that is,
$ Q^* ={\arg\min}_{Q} \| Q^2-G\|_F$
s.t. $0\leq q_{ij} \leq 1$ for all twelve free entries,$\quad$
$\quad$ $q_{11}+q_{12}+q_{14}=1$,
$\quad$ $q_{21}+q_{22}+q_{23}=1$,
$\quad$ $q_{32}+q_{33}+q_{34}=1$,
$\quad$ $q_{41}+q_{43}+q_{44}=1$.
These past days I have been struggling to find an effective tool to carry out the above optimization; can someone help me implement it?
|
For the modified question, let me try to give an answer that can address situations with matrices having the structure that you have.
Basically, your matrix $G$ has the following structure $$G=uv^T$$ where I have taken $u$ as the all $1$'s vector and $v$ a vector of positive coordinates such that $v^Tu=1$, i.e. $G$ is a stochastic matrix. It is also easy to see that $G$ is idempotent i.e. $G^2=G$.
You need to find a stochastic matrix $Q$ such that $\|Q^2-G\|_F$ is minimized. Obviously the only solution is $Q=G$. But if you do not want to take $Q=G$, then you cannot find an optimal solution. Instead you can choose a matrix $\delta$ having the property $\delta u=0$ and then form the matrix $Q=G+\delta$; then you need to ensure that, for a chosen $\epsilon>0$, $$\|Q^2-G\|_F<\epsilon\implies \|G\delta+\delta G+\delta^2\|_F<\epsilon\\\implies \|uv^T\delta+\delta^2\|_F<\epsilon$$ since $\delta u=0$.
Edit: regarding how to calculate the matrix $\delta$, I am not sure it is always possible to get closed form solutions of $\delta$ and you have to use some numerical technique. Basically, for a given $\epsilon$, you have to calculate the Frobenius norm which will yield,
$$Tr(N\delta^Tvv^T \delta+2\delta^T vu^T\delta^2+\delta^2(\delta^T)^2)<\epsilon$$ As I mentioned earlier, in general, this will ensure that the elements of $\delta$ remain inside a hyperellipse of degree $4$.
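If one still wants to attack the original problem numerically, here is a sketch with scipy's SLSQP (a local solver, so the starting point matters; this is an illustration, not a guaranteed global optimum):

    import numpy as np
    from scipy.optimize import minimize

    G = np.tile([0.48, 0.24, 0.16, 0.12], (4, 1))
    # positions of the free entries of Q; the remaining entries are structural zeros
    idx = [(0,0),(0,1),(0,3),(1,0),(1,1),(1,2),(2,1),(2,2),(2,3),(3,0),(3,2),(3,3)]

    def build_Q(q):
        Q = np.zeros((4, 4))
        for (i, j), v in zip(idx, q):
            Q[i, j] = v
        return Q

    def objective(q):
        Q = build_Q(q)
        return np.linalg.norm(Q @ Q - G, 'fro')

    # each row of Q must sum to 1
    cons = [{'type': 'eq',
             'fun': lambda q, r=r: sum(q[m] for m, (i, _) in enumerate(idx) if i == r) - 1}
            for r in range(4)]
    res = minimize(objective, np.full(12, 1/3), bounds=[(0, 1)]*12,
                   constraints=cons, method='SLSQP')
    print(res.fun)
    print(build_Q(res.x))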
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Opening and closing convex sets It seems true that, given $K \subseteq \mathbb{R}^n$ a convex set with $K^\circ \neq \emptyset$, then $\overline{K^{\circ}} = \overline{K}$ and $\left ( \overline{K} \right )^\circ = K^\circ$.
I am able to prove the first equality by making use of the "segment Lemma", which states that if $y \in K$ and $x \in K^\circ$, then $[x, y[ \subseteq K^\circ$ (here $[x, y[$ is the segment joining $x$ and $y$ without taking $y$).
However I have not found any correct proof of the second equality, and neither a counter-example has come to my mind.
Thanks all!
|
Either use the suggestion by Andrea, or you can prove it directly this way: $K \subseteq \bar K$, so $\mathring K \subseteq (\bar K)^\circ$. On the other hand, if $x \in (\bar K)^\circ$ then there is a neighborhood $x \in U_x \subseteq \bar K$. Now take a small simplex in $U_x$ that contains $x$ in its interior. Up to perturbing its vertices slightly, you can make them belong to $K$ while $x$ still belongs to the interior of the simplex. Then by convexity the simplex is in $K$, so $x \in \mathring K$.
I realized after writing that my answer is essentially the same as https://math.stackexchange.com/a/7509.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
What is the speed of the car given the time taken to receive an echo? I am trying to solve this question-
The driver of an engine produced a whistle sound at a distance of $800\,m$ from a hill which the engine was approaching. The driver heard the echo after $4.5\,s$. Find the speed of the engine, given that the speed of sound through air is $340\,m/s$.
My attempt:
$V=\frac {2d}{t}$ ($2d$ since the echo has to travel twice: from A to B and then from B to A again).
Putting $v=340\,m/s$, $d=800\,m$, we get $t=\frac{80}{17}\,s$.
So, the time taken to cover $2AB$ ($AB+AB$) is $\frac{80}{17}\,s$.
Let the driver have moved to point O when the sound reaches him.
So, the time taken by the sound to reach O (A to B and then B to O) is $4.5\,s$.
So, ($2AB$ time) $-$ ($AB+OB$ time) $=\frac{80}{17}-4.5=\frac{7}{34}\,s$.
So, ($AB-OB$) time $=\frac{7}{34}\,s$.
So, $\frac{40}{17}-(OB \text{ time})=\frac{7}{34}\,s$.
Solving, we get that the time taken by the sound to cover $OB$ is $\frac{73}{34}\,s$.
So, distance $OB=\text{speed}\times\text{time}=340\times \frac{73}{34}\,m$.
So, we can find AO and OB.
So, the time taken by the car to go from A to O equals the time taken by the sound to go from B to O.
Applying time $=$ distance/speed, and equating the car's time with the sound's time, we get
$\frac{AO}{\text{Speed}_{car}}=\frac{OB}{\text{Speed}_{sound}}.$
Solving, we get $\text{Speed}_{car}=\frac{2380}{34}\,m/s\approx 32\,m/s$.
But the answer given is $15.5\,m/s$. Where am I going wrong?
Thanks for any help.
|
The sound travels $340 \times 4.5 = 1530\,m$ in total, but in this time the train also travels some distance. The round trip from the train's original position to the hill and back is $1600\,m$, yet the echo was heard after the sound had covered only $1530\,m$, i.e. $70\,m$ short of the original position. So the train travelled $70\,m$ towards the hill in $4.5\,s$, and its speed is approximately $70/4.5 = 15.55\,m/s$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Inverse Laplace Transform of $e^{\frac{1}{s}-s}$ doing some work on a PDE system I have stumbled across a Laplace transform which I'm not sure how to invert:
$$
F(s) = e^{\frac{1}{s}-s}
$$
I can't find it in any table and the strong singular growth for $s=0$ makes me think that perhaps the inverse doesn't exist? Does it exist? And if it does, how can I find it?
Thanks for your help,
Maxi
|
The inverse most definitely exists. Write the integrand of the ILT in its Laurent expansion about $s=0$ as follows:
$$F(s) e^{s t} = e^{\frac1s} e^{(t-1) s} = \left (1+\frac1s +\frac1{2! s^2} + \frac1{3! s^3}+\cdots \right ) \left [1+(t-1) s+\frac1{2!} (t-1)^2 s^2 + \cdots \right ]$$
The ILT is simply the residue of the integrand at $s=0$, or namely the coefficient of $s^{-1}$ in the above expansion. This coefficient is
$$\sum_{k=0}^{\infty} \frac{(t-1)^{k}}{k!(k+1)!} H(t-1) = \frac{I_1\left (2 \sqrt{t-1} \right )}{\sqrt{t-1}} H(t-1)$$
where $H$ is the Heaviside function and $I_1$ is the modified Bessel function of the first kind of first order.
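A quick numerical cross-check of the series against the Bessel form (for $t>1$):

    import numpy as np
    from math import factorial
    from scipy.special import iv

    t = 2.5
    z = t - 1.0
    series = sum(z**k/(factorial(k)*factorial(k + 1)) for k in range(30))
    bessel = iv(1, 2*np.sqrt(z))/np.sqrt(z)
    print(series, bessel)  # the two values agree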
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Linear Algebra Coordinate Systems Isomorphism This is an excerpt from the book.
Let $B$ be the standard basis of the space $P_3$ of polynomials; that is let $B=\{1,t,t^2,t^3\}$. A typical element $p$ of $P_3$ has the form
$p(t) = a_0 + a_1t+ a_2t^2 + a_3t^3.$
Since $p$ is already displayed as a linear combination of the standard basis vectors, we conclude that $[P]B = [a_0,a_1,a_2,a_3]$.
This is the part I don't get. My understanding is if I were to change the basis of something, then I would have a matrix, for instance $B$ represented as $\{b_1,b_2\}$. And using the matrix, I would change $x$ to $[x]B$ by performing $B^{-1} x = [x]B$. None of these steps were taken for the above. Could anybody explain how the polynomial equation suddenly turned into a vector with respect to $B$?
|
The polynomial is not turned into a vector. The vector $[p]_B=[a_0,a_1,a_2,a_3]$ is the coordinate vector of $p(t)=a_0+a_1t+a_2t^2+a_3t^3$ with respect to the basis $B=\{1,t,t^2,t^3\}$. As I understand it, there is no intention to change the basis here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find the volume of the solid generated by rotating $B$ about $l$
In space, let $B$ be a ball (a sphere including its inside) of radius $1$. A line $l$ intersects $B$, and the common part is a line segment of length $\sqrt{3}$. Find the volume of the solid generated by rotating $B$ about $l$.
This solid looks like a torus pushed together. Thus, if we take the volume of the cylinder formed, which here is $8\pi$, and subtract the inner part (a sector of the sphere rotated about the axis through the $\sqrt{3}$ chord), we get the desired volume. But I struggle with how to find such a volume. Maybe there is an easier way to find the volume that I am not seeing.
|
It can be shown that you are describing a circle with radius $1$ and centre $(0,0.5)$.
There are two parts to the curve:
Green: $y_1=\frac 12 + \sqrt{1-x^2}$
Red: $y_2=\frac 12 - \sqrt{1-x^2}$
Rotate the green part about the $x$-axis between $x=-1$ and $x=1$ to find a large volume.
Then rotate the red part between $x=-1$ and $x=-\frac{\sqrt3}2$ and between $x=\frac{\sqrt3}2$ and $x=1$ to get two smaller parts. Subtract the smaller bits from the larger.
$y_1^2=\frac 14 + \sqrt{1-x^2}+1-x^2$
$y_2^2=\frac 14 - \sqrt{1-x^2}+1-x^2$
Volume between $x=0$ and $x=\frac{\sqrt3}2$ is $$\pi \int_0^{\frac{\sqrt3}2}y^2_1 dx$$
Volume between $x=\frac{\sqrt3}2$ and $x=1$ is $$\pi \int_{\frac{\sqrt3}2}^{1}(y^2_1-y^2_2)dx$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is the Jacobian determinant continuous in the proof of the inverse function theorem?
Can anyone explain to me why the first sentence in the proof is valid?
|
Alternatively to the answer of Oskar Linka, we can also see the continuity by looking at the Leibniz-formula for determinants.
First of all, note that $f \in C'$. (I guess $C'$ denotes the set of all maps whose partial derivatives exist in all variables and are continuous, i.e. all the maps $\frac{\partial f_i}{\partial x_j}$ exist and are continuous.) Now, using the Leibniz-formula, we have
$$J_f(b)=\sum_{\pi \in S_n} \text{sgn}(\pi)\prod_{i=1}^n \frac{\partial f_i}{\partial x_{\pi(i)}}(b)$$
for all $b \in S$. From this formula, we can directly read off the continuity of $J_f$ on $S$. It follows from the continuity of the maps $\frac{\partial f_i}{\partial x_j}$.
Now to the part about the open ball: if a map $h$ is continuous at some point $a$ and $h(a) \neq 0$, (e.g. $h=J_f$), there is always and open ball $B_{\varepsilon}(a)$ such that $h(x) \neq 0$ for all $x \in B_{\varepsilon}(a)$. Proof: Assume that this is not true. Then, for each $n \in \mathbb{N}$ there is an $x_n \in B_{\frac{1}{n}}(a)$ such that $h(x_n)=0$. Now, the sequence $(x_n)$ tends to $a$ for $n \to \infty$. However, $h(x_n)=0$ for all $x_n$. Therefore $h(x_n) \to 0$ for $n \to \infty$. But this is a contradiction: since $h$ is continuous at $a$, we should have $h(x_n) \to h(a) \neq 0$ instead.
Note: $B_{\varepsilon}(a)$ denotes the open $\varepsilon$-ball around $a$ where it is assumed that $\varepsilon >0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1671981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Polynomial game problem: do we have winning strategy for this game? I'm thinking about some game theory problem. Here it is,
Problem: Consider the polynomial equation $x^3+Ax^2+Bx+C=0$. A priori, $A$, $B$ and $C$ are "undecided", and two players, "Boy" and "Girl", play a game in the following way:
1. First, the Boy chooses a real number.
2. Then, the Girl decides the position of that number among $A$, $B$ and $C$.
3. They repeat this process three times, deciding all the values of $A$, $B$ and $C$.
If the final polynomial has all distinct integer roots, the Boy wins. Otherwise, the Girl wins.
Question is: Does one of Boy or Girl has any "Winning Strategy"? If so, who has one?
It seems to me that the Boy obviously has a great advantage. Here is my attempted argument: if the Boy suggests "$0$" at the very first turn, then regardless of the Girl's decision we can make the polynomial have three distinct integer roots. Actually, my argument has "almost" worked. For example:
1. If the Girl puts $0$ at position $A$, then the Boy should suggest "$-84$" in the second round. Then, in either case, we always have three distinct roots, i.e. $(10,-2,-8)$ or $(-3,-4,7)$.
2. If the Girl puts $0$ at position $C$, then the Boy should suggest "$-1$" in the second round. Then, in either case, we always have three distinct roots, i.e. $(-1,2,0)$ or $(1,-1,0)$.
3. HOWEVER, if the Girl puts $0$ at position $B$, I could not find any good number for the second round.
Is there a problem with my winning strategy? Or does the Girl have a winning strategy somehow?
Thank you for any help in advance!
|
If the girl puts $0$ at position $B$, the boy can choose $1764$, resulting in roots $[-6, 7, 42]$ or $[-864, -1440, 540]$.
EDIT: If the polynomial $(x-a)(x-b)(x-c)$ has $ab+bc+ac=0$, then $c = -ab/(a+b)$. Writing $t = a+b$, we need $t$ to divide $ab = at - a^2$. Thus $t$ is a divisor of $a^2$.
Here's a Maple program that finds the solution $1764$, together with some others. For positive integer values of $a$, it looks at each divisor $t$ of $a^2$ (positive as well as negative), computes the corresponding $b$ and $c$, and the corresponding values of the coefficients $A$ and $C$ of the polynomial, storing under those indices the values $[a,b,c]$. We then look for an index that appears for both $A$ and $C$.
for a from 1 to 1000 do
  # each divisor t of a^2 (positive or negative) gives integer b and c with ab+bc+ca = 0
  for t in numtheory:-divisors(a^2) union map(`*`,numtheory:-divisors(a^2),-1) do
    b:= t-a;
    c:= -a*b/t;
    if nops({a,b,c}) < 3 then next fi;  # skip repeated roots
    A[-a-b-c]:= [a,b,c];                # the value of coefficient A for these roots
    C[-a*b*c]:= [a,b,c];                # the value of coefficient C for these roots
  od
od:
{indices(A)} intersect {indices(C)};    # numbers usable at either position A or C
$$ \{[-12348], [-1764], [1764], [3136], [12348]\}$$
A[1764], C[1764];
$$[42, 7, -6], [540, -864, -1440]$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Assumption and simple calculation I'm having an issue with what seems to be an simple question.
Here it is:
Two hockey teams, Team A and Team B, played a game; Team A beat Team B by 2 goals. The crowd was pleased, as there were 8 goals in total for the whole game.
What was the game's score?
How many goals did Team A score, and how many did Team B?
My first thought was that because the total number of goals in the game was 8 and Team A won by 2 goals, they must have had 6 goals and Team B had 2, because if Team B had scored those 2 last goals it would have been a tie. But I'm still not sure whether my assumption is right. Any ideas?
|
If I may, how did you deduce that Team A scored 6 goals and Team B scored 2? There were 8 goals TOTAL for the entire game, and we know the condition $$A_{\text{score}}= B_{\text{score}}+2.$$ Knowing that, recall what the total score was and deduce your answer with a system of equations.
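For completeness, the hinted system works out as follows:
$$A = B + 2,\qquad A + B = 8 \;\Longrightarrow\; (B+2) + B = 8 \;\Longrightarrow\; B = 3,\ A = 5.$$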
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Restricting a map from $S^{2n-1}$ to $\mathbb{R}P^{2n-1}$ This might be really obvious, but I have constructed a continuous map $f:S^{2n-1} \rightarrow S^{2n-1}$ with no fixed points, and I want to use this to get a continuous map $g:\mathbb{R}P^{2n-1} \rightarrow \mathbb{R}P^{2n-1}$ with no fixed points (where $\mathbb{R}P^{2n-1}$ is the real projective plane). Is the idea that $\mathbb{R}P^{2n-1}$ can be defined as points on $S^{2n-1}$, with points identified with their antipodal points, so $\mathbb{R}P^{2n-1} \subseteq S^{2n-1}?$ Therefore, by simply restricting $f|_{\mathbb{R}P^{2n-1}}$ do we get the continuous map $g$?
Assuming this doesn't work, using the universal cover $p:S^{2n-1} \rightarrow \mathbb{R}P^{2n-1}, pf:\mathbb{R}P^{2n-1} \rightarrow \mathbb{R}P^{2n-1}$. But does $pf$ also not have fixed points if $f$ has no fixed points? I don't see why that necessarily would be true. If this doesn't work either, how should I get $g$?
|
Hint: Let me put $m$ for $2n-1$ for brevity and generality. $\Bbb{R}P^m$ is a quotient space of $S^m$ obtained using the equivalence relation that identifies antipodal points. You need to show that your function $g : S^m \to S^m$ is compatible with the equivalence relation, so that it induces a function on the equivalence classes (pairs of antipodal points) that constitute the points of $\Bbb{R}P^m$. See the discussion about functions that "descend to the quotient" in the Wikipedia link.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Binomial distribution with random parameter I want to compute the probabbilty
$$
P(S = k) = \sum_{l = 0}^n P(S=k|N=l)P(N=l)
$$
where $N$ is a binomial random variable $B(n, p)$, and given $N = l$, $S$ is binomial $B(l, r)$. I tried to compute this but it seems difficult; thank you for any answer or suggestion.
|
You can think of $S$ this way. There are $n$ potential coins. For each of them, we have a Bernoulli random variable $Y_i$ which equals $1$ with probability $r$. Then we choose which of the coins we use, choosing each of them with probability $p$. We sum $Y_i$ over the chosen coins. We can set $X_i$ to be $1$ if the $i$th coin was chosen, and then
$$
S = \sum_{i=1}^n X_i Y_i.
$$
Now $X_i Y_i$ is also a Bernoulli random variable, which equals 1 with probability $pr$. Hence $S \sim B(n,pr)$.
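A quick Monte Carlo sanity check of this thinning argument:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, r, trials = 20, 0.6, 0.3, 200_000
    N = rng.binomial(n, p, size=trials)   # first stage
    S = rng.binomial(N, r)                # second stage, given N
    print(S.mean(), n*p*r)                # means agree
    print(S.var(), n*p*r*(1 - p*r))       # variances agree with B(n, pr)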
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve the recurrence relation $a_{k+2}-6a_{k+1}+9a_k=3(2^k)+7(3^k), k\geq 0, a_0=1, a_1=4.$
Solve the recurrence relation $a_{k+2}-6a_{k+1}+9a_k=3(2^k)+7(3^k), k\geq 0, a_0=1, a_1=4.$
I know that I need a general solution of the form $a_k=a^{(h)}_k+a^{(p)}_k$, where the first term is a general solution to the recurrence relation given on the LHS and the particular solution which depends on both the LHS and the RHS.
So far, I have the characteristic polynomial of the LHS, which is given by $t^2-6t+9=(t-3)^2=0$, so I have a double root $t=3$. The homogeneous solution is therefore given by $a^{(h)}_k=x3^k+ky3^k$.
Now I'm having trouble determining the particular solution. How do I proceed?
|
The $3(2^k)$ part is easy to deal with. Look for a solution of the shape $c(2^k)$.
The $7(3^k)$ part is made more complicated by the fact that $3$ is a root, indeed a double root, of the characteristic polynomial. Look for a solution of the shape $dk^2(3^k)$. There will be very nice cancellation.
For a particular solution, by linearity we can add the expressions obtained in the two paragraphs above.
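Carrying out these hints (undetermined coefficients $c\,2^k$ and $d\,k^2 3^k$, then fitting the initial conditions) leads to the candidate closed form $a_k = -2\cdot 3^k + \frac{17}{18}k\,3^k + \frac{7}{18}k^2 3^k + 3\cdot 2^k$; a quick check against the recurrence:

    from fractions import Fraction

    def closed_form(k):
        return (Fraction(-2)*3**k + Fraction(17, 18)*k*3**k
                + Fraction(7, 18)*k*k*3**k + 3*2**k)

    a = [Fraction(1), Fraction(4)]
    for k in range(20):
        a.append(6*a[k + 1] - 9*a[k] + 3*2**k + 7*3**k)
    print(all(closed_form(k) == a[k] for k in range(22)))  # True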
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does the sequence $a^n/n!$ converge? The sequence, when plotted, converges to zero because the factorial grows faster than the numerator, but I cannot prove that this sequence actually converges.
|
METHODOLOGY $1$:
Here is another way forward that is very efficient.
We know from the $n$'th Term Test that if a series converges, then its terms must approach zero.
Inasmuch as the series
$$\sum_{n=0}^\infty \frac{a^n}{n!}=e^a$$
converges, then the $n$'th Term Test guarantees that $\lim_{n\to \infty}\frac{a^n}{n!}=0$. And we are done!
METHODOLOGY $2$:
It is easy to show that $n!>(n/2)^{n/2}$. Therefore, we have
$$\left|\frac{a^n}{n!}\right|\le \left(\frac{\sqrt 2 |a|}{\sqrt n}\right)^n\to 0\,\,\text{as}\,\,n\to \infty$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Closed form of $I=\int_0^1 \frac{\ln x \,\ln^3\left(1-x^2\right)}{1-x}\,dx$ While solving a problem, I got stuck at an integral. The integral is as follows:
Find the closed form of: $$I=\int _{ 0 }^{ 1 }{ \frac { \ln { x } { \left( \ln { \left( 1-{ x }^{ 2 } \right) } \right) }^{ 3 } }{ 1-x } dx } $$
I tried using power series but it failed. I tried various substitutions, which turned out to be of no use. Please help.
|
We have $$ I=\int_{0}^{1}\frac{\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x}dx=\int_{0}^{1}\frac{\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x^{2}}dx+\int_{0}^{1}\frac{x\log\left(x\right)\log^{3}\left(1-x^{2}\right)}{1-x^{2}}dx
$$ and so if we put $x=\sqrt{y}$ we get $$I=\frac{1}{4}\int_{0}^{1}\frac{y^{-1/2}\log\left(y\right)\log^{3}\left(1-y\right)}{1-y}dy+\frac{1}{4}\int_{0}^{1}\frac{\log\left(y\right)\log^{3}\left(1-y\right)}{1-y}dy$$ and recalling the definition of beta function $$B\left(a,b\right)=\int_{0}^{1}x^{a-1}\left(1-x\right)^{b-1}dx
$$ we have $$\frac{\partial^{h+k}B}{\partial a^{h}\partial b^{k}}\left(a,b\right)=\int_{0}^{1}x^{a-1}\log^{h}\left(x\right)\left(1-x\right)^{b-1}\log^{k}\left(1-x\right)dx
$$ hence $$I=\frac{1}{4}\frac{\partial^{4}B}{\partial a\partial b^{3}}\left(\frac{1}{2},0^{+}\right)+\frac{1}{4}\frac{\partial^{4}B}{\partial a\partial b^{3}}\left(1,0^{+}\right).$$ For the computation of the limit, we can use the asymptotic $\Gamma(x)=\frac{1}{x}+O(1)$ when $x\rightarrow 0$ and the relations between the polygamma terms and zeta.
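If one wants a numerical target value to test a closed form against, a quadrature sketch (assuming mpmath is available; the split point $0.5$ is an arbitrary choice to help the routine with the two integrable endpoint singularities, and the precision may need raising):

```python
# Numerical evaluation of I = int_0^1 log(x) log^3(1-x^2)/(1-x) dx.
from mpmath import mp, quad, log

mp.dps = 30
f = lambda x: log(x) * log(1 - x**2)**3 / (1 - x)
I = quad(f, [0, 0.5, 1])   # tanh-sinh quadrature handles the log singularities
print(I)
```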
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Showing continuity of an operator from $L^p$ to $L^q$ Question: Let $1 \leq q \leq p < \infty$ and let $a(x)$ be a measurable function. Assume that $au \in L^q$ for all $u \in L^p$. Show that the map $u \to au$ is continuous.
My Approach: I have tried to use closed graph theorem to show continuity of $a$.
Let $\{u_n\} \subset L^p$ s.t. $u_n \rightarrow u$ in $L^p$ and $au_n \rightarrow v$ in $L^q$. Then we need to show $v = au$ a.e.
$$
|v-au|_q \leq |v-au_n|_q + |a(u_n - u)|_q
$$
The first term converges to zero but I got stuck in showing that the second term also converges to zero. Can anyone help?
|
This is not true. Let $b:L^p \to \mathbb{R}$ be a discontinuous linear functional (such functionals always exist on infinite-dimensional Banach spaces) and let $f\in L^q$. Then the operator $a:L^p \to L^q$ defined by $a(u) = b(u)\cdot f$ satisfies the assumptions of your exercise, but $a$ is not continuous.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Limit of the function $\lim \limits_{(x,y)\to (0,0)} \sin (\frac{x^2}{x+y}) \ (x+y \neq 0)$. Since $\sin x$ is a continuous function at $(0,0)$ it suffices to check if the limit $\lim_{(x,y)\to (0,0)} \frac{x^2}{x+y}$ is finite.
I seem to be missing the idea in order to show that the limit $\lim_{(x,y)\to (0,0)} \frac{x^2}{x+y}=0$. I tried converting to polar form I get $\lim_{r\to 0} \frac{r \cos^2 \theta}{\cos \theta +\sin \theta}$. It seems that this is not conclusive. Any help?
|
We can already see there's a problem with your polar method: the denominator vanishes when
$\;\cos\theta+\sin\theta=0\iff\tan\theta=-1\;$, i.e. at $\;\theta=-\frac\pi4\;$ in the trigonometric circle.
We can try for example:
$$\begin{align*}&y=x:\implies \frac{x^2}{x+y}=\frac{x^2}{2x}=\frac x2\xrightarrow[(x,y)\to(0,0)]{}0\\{}\\
&y=x^2-x:\implies\frac{x^2}{x+y}=\frac{x^2}{x+x^2-x}=\frac {x^2}{x^2}=1\xrightarrow[(x,y)\to(0,0)]{}1\end{align*}$$
and thus the limit doesn't exist.
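A quick numerical illustration of the two paths:

```python
# Along y = x the quotient x^2/(x+y) tends to 0; along y = x^2 - x it is identically 1.
for x in [0.1, 0.01, 0.001]:
    print(x**2 / (x + x), x**2 / (x + (x**2 - x)))
```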
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1672969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Two Subnormal subgroups with index of one and order of other relatively prime Let $H,K$ be two subgroups of a finite group $G$. Suppose that $\gcd(|G:H|,|K|)=1$.
Prove that if $K\triangleleft\triangleleft\; G$ then $K\subseteq H$.
My idea: Consider first the case $K\triangleleft G$. Then $HK\leq G$.
Remember that $|HK|=\dfrac{|H||K|}{|H\cap K|}$. Using this formula it's easy to show that $|G:HK|=\dfrac{|G:H|}{|K:K\cap H|}$. This must be an integer, but the numerator and denominator are coprime, so the only possibility is $|K:K\cap H|=1$, i.e. $K\subseteq H$.
In the general case this argument doesn't work, because $HK$ need not be a subgroup.
So I was trying to use induction on the subnormal depth.
$K=K_0\triangleleft K_1\triangleleft...\triangleleft K_t=G$. But from here i don't know how to proceed.
|
Let $\pi$ be the set of primes dividing $|K|$, and let $O_\pi(G)$ be the largest normal subgroup of $G$ whose order is divisible only by primes in $\pi$. Then $O_\pi(G)$ is characteristic and hence normal in $G$, so by the case you have solved, $O_\pi(G) \le H$.
So now we have to prove that $K \le O_\pi(G)$. You can do that by induction on $t$. It is true when $t=1$. By induction, $K \le O_\pi(K_{t-1})$. But since $O_\pi$ is a characteristic subgroup, $O_\pi(K_{t-1})$ is normal in $G$ and hence is contained in $O_\pi(G)$, and we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Coincidence Lemma and universal validity? I'm supposed to say whether the following statement is true or not:
$$ \exists x \exists y f(x) = y \equiv \forall x \exists y (( x = y \vee E(x,y)) \rightarrow \exists z (z=y \vee E(z,y)))$$
I have a couple of questions about the coincidence lemma here:
If you assume that your $\sigma$ structure has elements a, b in its universe so that f(a)=b holds, does it then follow from the coincidence lemma that f(a)=b is true for all interpretations, since there are no free variables in this formula?
If that was the case then the left side would be universally valid. Now the formula on the right side is also universally valid, I think. Because the left side of the implication is the same as the right side of the implication. Does it then even matter whether this is true for all elements x and a given y in the universe of the structure?
So I would say the two formulas are equivalent, since both are universally valid, is that true?
|
The lefthand side is true in any (nonempty!) model (= interpretation); the righthand side is true in any model. Function symbols are total functions, and the righthand side is true basically because $\forall x\exists y(whatever(x,y)\to\exists z(z=y \lor whateverElse(y,z)))$ is valid.
Re your question about the "coincidence lemma" (I don't know this term): something's amiss there. If $a,b$ are elements of a model, not constants of the signature $\sigma$, what do you mean by $f(a) = b$ being true in all models? They must be constants after all, and then $f(a)=b$ holds in some model.
Long story short: yes, they're equivalent, because they're both valid in first order logic, which classically assumes a nonempty universe: $(\exists x)\,x=x$ is a theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
$T^3=\frac{1}{2}(T+T^*) \rightarrow$ T is self adjoint
Let $T$ be a normal transformation on a finite-dimensional Hilbert space; that is, $TT^*=T^*T$, where $T^*$ is the adjoint of $T$.
Prove that if $T^3=\frac{1}{2}(T+T^*)$, then $T$ is self adjoint.
I have tried to do some math on $(Tv,u)$ but I was not successful in proving the following: $(Tu,v)=(u,Tv)$ which is what I need for self-adjoint transformation.
Edit:
$(T^3u,v)=\frac{1}{2}\left((T+T^*)u,v\right)=\frac12\left( Tu,v\right)+\frac12\left( T^*u,v\right)=\frac12\left( u,T^*v\right)+\frac12\left( u,Tv\right)=\left(u,\tfrac12(T+T^*)v\right)=\left( u,T^3v\right)$
Therefore, $T^3$ is self adjoint. Does it mean that $T$ is self adjoint?
Thanks!
|
Multiplying $T^3=\frac{1}{2}(T+T^*)$ by $T$ on the left and on the right gives $T^4=\frac{1}{2}(T^2+TT^*)$ and $T^4=\frac{1}{2}(T^2+T^*T)$, which together imply $TT^*=T^*T$.
Hence $T$ is normal. Since $T$ is normal, there exists an orthonormal basis consisting of its eigenvectors, so that $T=UDU^*$, where $D$ is diagonal and the columns of $U$ are eigenvectors of $T$. Normality of $T$ also implies that $U^*(T^3)U=\frac{1}{2}(U^*TU+U^*T^*U)$ is a real diagonal matrix.
Let $\lambda=a+bi$ be one of the eigenvalues of $T$; then $\lambda^3=\frac{1}{2}(\lambda+\bar{\lambda})\Rightarrow a^3-3ab^2+(3a^2b-b^3)i=a.$ Suppose $b\neq 0$; then $b^2=3a^2\neq 0\Rightarrow 1=a^2-9a^2=-8a^2\Rightarrow a=\pm\frac{1}{2\sqrt{2}}i,$ contradicting that $a$ is real. Hence $b=0$, i.e. $\lambda\in\mathbb{R}$, which means $T=T^*.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Integration involving $\arcsin$ How to integrate the following function:
$$\int_0^a \arcsin\sqrt{a \over a+x}dx$$
By using the substitution $x = a\tan^2\theta$, I managed to write the integral as:
$$2a\int_0^{\pi \over 4}\theta \frac{\sin\theta}{\cos^3\theta}d\theta$$
How would I proceed? Should I use by parts method?
|
You are on the right track.
By parts,
$$\int\theta \frac{\sin\theta}{\cos^3\theta}d\theta=\frac\theta{2\cos^2(\theta)}-\int\frac{d\theta}{2\cos^2(\theta)}=\frac\theta{2\cos^2(\theta)}-\frac{\tan(\theta)}2.$$
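A finite-difference sanity check of this antiderivative (the evaluation point $t=0.5$ is an arbitrary choice):

```python
# Check numerically that F'(t) equals the integrand t*sin(t)/cos(t)^3.
from math import sin, cos, tan

F = lambda t: t / (2 * cos(t)**2) - tan(t) / 2
g = lambda t: t * sin(t) / cos(t)**3

t, h = 0.5, 1e-6
print((F(t + h) - F(t - h)) / (2 * h), g(t))   # the two numbers should agree
```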
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Flows and Lie brackets, $\beta$ not a priori smooth at $t = 0$ Let $X$ and $Y$ be smooth vector fields on $M$ generating flows $\phi_t$ and $\psi_t$ respectively. For $p \in M$ define$$\beta(t) := \psi_{-\sqrt{t}} \phi_{-\sqrt{t}} \psi_{\sqrt{t}} \phi_{\sqrt{t}}(p)$$for $t \in (-\epsilon , \epsilon)$ where $\epsilon$ is sufficiently small. Does it follow that$$[X, Y](f)(p) = \lim_{t \to 0} {{f(\beta(t)) - f(\beta(0))}\over t}?$$Thoughts. I know that $\beta$ is not a priori smooth at $t = 0$ because of the $\sqrt{}$ terms. But I do not know what do from here. Could anybody help?
|
Let me please put $\alpha(t):=\beta(t^2)$. We are asked to prove
$$\left.\frac{d}{dt}\right|_{t=0}\alpha(\sqrt{t})=[X,Y]_p.$$
Fact: $$\left.\frac{d}{dt}\right|_{t=0}\alpha(t)=
\left.\frac{d}{dt}\right|_{t=0}\psi_{-t}\phi_{-t}\psi_t\phi_t(p)
=-\psi'_0\phi_0\psi_0\phi_0(p)-\psi_0\phi'_0\psi_0\phi_0(p)
+\psi_0\phi_0\psi'_0\phi_0(p)+\psi_0\phi_0\psi_0\phi'_0(p)=0.$$
Fact: $\alpha(t)$ has a Taylor expansion near $0$ which I notate as
$$\alpha(t)=\alpha(0)+\alpha'(0)(t)+\frac{\alpha''(0)}2t^2+O(t^3).$$
Note that I can drop the $t$ term since the coefficient is $0$.
Then $$\left.\frac{d}{dt}\right|_{t=0}\alpha(\sqrt{t})=\lim_{t\rightarrow0}
\frac{\alpha(\sqrt t)-\alpha(0)}t=\lim_{t\rightarrow0}
\frac{\left(\alpha(0)+\alpha'(0)(\sqrt t)+\frac{\alpha''(0)}2(\sqrt t)^2+O(t^{3/2})\right)-\alpha(0)}t=\frac{\alpha''(0)}2.$$
So it is necessary and sufficient to show $$\left.\frac{d^2}{dt^2}\right|_{t=0}\alpha(t)=2[X,Y]_p.$$
Now, for two vector fields $X$ and $Y$ with local coords $X^i\partial_i$, $Y^j\partial_j$, where here and henceforth I omit the summation symbol over indices, there is a convenient formula for the bracket
$[X,Y]=\gamma^i\partial_i$, namely
$$\gamma^i=X(Y(x_i))-Y(X(x_i))=X(Y^i)-Y(X^i)=(X^j\partial_jY^i-Y^j\partial_jX^i).$$
In a similar way (following this answer), we may write a Taylor expansion of the flow at $p=p^i$ (implicit summation):
$$\phi_t(p)=p+tX(p)+\frac{t^2}2X(X(x_i))(p)+O(t^3),$$
finding (summing implicitly)
$$\phi_t(p)=p^i + t \; X^i(p) + \frac{t^2}{2} X^j(p) \; \partial_j X^i(p) + O(t^3).$$
This is true, since
$$
\frac{d}{dt} \phi_t(p) = X^i(p) + t X^j(p) \partial_j X^i(p) + O(t^2) \\
= X^i(p + t \; X^i(p)) + O(t^2) \\
= X^i(\phi_t(p)) + O(t^2).
$$
The second (middle) equality here follows from Taylor expanding $X^i$. The third (i.e. last) follows from substitution.
With this understanding, we can actually compute
$$
\psi_t \phi_t(p) = p^i + t X^i + \frac{t^2}{2} X^j \partial_j X^i + t Y^i + \frac{t^2}{2} Y^j \partial_j Y^i + t^2 X^j \partial_j Y^i +O(t^3),
$$
$$
\psi_{-t} \phi_{-t}(p) = p^i - t X^i + \frac{t^2}{2} X^j \partial_j X^i - t Y^i + \frac{t^2}{2} Y^j \partial_j Y^i + t^2 X^j \partial_j Y^i +O(t^3).
$$
Now, we don't write out exactly the composition of these two fields but we write only the $t^2$ term. From pairing $\frac{t^2}{2} X^j \partial_j X^i$ with $p$ and likewise for $Y$, we get a contribution of
$$\frac{t^2}{2} X^j \partial_j X^i +\frac{t^2}{2} X^j \partial_j X^i + \frac{t^2}{2} Y^j \partial_j Y^i + \frac{t^2}{2} Y^j \partial_j Y^i.$$
From pairing $t X^i$ with $-t X^i$ and $t Y^i$ with $-t Y^i$ we cancel this term. From pairing $-t X^j$ with $t Y^i$ we cancel one of the two $t^2 X^j \partial_j Y^i$, and are left with
$[X,Y]_pt^2$ from $-t Y^i$ with $t X^j$ and the remaining $t^2 X^j \partial_j Y^i$. It is now also easy to see explicitly that the linear term vanishes and $$\alpha(t)=p+[X,Y]_pt^2+O(t^3).$$
Hence $\alpha''(0)=([X,Y]_p t^2)''(0)=2[X,Y]_p$; that's it.
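For a concrete sanity check one can specialize to linear vector fields $X(p)=Ap$, $Y(p)=Bp$, whose flows are matrix exponentials and whose bracket is $[X,Y](p)=(BA-AB)p$; then $(\alpha(t)-p)/t^2$ should converge to the bracket. A sketch (assuming NumPy and SciPy; the matrices $A$, $B$ and the point $p$ are arbitrary choices):

```python
# Numerically verify alpha(t) = p + [X,Y]_p t^2 + O(t^3) for linear vector fields.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
p = np.array([1.0, 2.0])

bracket = (B @ A - A @ B) @ p          # [X,Y]_p for X = Ap, Y = Bp
for t in [0.1, 0.01, 0.001]:
    alpha = expm(-t * B) @ expm(-t * A) @ expm(t * B) @ expm(t * A) @ p
    print(t, (alpha - p) / t**2, bracket)
```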
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
a number n as pa+qb How can we express a number $n$ as $pa+qb$, where $p \geq 0$ and $q \geq 0$? In a contest I got a puzzle asking whether we can express $c$ in terms of $a$ and $b$ in the form $pa+qb$.
Suppose $a$ is $3$, $b$ is $4$ and $c$ is $7$; then we can express $7$ as $3+4$.
Suppose $a$ is $4$, $b$ is $6$ and $c$ is $15$; then we can't express $15$ as $4 \cdot p+6 \cdot q$.
NOTE: $p$ AND $q$ CAN'T BE FRACTIONS.
I came up with the approach of taking the $\gcd$ of $a$ and $b$ and checking whether it divides $c$ with zero remainder.
Is there a general method to check this?
|
If $a,b$ are fixed natural numbers, then the set of all integers $n$ which can be written in the form $n=pa+qb$ for integers $p$ and $q$ is precisely the set of multiples of the greatest common divisor of $a$ and $b$. So $n$ can be written in the desired form if and only if $n$ is divisible by $\gcd(a,b)$.
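For completeness, here is a small sketch of the criterion, together with explicit coefficients from the extended Euclidean algorithm (note these $p,q$ may be negative; this illustrates the divisibility test, not the nonnegativity constraint in the puzzle):

```python
# Check divisibility by gcd(a, b), and produce Bezout coefficients.
from math import gcd

def bezout(a, b):
    """Return (g, p, q) with p*a + q*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, p, q = bezout(b, a % b)
    return g, q, p - (a // b) * q

for a, b, c in [(3, 4, 7), (4, 6, 15)]:
    g, p, q = bezout(a, b)
    print(a, b, c, "representable" if c % g == 0 else "not representable")
```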
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
$ p \in \Bbb Q[x] $ has as a root a fifth primitive root of unity; then every fifth primitive root of unity is a root of $p$. I'm extremely stuck and can't figure it out.
The conjugate is easy: let $w$ be a primitive fifth root of unity; then its conjugate $w^{-1}=w^4$ will also be a root, that's easy. But I'm missing $w^2$ and $w^3$. Why would they also be roots of $p$?
|
Since $\;p(x)\in\Bbb Q[x]\;$ and it vanishes on $\;\zeta:=e^{2\pi i/5}\;$ , the minimal polynomial of $\;\zeta\;$ over the rationals (also known as a cyclotomic polynomial) also divides $\;p(x)\;$ , and thus
$$\Phi_5(x)\,|\,p(x)\implies p(x)=\overbrace{(x^4+x^3+x^2+x+1)}^{=\Phi_5(x)}\cdot h(x)\;,\;\;h(x)\in\Bbb Q[x]$$
Yet
$$\;x^4+x^3+x^2+x+1=\frac{x^5-1}{x-1}\implies$$
the roots of $\;\Phi_5(x)\;$ are precisely the four primitive roots of unity of order five, and thus these roots are also roots of $\;p(x)\;$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Symplectic manifold, flow $\phi_t$ generated by unique vector fiel $X_f$ preserves symplectic form $\omega$? Let $M$ be a symplectic manifold with symplectic form $\omega$, let $f$ be a smooth function on $M$, and let $X_f$ be the unique vector field on $M$ so that $df(Y) = \omega(X_f, Y)$ for all vector fields $Y$. Does the flow $\phi_t$ generated by $X_f$ preserve $\omega$?
|
I'm just going to call your vector field $X$. It suffices to show that the Lie derivative $\mathcal L_X \omega = 0$ (do you see why?) To do this, use Cartan's formula for the Lie derivative of a form: $\mathcal L_X \omega = d\iota_X \omega + \iota_X d\omega = d\iota_X \omega$, because $\omega$ is closed; now by your assumption, $\iota_X \omega = df$, and as a result $d\iota_X \omega = ddf = 0$, as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is $\mathbb Z \oplus \mathbb Z / \langle (2,2) \rangle$ isomorphic to? This question came up after I'd solved the following exercise:
Determine the order of $\mathbb Z \oplus \mathbb Z / \langle (2,2) \rangle$. Is the group cyclic?
I had no trouble solving the exercise: The answer is the group is infinite and non-cyclic.
But as a bonus exercise I tried to figure out what it is isomorphic to and now I'm stuck.
Here is what I have so far:
$[(1,0)]$ and $[(0,1)]$ generate infinite disjoint subgroups.
Also, every element in the odd diagonal $[(2k+1, 2k+1)]$ has order $2$. Let's call the odd diagonal $D$. Then $D \cong \oplus_{k \in \mathbb Z} \mathbb Z_2$.
Is this correct? And if so, is $\mathbb Z \oplus \mathbb Z / \langle
(2,2) \rangle \cong \oplus_{k \in \mathbb Z} \mathbb Z_2 \oplus
\mathbb Z \oplus \mathbb Z$?
And if not could someone help me by showing me what it is isomorphic to?
|
Let $G = \dfrac{\mathbb Z \oplus \mathbb Z}{ \langle (2,2) \rangle}$.
*
*$v_1=(1,1)$ is mapped to an element of order $2$ in $G$.
*$v_2=(2,1)$ is mapped to an element of infinite order in $G$.
This means that $G$ is infinite and non-cyclic.
Note that $\mathbb Z \oplus \mathbb Z = v_1 \mathbb Z \oplus v_2\mathbb Z$. Therefore
$$
G = \dfrac{\mathbb Z \oplus \mathbb Z}{ \langle 2v_1 \rangle}
= \dfrac{v_1 \mathbb Z \oplus v_2\mathbb Z}{v_1 2\mathbb Z \oplus 0}
\cong \dfrac{\mathbb Z}{2\mathbb Z} \times \mathbb Z
$$
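The change of basis can be checked concretely (assuming NumPy): the matrix with columns $v_1, v_2$ is unimodular, so $\{v_1,v_2\}$ is a $\mathbb Z$-basis, and $(2,2)$ has coordinates $(2,0)$ in that basis.

```python
# Verify the basis change: det(U) = -1 (unimodular) and (2,2) = 2*v1 + 0*v2.
import numpy as np

U = np.array([[1, 2], [1, 1]])             # columns v1 = (1,1), v2 = (2,1)
print(round(np.linalg.det(U)))             # -1, so {v1, v2} is a Z-basis
print(np.linalg.solve(U, [2, 2]))          # [2., 0.]: the relation is 2*v1
```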
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1673889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Arithmetic growth versus exponential decay I have a kilogram of an element that has a long half-life - say, 1 year - and I put it in a container. Now every day after that I add another kilogram of the element to the container.
Does the exponential decay eventually "dominate" or does the amount of the substance in the container increase without bound?
I know this should be a simple answer but it's been too long since college...
|
On the first day, you have 1 kilogram of decayium. On the second day, that 1 kilogram decays into $1/\sqrt[365]{2}$ kilograms of decayium, on the third day you have $1/(\sqrt[365]{2})^2$ kilograms from the original... And so on, until after a year has passed, you finally have $1/(\sqrt[365]{2})^{365} = 1/2$ kilograms from the original. Seems legit up until here.
But everyday, you're adding 1 kilogram to whatever already exists, so you have multiple copies of this sequence all running in parallel. So on the second day, there are $1 + 1/\sqrt[365]{2}$ kilograms, on the third day, $1 + 1/\sqrt[365]{2} + 1/(\sqrt[365]{2})^2$, ... and on the $n$th day, you will have
$$\sum_{i=0}^{n-1} \frac{1}{(\sqrt[365]{2})^i}$$
kilograms of decayium.
This is a geometric sequence, which we know has a sum of $\frac{1 - 1/(\sqrt[365]{2})^n}{1 - 1/\sqrt[365]{2}}$. As $n$ becomes larger and larger (tends to infinity), this actually converges: it approaches $\frac{1}{1 - 1/\sqrt[365]{2}} = 527$ (approximately). So it becomes a stable system of sorts, which means that you'll keep getting closer to 527 kilograms no matter how long you continue adding material.
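Simulating the process day by day shows the same stabilization (the 20-year horizon below is an arbitrary cutoff):

```python
# Each day, yesterday's stock decays by the factor 2^(-1/365) and 1 kg is added.
r = 2 ** (-1 / 365)            # daily decay factor for a 1-year half-life
total = 0.0
for day in range(1, 20 * 365 + 1):
    total = total * r + 1.0
    if day % (5 * 365) == 0:
        print(day, total)
print("limit:", 1 / (1 - r))   # about 527 kilograms
```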
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Continuous Poisson distribution: $\int_0^\infty \frac{t^n}{n!}dn$ I was thinking about the Poisson distribution; actually, I like it. Then I thought there is no reason for the number of occurrences to be an integer. We can define occurrences as half-finished homeworks, for example, or 3.7 apples, etc.
So when I give Wolfram an example, it actually calculates: the mean occurrence of events is 3.2; what is the probability that I see this event between 1 and 4? It is about 0.6076.
My question is that although Wolfram calculates it, I don't know and couldn't find this integral (the whole density). Can it be calculated?
$$\int_{0}^{+\infty}\frac{t^{n}}{n!}\,dn$$
|
Since $n!=\Gamma(n+1)$, you are then asking about a closed form of
$$
\int_0^\infty \frac{t^x}{\Gamma(x+1)}dx, \quad t>0. \tag1
$$ There is no known closed form of $(1)$, but this integral has been studied by Ramanujan who proved that
$$
\int_0^{\infty} \frac{t^x}{\Gamma(1+x)} \, dx = e^t - \int_{-\infty}^{\infty} \frac{e^{\large -te^x}}{x^2+\pi^2} \, dx, \quad t>0. \tag2
$$
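One can check identity $(2)$ numerically, for instance at the $t=3.2$ from the question (assuming mpmath is available):

```python
# Numerically compare the two sides of Ramanujan's identity (2) at t = 3.2.
from mpmath import mp, quad, gamma, exp, pi, inf

mp.dps = 25
t = mp.mpf("3.2")
lhs = quad(lambda x: t**x / gamma(1 + x), [0, inf])
rhs = exp(t) - quad(lambda x: exp(-t * exp(x)) / (x**2 + pi**2), [-inf, inf])
print(lhs, rhs)   # the two values should agree to high precision
```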
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What are the elements of a filtration generated by a Wiener process? I understand the concept of filtration intuitively, and I can wrtite down the elements of a filtration for example in the case of a coin toss game, but what are the sets in the filtration of a Wiener process at a given time? How do they look like?
|
For example, a stock price $S_t$ is usually modelled by the following
$$dS_t=S_tdt+S_tdW_t$$
where $W_t$ is a Wiener process. In this context, roughly speaking, events that generate the filtration (natural filtration of $W_t$) are any random events that can influence the stock price $S_t$. For instance, at time $t$ there is an earthquake at California that destroyed a factory of Tesla (event that consists of one outcome). $W_t$ goes down and, consequently, stock price falls. However, no-one knows all the random events that can happen at time $t$. You can only observe the stock price. Therefore, it is usually assumed that the natural filtration of a Wiener process is an abstract flow of information that is known up to time $t$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Metric embedding of negatively curved surfaces Suppose we have a metric surface, simply connected and locally isometric to the hyperbolic plane; can we embed this surface in the hyperbolic plane?
The fact is known to be false in the spherical case (that is, there exists a surface, simply connected and locally isometric to the Riemann sphere, that does not embed in the Riemann sphere).
I'm not sure that the fact is really true, but I've heard this before, and I need to understand why it's true (if it is at all) and why the proof doesn't work on the spherical case.
Thanks in advance!
|
For a counterexample, take any point $p \in \mathbb{H}^2$, and take the universal cover of $\mathbb{H}^2 - p$.
A spherical counterexample is similar, except removing a single point does not work because the result is already simply connected and so nothing changes when you take its universal cover. So you simply remove two points $p,q \in S^2$. That is, take the universal cover of $S^2 - \{p,q\}$.
Notice, in these counter-examples the metrics are not complete. If instead you take a complete simply connected surface locally isometric to the hyperbolic plane, then it is globally isometric to the hyperbolic plane. Similarly, a complete simply connected surface locally isometric to $S^2$ is globally isometric to $S^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Summing the terms of a series I have a really confusing question from an investigation. It states:
Find the value of:
$$\sqrt{1^3+2^3+3^3+\ldots+100^3}$$
How would I go about answering this??
|
And if you don't know the formula and don't need the value exactly,
$$\sum_{k=1}^{100} k^3 \approx \int_0^{100} x^3\, dx = \frac{100^4}{4},$$
so the result is
$$\sqrt{\frac{100^4}{4}} = \frac{100^2}{2} = 5000.$$
If you add in the usual correction of $\frac12 f(n)$, the result is
$$\sqrt{\frac{100^4}{4}+\frac12 100^3} = \frac{100^2}{2}\sqrt{1+\frac{2}{100}} \approx \frac{100^2}{2}\left(1+\frac{1}{100}\right) = 5050.$$
Shazam!
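For comparison, the exact value is a one-liner:

```python
# Exact check: 1^3 + ... + 100^3 = (100*101/2)^2 = 5050^2.
print(sum(k**3 for k in range(1, 101)) ** 0.5)   # 5050.0
```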
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
Error bound in the sum of chords approximation to arc length We are currently covering arc length in the calculus class I'm teaching, and since most of the integrals involved are impossible to solve analytically, I'd like to have my students do some approximations instead. Of course we could use the usual rectangle-based methods on the integral $$ \int_a^b \sqrt{1 + f'(x)^2}\,dx $$ but this is somewhat unnatural compared to the "sum-of-chords" approximation using evenly-spaced points $(x_i,f(x_i))$ along the curve:
$$ \sum_{i=1}^n \sqrt{(x_{i+1} - x_i)^2 + (f(x_{i+1}) - f(x_i))^2} $$
(Not to mention that computing the maximum value of the second derivative of $\sqrt{1 + f'(x)^2}$ along $[a,b]$ would be incredibly time-consuming for my students.)
The problem with this approximation is that I don't know a bound on the error. This question (and the papers referenced there) suggests that the order of the error is $o(1/n^2)$, but the exact form of the bound isn't mentioned, and my (rudimentary) calculations only gave an error bound on the order of $o(1/n)$.
So: is there an order-$o(1/n^2)$ bound on the error from the sum-of-chords approximation? What form does it take?
|
\begin{align*}
e_{h}
&= \int_{a}^{a+h} \sqrt{1+f'(x)^{2}} \, dx-\sqrt{h^{2}+[f(a+h)-f(a)]^{2}} \\
&= \frac{h^{3}}{24}
\frac{f''(a)^{2}}{\left[ 1+f'(a)^{2} \right]^{\frac{3}{2}}}+O(h^{4}) \\
E &=\frac{(b-a)^{3}}{24n^{2}}
\frac{f''(\xi)^{2}}{\left[ 1+f'(\xi)^{2} \right]^{\frac{3}{2}}}
\end{align*}
Alternatively
Using local canonical form of curve parametrized by arc length
\begin{align*}
\mathbf{r}(s) &= \mathbf{r}(0)+
\left( s-\frac{\kappa^{2} s^{3}}{6} \right) \mathbf{T}+
\left( \frac{\kappa s^{2}}{2}+\frac{\kappa' s^{3}}{6} \right) \mathbf{N}+
\left( \frac{\kappa \tau s^{3}}{6} \right) \mathbf{B}+O(s^{4}) \\
|\mathbf{r}(s)-\mathbf{r}(0)| & \approx
\sqrt{\left( s-\frac{\kappa^{2} s^{3}}{6} \right)^{2}+
\left( \frac{\kappa s^{2}}{2} \right)^{2}} \\
&= s-\frac{\kappa^{2}}{24}s^{3}+O(s^{4})
\end{align*}
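The quadratic rate in the displayed error formula is easy to see numerically: doubling $n$ should cut the error by about a factor of $4$. A sketch (assuming NumPy; the test curve $y=\sin x$ on $[0,\pi]$ is an arbitrary choice, and a very fine chord sum serves as the reference value):

```python
# Chord-sum arc length and its error against a fine reference value.
import numpy as np

def chord_sum(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return np.sum(np.hypot(np.diff(x), np.diff(y)))

reference = chord_sum(np.sin, 0, np.pi, 1_000_000)
for n in [10, 20, 40, 80]:
    print(n, reference - chord_sum(np.sin, 0, np.pi, n))  # errors shrink ~ 1/n^2
```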
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is it true that if $(n\Delta s_n)$ is bounded then $(s_n)$ is bounded? Let $(s_n)$ be a sequence of real numbers and $\Delta s_n=s_n-s_{n-1}.$ Is the following always true?
If $(n\Delta s_n)$ is bounded then $(s_n)$ is bounded.
|
Let $s_n=\sum_{k=1}^n\frac1k$. Then $$Δs_n=\frac1n$$ hence $nΔs_n=1$ for all $n\in \Bbb N$ but $s_n\to +\infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $\liminf_{n→∞} s_n \le \liminf_{n→∞} σ_n$
I am trying to prove that $$\liminf_{n→∞} s_n \le \liminf_{n→∞} σ_n$$
given that $σ_n=\frac1n(s_1+s_2+\dots+s_n)$.
Setting $\alpha = \liminf_{n→∞}s_n$, hence $$\forall \epsilon>0 \ \exists N: \forall n \geq N, \ \alpha - \epsilon <\inf s_n \le s_n$$ Now, I know I can split $σ_n$ to get $\sigma_n \geq \frac{1}{n}\ (s_1 +\dots + s_N) + \frac{1}{n}\ (s_{N+1}+\dots+s_n)$ and later\begin{align}\sigma_n &\geq \frac{1}{n}\ (s_1 +\dots + s_N)+ \frac{1}{n}(\alpha - \epsilon)(n-N)\\\implies \sigma_n &\geq \frac{1}{n}\ (s_1 + \dots + s_N)- \frac{N}{n}(\alpha - \epsilon)+\alpha - \epsilon\end{align}
Now, $\frac1n(s_1 + \dots+ s_N)-\frac{N}{n}(\alpha-\epsilon)<\epsilon_1$ as $n \rightarrow \infty$ arbitrarily small, so $$\forall \ \epsilon_1>0 \ \exists N_1: \forall n>N_1: \sigma_n>\alpha-\epsilon-\epsilon_1$$ Now I really don't know how to infer from it that $\liminf_{n→∞} s_n \leq \liminf_{n→∞} σ_n$.
So any help will be much appreciated!
|
Note: this is a response to the first version whose details were a little hard to read.
Fix any $N$. For any $n>N$ we have
$$\sigma_n =\frac 1n (s_1+\dots+s_n) \ge \frac 1 n ( s_1+\dots+ s_N + (n-N) \min\{s_{N+1},\dots,s_n\}) \ge \frac{s_1+\dots+s_N}{n} + \frac{n-N}{n} \inf \{s_{N+1},s_{N+2},\dots\}.$$
Therefore $$\liminf_{n\to\infty} \sigma_n \ge 0 + \inf \{s_{N+1},s_{N+2},\dots\}.$$
This is true for every $N$. The RHS increases to $\liminf_{n\to\infty} s_n$ as $N\to\infty$, and the result follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Optimal rounding a sequence of reals to integers I'm given positive real numbers $c_1,\dots,c_m \in \mathbb{R}$ and an integer $d \in \mathbb{N}$. My goal is to find non-negative integers $x_1,\dots,x_m \in \mathbb{N}$ that minimize $\sum_i (x_i - c_i)^2$, subject to the requirement $\sum_i x_i = d$.
I'm inclined to suspect that there exists $t \in \mathbb{R}$ such that the optimal solution is $x_i = \lfloor t c_i \rceil$ for all $i$ (i.e., take $x_i$ to be $tc_i$ rounded to the nearest integer). Is this the case?
My first instinct was to apply Lagrange multipliers, but I guess that doesn't work given the requirement that $x_1,\dots,x_m$ be integers.
Motivation: I'm trying to help with someone's quantization problem, but the problem seems fun in its own right.
|
This is an MIQP (Mixed Integer Quadratic Programming) problem. This version is the easy one: it is convex. That means there are quite a few good solvers available to handle it. Still, finding provably globally optimal solutions is often difficult; on the other hand, solvers typically find good solutions very quickly. For $m=1000$ a good solver should take no more than a few seconds to find a globally optimal solution.
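Besides calling an MIQP solver, the objective here is separable and convex, so the classical greedy algorithm for separable convex resource allocation applies: hand out the $d$ units one at a time, each to the coordinate with the smallest marginal cost. A minimal sketch (treat it as an illustration of that approach rather than a replacement for a solver):

```python
# Greedy allocation: the marginal cost of raising x_i from k to k+1 is
# (k+1-c_i)^2 - (k-c_i)^2 = 2(k - c_i) + 1, so always take the cheapest step.
import heapq

def allocate(c, d):
    x = [0] * len(c)
    heap = [(2 * (0 - ci) + 1, i) for i, ci in enumerate(c)]
    heapq.heapify(heap)
    for _ in range(d):
        _, i = heapq.heappop(heap)
        x[i] += 1
        heapq.heappush(heap, (2 * (x[i] - c[i]) + 1, i))
    return x

print(allocate([2.7, 1.2, 0.4], 5))   # [3, 1, 1]
```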
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1674971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
}
|
Is there a way to obtain exactly 2 quarts in the 8-quart or 5-quart pitcher? Suppose we are given pitchers of waters, of sizes $12$ quarts, $8$ quarts, and $5$ quarts. Initially the $12$ quart pitcher is full and the other two empty. We can pour water from one pitcher to another, pouring until the receiving pitcher is full or the pouring pitcher is empty. Is there a way to pour among pitchers to obtain exactly $2$ quarts in the $8$-quart or $5$-quart pitcher? If so, find the minimal sequence of pourings to get $2$ quarts in the $8$-quart or the $5$-quart pitcher.
My work:
Start at $(12,0,0)$
Can either go to $(7,0,5)$ or $(4,8,0)$
From $(7,0,5)$ you can go to $(0,7,5)$ or $(7,5,0)$
From $(4,8,0)$ you can go to $(0,8,4)$ or $(4,3,5)$
After this I'm kind of confused as to where to go with this. The answer in the back of the textbook says (listed as $(b,c)$) -> $(0,0)-(0,5)-(7,5)$. This answer doesn't make any sense to me and I don't know if it is correct. Am I missing something?
Edit: New thoughts are $(12,0,0)$ to $(7,0,5)$ to $(0,7,5)$ to $(5,7,0)$ to $(5,2,5)$ Is this the shortest way to solve this problem?
|
I would go to (7, 0, 5), then (7, 5, 0), (2, 5, 5) and (2, 8, 2). This is four pours.
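A brute-force breadth-first search over the reachable states confirms that four pours is minimal (a small sketch; which of the equally short solutions it reports depends on the exploration order):

```python
# BFS over pour states (12-qt, 8-qt, 5-qt), looking for 2 quarts in the 8- or 5-qt.
from collections import deque

CAP = (12, 8, 5)
start = (12, 0, 0)

def moves(state):
    for i in range(3):
        for j in range(3):
            if i != j and state[i] > 0:
                amt = min(state[i], CAP[j] - state[j])
                if amt > 0:
                    s = list(state)
                    s[i] -= amt
                    s[j] += amt
                    yield tuple(s)

queue, seen = deque([(start, [start])]), {start}
while queue:
    state, path = queue.popleft()
    if 2 in state[1:]:                    # 2 quarts in the 8- or 5-qt pitcher
        print(len(path) - 1, "pours:", path)
        break
    for nxt in moves(state):
        if nxt not in seen:
            seen.add(nxt)
            queue.append((nxt, path + [nxt]))
```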
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Could anyone explain why this is a general case of Weierstrass Approximation? Suppose $X_1, X_2, \ldots$ are independent Bernoulli random variables, taking the values $1$ and $0$ with probabilities $p$ and $1-p$ respectively.
Let $\bar{X}_n = \frac{1}{n} \sum\limits_{i=1}^nX_i$.
If $U \in C^0([0,1],\mathbb{R})$, then $E(U(\bar{X_n}))$ converges uniformly to $ U(E(\bar{X_n})) = U(p).$
Our professor used this as a "general case of Weierstrass approximation" and it was proved. But I still don't understand very much.
What is the polynomial here?
What is the meaning of "$E(U(\bar{X_n}))$ converges uniformly to $U(E(\bar{X_n})) = U(p)$"?
|
The polynomial here is $E(U(\bar X_n))$. Writing it out, this is
$$
E(U(\bar X_n))=\sum_{k=0}^n U(k/n)P(\bar X_n=k/n)\tag1$$
since $\bar X_n$ takes values in $0/n, 1/n,\ldots, n/n$. If we write
$$P(\bar X_n=k/n)=P(n\bar X_n=k)$$
we see that $n\bar X_n=\sum_i X_i$ is a binomial($n,p$) random variable, so the sum (1) becomes the polynomial
$$\sum_{k=0}^n U(k/n){n\choose k}p^k(1-p)^{n-k}.\tag2$$
Note that the coefficients $U(k/n)$ are constants. The convergence is uniform in $p$, in the sense
$$\sup_{p\in[0,1]} \left | E(U(\bar X_n))-U(p) \right | \to 0\ \text{as $n\to\infty$.}
$$
The remark your prof made is that (2) is an explicit construction of the $n$th approximating polynomial (the Bernstein polynomial) in the Weierstrass theorem.
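One can watch the uniform convergence happen by evaluating the polynomial $(2)$ for a concrete (even non-smooth) test function; here $U(x)=|x-1/2|$ is an arbitrary choice (NumPy assumed):

```python
# Bernstein polynomial of U and the sup error over p in [0, 1].
import numpy as np
from math import comb

U = lambda x: abs(x - 0.5)
ps = np.linspace(0, 1, 501)

for n in [10, 50, 250]:
    Bn = sum(U(k / n) * comb(n, k) * ps**k * (1 - ps)**(n - k)
             for k in range(n + 1))
    print(n, np.max(np.abs(Bn - U(ps))))   # the sup error decreases with n
```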
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
show quadratic polynomial cannot solve differential equation I have the differential equation $y' + 2xy = 1$ and I need to show that there is no quadratic polynomial that solves this equation. I set $y=Ax^2+Bx+C$, solved for $y'$, and plugged $y$ and $y'$ into my differential equation. Is this the correct approach, and if so, how can I show that the left-hand side can never equal $1$? My final equation is:
$$(2Ax+B)+2x(Ax^2+Bx+C)=1$$
Thanks so much in advance.
|
You do not need to equate coefficients, and you can easily show that no polynomial can be a solution to $y' + 2xy = 1$.
Suppose $y$ is a polynomial of degree $d$. Then $2xy$ is a polynomial of degree $d+1$ and $y'$ is a polynomial of degree $d-1$. Therefore $y' + 2xy$ is a polynomial of degree $d+1$, since the term of degree $d+1$ in $2xy$ cannot be cancelled out by $y'$, whose highest-degree term has degree $d-1$. So $y' + 2xy$ is a polynomial of degree $d+1$ and therefore cannot be constant, let alone $1$.
To actually solve the equation, note that the integrating factor is $e^{\int 2x\,dx}=e^{x^2}$. Then
$$e^{x^2}(y'+2xy)=e^{x^2}y'+2xe^{x^2}y=(e^{x^2}y)',$$
so $1 = y'+2xy$ becomes $e^{x^2}=(e^{x^2}y)'$. Integrating,
$$e^{x^2}y = \int e^{x^2}\,dx$$
(with the constant of integration implied), or $y = e^{-x^2}\int e^{x^2}\,dx$. Carrying out the integration will involve the error function or an equivalent.
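The final solution can be verified symbolically (assuming SymPy, whose $\operatorname{erfi}$ supplies the needed antiderivative of $e^{x^2}$ up to a constant factor):

```python
# Verify that y = e^(-x^2) * (sqrt(pi)/2 * erfi(x) + C) satisfies y' + 2xy = 1.
import sympy as sp

x, C = sp.symbols("x C")
y = sp.exp(-x**2) * (sp.sqrt(sp.pi) / 2 * sp.erfi(x) + C)
print(sp.simplify(sp.diff(y, x) + 2 * x * y))   # prints 1
```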
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
What are the advantages of outer measure? I am learning about measure theory. I have studied on outer measure then i am learning about Lebesgue measure. But i have a question why we learn outer measure since we have Lebesgue measure? That is What are the advantages of outer measure ?
|
Another reason besides the excellent answer given is that every set has an outer measure, whereas not every set has a measure. If a set can't be proven to be measurable, it's common to investigate it with the outer measure. If the set turns out to be measurable, the outer-measure results still apply, because the two notions agree on measurable sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
proof - Bézout Coefficients are always relatively prime I had been researching over the Extended Euclidean Algorithm when I happened to observe that the Bézout Coefficients were always relatively prime.
Let $a$ and $b$ be two integers and $d$ their GCD. Now, $d = ax + by$ where x and y are two integers.
$$d = ax + by \implies 1 = \frac{a}{d}x + \frac{b}{d}y$$ So, $x$ and $y$ can be expressed to form 1 so their GCD is 1 and are relatively prime. ($\frac{a}{d}$ and $\frac{b}{d}$ are integers.)
Another great thing is that $\frac{a}{d}$ and $\frac{b}{d}$ are also relatively prime. So you see this goes on like a sequence till $a$ and $b$ become one.
Am I right? What else can be known from this fact? Is it useful? Can it be used to prove some other things?
|
You are partially right, but not necessarily in general. Bezout's identity also mentions that, "more generally, the integers of the form $$ n=ax + by$$ are exactly the multiples of $d$."
This implies that if $\gcd(x,y)=d'$, then $n$ is also a multiple of $d'$.
Therefore,
$$n=ax+by=\gcd(a,b)\gcd(x,y)n'$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
}
|
Diagonalizable Linear Operator - Is $Ker(T)=Ker(T^2)$? Been spending a lot of time on this one.
Given a linear operator T, in a vector space V, having a finite dimension and is diagonalizable - Is $Ker(T)=Ker(T^2)$?
One way is trivial, apply $T$ on $T(v)$ to get $0$, but I cannot find the other way around.
As always, TIA.
Edit:
Proving $Ker(T^2) \subset Ker(T)$:
Since $[T]_B[T(v)]_B=[T^2(v)]_B$ then $[T]^2_B[v]_B=[T^2(v)]_B$
Now given the fact that T is diagonalizable, $T^2$ is the result of squaring its diagonal elements - which we assume are not all equal to zero.
Having that in mind, let $v\in Ker(T^2)$; then $[T^2(v)]_B=0$.
But since
$[T^2(v)]_B=[T^2]_B[v]_B=[T]^2_B[v]_B=[T]_B[T]_B[v]_B$ then $[T]_B[T]_B[v]_B=0$
Since we established that $T$ is not zero, we are left to conclude that $[T]_B[v]_B=0$, namely $v\in Ker(T)$.
|
Express $T$ as a diagonal matrix. Then the matrix of $T^2$ has the diagonal entries of $T$ squared. In particular, $T$ and $T^2$ vanish on exactly the same basis vectors, so $Ker(T)=Ker(T^2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
The j-function of a point on the boundary of the fundamental domain is real valued. I am trying to prove the following statement:
Let $z \in \mathbb{D} $ (the standard fundamental domain for $SL_{2}(\mathbb{Z})$). Prove that if $z$ lies on the boundary of $\mathbb{D}$, or if $Re(z)=0$, then $j(z) \in \mathbb{R}$.
So far I have shown that $j(i)=1728$ and $j(e^{(2\pi i)/3})=0$ but I am not sure how to proceed...
|
Hint: Show that if $f(\tau)=g(\exp(\alpha\mathrm{i}\tau))$ for some Laurent series $g$ with real coefficients and real $\alpha>0$, then $f(-\bar{\tau}) = \bar{f}(\tau)$. Then use the symmetries of $j$ to show that on the boundary of the fundamental domain, as well as for $\operatorname{Re}\tau=0$, you get $j(\tau) = j(-\bar{\tau}) = \bar{j}(\tau)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Ordered integral domain
If $a>0$ and $b>0$, both $a$ and $b$ are integers, and $a|b$. Use ordered integral domain to prove $a<b$.
I wrote: We can write $b=an$, where $n$ is some positive integer, and
we get $b\left(\frac1n\right)=a$;
$\frac1n < 1$;
that proves $b>a$.
Is this correct?
|
The problem as stated cannot be proved, e.g. for $a=3$ and $b=3$ we have $a\mid b$ but $a \not < b$.
Assuming the problem was to prove $a\le b$, you should be careful how you conclude that $\frac 1 n \le 1$. When you write $b=an$ for some integer $n$, you should explain why $n$ cannot be negative or $0$.
With rings and integral domains, you should generally avoid division. In your case, $\frac 1 n$ is not an integer unless $n=\pm 1$. Writing $\frac 1 n$ means we have to use the rationals or reals.
A more careful proof would be:
*
*$a \mid b$ means $b=an$ for some integer $n$.
*Argue that $n$ is positive, e.g. we know $n\ne0$, otherwise $b=an=a(0)=0$; assuming we've already proven positive $\times$ negative $=$ negative then $n$ cannot be negative.
*Write $b-a = an-a = a(n-1)$.
*Case 1: if $n=1$ then $b=a(1)=a$ so $a\le b$ and we're done.
*Case 2: if $n\ne1$ argue (since $n$ positive) that $n-1 > 0$, so $b-a = a(n-1) =$ positive $\times$ positive $=$ positive, i.e. $b-a>0$, i.e. $a<b$.
To be really careful, using just the axioms of an ordered integral domain, you'd need/prove extra properties of the integers, e.g. that there are no integers between $0$ and $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Bijection from $A \rightarrow \varnothing$ My thoughts. We need to prove that:
1 $\forall x,y \in A, \text{ if } f(x) = f(y) \rightarrow x = y$
2 $\forall y \in \varnothing, \exists x \in A, f(x) = y$.
In (1), $f(x) = f(y)$ is false, since neither $f(x)$ nor $f(y)$ have a value, so (1) is vacuously true.
Also, $\forall y \in \varnothing, P(x, y)$ is vacuously true. So both statements are true.
Admittedly, Does there exist a bijection between empty sets? offers some guidance, but I am unsure whether my rationale for #1 is sound.
|
Functions $f:A\rightarrow B$ can be thought of as particular subsets of $A\times B$ (ones that satisfy the well-defined property). Since $A\times\emptyset=\emptyset$, there is only one subset of $A\times\emptyset$.
Additionally, for the domain of $f:A\rightarrow B$ to be $A$, for all $a\in A$, there must exist $b\in B$ such that $(a,b)\in f$. In your case, since $f=\emptyset$, $(a,b)\not\in f$, so it must be that there is no $a\in A$. Hence $A=\emptyset$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Normed vector space with a closed subspace Suppose that $X$ is a normed vector space and that $M$ is a closed subspace of $X$ with $M\neq X$. Show that there is an $x\in X$ with $x\neq 0$ and $$\inf_{y\in M}\lVert x - y\rVert \geq \frac{1}{2}\lVert x \rVert$$
I am not exactly sure how to prove this. I believe that since $M\neq X$ we can find some $z\in X\setminus M$; then, letting $\delta = \inf_{y\in M}\lVert z - y\rVert$, we should be able to choose a suitable $y\in M$ and work from there.
Any suggestions is greatly appreciated.
|
The quotient space $X/M$ is a non-trivial normed space whose elements are the cosets $x+M$, with $\|x+M\|=\inf_{m\in M}\|x+m\|$. Choose any non-zero coset $x'+M$; since $M$ is closed, $\|x'+M\|_{X/M}>0$, so by the definition of the infimum there exists $m'\in M$ such that
$$
\|x'+m'\| \le 2\|x'+M\|_{X/M}
$$
Then
$$
\|(x'+m')\| \le 2\inf_{m\in M}\|x'+m'+m\| =2\inf_{m\in M}\|(x'+m')-m\|
$$
Take $x=x'+m'$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1675944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Proving $6 \sec\phi \tan\phi = \frac{3}{1-\sin\phi} - \frac{3}{1+\sin \phi}$
$$6 \sec\phi \tan\phi = \frac{3}{1-\sin\phi} - \frac{3}{1+\sin \phi}$$
I can't seem to figure out how to prove this.
Whenever I try to prove the left side, I end up with $\frac{6\sin\theta}{\cos\theta}$, which I think might be right.
As for the right side, I get confused with the denominators and what to do with them. I know if I square root $1-\sin\phi$, I'll get a Pythagorean identity, but then I don't know where to go from there.
Please help me with a step-by-step guide. I really want to learn how to do this.
|
You can do this either from LHS to RHS or from RHS to LHS.
Solution 1: LHS $\rightarrow$ RHS
$$\require{cancel}\begin{aligned}6\sec\phi\tan\phi&=6\frac{1}{\cos\phi}\frac{\sin\phi}{\cos\phi}\\&=\frac{6\sin\phi}{\cos^2\phi}\\&=\frac{3\sin\phi+3\sin\phi}{\left(1-\sin\phi\right)\left(1+\sin\phi\right)}\\&=\frac{3\left(1+\sin\phi\right)-3\left(1-\sin\phi\right)}{\left(1-\sin\phi\right)\left(1+\sin\phi\right)}\\&=\frac{3\cancel{\left(1+\sin\phi\right)}}{\left(1-\sin\phi\right)\cancel{\left(1+\sin\phi\right)}}-\frac{3\cancel{\left(1-\sin\phi\right)}}{\cancel{\left(1-\sin\phi\right)}\left(1+\sin\phi\right)}\\&=\frac{3}{1-\sin\phi}-\frac{3}{1+\sin\phi}\end{aligned}$$
Solution 2: RHS $\rightarrow$ LHS
$$\begin{aligned}\frac{3}{1-\sin\phi}-\frac{3}{1+\sin\phi}&=\frac{3\left(1+\sin\phi\right)-3\left(1-\sin\phi\right)}{\left(1-\sin\phi\right)\left(1+\sin\phi\right)}\\&=\frac{\cancel3+3\sin\phi\cancel{-3}+3\sin\phi}{\left(1-\sin\phi\right)\left(1+\sin\phi\right)}\\&=\frac{6\sin\phi}{1-\sin^2\phi}\\&=\frac{6\sin\phi}{\cos^2\phi}\\&=\frac{6\sin\phi}{\cos\phi\cos\phi}\\&=6\sec\phi\tan\phi\end{aligned}$$
I hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1676062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 5
}
|