Q | A | meta |
|---|---|---|
Example of hypergraph Definition: A hypergraph $\Gamma=(V,\mathcal{E})$ is a set of vertices $V$ and a collection $\mathcal{E}$ of subsets of $V$ such that for every $E\in \mathcal{E}$, we have $|E|\geq 2$. The members of $\mathcal{E}$ are called hyperedges.
Example: Let $V=\{1,2,3,4\}$ and consider the collection $\mathcal{E}=\{\{1,2,4\}, \{3,4\}, \{2,3,4\}\}$.
However, I have tried to depict it. For instance, $\{1,2,4\}$ means that I've connected 1 and 2, 2 and 4, 1 and 4. I've the following picture.
Is it correct?
| The two main philosophies of hypergraph drawing are:

* The Set-Circling School (shown on the left). Here, you just take every hyperedge and circle all the vertices involved with a continuous region.
  * Pros: Each edge is definitely unambiguously drawn.
  * Cons: Way more going on in the diagram, especially when lots of edges overlap.
* The Curved-Line School (shown on the right). Here, you draw a curved line through all the vertices on a hyperedge, in arbitrary order, often sticking out a bit past the ends.
  * Pros: Minimalist. A natural generalization of graph drawings.
  * Cons: Not always obvious that it's the same edge entering and exiting a vertex.
(Note: in both of these examples, I misread the question and so both of these are drawings of the hypergraph with edge set $\{\{1,2,4\}, \{2,3,4\}, \{2,4\}\}$. Sorry. These drawings took a great deal of effort to make, so I'm not going to draw them all over again.)
In both cases, the problem we are trying to solve is to make a drawing with the single hyperedge $\{1,2,4\}$ look different from a drawing including some of the hyperedges $\{1,2\}$, $\{1,4\}$, $\{2,4\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Simultaneous Equations with a Modulus I must confess I'm finding it pretty hard to get my head round these equations, much less solve them. So any help would be enormously appreciated. I'd appreciate if you'd show the logic of how to solve these, and keep it simple without explicitly mentioning the Extended Euclidean what have you, and all this other stuff. (I've seen it mentioned in other answers, but it's impenetrable to me). I'm only interested in the step-by-step process, and how it works, rather than just quoting formulaic methods. So please keep it simple and jargon-free. Thank you. Equations are:
$$
\begin{cases}
z_1 (8z_2 + 2) \equiv 3 \pmod{11}\\
z_2 (4z_1 + 11) \equiv 5 \pmod{13}
\end{cases}
$$
I'm trying to solve for $z_1$ and $z_2$ modulo whatever.
| So you will have $10$ answer pairs for the first equation.
Each of these is obtained by getting the inverse of some chosen value of $z_1$ and then solving for
$(8z_2 + 2) \equiv 3z_1^{-1} \bmod 11$ which calculates out to $z_2\equiv 7(3z_1^{-1}-2) \bmod 11$ since $8^{-1}\equiv 7 \bmod 11$.
You know in advance that $z_1\equiv 0 \bmod 11$ is not going to give a solution. Note that $z_2\equiv 0$ can be a solution though; in fact $(z_1,z_2)\equiv (7,0)\bmod 11$ is a solution for the first equation.
So for example with $z_1\equiv 6$, we know $z_1^{-1}\equiv 2$ so $z_2 \equiv 7(3\cdot 2 -2) \equiv 28\equiv 6 \bmod 11$.
Similarly you will have $12$ answer pairs for the second equation. Here of course $z_2\equiv 0 \bmod 13$ cannot be a solution, but this time $z_1\equiv 0$ is feasible and $(z_1,z_2)\equiv (0,4)\bmod 13$ satisfies the requirement.
Combining every solution to the first equation with every solution to the second through the Chinese Remainder Theorem, you will have $120$ answer pairs $\bmod 143$.
An example: combining $(6,6)\bmod 11$ with $(0,4)\bmod 13$ gives $(z_1,z_2)\equiv(39,17)\bmod 143$ as one of the answer pairs.
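As a sanity check (my addition, not part of the original answer), one can brute-force the full system over $\mathbb{Z}_{143}$ and confirm both the count of $120$ pairs and the example pair $(39,17)$:

```python
# Brute-force all residue pairs (z1, z2) mod 143 = 11 * 13 and keep those
# satisfying both congruences simultaneously.
sols = [(z1, z2)
        for z1 in range(143) for z2 in range(143)
        if (z1 * (8 * z2 + 2)) % 11 == 3 and (z2 * (4 * z1 + 11)) % 13 == 5]

print(len(sols))          # 120 answer pairs, as the counting argument predicts
print((39, 17) in sols)   # True: the CRT-combined example above checks out
```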
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Vector-valued forms inside the first jet bundle On page 433 of "Self-duality in four-dimensional Riemannian geometry" by Atiyah, Hitchin and Singer, it is written that $p^*(E \otimes \Lambda^1) \subset p^*J_1(E)$, where $\Lambda^1 \to X$ is the bundle of $1$-forms, $p : E^* \to X$ is a vector bundle and $J_1(E) \to X$ is the first jet bundle of $E$. How exactly is this inclusion realized?
| An element of $J_1(E)$ can be thought of simply as the value and first derivative of a section of $E$ at a single point. When $E$ is a trivial bundle $M \times V$ so that sections of $E$ are just vector-valued functions $M \to V,$ this is made very explicit by the canonical isomorphism
\begin{eqnarray}J_1(E) &=& E \oplus (E \otimes \Lambda^1)
\\j^1_x s &\mapsto&(s(x), ds(x)).
\end{eqnarray}
For more general vector bundles, to make this same explicit identification we need a way of differentiating sections. If we fix a linear connection $\nabla$ on $E$ then we can simply use $j_x^1s \mapsto (s(x),\nabla s(x)).$ You can check that this is an isomorphism by using local coordinates, where jets are simply Taylor polynomials.
In general, this isomorphism clearly depends on our choice of connection. When restricted to the jets with target zero (i.e. $j^1_xs$ for sections with $s(x)=0$), however, it does not: in local coordinates we have $(\nabla s)_i^\alpha = \partial_i s^\alpha + \omega_{\beta i}^\alpha s^\beta$ where $\omega_{\beta i}^\alpha$ are the connection coefficients of $\nabla;$ so at a point where $s=0$ the formula is simply $(\nabla s)^\alpha_i = \partial_i s^\alpha.$
Thus we have a canonical inclusion $E \otimes \Lambda^1\subset J_1(E)$ given by restricting the isomorphism discussed above to $\{0\} \times (E \otimes \Lambda^1).$ Composing with $p$ yields an inclusion $$p^* (E \otimes \Lambda^1) \subset p^*J_1(E).$$ I haven't checked that this agrees with the formulae given in the paper - it's possible that there's some other way of doing this that I've missed. To check that this really is what the authors mean, I recommend that you carefully verify the equation immediately following the claim of the inclusion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Example of non-isomorphic sheaves, with isomorphic stalks at every point? Basically what the title asks. I'd like to see an example of two sheaves on a topological space which have the same (isomorphic) stalks at every point, but are not isomorphic as sheaves.
| Take as the topological space the circle $S^1$. Take as stalks $\mathbb{Z}_3$. There is an automorphism switching the two non-identity elements. One sheaf is the trivial sheaf; the other is twisted like a Möbius strip via this automorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Converting from Polar form to Cartesian Form I am just starting with complex numbers and vectors. The question is:
Convert the following to Cartesian form.
a) $8 \,\text{cis} \frac \pi4$
The formula given is:
$$z = x +yi = r\space(\cos\theta + i\sin\theta)$$
With $r=8$ and $\theta = \frac\pi4$, I did:
$$z=8\left(\cos\frac\pi4 + i \sin \frac\pi4\right)$$
$$z = 8(0.71 + i0.71)$$
$$z = 5.66 + i5.66$$
The answer they give in the answers section is:
$$z=4\sqrt2(1+i)$$
I know this is the same answer just written approximately. Is someone able to take me through how you get to $z=4\sqrt2(1+i)$ instead of $z = 5.66 + i5.66$?
| Knowing the $\pi\over4$ family:
$$\cos \dfrac{\pi}{4} = \sin \dfrac{\pi}{4} = \dfrac{1}{\sqrt 2}$$
Then applying this result to $8\,\text{cis} \frac{\pi}{4}$:
$$z=8\left(\dfrac{1}{\sqrt 2} + i\dfrac{1}{\sqrt 2}\right) = 8\cdot \dfrac{1}{\sqrt 2}(1+i)$$
And then multiply:
$$8\cdot \dfrac{1}{\sqrt 2} = \dfrac{8\sqrt 2}{2}=4\sqrt2$$
$$8\cdot \dfrac{1}{\sqrt 2}(1+i) = 4\sqrt 2(1+i)$$
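As a quick numerical check (my addition), Python's `cmath` confirms that $8\,\text{cis}\frac{\pi}{4}$ and $4\sqrt2(1+i)$ are the same complex number:

```python
import cmath
import math

z_polar = 8 * cmath.exp(1j * math.pi / 4)   # 8 cis(pi/4) = 8(cos(pi/4) + i sin(pi/4))
z_exact = 4 * math.sqrt(2) * (1 + 1j)       # the book's exact Cartesian form

print(z_polar)                  # approximately (5.656854+5.656854j)
print(abs(z_polar - z_exact))   # essentially 0 (floating-point noise)
```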
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the Laplace Inverse Transform of $\ln(s)/(s(s+a))$? I've come across a term $\frac{\ln(s)}{s(s+a)}$ in a transformed differential equation.
I am trying to take the inverse transform of it, but I have no idea how to approach it.
If anyone knows how to solve this, that would be greatly appreciated.
| Let $$F(s)=\ln s\\G(s)=\dfrac{1}{s(s+a)}$$therefore we're gonna find the Inverse Laplace Transform (ILT) of $F(s)G(s)$, also if we denote the ILTs of $F(s)$ and $G(s)$ with $f(t)$ and $g(t)$ respectively the $ILT$ of $F(s)G(s)$ is $f(t)*g(t)$ where $*$ denotes the convolution operator. Also we have$$f(t)=\dfrac{u(t)}{t}\\g(t)=\dfrac{1}{a}(1-e^{-at})u(t)$$where $u(t)$ denotes unit step function. Therefore$$f(t)*g(t)=\int_{-\infty}^{\infty}f(\tau)g(t-\tau)d\tau=\dfrac{1}{a}\int_{0}^{t}\dfrac{1-e^{-a\tau}}{t-\tau}d\tau$$which has no closed form so:$$\dfrac{1}{a}\int_{0}^{t}\dfrac{1-e^{-a\tau}}{t-\tau}d\tau=L^{-1}\left(\dfrac{\ln s}{s(s+a)}\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Why is it that $\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6}$ is not less than $\int_1^\infty \frac{dx}{x^2} = 1$? So according to Euler's proof of the Basel problem,
$$\sum_{n=1}^\infty \frac{1}{n^2}=\frac{\pi^2}{6},$$
But only for $n \in \mathbb{Z}$.
But if $n$ was a positive real and $n \geqslant 1$, then would the sum $S$ be equal to,
$$\int_1^\infty \frac{dx}{x^2}?$$
If so then after solving the integral via power rule we get $S=1$.
But how can this be, since when taking reals, we take integral values as well as new values? So by that logic shouldn't $S>\dfrac{\pi^2}{6}$? But $1$ isn't. Where am I going wrong? Please explain.
| In the following picture, the pink area is the left-hand part of the integral $\displaystyle \int_{x=1}^\infty \frac1{x^2} \, dx$ while the green and pink areas together are the left-hand part of the sum $\displaystyle \sum_{n=1}^\infty \frac1{n^2}$
The green area is the difference, which is clearly positive and is in fact $\dfrac{\pi^2}{6} -1\approx 0.6449$
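Since the picture is not reproduced here, a short numerical check (my addition) makes the same point: the partial sums of $\sum 1/n^2$ exceed the integral's value of $1$ by an amount approaching $\pi^2/6-1$:

```python
import math

N = 10**6
partial_sum = sum(1.0 / n**2 for n in range(1, N + 1))
integral = 1.0                       # exact value of the integral of dx/x^2 from 1 to infinity

green_area = partial_sum - integral  # approaches pi^2/6 - 1, about 0.6449
print(green_area, math.pi**2 / 6 - 1)
```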
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 0
} |
Finding a set of vectors that is not a vector space I am asked to find a set of vectors in $\mathbb{R}^2$ such that if $y$ is in the set, then $b\cdot y$ is in the set for every real number $b$. However, I am told that this set cannot be a vector space.
The set would not be a vector space if the zero vector weren't included. However, given that $b$ can be any real number, including $0$, this cannot be the case. How else can a set of vectors that matches this condition not be a vector space?
| Let $S$ be the union of the first and third quadrants (along with the $x$- and $y$-axes, let's say). Scaling by a real number $r$ will either keep each vector in its same quadrant (if $r>0$), or send the first and third quadrants to each other (if $r<0$), or send all vectors to the origin (if $r=0$). So $S$ is closed under scalar multiplication.
However, notice that
$$
(-2, -1) + (1, 2) = (-1, 1)
$$
which is in the second quadrant, so $S$ is not closed under vector addition.
Extra note: for an example of something closed under addition but not under scalar multiplication, take just the first quadrant.
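A small script (my addition; the membership test `in_S` is my own naming) illustrates both closure properties numerically:

```python
def in_S(v):
    """Membership in S = first and third quadrants, axes included."""
    x, y = v
    return (x >= 0 and y >= 0) or (x <= 0 and y <= 0)

# Closed under scalar multiplication: c * v stays in S for sample data.
samples = [(1.0, 2.0), (-2.0, -1.0), (0.0, 3.0), (-0.5, 0.0)]
scalars = [2.0, -1.0, 0.0, -3.5]
print(all(in_S((c * x, c * y)) for (x, y) in samples for c in scalars))  # True

# Not closed under addition: the example from the answer.
u, w = (-2.0, -1.0), (1.0, 2.0)
s = (u[0] + w[0], u[1] + w[1])
print(s, in_S(s))  # (-1.0, 1.0) False -- the sum landed in the second quadrant
```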
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 6
} |
Finding range of $f(x)=\frac{\sin^2 x+4\sin x+5}{2\sin^2 x+8\sin x+8}$
Finding range of $$f(x)=\frac{\sin^2 x+4\sin x+5}{2\sin^2 x+8\sin x+8}$$
Try: put $\sin x=t$ and $-1\leq t\leq 1$
So $$y=\frac{t^2+4t+5}{2t^2+8t+8}$$
$$2yt^2+8yt+8y=t^2+4t+5$$
$$(2y-1)t^2+4(2y-1)t+(8y-5)=0$$
For real roots $D\geq 0$
So $$16(2y-1)^2-4(2y-1)(8y-5)\geq 0$$
$$4(2y-1)^2-(2y-1)(8y-5)\geq 0$$
$y\geq 0.5$
Could someone help me see where I have gone wrong? Thanks.
| Hint: Use
$$\sin^2 x +4\sin x+5 = (\sin x +2)^2 +1$$
and
$$2\sin^2 x +8 \sin x +8 = 2\left(\sin x + 2 \right)^2.$$
Also, break the fraction into two pieces
$$\dfrac{(\sin x +2)^2 +1}{2\left(\sin x + 2 \right)^2}=\dfrac{1}{2}+\dfrac{1}{2}\dfrac{1}{\left(\sin x + 2 \right)^2}$$
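Carrying the hint to its conclusion, with $\sin x+2\in[1,3]$ the last term lies in $[\frac{1}{18},\frac{1}{2}]$, so the range is $[\frac{5}{9},1]$; a numeric sweep (my addition) agrees:

```python
import math

def f(x):
    s = math.sin(x)
    return (s * s + 4 * s + 5) / (2 * s * s + 8 * s + 8)

# Sample one full period of sin x.
values = [f(k * 2 * math.pi / 100000) for k in range(100000)]
print(min(values), 5 / 9)   # minimum approaches 5/9, attained at sin x = 1
print(max(values), 1.0)     # maximum approaches 1, attained at sin x = -1
```

This also shows where the discriminant approach went wrong: $D\geq 0$ only guarantees a real $t$, not one with $-1\leq t\leq 1$.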
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Is $F(c\vec{v})$ = $cF(\vec{v})$ condition of linear transformation the same thing as $F(\vec{a})+F(\vec{b})=F(\vec{a}+\vec{b})$? Is $F(c\vec{v})$ = $cF(\vec{v})$ condition of linear transformation the same thing as $F(\vec{a})+F(\vec{b})=F(\vec{a}+\vec{b})$?
I am a physics undergraduate student and studying linear algebra.
The 2 conditions of linear transformation seem quite the same to me. But, I know if these 2 were the same then it wouldn't be mentioned explicitly in the literature.
Why do I think the 2 conditions are the same thing:
For any real number (rational or irrational) $c$, the vector $c\vec{v}$ can be expressed as a sum of $n$ equal vectors, $\frac{c}{n}\vec{v}$.
So, $$c\vec{v}=\sum_{i=1}^{n}\frac{c}{n}\vec{v}$$
Then, applying the first condition on two constituents we get $F(\frac{c}{n}\vec{v})+F(\frac{c}{n}\vec{v})=F(\frac{2c}{n}\vec{v})$ $\implies2F(\frac{c}{n}\vec{v})=F(\frac{2c}{n}\vec{v})$. Then applying induction, we can get the second condition $F(c\vec{v})$ = $cF(\vec{v})$
So, why is the second condition necessary?
One reasoning: If $c$ is an irrational number, then this repeated addition never ends; because $\frac{c}{n}$ is also an irrational number. While it may not be a problem to a physicist, it can be a real problem to a mathematician. But, I don't know that deep about number theory. So, I can't figure out the problem.
So, mathematicians, help me figure out the reason of explicitly stating the 2nd condition. You are welcome to use advanced stuff to make it clear, but first give a simple explanation, then rigorously explain it.
| You have proved that if $F(a) + F(b) = F(a+b)$, then given any $n \in \mathbb{N}$, any real $c$, and any vector $v$, $F(cv) = nF(\frac{cv}{n})$. Following the same argument you can prove that $F(nv) = nF(v)$ with $n \in \mathbb{N}$. Using these two proofs you managed to do it with rational numbers. As you say, you miss the irrational numbers; that is important, since they make up almost all of $\mathbb{R}$. And if you consider a $\mathbb{C}$-vector space you are missing much more. Now what if you consider an arbitrary vector space, say $(V,\mathbb{K},+,\cdot)$ with $\mathbb{K}$ an arbitrary field?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2668960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Alternative way to calculate the sequence Given $a>0$ that satisfies $4a^2+\sqrt 2 a-\sqrt 2=0$.
Calculate $S=\frac{a+1}{\sqrt{a^4+a+1}-a^2}$.
Attempt:
There is only one number $a>0$ that satisfies $4a^2+\sqrt 2 a-\sqrt 2=0$, that is
$a=\frac{-\sqrt{2}+\sqrt{\Delta }}{2\times 4}=\frac{-\sqrt{2}+\sqrt{2+16\sqrt{2}}}{8}$
However obviously if you replace it directly to calculate $S$, it would be extremely time-consuming.
Is there another way? I think the first equation can be changed (without needing to solve it) to calculate $S$, as when I use brute force (with calculator), I found out that $S\approx 1.414213562\approx\sqrt 2$.
| First, we have
$$\begin{align}S&=\frac{a+1}{\sqrt{a^4+a+1}-a^2}\\\\&=\frac{a+1}{\sqrt{a^4+a+1}-a^2}\cdot\frac{\sqrt{a^4+a+1}+a^2}{\sqrt{a^4+a+1}+a^2}\\\\&=\frac{(a+1)(\sqrt{a^4+a+1}+a^2)}{a+1}\\\\&=\sqrt{a^4+a+1}+a^2\tag1\end{align}$$
We have
$$4a^2+\sqrt 2 a-\sqrt 2=0\implies (4a^2)^2=(\sqrt 2-\sqrt 2\ a)^2\implies a^4=\frac{a^2-2a+1}{8}$$
from which
$$a^4+a+1=\frac{a^2-2a+1}{8}+a+1=\frac{(a+3)^2}{8}\tag2$$
follows.
Also, we have
$$\sqrt 2\ (4a^2+\sqrt 2 a-\sqrt 2)=0\implies 2\sqrt 2\ a^2+a=1\tag3$$
From $(1)(2)(3)$,
$$S=\frac{a+3}{2\sqrt 2}+a^2=\frac{2\sqrt 2a^2+a+3}{2\sqrt 2}=\frac{1+3}{2\sqrt 2}=\color{red}{\sqrt 2}$$
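A floating-point verification (my addition) of the whole computation: take the positive root $a$ of the quadratic and evaluate $S$ directly:

```python
import math

# Positive root of 4a^2 + sqrt(2) a - sqrt(2) = 0 via the quadratic formula.
a = (-math.sqrt(2) + math.sqrt(2 + 16 * math.sqrt(2))) / 8

S = (a + 1) / (math.sqrt(a**4 + a + 1) - a**2)
print(S, math.sqrt(2))  # both approximately 1.41421356...
```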
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Prove log(log(n)) is Big-O (log(n)) I need to show that:
$g(n) = \log(\log(n))= O(\log(n))$
This is what I have so far:
Choose $k = 1$
Suppose $n > 1$ then:
$\log(n) < n$
$\log(\log(n)) < \log(n)$
But I can't figure out what my C should be if this is the correct answer?
| If $0<f(n)<g(n)$ then $c=1$ works to show $f(n)=O(g(n))$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Coset proof: $aH=bH$ if and only if $H=a^{-1}bH$ In Gallian's Contemporary Abstract Algebra, 9TH edition, on page 140, he is trying to prove $aH=bH$ if and only if $a^{-1}b\in H$.
And he says to observe that:
$aH=bH$ if and only if $H=a^{-1}bH$.
How is this true?
"$\rightarrow$"
Assume $aH=bH$, so let $t\in aH$. Then $t=ah=bh$, so $a^{-1}t=h=a^{-1}bh$.
How to go forward from this to prove $H \subset a^{-1}bH$ and $a^{-1}bH \subset H$?
| Maybe this way of seeing it is clearer (writing the finite subgroup as $H=\{h_1,\dots,h_n\}$)?
$\begin{eqnarray*}
aH = bH &\iff& \{ ah_1, \dots, ah_n \} = \{bh_1, \dots, bh_n\} \\
& \iff & \{a^{-1}ah_1, \dots, a^{-1}ah_n \} = \{a^{-1}bh_1, \dots, a^{-1}bh_n \} \\
& \iff & H = a^{-1}bH
\end{eqnarray*}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Composition of two pivot-angle rotations in 2D Suppose I have a set of points $P$ in the 2D plane.
Let $(\theta_1, p_1)$ and $(\theta_2, p_2)$ be two axis-pivot rotation, where for each $i \in \{1, 2\}$, $\theta_i$ is a rotation angle and $p_i$ the pivot point of this rotation.
I would like to rotate each point of $P$ by an angle of $\theta_1$ around $p_1$, then rotate each resulting point by an angle of $\theta_2$ around $p_2$.
The resulting location of each point of $P$ can easily be computed by calculating the result of the first rotation, then the second.
However, can this be described by a single axis-pivot rotation $(\theta_3, P_3)$?
Semi-related is this question: Composition of two axis-angle rotations
However, it applies to 3D and seems more complicated than it needs to be.
| A rotation $R(\theta,P)$ can be decomposed as the product of two reflections $P_r$ and $P_s$, about any two lines $r$ and $s$ intersecting at $P$ and forming an angle $\theta/2$ between them: $R(\theta,P)=P_s\circ P_r$.
If you have two rotations $R(\theta_1,P_1)$, $R(\theta_2,P_2)$, and $r$ is line $P_1P_2$, you can then choose other two lines $a$, $b$ such that
$$
R(\theta_1,P_1)=P_r\circ P_{a},\quad R(\theta_2,P_2)=P_{b}\circ P_r.
$$
It follows that
$$
R(\theta_2,P_2)\circ R(\theta_1,P_1)=P_{b}\circ P_r\circ P_r\circ P_{a}=
P_{b}\circ P_{a}.
$$
If lines $a$ and $b$ intersect at $P_3$, this is indeed a rotation. If instead $a$ and $b$ are parallel, this is a translation: that happens if $\theta_1+\theta_2$ is a multiple of 360°.
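To make this concrete (my addition; the helper names are my own), the composed map can be computed in coordinates: the combined angle is $\theta_3=\theta_1+\theta_2$, and the pivot $p_3$ is the fixed point of the composed affine map, obtained by solving $(I-R_3)p_3=t_3$:

```python
import math

def rotation(theta, p):
    """Rotation by theta about pivot p as an affine map x -> R x + t."""
    c, s = math.cos(theta), math.sin(theta)
    px, py = p
    # t = p - R p, so that p is fixed by the map
    return (c, s), (px - c * px + s * py, py - s * px - c * py)

def apply(rot, x):
    (c, s), (tx, ty) = rot
    return (c * x[0] - s * x[1] + tx, s * x[0] + c * x[1] + ty)

theta1, p1 = 0.7, (1.0, 2.0)
theta2, p2 = 1.1, (-3.0, 0.5)

# Compose: angles add, and the translation part is t3 = R2 t1 + t2.
theta3 = theta1 + theta2
r1, r2 = rotation(theta1, p1), rotation(theta2, p2)
t3 = apply(r2, r1[1])

# Pivot p3 solves (I - R3) p3 = t3 (solvable since theta3 is not a multiple of 2 pi).
c, s = math.cos(theta3), math.sin(theta3)
det = (1 - c) ** 2 + s ** 2
tx, ty = t3
p3 = (((1 - c) * tx - s * ty) / det, (s * tx + (1 - c) * ty) / det)

# Check: rotating a sample point by (theta1, p1) then (theta2, p2)
# matches a single rotation by (theta3, p3).
x = (4.0, -1.5)
two_step = apply(r2, apply(r1, x))
one_step = apply(rotation(theta3, p3), x)
print(two_step, one_step)  # the two results agree
```

If $\theta_1+\theta_2$ is a multiple of $2\pi$, the linear system is singular and the composition is a pure translation, matching the parallel-lines case above.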
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find limit of $\lim\limits_{x \to\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}$ I'm practising solving some limits and, currently, I'm trying to solve $\lim\limits_{x\to\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}$.
What I have done:
* I have attempted to simplify the fraction until I've reached an easier one to solve; however, I'm currently stuck at the following:
$$
\lim_{x→\infty}{\left({{(x!)^2}\over{(2x)!}}\right)}=
\lim_{x→\infty}{\left({{(\prod_{i=1}^{x}i)^2}\over{\prod_{i=1}^{2x}i}}\right)}=
\lim_{x→\infty}{\left({
{
{\prod_{i=1}^{x}i}\cdot{\prod_{i=1}^{x}i}
}\over{
{
{\prod_{i=1}^{x}}i}\cdot{\prod_{i=x+1}^{2x}i}
}
}\right)}=
\lim_{x→\infty}{\left({
{\prod_{i=1}^{x}i}\over{
{\prod_{i=x+1}^{2x}i}}
}\right)}.
$$
* Instinctively, I can see that the limit is equal to $0$, since the numerator is always less than the denominator, thus approaching infinity slower as $x→\infty$.
Question:
* How can I continue solving the above limit without resorting to instinct to determine that it equals $0$?
* If the above solution can't go any further, is there a better way to approach this problem?
| Stirling's formula may be difficult to remember, but the simpler one below is extremely useful and allows you to solve most asymptotic results with factorials:
Factorial Inequality problem $\left(\frac n2\right)^n > n! > \left(\frac n3\right)^n$
You get $\quad\dfrac{n^{2n}}{9^n}< (n!)^2 < \dfrac{n^{2n}}{4^n}$
And also $\quad\dfrac{4^nn^{2n}}{9^n}< (2n)! < \dfrac{4^nn^{2n}}{4^n}$
So by dividing positive quantities $0<\dfrac{(n!)^2}{(2n)!}<\dfrac{n^{2n}9^n}{4^n4^nn^{2n}}=\left(\dfrac 9{16}\right)^n\to 0$
You can also notice that in this case $\dfrac{(2n)!}{(n!)^2}=C_{2n}^n\ge C_{2n}^1 = 2n\to\infty$, so its reciprocal goes to zero.
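A quick check (my addition) of both the $(9/16)^n$ bound and the convergence to $0$:

```python
from math import factorial

for n in (1, 5, 10, 20, 30):
    ratio = factorial(n) ** 2 / factorial(2 * n)
    bound = (9 / 16) ** n
    print(n, ratio, bound)
    # ratio < bound at every n, and both shrink geometrically to 0
```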
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 6
} |
How many functions under these conditions? So in my new discrete mathematics class, I had the following exercise.
Let $\Bbb{S} = \{1,2,3,4,5,6\}$. How many functions $f : S → S$ can we find such that
a)$ \;\; \forall y \in \Bbb{S} $ there is $ \;\;\ x \in \Bbb{S} $ such that $f(x)=y$
b)$ \;\; \forall y \in $ {2,4,6} there are two $ \;\;\ x \in \Bbb{S} $ such that $f(x)=y$
c)$ \;\; \forall y \in $ {3,6} there are three $ \;\;\ x \in \Bbb{S} $ such that $f(x)=y$
I think the first one is $6!$ since it's asking us for one to one functions, but what are some hints to follow on the next ones?
| For the second question, note that if there are two inputs mapping to each of 2, 4 and 6, then there are no inputs that map to 1, 3 or 5. So in other words, how many ways can you allocate two inputs to 2, two inputs to 4 and two inputs to 6? (Alternatively, how can you pick two elements of the set to map to 2, then pick two elements from the remainder to map to 4, and then pick two from the remainder of that to map to 6?)
The third question is the same thing, but with mapping 3 inputs to each of 2 outputs.
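All three counts are small enough to verify by brute force over all $6^6=46656$ functions (my addition; I read "there are two/three $x$" as "exactly two/three"):

```python
from itertools import product

S = (1, 2, 3, 4, 5, 6)

def preimages(f, y):
    """Number of inputs i with f(i) = y, for f stored as a tuple."""
    return sum(1 for v in f if v == y)

a = b = c = 0
for f in product(S, repeat=6):            # all 6^6 functions S -> S
    if all(preimages(f, y) >= 1 for y in S):
        a += 1                            # surjective: 6! = 720
    if all(preimages(f, y) == 2 for y in (2, 4, 6)):
        b += 1                            # 6!/(2!2!2!) = 90 ways
    if all(preimages(f, y) == 3 for y in (3, 6)):
        c += 1                            # C(6,3) = 20 ways
print(a, b, c)  # 720 90 20
```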
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determinant of matrix whose $(i,j)$ entry is $\min(i,j)$ Let $n \in\mathbb{N}$ and define $A \in M_{n}(\mathbb{R})$ by $A(i,j)= \min(i,j)$ for $i,j \in \{1, 2 ,3, 4,\cdots, n\}$.Compute $\det(A)$.
My try is with a example:
Given a $n\times n$ matrix whose $(i, j)$-th entry is the lower of $i,j$, eg.
$$\begin{pmatrix}1 & 1 & 1 & 1\\
1 & 2 & 2 & 2 \\
1 & 2 & 3 & 3\\
1 & 2 & 3 & 4 \end{pmatrix}.$$
The determinant of any such matrix is $1$.
How do I prove this?
Tried induction but the assumption would only help me to compute the term for $A_{nn}^*$ mirror.
Help, please.
| By looking at your example, it seems that by using Laplace expansion along the last column, you would get twice your induction assumption. Let me elaborate: your determinant would be
$$-1\left|\begin{array}{ccc} 1 & 2 & 2 \\
1 & 2 & 3 \\
1 & 2 & 3\end{array}\right|+2\left|\begin{array}{ccc} 1 & 1 & 1 \\
1 & 2 & 3 \\
1 & 2 & 3\end{array}\right|-3\left|\begin{array}{ccc} 1 & 1 & 1 \\
1 & 2 & 2 \\
1 & 2 & 3\end{array}\right|+4\left|\begin{array}{ccc} 1 & 1 & 1 \\
1 & 2 & 2 \\
1 & 2 & 3\end{array}\right|$$
The first two determinants are $0$, since there are repeated rows, and the last two are equal to $1$ by your induction assumption, so it is $4-3=1$. I am pretty sure that this approach works for the general $n\times n$ matrix, so try it out and see what happens.
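A further sanity check (my addition): subtracting from each row the row above it turns the matrix into an upper-triangular one with $1$'s on the diagonal, since $\min(i,j)-\min(i-1,j)$ is $1$ for $j\geq i$ and $0$ otherwise; so the determinant is $1$ for every $n$. Exact arithmetic with Python's `fractions` confirms this for small $n$:

```python
from fractions import Fraction

def det(M):
    """Determinant by fraction-exact Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        if M[k][k] == 0:                  # simple partial pivot for safety
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    d = -d
                    break
            else:
                return Fraction(0)
        d *= M[k][k]
        for r in range(k + 1, n):
            factor = M[r][k] / M[k][k]
            M[r] = [a - factor * b for a, b in zip(M[r], M[k])]
    return d

for n in range(1, 9):
    A = [[min(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    print(n, det(A))  # determinant is 1 for every n
```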
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2669883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Example of infinite dimensional linear spaces where the space is equal to its dual. My understanding is that in finite dimensions, every linear space $V$ is isomorphic to its dual $V^\ast$.
In infinite dimensions, we have that any Hilbert space $\mathcal{H}$ is isomorphic (specifically, anti-isomorphic) to its dual $\mathcal{H}^\ast$ (Riesz Representation Theorem). Furthermore, every Hilbert space is also isomorphic to the square summable sequence space $\ell^2$.
I am wondering if there are examples of infinite dimensional linear spaces where the dual is equal to itself, and the space is not isomorphic to $\ell^2$.
Edit: We assume the underlying field to be $\mathbb{R}$ or $\mathbb{C}$.
| There's some confusion here:
*
*In the context of Hilbert spaces, $\mathcal{H}^*$ is not the full (algebraic) dual of $\mathcal H$. It's the topological dual, that is, the space of all continuous linear forms.
*It is not true that every infinite-dimensional Hilbert space is isomorphic to the space $\ell^2$ of square summable sequences. Only those which are separable.
If we are talking only about the algebraic dual, then no infinite-dimensional vector space $V$ is isomorphic to $V^*$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Calculating $\lim_{x\to\infty} \frac{x^{5}}{2^{\sqrt{x}}}$ I'm interested in calculating
$$\lim_{x\to\infty}\frac{x^{5}}{2^{\sqrt{x}}}$$
I'm thinking of using L'Hopital's Rule here, which gives:
$$\lim_{x\to\infty}\frac{5x^{4}\sqrt{x}}{\ln(2)\cdot
2^{\sqrt{x}-1}}$$
But it doesn't look so great, is this the wrong approach?
| This essentially boils down to making the right substitutions. First we try $x \rightarrow x^2$ to get rid of the square root. Now we have $\frac{x^{10}}{2^x}.$ At this point, you can finish the proof by using the fact that exponentials always outpace polynomials.
Your job is to rigorize my two statements to an acceptable level. If this is enough for you, then that's fine. If not, I suggest providing a proof for the last step. Either way, good luck!
Note: Now that I think about it, you can prove that $\forall a, n > 1$, $\frac{x^n}{a^x}$ tends to $0$ by strong induction and L'Hôpital's rule.
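Numerically (my addition), the substituted ratio $x^{10}/2^x$ visibly collapses to $0$:

```python
# x^10 grows polynomially while 2^x grows exponentially, so the ratio
# climbs at first and is then crushed toward 0.
vals = [x ** 10 / 2 ** x for x in (10, 50, 100, 200)]
print(vals)  # roughly [9.8e+06, 8.7e+01, 7.9e-11, 6.4e-38]
```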
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Evaluating sum of binomial coefficients This sum popped out of one of my calculations, I know what it should evaluate to, but I have no idea how to prove it.
$$\sum_{i=0}^{r}{n \choose 2i } - {n\choose 2i -1}$$
I know that $2i-1$ is negative for $i=0$, but for the purpose of this sum, we will say that ${n \choose x}=0$ if $n<0$ or $x<0$. So this sum is basically summing the difference of consecutive even/odd binomial coefficient pairs. We can rewrite this sum as
$$\sum_{i=0}^{2r}(-1)^i {n \choose i}.$$
I don't really know how to proceed from here, I couldn't find any information on evaluating sums of alternating series involving binomial coefficients.
| Consider the coefficient of $x^{2r}$ in the expansion of
$$(1-x)^n\frac{1}{1-x}=\sum_{k=0}^{n}(-1)^k{n \choose k}x^k\sum_{j=0}^{\infty}x^j$$
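Extracting that coefficient gives $\sum_{i=0}^{2r}(-1)^i\binom{n}{i}=\binom{n-1}{2r}$, since the product is just $(1-x)^{n-1}$; a short check with `math.comb` (my addition) confirms the identity:

```python
from math import comb

ok = True
for n in range(1, 12):
    for r in range(0, 8):
        lhs = sum((-1) ** i * comb(n, i) for i in range(2 * r + 1))
        rhs = comb(n - 1, 2 * r)        # comb returns 0 when 2r > n - 1
        ok = ok and (lhs == rhs)
print(ok)  # True
```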
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Roots of $x^3+5x-18=0$ using Cardano's method Given that $x^3+5x-18=0$. We have to solve it using Cardano's method.
Using trial $x=2$ will be a root. Dividing the equation by $x-2$ we shall get the other quadratic equation and solving that one, we shall obtain all the roots.
But when I am trying to solve the equation using Cardan's method, the calculation is becoming very difficult. I don't know why. Please help.
Here is how did I proceed.
Let $x=u+v$. Then $x^3=u^3+v^3+3uvx$ i.e. $x^3-3uvx-(u^3+v^3)=0$. So $-3uv=5$ and $u^3+v^3=18$. Clearly $u^3, v^3$ are the roots of
\begin{align}
&t^2-(u^3+v^3)t+(uv)^3=0\\
\Rightarrow &t^2-18t-\frac{125}{27}=0\\
\Rightarrow &27t^2-(27\times 18)t-125=0
\end{align}
and from here when we are getting the roots of $t$, they are very complicated. Hence I do not know how to simplify them so that $x=2$ finally be achieved along with the other two roots.
Please help me
| By your work we obtain:
$$x=\sqrt[3]{9+\sqrt{81+\frac{125}{27}}}+\sqrt[3]{9-\sqrt{81+\frac{125}{27}}}=$$
$$=
\sqrt[3]{9+\frac{34}{3}\sqrt{\frac{2}{3}}}+\sqrt[3]{9-\frac{34}{3}\sqrt{\frac{2}{3}}}=\frac{1}{3}\left(\sqrt[3]{243+102\sqrt6}+\sqrt[3]{243-102\sqrt6}\right)=$$
$$=\frac{1}{3}\left(\sqrt[3]{(3+2\sqrt6)^3}+\sqrt[3]{(3-2\sqrt6)^3}\right)=\frac{1}{3}(3+2\sqrt6+3-2\sqrt6)=2.$$
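Numerically (my addition), the two cube roots indeed sum to $2$; note the second radicand is negative, so a real cube root is needed:

```python
import math

def cbrt(v):
    """Real cube root, valid for negative inputs too."""
    return math.copysign(abs(v) ** (1 / 3), v)

d = (34 / 3) * math.sqrt(2 / 3)   # sqrt(81 + 125/27), simplified
x = cbrt(9 + d) + cbrt(9 - d)
print(x)  # ~2.0, the rational root of x^3 + 5x - 18
```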
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Showing $x^4-x^3+x^2-x+1>\frac{1}{2}$ for all $x \in \mathbb R$
Show that
$$x^4-x^3+x^2-x+1>\frac{1}{2}. \quad \forall x \in \mathbb{R}$$
Let $x \in \mathbb{R}$,
\begin{align*}
&\mathrel{\phantom{=}}x^4-x^3+x^2-x+1-\frac{1}{2}=x^4-x^3+x^2-x+\dfrac{1}{2}\\
&=x^2(x^2-x)+(x^2-x)+\dfrac{1}{2}=(x^2-x)(x^2+1)+\dfrac{1}{2}\\
&=x(x-1)(x^2+1)+\dfrac{1}{2}.
\end{align*}
Is there any way to solve this question?
| We have to prove $$x^4-x^3+x^2-x+0.5>0\forall x\in\mathbb{R}$$
So let $$f(x)=\frac{1}{2}\bigg[2x^4-2x^3+2x^2-2x+1\bigg]$$
$$f(x)=\frac{1}{2}\bigg[x^4+(x^2-x)^2+(x-1)^2\bigg]>0\forall x\in\mathbb{R}$$
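A numeric sweep (my addition) corroborates the inequality: the minimum of $x^4-x^3+x^2-x+1$ over a wide grid stays well above $\frac12$:

```python
def g(x):
    return x ** 4 - x ** 3 + x ** 2 - x + 1

# Dense grid over [-10, 10]; outside this range the quartic term dominates.
lowest = min(g(-10 + k / 1000) for k in range(20001))
print(lowest)  # about 0.6736 > 0.5, attained near x ~ 0.606
```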
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 6
} |
Does $A\in B$ imply $A\subset B$? Question: Does $A\in B$ imply $A\subset B$ and does $A\in B$ and $B\in C$ imply $A\in C$?
I've been trying to find examples to get some intuition for this and I've come up with the following:
Example 1: Suppose that $A = \{1\}$, $B = \{\{1\},2\}$. I'd say that $A$ is an element of $B$ and $A$ is a subset of $B$.
Example 2: Suppose that $A = \{1\}$, $B = \{\{1\},2\}$ and $C = \{\{\{1\},2\},3\}$. Now $A\in B$, $B\in C$ but $A\not\in C$, right?
I think my confusion stems from the fact that I'm not sure how $B = \{\{1\},2\}$ vs $B = \{1,2\}$ determines whether $A$ is an element and/or a subset of $B$.
| No. Taking $A=\{\emptyset\}$ and $B=\{A\} = \{\{\emptyset\}\}$, you have $A\in B$, but not $A\subseteq B$.
Furthermore, in Example $1$, $A=\{1\}$ is not a subset of the set $B=\{\{1\},2\}$, because it is not true that every element of $A$ is also an element of $B$. Specifically, $1$ is an element of $A$, but $1$ is not an element of $B$.
$B$ is a set with two elements, one of them is equal to $2$, the other one is equal to $\{1\}$, and $1$ is not equal to any of these two elements.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Generating a finite group with a subset of more than $\frac{1}{p}$ elements Sorry if this is a duplicate, but I couldn't find any related question. I'm working on the following exercise
Let $G$ be a finite group with more than $1$ element, and $S\subset G$ a subset such that $\#S>\frac{1}{p}\#G$, with $p$ the smallest prime divisor of the order of $G$. Prove that $\langle S\rangle=G$.
I managed to prove this in the kind of trivial case where $2|\#G$: observe that $S$ and $S^{-1}$ cannot be disjoint, so for an arbitrary $x\in G$, $\exists a\in xS^{-1}\cap S$; that is: for certain $s_1\in S^{-1}$, $s_2\in S$, $xs_1^{-1}=a=s_2$, so that $x=s_1s_2$, and since $x\in G$ was arbitrary, $\langle S\rangle=G$.
However, how do I prove this for the general case that $\#S>\frac{1}{p}\#G$, with $p$ the smallest prime divisor of the order of $G$?
| Put $H=\langle S \rangle$ then $H$ is a subgroup and $S \subseteq H$. If $H \subsetneq G$, then the index $|G:H| \gt 1$. But $|G:H| \leq |G|/\#S \lt p$. Since $p$ is the smallest prime dividing $|G|$ and $|G:H|$ divides $|G|$, this is a contradiction.
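The statement can be spot-checked by brute force in the smallest nonabelian case; the sketch below (my own illustration, not a proof) verifies that in $S_3$, where $\#G=6$ and $p=2$, every subset of size $4>\frac{1}{2}\cdot 6$ generates the whole group:

```python
from itertools import permutations, combinations

def compose(f, g):
    # (f o g) for permutations of {0, 1, 2} stored as tuples
    return tuple(f[g[i]] for i in range(len(g)))

def closure(subset):
    # in a finite group, the set closed under products is exactly <subset>
    elems = set(subset)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

G = set(permutations(range(3)))                        # S_3, |G| = 6
all_generate = all(closure(S) == G for S in combinations(sorted(G), 4))
```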
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the limit $\lim_{n\to \infty}\sqrt{n^2 +2} - \sqrt{n^2 +1}$ I know that
$$\lim_{n\to \infty}(\sqrt{n^2+2} - \sqrt{n^2+1})=0.$$
But how can I prove this?
I only know that $\sqrt{n^2+2} - \sqrt{n^2+1}$ is smaller than $\sqrt{n^2+2} - \sqrt{n^2} = \sqrt{n^2+2} - n$.
Edit:
Thank Y'all for the nice and fast answers!
| Note that
$$( \sqrt{n^2+2}-\sqrt{n^2+1})(\sqrt{n^2+2}+\sqrt{n^2+1})=1$$
Thus
$$\sqrt{n^2+2}+\sqrt{n^2+1} \to \infty \implies \sqrt{n^2+2}-\sqrt{n^2+1}\to 0$$
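Numerically the decay is easy to observe (illustrative values only); since the difference equals $1/(\sqrt{n^2+2}+\sqrt{n^2+1})$, it behaves like $1/(2n)$:

```python
import math

def diff(n):
    return math.sqrt(n**2 + 2) - math.sqrt(n**2 + 1)

# the values shrink roughly like 1/(2n)
vals = [diff(10**k) for k in range(1, 7)]
```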
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Applications of perturbation expansion of eigenvectors Partially citing this thread, consider $A_{\varepsilon}=A+\varepsilon V$, a perturbation of $A$, where $\varepsilon$ is a small number (All matrices are symmetric).
A known result derived in some books that I've read is the following perturbation expansion for the eigenvectors of $A_{\varepsilon}$ ($e_j$ are the eigenvectors of $A$):
$$e_{j}' = e_{j} +\varepsilon \sum_{k=1\;(k\ne j)}^{n} \frac{(Ve_j,e_k)}{\lambda_j-\lambda_k}e_k.$$
My question is are there any applications, theoretical but most importantly practical, where this expansion is used?
| The analytic dependence of an eigenvalue from a perturbation of the linear operator is a question that has many applications in the study of dynamical systems, and it extension to linear operators in function (Hilbert) spaces is very important in quantum mechanics applications. A classical book on this subject is here.
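A small numerical experiment also makes the expansion concrete (the matrices, seed and tolerance below are arbitrary choices of mine, not from the question): the first-order formula should match an exact eigenvector of $A+\varepsilon V$ up to $O(\varepsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 4.0])
A = np.diag(lam)                       # eigenvectors of A are the basis e_k
V = rng.standard_normal((3, 3))
V = (V + V.T) / 2                      # symmetric perturbation
eps = 1e-4

j = 0
# e_j' = e_j + eps * sum_{k != j} (V e_j, e_k) / (lam_j - lam_k) * e_k
pred = np.eye(3)[j].copy()
for k in range(3):
    if k != j:
        pred[k] += eps * V[k, j] / (lam[j] - lam[k])

w, U = np.linalg.eigh(A + eps * V)
exact = U[:, np.argmin(np.abs(w - lam[j]))]
if exact[j] < 0:                       # align the sign convention
    exact = -exact

err = np.linalg.norm(exact - pred)     # should be O(eps^2)
```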
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Putnam 1957, 17th Problem I have spent the better part of today trying to prove the following question from Putnam 1957
If $\cos B\neq \cos A$, and $k>1$ is any natural number, prove that $$\left|\frac{\cos(kB)\cos(A)-\cos(kA)\cos(B)}{\cos(B)-\cos(A)}\right|<k^2-1$$
I tried using the mean value theorem for the function $f(x)=\cos[k(\cos^{-1}(x))]\cos[A+B-\cos^{-1}(x)]$. The rationale behind using this function is that if $q=\cos B$ and $p=\cos A$, then $\left|\frac{f(q)-f(p)}{q-p}\right|=\left|\frac{\cos(kB)\cos(A)-\cos(kA)\cos(B)}{\cos(B)-\cos(A)}\right|$. Now I would just have to prove that the derivative of this function at all points is bounded above by $k^2-1$.
I also tried using the Taylor expansion for all the cosines. Although it seemed promising at first, I got a lot of cross terms I didn't know how to get rid of.
Other methods tried- complex numbers, trig identities, etc
I eventually did find a solution here. However, I would be interested in alternate/cleaner proofs of this assertion. How would you approach this question?
| The given inequality is equivalent to
$$|\cos(kB)\cos A-\cos(kA)\cos B|<(k^2-1)|\cos B-\cos A|$$
Using the formula $2\cos x\cos y=\cos(x-y)+\cos(x+y)$ we can rewrite the last inequality as
$$|\cos(kB-A)+\cos(kB+A)-\cos(kA-B)-\cos(kA+B)|<2(k^2-1)|\cos B-\cos A|$$
which is equivalent to
$$|[\cos(kB-A)-\cos(kA-B)]+[\cos(kB+A)-\cos(kA+B)]|<2(k^2-1)|\cos B-\cos A|$$
Now we use formula $2\sin x\sin y=\cos(x-y)-\cos(x+y)$ to both sides of the last inequality to get the equivalent form
$$\Big|2\sin\Big((k-1)\frac{A+B}{2}\Big)\sin\Big((k+1)\frac{A-B}{2}\Big)+2\sin\Big((k+1)\frac{A+B}{2}\Big)\sin\Big((k-1)\frac{A-B}{2}\Big)\Big|\\ <4(k^2-1)\Big|\sin\Big(\frac{A+B}{2}\Big)\sin\Big(\frac{A-B}{2}\Big)\Big|$$
Since $\cos A\neq \cos B$, the product $\sin\big(\frac{A+B}{2}\big)\sin\big(\frac{A-B}{2}\big)$ is nonzero (it equals $\frac{1}{2}(\cos B-\cos A)$); moreover, since $k>1$, we have $|\sin(kx)|<k|\sin x|$ whenever $\sin x\neq 0$ (proof by induction as in the link). Now by the triangle inequality we have
$$\Big|2\sin\Big((k-1)\frac{A+B}{2}\Big)\sin\Big((k+1)\frac{A-B}{2}\Big)+2\sin\Big((k+1)\frac{A+B}{2}\Big)\sin\Big((k-1)\frac{A-B}{2}\Big)\Big|\\\leqslant 2\Big|\sin\Big((k-1)\frac{A+B}{2}\Big)\sin\Big((k+1)\frac{A-B}{2}\Big)\Big|+2\Big|\sin\Big((k+1)\frac{A+B}{2}\Big)\sin\Big((k-1)\frac{A-B}{2}\Big)\Big|\\<2(k-1)(k+1)\Big|\sin\Big(\frac{A+B}{2}\Big)\sin\Big(\frac{A-B}{2}\Big)\Big|+2(k+1)(k-1)\Big|\sin\Big(\frac{A+B}{2}\Big)\sin\Big(\frac{A-B}{2}\Big)\Big|\\=4(k^2-1)\Big|\sin\Big(\frac{A+B}{2}\Big)\sin\Big(\frac{A-B}{2}\Big)\Big|$$
as desired.
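A deterministic numerical spot check of the final inequality (the grid and cut-off below are arbitrary choices of mine; this is of course no substitute for the proof):

```python
import math

grid = [i / 4 for i in range(-10, 11)]        # angles from -2.5 to 2.5
holds, tested = True, 0
for A in grid:
    for B in grid:
        den = math.cos(B) - math.cos(A)
        if abs(den) < 1e-3:                   # skip (near-)degenerate pairs
            continue
        for k in range(2, 8):
            num = math.cos(k*B)*math.cos(A) - math.cos(k*A)*math.cos(B)
            holds = holds and abs(num / den) < k**2 - 1
            tested += 1
```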
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2670994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
All roots $\lambda$ of $\det(A-\lambda B)=0$ are $\ge1$ when $B$ is p.d and $A-B$ is n.n.d. I am trying to prove the following statement:
Let $A,B\in M(n,\mathbb{R})$.
If $B$ is positive definite and $(A-B)$ is non-negative definite, then $\det(A-\lambda B)=0$ has all its roots $\lambda\geqslant1$ and conversely, if all roots $\lambda\geqslant 1$, then $(A-B)$ is non-negative definite.
If $(A-B)$ is n.n.d and $B$ is p.d, then I have $x^\top(A-B)x\geqslant0$ for all $x\in\mathbb{R}^n$.
This implies $x^\top Ax\geqslant x^\top Bx>0$ for all $x\ne0$, so that $A$ is p.d.
Moreover, as $B$ is p.d, $B$ is nonsingular.
So, $\det(A-\lambda B)=0\implies\det((AB^{-1}-\lambda I)B)=0$
$\qquad\qquad\qquad\qquad\quad\implies\det(AB^{-1}-\lambda I)=0$ , as $\det(B)\ne0$
Thus $\lambda$ is an eigenvalue of the matrix $AB^{-1}$.
Now for the eigenvector $x\ne0$ corresponding to $\lambda$ we have,
$(AB^{-1})x=\lambda x\implies(AB^{-1}-I)x=(\lambda-1)x$.
If I can show that $AB^{-1}-I$ is n.n.d given that both $A$ and $B$ are p.d, then that would possibly
imply $\lambda-1\geqslant0$ and I am done. But I am not sure if this is true or not.
In a different approach using the fact that a p.d matrix can be expressed as $D^\top D$ for some nonsingular matrix $D$, I was able to show that $AB^{-1}-I=PQ$ for some n.n.d matrix $P$ and p.d matrix $Q$. Does that help me conclude that $AB^{-1}-I$ is indeed n.n.d?
Any simpler or alternate approach is welcome.
| You can finish your own proof with the expression $QD$ (writing $D$ for the n.n.d. factor you called $P$; the order is harmless, since $QD$ and $DQ$ have the same eigenvalues) using this answer. Since $Q$ is p.d., $QD$ has the same eigenvalues as $Q^{1/2}DQ^{1/2}$. Since $x^TQ^{1/2}DQ^{1/2} x = (Q^{1/2}x)^TD(Q^{1/2}x) \geq 0$ (because $D$ is n.n.d.), we get that $QD$ is indeed n.n.d.
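Numerically the statement is easy to illustrate with random matrices (the size, seed and rank-deficient construction below are my own choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)            # positive definite
N = rng.standard_normal((n, n - 2))
C = N @ N.T                            # n.n.d., deliberately rank-deficient
A = B + C                              # so A - B is n.n.d.

# roots of det(A - lambda B) = 0 are the eigenvalues of B^{-1} A,
# which is similar to the symmetric matrix B^{-1/2} A B^{-1/2}
roots = np.linalg.eigvals(np.linalg.solve(B, A))
min_root = roots.real.min()            # should be >= 1 (here exactly 1)
```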
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Proving a polynomial has x amount of zeros I am new to this thread so sorry if I violate any rules or whatever, but anyway in Calculus right now we are doing stuff related to Fermat's Theorem, Rolle's Theorem, and Intermediate value theorem. I am very confused by Rolles and Fermats and totally don't understand them I try looking online just to be more perplexed than resolved. In our homework it is full of problems like this:
Prove $y = x^3-6x^2+12x-8$ has at most two zeros.
Prove $f(x)=x^5+5x^3+45x+9$ has exactly one zero.
I am just totally lost on what to do and how to begin I really don't understand how to prove these with the theorems given.
Thanks for the help in advance.
| i) $f(x) = x^3-6x^2+12x-8$.
Suppose that $f$ has $3$ zeroes $a<b<c$.
Since $f(a)=f(b)=f(c)=0$, by Rolle's Theorem there is a zero of $f'(x)$ in the interval $(a, b)$ and another zero in $(b, c)$. In other words, $f'(x)$ has two zeroes. But $f'(x) = 3x^2-12x+12 = 3(x-2)^2$ has only one zero. Contradiction!
Therefore $f$ has at most two zeroes.
ii) $f(x) = x^5+5x^3+45x+9$
Since $f$ has odd degree, $f$ has at least one zero.
Note that $f'(x) = 5x^4+15x^2+45 > 0$ for all $x\in\mathbb{R}$.
Suppose that $f$ has two zeroes $a<b$. By Rolle's Theorem, there should exist $c$ with $a<c<b$ such that $f'(c)=0$. Contradiction!
Therefore $f$ has exactly one zero.
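A rough numeric companion (my own addition): counting sign changes on a grid witnesses the zeros guaranteed by the Intermediate Value Theorem, and matches the counts proved above.

```python
def sign_changes(f, lo=-10.0, hi=10.0, steps=3999):
    # count sign changes of f on a grid; each one witnesses a zero (IVT)
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ys = [f(x) for x in xs]
    return sum(1 for a, b in zip(ys, ys[1:]) if a * b < 0)

g = lambda x: x**3 - 6*x**2 + 12*x - 8      # note: equals (x - 2)^3
h = lambda x: x**5 + 5*x**3 + 45*x + 9

g_changes = sign_changes(g)                 # one zero (at x = 2)
h_changes = sign_changes(h)                 # exactly one zero
```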
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Stochastic Process: Is this a mistake or am I misunderstanding? (general two-state markov chain and finding $P(T\ge n)$ Consider the general two-state chain where p and q are not both 0. Let $T$ be the first reutrn time to state 1, for the chain started in 1.
(a) Show that $P(T\ge n) = p(1-q)^{n-2}$, for $n\ge 2$.
$$
P= \begin{pmatrix}
1-p & p \\
q & 1-q
\end{pmatrix}
$$
The $p$ term is self-evident: going from state 1 to state 2 occurs with probability $p$.
However, what about the return probability $q$?
It is my belief that it should be,
$P(T\ge n) = p(1-q)^{n-2}q$, for $n\ge 2$.
Is this a typo or am I completely misunderstanding my probabilities?
| The quantity you have written is the probability that $T=n$ exactly. For the event $\{T\geq n\}$, all that is required is that at the first $n-1$ steps you are not at $1$, which happens with probability $p(1-q)^{n-2}$ (jump to $2$ first, then stay there the next $n-2$ steps).
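The distinction is easy to confirm numerically for illustrative values of $p$ and $q$: summing the point masses $P(T=m)=p(1-q)^{m-2}q$ over $m\ge n$ recovers the tail $P(T\ge n)=p(1-q)^{n-2}$.

```python
p, q = 0.3, 0.4                      # arbitrary illustrative parameters

def tail_formula(n):
    return p * (1 - q)**(n - 2)      # P(T >= n), n >= 2

def tail_by_summing(n, cutoff=2000):
    # sum P(T = m) = p (1-q)^(m-2) q for m = n, ..., cutoff-1
    return sum(p * (1 - q)**(m - 2) * q for m in range(n, cutoff))

max_err = max(abs(tail_formula(n) - tail_by_summing(n)) for n in range(2, 20))
```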
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
eigenvalue test for linear time-varying systems Consider the linear time-varying system given by
$\dot x(t)=A(t)x(t)$
Denote by $\lambda_\max(t)$ the maximum eigenvalue of $A(t)+A^T(t)$. Suppose that there exist constants $\alpha\gt0$ and $\gamma$ such that $\lambda_\max(t)$ satisfies
$\int_\tau^t\lambda_\max(s)\,ds\le-\alpha(t-\tau)+\gamma$ for all $t\ge\tau$
Prove that $x_e=0$ is uniformly exponentially stable.
I was given a hint: examine $\frac{d}{dt}\Vert x\Vert_2^2$.
But still, have no clue how to solve this problem. Also why we consider maximum eigenvalue for $A(t)+A^T(t)$? Please help me!
| Following the hint we examine the derivative of the square of the Euclidean norm. I use $\langle \cdot, \cdot \rangle$ to denote the standard inner product on $\mathbb{R}^n$. We then have for all $t$
$$\frac{d}{dt}\|x(t)\|^2 = \frac{d}{dt} \langle x(t), x(t) \rangle
= \langle A(t)x(t), x(t) \rangle + \langle x(t), A(t)x(t) \rangle$$
$$=\langle (A(t)+A(t)^T)x(t), x(t) \rangle \leq \lambda_{max}(t)\|x(t)\|^2 $$
Integrating from $\tau$ to $t$ we get
$$ \|x(t)\|^2 \leq \|x(\tau)\|^2 + \int_{\tau}^t \lambda_{max}(s)\|x(s)\|^2 ds$$
Gronwall's lemma yields
$$ \|x(t)\|^2 \leq \exp\left( \int_{\tau}^t \lambda_{max}(s) ds\right)
\|x(\tau)\|^2 $$
With the given assumption we obtain
$$ \|x(t)\|^2 \leq e^\gamma e^{-\alpha(t-\tau)}
\|x(\tau)\|^2 $$
As $\alpha>0$ and $\gamma$ are independent of $t,\tau$ this shows uniform exponential stability.
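For intuition, here is a simulation of a concrete system of this type (the matrix $A(t)$, step size and horizon are my own choices): with $A(t)=\begin{pmatrix}-1&\cos t\\-\cos t&-1\end{pmatrix}$ we get $A+A^T=-2I$, so $\lambda_\max\equiv-2$ and the hypothesis holds with $\alpha=2$, $\gamma=0$; the bound then predicts $\|x(t)\|^2\le e^{-2t}\|x(0)\|^2$.

```python
import math

def A(t):
    c = math.cos(t)
    return [[-1.0, c], [-c, -1.0]]

def rhs(t, x):
    a = A(t)
    return [a[0][0]*x[0] + a[0][1]*x[1], a[1][0]*x[0] + a[1][1]*x[1]]

def rk4_step(x, t, h):
    # one classical Runge-Kutta step for x' = A(t) x
    k1 = rhs(t, x)
    k2 = rhs(t + h/2, [x[i] + h/2*k1[i] for i in range(2)])
    k3 = rhs(t + h/2, [x[i] + h/2*k2[i] for i in range(2)])
    k4 = rhs(t + h, [x[i] + h*k3[i] for i in range(2)])
    return [x[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

x, t, h = [1.0, 2.0], 0.0, 0.001
norm0_sq = x[0]**2 + x[1]**2
while t < 3.0 - 1e-12:
    x = rk4_step(x, t, h)
    t += h

norm_sq = x[0]**2 + x[1]**2
bound = math.exp(-2 * t) * norm0_sq    # for this system the bound is exact
```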
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does expanding $\lim_{p \rightarrow 0}\frac{1-p-(1-p)^3}{1-(1-p)^3}$ give a different limit from just substituting? $$\lim_{p \rightarrow 0}\frac{1-p-(1-p)^3}{1-(1-p)^3}$$
$$= \frac{1-0-(1-0)^3}{1-(1-0)^3}$$
$$=\frac{0}{0}$$
$$\lim_{p \rightarrow 0}\frac{1-p-(1-p)^3}{1-(1-p)^3}$$
$$=\lim_{p \rightarrow 0}\frac{p^2-3p+2}{p^2-3p+3}$$
$$=\frac{0^2-3(0)+2}{0^2-3(0)+3}$$
$$=\frac{2}{3}$$
Edit 1:
$0\le p \le1$
| A little bit of context:
Let $f, g$ be continuous in $D$, $a \in D$, and $g(a)\not =0.$
Then $\lim_{x \rightarrow a} \dfrac{f(x)}{g(x)} = \dfrac{f(a)}{g(a)}$.
Your functions $f$ (in the numerator) and $g$ (in the denominator) are continuous, but $g(a)=0$.
Hence the above is not applicable.
Another example:
$f(x)=x$, $g(x) =x.$
Does $\lim_{x \rightarrow 0} \dfrac{f(x)}{g(x)}$ exist?
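Numerically one sees the same thing for the original question (values of $p$ chosen for illustration): the ratio tends to $2/3$ even though plugging in $p=0$ gives the meaningless expression $0/0$.

```python
def r(p):
    return (1 - p - (1 - p)**3) / (1 - (1 - p)**3)

vals = [r(10**-k) for k in range(1, 7)]   # p = 0.1, 0.01, ..., 1e-6
```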
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
} |
What is the probability that the first 2 balls are the same color while the last 2 balls are different colors? A box contains 3 Blue balls, 4 Green balls and 5 Red balls. 4 balls were picked at random without replacement. What is the probability that the first 2 balls are the same color while the last 2 balls are different colors?
What I have tried:
P(B,B,G,R) = 1/99
P(R,R,B,G) = 2/99
P(G,G,R,B) = 1/66
Therefore, the probability of (2 same color and 2 different colors) = (1/99) + (2/99) + (1/66) = 1/22 (Wrong, according to the textbook).
Please help.
| I tried $2$ different methods and got $\frac{67}{330}$ both times. The denominators of the probabilities at each draw are always $12*11*10*9=11880$ so we only have to keep track of the numerators. We also only have to draw one order of the last $2$ balls, then multiply by $2$.
$$\begin{array}{c|c}\text{Config}&\text{Ways}\\\hline
\text{BBBG}&24\\
\text{BBBR}&30\\
\text{BBGR}&120\\
\text{GGGB}&72\\
\text{GGGR}&120\\
\text{GGBR}&180\\
\text{RRRB}&180\\
\text{RRRG}&240\\
\text{RRBG}&240\\
\hline\text{Total}&1206
\end{array}$$
So I get a probability of
$$\frac{2\cdot1206}{11880}=\frac{2412}{11880}=\frac{67}{330}$$
Working out every draw also was the same:
program balls2
! Exhaustive check: label the balls 1..12 and choose which 3 are blue
! (i1<i2<i3) and which 4 are green (j1<j2<j3<j4); the remaining 5 are
! red. By symmetry, positions 1..4 of a random colouring behave like
! 4 ordered draws without replacement.
implicit none
integer i1,i2,i3,j1,j2,j3,j4
integer total, count
integer draw(12)
total = 0
count = 0
do i1=1,10
do i2=i1+1,11
do i3=i2+1,12
do j1=1,9
if(any(j1==[i1,i2,i3])) cycle
do j2=j1+1,10
if(any(j2==[i1,i2,i3])) cycle
do j3=j2+1,11
if(any(j3==[i1,i2,i3])) cycle
do j4=j3+1,12
if(any(j4==[i1,i2,i3])) cycle
total=total+1
! encode colours as red=0, blue=1, green=4, so that a pair of positions
! has sum 0, 2 or 8 exactly when the two colours match
draw = 0
draw([i1,i2,i3]) = 1
draw([j1,j2,j3,j4]) = 4
! reject unless the first two draws match ...
if(any(draw(1)+draw(2)==[1,4,5])) cycle
! ... and the last two draws differ
if(any(draw(3)+draw(4)==[0,2,8])) cycle
count = count+1
end do
end do
end do
end do
end do
end do
end do
write(*,*) total,count
end program balls2
Output was
27720 5628
Which is the same ratio.
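A Python version of the same exhaustive count (my own cross-check of the program above; the labelling of the balls is arbitrary):

```python
from itertools import permutations
from fractions import Fraction

balls = "B"*3 + "G"*4 + "R"*5               # 3 blue, 4 green, 5 red
total = favourable = 0
for draw in permutations(range(12), 4):     # ordered draws of distinct balls
    total += 1
    c = [balls[i] for i in draw]
    if c[0] == c[1] and c[2] != c[3]:
        favourable += 1

prob = Fraction(favourable, total)          # reduces to 67/330
```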
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $\alpha$ is the angle between two curves, find $\cos (\alpha)$ in terms of parametric representation of the curves. If $\alpha$ is the angle between two curves, find $\cos (\alpha)$ in terms of parametric representation of the curves. Answer in book:
Is this answer correct? I keep getting a minus sign between the top terms.
Outline of my approach:
We draw a horizontal at the point of intersection. By simple geometry the angle between the tangents is $\alpha=\theta + \phi$ where $\theta$ and $\phi$ are the angles between the $x$ axis and the tangent to the curves. Taking cosine both sides, and using the fact that
$\cos (t)=\pm \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}}$ and
$\sin (t)=\pm \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}}$ as given in the book for the angle $t$ between the $x$ axis and the tangent to the curve. I find that I always get a negative sign after expanding with the addition formula for $\cos$.
| Assume that
$$\gamma:\quad u\mapsto{\bf z}(u)=\bigl(x(u),y(u)\bigr)$$
is the parametric representation of a curve $\gamma\subset{\mathbb R}^2$. Then for any parameter value the vector ${\bf z}'(u)=\bigl(x'(u),y'(u)\bigr)$, if $\ne{\bf 0}$, is a tangent vector to $\gamma$ at the point ${\bf z}(u)$. In the problem at hand two curves $\gamma_1$, $\gamma_2$ are given, and it is assumed that for two particular parameter values $u$, resp. $v$, the curves intersect at a common point ${\bf z}_1(u)={\bf z}_2(v)$. In order to find the angle of intersection we have to make use of the following fact:
Given two nonzero vectors ${\bf a}$, ${\bf b}\in{\mathbb R}^n$ the cosine of the enclosed angle $\alpha$ is given by
$$\cos\alpha={{\bf a}\cdot{\bf b}\over|{\bf a}|\>|{\bf b}|}\ ,$$
whereby the dot denotes the scalar product. The latter is computed as follows:
$${\bf a}\cdot{\bf b}=\sum_{i=1}^n a_i\,b_i\ .$$
Now apply this to the two vectors ${\bf z}_1'(u)$, $\>{\bf z}_2'(v)$.
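A worked numeric instance (the curves are my own choice for illustration): the parabola ${\bf z}_1(u)=(u,u^2)$ and the line ${\bf z}_2(v)=(v,v)$ meet at $(1,1)$, with tangent vectors ${\bf z}_1'(1)=(1,2)$ and ${\bf z}_2'(1)=(1,1)$.

```python
import math

a = (1.0, 2.0)                  # tangent to the parabola at u = 1
b = (1.0, 1.0)                  # tangent to the line at v = 1

dot = a[0]*b[0] + a[1]*b[1]
cos_alpha = dot / (math.hypot(*a) * math.hypot(*b))   # = 3 / sqrt(10)
```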
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2671946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Representations of superalgebras If I have a superalgebra $A$ and consider the category of finite-dimensional $A$-modules.
Is this category the same as the category of finite-dimensional $A$-modules which are super-vector spaces?
I.e. is every representation of $A$ a super-vector space?
| Let $A=k[x]$ graded so that a monomial $x^i$ is even or odd according to the parity of $i$. Let $V=k^2$ and view $V$ as an $A$-module in such a way that multiplying by $x$ has matrix $\begin{pmatrix}1&1\\0&1\end{pmatrix}$. It is easy to see that $V$ does not have a nontrivial direct sum decomposition preserved by $x^2$. It follows that if it is in some way a supermodule over $A$ then it is all odd or all even. But that is not possible, since $x$ changes parity and is an isomorphism.
A finite dimensional example: let $A=k[x]/(x^9)$ graded again as before, let $V=k^3$, and let $A$ act on $V$ so that $x$ acts as $\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}$. Unless I messed up, the only decompositions of $V$ as nontrivial direct sums of subspaces invariant under $x^2$ are
*
*$\langle(a,b,0)\rangle\oplus\langle(1,0,0),(d,e,f)\rangle$ with $(e,d)\neq(0,0)$ and $bf\neq0$.
*$\langle(\alpha,\beta,0)\rangle\oplus\langle(a,b,c),(d,b,c)\rangle$ with $a\neq d$ and $c\beta\neq0$.
In each of this cases, the image under $x$ of the $2$-dimensional summand is not contained in the other summand. It follows that if $V$ is a graded $A$-module, then it is all odd or all even. But that is impossible, since $x$ changes parity and acts not by zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Polynomially sized Folner sets Let $\Gamma$ be a finitely-generated group with a fixed finite generating set $S$.
Then, $\Gamma$ is amenable if and only if it there is a sequence $(F_n)_{n=1}^{\infty}$ of finite subsets of $\Gamma$, whose union is $\Gamma$, such that the boundary of $F_n$ with respect to $S$ is of size at most $\frac{1}{n}\cdot|F_n|$ (this is a "Folner sequence").
Let's say that $\Gamma$ is "polynomially amenable" if the sets $F_n$ above can be chosen such that $|F_n|$ is bounded by a polynomial in $n$.
Then, groups of polynomial growth are polynomially amenable.
Are there polynomially amenable groups which are not of polynomial growth? If so, I'd like to know many examples.
| No: the Følner function grows at least as fast as the word growth: this is due to Varopoulos. Hence a group with polynomially bounded Følner function has polynomially bounded growth (this is purely analytic, not related to Gromov's theorem).
See Drutu's slides:
http://people.maths.ox.ac.uk/drutu/tcc2/TCC5-slides.pdf
PS your assumption that $(F_n)$ covers the group is quite not natural and inconvenient to handle, and usually not part of the definition of Følner sequences. (Although it is easy to find a covering sequence for every countable amenable group.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Rubik's cube function I'm thinking in a function and if it's possible to solve that.
I have been playing with the cube using the following move: $R U L' U'.$
I notice that the cube solves itself with a certain number of moves: 28 moves to $2\times2\times2$ and 112 to $3\times3\times3.$ (If the cube already is solved).
Then I'm trying to create a formula to calculate the number of moves for the another cubes like $4\times4\times4, 5\times5\times5, 6\times6\times6\dots$
Since:
$x \rightarrow y$
$1 \rightarrow 0$
$2 \rightarrow 28$
$3 \rightarrow 112$
$4 \rightarrow z$
Where $x$ is the number of the cube ($2 = 2\times2\times2$, $3 = 3\times3\times3,\dots$) and $y$ is the amount of moves, I came up with two formulas: $28(x-1)^{x-1}$ and $28(x-1)^2$. Thus, the value for $z$ could be $756$ or $252$ respectively.
My questions are:
*
*Are any of these formulas correct? If so, which one?
*Can be my reasoning corret about the formulas?
*If I'm wrong, answer me why!
| Have you tried it on a $4\times4\times4$ cube? If not there are online rubiks cubes you can play around with.
But to answer your question my guess is that for any cube size $3$ and up you'll solve it again in $112$ moves. That is because R U L' U' doesn't touch any of the "inner" edges so that means larger cubes will all behave like a $3\times3\times3$ cube in this case.
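The underlying reason a repeated move always returns a solved cube to the solved state: a fixed move sequence is a permutation of the stickers, and repeating it as many times as the order of that permutation (the least common multiple of its cycle lengths) restores everything. The sketch below illustrates that computation on a made-up permutation (it is not a model of an actual cube):

```python
from math import gcd

def order(perm):
    # order of a permutation = lcm of the lengths of its cycles
    n, seen, result = len(perm), [False] * len(perm), 1
    for start in range(n):
        if not seen[start]:
            length, i = 0, start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            result = result * length // gcd(result, length)
    return result

toy = [1, 2, 3, 0, 5, 6, 4, 8, 7]   # cycles of lengths 4, 3 and 2
toy_order = order(toy)              # lcm(4, 3, 2) = 12
```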
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 0
} |
Use Green's Theorem to Find the Area Use Green's Theorem to find the area enclosed by:
$$y=9-x^{2},y=8x, y=\frac{2}{5}x$$
(The area in Quadrant 1)
In class we only did examples of this type of problem that were very simple (eg. area under $\ x^{2}\ $ from 0 to 2), which made setting up the equation for area using Green's Theorem simple. But since this problem has multiple lines/curves to consider, I am not sure how to set up the problem once I am done paramaterizing.
| This really isn’t any different from the simpler examples that you’ve seen, or, for that matter, from the way you’d approach this in first-year Calculus: find the intersections of the curves and integrate piecewise. Remember that even in the example you cite of finding the area under $x^2$, the region is bounded by three curves: that one and the lines $y=0$ and $x=2$. The intersections of these curves are only slightly easier to find than the ones for the region in your question.
The region is bounded below by the line $y=\frac25x$ and above by $y=8x$ and $y=9-x^2$. One intersection point is obviously the origin, and the other two are easily found by substitution into the quadratic. You’ll then need to integrate along the lower line segment from the origin to the first intersection point, along the parabola to the second, and then back to the origin along the other line segment. Take care to orient the curves correctly.
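A numeric cross-check of this setup (my own sketch; it uses the observation that along any line $y=cx$ through the origin, $x\,dy-y\,dx=0$, so only the parabolic arc contributes to $\frac12\oint(x\,dy-y\,dx)$):

```python
import math

x1 = 1.0                           # y = 8x meets y = 9 - x^2: x^2 + 8x - 9 = 0
x2 = (-2 + math.sqrt(904)) / 10    # y = 2x/5 meets it: 5x^2 + 2x - 45 = 0

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (exact for polynomials up to degree 3)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

# on the parabola, x dy - y dx = [x(-2x) - (9 - x^2)] dx = -(x^2 + 9) dx,
# traversed with x running from x2 down to x1 along the ccw boundary
area_green = 0.5 * simpson(lambda x: x * x + 9, x1, x2)

# sanity check by ordinary vertical slices
area_direct = (simpson(lambda x: 8 * x - 2 * x / 5, 0.0, x1)
               + simpson(lambda x: 9 - x * x - 2 * x / 5, x1, x2))
```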
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does omitting the multiplication operator have an effect on order of operations When you write a mathematical expression like this:
$4:2(1+1)$,
does the fact that the multiplication operator is not explicitly written has any bearing on the precedence? What is the order of operations in this case?
Is it:
$4:4=1$ (order: parenthesized addition, implicit multiplication, division)
or
$2(1+1)=4$ (order division, parenthesized addition, implicit multiplication).
If the explicit operator has no effect, this would be $4\div2\cdot(1+1)$ and calculated from left to right (because of the no precedence between division and multiplication). Result woud then be $4$.
| The right way to do it, I think, would be as follows:
$$ 4 \div 2 (1 + 1) = 4 \div 2 \times (2) = 4 \div 2 \times 2 = (4 \div 2) \times 2 = 2 \times 2 = 4. $$
However, one can also proceed as follows:
$$ 4 \div 2(1+1) = (4 \div 2) \times (1+1) = 2 \times (1+1) = 2 \times 2 = 4. $$
Note that parentheses carry the topmost precedence.
Except for the grouping operators such as the parentheses, the multiplication --- whether expressed with or without the sign --- and division have equal precedence, followed by addition and subtraction, which also have the same precedence. However, operators (of the same precedence) are to be evaluated from left to right.
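For what it's worth, programming languages behave the same way once the multiplication sign is written out explicitly; in Python, for instance, `*` and `/` share precedence and associate left to right:

```python
result = 4 / 2 * (1 + 1)   # evaluated as (4 / 2) * 2
```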
Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Is the derivative directional? I was met with a surprising face when I assumed that a derivative is a directional change, i.e. that $$\frac{df(x)}{dx}$$
describes the change in $f(x)$ following an positive change in $x$. Moreover, the negative derivative describes the change in $f(x)$ following a negative change in $x$:
$$-\frac{df(x)}{dx}$$
Am I mistaken?
| You have to be careful with statements like $-\frac{df(x)}{dx}$ being the change in $f(x)$ following a negative change in $x$
Take for example $f(x)=x^2$ at $x=3$ so $f(x)=9$. Then $\frac{df(x)}{dx}=2x$ which is $6$ when $x=3$.
A small positive change in $x$ changes $f(x)$ by about the change multiplied by $6$: so for example $f(3+0.01) = 3.01^2= 9.0601$ which is close to $f(3) + 0.01 \times 6$
A small negative change in $x$ also changes $f(x)$ by about the change multiplied by $6$: so for example $f(3-0.01) = 2.99^2= 8.9401$ which is close to $f(3) - 0.01 \times 6$. And it is safer to think of it this way than think of the result being something like $f(3) + 0.01 \times (-6)$.
In other words, keeping the sign of the change means you do not need to reverse the sign of the derivative
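The same point, numerically: at $x=3$ one and the same derivative (about $6$) governs both one-sided difference quotients.

```python
def f(x):
    return x**2

h = 0.01
forward = (f(3 + h) - f(3)) / h       # about 6
backward = (f(3) - f(3 - h)) / h      # also about 6, not -6
```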
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Expected Value of a Ratio of Log-Normal Variables Suppose $a$ and $b$ are constants (with $a\neq b$) and $X$ is a log-normal random variable, i.e. $\ln(X) \sim \mathcal N(\mu,\sigma^2)$. Suppose that
$$ \mathbb E \left[ \frac{X-a}{X-b} \right]=0.$$
Is it possible to solve for $\mu$ as an explicit function of $a$, $b$ and $\sigma^2$?
| I think that you'd need to restrict the potential values of $b$ such that $b \le 0$ as otherwise the expectation doesn't exist.
And I'm not convinced that there is a closed-form solution other than for $b=0$. Here is some Mathematica code to find $\mu$ in terms of $a$ and $\sigma^2$ when $b=0$:
b = 0
expectation = Integrate[E^(-((-\[Mu] + Log[x])^2/(2 \[Sigma]^2)))/(
Sqrt[2 \[Pi]] x \[Sigma]) (x - a)/(x - b), {x, 0, \[Infinity]},
Assumptions -> {a \[Element] Reals, \[Sigma] > 0, \[Mu] \[Element]
Reals}]
$$1-a e^{\frac{\sigma ^2}{2}-\mu }$$
This means that
$$\mu=\log (a)+\frac{\sigma ^2}{2}$$
where $a>0$.
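A Monte Carlo sanity check of the $b=0$ solution (the values of $a$ and $\sigma$ and the seed below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 2.0, 0.5
mu = np.log(a) + sigma**2 / 2                   # claimed solution for b = 0

X = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
estimate = float(np.mean((X - a) / X))          # should be close to 0
```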
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Give an example of a topology that is not compact, not connected, and not Hausdorff here are my questions:
1) Give an example of a topology that is not compact, not connected, and not Hausdorff.
2) Give an example that is connected, but not compact and not Hausdorff.
This is the hint we have:
*i) Let $(X,\mathcal{T})$ and $(X',\mathcal{T}')$ be two topological spaces, and the collection
$$
\mathcal{B} = \bigl\{ \{0\} \times U \mid U \in \mathcal{T} \bigr\} \cup \bigl\{ \{1\} \times U' \mid U'\in \mathcal{T}' \bigr\}
$$
defines a basis for a topology $\mathcal{T}_\mathcal{B}$ on the disjoint union $X \cup X':= (\{0\} \times X) \cup (\{1\} \times X')$
*ii) If $X$ and $X'$ are not empty, the space $(X \cup X', \mathcal{T}_\mathcal{B})$ is not connected.
*iii) $(X \cup X', \mathcal{T}_\mathcal{B})$ is Hausdorff if and only if $(X,\mathcal{T})$ and $(X',\mathcal{T}')$ are.
*iv) $(X \cup X', \mathcal{T}_\mathcal{B})$ is compact if and only if $(X,\mathcal{T})$ and $(X',\mathcal{T}')$ are.
I think for 1), maybe we can let $(X,\mathcal{T})$ equal to some space that is not Hausdorff union $(X',\mathcal{T}')$ which is not compact. For instance, $X = (\{1,2,3\},\mathcal{T})$ which is not HSD, because the each compact set is closed, yet if we take $y = \{1\}$, and $y^c$ is not the same as $\mathcal{T}$, thus $y^c$ is not open. And let $X' = (3, \infty]$, because it's clearly not compact. And $X$ and $X'$ are not empty, so it's not connected. Will this work?
For 2), let $X = (\{1,2,3\},\mathcal{T})$, and let it union with some set?
Please give some example and give some explanation for why it is a good example.
Thank you for the help!
| *
*$\mathbb N$ with the base $\{\{1,2\}\} \cup \bigl\{\{n\} : n \in \mathbb N,\ n > 2\bigr\}$.
*$\mathbb N$ with the base $\bigl\{\{1,n\} : n \in \mathbb N\bigr\}$.
Details left to the diligent reader.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2672997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How many different three-digit numbers contain both the digit $2$ and the digit $6$?
How many different three-digit numbers contain both the digit $2$ and the digit $6$?
There are six possible permutations: $$
\overline{26x},\ \overline{62x},\ \overline{2x6},\ \overline{6x2},\ \overline{x26},\ \overline{x62}.
$$
In the first four groups $x$ can be any digit in $[0,9]$; in the last two groups $x$ can be any digit in $[1,9]$. So
$$
4 \times 10 + 2 \times 9 = 58.
$$
However, the right answer is $52$. Could you, please, point out the mistake in the reasoning?
| You should also take into account the repetition of a number. The following numbers are repeated:
$226({x26\text{ and }2x6})$
$262(26x\text{ and }x62)$
$266(26x\text{ and }2x6)$
$622(62x\text{ and }6x2)$
$626(62x\text{ and }x26)$
$662(6x2\text{ and }x62)$
So after subtracting the number of repeated numbers (i.e. $6$) from $58$, we get $52$.
A genuine approach to this problem is given in the other answer of Manthanein.
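The count is small enough to confirm by brute force:

```python
count = sum(1 for n in range(100, 1000)
            if '2' in str(n) and '6' in str(n))   # three-digit numbers with both digits
```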
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
A problem of combinatorics (reduced from a problem of computing probability generating function) This is a problem involving computation of probability generating function (PGF).
I have reduced the problem into a problem of combinatorics, as given below.
Let $f(n,k)$ be defined as
$$f(n+1,k):=f(n,k)+f(n,k-1)+\cdots+f(n,k-n)$$
where $f(n,k)=0$ for $k<0$, $f(1,0)=1$ and $f(1,k)=0$ otherwise. To show that
$$\sum_{k=0}^{\infty}s^k f(n,k)=\prod_{k=2}^n \frac{1-s^k}{1-s},\,\, |s|<1$$
Source : Rohatgi, Saleh. p.92. Problem 11. Thanks in advance.
| The problem can be rewritten as
$$
\left\{ \matrix{
f(n,k) = \sum\limits_{0\, \le \,j\, \le \,n - 1} {f(n - 1,k - j)} \hfill \cr
f(n,k) = 0\quad \left| {\,n,k < 0} \right. \hfill \cr
f(0,k) = f(1,k) = \left[ {0 = k} \right] \hfill \cr} \right.
$$
and more compactly as
$$
f(n,k) = \sum\limits_{0\, \le \,j\, \le \,n - 1} {f(n - 1,k - j)} + \left[ {0 = n} \right]\left[ {0 = k} \right]
$$
where $[P]$ denotes the Iverson bracket
$$
\left[ P \right] = \left\{ {\begin{array}{*{20}c}
1 & {P = TRUE} \\
0 & {P = FALSE} \\
\end{array} } \right.
$$
Then let's take the generating function on the index $k$
$$
\eqalign{
& F(n,x) = \sum\limits_{0\, \le \,k} {f(n,k)\,x^{\,k} } = \sum\limits_{0\, \le \,j\, \le \,n - 1} {\sum\limits_{0\, \le \,k} {f(n - 1,k - j)\,x^{\,k} } } + \left[ {0 = n} \right] = \cr
& = \sum\limits_{0\, \le \,j\, \le \,n - 1} {x^{\,j} \sum\limits_{0\, \le \,k} {f(n - 1,k - j)\,x^{\,k - j} } } + \left[ {0 = n} \right] = \cr
& = {{1 - x^{\,n} } \over {1 - x}}F(n - 1,x) + \left[ {0 = n} \right] \cr}
$$
Thus
$$
\eqalign{
& F(0,x) = 1 \cr
& F(1,x) = {{1 - x^{\,1} } \over {1 - x}} = 1 \cr
& F(2,x) = {{1 - x^{\,2} } \over {1 - x}} = {{1 - x^{\,2} } \over {1 - x}}{{1 - x^{\,1} } \over {1 - x}} \cr
& \quad \quad \vdots \cr
& F(n,x) = \sum\limits_{0\, \le \,k} {f(n,k)\,x^{\,k} } = \prod\limits_{1\, \le \,\,k\, \le \,n} {{{1 - x^{\,k} } \over {1 - x}}} \cr}
$$
$f(n,k)$ is OEIS sequence A008302.
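As a sanity check (not part of the original answer), a short script can compare the recurrence against the coefficients of $\prod_{k=2}^n(1+s+\cdots+s^{k-1})$ for small $n$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, k):
    # f(n,k) = 0 for k < 0; f(1,0) = 1 and f(1,k) = 0 otherwise;
    # f(n,k) = sum_{j=0}^{n-1} f(n-1, k-j)  (the recurrence, shifted by one)
    if k < 0:
        return 0
    if n == 1:
        return 1 if k == 0 else 0
    return sum(f(n - 1, k - j) for j in range(n))

def product_coeffs(n):
    # coefficients of prod_{k=2}^{n} (1 + s + ... + s^{k-1}) by convolution
    poly = [1]
    for k in range(2, n + 1):
        new = [0] * (len(poly) + k - 1)
        for i, a in enumerate(poly):
            for j in range(k):
                new[i + j] += a
        poly = new
    return poly

for n in range(1, 7):
    coeffs = product_coeffs(n)
    assert coeffs == [f(n, k) for k in range(len(coeffs))]
```

For $n=4$ this reproduces the row $1, 3, 5, 6, 5, 3, 1$, consistent with A008302.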
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Fundamental Theorem of Calculus For Piecewise Continuous Functions Recall that the first fundamental theorem of calculus says that if $f$ is a continuous real-valued function defined on $[a,b]$, then the function defined by $F(x)= \int_a^x f(t)\, dt$ is differentiable and $F'(x)= f(x)$.
Does the same result hold if $f$ is merely piecewise continuous instead of continuous?
Edit: The above question is motivated by a set of notes. The exact statement written in the notes is: let $f$ be continuous on $\mathbb R$ with period $2l$; then $\int_l^x f'(t)\, dt = f(x)$
P.S: I don’t know about measure theory so please avoid to use it.
| The answer is no.
Consider the function $f(x)$ defined on $[-1,1]$ as $ f(x)=1$ for $-1\le x\le 0$ and $f(x)=-1$ for $0<x\le 1$
Then $F(x)= \int _{-1}^x f(t)dt =x+1$ if $x \le 0$ and $F(x)= \int _{-1}^x f(t)dt =1-x$ if $x > 0$
Note that $F(x)$ is a tent map, which is not differentiable at $x=0$, so $F'(0) = f(0)$ fails there.
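A quick numerical illustration (an addition to the answer; `F` just encodes the piecewise formula above) shows the one-sided difference quotients at $0$ disagreeing, so $F'(0)$ cannot exist:

```python
def F(x):
    # F(x) = ∫_{-1}^x f(t) dt with f = 1 on [-1,0] and f = -1 on (0,1]
    return x + 1 if x <= 0 else 1 - x

h = 1e-6
left = (F(0) - F(-h)) / h     # difference quotient from the left
right = (F(h) - F(0)) / h     # difference quotient from the right
print(left, right)  # approximately 1.0 and -1.0
```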
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to get $\lim_{x\to 0}\frac{x}{\sin x} =1$ from $\lim_{x\to 0}\frac{\sin x}{x} =1$? We know the identity that
$$\lim_{x\to 0}\frac{\sin x}{x} =1$$
However in many solved examples that I was going through , I came across the identity
$$\lim_{x\to 0}\frac{x}{\sin x} =1$$
Although it was never formally mentioned anywhere in the text . How does the previous identity imply this?
Does it mean that as $x$ becomes very small the value of $\sin x$ is approximately equal to the value of $x$ so the value of both of $\frac{x}{\sin x}$ and $\frac{\sin x}{x}$ tends to $1$?
| Simply note that
$$\frac{x}{\sin x}=\frac1{\frac{\sin x}{x}}\to \frac11=1$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 2
} |
Solving finite limit without L'Hôpital I have come across a problem which requires solving the following limit without L'Hôpital rule:
$$\lim_{x\to\infty} x^2\cdot(e^\frac{1}{x-1}-e^\frac{1}{x})$$
It is obvious from the graphic plot (or using L'Hôpital rule) that the limit is 1. I have tried a few algebraic manipulations and changes of variable without success. Another approach that I tried was to "sandwich" this limit between two other different limits, one strictly greater and one strictly lesser than this one, that would be easier to work with than the difference of exponentials present in this one. As of now I haven't had any success, how would one go about solving it?
| By the mean value theorem,
$$e^{\frac{1}{x-1}}-e^{\frac{1}{x}} = e^{c_x}\left(\frac{1}{x-1} -\frac{1}{x}\right)=e^{c_x}\left(\frac{1}{(x-1)x}\right)$$
for some $c_x$ between $1/x$ and $1/(x-1).$ Thus as $x\to \infty, c_x\to 0,$ hence $e^{c_x}\to 1.$ Multiplying by $x^2$ then gives a limit of $1.$
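A numerical check (added for illustration): rewriting the difference as $e^{1/(x-1)}-e^{1/x} = e^{1/x}\left(e^{1/((x-1)x)}-1\right)$ and using `expm1` avoids the catastrophic cancellation between the two nearly equal exponentials:

```python
import math

def g(x):
    # x^2 * (e^{1/(x-1)} - e^{1/x}), factored so expm1 handles the tiny difference
    return x**2 * math.exp(1 / x) * math.expm1(1 / ((x - 1) * x))

print(g(1e3), g(1e6))  # both close to 1, and closer as x grows
```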
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Colimit of collection of finite sets Situation: Let $\mathbb{T}$ be an endofunctor on $\mathsf{Set}$, the category of sets, and suppose that $\mathbb{T}$ restricts to an endofunctor on $\mathsf{FinSet}$, the category of finite sets. That is, $\mathbb{T}Y$ is finite whenever $Y$ is finite. Moreover assume $\mathbb{T}$ is monotone, that is, $Y \subseteq Z$ implies $\mathbb{T}Y \subseteq \mathbb{T}Z$.
Let $X$ be any set (possibly of infinite size). Then the finite subsets of $X$ form a directed set.
Question: Is the directed colimit of the collection $\{ \mathbb{T}Y \mid Y \subseteq X \text{ finite} \}$ equal to $\mathbb{T}X$.
| No, of course not, at least not without more assumptions on $\mathbb{T}$.
For instance take $\mathbb{T}$ to be the covariant powerset functor. Then clearly it satisfies your conditions, but for infinite $X$, the power set of $X$ is much larger than the union of the power sets of $Y$ for $Y\subset X$ finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Perform the modified Euler's Method given a point and a stepsize Consider the following ODE
$$\dfrac{dy}{dx} = -\dfrac{2x}{y}~~~~\text{where}~~~~y(0)=1$$
Given that $y(0.7)=0.141421$ to $6$ digit precision, use the modified Euler's method to estimate $y(0.8)$ using $h=0.1$ and work to $5$ digit precision.
How do I do this? I can do it from scratch, but given $y(0.7)$ is throwing me off for some reason.
Thanks in advance for any help!
| Given the ODE $y'=f(x,y)$ then Euler's method says $y(x_{n+1})=y(x_n)+h\cdot f(x_n,y(x_n))$ where in your case $f(x,y) = -\frac{2x}{y}$, $x_{n+1}=0.8$, $x_n=0.7$ and $h=x_{n+1}−x_n=0.1$. The modified Euler's method says
$$y(x_{n+1}) = y(x_n) + \frac{h}{2}\cdot[f(x_n,y(x_n)) + f(x_{n+1},\tilde{y}(x_{n+1})) ]$$
where $\tilde{y}(x_{n+1})=y(x_n)+h\cdot f(x_n,y(x_n))$ is the predictor value you would get using Euler's method. It is then just a matter of plugging in and computing all the terms (working to the given precision).
The analytical solution to the ODE is $y(x) = \sqrt{1-2x^2}$ and $y(0.7)$ is in fact $0.141421$ to $6$ significant digits. It's just provided to allow you to start the method from $x=0.7$. The reason $x=0.7$ is chosen is probably that the analytical solution does not exist (as a real function) for $x>1/\sqrt{2}\approx 0.707$, so the values you obtain using Euler's vs. the modified Euler's method will likely be very different (and both wrong in this case).
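As an illustration (not part of the original answer; the function names are my own), here is the computation carried out in code:

```python
def f(x, y):
    # right-hand side of the ODE: y' = -2x / y
    return -2 * x / y

def modified_euler_step(x, y, h):
    # predictor (plain Euler), then trapezoidal corrector
    y_pred = y + h * f(x, y)
    return y + (h / 2) * (f(x, y) + f(x + h, y_pred))

y_07 = 0.141421
y_08 = modified_euler_step(0.7, y_07, 0.1)
print(round(y_08, 5))  # ≈ -0.25927
```

The negative value is consistent with the remark above: the real solution ceases to exist before $x = 0.8$, so the method is stepping into a region where the ODE has no real solution.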
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $|\det(A_n)|=n^{n/2}$ For k $\ge2$ we recursively define $A_{2^k}$ as $\begin{bmatrix} A_{2^{k-1}} & A_{2^{k-1}} \\ A_{2^{k-1}} & -A_{2^{k-1}} \end{bmatrix}$ and $A_1=[1]$
The problem is to show that $|\det(A_n)|=n^{n/2}$
My attempt: we do an induction on $k$
$|\det(A_2)|=2=2^{2/2}$. Induction hypothesis: $|\det(A_{n})|=n^{n/2}$ and we want to show that $|\det(A_{2n})|=(2n)^n$
using block matrix properties
$|\det(A_{2n})|=|\det(\begin{bmatrix} A_{n} & A_{n} \\ A_{n} & -A_{n} \end{bmatrix})|=|\det(-A)\det(A+AA^{-1}A)|=|2^n\det(A_n)^2|=|2^nn^n|=(2n)^n$
Can someone confirm that there are no flaws in the reasoning, please?
| Your proof looks good.
Alternatively, for the induction-step, notice that
$$
\begin{vmatrix}
A_n & A_n \\
A_n & -A_n
\end{vmatrix}
=
\begin{vmatrix}
A_n & A_n \\
0 & -2A_n
\end{vmatrix},
$$
since the matrices
$$
\begin{bmatrix}
A_n & A_n \\
A_n & -A_n
\end{bmatrix}
\text{and}
\begin{bmatrix}
A_n & A_n \\
0 & -2A_n
\end{bmatrix}
$$
are row-equivalent.
Thus,
$$|\det A_{2n}| = |\det A_n|\cdot |\det(-2A_n)| = n^{n/2}\cdot 2^n\, n^{n/2} = 2^n n^n = (2n)^{2n/2}.$$
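As a sanity check (an addition to the answer), the construction and the determinant claim can be verified for small $n$; the determinant is computed exactly with fraction-free (Bareiss) elimination, so no floating point is involved:

```python
def hadamard(n):
    # A_1 = [[1]], A_{2m} = [[A_m, A_m], [A_m, -A_m]]
    if n == 1:
        return [[1]]
    B = hadamard(n // 2)
    top = [row + row for row in B]
    bottom = [row + [-x for x in row] for row in B]
    return top + bottom

def det(M):
    # fraction-free Bareiss elimination: exact for integer matrices
    M = [row[:] for row in M]
    n = len(M)
    prev = 1
    for k in range(n - 1):
        if M[k][k] == 0:
            # bring up a nonzero pivot; negate the row to keep det unchanged
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i] = [-v for v in M[i]], M[k]
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M[-1][-1]

for n in [1, 2, 4, 8]:
    assert abs(det(hadamard(n))) == round(n ** (n / 2)), n
```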
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2673937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How would the sketch of a residual plot look for residuals from an exponential distribution with expectation 0? A linear model has been fitted under the usual assumptions, i.e. Y = Xβ + ε, with $ε ∼ N(0,σ^2I)$. How would the sketch of a residual plot look for residuals from an exponential distribution with expectation 0?
| Here is a model with $\beta_0 = -8$ and $\beta_1 = 1$. As @BruceET said, in order to maintain an error expectation of $0$, I have to set $\lambda = 1/8$ in the exponential noise, since $\mathcal{E}xp(1/8)$ has mean $8$. In this case you can view $\epsilon_i \sim \mathcal{E}xp(\lambda) + \beta_0 = \mathcal{E}xp(\lambda) - 8$, thus $\mathbb{E}[\epsilon_i]=0$. What you see on both plots is the signature of a random sample from an exponential distribution: a large concentration of residuals in $[-8, 0]$ (from the left boundary of the support up to the expected value) and then occasional large "spikes" upwards.
set.seed(1)
x = rnorm(1000, 10, 2)
y = - 8 + x + rexp(1000, 1/8)
mod = lm( y ~ x)
par(mfrow=c(1,2))
plot(x, y, pch = 20,
col = "darkgrey",
main = "Linear Regression",
cex.main = 0.9)
abline(mod)
plot(fitted(mod),
residuals(mod),
pch = 20,
main = "Residuals vs. Fitted values",
cex.main = 0.9,
col = "darkgrey")
abline(h = 0)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Ways to people can get exactly 4 beers
Let $b ≥ 1$ and $c ≥ 1$ be integers. Elisa’s neighborhood pub serves $b$ different types of
beer and $c$ different types of cider. Elisa invites $6$ friends to this pub and orders $7$ drinks,
one drink (beer or cider) for each friend, and one cider for herself. Different people may get
the same type of beer or cider.
In how many ways can Elisa place these orders, such that exactly $4$ people get a beer?
Answer: ${6\choose4} \cdot b^4 \cdot c^3$
Why do we do ${6\choose4}$ here? I can understand that $b^4$ is the number of ways you can distribute the beers to $4$ people, but if we then do $c^3$ thats $3$ ways to distribute $3$ ciders (including Elisa) which makes for $7$ people. Also why do we do $c^3$ when we have already reserved one cider for Elisa?
| $\binom{6}{4}$ is the number of ways to choose the 4 of Elisa's friends who will get a beer.
$b^4$ is the number of ways those 4 friends can choose their beers: Each of those friends can choose one of $b$ different beers, so there are $b^4$ different assignments of friends to beers.
$c^3$ is the number of ways Elisa and the two cider-getting friends can get their drinks, since each of these three people can choose any one of the $c$ ciders.
Putting this all together gives you $$\binom{6}{4}\cdot b^4 \cdot c^3. $$
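A brute-force enumeration (added for illustration, with small made-up values $b = 2$, $c = 3$) confirms the count:

```python
from itertools import product
from math import comb

b, c = 2, 3  # drinks 0..b-1 are beers, b..b+c-1 are ciders
count = 0
for friends in product(range(b + c), repeat=6):   # each friend gets any drink
    if sum(1 for d in friends if d < b) == 4:     # exactly 4 friends get a beer
        count += c                                # Elisa picks one of c ciders
assert count == comb(6, 4) * b**4 * c**3
print(count)  # 6480 for b = 2, c = 3
```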
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof verification: For isomorphism $\phi: G\to H$, $G$ is abelian iff $H$ is abelian. May someone please verify my following proof?
For isomorphism $\phi: G\to H$ of the two groups $G$ and $H$, prove that $G$ is abelian iff $H$ is abelian.
Proof: (assume $G$ is abelian) Let $h_{1},h_{2}\in H$. Since $\phi$ is bijective, $\phi$ is onto. Then there exist $g_{1},g_{2}\in G$ such that $\phi (g_{1})=h_{1}$ and $\phi (g_{2})=h_{2}$ for all $h_{1},h_{2}\in H$. Then $h_{1}h_{2}=\phi (g_{1})\phi (g_{2})=\phi (g_{1}g_{2})$ (operation preserving) $=\phi (g_{2}g_{1})$ ($G$ is abelian) $=\phi (g_{2})\phi (g_{1})$ (operation preserving) $=h_{2}h_{1}$. Then $H$ is abelian.
(assume $H$ is abelian) Let $g_{1},g_{2}\in G$. Since $\phi$ is bijective, the inverse $\phi ^{-1}:H\to G$ exists and is onto. Then there exist $h_{1},h_{2}\in H$ such that $\phi ^{-1}(h_{1})=g_{1}$ and $\phi ^{-1}(h_{2})=g_{2}$ for all $g_{1},g_{2}\in G$. Then $g_{1}g_{2}=\phi ^{-1} (h_{1})\phi ^{-1}(h_{2})=\phi ^{-1}(h_{1}h_{2})$ (operation preserving) $=\phi ^{-1}(h_{2}h_{1})$ ($H$ is abelian) $=\phi ^{-1}(h_{2})\phi ^{-1}(h_{1})$ (operation preserving) $=g_{2}g_{1}$. Then $G$ is abelian. $\square$
| It is correct but, in the first paragraph, you shouldn't have written “for all $h_1,h_2\in H$”. The elements $h_1$ and $h_2$ were given at the beginning of the proof.
And it is a waste of time asserting that $\phi^{-1}$ is onto. Every inverse has that property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How would one solve a linear equation in two integer variables? For example, how would I find integers $a$ and $b$ that satisfy the following equation?
$$5a - 12b = 13$$
I always resorted to trial and error when doing something like this and more often than not I would finally reach my answer. But for this one I just kept going and going to no avail. So it finally brought about the concern that trial and error wasn't always going to work.
So what would be the best way to get two integer solutions to the above equation?
| The Euclidean algorithm is best for large numbers, but for a small example like this, trial and error, combined with a little modular arithmetic, works just fine and is less work. Reducing $5a - 12b = 13$ modulo $12$, we have $5a\equiv 1 \pmod{12},$ so $a\equiv 5 \pmod{12}$ by trial. Then it's clear that $a=5,$ $b=1$ is a solution.
Alternatively, we might reduce modulo $5,$ getting $3b \equiv 3 \pmod{5},$ so $b\equiv 1 \pmod{5},$ and again we find that $b=1$ gives $a=5.$
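For illustration (an addition to the answer), a brute-force scan over one period confirms this; since the equation only constrains $a$ modulo $12$, checking $a = 0, \dots, 11$ suffices:

```python
# 5a - 12b = 13  =>  5a ≡ 13 ≡ 1 (mod 12), so scan one period of a
solutions = []
for a in range(12):
    if (5 * a - 13) % 12 == 0:
        b = (5 * a - 13) // 12
        solutions.append((a, b))
print(solutions)  # [(5, 1)]
```

From the particular solution $(a, b) = (5, 1)$, every integer solution has the form $a = 5 + 12t$, $b = 1 + 5t$, since $5 \cdot 12t - 12 \cdot 5t = 0$.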
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 1
} |
Query about streamlines of the velocity function $\textbf{q} = k^2 \frac{x \textbf{j} - y \textbf{i}}{x^2 + y^2} , k =$ constant. If we consider the velocity field $\textbf{q} = k^2 \frac{x \textbf{j} - y \textbf{i}}{x^2 + y^2} , k =$ constant.
The streamlines are given by the circles $x^2 + y^2 =$ constant.
Now the streamlines cut the equipotential surfaces orthogonally.
We also found the equipotential surfaces $\phi(x,y,z) = c$ given by $q = -\nabla \phi$.
That is $\phi(x,y)= k^2 \tan^{-1}\left(\frac{x}{y}\right) = c$
That is they represent the planes $x = cy$.
And also it is intuitively and geometrically clear that the streamlines cut the equipotential surface orthogonally.
But my main question is what happens at the origin, like because we have the equipotential surfaces as planes passing through the origin ? , at origin the equipotential surfaces are not even defined?
| This is the velocity field induced by a line vortex and there is a singularity at the origin in the plane. Because of this singularity, the potential is, in fact, not defined at the line $\{(0,0,z): -\infty < z < \infty\}$ and the equipotential surfaces converge here.
The flow is irrotational everywhere except at $(x,y) = (0,0)$: away from the singularity $\nabla \times \mathbf{q} = 0$, and there exists a well-defined differentiable potential $\phi$ such that $\mathbf{q} = - \nabla \phi$.
It can help clarify by switching to cylindrical coordinates: $\,x = r \cos \theta, \,\, y = r \sin \theta$. The unit basis vectors are $\mathbf{e}_r = \cos \theta \,\,\mathbf{i} - \sin \theta \,\,\mathbf{j}$ and $\mathbf{e}_\theta = -\sin \theta \,\,\mathbf{i} + \cos \theta \,\,\mathbf{j}$ and the velocity field is
$$u_r = 0, \,\,\,u_\theta = \frac{k^2}{r}, \,\,\, u_z = 0.$$
Note that $u_\theta$ is infinite at $r = 0.$
The potential is given by
$$-\nabla \phi = -\left(\frac{\partial \phi}{\partial r}\,\mathbf{e}_r + \frac{1}{r} \frac{\partial \phi}{\partial \theta}\,\mathbf{e}_\theta + \frac{\partial \phi}{\partial z}\,\mathbf{e}_z\right) = \frac{k^2}{r} \mathbf{e}_\theta,$$
with solution $\phi = - k^2 \theta$ for $0 \leqslant \theta < 2\pi$. We restrict the domain of $\theta$ to $[0, 2\pi)$ so that the potential is not multi-valued for $(x,y) \in \mathbb{R}^2 \setminus (0,0)$ and $\phi$ is undefined for $(x,y) = (0,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Herbrand theorem works only with clauses I read that Herbrand's theorem is valid only for a set of clauses. I already know how to convert every formula in a clause.
But, what I haven't understood is why Herbrand's theorem doesn't work with a formula that's not a clause.
Can you show me ?
| See: Mordechai Ben-Ari, Mathematical Logic for Computer Science, Springer (3rd ed 2012), page 180:
Herbrand’s Theorem: A set of clauses $S$ is unsatisfiable if and only if a finite set of ground instances of clauses of $S$ is unsatisfiable.
From this we have that: a set of clauses $S$ is unsatisfiable iff $S$ has no
Herbrand models.
The restriction that $S$ be a set of clauses is unavoidable, i.e. if $S$ is a set of arbitrary closed formulas, it is not possible, in general, to show that $S$ is unsatisfiable by considering only Herbrand interpretations.
Let $S = \{ \ p(a), \ \exists x \lnot p(x) \ \}$, where the second formula is not a
clause.
Obviously, $S$ has a model: let $D = \{ 0, 1 \}$ and assign $0$ to $a$. Then, interpret $p$ with the function which maps $0$ to TRUE and $1$ to FALSE.
The only Herbrand interpretations for $S$ are $\emptyset$ and $\{ p(a) \}$ (there are no function symbols in $S$), but neither of these is a model for $S$.
Conclusion: $S$ does not have an Herbrand model.
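The example can be checked mechanically (a small illustration added here; the string encoding of ground atoms is my own):

```python
# Herbrand universe of S = { p(a), ∃x ¬p(x) }: the only ground term is a,
# so there are exactly two Herbrand interpretations: {} and {p(a)}.
universe = ["a"]

def satisfies(interp):
    holds_p_a = "p(a)" in interp
    holds_exists_not_p = any("p(%s)" % t not in interp for t in universe)
    return holds_p_a and holds_exists_not_p

herbrand_interps = [set(), {"p(a)"}]
assert not any(satisfies(I) for I in herbrand_interps)  # no Herbrand model
```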
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
if $i$ is a number then what is its numerical value? $i$ is the imaginary unit of the complex numbers, but there is a question that confused me; probably I missed the definition of a number. Wolfram Alpha treats $i$ as a number, while others treat it as a variable because it satisfies $\sqrt{i^2} = +i$ or $-i$. So my question here:
Question:
Is $i$ a number? If so, what is its value?
| Asking what's the value of $i$ is like asking what's the value of $2$. And, just like $i^2$ has two square roots, $i$ and $-i$, $2^2$ has two square roots, $2$, and $-2$.
And yes, it is a number, not a variable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 0
} |
Can the product of three complex numbers ever be real? Say I have three numbers, $a,b,c\in\mathbb C$. I know that if $a$ were complex, for $abc$ to be real, $bc=\overline a$. Is it possible for $b,c$ to both be complex, or is it only possible for one to be, the other being a scalar?
| Another approach: suppose $a, b$ are complex and not real and $ab$ isn't real. Then let $c=\overline{ab}$.
Note that in a precise sense this is universal: if $abc$ is real (and each is nonzero), then $c$ is a real multiple of $\overline{ab}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2674934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 15,
"answer_id": 7
} |
How do I determine whether the orientation of a basis is positive or negative using the cross product I know that if I have an orthonormal base in $\mathbb{R}^3$, namely $e_1$, $e_2$ & $e_3$, then it is positively oriented if
$$e_1 \times e_2 = e_3$$
$$e_2 \times e_3 = e_1$$
$$e_3 \times e_1 = e_2$$
It would make sense if it was negatively oriented when
$$e_1 \times e_2 = -e_3$$
$$e_2 \times e_3 = -e_1$$
$$e_3 \times e_1 = -e_2$$
But I would like to be sure. Am I correct?
| A basis $u$, $v$, $w$ of $\mathbb{R}^3$ is positively oriented if $(u\times v)\cdot w > 0$, and negatively oriented if $(u \times v) \cdot w < 0$. In the case of orthonormal vectors, that is probably equivalent to what you have.
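A quick check of the criterion on the standard basis (an addition to the answer; the helper functions are my own):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert dot(cross(e1, e2), e3) > 0  # standard basis: positively oriented
assert dot(cross(e2, e1), e3) < 0  # swapping two vectors flips the orientation
```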
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equation and polynomial a) For $a,\,b,\,c\in \mathbb{R}$ , let $f(x)=x^3+ax^2+bx+c$ and $M=\max\{1,|a|+|b|+|c|\}$. Show that $f(x)>0$ for $x>M$ and $f(x)<0$ for $x<-M$
b) Consider the following polynomial with integer coefficients $a_1,...,a_n$: $P(x)=x^n+a_1 x^{n-1}+...+a_n$. Show that every rational root of $P$ is an integer.
For the problem b) first I consider it is not true that $\frac{p}{q}:(p,q)=1$ is a root of this polynomial and putting this equation $P(x)=0$ and then contradict that $(p,q)\ne 1$.
But what about a)?? Any help...
| Not a rigorous proof:
We want to show $f(x) > 0$ for all sufficiently large $x$. The worst case, where the threshold is largest, is when every coefficient pulls the value down, so consider
$g(x) = x^3 - |a|x^2 - |b|x - |c| \le f(x)$ for $x \ge 0$.
For $x \ge 1$ we have $|b|x \le |b|x^2$ and $|c| \le |c|x^2$, so $g(x) \ge x^2\left(x - (|a|+|b|+|c|)\right)$.
So if $1 \gt |a| + |b| + |c|$ (then $M = 1$), for all $x > 1$: $g(x) > x^2(x-1) > 0$;
and if $1 \le |a| + |b| + |c|$ (then $M = |a| + |b| + |c|$), for all $x > M$: $g(x) \ge x^2(x - M) > 0$.
Either way $f(x) \ge g(x) > 0$ for $x > M$.
Likewise on the negative side, applying the same argument to $-f(-x) = x^3 - ax^2 + bx - c$.
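A numerical spot check of part a) (added for illustration), sampling random coefficients and testing points just beyond $\pm M$:

```python
import random

random.seed(0)

def check(a, b, c):
    f = lambda x: x**3 + a * x**2 + b * x + c
    M = max(1, abs(a) + abs(b) + abs(c))
    # test points slightly past the threshold and far past it
    return all(f(M + t) > 0 and f(-M - t) < 0 for t in (0.01, 0.5, 10.0))

assert all(check(random.uniform(-5, 5),
                 random.uniform(-5, 5),
                 random.uniform(-5, 5)) for _ in range(1000))
```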
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\lim_{y \to x-}f(y) = a$ if for every sequence $(x_n)_n$ in $\mathbb{R}$ that increases towards x, the sequence $f(x_n)$ converges to $a$
Let $f: \mathbb{R} \to \mathbb{R}$ be a function.
Prove that: $$\forall (x_n)_n \subseteq \mathbb{R}: (x_n \to x \land
\forall n: x_n \leq x_{n+1} \implies f(x_n) \to a)$$ implies that
$\lim_{y \to x-}f(y) = a$
My attempt:
Suppose $\lim_{y\to x-}f(y) \neq a$. This means,
$$\exists \epsilon >0: \forall \delta >0: \exists y \in \mathbb{R}: 0 < x-y < \delta \land |f(y) - a | \geq \epsilon$$
Choosing this $\epsilon > 0$ and letting $\delta := 1/n$, we can find a sequence $(y_n)_n$ in the reals such that $0 < x-y_n< 1/n$ and $|f(y_n)-a| \geq \epsilon$
This implies that $y_n \to x$ and $f(y_n) \not\to a$ and it is also clear that $(y_n)_n$ is increasing. The result follows by contraposition.
Is this correct?
| Attempt to fix it , correct me if wrong:
You have:
1)$0<x-y_n \lt 1/n$ and
2)$|f(y_n)-a| \ge \epsilon$.
1) implies $\lim_{ n \rightarrow \infty} y_n = x.$
You need to choose an increasing subsequence such that
$y_{n_k} \le y_{n_{k+1}} $.
Inductively:
0) Start: $x-y_0\lt 1/n_0.$
1) Choose $n_1$ such that :
$1/n_1 \lt (x-y_0).$
Step:
2) Choose $n_{k+1} > n_k$ such that:
$1/n_{k+1} \lt (x-y_{n_k})$; then $y_{n_{k+1}} > x - 1/n_{k+1} > y_{n_k}$, so the subsequence is increasing.
Now apply your argument for the increasing subsequence $y_{n_{k}}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to derive the formula for the expected value of a function of a continuous random variable If $X$ is a r.v. with density $f$ and $Y = g(X)$ then $$E[Y] = \int_{\Bbb R}g(x)f(x)$$
My text offers no demonstration of this. I am familiar with the Lebesgue integral in case the proof relies on measure-theoretic notions. Any help, either in the form of a derivation or reference to one would be much appreciated.
| Here's a derivation for the Riemann Integral:
Let $F_X$/$F_Y$ and $f$/$h$ denote the CDFs and pdfs of $X$ and $Y = g(X)$ respectively, and assume for this derivation that $g$ is strictly increasing and differentiable (the general case follows by splitting $g$ into monotone pieces). Recall that, by definition, the pdf is the derivative of the CDF. Thus we find the density $h$ by finding $F_Y$ and using the fact that $h = F'_Y$. We thus have:
$F_Y(y) = Pr[Y \leq y] = Pr[g(X) \leq y] = Pr[ X \leq g^{-1}(y)] = F_X\left(g^{-1}(y)\right) = \left(F_X \circ g^{-1}\right)(y) \\ \implies h(y) = F'_Y(y) = \left(F_X \circ g^{-1}\right)'(y) = F'_X\left(g^{-1}(y)\right)\left(g^{-1}\right)'(y) = f\left(g^{-1}(y)\right)\left(g^{-1}\right)'(y) $
$ \\ $
Finally we make the $u$-substitution $$y = g(x) \implies x = g^{-1}(y) \text{ and } dx = \left(g^{-1}\right)'(y) \ dy$$ and observe:
$$E[Y] = \int_{\mathbb{R}} yh(y) \ \mathrm{d}y = \int_{\mathbb{R}} y f\left(g^{-1}(y)\right) \left(g^{-1}\right)'(y) \ \mathrm{d}y = \int_{\mathbb{R}} g(x)f(x) \ \mathrm{d}x$$
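As an illustration (not part of the original answer), the formula can be checked numerically for $X \sim N(0,1)$ and $g(x) = x^2$: a Monte Carlo estimate of $E[Y]$ agrees with a direct quadrature of $\int g(x)f(x)\,dx$, both close to the exact value $1$:

```python
import math
import random

random.seed(42)

# X ~ N(0,1) and g(x) = x^2, so E[g(X)] = Var(X) = 1
n = 200_000
mc_estimate = sum(random.gauss(0, 1) ** 2 for _ in range(n)) / n

# midpoint quadrature of ∫ g(x) f(x) dx over [-8, 8] (the tails are negligible)
def integrand(x):
    return x * x * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

m, lo, hi = 4000, -8.0, 8.0
dx = (hi - lo) / m
quad = sum(integrand(lo + (i + 0.5) * dx) for i in range(m)) * dx

assert abs(mc_estimate - 1) < 0.05
assert abs(quad - 1) < 1e-4
```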
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $P$ is a polynomial with $|P(1)| = \max\limits_{|z| =1} |P(z)|$, then its root on the unit circle is separated away from 0 Let $P(z)$ be a nonzero polynomial of degree $n$ such that $$|P(1)| = \max\limits_{|z| =1} |P(z)|.$$
Furthermore let $z_0 = e^{i\varphi_0}$, $\varphi_0 \in [-\pi,\pi]$ be a root of $P$ on the unit circle.
I want to prove that $|\varphi_0| \geq \pi /n$. Also, by my intuition, if this becomes an equality, $P(z)$ would be a multiple of $1+z^n$. Does anyone has an idea?
| EDIT. Multiplying by a constant, we may assume wlog that $P(1)=1$.
Consider the function $$f(t)=\frac{1}{2}\left( P(e^{it})+\overline{ P(e^{it})}\right)=\frac{1}{2}\left( P(e^{it})+P^*(e^{-it})\right),$$
where $P^*$ is the polynomial with complex conjugate coefficients.
This $f$ is an entire function of exponential type $n$, real on the real line, with $|f(t)|\leq1$ on the real line and $f(0)=1$.
For such functions we have a generalization of Bernstein's inequality
due to Szego and Boas:
$$f'^2+n^2f^2\leq n^2,$$
which we rewrite as
$$|f'|\leq n\sqrt{1-|f|^2}.$$
Now define $g(t)=\arcsin f(t)$.
Suppose that $f(a)=0$ and $f(b)=1$, and $0<f(t)<1$ for $t\in(a,b)$. Then $g$ is well defined (with the principal branch of arcsin)
and analytic on $(a,b)$, and $g(a)=0,\; g(b)=\pi$.
Thus in view of our inequality above
$$\pi\leq\int_a^b|g'(t)|dt\leq\int_a^b \frac{|f'(t)|}{\sqrt{1-f^2(t)}}dt\leq n|b-a|.$$
So $|b-a|\geq\pi/n,$
as requested. The statement on the case of equality follows from the case of equality in Szego's inequality.
Refs.
Szegö, Schriften der Königsberger gelehrten Gesellschaft, Naturwissenschaftliche
Klasse, Fünftes Jahr 4 (1928), p. 69.
R. Duffin and A. Shaeffer, Some inequalities for functions of exponential type, Bull AMS, 43 (1937), 554. (Freely available online.)
EDIT. Further research shows that the result is not new: it was proved in
MR0018268
Turán, P.
On rational polynomials.
Acta Univ. Szeged. Sect. Sci. Math. 11, (1946). 106–113.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
If $f(x) = { x^2 \sin \frac 1 x \text{ for } 0 < x \leq 1}$ and $f(0)=0$, prove $f$ is rectifiable. If $f(x) = { x^2 \sin \frac 1 x \text{ for } 0 < x \leq 1}$ and $f(0)=0$, prove $f$ is rectifiable.
I tried calculating the length, but couldn't do it. The actual integral should be $\int _0^1\sqrt{1+\left(2x\sin \left(\frac{1}{x}\right)-\cos \left(\frac{1}{x}\right)\right)^2}\:dx$ but its too hard. How to prove $f$ is rectifiable? Also anyone have any ideas how to do that integral?
| You don't have to calculate it, you just have to show it's finite. Hint:
$$(2x\sin(1/x)-\cos(1/x))^2\le (2x+1)^2 \le 9 \qquad (0 < x \le 1),$$
by the triangle inequality, so the integrand is at most $\sqrt{10}$, and the integral of a bounded function over $[0,1]$ is finite.
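A numerical check (added here for illustration) that the arc-length sum stays bounded; every term of the Riemann sum lies between $1$ and the uniform bound on the integrand, so the value is finite:

```python
import math

def integrand(x):
    return math.sqrt(1 + (2 * x * math.sin(1 / x) - math.cos(1 / x)) ** 2)

# midpoint Riemann sum on (0, 1]; the 1/x oscillation is harmless because
# the integrand is uniformly bounded
n = 200_000
length = sum(integrand((i + 0.5) / n) for i in range(n)) / n
print(length)  # a finite value between 1 and sqrt(10)
```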
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
if $d$ divides $n$, why is $d^{n-1} \not\equiv 1 \pmod n$? For the Fermat test it is stated that $a^{n-1} \equiv 1 \pmod n$ implies that $\gcd(a, n) = 1$ even when $n$ is not prime (the case for prime $n$ is obvious). I want to know why is this true.
If I can prove the above statement then it will prove this statement as follows.
Assume : If $d$ divides $n$ then $d^{n-1} \not\equiv 1 \pmod n$.
Let $\gcd(a, n) = d,\ a = cd,$ and so $\gcd(c, n) = 1$.
$a^{n - 1} \pmod n \equiv d^{n - 1} \pmod n \times c^{n - 1} \pmod n$
$c^{n - 1} \pmod n \equiv 1 \pmod n\ \ \ \ $ (I am not sure about this step when $n$ is not prime)
Thus, $a^{n - 1} \pmod n \equiv d^{n - 1} \pmod n$.
| If $\gcd(a,n)\ne 1$, then there is a prime $p$ with $p|a$ and $p|n$. $a^{n-1}\equiv 1\mod n$ means $n|a^{n-1}-1$ and because of $p|n$ this implies $p|a^{n-1}-1$.
But we also have $p|a^{n-1}$ because of $p|a$. This is a contradiction because a prime can never divide two consecutive integers simultaneously.
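An exhaustive check for small moduli (an addition to the answer) confirms the contrapositive, $\gcd(a,n) > 1 \implies a^{n-1} \not\equiv 1 \pmod n$:

```python
from math import gcd

for n in range(2, 200):
    for a in range(1, n):
        if gcd(a, n) > 1:
            # a shares a prime with n, so a^(n-1) mod n can never be 1
            assert pow(a, n - 1, n) != 1, (a, n)
```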
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2675972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can any plane in $\mathbb{R}^3$ be described using a two vectors that only move on two axes? Let's say I have the plane $2x-y-z=11$, and when I want the parametric form I get $u_1=(1,2,0)$ & $u_2=(0,-2,1)$. Aren't all vectors spanning the same plane parallel to one of these vectors? I can't make sense of this because that would mean that there would be no vectors on the plane that move on all three axes. The issue I have is that it seems I can always get two vectors that only move on two axes, no matter the plane, but obviously not all planes are spanned by vectors that only move on two axes.
| Considering the example you have provided, assuming the standard basis $\left\{\hat{\mathbf{i}}, \hat{\mathbf{j}}, \hat{\mathbf{k}}\right\}$, the two vectors $u_1$ and $u_2$ have, between them, non-zero contributions in all three components needed to span the plane.
In this manner, the plane cannot be contained solely within any one of the $xy$, $xz$ or $yz-$planes.
Hope this helps (and that I understood your question correctly).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find radius of circle which is tangential to parabola.
Given $r < 57$, let $C$ denote a circle centered at ($57$, $r$) with
radius $r$.
$C$ is tangent to a parabola, $y=x^2+r$, from the outside in the first
quadrant.
Find the value of $r$.
I am not sure how to approach the above problem. I started with implicit differentiation of the general circle formula.
*
*$r^2 = (x-57)^2+(y - r)^2$
*$D(r^2) = D\left((x-57)^2+(y - r)^2\right)$
*$0 = 2x - 114 + y'(2y-2r)$
*$y' = \dfrac{114-2x}{2y-2r}$
For parabola:
*
*$D(y)=D(x^2+r)$
*$y'=2x$
Which gives me the formula of the gradient of the tangent to the circle. However, I am unsure how to proceed from here. What does the question means by from the outside in the first quadrant?
I am actually modeled the parabola and circle in an online graph but I fail to see how does the first quadrant requirement comes into play since $r<57$.
| Let point of intersect be $(h,k)$. We get two equations, one from equating slopes and other from equating $y$ coordinate. $(h,k)$ satisfies both equations.
$$\frac{114-2h}{2k-2r} = 2h \tag{1}$$
$$(h-57)^2 + (k-r)^2 = r^2 \tag{2}$$
From the first equation, substituting $k = h^2 + r$ (the point lies on the parabola, so $2k - 2r = 2h^2$), we can write:
$$114-2h = 4h^3$$
Since this cubic gives the $x$-coordinate of the point of tangency, we want it to have only one real root, which is the case here because the derivative of the cubic $4h^3+2h-114$, namely $12h^2+2$, is positive for all real $h$. In fact $h = 3$ is that root, since $4\cdot 3^3+2\cdot 3 = 114$.
Now that we have $h$, use equation two to get $r$:
$$(h-57)^2 +h^4 = r^2$$
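For illustration (an addition to the answer), solving the cubic numerically by bisection and then computing $r$ from the second equation:

```python
import math

def cubic(h):
    return 4 * h**3 + 2 * h - 114

# the cubic is strictly increasing (derivative 12h^2 + 2 > 0), so bisection
# on a sign-changing bracket finds its unique real root
lo, hi = 0.0, 10.0
for _ in range(80):
    mid = (lo + hi) / 2
    if cubic(mid) > 0:
        hi = mid
    else:
        lo = mid
h = (lo + hi) / 2
r = math.sqrt((h - 57) ** 2 + h**4)
print(h, r)  # h ≈ 3.0, r = sqrt(2997) ≈ 54.74 (< 57, as required)
```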
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Diffusion equation if f and φ are periodic the solution is also periodic If I have the diffusion equation
$$ \frac{\partial u}{\partial t}-D \frac{\partial^{2}u}{\partial x^{2}}=f(x,t) $$ with the initial condition $$u(x, 0) = φ(x).$$
How would I prove that if $f$ and $φ$ are p-periodic in $x$, that is for some $p > 0$ the identities f(x+p, t) = f(x, t)
and $φ(x + p)$ = $φ(x)$
hold for all x and t, then the solution is also p-periodic in x, i.e.
$$u(x + p, t) = u(x, t)$$
| Let $v(x, t) = u(x+p,t)-u(x,t)$, thus $v(x, 0)=0$ and
$$
\frac{\partial v} {\partial t} -D \frac{\partial^2 v} {\partial x^2}=0
$$
from which we conclude that $v(x, t) =0$ and thus $u(x+p,t)=u(x,t)$.
Edit: (more detailed)
1) Using periodicity of initial condition $\phi$ to find initial condition for $v$
$$
v(x, 0)= u(x+p,0)-u(x,0) = \phi(x+p)-\phi(x) = \phi(x)-\phi(x) =0
$$
2) Using periodicity of source $f$ to find equation for $v$
\begin{align}
\frac{\partial v(x,t)} {\partial t}
&= \frac{\partial u(x+p,t)} {\partial t} - \frac{\partial u(x,t)} {\partial t}\\
&= D\frac{\partial^2 u(x+p,t)} {\partial x^2} +f(x+p,t) -D\frac{\partial^2 u(x,t)} {\partial x^2} -f(x,t)\\
&=D\frac{\partial^2 v(x,t)} {\partial x^2} + f(x,t) - f(x,t)\\
&=D\frac{\partial^2 v(x,t)} {\partial x^2}
\end{align}
3) Uniqueness of solutions to the diffusion equation means that $v(x,t)=0$ is the only solution, and as such $u(x+p,t)=u(x,t)$
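A numeric illustration (my own sketch: explicit finite differences on $[0,2p)$ with wraparound, so the solution shifted by $p$ can be compared directly against the unshifted one):

```python
import math

p, M = 1.0, 200                      # domain [0, 2p) with M grid points
dx = 2 * p / M
D, dt, steps = 0.1, 1e-4, 1000       # dt*D/dx^2 = 0.1 <= 1/2, so stable
x = [i * dx for i in range(M)]
u = [math.sin(2 * math.pi * xi / p) for xi in x]   # p-periodic initial data
f = [math.cos(2 * math.pi * xi / p) for xi in x]   # p-periodic source

for _ in range(steps):               # explicit Euler for u_t = D u_xx + f
    u = [u[i] + dt * (D * (u[(i + 1) % M] - 2 * u[i] + u[i - 1]) / dx**2 + f[i])
         for i in range(M)]

shift = M // 2                       # grid shift corresponding to x -> x + p
assert max(abs(u[i] - u[(i + shift) % M]) for i in range(M)) < 1e-9
```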
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Solve $n * \lceil{\frac{R}{T}}\rceil - \lceil{\frac{R*n}{T} - \frac{x*n}{T}}\rceil = \frac{x * n}{C}$ for x and n, $n \in Z^+$ and $x,R,T,C \in R^+$ I want to find the minimum value of n which satisfy given equation.
(Also, not stated in title but given is that $C < T$)
So far, I have been able to find the following properties:
*
*$$\frac{x * n}{C} \in Z^+$$
*$$x \leqslant \frac{\lceil{\frac{R}{T}}\rceil - \frac{R}{T}}{\frac{1}{C} - \frac{1}{T}}$$
Reasoning for the first observation: since the left-hand side of the equation in the title is an integer, the right-hand side needs to be an integer too.
The second observation comes from the following:
$$\frac{R*n}{T} - \frac{x*n}{T} \leqslant \lceil{\frac{R*n}{T} - \frac{x*n}{T}}\rceil$$
So
$$ n * \lceil{\frac{R}{T}}\rceil - \lceil{\frac{R*n}{T} - \frac{x*n}{T}}\rceil = \frac{x * n}{C} \leqslant n * \lceil{\frac{R}{T}}\rceil - (\frac{R*n}{T} - \frac{x*n}{T} ) \implies$$
$$ x*n(\frac{1}{C} - \frac{1}{T}) \leqslant n *(\lceil{\frac{R}{T}}\rceil - \frac{R}{T})$$
But I haven't been able to find any explicit constraints on n. What is the minimum bound on n?
Also, I suspect but can't prove, that if there exists a value of n, then there is no upper bound on n.
For context, I have asked a similar (but not exactly the same) question here: Solve $\lceil x\rceil-\frac{\lceil nx\rceil}n\geqslant y$ for $n$, where $n \in\mathbb Z^+, x \in\mathbb R^+, y \in\mathbb R^+$
Also, I need to solve this equation in a computer program. So an iterative solution also works.
| Trivial but perhaps not efficient: Iterate over $n=1,2,\dots$ until you get a solution. For each fixed $n$, substitute it so that the only variable is $x$. Note that as $x$ increases, the ceiling term $\lceil{\frac{Rn}{T} - \frac{xn}{T}}\rceil$ only jumps down (by integers) at the points where $\frac{(R-x)n}{T}$ crosses an integer; between consecutive jump points the left side is constant while the right side $\frac{xn}{C}$ increases linearly. So on each such interval the equation reduces to a linear equation in $x$ that can be solved directly and checked for consistency.
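Since the OP mentions wanting to solve this in a program: for each fixed $n$, each candidate integer value $m$ of $\lceil\frac{Rn}{T}-\frac{xn}{T}\rceil$ forces $x = \frac{C\,(n\lceil R/T\rceil - m)}{n}$, which can then be checked exactly. A Python sketch with exact rational arithmetic (the function names, the $n$ cap, and the search strategy are my own):

```python
from fractions import Fraction

def ceil_frac(q):
    """Exact ceiling of a Fraction."""
    return -((-q.numerator) // q.denominator)

def solve(R, T, C, n_max=1000):
    """Smallest n admitting some x > 0 with
    n*ceil(R/T) - ceil(R*n/T - x*n/T) == x*n/C, or None.
    Assumes rational inputs so all arithmetic is exact."""
    R, T, C = Fraction(R), Fraction(T), Fraction(C)
    # the upper bound on x derived in the question
    x_bound = (ceil_frac(R / T) - R / T) / (1 / C - 1 / T)
    for n in range(1, n_max + 1):
        K = n * ceil_frac(R / T)
        m_lo = ceil_frac((R - x_bound) * n / T)
        m_hi = ceil_frac(R * n / T)
        for m in range(m_lo, m_hi + 1):
            x = C * (K - m) / n          # forced by x*n/C = K - m
            if 0 < x <= x_bound and ceil_frac((R - x) * n / T) == m:
                return n, x
    return None
```

For example, with $R=5$, $T=3$, $C=2$ this finds $n=1$, $x=2$ (then $1\cdot\lceil 5/3\rceil-\lceil 5/3-2/3\rceil = 2-1 = 1 = 2\cdot 1/2$).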
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving area of a triangle where medians are perpendicular. Medians $\overline{AD}$ and $\overline{BE}$ of a $\triangle ABC$ are perpendicular. If $AD= 15$ and $BE = 20$, then what is the area of $\triangle ABC$?
Note: A lot of my work can have inaccuracies and is based off a diagram. It is very helpful if you draw a diagram first
Let's call the centroid of $\triangle ABC$, $G$. Since we know that $\frac{AG}{GD}=\frac{2}{3}$, we have that $AG=10$. We also have that $GD=\frac{20}{3}$. This gives us $\frac{10\sqrt{13}}{3}$ as the length of $AE$. We can draw a perpendicular from $D$ to a point on $AC$ (we'll call this point $H$) such that $\angle{ADH}=90^\circ$. We can use similar triangles to determine that the side length of $AH=5\sqrt{13}$. From there, we can once again use similar triangles to find that $HC=\frac{20}{27}$. From here, I don't know what to do.
| Let $AD$ and $BE$ meet at $G$, the centroid. Remember that $AG:GD = 2:1$, so $AG = \frac{2}{3}\cdot 15 = 10$.
Since $AD\perp BE$ and $G$ lies on $BE$, the distance from $A$ to line $BE$ is $AG$. The area of triangle $ABE$, which is half the area of the whole triangle $ABC$ (because $E$ is the midpoint of $AC$), is $${EB \cdot AG \over 2} = {20 \cdot 10\over 2} =100$$ so the whole triangle has area $200$.
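A coordinate sanity check (my own setup: centroid at the origin, medians along the axes, exact arithmetic via Fractions):

```python
from fractions import Fraction as F

# Centroid G at the origin, AD along the y-axis, BE along the x-axis.
A, D = (F(0), F(10)), (F(0), F(-5))            # AG = 10, GD = 5
B, E = (F(-40, 3), F(0)), (F(20, 3), F(0))     # BG = 40/3, GE = 20/3
C = (2 * D[0] - B[0], 2 * D[1] - B[1])         # D is the midpoint of BC

# E must come out as the midpoint of AC:
assert ((A[0] + C[0]) / 2, (A[1] + C[1]) / 2) == E

def shoelace(P, Q, R):
    return abs(P[0] * (Q[1] - R[1]) + Q[0] * (R[1] - P[1]) + R[0] * (P[1] - Q[1])) / 2

print(shoelace(A, B, C))   # 200
```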
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Closed and bounded but not compact subset of $\ell^1$ I need to prove that
$$A=\{a\in \ell^1:\sum_{i=1}^{\infty}|a_n| \le 1\}$$
is closed, bounded and not a compact subset of $\ell^1$. Boundedness is trivial, but I get stuck on the other two. Proving subsets of $\ell^1$ are not closed seems easy, because one sequence whose limit is not in the subset does it, but I'm stuck on proving that every convergent sequence in $A$ has its limit in $A$. Thanks in advance!
| PART 1. In any normed linear space, if $\lim_{n\to \infty}\|v_n-v\|=0 $ then $\lim_{n\to \infty}\|v_n\|=\|v\|.$ PROOF:
(i). $\|v_n\|=\|(v_n-v)+v\|\leq \|v_n-v\|+\|v\|. \;$.... So $\|v_n\|-\|v\|\leq \|v_n-v\|.$
(ii). $\|v\|=\|(v-v_n)+v_n\|\leq \|v-v_n\|+\|v_n\|.\; $.... So $\|v\|-\|v_n\|\leq \|v-v_n\|.$
(iii). From (i) and (ii) the absolute value of $\|v_n\|-\|v\|$ cannot exceed $\|v_n-v\|.$
So in your Q, if $(a_n)_{n\in \Bbb N}$ is a sequence of members of $A$ converging to $a\in l^1$ then $\|a\|=\lim_{n\to \infty}\|a_n\|\leq \sup_{n\in \Bbb N}\|a_n\|\leq 1, $ so $a\in A.$ Therefore $A$ is closed.
It $is$ true that $A$ with the metric $d(u,v)=\|u-v\|$ is a $complete$ metric space, but that is another matter.
PART 2. The Kronecker delta. Definition: $\delta (a,b)=1 $ if $a=b. $ And $\delta (a,b)=0 $ if $a \ne b.$
For $n\in \Bbb N$ let $v_n=(\delta (j,n))_{j\in \Bbb N}.$ Let $V=\{v_n:n\in \Bbb N\}.$ Then $V$ is an infinite subset of $A.$
Let $C=\{B(v, 1):v\in A\}$ where $B(v,1)=\{u\in l^1: \|v-u\|<1\}. $ Then $C$ is an open cover of $A.$ We have $\|v_n-v_m\|=2$ when $n\ne m,$ so the triangle inequality implies that any member of $C$ contains at most $1$ member of $V.$
So if $D$ is a finite subset of $C$ then $\cup D$ contains only finitely many members of $V, $ so $D$ is not a cover of $A.$ Therefore $A$ is not compact.
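The key computation — the $v_n$ are pairwise at $\ell^1$-distance $2$ — can be checked on finite truncations (a sketch):

```python
# Finite truncations of the Kronecker-delta sequences v_n:
def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

N = 8
V = [[1 if j == n else 0 for j in range(N)] for n in range(N)]
assert all(l1(V[n], V[m]) == 2 for n in range(N) for m in range(N) if n != m)
```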
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2676757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
The coprimality of 2 integers that are divisors of 2 larger coprimes? Given an example with $(a,b)=1$ where $a=ux$ and $b=vy$ (with all variables being integers), obviously $(ux,vy) = 1$ directly, but does $(u,v) = 1$ as well? I am pretty sure it should but I am unsure if this is actually a true statement and how a rigorous proof of this would look.
| If $u$ and $v$ share a prime factor then so do $ux$ and $vy$. So if the latter two are coprime, so are the former.
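A brute-force sanity check of the contrapositive over a small range (my own sketch):

```python
from math import gcd

# Whenever gcd(u*x, v*y) == 1, gcd(u, v) == 1 must also hold.
for u in range(1, 20):
    for v in range(1, 20):
        for x in (1, 2, 3):
            for y in (1, 2, 3):
                if gcd(u * x, v * y) == 1:
                    assert gcd(u, v) == 1
```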
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Adding Absolute value to a complex number: $ z+| z|=2+8i$ I would like to know my error in this problem.
Find the complex number such that:
$$ z+|z|=2+8i$$
So far, I have:
$$
\begin{split}
a+bi+\sqrt{a^2+b^2} &= 2 + 8i\\
a^2-b^2+a^2+b^2&=4-64\\
2a^2 -b^2 + b^2&=-60\\
a^2&=-30
\end{split}
$$
But I should end up with
$$a^2=-15$$
No matter how hard I try, I can't seem to find what I did wrong. Any suggestions?
| $a+bi+\sqrt{a^2+b^2} = 2 + 8i$ so
$a + \sqrt{a^2 + b^2} = 2$ and $b = 8$.
So $a + \sqrt{a^2 + 64} = 2$
So $\sqrt{a^2 + 64} = 2- a$
$a^2 + 64 = 4 -4a + a^2$
$4a = -60$
$a = -15$.
$z = -15 + 8i$.
....
To do what you were attempting
You have to realize that $Re(z) = a + \sqrt{a^2 + b^2}$ and $Im(z) = b$. I think somehow you were thinking there were three parts $Re(z)=a$ and $Im(z) = b$ and some $Weird(z) = \sqrt{a^2 + b^2}$ and that $z\overline z = Re^2(z) - Im^2(z) + Weird^2(z)$. That simply isn't true....
$(a+bi+\sqrt{a^2+b^2})(a - bi +\sqrt{a^2 + b^2}) = (2 + 8i)(2-8i)$
$(a + \sqrt{a^2 + b^2})^2 + b^2 = 4 + 64$
$2a^2 + b^2 + 2a\sqrt{a^2 + b^2} + b^2 = 68$
$a^2 + a \sqrt{a^2 + b^2} = 34 - b^2 = -30$
which is a pain to solve but can be done.
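Either way, the solution $z=-15+8i$ is easy to verify numerically ($|z|=\sqrt{225+64}=\sqrt{289}=17$ is exact even in floating point):

```python
z = complex(-15, 8)
assert z + abs(z) == complex(2, 8)   # |z| = 17, and -15 + 17 = 2
```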
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
} |
Permutation function on $\mathbb{Z}_p$ I'm curious about whether the function $\varphi:\mathbb{Z}_p\to\mathbb{Z}_p$ such that $\varphi(x)=x^3$ defines a permutation iff $p=5\pmod{6}$ (for $p\geq5$ of course). I have some evidence about it but it seems harder than I expected, I hope I'm not missing something important. Thanks.
| Since $x^3=0$ if and only if $x=0$, the question boils down to determining for which $p$ the map $x\mapsto x^3$ is an automorphism of $(\mathbb{Z}/p\mathbb{Z})^{\times}$.
And because $(\mathbb{Z}/p\mathbb{Z})^{\times}$ is a cyclic group of order $p-1$, $x\mapsto x^3$ is an automorphism if and only if $p-1$ and $3$ are coprime. Finally, if $p\geq 5$ is prime then either $p\equiv 1$ (mod $6$) or $p\equiv 5$ (mod $6$), and so $x\mapsto x^3$ is an automorphism precisely when $p\equiv 5$ (mod $6$).
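A brute-force check of this criterion for small primes (a sketch; the helper name is my own):

```python
def is_cube_permutation(p):
    # x -> x^3 permutes Z/pZ iff it is injective, i.e. hits all p residues
    return len({pow(x, 3, p) for x in range(p)}) == p

for p in (5, 7, 11, 13, 17, 19, 23, 29, 31):
    assert is_cube_permutation(p) == (p % 6 == 5)
```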
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving the differential equation $(xy^{4}+y)dx - xdy = 0$ The differential equation is :
$(xy^{4}+y)dx - xdy = 0$
I am trying to simplify the equation to the form $\dfrac{fdg-gdf}{f^{2}}$ so that I can reduce it to $d(\dfrac{g}{f})$ but I am unable to do it.
Any ideas are appreciated.
| If you instead let $y(x)=x u(x)$ your differential equation takes the form
$$
\bigl(x(xu)^4+xu\bigr)\,dx-x(x\,du+u\,dx)=0
$$
that is
$$
x^5u^4\,dx=x^2\,du.
$$
I'm sure you can solve this differential equation.
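Carrying the separation through (my own continuation, so worth double-checking): $x^3\,dx = u^{-4}\,du$ integrates to $\frac{x^4}{4} = -\frac{1}{3u^3}+C$, and substituting back $u=y/x$ gives the implicit solution $\frac{x^4}{4}+\frac{x^3}{3y^3}=C$. A numeric check that this quantity is conserved along solutions of the original equation:

```python
def yprime(x, y):                     # from (x*y**4 + y) dx - x dy = 0
    return (x * y**4 + y) / x

def F(x, y):                          # candidate conserved quantity
    return x**4 / 4 + x**3 / (3 * y**3)

x, y, h = 1.0, 1.0, 1e-4
F0 = F(x, y)
for _ in range(1000):                 # integrate out to x = 1.1 with RK4
    k1 = yprime(x, y)
    k2 = yprime(x + h / 2, y + h / 2 * k1)
    k3 = yprime(x + h / 2, y + h / 2 * k2)
    k4 = yprime(x + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
assert abs(F(x, y) - F0) < 1e-8       # F stays constant along the solution
```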
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Reversed binary representation of primes I discovered a very interesting fact: if a prime is written in binary, the binary digits are reversed, and the result is converted back to base-10, then whenever the resulting number is not divisible by 5 or 7, it is a prime, a square number, or a product of distinct primes.
For example:
$(29)_{10} = (11101)_{2}$
Now by rewriting $(11101)_{2}$ in reverse order we get,
$(10111)_{2} = (23)_{10}$
If we perform the same operations on other numbers, we get:
$(3)_{10} \rightarrow (3)_{10}$
$(11)_{10} \rightarrow (13)_{10}$
$(17)_{10} \rightarrow (17)_{10}$
$(71)_{10} \rightarrow (113)_{10}$ ($113$ is a prime)
$(149)_{10} \rightarrow (169)_{10}$ ($169 = 13\times 13$)
$(151)_{10} \rightarrow (233)_{10}$ ($233$ is a prime)
So I was wondering if there are any proofs for this property of primes.
Can you tell me whether it is true for all primes?
If you could provide any proofs or counter-examples it would be of great help.
| More counterexamples: $139$ is 10001011, reversed it's 11010001 which is $209 = 11 \times 19$, not divisible by $5$ or by $7$, not a square. $191$ is 10111111, reversed that's 11111101 which is $253 = 2^8 - 3 = 11 \times 23$.
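These checks are easy to automate (a Python sketch):

```python
def rev_binary(n):
    # reverse the binary digits of n and convert back to base 10
    return int(bin(n)[2:][::-1], 2)

assert rev_binary(29) == 23 and rev_binary(149) == 169
# counterexamples: not divisible by 5 or 7, yet neither prime, square,
# nor a product of distinct primes fails -- they are 11*19 and 11*23,
# which contradicts the original "prime" examples pattern
assert rev_binary(139) == 209 and 209 == 11 * 19 and 209 % 5 and 209 % 7
assert rev_binary(191) == 253 and 253 == 11 * 23
```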
You've been fooled by what humans call Richard K. Guy's law of small numbers. Mwahahahaha!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Determine if improper integral is convergent or divergent Determine if $$\int_1 ^\infty \frac {dx}{x^2+x} $$ is divergent or convergent. If convergent: determine its value.
Tip: When $ x\ge1 $, we have $ \frac 1 {x^2} \ge \frac 1 {x^2+x} = \frac 1 {x} - \frac {1} {x+1} $
Don't really know where to start here. Finding convergence/divergence really difficult so any tips on how to tackle questions like this one is appreciated.
| HINT
Note that for $x\to \infty$
$$\frac {1}{x^2+x}\sim \frac{1}{x^2}$$
then use limit comparison test with $\int_1 ^\infty \frac {1}{x^2}dx$.
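Beyond convergence, the tip's partial fractions also give the exact value, since $\int_1^N\left(\frac1x-\frac1{x+1}\right)dx=\ln\frac{2N}{N+1}\to\ln 2$. A crude numeric check (midpoint rule; the cutoff $N$ and step count are arbitrary choices of mine):

```python
import math

N, steps = 10_000, 500_000
h = (N - 1) / steps
total = 0.0
for i in range(steps):
    x = 1 + (i + 0.5) * h
    total += h / (x * x + x)

# 1/(x^2+x) = 1/x - 1/(x+1) telescopes to ln 2 on [1, oo);
# the tail beyond N contributes less than 1/N.
assert abs(total - math.log(2)) < 1e-3
```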
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Product of two Abelian group My Question :
$G=AB$, where $A,\, B$ are abelian groups. Are $A$ and $B$ normal?
The problem is very basic, and quite possibly a duplicate of an existing question in this community.
But my question actually goes a different way.
I posted this today. When I checked it, I found in the final part that $[a,b]^{a'} = [a, b^{a'}] \in [A,B]$ may imply $ b^{a'}\in B,\,\forall a'\in A,\, b\in B $. So $B$ is invariant under conjugation by elements of $A$, and $B$ is hence normal, since $G=AB$. The same is true for $[a,b]^{b'} $.
I know my statement is wrong, but what have I misunderstood? Any help is sincerely appreciated!
| Not necessarily. For example, the Klein bottle group (that is, the fundamental group of a Klein bottle) is given by $G=\langle a, b; a^{-1}ba=b^{-1}\rangle$. Here, $A=\langle a\rangle$ and $B=\langle b\rangle$. Clearly, $G=AB$. However, $A$ is not normal in $G$.
This group can be viewed as $\mathbb{Z}\rtimes\mathbb{Z}$, where the action of one infinite cyclic group on the other is to negate it: $b\mapsto b^{-1}$. If the action was trivial then we would have a direct rather than semidirect product, so $G=\mathbb{Z}\times\mathbb{Z}$, which is clearly abelian so all subgroups are normal.
The above is rather complicated. For a simpler example, consider the smallest non-abelian group. This is also the smallest counter-example to your question.
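For the smallest non-abelian group, $S_3$, both the factorization $G=AB$ and the failure of normality can be verified by brute force (a sketch; the permutation encoding and composition convention are my own):

```python
from itertools import permutations

def compose(p, q):                 # apply q first, then p
    return tuple(p[i] for i in q)

e, r, s = (0, 1, 2), (1, 2, 0), (1, 0, 2)
r_inv = (2, 0, 1)                  # inverse of the 3-cycle r
A = {e, r, compose(r, r)}          # <r>: abelian (cyclic of order 3)
B = {e, s}                         # <s>: abelian (cyclic of order 2)
G = set(permutations(range(3)))    # S3, the smallest non-abelian group

assert {compose(a, b) for a in A for b in B} == G     # G = AB
assert compose(compose(r, s), r_inv) not in B         # B is not normal
```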
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2677894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Devise a Newton iteration formula for computing $\sqrt[3]{R}$ where $R>0$. Perform a graphical analysis of your function $f(x)$ to determine the starting values for which the iteration will converge.
I know that $\sqrt[3]{R}$ is the root of $f(x)=x^3-R$, so the iteration formula we need is $x_{n+1}=x_n-\frac{x_n^3-R}{3x_n^2}$; I have plotted the graphs for $R=1,2,3$. But I'm not sure where to start the method so that convergence is assured. Should the starting value be in the $[0,1]$ interval, since according to the graph the roots pass through there? Thank you very much.
| You can prove by induction that the Newton iteration sequence is decreasing and always greater than $\sqrt[3]{R}$ if you start at $x_1 > \sqrt[3]{R}$:
*
*If $x_n > \sqrt[3]{R}$ then $x_{n+1}-x_n=-(\frac{x_n^3-R}{3x_n^2}) < 0$, hence $x_{n+1} < x_n$.
*As $f$ is convex for $x > \sqrt[3]{R}$, you get that $x_{n+1} > \sqrt[3]{R}$.
As a decreasing lower bounded sequence converges, if you start with $x_1 > \sqrt[3]{R}$, the Newton iteration sequence converges. As $f$ is continuous, it can in that case only converge to $l= \sqrt[3]{R}$.
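A minimal implementation following this analysis, starting above $\sqrt[3]{R}$ (the function name, tolerance, and iteration cap are my own choices):

```python
def cbrt_newton(R, x1, tol=1e-12, max_iter=100):
    x = x1
    for _ in range(max_iter):
        x_new = x - (x**3 - R) / (3 * x**2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

for R in (1, 2, 3, 100):
    x = cbrt_newton(R, x1=R + 1)       # R + 1 > cbrt(R), so we start above the root
    assert abs(x**3 - R) < 1e-9
```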
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Moving limit into sup norm I need help understanding where I'm going wrong with this line of thought:
Assume $f_n$ converges pointwise to $f$, so $\lim \limits_{n \rightarrow \infty}f_n(x) = f(x)$ for all $x \in X$. Then, since the supremum norm is a norm and therefore continuous, we can move a limit inside, like this:
$\lim \limits_{n \rightarrow \infty}\|f_n-f\|_\infty = \|\lim \limits_{n \rightarrow \infty}(f_n-f)\|_\infty = \|0\|_\infty = 0$
So all pointwise convergent sequences are also uniformly convergent.
This is clearly not the case so where am I going wrong?
| The norm is only necessarily continuous with respect to the topology that it induces. You need to be careful since you have two different topologies present here - the topology of pointwise convergence and the topology induced by $\| \cdot \|_\infty$ (which is the topology of uniform convergence).
As a result, in order to take a limit inside the norm, you need to have convergence in the topology for $\| \cdot \|_\infty$. That is, you need to have $f_n \to f$ uniformly and not just pointwise. Then the fact that you can take the limit inside the norm as in the question is just the trivial statement that uniform convergence implies uniform convergence.
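A concrete illustration of the gap between the two topologies is the standard example $f_n(x)=x^n$ on $[0,1)$ (my own addition):

```python
# f_n(x) = x**n converges pointwise to 0 on [0, 1), but along the moving
# points x_n = 1 - 1/n the values stay near 1/e, so sup|f_n - 0| does not
# go to 0 (in fact the sup over [0, 1) equals 1 for every n).
for n in (10, 100, 1000, 10000):
    assert (1 - 1 / n) ** n > 0.3
```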
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
RREF Practice Check I am going over the following exercise
Find a condition on $a,b,c$ so that $(a,b,c)$ in $\mathbb{R}^{3}$
belongs to the space spanned by $u = (2,1,0)$, $v=(1,-1,2)$, and $w =
(0,3,-4)$.
I write out the span of $u,v,w$ and set it equal to $a,b,c$. Then I reduce that linear system to
The last row tells me that one of my vectors was dependent on another. So I must have $\frac{1}{2}c = -\frac{2}{3}(b - \frac{1}{2}a)$. Then I solve for $c_2$ to get $c_2 = 2c_3 - \frac{2}{3}(b - \frac{1}{2}a)$. Now $c_1 = \frac{1}{2}a - c_3 + \frac{1}{3}(b - \frac{1}{2}a)$.
So, I think this is right, but how can I check?
| You row reduce the matrix
\begin{align}
\begin{bmatrix}
2 & 1 & 0 & a \\
1 & -1 & 3 & b \\
0 & 2 & -4 & c
\end{bmatrix}
&\to
\begin{bmatrix}
1 & 1/2 & 0 & a/2 \\
0 & -3/2 & 3 & b-a/2 \\
0 & 2 & -4 & c
\end{bmatrix}
\\&\to
\begin{bmatrix}
1 & 1/2 & 0 & a/2 \\
0 & 1 & -2 & -\frac{2}{3}(b-a/2) \\
0 & 2 & -4 & c
\end{bmatrix}
\\&\to
\begin{bmatrix}
1 & 1/2 & 0 & a/2 \\
0 & 1 & -2 & -\frac{2}{3}(b-a/2) \\
0 & 0 & 0 & c+\frac{4}{3}(b-a/2)
\end{bmatrix}
\end{align}
Thus the condition is
$$
c+\frac{4}{3}\left(b-\frac{a}{2}\right)=0
$$
that can be rewritten
$$
c=\frac{2}{3}a-\frac{4}{3}b
$$
No more steps are required.
Once you have a vector $(a,b,c)$ satisfying the condition, it's easy to find how you can express it in terms of the given vectors (actually of the first two, which form a basis of the span of $u,v,w$). Just compute the RREF:
$$
\begin{bmatrix}
1 & 1/2 & 0 & a/2 \\
0 & 1 & -2 & -\frac{2}{3}(b-a/2) \\
0 & 0 & 0 & 0
\end{bmatrix}
\to
\begin{bmatrix}
1 & 0 & 1 & a/2+\frac{1}{3}(b-a/2) \\
0 & 1 & -2 & -\frac{2}{3}(b-a/2) \\
0 & 0 & 0 & 0
\end{bmatrix}
$$
Thus you see that a vector satisfying the condition above can be written as
$$
\left(\frac{a}{3}+\frac{b}{3}\right)u+
\left(\frac{a}{3}-\frac{2b}{3}\right)v=\frac{a+b}{3}u+\frac{a-2b}{3}v
$$
You also see that $w=u-2v$
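These relations are easy to verify with exact arithmetic (a sketch):

```python
from fractions import Fraction as F

u, v, w = (2, 1, 0), (1, -1, 2), (0, 3, -4)
assert tuple(ui - 2 * vi for ui, vi in zip(u, v)) == w     # w = u - 2v

# Any (a, b, c) with c = 2a/3 - 4b/3 equals ((a+b)/3)u + ((a-2b)/3)v:
for a, b in [(F(7), F(-4)), (F(1), F(1)), (F(-3), F(5))]:
    c = F(2, 3) * a - F(4, 3) * b
    combo = tuple((a + b) / 3 * ui + (a - 2 * b) / 3 * vi for ui, vi in zip(u, v))
    assert combo == (a, b, c)
```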
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do you generate a random number in real life from $1$ to $k$. $k\leq 4$ I've been in some multiple choice exams (4 choices, no penalty for incorrect answers) where I have $2$ minutes on the clock, and $10$ questions to go. According to probability, if I randomly chose one of the $4$ answers in each question, on expectation, I should get somewhere in the $2-3$ extra marks with a fairly good probability (Unless Karma is against me).
Now my question is, during an exam, with no access to a computer or a programming library, how can I efficiently and quickly generate a random number from $1$ to $4$?
I guess the question can also generalize for generating a number from $1$ to $k\leq 4$ in the condition that I can get rid of some of the choices in a question.
| I would agree with @Remy. I believe that choosing the 3rd option as the correct one for all remaining questions is a better strategy. The reason I feel is that the answers seldom have a "random" pattern!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Is there a formal name for this type of knot? My (undergrad) research group and I were working with this one specific class of knots, and we don't know quite how to search up their qualities to find out if they have a name.
Basically, they are knots with only one twist in their center, with some odd number x of crossings in their center twist. The trefoil is one such knot, with x=3. The next colorable one is at x=9. I don't think we're talking about Twist Knots, of which the Trefoil is also one such knot, because to my untrained eye the two look different. See below for x=9 "8-Bigon Twisty Knot" as we preliminarily called it:
knot with one twist and 9 crossings
So, do you know if these kinds of knots have a name? Or should we just call them "knot with one twist and x crossings"...
Thanks for your help!
| This is the (9,2) torus knot, which you can draw this way:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does the following limit exist? $$
\lim_{a\to 0}\left\{a\int_{-R}^{R}\int_{-R}^{R}
\frac{\mathrm{d}x\,\mathrm{d}y}
{\,\sqrt{\,\left[\left(x + a\right)^{2} + y^{2}\right]
\left[\left(x - a\right)^{2} + y^{2}\right]\,}\,}\right\}
\quad\mbox{where}\ R\ \mbox{is a positive number.}
$$
The integral exists, since you can take small disks around singularities and make a change of variables. It seems that the limit should be $0$, as $\lim_{a \to 0}\left[a\log\left(a\right)\right] = 0$.
| We may assume $a>0$ without loss of generality.
We may decompose $D=[-R,R]^2$ as the union of $B^-,B^+$ and $C=D\setminus\left(B^+ \cup B^-\right)$, where
$$B^-=\left\{(x,y)\in D : \left\|(x,y)-(-a,0)\right\|\leq\frac{a}{2}\right\} $$
$$B^+=\left\{(x,y)\in D : \left\|(x,y)-(a,0)\right\|\leq\frac{a}{2}\right\}. $$
If some point belongs to $B^+$ its distance from $(-a,0)$ is at least $\frac{3}{2}a$, hence
$$ \iint_{B^+}\frac{dx\,dy}{\sqrt{((x-a)^2+y^2)((x+a)^2+y^2)}}\leq \frac{2}{3a}\iint_{B^+}\frac{dx\,dy}{\sqrt{(x-a)^2+y^2}}=\frac{2}{3a}\cdot 2\pi\cdot\frac{a}{2}=\frac{2\pi}{3}$$
where the last equality follows by switching to polar coordinates. By symmetry, the integral over $B^-$ is bounded by the same constant. Let $d^+=d^+(x,y)=\|(x,y)-(a,0)\|$, $d^-=d^-(x,y)=\|(x,y)-(-a,0)\|$ and
$ C^+ = C\cap\{x> 0\}$, $C^-=C\cap\{x<0\}$. We have:
$$ \iint_{C}\frac{dx\,dy}{d^+\cdot d^-}\leq\iint_{C}\frac{dx\,dy}{\min(d^+,d^-)^2}=2\iint_{C^+}\frac{dx\,dy}{d^+(x,y)^2}\leq 4\pi\int_{a/2}^{R\sqrt{2}}\frac{\rho}{\rho^2}\,d\rho= 4\pi\log\left(\frac{2R\sqrt{2}}{a}\right) $$
and since $\lim_{a\to 0^+}a\log(a)=0$, this proves that the wanted limit is zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to find no. of polynomials over a finite field satisfying given condition. If I am given a polynomial $$g(x) = \sum_{i=0}^4 a_ix^i$$ where $a_i \in \mathbb{F}_{2^k}$, the finite field with $2^k$ elements, how can I find the number of such polynomials $g(x)$ for which either $g(w) = 0$ or $g(w^{-1}) = 0$, where $w$ is a primitive $5^{th}$ root of unity?
| You are asking for the number of polynomials $g(x)$ of degree $\le4$ such that $g(x)$ is divisible by the minimal polynomial of either $w$ or $w^{-1}$ over $K=\Bbb{F}_{2^k}$. The answer depends on the degree of those minimal polynomials (they actually coincide for three quarters of choices of $k$), so we split the treatment accordingly.
A key parameter is the extension degree $m=[K(w):K]$. Because $5\mid (2^4-1)$, the smallest field containing $w$ is $\Bbb{F}_{2^4}$. Therefore the extension degree $m$ is the smallest positive integer such that $4\mid (mk)$. An alternative way of saying the same thing is that $m$ is the smallest positive integer such that $5\mid 2^{mk}-1$. Clearly, $m=4$ when $k$ is odd, $m=2$ when $k\equiv2\pmod4$ and $m=1$ when $4\mid k$.
Case 1: $m=1$, or $4\mid k$. In this case $5\mid 2^k-1$ so $w$ is actually an element of $K$ as is obviously also $w^{-1}=w^4$. The minimal polynomial of $w$ is $x-w$. Therefore we have $g(w)=0$ if and only if $g(x)=(x-w)h(x)$ for some polynomial $h(x)=h_0+h_1x+h_2x^2+h_3x^3\in K[x]$. All those four coefficients $h_i$ can be chosen freely, so there are $2^{4k}$ such polynomials. Similarly we see that $g(w^{-1})=0$ if and only if $g(x)=(x-w^{-1})h(x)$ for some at most cubic polynomial $h(x)$. Superficially this gives us another $2^{4k}$ polynomials $g(x)$. But, we are double counting those polynomials $g(x)$ that have both $w$ and $w^{-1}$ as zeros. This happens if and only if
$g(x)=(x-w)(x-w^{-1})f(x)$ for some polynomial $f(x)$ that can be at most quadratic. As $f(x)=f_0+f_1x+f_2x^2$ has three coefficients we can choose freely, there are $2^{3k}$ ways this can happen. The conclusion is that
when $4\mid k$ the number of polynomials $g(x)$ is $2^{4k}+2^{4k}-2^{3k}=2^{4k+1}-2^{3k}$.
Case 2: $m=2$, or $k\equiv2\pmod4$. Here $5\nmid 2^k-1$ but $5\mid 2^{2k}-1=(2^k-1)(2^k+1)$. Therefore $5\mid 2^k+1$ and $2^k\equiv-1\pmod5$. In this case the minimal polynomial $m(x)$ of $w$ over $K$ is a quadratic. By Galois theory
$$m(x)=(x-w)(x-w^{2^k})=(x-w)(x-w^{-1})=x^2+\omega x+1,$$
where $\omega=w+w^{-1}$ is an element of $K$. Actually, aided by a discrete logarithm table of $\Bbb{F}_{16}$ we can easily verify that $\omega$ is one of the primitive third roots of unity in $K$ (the choice of the third root of unity depends on the choice of which fifth root of unity we call $w$). Anyway, in this case $g(w)=0$ if and only if $g(w^{-1})=0$ if and only if $g(x)=m(x)f(x)$ for some polynomial $f(x)=f_0+f_1x+f_2x^2$. Again, we can choose the coefficients $f_0,f_1,f_2$ any which way we want and
when $k\equiv2\pmod4$ the number of such polynomials $g(x)$ is $2^{3k}$.
Case 3: $m=4$, $k$ odd. This time the minimal polynomial of $w$ has degree four. Because $w$ is trivially a zero of $m(x)=(x^5-1)/(x-1)=x^4+x^3+x^2+x+1$, this is then also the minimal polynomial. As in case 2, $m(x)$ is also the minimal polynomial of $w^{-1}$. In this case we must have
$g(x)=m(x)r(x)$, but by considering the degrees we see that $r(x)$ must be a constant polynomial. Therefore there are $2^k$ choices of $r(x)$.
When $k$ is odd the number of polynomials $g(x)$ with either $w$ or $w^{-1}$ as a zero is $2^k$.
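The smallest instance of Case 3 ($k=1$) can be brute-forced. A sketch modelling $\Bbb{F}_{16}$ as $\Bbb{F}_2[x]/(x^4+x+1)$, taking $w=x^3$ (an element of order $15/\gcd(3,15)=5$); the representation and helper names are my own:

```python
from itertools import product

def gf16_mul(a, b):                  # carry-less multiply, reduced mod x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13                # 0x13 encodes x^4 + x + 1
    return r

def gf_eval(coeffs, t):              # Horner evaluation, coeffs a0..a4 in {0, 1}
    acc = 0
    for c in reversed(coeffs):
        acc = gf16_mul(acc, t) ^ c
    return acc

w = 8                                # x^3 has order 5
w_inv = gf16_mul(gf16_mul(w, w), gf16_mul(w, w))   # w^4 = w^{-1}
assert gf16_mul(w, w_inv) == 1

count = sum(1 for g in product((0, 1), repeat=5)
            if gf_eval(g, w) == 0 or gf_eval(g, w_inv) == 0)
assert count == 2                    # matches 2^k for k = 1
```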
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Function which creates the sequence 1, 2, 3, 1, 2, 3, ... I was wondering how to map the set $\mathbb{Z}^+$ to the sequence $1, 2, 3, 1, 2, 3, \ldots$. I thought it would be easy, but I was only able to obtain an answer through trial and error.
For a function $f \colon \mathbb{Z}^+ \rightarrow \mathbb{Z}$, we have that
$f(x) = x \bmod 3$ gives the numbers $1, 2, 0, 1, 2, 0, \ldots$
$f(x) = (x \bmod 3) + 1$ gives the numbers $2, 3, 1, 2, 3, 1, \ldots$
After a bit of experimenting, I finally found that
$f(x) = ((x + 2) \bmod 3) + 1$ gives the numbers $1, 2, 3, 1, 2, 3, \ldots$
More generally, if I want to map the set $\mathbb{Z}^+$ to the sequence $\{1, 2, \ldots, n, 1, 2, \ldots, n, \ldots\}$, I need to use the function
$$f(x) = ((x + n - 1) \bmod n) + 1$$
I was only able to come to this result by trial and error. I was not able to find a solution to this relatively simple question online (although perhaps my search terms were off).
How would one come to this result in a more systematic way?
| I'm surprised no one has mentioned composition. You've identified that you need to shift the range
$$f_1(x) = x - 1$$
then take the value modulo $n$
$$f_2(x) = x \mod n$$
and finally shift the result again
$$f_3(x) = x + 1$$
The function you want is just the composition $f = f_3 \circ f_2\circ f_1$.
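Both the closed form and the three-step composition are one-liners to check (a sketch):

```python
def cycle(x, n):
    """Map 1, 2, 3, ... to 1, 2, ..., n, 1, 2, ..., n, ..."""
    return (x + n - 1) % n + 1

def cycle_composed(x, n):            # the composition f3 . f2 . f1
    f1 = lambda t: t - 1             # shift into 0-based indexing
    f2 = lambda t: t % n             # wrap around
    f3 = lambda t: t + 1             # shift back to 1-based
    return f3(f2(f1(x)))

assert [cycle(x, 3) for x in range(1, 7)] == [1, 2, 3, 1, 2, 3]
assert all(cycle(x, 4) == cycle_composed(x, 4) for x in range(1, 20))
```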
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 13,
"answer_id": 0
} |
To find the probability in the given problem
I solved this question like this, but I still have a doubt about the part where London is mentioned.
First, assume it is equally likely that the letter came from London or Washington.
Now, let $p(E)$ represent the probability that ON is the only legible word. Then we have
$p(E)= 0.5\cdot p(E_1)+0.5\cdot p(E_2)$
where $p(E_1)$ represents the probability that ON is the only legible word given the letter came from London, and $p(E_2)$ the same for Washington.
So how do I calculate $p(E_1)$ and $p(E_2)$?
Or is my approach wrong altogether?!
| This problem is massively underspecified - you need to make lots of assumptions in order to get an answer, and different assumptions produce different answers.
The first question is the prior probability. In the absence of any other information, $P(L)=P(W)=1/2$ might be a reasonable prior (although the population of London is significantly higher than that of Washington, so maybe not).
Next, how on earth are we supposed to model which letters are visible (even assuming that in each case the postmark will consist precisely of the city name)? If each letter is independently visible with some probability $p$, well, it will depend on $p$ what answer we get, and for say $p=1/2$ a lot of the reason Washington is unlikely is simply that it has more letters, so it's unlikely that so few would still be visible. Or we could assume there are always two consecutive letters visible, with each possibility being equally likely. Or we could assume that a random contiguous section of the word is visible.
Once we have guessed at what probabilities to assign to ON being all that is visible in each case, then the solution is
$$P(L\mid E)=\frac{P(E\mid L)P(L)}{P(E\mid L)P(L)+P(E\mid W)P(W)},$$
where $L$ is the event that it came from London and $E$ that only ON is visible. In the case that two consecutive letters, equally likely to be any of the consecutive pairs, are always visible, for example, we would have $P(E\mid L)=2/5$, since of the $5$ consecutive pairs, $2$ are ON, and $P(E\mid W)=1/9$. (This gives an answer of $18/23$.)
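The consecutive-pairs model at the end is easy to compute exactly (a sketch with exact rationals):

```python
from fractions import Fraction as F

def pairs(word):
    return [word[i:i + 2] for i in range(len(word) - 1)]

p_E_L = F(pairs("LONDON").count("ON"), len(pairs("LONDON")))          # 2/5
p_E_W = F(pairs("WASHINGTON").count("ON"), len(pairs("WASHINGTON")))  # 1/9
prior = F(1, 2)

posterior = p_E_L * prior / (p_E_L * prior + p_E_W * prior)
assert posterior == F(18, 23)
```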
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2678977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Laplace transform of $\ddot{x} +4x = f(t)$ I am stuck in an excercise that on first sight didn't look that strange:
Given the initial problem:
$$\ddot{x} +4x = f(t), x(t=0)=3, \dot{x}(t=0)=-1$$
So I started:
$$s^2X(s) -sx(0)-\dot{x}(0) +4X(s) = \mathcal{L}(f(t))$$
Now substitute the given values:
$$s^2X(s) -3s-(-1) +4X(s) = \mathcal{L}(f(t))$$
Rearranging:
$$X(s)(s^2+4) -3s+1 = \mathcal{L}(f(t))$$
The given answer states that the solution is of the form
$$x(t)= A \cos2t+B \sin2t +\frac{1}{2}\int_{0}^{t}f(\tau)\sin2(t-\tau)d\tau $$
Is there anybody that can help me to get the given form?
| Hint
You were almost done: move the terms without $X(s)$ to the right side, then use the convolution formula.
$$X(s)(s^2+4) -3s+1 = \mathcal{L}(f(t))$$
$$X(s)(s^2+4) =3s-1+ \mathcal{L}(f(t))$$
For convenience I substitute $h(s)=\mathcal{L}(f(t))$
$$X(s) =\frac {3s-1}{s^2+4}+ h(s) \cdot \frac 1 {s^2+4}$$
$$X(s) =3\frac {s}{s^2+4}-\frac 12\frac {2}{s^2+4}+ \frac 12 h(s) \cdot \frac 2 {s^2+4}$$
Because $\mathcal{L^{-1}}(h(s))=\mathcal{L^{-1}}\mathcal{L}(f(t))=f(t)$, using the convolution formula we get that:
$$\boxed {x(t) =3\cos(2t)-\frac 12\sin(2t)+ \frac 12\int_0^t f(\tau) \, \sin(2(t-\tau))d\tau}$$
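As a sanity check of the boxed formula (my own test, taking $f(t)=1$ so that $\frac12\int_0^t\sin 2(t-\tau)\,d\tau=\frac{1-\cos 2t}{4}$), integrating the IVP numerically with RK4 reproduces it:

```python
import math

def f(t):                            # sample forcing term for the check
    return 1.0

def closed_form(t):                  # boxed formula with the convolution done for f = 1
    return 3 * math.cos(2 * t) - 0.5 * math.sin(2 * t) + (1 - math.cos(2 * t)) / 4

def deriv(t, x, v):                  # x'' + 4x = f(t) as a first-order system
    return v, f(t) - 4 * x

x, v, t, h = 3.0, -1.0, 0.0, 1e-4    # x(0) = 3, x'(0) = -1
for _ in range(10000):               # integrate out to t = 1 with RK4
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    t += h
assert abs(x - closed_form(t)) < 1e-8
```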
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Coefficients of characteristic polynomial of a $3\times 3$ matrix Let $A$ be a $3\times 3$ matrix over reals. Then its characteristic polynomial $\det(xI-A)$ is of the form $x^3+a_2x^2+a_1x+a_0$. It is well known that
$$-a_2=\mbox{trace}(A) \mbox{ and } -a_0=\det(A).$$
Note that these constants are expressed as functions of $A$ without referring to eigenvalues of $A$.
Q. What is the interpretation of $a_1$ in terms of $A$ without considering its eigenvalues?
This could be trivial, but I have never seen it.
| For invertible $A$, we have $\operatorname{Adj}(A)=\det(A)\cdot A^{-1}$, so the trace of $\operatorname{Adj}(A)$ is $\det(A)\cdot (\frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3})=x_1\cdot x_2+x_1\cdot x_3+x_2\cdot x_3=a_1$, where the $x_i$ are the eigenvalues. Since both sides are polynomial functions of the entries of $A$ and the invertible matrices are dense, the identity holds for all matrices.
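A quick numerical illustration of the identity $a_1=\operatorname{tr}(\operatorname{Adj}(A))$ (my addition, not from the answer), using `numpy.poly`, which returns the characteristic polynomial coefficients of a square matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))  # a random (almost surely invertible) 3x3 matrix

# np.poly(A) returns [1, a2, a1, a0] for det(xI - A) = x^3 + a2 x^2 + a1 x + a0
_, a2, a1, a0 = np.poly(A)
adjA = np.linalg.det(A) * np.linalg.inv(A)  # Adj(A) = det(A)·A^{-1}

assert np.isclose(a1, np.trace(adjA))       # a1 = tr(Adj(A))
assert np.isclose(-a2, np.trace(A))         # -a2 = tr(A)
assert np.isclose(-a0, np.linalg.det(A))    # -a0 = det(A)
```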
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Set theory bracket notation, what is excluded $X=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\}\}$ and $Y=X\setminus\{\{\emptyset\}\}$ If $X=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\}\}$ and $Y=X\setminus\{\{\emptyset\}\}$
then what element is excluded from $X$? Is it $\{\{\emptyset\}\}$, or $\{\emptyset\}$?
In a similar vein, if $Z=\{a, b, c\}$, does it make sense to say $Z\setminus a$?
Thanks
| $$X=\{\emptyset,\{\emptyset\},\{\{\emptyset\}\}\}$$
and $$ Y=X\setminus\{\{\emptyset\}\}= \{\emptyset,\{\{\emptyset\}\}\} $$ because $\{\emptyset\}$ is removed from your $X$.
For your next question regarding $$ Z=\{a, b, c\}$$ $Z\setminus a$ does not make sense unless $a$ is a set.
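Python's `frozenset` (which, unlike `set`, can be nested) makes the distinction concrete; this little check is mine, not part of the answer:

```python
e = frozenset()                                        # plays the role of ∅
X = {e, frozenset({e}), frozenset({frozenset({e})})}   # {∅, {∅}, {{∅}}}

# X \ {{∅}}: the subtrahend is the set whose single element is {∅},
# so it is the element {∅} that gets removed from X
Y = X - {frozenset({e})}
assert Y == {e, frozenset({frozenset({e})})}           # Y = {∅, {{∅}}}
```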
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proof of the derivative of the compostion of functions. This question comes from the proof of the derivative of a composite function which I lifted straight from Proof Wiki
https://proofwiki.org/wiki/Derivative_of_Composite_Function:
Let $f, g, h$ be continuous real functions such that: $\forall x \in \mathbb R: h \left({x}\right) = f \circ g \left({x}\right) = f \left({g \left({x}\right)}\right)$
Then: $h' \left({x}\right) = f' \left({g \left({x}\right)}\right) g' \left({x}\right)$
where $h'$ denotes the derivative of $h$.
Proof:
Let $g \left({x}\right) = y$, and let
$g(x+δx) = y+δy$
Thus:
$δy→0\ \ \ \ \ $ as $\ \ \ \ \ δx→0$
and:
$\frac{δy}{δx}→g′(x) \ \ \ \ \ \ \ \ \ (1)$
Case 1:
Suppose $g′(x)≠0$ and that $δx$ is small but non-zero.
Then $δy≠0$ from (1) above, and:
$\lim_{δx→0} \frac{h(x+δx)−h(x)}{δx} = \lim_{δx→0}\frac{f(g(x+δx))−f(g(x))}{g(x+δx)−g(x)}\cdot\frac{g(x+δx)−g(x)}{δx}$
$= \lim_{δx→0}\frac{f(y+δy)−f(y)}{δy}\cdot\frac{δy}{δx}$
$= f′(y)\,g′(x)$
My question: Why does the assumption that $g'(x) \neq 0$ imply that $\delta y \neq 0$, and why does this in turn imply that the expression $\ \lim_{δx→0} \frac{h(x+δx)−h(x)}{δx}\ $
is equal to $\ \lim_{δx→0}\frac{f(g(x+δx))−f(g(x))}{g(x+δx)−g(x)}\cdot\frac{g(x+δx)−g(x)}{δx}$?
| If $g'(x) \neq 0$:
Since $\frac{δy}{δx}\to g'(x)\neq 0$ as $δx\to 0$, for all sufficiently small non-zero $δx$ the ratio $\frac{δy}{δx}$ is itself non-zero, and hence $δy \neq 0$.
Since $δy \neq 0$, by our definition of $δy$ this is equivalent to saying $g(x+δx)−g(x)\neq 0$. It is therefore valid to multiply the top and bottom of the fraction by $g(x+δx)−g(x)$, which yields:
the expression $\ \lim_{δx→0} \frac{h(x+δx)−h(x)}{δx}\ $
is equal to $\ \lim_{δx→0}\frac{f(g(x+δx))−f(g(x))}{g(x+δx)−g(x)}\cdot\frac{g(x+δx)−g(x)}{δx}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving if $X_n$ are uniformly integrable and $X_n \Rightarrow X$, then $EX_n \to EX$ Below is the proof from Billingsley's Convergence of Probability Measures. However, in the proof, I don't understand the final step.
That is, we want to show that$$
\int_0^\alpha P[t<X_n<\alpha] \,\mathrm{d}t \to \int_0^\alpha P[t<X<\alpha] \,\mathrm{d}t
$$
using the bounded convergence theorem. However, to use the bounded convergence theorem, we need that $P[t<X_n<\alpha]$ converges pointwise to $P[t<X<\alpha]$. But the function used here, $\mathbb{1}_{(t,\alpha)}(x)$, is not a continuous function. So how do we show pointwise convergence here? And what role does the assumption $P[X=\alpha]=0$ play here?
| $\def\dto{\xrightarrow{\mathrm{d}}}\def\d{\mathrm{d}}$Suppose $D = \{x > 0 \mid F_X \text{ is not continuous at } x\}$, then $D$ is a countable set, and for any fixed $α > 0$ and $t \in [0, α]$,$$
P(X = t) > 0 \Longrightarrow t \in [0, α] \cap D.
$$
If $P(X = α) = 0$, then for any $t \in [0, α] \setminus D$, there is $P(X \in \partial([t, α])) = 0$. Thus $X_n \dto X$ implies$$
\lim_{n \to \infty} P(t < X_n <α) = P(t < X < α), \quad \forall t \in [0, α] \setminus D
$$
so for $t \in [0, α]$,$$
\lim_{n \to \infty} P(t < X_n <α) = P(t < X < α). \quad \mathrm{a.e.}
$$
Since for any $n \geqslant 1$,$$
0 \leqslant P(t < X_n <α) \leqslant 1, \quad \forall t \in [0, α]
$$
then the bounded convergence theorem implies that$$
\lim_{n \to \infty} \int_0^α P(t < X_n < α) \,\d t = \int_0^α P(t < X < α) \,\d t.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Numerical evaluation of the period of a limit cycle How can I calculate all the periods of the limit cycle of the Ueda-Duffing equation with forcing:
$\ddot{x} + k \dot{x} + x^3 = B \cos(t) $
for each set of parameters $(k, B)$ ?
Edit:
The equation exhibits sub-harmonic resonance for some sets of parameters (and chaotic behaviour too).
E.g. for $k=0.08, B=0.2$ there are 5 coexisting attractors of period $2n\pi $ with $n=1, 2, 3$
| I have written an exploratory Python script that uses a boundary value solver at its core to find the closed loops, sweeping systematically over the relevant part of the phase space:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint, solve_bvp

k = 0.08; B = 0.2
def odesys(u, t): return [u[1], B*np.cos(t) - k*u[1] - u[0]**3]
def bc(ya, yb): return yb - ya
def norm(a): return max(abs(a))

rows, cols = 2, 3
points = [np.array([10.0, 10.0])]  # seed with a non-relevant point
for p in range(rows*cols):  # loop over periods T=(p+1)*2*pi
    plt.subplot(rows, cols, p+1)
    n = p+1
    t_init = np.linspace(0, 2*n*np.pi, n*10+1)
    t_sol = np.linspace(0, 2*n*np.pi, n*500+1)
    for x in np.linspace(-0.5, 0.5, 5+1):
        for v in np.linspace(-0.5, 0.5, 4+1):
            u0 = np.array([x, v])
            u_init = odeint(odesys, u0, t_init)  # initial guess from a forward integration
            res = solve_bvp(lambda t, u: odesys(u, t), bc, t_init, u_init.T,
                            tol=5e-3, max_nodes=n*500+1)
            print(res.message, "\n", n, ": ", res.sol(0))
            # keep only orbits not seen before, then refine them
            if res.success and min(norm(pp - res.sol(0)) for pp in points) > 5e-3:
                res = solve_bvp(lambda t, u: odesys(u, t), bc, res.x, res.y,
                                tol=1e-5, max_nodes=n*10000+1)
                print(res.message, "\n refine to", res.sol(0))
                if res.success and min(norm(pp - res.sol(0)) for pp in points) > 1e-4:
                    for j in range(n): points.append(res.sol(2*np.pi*j))
                    u_sol = res.sol(t_sol)
                    plt.plot(u_sol[0], u_sol[1],
                             label="$u_0=$(%.6f,%.6f)" % (u_sol[0, 0], u_sol[1, 0]))
    plt.grid(); plt.legend(); plt.title(r"T=%d$\cdot2\pi$" % n)
plt.show()
For the parameters posted in the question, $k=0.08$, $B=0.2$, screening up to period 12 (times $2\pi$) gave only periodic orbits for up to period 3.
Increasing the forcing and reducing the friction coefficient to $k=0.02$, $B=1.5$, periodic orbits were found for periods $1,3,6$, with, as expected, more chaotic behavior.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Volume of region inside sphere and cone
Let $R$ consist of the points lying inside of the sphere $$ x^2 + y^2 + z^2 = 3^2 $$ and inside the cone $$ z = \cot(\alpha)\sqrt{x^2 +y^2} $$ where $\alpha = \arccos(\frac15)$. Find the volume of $R$.
So I used cylindrical coordinates and set the two equations for $z$ equal to each other, and got $z = \pm \sqrt{9-r^2}$ as the bounds for $z$. Then for $r$ I replaced the expression for $z^2$ which I got from the cone equation with the $z^2$ in the sphere equation. I got $r = \pm \frac{3}{\sqrt{1+\cot^2(\alpha)}}$ as the bounds. I think the bounds for $r$ is where I made some mistake but I don't know what exactly.
I'm aware you can solve this using spherical coordinates as well, but I would like to solve the problem using cylindrical coordinates, and particularly learn where i went wrong in this problem.
The answer should be some rational number times $\pi$.
| HINT
*
*The intersection between the cone and the sphere is for $z=\cot \alpha \sqrt{9-z^2}\ge0\\\implies z=\frac{3\cot \alpha}{\sqrt{1+\cot^2 \alpha}}$
*Since the cone requires $z\ge0$ (here $\cot\alpha>0$), the whole solid lies in the upper half-space, and we can set up the integral in two parts for $z\ge0$, notably
$$V=\int_0^{2\pi} \int_0^{\frac{3\cot \alpha}{\sqrt{1+\cot^2 \alpha}}}\int_0^{r_1(z)}r\,dr\,dz\,d\theta+\int_0^{2\pi} \int_{\frac{3\cot \alpha}{\sqrt{1+\cot^2 \alpha}}}^3\int_0^{r_2(z)}r\,dr\,dz\,d\theta$$
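Not part of the hint itself, but a numerical sanity check of this set-up, assuming $r_1(z)=z\tan\alpha$ (inside the cone) and $r_2(z)=\sqrt{9-z^2}$ (inside the sphere), which is how I read the two pieces; the result agrees with the spherical-cone formula $\frac{2}{3}\pi R^3(1-\cos\alpha)$:

```python
import numpy as np
from scipy.integrate import quad

alpha = np.arccos(1/5)
z_star = 3*np.cos(alpha)   # 3·cotα/√(1+cot²α) simplifies to 3·cosα

# inner r-integral done by hand: ∫_0^{r(z)} r dr = r(z)²/2
V1, _ = quad(lambda z: 2*np.pi*(z*np.tan(alpha))**2/2, 0, z_star)  # cone part
V2, _ = quad(lambda z: 2*np.pi*(9 - z**2)/2, z_star, 3)            # spherical cap
V = V1 + V2

assert abs(V - (2/3)*np.pi*27*(1 - 1/5)) < 1e-8
print(V/np.pi)   # ≈ 14.4, which suggests V = 72π/5
```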
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2679888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
I know $\mathbb{R}$ is the real number line. What really is $\mathbb{R}^n$? I know $\mathbb{R}$ is the real number line. What really is $\mathbb{R}^n$?
EDIT (based on comments below): What actually is the result of a cartesian product? That is something I failed to grasp too. What do you get when you multiply the number line $\Bbb{R}$ with itself? I mean the final result? Thanks.
| It is Euclidean space, which can be thought of as ordered $n$-tuples of real numbers.
For example, $\mathbb{R}^3$ is the set of all ordered triples $(a,b,c)$, where $a,b,c\in\mathbb{R}$.
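To make the cartesian-product idea concrete (my illustration, using a finite stand-in for $\mathbb{R}$): the product of a set with itself is the set of all ordered pairs.

```python
from itertools import product

A = [0, 1]                        # a finite stand-in for the real line
pairs = list(product(A, repeat=2))
print(pairs)                      # [(0, 0), (0, 1), (1, 0), (1, 1)]

# R^2 is the same construction with A = R: all ordered pairs (a, b)
assert len(pairs) == len(A)**2
```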
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
covariance of two points of an empirical cdf Probably it is a simple question but I was not able to find the answer somewhere.
Assume that $\hat{F}_{X}(\cdot)$ is the empirical cdf estimator that refers to the continuous random variable $X$. We know that the variance of this estimate at a point, say $x_{1}$, is $Var(\hat{F}_{X}(x_{1}))=\frac{F_{X}(x_{1})(1-F_{X}(x_{1}))}{n}$. The expression is similar for another different point, say $x_{2}$.
What is the covariance: $Cov(\hat{F}_{X}(x_{1}),\hat{F}_{X}(x_{2}))$ ??
| Perhaps it's a bit late, but here's help at a solution.
Recall that $\mathbb{E}[\hat{F}_n(x)] = F(x)$. If we expand out the expression for covariance, we are trying to compute
\begin{align*}
\mathbb{E}\left[ \left(\hat{F}_n(x_1) - F(x_1)\right)\left(\hat{F}_n(x_2) - F(x_2)\right)\right] = \mathbb{E}\left[ \hat{F}_n(x_1)\hat{F}_n(x_2)\right] - F(x_1)F(x_2).
\end{align*}
Now it's easiest to expand out the first term and get
\begin{align*}
\mathbb{E}\left[ \hat{F}_n(x_1)\hat{F}_n(x_2)\right] = \frac{1}{n^2}\mathbb{E}\left[ \sum_i \mathbb{I}(X_i \leq x_1) \sum_j \mathbb{I}(X_j \leq x_2)\right].
\end{align*}
If $i \neq j$, these are independent and we just get $F(x_1)F(x_2)$. If $i = j$, then we have to think about how these two events in the indicators actually relate. I'll leave it to the reader to get a pretty final formula.
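Completing the $i=j$ step (my derivation, not spelled out in the answer): $\mathbb{E}[\mathbb{I}(X_i \leq x_1)\mathbb{I}(X_i \leq x_2)] = F(\min(x_1,x_2))$, which leads to $\operatorname{Cov}(\hat{F}_n(x_1),\hat{F}_n(x_2)) = \frac{F(\min(x_1,x_2)) - F(x_1)F(x_2)}{n}$. A simulation check of that formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
x1, x2 = 0.3, 0.7                # X ~ Uniform(0,1), so F(x) = x

X = rng.random((reps, n))
F1 = (X <= x1).mean(axis=1)      # \hat F_n(x1), one value per replication
F2 = (X <= x2).mean(axis=1)

emp_cov = np.cov(F1, F2)[0, 1]
theory = (min(x1, x2) - x1*x2)/n   # (F(min(x1,x2)) - F(x1)F(x2))/n

assert abs(emp_cov - theory) < 2e-3
```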
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Quadratic equations with complex coefficients I am stuck on the final bit of this question. There is an answer for this on Stack Exchange but it is different from two other answers on other sites, which are also different from each other.
Solve the equation $z^2=-\sqrt3 + i$
So far i've done:
$$|z| = 2$$
$$\theta = \dfrac{5\pi}{6}$$
$$\therefore z^2 = 2cis\dfrac{5\pi}{6}$$
De Moivre's theorem: $$z^2=r^2cis2\theta$$
$$\therefore$$
$$r^2=2$$
$$r = \sqrt2$$
$$cis2\theta = cis\dfrac{5\pi}{6}$$
$$2\theta = \dfrac{5\pi}{6}$$
$$\theta = \dfrac{5\pi}{12}$$
$$\therefore z = \sqrt2cis(\dfrac{5\pi}{12} + \pi n)$$
for any integer n
If that is all correct, (please correct if it isn't), how do I finish it off? My textbook explains something about two distinct values.
I can leave it in polar form. As I said, I've seen three completely different answers to this.
| Nearly or I should say almost perfect.
$$z^2=2e^{i(\frac {5\pi}{6}+2\pi k)}$$
Hence $$z=\sqrt 2e^{i(\frac {5\pi}{12}+\pi k)}$$
For $k=0,1$
We consider only two values because we are dealing with a quadratic, so we must get exactly two roots.
In general if we find solutions to some $z^n=re^{i(\theta+2\pi k)}$
Then $k=0,1,2,....,(n-1)$
Hence the answer is $z_1=\sqrt 2e^{\frac {5i\pi}{12}}$ and $z_2=\sqrt 2e^{\frac {17i\pi}{12}}$
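A quick numeric confirmation (my addition) that both values really square to $-\sqrt3+i$:

```python
import cmath, math

w = complex(-math.sqrt(3), 1)                 # -√3 + i: modulus 2, argument 5π/6
z1 = math.sqrt(2)*cmath.exp(1j*5*math.pi/12)
z2 = math.sqrt(2)*cmath.exp(1j*17*math.pi/12)

for z in (z1, z2):
    assert abs(z*z - w) < 1e-12               # each is a square root of -√3 + i
assert abs(z1 + z2) < 1e-12                   # the two roots are negatives of each other
```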
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that system is linearly dependent We have that the system of vectors $$x, y, z$$
is linearly independent. Show that the system $$x-y, y-z, z-x $$ is linearly dependent.
Here is my try. As the first system is independent, it means that
$$a_1x+b_1y+c_1z=0 \implies a_1=b_1=c_1=0$$
In the second system, writing the combination $$a(x-y) + b(y-z) + c(z-x) = ax-ay+by-bz+cz-cx = (a-c)x + (b-a)y+(c-b)z$$ if we put ($a_1 = a-c, b_1 = b-a, c_1 = c-b$), then from the first system's independence the second system is independent too.
Where is the mistake?
| The mistake lies in the passage in which you jump from$$a-b=b-c=c-a=0\tag1$$to $a=b=c=0$. Note that if, say, $a=b=c=1$, then $(1)$ still holds. Indeed, $(x-y)+(y-z)+(z-x)=0$, so $a=b=c=1$ gives a nontrivial vanishing combination, which proves the dependence.
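A concrete check (mine, not part of the answer): taking any independent $x,y,z$, the coefficients $a=b=c=1$ already give a vanishing combination of the new system.

```python
import numpy as np

x, y, z = np.eye(3)              # the standard basis: certainly independent
M = np.column_stack([x - y, y - z, z - x])

# (x-y) + (y-z) + (z-x) = 0 is a nontrivial vanishing combination
assert np.allclose(M @ np.ones(3), 0)
assert np.isclose(np.linalg.det(M), 0)   # so the new system is linearly dependent
```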
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What is the order of operations for $p \implies q \implies r$ I've been studying mathematical logic recently and we have briefly covered the order of operations for operators like AND/OR/IMPLIES, etc.
However, we have a challenge question regarding how the following statement should be interpreted in terms of order of operations, and I don't believe we have covered this material nor can I find the same question answered online.
The statement is
$p \implies q \implies r$
The question asks if the above statement is correctly represented by $(p \implies q) \implies r$, or $p \implies (q \implies r)$, or neither - i.e. what is the correct order of operations when there are no brackets and the two logic operators are equally weighted.
I have used a truth table to determine that the two bracketed statements are not equivalent to each other, but is either of them logically equivalent to the first statement, or is it neither?
| The answer for the purposes of your course (or if you were to see it in a paper specifically on logic) may be different, but in general mathematical usage this is shorthand for "$p\Rightarrow q$ and $q\Rightarrow r$", similar to constructs like $a\leq b\leq c$.
(If something other than this is meant, I think parentheses should always be used.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
On the proof that, for every $L^2$-bounded continuous martingale $M$ starting at $0$, the martingale $M^2-[M]$ is uniformly integrable I am trying to prove that, for every $L^2$-bounded continuous martingale $M$ starting at $0$, the process $M^2-[M]$ is a uniformly integrable martingale. In the proof I am currently reading, they showed that $$E[ \sup_{t\geq0}[M]_t ]=\lim_{t\rightarrow\infty}E[[M]_t]=\lim_{t\rightarrow\infty}E[M_t^2]=E[M_{\infty}^2]<\infty$$ Then, it is clear that $$\sup_{t\geq0}\,(M_t^2-[M]_t)\in L^1\tag{1}$$ Finally they conclude from $(1)$ that $M^2-[M]$ is a uniformly integrable martingale.
However, the point I don't understand is, how uniform integrability can follow from $(1)$. In my opinion, $(1)$ only tells us that $M^2-[M]$ is $L^1$ bounded.
| For every $Y$ in $L^1$, the set of random variables $S_Y=\{X\in L^1\mid |X|\leqslant Y\}$ is uniformly integrable.
Apply this fact to $Y=\sup\limits_{t\geqslant0}\,(M_t^2-[M]_t)$ and to the set $\{M_t^2-[M]_t\mid t\geqslant0\}\subseteq S_Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2680708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |