| Q | A | meta |
|---|---|---|
Category Theory: special case of commutative diagram / function I am sorry if this is an obvious question.
I have a curious situation, in a software design, where it's supposed that there may exist a space $N$ (for normalised data) and a function $e: N \to K$ ($K$ another space different from $N$), such that for each function $h$ and space $L$ where $h: L \to K$, there exists a function $i: L \to N$, verifying $h = e\circ i$.
By intuition, I think this is not possible, or there must be some additional conditions on the spaces $L$, $N$ and $K$.
Is this a known problem in category theory? If so, are there possible solutions, given that $e = \operatorname{id}$ and $N = K$ is not an option?
Thanks in advance, and sorry if this is a badly formulated question.
| Your condition means precisely that $e$ is a retraction, i.e. that $e$ admits a right inverse, i.e. that there exists a morphism $f \colon K \to N$ with $e \circ f = \operatorname{id}_K$.
To see that such an $f$ exists under the given conditions we take $L = K$ and $h = \operatorname{id}_K$ and then take for $f$ the resulting morphism $i$.
If on the other hand $e$ admits a right inverse $f$ then we can always get the required morphism $i$ as $i := f \circ h$.
In the category of sets, i.e. if $K, L, N, \dotsc$ are sets and $e, h, i, \dotsc$ maps then this means precisely that $e$ is surjective.
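In the category of sets this can be made concrete. Below is a minimal sketch (with hypothetical toy data, not taken from the question's software design) checking that a surjective $e$ admits a section $f$, and that $i := f\circ h$ then satisfies $h = e\circ i$:

```python
# e: N -> K surjective, so it admits a section (right inverse) f: K -> N
N = {'n0', 'n1', 'n2'}
K = {0, 1}
e = {'n0': 0, 'n1': 1, 'n2': 1}          # surjective map N -> K

f = {0: 'n0', 1: 'n1'}                    # one choice of section
assert all(e[f[k]] == k for k in K)       # e . f = id_K

# given any h: L -> K, the factorisation i = f . h works
L = ['p', 'q']
h = {'p': 1, 'q': 0}                      # arbitrary test map L -> K
i = {x: f[h[x]] for x in L}               # i = f . h
assert all(e[i[x]] == h[x] for x in L)    # h = e . i
```

Note that the section $f$ is generally not unique (here $f(1)$ could also be `'n2'`); any choice yields a valid $i$.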
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to evaluate the following integral involving a gaussian? I want to evaluate the following integral:
$$\int\limits_0 ^\infty {x \sin{px} \exp{(-a^2x^2})} dx$$
Now I am unsure how to proceed. I know that this is an even function so I can extend the limit terms to $-\infty, \infty $ and then divide by 2. I have tried to evaluate this on Wolfram Alpha, but it only shows the answer while I am interested in the procedure.
| Start with:
$$I\left( p \right)=\int_{0}^{\infty }{\cos \left( px \right)\exp (-{{a}^{2}}{{x}^{2}})dx}$$
We can use differentiation under the integral sign:
$${I}'\left( p \right)=-\int_{0}^{\infty }{x\sin \left( px \right)\exp (-{{a}^{2}}{{x}^{2}})dx}$$
Integration by parts using $u=\sin \left( px \right)$ and $dv=-x\exp \left( -{{a}^{2}}{{x}^{2}} \right)dx$:
$${I}'\left( p \right)=\left. \sin \left( px \right)\frac{\exp \left( -{{a}^{2}}{{x}^{2}} \right)}{2{{a}^{2}}} \right|_{0}^{\infty }-\frac{p}{2{{a}^{2}}}\int_{0}^{\infty }{\cos \left( px \right)\exp \left( -{{a}^{2}}{{x}^{2}} \right)dx}$$
The first term on the right vanishes, and we have the first-order differential equation:
$$\frac{{I}'\left( p \right)}{I\left( p \right)}=-\frac{p}{2{{a}^{2}}}\Rightarrow \ln \left( I\left( p \right) \right)=-\frac{{{p}^{2}}}{4{{a}^{2}}}+C$$
Using $$I\left( 0 \right)=\int_{0}^{\infty }{\exp (-{{a}^{2}}{{x}^{2}})dx}=\frac{\sqrt{\pi }}{2a}$$
we can find $C=\ln \left( \frac{\sqrt{\pi }}{2a} \right)$,
hence
$$\ln \left( I\left( p \right) \right)=-\frac{{{p}^{2}}}{4{{a}^{2}}}+\ln \left( \frac{\sqrt{\pi }}{2a} \right)$$
So
$$I\left( p \right)=\frac{\sqrt{\pi }}{2a}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right)$$
Finally, the integral in question equals
$$-{I}'\left( p \right)=-\frac{d}{dp}\left( \frac{\sqrt{\pi }}{2a}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right) \right)=\frac{\sqrt{\pi }\,p}{4{{a}^{3}}}\exp \left( -\frac{{{p}^{2}}}{4{{a}^{2}}} \right)$$
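Carrying out the last differentiation, and using the half-line Gaussian value $I(0)=\int_0^\infty e^{-a^2x^2}\,dx=\frac{\sqrt\pi}{2a}$, gives the closed form $\int_0^\infty x\sin(px)e^{-a^2x^2}\,dx=\frac{\sqrt\pi\,p}{4a^3}e^{-p^2/(4a^2)}$. A quick numerical sanity check with a stdlib midpoint rule (the parameter values are chosen arbitrarily):

```python
import math

def lhs(p, a, n=100_000):
    # midpoint-rule approximation of the integral on [0, 8/a];
    # the integrand is negligible beyond that (exp(-64) ~ 1e-28)
    upper = 8.0 / a
    h = upper / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += x * math.sin(p * x) * math.exp(-(a * x) ** 2)
    return total * h

def rhs(p, a):
    # closed form: sqrt(pi) * p / (4 a^3) * exp(-p^2 / (4 a^2))
    return math.sqrt(math.pi) * p / (4 * a ** 3) * math.exp(-p ** 2 / (4 * a ** 2))

for p, a in [(1.0, 1.0), (2.5, 1.3), (0.7, 0.9)]:
    assert abs(lhs(p, a) - rhs(p, a)) < 1e-6
```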
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
If ${}^nP_{12}={}^nP_{10}\times 6$, then what is $n$?
If ${}^nP_{12}={}^nP_{10}\times 6$, then what is $n$?
I am in year 11. I understand the concepts of ${}^nP_r$ and ${}^nC_r$; once I know $n$ I can calculate, but I got stuck on this.
| So to compute ${}^nP_r$ we do $n(n-1)\ldots(n-r+1)$. We note the following recurrence
\begin{equation}
{}^nP_r = {}^nP_{r-1} \times (n-r+1)
\end{equation}
If you apply this recurrence twice, you should find a quadratic equation that $n$ must satisfy. When you find the solutions, pick the one for which both ${}^nP_{12}$ and ${}^nP_{10}$ make sense, and you can then verify that the solution is correct.
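Applying the recurrence twice gives $(n-10)(n-11)=6$, i.e. $n^2-21n+104=0$, with roots $n=8$ and $n=13$; only $n=13$ satisfies $n\ge 12$. A brute-force check in Python (using `math.perm`, available from Python 3.8):

```python
import math

# search for n with P(n, 12) == 6 * P(n, 10); n >= 12 is needed for P(n, 12)
solutions = [n for n in range(12, 100) if math.perm(n, 12) == 6 * math.perm(n, 10)]
assert solutions == [13]
```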
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Prove $(y-x^2)$ is a prime ideal in $\mathbb{R}[x,y]$, but not maximal. My guess is to use the fact that when we take the quotient,
$\mathbb{R}[x,y]/(y-x^2)$, this will become an integral domain but not a field.
I am not sure how to take the quotient, though. I am also unfamiliar with the ring of polynomials of two variables. As a set, can I write $\mathbb{R}[x,y] = \{a + bx + cy + dxy + ex^2+fy^2+...| a,b,c,d,e,f,...\in \mathbb{R}\}$?
And would it be correct if I assume that in the quotient, $y = x^2$?
If my above two guesses are correct, then I would assume that the quotient becomes $\mathbb{R}[x]$, since all the $y$ terms can be turned into $x^2$. But isn't this ring an integral domain, since there are no zero divisors, but not a field, since not every real polynomial has an inverse?
| Indeed, $x\mapsto x$, $y\mapsto x^2$ gives us a homomorphism $\Bbb R[x,y]\to\Bbb R[x]$. Show that the kernel is $(x^2-y)$ and you are done as that shows $\Bbb R[x,y]/(x^2-y)\cong \Bbb R[x]$, which is an integral domain, but not a field.
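The evaluation homomorphism can be sketched in plain Python, storing a polynomial in $\mathbb{R}[x,y]$ as a dict mapping the exponent pair $(i,j)$ of $x^iy^j$ to its coefficient; multiples of $y-x^2$ land in the kernel (the test polynomial below is arbitrary):

```python
# the quotient map R[x,y] -> R[x]:  x |-> x,  y |-> x^2
def evaluate(p):
    out = {}
    for (i, j), c in p.items():
        k = i + 2 * j                      # x^i (x^2)^j = x^(i + 2j)
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def multiply(p, q):
    # multiplication in R[x,y]
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            out[key] = out.get(key, 0) + c1 * c2
    return {k: c for k, c in out.items() if c}

gen = {(0, 1): 1, (2, 0): -1}              # y - x^2
q = {(1, 1): 5, (0, 2): -3, (3, 0): 2}     # an arbitrary test polynomial
assert evaluate(multiply(gen, q)) == {}    # every multiple of y - x^2 maps to 0
```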
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3238973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Solve $\sqrt{1 + \sqrt{1-x^{2}}}\left(\sqrt{(1+x)^{3}} + \sqrt{(1-x)^{3}} \right) = 2 + \sqrt{1-x^{2}} $ Solve $$\sqrt{1 + \sqrt{1-x^{2}}}\left(\sqrt{(1+x)^{3}} + \sqrt{(1-x)^{3}} \right) = 2 + \sqrt{1-x^{2}} $$
My attempt:
Let $A = \sqrt{1+x}, B = \sqrt{1-x}$ and then by squaring the problematic equation we get:
$$(1+AB)(A^{3} + B^{3})^{2} = (AB)^{2} + 4AB + 4 $$
$$ A^{6} + B^{6} + BA^{7} + A B^{7} = -2 (AB)^{4} - 2(AB)^{3} + (AB)^{2} + 4AB + 4 $$
I have also tried using $A = (1+x)$, $B = (1-x)$, and some others, but none solves the problem.
I am now trying $A = (1+x)$ and $(1-x) = -(1+x) + 2 = 2 - A$, so:
$$\sqrt{1 + \sqrt{A(2-A)}}\left(\sqrt{(A)^{3}} + \sqrt{(2-A)^{3}} \right) = 2 + \sqrt{A(2-A)} $$
| First, this is stated as an equation to solve (for $x$) rather than an identity to be shown.
So with $a=\sqrt {1+x}$ and $b=\sqrt {1-x}$ we have $$a^2+b^2=2$$ and $$(a+b)^2=a^2+2ab+b^2=2(1+ab)$$ and $$a^3+b^3=(a+b)(a^2-ab+b^2)=(a+b)(2-ab)$$
Then $$\sqrt {1+ab}\cdot (a^3+b^3)=\frac {\sqrt 2}2(a+b)(a+b)(2-ab)=\sqrt 2(1+ab)(2-ab)$$
If we then put $c=ab$ the equation to solve is then $$\sqrt 2(1+c)(2-c)=2+c$$ which is a straightforward quadratic in $c$. Then solve for $x$ by noting $c^2=1-x^2$
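A numerical sketch of the last step: solve the quadratic $\sqrt2\,c^2-(\sqrt2-1)c-(2\sqrt2-2)=0$ (the expanded form of $\sqrt2(1+c)(2-c)=2+c$), keep the non-negative root since $c=ab=\sqrt{1-x^2}\ge0$, and check the original equation:

```python
import math

r2 = math.sqrt(2)
# sqrt(2)*(1+c)*(2-c) = 2+c  <=>  sqrt(2) c^2 - (sqrt(2)-1) c - (2 sqrt(2)-2) = 0
disc = (r2 - 1) ** 2 + 4 * r2 * (2 * r2 - 2)
roots = [((r2 - 1) + s * math.sqrt(disc)) / (2 * r2) for s in (1, -1)]
c = max(roots)                   # the admissible root: c = sqrt(1-x^2) >= 0
x = math.sqrt(1 - c * c)         # +/- x both solve, by symmetry of the equation

lhs = math.sqrt(1 + math.sqrt(1 - x * x)) * ((1 + x) ** 1.5 + (1 - x) ** 1.5)
rhs = 2 + math.sqrt(1 - x * x)
assert abs(lhs - rhs) < 1e-9
```

This gives $x \approx \pm 0.378$; the discarded negative root of the quadratic cannot equal $\sqrt{1-x^2}$.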
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 0
} |
exponential time complexity for $M(n,n)$ with $ M(i,j) = M(i-1,j) + M(i-1,j-1) + M(i,j-1) $. For $n \in \mathbb{N}$ we define $Q(n) = M(n,n)$ with:
$$ M(i,j) = M(i-1,j) + M(i-1,j-1) + M(i,j-1) $$ and
$$ M(i,0) := M(0,i) := i \mbox{ } \mbox{ } \forall i \geq 0 $$
Show that computing $Q(n)$ via the recursion above takes exponential time. Moreover, show that there is an algorithm which takes quadratic time.
My thoughts:
Regarding the first question, I think I understand the problem. I choose $n = 2$ and consider the recursion represented as a tree. The subproblems overlap several times (not like mergesort, where the subproblems are disjoint). For example, we have to compute $M(1,1)$ three times for $n=2$. So this would be the reason for the exponential time complexity, I guess.
For the second question, I would say that we can replace the "top-down" strategy of the recursion with a "bottom-up" strategy. So we start with subproblems of "trivial size" and work towards "greater" subproblems. We can note every result in a table and use it when we need it for greater subproblems.
Those were my thoughts on the two questions.
| Let the number of function calls required for computing $M(i,j)$ the naive way be $F(i,j)$. Then for sufficiently large $n$, and since $F()$ and $M()$ are symmetric in their arguments,
$$F(n,n)=F(n,n-1)+F(n-1,n)+F(n-1,n-1)=2F(n,n-1)+F(n-1,n-1)$$
$$=2(F(n,n-2)+F(n-1,n-1)+F(n-1,n-2))+F(n-1,n-1)$$
$$>2F(n-1,n-1)+F(n-1,n-1)=3F(n-1,n-1)$$
This is a recurrence inequality on the number of function calls needed for $Q(n)$, and it is easily seen that these call counts must grow at least as fast as $3^n$, i.e. exponentially.
To compute $Q(n)$ in quadratic time, compute $M(i,j)$ for increasing values of $i+j$ – from $0$ to $2n$ – and cache the results so they need not be recomputed again. This time-saving technique is dynamic programming; in its top-down cached form it is called memoisation.
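The two strategies can be sketched side by side: a call counter for the naive recursion (which grows at least like $3^n$) and a cached version that computes each of the $O(n^2)$ pairs once:

```python
from functools import lru_cache

def naive_calls(i, j):
    # total number of function calls the plain recursion makes for M(i, j)
    if i == 0 or j == 0:
        return 1
    return 1 + naive_calls(i - 1, j) + naive_calls(i - 1, j - 1) + naive_calls(i, j - 1)

@lru_cache(maxsize=None)
def M(i, j):
    # each pair (i, j) with 0 <= i, j <= n is computed once -> quadratic time
    if i == 0 or j == 0:
        return max(i, j)          # boundary: M(i, 0) = M(0, i) = i
    return M(i - 1, j) + M(i - 1, j - 1) + M(i, j - 1)

def Q(n):
    return M(n, n)

assert Q(1) == 2 and Q(2) == 12      # small values, checked by hand
assert naive_calls(8, 8) > 3 ** 8    # exponential blow-up of the naive recursion
```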
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How can I evaluate $\lim_{x \to \infty}\frac{\sum_{k=1}^{\ 1000}(x+k)^{10}}{x^{10}+10^{10}}$? I would like to examine the limit of the following function as x goes towards infinity:
$$\lim_{x \to \infty}\frac{\sum\limits_{k=1}^{\ 1000}(x+k)^{10}}{x^{10}+10^{10}}$$
I have already tried to split the sum, extract the factor $x^{10}$, and then reduce the fraction by it in the numerator and denominator, so that I might see a way to determine the limit, but that does not seem to help me proceed.
And using l'Hôpital's rule seems to be too complicated.
| Here is the "honest" calculation using properties of limits:
\begin{eqnarray*} \frac{\sum_{k=1}^{\ 1000}(x+k)^{10}}{x^{10}+10^{10}}
& = & \frac{\sum_{k=1}^{\ 1000}\left(1+\frac{k}{x}\right)^{10}}{1+\frac{10^{10}}{x^{10}}}\\
& \stackrel{x\to \infty}{\longrightarrow} & \frac{\sum_{k=1}^{\ 1000}\left(1+0\right)^{10}}{1+0} \\
& = & \sum_{k=1}^{\ 1000} 1 = 1000
\end{eqnarray*}
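A numeric check (exact integer arithmetic up to the final division; convergence is slow, with relative error of order $10\cdot 500/x$):

```python
def ratio(x):
    # the quotient from the question, evaluated at integer x exactly
    return sum((x + k) ** 10 for k in range(1, 1001)) / (x ** 10 + 10 ** 10)

# approaches 1000 as x grows
assert abs(ratio(10 ** 6) - 1000) < 10
assert abs(ratio(10 ** 10) - 1000) < 0.01
```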
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Definition of infinite cyclic group I'm having some conceptual issues with the infinite cyclic group $C_\infty$. Finite groups $C_n$ have a clear representation as the integers $0,1,\cdots,n-1$ under addition $\pmod{n}$, or as the rotation group of the $n$-gon for $n\geq 3$. The rotation group of a circle, which is what I interpreted $C_\infty$ to be, has uncountable order, since any real angle in $[0,2\pi)$ is valid. This would make it isomorphic to $[0,2\pi)$ under addition $\pmod{2\pi}$. But online it says $(\mathbb{Z},+)$ is also isomorphic, which doesn't make sense to me because it has order $\aleph_0$. Also, the first group has two self-inverse elements, $0$ and $\pi$, while this group only has $0$.
I'm guessing my interpretation is wrong. The textbook never defines what $C_\infty$ is. What exactly is it?
| The (up to isomorphism) infinite cyclic group is just $\mathbb{Z}$ under addition.
You can visualize it as the group of integer shifts of the integers.
You can also visualize it as the rotations of the circle through integer numbers of radians, but that's not pretty geometrically since the orbit of any point on the circle is dense.
The group of all rotations of the circle is not cyclic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Modular exponentiation (finding the remainder) I want to find the remainder of $8^{119}$ divided by $20$, and here is as far as I got:
$8^2=64\equiv 4 \pmod {20} \\
8^4\equiv 16 \pmod {20} \\
8^8\equiv 16 \pmod {20}\\
8^{16}\equiv 16 \pmod {20}$
From this I see the following pattern: $8^{4\cdot 2^{n-1}} \equiv 16 \pmod {20}$ for all $n \ge 1$.
So,
$\begin{aligned} 8^{64}.8^{32}.8^{16}.8^7 &\equiv 16.8^7 \pmod{20}\\
&\equiv 16.8^4.8^3 \pmod{20} \\
&\equiv 16.8^3 \pmod {20}\end{aligned}$
And I'm stuck. Actually, I've checked with a calculator and got that the remainder is $12$, but I'm not satisfied because I still have to calculate $16\cdot 8^3$.
Is there any other way to solve this without a calculator? I mean, consider my situation if I'm not allowed to use a calculator.
Thanks, and I will appreciate any answer.
| Although $8^{4\cdot 2^{n-1}}\equiv 16\pmod {20} $ for all positive integer $n $, it is actually much simpler:
$8^{4k}\equiv 16\pmod {20} $ for all $k>0$. That is, for any multiple of $4$ in the exponent, not just for the exponents $4\cdot 2^{n-1}$.
And furthermore $8^{4k+1}\equiv 16*8\equiv -4*8\equiv-32\equiv 8\pmod {20} $ for $k>0$
And $8^{4k+2}\equiv 8*8\equiv 4\pmod {20} $
And $8^{4k+3}\equiv 4*8\equiv 32\equiv 12\pmod {20} $.
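This four-case analysis is easy to confirm with Python's built-in three-argument `pow`:

```python
# for exponents e >= 1, 8^e mod 20 cycles through 8, 4, 12, 16 with period 4
cycle = [pow(8, e, 20) for e in range(1, 9)]
assert cycle == [8, 4, 12, 16, 8, 4, 12, 16]

# 119 = 4*29 + 3, so the "4k+3" case applies and the remainder is 12
assert pow(8, 119, 20) == 12
```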
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Inverse of an upper bidiagonal Toeplitz matrix I have a matrix with the following structure
$$\left[\begin{array}{cccccc|c}
-1 & 1-b & 0 & \dots & 0 & 0 & b \\
0 & -1 & 1-b & \dots & 0 & 0 & b \\
\cdots \\
0 & 0 & 0 & \dots &-1 & 1-b & b \\
0 & 0 & 0 & \dots & 0 & -1 & 1 \\ \hline
0 & 0 & 0 & \dots & 0 & 0 & -1
\end{array}\right]$$
I have to find the inverse of this matrix. I begin by using the block matrix inversion formula
$$\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {C} &\mathbf {D} \end{bmatrix}^{-1}=\begin{bmatrix}\left(\mathbf {A} -\mathbf {BD} ^{-1}\mathbf {C} \right)^{-1}&-\left(\mathbf {A} -\mathbf {BD} ^{-1}\mathbf {C} \right)^{-1}\mathbf {BD} ^{-1}\\-\mathbf {D} ^{-1}\mathbf {C} \left(\mathbf {A} -\mathbf {BD} ^{-1}\mathbf {C} \right)^{-1}&\quad \mathbf {D} ^{-1}+\mathbf {D} ^{-1}\mathbf {C} \left(\mathbf {A} -\mathbf {BD} ^{-1}\mathbf {C} \right)^{-1}\mathbf {BD} ^{-1}\end{bmatrix}$$
where
$$\mathbf{A} = \begin{bmatrix}
-1 & 1-b & 0 & \dots & 0 & 0 \\
0 & -1 & 1-b & \dots & 0 & 0 \\
\cdots \\
0 & 0 & 0 & \dots &-1 & 1-b \\
0 & 0 & 0 & \dots & 0 & -1
\end{bmatrix}$$
$$\mathbf{B} = \begin{bmatrix} b \\ b \\ \vdots \\ b \\ 1 \end{bmatrix}$$
$$\mathbf{C} = \begin{bmatrix} 0 & 0 & 0 & \dots & 0 & 0 \end{bmatrix}$$
$$\mathbf{D} = \begin{bmatrix} -1 \end{bmatrix}$$
Given that $\mathbf{C} = \mathbf{0}$ and $\mathbf{D} = [-1]$, I can simplify the formula as
$$\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {C} &\mathbf {D} \end{bmatrix}^{-1}=\begin{bmatrix}\mathbf{A}^{-1} & \mathbf {A}^{-1} \mathbf {B}\\ \mathbf{0} & -1\end{bmatrix}$$
The only part left is to solve for $\mathbf{A}$ which I believe can be called an upper bidiagonal Toeplitz matrix. Unfortunately, I have not been able to find a formula to compute the inverse for the same.
| Elements $m_{ij}$ of the inverse matrix $M=A^{-1}$ are:
\begin{align}
m_{ij}
&=
\begin{cases}
-(1-b)^{j-i}, &j\ge i,
\\
0, &j<i.
\end{cases}
\end{align}
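The claimed inverse is easy to verify numerically in plain Python, for a sample size and a sample value of $b$ (both arbitrary):

```python
def build_A(n, b):
    # upper bidiagonal Toeplitz: -1 on the diagonal, (1-b) on the superdiagonal
    return [[-1 if i == j else (1 - b) if j == i + 1 else 0 for j in range(n)]
            for i in range(n)]

def build_M(n, b):
    # claimed inverse: m_ij = -(1-b)^(j-i) for j >= i, else 0
    return [[-(1 - b) ** (j - i) if j >= i else 0 for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, b = 6, 0.25
prod = matmul(build_A(n, b), build_M(n, b))
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(n) for j in range(n))   # A * M = I
```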
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3239967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The set of all values of m for which $mx^2 – 6mx + 5m + 1 > 0$ for all real x is The set of all values of m for which $mx^2 – 6mx + 5m + 1 > 0$ for all real x is?
The answer given is $0\le m<1/4$.
My working: $D\ge 0$
$\Rightarrow (-6m)^2 -4(m)(5m+1)\ge 0$
$\Rightarrow m(4m-1)\ge 0$
$\Rightarrow$ either $m\ge 1/4$ or $m\le 0$
Where am I going wrong?
| One option:
$y=m(x^2-6x+5) +1>0;$
0) $m=0$√
1) $m>0$.
A parabola opening upward.
Minimum at:
$y'=m(2x-6)=0;$ $x=3;$
$y_{\min}=m(9-18+5)+1=$
$-4m+1$;
We require: $y_{\min}= -4m+1>0$, or $m<1/4$;
Combining : $0 \le m < 1/4$.
2) Rule out $m<0$
(Parabola opening downward )
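A quick numerical scan over a grid of $x$ values (which includes the minimum point $x=3$) agrees with the answer $0\le m<1/4$:

```python
def always_positive(m, xs):
    # check m x^2 - 6 m x + 5 m + 1 > 0 on the sample grid
    return all(m * x * x - 6 * m * x + 5 * m + 1 > 0 for x in xs)

xs = [x / 10 for x in range(-100, 101)]   # grid on [-10, 10], contains x = 3
assert always_positive(0, xs)             # m = 0: the expression is constantly 1
assert always_positive(0.24, xs)          # inside 0 <= m < 1/4
assert not always_positive(0.25, xs)      # m = 1/4 fails: value 0 at x = 3
assert not always_positive(-0.1, xs)      # m < 0 fails for large |x|
```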
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
LCM of irrational numbers So I read in a book that irrational and rational numbers do not have a common multiple, and it said that the LCM of two irrational numbers is only possible when both irrational numbers have the same surd. I was wondering what this means.
| Those are odd claims.
The first can, I think, be justified. Let's say $\alpha$ is an irrational number and $\frac ab$ is rational (with $a,b\in \mathbb Z$). Then it is certainly true that, for any non-zero integers $m,n$ we have $m\times \alpha$ is irrational and $n\times \frac ab$ is rational, so it is not possible for them to be equal.
But the second claim seems hard to follow, no matter what (standard) meaning you assign to "surd".
Originally, "surd" just meant "irrational". These days, it more often means an expression in radicals, such as $\sqrt 2$ or $\sqrt[3] 3$. However, numbers like $\pi$ and $2\pi$ clearly have common multiples, so I'm not sure what meaning is intended.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a general effective method to solve Smullyan style Knights and Knaves problems? Is the truth table method the most appropriate one? Below, an attempt at solving a knight/knave puzzle using the truth table method.
Are there other methods?
Source : https://en.wikipedia.org/wiki/Knights_and_Knaves
| Truth tables always work, of course, but another approach is to use algebra in the field with two elements (i.e. the integers modulo 2) to represent truth values.
If we let $1$ represent a true statement and $0$ represent a false statement, then
* "X and Y" corresponds to $xy$.
* "not X" corresponds to $1-x$.
* "X or Y" corresponds to $x+y-xy$.
and therefore we can represent any propositional formula by a polynomial expression. Finally, we can represent "A says (or would say) X" by the equation
$$ (\text{A is a knave}) + x = 1 $$
since if A says X, then we know that either X is true or A is a knave, but not both.
For the simple puzzle in question (where John says that he and Bill are both knaves), introduce variables $j$ and $b$ for "John is a knave" and "Bill is a knave". John's statement is then $jb$, and the known fact that he says it is the equation
$$ j+jb = 1 $$
Algebra now tells us
$$ j(1+b) = 1$$
and the only way for that to be true in $\mathbb F_2$ is if $j=1$ and $1+b=1$. In other words "John is a knave" must be true, and "Bill is a knave" must be false.
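Since everything lives in $\mathbb F_2$, the same puzzle can also be brute-forced over the four truth assignments; the statement $jb$ ("we are both knaves") must satisfy $j+jb=1$:

```python
from itertools import product

# j = "John is a knave", b = "Bill is a knave", arithmetic mod 2
solutions = [(j, b) for j, b in product((0, 1), repeat=2)
             if (j + j * b) % 2 == 1]     # John's statement equation: j + jb = 1
assert solutions == [(1, 0)]              # John is a knave, Bill is not
```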
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 1
} |
Prove that if $p\mid ab$ where $a$ and $b$ are positive integers and $a\lt p$ then $p\le b$ I have found an old textbook called "Real Variables" by Claude W. Burrill and John R. Knudsen. In the first chapter this textbook uses 15 axioms to derive many of the well-known basic facts about the integers. I have been reading and solving all the exercises, and so far so good, until exercise 1-27, which asks the following: "Prove that if $p$ is prime and divides $ab$, where $a$ and $b$ are positive and $a\lt p$, then $p\le b$." This would be very easy if we could assume Euclid's lemma, but it hasn't been proven yet, and the very next exercise asks for its proof, so I believe that there is a way to prove this without Euclid's lemma, but how? Is there even a way to prove this without Euclid's lemma? I also believe I'm not allowed to use Bézout's identity, because its proof is exercise 1-29.
I have been thinking about this problem since yesterday, and I searched online for exercise solutions for this textbook, but there were no results.
As another question: does the theorem above imply Euclid's lemma in a straightforward way?
| As a way to suggest that this is at least nearly equivalent to Euclid (or something like it), let's see how it does with the so-called Hilbert Numbers. These are just the naturals of the form $4k+1$. They are useful for thinking about things like unique factorization, since such basic properties do not hold for them. For instance, numbers like $3\times 7=21$ are "prime" here, since neither $3$ nor $7$ are Hilbert Numbers. Thus you can have something like $$21\times 209= 33\times 133$$ as two distinct "prime" factorizations of $4389$. (Note: Here, of course, $209=11\times 19$ and $133=7\times 19$ so, in the context of the natural numbers, all we've done is to 'reapportion' the various primes. As all those primes are of the form $4k+3$ none of them are Hilbert Numbers, of course).
How does your result fare in your context? Well, the largest "prime" in our example is $209$ so let that be $p$. Then, letting $a=33,b=133$ we see that both $a,b<p$ but $p\,|\,ab$ nonetheless. So...whatever proof the authors had in mind, it has to fail for the Hilbert Numbers.
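The counterexample is small enough to verify mechanically (a Hilbert number here being a positive integer $\equiv 1 \pmod 4$):

```python
def is_hilbert(n):
    return n > 0 and n % 4 == 1

p, a, b = 209, 33, 133
assert all(is_hilbert(n) for n in (p, a, b))
assert a < p and b < p
assert (a * b) % p == 0       # p | ab, yet p > b: the claimed result fails here

# 209 is "prime" among Hilbert numbers: no factorization into smaller ones
divisors = [d for d in range(2, p)
            if p % d == 0 and is_hilbert(d) and is_hilbert(p // d)]
assert divisors == []
```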
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find the probability that no two among $A$, $B$, and $C$ are together when $12$ people are arranged in a circle There are $12$ people including A,B and C. They are arranged in a circle. Find the probability that no two among A, B and C are together.
I have solved problems with cases involving two people sitting adjacent to each other, but here there are $3$ people, no two of whom may sit adjacent, hence I am getting confused.
| I think it is easier to find the probability where the condition is not satisfied. We have two cases to consider:
Case 1: A,B,C are adjacent. This is simply grouping A,B,C and changing their places among themselves with $3!$ and merging them with the remaining $9$ people. In this case, there are
$$3!\cdot(10-1)! = 3!\cdot9!$$
such arrangements.
Case 2: A,B are adjacent, C is not adjacent to neither A nor B. We can find this by grouping $4$ people. While grouping them, put A and B on the two middle places with $2!$ and for the side places, we can choose $2$ people from $9$ (Not $10$ because we don't want C to be one of these side places as it is already counted in case 1) and change their places among themselves with $\binom{9}{2}2! = 72$. Then we can take this group as one person and merge them with the remaining $8$ people. In this case, there are
$$2!\cdot 72\cdot(9-1)! = 72\cdot 2! \cdot 8! = 16\cdot9!$$
such arrangements. Notice that this is the same for the cases where A,C are adjacent and B is adjacent to neither A nor C, and where B,C are adjacent and A is adjacent to neither B nor C. So, in total, we have
$$6\cdot9!+3\cdot16\cdot9! = 54\cdot9!$$
arrangements that do not satisfy the condition. So the answer should be
$$\frac{(12-1)!-54\cdot9!}{(12-1)!} = \frac{11\cdot10\cdot9!-54\cdot9!}{11!} = \frac{56\cdot9!}{11!} = \frac{28}{55}$$
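Since every seating is equally likely, the probability depends only on which three seats A, B and C occupy, so a brute force over the $\binom{12}{3}=220$ seat triples confirms the result:

```python
from itertools import combinations
from fractions import Fraction

n = 12
def adjacent(i, j):
    # seats i and j are neighbours on the circle of 12 seats
    return (i - j) % n in (1, n - 1)

triples = list(combinations(range(n), 3))
good = [t for t in triples
        if not any(adjacent(p, q) for p, q in combinations(t, 2))]
prob = Fraction(len(good), len(triples))
assert prob == Fraction(28, 55)       # matches the answer above
```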
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tensor coordinates problem
Let $B = ((1,2)^T,(1,3)^T)$ be the basis of $V=\Bbb R^2$.
Find the dual basis $B^*=(e^1,e^2)$
Find the matrix of the bilinear form (tensor): $ T =
e^1\oplus e^2 - e^2\oplus e^1 + 2e^2 \oplus e^2$ with respect to
the canonical basis.
I suppose my tensor is given with respect to the basis $B^*$. But how can I get the expression for $[T]_K$ (with respect to the canonical basis)?
I calculated the dual basis as $B^* = ((3,-1)^T,(-2,1)^T)$
| The correct symbol $\otimes$ is produced with \otimes. A strategy is to first find the matrix of $T$ taken with respect to the basis $\mathcal{B}$, and then convert it to the standard basis. With respect to $\mathcal{B}$, it is clear that $$[T]_{\mathcal{B}} = \begin{pmatrix} 0 & 1 \\-1 & 2 \end{pmatrix}.$$Now, we use the tensor transformation law (that physicists love to write as that mess with indices), giving $$[T]_{{\rm std}} = A^\top [T]_{\mathcal{B}} A,$$where $A = [{\rm Id}_{\Bbb R^2}]_{{\rm std},\mathcal{B}} = [{\rm Id}_{\Bbb R^2}]_{\mathcal{B},{\rm std}}^{-1}$. Compute.
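A concrete computation, under the convention that coordinates transform by $x_{\mathcal B}=A\,x_{\rm std}$ with $A=P^{-1}$, where the columns of $P$ are the basis vectors (the resulting matrix is my own computation, not stated in the answer):

```python
# basis B = ((1,2)^T, (1,3)^T): the columns of P
P = [[1, 1],
     [2, 3]]
# det P = 1, so P^{-1} = [[3, -1], [-2, 1]]; its rows are the coordinates of
# the dual basis, matching the computed B* = ((3,-1)^T, (-2,1)^T)
A = [[3, -1],
     [-2, 1]]

TB = [[0, 1],
      [-1, 2]]        # [T]_B, read off from the coefficients of e^i (x) e^j

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# bilinear-form transformation law: [T]_std = A^T [T]_B A
Tstd = matmul(transpose(A), matmul(TB, A))

# sanity check: evaluating T on the basis vectors must recover [T]_B
def bilin(M, u, w):
    return sum(u[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

basis = [(1, 2), (1, 3)]
assert all(bilin(Tstd, basis[i], basis[j]) == TB[i][j]
           for i in range(2) for j in range(2))
```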
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Divisibility by 3 Maybe this is a duplicate question (if so, can the moderators be kind enough to merge this appropriately?), but what is the condition on $a$ and $b$ for an expression $am+b$ to be divisible by $3$ ($a$ and $b$ are integers)? For example, I can say $16m+3$ is divisible by $3$ since $b=3$ is divisible by $3$ and for $m$ a multiple of $3$, $3|16m$. In the case of $176m+23$, I don't think it is divisible by $3$. Is this the case?
| A basic principle is that if $3|v$, then $3|w \iff 3|v+w$. Furthermore, if $3|v$ then
$3|nv$.
So $3|ma+b\iff3|ra+s,$ where $r$ is the remainder when $m$ is divided by $3,$
and $s$ is the remainder when $b$ is divided by $3$.
Now we have only $9$ possibilities to consider:
$r=0, 1, $ or $2, $ and $s=0, 1, $ or $2$.
When $r=0$ and $s=0$, $ra+s=0$ is divisible by $3$.
When $r=0$ and $s=1$ or $2$, $ra+s=1$ or $2$ is not divisible by $3$.
When $s=0$ and $r=1$ or $2$, then $ra+s=ra$ is divisible by $3$ when $a$ is and not when $a$ is not.
When $r=1$ and $s=1,$ then $ra+s=a+1,$ so $ra+s$ is divisible by $3$ when $a$ leaves remainder $2$ when divided by $3$.
When $r=1$ and $s=2,$ then $ra+s=a+2,$ so $ra+s$ is divisible by $3$ when $a$ leaves remainder $1$ when divided by $3$.
When $r=2$ and $s=1,$ then $ra+s=2a+1,$ so $ra+s$ is divisible by $3$ when $a$ leaves remainder $1$ when divided by $3$.
When $r=2$ and $s=2,$ then $ra+s=2a+2,$ so $ra+s$ is divisible by $3$ when $a$ leaves remainder $2$ when divided by $3.$
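The casework reduces everything to residues mod $3$, which a brute-force scan confirms; it also settles the $176m+23$ example from the question ($176\equiv 2$ and $23\equiv 2 \pmod 3$, so the expression is divisible by $3$ exactly when $m\equiv 2\pmod 3$):

```python
# am + b is divisible by 3 iff (a mod 3)(m mod 3) + (b mod 3) is
for a in range(-5, 6):
    for b in range(-5, 6):
        for m in range(-5, 6):
            predicted = ((a % 3) * (m % 3) + b % 3) % 3 == 0
            assert ((a * m + b) % 3 == 0) == predicted

# the question's example: 176m + 23 is divisible by 3 iff m = 2 (mod 3)
assert [m % 3 for m in range(30) if (176 * m + 23) % 3 == 0] == [2] * 10
```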
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Existence of continuous $r(t)$ with $\lim_{t \to \infty} \frac{f(r(t))}{g(t)} = 1$ Let $ \ f,g: \mathbb{R} \to \mathbb{R} \ $ be continuously differentiable functions such that $$\lim_{t \to \infty} f(t) = \infty = \lim_{t \to \infty} g(t) \ \ . $$ My question is:
Is there a continuous function $ \ r: \mathbb{R} \to \mathbb{R} \ $ such that $$\lim_{t \to \infty} \frac{f \big( r(t) \big)}{g(t)} = 1 \ \ \ \ ? $$
I would like hints for a proof or a counterexample. You may feel free to modify the assumptions about $f$ and $g$ as you please.
Thanks in advance.
| Consider the functions $f(t)=t(2+\cos t)$ and $g(t)=t$. They are infinitely differentiable at the whole domain. Now suppose that the above function $r$ exists.
Let $T>0$ be large enough for $\frac 2 3 < \frac {f(r(t))}{g(t)} < \frac 4 3$ to hold for all $t>T$. This implies $$T<t_1<t_2 \implies \frac {f(r(t_1))}{f(r(t_2))} < \frac {\frac 4 3 \, g(t_1)}{\frac 2 3 \, g(t_2)}=2\,\frac {t_1}{t_2} < 2\,. \tag{1}\label{eq1}$$
Note that $|f(t)| \le 3|t|$, $f(t)$ has the sign of $t$, hence for all $t>T$ we have: $\frac 2 3 t = \frac 2 3 g(t) < f(r(t)) \le 3 r(t)$ and therefore $\lim\limits_{t \to +\infty} r(t) = +\infty$. Select $n \in \mathbb N$ such that $2 \pi n > r(T)$ and let $$t_1=\inf\,\{t>T \mid r(t)=2 \pi n\},\quad t_2=\inf\,\{t>T \mid r(t)=(2n+1) \pi\}.$$ By the intermediate value theorem, $T<t_1<t_2$. Now we see that $$\frac {f(r(t_1))}{f(r(t_2))}=\frac {f(2\pi n)}{f((2n+1) \pi)}=\frac {2 \pi n \cdot 3}{(2n+1) \pi \cdot 1}=\frac {6n}{2n+1} \ge 2\,,$$ that contradicts $\eqref{eq1}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Every functor $\mathcal C\to\mathsf{Set}$ is an epimorphic image of a monofunctor implies every morphism of $\mathcal C$ is monic I am trying to solve the following problem.
A functor $F: \mathcal{C} \rightarrow \mathsf{Set}$ is called a monofunctor if $F(f)$ is a monomorphism (that is, injective) for every morphism $f$ of $\mathcal{C}$.
Show that the following conditions on a small category $\mathcal{C}$ are equivalent:
1. Every morphism of $\mathcal{C}$ is monic.
2. Every representable functor $\mathcal{C} \rightarrow \mathsf{Set}$ is a monofunctor.
3. Every functor $\mathcal{C} \rightarrow \mathsf{Set}$ is an epimorphic image of a monofunctor.
Under what hypotheses on $\mathcal{C}$ is every functor $\mathcal{C} \rightarrow \mathsf{Set}$ a monofunctor?
I understand why $(1)$ and $(2)$ imply each other. For the implication $(2)\Rightarrow (3)$, I constructed, for a functor $F$, the epimorphism $$\alpha:\coprod_{A\in \text{Ob }\mathcal{C}, \space x \in FA}\mathcal{C}(A,-)\twoheadrightarrow F $$
Since $\mathcal{C}(A,-)$ is a $\textit{monofunctor}$ for every $A\in\text{Ob }\mathcal{C}$, so is the domain of $\alpha$.
But I don't really know how to prove $(3)\Rightarrow (1)$. Can someone help me with this?
| Hint 1
We have some $G:\mathcal C\to\mathbf{Set}$ such that $\alpha:G\twoheadrightarrow\mathcal{C}(A,-)$ for an object $A$. This means $\alpha_B:GB\twoheadrightarrow\mathcal C(A,B)$ for all $B$. In particular, for $\alpha_A$ we have a surjection $GA\twoheadrightarrow\mathcal C(A,A)$ which means there's an element $\eta\in GA$ such that $\alpha_A(\eta)=id_A$.
Hint 2
Naturality of $\alpha$ states $f\circ\alpha_A(x)=\alpha_B(Gf(x))$ for $f:A\to B$ and $x\in GA$, and in particular $f=\alpha_B(Gf(\eta))$.
Answer
Next, let $g\circ h = g\circ k$ where $h,k:A\to B$. Clearly this means $$Gg\circ Gh=G(g\circ h)=G(g\circ k)=Gg\circ Gk$$ and so $Gg(Gh(\eta))=Gg(Gk(\eta))$. Since $Gg$ is injective because $G$ is a monofunctor, we have $Gh(\eta)=Gk(\eta)$. Therefore, $$h=\alpha_B(Gh(\eta))=\alpha_B(Gk(\eta))=k$$ so $g$ is a mono.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3240939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Solution of $ty'' +(2t+3)y' +(t+3)y = 3e^{-t}$ via Laplace transform A recent question which was put on hold due to lack of context by the OP was the following:
Solve the following ODE using Laplace transforms.
$$ty'' +(2t+3)y' +(t+3)y = 3e^{-t}, \qquad y(0)=0$$
Putting the equation into the form
$$t(y^{\prime\prime}+2y^\prime+y)+3(y^\prime+y)=3e^{-t} $$
which has $y_c=ce^{-t}$ as a solution of its complementary equation immediately suggests $y=Ate^{-t}$ as a particular solution.
And this is borne out by substitution, with $A=1$. Applying the initial condition yields the solution
$$ y=te^{-t} $$
So why would the original OP want the equation solved using Laplace transforms?
Is there a shorter path using Laplace transforms than the following?
Use the fact that $(3e^{-t})^\prime+3e^{-t}=0$ to get the homogeneous equation
$$ [t(y^{\prime\prime}+2y^\prime+y)+3(y^\prime+y)]^\prime+[t(y^{\prime\prime}+2y^\prime+y)+3(y^\prime+y)]=0 $$
This simplifies to the homogeneous equation
$$
t(y^{\prime\prime\prime}+3y^{\prime\prime}+3y^\prime+y)+4(y^{\prime\prime}+2y^\prime+y)=0 $$
Taking the Laplace transform of this involves quite a bit of tedium which I will spare the reader, but yields the following:
\begin{eqnarray}
(s+1)^3Y^\prime+2(s+1)^2Y&=&0\\
(s+1)^2Y^\prime+2(s+1)Y&=&0\\
\left[(s+1)^2Y\right]^\prime&=&0\\
Y&=&\frac{c}{(s+1)^2}\\
y&=&cte^{-t}
\end{eqnarray}
So, with $c=1$, this yields the same solution found much more easily by inspection.
| Keeping in mind that
$$
\mathcal{L}(t\mathcal{D}(y)) = -\frac{d}{ds}\mathcal{L}(\mathcal{D}(y))
$$
assuming null initial conditions and applying this in
$$
t \mathcal{D}^2y + 3\mathcal{D}y = 3e^{-t}
$$
(writing $\mathcal{D}y = y'+y$, so that this is exactly the original equation)
we have
$$
-\frac{d}{ds}\left((s+1)^2 Y\right)+(s+1)Y = \frac{3}{s+1}
$$
or
$$
-Y'+\frac{1}{s+1}Y = \frac{3}{(s+1)^3}
$$
the homogeneous solution of the original time-domain equation (namely $c_1e^{-t}$, since $y=e^{-t}$ makes both $\mathcal{D}y$ and $\mathcal{D}^2y$ vanish) contributes
$$
Y_h = \frac{c_1}{s+1}
$$
$$
and the particular
$$
Y_p = \frac{1}{(s+1)^2}
$$
hence
$$
y(t) = (c_1+t)e^{-t}
$$
not difficult at all. Now if $y(0) = 0$ then
$$
y(t) = te^{-t}
$$
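As a sanity check, the final answer can be verified against the original ODE in a few lines of pure Python (a sketch; it uses the analytic derivatives $y' = (1-t)e^{-t}$ and $y'' = (t-2)e^{-t}$):

```python
import math

def y(t):   return t * math.exp(-t)           # proposed solution
def yp(t):  return (1 - t) * math.exp(-t)     # y'
def ypp(t): return (t - 2) * math.exp(-t)     # y''

def residual(t):
    # t y'' + (2t+3) y' + (t+3) y - 3 e^{-t} should vanish identically
    return t*ypp(t) + (2*t + 3)*yp(t) + (t + 3)*y(t) - 3*math.exp(-t)

print(max(abs(residual(t / 10)) for t in range(0, 101)))  # ~ 0 up to rounding
```

The residual is zero up to floating-point rounding on $[0,10]$, and $y(0)=0$ as required.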
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Question regarding surjective mapping I have come across a question while solving practice papers on the topic 'Functions'.
The question is as follows -
If $f : X \to Y $, find $f (X)$, when $f $ is a surjective or onto mapping.
Here $X $ and $Y $ are non-empty sets
Here is my approach -
As $f $ is a surjective mapping of $X$ to $Y$, then for each $y \in Y $, there exists an $x \in X $, such that $f (x) = y $. Thus $f (X) = Y$.
Can I be provided with a more formal proof ?
Suggestions for correction in my answer and a detailed answer with explanation would be helpful.
| Just to be really really pedantic, let us prove it by double inclusion. Let us set $f(X):=\left\{y\in Y\mid \exists\,x\in X \text{ such that } y=f(x)\right\}$. Thus $f(X)\subseteq Y$. On the other hand, surjectivity means that for every $y\in Y$ there exists $x\in X$ such that $y=f(x)$, that is to say, that $Y\subseteq f(X)$. Conclusion: $Y=f(X).$
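To make the double inclusion concrete on a finite toy example (a sketch; the sets and the map here are made up):

```python
X = {1, 2, 3, 4}
Y = {'a', 'b'}
f = {1: 'a', 2: 'b', 3: 'a', 4: 'b'}   # a surjective map f : X -> Y

image = {f[x] for x in X}              # f(X), by definition
assert image <= Y                      # f(X) is a subset of Y for any map into Y
assert Y <= image                      # surjectivity gives Y as a subset of f(X)
print(image == Y)                      # True
```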
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Help with sage math defining function of two variables Hello, I need help regarding SageMath, since I can't find anything about this in the manuals.
So I have a function of the form $F(r,t) = 2r H(t)$, and then I want to perform an operation on it involving differentiation. $H(t)$ is kept arbitrary. I know how to do this using SageMath. My question is: what if, before performing any operation, I want to perform a change of variables $u = t-r$ and $v=t+r$ first? So my function now becomes
$F =(v-u) H( (u+v)/2)$
Is it possible to define H((u+v)/2) in sage math, such that when it takes the derivative it takes the partial derivative wrt u and then wrt v?
| I think this is what you want.
var('t,r')
H = function('H', nargs=1)
F = 2*r*H(t)
print(F)
print(diff(F,r))
print(diff(F,r,t))
var('u,v')
G = F.subs(r=(v-u)/2, t=(u+v)/2)
print(G)
print(diff(G,u))
print(diff(G,u,v))
output:
2*r*H(t)
2*H(t)
2*diff(H(t), t)
-(u - v)*H(1/2*u + 1/2*v)
-1/2*(u - v)*D[0](H)(1/2*u + 1/2*v) - H(1/2*u + 1/2*v)
-1/4*(u - v)*D[0, 0](H)(1/2*u + 1/2*v)
You have to be careful with F = 2*r*H(t), though, as it won't know what "order" the arguments come in. F(r,t) = ... might work better there, but it will be weirder on the derivative side once you substitute because it will still think the "inputs" are r,t.
See function? for more documentation on how these abstract functions work, which isn't always intuitive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Properties of a derivative on a compact interval Suppose a function $F$ is differentiable on an interval $(a,b) \supset [0,1]$. Denote its derivative by $f$, and suppose that $f > 0$ on $[0,1]$.
Question 1: Is it true that $f$ can be bounded away from $0$ on $[0,1]$, i.e. that there exists some $c > 0$ such that $f(x) > c$ for all $x \in [0,1]$? If $f$ is continuous, this is clearly true, as a continuous function attains its minimum on a compact set, and this minimum is $> 0$ by assumption. If $f$ were an arbitrary function (not a derivative), this is clearly false; for instance, consider the function $f(x) = 1$ when $x = 0$ and $f(x) = x$ elsewhere. But this function has a removable discontinuity, and a derivative cannot have a removable (or jump) discontinuity, so it is not the derivative of any function.
Question 2: Is it true that $f$ is bounded on $[0,1]$? Note that if we remove the $f > 0$ requirement, this is not true (for instance, consider $F(x) = x^2 \sin(1/x^2)$, with $F(0) = 0$; $F$ is differentiable, so $f$ exists, but $f$ is not bounded).
This question may be relevant, but it doesn't directly answer the above.
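The unboundedness in the $F(x) = x^2\sin(1/x^2)$ example from Question 2 is easy to see numerically: at the sample points $x_n = 1/\sqrt{2\pi n}$ the cosine term dominates and $|f(x_n)| = 2\sqrt{2\pi n}$ (a sketch):

```python
import math

def f(x):
    # derivative of F(x) = x^2 sin(1/x^2) for x != 0
    return 2*x*math.sin(1/x**2) - (2/x)*math.cos(1/x**2)

vals = [abs(f(1 / math.sqrt(2*math.pi*n))) for n in (1, 10, 100, 1000)]
print(vals)  # grows without bound as x_n -> 0
```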
| No. For instance, let $f$ be piecewise linear and positive on $[0,1)$ such that on an infinite sequence of intervals approaching $1$, $f$ alternates between jumping down to values approaching $0$, jumping up to values approaching $\infty$, and jumping back down to $1$ and remaining constant with value $1$. Define $F(x)=\int_0^xf(t)\,dt$. If we choose the intervals where $f$ takes values other than $1$ to be sufficiently small and sparse (so as $t\to 1$, $f(t)=1$ for a quickly increasingly large proportion of the time), the integrals of $f$ over these intervals will have a negligible effect on the limiting behavior of $F(x)$ as $x\to 1$. So, $F$ will extend continuously to $1$ with $F'(x)=1$. We can then extend $F$ to be differentiable on an open interval containing $[0,1]$ (e.g., by making its derivative $1$ outside of $[0,1]$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Recurrence relations and power series solution I am given the following initial value problem: $$(1-x^2)y''+7xy'-26y=0 \qquad , \qquad y(0)=0 \qquad , \qquad y'(0)=4$$
I have solved for the singular points, which are $x= 1, -1$
The question then tells me that I can find a normal power series solution for $y$ about $x=0$ $$y(x)=\sum_{m=0}^{\infty} a_m x^m $$
I then went on to find the recurrence relation, which I determined to be:
$$\frac{a_{m+2}}{a_m}=\frac{m^2-8m+26}{(m+1)(m+2)}$$
I plugged the values $m=0$ to $m=3$ into the above formula.
So:
$m=0 \implies a_2 = 13 a_0$,
$m=1 \implies a_3 = \frac{19}{6} a_1$,
$m=2 \implies a_4 = \frac{91}{6} a_0$,
$m=3 \implies a_5 = \frac{209}{120} a_1$,
So far, all of my answers are correct. What I am finding difficult is using the above results and the given initial conditions to find the first $3$ non-zero terms of the power series solution for $y$.
I appreciate any help,
thank you for your time.
| Considering the differential equation
$$
(1-x^2)y''+\alpha x y' + \beta y = 0
$$
and substituting $y = \sum_{k=0}^n a_k x^k$ we obtain the recurrences
$$
2a_2+\beta a_0 = 0\\
6a_3+(\alpha+\beta)a_1 = 0
$$
and for $k \ge 4$
$$
k(k-1)a_k +((k-2)\alpha +\beta-(k-2)(k-3))a_{k-2} = 0
$$
Attached is a plot showing in black the numerical integration for $y$ and in red the series representation with $n=8$
NOTE
This series represents the solution for $-1 < x < 1$.
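To answer the question that was actually asked: the initial conditions give $a_0 = y(0) = 0$ and $a_1 = y'(0) = 4$, and the recurrence from the question then generates all further coefficients. A quick sketch with exact arithmetic:

```python
from fractions import Fraction

a = [Fraction(0), Fraction(4)]   # a_0 = y(0) = 0,  a_1 = y'(0) = 4
for m in range(8):
    # a_{m+2} = (m^2 - 8m + 26) / ((m+1)(m+2)) * a_m
    a.append(Fraction(m*m - 8*m + 26, (m + 1)*(m + 2)) * a[m])

print([(m, c) for m, c in enumerate(a) if c != 0][:3])
```

Since $a_0 = 0$, every even coefficient vanishes, and the first three non-zero terms come out as $y(x) = 4x + \tfrac{38}{3}x^3 + \tfrac{209}{30}x^5 + \dotsb$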
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Example of exotic $S_5$ as a Galois group Is there an example of a sextic irreducible polynomial over $\mathbb{Q}$ with Galois group isomorphic to $S_5$?
The transitive action of the Galois group of this polynomial on the 6 roots of $p(x)$ would give rise to the exotic embedding $S_5\to S_6$.
| The Galois groups of sextic polynomials have been determined here. In table $2$ on page $5$, the group T14 is $S_5$, generated by $(15364), (16)(24), (3465)$ in $S_6$.
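If the generators from the table are transcribed correctly, a brute-force closure computation should confirm a transitive subgroup of $S_6$; a pure-Python sketch (permutations encoded as tuples $p$ with $p[i-1]$ the image of $i$, and the expected order of $120$ printed rather than assumed):

```python
# (15364): 1->5, 5->3, 3->6, 6->4, 4->1;  (16)(24);  (3465)
gens = [(5, 2, 6, 1, 3, 4), (6, 4, 3, 2, 5, 1), (1, 2, 4, 6, 3, 5)]

def mul(p, q):                      # (p*q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(6))

group = set(gens)                   # close under multiplication by generators;
frontier = list(gens)               # finite order makes this the whole subgroup
while frontier:
    new = []
    for g in frontier:
        for h in gens:
            k = mul(g, h)
            if k not in group:
                group.add(k)
                new.append(k)
    frontier = new

print(len(group))                   # expect 120 if the table's claim is right
orbit = {g[0] for g in group}       # images of 1: transitivity check
print(sorted(orbit))
```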
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Trying to show that the logarithm of this matrix does not converge let $A \in O(3) - SO(3)$, i.e., $A$ is an orthogonal $3 \times 3$ matrix with real entries that has determinant $-1$. I'm trying to show that $\log A$, defined as:
\begin{equation*}
\log A = (A-I) - \frac{(A - I)^2}{2} + \frac{(A - I)^3}{3} - \frac{(A - I)^4}{4} + \dotsb
\end{equation*}
does not converge.
My thinking: We know that $A$ has to be close to $I$, the identity matrix, for $\log A$ to converge. Since $I \in SO(3)$, i.e., it is an orthogonal matrix with determinant $1$, and $A \in O(3) - SO(3)$, $A$ cannot be contained in a small neighborhood of $I$, and thus the series cannot converge.
However, I couldn't show this in any rigorous way. Any help would be appreciated.
Thanks.
| Let's break this down. If the series converged to a real matrix $M = \log A$, we would have $e^M = A$, and for any real matrix $M$, $\det e^M = e^{\operatorname{tr} M} > 0$. Can we get a real matrix whose exponential has determinant $d$ when $d=-1$?
Yup, it's bad.
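To see the divergence concretely, take the reflection $A = \operatorname{diag}(-1, 1, 1) \in O(3) - SO(3)$. Then $A - I = \operatorname{diag}(-2, 0, 0)$, so the $(1,1)$ entry of the partial sums is the scalar series for $\log(1+x)$ evaluated at $x = -2$, well outside its radius of convergence (a sketch):

```python
x = -2.0          # the (1,1) entry of A - I
partial = 0.0
partials = []
for k in range(1, 30):
    partial += (-1) ** (k + 1) * x**k / k   # (A-I) - (A-I)^2/2 + ...
    partials.append(partial)
print(partials[:4], "...", partials[-1])    # runs off to -infinity
```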
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A function $g(x)$ has one and only one real root if $g'(x)\leq k <0$. $g : \mathbb{R} \to \mathbb{R}$ is differentiable on $\mathbb{R}$. Then $g(x)$ has one and only one real root if $g'(x)\leq k <0$.
Proof attempt:
Let us assume the contrary, i.e. $g(x)$ has no real zero at all.
Therefore, being continuous, $g(x)$ cannot be both positive and negative on $\mathbb{R}$ . So, firstly we assume that $g(x) <0$ for every $x \in \mathbb{R}$.
We take some $a>0$. (WLOG, take $a=1$). Now, $g(1)/1<0$ and $g(1)/k >0$. So, $\displaystyle\frac{g(-g(1)/k)-g(1)}{-g(1)/k-1} \leq k <0 \implies -k \leq \displaystyle\frac{g(-g(1)/k)-g(1)}{g(1)/k+1} \implies 0<-k \leq g(-g(1)/k) $
So, we have found at least one point in the domain, where $g(x)$ is positive. So, $g(x)$ must have a zero. Now, $g'(x)<0, \ \ \forall x \ \in \mathbb{R}$ makes the function one-to-one.
[Note that the numerator must be positive, since $1>- g(1)/k \implies g(-g(1)/k)>g(1)$]
For the assumption that $g(x)>0$, we consider the points $a<0$ and $-g(a)/k$ (WLOG, take $a=-1$). Everything else is kept the same.
Are the statement and the proof both correct, or is there any mistake?
Please verify.
| We can prove that:
$$\lim_{x \to -\infty} g(x) = \infty \quad (1)$$
$$\lim_{x \to \infty} g(x) = -\infty \quad (2)$$
Let's prove the second statement; the first can be proved by a similar argument.
By the mean value theorem, for any $x > 0$ there is some $c \in (0,x)$ with
$$g(x) - g(0) = g'(c)\,x \leq kx,$$
so that
$$g(x) \leq g(0) + kx \to -\infty \quad \text{as } x \to \infty.$$
(For $x < 0$ the same argument gives $g(x) \geq g(0) + kx$, which tends to $\infty$ as $x \to -\infty$ since $k < 0$; this proves the first statement.)
By the intermediate value theorem $g$ therefore takes the value $0$ somewhere, and since $g'(x) < 0$ makes $g$ strictly decreasing, it takes it exactly once.
This proves that $g$ has one and only one real root.
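A concrete illustration (a sketch; the example $g(x) = 1 - 2x + \sin x$ is made up, with $g'(x) = -2 + \cos x \leq -1 < 0$, so it must have exactly one real root, which bisection locates):

```python
import math

def g(x):
    return 1 - 2*x + math.sin(x)     # g'(x) = -2 + cos(x) <= -1 < 0

lo, hi = 0.0, 1.0                    # g(0) > 0 > g(1): a sign change
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root, g(root))                 # the unique real root
```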
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3241970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Discontinuous point for a function $\frac{|\sin{x}|}{\sin{x}}$ I want to determine what type of discontinuity a function has by using one-sided limits for the function $$f(x) = \frac{|\sin{x}|}{\sin{x}}$$
I found the left and right hand limits at $x=0$ (because the $f(x)$ is undefined for $f(0)$). I have found that $$\lim_{x \rightarrow 0^-}f(x) = 0 \qquad \text{and} \qquad \lim_{x \rightarrow 0^+}f(x) = 0$$
It appears to me, that the limit exists and is zero, but it shouldn't be like that I guess. Could someone help me out?
| It might prove helpful to visualize the function:
At every integer multiple of $\pi$, there is a discontinuity, since $\sin(k\pi) \equiv 0$ for all integers $k$. We make a jump because the function effectively "flips sign" here: wherever $\sin(x)<0$ we have $f(x) = |\sin(x)|/\sin(x) = -1$ and similarly for $\sin(x) > 0$ we have $f(x) = 1$.
Let's pick $0$ as our discontinuity. We want to show the right- and left-hand limits are not equal there. Indeed, let's consider a path from $x=2$ (or whatever less than $\pi$) to $0$. Since $x>0$ here, then $\sin(x) > 0$ and $|\sin(x)| = \sin(x)$. Then, for a "path" of $x$ going from positive numbers to $0$ we would have
$$f(x) = \frac{|\sin(x)|}{\sin(x)} = \frac{\sin(x)}{\sin(x)} = 1 \xrightarrow{x \to 0^+} 1$$
However, let's say we started at some $x$ less than $0$, say $x=-2$, and approached $0$ for the left-hand limit. Then $\sin(x) < 0$ on this interval, and $|\sin(x)| =- \sin(x)$ as a result. And thus here,
$$f(x) = \frac{|\sin(x)|}{\sin(x)} = \frac{-\sin(x)}{\sin(x)} = -1\xrightarrow{x \to 0^-} -1$$
We in turn conclude:
$$\lim_{x \to 0^-} f(x) = -1 \;\;\;\;\; \lim_{x \to 0^+} f(x) = 1$$
establishing a discontinuity at $x=0$. A similar argument could be done for any $x=k\pi$.
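A quick numerical look at the one-sided limits (a sketch):

```python
import math

def f(x):
    return abs(math.sin(x)) / math.sin(x)

print(f(1e-8), f(-1e-8))                       # 1.0  -1.0  (limits at 0 differ)
print(f(math.pi - 1e-8), f(math.pi + 1e-8))    # 1.0  -1.0  (same jump at pi)
```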
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Given three positive numbers $a,b,c$ so that $a\geqq b\geqq c$. Prove that $\sum\limits_{cyc}\frac{a+bW}{aW+b}\geqq 3$ .
Given three positive numbers $a, b, c$ so that $a\geqq b\geqq c$. Prove that
$$\sum\limits_{cyc}\frac{a+ b\sqrt{\frac{b}{c}}}{a\sqrt{\frac{b}{c}}+ b}\geqq 3$$
My attempt:
Firstly, we need to have one general inequality
$$\sum\limits_{cyc}\frac{a+ bW}{aW+ b}\geqq 3$$
By the Buffalo Way, let $c= 1$, $b= 1+ u$, $a= 1+ u+ v$, so that $W= \sqrt{1+ u}$ (keeping things homogeneous). My guess is that
$${\rm W}= \sqrt{\frac{b}{c}}$$
An inspiration: Given three positive numbers $a,b,c$ so that $a\leqq b\leqq c$. Prove that $\sum\limits_{cyc}\frac{a+1.4b}{1.4a+b}\geqq 3$ .
| Let $a=x^2$, $b=y^2$ and $c=z^2$, where $x$, $y$ and $z$ are positives.
Thus, we need to prove that
$$\sum_{cyc}\frac{a+\sqrt{\frac{b}{c}}b}{\sqrt{\frac{b}{c}}a+b}\geq3$$ or
$$\sum_{cyc}\frac{x^2z+y^3}{x^2y+y^2z}\geq3.$$
Now, by AM-GM
$$\sum_{cyc}\frac{x^2z+y^3}{x^2y+y^2z}\geq3\sqrt[3]{\frac{\prod\limits_{cyc}(y^3+x^2z)}{\prod\limits_{cyc}(x^2y+y^2z)}}$$ and it's enough to prove that
$$\prod\limits_{cyc}(x^3+z^2y)\geq\prod\limits_{cyc}(x^2y+y^2z)$$ or
$$\sum_{cyc}(x^5z^4+x^6y^2z)\geq xyz\sum_{cyc}(x^3y^3+x^4yz),$$ which is true by Rearrangement twice:
$$\sum_{cyc}x^5z^4=x^4y^4z^4\sum_{cyc}\frac{x}{y^4}\geq x^4y^4z^4\sum_{cyc}\frac{x}{x^4}=xyz\sum_{cyc}x^3y^3$$ and
$$\sum_{cyc}x^6y^2z=x^2y^2z^2\sum_{cyc}\frac{x^4}{z}\geq x^2y^2z^2\sum_{cyc}\frac{x^4}{x}=xyz\sum_{cyc}x^4yz.$$
Done!
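A numerical spot-check of the claimed inequality over random samples with $a \geq b \geq c > 0$ (a sketch; this of course proves nothing, it just guards against typos):

```python
import math, random

def lhs(a, b, c):
    def term(x, y, z):                 # (x + y*sqrt(y/z)) / (x*sqrt(y/z) + y)
        w = math.sqrt(y / z)
        return (x + y*w) / (x*w + y)
    return term(a, b, c) + term(b, c, a) + term(c, a, b)

random.seed(0)
worst = min(lhs(*sorted((random.uniform(0.01, 10) for _ in range(3)),
                        reverse=True)) for _ in range(2000))
print(worst)   # stays >= 3
```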
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Peculiar (convergent?) definite integral I have been trying to calculate the integral:
$$\int_1^{\infty} \left(\frac{x^2}{\sqrt{x^4-1}}-1\right)dx$$
A hint is to multiply the integrand by $x^{\lambda}$, calculate the two terms independently as functions of $\lambda$, and then set $\lambda=0$. But this did not work.
By substituting $x=\frac{1}{u}$ the integral transforms into $$\int_0^1 \left(\frac{1}{\sqrt{1-u^4}}-1\right)\frac{1}{u^2}du$$
which leads to a Beta function with one negative argument.
The result should be $$1-\frac{\pi}{\Gamma(\frac{1}{4})^2}$$
Thank you very much in advance!
| The integral in $u\in[0,1]$ is for me simpler, so let us introduce for a handy notation
$$y=y(u) = \sqrt{1-u^4}\ .
$$
Then for the integral to be calculated we observe first
$$
\frac\partial{\partial u}
\left(\frac{1-y}u\right) =
\frac{u^2}y-\left(\frac 1y-1\right)\frac 1{u^2}\ .
$$
So we need to calculate
$$
\begin{aligned}
J
&=
\int_0^1 \left(\frac{1}{\sqrt{1-u^4}}-1\right)\frac{1}{u^2}\;du
%\\&
=
\int_0^1 \left(\frac1y-1\right)\frac{1}{u^2}\;du
\\
&=\left[\frac {1-y}u\right]_0^1
-
\int_0^1 \frac{u^2}y\;du
%\\&
=
1
-
\int_0^1 \frac{u^2}{\sqrt{1-u^4}}\;du
\\
&= 1-\frac 14B\left(\frac 12,\frac 34\right)\qquad
\text{ with $B$ being the $\beta$-function}
\\
&=1-\frac 14\cdot\frac
{\Gamma\left(\frac 12\right)\cdot\Gamma\left(\frac 34\right)}
{\Gamma\left(\frac 54\right)}
%\\&
=1-\frac 14\cdot\frac
{\sqrt\pi\cdot\Gamma\left(\frac 34\right)}
{\frac 14\Gamma\left(\frac 14\right)}
\\
&=1-\sqrt\pi
\cdot
\frac
{\Gamma\left(\frac 14\right)\cdot\Gamma\left(\frac 34\right)}
{\Gamma\left(\frac 14\right)^2}
%\\&
=1-\sqrt\pi
\cdot
\frac
{\sqrt{2\pi}\Gamma\left(2\cdot\frac 14\right)}
{\Gamma\left(\frac 14\right)^2}
\\
&
=
1-
\frac 12
(2\pi)^{3/2}\cdot
\frac
1{\Gamma\left(\frac 14\right)^2}
\ .
\end{aligned}
$$
I tried to represent the result closer to the shape of the prediction in the OP.
We have the assisted checks / numerical validations:
sage: value = 1 - (2*pi)^(3/2) / 2 / gamma(1/4)^2
sage: value.n()
0.400929882632204
sage: var('u');
sage: integral( (1/sqrt(1-u^4) - 1)/u^2, u, 0, 1).n()
0.40092987307719175
sage: integral( u^2/sqrt(1-u^4), u, 0, 1)
1/4*beta(1/2, 3/4)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why is A union B also called "A or B"? In $A \cup B$, an element belongs either to $A$, or to $B$, or to both $A$ and $B$, right?
So shouldn't it be called A and/or B? Due to this I am unable to solve a problem in my textbook.
| Because the elements of $A\cup B$ are exactly those objects that either belong to $A$, or belong to $B$. The claim "$p$ or $q$" is true if either $p$ is true, or $q$ is true, or $p$ and $q$ are both true.
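In programming terms the correspondence is exactly the inclusive "or" (a tiny Python sketch):

```python
A = {1, 2, 3}
B = {3, 4}
union = A | B
for x in range(6):
    # membership in the union is the inclusive "or" of the memberships
    assert (x in union) == ((x in A) or (x in B))
print(union)   # {1, 2, 3, 4}
```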
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 5
} |
Why is the Killing form of $\mathfrak{g}$ restricted to a subalgebra $\mathfrak{a} \subset \mathfrak{g}$ not the Killing form of $\mathfrak{a}$? I know that the Killing form of $\mathfrak{g}$ restricted to an ideal $I \subset \mathfrak{g}$ is just the Killing form of $I$.
However, what happens in general if we relax the conditions and just consider the Killing form restricted to a subalgebra $\mathfrak{a} \subset \mathfrak{g}$?
| The question 'what happens' is best answered by looking at an example, so the first question is: where do we find examples of subalgebras that are not ideals?
Here is a class of examples. Consider the real Lie algebra $\mathfrak{g} = \mathfrak{sl}_n$ of traceless $n$-by-$n$ matrices. It has a Cartan decomposition $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ where $\mathfrak{k}$ is the $(n(n-1)/2)$-dimensional subalgebra of anti-symmetric matrices and $\mathfrak{p}$ the $(n(n+1)/2 - 1)$-dimensional subspace (not algebra) of symmetric matrices. Inside $\mathfrak{p}$ there is a unique-up-to-conjugation maximal abelian subalgebra $\mathfrak{a}$, which we might as well (and, following convention, will) take to be the $(n-1)$-dimensional abelian subalgebra of diagonal matrices.
The nice thing about this set-up (well there actually many even nicer things about it, that is why these names are standard, but well) is that neither $\mathfrak{k}$ nor $\mathfrak{a}$ is an ideal in $\mathfrak{g}$.
Let's look at $\mathfrak{a}$ first. It is abelian, so the Killing form of $\mathfrak{a}$ itself is $0$. However, the restriction of the Killing form of $\mathfrak{g}$ to $\mathfrak{a}$ is not. We can decompose $\mathfrak{g}$ as the direct sum of $\mathfrak{a}$ and a bunch of root spaces for the adjoint action of $\mathfrak{a}$ (in other words: we can treat $\mathfrak{a}$ as a Cartan subalgebra; this is because $\mathfrak{g}$ is split). So for $A = \operatorname{diag}(a_1, \ldots, a_n) \in \mathfrak{a}$, the operator $ad(A)$ is diagonal with $n-1$ zero eigenvalues (coming from $\mathfrak{a}$ itself) and eigenvalues $a_i - a_j$ on the $n(n-1)$ root spaces.
It follows that $(A, A) = \sum_{i \neq j} (a_i - a_j)^2 = 2n \operatorname{tr}(A^2)$ (using $\sum_i a_i = 0$), which is strictly positive for $A \neq 0$ and hence rather different from $0$. Here $(.,.)$ denotes the Killing form of $\mathfrak{g}$. More generally $(A_1, A_2) = 2n \operatorname{tr}(A_1 A_2)$, so the restriction of the Killing form to $\mathfrak{a}$ is even positive definite.
The case of the subalgebra $\mathfrak{k}$ is rather different, but I have to run now; maybe I will come back to it later, or you can try it yourself.
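For diagonal traceless $X, Y \in \mathfrak{a}$, $\operatorname{ad}(X)$ acts on the root space spanned by $E_{ij}$ with eigenvalue $x_i - x_j$, so the restricted Killing form is $\sum_{i\neq j}(x_i-x_j)(y_i-y_j)$. A quick sketch checking this against the standard $\mathfrak{sl}_n$ identity $B(X,Y) = 2n\operatorname{tr}(XY)$:

```python
n = 4

def killing(x, y):
    # tr(ad X ad Y) for diagonal traceless X, Y: sum over root spaces E_ij
    return sum((x[i] - x[j]) * (y[i] - y[j])
               for i in range(n) for j in range(n) if i != j)

x = [3, -1, -1, -1]                  # traceless: entries sum to 0
assert sum(x) == 0
print(killing(x, x), 2 * n * sum(t * t for t in x))   # both 96
```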
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Derivative function continuous iff partial derivatives continuous
Let $f:\mathbb{R} ^{n}\rightarrow \mathbb{R} ^{m}$ be differentiable.
The derivative function $Df:\mathbb{R} ^{n}\rightarrow L\left( \mathbb{R} ^{n},\mathbb{R} ^{m}\right)$ is continuous in respect to the operator norm $\left\| A \right\|_{L\left( \mathbb{R} ^{n},\mathbb{R} ^{m}\right)}:=\sup _{\left\| v\right\| =1}\left\| Av\right\|$, iff the partial derivatives $\dfrac {\partial f_{i}}{\partial x_{j}}$ are continuous for all $i\in \left\{ 1,\ldots ,m\right\}$ and $j\in \left\{ 1,\ldots ,n\right\}$.
How can I show this?
| The idea behind all the proofs I've seen is to use the mean value theorem (or mean value inequality if you're working in general Banach spaces). This is carried out in a clear fashion in Henri Cartan's book Differential calculus in proposition 3.7.2. BTW this book is out of print, but I think there is a reprint under a different name; see https://www.amazon.com/Differential-Calculus-Normed-Spaces-Analysis/dp/154874932X. There is also a proof in Loomis and Sternberg's book Advanced Calculus in Theorem 8.2 of Chapter 3. I HIGHLY recommend both these books. You can also find a proof in Spivak's Calculus on Manifolds, in Theorem 2-8 (Spivak only proves the "if" part).
The "only if" part is pretty much trivial once you know how $Df(a)$ and the various partials are related (see either Cartan/ Loomis and Sternberg).
As an outline for the "if" part, it suffices to prove it in the case $m=1$ (it's easy to deduce the general case from this). Notice the following equality:
\begin{align}
& f(x_1, \dots, x_n) - f(a_1, \dots, a_n) - \sum_{i=1}^n \dfrac{\partial f}{\partial x_i}(a) \cdot (x_i-a_i) \\
&= f(x_1, x_2, \dots x_n) - f(a_1, x_2, \dots, x_n) - \dfrac{\partial f}{\partial x_1}(a) \cdot (x_1-a_1) \\
&+ f(a_1, x_2, \dots, x_n) - f(a_1, a_2, \dots, x_n) - \dfrac{\partial f}{\partial x_2}(a) \cdot (x_2-a_2) \\
& \vdots \\
&+ f(a_1, \dots, a_{n-1}, x_n) - f(a_1, \dots, a_{n-1}, a_n) - \dfrac{\partial f}{\partial x_n}(a) \cdot (x_n-a_n)
\end{align}
Now, applying the mean-value theorem (the standard single variable version) to each line separately, and using the continuity of the partials allows you to complete the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3242905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Estimating quality of projection Suppose we are given a vector $v$ and vectors $\mu_i$:
$v = \mu_1+\mu_2+...+\mu_m$, where $\mu_i \in R^n$, all $\mu_i$ are of unit length.
Oracle will give me $k$ vectors $\mu_{j_1}, \mu_{j_2},...\mu_{j_k}$ from the original set such that when I project $v$ onto the subspace spanned by these vectors, the length of the projection is the highest possible. In other words, among all combinations of $k$ vectors from $[\mu_1,...\mu_m]$, the vectors $[\mu_{j_1}, \mu_{j_2},...\mu_{j_k}]$ give the highest length of projection. Let us denote by $v_{\text{proj}}$ the projection of $v$ onto $[\mu_{j_1}, \mu_{j_2},...\mu_{j_k}]$
I want to estimate quality of projection before oracle gives me this $k$ vectors. I want to give upper bound on $||v - v_{\text{proj}}|| $
As far as I understood it is very difficult to obtain these $k$ vectors by myself. However, I know that for any two vectors $\mu_i, \mu_j$, $||\mu_i-\mu_j|| \leq \alpha$, where $\alpha$ is a given positive number.
Small values of $\alpha$ will tell me that all $\mu_i$ are close to each other and heading towards same direction. I would suspect then that projection will be good, and its length will be close to the length of original vector. How can I use this to give an upper bound $||v - v_{\text{proj}}|| $?
My attempts:
Without loss of generality lets assume that $k$ optimal vectors are first $k$ vectors in the list, i.e $\mu_1,\mu_2,...\mu_k$. Lets denote by $P$ projection operator on the space spanned by $\mu_1,\mu_2,...\mu_k$.
$\|v - v_{\text{proj}}\| = \|v - P(v)\| = \|v - P(\mu_1+\mu_2+...+\mu_m)\| = $
$\|v - P(\mu_1) - P(\mu_2) - ... - P(\mu_m)\| = $
$ \| v - \mu_1 - \mu_2 - ... - \mu_k - P(\mu_{k+1}) - P(\mu_{k+2}) - ... - P(\mu_m)\| = $
$\|\mu_{k+1} - P(\mu_{k+1}) + \mu_{k+2} - P(\mu_{k+2}) + ... + \mu_{m} - P(\mu_{m})\|$
$\|v - v_{\text{proj}}\| \leq \|\mu_{k+1} - P(\mu_{k+1})\| + \|\mu_{k+2} - P(\mu_{k+2})\| + ... + \|\mu_{m} - P(\mu_{m})\|$
$\|v - v_{\text{proj}}\| \leq (m-k)\alpha$
So in order to make $\|v - v_{\text{proj}}\| \leq \epsilon$, we need $k \geq \frac{m\alpha - \epsilon}{\alpha}$
I am not satisfied with this result because $k$ grows linearly with $m$. I want it to grow much slower, something like $\log(m)$. My goal is to show that under some constraints on $\mu_i$, we need only approximately $\log(m)$ vectors to approximate $v$.
I think the bound can be improved substantially. First Cauchy inequality isn't very tight and second, I used $|\mu_{k+1} - P(\mu_{k+1})\| \leq \alpha$ which is also very loose.
I am open for additional constraints on $\mu_1,...\mu_m$ to achieve logarithmic growth
As Alex Ravsky has noted, we also need a constraint on $\alpha$ in order to achieve logarithmic growth. Assume that $m \leq n$, $\mu_i$ is the $i$-th standard ort of the space $\mathbb{R}^n$, and $\alpha = \sqrt{2}$. Then $\|v - v_{\text{proj}}\| = \sqrt{m-k}$
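The orthonormal worst case described here can be checked in a few lines (a sketch with $m = n = 16$ and $k = 5$; the sizes are arbitrary choices):

```python
import math

m = n = 16
mu = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(m)]  # mu_i = e_i
v = [sum(col) for col in zip(*mu)]        # v = mu_1 + ... + mu_m = (1, ..., 1)

k = 5                                     # project onto span(e_1, ..., e_k)
v_proj = [v[j] if j < k else 0.0 for j in range(n)]
err = math.sqrt(sum((p - q)**2 for p, q in zip(v, v_proj)))
print(err, math.sqrt(m - k))              # both sqrt(11)
```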
| In general, the answer is negative. Indeed, assume that $m\leq n-1$, $\alpha\le \sqrt{2}$ and for each $i\le m$, $\mu_i=\sqrt{1-\tfrac{\alpha^2}{2}}e_{m+1}+\tfrac{\alpha}{\sqrt{2}}e_i$, where for each $j$, $e_j$ is the $j$-th standard ort of the space $\mathbb{R}^n$ (that is, its $j$-th coordinate is $1$ and the other coordinates are $0$).
Let $\mu_{j_1}, \mu_{j_2},...\mu_{j_k}$ be any $k$ vectors from the original set and $v_{\text{proj}}=\sum \lambda_i \mu_{j_i}$.
Then
$$\|v - v_{\text{proj}}\|^2=
\left(1-\frac{\alpha^2}{2}\right)\left(\sum_i \lambda_i-m\right)^2+\sum_i \frac{\alpha^2}{2}(\lambda_i-1)^2+(m-k) \frac{\alpha^2}{2}\ge$$ $$ (m-k) \frac{\alpha^2}{2}.$$
We can improve this lower bound as follows.
Put $\beta=\tfrac{\alpha^2}{2}\le 1$, $\Lambda_1=\sum_i\lambda_i$ and $\Lambda_2=\sum_i\lambda_i^2$. Remark that by the inequality between quadratic and arithmetic means, $\Lambda_2\ge \tfrac{\Lambda_1^2}{k}$. Then
$$\|v - v_{\text{proj}}\|^2=
\left(1-\beta\right)\left(\sum_i \lambda_i-m\right)^2+\sum_i \beta(\lambda_i-1)^2+(m-k) \beta\ge $$
$$\left(1-\beta\right)(\Lambda_1^2-2m\Lambda_1)+ \beta\left(\frac{\Lambda_1^2}{k}-2\Lambda_1 \right)+m^2\left(1-\beta\right) +m\beta=$$
$$\left(1-\beta+\frac{\beta}{k}\right)\Lambda_1^2-2(\beta+m(1-\beta))\Lambda_1+m^2\left(1-\beta\right) +m\beta=$$
$$\left(\sqrt{1-\beta+\frac{\beta}{k}}\Lambda_1-\frac{\beta+m(1-\beta)}{\sqrt{1-\beta+\frac{\beta}{k}}}\right)^2- \frac{(\beta+m(1-\beta))^2}{1-\beta+\frac{\beta}{k}} +m^2\left(1-\beta\right) +m\beta\ge $$
$$-\frac{(\beta+m(1-\beta))^2}{1-\beta+\frac{\beta}{k}} +m^2\left(1-\beta\right) +m\beta=$$
$$\frac{1}{k-k\beta+\beta}\left(-k(\beta+m(1-\beta))^2 +( k-k\beta+\beta)(m^2 (1-\beta) +m\beta)\right)=$$
$$ (m-k) \beta \frac{m-m\beta+\beta}{k-k\beta+\beta}.$$
On the other hand, we can obtain an upper bound for $\|v - v_{\text{proj}}\|$ based on the following balancing sum
Lemma (see this answer for references) For any sequence $\{\nu_1,\dots,\nu_t\}$ of vectors of $\Bbb R^n$ of unit length there exists a sequence $\{\varepsilon_1,\dots, \varepsilon_t\}$ such that $\|\sum_{i=1}^t \varepsilon_i\nu_i\|\le\sqrt{n}$.
Now we inductively construct a sequence $\{v_s\}$ of vectors in $\Bbb R^n$ and a decreasing sequence $\{A_s\}$ of subsets of $\{1,\dots,m\}$ as follows. Put $A_0=\{1,\dots,m\}$. Given $A_s$, put $v_s=\sum_{i\in A_s} \mu_i$. In particular, $v_0=v$. By Lemma, there exists a sequence $\{\varepsilon_i: i\in A_s\}$ such that $\|\sum_{i\in A_s} \varepsilon_i\mu_i\|\le\sqrt{n}$. Let $A_{s+1}$ be the smallest of the sets $\{i\in A_s: \varepsilon_i=1\}$ and $\{i\in A_s: \varepsilon_i=-1\}$. Remark that $| A_{s+1}|\le |A_s|/2$.
We have $$\|v_s-2v_{s+1}\|=\|v_{s+1}-(v_s- v_{s+1})\|\le \sqrt{n}.$$ Thus
$$\|v_0-2^{s+1}v_{s+1}\|\le \|v_0-2v_1\|+\|2v_1-4v_2\|+\dots +\|2^sv_s-2^{s+1}v_{s+1}\|\le$$ $$\sqrt{n}\left(1+2+\dots +2^s\right)= \sqrt{n}\left(2^{s+1}-1\right).$$
Now pick the smallest $s$ such that $|A_s|\le k$ (so $m/2^{s-1}>k$) and let $\{j_1,\dots, j_k\}\supset A_s$. Then
$$\|v - v_{\text{proj}}\|\le \|v_0-2^{s}v_{s}\|\le \sqrt{n}\left(2^{s+1}-1\right)<\sqrt{n}\left(\frac {4m}k-1\right).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
What is the difference betwen equivalence and isomorphism of functors in categories. I am learning category theory using Basic Category Theory by Tom Leinster as my main source. In the chapter on natural transformations he says that isomorphism of categories is unreasonably strict for the notion of the sameness of two categories. Isomorphism would require functors,
$$
F:A\rightarrow B,G:B\rightarrow A
$$
such that
$$
G\circ F=1_A, F\circ G=1_B
$$
Instead he says that for equivalence we loosen the requirement on these functors to be isomorphic,
$$
G\circ F\cong 1_A,F\circ G\cong 1_B
$$
Then this is better. This section threw me for a loop. I don't understand the difference between the equivalence and the isomorphism statements. Any help clarifying what is trying to be said here is greatly appreciated.
| Well, the actual difference between the two statements is that for an equivalence of categories, we only require that that the composites $F \circ G$ and $G \circ F$ are naturally isomorphic to the identity functors rather than exactly equal. That is, there's a collection of isomorphisms $\eta_x :GF(x) \rightarrow x$ for each object of $A$ such that whenever $f: x \rightarrow y$ is a morphism in $A$, $\eta_y GF(f) = f \eta_x$, and a similar natural isomorphism for $F \circ G$.
As for why we do this... Imagine we're both doing group theory, so we both get ourselves a category of groups and start doing group theory in that category. But then we compare our categories and they're not the same: your category has one object for each isomorphism class of groups, while the objects of my category are given by a set $X$ along with a multiplication $\otimes: X \times X \rightarrow X$ which makes it a group.
Our categories aren't isomorphic, not by a long shot: for every object in your category there's a large class of objects in mine. So if we could only use isomorphisms of categories it would look like we're working on entirely different things.
Fortunately, our two categories are equivalent: using one functor which sends a set and a multiplication to its isomorphism class, and the other functor which takes each isomorphism class and picks a realisation of that group. Therefore, we're justified in calling both categories 'The Category of Groups' and any result you get in your category will also work in mine.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Calculate $\int ^{4\pi} _{-4\pi} \frac{(\sin x)^2-(\sin x)^4}{1-(\sin x)^4}dx$
Calculate $$\int ^{4\pi} _{-4\pi} \frac{(\sin x)^2-(\sin x)^4}{1-(\sin x)^4}dx$$
I tried to do this task in several ways, but none of them proved to be effective. For example:
$$\int ^{4\pi} _{-4\pi} \frac{(\sin x)^2-(\sin x)^4}{1-(\sin x)^4}dx=\int ^{4\pi} _{-4\pi} \frac{(\sin x)^2(1-(\sin x)^2)}{1-(\sin x)^4}dx=\int ^{4\pi} _{-4\pi} \frac{(\sin x)^2}{1+(\sin x)^2}dx=\int ^{4\pi} _{-4\pi} \frac{1}{1+\frac{1}{(\sin x)^2}}dx$$
However, I don't know what to do next to finish this task. When I use $u=(\sin x)^2 $ I have $du=2\sin x\cos x\, dx$, so I can't use it. Have you got some intelligent way to do this task?
| Hint:
Bioche's rules say you should set $t=\tan x$. Indeed, with some trigonometry,
$$\frac{\sin^2x}{1+\sin^2x}=\frac{\cfrac{t^2}{1+t^2}}{1+\cfrac{t^2}{1+t^2}}=\cfrac{t^2}{1+2t^2},\qquad\mathrm dx=\frac{\mathrm dt}{1+t^2},$$
so the indefinite integral becomes
$$\int\frac{t^2\,\mathrm dt}{(1+2t^2)(1+t^2)}$$
Can you continue?
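As a numerical sanity check (my own addition, not part of the answer), one can integrate the simplified integrand $\sin^2x/(1+\sin^2x)$ directly with a composite Simpson rule; finishing the $t=\tan x$ computation leads to the closed form $8\pi\left(1-\tfrac{1}{\sqrt2}\right)$, which is my own final value and is worth double-checking:

```python
import math

def f(x):
    # simplified integrand; equals the original except at removable 0/0 points
    s2 = math.sin(x) ** 2
    return s2 / (1 + s2)

def simpson(g, a, b, n):  # composite Simpson rule, n even
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

I = simpson(f, -4 * math.pi, 4 * math.pi, 40000)
closed = 8 * math.pi * (1 - 1 / math.sqrt(2))
print(I, closed)   # both close to 7.36121...
```

The agreement of the two numbers is a good sign that no sign or factor was lost along the way.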
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Use of the Leibniz integral rule in Laplace transform proof My Laplace transform textbook presents the following theorem:
If $\mathcal{L}\{ F(t) \} = f(s)$, then $\mathcal{L}\{ t F(t) \} = - \dfrac{d}{ds}f(s)$ and in general $\mathcal{L}\{ t^n F(t) \} = (-1)^n \dfrac{d^n}{ds^n} f(s)$.
The proof then begins as follows:
Proof Let us start with the definition of Laplace transform
$$\mathcal{L}\{ F(t) \} = \int_0^\infty e^{-st} F(t) \ dt$$
and differentiate this with respect to $s$ to give
$$\begin{align} \dfrac{df}{ds} &= \dfrac{d}{ds} \int_0^\infty e^{-st} F(t) \ dt \\ &= \int_0^\infty -te^{-st} F(t) \ dt \end{align}$$
...
My understanding is that the author went from
$$\dfrac{d}{ds} \int_0^\infty e^{-st} F(t) \ dt$$
to
$$\int_0^\infty -te^{-st} F(t) \ dt$$
by using the Leibniz integral rule to change the ordinary derivative to a partial derivative.
However, as you can see from the Wikipedia page, the Leibniz integral rule is only valid for $\int_{a(x)}^{b(x)}, b(x) < \infty$, whereas the Laplace transform has $b(x) = \infty$. Doesn't this mean that the Leibniz rule is invalid?
I would greatly appreciate it if people could please take the time to clarify this.
|
The Leibniz Rule for an infinite region
If there is a positive function $g(x, y)$ that is integrable, with respect to $x$, on $[0,\infty)$, for each $y$, and such that $\left|\frac{\partial f}{\partial y} (x, y)\right| \le g(x, y)$ for all $(x, y)$, then
$$\frac{d}{dy}\int_{0}^\infty f(x,y)\,dx=\int_{0}^\infty \frac{\partial}{\partial y} f(x,y)\,dx$$
Ref.: https://math.hawaii.edu/~rharron/teaching/MAT203/LeibnizRule.pdf
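A quick numerical illustration of the theorem (my own example, using the hypothetical choice $F(t)=1$, so $f(s)=1/s$ and the differentiated integral should equal $-1/s^2$): a central difference of the truncated Laplace integral agrees with integrating $-t\,e^{-st}$ directly.

```python
import math

def simpson(g, a, b, n):  # composite Simpson rule, n even
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

T = 60.0  # truncation point; e^{-sT} is negligible for s near 2

def laplace_of_one(s):  # numerical L{1}(s) = ∫₀^∞ e^{-st} dt ≈ 1/s
    return simpson(lambda t: math.exp(-s * t), 0, T, 20000)

s = 2.0
h = 1e-4
deriv = (laplace_of_one(s + h) - laplace_of_one(s - h)) / (2 * h)  # df/ds
rhs = simpson(lambda t: -t * math.exp(-s * t), 0, T, 20000)        # ∫ -t e^{-st} dt
print(deriv, rhs, -1 / s ** 2)   # all three close to -0.25
```

Of course this does not prove the interchange is valid; the domination hypothesis above is what justifies it.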
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Characteristic polynomial of $2\times2$ matrix $A$ with $A^2v=-v$ This is a multiple select question, i.e., more than one answer can be correct:
If $A\ne0$ is a $2\times2$ real matrix and suppose $A^2v=-v$ for all vectors $v\in\Bbb R^2$, then
*
*$-1$ is an eigenvalue of $A$,
*The characteristic polynomial of $A$ is $\lambda^2+1$,
*The map from $\Bbb R^2\to\Bbb R^2$ given by $v\to Av$ is surjective,
*$\det A=1$.
My try:
*
*$-1$ can't be an eigenvalue: if it were, then $Av=-1v$ for some nonzero $v$, hence $A^2v=A(Av)=A(-v)=-(Av)=-(-v)=v$, which contradicts the hypothesis.
*The minimal polynomial must divide $x^2+1$, and since $A$ is a real matrix and $x^2+1$ is irreducible over $\Bbb R$, $x^2+1$ is the minimal polynomial. But how to see that it is the characteristic polynomial as well?
*Suppose $A$ not surjective, then image of $A$ is one or zero dimensional, hence $A^2$ has one or zero dimensions image. But $A^2v=-v$ says that $A^2$ is surjective, i.e. has two dimensional image.
*If option 2 is correct, then this is true as well.
I am having serious trouble with options 2 and 4. Please help!
As the rest is easy, I'll just indicate an alternative way to get at the characteristic polynomial, without using either the Cayley-Hamilton theorem or complex numbers. There can be no real eigenvalues, as $A^2v=-v$ for an eigenvector $v$ shows the corresponding eigenvalue$~\lambda$ should have $\lambda^2=-1$, which it cannot. So for any nonzero vector $v$, its image $Av$ is linearly independent of $v$, and therefore $[v,Av]$ forms a basis. Now $A^2v=-v$ shows that the change of basis of $A$ to the basis $[v,Av]$ gives
$$ \pmatrix{0&-1\\1&0} $$
whose characteristic polynomial, which coincides with that of $A$, is $X^2+1$.
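The claims can be spot-checked mechanically. This small sketch (my own, using plain lists rather than a linear-algebra library) verifies $A^2=-I$ for the matrix above and reads off the characteristic polynomial $X^2-(\operatorname{tr}A)X+\det A=X^2+1$; the value $\det A=1$ also confirms options 3 and 4.

```python
# 2x2 matrices as lists of rows; enough to check the claims without numpy
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, -1],
     [1,  0]]  # the matrix from the answer, written in the basis [v, Av]

A2 = matmul(A, A)
assert A2 == [[-1, 0], [0, -1]]  # A^2 = -I, i.e. A^2 v = -v for every v

# characteristic polynomial of a 2x2 matrix: X^2 - (tr A) X + det A
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(trace, det)  # 0, 1  ->  X^2 + 1
```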
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Show that $a_0\leq \alpha \le a_0 +1$ It is given that $$\alpha=[a_0;a_1,a_2,...,a_n]$$ where $a_0,...,a_n$ are all positive integers.
We need to show that
$$a_0\leq \alpha \le a_0 +1$$
My question is when the equality holds ? I guess the question is wrong ... there can't be the equality sign in the question as these are positive integers.
| The question isn't wrong. Note that it is true, for example, that $2\leq 2.5\leq 3$. It is also true that $2<2.5<3$. Just because the strict inequality is true does not mean the inequality with the equal signs is false. In fact, the strict inequality implies the one with equal signs. Indeed, the statement is true because
$$ \alpha=a_0+\underbrace{\frac{1}{a_1+\frac{1}{a_2+\frac1{a_3+\frac1\ddots}}}}_{\mathrm{strictly\ between}\ 0\ \mathrm{and}\ 1}. $$
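A small experiment (my own sketch, evaluating finite continued fractions exactly with `fractions.Fraction`) confirms the non-strict bounds; note that a finite expansion ending in $1$, such as $[2;1]=3$, actually attains the right endpoint $a_0+1$, which is why the inequality with equal signs is the safe statement.

```python
from fractions import Fraction

def cf_value(terms):  # value of [a0; a1, ..., an] with positive integer terms
    val = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1 / val
    return val

for terms in ([2, 1, 1], [3, 7, 15, 1], [1, 2, 3, 4, 5], [2, 1]):
    alpha = cf_value(terms)
    a0 = terms[0]
    assert a0 <= alpha <= a0 + 1  # the inequality from the question

print(cf_value([3, 7, 15, 1]))  # 355/113, the classic approximation of pi
```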
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\frac{4^p - 1}{3}$ is a Fermat pseudoprime with respect to 2 I have to prove that $n = \frac{4^p - 1}{3}$ is a Fermat pseudoprime with respect to $2$ when $p \geq 5$ is a prime number. I have proved that $n$ is not prime because $4^p - 1 = (2^p-1)(2^p+1)$ and $(2^p + 1)$ is divisible by $ 3$. But now I can't show that $2^{n-1} \equiv 1\bmod n$.
I calculated that $2^{n-1} = 2^{(2^p + 2)(2^p-2)/3}$ but I don't know if I can deduce anything from this.
| $n=\dfrac{4^p-1}3=\dfrac{2^{2p}-1}3,$ so $n\mid 2^{2p}-1,\,$ so $\,\color{#c00}{2^{2p}\equiv 1}\pmod{\!n}$
Further, $2p$ divides $2\times\dfrac{(2^{p-1}-1)}3\times{(2^{p}+2)}=\dfrac{2^{2p}-4}3=n-1,\,$ so $n-1 = 2pk$
Therefore, $\!\bmod n,\,$ we have $\,2^{n-1}\equiv (\color{#c00}{2^{2p}})^{k}\equiv \color{#c00}1^k\equiv 1$
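The statement is easy to test empirically. The sketch below (my own check) confirms for the first few primes $p\ge5$ that $n=(4^p-1)/3$ is composite yet satisfies $2^{n-1}\equiv1\pmod n$; for $p=5$ this gives the classic pseudoprime $341=11\cdot31$.

```python
def smallest_factor(n):  # trial division; n is odd here, so skip even divisors
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

for p in (5, 7, 11, 13):
    n = (4 ** p - 1) // 3
    f = smallest_factor(n)
    assert 1 < f < n              # composite, via 4^p - 1 = (2^p - 1)(2^p + 1)
    assert pow(2, n - 1, n) == 1  # yet it passes the Fermat test to base 2
    print(p, n, f)
```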
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3243884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove : $\cos^32\theta + 3\cos2\theta = 4(\cos^6 \theta -\sin^6 \theta)$ How to prove : $\cos^32\theta + 3\cos2\theta = 4(\cos^6 \theta -\sin^6 \theta)$
| $4(\cos^6\theta-\sin^6\theta)$
$=4((\cos^2\theta)^3-(\sin^2\theta)^3)$
$=4(\cos^2\theta-\sin^2\theta)(\cos^4\theta+\sin^4\theta+\cos^2\theta\sin^2\theta)$
$=4\cos 2\theta[\{(\cos^2\theta+\sin^2\theta)^2-2\cos^2\theta\sin^2\theta\}+\cos^2\theta\sin^2\theta]$
$=4\cos 2\theta[\{1-2\cos^2\theta\sin^2\theta\}+\cos^2\theta\sin^2\theta]$
$=4\cos 2\theta(1-\cos^2\theta\sin^2\theta)$
$=4\cos 2\theta\left(1-\tfrac{1}{4}\sin^22\theta\right)$ (using $\sin2\theta=2\sin\theta\cos\theta$)
$=4\cos2\theta-\cos2\theta\sin^22\theta$
$=4\cos2\theta-\cos2\theta(1-\cos^22\theta)$
$=\cos^32\theta+3\cos2\theta$
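A quick numerical spot-check of the identity (my own sketch; the sample angles are arbitrary) never hurts when a chain of algebra this long is involved:

```python
import math

for k in range(-20, 21):
    t = 0.1 + k * 0.37  # arbitrary sample angles
    lhs = math.cos(2 * t) ** 3 + 3 * math.cos(2 * t)
    rhs = 4 * (math.cos(t) ** 6 - math.sin(t) ** 6)
    assert abs(lhs - rhs) < 1e-12, (t, lhs, rhs)

print("identity holds at all sampled angles")
```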
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Is an integrated Wiener process recurrent or transient? Like the title says, if I take an integrated Wiener process / Brownian motion $\int ^t _0 W_s ds$, will it be recurrent or transient? Or, under what conditions will it be one or the other?
I know that, for any $t$, the integral is a normal variable ~$N(0,\frac{t^3}{3})$. So it'll diverge to $\infty$ as $t\rightarrow \infty$. And it's not a martingale. But would it still be recurrent? And how would I show that, or show that it's not?
(If there are any theorems or discussions of this out there, even just a link to that would be great!)
| It is recurrent.
Let $X_t := \int_0^t W_s \,ds$. I claim that $\varlimsup_{t\to\infty}X_t=\infty$ and $\varliminf_{t\to\infty}X_t=-\infty$, so every real number is visited infinitely often a.s.
Here is a sketch of a proof.
Write $\mathscr F_t := \sigma(W_s ; s \le t)$. For a finite stopping time $T$, write $X_t = X_{t \wedge T} + (t - T)^+ W_T + Y_{(t - T)^+}$ with $Y$ a copy of $X$ that is independent of $\mathscr F_T$. Use $T_n := \inf \{t \ge n ; W_t = 0\}$ to see that $[\sup_t X_t = \infty]$ is independent of $\mathscr F_n$ for each $n$, whence is trivial: $P[\sup_t X_t = \infty] \in \{0, 1\}$. If this event has probability $0$, then
use $T := \inf \{t \ge 1 ; W_t = -1\}$ to see that $P[\inf_t X_t = -\infty] = 1$, which contradicts symmetry. Therefore, $P[\sup_t X_t = \infty] = 1$.
A stronger result was proved by Khoshnevisan and Shi,
Chung's law for integrated Brownian motion,
Trans. Amer. Math. Soc. 350 (1998), no. 10, 4253–4264.
In two (and higher) dimension, integrated Brownian motion tends to infinity: Kolokoltsov, A Note on the Long Time Asymptotics of the Brownian Motion with Application to the Theory of Quantum Measurement, Potential Analysis 7 (1997), 759–764.
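The fact quoted in the question, that $X_t=\int_0^t W_s\,ds$ is $N(0,t^3/3)$, can be checked by simulation. The sketch below is my own Monte Carlo estimate of the variance of $X_1$ (seed, path count and step count are arbitrary choices); it says nothing about recurrence, which needs the argument above.

```python
import math
import random

random.seed(0)
n_steps, n_paths, T = 300, 4000, 1.0
dt = T / n_steps
sqrt_dt = math.sqrt(dt)

samples = []  # X_T = ∫₀^T W_s ds for each simulated path
for _ in range(n_paths):
    w, integral = 0.0, 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, 1.0) * sqrt_dt  # Brownian increment
        integral += w * dt                     # Riemann-sum approximation of ∫ W ds
    samples.append(integral)

mean = sum(samples) / n_paths
var = sum((x - mean) ** 2 for x in samples) / (n_paths - 1)
print(mean, var)  # near 0 and T^3/3 = 1/3
```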
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Question about equilibrium points of non-linear system of ODEs. So I have the system
\begin{align}
x'&=y\\
y'&=-x-y\ln(x^2+4y^2)
\end{align}
To find the equilibrium points I need $x'=0$ and $y'=0$, thus I obtain
\begin{align}
y&=0\\
-x-y\ln(x^2+4y^2)&=0
\end{align}
I don't see how to proceed here. If $y=0$ in the second equation we get $-x=0\Leftrightarrow x=0$ but if $x=0$ and $y=0$ the logarithm is undefined. So $(0,0)$ can't be an equilibrium point.
| Let $V((x,y)) = {1 \over 2} (x^2+y^2)$, and $\phi(t) = V((x(t),y(t)))$. Note that
$\phi'(t) = -y^2 \ln(x^2+4y^2)$.
Let $A= \{ (x,y) | {1 \over 4} \le x^2+y^2 \le 4 \}$.
Note that $\phi'(t) \ge 0$ if $x(t)^2+y(t)^2= {1 \over 4}$, since then $x^2+4y^2=\tfrac14+3y^2\le 1$ and so $\ln(x^2+4y^2)\le 0$.
Note that $\phi'(t) \le 0$ if $x(t)^2+y(t)^2= 4$, since then $x^2+4y^2=4+3y^2\ge 4$ and so $\ln(x^2+4y^2)> 0$.
$A$ contains no equilibrium points.
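The trapping-region behaviour can be illustrated numerically. The sketch below (my own; starting point, step size and horizon are arbitrary choices) integrates the system with a basic RK4 scheme from a point of $A$ and records the range of the squared radius, which stays inside the annulus up to small numerical slack.

```python
import math

def rhs(x, y):
    return y, -x - y * math.log(x * x + 4 * y * y)

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = 1.0, 0.0  # start inside the annulus A
h = 0.002
lo = hi = x * x + y * y
for _ in range(20000):  # integrate up to t = 40
    x, y = rk4_step(x, y, h)
    r2 = x * x + y * y
    lo, hi = min(lo, r2), max(hi, r2)

print(lo, hi)  # stays within [1/4, 4]
```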
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Prove $f(x) = \sum_{n=1}^{\infty} \frac{x^n}{2^n} \cos{nx}$ is differentiable at $(-2, 2)$. Prove $$f(x) = \sum_{n=1}^{\infty} \frac{x^n}{2^n} \cos{nx}$$ is differentiable at $(-2, 2)$.
I can't use formula for radius of convergence, because it's not a power series ($x$ is present also in $\cos$).
| HINT: Try to prove uniform convergence on $[-2+\alpha,2-\alpha]$ for any $0<\alpha<2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
- Show the set $A=\{(m,n)\in N\times N : m\leq n\}$ is countably infinite.
*
*Show the set $A=\{(m,n)\in N\times N : m\leq n\}$ is countably infinite.
To show $A$ is countable we need to exhibit a bijection between $A$ and $\mathbb{N}$, but how can I show $A$ is countably infinite?
Thanks...
| Note that $A$ can be written as
$$
A = \bigsqcup_{n \in \mathbb{N}}\{(m,n) : m \leq n\}.
$$
That is, $A$ is the disjoint countable union of sets $F_n = \{(m,n) : m \leq n\}$. These are finite: for a fixed $n$, the set $F_n$ has $n$ elements, namely $(1,n) , (2,n) \dots, (n,n)$. Thus $A$ is a countable union of countable sets, which says that $A$ itself is countable.
To see that it is infinite, observe that the mapping $d : n \in \mathbb{N} \mapsto (n,n) \in A$ is injective, and so $A$ cannot have finitely many elements.
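Listing the finite blocks $F_1, F_2, \dots$ one after another actually produces an explicit bijection $\mathbb{N}\to A$. The sketch below is my own; the closed-form index $n(n-1)/2+m$ for the pair $(m,n)$ is my own observation, not part of the answer.

```python
from itertools import count, islice

def enumerate_A():
    # list the finite blocks F_1, F_2, ... one after another
    for n in count(1):
        for m in range(1, n + 1):
            yield (m, n)

first = list(islice(enumerate_A(), 100))
assert len(set(first)) == 100           # no repeats: the listing is injective
assert all(m <= n for (m, n) in first)  # every listed pair lies in A

# explicit index formula: (m, n) is item number n(n-1)/2 + m in this listing
for k, (m, n) in enumerate(first, start=1):
    assert k == n * (n - 1) // 2 + m

print(first[:6])  # [(1, 1), (1, 2), (2, 2), (1, 3), (2, 3), (3, 3)]
```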
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Quadratics vs Hermitian forms The quadratic form is given by:
$$Q(\mathbf{x}) =\langle\ \mathbf{x} \ | \ A\mathbf{x} \rangle = x^TAx$$
Where $Q$ is a real scalar and hence $Q = Q^T$
The hermitian form is given by:
$$H(\mathbf{x}) = \langle \ \mathbf{x} \ | \ A\mathbf{x} \ \rangle=x^{\dagger}Ax$$
Where $H$ is also scalar and hence $H = H^T$
Now I'm new to the subject and these 2 are really similar and I get quite confused when to use which since both can be used with complex numbers.
Under exactly what circumstances do we use the Hermitian form and when do we use quadratic?
Thanks!
| For $Q$ to be a quadratic form $A$ has to be a symmetric matrix, and for $H$ to be a Hermitian form, $A$ has to be Hermitian i.e. $A = \overline{A^T}$. They are the same over real numbers, but if you work over the complex numbers of course they would be different.
An important reason why we usually use Hermitian forms when working with complex numbers is that $H(x)$ is always a real number, whereas $Q(x)$ might not be, so $\sqrt{H(x)}$ can define a norm on your vector space, hence giving it a topology, so that you can discuss concepts such as continuity.
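A concrete $2\times2$ example makes the difference visible (my own sketch with an arbitrarily chosen Hermitian matrix and test vector): the Hermitian form $x^{\dagger}Ax$ comes out real, while the unconjugated form $x^TAx$ is genuinely complex.

```python
# A is Hermitian: it equals its conjugate transpose
A = [[1, 1j],
     [-1j, 2]]
assert all(A[i][j] == A[j][i].conjugate() for i in range(2) for j in range(2))

def mv(M, x):  # matrix-vector product
    return [sum(M[i][k] * x[k] for k in range(2)) for i in range(2)]

x = [1, 1 + 1j]
Ax = mv(A, x)
H = sum(x[i].conjugate() * Ax[i] for i in range(2))  # x† A x  (Hermitian form)
Q = sum(x[i] * Ax[i] for i in range(2))              # xᵀ A x  (no conjugation)
print(H, Q)  # H is real: (3+0j); Q has nonzero imaginary part: (1+4j)
```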
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show the series $\sum\limits_{n=1}^\infty\frac{1}{2+\sqrt{n}}$ diverges I know intuitively why this series diverges but I can't really get a proof.
So I am trying to use that the fact that: $\sum\limits_{n=1}^\infty\frac{1}{2+\sqrt{n}} > \sum\limits_{n=1}^\infty\frac{1}{2+{n}} $.
And then from there I want to get it down to something like $1/n$, which is a $p$-series with $p=1$ and therefore diverges, and then use the comparison test to show the original series diverges. However I am stuck on this middle step.
Any help would be appreciated.
You already have the answer. $\sum_{n=1}^\infty\frac1{2+n}$ is a tail of the divergent harmonic series and thus diverges itself. The original series therefore diverges by the comparison test.
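Computing partial sums (my own numerical sketch) shows both the termwise comparison and the unbounded growth; in fact $\sum_{n\le N}\frac{1}{2+\sqrt n}$ grows like $2\sqrt N$, much faster than the harmonic-type series.

```python
import math

N = 10 ** 6
s_sqrt = s_lin = 0.0
for n in range(1, N + 1):
    s_sqrt += 1 / (2 + math.sqrt(n))
    s_lin += 1 / (2 + n)

print(s_sqrt, s_lin)
assert s_sqrt > s_lin           # termwise comparison from the question
assert s_lin > math.log(N) - 2  # harmonic-type growth: partial sums are unbounded
assert s_sqrt > 1000            # the sqrt series grows even faster, roughly 2*sqrt(N)
```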
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3244930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve $y'^2 +yy'+x=0$? I encountered this ODE while looking for a curve which is orthogonal to the family of lines given by $$ y = mx + \frac{1}{m} \ \ \ m \in \Re \ \ ...[1] \\ $$
I setup an ODE for [1] by puting $m = y'$ in [1],
$$ \ \ y = xy' + \frac{1}{y'} \ \ ...[2]. \\ $$ Next, to get a family of orthogonal curves to [1], we change $y' \rightarrow - 1/y'$ in Eq. [2], we get
$$y'~^2 +yy'+x = 0 \ \ ... [3] $$
The handbook by G.M.Murphy gives the solution of equation [3] as
$$x = -t\left(\frac{C+\sinh^{-1}t}{\sqrt{1+t^2}}\right)~ \mbox{and}~ y=-t-\frac{x}{t}.$$
Can some one help me to get to this solution. The interesting point is that even for a simple family of lines [1] the form of family of orthogonal curves is unfamiliar and involved.
This family of lines [1] may also cut or touch the required curve at some point other than that of normalcy.
| Writing your equation in the form
$$y(x)=-y'(x)-\frac{x}{y'(x)}$$ differentiating this equation with respect to $x$
$$\frac{d}{dx}(y'(x))=\frac{y'(x)^3+y'(x)}{x-y'(x)^2}$$
substituting
$$v(x)=y'(x)$$
$$x'(v)=-\frac{v^2}{v^3+v}+\frac{x}{v^3+v}$$
Calculating
$$\mu(v)=e^{-\int \frac{dv}{v^3+v}}=\frac{\sqrt{v^2+1}}{v}$$ so we get
$$\frac{\sqrt{v^2+1}}{v}x'(v)-\frac{\sqrt{v^2+1}\,x(v)}{v(v^3+v)}=-\frac{v\sqrt{v^2+1}}{v^3+v}$$ and now integrate
$$\int\frac{d}{dv}\left(\frac{\sqrt{v^2+1}x(v)}{v}\right)dv=\int-\frac{v\sqrt{v^2+1}}{v^3+v}dv$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Interesting primary school problem Let say
$$\frac{a}{b+c}+\frac{d}{e+f}+\frac{g}{h+i}=1$$
Given that $$a,b,c,d,e,f,g,h,i$$ represents number 1,2,3,4,5,6,7,8,9 (we don't know which alphabet represent which digit)
When dealing with this problem, I came up with the following question:
Q1)Is that an algebraic way to solve this problem? If not, does wild guess is the only way we can use to solve this type of problem?
Q2)I find one answer by luck but I am not sure whether it is the unique solution for this problem. How can i prove that is a unique problem?
| The one algebraic approach I can think of is to render
$(1/2)+(1/3)+(1/6)=1$
and thereby identify
$a/(b+c)=(1/2)$
$d/(e+f)=(1/3)$
$g/(h+i)=(1/6)$
Now there are only a few possibilities for the $1/6$ fraction because the denominator has to be less than $9+9=18$ and has to be a multiple of $6$. Only the following fit both criteria:
$1/(2+4)$
$2/(3+9)$
$2/(4+8)$
$2/(5+7)$
Say we use $2/(3+9)$. Now we try to form a fraction of $1/3$ with the remaining digits. The denominator must be a multiple of $3$ less than $18$, so the numerator has to be no greater than $5$. A numerator of $1$ forces us to repeat digits: $1/3=1/(1+2)$. Ditto for $2$ and $3$ because we use those in our trial $1/6$ expression. We find that only two choices may work for $1/3$:
$1/3=4/(5+7)$
$1/3=5/(7+8)$
In the first case we are left with the digits $1,6,8$ but alas, we cannot make a fraction of $1/2$ with those. The second case fails similarly. So we move on; $1/6$ can't be $2/(3+9)$.
The solution I found uses $1/6=1/(2+4)$. See if you can work out the $1/3$ and $1/2$ fractions from that.
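For Q1 and Q2, an exhaustive search settles things empirically. The sketch below is my own brute force over all $9!$ assignments, with denominators cleared to stay in integer arithmetic; one solution it must find is $\frac{6}{3+9}+\frac{5}{7+8}+\frac{1}{2+4}=\frac12+\frac13+\frac16=1$. Note the raw count includes symmetry copies (swapping $b\leftrightarrow c$, $e\leftrightarrow f$, $h\leftrightarrow i$ and permuting the three fractions), so dividing those out shows how many essentially different solutions exist.

```python
from itertools import permutations

solutions = []
for a, b, c, d, e, f, g, h, i in permutations(range(1, 10)):
    p, q, r = b + c, e + f, h + i
    # clear denominators: a/p + d/q + g/r == 1  <=>  a*q*r + d*p*r + g*p*q == p*q*r
    if a * q * r + d * p * r + g * p * q == p * q * r:
        solutions.append((a, b, c, d, e, f, g, h, i))

print(len(solutions))
assert (6, 3, 9, 5, 7, 8, 1, 2, 4) in solutions  # 6/12 + 5/15 + 1/6 = 1
```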
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Real Analysis, Integration
Let $f:\mathbb{R}^{n}\rightarrow \mathbb{R}$ be a continuous function such that $\lim_{|x|\rightarrow \infty} f(x)=0$. Prove that $\lim_{k\rightarrow \infty}\int_{[0,1]^{n}}f(kx)dx=0. $
I don't know how to proceed, anyone have any idea? Thanks!!
My idea: If $\lim_{|x|\rightarrow \infty} f(x)=0$, then $\lim_{|kx|\rightarrow \infty} f(kx)=0$, so for all $\epsilon>0$ there is $A>0$ such that $|f(kx)|<\epsilon$ whenever $|kx|>A$. Now, $\Big|\int_{[0,1]^{n}}f(kx)dx\Big|\leq \int_{[0,1]^{n}}|f(kx)|dx \leq \epsilon \ vol([0,1]^n)=\epsilon .$
| Hint: Since $f$ is continuous and $\lim_{\lvert x\rvert\to\infty}f(x)=0$, we have $f$ is bounded and uniformly continuous. Now what is the obvious thing we can do to $\int_{[0,1]^n} f(kx)\,\mathrm{d}x$?
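A one-dimensional illustration (my own, with the hypothetical choice $f(x)=e^{-|x|}$, which is continuous and vanishes at infinity): here $\int_0^1 f(kx)\,dx=(1-e^{-k})/k\to0$, and a crude midpoint rule confirms the exact values.

```python
import math

# 1-D illustration: ∫₀¹ e^{-|kx|} dx = (1 - e^{-k})/k -> 0 as k -> ∞
def integral(k, n=20000):  # midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(math.exp(-abs(k * (i + 0.5) * h)) for i in range(n)) * h

for k in (1, 10, 100, 1000):
    exact = (1 - math.exp(-k)) / k
    assert abs(integral(k) - exact) < 1e-3

print([round((1 - math.exp(-k)) / k, 6) for k in (1, 10, 100, 1000)])
```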
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that the $\frac {\xi +\zeta\eta}{\sqrt {1+\zeta^2}}$ has normal distribution (0,1) The task: $\xi, \eta, \zeta \sim N(0,1)$ and independent. Prove, that $\frac {\xi +\zeta\eta}{\sqrt {1+\zeta^2}} \sim N(0,1).$ (1)
It is clear, that with fixed $\zeta$ we get, that (1) has expected value = 0 (as the sum of normal distributed values) and variance = 1 (as the sum of $(\frac {1}{\sqrt {1+\zeta^2}})^2$ and $(\frac {\zeta}{\sqrt {1+\zeta^2}})^2$). And what to do with un-fixed value I don't know. There was a small tip -imagine, that $\zeta$ is discrete value (for example getting 3 different values) and use the full probability formula $(P(B)=\sum P(B|A_{j})P(A_{j}))$.
Let $R=\frac{\xi+\zeta\eta}{\sqrt{1+\zeta^2}}$. You want to show $E\exp(itR)=\exp(-t^2/2)$. Write $E\exp(itR)=E(E[\exp(itR)|\zeta])$. The inner, or conditional expectation is $$\begin{align*}E[\exp(itR)|\zeta]&=\tag{*}
E[\exp(it\xi/\sqrt{1+\zeta^2})\,|\,\zeta] \times E[\exp(it\zeta\eta/\sqrt{1+\zeta^2})\,|\,\zeta]\\
&= \exp\left(-\frac{t^2}{2(1+\zeta^2)}\right) \exp\left(-\frac{t^2\zeta^2}{2(1+\zeta^2)}\right)\\
&= \exp\left(-\frac{t^2}2\right).
\end{align*}$$
The first step, at (*), is because $\xi$ and $\eta$ are conditionally independent given $\zeta$.
So the outer expectation is also $\exp(-t^2/2)$.
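A Monte Carlo sanity check (my own sketch; seed and sample size are arbitrary) agrees: sampled values of $R$ have mean $\approx0$, variance $\approx1$, and about $95\%$ of them fall in $(-1.96,1.96)$, as a standard normal should.

```python
import math
import random

random.seed(1)
N = 200000
vals = []
for _ in range(N):
    xi, eta, zeta = (random.gauss(0, 1) for _ in range(3))
    vals.append((xi + zeta * eta) / math.sqrt(1 + zeta * zeta))

mean = sum(vals) / N
var = sum((v - mean) ** 2 for v in vals) / (N - 1)
inside = sum(abs(v) < 1.96 for v in vals) / N  # ≈ 0.95 for a standard normal
print(mean, var, inside)
```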
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are $m\mathbb{Z} \cong n\mathbb{Z}$ as rings for arbitrary $m,n \in \mathbb{N}$? Are $m\mathbb{Z} \cong n\mathbb{Z}$ as rings for arbitrary $m,n \in \mathbb{N}$?
$\alpha: m\mathbb{Z} \rightarrow n\mathbb{Z}$
$\alpha(m)=n$.
Then $\alpha(ma) = \alpha(mb) \rightarrow a=b$ so $\alpha$ is injective.
It can also be shown that this map is surjective, as well as a homomorphism.
So, they are isomorphic then?
They are not isomorphic as rings. I will give a particular case and you will easily develop the general case. Take $n=2$ and $m=3$ and consider $f:3 \Bbb Z \to 2\Bbb Z$. If it were a ring isomorphism, then it maps a generator to a generator. So assume, for example, $f(3)=2$. Now $$f(9)=f(3+3+3)=2+2+2=6$$ and $$f(9)=f(3 \times 3)=f(3)f(3)=4$$ so we get $4=6$. This absurdity proves the result!
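The obstruction is easy to see in code (my own sketch): the additive-group isomorphism $f(3k)=2k$ is a perfectly good group homomorphism, but it fails to respect multiplication.

```python
# the additive-group isomorphism 3Z -> 2Z, f(3k) = 2k
def f(x):
    assert x % 3 == 0
    return 2 * (x // 3)

assert f(3 + 3) == f(3) + f(3)  # additive: f is a group homomorphism
assert f(3 * 3) != f(3) * f(3)  # but f(9) = 6 while f(3)^2 = 4
print(f(9), f(3) ** 2)          # 6 4
```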
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to derive the Method of Moments estimator of mu, using the second moment of X, when X is norm distributed? If $X_1,\ldots,X_n$ follow a normal distribution, where the variance $\sigma$ is given, how can you derive the MME of the mean $\mu$ using the second moment?
| Note that the second moment of $X$ is
$$E[X^2] = \sigma^2 + \mu^2,$$
so by subtracting the known variance from the second moment, you can find the square of the mean.
To this end, compute an estimate of the second moment
$$\hat s = \frac {1}{N} \sum_{i=1}^N X_i^2,$$
then find the estimate of the mean with
$$\hat \mu = \sqrt{\hat s -\sigma^2},$$ taking the nonnegative root (the second moment alone does not determine the sign of $\mu$).
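A quick simulation (my own sketch; the true parameters, seed and sample size are arbitrary choices) shows the estimator recovering the mean:

```python
import math
import random

random.seed(2)
mu_true, sigma = 2.0, 1.0  # sigma^2 is the known variance
N = 200000
xs = [random.gauss(mu_true, sigma) for _ in range(N)]

s_hat = sum(x * x for x in xs) / N      # sample second moment, near mu^2 + sigma^2 = 5
mu_hat = math.sqrt(s_hat - sigma ** 2)  # method-of-moments estimate (nonnegative root)
print(s_hat, mu_hat)
```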
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the roots of equation based on some geometry hints Plots of the equations $y = 8 - x^2$ and $|y|=\sqrt{8+x}$ are symmetric w.r.t. the line $y=-x$. We have to solve the equation $$8-x^2=\sqrt{8+x}$$
| If $y=8-x^2$ and $|y|=\sqrt{8+x}$, then $y^2-y=(8+x)-(8-x^2)=x+x^2,$
so $y(y-1)+x(-1-x)=0,$ so $y(y-1-x)+x(y-1-x)=0,$ so $(y+x)(y-1-x)=0,$
i.e., $y=-x$ or $y=1+x$. Therefore $x$ must be a solution of $8-x^2=-x$ or $8-x^2=1+x$.
Can you solve these quadratic equations? Note that you want solutions where $y=8-x^2\ge0$.
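Solving the two quadratics and keeping only the roots with $y=8-x^2\ge0$ is my own finishing of the hint; numerically, the surviving roots are $x=\frac{1-\sqrt{33}}{2}$ and $x=\frac{-1+\sqrt{29}}{2}$, and the check below verifies them against the original equation.

```python
import math

roots = [(1 - math.sqrt(33)) / 2,   # from 8 - x^2 = -x,   i.e. x^2 - x - 8 = 0
         (-1 + math.sqrt(29)) / 2]  # from 8 - x^2 = 1 + x, i.e. x^2 + x - 7 = 0

for x in roots:
    y = 8 - x * x
    assert y >= 0                             # needed so that 8 - x^2 = +sqrt(8+x)
    assert abs(y - math.sqrt(8 + x)) < 1e-12  # satisfies the original equation

# the discarded quadratic roots fail the sign condition y >= 0:
for x in ((1 + math.sqrt(33)) / 2, (-1 - math.sqrt(29)) / 2):
    assert 8 - x * x < 0

print(roots)  # approximately [-2.372281, 2.192582]
```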
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Check my quick proof $P_1 \oplus P_2$ is projective $\iff P_1,P_2$ are projective. Just want to check if I am right in the $\Longrightarrow$ direction. This is Exercise 3 from Dummit Foote.
($\Longleftarrow$) Given a SES $0 \to L \to M \to N \to 0$, we have $ Hom(P_1, -) \oplus Hom(P_2,-) = Hom(P_1 \oplus P_2, -)$. Take the direct sum of the two exact sequences obtained from $Hom(P_1, -)$ and $Hom(P_2, -)$ to get this direction.
($\Longrightarrow$)
WLOG, taking $P_1$. I am using Dummit&Foote Proposition 30.2 (one of the projective equivalences). The lowest row is exact (right-exact)
$$\begin{array}{} P_1 & \xrightarrow{g} & P_1 \oplus P_2 & \\
& \swarrow{F} & \downarrow{f} & \\
M & \xrightarrow{\phi} & N & \xrightarrow{} & 0 & \end{array}$$
Then defining $F' = F\circ g$ and $f' = f\circ g$ gives the required homomorphisms.
I forgot to mention: $g: P_1 \to P_1 \oplus P_2$ is just the canonical inclusion $p_1 \mapsto (p_1, 0)$
| To show $P_1$ is projective you need to show that any map $f:P_1 \to N$ factors through M, that is, show there exists a map $g: P_1 \to M$ such that $\phi g = f$. So your idea to involve the direct sum is right but you need to switch up the order of operations a bit.
An alternative would be to use the fact that a module is projective if and only if it's a direct summand of a free module. You can use this to do both directions actually.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3245868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ball around a set equals union of balls around each of its points I have this problem:
Let $(X,d)$ be a metric space and let $A \subseteq X$. Define for any $ \epsilon > 0$:
$$B_d(A,\epsilon) = \left\{ x \in X \ : \ d(x,A) < \epsilon \right\}.$$
Show that $B_d(A,\epsilon) = \bigcup_{x \in A} B_d(x,\epsilon)$.
I was trying to show the double inclusion, but I am not sure about anything. I know that $d(x,A)$ is defined as:
$$d(x,A) = \inf \left\{d(x,a) | a \in A\right\}.$$
My attempt went as follows:
\begin{align*}
y \in B_d(A,\epsilon) &\Rightarrow d(y,A) < \epsilon\\
& \Rightarrow \inf \left\{d(x,a) | a \in A\right\} < \epsilon\\
& \Rightarrow \exists z \in A (d(y,z) < \epsilon)\\
&\Rightarrow \exists z \in A (y \in B_d(z,\epsilon))\\
&\Rightarrow y \in \cup_{z \in A} B_d(z,\epsilon).
\end{align*}
I doubt this is correct. And I don't know about the other inclusion. Does anybody have a very rigorous proof of this fact?
| Filling the minor gap in the forward inclusion:
Take $p \in B_d(A,\varepsilon)$. So $d(p,A) < \varepsilon$. Suppose that for all $x \in A$ we would have that $\varepsilon \le d(x,p)$.
This means that $\varepsilon$ is a lower bound for the set $\{d(x,p): x \in A\}$, and $d(p,A)= \inf \{d(x,p): x \in A\}$ is the greatest lower bound, so we'd have $d(p,A) \ge \varepsilon$, which is a contradiction.
So for some $x \in A$, $d(x,p) < \varepsilon$ or equivalently $p \in B_d(x,\varepsilon)$ for that $x$, and hence
$$B_d(A,\varepsilon)\subseteq \bigcup_{x \in A} B_d(x,\varepsilon)$$
For the reverse, if for some $x \in A$ we have $p \in B_d(x,\varepsilon)$ we know that $d(x,p) < \varepsilon$ and so $d(p,A) \le d(x,p)$ (the infimum of a set is a lower bound of that set) and so $d(p,A) < \varepsilon$ or $p \in B_d(A,\varepsilon)$ and so
$$B_d(A,\varepsilon)\supseteq \bigcup_{x \in A} B_d(x,\varepsilon)$$
and equality ensues.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
dimension of a connected Manifold Currently, I am studying smooth manifolds and I want to solve some exercises. There is a question that says:
Show that the dimension of a connected topological manifold is defined without ambiguity! Meaning that if $\dim M=n$, with the change of charts, the dimension still is $n$. Then show that this is true for a $C^r$ connected manifold using Inverse Function Theorem.
To prove the first part, let $(U,\varphi)$ and $(V,\psi)$ be two charts with $\varphi(U) \subseteq \mathbb{R}^m$ and $\psi(V) \subseteq \mathbb{R}^n$ and suppose that $U \cap V \neq \emptyset$. Then, since $\psi \circ \varphi^{-1} : \varphi (U \cap V) \rightarrow \psi (U \cap V) $ is a homeomorphism, $m=n$ (by invariance of domain). Now, using this, if $M$ is a connected topological manifold with atlas $\mathcal{A}$, for all charts $(U_{\alpha}, \varphi_{\alpha})\in \mathcal{A}$, $\varphi_{\alpha}(U_{\alpha}) \subseteq \mathbb{R}^n$ and $n$ is constant for all the charts. Am I right?
But I have a problem showing the second part! I think that since every $C^r$ manifold is also a topological manifold, the answer to this question would be trivial! Why do we need "Inverse Function theorem" to answer it??
Any help is appreciated.
If $M$ is a smooth manifold of dimensions $m$ and $n$ then for each $p\in M$ we can find charts about $p$ with images open in $\mathbb{R}^n$ and $\mathbb{R}^m$. Hence $T_pM$ is isomorphic as a vector space to both $\mathbb{R}^n$ and $\mathbb{R}^m$, so $m=n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Suppose that $G$ is a group of order $924=2^2\cdot3\cdot7\cdot 11$. Prove that $G$ has an element of order $77$.
Suppose that $G$ is a group of order $924=2^2\cdot3\cdot7\cdot 11$. Prove that $G$ has an element of order $77$.
My attempt:
By Sylow theorems, we know that there exist elements $ a, b\in G$ with $o(a)=7 $ and $o(b)=11$. Note that $\gcd(7,11)=1$, so if we can show that $ab=ba$ then we are through.
Consider the group $\langle a \rangle$ acting on the set $\Omega=\{g\in G: o(g)=11\}$ by $$ a^k\cdot g:=a^kga^{-k}\ , k=1,2,...,7. $$
Note that the element and its conjugate have the same order and we can easily check that it is a well-defined $\langle a\rangle$ group action on $\Omega$. Now by the Burnside's lemma, we know that the number of orbits, denoted by $|\Omega/\langle a\rangle|$:
$$ |\Omega/\langle a\rangle|=\frac{1}{7}\sum_{a^{k}\in\langle a\rangle}|\Omega^{a^k}| $$ where $\Omega^{a^k}=\{g\in\Omega:a^k\cdot g=g\}$.
Now suppose, for the sake of contradiction, that no element of $\Omega$ is fixed by $a^k$ for $k\ne 7$ ($a^7=e$, the identity); then $$ |\Omega/\langle a\rangle|=\frac{1}{7}\sum_{e}|\Omega^{e}|=\frac{|\Omega|}{7}\in\mathbb Z. $$
So $\displaystyle 7\vert |\Omega|$. But the number of Sylow $11$-subgroups satisfies $n_{11}\mid 12\cdot 7$ and $n_{11}\equiv 1\pmod {11}$, so $n_{11}=1$ or $n_{11}=12$; in these cases $|\Omega|=11-1=10$ and $|\Omega|=12\cdot (11-1)=12\cdot 10=120$, respectively. But $7$ divides neither $10$ nor $120$, a contradiction, and we are done.
Is my reasoning right? Moreover, I am looking for other solutions without using Burnside's lemma. Thank you.
| Well we know there are elements of order $7$ and $11$ and if any pair of such elements commute then they generate a cyclic subgroup of order $77$.
I think you can argue that if the number of subgroups of order $11$ is not $1$ then it is $12$ (Sylow again: $\equiv 1 \bmod 11$). Take these two cases together.
Take an element $a$ of order $7$ and let it act on these subgroups by conjugation. The orbits must either be single subgroups or sets of $7$ subgroups. In either case there is a subgroup of order $11$ fixed under the conjugation action.
Now consider the action on that subgroup - the automorphism group of a cyclic group of order $11$ has order $10$ and the action induces a homomorphism from the group of order $7$ generated by $a$ to the automorphism group. The image is a subgroup of order $1$ or $7$, and it must be $1$. Therefore $a$ acts trivially on the subgroup of order $11$ and commutes with its members.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
$\int_0^{100}\frac{e^{-x}}{x+100}dx>0.005$? $\int_0^{100}\frac{e^{-x}}{x+100}dx>0.005$?
My attempt: $$\int_0^{100}\frac{e^{-x}}{x+100}dx>\int_0^{100}\frac{e^{-x}}{200}dx=\frac{1-e^{-100}}{200}$$ This falls just short, since $\frac{1-e^{-100}}{200}<\frac{1}{200}=0.005$. How can I amend it?
| The idea: The integrand decreases rapidly on the given interval, so the idea is to estimate the integral from below by integrating over a shorter interval $[0, a]$, and then continue with your approach, but with a better bound for the denominator:
For $0 < a < 100$ we can estimate
$$I = \int_0^{100}\frac{e^{-x}}{x+100}dx \ge \int_0^a\frac{e^{-x}}{x+100}dx \\
\ge \int_0^a\frac{e^{-x}}{a+100}dx = \frac{1-e^{-a}}{a+100} \, .
$$
For $a=4$ this gives
$$
I \ge \frac{1-e^{-4}}{104} \approx 0.00943927270299294
$$
which comes fairly close to the result $I \approx 0.009901942286733037$ (obtained by numeric integration with Maxima).
We can also avoid calculating $e^{-a}$ numerically and use $e^a \ge 1+a$ to further estimate
$$
I \ge \frac{1-e^{-a}}{a+100} \ge \frac{a}{(a+1)(a+100)} \, .
$$
For $a=10$ this gives
$$
I \ge \frac{1}{121} \approx 0.008264462809917356 \,.
$$
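A numerical check (my own sketch, using a simple Simpson rule instead of Maxima) reproduces $I\approx0.009902$ and confirms the chain of lower bounds derived above:

```python
import math

def simpson(g, a, b, n):  # composite Simpson rule, n even
    h = (b - a) / n
    total = g(a) + g(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * g(a + k * h)
    return total * h / 3

I = simpson(lambda x: math.exp(-x) / (x + 100), 0, 100, 20000)
print(I)                                    # ≈ 0.0099019...
assert I > (1 - math.exp(-4)) / 104         # the a = 4 bound ≈ 0.009439
assert (1 - math.exp(-4)) / 104 > 1 / 121   # which beats the elementary a = 10 bound
assert I > 0.005
```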
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Equivalence of completeness statements I need to "find a metric d such that (0,1) is complete". Is it equivalent to instead find a homeomorphism f from (0,1) to a complete space, so for instance $f=\tan(-\frac{\pi }{2}+\frac{\pi x}{2})$, and then say $d(x)=|f(x)-f(y)|$
| Let $X$ be topological space and $M$ metric space, $f\colon X\to M$ a homeomorphism.
Define $d_X(x,y):=d_M(f(x),f(y))$. You can easily check that this is a metric on $X$.
Also, $f$ becomes isometry, so $(x_n)$ is a Cauchy sequence in $X$ iff $(f(x_n))$ is Cauchy sequence in $M$ and by continuity $(x_n)$ converges in $X$ iff $(f(x_n))$ converges in $M$.
Use the above to conclude that $X$ is complete iff $M$ is complete.
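Here is a small numerical sketch of the above (my own addition; the concrete $f$ is an adjusted version of the asker's, chosen so that its image is all of $\mathbb R$ and hence complete): in the pulled-back metric, the usual Cauchy sequence $x_n = 1/n$ stops being Cauchy, which is exactly why $(0,1)$ becomes complete.

```python
import math

def f(x):
    # homeomorphism (0,1) -> R; a hypothetical concrete choice
    return math.tan(-math.pi / 2 + math.pi * x)

def d(x, y):
    # the transported metric d_X(x, y) := d_M(f(x), f(y))
    return abs(f(x) - f(y))

assert d(0.25, 0.25) == 0.0
# successive gaps of x_n = 1/n stay bounded away from 0 (roughly 1/pi),
# so (1/n) is no longer a Cauchy sequence in the new metric
print(d(1 / 1000, 1 / 1001))
```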
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$\cos (n \phi)$ in terms of $\cos (\phi)$ Consider the following quantity:
$$\cos(n \phi), \ n \in \mathbb{N}, \ n > 1$$
I know it is possible to alternatively express it as a polynomial of degree $n$, with powers of $\cos (\phi)$ and $\sin (\phi)$.
But is it possible to express $\cos(n \phi)$ in terms of $\cos(\phi)$ only? In other words, is it possible to write
$$\cos(n \phi) = A \cos (\phi + \alpha)$$
for some real constants $A, \alpha$, or as a linear combination
$$\cos(n \phi) = A \cos (\phi + \alpha) + B \cos (\phi + \beta) + \ldots$$
?
My guess: no, because the base frequency of $\cos (\phi)$ is too low, therefore unable to represent faster variations with respect to $\phi$, as it happens in $\cos(n \phi)$.
| No. The right-hand side adds up to one big $Z\cos(\phi+\omega)$.
$$A\cos(\phi+\alpha)+B\cos(\phi+\beta)+\ldots\\
=\cos\phi(A\cos\alpha+B\cos\beta+\ldots)-\sin\phi(A\sin\alpha+B\sin\beta+\ldots)\\
=P\cos\phi+Q\sin\phi=Z\cos(\phi+\omega)$$
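The collapse into a single cosine can be checked numerically via phasor addition (my own sketch; the amplitudes and phases below are arbitrary):

```python
import cmath, math

A, alpha = 2.0, 0.3
B, beta = 1.5, -1.1

# Phasor addition: A e^{i alpha} + B e^{i beta} = Z e^{i omega}
z = A * cmath.exp(1j * alpha) + B * cmath.exp(1j * beta)
Z, omega = abs(z), cmath.phase(z)

for k in range(8):
    phi = 0.7 * k
    lhs = A * math.cos(phi + alpha) + B * math.cos(phi + beta)
    assert abs(lhs - Z * math.cos(phi + omega)) < 1e-12
print(Z, omega)
```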
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Number of binary words that can be formed
How many binary words of length $n$ are there with exactly $m$ 01 blocks?
I tried by finding number of ways to fill $n-2m$ gaps with $0$ and $1$ such that no $'01'$ block gets created again. But this method is not working and I am stuck in this problem. Please provide me an elegant solution of this problem.
Edit: Hw Chu has given a wonderful solution and I really appreciate this solution. But I am now interested in the intuition behind his solution. Further, I request to provide a relatively easier solution to this problem.
| Here is another approach. It is "relatively easier" in the sense that setting up recurrence relations that condition on the last few digits of the binary word to force what we're interested in, is a standard way to approach such questions.
Let $a_{n,m}$ be the number of binary words of length $n$ with exactly $m$ 01 blocks that end with a $0$.
Let $b_{n,m}$ be the number of binary words of length $n$ with exactly $m$ 01 blocks that end with a $1$.
We want to find $a_{n,m} + b_{n,m}$.
Observe that the recurrence relations are:
$a_{n+1, m} = a_{n, m} + b_{n,m},$
$b_{n+1, m} = a_{n, m-1} + b_{n,m} $.
This follows directly because the only way to create an additional 01 block is to append a 1 to a word ending in 0 (the $a_{n, m-1}$ term).
Claim: $a_{n,m} = { n \choose 2m+1}, b_{n,m} = { n \choose 2m}$.
(You can guess this by looking at several initial terms.)
Proof: This can be shown by induction on $n$.
First, the base case of $n=1$ only involves $m=0$, and is clearly true.
For the induction step,
$a_{n+1,m} = a_{n,m} + b_{n,m} = { n \choose 2m+1} + { n \choose 2m} = {n+1 \choose 2m+1}$, and
$b_{n+1,m } = a_{n, m-1} + b_{n,m} = {n \choose 2m-1} + { n \choose 2m} = {n+1 \choose 2m}$.
Hence, the number of binary words of length $n$ with exactly $m$ 01 blocks is $a_{n,m} + b_{n,m} = { n+1 \choose 2m+1} $.
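The closed form is easy to confirm by brute force for small $n$ (my own addition):

```python
from itertools import product
from math import comb

def count_words(n, m):
    # occurrences of "01" cannot overlap, so str.count is exact here
    return sum(1 for w in product("01", repeat=n)
               if "".join(w).count("01") == m)

for n in range(1, 9):
    for m in range(n // 2 + 1):
        assert count_words(n, m) == comb(n + 1, 2 * m + 1)
print("a_{n,m} + b_{n,m} = C(n+1, 2m+1) verified for n <= 8")
```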
You could modify this approach to count the number of
*
*binary words with $m$ 11 blocks (where 111 has 2 such blocks)
*trenary words with $m$ 01 blocks
*binary words with $m$ 010 blocks
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Connected, locally compact, paracompact Hausdorff space is exhaustible by compacts I'm trying to understand the proof of the following:
A connected, locally compact, paracompact Hausdorff space $X$ has an exhaustion by compact sets,
That is, there exists a sequence $(K_n)_n$ of compact subsets of $X$ whose union is $X$ itself, such that $K_n$ is included in the interior of $K_{n+1}$, for all $n$.
The proof goes as follows. Choose a locally finite open cover $(U_i)_{i∈I}$ of $X$, such that the closure of $U_i$ is compact, for all $i$. Then every compact subset of $X$ intersects only finitely many of the $U_i$ (why?).
Then $K_1$ is chosen as the closure of any nonempty $U_i$. $K_2$ is chosen as the union of the closures of the $U_i$ that intersect $K_1$ and so on. The rest is easy.
| One starts by noting that the set of all open $O$ with $\overline{O}$ compact is an open cover of $X$, by local compactness and Hausdorffness.
The paracompactness of $X$ then gives us a locally finite refinement $(U_i)_{i \in I}$ of that cover. It's not the $U_i$ that need to be compact in this proof, but their closures; this follows as each $U_i$ is a subset of some $O$ with compact closure, so the same holds for the $U_i$. The new thing is the local finiteness, which is used for the compact set fact:
Now if $K$ is compact, each $x \in K$ has a neighbourhood $W_x$ such that $\{i \in I: U_i \cap W_x\neq \emptyset \}$ is finite, as we have a locally finite refinement.
Then $K$ being compact is covered by finitely many of these $W_x$, say $W_x, x \in F$ for some finite $F \subseteq K$.
But then $$\{i \in I: K \cap U_i \neq \emptyset \} \subseteq \{i \in I: ( \bigcup_{x \in F} W_x ) \cap U_i \neq \emptyset\} = \bigcup_{x \in F} \{i \in I: W_x \cap U_i\neq \emptyset\}$$ where the latter is a finite union of finite sets so finite.
So the $\{U_i: i \in I\}$ are as needed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3246970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find all numbers x which satisfy $|x^2 + 2| = |x^2 − 11|$. This question is taken from book: Exercises and Problems in Calculus, by
John M. Erdman, available online, from chapter 1.1, question $4$.
Request help, as not clear if my approach is correct.
(4) Find all numbers x which satisfy $|x^2 + 2| = |x^2 − 11|$.
Have two conditions, leading to three intervals;
(i) $x \lt \sqrt{-2}$
(ii) $\sqrt{-2} \le x \lt \sqrt{11}$
(iii) $x \ge \sqrt{11}$
(i) no soln.
(ii) $2x^2 = 9 \implies x = \pm\sqrt{\frac 92}$
(iii) no soln.
Verifying:
First, the two solutions must satisfy that they lie in the given interval,
this means $\sqrt{-2} \le x \lt \sqrt{11}\implies \sqrt{2}i \le x \lt \sqrt{11}$.
Am not clear, as the lower bound is not a real one. So, given the two real values of $x = \pm\sqrt{\frac 92}$ need only check with the upper bound.
For further verification, substitute in values of $x$, for interval (ii):
a) For $x = \sqrt{\frac 92}$, $|x^2 + 2| = |x^2 − 11|$ leads to
$\frac 92 + 2 = -\frac 92 + 11\implies 2\cdot\frac 92 = 9$.
b) For $x = -\sqrt{\frac 92}$, $|x^2 + 2| = |x^2 − 11|$ leads to
$\frac 92 + 2 = -\frac 92 + 11\implies 2\cdot\frac 92 = 9$.
| Hint:
For real $x,$ $x^2\ge0$, so $x^2+2>0$, so $|x^2+2|=x^2+2$. $|x^2-11|=x^2-11$ or $-(x^2-11).$ Equate them and see what you get.
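Carrying the hint out: the case $x^2+2 = x^2-11$ is impossible, so $x^2+2 = -(x^2-11)$, giving $x^2 = 9/2$. A quick check of the two candidates (my own addition):

```python
import math

for x in (math.sqrt(4.5), -math.sqrt(4.5)):       # x = ±3/sqrt(2)
    assert abs(abs(x * x + 2) - abs(x * x - 11)) < 1e-10
print("x = ±3/√2 satisfy |x^2 + 2| = |x^2 - 11|")
```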
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
What is the probability that P and Q have no common elements? A is a set containing n elements. A subset P of A is chosen at random. The set A is reconstructed by replacing the elements of the subset of P. A subset Q of A is again chosen at random. Find the probability that P and Q have no common elements.
I tried to calculate in this way :
In set P we can have no elements (i.e. $\emptyset$), 1 element, 2 elements, ..., up to n elements. If P has no elements, all $n$ elements remain available, and every set Q formed from them has no element in common with P. Similarly, if there are $r$ elements in P, we are left with the remaining $(n - r)$ elements to form Q, satisfying the condition that P and Q be disjoint.
Now, my confusion is how can I find the Total number of ways in which we can form P and Q?
| $P,Q$ can both be one of the $2^n$ subsets of $A$. The total number of ways to form $P$ and $Q$ is $2^n\cdot2^n=2^{2n}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Evaluate $\int_0^{\infty} \frac {\ln(1+x^3)}{1+x^2}dx$
Prove that $$\int_0^{\infty} \frac {\ln(1+x^3)}{1+x^2}dx=\frac {\pi \ln 2}{4}-\frac {G}{3}+\frac {2\pi}{3}\ln(2+\sqrt 3)$$ Where $G$ is the Catalan's constant.
Actually I proved this using Feynman's trick, namely by introducing the parameter $a$ such that $$\xi(a)=\int_0^{\infty} \frac {\ln(1+ax^3)}{1+x^2}dx$$
Where it is clear that $\xi(0)=0$, hence we just need $$\int_0^1 \xi'(a)da$$ which I found too. Hence the statement is proved, but this method was too lengthy because it involved heavy partial fraction decomposition and one infinite summation.
Can someone suggest some better method?
Edit: I also tried some trigonometry bashing by using the substitution $x=\tan \theta$ but got stuck midway
| Note
$$\int_0^{\infty} \frac {\ln(1+x^3)}{1+x^2}dx
= \underset{= \frac\pi4\ln2+G}{ \int_0^{\infty} \frac {\ln(1+x)}{1+x^2}dx}
+ \underset{=K}{\int_0^{\infty} \frac {\ln(1-x+x^2)}{1+x^2}dx}
\tag1
$$
To compute $K$, let
$J(a)= \int_0^{\infty} \frac {\ln\left(\frac12 (1+x^2)\sec a-x\right)}{1+x^2}dx$
$$J’(a) = \int_0^\infty \frac{\tan a}{(x-\cos a)^2 + \sin^2a}dx=(\pi-a)\sec a
$$
\begin{align}
K&=\>J(\frac\pi3) =J(0)+\int_0^{\frac\pi3} J’(a)da
=-2G +\int_0^{\frac\pi3} (\pi -a)\sec a\>da\\
&\overset{\text{ibp}}=-2G +(\pi-a)\tanh^{-1}(\sin a)\bigg|_0^{\frac\pi3}+\int_0^{\frac\pi3} \tanh^{-1}(\sin a)\>da\\
&=-2G +\frac{2\pi}3\ln(\sqrt3+2)+\frac23G
\end{align}
where $\int_0^{\frac\pi3} \tanh^{-1}(\sin a)\>da=\frac23G $. Substitute into (1) to obtain
$$\int_0^{\infty} \frac {\ln(1+x^3)}{1+x^2}dx=\frac {\pi}{4}\ln2+\frac {2\pi}{3}\ln(\sqrt 3+2)- \frac {G}{3} $$
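A numerical confirmation of the closed form (my own sketch): substituting $x=\tan t$ turns the integral into $\int_0^{\pi/2}\ln(1+\tan^3 t)\,dt$, whose endpoint singularity is only logarithmic, so a plain midpoint rule suffices.

```python
import math

G = 0.915965594177219            # Catalan's constant (known value)

n = 400_000
h = (math.pi / 2) / n
I = h * sum(math.log(1 + math.tan((i + 0.5) * h) ** 3) for i in range(n))

rhs = (math.pi * math.log(2) / 4 - G / 3
       + (2 * math.pi / 3) * math.log(2 + math.sqrt(3)))
assert abs(I - rhs) < 1e-3
print(I, rhs)
```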
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
How many series of length $n$ combined from the numbers $0,1,3,4$ , that don't contain the sequences $04,40,13,31$ or $0X0, 1X1, 3X3, 4X4$ are there? How many series of length $n$ combined from the numbers ${0,1,3,4}$ , that don't contain the sequences $04,40,13,31$ or $0X0, 1X1, 3X3, 4X4$ (where X can be any number from $0,1,3,4$) are there?
Let $f(n)$ be the number of the good series of length $n$.
If I start with $1$, then I have $f(n-1)$ options for the rest of the series without limitations, but I have to subtract the series of length $(n-2)$ starting with 3 (so I won't have $13$ in my series) and I have to subtract 3 times the series of length $(n-3)$ starting with 1 (so I won't have $1X1$ in my series).
Overall $f(n)$ is 4 times the process I just did because of symmetry depending on the first number in the series, i.e. $f(n)=4(f(n-1)-f(n-2)-3f(n-3))$.
I know I have a mistake when I'm subtracting the series of length $(n-2)$ because I'm subtracting too much.
Would appreciate any help in correcting that mistake.
Thanks:)
| As soon as I have a series of length $2$ or more it will end with a pair $\dots ab$
To add $c$ so we get $\dots abc$ we need to make sure that $c\neq a, 4-b$. This excludes two possibilities unless $a=4-b$, but in this case the sequence was already bad, so we can exclude it.
(This can also be used to analyse what happens at the start of the series).
There will be some special cases at the beginning where this analysis does not apply, but these can be handled by hand.
I think you have overcomplicated your analysis - the exclusions depend on the actual elements of the series and you need to get under the skin of that before you start counting.
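Carrying the exclusion argument through would give $4\cdot 3\cdot 2^{\,n-2}$ series of length $n\ge 2$ (four choices for the first symbol, three for the second, two thereafter); a brute-force check for small $n$ (my own addition) agrees:

```python
from itertools import product

def good(s):
    # forbidden adjacent pairs 04, 40, 13, 31 are exactly those summing to 4
    if any(s[i] + s[i + 1] == 4 for i in range(len(s) - 1)):
        return False
    # forbidden patterns 0X0, 1X1, 3X3, 4X4: no repeat at distance two
    return all(s[i] != s[i + 2] for i in range(len(s) - 2))

for n in range(2, 9):
    count = sum(good(s) for s in product((0, 1, 3, 4), repeat=n))
    assert count == 4 * 3 * 2 ** (n - 2)
print("f(n) = 12 * 2^(n-2) for 2 <= n <= 8")
```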
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How many labeled trees of $n$ vertices exist which have at most $k$ degree of any vertex? I am interested in how one can count, out of all $n^{n-2}$ labeled trees on $n$ vertices, those that have degree at most $k$ at every vertex. Is there a way to do it for any $k$?
All trees with degree at most $k$ can be encoded as Prüfer sequences in which each vertex number is repeated at most $k-1$ times. Therefore we could count all possible sequences with each element repeated at most $k-1$ times. Honestly, I don't know how one could do that.
Any suggestons would be welcomed.
| There is no "closed form" solution in terms of elementary functions. However, the number of lists of length $n-2$ with entries in $\{1,2,\dots,n\}$ where each element appears at most $k-1$ times can be written as
$$
(n-2)!\cdot[x^{n-2}]\Big(1+x^1/1!+x^2/2!+\dots+x^{k-1}/(k-1)!\Big)^n
$$
Here, $[x^m]f(x)$ refers to the coefficient of $x^m$ in the polynomial (or power series) $f(x)$. This can be computed efficiently using Fourier transform polynomial multiplication.
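For small cases the formula is easy to check against direct enumeration (my own sketch, using exact rational arithmetic for the coefficients):

```python
from fractions import Fraction
from itertools import product
from math import factorial

def count_by_formula(n, k):
    # truncated exponential: 1 + x + ... + x^{k-1}/(k-1)!
    base = [Fraction(1, factorial(i)) for i in range(k)]
    poly = [Fraction(1)]
    for _ in range(n):                       # poly = base ** n
        new = [Fraction(0)] * (len(poly) + len(base) - 1)
        for i, a in enumerate(poly):
            for j, b in enumerate(base):
                new[i + j] += a * b
        poly = new
    return factorial(n - 2) * poly[n - 2]    # (n-2)! * [x^{n-2}]

def count_by_brute_force(n, k):
    return sum(1 for seq in product(range(n), repeat=n - 2)
               if all(seq.count(v) <= k - 1 for v in seq))

for n in range(3, 7):
    for k in range(2, n):
        assert count_by_formula(n, k) == count_by_brute_force(n, k)
print("counts agree for small n, k")
```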
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $9$'th derivative of $\frac{x^3 e^{2x^2}}{(1-x^2)^2}$ How can I find $9$'th derivative at $0$ of $\displaystyle \frac{x^3 e^{2x^2}}{(1-x^2)^2}$. Is there any tricky way to do that?
This exercise comes from a discrete mathematics exam, so I think that tools like Taylor series can't be used there.
| Hint: Take $\ln$ of both sides of
$$y=\frac{x^3 e^{2x^2}}{(1-x^2)^2}$$
and use these facts that
$$\dfrac{d^n}{dx^n}\ln(1+x)=\dfrac{(-1)^{n-1}(n-1)!}{(1+x)^{n}}$$
$$\dfrac{d^n}{dx^n}\ln(1-x)=-\dfrac{(n-1)!}{(1-x)^{n}}$$
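As an independent cross-check of whatever the log-differentiation yields (my own addition): the 9th derivative at $0$ equals $9!\,[x^9]f$, and the Taylor coefficient can be computed exactly from the three factor series.

```python
from fractions import Fraction
from math import factorial

N = 10  # work with series mod x^N

def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

exp2x2 = [Fraction(0)] * N            # e^{2x^2} = sum 2^k x^{2k} / k!
for k in range(0, N, 2):
    exp2x2[k] = Fraction(2 ** (k // 2), factorial(k // 2))

inv = [Fraction(0)] * N               # 1/(1-x^2)^2 = sum (m+1) x^{2m}
for m in range(0, N, 2):
    inv[m] = Fraction(m // 2 + 1)

x3 = [Fraction(0)] * N
x3[3] = Fraction(1)

series = mul(mul(x3, exp2x2), inv)
ninth = factorial(9) * series[9]      # 9! * [x^9] = 9! * 46/3
print(ninth)                          # 5564160
```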
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Slot machines fundamental matrix interpretation The problem:
A man is playing two slot machines. The first machine pays off with probability c, the second with probability d. If he loses, he plays the same machine again. If he wins, he switches to the other machine. Let $S_i$ be the state of playing the $i$-th machine. Create the transition matrix and, assuming c=1/2 and d=1/4, find the fundamental matrix and interpret it.
What I've done:
I found both matrices, but I don't know how to interpret the fundamental matrix; help would be appreciated.
$$T =\begin{bmatrix} 1-c & c \\ d & 1-d \end{bmatrix}$$
$$Z =\begin{bmatrix} 11/9 & -2/9 \\ -1/9 & 10/9 \end{bmatrix}$$
Where $T$ is the transition matrix and $Z$ is the fundamental matrix, calculated after replacing the given values for c and d.
| The fundamental matrix of an ergodic Markov chain is
$$Z = \left(I-P+W\right)^{-1}$$
where $P$ is the transition matrix and $W$ is a matrix where each row is the fixed probability vector $w = (w_i)$ (the limiting distribution).
You can calculate the mean first passage times $m_{ij}$ (expected number of steps to reach state $j$ when starting from the state $i$, when $i\neq j$) from $Z=[z_{ij}]$ with the formula
$$m_{ij} = \frac{z_{jj}-z_{ij}}{w_j}$$
Source: this book, Theorem 11.16.
As you can see, those are
$$m_{12} = \frac{\frac{10}{9}-(-\frac{2}{9})}{\frac{2}{3}} = 2$$
and
$$m_{21} = \frac{\frac{11}{9}-(-\frac{1}{9})}{\frac{1}{3}} = 4$$
which in this case we can of course also read off directly as the inverses of the win probabilities ($m_{12}=1/c=2$, $m_{21}=1/d=4$), since the number of steps we stay in each state is a geometric random variable.
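All of the above can be reproduced with exact $2\times 2$ arithmetic (my own addition):

```python
from fractions import Fraction as F

c, d = F(1, 2), F(1, 4)
P = [[1 - c, c], [d, 1 - d]]
w = [d / (c + d), c / (c + d)]          # stationary distribution (1/3, 2/3)

# M = I - P + W, then Z = M^{-1} via the 2x2 inverse formula
M = [[1 - P[0][0] + w[0], -P[0][1] + w[1]],
     [-P[1][0] + w[0], 1 - P[1][1] + w[1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Z = [[M[1][1] / det, -M[0][1] / det],
     [-M[1][0] / det, M[0][0] / det]]

assert Z == [[F(11, 9), F(-2, 9)], [F(-1, 9), F(10, 9)]]
m12 = (Z[1][1] - Z[0][1]) / w[1]        # = 2 = 1/c
m21 = (Z[0][0] - Z[1][0]) / w[0]        # = 4 = 1/d
print(m12, m21)
```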
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this solution correct? I thought it was meant to be $y = 1/t-c_i,$ if the solution is wrong, what's the correct answer? The question is solve the following ODE: $t^2y'' =(y')^2 ,\; t>0.$
Here's the solution file:///M:/pc/My%20Documents/Doc3.pdf
This is the solution my lecture provided and I think there's an error somewhere when $c_1$ doesn't equal $0;$ shouldn't it be $y= t^2/2+\ln (1-c_1t) + c_2?$
If the solution is wrong, what's the correct solution? Thanks: spent over 6 hours trying to work it out to come to the conclusion that there's an error.
| Hint: Substitute $y'=v$ (so $y''=v'$); then you will get $$\frac{t^2v'(t)}{v(t)^2}=1,$$ which is separable.
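Spelled out (my own addition): with $v=y'$ the ODE reads $t^2v' = v^2$; separating variables gives $1/v = 1/t - C$, i.e. $v(t) = t/(1-Ct)$ for a constant $C$. A numeric spot check:

```python
# Verify that v = t/(1 - C t) satisfies t^2 v' = v^2 at a few sample points,
# using a central difference for v'.
C = 0.3

def v(t):
    return t / (1 - C * t)

for t in (0.5, 1.0, 2.0, 3.0):
    h = 1e-6
    dv = (v(t + h) - v(t - h)) / (2 * h)
    assert abs(t * t * dv - v(t) ** 2) < 1e-4
print("v = t/(1 - C t) satisfies t^2 v' = v^2")
```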
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3247895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How many different ordered 20-tuples? Question: How many different ordered 20-tuples $\left( x_{1}, x_{2}, x_{3}, \dotsc, x_{20} \right)$ can you create if $x_{1}, x_{2}, x_{3}, \dotsc, x_{20}$ are non-negative integers and $x_{1} \leq 3, \, x_{2} \leq 6, x_{3} \leq 9, \dotsc, x_{20} \leq 60$ and $x_{1}, x_{2}, x_{3}, \dotsc, x_{20}$ are different?
The answer is $2^{20} \, 21!$
| The number of choices that you have for the $k^{th}$ number is $3k+1-(k-1) = 2(k+1)$, because $3k+1$ is the total number of choices and $k-1$ are the numbers that have already been selected and are excluded from the available choices.
Multiplying these for $k$ ranging from $1$ to $20$ we get the desired result. Namely, $\prod \limits_{k=1}^{20} 2(k+1) = 2^{20}\cdot 21!$.
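A brute-force check of the same product formula, $2^n(n+1)!$, on scaled-down versions of the problem (my own addition):

```python
from itertools import product
from math import factorial

def brute(n):
    # x_k in {0, ..., 3k}, all entries distinct
    return sum(1 for t in product(*[range(3 * k + 1) for k in range(1, n + 1)])
               if len(set(t)) == n)

for n in range(1, 5):
    assert brute(n) == 2 ** n * factorial(n + 1)
print("count = 2^n (n+1)! for n <= 4")
```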
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\langle f(x) \rangle $ is prime ideal in $\Bbb Z[x]$ if $f(x)$ is irreducible over $\Bbb Z$ I know $(x^2+1)$ is a prime ideal of $\Bbb Z[x]$ without being a maximal ideal. This is easily proved, since the quotient ring is isomorphic to the integral domain $\Bbb Z[i]$. My question is the generalization of this, i.e. where $(x^2+1)$ is replaced by an arbitrary irreducible polynomial $f(x)$ over $\Bbb Z$. Can anyone suggest an outline of this proof? Thank you.
| A simple argument could be:
*
*If $f(x)$ is irreducible in $\Bbb{Z}[x]$, including that it has no non-unit constant factor, then it is also irreducible in $\Bbb{Q}[x]$ (the usual argument invokes Gauss's lemma at a key step).
*Therefore $f(x)$ generates a maximal ideal in $\Bbb{Q}[x]$.
*Therefore $\Bbb{Q}[x]/\langle f(x)\rangle$ is a field.
*But $\Bbb{Z}[x]/\langle f(x)\rangle$ is a subring of $\Bbb{Q}[x]/\langle f(x)\rangle$, so it is an integral domain.
*Therefore $\langle f(x)\rangle$ is a prime ideal of $\Bbb{Z}[x]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
lengths of sides in golden ratio isosceles triangles The figure below shows three different isosceles triangles. Every triangle is either 36-36-108 or 36-72-72. The base of the outermost triangle has length $\phi$.
Find the lengths of both lines AT and MT.
Can someone please help me figure this out? What I'm thinking so far is that $1+\phi=\phi^2$ is the equation whose positive root is the golden ratio.
I found this image online (on Wikipedia) and I think it is the solution, but I don't know how to solve it or actually get these values.
| As shown on Wikipedia's article about golden ratio (letters in the blockquote indicate points on the picture provided by Wikipedia):
If angle BCX = α, then XCA = α because of the bisection, and CAB = α because of the similar triangles; ABC = 2α from the original isosceles symmetry, and BXC = 2α by similarity. The angles in a triangle add up to 180°, so 5α = 180, giving α = 36°. So the angles of the golden triangle are thus 36°-72°-72°. The angles of the remaining obtuse isosceles triangle AXC (sometimes called the golden gnomon) are 36°-36°-108°.
From the given above (in the question) $AH=TH= \phi$ and $m(\angle AHT)=36°$, So by using Law of cosines:
$${AT}^2={AH}^2+{TH}^2-2(AH)(TH)(\cos {\angle AHT}) \\ {AT}^2={\phi}^2+{\phi}^2-2(\phi)(\phi)(\cos {36°}) \\ {AT}^2=2{\phi}^2-2\phi^2 \cdot \cos {36°} \\ {AT}^2=2 \left( {\frac {3+ \sqrt 5}{2}} \right) -2 \left( {\frac {3+ \sqrt 5}{2}} \right) (\cos {36°})=1 \\ AT=\sqrt 1 =1$$
Similarly,
$${AC}^2=2 {\phi}^2-2{\phi}^2(\cos 108°)={\phi}^4 \\ AC = {\phi}^2$$
So, as you can see, the law of cosines is the trick in such questions.
I hope my answer helps you !
Another solution:
You can use trigonometric functions to get the height of either of the two triangles and then use the Pythagorean theorem to find the other sides (in each triangle).
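A quick numeric confirmation of both law-of-cosines computations (my own addition):

```python
import math

phi = (1 + math.sqrt(5)) / 2
AT2 = 2 * phi**2 - 2 * phi**2 * math.cos(math.radians(36))
AC2 = 2 * phi**2 - 2 * phi**2 * math.cos(math.radians(108))

assert abs(AT2 - 1) < 1e-12          # AT = 1
assert abs(AC2 - phi**4) < 1e-12     # AC = phi^2
print(math.sqrt(AT2), math.sqrt(AC2))
```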
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Unit vector notation - both line and hat? I know that unit vectors are usually written in bold with a hat for example: $$\hat{\mathbf{i}}$$
But if you use vector notation with arrows (as I do when I write with pen and paper), should you have both an arrow above the vector and a hat, or just the hat?
Thanks!
| In general, what indicates whether a vector is unit vector or not is the letter used rather than the hat ($\hat{}$). So, as long as you prevent confusion by not giving name $i,j,k,e_1,e_2$ etc. to some other vectors, $\vec{i}$ still is a unit vector. But I should also note that this also depends on the context in which you are using this notation. I don't know whether there is a context where $\hat{i}$ or $\vec{i}$ is a different vector from $[1\ 0\ 0]^T$ but I am sure that in some contexts, someone can name a vector $i$ even if it is not a unit vector (for instance where $e_1,e_2,e_3,$ etc. are used for unit vectors). But as long as you clarify your notation beforehand, I don't think there will be any problems when you use $\vec{i}, \vec{j}, \vec{k}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving monotone function of two variables is integrable
Let $f:[0,1]^2\rightarrow \mathbb R$ be a monotone function of two variables, that is, $x\leq x'$ and $y\leq y' \implies f(x,y)\leq f(x',y').$ Prove that $f$ is Riemann integrable.
I want to "copy" and generalize the argument for the one dimensional case. Well, what I tried so far was to consider the partition $P=\{P_{ij}: i,j = 1,\cdots, N\}, N\in \mathbb N, $ given by $P_{ij} = (\frac{i-1}{N},\frac{i}{N})\times (\frac{j-1}{N},\frac{j}{N})$. This is a partition of the square by small squares of area $1/N^2$. As in the one dimensional case, I want the sum $R(f,P)-L(f,P)$ to telescope and be something like: $\frac{f(1,1)-f(0,0)}{N}$ where $f(1,1)$ and $f(0,0)$, as we may notice, is the maximum and minimum of the function on the square. But, it doesn't seem that this sum will be telescoping, because for each small square, its maximum and minimum of the function is reached at the opposed diagonal vertices (on the right) and will always remain diagonal vertices which will not "kill each other".
Is this the right approach? Any hint on how to prove this?
| The key idea is that you have $N^2$ squares each of area $\dfrac{1}{N^2}$, but after telescoping the sum "as much as possible", there are only $2N-1$ summands left, each summand being of the form $f(p) - f(q)$; which is bounded by $f(1,1) - f(0,0)$. Let $S$ denote an arbitrary subrectangle determined by the partition $P$ you have constructed. Then,
\begin{align}
U(f,P) - L(f,P) &= \sum_{S \in P} \left( M_S(f) - m_S(f) \right) \cdot \text{area}(S) \\
&= \dfrac{1}{N^2} \sum_{S \in P} \left( M_S(f) - m_S(f) \right) \\
&= \dfrac{1}{N^2} \left( \text{sum of $(2N-1)$ terms of the form $f(p) - f(q)$} \right) \\
& \leq \dfrac{1}{N^2} \sum_{i=1}^{2N-1} f(1,1) - f(0,0) \\
&= \dfrac{2N-1}{N^2} \cdot \left( f(1,1) - f(0,0) \right)
\end{align}
As $N \to \infty$, the RHS $\to 0$; hence you're done.
Now, I know I didn't index my terms properly etc, because I think this is one of those problems where a picture makes it a hundred times clearer. Consider the figure below:
Here, I chose $N=5$, so there are $N^2 = 25$ rectangles. The purple diagonal lines indicate how the telescoping works. The red dots are to be taken with a $+$ sign, while the blue dots are to be taken with a $-$ sign. The $2N-1$ I got above is because there are $2N-1 = 9$ red dots. So for this partition, just to make things explicit, we have
\begin{align}
25 \cdot \left( U(f,P) - L(f,P) \right) &=
\left[ f \left( \dfrac{1}{5},1 \right) - f \left(0, \dfrac{4}{5} \right)\right] +
\left[ f \left( \dfrac{2}{5},1 \right) - f \left(0, \dfrac{3}{5} \right)\right] \\
&+ \left[ f \left( \dfrac{3}{5},1 \right) - f \left(0, \dfrac{2}{5} \right)\right]
+ \left[ f \left( \dfrac{4}{5},1 \right) - f \left(0, \dfrac{1}{5} \right)\right] \\\\
&+ \left[ f \left( 1,1 \right) - f \left(0, 0 \right)\right] \\\\
&+ \left[ f \left( 1, \dfrac{4}{5} \right) - f \left(\dfrac{1}{5}, 0 \right)\right]
+ \left[ f \left( 1, \dfrac{3}{5} \right) - f \left(\dfrac{2}{5}, 0 \right)\right] \\
&+ \left[ f \left( 1, \dfrac{2}{5} \right) - f \left(\dfrac{3}{5}, 0 \right)\right]
+ \left[ f \left( 1, \dfrac{1}{5} \right) - f \left(\dfrac{4}{5}, 0 \right)\right] \\\\
& \leq 9 \cdot \left[ f \left( 1,1 \right) - f \left(0, 0 \right)\right]
\end{align}
Hence,
\begin{align}
U(f,P) - L(f,P) &\leq \dfrac{9}{25} \cdot \left[ f \left( 1,1 \right) - f \left(0, 0 \right)\right] \\\\
&= \dfrac{2(5) -1}{5^2} \left[ f \left( 1,1 \right) - f \left(0, 0 \right)\right]
\end{align}
I think it's a notational nightmare to precisely explain where the $2N-1$ comes from, but pictorially it's a trivial counting exercise. So, as a recap, the crux of the proof was that the number of summands grows like $N$, while the number of subsquares grows like $N^2$, so that their ratio goes to $0$ for large $N$. I'm sure this proof can be extended to $\mathbb{R}^d$ as well, where now the number of summands grows like $N^{d-1}$, while the number of subcubes grows like $N^d$.
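The displayed bound $U(f,P)-L(f,P) \le \frac{2N-1}{N^2}\left(f(1,1)-f(0,0)\right)$ is easy to confirm numerically for a sample monotone function (my own sketch):

```python
def f(x, y):
    return x * x + y          # monotone in each variable

def gap(N):
    # U - L for the uniform N x N partition; on each subsquare the
    # max/min of a monotone f sit at opposite corners
    total = 0.0
    for i in range(N):
        for j in range(N):
            M = f((i + 1) / N, (j + 1) / N)
            m = f(i / N, j / N)
            total += (M - m) / (N * N)
    return total

for N in (5, 10, 50):
    assert gap(N) <= (2 * N - 1) / N**2 * (f(1, 1) - f(0, 0)) + 1e-12
print(gap(50))
```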
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why does this inequality hold for smallest positive non-square modulo p? Let $p$ be an odd prime and $q$ the smallest positive integer that is not a square modulo $p$. Show that $q<\sqrt{p}+1$?
I can show that $q$ also has to be prime and that $p$ is a divisor of $q^\frac{p-1}{2}+1$. But I am not sure how this can help me.
| Assume $q>\sqrt p+1$. Let $a=\left\lceil\frac pq\right\rceil$. Then
$$1\le a\le\left\lceil\frac p{\sqrt p+1}\right\rceil
=\left\lceil\sqrt p-1+\frac 1{\sqrt p+1}\right\rceil\le\left\lceil\sqrt p\right\rceil<q,$$
hence $a$ is a square $\bmod p$ and so $aq$ is a non-square.
But
from $p\le aq<p+q$, we see that $aq\bmod p$ is one of $0,1,\ldots,q-1$, hence a square, contradiction.
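An empirical check of the statement for small primes (my own addition; the inequality $q<\sqrt p+1$ is tested in the exact integer form $(q-1)^2<p$, which is equivalent since $\sqrt p$ is irrational):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (p for p in range(3, 2000) if is_prime(p)):
    squares = {pow(x, 2, p) for x in range(1, p)}
    q = next(q for q in range(2, p) if q not in squares)   # least non-residue
    assert (q - 1) ** 2 < p                                # q < sqrt(p) + 1
print("q < sqrt(p) + 1 holds for all odd primes below 2000")
```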
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Do the categories of Sets and Smooth manifolds with smooth functions have zero morphisms. And how are compositions with the empty set defined? When reading the Wikipedia article on zero morphisms, it seems that the category of sets does not have zero morphisms. Also, I could not find how composition with the empty map works in Set. Can someone explain that?
Edit: there is a confusion about what 'does have zero morphisms' mean. Here I actually mean: 'category with zero morphisms'.
| Thomas Andrews has answered your main question about zero morphisms, so I'll just answer the question "how are compositions with the empty set defined?"
In the category of sets, a function $X\to Y$ is a set $f\subseteq X\times Y$ such that for all $x\in X$, there is a unique $y\in Y$ such that $(x,y)\in f$. From this definition, we see that $\emptyset \subseteq X\times Y$ is a function $X\to Y$ if and only if $X = \emptyset$. Indeed, if $X\neq \emptyset$, then picking some $x\in X$, there is no $y\in Y$ such that $(x,y)\in \emptyset$, so $\emptyset$ is not a function $X\to Y$. On the other hand, if $X = \emptyset$, then $\emptyset$ satisfies the definition of a function $X\to Y$ vacuously.
Now how do we compose with the empty function? Well, let's view $\emptyset$ as a function $\emptyset \to Y$, and suppose we have a function $g\colon Y\to Z$. Composing, we should get a function $(g\circ \emptyset)\colon \emptyset \to Z$. You should expect to get $(g\circ \emptyset) = \emptyset$, since the empty function is the only function with domain $\emptyset$. And indeed, we have $$(g\circ \emptyset) = \{(x,z)\mid \exists y\, (x,y)\in \emptyset\text{ and }(y,z)\in g\} = \emptyset.$$
On the other hand, let's view $\emptyset$ as a function $\emptyset \to Y$, and suppose we have a function $g\colon Z\to \emptyset$. Composing, we should get a function $(\emptyset\circ g)\colon Z\to Y$. Again, we have $$(\emptyset\circ g) = \{(z,y)\mid \exists x\, (z,x)\in g\text{ and }(x,y)\in \emptyset\} = \emptyset.$$
In fact, in this case we must have $Z = \emptyset$ and $g = \emptyset$, since there are no functions from non-empty sets to the empty set.
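This set-of-pairs picture is easy to play with directly (my own sketch, using the composition formula from the text):

```python
def compose(g, f):
    # (g ∘ f) = {(x, z) | ∃y: (x, y) ∈ f and (y, z) ∈ g}
    return {(x, z) for (x, y1) in f for (y2, z) in g if y1 == y2}

empty = set()                        # the unique function with domain ∅
g = {(1, 'a'), (2, 'b')}             # a function {1, 2} -> {'a', 'b'}

assert compose(g, empty) == set()    # g ∘ ∅ = ∅
assert compose(empty, empty) == set()
print("composing with the empty function yields the empty function")
```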
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Uniqueness of $\mathbb{R}$ In the very beginning of baby Rudin we are given
$\textbf{1.19 Theorem}$ There exists an ordered field $\textit{R}$ with the least upper bound axiom. Moreover, $\textit{R}$ contains $\textit{Q}$ as a subfield.
Are the real numbers the $\textit{unique}$ ordered field with l.u.b. property with the rationals being a subfield? Of course we know $\mathbb{R}$ exists through many constructions, but are there any uncountably infinite fields with the same properties that aren't $\mathbb{R}$ up to isomorphism?
My intuition is telling me that it's true since the fact that $\mathbb{Q}$ (or an isomorphic copy) being a subfield of each must mean that both fields must capture the same properties. But I'm really not sure. Apologies if this question is silly.
| Yes. The relevant property is not, however, that $\mathbb{Q}$ is a subfield, but rather the fact of the supremum property (or any one of any number of other statements equivalent to Dedekind completeness). $\mathbb{R}$ is the only ordered field (up to isomorphism) with this property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3248955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Problem with change of variables in an integral In calculating the electric potential of a charged line segment of length $2L$ (don't worry, this is pretty much where the physics ends), I encountered the following integral: \begin{equation}V(r,z)=\frac{Q}{8\pi\varepsilon L}\int_{-L}^L\frac{dz'}{\sqrt{r^2+(z-z')^2}}.\end{equation} Based purely on this definition, I think I can assert that $V(r,z)\neq V(r,-z)$.
Trying to solve the integral, I made the substitution $u=z-z'$. This means $du=-dz'$; furthermore $$\begin{cases}\begin{array}{lll}z'=L&\Rightarrow&u=z-L\\z'=-L&\Rightarrow&u=z+L.\end{array}\end{cases}$$ Thus, we obtain $$V(r,z)=\frac{Q}{8\pi\varepsilon L}\int_{z-L}^{z+L}\frac{du}{\sqrt{r^2+u^2}}$$ where I compensated for the minus sign in $du=-dz'$ by interchanging the bounds.
I did the rest of the calculations by hand. I checked my answer with Maple; Maple gives me answers $f(r,z)$ and $g(r,z)$ for the first and second integrals respectively (the actual expressions aren't of much importance). These answers are supposed to be equal if I carried out the change of variables adequately, but they turn out NOT to be equal. Instead, they have the odd property that $f(r,z)=g(r,-z)$! I assume something went wrong in my substitution, but I wouldn't know where exactly.
I'd very much appreciate any help!
| Actually, $V(r,z)=V(r,-z)$, as can be easily seen by the substitution $z'\to -z'$ in the integrand, which has the effect of replacing $z$ with $-z$.
Indeed, letting $y=z'$ for ease of notation we have
$$
V(r,z)=c\int_{-L}^L\frac{dy}{\sqrt{r^2+(z-y)^2}}=c\int_{L}^{-L}\frac{-dy}{\sqrt{r^2+(z+y)^2}}=c\int_{-L}^L\frac{dy}{\sqrt{r^2+(-z-y)^2}}=V(r,-z)
$$
where $c=Q/(8\pi\epsilon L).$
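As a quick numerical sanity check (my own addition, not part of the argument above), one can approximate the integral with a midpoint rule and confirm the symmetry $V(r,z)=V(r,-z)$; the constant prefactor $c$ is dropped, and the values $r=2$, $z=0.7$, $L=1$ are arbitrary choices for the test.

```python
# Midpoint-rule approximation of the integral (prefactor c dropped).
# This is an illustrative sketch, not part of the original derivation.
from math import sqrt

def V(r, z, L=1.0, n=100000):
    h = 2.0 * L / n
    total = 0.0
    for k in range(n):
        zp = -L + (k + 0.5) * h  # midpoint of the k-th subinterval in z'
        total += h / sqrt(r * r + (z - zp) ** 2)
    return total

diff = abs(V(2.0, 0.7) - V(2.0, -0.7))
```

Since the midpoint grid is symmetric about $z'=0$, the two sums contain exactly the same terms and agree up to floating-point summation order.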
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Minimum number of times monkey must travel While working on an unrelated subject, I found a problem which could be alternatively be stated as the following:
A monkey must travel a path of length $n-1$, from $a_1$ to $a_n$. On every turn, he may jump any distance forward, but cannot go backwards. He may jump any number of times. How many times must he travel the path so that he has jumped at least once from $a_i$ to $a_j$ for all $1\leq i\lt j\leq n$? (After reaching $a_n$, one travel is completed and he starts again from $a_1$.)
The solution to the original problem (in chemistry) was done by brute force, but I am hoping to discover an expression in $n$.
My attempts
Suppose the number of such paths is $f(n)$. Then for a path to $a_{n+1}$, it must be $f(n+1)$. This can be achieved in the following method:
Jump $1$ unit, then jump in the $f(n)$ ways. Then jump $2$, then in $f(n-1)$ ways. And so on, thus getting $f(n+1)=\sum_1^nf(k)$. But this is wrong, as a lot of these paths overlap, and I can find no way of figuring out how many.
So I tried reformulation, again.
Take series $\langle a_n\rangle_1^n$ of whole numbers such that $\sum a_k=n$. Every nonzero element appears before every zero element. What is the minimum number of such series which must be taken, such that for every $i\geq1$, $a_{n-i}$ is equal to every number from $1$ to $i$ in at least one of them?
There are, I believe, formulas for the breaking of natural numbers into a sum of whole numbers, but here the conditions are different, and the order too matters. I am unsure of where to go from here.
Please help.
| There was an answer here a few hours ago claiming $n^2/4.$ I did not read it in full, and I do not know why it's been deleted. But I think the claim is correct. The minimum number of travels is ${n^2\over 4}$ for even $n$, and ${n^2-1 \over 4}$ for odd $n$.
(Credits: the necessity argument I stole from the now-deleted post. The sufficiency part is my own - I didn't get a chance to read that part of the deleted post and have no idea if its argument is same as mine.)
Necessity: For each $a_i$ among the $\lfloor n/2 \rfloor$ nodes in the first half, and each $a_j$ among the $\lceil n/2 \rceil$ nodes in the second half, there is one jump $(a_i, a_j)$. None of these jumps can share the same travel. So the number of travels $\ge \lfloor n/2 \rfloor \lceil n/2 \rceil = {n^2 \over 4}$ (if $n$ is even) or ${n^2 - 1 \over 4}$ (if $n$ is odd).
Sufficiency, via explicit construction:
In round $1$, we do all jumps of the form $(a_1, a_i)$ and $(a_i, a_n)$. These can be assembled into $n-1$ travels as follows: the direct jump $(a_1, a_n)$, and, the two-hop path $((a_1, a_i), (a_i, a_n))$ for all $i \in [2,n-1]$. Since there are $n-2$ choices for $i\in [2,n-1]$, plus the direct jump, the total number of travels is $n-1$.
In round $2$, we do all jumps of the form $(a_2, a_i)$ and $(a_i, a_{n-1})$. By the same argument as above, there are $n-4$ choices for $i \in [3, n-2]$, plus the direct jump $(a_2, a_{n-1})$, for a total of $n-3$ travels. (Obviously, for each travel we need to attach $(a_1, a_2)$ at the front and $(a_{n-1}, a_n)$ at the back.)
In general, in round $k$, we do all jumps of the form $(a_k, a_i)$ and $(a_i, a_{n-k+1})$, for a total of $n-2k+1$ travels. (Obviously, for each travel we need to attach $(a_1, a_k)$ at the front and $(a_{n-k+1}, a_n)$ at the back.)
We keep doing this as long as $k < n-k+1$.
*
*For even $n$, this means $k \le {n \over 2}$ (entire first half), and $n-k+1 \ge {n \over 2} + 1$ (entire second half).
*For odd $n$, this means the midpoint ${n+1\over 2}$ is never either $k$ or $n-k+1$. We have: $k \le {n+1 \over 2} - 1$ (entire first half, exclude midpoint), and $n-k+1 \ge {n+1 \over 2} +1$ (entire second half, exclude midpoint).
Claim: After we do this for all rounds, all jumps have now been used.
Proof: consider any jump $(a_p, a_q)$. Either $p$ is in the first half (exclude midpoint if $n$ odd), or $q$ is in the second half (exclude midpoint if $n$ odd). Thus, this jump has been used in some round. Note that if $n$ is odd and $p$ or $q$ is the midpoint, the argument still applies to the other point.
Summing up all rounds, the total number of travels $=f(n) = (n-1) + (n-3) + (n-5) + \dots$
*
*Even $n=2m: f(n) = (2m-1) + (2(m-1)-1) + \dots + 5 + 3 + 1 = m^2 = {n^2 \over 4}$.
*Odd $n=2m+1: f(n) = 2m + \dots + 4 + 2 = 2(m + \dots + 2 + 1) = m(m+1) = {n^2 -1 \over 4}$.
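A small consistency check (mine, not part of the proof): summing the per-round counts $n-2k+1$ over exactly the rounds described above does reproduce $\lfloor n^2/4\rfloor$.

```python
# Sum the per-round travel counts n - 2k + 1 while k < n - k + 1,
# and compare with floor(n^2 / 4). Illustrative check only.

def travels(n):
    total = 0
    k = 1
    while k < n - k + 1:          # rounds run as long as k < n - k + 1
        total += n - 2 * k + 1    # round k contributes n - 2k + 1 travels
        k += 1
    return total

ok = all(travels(n) == n * n // 4 for n in range(2, 200))
```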
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Getting a one to one morphism $p : G \to GL_n(\mathbb{C})$ I am studying representation theory, and I would like to find an algorithm that finds, given a finite group $G$, a one to one morphism $p : G \to GL_n(\mathbb{C})$ (the integer $n$ is also found by the algorithm). I don't necessarily want this algorithm to be efficient. First is it possible to make such an algorithm ?
Now I am wondering how to begin with. I thought about finding the character table of the group first (I know this is not an easy task) and finding its irreducible representations, but from there I don't see how I can find a solution to my problem.
If I have all the irreducible representations of my group I don't see how I can get $p$. I mean, the characters only give information about the sum on the diagonal of $p(g)$, but how do I get from this information the other entries in the matrices?
Thank you !
| Every finite group embeds into a permutation group; this is Cayley's theorem. The proof of the theorem is essentially constructive, with $G$ embedding into $S_{|G|}$. That is, there is an algorithm with input a Cayley table* of a finite group $G$ and output a permutation group $\operatorname{Perm}(G)$ isomorphic to $G$. You can then easily embed $\operatorname{Perm}(G)$ into $\operatorname{GL}_{|G|}(\mathbb{C})$ using permutation matrices, and again this can be made algorithmic.
*The input form can vary, but the OP explicitly mentions Cayley tables in the comments.
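To illustrate, here is a sketch of the Cayley-table-to-permutation-matrices construction (this code is my own; the group $\mathbb{Z}/3\mathbb{Z}$ and the helper names are chosen just for the example):

```python
# Cayley's theorem in code: left multiplication by g permutes the group
# elements, and each permutation becomes a 0/1 matrix. Example group: Z/3Z
# with table[i][j] = (i + j) % 3. Illustrative sketch only.

def permutation_matrices(table):
    n = len(table)
    mats = []
    for g in range(n):
        # left multiplication by g sends x to g*x = table[g][x]
        m = [[0] * n for _ in range(n)]
        for x in range(n):
            m[table[g][x]][x] = 1   # column x has its 1 in row g*x
        mats.append(m)
    return mats

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

table = [[(i + j) % 3 for j in range(3)] for i in range(3)]
mats = permutation_matrices(table)
# the map g -> mats[g] is a homomorphism: mats[g*h] == mats[g] * mats[h]
homomorphism = all(mats[table[g][h]] == matmul(mats[g], mats[h])
                   for g in range(3) for h in range(3))
```

The identity element maps to the identity matrix, and distinct group elements give distinct matrices, so the map is injective.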
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Uniform convergence of $f_n(x) = \left(1 + \frac{x}{n}\right)^n$ when calculating limit Calculate$$
\lim_{n \rightarrow \infty} \int_0^1 \left(1 + \frac{x}{n}\right)^ndx
$$
My attempt - if
$$
f_n(x) = \left(1 + \frac{x}{n}\right)^n
$$
converged uniformly for all $x \in [0,1]$ then I could swap the integral with the limit and solve it:
$$
\lim_{n \rightarrow \infty} \int_0^1 \left(1 + \frac{x}{n}\right)^ndx =
\int_0^1 \lim_{n \rightarrow \infty}\left(1 + \frac{x}{n}\right)^ndx =
\int_0^1 e^x dx = e^x|_{0}^{1} = e - 1
$$
I must then prove that $f_n(x)$ is indeed uniformly convergent. I already know that
$f_n(x) \rightarrow e^x$. If $f_n(x)$ converges uniformly then for each epsilon the following statement must hold
$$
\sup_{x \in [0,1]} \left|f_n(x) - f(x)\right| < \epsilon
$$
How can I prove this?
| You can use Dini's theorem.
On a compact set $K$, if a sequence of continuous functions $\langle f_n(x) \rangle$
$a)$ is monotone in $n$ for each $x \in K$
$b)$ converges pointwise to a continuous function of $x \in K$
then the convergence is uniform.
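As a numerical sanity check of the conclusion (my addition, independent of Dini's theorem), the integrals do approach $e-1$:

```python
# Midpoint-rule values of the integral of (1 + x/n)^n over [0, 1],
# compared with e - 1. Illustrative check only.
from math import e

def integral(n, steps=200000):
    h = 1.0 / steps
    return sum(h * (1 + (k + 0.5) * h / n) ** n for k in range(steps))

err_small = abs(integral(10) - (e - 1))
err_large = abs(integral(10000) - (e - 1))
```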
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
} |
Find $\lim_{x\to-\infty}1+2x^2+2x\sqrt{1+x^2}$ Consider the function
$$f(x)=1+2x^2+2x\sqrt{1+x^2}$$
I want to find the limit $f(x\rightarrow-\infty)$
We can start by saying that $\sqrt{1+x^2}$ tends to $|x|$ when $x\rightarrow-\infty$, and so we have that
$$\lim_{x\rightarrow-\infty}{(1+2x^2+2x|x|)}=\lim_{x\rightarrow-\infty}(1+2x^2-2x^2)=1$$
However, if you plot the function in Desmos or you do it with a calculator, you will find that $f(x\rightarrow-\infty)=0$
What am I missing?
| For negative $x$, we have
$$\sqrt{1+x^2}=-x\sqrt{1+\frac{1}{x^2}}=-x\left(1+\frac{1}{2x^2}+O(x^{-4})\right)$$
So we have
$$
\lim_{x\rightarrow-\infty}(1+2x^2+2x\sqrt{1+x^2})=\lim_{x\rightarrow-\infty}\left(1+2x^2-2x^2\left(1+\frac{1}{2x^2}+O(x^{-4})\right)\right)=\\
=\lim_{x\rightarrow-\infty}O(x^{-2})=0
$$
You were missing a constant term in your approximation for $\sqrt{1+x^2}$, it can sometimes be dangerous to just reason "by feeling" like you seem to have done, I suggest using Taylor series for rigorous derivations in such cases.
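A quick numeric illustration (mine, not part of the answer): $|f(x)|$ shrinks roughly like $1/(4x^2)$, matching the $O(x^{-2})$ estimate. Note that evaluating $f$ naively at very large $|x|$ suffers catastrophic cancellation in floating point, so only moderate arguments are used here.

```python
# |f(x)| for x = -10, -100, -1000; expect roughly 1/(4 x^2).
from math import sqrt

def f(x):
    return 1 + 2 * x * x + 2 * x * sqrt(1 + x * x)

vals = [abs(f(-10.0 ** k)) for k in range(1, 4)]
decreasing = all(a > b for a, b in zip(vals, vals[1:]))
```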
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Alternating sum of binomial coefficients multiplied by index to an n-2 extent Doing research in probability modelling I obtained an identity, which is correct for $n > 2$.
$$\sum\limits^n_{i=1}(-1)^{n+i}{{n}\choose{i}}i^{n-2}=0$$
How can it be proven directly? In what literature can I find it?
| This is a variation using the coefficient of operator $[z^n]$ to denote the coefficient of $z^n$ in a series. Recalling $e^z=\sum_{j=0}^\infty \frac{z^j}{j!}$ we can write for instance
\begin{align*}
n![z^n]e^{kz}=k^n\tag{1}
\end{align*}
We obtain for $n>2$
\begin{align*}
\color{blue}{\sum_{k=1}^n}&\color{blue}{(-1)^kk^{n-2}\binom{n}{k}}\\
&=\sum_{k=1}^n(-1)^k\binom{n}{k}(n-2)![z^{n-2}]e^{kz}\tag{2}\\
&=(n-2)![z^{n-2}]\sum_{k=1}^n\binom{n}{k}\left(-e^{z}\right)^k\\
&=(n-2)![z^{n-2}]\left(\left(1-e^z\right)^n-1\right)\tag{3}\\
&=(n-2)![z^{n-2}]\left((-1)^n\left(z+\frac{z^2}{2}+\cdots\right)^n-1\right)\tag{4}\\
&\,\,\color{blue}{=0}
\end{align*}
Comment:
*
*In (2) we apply the coefficient of operator according to (1)
*In (3) we apply the binomial theorem.
*In (4) we observe the coefficient of $z^{n-2}$ is zero since the left term starts with powers in $z$ greater or equal to $n$ and the constant $-1$ does not contribute since $n>2$.
Hint: Instructive examples using this and related techniques can be found in H.S. Wilf's book generatingfunctionology. See the examples following (1.2.6).
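One can also confirm the identity exactly for small $n$ with integer arithmetic (a brute-force check of mine, not a proof); the case $n=2$ shows the hypothesis $n>2$ is really needed.

```python
# Exact integer evaluation of the alternating sum for several n.
from math import comb

def alt_sum(n):
    return sum((-1) ** (n + i) * comb(n, i) * i ** (n - 2)
               for i in range(1, n + 1))

all_zero = all(alt_sum(n) == 0 for n in range(3, 20))
# for n = 2 the sum is -2 + 1 = -1, so the identity fails there
```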
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Show that this function is bounded Let $f$ be a $\mathbb R \rightarrow \mathbb R$ continuous function such that : $\lim_ {x \to \pm \infty} f(x) \in \mathbb R$ and $\lim_ {x \to 0} f(x) \in \mathbb R$
How can one show that $f$ is bounded ? I get it "intuitively" but I cant show it rigorously
| If $\lim_{x\to-\infty} f(x)=a$ and $\lim_{x\to\infty} f(x)=b$, put $|a|+|b|+1=:c$. There is an $M>0$ such that $|f(x)|\leq c$ for all $x\geq M$ and all $x\leq-M$. Since $f$ is continuous there is a $c'$ such that $|f(x)|\leq c'$ for all $x\in[-M,M]$. It follows that $|f(x)|\leq c+c'$ for all $x\in{\mathbb R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3249892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Calculate $\lim_{\epsilon\rightarrow 0^+} \int_0^1 \frac{1}{\epsilon x^3+1} dx$
Calculate $\lim_{\epsilon\rightarrow 0^+} \int_0^1 \frac{1}{\epsilon x^3+1} dx$
I tried to use:
$$\int_0^1 f(x) \le \int_0^1 \frac{1}{1+\epsilon x^3} dx \le \int_0^1 \frac{1}{1+0} dx=1$$However I have a problem to find $f(x)$ such that $\int_0^1 f(x) \rightarrow 1$ because when I will take $f(x)=\frac{1}{1+a}, a > 0$ then I have $\int_0^1 f(x)=\frac{1}{1+a}$. Have you got any ideas?
| Since $\epsilon>0,$ you have $f_\epsilon (x)\leq f(x),$ where $f_\epsilon(x)=\frac{1}{1+\epsilon x^3}$ and $f(x)=1.$ By dominated convergence theorem ($f$ is integrable on $[0,1]$), $f_\epsilon(x)\to f(x)$ as $\epsilon \to 0^+$ so $\int_0^1 f_\epsilon(x)\,\mathrm{d}x\to \int_0^1 f(x)\,\mathrm{d}x=1.$
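Numerically (my own sketch, not part of the answer), the gap $1-\int_0^1 f_\epsilon$ indeed shrinks to $0$ as $\epsilon\to 0^+$; in fact it behaves like $\epsilon/4$ for small $\epsilon$.

```python
# Midpoint-rule approximation of the integral for decreasing eps.

def integral(eps, steps=100000):
    h = 1.0 / steps
    return sum(h / (1 + eps * ((k + 0.5) * h) ** 3) for k in range(steps))

gaps = [1 - integral(10.0 ** -k) for k in range(1, 5)]   # eps = 0.1 ... 1e-4
shrinking = all(a > b > 0 for a, b in zip(gaps, gaps[1:]))
```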
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Vector form for Taylor series What is the vector form of Taylor series for a vector valued function of a scalar variable $f:\mathbb R\to\mathbb R^n $? I presume it is exactly the same as the classical Taylor series but for confirmation I have been searching the internet to no avail. Can anyone also point out a reference for the proof? I think the proof should follow from applying the classical case to each coordinate.
| If it's a function $f:\mathbb{R} \rightarrow \mathbb{R}^n$, the Taylor expansion is exactly analogous to the expansion for $f:\mathbb{R} \rightarrow \mathbb{R}$.
Let the $f:\mathbb{R} \rightarrow \mathbb{R}^n$ be equal to $f\left(x\right)=(g_1\left(x\right),...,g_n\left(x\right))$
Consider the Taylor expansion for each of the functions $g_1\left(x\right),...,g_n\left(x\right)$ around point $a$
$g_1\left(x\right)={\displaystyle g_1(a)+{\frac {g_1'(a)}{1!}}(x-a)+{\frac {g_1''(a)}{2!}}(x-a)^{2}+{\frac {g_1'''(a)}{3!}}(x-a)^{3}+\cdots ,}$
and so on...
$g_n\left(x\right)={\displaystyle g_n(a)+{\frac {g_n'(a)}{1!}}(x-a)+{\frac {g_n''(a)}{2!}}(x-a)^{2}+{\frac {g_n'''(a)}{3!}}(x-a)^{3}+\cdots ,}$
Now, $f\left(x\right)=(g_1\left(x\right),...,g_n\left(x\right))$
$=({\displaystyle g_1(a)+{\frac {g_1'(a)}{1!}}(x-a)+{\frac {g_1''(a)}{2!}}(x-a)^{2}+\cdots }$,...,${\displaystyle g_n(a)+{\frac {g_n'(a)}{1!}}(x-a)+{\frac {g_n''(a)}{2!}}(x-a)^{2}+\cdots })$
This is equal to:
$=(g_1\left(a\right),...,g_n\left(a\right))+({\frac {g_1'(a)}{1!}},...,{\frac {g_n'(a)}{1!}})(x-a)+({\frac {g_1''(a)}{2!}},...,{\frac {g_n''(a)}{2!}})(x-a)^2$...
Clearly, this is:
$f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^2...$,
which is the Taylor expansion of $f(x)$ at the point $a$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Derivative of a (complicated) sum Given
$$
f(x)=e^{-ax}\ \sum_{k=0}^{r-1}\frac{(ax)^k}{k!}
$$
How do I show that
$$
f'(x)= -\frac{a^r}{(r-1)!}x^{r-1}e^{-ax}
$$
Thank you in advance!
| It is:
$$f(x)=e^{-ax}\ \sum_{k=0}^{r-1}\frac{(ax)^k}{k!}=e^{-ax}\left(e^{ax}-\sum_{k=r}^{\infty} \frac{(ax)^k}{k!}\right)=1-e^{-ax}\sum_{k=r}^{\infty}\frac{(ax)^k}{k!} \Rightarrow \\
f'(x)=\color{red}{ae^{-ax}}\sum_{k=r}^{\infty}\frac{(ax)^k}{k!}-\color{red}{ae^{-ax}}\sum_{k=r}^{\infty}\frac{(ax)^{k-1}}{(k-1)!}=\color{red}{ae^{-ax}}\cdot \left(-\frac{(ax)^{r-1}}{(r-1)!}\right)=-\frac{a^r}{(r-1)!}x^{r-1}e^{-ax}.$$
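A numerical cross-check of the closed form (my addition; the parameter values $a=2$, $r=4$, $x=0.7$ are arbitrary choices for the test):

```python
# Central-difference derivative of f versus the closed-form f'(x).
from math import exp, factorial

A, R = 2.0, 4   # sample values chosen for the check

def f(x):
    return exp(-A * x) * sum((A * x) ** k / factorial(k) for k in range(R))

def fprime(x):
    return -A ** R / factorial(R - 1) * x ** (R - 1) * exp(-A * x)

h = 1e-6
err = abs((f(0.7 + h) - f(0.7 - h)) / (2 * h) - fprime(0.7))
```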
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Taylor polynomial of $f(x_1,...,x_m)=\varphi(e^{a\sum_{i=1}^mx_i})$
Let $\varphi:\mathbb{R}\to\mathbb{R}$ be a $C^3(\mathbb{R})$ function, with $a\in\mathbb{R}$. Find the Taylor polynomial of degree $3$, centered in the origin $p=(0,...,0)$, of $f(x_1,...,x_m)=\varphi(e^{a\sum_{i=1}^mx_i})$.
If we define $g=e^{a\sum_{i=1}^mx_i},\ g:\mathbb{R^m}\to\mathbb{R}$, we have that $f=\varphi\circ g$, so we can calculate the Taylor polynomial of $f$ centered in $p$ (from now on $P_{3,p,f}$) by doing the compositon $P_{3,g(p),\varphi}[P_{3,p,g}]$, where
$P_{3,p,g}=1+a\sum_{i=1}^mx_i+\frac{a^2}{2}\sum_{i,j=1}^mx_ix_j+\frac{a^3}{6}\sum_{i,j,k=1}^mx_ix_jx_k$
$P_{3,g(p),\varphi}=P_{3,1,\varphi}=\varphi(1)+\varphi'(1)(x-1)+\frac{1}{2}\varphi''(1)(x-1)^2+\frac{1}{6}\varphi'''(1)(x-1)^3$
Am I proceeding in the right way? I have tried to develop the last expression and replace $x$ by $a\sum_{i=1}^mx_i$, $x^2$ by $\frac{a^2}{2}\sum_{i,j=1}^mx_ix_j$ and $x^3$ by $\frac{a^3}{6}\sum_{i,j,k=1}^mx_ix_jx_k$, but it does not seem to be the right solution as it is not the same as the one in my book. What am I doing wrong? Thanks in advance!
| Replacing $x$ by the polynomial $P_{3,p,g}$ (taylor series of $g(x)$) in $P_{3,g(p),\varphi}$, and ignoring the terms with degree greater than $3$, we obtain the following expression:
$\varphi(1)+\varphi'(1)(a\sum_{i=1}^mx_i+\frac{a^2}{2}\sum_{i,j=1}^mx_ix_j+\frac{a^3}{6}\sum_{i,j,k=1}^mx_ix_jx_k)+\frac{1}{2}\varphi''(1)(a^2\sum_{i,j=1}^mx_ix_j+a^3\sum_{i,j,k=1}^mx_ix_jx_k)+\frac{1}{6}\varphi'''(1)a^3\sum_{i,j,k=1}^mx_ix_jx_k=$
$\varphi(1)$+$a\varphi'(1)\sum_{i=1}^mx_i$+$\frac{a^2}{2!}(\varphi'(1)$+$\varphi''(1))\sum_{i,j=1}^mx_ix_j$+$\frac{a^3}{3!}(\varphi'(1)$+$3\varphi''(1)$+$\varphi'''(1))\sum_{i,j,k=1}^mx_ix_jx_k$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f$ is continuous at $c$, prove $\lim_{h \to 0} (\inf \,\{f(x)\mid c \leqslant x \leqslant c+h\})=f(c)$ (duplicate) Let $f$ be continuous at $c$. Prove $$\lim_{h \to 0} \left(\inf \,\{f(x)\mid c \leqslant x \leqslant c+h\}\right)=f(c)$$
This fact is used in Spivak's book to prove the First Fundamental Theorem of Calculus.
This question is a duplicate; I asked it previously but I still don't understand it.
| Actually, that's not the only thing Spivak uses in his proof. He first considers the case $h > 0$, then $h < 0$, and then claims that $\lim \limits_{h \to 0} m_h = f(c)$. So, to take into account both signs of $h$, let's do the following: suppose $a < c < b$, and for any $h \in \Bbb{R}$, define the set
\begin{equation}
A_h = \{f(x): a \leq x \leq b \quad \text{and} \quad |x-c| \leq |h| \}.
\end{equation}
First, since $f$ is integrable, it is by definition bounded, so for all $h \in \Bbb{R}$, $A_h$ is a non-empty, bounded set hence $m_h := \inf A_h$ exists. We now wish to use continuity of $f$ at $c$ to prove $\lim \limits_{h \to 0}m_h = f(c)$.
To do this, let $\varepsilon > 0$ be arbitrary. By assumption, $f$ is continuous at $c$, so there is a $\delta > 0$ such that for all $x \in [a,b]$, if $|x-c| < \delta$, then $|f(x) - f(c)| < \varepsilon$. Now, let $0 < |h| < \delta$ be arbitrary. Then, for every $x \in [a,b]$ which satisfies $|x-c| \leq |h|$, we have that (since $|h| < \delta$)
\begin{equation}
f(c) - \varepsilon < f(x) < f(c) + \varepsilon.
\end{equation}
What this says is that $f(c) - \varepsilon$ is a lower bound for $A_h$ and $f(c) + \varepsilon$ is an upper bound for $A_h$. Hence, it follows by definition of infimum that
\begin{equation}
f(c) - \varepsilon \leq m_h \leq f(c) + \varepsilon
\end{equation}
Or equivalently, $|m_h - f(c)| \leq \varepsilon$. This completes the proof, because we have shown that given any $\varepsilon > 0$, there is a $\delta > 0$ such that for all $h \in \mathbb{R}$, if $0 < |h| < \delta$, then $|m_h - f(c)|\leq \varepsilon$.
Note: From my proof, we can only conclude that $|m_h - f(c)| \leq \varepsilon$, not $|m_h - f(c)| < \varepsilon$. But this still shows the desired limit because this is true for every $\varepsilon > 0$. If this isn't clear to you, see one of the exercises in Chapter 5 of Spivak (it's probably 25 or 26).
Next, for $c=a$ or $c=b$ you only have to modify the proof slightly; and for the proof of $\lim \limits_{h \to 0} M_h = f(c)$, you just have to replace $m_h$ with $M_h$ everywhere above, and the same proof works.
Also, while it's important to be able to prove this, I remember when I read his proof of the FTC, this confused me immensely. I think there is a simpler, more direct proof though, and I can outline it if you wish.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to calculate limit as $n$ tends to infinity of $\frac{(n+1)^{n^2+n+1}}{n! (n+2)^{n^2+1}}$? This question stems from and old revision of this question, in which an upper bound for $n!$ was asked for.
The original bound was incorrect. In fact, I want to show that the given expression divided by $n!$ goes to $0$ as $n$ tends to $\infty$.
I thus want to show:
$$\lim_{n\to\infty}\frac{(n+1)^{n^2+n+1}}{n!(n+2)^{n^2+1}}=0.$$
Using Stirling's approximation, I found that this is equivalent to showing that
$$\lim_{n\to\infty} \frac{\exp(n)}{\sqrt n}\cdot\left(\frac{n+1}{n+2}\right)^{n^2+1}\cdot\left(\frac{n+1}{n}\right)^n=0.$$
However, I don't see how to prove the latter equation.
EDIT: It would already be enough to determine the limit of $$\exp(n)\left(\frac{n+1}{n+2}\right)^{(n^2)}\left(\frac{n+1}{n}\right)^n$$ as $n$ goes to $\infty$.
| $$\frac{(n+1)^{n^2+n+1}}{n! (n+2)^{n^2+1}}$$
= $$\frac{(1+\frac{1}{n})^{n^2+n+1}}{n! (1+\frac{2}{n})^{n^2+1}} \frac{n^{n^2+n+1}}{n^{n^2+1}}$$
=$$\frac{(1+\frac{1}{n})^{n^2+n+1}}{n! (1+\frac{2}{n})^{n^2+1}} \frac{n^nn^{n^2+1}}{n^{n^2+1}}$$
=$$\frac{(1+\frac{1}{n})^{n^2+n+1}}{n! (1+\frac{2}{n})^{n^2+1}} n^n$$
=$$\frac{((1+\frac{1}{n})^{n})^n(1+\frac{1}{n})^{n}(1+\frac{1}{n})}{n! ((1+\frac{2}{n})^{n})^n (1+\frac{2}{n}) } n^n$$
at the limit as n tends to infinity, using the standard limit $e^z = \lim_{n\to\infty}(1 + z/n)^n$ (so $(1+\frac{1}{n})^n\to e$ and $(1+\frac{2}{n})^n\to e^2$)
=$$\frac{e^n\cdot e\cdot 1}{n!\,e^{2n}e^2} n^n$$
=$$\frac{ n^n}{n!e^{n+2}} $$
is how far I got really
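Working in logarithms confirms the limit numerically (this sketch is mine, not part of the answer; `math.lgamma(n+1)` is $\log n!$):

```python
# log of (n+1)^(n^2+n+1) / (n! * (n+2)^(n^2+1)) for growing n.
from math import lgamma, log

def log_expr(n):
    return ((n * n + n + 1) * log(n + 1)
            - lgamma(n + 1)
            - (n * n + 1) * log(n + 2))

logs = [log_expr(n) for n in (10, 100, 1000)]
decreasing = logs[0] > logs[1] > logs[2]
```

The logarithms drift slowly to $-\infty$, consistent with the expression tending to $0$ at rate roughly $n^{-1/2}$ (by Stirling, $n^n/(n!\,e^{n})\sim 1/\sqrt{2\pi n}$).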
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3250902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
"Prove that a topology Ƭ on X is the discrete topology if and only if {x} ∈ Ƭ for all x ∈ X" This question is from "Introduction to Topology: Pure and Applied," by Colin Adams and Robert Franzosa.
Here's how the authors define a topology:
Let X be a set. A topology Ƭ on X is a collection of subsets of X, each called an open set, such that
(i) ∅ and X are open sets;
(ii) The intersection of finitely many open sets is an open set;
(iii) The union of any collection of open sets is an open set.
Here's how they define a discrete topology:
Let X be a nonempty set and let Ƭ be the collection of all subsets of X. Clearly this is a topology, since unions and intersections of subsets of X are themselves subsets of X and therefore are in the collection Ƭ. We call this the discrete topology on X. This is the largest topology that we can define on X.
Here's where I am with this problem:
First, is this what it's asking? Prove that Ƭ is a discrete topology on X if and only if every x in X is a set {x} in Ƭ?
If that is indeed the question, I'm still struggling. (I'm very bad at proofs).
If we look at the definition of a discrete topology, it seems self evident. A discrete topology must contain all subsets of X, which would include every x in X. I just have no idea how to write a mathematical proof of that.
Appreciate any help.
| You correctly argued one direction -- if it has the discrete topology, the topology is the power set of $X$, so it contains all subsets of $X$, including the singletons $\{x\}$.
For the other direction, use axiom (iii) of a topology, and the fact that every subset of $X$ can be written as a union of singleton sets $\{x\}$.
E.g. $\{1,2,3 \} = \{1\} \cup \{2\} \cup \{3\}$.
What topology does $X$ have if every subset of $X$ is included in it?
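A tiny finite illustration of the union-of-singletons argument (my own addition), on $X=\{0,1,2\}$:

```python
# Every subset of X is the union of the singletons it contains, so a topology
# containing all singletons (and closed under unions) is the whole power set.
from itertools import combinations

X = [0, 1, 2]
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

# the empty union gives the empty set, covering the subset {} as well
recovered = all(set().union(*[{x} for x in s]) == s for s in subsets)
```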
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3251008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Almost quadratic computational complexity Suppose I can bound the running time of my algorithm as $O(a_N N^2)$ for any positive increasing sequence $\{a_N\}$ that diverges to infinity. Does this imply that my algorithm's running time is actually $O(N^2)$?
N.B. I understand that the running time can be bounded by $O(N^{2+\varepsilon})$ for any $\varepsilon > 0$, and by $O(N^2 \log\log(N))$. I would like to understand if the big O notation has "this sort of continuity property".
| Yes it does. Let $t_N$ denote the algorithm time. Let $f_N = \max\{t_M/M^2:M\le N\}$. Suppose $f_N$ diverges. Then $t_N$ fails to be $O(\sqrt{f_N} N^2)$. Therefore $f_N$ does not diverge. Since $f_N$ is a non-decreasing sequence, it must be bounded. Set $C= \sup f_N$. Then $t_N \le C N^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3251171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\lim_{(x,y)\to 0} \frac{x\sin(y)- y\sin(x)}{x^4 + y^4}$ without polar coordinates? I have the following limit:
$$\lim_{(x,y)\to 0} \frac{x\sin(y)- y\sin(x)}{x^4 + y^4}$$
And I must evaluate it without polar coordinates. I have tried a lot of stuff but nothing works. Can someone give me a hint?
| You can easily disprove that the limit exists by considering the one dimensional family of rays $y=\lambda x, \lambda\in\mathbb{R}$ and taking the limit to the origin along them instead, for a given $\lambda$. If the limit exists then it shouldn't depend on $\lambda$.
The limit on the rays boils down to evaluating:
$$L=\lim_{x\to 0}\frac{x\sin(\lambda x)-\lambda x\sin x}{(1+\lambda^4)x^4}$$
which one can evaluate straightforwardly using L'Hopital's rule or a Taylor expansion to yield:
$$L=\frac{\lambda-\lambda^3}{6(1+\lambda^4)}$$
which does depend on $\lambda$ and therefore the limit cannot exist.
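Numerically (my own check), the ray quotient at small $x$ already matches $L$ closely and visibly depends on $\lambda$:

```python
# Compare the quotient along y = lam * x at x = 1e-3 with the closed form L.
from math import sin

def quotient(lam, x):
    return (x * sin(lam * x) - lam * x * sin(x)) / ((1 + lam ** 4) * x ** 4)

def L(lam):
    return (lam - lam ** 3) / (6 * (1 + lam ** 4))

errs = [abs(quotient(lam, 1e-3) - L(lam)) for lam in (0.5, 2.0)]
close = all(e < 1e-4 for e in errs)
```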
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3251287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Let $\{a_n\}$ be a sequence of positive real numbers such that $\sum_{n=1}^\infty a_n$ is divergent. Which of the following series are convergent? Let $\{a_n\}$ be a sequence of positive real numbers such that
$\sum_{n=1}^\infty a_n$ is
divergent. Which of the following series are convergent?
a. $\sum_{n=1}^\infty \frac{a_n}{1+a_n}$
b. $\sum_{n=1}^\infty \frac{a_n}{1+n a_n}$
c. $\sum_{n=1}^\infty \frac{a_n}{1+ n^2a_n}$
My Solution:-
(a) Taking $a_n=n$, then $\sum_{n=1}^\infty \frac{n}{1+n}$ diverges.
(b) Taking $a_n=n$, $\sum_{n=1}^\infty \frac{n}{1+n^2}$ diverges using limit comparison test with $\sum_{n=1}^\infty \frac{1}{n}$
(c) $\frac{a_n}{1+n^2a_n}\leq \frac{a_n}{n^2a_n}=\frac{1}{n^2} $. Using comparison test. Series converges. I am not able to conclude for general case for (a) and (b)?
| If $\sum_{n=1}^{\infty} a_n$ is a divergent series of positive real numbers, prove that the series $\sum_{n=1}^{\infty}\frac{a_n}{1+a_n}$ is divergent.
Proof:
Let $S_n = a_1 +a_2 + ... +a_n$ .
Since the series $\sum_{n=1}^{\infty} a_n$ is a divergent series of positive real numbers, the
sequence $\{S_n\}$ is a monotone increasing sequence and $\lim S_n = \infty.$
Therefore for every natural number $n$ we can choose a natural number
$p$ such that $S_{n+p} > 1 + 2S_{n}$ . Now, $\frac{a_{n+1}}{1+a_{n+1}}+\frac{a_{n+2}}{1+a_{n+2}}+....+\frac{a_{n+p}}{1+a_{n+p}} >\frac{a_{n+1}}{1+S_{n+1}}+\frac{a_{n+2}}{1+S_{n+2}}+....+\frac{a_{n+p}}{1+S_{n+p}}$ (since $S_{n+p}\ge S_{n+1}\ge a_{n+1}$....$S_{n+p}\ge a_{n+p}$)
Now $\frac{a_{n+1}}{1+S_{n+1}}+\frac{a_{n+2}}{1+S_{n+2}}+....+\frac{a_{n+p}}{1+S_{n+p}}\ge \frac{S_{n+p}-S_n}{1+S_{n+p}}>\frac{\frac{1}{2}(1+S_{n+p})}{1+S_{n+p}}=\frac{1}{2}$
This shows that Cauchy's principle of convergence is not satisfied by
the series $\sum_{n=1}^{\infty}\frac{a_n}{1+a_n}$. Hence the series is divergent.
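For a concrete illustration (mine, not part of the proof): with $a_n = 1/n$, which gives a divergent series, $\frac{a_n}{1+a_n} = \frac{1}{n+1}$, whose partial sums also grow without bound.

```python
# Partial sums of a_n / (1 + a_n) for a_n = 1/n; these equal H_{N+1} - 1,
# a shifted harmonic number, and keep growing.

def partial(N):
    return sum((1.0 / n) / (1 + 1.0 / n) for n in range(1, N + 1))

sums = [partial(10 ** k) for k in (2, 3, 4)]
growing = sums[0] < sums[1] < sums[2]
```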
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3251425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Describing the orbits of an algebraic action on $\mathbb{A}^4$ I have an algebraic action of $(k^*)^2$ (an algebraic group) on the variety $\mathbb{A}^4$ by: $(t,u) \cdot (x,y,z,v) = (tx, uy, t^{-1}z, u^{-1}v)$. What are the orbits of this action?
I am not sure exactly how I am supposed to describe these orbits. For example, it is clear to me that $(0,0,0,0)$ is in its own orbit, and there is an orbit where $x=0$ with the rest nonzero, $y=0$ with the rest nonzero, and so on. And in some cases, like if $z=v=0$ and $x$ and $y$ are nonzero, then you can get any nonzero values in the first two slots, so the orbit "looks like" $(k^*)^2$. Isn't there a better way of describing these orbits?
It seems clear to me that the quotient by the action is given by $(x,y,z,v) \mapsto (xz, yv)$. This is very obviously constant on orbits, but doesn't seem to actually tell me much about the orbits?
Any insight into how to describe these orbits would be appreciated. Is it just a big list of all of the cases where various coordinates are zero? What about when they are all nonzero?
| Yes, it is just a big list.
*
*If none of $x,y,z,v$ are zero, then the orbit is a copy of $\mathbb{G}_m^2$: the intersection of two quadrics $x_1x_3=xz$ and $x_2x_4=yv$ in $\mathbb{A}^4$.
*If $x=0$ but $y,z,v$ are nonzero, then you also have a copy of $\mathbb{G}_m^2$ as the orbit: $x_1=0, x_3\neq 0, x_2x_4=yv$. Similarly the other three cases of one zero.
*If exactly one from each of $\{x,z\}, \{y,v\}$ are zero, then we have one orbit, a copy of $\mathbb{G}_m^2$: e.g., $x=y=0$ given by $x_1=x_2=0,x_3x_4\neq 0$.
On the other hand, if $x=z=0$, $yv\neq 0$, then we get orbits which are a copy of $\mathbb{G}_m$: $x_1=x_3=0$, $x_2x_4=yv$. Similarly for $y=v=0$, $xz\neq 0$.
*If $x=y=z=0$, then $v\neq 0$ is an orbit which is a copy of $\mathbb{G}_m$. Similarly the other three cases of three zeros.
*Finally, the origin $(0,0,0,0)$ is a fixed point of the action.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3251992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Inverse of a function intersecting at y =X line
The inverse of a function intersects the function on the line $y=x$.
This is what I was taught. It works fine for $y=x^2$ and $y=x^3$:
e.g. $y = x^2$ meets $x= y^2$ at $(1,1)$. But…
for a function like $y =-x^3$,
the intersection seems to lie along $x+y = 0$.
Why? Is the first statement wrong?
Also, can it happen that the inverse of a function meets the function at a point not on the lines $y=\pm x$?
| Consider the curve $y=1-x$. Its inverse is $y=1-x$, i.e. it is self-inverse. This means it intersects its inverse all along the curve, despite only intersecting $y=x$ once.
Now suppose a curve $y=f(x)$ intersects the line $y=x$ at $x_0$. This means that $$y_0=f(x_0)=x_0.$$
Applying $f$ to both sides yields
$$
f(y_0)=f(x_0)=x_0,
$$
and hence the inverse of the curve intersects the curve at its intersection with $y=x$.
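A small check of the self-inverse example (my own addition):

```python
# f(x) = 1 - x is its own inverse, so the curve coincides with its inverse
# everywhere, while crossing y = x only at x = 1/2.

def f(x):
    return 1 - x

# tolerance allows for floating-point rounding in f(f(x))
self_inverse = all(abs(f(f(x / 10.0)) - x / 10.0) < 1e-12
                   for x in range(-50, 51))
fixed_point = f(0.5) == 0.5
```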
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3252105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Dunford decomposition proof : why such a form? I'm working on the proof of the Dunford decomposition theorem :
Every matrix $A\in M_n(K)$ whose characteristic polynomial splits can be written in the form $A=D+N$, where $D$ is diagonalizable and $N$ is nilpotent.
The proof :
Let $A \in M_n(K) $ a linear operator with spectrum $\sigma(A)=\{\lambda_1,...\lambda_r\}$.
The characteristic polynomial of $A$ can be written :
$$X_A(t)=\prod\limits_{i=1}^{r}(t-\lambda_i)^{m_i} \quad m_i\text{ is the algebraic multiplicity of }\lambda_i$$
Then, we know, thanks to the primary reduction theorem, that :
$$\ker(X_A(A))=K^n=N_1 \oplus \cdots \oplus N_r$$
where $N_i=N_{\lambda_i}(A)=\ker\big((A-\lambda_iI_n)^{m_i}\big)$.
If we call $B_i$ a basis of $N_i$ for all $i \in \{1,\dots,r\}$, then $B=B_1 \cup \cdots \cup B_r$ is a basis of $K^n$. Here is the thing that I don't understand: why, in this basis, does the matrix of $A$ have the form
\begin{pmatrix}
A_1 & 0 & \dots & 0 \\
0 & A_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & A_r
\end{pmatrix}
where $A_i$, for $i\in \{1,\dots,r\}$, is a matrix of size $m_i\times m_i$?
| It has this form because each $N_i$ is stable under $A$, so the basis vectors of $B_i$ are sent to combinations of vectors of $B_i$ : their components on $B_j, j\neq i$ are therefore $0$.
More generally, when $E$ is a vector space, $f$ an endomorphism and $F,W$ two stable subspaces such that $E=F\oplus W$, then the matrix of $f$ can be written as $\begin{pmatrix}
A_1 & 0 \\
0 & A_2 \end{pmatrix}$ in the basis $B_1\cup B_2$, where $B_1$ is a basis of $F$ and $B_2$ a basis of $W$; here $A_1$ is the matrix of the restriction-corestriction of $f$ to $F$ in the basis $B_1$, and $A_2$ is defined similarly with $W$ and $B_2$.
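This block structure can be checked on a concrete example — a sketch assuming SymPy is available; the matrices $B$ and $P$ below are my own toy example, not from the question:

```python
from sympy import Matrix, eye, zeros

# Hide a known block structure by conjugating with an invertible P.
B = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])   # eigenvalue 2 (multiplicity 2) and eigenvalue 3
P = Matrix([[1, 1, 0],
            [0, 1, 1],
            [0, 0, 1]])
A = P * B * P.inv()

# Bases of the generalized eigenspaces N_i = ker((A - lambda_i I)^{m_i}).
N2 = ((A - 2 * eye(3)) ** 2).nullspace()   # m_1 = 2 for lambda_1 = 2
N3 = (A - 3 * eye(3)).nullspace()          # m_2 = 1 for lambda_2 = 3

# In the basis B_1 ∪ B_2 the matrix of A is block diagonal:
# each N_i is stable under A, so the off-diagonal blocks vanish.
Q = Matrix.hstack(*N2, *N3)
M = Q.inv() * A * Q
assert M[:2, 2:] == zeros(2, 1) and M[2:, :2] == zeros(1, 2)
```

The top-left $2\times 2$ block of $M$ is the matrix of the restriction of $A$ to $N_1$, and the bottom-right $1\times 1$ block is $(3)$ — exactly the $A_i$ blocks of the statement.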
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3252313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given $f(x)=ax^3-ax^2+bx+4$ Find the Value of $a+b$ Let $f(x)=ax^3-ax^2+bx+4$. If $f(x)$ is divided by $x^2+1$, the remainder is $0$. If $f(x)$ is divided by $x-4$, the remainder is $51$. What is the value of $a+b$?
From the problem I know that $f(4)=51$.
Using long division, I found that the remainder of $\frac{ax^3-ax^2+bx+4}{x^2+1}$ is $(b-a)x+(a+4)$.
Then
$$(b-a)x+(a+4)=0$$
I can't proceed any further so I'm guessing the other factor of $f(x)$ is $ax+4$.
Then
$$f(x)=(ax+4)(x^2+1)=ax^3+4x^2+ax+4=ax^3-ax^2+bx+4$$
I found that $a=-4$ and $b=a=-4$. Then $f(x)=-4x^3+4x^2-4x+4$. But this doesn't satisfy $f(4)=51$.
| The roots of $x^2+1$ are $x=\pm i$, so divisibility by $x^2+1$ means $f(\pm i)=0$:
$$a(\pm i)^3-a(\pm i)^2+b(\pm i)+4=0.$$
Since $i^3=-i$ and $i^2=-1$, taking $x=i$ gives
$$-ai+a+bi+4=0,$$
i.e.
$$(a+4)+(b-a)i=0.$$
Equating real and imaginary parts,
$$a+4=0\rightarrow a=-4$$
$$b-a=0\rightarrow b=-4.$$
(Taking $x=-i$ gives the complex conjugate of the same equation, hence the same conditions — as it must, since $a$ and $b$ are real.)
Hence
$$f(x)=-4x^3+4x^2-4x+4,$$
and at $x=4$
$$f(4)=-256+64-16+4=-204\neq 51.$$
As @marty said, the problem is inconsistent: no real $a,b$ satisfy all the stated conditions.
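A quick numeric confirmation of this conclusion — a Python sketch; the tolerance on the complex check is my own choice:

```python
# With a = b = -4 the two roots ±i of x^2 + 1 are roots of f,
# so x^2 + 1 divides f; yet f(4) = -204, not the required 51.
def f(x, a=-4, b=-4):
    return a * x**3 - a * x**2 + b * x + 4

assert abs(f(1j)) < 1e-12 and abs(f(-1j)) < 1e-12   # x^2 + 1 divides f
assert f(4) == -204                                  # != 51: the data clash
```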
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3252433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |