| Q | A | meta |
|---|---|---|
A is independent of B, and B depends on A? I learned in class that the following three conditions are equivalent:
*
*$P(A\cap B)=P(A)\cdot P(B)$
*$P(A|B)=P(A)$
*$P(B|A)=P(B)$
But suppose event $A$ is a subset of $B$ such that condition (2) holds and $P(B)<1$. In that case the probability $P(B|A)$ must equal $1$, because $A$ is a subset of $B$, yet $P(B)< 1$, in contradiction to the claim that (2) and (3) are equivalent. What am I missing?
| If $A$ is a subset of $B$ with $P(A)>0$ and $P(B)< 1$, then all three conditions fail. For (2), you get $P(A|B) = P(A)/P(B)\not=P(A)$.
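Not part of the original answer, but here is a quick sanity check with a toy example of my own choosing: the uniform space on six outcomes with $A=\{1\}\subset B=\{1,2,3\}$, so $P(B)=1/2<1$. All three conditions fail at once.

```python
from fractions import Fraction

# Toy example (chosen for illustration): uniform space on {1,...,6},
# with A = {1} a subset of B = {1,2,3}, so P(B) = 1/2 < 1.
omega = set(range(1, 7))
A, B = {1}, {1, 2, 3}

def P(event):
    return Fraction(len(event), len(omega))

print(P(A & B) == P(A) * P(B))   # condition (1): False (1/6 != 1/12)
print(P(A & B) / P(B) == P(A))   # condition (2): False (1/3 != 1/6)
print(P(A & B) / P(A) == P(B))   # condition (3): False (1   != 1/2)
```

All three lines print `False`, matching the answer's point that no single condition can hold here.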
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the limit of sequence with respect to a given topology
Find the limit of sequence $x_{n} = (\frac{1}{n},\frac{1}{n})$ with respect to the topology $$\tau = \{(n,\infty)\times (n, \infty): n \in \mathbb{N} \} \cup \{\emptyset,\mathbb{R^2}\}.$$
My work. Let $x$ be a limit.
If both coordinates of $x$ are negative, every such point is a limit, because in this case the only neighbourhood of $x$ is $\mathbb{R}^2$.
If $x > 0$: for example, if $x = (0,0)$, a neighbourhood looks like $(0,\infty)\times (0, \infty)$. So is there no limit for the sequence $x_n$? Or does the limit equal $(1,1)$?
| Let's abstract a bit: the topology is just a decreasing sequence of open sets $U_1 \supset U_2 \supset U_3 \supset \ldots $ with empty intersection, and the compulsory $\emptyset, X$ are open too. It's quite easy to check that this always gives a topology.
Now in our case, $U_n = (n,\infty) \times (n, \infty)$ of course and the sequence
has the property:
$$\forall n: x_n \in U_0, x_n \notin U_1$$
Now for any point $p$ not in $U_1$, the only open neighbourhoods it has are (possibly $U_0$) and in $X$, and in both cases, this contains the whole sequence. So $x_n \to p$ for $p \notin U_1$ and if $p \in U_1$, then $U_1$ is a neighbourhood of $p$ such that no tail of the sequence lies in it (it entirely misses the sequence in fact), and so $x_n$ cannot converge to $p$.
So there are lots of limits for this sequence: all points $(x,y) \notin (1,\infty) \times (1,\infty)$, i.e. all $(x,y)$ with $x \le 1$ or $y \le 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to show that $p(1)\ \text{is real} \iff \ p(-1)\ \text{is real}$ I have been working on this problem and I cannot figure it out! I spent hours on it to no avail. Can anybody help? The question is:
Suppose $p(x)$ is a polynomial with complex coefficients and even degree ($n=2k$). All zeros of $p$ are non-real and have modulus equal to $1$. Prove $$p(1)\in\mathbb{R} \;\;\Longleftrightarrow\;\; p(-1)\in\mathbb{R} $$
| Note that (assuming $p(-1)\ne 0$ to begin with)
$$ \frac{p(1)}{p(-1)}=\prod_{j=1}^{2k}\frac{1-w_j}{-1-w_j}$$
where the $w_j$ run over the complex roots (with multiplicity).
For a single factor,
$$\frac{1-w}{-1-w}=-\frac{(1-w)(1+\bar w)}{|1+w|^2}=\frac{|w|^2-1+(w-\bar w)}{|1+w|^2}. $$
As we are given that $|w|=1$ for all roots, this is the purely imaginary number $\frac{2\operatorname{Im} w}{|1+w|^2}i$. The product of an even number of purely imaginary numbers is real.
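A numerical check of this factor computation, with six unit-modulus roots at angles I picked arbitrarily for illustration (staying away from $w=-1$, where $p(-1)$ would vanish):

```python
import cmath
from functools import reduce
from operator import mul

# An even number (2k = 6) of roots on the unit circle, |w_j| = 1.
roots = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.0, 2.6, 4.0, 5.5)]

factors = [(1 - w) / (-1 - w) for w in roots]

# Each factor is purely imaginary (its real part vanishes up to rounding) ...
assert all(abs(f.real) < 1e-12 * (1 + abs(f)) for f in factors)

# ... so the product of an even number of them, p(1)/p(-1), is real.
prod = reduce(mul, factors, 1)
print(abs(prod.imag) < 1e-12 * (1 + abs(prod)))   # -> True
```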
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Asymptotic behavior $\sum_{n=1}^x\phi_k(n)$, a variant of Euler's Totient function Let $$\phi_k(x)=\sum_{1\le n \le x \\(n,x)=1} n^k$$
What's the asymptotic behavior of
$$\sum_{n=1}^x\phi_k(n)?$$
According to Wikipedia, $\sum^x_{n=1} \phi_0 (n) \approx \frac{3}{\pi^2}x^2 $. It also appears on pages $69$ and $70$ (printed as $30$ and $31$) of this pdf.
The possible routes
Route 1 (For someone who wants some practice with Abel Summations): There should be an approach which is an analog to the techniques shown here: sum of the divisor functions and I think that $\sum_{n=1}^x \frac{\phi_k(n)}{n^{k+1}}$ is always on the order of a linear function. So that might be the place to start.
If no one takes this route I will almost certainly post my own answer in 2 or 3 weeks and ask this community for help verifying my proof. This is the most obvious route for me to take to make progress on this.
Route 2: It would also be particularly interesting to see an argument which isn't an analog of the linked post and which exploits what we already know about the asymptotic behavior of $\sum \sigma_k(n)$ to make claims about $\sum \phi_k(n)$. I am not sure this is possible, but it may be a route forward.
| Not an answer.
$$\sum_{n=1}^\infty\frac{\phi_k(n)}{n^s}=\frac{1}{\zeta(s-k)}\sum_{l=0}^{k+1}c(k,l)\zeta(s-l)$$
where $c(k,k+1)=\frac{1}{k+1}$, $c(k,k)=\frac{1}{2}$, and $c(k, k-l+1) = \frac{B_l\,k!}{l!\,(k-l+1)!}$,
where $B_k$ is the $k$th Bernoulli number, which we define in terms of Stirling numbers of the second kind:
$$B_k=\sum_{m=0}^k \frac{(-1)^mm!}{m+1}S(k,m), \text{ and } S(k,m)=\frac{1}{k!}\sum_{j=1}^k(-1)^{k-j} {k \choose j} j^m$$
$c(a,b)$ are coefficients which we can find in Faulhaber's triangle. In particular, let's consider $k=4$,
$$1^4+2^4+3^4+ \dots + x^4 \\=c(4,5)x^{5}+c(4,4)x^4+c(4,3)x^3+c(4,2)x^2+c(4,1)x
\\ = \frac{1}{5}x^5+\frac{1}{2}x^4+\frac{1}{3}x^3-\frac{1}{30}x $$
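As a sanity check on these coefficients (not part of the argument), the $k=4$ Faulhaber polynomial can be compared against the raw power sum with exact arithmetic:

```python
from fractions import Fraction

def faulhaber4(x):
    # c(4,5)x^5 + c(4,4)x^4 + c(4,3)x^3 + c(4,1)x, with exact rationals
    x = Fraction(x)
    return x**5 / 5 + x**4 / 2 + x**3 / 3 - x / 30

assert all(faulhaber4(x) == sum(n**4 for n in range(1, x + 1)) for x in range(1, 100))
print("coefficients match the power sum 1^4 + ... + x^4")
```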
$$\sum_{n=1}^\infty \frac{\phi_4(n)}{n^s} = \frac{1}{\zeta(s-4)}\bigg[ \frac{1}{5} \zeta(s-5)+\frac{1}{2}\zeta(s-4)+\frac{1}{3}\zeta(s-3)- \frac{1}{30} \zeta(s-1) \bigg]$$
Here's a graph of this lining up! Lovely.
Then we can somehow look at the poles of this to find the asymptotic behavior.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find the mistake in $\lim_{x\rightarrow 1^-} \frac{\sum_{n=0}^\infty x^n}{\sum_{n=0}^\infty x^n}=1 \Rightarrow 1=\frac{1}{2}$ It is obvious that we have:
$$\lim_{x\rightarrow 1^-} \frac{\sum_{n=0}^\infty x^n}{\sum_{n=0}^\infty x^n}=\lim_{x\rightarrow 1^-}1=1.$$
But let us now write this sum in two ways, let $a_n=x^n$ and $b_n=x^{2n}+x^{2n+1}$ we thus have $\sum_{n=0}^\infty x^n = \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty b_n$. We can write the above limit as:
$$
\lim_{x \rightarrow 1^-}\lim_{N\rightarrow \infty} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n} = \lim_{N\rightarrow \infty} \lim_{x \rightarrow 1^-} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n},
$$
where we can swap limits because of the Moore-Osgood Theorem. We now find for the right hand side:
$$
\lim_{N\rightarrow \infty} \lim_{x \rightarrow 1^-} \frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n}=\lim_{N\rightarrow \infty} \frac{N+1}{2(N+1)}=\frac{1}{2}.
$$
This shows that $1=\frac{1}{2}$ which is clearly incorrect, but I do not see where the error occurs, I guess it is in the step where the Moore-Osgood Theorem is applied where we define $f_N(x)=\frac{\sum_{n=0}^N a_n}{\sum_{n=0}^{N} b_n}$.
EDIT: I believe I have found the error. In order to apply the Moore-Osgood Theorem we need uniform convergence of $f_N(x)$ to $f(x)=\frac{\sum_{n=0}^\infty a_n}{\sum_{n=0}^{\infty} b_n}$, but this $f$ is not continuous; therefore we cannot apply Dini's theorem to show that pointwise convergence implies uniform convergence.
| In order to use the Moore-Osgood Theorem, you must make sure that $(f_n)_{n \geq 0}$ converges uniformly toward $f$.
$i.e. \sup\limits_{[0,1]}|f_n - f| \rightarrow 0$.
This is not the case here.
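The failure of uniform convergence is easy to see numerically (a sketch of my own, not from the answer): for $0\le x<1$ one has $f_N(x)=\frac{1-x^{N+1}}{1-x^{2N+2}}=\frac{1}{1+x^{N+1}}$, which tends to $1$ pointwise, while its sup-distance from $1$ stays near $1/2$.

```python
def f(N, x):
    # f_N(x) = (sum of a_n, n <= N) / (sum of b_n, n <= N); the denominator is
    # the geometric sum up to x^(2N+1), so f_N(x) = 1 / (1 + x^(N+1)) on [0, 1).
    return sum(x**n for n in range(N + 1)) / sum(x**(2 * n) + x**(2 * n + 1) for n in range(N + 1))

for N in (10, 100):
    sup_dev = max(abs(f(N, x) - 1.0) for x in [1 - 2.0**-k for k in range(1, 40)])
    print(N, f(N, 0.5), sup_dev)   # f_N(0.5) -> 1, but sup|f_N - 1| stays near 1/2
```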
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Help understand beta reduction example I am currently reading a text book on distributed computing systems that includes a short introduction to $\lambda$-calculus. There is an example of evaluating the sequence $(((if \space \space true) \space \space 4) \space \space 5)$ which is written below.
*
*$\space\space(((\lambda b.\lambda t.\lambda e.((b \space t) \space e) \space \lambda x.\lambda y.x) \space 4) \space 5) $
*$\space\space((\lambda t.\lambda e.((\lambda x.\lambda y.x \space \space t) \space e) \space 4) \space 5) $
*$\space\space(\lambda e.((\lambda x.\lambda y.x \space \space 4) \space \space e) \space \space 5) $
*$\space\space((\lambda x.\lambda y.x \space \space 4) \space \space 5) $
*$\space\space(\lambda y.4 \space \space 5) $
*$\space\space 4$
The author has used $\beta$-reduction on each line and I can follow up until the second to last reduction. Could someone explain how we get from line 4 to line 6?
| When going from line 4 to line 5, we substitute $x=4$ into $\lambda y. x$, because the inner argument is bound to the $\lambda x$. The 5 is then passed to the result and bound to the $\lambda y$, so that we substitute $y=5$ into $\lambda y. 4$. Since $y$ does not appear free in $4$, this does not alter the expression, so we end up with $4$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Counting question on bit strings - problem with using cases
How many bit strings of length 10 either begin with three 0s or end with two 0s?
I solved this question using cases but I do not seem to be getting the answer of $352$.
My attempt:
Consider two cases:
*
*Case 1: The string begins with three $0$s and does not end with two $0$s. There is only $1$ way to choose the first three bits, $2^5$ ways for the middle bits, and $3$ ways for the last two bits ($4$ ways to construct a string of two bits, minus the $1$ way consisting of two $0$s). There are $2^5 \cdot 3$ ways to construct strings of this type.
*Case 2: The string does not begin with three $0$s but ends with two $0$s. There are $2^3 - 1 = 7$ ways to choose the first three bits without three $0$s, $2^5$ ways for the middle bits, and $1$ way for the last two bits. There are $7 \cdot 2^5$ ways to construct strings of this type.
By the rule of sum, there are $2^5 \cdot 3 + 2^5 \cdot 7 = 320$ ways to construct bit strings of length 10 that either begin with three $0$s or end with two $0$s.
| $$\underbrace{2^7}_{\text{begin with three zeros}}+\underbrace{2^8}_{\text{end with two zeros}}-\underbrace{2^5}_{\text{double-count}} $$
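A brute-force check of this inclusion-exclusion count (a small sketch, not from the original answer):

```python
from itertools import product

# Enumerate all 2^10 bit strings and count those beginning with "000"
# or ending with "00".
count = sum(
    1
    for bits in product("01", repeat=10)
    if bits[:3] == ("0", "0", "0") or bits[-2:] == ("0", "0")
)
print(count)   # -> 352 = 2^7 + 2^8 - 2^5
```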
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is "the set of all polynomials in $\pi$"? From Ian Stewart's Galois Theory (2015, 4e, p. 20):
What does, for example, "the set of all polynomials in $\pi$" mean?
| The set of all polynomials in $\pi$ with rational coefficients is the set of real numbers of the form $p(\pi)$, where $p(x)$ is a polynomial with rational coefficients; that is, it is the set $\{ p(\pi) \mid p(x) \in \mathbb{Q}[x] \}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Tensor product terminology in category theory? Say that I have two homomorphisms of commutative rings, $A \rightarrow B$ and $A \rightarrow C$. I recently read that $B \otimes_A C$ is the pushout of these morphisms in the category of commutative rings. However, I understood tensor products as defined for modules over a commutative ring. Can we somehow realize $B, C$ as modules over $A$ via the homomorphisms, or is $B \otimes_A C$ as defined by the pushout a generalization of the 'normal' definition of tensors? It seems unlikely that it is a generalization, as it seems to depend on these morphisms, whereas the tensor product of modules doesn't depend on any sort of morphisms.
| Take all rings here to be commutative. A ring homomorphism $f:A\to B$
makes $B$ into an $A$-module. In detail, the module action is $a\cdot b=f(a)b$.
With another ring homomorphism $g:A\to C$ then we have two $A$-modules,
and can form the tensor product $B\otimes_A C$.
At first $B\otimes_A C$ is just a module. But it has a multiplication,
defined as the composition
$$(B\otimes_A C)\times(B\otimes_A C)\to (B\otimes_A C)\otimes_A (B\otimes_A C)
\to (B\otimes_A B)\otimes_A (C\otimes_A C)\to B\otimes_A C.$$
The middle map is just permuting the factors, and the last map is induced
by $(b\otimes b')\otimes(c\otimes c')\mapsto bb'\otimes cc'$.
In terms of elements:
$$(b\otimes c)(b'\otimes c')=bb'\otimes cc'.$$
Then $B\otimes_A C$ is a ring. There are ring homomorphisms from $B$
and $C$ to it, the first given by $b\mapsto b\otimes 1_C$. Now one sits
down with a large sheet of paper, and proves that the map $A\to B\otimes_A C$
is the pushout of $A\to B$ and $A\to C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
$(a) = R$ if and only if $a$ is a unit. Let $⟨R;+,−,0,·,1⟩$ be a commutative ring. For $a \in R$, define $(a) := \{a \cdot r \mid r \in R \}.$ How can I prove that $(a) = R$ if and only if $a$ is a unit?
So if there exists $a' \in R$ such that $a\cdot a' = 1$, then for all $r \in R$ we have $(a\cdot a')\cdot r = r$, hence $(a) = R$, right?
If $(a) = R$, then for all $r \in R$ there exists $r' \in R$ such that $a\cdot r' = r$, but then ...?
Is the first part correct, and how can I formulate the second part?
| The first part is correct: if $a$ is a unit, then $ab=1$ for some $b\in R$; therefore, for every $r\in R$,
$$
r=r1=r(ab)=a(rb)\in(a)
$$
hence $R\subseteq (a)$ and therefore $R=(a)$.
The second part is simpler: if $(a)=R$, then $1\in (a)$, so there exists $b\in R$ with $ab=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Singularity of Morphism and Its Extension I really need help with this question.
Assume you have a morphism $\varphi: \mathbb{A}^2 \rightarrow \mathbb{A}^1$ such that $\varphi (x,y)= x^2-y^4$.
1) Find all points $a \in \mathbb{C}$ such that $\varphi^{-1}(a)$ is
singular, and determine its type of singularity.
2) If $Y_{a}$ is the closure of the curve $\varphi^{-1}(a)$ in
$\mathbb{P}^{2}$ and $L$ is the line at $\infty$ ($z=0$), find the points of
$Y_{a}\cap L$.
3) Let $\psi: \mathbb{P}^{2}\rightarrow \mathbb{P}^{1}$ be the rational
map that extends $\varphi$; find the domain of $\psi$ and the fiber
$\psi^{-1}(\infty)$.
This is a reducible curve that I've never seen before. I do not know what I can do!
I found that $(0,0)$ is a singular point by using the Jacobian. For the second part, I took the homogenization of the curve but couldn't see how to find the intersection with the line at infinity. For the last one, I have no idea.
| 1) As you said, by the Jacobian criterion indeed $\varphi^{-1}(0)$ is the unique singular fiber and the singularity type is two tangent parabolas $(x-y^2)(x+y^2) = 0$.
2) The fiber has homogeneous equation $x^2z^2 = y^4 + az^4 $. Its intersection with the line at infinity is given by $z=0$ together with the previous equation, i.e. it is the point $(1,0,0)$ (with multiplicity $4$).
3) We take $$\psi(x,y,z) = \frac{x^2z^2 - y^4}{z^4}$$
and the domain of $\psi$ is exactly where $z^4 \neq 0$ or $x^2z^2 - y^4 \neq 0$. This means that the domain is $\Bbb P^2 \backslash \{(1,0,0)\}$. This is not surprising since, by the previous question, $(1,0,0)$ was in the closure of each fiber!
Finally, the fiber over $\infty$ is simply given by the line $z=0$, minus the point $[1:0:0]$. For completeness, the other projective fibers over $[a:1] \in \Bbb P^1 $ are given by $\{ (x,y,1) : x^2 = y^4 + a \}$. Notice that a fiber does not include the point $(1,0,0)$ by what we said, even though this point is in the closure of all the fibers.
A final remark: it is possible to find a surface $X$ and a morphism $f : X \to \Bbb P^2$ so that $\psi \circ f$ becomes defined everywhere. This morphism is called a blow-up, and making your map defined everywhere is typically why algebraic geometers introduce blow-ups.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3012815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to show that $2^n > n$ without induction I'm solving exercises about Pascal's triangle and the Binomial theorem, and this problem showed up; however, I don't have any clue how to solve it.
The sum of ${n\choose p}$ from $p=0$ to $n$ is the same thing as $(1+1)^n=2^n$, how can I use this information?
Maybe comparing with another summation that equals to n?
| hint
Consider $x\mapsto \frac{\ln(x)}{x}$ for $x\ge 1$.
$$f'(x)=\frac{1-\ln(x)}{x^2}$$
The maximum is $f(e)=\frac{1}{e}<\ln(2)$.
Thus, for $x\ge 1$,
$$\ln(x)<x\ln(2),$$
i.e. $x<2^x$.
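The hint's inequality, and the conclusion it exponentiates to, can be spot-checked numerically (a quick sketch of my own):

```python
import math

# The hint gives ln(x)/x <= 1/e < ln(2) for x >= 1, i.e. ln(x) < x ln(2).
assert all(math.log(x) < x * math.log(2) for x in (1 + k / 100 for k in range(10_000)))

# Exponentiating: x < 2^x, and in particular 2^n > n for every natural n.
assert all(2**n > n for n in range(10_000))
print("2^n > n on the tested range")
```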
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Birthday problem: why not: combinations without/with replacement?? My first intuition on the birthday problem was:
*
*The number of ways $k$ people have different birthdays is the combinations $\binom{365}{k}$
*The number of ways $k$ people can have birthdays is the number of combinations with replacement, $\binom{365+k-1}{k}$.
So the probability that all $k$ people have birthdays on different days is
$$ P = \frac{\binom{365}{k}}{\binom{365+k-1}{k}}$$
What is wrong with this logic?
| The calculation $\binom{365}{k}$ only chooses $k$ days; it does not assign particular days to the people. So your first statement is incorrect. I am not sure where your second formula comes from, but you're probably making a similar error.
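One can confirm numerically that the proposed ratio disagrees with the standard birthday probability already at $k=2$ (a comparison sketch of my own, not from the answer):

```python
from fractions import Fraction
from math import comb

def p_distinct(k):
    # standard answer: ordered assignments, 365*364*...*(365-k+1) / 365^k
    num = 1
    for i in range(k):
        num *= 365 - i
    return Fraction(num, 365**k)

def p_proposed(k):
    # the ratio from the question: C(365,k) / C(365+k-1,k)
    return Fraction(comb(365, k), comb(365 + k - 1, k))

print(p_distinct(2), p_proposed(2))   # 364/365 vs 182/183 (= 364/366)
print(float(p_distinct(23)))          # ~0.4927, the classic birthday value
```

The deeper issue is that the "combinations with replacement" outcomes are not equally likely, so the ratio is not a probability.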
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find partial derivatives by definition I have to check whether the function is differentiable at $M(0, 0)$, and find the partial derivatives $f_x'(0, 0), f'_y(0, 0)$. Is this correct?
Let $z = {x}+{y}+\sqrt{\mid{xy}\mid}$.
By definition of partial derivative, $$\frac{\partial{z}}{\partial{x_k}} = \lim_{\Delta{x}\to0}{\frac{f(x_1,\dots,x_k+\Delta{x}_1,\dots,x_n)-f(x_1,\dots,x_k,\dots,x_n)}{\Delta{x}}}$$
Therefore, we calculate the partial derivative with respect to $x$:
$$\Delta{z} = f(x_0+\Delta{x}, y_0) - f(x_0, y_0)$$
$$\Delta{z} = (x_0+\Delta{x}+y_0+\sqrt{\mid(x_0+\Delta{x})y_0\mid}) - (x_0+y_0+\sqrt{\mid x_0y_0\mid}) $$
$$(x_0,y_0)=(0, 0)\rightarrow\Delta{x}+\sqrt{\mid(0+\Delta{x})\cdot 0\mid}-0=\Delta{x}$$
$$\lim_{\Delta{x}\to0}\frac{\Delta{x}}{\Delta{x}} = 1.$$
Then, with respect to $y$:
$$\Delta{z} = f(x_0, y_0 + \Delta{y}) - f(x_0, y_0)$$
$$\Delta{z} = (x_0+y_0+\Delta{y}+\sqrt{\mid x_0(y_0+\Delta{y})\mid}) - (x_0+y_0+\sqrt{\mid x_0y_0\mid}) $$
$$(x_0,y_0)=(0, 0)\rightarrow\Delta{y}+\sqrt{\mid 0\cdot(0+\Delta{y})\mid}-0=\Delta{y}$$
$$\lim_{\Delta{y}\to0}\frac{\Delta{y}}{\Delta{y}} = 1.$$
Thus, the partial derivatives $f'_x(0, 0)$, $f'_y(0, 0)$ do exist and equal $1$.
| That part is OK.
Now, to see that $f$ is NOT differentiable at $(0,0)$, remember that if $f$ is differentiable at $(0,0)$, then for any direction $\vec v$ such that $\|\vec v\|=1$ it is true that
$$f'_{\vec v}(0,0)=\langle \nabla f(0,0),\vec v\rangle.$$
But you proved that $\nabla f(0,0)=(1,1)$, and you can use the definition to calculate $f'_{\vec v}(0,0)$ for any other direction, such as $(\frac1{\sqrt{2}},\frac1{\sqrt{2}})$, $(\frac35,-\frac45)$, etc., and then do the inner product on the right hand side to see that the formula does not hold. This can only be the case if $f$ is not differentiable at $(0,0)$.
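A numerical illustration of this failure for $\vec v=(\frac1{\sqrt2},\frac1{\sqrt2})$ (a sketch of my own; for this $f$ the one-sided difference quotient is exact for every $t>0$, so no limit is needed):

```python
import math

f = lambda x, y: x + y + math.sqrt(abs(x * y))

v = (1 / math.sqrt(2), 1 / math.sqrt(2))
t = 1e-6
dir_deriv = (f(t * v[0], t * v[1]) - f(0, 0)) / t   # directional derivative at (0,0)
grad_dot = 1 * v[0] + 1 * v[1]                      # <grad f(0,0), v> = <(1,1), v>

print(dir_deriv)   # sqrt(2) + 1/sqrt(2) ~ 2.1213
print(grad_dot)    # sqrt(2)             ~ 1.4142: the formula fails
```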
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the Legendre polynomial Let us consider the numerical integral $ \ \int_{-1}^{1}w(x) f(x)dx=\sum_{i=0}^{N} f(x_i)w_i$, where $w_i$ are the weights and $w(x)$ is the weight function.
Legendre polynomials, denoted by $ \{p_n \}$, are a list of orthogonal polynomials supported on $[-1,1]$ with weight $w(x)=1$. Find the explicit expressions for $p_0,p_1,p_2$.
Answer:
We have to show that
$p_0=1, \ p_1=x , \ p_2=\frac{1}{2}(3x^2-1)$.
How to show this using $w(x)=1$?
If $w(x)=1$, then we have
$ \int_{-1}^{1} f(x)dx=\sum_{i=0}^{N} f(x_i) w_i$.
Now how to proceed?
Help me.
| (Up to now I see no correction to the question, here is the computation of the first Legendre polynomials)
This is a simple exercise in integration and using the orthogonality relations.
Let's write $(f,g)=\int_{-1}^1 f(x)g(x) dx$. Then you have to compute the polynomials $p_n$ of degree $n$ with
$(p_n, p_m) = \frac{2}{2n+1}\delta_{nm}.$
First, with $p_0=a$ you have $$2=(p_0,p_0) = 2a^2 \Longrightarrow a = \pm 1,$$ and we take $a=1$.
With $p_1(x) =ax + b$ you get
$$0 =(p_1, p_0) = 2b \Longrightarrow b = 0$$
$$\frac{2}{3}=(p_1, p_1) = \frac{2}{3}a^2 \Longrightarrow a = 1.$$
So $p_1(x) = x.$ Can you continue with the Ansatz $p_2(x) = ax^2+bx+c?$
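Continuing the Ansatz gives $p_2=\frac32x^2-\frac12$, and all the orthogonality relations can be checked with exact rational arithmetic, using $\int_{-1}^1 x^n\,dx = \frac{2}{n+1}$ for even $n$ and $0$ for odd $n$ (a verification sketch; the helper `inner` is my own naming):

```python
from fractions import Fraction

def inner(p, q):
    # (p, q) = integral of p(x) q(x) over [-1, 1], for coefficient lists
    # [a0, a1, ...] meaning a0 + a1*x + ...
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    # integral of x^n over [-1, 1] is 2/(n+1) for even n and 0 for odd n
    return sum(2 * c / (n + 1) for n, c in enumerate(r) if n % 2 == 0)

p0 = [1]
p1 = [0, 1]
p2 = [Fraction(-1, 2), 0, Fraction(3, 2)]   # (3x^2 - 1)/2

assert inner(p0, p0) == 2
assert inner(p1, p1) == Fraction(2, 3)
assert inner(p2, p2) == Fraction(2, 5)
assert inner(p0, p1) == inner(p0, p2) == inner(p1, p2) == 0
print("all orthogonality relations hold")
```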
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Divisors of $-1$ are only $1$ and $-1$? I'm working through a discrete math textbook and I've come across this question with answer:
Prove that the only divisors of $−1$ are $−1$ and $1$.
Answer:
We established that $1$ divides any number; hence, it divides $−1$, and any nonzero number divides itself. Thus, $1$ and $−1$ are divisors of $−1$. To show that these are the only ones, we take $d$, a positive divisor of $−1$. Thus, $dk = −1$ for some integer $k$, and $(−1)dk = d(−k) = (−1)(−1) = 1$; hence, $d\mid 1$, and the only divisors of $1$ are $1$ and $−1$. Hence, $d = 1$ or $d = −1$.
I understand everything stated in the answer except for the part: $(-1)dk = d(-k) = (-1)(-1) = 1$
Perhaps someone could help me understand where the $(-1)dk$ comes from? And how we go from $d(-k)$ to $(-1)(-1)$?
| We are assuming that $d$ is a divisor of $-1$ that is
$$dk=-1$$
and multiplying each side by $-1$ we obtain
$$-1\cdot dk=-1\cdot (-1)=1 \iff d(-k)=1 \iff d=1,-1 $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Making sure if it is Cauchy In my real analysis exam I had a problem in which I proved that given any positive number $a\lt 1$ if $|x_{n+1} - x_n|\lt {a^n}$ for all natural numbers $n$ then $(x_n)$ is a Cauchy sequence.
This was solved successfully, but the question is: if $|x_{n+1} - x_n|\lt \frac 1n$, does that mean $(x_n)$ is Cauchy? My answer was yes, because I thought I could write this in the form of the first statement, but now I am confused by my answer, since $1/n$ is a sequence in $n$, so maybe the claim is not necessarily true. Can you please provide the correct answer to this question?
| Take $x_n=1+\frac 1 2+\cdots+\frac 1 n$. This is not Cauchy because the harmonic series $1+\frac 1 2+\cdots$ is divergent.
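Numerically (a quick sketch, not from the answer): the partial sums satisfy $x_{2n}-x_n=\sum_{k=n+1}^{2n}\frac1k \ge n\cdot\frac1{2n}=\frac12$, so no tail of the sequence is Cauchy even though $|x_{n+1}-x_n|=\frac1{n+1}<\frac1n$.

```python
def x(n):
    # n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, x(2 * n) - x(n))   # always > 1/2 (the gap tends to ln 2 ~ 0.693)
```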
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 1
} |
Smooth a graph as if placing a rope across the data…? I'm not sure how to correctly phrase this question, in fact if I knew exactly what I needed to ask I could probably work it out myself, so please bear with me.
What I need to do is smooth out a line, but weight the smoothing so that the smoothed graph never goes below the sampled data. What it would look like is if you put a thick rope over the sample points so that at the pointy bits the smooth curve of the rope would be close to the data, and at the low points of the data the smooth rope would be hanging in space. Like the red line here (it's a bit wrong at the top right, but you get the idea):
This is for some motion graphics I'm doing. The samples are from a timed transcript of someone speaking. As they speak their words appear, and then float off, driven by an expression. If I track the timing of the words exactly it's very jerky, if I take an average of the value over a period of time, e.g. five seconds I get the smoothness I want, but it looks bad when the text lags behind the speaking, but not so much if the speech lags behind the words.
| You're possibly looking for the envelope of the data. There is a Wikipedia article on it. https://en.m.wikipedia.org/wiki/Envelope_(waves)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let $f$ be integrable on $[a,b]$ and suppose for each integrable function $g$ defined on $[a,b]$, $\int^{b}_afg=0$, then $f(x)=0,\forall x\in[a,b]$ I do not think this is true,
but at the same time I am not sure.
I know that if we assume that $f$ is continuous instead of integrable, then this statement is true. I just do not know how to provide a counterexample, if it is false, to show that this is wrong. Integrability does not imply continuity, I know that much.
| Another counterexample: consider $f:[0,1]\rightarrow [0,1]$ given by $f(x)=0$ if $x$ is irrational or $x=0$, and $f(\frac{p}{q})=\frac{1}{q}$ where $p\in \Bbb Z-\{0\}, q\in \Bbb N, \gcd(p,q)=1$. Then $f$ is Riemann integrable with $\int_0^1 f=0$, and for any other Riemann integrable $g$ we have, using the Cauchy-Schwarz inequality, $$\Big|\int_0^1 fg\Big|^2\leq\int_0^1 f^2 \cdot\int_0^1 g^2\leq \int _0^1 f\cdot\int_0^1 g^2=0\cdot\int_0^1 g^2=0,$$ hence $\int_0^1 fg=0$. But $f$ is not identically zero on $[0,1]$.
Though I used $[0,1]$ as a special interval, by slight modifications you can give the argument for a general compact interval. Note that I have only considered Riemann integration, i.e. no Lebesgue integration. You can prove using only the definition of the Riemann integral that $\int_0^1 f=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Finding $\lim_{x\to\pi/2}\left(\frac{1-\sin x}{(\pi-2x)^4}\right)(\cos x)(8x^3 - \pi^3)$ using algebra of limits Let $$
F(x) = \left(\frac{1-\sin x}{(\pi-2x)^4}\right)(\cos x)(8x^3 - \pi^3)
$$
Then find the limit of $F(x)$ as $x$ tends to $\pi/2$.
How can we find the limit using algebra of limits?
The limit of $\dfrac{1-\sin x}{(\pi-2x)^4}$ as $x$ tends to $\pi/2$ is some non-zero finite number. And the limit of rest part is zero. Then why the limit of the whole function is not $0$?
The limit of $F(x)$ is $-\frac{3\pi^2}{16}$.
How to apply algebra of limits here?
| The limit of $\frac{1-\sin x}{(\pi-2x)^4}$ as $x$ tends to $\pi/2$ is NOT some nonzero finite number; since $1-\sin x$ vanishes only to second order there, this ratio tends to $+\infty$.
Note that $(8x^3 - \pi^3)=(2x-\pi)(4x^2+2x\pi+\pi^2)$, then
$$F(x)=\frac{1-\sin x}{(2x-\pi)^2}\cdot\frac{\cos x}{(2x-\pi)}\cdot (4x^2+2x\pi+\pi^2)$$
Now evaluate the limit of each factor. This time they are all finite, so we can apply the rule related to your previous question on the algebra of limits.
Can you take it from here?
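Numerically, the factored limit checks out against $-\frac{3\pi^2}{16}\approx -1.8506$ (a quick sketch, not part of the answer):

```python
import math

def F(x):
    return (1 - math.sin(x)) / (math.pi - 2 * x)**4 * math.cos(x) * (8 * x**3 - math.pi**3)

target = -3 * math.pi**2 / 16
for h in (1e-2, 1e-3):
    print(F(math.pi / 2 - h), target)   # F(x) approaches -1.8506... as x -> pi/2
```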
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3013994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Minimal elements Let $A$ be the set $A = \{1,2,3,...,20\}$.
$R$ is the relation over $A$ such that $xRy$ iff $y/x = 2^i$, $i$ is natural including $0$.
I am supposed to find the minimal and maximal elements in relation to $R$.
Does that mean the elements are members of $A$ or members of $R$?
The elements of $R$ are pairs whose quotients are $1,2,4,8,16, \ldots$
But I am confused as to whether I am searching for pairs in $R$ or numbers in $A$.
Thanks.
| You're searching for members of $A$.
$a \in A$ is maximal, whenever $aRa'$ implies $a'=a$.
$a \in A$ is minimal, whenever $a'Ra$ implies $a'=a$.
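A brute-force computation over $A$ makes the answer concrete (a sketch of my own; `power_of_two` tests whether $y/x=2^i$ with $i\ge 0$):

```python
A = range(1, 21)

def power_of_two(q):
    return q >= 1 and (q & (q - 1)) == 0   # 1, 2, 4, 8, ..., i.e. 2^i with i >= 0

R = {(x, y) for x in A for y in A if y % x == 0 and power_of_two(y // x)}

maximal = [a for a in A if all(b == a for b in A if (a, b) in R)]
minimal = [a for a in A if all(b == a for b in A if (b, a) in R)]

print(maximal)   # [11, 12, ..., 20]: doubling any of these leaves A
print(minimal)   # [1, 3, 5, ..., 19]: the odd numbers have no half in A
```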
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to show $2^{ℵ_0} \leq \mathfrak c$ I want to show $2^{ℵ_0}=\mathfrak c$.
I already showed $\mathfrak c \leq 2^{ℵ_0}$ as follows:
Each real number is constructed from an integer part and a decimal fraction. The decimal fraction has countably many, i.e. $ℵ_0$, digits. So we have
$\mathfrak c \leq ℵ_0 \cdot 10^{ℵ_0} \leq 2^{ℵ_0} \cdot (2^4)^{ℵ_0} = 2^{ℵ_0}$, since $ℵ_0 + 4ℵ_0=ℵ_0$.
But how can I prove the other way $2^{ℵ_0} \leq \mathfrak c$?
| $2^{\aleph_0}$ is the cardinality of all reals (belonging to $(0,1)$ if you prefer) that you can write by using only $0,1$. Those numbers clearly form a subset of $\mathbb R$ which must therefore have cardinality at least $2^{\aleph_0}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Factorial Proof - ${n \choose r-1}+{n \choose r}={n+1 \choose r}$ ${n \choose r-1}+{n \choose r}={n+1 \choose r}.$
So what I tried to do was expand the first and second term.
$\frac{n!}{(r-1)!(n+1-r)!}+\frac{n!}{(r)!(n-r)!}.$
Then what I did was try to get common denominators.
$\frac{n!}{(r-1)!(n+1-r)(n-r)!}+\frac{n!}{(r)(r-1)!(n-r)!}.$
Then I attempted to combine and get the common denominator.
$\frac{(n!)(r)+(n!)(n+1-r)}{(r-1)!(n+1-r)(n-r)!(r)}.$
From here I simplified a bit more but didn't get a nice answer.
Some insight would be helpful.
| You start well:
$$
\binom{n}{r-1}+\binom{n}{r}=
\frac{n!}{(r-1)!\,(n-r+1)!}+\frac{n!}{r!\,(n-r)!}
$$
Now it's best to collect all common terms. Not so different from what you did, but with fewer complications. Note you can collect
*
*$n!$ in the numerator
*$(r-1)!$ in the denominator, using $r!=r\,(r-1)!$
*$(n-r)!$ in the denominator, using $(n-r+1)!=(n-r+1)\,(n-r)!$
Thus you can go on with
$$
=\frac{n!}{(r-1)!\,(n-r)!}\left(\frac{1}{n-r+1}+\frac{1}{r}\right)
$$
The part in parentheses can be rewritten as $\dfrac{n+1}{r(n-r+1)}$
so you have
$$
=\frac{(n+1)\,n!}{r\,(r-1)!\,(n-r+1)\,(n-r)!}=\frac{(n+1)!}{r!\,(n+1-r)!}=\binom{n+1}{r}
$$
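The identity can also be spot-checked directly with `math.comb` (a quick sketch, independent of the algebraic proof above):

```python
from math import comb

# Pascal's rule: C(n, r-1) + C(n, r) == C(n+1, r) for many (n, r) pairs.
assert all(
    comb(n, r - 1) + comb(n, r) == comb(n + 1, r)
    for n in range(1, 100)
    for r in range(1, n + 1)
)
print("Pascal's rule verified")
```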
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Truth tables from word problem Sentential Logic I am reading the book "How to Prove It", and the answer says that this argument is valid and that I'm to construct a truth table to verify it, but I just can't see how this argument is valid, and I'm not sure how to construct the truth table to prove it. Here is the argument:
Either sales or expenses will go up. If sales go up, then the boss will
be happy. If expenses go up, then the boss will be unhappy. Therefore,
sales and expenses will not both go up.
I understand that the boss can't be both happy and unhappy at the same time, but as I see it sales and expenses can both go up at the same time and I have no idea why the bosses mood proves that this can't happen. Am I just understanding this completely wrong?
I tried constructing a truth table with the values for sales going up (S), expenses going up (E), boss being happy (H) and boss being unhappy (U) and then looking at
S->H and E->U
But I just end up with a truth table of size $2^4$ that is a big mess I have no idea how to read. Can anybody help me here?
| Hint
1) Either sales or expenses will go up.
2)If sales go up, then the boss will be happy.
3) If expenses go up, then the boss will be unhappy.
4) Therefore, sales and expenses will not both go up.
In symbols :
1) $S \lor E$
2) $S \to H$
3) $E \to \lnot H$
4) $\lnot (S \land E)$
Having said that, you have to build up a truth table with the three propositional letters: $S$, $E$ and $H$; that means $2^3=8$ rows, with one column for each premise and a last column for the conclusion: seven columns in total.
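The eight-row table described here can also be generated mechanically; a small sketch in Python (the variable names are mine) confirming that no row makes all three premises true and the conclusion false:

```python
from itertools import product

def argument_valid():
    # S = sales go up, E = expenses go up, H = boss is happy
    for S, E, H in product([False, True], repeat=3):
        p1 = S or E                  # premise 1: S or E
        p2 = (not S) or H            # premise 2: S -> H
        p3 = (not E) or (not H)      # premise 3: E -> not H
        conclusion = not (S and E)
        if p1 and p2 and p3 and not conclusion:
            return False             # a counterexample row would land here
    return True

print(argument_valid())  # True: the argument is valid
```

This matches the asker's intuition: in the rows where $S$ and $E$ are both true, premises 2 and 3 force the boss to be both happy and unhappy, so those rows never satisfy all the premises.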
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Rewriting a logical statement Only lakers are irrational people.
I believe it technically should be translated as:
All irrational people are lakers.
Is there is any way at all to rewrite the above statement to mean the following and be logically correct:
All lakers are irrational people.
How would you justify it? (If it is possible)
| Indicating with $L$ the set of lakers $l$ and with $\Pi$ the set of irrational people $\pi$, the first statement is equivalent to
$$\forall \pi\in \Pi \quad \pi\in L$$
the second one is
$$\forall l\in L\quad l\in \Pi $$
which is not equivalent to the first one; indeed, under this last one we could still have $\pi \not \in L$ for some $\pi$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Problem on dual basis
Let V be the real vector space of all polynomials, in a single
variable and with real coefficients, of degree at most $3$.
Let $V^*$ be its dual space.
Let $t_1 = 1, t_2 = 2, t_3 = 3, t_4 = 4.$
Which of the following sets of functionals $\{f_i |1 \leq i \leq 4\}$
form a basis for $V^*$?
a). For $1 \leq i \leq 4,$ and for all $p \in V , f_i(p) = p(t_i)$.
b). For all $p \in V , f_i(p) = p(t_i)$ for $i = 1, 2$, $f_3(p) =
p'(t_1)$ and $f_4(p) = p'(t_2)$.
c). For all $p \in V , f_i(p) =
p(t_i)$ for $1 \leq i \leq 3$ and $f_4(p)=\int_{a}^{b} p'(t)dt$
For option a) I do the following: take $f=af_1+bf_2+cf_3+df_4=0$. There exists a $p\in V$ with $p(1)=1,p(2)=p(3)=p(4)=0$, so that $f_i(p) = \delta_{i1}$. Applying $f$ to $p$ we find $a=0$; similarly $b=c=d=0$, hence a basis. But I don't know how to do this for b) and c). Thank you for the help.
| One approach is as follows: just as the linear map
$$
p \mapsto \pmatrix{p(1)\\p(2)\\p(3)\\p(4)}
$$
has a trivial kernel, show that the map
$$
p \mapsto \pmatrix{p(1)\\p(2)\\p'(1)\\p'(2)}
$$
has a trivial kernel.
For c, note that we can rewrite $f_4(p) = p(b) - p(a)$. So, whether or not the $f_i$ are linearly independent depends on the values of $a$ and $b$.
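The trivial-kernel check for b) amounts to showing that the matrix of the four functionals on the monomial basis $1, x, x^2, x^3$ is invertible; a small sketch (the exact-arithmetic determinant routine and the matrix entries are my own evaluation):

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row -- fine for a 4x4 matrix
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

# Rows: the functionals p(1), p(2), p'(1), p'(2) applied to 1, x, x^2, x^3
M = [
    [1, 1, 1, 1],    # p(1)
    [1, 2, 4, 8],    # p(2)
    [0, 1, 2, 3],    # p'(1)
    [0, 1, 4, 12],   # p'(2)
]
print(det(M))  # nonzero, so p -> (p(1), p(2), p'(1), p'(2)) has trivial kernel
```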
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f(x)=\log\sqrt{\frac{1+x}{1-x}}$ is surjective from $(-1,1)$ to $\mathbb{R}$. I have to prove that the function $\;f:(-1,1)\to \mathbb{R}\;$ defined by $f(x)=\log\sqrt{\frac{1+x}{1-x}}\;$ is bijective.
I have already proved that it is injective:
$$f(x)=f(y)$$
$$\log\sqrt{\frac{1+x}{1-x}}=\log\sqrt{\frac{1+y}{1-y}}$$
$$\log\sqrt{\frac{(1+x)(1-y)}{(1-x)(1+y)}}=0$$
$$\frac{1+x-y-xy}{1-x+y-xy}=1$$
$$x=y$$
But now, how can I prove that the function is surjective?
| We have that $f(x)$ is defined in $(-1,1)$ and
$$f(x)=\log\sqrt{\frac{1+x}{1-x}}\implies f'(x)=\frac1{1-x^2}>0$$
then $f(x)$ is injective, moreover
$$\lim_{x\to 1^-} f(x)=\infty \quad \lim_{x\to -1^+} f(x)=-\infty$$
and since $f(x)$ is continuous by IVT it is surjective.
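Surjectivity can also be made concrete: $f(x)=\frac12\log\frac{1+x}{1-x}$ is exactly $\operatorname{artanh}(x)$, so $x=\tanh(y)\in(-1,1)$ is an explicit preimage of any $y$; a quick numerical sketch:

```python
import math

def f(x):
    return math.log(math.sqrt((1 + x) / (1 - x)))

# x = tanh(y) lies in (-1, 1) and satisfies f(x) = y
for y in [-6.0, -1.5, 0.0, 0.3, 5.0]:
    x = math.tanh(y)
    assert -1 < x < 1
    assert abs(f(x) - y) < 1e-9
print("f(tanh(y)) = y at all sample points")
```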
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Evaluate $\lim_{h \rightarrow 0} {(x+h)^{99}-x^{99}\over h}$
$$\lim_{h \rightarrow 0} {(x+h)^{99}-x^{99}\over h}$$
I need to factor this in order to get a limit.
I tried:
$$\lim_{h \rightarrow 0} {[(x+h)^{33}]^3-[(x)^{33}]^3\over h}
\\ \lim_{h \rightarrow 0} {[(x+h)^{33}-(x)^{33}][(x+h)^{66}+(x+h)^{33}(x)^{33}+(x)^{66}]\over h}$$
However this does not seem helpful.
How do I approach this question?
| Because both $\lim\limits_{h\to 0}((x+h)^{99}-x^{99})=0$ and $\lim\limits_{h\to 0}h=0$ you may also use L'Hospital's rule:
$\displaystyle\lim\limits_{h\to 0}\frac{(x+h)^{99}-x^{99}}{h}=\displaystyle\lim\limits_{h\to 0}\frac{\frac{d}{dh}((x+h)^{99}-x^{99})}{\frac{d}{dh}h}=\displaystyle\lim\limits_{h\to 0}\frac{99(x+h)^{98}-0}{1}=99x^{98}$
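As a sanity check, the difference quotient can be evaluated with exact rational arithmetic at $x=1$, where the limit should be $99\cdot 1^{98}=99$; a small sketch:

```python
from fractions import Fraction

x = Fraction(1)
for k in (2, 4, 6):
    h = Fraction(1, 10 ** k)
    q = ((x + h) ** 99 - x ** 99) / h  # exact rational value of the quotient
    print(k, float(q))                 # approaches 99 as h -> 0
```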
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3014988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
approximation of trigonometric functions in form $a + bx + cx^2 +...$ Is there any way to express the trigonometric functions as infinitely long polynomials? If so, how? If not, why?
Obviously doing $x(x-\pi)(x-2\pi)(x-3\pi)...$ does not work as it doesn't match for values in between the zeroes.
| I have heard that there are ways of making your approach work, but I don't know anything about it personally.
The standard approach would be Taylor series. The idea of Taylor series is to make an "infinite degree" polynomial for which the function value and all derivatives agree with those of your function.
Specifically, for $\sin$, we have
$$
\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots
$$
For any given $x$, the factorial $n!$ eventually grows much faster than the power $x^n$, so the series will converge.
A Taylor series is an expression like the above (we do allow even degree terms, it's just that $\sin$ happens to not have them), possibly with $x$ replaced by $(x-a)$ for some real number $a$. We say that such a Taylor series is centered around $a$. The above series for $\sin$ is centered at $0$.
In general, in order to have a Taylor series, a function must be infinitely differentiable. And even then, there is no guarantee that the Taylor series converges, or that it converges to the function it is derived from. And even if it does, the series may not converge everywhere.
A function which for each point $a$ in its domain has a Taylor series centered at $a$ which converges on some open neighborhood of $a$ is called analytic. Many (most?) named functions that one learns about in school are analytic. Trigonometric functions like $\sin$ are analytic, as are polynomials, $\sqrt{\phantom x}$, $e^x$, and rational functions, to name a few.
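The partial sums of the $\sin$ series above illustrate the convergence; a minimal numerical sketch:

```python
import math

def sin_taylor(x, terms):
    # partial sum x - x^3/3! + x^5/5! - ... with the given number of terms
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 2.0
for terms in (2, 5, 10):
    print(terms, sin_taylor(x, terms))
print("sin(2) =", math.sin(x))  # the 10-term sum already agrees to many digits
```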
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Locality and Hilbert Curve I have a hilbert curve index based on this algorithm. I take two to four values (latitude, longitude, time in unix format and an id code) and create a 1-d hilbert curve.
I'm looking for a way to use this data to create a bounding box query (i.e. "find all ids within this rectangle).
I'm looking for a way to do so without decoding the 1d Hilbert code back into its constituent parts.
My question is: if I created a 2d hilbert curve range (i.e. I converted the range of the box into a hilbert curve so x1y1-> hilbert value1 and x2y2-> hilbertvalue2) would all values of corresponding 2d hilbert values fall within their range?
E.g. If I converted (1,2) and (20,30) into Hilbert values and then searched for all values between hilbertvalue1 and hilbertvalue2, would all the values I get decode to fall within (1,2) and (20, 30), or would I have to perform additional transformations?
When I set all my values to 2^a* X and 2^a * y (a being a positive integer multiplier) it seems to be true. However, is there a way to use a range search on the 4d hilbert curve? I.e., if I have a Hilbert Curve made of 4 values and I have a bounding box query, can I see which hilbert values fall within that bounding box without decoding the entire Hilbert curve and checking?
Thanks.
| I just found the answer in this paper:
https://www.researchgate.net/publication/3296936_Analysis_of_the_Clustering_Properties_of_Hilbert_Space-filling_Curve
It is a dense paper, so I don't have the exact answer. But if you're motivated you can find it!
PS: you can also find the paper here
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
distribution associated with a discontinuous function Let $f\colon\mathbb{R}\to\mathbb{R}$ be such that, for every $n\in\mathbb{Z}$, $f$ is differentiable on $\left(n,n+1\right)$ and $n$ is a discontinuity of first kind of $f$. We define
$$T_f(\phi)=\int_{\mathbb{R}}f(x)\phi(x)dx,\quad\text{where }\phi\text{ is a test function}.$$
I understand that, when $f$ is continuously differentiable, $T_f$ is well defined and the derivative of $T_f$ is just $T_{f'}$. However, these two do not seem clear in this case.
My attempt: Since $n$ and $n+1$ are discontinuities of first kind of $f$, we can define a new function, say $g_{n}$ such that $g_{n}$ is continuous on $[n,n+1]$ and equal to $f$ a.e. Thus $f$ is (Lebesgue) integrable on $[n,n+1]$. We then conclude that $f$ is locally integrable on $\mathbb{R}$ and therefore $T_{f}$ is well defined.
My question: 1) Is the above reasoning (to show that $T_f$ is well defined) correct? And:
2) What would the derivative of $T_f$ look like in this case? I think it is not $T_{f'}$ as $f'$ is not defined at $n$.
Any help/hint is highly appreciated.
| If $f$ is absolutely continuous on each segment $[n, n + 1]$ when defined to take the values $f(n + 0)$ and $f((n + 1) - 0)$ at the endpoints, integration by parts shows that the distributional derivative is
$$f'(x) + \sum_n (f(n + 0) - f(n - 0)) \delta(x - n),$$
where $f'(x)$ is the ordinary derivative. If the conditions for integration by parts do not hold, the distributional derivative may not be representable in this form. Also, if the test functions do not have finite support, it is necessary to add a condition on how fast $f$ can grow.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Second derivative of a matrix quartic form I need to compute the second derivative of the following quartic expression: $$x^H A^H x x^H A x$$ where $A$ is Hermitian. I have tried to compute the first derivative, and if I am not wrong, it should be: $$(A+A^H) x x^H (A+A^H) x$$
But then, I do not know how to proceed to calculate the second derivative. Could someone sketch the steps I need to follow? Thank you.
| Define the scalar variables
$$\eqalign{
&\phi &= x^HAx = (A^Tx^*)^Tx \cr
&\phi^* &= x^HA^Hx = (A^*x^*)^Tx \cr
&\psi &= \phi^*\phi \cr
}$$
Find the gradient of your function $(\psi)$ with respect to $x$, treating $x^*$ as an independent variable.
$$\eqalign{
d\phi &= (A^Tx^*)^T\,dx \cr
d\phi^* &= (A^*x^*)^T\,dx \cr
d\psi
&= \phi\,d\phi^* + \phi^*\,d\phi \cr
&= (\phi A^*x^* + \phi^*A^Tx^*)^Tdx \cr
g = \frac{\partial\psi}{\partial x}
&= A^*x^*\phi + A^Tx^*\phi^* \cr
g^* = \frac{\partial\psi}{\partial x^*}
&= Ax\phi^* + A^Hx\phi \cr
}$$
Not that we need it, but the last equation is a consequence of the fact that $\psi=\psi^*\,$ (it's real).
Now the Hessian is just the gradient of the gradient, so
$$\eqalign{
dg
&= (A^*x^*)\,d\phi + (A^Tx^*)\,d\phi^* \cr
&= \Big((A^*x^*)(A^Tx^*)^T + (A^Tx^*)(A^*x^*)^T\Big)\,dx \cr
H = \frac{\partial g}{\partial x}
&= A^*x^*x^HA + A^Tx^*x^HA^H \cr
}$$
Note that the Hessian is symmetric, but it is not Hermitian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$E(X^2|X-Y) E(X^3|X-2Y)$ for Gaussians? For independent Gaussians following the normal distribution with expectation zero and variance one, how do I compute:
$E(X^2|X-2Y), E(X^3|X-2Y)$
I know that $X-2Y$,$X+2Y$ are independent. However, this does not seem to be enough to deduce the result, without a restriction such as:
$E(X^2|X-2Y)=-2E(Y^2|X-2Y)$
(I am not sure if this holds).
| $X=\frac {(X+2Y)+(X-2Y)} 2$. Compute $X^{2}$ and $X^{3}$ in terms of $X+2Y$ and $X-2Y$ from this. $$E(X^{2}|X-2Y)=\frac {E((X+2Y)^{2}|X-2Y)+E((X-2Y)^{2}|X-2Y)+2E((X+2Y)(X-2Y)|X-2Y)} 4$$ $$=[E((X+2Y)^{2}) +(X-2Y)^{2}+2(X-2Y)E(X+2Y)] /4=[E((X+2Y)^{2}) +(X-2Y)^{2}] /4$$ $$=\frac 5 4+\frac {(X-2Y)^{2}} 4.$$
This answer was written assuming that the OP was right in saying that $X+2Y$ and $X-2Y$ are independent. They are not, so a slight modification is required. See my comment below for the modification.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Recurrence relation/with limit Let $F_{n}=F_{n-1}+F_{n-2}$ for $n \in \mathbb{N}$ with $n \geq 2$
$F_0:=0$ and $F_1:=1$.
How to compute
$\lim\limits_{n\to\infty}\frac{F_{n-1}}{F_{n+1}}$?
I tried to use Binet's formula:
$\lim\limits_{n\to\infty}\frac{F_{n-1}}{F_{n+1}}=\lim\limits_{n\to\infty}\frac{\frac{1}{\sqrt{5}}(\xi^{n-1}-\phi^{n-1})}{\frac{1}{\sqrt{5}}(\xi^{n+1}-\phi^{n+1})}=\lim\limits_{x\to\infty}\frac{\xi^{n-1}(1-\frac{\phi^{n-1}}{\xi^{n-1}})}{\xi^{n+1}(1-\frac{\phi^{n+1}}{\xi^{n+1}})}$
But I don't know what to do next.
I suppose $\xi^{n+1}(1-\frac{\phi^{n+1}}{\xi^{n+1}})=\xi$ but what about ${\xi^{n-1}(1-\frac{\phi^{n-1}}{\xi^{n-1}})}$?
| After$$\lim_{n\to\infty}\frac{F_{n-1}}{F_{n+1}}=\lim_{n\to\infty}\frac{\frac{1}{\sqrt{5}}\left(\xi^{n-1}-\phi^{n-1}\right)}{\frac{1}{\sqrt{5}}\left(\xi^{n+1}-\phi^{n+1}\right)},$$you should have obtained$$\lim_{n\to\infty}\frac{\xi^{n-1}\left(1-\frac{\phi^{n-1}}{\xi^{n-1}}\right)}{\xi^{n+1}\left(1-\frac{\phi^{n+1}}{\xi^{n+1}}\right)},$$which is equal to$$\frac1{\xi^2}\lim_{n\to\infty}\frac{1-\frac{\phi^{n-1}}{\xi^{n-1}}}{1-\frac{\phi^{n+1}}{\xi^{n+1}}}=\frac1{\xi^2}.$$
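A numerical check confirms the value $1/\xi^2\approx 0.38197$, with $\xi=(1+\sqrt5)/2$ the golden ratio (the larger root in Binet's formula, as I read the question's notation); a quick sketch:

```python
import math

# Fibonacci numbers with F_0 = 0, F_1 = 1
F = [0, 1]
for _ in range(40):
    F.append(F[-1] + F[-2])

n = 30
ratio = F[n - 1] / F[n + 1]
golden = (1 + math.sqrt(5)) / 2
print(ratio, 1 / golden ** 2)  # both approximately 0.3819660...
```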
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Picking marbles
We have 15 urns, each having a different number of marbles, from 1 to 15. Each day we pick the same number of marbles from each of the urns we choose. We repeat the process until we have picked all the marbles. What is the minimum number of days in which we can finish picking all the marbles? Just to clarify: it is not necessary to pick marbles from EVERY urn.
I don't think I can make it in less than 5 moves (start by picking 6, then 4, then 3, then 2 and 1) but I am fairly sure it can be done in 4 or maybe less.
Any ideas?
| It is possible in 4 days:
First day you reduce the number of balls by 8 in urns with at least 8 balls. So now each urn has at most 7 balls.
Second day you reduce the number of balls by 4 in urns with at least 4 balls. So now each urn has at most 3 balls.
Third day you reduce the number of balls by 2 in urns with at least 2 balls. So now each urn has at most 1 ball.
On the last day you take the balls from all the nonempty urns.
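The strategy above is just binary decomposition: every count from 1 to 15 is a sum of distinct values among 8, 4, 2, 1. A small simulation:

```python
urns = list(range(1, 16))               # urns holding 1..15 balls

days = 0
for take in (8, 4, 2, 1):               # amount removed per chosen urn each day
    urns = [u - take if u >= take else u for u in urns]
    days += 1

print(days, all(u == 0 for u in urns))  # 4 True
```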
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3015813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Let $f:[a,b]\rightarrow \mathbb{R}$ be a continuous function with continuous derivative on $[a,b]$, such that $0<f'(x)<M \ \ \forall \ x\in[a,b]$.
Find $c,d\in \mathbb{R}$ such that $c\leq f(x)\leq d$.
| If you know a bit of general topology, there is an easy proof of a stronger statement:
Lemma. Let $(X, \tau), (Y, \sigma)$ be topological spaces and let $f \colon (X, \tau) \to (Y, \sigma)$ be continuous. If $C$ is compact in $(X,\tau)$, then $f[C]$ is compact in $(Y, \sigma)$.
Prove this and combine it with the fact that $C \subseteq \mathbb{R}$ is compact in $\mathbb{R}$ if and only if it is closed and bounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Given a book with $100$ pages and a $100$ lemmas, prove that there is some lemma written on the same page as its index
A book consists of 100 pages and contains 100 lemmas and some images. Each lemma is at most one page long and can't be split into two pages (it has to fit in one page). The lemmas are numbered from 1 to 100 and are written in ascending order. Prove that there must be at least one lemma written on a page with the same number as the lemma's number.
If lemma no. $1$ is written on page $1$, then it is proved. Let's assume lemma no. $1$ is written on page $k, k>1$. Then on at least one page there must be $2$ lemmas. Let's assume that always on page $k+i$ we have lemma no. $i+1$ and so on. Then the last $100-k-i$ lemmas must fit on the last page, which means that there will be at least one lemma (number $100$) on page $100$.
But I don't know how to express it in a more mathematical way!
Any help?
| We claim more generally that a book of $n$ pages and $n$ lemmas numbered $1$ through $n$ has at least one lemma on a page matching its number.
Proof by induction on $n$: The case $n=1$ is obvious. Now suppose the statement is true for some $n$, and suppose we have a book of $n+1$ lemmas and $n+1$ pages. If lemma $n+1$ is on a page numbered less than $n+1$, then lemmas $1$ through $n$ must be on pages $1$ through $n$, and there must be at least one lemma on a same-numbered page by the inductive hypothesis. If not, then lemma $n+1$ is on page $n+1$, and we're done.
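Since the lemmas appear in ascending order, the assignment $i\mapsto\text{page}(i)$ is a nondecreasing map from $\{1,\dots,n\}$ to itself, and the claim is that every such map has a fixed point; a brute-force check for small $n$:

```python
from itertools import combinations_with_replacement

# Nondecreasing maps {1..n} -> {1..n} are exactly the sorted n-tuples
for n in range(1, 7):
    for pages in combinations_with_replacement(range(1, n + 1), n):
        assert any(page == i for i, page in enumerate(pages, start=1)), pages
print("every nondecreasing self-map of {1..n} has a fixed point for n <= 6")
```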
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 7,
"answer_id": 5
} |
Type/codomain of constant function $f(x)=5$ A function like $f(x) = 2x$ can be defined over the reals so its “type signature” or in set theory domain and codomain is $f: \mathbb{R} \rightarrow \mathbb{R}$.
I want to define a function $f(x) = 5$ (or some other constant number) and restrict the codomain/return type to be a constant value. What is the (most restrictive) type signature of such a function? Does this require dependent types? Is it $f: \mathbb{R} \rightarrow 5$? I want to know the type signature for that specific function that returns $5$ and the more general notion of a function that returns a constant value that does not depend on the input argument.
My motivation is that I want to be able to describe in a precise way functions that do not use/“delete” their input argument and return some other constant value. Eventually, I want to formalize this in a proof assistant.
| The answer to this really depends on your type theory. The type theories of proof assistants like Coq or Agda will let you express any of the following classes as types:
$$
\{f : \Bbb{R} \to \Bbb{R} \mid \forall x:\Bbb{R}\cdot f(x) = 5\} \\
\{f : \Bbb{R} \to \Bbb{R} \mid \exists c:\Bbb{R}\cdot \forall x:\Bbb{R}\cdot f(x) = c\} \\
\bigcup_Y \{f : \Bbb{R} \to Y \mid \exists c:Y\cdot \forall x:\Bbb{R}\cdot f(x) = c\} \\
\bigcup_{X, Y} \{f : X \to Y \mid \exists c:Y\cdot \forall x:X\cdot f(x) = c\}
$$
and many variations on this kind of theme. Weaker type theories might not be able to deal with some of the above.
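As a programming-language aside (my own analogy, not one of the type theories referred to above), Python's `typing.Literal` can express the most restrictive signature, with the singleton type $\{5\}$ as codomain:

```python
from typing import Literal

def f(x: float) -> Literal[5]:
    # the annotated return type contains exactly one value: 5
    return 5

print(f(3.14))  # 5
```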
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\forall n, \, \exists N,x :\lfloor{x^{N}}\rfloor =n \, \land \,\lfloor{x^{N+1}}\rfloor =n+1$ The question is related to the interesting problem raised by following OP.
Notably I'm trying to prove the following fact
For any $n\in \mathbb{N} \quad \exists x\in \mathbb{R} \quad x>1$ and $\exists N\in \mathbb{N}$ such that
$$\lfloor{x^{N}}\rfloor =n \quad \land \quad \lfloor{x^{N+1}}\rfloor =n+1$$
My idea for the proof is to consider
*
*$x=\sqrt[N]{n} \implies \lfloor{x^{N}}\rfloor = \lfloor{{(\sqrt[N]{n})}^{N}}\rfloor =\lfloor n\rfloor=n$
and then we need that
*
*$x^{N+1}=x\cdot n=n+1+\alpha\,$ for some $\alpha \in(0,1)$
therefore finally we need to show that for any $n\in \mathbb{N} \quad \exists \alpha\in \mathbb{R} \quad \alpha \in(0,1)$ and $\exists N\in \mathbb{N}$ such that
$$\sqrt[N]{n}=1+\frac{1+\alpha}{n}$$
but I'm totally stuck here and I can't find any method to prove that.
Thanks in advance for any idea or suggestion about that!
| This is long after you received the excellent answer by Ingix (+1), but I'd like to present an arguably more elementary solution.
We'll use $x = n^{1/N}$ as in your Q. We need to argue that $\exists N \in \mathbb{N}:$ \begin{align} n+1 &\le n^{1 + 1/N} < n+2 \\ \iff \frac{\ln (n+1)}{\ln n} &\le 1 + \frac{1}{N} < \frac{\ln (n+2)}{\ln n} \\ \iff \frac{\ln(1+1/n)}{\ln n} &\le \frac{1}{N} < \frac{\ln (1 + 2/n)}{\ln n} \\ \iff \frac{\ln n }{\ln(1 + 2/n)} &< N \le \frac{\ln n}{\ln( 1 + 1/n)}
\end{align}
But this is true so long as the difference between the left and right hand side above is at least $1$. We can use the following elementary inequality: $$\mathrm{for}\, x\in (0,1): \quad 2x/3 \le \ln(1+x) \le x \iff \frac{1}{x} \le \frac{1}{\ln(1+x)} \le \frac{3}{2x}. $$
Thus, $ \frac{\ln n}{\ln(1 + 1/n)} \ge n \ln n$ and $ \frac{\ln n}{\ln(1 + 2/n)} \le \frac{3n}{4} \ln n$. The gap between them is at least $ \frac{n}{4} \ln n $ which is seen to be $> 1$ for $n\ge 4.$
For $n = 1,2,3$ witnesses are found easily: $x = 4/3$ and $N = 2,3,4$ respectively.
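Both the explicit small-$n$ witnesses and the construction $x=n^{1/N}$ can be verified numerically; a sketch (the search bound and floating-point guards are my own choices):

```python
import math

def witness(n):
    # returns (x, N) with floor(x^N) = n and floor(x^(N+1)) = n + 1
    if n <= 3:
        return 4 / 3, n + 1                       # witnesses from the answer
    for N in range(1, 5000):
        x = n ** (1.0 / N)                        # x^N = n by construction
        y = n * x                                 # x^(N+1)
        if n + 1 + 1e-9 <= y <= n + 2 - 1e-9:     # safely inside [n+1, n+2)
            return x, N
    raise AssertionError(f"no witness found for n = {n}")

for n in range(1, 101):
    x, N = witness(n)
    # the tiny epsilon guards against rounding just below an integer
    assert math.floor(x ** N + 1e-9) == n
    assert math.floor(x ** (N + 1) + 1e-9) == n + 1
print("witnesses found for n = 1..100")
```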
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Outer measure and set with full measure I've seen two different definitions for an outer measure $\mu$ on $\mathcal{P}(X)$, where $X$ is a set, obtained from a given probability measure on $X$.
D1 = a set of full outer measure is a subset $A\subseteq X$ such that $\mu(A)=1$.
D2 = a set of full outer measure is a subset $A$ such that $\mu(X\backslash A)=0$.
Since outer measures are not additive, I'm having trouble seeing how those two definitions are equivalent. Clearly D2 implies that a set with full outer measure is measurable (for the sigma algebra generated by $\mu$), however it's not clear to me that it's the case for D1.
So my question is : are those definitions equivalent ?
My follow up question is : when we say that $C([0,1])$ has full outer measure for the Wiener measure on $\mathbb{R}^{[0,1]}$, are we using Definition D2, or D1 ?
| D1 and D2 are not equivalent in general.
Consider $X=[0,1]$ and $\mu=$ Lebesgue outer measure
By this thread, there is a non-measurable set $V$ with $\mu(V)=1$, so $V$ is a set of full measure in the sense of D1. However, as you pointed out, any set of full measure in the sense of D2 must be measurable, so $V$ is not such a set.
I do not have knowledge on Wiener measure so I leave it to someone who know it well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does there not exist a particular finite set of congruences that forms a partition of the set of integers There is a paragragh that I saw in an article and could not really understand. The article is Covering Systems of Congruences, J. Fabrykowski and T. Smotzer, Mathematics Magazine Vol. 78, No. 3 (Jun., 2005), pp. 228-231, DOI: 10.2307/30044163. It starts by discussing problem 9 from the 2002 American Invitational Mathematics Examination (AIME):
PROBLEM. Harold, Tanya, and Ulysses paint a very long fence. Harold starts with the first picket and paints every $h$th picket; Tanya starts with the second picket and paints every $t$th picket; and Ulysses starts with the third picket and paints every $u$th picket. If every picket gets painted exactly once, find all possible triples $(h,t,u)$.
After presenting the solution, the article goes on to say:
"One can generalize the AIME problem and ask whether there exists a finite set of congruences, with all moduli distinct and greater than or equal to $2$, that forms a partition of the set of integers. This turns out to be impossible. Relaxing the assumption about partitioning the integers, one can look for finite sets of congruences such that every integer belongs to at least one of them."
Can anyone help explain why the first part is impossible?
| The answer is yes. There are several.
This question gives one example: Prove {0 mod 2, 0 mod 3, 1 mod 4, 1 mod 6, 11 mod 12} is a covering system
$0 \mod 2, 0 \mod 3, 1\mod 4, 1\mod 6, 11 \mod 12$.
This paper is all about them https://arxiv.org/pdf/1705.04372.pdf
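That the quoted system covers every integer is a finite check modulo $\operatorname{lcm}(2,3,4,6,12)=12$; a quick sketch:

```python
from math import lcm

system = [(0, 2), (0, 3), (1, 4), (1, 6), (11, 12)]   # (residue, modulus) pairs

m = lcm(*(mod for _, mod in system))                   # 12
covered = all(any(x % mod == r for r, mod in system) for x in range(m))
print(m, covered)  # 12 True
```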
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Maximum Value of $g(x)=(8+x)^3(8-x)^4$ I think there cannot be a maximum value of this, since if I plug in $x=1000$, the value of the function increases in leaps and bounds. The answer says that the maximum value will occur at $x=-8/7$. What am I missing here?
| We have that
$$g(x)=(8+x)^3(8-x)^4\implies g'(x)=3(8+x)^2(8-x)^4-4(8+x)^3(8-x)^3=0$$
$$3(8+x)^2(8-x)^4=4(8+x)^3(8-x)^3$$
which is true when
*
*$x=\pm 8$
*$3(8-x)=4(8+x) \implies 7x=-8 \implies x=-\frac87$
and the latter is a local maximum.
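A numerical check confirms both halves of the discussion: $x=-8/7$ is a local maximum, yet the function is unbounded above, exactly as the asker suspected; a small sketch:

```python
def g(x):
    return (8 + x) ** 3 * (8 - x) ** 4

c = -8 / 7
assert g(c) > g(c - 0.01) and g(c) > g(c + 0.01)  # local maximum at x = -8/7
assert g(1000) > g(c)                             # ... but no global maximum
print(g(c))
```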
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to show a class of structures is not axiomatizable? For example, let $F$ be a field and $L$ be the language of $F$-vector space.
(1) Prove that the class of finite dimensional $F$-vector space is not axiomatizable.
(2) Prove that if $F$ is infinite then the class of infinite dimensional $F$-vector space is not axiomatizable.
Or let $L$ be the language of rings.
(3) Prove that the class of algebraic extensions of $\mathbb{Q}$ is not axiomatizable.
I think the common way to prove this type of statement would be: first suppose the class is axiomatizable. Then there is some $L$-theory axiomatizing the class. We make a new language $L'$ by adding new symbols to $L$, then construct an $L'$-theory $T'$, show $T'$ is consistent by compactness, and then observe a contradiction.
However, this method requires a bit of algebra knowledge. Can anyone give some hints for the above problems I have listed?
| Hint: for all these questions, you can use the ascending (upward) Löwenheim-Skolem theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3016857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Every finite group has a chief series A chief series in a group $G$ is a series of normal subgroups such that
$1=N_0 \triangleleft N_1 \triangleleft ... \triangleleft N_n=G$,
for which each factor $N_{i+1} / N_i$ is a minimal (non-trivial) normal subgroup of $G/N_i$.
I am trying to prove that every finite group has at least one chief series and my idea is to do it by induction, but I have not managed to make it work yet.
| We'll prove this by induction on the size of $G$. If $|G|$ is 1 or 2, then $1 \triangleleft G$ is a chief series. Let $G$ be a group of size $n$ and let $N$ be a minimal normal subgroup of $G$. If $N = G$ (i.e., $G$ is simple), then $1 \triangleleft G$ is a chief series. So suppose $N \neq G$; then by induction, $G/N$ has a chief series $1=N_0 \triangleleft N_1 \triangleleft N_2...\triangleleft N_k=G/N$. Let $\phi: G \rightarrow G/N$ be the canonical quotient homomorphism. Then we claim that $1 \triangleleft \phi^{-1}(1) \triangleleft \phi^{-1}(N_1)\triangleleft ...\triangleleft \phi^{-1}(G/N)=G$ is a chief series for $G$. One can check $\phi^{-1}(N_i)\triangleleft \phi^{-1}(N_{i+1})$ because this is a property of surjective homomorphisms. Also, as $N_{i+1}/N_i$ is a minimal nontrivial normal subgroup of $\frac{G/N}{N_{i}}$, the factor $\phi^{-1}(N_{i+1})/\phi^{-1}(N_i)$ is a minimal normal subgroup of $G/\phi^{-1}(N_i)$ (by the correspondence theorem). And as $N$ is minimal normal in $G$, the first factor $\phi^{-1}(1)=N$ is minimal normal in $G$. I hope this is correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3017195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to select distribution? — Binomial, Poisson, ... How do I go about finding which distribution I need to use for my exercise?
I have the following exercise:
Compute the probability that within a group of 5 students exactly two
are born on a Sunday.
What gives me a hint on what probability distribution that is?
| The Guide from the previous answer is helpful. It should help you pinpoint the exact distribution to choose.
Sometimes the problem might not be easily convertible to a distribution. In those cases you can always go back to thinking in terms of basic probabilities.
How many total ways can the students' birthdays be arranged? Number of days to the power of the number of students $= 7^5$
How many ways can we arrange the students to satisfy the condition? Choose 2 students from the 5. They are born on a Sunday. The other three are not. This leads to ${5 \choose 2}$ * $1^2$ * $6^3$
If you divide the two you will get ${5 \choose 2} * {\frac 1 7}^2 * {\frac 6 7}^3$
Notice that it's the same result as if you were to model it using a binomial distribution with $p = 1/7$, $n = 5$. So at the end of the day, if you feel stuck you can always roll back to basics.
Hope it helps
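The binomial computation can be confirmed exactly with rational arithmetic; a quick sketch:

```python
from math import comb
from fractions import Fraction

n, k = 5, 2
p = Fraction(1, 7)                       # probability of a Sunday birth
prob = comb(n, k) * p ** k * (1 - p) ** (n - k)
print(prob, float(prob))                 # 2160/16807, about 0.1285
```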
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3017322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Distance an arbitrary point is found along a given vector Say I have a vector in 2D space defined by two points $(x_1, y_1)$ and $(x_2, y_2)$: $$\vec{v}=(x_2 - x_1, y_2 - y_1)$$ I would like to find how far along that vector an arbitrary point $(x_3, y_3)$ is. This is very woolly language$^*$, so I've attempted to create a diagram showing the situation.
In this diagram, the quantity I'm interested in is $a$, which I can calculate using Pythagoras' theorem if I know $b$ and $c$. I know $c$, which is the length of vector $(x_3 - x_1,y_3 - y_1)$, given by $$c = \sqrt{(x_3 - x_1)^2 + (y_3 - y_1)^2}$$ So, now I need to calculate $b$: the length of a vector – that I'll call $\vec{u}$ – that is perpendicular to $\vec{v}$ and passes through point $(x_3, y_3)$. For $\vec{u}$ and $\vec{v}$ to be perpendicular the dot product must be zero. That is,
$$\vec{v}\cdot \vec{u}=0$$
$$(x_4-x_3)(x_2-x_1)+(y_4-y_3)(y_2-y_1)=0$$
This is where I begin to falter: one equation with two unknowns, $y_4$ & $x_4$. I expect there is some obvious constraint on $\vec{u}$ that I should be using to eliminate an unknown, but my sleep-deprived mind is offering no help. Can someone point out what I've missed?
$$$$
$^*$I really want to use the word project to describe how my arbitrary point $(x_3, y_3)$ is placed along that vector. Is this the correct terminology?
| You are right in calling this a projection. If $(x_1,y_1)$ is the origin, then you can project ${\bf u} = (x_3,y_3)$ onto $\bf v$ thus:
$${\rm proj}_{\bf v}{\bf u} = \frac{\bf u \cdot v}{\bf v \cdot v}{\bf v}.$$
If $(x_1,y_1)$ is not the origin, then just shift the frame of reference to make it so.
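In coordinates, the projected distance is $a = \mathbf{u}\cdot\mathbf{v}/\lVert\mathbf{v}\rVert$; a small sketch with made-up points:

```python
import math

def along(p1, p2, p3):
    # distance of the projection of p3 along the vector from p1 to p2
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    ux, uy = p3[0] - p1[0], p3[1] - p1[1]
    return (ux * vx + uy * vy) / math.hypot(vx, vy)

# v points along the x-axis; the point (3, 4) projects to distance 3 along it
print(along((0, 0), (10, 0), (3, 4)))  # 3.0
```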
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3017416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
LaSalle for time varying systems I am looking for an explanation of why LaSalle's theorem is in general not applicable to time-varying systems. Can someone provide an example system with
$$
\dot{x}(t) = A(t)x(t) \tag{1}
$$
I.e., why can't LaSalle's theorem be used if I have a Lyapunov function $V(t, x)$ for the system $(1)$ with $\dot{V}(t, x) \leq 0$?
| Consider the system
$$
\left\{\begin{array}{lll}\tag{1}
\dot x&=&0\\
\dot y&=&0\\
\end{array}\right.
$$
and the function $V(t,x,y)=e^{-t}(x^2+y^2)$. The directional derivative is negative definite:
$$
\dot V= -e^{-t}(x^2+y^2)+e^{-t}(2x\dot x+2y\dot y)=-e^{-t}(x^2+y^2),
$$
but the solutions of (1) do not approach the set
$$
E=\{ (x,y):\; \dot V= 0\}=\{(0,0)\}.
$$
This example is possible because $V(t,x)$ may decrease due to the explicit dependence on $t$, regardless of the approaching of the solution to the set $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3017681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $x \in A$ why does it not follow that $x \in A-B$? I understand that if $x \in A-B$ then $x \in A$ and $x \notin B$, but why doesn't the reverse hold true?
| Because $A\setminus B$ only includes elements in $A$ which are not also in $B$. Put differently, $A$ can be partitioned as
$$
A = (A\setminus B )\cup (A\cap B)
$$
and if $x\in A\cap B$, then $x\in A$ but $x\notin A\setminus B$.
For instance, take $A=B=\{1\}$ for a trivial counterexample. Then $A\setminus B = \emptyset$: so $1\in A$, but $1\notin A\setminus B$.
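The partition above is easy to see with concrete (made-up) sets, e.g. in Python, where `-`, `&`, `|` are set difference, intersection, and union:

```python
A = {1, 2, 3}
B = {2, 3, 4}

diff = A - B          # elements of A that are not also in B
inter = A & B         # A ∩ B

partition_ok = (diff | inter == A)   # A = (A \ B) ∪ (A ∩ B)

# The trivial counterexample A = B = {1}:
C = {1}
counterexample_ok = (1 in C) and (1 not in C - C)
```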
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3017931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\lim_{r\to s} \|D_rf-D_sf\|_p = 0,\,r,\,s > 0$ Full Question: Let $D_r$ be the dilation operator $D_rf(x) = f(rx)$ on $L^p(\mathbb{R}^d),\, 1 \leq p < \infty$. Show that $\lim_{r\to s} \|D_rf-D_sf\|_p = 0,\,r,\,s > 0$
I was told to use $\int f(rx)d\lambda^d(x) = |r|^{-d}\int f(x)d\lambda^d(x)$.
So I was thinking: set $s=1$, and prove $\lim_{r\to 1}\|D_rf-D_1f\|_p = \lim_{r\to 1}\|f(rx)- f(x)\|_p = 0$, then show this scales for all $s$. To show this I was thinking that since we know $\|f-g\|_p < \epsilon$, where $g$ is continuous and vanishes outside a bounded set, then do something along the lines of $\|D_rf - f\|_p \leq \|D_rf-D_rg\|_p+\|D_rg-g\|_p+\|g-f\|_p$, and $\|D_rf-D_rg\|_p = \left(\int|f(rx)-g(rx)|^p\,d\lambda\right)^{1/p} = |r|^{-d/p}\|f-g\|_p \leq |r|^{-d/p}\epsilon$, but I've been stuck here.
| Given $\epsilon >0$ there exists $g \in C_c(\mathbb R^{d})$ such that $\|f-g\|_p<\epsilon$. DCT tells you that the result is true with $g$ in place of $f$. Now $\|f(rx)-f(x)\|_p \leq \|g(rx)-g(x)\|_p+\|f(x)-g(x)\|_p+\|f(rx)-g(rx)\|_p$ and $\|f(rx)-g(rx)\|_p=r^{-d/p} \|f(x)-g(x)\|_p$ for $r>0$.
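A rough one-dimensional sanity check of the scaling $\|f(r\,\cdot)\|_p = |r|^{-d/p}\|f\|_p$ stated in the question (the Gaussian test function and the crude midpoint rule are my own choices): with $d=1$, $p=2$, the ratio should be $r^{-1/2}$:

```python
import math

def l2_norm(f, lo=-40.0, hi=40.0, n=100_000):
    """Crude midpoint-rule approximation of the L^2 norm of f on [lo, hi]."""
    h = (hi - lo) / n
    return math.sqrt(sum(f(lo + (i + 0.5) * h) ** 2 for i in range(n)) * h)

f = lambda x: math.exp(-x * x)     # rapidly decaying, so [-40, 40] captures it
r = 3.0
lhs = l2_norm(lambda x: f(r * x))
rhs = r ** (-1 / 2) * l2_norm(f)   # r^{-d/p} with d = 1, p = 2
```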
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Doubt in passing in the Riemann mapping theorem
I have a question, maybe silly in the passage marked in red. I understood everything up to this part. Why does $H'(0) > 0$ imply $e^{i \theta} = 1$? Is the Schwarz Lemma being used?
| From $H=F \circ G^{-1}$ we get with the chain rule and the rule for the derivative of $G^{-1}$ that
$$ H'(0)= \frac{F'(z_0)}{G'(z_0)}.$$
Since $F'(z_0), G'(z_0)>0$, it follows that $H'(0)>0$.
From $H(z)=e^{i \theta}z$, we get $H'(0)=e^{i \theta}.$
Furthermore we have: $e^{i \theta}>0 \iff e^{i \theta}=1.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does this limit $\lim_{n\to\infty}\sum_{i=0}^n 1/n \sqrt{1 - i^2/n^2}$ converge to $\pi/4$? While trying to find an approximate area of a quarter of a circle by slicing it into small rectangles and summing their areas I've reached a point where I have this formula:
$$\sum_{i=0}^n 1/n \sqrt{1 - i^2/n^2}$$
Writing a quick program and calculating the sum with n = 100, 1000, 10k, 100k suggests this sum converges to $\pi/4$; however, I have no idea why. I've tried to search for known series converging to $\pi/4$ but nothing seems to resemble the above formula.
Please note that in this question I'm not interested in what I was initially for, i.e. the area of a quarter of a circle. This was merely an exercise to show my nephew how we can approximate certain things.
| The sum is nothing but a Riemann sum for $\int_0^{1}\sqrt{1-t^{2}}\, dt$. You can evaluate this by making the substitution $t=\sin\, \theta$ and using the formula $2 \cos ^{2}\, \theta =1+\cos\, (2\theta)$ and you will get $\pi /4$.
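A quick numeric check of this Riemann sum (essentially the program described in the question, sketched in Python):

```python
import math

def quarter_circle_sum(n):
    """sum_{i=0}^{n} (1/n) * sqrt(1 - (i/n)^2), a Riemann sum for
    the integral of sqrt(1 - t^2) over [0, 1]."""
    return sum(math.sqrt(1.0 - (i / n) ** 2) / n for i in range(n + 1))

approx = quarter_circle_sum(100_000)
error = abs(approx - math.pi / 4)
```

As $n$ grows the error shrinks, consistent with the sum converging to $\pi/4$.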
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Sigma-algebra for countable sample space Let $E=\{\{\omega \} : \omega \in \Omega \}$.
$\sigma ( E) = \{ A \subseteq \Omega : A \ is \ countable \ or \ A^c \ is \ countable \} $ is a $\sigma$-algebra generated by the set $E$ i.e. the smallest $\sigma$-algebra containing $E$ (already proved).
Prove:
$\sigma (E)$ is equal to the power set of $\Omega$ if and only if $\Omega$ is a countable set.
One side: Suppose $\Omega$ is countable. $\sigma (E)$ contains all the sets that are countable or whose complement is countable. That is true for every subset of a countable set, so it follows that $\sigma (E)$ is the power set of $\Omega$.
Other side: please help
| If $\Omega$ is not countable then it can be written as disjoint union of two uncountable subsets $\Omega_1,\Omega_2$.
See here for a proof of that.
So then $\Omega_1,\Omega_2\notin \sigma(E)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Trace of symmetric matrix equals sum eigenvalues I need to show that if $\mathbf{S}$ is symmetric, then its trace equals the sum of its eigenvalues. But I don't know how to show this. Can anybody give me a hint?
P.S. Shame on my google skills, but I really can't find any pages on this specific issue. Not with the assumption that $\mathbf{S}$ is symmetric, and no proofs.
| If $S$ is a symmetric matrix then $S$ has a spectral decomposition as $S=PDP'$ where $D$ is the diagonal matrix consisting the eigenvalues of $S$ and $P$ is orthogonal. Then $tr(S)=tr(PDP')=tr(DP'P)=tr(D)=\sum \text{eigen values of } S.$
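For a concrete (hand-picked) $2\times2$ example one can verify this directly, since the eigenvalues of $\begin{pmatrix}a&b\\b&c\end{pmatrix}$ have a closed form:

```python
import math

def eigenvalues_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2
    radius = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mean - radius, mean + radius

a, b, c = 2.0, -1.0, 5.0
lam1, lam2 = eigenvalues_sym2(a, b, c)
trace = a + c                      # tr(S) = a + c
```

Here `lam1 + lam2` equals `trace`, exactly as the spectral-decomposition argument predicts.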
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
What is derivative of $\sin ax$ where $a$ is a constant? What is the derivative of $\sin a x$ where $a$ is a constant.
Actually, I'm studying Physics and not so well-versed with calculus. So, I have studied the basic rules of calculus but am stuck here.
I somewhat know about the product rule but don't get what to do if a constant is given in a trigonometric function, be it $\sin ax $ or $\cos ax$. Whatever..
Please help me get my concept clear.
Thank You!
| Derivative of $\sin(ax) = a \cos(ax)$ by Chain Rule.
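You can also convince yourself numerically with a central difference quotient (the values of $a$, $x$, $h$ below are arbitrary):

```python
import math

a, x, h = 3.0, 0.7, 1e-6
numeric = (math.sin(a * (x + h)) - math.sin(a * (x - h))) / (2 * h)
exact = a * math.cos(a * x)        # chain rule: d/dx sin(ax) = a cos(ax)
```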
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove by induction $7^n$ is an odd number for every natural number n I have proved the base case already of n=1 where $7 = 2p+1$
Then I assumed $n=k$ for $7^k = 2p+1$ for $k \in N$ and $p \in N$
To prove $7^{k+1} = 2p+1$ I have these steps so far:
$7(7^k) = 2p+1$
$7(2p+1) = 2p+1$
$14p+7 = 2p+1$
I am not sure how to prove from here.
| Just rewrite your sum as
$$14p+7=2(7p+3)+1$$
Also, it's not a good idea to use $p$ for all of your steps. If $7^k=2p+1$, then you need to show that $7^{k+1}=2q+1$ for some integer $q$. Otherwise the equation is inconsistent, since the same symbol $p$ would stand for two different values.
Now since $14p+7=2(7p+3)+1$, let $q=7p+3$. Thus $7^{k+1}=2q+1$. And you're done.
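A little illustrative script confirms the induction step $7^{k+1} = 2(7p+3)+1$ for the first few exponents (the helper name is mine):

```python
def next_p(p):
    """If 7**k = 2*p + 1, returns q with 7**(k + 1) = 2*q + 1."""
    return 7 * p + 3

step_ok = all(
    7 ** (k + 1) == 2 * next_p((7 ** k - 1) // 2) + 1
    for k in range(1, 30)
)
all_odd = all(7 ** k % 2 == 1 for k in range(1, 30))
```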
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3018867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Normal operator in Hilbert Complex space share an eigenvalue. Everyone, I get stuck in an exercise of Functional Analysis.
Let $T \in B(H)$ (H a complex Hilbert space) and $T^*$ adjoint of $T$. Supose $T$ is a normal operator.
1) Prove that $Ker(T)= Ker(T^*) = R(T)^\perp$ - I've finished this.
2) Using previous proof. If $\alpha$ is an eigenvalue of $T$ then the conjugate $\bar\alpha$ is an eigenvalue of $T^*$.
3) If $\alpha \neq \beta $ are eigenvalues of $T$, then the associated eigenspaces are orthogonal to each other.
My try:
2) Let $x \in H$ such that $Tx = \alpha x $.
$$<x,Tx> = <T^*x,x> = \alpha<x,x> = <\bar\alpha x,x>$$
Then we have that
$$<T^*x - \bar \alpha x,x>=0$$
Here I don't know if above implies that $T^*x - \bar \alpha x =0$.
3) I didn't make a great progress here.
| $T$ is normal means $T^*T=TT^*$, which is equivalent to
$$
\|Tx\| = \|T^*x\|,\;\;\; x\in H.
$$
So $\mathcal{N}(T)=\mathcal{N}(T^*)$ follows. The sum of normal operators is normal, and any scalar times a normal operator is normal. And the identity $I$ is normal. So, if $T$ is normal, then so is $\alpha I-T$ for any scalar $\alpha$. Therefore $\mathcal{N}(T-\alpha I)=\mathcal{N}(T^*-\overline{\alpha}I)$, and any eigenvector of a normal $T$ with eigenvalue $\alpha$ is an eigenvector of $T^*$ with eigenvalue $\overline{\alpha}$. If $Tx=\alpha x$ and $Ty=\beta y$, then
\begin{align}
(\alpha-\beta)\langle x,y\rangle
& = \langle \alpha x,y\rangle-\langle x,\overline{\beta}y\rangle \\
& = \langle Tx,y\rangle-\langle x,T^*y\rangle \\
& = \langle Tx,y\rangle-\langle Tx,y\rangle =0.
\end{align}
Therefore, if $\alpha\ne \beta$, it follows that $x\perp y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If a first derivative doesn't exist at a certain point, is it not a critical point? Say the derivative of a function is $f'(x) = {(x+2) \over (x+3)}$. If $x = -2$ then $f'(x) = 0$, which means that at this point, there would be a local min or max. But what if $x = -3$? $f'(x)$ doesn't exist there, so do we just ignore it? State that it DNE at $x = -3$, so it is not a critical point?
| I think the easiest explanation at a basic calculus level is to simply say that no, it is not a critical point. Hence if we want to do problems regarding optimization for a function which does not have its derivative (or the function itself) defined everywhere, then you would want to consider the "shape" of that function when deducing things, and not always just writing $f'(x)=0$. For example, if you want to minimise $1/x$, setting its derivative equal to $0$ you get $1=0$, which leads you to the conclusion that there are no critical points. Indeed, if you look at the graph, there aren't any "hills" or "valleys". Yet it still makes sense to say that its minimum value is arbitrarily large, or sometimes $-\infty$, in many contexts.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
} |
Is the set $\{|f(0)|: \int_{0}^{1}|f(t)|dt\le1\}$ bounded? Let $x_0 \in [0,1]$ and define $T:C[0,1] \rightarrow \mathbb{R}$ by $T_{x_0}(f)=f(x_0)$. Let $||\cdot||_1$ be a norm on $C[0,1]$. Is $T_0$ bounded or not? That is, is the set
$$
\left\{|T_{0}(f)|:||f||_1 \leq 1\right\}=\{|f(0)|:||f||_1 \leq 1,f \in C[0,1]\}
$$ bounded? Since $||f||_1:=\int_{0}^{1}|f(t)|dt$, the question may be equivalent to the following:
Let $f:[0,1] \rightarrow \mathbb{R}$ be continuous. Is the set $$\left\{|f(0)|: \int_{0}^{1}|f(t)|dt \leq 1\right\}$$ bounded?
I guess the answer is no. Because, for example, we can have a function whose graph is a narrow spike at the origin but with infinite height. The area enclosed by the graph may be 1 but the value at the origin $f(0)$ which is its height is infinite.
But how can I prove this formally?
| For every $a>0$, the function
$$f_a(x)=\frac{2 a e^{-a^2 x^2}}{\sqrt{\pi} \textrm{erf}(a)}$$ with the error function $\textrm{erf}(a)=\frac{2}{\sqrt{\pi}}\int_0^a e^{-t^2}dt$ satisfies $\int_{0}^{1}f_a(t)\,dt=1$, so $f_a(0)$ lies in the set $\{|f(0)|: \int_{0}^{1}|f(t)|dt\le1\}$, and evaluates to $f_a(0)=\frac{2 a }{\sqrt{\pi} \textrm{erf}(a)}$. Because $\textrm{erf}(a) \rightarrow 1$ as $a\rightarrow \infty$, we obtain $f_a(0)$ arbitrarily large as we increase $a$. As a consequence, your set is unbounded.
For every $a>0$, the function $f_a(x)$ is in $C[0,1]$ and even infinitely differentiable on $[0,1]$.
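Numerically (a sketch using Python's `math.erf`; the sample values of $a$ and the midpoint rule are my own choices), one can check both that $\int_0^1 f_a \approx 1$ and that $f_a(0)$ grows without bound:

```python
import math

def f_a(x, a):
    """The spike function 2a exp(-a^2 x^2) / (sqrt(pi) erf(a))."""
    return 2 * a * math.exp(-(a * x) ** 2) / (math.sqrt(math.pi) * math.erf(a))

def integral_01(f, n=50_000):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

mass = integral_01(lambda x: f_a(x, 5.0))            # should be ~1
heights = [f_a(0.0, a) for a in (1.0, 10.0, 100.0)]  # grows with a
```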
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Proof that $\frac{1+x^2}{n^2} \geq 1-e^{-x^2/n^2}$ for all $x,n \in \mathbb{R}$ I'm looking for a simple proof that $\frac{1+x^2}{n^2} \geq 1-e^{-x^2/n^2}$ for all $x,n \in \mathbb{R}$.
My first attempt was to express the exponential as a Taylor series:
$$\frac{1+x^2}{n^2} \geq \frac{x^2}{n^2}-\frac{1}{2!}\frac{x^4}{n^4}+\frac{1}{3!}\frac{x^6}{n^6}- \, ... \, .$$
Obviously
$$\frac{1+x^2}{n^2} \geq \frac{x^2}{n^2},$$
so if I can show
$$-\frac{1}{2!}\frac{x^4}{n^4}+\frac{1}{3!}\frac{x^6}{n^6}- \, ... <0,$$
then I'm done. But I'm stuck here, and also wondering if there's an even simpler way.
| Set $y=x^2/n^2$. Then you want to show that
$$
\frac{1}{n^2}+y\ge 1-e^{-y}
$$
Note that $y\ge0$. A standard process is to consider
$$
f(y)=\frac{1}{n^2}+y-1+e^{-y}
$$
and note that $f(0)=1/n^2>0$. Also
$$
f'(y)=1-e^{-y}=\frac{e^y-1}{e^y}>0
$$
for $y>0$. Therefore the function $f$ is strictly increasing over $[0,\infty)$ and so
$$
f(y)>0
$$
for $y\ge0$.
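A quick grid check of the positivity of $f(y)=\frac{1}{n^2}+y-1+e^{-y}$ for $y\ge0$ (the grid and sample values of $n$ are arbitrary):

```python
import math

def f(y, n):
    """f(y) = 1/n^2 + y - 1 + exp(-y); the claim is f(y) > 0 for y >= 0."""
    return 1 / n ** 2 + y - 1 + math.exp(-y)

# scan y in [0, 20] for several n; the minimum should stay positive
worst = min(f(k * 0.01, n) for n in (1, 2, 10, 100) for k in range(2001))
```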
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is there a surface on which a hexagon can have all right angles? So I was watching a video that features astronomer and topologist Cliff Stoll talking about how figures that aren't quadrilaterals can have all their angles equal 90 degrees on different surfaces. For example, on a sphere, you can create a triangle that has all of its angles equal $90^\circ$. On a pseudosphere, you can create a pentagon that has all of its angles equal $90^\circ$. Now, here's my question.
Is there a surface where a hexagon with this property is possible?
| You would need a surface of negative curvature.
It is best to use a hyperbolic plane for this, where you can easily fit any regular n-gon with given angles as long as the sum of its external angles is greater than 360 degrees. The problem is that the hyperbolic plane does not fit in Euclidean space.
The pseudosphere is a small fragment of the hyperbolic plane. You can draw a right-angled hexagon on the pseudosphere only if you allow it to wrap over itself. (Edit: actually I am not completely sure about this; see here, you get a pseudosphere by cutting the part covered with white dots; it appears that a hexagon is slightly larger than the area covered by the pseudosphere, but I am not sure. Should be possible to prove.)
You can also draw it on a Dini's surface -- that is basically an unrolled pseudosphere where you have several layers, and thus you avoid the intersection problem. But it would be hard to see anything because it is rolled very tightly. See here.
Less smooth, but probably the best way would be to use something similar to a hyperbolic crochet. See our computer simulation (arrow keys to rotate).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 3,
"answer_id": 0
} |
Show that $\int_{0}^{2\pi} \cos^2(x) dx = \int_{0}^{2\pi} \sin^2(x) dx$ I'm trying to follow the argument in the image below, which aims to show that: $\int_0^{2\pi} \cos^2(x) dx = \int_0^{2\pi} \sin^2(x) dx$.
I believe this on an intuitive level, and I understand that it uses the periodicity of the sine and cosine functions to make the point. But, I'm stuck because the argument seems to only show that $\int_{x=0}^{x=2\pi} \cos^2(x)dx = \int_{u=0}^{u=2\pi} \sin^2(u) du$, and the variable $u$ is not the same as $x$.
Would you be able to clarify this argument?
| There are two ways to check the validity of the claim without changing the variable $x$:
1) Consider their difference:
$$\int_0^{2\pi} \cos^2(x) dx - \int_0^{2\pi} \sin^2(x) dx = \int_0^{2\pi} \cos 2xdx=-\frac12\sin 2x|_0^{2\pi}=0.$$
2) Use the half angle formula:
$$\begin{align}\int_0^{2\pi} \cos^2(x) dx &= \int_0^{2\pi} \frac{1+\cos 2x}{2} dx=\\
&=\int_0^{2\pi} \left(\frac{1-\cos 2x}{2}+\cos 2x\right) dx=\\
&=\int_0^{2\pi} \frac{1-\cos 2x}{2}dx+\underbrace{\int_0^{2\pi} \cos 2xdx}_{=0}=\\
&=\int_0^{2\pi} \sin^2(x) dx. \end{align}$$
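Both computations are easy to confirm numerically; a quick midpoint-rule check (my own, not part of the argument):

```python
import math

def integral(f, lo, hi, n=10_000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

cos2 = integral(lambda x: math.cos(x) ** 2, 0.0, 2.0 * math.pi)
sin2 = integral(lambda x: math.sin(x) ** 2, 0.0, 2.0 * math.pi)
# both integrals are pi, so their difference vanishes
```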
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Proof explanation: Calculate a spectrum of a pair of commuting operators According to the following paper of Taylor:
J. L. Taylor, A joint spectrum for several commuting operators, J. Functional Anal. 6(1970), 172-191.
we have
Let $A= \begin{pmatrix}0&1\\1&0\end{pmatrix}$ and $I= \begin{pmatrix}1&0\\0&1\end{pmatrix}$. By this answer, the Harte spectrum of $(I,A)$ is equal to
$$\sigma_H(I,A)=\{(1,1);(1,-1)\}.$$
I want to understand why the taylor spectrum of $(I,A)$ which is denoted $\sigma_T(I,A)$ is equal also to $\{(1,1), (1,-1)\}$.
Proof: Let $R_X(\lambda) = (X-\lambda)^{-1}$ be the resolvent of $X$. You have the identities
\begin{gather*}
\begin{bmatrix} I-\lambda & A-\mu \end{bmatrix}
\begin{bmatrix} R_I(\lambda) \\ 0 \end{bmatrix} = I, \\
\begin{bmatrix} R_I(\lambda) \\ 0 \end{bmatrix}
\begin{bmatrix} I-\lambda & A-\mu \end{bmatrix}
+ \begin{bmatrix} -(A-\mu) \\ I-\lambda \end{bmatrix}
\begin{bmatrix} 0 & R_I(\lambda) \end{bmatrix}
= \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix} , \\
\begin{bmatrix} 0 & R_I(\lambda) \end{bmatrix}
\begin{bmatrix} -(A-\mu) \\ I-\lambda \end{bmatrix} = I ,
\end{gather*}
whenever the resolvent $R_I(\lambda)$ exists. These identities (in homological algebra, they are known as a contracting homotopy for this complex) imply that, whenever $\lambda$ is not in the spectrum of $I$ (namely, when $R_I(\lambda)$ exists), Taylor's Koszul complex is exact and hence the corresponding value of $(\lambda,\mu)$ does not belong to $\sigma_T(I,A)$. We can write similar formulas, but using $R_A(\mu)$ instead. Hence, we have reduced the calculation to $\sigma_T(I,A) \subseteq \{ (1,\mathbb{C}) \} \cap \{ (\mathbb{C},1), (\mathbb{C},-1) \} = \{ (1,1), (1,-1) \}$. Now it's just a matter of checking that for these values of $(\lambda,\mu)$ the Koszul complex really does fail to be exact, which is easy to see from the known common eigenvectors of $I$ and $A$.
| Chose a basis so that $A\equiv\begin{pmatrix}1&0\\0&-1\end{pmatrix}$. Now note that
$$(I-1)\oplus (A-1)\equiv0\oplus\begin{pmatrix}0&0\\0&-2\end{pmatrix},\qquad (I-1)\oplus (A+1)\equiv0\oplus\begin{pmatrix}2&0\\0&0\end{pmatrix}$$
both fail to be injective maps $\Bbb C^2\to\Bbb C^2\oplus \Bbb C^2$. For that reason you achieve
$$\{(1,1),(1,-1)\}=\sigma(I)\times\sigma(A)\subset\sigma_T(I,A).$$
Your question contains the proof that $\sigma(a_1)\times\sigma(a_2)\supset \sigma_T(a_1,a_2)$ in general. You can generalise this specific example to see $\sigma_T(I,a)=\{1\}\times\sigma(a)$, provided that $a$ is bounded.
(Here you need to use the theorem that if $a$ is a bijective linear map between Banach spaces it is an isomorphism.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Given the subspace of $L^2$ made by constant functions, characterize its orthogonal complement
Let $X=\{v\in L^2([-1,1]):v \text{ is constant a.e.}\}$ be a subspace of $L^2([-1,1])$, characterize $X^{\perp}$.
I don't really know what characterize means, I know that: $X^{\perp}=\{f\in L^2([-1,1]):\langle f,v \rangle=0,\ \forall v\in X\}$.
$$\langle f,v \rangle = \int^1_{-1}fv\ dx = v\int^1_{-1}f,\quad \text{since $v$ is constant.}$$
$$v\int^1_{-1}f=0\quad \text{for every $v\in X$ iff}\quad \int^1_{-1}f=0.$$
So $X^{\perp}=\{f\in L^2([-1,1]):\int^1_{-1}f=0\}$.
Is this enough to characterize the set $X^{\perp}$ or I need to say something on the orthogonal projections of functions in $L^2([-1,1])$ on $X$ too?
| While the word "characterize" is somewhat ambiguous, since the characterization of a set may be given in many ways (enumeration of elements, "in words", via those satisfying a given proposition, or a union/intersection of some described sets), I am sure whoever asked this question (person or text) will have expected the answer that you have given, i.e. those functions with integral zero.
Which also means that there is no need to say something on the orthogonal projections of $L^2[-1,1]$ on $X$ (although you may want to think about this: it is not too difficult either).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3019926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$f$ is T periodic and $f(x) + f'(x) \ge 0 \Rightarrow f(x) \ge 0$
Let $f: \Bbb R \to \Bbb R$ be a function such that $f'(x)$ exists and is continuous over $\Bbb R$. Moreover, let there be a $T > 0$ such that $f(x + T) = f(x)$ for all $x \in \Bbb R$ and let $f(x) + f'(x)\ge 0$ for all $x \in \Bbb R$.
Show that $f(x) \ge 0$ for all $x \in \Bbb R$.
My attempt:
$f(x) \ge 0 \iff f(x) \ge f'(x) - f'(x) \iff f(x) + f'(x) \ge f'(x)$.
Thus, it is enough to show that $0 \ge f'(x)$.
$\iff 0 \ge \lim_{h\to0}\frac{f(x + h) - f(x)}{h}$
I do not know how to proceed from here. I know that $f'$ also has a periodicity of T but I do not know how to use that here.
Am I on the right track? How can I use the periodicity of $f$ to solve the problem?
| Suppose, for contradiction, that $f$ takes a negative value somewhere.
Since $f(x)=0 \Rightarrow f'(x) \ge 0$, $f$ can cross the X-axis only from below to above; hence it crosses at most once, and periodicity means that it cannot cross at all.
So $f$ would have to be non-positive everywhere; but then $f' \ge -f \ge 0$, so $f$ is monotone increasing. Again this contradicts periodicity, unless $f$ is constant, and a constant $f \le 0$ with $f+f' \ge 0$ forces $f = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 6
} |
Addition of Logarithmic Equation
According to the basic rules of $\log$, I'm solving both $\log$ terms: for the first, the base is $3$ and $N$ is $9$, so the exponent is calculated as $2$, and the same for the other term. But I'm confused by this '$x$'. Adding both $\log$ terms according to my logic results in $4$. But I know I'm doing something wrong here; how should I treat this $x$? Is B the correct answer?
| HINT
Recall that
$$\log_3 (9\cdot x)=\log_3 9+\log_3 x$$
then what about $\log_2 (4\cdot x)$?
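A numeric spot-check of the product rule for both bases (the value $x=7$ is arbitrary; `math.log(v, base)` is Python's two-argument logarithm):

```python
import math

x = 7.0
# |log_3(9x) - (log_3 9 + log_3 x)| and the same in base 2
base3_gap = abs(math.log(9 * x, 3) - (math.log(9, 3) + math.log(x, 3)))
base2_gap = abs(math.log(4 * x, 2) - (math.log(4, 2) + math.log(x, 2)))
```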
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Identities similar to $\arctan(x)+\arctan(1/x)=\pi/2$ The $\arctan(x)+\arctan(1/x)=\pi/2$ (for $x>0$) identity can be solved by taking the derivative of the left hand side, showing it is $0$, and then plugging in, say, $x=1$ to get its constant value $\pi/2$.
Are there any other (nontrivial) identities which can be solved similarly? I am hoping for something a 1st semester Calculus student could solve... so not too difficult please!
| The proof for $x>0$ can be obtained as follows
*
*let $\alpha=\arctan x$, so $\alpha\in\left(0,\frac{\pi}2\right)$ when $x>0$
then
$$\tan\left(\frac{\pi}2-\alpha\right)=\frac1{\tan \alpha}=\frac1x \implies \frac{\pi}2-\alpha =\arctan \frac1x$$
Another similar identity is
$$\arcsin x + \arccos x=\frac{\pi}2 \quad \forall x\in[-1,1]$$
which can be proved by $\cos\left(\frac{\pi}2-\alpha\right)=\sin \alpha$.
Using derivatives we can prove some basic important inequalities as
*
*$\tan x\ge x\quad x\ge 0$
*$\sin x\le x\quad x\ge 0$
*$\sin x\ge x-x^3/6\quad x\ge 0$
*$\cos x \ge 1-\frac12 x^2$
and so on.
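All of these can be spot-checked numerically; a quick sketch for the two complementary-angle identities (the sample points are arbitrary):

```python
import math

xs = [0.1, 0.7, 1.0, 3.5, 42.0]
arctan_gap = max(abs(math.atan(x) + math.atan(1 / x) - math.pi / 2) for x in xs)

ys = [-1.0, -0.3, 0.0, 0.5, 1.0]
arcsin_gap = max(abs(math.asin(y) + math.acos(y) - math.pi / 2) for y in ys)
```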
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 4
} |
How to convert a straight line into polar coordinates? The straight line $y=mx+b$ can be expressed in polar coordinates as:
$$\rho=x\cos(\theta) + y\sin(\theta)$$
Where $(\rho,\theta)$ defines a vector from the origin to the nearest point on the line. Thus the Hough transform of a straight line in $x-y$ space is a point in $(\rho,\theta)$ space.
Find $(\rho, \theta)$ for the following straight line $y=-x+5$.
I'm trying to go through a simple exercise for the Hough transform where I have a simple straight line in the form of $\;y=-x+5\;$ and I want to obtain polar coordinates $\;(\rho,\theta)$. I know polar coordinates can be represented by $\;\rho = x⋅\cos(\theta) + y⋅\sin(\theta).$
What are the steps I'm supposed to take to solve this problem? I have searched around and couldn't really find any examples I can follow in this exact format.
| $$\frac{\left|c\right|}{\sqrt{a^2+b^2}}$$
Gives you the normal distance from the origin to the straight line which is $\rho$
So if you multiply $$ax+by=c$$
by $\rho/c$ you get
$$mx+ny=\rho$$ where
$m=\cos(\theta)$ and $n=\sin(\theta)$ and getting $\theta$ given this should be easy.
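Applying this recipe to $y=-x+5$, i.e. $x+y=5$ with $a=b=1$, $c=5$ (a sketch; the helper name is mine):

```python
import math

def line_to_polar(a, b, c):
    """Convert a*x + b*y = c to Hough (rho, theta) with
    rho = x*cos(theta) + y*sin(theta) and rho >= 0."""
    if c < 0:                       # normalise so that rho is non-negative
        a, b, c = -a, -b, -c
    norm = math.hypot(a, b)
    rho = c / norm                  # normal distance |c| / sqrt(a^2 + b^2)
    theta = math.atan2(b / norm, a / norm)
    return rho, theta

# y = -x + 5  <=>  1*x + 1*y = 5
rho, theta = line_to_polar(1.0, 1.0, 5.0)   # rho = 5/sqrt(2), theta = pi/4
```

So the Hough-space point is $(\rho,\theta)=\left(\tfrac{5}{\sqrt2},\tfrac{\pi}{4}\right)$.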
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
What's the difference between $f(x)=\sqrt{x^2+9}$ and $k(x^2+9)=\sqrt{x^2+9}$? Let's say we 've got a function $f(x)=\sqrt{x^2+9}$, which is a composite function. $f(x)=\sqrt{g(x)}$ and $g(x)=x^2+9$.
When we have a function like $h(x)=x$, we are allowed to set $x$ to $x+9$ and have $h(x+9)=x+9$.
So why do we need $g(x)$ and can't just set $x=x^2+9$, which with a function like $k(x)=\sqrt{x}$ leads to $k(x^2+9)=\sqrt{x^2+9}$ (same as f(x) above)?
Where's the difference between these two ($f(x),k(x)$)? Is $x^2+9$ even a valid argument for $k(x)$?
| A function is a mapping from elements of a domain set to elements of a range set (also called the "codomain"). Let's consider your example of $k(x) = \sqrt{x}$. This actually is an incomplete definition of a function; you also need to specify the domain and range. So suppose $k$ takes nonnegative real numbers to nonnegative real numbers.
Then when we take the function $g(x) = x^2+9$, we again have to specify the domain and range. So let's say $g$ takes real numbers to real numbers greater than or equal to 9.
Now consider the composition $f(x) = k(g(x))$. The equation is $\sqrt{x^2+9}$. But now the domain and range have changed a bit from the original $k$ or $g$. In particular, now the domain is all real numbers, and the range is real numbers greater than or equal to 3. So there is a subtle difference between the functions $f(x)$ and $k(x)$. It's important to keep the domain and range/codomain in mind whenever you do function composition.
Now to address your confusion regarding how we are "allowed" to set $x = x^2+9$ and write $k(x^2+9) = \sqrt{x^2+9}$. Again, think about a function as taking inputs to outputs. So when you write $k(x^2+9) = \sqrt{x^2+9}$, what you're saying is that given a number $x$, $k$ maps the number $x^2+9$ to $\sqrt{x^2+9}$. This is really just a variable substitution. There is nothing wrong with writing down $k(x^2+9)$, or $k(e^x)$; just like with $k(2)$ or $k(\pi)$, it represents passing some value into the function $k$.
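In code terms (a tiny illustration with made-up names), composing the functions is exactly this substitution:

```python
import math

def g(x):
    return x ** 2 + 9

def k(t):                 # domain: t >= 0
    return math.sqrt(t)

def f(x):                 # the composition k(g(x)) = sqrt(x**2 + 9)
    return k(g(x))

value = f(4)              # k(25) -> 5.0
```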
Hopefully that addresses your questions, and let me know in the comments if I can clarify further!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Why does $z^n-1=0$ have at most $n$ solutions? $z\in\mathbb{C}$ I know that there is a theorem which says that a polynomial of degree $n$ has at most $n$ solutions; however, we have not proved it yet in our class. Is there maybe another explanation for this special case?
| There's an explanation if you represent them in polar co-ordinates and consider that multiplying two complex numbers involves adding their arguments (ie angles) and multiplying their magnitudes (distances from the origin). It turns out they need to have $1$ as their magnitude and be multiples of $\frac{360°}{n}$ apart on the resulting circle, so only $n$ of them will fit.
(This is effectively the same as gimusi's answer.)
Edit: In fact, there are exactly $n$ of them, and they're equally spaced round the circle. The reason should be obvious if you pick one of the candidate angles and multiply it by $n$.
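A short check (with an arbitrary $n$) that the $n$ evenly-spaced points on the unit circle really solve $z^n=1$:

```python
import cmath
import math

def roots_of_unity(n):
    """The n solutions of z**n = 1, spaced 360/n degrees apart on the unit circle."""
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

n = 7
roots = roots_of_unity(n)
residual = max(abs(z ** n - 1) for z in roots)   # how far z**n is from 1
magnitudes = [abs(z) for z in roots]             # all should be 1
```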
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
} |
Prove that $G$ is an open interval for two-valued continuous function $f$
Suppose $G\subset \mathbb R$ is a non-empty open set such that the function $f:G \rightarrow \{0,1\}$ is a two-valued function and is continuous. Show that any two-valued function on $G$ is a constant if and only if the set $G$ is an open interval.
I am unsure how to handle the structure of the open set $G$ and have no idea how to start.
| Theorem: A metric space $X$ is connected if and only if any continuous function $f:X\to \{0,1\}$ is constant.
Proof: Suppose $X$ is connected and $f:X\to \{0,1\}$ is continuous. If $f$ is not a constant function, then $f$ is onto. Let $A=f^{-1}(0)$ and $B=f^{-1}(1)$. Then $A\cup B=X$, and $A,B\neq \emptyset$. Also note that both are proper subsets of $X$ and are open and closed in $X$, a contradiction.
Suppose $X$ is not connected. Let $A$ and $B$ be the disconnection. Then define $f:X\to \{0,1\}$ such that $$f(x)=\begin{cases}
0, &\text{if $x\in A$}\\
1,&\text{if $x\in B$}
\end{cases}$$
$f$ is a non-constant continuous function(verify).
Another useful theorem is
A subset $I$ of $\Bbb{R}$ is connected if and only if $I$ is an
interval.
You can find a proof here.
I hope now you can complete your answer on your own.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3020937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to show that if $f^2(x)$ is a uniformly continuous function then $f$ is uniformly continuous $f:\mathbb{R}\to [0,\infty)$ is a function such that $f^2(x)$ is uniformly continuous on $\mathbb{R}$; I have to show that $f$ is uniformly continuous.
My attempt :
$|f^2(x)-f^2(y)|<\epsilon$ for $|x-y|<\delta$
then
$|f(x)-f(y)|<\epsilon/|f(x)+f(y)|$ for $|x-y|<\delta$
My problem is how to control the above difference, as $f$ may be $0$ at both $x$ and $y$.
So how do I show the above is uniformly continuous?
Any help will be appreciated
| I think the result holds for all $f\in C(\mathbb{R})$. You could use the intermediate value theorem: if $f(x)f(y)<0$ then there is some $z$ between $x$ and $y$ such that $f(z)=0.$ The details are omitted.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
$1,2, \ldots, n$ are permuted. None of the numbers $1,2,3$ are adjacent and $n>4$. The numbers $1, 2, \ldots, n$ are permuted. How many different permutations exist such that none of the numbers $1, 2, 3$ are adjacent when $n>4$?
Solution:
$4,5, \ldots, n$ can be shuffled in $(n-3)!$ ways and $3!$ ways to arrange $1,2,3$. There are $n−2$ slots that are separated by $n−3$ shuffled numbers, and if we insert each of $1,2,3$ into a different slot, they cannot be adjacent. There are $\binom{n - 2}{3}$ ways to do this.
Why is it $\binom{n-2}{3}$ ways? I don't get the explanation.
| We can count the cases
*
*the permutations are n!
from which we need to eliminate the ways
*
*we can arrange $1,2,3$ adjacent that is: $3!(n-2)(n-3)!$
*we can arrange exactly a pair adjacent that is: $3!(n-3)![(n-3)(n-4)+2(n-3)]=3!(n-3)!(n-3)(n-2)$
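Both counting arguments can be checked against brute force for small $n$; the function names below are mine.

```python
from itertools import permutations
from math import comb, factorial

def brute_force(n):
    """Count permutations of 1..n in which no two of {1, 2, 3} are adjacent."""
    forbidden = {1, 2, 3}
    return sum(
        all(not ({p[i], p[i + 1]} <= forbidden) for i in range(n - 1))
        for p in permutations(range(1, n + 1))
    )

def slot_formula(n):
    # (n-3)! shuffles of 4..n, C(n-2, 3) choices of slots, 3! orders of 1, 2, 3
    return factorial(n - 3) * comb(n - 2, 3) * factorial(3)

def inclusion_exclusion(n):
    # n! minus "all three adjacent" minus "exactly one pair adjacent"
    all_three = factorial(3) * (n - 2) * factorial(n - 3)
    one_pair = factorial(3) * factorial(n - 3) * (n - 3) * (n - 2)
    return factorial(n) - all_three - one_pair
```

All three agree, e.g. both formulas give $12$ for $n=5$.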
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Proper condition on the dihedral group Is there a theorem giving a condition on $n\in\mathbb N$ that says when the dihedral group, $D_{n}$, has non-cyclic subgroups?
After spending some time figuring a condition I tried to find some similar thread but didn't find any.
| Dihedral group $D_n = \langle r, s \mid r^n = s^2 = 1,\ rs = sr^{-1}\rangle$, for all $n \ge3$.
Since $D_n \le D_n$ and $D_n$ itself is non-cyclic, the statement is trivially true for all $n\ge3$.
But if we want a proper non-cyclic subgroup, then we have to consider subgroups of the form $\langle r^a,s\rangle$; for a proper divisor $a$ of $n$ with $1<a<n$, this is a proper subgroup isomorphic to $D_{n/a}$, hence non-cyclic.
Hence when $n$ is composite we get a required subgroup.
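For small $n$ this claim can be confirmed by brute force; the sketch below (my own code, not part of the answer) models $D_n$ as pairs (rotation, flip) and relies on the standard fact that every subgroup of $D_n$ is generated by at most two elements, so scanning pairs of generators suffices.

```python
from itertools import product

def mul(a, b, n):
    """Compose two elements (k, f) of D_n, where k is a rotation and f a flip."""
    (k1, f1), (k2, f2) = a, b
    return ((k1 + (k2 if f1 == 0 else -k2)) % n, f1 ^ f2)

def generated(gens, n):
    """Subgroup generated by gens (a finite set closed under the operation)."""
    H = {(0, 0)} | set(gens)
    while True:
        new = {mul(a, b, n) for a in H for b in H} - H
        if not new:
            return H
        H |= new

def is_cyclic(H, n):
    return any(generated([g], n) == H for g in H)

def has_proper_noncyclic_subgroup(n):
    G = [(k, f) for k in range(n) for f in (0, 1)]
    return any(
        len(H) < 2 * n and not is_cyclic(H, n)
        for a, b in product(G, repeat=2)
        for H in [generated([a, b], n)]
    )
```

For example, $n=4$ yields the Klein four subgroup $\langle r^2,s\rangle$, while for prime $n$ every proper subgroup found is cyclic.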
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A finite field $F$ such that $F[\sqrt{2}]$ and $F[\sqrt{3}]$ are not isomorphic (as fields) Assuming $F$ is a finite field such that $F[\sqrt{2}]$ and $F[\sqrt{3}]$ are both fields, I am trying to prove that they must both be isomorphic. Or is there a counterexample? Is there a counterexample where $F$ has prime order? I have looked hard for one and cannot find any.
For the avoidance of doubt $F[\sqrt{2}] := F[X]/(X^2-2)$, so assume that $X^2 -2$ and $X^2-3$ are irreducible over $F$.
| If neither $2$ nor $3$ are squares in the finite field $F$, then both $F[\sqrt{2}]$ and $F[\sqrt{3}]$ are extensions of $F$ having degree $2$, so both have $|F|^2$ elements. Hence they're isomorphic: if $p$ is a prime, then any two fields of cardinality $p^n$ ($n>0$) are isomorphic, because both are the splitting field of $x^{p^n}-x$.
The only case in which they aren't isomorphic, is when one among $2$ and $3$ is a square and the other isn't.
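For a concrete instance, take $p=5$: neither $2$ nor $3$ is a square mod $5$, and one can locate an explicit square root of $2$ inside $\mathbb F_5[\sqrt 3]$, which is the substance of the isomorphism. A small sketch (the naming is mine):

```python
p = 5

def is_square(a):
    return any(b * b % p == a % p for b in range(p))

def square(a, b):
    """Square of a + b*sqrt(3) in F_p[sqrt(3)]: (a^2 + 3b^2) + (2ab)*sqrt(3)."""
    return ((a * a + 3 * b * b) % p, (2 * a * b) % p)

# Elements of F_5[sqrt(3)] whose square is 2, i.e. candidate images of sqrt(2).
roots_of_two = [(a, b) for a in range(p) for b in range(p) if square(a, b) == (2, 0)]
```

Since $(2\sqrt3)^2 = 12 \equiv 2 \pmod 5$, sending $\sqrt2\mapsto 2\sqrt3$ extends to an $\mathbb F_5$-isomorphism $\mathbb F_5[\sqrt2]\to\mathbb F_5[\sqrt3]$.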
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Obtain the leading order uniform approximation of the solution Obtain the leading order uniform approximation of the solution to $ \epsilon y′′-x^2y′-y=0$.
The boundary conditions are $y(0)=y(1)=1$.
Since $a(x)<0$ the boundary layer is at $x=1$.
The outer solution will be of the form $y(x; \epsilon) = y_0(x) + \epsilon y_1(x) + ...$
So the leading order problem is $-x^2y_0' - y_0 =0$ and $y_0(0) = 1$.
Hence $y_0 = ke^{1/x}$.
Here's where the issue arises: to work out $k$ I get $y_0(0)=ke^{1/0}=1$ using the boundary condition at $x=0$, but $e^{1/0}$ is obviously undefined. How can I proceed form here?
| This is a complicated problem, but if you're careful it does work out. Firstly, let's write out the equation assuming there is a boundary layer of width $\epsilon^\alpha$ at $x_0$, so let $X=(x-x_0)/\epsilon^\alpha$ and $Y(X)=y(x)$. We get
$$ \epsilon^{1-2\alpha}Y_{XX}-\epsilon^{-\alpha}(\epsilon^\alpha X+x_0)^2Y_X-Y=0,$$
expanding
$$ \epsilon^{1-2\alpha}Y_{XX}-\epsilon^{\alpha}X^2Y_X-2Xx_0Y_X-\epsilon^{-\alpha}x_0^2Y_X-Y=0.$$
Now for dominant balance. If $x_0\neq0$, then we would balance $\epsilon^{1-2\alpha}$ with $\epsilon^{-\alpha}$ (you can check that this is the only dominant balance) to give $\alpha=1$. You can leave $x_0$ unknown and determine that it must be 1, but since you already know that we'll use the fact and write our leading-order inner equation at $x=1$ as
$$Y_{XX}-Y_X=0,$$
with boundary condition $Y(0)=1$ (since $x=1$ corresponds to $X=0$). The solution is $Y(X)=A+Be^X$ where $A+B=1$.
If $x_0=0$ then we have a different balance, with $\alpha=1/2$. Let $\mathsf Y(\chi)=y(x)$ with $\chi=x/\sqrt{\epsilon}$ and the leading-order inner equation at $x=0$ is
$$\mathsf Y_{\chi\chi}-\mathsf Y=0,$$
with $\mathsf Y(0)=1$. The solution is $\mathsf Y=Ce^\chi+De^{-\chi}$, and since our solution must be bounded as we exit the boundary layer, we need $C=0$, and the boundary condition gives $D=1$.
The leading order outer solution is, as you found, $y=ke^{1/x}$. You can say here that $k=0$ since the solution must be bounded as you enter each boundary layer, or do it through asymptotic matching.
To fix the remaining constants $k$ and $A$ (or $B$), we need to match all the parts of the solution. To do this, we need the outer solution to be bounded, so we need $k=0$. Alternatively, consider matching the outer solution to the inner solution at $x=0$,
$$\lim_{\chi\rightarrow\infty}e^{-\chi}=\lim_{x\rightarrow0}ke^{1/x}\Rightarrow0=\lim_{x\rightarrow0}ke^{1/x}\Rightarrow k=0.$$ Then the match between the outer layer and the inner layer at $x=1$ gives
$$\lim_{x\rightarrow1}0=\lim_{X\rightarrow-\infty}(A+Be^X)\Rightarrow A=0$$
and hence $B=1$.
So the inner solution at $x=0$ is $\mathsf Y(\chi)=e^{-\chi}$, the outer solution is $y(x)=0$ and the inner solution at $x=1$ is $Y(X)=e^X$. We can find a uniformly valid approximation by adding the three equations (writing them all in terms of $x$) and subtracting off the matching constants (all zero) as,
$$y_{unif}(x)=e^{-x/\sqrt{\epsilon}}+e^{(x-1)/\epsilon}.$$
This will not be particularly accurate, it's an $O(\sqrt{\epsilon})$ approximation, but I did verify that numerically. I've made a plot of the solution for $\epsilon=0.01$ first and $\epsilon=0.02$ second.
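One way to repeat that numerical verification is to discretise the boundary-value problem with central differences and compare against the composite formula. The grid size and tolerances below are my own choices, and agreement is only expected to $O(\sqrt\epsilon)$.

```python
import numpy as np

eps = 0.01
M = 500
x = np.linspace(0.0, 1.0, M + 1)
h = x[1] - x[0]

# Central-difference system for eps*y'' - x^2*y' - y = 0, y(0) = y(1) = 1.
n = M - 1
A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(1, M):
    lo = eps / h**2 + x[i] ** 2 / (2 * h)   # multiplies y_{i-1}
    di = -2 * eps / h**2 - 1.0              # multiplies y_i
    up = eps / h**2 - x[i] ** 2 / (2 * h)   # multiplies y_{i+1}
    r = i - 1
    A[r, r] = di
    if r > 0:
        A[r, r - 1] = lo
    else:
        rhs[r] -= lo                        # boundary value y(0) = 1
    if r < n - 1:
        A[r, r + 1] = up
    else:
        rhs[r] -= up                        # boundary value y(1) = 1
y_num = np.empty(M + 1)
y_num[0] = y_num[-1] = 1.0
y_num[1:-1] = np.linalg.solve(A, rhs)

# Composite approximation: decaying layer at x = 0 plus the layer at x = 1.
y_unif = np.exp(-x / np.sqrt(eps)) + np.exp((x - 1) / eps)
```

The numerical solution is essentially zero in the outer region and tracks both layers.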
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Values of $x$ satisfying $\sin x\cdot\cos^3 x>\sin^3x\cdot\cos x$
For what values of $x$ between $0$ and $\pi$ does the inequality $\sin x\cdot\cos^3 x>\sin^3x\cdot\cos x$ hold?
My Attempt
$$
\sin x\cos x\cdot(\cos^2x-\sin^2x)=\frac{1}{2}\cdot\sin2x\cdot\cos2x=\frac{1}{4}\cdot\sin4x>0\implies\sin4x>0\\
x\in(0,\pi)\implies4x\in(0,4\pi)\\
4x\in(0,\pi)\cup(2\pi,3\pi)\implies x\in\Big(0,\frac{\pi}{4}\Big)\cup\Big(\frac{\pi}{2},\frac{3\pi}{4}\Big)
$$
But, my reference gives the solution, $x\in\Big(0,\dfrac{\pi}{4}\Big)\cup\Big(\dfrac{3\pi}{4},\pi\Big)$, where am I going wrong with my attempt?
| As an alternative for a full solution we can consider two cases
*
*$\sin x \cos x >0$ that is $x\in(0,\pi/2)\cup(\pi,3\pi/2)$
$$\sin x\cdot\cos^3 x>\sin^3x\cdot\cos x \iff\cos^2x>\sin^2x \iff2\sin^2 x<1$$
$$-\frac{\sqrt 2}2<\sin x<0 \,\lor\, 0<\sin x<\frac{\sqrt 2}2 \iff \color{red}{x\in(0,\pi/4)}\cup(\pi,5\pi/4)$$
*
*$\sin x \cos x <0$ that is $x\in(\pi/2,\pi)\cup(3\pi/2,2\pi)$
$$\sin x\cdot\cos^3 x>\sin^3x\cdot\cos x \iff\cos^2x<\sin^2x \iff2\sin^2 x>1$$
$$-1<\sin x<-\frac{\sqrt 2}2\,\lor\, \frac{\sqrt 2}2<\sin x <1 \iff \color{red}{x\in(\pi/2,3\pi/4)}\cup(3\pi/2,7\pi/4)$$
and then your solution is correct.
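A quick numerical cross-check of the solution set (sampling midpoints so no sample lands exactly on a boundary angle; this sketch is mine):

```python
import math

def holds(x):
    return math.sin(x) * math.cos(x) ** 3 > math.sin(x) ** 3 * math.cos(x)

def in_claimed_set(x):
    return 0 < x < math.pi / 4 or math.pi / 2 < x < 3 * math.pi / 4

# Midpoints of a uniform partition of (0, pi); none coincides with pi/4, pi/2, 3pi/4.
samples = [(k + 0.5) * math.pi / 1000 for k in range(1000)]
mismatches = [x for x in samples if holds(x) != in_claimed_set(x)]
```

The inequality holds at a sample point exactly when the point lies in $(0,\pi/4)\cup(\pi/2,3\pi/4)$.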
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
About closed graph of an unbounded operator I am working on problems related to the closed graph of an unbounded operator. There is a proposition:
Let $X,Y$ be Banach spaces and let $A:\mathrm{dom}(A)\to Y$ be linear and defined on a linear subspace $\mathrm{dom}(A)\subset X$. Prove that the graph of $A$ is a closed subspace of $X\times Y$ if and only if $\mathrm{dom}(A)$ is Banach with respect to the graph norm.
I finished one direction. Suppose $\mathrm{graph}(A)$ is closed. We take any Cauchy sequence $x_n$ in $\mathrm{dom}(A)$, and since the norm is graph norm, we know $x_n$ and $Ax_n$ will both be Cauchy. Then we have a Cauchy sequence $(x_n,Ax_n)$ in the graph, so the pair converges to a certain $(x_0,y_0)$ since the graph is closed. Therefore $x_0\in\mathrm{dom}(A)$, which means that $\mathrm{dom}(A)$ is Banach.
However, I encountered some trouble with the other direction. Suppose $\operatorname{dom}(A)$ is Banach with respect to the graph norm. If we take a Cauchy sequence $(x_n,Ax_n)$ in $\mathrm{graph}(A)$, since $X,Y$ are both Banach, it converges to a pair $(x_0,y_0)\in X\times Y$. Then we know $x_n$ converges to $x_0$ in the graph norm and so $x_0\in\mathrm{dom}(A)$, but this only tells us that $(x_0,Ax_0)\in X\times Y$. We still don't know whether $Ax_0=y_0$.
| Let $(x_n, Ax_n)$ be a Cauchy sequence in $\operatorname{graph}(A)$. Then, by definition of the graph norm, $(x_n)_n$ is a Cauchy sequence in $\operatorname{dom}(A)$.
Since $\operatorname{dom}(A)$ is a Banach space w.r.t. the graph norm, $(x_n)_n$ converges to some $x \in \operatorname{dom}(A)$ w.r.t. the graph norm. This precisely means $(x_n, Ax_n) \to (x, Ax)$ in $X \times Y$. Hence, the sequence $(x_n, Ax_n)$ converges in $\operatorname{graph}(A)$ so $\operatorname{graph}(A)$ is a Banach space. In particular, it is a closed subspace of $X \times Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3021781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proving the convergence of the sequence defined by $x_1=3$ and $x_{n+1}=\frac{1}{4-x_n}$ Consider the sequence defined by
$$x_1=3 \quad\text{and}\quad x_{n+1}=\dfrac{1}{4-x_n}$$
I can calculate the limit by assuming it exists and solving a quadratic equation, but first I wanted to prove that the limit exists.
I tried to show the given sequence is decreasing and bounded below by $0$.
I used derivative test as
$$f^\prime(x)=\frac{1}{(4-x)^2}$$
but from this I am not able to conclude.
Also, I tried to show $x_{n+1}-x_n<0$, but that way I also did not succeed.
Please tell me how to approach such problems.
| It can be approached in a graphical manner:
*
*Draw the graph of $y = \frac{1}{4-x}$ to scale while marking the essentials.
*Asymptote at $x=4$; Value at $x = 3$ is $1$.
*Comparing it to the previous value of the sequence would require the plot of $y=x$ on the same axes.
*Mark the intersection as $x=2-\sqrt3$ whereas $x=2+\sqrt3$ is near $x=4$.
Once that is done, notice that starting the sequence from $x=3$ means that the next value is $1$ from the hyperbola, which is well below the straight line. Now to get the next value put $x=1$ and read the next value from the hyperbola, which is again less than $1$ as the straight line shows.
If you follow the pattern, you would tend to reach the intersection $x=2-\sqrt3$ as the gap between both the curves decreases to zero which gives the limit of the sequence as $x=2-\sqrt3$ (the limit only, not one of the terms of the sequence, since these are all rational numbers).
Also, one can thus say that if $x_1 \in (2-\sqrt3,2+\sqrt3)$ then the sequence would be decreasing and would converge to $x=2-\sqrt3$, and that all the terms lie in $(2-\sqrt3,2+\sqrt3)$.
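Iterating the recursion numerically confirms the monotone decrease and the limit $2-\sqrt3$ (the 40 iterations are an arbitrary choice of mine):

```python
import math

terms = [3.0]               # x_1 = 3
for _ in range(40):
    terms.append(1.0 / (4.0 - terms[-1]))   # x_{n+1} = 1 / (4 - x_n)

limit = 2 - math.sqrt(3)    # fixed point of x -> 1/(4-x) below 4
```

The first terms are $3,\ 1,\ 1/3,\ 3/11,\ 11/41,\ldots$, already visibly decreasing toward $2-\sqrt3\approx0.2679$.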
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3022296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Four dimensional cross product of THREE vectors There are many MSE posts about how to define a cross product in $\mathbb{R^4}$. It is impossible to define a cross product of two vectors in $\mathbb{R^4}$, since there are infinitely many directions perpendicular to those two vectors, and we don't know which direction to choose. However, If we are given THREE vectors $A,B,C$, it is possible to find a unique direction perpendicular to this three vectors, if $A,B,C$ are independent. However, finding this perpendicular vector involves solving a system of equations.
So my question is:can we define a Quasi Cross Product $\{A,B,C\}$ on $\mathbb{R^4}$, so that we can find a direction perpendicular to $A,B,C$ without solving a system of equations?
| The short answer is yes. One way is to take the formal determinant
$$\left|\begin{matrix}e_1&e_2&e_3&e_4\\
a_1&a_2&a_3&a_4\\
b_1&b_2&b_3&b_4\\
c_1&c_2&c_3&c_4\\
\end{matrix}\right|$$
where $e_1,\ldots,e_4$ are the standard unit vectors, and $a=\sum a_ie_i$
etc., are the three vectors.
Or you can rephrase this in terms
of exterior powers and the Hodge star operator.
All this works in $n$ dimensions too.
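The formal determinant can be evaluated by cofactor expansion along its first row; a small numpy sketch (the function name is mine) confirms the result is orthogonal to all three inputs:

```python
import numpy as np

def cross4(a, b, c):
    """Cofactor expansion of the formal determinant along the e_i row."""
    M = np.array([a, b, c], dtype=float)        # 3x4, one row per vector
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(4)])

# Standard basis vectors give (up to sign) the remaining basis vector.
d1 = cross4([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0])

# A generic independent triple: the output is orthogonal to all three.
a, b, c = [1, 2, 3, 4], [0, 1, 0, 2], [3, 0, 1, 1]
d2 = cross4(a, b, c)
```

No system of equations is solved explicitly; only $3\times3$ determinants appear.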
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3022394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Lagrange notation: $f^{(0)}(x)$? Using Lagrange notation, is $f^{(0)}(x)=f(x)$? Is this standard notation, or would one have to define $f^{(0)}(x)=f(x)$ first, before using it?
Aside: the context of the question is whether to include the first term within the summation when expressing the Taylor series and hence start at $n=0$, or to write it separately outside the summation and start the summation at $n=1$.
The former has been done here: https://en.wikipedia.org/wiki/Taylor_series#Definition
| A definition of something is always good before using it, as happened in the Wikipedia article:
The derivative of order zero of $f$ is defined to be $f$ itself.
So you can start summation at $n=0$ in Taylor series after refering to this definition. But you can put your mind at rest. Most mathematicians would expect that $f^{(0)}(x)=f(x)$ without definition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3022524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Lower bound bound for the Ramsey number $R_k(3,3,...,3)$ The question is:
Show that $R_k(3,3,...,3)\geq 2^k+1$. The upper bound part of this problem has been proved in the link How to obtain lower and upper bounds for Ramsey number $R_k (3,3,\dots,3)$; however, the lower bound is not shown there step by step, and I want to make my understanding of this problem complete.
| Following the hint in the link: let $n = 2^k$ and consider the complete graph on the set $\{0,1\}^k$ and colour the edge between $(x_1,\ldots,x_k) \neq (y_1,\ldots y_k)$ by the colour $c = \min(i: x_i \neq y_i)\in \{1,\ldots,k\}$.
It's clear we cannot have a triangle of a fixed colour $c$: suppose
$ (x_1,\ldots,x_k),(y_1,\ldots,y_k),(z_1,\ldots,z_k)$ is a triangle (three distinct points) where all three edges have the same colour $c$. This implies that $x_i = y_i = z_i$ for all $i < c$ and $\{x_c,y_c, z_c\}$ would have to be three distinct values in $\{0,1\}$, which is absurd.
So this graph on $2^k$ points has a $k$-colouring without triangle, hence
$R_k(3,\ldots,3) > 2^k$, and this is what you were required to show. No induction needed.
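The construction is easy to verify by machine for a small $k$; here is a sketch for $k=3$ (the names are mine):

```python
from itertools import combinations, product

k = 3
vertices = list(product((0, 1), repeat=k))            # the 2^k = 8 points

def colour(x, y):
    return min(i for i in range(k) if x[i] != y[i])   # first differing coordinate

monochromatic = [
    t for t in combinations(vertices, 3)
    if colour(t[0], t[1]) == colour(t[1], t[2]) == colour(t[0], t[2])
]
```

No monochromatic triangle appears among the $\binom{8}{3}=56$ triples, matching the argument above.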
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3022677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Minimum value of the given function
Minimum value of $$\sqrt{2x^2+2x+1} +\sqrt{2x^2-10x+13}$$ is $\sqrt{\alpha}$ then $\alpha$ is________ .
Attempt
I wrote the expression as the sum of distances from point $A(-1,2)$ and point $B(2,5)$:
$$\sqrt{(x+1)^2 +(x+2-2)^2} +\sqrt{(x-2)^2 + (x+2-5)^2}$$
Hence the point lies on the line $y=x+2$, and we want the minimum sum of distances from it to the two given points.
But from here I am not able to get the value of x and hence $\alpha$. Any suggestions?
| There are a number of ways to do this, including brute force and calculus, but since you already found out the rather nice geometric interpretation as the sum of the distances from the given points, let's do that.
A few things will come in handy here.
*
*The line on which $A$ and $B$ lie is $y=x+3$, which is parallel to your line of interest (i.e. $y=x+2$).
*Both lines have slope $1$.
Here's the general idea: Note that the sum of the distances is minimum along the perpendicular bisector of the line segment $AB$; the actual minimum is at the point of intersection of the bisector and $y=x+3$, but since you have an additional constraint, you find the intersection of the bisector with $y=x+2$, call it $C$.
Note that if you drop perpendiculars to the $x$ and $y$ axes respectively from $A$ and $B$, they intersect at $A'=(0,2)$ and $B'=(2,4)$. You'll notice that their midpoint is $C$. Then $C$ turns out to be $(1,3)$. That gives us that $\alpha=20$.
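A brute-force numerical minimisation over a grid (my own sanity check) confirms the minimiser $x=1$ and $\alpha=20$:

```python
import math

def f(x):
    # both radicands are positive for every real x
    return math.sqrt(2 * x * x + 2 * x + 1) + math.sqrt(2 * x * x - 10 * x + 13)

grid = [i / 10000 for i in range(-50000, 50001)]      # x in [-5, 5], step 1e-4
best = min(grid, key=f)
```

At the minimiser, $f(1)=\sqrt5+\sqrt5=2\sqrt5$, so $f^2=20$.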
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3022822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Finding the minimal number of members I've been working on the following problem
For every issue in the Blue's association, a commission with 10 members (belonging to the Blue's) is formed in order to solve the problem. The only condition is
There can't be two commissions having more than one member in common
The Blue's association has formed this year 40 commissions.
What's the minimal amount of members in the Blue's association?
I've only found out the following
For any commission you can form $\binom{10}{2}=45$ different pairs and none of them can appear in another commission.
Since 40 different commissions are formed, the minimal number of pairs is $45\times 40=1800$.
Denote by $n$ the number of members. Thus $$\binom{n}{2}≥1800\Rightarrow n>60$$
The minimal amount of members has to be 100 or less.
You can observe a distribution for 100 members here
My question:
Is 100 the answer or is there an ever smaller possible amount of members?
If so, how can I prove it?
| Let $i$ denote each member of Blue's association and assume that there are $N$ members in total, that is, $i=1,2,\cdots, N.$ And let $j,k=1,2,\ldots, 40$ denote each of 40 commission. We will show that $N$ is at least $82$.
Consider the set
$$
S=\{(i,j,k)\;|\;1\leq i\leq N, 1\leq j<k\leq 40, i\text{ belongs to }j,k\text{-th commission.}\}.
$$ Let $d_i$ denote the number of commissions that $i$ joined. We will calculate $|S|$ using double counting method. First, note that
$$
|S|=\sum_{(i,j,k)\in S}1 = \sum_{1\leq j<k\leq 40} \sum_{i:(i,j,k)\in S}1\leq \sum_{1\leq j<k\leq 40}1=\binom{40}{2},
$$ since for each $j<k$, there is at most one $i$ in common. On the other hand,
$$
|S| = \sum_{1\leq i\leq N} \sum_{(j,k):(i,j,k)\in S}1 = \sum_{1\leq i\leq N} \binom{d_i}{2},
$$ since for each $i$, the number of pairs $(j,k)$ that $i$ joined is $\binom{d_i}{2}$.
We also have $$\sum_{1\leq i\leq N}d_i = 400,$$by the assumption.
Finally, note that the function $f(x)= \binom{x}{2} = \frac{x^2-x}{2}$ is convex. Thus by Jensen's inequality we have that
$$
\binom{40}{2}\geq |S|=\sum_{1\leq i\leq N} \binom{d_i}{2}\geq Nf\left(\frac{\sum_i d_i}{N}\right)=N\binom{\frac{400}{N}}{2}.
$$ This gives us the bound
$$
40\cdot 39 \geq 400\cdot(\frac{400}{N}-1),
$$and hence
$$
N \geq \frac{4000}{49} = 81.63\cdots
$$ This establishes $N\geq 82$. However, I'm not sure if this bound is tight. I hope this will help.
$\textbf{Note:}$ If $N=82$ is tight, then above argument implies that $d_i$'s distribution is almost concentrated at $\overline{d} = 400/82 \sim 5$.
EDIT: @antkam's answer seemingly shows that $N=82$ is in fact optimal.
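The arithmetic of the Jensen bound is easy to replay numerically (a sketch of mine; $200(400/N-1)$ is the same quantity $N\binom{400/N}{2}$):

```python
from math import comb

pair_budget = comb(40, 2)            # each unordered pair of commissions shares <= 1 member

def min_pairs_used(N, memberships=400):
    """Jensen lower bound: all d_i equal to memberships/N minimises sum C(d_i, 2)."""
    d = memberships / N
    return N * d * (d - 1) / 2
```

The bound is violated for $N=81$ and first satisfied at $N=82$.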
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 4,
"answer_id": 3
} |
What are the complex roots of $x^3-1$? What are the complex roots of $x^3-1$?
Work I've done so far:
I've set $x = a + bi$. Since $x^3-1=0$, I set $x^3 = (a+bi)^3=1$.
This gives me the following:
(1) $(-ab^2 + a^3) + (2ab^2 + 2a^2b + a^2b - b^3)i$
Which means that I set $(-ab^2 + a^3) = a(a^2-b^2)= 1$ which is also equivalent to
(2) $a(a-b)(a+b)=(a-b)(a^2+ab)=1$.
I also set
(3) $(2ab^2 + 2a^2b + a^2b - b^3) = 0$.
I simplify (3) to
(4) $2b(a^2 + ab) + (a^2 -b^2)b = 0 $
which gives me
(5) $\frac{2b}{a-b} + \frac{b}{a} = 0$ using (2).
Then I get
(6) $\frac{2b}{a-b} = -\frac{b}{a}$. Then I get that $3a=b$. Plugging into (2) I get
(7) $a(a^2 - (3a)^2)=1 = a(a^2 -9a^2) = -8a^3$. So that $a= \frac{-1}{2}$. Now I get that $b= \frac{3}{2}$, which would give me $\frac{1}{2} + \frac{3}{2}i$. But on Wolfram, the imaginary component is close to $.9$. Where am I going wrong?
| Hint:
The simpler way is to factorize:
$$
x^3-1=(x-1)(x^2+x+1)
$$
can you find all the roots?
Anyway, your algebra is wrong because:
$$
(a+ib)^3=a^3+3a^2(ib)+3a(ib)^2+(ib)^3=a^3-3ab^2+i(3a^2b-b^3)
$$
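Both the factorisation and the roots can be confirmed numerically (my own check):

```python
import numpy as np

# (x - 1)(x^2 + x + 1) expands back to x^3 - 1 ...
expanded = np.polymul([1.0, -1.0], [1.0, 1.0, 1.0])

# ... and the three roots are 1 and (-1 +/- i*sqrt(3)) / 2.
roots = sorted(np.roots([1.0, 0.0, 0.0, -1.0]),
               key=lambda z: (round(z.real, 6), z.imag))
```

The imaginary part of the complex roots is $\sqrt3/2\approx0.866$, which matches the "close to $.9$" value from Wolfram.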
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Right triangle geometry problem Right triangle $\Delta ABC$ ($\angle ACB=90°$). The following is constructed: from point $C$ altitude $CD$, angle bisector $CL$ of $\angle ACB$, angle bisector $DK$ of $\angle ADC$, angle bisector $DN$ of $\angle BDC$.
$D, L$ lie on $AB$, $K$ lies on $AC$, $N$ lies on $BC$.
Prove that $C, K, L, D, N$ lie on same circle and prove that $|KN|=|CL|$
I think that I need to do something with quadrilateral $CKDN$. I got that
$$|KN|=\sqrt{|CK|^2+|CN|^2}=\sqrt{|DK|^2+|DN|^2}$$
$$\angle CKD+\angle CND=180°$$
I also tried to express sides with the angle bisector theorem, but I don't know how to continue / what I need to solve this. How can I solve this problem?
| First, since $CD \perp AB$ and $DK$ and $DN$ are angle bisectors to the right angles
$\angle \, ADC$ and $\angle \, BDC$, then $$\angle \, KDN = \angle \, KDC + \angle \, NDC = 45^{\circ} + 45^{\circ} = 90^{\circ}$$
However, $\angle \, KCN = 90^{\circ}$ so the quadrilateral $CKDN$ is inscribed in a circle.
Next, prove that $KL\, || \, CB$ and $NL\, || \, CA$ using the properties of angle bisectors and the similarity between triangles $ABC, ACD$ and $BCD$. Indeed, since $DK$ is a bisector of the angle at vertex $D$ of triangle $\Delta \, ADC$, we apply the theorem that
$$\frac{AK}{KC} = \frac{AD}{DC}$$ But triangle $\Delta \, ACD$ is similar to $\Delta \, ABC$, so $$\frac{AD}{DC} = \frac{AC}{CB}$$ so
$$\frac{AK}{KC} = \frac{AC}{CB}$$ By the fact that $CL$ is an angle bisector of the angle at vertex $C$ of triangle $\Delta\, ABC$
we have that
$$\frac{AC}{CB} = \frac{AL}{LB}$$ so consequently
$$\frac{AK}{KC} = \frac{AL}{LB}$$
which by Thales' intercept theorem implies that $KL \, || \, CB$. Analogously, one can show that $NL \, || \, CA$.
Then quad $CKLN$ is a rectangle, so $KN = CL$ as diagonals of a rectangle. Therefore the point $L$ also lies on the circumcircle of quad $CKDN$, and $KN$ and $CL$ are diameters of the said circle.
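The whole configuration can be verified numerically for a concrete right triangle (legs $3$ and $4$; the helper names are mine):

```python
import numpy as np

C, A, B = np.zeros(2), np.array([3.0, 0.0]), np.array([0.0, 4.0])

# D: foot of the altitude from C onto AB.
t = np.dot(C - A, B - A) / np.dot(B - A, B - A)
D = A + t * (B - A)

def divide(P, Q, r):
    """Point X on segment PQ with PX : XQ = r : 1 (internal division)."""
    return (P + r * Q) / (1 + r)

L = divide(A, B, np.linalg.norm(C - A) / np.linalg.norm(C - B))   # CL bisects angle ACB
K = divide(A, C, np.linalg.norm(D - A) / np.linalg.norm(D - C))   # DK bisects angle ADC
N = divide(B, C, np.linalg.norm(D - B) / np.linalg.norm(D - C))   # DN bisects angle BDC

centre = (K + N) / 2            # KN is a diameter since angle KCN = 90 degrees
radii = [np.linalg.norm(P - centre) for P in (C, K, L, D, N)]
```

All five points are equidistant from the midpoint of $KN$, and $|KN|=|CL|$.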
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing that $\int_0^\pi\frac{\cos n\theta}{\cos\theta-\cos\theta_0}d\theta=\pi\frac{\sin n\theta_0}{\sin\theta_0}$ I am reading Debnath & Bhatta "Integral Transforms and Their Applications, 3rd". They cited one example from Zayed "Handbook of Function and Generalized Function Transformations" and stated an integral (Eq.(9.5.45)), for a non-negative integer n,
$$\int_0^\pi \frac{\cos(n \theta)}{\cos(\theta)-\cos(\theta_0)}d\theta=\pi \frac{\sin(n \theta_0)}{\sin(\theta_0)}$$
It turns out many books on Hilbert transform use this relation for Airfoil Design example, e.g., Prederick W.King, Chapter 11.14 "Hilbert Transform-V1".
Interestingly, I remember the following one from Paul J. Nahin, Eq.(2.3.8) of "Inside Interesting Integrals"
$$\int_0^\pi \frac{\cos(n \theta)-\cos(n \theta_0)}{\cos(\theta)-\cos(\theta_0)}d\theta=\pi \frac{\sin(n \theta_0)}{\sin(\theta_0)}.$$
You can find the proof in that book.
So, if both integrals are correct, then we should have
$$\int_0^\pi \frac{1}{\cos(\theta)-\cos(\theta_0)}d\theta=0,$$ and I cannot see why this should hold. Mathematica gives a pure imaginary result here. How shall I interpret these results, and how can I prove the first integral?
| Are those integrals even well-defined? Let $\theta_0$ be such that $\cos(\theta_0)=1/2$. For instance let $\theta_0=\frac{\pi}{3}$. Take $n=1$. Now
$$\int_0^\pi\frac{\cos(n\theta_0)}{\cos\theta-\cos(\theta_0)}\;d\theta=\int_0^\pi\frac{1/2}{\cos\theta-1/2}\;d\theta.$$
This integral is actually an improper one, as $\pi/3$ is a singularity. And it does not converge.
Similarly,
$$\int_0^\pi\frac{\cos(n\theta)}{\cos(\theta)-\cos(\theta_0)}\;d\theta=\int_0^\pi\frac{\cos(\theta)}{\cos(\theta)-1/2}\;d\theta$$
fails to converge.
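By contrast, Nahin's subtracted form does converge: at $\theta_0$ the numerator vanishes too, so the singularity is removable. A midpoint-rule sketch (my own code) reproduces $\pi\sin(n\theta_0)/\sin(\theta_0)$:

```python
import math

def glauert(n, theta0, steps=200001):
    """Midpoint rule for the subtracted (Glauert) integrand on (0, pi)."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += (math.cos(n * t) - math.cos(n * theta0)) / (math.cos(t) - math.cos(theta0))
    return total * h
```

With $\theta_0=\pi/3$ the predicted values are $\pi$ for $n=1,2$ and $0$ for $n=3$.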
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is there a name for this relation: for all $x$ there is $y$ such that $xRy$, and for all $x,y,z$, if $xRy$ and $xRz$, then $y=z$? Suppose for all $x$ there is $y$ such that $xRy$, and for all $x,y,z$, if $xRy$ and $xRz$, then $y=z$.
Does there exist such a binary relation $R$ on some set such that the above properties are satisfied by $R$?
| One set of examples is functions from $A$ to $B$. For this let $xRy$ mean that $(x,y) \in f$ [using the "function as ordered pairs" formulation]. Then your first requirement expresses that $f$ produces an output $f(x)$ for each $x$ in $A,$ while your second expresses that $f$ is a function.
There may be more examples.
Edit: In usual math terminology, the term "function" implies it is "single valued". That is, a single input doesn't map to more than one output. That's what your second condition expresses. The first condition really says each element of the domain $A$ maps to at least one thing in $B$ [the "codomain"].
There is a version of a so-called "partial function" for which not every element of domain needs to map to something in codomain. [I've seen that more used by logicians]
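The two properties can be checked mechanically when a relation is stored as a set of pairs, which is exactly the "function as ordered pairs" reading (the names are mine):

```python
def is_total(R, domain):
    """Property 1: every x in the domain has at least one y with x R y."""
    return all(any(a == x for a, _ in R) for x in domain)

def is_single_valued(R):
    """Property 2: x R y and x R z force y = z."""
    return all(b == d for a, b in R for c, d in R if a == c)

A = {1, 2, 3}
f = {(1, 'a'), (2, 'b'), (3, 'a')}   # a function on A
g = {(1, 'a'), (1, 'b'), (2, 'a')}   # not single-valued
h = {(1, 'a'), (2, 'b')}             # single-valued but not total on A
```

$f$ satisfies both properties (a function $A\to B$); $g$ and $h$ each fail one.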
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
derivative of inverse matrix by itself Let $A$ be a matrix, supposedly $k\times k$ matrix.
I know that
$$\frac{\partial A^{-1}}{\partial A} = -A^{-2} $$
I do not know how I am supposed to obtain the following results using this fact. I want to know the step of
$$\frac{\partial a^\top A^{-1} b}{\partial A} = -(A^\top)^{-1}ab^\top (A^\top)^{-1} $$
Also, I want to know the solution to
$$\frac{\partial (A^\top)^{-1}ab^\top (A^\top)^{-1} }{\partial A} = ? $$
| Start with the defining equation for the matrix inverse and find its differential.
$$\eqalign{
I &= A^{-1}A \\
0 &= dA^{-1}\,A + A^{-1}\,dA \\
dA^{-1} &= -A^{-1}\,dA\,A^{-1} \\
}$$
Next note the gradient of a matrix with respect to itself.
$$
{\mathcal H}_{ijkl}
= \frac{\partial A_{ij}}{\partial A_{kl}}
= \delta_{ik}\delta_{jl}
$$
Note that ${\mathcal H}$ is a 4th order tensor with some interesting symmetry properties (isotropic). It is also the identity element for the Frobenius product, i.e. for any matrix $B$
$${\mathcal H}:B=B:{\mathcal H}=B$$
Now we can answer your first question. The function of interest is scalar-valued. Let's find its differential and gradient
$$\eqalign{
\phi &= a^TA^{-1}b \cr &= ab^T:A^{-1} \\
d\phi &= ab^T:dA^{-1} \cr &= -ab^T:A^{-1}\,dA\,A^{-1} \\
&= -A^{-T}ab^TA^{-T}:dA \\
\frac{\partial\phi}{\partial A} &= -A^{-T}ab^TA^{-T} \\
}$$
Now let's try the second question. This time the function of interest is matrix-valued.
$$\eqalign{
F &= A^{-1}ab^TA^{-1} \\
dF &= dA^{-1}ab^TA^{-1} + A^{-1}ab^TdA^{-1} \\
&= -A^{-1}\,dA\,A^{-1}ab^TA^{-1} - A^{-1}ab^TA^{-1}\,dA\,A^{-1} \\
&= -A^{-1}\,dA\,F - F\,dA\,A^{-1} \\
&= -\Big(A^{-1}{\mathcal H}F^T + F{\mathcal H}A^{-T}\Big):dA \\
\frac{\partial F}{\partial A}
&= -\Big(A^{-1}{\mathcal H}F^T+F{\mathcal H}A^{-T}\Big) \\
}$$
This gradient is a 4th order tensor.
If you prefer, you can vectorize the matrices to flatten the result.
$$\eqalign{
{\rm vec}(dF) &= -{\rm vec}(A^{-1}\,dA\,F + F\,dA\,A^{-1}) \\
&= -(F^T\otimes A^{-1} + A^{-T}\otimes F)\,{\rm vec}(dA) \\
df &= -(F^T\otimes A^{-1} + A^{-T}\otimes F)\,da \\
\frac{\partial f}{\partial a}
&= -\Big(F^T\otimes A^{-1} + A^{-T}\otimes F\Big) \\\\
}$$
In the steps above, a colon was used to denote the Frobenius (double-contraction) product
$$\eqalign{
A &= {\mathcal H}:B &\implies &A_{ij}
&= \sum_{kl}{\mathcal H}_{ijkl} B_{kl} \\
\alpha &= H:B &\implies &\alpha
&= \sum_{ij}H_{ij} B_{ij} = {\rm Tr}(H^TB) \\
}$$
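The scalar-valued gradient can be confirmed against central finite differences; the random, well-conditioned test data below is my own choice:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
A = rng.standard_normal((k, k)) + k * np.eye(k)   # keep A comfortably invertible
a = rng.standard_normal(k)
b = rng.standard_normal(k)

def phi(M):
    return a @ np.linalg.solve(M, b)              # a^T M^{-1} b

Ainv = np.linalg.inv(A)
analytic = -Ainv.T @ np.outer(a, b) @ Ainv.T      # -A^{-T} a b^T A^{-T}

# Central finite differences, entry by entry.
eps = 1e-6
numeric = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        E = np.zeros((k, k))
        E[i, j] = eps
        numeric[i, j] = (phi(A + E) - phi(A - E)) / (2 * eps)
```

The entrywise difference between the analytic gradient and the finite-difference estimate is tiny.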
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Prove $\sum_{i=1}^n a_i$ = $\sum_{i=2}^{n+1} a_{i-1}$ Given $\sum_{i=1}^n a_i$ = $\sum_{i=2}^{n+1} a_{i-1}$
How would you show this is true for all $n \in \mathbb N$ and $a_1, a_2, \ldots, a_n \in \mathbb R$?
I know it is obviously true because I would just use a substitution like $i=j-1$; then summing $a_{j-1}$ from $2$ to $n+1$ gives the same result, but I am not really sure how to show this. It seems like an induction problem to me, but I'm not sure. How is this done? Thanks
| Prove that
$F(n):= \sum_{i=1}^{n}a_i-\sum_{i=2}^{n+1}a_{i-1}= 0$ , for $n \in \mathbb{Z^+}$ by induction.
1) $n=1$√.
2) Hypothesis $F(n)=0$.
3) Step for $n+1$.
$F(n+1)=$
$\sum_{i=1}^{n+1}a_i - \sum_{i=2}^{n+2}a_{i-1}=$
$\sum_{i=1}^{n}a_i +a_{n+1}$
$- \sum_{i=2}^{n+1}a_{i-1}- a_{n+1}=$
$F(n)+(a_{n+1}-a_{n+1})=0$,
since $F(n)=0$ by hypothesis, and the second summand is zero.
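The identity is also easy to sanity-check numerically for one concrete list (a sketch of mine; Python lists are 0-indexed, hence the offsets):

```python
a = [5, -2, 7, 3, 11]                                  # a_1, ..., a_5
n = len(a)

lhs = sum(a[i - 1] for i in range(1, n + 1))           # sum_{i=1}^{n} a_i
rhs = sum(a[(i - 1) - 1] for i in range(2, n + 2))     # sum_{i=2}^{n+1} a_{i-1}
```

Both sums run over exactly the same terms $a_1,\ldots,a_n$.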
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Variation of distributing $k$ balls into $n$ distinguishable boxes I am interested in understanding a variation problem of distributing balls into boxes. It seems to be not any of the individual case mentioned in twelvefold way classification.
The problem is described as follows:
In total there are $K$ balls; there are $I$ distinguishable groups by color, and within each color group the balls are indistinguishable, with the $i$-th group containing $n_i$ balls, $i = 1, 2, \ldots, I$. Meanwhile, we have $N$ distinguishable boxes, where $N$ is always bigger than any $n_i$. How many ways are there to distribute these $K$ balls into the $N$ boxes, when no box can contain more than one ball from the same color group and no box is empty?
Any suggestion will be appreciated.
| This is a straightforward application of the principle of inclusion-exclusion. First, count up all the ways to put all the balls into boxes, ignoring the condition that no box can be empty. This is simply
$$
\prod_{i=1}^I \binom{N}{n_i}
$$
Next, you have to subtract out the "bad" cases where some box is empty. For each of the $N$ boxes, there are $\prod_{i=1}^I \binom{N-1}{n_i}$ ways to place the balls where that box is empty, and you subtract this for each box, so we are left with
$$
\prod_{i=1}^I \binom{N}{n_i}-N\cdot \prod_{i=1}^I \binom{N-1}{n_i}
$$
However, distributions with two empty boxes were subtracted out twice by the above formula, so they must be added back in. The result is
$$
\prod_{i=1}^I \binom{N}{n_i}-N\cdot \prod_{i=1}^I \binom{N-1}{n_i}+\binom{N}2\prod_{i=1}^I \binom{N-2}{n_i}
$$
The inclusion-exclusion continues in this way, alternately subtracting and adding the cases with more prescribed empty boxes. The end result is
$$
\boxed{\sum_{j=0}^N (-1)^j\binom{N}j\prod_{i=1}^I \binom{N-j}{n_i}.}
$$
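As a sanity check of the boxed formula (my own illustration, not part of the original answer), one can compare it against a brute-force enumeration for small parameters:

```python
from itertools import combinations, product
from math import comb, prod

def by_formula(N, ns):
    # inclusion-exclusion over the number j of boxes forced to be empty
    return sum((-1) ** j * comb(N, j) * prod(comb(N - j, n) for n in ns)
               for j in range(N + 1))

def by_brute_force(N, ns):
    # each color group i independently picks n_i distinct boxes;
    # keep only placements in which every box receives at least one ball
    count = 0
    for choice in product(*[list(combinations(range(N), n)) for n in ns]):
        if len(set().union(*choice)) == N:
            count += 1
    return count

assert by_formula(3, [2, 2]) == by_brute_force(3, [2, 2]) == 6
assert by_formula(4, [3, 2, 2]) == by_brute_force(4, [3, 2, 2])
```

The brute force stays feasible only for small $N$ and $n_i$, which is exactly why the closed formula is useful.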
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3023944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why is $\lim_{x\to -\infty}\sqrt{x^2+5x+3}+x = \lim_{x\to -\infty}(-x)+x=\lim_{x\to -\infty}0 = 0$ not correct? $\lim_{x\to -\infty}\sqrt{x^2+5x+3}+x = \lim_{x\to -\infty}(-x)+x=\lim_{x\to -\infty}0 = 0$
Apparently, the 2nd step is illegal here, probably because for $x \to -\infty$ I'd get $(+\infty-\infty)$, which is indeterminate. I see why that is a problem, but I'm not sure whether it really is what makes that equation illegal.
But now, apparently, I could do this:
$\lim_{x\to -\infty}\sqrt{x^2+5x+3}+x=\lim_{x\to -\infty}\frac{[\sqrt{x^2+5x+3}+x][\sqrt{x^2+5x+3}-x]}{\sqrt{x^2+5x+3}-x}=\lim_{x\to -\infty}\frac{x^2+5x+3-x^2}{\sqrt{x^2+5x+3}-x}=\lim_{x\to -\infty}\frac{5x+3}{\sqrt{x^2+5x+3}-x}=\lim_{x\to -\infty}\frac{5+3/x}{-\sqrt{1+5/x+3/x^2}-1}=-5/2$ (note that when dividing by $x<0$, $\sqrt{x^2+5x+3}/x = -\sqrt{1+5/x+3/x^2}$, which is where the minus sign in the last denominator comes from)
which gives me the correct result.
But in the 3rd step I used $x^2-x^2=0$, how is that legal?
Also, in the 2nd step I implicitly used:
$-x\sqrt{x^2+5x+3}+x\sqrt{x^2+5x+3}=0$
Which also seems to be fine, but why?
| The first step is illegal, not the second. This is because $\sqrt{x^2 + 5x + 3} \not\equiv -x$.
Also addressing your comment:
The idea behind $\lim_{x \to -\infty} \sqrt{x^2 + 5x + 3} = \dots = \lim_{x \to -\infty} -x$ was that $x^2$ dominates $5x+3$. I also thought: if that argument is wrong here, but $\lim_{x \to -\infty} x - x = 0$ and everything of the same "type" is legal, then I'm okay again. Then I have to think about why the domination argument doesn't work as I thought it does here.
You are right in thinking that $x^2$ dominates $5x+3$, so that $\color{blue}{\sqrt{x^2 + 5x + 3}} \sim \color{red}{-x}$ for large negative $x$. Indeed there is no problem when computing their ratio:
$$\lim_{x \to -\infty} \frac{\color{blue}{\sqrt{x^2 + 5x + 3}}}{\color{red}{-x}} = 1$$
However, because you are computing the difference between $\color{blue}{\sqrt{x^2 + 5x + 3}}$ and $\color{red}{-x}$, such an approximation is not fine enough. You need more terms of the series expansion:
$$
\begin{align*}
\color{blue}{\sqrt{x^2 + 5x + 3}}
&= -x \sqrt{1 + \frac{5}{x} + \frac{3}{x^2}} \\
&\sim -x \left(1 + \frac{1}{2} \cdot \frac{5}{x} + O\left(\frac{1}{x^2}\right) \right) \\
&= \color{red}{-x} - \frac{5}{2} + O\left(\frac{1}{x}\right)
\end{align*}
$$
Then you may conclude that the required limit is $-5/2$.
By analogy, an approximation such as $\color{blue}{x + 1} \sim \color{red}{x}$ is perfectly fine for computing the ratio limit $\lim_{x \to -\infty} (\color{blue}{x + 1}) / (\color{red}{x})$. But if you are computing the difference limit, then you may not discard the $1$ in $\color{blue}{x + 1}$.
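As a quick numerical illustration (my own, not part of the original answer), evaluating the expression at increasingly negative $x$ shows the approach to $-5/2$:

```python
import math

def f(x):
    # the original expression sqrt(x^2 + 5x + 3) + x
    return math.sqrt(x * x + 5 * x + 3) + x

values = [f(-10.0 ** k) for k in (2, 4, 6)]
# the values tend to -2.5 as x -> -infinity
assert abs(values[-1] + 2.5) < 1e-3
# and the error shrinks as x gets more negative
assert abs(values[-1] + 2.5) < abs(values[0] + 2.5)
```

This also matches the series expansion: the error term is of size $O(1/|x|)$.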
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 2
} |
Basic division and multiplication We know that 2 x 2 /2 can be solved by removing '2' from the denominator and numerator, we can't do the same if the operation was addition, These "rules" have been established based on the understanding of these operations and how these interact with each other.
Up until now, I've just been accepting it as a rule, but this causes a lot of discomforts while doing math.
I generally get good grades in math, but that's only by following these rules and patterns but there is no genuine understanding as to why we can remove "2" from the numerator and denominator in 2 x 2 /2 but we cannot in 2 + 2/2.
Secondly, 2 x 2 / 2 can be solved by removing "2" from the numerator and the denominator and it gives the same result as 4/2, how?.
Is it an axiom?
I hope to gain an intuition for the above concepts, so I can be free from my long-held guilt.
Thanks.
|
$\dfrac {2 \times 2}{ 2 }$ can be solved by removing $2$ from the numerator and the denominator and it gives the same result as $\dfrac 4 2$, how? Is it an axiom?
No, it is not an axiom. But, as with every rule for operating with numbers, it is justified by the axioms.
The rationals (i.e. fractions) form an ordered field.
Regarding division, the axioms state that:
If $a \ne 0$, then the equation $a \cdot x = b$ has a unique solution: $x = \dfrac b a = b \cdot a^{-1}$.
Thus, $a^{-1}$ is the multiplicative inverse of $a$.
We have:
$\dfrac {a \cdot a} a = (a \cdot a) \cdot a^{-1}$.
By associativity of the product we have that:
$(a \cdot a) \cdot a^{-1} = a \cdot (a \cdot a^{-1})$.
But $a \cdot a^{-1} = 1$, and again by an axiom:
$a \cdot 1 = a$.
Putting it all together, we get:
$\dfrac {a \cdot a} a = a$.
At an "intuitive level" it is simply a matter of multiplying and dividing quantities.
We multiply two by two to get four, and then we divide four by two, getting two.
In real-life applications of numerical operations we do not "cancel" anything.
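As an illustrative check (my own, not part of the axiomatic argument), exact rational arithmetic with Python's `fractions.Fraction` shows that cancellation works under multiplication but not under addition. Note that $\frac{2+2}{2}=2$ only looks like cancellation because the value happens to be $2$; trying $a=3$ or $a=7/2$ breaks the illusion:

```python
from fractions import Fraction

for a in (Fraction(3), Fraction(7, 2), Fraction(-5)):
    # multiplicative cancellation: (a * a) / a == a always holds for a != 0
    assert (a * a) / a == a
    # additive "cancellation" fails: (a + a) / a is always 2, not a
    assert (a + a) / a == 2
```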
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Finding magnitude of a complex number
$$z = \dfrac{2+2i}{4-2i}$$
$$|z| = ? $$
My attempt:
$$\dfrac{(2+2i)(4+2i)}{(4-2i)(4+2i)} = \dfrac{4+12i}{20} = \dfrac{4}{20}+\dfrac{12}{20}i = \dfrac{1}{5} + \dfrac{3}{5}i$$
Now taking its magnitude and we have that
$$|z| = \sqrt{\biggr (\dfrac 1 5 \biggr ) ^2 +\biggr (\dfrac 3 5 \biggr )^2} = \sqrt {\dfrac 2 5 }$$
Am I right?
| Yes, you are. You can do it also like this: $$\Big|{2+2i\over 4-2i}\Big|=\Big|{1+i\over 2-i}\Big|={|1+i|\over |2-i|}= {\sqrt{2}\over \sqrt{5}}$$
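A quick check with Python's built-in complex arithmetic (my own illustration, not part of the original answer) confirms both routes to the value:

```python
import math

z = (2 + 2j) / (4 - 2j)
# |z| should equal sqrt(2/5), i.e. sqrt(2)/sqrt(5)
assert math.isclose(abs(z), math.sqrt(2 / 5))
assert math.isclose(abs(z), math.sqrt(2) / math.sqrt(5))
# and z itself matches the hand computation 1/5 + (3/5)i
assert math.isclose(z.real, 1 / 5) and math.isclose(z.imag, 3 / 5)
```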
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Triple Integrals Help Suppose $E$ is the sphere $x^2 + y^2 + z^2 = 1$ whose density at each point is proportional to the distance from the origin. Find an expression for the mass of $E$ as a triple integral, and explain why it is difficult to compute.
I believe it is difficult to compute because the region is a sphere and not a box but I'm not exactly sure how to write the triple integral
| For an arbitrary density $\rho$, the mass is expressible as a triple integral in spherical polar coordinates, viz. $\int_0^{2\pi}d\phi\int_0^\pi d\theta\sin\theta\int_0^1 \rho(r,\,\theta,\,\phi)r^2 dr$. If $\rho$ only depends on $r$ we can first integrate out the angles, giving $4\pi\int_0^1\rho(r)r^2 dr$. The choice $\rho=kr$ from your question gives $4\pi k\int_0^1 r^3 dr=\pi k$.
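A numerical check of the final radial integral (my own sketch; the proportionality constant is taken as $k=1$ purely for illustration), using a midpoint Riemann sum for $4\pi k\int_0^1 r\cdot r^2\,dr$:

```python
import math

def mass(k=1.0, n=100_000):
    # midpoint rule for 4*pi*k * integral_0^1 r^3 dr
    h = 1.0 / n
    integral = sum(((i + 0.5) * h) ** 3 for i in range(n)) * h
    return 4 * math.pi * k * integral

# exact value is pi * k, since integral_0^1 r^3 dr = 1/4
assert abs(mass() - math.pi) < 1e-6
```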
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Fourth point of intersection of two conics Five points in general position define a unique conic section. Let $Q_1$ be a conic through points $A,B,C,E_1,F_1$ and likewise $Q_2$ through $A,B,C,E_2,F_2$. Two conics (over an algebraically closed field) generally intersect in four points, so these two will have a fourth point $D$ in common, in addition to the points $A,B,C$ which are common by construction.
What are its coordinates? To be more precise, how can you express the coordinates of that point $D$ as a rational function of the coordinates of the 7 given points $A,B,C,E_1,F_1,E_2,F_2$? And can you certify that the rational functions you found are as simple as possible, in terms of their total degree?
I have an approach to find these coordinates, which I will write up as an answer. But reducing its degree as far as possible makes it really ugly. So I encourage other approaches tackling this differently. If I'm not mistaken, the ideal solution would be of degree 6 in $A,B,C$ and of degree 4 in $E_1,F_1,E_2,F_2$.
I thought of this problem after re-reading a previous answer of mine and being unhappy about the way I translated between Möbius geometry and regular projective geometry there. With $A,B$ taken as the ideal circle points, the problem here can be used to find the second point of intersection for two circles, each defined by three points, and one of them common to both the circles.
| Let the conics be $ABCE^+F^+$ and $ABCE^-F^-$, and let the fourth point of intersection be $D$. Write $D$, and the $E$s and $F$s, using rampantly-reciprocated barycentric coordinates
$$D = \left(\;\frac1{a} : \frac1{b} : \frac1{c}\;\right) \qquad E^\pm = \left(\;\frac{1}{a_\circ^\pm} : \frac{1}{b_\circ^\pm} : \frac{1}{c_\circ^\pm} \;\right) \qquad
F^\pm = \left(\;\frac{1}{a_\star^\pm} : \frac{1}{b_\star^\pm} : \frac1{c_\star^\pm} \;\right)$$
(using "$a_\circ^\pm$", etc, to reduce some of the visual clutter otherwise encountered with "$a_E^\pm$") where $(u:v:w)$ represents the point
$$\frac{u A+v B+w C}{u+v+w}$$
Now, a computer algebra system makes pretty quick work of finding the equations for the conics and representations of the points of intersection. I won't bother TeX-ing up any of that here; I'll simply cut to the chase, with a first pass at helpful grouping:
$$\begin{align}
a &=
\phantom{+}
a_\circ^+ a_\circ^- \left(b_\star^+ c_\star^- - c_\star^+ b_\star^- \right)
+ a_\star^+ a_\star^- \left(b_\circ^+ c_\circ^- - c_\circ^+ b_\circ^- \right) \\
&\phantom{=}+ a_\circ^+ a_\star^- \left(b_\circ^- c_\star^+ - c_\circ^- b_\star^+ \right)
+ a_\circ^- a_\star^+ \left( b_\star^- c_\circ^+ - b_\circ^+ c_\star^- \right)
\\[8pt]
b &=
\phantom{+}
b_\circ^+ b_\circ^- \left( c_\star^+ a_\star^- - c_\star^- a_\star^+ \right)
+ b_\star^+ b_\star^- \left( c_\circ^+ a_\circ^- - c_\circ^- a_\circ^+ \right)
\\[4pt]
&\phantom{=}
+ b_\circ^+ b_\star^- \left( c_\circ^- a_\star^+ - c_\star^+ a_\circ^- \right)
+ b_\circ^- b_\star^+ \left( c_\star^- a_\circ^+ - c_\circ^+ a_\star^- \right)
\\[8pt]
c &=
\phantom{+}
c_\circ^+ c_\circ^- \left( a_\star^+ b_\star^- - a_\star^- b_\star^+ \right)
+ c_\star^+ c_\star^- \left( a_\circ^+ b_\circ^- - a_\circ^- b_\circ^+ \right)
\\ &\phantom{=}
+ c_\circ^+ c_\star^- \left( a_\circ^- b_\star^+ - a_\star^+ b_\circ^- \right)
+ c_\circ^- c_\star^+ \left( a_\star^- b_\circ^+ - a_\circ^+ b_\star^- \right)
\end{align}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $f'$ be continuous. If $f$ is uniformly continuous then $f'$ is uniformly continuous? Is this, in general, true? I need this in order to prove that a polynomial $p(x)$ is uniformly continuous on $\mathbb{R}$ if and only if $p(x)$ has degree less than or equal to $1$. If $p(x)$ has degree less than or equal to $1$, it is easy to prove that it is uniformly continuous. Now, if $p(x)$ has degree greater than one, then it is not. I want to proceed by induction: for the base case, if $p(x)$ has degree $2$, it is not uniformly continuous; for higher degrees, by the induction hypothesis $p'(x)$ is not uniformly continuous, and therefore, by the contrapositive of the claim above, $p(x)$ is not uniformly continuous either.
| This is intended as a modification of the example by Guido A so that the function is defined on the whole line. Note, however, that $f(x)=\sqrt{\frac{\pi}{2}+\tan^{-1}x}$ does not work: its derivative $f'(x)=\frac{1}{2(1+x^2)\sqrt{\pi/2+\tan^{-1}x}}$ is continuous, bounded, and tends to $0$ at both infinities, hence is itself uniformly continuous. A whole-line example that does work is $f(x)=\frac{\cos(x^3)}{1+x^2}$: here $f'$ is bounded, so $f$ is Lipschitz and hence uniformly continuous, but $f'$ behaves like $-3\sin(x^3)$ for large $|x|$, oscillating with unbounded frequency, so $f'$ is not uniformly continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3024845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
decomposition group and inertia group, the minimal polynomial,surjectivity of the map $D_{M/P}\rightarrow Gal$ Can anyone explain the underlined sentence?
For notation: $A$ is a Dedekind domain, $K=\operatorname{Frac}(A)$, $L/K$ is a Galois extension, $B$ is the integral closure of $A$ in $L$, $M$ is a maximal ideal of $B$, $P = M \cap A$ (hence a maximal ideal of $A$), and $D_{M/P}$ is the decomposition group.
I reckon the way we choose $\alpha$ is the key, but I cannot get from there to the conclusion, 'we find that the only non-zero roots of...'.
I read some of the close questions already answered but none of them was using this type of logic.
Thank you in advance.
| Let me expand on the highlighted part:
$g(y)$ is the min. polynomial of $\overline{\alpha}$ over $A/P$, so it has to divide the polynomial $\overline{f}(y)=f(y) \,\mathrm{mod}\, P$, since $\overline{\alpha}$ is a root of $\overline{f}(y)$ (and $\overline{f}(y)$ is nonzero, take $f(y)$ monic). From this and the expression $f(y)=\prod_H(y-\sigma(\alpha))$ it follows that the roots of $g(y)$ are just some of the roots $\sigma(\alpha)$ taken modulo $M$, and the goal is to identify which ones.
Now $\alpha$ was chosen so that $\alpha \in \sigma(M)$ whenever $\sigma \notin D_{M/P}$, i.e. $\sigma(M)\neq M$. Applying $\sigma^{-1}$, we have that $\sigma^{-1}(\alpha) \in M$ whenever $\sigma(M)\neq M$. Changing $\sigma^{-1}$ to $\sigma$ (note that $\sigma^{-1} \notin D_{M/P}$ iff $\sigma \notin D_{M/P}$), we have that $\sigma(\alpha) \in M $ whenever $\sigma \notin D_{M/P}$. And conversely, we have $\alpha \notin M$ (because $\overline{\alpha} \neq 0$), so given any $\sigma \in D_{M/P}$, we have that $\sigma(\alpha) \notin \sigma(M)=M$. So altogether: $\sigma(\alpha) \,\mathrm{mod}\,M$ is nonzero iff $\sigma \in D_{M/P}$. So the roots of $g(y)$ can come only from these, i.e. in the form $\overline{\sigma}(\overline{\alpha})$ (because $g(y)$ cannot have $0$ as a root, it's the min. poly. of $\overline{\alpha}$). And all of them has to be roots for Galois reasons (all the maps $\overline{\sigma}$ are elements of the Galois group of the residue field, and $\overline{\alpha}$ is a root of $g(y)$).
Hope this helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Sylow $3$-subgroups of an order $180$ group My task is to show that if $G$ is a group of order $180 = 2^2 \cdot 3^2 \cdot 5$ with $36$ Sylow $5$-subgroups, then there are two Sylow $3$-subgroups $H$ and $K$ such that $|H \cap K| = 3$. The number $n_3$ of Sylow $3$-subgroups is either $1$, $4$, or $10$, since $n_3 \equiv 1 \pmod 3$ and $n_3 \mid 2^2 \cdot 5$. However, from here I don't know how to force two subgroups to intersect by mere pigeonholing.
It turns out from this post that $G$ is not simple, so either $n_2=1$ or $n_3=1$, but I don't see how to use this either. In particular, we get a normal $5$-complement.
| Since no group of order $180$ has $n_5=36$, any conclusion follows vacuously. The normal $5$-complement would be a nontrivial proper normal subgroup of order $36$, hence solvable with solvable quotient of order $5$, ruling out an $A_5$ composition factor (the only candidate for a non-abelian composition factor); so if such a group existed, it would have to be solvable. But then the counting portion of Philip Hall's theorem would require that we could write $36$ as a product of prime powers, EACH congruent to $1 \bmod 5$, which we cannot.
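The arithmetic obstruction at the end can be checked by brute force. This sketch (my own, not from the answer) searches for a way to write $36$ as a product of prime powers, each congruent to $1 \bmod 5$:

```python
def prime_powers(limit):
    # all prime powers p^k with 2 <= p^k <= limit
    pps = []
    for p in range(2, limit + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p is prime
            q = p
            while q <= limit:
                pps.append(q)
                q *= p
    return pps

def expressible(n, parts):
    # can n be written as a product of elements of `parts` (with repetition)?
    if n == 1:
        return True
    return any(n % q == 0 and expressible(n // q, parts) for q in parts)

# prime powers up to 36 that are congruent to 1 mod 5: {11, 16, 31}
candidates = [q for q in prime_powers(36) if q % 5 == 1]
assert not expressible(36, candidates)
```

None of the candidates even divides $36$, so no such factorization exists.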
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
If A complement is the union of two separated sets, prove that the union of those separated sets with A is connected. Let $A$ be a connected subset of a connected metric space $(X,d)$.
Assume $A^{c}$ is the union of two separated sets $B$ and $C$.
Prove that $A \cup B$ and $A \cup C$ are connected.
Attempt
Proving $A \cup B$ is connected is sufficient.
Assume towards a contradiction that $A \cup B$ is not connected. (So it is disconnected).
There exist open sets $G_{1}$ and $G_{2}$ such that $A \cup B \subseteq G_{1} \cup G_{2}$, $(A\cup B) \cap G_{1} \neq \emptyset$, $(A\cup B) \cap G_{2} \neq \emptyset$ and $(A\cup B)\cap G_{1} \cap G_{2} = \emptyset$.
Since $A \subseteq A \cup B$ and $A$ is connected, then either $A$ lies in $G_{1}$ or $G_{2}$.
WLOG, assume $A$ lies in $G_{1}$.
I do not know how to proceed from here. I need to obtain a contradiction to finish my proof.
Edit
I have corrected my proof.
| Isn't the complement of $A \cup B$ just $C$? And I think it's easy to see that $C$ is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Words of weight 5 in the Ternary Golay Code I'm not really good at doing this type of exercise, but I'd like to know how to prove that there are 132 words of weight 5 in the ternary Golay code. I am not allowed to use the weight enumerator.
I tried the same counting in the ambient space $GF(3)^{11}$ but did not succeed, so I'm quite stuck on this.
Any suggestion would be awesome.
| I assume that you are expected to answer this question using only the (big) piece of information that the ternary Golay code $G$ is a perfect code with covering radius $\rho=2$.
An attack (filling in the details as the OP solved the problem themself):
*
*The number of vectors of weight three in the space $GF(3)^{11}$ is equal to $\binom{11}3\cdot2^3=1320$. We can choose the three non-zero positions in $\binom{11}3$ ways, and each of those non-zero positions can have either $1$ or $2$ as the entry, so those three non-zero positions can be filled in $2^3$ different ways.
*The covering property implies that a vector $x$ of weight three is within Hamming distance $\le 2$ of a unique word $w\in G$. The triangle inequality implies that the weight of $w$ must be in the interval $[1,5]$. The minimum distance of $G$ is five, so $w$ must be of weight five exactly.
*Given a codeword $w\in G$ of weight five, it is at distance two from exactly $\binom52=10$ vectors $x$ of weight three. This is because we get all such vectors $x$ by replacing two of the five non-zero components of $w$ with a zero. Note that the answer is independent of the choice of $w$.
*Let the number of codewords of weight five be $M$. In light of the previous bullet, between them they cover $10M$ vectors of weight three. Observe that there is no overlap, for if two distinct codewords were both within distance two of the same vector $x$, then the distance between them would be at most four in violation of the known minimum distance of $G$.
*Combining the first and the fourth bullet, we arrive at the equation $1320=10M$ implying $M=132$.
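The counting in the bullets above reduces to simple binomial arithmetic, which can be verified directly (a check of the numbers only, not of the covering argument itself):

```python
from math import comb

weight3_vectors = comb(11, 3) * 2 ** 3   # bullet 1: choose positions, then entries in {1, 2}
covered_per_codeword = comb(5, 2)        # bullet 3: zero out two of the five non-zero entries
assert weight3_vectors == 1320
assert covered_per_codeword == 10
assert weight3_vectors % covered_per_codeword == 0   # bullet 4: the cover is exact
assert weight3_vectors // covered_per_codeword == 132  # bullet 5: M = 132
```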
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3025523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |