Compute $f'(0)$ and check if $f'$ is continuous or not
Given the function
$$f(x)=\begin{cases}x^{4/3}\sin(1/x)&\text{if $x\neq 0$}\\0&\text{if $x=0$}\end{cases}.$$
* Compute $f'(0)$;
* Is $f'$ continuous on $\mathbb{R}$?
I am unsure of how to solve this but will post what I have.
For part 1, I computed $f'(0) = 0$ (unless I computed it wrong).
For part two I am totally lost on how to prove this, but am thinking that since $f'(0)=0$, wouldn't that make $f'$ discontinuous? Any help is appreciated!
|
$$f(x)=\begin{cases}x^{4/3}\sin(1/x)&\text{if $x\neq 0$}\\0&\text{if $x=0$}\end{cases}.$$
We have,
$$f'(0)=\lim_{h\to 0}\frac{f(h)-f(0)}{h}=\lim_{h\to 0}h^{1/3}\sin\left(\frac{1}{h}\right) = 0,$$
since $\left|\sin\left(\frac{1}{h}\right)\right|\le 1$. Then
$$f'(x)=\begin{cases}\frac43x^{1/3}\sin(1/x) - x^{-2/3}\cos(1/x)&\text{if $x\neq 0$}\\0&\text{if $x=0$}\end{cases}.$$
But $$\lim_{x\to 0}f'(x) =\lim_{x\to 0}\left[\frac43x^{1/3}\sin(1/x) - x^{-2/3}\cos(1/x)\right] = -\lim_{x\to 0} x^{-2/3}\cos(1/x),$$
which does not exist. So $f'$ is not continuous at $x=0$.
Indeed let $g(x) = x^{-2/3}\cos(1/x) $ and set $z_n = \frac{1}{n\pi}$ $$ \lim_{n\to \infty}z_n =0$$
But $$g(z_n) = \left(n\pi\right)^{2/3} \cos\left(n\pi\right) = (-1)^n\left(n\pi\right)^{2/3} = \begin{cases} \left(2k\pi\right)^{2/3}& n=2k\\-\left((2k+1)\pi\right)^{2/3}& n=2k+1\end{cases}$$
we see that $\lim_{n\to \infty}g(z_n)$ does not exist, so $\lim_{x\to 0}f'(x)$ indeed does not exist.
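A quick numerical sanity check (a Python sketch; the helper name `fprime` is ad hoc, not from the post):

```python
import numpy as np

# f'(x) = (4/3) x^(1/3) sin(1/x) - x^(-2/3) cos(1/x), for x != 0
def fprime(x):
    return (4/3) * np.cbrt(x) * np.sin(1/x) - np.cos(1/x) / np.cbrt(x)**2

# along z_n = 1/(n*pi) the values behave like -(-1)^n (n*pi)^(2/3):
for n in [10, 11, 100, 101, 1000, 1001]:
    print(n, fprime(1 / (n * np.pi)))  # alternating sign, growing magnitude
```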
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that set $\{(x,y,z)\in\mathbb{R}^3: x^2+y^2 \leq z, x+y+z=1\}$ is compact Prove that set $S=\{(x,y,z)\in\mathbb{R}^3: x^2+y^2 \leq z, x+y+z=1\}$ is compact!
I'm not really sure how to go about this. I can prove the set is closed since it's an intersection of preimages (of closed sets) of continuous functions.
I see that if $z$ gets too big the inequality can't stand considering that $x+y+z=1$ but I just can't see a formal way to prove that. I'm pretty sure that if $z>2$ it's pretty much impossible for the inequality to hold.
I've noticed that $z\geq 0$ but other than that I just can't find a way to prove that the set is bounded. Thanks a lot!
Edit: I've fixed the inequality.
Also, the problem asks: for $f:\mathbb{R}^3 \rightarrow \mathbb{R}^2$, $f(x,y,z) = (x,y)$, what is $f(S)$? (Also not really sure how to solve this, considering I can't even prove it's compact.)
|
Note that $$
(1-z)^2=(x+y)^2\leqslant 2(x^2+y^2)\leqslant 2z,$$so $z^2-4z+1\leqslant 0$, i.e. $2-\sqrt{3}\leqslant z\leqslant 2+\sqrt{3}$; in particular $z$ is bounded. Therefore, it follows from the condition $x^2+y^2\leqslant z$ that $x^2+y^2+z^2$ is also bounded. Together with closedness, $S$ is compact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Does $\frac{\langle a_k,b_k\rangle}{\|a_k\|}$ converge, if $(a_k)_k$ and $(b_k)_k$ tend to $0$ and $b\neq 0$ respectively? If $(a_k)_k$ and $(b_k)_k$ are both convergent sequences in $\mathbb{R}^2$ such that their limits are $0$ and $b\neq0$ respectively, does the sequence
$$\frac{\langle a_k,b_k \rangle}{\|a_k\|}$$ converge? (where $\langle a_k,b_k \rangle$ is the scalar product).
I'm not really sure how to go about this, since the numerator and the denominator both go to $0$( all norms are equivalent so it doesn't matter which norm we choose for the denominator).
I figured I might try using the sandwich theorem to approximate this but I'm not sure how, since the numerator is basically $a^1_kb^1_k + a^2_kb^2_k$, and if $(a_k)_k$ goes to $(0,0)$ obviously the component sequences both tend to $0$ and I can't separate them from the component sequences of $b_k$. So I'm not sure how to go about this!
Any hints at all would be appreciated
Thanks in advance!
|
The answer is no.
Let $$a_n = \left((-1)^n\cdot\frac1n, \frac1n\right), \text{ for } n\in \mathbb{N}$$
and $b_n = (1, 1)$ for $n \in \mathbb{N}$, the constant sequence $(1,1)$.
We have:
$$\frac{\langle a_n, b_n\rangle}{\|a_n\|} = \frac{(-1)^n\cdot\frac1n + \frac1n}{\frac{\sqrt{2}}{n}} = \frac{(-1)^n + 1}{\sqrt{2}}$$
This sequence does not converge.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Show recursion in closed form I've got the following recurrence:
$ a_{n}=2a_{n-1}-a_{n-2}+2^{n}+4$
where $ a_{0}=a_{1}=0$
I know what to do when I deal with sequence in form like this:
$ a_{n}=2a_{n-1}-a_{n-2}$
- when there are no terms other than previous terms of the sequence.
Can you tell me how to deal with this type of problem?
What's the general algorithm behind solving these?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{a_{n} = 2a_{n - 1} - a_{n - 2} + 2^{n} + 4\,,\qquad a_{0} = a_{1} = 0:\ {\large ?}}$.
Note that
$\ds{a_{n} - a_{n - 1} = a_{n - 1} - a_{n - 2} + \pars{2^{n} + 4}}$. Then,
\begin{align}
\underbrace{\sum_{k = 2}^{n}\pars{a_{k} - a_{k - 1}}}
_{\ds{a_{n} - a_{1} = a_{n}}}\ &\ =\
\underbrace{\sum_{k = 2}^{n}\pars{a_{k - 1} - a_{k - 2}}}
_{\ds{a_{n - 1} - a_{0} = a_{n - 1}}}\ +\
\underbrace{\sum_{k = 2}^{n}\pars{2^{k} + 4}}_{\ds{2^{n + 1} + 4n - 8}}
\\[5mm] \implies a_{n} - a_{n - 1} & = 2^{n + 1} + 4n - 8
\end{align}
Similarly,
\begin{align}
\overbrace{\sum_{k = 2}^{n}\pars{a_{k} - a_{k - 1}}}^{\ds{a_{n} - a_{1} = a_{n}}}\ & \ =\
\sum_{k = 2}^{n}\pars{2^{k + 1} + 4k - 8} =
\sum_{k = 1}^{n - 1}\pars{2^{k + 2} + 4k - 4}
\\[5mm] & =
2^{3}\,{2^{n - 1} - 1 \over 2 - 1} + 4\,{\pars{n - 1}n \over 2} - 4\pars{n - 1}
\end{align}
$$
\bbx{a_{n} = 2^{n + 2} + 2n^{2} - 6n - 4\,,\qquad n = 0,1,2,\ldots}
$$
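As a sanity check, here is a small Python sketch comparing the closed form with the recurrence (variable names are ad hoc):

```python
# verify a_n = 2^(n+2) + 2n^2 - 6n - 4 against a_n = 2a_{n-1} - a_{n-2} + 2^n + 4
a = [0, 0]
for n in range(2, 30):
    a.append(2 * a[n - 1] - a[n - 2] + 2**n + 4)

assert all(a[n] == 2**(n + 2) + 2 * n * n - 6 * n - 4 for n in range(30))
print(a[:6])  # [0, 0, 8, 28, 68, 144]
```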
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Intuition behind $E(XY) = E(X) E(Y) $ for independent random variables $X,Y$ I have been wondering what's the intuition behind a well known result: $E(XY) = E(X) E(Y) $ for independent random variables $X,Y$
I found this post: here which kinda solves the problem.
But, the explanation given there seems to be not clear enough for me.
What I think:
Without loss of generality, we know that besides independence we can assume that both random variables, $X$ and $Y$ are simple random variables, and so, it is possible to represent them as, i.e. taking X first:
$X = \sum^n_{i=1} a_i 1_{A_i}$, then compute the product $XY$ and take expectation.
But could somebody please explain the intuition behind it to me?
I really want to get the notion of how to understand the result of that post (which I believe is correct).
Thank you all!
|
It is hard to give a precise answer since you are asking for intuition.
Suppose for a certain number $b$ you compute $bX$. What's the expected value of this computation? Well, if the realization of the variable $X$ was done independently of the choice of the number $b$, then your computation will produce on average $b\,E(X)$. Now make $b$ random...
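If a numerical illustration helps, here is a small Monte Carlo sketch (the distributions are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(2.0, size=1_000_000)  # E(X) = 2
Y = rng.normal(3.0, 1.0, size=1_000_000)  # E(Y) = 3, drawn independently of X

print((X * Y).mean())       # ~ 6.0
print(X.mean() * Y.mean())  # ~ 6.0
```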
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is my professor's explanation of direct sum accurate? This is coming from a graduate-level abstract algebra class, for reference. My professor says that given two groups $G,H$ we say $C$ is the direct sum of $G$ and $H$ and write $C = G \oplus H$ if $G$ and $H$ are disjoint except for zero and $C = G+H = \{g+h \mid g \in G, h \in H\}$.
My issue is that this isn't really a definition, since for arbitrary groups $G$ and $H$ that aren't necessarily disjoint, this isn't defined. For example, I have an assignment question to show that the direct sum of two modules with a certain property still has that property, but I don't see how the direct sum is defined for arbitrary modules. For example, what would $\mathbb{Z}_3 \oplus \mathbb{Z_2}$ be?
|
For arbitrary modules $M, N$ over a ring $A$, there's an external direct sum module $M\boxplus N = \{(m, n):\, m\in M, n\in N\}$ with operation $(m, n) + (m', n') = (m + m', n + n')$ and $A$-action $a(m, n) = (am, an)$. The resulting module has $M\boxplus N = M\oplus N$ in your notation, embedding $M$ and $N$ in $M\boxplus N$ in the obvious way. Furthermore, if $P = M\oplus N$, then the map $P \to M\boxplus N$ defined by $m + n \mapsto (m, n)$ is a well-defined isomorphism (injectivity following from the fact $M\cap N = 0$). As such, the two types of direct sum usually aren't distinguished in nomenclature or notation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2526923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
How to figure out if there is an actual horizontal tangent without a graph There is this practice problem that asks to determine the points at which the graph of $y^4=y^2-x^2$ has a horizontal tangent.
So I did implicit differentiation to find that
$$\displaystyle\frac{dy}{dx} = \frac{-x}{2y^3-y}$$
To find the horizontal tangent, I set $\frac{dy}{dx}=0$ and solved for $x$:
$$\begin{align}
\displaystyle\frac{dy}{dx} &= 0 \\
\frac{-x}{2y^3-y} &= 0 \\
-x &= 0 \\
x &= 0
\end{align}$$
Then I substituted $x=0$ into the equation of the curve:
$$\begin{align}
y^4&=y^2-(0)^2 \\
0 &= y^4 - y^2\\
0 &= y^2(y+1)(y-1) \\
y&=-1,\,0,\,1
\end{align}$$
I concluded that the points $(0,0)$, $(0,-1)$, and $(0,1)$ were the points with a horizontal tangent.
However, when I graphed this using Desmos, it turns out that the point $(0,0)$ did not look like it had a horizontal tangent.
Graph of y^4=y^2-x^2
How would I have been able to figure this out without graphing it?
|
If $y^4=y^2-x^2$, then differentiating implicitly, $4y^3y' = 2yy'-2x$, or $2y^3y' = yy'-x$.
If $y' = 0$, then $x = 0$. Putting this in, $y^4 = y^2$, so the possible values are $y = 0, \pm 1$.
At $x=y=0$, suppose $y' = c$. For small $x$ and $y$, $y \approx cx$, so $c^4x^4 \approx c^2x^2-x^2$ or, dividing by $x^2$, $c^4 x^2 \approx c^2-1$.
Since $c^4x^2$ is small, $c^2-1$ must also be small, so $c^2 \approx 1$, i.e. $c \approx \pm 1$.
This means that, at $(0, 0)$, $y' = \pm 1$.
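To double-check this numerically rather than graphically, one can solve the quartic for $y^2$ (quadratic formula applied to $y^4-y^2+x^2=0$) and look at the slope on the branch through the origin. A small Python sketch:

```python
import numpy as np

# branch of y^4 = y^2 - x^2 through the origin: y^2 = (1 - sqrt(1 - 4x^2)) / 2
for x in [0.1, 0.01, 0.001]:
    y = np.sqrt((1 - np.sqrt(1 - 4 * x**2)) / 2)
    print(x, y / x)  # slope estimate -> 1 (the reflected branch gives -1)
```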
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Let $K = \mathbb{Q}( \sqrt{2}, i)$ be the field generated over $\mathbb{Q}$ by $\sqrt 2$ and $i$ [Delhi-University PhD Screening test]
Let $K = \mathbb{Q}( \sqrt{2}, i)$ be the field generated over $\mathbb{Q}$ by $\sqrt 2$ and $i$. Then the dimension of $\mathbb{Q}( \sqrt2, i)$, as a $\mathbb{Q}$-vector space is equal to
(a) 1
(b) 2
(c) 3
(d) 4
Basis for $\mathbb{Q}(\sqrt2)$ and $\mathbb Q( i)$ are $\{1,\sqrt 2\}$ and $\{1,i\}$ respectively. So Basis for $\mathbb Q( \sqrt 2, i)$ is $\{1,\sqrt 2\} \times \{1,i\} =\{1,\sqrt 2,i,i\sqrt2\}$. Am I Right?
|
For a given extension of fields $L/K$ I simply write $[L:K]$ for the degree of $L$ over $K$ and $[a:K]$ for the degree of $a$ over $K$, defined as the degree of the minimal polynomial of $a$ over $K$. As we know, $[K(a):K] = [a:K]$ holds for $a$ algebraic over $K$. Furthermore we need $K(a,b) = K(a)(b) = K(b)(a)$: it doesn't make any difference in which order you adjoin elements to a field. Last but not least we know $[K(a):K] = [a:K] = 1 \Longleftrightarrow a \in K$. Let's start:
Well there is a theorem that says:
Let $L/Z/K$ be a given chain of field-extensions. Then
$$[L:K] = [L:Z] \cdot [Z:K].$$
In your case we have $ L =\mathbf{Q}(\sqrt{2},i) = \mathbf{Q}(\sqrt{2})(i), Z = \mathbf{Q}(\sqrt{2})$ and of course $K = \mathbf{Q}$, thus it holds
$$[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}] = [\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] \cdot [\mathbf{Q}(\sqrt{2}): \mathbf{Q}] = [\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] \cdot 2.$$ So lets compute $[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})]$. As one can check, it holds
$$[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] = [\mathbf{Q}(\sqrt{2})(i):\mathbf{Q}(\sqrt{2})] = [i:\mathbf{Q}(\sqrt{2})] \leq_* [i:\mathbf{Q}] = 2.$$
The inequality * follows from the following idea:
$[a:K]$ is defined as the degree of the minimal polynomial of $a$ over $K$.
Since the minimal polynomial $m_{i,\mathbf{Q}}$ of $i$ over $\mathbf{Q}$ is also a polynomial over $\mathbf{Q}(\sqrt{2})$ that has $i$ as a root, the degree of the minimal polynomial $m_{i,\mathbf{Q}(\sqrt{2})}$ of $i$ over $\mathbf{Q}(\sqrt{2})$ is bounded above by the degree of $m_{i,\mathbf{Q}}$; this shows the inequality.
Thus $[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})]\leq [i:\mathbf{Q}] = 2$. But $[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] = 1$ cannot hold since $i \notin \mathbf{Q}(\sqrt{2})$ as $\mathbf{Q}(\sqrt{2}) \subset \mathbf{R}$ holds.
Thus $[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] = 2$ and you get
$$[\mathbf{Q}(\sqrt{2},i):\mathbf{Q}] = [\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] \cdot [\mathbf{Q}(\sqrt{2}): \mathbf{Q}] = [\mathbf{Q}(\sqrt{2},i):\mathbf{Q}(\sqrt{2})] \cdot 2 = 2 \cdot 2 = 4$$
as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Direct sum of non-orientable bundles is orientable? Let $M$ be a smooth manifold, and let $E_1,E_2$ be two non-orientable vector bundles over $M$.
Is $E_1 \oplus E_2$ orientable?
I am sure there is an easy answer, but somehow my search didn't result with anything useful.
|
As Qiaochu says, the answer is no, in general. However, there is more to be said here.
An orientation on a vector bundle is a global section of the associated orientation bundle. So, a vector bundle is orientable if and only if its associated orientation bundle (which is a double cover) is trivial.
Now, an observation: Let $V$ and $V'$ be two real vector spaces. Then an orientation on $V\oplus V'$ is equivalent to a bijection $\mathrm{or}(V)\to\mathrm{or}(V')$, where $\mathrm{or}(\cdot)$ denotes the set of orientations.
As follows from the above observation, an orientation on the vector bundle $E_1\oplus E_2$ is equivalent to a bundle-isomorphism $\mathrm{or}(E_1)\to\mathrm{or}(E_2)$. In other words, $E_1\oplus E_2$ is orientable if and only if the orientation bundles of $E_1$ and $E_2$ are isomorphic. This is usually not the case.
Of course, the answer may change if the topology of $M$ is simple in some way. For example, if $M$ is the circle, then it has exactly two different double covers. Hence, every two non-orientable vector bundles have the same orientation bundle, and consequently, their direct sum is orientable.
Edit: As an example, let $M$ be the twice punctured plane, $$M=\mathbb{C}\setminus\{0,1\}.$$ Set $$P_0:=\left.\left\{(x,y)\in\mathbb{C}\times M\right|x^2=y\right\},\quad P_1:=\{(x,y)\in\mathbb{C}\times M|x^2=y-1\}.$$ So $P_0$ and $P_1$ are two different non-trivial double covers of $M$. Each one of them has an associated line bundle. Each of these line bundles is non-orientable, and so is their direct sum.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $\sum^{n}_{k=1}\sum^{k}_{r=0}r\binom{n}{r}$
Evaluate the summation $$\sum^{n}_{k=1}\sum^{k}_{r=0}r\binom{n}{r}$$
$\bf{Attempt:}$ From $$\sum^{n}_{k=1}\sum^{k}_{r=0}r\binom{n}{r} = \sum^{n}_{k=1}\sum^{k}_{r=0}\left[r\cdot \frac{n}{r}\binom{n-1}{r-1}\right] = n\sum^{n}_{k=1}\sum^{k}_{r=0}\binom{n-1}{r-1}$$
So $$ = n\sum^{n}_{k=1}\bigg[\binom{n-1}{0}+\binom{n-1}{1}+\cdots +\binom{n-1}{k-1}\bigg]$$
Could someone help me to solve it? Thanks.
|
We will leave out the $r=0$ term since it is $0$.
$$
\begin{align}
\sum_{k=1}^n\sum_{r=1}^kr\binom{n}{r}
&=\sum_{r=1}^n\sum_{k=r}^nr\binom{n}{r}\\
&=\sum_{r=1}^n(n-r+1)r\binom{n}{r}\\
&=\sum_{r=1}^n(n-r)r\binom{n}{r}+\sum_{r=1}^nr\binom{n}{r}\\
&=\sum_{r=1}^nn(n-1)\binom{n-2}{r-1}+\sum_{r=1}^nn\binom{n-1}{r-1}\\[6pt]
&=n(n-1)2^{n-2}+n2^{n-1}\\[15pt]
&=n(n+1)2^{n-2}
\end{align}
$$
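A quick brute-force check of the identity in Python:

```python
from math import comb

def lhs(n):
    return sum(r * comb(n, r) for k in range(1, n + 1) for r in range(1, k + 1))

for n in range(2, 12):
    assert lhs(n) == n * (n + 1) * 2**(n - 2)
print("ok")
```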
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Show that $\sigma(\mathcal{F})$ coincides with the countable-cocountable $\sigma$-algebra. Let $S$ be a set and let $\mathcal{F} = \{\{s\}:s\in S\}$ be the collection consisting of all sets which contain one element of $S$.
Let $\mathcal{A} = \{A\subseteq S:A \text{ is countable or $A^c$ is countable}\}$ be the countable-cocountable $\sigma$-algebra for $S$.
Question How do I show that $\sigma(\mathcal{F})$, the smallest $\sigma$-algebra containing $\mathcal{F}$, coincides with the countable-cocountable $\sigma$-algebra $\mathcal{A}$?
I assume coincides means 'is' here, please correct me if I'm wrong. The idea of $\sigma(\mathcal{F})$ was just introduced to me and I'm not so sure how to use it. I know that $\mathcal{A}$ is in fact an $\sigma$-algebra and I understand that $\mathcal{F}$ is contained by $\mathcal{A}$, but I don't know how to show that there is no smaller $\sigma$-algebra that contains $\mathcal{F}$.
Thanks in advance!
|
Suppose $\mathcal{S}$ is a $\sigma$-algebra and $\mathcal{F} \subseteq \mathcal{S}$.
It suffices to see that $\mathcal{A} \subseteq \mathcal{S}$: then by minimality $\mathcal{A} \subseteq \sigma(\mathcal{F})$, and since $\mathcal{A}$ is itself a $\sigma$-algebra containing $\mathcal{F}$ we also have $\sigma(\mathcal{F}) \subseteq \mathcal{A}$, so $\sigma(\mathcal{F}) = \mathcal{A}$.
So let $A$ be countable, then enumerate $A$ as $A = \{a_n: n \in \mathbb{N}\}$
Then for all $n$, $\{a_n\} \in \mathcal{F}$ so $\{a_n\} \in \mathcal{S}$ as well.
And as the latter is a $\sigma$-algebra, $A = \cup_n \{a_n\} \in \mathcal{S}$.
Now, if $A$ is cocountable, then $S\setminus A$ is countable, so $S \setminus A \in \mathcal{S}$ by the above paragraph. But $\mathcal{S}$ is closed under complements, so $S \setminus (S\setminus A) = A \in \mathcal{S}$ as well.
So all members of $\mathcal{A}$ are in $\mathcal{S}$ as required.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving a local minimum is a global minimum. Let $f(x,y)=xy+ \frac{50}{x}+\frac{20}{y}$, Find the global minimum / maximum of the function for $x>0,y>0$
Clearly the function has no global maximum since $f$ is not bounded.
I have found that the point $(5,2)$ is a local minimum of $f$. It seems pretty obvious that this point is a global minimum, but I'm struggling with a formal proof.
|
By AM-GM $$f(x,y)\geq3\sqrt[3]{xy\cdot\frac{50}{x}\cdot\frac{20}{y}}=30.$$
The equality occurs for $$xy=\frac{50}{x}=\frac{20}{y}=10,$$
id est, for $(x,y)=(5,2)$, which says that $30$ is a minimal value.
The maximum does not exist. Try $x\rightarrow0^+$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $K \subset F$ fields. Proof that $F^n$ and $F \,\otimes K^n$ are isomorphic as $F-$vector spaces Let $F$ and $K$ be fields such that $K \subset F$. We can consider the tensor product $F\, \otimes \, K^n$ as $F-$vector space with the operation:
$$ \lambda (a \otimes v) = (\lambda a) \otimes v, \quad \forall \lambda \in F, \, \forall a\otimes v \in F\,\otimes \, K^n. $$
How one can show that
$F^n$ and $F \,\otimes K^n$ are isomorphic as $F-$vector spaces?
I've tried the canonical map $\lambda \otimes (x_1,...,x_n) \in F \, \otimes \, K^n \mapsto (\lambda x_1,...,\lambda x_n) \in F^n$. However, I couldn't prove that this map is an isomorphism.
Help?
|
Let $x_i$ be a basis of $V=K^n$. Then the elements of $F\otimes V$ are of the form $$\sum \lambda_i\otimes x_i.$$ Mapping this to $(\lambda_1,\cdots ,\lambda_n)$ is an isomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $ \sum_{m > n > 0} \frac{m^2 - n^2}{(m^2 + n^2)^2} $ I have been trying around different ranges of summations:
$$ \sum_{(m,n) \in \mathbb{Z}^2} \frac{m^2 - n^2}{(m^2 + n^2)^2} = 0$$
That's not any good. What about if we restrict to $m, n \in \mathbb{Z}$ as positive integers.
$$ \sum_{m > 0, n > 0} \frac{m^2 - n^2}{(m^2 + n^2)^2} = 0$$
Now here's an anti-symmetry. I do not like taking out the symmetry, but perhaps I can ask about:
$$ \sum_{m > n > 0} \frac{m^2 - n^2}{(m^2 + n^2)^2} = \; ? \tag{$*$} $$
There doesn't seem to be a change of variables that can work. And we've used symmetry about as much as we can. This looks related to:
$$ \sum \frac{1}{n^2} = \frac{\pi^2}{6}$$
Perhaps this other series ($*$) also has a special value.
|
Fix $m\ge 2$ for a moment and consider the sum
$$
H(m):=\sum_{n=1}^{\lfloor m/2\rfloor}\frac{m^2-n^2}{(m^2+n^2)^2}.
$$
Here the numerator of each term is at least $3m^2/4$ (since $n\le m/2$), and the denominator is at most $4m^4$ (since $n\le m$). Therefore each term satisfies
$$
\frac{m^2-n^2}{(m^2+n^2)^2}\ge\frac{3m^2/4}{4m^4}=\frac{3}{16m^2}
$$
in this range. There are $\lfloor m/2\rfloor\ge m/4$ terms in $H(m)$, so we arrive at the lower bound
$$
H(m)\ge \frac{3}{64m}.
$$
This shows that the double sum
$$
\sum_{m>n>0}\frac{m^2-n^2}{(m^2+n^2)^2}
$$
diverges.
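For what it's worth, a numerical look at the partial sums (a sketch; it only illustrates the unbounded, roughly logarithmic growth):

```python
def partial(M):
    return sum((m * m - n * n) / (m * m + n * n)**2
               for m in range(2, M + 1) for n in range(1, m))

for M in [10, 100, 1000]:
    print(M, partial(M))  # keeps growing, consistent with divergence
```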
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2527911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find $\lim\limits_{x\to0}\frac 1x(x^{-\sin x}-(\sin x)^{-x})$ The question is to evaluate this limit:$$\lim_{x\to0}\frac{\big(\frac{1}{x}\big)^{\sin x}-\big(\frac{1}{\sin x}\big)^x}{x}$$
I tried using l'Hospital's rule, taking the logarithm, doing some manipulations using known limits, but without success.
|
I deleted my previous answer as there were (stupid) mistakes in it, but as it turns out l'Hopital's rule is not helpful, and actually there is a much easier answer. Write the expression as
$$\frac{e^{-(\ln x)(\sin x)}-e^{-x\ln \sin x}}{x}$$
$$=(\sin x)^{-x}\frac{e^{-(\ln x)(\sin x)+x\ln \sin x}-1}{x}$$
Now $(\sin x)^{-x}\to 1$ is easily verified. The remainder can be written as $$\frac{e^{f(x)}-1}{x}=\frac{e^{f(x)}-1}{f(x)}\cdot\frac{f(x)}{x},$$ where the first factor again tends to $1$. Thus we are reduced to evaluating
$$\frac{f(x)}{x}=\ln(\sin x)-\frac{\sin x}{x}\ln x=\ln\left(\frac{\sin x}{x}\right)-(x\ln x)\left(\frac{\sin x-x}{x^2}\right),$$ and it's easy to see that each term tends to zero. Hence the limit is $0$.
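A numerical check that the limit is $0$ (plain NumPy sketch; don't push $x$ too small, or cancellation takes over):

```python
import numpy as np

def g(x):
    return (x**(-np.sin(x)) - np.sin(x)**(-x)) / x

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(x, g(x))  # tends to 0
```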
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2528080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Show that the Moore plane is not normal
Definition: A Hausdorff space is normal (or: $T_4$) if each pair of disjoint closed sets have disjoint neighborhoods.
Then, we have
Exercise 5, pg. 158, Dugundji's Topology: Let $X$ be the upper half of the Euclidean plane $E^2$, bounded by the $x$-axis. Use the Euclidean topology on $\{(x,y)\,|\, y>0\}$, but define neighborhoods of the points $(x,0)$ to be $\{(x,0)\}\cup [\text{open disc in $\{(x,y)\,|\,y>0\}$ tangent to the $x$-axis at $(x,0)$}]$. Prove that this space is not normal.
It is easy to see that $X$ is Hausdorff. So, what we need is to find a pair of closed sets that fail to satisfy the definition given above. Here, Alice Munro says that $A=\{(x,0)\,|\, x\in \Bbb Q\}$ and $B=\{(x,0)\,|\, x\in \Bbb R-\Bbb Q\}$ are such sets. I can see that they are closed, but how can I show that they do not admit disjoint neighborhoods? (intuitively true, but I'm having difficulty to write it down...)
|
My preferred way (though the statement that these two sets are closed disjoint sets that cannot be separated is true) is to use a well-known cardinal number fact in normal spaces often called Jones' lemma:
Let $X$ be $T_4$. If $D$ is a dense subset of $X$ and $C$ is a closed and discrete (as a subspace) subset of $X$, then $2^{|C|} \le 2^{|D|}$.
I prove it in this answer; it uses Urysohn's extension theorem for normal spaces: we need a lot of different continuous real-valued functions on $X$ to separate all subsets of $C$, and $D$ bounds that number.
There I also show how it applies to the Moore plane (or Niemytzki plane): the $x$-axis is closed and discrete and $\mathbb{Q} \times \mathbb{Q}^+$ is countable and dense and $2^{|\mathbb{R}|} \not\le 2^{\aleph_0} = |\mathbb{R}|$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2528435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Prove $\det(1+tA)=1+t\cdot tr(A)+O(t^2)$ I need help proving
$\det(1+tA)=1+t\cdot \operatorname{tr}(A)+O(t^2)$
I'm not really sure where to start due to the $(1+tA)$; the $1$ (which here denotes the identity matrix) is throwing me off.
|
Assume $A \in M_{n\times n}(\mathbb{C})$. Suppose that $A$ is diagonalizable, with eigenvalues $\lambda_1,\dotsc,\lambda_n.$ Then $1+tA$ is also diagonalizable, with eigenvalues $1+t\lambda_i.$ The determinant of a diagonalizable matrix is the product of its eigenvalues, so we have
$$
\det(1+tA)=(1+t\lambda_1)\cdot\dotsb\cdot(1+t\lambda_n)\\=1+t(\lambda_1+\dotsb+\lambda_n)+\dotsb + t^n(\lambda_1\cdot\dotsb\cdot\lambda_n)\\=1+\text{tr}(A)t+\dotsb+t^n\det(A)=1+\text{tr}(A)t + O(t^2).
$$
If $A$ is not diagonalizable, observe that the diagonalizable matrices are dense in $M_{n\times n}(\mathbb{C}).$ By continuity, the above equation also holds for $A$.
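A quick numerical check with a random matrix (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

for t in [1e-1, 1e-2, 1e-3]:
    err = np.linalg.det(np.eye(4) + t * A) - (1 + t * np.trace(A))
    print(t, err)  # shrinks like t^2
```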
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2528609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Solution to the heat equation using the method of separation of variables Solution to the heat equation using the method of separation of variables
$$u_t=u_{xx},0\le x\le 2\pi, t>0\\\\u(0,t)=0\\u_x(2 \pi, t)=0\\u(x,0)=x=f(x)$$
my attempt:
let us take $u(x,t)=X(x)T(t)$
then I got $X''-kX=0$, $T'-kT=0$.
Now $k=0$ and $k=\lambda^2$ give only the trivial solution.
For $k=-\lambda^2$ I got $\lambda=(2n+1)/4$,
but I am not sure about it. Can anyone help?
|
For a first-order derivative, in this case in $t$, the time factor is exponential and can be used in the following. For the equation
$$u_{t} = u_{xx},$$ let $u(x,t) = e^{- \mu^2 t} \, F(x)$ to obtain $F'' + \mu^2 F = 0$. This yields the form $$F(x) = A \, \cos(\mu x) + B \, \sin(\mu x).$$ By using the condition $u(0,t) = 0$ this becomes $F(0) = 0$ and leads to $A = 0$ and $F(x) = B \, \sin(\mu x)$. Using the condition $u_{x}(2\pi, t) = 0$, or $F'(2\pi) = 0$, leads to $B \mu \, \cos(2 \pi \mu) = 0$ and provides $$\mu_{n} = \frac{2 n - 1}{4} \hspace{5mm} \text{ for } n \geq 1.$$ Placing this all together yields $$u(x,t) = \sum_{n=1}^{\infty} B_{n} \, e^{- \mu_{n}^{2} t} \, \sin(\mu_{n} x).$$ Applying the last condition, $u(x,0) = x$, gives a value to the coefficient $B_{n}$ by use of a Fourier sine series.
$$x = \sum_{n=1}^{\infty} B_{n} \, \sin(\mu_{n} x),$$ where $$B_{n} = \frac{1}{\pi} \, \int_{0}^{2 \pi} x \, \sin(\mu_{n} x) \, dx.$$ Calculating this integral yields the result $$B_{n} = \frac{16 \, (-1)^{n-1}}{\pi \, (2n -1)^{2}}.$$
The solution is then given as:
$$u(x,t) = \frac{1}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{\mu_{n}^{2}} \, e^{- \mu_{n}^{2} t} \, \sin(\mu_{n} x), $$ where $4 \mu_{n} = 2n -1$.
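A numerical check of the coefficients (Python sketch; it verifies that at $t=0$ the partial sums reproduce $f(x)=x$, with error of order $10^{-3}$ for 2000 terms):

```python
import numpy as np

x = np.linspace(0.1, 2 * np.pi, 50)
u0 = sum(16 * (-1)**(n - 1) / (np.pi * (2 * n - 1)**2)
         * np.sin((2 * n - 1) / 4 * x)
         for n in range(1, 2001))
print(np.max(np.abs(u0 - x)))  # small: the series converges to x on [0, 2*pi]
```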
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2528746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find all values of : $(2)^{i+1}=?$
Find all values of :
$$(2)^{i+1}=?$$
My Try :
$$i+1=\large\sqrt2e^{\frac{i\pi}4}$$
$$(2)^{\large\sqrt2e^{\frac{i\pi}4}}=?$$
now what ?
|
$$ 2^{i+1} = e^{ (i + 1) \ln 2} = e^{\ln 2} e^{i\ln 2} = 2(\cos (\ln 2) + i\sin (\ln 2)) $$
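(Strictly, for "all values" one allows the multivalued logarithm $\ln 2 + 2k\pi i$, which multiplies the principal value above by $e^{-2k\pi}$, $k\in\mathbb{Z}$.) Python's complex arithmetic agrees with the principal value:

```python
import math

print(2**(1 + 1j))  # principal value of 2^(i+1)
l = math.log(2)
print(2 * (math.cos(l) + 1j * math.sin(l)))  # 2(cos ln2 + i sin ln2), the same
```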
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2528850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is there a formula for coefficients of a given vector written in terms of the basis? Given that $\vec{u}_1,.., \vec{u}_N$ are a basis of $\mathbb{R}^N$, such that we can write any vector $\vec{v}\in\mathbb{R}^n$ as $$\vec{v} = \sum_{n=1}^Nc_n\vec{u}_n$$
Is there a formula to find the coefficients? In an old pdf I found the following $$c_n = \frac{1}{|\vec{u}_n|^2}\vec{v}\cdot\vec{u_n}$$
However it is very confusing and I don't understand whether it is a typo, whether it is talking about something else, or whether it is correct and I just never saw this formula in linear algebra.
And if this is true, how can I derive it?
My Try
Writing $\vec{v} = (v_1,..,v_n)^T$ and $\vec{c}= (c_1,.., c_n)^T$, and $U = (\vec{u}_1,..,\vec{u}_n)^T$ (the matrix whose rows are the $\vec{u}_i^T$), we have $\vec{v}^T = \vec{c}^T U$ and therefore $\vec{v} = U^T\vec{c}$, so that the coefficients can be found as $$\vec{c} = (U^T)^{-1}\vec{v}$$
So once we have the inverse of that matrix, we can calculate them. However I wouldn't know how to proceed.
Solution
Thanks to all those who answered. This formula only holds if the basis is orthogonal. This is because we start with $$\vec{v} = c_1\vec{u}_1+..+c_n\vec{u}_n$$
Then, say we want to find coefficient $c_1$, then we can multiply both sides by the corresponding basis vector: $$\vec{u}_1^T \cdot \vec{v} = c_1\vec{u}_1^T\vec{u}_1 + 0+..+0$$ and therefore $$c_1 = \frac{\vec{u}_1^T\cdot \vec{v}}{||\vec{u}_1||_2^2}$$
|
That formula holds only if the $u_n$'s are orthogonal. In that case, the coefficient $c_n$ is the scalar product of $v$ with $u_n$ divided by $\|u_n\|^2$ (for an orthonormal basis, just the scalar product). This is a basic theorem of linear algebra. In the general case the formula is not true.
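A small NumPy sketch with a hypothetical orthogonal (not orthonormal) basis:

```python
import numpy as np

u = [np.array([1., 1., 0.]), np.array([1., -1., 0.]), np.array([0., 0., 2.])]
v = np.array([3., 5., 7.])

# c_n = <v, u_n> / ||u_n||^2, valid because the u_n are pairwise orthogonal
c = [np.dot(v, un) / np.dot(un, un) for un in u]
print(c)                                                     # [4.0, -1.0, 3.5]
print(np.allclose(sum(cn * un for cn, un in zip(c, u)), v))  # True
```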
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
General rule for limit of $a_n^{b_n}$? Let $a_n$ and $b_n$ denote two sequences with well defined limits $a, b \in \mathbb{R}$ for $n\longrightarrow \infty$.
Is it possible to say the following? $$\lim (a_n^{b_n}) = (\lim a_n)^{\lim b_n} = a^b$$
If not, can you give a counterexample?
Edit: Assume that $a$ positive.
|
In general this does not hold.
Assume that $a = -2$ and $b=1/2$; then
$$\lim (a_n^{b_n}) = (\lim a_n)^{\lim b_n} = a^b =(-2)^{1/2} =\sqrt{-2}$$
does not make any sense in $\Bbb R$.
But if $a>0$, then there is a rank from which $a_n >0$, and you can therefore write
$$\lim (a_n^{b_n}) = \lim \exp(b_n\ln a_n) = \exp(b\ln a) = a^b.$$
If $a_n>0$, $a=0$ and $b<0$, then $\ln a_n\to -\infty$ and
$$\lim (a_n^{b_n}) = \lim \exp(b_n\ln a_n) =\infty.$$
If $a_n>0$, $a=0$ and $b>0$, then $\ln a_n\to -\infty$ and
$$\lim (a_n^{b_n}) = \lim \exp(b_n\ln a_n) =0.$$
If $a_n>0$, $a=0$ and $b=0$ we do not know a priori. Consider
$$\left(\frac{1}{n}\right)^\frac{1}{n} \to 1$$
but
$$\left(\frac{1}{e^n}\right)^\frac{1}{n} \to \frac{1}{e}.$$
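Numerically (Python sketch):

```python
import math

for n in [10, 100, 500]:
    print((1 / n)**(1 / n), math.exp(-n)**(1 / n))  # -> 1 and -> 1/e
```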
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Show that there is no positive real number which is less than every positive real number. I wondered what would happen if there were a positive real number less than every positive real number.
How can I prove that there is none?
|
Let $r$ be a positive real. Consider the set $S=\{x\in\mathbb{R}~:~x<r\}$. Now this set $S$ is non-empty, as $0\in S$, and it is bounded above by $r$. So it has a least upper bound, say $r^*$, and $r^*\leq r$ as $r\not<r$. If $r^*<r$, then $r^*<\frac{r+r^*}{2}<r$. Hence $\frac{r+r^*}{2}\in S$, which contradicts the fact that $r^*$ is an upper bound of $S$. So $r^*=r$, but $r\not\in S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
Function is bounded from below if sum of partial derivatives is positive Let $f:\mathbf{R}^n \to \mathbf{R}$ be differentiable, $\sum_{i=1}^n y_i \frac{\partial f}{\partial x_i}(y)\geq 0$ for all $y=(y_1,...,y_n)\in \mathbf{R}^n$. How do I show that $f$ is bounded from below by $f(0)$?
|
Hint:
The expression in your question equals $r\,\frac{\partial f}{\partial r}(y)$, the radial derivative of $f$ scaled by the radius $r=\|y\|$. So if that is nonnegative, then...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to show the action of $SL_2(\mathbb R)$ on the complex upper half plane is transitive? I've seen other answers here and here but I still am not understanding how to show that this action is transitive. I want to show that for all $z \in \mathbb H$ there exists a matrix $g \in SL_2(\mathbb R)$ such that $gi=z$.
If I start with $\frac{ai+b}{ci+d}=z=x+iy$, then I get
$$\frac{ai+b}{ci+d}=z \implies ai+b=z(ci+d) \implies i=\frac{dz-b}{a-cz}$$
I don't know what to do with this or what this tells me.
If I do $\frac{az+b}{cz+d}=i$ I don't get anywhere either.
How can I go about this?
|
Here's a hint. You can write
$$ z= a+bi = \frac{b\cdot i + a}{0\cdot i +1} ,$$
which is the transformation corresponding to the matrix
$$\begin{bmatrix} b & a \\ 0 & 1 \end{bmatrix} $$
evaluated on the point $i$ in $\mathbb{H}$. This matrix may not be in $SL_2 \mathbb{R} $, but you can easily fix that.
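A minimal sketch of that fix (here $b > 0$ since $z\in\mathbb{H}$): scaling a matrix by a nonzero constant does not change its Möbius action, so
$$\frac{1}{\sqrt{b}}\begin{bmatrix} b & a \\ 0 & 1 \end{bmatrix}=\begin{bmatrix} \sqrt{b} & a/\sqrt{b} \\ 0 & 1/\sqrt{b} \end{bmatrix}\in SL_2 \mathbb{R},\qquad \begin{bmatrix} \sqrt{b} & a/\sqrt{b} \\ 0 & 1/\sqrt{b} \end{bmatrix}\cdot i=\frac{\sqrt{b}\, i + a/\sqrt{b}}{1/\sqrt{b}}=a+bi.$$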
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Find all $C^{1}$ functions $f: (0,+\infty) \to (0, +\infty)$ such that $f(x)^{f'(x)}=x$, $f(1)=1$. As the question title says, I'm trying to find all $C^1$ functions $f:(0, +\infty) \to (0, +\infty)$ which satisfy $f(x)^{f'(x)} = x$, and $f(1)=1$.
I know that $f(x)=x$ is one solution. When I put everything into the exponent, I get $f'(x) \ln{f(x)} = \ln{x}$, which gives me the implicit solution $f(x)(\ln{f(x)}-1) = x(\ln{x}-1)+C$, $C \in \mathbb{R}$. By inserting $(1,1)$ into the implicit solution, I get that the solution must satisfy $f(x)(\ln{f(x)}-1) = x(\ln{x}-1)$. The problem here is that I can't use Picard's theorem and claim uniqueness, because the expression $f'(x) = \frac{\ln{x}}{\ln{f(x)}}$ isn't defined for $(x, f(x))=(1,1)$.
Is there a different way to prove uniqueness, or is there another solution to this equation?
|
It seems that there is another solution to the equation, but for $0<x<e$, its value decreases from $e$ to $0$, and for $x>e$ the value becomes complex and is no longer real. This means that the derivative does not exist at that point. Other than that, it is a valid solution. Notice that, for $0<x<1$ the $W_0(\_)$ branch is used, and for $1<x<e$ the $W_{-1}(\_)$ branch is used in the formula given by Robert Israel. The key is that for any solution, $f'(1)^2=1$ and thus $f'(1)=1$ or else $f'(1)=-1$. The first leads to the easy $f(x):=x$ and the other to the solution using Lambert $W$. These are the only two possible solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2529950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Usefulness of extended domain of derivative function? Say I have $f(x)=\ln(x)$. We know its domain is $(0,\infty)$, but the domain of $f’(x)$ is $(-\infty,0) \cup (0,\infty)$. Though this is a relatively simple example, is there any application of the derivative extended beyond its function’s real domain? For example, does the derivative when $x$ is negative in this case have any implications in the complex plane, for some sort of analytic continuity, or is the derivative never used where the function is not defined on the reals?
|
Alex is right that the derivative has the same domain as the function, but what you have discovered is that the derivative may be easier to extend analytically than the original function. In the example of $\log$, one can extend the derivative $1/x$ to all nonzero complex numbers, which gives one a hint as to the correct generalization of the log function to the complex numbers as $\log(z) = \int_\gamma \frac{dw}{w}$, where $\gamma$ is an appropriately-chosen path from $1$ to $z$.
In flavor, this is similar to applications in analytic number theory, where functions like the Riemann zeta function may have nice descriptions in terms of their derivatives on the real positive line, and this is a clue for how to proceed in the whole complex plane. I am not sure how precise this analogy turns out to be though.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
when can we interchange integration and differentiation Let $f$ be a Riemann Integrable function over $\mathbb{R}^2$. When can we do this?
$$\frac{\partial}{\partial\theta}\int_{a}^{b}f(x,\theta)dx=\int_{a}^{b}\frac{\partial}{\partial\theta}f(x,\theta)dx$$
(Here, $a$ and $b$ are not a function of $\theta$.)
In the problem, which I am solving recently, are like this:
$f_{\theta}(x)$, here $\theta$ is constant and $\theta\in\mathbb{R}$ (usually). For example $f_{\theta}(x)=x^2\theta$. So, I am blindly interchanging integration and differentiation because of continuity over $\theta$. But I want to know little bit more.
Also, what happens if $a$ and $b$ are function of $\theta$? Thanks.
|
You may interchange integration and differentiation precisely when Leibniz says you may. In your notation, for Riemann integrals: when $f(x,\theta)$ and $\frac{\partial f(x,\theta)}{\partial \theta}$ are jointly continuous in $x$ and $\theta$ in an open neighborhood of $[a,b] \times \{\theta\}$.
There is a similar statement for Lebesgue integrals.
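For the follow-up where $a$ and $b$ depend on $\theta$: assuming in addition that $a(\theta)$ and $b(\theta)$ are differentiable, the general Leibniz rule reads
$$\frac{d}{d\theta}\int_{a(\theta)}^{b(\theta)}f(x,\theta)\,dx=f\big(b(\theta),\theta\big)\,b'(\theta)-f\big(a(\theta),\theta\big)\,a'(\theta)+\int_{a(\theta)}^{b(\theta)}\frac{\partial f}{\partial\theta}(x,\theta)\,dx.$$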
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 1,
"answer_id": 0
}
|
Show that the torque for x = b is $M_y -b\rho S$
The area R on the xy-plane corresponds to a thin metal plate with the
area S and a constant density $\rho$. $M_y$ is the plate's moment
corresponding to the y-axis.
a) Show that the moment corresponding to $x = b$ is $M_y - b\rho S$,
if the plate is right from $x = b$.
b) Show that the moment corresponding to $x = b$ is $b\rho S - M_y$,
if the plate is left from $x = b$.
Now, I don't actually know much about physics. I assume they are talking about this moment, but I can't be sure. I also assume the answer has something to do with the fact that $m = \rho \cdot V$, but otherwise I'm stuck.
|
Saying that $M_y$ is the moment corresponding to the $y$-axis (i.e., the line $x = 0$) is saying that
$$
M_y = \iint_R x \rho~dy~dx
$$
where $\rho$ is the density and $x$ is the distance to the axis; note this is the first moment (the one relevant here), not the second moment ("moment of inertia") of the linked wikipedia page.
The thing you're supposed to compute replaces $x$ (the signed distance from a point $(x, y)$ to the $y$-axis) with $x - b$ (the signed distance from $(x, y)$ to the line $x = b$), i.e., you're computing
$$
M' = \iint_R (x-b) \rho~dy~dx
$$
"Where the plate is right from $x = b$" means that $x > b$ for all points $(x, y)$ of the plate, so $x - b$ is the actual (positive) distance; presumably this helps somehow in the simplifications you have to make.
Anyhow, do some algebra to relate $M'$ to the known value $M_y$, and see where it gets you.
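(If you want to check your algebra afterwards: in case (a), where $x > b$ on the plate, it comes down to
$$M'=\iint_R (x-b)\,\rho~dy~dx=\iint_R x\,\rho~dy~dx-b\rho\iint_R dy~dx=M_y-b\rho S,$$
and case (b) is the same computation with $b - x$ in place of $x - b$.)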
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the set of values for $c\in \mathbb R$ that allows real solution for $\sqrt{x}=\sqrt{\sqrt{x}+c}$
Given $\{x,c\}\subset \mathbb R$, $\sqrt{x}=\sqrt{\sqrt{x}+c}~~~~ (1)$.
Find: set of values for $c$ such that $(1)$ has solution in $\mathbb R$.
Question from the Brazilian Math Olympiad 2004. No solution provided.
I'm not sure whether I'm setting the right constraints to find the asked set. Hints and full solutions are appreciated.
My attempt: as an initial observation, the solution must consider at least 2 constraints: $(c_1)$ $x\ge 0;$ and, $(c_2)$ $\sqrt{x}\ge -c$, to avoid negative arguments in the square root.
By assuming ($c_1$) and ($c_2$) hold, and squaring both terms in (1) we get
$$\sqrt{x}=\sqrt{\sqrt{x}+c}\Leftrightarrow x=\sqrt{x}+c\Leftrightarrow x-c=\sqrt{x}.$$
Now considering an additional constraint $(c_3)$ $x-c\ge 0$, and squaring both terms, we get
$$x-c=\sqrt{x}\Leftrightarrow x^2-2cx+c^2=x \Leftrightarrow x^2-(2c+1)x+c^2=0$$
To solve this last equation, notice that
$$\triangle=(2c+1)^2-4c^2=4c+1$$
Therefore, another constraint, for real roots, is $\triangle\ge 0$ or $(c_4)$ $c\ge -1/4$. And the tentative solution for $x$, before checking constraints, will be given by
$$x=\frac{2c+1\pm\sqrt{4c+1}}{2}.$$
Now the set of values for $c$ that leads to a real solution in $x$, will be the set resulting from the intersection of 4 conditions:
$$\left\{
\begin{array}{l}
(c_1)~~x\ge 0 \Leftrightarrow \frac{2c+1\pm\sqrt{4c+1}}{2}\ge 0\Leftrightarrow 2c+1\ge\pm\sqrt{4c+1}\\
(c_2)~~\sqrt{x}\ge -c \Leftrightarrow \sqrt{\frac{2c+1\pm\sqrt{4c+1}}{2}}\ge -c\\
(c_3)~~x\ge c\Leftrightarrow \frac{2c+1\pm\sqrt{4c+1}}{2}\ge c\Leftrightarrow 1 \ge \pm\sqrt{4c+1}\Leftrightarrow 1 \ge \sqrt{4c+1}\Leftrightarrow c \le 0\\
(c_4)~~c\ge -1/4\\
\end{array}
\right.
$$
Question: (a) is this last step the right set of conditions to be intersected to give the asked set for $c$? (b) how to solve for the intersection of the conditions?
Hints and full answers are appreciated. Maybe I'm just complicating something that is a lot easier.
|
Just a slightly different point of view, which may be useful in a different situation. Since it is easier to express $c$ in terms of $x$ rather than the other way around, you might make this argument:
Let $f:\mathbb{R}_{\geq0} \to \mathbb{R}$ be given by $f(x) = x - \sqrt{x}$. Then
$$\exists x\in \mathbb{R} (x\geq 0 \land c = x - \sqrt{x} = f(x)) \Leftrightarrow c \in {\rm ran} (f)$$
(there is no need for any other conditions because the domain of $f$ is exactly $x \geq 0$). Now here the substitution $\sqrt{x} = t$ makes finding the range more convenient. I will do it by noting that $g: t\mapsto t^2$ (so that $g(\sqrt{x}) = x$), with domain and codomain $\mathbb{R}_{\geq0}$, is surjective and so ${\rm ran}(g) = {\rm dom}(f)$; not only can the two functions be composed, but also ${\rm ran} (f\circ g) = {\rm ran} (f)$.
So instead of studying the solutions of an equation for some (unspecified) $c$, you could examine the function $f\circ g: \mathbb{R}_{\geq0} \to \mathbb{R}$, given by $f(g(t)) = t^2 - t$. (All the talk about domains and ranges in the previous paragraph was to demonstrate that no additional constraints are needed.) It's a quadratic restricted to $[0, +\infty)$ whose vertex is easily found to be $(1/2, -1/4)$ so the range is $[-1/4, +\infty)$.
This isn't really any different from finding the roots (parametrically) and checking/requiring that at least one be non-negative (as Yves does). But if it had been something more complicated than a quadratic, it is usually easier to find the range than to solve the corresponding equation.
EDIT: Note that when solving equations with radicals on both sides, $\sqrt{f(x)} = \sqrt{g(x)}$, it is redundant to add both $f(x) \geq 0$ and $g(x) \geq 0$. For any solution of the squared equation, $f(x) = g(x)$ they are equivalent. In your approach, $(c_1) \Leftrightarrow (c_2)$. And later on, the second squaring produces $x = (x-c)^2$ which automatically guarantees $(c_1)$. So only $(c_3)$ and $(c_4)$ are left, and it is not hard to check that the latter implies the former, since it is enough for one of the roots to meet it (I think you had already figured that out).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Numerically computing $(X^tX)^{-1}X^t y$ and $(X^tX)^{-1}z$ In some algorithm I need to compute $a=(X^tX)^{-1}X^ty$ and $b=(X^tX)^{-1} z$ in each step, where $X$ is a $n \times p$ non-square matrix ($n \geq p$, $p$ is increased in each step) and $y,z$ are some appropriate vectors, but I'd like to do so in an efficient way.
What I came up with so far:
* To compute just $a$, it is worth noting that $a$ is a least squares solution, i.e. $a$ minimizes $\Vert Xa-y\Vert_2^2$, so I would have calculated $a$ using the QR decomposition of $X$ (cost is $O(np^2)$), but now I also need to compute $b$.
* So another naive approach would be computing $W := X^tX$ and $v = X^ty$, and then computing $a= W^{-1} v$, $b= W^{-1} z$. This approach has the advantage that we could use e.g. a Cholesky decomposition of $W$ (cost $O(p^3)$), but the drawbacks are that we need to compute $W$ first (cost $O(np^2)$) and that $X^tX$ has a worse condition number than $X$.
But is there a more efficient way to compute $a$ and $b$?
|
With the reduced (thin) QR decomposition $X=QR$, where $R$ is square and upper triangular, you get $X^tX=R^tR$, so that $a=R^{-1}Q^ty$ and $b=R^{-1}(R^t)^{-1}z$, where you only have triangular systems to solve.
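A NumPy/SciPy sketch of this (variable names follow the question; it assumes `X`, `y`, `z` are given as arrays):

```python
import numpy as np
from scipy.linalg import solve_triangular

def solve_both(X, y, z):
    # thin QR: X = Q R with Q (n x p), R (p x p) upper triangular
    Q, R = np.linalg.qr(X)
    a = solve_triangular(R, Q.T @ y)          # R a = Q^T y
    w = solve_triangular(R.T, z, lower=True)  # R^T w = z
    b = solve_triangular(R, w)                # R b = w
    return a, b
```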
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Reference request for Minimal Surfaces. I need books or articles based on minimal surfaces. By minimal surface, I mean a surface with 0 mean curvature.
More specifically, I wish to explore the Plateau's Problem: There exists a minimal surface with a given boundary.
I would also like to see a proof of the fact that a surface of revolution that is minimal is either a plane, helicoid or a catenoid.
As a supplementary text, could I also have a reference for calculus of variations? (Unimportant, but why is calculus of variations not taught as a course in universities?)
Thank you for your time.
|
Some nice texts are the following:
* Geometric Measure Theory and Minimal Surfaces - Bombieri, E.
* Lectures on Geometric Measure Theory - Simon, L.
* Minimal Surfaces and Functions of Bounded Variation - Giusti, E.
* Sets of Finite Perimeter and Geometric Variational Problems - Maggi, F.
* Calculus of Variations and PDEs - Ambrosio, L. & Norman, D.
* Gamma Convergence for Beginners - Braides, A.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Are all quadratics factorable into a product of two binomials? I'm learning algebra in school, and my teacher said that all quadratics are factorable into a product of two binomials. I then realized however that some quadratics would have imaginary roots, and therefore wouldn't be able to be put into factored form. Who's wrong here, my teacher or me? For example, can $x^2 + 4x + 1$ even be expressed in factored form? Thanks in advance.
|
It really depends on whether you want to have complex factors or not. If you can have complex factors, every expression can be. If not, then only if $b^2\ge4ac$ would they be factorable.
Take $x^2+1$, it can be factored into $(x-i)(x+i)$ but none of the factors are real.
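For the question's specific example, $b^2-4ac = 16-4 = 12 > 0$, so real binomial factors do exist:
$$x^2+4x+1=(x+2)^2-3=\left(x+2-\sqrt{3}\right)\left(x+2+\sqrt{3}\right).$$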
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2530935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 1
}
|
Change of variables and circular solutions Consider the equations:
$ \frac{dx}{dt} = y$ and $ \frac{dy}{dt} = -x $
By transforming variables, obtain $ \frac{dr^2}{dt}=0 $ and $ \frac{d \theta}{dt} = -1$
I know that if I let $r^2 =x^2 +y^2$, then $\frac{dr^2}{dt} = 2x \frac{dx}{dt} + 2y \frac{dy}{dt} = 2xy-2xy = 0$, which would mean $r^2$ is neutrally stable, but I don't understand where this gets me, much less what a change of variables is.
Moreover, what can one conclude about whether these are circular solutions, their direction, and their stability. I have not seen this mentioned before.
|
Assuming you're using polar coordinates, a change of variables to the polar coordinate system corresponds to $x=r\cos θ$, $y=r\sin θ$, $r^2=x^2+y^2$:
It is :
$$r^2 = x^2 + y^2 \Rightarrow 2rr' = 2xx' + 2yy' \Rightarrow rr' = xx' + yy' $$
Substituting $x',y'$ from your given system :
$$rr'= xy - yx = 0$$
To find the angle $θ$, we take :
$$\dfrac{r \sin \theta}{r \cos \theta} = \tan \theta = \dfrac{y}{x}$$
Using the quotient rule, we get :
$$\theta' = \dfrac{x y' - y x'}{r^2}$$
Substituting $x',y'$ as before, from your given system :
$$θ' = \frac{-x^2 -y^2}{r^2} =\frac{-(x^2 + y^2)}{r^2}=\frac{-r^2}{r^2}=-1$$
This is how you get the expressions.
Going over stability, it's easy to see that $O(0,0)$ is a stationary point of your given system and also the only one.
Since you have $rr' = 0$, you can conclude that $O(0,0)$ is classified as a center for your system.
You can easily double check that by doing the common stability way :
$$J(x,y) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
Which means that :
$$J(0,0) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
and that :
$$\det(J(0,0) - λI) = 0 \Leftrightarrow \cdots \Leftrightarrow λ^2 + 1=0 \Leftrightarrow λ = \pm i$$
Since you have purely imaginary eigenvalues, $O(0,0)$ is a center for your system, thus we have double checked our finding.
Can you now make a conclusion about the direction and the circularity of the solutions ?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Is this function one of the inner products on $\mathbb{R}^3$? If $\vec u = (u_1, u_2, u_3)$ and $\vec v = (v_1, v_2, v_3)$ are vectors of $\mathbb{R}^3$, then $f(u, v) =$
$2u_1v_1 + 3u_2v_2 - 2u_2v_2$ is an inner product on $\mathbb{R}^3$.
My answer is False, because if we simplify $f(\vec u,\vec v)$, we find that
$f(\vec u,\vec v) = 2u_1v_1 + u_2v_2$. As we can see, there is no $u_3v_3$ term in the function, so my opinion is that since nothing in the function involves $u_3$ and $v_3$, the function is not an inner product on $\mathbb{R}^3$.
Please correct me if I'm wrong; it would be very helpful if anyone could give the right answer.
|
To be more clear, note that $$f((0,0,1),(0,0,1))=0$$
but $(0,0,1) \neq (0,0,0)$ violating the positive definiteness.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Confusion about differential of multivariable functions. I have been given a function $F: \mathbb{R}^2 \to \mathbb{R}^3$
and another function $\beta: J \to \mathbb{R}^3: t \mapsto F(a + tx)$
where $x = v_1(1,0) + v_2(0,1) = (v_1,v_2)$ and $a$ is fixed. We can assume that all given functions are differentiable on their domain.
I'm asked to find $\beta'(0)$ (this should be equal to $D_1F(a)v_1 + D_2F(a)v_2$)
My attempt:
Let $R(t) = a+tx$
Then $\beta'(0) = D\beta(0) = D(F\circ R)(0) = DF(R(0)) \circ DR(0)$
and then I'm stuck.
|
$R(t)=(R_{1}(t),R_{2}(t)):=(a_{1}+tx_{1},a_{2}+tx_{2})$, and $\beta=F\circ R$, so $\beta'(0)=\dfrac{\partial F}{\partial x}(R(0))R_{1}'(0)+\dfrac{\partial F}{\partial y}(R(0))R_{2}'(0)$. Note that $R_{1}'(0)=x_{1}$ and $R_{2}'(0)=x_{2}$.
Here $\dfrac{\partial F}{\partial x}=D_{1}F$ and similar to $D_{2}F$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Limit of a trig function. (Without using L'Hopital) I'm having trouble figuring out what to do here, I'm supposed to find this limit:
$$\lim_{x\rightarrow0} \frac{x\cos(x)-\sin(x)}{x^3}$$
But I don't know where to start, any hint would be appreciated, thanks!
|
Since$$x\cos(x)-\sin(x)=x\left(1-\frac{x^2}2+\cdots\right)-\left(x-\frac{x^3}6+\cdots\right)=-\frac{x^3}3+\cdots,$$your limit is equal to $-\dfrac13$.
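A numeric check (Python sketch; don't push $x$ much smaller, or floating-point cancellation takes over):

```python
import numpy as np

for x in [1e-1, 1e-2, 1e-3]:
    print(x, (x * np.cos(x) - np.sin(x)) / x**3)  # -> -1/3
```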
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Group homomorphisms $S_3\to\mathbb{Z}/2\mathbb{Z}$ Knowing that $S_3=\{\text{id},\sigma,\sigma^2,\tau,\tau\circ\sigma,\tau\circ\sigma^2\}$ ($\tau=(1 2), \sigma=(123)$), why does a group homomorphism $f:S_3\to\mathbb{Z}/2\mathbb{Z}$ satisfy $f(a)=0$ for all $a\in S_3$ such that $a^2\neq e$?
|
In $S_3$, for every element $a$ with $a^2\ne e$, we have $a^3=e$. If we map $a$ to $1$ instead of $0$, this creates a problem, because $1+1+1\ne 0$ in $\Bbb Z / 2\Bbb Z$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Gambler's ruin model In the gambler's ruin model, $X_n$ is a gambling player's fortune after the $n^{th}$ game, when making 1 dollar bets at each game.
Also, for fixed $0<p<1$, we can find random variables $\{Z_i\}$ which are i.i.d. with $P(Z_i=1)=p$ and $P(Z_i=-1)=1-p$. So, we can set $$X_n=a+Z_1+Z_2+...Z_n$$ with $X_0 = a$.
Suppose that $0<a<c$, and let $\tau_0=\inf\{n\ge0:X_n=0\}$ and $\tau_c=\inf\{n\ge0:X_n=c\}$ be the first hitting time of $0$ and $c$, respectively.
The Question is,
For the gambler's ruin model mentioned above, let $\beta_n=P(\min(\tau_0,\tau_c)>n)$ be the probability that the player's fortune has not hit $0$ or $c$ by time $n$.
(a) Find any explicit, simple expression $\gamma_n$ such that $\beta_n<\gamma_n$ for all $n \in N$, and such that $\lim_{n \to \infty}\gamma_n=0$.
(b) Also, find any explicit, simple expression $\alpha_n$ such that $\beta_n>\alpha_n>0$ for all $n \in N$.
I have no idea where to start to find $\gamma_n$ which is greater than $\beta_n$ for all $n$.
For (b), I thought about $\alpha_n=P(\tau_0>n)$, the probability that player's fortune is not hitting $0$ by time $n$. But I'm not sure how to show $\beta_n>\alpha_n>0$ for all $n \in N$.
Thanks for any help...
|
To get an upper bound, note that if the gambler hasn't hit the boundary by time $n$, then he hasn't had a win or loss streak of length $c$. The latter event is contained in the event that none of the sequences $(Z_1, \ldots, Z_c), (Z_{c+1}, \ldots, Z_{2c}), \ldots, (Z_{(m-1)c+1}, \ldots, Z_{mc})$ are the identically $+1$ or $-1$ sequence, where $m = \lfloor \frac{n}{c} \rfloor$. Excluding just these sequences is enough, and it makes the analysis easier.
The probability that a given sequence of $Z_i$'s of length $c$ is not all $+1$'s or all $-1$'s is $1-p^c-(1-p)^c.$ Thus
$\beta_n < (1-p^c-(1-p)^c)^{\lfloor \frac{n}{c} \rfloor} = \gamma_n$,
and $\gamma_n \to 0$ (exponentially quickly!) as $n \to \infty$. (Is this simple enough?)
Getting a positive lower bound is easy: just choose any sequence of wins and losses of length $n$ that doesn't hit the boundary! For example, if the gambler repeatedly wins, then loses, then wins, etc., he will never hit the boundary (as long as $c > 2$). This event has probability
$\mathbb{P}(Z_i = (-1)^i, 1 \leq i \leq n) \geq p^{n/2}(1-p)^{n/2}. $
So $\beta_n > (p(1-p))^{n/2} = \alpha_n$ works.
As an aside: the distribution of your two-sided hitting time $\min\{\tau_c, \tau_0\}$ can be computed exactly, using the optional stopping theorem with some clever martingales. (I can't find a good reference right now, but the method you need to do this is standard.)
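A Monte Carlo sketch of the sandwich $\alpha_n < \beta_n < \gamma_n$ (the parameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p, a, c, n, trials = 0.5, 5, 10, 30, 200_000

Z = np.where(rng.random((trials, n)) < p, 1, -1)
X = a + np.cumsum(Z, axis=1)
beta_hat = ((X > 0) & (X < c)).all(axis=1).mean()  # P(min(tau_0, tau_c) > n)

gamma = (1 - p**c - (1 - p)**c) ** (n // c)
alpha = (p * (1 - p)) ** (n / 2)
print(alpha, beta_hat, gamma)  # alpha < beta_hat < gamma
```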
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Does $\bigcup_{n=1}^\infty \left(-\infty, 1-\frac{1}{n}\right) = (-\infty, 1)$? I'm currently trying to think of an example of a proper subset of $\mathbb{R}$ that is not compact in the topological space $(\mathbb{R}, \tau)$, where $\tau = \{(-\infty, a): a\in \mathbb{R}\}\cup\{\emptyset, \mathbb{R}\}$.
I suspect that $(-\infty, 1)$ is not compact. In trying to show this, I'm hoping that an open cover can be given by $\bigcup_{n=1}^\infty \left(-\infty, 1-\frac{1}{n}\right)$, but this is only true if $(-\infty, 1) \subset \bigcup_{n=1}^\infty \left(-\infty, 1-\frac{1}{n}\right)$. If this is true, then certainly there is no finite subcover of $(-\infty, 1)$ coming from $\bigcup_{n=1}^\infty \left(-\infty, 1-\frac{1}{n}\right)$, and so $(-\infty, 1)$ is not compact.
So, does $\bigcup_{n=1}^\infty (-\infty, 1-\frac{1}{n}) = (-\infty, 1)$?
|
We wish to show that for any $x \in (-\infty, 1)$, there exists a positive integer $n$ such that $x < 1 - 1/n$, or equivalently, $$n > 1/(1-x).$$ Recall the Archimedean property of the reals, which states that for any reals $a, b > 0$, there exists a positive integer $n$ such that $na > b$. So if we choose $$a = 1, \quad b = 1/(1-x),$$ (the latter choice being positive since $x < 1$ implies $1-x > 0$), we obtain the desired assertion. The reverse inclusion is immediate, since each $\left(-\infty, 1-\frac{1}{n}\right)$ is contained in $(-\infty, 1)$, so the proof is complete.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Linear Transformation of Dependent set. Let $V$ and $W$ be vector spaces and let $T: V\to W$ be a linear transformation. Let $\{v_1, v_2,\ldots, v_p\}$ be a linearly dependent set of vectors in $V$. Show that $\{Tv_1, Tv_2,\ldots, Tv_p\}$ is also linearly dependent.
|
If $\{v_1, v_2,\ldots, v_p\}$ is linearly dependent, there exists $a_1, \cdots, a_p \in \mathbb{R}$, such that
$$a_1v_1+\cdots+a_pv_p =0 $$
where $a_i \neq 0$ for at least one $i \in \{1,\cdots,p\}$. Now, if we apply $T$, it follows that:
$$a_1v_1+\cdots+a_pv_p =0 \Rightarrow T(a_1v_1+\cdots+a_pv_p)=T(0)$$
But, $T$ is linear, and so $T(0)=0$. Thus,
$$a_1T(v_1)+\cdots+a_pT(v_p)=0$$
that is a linear combination of vectors $T(v_1), \cdots, T(v_p)$, such that at least one coefficient is non null. This means that the set $\{T(v_1),\ldots, T(v_p)\}$ is linearly dependent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Epsilon-delta derivative proof of $x^n$ I'm currently trying to prove the power rule using the epsilon-delta definition of a derivative. I've already done it for the basic limit definition, but I thought it might be a helpful exercise to test my understanding by doing it this way. However, I'm struggling and would really appreciate any help/hints.
My work so far:
The epsilon-delta definition says a function is differentiable at $x_0$ with derivative $f'(x_0)$ if, for every $\epsilon > 0$, there exists a $\delta > 0$ such that $0 < |x-x_0| < \delta$ implies $\left|\frac{f(x)-f(x_0)}{x-x_0} - f'(x_0)\right| < \epsilon$. Thus, I need to find a delta, as a function of epsilon, and possibly $x_0$, such that whenever the first inequality is true the second one is as well.
So, fix $x_o \in R$, and let $|x-x_0| < \delta$, then $|\frac{x^n - x_{0}^{n}}{x-x_0} - nx_{0}^{n-1}| = |\frac{x^n - nxx_{0}^{n-1} + (n-1)x_{0}^{n}}{x-x_0}|$.
Upper-bounding this with $x < \delta + x_0$ and simplifying, we get $|\frac{(\delta + x_0)^n - n\delta x_{0}^{n-1}-x_{0}^{n}}{\delta}| = |\frac{(\delta + x_0)^n}{\delta} -nx_{0}^{n-1} - x_{0}^{n}| < \epsilon$.
Is that right so far, and if so, any advice on how to push through this last step and solve for $\delta$? I'm stumped, to be honest.
-Thank you in advance for your help!
|
Using
$$(x^n - x_0^n) = (x - x_0)\sum_{k=0}^{n-1} x^k x_0^{n-1-k}$$
One can write
$$
\frac{x^n - x_0^n}{x - x_0} - n x_0^{n-1} = \sum_{k=0}^{n-1} (x^k x_0^{n-1-k} - x_0^{n-1})
$$
Define $M = |x_0| + 1$ and suppose that $|x-x_0|\le 1$, then $|x|\le M$
and $|x_0|\le M$ and
$$
|x^k-x_0^k| \le |x - x_0|\sum_{p=0}^{k-1} |x|^p |x_0|^{k-1-p}\le |x - x_0|\sum_{p=0}^{k-1} M^{k-1}\le |x-x_0|k M^{k-1}
$$
We get
$$
\left|\frac{x^n - x_0^n}{x - x_0} - n x_0^{n-1}\right| \le \sum_{k=1}^{n-1} |x-x_0|k M^{k-1} |x_0|^{n-1-k}\le |x - x_0|M^{n-2}\sum_{k=1}^{n-1}k
\le |x - x_0|M^{n-2}\frac{n (n-1)}{2}
$$
For $n>1$ we can take
$$\delta = \min\left(1, \frac{2\varepsilon}{n(n-1)(|x_0|+1)^{n-2}}\right)$$
For $n=1$ there is no condition on $\delta$ because $f(x) = f(x_0) + (x-x_0)$.
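As a sanity check of this formula, take $n=2$: there $\left|\frac{x^2-x_0^2}{x-x_0}-2x_0\right|=|x-x_0|$, and the recipe gives $\delta=\min(1,\epsilon)$, which indeed works (in this case even $\delta=\epsilon$ alone would do).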
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2531961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Geometric/arithmetic sequences: $u_{n+1} = \frac12 u_{n} + 3$ I'm having trouble with writing this sequence as a function of $n$ because it's neither geometric nor arithmetic.
$$\begin{cases}
u_{n+1} = \frac12 u_{n} + 3\qquad \forall n \in \mathbb N\\
u_{0} = \frac13
\end{cases}$$
|
Let $u_n=v_n+a_0+a_1n+\cdots$
$$6=2u_{n+1}-u_n=2v_{n+1}-v_n+a_0(2-1)+a_1(2(n+1)-n)+\cdots$$
Set $a_0=6,\ a_r=0\ \forall r>0$
to find $$v_{n+1}=\dfrac{v_n}2=\cdots=\dfrac{v_{n-p}}{2^{p+1}}$$
Now $v_0+6=u_0\iff v_0=?$
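For the record, here is a sketch of the same idea phrased as a fixed-point shortcut (the fixed point $\ell=6$ solves $\ell=\frac{\ell}{2}+3$):
$$u_{n+1}-6=\tfrac12\,(u_n-6)\implies u_n-6=\left(\tfrac12\right)^n(u_0-6)\implies u_n=6-\frac{17}{3}\cdot\frac{1}{2^n},$$
consistent with $v_0=u_0-6=-\frac{17}{3}$ in the notation above.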
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Proving $\sum\limits_{r=1}^n \cot \frac{r\pi}{n+1}=0$ using complex numbers Let $x_1,x_2,...,x_n$ be the roots of the equation $x^n+x^{n-1}+...+x+1=0$.
The question is to compute the expression $$\frac{1}{x_1-1} + \frac{1}{x_2-1}+...+\frac{1}{x_n-1}$$
Hence to prove that $$\sum_{r=1}^n \cot \frac{r\pi}{n+1}=0$$
I tried rewriting the expression as $$\sum_{i=1}^n \frac{\bar{x_i}-1}{|x_i-1|^2}$$
I then used the fact that $$x^{n+1}-1=(x-1)(x^n+x^{n-1}+...+x+1)$$ so the $x_i$ are the non-trivial $(n+1)$-th roots of unity. Using the cosine formula I found that $$|x_i-1|^2=2-2\cos\left(\frac{2i\pi}{n+1}\right)=4\sin^2\left(\frac{i\pi}{n+1}\right)$$
After substituting this I couldn't simplify the resulting expression.Any ideas?Thanks.
|
Clearly, $x_k\ne1,1\le k\le n$
Let $y_k=\dfrac1{x_k-1}\ne0,1\le k\le n$
$\implies x_k=\dfrac{1+y_k}{y_k}$
Now $\displaystyle0=\sum_{r=0}^nx_k^r=\dfrac{x^{n+1}_k-1}{x_k-1}$
As $x_k-1$ is non-zero finite, $$x^{n+1}_k-1=0$$
$$\implies\left(\dfrac{1+y_k}{y_k}\right)^{n+1}=1$$
$$\binom{n+1}1y_k^n+\binom{n+1}2y_k^{n-1}+\cdots+1=0$$
$$\sum_{k=1}^ny_k=-\dfrac{\binom{n+1}2}{\binom{n+1}1}=?$$
Now as $x_k=e^{2k\pi i/(n+1)},1\le k\le n$
$$y_k=\dfrac1{e^{2k\pi i/(n+1)}-1}=\dfrac{e^{-k\pi i/(n+1)}}{e^{k\pi i/(n+1)}-e^{-k\pi i/(n+1)}}$$
$$=\dfrac{\cos\dfrac{k\pi}{n+1}-i\sin\dfrac{k\pi}{n+1}}{2i\sin\dfrac{k\pi}{n+1}}=?$$
Used: How to prove Euler's formula: $e^{it}=\cos t +i\sin t$?
$\implies e^{-it}=\cos t-i\sin t$
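Putting the two hints together and filling in the question marks (my own computation, worth double-checking): the last display gives
$$y_k=-\frac12-\frac{i}{2}\cot\frac{k\pi}{n+1},\qquad\text{while}\qquad\sum_{k=1}^n y_k=-\frac{\binom{n+1}2}{\binom{n+1}1}=-\frac n2.$$
The sum is real, so comparing imaginary parts yields $\sum_{k=1}^n\cot\dfrac{k\pi}{n+1}=0$, as required.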
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
}
|
Expand function in Legendre polynomials on the interval [-1,1] Expand the following function in Legendre polynomials on the interval [-1,1] :
$$f(x) = |x|$$
The Legendre polynomials $p_n (x)$ are defined by the formula :
$$p_n (x) = \frac {1}{2^n n!} \frac{d^n}{dx^n}(x^2-1)^n$$
for $n=0,1,2,3,...$
My attempt :
we have using the fact that $|x|$ is an even function.
$$a_0 = \frac {2}{\pi}$$
$$ a_n= \frac {2}{π} \int_{-1}^{1}x\cos(nx)\,dx$$
Then what is the next step ?
|
We know that the Fourier-Legendre series is like
$$
f(x)=\sum_{n=0}^\infty C_n P_n(x)
$$
where
$$
C_n=\frac{2n+1}{2} \int_{-1}^{1}f(x)P_n(x)\,dx
$$
So now we are going to calculate the result of
$$
\frac{2n+1}{2} \int_{-1}^{1}|x|P_n(x)\,dx
$$
As $|x|$ is an even function, and the parity of $P_n(x)$ matches the parity of $n$, the integrals with odd $n$ vanish and we can write
$$
\int_{-1}^{1}|x|P_n(x)\,dx
$$
as
$$
\int_{-1}^{1}|x|P_{2k}(x)\,dx\ \ k=0,1,2...
$$
and
$$
\int_{-1}^{1}|x|P_{2k}(x)\,dx \\
=\int_{-1}^{0}-xP_{2k}(x)\,dx + \int_{0}^{1}xP_{2k}(x)\,dx\\
=2\int_{0}^{1}xP_{2k}(x)\,dx
$$
As
$$
(n+1)P_{n+1}(x)-x(2n+1)P_{n}(x)+nP_{n-1}(x)=0
$$
we get
$$
2\int_{0}^{1}xP_{2k}(x)\,dx\\
=2(\frac{2k+1}{4k+1}\int_{0}^{1}P_{2k+1}(x)\,dx+\frac{2k}{4k+1}\int_{0}^{1}P_{2k-1}(x)\,dx)
$$
As
$$
\int_{0}^{1}P_{n}(x)\,dx=\begin{cases}
0& n=2k,\ k\ge 1\\
\frac{(-1)^k (2k-1)!!}{(2k+2)!!}& n=2k+1
\end{cases}
$$
We get
$$
C_{2k}=(2k+1)\frac{(-1)^k (2k-1)!!}{(2k+2)!!}+2k\frac{(-1)^{k-1} (2k-3)!!}{(2k)!!}\\
=\begin{cases}
\frac{1}{2}& k=0\\
\frac{(-1)^{k+1} (4k+1)}{2^{2k}(k-1)!}\frac{(2k-2)!}{(k+1)!}& k>0
\end{cases}
$$
So
$$
|x|=\frac{1}{2}+\sum_{k=1}^\infty \frac{(-1)^{k+1} (4k+1)}{2^{2k}(k-1)!}\frac{(2k-2)!}{(k+1)!} P_{2k}(x)
$$
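As a quick sanity check of the coefficient formula (my own evaluation, so worth re-deriving): $C_0=\frac12$, $C_2=\frac58$, $C_4=-\frac{3}{16}$, so the expansion begins
$$|x|\approx\frac12 P_0(x)+\frac58 P_2(x)-\frac{3}{16}P_4(x)+\cdots$$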
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Is this map continuous? Let $(E,\langle\cdot\;,\;\cdot\rangle)$ be a complex Hilbert space. For $M\in\mathcal{L}(E)^+$ (i.e. $M^*=M$ and $\langle M x\; |\;x\rangle\geq0$, for all $x\in E$). We define
the semi-inner product $\langle x\;,\;y\rangle_M:=\langle Mx\;,\;y\rangle,\; \forall x,y\in E$.
Assume that $\forall x,y\in E$, we have $x\cdot y\in E$ and $\langle x\cdot y\;,\;x\cdot y\rangle_M\leq \langle x\;,\;x\rangle_M\times \langle y\;,\;y\rangle_M$. We consider the following map:
\begin{eqnarray*}
\psi
:&(E,\langle\cdot\;,\;\cdot\rangle_M)\times (E,\langle\cdot\;,\;\cdot\rangle_M)&\longrightarrow (E,\langle\cdot\;,\;\cdot\rangle_M)\\
&(x,y)&\longmapsto x\cdot y\;
\end{eqnarray*}
Is $\psi$ continuous?
Thank you.
|
Let's write $\|x\|_M=\langle Mx,x\rangle^{1/2}$ for $x\in E$. Then for $x,y,x_0,y_0\in E$, writing $x\cdot y-x_0\cdot y_0=x\cdot(y-y_0)+(x-x_0)\cdot y_0$ (assuming, as the setup suggests, that the product is bilinear) and applying the assumed inequality $\|u\cdot v\|_M\le\|u\|_M\|v\|_M$, we have
$$\|x\cdot y-x_0\cdot y_0\|_M\leq\|x\|_M\|y-y_0\|_M+\|y_0\|_M\|x-x_0\|_M.$$
Now if $0<\varepsilon<1$, $\|x-x_0\|_M<\varepsilon$ and $\|y-y_0\|_M<\varepsilon$, then $\|x\|_M<\|x_0\|_M+1$. Then we have
$$\|x\cdot y-x_0\cdot y_0\|_M<(\|x_0\|_M+\|y_0\|_M+1)\varepsilon.$$
Thus if instead we suppose $\|x-x_0\|_M<\varepsilon/(\|x_0\|_M+\|y_0\|_M+1)$ and $\|y-y_0\|_M<\varepsilon/(\|x_0\|_M+\|y_0\|_M+1)$, then we have
$$\|x\cdot y-x_0\cdot y_0\|_M<\varepsilon.$$
From this, continuity of the product is easily seen.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I want to find some $v>0$ verifying this property Let us consider a function $f: (0,+∞)→(0,+∞)$. Assume that $f$ is continuously differentiable and strictly increasing for all $t>0$.
I want to find some $v>0$ verifying this property:
$$f(a)<f(v)≤c$$
where $a>0$ and $c>0$ are given real numbers. I have no idea how to start.
|
Let $a, c > 0$ such that $f(a) < c$ (otherwise it obviously doesn't hold). Then, because $f$ is continuous at $a$, there exists $\delta > 0$ such that $|v - a| < \delta \implies |f(v) - f(a)| \le c - f(a)$.
For any $v \in (a, a + \delta)$ we have $f(a) < f(v)$, since $f$ is strictly increasing, so:
$$0 < f(v) - f(a) = |f(v) - f(a)| \le c - f(a)$$
Which implies $f(a) < f(v) \le c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many uncountable subsets of power set of integers are there? The question is to determine how many uncountable subsets of ${P(\mathbb Z)}$ are there.
I think that the answer is $2^c$.
Let $A=\{B\in P(P(\mathbb Z)):B \text{ is uncountable}\}$
$P(P(\mathbb Z))$ has $2^c$ elements, so cardinality of $A$ is at most $2^c$.
Of course, I'm having trouble with the lower bound and I'm trying to find an injective function from some set of cardinality $2^c$ into $A$.
If anybody has any idea, I'd be very grateful!
|
Start with your definition (after renaming the bound variable) and add a second definition:
$$
A=\{X\in P(P(\mathbb Z)):X \text{ is uncountable}\} \\
B=\{X\in P(P(\mathbb Z)):X \text{ is countable}\}
$$
Consider $f(X) = \overline{X}$ (the complement of $X$ relative to $P(\mathbb Z)$) as a function $f: B \rightarrow A$. This is a function because every countable subset of $P(\mathbb Z)$ must have an uncountable complement. Given $X \cup \overline{X} = P(\mathbb Z)$, since the RHS is uncountable, the LHS must also be uncountable, so if $X$ is countable (and in $B$), $\overline{X}$ is uncountable (and in $A$).
The complement is injective by elementary set theory. Since an injection from $B$ to $A$ exists, $|B| \leq |A|$. But then we have:
$$
A \cup B = P(P(\mathbb Z)) \\
A \cap B = \emptyset \\
|A| + |B| = |P(P(\mathbb Z))| \\
|A| + |A| \geq |P(P(\mathbb Z))| \\
2|A| \geq |P(P(\mathbb Z))| \\
|A| \geq |P(P(\mathbb Z))| \qquad (\text{since } 2|A| = |A| \text{ for infinite } A)
$$
We already know that:
$$
A \subset P(P(\mathbb Z)) \\
|A| \leq |P(P(\mathbb Z))|
$$
So the equality follows by Schröder–Bernstein.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 1
}
|
Let $T:V\rightarrow W$ a linear transformation, $i_w:W\rightarrow W^{**}$ and $i_v:V\rightarrow V^{**}$ canonical morphism of biduality. Let $T:V\rightarrow W$ a linear transformation, $i_w:W\rightarrow W^{**}$ and $i_v:V\rightarrow V^{**}$ canonical morphism of biduality.
Prove $i_w\circ T=T^{**}\circ i_v$
I' m very very confused with this exercise.
Suppose i need to prove this:
$i_w\circ T\subset T^{**}\circ i_v$ and $T^{**}\circ i_v\subset i_w\circ T $ but i don't have idea of how to prove this. Can somone give me a hint?
|
Let's recall the definitions: if $T : V \to W$, then $T^{*} : W^* \to V^*$ is a linear map defined as $T^{*}(f) = f \circ T$ for all $f \in W^*$.
Then, $T^{**} := (T^{*})^{*} : V^{**} \to W^{**}$ is defined as $T^{**}(g) = g\circ T^{*}$ for all $g \in V^{**}$.
Now, to prove $i_w\circ T=T^{**}\circ i_v$ let's first establish the domain and codomain of both sides. Since $T : V \to W$ and $i_W : W \to W^{**}$, we have $i_w\circ T : V \to W^{**}$. Similarly, since $i_v : V \to V^{**}$ and $T^{**} : V^{**} \to W^{**}$, we have $T^{**}\circ i_v : V \to W^{**}$, so both sides represent a function with the same domain and codomain.
Let $x \in V$ be arbitrary. Let's show that $ (i_w\circ T)(x)= (T^{**}\circ i_v)(x)$. This is an equality of two functions $W^* \to \mathbb{F}$, where $\mathbb{F}$ is the underlying field of your vector spaces. Therefore take an arbitrary $f \in W^*$, and let's show $ (i_w\circ T)(x)(f)= (T^{**}\circ i_v)(x)(f)$ in $\mathbb{F}$.
\begin{align}(i_w\circ T)(x)(f) &= (i_w(T(x))(f)\\
&= f(T(x))\\
&= (f\circ T)(x)\\
&= (i_v(x))(f\circ T)\\
&= (i_v(x))(T^*(f))\\
&= (i_v(x) \circ T^*)(f)\\
&= (T^{**}(i_v(x))(f)\\
&= (T^{**}\circ i_v)(x)(f)\\
\end{align}
Hence, $i_w\circ T=T^{**}\circ i_v$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding simpler formula I have to find simpler formula to this one:
$$\lnot(p \land q) \lor (\lnot p \land q)$$
I started by using De Morgan's law to get: $$(\lnot p \lor \lnot q) \lor (\lnot p \land q)$$
Probably I am missing something because right now it seems it cannot go further. Thanks.
|
You have four different cases: $p$ true and $q$ true, $p$ true and $q$ false, $p$ false and $q$ true, $p$ and $q$ both false. Then
\begin{align}
& p \mbox{ true and }q \mbox{ true implies } \lnot p \mbox{ or } \lnot q \mbox{ false, } \lnot p \mbox{ and } q \mbox{ false } \\
& p \mbox{ true and }q \mbox{ false implies } \lnot p \mbox{ or } \lnot q \mbox{ true, } \lnot p \mbox{ and } q \mbox{ false } \\
& p \mbox{ false and }q \mbox{ true implies } \lnot p \mbox{ or } \lnot q \mbox{ true, } \lnot p \mbox{ and } q \mbox{ true } \\
&p \mbox{ false and }q \mbox{ false implies } \lnot p \mbox{ or } \lnot q \mbox{ true, } \lnot p \mbox{ and } q \mbox{ false.}
\end{align}
Hence $\lnot (p \mbox{ and } q) \mbox{ or } (\lnot p \mbox{ and } q)$ is false when both $p$ and $q$ are true, and true in the other cases. Thus it is equivalent to $q \rightarrow \lnot p$, which is false when both $p$ and $q$ are true and true in the other cases. You can also rewrite it as $\lnot q \mbox{ or } \lnot p$, which is $\lnot (p \mbox{ and } q)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Best way to compute the rank of $A$ Let
$$A=\begin{bmatrix}
1&2&3&4&5\\6&7&8&9&10\\11&12&13&14&15\\16&17&18&19&20\\21&22&23&24&25\end{bmatrix}.$$
Which would be the best way to compute its rank?
I first thought about computing the determinant but then it seemed better to find its echelon form which would give me the rank. Is there a more efficient way?
I found out that the last three rows were spanned by the first two (So the rank is 2) but it was kind of a lucky thing so I can't expect to get it every time like that.
|
You can notice that if $a=[1\ 2\ 3\ 4 \ 5]$ and $u=[1\ 1\ 1\ 1\ 1]$, then the matrix is
$$
\begin{bmatrix}
a \\
a + 5u \\
a + 10u \\
a + 15u \\
a + 20u
\end{bmatrix}
$$
and it is clear that the row space is generated by $a$ and $u$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2532993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
For what values of x does the series $\sum_{n=1}^{\infty} \frac{n^x}{x^n} $ converge?
For what values of x does the series $\sum_{n=1}^{\infty}\frac{n^x}{x^n} $ converge?
I've attempted to solve this problem but I can't finish up my reasoning - I don't know how to "check" the remaining numbers. Namely:
(1) I showed that this series converges absolutely for $x \in (-\infty,-1) \cup (1, \infty)$
(2) Then, I checked $x=1$ and $x=-1$ - the series does not converge, because $\frac{n^x}{x^n} $ does not approach $0$.
(3) Zero does not work because the series is not defined.
(4). Now, what I am left with is to check the interval $(-1,1)$, but I don't know how to do this. I can cater for $(0,1)$ using the ratio test for the initial series, but what about $(-1,0)$?
|
For $x \in (-1, 1), x\neq 0$ we have $x^n \overset{n\to\infty}{\longrightarrow} 0$. So for $0<x<1$ the denominator tends to $0$ while $n^x\to\infty$, hence $\frac{n^x}{x^n}\to\infty$ and in particular does not converge to zero.
For $x<0$ we can write $x=-\frac{1}{y}$ with $y>1$. Then the absolute value of our sequence is
$$\lvert\frac{n^x}{x^n}\rvert = \frac{y^n}{n^\frac1y} \ge \frac{y^n}{n}.$$
To see that this doesn't converge to zero, we can take the $\log$:
$$\log \left( \frac{y^n}{n} \right) = n\log(y) - \log(n)> 0 $$ for $n$ big enough.
That means $\frac{y^n}{n}>1$.
Alternative: (for showing $\frac{y^n}{n}$ doesn't converge to zero)
Recall that $y>1$, so we can write $y=1+ \epsilon$ with $\epsilon > 0$. Now
$$\frac{y^n}{n}=\frac{(1+\epsilon)^n}{n}=\frac{\sum_{k=0}^{n}\binom{n}{k}\epsilon^k}{n}=\frac{1+n\epsilon+\sum_{k=2}^{n}\binom{n}{k}\epsilon^k}{n} > \epsilon$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What exactly is the $H^{-1/2}$ space? What exactly is the $H^{-1/2}$ space?
Definition for $H^{1/2}$ given e.g. here.
|
$Y = \widehat{H^{1/2}(\mathbb{R})}$ is the Hilbert space of functions $f \in L^2(\mathbb{R})$ with the norm $$\|f\|_Y^2 = \int_{-\infty}^\infty (1+|x|) |f(x)|^2dx$$ ie. the inner product $$\langle f, g \rangle_Y = \int_{-\infty}^\infty (1+|x|) f(x)\overline{g(x)}dx$$
Its strong dual is $Y^*=\widehat{H^{1/2}(\mathbb{R})}^*$, the Hilbert space of functions with the norm $$\|g\|_{Y^*}^2 =\sup_{f \in Y} \frac{|\langle f, g \rangle_Y|^2}{\|f\|_Y^2}= \int_{-\infty}^\infty \frac{1}{1+|x|} |g(x)|^2dx$$ and inner product $$\langle g_1,g_2 \rangle_{Y^*} = \int_{-\infty}^\infty \frac{1}{1+|x|} g_1(x)\overline{g_2(x)}dx $$
Then $X=H^{1/2}(\mathbb{R})$ is the inverse Fourier transform of $Y$ and $X^*=H^{-1/2}(\mathbb{R})=H^{1/2}(\mathbb{R})^*$ is its strong dual, the inverse Fourier transform of $Y^*$.
We can also define those things in term of fractional derivatives (some self-adjoint and closed unbounded operators).
Restricting the norm to $\Omega$ we obtain $H^{\pm 1/2} (\Omega)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Divergence, curl, and gradient of a complex function From an answer here I got
Green's theorem for functions in the complex plane
$$
\oint f(z) \, dz =
i \iint \left( \nabla f \right) \, dx \, dy =
i \iint \left( 1 {\partial f \over \partial x} +
i {\partial f \over \partial y} \right) \, dx \, dy
$$
Which leads to the well known Cauchy's integral theorem
$$
\oint f(z) \, dz =
\iint
\left( \frac{- \partial f_x}{\partial y} + \frac{- \partial f_y}{\partial x} \right)+
i \left( \frac{\partial f_x}{\partial x} + \frac{- \partial f_y}{\partial y} \right) \, dx \, dy
$$
From which I then get
$$
\oint f(z) \, dz =
\iint \left(
\nabla \times f +
i \nabla \cdot f
\right) \, dx \, dy
$$
I'm hoping someone here can tell me whether I'm on the right track or not.
Keep in mind that
$$\nabla = 1 {\partial \over \partial x} +
i {\partial \over \partial y}$$
|
I'm not a physicist, but I think that gradient, curl, and divergence are strictly for a real $d$-dimensional environment, in particular for $d=2$ and $d=3$. I have never met your strange complex definition of $\nabla$.
On the other hand it is of course possible to prove the Cauchy integral formula using Green's theorem in the form
$$\int_{\partial \Omega}\bigl(P(x,y)\>dx+Q(x,y)\>dy\bigr)=\int_\Omega(Q_x-P_y)\>{\rm d}(x,y)\ .\tag{1}$$
Write your analytic $f$ in the form $f=u+ iv$ as well as $dz$ in the form $dz=dx+i dy$. Then by definition of complex line integrals you have
$$\int_{\partial\Omega}f(z)\>dz=\int_{\partial\Omega}(u\>dx-v\>dy)+i\int_{\partial\Omega}(v\>dx+ u\>dy)\ ,$$
to which you can apply $(1)$ separately. Finally the CR equations will come to your rescue.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Definite integral of the cube root of $x \ln (x)$ I was trying to solve the 2016 CSE question paper for Math Optional
$$I = \int_{0}^1 \left (x\log \left (\frac{1}{x}\right)\right)^{\frac{1}{3}} dx$$
In my attempt to find $I$, I tried to substitute
$$t = \log \left (\frac{1}{x}\right), \qquad dt = -\frac{dx}{x}$$
Even tried taking
$$t = x\log \left (\frac{1}{x}\right)$$
and then tried by parts. But I am unable to make headway. Any idea what might work? How can I tackle these types of problems?
|
Take $x = e^{-3t/4}$
Then $\log\frac{1}{x} = 3t/4$
As $x$ runs from $0$ to $1$, $t$ runs from $\infty$ to $0$; the minus sign coming from $dx$ flips the limits, so effectively $t$ varies from $0$ to $\infty$
Substitute these values into the integral
You'll get the answer in terms of the gamma function
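Carried through, the hinted substitution gives (a sketch, using $dx=-\frac34 e^{-3t/4}\,dt$):
$$I=\int_0^\infty\left(\frac{3t}{4}\right)^{1/3}e^{-t/4}\cdot\frac34 e^{-3t/4}\,dt=\frac34\left(\frac34\right)^{1/3}\int_0^\infty t^{1/3}e^{-t}\,dt=\frac34\left(\frac34\right)^{1/3}\Gamma\!\left(\frac43\right).$$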
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
How do I factor odd and even degree polynomials as a product of irreducible polynomials? I want to factor $x^n -1$ into a product of irreducible polynomials over the reals, when $n$ is odd and when $n$ is even.
I know that the only irreducible polynomials over the reals are of first or second degree.
But I'm stuck, so any hints would be great
|
The linear factors are easy because the real roots of $x^n -1$ can only be $\pm 1$.
Irreducible quadratic factors come from complex roots.
The complex roots of $x^n -1$ are the $n$-th roots of unity: $\omega^k$, where $\omega=\exp\left(\frac{2\pi}{n} i\right)$.
These roots come in conjugate pairs: $\omega^k$ and $\overline{\omega^k} = \omega^{n-k}$. The quadratic polynomial having such a pair as its roots has real coefficients and is irreducible.
Full solution:
Linear factors:
When $n$ is odd, the only linear factor of $x^n -1$ is $x-1$.
and
When $n$ is even, the only linear factors of $x^n -1$ are $(x-1)(x+1)$.
Quadratic factors:
The irreducible quadratic factors of $x^n -1$ are $x^2-2\operatorname{Re}(\omega^k)\,x+1=x^2-2\cos\left(\frac{2k\pi}{n}\right)x+1$ for $k=1,\dots,\left\lfloor\frac{n-1}{2}\right\rfloor$, one for each conjugate pair.
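Assembled, the complete factorizations over $\mathbb{R}$ read
$$x^n-1=(x-1)\prod_{k=1}^{(n-1)/2}\left(x^2-2\cos\tfrac{2k\pi}{n}\,x+1\right)\qquad(n\text{ odd}),$$
$$x^n-1=(x-1)(x+1)\prod_{k=1}^{n/2-1}\left(x^2-2\cos\tfrac{2k\pi}{n}\,x+1\right)\qquad(n\text{ even}).$$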
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
binormal vectors and generalized helix There's a problem I'm stuck on. I wonder if anybody is able to help me because I have tried almost every idea I could think of. Here it is:
If $ u $ is a fixed direction and for any point $ s $ of a space curve
$ < B(s) , u > = constant $
holds then prove that there's a constant vector $ v $ such that for every point $ s $ we have
$ < T,v > = constant $
I wish somebody could help me.
[ $ B $ is the binormal vector of curve. ]
[ the curve is parameterized with arc length ]
|
If $\langle B(s), u\rangle = c$ then $\langle B'(s), u\rangle + \langle B(s), u'\rangle = 0$. But $u$ is constant, so $\langle B(s), u'\rangle = 0$ and therefore $\langle B(s), u\rangle ' = \langle B'(s), u\rangle = 0$.
Using the Frenet Serret formulas, $\langle B'(s), u\rangle = \tau(s) \langle N(s), u\rangle = 0$ , so either the torsion is zero or $\langle N(s), u\rangle$ is zero.
Let $\alpha(s)$ be the curve such that its binormal vector makes a constant angle with $u$.
*
*Case 1: If $\tau(s)$ is zero then the curve lies in its osculating plane and the binormal vector ( which is the normal vector of that plane ) is constant, so $T$ makes a constant angle with the constant direction $B$.
*Case 2: if $\langle N(s), u\rangle=0$ then $\langle N(s), u\rangle ' = 0$, so $\langle N'(s), u\rangle = 0$. On the other hand, $\langle T(s), u\rangle ' = \langle T'(s), u\rangle $ and using the Frenet Serret formulas again $\langle T'(s), u\rangle = k(s)\langle N(s), u\rangle = k(s) \cdot 0 = 0$, so $\langle T(s), u\rangle $ is constant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Verifying convergence in probability and distribution. Suppose random variables $\{X_n\}$ are defined on $([0,1],\mathcal{B}([0,1]),\lambda)$ (where $\lambda$ is Lebesgue measure) as follows:
$X_1 = \mathbb{1}_{[0,1]}, X_2 = \mathbb{1}_{[0,1/2]}, X_3 = \mathbb{1}_{[1/2,1]}, X_4 = \mathbb{1}_{[0,1/3]}, X_5 = \mathbb{1}_{[1/3,2/3]}, X_6 = \mathbb{1}_{[2/3,1]}$, etc.
Exercise:
a) Verify that $X_n \stackrel{p}{\to} 0$
b) Verify that for any $w \in [0,1]$, $X_n \not\to 0.$
Questions:
*
*a) How do I show that $\lim\limits_{n \to \infty}P(\left|X_n\right|>\epsilon)= 0$? Intuitively it feels very clear that $X_n$ converges in probability: as $n$ goes to infinity, the probability that the indicator function equals $1$ goes to zero. I'm not sure however how to show this mathematically.
*b) What does $w$ do for our random variables? $X_n$ isn't dependent on $w$ right? So how can we write $X_n(w)$?
|
a) Note that
$P(X_n=1)=\frac{1}{k}$ for all $\frac{(k-1)k}{2}<n\leq \frac{k(k+1)}{2}$.
Hence as $n\to\infty$ (because $k\to\infty$)
$P(|X_n|>\epsilon)\to 0$.
b) Here note that
for any $w$,
$X_n(w)=1$ for infinitely many $n\in\mathbb N$.
Hence $X_n(w)\not\to0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2533996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $\operatorname{trace}(T^*T) = \|Tu_1\|^2+\cdots+ \|Tu_n\|^2$ Let V be an inner product space with orthonormal basis $(u_1,...,u_n)$,
Show that
$$\operatorname{trace}(T^*T) = \|Tu_1\|^2+\cdots+ \|Tu_n\|^2$$
I was able to solve this in the case where the $u_i$'s are eigenvectors of the operator $T$. How can I do this for any other orthonormal basis of $V$?
|
In the $u_i$ basis, we have the matrix of $T^\ast T$:
$(T^\ast T)_{ij} = \langle u_i, T^\ast T u_j \rangle = \langle Tu_i, Tu_j \rangle; \tag 1$
thus
$(T^\ast T)_{ii} = \langle Tu_i, Tu_i \rangle = \Vert Tu_i \Vert^2. \tag 2$
In any basis, the trace is the sum of the diagonal entries of the matrix of $T^\ast T$ in that basis; thus
$\operatorname{trace}(T^\ast T) = \sum_1^n (T^\ast T)_{ii} = \sum_1^n \Vert Tu_i \Vert^2, \tag 3$
is proved per request.
The issue of whether $\{u_i\}$ forms an eigenbasis or not does not enter into the above demonstration. It is important, however, that $\{u_i\}$ is orthonormal; otherwise, the dual basis to $\{u_i\}$, that is the
$\theta_j:V \to \Bbb F, \; \theta_j(u_k) = \delta_{jk}, \tag 4$
where $\Bbb F$ is the base field (most likely either $\Bbb R$ or $\Bbb C$), will not satisfy
$\theta_j(v) = \langle u_j, v \rangle, \; v \in V;\tag 5$
since in this more general case
$(T^\ast T)_{ij} = \theta_i(T^\ast Tu_j), \tag 6$
(1) will not necessarily hold.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is a quotient of a topological vector space a topological vector space I should prove or disprove that a quotient of a topological vector space is a topological vector space.
I think that it is. May I reduce my proof to a proof of the following statement?
$X$ is a topological vector space and $M \subset X$ is a linear subspace of $X$. Let $A$ be the vector addition in $X/M$ and let $W$ be a neighbourhood of the origin $o$ in $X/M$. Then I should prove that $A^{-1}(W)$ is a neighbourhood of $(o, o)$ in $X/M \times X/M$.
But I have no idea how can I prove this fact.
|
A neighborhood of $0$ in $X/M$ is any set $W$ containing $0+M$ such that $\pi^{-1}(W)$ is a neighborhood of $0$ in $X$, where $\pi\colon X\to X/M$ is the canonical projection.
Let $W$ be a neighborhood of $0$ in $X/M$. Then there exists a neighborhood $V$ of $0$ in $X$ such that $V+V\subseteq \pi^{-1}(W)$.
This implies that $\pi(V)+\pi(V)\subseteq W$. Since $V\subseteq\pi^{-1}(\pi(V))$, we have that $\pi(V)$ is a neighborhood of $0$ in $X/M$. This proves continuity of the addition in $X/M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
update of centroids in $k$ means clustering I have been reading the lecture notes of Andrew Ng about Clustering techniques available in:
http://cs229.stanford.edu/notes/cs229-notes7a.pdf
but I have a problem with the second for-each of step $2$, i.e. the centroid update
$$\mu_j := \frac{\sum_{i=1}^m 1\{c^{(i)}=j\}\,x^{(i)}}{\sum_{i=1}^m 1\{c^{(i)}=j\}}.$$
From what I know, the previous step (the first for-each) sets the value of each $c^{(i)}$ by choosing the centroid $\mu$ minimizing the distance to $x^{(i)}$. According to what I read, the second for-each should update the values of the centroids by computing the new mean of the $x$ values assigned to each centroid, but how does this formula do that? Any numerical example would be great.
Why is there a factor $1\{\,\}$ in front of each sum? And from what I can see, since the numerator multiplies by $x^{(i)}$, its sum would be greater than the sum in the denominator. Should it not be the other way around?
Thanks
|
The formula means if the $i$-th data point is considered to be group $j$, we should consider that data point in updating our mean.
For example, if we think $x_1, x_2, x_3$ belongs to cluster $1$.
Then $\mu_1 = \frac{x_1+x_2+x_3}{3}.$
The notation $1\{c^{(i)}=j\}$ is an indicator function.
$\sum_{i=1}^m1\{c^{(i)}=j\}$ count the number of data point in cluster $j$.
$\sum_{i=1}^m1\{c^{(i)}=j\}x_i$ sums up the data points in cluster $j$.
The formula is the average of data points in the $j$-th cluster.
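Since a numerical example was requested, here is a minimal sketch of the update step; the 1-D toy data and assignments below are assumptions purely for illustration:
#include <bits/stdc++.h>
using namespace std;
int main(){
    vector<double> x = {1.0, 2.0, 3.0, 10.0, 11.0}; // toy data points (assumed)
    vector<int> cluster = {0, 0, 0, 1, 1};          // current assignments c^{(i)}
    int k = 2;
    vector<double> mu(k, 0.0);
    for(int j = 0; j < k; j++){
        double sum = 0.0; int count = 0;
        for(size_t i = 0; i < x.size(); i++){
            if(cluster[i] == j){      // the indicator 1{c^{(i)} = j}
                sum += x[i];          // numerator: sum of points in cluster j
                count++;              // denominator: size of cluster j
            }
        }
        mu[j] = sum / count;          // assumes every cluster is non-empty
    }
    printf("mu_0 = %g, mu_1 = %g\n", mu[0], mu[1]); // expect 2 and 10.5
    return 0;
}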
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to solve the equation $(f \circ f)(x)=x$? Solve the equation $(f \circ f)(x)=x$, if $f(x) = \frac {2x+1}{x+2}$ and $x \in \mathbb R$ \ $\{-2\}$.
How would I solve this equation and what does it even mean to be solved in this context?
|
How to begin:
$$f\bigl(f(x)\bigr)=\frac{2f(x)+1}{f(x)+2}=\frac{2\cfrac{2x+1}{x+2}+1}{\cfrac{2x+1}{x+2}+2}=\cdots$$
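Carrying the computation to its end (a sketch): the fraction simplifies to
$$f\bigl(f(x)\bigr)=\frac{2(2x+1)+(x+2)}{(2x+1)+2(x+2)}=\frac{5x+4}{4x+5},$$
so $f(f(x))=x$ means $4x^2+5x=5x+4$, i.e. $x^2=1$; the solutions are $x=\pm1$ (both admissible, being $\neq-2$).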
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
What is the vector notation for $ \epsilon^{ijk} a_i b_j c_k $? If we use Einstein summation notation the upper indices and the lower indices match - then we do summation:
$$ a \cdot b = \langle a | b \rangle = a^i b_i := \sum a^i b_i $$
In the example I'm looking at there is a symbol called $\epsilon^{ijk}$ which is antisymmetric in the indices $i\leftrightarrow j \leftrightarrow k$. What is the vector notation for
$$ \epsilon^{ijk} a_i b_j c_k \stackrel{?}{=} a \cdot (b \times c)$$
At least if we write it this way, it's clear there's anti-symmetry between $(a,b,c)$. Is this a short-hand for $3 \times 3$ determinant or the vector triple product?
|
Check: http://internal.physics.uwa.edu.au/~styler/teaching/CM/index.pdf
(2.21)
It is:
$$a\cdot(b\times c)\left(=\det(a,b,c)\right)$$
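Explicitly, writing out the sum makes the identification transparent:
$$\epsilon^{ijk}a_ib_jc_k=\det\begin{pmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\end{pmatrix}=a\cdot(b\times c),$$
the scalar triple product, which is antisymmetric under swapping any two of $a,b,c$ — exactly mirroring the antisymmetry of $\epsilon^{ijk}$.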
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof Verification for Connectedness of a Set in a Metric Space While I was reading metric spaces, I came across a theorem statement which had no proof in the book I was reading. Therefore, I tried to construct the proof myself. Given below is the theorem statement as well as the proof that I tried to construct.
Theorem: Let $\left( M, d \right)$ be a metric space and let $X \subseteq M$. If $\phi$ and $X$ are the only sets which are both open and closed in $X$, then $X$ is connected.
Proof:-
Let, if possible, $X$ be disconnected.
Hence, $\exists A, B \subseteq X$ such that $X = A \cup B$ where $A$ and $B$ are separated and $A \neq \phi$, $B \neq \phi$, $A \neq X$, $B \neq X$.
Now, since $X$ is closed, we have $X = \tilde{X}$, where $\tilde{X}$ is the closure of $X$.
Therefore, we have $A \cup B = \tilde{A \cup B} = \tilde{A} \cup \tilde{B}$
i.e. $\forall x \in A \cup B$, $x \in \tilde{A} \cup \tilde{B}$
$\therefore x \in A$ or $x \in B$ $\Longrightarrow$ $x \in \tilde{A}$ or $x \in \tilde{B}$
$\because A$ and $B$ are separated, $A \cap \tilde{B} = \phi$ and $\tilde{A} \cap B = \phi$
Therefore, If $x \in A$, then $x \in \tilde{A}$
and if $x \in B$, then $x \in \tilde{B}$
Thus, $A \subseteq \tilde{A}$ and $B \subseteq \tilde{B}$
Also, we have
$\forall x \in \tilde{A} \cup \tilde{B}$, $x \in A \cup B$
i.e. $x \in \tilde{A}$ or $x \in \tilde{B}$ $\Longrightarrow$ $x \in A$ or $x \in B$
Again, since $A$ and $B$ are separated, $A \cap \tilde{B} = \phi$ and $\tilde{A} \cap B = \phi$
Hence, $x \in \tilde{A} \Longrightarrow x \in A$ and $x \in \tilde{B} \Longrightarrow x \in B$
Therefore, $\tilde{A} \subseteq A$ and $\tilde{B} \subseteq B$
Thus, we have proved that $A = \tilde{A}$ and $B = \tilde{B}$ which implies that $A$ and $B$ are closed in $X$.
Now, since $X$ is open, we have
$\forall x \in X$, $\exists r > 0$ such that $B \left( x, r \right) \subset X$
i.e. $\forall x \in A \cup B$, $\exists r > 0$ such that $B \left( x, r \right) \subset A \cup B$
i.e. $\forall y \in B \left( x, r \right)$, $y \in A$ or $y \in B$
Hence, $B \left( x, r \right) \subset A$ or $B \left( x, r \right) \subset B$
Now, combining the possibilities from the above two statements, we have
If $x \in A$ and $B \left( x, r \right) \subset A$, then A is open.
If $x \in B$ and $B \left( x, r \right) \subset B$, then B is open.
If $x \in A$ and $B \left( x, r \right) \subset B$, then we have a contradiction to the fact that $A$ and $B$ are separated and hence disjoint. Therefore, this cannot be true.
If $x \in B$ and $B \left( x, r \right) \subset A$, then we have a contradiction to the fact that $A$ and $B$ are separated and hence disjoint. Therefore, this cannot be true.
Therefore we have $A$ and $B$ are open in $X$.
Thus, we have proved that there are two more sets $A$ and $B$ neither of them
equal to $X$ or $\phi$ which are both open and closed in $X$. This is again a contradiction to the hypothesis and therefore, not possible.
Hence, X must be connected.
I would like to know if this proof is correct or does it need any changes?
|
With the standard definition of connectedness the theorem hardly requires a proof. Here is the proof which uses your definition of connectedness: start with $X=A \cup B$ with $A$ and $B$ separated. The fact that $A$ and $B$ are separated implies they are disjoint. Since their union is $X$, it follows they are complements of each other. Hence $\tilde {A}$ is contained in the complement of $B$, which is $A$. This means $A$ is closed. Similarly $B$ is closed. Being each other's complements, they are both open and closed, which is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Does SO(3) preserve the cross product? Let $g\in SO(3)$ and $p,q\in\mathbb{R}^3$. I wondered whether it is true that $$g(p\times q)=gp\times gq$$
I am not sure how to prove this. I guess I will use at some point that the last row $g_3$ of $g$ can be obtained by $g_3=g_1\times g_2$.
But I assume there is an easier proof than writing everything out.
|
You may use the scalar triple product formula $r \cdot (p\times q)=\det(r,p,q)$ to prove that
$$
gr \cdot (gp\times gq)=gr \cdot g(p\times q)\tag{1}
$$
($=\det(r,p,q)$) for any vector $r$. Since $g$ is invertible, if $(1)$ holds for every vector $r$, we must have $gp\times gq=g(p\times q)$.
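Spelled out, both sides of $(1)$ equal the same determinant:
$$gr\cdot(gp\times gq)=\det(gr,gp,gq)=\det(g)\,\det(r,p,q)=\det(r,p,q),$$
$$gr\cdot g(p\times q)=r\cdot(p\times q)=\det(r,p,q),$$
using $\det g=1$ for the first line and $g^Tg=I$ (so $gu\cdot gv=u\cdot v$) for the second.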
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2534923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 2
}
|
Expectation of the ratio of dependent random variables where the expectation of the numerator is known to be zero Let $\mathbf{x} = \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}$ be a complex normal random vector with $\mathbf{x} \sim \mathcal{CN}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where $\boldsymbol{\mu} \neq \mathbf{0}$ and $\boldsymbol{\Sigma}$ is not diagonal. Furthermore, I have that $\mathbb{E}_{x,y}[x_{1}^{*} x_{2}] = 0$, with $(\cdot)^{*}$ denoting complex conjugate.
I need to show that the following expectation is also equal to zero:
$$\mathbb{E}_{x_{1},x_{2}} \bigg[ \frac{x_{1}^{*} x_{2}}{|x_{1}|^{2} + |x_{2}|^{2} + c} \bigg]$$
where $c \in \mathbb{R}_{+}$.
|
Based on a classic counter-example for another question:
Let $X$ have a real standard normal distribution $N(0,1)$, and let $Y=X$ when $|X| \lt k$ and $Y=-X$ when $|X| \ge k$, where $k$ is the square root of the median of a $\chi^2$ random variable with $3$ degrees of freedom, about $1.538172$. Clearly $Y$ also has a real standard normal distribution $N(0,1)$. Then
*
*$\mathbb{E}_{X,Y}[X^*Y] =0$ even though though there is a bijection between $X$ and $Y$
*$\mathbb{E}_{X,Y} \left[ \frac{X^{*} Y}{|X|^{2} + |Y|^{2} + c} \right] \gt 0 $ for $c \gt 0$ since the numerator is positive for small magnitude $X,Y$ and the denominator is less than $2k^2+c$, and the numerator is negative for large magnitude $X,Y$ and the denominator is greater than or equal to $2k^2+c$. For example with $c=1$ this expectation is about $0.118$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $d$ defines a metric $d: \mathbb{R}^2 \times \mathbb{R}^2 \rightarrow [0,\infty)$ I have to show that $d((x_1,y_1),(x_2,y_2)) = max\{|3x_1 + y_1 − 3x_2 − y_2|, |y_1 − y_2|\}$ defines a metric.
My approach:
Let $x=(x_1,y_1)$ and $y = (x_2,y_2)$
$M_1: d(x,y) \geq 0$ is clearly satisfied since $d$ is the maximum of two absolute values.
$$d(x,y) = 0 \iff max\{|3x_1 + y_1 − 3x_2 − y_2|, |y_1 − y_2|\} = 0 \\
\iff |3x_1 + y_1 − 3x_2 − y_2|= 0 \land |y_1 − y_2| = 0 \\
\iff y_1 = y_2 \land x_1 = x_2.$$
$M_2: d(x,y) = |3x_1 + y_1 − 3x_2 − y_2|$ or $d(x,y) = |y_1 − y_2|$ and in both cases we can add a minus inside the absolute value and get that $d(x,y)=d(y,x)$.
Are these correct?
I am not sure how to go about $M_3$ since I think there are a lot of cases. Any help?
|
To show $d((x_1,y_1),(x_3,y_3)) \leq d((x_1,y_1),(x_2,y_2)) + d((x_2,y_2),(x_3,y_3))$
Note that since the metric is defined as being the max, it is greater than or equal to both options. That is:
$$
\begin{align}
|y_1-y_2|, |3x_1+y_1-3x_2-y_2| \,\leq& \ d((x_1,y_1),(x_2,y_2)) \\
|y_2-y_3|, |3x_2+y_2-3x_3-y_3| \,\leq& \ d((x_2,y_2),(x_3,y_3))
\end{align}
$$
$ \\ $
Case 1: $\ d((x_1,y_1),(x_3,y_3)) = |y_1-y_3| $
$|y_1-y_3| \leq |y_1-y_2| + |y_2 - y_3| \leq d((x_1,y_1),(x_2,y_2)) +d((x_2,y_2),(x_3,y_3)) $
Case 2 is the exact same.
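For completeness, Case 2 spelled out: if $d((x_1,y_1),(x_3,y_3))=|3x_1+y_1-3x_3-y_3|$, then
$$|3x_1+y_1-3x_3-y_3|\le|3x_1+y_1-3x_2-y_2|+|3x_2+y_2-3x_3-y_3|\le d((x_1,y_1),(x_2,y_2))+d((x_2,y_2),(x_3,y_3)).$$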
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find an Isometry $ \ S \ $ such that $ \ T=S \sqrt{T^* T} \ $ Let $ \ T : \ \mathbb{C}^2 \to \mathbb{C}^2 \ $ be given by $ \ T(z_1,z_2)=\begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2\end{bmatrix} $
Find an Isometry $ \ S \ $ such that $ \ T=S \sqrt{T^* T} \ $ , where $ \ T^* \ $ is the conjugate transpose of $ \ T \ $
Answer:
Here,
$ T= \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix} \\ T^*=\begin{bmatrix} 2 & 4 \\ -1 & -2 \end{bmatrix} $
Thus,
$ T^* T=\begin{bmatrix} 2 & 4 \\ -1 & -2 \end{bmatrix} \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix} \ = \begin{bmatrix} 20 & -10 \\ -10 & 5 \end{bmatrix} $
$ T^*T= \begin{bmatrix} 20 & -10 \\ -10 & 5 \end{bmatrix} \ $
Now we have to find $ \ P \ $ such that $ \ P=\sqrt {T^* T} \ \ \ \Rightarrow P^2=T^*T \ \ \ $
For this let us first find the eigenvalues and eigenvectors of $ \ T^*T \ $ as follows :
The eigenvalues of $ T^* T \ $ are $ \ 0,\ 25 \ $
The corresponding eigenvectors are $ \ \vec a_1=\begin{bmatrix} 1 \\ 2 \end{bmatrix} \ $ and $ \vec a_2=\begin{bmatrix} 2 \\ -1 \end{bmatrix} \ $
This gives the basis of $ \mathbb{C}^2 \ $ .
So let $ \ P: \mathbb{C}^2 \to \mathbb{C}^2 \ $ be given by $ \ P(a_1)=\sqrt 0 \vec a_1 , \\ P(a_2)=\sqrt{25} \vec a_2 \ $ ,
Then $ \ P \ $ will be the unique positive root of $ \ T^* T \ $
From here we can find the matrix $ \ \ P \ $ i.e., $ \sqrt{T^* T} \ $
But my question is how to find the Isometry $ \ S \ $ so that $ \ T=S \sqrt{T^* T } \ $
Help me out only to find $ \ S \ $
|
You want $T\vec{a_k}=S\sqrt{T^*T}\,\vec{a_k}$ for $k=1,2$. This does not uniquely determine $S$ because $\sqrt{T^*T}\,\vec{a_1}=0$ and $T\vec{a_1}=0$. The condition $S\sqrt{T^*T}\,\vec{a_2}=T\vec{a_2}$ determines $S$ on $\vec{a_2}$. Setting $S\,\vec{a_1}=e^{i\theta}\vec{b_1}$ defines an isometry $S$ if $\vec{b_1}$ is orthogonal to $T\vec{a_2}$; $S$ is not unique because $\theta$ can be any real number, which is to be expected in every case where $T$ is not invertible.
To be specific, $\sqrt{T^*T}\, \vec{a_2} = 5\vec{a_2}$ where $\vec{a_2}$ is the unit vector
$$
\vec{a_2} = \frac{1}{\sqrt{5}}\left[\begin{array}{c}2 \\ -1\end{array}\right].
$$
The vector $\vec{a_1}$ is orthogonal to $\vec{a_2}$, which gives
$$
\vec{a_1} = \frac{1}{\sqrt{5}}\left[\begin{array}{c}1 \\ 2\end{array}\right]
$$
It's not hard to verify that
$$
T\vec{a_1}=\vec{0},\;\; T^*T\vec{a_2}=\frac{1}{\sqrt{5}}\left[\begin{array}{c}50 \\ -25\end{array}\right] = 25\vec{a_2}
$$
The matrix $S$ must be defined so that $S\sqrt{T^*T}\,\vec{a_2}=T\vec{a_2}$ or $S(5\vec{a_2})=T\vec{a_2}$, and so that $S\vec{a_1}$ is a unit vector orthogonal to $T\vec{a_2}$. In summary,
$$
S\vec{a_2} = \frac{1}{5}T\vec{a_2} \implies S\left[\begin{array}{c}2 \\ -1\end{array}\right] = \frac{1}{5}\left[\begin{array}{c}5\\10\end{array}\right] = \left[\begin{array}{c}1 \\ 2 \end{array}\right] \\
S\vec{a_1}\perp T\vec{a_2} \implies S\left[\begin{array}{c}1\\2\end{array}\right] = e^{i\theta}\left[\begin{array}{c}2 \\ -1\end{array}\right] \\
S\left[\begin{array}{cc}
1 & 2 \\ 2 & -1
\end{array}\right]
=\left[\begin{array}{cc}
2e^{i\theta} & 1 \\
-e^{i\theta} & 2
\end{array}\right] = \left[\begin{array}{cc} 2 & 1 \\ -1 & 2\end{array}\right]\left[\begin{array}{cc} e^{i\theta} & 0 \\ 0 & 1\end{array}\right]
\\
S =\frac{1}{5}\left[\begin{array}{cc} 2 & 1 \\ -1 & 2\end{array}\right]\left[\begin{array}{cc} e^{i\theta} & 0 \\ 0 & 1\end{array}\right]\left[\begin{array}{cc} 1 & 2 \\ 2 & -1\end{array}\right]
$$
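As a quick check (taking $\theta=0$, say), one can verify the factorization directly:
$$S=\frac15\left[\begin{array}{cc}4&3\\3&-4\end{array}\right],\qquad \sqrt{T^*T}=5\,\vec{a_2}\vec{a_2}^{\,*}=\left[\begin{array}{cc}4&-2\\-2&1\end{array}\right],$$
$$S\sqrt{T^*T}=\frac15\left[\begin{array}{cc}4&3\\3&-4\end{array}\right]\left[\begin{array}{cc}4&-2\\-2&1\end{array}\right]=\left[\begin{array}{cc}2&-1\\4&-2\end{array}\right]=T,$$
and the columns of $S$ are orthonormal, so $S$ is indeed an isometry.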
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
2,3,5,6,7,10,11 Counting with Restrictions The sequence 2, 3, 5, 6, 7, 10, 11, $\ldots$ contains all the positive integers from least to greatest that are neither squares nor cubes nor perfect fifth powers (in the form of $x^{5}$, where $x$ is an integer). What is the $1000^{\mathrm{th}}$ term of the sequence?
I've seen problems where I need to count with restrictions (like cubes and squares). I have never seen a problem with this degree. Here is my thought process.
Find intersection of squares and cubes. This is simple enough. For every 6th square, there will lie a cube. We can do this for everything else (intersection of 2 and 4, 2 and 5, 3 and 4, 3 and 5), but it will be tedious. Now, count out the numbers. This solution is very tedious...Can someone guide me through the solution?
|
Let $B(n)$ be the number of integers in range $\{1,2,\dots, n\}$ that are not squares cubes or fifth powers.
We want to find the first number such that $B(n)=1000$.
A formula for $B(n)$ can be obtained with inclusion exclusion.
We get $B(n)=n- \lfloor \sqrt{n} \rfloor- \lfloor\sqrt[3]{n} \rfloor- \lfloor\sqrt[5]{n} \rfloor + \lfloor\sqrt[6]{n}\rfloor + \lfloor\sqrt[10]{n}\rfloor + \lfloor\sqrt[15]{n}\rfloor - \lfloor\sqrt[30]{n}\rfloor $
This is very close to $n$.
Proposed method:
Take $N_0=1000$ and take $N_{i+1}=N_i+(1000-B(N_i))$. Notice this guarantees $B(N_{i+1})\le 1000$, since $B$ can increase by at most $1$ per integer.
The method in action:
$N_0=1000\implies B(N_0)=1000-31-10-3+3+1+1-1=1000-40$
$N_1=1040\implies B(N_1)=1040-32-10-4+3+2+1-1=999$
$N_2=1041\implies B(N_2)=1041-32-10-4+3+2+1-1=1000$
we are done. $1041$ is the answer.
verification code in c++:
#include <bits/stdc++.h>
using namespace std;
// computes a^p by fast (binary) exponentiation
int pot(int a, int p){
int res=1;
while(p){
if(p%2) res*=a;
a=a*a;
p/=2;
}
return(res);
}
// returns 1 if a is a perfect p-th power, 0 otherwise
int ispow(int a, int p){
for(int i=1;pot(i,p) <= a; i++){
if(pot(i,p)== a) return(1);
}
return(0);
}
int main(){
int count=0;
for(int i=1; ;i++){
int add=1;
// exclude i if it is a perfect square, cube or fifth power
// (j=4 is redundant: every fourth power is already a square)
for(int j=2;j<=5;j++){
if(ispow(i,j)){
add=0;
break;
}
}
count +=add;
if(count == 1000){
printf("%d\n",i); // prints 1041, the 1000th term
return(0);
}
}
}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
}
|
What am I doing wrong with this partial sum formula? I have this series:$$\sum_{n=1}^\infty (1-\frac{7}{4^n})$$
So my strategy is to divide it into two formulas to try and get a geometric formula:$$\sum_{n=1}^\infty 1 - \sum_{n=1}^\infty \frac{7}{4^n}$$
The first sum equals 1 and I simplify the second sum to a geometric formula:$$1-\sum_{n=1}^\infty 7(\frac{1}{4})^n$$
So I plug that into the formula $\frac{a(1-r^n)}{1-r}$ and end up with$$1-\frac{7(1-(\frac{1}{4})^n)}{1-\frac{1}{4}}$$
I simplify that and get$$1-\frac{28(1-(\frac{1}{4})^n)}{3}$$
So that's how I did it but when I type that in my calculator it doesn't match up with the answers I calculated manually. Can somebody tell me where I went wrong?
|
$$\sum_{n=1}^\infty 1 = \infty$$
not one.
You can also observe that
$$\lim_n(1-\frac{7}{4^n})=1 \neq 0$$
therefore, the series is divergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Reference Triangle Issue - Trig Evaluate $\cos(2\cot^{-1}(32/49))$
What I did for this problem was:
*
*Turn $\cot$ into $\tan(49/32)$
*And I am not sure what I do with the $2*$
I tried multiplying $49$ by $2$ and plugging the values into a reference triangle.
After I had all three sides I just found the $\cos$ of the triangle and used that as the answer but it was incorrect. I also knew i couldn't use any of the double angle formulas because the two is in front of the identity.
The cot here is the inverse cotangent.
Where did I go wrong?
|
By a well-known formula you have
$$\sin\left(\cot^{-1}x\right)=\frac1{\sqrt{1+x^2}}$$
And, of course, the double angle formula
$$\cos2x=1-2\sin^2x$$
Now your solution is straightforward
$$\begin{align}\cos\left(2\cot^{-1}\frac{32}{49}\right)&=1-2\sin^2\left(\cot^{-1}\frac{32}{49}\right)\\[10pt] &=1-2\left(\frac1{\sqrt{1+\frac{32^2}{49^2}}}\right)^2\\[10pt] &=1-\frac{2\cdot 49^2}{49^2+32^2}\\[10pt] &=-\frac{1377}{3425}\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Showing convergence and integrability from $\sum_{n=0}^\infty \int_X |f_n| \, d \mu \lt \infty$ I have seen this question here but I didnt find a comprehensible answer so ill try my luck.
Let $(f_n)$ be a sequence of almost everywhere integrable functions defined on X with $$\sum_{n=0}^\infty \int_X |f_n| \, d \mu \lt \infty$$
Show that $$f(x) := \sum_{n=0}^\infty f_n(x)$$
$1)$ converges for almost all $x$
$2)$ the limit function $f$ is integrable
$3) \int_X f \, d\mu = \sum_{n=0}^\infty \int_Xf_n \, d\mu$
My try:
$1)$ Define $g_N := \sum_{n=0}^N|f_n|.$ It follows that $g_N$ is a monotone increasing function so I can use the monotone convergence theorem.
It follows that : $$\lim_{N \rightarrow \infty}\int_X g_N \, d \mu = \int_X \lim_{N \rightarrow \infty} g_N \, d \mu $$
$\Longrightarrow \sum_{n=0}^\infty|f_n(x)| \lt \infty$ for almost all $x$, so the sum converges absolutely for almost all $x$
$2)$ Since all $f_n $ are measurable and we showed that $f = \lim f_n$ is finite it follows that $f$ is integrable
$3)$ I would say it strictly follows from dominated convergence theorem with $1)$ and $2)$.
Any help here is appreciated thanks
|
Let
$$
F_n= \sum_{k=0}^{n}f_k(x); \quad F(x)= \sum_{k=0}^{\infty}f_k(x)
$$
Clearly, $F_n\in L^1$. Further
$$
|F_n|\leq \sum_{k=0}^{n} |f_k(x)|\leq \sum_{k=0}^{\infty} |f_k(x)|\in L^1\tag{1}
$$
since
$$
\int_X \sum_{k=0}^{\infty} |f_k(x)|=\sum_{k=0}^{\infty}\int_X |f_k| \,d \mu <\infty
$$
by the Monotone convergence theorem. The inequality in (1) also implies that $F$ is integrable. To see why, let $n\to\infty$ and integrate both sides.
Since $F_n\to F$ pointwise almost everywhere and $|F_n|$ is dominated by the integrable bound in (1), the DCT implies that $\int F_n\to \int F$ and hence
$$
\int_X F \,d\mu = \sum_{n=0}^{\infty}\int_Xf_n\,d\mu
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2535993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Simplifying an equation (mod, floor) In what ways can I simplify the equation $$y=(1-\lfloor \bmod(x,3)\rfloor)(\bmod(x,1))+\frac{\lfloor \bmod(x,3)\rfloor-\frac{1}{2}}{2|\lfloor \bmod(x,3)\rfloor-\frac{1}{2}|}+\frac{1}{2}$$ or at least make it look nicer?
|
Notice that
*
*$\text{mod}(x+3,3)=\text{mod}(x,3)$
*$\text{mod}(x+3,1)=\text{mod}(x,1)$
So the function is periodic with period $p=3$.
So you can describe the function on the interval $[0,3)$ in a piecewise fashion, then use that to obtain a general piecewise formula.
(The graph, omitted here, is a period-$3$ wave: it rises linearly from $0$ to $1$ on $[0,1)$, stays at $1$ on $[1,2)$, and falls linearly back to $0$ on $[2,3)$.)
On the interval $[0,3)$, $\text{mod}(x,3)=x$ so the expression can be simplified on that interval to
$$ y=(1-\lfloor x\rfloor)\cdot\text{mod}(x,1)+\frac{1}{2}\left[\text{sign}\left( \lfloor x\rfloor-\frac{1}{2}\right)+1\right] $$
On the interval $[0,1)$, $\text{mod}(x,1)=x$, and $\lfloor x\rfloor=0$ and $\text{sign}(0-\frac{1}{2})=-1$, so the expression simplifies to
$$y=x\text{ on }[0,1)$$
On the interval $[1,2)$, $\text{mod}(x,1)=x-1$, $\lfloor x\rfloor=1$ and $\text{sign}(1-\frac{1}{2})=1$, so the expression simplifies to
$$y=1\text{ on }[1,2)$$
On the interval $[2,3)$, $\text{mod}(x,1)=x-2$, $\lfloor x\rfloor=2$ and $\text{sign}(2-\frac{1}{2})=1$, so the expression simplifies to
$$y=3-x\text{ on }[2,3)$$
To make this general we must replace each $x$ with $\text{mod}(x,3)$ and write it as a piecewise function
$$ y=\begin{cases}
\text{mod}(x,3)&\text{ if }0\le\text{mod}(x,3)<1\\
1&\text{ if }1\le\text{mod}(x,3)<2\\
3-\text{mod}(x,3)&\text{ if }2\le\text{mod}(x,3)<3\\
\end{cases} $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show a non-constant, continuous function $f:\bar{D}\rightarrow \bar{D}$ is such that $f(\partial D)=\partial D$ Suppose $\bar{D}= \{z:|z|\leq 1\}$ and assume we have a non-constant, continuous function $f:\bar{D}\rightarrow \bar{D}$, that is holomorphic on the interior of $\bar{D}$ and such that $f(\partial D)\subset \partial D$.
Show that $f(\partial D)=\partial D$.
Since $f$ is non-constant on $\bar{D}$, it is also non-constant on the interior $D$ (a function constant on $D$ would, by continuity, be constant on $\bar{D}$), and I think I have to use the fact that $|f|$ attains the max or min on the boundary $\partial D$. Could someone please help me to understand how to do that?
|
Hint. Let $\mathcal{B}$ be the set of non-constant, continuous functions $f:\bar{D}\rightarrow \bar{D}$, holomorphic in $D$ and such that $f(\partial D)\subset \partial D$ where $D=\{z: |z|<1\}$.
1) If $f\in \mathcal{B}$ then $0\in f(D)$.
If not $1/f\in \mathcal{B}$ and $1/|f|$ has a maximum in $\partial D$, which is a minimum for $|f|$. Therefore $|f|$ has a minimum AND a maximum in $\partial D$ where $|f|=1$. Hence $|f|=1$ in $\bar D$, which implies that $f$ is constant. Contradiction.
2) If $f\in \mathcal{B}$ then $f(D)=D$.
If not there is $a \in D$ such that $a\not \in f(D)$. Hence $M_a \circ f$ is never $0$ in $D$, where $M_a={{z-a} \over {1- \overline a z}}$. Since $M_a \circ f\in\mathcal{B}$ , by 1), we get a contradiction.
P.S. More generally, by Fatou's Theorem (see Theorem A), $f\in\mathcal{B}$ is a finite Blaschke product:
$$f(z) = e^{i \theta} \prod_{k=1}^{n} {{z-a_k} \over {1- \overline a_kz}}$$
where, $n\in\mathbb{N}^+$, $\theta\in\mathbb{R}$, and $a_k \in D$ for $k=1,\dots,n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
show that $\int_{(0,1)} f dx = \infty$ Does anyone know how to show that $\int_{(0,1)} f \,dx = \infty$
where
$$
f(x)=
\begin{cases}
0 & \text{if $x \in \mathbb{Q}$}\\
[\frac{1}{x}]^{-1} & \text{if $x \notin \mathbb{Q}$}
\end{cases}
$$
|
Edit: There were too many mistakes in the original answer. Thanks to Peter Melech for pointing them out.
The integral is actually $\frac{\pi^2}{6} - 1$, and not $+ \infty$.
Denote by $\mu$ the Lebesgue measure on $\mathbb{R}$.
Consider the simple functions
$$f_n = \sum\limits_{k = 1}^{n} \frac {1}{k} \cdot \chi_{\left(\frac{1}{k + 1}, \frac{1}{k}\right)\setminus \mathbb{Q}}$$
Note that if $x \in \left(\frac 1 {k + 1}, \frac 1 k\right)$, then $\left[\frac 1 x\right]^{-1} = \frac 1 k$ and therefore $f_n (x) = f (x)$ for every $n \ge \left[\frac 1 x\right] = k$.
It is easy to check that $f_n \le f$ for all $n$, and that $\lim\limits_{n \to \infty} f_n = f$ ae.
Further,
$$\begin{align}
\int_{(0,1)} f_n \mathrm d \mu &= \sum\limits_{k = 1}^n \frac 1 k \cdot \mu \left(\left(\frac 1 {k + 1}, \frac 1 k\right)\setminus \mathbb{Q}\right) \\
&= \sum\limits_{k = 1}^n \frac 1 k \cdot \left(\frac 1 k - \frac 1 {k + 1}\right) \\
&= \sum\limits_{k = 1}^n \frac 1 {k^2} + \sum\limits_{k = 1}^n \frac{k - (k + 1)}{k (k + 1)}\\
&= \sum\limits_{k = 1}^n \frac 1 {k^2} + \sum\limits_{k = 1}^n \frac 1 {k + 1} -\frac 1 k\\
&= \sum\limits_{k = 1}^n \frac 1 {k^2} - 1 + \frac 1 {n + 1}
\end{align}$$
so we have $\int_{(0,1)} f \mathrm d \mu = \lim\limits_{n \to \infty} \left(\sum\limits_{k = 1}^n \frac 1 {k^2} - 1 + \frac 1 {n + 1}\right) = \frac{\pi^2}6 - 1 < + \infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Second Cohomology of the modular group What is known about $H^2(SL(2,\mathbb{Z}),\mathbb{Z})$?
Anyone knows some references about that?
|
Yes, this is known, see the first lines here: "Our goal is to compute its cohomology groups with trivial coefficients, i.e. $H^q (SL_N(\mathbb{Z}),\mathbb{Z})$. The case $N = 2$ is well-known and follows from the fact that $SL_2(\mathbb{Z})$ is the amalgamated product of two finite cyclic groups ([21], [6], II.7, Ex.3, p.52)." Here $[21]$ is Serre's book Trees.
I found the result also in Richard Hain's paper here, Proposition $3.13$, with proof (page $25-26$).
$$
H^r(SL_2(\mathbb{Z}),\mathbb{Z})\cong \begin{cases} \mathbb{Z}, & \text{if $r=0$} \\
\mathbb{Z}/12, & \text{if $r>0$ is even} \\
0, & \text{otherwise} \end{cases}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Indefinite Integral: $\int{x\cos{(x+1)}}dx$ I am trying to solve the following indefinite integral:
$$\int{x\cos{(x+1)}}dx$$
But I keep running into problems. I am thinking of solving it by parts, and started with the substitution:
$$
u = x+1 \\
du = 1\ dx \\
\int{x\cos{(x+1)}}dx = \int{x\cos{u}}du
$$
Then from $u = x+1$ I get $x = u-1$ then:
$$\int{x\cos{u}}du = \int{(u-1)\cos{u}}du$$
Which appears that I am doing something really wrong. How do I proceed?
|
Your substitution is correct: the easiest way of proceeding with your method is to notice that $\int{(u-1)\cos{u}}\;du$ can be divided into two integrals
$$\int{(u-1)\cos{u}}\;du=\int u\cos{u}\;du-\int{\cos{u}}\;du$$
The first integral of the two can be found by integrating by parts
$$\int u\cos{u}\;du=u\sin u-\int \sin u\;du$$
Therefore using your method proceeds as
$$\int{(u-1)\cos{u}}\;du=u\sin u-\int \sin u\;du-\int{\cos{u}}\;du$$
which becomes
$$\int{(u-1)\cos{u}}\;du=u\sin u+ \cos u-\sin{u}+C$$
Substituting back $u=x+1$ gives
$$\int x\cos(x+1)\,dx=(x+1)\sin(x+1)+\cos(x+1)-\sin(x+1)+C$$
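A quick cross-check with SymPy (a sketch, assuming it is available; SymPy's antiderivative may differ from the one above only in form or by a constant, so we verify by differentiating):

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(x * sp.cos(x + 1), x)
    print(F)
    print(sp.simplify(sp.diff(F, x) - x * sp.cos(x + 1)))  # 0, so F is an antiderivative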
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove or disprove: If $P = x...y$ is a path in a $2$-connected graph $G$ then there is another $xy$-path $P'$, which is internally disjoint from $P$. Prove or disprove: If $P = x...y$ is a path in a $2$-connected graph $G$ then $G$ contains another $xy$-path $P'$, which is internally disjoint from $P$.
I'm not sure where to start with this. Any help is appreciated.
|
You can prove this by induction on the distance of $x$ and $y$.
Base case: If $dist(x,y)=1$, then $\{x,y\}$ is an edge. Let $z$ be a vertex different from $x$ and $y$. Since the graph is 2-connected, the removal of $x$ (respectively $y$) does not disconnect the graph, therefore there is a $(x,z)-$path $P_1$ which does not contain $y$, and a $(y,z)-$path $P_2$ which does not contain $x$.
The walk obtained by following $P_1$ from $x$ to $z$ and then $P_2$ in reverse from $z$ to $y$ contains an $(x,y)$-path, and any such path is internally disjoint from the trivial path $\{x,y\}$, since the latter has no internal vertices.
Inductive step: We assume that the statement holds for all pairs of vertices that have distance $\leq k$ and we take a pair of vertices $x$ and $y$ with $dist(x,y)=k+1$. Consider the shortest path from $x$ to $y$ and let $z$ be the vertex adjacent to $y$ on this path. Since $dist(x,z)=k$, from the inductive hypothesis we have that there are two internally disjoint $(x,z)-$paths, $P_1$ and $P_2$.
Furthermore, we know that the removal of $z$ does not disconnect the graph, therefore there is a $(x,y)-$path $P_3$ that does not contain $z$. From there on it is easy to construct two internally disjoint $(x,y)-$paths using $P_1$, $P_2$ and $P_3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Graph Coloring Clique bound: Find a graph G such that ω(G) < χ(G) = 4 For the life of me I can't seem to think of anything that works. I tried randomly combining triangle graphs and taking edges away from $K_4$, but every time I think I found a way to force the chromatic number to be 4, I find that there's actually a smaller number of colors that can be used, or that my graph actually contains $K_4$
|
Take a pentagon $C_5$. You need $3$ colors already. Add a vertex, and link it to all vertices of the pentagon, so that it cannot take any of these $3$ colors. You end up with $\chi=4$, and $\omega=3$.
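A brute-force check of this construction (a sketch in plain Python, with the pentagon labelled $0,\dots,4$ and the hub labelled $5$):

    from itertools import combinations, product

    n = 6
    edges = [(i, (i + 1) % 5) for i in range(5)] + [(i, 5) for i in range(5)]  # the 5-wheel

    def chromatic_number():
        for k in range(1, n + 1):
            if any(all(col[u] != col[v] for u, v in edges)
                   for col in product(range(k), repeat=n)):
                return k

    def clique_number():
        adj = set(edges) | {(v, u) for u, v in edges}
        return max(k for k in range(1, n + 1)
                   if any(all(p in adj for p in combinations(c, 2))
                          for c in combinations(range(n), k)))

    print(chromatic_number(), clique_number())  # prints: 4 3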
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2536929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Platonic solids calculating the relations between the diameter of the sphere and the sides lengths Let D be a diameter of a sphere and let s be the side of an inscribed regular polyhedron. Show that the d and s are related as follows. This is a book called Geometry by Hartshorne exercise 44.8
I) For an inscribed tetrahedron $d^2=\frac{3}{2} s^2$
This one I sort of got after finding a way to inscribe a tetrahedron inside a cube so that the sides of the tetrahedron are face diagonals of the cube. Let $s'$ be the side of the cube and $s$ the side of the tetrahedron; then $s= \sqrt{(s')^2 + (s')^2}$. From the cube case we have $d^2 = 3(s')^2$, but $s^2 = 2 (s')^2$, hence $(s')^2= \frac{s^2}{2}$ and so $d^2 =3 \frac{s^2}{2}$, as desired.
II) For an inscribed octahedron $d^2=2s^2 $
This is the dual of the cube, so it shouldn't be that hard. I realize that the segment from the top vertex to the bottom vertex is a diameter of the sphere; this seems to imply that half the diameter gets you to the centre of the solid, and that there's a right angle there towards one of the equatorial vertices, but I am not sure what that length is in terms of the diameter.
III) For an inscribed cube $d^2=3s^2$
This one is the only one I managed to get directly: it turns out that if you take a vertex on the sphere, there is another one diagonally opposite it such that the distance between the two points is a diameter; you can then use Pythagoras three times to get the desired relation.
IV) For an icosahedron $d^2= \frac{1}{2}(5+\sqrt5)s^2 $
Inside the icosahedron it is easy to see that there is a rectangle whose four vertices are vertices of the icosahedron (and hence lie on the sphere). The longer sides of the rectangle coincide with diagonals of the regular planar pentagons made up of the edges of the icosahedron, so the ratio of the longer side to the shorter one is the ratio of the diagonal of a regular pentagon to its side; this is well known to be the golden ratio. Given that the shorter side of the rectangle is $s$ and that the diagonal joining opposite vertices of the rectangle is a diameter of the sphere, the result follows.
V) For an inscribed dodecahedron $d^2=\frac{3}{2}(3+\sqrt5 ) s^2 $
Let us focus on the pentagon. The diagonal separates the pentagon
into a triangle and a trapezoid. The other 11 diagonals form a cube. The vertices of the cube lie in a sphere which is the same sphere
circumscribing the dodecahedron. So, to find the radius of that sphere is just half of the diagonal of the cube.
It is a well known fact that the length of the diagonal $d_p$ of a regular pentagon of side length $s$ is given by
\begin{equation}
d_p = \phi s
\end{equation}
where $\phi=(1+ \sqrt{5})/2$ is the golden ratio.
Now, the diagonal of the cube is
\begin{equation}
D = \sqrt{ d_p^2 + d_p^2 + d_p^2} = \sqrt{3} d_p = \sqrt{3} \phi s.
\end{equation}
So the diameter of the sphere is
\begin{equation}
D^{2}= (\sqrt{3} \phi s )^2 = 3 \phi^2 s^2.
\end{equation}
$\phi^2=\frac{(1+\sqrt5)^2}{4} = \frac{3+\sqrt5}{2} $
\begin{equation}
D^{2}= 3 \frac{3+\sqrt5}{2} s^2.
\end{equation}
The desired result.
EDIT:
I managed to get the dodecahedron and the icosahedron, and accepted the answer, which finishes off the octahedron.
|
A general formula for $\frac{d^2}{s^2}$ for a Platonic solid with $f$ faces and $e$ edges per face is:-
$$\frac{\sec^2(\pi/e)}{\tan^2(\pi/e)-\tan^2(\pi(f-2)/ef)}$$
This was obtained using solid angles as follows.
Let $O$ be the centre of the sphere. The circumradius of a face, $r$, is given by$$s=2r\sin(\pi/e)$$ The distance of a face from $O$, $h$, is given by$$r^2+h^2=d^2/4$$ The solid angle subtended at $O$ by a face is $4\pi/f$ and is also$$2\pi-2e\tan^{-1}\frac{\tan(\pi/e)}{\sqrt{1+r^2/h^2}}$$
I've played around with various versions of the formula and it is just about the best form I can find! Perhaps multiply both numerator and denominator by $\cos^2(\pi/e)$ to get rid of $\sec^2$?
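A numerical check of this formula against the five values in the question (a sketch in Python; $f$ = number of faces, $e$ = edges per face):

    from math import pi, tan, cos, sqrt

    def ratio(f, e):   # d^2 / s^2
        return (1 / cos(pi / e)**2) / (tan(pi / e)**2 - tan(pi * (f - 2) / (e * f))**2)

    expected = [("tetrahedron", 4, 3, 3 / 2),
                ("cube", 6, 4, 3.0),
                ("octahedron", 8, 3, 2.0),
                ("dodecahedron", 12, 5, 3 * (3 + sqrt(5)) / 2),
                ("icosahedron", 20, 3, (5 + sqrt(5)) / 2)]
    for name, f, e, val in expected:
        print(name, ratio(f, e), val)   # the two columns agree in every case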
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2537079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is there any way to solve this without using a graphing calculator? Solve for $x$:
$$x=2^{8-2x}$$
Whenever I've seen solutions to this question, they have always been through plotting the two graphs and finding their intersection point. But, is there any other way to solve this (perhaps in a more algebraic way)?
|
Hint
When you have linear and exponential function in the same equation, then use Lambert W function defined as
$$e^{W(x)}W(x)=x$$
Your equation becomes
$$4^xx=256$$
which is equivalent to
$$e^{x\ln4}x\ln4=256\ln4$$
Can you continue from here?
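To finish numerically (a sketch, assuming SciPy): from $e^{x\ln4}\,x\ln4=256\ln4$ we get $x\ln 4=W(256\ln 4)$, hence:

    import numpy as np
    from scipy.special import lambertw

    x = (lambertw(256 * np.log(4)) / np.log(4)).real
    print(x)                 # ~3.17
    print(2 ** (8 - 2 * x))  # equals x again, as required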
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2537583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the value of $\frac{1}{1}+\frac{1}{1+2}+\frac{1}{1+2+3}+\ldots + \frac{1}{1+2+3 +\ldots+2015}$ The question:
Find the value of $$\frac{1}{1}+\frac{1}{1+2}+\frac{1}{1+2+3}+\ldots + \frac{1}{1+2+3 +\ldots +2015}$$
If this is a duplicate, then sorry - but I haven't been able to find this question yet. To start, I noticed that this is the sum of the reciprocals of the triangle numbers.
Let $t_n = \frac{n(n+1)}{2}$ denote the $n$-th triangle number. Then the question is basically asking us to evaluate
\begin{align}
\sum_{n=1}^{2015} \frac {1}{t_n} & = \sum_{n=1}^{2015} \frac {2}{n(n+1)}\\
& = \sum_{n=1}^{2015}\frac{2}{n}-\frac{2}{n+1}
\end{align}
Here's where my first question arises. Do you just have to know that $\frac {2}{n(n+1)} = \frac{2}{n}-\frac{2}{n+1}$? In an exam situation it would be very unlikely that someone would be able to recall that if they had not done a question like this before.
Moving on:
\begin{align}
\sum_{n=1}^{2015}\frac{2}{n}-\frac{2}{n+1} & = \left(\frac{2}{1}-\frac{2}{2}\right) +\left(\frac{2}{2}-\frac{2}{3}\right) + \ldots +\left(\frac{2}{2014}-\frac{2}{2015}\right) +\left(\frac{2}{2015}-\frac{2}{2016}\right)\\
&= 2 - \frac{2}{2016} \\
& = \frac {4030}{2016} \\
& = \frac {2015}{1008}
\end{align}
And I'm not sure if this is right. How does one check whether their summation is correct?
|
And I'm not sure if this is right. How does one check whether their summation is correct?
Replace "2015" with "10", do it by hand, and see if your results match with your general formula.
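Doing exactly that check in Python with exact rational arithmetic (a small sketch), for both $n=10$ and $n=2015$:

    from fractions import Fraction

    def direct(n):
        return sum(Fraction(2, k * (k + 1)) for k in range(1, n + 1))

    for n in (10, 2015):
        print(n, direct(n), Fraction(2 * n, n + 1))  # columns 2 and 3 agree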
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2537684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
The meaning of the notation E[X|Y]? Assume we have 2-dim data $(x,y)$: $\{(a_1,b_1),\dots,(a_n,b_n)\}$, where $X, Y$ are random variables.
The conditional expectation is $E(X|Y=b_j)=\sum_{i}a_iP(X=a_i|Y=b_j)$
There is a theorem: $E(E(X|Y))=E(X)$, but what does $E(X|Y)$ exactly mean? In the former formula, $Y=b_j$ is a condition, but "$X|Y$" on its own does not really seem to make sense?
|
$\mathbb E\left(X\mid Y\right)$ is by definition a random variable satisfying:
*
*It is measurable with respect to the $\sigma$-algebra generated by rv $Y$.
*$\int_{A}X\left(\omega\right)\rm P\left(d\omega\right)=\int_{A}\mathbb E\left(X\mid Y\right)\left(\omega\right)\rm P\left(d\omega\right)$
for each set $A$ in $\sigma$-algebra generated by rv $Y$.
Taking $A=\Omega$ in the second bullet point we get $\mathbb EX=\mathbb E[\mathbb E(X\mid Y)]$.
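A concrete finite illustration may help (a sketch with made-up probabilities): $\mathbb E(X\mid Y)$ is the random variable that takes the value $\mathbb E(X\mid Y=b_j)$ on the event $\{Y=b_j\}$, and averaging those values against the law of $Y$ recovers $\mathbb EX$.

    import numpy as np

    a = np.array([0.0, 1.0, 2.0])        # values of X
    p = np.array([[0.10, 0.20],          # p[i, j] = P(X = a_i, Y = b_j)
                  [0.30, 0.15],
                  [0.05, 0.20]])

    p_y = p.sum(axis=0)                  # marginal law of Y
    e_x_given_y = (a @ p) / p_y          # E(X | Y = b_j), one number per j
    print(e_x_given_y)                   # the values taken by the rv E(X | Y)
    print(e_x_given_y @ p_y)             # E(E(X | Y)) = 0.95
    print(a @ p.sum(axis=1))             # E(X)        = 0.95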
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2537929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Knight on chess board aperiodic? So suppose I have a knight on the corner of a chess board. I have to work out the average return time and also the limit of the probability that the knight is back in the corner after $n$ steps as $n$ tends to infinity. I have done this using a model of vertices and edges and using the stationary distribution, but the theorem I have used for the second part says that the distribution will converge to the stationary distribution if the Markov chain is aperiodic and irreducible. I don't think this chain is aperiodic, so surely the theorem would not apply?
|
It's true that the knight's walk is periodic; specifically, it has period 2. As such, any statement you can make about its limiting distribution will have to be one about the walk when sampled after an even number of steps; that is, you can address the distribution of $Y_n := X_{2n}$, where $X_n$ is the position of the walk after $n$ steps. Note that $Y_n$ is aperiodic, and hence the theorem applies. For the odd-numbered steps of $X_n$, the calculation is trivially $0$. (I'll leave as an exercise the details of what changes in the distributions when you consider $X_{2n}$ versus $X_n$.)
You may also find the theorem referenced in this question to be helpful.
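For concreteness, here is a small numerical illustration (a sketch assuming NumPy and the usual model: from each square the knight jumps to a uniformly random legal square). The stationary distribution is $\pi(v)=\deg(v)/2|E|$, so for a corner $\pi=2/336=1/168$, the mean return time is $168$, and $P(X_{2n}=\text{corner})\to 2\pi(\text{corner})=1/84$.

    import numpy as np

    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
    P = np.zeros((64, 64))
    for r in range(8):
        for c in range(8):
            nbrs = [(r + dr, c + dc) for dr, dc in moves
                    if 0 <= r + dr < 8 and 0 <= c + dc < 8]
            for nr, nc in nbrs:
                P[8 * r + c, 8 * nr + nc] = 1 / len(nbrs)

    Q = np.linalg.matrix_power(P, 2000)   # an even number of steps
    print(Q[0, 0], 1 / 84)                # both ~0.0119047...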
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Direct sum and Hom Let $R$ be a commutative ring with unity and let $M$ be a finitely generated noetherian R-module.
Can someone tell me how the isomorphism
$$
Hom_R(R\oplus R, M)\simeq Hom_R(R,M)\oplus Hom_R(R,M)\simeq M\oplus M
$$
is given? I know how the second isomorphism is given but I don't know about the first. Where does it send a map $R\oplus R\to M$ to?
|
The universal property of the biproduct is precisely the existence of the first isomorphism (assuming it is natural in $M$).
More generally, the coproduct of $X \amalg Y$ of two objects in any category is the object that satisfies an isomorphism natural in $Z$:
$$ \hom(X \amalg Y, Z) \cong \hom(X, Z) \times \hom(Y, Z) $$
In the case of $R$-modules, finite products and coproducts are both given by the biproduct; i.e. $X \amalg Y \cong X \oplus Y$ and $M \times N \cong M \oplus N$.
The projection
$$ \hom(X \amalg Y, Z) \to \hom(X, Z) $$
is precisely the map induced by the insertion $i_1 : X \to X \amalg Y$. That is, it sends a function $g : X \amalg Y \to Z$ to the function $g \circ i_1 : X \to Z$.
The same is true for the other projection. The isomorphism from the universal property is the map $g \mapsto (g \circ i_1, g \circ i_2)$. Concretely, identifying $\hom(R, M) \cong M$ by evaluating at $1$, the composite isomorphism sends $g : R \oplus R \to M$ to $(g(1,0),\, g(0,1)) \in M \oplus M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Tangent plane of the surface: $z-g(x,y)=0$ in $(x_0, y_0, z_0)$ How can I determine the equation of tangent plane of the surface:
$$z-g(x,y)=0$$
in the point:
$$(x_0, y_0, z_0)$$
in terms of implicit derivatives?
|
The gradient of a function is normal to its level sets. Proof:
Suppose $\vec r(t)=\langle x(t),y(t),z(t) \rangle$ parametrizes a curve lying in the level set $f(x,y,z)=c$, where $c \in \mathbb{R}$ is some constant. Then,
$$f(x(t),y(t),z(t))=c$$
since $\vec r(t)$ is on the level set. We also have,
$$\frac{d}{dt}f(x(t),y(t),z(t))=0$$
By the chain rule this is the same as,
$$x’(t) \cdot f_x+y’(t) \cdot f_y+z’(t) \cdot f_z=0$$
$$\vec r’(t) \cdot \nabla f=0$$
A normal vector is by definition orthogonal to a tangent vector. Hence $\nabla f=\langle f_x,f_y,f_z \rangle$ at some point $p$ describes a normal vector to the level set $f(x,y,z)=c$ at that point $p$.
Let $f(x,y,z)=z-g(x,y)$; then because $\nabla f$ is normal to the level sets of $f$, it is a normal to $z-g(x,y)=0$. Hence the equation of the tangent plane of $z-g(x,y)=0$ at $(x_0,y_0,z_0)$ is,
$$\nabla f \cdot \langle x-x_0,y-y_0,z-z_0 \rangle=0$$
Since $\nabla f=\langle -g_x,-g_y,1\rangle$, this is the familiar $z-z_0=g_x(x_0,y_0)(x-x_0)+g_y(x_0,y_0)(y-y_0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $\gcd(a,b) = \gcd (a+b, \gcd(a,b))$ I started by saying that $\gcd(a,b) = d_1$ and $\gcd(a+b,\gcd(a,b)) = d_2$
Then I tried to show that $\ d_1 \ge d_2, d_1 \le d_2$.
I know that $\ d_2 | \gcd(a+b, d_1)$ hence $\ d_2 \le d_1 $.
How do I prove that $\ d_2 \ge d_1$ ?
|
Don't worry about size so much as what they divide.
1) $\gcd(a,b)=d_1$ by definition divides $a$ and $b$ and thus $a+b$. And everything divides itself. So $\gcd(a,b)$ is a common divisor of $a+b$ and $\gcd(a,b)$ (and thus by definition is less or equal to the greatest common divisor of $a+b$ and $\gcd(a,b)$. $d_1 \le d_2$).
2) Likewise $\gcd(a+b, \gcd(a,b))= d_2$ divides $\gcd(a,b)$. So $d_2$ divides everything that $\gcd(a,b)$ divides so $d_2|a$ and $d_2|b$. So $d_2$ is a common divisor of $a$ and $b$ (and thus by definition is less than or equal to the greatest common divisor of $a$ and $b$. $d_2 \le d_1$.)
Obligatory fleablood essay: It's best not to think of the "greatest" in greatest common divisor as size, as in "five is bigger than four", but more in the sense of "completeness". The size of the GCD is of little interest or importance. What really matters is that you can't find a more "comprehensive" factor by adding an additional component factor to the GCD or by replacing a component factor of the GCD with another.
As a direct consequence the GCD is the largest possible factor but that is mostly and usually irrelevant.
Many texts do not define the "greatest common factor" as "the common factor that is greatest" but as "the common factor such that all other common factors divide it". I, personally, find this a troubling definition because i) it sounds much more obscure and obtuse than it actually is and ii) we must prove that there is such a number and that it is unique.
But I do like that it relates directly to what does and does not divide the two terms and how.
By that definition: 1) above says $d_1|d_2$ and 2) says $d_2|d_1$ so (assuming we are dealing with positive integers) $d_1 = d_2$.
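A quick empirical check of the identity (not a proof, of course; a sketch in plain Python):

    from math import gcd
    from random import randint

    for _ in range(10**4):
        a, b = randint(1, 10**9), randint(1, 10**9)
        assert gcd(a, b) == gcd(a + b, gcd(a, b))
    print("identity holds on all samples")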
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Since the limit of the sequence $b^n$ where $0<b<1$ is zero, can I conclude that $\lim (n/(n+1))^n = 0$? The question is in the title. For a fixed number $b \in \mathbb{R}$ where $0<b<1$, I can prove that the sequence $b^n$ converges to zero.
Proof
Choose $K(\epsilon) = \lfloor \frac{\ln \epsilon}{\ln b} \rfloor + 1$. Then for all $n \geq K$ we have $$ \ln b^n < \ln \epsilon \implies \left|b^n - 0 \right| < \epsilon $$
So, can I say something like... clearly we have , $ 0 < n/(n+1) < 1, \forall n\in \mathbb{N}$ so set $b = n/(n+1)$ then
$$ \lim \left(\frac{n}{n+1} \right)^n = 0$$
|
Since $0\lt\left(\frac12\right)^{1/n}\lt1$ the logic in your argument could be used to show that
$$
\lim_{n\to\infty}\left(\left(\frac12\right)^{1/n}\right)^n=0
$$
However, for each $n$, $\left(\left(\frac12\right)^{1/n}\right)^n=\frac12$. The theorem you state assumes a fixed $b$; it does not necessarily hold for a varying $b$.
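In fact the limit is neither $0$ nor $\frac12$ here: since $(n/(n+1))^n = 1/(1+1/n)^n \to 1/e$, the sequence tends to $1/e \approx 0.3679$. A quick numerical check (a sketch):

    import math

    for n in (10, 1000, 10**6):
        print(n, (n / (n + 1)) ** n)   # 0.3855..., 0.3680..., 0.3678...
    print(1 / math.e)                  # 0.36787944...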
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Permutations of 9 total letters, using at least 7 letters. Original Question:
How many different strings can be made from the letters in "EVERGREEN"
using at least 7 of it's letters.
Note that the 4 "E"s and 2 "R"s are indistinguishable.
I understand how to use permutations with repetition to determine the number of ways for 9 total letters, but the "at least 7 letters" part stumps me.
Without that condition, it would be 9!/4!2! possible permutations.
Now, should I simply subtract some other number of ways? Perhaps the number of permutations that use fewer than 7 letters?
How should I proceed? Thanks!
|
Using 9 letters, the number of different words is $\frac{9!}{4!\,2!}$.
To find how many words you can form with 8 letters, you have to perform the same calculation on each of the following sets of letters:
EEEVRRGN, EEEERRGN, EEEEVRGN, EEEEVRRN, EEEEVRRG
e.g. for the first one the number of words is $\frac{8!}{3!\,2!}$, and so on.
Then you have to add to the previous results the same calculation for 7 letters, considering all the different sets, namely
EEVRRGN, EEERRGN, EEEVRGN, EEEVRRN, EEEVRRG, EEEERGN, EEEERRN, EEEERRG, EEEEVGN, EEEEVRN, EEEEVRG, EEEEVRR
I don't know whether there exists a different direct way
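One can at least confirm the count by brute force (a sketch in plain Python; the `set` removes the duplicates caused by the repeated letters):

    from itertools import permutations

    letters = "EVERGREEN"
    counts = {k: len(set(permutations(letters, k))) for k in (7, 8, 9)}
    print(counts)                 # e.g. counts[9] == 7560 == 9!/(4!*2!)
    print(sum(counts.values()))   # the answer to the original question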
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Inverse of matrix with nonnegative entries I am interested in matrices with the property that both $A$ and $A^{-1}$ have nonnegative entries. The only such matrices I could construct were diagonal matrices, and my question is whether these are the only such examples.
What I can say about such matrices is that they must preserve the quadrant
$$
Q^+ = \{x\in\mathbb{R}^n \mid x_i \geq 0 \}.
$$
That is, $x\in Q^+$ if and only if $Ax\in Q^+$. This seems rather unlikely unless $A$ preserves the axes, that is, unless $A$ is diagonal. But I can't seem to turn this into a proof.
EDIT
Cameron Buie made the nice observation that permutation matrices also work.
So I wonder: are there any examples with more than $n$ nonzero entries? What about 2x2 examples with at least 3 nonzero entries?
|
Of course, il maestro @Qiaochu Yuan is right; and, of course, he knows that there exists an elementary proof!
Let $A=[a_{p,q}],A^{-1}=[b_{p,q}]$. We consider the $i^{th}$ row of the matrix $A$ and we assume that there are $j\not= k$ s.t. $a_{i,j},a_{i,k}\not= 0$. Then $AA^{-1}=I$ implies that, for every $p\not= i$, $b_{j,p}=b_{k,p}=0$. Thus, the rows $j,k$ of $B$ are proportional and $B$ is not invertible, that is contradictory. Using $A^{-1}A=I$, we show the same result for the columns of $A$ and we are done.
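In other words, such an $A$ has exactly one nonzero entry in each row and column, i.e. it is a permutation matrix times a positive diagonal matrix. A small NumPy illustration (a sketch):

    import numpy as np

    P = np.eye(3)[[2, 0, 1]]              # a permutation matrix
    D = np.diag([1.0, 2.0, 5.0])          # a positive diagonal matrix
    A = P @ D
    print(A)                              # nonnegative
    print(np.linalg.inv(A).round(12))     # its inverse is nonnegative too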
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2538937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Need help in understanding the proof for Theorem 11.17 in Baby Rudin
I don't understand why the first two equality signs hold in the proof.
|
$g(x) > a \iff \sup_n f_n(x) > a$, i.e. there is some $n$ for which $f_n(x) > a$. So, $$\{x \mid g(x) > a\} = \bigcup_{n=1}^\infty \{x \mid f_n(x) > a\}.$$
The second equality sign follows by definition of the $\limsup$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Best approximation to $t^2$ in first-dgree polynmial in $L^1[0,1]$ Let $u(t)=t^2$. Find the best approximation $v(t)$ in the form of $v(t)= ct+d $ (with $c,d\in\mathbb{R}$) to $u(t)$ in $L^1[0,1]$.
So we need to find
$$\inf\limits_{c,d\in\mathbb{R}} \int_0^1 \left|t^2-ct-d\right|dt$$
I've tried to approach this problem from different angles: completing the square, taking derivatives $\frac{\partial}{\partial c}$ and $\frac{\partial}{\partial d}$, but nothing has been helpful so far.
Would appreciate a hint.
|
As user7530 commented, consider the roots
$$t^2-ct-d=0 \implies t_{1,2}=\frac{1}{2} \left(c\pm\sqrt{c^2+4 d}\right)$$ So
$$I=\int_0^1 \left|t^2-ct-d\right|\,dt=\int_0^{t_1}(t^2-ct-d)\,dt-\int_{t_1}^{t_2}(t^2-ct-d)\,dt+\int_{t_2}^{1}(t^2-ct-d)\,dt$$
Compute each of these three integrals (assuming $0\le t_1\le t_2\le 1$, which is the relevant case for the minimiser); for sure, the formulae are not the most pleasant but the total is quite nice
$$I=\frac{1}{3} c^2 \sqrt{c^2+4 d}+\frac{4}{3} d \sqrt{c^2+4 d}-\frac{c}{2}-d+\frac{1}{3}$$ Now, compute $\frac{\partial I}{\partial c}$ and $\frac{\partial I}{\partial d}$ which, after simplification, are nice and simple. Set these equal to $0$ and solve.
I am sure that you can take it from here.
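If you want to check the outcome numerically before grinding through the algebra, a direct minimization (a sketch, assuming SciPy; the integral is approximated by an average over a fine grid) lands near $c=1$, $d=-3/16$, i.e. $v(t)=t-\tfrac{3}{16}$, with minimal error $\approx\tfrac1{16}$:

    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0, 1, 200001)

    def obj(p):
        c, d = p
        return np.abs(t**2 - c * t - d).mean()   # ~ the integral over [0, 1]

    res = minimize(obj, x0=[0.5, 0.0], method="Nelder-Mead")
    print(res.x, res.fun)   # ~[1.0, -0.1875], ~0.0625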
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The closure of a subspace
Let $X$ be a topological space. If $A$ is a subspace of $X$ we denote its closure by $\overline A$. For each point $x \in X$ the family $N_x$ of neighbourhoods of $x$ is a filter on $X$, the $\textit{neighbourhood filter}$ of $x$.
This is from Bell and Slomson (1969) Models and Ultraproducts: an introduction, p.22, who prove $\textit{inter alia}$ that, where $F$ is a filter on $X$, (i) and (ii) are equivalent:
(i) $x \in \bigcap\{\overline A: A \in F\}$
(ii) For all $A \in F$ and $U \in N_x, A \cap U \neq \emptyset$
By $\overline A$ do they mean the intersection of all the closed sets of the subspace topology $(A, \tau_A$) that contain $A$; or, alternatively, do they simply mean the set of closed sets of the subspace topology? That is, suppose we have a topological space $(X, \tau)$ with $X = \{a, b, c, d, e\}$ and topology $\tau = \{X, \emptyset, \{a\}, \{d\}, \{a, d\} \}$, and suppose we have a subspace topology $(A, \tau_A$) of the topological space $(X, \tau)$, with $A= \{a, c\}$ and $\tau_A = \{A, \emptyset,\{a\} \}$. The closed sets of $(A, \tau_A$) are then:
$A - A = \emptyset, \hspace{0.8cm} A - \emptyset = A, \hspace{0.8cm} A - \{a\} = \{c\}$
The intersection of all these closed sets containing $A$ is then $A$. So the closure by $\overline A$ in this case would be $A$. Is that right?
Or, alternatively, by the "closure by $\overline A$" do Bell and Slomson just mean the set of closed sets of $(A, \tau_A$), with $A= \{a, c\}$; i.e, the set containing:
$A - A = \emptyset, \hspace{0.8cm} A - \emptyset = A, \hspace{0.8cm} A - \{a\} = \{c\}$
|
When you write $\overline{A}$ in your last sentence, do you mean the closure in the subspace topology? This would be $A$, as you said, and your argumentation is correct. Or the closure in the topology of $X$? In this case: $\overline{A} = \{a,b,c,e\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Prob. 4, Chap. 7, in Baby Rudin: For what values of $x < 0$ does this series converge (absolutely)? Here is part of Prob. 4, Chap. 7, in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:
Consider
$$ f(x) = \sum_{n=1}^\infty \frac{1}{1+n^2 x }. $$
For what values of $x$ does the series converge absolutely? . . .
My Attempt:
For $x = 0$, we have
$$ \lim_{n \to \infty} \frac{1}{1+n^2 x} = 1\neq 0,$$
and so the series fails to converge, by Theorem 3.23 in Rudin.
For any $x > 0$, we see that
$$0 < \frac{1}{1+n^2 x} < \frac{1}{n^2 x} = \frac{1}{x} \frac{1}{n^2},$$
and since $\sum \frac{1}{n^2}$ is convergent, so is our series, by Theorem 3.25 (a) in Rudin.
What about the values of $x < 0$?
And, what about complex values of $x$?
|
For any $x\neq 0$ we have $|a_n|=\left|\dfrac 1{1+xn^2}\right|\sim \dfrac 1{|x|n^2}$ which is a term of a convergent series, so the initial series is absolutely convergent.
Of course this criterion applies for values $n\gg1$, but as kolobokish noticed, there is an issue when $x\in A=\{-\frac{1}{k^2}\mid k\in\mathbb N^*\}$: for such $x$ one term of the series is undefined.
If $x\notin A\cup\{0\}$ then $\sum\limits_{n=1}^{\infty}a_n$ is absolutely convergent.
For $x\in A$ then we can only speak about $\sum\limits_{n=n_0+1}^{\infty}a_n$ or $\sum\limits_{n=1\\n\neq n_0}^{\infty}a_n$ for $n_0=\sqrt{-\frac 1x}$.
"What is the meaning of the symbol $\sim$ in your answer? Can you please elaborate?"
This symbol means asymptotically equivalent:
https://en.wikipedia.org/wiki/Asymptotic_analysis
$f(x)\sim g(x)\iff \lim\limits_{x\to\infty}\dfrac{f(x)}{g(x)}=1$ or here with sequences $\lim\limits_{n\to\infty}\dfrac{a_n}{b_n}=1$.
There is this theorem for comparing series:
*
*Let $\sum a_n$ and $\sum b_n$ be two series with positive terms
*If $a_n\sim b_n$ then $\sum b_n$ converges $\iff\sum a_n$ converges
By reversing sign, it also works for series with only negative terms, in our case since we study $|a_n|$, so the question of sign is trivially verified.
When we have a series with a complicated term $a_n$ we try to reduce it to a known convergent series, here $\sum\frac 1{n^2}$ by noticing $a_n=\dfrac 1{n^2x}\times\underbrace{\dfrac 1{1+\underbrace{\frac 1{n^2x}}_{\to 0}}}_{\to 1}$ so $a_n\sim \frac 1{n^2x}$.
Search your book, I'm pretty sure this theorem is there somewhere, maybe next chapter.
Although this comes from the fact that if the sequences are equivalent (i.e. $\frac {a_n}{b_n}\to 1$), one can find, for $n\ge n_0\gg 1$: $c_1 b_n\le a_n \le c_2 b_n$
And the partial sums verify the same inequalities $c_1\sum\limits_{n=n_0}^Nb_n\le\sum\limits_{n=n_0}^Na_n\le c_2\sum\limits_{n=n_0}^Nb_n$
Thus the series are of the same nature (remember that series with positive terms have $\nearrow$ partial sums).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Normal operator close in norm to projection, what about the spectrum? Let $T$ be a normal operator on a Hilbert space and suppose $T$ is close in norm to a projection $P$. Can I say that the spectrum of $T$ is contained in small balls around $0$ and $1$?
|
First of all $\rho(T)=||T||$ because $T$ is normal ($\rho$ is the spectral radius). So by the triangle inequality $\rho(T)\leq1+c$, given $||T-P||\leq c$.
Now the formula in the comment, after putting the factors in the correct order,
$$(T-\lambda I)^{-1}=(P-\lambda I)^{-1}\left((T-P)(P-\lambda I)^{-1}+I\right)^{-1}$$
makes sense, using the Neumann series, if $(T-P)(P-\lambda I)^{-1}$ is less than $1$ in norm.
If we let $A$ be the closed ball centered at $0$ of radius $1+c$, with two small open balls removed around $0$ and $1$, we have $\sup_A ||(P-\lambda I)^{-1}||<\infty$ by compactness. So if $c$ is small enough we have the result. The smaller the open balls, the smaller the $c$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Are all connected and locally integral affine schemes globally integral? In these notes, on p. 2 Section 4, Kedlaya claims an affine scheme is integral if and only if it is connected and every local ring is an integral domain. But elsewhere I have seen that this requires a Noetherian condition on the affine scheme, e.g. problem 19 here, problem 56D in this.
What is wrong with Kedlaya's proof? Or is the result actually true without assuming Noetherian?
|
The error is in the exercise referred to in the first paragraph: it is not necessarily true that the set of $x$ such that $f$ is nonzero in $O_{X,x}$ is open. In fact, this can fail even if $A$ is Noetherian. For instance, let $k$ be a field and take $A=k[x,y]/(xy)$, so $X$ is the union of two lines intersecting at a point. Then the set where $x$ is locally nonzero is one of the lines, which is not open since it contains no neighborhood of the intersection point.
For a counterexample to the result without the Noetherian hypothesis, see https://stacks.math.columbia.edu/tag/0568.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
$ \lim_{n \to \infty} \sqrt[n]{a^n+1}$ with $a \ge 0$ I'm a bit rusty with limits
$$ \lim_{n \to \infty} \sqrt[n]{a^n+1}$$ with $a \ge 0$.
The solution in my book is $\max \left \{ 0,1 \right \}$ but
my final results are:
1) $+\infty$ if $0<a<1$
2) $1$ if $a>1$
3) $+\infty$ if $a=1$
|
The correct result is $\max\{a, 1\}$, so I suppose it's a typo or a miscopied expression. To see this, note that
*
*If $a \le 1$, then $1 \le a^n + 1 \le 2$ for all $n$. Taking $n$-th roots and a limit gives limit $1$.
*If $a > 1$, then
$$\left(a^n + 1\right)^{1/n} = a \left(1 + \frac{1}{a^n}\right)^{1/n}$$
Now apply the previous case with $1/a$ to see why this tends to $a \cdot 1 = a$.
The key idea here is that if $a < 1$, the term $a^n$ is negligible once $n$ is large. If $a > 1$, the term $1$ is negligible once $n$ is large.
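A quick numerical illustration of $\max\{a,1\}$ (a sketch; the computation is done in log-space to avoid overflowing $a^n$):

    import numpy as np

    n = 1000
    for a in (0.5, 1.0, 2.0, 5.0):
        val = np.exp(np.logaddexp(n * np.log(a), 0.0) / n)   # (a^n + 1)^(1/n)
        print(a, val)   # ~1, ~1, ~2, ~5, i.e. max(a, 1)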
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
How to visualise the curve $y=\ln x$ rotating around its $y$-axis
Let the region bounded by the curve $y=\ln x$, the line $x=e$ and the $x$-axis rotate around the $y$-axis. Determine the volume of the resulting solid of revolution.
First thing, I drew the graph:
But then I got stuck on how to imagine/visualise the curve rotating around the $y$-axis. How should I think when visualising the function $\ln x$ rotating around the $y$-axis?
EDIT:
In the comment section it was written that "if you rotate the rectangle $[0,e]×[0,1]$, it makes a cylinder which volume is easy to compute", but I am not very familiar with the notation for $[0,e]×[0,1]$.
|
Take a look here with WolframAlpha at the surface $$f=\log\left(\sqrt{x^2+y^2}\right)$$ which is exactly the curve $y=\ln x$ revolved around the vertical axis:
http://m.wolframalpha.com/input/?i=plot+log%28%28x%5E2%2By%5E2%29%5E.5%29
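(For the edit in the question: $[0,e]\times[0,1]$ is just Cartesian-product notation for the rectangle $\{(x,y): 0\le x\le e,\ 0\le y\le 1\}$.) If you prefer to build the picture yourself, here is a minimal Matplotlib sketch (not from the original answer): each height $y\in[0,1]$ sweeps out a circle of radius $x=e^y$, since $y=\ln x$ means $x=e^y$.

    import numpy as np
    import matplotlib.pyplot as plt

    y = np.linspace(0, 1, 50)
    th = np.linspace(0, 2 * np.pi, 100)
    Y, TH = np.meshgrid(y, th)
    R = np.exp(Y)                          # radius at height y

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(R * np.cos(TH), R * np.sin(TH), Y, alpha=0.6)
    ax.set_zlabel("y")
    plt.show()

For reference, the washer method then gives $V=\pi\int_0^1 (e^2-e^{2y})\,dy=\pi(e^2+1)/2$.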
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2539911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Without using a calculator and logarithm, which of $100^{101} , 101^{100}$ is greater? Which of the following numbers is greater, without using a calculator or logarithms?
$$100^{101} , 101^{100}$$
My try : $$100=10^2\\101=(100+1)=(10^2+1)$$
So :
$$100^{101}=10^{2(101)}\\101^{100}=(10^2+1)^{100}=10^{2(100)}+N$$
Now what ?
|
You want to determine if $\left(\frac{101}{100}\right)^{100}\geq 100$. But we know that $ \left(1+\frac{1}{n}\right)^n$ is always less than $e$, so $\left(\frac{101}{100}\right)^{100}<e<100$, and hence $101^{100}<100\cdot 100^{100}=100^{101}$.
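Not in the spirit of the question, but Python's exact integer arithmetic confirms the conclusion in one line:

    print(100**101 > 101**100)   # True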
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2540063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 1
}
|
Parametric Equation for A Wiggly Tube I need to form a shape where the side view in the $xz$-plane consists of parallel inverse sines, and the surface is a pipe with circular cross-sections. Is there a name for this shape?
I tried messing around with ParametricPlot3D in Mathematica, but couldn't figure it out. I tried using the equation of a circle for two of the slots and inverse sine for the other.
Fig. 1:
Fig. 2:
|
Since the cross section is a circle and the cross sections seen from the side are parallel arcsine curves, we can form the shape as a continuous family of circles whose height changes as we go along the $x$-axis. With the circles parallel to the $yz$-plane and the arcsines lying in the $xz$-plane we get this parametric function:
$x=t, y = r_0\cos(\theta),z=r_0\sin(\theta)-\sin^{-1}(t)$ with $0 \leq \theta \leq 2\pi$ and $-1 \le a \leq t \leq b \le 1$ (the restriction is needed for $\sin^{-1}$ to be defined)
So with $r_0 = 1, a = -1$, and $b = 1$ we have the graph:
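A quick way to render this parametrization outside Mathematica (a minimal Matplotlib sketch with $r_0=1$):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(-1, 1, 200)
    th = np.linspace(0, 2 * np.pi, 60)
    T, TH = np.meshgrid(t, th)
    X, Y, Z = T, np.cos(TH), np.sin(TH) - np.arcsin(T)

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot_surface(X, Y, Z, alpha=0.7)
    plt.show()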
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2540190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Understanding Legendre-fenchel Transform, looking for an easy example and intuition Looking for help in understanding this transform. I have no background in real analysis but need this stuff for my research.
I hope someone can shed some light on the intuition behind this transform; even better if you can provide an example I can work with.
Thanks in advance.
I already found some information related to it here
But I am still not able to fully understand it.
Thanks.
|
Will answer my own question since I found a very good reading material which helped me to fully understand the topic.
If interested, take a look here.
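Independently of the linked notes, the definition itself is easy to experiment with: $f^*(p)=\sup_x\,(px-f(x))$. For $f(x)=x^2$ the supremum is attained at $x=p/2$, giving $f^*(p)=p^2/4$, and a crude grid search reproduces this (a sketch):

    import numpy as np

    x = np.linspace(-10, 10, 200001)
    f = x**2

    def conjugate(p):
        return np.max(p * x - f)   # sup over the grid

    for p in (-3.0, 0.0, 2.0, 5.0):
        print(p, conjugate(p), p**2 / 4)   # the last two columns agree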
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2540332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|