$A$ is homeomorphic to $A\times A$ Is there any infinite topological space $A$ which is connected such that $A$ and $A\times A$ are homeomorphic?
| Yes; probably the easiest nice one is $\mathbb{R}^\mathbb{N}$ with the product topology (which is metrizable). If you want ugly spaces, the indiscrete topology on $\mathbb{N}$ also works.
In fact, for any connected space $C$, the space $C^\mathbb{N}$ will be an example.
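To spell out why $C^{\mathbb N}$ works (a standard argument): any bijection of index sets induces a homeomorphism of the corresponding product spaces, so interleaving coordinates gives

```latex
C^{\mathbb{N}} \times C^{\mathbb{N}}
\;\cong\; C^{\mathbb{N} \sqcup \mathbb{N}}
\;\cong\; C^{\mathbb{N}},
\qquad
\bigl((x_n)_n,\,(y_n)_n\bigr) \;\longmapsto\; (x_0, y_0, x_1, y_1, \dots),
```

and $C^{\mathbb N}$ is connected because an arbitrary product of connected spaces is connected.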
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Suppose that X is a cell complex with $\tilde{H}_{*}(X) = 0.$ Prove that the suspension $SX$ is contractible. The question is:
Suppose that X is a cell complex with $\tilde{H}_{*}(X) = 0.$ Prove that the suspension $SX$ is contractible.
I feel like this link contains a part (or maybe all) of the solution to this question; am I correct? If so, I just need a recap of the general idea of the solution, please. If not, could you give me a hint for the solution?
Suspension: if $X$ is $(n-1)$-connected CW, is $SX$ $n$-connected?
| Sketch of solution:
1) The hypothesis implies that $X$ is connected (i.e. $0$-connected).
2) The given link then implies (or one can argue directly via van Kampen) that $SX$ is $1$-connected.
3) The suspension isomorphism, the Hurewicz theorem and the Whitehead theorem allow us to conclude.
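Slightly expanded, the chain of standard results in step 3 is:

```latex
\tilde{H}_{k+1}(SX) \;\cong\; \tilde{H}_k(X) \;=\; 0
\quad \text{for all } k \quad \text{(suspension isomorphism)},
```

so $SX$ is a simply connected CW complex with vanishing reduced homology; Hurewicz then gives $\pi_k(SX)=0$ for all $k$, and Whitehead's theorem (applied to $SX \to *$) shows $SX$ is contractible.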
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Normal subgroup of $S_3$? For the subgroup $N$ of $S_3$, $N = \{(1),(123),(132)\}$, I calculate that $(13)N = \{(13),(123),(23)\}$ and $N(13) = \{(13),(23),(12)\}$. Shouldn't this show that $N$ is not a normal subgroup, as opposed to what's printed here?
| Your computation of $(13)N$ is wrong. The $(123)$ should be $(12)$. One way to check this is that every permutation in $N$ is even, so the coset should consist only of odd permutations.
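As a quick sanity check (a small throwaway script; permutations are written as one-line images and composition is applied right-to-left), one can verify the corrected left coset and that it agrees with the right coset:

```python
# Permutations of {1,2,3} as tuples p = (p(1), p(2), p(3)).
def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i] - 1] for i in range(3))

e    = (1, 2, 3)   # identity
c123 = (2, 3, 1)   # (123)
c132 = (3, 1, 2)   # (132)
t13  = (3, 2, 1)   # (13)

N = {e, c123, c132}
left  = {compose(t13, n) for n in N}   # (13)N
right = {compose(n, t13) for n in N}   # N(13)

# Both cosets are {(13), (12), (23)}; in particular (123) is not in (13)N.
assert left == right == {(3, 2, 1), (2, 1, 3), (1, 3, 2)}
assert c123 not in left
```

As the answer predicts, the coset consists entirely of odd permutations.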
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Normally distributed rain drops problem About 50% of raindrops land downtown, where downtown is a perfectly circular region around the city centre. Assuming the coordinates of the raindrops are independent and distributed according to the standard normal about the city centre, what percentage of raindrops lands within a radius twice that of downtown?
Since around 50% of the raindrops land downtown, and the larger disc contains downtown, clearly the percentage landing within a radius twice that of downtown must be at least 50. Beyond this I am not sure how to approach/solve this problem; can someone help? Thanks!
| I believe you can use standard z-tables (e.g. https://en.wikipedia.org/wiki/Standard_normal_table) to work out the various percentages for the radius in question. The z-table will tell you the probability contained by the standard normal curve from 0 to whatever z-value you choose. Now, if you make the equivalence of rain percentage to probability and z to radius, then you should be able to solve the problem.
For example, if you look at the z-table for the standard normal, you will see that the z-value for 0.25 (25% - remember this is going to be 1/2 the total 50% because it is measured from the center), is about z=0.68. Now, if you double this (double the radius), you will get $z=2(0.68)=1.36$. This z-value corresponds to 0.413 or 41.3%. Then, you'll need to double this to get 82.6% which I believe should correspond to the final answer.
I hope this helps.
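As a cross-check under one reading of the problem (a hypothetical sketch, assuming the two coordinates are independent standard normals, so the radial distance $R=\sqrt{X^2+Y^2}$ is Rayleigh-distributed with $P(R\le r)=1-e^{-r^2/2}$), the exact two-dimensional computation gives a different figure than the one-dimensional z-table estimate above:

```python
import math

def p_within(r):
    # P(R <= r) for R = sqrt(X^2 + Y^2), X and Y independent standard normal
    return 1 - math.exp(-r * r / 2)

r0 = math.sqrt(2 * math.log(2))   # radius with p_within(r0) == 0.5 ("downtown")
p2 = p_within(2 * r0)             # fraction landing within twice that radius

assert abs(p_within(r0) - 0.5) < 1e-12
assert abs(p2 - 15 / 16) < 1e-12  # 93.75% under this 2-D model
```

Under these assumptions the doubled radius captures $1-2^{-4}=15/16$ of the drops.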
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
$X$ is a Hausdorff space and $f:X \rightarrow X$ a continuous function. Prove that $\{x \in X \mid f(x)=x\}$ is closed. (Is my proof correct?) Suppose $X$ is a Hausdorff space and $f:X \rightarrow X$ a continuous function. Prove that the set $\{x \in X \mid f(x)=x\}$ is closed in $X$.
I've already proved this proposition:
Let $X,Y$ be topological spaces with $Y$ Hausdorff, and let $f,g:X \rightarrow Y$ be continuous maps. Then the set
$$
\{x \in X \mid f(x)=g(x)\}
$$
is closed in $X$.
My question is whether I can use this proposition to prove the first statement?
I think I can, because if we let $g:X \rightarrow X$ be the identity map, then $g$ is continuous as well, and the codomain of $f$ and $g$ is obviously Hausdorff, so the conditions in the above proposition seem to be satisfied.
| As an alternative proof for the general case (with both $f$ and $g$, and yes, we can of course take $g=\textrm{id}_X$ to derive the first from the second, as identities are always continuous), we can use nets: if $(x_i)_{i \in I}$ is a net in $X$ converging to some $x \in X$ and all $x_i, i \in I$ are in $C:=\{x\mid f(x)=g(x)\}$ then
we know that for all $i$, $f(x_i)=g(x_i)$ by definition of $C$ and so, as $f$ and $g$ are continuous:
$$\lim_i f(x_i) = f(\lim_i x_i) = f(x) \text{ and } \lim_i g(x_i)=g(\lim_i x_i)=g(x)$$ and as the nets $(f(x_i))_i$ and $(g(x_i))_i$ in $Y$ are the same by hypothesis and $Y$ is Hausdorff so that limits of nets are unique: $f(x)=g(x)$ and so $x \in C$ as well.
So nets from $C$ can only converge to members of $C$, which implies $C$ is closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Boundedness of a linear operator on Hilbert space How can I prove that a linear operator $A$ on a Hilbert space $H$ that satisfies
$$ \langle x,Ay\rangle= \langle y,Ax\rangle$$
for all $x,y\in H$ is bounded (i.e., $\|Ax\|\leq c\|x\| $ for some constant $c>0$)?
| Let $B_{x}:y\rightarrow\left<y,Ax\right>$, then $B_{x}$ is a linear functional (beware that one shouldn't take $\left<Ax,y\right>$ because then it is conjugate linear, not linear), and $|B_{x}(y)|=\left|\left<y,Ax\right>\right|=\left|\left<x,Ay\right>\right|\leq\|x\|\|Ay\|$.
For fixed $y$, for all $x$ with $\|x\|\leq 1$, we have $|B_{x}(y)|\leq\|Ay\|$, so
\begin{align*}
\sup_{\|x\|\leq 1}|B_{x}(y)|\leq\|Ay\|<\infty.
\end{align*}
By Uniform Boundedness Principle we have
\begin{align*}
\sup_{\|x\|\leq 1}\|B_{x}\|<\infty,
\end{align*}
so
\begin{align*}
\|A\|=\sup\{\left|\left<y,Ax\right>\right|: \|x\|\leq 1, \|y\|\leq 1\}=\sup_{\|x\|\leq 1}\|B_{x}\|<\infty.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is a field of odd characteristic? I am having trouble finding information about what a field of odd characteristic is.
An example:
Let $K$ be a field of odd characteristic. In [5], Bernstein and Lange
introduce Edwards curves defined by $x^2 + y^2 = c^2(1 + dx^2y^2)$
where $c, d ∈ K$ with $cd(1 − dc^4) \neq 0$. In [1], this form is
generalized to twisted Edwards form defined by
$$ax^2 + y^2 = 1 + dx^2y^2$$
| It is the smallest integer $p>0$ satisfying
$$0_K=\underbrace{1_K + 1_K \dots + 1_K}_{p \text{ times.}}$$
which is odd (specifically, an odd prime, since the characteristic of a field is always $0$ or a prime; if no such $p$ exists at all, the field has characteristic $0$, which is excluded here). There is a chance the writer was commenting on how weird the field is, but I would not think so (this is a joke.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove or disprove that in an 8-element subset of $\{1,2,…,30\}$ there must exist two $4$-element subsets that sum to the same number. How can I show that for any set of $8$ distinct positive integers not exceeding $30$, there must exist two distinct $4$-element subsets that sum up to the same number?
I tried using the pigeonhole principle, but I still don't get it.
There are $$\binom {8}4=70$$ four-element subsets of an $8$-element set.
The least possible sum is $1+2+3+4=10$ and the greatest possible sum is $27+28+29+30=114$. Hence, there are $105$ sums.
I have no idea how to continue because the number of possible integer sums is greater than the number of four-element subsets.
The $4$-element subsets are not necessarily non-overlapping.
Edit:
For example, from $X=\{1,3,9,11,15,20,24,29\}$ , we can choose two different subsets $\{1,3,15,24\}$ and $\{3,9,11,20\}$ because they both sum up to $43$.
| The statement is false.
Take for example a subset with 7 odd numbers and 1 even number.
Then we divide this subset into two disjoint 4-element subsets. One of them will have 4 odd numbers, whose sum is an even number, while the other will have 1 even number and 3 odd numbers, which add up to an odd number.
Example: 1,3,5,7,9,11,13,14
The total sum is 63, which is odd, so two disjoint 4-element subsets can never have equal sums. (Note that this argument only rules out disjoint pairs.)
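Since the question (see the edit) explicitly allows the two $4$-element subsets to overlap, a brute-force check of the example set is worthwhile; a short sketch:

```python
from itertools import combinations

S = [1, 3, 5, 7, 9, 11, 13, 14]   # the proposed counterexample set

# Group all 4-element subsets by their sum and look for collisions.
by_sum = {}
for c in combinations(S, 4):
    by_sum.setdefault(sum(c), []).append(c)

collisions = {s: subs for s, subs in by_sum.items() if len(subs) > 1}

# Overlapping equal-sum pairs do exist, e.g. 1+3+13+14 = 1+5+11+14 = 31,
# so the parity argument does not settle the overlapping case.
assert collisions
assert (1, 3, 13, 14) in by_sum[31] and (1, 5, 11, 14) in by_sum[31]
```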
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3465258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 3
} |
Non-emptiness of the relative interior in the infinite-dimensional case Let $E$ be a finite-dimensional space and let $C \subseteq E$ be a nonempty convex set. Then, the relative interior of $C$, which we denote by $\mbox{ri}(C)$, is nonempty.
If space $E$ is infinite-dimensional, does the result above still hold?
$$ \operatorname{ri}(C)= \{x \in \operatorname{aff}(C): B[x,\epsilon] \cap \operatorname{aff}(C) \subseteq C \text{ for some } \epsilon>0\}$$
| If I understood your notation correctly, then the subset $C=\prod_{n=1}^{\infty} [-2^{-n};2^{-n}]$ of the space $\ell_2$ has empty relative interior, because $\operatorname{aff} C$ contains $x+e_n$ for each $x\in C$ and each $n$, where $e_n$ is the standard unit vector of $\ell_2$, but $x+2^{2-n} e_n\not\in C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3465446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find Mixed Nash Equilibrium and Correlated Equilibrium How do I find the mixed Nash equilibrium and correlated equilibrium of the following game? It seems impossible to find without any concrete numbers.
Given that $ M\gg 1\gg\epsilon $
\begin{pmatrix}
(M,M)& (1+\epsilon,1+\epsilon)&(2\epsilon,2\epsilon)&(\epsilon,\epsilon) \\
(1+\epsilon,1+\epsilon)&(1,1)&(\epsilon,\epsilon)&(0,0)\\
(2\epsilon,2\epsilon)&(\epsilon,\epsilon)&(M,M)&(1+\epsilon,1+\epsilon)\\
(\epsilon,\epsilon)& (0,0)&(1+\epsilon,1+\epsilon)&(1,1)
\end{pmatrix}
I tried to compute, but I failed to get a solution for the probabilities.
| Let $A$ be the payoff matrix you defined. Let $x = [x_1, x_2, x_3, x_4]^{\top}$. Let $y = Ax.$
The candidate to be a MNE is the vector $x$ such that:
$$y_1 = y_2 = y_3 = y_4.$$
By setting $x_4 = 1 -x_1-x_2-x_3$ and solving the previous system, one gets:
$$\begin{cases}
x_1 = \frac{\epsilon}{2\epsilon-M+1}\\
x_2 = \frac{1}{2}\frac{1-M}{2\epsilon-M+1}\\
x_3 = \frac{\epsilon}{2\epsilon-M+1}\\
\end{cases}.$$
In order for this vector to be a MNE, you should find out which are the values of $M$ and $\epsilon$ such that $x_1 \in (0,1)$, $x_2 \in (0,1)$ and $x_3 \in (0,1)$.
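One can check the candidate numerically for concrete parameter values (a small sketch; $M=10$ and $\epsilon=0.01$ are arbitrary choices for illustration). The indifference system is satisfied, but note that $x_1=x_3<0$ here, so for these values the candidate is not a valid mixed strategy:

```python
M, eps = 10.0, 0.01
A = [[M,       1 + eps, 2 * eps, eps    ],
     [1 + eps, 1,       eps,     0      ],
     [2 * eps, eps,     M,       1 + eps],
     [eps,     0,       1 + eps, 1      ]]

D = 2 * eps - M + 1
x = [eps / D, 0.5 * (1 - M) / D, eps / D, 0.0]
x[3] = 1 - x[0] - x[1] - x[2]

y = [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]

assert abs(sum(x) - 1) < 1e-12
assert all(abs(y[i] - y[0]) < 1e-9 for i in range(4))  # indifference holds
assert x[0] < 0  # ...but x_1 < 0, so this is not a valid MNE for these values
```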
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3465610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does a sequence of unhappy numbers always loop back to itself? Given a positive integer $19$, it is said to be happy, because $1^2 + 9^2 = 82$, $8^2+2^2 = 68$, $6^2 + 8^2 = 100$, $1^2 + 0^2 + 0^2 = 1$. At each step we simply sum the square of all its digits, and if at some step the sum is equal to $1$, then we say this number is happy, otherwise unhappy. It is true that for an unhappy number $n$, it will always loop back to itself. Why is this true?
| This is untrue. $2$ is unhappy and does not loop back to itself: $$2\to4\to16\to37\to58\to89\to145\to42\to20\to4$$
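A little pure-Python sketch confirms the trajectory and the cycle:

```python
def step(n):
    # Sum of the squares of the decimal digits of n.
    return sum(int(d) ** 2 for d in str(n))

seen, n = [], 2
while n not in seen:
    seen.append(n)
    n = step(n)

# 2 enters the cycle 4 -> 16 -> 37 -> 58 -> 89 -> 145 -> 42 -> 20 -> 4
# without ever returning to 2 itself.
assert n == 4
assert seen == [2, 4, 16, 37, 58, 89, 145, 42, 20]
```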
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3465759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$\inf f(A) \leq f( \inf A)$ if $f$ is continuous Prove that $\inf f(A) \leq f( \inf A)$ if $f: [-\infty, + \infty] \to \mathbb{R}$ is continuous and $A \neq \emptyset$ is a subset of $\mathbb{R}$.
Attempt;
Put $a:= \inf A$. Choose a sequence in $A$ such that $a_n \to a$. Then
$$ \inf f(A)\leq\lim_{n \to \infty} \underbrace{f(a_n)}_{\geq \inf f(A)} = f(a) = f( \inf A)$$
and we can conclude.
Is this correct?
| You have $\inf_{x\in A} f(x) \le f(y)$ for every $y \in A$ by definition; by continuity, the inequality persists for every $y$ in the closure of $A$ in $[-\infty,+\infty]$. Let $y = \inf A$, which lies in that closure, to finish.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3465945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove parallelogram has four triangles with same area using vectors I need to prove, using vectors only, that the diagonals of a parallelogram divide it into four triangles with the same area. I thought to prove it using the area formulas for a triangle and a parallelogram, but with no success. Could you help me, please?
| Let $a$ and $b$ be the vectors of two nonparallel sides of your parallelogram (as in your linked picture). Now the diagonals of the parallelogram are given by $a+b$ and $a-b$. The four triangles to think about are, up to translation, the triangles formed by the pairs $(a, \tfrac{1}{2}(a+b))$, $(b, \tfrac{1}{2}(a-b))$, $(a, \tfrac{1}{2}(a-b))$, and $(b, \tfrac{1}{2}(a+b))$. Noting the following basic properties of the cross product for all vectors $u, v, w$ and scalars $c$, one completes the proof by four simple computations (which I'll leave to you):
1) $u \times u = 0$
2) $u \times (cv) = c (u \times v)$
3) $u \times (v + w) = (u \times v) + (u \times w)$
Hint: The common area is $\tfrac{1}{4}|a \times b|$.
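The four computations can be checked numerically (a throwaway sketch using the 2-D scalar cross product $u\times v = u_x v_y - u_y v_x$ and sample vectors $a=(3,1)$, $b=(1,2)$; these particular values are just an illustration):

```python
def cross(u, v):
    # Scalar 2-D cross product: u_x * v_y - u_y * v_x.
    return u[0] * v[1] - u[1] * v[0]

def tri_area(u, v):
    # Area of the triangle spanned by vectors u and v.
    return abs(cross(u, v)) / 2

a, b = (3.0, 1.0), (1.0, 2.0)
half_sum  = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # (a+b)/2
half_diff = ((a[0] - b[0]) / 2, (a[1] - b[1]) / 2)   # (a-b)/2

areas = [tri_area(a, half_sum), tri_area(b, half_diff),
         tri_area(a, half_diff), tri_area(b, half_sum)]

expected = abs(cross(a, b)) / 4          # the common area |a x b| / 4
assert all(abs(A - expected) < 1e-12 for A in areas)
```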
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
limit equaling zero at infinity I have a problem where I found the percentage of cells, as time approaches infinity, to be $\frac{100d}{c+d}$. All parameters are positive constants. The question asks: are there circumstances in which this quantity can be zero? I think if $d$ is small and $c$ is large, the percentage is very close to zero. But if all parameters are positive, is it true that the quantity can never actually be zero?
| Yes, it is true that if $c,d > 0$, then $\frac{100d}{c+d} > 0$ (so it can't equal $0$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Given directrix, eccentricity, and focus get center of ellipse Given
Directrix: $x=2$
Focus: $(0,0)$
Eccentricity: $0.8$
Find the semi major axis $a$.
I can write the cartesian equation $x^2+y^2=e^2(2-x)^2$ and work out the center by manipulating it. However, I've been looking for a formula for the semi-major axis $a$ in terms of eccentricity and directrix when the focus is fixed at $(0,0)$. Any help?
My work:
$x^2+y^2=e^2(x-k)^2=e^2x^2-2e^2kx+e^2k^2 $
$(1-e^2)(x^2+2\frac{e^2k}{1-e^2}x)\cdots$
$\Rightarrow h=-\dfrac{e^2k}{1-e^2}$
I have it! Is there a better, more geometric or clever way?
| Let the focus and directrix be $F=(f,0)$ and $x=d$; let $D:=(d,0)$ be the foot of the perpendicular from the focus to the directrix. The points on any conic satisfy
$$\text{eccentricity}=\frac{\text{distance from focus}}{\text{distance from directrix}} \tag{1}$$
In particular, an endpoint $P:=(p,0)$ of the major axis satisfies
$$e = \frac{|PF|}{|PD|}=\frac{|p-f|}{|p-d|} \quad\to\quad \frac{p-f}{p-d} = \pm e \quad\to\quad p = \frac{f \pm d e}{1 \pm e} \tag{2}$$
The center $H := (h,0)$ is the midpoint of the vertices, so its $x$-coordinate is the average of the two $p$-values, namely,
$$h=\frac{f-de^2}{1-e^2} \tag{3}$$
(which agrees with OP's solution, with $f=0$ and $d=k$). The major radius is half the distance between the vertices, hence the absolute value of half the difference of the $p$-values:
$$a=\left|\frac{(f-d)e}{1-e^2}\right|\tag{4}$$
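Plugging in the original numbers ($f=0$, $d=2$, $e=0.8$) and checking the standard relations for an ellipse centered at $(h,0)$ (focus at $h+ae$, directrix at $x=h+a/e$), a quick sketch:

```python
f, d, e = 0.0, 2.0, 0.8

h = (f - d * e**2) / (1 - e**2)      # center, formula (3)
a = abs((f - d) * e / (1 - e**2))    # semi-major axis, formula (4)

# Focus-side relations: the focus sits at h + a*e, the directrix at x = h + a/e.
assert abs((h + a * e) - f) < 1e-12
assert abs((h + a / e) - d) < 1e-12
```

Here $h=-32/9\approx-3.56$ and $a=40/9\approx4.44$, consistent with formulas (3) and (4).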
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find Dirichlet series of $2^n$ How can I find the Dirichlet series of $2^n$?
The Dirichlet series of a sequence $\{a_n\}_{n=1}^\infty$ is defined as $f(s) = \sum_{n = 1}^\infty \frac{a_n}{n^s}$.
If $\{a_n\}_{n=1}^\infty$ is multiplicative, then we have the following formula for Dirichlet series: $\sum_{n = 1}^\infty \frac{a_n}{n^s} = \prod_{p \text{ prime}} g_p(p^{-s})$, where $g_p(x) := \sum_{n=0}^\infty a_{p^n}x^n$ is the ordinary generating function of $\{a_{p^n}\}_{n = 0}^\infty$.
However, I can not use this formula, because $2^n$ is not multiplicative.
| For every $s \in \mathbb{R}$, $\lim_{n \to \infty} \frac{2^n}{n^s} = \infty$, since exponential growth dominates any power of $n$.
Thus the Dirichlet series $\sum_{n = 1}^\infty \frac{2^n}{n^s}$ diverges for every $s \in \mathbb{R}$: the terms do not even tend to $0$, so the Cauchy convergence criterion fails.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why can't we multiply matrices entrywise? Why can't we multiply corresponding elements like addition is done?
Is there a specific reason why it won't be significant?
By definition, we have to multiply a row by columns.
Why such a definition other than multiplying corresponding elements?
Please ignore my ignorance. I had nowhere to ask. :(
| You can do such element-wise multiplication of matrices, but it obviously represents a different kind of operation. The 'standard' way of multiplying matrices has important applications in linear algebra and, as such, in many areas of science and engineering. The element-wise multiplication has other (typically less used) practical applications.
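For concreteness, here is the difference on a small example (a plain-Python sketch; the element-wise product is commonly called the Hadamard product):

```python
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

# Element-wise (Hadamard) product: multiply corresponding entries.
hadamard = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]

# Standard matrix product: each entry is a row of A times a column of B.
matmul = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]

assert hadamard == [[5, 12], [21, 32]]
assert matmul   == [[19, 22], [43, 50]]   # a genuinely different operation
```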
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
Baby Rudin exercise 8.26 Solution Manual:
https://minds.wisconsin.edu/bitstream/handle/1793/67009/rudin%20ch%208.pdf?sequence=4&isAllowed=y
In the second part of the exercise, we are asked to prove exercises 24 and 25 without the assumption of differentiability of $\gamma(t)$. However, isn't the definition of $Ind(\gamma)$ that was defined in exercise 23 contingent on the differentiability of $\gamma(t)$?
Could anyone provide some insight regarding this?
| Read the set up to the exercise a little more carefully. We formulate the winding number of a continuous curve by first approximating the continuous curve with a trigonometric polynomial (which is continuously differentiable) and then taking the winding number of that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3466911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$y''+y'+y=\sin^2x$: particular solution? The problem I am trying to solve is finding the particular solution of the equation:
$$y''+y'+y=\sin^2x$$
I don't know what format the particular solution has. Once I know that, I can probably solve the problem with little difficulty.
I haven't seen any examples in my textbook or elsewhere with a power of a trig function on the right side.
Using the format $y=A\sin(x)+B\cos(x)$ (and therefore $y'=A\cos(x)-B\sin(x)$, $y''=-A\sin(x)-B\cos(x)$) and substituting these values for $y$ and its derivatives doesn't give me any term with a $\sin^2x$ in it.
What format does the specific solution have? How is this sort of equation supposed to be solved?
The characteristic equation $r^2+r+1=0$, i.e. $(r+0.5)^2+0.75=0$, has roots $r=-0.5\pm 0.866i$; then $y_c=e^{-0.5x}(A\cos 0.866x+B\sin 0.866x)$, but I don't know if this is useful or how it is to be applied.
| $$y''+y'+y=\sin^2x$$
This particular solution works fine too:
$$y_p=Ae^{2ix}+Be^{-2ix}+C$$
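Equivalently, using $\sin^2 x=\tfrac12(1-\cos 2x)$, the real ansatz $y_p=C+A\cos 2x+B\sin 2x$ works; matching coefficients gives $C=\tfrac12$, $A=\tfrac{3}{26}$, $B=-\tfrac{1}{13}$ (my computation, worth verifying). A quick numerical check:

```python
import math

A, B, C = 3 / 26, -1 / 13, 0.5

def yp(x):   return C + A * math.cos(2 * x) + B * math.sin(2 * x)
def dyp(x):  return -2 * A * math.sin(2 * x) + 2 * B * math.cos(2 * x)
def d2yp(x): return -4 * A * math.cos(2 * x) - 4 * B * math.sin(2 * x)

# The residual y'' + y' + y - sin^2(x) should vanish identically.
for k in range(50):
    x = 0.13 * k
    assert abs(d2yp(x) + dyp(x) + yp(x) - math.sin(x) ** 2) < 1e-12
```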
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3467307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
About perfectly normal spaces. In some books the terms: regular and $T_3$; normal and $T_4$; completely normal and $T_5$; perfectly normal and $T_6$ are synonyms, but in some books the difference is that regular, normal, completely normal and perfectly normal spaces are not required to be $T_1$, while $T_3, T_4, T_5, T_6$ spaces are $T_1$.
If we take the second definitions (so that regular and $T_3$, and the other pairs, are not synonymous), then $T_6 \implies T_5 \implies T_4\implies T_3$, and completely normal $\implies$ normal, but a normal space need not be regular. I know that perfectly normal $\implies$ completely normal.
But I am interested in: Does there exist
*
*A perfectly normal space that is $T_0$ but not $T_1$.
*A perfectly normal space that is not $T_0$ and is not regular.
*A perfectly normal space that is not $T_0$, is regular but is not completely regular.
*A perfectly normal space that is not $T_0$ and is completely regular.
| There are no examples for (2) and (3), for essentially the same reason that (as proved in Henno's answer) there are no examples for (1). A perfectly normal space must be $ \mathrm R _ 0 $ (which is a non-$ \mathrm T _ 0 $ version of $ \mathrm T _ 1 $), and Henno's answer is just the $ \mathrm T _ 0 $ version of the proof of that; a direct proof can be found at the self-answered question Perfectly normal spaces are completely regular by @PatrickR (which is what brought this to my attention). And it's well-known that an $ \mathrm R _ 0 $ normal space must be completely regular (by an argument that's also pretty much contained in Henno's answer).
On the other hand, Henno's answer already has an example of (4): any nontrivial indiscrete space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3467411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A function $f: X \to Y$ is continuous if and only if $f^{-1}(C)$ is closed in $X$ for every closed set $C$ in $Y$. Since a mapping $f$ of a metric space $X$ into a metric space $Y$ is continuous on $X$ if and only if $f^{-1}(V)$ is open in $X$ for every open set $V$ in $Y$, and since a set is closed if and only if its complement is open,
$f^{-1}(E^c)= [f^{-1}(E)]^c$ for every $E⊂Y$.
Is this a correct proof of this corollary from Rudin?
| Yes, the key is that the preimage behaves nicely with all the set operations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3467710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the limit $\lim_{n\to\infty} n(\sqrt{n}-\sqrt{n+1})$ Evaluate the limit $\lim_{n\to\infty} n(\sqrt{n}-\sqrt{n+1})$
I am trying to evaluate this limit. First I multiplied by the conjugate to obtain: $a_n=\dfrac{-n}{\sqrt{n}+\sqrt{n+1}}$
I was able to show the limit by taking $f(x)=a_x$ and then applying l'Hôpital's rule. I was primarily wondering if there is a certain trick that can be used to find the limit without resorting to this.
\begin{align} \lim_{n\to\infty} n(\sqrt{n}-\sqrt{n+1})=-\infty \end{align}
Thanks.
| Or we can look at $n(\sqrt{n+1}-\sqrt{n})=\dfrac{n}{\sqrt{n+1}+\sqrt{n}}\geq\dfrac{n}{2\sqrt{n+1}}=\dfrac{\sqrt{n}}{2\sqrt{1+\dfrac{1}{n}}}\rightarrow\infty$ as $n\rightarrow\infty$, so the limit is $-\infty$.
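A quick numerical sketch of the conjugate form makes the $-\sqrt{n}/2$ growth visible:

```python
import math

def a(n):
    # Conjugate form: n*(sqrt(n) - sqrt(n+1)) = -n / (sqrt(n) + sqrt(n+1))
    return -n / (math.sqrt(n) + math.sqrt(n + 1))

vals = [a(10 ** k) for k in (2, 4, 6)]
# Roughly -5, -50, -500: the sequence behaves like -sqrt(n)/2 and diverges.
assert vals[0] > vals[1] > vals[2]
assert vals[2] < -499
```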
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3467858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Translation of a set Let $A\subset \mathbb R$ be Lebesgue measurable set. Is it true that if $\ \forall r\in(0,1)$
$$A\cap (A+r)\neq \emptyset$$
then $\lambda(A)>0$?
I think that this is linked to the Vitali set, but I didn't manage to prove it.
| No.
Take for instance the Cantor set on $[0,1]$
We have that $m(C)=0$ and $C-C=[-1,1]$
Thus $(0,1) \subseteq C-C\Longrightarrow C \cap(C+r) \neq \emptyset,\forall r \in (0,1)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Three squares of a chess board being chosen at random, what is the chance that two are of one color and one of the other? I applied this concept:
$\Rightarrow$ there are a total of 64 squares, of which 32 are white and the others are black.
Now I considered two cases: (1) the two squares of the same colour are white and the other is black; (2) the two squares of the same colour are black and the other is white.
Thus $P(E) = \dfrac{2\binom{32}{2}\binom{32}{1}}{\binom{64}{3}}$.
Am I right?
| The number of ways to choose two black and one white square is ${32 \choose 2}32$ with the first factor from choosing the two black squares and the second from choosing the white square. Two white and one black is the same by symmetry. There are ${64 \choose 3}$ ways to choose three squares, so the chance you want is
$$\frac {2\cdot {32 \choose 2}\cdot 32}{{64 \choose 3}}=\frac {16}{21}$$
which is nicely less than $1$.
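The arithmetic checks out (a quick sketch using exact fractions):

```python
from fractions import Fraction
from math import comb

# 2 * C(32,2) * 32 favorable choices out of C(64,3) total.
p = Fraction(2 * comb(32, 2) * 32, comb(64, 3))

assert p == Fraction(16, 21)
assert p < 1
```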
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove a sequence is convergent and find its limit I tried to prove that $a_{1}=s,\ a_{n+1}=s+a_{n}^{2}$ is a monotonically increasing sequence, but I didn't know how to prove that it is bounded from above.
About the limit, I tried to compare the limit of $a_n$ with the limit of $a_{n+1}$, but I obtained $L= s + L^2$
and I didn't know how to move on from there.
(Edit: $s$ is between $0$ and $0.25$, inclusive.)
| If the limit $L$ does exist, $L = s + L^2$ is a quadratic equation in $L$. That has at most two real roots. If there are no real roots, there is no possibility of convergence. If there are, the next step might be to look at a cobweb plot
of the function $f(x) = s + x^2$.
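For $s\in[0,\tfrac14]$ the roots are real and the iteration converges to the smaller root $L=\tfrac{1-\sqrt{1-4s}}{2}$ of $L=s+L^2$; one can also show $a_n\le L$ inductively ($a_1=s\le L$, and $a_n\le L$ gives $a_{n+1}=s+a_n^2\le s+L^2=L$), which supplies the missing upper bound. A quick numerical sketch with $s=0.2$ (an arbitrary choice in the allowed range):

```python
import math

s = 0.2
L = (1 - math.sqrt(1 - 4 * s)) / 2   # smaller root of L = s + L^2

a = s
for _ in range(200):
    assert a <= L + 1e-12            # a_n <= L throughout (boundedness)
    a = s + a * a                    # the recursion a_{n+1} = s + a_n^2

assert abs(a - L) < 1e-9             # the sequence converges to L
```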
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Suppose $f \geq 0$ is continuous on $[a,b]$ and $\int_{a}^{b} f(x)dx=0$. Prove that $f(x)=0$ for all $x\in [a,b]$. My attempt:
Let $P$ be a partition of $[a,b]$, and let $x_{i}^{*} \in [x_{i-1},x_{i}]$; note $f$ is non-negative on each $[x_{i-1},x_{i}]$.
Since $f$ is continuous on $[a,b]$, $f$ is R-integrable with $\int_{a}^{b} f(x)dx=0$, so
$$\displaystyle \lim_{n\rightarrow \infty} \sum_{i=1}^{n}f(x_{i}^{*}) \delta x_{i}=0$$
This implies $f(x_{i}^{*})=0$ for all $i$, i.e. $f(x)=0$.
Is this proof correct?
| I would have used the mean value theorem for integrals to solve this.
Take a generic point $c \in (a ,b).$ You know that:
$$\int_{a}^{b} f(x) dx = \int_{a}^{c} f(x) dx + \int_{c}^{b} f(x) dx = 0.$$
Since $f$ is continuous in $[a, b]$ (with $b > a$), then there exist $d \in (a, c)$
and $e \in (c, b)$ such that:
$$\int_{a}^{c} f(x) dx + \int_{c}^{b} f(x) dx = f(d)(c-a) + f(e)(b-c) = 0.$$
This can be rewritten as follows:
$$f(d)(c-a) = -f(e)(b-c).$$
We know that $a < c < b$. As a consequence, $(c-a) > 0$ and $(b-c) > 0$.
Therefore:
$$f(d) = -f(e) \frac{c-a}{b-c}.$$
Since $f \geq 0$ in $[a, b]$, the previous equation is only satisfied by $f(d) = f(e) = 0.$
This means that $f(x) = 0$ for all $x \in [a, b].$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$u$-Substitution with a definite integral that has a constant gives a different answer I have the following integral:
$$\int_0^\pi (2+\cos^2(t)\sin(t))\,\mathrm dt$$
Choosing $u = \cos(t)$, I would get the following result:
$$2u - \frac{u^3}{3}$$
which is:
$$2\cos(t) - \frac{\cos^3(t)}{3}\Bigg|_0^\pi.$$ However, if I solve the same integral by taking the constant out as its own integral:
$$2\int_0^\pi\,\mathrm dt+\int_0^\pi\cos^2(t)\sin(t)\,\mathrm dt$$
And the computed antiderivative would be
$$2\pi-\frac{\cos^3(t)}{3}$$
Why this discrepancy? Which one is the right one?
| The latter is correct. You really only needed u-substitution for the trig part of that integral. You've incorrectly applied the substitution for $\mathrm dt$ when you used the substitution for the integral of the constant.
$$\begin{align}
\int_{0}^{\pi}2\,\mathrm dt = -\int_{u(0)}^{u(\pi)}\frac{2}{\sin(t)}\,\mathrm du &= -\int_{u(0)}^{u(\pi)}\frac{2}{\sqrt{1-\cos(t)^2}}\,\mathrm du\\ &= -\int_{1}^{-1}\frac{2}{\sqrt{1-u^2}}\,\mathrm du = \int_{-1}^{1}\frac{2}{\sqrt{1-u^2}}\,\mathrm du\\
&= 2\pi
\end{align}$$
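A numerical check confirms the second computation; the integral equals $2\pi+\frac23$ (a composite Simpson's rule sketch):

```python
import math

def f(t):
    return 2 + math.cos(t) ** 2 * math.sin(t)

# Composite Simpson's rule on [0, pi] with n subintervals (n even).
n, a, b = 1000, 0.0, math.pi
h = (b - a) / n
s = f(a) + f(b)
for i in range(1, n):
    s += f(a + i * h) * (4 if i % 2 else 2)
integral = s * h / 3

# 2*pi from the constant term plus [-cos^3(t)/3] from 0 to pi = 2/3.
assert abs(integral - (2 * math.pi + 2 / 3)) < 1e-9
```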
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Axiom of Choice --Example Problem Q: Suppose that for any set $X$ and any function $f:X\rightarrow X$ there exists $g:X\rightarrow X$ such that $fgf=f$. Prove that any set has a choice function.
My attempt:
Let $A=\left \{a,b,c... \right \}\subset X$ be an arbitrary non-empty set.
Choose $f:X\rightarrow X$ defined by
$x\rightarrow x_{0}$ if $x\in A$
$x\rightarrow x_{1}$ if $x\notin A$
(*) where $x_{0}$ and $x_{1}$ are arbitrary distinct elements of $X$
By hypothesis, a $g$ exists such that $f(g(f(x)))=f(x)$
$\Rightarrow f(g(f(x_{0}))) = f(x_{0})$
$\Rightarrow g(f(x_{0})) \in \left \{x\in X| f(x) = x_{0} \right \}$
By defintion of $f$, $\left \{x\in X| f(x) = x_{0} \right \} = A$
$\therefore$ Since $g(f(x_{0})) \in A$, $g$ is a choice function that picks an element, $g(f(x_{0}))$, for any set $A$
I feel like this is really close to a correct solution, but the big problem I see with it is that the step marked (*) requires that $X$ have at least two distinct elements. My hope is that this can be fixed by just stating that a set with one element $a$ only has one non-empty subset $\left \{a \right \}$, and so the choice function can just choose $a$. Does this fix the proof? Are there any other errors?
| The issue of $X$ having at least $2$ elements is not a big deal.
However, you seemed to aim to prove the axiom of choice for a single set $A$, instead of a given family of nonempty sets. (Note that your definition of $f$ and $g$ depends on $A$.)
So, your proof should rather start with assuming a family $(A_i)_{i\in I}$ of nonempty sets is given, where $I$ is a set, and then provide a choice function $I\to\bigcup_iA_i$.
Hint: Let $X$ be the disjoint union of $A_i$'s, of $I$ and of one more singleton $\{0\}$, and define $f:X\to X$ by sending all elements of $A_i$ to $i$, and all elements of $I$ and $0$ to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3468953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proof that if a sequence of random variables converges weakly to a constant, then it converges to it in probability Is my proof correct? A sequence of random variables $\{\xi_n\}\xrightarrow{w}c$ means by definition that $F_{\xi_n}(t)\rightarrow F_c(t)$ for every $t$ at which $F_c$ is continuous.
So, I have $F_{\xi_n}(t)-F_c(t)\rightarrow0$. $(1)$
$F_c(t)$ is not continuous at $t=c$, so we need to consider $t>c$ and $t<c$.
For $t>c$, $F_c(t)=1$, so $P(\xi_n\leq t)-1\rightarrow 0$
For $t<c$, $F_c(t)=0$, so $P(\xi_n\leq t)-0\rightarrow 0$
Now, I substitute $t=\varepsilon+c$ and get:
For $\varepsilon>0$, $P(\xi_n-c\leq \varepsilon)\rightarrow 1$ (3)
For $\varepsilon<0$, $P(\xi_n-c\leq \varepsilon)\rightarrow 0$ $\implies$ $P(\xi_n-c> \varepsilon)\rightarrow 1$ $\implies$ (?) $P(\xi_n-c\geq \varepsilon)\rightarrow 1$ (4)
Now, I say that ($(3)$ and $(4)$) is equal to $P(|\xi_n-c|\leq \varepsilon)\rightarrow 1$ for $\varepsilon>0$, which is equal to $P(|\xi_n-c|> \varepsilon)\rightarrow 0$, which is convergence in probability by definition. But for that, I need implication "(?)". Do I have it?
| Well, implication (?) is not true on its own in general - it would fail if many of the $\xi_n - c$ were equal to $\varepsilon$ with positive probability - but the conclusion is in fact true in this case. A quick fix is just to use $\varepsilon/2$ instead of $\varepsilon$.
Notation comment: people usually expect $\varepsilon$ to be a positive quantity, so if you want a negative quantity use $-\varepsilon$ instead.
Now for any $\varepsilon >0$, you can say $P(\xi_n - c \le -\varepsilon/2) \to 0$, meaning that $P(\xi_n - c > -\varepsilon/2) \to 1$. But any quantity $> -\varepsilon/2$ is certainly $\ge -\varepsilon$, so you have $P(\xi_n - c \ge -\varepsilon) \ge P(\xi_n - c > -\varepsilon/2)$. By the squeeze theorem, you conclude $P(\xi_n - c \ge -\varepsilon)\to 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3469078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integrating $\frac1{a+b\cos(x)}$ using $e^{ix}$ In calculus class our teacher demonstrated to us the evaluation of the definite integral $\int_0^\pi \dfrac{1}{a+b\cos(x)}dx=\dfrac{\pi}{\sqrt{a^2-b^2}}$, for $a\ne 0, b \ne 0, \left|\dfrac ba\right| \lt 1$, but there's a part I could not grasp.
Our teacher started out by setting up an equality: $bz^2+2az+b=b(z-\alpha)(z-\beta)$, where $\alpha, \beta$ are the roots of the expression on the LHS in $\Bbb C$, and $\alpha$ takes on the value with a negative sign before the radical, while $\beta$ is the other one. Letting $z=e^{ix}$, the integrand is transformed into the following form:
$\dfrac{1}{\sqrt{a^2-b^2}}\left(\dfrac{\alpha}{\alpha-e^{ix}}-\dfrac{\beta}{e^{ix}-\beta}\right)=\dfrac{1}{\sqrt{a^2-b^2}}\left(\dfrac{1}{\alpha e^{-ix}-1}+\dfrac{1}{1-\beta e^{-ix}}\right)$
$=\dfrac{1}{\sqrt{a^2-b^2}}\sum_{n=0}^{\infty}(\beta^n-\alpha^n)e^{-inx}=\dfrac{1}{\sqrt{a^2-b^2}}\sum_{n=1}^{\infty}(\beta^n-\alpha^n)(\cos(nx)-i\sin(nx))$
But then $\int_0^\pi \cos(nx)-i\sin(nx) dx=\dfrac 1n(\sin(nx)+i\cos(nx))|_0^\pi=\dfrac in(\cos(n\pi)-1)$ and I was completely lost on how the next part was carried out. For if we substitute that back into the infinite sum, we get that integrated version of that expression would be $\dfrac{1}{\sqrt{a^2-b^2}}\sum_{n=1}^{\infty}(\beta^n-\alpha^n)\dfrac in(\cos(n\pi)-1)$
$=\dfrac{1}{\sqrt{a^2-b^2}}\sum_{k=1}^{\infty}(\beta^{2k-1}-\alpha^{2k-1})\dfrac {i}{2k-1}(\cos((2k-1)\pi)-1)$
$=\dfrac{1}{\sqrt{a^2-b^2}}\sum_{k=1}^{\infty}(\beta^{2k-1}-\alpha^{2k-1})\dfrac {-2i}{2k-1}$
Yet I fail to see how this could lead to the supposed final result, especially with that $i$ there.
| In the given method, the fractions of the form $\dfrac1{1+z}$ are expanded as Taylor (geometric) series, then integrated term-wise. Doing that, you retrieve the series for $\log(1+x)$, which is applied to $\alpha$ and $\beta$.
As the roots are complex, the computation of the logarithms is a little involved. In the end, only an imaginary number remains, which gets multiplied by $i$, resulting in a real.
A simpler way:
WLOG, $b=1$, and $|a|>1$. Let $z:=e^{ix}$ so that $\cos x=\dfrac{z+z^{-1}}2$ and $dz=iz\,dx$.
Then
$$\int_0^\pi\frac{dx}{a+\cos x}=-\int_1^{-1}\frac{2i\,dz}{2az+z^2+1}=-\int_1^{-1}\frac{2i\,dz}{(z+a)^2-(a^2-1)}.$$ (The complex integral is over a half circle.)
The antiderivative is readily found to be
$$-\frac ic(\log(z+a-c)-\log(z+a+c))$$ where $c=\sqrt{a^2-1}$. We have
$$\frac{(1+a-c)(-1+a+c)}{(1+a+c)(-1+a-c)}=\frac{a^2-(c-1)^2}{a^2-(c+1)^2}=-1.$$ so that the logarithm is just $i\pi$ and the integral is
$$\frac\pi c.$$
Now for general $b$ we divide by $b$ and get
$$\frac\pi{bc}=\frac\pi{\sqrt{a^2-b^2}}.$$
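Sanity check (not part of the original answer): a quick numerical integration in Python against the closed form, with the sample values $a=2$, $b=1$ chosen arbitrarily.

```python
import math

def simpson(f, lo, hi, n=10_000):
    # composite Simpson's rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

a, b = 2.0, 1.0  # sample values with |b/a| < 1
numeric = simpson(lambda x: 1.0 / (a + b * math.cos(x)), 0.0, math.pi)
closed = math.pi / math.sqrt(a * a - b * b)
print(numeric, closed)  # both ≈ 1.8138
```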
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3469375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Example of application of Krasner's Lemma I'm learning about valued fields at the moment, and I stumbled upon these notes
http://www-personal.umich.edu/~wuyifan/ExpositoryArticles/NumberTheory/LocalFields/Krasner%27s_Lemma.pdf
As proposition 2.1 it states that when $\eta$ is a primitive $p$-th ($p$ odd) root of unit then $\mathbb{Q}_p(\eta)$ contains all $(p-1)$-st roots of $-p$.
I have to compute $v(\sqrt[{p-1}]{-p}-\eta)$ as well as $v(\sqrt[p-1]{-p}-\zeta \cdot \sqrt[p-1]{-p})$ where $\zeta$ is a $(p-1)$-st root of unity.
However I am unable to see how to do that computation.
Edit: I am trying to learn how to use Krasner's Lemma, this result is just an example I found, of something I'm currently unable to do.
| It is easier to check first the elementary proof.
Let $K = \Bbb{Q}_p(\zeta_p)$ and $O_K$ its ring of integers with residue field $O_K/(\pi)$.
* From $$\zeta_p^p - 1 \equiv (\zeta_p-1)^p \equiv 0\bmod \pi \implies v_p(\zeta_p-1)> 0$$ $$\implies v_p(\sum_{l=0}^{a-1}\zeta_p^l) = v_p(a)=0\implies v_p(\zeta_p-1)=v_p(\zeta_p^a-1)$$ and $$\prod_{a=1}^{p-1} (1-\zeta_p^a)= \sum_{k=0}^{p-1} 1^k = p$$ we know that
$$v_p(\zeta_p-1)= \frac1{p-1} \implies O_K=\Bbb{Z}_p[\zeta_p-1], \pi = \zeta_p-1$$ and $K/\Bbb{Q}_p$ is totally ramified of degree $p-1$.
Its uniformizer $\pi=\zeta_p-1$ satisfies $$v(\pi^{p-1})=1\implies \pi^{p-1} = u p, u\in \Bbb{Z}_p[\zeta_p]^\times \implies u= \zeta_{p-1}^b (1+r),v(r)> 0$$
Letting $U=(1+r)^{-1/(p-1)} =\sum_{n\ge 0} {-1/(p-1)\choose n} r^n\in \Bbb{Z}_p[\zeta_p]^\times$ we have $$(\pi U)^{p-1} = p\zeta_{p-1}^b \implies \Bbb{Z}_p[\zeta_p] = \Bbb{Z}_p[ (p\zeta_{p-1}^b)^{1/(p-1)}]$$
* To find $b$ use that $\zeta_p^a-1 = (\zeta_p-1) \sum_{l=0}^{a-1}\zeta_p^l=(\zeta_p-1)a (1+R_a),v(R_a)> 0$ thus $$(\zeta_p-1)^{p-1}=\prod_{a=1}^{p-1}\frac{\zeta_p^a-1}{a (1+R_a)} = \frac{p}{- (1+s)}, v(s) > 0 \implies \zeta_{p-1}^b = -1$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3469549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is my proof of the irrationality of $\sqrt{3}$ legitimate? I read the proof of the irrationality of $\sqrt{3}$ in my textbook (Richard Hammack's Book of Proof), and I was wondering if my proof is legitimate as well.
Prop: $\sqrt{3}$ is irrational.
Suppose by way of contradiction that $\sqrt{3}$ is rational. Hence, $\sqrt{3} = \frac{m}{n}, m \in \mathbb{Z}, n \in \mathbb{N}$. Furthermore, suppose both $m,n$ are not even, so the fraction is reduced. Then, $3n^2=m^2$. Suppose $n$ is even, so $n = 2a, a \in \mathbb{Z}$. Therefore, $3 \cdot 4a^2=m^2 \longrightarrow m^2=12a^2 \longrightarrow m^2=2(6a^2)$ and $m^2$ is even, hence $m$ is even. This contradicts our assumption that both $m,n$ were not even, and hence $\sqrt{3}$ must be irrational. $\blacksquare$
Also, I'm wondering if I also need to show the case where $m$ is even?
Edit: Thank you for all the help. I realize that I was essentially trying to make the same argument as the classic proof of the irrationality of $\sqrt{2}$, but it doesn't quite work the same. I've done some research about the prime factorization theorem and I agree that $3n^2=m^2$ being a contradiction is definitely a more elegant proof that a case-by-case approach.
| There are at least two flaws:
* you say $m,n$ are both not even, which in fact means neither $m$ nor $n$ is even.
* you say "so the fraction is reduced", but is $\frac{15}{25}$ reduced?
Allowing only irreducible fractions, $$\sqrt3=\frac pq\iff p^2=3q^2.$$
So $p^2$ is a multiple of $3$, and so must $p$ be. Then $p^2$ is a multiple of $9$ and $q^2$ is a multiple of $3$. And so must $q$ be !
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Chess match where first to win game wins match. Probability of winning is p and q, of draw is 1-p-q. Find mean, PMF, and variance of match duration. Question
Fischer and Spassky play a sudden-death chess match whereby the first player to win a game wins the match. Each game is won by Fischer with probability $p$, by Spassky with probability $q$, and is a draw with probability $1 - p - q$.
* What is the probability that Fischer wins the match?
Fischer wins = the first $(n-1)$ games are drawn and the last game is won. $$P\text{(Fischer wins)} = \sum_{i=1}^n(1-p-q)^{n-1}\cdot p = \frac{p}{p+q}$$
* What is the PMF, the mean, and the variance of the duration of the match?
Doubt: I know that $Var[X] = E[X^2] -E[X]^2$; however, I can't follow the arithmetic calculation provided in the solution. Help me understand how $E[X]$ ends up as $(p+q)^{-1}$? I supposed $E[X]^2$ is $(p+q)^{-2}$ in this case.
Can someone please explain in more steps?
| The expressions are wrong. P(Fischer wins)=$p\sum_{n=0}^\infty (1-p-q)^n=p\frac{1}{1-(1-p-q)}=\frac{p}{p+q}$
Let $X$ be the duration of the match. $E(X)=(p+q)\sum _{k=1}^\infty k(1-p-q)^{k-1}=\frac{1}{p+q}$, $E(X^2)=(p+q)\sum_{k=1}^\infty k^2(1-p-q)^{k-1}=\frac{2-p-q}{(p+q)^2}$. The variance is $\frac{1-p-q}{(p+q)^2}$.
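As a quick check of these closed forms (my own addition, with arbitrary sample values $p=0.3$, $q=0.2$), one can sum the truncated geometric series directly:

```python
# Numerically sum the geometric series for the match duration and compare
# with the closed forms; p = 0.3, q = 0.2 are arbitrary sample values.
p, q = 0.3, 0.2
r = 1 - p - q                      # probability a game is drawn

N = 2000                           # truncation point; r**N is negligible
ks = range(1, N + 1)
pmf = [(p + q) * r ** (k - 1) for k in ks]     # P(duration = k)
win = sum(p * r ** (k - 1) for k in ks)        # P(Fischer wins)
EX = sum(k * w for k, w in zip(ks, pmf))
EX2 = sum(k * k * w for k, w in zip(ks, pmf))
var = EX2 - EX ** 2

print(win, p / (p + q))                 # ≈ 0.6
print(EX, 1 / (p + q))                  # ≈ 2.0
print(var, (1 - p - q) / (p + q) ** 2)  # ≈ 2.0
```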
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$S^2$ without $n$ points is homeomorphic to $S^2$ without $m$ points if and only if $n = m$
Consider the unit sphere $S^2$ with the subspace topology of $\mathbb{R}^3$. Now let $n,m$ be positive integers. Prove that $S^2$ with $n$ different points removed from it is homeomorphic to $S^2$ with $m$ different points removed from it if and only if $n = m$.
Let's assume that the $n$ points removed are $A = \{a_1, \cdots, a_n \} \subset S^2$ and that the $m$ points removed are $B = \{b_1, \cdots, b_m \} \subset S^2$.
First, assume that $n = m$. Using the stereographic projections, $S^2 \setminus A$ is homeomorphic to $\mathbb{R}^2$ without $n-1$ points and $S^2 \setminus B$ is homeomorphic to $\mathbb{R}^2$ without $m-1$ points. Since $n = m$, I assume that $\mathbb{R}^2$ without $n-1$ points is homeomorphic to $\mathbb{R}^2$ without $n-1$ (possibly different) points, but I don't know how to explicitly prove this.
If $S^2 \setminus A$ is homeomorphic to $S^2 \setminus B$, then, using again the stereographic projection, if $n \neq m$, then I assume that $\mathbb{R}^2$ without $n-1$ points is not homeomorphic to $\mathbb{R}^2$ without $m-1$ points, but again, I don't know how to prove this.
| Not sure why algebraic topology seems to be eschewed here, but for the harder direction I would guess that that $H_1(X)\cong\Bbb Z^{n-1}$, whereas $H_1(Y)\cong\Bbb Z^{m-1}$.
For this one can use that $X$ deformation retracts onto a "rose with $n-1$ petals".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Calculus implicit differentiation question I stumbled upon this calculus implicit differentiation question: find
$\cfrac{du}{dy} $ of the function $ u = \sin(y^2+u)$.
The answer is $ \cfrac{2y\cos(y^2+u)}{1−\cos(y^2+u)} $. I understand how to get the answer for the numerator, but how do we get the denominator part?
And can anyone point out the real intuitive difference between chain rule and implicit differentiation? I can't seem to get my head around them and when or where should I use them.
Thanks in advance!
| $$u = \sin(y^2+u) \implies\frac{du}{dy} = \cos(y^2+u)\times\left(2y+\frac{du}{dy}\right)$$
$$\implies \frac{du}{dy} (1-\cos(y^2+u)) = 2y\cos(y^2+u) \implies \frac{du}{dy}=\frac{2y\cos(y^2+u)}{1-\cos(y^2+u)}$$
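A numerical spot-check of the formula (my own addition; the fixed-point solver and the sample point $y=1$ are arbitrary choices):

```python
import math

def solve_u(y, iters=200):
    # fixed-point iteration for u = sin(y**2 + u); at the sample point
    # y = 1 this is a contraction, so it converges quickly
    u = 0.0
    for _ in range(iters):
        u = math.sin(y * y + u)
    return u

y, h = 1.0, 1e-5
u = solve_u(y)
numeric = (solve_u(y + h) - solve_u(y - h)) / (2 * h)  # central difference
formula = 2 * y * math.cos(y * y + u) / (1 - math.cos(y * y + u))
print(numeric, formula)  # both ≈ -0.52
```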
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Determining the Fundamental Matrix Using Generalized Eigenvectors
Determine $\mathit{e}^{At}$ by using the generalized eigenvector method to find a fundamental matrix of $x'=Ax$ with $A=\begin{bmatrix}
5 &-4 &0 \\
1&0 &2 \\
0& 2 &5
\end{bmatrix}$.
I just want to know whether my solution is okay?
I found the eigenvalues to be $\lambda_{1}=0$ (multiplicity 1) and $\lambda_{2}=5$ (multiplicity 2).
I found the eigenvector of $\lambda_{1}$ to be $v_{1}=(-4,-5,2)$ and the eigenvector of $\lambda_{2}$ to be $v_{2}=(-2,0,1)$.
I used the generalized eigenvector property to find $v_{3}$, where $v_{2}=(A-\lambda_{2}I)v_{3}$. I got $v_{3}=(1/2,1/2,1)$.
So then $\mathit{e}^{At}=\begin{bmatrix}
-4 &-2 &\frac{1}{2} \\
-5&0 &\frac{1}{2} \\
2& 1 &1
\end{bmatrix}$. Is this okay?
Thanks for any help!
| $e^{At} = P e^{Jt} P^{-1}$
and
$e^{Jt} = \begin{bmatrix} e^{J_1 t} && 0 \\ 0 && e^{J_2 t} \end{bmatrix}$
Firstly, as $J_1 = 0$, $e^{J_1 t} = 1$.
Now, if we expand the exponential in a Taylor series around $\lambda_2 t$:
$e^{xt} = \sum_n \frac{e^{\lambda_2 t}}{n!}(xt-\lambda_2 t)^n $
$e^{J_2 t} = \sum \frac{e^{\lambda_2 t}}{n!}(J_2t-\lambda_2 I t)^n$
Now,
$J_2t-\lambda_2I t = \begin{bmatrix} 0 && t \\ 0 && 0 \end{bmatrix}$
Note that,
$\begin{bmatrix} 0 && t \\ 0 && 0 \end{bmatrix}^n = 0 \quad \text{for } n>1$
Then,
$e^{J_2 t} = e^{\lambda_2 t} \begin{bmatrix} 1 && 0 \\ 0 && 1 \end{bmatrix} + e^{\lambda_2 t} \begin{bmatrix} 0 && t \\ 0 && 0\end{bmatrix} = \begin{bmatrix} e^{\lambda_2 t} && te^{\lambda_2 t} \\ 0 && e^{\lambda_2 t} \end{bmatrix}$
And finally if your eigenvectors are correct, you have:
$e^{At} = P e^{Jt} P^{-1} = \begin{bmatrix} -4 && -2 && 1/2 \\ -5 && 0 && 1/2 \\ 2 && 1 && 1\end{bmatrix} \begin{bmatrix} 1 && 0 && 0 \\ 0 && e^{5 t} && te^{5t} \\ 0 && 0 && e^{5t} \end{bmatrix} \begin{bmatrix} -4 && -2 && 1/2 \\ -5 && 0 && 1/2 \\ 2 && 1 && 1\end{bmatrix}^{-1}$
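As a sanity check of the eigen-data (my own addition), one can verify $Av_1=0$, $Av_2=5v_2$ and $(A-5I)v_3=v_2$ in exact arithmetic:

```python
# Check with exact arithmetic that the eigen-data above is consistent:
# A v1 = 0, A v2 = 5 v2, and (A - 5I) v3 = v2 (generalized eigenvector).
from fractions import Fraction as F

A = [[5, -4, 0],
     [1,  0, 2],
     [0,  2, 5]]

def matvec(M, v):
    return [sum(F(M[i][j]) * v[j] for j in range(3)) for i in range(3)]

v1 = [F(-4), F(-5), F(2)]
v2 = [F(-2), F(0), F(1)]
v3 = [F(1, 2), F(1, 2), F(1)]

print(matvec(A, v1))                                   # [0, 0, 0]
print(matvec(A, v2))                                   # [-10, 0, 5] = 5*v2
print([a - 5 * b for a, b in zip(matvec(A, v3), v3)])  # equals v2
```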
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
An exercise in differential topology Problem: Given a smooth submanifold $M\subset\mathbb{R}^k$, show that the tangent bundle space $$TM=\{(x,v)\in M\times\mathbb{R}^k:v\in TM_x\}$$ is also a smooth manifold. Show that any smooth map $f:M\rightarrow N$ gives rise to a smooth map $$df:TM\rightarrow TN$$ where $$d(\text{identity})=\text{identity},d(g\circ f)=(dg)\circ(df).$$
This is an exercise in "Topology from the differentiable viewpoint" by John Milnor. My question is, the smooth manifold $M$ is a subset of $\mathbb{R}^k$, but its dimension may be another integer, say $n\ (<k)$. Then how can the tangent vector $v$ lie in $\mathbb{R}^k$?
| One way to define the tangent space to a submanifold $M\subset \Bbb R^k$ at some point $x$ is to consider the set of all derivatives at $t=0$ of all smooth curves $f:\Bbb R\to M$ such that $f(0)=x$. As such, the tangent space $T_xM$ is indeed defined as a subspace of $\Bbb R^k$ (whose dimension is the dimension of $M$).
As an example, the tangent space to the standard $2$-sphere in $\Bbb R^3$ at some point $x$ "is" the subspace of $\Bbb R^3$ defined by all vectors that are orthogonal to $x$. It is isomorphic to $\Bbb R^2$, but it is still defined as a subspace of $\Bbb R^3$.
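A small numerical illustration of the curve-derivative definition (my own addition; the particular curve is an arbitrary choice): a curve on $S^2$ through the north pole has its derivative in $\Bbb R^3$, orthogonal to the base point.

```python
import math

# f is a smooth curve on the unit sphere S^2 with f(0) = x = (0, 0, 1);
# the particular curve is an arbitrary sample choice.
def f(t):
    return (math.sin(t) * math.cos(2 * t),
            math.sin(t) * math.sin(2 * t),
            math.cos(t))

h = 1e-6
x = f(0.0)
v = tuple((a - b) / (2 * h) for a, b in zip(f(h), f(-h)))  # f'(0) numerically

norm_sq = sum(c * c for c in f(0.7))    # the curve stays on S^2
dot = sum(a * b for a, b in zip(x, v))  # <x, f'(0)>
print(norm_sq, v, dot)                  # norm 1, v ≈ (1, 0, 0), dot ≈ 0
```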
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expected runtime analysis for sums of four squares (Rabin and Shallit) I've been reading a paper of Rabin and Shallit ("Randomized Algorithms in Number Theory"), which gives a brief sketch of an ERH-conditional algorithm to compute a representation of a positive integer $n$ as a sum of four squares, originally presented in some unpublished notes. The claimed expected time complexity of the algorithm is $O(\log^2 n)$. I understand why the algorithm is correct (conditional on ERH), but I'm confused about why the time complexity is as claimed. In particular, the key step is the computation of a prime $p \leq n^5$ satisfying a certain congruence condition mod $4n$. ERH guarantees that if you keep choosing numbers satisfying this congruence condition in $[1, n^5]$ uniformly and independently at random, then you'll come across a prime in expected $O(\log n)$ time. But still, at every trial you must test whether the candidate $p$ is prime (in fact I was considering the possibility of seeing what happens later in the algorithm if $p$ is composite, hoping that it will terminate quickly with a wrong answer or something, but this doesn't seem to be the case; also Rabin and Shallit specifically write on page S243 to test it for primality).
So in order to satisfy the desired time complexity, this means that the amortized time complexity of the primality test needs to be $O(\log n)$. As far as I know, even if you allow randomization there isn't any known algorithm for primality testing with this complexity (especially in 1986 when this paper was published) unless you allow some constant error probability. Can anyone point out what I am missing?
| As discussed in the comments, apparently the number used need to be a prime. We only need to find a square root of $-1$ modulo that number, and the prime density from ERH is used only to show that we have a certain probability of doing so.
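For concreteness, here is a minimal sketch (my own, not from the paper) of that randomized step for a prime $p \equiv 1 \pmod 4$: a random quadratic non-residue $a$ gives $a^{(p-1)/4}$ as a square root of $-1$ by Euler's criterion. The function name and the example prime are arbitrary choices.

```python
import random

def sqrt_minus_one_mod(p, rng=random.Random(0)):
    # For a prime p ≡ 1 (mod 4): if a is a quadratic non-residue mod p,
    # Euler's criterion gives a^((p-1)/2) ≡ -1 (mod p), so a^((p-1)/4)
    # squares to -1.  Half of all residues work, so ~2 expected trials.
    assert p % 4 == 1
    while True:
        a = rng.randrange(2, p - 1)
        if pow(a, (p - 1) // 2, p) == p - 1:   # a is a non-residue
            return pow(a, (p - 1) // 4, p)

p = 65537                        # an example prime with p ≡ 1 (mod 4)
i = sqrt_minus_one_mod(p)
print(i, (i * i) % p == p - 1)   # i^2 ≡ -1 (mod p)
```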
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3470873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Convergence of Sum of Orthogonal Let $\{x_n\}$ be an orthonormal basis of a Hilbert space $H.$
Can anyone help me show that $\sum_{n=0}^{\infty}|(x_n,x)|^2$ is convergent and that $\|x\|^2=\sum_{n=0}^{\infty}|(x_n,x)|^2?$
This is to show that $(x_n,x)\rightarrow 0.$
I used Bessel's Inequality to show that $\|x\|^2\geq\sum_{n=0}^{\infty}|(x_n,x)|^2$. Is this even right?
Thanks a lot.
| By the usual definition of a basis $\{x_n\}$ for a Hilbert space, one means that the set is orthonormal and that
$$x = \sum_{n=1}^\infty \langle x,x_n\rangle x_n$$
meaning that
$$\lim_{N\rightarrow \infty} \left\|x-\sum_{n=1}^N\langle x,x_n\rangle x_n\right\| = 0\quad \text{Convergence in the Hilbert space norm}$$
Now if two vectors are orthogonal its easy to prove Pythagoras Theorem in this general setting so that $x\perp y\Rightarrow \|x+y\|^2 = \|x\|^2+\|y\|^2$. Now we can write
$$\left\|x\right\|^2 = \left\|x-\sum_{n=1}^{N}\langle x,x_n\rangle x_n+\sum_{n=1}^{N}\langle x,x_n\rangle x_n\right\|^2 = \left\|x-\sum_{n=1}^{N}\langle x,x_n\rangle x_n\right\|^2+\left\|\sum_{n=1}^{N}\langle x,x_n\rangle x_n\right\|^2$$
Why is this equality true? How can you use this to prove Bessel's inequality? Taking the limit as $N$ tends to infinity you should get the equality you want after having applied Pythagoras theorem to the last term.
Note further that if you only want to prove that $\langle x,x_n\rangle \rightarrow 0$ then you only need that Bessel's inequality is true and then you don't need that $\{x_n\}$ is a basis, it suffices that it is an orthonormal set.
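For reference, here is my own sketch of the two questions posed above: orthogonality makes the cross terms vanish, so Pythagoras applies and

```latex
% Cross terms vanish: for every m <= N,
%   <x - \sum_{n<=N} <x,x_n> x_n , x_m> = <x,x_m> - <x,x_m> = 0.
\left\|\sum_{n=1}^{N}\langle x,x_n\rangle x_n\right\|^{2}
   = \sum_{n=1}^{N}\left|\langle x,x_n\rangle\right|^{2}
\qquad\Longrightarrow\qquad
\sum_{n=1}^{N}\left|\langle x,x_n\rangle\right|^{2}\le \|x\|^{2}.
```

Letting $N\to\infty$ gives Bessel's inequality $\sum_{n}|\langle x,x_n\rangle|^2 \le \|x\|^2$; since the series converges, its terms tend to $0$, i.e. $\langle x,x_n\rangle\to 0$.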
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is a locally compact Hausdorff vector space with countable base a topological vector space? My question is related to the very first part of Chapter 3 of Resnick's book "Extreme Values, Regular Variation and Point Processes".
He is claiming that one should think of a locally compact Hausdorff space with countable base as a subset of a compactified finite dimensional euclidean space (which is a topological vector space).
Indeed, if we were in a locally compact Hausdorff topological vector space with countable base this would be an obvious statement since every locally compact topological vector space is finite dimensional.
My question is: if we additionally assume that the locally compact Hausdorff space with countable base is a vector space do we automatically get that this space is also a topological vector space?
The mathematical formulation of the problem: Let $X$ be a vector space over $K$. Assume that $X$ is Hausdorff, locally compact and has a countable base. Is $X$ a topological vector space?
I tried to tackle the problem using that $X$ is metrizable and separable, but this did not lead me anywhere.
Thanks!
| This is obviously false, because the topology could have nothing to do with the vector space structure. For instance, with $K=\mathbb{R}$ and $X=\mathbb{R}$ with its usual vector space structure, we can pick a bijection between $X$ and $[0,1]$ and thus give $X$ a topology which is homeomorphic to the usual topology on $[0,1]$. Or, we could pick some crazy bijection between $X$ and $\mathbb{R}$ to put a topology on $X$ which is homeomorphic to the usual topology but for which addition is not continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Without calculating the square roots, determine which of the numbers:$a=\sqrt{7}+\sqrt{10}\;\;,\;\; b=\sqrt{3}+\sqrt{19}$ is greater.
Without calculating the square roots, determine which of the numbers:
$$a=\sqrt{7}+\sqrt{10}\;\;,\;\; b=\sqrt{3}+\sqrt{19}$$
is greater.
My work (I was wondering if there are other ways to prove this):
$$a^2=17+2\sqrt{70}, \;\;b^2=22+2\sqrt{57}$$
$$\sqrt{64}<\sqrt{70}<\sqrt{81}\implies 8<\sqrt{70}<9\implies a^2<35$$
$$\sqrt{49}<\sqrt{57}<\sqrt{64}\implies 7<\sqrt{57}<8\implies b^2>36$$
$$a^2<35<36<b^2\implies a^2<b^2\implies |a|<|b|$$
| There are indeed other ways to do this. Your solution is great, but if you were just curious about another method, here is one:
$$
\begin{align}
\sqrt{7} + \sqrt{10} \quad &? \quad \sqrt{3} + \sqrt{19} \\
\sqrt{10} - \sqrt{3} \quad &? \quad \sqrt{19} - \sqrt{7}
\end{align}
$$
Note that instead of comparing $a$ and $b$ directly, we can just compare these values.
Define the function
$$
f(x) = \sqrt{9x+10} - \sqrt{4x+3}
$$
We do this because $f(0) = \sqrt{10} - \sqrt{3}$ and $f(1) = \sqrt{19} - \sqrt{7}$.
The magic step is now figuring out that for all positive $x$, this function is increasing, which tells us that $f(1) > f(0)$.
Of course, seeing that this function is increasing is not exactly obvious, but it is not a difficult task if you have a calculus background.
Perhaps there is another step we can take or a different function we can use that would make the fact it is increasing more obvious?
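A quick numerical confirmation (my own addition) that $f$ is increasing on $[0,1]$, hence $f(1)>f(0)$:

```python
import math

f = lambda x: math.sqrt(9 * x + 10) - math.sqrt(4 * x + 3)

vals = [f(k / 100) for k in range(101)]            # sample f on [0, 1]
increasing = all(a < b for a, b in zip(vals, vals[1:]))
print(increasing, f(0.0), f(1.0))                  # True, f(0) < f(1)
print(math.sqrt(7) + math.sqrt(10) < math.sqrt(3) + math.sqrt(19))  # True
```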
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 0
} |
Finding a Recurrence Relation and Solving I am trying to find a recurrence relation for the following question:
For integer n ≥ 1, let h(n) be the number of length $n$ words consisting of A’s and B’s, that contain either at least an “AA”, or at least an “ABB”. Find a recurrence relation satisfied by $h(n)$ (with necessary initial conditions) and solve it.
So far, what i have come up with is $h(n) = h(n-1)+h(n-2)+2^{n-2}+2^{n-3}$, but when I try to solve that, I end up with non-whole number eigenvalues. Then, I tried the recurrence relation $h(n) = h(n-1)+2^{n-2}+2^{n-3}$, but when I solved I did not get an equation that matched the cases I computed by hand. Is my recurrence relation wrong or or am I solving wrong? Any help would be great!
| This is an odd problem, because it seems easier to find a closed expression for $h(n)$ than a recurrence relation!
After all, how many words of length $n$ don't meet the criteria? Such a word can start with any number of B's. But once it hits the first A, it needs to alternate between A and B to avoid having either an AA or an ABB. Since there are $n+1$ words of length $n$ that satisfy that (starting with between $0$ and $n$ B's), $$h(0)=h(1)=0\\h(n)=2^n-n-1\quad\text{for }n\ge2$$
So, let's work backwards from here and come up with a recurrence relation. To find $h(n+1)$, we can start with a term of $2h(n)$ since every $n$ letter word that contains an AA or an ABB can either have an A or a B at the end and will still obviously contain that substring. So how many $n+1$ letter words wouldn't meet the criterion if you took off its last letter? Just $n$ of them. For instance, the six 7-letter words of that type are BBBBBAA, BBBBABB, BBBABAA, BBABABB, BABABAA, and ABABABB. You can see how we formed them by thinking about the previous paragraph. so a recurrence relation would be $$h(1)=0\\h(n+1)=2h(n)+n\quad\text{for }n\ge1$$
As to how you'd come up with that before the closed form, your guess is as good as mine. ^_^
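Both the closed form and the recurrence are easy to confirm by brute-force enumeration (my own addition):

```python
from itertools import product

def h_brute(n):
    # count length-n words over {A, B} containing "AA" or "ABB"
    return sum(1 for w in map("".join, product("AB", repeat=n))
               if "AA" in w or "ABB" in w)

closed = lambda n: 0 if n < 2 else 2 ** n - n - 1

assert all(h_brute(n) == closed(n) for n in range(1, 13))
assert all(h_brute(n + 1) == 2 * h_brute(n) + n for n in range(1, 12))
print([h_brute(n) for n in range(1, 8)])  # [0, 1, 4, 11, 26, 57, 120]
```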
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definite integral help I'm working on a physics problem and I got to the integral:
$$\int_0^\infty (a+b+x^2)^{-\frac{3}2} dx = \frac{1}{(a+b)}$$
I am just trying to understand how this is achieved. Because the indefinite integral yields
$$x\,(a+b)^{-1}\,(a+b+x^2)^{-\frac{1}{2}}$$
Evaluating this from 0 to $\infty$, to me, gives
$$\frac{\infty}{\sqrt{a+b+\infty^2}} - 0$$
Edit: corrected my math
| The indefinite integral of $(C+x^2)^{-\frac{3}{2}}$ actually equals $\frac{x}{C\sqrt{C+x^2}}$. If we set $C=a+b$, the answer will coincide with yours.
I guess that in your approach you missed $x$ coming from deriving $x^2$ inside brackets.
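A numerical spot-check of the antiderivative (my own addition; $C=3.7$ and $x_0=1.3$ are arbitrary sample values):

```python
import math

C = 3.7                                        # sample value for a + b
F = lambda x: x / (C * math.sqrt(C + x * x))   # claimed antiderivative
g = lambda x: (C + x * x) ** -1.5              # the integrand

h, x0 = 1e-5, 1.3
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)      # central difference
print(deriv, g(x0))                            # F'(x0) = g(x0)
print(F(1e9), 1 / C)                           # F(x) -> 1/C as x -> infinity
```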
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show not Lebesgue integrable using step functions I have shown that conditionally $\int_0^\infty \int_0^\infty f(x,y) \,dy \,dx = \int_0^\infty \int_0^\infty f(x,y) \,dx \,dy = \frac{\pi}{2}$ where $f(x,y) = e^{-xy} \sin(x).$ This part is relatively easy because
$$\int_0^\infty e^{-xy} \sin(x) dy = \frac{\sin(x)}{x} $$
Now I want to show that $f \not\in L^1(\mathbb{R}^{+2})$. I know we can show $f$ is not Lebesgue integrable by finding a sequence of simple / step functions $\phi_n \leq |f|$ where $\int_{\mathbb{R}^{+2}} \phi_n \to \infty$, but I need some help here.
| It is readily apparent that $f$ is not Lebesgue integrable since $F(x) = \int_0^\infty e^{-xy} |\sin x| \, dy = \frac{|\sin x|}{x}$ is not integrable over $[0,\infty)$. If $f$ were integrable, the iterated integral must be finite by Tonelli's theorem.
Alternatively, using your suggested approach, take $A_{jk} = \left[\frac{\pi}{4} + j\pi, \frac{3\pi}{4} + j\pi\right] \times \left(\frac{1}{k+1},\frac{1}{k}\right]$ and define the sequence of step functions
$$\phi_{mn}(x,y)= 2^{-1/2}\sum_{k=1}^m \sum_{j=0}^n e^{-\left(\frac{3\pi}{4} + j\pi\right)\frac{1}{k}}\chi_{A_{jk}}(x,y)$$
Since $|\sin x| \geqslant 2^{-1/2}$ for $x \in \left[\frac{\pi}{4} + j\pi, \frac{3\pi}{4} + j\pi\right] $, we have for $(x,y) \in A_{jk}$,
$$e^{-xy} |\sin x| \geqslant 2^{-1/2}e^{-\left(\frac{3\pi}{4} + j\pi\right)\frac{1}{k}}$$
Thus,
$$\begin{align}\int_0^\infty \int_0^\infty e^{-xy} |\sin x| \, dx \, dy &\geqslant \lim_{m \to \infty}\lim_{n \to \infty} \int_0^\infty \int_0^\infty \phi_{mn}(x,y) \, dx \, dy \\&= 2^{-1/2}\sum_{k=1}^\infty \sum_{j=0}^\infty e^{-\left(\frac{3\pi}{4} + j\pi\right)\frac{1}{k}}\left(\frac{1}{k} - \frac{1}{k+1} \right) \\ &= 2^{-1/2}\sum_{k=1}^\infty \sum_{j=0}^\infty e^{-\frac{\pi j }{k}}\frac{e^{-\frac{3\pi}{4k}}}{k(k+1)}\\ &= 2^{-1/2}\sum_{k=1}^\infty \frac{1}{1-e^{-\frac{\pi }{k}}}\frac{e^{-\frac{3\pi}{4k}}}{k(k+1)} \\ &=2^{-1/2}\sum_{k=1}^\infty \frac{e^{\frac{\pi}{4k}}}{k(k+1)\left(e^{\frac{\pi}{k}}-1 \right)} \\ &= + \infty \end{align}$$
The series on the RHS diverges because the summand is $\sim \frac{1}{\pi (k+1)} $ as $k \to \infty$.
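To see the divergence numerically (my own repackaging of the same estimate): on $[j\pi,(j+1)\pi]$ one has $\int |\sin x|\,dx = 2$ and $1/x \ge 1/((j+1)\pi)$, giving a harmonic lower bound that grows without bound:

```python
import math

# int_0^{N*pi} |sin x| / x dx  >=  (2/pi) * (H_N - 1),
# a harmonic lower bound, so the integral of |f| is infinite.
lower = lambda N: (2 / math.pi) * sum(1 / (j + 1) for j in range(1, N))
print(lower(10 ** 2), lower(10 ** 4), lower(10 ** 6))  # grows without bound
```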
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the minimal polynomial for $\cos(\frac{2\pi}{5})$ and $\sin(\frac{2\pi}{5})$ Let $\omega$ be a primitive 5th root of $1$; then $\cos(\frac{2\pi}{5}) = \frac{\omega+\omega^{-1}}{2}$ and $\sin(\frac{2\pi}{5}) = \frac{\omega-\omega^{-1}}{2i}$. How to find the minimal polynomial of $\frac{\omega+\omega^{-1}}{2}$ then? (without using the Chebyshev polynomials)
Thanks.
| Hint: square it, then make use of the fact that the 5 fifth roots of unity sum to 0.
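Carrying the hint through (this working is mine, not the answerer's): with $c=\frac{\omega+\omega^{-1}}2$ one has $(2c)^2=\omega^2+\omega^3+2=1-2c$, so $4c^2+2c-1=0$. A quick numeric check:

```python
import cmath, math

w = cmath.exp(2j * math.pi / 5)     # a primitive 5th root of unity
c = ((w + 1 / w) / 2).real          # cos(2*pi/5)

# 2c = w + w^4 and (w + w^4)^2 = w^2 + w^3 + 2; since
# 1 + w + w^2 + w^3 + w^4 = 0, w^2 + w^3 = -1 - 2c, so 4c^2 + 2c - 1 = 0.
print(4 * c * c + 2 * c - 1)        # ~0 (up to rounding)
print(c, (math.sqrt(5) - 1) / 4)    # cos(2*pi/5) = (sqrt(5) - 1)/4
```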
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3471903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Using the Poisson approximation to estimate the number of trials required to guarantee at least one success Suppose that on average, out of $N$ trials, $q$ succeed. $q$ is much smaller than $N$. For a concrete example, suppose $N = 100$ and $q = 2$.
Let $n$ be the number of trials run in a particular experiment. How large should $n$ be to ensure with probability $x$ that there is at least 1 success? For a concrete example, suppose that $x = 0.95$.
The probability that there are $k$ successes in $n$ trials might be approximated using the binomial distribution with probability parameter $p = q/N$.
The probability that there are is at least $1$ success is given by:
$$ x = 1 - P(\text{$0$ successes}) = 1 - {n \choose 0} p^0 (1 - p)^{n - 0} = 1 - (1 - p)^n $$
Solving for $n$:
$$ (1 - p)^n = 1 - x \iff n = \frac{\ln(1-x)}{\ln(1-p)} $$
Using our concrete values, we get:
$$ n \approx 150 $$
Let us now try the Poisson approximation approach. Let $\lambda = np$. Then:
$$ 1 - \frac{\lambda e^{-\lambda 0}}{0!} = x \Leftrightarrow 1 - \lambda = x \Leftrightarrow \frac{1 - x}{p} = n$$
Recalling that $p = q/N$:
$$ \frac{N(1 - x)}{q} = n $$
Using our concrete values for $N$, $x$ and $q$:
$$ n \approx 3 $$
There is already an issue with what I have done so far, since the Poisson approximation gives a result that is totally nonsensical. What am I doing wrong?
Going further, I want to try and bound the error in the estimate for $n$ that I get from the Poisson approximation. The accuracy bounds on the Poisson approximation for the binomial distribution state that if $X \sim \text{Bin}(M, r)$, and $Y \sim \text{Poisson}(Mr)$:
$$ |P(X \in \mathbb{N}) - P(Y \in \mathbb{N})| \leq Mr^2 $$
since $\mathbb{N}$ is the set over which both Poisson and binomial distributions are defined (naturals include $0$).
I am a bit confused by the $P(X \in \mathbb{N})$ bit, and not quite sure how to use the bound to estimate how good $n$ is. Can you help?
| You have miscalculated the probability in the Poisson case. Indeed, since the Poisson probability is given by
$$
\mathbb P\bigl(\textrm{Poisson}(\lambda)=k\bigr)=\frac{\lambda^ke^{-\lambda}}{k!},
$$
we have that the probability of no successes is $$\frac{\lambda^0e^{-\lambda}}{0!}=e^{-\lambda},$$
not $\lambda$ as you have written.
Thus, the Poisson approximation yields
$$
1-e^{-\lambda}=x\iff \lambda=-\ln(1-x)
$$
Since $\lambda=np=\frac{nq}{N}$, we obtain that
$$
n=\frac{\lambda N}{q}=\frac{-N\ln(1-x)}{q}=-50\ln(.05)\approx 149.787
$$
As a point of comparison, the true value of $n$ (without making any approximation) is
$$
n=\frac{\ln(1-x)}{\ln(1-q/N)}=\frac{\ln(.05)}{\ln(.98)}\approx 148.284
$$
The difference between these two expressions is that in the exact expression the denominator is $\ln(1-q/N)$, which in the approximation is replaced by its first Taylor approximation, $-q/N$. (In general, the first Taylor approximation of $\ln(1+y)$ is simply $y$, and this is the case when $y=-q/N$.)
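As a numerical cross-check of the two expressions above, they can be computed directly; the parameter values $N=50$, $q=1$, $x=0.95$ are my inference from the concrete numbers in the question, and the variable names are mine:

```python
import math

N, q, x = 50, 1.0, 0.95   # assumed concrete values matching the numbers above
p = q / N

# exact: solve (1 - p)^n = 1 - x for n
n_exact = math.log(1 - x) / math.log(1 - p)

# corrected Poisson approximation: 1 - exp(-n*p) = x
n_poisson = -N * math.log(1 - x) / q

print(n_exact, n_poisson)   # ~148.284 and ~149.787
```

The two values differ by about $1.5$, exactly the gap between the exact denominator $\ln(1-q/N)$ and its first-order Taylor replacement $-q/N$.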
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $2(\sin y + 1)(\cos y + 1) = (\sin y + \cos y + 1)^2$ The question states:
Show that: $$2(\sin y + 1)(\cos y + 1) = (\sin y + \cos y + 1)^2$$
This is what I have done
$2(\sin y + 1)(\cos y + 1) = (\sin y + \cos y + 1)^2$
L. H. S. = R. H. S.
From L. H. S.
$2(\sin y +1)(\cos y + 1) = 2(\sin y\cos y + \sin y + \cos y + 1)$
$= 2(\sin y\cos y + \sin y + \cos y + \sin^2 y + \cos^2 y) (\sin^2 y + \cos^2 y = 1)$
$= 2(\sin^2 y + \sin y\cos y + \sin y + \cos^2 y + \cos y)$
I got stuck here. I do not know what to do from here.
I have tried and tried several days even contacted friends but all to no avail.
| Expanding the RHS,
$$\color{blue}{(\sin y + \cos y + 1 )^2} = \sin^2 y + \cos^2 y + 1 +2\cos y \sin y + 2\cos y + 2\sin y$$
$$= 1+ 1 +2\cos y \sin y + 2\cos y + 2\sin y=2(1+\cos y \sin y + \cos y + \sin y) = 2 \left[(1+\cos y) + (\sin y + \cos y \sin y ) \right] = 2 \left[(1+\cos y) + \sin y(1 + \cos y) \right]=\color{blue}{2(1+\cos y)(1+\sin y )}$$
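For readers who like to sanity-check an identity numerically before proving it, here is a quick spot-check (my own addition; not a proof):

```python
import math, random

random.seed(0)
max_err = 0.0
for _ in range(1000):
    y = random.uniform(-10, 10)
    lhs = 2 * (math.sin(y) + 1) * (math.cos(y) + 1)
    rhs = (math.sin(y) + math.cos(y) + 1) ** 2
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)   # ~1e-15: the two sides agree up to floating-point rounding
```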
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Multivariate Lagrange inversion with zero powers (also asked in MO)
The multivariate Lagrange inversion formula, which I found in a couple of papers (such as this and this), is as follows. If $f_i=t_ig_i(f)$, $1\le i\le k$, then
$$[\boldsymbol{t^n}]h(\boldsymbol{f(t)})=\frac{1}{n_1n_2\cdots n_k}[\boldsymbol{x^{n-1}}]\sum_T \frac{\partial (h,g_1^{n_1},...,g_k^{n_k})}{\partial T},$$
where $\boldsymbol{t^n}=t_1^{n_1}\cdots t_k^{n_k}$ and the derivative is taken with respect to some trees (as discussed in those papers).
Not one of the papers in question has addressed the question of how this formula is to be used when some of the powers are zero, $n_j=0$, something that does not happen in the one-variable case (due to the assumption that $g(0)=0$) but can happen in the multivariable one.
| Let's see how the $n_i$ come as reciprocal factors into the formula. They come from the factorial denominators of the Taylor series terms.
Look at the one-variable Lagrange-Bürmann formula:
$$[t^n]h(f(t))=\frac{1}{n}[x^{n-1}](h'(x)g(x)^n).$$
It can be proved that
$$(h(f(t)))^{(n)}|_0=(h'(x)g(x)^n)^{(n-1)}|_0$$
Now we translate this from the derivatives to the Taylor series terms:
$$n!\,[t^n]h(f(t))=(n-1)!\,[x^{n-1}]\big(h'(x)g(x)^n\big)$$
$$n\,[t^n]h(f(t))=[x^{n-1}]\big(h'(x)g(x)^n\big)$$
$$[t^n]h(f(t))=\frac{1}{n}[x^{n-1}]\big(h'(x)g(x)^n\big)$$
This is your multivariate Lagrange-Bürmann formula for $k=1$.
In the multivariate case, $n!$ becomes $n_1!...n_k!$, and $(n-1)!$ becomes $(n_1-1)!...(n_k-1)!$. So we get the factor $\frac{1}{n_1...n_k}$.
That means the formula cannot be applied to multinomial terms where some $n_i$ is $0$.
For $n=0$, the Lagrange inversion formula and the Lagrange–Bürmann formula have separate special-case forms.
$\ $
See Rosenkranz, M: Lagrange Inversion. Diploma thesis, RISC Linz 1997:
page 40: "It turns out that the inversion formulas in their second form can be generalized to the multivariate case in a very natural way. (For the first form of the inversion formulas, the multivariate generalizations are very complicated.)"
See theorem 42 at page 38, corollary 43 at page 39, and theorem 47 at page 41.
Corollary 43 (univariate case) and theorem 47 (multivariate case) present formulas for the general series coefficients without the factor $\frac{1}{\boldsymbol{n}}$.
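To illustrate the univariate ($k=1$) case concretely, here is a sketch that solves the tree-function equation $T = t\,e^{T}$ (so $h(x)=x$, $g(x)=e^x$) by truncated-series iteration with exact rationals, and checks the coefficients against the Lagrange–Bürmann prediction $[t^n]T = \frac1n[x^{n-1}]e^{nx} = \frac{n^{n-1}}{n!}$. The helper names and the choice of example are mine:

```python
from fractions import Fraction
from math import factorial

ORDER = 8  # work with power series truncated at degree ORDER - 1

def series_exp(a, order):
    """exp of a truncated power series a (requires a[0] == 0), exact rationals."""
    result = [Fraction(0)] * order
    result[0] = Fraction(1)
    term = list(result)                       # running a^k / k!
    for k in range(1, order):
        new = [Fraction(0)] * order
        for i in range(order):
            for j in range(order - i):
                new[i + j] += term[i] * a[j]  # truncated product term * a
        term = [c / k for c in new]
        result = [r + t for r, t in zip(result, term)]
    return result

# solve T = t * exp(T) by iterating; each pass locks in one more coefficient
T = [Fraction(0)] * ORDER
for _ in range(ORDER):
    T = [Fraction(0)] + series_exp(T, ORDER)[:ORDER - 1]   # multiply by t

# Lagrange–Bürmann with h(x) = x, g(x) = e^x predicts [t^n] T = n^(n-1) / n!
for n in range(1, ORDER):
    assert T[n] == Fraction(n ** (n - 1), factorial(n))
print(T[1:])
```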
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Weak topologies in normed spaces Let $(X,\|\cdot\|)$ be a normed space over complex or real field and $\tau_X$ is the topology generated by the norm $\|\cdot\|$, i.e. just norm topology. 1) Is there a locally convex topology $\tau$ such that $\tau$ is weaker than $\tau_X$ and stronger than $\sigma(X,X^*)$, where $X^*$ is Banach adjoint? 2) If $\tau$ is any locally convex topology on $X$, then is it true that $\tau=\sigma(X,Y)$ for some subset $Y$ of $X^+$, where $X^+$ means the algebraic adjoint of $X$, i.e. the set of all linear functionals on $X$?
| Let's first consider the case that $X$ is finite-dimensional. Since then there is only one Hausdorff vector space topology on $X$, and $\sigma(X,X^{\ast})$ is Hausdorff, the answer to the first question is "no" if we understand "weaker" and "stronger" in the strict sense [in the non-strict sense the answer is trivially "yes"]. The answer to the second question is "yes" for finite-dimensional $X$, even if we consider also non-Hausdorff topologies (let $N$ be the $\tau$-closure of $\{0\}$, then $\tau = \sigma(X, N^{\perp})$).
If $X$ is infinite-dimensional, the answers are different. Every $\sigma(X,Y)$-neighbourhood of $0$ contains a linear subspace of finite codimension, thus no topology having a neighbourhood of $0$ that only contains subspaces of infinite codimension — like for example norm-topologies, that have neighbourhoods of $0$ containing no nontrivial subspace at all — can be a $\sigma(X,Y)$ for any subspace $Y \subset X^{+}$.
This property allows an easy construction of a topology strictly between the norm topology and the weak topology if $X$ contains infinite-dimensional closed subspaces with infinite codimension (all common spaces do): Let $Z$ be such a subspace. Put $U = B + Z$, where $B$ is the unit ball of $X$.
Then the locally convex topology generated by $\sigma(X,X^{\ast}) \cup \{U\}$ is strictly finer than $\sigma(X,X^{\ast})$ — $U$ doesn't contain a subspace of finite codimension — and strictly weaker than the norm topology — every neighbourhood of $0$ contains an infinite-dimensional subspace.
If $X$ is such that every closed subspace has either finite dimension or finite codimension (I don't know whether that's possible), this construction doesn't work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about why this hyperplane section divisor is singular at a certain point I have a small question about an excerpt from a passage about intersection numbers in "Basic Algebraic Geometry I" by Igor Shafarevich.
Let $ X \subset \mathbb{P}^{3} $ be a nonsingular surface of degree $ m $ and $ L \subset X $ a line. Consider a plane in $ \mathbb{P}^{3} $ containing $ L $ and not tangent to $ X $ at at least one point of $ L, $ and let $ E $ be the hyperplane section of $ X $ by this plane. Then $ L $ is contained in $ E $ as a component of multiplicity $ 1 $ and $$ E = L + C \;\text{ with } C = \sum k_{i}C_{i} \; \text{ and } \sum k_{i}\text{deg}C_{i} = m-1. $$
Observe that the curve $ E $ is singular at a point of intersection of $ L $ and $ C, $ which means that the plane cutting out $ E $ equals the tangent plane to $ X $ at this point.
I don't quite see why $ E $ is singular at a point $ y \in L \cap C. $ Also, why does this make $ E $ the same as the tangent plane to $ X $ at $ y? $ Does this follow from the singularity of $ y $ by definition? I feel I'm forgetting some basic property.
| $E$ is singular at $y\in L\cap C$ because $y$ is on multiple irreducible components: every point which is on multiple irreducible components is singular, as the local ring of such a point has at least two minimal primes (corresponding to the distinct irreducible components it's on) while a regular local ring is a domain and thus has only one minimal prime.
As for the business about the tangent plane, recall that as $X$ is nonsingular, it has a tangent plane at each point. The inclusion of any subvariety $i:Y\hookrightarrow X$ induces an inclusion of tangent spaces $Di_p:T_pY\hookrightarrow T_pX$ at any point $p\in Y$. Let $P$ be the plane cutting out $E$. Then we have the following two inclusions of tangent spaces: $T_yE\hookrightarrow T_yP$ and $T_yE\hookrightarrow T_yX$. As $T_yE$ is at least two-dimensional because the 1-dimensional subscheme $E$ is singular at $y$ and $T_yX$ is two-dimensional as $X$ is smooth at $y$, we see that inclusion must be an isomorphism. Similarly, as $T_yE$ and $T_yP$ are both two-dimensional, that map must be an isomorphism as well. So $T_yE$ is simultaneously the tangent plane to $X$ at $y$ and the plane which cuts out $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the expected area of a cyclic quadrilateral inscribed in a unit circle? Choose four points randomly on the circumference of a circle with radius $1$. Connect them to form a quadrilateral. What is the expected area of this quadrilateral?
I have attempted to simulate to find an answer but am not sure how to approach finding an exact value. The simulation fixes one of the points at $0$ and generates 3 other points uniformly around the circle between $0$ and $2\pi$. Then it orders the points and takes the differences between them to get the 4 central angles of the quadrilateral. From these 4 central angles it finds the length of each side $s_i$ using the formula $s_i=2\sin\frac{\theta_i}{2}$. Once I have the four sides I can use Brahmagupta's Formula to find the area $K$ of the quadrilateral. I repeat this 100k times and take the average of $K$, getting $K\approx 0.96$.
| The four central angles have the same distribution, and the expected area is $4$ times the expected area of one of the four triangles spanned by the central angles.
The probability density of the central angle is proportional to the volume it leaves to the remaining two points: $f_\alpha(\alpha)\propto(2\pi-\alpha)^2$. Normalization yields $f_\alpha(\alpha)=\frac3{(2\pi)^3}(2\pi-\alpha)^2$.
The area of the triangle spanned by the central angle $\alpha$ is $\frac12\sin\alpha$. Thus the expected area of the quadrilateral is
$$
4\int\limits_0^{2\pi}\frac3{(2\pi)^3}(2\pi-\alpha)^2\frac12\sin\alpha\,\mathrm d\alpha=\frac3\pi\approx0.955\;.
$$
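The closed form can be checked against the kind of simulation described in the question; the code below is my own sketch (it sums signed triangle areas from the centre, which is equivalent to Brahmagupta's formula here):

```python
import math, random

random.seed(1)
trials = 200_000
total = 0.0
for _ in range(trials):
    # one vertex fixed at angle 0, three more uniform on [0, 2*pi)
    angles = sorted([0.0] + [random.uniform(0, 2 * math.pi) for _ in range(3)])
    # the four central angles
    gaps = [b - a for a, b in zip(angles, angles[1:])] + [2 * math.pi - angles[-1]]
    # signed triangle areas from the centre sum to the quadrilateral's area
    total += sum(0.5 * math.sin(g) for g in gaps)

estimate = total / trials
print(estimate, 3 / math.pi)   # both ~0.955
```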
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Solving $\sinh x = kx$ Can we solve the equation $\sinh x = kx$ for $x$ in terms of elementary functions? I've tried reexpressing the hyperbolic sine as exponentials and converting the equation into a quadratic in $e^x$, but this doesn't seem to make the problem any easier. I've considered expanding $\sinh x$ as a Taylor series, but this doesn't seem useful either.
| As mentioned in the comments, the only solution for $k\le1$ is $x=0$ and for $k>1$ there are 3 solutions: $x=0,\pm x_\star$. Although there is no closed form in terms of special functions such as the Lambert W function known, it is not hard to numerically compute $x_\star$, the positive nonzero solution. For example, we have fixed-point iteration:
$$x_{n+1}=\ln(2kx_n+\exp(-x_n))$$
or Newton's method:
$$x_{n+1}=x_n-\frac{\sinh(x_n)-kx_n}{\cosh(x_n)-k}$$
or any other numerical method.
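Newton's method above is easy to run in practice; here is a minimal sketch (the function name and starting guess are my own choices — the guess just needs to sit to the right of the flat region where $\cosh x \le k$):

```python
import math

def solve_sinh_kx(k, tol=1e-12, max_iter=100):
    """Positive root of sinh(x) = k*x for k > 1, via Newton's method."""
    if k <= 1:
        raise ValueError("a nonzero root exists only for k > 1")
    x = 2 * math.acosh(k) + 1          # start where cosh(x) > k
    for _ in range(max_iter):
        step = (math.sinh(x) - k * x) / (math.cosh(x) - k)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

root = solve_sinh_kx(2.0)
print(root, math.sinh(root) / root)    # the ratio should come back as ~2
```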
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3472880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $f(x\cdot y) = f(x)\cdot f(y)$ $\forall$ $x,y$ and $f(x)$ is continuous at $x = 1$, prove the following. If $f(x\cdot y) = f(x)\cdot f(y)$ $\forall$ $x,y$ and $f(x)$ is continuous at $x = 1$, prove that $f(x)$ is continuous for all $x$ except at $x = 0$. Given: $f(1)\ne0$.
$$f(1)=f(1)\cdot f(1)$$
$$f(1)(f(1)-1)=0$$
$$f(1)=0 \text { or } f(1)=1$$
As it is given $f(1)\ne0$, so $f(1)=1\tag{1}$
I know the condition for $f(x)$ to be continuous at all $x$ is:-
$$f(x^+)=f(x^-)=f(x)$$
Let's check the continuity at $x=0$
$$f(0)=f(0)f(0)$$
$$f(0)(f(0)-1)=0$$
Case $1$: $f(0)=1$
$$f(x\cdot0)=f(x)\cdot f(0)$$
$$f(0)(f(x)-1)=0$$
As $f(0)=1$, so
$$f(x)-1=0$$
$$f(x)=1$$
One can clearly see that $f(x)$ is a continuous function $\forall x$. But in the question it is said that we have to prove $f(x)$ is continuous $\forall x$ except 0.
Case $2$: $f(0)=0$
$$f(x\cdot0)=f(x)\cdot f(0)$$
$$f(0)(f(x)-1)=0$$
We can't say that $f(x)=1$
How to proceed from here. I am totally stuck here and not finding how to prove the given fact. Please help me in this.
| Obviously $f(x+y)=f\left(x\cdot\left(1+\frac yx\right)\right)=f(x)\cdot f\left(1+\frac yx\right)$ where $x\not = 0$.
So, we have $$\displaystyle\lim_{h\to 0} f(x\pm h)=f(x)\cdot f\left(1\pm\frac hx\right)$$
And since we are considering points where $x\not = 0$, we have $\frac hx\to 0$ as $h\to 0$. Because $f$ is continuous at $1$, $f\left(1\pm\frac hx \right) \to f(1)=1$. So, using this, we have $$\lim_{h\to 0}f(x\pm h)=f(x)\cdot 1 = f(x)$$ $$\implies f(x)\text{ is continuous at every } x\neq 0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3473592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
$\int f' g d \lambda = - \int f g' d \lambda$ I know that this result is quite elementary, but nevertheless, I don't think that the result is trivial. So, let $f,g \in C^1 (\mathbb R)$ with $f, g, f', g' \in L^2 (\mathbb R)$.
Why is it true that $\displaystyle\int_{\mathbb R} f'g\,d\lambda = - \int_{\mathbb R} fg'\,d\lambda\;?$
Thanks for your help!
| You are right that this is usually treated "in the literature" (i.e., in textbooks and papers) as trivial, although it is not quite that.
First, note (by Cauchy Schwarz) that $F := f \cdot g \in L^1$ is continuously differentiable with $F'(x) = f'(x) \cdot g(x) + f(x) \cdot g'(x) \in L^1$. It is well-known (see for instance here Absolute continuity of the Lebesgue integral) that this ($F'$ being an integrable function) implies for $\epsilon > 0$ that there is $\delta > 0$ satisfying $\int_A |F'(x)|dx < \epsilon$ as soon as $\lambda(A) < \delta$ (here, $\lambda$ is the Lebesgue measure). Therefore, if $|x-y| < \delta$ and (without loss of generality) $x \leq y$, then $|F(x) - F(y)| \leq \int_x^y |F'(t)| dt \leq \epsilon$.
In other words, $F$ is uniformly continuous. Since $F$ is also integrable, this implies (see here: Limit at infinity of a uniformly continuous integrable function) that $F$ vanishes at infinity, that is, $F(x) \to 0$ as $x \to \infty$ or $x \to -\infty$.
Now, we can apply the fundamental theorem of calculus to deduce
$$
\int_{\Bbb{R}} F'(t) dt
= \lim_n \int_{-n}^n F'(t) d t
= \lim_n \big[ F(n) - F(-n) \big]
= 0.
$$
If you recall that $F'(t) = f'(t) g(t) + f(t) g'(t)$, this implies the claim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3473750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
To find the Jordan Canonical Form Consider a matrix A (5x5) with all entries = 1. Here the entries are considered as elements of $F_5$ ,the finite field of order 5.
What is the Jordan canonical form?
I have found out that $A^2=0$ and thus the minimal polynomial is $x^2$.
So I know there are (two 2x2 blocks and one 1x1 block) OR (one 2x2 block and three 1x1 blocks)
How do I tell which?
| Hint: If $A$ is an $n \times n$ matrix, then $n - \operatorname{rk}(A)$ is the total number of Jordan blocks that $A$ has associated with $\lambda = 0$.
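Applying the hint concretely: a small row reduction over $\mathbb F_5$ (the helper function is my own sketch) confirms $\operatorname{rk}(A)=1$, hence $5-1=4$ Jordan blocks; together with the minimal polynomial $x^2$ that forces one $2\times2$ block and three $1\times1$ blocks.

```python
def rank_mod_p(rows, p):
    """Row-reduce a matrix over the field F_p and return its rank."""
    m = [row[:] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, n_rows) if m[r][col] % p), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col] % p, -1, p)     # modular inverse of the pivot
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(n_rows):
            if r != rank and m[r][col] % p:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

A = [[1] * 5 for _ in range(5)]
r = rank_mod_p(A, 5)
n_blocks = 5 - r          # total number of Jordan blocks for lambda = 0
print(r, n_blocks)        # 1 and 4: one 2x2 block plus three 1x1 blocks
```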
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3473852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why my expression for acceleration doesn't work? So i have an object that moves in a straight line with initial velocity $v_0$ and starting position $x_0$. I can give it constant acceleration $a$ over a fixed time interval $t$. Now what i need is that when the time interval ends this object should stop exactly at a point $x_1$ with it's velocity being equal to $0$. I need to find acceleration $a$ that i can give it in order for that to happen.
The way i see it we've got a system of equations:
$$ 0 = v_0 + a t $$
$$ x_1 = x_0 + v_0 t + \frac {a t^2} {2} $$
I have only one unknown, which is $a$.
Let's get $a$ from the first equation:
$$ a = \frac { - v_0 } { t } $$
And put it into the second one:
$$ x_1 = x_0 + v_0 t + \frac { - v_0 t } {2} $$
Now let's express initial velocity ($v_0$) from that equation:
$$ x_1 - x_0 = v_0 t + \frac { - v_0 t } {2} $$
$$ \frac { x_1 - x_0 } { t } = v_0 + \frac { - v_0 } {2} $$
$$ \frac { 2 ( x_1 - x_0 ) } { t } = 2 v_0 - v_0 $$
$$ v_0 = \frac { 2 ( x_1 - x_0 ) } { t } $$
And put it back into equation for acceleration:
$$ a = \frac { - v_0 } { t } $$
$$ a = \frac { - \frac { 2 ( x_1 - x_0 ) } { t } } { t } $$
$$ a = - \frac { 2 ( x_1 - x_0 ) } { t^2 } $$
So we got an acceleration that i need to apply to an object over a time interval $t$, so that it would stop at $x_1$ with velocity $0$, right?
But it doesn't work!
Because it doesn't depend on initial velocity at all! So if my object is flying at 2 m/s then i would need to apply the same acceleration as if it was flying 100 m/s, or 1000 m/s? How come?
Where am i being wrong? This all seems mathematically sound... Am i setting the wrong premises? Interpreting results in the wrong way?
I really need it for my project, and i've been trying to solve this for weeks, studying different aspects of maths that might help me, but i just can't do it :(
But this looks so simple! And yet i just can't do it. 11 years of school seem so useless right now...
Help please
| I will use $t_0$ rather than $t$, since this is also a fixed quantity.
What you are doing doesn't work for arbitrary $t_0$, $x_0$, $x_1$, and $v_0$.
Since your only unknown is supposed to be $a$, from the first equation you get
$$a = -\frac{v_0}{t_0}$$
From the second equation you get
$$a = \frac{2(x_1-x_0-v_0t_0)}{t_0^2}$$
Thus, for a solution to exist, you must have
$$-\frac{v_0}{t_0} = \frac{2(x_1-x_0-v_0t_0)}{t_0^2}$$
or
$$v_0t_0 = 2(x_1-x_0).$$
If this does not hold, then there is no solution.
Conversely, if $v_0t_0=2(x_1-x_0)$, then your solution is $a=-\frac{2(x_1-x_0)}{t_0^2} = -\frac{v_0}{t_0}.$
So the solution only exists for a specific value of $v_0$ (given the distance and time), and then the acceleration does depend on the initial velocity.
Alternatively, you can fix any three of $v_0$, $x_0$, $t_0$, and $x_1$, and then solve for the remaining unknown and $a$; but in general you cannot arbitrarily specify all four quantities.
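To make the compatibility condition concrete, here is a tiny numerical check; the specific values $t_0=2$, $x_0=0$, $x_1=10$ are arbitrary choices of mine:

```python
t0, x0, x1 = 2.0, 0.0, 10.0          # assumed example values

v0 = 2 * (x1 - x0) / t0              # the unique compatible initial speed
a = -v0 / t0                         # the required constant acceleration

# verify: position and velocity at time t0
x_final = x0 + v0 * t0 + 0.5 * a * t0 ** 2
v_final = v0 + a * t0
print(v0, a, x_final, v_final)       # 10.0 -5.0 10.0 0.0
```

Any other choice of $v_0$ with these $t_0$, $x_0$, $x_1$ makes the two equations inconsistent, which is exactly the point of the answer.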
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3473944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Does there exist a division ring without unity? In abstract algebra I have only ever seen division introduced via multiplicative inverses, namely starting from a ring with unity $R$ and then adding the condition that each element $x$ has an inverse element $x^{-1}$ such that $xx^{-1}=x^{-1}x=1$. But I can also imagine a concept of division without having a unit element, defined as follows:
Let $R$ be a ring with the property that for each ordered pair $(a,b)\in R$ with $b\neq 0$, there exists a unique $c\in R$ such that $a=bc=cb$. Therefore it makes sense to define $a/b:=c$, where $c$ is the unique element corresponding to $(a,b)$ as specified above. Is it possible for such a structure to exist on a ring without unity?
| Let $R$ be a ring without unit. Suppose
for each ordered pair $(a,b)\in R$ with $b\neq 0$, there exists a unique $c\in R$ such that $a=bc=cb$.
We claim that $R$ is a ring with unit.
Let $b \in R$, $b \ne 0$. Then there is a unique $e_b$ such that $b = be_b = e_bb$.
We must show that $e_a = e_b$ for all nonzero $a,b$. Then this will be the unit in $R$.
Let $a,b \in R$, both nonzero. There is $c$ so that $a = cb = bc$. So
$$
e_b a = e_b b c = b c = a,\qquad
ae_b= c b e_b = c b = a.
$$
Thus, $e_b$ satisfies the defining property of $e_a$. By the uniqueness, $e_a = e_b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Evaluating the limit $\lim_{x\to0}\frac{1}{x^3}\int_{0}^{x}\sin(\sin(t^2))dt$ $$\lim_{x\to0}\frac{1}{x^3}\int_{0}^{x}\sin(\sin(t^2))dt$$
This is a compound question from me.
*
*I don't know how to begin evaluating this limit. My guess would be that I would have to find the value of this Riemann's integral and then plug the result into the limit. Is this the right direction to head?
Which brings me to...
*I am also stuck trying to resolve the integral. I tried integrating by substitution, trying with both $u = t^2$ and $u = \sin(t^2)$, but both have led me to finding that $t$ or $dt$ pops back into the equation sooner or later, and I'm not quite sure how to handle that. Any hints as to how I can integrate that function?
Thank you.
| Without L'Hopital:
\begin{align*}
\dfrac{1}{x^{3}}\int_{0}^{x}\sin(\sin t^{2})dt=\dfrac{1}{x^{3}}\left(x\sin(\sin x^{2})-\int_{0}^{x}t\cos(\cos t^{2})2tdt\right).
\end{align*}
Note that
\begin{align*}
\dfrac{1}{x^{3}}(x\sin(\sin x^{2}))=\dfrac{\sin(\sin x^{2})}{\sin x^{2}}\dfrac{\sin x^{2}}{x^{2}}\rightarrow 1.
\end{align*}
On the other hand,
\begin{align*}
\int_{0}^{x}t\cos(\cos t^{2})2tdt=\dfrac{2}{3}x^{3}\cos(\cos x^{2})-\dfrac{2}{3}\int_{0}^{x}t^{3}\sin(\sin t^{2})2tdt.
\end{align*}
And we have
\begin{align*}
-\dfrac{\dfrac{2}{3}x^{3}\cos(\cos x^{2})}{x^{3}}=-\dfrac{2}{3}\cos(\cos x^{2})\rightarrow-\dfrac{2}{3},
\end{align*}
whereas fot the integral, by the change of variable $u=t^{4}$, we obtain that
\begin{align*}
\int_{0}^{x}t^{3}\sin(\sin t^{2})2tdt=\dfrac{1}{2}\int_{0}^{x^{4}}u\sin(\sin u^{1/2})\dfrac{du}{u^{3/4}}=\dfrac{1}{2}\int_{0}^{x^{4}}u^{1/4}\sin(\sin u^{1/2})du,
\end{align*}
and that
\begin{align*}
\dfrac{1}{x^{3}}\int_{0}^{x}t^{3}\sin(\sin t^{2})2tdt&=\dfrac{1}{2}\cdot x\cdot\dfrac{1}{x^{4}}\int_{0}^{x^{4}}u^{1/4}\sin(\sin u^{1/2})du\\
&=\dfrac{1}{2}\cdot x\cdot\eta_{x}^{1/4}\sin(\sin\eta_{x}^{1/2})\\
&\rightarrow 0,
\end{align*}
where $\eta_{x}\in[0,x]$ is chosen by Integral Mean Value Theorem, therefore the whole limit is $1-2/3=1/3$.
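Independently of the derivation, the value $1/3$ can be checked numerically; below is a rough sketch using composite Simpson's rule (all names are mine):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda t: math.sin(math.sin(t * t))
for x in (0.1, 0.05, 0.01):
    print(x, simpson(f, 0, x) / x ** 3)   # approaches 1/3 as x -> 0
```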
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
} |
Expected value of Distinct Elements from maximum sequence of permutation of 1 to n Let $a_1,a_2,\dots,a_n$ be a permutation of $1$ to $n$. We define the sequence $b = b_1, b_2, \dots, b_n$ by $b_i = \max\{a_1,a_2,\dots,a_i\}$. Find the expected value of $X$, the number of distinct values in the sequence $b$.
For example if the permutation is $1, 3, 2$ then $b = 1,3,3$ so X is 2. (i.e 1 and 3 are distinct numbers). Another example: for permutation $1 , 3 , 4 ,2$ we have $b = 1,3,4,4$ so $X = 3$.
If we do this for all permutations of $1$ to $3$, the sum of $X$ over all permutations is $11$. For permutations of $1$ to $4$, it is $50$. So the expected values are $11/6$ and $50/24$ respectively.
But I cannot find the pattern in this to find the answer for general case $n$. I tried to use an indicator variable
$
I_i =
\begin{cases}
1, & \text{if $i$ appears in position $1$ to $i$} \\[2ex]
0, & \text{o.w}
\end{cases}
$
And tried to define $X = \sum_i I_i$ but I found out this is not correct.
So I want help for this question. This problem is probably related to linearity of expectation.
| Let $I_i$ be the indicator variable that $a_i = b_i$, namely that $a_i$ is the maximum of the first $i$ values of the permutation.
Hint: Show that $X = \sum I_i$.
Hint: Show that $ E[I_i] = \frac{1}{i}$.
Hence, $ E[X] = \sum_{i=1}^n \frac{1}{i}$.
This agrees with the $n=3, 4$ cases that you calculated.
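The claim $E[X] = \sum_{i=1}^n \frac1i$ is also easy to test by simulation (the simulation sketch is mine; the $I_i$ above count exactly the "record" positions where $a_i = b_i$):

```python
import random

random.seed(42)

def expected_records(n, trials=20000):
    """Monte Carlo estimate of E[X], the number of distinct values in b."""
    total = 0
    for _ in range(trials):
        best, records = -1, 0
        for v in random.sample(range(n), n):   # a uniform random permutation
            if v > best:                       # new running maximum => new value in b
                best, records = v, records + 1
        total += records
    return total / trials

H = lambda n: sum(1 / i for i in range(1, n + 1))   # harmonic number
for n in (3, 4, 10):
    print(n, expected_records(n), H(n))
```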
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving Zeta Relations Without Direct Evaluation Is it possible to derive the following $\zeta$ relations without actually finding the values themselves?
\begin{align*}
2 \zeta(2)^2 &= 5\zeta(4)\\
4\zeta(2)\zeta(4) &= 7\zeta(6)\\
3\zeta(2)\zeta(6) &= 5\zeta(8)
\end{align*}
These small integer relations make it look like there is a nice relation between these values.
However, the pattern breaks at the next one and the simplest relation I can find is $4+6=10$
$$10\zeta(4)\zeta(6) = 11\zeta(10)$$
I thought of using the integral representation of $\zeta(s)$ but I do not see an obvious way forward.
| $\mathbf{\text{Hint:}}$ (given that you are mainly interested in the even zeta values)
The pattern you look towards is seen from the fact that $$\zeta(2n)=\frac{(-1)^{n+1}(2\pi)^{2n}B_{2n}}{2(2n)!}$$
Where $B_n$ are the Bernoulli numbers and $n\in\Bbb{N}$
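Using this formula, all four relations can be verified exactly with rational arithmetic, since the powers of $\pi$ agree on both sides; the Bernoulli values below are the standard table entries, and the helper names are mine:

```python
from fractions import Fraction
from math import factorial

# standard Bernoulli numbers B_{2n} for n = 1..5
B = {1: Fraction(1, 6), 2: Fraction(-1, 30), 3: Fraction(1, 42),
     4: Fraction(-1, 30), 5: Fraction(5, 66)}

def c(n):
    """zeta(2n) / pi^(2n) as an exact rational, via the Bernoulli formula."""
    return Fraction((-1) ** (n + 1) * 2 ** (2 * n), 2 * factorial(2 * n)) * B[n]

# the relations, checked exactly (the powers of pi match, so compare rationals)
assert 2 * c(1) ** 2 == 5 * c(2)       # 2 zeta(2)^2       = 5 zeta(4)
assert 4 * c(1) * c(2) == 7 * c(3)     # 4 zeta(2) zeta(4) = 7 zeta(6)
assert 3 * c(1) * c(3) == 5 * c(4)     # 3 zeta(2) zeta(6) = 5 zeta(8)
assert 10 * c(2) * c(3) == 11 * c(5)   # 10 zeta(4) zeta(6) = 11 zeta(10)
print([c(n) for n in range(1, 6)])     # 1/6, 1/90, 1/945, 1/9450, 1/93555
```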
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How many positive integers less than $90000$ have the sum of their digits equal to $17$? How many positive integers less than $90000$ have the sum of their digits equal to $17$?
I tried to write the number as $ABCDE$ and use some math with that (so we need $A + B + C + D + E = 17$), and I tried to use stars and bars, but I got no progress.
Can someone please help me?
| Stars and bars sounds like a good idea. You want to put $17$ balls in $5$ buckets, with no more than $9$ balls in any one bucket. First do it without the $9$-ball restriction. Now you have to subtract the number of ways that have $10$ or more balls in a bucket. Since there are only $17$ balls, there can't be more than one bucket with $10$ balls. Choose a bucket in which to place $10$ balls. Now distribute the remaining $7$ balls in the $5$ buckets.
EDIT
I forgot about "less than $90000.$" You also have to subtract the cases where there are exactly $9$ balls in the first bucket, so $8$ in the remaining $4$.
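The count described above can be assembled and brute-force-checked in a few lines (my own sketch of the computation):

```python
from math import comb

# stars and bars with the two corrections from the answer
unrestricted = comb(17 + 4, 4)        # all digit 5-tuples summing to 17: C(21,4)
digit_over_9 = 5 * comb(7 + 4, 4)     # some digit >= 10 (at most one can be): 5*C(11,4)
leading_nine = comb(8 + 3, 3)         # A = 9 exactly, so B+C+D+E = 8: C(11,3)
formula = unrestricted - digit_over_9 - leading_nine

# brute-force check over all positive integers below 90000
brute = sum(1 for n in range(1, 90000) if sum(map(int, str(n))) == 17)
print(formula, brute)                 # both 4170
assert formula == brute
```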
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Let $x$ be a real number such that $|x|<1.$ Which of the following is false? Let $x$ be a real number such that $|x|<1.$ Which of the following is false?
*
*If $x\in \mathbb Q$, then $\sum_{m\ge 0}x^m \in \mathbb Q$
*If $\sum_{m\ge 0}x^m \in \mathbb Q$ then $x\in \mathbb Q$
*If $x\notin \mathbb Q$ then $\sum_{m\ge 0}mx^{m-1} \notin \mathbb Q$
*$\sum_{m\ge 0}x^m/m $ converges in $\mathbb R$
My attempt:-
$\sum_{m\ge 0}x^m =\frac{1}{1-x}$, If $x\in \mathbb Q$ then $\frac{1}{1-x}\in \mathbb Q$
Similarly, If $\frac{1}{1-x} \in \mathbb Q\implies 1-x \in \mathbb Q \implies$ $x\in \mathbb Q$
Also we know that $\frac{1}{(1-x)^2}=\sum_{m\ge 0}mx^{m-1} \in \mathbb Q$, If $x=1+\frac{1}{\sqrt 2}$
So, (3) is the False statement. But In answer key it was given that (2) is the answer. I am confused.
| Everything you have done is correct except that you cannot take $x=1+\frac 1 {\sqrt 2}$ since $|x|<1$. Take $x=1-\frac 1 {\sqrt 2}$ instead.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Triangle inscribed in a circle,2 points fixed and 1 moving. The track of centroid makes a circle but how do I prove it without cartesian coordinate? Triangle ABC and circle O. A and B are fixed, but C is moving on the circle.
So I have triangle ABC and circle O. A and B are fixed on the circle, but C is moving around the circle. Let G is the centroid of ABC, G' is the centroid of OAB, and $r$ is the radius of O. Then the track of G makes a circle, and its center is G' and radius is $\frac{r}{3}$.
It is easy to prove with Cartesian coordinate. Let O($0,0$), A($a_x,a_y$), B($b_x,b_y$), C($c_x,c_y$), G($g_x,g_y$). Then $$a_x^2+a_y^2=r^2$$ $$b_x^2+b_y^2=r^2$$ $$c_x^2+c_y^2=r^2$$
Since G is the centroid of ABC, $$g_x=\frac{a_x+b_x+c_x}{3}\quad \therefore c_x=3g_x-a_x-b_x$$ $$g_y=\frac{a_y+b_y+c_y}{3}\quad \therefore c_y=3g_y-a_y-b_y$$Then $$c_x^2+c_y^2=(3g_x-a_x-b_x)^2+(3g_y-a_y-b_y)^2=r^2$$ $$(g_x-\frac{a_x+b_x}{3})^2+(g_y-\frac{a_y+b_y}{3})^2=(\frac{r}{3})^2$$ so G$(g_x,g_y)$ makes a circle, center of which is $(\frac{a_x+b_x}{3},\frac{a_y+b_y}{3})$ and radius $\frac{r}{3}$. Also, $(\frac{a_x+b_x}{3},\frac{a_y+b_y}{3})$ is the centroid of triangle OAB.
But there must be a way that proves this without cartesian coordinate but with pure geometry. Problem is, I know little of geometry and can't find the way. Could you enlighten me and show me the way?
| This is not too hard to see using ordinary geometry.
Let $A$, $B$ be fixed points on circle with center $O$, and $C$ any other point on the circle.
In triangle $ABC$, bisecting $CB$, $AB$ at $E$, $F$, and joining $AE$, $CF$, then $G$ is the centroid of $\triangle ABC$.
In fixed triangle $AOB$, bisect $AO$ at $D$, and join $BD$, $OF$, giving $K$ the centroid of $\triangle AOB$. Finally, join $GK$.
Assuming as well known, that the centroid divides the median lines of a triangle in a $\frac{2}{1}$ ratio, then$$\frac{CG}{GF}=\frac{OK}{KF}=\frac{2}{1}$$Hence$$\frac{CF}{GF}=\frac{OF}{KF}=\frac{3}{1}$$Therefore$$GK\parallel CO$$whence$$\triangle CFO\sim\triangle GFK$$and$$\frac{CO}{GK}=\frac{3}{1}$$And since $CO$ has a fixed length for all positions of $C$, so does $GK$.
Therefore, since $K$ is fixed in position, $G$ lies always on the circumference of a circle centered on $K$.
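As a quick coordinate check of the claim, independent of the synthetic proof (the specific angles chosen for $A$ and $B$ are arbitrary choices of mine):

```python
import math, random

random.seed(3)
r = 1.0
A = (r * math.cos(0.4), r * math.sin(0.4))     # fixed points on the circle
B = (r * math.cos(2.1), r * math.sin(2.1))
Gp = ((A[0] + B[0]) / 3, (A[1] + B[1]) / 3)    # centroid G' of triangle OAB

max_dev = 0.0
for _ in range(1000):
    th = random.uniform(0, 2 * math.pi)
    C = (r * math.cos(th), r * math.sin(th))   # the moving point
    G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
    dist = math.hypot(G[0] - Gp[0], G[1] - Gp[1])
    max_dev = max(max_dev, abs(dist - r / 3))
print(max_dev)   # ~0: G always lies on the circle of radius r/3 about G'
```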
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3474897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Assuming matrix $B$ is symmetric, can I prove that $A$ is symmetric $A,B$ are square matrices and $A(I+B)=I$, $B$ is symmetric, can I prove that $A$ is symmetric as well?
| Given $B$ is symmetric.
Taking transposes in $A(I+B) = I$ and using $B^T = B$ gives $(I+B)^T A^T = (I+B)A^T = I$.
Hence $$A = A\bigl[(I+B)A^T\bigr] = \bigl[A(I+B)\bigr]A^T = I A^T = A^T,$$ so $A$ is symmetric!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Determine if this statement about Big O notation is true or not. $f(n) = n^2 + n^{0.5}$
$g(n) = [g(n-1)]^2 + [g(n-2)]^2$ for $n \geq 3$, where $g(1) = 1$ and $g(2) = 2$
The statement: $2^{2^{f(n)}} = \Omega(g(n))$
The $\lim_{n \rightarrow \infty} \frac{2^{2^{f(n)} }}{g(n)}$ can't be computed easily since $g(n)$ has a recurrence relation.
How do I approach it?
| The $g_n$ are given as sequence $A000283$ in $OEIS$ (have a look here). If you look at the formula section, in 2003 Benoit Cloitre proposed
$$g_n=\left\lfloor A^{2^{n}}\right\rfloor$$ where
$$A=1.23539273778543688962233101322844082434745718691367945473360\cdots$$ is "almost" $[\log(5) ]^{e^\gamma}-\log(3)=1.235392625$
Now it is quite obvious from the formulae that
$$r_n=\frac{2^{2^{f(n)} }}{g(n)}\to \infty$$
Considering $\log [\log (r_n)]$ and computing a few values before overflows
$$\left(
\begin{array}{cc}
n & \log [\log (r_n)] \\
1 & 1.01978 \\
2 & 3.36260 \\
3 & 7.07101 \\
4 & 12.1101 \\
5 & 18.5121 \\
6 & 26.2846 \\
7 & 35.4316 \\
8 & 45.9554 \\
\end{array}
\right)$$
If you plot these last results, you will see that they perfectly align along a quadratic in $n$.
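The table can be reproduced independently of the original computation (a quick Python sketch I added): since $\log r_n = 2^{f(n)}\log 2 - \log g_n$, the value $\log[\log(r_n)]$ can be computed without ever forming the astronomically large number $2^{2^{f(n)}}$, and Python's exact integers handle the doubly-exponential $g(n)$.

```python
import math

def f(n):
    return n**2 + math.sqrt(n)            # f(n) = n^2 + n^0.5

def g(n):
    a, b = 1, 2                           # g(1), g(2); exact Python integers
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, b*b + a*a               # g(n) = g(n-1)^2 + g(n-2)^2
    return b

def loglog_r(n):
    # log r_n = log 2^(2^f(n)) - log g(n) = 2^f(n) * log 2 - log g(n)
    return math.log(2**f(n) * math.log(2) - math.log(g(n)))

print([round(loglog_r(n), 4) for n in range(1, 9)])
# e.g. loglog_r(1) ≈ 1.0198, loglog_r(3) ≈ 7.0710, loglog_r(8) ≈ 45.9554
```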
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Prove that the series $\sum_{x∈X}(f(x) + g(x))$ is absolutely convergent, and $ \sum_{x∈X}(f(x) + g(x)) = \sum_{x∈X}f(x) + \sum_{x∈X}g(x)$
Let $X$ be an arbitrary set (possibly uncountable), and let $f:X → R$ and $g: X → R$ be functions such that the series $\sum_{x∈X} f(x)$ and $\sum_{x∈X} g(x)$ are both absolutely convergent.
Prove: The series $\sum_{x∈X}(f(x) + g(x))$ is absolutely convergent, and
$$ \sum_{x∈X}(f(x) + g(x)) = \sum_{x∈X}f(x) + \sum_{x∈X}g(x)$$.
If the set $X$ is finite then I have the result. If it is countable, then $h: N \to X$ is a bijection and $\sum_{n=0}^{\infty} (f+g)(h(n))$ is absolutely convergent by definition. In other words, $\sum_{n=0}^{\infty} |f(h(n)+g(h(n))| = L$.
Since it is known that $\sum_{x∈X} f(x)$ and $\sum_{x∈X} g(x)$ are both absolutely convergent then $\sum_{x∈X} |f(x)| = M$ and $\sum_{x∈X} |g(x)|=K$.
I don't know how to proceed. It seems to me I found a solution for the uncountable case here Proving Proposition 8.2.6 from Terence Tao's Analysis I but still I got the problem with the countable one.(Frankly speaking I am not entirly getting the uncountable either).
| For any finite subset $F$ of $X$, we have
\begin{align*}
\sum_{x\in F}|f(x)+g(x)|\leq\sum_{x\in F}|f(x)|+\sum_{x\in F}|g(x)|\leq\sum_{x\in X}|f(x)|+\sum_{x\in X}|g(x)|<\infty,
\end{align*}
so
\begin{align*}
\sup_{F\subseteq X, F~\text{finite}}\sum_{x\in F}|f(x)+g(x)|<\infty.
\end{align*}
This proves the absolute convergence of $\displaystyle\sum_{x\in X}(f(x)+g(x))$.
For any finite subset $F$ of $X$, we have
\begin{align*}
\sum_{x\in F}(f(x)+g(x))&=\sum_{x\in F}f(x)+\sum_{x\in F}g(x).
\end{align*}
Since
\begin{align*}
\sum_{x\in F}(f(x)+g(x))&\rightarrow\sum_{x\in X}(f(x)+g(x))\\
\sum_{x\in F}f(x)&\rightarrow\sum_{x\in X}f(x)\\
\sum_{x\in F}g(x)&\rightarrow\sum_{x\in X}g(x)
\end{align*}
as nets, and addition in real numbers is continuous, we get the equality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Percentage of people who have good credit ratings given that their ratings will improve
Suppose that 75% of all people with credit records improve their credit ratings within three years. Suppose that 18% of the population at large have poor credit records, and of those only 30% will improve their credit ratings within three years.
What percentage of the people who will improve their credit records within the next three years are the ones who currently have good credit ratings?
I defined $A$ as the event that a randomly selected person has a poor rating and $B$ as the event that a randomly selected person will improve their rating within three years.
$$P(A) = \frac{18}{100}, P(B) = \frac{75}{100}$$
$$P(A^cB)=P(B)-P(A)=\frac{57}{100}$$
$$P(A^c|B)=\frac{P(A^cB)}{P(B)}=\frac{19}{25}$$
However, this answer is wrong. I'm not sure where the 30% statistic comes in. What am I doing wrong and what approach should I take instead?
EDIT: I was able to solve the problem with your help. Thank you! I think my reasoning for this new solution should be sound.
$$P(A) = \frac{18}{100}, P(B) = \frac{75}{100}, P(B|A)=\frac{30}{100}$$
$$P(A^c|B)=1-P(A|B)=1-\frac{P(AB)}{P(B)}=1-\frac{P(A)P(B|A)}{P(B)}=\frac{116}{125}$$
| Try visualizing the situation for 1000 people.
750 people will improve their ratings.
180 people have poor ratings - meaning the remaining 820 have good ratings.
30% of the 180 people will improve - that is 54 people.
Since a total of 750 people improved that means that from the
820 people with good ratings we need 750-54=696 people to improve.
So, among the 750 people who improve, 696 are the ones with good ratings — that is $696/750=116/125=92.8\%$.
Try "translating" this specific (n=1000) example to probabilities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is $[-\frac{\pi}{2}, \frac{\pi}{2} ]$ the set of values for $f(x) = \arctan \sqrt{x^2-1} + \arcsin \frac{1}{x}$? I am given the function:
$$f:D \rightarrow \mathbb{R} \hspace{3cm} f(x) = \arctan \sqrt{x^2-1} + \arcsin \frac{1}{x}$$
where $D$ is the maximum domain of the function. I am told that the set of values of the function is $\bigg [ -\dfrac{\pi}{2}
,\dfrac{\pi}{2} \bigg ]$. How was this answer reached? I assume derivatives and limits have been used, but I am not sure. If you could show me the steps taken to reach this conclusion, or even just tell me what I need to do, I'd appreciate it.
| Hint
Let $\arctan\sqrt{x^2-1}=y,\dfrac\pi2>y\ge0,x=\pm\sec y$
If $x>0,x=\sec y$
$\arcsin\dfrac1x=\arcsin(\cos y)=\dfrac\pi2-\arccos(\cos y)=\dfrac\pi2-y$
If $x<0,x=-\sec y$
$\arcsin(-\cos y)=-\arcsin(\cos y)=?$
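Working the hint out numerically (a quick check I added): for $x\ge1$ the two terms combine to the constant $\frac\pi2$, while for $x\le-1$ the values sweep $[-\frac\pi2,\frac\pi2)$, giving the stated set of values $\bigl[-\frac\pi2,\frac\pi2\bigr]$ overall.

```python
import math

def f(x):
    return math.atan(math.sqrt(x*x - 1)) + math.asin(1/x)

# branch x >= 1: atan(sqrt(x^2-1)) = pi/2 - asin(1/x), so f is constantly pi/2
for x in (1.0, 1.5, 3.0, 100.0):
    assert abs(f(x) - math.pi/2) < 1e-9

# branch x <= -1: f(x) = 2*atan(sqrt(x^2-1)) - pi/2, sweeping [-pi/2, pi/2)
assert abs(f(-1.0) + math.pi/2) < 1e-12      # minimum -pi/2, attained at x = -1
assert abs(f(-2.0) - math.pi/6) < 1e-9       # pi/3 + (-pi/6)
assert f(-1e6) < math.pi/2 < f(-1e6) + 1e-4  # supremum pi/2, not attained here
```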
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
If $m$ and $n$ are integers, show that $\left|\sqrt{3}-\frac{m}{n}\right| \ge \frac{1}{5n^{2}}$ If $m$ and $n$ are integers, show that $\biggl|\sqrt{3}-\dfrac{m}{n}\biggr| \ge \dfrac{1}{5n^{2}}$.
Since $\biggl|\sqrt{3}-\dfrac{m}{n}\biggr|$ is equivalent to $\biggl|\dfrac{ \sqrt{3}n-m}{n}\biggr|$
So I performed the following operation $\biggl|\dfrac{\sqrt{3}n-m}{n}\biggr|\cdot \biggl|\dfrac{\sqrt{3}n+m}{\sqrt{3}n+m}\biggr|$ to get $$\biggl|\dfrac{3n^{2}-m^{2}}{\sqrt{3}n^{2}+mn}\biggr|$$
Since $n,m \ne 0$, we have that $|3n^{2}-m^{2}| \ge 1$. Now for the denominator, we have $$ |\sqrt{3}n^{2}+mn| \le |\sqrt{3}n^{2}| + |mn| $$
Thus it follows that $$\dfrac{1}{|\sqrt{3}n^{2}+mn|} \ge \dfrac{1}{|\sqrt{3}n^{2}| + |mn|}$$
Would I have to work in cases where $m<n$, for example? Then we have $$|\sqrt{3}n^{2}| + |mn| < |\sqrt{3}n^{2}| + n^{2} < 3n^{2} + n^{2} < 5n^{2}$$ which gives us the desired result. Although, the same method doesn't work when $m>n$.
| You're asking to prove, for integers $m$ and $n$ (with the assumption $n \neq 0$), that
$$\left|\sqrt{3}-\frac{m}{n}\right| \ge \frac{1}{5n^2} \tag{1}\label{eq1A}$$
Note if $m = 0$, \eqref{eq1A} obviously holds. Otherwise, as this other answer states, WLOG, we may assume both $m$ and $n$ are positive since if they have opposite signs, the result is trivial, and if they are both negative, the result is the same as if they were both their absolute value equivalents instead.
As you've shown by rationalizing the numerator and stating it must be at least $1$ is that you have
$$\left|\sqrt{3}-\frac{m}{n}\right| = \left|\frac{3n^2-m^2}{\sqrt{3}n^2 + mn}\right| \ge \frac{1}{\sqrt{3}n^2 + mn} \tag{2}\label{eq2A}$$
If the denominator on the right side is $\le 5n^2$, then you get
$$\begin{equation}\begin{aligned}
\sqrt{3}n^2 + mn & \le 5n^2 \\
\frac{1}{\sqrt{3}n^2 + mn} & \ge \frac{1}{5n^2}
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
so combined with \eqref{eq2A}, this shows \eqref{eq1A} will be true.
Consider instead that the denominator is $\gt 5n^2$ to get
$$\begin{equation}\begin{aligned}
\sqrt{3}n^2 + mn & \gt 5n^2 \\
mn & \gt (5 - \sqrt{3})n^2 \\
m & \gt (5 - \sqrt{3})n \\
\frac{m}{n} & \gt 5 - \sqrt{3} \\
-\frac{m}{n} & \lt - 5 + \sqrt{3} \\
\sqrt{3} -\frac{m}{n} & \lt - 5 + 2\sqrt{3} \lt -1.5 \\
\left|\sqrt{3}-\frac{m}{n}\right| & \gt 1.5 \gt \frac{1}{5n^2}
\end{aligned}\end{equation}\tag{4}\label{eq4A}$$
As such, \eqref{eq1A} will still hold in this case as well. Since all possibilities have been covered, it proves \eqref{eq1A} is always true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
} |
Vector space structure on extensions of vector bundles Let $F, G$ be vector bundles on a scheme over the field of complex numbers. I know that the set $V:=Ext^1(F, G)$ has the structure of an additive group given by the Baer sum of extensions. But $V$ also has the vector space structure. Namely, if $a\in\mathbb{C}^*$ and $\xi\in Ext^1(F, G)$ is given by
$$0\longrightarrow G\stackrel{g}\longrightarrow E\stackrel{f}\longrightarrow F\longrightarrow 0$$
is it true that $a\cdot\xi$ is given by an extension
$$0\longrightarrow G\stackrel{g}\longrightarrow E\stackrel{a\cdot f}\longrightarrow F\longrightarrow 0?$$
| In fact, $\operatorname{Ext}^1(F,G)$ admits two vector space structures: one which comes from $F$ and another which comes from $G$. But one can show that these two structures are identical.
The construction of the first one is as follow : let $a\in\mathbb{C}$ and consider the multiplication by $a$ map $a:F\to F$. You can form the pullback if $E\to F$ along $a$. This gives a commutative diagram :
$$
\require{AMScd}
\begin{CD}
0@>>> G@>>> E\times_{F,a} F@>>> F@>>>0\\
@.@|@VVV@VVaV\\
0@>>> G@>>> E@>>>F@>>>0
\end{CD}
$$
If $a=0$, then $E\times_{F,a} F$ is isomorphic to $G\oplus F$. Now if $a\neq 0$, then $a$ is an isomorphism so $E\times_{F,a} F$ is isomorphic to $E$. Modulo this isomorphism, the pullback diagram looks like :
$$
\require{AMScd}
\begin{CD}
0@>>> G@>>> E @>a^{-1}f>> F@>>>0\\
@.@|@|@VVaV\\
0@>>> G@>>> E@>>f>F@>>>0
\end{CD}
$$
Hence, the extension $a\xi$ is actually given by
$$0\longrightarrow G\longrightarrow E\overset{a^{-1}f}\longrightarrow F\longrightarrow 0$$
For the sake of completeness, let's have a look at the other possible construction : we use instead the structure on $G$. Let $a\in\mathbb{C}$ and consider the multiplication by $a$ as a map $a:G\to G$. We can form the pushout diagram of $G\to E$ along $a$. This gives a commutative diagram :
$$
\require{AMScd}
\begin{CD}
0@>>> G@>>> E @>>> F@>>>0\\
@.@VaVV@VVV@|\\
0@>>> G@>>> G\coprod_{a,G} E @>>>F@>>>0
\end{CD}
$$
Again, if $a=0$ then $G\coprod_{a,G} E=G\oplus F$ and the extension splits. If $a\neq 0$, then $G\coprod_{a,G} E\simeq E$ and modulo this isomorphism the pushout looks like :
$$
\require{AMScd}
\begin{CD}
0@>>> G@>g>> E @>>> F@>>>0\\
@.@VaVV@|@|\\
0@>>> G@>a^{-1}g>> E @>>>F@>>>0
\end{CD}
$$
So for the second vector space structure, the class $a\xi$ is represented by
$$0\longrightarrow G\overset{a^{-1}g}\longrightarrow E\longrightarrow F\longrightarrow 0$$
It remains to show that these two structures are identical, but this follows from the commutativity of :
$$
\require{AMScd}
\begin{CD}
0@>>> G@>a^{-1}g>> E @>f>> F@>>>0\\
@.@|@VaVV@|\\
0@>>> G@>>g> E @>>a^{-1}f>F@>>>0
\end{CD}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3475880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is a non-trivial covering space? I've come across this term many times but its meaning seems to be always assumed. Sometimes it looks like it means the covering space is connected or path-connected sometimes just that it is not equal to the space $X$ being covered. So what is exact definition? thanks
| To say that $X$ is a nontrivial covering space of $Y$ means that there exists a covering map $f : X \to Y$ such that $f$ is not a homeomorphism, equivalently $f$ is not one-to-one, equivalently the degree of $f$ is $\ge 2$ (recall that the degree is the cardinality of any fiber $f^{-1}(y)$, which is well-defined independent of $y \in Y$).
As a complement to this, one might say that $X$ is a trivial covering space of $Y$ if there exists a homeomorphism $f : X \to Y$.
Any space is a trivial covering space of any space to which it is homeomorphic. In particular, every space is a trivial covering space of itself.
But, it is quite possible for a space to also be a nontrivial covering space of itself. For example, $S^1$ is a nontrivial covering space of itself in many different ways, meaning that there exist covering maps of any degree $n \ge 2$: using complex coordinates, take the map $f(z)=z^n$. More generally, the $k$-dimensional torus $$T^k = \underbrace{S^1 \times \cdots \times S^1}_{\text{$k$ times}}
$$
is also a nontrivial covering space of itself.
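The degree count for $f(z)=z^n$ can be illustrated concretely: the fiber over any $w\in S^1$ consists of the $n$ distinct $n$-th roots of $w$, all on $S^1$ (a small sketch I added; the sample point is arbitrary):

```python
import cmath
import math

n = 3
w = cmath.exp(0.754j)                        # an arbitrary point of S^1
theta = cmath.phase(w)
fiber = [cmath.exp(1j*(theta + 2*math.pi*k)/n) for k in range(n)]

assert all(abs(abs(z) - 1) < 1e-12 for z in fiber)   # the fiber lies on S^1
assert all(abs(z**n - w) < 1e-12 for z in fiber)     # each fiber point maps to w
assert len({round(cmath.phase(z), 9) for z in fiber}) == n  # n distinct points
```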
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there an infinite set of finite strings such that no element is a subsequence of another? Of course, this is meant to be over a finite alphabet. My intuition is that this doesn't exist over any such alphabet, so that's what I'd want to know how to prove.
I'm also interested in questions like "can such a set be computably enumerable" and "can such a set be computable".
| This answer is translated (with small modifications) from here.
$\Sigma$ is a finite alphabet.
$\Sigma^\ast$ is the set of finite strings over $\Sigma$ (Kleene star).
$x\preceq y$ means that $x$ is a subsequence of $y$.
We'll prove that there is no infinite set $S \subseteq \Sigma^\ast$ such that no element of it is a subsequence of another (Higman's lemma).
Assume the thesis is false. Then there is an infinite sequence $x_1, x_2,\ldots$ such that
*
*$x_i\in\Sigma^\ast$
*$i<j \implies \textit{not} (x_i \preceq x_j) $ (notice that $x_i \succ x_j$ is possible)
From infinite sequences meeting the criteria 1-2 take one that's minimal in the sense that $|x_1|$ is minimal and with $|x_1|$ fixed $|x_2|$ is minimal, etc.
Since $\Sigma$ is finite, some letter $a\in\Sigma$ is the first letter of infinitely many of the $x_i$ (note that no $x_i$ can be the empty string, since the empty string is a subsequence of every string). Take an infinite subsequence $x_{i_1}, x_{i_2},\ldots $ where the first letter of each element is this $a$. Remove the first letter from each of those elements, getting the sequence $x_{i_1}', x_{i_2}',\ldots $. Then, the infinite sequence $$x_1, x_2, \ldots, x_{i_1-1}, x_{i_1}', x_{i_2}', x_{i_3}', \ldots$$ meets the criteria 1-2 and is "smaller" than $x_1, x_2, \ldots$, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 1
} |
Find $m$ if $f(x)=x^m\sin\frac{1}{x}$ is continuous and is not differentiable If $f(x)=\begin{cases}
x^m\sin\dfrac{1}{x}, & x\ne 0 \\
0, & x=0
\end{cases}$.
Find $m$ if $f(x)$ is continuous and is not differentiable
My attempt is as follows:-
Let's find the condition of continuity
$$\lim_{x\to0^{+}}x^m\sin\dfrac{1}{x}$$
As $x\rightarrow 0^{+}, \dfrac{1}{x}\rightarrow \infty,\sin\dfrac{1}{x} \text { oscillates in } [-1,1]$
$$m>0$$
$$\lim_{x\to0^{-}}x^m\sin\dfrac{1}{x}$$
As we have the negative base
$$m>0 \cap m\notin \left\{\dfrac{p}{q} | p,q \text { are coprime and } q \text { is even }\right\}\tag{1}$$
Let's find the condition of non-differentiability
$\lim_{h\to 0}\dfrac{h^m\sin\dfrac{1}{h}}{h}$ should not exist
$$\lim_{h\to 0^{+}}h^{m-1}\sin\dfrac{1}{h}$$
$$m\le0$$
$$\lim_{h\to 0^{-}}h^{m-1}\sin\dfrac{1}{h}$$
$$m-1\le 0 \cup m-1\in\left\{\dfrac{p}{q} | p,q \text { are coprime and } q \text { is even }\right\}$$
$$m\le 1 \cup m\in\left\{\dfrac{p+q}{q} | p,q \text { are coprime and } q \text { is even }\right\}\tag{2}$$
Taking intersection of equations $(1)$ and $(2)$
$$\left(m\in(0,1] \cap m\notin \left\{\dfrac{p}{q} | p,q \text { are coprime and } q \text { is even }\right\}\right) \cup \left(m>0 \cap m\notin \left\{\dfrac{p}{q} | p,q \text { are coprime and } q \text { is even }\right\} \cap m\in\left\{\dfrac{p+q}{q} | p,q \text { are coprime and } q \text { is even }\right\}\right)$$
But actual answer is simply $m\in(0,1]$
| I have to admit I cannot really understand what you are trying to do, and how you came up with your conditions for $p$ and $q$.
The first condition is that you need
$$
\lim_{x\to0} x^m\sin\frac1x=0.
$$
Since the sine is bounded, when $m>0$ we have $\left|x^m\sin\frac1x\right|\leq|x|^m$, so the limit is $0$ and the function is continuous. When $m\leq0$ the function oscillates at $0$ (unboundedly if $m<0$), so the limit doesn't exist.
So far: continuity when $m>0$.
For differentiability, we look at the limit at $0$ of
$$
\frac {x^m\sin\frac1x}{x}=x^{m-1}\sin\frac1x.
$$
If $m>1$, the limit is zero and so $f$ will be differentiable. When $m<1$, let $x_k=\frac2{\pi(2k+1)}$, so that $\sin\frac1{x_k}=(-1)^k$. Then
$$
\left|x_k^{m-1}\sin\frac1{x_k}\right|=\left(\frac{\pi(2k+1)}{2}\right)^{1-m}\xrightarrow[k\to\infty]{}\infty
$$
since $m<1$ so $1-m>0$, and the difference quotient is unbounded. And when $m=1$ the quotient is $\sin\frac1x$, which oscillates between $-1$ and $1$, so its limit at $0$ doesn't exist either.
Thus $f$ is differentiable precisely when $m>1$. So,
$f$ is continuous but not differentiable when $0<m\leq 1$.
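For $0<m<1$ the unbounded difference quotient can be seen numerically along the sequence $x_k=\frac2{\pi(2k+1)}$ (a quick sketch I added, with $m=\frac12$):

```python
import math

m = 0.5
def quotient(x):
    # (f(x) - f(0))/x = x^(m-1) sin(1/x)
    return x**(m - 1) * math.sin(1/x)

vals = [abs(quotient(2/(math.pi*(2*k + 1)))) for k in (1, 10, 100, 1000)]
# |quotient| = (pi(2k+1)/2)^(1-m) along this sequence, growing without bound
assert vals == sorted(vals)
assert vals[-1] > 50
```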
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Show that $\mathbb Q(\sqrt m,\sqrt n)=\mathbb Q(\sqrt {m}+\sqrt {n})$. Show that $\mathbb Q(\sqrt m,\sqrt n)=\mathbb Q(\sqrt {m}+\sqrt {n})$
My attempt: It is obvious that $\mathbb Q(\sqrt {m}+\sqrt {n}) \subset \mathbb Q(\sqrt m,\sqrt n) $ .
Is this proof correct?
| It is good, but you can shorten it: writing $t=\sqrt{m}+\sqrt{n}$, one has $t^3=(m+3n)\sqrt{m}+(3m+n)\sqrt{n}$, hence $(m+3n)t-t^3=2(n-m)\sqrt{n}$ and therefore
$$
2(m-n)\sqrt{n}\in\mathbb{Q}(\sqrt{m}+\sqrt{n})
$$
If $m=n$ the statement $\mathbb{Q}(\sqrt{m}+\sqrt{n})=\mathbb{Q}(\sqrt{m},\sqrt{n})$ is obvious, so we can assume $m\ne n$. Thus $\sqrt{n}\in\mathbb{Q}(\sqrt{m}+\sqrt{n})$ and so also
$$
\sqrt{m}=(\sqrt{m}+\sqrt{n})-\sqrt{n}\in\mathbb{Q}(\sqrt{m}+\sqrt{n})
$$
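A numeric spot-check of the claimed membership (a sketch I added), using the cubing identity $(m+3n)t-t^3=2(n-m)\sqrt{n}$ for $t=\sqrt{m}+\sqrt{n}$, which exhibits $\sqrt{n}$, up to a rational factor, as a polynomial in $t$:

```python
import math

for m, n in [(5, 2), (7, 3), (11, 6)]:     # arbitrary non-square choices
    t = math.sqrt(m) + math.sqrt(n)
    # (m+3n)t - t^3 collapses to 2(n-m)sqrt(n): the sqrt(m) terms cancel
    assert abs((m + 3*n)*t - t**3 - 2*(n - m)*math.sqrt(n)) < 1e-9
```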
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that in a neighborhood of zero there exists a smooth function $y(x)$
Let the function $G$ be such that $G(x)=\sum_{ij}G_{ij}(x)x_ix_j$ for some $G_{ij}\in C^{\infty}$ that vanish at zero. Prove that in a neighborhood of zero there exists a smooth function $y(x)$ such that:
$$Q(x+y(x))=Q(x)+G(x), y(0)=0, dy(0)=0$$ where $Q(x)$ is a nondegenerate quadratic form.
I have an idea to use $F(x,y)=Q(x+y)-Q(x)-G(x)$ and then use the theorem:
Let $r$ be the set of all smooth functions vanishing at zero. Assume that for a smooth function $F$ we can write $F(x,0)$ as $F(x,0)=\sum_{ij}\frac{\partial F}{\partial y_i}(x,0) \frac{\partial F}{\partial y_j}(x,0)\phi_{ij}(x)$ for some $\phi_{ij}\in r$. Then $F(x,y(x))=0$ has, in a neighborhood of $x=0$, a smooth solution $y(x)$ such that $y_i(x)=\sum_{j} \frac{\partial F}{\partial y_j}(x,0)z_{ij}(x)$ for some $z_{ij}\in r$.
My idea stems from the fact that $F(x,0)=Q(x)-Q(x)-G(x)=-G(x)$ but I don't really know what to do with it.
Can I ask for help?
| This is a way of deriving the OP's desired result, using a deus ex machina argument that I happened to know.
Given $G$ and a non-degenerate quadratic form $Q$ as stated, the
function $f(x)=Q(x)+G(x)$ has a non-degenerate critical point at $x=0$, with $f(0)=0$, $df(0)=0$, and $d^2f(0)=2Q$. By the Morse Lemma there exists a diffeomorphism $u$ in a neighborhood of $0$ such that $f(x)= Q(u(x))$, with $u(0)=0$. Let $\epsilon(x)=u(x)-(u(0)+du(0)x)=u(x)-du(0)x$ be the remainder of the linear Taylor approximation to $u$ at $0$. We have $\epsilon(x)=o(\|x\|)$.
By matching Taylor expansions of $f(x)$ and $Q(u(x))=Q(0+du(0)x+o(\|x\|))$ through quadratic terms, we see that $Q(du(0)v)=Q(v)$ for all $v$.
That is, the coefficient of $x_ix_j$ in the Taylor expansion of $f(x)$ is the same as that of $Q(x)$, which is $2q_{ij}$. The corresponding coefficient for $Q(u(x))$ is $2\sum_{kl} q_{kl}(\partial_i u_k)(\partial_j u_l)$. Since $f(x)=Q(u(x))$ for all $x$ close to $0$, the coefficients of $x_ix_j$ must match.
Thus, $f(x) = Q(u(x))=Q(du(0)^{-1}u(x))=Q(x+y(x))$, where $y(x)=du(0)^{-1}\epsilon(x)$.
This is the desired formula. The matching Taylor expansion verifies that $du(0)\in \mathcal O(Q)$, in the notation of this post; this step is needed because of the notational mismatch between the way the current problem is stated and the more coordinate-free way the Morse Lemma is usually stated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
if $\{f_n\}\to f$ in $L^{p_2}(E)$ then $\{f_n\}\to f$ in $L^{p_1}(E)$. Assume that $E$ has finite measure and $1 \le p_1 < p_2 \le \infty$. Show that if $\{f_n\}\to f$ in $L^{p_2}(E)$ then $\{f_n\}\to f$ in $L^{p_1}(E)$.
my attempt (assuming $p_2<\infty$): let $p=\frac{p_2}{p_1}$ and $\frac{1}{p}+\frac{1}{q}=1$, and pick $N$ such that $\|f_n-f\|_{p_2}\le \epsilon$ for all $n\ge N$. Then, by Hölder's inequality with exponents $p$ and $q$,
\begin{align}
\|f_n-f\|_{p_1}^{p_1}
& = \int_E |f_n-f|^{p_1}\cdot 1\,d\mu
\le \Bigl(\int_E |f_n-f|^{p_1 p}\,d\mu\Bigr)^{\frac{1}{p}} [\mu(E)]^{\frac{1}{q}}
= \|f_n-f\|_{p_2}^{p_1}\, [\mu(E)]^{\frac{1}{q}}
\end{align}
so, taking $p_1$-th roots (note $\frac{1}{qp_1}=\frac{1}{p_1}-\frac{1}{p_2}$),
\begin{align}
\|f_n-f\|_{p_1}
\le \|f_n-f\|_{p_2}\, [\mu(E)]^{\frac{1}{p_1}-\frac{1}{p_2}}
\le \epsilon\, [\mu(E)]^{\frac{1}{p_1}-\frac{1}{p_2}}
\end{align}
for all $n\ge N$; since $\epsilon>0$ was arbitrary, $\|f_n-f\|_{p_1}\to 0$.
| Good. But you should also take care of the case that $p_{2}=\infty$, but this is easy:
\begin{align*}
\|f_{n}-f\|_{L^{p_{1}}}^{p_{1}}=\int_{E}|f_{n}-f|^{p_{1}}\,d\mu\leq\|f_{n}-f\|_{L^{\infty}}^{p_{1}}\int_{E}1\,d\mu=\|f_{n}-f\|_{L^{\infty}}^{p_{1}}\mu(E).
\end{align*}
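A small numerical illustration of the Hölder estimate behind both arguments (the test function is my own hypothetical example, not from the post): on $E=[0,1]$ the factor $\mu(E)^{\frac{1}{p_1}-\frac{1}{p_2}}$ equals $1$, so the estimate says the $L^p$ norms are nondecreasing in $p$.

```python
def lp_norm(fn, p, n=50000):
    # midpoint-rule approximation of the L^p norm of fn on E = [0, 1]
    s = sum(abs(fn((k + 0.5)/n))**p for k in range(n))/n
    return s**(1.0/p)

h = lambda x: x*x - 0.3                     # a hypothetical test function
norms = [lp_norm(h, p) for p in (1, 2, 3, 6)]
# mu(E) = 1, so the Hölder estimate says the L^p norms increase with p
assert norms == sorted(norms)
assert norms[0] < norms[-1]
```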
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the interval of convergence for $\sum\limits_{n=1}^{\infty}\frac{x^n}{n4^n}$. I am working on this problem, but I am not exactly sure about my answer. Can you help me how to do the steps to find the interval of convergence?
My answer is $L = \left| \frac{x}{4} \right| < 1$.
| A quick formula for power series:
If $a_n=n^{b}\left(\frac{1}{a}\right)^n$ where $a,b\in\mathbb{R}\wedge a\neq0$ we have
$$\sum _{n=1}^{\infty }\:n^{b}\left(\frac{1}{a}\right)^n\left(x-c\right)^n\text{ which converges on }\left\{\begin{array}{l}
(c-a,c+a),a\in\mathbb{R}\wedge b\ge0
\\ [c-a,c+a),a>0\wedge -1\le b<0
\\ (c-a,c+a],a<0\wedge -1\le b<0
\\ [c-a,c+a],a\in\mathbb{R}\wedge b<-1\end{array}\right.$$
$\sum\limits_{n=1}^{\infty}\dfrac{x^n}{n4^n}=\sum\limits_{n=1}^{\infty}n^{-1}\left(\dfrac{1}{4}\right)^n(x-0)^n$, with $a=4>0$ and $-1\le b=-1<0$, so it converges on $[0-4,0+4)=[-4,4)$.
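The endpoints can be sanity-checked numerically (a quick sketch I added): at $x=-4$ the series is the alternating harmonic series, converging to $-\log 2$, while at $x=4$ it is the divergent harmonic series.

```python
import math

def partial(x, N):
    # partial sum of sum_{n>=1} x^n / (n 4^n), computed in floating point
    return sum((x/4)**n / n for n in range(1, N + 1))

# x = -4: the alternating harmonic series, converging to -log 2
assert abs(partial(-4, 100000) + math.log(2)) < 1e-4
# x = 4: the harmonic series, whose partial sums grow like log N
assert partial(4, 50000) > 10
```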
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3476813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to quickly yet convincingly claim that edge contractions preserve outerplanarity Let $G$ be a simple outerplanar graph with $n$ vertices. Let the vertex $v \in V(G)$ have degree 2 and be a member of a bounded face formed by a chordless cycle $C$ of more than 3 vertices.
Given the above conditions, I want to prove that another outerplanar graph $G'$ with $n-1$ vertices can always be found.
My current proof is as follows:
Because the degree of $v$ is 2, we know there are only two edges incident to $v$, namely $(v, v_0)$ and $(v, v_1)$. If we contract the edge $(v, v_0)$ by adding the edge $(v_0, v_1)$ and removing all edges incident to $v$, the resulting graph $G'$ will also be outerplanar since edge contractions preserve outerplanarity.
The part that I'm unsure about is this:
the resulting graph $G'$ will also be outerplanar since edge contractions preserve outerplanarity.
I know the type of edge contractions I'm doing here preserve outerplanarity, but that doesn't seem like an obvious enough claim to leave unsupported. Is there an accepted theorem I could cite to back this claim up? Should I include a proof related to that specific claim? Or does it seem ok to just leave that claim as it is? I'd rather not include a proof for this if there's an accepted theorem I could cite, since this is a one lemma in a fairly long proof that already has several.
| The easiest way to justify the claim is the fact that a graph is outerplanar if and only if it does not contain the graphs $K_4$ or $K_{2, 3}$ as a minor. A graph $H$ is a minor of a graph $G$ if $H$ can be obtained from $G$ by a sequence of edge contractions, edge deletions, and vertex deletions.
In case you're not allowed to cite the theorem, here's a more direct way to prove the claim. Let $G$ be an outerplanar graph. As $G$ is outerplanar every vertex is on the outer face, so the outer face must be a collection of cycles connected by paths. Without loss of generality, we assume $G$ is 2-connected, so the outer face of $G$ is a single cycle. When contracting an edge of $G$ there are two cases to consider: either the edge is on the outer face, or it is a chord. Contracting an edge on the outer face yields a new graph whose outer face is a cycle with $n - 1$ vertices, hence the resulting graph is outerplanar. Contracting a chord yields a graph whose outer face is two cycles that share one vertex in common. Hence, the resulting graph is outerplanar (although, it is no longer 2-connected).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to show that $f(x,y)$ is continuously differentiable on $\mathbb{R}^2$? I have been given the function $f(x,y)=\begin{cases} \frac{x^3y-xy^3}{x^2+y^2}, & (x,y)\neq (0,0); \\ 0, & (x,y)=(0,0).\end{cases}$
Is it enough to compute all partial derivates and then showing that they are continuous?
| Yes it should be enough to compute all partial derivates and then showing that they are continuous. But notice the following:
Your $f$ is twice partially differentiable, but
$\partial_2\partial_1f(0,0)=-1\neq 1=\partial_1\partial_2f(0,0),$
hence (according to Schwarz's theorem) the mixed partial derivatives $\partial_1\partial_2f$ and $\partial_2\partial_1 f$ cannot both be continuous.
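The unequal mixed partials at the origin can be checked with finite differences (a rough numerical sketch I added; the step sizes are ad hoc):

```python
def f(x, y):
    return (x**3*y - x*y**3)/(x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

h = 1e-6
def fx(y):                      # f_x(0, y) ≈ f(h, y)/h, since f(0, y) = 0
    return f(h, y)/h
def fy(x):                      # f_y(x, 0) ≈ f(x, h)/h, since f(x, 0) = 0
    return f(x, h)/h

k = 1e-3
d2d1 = (fx(k) - fx(0.0))/k      # ∂_2 ∂_1 f(0,0) ≈ -1
d1d2 = (fy(k) - fy(0.0))/k      # ∂_1 ∂_2 f(0,0) ≈ +1
assert abs(d2d1 + 1) < 0.01
assert abs(d1d2 - 1) < 0.01
```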
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $f(10)$ for the following conditions Let $f(x)$ be a real valued function not identically zero satisfies the equation,
$f(x + y^n) = f(x) + (f(y))^n$ for all real $x$ & $y$ and $f'(0)\ge 0$ where $n>1$ is an odd natural number. Find $f(10)$
Putting $x=0,y=0$
$$f(0)=f(0)+f(0)$$
$$f(0)=0\tag{1}$$
Putting $x=0,y=1$
$$f(0+1)=f(0)+f(1)^n$$
$$f(1)^{n-1}=1 \text { where (n-1) is even }$$
$$f(1)=\pm1\tag{2}$$
$$f'(0)\ge 0$$
$$\lim_{h\to 0}\dfrac{f(h)-f(0)}{h}\ge0$$
$$\lim_{h\to 0}\dfrac{f(h)}{h}\ge0\tag{3}$$
I was not getting anything significant from it.
Putting $y=1$
Case $1:$ $f(1)=1$
$$f(x+1)=f(x)+f(1)^n$$
$$f(x+1)=f(x)+1\tag{4}$$
Case $2:$ $f(1)=-1$
$$f(x+1)=f(x)-1\tag{5}$$
So according to equation $(4)$
$$f(10)=10$$
According to equation $(5)$
$$f(10)=-10$$
But not able to determine which one to eliminate?
| To complete your thoughts just note that by the mean value theorem $$f(10) = f(10) - f(0) = 10 \cdot f^\prime (t)$$ for some $t \in (0, 10)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find all polynomials $P(x)$ with odd degree such that $P(x^2 - 2) = P^2(x)-2$ The problem says :
Find all polynomials $P(x)$ with odd degree such that
$$P(x^2 - 2) = P^2(x)-2$$
I tried a lot if ways (using high school mathematics) but the only solution I have so far is $P(x) = x$. Can anyone solve this problem using only high school mathematics ?
PS: I have reduced the solution set to the subset of all polynomials with leading coefficient 1.
| This extended comment has the sole purpose of showing a simple Mathematica code to print the $P(x)$ polynomials up to the tenth degree that satisfy the relation $P(x^2-2) - (P(x))^2 + 2 = 0$.
P[x_] = Sum[ToExpression[StringJoin["a", ToString[n]]] x^n, {n, 0, 10}];
coeff = CoefficientList[P[x^2 - 2] - P[x]^2 + 2, x];
zeros = ConstantArray[0, Length[coeff]];
sol = Solve[coeff == zeros];
poly = Table[P[x] /. sol[[n]], {n, Length[sol]}];
TableForm[Sort[poly]]
It's clear that, for $n \ge 1$, behind all this there is a sequence function, in fact:
Q[x_] = FindSequenceFunction[Sort[poly][[3 ;; All]], n];
TraditionalForm[Q[x]]
TableForm[Table[Expand[Q[x]], {n, 10}]]
From this simple numerical experiment I deduce that the relation:
$$P(x^2-2) - (P(x))^2 + 2 = 0$$
is satisfied by:
$$
P(x) = -1
\; \; \; \vee \; \; \;
P(x) = \left(\frac{x-\sqrt{x^2-4}}{2}\right)^n + \left(\frac{x+\sqrt{x^2-4}}{2}\right)^n
$$
where it is assumed that $n \in \mathbb{Z}$.
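The closed form can be spot-checked numerically for real $x$ with $|x|\ge 2$, where the square root is real (a quick sketch I added; the point is that the two factors multiply to $1$, so the cross term in $P(x)^2$ is the constant $2$):

```python
import math

def P(x, n):
    r = math.sqrt(x*x - 4)                  # real for |x| >= 2
    return ((x - r)/2)**n + ((x + r)/2)**n

for n in (1, 3, 5, 7):                      # odd degrees, as in the question
    for x in (2.0, 2.5, 3.0, 4.7):
        lhs = P(x*x - 2, n)
        rhs = P(x, n)**2 - 2
        assert abs(lhs - rhs) < 1e-7*(abs(rhs) + 1)
```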
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Riemann integrability of a piecewise function over $[1, 7]$ I have the following function
$f(x) = \begin{cases} 2 & \text{if } 1 \leq x \leq 2, \\ 3 & \text{if } 2 < x \leq 4,\\ 1 & \text{if } 4 < x \leq 7. \end{cases}$
How do I determine if the function is Riemann integrable within $[1, 7]$?
I think I have to check if the lower and upper integrals of $f(x)$ match but I’m unsure how to approach.
| Hint
Let $\epsilon>0$ be given and consider the partition $\sigma$ defined by
$$\Bigl(1,2-\frac{\epsilon}{7},2+\frac{\epsilon}{7} ,4-\frac{\epsilon}{7}, 4+\frac{\epsilon}{7},7\Bigr)$$
then
$$U(f,\sigma)-L(\sigma,f)=$$
$$(3-2)\frac{2\epsilon}{7}+(3-1)\frac{2\epsilon}{7}=\frac{6\epsilon}{7}<\epsilon$$
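The hint's estimate can be verified numerically on that partition (a sketch I added; dense sampling stands in for the exact sup and inf of this step function):

```python
def f(x):
    if 1 <= x <= 2:
        return 2
    if 2 < x <= 4:
        return 3
    return 1

def osc_sum(pts, samples=2001):
    # sum over subintervals of (sup f - inf f) * width = U(f,sigma) - L(f,sigma),
    # with sup/inf of the step function found by dense sampling
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        vals = [f(a + (b - a)*i/(samples - 1)) for i in range(samples)]
        total += (max(vals) - min(vals))*(b - a)
    return total

eps = 0.01
sigma = [1, 2 - eps/7, 2 + eps/7, 4 - eps/7, 4 + eps/7, 7]
gap = osc_sum(sigma)
assert abs(gap - 6*eps/7) < 1e-9   # exactly the hint's 6*eps/7
assert gap < eps
```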
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to show $\langle T(x),y\rangle=\langle x,S(y)\rangle$ for all $x, y$ implies, S is the adjoint operator? Suppose that $H$ is a Hilbert space and $T$ and $S$ are two functions from $H$ to $H$.
If
$$
\langle T(x),y\rangle=\langle x,S(y)\rangle
$$
for all $x, y \in H$, show that $T$ and $S$ are continuous linear operators with $S=T^*$. The last equality means that $S$ is the adjoint (Hermitian) operator of $T$.
First we need to show $T$ and $S$ are linear operators, but we do not have access to an expression representing them in terms of $x$. Also, we need to show they are bounded to conclude they are continuous.
Note: this is not $\mathbb{R}^n$, is a Hilbert space.
| Hint: Linearity follows directly from the identity: for all $z$ we have $\langle T(ax_1+bx_2)-aTx_1-bTx_2 , z \rangle =\langle ax_1+bx_2 , Sz \rangle -a\langle x_1 , Sz \rangle -b\langle x_2 , Sz \rangle =0$, so $T(ax_1+bx_2)=aTx_1+bTx_2$ (and similarly for $S$). The Closed Graph Theorem then easily gives continuity of $T$ and $S$. By definition of $T^{*}$ we get $\langle x , T^{*}y \rangle =\langle x , Sy \rangle $ for all $x$ and $y$. Put $x=T^{*} y-Sy$ to see that $\|T^{*} y-Sy\|^{2}=0$ which gives $T^{*} y=Sy$ for all $y$.
Details for continuity: let $x_n \to x$ and $Tx_n \to z$. Then $\langle Tx_n , y \rangle =\langle x_n , Sy \rangle$ and letting $n \to \infty$ we get $\langle z , y \rangle =\langle x , Sy \rangle =\langle Tx , y \rangle$. This is true for all $y$ so $Tx=z$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3477882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Calculating the line integral for the circle and the square
Consider the region $S$ bounded between the square with corners at the points (4,4),(-4,4),(-4,-4) and (4,-4) (oriented counterclockwise), and the circle of radius 1 centered at (-1,0) (oriented clockwise) and
$$
F(x,y)=\left(\frac{-y}{(x+1)^2+y^2}, \frac{x+1}{(x+1)^2+y^2}\right)
$$
and calculate $$\int_{\partial S} F\cdot dr$$
(Hint for calculating the line integral: Use the definition $\tan^{-1} a + \tan^{-1} a^{-1} = \frac{\pi}{2}$.
Let $P(x,y)=\frac{-y}{(x+1)^2+y^2}$ and $Q(x,y)=\frac{x+1}{(x+1)^2+y^2}$ I can't use the Green's Theorem because there is a singularity at the point $(−1,0)$ in $P$ and $Q$ .
So I want to calculate the line integral for the circle and the square
In the image I represented the curves to be integrated with their respective orientations but when I calculated the line integral of the circle I obtained that it diverges so I don't know how to continue the exercise. I leave below how to calculate this integral:
$$
\begin{split}
I &= \int_{ds} F\cdot dr \\
&= \int_{0}^{2 \pi} \frac{-\sin t (-\sin t) dt}
{(\cos t+1+1)^2+\sin^2 t}
+ \frac{(\cos t+1+1)\cos t dt}{(\cos t+1+1)^2+\sin^2 t} \\
&= \left[\frac{-1}{\sin t} - \tan^2 t +t\right]_{0}^{2 \pi}
\to \infty
\end{split}
$$
At this point I don't know how to solve the exercise in any other way so help would be appreciated! :)
| It looks like you are making a mistake when calculating the integral for the circle. You have
$$
x=-1+\cos t,\ \ \ y=-\sin t
$$
(where the minus sign accounts for the clockwise direction of the curve). Then $(x+1)^2+y^2=\cos^2t+\sin^2t=1$, and
$$
\int_{\text{circle}}F\cdot dr=\int_0^{2\pi} \left(\sin t,\cos t \right)\cdot(-\sin t,-\cos t)\,dt=\int_0^{2\pi}(-1)\,dt=-2\pi.
$$
Now you can apply Green to get (using that $\partial Q/\partial x - \partial P/\partial y = 0$ away from $(-1,0)$, so the integrand in Green's theorem vanishes on $S$) that
$$
\int_\text{square}F\cdot dr= -\int_{\text{circle}}F\cdot dr=2\pi.
$$
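A numerical sanity check of the circle integral, using the midpoint rule on the clockwise parametrization above (a quick sketch):

```python
import math

# Numerical check of the circle integral with the clockwise parametrization
# x = -1 + cos t, y = -sin t, t in [0, 2*pi].
def F(x, y):
    d = (x + 1) ** 2 + y ** 2
    return (-y / d, (x + 1) / d)

N = 100_000
total = 0.0
for k in range(N):
    t = (k + 0.5) * 2 * math.pi / N           # midpoint rule
    x, y = -1 + math.cos(t), -math.sin(t)
    dx, dy = -math.sin(t), -math.cos(t)       # (x'(t), y'(t))
    Fx, Fy = F(x, y)
    total += (Fx * dx + Fy * dy) * 2 * math.pi / N

print(total)   # close to -2*pi; the square integral is then +2*pi by Green
```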
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the logical reason to use a proof by contradiction Given an arbitrary set $A$ of real numbers, I want to decide whether the set $A$ is infinite.
My question is: What is the logical reason to use a proof by contradiction.
I can think, for example, that there is no known method to prove this directly, but I am not convinced by this reason.
| The reason most people do proof by contradiction isn't really logical.
When you do a direct proof, you have $n$ assumptions and $1$ predetermined statement to prove.
When you do proof by contradiction, you have $n+1$ assumptions and you succeed when you prove any false or contradictory statement.
Some people find the second scenario more freeing. If you are interested in the subject you are proving about, it is worthwhile to unwind your proof by contradiction to see what a direct proof would look like.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 4
} |
Prove if $u,v,w$ linearly independent then ${\{u+v+w,v-w,2w}\}$ linearly independent
Prove if $u,v,w$ linearly independent then ${\{u+v+w,v-w,2w}\}$
linearly independent
What I did is:
I need to Prove that for $x,y,z \in F$
$x(u+v+w) + y(v-w) + z(2w) = 0 $
implies that $x=y=z=0$
$x(u+v+w) + y(v-w) + z(2w) = xu + xv + xw + yv - yw + 2zw = $
$= (x)u + (x+y)v + (x-y+2z)w =0$
$ u,v,w $ linearly independent $\rightarrow x=0 $ and $x+y=0$ and $(x-y+2z)=0$ $\rightarrow x=y=z=0$
Is that enough to prove the claim?
Thanks
| The systematic way is to consider matrices:
$$
\begin{pmatrix} u' \\ v' \\ w' \end{pmatrix}
=
\begin{pmatrix} 1 & 1 & \hphantom{-}1 \\ 0 & 1 & -1 \\ 0 & 0 & \hphantom{-}2 \end{pmatrix}
\begin{pmatrix} u \\ v \\ w \end{pmatrix}
$$
The matrix is triangular with nonzero diagonal entries. Therefore, it is invertible.
Thus, the subspace generated by $u',v',w'$ is the same as the subspace generated by $u,v,w$.
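A quick numerical check that the matrix above is invertible (its determinant is the product of the diagonal entries):

```python
# The change-of-basis matrix from (u, v, w) to (u+v+w, v-w, 2w):
M = [[1, 1, 1],
     [0, 1, -1],
     [0, 0, 2]]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # 2 = product of the diagonal entries, so M is invertible
```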
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
show the functions $\cos^2(x)$ and $\sin^2(x)$ belong to $\operatorname{Trig}_2(\mathbb{R})$ and find their coordinates for the basis In the next question it can be used without proof that the family
$(1,\cos(x),\sin(x),\cos(2x),\sin(2x))$ is linearly independent and thus a basis for $\operatorname{Trig}_2(\mathbb{R})$.
b) Show that the functions $\cos^2(x)$ and $\sin^2(x)$ belong to $\operatorname{Trig}_2(\mathbb{R})$ and determine their coordinates with respect to this basis.
I know $\cos(2x) = \cos^2(x)- \sin^2(x)$, but what about $\sin(2x)= 2\cos(x)\sin(x)$?
| $ \cos (2x)= \cos^2 x- \sin^2x = \cos^2 x-(1- \cos^2 x)= 2 \cos^2 x-1,$ hence
$$ \cos^2 x= \frac{1}{2}( \cos(2x)+1).$$
Can you proceed?
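The resulting coordinates can be checked numerically. A quick sketch, where the coordinate vectors encode $\cos^2 x = \tfrac12(1+\cos 2x)$ and $\sin^2 x = \tfrac12(1-\cos 2x)$:

```python
import math

# Coordinates in the basis (1, cos x, sin x, cos 2x, sin 2x):
cos2_coords = (0.5, 0.0, 0.0, 0.5, 0.0)   # cos^2 x = 1/2 + (1/2) cos 2x
sin2_coords = (0.5, 0.0, 0.0, -0.5, 0.0)  # sin^2 x = 1/2 - (1/2) cos 2x

def basis(x):
    return (1.0, math.cos(x), math.sin(x), math.cos(2 * x), math.sin(2 * x))

for x in [0.0, 0.3, 1.7, -2.5]:
    b = basis(x)
    assert abs(sum(c * e for c, e in zip(cos2_coords, b)) - math.cos(x) ** 2) < 1e-12
    assert abs(sum(c * e for c, e in zip(sin2_coords, b)) - math.sin(x) ** 2) < 1e-12
```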
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding a min on $\sqrt{f(x)}$ is equal to min on $f(x)$ During a discussion about RMS, one said that finding the min of a function or of its square root is the same because the square root is monotonically increasing.
Does this make any sense?
| For a differentiable function, interior minima occur at points where the derivative of the function is equal to zero on a given interval. If we can show that the sign of the derivative of any $f(x)$ is equal to the sign of the derivative of $\sqrt{f(x)}$, then we can show that the minimums of $\sqrt{f(x)}$ exist exactly at the minimums of $f(x)$.
Case 1:
Assume $f'(x) > 0$.
$\frac{d}{dx}\sqrt{f(x)}$ by the chain rule is equal to $\frac{1}{2\sqrt{f(x)}} * f'(x)$. We know the first term must be greater than zero since the square root cannot be negative. Thus $\frac{d}{dx}\sqrt{f(x)} > 0$
Case 2:
Assume $f'(x) < 0$
$\frac{d}{dx}\sqrt{f(x)} = \frac{1}{2\sqrt{f(x)}} * f'(x)$ Again, the first term is positive, and the second term is assumed to be negative, yielding a negative derivative.
$\frac{d}{dx}\sqrt{f(x)} < 0$
Case 3:
Assume $f'(x) = 0$
$\frac{d}{dx}\sqrt{f(x)} = \frac{1}{2\sqrt{f(x)}} * f'(x)$
Thus $\frac{d}{dx}\sqrt{f(x)} = 0$
For the last case, and the case we are after, if f'(x) = 0, the location of all possible minimums, $\frac{d}{dx}\sqrt{f(x)}$ also equals zero. Technically, we have shown that the critical points must exist at the same location, but since we have shown also that they increase and decrease over the exact same intervals, the nature of the critical points must also match. Thus we have shown that if a function is continuous, differentiable, and strictly positive, the minimums of its square root match the minimums of the function.
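A small numerical illustration of the conclusion, using a made-up strictly positive function on a grid:

```python
import math

# Compare the minimizer of a strictly positive f with that of sqrt(f)
# on a grid (f here is a hypothetical sample function).
def f(x):
    return (x - 1.0) ** 2 + 1.0

xs = [i / 1000.0 for i in range(-2000, 4001)]   # grid on [-2, 4]
argmin_f = min(xs, key=f)
argmin_sqrt_f = min(xs, key=lambda x: math.sqrt(f(x)))

# Both objectives attain their minimum at the same point x = 1.
assert argmin_f == argmin_sqrt_f == 1.0
```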
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Set of finite signed measures with the weak*-topology is a topological vector space For a compact space $X$ I want to show that the space of finite signed measures on the Borel-$\sigma$-algebra on $X$, equipped with the weak*-topology, i.e. the coarsest topology such that
$\mu\mapsto\int f~\mathrm{d}\mu~$ is continuous for all $f\in C_b(X)$ is a topological vector space.
Therefore I think I need to show that the maps $(\mu,\nu)\mapsto\mu+\nu$ and $(c,\mu)\mapsto c\mu$ are continuous.
I tried to do it with the formal definition of continuity in topological spaces, i.e. I wanted to show that $\{(\mu,\nu):\mu+\nu\in U\}$ and $\{(c,\mu):c\mu\in U\}$ are open in the topology for open $U$ but I have no idea how to do that.
Can someone give me a reference or a hint on how to do that?
| Perhaps you should do it by nets:
Assume that $(\mu_{\alpha},\nu_{\alpha})\rightarrow(\mu,\nu)$ weak$^{\ast}$, then $\displaystyle\int fd(\mu_{\alpha}+\nu_{\alpha})=\int fd\mu_{\alpha}+\int fd\nu_{\alpha}\rightarrow\int fd\mu+\int fd\nu=\int fd(\mu+\nu)$, so $\mu_{\alpha}+\nu_{\alpha}\rightarrow\mu+\nu$ weak$^{\ast}$. For the scalar multiplication is similar.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Limit as $y\to x$ of $(\sin y-\sin x)/(y-x)$ without L’Hospital $$\lim_{y\rightarrow x}\frac{\sin(y)-\sin(x)}{y-x}$$
Is there any cool trig identity I could use to solve this? We don't have L’Hospital yet, so I have to calculate it otherwise. I tried solving this using the series expansion of sine:
$$\cdots =\lim_{y\rightarrow x}\frac{\left(\sum_{k=0}^\infty (-1)^k \dfrac{y^{2k+1}}{(2k+1)!}\right) -\left(\sum_{k=0}^\infty (-1)^k \dfrac{x^{2k+1}}{(2k+1)!}\right)}{y-x}$$
But what now? With L’Hospital I get $\cos(x)$ as a solution. Differentiation isn't allowed either.
| In the comments, someone pointed out that you can show that $\lim_{y\to x}\dfrac{\sin(y)-\sin(x)}{y-x}=\cos(x)$ if we know that:
$$\lim_{\theta\to0}\dfrac{\sin(\theta)}{\theta}=1,\quad(1)$$
$$\lim_{\theta\to0}\dfrac{\cos(\theta)-1}{\theta}=0.\quad(2)$$
Here is a link to that argument: Solving a limit given a limit
I will discuss how we can show the above limits without using L'Hôpital's rule.
To prove (1), first we use some geometry to prove that for any $\theta$ with $0<\theta<\frac{\pi}{2}$, we have that
$$0<\cos(\theta)<\dfrac{\sin(\theta)}{\theta}<\dfrac{1}{\cos(\theta)}.\quad(3)$$
I will discuss how to prove this below. But first let me show how we can use this to evaluate the limits above. Note that the three functions in (3) above are all even. So the above inequality holds for all non-zero $\theta$ with $-\frac{\pi}{2}<\theta<\frac{\pi}{2}$. It follows from the squeeze theorem that:
$$\lim_{\theta\to0}\dfrac{1}{\cos(\theta)}=\lim_{\theta\to0}\dfrac{\sin(\theta)}{\theta}=\lim_{\theta\to0}\cos(\theta)=1.$$
To prove (2), note that
$\begin{align*}
\lim_{\theta\to0}\dfrac{\cos(\theta)-1}{\theta} &=\lim_{\theta\to0}\dfrac{(\cos(\theta)-1)}{\theta}\cdot\dfrac{(\cos(\theta)+1)}{(\cos(\theta)+1)} \\
&=\lim_{\theta\to0}\dfrac{\cos^2(\theta)-1}{\theta\cdot(\cos(\theta)+1)} \\
&=\lim_{\theta\to0}\dfrac{-\sin^2(\theta)}{\theta\cdot(\cos(\theta)+1)} \\
&=\lim_{\theta\to0}-\sin(\theta)\cdot\dfrac{\sin(\theta)}{\theta}\cdot\dfrac{1}{(\cos(\theta)+1)} \\
&=0.
\end{align*}$
To prove (3), let $0<\theta<\frac{\pi}{2}$, and let $A=(0,0)$, let $B=(\cos(\theta),0)$, let $C=(\cos(\theta),\sin(\theta))$, let $X=(1,0)$ and let $Y=(1,\tan(\theta))$. I apologize for not having a picture to go along with these definitions, but you can probably find this exact picture in any calculus textbook.
Now comparing the areas of triangle $ABC$, sector $AXC$, and triangle $AXY$, we have that
$$0<\frac{1}{2}\cos(\theta)\sin(\theta)<\frac{1}{2}\theta<\frac{1}{2}\tan(\theta).$$
Inverting this inequality gives
$$0<\dfrac{2\cos(\theta)}{\sin(\theta)}<\dfrac{2}{\theta}<\dfrac{2}{\cos(\theta)\sin(\theta)}.$$
If we multiply these inequalities by $\frac{1}{2}\sin(\theta)$ then we obtain (3).
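A quick numerical illustration that the difference quotient indeed approaches $\cos(x)$:

```python
import math

# The difference quotient (sin y - sin x)/(y - x) approaches cos x as y -> x.
x = 0.7
for h in [1e-2, 1e-4, 1e-6]:
    q = (math.sin(x + h) - math.sin(x)) / h
    print(h, q, abs(q - math.cos(x)))   # error shrinks roughly like h/2
```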
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Inverse of a multiplication operator Given an operator $M$: $L^2([0,1])\rightarrow L ^2([0,1])$:
$$M(f)(x) = x^2f(x) $$
I am trying to show if $(I + M)$ is invertible and what $|| (I+M)^{-1} ||$ is.
I am aware of the theorem which says if $||M|| < 1$ then $I-M$ and hence $I+M$ is invertible and allows computation of $|| (I+M)^{-1} ||$. But here since $||M|| = 1$, I am stuck about how to proceed. Any help is greatly appreciated.
| Let $P=I+M$ and $Qf=\dfrac{1}{1+x^{2}}\cdot f(x)$; it is routine to check that $PQf=f$ and $QP f=f$, so $P$ is invertible with $P^{-1}=Q$.
We also note that
\begin{align*}
\|Qf\|_{L^{2}}^{2}=\int_{0}^{1}\dfrac{1}{(1+x^{2})^{2}}|f(x)|^{2}\,dx\leq\int_{0}^{1}|f(x)|^{2}\,dx=\|f\|_{L^{2}}^{2},
\end{align*}
so $\|Q\|\leq 1$.
Now we let $f(x)=\chi_{[0,1/n]}(x)$, then $\|f\|_{L^{2}}^{2}=\dfrac{1}{n}$ and that
\begin{align*}
\|Qf\|_{L^{2}}^{2}=\int_{0}^{1/n}\dfrac{1}{(1+x^{2})^{2}}dx\geq\dfrac{1}{(1+(1/n)^{2})^{2}}\cdot\dfrac{1}{n}=\dfrac{1}{(1+(1/n)^{2})^{2}}\cdot\|f\|_{L^{2}}^{2},
\end{align*}
so
\begin{align*}
\|Q\|\geq\dfrac{1}{1+(1/n)^{2}},
\end{align*}
taking $n\rightarrow\infty$, we get $\|Q\|\geq 1$, we conclude that $\|Q\|=1$.
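A numerical illustration of the last estimate: midpoint-rule integration of $\|Qf\|^2/\|f\|^2$ for $f=\chi_{[0,1/n]}$ shows the ratio climbing toward $1$ (a quick sketch):

```python
# Ratio ||Qf||^2 / ||f||^2 for f = indicator of [0, 1/n]; it approaches 1
# as n grows, matching ||Q|| = 1.
def ratio(n, steps=10_000):
    h = (1.0 / n) / steps
    # midpoint rule for the integral of 1/(1+x^2)^2 over [0, 1/n]
    integral = sum(h / (1.0 + ((k + 0.5) * h) ** 2) ** 2 for k in range(steps))
    return integral * n   # divide by ||f||^2 = 1/n

print(ratio(1), ratio(10), ratio(100))  # increases toward 1
```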
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3478925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Decomposing $SO_3$
(Artin Algebra, 9.4.9)Let $H_i$ be the subgroup of $SO_3$ of rotations about the $x_i$-axis, $i=1,2,3$. Prove that every element of $SO_3$ can be written as a product $ABA'$, where $A$ and $A'$ are in $H_1$ and $B$ is in $H_2$. Prove that this representations is unique unless $B=I$.
I know that $A$ and $B$ generate $SO_3$, but I cannot show that we can do it in the way $ABA'$. I think $A'$ here denotes the transpose of $A$ and since $A$ is orthogonal, then $A'=A^{-1}$. So it seems like we can somehow "change of basis" but I am lack of geometric intuition to visualize how to do this. Can someone help me with this? Thank you!
| For $A \in H_1$, it has the form
$$
\begin{pmatrix}
1 & 0 & 0 \\
0 & c_\theta & s_\theta \\
0 & -s_\theta & c_\theta \\
\end{pmatrix}
$$
For any $G \in SO_3$:
$$
\begin{pmatrix}
g_{11} & g_{12} & g_{13} \\
g_{21} & g_{22} & g_{23} \\
g_{31} & g_{32} & g_{33} \\
\end{pmatrix}
$$
Assume that $G$ is not in $H_1$ (otherwise $G$ itself gives a representation with $B = I$), so that $g_{11}$ is not equal to $1$.
We can find $A_1$ and $A_2$ in $H_1$ that rotate so that $g'_{21} = g'_{12} = 0$ in
$B = A_1GA_2$:
$$
\begin{pmatrix}
g'_{11} & 0 & g'_{13} \\
0 & g'_{22} & g'_{23} \\
g'_{31} & g'_{32} & g'_{33} \\
\end{pmatrix}
$$
As $g'_{11} \neq 1$, we have $g'_{13} \neq 0$ and $g'_{31} \neq 0$.
Using the fact that $B \in SO_3$, i.e. $BB^T = I$, we can see $g'_{32} = 0$, $g'_{22} = 1$, $g'_{23} = 0$.
This shows $B \in H_2$.
So $G = A_1^{-1}BA_2^{-1}$, where $A_1^{-1}, A_2^{-1} \in H_1$ as well.
For uniqueness, assume $G = A_1BA_2 = A'_1B'A'_2$, so
${A'}_1^{-1}A_1B = B'A'_2A_2^{-1}$; this shows that non-uniqueness forces $B = I$.
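As a numerical sanity check of this decomposition, note that for $A_1, A_2 \in H_1$ the $(1,1)$ entry of $A_1BA_2$ equals the cosine of the $H_2$ angle, which is how the middle factor is pinned down. A sketch in Python (one common sign convention for the rotation matrices; the angles are arbitrary samples):

```python
import math

# Rotations about the x1- and x2-axes (one common sign convention).
def Rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

alpha, beta, gamma = 0.4, 1.1, -0.8        # sample angles
G = matmul(Rx(alpha), matmul(Ry(beta), Rx(gamma)))

# The middle factor is recoverable from the (1,1) entry: g_11 = cos(beta),
# because the first row of each Rx factor is (1, 0, 0).
assert abs(G[0][0] - math.cos(beta)) < 1e-12
```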
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A polynomial $p(x,y)$ that is never $0$ if $y\neq 0$ I would like to find the best proof of the following fact:
If a polynomial $p\in k[x,y]$ (where $k$ is an algebraically closed field) is such that $p(a,b)\neq 0$ on the set $\{(a,b):b\neq 0\}$, then in fact $p\in k[y]$ (that is, $p$ does not depend on $x$). I do know how to prove this using an argument with a Vandermonde matrix, but I feel that there should be a more direct way to do it.
| As an amusing complement to anomaly's excellent answer, notice that a polynomial $p(x,y)=a_0(y)$ that is nonzero for $y\neq0$ is necessarily of the form $p(x,y)=cy^n$ where $c\in k^*$ and $n\in \mathbb N$.
Notice also that the theorem is false for non-algebraically closed fields, as witnessed by the polynomial $x^2+1\in \mathbb R[x,y]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Concluding critical points of function are not in the domain for $f(x,y) = x^{3} - x +y^2 - 2y$ My question has to do more with determining if the points are in the domain or not. So I am asked to find the extrema of the function $f(x,y) = x^{3} - x +y^2 - 2y$ over the closed triangular region $(-1,0), (1,0), (0,2)$.
Taking the necessary derivatives I find the following "critical points"
$$\left(\pm \sqrt{\frac{1}{3}},1\right)$$
Now a solution I have says "it is easy to conclude that the two points are not in the domain". My question is how is this conclusion drawn?
One way I wanted to know if it was correct was if I took the equation of the line between $(-1,0)$ and $(0,2)$ which is $y = 2x + 2$. If I plug my $x$ value from my critical point I don't get the corresponding $y$ value (in this case 1). Would that be the way to verify the points are not in the domain or is there another method to go about things?
| What you have is fine.
We can say that the triangle is bounded by the lines
$y = 2x + 2, y = -2x + 2, x = 0$
or the region is below the line
$y = \begin {cases} 2x + 2 & x\le 0\\ -2x + 2 & x>0\end{cases}$
Plugging in the points $x = \pm \sqrt {\frac 13} \approx \pm\frac {4}{7}$:
in fact $\frac {4}{7}$ is slightly less than $\sqrt {\frac 13}$, and since $y$ is decreasing there,
$y(\sqrt {\frac 13}) < y(\frac 47) = \frac 67 < 1 $
The line is below the point $(\sqrt {\frac 13}, 1)$
Once you have found that there are no critical points inside the region, we conclude that the extrema lie on the boundary.
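A quick numerical check that the gradient vanishes at $(\sqrt{1/3}, 1)$ and that this point lies above the edge $y = -2x + 2$, hence outside the triangle:

```python
import math

# The interior critical points of f(x,y) = x^3 - x + y^2 - 2y are
# (+-sqrt(1/3), 1): check the gradient there, and that the point with
# positive x lies above the edge y = -2x + 2 (the symmetric check with
# y = 2x + 2 handles the negative-x point).
x, y = math.sqrt(1 / 3), 1.0
fx = 3 * x ** 2 - 1          # df/dx
fy = 2 * y - 2               # df/dy
assert abs(fx) < 1e-12 and fy == 0.0

edge_y = -2 * x + 2          # boundary height at this x
print(edge_y)                # about 0.845 < 1, so (sqrt(1/3), 1) is outside
assert edge_y < y
```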
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this proof of $\mathcal{L}\{\sin{at}\}=\frac{a}{s^2+a^2}$ valid? I found this proof of $\mathcal{L}\left\{\sin{at}\right\}=\frac{a}{s^2+a^2}$ on Proof Wiki:
This proof is way easier than others since it uses the linearity of the Laplace transform. However, I am confused by the author's use of $\operatorname{Im}$. Isn't $\frac{a}{s^2+a^2}$ a complex number if $\Im s\neq0$? It seems like the author treats $s$ as a real number. Can we do a similar thing for complex $s$?
| Inspired by the above, the conclusion can be proved in another way.
We know that
$$
{\cal L}\left\{ {{e^{iat}}} \right\} = \frac{1}{{s - ia}}
$$
and
$$
{\cal L}\left\{ {{e^{ - iat}}} \right\} = \frac{1}{{s + ia}}.
$$
With the help of Euler's Formula, $\sin at$ can be written as
$$
\sin at = \frac{1}{{2i}}\left( {{e^{iat}} - {e^{ - iat}}} \right),
$$
so
$$
{\cal L}\left\{ {\sin at} \right\} = \frac{1}{{2i}}{\cal L}\left\{ {{e^{iat}}} \right\} - \frac{1}{{2i}}{\cal L}\left\{ {{e^{ - iat}}} \right\} = \frac{1}{{2i}}\left( {\frac{{s + ia}}{{{s^2} + {a^2}}} - \frac{{s - ia}}{{{s^2} + {a^2}}}} \right) = \frac{a}{{{s^2} + {a^2}}}
$$
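For real $s > 0$ the formula can also be checked by direct numerical integration of $\int_0^\infty e^{-st}\sin(at)\,dt$ (a sketch with sample values $s=2$, $a=3$):

```python
import math

# Numerically check L{sin(at)}(s) = a/(s^2 + a^2) for real s > 0.
s, a = 2.0, 3.0
T, N = 30.0, 300_000                     # truncate the integral at T
h = T / N
integral = sum(math.exp(-s * (k + 0.5) * h) * math.sin(a * (k + 0.5) * h) * h
               for k in range(N))        # midpoint rule
print(integral, a / (s ** 2 + a ** 2))   # both close to 3/13
```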
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are Banach norms Fréchet differentiable? Suppose $(V, \|\cdot\|_V)$ and $(W, \|\cdot\|_W)$ are two Banach spaces and $f: V \to W$ is some function. We call a bounded linear operator $A \in B(V, W)$ Fréchet derivative of $f$ in $x \in V$ iff
$$\lim_{h \to 0} \frac{\|f(x + h) - f(x) - Ah\|_W}{\|h\|_V} = 0$$
We call a $f$ Fréchet differentiable in $x$ iff there exists a Fréchet derivative of $f$ in $x$.
My question is:
Suppose $(V, \|\cdot\|_V)$ is a Banach space. $f: V \to \mathbb{R}, v \mapsto \|v\|_V$. Is it true, that $f$ is Fréchet differentiable $\forall x \in V \setminus \{0\}$?
This statement is indeed true in the specific case when $V$ is a real Hilbert space.
Proof:
One can manually check that $h \mapsto \frac{h}{2\sqrt{x_0}}$ is a Fréchet derivative for $x \mapsto \sqrt{x}$ at $x_0 > 0$. One can also manually check that $h \mapsto 2\langle v, h \rangle_V$ is a Fréchet derivative for $x \mapsto \langle x, x \rangle_V$ at every $v \in V$. And it is a well-known fact that the composition of Fréchet derivatives of two functions is a Fréchet derivative of their composition. Thus, as $\|v\|_V = \sqrt{\langle v, v \rangle_V}$, we have that $h \mapsto \frac{\langle v, h \rangle_V}{\|v\|_V}$ is a Fréchet derivative of $v \mapsto \|v\|_V$ at all $v \in V \setminus \{0\}$.
| No this is not always true. Take $(V, \Vert \cdot \Vert)=(\mathbb R^2, \sup (\vert x \vert, \vert y \vert))$. The norm is not Fréchet differentiable when $x = \pm y$.
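A numerical illustration: at $x=(1,1)$, along $h=(1,-1)$ the one-sided difference quotients of the sup norm disagree, so no Fréchet derivative can exist there:

```python
# At x = (1, 1) the sup norm has no Frechet derivative: along h = (1, -1)
# the one-sided difference quotients disagree (+1 from the right, -1 from
# the left), since ||(1+t, 1-t)||_inf = 1 + |t|.
def sup_norm(v):
    return max(abs(v[0]), abs(v[1]))

x, h = (1.0, 1.0), (1.0, -1.0)

def quotient(t):
    return (sup_norm((x[0] + t * h[0], x[1] + t * h[1])) - sup_norm(x)) / t

print(quotient(1e-6), quotient(-1e-6))   # close to +1.0 and -1.0
```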
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Calculate sum of series $\sum_{n=1}^\infty \frac{x+a}{n(x+a) + n^2}$. I've been stuck calculating the sum of the series in the following problem. Can you help me?
$$\sum_{n=1}^\infty \frac{x+a}{n(x+a) + n^2}$$
for real numbers $a\geq 0$ and $x\geq 1$.
| After @URL's comment, using
$$S_p=\sum_{n=1}^p\frac{x+a}{n(x+a)+n^2}=\sum_{n=1}^p\frac1n-\sum_{n=1}^p\frac1{n+(x+a)}$$ and using generalized harmonic numbers, we have
$$S_p=H_{a+x}+H_p-H_{a+x+p}$$
Now, using the asymptotics
$$H_q=\gamma +\log \left({q}\right)+\frac{1}{2 q}-\frac{1}{12 q^2}+O\left(\frac{1}{q^3}\right)$$ and using it for the second and third terms, we end with
$$S_p=H_{a+x}-\frac{a+x}{p}+O\left(\frac{1}{p^2}\right)$$
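A quick numerical check of the limit $H_{a+x}$ in an integer case (a sketch with $a+x=3$, so the sum should be $H_3 = 11/6$):

```python
# Check the limit against an integer case: a + x = 3 gives
# sum_{n>=1} 3/(n(n+3)) = H_3 = 1 + 1/2 + 1/3 = 11/6.
c = 3                # c = a + x, e.g. a = 1, x = 2
p = 1_000_000
S_p = sum(c / (n * c + n * n) for n in range(1, p + 1))
print(S_p)           # close to 11/6, with error of order c/p
```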
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
let $d = \gcd(m,n), m, n > 0$ Bézout gives $d = mx + ny, x, y \in \mathbb{Z}$ prove that.. ... prove that it is always possible to choose $x < 0$.
I did $m = qn + r$ and $\gcd(m,n) = \gcd(n, \operatorname{rem}(m, n)) = \gcd(n, r)$
But I do not know where to go from here.
| If we suppose $x, y$ are given and $x>0$, we know the other solutions in integers of the equation $\;mX+nY=d\;$ are given by
$$X=x-kn,\quad Y=y+km\qquad( k\in \mathbf Z),$$
so choose $k$ so large as to ensure that $x'=x-kn<0$.
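A small sketch of this shifting argument in Python (the extended-Euclid helper and the sample values $m=462$, $n=1071$ are for illustration):

```python
import math

def ext_gcd(m, n):
    # returns (d, x, y) with m*x + n*y = d = gcd(m, n)
    if n == 0:
        return m, 1, 0
    d, x1, y1 = ext_gcd(n, m % n)
    return d, y1, x1 - (m // n) * y1

m, n = 462, 1071                 # sample inputs with gcd 21
d, x, y = ext_gcd(m, n)
assert m * x + n * y == d == math.gcd(m, n)

# (x - k*n, y + k*m) is also a solution; pick k large enough that x' < 0.
k = x // n + 1                   # any k with k*n > x works
xp, yp = x - k * n, y + k * m
assert m * xp + n * yp == d and xp < 0
print(d, xp, yp)
```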
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3479858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Basic binary operation on set Hey please help me answer this question:
Given a set $A$ with at least 2 elements, on which the binary operation $*$ is defined in the following manner:
for every $a,b\in A, a*b=b$.
Check if the binary operation * is commutative, associative and idempotent.
| Say $A$ has at least two different elements, name them $5$ and $2$. Then $$5*2 = 2\ne 5 = 2*5$$
so it is not commutative.
Since we have also:
$$a*(b*c) = a*c = c$$ and $$(a*b)*c = b*c=c$$
we see it is associative. Also we have $b*b = b$ so ...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Are there uncountably many disjoint uncountable real null sets? It is easy to think of a countable family of disjoint Cantor sets, and their union is of course a null set.
It is equally trivial to define an uncountable family of Cantor sets, but how can it be ensured that they are pairwise disjoint?
Would their union be a null set?
| First do it for $C=\{0,1\}^{\mathbb{N}}$. For every $b=(b_n)_n\in \{0,1\}^{\mathbb{N}}$ let $C_b=\{(a_n)\ \mid\ a_{2n} = b_n \textrm{ for all } n\}$. Clearly the $(C_b)_b$ form a partition of $C$ into compact subsets of measure $0$.
Now $[0,1]$ is measure equivalent to $C$ under the map
$a\mapsto s(a) = \sum a_n/2^{n+1}$. We can eliminate the countable set where $s^{-1}$ is ambiguous (the binary fractions) by considering the images $s(C_b)$, where $b$ is a sequence that is not eventually constant. We get in this way an uncountable family $(s(C_b))_b$ of compact disjoint subsets of $[0,1]$ of measure $0$. The complement of their union in $[0,1]$ is again uncountable and of measure $0$.
Note that for $b$ not an eventually constant sequence we can describe $s(C_b)$ to be the set of those real numbers in $[0,1]$ such that their binary expansion $\sum a_n/2^{n+1}$ has $a_{2n}= b_n$ ( prescribed digits at odd positions after the binary dot).
While at it, it should be clear how to partition each $s(C_b)$ ($b$ not eventually constant) into an uncountable family of Cantor subsets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3480129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |