Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Simple permutations: approach and proof explanation for an exercise that is not very clear The sets $A = \{a_1, a_2, a_3\}$, $B = \{b_1, b_2, b_3\}$ and $C = \{c_1, c_2\}$ are given.
How many ordered sequences of $5$ distinct elements contain $2$ elements of $A$, two elements of $B$, and one of $C$?
Related question: Is there use of the numbers of Stirling and of Bell for a combinatorial exercise? (asked to help a student).
Surely there are simple arrangements without repetition, of $3$ elements taken $2$ at a time, that allow me to order the chosen elements of each set:
$$\text{For the set $A$:}\quad D_{3,2}(A)=\frac{n!}{(n-k)!}=\frac{3!}{1!}=6$$
$$\text{For the set $B$:}\quad D_{3,2}(B)=\frac{n!}{(n-k)!}=\frac{3!}{1!}=6$$
$$\text{For the set $C$:}\quad D_{2,1}(C)=\frac{n!}{(n-k)!}=\frac{2!}{1!}=2$$
Hence, if $N$ is the total number of arrangements then I will have $N=D_{3,2}(A)\cdot D_{3,2}(B)\cdot D_{2,1}(C)=72$.
From the text of the exercise I do not know whether I am allowed to mix the elements of the different sets. Also, what would the solution be for sequences of six elements, built from three elements of $A$, two elements of $B$ and one element of $C$? And if I can mix them, what happens?
| Your computation assumes that the five element sequence has the $a$'s first, the $b$'s second, and the $c$ last. If you are allowed to mix them up, it is easier to choose the two $a$'s without order in ${3 \choose 2}=3$ ways, the two $b$'s without order in ${3 \choose 2}=3$ ways and the $c$ in ${2 \choose 1}=2$ ways for a total of $18$ collections of elements. They can be placed in order in $5!=120$ ways, so there are $18 \cdot 120=2160$ five element sequences.
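The count can be confirmed by brute force. The sketch below (with arbitrary element labels of my own) enumerates all ordered $5$-element selections from $A\cup B\cup C$ and keeps those with the required composition:

```python
from itertools import permutations

A = ["a1", "a2", "a3"]
B = ["b1", "b2", "b3"]
C = ["c1", "c2"]

# Count ordered sequences of 5 distinct elements with exactly
# 2 elements from A, 2 from B and 1 from C.
count = sum(
    1
    for seq in permutations(A + B + C, 5)
    if sum(e in A for e in seq) == 2
    and sum(e in B for e in seq) == 2
    and sum(e in C for e in seq) == 1
)
assert count == 3 * 3 * 2 * 120  # C(3,2) * C(3,2) * C(2,1) * 5! = 2160
```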
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that $ G $ is isomorphic to the direct product of $ H $ and $ K $. Let $ G $ be a group of order $ 20 $. Suppose that $ G $ has a subgroup $ H $ of order $ 4 $ and a subgroup $ K $ of order $ 5 $ such that $ hk = kh $ for all $ h \in H $ and $ k \in K $. Show that $ G $ is isomorphic to the direct product of $ H $ and $ K $.
Idea: If I could prove that $ G $ is cyclic then immediately $ H $ and $ K $ are cyclic and therefore $ H \cong \mathbb {Z}_4 $ and $ K \cong \mathbb {Z}_5$. Then $ G \cong \mathbb{Z}_{20} \cong \mathbb {Z}_4\times\mathbb {Z}_5 \cong H \times K.$ But if it's true, I don't see how to prove it. Can you help me please? Thanks so much for reading.
| Hint: It suffices to check that the following three properties are satisfied: $1)\,H\cap K=\{e\} \quad 2)\,G=HK$ and $3)$ the elements of $H$ and $K$ commute.
These properties are easy to verify: $1)$ follows from Lagrange, since $|H\cap K|$ divides both $4$ and $5$. $2)$ does also: since $hk=kh$ for all $h\in H,k\in K$, we have $HK=KH$, so $HK\le G$; its order is divisible by both $4$ and $5$ (as $H$ and $K$ are subgroups of $HK$), hence $|HK|=20$ and $HK=G$. $3)$ is given.
Btw: the group need not be cyclic, as $G\cong V_4\times C_5$ shows.
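That last remark can be checked concretely. The sketch below (my own encoding, not part of the answer) realizes $G = V_4 \times C_5$ as tuples under componentwise addition, takes $H$ and $K$ as the obvious subgroups, and verifies the three properties plus the fact that $G$ is not cyclic:

```python
from itertools import product

def mul(g1, g2):
    # G = Z2 x Z2 x Z5 with componentwise addition: this is V4 x C5
    return ((g1[0] + g2[0]) % 2, (g1[1] + g2[1]) % 2, (g1[2] + g2[2]) % 5)

G = list(product(range(2), range(2), range(5)))
H = [(a, b, 0) for a in range(2) for b in range(2)]       # subgroup of order 4
K = [(0, 0, c) for c in range(5)]                          # subgroup of order 5
e = (0, 0, 0)

assert set(H) & set(K) == {e}                              # 1) trivial intersection
assert {mul(h, k) for h in H for k in K} == set(G)         # 2) G = HK
assert all(mul(h, k) == mul(k, h) for h in H for k in K)   # 3) H and K commute

def order(g):
    n, x = 1, g
    while x != e:
        x, n = mul(x, g), n + 1
    return n

assert max(order(g) for g in G) == 10   # no element of order 20, so G is not cyclic
```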
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Word Problems regarding Probability This post is going to be lengthy. I am studying Probability to refresh my knowledge before I take a Stats class in college. The thing is, the textbook I have doesn't provide any solutions, so I can't tell whether my answers are correct. Anyway, here are the problems with the solutions I have attempted:
$1.$ In how many ways can a librarian arrange $2$ Biology and $5$ Math books in a shelf?
My attempt: $2$ Bio books $\times$ $5$ Math books = $10$ ways
$2.$ How many $2$-letter words can you form using letters $w,x,y,z$ without repeating letters?
My attempt: 4!/2! = 12
$3.$ How many ways can $5$ questions be answered if for every question there are $3$ possible answers?
My attempt: 5 x 3 = 15
15! is the answer, I guess.
$4.$ There are $3$ math books and $3$ history books that are to be arranged in a shelf. How many different ways can the books be arranged on the shelf if $2$ history books are also to be kept together and $2$ mathematics books are also to be kept together? The $2$ math books should be immediately followed by the $2$ history books, and vice versa.
I have no idea how to tackle this one. The load of words confuse me.
I'm guessing it's $5 \times 5$? Since both $2$ books for history and math are
to be kept together.
$5.$ Cinderella and her $7$ dwarves will eat in a round table. Happy wishes not to be seated opposite Grumpy. What's the probability that things will not work out for Happy?
My attempt: (7-1)! = 6!
Thank you in advance. Any help will mean a lot.
| Ok, here we go!
I'll give you some answers and working, and leave some for you:
*
*This depends on the wording. If the books are all distinct, then there are $7! = 7\cdot6\cdot5\cdot4\cdot3\cdot2\cdot1 = 5040$ arrangements. But if the bio books are identical and the math books are identical, there are $\frac{7!}{5!\cdot2!} = \frac{5040}{240} = 21$.
*There are 4 options for the first letter, 3 for the second so since $4\times3$ = 12, you are correct.
*For the first question there are 3 options, second 3 options, 3rd 3 options... so the total will be $3 \times 3 \times 3 \times 3 \times 3$ = $3^5$ = 243 possibilities.
*Assuming you mean we have two math books followed by two history books or vice versa, we can place this block of 4 books amongst the 6 spaces we can order them in. Assuming Math books are identical and history books are identical, we have the following possibilities (blank spaces represent where we can place the other books):
(4-block)-- = 2 possibilities to place the 2 remaining books in the remaining spaces
-(4-block)- = 2 possibilities to place the 2 remaining books in the remaining spaces
--(4-block) = 2 possibilities to place the 2 remaining books in the remaining spaces
So total 6, but we can arrange it within the 4-block as history first then math or math first then history so multiply by 2: 12 is the answer.
*
*Firstly, this asks for the probability, not the possibility. I've given you some tips on the other one so I'll leave this for you to try and figure out, here's a hint:
First seat Happy and look then what are the possibilities left for Grumpy to seat.
N.B:
In case you want to learn more, look up combinatorics - covering combinations, arrangements and permutations. It's a fascinating field.
Good luck!
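Questions 2-4 are small enough to brute-force. The sketch below (assuming, as in the answer, that books of the same subject are identical) enumerates the possibilities directly:

```python
from itertools import permutations, product

# Q2: 2-letter words from w, x, y, z without repeating letters
assert len(set(permutations("wxyz", 2))) == 12

# Q3: 5 questions, 3 possible answers each
assert len(list(product("abc", repeat=5))) == 243  # = 3^5

# Q4: shelves of MMMHHH where "MMHH" or "HHMM" appears as a contiguous block
shelves = {"".join(p) for p in permutations("MMMHHH")}  # 20 distinct shelves
good = {s for s in shelves if "MMHH" in s or "HHMM" in s}
assert len(good) == 12
```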
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why the reals with the operation $x \bullet y = \sqrt[3]{x^3 + y^3}$ is a group? The operation above is a group for the real numbers, since 0 is the identity element, and the negative of any real number is its inverse, as it can be observed trivially. Associativity is less trivial, but it holds.
In fact, if we replace $3$ by any odd number ($5, 7, \dots$), the operation still satisfies the group axioms. However, any even number fails.
Is there any geometric / analytic / ... interpretation why an operation like $x \bullet y = \sqrt[3]{x^3 + y^3}$ is associative and, as a consequence, it gives the structure of a group to the reals?
| For an arbitrary bijection $f\colon \mathbf R \to \mathbf R$, the operation $x*y = f^{-1}(f(x) + f(y))$ is a group law on $\mathbf R$. All this says is that if you rename each real number $x$ as $f(x)$ then you can convert the original group law $+$ into a group law $*$ so that $f$ is an isomorphism from $(\mathbf R, *)$ to $(\mathbf R,+)$. The intuition is algebraic, not geometric. There is nothing magical about $n$th roots for odd $n$ other than being a bijection.
The hyperbolic tangent function $\tanh \colon \mathbf R \to (-1,1)$ is a bijection that lets you transport addition on $\mathbf R$ to a group law on $(-1,1)$ that is used in special relativity (addition of velocities in one-dimensional motion). The inverse of this bijection, up to a scaling factor, is called “rapidity” in physics.
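Numerically the conjugation picture is easy to spot-check. The sketch below defines $x * y = \sqrt[3]{x^3+y^3}$ with a sign-aware real cube root (Python's `**` does not accept a negative base with a fractional exponent) and tests the group axioms at random points:

```python
import math
import random

def cbrt(s):
    # real cube root, valid for negative inputs too
    return math.copysign(abs(s) ** (1.0 / 3.0), s)

def star(x, y):
    return cbrt(x ** 3 + y ** 3)

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    # associativity: both sides equal (x^3 + y^3 + z^3)^(1/3)
    assert abs(star(star(x, y), z) - star(x, star(y, z))) < 1e-9
    assert abs(star(x, 0.0) - x) < 1e-9   # 0 is the identity
    assert abs(star(x, -x)) < 1e-12       # -x is the inverse of x
```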
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
How to construct an algebra / field that is infinitely countable? It is well-known that a $\sigma$-algebra / $\sigma$-field can only be finite or uncountable infinite, but how to construct an example of algebra / field that is infinitely countable? This is actually a question from Billingsley's Probability and Measure problem 2.12.
| An example is the collection of finite unions of intervals in $\Bbb{R}$ with end points in $\Bbb{Q}\cup \{\infty, -\infty\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Conditional expectation with multiple conditioning For any r.v.s $X$ and $Y$:
$$E(Y|E(Y|X)) = E(Y|X)$$
But I cannot seem to be able to prove this. I tried using Adam's Law with extra conditioning ($E(Y|X) = E(E(Y|X,Z)|Z)$) but I don't seem to get anywhere with it.
What I tried is the following:
$$g(X) = E(Y|X)$$
$$E(Y|g(X)) = E(E(Y|X,g(X))|g(X))$$
Since the event $X$ happened and $g(X)$ happened are equivalent, conditioning on both $X$ and $g(X)$ is the same as conditioning on only one of them.
Is there any intuitive interpretation of this ?
Does this also mean that conditioning on $X$ or any function $g$ of $X$ is the same ?
| Given the level of formality of the book, I think what the exercise is going for is primarily conceptual. I.e., what does the below conditioning mean:
$$E\big(Y \,\big|\, E(Y\mid X),\, X\big)\Big|_{X = x}$$
This represents the expectation of $Y$ if I know what the expectation of $Y$ given $X$ would be... and now I also know $X$! So I'm exactly in the situation where I know $X$ and I know what the expectation of $Y$ given $X$ is, so the conditional expectation given all that information is just $E(Y\mid X)$. And then you Adam's-Law it up via taking out what's known: given $X$, $E(Y\mid X)$ is known, as it's a function of $X$.
(So it's more specific than just any $g(X)$ - it's essential to the argument that $g(X) = E(Y\mid X)$ in particular.)
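The identity $E(Y \mid E(Y\mid X)) = E(Y\mid X)$ can also be checked on a small discrete example. In the sketch below (my own toy distribution, not from the question), $g(X) = E[Y\mid X]$ is deliberately non-injective, so conditioning on $g(X)$ genuinely pools two values of $X$:

```python
from fractions import Fraction as F
from collections import defaultdict

# joint pmf of (X, Y), chosen so g(0) = g(2) = 1 while g(1) = 5
pmf = {(0, 0): F(1, 4), (0, 2): F(1, 4),  # X=0: Y is 0 or 2, so E[Y|X=0] = 1
       (1, 5): F(1, 4),                    # X=1: Y = 5
       (2, 1): F(1, 4)}                    # X=2: Y = 1

# g(x) = E[Y | X = x]
num, den = defaultdict(F), defaultdict(F)
for (x, y), p in pmf.items():
    num[x] += y * p
    den[x] += p
g = {x: num[x] / den[x] for x in num}
assert g == {0: F(1), 1: F(5), 2: F(1)}   # g is not injective

# E[Y | g(X) = v]: group outcomes by the value of g(X)
num2, den2 = defaultdict(F), defaultdict(F)
for (x, y), p in pmf.items():
    num2[g[x]] += y * p
    den2[g[x]] += p
for v in num2:
    assert num2[v] / den2[v] == v          # E(Y | E(Y|X)) = E(Y|X)
```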
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3792932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Congruent sets of an arithmetic sequence and a geometric sequence Suppose we have a $a,d,$ and $q$ such that $a \neq 0, d \neq 0.$ Then, let $M = \{a, a + d, a + 2d\}$ and $N = \{a, aq, aq^2\}.$ Given that $M = N,$ find the value of $q.$
(A) $\frac12$
(B) $\frac13$
(C) $-\frac14$
(D) $-\frac12$
(E) $-2$
I immediately thought about setting $a + d = aq$ and $a + 2d = aq^2.$ I then proceeded to do $aq^2 - aq = d,$ and substitute in for $d,$ which gave me $a + (aq^2 - aq) = aq.$ Simplifying then gave me $aq^2 - 2aq + a = 0,$ and dividing by $a$ gave me $q^2 - 2q + 1 = 0,$ which should mean that $q = 1.$ However, that's not an answer choice. What should I do instead?
| One could also work with the difference between terms, rather than eliminating it. The two possible orderings of the elements in the set are
$ \ \{ a \ , \ aq \ = \ a + d \ , \ aq^2 \ = \ a + 2d \} \ $ or $ \ \{ a \ , \ aq \ = \ a + 2d \ , \ aq^2 \ = \ a + d \} \ \ . $ [Initially, it "feels like" the second arrangement should be unreasonable...]
For the first arrangement, subtracting the first term from the second and third produces $ \ a·(q - 1) \ = \ d \ \ , \ \ a·(q^2 - 1) \ = \ 2d \ \ . $ Taking $ \ a \neq 0 \ \ $ (otherwise, set $ \ N \ $ would only contain zeroes), dividing the latter equation here by the former yields
$$ \frac{q^2 \ - \ 1}{q \ - \ 1} \ \ = \ \ q \ + \ 1 \ \ = \ \ \frac{2d}{d} \ \ = \ \ 2 \ \ \Rightarrow \ \ q \ = \ 1 \ \ \Rightarrow \ \ q \ - \ 1 \ \ = \ \ \frac{d}{a} \ \ = \ \ 0 \ \ . $$
This just gives us the "constant" sequence of terms you were concerned about (and which the stated conditions exclude). But in fact, $ \ q = 1 \ $ isn't even permissible for this ratio. So it turns out that the first arrangement of elements is the "incorrect one".
The seemingly unreasonable ordering leads to
$$ \ a·(q^2 - 1) \ = \ d \ \ , \ \ a·(q - 1) \ = \ 2d $$
$$ \Rightarrow \ \ \frac{q^2 \ - \ 1}{q \ - \ 1} \ \ = \ \ q \ + \ 1 \ \ = \ \ \frac{d}{2d} \ \ = \ \ \frac12 \ \ \Rightarrow \ \ q \ = \ -\frac12 $$ $$ \Rightarrow \ \ q^2 \ - \ 1 \ \ = \ \ -\frac34 \ \ = \ \ \frac{d}{a} \ \ \Rightarrow \ \ d \ \ = \ \ -\frac34 · a \ \ . $$
We note that this leaves $ \ a \ $ unspecified, so there are an infinite number of such sequences possible. The elements of the sets are thus
$$ \ \{ \ a \ \ , \ \ a \ - \ \frac34 a \ = \ \frac14 a \ = \ aq^2 \ \ , \ \ a \ - \ 2·\frac34 a \ = \ -\frac12 a \ = \ aq \ \} \ \ . $$
$$ \ \ $$
[I also had an argument in which we could avoid the question of sequence ordering by looking at the sum of the elements, which gives us
$$ a·( 1 \ + \ q \ + \ q^2) \ \ = \ \ a \ + \ (a + d) \ + \ (a + 2d) \ \ = \ \ 3a \ + \ 3d $$ $$ \Rightarrow \ \ q^2 \ + \ q \ - \ \left(2 \ + \ 3·\frac{d}{a}\right) \ \ = \ \ 0 $$
$$ \Rightarrow \ \ q \ \ = \ \ -\frac12 \ \pm \ \frac{\sqrt{1 \ + \ 4·\left(2 \ + \ 3·\frac{d}{a}\right)}}{2} \ \ = \ \ -\frac12 \ \pm \ \frac{\sqrt{ \ 9 + \left(12·\frac{d}{a} \right)}}{2} $$ $$ = \ \ -\frac12 \ \pm \ \frac{3·\sqrt{ \ 1 + \left(\frac{4d}{3a} \right)}}{2} \ \ . $$
Setting the discriminant equal to zero gives us $ \ q \ = \ -\frac12 \ $ and $ \ 4·d \ = \ -3·a \ \ $ as above, but I didn't really see a satisfying explanation for doing so, other than "tidiness" of the result.]
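As a sanity check, each answer choice can be tested with exact rational arithmetic; taking $a = 1$ (the value of $a$ only sets the scale), only $q = -\tfrac12$ admits a $d \neq 0$ with $M = N$. A sketch:

```python
from fractions import Fraction as F

choices = {"A": F(1, 2), "B": F(1, 3), "C": F(-1, 4), "D": F(-1, 2), "E": F(-2)}
a = F(1)

def works(q):
    N = {a, a * q, a * q ** 2}
    if len(N) < 3:
        return False
    u, v = N - {a}
    # M = {a, a + d, a + 2d}; try both ways of matching {u, v} to {a+d, a+2d}
    for first, second in ((u, v), (v, u)):
        d = first - a
        if d != 0 and second == a + 2 * d:
            return True
    return False

assert [k for k, q in choices.items() if works(q)] == ["D"]  # only q = -1/2
```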
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Density of Borel set at 0 The Lebesgue density theorem says that if $E$ is a Lebesgue measurable set, then the density of $E$ at almost every element of $E$ is 1 and the density of $E$ at almost every element not in $E$ is 0.
However, is it true that for each $t$ strictly between 0 and 1, there is a Borel set $E$ that has density $t$ at 0?
I have no idea how to construct such a set for an arbitrary value of $t$.
Any help would be appreciated.
| Consider a sequence of numbers $r_n \searrow 0$ such that $\frac{r_{n-1}}{r_n} \to 1$. Let $\theta$ be a measure preserving map from $(0,r_1]$ to $\mathbb R^2$ that takes $(\pi r_{n}^2,\pi r_{n-1}^2] \subset \mathbb R$ to $\{x \in \mathbb R^2: r_n < |x| \le r_{n-1}\}$. Then let $A$ be a 'piece of pie' centered at the origin in $\mathbb R^2$, with angle $\alpha$ at the corner. Then $\theta^{-1}(A)$ will be a set with density $\alpha/(4\pi)$ at $0$.
This will give densities $0 \le t \le \frac12$. To get $\frac12 < t \le 1$, simply add $(-\infty,0]$.
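The limiting behaviour can be sketched numerically in one dimension. The code below uses my own concrete choices: $r_n = 1/n$, and $E$ keeps the leftmost fraction $2t$ of each interval $I_n = (\pi/n^2, \pi/(n-1)^2]$, which distributes measure across the $I_n$ the same way $\theta^{-1}(A)$ does (only the measure per interval matters for the density at $0$). The ratio $m(E\cap(-s,s))/(2s)$ settles at the target $t$:

```python
import math

t = 0.2          # target density of E at 0 (any 0 < t <= 1/2)
frac = 2 * t     # fraction of each interval I_n kept, plays the role of alpha/(2*pi)

# I_n = (pi/n^2, pi/(n-1)^2] for n >= 2 tile (0, pi]; E keeps the leftmost
# 'frac' of every I_n, so m(E ∩ (0, pi/n^2]) = frac * pi/n^2 exactly.
def measure_E_up_to(s):
    n = math.ceil(math.sqrt(math.pi / s))        # index with s in I_n ...
    if math.pi / n ** 2 >= s:                    # ... correcting rounding edge cases
        n += 1
    lo = math.pi / n ** 2
    length = math.pi / (n - 1) ** 2 - lo
    return frac * lo + min(frac * length, s - lo)

for s in (1e-4, 3.3e-6, 7.7e-9):
    density = measure_E_up_to(s) / (2 * s)       # E lives in (0, s]; interval is (-s, s)
    assert abs(density - t) < 0.01
```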
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Inequality for function of $\arctan(x)$ I want to show that $$f(x) = \frac{1}{\arctan(x)} - \frac{1}{x} $$ is increasing on $(0, \infty)$. I can see this clearly by plotting it, but I'm struggling to write it out rigorously. It obviously suffices to show its derivative is always positive in this range (which is also clear from plotting it). We have $$f'(x) = \frac{(1+x^2)\arctan^2(x) -x^2}{x^2(1+x^2)\arctan^2(x)}$$ so again it suffices to show that $$g(x) \equiv (1+x^2)\arctan^2(x) -x^2 \ge 0 \quad \forall x >0$$ (and, yet again, this is clear from plotting it). I've jumped down the rabbit hole of taking the derivative of $g$ as well (since it is $0$ at $x = 0$ so it would again suffice to show that $g' \ge 0$) and it doesn't yield anything immediately useful for me. Please help if you can
| Consider instead $ \displaystyle g(x) = \arctan^2{x} - \frac{x^2}{1 + x^2}$ (the $g$ from the question divided by $1+x^2$). Note that $g(0) = 0$, so it suffices to show that $g'(x) \ge 0$ for $x \ge 0$.
Now, $\displaystyle g'(x) = \frac{2[(1 + x^2)\arctan{x} - x]}{(1 + x^2)^2}$. It thus suffices to consider $$h(x) = \arctan{x} - \frac{x}{(1 + x^2)},$$ and show that $h(x) \ge 0$ for $x \ge 0$. But $h(0) = 0$, and $$h'(x) = \frac{2x^2}{(1 + x^2)^2} \ge 0$$
for all $x$. This completes the proof.
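Both the question's inequality $(1+x^2)\arctan^2 x - x^2 \ge 0$ and the monotonicity of $f$ can be spot-checked numerically; the sketch below samples a grid on $(0, 20]$:

```python
import math

def f(x):
    return 1.0 / math.atan(x) - 1.0 / x

def g(x):
    # g from the question: (1 + x^2) * arctan(x)^2 - x^2
    return (1.0 + x * x) * math.atan(x) ** 2 - x * x

xs = [i / 100.0 for i in range(1, 2001)]           # grid on (0, 20]
assert all(g(x) >= 0.0 for x in xs)
vals = [f(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))  # f increasing on the grid
```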
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Continuous function with upper dini derivative greater than 0 implies function is increasing
Let $f$ be continuous on $[a,b]$ with $\bar D f \geq 0$ (upper Dini derivative of $f$) on $(a,b)$. Show that $f$ is increasing on $[a,b]$.
Hint: Show this is true for $g$ with $\bar D g \geq \epsilon > 0$ on $[a,b]$. Apply this to the function $g(x) = f(x) + \epsilon x$.
This is question 19 from chapter 6.2 of Royden-Fitzpatrick Analysis 4th edition.
My approach is as follows
*
*$g$ is continuous as it is the linear combination of 2 continuous functions.
*$\bar D g = \bar D f + \epsilon \geq \epsilon > 0$ which means $g$ is strictly increasing on $(a,b)$.
*$f = g - \epsilon x$ and $\bar D f = \bar D g - \epsilon \geq 0$ implies $f$ is increasing (it is not decreasing) on $(a,b)$.
Does it make sense? Thanks for any help. The question is also related to Continuous function on $[a, b]$ with bounded upper and lower derivatives on $(a, b)$ is Lipschitz.
| How do you know that $2$ holds? In fact, this is the gist of the proof, unless I am misreading your question, you need to do a bit of work. (Drawing a picture will help!) First suppose that $\bar D f >0$ on $(a,b)$. If there are $a<c<d<b$ such that $f(c)>f(d)$ then we may choose $f(c)>\mu>f(d)$. Let $S=\{t\in (c,d):f(t)>\mu\}$ and consider $\xi=\sup S.$ Note that $c<\xi<d$. Take an increasing sequence $(t_n)\subseteq (c,d)$ such that $t_n\to \xi.$ Then, $f(t_n)\to f(\xi)$. If $f(\xi)\neq \mu$ then there is a $\mu<\alpha<f(\xi)$. Continuity of $f$ now implies that there is an interval $I=(\xi,\xi+\delta)$ such that $t\in I\Rightarrow f(t)>\alpha>\mu$. But this contradicts the definition of $\xi.$ Thus, $f(\xi)= \mu.$
We have shown that for each $t\in (\xi,d),\ \frac{f(t)-f(\xi)}{t-\xi}\le0$, and we conclude that $ D^+ f(\xi)\le 0$, which is a contradiction. Thus, the claim is true for the strict inequality and $now$ we define $g_{\epsilon}(t)=f(t)+\epsilon t$. It follows that $\bar D g_{\epsilon} >0$ on $(a,b)$ so $g_{\epsilon}$ is non-decreasing there, and as $\epsilon$ is arbitrary, $f$ is also non-decreasing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$f$ is Lipschitz if and only if there exists $L\geq0$ such that $|f'(x)|\leq L$ Let $f:[a,b]\to\mathbb{R}$ be absolutely continuous.
Prove that $f$ is Lipschitz if and only if there exists $L\geq0$ and a set $E\subset[a,b]$ such that, $m(E)=0$ and $f$ is differentiable at each $x\in[a,b]\setminus E\quad$ with $|f'(x)|\leq L$.
My attempt:
For $\Rightarrow$ direction:
Since $f$ is absolutely continuous on $[a,b]$, it is differentiable a.e on $[a,b]$
And notice:
\begin{align}
|f'(x)|&=\left|\lim\limits_{h\to0}\frac{f(x+h)-f(x)}{h}\right|\\
&\leq\lim\limits_{h\to0}\left|\frac{f(x+h)-f(x)}{h}\right|\\
&\leq \lim\limits_{h\to0}\left| \frac{L|(x+h)-x|}{h}\right|\\
&=L \quad\text{ where $L$ is the Lipschitz constant}
\end{align}
Is that correct?
And for the other direction of the implication, I would really appreciate your help.
| Since $f$ is absolutely continuous, we have that
$$ f(x) - f(y) = \int_x^y f'(t) \, dt .$$
Then
\begin{align}
\text{$f$ is Lipschitz with constant $L$} &\Leftrightarrow
-L (y-x) \le \int_x^y f'(t) \, dt \le L (y-x) \text{ for all $x < y$} \\
&\Leftrightarrow \int_x^y (L - f'(t)) \, dt \ge 0 \text{ and } \int_x^y (L + f'(t)) \, dt \ge 0\text{ for all $x < y$}
\end{align}
Also
$$ \int_x^y g(t) \, dt \ge 0 \text{ for all $x < y$} \Leftrightarrow g \ge 0 \text{ a.e.} $$
Hence $f$ is Lipschitz with constant $L$ if and only if $L - f'(t)$ and $L + f'(t)$ are greater than or equal to zero for almost every $t$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the sum of series: $\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}$ I have some trouble with series theory. The specific question is as follows:
\begin{equation}
\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}
\end{equation}
My idea is just like this:
Since $e^x=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}$,
\begin{align}
\sum_{n=0}^{\infty}\frac{x^{2n}}{(2n)!}&=\sum_{n=0}^{\infty}\frac{x^{2n}}{2^nn!}\\
&=\sum_{n=0}^{\infty}\frac{(\frac{x^2}{2})^n}{n!}\\
&=e^{\frac{x^2}{2}}
\end{align}
However, the answer is $\cosh x$. The intended approach is based on the power series of $e^x$ and $e^{-x}$, adding them together. But I still don't understand what I did wrong.
Can anyone help me out, please? Thank you.
| What you did wrong was changing $(2n)!$ to $2^nn!$.
You were correct that $e^x=\sum\limits_{n=0}^{\infty}\dfrac{x^{n}}{n!}$,
so $\cosh x = \dfrac{e^x+e^{-x}}2=\dfrac{\sum\limits_{n=0}^{\infty}\frac{x^{n}}{n!}+\sum\limits_{n=0}^{\infty}\frac{(-x)^{n}}{n!}}2=\dfrac{\sum\limits_{n=0}^{\infty}\frac{x^{n}}{n!}\left(1+(-1)^n\right) }2$.
$\dfrac{1+(-1)^n}2$ is $0$ when $n$ is odd and $1$ when $n$ is even, so this becomes $\sum\limits_{n=0}^{\infty}\dfrac{x^{2n}}{(2n)!} . $
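A quick numeric check separates the two candidates: partial sums of $\sum x^{2n}/(2n)!$ agree with $\cosh x$ but not with the (incorrect) $e^{x^2/2}$. A sketch:

```python
import math

def series(x, terms=40):
    # partial sum of sum_{n>=0} x^(2n) / (2n)!
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

x = 1.3
assert abs(series(x) - math.cosh(x)) < 1e-12       # matches cosh x
assert abs(series(x) - math.exp(x * x / 2)) > 0.1  # does NOT match e^{x^2/2}
```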
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to calculate $\int _{-\infty }^{\infty }\frac{x\sin \left(x\right)}{1+x^4}\,dx$ I want to calculate $\int _{-\infty }^{\infty }\frac{x\sin \left(x\right)}{1+x^4}\,dx$, but I don’t want to use complex analysis. How can I calculate it?
I tried
$$I\left(t\right)=\int _{-\infty }^{\infty }\frac{x\sin \left(tx\right)}{1+x^4}\,dx$$
$$I''\left(t\right)=-\int _{-\infty }^{\infty }\frac{x^3\sin \left(tx\right)}{1+x^4}\,dx=-\int _{-\infty }^{\infty }\frac{\sin \left(tx\right)}{x}\,dx\:+\int _{-\infty }^{\infty }\frac{\sin \left(tx\right)}{x\left(1+x^4\right)}\,dx$$
$$=-\pi \:+\int _{-\infty }^{\infty }\frac{\sin \left(tx\right)}{x\left(1+x^4\right)}\,dx$$
$$I''''\left(t\right)=-\int _{-\infty }^{\infty }\frac{x\sin \left(tx\right)}{1+x^4}\,dx$$
$$I''''\left(t\right)+I\left(t\right)=0$$
Solving the differential equation and then applying the initial conditions seems like a very long process. How else can I calculate it?
| With $I\left(t\right)=\int _{-\infty}^{\infty }\frac{x\sin \left(tx\right)}{1+x^4}\:dx$, you have
$I''''(t)+I(t)= 0$, along with all the initial conditions
$$I(0)=0, \>\>\>I'(0)=\int_{-\infty}^\infty \frac{x^2}{1+x^4}dx =\frac\pi{\sqrt2} ,\\ I''(0)=-\pi, \>\>\>
I'''(0)=\int_{-\infty}^\infty \frac{1}{1+x^4}dx =\frac\pi{\sqrt2}
$$
which lead to the solution $I(t) =\pi e^{-\frac t{\sqrt2}}\sin\frac t{\sqrt2} $. Thus,
$$\int _{-\infty}^{\infty }\frac{x\sin \left(x\right)}{1+x^4}\:dx
=I(1)=\pi e^{-\frac 1{\sqrt2}}\sin\frac 1{\sqrt2}
$$
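The closed form can be confirmed by direct numerical integration: the integrand is even and its envelope decays like $1/x^3$, so a truncated Simpson rule suffices. A sketch:

```python
import math

def f(x):
    return x * math.sin(x) / (1 + x ** 4)

def simpson(func, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = func(a) + func(b) + sum((4 if i % 2 else 2) * func(a + i * h)
                                for i in range(1, n))
    return s * h / 3

L = 200.0  # tail beyond L is bounded by 1/(2 L^2) ~ 1.25e-5
numeric = 2 * simpson(f, 0.0, L, 200_000)      # integrand is even
closed = math.pi * math.exp(-1 / math.sqrt(2)) * math.sin(1 / math.sqrt(2))
assert abs(numeric - closed) < 1e-3
```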
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Probability of meeting in the grid. Aubrey leaves home at $(0,0)$ to walk to school at $(1,5)$ and travels one block (one unit) north or east every minute; she does not leave the rectangle $0 \leq x \leq1$, $0\leq y \leq5$. Xander leaves school at the same time, heads back toward the home at $(0,0)$, and travels one block (one unit) south or west every minute; he stays within the same rectangle. At each corner where she has a choice, Aubrey flips a coin to determine whether she should go north or east; at each corner where he has a choice, Xander flips a coin to determine whether he should go south or west. Compute the probability that they meet.
| First, note that the distance between home and school is $6$ units. Since they start at the same time and move at the same speed ($1$ unit per minute), they can only meet after three minutes. So the possible meeting points are $(0,3)$ and $(1,2)$. Let's find the probability that they meet at the first one. For that to happen, Aubrey must move north $3$ times in a row, which has probability $0.5^3$, while Xander must NOT move south $3$ times in a row, which has probability $1 - 0.5^3$ (any path of his containing a west step ends at $(0,3)$, since once he reaches $x=0$ he is forced south). The probability that they meet at $(0,3)$ is the product of those two independent probabilities. To complete the answer, note that the problem is symmetric, so the probability that they meet at $(1,2)$ is the same; since the two events are disjoint, the total is $2 \cdot \frac18 \cdot \frac78 = \frac{7}{32}$.
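The answer can be confirmed by exact enumeration of the coin flips. The sketch below gives each walker a triple of coin bits; a bit is consumed every minute but only matters when there is a real choice, which keeps all $8 \times 8$ outcomes equally likely:

```python
from fractions import Fraction
from itertools import product

def aubrey(coins):
    # north/east from (0,0), confined to 0 <= x <= 1, 0 <= y <= 5
    x, y = 0, 0
    for c in coins:
        if x < 1 and y < 5:       # real choice: the coin decides
            if c: x += 1          # east
            else: y += 1          # north
        elif x < 1: x += 1        # forced east
        else: y += 1              # forced north
    return x, y

def xander(coins):
    # south/west from (1,5), confined to the same rectangle
    x, y = 1, 5
    for c in coins:
        if x > 0 and y > 0:       # real choice
            if c: x -= 1          # west
            else: y -= 1          # south
        elif y > 0: y -= 1        # forced south
        else: x -= 1              # forced west
    return x, y

meet = sum(aubrey(a) == xander(b)
           for a in product((0, 1), repeat=3)
           for b in product((0, 1), repeat=3))
assert Fraction(meet, 64) == Fraction(7, 32)
```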
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it possible to write a metric space as a countable disjoint union of compact sets? Let $ (X,d)$ be a metric space and let $\mu $ be a Radon $\sigma$-finite measure on the Borel $\sigma$-algebra. I read that it's possible to find countably many disjoint compact sets $\lbrace K_n\rbrace_{n\in\mathbb{N}}$ and a $\mu$-null set $N$ such that $$ X=\bigcup_{n\in\mathbb{N}}K_n\cup N. $$
I've tried to reach the result using inner regularity of $\mu$, but without success. Is this statement true? How can I prove it?
| The key assumption here is that $\mu$ is a Radon measure, meaning it is inner regular with respect to compact sets. Without this assumption, this is not true, not even if $\mu$ is finite (for instance, there are metric spaces supporting continuous measures in which all compact sets are finite).
Write $X=\bigcup_n X_n$, where the $X_n$ are disjoint Borel sets of finite measure. Then recursively choose a compact $K_{n,m}\subseteq X_n\setminus \bigcup_{m'<m} K_{n,m'}$ such that $\mu((X_n\setminus \bigcup_{m'<m} K_{n,m'})\setminus K_{n,m})<1/m$. Then $X_n\setminus \bigcup_{m} K_{n,m}$ is null, and so $X\setminus\bigcup_{n,m} K_{n,m}$ is null, and the $K_{n,m}$ are clearly disjoint.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3793962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Show that the solution of $\frac{\rm d}{{\rm d}t}X^x(t)=v(t,X^x(t))$, $X^x(0)=x$, is differentiable in $x$ Let $X^x$ be the solution of$^1$ \begin{align}\frac{\rm d}{{\rm d}t}X^x(t)&=v(t,X^x(t))\\ X^x(0)&=x\end{align} and $$T_t(x):=X^x(t).$$
Assuming that $v$ is differentiable in the second argument, can we show that $T_t$ is differentiable?
I'm only able to prove this when $v$ is twice differentiable in the second argument, since then Taylor's theorem is applicable.
$^1$ Assume $v:[0,T]\times\mathbb R^d\to\mathbb R^d$ is Lipschitz continuous in the second argument uniformly with respect to $t$ and continuous in the first variable.
| Define an augmentation of the ODE system via
$$
\frac{d}{dt}U^x(t)=\frac{\partial v}{\partial x}(t,X^x(t))\,U^x(t), ~~~ U^x(0)=I.
$$
Then use Grönwall or similar to find a bound for $E^{x,Δx}(t)=X^{x+\Delta x}(t)-X^x(t)-U^x(t)\Delta x$.
Assume that the following considerations are restricted to a compact domain so that $X^x(s)$, $X^{x+Δx}(s)$, and the connecting segment are inside the domain for all $s\in[0,t]$.
Obviously, $E^{x,Δx}(0)=0$ by construction. Then
\begin{align}
E^{x,Δx}(t)&=\int_0^t\frac{d}{ds}E^{x,Δx}(s)\,ds
\\
&=\int_0^t\left(v(s,X^{x+Δx}(s))-v(s,X^{x}(s))-∂_xv(s,X^{x}(s))U^x(s)Δx\right)\,ds
\\
&=\int_0^t\left[v(s,X^{x+Δx}(s))-v(s,X^{x}(s))-∂_xv(s,X^{x}(s))\left(X^{x+Δx}(s)-X^{x}(s)\right)\right]\,ds
\\&\qquad
+\int_0^t∂_xv(s,X^{x}(s))\left[X^{x+Δx}(s)-X^{x}(s)-U^x(s)Δx\right]\,ds
\\
&=\int_0^t\rho_v\left(X^{x}(s),X^{x+Δx}(s)-X^{x}(s)\right)\,ds
+ \int_0^t∂_xv(s,X^{x}(s))E^{x,Δx}(s)\,ds
\end{align}
Now one can argue that the first integrand is uniformly $o(Δx)$ by continuity of $X^x$ in $x$ and the definition of differentiability of $v$, while $∂_xv$ in the second integrand is bounded by the Lipschitz constant $L$ of $v$. This allows us to find, for a given $ε>0$, a $δ>0$ so that for all $\|Δx\|<δ$ one gets the integral inequality, with its solution per Grönwall,
$$
\|E^{x,Δx}(t)\|\le ε\|Δx\|t+L\int_0^t\|E^{x,Δx}(s)\|\,ds
\implies
\|E^{x,Δx}(t)\|\le ε\|Δx\|\frac{e^{Lt}-1}{L}
$$
from where $E^{x,Δx}(t)=o(Δx)\implies\dfrac{∂X^x(t)}{∂x}=U^x(t)$ follows.
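The variational-equation idea can be tested numerically in one dimension. The sketch below uses my own toy field $v(t,x)=\sin x$ (smooth and Lipschitz), integrates the augmented system $(X, U)$ with RK4, and compares $U^x(T)$ against a finite-difference quotient of the flow:

```python
import math

def rk4(f, y0, t0, t1, n):
    # classical 4th-order Runge-Kutta for a system y' = f(t, y)
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def aug(t, y):
    # v(t, x) = sin(x); augmented system: x' = sin x, U' = cos(x) * U
    x, U = y
    return [math.sin(x), math.cos(x) * U]

x0, T = 0.5, 1.0
xT, UT = rk4(aug, [x0, 1.0], 0.0, T, 2000)          # U(0) = I = 1 in dimension 1
h = 1e-6
xTh, _ = rk4(aug, [x0 + h, 1.0], 0.0, T, 2000)
fd = (xTh - xT) / h                                  # finite-difference dX^x(T)/dx
assert abs(fd - UT) < 1e-4
```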
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Obtaining a Quotient Space of $\mathbb{R}^2$ Using stereographic projection of the sphere $S^2$, we can see that the one-point compactification of $\mathbb{R}^2$ is the sphere, i.e. $S^2$ can be thought of as $\mathbb{R}^2 \cup \{ \infty \}$. Now I am wondering how $S^2$ can be obtained by quotienting $\mathbb{R^2}$.
I have an idea (maybe it's vague): can we identify the boundary of $\mathbb{R}^2$ to one point and keep the other points as singletons? But I have been thinking that the boundary of $\mathbb{R}^2$ is not in $\mathbb{R}^2$, so we may not get the sphere by quotienting the space $\mathbb{R}^2$ alone.
So I think we have to take $\mathbb{R}^2$ together with its boundary first, and then proceed in the way described above. But I have no confidence in my intuition. Can someone please help me clear up my doubts? (I do know it can be done with a closed bounded subset of $\mathbb{R}^2$ by identifying its boundary to one point.)
| $\mathbb R^2$ has no boundary. But that doesn't matter, we can still get $S^2$ by quotienting $\mathbb R^2$: take the open unit disc and collapse every point which is not in the disc into a single point. This point is essentially the pole needed for the one-point compactification of the unit disc.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
cardinality of functions from N to N using Schröder–Bernstein theorem I'm trying to prove that $\left|A\right|=\aleph$ for the following
set: $A=\left\{f\in\mathbb{N}\rightarrow\left.\mathbb{N}\right|\forall n\le m\ .f\left(n\right)\le f\left(m\right)\right\}$ using the Schröder–Bernstein theorem.
To prove that $\left|A\right|\le\aleph$ I used the fact that $A\subseteq\mathbb{N}\rightarrow\mathbb{N}$ and therefore $\left|A\right|\le\left|\mathbb{N}\rightarrow\mathbb{N}\right|=\aleph_0^{\aleph_0}=\aleph$
For the other side I think I found an injective function $f\in P\left(\mathbb{N}\right)\rightarrow(\mathbb{N}\rightarrow\mathbb{N})$
which, given a set $B\in P\left(\mathbb{N}\right)$, returns the identity on $B$.
In that case i will get $\aleph=2^{\aleph_0}=\left|P\left(\mathbb{N}\right)\right|\le\left|A\right|$
Is that correct, or did I miss something along the way?
| Hint: to get an injection of $\Bbb{P}(\Bbb{N})$ into $A$, map each $X \subseteq \Bbb{N}$ to the function $f_X$ defined by $f_X(0) = 0$ and:
$$f_X(x + 1) = \left\{ \begin{array}{l@{\quad}l}
f_X(x) & \mbox{$x \not\in X$}\\
f_X(x) + 1 & \mbox{$x \in X$}
\end{array}\right.$$
Now show how you can recover $X$ from $f_X$.
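In computable form (restricted to an initial segment of $\Bbb N$ for illustration), the hint's map and its inverse look like this:

```python
def encode(X, n):
    # f_X(0) = 0; f_X(x+1) = f_X(x) + 1 if x in X, else f_X(x)
    f = [0]
    for x in range(n):
        f.append(f[-1] + (1 if x in X else 0))
    return f

def decode(f):
    # X is recovered as the set of points where f jumps
    return {x for x in range(len(f) - 1) if f[x + 1] > f[x]}

X = {0, 2, 3, 7}
f = encode(X, 10)
assert all(f[i] <= f[i + 1] for i in range(len(f) - 1))  # f is monotone, so f is in A
assert decode(f) == X                                     # X is recoverable: injective
```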
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The minimal poly of $\sqrt[3]{2}$ over $\Bbb{Q}$ is equal to $\det(T_a - xI)$ where $T_a$ is a matrix over $\Bbb{Q}$ that represents mult. by $a$. Let $K/F$ be a field extension of degree $n \in \Bbb{N}$ and for each $a \in K$ define $L_a(x) = a x$. Then $L_a(x)$ is an $F$-linear transformation of $K$ as a vector space of dimension $n$. So send $K$ into $F^{n \times n}$ the matrix ring by sending $a$ to $T_a = [ L_a(\theta_1) \ \cdots \ L_a(\theta_n) ]$ where abstractly we have $L = \{ a_1 \theta_1 + \dots + a_n \theta_n : a_i \in F\}$ for some $\theta_i$ basis in $K$.
Then for $a \in K$, let $f(x) = \det (T_a - xI) \in F[X]$ be the characteristic polynomial. We have $f(a) = 0$, i.e. $a$ is a root of the characteristic polynomial, which is monic of degree $n$, so the characteristic polynomial is in fact $m_{a, F}(x)$, the minimal polynomial for $a$ over $F$.
I'm trying to prove this in the general case, i.e. that $f(a) = 0$ or equivalently that $T_a(y) = ay$ for all $y \in F^n$.
What I have so far is:
$$
T_a(y) = \sum_{i=1}^n y_i L_a(\theta_i) \\
\implies T_a(y) = \sum_{i=1}^n y_i (a \theta_i) = a \cdot \dots
$$
So I've got that so far. Then the problem says, test this idea out to find the monic of degree $3$ satisfied by $a = \sqrt[3]{2}$.
So I want to compute the determinant of:
$$
\begin{pmatrix}
x - \sqrt[3]{2} & 0 & 0 \\
0 & x - \sqrt[3]{4} & 0 \\
0 & 0 & x - 2
\end{pmatrix}
$$
where I've reversed the sign for simplicity. I computed the above by multiplying $\theta_1 = 1, \theta_2 = \sqrt[3]{2}, $ and $\theta_3 = \sqrt[3]{4}$ by $a$ and subtracting that from $x$.
I'm getting:
$$
x^3 - 2 x^2 + (2 + 2\sqrt[3]{2} + 2\sqrt[3]{4}) x - 4
$$
which is not a polynomial over $F$. The bad term I got by doing $(-\sqrt[3]{2})(-2) + (-\sqrt[3]{4})(-2) + (-\sqrt[3]{2})(-\sqrt[3]{4})$ in a logical, symmetric way.
Where have I gone wrong in my computation?
| I think you've computed $T_a$ incorrectly. I assume you're using the ordered basis $(1,\sqrt[3]{2},\sqrt[3]{4})$ for $K$ as $\mathbb Q$-vector space (edit: I see that you are). So applying $L_a$ to the first basis vector gives $L_a(1)=\sqrt[3]{2}$. In terms of the coordinate vectors relative to this ordered basis, this is $$L_a\begin{bmatrix}1\\0\\0\end{bmatrix}=\begin{bmatrix}0\\1\\0\end{bmatrix}$$ so this should be the first column of $T_a$. I'll leave it to you to check the other columns.
Lastly, a small note: it should be $\det(xI-T_a)$ for the minimal polynomial to be monic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Pseudoinverse of a diagonal matrix Let the matrix $A \in \Bbb R^{n \times n}$ have $k$ nonzero diagonal elements, where $k < n$, and let the rest of the elements be zero. I am trying to find the pseudoinverse of $A + \lambda I$ as $\lambda$ approaches zero.
Then $\frac{1}{a_i + \lambda}$ would be the diagonal elements for $i$ going from 1 to $k$ of the pseudoinverse, and $\frac{1}{\lambda}$ would be the rest of the diagonal elements. If I put $\lambda$ equal to zero, then the pseudoinverse would be a matrix with the elements of $A$ inverted, but there would be elements going to infinity. But that does not sound right. What is wrong in this logic?
| The problem is that the pseudoinverse is not a continuous function on the space of matrices, exactly as you've shown. Consider the 1d matrix $(x)$ for $x\in\mathbb R$. Then the pseudo-inverse map is
$$
(x)\mapsto\begin{cases}1/x&\text{ if }x\neq 0,\\0&\text{ otherwise.} \end{cases}
$$
This is not continuous at zero, and so we would not expect it to preserve a limit of an element to zero. The same happens with your example when we restrict to the kernel of $A$.
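For the diagonal case the discontinuity is easy to see numerically; here is a minimal pure-Python sketch (the helper `pinv_diag` and its tolerance are my own choices, not a library API):

```python
# diagonal case: (A + lam*I)^{-1} has entries 1/(a_i + lam); kernel entries behave like 1/lam
diag = [2.0, 3.0, 0.0]          # k = 2 nonzero entries, n = 3

def pinv_diag(d, tol=1e-12):
    # pseudoinverse of a diagonal matrix: invert nonzero entries, zero out the rest
    return [0.0 if abs(x) < tol else 1.0 / x for x in d]

for lam in (1e-2, 1e-6):
    print([1.0 / (x + lam) for x in diag])   # last entry grows like 1/lam
print(pinv_diag(diag))                       # -> [0.5, 0.333..., 0.0]
```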
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Line in polar coordinates I just wanted to clarify something. A line in polar coordinates has the parameterization of $\theta = k\pi$ for $k \in \mathbb{R}$ right? Or am I missing something?
| Hint: Let $x = r \cos \theta$ and $y= r \sin \theta$ and see if you can find the polar equations for
$$y = mx + b \\ y = x + b \\ y = x$$ For the third equation, what do you notice?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
A Lie algebra with two of its Lie brackets zero while the third is not. Is it possible to construct a system of three vector fields $u$, $v$ and $w$ on $\mathbb{R}^{3}$ such that $[u,v]=0=[u,w]$, but $[v,w]\neq 0$?
I tried to solve it by applying the Jacobi identity, which makes the 2nd and 3rd terms vanish, so I am left with $[u,[v,w]]=0$. I am stuck here. Any help please. Thanks in advance
| Yes: consider the $2$-dimensional non-commutative Lie algebra with $[a,b]=a$, and add a central element $c$: $[c,a]=[c,b]=0$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3794908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Integrating a 'twisted' rational function For $x\in [0,1]$, let
$$
P_n (x) = \prod_{k=1}^{n} (x^k+1)^{(-1)^k}.
$$For example, $\displaystyle{P_4(x) = \frac{(x^2+1)(x^4+1)}{(x+1)(x^3+1)}}$. Of note: $P_n(1)=1/2$ if $n$ is odd and $1$ if $n$ is even, so we cannot expect uniform convergence on $[0,1)$. I am interested in the limit $\lim_{n\to\infty}P_n(x)$, if it exists, and several related integrals, namely:
*
*Whether $P(x):=\lim_{n\to\infty}P_n(x)$ exists and if so what it is
*$I_n:=\int_0^1 P_n(x)\,dx$ (this seems to be the natural range of integration since we want to avoid negative numbers and the even-index version blows up for $x>1$)
*$I:=\int_0^1 P(x)\,dx$
I calculated the first few values of $I_n$ by hand:
$$
\left\{\log (2),\log (4)-\frac{1}{2},\frac{1}{27} \left(9+2 \sqrt{3} \pi \right),\frac{5}{2}+\frac{\pi }{9
\sqrt{3}}-\frac{8 \log (2)}{3}\right\}
$$Then I computed $20$ values using a CAS; the sequence appears to be alternating with the odd values increasing and the even values decreasing (as expected). I got $I_{1000}\approx 0.79496$ and $I_{1001}\approx 0.794376$, so I would guess the limit $I$ is somewhere in between them.
I've seen infinite products before, mostly in the context of some introductory material I've read on hypergeometric series, so feel free to use them in your answer!
| The infinite product $$P(x) = \prod_{k=1}^\infty (x^k+1)^{(-1)^k}$$
converges to a nonzero value if $|x| < 1$ because
$$\sum_{k=1}^\infty \log \left((x^k+1)^{(-1)^k}\right) = \sum_{k=1}^\infty (-1)^k \log(x^k+1)$$
converges. Its Maclaurin series coefficients are OEIS sequence A083365. According to that, $P(x) = \psi(x) / \phi(x)$ where $\psi(x)$ and $\phi(x)$ are Ramanujan theta functions.
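The partial integrals $I_n$ are easy to reproduce numerically; here is a quick sketch using the midpoint rule (the resolution `m = 1000` is an arbitrary choice):

```python
import math

def P(x, N):
    # partial product P_N(x) = prod_{k=1}^N (x^k + 1)^{(-1)^k}, computed via logs
    return math.exp(sum((-1) ** k * math.log(x ** k + 1) for k in range(1, N + 1)))

def I(N, m=1000):
    # midpoint-rule approximation of the integral of P_N over [0, 1]
    return sum(P((j + 0.5) / m, N) for j in range(m)) / m

print(I(1), math.log(2))        # I_1 = log 2
print(I(1000), I(1001))         # should land near the quoted 0.79496 and 0.794376
```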
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Understanding the proof of: Every convex function is continuous I am trying to understand the following proof:
Theorem 2.10. If $f$ is a convex function defined on an open interval $(a, b)$ then $f$ is continuous on $(a, b)$
Proof. Suppose $f$ is convex on $(a, b),$ and let $[c, d] \subseteq(a, b) .$ Choose $c_{1}$ and $d_{1}$ such that
$$
a<c_{1}<c<d<d_{1}<b.
$$
If $x, y \in[c, d]$ with $x<y,$ we have from Lemma 2.9 (see Figure 4$)$ that
$$
\frac{f(y)-f(x)}{y-x} \leq \frac{f(d)-f(y)}{d-y} \leq \frac{f\left(d_{1}\right)-f(d)}{d_{1}-d}
$$
and
$$
\frac{f(y)-f(x)}{y-x} \geq \frac{f(x)-f(c)}{x-c} \geq \frac{f(c)-f\left(c_{1}\right)}{c-c_{1}},
$$
showing the set
$$
\left\{\left|\frac{f(y)-f(x)}{y-x}\right|: c \leq x<y \leq d\right\}
$$
is bounded by $M>0 .$ It follows $|f(y)-f(x)| \leq M|y-x|,$ and therefore $f$ is uniformly continuous on $[c, d] .$ Recalling that uniform continuity implies continuity, we have shown that $f$ is continuous on $[c, d] .$ since the interval $[c, d]$ was arbitrary, $f$ is continuous on $(a, b)$. ${}^2$ $\square$
(transcribed from this screenshot)
My questions:
*
*Where did the modulus values in the expression $\left\{\left|\dfrac{f(y)-f(x)}{y-x}\right|\right\}$ come from?
*What about $M=0$? I think that case should also be addressed, although it is trivial. I think the idea is that if $M=0$, then $f$ is constant and hence continuous. But, how can we show that rigorously?
| Since the author found two numbers $\alpha$ and $\beta$ such that you always have, when $c\leqslant x<y\leqslant d$,$$\frac{f(y)-f(x)}{y-x}\leqslant\alpha$$and$$\frac{f(y)-f(x)}{y-x}\geqslant\beta,$$then the set$$\left\{\frac{f(y)-f(x)}{y-x}\,\middle|\,c\leqslant x<y\leqslant d\right\}$$is bounded and therefore the set$$\left\{\left|\frac{f(y)-f(x)}{y-x}\right|\,\middle|\,c\leqslant x<y\leqslant d\right\}$$is bounded too. So, you can take some $M>0$ such that$$c\leqslant x<y\leqslant d\implies\left|\frac{f(y)-f(x)}{y-x}\right|<M.$$And, since you took $M>0$, there is no need to bother with the possibility that $M=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A sum of series with the inverse squared central binomial coefficient A nice challenge by Cornel Valean:
Show that
$$2\sum _{n=1}^{\infty }\frac{2^{4 n}}{\displaystyle n^3 \binom{2 n}{n}^2}-\sum _{n=1}^{\infty }\frac{2^{4 n}}{\displaystyle n^4 \binom{2 n}{n}^2}+\sum _{n=1}^{\infty }\frac{2^{4 n} H_n^{(2)}}{\displaystyle n^2 (2 n+1) \binom{2 n}{n}^2}=\frac{\pi^3}{3}.$$
I have to say that I am not experienced in series involving squared central binomial coefficient, so I leave it for people who are experts in such series.
All approaches are appreciated. Thank you.
| An excellent answer was already given (the chosen one), but good to have more ways in place.
A solution by Cornel Ioan Valean
Instead of calculating all three series separately, we might try to calculate them all at once. So, we have that
$$2\sum _{n=1}^{\infty }\frac{2^{4 n}}{\displaystyle n^3 \binom{2 n}{n}^2}-\sum _{n=1}^{\infty }\frac{2^{4 n}}{\displaystyle n^4 \binom{2 n}{n}^2}+\sum _{n=1}^{\infty }\frac{2^{4 n} H_n^{(2)}}{\displaystyle n^2 (2 n+1) \binom{2 n}{n}^2}$$
$$=\sum _{n=1}^{\infty }\frac{2^{4n} (4n^2-1+n^2 H_n^{(2)})}{\displaystyle n^4 (2 n+1) \binom{2 n}{n}^2}=\sum _{n=1}^{\infty }\frac{2^{4n} (4-1/n^2+ H_n^{(2)})}{\displaystyle n^2 (2 n+1) \binom{2 n}{n}^2}$$
$$=\sum _{n=1}^{\infty }\frac{2^{4n}(4-1/n^2+ H_n^{(2)}\color{blue}{+(4 n^2-1) H_{n-1}^{(2)}}-\color{blue}{(4 n^2-1) H_{n-1}^{(2)}})}{\displaystyle n^2 (2 n+1) \binom{2 n}{n}^2}$$
$$=\sum _{n=1}^{\infty }\frac{2^{4n}(\color{red}{4n^2H_n^{(2)}}-\color{blue}{(4 n^2-1) H_{n-1}^{(2)}})}{\displaystyle n^2 (2 n+1) \binom{2 n}{n}^2}$$
$$=\sum _{n=1}^{\infty}\left(\frac{2^{4n+2}H_n^{(2)}}{\displaystyle (2n+1) \binom{2 n}{n}^2}-\frac{2^{4n}(2n-1)H_{n-1}^{(2)} }{\displaystyle n^2\binom{2 n}{n}^2}\right)$$
$$=\lim_{N\to\infty}\sum _{n=1}^{N}\left(\frac{2^{4n+3}H_n^{(2)}}{\displaystyle (n+1) \binom{2 n+2}{n+1}\binom{2 n}{n}}-\frac{2^{4n-1}H_{n-1}^{(2)} }{\displaystyle n\binom{2 n}{n}\binom{2 n-2}{n-1}}\right)$$
$$=\lim_{N\to\infty}\frac{2^{4N+3}H_N^{(2)}}{\displaystyle (N+1) \binom{2 N+2}{N+1}\binom{2 N}{N}}=\frac{\pi^3}{3},$$
where we used the asymptotic form of the central binomial coefficient, $\displaystyle \binom{2 N}{N}\sim \frac{4^N}{\sqrt{\pi N}}$.
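The telescoped identity can also be checked numerically; here is a quick sketch (the ratio recursion for $r_n = 16^n/\binom{2n}{n}^2$ is mine, used to avoid overflowing floats):

```python
from math import pi

def partial(N):
    # partial sum of the combined series; r tracks 16^n / C(2n,n)^2 via a ratio recursion
    s, H2, r = 0.0, 0.0, 1.0
    for n in range(1, N + 1):
        r *= 4.0 * n * n / (2 * n - 1) ** 2
        H2 += 1.0 / n ** 2
        s += 2 * r / n ** 3 - r / n ** 4 + r * H2 / (n ** 2 * (2 * n + 1))
    return s

print(partial(2000), pi ** 3 / 3)   # the partial sums increase toward pi^3/3
```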
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Prove that this number is divisible by 7 Without using induction, how can it be proved that 7 divides $3^{2n+1}+2^{n+2}$ for each $n\in\mathbb{N}$?
I tried to expand it using $\frac{x^{n+1}-1}{x-1}=1+x+..+x^n$ but I had no success.
It would be great if more than one proof is provided.
| \begin{eqnarray*}
\sum_{n=0}^{\infty} (3^{2n+1}+2^{n+2})x^n = \frac{3}{1-9x}+\frac{4}{1-2x} = \frac{ \color{red}{7} (1-6x)}{(1-9x)(1-2x)}.
\end{eqnarray*}
This function clearly has integer coefficients
\begin{eqnarray*}
\frac{ (1-6x)}{(1-9x)(1-2x)}=(1-6x) \left( 1 +9x+81x^2+ \cdots \right) \left( 1 +2x+4x^2+ \cdots \right).
\end{eqnarray*}
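Independently of the generating-function argument, the claim is easy to sanity-check by brute force:

```python
# check the claim directly for the first few hundred values of n
assert all((3 ** (2 * n + 1) + 2 ** (n + 2)) % 7 == 0 for n in range(300))
print("7 divides 3^(2n+1) + 2^(n+2) for n = 0..299")
```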
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Calculate Grade Points and Credits needed to reach a specific GPA I'm attempting to come up with a formula that solves the amount of Grade Points and Credits Needed to achieve a certain GPA (On a $4.0$ scale). The formula for GPA is Grade Points / Credits.
As an example, let's say I have obtained $18$ Grade Points from $12$ Credits, this would result in me having a GPA of $1.5$ If my target GPA is $2.0$ by the end of the semester, I would need to obtain $12$ more Grade Points from 3 Credits. By the end of the semester I would have a $2.0$ GPA $(18+12)/(12+3) = 30/15 = 2.0$ GPA.
Now, how could I produce a formula to know how many Grade Points and Credits I would need based on my current Grade Points and Credits to achieve a bare minimum of $2.0$ GPA?
| Let $P$ represent the points you currently have, and $C$ represent the credits taken. As you pointed out, your current GPA would be $\frac{P}{C}$.
Then let $P_s$ represent the points you will earn in the next semester, and $C_s$ be the number of credits you will take next semester. After the semester is over, your GPA would be calculated by $\frac{P+P_s}{C+C_s}$.
Since your desired GPA is $2.0$, we are interested in solving $\frac{P+P_s}{C+C_s}=2$.
I think the most beneficial formula for you will come from isolating $P_s$ in the equation above. That formula is $P_s=2(C+C_s)-P$.
Once you know how many credits you have signed up for, you can plug in all the values on the right hand side to compute the number of points you need to get a $2.0$ GPA. You could also change the $2$ in the formula to a different GPA if you wanted.
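The formula translates directly into a small function (the names are mine):

```python
def points_needed(P, C, Cs, target=2.0):
    """Grade points needed over the next Cs credits to reach `target` GPA,
    given P grade points earned over C credits so far."""
    return target * (C + Cs) - P

print(points_needed(18, 12, 3))   # -> 12.0, matching the worked example
```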
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $f: \mathbb N \rightarrow \mathbb N \times \mathbb N$ is such that $f(n)=(n,n+1)$, is it surjective and/or injective? If $f: \mathbb N \rightarrow \mathbb N \times \mathbb N$ is such that $f(n)=(n,n+1)$, is it surjective and/or injective?
I know that it is surjective $\Leftrightarrow \forall (a,b) \in \mathbb N \times \mathbb N\ \exists c \in \mathbb N:f(c)=(a,b)$
It is obviously injective because if $(n,n+1)=(m,m+1) \rightarrow n=m$
I can see that it is not surjective but do not know how to prove it, can I get some help?
| Consider $(1,1)\in\mathbb{N}\times\mathbb{N}$. Suppose for contradiction that there exists $n\in\mathbb{N}$ with $f(n)=(n,n+1)=(1,1)$. Then reading the first entry, we get $n=1$. Reading the second entry, we get $n+1=1\implies n=0$. Clearly we can't have $n=1$ and $n=0$ at the same time. Contradiction. Hence $f$ is not surjective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3795938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
product $\prod \limits_{i=2}^{\infty} (1+\frac{1}{(p_i -2)p_i})$ for primes $p_i$ I want to calculate the product $\alpha= \prod \limits_{i=2}^{\infty} (1+\frac{1}{(p_i -2)p_i})$ for all primes $p_i >2$.
I first calculated this product with a computer and, taking the primes under ten million, get $\alpha=1.5147801192603$.
Analytically, I tried to expand the product as a telescoping sum:
$\alpha=1+\frac{1}{2}(1-\frac{1}{3}+\frac{1}{3}-\frac{1}{5}+\frac{1}{5}-\frac{1}{7}+\frac{2}{45}+\frac{1}{9}-\frac{1}{11}+\frac{2}{105}+\cdots)$. But this doesn't help me. I don't even know if $\alpha$ converges.
| Recall that if $\{ a_n \}$ is positive and $a_n \to 0$ then $\prod (1 + a_n)$ and $\sum a_n$ converge/diverge together. Given that $p_n \sim n \log n$, convergence is clear. So at least that.
As for a closed form: this is actually the reciprocal of the twin primes constant. This appears in many, many conjectures about the prime numbers.
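The numerical value is easy to reproduce; here is a sketch with a simple sieve (the cutoff $10^5$ is arbitrary):

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            for j in range(i * i, n + 1, i):
                flags[j] = False
    return [i for i in range(2, n + 1) if flags[i]]

alpha = 1.0
for p in primes_up_to(10 ** 5):
    if p > 2:
        alpha *= 1 + 1 / ((p - 2) * p)
print(alpha)   # ≈ 1.51478, the reciprocal of the twin-prime constant C2 ≈ 0.6601618
```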
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof that the set $S = \{(x, y)\in \mathbb{R}^2 \mid y=x^2\}$ is closed. Our professor defined that a closed set is a set whose complement is open.
Based on this definition, how do I prove that the set $S = \{(x, y)\in \mathbb{R}^2 \mid y=x^2\}$ is closed?
It makes intuitive sense to me, but I'm unable to pen down a proof.
Thanks in advance!
| I give a solution using the sequence characterization:
$A\subseteq \mathbb{R}^n$ is closed if and only if for each sequence $(x_n)\subseteq A$ such that $x_n \to x$, we have $x \in A$.
Let $z_n=(x_n,y_n)$ be a sequence in $S$ such that $z_n \to (x,y)$. So $y_n=x_n^2, \forall n\in \mathbb{N}$.
Since $z_n \to (x,y)$, we have $x_n \to x$ and $y_n \to y$; then
$$ y = \lim y_n = \lim x_n^2 = x^2 $$
so $(x,y)\in S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
find the distance of point $P(0,0,1)$ from the level surface $f(x,y,z)=0$ of $f(x, y, z)=\left(z-x^{2}\right)(z+3 y)$ given $$f(x, y, z)=\left(z-x^{2}\right)(z+3 y)$$
I am asked to find the distance of the point $p=(0,0,1)$ from the level surface $f(x, y, z)= 0$.
The idea of what I am asked is pretty simple, but how should I execute it?
| Consider:
Objective function $x^2+y^2+(z-1)^2$, which is the square of
the distance from the position $p$, and constrained to
$(z-x^2)(z+3y)=0$.
So, your auxiliary function is $F=x^2+y^2+(z-1)^2+\lambda(z-x^2)(z+3y)$.
Then the equations
$$\frac{\partial F}{\partial x}=0,$$
$$\frac{\partial F}{\partial y}=0,$$
$$\frac{\partial F}{\partial z}=0,$$
together with $(z-x^2)(z+3y)=0$,
are going to give you the points where the squared-distance function
$$x^2+y^2+(z-1)^2,$$
reaches an extremum.
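As a numerical sanity check (not the Lagrange computation itself), note that the level surface splits into the sheets $z = x^2$ and $z = -3y$, so one can compare the distance from $p$ to each sheet directly; the grid resolution below is an arbitrary choice:

```python
from math import sqrt

# sheet z = x^2: at the nearest point y = 0, so minimize over x on a coarse grid
d1 = min(sqrt(x * x + (x * x - 1) ** 2)
         for x in (i / 1000 for i in range(-2000, 2001)))
# sheet z = -3y is the plane 3y + z = 0: point-to-plane distance from (0, 0, 1)
d2 = abs(3 * 0 + 1) / sqrt(3 ** 2 + 1 ** 2)
print(d1, d2, min(d1, d2))   # the minimum is 1/sqrt(10) ≈ 0.316, attained on the plane
```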
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $|-a+\sqrt{a^2-1}|<1<|-a-\sqrt{a^2-1}|$ (where $a>1$) true? Why does $|-a+\sqrt{a^2-1}|<1<|-a-\sqrt{a^2-1}|$ (where $a>1$) hold? I understand that $a>1 \implies 1<|-a-\sqrt{a^2-1}|$ and that $|-a+\sqrt{a^2-1}|<|-a-\sqrt{a^2-1}|$
But I can't see why $a>1 \implies |-a+\sqrt{a^2-1}|<1$.
Does anyone see why? Thank you.
| We define $f(a)=-a+\sqrt{a^2-1}$. You can easily see that its derivative is defined for $a \in [1,+\infty[$ and $f'(a)=\frac{a-\sqrt{a^2-1}}{\sqrt{a^2-1}} >0$ in this interval (just consider the numerator).
Then $f$ is increasing on the interval $[1,+\infty[$, $f(1)=-1$ and $\lim_{a\to \infty} f(a)=0$ (as $\sqrt{a^2-1} \sim \sqrt{a^2}=a$ for sufficiently large $a$).
So $-1<f(a)<0$, hence $|f(a)|<1$, for every $a>1$, as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
How to prove that $\sum_{n=0}^{\infty} (-1)^n \ln \frac{3n+2}{3n+1}=\frac{1}{2} \ln 3$ The sum $$\sum_{n=0}^{\infty} (-1)^n \ln \frac{3n+2}{3n+1}=\frac{1}{2} \ln 3$$ has been encountered in the post below:
How can I prove $\int_{0}^{1} \frac {x-1}{\log(x) (1+x^3)}dx=\frac {\log3}{2}$
I would like to know as to how this sum can be proved.
| Not a solution but a starting point. Use that $\ln(x)$ has the property that $\ln(ab) = \ln(a) + \ln(b)$. So we can rewrite the partial sum as $$\sum_{n=0}^N (-1)^n\ln\Big(\frac{3n+2}{3n+1}\Big) = \ln\Big(\prod_{n=0}^N \Big(\frac{3n+2}{3n+1}\Big)^{(-1)^n}\Big)$$
Since $x\to \ln(x)$ is a continuous function, then if we can show that
$$ \prod_{n=0}^N \Big(\frac{3n+2}{3n+1}\Big)^{(-1)^n} \to \sqrt{3}$$
as $N\to\infty$, then we will have the result. It looks like J.G.'s answer above will be helpful for this part.
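Numerically the partial products do settle on $\sqrt{3}$; here is a quick sketch (the cutoff $N = 10^6$ is arbitrary):

```python
from math import sqrt

def partial_product(N):
    # prod_{n=0}^{N} ((3n+2)/(3n+1))^{(-1)^n}
    p = 1.0
    for n in range(N + 1):
        r = (3 * n + 2) / (3 * n + 1)
        p = p * r if n % 2 == 0 else p / r
    return p

print(partial_product(10 ** 6), sqrt(3))   # agree to about six decimal places
```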
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
If $M$ is a standard class model of ZFC isomorphic to $V$, then is $M = V$? Consider the following statement: (T) "If $M$ is a standard class model of ZFC isomorphic to $V$, then $M = V$." The statement (T) is equivalent to: "If the transitive collapse of a standard class model $M$ of ZFC is equal to $V$, then $M = V$." This is because the transitive collapse of a class $M$ is the unique transitive class that is elementhood-wise isomorphic to $M$.
Here, by standard class model of ZFC I mean a class model of ZFC whose elementhood relation is the real elementhood relation.
Assume that ZFC is consistent. Does ZFC prove (T)? Does ZFC disprove (T)? If no to both, does ZFC with some additional large cardinal axiom disprove (T)?
| No. Define $F:V\to V$ by $\in$-recursion as $F(x)=\{F(y):y\in x\}\cup\{\emptyset\}$. Clearly $F(x)$ is nonempty for all $x$. Also, $F$ is injective: if $F(x)=F(x')$, then by induction on $\max(\operatorname{rank}(x),\operatorname{rank}(x'))$ we may assume $F$ is injective on $x\cup x'$. Since $F(x)=F(x')$ we must have $\{F(y):y\in x\}=\{F(y):y\in x'\}$, but since $F$ is injective on $x\cup x'$ this implies $x$ and $x'$ have the same elements and thus $x=x'$. Also clearly $y\in x$ implies $F(y)\in F(x)$, and the converse follows from injectivity of $F$.
Taken all together, this shows that $F$ is an isomorphism from $(V,\in)$ to $(M,\in)$ where $M$ is the image of $F$. But $M\neq V$, since $\emptyset\not\in M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Find height of irregular trapezoid with known angles and surface area
KNOWN:
*
*Length DC
*Alpha
*Beta
*Surface S
NEEDED:
*
*Height h
For an algorithm, I require a way to solve this for any trapezoid. Sort of like this question (Given a known isosceles Trapezoid find height of another with same angles & one base but different area) but not with the isosceles restriction.
Just like in that question, I effectively have all information about a larger trapezoid with identical angles and DC as well, but I think the only gain I get from that are the angles.
I have racked my brain for a while now without success.
Going off of the formula for surface:
S = h * ((AB + DC)/2)
I could end up for the formula:
h = (2*S) / (AB + DC)
But this hardly helps because I do not know AB.
Formulas based on the angles also always required both DC and AB, or alternatively the lengths of the legs.
Another idea I had was to split trapezoid into two right triangles and one square because solving the problem appears to be easier for each in particular.
But after implementing half of that, I realized that I have no way of knowing what the desired surface area of each figure would be...
Is there a known solution to this?
Huuge thanks in advance!
| This seems like a problem best done using trig. Consider:
Draw a vertical line upward from $D$ to a point $E$ on $AB$. Do the same downward from $B$ to $F$ on $CD$.
We know $\overline{DE}$ and $\overline{BF}$ are equal to h. $\overline{BE}$ and $\overline{DF}$ are some unknown distance $d$.
As you noted, the area is the sum of the rectangle and two triangles, which is $$S = dh + S(\Delta BFC) + S(\Delta ADE)$$
And we can find our lengths for the new segments
$$\overline{CF} = \frac{h}{\tan \beta}$$ $$\overline{AE} = h \tan (\alpha - 90°) = h \tan \gamma$$
I'm just throwing gamma as a sub for alpha - 90° in there for ease of reading. And all this means $$ S = dh + \frac{1}{2}\frac{h^2}{\tan \beta} + \frac{1}{2}h^2 \tan \gamma $$
Well, that's one equation in two variables. We need at least one more. Thankfully we know the length $\overline{CD}$, and it has to be:
$$ \overline{CD} = d + \frac{h}{\tan \beta}$$
Two last substitutions give
$$ S = h\left(\overline{CD}-\frac{h}{\tan \beta}\right) + \frac{1}{2}\frac{h^2}{\tan \beta} + \frac{1}{2}h^2 \tan \gamma $$
$$ S = h\cdot\overline{CD } + h^2\left(\frac{1}{2}\tan \gamma - \frac{1}{2 \tan \beta}\right)$$
And I'm not going to go through the quadratic equation with that using variables, so plug in your actual numbers at this point.
Hope that helps! Going to quickly double-check my steps though.
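Solving the final quadratic in $h$ gives a small routine; this is a sketch under the answer's conventions ($\alpha$ obtuse so that $\gamma = \alpha - 90°$ makes sense, angles in degrees), and it takes the smaller positive root:

```python
from math import tan, radians, sqrt

def height(S, CD, alpha_deg, beta_deg):
    # S = CD*h + A*h^2  with  A = (tan(alpha - 90°) - 1/tan(beta)) / 2
    A = (tan(radians(alpha_deg - 90)) - 1 / tan(radians(beta_deg))) / 2
    if abs(A) < 1e-12:                  # quadratic term vanishes -> linear case
        return S / CD
    return (-CD + sqrt(CD * CD + 4 * A * S)) / (2 * A)   # smaller positive root

# sanity check: alpha = 135°, beta = 45° makes A = 0, so h = S / CD
print(height(20, 10, 135, 45))          # -> 2.0
```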
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3796866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
For $A$ any subset of vector space $X$, is it true that $A+A=2A$? It seems a trivial question:
Prove/disprove: If we have a vector space $X$, then for any subset $A$ of $X$, we have $A+A =2A$.
It seems that $2A$ is always a subset of $A+A$, but I don't think $A+A$ is a subset of $2A$.
I am thinking of the set of integers modulo $p$, for $p$ a prime, as a counterexample.
Am I right?
| Consider $A = \{v, -v\}$ for some vector $v\ne 0$. Then $0 = v + (-v) \in A+A$, but (as long as the scalar field has characteristic $\ne 2$, e.g. for real vector spaces) $0 \notin 2A = \{2v, -2v\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In what direction should the airplane fly not to miss the airport? Here is the question:
A plane sights an airport at $[N30°E]$ and is travelling at a speed of 500km/hr. There is a wind from $[N30°W]$ at a speed of 25km/hr. Determine the reading the airplane must travel in order not to miss the airport.
Options are:
a) $[S18°W]$
b) $[N45°E]$
c) $[S14°W]$
d) $[N18°E]$
Here is how I approached the problem, although my answer doesn't match up with any answer provided. Maybe someone can spot my error?
Assume the airplane resides at the origin. Let $\vec{v}=\Big<500\cos(\theta),500\sin(\theta)\Big>$ denote the velocity of the airplane, where $\theta$ is an angle to be determined. We can represent the velocity of the wind by $$\vec{u}=\Bigg<25\cos(-\pi/3),25\sin(-\pi/3)\Bigg>=\Bigg<\frac{25}{2},-\frac{25\sqrt3}{2}\Bigg>$$
To answer this question, we need to find the angle $\theta$ which makes the resultant vector $\vec{v}+\vec{u}$ point in the direction of the airport, which is $$\Bigg<\cos(\pi/3),\sin(\pi/3)\Bigg>=\Bigg<\frac{1}{2},\frac{\sqrt3}{2}\Bigg>$$
In other words, we need to solve the equation for $\theta$. $$\frac{\vec{v}+\vec{u}}{||\vec{v}+\vec{u}||}=\Bigg<\frac{1}{2},\frac{\sqrt3}{2}\Bigg>$$
This yields $\theta \approx 1.091$ which is equivalent to $N27.5^{\circ} E$. Can anyone help me figure out what I'm doing wrong? Thank you.
| I would leave a comment but I do not have the reputation to do so.
I believe your problem is with the wind vector.
The wind is coming from $N30°W$, which means it is blowing in a direction $S60°E$. This would make both components of the wind vector negative.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solver for finding fixpoints of a boolean system Intro: The Problem
My problem relates to solving a system of equations that finds the fixpoints of the boolean system under study ($F(X)=X$).
A Simple Example
Let $\bar{x}=(x_1,x_2,x_3)$, with each $x_i \in \{0,1\}$, be some boolean variables of interest.
Let $F=\{f_1,f_2,f_3\}$ be the update functions for these variables (i.e. $x_i(t+1) = f_i(x(t))$), defined as the following condition functions (they are based on the inputs of each variable - node in the corresponding graph conceptualization):
$f_1 = \begin{cases}
1, -x_2 \ge 0 \\
0, \text{otherwise}
\end{cases}$,
$f_2 = \begin{cases}
1, x_1-x_3 \ge 0 \\
0, \text{otherwise}
\end{cases}$,
$f_3 = \begin{cases}
1, x_1+x_3 \ge 0 \\
0, \text{otherwise}
\end{cases}$
These functions are inspired by this question.
The goal is to find assignments $\bar{x}$ (there could be zero, one, or many) for which $F(\bar{x})=\bar{x}$.
The above example is of course a very simple case.
In the end I would like to solve such system of equations with hundreds of variables.
Note that the condition functions will always be linear combinations of each variable's inputs and the variables always boolean.
The Question
I need an efficient solver for this kind of problem (which is known to be NP-hard btw!). E.g. can this problem be formulated as a constraint program and solved using Answer Set Programming (ASP) techniques?
| So it turns out that ASP can be used to solve this problem! Here I provide a possible encoding of the problem in a file named fp.lp:
% variables
var(1..3).
% functions
% f(Function,Coefficient,Variable)
f(1,-1,2).
f(2,1,1).
f(2,-1,3).
f(3,1,1).
f(3,1,3).
% guess assignment to variables
{ init(V) : var(V) }.
% compute functions
next(F) :- var(F), #sum { C,V : f(F,C,V), init(V) } >= 0.
% check if fixed point
:- init(V), not next(V).
:- next(V), not init(V).
#show next/1.
#show init/1.
Using clingo (version 5.4.0) from the command line: clingo fp.lp we get UNSATISFIABLE for this particular instance.
Commenting out the fact f(2,1,1). (i.e. prefixing it with % so the line reads %f(2,1,1).) and running the clingo solver again, we get:
Answer: 1
next(3) init(1) init(3) next(1)
SATISFIABLE
Variables that are returned both in the init/1 and next/1 predicates are translated to active boolean variables (1) and those that are missing to inactive values (0).
The returned result is thus the boolean vector $\bar{x}=\{x_1=1,x_2=0,x_3=1\}$.
Credits for this answer go to Roland Kaminski.
Various other members of the ASP potassco community provided helpful comments and solutions.
For more info on ASP, check: https://potassco.org/
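For instances this small, the ASP result can also be cross-checked by brute force; here is a Python sketch (not part of the ASP workflow):

```python
from itertools import product

# the 3-variable instance: f1 = [-x2 >= 0], f2 = [x1 - x3 >= 0], f3 = [x1 + x3 >= 0]
def F(x):
    x1, x2, x3 = x
    return (int(-x2 >= 0), int(x1 - x3 >= 0), int(x1 + x3 >= 0))

fixed = [x for x in product((0, 1), repeat=3) if F(x) == x]
print(fixed)   # -> []  (no fixed point, matching clingo's UNSATISFIABLE)
```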
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$(a+1)(b+1)(c+1)\leq4$ for triangle sides $a,b,c$ with $ab+bc+ac=1$
Given that $a,b,c$ are the lengths of the three sides of a triangle, and $ab+bc+ac=1$, the question is to prove $$(a+1)(b+1)(c+1)\leq4\,.$$
Any idea or hint would be appreciated.
This is Problem 6 of Round 1 of the BMO (British Mathematical Olympiad) 2010/2011, as can be seen here.
Remark. This question has been self-answered. Nevertheless, any new approach is always welcome!
| OK first let's expand the bracket
$(a+1)(b+1)(c+1)=abc+ab+ac+bc+a+b+c+1$.
Now we know that $ab+ac+bc=1$ so we actually need $abc+a+b+c+1 \leq 3$ or $abc+a+b+c \leq{2}$.
Since $a,b$ and $c$ form the sides of a triangle, we know that $a \leq b+c$ and $b \leq a+c$ and $c \leq a+b$.
I found it hard to progress from here and wondered if the result was actually true so did a thought experiment. Let us say $a,b$ and $c$ are all equal to $1/\sqrt{3}$. This would be an equilateral triangle and $ab+bc+ac=1/3+1/3+1/3=1$.
Then $(a+1)(b+1)(c+1)=abc+ab+ac+bc+a+b+c+1$=
$1/3 \sqrt{3}+1/3+1/3+1/3+1/\sqrt{3}+1/\sqrt{3}+1/\sqrt{3}+1=$
$1/3\sqrt{3}+1+\sqrt{3}+1$.
Which needs to be $\leq{4}$
Iff $1/3\sqrt{3} +\sqrt{3} \leq2$
iff $1/3+3 \leq 2\sqrt{3}$. Which is true.
Let's take another extreme case: if $a$ and $b$ are just under $1$ and $c$ is close to $0$, then we can also have $ab+ac+bc=1$. Here $(a+1)(b+1)(c+1)$ will also be just under $4$, so I believe that the inequality is correct. I have reduced the problem to showing $abc+a+b+c \leq{2}$ but don't know how to do that right now. I'll think about it. But we haven't yet used the triangle inequalities, so I suspect they are needed.
Not being able to finish it is killing me :)
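For what it's worth, a random numerical search supports the claim; here is a sketch (the sampling scheme is an arbitrary choice of mine):

```python
import random

# sample triangles, rescale so ab+bc+ca = 1, and track the largest product seen
random.seed(0)
worst = 0.0
for _ in range(100000):
    a, b = random.uniform(0.01, 1), random.uniform(0.01, 1)
    c = random.uniform(abs(a - b), a + b)       # triangle inequality
    s = (a * b + b * c + c * a) ** 0.5          # rescale: ab+bc+ca becomes 1
    a, b, c = a / s, b / s, c / s
    worst = max(worst, (a + 1) * (b + 1) * (c + 1))
print(worst)   # stays below 4
```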
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Check the convergence of the series $\displaystyle{\sum_{n=1}^{+\infty}\frac{\left (n!\right )^2}{\left (2n+1\right )!}4^n}$ I want to check if the following series converge or not.
*
*$\displaystyle{\sum_{n=1}^{+\infty}\frac{\left (n!\right )^2}{\left (2n+1\right )!}4^n}$
I suppose we have to find here an upper bound and apply then the comparison test. But I don’t really have an idea which bound we could take. Could you give me a hint?
*
*$\displaystyle{\sum_{n=1}^{+\infty}\frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2\cdot 4\cdot 6\cdot \ldots \cdot 2n}}$
We have a term that is a product of the form $\frac{2i-1}{2i}=1-\frac{1}{2i}$. To apply the comparison test we have to find an upper bound. Does it hold that $1-\frac{1}{2i}\leq \frac{1}{2}$, and so $$\prod_{i=1}^n\left (1-\frac{1}{2i}\right )\leq \prod_{i=1}^n \frac{1}{2}=\frac{1}{2^n}$$ Then taking the sum we get $$\sum_{n=1}^{+\infty}\frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2\cdot 4\cdot 6\cdot \ldots \cdot 2n}\leq \sum_{n=1}^{+\infty} \frac{1}{2^n}=1$$ So from the comparison test the original sum must converge also.
Is everything correct?
*
*$\displaystyle{\sum_{n=1}^{+\infty}\frac{1\cdot 3\cdot 5\cdot \ldots \cdot (2n-1)}{2\cdot 4\cdot 6\cdot \ldots \cdot 2n\cdot (2n+2)}}$
We have a term that is a product of the form $\frac{2i-1}{2i+2}$. Which upper bound could we use in this case?
| Some hints:
For first we can use Raabe's test
$$n\left(\frac{a_n}{a_{n+1}}-1 \right) = \frac{n}{2(n+1)}$$
For second
$$\frac{1}{2\sqrt{n}} \leqslant \frac{1}{2} \frac{3}{4} \cdots \frac{2n-1}{2n} \leqslant \frac{1}{\sqrt{2n}}\quad (1)$$
Proof:
For $n=1$ we have $\frac{1}{2} \leqslant \frac{1}{2} \leqslant \frac{1}{\sqrt{2}} $, so let's assume $n \geqslant 2$. We have
$$\frac{3}{4}>\frac{2}{3}, \frac{5}{6}>\frac{4}{5},\frac{7}{8}>\frac{6}{7}, \cdots, \frac{2n-1}{2n}>\frac{2n-2}{2n-1}$$
Multiplying these inequalities gives
$$\frac{3}{4} \frac{5}{6} \cdots \frac{2n-1}{2n} > \frac{2}{3} \frac{4}{5} \cdots \frac{2n-2}{2n-1}$$
Now, multiplying both sides by the left-hand side and telescoping the resulting product on the right, we have
$$\left( \frac{3}{4} \frac{5}{6} \cdots \frac{2n-1}{2n} \right)^2 > \frac{1}{n} $$
Taking square roots and multiplying by $\frac12$ gives the left side of (1).
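These hints are easy to check numerically; below is a small Python sketch of my own (not part of the hints) that verifies the Raabe ratio for the first series, and the bounds $(1)$ for the second on sample values of $n$. Note that the Raabe limit $\frac12<1$ indicates divergence of the first series, and the left bound in $(1)$ gives divergence of the second by comparison with $\sum \frac{1}{2\sqrt n}$.

```python
import math

def a(n):
    # terms of the first series: (n!)^2 * 4^n / (2n+1)!
    return math.factorial(n) ** 2 * 4 ** n / math.factorial(2 * n + 1)

for n in (1, 5, 20):
    raabe = n * (a(n) / a(n + 1) - 1)
    assert abs(raabe - n / (2 * (n + 1))) < 1e-9  # Raabe ratio, limit 1/2

def p(n):
    # partial product (1/2)(3/4)...((2n-1)/(2n)) of the second series
    out = 1.0
    for i in range(1, n + 1):
        out *= (2 * i - 1) / (2 * i)
    return out

for n in (1, 2, 5, 100, 1000):
    assert 1 / (2 * math.sqrt(n)) <= p(n) <= 1 / math.sqrt(2 * n)  # bounds (1)
print("Raabe ratio and bounds (1) check out on the sampled n")
```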
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to describe the unit ball as an Intersection of hyperplanes? How can one describe the unit ball in $\mathbb{R}^{3}$ as an intersection of supporting halfspaces?
| Let $B_3\subset \mathbb{R}^3$ denote the unit ball. By supporting halfspace, I assume you mean an affine half space containing $B_3$ such that its boundary contains at least one boundary point of $B_3$. Taking
$$B_3=\bigcap_{|x|=1}\{x+z:\langle z,x\rangle\leq 0\}$$
should work. Here is a good picture of the lower dimensional case; can you visualize what this looks like in $\mathbb{R}^3$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are Riemann integrable functions the pointwise limit of continuous functions?
Given a function $f$ that is Riemann integrable on $[a,b]$, does there exist a sequence of continuous functions $\{f_n\}_{n=1}^\infty$ that converges to $f$ pointwise everywhere on $[a,b]$?
If I just require pointwise almost everywhere, this follows from the fact that continuous functions are dense in $L^1[a,b]$ and norm convergence yields a subsequence that converges a.e. This is an exercise in Krantz, Real Analysis and Foundations (4th ed., p. 153); I have not been able to prove it, and when I queried the author he could not provide a proof either. However, I cannot find a counter-example, either.
| Everywhere? no. Almost everywhere, yes.
A pointwise limit of a sequence of continuous functions is said to be a function of Baire class $1$. Baire proved many properties of such functions. In particular, if $E$ is a nonempty perfect set, then the restriction of $f$ to $E$ has a point of continuity.
Consider the following function $f$. Let $[a,b] = [0,1]$. Let $C$ be the middle-thirds Cantor set. So $C$ is a closed set of measure zero. Define $f: [0,1] \to \mathbb R$ as follows.
$\bullet \;f(x) = 0$ on $[0,1]\setminus C$.
$\bullet\;f(x) = 0$ on the endpoints
of the open intervals in $[0,1]\setminus C$.
$\bullet\;f(x) = 1$ elsewhere, uncountably many remaining points of $C$.
First note that $f$ is continuous at every point of $[0,1]\setminus C$,
a set of measure $1$, so $f$ is Riemann integrable.
But also note that the restriction of $f$ to the nonempty perfect set $C$ has no point of continuity: both $\{x \in C : f(x) = 0\}$ and $\{x \in C : f(x) = 1\}$ are dense in $C$. So $f$ is not of Baire class $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Two elements in the set differ by 3 or more
Call a set of integers sparse if any two elements in the set differ by at least 3. Find the number of sparse subsets of $\{1, 2, 3, \dots, 12\}.$ (Both $\emptyset$ and one-element sets are sparse, to my understanding.) For example, $\{1, 5, 9, 12\}$ is a sparse set, since any two of its elements differ by $3$ or more.
I was thinking of a way using recursion. Here's my approach:
Call $a_n$ the number of sparse sets in the set of integers {$1, 2, 3, \dots, n$}. If we look at the set {$1, 2, 3, \dots, n-1$}, there are $a_{n-1}$ sparse sets. If we assign $n-1$ to a sparse set, when we incorporate $n$, then we can either
*
*include $n$ in the sparse set
*include $n$ and remove $n-1$ in the sparse set
*keep it as it is.
There are $3$ cases, each with the same value, so we have $a_n = 3a_{n-1}$ so far.
However, I don't know how to continue, and I'm not even sure if my current approach is correct.
| Let $S_n$ be the set of sparse subsets on $\{1..n\}$. Then $S_0 = \{\emptyset\}$, $S_1 = \{\emptyset, \{1\}\}$, $S_2 = \{\emptyset, \{1\}, \{2\}\}$, and in general
$S_{n + 3} = S_{n + 2} \cup \{A \cup \{n + 3\} : A \in S_n\}$
Define $F_n = |S_n|$. Then we see that $F_0 = 1$, $F_1 = 2$, $F_2 = 3$, and $F_{n + 3} = F_{n + 2} + F_n$.
To efficiently calculate $F_n$, we note that, defining
$M = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$
We have
$M \begin{pmatrix} F_{n + 2} \\ F_{n + 1} \\ F_n \end{pmatrix} = \begin{pmatrix} F_{n + 3} \\ F_{n + 2} \\ F_{n + 1} \end{pmatrix}$
And consequently, by induction on $n$, we have
$M^n \begin{pmatrix} F_{2} \\ F_{1} \\ F_0 \end{pmatrix} = \begin{pmatrix} F_{n + 2} \\ F_{n + 1} \\ F_{n} \end{pmatrix}$
Calculating $M^n$ will take $O(\log n)$ multiplications.
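For the actual question ($n = 12$) the recurrence is easy to verify against brute force. Here is a small Python sketch of my own (not part of the answer); the matrix trick above is only needed for very large $n$, so the sketch just iterates the recurrence directly.

```python
from itertools import combinations

def brute_force(n):
    """Count subsets of {1..n} in which any two elements differ by >= 3."""
    count = 0
    for k in range(n + 1):
        for sub in combinations(range(1, n + 1), k):
            # tuples from combinations are sorted, so checking
            # consecutive gaps suffices
            if all(b - a >= 3 for a, b in zip(sub, sub[1:])):
                count += 1
    return count

def recurrence(n):
    # F_0, F_1, F_2 = 1, 2, 3 and F_{n+3} = F_{n+2} + F_n
    f = [1, 2, 3]
    for _ in range(3, n + 1):
        f.append(f[-1] + f[-3])
    return f[n]

print(recurrence(12))  # 129 sparse subsets of {1,...,12}
```

Both methods agree, giving $F_{12}=129$.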
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3797959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Reference request: Sheaf for the Zariski topology In my course notes page 62 I've read the following: let $(E,0)$ be an elliptic curve over an arbitrary scheme $S$, then $U\rightarrow\ker(\,0^*_U:\operatorname{Pic}(E_U)\rightarrow \operatorname{Pic}(U))$ is a sheaf for the Zariski topology.
What is "sheaf for the Zariski topology" ? So far I know what is a sheaf and what is Zariski topology. I'll be thankful for any references in this subject.
| You can define a sheaf $\cal F$ with values in a category $\mathbf{C}$ on any topological space $X$ by letting ${\cal F}(U)$ be an object in $\mathbf{C}$ for any open set $U\subset X$ so that the usual axioms are met.
E.g. see this Wikipedia entry.
To fix ideas you may think that $\mathbf{C}$ is the category of groups, so that ${\cal F}(U)$ is a group for every open $U\subset X$ and the restriction maps
$$
\rho_{U,V}:{\cal F}(V)\longrightarrow{\cal F}(U)
$$
will be homomorphisms for every inclusion $U\subset V$ of open sets.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Triangle greater than (probability) This one is a follow-up to my previous question, but a different problem, and this one should have a more interesting answer. I don't really know how to approach this problem, let alone reach a solution, so again help is appreciated.
Question:
You have a circle with radius $R$, and three points are chosen uniformly at random inside this circle.
What is the probability that the three points form a triangle with an area greater than $\displaystyle \frac{R^2}{5}$?
Edit: Is anyone trying, or has anyone maybe found, an approach that might work? Are there any similar problems you've seen before that could serve as a guide towards solving this one? What do you consider to be the difficulties? I literally don't have any idea of where to even start.
| This isn't an answer, but just a simulation. I get the approximate value
$$P(A\geq \frac{1}{5}) \approx 0.45$$
Here is my Sage-code if someone wants to check it. It agrees with the mean value of mathworld
def randPt():
r = random()**0.5 #sqrt to make it uniform
a = random()*2*float(pi)
return (r*cos(a), r*sin(a))
def simuTriArea():
a,b,c = [randPt() for _ in range(3)]
return 0.5*abs(a[0]*b[1] + b[0]*c[1] + c[0]*a[1] - b[0]*a[1] - c[0]*b[1] - a[0]*c[1])
#points([randPt() for _ in range(1000)]).show(aspect_ratio=1)
simuN = 100000
triAreas = [simuTriArea() for _ in range(simuN)]
print ("simulated P(A>0.2): %f" % (sum(1 for a in triAreas if a>0.2) / float(simuN),) )
print ("mean A: %f" %mean(triAreas))
graph = Graphics()
graph += histogram(triAreas, density=True, bins=50)
maxArea = float(3*3**0.5 / 4)
#graph += plot(???, xmin=0, xmax=maxArea)
graph.show()
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finite field with nonprime cardinality In my textbook for discrete mathematics the following is stated:
Theorem: $\mathbb Z_p$ is a field if and only if $p$ is prime.
In the following we denote the field with $p$ elements by $GF(p)$ rather than $Z_p$. As explained later, "$GF$" stands for Galois field. Galois discovered finite fields around 1830.
However, there is a field with $p=4$ elements (right?) and clearly $4$ is not a prime. I think that I am misunderstanding something fundamentally. Is it maybe that there are fields other than $\mathbb Z_p$ (what is the spoken name of this set?) that do not need to be of prime cardinality?
| Yes, there are finite fields other than $\Bbb Z_p$. The cardinal of such a field is always the power of a prime number. And, yes, there is a field with $4$ elements. It can be defined as $\Bbb Z_2[x]/\langle x^2+x+1\rangle$.
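To make the field with $4$ elements concrete, here is a small Python sketch of my own illustrating $\Bbb Z_2[x]/\langle x^2+x+1\rangle$: elements are stored as 2-bit integers (bit $i$ is the coefficient of $x^i$), multiplication is a carry-less (mod-2) polynomial product followed by reduction via $x^2 = x+1$.

```python
def gf4_mul(a, b):
    """Multiply two elements of GF(4) = Z_2[x]/(x^2 + x + 1)."""
    prod = 0
    for i in range(2):            # carry-less (mod-2) polynomial product
        if (b >> i) & 1:
            prod ^= a << i
    if prod & 0b100:              # reduce using x^2 = x + 1
        prod ^= 0b111
    return prod

elements = [0b00, 0b01, 0b10, 0b11]   # 0, 1, x, x+1
# Every nonzero element has a multiplicative inverse: that is what
# makes this quotient ring a field.
inverses = {a: next(b for b in elements[1:] if gf4_mul(a, b) == 1)
            for a in elements[1:]}
print(inverses)  # {1: 1, 2: 3, 3: 2}: x and x+1 are inverses of each other
```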
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Form an 8 letter word using A,B,C,D,E, if the letters in the word must appear in alphabetical order Form an 8 letter word using A,B,C,D,E, where each letter can be used multiple times. How many words can I form if the letters in the word must appear in alphabetical order?
For example: AABBDDDE is acceptable, BBBACCCE is not acceptable.
The only way I can think of to count this is to draw a table with the number of occurrences of each letter, then calculate the permutations of the letter positions for each row.
Is there an easier way to solve this question?
| Observe that a word of the required type is uniquely determined by the numbers of occurrences of the letters A, B, C, D and E. The problem is therefore the same as asking in how many ways you can write $8$ as an ordered sum of $5$ nonnegative integers, and the answer is ${12\choose 8}$; do you know why?
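In Python, this bijection is exactly what `itertools.combinations_with_replacement` enumerates (the nondecreasing 8-tuples over the alphabet), which makes the count easy to confirm; a quick sketch of mine:

```python
from itertools import combinations_with_replacement
from math import comb

# Words with letters in alphabetical order are exactly the nondecreasing
# 8-tuples over {A,...,E}, i.e. the multisets of size 8 from 5 letters.
words = list(combinations_with_replacement("ABCDE", 8))
print(len(words), comb(12, 8))  # both equal 495
```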
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Morley rank of group This is an example from S.Buechler's book Essential Stability Theory.
Let $M$ be the group $\bigoplus_{i<\omega}(\mathbb{Z}_4)_{i}$ with $\mathbb{Z}_4=\mathbb{Z}/4\mathbb{Z}$. Suppose $M^{*}$ is the monster model of $Th(M)$. My questions are the following:
*
*Why $2M^{*}$ is a vector space over $\mathbb{Z}_2$?
*Why the Morley rank of $M^{*}$ is 2?
Any hints or comments to my questions are welcomed. Thank you!
| For the first question: $M$ is an abelian group of exponent $4$, and hence so is $M^*$. It follows that $2M^*$ is an abelian group of exponent $2$. An abelian group of prime exponent $p$ is a vector space over the $p$-element field. (More generally, an abelian group of exponent $n>0$ is a $\mathbf Z/n\mathbf Z$-module.)
Regarding your second question, I suppose you want to compute the Morley rank of $M$ as a pure abelian group. The subgroup $2M\leq M$ is infinite, so it has Morley rank at least $1$, and $[M:2M]$ is infinite, so $M$ has Morley rank at least $2$.
To obtain the upper bound, it is enough to show that $M/2M$ and $2M$ are both rank $1$. But that follows from the fact that (for each of them), the induced structure is that of a vector space over the two-element field, and as such, they are strongly minimal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Continuous, injective map between annuli, but with a "hole" in the image In $\mathbb R^n$ let $B_r$ be the open ball with center zero and radius $r$. For $r\in (0,1)$ let $A_r = \overline{B_1}\setminus B_r$. Let $r,s\in (0,1)$ and assume that $F : A_r\to A_s$ is continuous and injective such that $F(\partial B_1) = \partial B_1$ and $F(\partial B_r) = \partial B_s$. Is it possible that the image of $F$ contains a hole in $A_s$, i.e., $F(A_r) = A_s\setminus U$, where $U$ is a connected open set?
| No, that's not possible. Let's pick $r = s$, and in fact work with an annulus $A$ of inner radius $1$ and outer radius $2$. And let's pick a point $P$ in the set $U$, so that $P$ is a point of $A$ such that $P \notin F(A)$. So we have our map,
$$
F : A \to A
$$
whose image misses $P \in A \subset \Bbb R^2$. Define
$$
\gamma_c
$$
to be a path starting at $(1,0)$ and travelling in a straight line to $(1+c, 0)$, then traversing a circle of radius $1+c$ counterclockwise, and then returning to $(1,0)$, so that
$$
\gamma_c(t) = \begin{cases}
(1 + 3ct, 0) & 0 \le t \le \frac13 \\
((1+c)\cos(6\pi(t-\frac13)), (1+c)\sin(6\pi(t-\frac13))) & \frac13 \le t \le \frac23\\
(1 + c - 3c(t-\frac23), 0) & \frac23 \le t \le 1
\end{cases}
$$
Then $\gamma_0$ and $\gamma_1$ are homotopic loops in $\pi_1(A, a)$, where $a = (1, 0)$. And $\gamma_c$ is homotopic to these for all values of $0 \le c \le 1$. In particular, the loop $\alpha$ defined by $\gamma_0$ followed by the reverse of $\gamma_1$ is null-homotopic in $\pi_1(A,a)$. That means that $F \circ \alpha$ is nullhomotopic in $F(A)$, hence (by inclusion) in $\pi_1(\Bbb R^2 \setminus \{P\}, F(a)) = \Bbb Z$.
But $F \circ \alpha$ winds once around the point $P$ (OK, that takes a little proving, but not much), hence represents a nonzero element of $\pi_1(\Bbb R^2 \setminus \{P\}, F(a))$, which is impossible, because $F_\star$ is a homomorphism of groups, and cannot send $0$ to a generator.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3798982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How do I prove that $\log(1/b)^x= -\log b^x$? How do I prove that $\log(1/b)^x= -\log b^x$?
I am working on logarithm properties, and I have come across one of the power rules, where the base is a fraction. I'm struggling to justify the steps and prove that the two expressions are equal. Can someone help?
|
You want to show:
$$\log(1/b)^x= -\log b^x$$
We need to use two Theorems for this.
$\bullet~$Theorem 1: Let $a, b > 0$. Then $\log(a/b) = \log(a) - \log(b)$
Proof: Let, $u = \log a$ and $v = \log b$. Let the base be $10$ wlog. Therefore from the definition of $\log$, we have that
$$ \log a = u \implies 10^u = a \quad \text{ and } \quad \log b = v \implies 10^v = b $$
Hence, $$ \frac{10^u}{10^v} = 10^{u - v} = \frac{a}{b} \implies u - v = \log\left(\frac{a}{b} \right) = \log a - \log b $$
$\bullet~$Theorem 2: Let $a > 0$ and $m \in \mathbb{R}$. Then $\log(a^m) = m\log(a)$
Proof: Let $u = \log (a^m)$ then we have that
$$ u = \log (a^m) \implies 10^u = a^m \implies 10^{u / m} = a \implies \frac{u}{m} = \log (a) \implies u = \log(a^m) = m \log (a) $$
Thus by Theorem 1 and Theorem 2 we have $$\log(1/b)^x= x (\log(1) - \log(b) ) = -x \log (b) = -\log b^x$$
Edit: $\log 1 = 0$.
proof: Let's take $u > 0$. Then
$$ u^0 = 1 \implies 0 \cdot \log(u) = \log(1) \implies \log(1) = 0 $$
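As a quick numerical spot check of the identity (a sketch of mine, using base $10$ and a few arbitrary sample values of $b$ and $x$):

```python
import math

# verify log((1/b)^x) == -log(b^x) on a handful of sample values
for b in (2.0, 10.0, 0.5):
    for x in (1.0, 2.5, -3.0):
        lhs = math.log10((1 / b) ** x)
        rhs = -math.log10(b ** x)
        assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("log((1/b)^x) == -log(b^x) on all sampled (b, x)")
```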
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Lemma of open mapping theorem Kreyszig Revising the proof of the Kreyszig open mapping theorem, more specifically the proof of Lemma 4.12-3, the central idea is to find an open ball $B(0,r)$ such that there exists $B(0,\delta)\subseteq T(B(0,r))$.
In the book they prove it for the case $r = 1$,
however it is valid for any $r$.
I understand the proof correctly, however in the final part I have a question that may be a bit subtle.
The sequence $z_n$ converges to $x$, i.e.
$$ \sum_{n=1}^{\infty}{x_n} = x $$
So
$$ \lim_{n\to \infty} \Vert z_n \Vert = \Vert x \Vert $$
Then, since $\Vert x_n \Vert < 1/2^n$ for all $n$, the book says:
$$ \Vert x \Vert = \lim_{n\to \infty} \Vert z_n \Vert < \sum_{n=1}^{\infty} \frac{1}{2^n} = 1 $$
and concludes that $\Vert x \Vert < 1$. My question is simple: shouldn't it be
$$ \Vert x \Vert = \sum_{n=1}^{\infty}\Vert x_n \Vert \leq \sum_{n=1}^{\infty} \frac{1}{2^n} = 1 $$
that is $\Vert x \Vert \leq 1$.
As I said at the beginning, it doesn't matter if we prove it for $r = 2$; that still doesn't affect the proof of the theorem. However, I think we cannot say that $\Vert x \Vert < 1$.
| Suppose that $$\tag1\sum_k\|x_k\|=\sum_k2^{-k}.$$ Then
$$
0=\sum_k\tfrac1{2^k}-\sum_k\|x_k\|=\sum_k\big(\tfrac1{2^k}-\|x_k\|\big).
$$
This last series has non-negative terms; actually, all of its terms are positive. That is, the sequence of partial sums is positive and increasing, so its limit cannot be zero. The contradiction shows that the equality $(1)$ is impossible. It would have been enough for a single term to be nonzero, to guarantee the strict inequality.
On a separate note, you say that $\sum_k\|x_k\|=\|x\|$. That's rarely true. You have the inequality $\|x\|\leq\sum_k\|x_k\|$, though, so the proof has no issue.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sum of an empty set and a finite set. Can we define the sum of an empty set and a finite set?
For example -
If $A= \{1,2\} , B= \emptyset$
Then what is $A+B$.
My intuition says it should be $A$.
But I couldn't find any proper reason behind it.
| If by $A+B$ you mean $\{\,a+b\mid a\in A,b\in B\,\}$, then if $B$ is empty there is no $b\in B$ so there are no elements in $A+B$. "$B$ is empty" is very different from "zero is in $B$".
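In code (a Python sketch of mine), the elementwise-sum interpretation makes the distinction immediate:

```python
def sumset(A, B):
    """The sumset {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

A = {1, 2}
print(sumset(A, set()))  # set(): B is empty, so there are no sums at all
print(sumset(A, {0}))    # {1, 2}: "zero is in B" is a different situation
```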
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find the angles of triangle $NPQ$.
$ABC$ is a triangle. $ACM$ and $BCN$ are equilateral triangles, where $M$ and $N$ lie outside the triangle. $P$ is the center of $ACM$. $Q$ is the midpoint of $AB$. Then find the angles of the triangle $NPQ$.
I need the solution using homothety. I have already solved the problem, but I have not been able to get the solution with homothety.
My solution (in short):
Let's take a point $R$ on line $PQ$ with $PQ=QR$. Triangles $APQ$ and $BQR$ are congruent. Also notice that triangles $NCP$ and $NQR$ are congruent. Now it's not hard to see that $NPR$ is an equilateral triangle. Thus the answer is $30°,60°,90°$.
| Let $D$ be the midpoint of $BC$. Since $$\angle PCN = \angle QDN = 90+\gamma$$ and $${PC \over QD} = {CN\over DN} = {2\over \sqrt{3}}$$ we see that $\triangle PCN\sim \triangle QDN$, so the spiral similarity at $N$ takes $\triangle PCN$ to $\triangle QDN$. But this spiral similarity induces a new spiral similarity, also centered at $N$, which takes $\triangle CDN$ to $\triangle PQN$, so they have the same angles.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The wrong way of finding the average distance between two points on a circle I was trying to find the average distance between two points on a circle and got the following result.
Why is my method wrong?
| Let $P_1$ be fixed while $P_2$ moves around a circle. The distance between these points is:
$$s=2R\sin{\alpha \over 2}$$
...where $\alpha$ represents the central angle corresponding to points $P_1,P_2$
Because of symmetry we can check only one half of the circle to calculate the average distance:
$$d=\frac{\int s\,dl}{\int dl}$$
$$d=\frac{\int_0^\pi 2R\sin\frac\alpha2\cdot R\,d\alpha}{R\pi}$$
$$d=\frac{-4R^2\cos\frac\alpha2\Big|_0^\pi}{R\pi}=\frac{4R}{\pi}$$
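The closed form $4R/\pi$ is easy to confirm with a crude numerical average of the chord length $2R\sin(\alpha/2)$ over $\alpha\in[0,2\pi)$ (a Python sketch of mine; averaging over the full circle gives the same value as over the half used above, by symmetry):

```python
import math

R, N = 1.0, 100_000
# Riemann-sum average of the chord length 2R*sin(alpha/2) over the
# central angle alpha in [0, 2*pi), sampled at alpha = 2*pi*k/N
avg = sum(2 * R * math.sin(math.pi * k / N) for k in range(N)) / N
print(avg, 4 * R / math.pi)  # both are about 1.2732
```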
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
The solution of the indefinite integral contains an error (I know correct answer), but I cannot find it There is an exercise on indefinite integral in some infinitesimal calculus book:
$$
\int \sqrt{x^{2} +1} \cdot dx
$$
The solution uses the first substitution x = sinh u and after some transformations the book gets the result:
$$
\frac{1}{2} \cdot \left( x\sqrt{x^{2} +1} +\ln\left( x+\sqrt{x^{2} +1}\right)\right) +C
$$
MAXIMA shows me a result:
$$
\frac{\operatorname{asinh}(x)}{2}+\frac{x\sqrt{x^{2}+1}}{2}
$$
It seems to be the same, but the question is: the book says that there is another solution via a substitution with tan. I tried to solve it with the tan-substitution and got a very different result. Please show me where my error is:
$$
\int \sqrt{x^{2} +1} \cdot dx=\int \sqrt{\tan^{2} a +1} \cdot \frac{da}{\cos^{2} a} =\int \sqrt{\frac{1}{\cos^{2} a}} \cdot \frac{da}{\cos^{2} a} =\int \frac{da\cdot \cos a}{\cos^{4} a} =\int \frac{d(\sin a)}{\left(\cos^{2} a\right)^{2}} =
$$
$$
= \int \frac{d(\sin a)}{\left( 1\ -\ \sin^{2} a\right)^{2}} =\int \frac{dt}{\left( 1-t^{2}\right)^{2}} =\int \frac{e^{u} \cdot du}{\left( 1-e^{2\cdot u}\right)^{2}} =\frac{1}{2} \cdot \int \frac{2\cdot e^{u} \cdot du}{\left( 1-e^{2\cdot u}\right)^{2}} =\frac{1}{2} \cdot \int \frac{d\left( e^{2\cdot u}\right)}{\left( 1-e^{2\cdot u}\right)^{2}} =
$$
$$
= \frac{1}{2} \cdot \int \frac{dz}{( 1-z)^{2}} =\frac{1}{2} \cdot \int \frac{d( z-1)}{( z-1)^{2}} =\frac{1}{2} \cdot \int \frac{dv}{v^{2}} =-\frac{1}{2\cdot v} +C
$$
And then I'm trying to return back to the x through v -> z -> u -> t -> a -> x variables "back"-substitutions:
$$
= -\frac{1}{2\cdot ( z-1)} +C=-\frac{1}{2\cdot \left( e^{2\cdot u} -1\right)} +C=\frac{1}{2\cdot \left( 1-e^{2\cdot \ln t}\right)} +C=\frac{1}{2\cdot \left( 1-t^{2}\right)} +C=\frac{1}{2\cdot \left( 1-\sin^{2} a\right)} =
$$
$$
= \frac{1}{2\cdot \cos^{2} a} +C=\frac{1}{2\cdot \cos(\arctan x) \cdot \cos(\arctan x)} +C=\frac{1}{2\cdot \frac{1}{\sqrt{1+x^{2}}} \cdot \frac{1}{\sqrt{1+x^{2}}}} +C=\frac{1+x^{2}}{2} +C
$$
| You made a mistake going from $u$ to $z$: if $z = e^{2u}$, then $dz = 2e^{2u}\,du$, where you take it to be $2e^u\,du$. In fact, if you look at just what happens when you go from $t$ to $z$, you have replaced $t^2$ by $z$, but also replaced $dt$ by $dz$, where it should be $2\sqrt z\,dz$.
You could have stopped at $t$: once you have $\int \frac{dt}{(1-t^2)^2}$, you can take the partial fraction decomposition, and finish with no more substitutions.
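One quick way to see that $(1+x^2)/2$ cannot be right is to differentiate both candidate antiderivatives numerically (a Python sketch of mine, not part of the answer):

```python
import math

def book_F(x):    # the book's antiderivative
    return 0.5 * (x * math.sqrt(x * x + 1) + math.asinh(x))

def wrong_F(x):   # the result of the flawed substitution chain
    return (1 + x * x) / 2

def num_deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
print(num_deriv(book_F, x), math.sqrt(x * x + 1))  # agree: ~1.6401
print(num_deriv(wrong_F, x))                       # gives x = 1.3 instead
```

The book's formula differentiates back to $\sqrt{x^2+1}$; the flawed chain's result differentiates to $x$.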
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
One point compactification of $\mathbb{R}^{n}$ is homeomorphic to $S^{n}$ I'd like to understand the first proof given to me of the fact that the one point compactification of $\mathbb{R}^{n}$ is homeomorphic to $\mathbb{S}^{n}$.
The proof goes as follows : there is an initial remark about $i: \mathbb{R}^{n} \longrightarrow \mathbb{S}^{n}$ being an open embedding (with the identification of $\mathbb{R}^{n}$ as $\mathbb{S}^{n}-\left\lbrace x_{0}\right\rbrace$) and then it states that we only have to proof that the euclidean topology of $\mathbb{S}^{n}$ coincides with the Alexandrov topology on the compactification of $\mathbb{R}^{n}$.
I don't understand how checking the conditions on open subsets is sufficient to deduce the homeomorphism. Are we using some uniqueness of the Alexandrov topology?
However, I know there is a much simpler way, which is to prove in general that if a topological space $X$ is compact and Hausdorff, then it is homeomorphic to the one-point compactification of $X$ minus a point; but I'm interested in understanding this one.
Any help or hint would be appreciated.
| Let $K$ be a compact Hausdorff space, $a\in K$, $K'=K\setminus\{a\}$
and $K'^+=K'\cup\{\infty\}$ be the one-point compactification of $K'$. Then $\phi:K'^+\to K$ given by inclusion on $K'$ and $\phi(\infty)=a$ is a homeomorphism.
One just has to prove that $\phi$ is continuous, since then $\phi$ is a continuous
bijection from a compact space to a Hausdorff space, it must be a homeomorphism.
Continuity of $\phi$ at all points of $K'$ is obvious. What about continuity at $\infty$?
If $U$ is an open neighbourhood of $a$ in $K$ then $\phi^{-1}(U)=(U\setminus\{a\})\cup\{\infty\}$. The complement of $\phi^{-1}(U)$ is $K\setminus U$ which is a compact
subset of $K\setminus\{a\}$, so $\phi^{-1}(U)$ is open by the definition of the
topology on the one-point compactification.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3799861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $\{x\}\in \mathbb{B}(X)$ for every $x\in X$ Let $(X, \tau)$ be a Hausdorff space, and let $\mathbb{B}(X)$ be the Borel $\sigma$ algebra on $X$. The question is,
Is it true that, if $x\in X$, then $\{x\}\in \mathbb{B}(X)$?
The reason why I ask is because of the previous post I made; the answer shows that one can determine a Radon measure at $\{x\}$, but I need to verify that $\{x\}\in \mathbb{B}(X)$.
| Since $X$ is Hausdorff $\{x\}$ (a singleton) is closed and $U = X \setminus \{x\}$ is open. Since $B(X)$ is a $\sigma$-algebra, it's closed under taking the complement:
$$
B(X) \ni X \setminus U = X \setminus ( X \setminus \left\{ x\right\} ) = \left\{ x \right\}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Complex Analysis proof problem I have a feeling this is a bad proof, because I am assuming my conclusion and going in circles. This isn't allowed in mathematics, so I need help to fix my steps.
So the statement I am proving is $|ab| = |a||b|$.
So I started with:
Let a,b $\in \mathbb{C}$
I claimed $|ab|^{2} = |a|^{2} |b|^{2}$ is true
so I went from the LHS to the RHS by using the modulus of a complex number, which satisfies $|z|^{2}$ = $zz^{*}$
$|ab|^{2} = aa^{*} (bb^{*})$ = $(ab)(ab)^{*}$ = $|ab|^{2}$
then I took the square root of $|ab|^{2} = |a|^{2} |b|^{2}$ to get $|ab| = |a||b|$, because the modulus is never negative, and we only take the positive root.
This is where I believe this proof is written wrong, because I am assuming my claim is true (unproven, to be exact) and it repeats again. How do I fix this?
| $|ab|^{2}=(ab)(ab)^{*}=aba^{*}b^{*}=(aa^{*}) (bb^{*})=|a|^{2}|b|^{2}$. Now take square root.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Solving $\left(\frac{x}{10}\right)^{\log(x)-2}<100$ How to solve the following inequality?
$$\left(\frac{x}{10}\right)^{\log(x)-2}<100$$
The solution given is $x\in(1, 1000)$
I considered some things in my solving, but I couldn't get the solution to the problem. I would like to know if those assumptions were wrong.
First, I considered $$\log(x)-2 = \log(x)-\log(100) = \log\left(\frac{x}{100}\right)$$
I did proceed
$$\left(\frac{x}{10}\right)^{\log(x)-2}<100 \Longleftrightarrow \left(\frac{x}{10}\right)^{\log\left(\frac{x}{100}\right)}<100 \Longleftrightarrow \frac{x^{\log(\frac{x}{100})}}{\frac{x}{100}}<100 \Longleftrightarrow \frac{100x^{\log(\frac{x}{100})}}{x}<100$$
From $\log(x), x>0$, therefore I can multiply both sides by $x$
$$x^{\log(\frac{x}{100})}<x \Longleftrightarrow \log \left(\frac{x}{100}\right)<1 \Longleftrightarrow \frac{x}{100}<10 \Longleftrightarrow \boxed{x<1000}$$
| More directly, one can write the sequence of equivalent inequalities $$10^{(\log(x)-1)(\log(x)-2)} = \left(\frac{x}{10}\right)^{\log(x)-2}<100=10^2 \\ (\log(x)-1)(\log(x)-2) < 2 \\ \log(x)(\log(x)-3) < 0 \\ 0 < \log(x) < 3 \\ 1 < x < 1000$$
As for your solution, it is fine until
$$x^{\log(\frac{x}{100})}<x \Longleftrightarrow \log \left(\frac{x}{100}\right)<1$$ which is only true for $x>1$. If instead $0<x<1$, then we'd have $$x^{\log(\frac{x}{100})}<x \Longleftrightarrow \log \left(\frac{x}{100}\right)>1$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A question on linear maps itself Here is just a sample problem.
Suppose that $V$ is finite dimensional and $S, T \in \mathcal{L}(V)$. Prove that $ST = I$ if and only if $TS = I$
Proof : Suppose that $ST = I$. The identity map $I$ is invertible, so by problem 3.22 both $S$ and $T$ are invertible. Multiply $ST = I$ on the right by $T^{−1}$ to get $S = T^{−1}$. We then have $TT^{−1} = TS = I$. Of course the implication $TS = I$ implies $ST = I$ follows by reversing
the roles of $S$ and $T$.
My question: As I understand linear maps, they are functions that map vectors from one vector space to another, and function composition may not be commutative, i.e. $f \circ g \neq g \circ f$ in general. I am confused about why they can treat the function as some sort of matrix, multiplying on the left or right sides of functions.
Am I misunderstanding the concept of linear maps or am I missing alternate definitions ?
Thank you for your help, very much appreciated.
| Maybe not a good explanation, but since $V$ is a vector space over $\mathbb K$ of finite dimension, say $n$, we can always view $L(V)$ as the matrix algebra $M_n(\mathbb K)$.
The isomorphism is constructed by calculating $f(v_j)$ where $\{v_1,...,v_n\}$ is the basis of $V$, and decomposing them into $f(v_j)=a_{1j}v_1+...+a_{nj}v_n$. Then $(a_{ij})_{i,j}$ is the matrix form of $f$.
The only place where the finite-dimensional condition is used is "The identity map I is invertible, so by problem 3.22 both S and T are invertible"; the rest of the proof has nothing to do with it. And, whether $f\circ g=g\circ f$ or not, multiplying on both sides preserves equality (associativity may be needed).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If $f(x)=\sin^{-1} (\frac{2x}{1+x^2})+\tan^{-1} (\frac{2x}{1-x^2})$, then find $f(-10)$ Let $x=\tan y$, then
$$
\begin{align*}\sin^{-1} (\sin 2y )+\tan^{-1} \tan 2y
&=4y\\
&=4\tan^{-1} (-10)\\\end{align*}$$
Given answer is $0$
What’s wrong here?
| Let $\tan^{-1}\dfrac{2x}{1-x^2}=u\implies-\dfrac\pi2<u<\dfrac\pi2$
$\tan u=\dfrac{2x}{1-x^2}$
$\implies\sec u=+\sqrt{1+\left(\dfrac{2x}{1-x^2}\right)^2}=\dfrac{1+x^2}{|1-x^2|}$ (the positive square root, since $-\dfrac\pi2<u<\dfrac\pi2$)
$\sin u=\dfrac{\tan u}{\sec u}=\text{sign of}(1-x^2)\cdot\dfrac{2x}{1+x^2}$
$\implies u=\sin^{-1}\left(\text{sign of}(1-x^2)\cdot\dfrac{2x}{1+x^2}\right)$
So if $1-x^2<0\iff x^2>1, u=\sin^{-1}\left(-\dfrac{2x}{1+x^2}\right)=-\sin^{-1}\left(\dfrac{2x}{1+x^2}\right)$
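A quick numerical check of the given answer $f(-10)=0$ (not part of the original post): for $|x|>1$ the two principal-value corrections cancel, so the sum evaluates to $0$.

```python
import math

def f(x):
    # f(x) = arcsin(2x/(1+x^2)) + arctan(2x/(1-x^2)),  x != ±1
    return math.asin(2*x / (1 + x*x)) + math.atan(2*x / (1 - x*x))

print(f(-10))   # ~0: for x < -1 the two branch corrections cancel
```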
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
There are given $n$ points on plane. Prove that there are not more than $n$ pairs of vertices, distance between which is exactly $d$ $\textbf{Source:}$I found this question in aopslink
As you can see in this link it doesn't mention any source either.
$\textbf{Question:}$There are given $n$ points on plane. Let $d$ be the biggest distance between any pair of vertices. Prove that there are not more than $n$ pairs of vertices, distance between which is exactly $d$
I tried to use induction. The base case is obvious. Assuming the result is true for $n$ points, I tried to show that it also holds for $n+1$ points. Now, if I could show that there is one point which makes at most one pair with distance $d$, I would be done. So, assume otherwise that every point belongs to at least two pairs whose distance is $d$. I could not progress any further.
I would appreciate some hint or solution. Thanks in advance.
| Let $G$ denote the graph on the $n$ vertices, where two vertices share an edge if and only if the distance between them is $d$. Let $k$ denote the number of edges in $G$. We wish to show that $k\leq n$.
Let $G'$ denote the graph obtained by repeatedly removing all vertices $v\in G$ with $\deg v\leq1$, so that the number of edges removed is no greater than the number of vertices removed. Then it suffices to show that $k'\leq n'$, where $n'$ and $k'$ denote the numbers of vertices and edges of $G'$, respectively.
Suppose toward a contradiction that $\deg v\geq3$ for some $v\in G'$. The pairwise distance between the neighbours of $v$ is also at most $d$, and hence all neighbours of $v$ lie on a circular arc of radius $d$ centered at $v$ of at most $\tfrac\pi3$ radians. Let $w_1,w_2\in G'$ be the two neighbours of $v$ that are furthest apart, and $w\in G'$ any other neighbour of $v$. The following image clarifies the situation:
The four circles are centered at $v$, $w_1$, $w_2$ and $w$ and all have the same radius $d$. It follows that all other vertices of $G'$ are contained in the region shaded red. In particular the only vertex in $G'$ at distance $d$ from $w$ is $v$. But then in $G'$ we have $\deg w=1$, a contradiction. This shows that $\deg v\le2$ for all $v\in G'$ and hence that $k'\leq n'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Complete first order theories While studying Hodges' A shorter model theory I came across this observation:
Given a first order language $L$, we say that an $L$-theory $T$ is complete if $T$ has models and any two of its models are elementary equivalent. [...] the compactness theorem implies that any complete theory in $L$ is equivalent (i.e. has the same models) to a theory of the form $\text{Th}(A)$ for some $L$-structure $A$.
Now, I don't see how the compactness theorem comes into the picture. Why do we need it? Given the definition of complete theories it is immediate to me that a complete theory is equivalent to the theory of one of its models. What am I missing?
Thanks!
| This is indeed a mistake. As Nagase says, it's not present in the original ("big") model theory book. My suspicion is that Hodges added it after mixing up two notions of completeness: "satisfiable and all models are elementarily equivalent" versus "contains each sentence or its negation." Using the latter sense of completeness we do indeed need compactness to identify complete theories with theories of structures.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
You sample from a uniform distribution $[0, d]$ $n$ times. What is your best estimate of $d$, using on the variance of the samples drawn? Suppose I have $X_1, X_2, ..., X_n$ where $X \sim \text{Uniform}[0,d]$.
Since, $E[X]$ = $\frac{d}{2}$, an obvious estimator for $d$ would be $2\cdot\bar{X}$, where $\bar{X}$ is the sample mean of $X$.
How would you go about estimating $d$ using the sample variance instead? I know that $Var[X] = \frac{d^2}{12}$. If you were to estimate $d$ using $\sqrt{12\cdot S^2}$ where $S^2$ is the sample variance of X, you would get a biased estimate of $d$ due to Jensen's inequality. Is there some "correction" you can add to this estimate to correct for the bias?
| With a random sample from $\mathsf{Unif}(0,\delta),$ if you insist on using the sample variance $S^2$ to estimate $\delta$ you can do it, but it isn't the best way to estimate $\delta.$
Notice that the variance of $\mathsf{Unif}(0,\delta)$ is
$\sigma^2 = \delta^2/12,$ so the method of moments estimator
is $\tilde \delta = 2\sqrt{3}\,S,$ where $S$ is the sample standard deviation.
Let's try it with a huge sample of size $n = 1000$ from $\mathsf{Unif}(0, 10)$ simulated in R. The estimate is
$\tilde\delta = 10.007.$
set.seed(2020)
d = 10; x = runif(1000, 0, d); s = sd(x)
MME.d = sqrt(12)*s; MME.d
[1] 10.00703
However, one can show that the unbiased maximum likelihood
estimator of $\delta$ is $\hat\delta = \frac{n+1}{n}X_{(n)},$
where $X_{(n)}$ is the maximum observation: For the large
dataset above this is $\hat \delta = 10.003.$
(1001/1000)*max(x)
[1] 10.00316
For samples of small and moderate size the unbiased MLE
is often noticeably better.
size $n = 20$ from $\mathsf{Unif}(0,\, \delta=10).$
set.seed(826)
m = 10^4; n = 20; x = runif(m*n, 0,10)
MAT = matrix(x, nrow=m) # each row a sample of 20
mme = sqrt(12)*apply(MAT, 1, sd)
mle = ((n+1)/n)*apply(MAT, 1, max)
mean(mme); var(mme)
[1] 9.955564
[1] 1.16755 # larger variance
mean(mle); var(mle)
[1] 10.00105
[1] 0.227096 # smaller variance
Both estimators are (nearly) unbiased, but the unbiased MLE has a
much smaller variance $0.23$ compared with $1.17$ for the MME.
Plots of the simulated distributions of the estimators are
shown below:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3800899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating Multivariable Limit $\lim\limits_{(x,y) \to (0,2)} \frac{\sin(xy)}{x}$ Question: Evaluate the limit $$\lim\limits_{(x,y) \to (0,2)} \frac{\sin(xy)}{x}$$
My first thought is that the limit looks a lot like the single variable $\lim\limits_{x \to 0} \frac{\sin(x)}{x} = 1$. Regardless of what $y$ is (as long as it is real) $xy \to 0$. Hence I am wrongly concluding that the entire limit evaluates to $1$. I guess the rate of convergence is not the same as in the single variable case, hence it may not be 1. However, I am unsure how to evaluate it properly.
| Your idea is correct but needs a small correction: since $xy\to 0$ we have that
$$\lim\limits_{(x,y) \to (0,2)} \frac{\sin xy}{x}=\lim\limits_{(x,y) \to (0,2)} \frac{\sin xy}{xy}\cdot \frac{xy}{x}=\lim\limits_{(x,y) \to (0,2)} \frac{\sin xy}{xy}\cdot y=1\cdot2=2$$
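A quick numerical sanity check of the value $2$ along a few paths (my own sketch, not part of the answer):

```python
import math

def f(x, y):
    return math.sin(x * y) / x

# approach (0, 2) along a few different paths; every value tends to 2
for t in [1e-2, 1e-4, 1e-6]:
    print(f(t, 2.0), f(-t, 2.0 + t), f(t, 2.0 - 3*t))
```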
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Updates of Serge Lang — Differential manifolds I have Serge Lang — differential manifolds. An interesting read. But the book is 50 years old. Are there newer books that give a better and more comprehensive treatment of the material, or this the best of its kind?
| I don't have much experience with Lang's book, but some other books that are in vogue amongst graduate students right now are:
*
*John Lee, Introduction to Smooth Manifolds
*Loring Tu, An Introduction to Manifolds
*Guillemin and Pollack, Differential Topology
*Milnor, Topology from the Differential Viewpoint
*Do Carmo, Riemannian Geometry
*Bott and Tu, Differential Forms in Algebraic Topology
*Milnor, Morse Theory
The first four are of a more introductory nature, while the last three draw on material from the first four.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Counterexample to: If $|f''(x)|\leq m$, then $|f''(0)|+|f''(a)|\leq am$
Attempt towards a counter-example:
Consider $f:[0,1]\to \mathbb{R}$, $f(x)=-(x-\frac{1}{2})^4$.
$f$ takes its largest value at $x=0.5 \in [0,1]^o$, i.e. the interior of $[0,1]$.
$f''(x)=-12(x-\frac{1}{2})^2$, and $|f''(x)|\leq 3$, $\forall x \in [0,1]$.
Now, $|f''(0)|=|f''(1)|=3$, so $|f''(0)|+|f''(1)|=6\nleq 3=am$.
Does this example work?
Kindly VERIFY
| Your counterexample is correct. The statement is (trivially) correct if $a \ge 2$ and wrong if $a < 2$. As a counterexample one can choose any twice-differentiable function $f: [0, a] \to \Bbb R$ which has a maximum in the interior of the interval and where $f''$ attains its maximum both at $x=0$ and $x=a$. So another choice would be $f(x) = -(x-a/2)^2$.
On the other hand, one can prove that $|f'(0)|+|f'(a)|\leq am$, using the fact that $f'(c) = 0$ at the point $c$ where $f$ attains its maximum.
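The claimed bounds in the counterexample are easy to confirm numerically (a verification sketch I am adding, not part of the original answer):

```python
def fpp(x):
    # second derivative of f(x) = -(x - 1/2)^4
    return -12.0 * (x - 0.5) ** 2

# |f''(x)| <= 3 on [0, 1] ...
grid = [i / 1000.0 for i in range(1001)]
assert max(abs(fpp(x)) for x in grid) <= 3.0
# ... yet |f''(0)| + |f''(1)| = 6 > a*m = 1*3
print(abs(fpp(0.0)) + abs(fpp(1.0)))  # 6.0
```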
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The derivative $\frac{\mathrm d}{\mathrm dx} x^x=x^x\left(\ln x+1\right)$ is problematic for $x<0$ To take the derivative of $x ^ x$, we write
$$\dfrac {\mathrm d}{\mathrm dx} x^x=\dfrac {\mathrm d}{\mathrm dx} e^{\ln x^x}=\dfrac {\mathrm d}{\mathrm dx} e^{x\ln x}= e^{x\ln x}× \dfrac {\mathrm d}{\mathrm dx}(x\ln x)=x^x\left(\ln x+1\right)$$
Here is my problem:
If $x\in\mathbb{Z^-}$, then $x^x\in\mathbb {R}$. But, $\ln x \not\in\mathbb {R}.$
Because, $\ln x$ is defined only in the set of positive real numbers.
If, $x \not\in\mathbb {Z^{-}}$ and $x\in\mathbb{R^{-}}$, then $x^x\in\mathbb {C}$ and $\ln x \in\mathbb {C}.$
But, the problem occurs if $x\in\mathbb{Z^-}.$
So, $x^x=e^{x\ln x}$ doesn't hold for all real numbers. This makes the derivative result suspicious.
Where is the problem?
| The differentiability of a function can only be found if it is continuous in an interval $(a,b)$.
$x^x$ is continuous only for $x > 0$. For $x<0$, the graph can only be drawn for some discrete points. Differentiability is not defined for this part of the graph.
$$\frac{\mathrm d (x^x)}{\mathrm{d}x}=x^x(\ln x+1)\quad \forall\quad x\in \mathbb{R}^+$$
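A quick finite-difference check of the formula on the positive axis (a Python sketch of my own):

```python
import math

def xpow(x):
    return x ** x

def d_formula(x):
    # claimed derivative: x^x * (ln x + 1), valid for x > 0
    return x**x * (math.log(x) + 1.0)

x, h = 2.0, 1e-6
numeric = (xpow(x + h) - xpow(x - h)) / (2 * h)
print(numeric, d_formula(x))  # both close to 6.7726
```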
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Differential multi-variable function limit using polar coordinates does not work Given:
$$ f(x,y) = \frac{xy^3}{x^2 + y^6}$$
Is it differentiable at $(0,0)$ ?
I said no, as it is not even continuous by the path $x = y^3$
However, when we go to polar view that is: $x = r \cos(\theta) , y = r \sin(\theta)$
we get that $$\lim_{r \rightarrow 0^+} \frac{r^4 \cos(\theta) \sin^3 (\theta)}{r^2(\cos^2(\theta) + r^4 \sin^6(\theta))}$$ and thus:
$$\lim_{r \rightarrow 0^+} \frac{r^2 \cos(\theta) \sin^3 (\theta)}{\cos^2(\theta) + r^4 \sin^6(\theta)}$$
And we can just plug in $r = 0$ and get that it is indeed continuous...
What is wrong with this way? I don't understand, as we are taught to use this way to prove or disprove continuity every time, but I did not check if it actually worked or not! I assumed this way works every time, so why does it fail here?
| The problem is that $\lim_{r \rightarrow 0^+} \frac{r^2 \cos(\theta) \sin^3 (\theta)}{\cos^2(\theta) + r^4 \sin^6(\theta)}$ is not always determinate, contrary to what you probably think. What happens if $\theta$ assumes a value which makes the denominator tend to $0$ as $r\to 0$? You have the indeterminate form $(0/0)$.
As it turns out, particularly in this case, the above limit depends upon $\theta$, i.e., the direction through which you approach $(0,0)$. Try doing $r\to 0$ along $\theta =\pi/2$. See what happens.
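Numerically, the path dependence is easy to see: along $y=x$ the function tends to $0$, while along the curved path $x=y^3$ (which corresponds to $\theta\to\pi/2$ in the polar form) it is constantly $1/2$. A small sketch:

```python
def f(x, y):
    return x * y**3 / (x**2 + y**6)

# straight-line path y = x: values tend to 0
print([f(t, t) for t in [1e-1, 1e-3, 1e-5]])
# curved path x = y^3 (theta -> pi/2 in polar form): constant 1/2
print([f(t**3, t) for t in [1e-1, 1e-3, 1e-5]])
```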
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interchanging calculus operations with pi product In the domain of convergence, we can interchange derivatives, sums and integrals but what about $ \prod$ and previous operations?
For example,
$$ \frac{d}{dx} \sum_{j=1}^{n} f_j(x) = \sum_{j=1}^{n} \frac{d}{dx}f_j(x)$$
and,
$$ \frac{d}{d u^i} \int_{a}^{b} F(u_1,u_2,u_3..) du_j = \int_{a}^{b} \frac{d}{du^i} F(u_1,u_2,u_3..) du_j $$
for $ i \neq j$
But, how would I interchange product and derivative like:
$$ \frac{d}{dx} \prod_{i=1}^{i=n} f_i(x)=?$$
One indirect way I did was this:
$$ g(x) = \prod_{i=1}^{i=n} f_i(x)$$
Take log of both sides and then,
$$ g'(x) = g(x) \sum_{i=1}^{i=n} \frac{ f_i^{'}(x) }{ f_i (x)}$$
Would there be more direct interchanges as I had shown before/ alternate proofs of this identity?
| The formula for the derivative of a product gives you
$$\frac{d}{dx} \prod_{j=1}^{n} f_i(x)= \sum_{j=1}^n f'_j(x)\prod_{i=1 \\ i\neq j}^{n} f_i(x).$$
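The generalized product rule can be checked against a central finite difference (a small Python sketch with functions of my own choosing):

```python
import math

def prod_derivative(fs, dfs, x):
    """d/dx of f_1(x)*...*f_n(x) via the generalized product rule."""
    total = 0.0
    for j in range(len(fs)):
        term = dfs[j](x)          # f_j'(x) ...
        for i in range(len(fs)):
            if i != j:
                term *= fs[i](x)  # ... times all the other factors
        total += term
    return total

fs  = [math.sin, math.exp, lambda t: t**2]
dfs = [math.cos, math.exp, lambda t: 2*t]

x, h = 0.7, 1e-6
exact = prod_derivative(fs, dfs, x)
prod = lambda t: fs[0](t) * fs[1](t) * fs[2](t)
numeric = (prod(x + h) - prod(x - h)) / (2 * h)
print(exact, numeric)  # the two values agree closely
```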
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Computing the limit of a sinc-like function Compute $\displaystyle{\lim_{k\to∞} \int_0^\infty \frac{k\sin(x/k)}{x^{3/2}} dx}$ .
Let $f_k = \frac{k\sin(x/k)}{x^{3/2}}$. By L'Hopital, $\displaystyle{\lim_{k\to \infty} f_k(x)} = \frac{1}{x^{3/2}}$, which is not Lebesgue integrable on $(0, 1]$ since the improper Riemann integral diverges to infinity. So if I can 'push the limit in the integral', I should be able to conclude that the limit diverges to infinity. I suspect that $f_k$ do indeed converge to $\frac{1}{x^{3/2}}$ uniformly on $(0, 1]$, but I am having trouble proving it. Am I on the right track to finding this limit?
| Note that $0\le \sin (x)\le x$ for $x\ge 0$. So, $\lim_{k\to \infty}\frac{k\sin(x/k)}{x^{3/2}}=\frac1{x^{1/2}}$.
Aside, enforcing the substitution $x/k\mapsto x$, we find that
$$\int_0^\infty \frac{k\sin(x/k)}{x^{3/2}}\,dx=\sqrt{k}\int_0^\infty \frac{\sin(x)}{x^{3/2}}\,dx=\sqrt{2\pi k}\to \infty$$
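The closed form $\int_0^\infty \sin(x)\,x^{-3/2}\,dx=\sqrt{2\pi}$ used above can be checked numerically; substituting $x=t^2$ removes the endpoint singularity. This check is mine, not part of the original answer:

```python
import math

# substituting x = t^2 turns the integral into 2 * ∫_0^∞ sin(t^2)/t^2 dt,
# whose integrand is bounded near 0, so a midpoint rule converges nicely
def J(B, steps=50_000):
    h = B / steps
    return sum(2.0 * math.sin(t * t) / (t * t)
               for i in range(steps)
               for t in [(i + 0.5) * h]) * h

approx = J(50.0)
print(approx, math.sqrt(2.0 * math.pi))  # both ~2.5066
```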
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relationship between the symmetric difference of two sets and their intersection How do we prove that the union of symmetric difference of two sets and their intersection is the same as the union of the two sets?
i.e. $(A\backslash B) \cup (A\cap B) \cup (B\backslash A) = A\cup B$
where $A, B$ are two sets?
I know how it is obvious. But I want a rigorous proof from definitions of difference and union and intersection.
| Maybe you were looking for a "computational" proof:
$$\underbrace{(A\backslash B)\cup(A\cap B)}_{A}\cup(B\backslash A)=A\cup (B\backslash A)=A\cup B.$$
In the first equality, I am using the fact that for any sets $A$ and $B$, we have
$$A=(A\backslash B)\cup (A\cap B).$$
Intuitively, this means that everything in $A$ either belongs to $B$ or it doesn't. For the second equality, it's just
$$A\cup (B\backslash A)=A\cup (B\cap A^c)=(A\cup B)\cap (A\cup A^c)=(A\cup B).$$
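The identity can also be machine-checked exhaustively over all subsets of a small universe (a quick sketch, not a substitute for the proof):

```python
from itertools import chain, combinations

def subsets(s):
    # all subsets of s, as tuples
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

U = {0, 1, 2, 3, 4}
for A in map(set, subsets(U)):
    for B in map(set, subsets(U)):
        assert (A - B) | (A & B) | (B - A) == A | B
print("identity verified for all 32 x 32 pairs of subsets")
```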
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3801970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
one-relator groups which are free-by-cyclic I am reading an article of Baumslag:
Baumslag, Gilbert, "Finitely generated cyclic extensions of free groups are residually finite." Bull. Austral. Math. Soc. 5 (1971), 87–94.
and he mentions that many one-relator groups, in particular, fundamental groups of surfaces, are free-by-cyclic, see picture. Could somebody comment on this: how are surface groups free-by-cyclic? Or the one-relator groups mentioned by Baumslag?
| The orientable surface group is free-by-cyclic: if the standard generators are $x_1,...,x_g, y_1,...,y_g$ then the homomorphism onto the cyclic group $\langle x_1\rangle$ which kills all other generators is onto and its kernel is of infinite index, whence free.
The group $\langle a,b,c| c^n=[a,b]\rangle =\langle a,b,c| b^a=c^nb\rangle$ and many other $HNN$-extensions of a free group with cyclic associated subgroups is also free-by-cyclic. It has a homomorphism onto the cyclic group generated by the free letter whose kernel is free. The proof can be found here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $P(a)$ logically equivalent to $\forall y [(y=a) \rightarrow P(y)]$? I am not entirely sure if my proof is correct. I would appreciate if somebody helped.
$(\rightarrow)$
Assume $P(a)$. Let $y$ be arbitrary and assume $y=a$. Since $P(a)$ and $y=a$, we get $P(y)$. Since $y$ was arbitrary, $\forall y [(y=a) \rightarrow P(y)]$.
$(\leftarrow)$ Assume $\forall y [(y=a) \rightarrow P(y)]$. Then, by universal instantiation, $(a=a) \rightarrow P(a)$. Then, $P(a)$.
Since $P(a)$ is logically equivalent to $\forall y [(y=a) \rightarrow P(y)]$, then, assuming $\Gamma$ is a set of formulas, $\Gamma \rightarrow P(a)$ is equivalent to $\Gamma \rightarrow \forall y [(y=a) \rightarrow P(y)]$. If $y$ does not occur in $\Gamma$, then the statement is equivalent to $\forall y [\Gamma \rightarrow ((y=a) \rightarrow P(y))]$, which is the same as $\forall y [(\Gamma \land (y=a)) \rightarrow P(y)]$. Have I missed something? Thanks.
| Your first proof is valid.
Semantically: $P(a)$ holds exactly when "anything that is $a$ satisfies $P$."
Your second is not quite correct. $\Gamma$ cannot be a set of formulas; rather, it must be a well-formed formula to be used the way you are using it. Otherwise the proof is okay.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show this inequality holds, when proving self-similar processes are non-differentiable? In the book Elementary Stochastic Calculus by T. Mikosch (1998), there is a result which shows that Brownian motion is a self-similar process and therefore it is nowhere differentiable. In the proof of this result, there is an inequality:
$$\lim_{n\to\infty}P\Big(\sup_{0\leq s \leq t_n}\Big|\frac{X_s}{s}\Big|>x \Big)\geq \limsup_{n\to\infty}P\Big(\Big|\frac{X_{t_n}}{t_n}\Big|>x\Big)$$
where $(X_t)$ is a self-similar process. I have two questions: (1) Can we interchange $P()$ and $\mathrm{sup}$ operation? (2) how to show the inequality $\geq$ here? Thank you.
| Look at the section on the Borel-Cantelli Lemmas in Durrett's book (Probability Theory and Examples): an application of Fatou's Lemma ensures that $P(\limsup_n A_n)\geq \limsup_n P(A_n)$. So if you write $A_n$ for the set in the LHS of the inequality and $B_n$ for the other, then $$\lim_n P(A_n)=P(\lim_n A_n)=P(\limsup_n A_n)\geq P(\limsup_n B_n)\geq \limsup_n P(B_n)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $d\mid nm$ and $\gcd(n, m)= 1$ then exist $d_1, \,d_2$ such that $d=d_1d_2$ and $d_1\mid n,\,d_2\mid m$ (without Fund. Theorem of Arit) We want to prove that if $d\mid nm$ and $\gcd(n,m)=1$ then $d=d_1d_2$ where $d_1\mid n$ and $d_2\mid m$ and $\gcd(d_1,d_2)=1$
We already proved it using the Fundamental Theorem of Arithmetic. But we wonder if there is a way to prove it using only basic GCD theorems.
Our hints
If $d_1\mid n$ and $d_2\mid m$, then $d_1d_2\mid nm$
$(a\mid b \implies a\mid bc)$
If $d\mid nm$ then $d\mid \gcd(d,n) \gcd(d,m)$ (Properties)
$\gcd(d_1,d_2)\mid \gcd(n,m)$
| We can use the following two facts:
Lemma 1:
Given $m,n \in \mathbb{N}$, if $\gcd(m,n) = 1$, then there exist $x,y \in \mathbb{Z}$ such that $xm + yn = 1$.
Lemma 2:
For $m, n \in \mathbb{N}$, if there exist $x, y \in \mathbb{Z}$ such that $xm + yn= 1$, then $\gcd(m,n) = 1$.
Proof:
Now we can show that if $d_1 = gcd(d,n)$ and $d_2 = gcd(d,m)$ then,
$gcd(d_1, d_2) = 1$ and $d = d_1 d_2$.
The proof is trivial if $d_1 = 1$ or $d_2 = 1$. So, I will assume $d_1 > 1$ and $d_2 > 1$.
$d_1 \mid n \implies \exists q_1 \in \mathbb{N} \ni n = q_1d_1$.
Similarly, $d_2 \mid m \implies \exists q_2 \in \mathbb{N} \ni m = q_2d_2$.
Since $\gcd(n,m)=1$, Lemma 1 gives $x,y \in \mathbb{Z}$ such that $xn + ym = 1$, i.e.,
$$(xq_1)d_1 + (yq_2)d_2 = 1$$
Therefore it follows from Lemma-2 that,
$$gcd(d_1, d_2) = 1$$
Since $d_1 \mid d$, $d_2 \mid d$ and $\gcd(d_1,d_2)=1$, this implies $d_1d_2 \mid d$, i.e., $d = kd_1d_2$ for some $k \in \mathbb{N}$.
Now, it is given, $d | mn \implies kd_1d_2 | q_1q_2d_1d_2 \implies k | q_1q_2$.
Since $d_1 = \gcd(d,n)$ and $d_2 = \gcd(d,m)$, we have $\gcd(k,q_1) = 1$ and $\gcd(k,q_2) = 1$.
This taken together with $k | q_1q_2$ implies $k = 1$.
This proves that $d = d_1d_2$.
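The constructive content of this proof, namely taking $d_1=\gcd(d,n)$ and $d_2=\gcd(d,m)$, can be verified exhaustively on small cases (a sketch, not from the original post):

```python
from math import gcd

def split_divisor(d, n, m):
    """For d | n*m with gcd(n, m) == 1, return (d1, d2) with
    d == d1*d2, d1 | n, d2 | m and gcd(d1, d2) == 1."""
    return gcd(d, n), gcd(d, m)

# exhaustive check over small coprime pairs
for n in range(1, 40):
    for m in range(1, 40):
        if gcd(n, m) != 1:
            continue
        for d in range(1, n * m + 1):
            if (n * m) % d == 0:
                d1, d2 = split_divisor(d, n, m)
                assert d == d1 * d2
                assert n % d1 == 0 and m % d2 == 0
                assert gcd(d1, d2) == 1
print("decomposition verified for all divisors of n*m with gcd(n, m) = 1")
```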
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to identify the coefficients in a series expansion on a non-orthogonal basis? The solution of a PDE lead to a series expansion of the form
$$
\sum_{n=0}^\infty \left( A_n \cos \left( \lambda_n z\right) + B_n \sin \left( \lambda_n z \right) \right) = f(z) \, ,
$$
where $z \in [0,L]$ and $f(z)$ is a known function.
If $\lambda_n = n\pi/L$ then the coefficients $A_n$ and $B_n$ can easily be determined (Fourier coefficients).
In my case, $\lambda_n$ are known eigenvalues that are determined numerically.
Note that for $n \ne m$, $\lambda_n \ne \lambda_m + 2k\pi$, $k \in \mathbb{Z}$ holds.
I was wondering whether there is a way to identify $A_n$ and $B_n$ when the basis functions are not orthogonal. Thank you.
Example:
Consider 3 terms in the series with $f(z) = \delta(z)$, $\lambda_0 = 1$, $\lambda_1 = 2$, and $\lambda_2 = 4$.
| If this came from a self-adjoint PDE, and if you have endpoint conditions of the form
$$
Af(a)+Bf'(a)=0,\;\;\; Cf(b)+Df'(b)=0,
$$
then you can end up with trigonometric expansions where the periods are non-harmonic. But that does not mean the eigenfunctions are not orthogonal: the ODE solutions will still be orthogonal, and you will still have Fourier expansions in orthogonal functions.
For example, this is a self-adjoint ODE with orthogonal eigenfunctions that can be used to expand anything in $L^2[a,b]$:
$$
-f''+\lambda f = 0 \\
\cos(\alpha)f(a)+\sin(\alpha)f'(a)=0\\
\cos(\beta)f(b)+\sin(\beta)f'(b)=0.
$$
The general case for the eigenvalues $\lambda_n$ is that they are not evenly spaced. However, the eigenfunctions will be mutually orthogonal with respect to the inner product on $L^2[a,b]$, and they will form a complete orthogonal basis of $L^2[a,b]$. For infinite intervals, you may have a mixed discrete and continuous Fourier expansion.
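As a concrete illustration (my own example, with boundary conditions chosen for simplicity): take $f''+\lambda f=0$ on $[0,1]$ with $f(0)=0$ and $f(1)+f'(1)=0$. The eigenvalues $\lambda_n=\mu_n^2$ solve $\tan\mu=-\mu$, so they are not evenly spaced, yet the eigenfunctions $\sin(\mu_n x)$ are still orthogonal on $[0,1]$:

```python
import math

def g(mu):
    # boundary condition at x = 1 for f(x) = sin(mu x):
    # f(1) + f'(1) = sin(mu) + mu*cos(mu) = 0  <=>  tan(mu) = -mu
    return math.sin(mu) + mu * math.cos(mu)

def bisect(a, b, tol=1e-12):
    while b - a > tol:
        c = 0.5 * (a + b)
        if g(a) * g(c) <= 0:
            b = c
        else:
            a = c
    return 0.5 * (a + b)

mu1 = bisect(math.pi / 2 + 0.01, math.pi)            # ~2.0288
mu2 = bisect(3 * math.pi / 2 + 0.01, 2 * math.pi)    # ~4.9132
print(mu1, mu2)  # not evenly spaced, not integer multiples

def inner(a, b, steps=20_000):
    # midpoint rule for the L^2[0,1] inner product of sin(a x), sin(b x)
    h = 1.0 / steps
    return sum(math.sin(a * t) * math.sin(b * t)
               for i in range(steps)
               for t in [(i + 0.5) * h]) * h

print(inner(mu1, mu2))  # ~0: distinct eigenfunctions are orthogonal
print(inner(mu1, mu1))  # positive norm
```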
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve $ (y+u)\dfrac{\partial u}{\partial x} + (x+u)\dfrac{\partial u}{\partial y} = x+y$ via method of characteristics? How to solve $ (y+u)\dfrac{\partial u}{\partial x} + (x+u)\dfrac{\partial u}{\partial y} = x+y$ via method of characteristics?
My attempt.
These are equations with which I begin:
$\dfrac{dx}{ds} = y+u $;
$\dfrac{dy}{ds} = x+u $;
$\dfrac{du}{ds} = x+y $.
However, I am stuck, because I can not solve the equations for $\dfrac{dx}{ds}$ and $\dfrac{dy}{ds}$ because we have a dependence on $u$.
Thanks for any help.
| $$ (y+u)\dfrac{\partial u}{\partial x} + (x+u)\dfrac{\partial u}{\partial y} = x+y$$
Charpit-Lagrange system of characteristic ODEs :
$$ds=\frac{dx}{y+u}=\frac{dy}{x+u}=\frac{du}{x+y}=\frac{dx-dy}{(y+u)-(x+u)}=\frac{dx+dy+du}{(y+u)+(x+u)+(x+y)}$$
$$\frac{dx-dy}{y-x}=\frac{dx+dy+du}{2(x+y+u)}$$
$$-\ln|x-y|=\frac12\ln|x+y+u|+\text{constant}$$
A first characteristic equation is :
$$(x+y+u)(x-y)^2=c_1$$
A second characteristic equation comes from
$$\frac{dx}{y+u}=\frac{dy}{x+u}=\frac{du}{x+y}=\frac{dx-du}{(y+u)-(x+y)}=\frac{dy-du}{(x+u)-(x+y)}$$
$$\frac{dx-du}{u-x}=\frac{dy-du}{u-y}$$
$$\ln|u-x|=\ln|u-y|+\text{constant}$$
A second characteristic equation is :
$$\frac{u-x}{u-y}=c_2$$
The general solution of the PDE expresed on the form of implicit equation $c_2=F(c_1)$ is :
$$\boxed{\frac{u-x}{u-y}=F\big((x+y+u)(x-y)^2\big)}$$
F is an arbitrary function as long as no boundary condition is specified.
Note that the same general solution could be expressed on a number of equivalent forms, for example $u=-x-y+\frac{1}{(x-y)^2}G\left(\frac{u-x}{u-y}\right)$ where G is an arbitrary function.
Of course the PDE has infinitely many solutions. Among them the linear one: $u=\frac{1}{1-c}(x-cy)$, which corresponds to the above second characteristic equation. Or, for example, another solution $u=-x-y+\frac{c}{(x-y)^2}$, which corresponds to the above first characteristic equation.
Depending on the kind of boundary condition, the function F could be (or not) determined explicitely. Then putting it into the above general solution the equation could be (or not) solved explicitely for $u$.
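The quoted linear solution can be spot-checked directly: with $u=(x-cy)/(1-c)$ we have $u_x=1/(1-c)$ and $u_y=-c/(1-c)$, and substitution collapses the left-hand side to $x+y$. A numerical sketch of my own (with $c\ne 1$):

```python
import random

def residual(x, y, c):
    # u = (x - c*y)/(1 - c), so u_x = 1/(1-c), u_y = -c/(1-c)
    u  = (x - c * y) / (1 - c)
    ux = 1.0 / (1 - c)
    uy = -c / (1 - c)
    return (y + u) * ux + (x + u) * uy - (x + y)

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    c = random.uniform(-3, 0.9)   # keep away from the excluded value c = 1
    assert abs(residual(x, y, c)) < 1e-9
print("u = (x - c*y)/(1 - c) satisfies the PDE at all sampled points")
```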
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove $\frac{\mathbb Z[X,Y]}{(5,X^{2}-Y,XY+X+1)}$ is a field
Prove $\frac{\mathbb Z[X,Y]}{(5,X^{2}-Y,XY+X+1)}$ is a field.
I thought to prove that this is isomorphic to $\mathbb Z_{5}(X)$, and because $5$ is prime it will follow that it's a field.
I wanted to use the first isomorphic theorem.
I wanted to use the map $\phi: Z[X,Y]\mapsto\mathbb{Z}_{5}(X)$, $f(x,y)\mapsto f(x,x^{2})$.
Now I'm proving that 1) $\phi$ is a morphism 2) $\phi$ is surjective 3)$\ker\phi=(y-x^{2},x^{3}+x+1,5)$
*
*take $x,y \in \mathbb{Z[X,Y]}$ random then:
*
*$\phi(x+y)$=$\phi(\sum((a_{i1}a_{i2}+b_{i1}b_{i2})X^{i1}Y^{i2})$=$(\sum((a_{i1}a_{i2}+b_{i1}b_{i2})X^{i1}Y^{i2})$=$\sum((a_{i1}a_{i2}X^{i1}Y^{i2})+\sum(b_{i1}b_{i2})X^{i1}Y^{i2})$=$\phi(x)$+$\phi(y)$
*$\phi(xy)$=$\phi(\sum((a_{i1}a_{i2}b_{i1}b_{i2})X^{i1}Y^{i2})$=$(\sum((a_{i1}a_{i2}b_{i1}b_{i2})X^{i1}Y^{i2})$=$\sum((a_{i1}a_{i2}X^{i1}Y^{i2})\sum(b_{i1}b_{i2})X^{i1}Y^{i2})$=$\phi(x)$$\phi(y)$
*i don't know how to prove this
*let's prove two inclusions.
*
*first let $f\in \ker\phi$, so $f\in \mathbb{Z}[X,Y]$. We use the division algorithm: there exist $q(x,y)$ and $r(x,y)$ so that $f(x,y)$=$q(x,y)(x^{3}+x+1)$+$r(y-x^{2})$+$5$
with $\deg(r)<\deg(x^{3}+x+1)=3$
I'm not sure how to prove those things but this is what i already have. Can someone help me further.
EDIT: the answer I tried to prove is wrong. Some of you wrote a solution down. But I still need to prove that it's isomorphic to your solution, and I'm still struggling with the same question of how to do that exactly.
EDIT:
So the people who answered my question (thank you for that) don't really see my problem now.
Well, now after you guys helped me, I want to prove that
$\frac{Z[X,Y]}{(5,X^{2}-Y,XY+X+1)}$ is isomorphic to $\frac{F_{5}[X]}{(X^{3}+X+1)}$.
So I need to prove this for the map $\phi$: $Z[X,Y]$ $\to$ $\frac{F_{5}[X]}{(X^{3}+X+1)}$: $f(x,y)$ $\mapsto$ $f(x,x^{2})$.
Now I'm proving that 1) $\phi$ is a morphism 2) $\phi$ is surjective 3) $\ker\phi$=$(y-x^{2},x^{3}+x+1,5)$
I'm stuck with proving these three things correctly
| Hint:
$$
\frac{\mathbb Z[X,Y]}{\langle 5,X^{2}-Y,XY+X+1 \rangle}
\cong
\frac{\mathbb Z[X,X^2]}{\langle 5,0,X^3+X+1 \rangle}
\cong
\frac{\mathbb Z[X]}{\langle 5,X^3+X+1 \rangle}
\cong
\frac{\mathbb F_5[X]}{\langle X^3+X+1 \rangle}
$$
so it reduces to proving that $X^3+X+1$ is irreducible mod $5$, which is easy since the degree is $3$.
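Since a cubic over a field is reducible iff it has a linear factor, i.e. a root, the irreducibility of $X^3+X+1$ over $\mathbb F_5$ is a five-value check:

```python
# A cubic over a field is reducible iff it has a root.
# Check all five residues mod 5:
p = 5
roots = [a for a in range(p) if (a**3 + a + 1) % p == 0]
print(roots)  # [] -- no roots, so X^3 + X + 1 is irreducible over F_5
```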
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3802960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
prove that $xy+yz+zx\ge x\sqrt{yz}+y\sqrt{xz}+z\sqrt{xy}$ prove that $xy+yz+zx\ge x\sqrt{yz}+y\sqrt{xz}+z\sqrt{xy}$ if $x,y,z>0$
My try : dividing inequality by $\sqrt{xyz}$ and putting $\sqrt{x}=a,\sqrt{y}=b,\sqrt{z}=c$
we have to prove $$\sum_{cyc}\frac{ab}{c}\ge a+b+c$$
or
$$2\sum_{cyc}\frac{ab}{c}\ge 2(a+b+c)$$ using $\frac{ab}{c}+\frac{bc}{a}\ge 2b$ and similarly for others the proof can be completed.
Is it correct? Also i am looking for different proofs for this (possibly more simpler ).Thanks
| $x,y,z>0$ and the fact that the inequality is a symmetric expression imply that we can take, without loss of generality, an ordering $x\ge y\ge z \implies xy\ge zx\ge yz \implies \sqrt{xy}\ge \sqrt{zx}\ge \sqrt{yz}$
so that the sequences $\{\sqrt{xy},\sqrt{zx},\sqrt{yz}\}, \{\sqrt{xy},\sqrt{zx},\sqrt{yz}\}$, (i.e. the same sequence) are similarly sorted, so that, by the Rearrangement Inequality, we have
$$\sqrt{xy}\sqrt{xy}+\sqrt{yz}\sqrt{yz}+\sqrt{zx}\sqrt{zx}\ge \sqrt{xy}\sqrt{zx}+\sqrt{yz}\sqrt{xy}+\sqrt{zx}\sqrt{yz}$$ which is the required inequality.
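A quick random spot-check of the inequality (added for illustration, not part of the proof):

```python
import random

random.seed(7)
for _ in range(10_000):
    x, y, z = (random.uniform(0.01, 100.0) for _ in range(3))
    lhs = x*y + y*z + z*x
    rhs = x*(y*z)**0.5 + y*(x*z)**0.5 + z*(x*y)**0.5
    # allow a tiny relative tolerance: equality holds when x = y = z
    assert lhs >= rhs - 1e-9 * lhs
print("inequality holds on all sampled triples")
```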
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
What form of choice is "every Dedekind-finite set is finite" equivalent to? Halmos in his Naive Set Theory proves that every infinite set has a subset equivalent to $\omega$ using the axiom of choice with its full power. And this leads to the corollary that a set is infinite if and only if it is equivalent to some proper subset of it, which leads to each Dedekind-finite set being finite.
But I've also seen a proof (on Wikipedia) that this can also be proven with just countable choice. However Wikipedia also states that this result is strictly weaker than countable choice.
Question: It is clear that we do require some form of choice, not just ZF, to prove this result.$^1$ But it is even weaker than the countable choice. Can we explicitly state the form of this choice which is equivalent to this result?
$^1$ I've come across the fact that there exists a model of ZF (whatever that means (sorry I've not done any model theory; this is just for your reference)) in which every infinite set is Dedekind-infinite, and yet the countable choice fails.
| While answering this question: Strength of “Cofinite Choice”, I discovered that "every Dedekind-finite set is finite" is equivalent to the following "axiom of cofinite choice":
Let $A$ be a set of non-empty sets such that $(\bigcup A)\setminus X$ is finite for all $X\in A$. Then $A$ has a choice function.
See the linked answer for a proof. This seems to me to be a fairly natural choice principle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Prove or disprove the following statement using the definition of big-Θ: NOTE: I am not proving big O here, I am proving big-Θ
Prove or disprove the following statement using the definition of big-Θ:
$$n^2−4n = Θ(2^n)$$
so, by definition, $$T(N)=O(h(N))$$ and $$T(N)=Ω(h(N))$$ must both hold.
checking condition 1, $$2^n*c≥n^2≥n^2-4n$$
and so we choose $$c=5, n=1$$
because $$2^n≥n^2$$ for all$$N≥n=1$$
and we conclude $$T(N)=O(h(N))$$
Now, checking condition 2,
$$c*2^n≤n^2-4n≤n^2 $$
but because we showed that $$2^n≥n^2$$ our check on condition two implies that $$2^n=n^2$$
but if we pick $$c=5$$ we get $$32=25$$ which is untrue and in conclusion we have disproved $$n^2−4n = Θ(2^n)$$
| This is correct, for $Ω$ as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $\lim f(x)$ can theoretically be anywhere on the interval $0$ up to and including $\infty$, is the interval written $(0,\infty]$? Let's take $f(x)=a^x+b$ where $a\in\mathbb R$ and $b\in\mathbb R^+$. clearly $L=\lim_{x\to\infty}f(x)>0$, but is the interval written $L\in(0,\infty)$ because limits only approach infinity or is $L\in(0,\infty]$ because infinity is one of the values $L$ can take? I can't find an example of this one way or another.
| If you want to be able to write $L = \lim_{x\to\infty} f(x) = \infty$, then it would be false to state $L \in (0,\infty)$.
You would need to be able to write $L \in (0,\infty]$, but we also need to keep in mind that $L = \infty$ isn't a real number, so this only makes sense in the context of the extended real number line.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Stone-Weierstrass theorem and Polynomials in multidimensional real space Stone-Weierstrass theorem on closed interval $[a, b]$ (in $\mathbb{R}$) states that any continuous function $f$ on $[a, b]$ can be approximated by polynomial function $p$, arbitrarily close to $f$.
From the above observation, I wonder if this can directly be applied to multidimensional case: is it true that any continuous function $f:X \rightarrow \mathbb{R}^m$, where $X \subset \mathbb{R}^n$ is compact, can be approximated by polynomial function $p$, i.e.,
$$\forall x\in X, \|f(x) - p(x)\|< \epsilon ?$$
If so, please show me a detailed example (qualitatively).
| Wikipedia quotes the Stone-Weierstrass theorem as
Stone–Weierstrass Theorem (real numbers). Suppose $X$ is a compact Hausdorff space and $A$ is a subalgebra of $C(X, \Bbb R)$ which contains a non-zero constant function. Then $A$ is dense in $C(X, \Bbb R)$ if and only if it separates points.
Which is to say, the only actual property of the closed interval $[a,b]$ that is necessary for the theorem is that it is compact and Hausdorff.
And the only necessary properties of the algebra of polynomial functions on this compact Hausdorff is that it is a subalgebra of the continuous real-valued functions (i.e. you can add and multiply polynomials together and scale them by real numbers, and still always end up with polynomials), that it contains some non-zero constant function, and that it can separate points (i.e. for any two points, there is at least one polynomial that evaluates to different values at the two points).
This immediately applies to polynomial functions in more than one variable, just as well as it does polynomial functions in a single variable, in addition to a whole host of other classes of functions (like sines of different frequencies for Fourier series).
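As a concrete one-variable illustration (using Bernstein polynomials, the classical constructive route to Weierstrass's theorem), here is a small sketch; the target function $|x-\tfrac12|$, the degrees, and the sampling grid are my own choices:

```python
import math

def bernstein(f, n, x):
    # B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n,k) * x^k * (1-x)^(n-k)
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)          # continuous but not smooth at 1/2
grid = [i / 200 for i in range(201)]

def max_err(n):
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

e10, e100 = max_err(10), max_err(100)
print(e10, e100)   # the uniform error shrinks as the degree grows
```

The Bernstein polynomials are one explicit subalgebra witness: they separate points, contain constants, and converge uniformly on the compact set $[0,1]$.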
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to rewrite integral of L^2 function and conclude about convergence of series I've read the following assertions:
" Suppose $f \in L^2(\mathbb{R}). $ Then $$ \int_{-\frac{1}{2}}^{\frac{1}{2}} \sum_{k \in \mathbb{Z}} \vert f(x+k) \vert^2 dx = \int_{-\infty}^{\infty} \vert f(x) \vert^2 dx < \infty. $$ Thus, $ \sum_{k \in \mathbb{Z}} \vert f(x+k) \vert^2 < \infty $ for $a.e. x\in \mathbb{R}.$ "
Why exactly does the given equality between the two integrals hold and what property or theorem gives the convergence of the sum?
| We can write any real number $x \in \mathbb{R}$ as a sum $k + \epsilon$, with $k \in \mathbb{Z}, \epsilon \in \left[-\frac12, \frac12\right)$ in a unique way.
That's what the sum on the left does explicitly. If you replace the variable $x$ with $\epsilon$ in the left-hand-side, it should be even more clear.
The integral being finite is given as part of the definition of a function in $L^2(\mathbb{R})$.
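A quick numerical sanity check of the equality, using an explicit choice $f(x)=e^{-|x|}$ (so that $\int|f|^2\,dx=1$) and a midpoint rule; the truncation bound $K$ on the sum is an assumption justified by the rapid decay of $f$:

```python
import math

f = lambda x: math.exp(-abs(x))     # an explicit L^2 function (my choice)

N = 2000                            # midpoint grid on [-1/2, 1/2]
h = 1.0 / N
xs = [-0.5 + (i + 0.5) * h for i in range(N)]

# left side: integrate sum_k |f(x+k)|^2 over one unit interval
K = 30                              # truncate the (rapidly decaying) sum
lhs = sum(sum(f(x + k)**2 for k in range(-K, K + 1)) * h for x in xs)

rhs = 1.0                           # since ∫ e^{-2|x|} dx = 1
print(lhs, rhs)
```

The agreement reflects exactly the tiling argument: the shifted copies of $[-\tfrac12,\tfrac12]$ cover the line once.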
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3803863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How do I convert this equation to the standard form of a circle? I'm looking for the radius of the sphere of this: $4x^2 + 4y^2 +4z^2 -16x - 24y + 8z= 44$.
I have to get it into standard form in order to find the radius. So I factored out a 4 and simplified it to:
$$x(x-4) + y(y-6) +z(z+2) =11$$
I am not sure what else I can do from here. Any ideas? Apparently the answer is 5. Which means my 11 has to get to 25 somehow. Though I don't know what I can do. Thank you in advance.
| For each of the variables $x, y, z$, you want to get something of the form $(x - x_0)^2$, by adding some constant if necessary. Of course, you also should add the same constant to the right-hand side.
So we want $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = 11 + C = R^2$, for some number $C$.
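Concretely, dividing the question's equation by $4$ and completing each square gives
$$x^2+y^2+z^2-4x-6y+2z=11 \implies (x-2)^2+(y-3)^2+(z+1)^2 = 11+4+9+1 = 25,$$
so the constant is $C=4+9+1=14$, the center is $(2,3,-1)$, and the radius is $R=\sqrt{25}=5$.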
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Diophantine equation $x^2 + xy − 3y^2 = 17$ Determine all integer solutions to the equation $x^2 + xy − 3y^2 = 17$.
The previous part of the question was finding the fundamental unit in $\mathbb{Q}(\sqrt{13})$, which is $\varepsilon = \frac{3+\sqrt{13}}{2}$, so my guess is that I should factorise the equation in $\mathbb{Q}(\sqrt{13})$ and then use the uniqueness of factorisation of ideals into prime ideals. (But it is also possible that the parts of the question are unrelated, because they certainly seem unrelated.)
| Hint:
Complete the square: $x^2+xy+\frac14y^2-(3+\frac{1}4)y^2=17\implies(x+\frac12y)^2-\frac{13}4y^2=17$
$\implies (2x+y)^2-13y^2=68$. That's a Pell-type equation.
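Before attacking the Pell-type equation, a quick brute-force scan over a small box (the search range is my own choice; the full solution set is infinite, generated by the fundamental unit) turns up the small solutions and confirms the transformed form:

```python
# search a small box for integer solutions of x^2 + xy - 3y^2 = 17
sols = [(x, y) for x in range(-50, 51) for y in range(-50, 51)
        if x * x + x * y - 3 * y * y == 17]
print(sols)   # e.g. (4, 1): 16 + 4 - 3 = 17
```

Each pair found also satisfies the Pell-type form $(2x+y)^2-13y^2=68$, which is how the complete solution set is then generated.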
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Vector parametrization of the line that passes through two vectors. How would I find the vector parametrization $\mathbf r(t)$ of the line $L$ that passes through the points $(2,1,4)$ and $(5,6,7)$?
So I found a directional vector: $(3,5,3)$.
What do I do next?
| You found a direction vector $(3,5,3)=(5,6,7)-(2,1,4)$,
so the line can be parametrized as $(2,1,4)+t(3,5,3)=(2+3t,1+5t,4+3t)$.
Note that $\mathbf r(0)=(2,1,4)$ and $\mathbf r(1)=(5,6,7)$.
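A minimal numeric check of this parametrization (plain tuples, no external libraries):

```python
p0, p1 = (2, 1, 4), (5, 6, 7)
d = tuple(b - a for a, b in zip(p0, p1))        # direction vector (3, 5, 3)

def r(t):
    # r(t) = p0 + t * d
    return tuple(a + t * di for a, di in zip(p0, d))

print(r(0), r(1))   # recovers the two given points
```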
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lagrange Multipliers Maxima or Minima? If I evaluate a surface $f(x)$, subject to a constraint $g(x)$, for its maximum and minimum values using Lagrange multipliers, then how do I know whether the solution found is a maximum or a minimum?
For example $f(x,y,z)= x^2+y^2+z^2$
and the constraint $g(x,y,z)=x^3y^2z= 6\sqrt{3}$.
The solution using Lagrange multipliers is $(\sqrt{3},\sqrt{2},1)$.
But is this point a maximum or a minimum?
| Since the point $(1,1,6 \sqrt{3})$ satisfies the constraint $g$ and $f(1,1,6 \sqrt{3}) > f( \sqrt{3}, \sqrt{2},1) $, the point $( \sqrt{3}, \sqrt{2},1)$ must be a minimum.
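One way to double-check that $(\sqrt3,\sqrt2,1)$ really is a critical point is to verify the Lagrange condition $\nabla f=\lambda\nabla g$ numerically, i.e. that the componentwise ratios of the two gradients agree:

```python
import math

x, y, z = math.sqrt(3), math.sqrt(2), 1.0

grad_f = (2 * x, 2 * y, 2 * z)
grad_g = (3 * x**2 * y**2 * z, 2 * x**3 * y * z, x**3 * y**2)

ratios = [a / b for a, b in zip(grad_f, grad_g)]
print(ratios)   # all three ratios equal the multiplier lambda
```

All three ratios come out equal (to $\sqrt3/9$), and the point satisfies the constraint $x^3y^2z=6\sqrt3$, so it is a genuine constrained critical point; the comparison in the answer then pins it down as the minimum.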
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Max limit on min magnitude of integer solution to underdetermined system of equations Given a system of equations of the form $\bar{a}_i\cdot\bar{x}=0\ \forall\ i\in\{1,\dots,d-1\}$, where $\bar{x}\in\mathbb{Z}^d\setminus\bar{0}$ must hold and $\bar{a}_i\in\{-L,-L+1,\dots,L-1,L\}^d\ \forall\ i\in\{1,\dots,d-1\}$ are known, linearly independent vectors, can we say anything about the non-trivial (non-zero) solution with the smallest magnitude? Can we bound it using big-$O$ notation somehow?
| Non-trivial solutions may not exist. Let $A$ be the $(d-1)\times d$ matrix whose $i$-th row is $\tilde{a}_i$ for each $i$. By assumption, the rank of $A$ is $d-1$. Therefore $\ker(A)$ is one-dimensional. More specifically, by relabelling the columns of $A$ if necessary, we may write
$$
A\tilde{x}=\pmatrix{B&v}\pmatrix{y\\ c}
$$
where $B$ is invertible and $c$ is a scalar. The equation $A\tilde{x}=0$ thus becomes $By+cv=0$, i.e. $y=-cB^{-1}v$; since the integer $c$ is free to take either sign, we may absorb the sign and write $y=cB^{-1}v$. The scalar $c$ cannot be zero, or else $y$ and $\tilde{x}$ become zero.
Although the entries of $B$ are bounded, when $B$ is close to singular, the entries of $B^{-1}$ can become very large. Hence the entries of $\tilde{x}$ can be very large too. E.g. consider
$$
B=\pmatrix{1&-1\\ &\ddots&\ddots\\ &&\ddots&-1\\ &&&1},
\quad v=\pmatrix{1\\ \vdots\\ \vdots\\ 1},
\quad y=c\pmatrix{1&1&\cdots&1\\ &\ddots&\ddots&\vdots\\ &&\ddots&1\\ &&&1}\pmatrix{1\\ \vdots\\ \vdots\\ 1}
=c\pmatrix{d-1\\ d-2\\ \vdots\\ 1}.
$$
As $c$ is a non-zero integer, $\|y\|_\infty$ (or $\|\tilde{x}\|_\infty$) is at least $d-1$. In particular, when $d>L+1$, there is not any feasible solution.
In general, if $v=0$, obviously the least-norm non-trivial solution is given by $\tilde{x}=(0,\ldots,0,1)^T$. If $v\ne0$, we may write $B^{-1}v=\frac1qw$ where $q$ is an integer and $w$ is an integer vector such that $w_1,w_2,\ldots,w_{d-1},q$ are relatively prime. Then the least-norm non-trivial integer solution is given by $c=\pm q$ or by $\tilde{x}=\pm\pmatrix{w\\ q}$. It is feasible if $\max\{|w_1|,|w_2|,\ldots,|w_{d-1}|,|q|\}\le L$.
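The bidiagonal example above can be checked directly; here is a sketch solving $By=v$ by back substitution for $d=5$ and $c=1$ (both my choices), which reproduces $y=(d-1,d-2,\dots,1)^T$:

```python
d = 5                       # so B is (d-1) x (d-1)
n = d - 1
# B has 1 on the diagonal, -1 on the superdiagonal; v is all ones
v = [1] * n

# back substitution: y[i] - y[i+1] = v[i], and y[n-1] = v[n-1]
y = [0] * n
y[n - 1] = v[n - 1]
for i in range(n - 2, -1, -1):
    y[i] = v[i] + y[i + 1]

print(y)   # [4, 3, 2, 1] = (d-1, d-2, ..., 1)
```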
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove: $ -\int_{-\infty}^{+\infty} \frac{e^{t}t^3}{(1-e^t)^3} dt = \pi^{2}$ $$
\mbox{Prove that:}\quad
-\int_{-\infty}^{+\infty}\frac{\mathrm{e}^{t}\, t^3}{\left(1 -\mathrm{e}^{t}\right)^{3}}\,\mathrm{d}t = \pi^{2}
$$
I tried to solve this, but it seems hard to me.
| Using the geometric series we deduce for $|x|<1$ that
$$\sum_{n\ge0}x^n=\frac1{1-x}\,\implies\,\sum_{n\ge1}nx^n=\frac x{(1-x)^2}\,\implies\,\sum_{n\ge2}n(n-1)x^n=\frac{2x^2}{(1-x)^3}$$
Now, split the integrand at $t=0$ and enforce $t\mapsto-t$ to obtain
\begin{align*}
-\int_{-\infty}^\infty\frac{t^3e^t}{(1-e^t)^3}\,{\rm d}t&=-\int_0^\infty\frac{t^3e^t}{(1-e^t)^3}\,{\rm d}t-\int_{-\infty}^0\frac{t^3e^t}{(1-e^t)^3}\,{\rm d}t\\
&=\int_0^\infty\frac{t^3e^{-2t}}{(1-e^{-t})^3}\,{\rm d}t+\int_0^\infty\frac{t^3e^{-t}}{(1-e^{-t})^3}\,{\rm d}t\\
&=\int_0^\infty t^3 \frac{e^{-t}+e^{-2t}}{(1-e^{-t})^3}\,{\rm d}t
\end{align*}
Since $|e^{-t}|<1$ for $t>0$ and the singularity in $t=0$ is well-behaved we may interchange the order of integration and summation to obtain
\begin{align*}
\int_0^\infty t^3\frac{(e^{-t}+e^{-2t})}{(1-e^{-t})^3}\,{\rm d}t&=\frac12\sum_{n\ge2}n(n-1)\int_0^\infty t^3\left(e^{-(n-1)t}+e^{-nt}\right)\,{\rm d}t\\
&=3\sum_{n\ge2}n(n-1)\left[\frac1{n^4}+\frac1{(n-1)^4}\right]\\
&=3\sum_{n\ge2}\left[\frac{n-1}{n^3}\right]+3\sum_{n\ge2}\left[\frac n{(n-1)^3}\right]\\
&=3\sum_{n\ge1}\left[\frac1{n^2}-\frac1{n^3}\right]+3\sum_{n\ge1}\left[\frac1{n^2}+\frac1{n^3}\right]\\
&=6\sum_{n\ge1}\frac1{n^2}\\
&=\pi^2
\end{align*}
The crucial idea, however, was already given by Felix Marin before I could finish writing my own answer!
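A midpoint-rule check of the final value (the step size and cutoff are my own choices; the tail beyond the cutoff is negligible because of the $e^{-t}$ decay, and the integrand tends smoothly to $2$ as $t\to0$):

```python
import math

# midpoint rule for the positive-axis form of the integral:
# ∫_0^∞ t^3 (e^{-t} + e^{-2t}) / (1 - e^{-t})^3 dt, expected to equal pi^2
h, T = 0.001, 40.0
n = int(T / h)
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    total += t**3 * (math.exp(-t) + math.exp(-2 * t)) / (1 - math.exp(-t))**3 * h

print(total, math.pi**2)
```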
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Quasi-polynomial time complexity and proof In my algorithm class, we were talking about a function $f(x)$ that grows faster than any polynomial ($x^c$), but is outgrown by any exponential ($c^x$), where $c > 1$. That is, a function $f(x)$, such that both the sequences $P_x = \frac{x^c}{f(x)}$ and $E_x = \frac{f(x)}{c^x}$ converges to $0$, for all $c > 1$.
I'm considering the function $f(x) = x^{\log_2 x}$, and I've proven that the sequence $P_x$ converges to $0$ as $x \to \infty$, but I'm having some trouble proving that $E_x$ also converges to $0$. (I've tried the epsilon approach, but it generates really messy algebra that I can't deal with.)
In fact, I'm still not a hundred percent sure that $f(x) = x^{\log_2 x}$ is such a function; I've tried to verify with my calculator that when $c$ is really close to $1$, $\lim_{x \to \infty}\frac{f(x)}{c^x} = 0$. However, as I said, I don't know how to construct a solid proof. Any hint?
| Note that $$ n^{\log_c(n)} = \Big( c ^{\log_c (n)} \Big)^{\log_c(n)} = c^{(\log_c(n))^2}.$$
Since $(\log_c(n))^2 < n$ for large enough $n$, $\lim_{n\to \infty} \frac{n^{\log_c(n)}}{c^n} = 0$ follows. (The same argument handles $n^{\log_2 n}=c^{\log_c(n)\log_2(n)}$, since $\log_c(n)\log_2(n)=(\ln n)^2/(\ln c\,\ln 2)$ is still eventually smaller than $n$.)
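To see the convergence numerically without overflow, one can compare logarithms: $\ln\!\big(n^{\log_2 n}/c^n\big)=(\ln n)^2/\ln 2 - n\ln c$. A sketch with $c=1.01$ (my choice; any $c>1$ works):

```python
import math

c = 1.01                       # any c > 1, even barely above 1

def log_ratio(n):
    # ln( n^{log2 n} / c^n ) = (ln n)^2 / ln 2  -  n ln c
    return (math.log(n))**2 / math.log(2) - n * math.log(c)

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, log_ratio(n))    # eventually large and negative: ratio -> 0
```

The log-ratio is still positive for small $n$ (the exponential has not "won" yet) but becomes hugely negative as $n$ grows, which is exactly $E_x\to0$.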
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bin Packing Problem with fixed size of bins I'm studying Bin Packing Problem for my thesis and I meet this definition of the decision verson of the problem in the book "Computers and Intractability" by Michael R. Garey and David S. Johnson:
INSTANCE: Finite set $U$ of items, a size $s(u) \in Z$ for each $u \in U$, a positive integer bin capacity $B$, and a positive integer $K$.
QUESTION: Is there a partition of $U$ into disjoint sets $U_1, U_2, \ldots, U_K$ such that the sum of the sizes of the items in each $U_i$ is $B$ or less?
And there is a curious comment about its solution in polynomial time, that is "Solvable in polynomial time for any fixed $B$ by exhaustive search."
Now my question is how it is possibile, searching in internet I've found nothing but this question:
NP-hardness of bin packing problem for fixed bin size but the answer doesn't convince me; it seems wrong, or maybe I simply don't understand it. Can you help me with this?
| With a fixed bin size you also have a fixed number of possible ways to (partially) fill a bin. Suppose there are $p$ ways to do that.
If you solve each of the $k$ bins separately, you'd get $p$ possibilities for each bin, and then $p^k$ possibilities altogether. This is exponential, and not what we would like. Note that many of those possibilities will not match up with the actual item sizes you have available, so it is merely an upper bound.
Instead of assigning a partition to each bin, you can do the opposite - assign some number of bins (possibly zero) to each partition. You then have $(k+1)^p$ possible ways of that assignment. This has a fixed exponent, so is polynomial in the number of bins. The degree $p$ of this polynomial can be huge, and this also is an upper bound since most of those assignments will have the wrong total number of bins, but all that does not matter - it is enough to show that it is polynomial.
For example, suppose the bin size is $3$. There are only $6$ possible ways to partially or completely fill up a bin: $1$, $1+1$, $1+1+1$, $2$, $2+1$, $3$. Let $a,b,c,d,e,f$ be variables representing how many bins there are for each of those ways of filling them. Each variable must have an integer value from $0$ to $k$ inclusive. So there are no more than $(k+1)^6$ possibilities to check. In fact, there are far less, since we also have $a+b+c+d+e+f=k$. For example, suppose we want to check if $a=b=c=d=0$, $e=f=4$ is a valid bin packing. We have four bins that contain a size $2$ and a size $1$ item, and four bins with a size $3$ item. If your inventory $U$ contains four items of each size, you have a valid packing. However many bins of size $3$ you have, there are only $6$ variables that you need to determine, and that is polynomial in the number of bins.
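The count of fill patterns can be reproduced by enumeration; this sketch counts multisets of positive item sizes summing to at most $B$ (i.e. integer partitions of every total $s\le B$) and recovers the $p=6$ patterns listed for $B=3$:

```python
def partitions(s, max_part):
    # number of multisets of positive item sizes (each <= max_part) summing to s
    if s == 0:
        return 1
    return sum(partitions(s - k, k) for k in range(1, min(s, max_part) + 1))

def fill_patterns(B):
    # ways to partially or completely fill a bin of capacity B
    return sum(partitions(s, B) for s in range(1, B + 1))

print(fill_patterns(3))   # the 6 ways listed above: 1, 1+1, 1+1+1, 2, 2+1, 3
```

With $p$ computed this way, the polynomial bound in the answer is $(k+1)^p$ candidate assignments of bin counts to patterns.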
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3804912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expectation of Sinc of Standardnormal Distribution Does anyone happen to know how to compute the expectation of the Sinc of a standard normal random variable, i.e.
$\mathbb{E} \Big[ \frac{\sin X}{X} \Big]$ where $X \sim \mathcal{N}(0,1)$?
Thanks!
EDIT: I suppose I could have elaborated what I have tried, so here goes:
*
*I tried to compute $ \int_\mathbb{R} \frac{\sin x}{x} \frac{1}{\sqrt{2 \pi}} e^{-x^2/2} \ dx$ but this didn't get me anywhere. I guess I didn't have a good approach as to how to calculate this integral.
*Also, I tried using $\mathbb{E} \Big[ \frac{\sin X}{X} \Big] = \mathbb{E} \Big[ \sum_{n = 0}^{\infty} \frac{(-1)^n X^{2n}}{(2n+1)!} \Big]$, applying dominated convergence (where I didn't check yet whether I can use it here because I wanted to see first where it leads) to pull in the expectation and then use the formula for the moments of the normal distribution. However, this didn't yield a (good) result either.
*Further, I tried first calculating the square of the integral in 1 first which is equal to $ \frac{1}{2 \pi} \int_\mathbb{R} \int_\mathbb{R} \frac{\sin x}{x} \frac{\sin y}{y} e^{-(y^2 + x^2)/2} \ dx \ dy$ and then try using polar coordinates but that was quite complicated and didn't help.
EDIT 2:
I tried another approach using the sine representation $ \sin x = \frac{e^{ix} - e^{-ix}}{2i}$ to compute the integral in 1 which I was able to simplify as $ \frac{1}{\sqrt{2\pi} e^{1/2}} \int_{\mathbb{R}} \frac{1}{2ix} \Big( e^{-(x-i)^2/2 } - e^{-(x+i)^2/2 } \Big)$. Unfortunately, I am not sure how to compute this further.
| Using the series for $\sin(x)$,
\begin{align}
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{\sin(x)}{x}e^{-\frac{x^2}{2}}\,dx&=\sum_{k\ge 0}\frac{(-1)^k}{(2k+1)!}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}x^{2k}e^{-\frac{x^2}{2}}\,dx \\
&=\sum_{k\ge 0}\frac{(-1)^k (2k-1)!!}{(2k+1)!}=\sqrt{\frac{\pi}{2}}\operatorname{erf}\!\left(\frac{\sqrt{2}}{2}\right),
\end{align}
where the second equality follows from the fact that for $X\sim N(0,1)$, $\mathsf{E}X^{2k}=(2k-1)!!$, and the third equality uses the error function's Maclaurin series.
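A numerical cross-check of the closed form, using a midpoint rule over $[-10,10]$ (the cutoff is my choice; the Gaussian tail beyond it is negligible, and the midpoints never hit $x=0$):

```python
import math

# midpoint-rule approximation of E[sin(X)/X] for X ~ N(0,1)
h, L = 0.001, 10.0
n = int(2 * L / h)
total = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    total += (math.sin(x) / x) * math.exp(-x * x / 2) / math.sqrt(2 * math.pi) * h

closed_form = math.sqrt(math.pi / 2) * math.erf(math.sqrt(2) / 2)
print(total, closed_form)
```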
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Problem books on harmonic analysis? I just got a copy of the book A Course In Abstract Harmonic Analysis by Gerald B. Folland, so I want a problem book that could be used with it.
Please give a list of problem books on Harmonic Analysis?
| An Intro to Harmonic Analysis by Yitzhak Katznelson has exercises following each chapter. It is easy to draw parallels between books for content like this.
Harmonic Analysis by Henry Helson is in the same boat. There are about 5 problems after each chapter’s section.
Lastly, Barry Simon’s book “Harmonic Analysis: A comprehensive course in Analysis” also contains various problems at the end of each significant subsection.
Good luck, and I hope this serves your studying well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\frac{1 - x^{n+1} }{n+1} \lt \frac{1-x^n}{n}$ given $n$ is a positive integer and $0 < x \lt 1$. Problem Statement: If $n$ is a positive integer and $0 < x \lt 1$, show that
$$ \frac{1 - x^{n+1} }{n+1} \lt \frac{1-x^n}{n}.$$
My Solution:
$$
\frac{ 1- x^{n+1} }{n+1} \lt \frac{1-x^n}{n} ~~~~\text{is true} \\
\text{if}~~~~ \frac{n}{1-x^n} \lt \frac{n+1}{1- x^{n+1} }$$
If we look at the LHS, we find that it has the form of the sum of a geometric series with first term $n$ and common ratio $x^n$ (which is less than 1); similarly, the RHS represents the sum of a geometric series with first term $n+1$ and common ratio $x^{n+1}$ (which is less than $x^n$, hence less than 1).
Now, my point is that the series represented by the LHS has a smaller first term than the series represented by the RHS, and the series represented by the LHS decreases quickly in comparison to the series represented by the RHS (because the common ratio $x^{n+1} \lt x^n$); hence the sum of the series represented by the RHS is greater than the sum of the series represented by the LHS.
Is my solution and reasoning correct?
| By your reasoning we need to prove that:
$$n(1+x^n+x^{2n}+...)<(n+1)(1+x^{n+1}+x^{2(n+1)}+...)$$ and it's not so clear, why it's true.
Another way:
We need to prove that:
$$nx^{n+1}-(n+1)x^n+1>0,$$ which is true by AM-GM:
$$nx^{n+1}+1\geq(n+1)\sqrt[n+1]{\left(x^{n+1}\right)^n\cdot1}=(n+1)x^n.$$
Equality would require $x^{n+1}=1$, which does not happen for $0<x<1$; hence the inequality is strict.
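A quick spot check of the key inequality $nx^{n+1}-(n+1)x^n+1>0$ on a grid (the grid and the range of $n$ are my own choices):

```python
# evaluate n x^{n+1} - (n+1) x^n + 1 on a grid of 0 < x < 1 and small n
vals = [n * x**(n + 1) - (n + 1) * x**n + 1
        for n in range(1, 11)
        for x in [i / 100 for i in range(1, 100)]]
print(min(vals))   # strictly positive, approaching 0 only as x -> 1
```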
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Is essential spectrum not relevant to the topology on it Consider $F: \mathcal{D}(F)\subset X\rightarrow X$; we can define the essential spectrum as the set of $\lambda$ such that the Fredholm index of $\lambda-F$ is not zero. The Fredholm index can be written as
$\operatorname{ind}(\lambda-F)= \dim \operatorname{Ker}(\lambda-F)-\operatorname{codim} \operatorname{ran}(\lambda-F)$, where both terms are algebraic concepts. Does that mean the essential spectrum is not related to the topology on the space?
| At least as a place-holder answer, in light of some comments: the usual definition/basic properties (whether something is part of the definition, or a basic property, depends on one's choice of logical order, and there is not a unique such...) of Fredholm operators on Banach spaces certainly does use the fact that the ambient space is a Banach space. This does imply that certain properties are equivalent to others, etc.
(Some properties of compact and/or Fredholm operators still make useful sense on "nuclear spaces", but things do start to unravel... I myself do know a little about this sort of extension, but mostly enough to know that spectral theory mostly doesn't work well... I remember Prof. Charles McCarthy, who got his PhD at Yale in the hey-day of operator theory there, once telling me that people spent a lot of time and effort trying to make spectral theory work more generally, but that it mostly just did not.)
It is certainly possible to choose some defining collection (not uniquely determined!) for Fredholm operators on Banach spaces, and use the same terminology in an arbitrary TVS or algebraic vector space.
Since the most-useful aspects of Fredholm and/or compact operators are (so far as I know) correct and easily provable on Banach spaces, and mostly fail otherwise, I myself am not aware of a useful definition beyond that case.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Terminology: Upper limit and lower limit. Let $(x_n)$ be a sequence of real numbers. One defines:
$\limsup\limits_{n \rightarrow \infty} x_n=\lim\limits_{n \rightarrow \infty} (\sup_{m≥n} x_m)=\inf_{n≥1}(\sup_{m≥n}x_m)$
How should I understand the notation of the definition? Specifically the $m≥n$ and $n≥1$ terms.
| The notation $\sup_{m\geq n} x_m$ is just a short way to write $\sup\{x_m: m\geq n\}=\sup\{x_n,x_{n+1},x_{n+2},...\}$, the supremum of the sequence $(x_m)$ starting from the $n$th element.
Similarly, if we let $y_n=\sup_{m\geq n} x_m$ for every $n\in\mathbb{N}$, then $\inf_{n\geq 1} y_n$ is just $\inf\{y_n: n\in\mathbb{N}\}$.
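A small computation can make the definition concrete; with $x_m=(-1)^m(1+1/m)$ (my example), the suffix suprema $y_n=\sup_{m\ge n}x_m$ form a non-increasing sequence tending to the limsup, which is $1$:

```python
# x_m = (-1)^m (1 + 1/m): limsup is 1 even though sup of the whole sequence is 1.5
N = 10000
x = [(-1)**m * (1 + 1 / m) for m in range(1, N + 1)]

# y[n] = sup_{m >= n} x_m, computed as a running suffix maximum
y = x[:]
for i in range(N - 2, -1, -1):
    y[i] = max(x[i], y[i + 1])

print(y[0], y[-1])   # y is non-increasing and tends to 1
```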
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Let $f, g: X \to [-\infty, \infty]$ be measurable functions. Is it true that $f - g$ (where it's defined) is measurable? Let $f, g: X \to [-\infty, \infty]$ be measurable functions. Let $X'$ denote the set of all $x$ such that $f(x), g(x) \notin \{-\infty, \infty\}$. Then $X'$ is a measurable set.
Is it true that $h: X' \to \Bbb{R}$ defined by $h(x) = f(x) - g(x)$ is measurable? I think this should be obvious, but I'm having a hard time seeing it. I know if the codomain were $\Bbb{R}$ and $\Bbb{C}$, the sum and difference are measurable (but in that case, we don't need to restrict $X$).
Any help appreciated.
| Yes, it is true.
$f-g$ is not defined on the set $B=\Big(\{f=\infty\}\cap\{g=\infty\}\Big)\cup\Big(\{f=-\infty\}\cap\{g=-\infty\}\Big)$. This is a measurable set; for example
$$
\{f=\infty\}=\bigcap_{n\in\mathbb{N}}\{f>n\}
$$
Thus, for $a\in\mathbb{R}$,
$$
\begin{align}
\{f-g<a\}=(X\setminus B) \cap\bigcup_{q\in\mathbb{Q}}\{f<q\}\cap\{q-a<g\}\tag{1}\label{one}
\end{align}
$$
which is measurable, since each set $\{f<q\}$ and $\{q-a<g\}$ is measurable ($f$ and $g$ are extended-real-valued measurable functions), and the union in $\eqref{one}$ is over a countable set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve diffrential equations of Chemical Kinetics 3rd order reaction [Problem setting]
(ii) A chemical reaction is described by
$$
A+B+C \to^{k_1} D\\
D \to_{k_2} A+B+C
$$
If $x_1, x_2, x_3, x_4$ are the concentrations of A, B, C and D respectively, then write down the governing equations.
Hence deduce the equations for
$$
A+A+A \to^{k_1} A_3 \\
A_3 \to_{k_2} A + A + A \\
$$
Solve these equations and illustrate the solutions in a graph of the concentration of $A$ against $A_3$.
D. N. Burghes, M. S. Borrie, "Modelling with Differential Equations", 1990, p. 134, Chapter 6, Exercises 3.(ii)
[Solved halfway]
If $x$ is the concentration of $A$, $y$ of $A_3$,
\begin{eqnarray*}
-\frac{1}{3}\frac{dx}{dt}
&=& k_{1}x^3 - k_{2}y\\
-\frac{dy}{dt} &=& k_2y - k_1x^{3}
\end{eqnarray*}
where $k_1, k_2$ are the reaction rates.
rearrange,
\begin{eqnarray}
\left\{
\begin{array}{ll}
\frac{dx}{dt}
&=& 3k_{2}y - 3k_{1}x^3 & (1) \\
\frac{dy}{dt} &=& k_1x^{3} - k_2y & (2)
\end{array}
\right.
\end{eqnarray}
rearrange eq(1),
$$
\frac{dx}{dt} = 3k_2y - 3k_{1}x^3\\
y = \frac{1}{3k_2}(3k_{1}x^3 + \frac{dx}{dt})\\
\frac{dy}{dt} = \frac{1}{3k_2}(3k_{1}3x^2\frac{dx}{dt} + \frac{d^2x}{dt^2})
= \frac{3k_{1}x^2}{k_2}\frac{dx}{dt} + \frac{1}{3k_2}\frac{d^2x}{dt^2}
$$
substitute to (2) from it, derive differential equation.
\begin{eqnarray*}
\frac{dy}{dt} &=& k_1 x^3 - k_2y\\
\frac{3k_{1}x^2}{k_2}\frac{dx}{dt} + \frac{1}{3k_2}\frac{d^2x}{dt^2} &=& k_1 x^3 - k_2\frac{1}{3k_2}(3k_{1}x^3 + \frac{dx}{dt})\\
\frac{3k_{1}x^2}{k_2}\frac{dx}{dt} + \frac{1}{3k_2}\frac{d^2x}{dt^2} &=& k_1 x^3 - k_{1}x^3 -\frac{1}{3} \frac{dx}{dt}\\
9k_{1}x^2 \frac{dx}{dt} + \frac{d^2x}{dt^2} &=& -k_2 \frac{dx}{dt}\\
\frac{d^2x}{dt^2} + (9k_{1}x^2 + k_2)\frac{dx}{dt} &=& 0\\
\end{eqnarray*}
replace $p=dx/dt$,
$$
\frac{d^2 x}{dt^2}
= \frac{dp}{dt}
= \frac{dp}{dx}\frac{dx}{dt}
= \frac{dp}{dx} p
$$
substitute $p$,
$$
\frac{d^2x}{dt^2} + (9k_{1}x^2 + k_2)\frac{dx}{dt} = 0\\
\frac{dp}{dx} p + (9k_{1}x^2 + k_2)p = 0\\
$$
if $p \neq 0$,
$$
\frac{dp}{dx} p + (9k_{1}x^2 + k_2)p = 0\\
\frac{dp}{dx} + (9k_{1}x^2 + k_2) = 0\\
\frac{dp}{dx} = -(9k_{1}x^2 + k_2)\\
p = -3k_1x^3 -k_2x + c_0
$$
now have a first order differential equation and can separate the variables to give,
$$
\int \frac{dx}{3k_1x^3 + k_2x - c_0} = - \int dt\\
\frac{1}{3k_1} \int \frac{dx}{x^3 + \frac{k_2}{3k_1}x - \frac{c_0}{3k_1} } = -t+ c_1\\
$$
How can I solve the rest?
| From general principles of mass conservation, or the construction of the system by mass exchange terms that balance, you can easily see that
$$
x+3y=C=x_0+3y_0
$$
is a constant (each molecule of $A_3$ carries three units of $A$). Thus the system lives on the line $y=\frac{C-x}{3}$. From the right side of the first equation we see that equilibrium points satisfy $k_1x^3-k_2y=0$. There is only one intersection point of the rising cubic $y=\frac{k_1}{k_2}x^3$ with the falling line $y=\frac{C-x}{3}$, thus exactly one stationary point for the system.
Eliminating $y$ from the first ODE, one finds
$$
\dot x = -3(k_1x^3-k_2y)=-(3k_1x^3+k_2x-k_2C)
$$
so that the $x$-derivative of the right side is negative for all $x>0$. This means that the stationary point is attracting (stable); all "physical" solutions converge to that point.
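A forward-Euler sketch of the system from equations (1) and (2) (rate constants, initial data, and step size are my own illustrative choices) shows the conserved quantity $x+3y$ staying fixed while the state settles at the unique stationary point:

```python
# forward-Euler integration of  x' = 3 k2 y - 3 k1 x^3,  y' = k1 x^3 - k2 y
k1, k2 = 1.0, 1.0
x, y = 2.0, 0.0                 # illustrative initial concentrations
C = x + 3 * y                   # conserved: each A3 carries three A's
dt = 0.001
for _ in range(20000):          # integrate up to t = 20
    dx = 3 * k2 * y - 3 * k1 * x**3
    dy = k1 * x**3 - k2 * y
    x, y = x + dt * dx, y + dt * dy

print(x, y, x + 3 * y)          # settles where k1 x^3 = k2 y on the line x + 3y = C
```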
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3805920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |