Question about Isomorphism I was reading this example on Wikipedia. It says the following: Another example is the quotient of $\mathbb{R}^n$ by the subspace spanned by the first $m$ standard basis vectors. The space $\mathbb{R}^n$ consists of all $n$-tuples of real numbers $(x_1, ..., x_n)$. The subspace, identified with $\mathbb{R}^m$, consists of all $n$-tuples such that the last $n-m$ entries are zero: $(x_1, ..., x_m, 0, 0, ..., 0)$. Two vectors of $\mathbb{R}^n$ are in the same congruence class modulo the subspace if and only if they are identical in the last $n-m$ coordinates. The quotient space $\mathbb{R}^n/\mathbb{R}^m$ is isomorphic to $\mathbb{R}^{n-m}$ in an obvious manner. I don't understand the last line. Why is $\mathbb{R}^n/\mathbb{R}^m$ isomorphic to $\mathbb{R}^{n-m}$? Can anyone please help? Maybe I am not clear on what an isomorphism is. Can anyone explain? Thanks!
You can write the quotient space as $\mathbb{R}^n/\mathbb{R}^m=\{v + \mathbb{R}^m : v \in \mathbb{R}^n\}$. When we consider the projection $p$ from $\mathbb{R}^n$ to $\mathbb{R}^n/\mathbb{R}^m$, we can see that its kernel is $\ker(p)=\mathbb{R}^m$, since $p(v)=0$ for all $v \in \mathbb{R}^m$. Therefore the dimension of $\mathbb{R}^n/\mathbb{R}^m$ has to be $n-m$, since $\dim(\ker(p))+\dim(\operatorname{im}(p))=n$ and $\dim(\ker(p))=m$. Concretely, the map sending the class of $(x_1,\dots,x_n)$ to $(x_{m+1},\dots,x_n)$ is a well-defined linear bijection, and this is the "obvious" isomorphism $\mathbb{R}^n/\mathbb{R}^m\cong\mathbb{R}^{n-m}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to find the transition matrix for ordered basis of 2x2 diagonal matrices The problem: For the vector space of lower triangular matrices with zero trace, given ordered basis: $B=${$$ \begin{bmatrix} -5 & 0 \\ 4 & 5 \\ \end{bmatrix}, $$ \begin{bmatrix} -1 & 0 \\ 1 & 1 \\ \end{bmatrix}} and $C=${$$ \begin{bmatrix} -5 & 0 \\ -4 & 5 \\ \end{bmatrix}, $$ \begin{bmatrix} -1 & 0 \\ 5 & 1 \\ \end{bmatrix}} find the transition matrix $C$ to $B$. I know how to find a transition matrix when the basis consists of $n \times 1 $ vectors, but my textbook doesn't address this scenario where the basis consists of a set of $2 \times 2$ matrices and haven't found applicable guidance online.
Hint: If you know how to solve the problem for $n\times 1$ vectors then consider that your matrices can be considered as vectors of a vector space with standard basis the $2\times 2$ matrices that have a single entry $=1$ and the other entries $=0$. In this basis, the matrix: $$ \begin{bmatrix} -5 & 0 \\ 4 & 5 \\ \end{bmatrix} $$ is the vector $$ \begin{bmatrix} -5 \\ 0 \\ 4\\ 5 \\ \end{bmatrix} $$ You can do the same for the other matrices and solve the problem as for usual vectors, but note that your sets $B$ and $C$ are not bases for $M_{2\times 2}(\mathbb{R})$. If we work in the space of lower triangular matrices with zero trace in $M_{2\times 2}(\mathbb{R})$, then this subspace has dimension $2$ and any matrix in it can be expressed as $$ \begin{bmatrix} a&0\\ b&-a \end{bmatrix} =a \begin{bmatrix} 1&0\\ 0&-1 \end{bmatrix} +b \begin{bmatrix} 0&0\\ 1&0 \end{bmatrix} =a\hat i +b \hat j $$ so $a$, $b$ can be seen as the components of a vector $(a,b)^T$ in the basis $\{\hat i, \hat j\}$. In this notation your bases are: $$ B=\{(-5,4)^T,(-1,1)^T\} \qquad C=\{(-5,-4)^T,(-1,5)^T\} $$ Now you can find the $2\times2$ matrix that represents the transformation (in the basis $\{\hat i, \hat j\}$) by solving: $$ \begin{pmatrix} x&y\\z&t \end{pmatrix} \begin{pmatrix} -5\\4 \end{pmatrix}= \begin{pmatrix} -5\\-4 \end{pmatrix} $$ and $$ \begin{pmatrix} x&y\\z&t \end{pmatrix} \begin{pmatrix} -1\\1 \end{pmatrix}= \begin{pmatrix} -1\\5 \end{pmatrix} $$
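For anyone who wants to check the arithmetic, here is a small numerical sketch (plain Python/NumPy rather than a CAS; the variable names are mine, and note that some texts define the transition matrix as acting in the opposite direction):

    import numpy as np

    # Coordinates of the B and C matrices in the basis {i_hat, j_hat} above,
    # stored as columns.
    B = np.array([[-5, -1],
                  [ 4,  1]], dtype=float)
    C = np.array([[-5, -1],
                  [-4,  5]], dtype=float)

    # M sends each B-vector to the corresponding C-vector: M @ B = C.
    M = C @ np.linalg.inv(B)
    print(M)                         # [[ 1.  0.] [24. 29.]]
    print(M @ B[:, 0], M @ B[:, 1])  # recovers the columns of C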
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Is the empty set homeomorphic to itself? Consider the empty set $\emptyset$ as a topological space. Since the power set of it is just $\wp(\emptyset)=\{\emptyset\}$, this means that the only topology on $\emptyset$ is $\tau=\wp(\emptyset)$. Anyway, we can make $\emptyset$ into a topological space and therefore talk about its homeomorphisms. But here, we seem to have an annoying pathology: is $\emptyset$ homeomorphic to itself? In order for this to be true, we need to find a homeomorphism $h:\emptyset \to \emptyset$. It would be very unpleasant if such a homeomorphism did not exist. I was tempted to think that there are no maps from $\emptyset$ into $\emptyset$, but consider the following definition of a map: Given two sets $A$ and $B$, a map $f:A\to B$ is a subset of the Cartesian product $A\times B$ such that, for each $a\in A$, there exists exactly one pair $(a,b)\in f\subset A\times B$ (obviously, we denote such unique $b$ by $f(a)$; $A$ is called the domain of the map $f$ and $B$ is called the codomain of the map $f$). Thinking this way, there is (a unique) map from $\emptyset$ into $\emptyset$! This is just $h=\emptyset\subset \emptyset\times \emptyset$. This is in fact a map, since I can't find any element in $\emptyset$ (domain) which contradicts the definition. But is $h$ a homeomorphism? What does it mean for $h$ to have an inverse, since the concept of the identity map is not clear for $\emptyset$? Nevertheless, $h$ seems to be continuous, since it can't contradict (by emptiness) anything in the continuity definition ("pre-images of open sets are open")… So is $\emptyset$ homeomorphic to itself? What is the mathematical consensus about this?

* "Homeomorphic by definition"?
* "We'd rather not speak about empty set homeomorphisms…"
* "…"?
Your map $h$ does exist, and is a homeomorphism. In fact, it's the identity map: for every element $x\in\emptyset$, $h(x)=x$. So since $h\circ h=h=\operatorname{id}$, $h$ is its own inverse. Since both $h$ and $h^{-1}=h$ are continuous, $h$ is a homeomorphism. (Incidentally, checking that $h$ is continuous isn't entirely vacuous. You have to check that $h^{-1}(U)$ is open for any open subset $U\subseteq\emptyset$. It is not true that there are no choices of $U$: rather, there is exactly one choice of $U$, namely $U=\emptyset$. Of course, $h^{-1}(\emptyset)=\emptyset$ is indeed open.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 3, "answer_id": 0 }
Showing Directly that Dyck Paths Satisfy the Catalan Recurrence How would one show, without appealing to a bijection with a well known problem, that Dyck Paths satisfy the Catalan recurrence?
A slightly novel approach to this: Considering your paths as mountain ranges (cf. this), let me first show: Lemma. The number of mountain ranges (i.e. lattice paths over $\mathbb Z^2$ between $(0,0)$ and $(m,0)$ with only steps $(1,-1)$ and $(1,1)$) that are always contained in the region $\mathbb Z\times\mathbb Z_{\le 0}$ equals $0$ if $m\ge 1$ is odd and $$\binom{m}{\frac{m}2}-\binom{m}{\frac{m}2-1}$$ if $m\ge 2$ is even. We also notice $$\binom{2m}{m}-\binom{2m}{ m-1}=\frac{(2m)!}{m!(m-1)!}\left(\frac1m-\frac1{m+1}\right)=\frac{1}{m+1}\binom{2m}m,$$ so that we indeed pick up the Catalan numbers $C_m$. Proof (following this great post). If $m$ is odd, then looking at the $y$ coordinate modulo $2$ shows that you cannot land at $y=0$. Otherwise, you can pick the positions of the $\frac{m}2$ "up" moves, and then the other $\frac m2$ moves must be down moves to land at $y=0$. So $$\binom{m}{\frac{m}2}$$ is the number of mountain ranges starting at $(0,0)$ and ending at $(m,0)$. We now need to count the invalid mountain ranges, i.e. those which leave the region $y\le 0$. Any such mountain range has to intersect the line $y=1$. By André's reflection principle, the number of mountain ranges that start at $(0,0)$, end at $(m,0)$ and intersect $y=1$ equals the number of mountain ranges that start at $(0,2)$ and end at $(m,0)$. Using an argument analogous to the one before (exercise), this is $$\binom{m}{\frac m2-1},$$ which proves the Lemma. $\square$ Exercise. (see again the before-mentioned post) Generalize the Lemma to mountain ranges ending in $(m,n)$ for some $n\in\mathbb Z\cap [-m,m]$ and show that you obtain the number (where a binomial coefficient with negative entries is set to $0$ by convention) $$\binom{m}{\frac{m+n}{2}}-\binom{m}{\frac{m+n}{2}-1}.$$ Some nice applications:

* The number of all mountain ranges that start at $(0,0)$ and end on the line $x=m$ while always staying in $\mathbb Z\times\mathbb Z_{\le 0}$ is $$\sum_{k=-m}^0 \begin{cases}0 & \text{ if }m+k\text{ is odd};\\ \binom{m}{\frac{m+k}{2}}-\binom{m}{\frac{m+k}{2}-1} & \text{ if }m+k\text{ is even}.\end{cases}$$ This is a telescoping sum equal to $$\binom{m}{\frac{m}2}$$ if $m$ is even and $$\binom{m}{\frac{m-1}2}$$ if $m$ is odd. In short, the number equals $$\binom{m}{\left\lceil\frac m2\right\rceil}.$$
* One can do a similar thing if you don't want to intersect with $y=r+1$ for some $m>r\ge 1$. Looking at step 2 here (very similar to the Lemma above), we then obtain (binomial coefficients with negative entries being set to $0$) $$\sum_{k=-m}^r \binom{m}{\frac{m+k}2}-\binom{m}{\frac{m+k}2-r-1}.$$ This once again telescopes and we are left with, where for simplicity we assume $m\ge r+2$, (someone please re-check my calculations :)) $$\sum_{k=-r}^0\binom{m}{\left\lfloor\frac{m+r}2\right\rfloor+k}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Integration of Partial Fraction Expansion Hi, this is my first time posting a question on this website. Thank you in advance for helping me out here. My question is: Suppose the density of $X$ is $$f(x) = \frac{Kx^2}{(1 + x)^5}$$ when $x > 0$. Find the constant $K$ and the density of $Y = \frac{1}{(1 + X)}$. One more thing: since this is a pdf, its integral over $x>0$ equals $1$. I tried partial fraction expansion but it was long and didn't lead to an answer. For the density part I got up to $\frac{-Kx^2}{(x+1)^3}$. However, I do not know $K$ and could not finish it. Thank you again
Hint: Use $u$-sub first to get \begin{align} \int \frac{Kx^2}{(1+x)^5}\ dx = \int \frac{K(u-1)^2}{u^5}\ du. \end{align}
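Following the hint through, the normalization $\int_0^\infty f(x)\,dx=1$ pins down $K$; here is a quick symbolic check, assuming SymPy is available:

    import sympy as sp

    x, K = sp.symbols('x K', positive=True)
    pdf = K * x**2 / (1 + x)**5
    total = sp.integrate(pdf, (x, 0, sp.oo))    # K/12
    print(total, sp.solve(sp.Eq(total, 1), K))  # K/12  [12]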
{ "language": "en", "url": "https://math.stackexchange.com/questions/1948966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Plotting sets in $\Bbb R^2$ or $\Bbb C$ in some CAS I'm curious about how to plot subsets of $\Bbb R^2$ or $\Bbb C$ in some computer algebra systems, mainly sage, axiom, mathematica and maple, in order of preference. To keep the question from being too broad, a single example is enough for me. How can you plot a set like $$\{z\in\Bbb C: |z-3|<|z+2i|\}$$ in the above mentioned CAS? My interest is mainly sage and axiom; the other systems I put here optionally. Note: the above set was typed randomly. It is just an example to see how I can write the code. Thank you in advance!
In SageMath, we can use region_plot as follows: sage: f = lambda x, y: (x-3)^2 + y^2 < x^2 + (y+2)^2 sage: f(3, 2) True sage: region_plot(f, (-5, 5), (-5, 5)) Launched png viewer for Graphics object consisting of 1 graphics primitive Note that we would get the same plot for the "complex" version: sage: g = lambda x, y: abs(x + i*y - 3) < abs(x + i*y + 2*i) sage: region_plot(g, (-5, 5), (-5, 5)) Launched png viewer for Graphics object consisting of 1 graphics primitive
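If none of the requested CAS's is at hand, the same region can be drawn in plain Python with NumPy/matplotlib (a sketch, not one of the systems asked about):

    import numpy as np
    import matplotlib.pyplot as plt

    xx, yy = np.meshgrid(np.linspace(-5, 5, 500), np.linspace(-5, 5, 500))
    zz = xx + 1j * yy
    inside = np.abs(zz - 3) < np.abs(zz + 2j)   # |z - 3| < |z + 2i|
    plt.contourf(xx, yy, inside.astype(float), levels=[0.5, 1.5])
    plt.gca().set_aspect('equal')
    plt.show()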
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving the correspondence theorem for groups The correspondence theorem in my notes is: Let $N \triangleleft G$, $\pi = \text{can}:G \rightarrow G/N$. The map $H \rightarrow \pi(H)$ is a bijection between subgroups of $G$ containing $N$ and subgroups of $G/N$. Under this bijection, normal subgroups match with normal subgroups. I've been asked to prove this theorem using the following three facts:

1. $N \triangleleft G$ and $K \leq G/N$; then $\pi^{-1}(K) \triangleleft G \iff K \triangleleft G/N$
2. $N \triangleleft G$ and $N \leq H \leq G \implies H = \pi^{-1}\pi(H)$
3. Since $\pi$ is surjective, $K \subseteq G/N \implies K = \pi\pi^{-1}(K)$

Firstly, I want to prove that the map $H \rightarrow \pi(H)$ is bijective. Using $2$, I have that $H = \pi^{-1}\pi(H)$, which means that $\pi^{-1}$ is surjective when its pre-image is restricted to $\pi(H)$. In particular this means that the map $\pi(H) \rightarrow H$ is surjective, which is equivalent to $H \rightarrow \pi(H)$ being injective. As for surjection, the map $H \rightarrow \pi(H)$ is clearly surjective by the construction of the map. Now using $1$, $K \triangleleft G/N \iff \pi^{-1}(K) \triangleleft G$. Therefore normal subgroups "match". I haven't used $3$, so I've definitely done something wrong. I would appreciate some help on where I'm going wrong.
I think I found the answer, so I'll post it here. Firstly I'd like to prove that the map $\phi :H \rightarrow\pi(H)$ is bijective, where $\pi$ is the map $G \rightarrow G/N$. This $\phi$ bijectively maps subgroups of $G$ containing $N$ to subgroups of $G/N$. Injective: $\phi(H_1) = \phi(H_2) \implies \pi(H_1) = \pi(H_2) \implies \pi^{-1}\pi(H_1) = \pi^{-1}\pi(H_2)$. Now by applying $2$ to each side of this equality, I have that $H_1 = H_2$. Surjective: Take $A/N \leq G/N$. By $3$ I have that $\pi\pi^{-1}(A/N) = A/N$. Furthermore, $A/N \leq G/N \implies N \triangleleft A \leq G$. Therefore $\pi^{-1}(A/N) = A$ which is a subgroup of $G$ containing $N$. so $\phi(A) = \pi(A) = A/N$. Therefore the map is also surjective. Finally, I want to show that normal subgroups in this map "match" with each other. This immediately follows from $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Splitting Field Problem I have a problem on splitting field as follows: Determine the degree of the splitting field of the polynomial $x^6-7$ over: (a) $\mathbb{Q},$ (b) $\mathbb{Q(\alpha)},$ where $\alpha$ is primitive 3rd root of unity, (c) $\mathbb{F_3},$ (field with 3 elements). I try to prove (a) in the following way: Since $x^6-7=(x^3-\sqrt7)(x^3+\sqrt7),$ then the splitting field $K$ for $x^6-7$ must contain the splitting fields for $(x^3-\sqrt7)$ and $(x^3+\sqrt7).$ The roots of $(x^3-\sqrt7)$ are $7^{1/6}, 7^{1/6}\xi, 7^{1/6}\xi^2,$ and the roots of $(x^3+\sqrt7)$ are $-7^{1/6}, -7^{1/6}\xi, -7^{1/6}\xi^2,$ where $\xi$ is a primitive sixth root of unity. Is this reasoning correct? How do I solve parts (b) and (c)? Thanks in advance!
As $\sqrt[6]{7}$ is a zero of the polynomial and $\sqrt[6]{7} \not \in \mathbb{Q}$ we adjoin it to $\mathbb{Q}$ to obtain $\mathbb{Q}(\sqrt[6]{7})$. Now in $\mathbb{Q}(\sqrt[6]{7})$ the polynomial factors into $(x - \sqrt[6]{7})(x + \sqrt[6]{7})(x^2 + \sqrt[6]{7}x + \sqrt[3]{7})(x^2 - \sqrt[6]{7}x + \sqrt[3]{7})$. Now the roots of the quadratic factors are $\xi\sqrt[6]{7},\xi^2\sqrt[6]{7},-\xi\sqrt[6]{7},-\xi^2\sqrt[6]{7}$, where $\xi$ is a primitive third root of unity. As $\xi \not \in \mathbb{Q}(\sqrt[6]{7})$, adjoining it we obtain that $\mathbb{Q}(\sqrt[6]{7}, \xi)$ is the splitting field of the polynomial over $\mathbb{Q}$. Now we have: $$[\mathbb{Q}(\sqrt[6]{7}, \xi) : \mathbb{Q}]=[\mathbb{Q}(\sqrt[6]{7},\xi) : \mathbb{Q}(\sqrt[6]{7})][\mathbb{Q}(\sqrt[6]{7}) : \mathbb{Q}] = 2 \cdot 6 = 12$$ This is true as the minimal polynomial of $\xi$ over $\mathbb{Q}(\sqrt[6]{7})$ is $x^2 + x + 1$ and the minimal polynomial of $\sqrt[6]{7}$ over $\mathbb{Q}$ is $x^6 - 7$. Therefore the degree is $12$. For the second part, as $\mathbb{Q}(\xi) = \mathbb{Q}(\xi^2)$ and using the previous part, we have that the splitting field is again $\mathbb{Q}(\sqrt[6]{7}, \xi)$, but this time the degree is $6$. For the third part we have that $x^6 - 7 = (x+1)^3(x+2)^3$ in $\mathbb{F}_3$, so the splitting field is in fact $\mathbb{F}_3$ and the degree is $1$.
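A quick sanity check of the two factorizations, assuming SymPy is available:

    import sympy as sp

    x = sp.Symbol('x')
    print(sp.factor(x**6 - 7))             # x**6 - 7, i.e. irreducible over Q
    print(sp.factor(x**6 - 7, modulus=3))  # (x + 1)**3*(x + 2)**3, so it splits over F_3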
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Simplify this expression fully How would I simplify fully the following expression? $\dfrac{{\sqrt 2}({x^3})}{\sqrt{\frac {32}{x^2}}}$ So far I have got this $\dfrac{{\sqrt 2}{x^3}}{{\frac{\sqrt {32}}{\sqrt {x^2}}}}$ = $\dfrac{{\sqrt 2}{x^3}}{{\frac{4\sqrt 2}{x}}}$ I am not quite sure if this is correct, however; could someone help explain how I would simplify this expression?
There is a mistake in the OP. Recall that $\sqrt{x^2}=|x|\ne x$ when $x<0$. To simplify, we can write $$\frac{\sqrt 2 x^3}{\sqrt{\frac{32}{x^2}}}=\frac{\sqrt 2 x^3}{\frac{4\sqrt 2}{|x|}}=\frac{x^3|x|}{4}$$
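A quick check of the simplification (assuming SymPy, and restricting to $x>0$ so that $|x|=x$):

    import sympy as sp

    x = sp.symbols('x', positive=True)   # assume x > 0, so |x| = x
    expr = sp.sqrt(2) * x**3 / sp.sqrt(32 / x**2)
    print(sp.simplify(expr))             # x**4/4, matching x^3*|x|/4 for x > 0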
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
What's funny about $\forall \forall \exists \exists$? So, what's the joke in number $9$? $9$. You understand the following joke: $\forall \forall \exists \exists$
What struck me (personally) as droll was:

* It can be construed as redundant, in that two "for every" clauses in succession can be replaced by a single "for every", and similarly for two successive "there exists" clauses.
* It looks like the cry of a cartoon character falling upside down from a great height.

After reading the existing responses here, I lean toward the interpretation "whenever you see a $\forall$, a $\exists$ lurks nearby", a wry comment on epsilontics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 5 }
Is there a method for solving equations like $x^{2}-\sqrt{x}-2=0$? Is there a method for solving equations like $x^{2}-\sqrt{x}-2=0$? As far as I can remember, I don't know any method for equations like this.
If you use the substitution $u=\sqrt x$ you see that your equation is basically a quartic: $u^4-u-2=0$. Since we can solve all polynomials of degree less than or equal to $4$, there is a "method" to solve it, but it's not necessarily easy. As it turns out, there are exactly two real solutions to $u^4-u-2=(u+1)(u^3-u^2+u-2)=0$. The first is $u=-1$, but that is an extraneous root introduced by the substitution, since $u=\sqrt x$ must be nonnegative. The other real root is $u\approx 1.3532$. Solving back for $x$, we get $x=u^2\approx 1.8312$. Thus, there is one real root of the original equation, and it is the square of the second root given by Wolfram here.
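A numerical check of the above, assuming NumPy is available:

    import numpy as np

    # u^4 - u - 2 = 0, with u = sqrt(x) >= 0
    roots = np.roots([1, 0, 0, -1, -2])
    u = roots.real[np.abs(roots.imag) < 1e-9]   # keep the real roots
    u = u[u >= 0]                               # discard u = -1: sqrt(x) >= 0
    x = u**2
    print(x, x**2 - np.sqrt(x) - 2)             # x ≈ 1.8312..., residual ≈ 0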
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Calculus - Increasing function Problem Find all values for $a\in \mathbb R$ so the function $\, f(x) = x^3 + ax^2 + 3x - 1\,$ is always increasing in $\mathbb R$: $\ f'(x) = 3x^2 + 2ax + 3 $ , So for the function to be increasing, $\,f'(x) $ must be greater than $ 0.$ Therefore $ a\gt -3(x^2+1)/2x $ Is this right for an answer ? Thank you.
You should think about whether dividing by $x$ was actually allowed. Remember that $x$ could be $0$, since you need that inequality to hold for all real $x$ (and for negative $x$, dividing flips the inequality). For $f$ to be increasing it is enough that $f'(x)=3x^2+2ax+3\ge 0$ for all $x$, i.e. the parabola never dips below the $x$-axis, so it cannot have two distinct real roots. Hence the quantity $b^2-4ac$ (the discriminant) must be $\le 0$. You know what $b$ and $c$ are and you can solve for $a$.
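Carrying the hint out symbolically (assuming SymPy; $\le 0$ rather than $<0$ because $f'\ge 0$ suffices for increasing):

    import sympy as sp

    a = sp.symbols('a', real=True)
    disc = (2*a)**2 - 4*3*3   # b^2 - 4ac for 3x^2 + 2ax + 3
    print(sp.solve_univariate_inequality(disc <= 0, a))   # -3 <= a <= 3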
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A formula for the $n$th term of a sequence Find a formula for the $n$th term of the sequence $$1,2,2,3,3,3,4,4,4,4,\ldots.$$ Let $x_n$ denote the $n$th term of the sequence. If $$1+2+\cdots+m < n \leq 1+2+\cdots+m+(m+1)$$ then $x_n = m+1$. Is this a formula or not?
This is not considered a formula, because it does not give $x_n$ as a function of $n$ but instead gives (sharp) bounds on what $x_n$ is. Instead, note that

* the last $n$ with $x_n=1$ is $n=1=T_1$,
* the last $n$ with $x_n=2$ is $n=3=T_2$,
* the last $n$ with $x_n=3$ is $n=6=T_3$, etc.

Here $T_n$ is the $n$th triangular number. There is an analogue of the square root – the triangular root – that yields $n$ given $T_n$: $$n=\frac{\sqrt{8T_n+1}-1}2$$ Therefore a possible formula for $x_n$ is $$x_n=\left\lceil\frac{\sqrt{8n+1}-1}2\right\rceil$$ Alternatively, this sequence is A002024 in the OEIS and a simpler formula given there is $$x_n=\left\lfloor\sqrt{2n}+\frac12\right\rfloor$$
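Both closed forms can be checked against the sequence directly; a small sketch in Python:

    from math import ceil, floor, sqrt

    # the sequence 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, ...
    seq = [m for m in range(1, 30) for _ in range(m)]
    for n, x in enumerate(seq, start=1):
        assert x == ceil((sqrt(8*n + 1) - 1) / 2)   # ceiling of the triangular root
        assert x == floor(sqrt(2*n) + 0.5)          # OEIS A002024 formula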
{ "language": "en", "url": "https://math.stackexchange.com/questions/1949968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Multiplication by One Throughout school we are taught that when something is multiplied by 1, it equals itself. But as I am learning about higher level mathematics, I am understanding that not everything is as black and white as that (like how infinity multiplied by zero isn't as simple as it seems). Is there anything in higher-level, more advanced, mathematics that when multiplied by one does not equal itself?
It may not be what you're looking for, but I believe this is related. In the case of limits, repeated multiplication by a limiting value of $1$ can have surprising effects. Consider this: $$\left(1+\frac{1}{n}\right)^{n}=\left(1+\frac{1}{n}\right)\left(1+\frac{1}{n}\right)\left(1+\frac{1}{n}\right)\ldots$$ For large values of $n$, this appears to approach: $$\left(1+0\right)\left(1+0\right)\left(1+0\right)\ldots=1\cdot1\cdot1\ldots=1$$ $$1^{\infty}\stackrel{?}{=}1$$ However, this is not the true value of the limit. In reality: $$\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n}=e\gt1$$ It appears that infinite repeated multiplication by $1$ somehow becomes another value entirely. This is why $1^\infty$ is considered an indeterminate form.
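A quick numerical illustration of the limit (plain Python):

    for n in (10, 10**3, 10**6, 10**9):
        print(n, (1 + 1/n)**n)   # tends to e ≈ 2.718281828..., not to 1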
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 4 }
Combined events and Venn diagrams: If $A$ and $B$ satisfy these conditions what is $P(A \cap B')$? Events $A$ and $B$ satisfy $P((A \cup B)') = 0.2$ and $P(A) = P(B) = 0.5$. Find $P(A \cap B')$.
Here are some potentially useful formulae:
$$P(A'\cap B')=P((A\cup B)')$$
$$P(A')=1-P(A)$$
$$P(A|B)=\frac{P(A\cap B)}{P(B)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that if $f$ is rapidly decreasing, we have $|f(x-y)|\le C_N/(1+|x|)^N$ when $|y|\le |x|/2$. Let $u(x,t)=(f*\mathcal{H}_t)(x)$ for $t>0$ where $f$ is a function in the Schwartz space and $\mathcal{H}_t$ is the heat kernel. Then we have the following estimate (from Stein and Shakarchi's Fourier Analysis): $$|u(x,t)|\le \int_{|y|\le|x|/2}|f(x-y)|\mathcal{H}_t(y)dy+\int_{|y|\ge|x|/2}|f(x-y)|\mathcal{H}_t(y)dy \\ \le \frac{C_N}{(1+|x|)^N}+\frac{C}{\sqrt{t}}e^{-cx^2/t}.$$ To get the first inequality, the text says "Indeed,since $f$ is rapidly decreasing, we have $|f(x-y)|\le C_N/(1+|x|)^N$ when $|y|\le |x|/2$." However, this is what I don't understand. I don't know how to get this specific form of inequality given $|y|\le |x|/2$. A related inequality given in the text is that if $g$ is rapidly decreasing then by considering the two cases $|x|\le 2|y|$ and $|x|\ge 2|y|$, we have $\sup_x |x|^l |g(x-y)|\le A_l (1+|y|)^l.$ I think this one can be shown by similar reasoning as the above one, but I really have no idea how to show this by considering the two cases. A function $f$ is rapidly decreasing if it is indefinitely differentiable and $$\sup_{x\in R} |x|^k |f^{(l)}(x)|<\infty \; \text{for every} \; k,l\ge 0.$$ I would greatly appreciate it if anyone could show this inequality.
Use the binomial theorem and the rapidly decreasing property of $f$ to prove that for all $u\in \Bbb R$, $(1/2 + \lvert u\rvert)^N\lvert f(u)\rvert \le c_N$ for some constant $c_N$ depending only on $N$ (and $f$). Then $\lvert f(x - y)\rvert \le c_N (1/2 + \lvert x - y\rvert)^{-N}$. Since $\lvert y\rvert \le \lvert x\rvert/2$, the reverse triangle inequality gives $\lvert x - y\rvert \ge \lvert x\rvert/2$. Thus $(1/2 + \lvert x - y\rvert)^{-N} \le 2^N (1 + \lvert x\rvert)^{-N}$ and so $\lvert f(x - y)\rvert \le C_N (1 + \lvert x\rvert)^{-N}$, where $C_N = 2^Nc_N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Fibonacci numbers: proof4 I need to prove the following: $F_{n+k}=F_{k-1}F_{n}+F_{k}F_{n+1}$ Firstly, I wanted to use mathematical induction, but I do not know to which letter ($n$ or $k$) the $1$ should be added, or does it not matter? I also tried to find the solution on the Internet, but unsuccessfully. Thanks
There are a couple of things you need to note:

* The proposition you're assuming is $P(k)$ where $n, k \in \mathbb Z^+$
* You need to set the "domain" as $k\geq2$ (i.e. your base case will be $P(2)$)
* You need to use strong mathematical induction for this. You should consider two consecutive generic cases such as $P(m)$ and $P(m+1)$ and show that if they hold true, then $P(m+2)$ will also hold true for all $m \in \mathbb Z^+$

Does this help? Here is the proof: $$P(k): F_{n+k}=F_{k-1}F_{n}+F_{k}F_{n+1}$$ $$ \begin{align} P(2): F_{n+2} &= F_{1}F_{n}+F_{2}F_{n+1}\\[1em] &= 1\cdot F_{n}+1\cdot F_{n+1}\\[1em] &= F_{n} + F_{n+1}\\[1em] &= F_{n+2}\tag{{P(2) is true}}\\[1em] \end{align} $$ $$P(m): F_{n+m}=F_{m-1}F_{n}+F_{m}F_{n+1}\tag{1}$$ $$P(m+1): F_{n+m+1}=F_{m}F_{n}+F_{m+1}F_{n+1}\tag{2}$$ $$P(m+2): F_{n+m+2} = F_{m+1}F_{n}+F_{m+2}F_{n+1}\tag{{Show this}}$$ $$ \begin{align} F_{n+m+2} &= F_{n+m}+F_{n+m+1}\\[1em] &= F_{m-1}F_{n}+F_{m}F_{n+1}+F_{m}F_{n}+F_{m+1}F_{n+1}\tag{using (1) and (2)}\\[1em] &= F_{n}(F_{m-1}+F_{m})+F_{n+1}(F_{m}+F_{m+1})\\[1em] &= F_{m+1}F_{n}+F_{m+2}F_{n+1}\\[1em] \end{align} $$ $\implies P(m+2)$ holds true. $\implies P(k)$ holds true.
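The identity is easy to spot-check numerically; a small Python sketch (with the convention $F_0=0$, $F_1=1$):

    def fib(n, memo={0: 0, 1: 1}):
        """Fibonacci numbers with F_0 = 0, F_1 = F_2 = 1."""
        if n not in memo:
            memo[n] = fib(n - 1) + fib(n - 2)
        return memo[n]

    for n in range(1, 15):
        for k in range(2, 15):
            assert fib(n + k) == fib(k - 1)*fib(n) + fib(k)*fib(n + 1)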
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Travelling through all edges in a complete graph I have a complete graph with 14 vertices (all edges have equal weight). What is the shortest way to go through all edges? Edit: I've got a lower bound: as I go through every edge, I visit every vertex at least $\left\lceil\frac{13}2\right\rceil$ times, so the path contains at least 98 vertices. But I don't know if such a way exists.
A complete graph with 14 vertices has $\frac{14(13)}{2} = 91$ edges. However, every traversal through a vertex on a path requires an in-going and an out-going edge. Thus, for a vertex of odd degree, the number of times you must visit it is its degree divided by 2 using ceiling division (round up). With a complete graph of 14 vertices, each vertex has degree 13, so each vertex has 6 in-out pairs and one odd edge left over. Therefore a walk must pass through each vertex 7 times, with one of its 13 distinct edges traversed twice. Note that if the path is not required to be a cycle, the start and end vertices can be exceptions. Suppose first that we traverse the graph through only the in-out pairs; this accounts for $14 \cdot 12$ edge-endpoints, i.e. $6\cdot 14 = 84$ edges, and leaves $14$ edge-endpoints (one odd endpoint per vertex) still remaining. We can pair these off: given a well-ordering of the vertices mapped to the positive integers, pair each even-numbered vertex with an odd-numbered vertex. Each such undirected edge uses up two of the remaining endpoints, so there are only $7$ further undirected edges to traverse, and our count goes up to $84 + 7 = 91$ edges. However, note that with respect to these $7$ still-untraversed edges we are left with $7$ mutually unconnected subgraphs: since all other edges have already been traversed, these pieces are not connected to one another by untraversed edges. Treating each such piece as a "supervertex", we want to link them into a single path by inserting (re-traversing) edges as needed. To make a path through $n$ unconnected vertices, $n-1$ edges must be inserted; since there are $7$ supervertices, we re-traverse $6$ more edges. Our total traversal count is thus $91 + 6 = 97$ edges. If we also wanted to return to the original vertex, we would need to insert one more edge between the first and last supervertex; staying with the open path, a walk with $97$ edges passes through $97 + 1 = 98$ vertices, counted with multiplicity. Hence, the optimal path has 98 vertices.
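One can also verify that 97 edges suffice by exhibiting an Eulerian trail on $K_{14}$ with 6 duplicated edges; a sketch, assuming a reasonably recent networkx is available:

    import networkx as nx

    G = nx.MultiGraph(nx.complete_graph(14))   # 91 edges, every vertex of odd degree 13
    # Duplicate 6 edges to pair up 12 of the 14 odd vertices, leaving exactly
    # two odd vertices (0 and 1) as the endpoints of an open trail.
    for u in range(2, 14, 2):
        G.add_edge(u, u + 1)
    print(G.number_of_edges())      # 97
    print(nx.has_eulerian_path(G))  # True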
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the generating function for $c_r = \sum^r_{i=1}i^2$ Find the generating function of where $c_0 = 0, c_r = \sum^r_{i=1}i^2$. Hence show that $\sum^r_{i=1}i^2 = C^{r+1}_3 + C^{r+2}_3$ Attempt: $c_r = \sum^r_{i=1}i^2$ = $x + 4x^2 + 9x^3 + ... + r^2x^r$ = $x(1 + 4x + 9x^2 + ... + r^2x^{r-1})$ = $x(\frac{1}{1-2x})$ How do i proceed from here? Is this even right?
Let $C(t) := \sum_{n\ge0}c_nt^n$ be the generating series. Then, using the fact that $$c_n = c_{n-1} + n^2$$ for $n>0$, we have \begin{align} C(t) = & \sum_{n\ge0}c_nt^n\\ = & c_0 + \sum_{n\ge1}(c_{n-1}+n^2)t^n\\ = & tC(t) + \sum_{n\ge0}n^2t^n\\ = & tC(t) + t\frac{d}{dt}\left(t\frac{d}{dt}\sum_{n\ge0}t^n\right)\\ = & tC(t) + t\frac{d}{dt}\left(t\frac{d}{dt}\frac{1}{1-t}\right)\\ = & tC(t) + \frac{t(1+t)}{(1-t)^3}.\\ \end{align} Therefore, $$C(t) = \frac{t(1+t)}{(1-t)^4}.$$ Since $\frac{1}{(1-t)^4}=\sum_{n\ge0}\binom{n+3}{3}t^n$, comparing coefficients of $t^r$ in $\frac{t}{(1-t)^4}+\frac{t^2}{(1-t)^4}$ gives $$c_r = \binom{r+2}{3}+\binom{r+1}{3},$$ as required.
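A quick coefficient check of this generating function, assuming SymPy:

    import sympy as sp

    t = sp.symbols('t')
    C = t*(1 + t)/(1 - t)**4
    coeffs = sp.series(C, t, 0, 10).removeO()
    for r in range(1, 10):
        c_r = coeffs.coeff(t, r)
        assert c_r == sum(i**2 for i in range(1, r + 1))
        assert c_r == sp.binomial(r + 1, 3) + sp.binomial(r + 2, 3)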
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the explicit solution for the following differential equation. I have the following differential equation $$\frac{dx}{dt} = x^2-4$$ Separating the variables, I get $$\frac{dx}{x^2-4} = dt$$ Let us write it in partial fraction form $$\frac{dx}{(x-2)(x+2)} = dt$$ $$\frac{dx}{4(x-2)} - \frac{dx}{4(x+2)} = dt $$ $$ \frac{dx}{(x-2)} - \frac{dx}{(x+2)} = 4dt $$ $$ \ln{|x-2|} - \ln{|x+2|} + C_1 = 4t + C_2 $$ Let $C_2 - C_1 = C$ $$ \ln{\left|\frac{x-2}{x+2}\right|} = 4t + C $$ $$e^{\ln{\left|\frac{x-2}{x+2}\right|}} = e^{4t+C}$$ $$e^{\ln{\left|\frac{x-2}{x+2}\right|}} = e^{4t}e^C$$ Let $e^C = C$, since it is a constant $$\frac{x-2}{x+2} = Ce^{4t}$$ Let $x(0) = x_0$ $$\frac{x_0-2}{x_0+2} = C$$ Substituting for $C$ $$\frac{x-2}{x+2} = \frac{(x_0-2)e^{4t}}{x_0+2}$$ I am rather stuck here. The solutions manual to this question gives: $$x(t) =\frac{2\left[(x_0 + 2) + (x_0 - 2)e^{4t}\right]}{(x_0 + 2) - (x_0 - 2)e^{4t}}$$ The solutions manual does not elaborate on how it came to the solution above. How do I approach the problem? Any hints? Source: Differential Equations and Boundary Value Problems: Computing and Modeling (5th Edition) by C. Henry Edwards (Author), David E. Penney (Author), David T. Calvis (Author) Question 5 Chapter 2.2 NOTE: $x(0)$ is not given at all so this is not a mistake. Hence, we simply set $x(0) = x_0$.
If $$ \dfrac{a}{b}=\dfrac{c}{d}, $$ then $$ \dfrac{a+b}{a-b}=\dfrac{c+d}{c-d}. $$ This is the Componendo & Dividendo rule of elementary algebra, which, applied to your second-to-last equation, gives the last equation of the manual.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
On $\sum_{n_1,\dots,n_k = 1}^\infty \frac{1}{(n_1+...+n_k)^p}$, $p \in \mathbb{R}^+$. Consider the "multiple harmonic series" $$\sum_{n_1,\dots,n_k = 1}^\infty \frac{1}{(n_1+...+n_k)^p}.$$ How can one study the behavior of this series for various values of $p \in \mathbb{R}^+$?
The series can be written $$ \sum_{n=1}^\infty \frac{a_{n,k}}{n^p}, $$ where $a_{n,k}$ is the number of ways $n$ can be written as an ordered sum of $k$ positive integers. We have $a_{n,k}={n-1\choose k-1}\asymp n^{k-1}$, so the series converges if and only if $p>k$, and by expressing ${n-1\choose k-1}$ as a polynomial in $n$ of degree $k-1$, the sum of the series is a rational linear combination of the Riemann zeta values $\zeta(p-k+1),\zeta(p-k+2),\ldots,\zeta(p)$.
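A numerical spot-check for $k=2$, $p=4$ (assuming mpmath is available): here $a_{n,2}=n-1$, so the sum should equal $\zeta(p-1)-\zeta(p)$.

    from mpmath import nsum, zeta, inf

    # k = 2: a_{n,2} = binomial(n-1, 1) = n - 1, so for p > 2 the sum is
    # sum (n-1)/n^p = zeta(p-1) - zeta(p).
    p = 4
    direct = nsum(lambda n: (n - 1) / n**p, [2, inf])
    print(direct, zeta(p - 1) - zeta(p))   # both ≈ 0.11973...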
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Introduction to viscosity solutions theory Can you recommend an introduction to viscosity solutions theory? More specifically, I'm looking for a modern treatment similar to Chapter 10 of Evans's Partial Differential Equations, but somewhat more detailed and comprehensive. (Of course, I'm aware of the User's Guide, but it is not quite what I'm looking for).
I would recommend the book by Bardi and Capuzzo-Dolcetta. "Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations" Bardi, Martino, Capuzzo-Dolcetta, Italo For something shorter and more introductory you can check out my notes: http://math.umn.edu/~jwcalder/222BS16/viscosity_solutions.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding $\sum\limits_{n=1}^{k}\frac{1}{n(n+1)}$ in terms of $k$ So I found a question which asked me to find the sum $$\sum_{n=1}^{k}\frac{1}{n(n+1)}$$ The only hint given was to rewrite the summation (the fraction after the sigma, does anyone know what it's called?) using fraction decomposition, so I did: $$\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}$$ So it becomes $$\sum_{n=1}^{k}\left(\frac{1}{n}-\frac{1}{n+1}\right)$$ I then tried to find a pattern by writing the terms out beginning with $n=1$ but couldn't find anything that would help me find the sum.
You have completed the problem; the sum now telescopes. The $k$th partial sum is $$S_k=\left(1-\frac12\right)+\left(\frac12-\frac13\right)+\cdots+\left(\frac1k-\frac1{k+1}\right)=1-\frac{1}{k+1}=\frac{k}{k+1},$$ which is the requested formula in terms of $k$. Consequently, the infinite series converges: $$\lim_{k\to\infty}S_k=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1950977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are homeomorphisms important? I attended a guest lecture (I'm in high school) hosted by an algebraic topologist. Of course, the talk was non-rigorous, and gave a brief introduction to the subject. I learned that the goal of algebraic topology is to classify surfaces in a way that it is easy to tell whether or not surfaces are homeomorphic to each other. I was just wondering now, why are homeomorphisms important? Why is it so important to find out whether two surfaces are homeomorphic to each other or not?
The notion of homeomorphism is of fundamental importance in topology because it is the correct way to think of equality of topological spaces. That is, if two spaces are homeomorphic, then they are indistinguishable in the sense that they have exactly the same topological properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 5, "answer_id": 1 }
How can one use the chain rule to integrate? I am trying to calculate the anti-derivative of $y=\sqrt{25-x^2}$, for which I believe I may need the chain rule $\frac{dy}{dx} = \frac{dy}{du} \times \frac{du}{dx}$. How I would use it, however, is a different matter entirely. I used this website for a tutorial, however my answer of $\left[\frac{(200-8x^2)^\frac{3}{4}}{7}\right]_0^5$ is vastly different from the actual answer $\frac{25\pi}{4}$. How exactly would someone calculate the antiderivative of a function like this?
I lied in my comment - the substitution rule is what you need but it's not used in the conventional way. See, we usually sub $u = g(x)$ so we make a hard problem easier. However, we need something different here. We start with $\int \sqrt{25 - x^2} \ dx$. Well, let's remember a trig identity: $1 - \sin^2 w = \cos^2 w.$ So what do we do? Here's a trick. Let $x = 5 \sin w$. Then $25 - x^2 = 25 - 25 \sin^2 w$. What's next? Hey, pull a 25 out and we get something nice! Here's what we got: \begin{align*} \int \sqrt{25 - x^2} \ dx &= \int 5 \cos w \sqrt{25 - 25 \sin^2 w} \ dw \\ &= \int 25 \cos^2 w \ dw \end{align*} Now with the help of another trig identity (half angle), we'll be done! Can you take it from here, pal?
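For completeness, the antiderivative and the definite integral the OP quotes come out as expected; a quick check, assuming SymPy:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(sp.sqrt(25 - x**2), x))          # x*sqrt(25 - x**2)/2 + 25*asin(x/5)/2
    print(sp.integrate(sp.sqrt(25 - x**2), (x, 0, 5)))  # 25*pi/4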
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Set of closed intervals $[x, y] \subset [0, 1]$ $\pi$-system which generates Borel $\sigma$-algebra on $[0, 1]$? How do I see that the set of closed intervals $[x, y] \subset [0, 1]$ is a $\pi$-system which generates the Borel $\sigma$-algebra on $[0, 1]$?
To see this, first verify that the set of closed intervals is a $\pi-$system, which follows since a finite intersection of closed intervals is again a closed interval (possibly empty). To see that it generates the Borel $\sigma-$algebra, it is enough to show that every open interval $(x,y)$ (and $(x,1]$ and $[0,x)$) belongs to the $\sigma-$algebra generated by the closed intervals. $(x,1]$ and $[0,x)$ trivially belong, since they are complements of closed intervals. On the other hand, the countable union of $[x+\frac 1n,y-\frac 1n]$ over $n\in\mathbb N$ yields the open interval $(x,y)$. So all open sets are in the $\sigma-$algebra generated by the closed intervals, and therefore it includes the Borel $\sigma-$algebra. The reverse inclusion follows by seeing that each closed interval is already in the Borel $\sigma-$algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Will the "closed" unit ball $\left\| x \right\| \le 1$ in $\Bbb R^n$ be a compact set for any norm? Will the "closed" unit ball $\left\| x \right\| \le 1$ in $\Bbb R^n$ always be a compact set for any norm? I am asking this question because the induced matrix norm is originally defined as a supremum, Given a vector norm ${\upsilon}:\Bbb R^n\to \Bbb R$, the induced matrix norm $\mu:\Bbb R^{m\times n} \to \Bbb R$ is defined as $\mu({\bf{A}})=\sup\{{\bf{Ax}}:\upsilon({\bf{x}})\le 1\}$. However, my lecture material later claims the closed ball "$\upsilon({\bf{x}})\le 1$" is compact so the supremum is actually a maximum, so I don't know where this lector material comes from and I have googled but cannot find the Corollary 2.153. Hope someone can help with this. Thank you!
Yes, any two norms are equivalent on a finite dimensional Banach space, so they generate the same topology. Perhaps for more detail, it is fairly easy to show that the unit ball under the $l_1$ norm, $B(l_1^n)$, is compact. If a sequence is bounded in the $l_1$-norm then it is bounded coordinate-wise; now apply Bolzano-Weierstrass (to the first coordinates, then to the second, and so on) to select a subsequence that converges coordinate-wise. In $l_1$ it is then immediate that pointwise convergence of the sequence implies norm convergence. Since any two norms are equivalent, $B(l_1^n)$ is a compact set in the topology determined by any other norm $||\cdot||$, and the unit ball $B_{||\cdot||}$ is a closed subset of a multiple of $B(l_1^n)$, thus also compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Prove that $F_n$ satisfies the recurrence relation $F_n = aF_{n-1} - \frac{1}{n}$ Suppose that $F_n = \int_0^1 \frac{x^n}{a-x}dx$, where $a>2$ and $n=0, 1, 2, 3,...$ Prove that $F_n$ satisfies the recurrence relation $F_n = aF_{n-1} - \frac{1}{n}$ My thought is that this can be proved by using an induction proof. We can assume that $F_n$ is true, then prove that $F_{n+1}$ holds, and conclude the proof. Inductive Hypothesis: Assume that $F_n$ is true ($F_n = aF_{n-1} - \frac{1}{n}$) Base Case: We need to solve for $F_0$. $$F_0 = \int_0^1 \frac{x^n}{a-x} dx = -\ln(a-x)|^1_0 = -\ln(a-1) + \ln(a) + C$$ Then show that $F_1$ holds the recurrence relation. $$F_1 = aF_0 - \frac{1}{n}$$ $$\int_0^1\frac{x}{a-x}dx = a[-\ln(a-1) + \ln(a) + C_1] - \frac{1}{n}$$ $$-a\ln(1-a) + a\ln(-a) - 1 + C_2 = -a \ln(a-1) + a \ln(a)+aC_1 - 1$$ $$-a\ln(1-a)+a\ln(-a)-1 = -a\ln(a-1) + a\ln(a)-1$$ Here's where I run into some issue. They are not quite equal, so did I integrate wrong? Inductive Step: Since we assume that $F_n$ is true, then we need to prove that $F_{n+1}$ is true. $$F_{n+1} = aF_{n}-\frac{1}{n}$$ $$\int_0^1\frac{x^{n+1}}{a-x}dx = a \int_0^1 \frac{x^n}{a-x} dx - \frac{1}{n}$$ At this point, I am not sure that how to do this integral, and check for equality. I'd like to know if I'm going in the right direction with this proof, and if I am, then how to do the integration in the inductive step.
Integration by parts ($(x^{n+1})'=(n+1)x^{n}$)
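To make the hint concrete, here is one direct route to the recurrence (my own expansion; not necessarily the route the answerer intended): write $x^n = x^{n-1}\big(a-(a-x)\big)$, so that
$$F_n=\int_0^1\frac{x^{n-1}\big(a-(a-x)\big)}{a-x}\,dx = a\int_0^1\frac{x^{n-1}}{a-x}\,dx-\int_0^1 x^{n-1}\,dx = aF_{n-1}-\frac1n.$$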
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does $\sum_{n=0}^k \cos^{2k}\left(x + \frac{n \pi}{k+1}\right) = \frac{(k+1)\cdot(2k)!}{2^{2k} \cdot k!^2}$? In the paper "A Parametric Texture Model based on Joint Statistics of Complex Wavelet Coefficients", the authors use this equation for the angular part of the filter in polar coordinates: $$\sum_{n=0}^k \cos^{2k}\left(x + \frac{n \pi}{k+1}\right)$$ My friend and I have tested many values of $k > 1$, and in each case this summation is equal to $$\frac{(k+1)\cdot(2k)!}{2^{2k} \cdot k!^2}$$ The paper asserts this as well. We are interested in having an analytic explanation of this equality, if it really holds. How can we derive this algebraically? TL;DR Is this true, and if so, why? $$\sum_{n=0}^k \cos^{2k}\left(x + \frac{n \pi}{k+1}\right) = \frac{(k+1)\cdot(2k)!}{2^{2k} \cdot k!^2}$$
Suppose we seek to verify that $$\sum_{k=0}^n \cos^{2n}\left(x+\frac{k\pi}{n+1}\right) = \frac{n+1}{2^{2n}} {2n\choose n}.$$ The LHS is $$\sum_{k=0}^n \cos^{2n}\left(x+\frac{k\times 2\pi}{2n+2}\right).$$ Observe also that $$\sum_{k=0}^n \cos^{2n}\left(x+\frac{(k+n+1)\times 2\pi}{2n+2}\right) \\ = \sum_{k=0}^n \cos^{2n}\left(x+\pi+\frac{k\times 2\pi}{2n+2}\right) = \sum_{k=0}^n \cos^{2n}\left(x+\frac{k\times 2\pi}{2n+2}\right)$$ because the cosine is raised to an even power. Therefore the LHS is in fact $$\frac{1}{2} \sum_{k=0}^{2n+1} \cos^{2n}\left(x+\frac{k\times 2\pi}{2n+2}\right).$$ Hence we need to prove that $$\frac{1}{2} \sum_{k=0}^{2n+1} \left(\exp\left(ix+k\times\frac{2\pi i}{2n+2}\right) + \exp\left(-ix-k\times\frac{2\pi i}{2n+2}\right)\right)^{2n} \\ = (n+1)\times {2n\choose n}.$$ Introducing $$f(z) = \left(\exp(ix)z+\exp(-ix)/z\right)^{2n} \frac{(2n+2)z^{2n+1}}{z^{2n+2}-1}$$ We have that the sum is $$\frac{1}{2} \sum_{k=0}^{2n+1} \mathrm{Res}_{z=\exp(2\pi ik/(2n+2))} f(z).$$ The other potential poles are at $z=0$ and at $z=\infty$ and the residues must sum to zero. For the candidate pole at zero we write $$f(z) = \left(\exp(ix)z^2+\exp(-ix)\right)^{2n} \frac{(2n+2)z}{z^{2n+2}-1}$$ and we see that it vanishes. Therefore the target sum is given by $$-\frac{1}{2} \mathrm{Res}_{z=\infty} f(z) \\ = \frac{1}{2} \mathrm{Res}_{z=0} \frac{1}{z^2} \left(\exp(ix)/z+\exp(-ix)z\right)^{2n} \frac{1}{z^{2n+1}} \frac{2n+2}{1/z^{2n+2}-1} \\ = \frac{1}{2} \mathrm{Res}_{z=0} \frac{1}{z} \left(\exp(ix)/z+\exp(-ix)z\right)^{2n} \frac{2n+2}{1-z^{2n+2}} \\ = \frac{1}{2} \mathrm{Res}_{z=0} \frac{1}{z^{2n+1}} \left(\exp(ix)+\exp(-ix)z^2\right)^{2n} \frac{2n+2}{1-z^{2n+2}}.$$ This is $$(n+1) [z^{2n}] \left(\exp(ix)+\exp(-ix)z^2\right)^{2n} \frac{1}{1-z^{2n+2}}.$$ Now we have $$\frac{1}{1-z^{2n+2}} = 1 + z^{2n+2} + z^{4n+4} + \cdots$$ and only the first term contributes, leaving $$(n+1) [z^{2n}] \left(\exp(ix)+\exp(-ix)z^2\right)^{2n} \\ = (n+1) \times {2n\choose n} \exp(ixn)\exp(-ixn) = (n+1) \times {2n\choose n}.$$ This is the claim. Remark. Inspired by the work at this MSE link.
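The identity is also easy to test numerically; a short Python check (assuming Python ≥ 3.8 for math.comb):

    import numpy as np
    from math import comb, pi

    x = 0.7309  # arbitrary; the value of the sum should not depend on x
    for k in range(1, 9):
        lhs = sum(np.cos(x + n*pi/(k + 1))**(2*k) for n in range(k + 1))
        rhs = (k + 1) * comb(2*k, k) / 4**k
        assert abs(lhs - rhs) < 1e-9, (k, lhs, rhs)
    print("identity verified for k = 1..8")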
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Show that the random variables $Y_1$ and $Y_2$ are independent Let $X_1,X_2$ be i.i.d with pdf $$f_X(x)=\begin{cases} e^{-x} & \text{for } 0< x<\infty{} \\0 & \text{elsewhere } \end{cases}$$ Show that the random variables $Y_1$ and $Y_2$ with $Y_1=X_1+X_2$ and $Y_2=\frac{X_1}{X_1+X_2}$ are independent. I know that for $Y_1$ and $Y_2$ to be independent, we need $P(Y_1\cap Y_2)=P(Y_1)P(Y_2)$ for events determined by $Y_1$ and $Y_2$.
Here is a simulation of 100,000 $(Y_1, Y_2)$-pairs from R statistical software. The $X_i$ are iid $Exp(rate=1),$ $Y_1 \sim Gamma(shape=2, rate=1)$ and $Y_2 \sim Unif(0, 1).$ Also, $Y_1$ and $Y_2$ are uncorrelated. (If these distributions are not covered in your text, you can see Wikipedia articles on 'exponential distribution' and 'gamma distribution'.) x1 = rexp(10^5); x2 = rexp(10^5) y1 = x1 + x2; y2 = x1/y1 cor(y1,y2) ## 0.002440974 # consistent with 0 population correlation In the figure below, the first panel shows no pattern of association between $Y_1$ and $Y_2$. Of course, this is no formal proof of independence, but if you do the bivariate transformation to get the joint density function of $Y_1$ and $Y_2,$ you should be able to see that it factors into the PDFs of $Y_1$ and $Y_2$. These PDFs are plotted along with the histograms of the simulated distributions of $Y_1$ and $Y_2$ in the second and third panels, respectively.
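For the formal argument the answer alludes to, here is a sketch of the bivariate transformation. With $x_1 = y_1y_2$ and $x_2 = y_1(1-y_2)$, the Jacobian is
$$\left|\det\begin{pmatrix}\partial x_1/\partial y_1 & \partial x_1/\partial y_2\\ \partial x_2/\partial y_1 & \partial x_2/\partial y_2\end{pmatrix}\right| = \left|\det\begin{pmatrix}y_2 & y_1\\ 1-y_2 & -y_1\end{pmatrix}\right| = y_1,$$
so for $y_1>0$ and $0<y_2<1$,
$$f_{Y_1,Y_2}(y_1,y_2)=e^{-x_1}e^{-x_2}\cdot y_1 = \big(y_1e^{-y_1}\big)\cdot 1,$$
which factors as a $Gamma(shape=2, rate=1)$ density in $y_1$ times a $Unif(0,1)$ density in $y_2$; hence $Y_1$ and $Y_2$ are independent.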
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Show that two sets are disjoint I'm trying to solve this problem about two different sets. I have to show that they are disjoint. I understand that they need to be disjoint, because they are two different real ranks, but I dont know how to prove it. Thank you so much for the help. Any hint is welcome! Show that if two differentintegers $m\neq n$ the following sets are disjoint: $(n-1,n]=\{x\in \Re : n-1<x\leq n \}$ and $(m-1,m]=\{x\in \Re : m-1<x\leq m \}$
Without loss of generality we can assume $n>m$ (the case $n<m$ follows by symmetry). Then observe that $m-1<m\leq n-1<n$. We proceed using proof by contradiction. Suppose that there is some $x\in(m-1,m]\cap(n-1,n]$. Then $m-1<x\leq m$ and $n-1<x\leq n$. We have $x>n-1\geq m$ and $x\leq m$, absurd!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1951925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prime modulo maximum Prove that the remainder of division of a positive number $n$ by a prime $p \le n$ is maximized when $p$ is the smallest prime larger than $\frac{n}{2}.$ It is easy to see that for any number of the form $\frac{n}{2}+k$ where $k \gt 0$, if $k$ is increased remainder will decrease. How to prove that for any number $p \le \frac{n}{2}$, I cannot obtain remainder more than what can be obtained from $\frac{n}{2}+k$ (first prime number large than half of n).
This does not hold true. Consider $n=14$: the smallest prime larger than $\frac{14}{2}=7$ is $11$, but the maximum remainder is attained for the prime $5 \lt 7$: $$14 \bmod 5 = 4 \;\;\gt\;\; 14 \bmod 11 = 3$$ [ EDIT ]    The following sketches the proof of the related question asked in a comment below. The remainder of the division of a positive number $n$ by a positive $p \le n$ (not necessarily a prime) is maximized when $p = \lfloor \frac{n}{2}\rfloor + 1$. First, let $p = \lfloor \frac{n}{2}\rfloor + k$ where $k \ge 1$. Then $2 p \gt n$ so the quotient of the division must be $1$, and the remainder will be $n - p = n - \lfloor \frac{n}{2}\rfloor - k$. This is obviously maximized when $k = 1$ in which case it is $r_{max} = n - \lfloor \frac{n}{2}\rfloor - 1$. Now, take $p \le \lfloor \frac{n}{2}\rfloor$. Then the remainder will be by definition a number $r \lt p \le \lfloor \frac{n}{2}\rfloor$. Since $r$ is an integer, it follows that $r \le \lfloor \frac{n}{2}\rfloor - 1 \le n - \lfloor \frac{n}{2}\rfloor - 1 = r_{max}$. The equality could only be attained when $2 \lfloor \frac{n}{2}\rfloor = n$ but in that case $n$ would need to be even, $p = \lfloor \frac{n}{2} \rfloor$ would be a divisor of $n$ and the remainder would be $0$. This proves the strict inequality $r \lt r_{max}$.
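Counterexamples like this can be found by brute force; a small sketch, assuming SymPy for the prime range:

    from sympy import primerange

    def max_prime_remainder(n):
        """(p, n % p) over primes p <= n, maximizing the remainder."""
        return max(((p, n % p) for p in primerange(2, n + 1)), key=lambda t: t[1])

    print(max_prime_remainder(14))   # (5, 4): beats 14 mod 11 = 3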
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
finding whether this sequence converges Let $$a_n=\left(1+\frac{1}{n}\right)^n$$ Does this converge? When $n$ goes to $\infty$ we end up with $1$ to the power of $\infty$. What does this mean for us? Is it $1$ or undefined? I remember from my calculus course that this is not defined, so do we take it as divergent? But why would it diverge? $1$ to the infinity is $1$.
We first prove that the sequence $a_n=\left(1+\frac1n\right)^n$ converges. To do so, we will show that it is monotonically increasing and bounded above. The Monotone Convergence Theorem guarantees that such a sequence converges. SHOWING THAT $a_n$ MONOTONICALLY INCREASES To see that $a_n$ is monotonically increasing, we analyze the ratio $\frac{a_{n+1}}{a_n}$. Proceeding, we find $$\begin{align} \frac{a_{n+1}}{a_n}&=\frac{\left(1+\frac1{n+1}\right)^{n+1}}{\left(1+\frac1n\right)^n}\\\\ &=\left(1+\frac1n\right)\left(\frac{n(n+2)}{(n+1)^2}\right)^{n+1}\\\\ &=\left(1+\frac1n\right)\left(1-\frac{1}{(n+1)^2}\right)^{n+1} \tag 1\\\\ &\ge \left(1+\frac1n\right)\left(1-\frac{1}{(n+1)}\right) \tag2\\\\ &=1 \end{align}$$ where we applied Bernoulli's Inequality in going from $(1)$ to $(2)$. Therefore, $a_n$ is monotonically increasing. SHOWING THAT $a_n$ IS BOUNDED ABOVE Using the Binomial Theorem, we have $$\begin{align} a_n&=\left(1+\frac1n\right)^n\\\\ &=1+1+\frac{1}{2!}\frac{n(n-1)}{n^2}+\frac{1}{3!}\frac{n(n-1)(n-2)}{n^3}+\cdots +\frac{1}{n!} \tag 3\\\\ &\le 1+1+\frac{1}{2!}+\frac{1}{3!}+\cdots +\frac1{n!}\tag 4\\\\ &\le 1+1+\frac12+\frac14+\cdots +\frac1{2^{n-1}}\tag 5\\\\ &\le 1+\sum_{k=0}^n \frac{1}{2^k}\tag 6\\\\ &=1+\frac{1-(1/2)^{n+1}}{1-1/2}\tag 7\\\\ &\le 3 \end{align}$$ In going from $(3)$ to $(4)$, we observed that $\frac{n(n-1)(n-2)\cdots (n-k)}{n^{k+1}}\le 1$ for all $k\ge1$. In going from $(4)$ to $(5)$, we noted that $k!\ge 2^{k-1}$ for $k\ge 1$. In going from $(5)$ to $(6)$, we simply wrote the sum using summation notation (adding one extra positive term). In going from $(6)$ to $(7)$, we summed a Geometric Progression. Therefore, $a_n$ is bounded above (by $3$). Since $a_n$ is monotonically increasing and bounded above, the monotone convergence theorem guarantees that $a_n$ is a convergent sequence. NOTE: The limit of the sequence $a_n$ is one of the Representations of Euler's number $e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why, if one NP-complete problem has a polynomial-time solution, does every NP problem have one? Each NP problem is different; if one NP-complete problem (even the hardest one) could be solved in polynomial time, I can see that the related NP problems that reduce to it could also be solved in polynomial time. But why all of them? Does that mean every other NP problem can be reduced to that one? Some examples would be very helpful.
NP completeness means exactly that "all other NP problems could be reduced [in polynomial time] to the one", so yes, if a single NP-complete problem has a polynomial-time solution, then all NP problems do. See the formal definition. Note that it is not obvious that NP-complete problems exist in the first place! E.g. maybe for every NP problem A, I can find an NP-problem B which is "polynomially harder" than A in the sense that there is no polytime-reduction from B to A. It turns out this isn't the case, but this takes proof. Some examples of NP-complete problems include:

* Determining whether a propositional formula is satisfiable.
* Determining whether a graph has a Hamiltonian path.
* Determining whether a graph can be $k$-colored.

And there are many others; see this list.
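To make "polynomial-time reduction" concrete, here is a hypothetical Python sketch (the function name and encoding are mine, not from any library) of one classical direction: reducing $k$-coloring to satisfiability.

    from itertools import combinations

    def coloring_to_sat(n_vertices, edges, k):
        """CNF clauses (lists of nonzero ints, negative = negated) that are
        satisfiable iff the graph is k-colorable. Variable v*k + c + 1 means
        'vertex v has color c'."""
        var = lambda v, c: v * k + c + 1
        clauses = [[var(v, c) for c in range(k)] for v in range(n_vertices)]
        for v in range(n_vertices):                        # at most one color each
            clauses += [[-var(v, c1), -var(v, c2)]
                        for c1, c2 in combinations(range(k), 2)]
        for u, v in edges:                                 # neighbors differ
            clauses += [[-var(u, c), -var(v, c)] for c in range(k)]
        return clauses

    # the clause count grows polynomially in the size of the input graph
    print(len(coloring_to_sat(3, [(0, 1), (1, 2), (0, 2)], 3)))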
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Show that $\| f(x) - f(y) \| \leq 2 \| x - y \| $ Let $V$ be a vector space with a given norm $\| \cdot \|$. Define a function $f:V \to V$ in the following way: $$ f(x) = \begin{cases} x, & \|x\| \leq 1 \\ x/ \|x\|, & \|x\| > 1 \end{cases} $$ Prove that $\| f(x) - f(y) \| \leq 2 \|x-y\|$ for all $x,y \in V$. I tried to consider all $3$ cases, namely $\|x\|,\|y\| \lessgtr1$ and $\|x\| > 1, \|y\| \leq 1$, but it did not help much.
We will consider two cases. First case: WLOG, assume $\| x\| \geq 1$ and $ \|y \| \geq 1$, then we see that \begin{align} \| f(x) -f(y)\| \leq&\ \Bigg\| \frac{x}{\|x\|}-\frac{y}{\|y\|}\Bigg\| \leq \frac{1}{\|x\| \|y\|}\Big\| x\|y\|-y\|x\|\Big\|\\ \leq&\ \frac{1}{\|y\|} \Big\| x\|y\|-y\|y\|+y\|y\|-y\|x\|\Big\|\\ \leq&\ \|x-y\|+ \Big|\|x\|-\|y\| \Big|\\ \leq&\ 2\|x-y\|. \end{align} Second case: WLOG, assume $\|x\| \leq 1$ and $\|y\| >1$, then we see that \begin{align} \|f(x)-f(y)\| \leq&\ \Bigg\| x-\frac{y}{\|y\|}\Bigg\| \leq \frac{1}{\|y\|}\Big\|x\|y\|-y\Big\|\\ \leq&\ \frac{1}{\|y\|} \Big\| x\|y\| -y\|y\|+y\|y\|-y\Big\|\\ \leq&\ \|x-y\|+\Bigg\|y-\frac{y}{\| y\|} \Bigg\|\\ \leq&\ \|x-y\|+\|y-x \| =2\|x-y\| \end{align} where the last inequality comes from the fact that $y/\|y\|$ on the unit ball is distance minimizing from the point $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $O(\log(\log(n)))$ upper-bounded by $\Theta(\log(n))$? This question has more to do with math than computer science, but there is a computer science component to it. If I have an algorithm with a time complexity function $$O(\log(\log(n)))$$ why would its upper bound be $$\Theta (\log(n))$$? I do not see how the rules of logarithms apply here; what steps were used to arrive at this answer?
Suppose that $f$ is $O(\log(\log n))$, and $g$ is $\Theta(\log n)$. Then there are positive constants $c_0$ and $c_1$ and an $m\in\Bbb Z^+$ such that $$f(n)\le c_0\log(\log n)$$ whenever $n\ge m$ and $$g(n)\ge c_1\log n$$ whenever $n\ge m$. Moreover, $$\lim_{n\to\infty}\frac{\log(\log n)}{\log n}=0\;,$$ so we may further assume that $m$ is large enough so that $$\frac{\log(\log n)}{\log n}\le\frac{c_1}{c_0}$$ whenever $n\ge m$. But then for $n\ge m$ we have $$f(n)\le c_0\log(\log n)\le c_1\log n\le g(n)\;,$$ i.e., $g$ is eventually an upper bound for $f$.
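Numerically, the ratio $\log(\log n)/\log n$ used in the proof indeed goes to $0$ (plain Python):

    import math

    for n in (10, 10**3, 10**6, 10**12):
        print(n, math.log(math.log(n)) / math.log(n))   # ratio tends to 0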
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to formally prove that we cannot find a polynomial in $\textbf Z[x]$ with degree $2$ with such a root? I am trying to find the kernel of the map from $\textbf Z[x]$ to $\textbf C$. The map is evaluating at $\sqrt 2 + \sqrt 3$. A solution says that we cannot find polynomials of degree $2$ or $3$ that has such a root. So it skips the procedure of trying degree $2$ and $3$. And the final solution is the ideal in $\textbf Z[x]$ generated by $$x^4 − 10x^2 + 1 = (x − \sqrt 2 − \sqrt 3)(x − \sqrt 2 + \sqrt 3)(x + \sqrt 2 − \sqrt 3)(x + \sqrt 2 + \sqrt 3)$$ A solution says there are $\sqrt 6$'s in $(\sqrt 2 + \sqrt 3)^2$ and $(\sqrt 2 + \sqrt 3)^3$. So we cannot find polynomials with root $\sqrt 2 + \sqrt 3$ of degree $2$ or $3$. And that is what I am confused about: How does it imply the fact? Thanks in advance!
Let's work, for the moment, in $\mathbb{Q}[x]$. The number $b=\sqrt{2}+\sqrt{3}$ is certainly a root of $f(x)=x^4-10x^2+1$, because $(b-\sqrt{2})^2=3$, so $b^2-1=2b\sqrt{2}$ and, squaring again, $b^4-2b^2+1=8b^2$. Therefore there is a monic polynomial $p(x)$ of minimal degree (with coefficients in $\mathbb{Q}$) which $b$ is a root of; in particular, $\deg p(x)\le 4$. If we do Euclidean division, we get $$ f(x)=p(x)q(x)+r(x) $$ with $r(x)=0$ or $\deg r(x)<\deg p(x)$. Evaluating at $b$ gives $$ f(b)=p(b)q(b)+r(b) $$ and, since $f(b)=p(b)=0$, we obtain $r(b)=0$. By minimality of $\deg p(x)$, we conclude that $r=0$. In particular, $p(x)$ is a factor of $f(x)$ with rational coefficients. If we prove that $f(x)$ is irreducible in $\mathbb{Q}[x]$, we have that $f(x)=p(x)$ and, in particular, that no nonzero polynomial in $\mathbb{Q}[x]$ having $b$ as root has degree less than $4$. If $f(x)$ is reducible, it can either be split into a product of two polynomials of degree $2$ or into a product of a degree $1$ polynomial and one of degree $3$. The second possibility is dismissed, because the degree $1$ factor would provide a rational root of $f(x)$, which has none (the only rational roots can be $1$ and $-1$). Let's try the other one: $f(x)=(x^2+Ax+B)(x^2+Cx+D)$ (it is not restrictive to assume the factors to be monic). This gives $$ \begin{cases} A+C=0\\ B+D+AC=-10\\ AD+BC=0\\ BD=1 \end{cases} $$ Hence $C=-A$ and, substituting in the third equation, $A(D-B)=0$. The case $A=0$ leads to $B+D=-10$ and $BD=1$, which has no rational solutions. With $B=D$ we obtain $B^2=1$ and $2B-A^2=-10$. If $B=1$, we get $A^2=12$; with $B=-1$ we get $A^2=8$. Neither case leads to rational solutions. Since $f(x)$ is irreducible in $\mathbb{Q}[x]$ and is monic, it is also irreducible in $\mathbb{Z}[x]$. Now, suppose $g(x)\in\mathbb{Z}[x]$ belongs to the kernel of the homomorphism, meaning $g(b)=0$. We can still do Euclidean division by $f(x)$, because it is monic. Therefore $$ g(x)=f(x)q(x)+r(x) $$ with $r(x)=0$ or $\deg r(x)<\deg f(x)$. Evaluating at $b$ gives $r(b)=0$, so, as before, $r(x)=0$. Hence $g(x)$ belongs to the ideal generated by $f(x)$. The converse inclusion is obvious.
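If a CAS is at hand, the conclusion is easy to cross-check (a sketch assuming SymPy is installed; not part of the proof):

```python
from sympy import sqrt, minimal_polynomial, Symbol, expand

x = Symbol('x')
b = sqrt(2) + sqrt(3)
p = minimal_polynomial(b, x)
print(p)                      # x**4 - 10*x**2 + 1, as derived above
print(expand(p.subs(x, b)))   # 0, confirming that b is a root
```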
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Uniqueness of unitarily upper triangular matrix Suppose $A$ is a matrix in $\mathbb{C}^{n\times n}$ with $n$ distinct eigenvalues $\lambda_1,\dots,\lambda_n$. Then by Schur's theorem, for any fixed order of $\lambda_1,\dots,\lambda_n$, we know there exists a unitary matrix $U$ s.t. $U^*AU$ is an upper triangular matrix with $\lambda_1,\dots,\lambda_n$ in the required order on the diagonal. The question is: is $U$ unique? If not, what freedom do we have to choose $U$? I know how to handle the case where $A$ is unitarily diagonalizable (rather than merely unitarily triangularizable): then $U^*AU=\,\text{diag}(\lambda_i)\iff AU=U\,\text{diag}(\lambda_i)=[\lambda_1U_1,\dots,\lambda_nU_n]$. Then the $i$th column of $U$ must be an eigenvector of $\lambda_i$ with $|U_i|=1$. Therefore we can choose $U$ up to multiplying by a diagonal matrix whose diagonal entries have norm $1$. But this method does not seem to fit the upper triangular case.
Key fact: if $BT=TB$ and $T$ is upper triangular with distinct diagonal entries, then $B$ is upper triangular (proof below). Now, if $$UTU^*=VTV^*,$$ then $V^*UT=TV^*U$. So $V^*U$ is an upper triangular unitary. As such, it is diagonal. Thus, $V$ is of the form $UD$ with $D$ diagonal and $|D_{kk}|=1$. In other words, the situation you observed for diagonal $A$ still occurs in general (it is essential that the diagonal entries are distinct). Proof that $TB=BT$ implies $B$ upper triangular. Consider the $n,1$ entry: $$ (TB)_{n1}=\sum_kT_{nk}B_{k1}=T_{nn}B_{n1}, $$ while $$(BT)_{n1}=\sum_kB_{nk}T_{k1}=B_{n1}T_{11}.$$ As $T_{11}\ne T_{nn}$, we deduce that $B_{n1}=0$. Now consider the $n,2$ entry: $$ (TB)_{n2}=\sum_kT_{nk}B_{k2}=T_{nn}B_{n2}, $$ while $$ (BT)_{n2}=\sum_kB_{nk}T_{k2}=B_{n1}T_{12}+B_{n2}T_{22}=B_{n2}T_{22}. $$ As $T_{22}\ne T_{nn}$, we get that $B_{n2}=0$. Continuing inductively, after showing that $B_{n1},\ldots,B_{nr}=0$, we have $$ (TB)_{n,r+1}=\sum_kT_{nk}B_{k,r+1}=T_{nn}B_{n,r+1}, $$ while $$ (BT)_{n,r+1}=\sum_kB_{nk}T_{k,r+1}=\sum_{k=1}^{r+1}B_{nk}T_{k,r+1}=B_{n,r+1}T_{r+1,r+1}. $$ As $T_{r+1,r+1}\ne T_{nn}$, we get that $B_{n,r+1}=0$. Now start doing the same with the $n-1,1$ entry, then $n-1,2$, etc.
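A small numeric illustration of this freedom (a sketch assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))      # generically has distinct eigenvalues
T, U = schur(A, output='complex')    # U unitary with U* A U = T upper triangular
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))  # diagonal with |D_kk| = 1
V = U @ D                            # another admissible unitary
print(np.allclose(V.conj().T @ A @ V, D.conj().T @ T @ D))  # True
print(np.allclose(np.tril(V.conj().T @ A @ V, -1), 0))      # True: still upper triangular
```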
{ "language": "en", "url": "https://math.stackexchange.com/questions/1952939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
show that $X/(X+Y) $ has a cauchy distribution if $X $ and $X+Y $ are standard normal A random variable $X$ has a cauchy distribution with parameters $a$ and $b$ is the density of $X$ is $f(x\mid a,b)=\dfrac{1}{\pi b}\dfrac{1}{1+(\frac{x-a}{b})^2}$ where $-\infty <x< \infty $, $-\infty <a <\infty$, $b>0$ Suppose $X$ and $Y$ are independent standard normal random variables then show that $X/(X+Y)$ has a cauchy distribution. Since $X$ and $Y$ are independent standard normal random variables then the joint pdf for $(X,Y)$ is $$f_{X,Y}(x,y)=\frac{1}{2\pi}e^{-x^2/2}e^{-y^2/2}$$ I used the Jacobian method to try and find $f_{U,V}$ so I let $U=X+Y$ and $V=X/(X+Y)$ so $x=uv$ and $y=u-uv$ and then $J=u$. So $f_{U,V}(u,v)=\dfrac{u}{2\pi}e^{-(uv)^2/2}e^{-(u-uv)^2/2}=\dfrac{u}{2\pi}e^{-(uv)^2/2}e^{-(u^2-2u^2v + u^2v^2)/2}=\dfrac{u}{2\pi}e^{-(u^2v^2)/2}e^{(-u^2/2)+(u^2v) - (u^2v^2/2)}= \dfrac{u}{2\pi}e^{-(u^2v^2)-(u^2/2)+(u^2v)} = \dfrac{u}{2\pi}e^{-u^2(v^2-v+(1/2))} $ Therefore $$f_{U,V}(u,v)= \dfrac{u}{2\pi}e^{-u^2(v^2-v+(1/2))} ; -\infty<u,v<\infty$$ I now want to find the $V$ marginal density and that should have a cauchy distribution. $f_V(v)=2\int_0^{\infty} \dfrac{u}{2\pi}e^{-u^2(v^2-v+(1/2))}du=\int_0^{\infty} \dfrac{u}{\pi}e^{-u^2(v^2-v+(1/2))}du$. If I let $s=u^2$ then $\dfrac{ds}{2u}=du$ and then integral then becomes $ \int_0^{\infty} \dfrac{1}{2\pi}e^{-s(v^2-v+(1/2))}ds = \dfrac{1}{2\pi}\bigg( \dfrac{1}{v^2-v+(1/2)} \bigg)$. I think I might have made a mistake somewhere because I cant see how I can rearrange this to show that $f_V$ is a cauchy distribution.
You had almost reached the end. It suffices to transform: $$\left(\dfrac{1}{2\pi}\right)\dfrac{1}{v^2-v+\frac{1}{2}}=\left(\dfrac{1}{2\pi}\right)\dfrac{1}{(v-\frac{1}{2})^2+(\frac{1}{2})^2}=\left(\dfrac{1}{2\pi}\right)\dfrac{4}{1+\left(\frac{v-\frac{1}{2}}{\frac{1}{2}}\right)^2}$$ giving $$\left(\dfrac{1}{\pi \frac{1}{2}}\right)\dfrac{1}{1+\left(\frac{v-\frac{1}{2}}{\frac{1}{2}}\right)^2}.$$ Thus $a=b=\frac{1}{2}.$
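A Monte Carlo sanity check of $a=b=\frac12$ (a sketch assuming NumPy and SciPy): the quartiles of a Cauchy$(a,b)$ law are $a\pm b$, so they should come out near $0$ and $1$.

```python
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
y = rng.standard_normal(10**6)
v = x / (x + y)
print(np.quantile(v, [0.25, 0.5, 0.75]))                  # ~[0.0, 0.5, 1.0]
print(cauchy(loc=0.5, scale=0.5).ppf([0.25, 0.5, 0.75]))  # [0.0, 0.5, 1.0]
```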
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Parametric to non parametric conversion of a line in 3d I can't for the life of me figure out how to convert this parametric equation to a non parametric equation for a line in 3D. Our lecture notes didn't cover it and I feel like it should be simple but whenever I try to figure it out I end up with nothing. Can someone try to explain how this can be done? The question is: Convert the parametric equation: (x, y, z) = (2, −1, −2) + t(3, 1, 2) to a nonparametric equation. I know that you're probably meant to come up with the equations for x, y and z: x= 2 + 3t y = -1 + t z = -2 + 2t but I'm not sure what to do with these Thanks in advance for any help.
A non-parametric representation of a line in 3D won't be a single equation but rather a system of (two) equations in the Cartesian coordinates $x$, $y$ and $z$. As the name suggests, you should try to eliminate the parameter by first solving for $t$: $$\left\{ \begin{array}{rcl} x &=& 2 + 3t \\ y &=& -1 + t \\ z &=& -2 + 2t \end{array} \right. \; (t \in \mathbb{R}) \quad\Rightarrow\quad \left\{ \begin{array}{rcl} t &=& \frac{x-2}{3} \\ t &=& y+1 \\ t &=& \frac{z+2}{2} \end{array} \right. \; (t \in \mathbb{R})$$ and now equating these three expressions for $t$. In general (if $v_i \ne 0$ for $i=1,2,3$), you'd have: $$\left\{ \begin{array}{rcl} x &=& a_1 + v_1t \\ y &=& a_2 + v_2t \\ z &=& a_3 + v_3t \end{array} \right. \; (t \in \mathbb{R}) \quad\Rightarrow\quad \left\{ \begin{array}{rcl} t &=& \frac{x-a_1}{v_1} \\ t &=& \frac{y-a_2}{v_2} \\ t &=& \frac{z-a_3}{v_3} \end{array} \right. \; (t \in \mathbb{R})$$ This leads to the following standard form: $$\color{blue}{\frac{x-a_1}{v_1} = \frac{y-a_2}{v_2} = \frac{z-a_3}{v_3}}$$ Note: this will not always be possible (one or more of the $v_i$'s might be $0$), but then you can still obtain a system of two equations by elimination of the parameter through substitution.
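Concretely, for the line in the question, equating the three expressions gives the non-parametric (symmetric) form $$\frac{x-2}{3} = \frac{y+1}{1} = \frac{z+2}{2}.$$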
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Introducing probability vectors When I studied probability I did it in a classical manner. My course used Loève's book on probability theory and I used Allan Gut's book for a modern version of it. Now I'm following a course in information theory and the notation, terms and concepts change so that it is difficult for me to connect these concepts with what I had. I was given the following problem: What is the minimum value of $H(p)$ as $p$ ranges over the set of n-dimensional probability vector. I looked on Wikipedia and there it says that A probability vector or stochastic vector is a vector with non-negative entries that add up to one. My question is how does this concept relates to probability theory. What is a probability vector used for? Is it an extended manner of denoting the probability mass function of a random variable which takes a finite number of values?
Let $X$ be a random variable with finite support, i.e. the possible values of $X$ are $x_1,\dots,x_n$. Let $p_i=P(X=x_i)$. Strictly speaking, the distribution of $X$ is the measure $\sum_{i=1}^np_i \delta_{x_i}$. Once you know the support of $X$, in order to describe its distribution, you can just give the vector $(p_1,\dots,p_n)$, which has non-negative entries and adds up to $1$. On the other hand, if you choose such a vector, it determines a probability distribution on $\{x_1,\dots,x_n\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lusin Theorem Conclusion (Confusion on different versions) According to Royden, the conclusion of Lusin Theorem is that: (1) $f=g$ on $F$, where $g$ is a continuous function on $\mathbb{R}$. However, according to Wikipedia, the conclusion of Lusin Theorem is: (2) $f$ restricted to $F$ is continuous. I have seen somewhere that (1) and (2) are different, e.g. $\chi_\mathbb{Q}$ restricted to $\mathbb{Q}$ is continuous, but $\chi_\mathbb{Q}$ is not equal to a continuous function $g$ on $\mathbb{R}$. How do we resolve this paradox? Thanks.
Lusin's theorem contains the condition that $F$ be a closed set. That makes a difference. In sufficiently nice spaces $X$ ($T_4$ or normal spaces, whichever is the weaker in the nomenclature in use), every continuous function $f \colon F \to \mathbb{R}$, where $F\subset X$ is closed, has a continuous extension $F \colon X \to \mathbb{R}$. This is Tietze's extension theorem. All metric spaces are normal, so for a function $f \colon X \to \mathbb{R}$, where $X$ is a metric space, and a closed subset $F \subset X$, the two assertions * *$f\lvert_F$ is continuous, and *there is a continuous $g \colon X \to \mathbb{R}$ with $g\lvert_F = f\lvert_F$ are equivalent. Royden, it seems, considers only functions defined on (subsets of) $\mathbb{R}$ at that point. The wikipedia formulation that the restriction of $f$ to $E$ is continuous holds for Radon measures on arbitrary Hausdorff spaces. Since arbitrary Hausdorff spaces can be quite complicated, it is not surprising that only a weaker assertion can be made in the most general setting. The wikipedia article mentions the - at least formally - stronger conclusion in case of local compactness, so the author is aware of its availability in favourable circumstances.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unit square inside triangle. Some time ago I saw this beautiful problem and think it is worth posting: Let $S$ be the area of a triangle that covers a square of side $1$. Prove that $S \ge 2$.
Intuitively, the area of a triangle $T=\Delta\ ABC$ covering the square $S:=v_1v_2v_3v_4$ of side length $1$ is smallest when the vertices of the square lie on the sides of the triangle. Notation: the edge through $x$ and $y$ is denoted $[xy]$, and its length is $|xy|$. There are two cases to consider: (1) $ v_1\in [AB],\ v_2\in [AC]$, and the edge $[v_3v_4]$ lies in the edge $[BC]$; (2) $v_1\in [AB],\ v_2\in [AC],\ v_3\in [BC]$, that is, $v_4$ lies in the interior of $T$. Consider case (1). Let $A'$ be the reflection of $A$ with respect to the edge $[v_1v_2]$. (1.1) Assume that $A'$ has distance larger than $1$ from $[v_1v_2]$. Define $$ [A'v_1]\cap [BC]:=\{ v_4'\},\ [A'v_2]\cap [BC]:=\{ v_3'\} $$ so that $$ 2\,{\rm area}\ S +2\, {\rm area}\ \Delta\ A'v_3'v_4' ={\rm area}\ T, $$ hence ${\rm area}\ T\ge 2$. (1.2) Assume that $A'$ has distance smaller than $1$ from $[v_1v_2]$. If the line $v_1 + t(A'-v_1)$ meets $[BC]$ at $v_4'$, then note that $$ {\rm area}\ \Delta A'v_1v_2={\rm area}\ \Delta Av_1v_2 $$ $$ {\rm area}\ \Delta v_1Bv_4 ={\rm area}\ \Delta v_1v_4 v_4' $$ $$ {\rm area}\ (S- \Delta A'v_1v_2-\Delta v_1v_4v_4' )< {\rm area}\ \Delta v_2Cv_3 $$ Consider case (2). WLOG assume that $|Av_2|<|v_2C|$. (2.1) Assume that $A'$ has distance larger than $1$ from $[v_1v_2]$. Note that $\Delta v_2 A' v_2'$ contains $v_3$, where $$ v_2'\in [v_2C],\ |v_2v_2'|=|Av_2|. $$ Here $[v_2v_3]$ is the angle bisector of $\angle A'v_2v_2'$, so that $\overrightarrow{v_2'v_3}$ enters the square, and hence $Cv_3$ enters the square. This contradicts the definition of $v_3$. (2.2) Assume that $A'$ lies in the interior of the square $S$. Define $$ v_1'\in [v_1B],\ |v_1v_1'|=|Av_1| $$ $$ v_2'\in [v_2C],\ |v_2v_2'|=|Av_2| $$ $$v_3'\in [Bv_3],\ |v_3'v_3 |=|v_3v_2'| $$ Note that the hexagon $Av_1'v_4v_3'v_3v_2'$ has area $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 5 }
How many ways are there to arrange $8$ pennies and $5$ nickels in a line? How can you arrange a line of $8$ pennies and $5$ nickels, so that no $2$ nickels are next to each other (pennies are indistinguishable and nickels are too)? My attempt: treating the coins as distinguishable objects placed next to each other, I get ${9 \choose 5}\cdot 8!\cdot 5!=609638400$. Is this correct?
If you first lay out the $8$ pennies, there are $9$ spaces where the nickels can go ($7$ spaces between adjacent pennies and $2$ spaces at the two ends). So the answer is simply $9\choose5$. You would multiply this by $8!5!$ only if the pennies and nickels were all distinguishable.
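As a quick brute-force cross-check of the count (a sketch in plain Python, enumerating nickel positions directly):

```python
from itertools import combinations

count = sum(
    1
    for nickels in combinations(range(13), 5)                # positions of the 5 nickels
    if all(q - p > 1 for p, q in zip(nickels, nickels[1:]))  # no two nickels adjacent
)
print(count)  # 126 = C(9, 5)
```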
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
determinant of a $2\times2$ matrix, sufficiency of inverse What is the simplest example of a nontrivial ring in which these two conditions are not equivalent for any $2\times 2$ matrix $A$: (1) there is a $2\times2$ matrix $B$ with $A\cdot B=1$ (2) $a_{11}a_{22}\neq a_{12}a_{21}$ To make the question more interesting: can the structures such that (1) is (not) equivalent to (2) be characterized?
A matrix $A\in M_2(R)$, where $R$ is a ring, is invertible iff $\det A\in U(R)$, where $U(R)$ is the set of the units of the ring $R$. So a simple example is given by the ring $\Bbb Z$: any matrix whose determinant is not $\pm 1$ is not invertible. Example $$ A= \begin{pmatrix} 2 & 1\\ 3 & 4 \end{pmatrix} $$ $\det A=5$, so $A$ is not invertible in $M_2(\Bbb Z)$, even though it is invertible in $M_2(\Bbb R)$.
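A quick CAS check of the example (a sketch assuming SymPy): the inverse exists over $\Bbb Q$ but has non-integer entries, so it does not exist in $M_2(\Bbb Z)$.

```python
from sympy import Matrix

A = Matrix([[2, 1], [3, 4]])
print(A.det())  # 5, not a unit in Z
print(A.inv())  # Matrix([[4/5, -1/5], [-3/5, 2/5]]): non-integer entries
```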
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Solving a system of equations in three variables Please help me understand how the following system of equations in three variables, with terms of degree $2$, is solved, and which solution approach is the quickest. $$3y^2 - 3 = 0$$ $$4x - 3z^2 = 0$$ $$-6xz+ 6z = 0 $$
A quick solution: By simplification, rewrite $$\begin{cases}y^2=1\\(1-x)z=0\\3z^2=4x.\end{cases}$$ Then mentally, $$y=\pm1,\\x=z=0\lor x=1,z=\pm\frac2{\sqrt3}.$$
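A CAS cross-check of these solutions (a sketch assuming SymPy; note that $2/\sqrt3 = 2\sqrt3/3$):

```python
from sympy import symbols, solve

x, y, z = symbols('x y z')
print(solve([3*y**2 - 3, 4*x - 3*z**2, -6*x*z + 6*z], [x, y, z]))
# Six solutions (order may vary): (0, ±1, 0) and (1, ±1, ±2*sqrt(3)/3)
```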
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that the following sum of Legendre symbols is -1. Let $p$ be an odd prime. Consider the following sum of Legendre symbols: $(\frac{1}{p})(\frac{2}{p}) + (\frac{2}{p})(\frac{3}{p}) + \cdots + (\frac{p-2}{p})(\frac{p-1}{p})$. Show that this sum is equal to $-1$. Using the algebra of the Legendre symbol I can show that this sum is the same as $\sum_{i = 2}^{\frac{p-1}{2}}(\frac{i-1}{p})(\frac{i}{p}) + (\frac{\frac{p-1}{2}}{p})(\frac{\frac{p+1}{2}}{p})$. I can also show that this last term is equal to 1 if $p \equiv 1 \mod 4$ and $-1$ if $p \equiv 3 \mod 4$ via Gauss's Lemma. I'm really more interested in a hint than a full solution but any help would be greatly appreciated.
With the assumptions $\left(\frac{0}{p}\right)=0$, $p\equiv 1\pmod{2}$, by exploiting the multiplicativity of the Legendre symbol we have $$ \sum_{k=1}^{p-2}\left(\frac{k}{p}\right)\left(\frac{k+1}{p}\right)=\sum_{k=1}^{p-2}\left(\frac{k^2+k}{p}\right)=\sum_{k=1}^{p-2}\left(\frac{1+k^{-1}}{p}\right)$$ where $k^{-1}$ stands for the inverse of $k$ in $\mathbb{F}_p^*$. Now it is enough to consider how that map $k\mapsto 1+k^{-1}$ acts on $\{1,2,\ldots,p-1\}$ and recall that $$ \sum_{k=1}^{p-1}\left(\frac{k}{p}\right)=0, $$ i.e. that in $\mathbb{F}_p^*$ there are as many quadratic residues as quadratic non-residues.
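A numeric check of the claimed value, using Euler's criterion to evaluate the Legendre symbol (a sketch in plain Python):

```python
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    print(p, sum(legendre(k, p) * legendre(k + 1, p) for k in range(1, p - 1)))
# every sum equals -1
```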
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of a squared dot product I have a constant $d \times n$ matrix $\textbf{A}$, and a variable $d \times 1$ vector $\textbf{v}$ $$\sum_{i=0}^n (\mathbf{A}_i^\top \mathbf{v})^2$$ Is there a way to simplify this? Can I pull out any A's? I know that I could use this property: $$\sum_{i=0}^n\left( \mathbf{A}_i^\top\mathbf{v} \right) = \left( \sum_{i=0}^n\mathbf{A}_i^\top \right) \mathbf{v}$$ But I don't think that works for squares of dot products
It depends upon what you mean by the 'square'. If it is scalar product then it equals: $$ \sum_i v^T A_i A_i^T v = v^T \left( \sum_i A_i A_i^T\right) v = v^T B v $$ where $B$ is a semi-positive definite matrix. It will be definite positive under reasonable conditions on the $A_i$'s.
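A quick numeric check of the identity (a sketch assuming NumPy, with $A_i$ taken to be the columns of $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 7
A = rng.standard_normal((d, n))
v = rng.standard_normal(d)
lhs = sum((A[:, i] @ v) ** 2 for i in range(n))
B = A @ A.T                        # = sum_i A_i A_i^T, positive semi-definite
print(np.isclose(lhs, v @ B @ v))  # True
```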
{ "language": "en", "url": "https://math.stackexchange.com/questions/1953969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pascal's Triangle and Binary Representations In the article that I am currently reading, it is stated as a well-known fact that positions $2^i$ or equivalently $(n-2^i)$ in the $n^{th}$ row in Pascal's Triangle modulo $2$ spell out the binary representation of $n$: $$ \newcommand{\red}{\color{red}} \newcommand{\blue}{\color{blue}} \begin{array}{rrc} 0:&0&1\\ 1:&1&\red1\ 1\\ 2:&10&\blue1\ \red0\ 1\\ 3:&11&1\ \blue1\ \red1\ 1\\ 4:&100&\red1\ 0\ \blue0\ \red0\ 1\\ 5:&101&1\ \red1\ 0\ \blue0\ \red1\ 1\\ 6:&110&1\ 0\ \red1\ 0\ \blue1\ \red0\ 1\\ 7:&111&1\ 1\ 1\ \red1\ 1\ \blue1\ \red1\ 1\\ 8:&1000&\blue1\ 0\ 0\ 0\ \red0\ 0\ \blue0\ \red0\ 1\\ 9:&1001&1\ \blue1\ 0\ 0\ 0\ \red0\ 0\ \blue0\ \red1\ 1\\ 10:&1010&1\ 0\ \blue1\ 0\ 0\ 0\ \red0\ 0\ \blue1\ \red0\ 1 \end{array} $$ To be more precise and/or technical, if $n$'s binary expansion is $b_t b_{t-1}\cdots b_1 b_0$ or equivalently $ n=\sum_{i=0}^t b_i\cdot 2^i $ then we have $$ b_i=\binom n{2^i}\pmod 2 $$ Now I was thinking about the cleanest and simplest way to prove this result. I find the self-similarity of Sierpinski's Triangle to provide a nice visual argument, yet it fails to be simple to communicate succintly in a paper, I think. My suggested proof Thus I thought it would be simpler to consider which powers of $2$ divide respectively the numerator and denominator of $$ \binom n{2^i}=\frac{\prod_{s=1}^{2^i}(n-2^i+s)}{\prod_{t=1}^{2^i} t} $$ Now note that the factors of the numerator $n-2^i+s$ cover a full set of residues modulo $2^i$. Those with non-zero remainder $n-2^i+s\equiv t$ modulo $2^i$ will have the same divisibility by $2$ as $t$ has, namely some power $2^j<2^i$ will be the maximal power of $2$ dividing both $t$ and that factor. Exactly one factor will be divisible by $2^i$, namely the single factor $n-2^i+s$ whose binary representation ends in $i$ zeros. Now if the $i^{th}$ bit is zero then this factor will at least be divisible by $2^{i+1}$ because then it ends in at least $i+1$ zeros. If on the other hand the $i^{th}$ bit is $1$, then this factor is divisible by no higher power of $2$ than $2^i$. Thus we see that if the $i^{th}$ bit is $1$ there is a 1:1 correspondance between the factors of the numerator and denominator with respect to their divisibility by $2$ thus resulting in an odd number. But if the $i^{th}$ bit is zero then the numerator has at least one more factor $2$ than the numerator - counted by multiplicity. Question: Do you have suggestions to simplify this argument or can you point me to a completely different approach making everything simpler? Perhaps there even is a clean and simple combinatorial proof?
Another approach uses generating functions (for a similar example, see the proof of Lucas's Theorem). Let $p(x) = \sum_{k=0}^n\binom{n}{k}x^k$. It is easy to check that for primes $p$ and nonnegative integers $k$, we have $(1+x)^{p^k}\equiv 1 + x^{p^k}\pmod p$. Then $$p(x) = (1+x)^n = \prod_{i=0}^t \left((1+x)^{2^i}\right)^{b_i} \equiv \prod_{i=0}^t \left(1+x^{2^i}\right)^{b_i}\pmod 2.$$ Thus $\binom{n}{2^j}$ is congruent to the coefficient of $x^{2^j}$ in $\prod_{i=0}^t \left(1+x^{2^i}\right)^{b_i}$ mod $2$. Since all the $b_i$ are 0 or 1, the coefficient of $x^{k}$ in $\prod_{i=0}^t \left(1+x^{2^i}\right)^{b_i}$ is the number of ways to write $k=2^{i_1}+2^{i_2}+\cdots+2^{i_m}$ for some $i_1<i_2<\cdots<i_m$ where $b_{i_1}=b_{i_2}=\cdots=b_{i_m} = 1$. Since binary representation is unique, all the coefficients of $\prod_{i=0}^t \left(1+x^{2^i}\right)^{b_i}$ are 0 or 1. In particular, the coefficient of $x^{2^j}$ is 1 if $b_j=1$ and 0 if $b_j=0$, so we have $b_j\equiv \binom{n}{2^j}\pmod 2$. I believe by the same argument you can show for all primes $p$, writing $n=\overline{b_tb_{t-1}\dots b_0}_p$ in base $p$, we have $$b_j\equiv\binom{n}{p^j}\pmod p. $$ EDIT: For this problem and the problem for general $p$ you can actually can just apply Lucas's Theorem directly: $$\binom{n}{p^j} \equiv \prod_{i=0}^t\binom{b_i}{[i=j]}\equiv b_j\pmod p$$ where we denote $[i=j]$ to be 1 if $i=j$ and 0 otherwise.
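A quick empirical cross-check of $b_j\equiv\binom{n}{2^j}\pmod 2$ (a sketch in plain Python):

```python
from math import comb

for n in range(1, 200):
    bits = bin(n)[2:][::-1]   # b_0, b_1, ..., b_t
    assert all(int(b) == comb(n, 2**i) % 2 for i, b in enumerate(bits))
print("verified for n = 1, ..., 199")
```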
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Two matrices with special eigenvalues We say $\lambda$ is an eigenvalue of a square matrix $A$ if $Ax = \lambda x$ for some nonzero vector $x$. Now, I want two examples of such a matrix $A$. For the first one, $A$ should have just one eigenvalue, which should be $0$. For the second one, $A$ should be a matrix whose eigenvalues are $a_1,\dots,a_n$. (They give the $a_i$'s and want the matrix.) Note: I have no idea how to find these matrices. I don't know where to start ...
So, with the hint Andrew gave me, I solved the problem myself... If a matrix is diagonal with $a_1,\dots,a_n$ on the diagonal, then the eigenvalue equations read $(a_i-\lambda)x_i=0$, so the eigenvalues are exactly $a_1,\dots,a_n$. And one matrix which has zero as an eigenvalue, and zero as its only eigenvalue, is the zero matrix itself! (For example, the zero matrix in $M_2(\mathbb R)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can all complex expressions be simplified to the form $a+jb$? Are there any complex expressions that cannot be simplified to the form $a+jb$, where a and b are real numbers? For example, $$\frac{1}{j}=0+j(-1),\hspace{0.5cm}e^j=\cos(1)+j\sin(1),\hspace{0.5cm}\sin(j)=0+j\frac{e^2-1}{2e}$$ From what I understand, all complex numbers must exist somewhere on the complex plane where a and b are the coordinates. But some expressions don't have any obvious way to be simplified: $$\ln(1+j)=???,\hspace{0.5cm}\arctan(j)=???$$ If every expression can be simplified, are there any good references or list of identities?
To get $\ln(z)$ for any complex $z$ (where $z \ne 0$), write $z = |z|y$. Then $|y| = 1$, so there is a real $t$ such that $y = e^{it} =\cos(t)+i\sin(t) $, so $z = |z|y = |z|(\cos(t)+i\sin(t)) = |z|\cos(t)+i|z|\sin(t) $. Since $|1+j| = \sqrt{2}$, $(1+j) =\sqrt{2}\,\frac{1+j}{\sqrt{2}} =\sqrt{2}e^{i\pi/4} $, so $\ln(1+j) =\ln(\sqrt{2}e^{i\pi/4}) =\dfrac{\ln(2)}{2}+i\pi/4 $ (you can throw in $+2\pi i n$ if you want).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
A lower bound in variational inequality Let $f\in C^2([a,b])$, and assume that $$f(a)=0=f(b),~f'(a) = 1,\ f'(b) =0.$$ Prove that $$\int_{a}^{b}\Big|f''(x)\Big|^2\mathrm{d}x\geq \frac{4}{b-a}.$$ Is there a way to convert it into an eigenvalue problem, so that one may look for the first eigenvalue as the lower bound?
Change the variables: $x=(b-a)t+a$ and $g(t)=f((b-a)t+a)$ $$\int_a^b|f''(x)|^2dx = \int_0^1 |f''((b-a)t+a)|^2 (b-a) dt =\frac{1}{(b-a)^3} \int_0^1 |g''(t)|^2 dt$$ Therefore the problem is equivalent to proving $$\int_0^1|g''(t)|^2dt \geq4 (b-a)^2$$ for $g\in C^2([0,1])$ with $g(0)=g(1)=0$, $g'(0)=(b-a)=:c$ and $g'(1)=0$. Let $\phi$ be smooth then using boundary data and integration by parts and Holder we get $$\left| -c\phi(0)+\int_0^1g\phi'' dx \right|= \left| \int_0^1g''\phi dx \right| \leq \left(\int_0^1|g''|^2\right)^\frac{1}{2} \cdot \left( \int_0^1 \phi^2 \right)^\frac{1}{2}$$ Next the problem is to choose the correct $\phi$. Firstly we want $\phi''=0$ so that we get rid of the "bad" term on LHS. So $\phi$ is linear. Also note that the above inequality is invariant under $\phi \mapsto K \cdot \phi$, where $K$ is a constant. So we can look for $\phi(x)=Ax+1$ and find $A$. Now the inequality reads $$\int_0^1 |g''|^2 \geq c^2 \frac{|\phi(0)|^2}{\int_0^1 \phi^2}$$ Remains to choose $\phi$ so that the ratio in front of $c^2$ is $4$. So one can find that $\phi(x)=-\frac{3}{2}x+1$ works. Note that $\phi(0)=1$ and $$\int_0^1 \phi^2 dx = \frac{2}{3} \frac{(\frac{3}{2}x-1)^3}{3} |_0^1 = \frac{1}{4}$$. Moreover it is the best constant in this family since $\int_0^1 (Ax+1)^2 dx = \frac{1}{3} (A^2+3A+3)$ and we want to minimize this quantity (since it is in the denominator). Minimum achieved at the vertex $A=-\frac{3}{2}$. Also in general the constant $4$ is optimal since if you take $f$ to be $3$rd order polynomial satisfying all the given BC then you obtain equality.
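As a sanity check of the equality case, on $[0,1]$ with $c=1$ the cubic $g(t)=t(1-t)^2$ meets all four boundary conditions and attains the constant $4$ exactly (a sketch assuming SymPy):

```python
from sympy import symbols, integrate, diff

t = symbols('t')
g = t * (1 - t)**2
print(g.subs(t, 0), g.subs(t, 1),
      diff(g, t).subs(t, 0), diff(g, t).subs(t, 1))  # 0 0 1 0
print(integrate(diff(g, t, 2)**2, (t, 0, 1)))        # 4
```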
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that for any set with $m$ elements, if the average of any $n < m$ elements is equal to the constant $k$, then each of the $m$ elements is equal to $k$. Attempt: Considering some specific examples, say $m = 4$, $n = 3$, and $k=4$. Let the set be $\{a_1, a_2, a_3, a_4\}$, clearly if the average of any 3 of the elements is 4, then $\frac{a_1+a_2+a_3}{3} = 4 \implies a_1+a_2+a_3 = 12$. If $a_1$, $a_2$, and $a_3$ weren't all equal to $4$, say, $a_1=0$, $a_2=0$, and $a_3=12$, then $a_4$ must be equal to $0$, since $a_2+a_3+a_4 = 0+12+0=12$. But, if $a_4=0$ then $a_1+a_2+a_4 = 0 \neq 12$, meaning the average of $a_1$, $a_2$ and $a_4$ is not $4$, so clearly the only way for the average of any 3 elements of the set to be equal to 4 is if all elements are equal to 4. How can I generalize this proof?
Call the elements $a_1,a_2,\dots,a_m.$ Note that $$a_1+a_2+\cdots+a_n=a_2+a_3+\cdots+a_{n+1}$$ since both sums are equal to $nk.$ It follows that $a_1=a_{n+1}.$ Since the indexing is arbitrary, it follows that any two elements are equal. And of course the common value must be $k.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question about homomorphisms between symmetric groups Let $f: S_A \to S_B$ be a homomorphism from the symmetric group on $A$ to the symmetric group on $B$, where $A$ and $B$ may be infinite. For $X\subseteq A$ and $b_1,b_2\in B$, say that $b_1\sim_X b_2$ if and only if $f(g)(b_1)=b_2$ for some $g$ s.t. $g(x) = x$ for all $x\in X$. Define $h: \mathcal{P}(B)\to\mathcal{P}(\mathcal{P}(A))$ s.t. $h(Y)=\{X: Y\text{ is closed under }\sim_X\}$. Call $Y$ principal if $\bigcap h(Y)\in h(Y)$ and boring if $\emptyset\in h(Y)$. Question 1: Is it possible that every principal $Y\subseteq B$ be boring but not every $Y\subseteq B$ be boring? Question 2: More generally, could there be a complete proper subalgebra of the power set algebra $\langle\mathcal{P}(B),\cap,\cup\rangle$ that contains all principal elements of $\mathcal{P}(B)$? ("More generally" because an affirmative answer to Question 1 implies that the boring subsets of $B$ form such a subalgebra.) Question 3: Is there standard terminology for and/or standard results about the notions defined above?
I think the answer to Question 1 (and hence Question 2) is "yes", unless I've made a mistake. For $X,Y\in\mathcal{P}(\mathbb{N})$ and $g\in\mathbb{N}^\mathbb{N}$, let $g(X) = \{g(x): x\in X\}$, $E = \{\langle X,Y\rangle: |(X\cup Y)\backslash(X\cap Y)|<\omega\}$, and $[X] = \{Y: \langle X,Y\rangle\in E\}$. Now let $A = \mathbb{N}$ and $B=(\mathcal{P}(\mathbb{N})/E)\backslash\{[\emptyset],[\mathbb{N}]\}$, and define $f: S_A\to S_B$ s.t. $f(g)([X]) = [g(X)]$. By construction, $h(Z)$ must be closed under $E$ for arbitrary $Z\subseteq B$; hence $Z$ is principal only if it is boring. And not every $Z$ is boring, since only $\emptyset$ and $B$ are boring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How can the surd $\sqrt{2-\sqrt{3}}$ be expressed? I was wondering how $\sqrt{2-\sqrt{3}}$ could be expressed in terms of $\frac{\sqrt{3}-1}{\sqrt{2}}$. I did try to solve both the expressions separately but none of them seemed to match. I would appreciate it if someone could also mention the procedure
Theorem: Given a nested radical of the form $\sqrt{X\pm Y}$, it can be rewritten into the form $$\sqrt{\frac {X+\sqrt{X^2-Y^2}}{2}}\pm\sqrt{\frac {X-\sqrt{X^2-Y^2}}{2}}\tag{1}$$ Where $X>Y$. Therefore, we have $X=2,Y=\sqrt{3}$ because $2>\sqrt{3}$. So plugging that into $(1)$ gives us $$\sqrt{\frac {2+\sqrt{4-3}}{2}}-\sqrt{\frac {2-\sqrt{4-3}}{2}}\tag{2}$$ Simplifying $(2)$ gives us $$\sqrt{\frac {2+1}{2}}-\sqrt{\frac {2-1}{2}}\implies \sqrt{\frac 32}-\sqrt{\frac 12}$$ $$\therefore\sqrt{2-\sqrt{3}}=\frac {\sqrt{3}-1}{\sqrt{2}}$$ Alternatively, one can rewrite it as a sum of two surds, and simplify from there. Specifically, let $\sqrt{2-\sqrt3}$ equal $\sqrt d-\sqrt e$. Squaring, we get\begin{align*} & 2-\sqrt3=d+e-2\sqrt{de}\\ & \therefore\begin{cases}d+e=2\\de=\frac 34\end{cases}\end{align*} With solving for $d$ and $e$ gives the simplification.
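A quick numeric confirmation (a sketch in plain Python):

```python
from math import sqrt, isclose

print(isclose(sqrt(2 - sqrt(3)), (sqrt(3) - 1) / sqrt(2)))  # True
print(sqrt(2 - sqrt(3)))                                    # 0.5176380902...
```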
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Correctly integrating the product of two functions I'm trying to determine the probability of finding one function inside another. While the plot suggests a non-zero overlap, integrating to determine the number gives me zero. The two functions are $$ f(x) = \left( \frac{1}{\pi } \right)^{1/4} e^{- \frac{1}{2 } x^2} $$ $$ f_{1}(x) = \left( \frac{1}{ 2\pi} \right)^{1/4} x\ e^{- \frac{1}{4} x^2} $$ and the plot shows that a portion of one is found inside the other. When integrating over all space, defined as $$ \int_{-\infty}^{\infty} f_{1}^{*}(x)\ f(x) dx $$ (after integrating, I take the square modulus to determine the probability), I get $$ \int_{-\infty}^{\infty} \left( \frac{1}{2\pi} \right)^{1/4} \left( \frac{1}{\pi }\ \right)^{1/4} e^{ -\frac{3}{4 } x^2} x\ dx = 0 $$ Is this due to integrating over all space? Judging by my plot, should I change my bounds to $0$ to $\infty$ instead?
Your orange plot is incomplete. If you were to plot your $f_1$ over the interval $[-15,15]$ you would see that it has a negative lobe that is symmetric with the positive lobe you plotted. There are two ways to fix this: 1) Do what you suggest and only integrate over the intersection of the supports of the two functions. In this case, $[0,\infty)$. 2) Actually write down your $f_1$'s complete definition: $$f_1(x) = \begin{cases} \frac{ \left( \frac{m \omega}{\pi \hbar} \right)^{1/4} }{\sqrt{\hbar m \omega}}\ x\ e^{- \frac{m \omega}{4 \hbar} x^2} ,& x \geq 0 \\ 0 ,& x < 0 \end{cases} \text{,} $$ so that the product of your functions is actually zero to the left of zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1954941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $(a+b)$ divides $a^2$ then $(a+b)$ divides $b^2$ I'm trying to solve the following exercise: "Prove that if $(a+b)$ divides $a^2$ then $(a+b)$ divides $b^2$". It's quite obvious how to prove divisibility for a product, but how to do it for a sum?
If $a+b$ divides $a^2$, then you can write $a^2=(a+b)k$. But then $$b^2= a^2-a^2+b^2 = a^2 -(a^2-b^2)$$ $$=\underbrace{(a+b)k}_{a^2} - (a+b)(a-b)$$ $$=(a+b)(k-(a-b))$$ $$=(a+b)(k-a+b)$$ so $(a+b)$ divides $b^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Complex Analysis Branches I have a single-valued function defined as a branch of the multivalued function $$f=\ln[z(z+1)] \tag{*}$$ on the complex plane with the segment $[-1,0]$ of the real axis removed, and require $$f(1)=\ln[2] +2\pi i \tag{**}$$ Show that (*), (**) and the proposed branch cut do not describe a single-valued function. I am trying to trace the imaginary part of $f$: $\text{Im}(f)= \arg(z) + \arg(z+1)$, and it can't seem to satisfy condition (**). When $z=1$, I'm getting $\arg(z)=0$ and $\arg(z+1) =0$.
I suppose you mean $f(1)=\ln(2) + 2\pi i$, which is a consistent choice for $f$ at $1$. As mentioned, the function is not single valued: when you go once around the cut, the argument increases by $4\pi$. One way to see this is to take the derivative $f'(z)=\frac{1}{z} + \frac{1}{z+1}$ and note that if you choose a contour $\gamma$ winding once counter-clockwise around the cut, then $\oint_\gamma f'(z)\, dz=4\pi i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Strictly increasing function and its derivative We have "$f'(x)>0$ in interval $I\implies f(x)$ strictly increasing in interval $I$". However, the converse is not true. A frequently cited example is the function $f(x)=x^3$, which is strictly increasing but $f'(0)=0$. Here comes my question: What is the necessary and sufficient condition of $f'(x)$ to make $f(x)$ strictly increasing? My guess is: $f'(x)>0$ almost everywhere. This is clearly a sufficient condition, as we can use Lebesgue integration and "drop the word 'almost'". But is this a necessary condition? Is there a weaker condition? A full solution or a little hint will both be appreciated. Thank you in advance.
If $f(x)$ satisfies $f'(x)\ge 0$ for all $x\in \mathbb R$, and if for every $x_0\in \mathbb R$ with $f'(x_0)=0$ there is a neighborhood of $x_0$ in which $f'(x)\ne 0$ except at $x_0$ itself (in other words, the roots of the derivative are isolated), then $f(x)$ is strictly increasing on $\mathbb R$. I am not sure whether there is a weaker sufficient condition, but an interval $[a,b]$ with $a<b$ and $f'(x)=0$ for all $x\in [a,b]$ is obviously impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Plotting complex inequalities in maple I'm having trouble plotting a set of complex numbers in maple. I'm trying to plot the set $$S = \lbrace z \in \Bbb C : 1 \leq \lvert z\rvert \leq 2, \frac{\pi}{4} \leq \lvert \arg(z)\rvert \leq \frac{\pi}{2}\rbrace.$$ I know what it should look like from a drawing I produced but I'd like to plot it in maple. My code is as follows; z := x + I*y; plots:-implicitplot([abs(z) <= 2, abs(z) <= 1, abs(arg(z)) >= Pi/4, abs(arg(z)) <= Pi/2], x = -3...3, y = -3...3, filled = true); The issue is that the inequalities are being plotted independently of each other rather than all together, so even the first pair of inequalities together fill the entire plane. Is there any way I can have the $4$ conditions imposed in $S$ be taken into account at the same time, rather than separately?
restart; z := x + I*y: plots:-implicitplot( piecewise( (abs(z) <= 2) and (abs(z) >= 1) and (abs(argument(z)) >= Pi/4) and (abs(argument(z)) <= Pi/2), false, true), x=-3...3, y=-3...3, gridrefine=3, view=[-3..3,-3..3] );
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f:[0,2]\to\mathbb{R}$ continuous, then if $\int_0^x |f(t)| \ dt = \int_x^2 |f(t)| \ dt$ then $f(x) = 0$ I have to show that if $f:[0,2]\to\mathbb{R}$ is continuous and $\int_0^x |f(t)| \ dt = \int_x^2 |f(t)| \ dt$ holds for every $x$, then $f(x) = 0$ for $x\in [0,2]$. I tried substituting $x=0$ in the equality to get: $$0 = \int_0^0 |f(t)| \ dt = \int_0^2 |f(t)| \ dt$$ So we have that $$\int_0^2 |f(t)| \ dt = 0$$ and then $f(x) = 0$ because we're integrating a continuous function that is always positive or $0$. Am I right?
For an alternative approach, using the Fundamental theorem of calculus and the given identity: $$|f(x)| = \left(\int_0^x |f(t)|\;dt\right)' = \left(\int_x^2 |f(t)|\;dt\right)' = -|f(x)|$$ Therefore $\;\;|f(x)| = -|f(x)|$ $\;\;\implies\;\; |f(x)| = 0$ $\;\;\implies\;\; f(x) = 0\;\;$ for $\;\; x \in [0,2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Comparison principle for $-\varepsilon \, \Delta u + u_t + H(x,Du) = 0$. Given that $u$ is a classical solution of $$-\varepsilon \, \Delta u + u_t + H(x,Du) = 0$$ and $\varphi\in C^2$ (called a supersolution?) satisfies $$-\varepsilon \, \Delta \varphi + \varphi_t + H(x,D\varphi) \geq 0$$ with the same initial condition $u(x,0) = \varphi(x,0) = f(x)$. Can I say $u(x,t)\leq \varphi(x,t)$ with the only assumption that $H(x,p)$ is bounded and uniformly continuous on $\mathbb{R}^n\times [0,T]$ and coercive in $p$?
The answer is no, without some additional growth constraints on $u$ and $\phi$. You can just take $H\equiv 0$, and then you have the heat equation, which has infinitely many smooth solutions for the same initial data if you do not impose growth constraints. If $u$ and $\phi$ are bounded, then the answer will be yes. The trick is to modify either the sub or supersolution in such a way that $u \leq \phi$ outside of a large rectangle $[-N,N]\times [0,T]$. Then use the usual maximum principle. Generally, when you prove uniqueness of viscosity solutions to HJ equations ($\varepsilon=0$), you assume the solutions are bounded. EDIT: Let me add a few words about the trick I mentioned above. For $\lambda >0$ write $v(x) = \phi(x) + \lambda^2 \log(1+|x|^2) + \lambda t.$ If $H$ is uniformly continuous in $\nabla u$, then for $\lambda$ sufficiently small $-\varepsilon \Delta v + v_t + H(\nabla v, x) > 0.$ Furthermore, if $\phi$ and $u$ are bounded, then $u \leq v$ on $\partial B(0,R)\times [0,T]$ for sufficiently large $R$. Use the ordinary maximum principle for bounded domains to get $u \leq v$ on $\mathbb{R}^n\times [0,T]$. Then send $\lambda \to 0^+$. The argument only requires uniform continuity of $H$ in $\nabla u$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
acceleration formula of a parametric curve? I have a parametric curve as follows: x(t)= 0.236t³-0.645t²+0.909t+0 y(t)= 0.189t³-0.792t²+0.603t+0 which looks like this (plot omitted). Now, I want to find the acceleration of this curve, from when it starts at 0,0 and ends at .5,0. How would I do this? It has been years since I took a math course, and I have spent hours and hours on Google trying to figure this out but I can't. Can someone please explain it in layman's terms? Thanks in advance!
Hint: The acceleration is the second derivative of position. As such, the acceleration in the x direction would be x''(t), and the acceleration in the y direction would be y''(t).
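For the specific curve in the question this amounts to differentiating each coordinate polynomial twice (a sketch assuming SymPy):

```python
from sympy import symbols, diff

t = symbols('t')
x = 0.236*t**3 - 0.645*t**2 + 0.909*t
y = 0.189*t**3 - 0.792*t**2 + 0.603*t
print(diff(x, t, 2))  # 1.416*t - 1.29
print(diff(y, t, 2))  # 1.134*t - 1.584
```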
{ "language": "en", "url": "https://math.stackexchange.com/questions/1955881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How To Set Up A Card Probability Problem Rosa draws a five-card hand from a $52-$card deck. For each scenario, calculate the total possible outcomes: Rosa’s hand has three red cards and three face cards. The only way I could think about doing this was to add together the $26$ red cards, along with the face cards, since half of them are already red, it was only $6$, which would give me $32$ cards that I am trying to get. At first I thought I would set this up like other problems I have done where I put $32/5$, but the answer is not right. Could someone please show me how to set this up?
Since Rosa's hand has $5$ cards and the conditions are $3$ red and $3$ face cards, at least one card has to be a "red, face card". Following hands satisfy the given conditions (please check that I did not forget something): * *$1$ red face card, $2$ red not-face cards, $2$ black face cards. *$2$ red face cards, $1$ red not-face card, $1$ black face card, $1$ black not-face card. *$3$ red face cards, $2$ black not-face cards. Now, there are $6$ red face cards ($2$ J's plus $2$ Q's plus $2$ K's), $20$ red not-face cards and the same for black. So, * *Possible ways: $$\dbinom{6}{1}\cdot\dbinom{20}{2}\cdot\dbinom{6}{2}=17100$$ *Possible ways: $$\dbinom{6}{2}\cdot\dbinom{20}{1}\cdot\dbinom{6}{1}\cdot\dbinom{20}{1}=36000$$ *Possible ways: $$\dbinom{6}{3}\cdot\dbinom{20}{2}=3800$$ Hence the total ways are equal to $$17100+36000+3800=56900$$
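A brute-force verification over all $\binom{52}{5}$ hands (a sketch in plain Python; it runs in a few seconds):

```python
from itertools import combinations

deck = [(color, face) for color in ('R', 'B')
                      for face in (True, False)
                      for _ in range(6 if face else 20)]  # 6 face + 20 non-face per color
count = sum(1 for hand in combinations(deck, 5)
            if sum(c == 'R' for c, f in hand) == 3
            and sum(f for c, f in hand) == 3)
print(count)  # 56900
```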
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of integer triangles with sides 4 or less Consider triangles having integer sides such that no side is greater than 4 units. How many such triangles are possible? I suspect a relation to the following question: How many ways can $r$ things be taken from $n$ with repetition and without regard to order?
Without loss of generality, assume that $a \leq b \leq c < a+b$. This leaves us with very few choices. 1) All three could be equal. That gives us four choices. 2) $a=1$. Then, $b+1 > c$, so $b=c$ must happen, this gives three choices. 3) $a=2$. Then, $b \leq c < b+2$, so $b=2$,$c=3$ and $b=3, c=3,4$ , and $b=c=4$ are the possibilities. 4) $a=3$, then $b=3,c=4$ and $b=c=4$ are the only possibilities. Hence, the total is $4+3+4+2 = 13$ possibilities.
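An exhaustive check of the count (a sketch in plain Python):

```python
count = sum(1 for a in range(1, 5)
              for b in range(a, 5)
              for c in range(b, 5)
              if c < a + b)        # triangle inequality with a <= b <= c
print(count)  # 13
```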
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability: Venn diagrams; independence Are both Venn diagrams (i and ii) showing dependent events (showing that A and B are dependent on one another)? I am told that the quartered one (Venn diagram i) shows independent events; however, I do not see any difference between the two diagrams. Surely P(B|A) (the probability of B given A) will come out to be the same in both diagrams. This is because we are constrained to A and AB (highlighted red in the Venn diagrams), as we know the outcome will be either A or AB. Further, the ratio of N(AB) to N(A) will give us our probability P(B|A), which can be seen in both diagrams to lie in the region A and AB. I know that if P(B|A) = P(B), we have independent events; however, how can we deduce whether the Venn diagrams are showing independent events or not? Thanks
Edit: as @Henry commented, in general Venn diagrams do not rely on area in any way; the only thing that matters are the intersections. If you consider your diagrams to be pure Venn diagrams, then it is impossible to tell your two diagrams apart, and likewise impossible to deduce independence. However, if you impose on your diagram the rule "the area is proportional to the probability", then the following applies. From what I've seen around the web, such diagrams are called scaled Euler diagrams or, more informally, proportional Venn diagrams. Think of probabilities as areas. For instance, $P(A)$ is the area of the red rectangle, divided by the total area $\Omega$. Computing $P(A|B)$ is the same, but here you only consider your universe to be the area delimited by $B$. In the first diagram, it can be seen that the area of the red rectangle divided by the total area, and the area of the small rectangle $AB$ divided by the area of the rectangle $B$, have the same ratio. If we put it into mathematical terms, we get $P(A) = P(A \cap B) / P(B) = P(A|B)$, so by definition $A$ and $B$ are independent. In the second diagram, it is not the case (you may want to prove it with an exact computation of all the areas, but I guess it's not the point of the example you read): here, the ratio of the area of $A \cap B$ to the area of $B$ and the ratio of the area of $A$ to the area of $\Omega$ are different, so $A$ and $B$ are not independent. However, note that using Venn diagrams to deduce independence of events is not a good idea; Venn diagrams are not made for such purposes, and as such proving independence using Venn diagrams is not advised.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Monotone convergence when $f_n$ is decreasing I know that if $f_n\geq 0$, $(f_n)$ is increasing, and $\lim_{n\to \infty }f_n=f\in L^1$, then $$\lim_{n\to \infty }\int f_n=\int f.$$ Does this result also hold when $(f_n)$ is decreasing?
Let $f_{n}=\frac{1}{n}\chi_{[0,n]}$. For every $\epsilon > 0$ and $x \in \mathbb{R}$ there exists $N > 1$ such that $|f_n(x)| < \epsilon $ for every $n>N$. Hence, $f_n$ converges to $f = 0$ uniformly. However, $$0 = \int f d\lambda \not= \lim \int f_n d\lambda = 1$$ The MCT does not apply because the sequence is not monotone increasing. Fatou’s lemma obviously applies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Generating function for the number of ordered triples Let $a_n$ be the number of ordered triples $(i, j, k)$ of integers such that $i \geq 0$, $j \geq 1$, $k \geq 1$, and $i+3j +3k = n$. Find the generating function of the sequence $(a_0, a_1, a_2, \dots)$ and derive a formula for $a_{n}$. First I used polynomials to express the possible combinations of $i,j,k$ and got $(\frac{1}{1-x})(\frac{1}{1-x^3}-1)(\frac{1}{1-x^3}-1)$, with $\frac{1}{1-x}$ standing for the sequence $1, 1, 1, 1, \dots$ and the two other factors for the possible values of $j$ and $k$. I subtracted $1$ from each sum since I can't have $j=0$ or $k=0$. Is my calculation correct? How can I go about finding the formula for $a_{n}$?
Yes, your generating function is correct. To extract its coefficients, there are many things you could do. The factor of $\frac1{1-x}$ always leads to sums of coefficients of other terms. For the term $\left(\frac{x^3}{1-x^3}\right)^2$, you can first remove the $x^6$ in the numerator and then deal with the denominator. One thing you can always do is just work out the first few terms in the series by hand and then hope to see a pattern. For instance, \begin{eqnarray*} \frac1{(1-x^3)^2}&=&(1+x^3+x^6+x^9+\cdots)(1+x^3+x^6+x^9+\cdots)\\ &=&1+2x^3+3x^6+4x^9+\cdots. \end{eqnarray*} Perhaps that is enough of a start so that you can finish on your own.
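A coefficient check against direct counting (a sketch assuming SymPy):

```python
from sympy import symbols

x = symbols('x')
G = 1/(1 - x) * (x**3 / (1 - x**3))**2
s = G.series(x, 0, 16).removeO()
for n in range(16):
    direct = sum(1 for j in range(1, 6) for k in range(1, 6) if 3*j + 3*k <= n)
    assert s.coeff(x, n) == direct
print("a_n matches the direct count for n = 0, ..., 15")
```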
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Increasing sequence of events and the probability of their limit If $A_1\subset A_2\subset A_3\subset\cdots$ is an increasing sequence of events with limit $A=\bigcup_{i=1}^\infty A_i$. Prove that $$\lim_{n\rightarrow\infty} P(A_n)=P(A)$$ My attempt so far: Since $A_1\subset A_2\subset A_3\subset\cdots$ is increasing, we have $\bigcup_{i=1}^\infty A_i=\lim_{n\rightarrow\infty} A_n$ and $A=\bigcup_{i=1}^\infty A_i$, so $\lim_{n\rightarrow\infty}A_n=A$. I have that $$P\left(\lim_{n\rightarrow\infty}A_n\right)=P(A)\Rightarrow \lim_{n\rightarrow\infty}P(A_n)=P(A)$$
Let $B_1 = A_1$ and $B_{n+1} = A_{n+1}\setminus A_n$ for $n\ge 1$. Then $$ A_N = \bigcup_{n=1}^N B_n $$ and $$ \bigcup_{n=1}^\infty A_n = \bigcup_{n=1}^\infty B_n, $$ and $$ B_n \cap B_m = \varnothing \text{ for } n\ne m. $$ So \begin{align} P(A) & = P\left( \bigcup_{n=1}^\infty A_n \right) = P\left( \bigcup_{n=1}^\infty B_n \right) \\[10pt] & = \sum_{n=1}^\infty P(B_n) \qquad \left( \begin{array}{l} \text{by countable additivity of } P \text{ and} \\ \text{pairwise disjointness of } \{B_n\}_{n=1}^\infty \end{array} \right) \\[10pt] & = \lim_{N\to\infty} \sum_{n=1}^N P(B_n) = \lim_{N\to\infty} P\left( \bigcup_{n=1}^N B_n \right) = \lim_{N\to\infty} P(A_N). \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$f\in L^2(\mathbb{R})\Rightarrow f\to 0, x\to\pm\infty$? As the title already suggests: Let $f\in L^2(\mathbb{R})$. Does this imply that $$ f\to 0\text{ as }x\to\pm\infty? $$
The answer is no. Consider $$f(x)=\sum_{n=1}^\infty \chi_{[n-\frac 1 {n^2},n+\frac 1 {n^2} ]}(x)$$ with $\chi$ the characteristic function. Then $f$ is in $L^2$, but $f$ does not have a limit at $\infty$. You can also build similar examples where $f$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $S = \{a^2 + b^2: a,b \in \Bbb N\}$ is closed under multiplication. Is it true? Can you prove or disprove this? $S = \{a^2 + b^2: a,b \in \Bbb N\}$ is closed under multiplication.
Suppose that $x=a^2+b^2$ and $y=c^2+d^2$. Then $$xy=\det \begin{pmatrix} a&b\\-b&a \end{pmatrix} \det\begin{pmatrix} c&d\\-d&c \end{pmatrix} =\det \begin{pmatrix} a&b\\-b&a \end{pmatrix} \begin{pmatrix} c&d\\-d&c \end{pmatrix} =\det\left( \begin{array}{cc} a c-b d & b c+a d \\ -b c-a d & a c-b d \\ \end{array} \right)=(ac-bd)^2+(bc+ad)^2$$
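A quick spot check of the identity (a sketch in plain Python):

```python
a, b, c, d = 2, 3, 5, 1
print((a**2 + b**2) * (c**2 + d**2))    # 338
print((a*c - b*d)**2 + (b*c + a*d)**2)  # 338
```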
{ "language": "en", "url": "https://math.stackexchange.com/questions/1956926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does an exponent work when it's less than one? I'm rather familiar with exponents; I know that $y^x = \underbrace{y \cdot y \cdots y}_{x\ \text{times}}$, but what if the exponent is less than one? How would that work? I put $25^{1/2}$ into my computer anyway, expecting it to give me an error, and I got an answer! And even more surprising, when I did this with more numbers and a little research, I found that $$x^{1/y} = \sqrt[y] x$$ Is it just me, or can exponents take on the role of square roots? If every expression can be simplified, are there any good references or lists of identities?
$$y^x = y\cdots y$$ only if $x$ is a positive integer. In general: $$y^x = \exp(x\log y)$$ provided that $y >0$ (the functions $\exp$ and $\log$ can be defined via power series). Also one can prove: $$\sqrt[x]{y} = y^{1/x}$$ as you noted (this can also be taken as the definition of roots).
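A quick numeric illustration of the definition (a sketch in plain Python):

```python
from math import exp, log

print(exp(0.5 * log(25)))  # 5.0, matching 25**(1/2) = sqrt(25)
print(25 ** 0.5)           # 5.0
```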
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 7, "answer_id": 2 }
Does this make mathematical sense? For a given set $A$, An element such that $a \in A $ exists. If $A$ is a set of all natural numbers, then: $$ a \in A \in \mathbb{N} \subset \mathbb{Z} \subset \mathbb{R}. $$ Would maths normally be written like this, if it is correct?
This question is a bit confusing and no it doesn't make a lot of "sense" overall. Especially given that $A$ being defined as the set of all natural numbers means $A\not\in\mathbb{N}$ but that $A=\mathbb{N}$. As for your question, would maths normally be written like this... yes, those are all valid mathematical symbols and there is some logic to the way you are formulating them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How to know when a quintic is solvable So according to Abel-Ruffini Theorem, it states that there is no algebraic solution, in the form of radicals, to general polynomials of degree $5$ or higher. But I'm wondering if there is a way to decide whether a polynomial, such as $$x^5+14x^4+12x^3+9x+2=0$$ has roots that can be expressed in radicals or not just by having a glance at the polynomial.
As the others have commented, to know when a quintic (or higher) is solvable in radicals requires Galois theory. However, there is a rather simple aspect when it is not solvable that is easily understood and can be used as a litmus test. Theorem: An irreducible equation of prime degree $p>2$ that is solvable in radicals has either $1$ or $p$ real roots. (Irreducible means that it does not factor into polynomials of lower degree with rational coefficients; in particular, it has no rational roots.) By sheer coincidence, the irreducible quintic you chose has $3$ real roots so, by looking at its graph, you can indeed tell at a glance that this is not solvable in radicals. Going higher, if an irreducible septic has $3$ or $5$ real roots, then you automatically know it is not solvable. And so on. P.S. And before you ask, it does not work the other direction: if it has $1$ or $p$ real roots, it does not imply it is solvable in radicals. It is a necessary but not sufficient condition.
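The root count itself is easy to confirm numerically (a sketch assuming NumPy):

```python
import numpy as np

roots = np.roots([1, 14, 12, 0, 9, 2])         # x^5 + 14x^4 + 12x^3 + 9x + 2
print(sum(abs(r.imag) < 1e-7 for r in roots))  # 3 real roots, so not solvable in radicals
```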
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
how is it possible for $\mathfrak{a}^n\ne \mathfrak{a}$? If an ideal $\mathfrak{a}$ is an additive subgroup of a ring $A$ satisfying $A\mathfrak{a}\subset \mathfrak{a}$, then in particular, for any $a_1, a_2, \dots, a_n \in A$ and any $x\in \mathfrak{a}$ $$y_1=a_nx\in \mathfrak{a}$$ so $$y_2=a_{n-1}y_1=a_{n-1}a_nx\in \mathfrak{a}$$ $$y_3=a_{n-2}y_2=a_{n-2}a_{n-1}a_nx\in \mathfrak{a}$$ $$\vdots$$ $$y_n=a_1y_{n-1}=a_1\dots a_nx \in \mathfrak{a}$$ but, taking each of the $a_i$ to be in $\mathfrak{a}$, and calling $x=a_0$, we get that $a_0\dots a_n \in \mathfrak{a}$. So, if the powers $\mathfrak{a}^n$ of $\mathfrak{a}$ are defined as the ideals generated by all products $a_0\dots a_{n-1}$ of elements of $\mathfrak{a}$, then how is it possible for $\mathfrak{a}^n\ne \mathfrak{a}$?
Your argument shows that $a^n\subseteq a$, but it does not show that $a\subseteq a^n$, which may in fact be false. For instance take $A=\mathbb{Z}$ and $a=2\mathbb{Z}$. Then an element of $a^2$ is a sum of products of two even integers. Any such product is divisible by $4$, and so is any sum of such products, so every element of $a^2$ is divisible by $4$. In particular, $2\in a$ but $2\not\in a^2$. (With slightly more work, you can show that in fact $a^2=4\mathbb{Z}$.)
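To see the parenthetical claim: the containment $a^2 \subseteq 4\mathbb{Z}$ is shown above, and conversely every multiple of $4$ can be written as $4k = 2 \cdot (2k)$, a product of two elements of $a$, so $4\mathbb{Z} \subseteq a^2$ and hence $a^2 = 4\mathbb{Z}$.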
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluate $\lim_{x\to0}\frac{1-\cos3x+\sin 3x}x$ without L'Hôpital's rule I've been trying to solve this question for hours. It asks to find the limit without L'Hôpital's rule. $$\lim_{x\to0}\frac{1-\cos3x+\sin3x}x$$ Any tips or help would be much appreciated.
Taylor expansion is always a good solution since the method will provide the limit and more. Remembering that $$\cos(t)=1-\frac{t^2}{2}+O\left(t^4\right)$$ $$\sin(t)=t-\frac{t^3}{6}+O\left(t^4\right)$$ replace $t$ by $3x$ to get $$1-\cos (3 x)+\sin (3 x)=1-\left(1-\frac{9 x^2}{2}+O\left(x^4\right) \right) +\left( 3 x-\frac{9 x^3}{2}+O\left(x^4\right)\right)$$ $$1-\cos (3 x)+\sin (3 x)=3 x+\frac{9 x^2}{2}-\frac{9 x^3}{2}+O\left(x^4\right)=3 x+\frac{9 x^2}{2}+O\left(x^3\right)$$ $$\frac{1-\cos (3 x)+\sin (3 x) }x=3+\frac{9 x}{2}+O\left(x^2\right)$$ which shows the limit and how it is approached.
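For a quick independent check (a sketch assuming the sympy library is available):

    from sympy import symbols, sin, cos, limit

    x = symbols('x')
    print(limit((1 - cos(3*x) + sin(3*x)) / x, x, 0))  # prints 3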
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Minimum value of $2^{\sin^2x}+2^{\cos^2x}$ The question is what is the minimum value of $$2^{\sin^2x}+2^{\cos^2x}$$ I think if I put $x=\frac\pi4$ then I get a minimum of $2\sqrt2$. But how do I prove this?
Let $y=2^{\sin^2x}+2^{\cos^2x}=2^{\sin^2x}+2^{1-\sin^2x}$ $$(2^{\sin^2x})^2-y\cdot2^{\sin^2x}+2=0$$ which is a Quadratic Equation in $2^{\sin^2x}$ So, the discriminant must be $\ge0$ $$(y)^2\ge4\cdot2\implies y^2\ge8$$ As $y>0,y\ge2\sqrt2$ The equality occurs if $$2^{\sin^2x}=\dfrac{2\sqrt2}2=\sqrt2=2^{1/2}$$ i.e., if $\sin^2x=\dfrac12\iff\cos2x=0$
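As a cross-check, the AM-GM inequality gives the same bound in one line (both terms are positive): $$2^{\sin^2x}+2^{\cos^2x}\ \ge\ 2\sqrt{2^{\sin^2x}\cdot 2^{\cos^2x}}\ =\ 2\sqrt{2^{\sin^2x+\cos^2x}}\ =\ 2\sqrt2,$$ with equality exactly when $2^{\sin^2x}=2^{\cos^2x}$, i.e. $\sin^2x=\cos^2x=\frac12$.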
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 0 }
Correct use of the implication symbol A lecturer mentioned that a common mistake people make in assignments is the incorrect use of the implication notation, $\Rightarrow $. I would like to clarify the correct use of the symbol as I am responsible for marking some first year assignments this term, and have been advised to deduct marks if students make this 'mistake'. The symbol should be used, I am told, only when making a logical statement $A\Rightarrow B $, i.e. when the truth value is unknown. In other situations where we know $A $ is true, we should use the therefore symbol $\therefore $. So, for example, a mark would need to be deducted for the following answer: Q: If $(a_n),(b_n) $ are positive, bounded real sequences, then $(a_nb_n) $ is also bounded.
I would not deduct any marks for the first answer. What is an implication? It simply says "If A is true, then B is true". This is symbolically written as $A \implies B$. When the implication is false, there is some object having property $A$ that does not have property $B$. In the implication in question, it is clear that the author knows the context he is working in, and does not need another redundant statement to clarify to a well-read instructor that he is aware of the context. Therefore, there is nothing wrong with the logic of the answer, and I would object to a deduction of marks.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1957885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Question in complex numbers from GRE This is a question motivated by the GRE subject test exam. If $f(x)$, a polynomial over the real numbers, has the complex numbers $2+i$ and $1-i$ as roots, then $f(x)$ could be: a) $x^4+6x^3+10$ b) $x^4+7x^2+10$ c) $x^3-x^2+4x+1$ d) $x^3+5x^2+4x+1$ e) $x^4-6x^3+15x^2-18x+10$ What I thought at first was to calculate $(x-2-i)(x-1+i)$ and find the polynomial that is divisible by it. Unfortunately, it turns out that $(x-2-i)(x-1+i)$ is a complex polynomial, which makes things harder to calculate, and since this is a multiple-choice question with very limited time, there must be an easier way. Then I thought maybe it would be easy if I wrote the complex numbers in polar form and checked explicitly whether they are roots, but I don't think that's a very efficient way either. Then I noticed that the question ends with "$f(x)$ could be", which may suggest that there is a way to eliminate the wrong choices; however, I have no idea what to eliminate. Does anyone have any ideas? Thanks in advance
Since $f(x)$ has real coefficients, the complex conjugates $2-i$ and $1+i$ must also be roots, so $f(x)$ must be divisible by both of the real quadratics computed below. 1. $(x-(2-i))(x-(2+i))$ $x^2-x(2+i)-x(2-i)+(2-i)(2+i)$ $x^2-2x-xi-2x+xi+(4-2i+2i+1)$ $x^2-4x+5$ 2. $(x-(1-i))(x-(1+i))$ $x^2-x(1+i)-x(1-i)+(1-i)(1+i)$ $x^2-x-xi-x+xi+(1+i-i+1)$ $x^2-2x+2$ 3. $(x^2-4x+5)(x^2-2x+2)$ $x^4-2x^3+2x^2-4x^3+8x^2-8x+5x^2-10x+10$ $x^4-6x^3+15x^2-18x+10$ This matches choice (e).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why $\frac{\log4}{\log b}$ can't be simplified to $\frac4b$? I want to know why $\frac{\log4}{\log b}$ can't be simplified to $\frac4b$. I am a high school student. Please do not quote some theories that are too advanced for me. Thank you!
Well, suppose you could do such a simplification: $$ \frac{\log 4}{\log b}=\frac{4}{b}\tag{1} $$ You would end up with (do you know why?) $$ b\cdot \log 4=4\cdot\log b, $$ which implies (do you know why?) that $$ \log 4^b=\log b^4\tag{2}. $$ If (1) were true for every $b>0$, then (2) must also be true for every $b>0$, and in particular true for $b=1$, which gives $\log 4=\log 1$. But this is impossible, since $\log 1=0\neq\log 4$.
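For a concrete counterexample, take $b=16$: $$\frac{\log 4}{\log 16}=\frac{\log 4}{2\log 4}=\frac12, \qquad \text{but} \qquad \frac{4}{16}=\frac14,$$ so the two expressions are not equal in general. (Beware that a few special values, such as $b=2$ or $b=4$, happen to satisfy equation (1), which is why testing a single value can be misleading.)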
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 6 }
Simple line integral Let $f(x,y)=x$ and $C=[0,1]\times\{0\}$ (the line segment joining the points $(0,0)$ and $(1,0)$). I want to calculate $\boxed{\displaystyle\int_C f(x,y)\,ds}$. I calculate the following: $$\displaystyle\int_C f(x,y)\,ds=\int_0^1 x\,dx=\frac{1}{2}$$ Is this correct?
It is correct. One remark: since $ds$ denotes the arc length element, which is non-negative for any parametrization, a scalar line integral $\int_C f\,ds$ does not depend on the orientation of the contour; traversing the segment from $(0,0)$ to $(1,0)$ or from $(1,0)$ to $(0,0)$ gives the same value $1/2$. Orientation does matter for integrals of the form $\int_C f\,dx$ and for line integrals of vector fields, where reversing the direction flips the sign. For those, you should imagine a point in motion along the contour, so there is a "time orientation" of the path: it is not enough to specify the path as a point set, a direction must also be specified.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to elementarily parametrize a circle without using trigonometric functions? Just out of curiosity: Is it possible to parametrize a full circle or part of one with elementary functions but without using trigonometric functions? If so, what are advantages/disadvantages compared to the standard parametrizations using $\cos(t)$ and $\sin(t)$?
What about $f(x,\pm)=\pm\sqrt{1-x^2}$, where $f(\cdot,\cdot)$ has a discrete parameter ($\pm$) and a continuous parameter defined on $[-1,1]$? You may also use $e^{it}=\cos(t)+i\sin(t)$ to represent a circle in the complex plane; with this, calculating Fourier transforms becomes handy. Just a comment on H.H. Rugh's answer that needs graphical support: his parametrisation is the stereographic projection, which has an application in photography.
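For reference, the stereographic parametrisation alluded to above can be written out rationally; the standard formula (stated here for the unit circle) is $$t \longmapsto \left(\frac{1-t^2}{1+t^2},\ \frac{2t}{1+t^2}\right), \qquad t\in\mathbb{R},$$ which covers the whole circle except the single point $(-1,0)$ and uses only elementary (rational) operations.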
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 2 }
Model theory: What is the signature of `Category theory` I'm studying model theory nowadays, and I understand how one-sorted (classical) signatures and structures work. However I am also interested in groupoids, which can not be described as a structure for a one-sorted signature. Looking up online, I came to the notion of many-sorted signature: nLab, Wikipedia. According to nLab, these can be used to describe, for example, directed (multi-)graphs, which seems easy enough: Take sorts for edges and vertices, and source and range maps from edges to vertices. However I can't see how can we describe a signature for categories in this language. We need all the ingredients for graphs (edges=arrow, vertices=objects), and at least one function symbol for composition, but since composition is only partially defined, I don't see how this can be done.
Just to give you a name to search for: Categories are models for an essentially algebraic theory. Because they require partially defined functions, essentially algebraic theories don't fit into the standard formalism of model theory. But, as described in Eric Wofsey's answer, they can be simulated in many-sorted logic using relation symbols for the graphs of the partially defined functions (or in single-sorted logic if the number of sorts is finite, as it is in the case of categories - the usual presentation has one sort for objects and one sort for arrows). Another option for simulating partially defined functions in standard first-order logic is to add a new constant symbol $*$ and set $f(\overline{a}) = *$ whenever $f(\overline{a})$ is undefined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why is $\int_{-\infty}^\infty |g(x)|\,dx = \int_0^\infty \mu(\{x : |g(x)| \ge t\})\,dt$ true? Why do we have the following equality$$\int_{-\infty}^\infty |g(x)|\,dx = \int_0^\infty \mu(\{x : |g(x)| \ge t\})\,dt,$$where $\mu$ is Lebesgue measure?
We claim that for a nonnegative measurable function $g:\Bbb{R}\to[0,\infty)$, $$ \int_\Bbb{R}g(x)\ d\mu(x)=\int_{[0,\infty)}\mu(\{x\in\Bbb{R}\mid g(x)\geq s\}) \ d\mu(s). $$ This is a good example of applications of the Fubini-Tonelli's Theorem. Let $\nu:=g_*\mu$ be the pushforward of $\mu$, i.e., $\nu=\mu\circ g^{-1}$. Then $$ \int_\Bbb{R}g(x)\ d\mu(x)=\int_{[0,\infty)}x\ d\nu(x). $$ Note that $$ \begin{align*} \int_{[0,\infty)}x\ d\nu(x)&=\int_{[0,\infty)}\left(\int_{[0,\infty)}1_{[0,x]}(y)\ d\mu(y)\right)\ d\nu(x)\\ &=\int_{[0,\infty)} \left(\int_{[0,\infty)}1_{[y,\infty]}(x)\ d\nu(x)\right)\ d\mu(y)\\ &=\int_{[0,\infty)} \nu([y,\infty))\ d\mu(y)\\ &=\int_{[0,\infty)} \mu\circ g^{-1}([y,\infty))\ d\mu(y)\\ &=\int_{[0,\infty)}\mu(\{x\in\Bbb{R}\mid g(x)\geq y\}) \ d\mu(y). \end{align*} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving a graph has no Hamiltonian cycle Show that $ G = (V, E)$ has no Hamiltonian cycle, where the vertices are $ V = \{ a, b, c, d, e, f, g \} $ and the edges are $E = \{ ab, ac, ad, bc, cd, de, dg, df, ef, fg \}$. From my working out, the vertices $ a, b, c, d, e, f$ are odd degrees of 3 and 1. Moreover $g $ being a even vertices of 2. There were three points that were made in my textbook to show that a graph does not contain a Hamilton circuit: * *A graph with a vertex of degree one cannot have a Hamilton circuit. *Moreover, if a vertex in the graph has degree two, then both edges that are incident with this vertex must be part of any Hamilton circuit. *A Hamilton circuit cannot contain a smaller circuit within it. According to the definition graph G does not have a Hamiltonian cycle because of the first definition. However, I am confused about 2 & 3 definitions and I am not sure if this graph involves them or not.
As discussed in the comments, the three points are not definitions. They are just handy facts you can use to show that a graph is not Hamiltonian. If the facts don't apply to a given graph, it doesn't imply that it is Hamiltonian either - the test is just inconclusive. Fortunately enough, we can use facts 2 and 3 to prove that the given graph indeed has no Hamiltonian cycle (note that fact 1 doesn't help us - $G$ has no leaf vertices). To do this: * *Draw the graph with a blue pen, and label the degree of each vertex. *Assume, towards a contradiction, that $G$ has some Hamiltonian cycle $C$. *Apply fact 2 to each of the vertices of degree two. With a red pen, draw the edges that must be a part of $C$. *Use fact 3 to get the desired contradiction.
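For concreteness, here is how the procedure plays out on this particular graph (working out the steps left to the reader): the degrees are $\deg(a)=\deg(c)=\deg(f)=3$, $\deg(b)=\deg(e)=\deg(g)=2$ and $\deg(d)=5$. Fact 2 applied to the degree-two vertices $b$, $e$, $g$ forces the six edges $ab$, $bc$, $de$, $ef$, $dg$, $gf$ into $C$. But the forced edges $de$, $ef$, $fg$, $gd$ already form the $4$-cycle $d$-$e$-$f$-$g$-$d$, which cannot be part of a Hamiltonian cycle on $7$ vertices by fact 3.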
{ "language": "en", "url": "https://math.stackexchange.com/questions/1958887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Show that the equation $|z-z_0|=R$ of a circle centered at $z_0$ of radius $R$ can be written as $|z|^2-2\text{Re}(z\bar{z_0})+|z_0|^2=R^2$. Show that the equation $|z-z_0|=R$ of a circle centered at $z_0$ of radius $R$ can be written as $|z|^2-2\text{Re}(z\bar{z_0})+|z_0|^2=R^2$. I tried squaring both sides, but then what?
We have that $$R^2=|z-z_0|^2=(z-z_0)\cdot \overline{(z-z_0)}=(z-z_0)\cdot (\overline{z}-\overline{z_0})\\ =z\cdot \overline{z}-z\cdot \overline{z_0} -\overline{z}\cdot z_0 +z_0\cdot\overline{z_0}\\ =|z|^2-\left(z\cdot \overline{z_0}+\overline{z\cdot \overline{z_0}}\right)+|z_0|^2\\ =|z|^2-2\mbox{Re}\left(z\cdot \overline{z_0}\right)+|z_0|^2.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $(a+b+c-d)(a+c+d-b)(a+b+d-c)(b+c+d-a)\le(a+b)(a+d)(c+b)(c+d)$ Let $a,b,c,d>0$. Prove that $$(a+b+c-d)(a+c+d-b)(a+b+d-c)(b+c+d-a)\le(a+b)(a+d)(c+b)(c+d)$$ I don't know how to begin to solve this problem
We can assume that $a+b+c+d=2$. Then the inequality becomes $$ (1-d)(1-c)(1-b)(1-a)\le\left(1-\tfrac{c+d}2\right)\left(1-\tfrac{a+d}2\right)\left(1-\tfrac{a+b}2\right)\left(1-\tfrac{c+b}2\right)\tag{1} $$ If any of $a$, $b$, $c$, or $d$ is greater than $1$, then the left side is negative and the inequality is trivial. So we can assume $0\le a,b,c,d\le1$. Substituting $a\mapsto1-a$, $b\mapsto1-b$, $c\mapsto1-c$, and $d\mapsto1-d$ shows that the inequality is equivalent to $$ \begin{align} abcd &\le\left(\frac{c+d}2\right)\left(\frac{a+d}2\right)\left(\frac{a+b}2\right)\left(\frac{c+b}2\right)\\ &=\left(\frac{ac-bd}4\right)^2+\left(\frac1a+\frac1b+\frac1c+\frac1d\right)\frac{abcd}8\tag{2} \end{align} $$ Since $\frac1x$ is convex for $x\gt0$, Jensen's Inequality says that $$ \begin{align} \frac14\left(\frac1a+\frac1b+\frac1c+\frac1d\right)&\ge\frac1{\frac14(a+b+c+d)}\\ &=2\tag{3} \end{align} $$ $(3)$ shows that $(2)$ is true, which in turn shows that $(1)$ is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Given $\frac1{x+y+z}=\frac1x+\frac1y+\frac1z$ what can be said about $(x+y)(y+z)(x+z)$? If $\frac1{x+y+z}=\frac1x+\frac1y+\frac1z$ where $xyz(x+y+z)\ne0$, then the value of $(x+y)(y+z)(z+x)$ is (A) zero (B) positive (C) negative (D) non-negative I substituted $x=-y$ and the equality was established. In the given expression the factor $(x+y)$ would be 0 and the result would be 0. But how should I proceed to show that 0 is the only possible result? I did some algebraic manipulations which do not seem to be of any use. I also believe that we can assume the variables can only be real – this might somehow play a role. Thanks in advance.
$$\frac1{x+y+z}=\frac1x+\frac1y+\frac1z$$ $$\to \frac1{x+y+z}=\frac{yz+xz+xy}{xyz}$$ $$\to xyz=(yz+xz+xy)(x+y+z)$$ $$\to xyz=(x+y)(y+z)(x+z)-xyz$$ $$\to 0=(x+y)(y+z)(x+z)$$ where the second-to-last step uses the identity $(x+y+z)(xy+yz+zx)=(x+y)(y+z)(z+x)+xyz$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Dimensions of submanifolds of SO(n) I would like to calculate the dimension of \begin{align*} \mathcal{M}_k=\{R\in\mathsf{SO}(n,\mathbb{R})\,|\,\sigma(R)=\{-1,1\},\,m(-1)=k\}, \end{align*} where $\sigma$ is the spectrum and $m$ is the algebraic multiplicity for all $k=0,\ldots,\lfloor\frac{n}{2}\rfloor$. Clearly $\dim\mathcal{M}_0=0$ and $\dim\mathcal{M}_{\frac{n}{2}}=0$ if $n$ is even, but I do not know how to proceed from there. I can show certain other properties such that $\mathcal{M}_k$ is path connected and is separated by the trace function from $\mathcal{M}_j$, $j\neq k$. Any help would be appreciated.
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\calm}[1][k]{\mathcal{M}_{#1}}$Hints: Since $\det R = (-1)^{k}$ for all $R$ in $\calm$, the index $k$ is even. Each element of $\calm$ determines a splitting $\Reals^{n} = E_{-1} \oplus E_{1}$ into eigenspaces, of respective dimension $k$ and $n - k$. Conversely, each splitting $\Reals^{n} = E_{-1} \oplus E_{1}$ with $\dim E_{-1}$ even corresponds to a unique $R$ in $\calm$. It may be helpful to read about Grassmannian manifolds if you haven't encountered the concept.
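Filling in the hints: an element $R\in\mathcal{M}_k$ is determined by its $(-1)$-eigenspace $E_{-1}$, since $E_1=E_{-1}^{\perp}$, so $\mathcal{M}_k$ can be identified with the Grassmannian of $k$-planes in $\mathbb{R}^n$, whose dimension is the standard $\dim\mathrm{Gr}_k(\mathbb{R}^n)=k(n-k)$.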
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it true that one should not write the universal quantifier behind a statement? We all know that there are different ways to say that e.g. an element $x$ belongs to each member of a family of sets $(A_j)_{j \in J}$ for some index set $J$. The most common ways I know are the following: * *$\forall j \in J \colon x \in A_j$ *$x \in A_j$ for all $j \in J$ *$x \in A_j \ \forall j \in J$ I think I heared that from some people's point of view the last notation is not considered as "nice". Now my question to the pros outside there is: Is it true that one should not write the universal quantifier behind a statement? Do you recommend using one of the first two notations only? I think there are even some textbooks using the last notation so I am not sure if it is seen as "bad" my most mathematicians. Maybe I am just influenced by some special opinions.
It depends whether you are writing on a blackboard or in a formal article. In a formal article, I would write $x$ belongs to $A_j$ for all $j \in J$ to have a fluent sentence. See Halmos' recommendations on How to write mathematics. On a blackboard, I may simply write $x \in \bigcap_{j\in J} A_j$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof of limits as one function goes to zero and one is bounded Suppose that $$ \lim_{x\to +\infty} f(x) = 0$$ and $$g(x)$$ is a bounded function. Show that $$\lim_{x\to +\infty}f(x)g(x)=0$$ Thanks in advance
Since $g$ is bounded, there exists $M>0$ such that $$\left| g\left( x \right) \right| \le M \quad \text{for all } x.$$ Since $\lim _{ x\to +\infty } f(x)=0$, for every $\epsilon>0$ there exists $N$ such that $$x>N \implies \left| f\left( x \right) \right| \le \frac { \epsilon }{ M }.$$ Hence, for all $x>N$, $$\left| f\left( x \right) g\left( x \right) \right| \le \left| f\left( x \right) \right| \left| g\left( x \right) \right| \le \frac { \epsilon }{ M } \cdot M=\epsilon,$$ which means $$\lim_{x\to +\infty}f(x)g(x)=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Expressing a recurrence relation as a polynomial Let us define $u_0 = 0, u_1 = 1$ and for $n \geq 0$, $u_{n+2} = au_{n+1}+bu_n$, $a$ and $b$ being positive integers. Express $u_n$ as a polynomial in $a$ and $b$. Prove the result: Given that $b$ is prime, prove that $b$ divides $a(u_b-1)$. How do we deal with the case that the characteristic equation has a double root? We can deal with the other case by just solving the recurrence, but how should we do it in the general case?
Let's see if we find a pattern. \begin{align} u_0&=0\\ u_1&=1\\ u_2&=au_1+bu_0=a\\ u_3&=au_2+bu_1=a^2+b\\ u_4&=au_3+bu_2=a(a^2+b)+ab=a^3+2ab\\ u_5&=au_4+bu_3=a(a^3+2ab)+b(a^2+b)=a^4+3a^2b+b^2 \end{align} It seems reasonable to assert that, for $n\ge2$, $$ u_n=a^{n-1}+bf_n(a,b) $$ where $f_n(a,b)$ is a polynomial in $a$ and $b$ with integer coefficients. This can be proved by induction (do it). Finally we have $$ a(u_b-1)=a(a^{b-1}+bf_b(a,b)-1)=(a^b-a)+abf_b(a,b) $$ Can you prove this is divisible by $b$, when $b$ is prime? (Hint: Fermat's little theorem.)
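If it helps to experiment with the pattern, the first several $u_n$ can be generated symbolically (a sketch assuming the sympy library is available):

    from sympy import symbols, expand

    a, b = symbols('a b')
    u = [0, 1]
    for _ in range(5):
        u.append(expand(a*u[-1] + b*u[-2]))
    print(u)  # [0, 1, a, a**2 + b, a**3 + 2*a*b, a**4 + 3*a**2*b + b**2, ...]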
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Queen’s random walk (Queen’s random walk). A queen can move any number of squares horizontally, vertically, or diagonally. Let $X_n$ be the sequence of squares that results if we pick one of the queen’s legal moves at random. $(a)$ Find the stationary distribution. $(b)$ Find the expected number of moves to return to corner $(1,1)$ when we start there. So the answer is $\sum_{x\in S} \deg(x) = 1452$, and for the corner $\deg(x) = 21$. The expected number of moves to return to the corner is $\approx 69.14$. But there are no steps to the answer. I would really appreciate it if you could show me how to get to the answer, thanks!
The stationary distribution is defined as the normalized number of moves from a given position. In symbols, for a given position $x $ it is $\frac {deg (x)}{\sum_{x∈S} deg(x)}$, where $deg $ indicates the number of possible moves from $x $. For a queen on a chessboard, if it is on any of the $28$ squares adjacent to the outer edge (including corners), there are $21$ possible moves ($7$ ranks, $7$ files and $7$ diagonals). If it is on any of the $20$ squares that are in the second concentric frame (i.e., all squares separated from the outer edge by one square), there are $23$ possible moves (because there are two additional diagonal moves). If it is on any of the $12$ squares that are in the third concentric frame (i.e., all squares separated from the outer edge by two squares), there are $25$ possible moves (because there are two further additional diagonal moves). Lastly, if the queen is on one of the 4 central squares, there are $27$ possible moves (again two further additional diagonal moves). So we have for the corner $deg (x)=21$ and for the total chessboard $$\sum_{x∈S} deg(x)= 28 \cdot 21 + 20 \cdot 23 + 12 \cdot 25 + 4 \cdot 27 = 1456$$ which leads to an expected number of moves of $1456 /21\approx 69.3$ to return to the corner. Note that, in my opinion, the values of $1452$ (instead of $1456$) and the resulting $69.14$ (instead of $69.3$), both provided in the solutions that you cite, might be the result of a typo (the value of $1456$ is well established for problems on Queen random walks).
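If you want to verify the degree counts without the concentric-frame bookkeeping, here is a small brute-force sketch (hypothetical helper code, not part of the original solution):

    # Count queen moves from each square of an 8x8 board
    # to check sum(deg) = 1456 and deg(corner) = 21.
    n = 8
    directions = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]

    def deg(r, c):
        count = 0
        for dr, dc in directions:
            rr, cc = r + dr, c + dc
            while 0 <= rr < n and 0 <= cc < n:
                count += 1
                rr += dr
                cc += dc
        return count

    total = sum(deg(r, c) for r in range(n) for c in range(n))
    print(total, deg(0, 0))   # expected: 1456 21
    print(total / deg(0, 0))  # expected return time, about 69.33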
{ "language": "en", "url": "https://math.stackexchange.com/questions/1959948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why does solving this equation a certain way yield only complex roots instead of real ones? For the system of equations $$ 4x^2 + y = 4,\quad x^4 - y = 1 $$ if I attempt to solve by solving each equation for $y$ and setting them equal to each other, I obtain $$ 4 - 4x^2 = x^4 - 1 $$ $$ -4(x^2 - 1) = (x^2 - 1)(x^2 + 1) $$ $$ -4 = x^2 + 1 $$ $$ x = ±\sqrt5 i $$ However, it can be shown by graphing the equations, and by following the method of elimination, that the system has the real solutions $(1,0)$ and $(-1,0)$. Why is it that one method yields only complex roots while another yields the real roots?
You divided by $x^2-1$ in the third step. That expression is not necessarily non-zero; in fact, the real solutions $x=\pm1$ are exactly the ones you lose, since they satisfy $x^2-1=0$. Factoring instead gives $(x^2-1)(x^2+5)=0$, so either $x^2=1$ (the real roots) or $x^2=-5$ (the complex roots you found).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Roots of Unity Filters Suppose we want to evaluate $$\sum_{k\geq 0} \binom{n}{3k}$$ This can be done using roots of unity filters, i.e. showing the sum is equivalent to: $$\frac{(1+1)^n+(1+\omega)^n+(1+\omega^2)^n}{3}$$ where $\omega$ is a primitive 3rd root of unity. Using the fact that $1+\omega+\omega^2=0$, we can show that this is equivalent to $$\frac{2^n+(-\omega^2)^n+(-\omega)^n}{3}$$ Evaluating $(-\omega^2)^n+(-\omega)^n$, whose value depends on $n \bmod 6$, we get that this sum is equal to $\frac{2^n\mp1}{3}$ when $3\nmid n$ (sign according to the parity of $n$) and to $\frac{2^n\pm2}{3}$ when $3\mid n$. Can we use the same trick to evaluate the following sums? $$\sum_{k\geq 0}\binom{n}{3k+1}, \sum_{k\geq 0}\binom{n}{3k+2}$$ Also, can this idea be generalized? I would appreciate any thoughts or ideas.
$$\sum \binom{n}{3k+1} = \frac{1^2 (1+1)^n + \omega^2(1+\omega)^n + \omega(1+\omega^2)^n}{3}$$ Basically, apply the same approach to $f(x)=x^2(1+x)^n$. Similarly, taking $g(x)=x(1+x)^n$ we get: $$\sum\binom{n}{3k+2}=\frac{1(1+1)^n + \omega(1+\omega)^n + \omega^2(1+\omega^2)^n}{3}$$ You should be able to get a nice formula for these; as with the first sum, the value depends on the residue of $n$ modulo $6$.
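These identities are easy to spot-check numerically; here is a quick sketch using Python's cmath (the value of $n$ is arbitrary):

    import cmath
    from math import comb

    n = 10
    w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

    direct = sum(comb(n, 3*k + 1) for k in range(n // 3 + 1))
    filtered = ((1 + 1)**n + w**2 * (1 + w)**n + w * (1 + w**2)**n) / 3
    print(direct, filtered.real)  # should agree up to rounding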
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculating $\sum_{n=1}^∞ \frac{1}{(2 n-1)^2+(2 n+1)^2}$ using fourier series of $\sin x$ I have to calculate $\frac{1}{1^2+3^2}+\frac{1}{3^2+5^2}+\frac{1}{5^2+7^2}+...$ using half range Fourier series $f(x)=\sin x$ which is: $f(x)=\frac{2}{\pi}-\frac{2}{\pi}\sum_{n=2}^\infty{\frac{1+\cos n\pi}{n^2-1}\cos nx}$ I have no idea how to proceed. I'll appreciate if someone guide me.
A different approach. Since $\frac{1}{(2n-1)^2+(2n+1)^2}=\frac{1}{8n^2+2}$ we have: $$\begin{eqnarray*} \sum_{n\geq 1}\frac{1}{(2n-1)^2+(2n+1)^2}&=&\frac{1}{8}\sum_{n\geq 1}\frac{1}{n^2+\frac{1}{4}}\\&=&\frac{1}{8}\int_{0}^{+\infty}\sum_{n\geq 1}\frac{\sin(nx)}{n}e^{-x/2}\,dx\\&=&\frac{1}{8(1-e^{-\pi})}\int_{0}^{2\pi}\frac{\pi-x}{2}e^{-x/2}\,dx\\&=&\color{red}{\frac{1}{8}\left(-2+\pi\coth\frac{\pi}{2}\right)}.\end{eqnarray*} $$ We exploited $\int_{0}^{+\infty}\sin(ax)e^{-bx}\,dx = \frac{a}{a^2+b^2}$ and the fact that $\sum_{n\geq 1}\frac{\sin(nx)}{n}$ is the Fourier series of a sawtooth wave.
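As a quick numerical sanity check on the closed form (a sketch; the sum converges slowly, so many terms are taken):

    from math import pi, tanh

    partial = sum(1.0 / ((2*n - 1)**2 + (2*n + 1)**2) for n in range(1, 10**6))
    closed = (pi / tanh(pi / 2) - 2) / 8  # (1/8) * (pi*coth(pi/2) - 2)
    print(partial, closed)                # both approximately 0.17819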
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Galois group of a splitting field of a polynomial over $\mathbb{F}_7$ Question: Find the Galois group of the splitting field $(x^2-3)(x^3+x+1)$ over $\mathbb{F}_7$. I know the splitting field is $K:=\mathbb{F}_7(\sqrt{3},\alpha_1)$, where $\alpha_1$ is one of the roots of the polynomial $x^3+x+1$. I know that the possible automorphisms of K fixing F must have the mappings $\sqrt{3} \mapsto \pm\sqrt{3}$ and $\alpha_1 \mapsto \{\alpha_1,\alpha_2,\alpha_3\}$ where $\alpha_1,\alpha_2,\alpha_3$ are the distinct roots of $x^3+x+1$. But when I wanted to write out all the automorphsism explicitly, I have some trouble. Any help will be appreciated
Note that the Galois group is some subgroup of the direct product of the Galois groups of each factor considered individually. Since the splitting field of $x^2 - 3$ over $\Bbb{F}_7$ has degree two, the splitting field of $x^3 +x+1$ has degree three, and the degrees are coprime the splitting field of their product has degree 6. The direct product of the Galois groups of the factors, $\Bbb{Z}_2 \times \Bbb{Z}_3$, has order 6, and the Galois group of $K$ is a 6 element subgroup of this so it must be the whole group. If you want it explicitly, a generator is the permutation $\sigma$ sending $\sqrt{3}$ to its negative and sending $\alpha _1 \to \alpha _2 \to \alpha_3 \to \alpha _1$. This is necessarily an automorphism, because the Galois group acts on the $\alpha _i$ as the alternating group $A_3$.
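To make the group explicit: $K$ is the finite field $\mathbb{F}_{7^6}$, and the Galois group of the extension $\mathbb{F}_{7^6}/\mathbb{F}_7$ is cyclic of order $6$, generated by the Frobenius automorphism $\varphi(x)=x^7$. So $\operatorname{Gal}(K/\mathbb{F}_7)\cong\mathbb{Z}_6\cong\mathbb{Z}_2\times\mathbb{Z}_3$; indeed $\varphi(\sqrt3)=3^3\sqrt3=-\sqrt3$ in $\mathbb{F}_7$, and $\varphi$ cycles the $\alpha_i$, matching the description of $\sigma$ above (up to relabelling the roots).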
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The number of solutions to a system of linear equations Can anyone suggest a formal proof that a system of linear equations can have no solution, one solution or infinitely many solutions?
Consider a linear system in $\mathrm x \in \mathbb R^n$ $$\mathrm A \mathrm x = \mathrm b$$ where $\mathrm A \in \mathbb R^{m \times n}$ and $\mathrm b \in \mathbb R^m$ are given. Suppose that the system is feasible and that $\mathrm x^{(1)}$ and $\mathrm x^{(2)}$ are two solutions. Hence, $\mathrm A \mathrm x^{(1)} = \mathrm b$ and $\mathrm A \mathrm x^{(2)} = \mathrm b$. Subtracting these two, we obtain $$\mathrm A \mathrm x^{(1)} - \mathrm A \mathrm x^{(2)} = \mathrm b - \mathrm b = 0_m$$ or, $$\mathrm A (\mathrm x^{(1)} - \mathrm x^{(2)}) = 0_m$$ Hence, $\mathrm x^{(1)} - \mathrm x^{(2)}$ is in the null space of $\mathrm A$. If * *the null space is trivial (i.e., it contains only $0_n$), then $\mathrm x^{(1)} - \mathrm x^{(2)} = 0_n$, or, $\mathrm x^{(1)} = \mathrm x^{(2)}$. *the null space is nontrivial (i.e., it is not $0$-dimensional), then any affine combination of $\mathrm x^{(1)}$ and $\mathrm x^{(2)}$ is also a solution to the linear system, for the following holds for all $\gamma \in \mathbb R$ $$\mathrm A (\gamma \mathrm x^{(1)} + (1-\gamma) \mathrm x^{(2)}) = \gamma \mathrm A \mathrm x^{(1)} + (1-\gamma) \mathrm A \mathrm x^{(2)} = \gamma \mathrm b + (1-\gamma) \mathrm b = \mathrm b$$ Thus, if a linear system is feasible, it either has one solution, or it has infinitely many.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
About the parity of the product $(a_1-1)(a_2-2)\cdots(a_n-n)$ An exercise from Chapter 20 of "How to Think Like a Mathematician" by Kevin Houston: Let $n$ be an odd positive integer. Let $(a_1,a_2,\dots,a_n)$ be an arbitrary arrangement (i.e., permutation) of $(1,2,\dots,n)$. Prove that the product $(a_1-1)(a_2-2)\cdots(a_n-n)$ is even. For example, for $n = 3$, we can have $(a_1, a_2, \dots, a_n) = (3, 1, 2)$, so this yields that $(3-1)(1-2)(2-3)$ is even. Would the following be considered a full solution? Each even $a_k$ has to be paired with an odd $k$ (giving a bracket of the form even minus odd) in order for that bracket to be odd. However, after pairing them we have one odd number left over, as there is one more odd than even in the set $(1,2,\dots,n)$ when $n$ is odd. This leftover odd number must be paired with an odd number in some bracket. As odd minus odd is even (the proof of this is trivial), one of the brackets must be even, so the product is even. Would THIS solution be considered full?
It suffices to show that at least one of the terms $a_k-k$ is even. For that to fail, each $a_k$ would have to have parity opposite to that of $k$. Since $n$ is odd, in $\{1,2,\dots,n\}$ there are $\frac{n+1}2$ odd numbers and $\frac{n-1}2$ even numbers. But in order for every $a_k$ to have parity opposite to that of $k$, the sequence $(a_k)$ would need $\frac{n-1}2$ odd entries and $\frac{n+1}2$ even entries, which is impossible since $(a_k)$ is a permutation of $\{1,2,\dots,n\}$. Hence some factor $a_k-k$ is even, and so is the product.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 5 }
Is an expanding map on a compact metric space continuous? I got inspired by this question Existence of convergent subsequence to think about the following problem: Suppose you have a compact metric space $(X,d)$ and an expanding map $T:X\rightarrow X$, i.e. $d(Tx,Ty)\geq d(x,y)$ for every $x,y\in X$. Is the map $T$ then continuous? Without assuming continuity we already know that $T^n(X)$ must be dense in $X$ for any $n\geq 1$, since given $z\in X$ the orbit of $z$ must accumulate upon $z$. But showing continuity has escaped me so far. If there are counter-examples using the Axiom of Choice, that would also interest me.
Question: Let $(X,d)$ be a compact metric space and let $f:X\to X$ be a map such that $d(f(x),f(y))\geq d(x,y)$ for all $x,y\in X$. Show that $f$ is an isometry onto $X$. Solution: First we will see that $f$ is an isometry, then that $f$ is onto $X$. 1. Given a (small) $r>0$, let $K_n(r)$ be the set of $n$-tuples $(x_1,\ldots,x_n)$ in $X^n$ such that $d(x_i,x_j) \ge r$ for every $i \ne j$. By compactness of $X$, there is$^*$ (see footnote) a maximal $n$ such that $K_n(r) \ne \emptyset$. We now fix $n$ to take this value. Since $K_n(r)$ is a closed subset of $X^n$, it is compact, so the continuous function $$g(x_1,\ldots, x_n):= \sum_{i >j} d(x_i,x_j)$$ attains its maximum over $K_n(r)$ at some $n$-tuple $(x_1^*,\ldots ,x_n^*) \in K_n(r)$. Write $y_i=f(x_i^*)$ for each $i$, and observe that $(y_1,\ldots,y_n) \in K_n(r)$ and $g(y_1,\ldots,y_n) =g(x_1^*,\ldots ,x_n^*)$ (each distance can only grow under $f$, while the maximality of $g$ over $K_n(r)$ forbids the sum from growing); thus we must have $d(y_i,y_j)=d(x_i^*,x_j^*)$ for all $i,j$. Given $z,w \in X$, maximality of $n$ ensures that there is $i\le n$ such that $d(y_i,f(z)) \le r$, so $d(x_i^*,z) \le r$. Similarly, there is $j \le n$ such that $d(y_j,f(w)) \le r$, so $d(x_j^*,w) \le r$. We conclude that $$d(f(z),f(w)) \le d(f(z),y_i)+d(y_i,y_j)+d(y_j,f(w))\le d(x_i^*,x_j^*) +2r $$ $$ \le d(x_i^*,z)+d(z,w)+d(w,x_j^*)+2r \le d(z,w) +4r\,.$$ Since $r>0$ can be arbitrary, $$\forall z,w \in X, \quad d(f(z),f(w)) \le d(z,w) \,.$$ 2. If $f$ were not surjective, then $f(X)$ is a closed subset of $X$ (by part 1, $f$ is an isometry, hence continuous, so $f(X)$ is compact) and there must exist some $u \in X$ and $r>0$ such that $d(u,f(X))>r$. Using the notation of part 1, we consider the same $n$, and the $n$-tuples $(x_1^*,\ldots ,x_n^*) \in K_n(r)$ and $(y_1,\ldots,y_n) \in K_n(r)$. Then $(u,y_1,\ldots,y_n) \in K_{n+1}(r)$, contradicting the maximality of $n$. $(*)$ Footnote: By compactness of $X$, it can be covered by finitely many open balls of radius $r/2$, call them $\{B(v_i,r/2)\}_{i=1}^M$. If $(x_1,\ldots, x_n) \in K_n(r)$, then each ball $B(v_i,r/2)$ can contain at most one $x_j$, so $n \le M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1960773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }