To prove $\sup_{x\in X} |s(x)| = \|s\|_\infty$ where $s = \sum_{k=1}^n c_k \chi_{A_k}$ and $\mu(A_k) > 0$ for all $1\le k\le n$ Prove that $$\sup_{x\in X} |s(x)| = \|s\|_\infty$$ where $s = \sum_{k=1}^n c_k \chi_{A_k}$ and $\mu(A_k) > 0$ for all $1\le k\le n$. Assume the sets $A_k$ to be disjoint. In other words, $\sup_{x\in X}|s(x)|$ is equal to the essential supremum of $|s|$, for simple functions satisfying the aforementioned condition. I found this result without proof here. I have tried to show equality using basic definitions. Let $s = \sum_{k=1}^n c_k \chi_{A_k}$, where $\mu(A_k) > 0$ for every $1\le k\le n$. Rudin defines the essential supremum $\|s\|_\infty$ as follows: $$\|s\|_\infty = \inf\{\alpha\in\mathbb R: \mu(|s|^{-1}((\alpha,\infty])) = 0\}$$ How do I show that this equals $\sup_{x\in X} |s(x)|$? Moreover, the $c_k$'s could be in $\mathbb C$, so a nice expression for $|s|$ seems out of reach. Thank you! Follow-up: Will this result in any way help us prove that for $f \in C_c(\mathbb R^n)$, $$\sup_{x\in\mathbb R^n} |f(x)| = \|f\|_\infty$$
@copper.hat helped construct this answer. Any errors are due to me (though there probably are none). Suppose $\alpha\in\mathbb R$ is such that $\mu(|s|^{-1}((\alpha,\infty])) = 0$. Then we must have $|s(x)| \le \alpha$ $\mu$-a.e., i.e. $|s(x)| > \alpha$ possibly on a set of measure zero. Suppose this set is non-empty, and call it $A$. Any $x\in A$ lies in some $A_k$ with $|s(x)| = |c_k| > \alpha$; then $A_k \subseteq |s|^{-1}((\alpha,\infty])$, so $\mu(A_k) \le \mu(|s|^{-1}((\alpha,\infty])) = 0$, which contradicts $\mu(A_k) > 0$. So $A = \varnothing$. Hence $|s(x)| \le \alpha$ everywhere. Taking the supremum over $x$, we have $\sup |s| \le \alpha$, and now taking the infimum over all such $\alpha$, we get $\sup|s| \le \|s\|_\infty$. Conversely, suppose $\sup_{x\in X} |s(x)| < \|s\|_\infty$. Choose some $\beta$ for which $\sup_{x\in X} |s(x)| < \beta < \|s\|_\infty$. Then $\mu(|s|^{-1}((\beta,\infty])) = 0$ since $|s(x)| \le \sup_{x\in X} |s(x)| < \beta$ for all $x$. By the definition of $\|\cdot\|_\infty$, we have $\beta \ge \|s\|_\infty$, which is a contradiction. Therefore, $$\sup_{x\in X} |s(x)| = \|s\|_\infty$$
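As a sanity check on the two directions above, here is a small numeric illustration (my own construction, not part of the proof): on a finite measure space, the essential supremum of a simple function is just the largest $|c_k|$ over the sets of positive mass, so it equals the plain supremum exactly when every $\mu(A_k)>0$.

```python
from math import inf

# Toy finite measure space (my own illustration): X = {0,...,n-1}, the k-th
# point carries mass masses[k], and s takes the value values[k] there, so
# A_k = {k} and mu(A_k) = masses[k].
def ess_sup(values, masses):
    """inf of alpha with mu(|s| > alpha) = 0: the largest |c_k| with mu(A_k) > 0."""
    supported = [abs(v) for v, m in zip(values, masses) if m > 0]
    return max(supported) if supported else -inf

# all mu(A_k) > 0: plain sup and essential sup agree (both 5.0, since |3+4i| = 5)
assert ess_sup([3+4j, 1, 2], [0.4, 0.3, 0.3]) == max(abs(v) for v in [3+4j, 1, 2])

# if some mu(A_k) = 0 the two can differ: the value 7 is invisible almost everywhere
assert ess_sup([1, 2, 7], [0.5, 0.5, 0.0]) == 2
```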
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $H$ is a nonempty finite subset of $G$ and $H$ is closed under multiplication, then $H$ is a subgroup of $G$ The line "If $H$ is a nonempty finite subset of $G$ and $H$ is closed under multiplication, then $H$ is a subgroup of $G$" appears as Lemma 2.4.2 in Topics in Algebra by Herstein. Is this a correct statement? I don't understand the multiplication part - should it be "closed under the operation of the group" instead of multiplication?
The word "multiplication" is often synonymous with the phrase "group operation". For example, I might say: We multiply $g$ and $h$ together to get $gh$. Writing this general sentence using the term "group operation" is messy. I also might say: The product of $g$ and $h$ is $gh$. But here the word "product" is multiplication-specific, as for example $a+b$ is the "sum". This usage is consistent with the use of the word "multiplication" in rings and things, as a ring is an abelian group with a multiplicative structure, and every group embeds into the multiplicative structure of some ring. Note that "finiteness" is required in your result. If $H$ is an infinite set then there are counter-examples. In particular, set $G=(\mathbb{Q}\setminus\{0\}, \times)$ and $H=\{x\in\mathbb{Q}\mid |x|\geq1\}$; then $H$ is closed under the group operation but is not a subgroup. As another example, take $G=(\mathbb{Z}, +)$ and $H=\mathbb{N}$ (note the use of additive notation, and that this doesn't matter).
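Incidentally, the reason finiteness suffices can be seen concretely (this sketch is mine, not from Herstein): in a finite $H$ closed under the operation, the powers of any element must eventually repeat, which forces the identity and each inverse into $H$. A pigeonhole demonstration in the group $(\mathbb{Z}/13\mathbb{Z})^\times$:

```python
# In a finite H closed under the group operation, the powers h, h^2, h^3, ...
# stay inside H and must repeat, which yields the identity (and h's inverse).
# Illustrated with multiplication mod 13.
def powers(h, mod):
    seen, x = [], h
    while x not in seen:
        seen.append(x)
        x = (x * h) % mod
    return seen

H = {1, 3, 9}            # closed under multiplication mod 13: 3*9 = 27 = 1
for h in H:
    ps = set(powers(h, 13))
    assert ps <= H       # closure keeps every power inside H
    assert 1 in ps       # ...so the identity appears among the powers
# the counterexamples in the text are infinite, so powers need never repeat
```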
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometrical proof that $b\cos\beta+c\cos\gamma=a\cos(\beta-\gamma)$ Prove that for triangle $ABC$ with side lengths $a,b,c$ and corresponding angles $\alpha,\beta,\gamma$ \begin{align}b\cos\beta+c\cos\gamma&=a\cos(\beta-\gamma)\tag{1}\label{1} \end{align} It is straightforward to prove \eqref{1} by expanding $\cos(\beta-\gamma)$ and expressing $\cos\beta,\sin\beta,\cos\gamma,\sin\gamma$ in terms of $a,b,c$ using the cosine rule. Both sides of equation \eqref{1} are equal to \eqref{2}: \begin{align} \frac{a^2(b^2+c^2)-(b^2-c^2)^2}{2abc} \tag{2}\label{2} . \end{align} The question is: is there any geometrical proof for \eqref{1}? One possible geometric construction is shown below.
The answer was triggered by this question. Consider triangle $ABC$ with side lengths $a,b,c$ and corresponding angles $\alpha,\beta,\gamma$, circumscribed circle with the center $O$, line $DE$ tangent to the circle at $A$, $BD\perp DE$, $CE\perp DE$, $BF\perp CE$: Then we have \begin{align} |DE|&=|BF| ,\\ \triangle ACE,\triangle ABD:\quad |DE|&=|AE|+|AD|=b\cos\beta+c\cos\gamma ,\\ \triangle BCF:\quad |BF|&=|BC|\cos\angle FBC =|BC|\cos(90^\circ-\angle BCF) \\ &= a\cos(90^\circ-(90^\circ-\beta+\gamma)) = a\cos(\beta-\gamma) . \end{align}
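A quick numerical cross-check of the identity (my own sketch, with the sides scaled by the law of sines so that $a=\sin\alpha$, $b=\sin\beta$, $c=\sin\gamma$):

```python
from math import sin, cos, pi
import random

# Check b*cos(beta) + c*cos(gamma) = a*cos(beta - gamma) for random triangles,
# taking sides proportional to the sines of the opposite angles (law of sines).
random.seed(0)
for _ in range(1000):
    beta = random.uniform(0.1, 1.4)
    gamma = random.uniform(0.1, 1.4)
    alpha = pi - beta - gamma                       # angles of a valid triangle
    a, b, c = sin(alpha), sin(beta), sin(gamma)     # sides up to a common factor 2R
    assert abs(b*cos(beta) + c*cos(gamma) - a*cos(beta - gamma)) < 1e-12
```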
{ "language": "en", "url": "https://math.stackexchange.com/questions/4154213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Positive semi-definite conditional covariance matrix Let $X$ be an $n\times m$ random matrix, where each entry is a real square integrable random variable on the probability space $(\Omega,\mathcal A,P)$. Consider the following matrix: $$E[XX'\mid\mathcal F],$$ where $\mathcal F$ is a sub-$\sigma$ algebra of $\mathcal A$. If $a\in\mathbb{R}^{n}$, then from the linearity of conditional expectations we have $$a'E[XX'\mid\mathcal F]a=E[(a'X)^2\mid\mathcal F]\geq 0 \quad P\text{-almost surely,}$$ but the null set might depend on $a$. Can I find a version of $E[XX'\mid\mathcal F]$ which is positive semi-definite almost surely?
The null set might depend on $a$: for each fixed $a$ we have $$ a'E[XX'|\mathcal F]a=E[(a'X)^2|\mathcal F]\geq 0 $$ on a set $L_a$ with $P(L_a^c)=0$. Now consider the rational vectors: for each $a\in \mathbf{Q}^n$ you have such an $L_a$, and since $\mathbf{Q}^n$ is countable, $$ P(\cup_{a \in \mathbf{Q}^n} L_a^c) \leq \sum_{a \in \mathbf{Q}^n} P(L_a^c)=0. $$ Therefore $P(\cap_{a \in \mathbf{Q}^n} L_a) =1$. Define $L:=\cap_{a \in \mathbf{Q}^n} L_a$. For each fixed $\omega\in L$, $$ a'E[XX'|\mathcal F]a\geq 0 $$ for all $a \in \mathbf{Q}^n$. Now take any vector $x\in \mathbb{R}^n$ and let $(a_m)$ be a sequence of rational vectors converging to $x$. By the previous argument, for each fixed $\omega\in L$, $$ a_m'E[XX'|\mathcal F]a_m\geq 0. $$ Taking the limit $m \to \infty$ (the quadratic form is continuous in $a$), $$ x'E[XX'|\mathcal F]x\geq 0. $$ Hence any version of $E[XX'\mid\mathcal F]$ is positive semi-definite on the set $L$ of full probability.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4154363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
This inequality holds on Hilbert Spaces? I am trying to solve a problem, and if I can show that the following inequality holds I am done. Consider $H$ a Hilbert space over $\mathbb{R}$. Let $\alpha_1, ..., \alpha_n$ be real numbers such that $\alpha_i\geq 0$ for all $i=1,...,n$ and $\sum_{i=1}^n\alpha_i=1$. Let $x_1, ...,x_n \in H$ and define $$x:=\sum_{i=1}^n\alpha_ix_i$$ My question is: does the following inequality hold for all $i,j$? $$\lVert x-x_i\rVert ^2+\lVert x-x_j\rVert ^2 \leq \lVert x_i-x_j\rVert ^2$$ I can easily see that it holds when $\{x_1, ..., x_n\}$ is orthogonal, but I'm not sure about the general case. I would appreciate any hint. Thanks!
Geometric Answer: The vectors $x_i$ span an at most $n$-dimensional subspace of $H$, so we can ignore the rest of the (possibly infinite dimensional) Hilbert space and just work in $n$-dimensional Euclidean space. If the inequality in the question were false for all $i\neq j$, that would mean, by the law of cosines, that all the angles $\angle x_ixx_j$ are acute. But then, if we fix any particular index $i$, we would have all the $x_j$'s (including $x_i$) strictly on the same side of the hyperplane through $x$ perpendicular to $x-x_i$. That's absurd, since $x$, which is on that hyperplane, is a weighted average (with weights $\alpha_j$) of the $x_j$'s. Algebraic Answer: To simplify notation, let $y_i=x_i-x$. So $\sum_i\alpha_iy_i=0$. The left side of your inequality is $\Vert y_i\Vert^2+\Vert y_j\Vert^2$, and the right side is, when you expand the norm in terms of the inner product, $\Vert y_i\Vert^2+\Vert y_j\Vert^2-2(y_i\cdot y_j)$. So if your inequality fails, you'd have $(y_i\cdot y_j)>0$. Fix some $i$, sum over $j$, and remember that $\sum_j\alpha_jy_j=0$, to get the contradiction $0>0$. Note that both proofs show that, for each $i$, there is at least one $j$ such that your inequality is satisfied. Note also that, in the geometric proof, the simplification of working in finite-dimensional spaces isn't really needed provided you know the law of cosines and the connection between half-spaces and weighted averages in infinite-dimensional Hilbert spaces.
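Both conclusions can be illustrated numerically (a sketch of mine, in $\mathbb{R}^2$ with three points and equal weights): the inequality fails for a pair of nearby points, yet for every $i$ some $j$ works, exactly as the two proofs predict.

```python
# Finite dimensions suffice, as the answer notes; here R^2 with equal weights.
def sq(u, v):
    """Squared Euclidean distance."""
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.1)]   # two of the points are close together
w = [1/3, 1/3, 1/3]
x = tuple(sum(wi * p[k] for wi, p in zip(w, pts)) for k in range(2))

def holds(i, j):
    return sq(x, pts[i]) + sq(x, pts[j]) <= sq(pts[i], pts[j])

assert not holds(1, 2)    # the proposed inequality fails for the close pair
# ...but for each i there is at least one j for which it holds:
assert all(any(holds(i, j) for j in range(3) if j != i) for i in range(3))
```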
{ "language": "en", "url": "https://math.stackexchange.com/questions/4154562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Approximation of $C_b(E,\mathbb{R})$ by $C_0(E,\mathbb{R})$ for a locally compact Polish space. Let $E$ be a locally compact Polish space and $\mu,\nu$ be probability measures on $(E,\mathcal{B}_E)$. How can one prove that $$ (\forall f \in C_0(E,\mathbb{R}): \int_Efd\mu=\int_E f d\nu)\Rightarrow (\forall f \in C_b(E,\mathbb{R}): \int_Efd\mu=\int_Efd\nu) ? $$ I want to show that $\mu$ and $\nu$ coincide when $\int_E fd\mu=\int_E fd\nu$ holds for all $f \in C_0(E,\mathbb{R})$, but I have only obtained the assertion for $f \in C_b(E,\mathbb{R})$, and I can't think of a proof for the implication above.
On a Polish space any probability measure is tight. Given $\epsilon >0$ we can find a compact set $K$ such that $\mu (K) >1-\epsilon$ and $\nu (K) >1-\epsilon$. By local compactness there exists an open set $V$ such that $K \subseteq V$ and $\overline V$ is compact. By Urysohn's Lemma there exists a continuous function $g: E \to [0,1]$ such that $g(x)=1$ for all $x \in K$ and $g(x)=0$ for all $x \in V^{c}$. Now use the fact that $fg \in C_0(E,\mathbb R)$, so $\int fg \,d\mu=\int fg \,d\nu$; split the integrals into integrals over $K$ and $K^{c}$. Can you finish?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4154801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
square root in 2-adics $\newcommand\Q{\mathbb Q} \newcommand\Z{\mathbb Z}$I am a bit confused about square roots in the 2-adics. I am pretty sure I am mixing up some steps in an algorithm. To be precise, I am trying to solve an exercise in Koblitz' book p-adic numbers, p-adic analysis and zeta functions (this is not homework, I just want to do it). In an exercise of section I.7 the reader is asked to compute the square root of $-7$ in $\Q_2$ up to 5 digits, and the hint is to use a generalization of Hensel's Lemma: Let $f(x)$ be a polynomial with coefficients in $\Z_p$. If $a_0 \in \Z_p$ satisfies

*$f(a_0) \equiv 0 \bmod p^{2m+1}$,
*$f'(a_0) \equiv 0 \bmod p^m$,
*$f'(a_0) \not\equiv 0 \bmod p^{m+1}$,

then there is a unique $a\in\Z_p$ such that $f(a) = 0$ and $a\equiv a_0 \bmod p^{m+1}$.

Obviously the procedure to solve this comes from the algorithm in the proof of this generalized Hensel's lemma. For my particular problem $f(x)=x^2+7$ and $p=2$; I choose $a_0$ to be either $1$ or $0$ (so in fact an integer) and then I iterate: for a chosen $b_1 =1$ or $b_1=0$, I set $a_1 = a_0 + b_1 p^{m+1}$, for which $f(a_1)\equiv 0 \bmod p^{2m+2}$, and so on, i.e. I need to find $b_i$ giving $a_i$ such that $f(a_i) \equiv 0 \bmod p^{2m+1+i}$. I might be missing something, but for me the square root of $-7$ would be of the form $a_0 + b_12+ b_22^2 + \dots$ But I have a problem getting the correct answer. One of the square roots (using Pari to check) is $(a_0,b_1,b_2,\dots)=(1,1,0,1,0,\dots)$. Is there a cleaner way to think of this algorithm? I get easily confused and make mistakes in the iteration steps. Maybe I have mixed something up in the procedure. Could someone explain to me why, for $a_0=1$, we get $(1,1,0,1,0,\dots)$? From this I feel that manually taking roots, and even doing simple arithmetic, is not practical in the p-adics.
EDIT: The algorithm I have in mind is actually also written here in this online calculator: http://www.numbertheory.org/php/2adic.html I still fail to get the correct $b_4$ using the algorithm. However, the online calculator gives me the correct $b_4$. If I follow the procedure, my $b_4$ is $(a_3^2+7)/64 \bmod 2 = 7\bmod 2$ so it is $1$, while the correct answer is $0$.
OK I was finally able to get the correct answer. The site http://www.numbertheory.org/php/2adic.html provides code for 2-adic square roots; I looked at it and figured out my embarrassing mistake. I will write out the steps. We want to obtain $(11010\dots)_2$ in the 2-adics. We have $f(x) = x^2+7$, $m=1$ and, most importantly, $a_0=3$ (not $1$). This gives one (unique) solution; if we start with $a_0=1$ we get the negative of that solution. We want $a_0=3$ because we (or I) want to obtain $(11\dots)_2$, so it must start with $a_0=3$ rather than $a_0=1$, since in the other case we would obtain $(10\dots)_2$. So what I am trying to say is: my idea was correct but I started with the wrong $a_0$. We can use Hensel's lemma because $f'(a_0)=6$ and $2\mid\mid 6$. To get the digits after $(11)_2$ we use this algorithm: at the $i$-th step we choose $b_i$ such that $a_i = a_{i-1}+b_i2^{m+i}$ satisfies $f(a_i)\equiv0 \bmod 2^{2m+i+1}$.

*For $i=0$ (no step needed), $a_0=3$ and $f(a_0)=3^2+7=16$.
*For $i=1$ (the third digit) we choose $b_1$ such that $a_1 = 3 + b_12^{1+1}$ satisfies $$f(a_1) \equiv 0 \bmod 2^{2+1+1}.$$ So $b_1=0$ and $a_1=a_0=3$.
*For $i=2$ (the fourth digit) we choose $b_2$ such that $a_2 = 3 + b_22^{1+2}$ satisfies $$f(a_2) \equiv 0 \bmod 2^{2+2+1}.$$ So $b_2=1$ and $a_2 = 3+8=11$, with $f(a_2)=128$.
*For $i=3$ (the fifth digit) we choose $b_3$ such that $a_3=11 + b_32^4$ satisfies $$f(a_3) \equiv 0 \bmod 2^6.$$ So $b_3=0$, and I got the correct five digits $11010$.

It is much easier to program this than to iterate by hand, because it is so easy to make the stupid mistake I made (at some point I was confused about modulo which powers of $2$ I should check the $a_i$ and the $f(a_i)$). Also, I do not need to "cache" the roots modulo $2^k$ beyond $a_0=3$ or $a_0=1$.
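The whole lifting procedure fits in a few lines of code. This is my own sketch of the bit-by-bit algorithm above, not the script from the linked site:

```python
# Bit-by-bit Hensel lifting for x^2 = c in Z_2, where c = 1 (mod 8).
# Invariant before step k: x^2 = c (mod 2^k).  If the congruence fails
# mod 2^(k+1), adding 2^(k-1) repairs it, since for odd x and k >= 3
# (x + 2^(k-1))^2 = x^2 + 2^k*x + 2^(2k-2) = x^2 + 2^k (mod 2^(k+1)).
def sqrt_2adic(c, nbits, x0):
    assert c % 8 == 1 and (x0 * x0 - c) % 8 == 0   # x0 is a root mod 8
    x = x0
    for k in range(3, nbits):
        if (x * x - c) % (1 << (k + 1)):
            x += 1 << (k - 1)
    return x % (1 << nbits)

# Starting from a_0 = 3 (digits 11...) recovers the root ...0001011 = 11,
# i.e. low-order 2-adic digits 1,1,0,1,0 as computed by hand above.
r = sqrt_2adic(-7, 7, 3)
print(r, bin(r))                  # 11 0b1011
assert (r * r + 7) % (1 << 7) == 0
```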
{ "language": "en", "url": "https://math.stackexchange.com/questions/4154965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
I can't seem to prove propositions involving floor/ceiling function and the like.. What I know is: the floor function of $x$, denoted $[x]$, is the greatest integer less than or equal to $x$; the ceiling of $x$ is the smallest integer greater than or equal to $x$. We define $\{x\} = x- [x]$, the fractional part of $x$. Up to here all is okay and fine, but whenever I come across a question asking for a proof involving these types of functions, I can't even seem to get started. I try to use extreme cases to disprove the statement if possible, but when it comes to proving, I'm just....lost. Like where do I even begin?? An example would be: Proposition: For any real numbers $x, y,$ and $z$, prove that $[x + y + z] = [x + y] + [z + \{x + y\}]$. I don't even know what to do with this except try and check it for some cases, which I already did.
Note that $$\tag1a=\lfloor a\rfloor+\{a\}$$ by definition. Also, if $n$ is an integer, then $$ \tag2\lfloor n+a\rfloor = n+\lfloor a\rfloor.$$ In order to show $$ \lfloor x+y+z\rfloor = \lfloor x+y\rfloor +\lfloor z+\{x+y\}\rfloor,$$ use $(2)$ with substitution $n\leftarrow\lfloor x+y\rfloor$, $a\leftarrow z+\{x+y\}$ and use $(1)$ with substitution $a\leftarrow x+y$.
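The identity can also be checked mechanically over exact rationals (a quick script of mine, using `Fraction` to avoid floating-point edge cases):

```python
from fractions import Fraction
from math import floor
import random

# Randomized check of floor(x+y+z) == floor(x+y) + floor(z + {x+y}),
# with exact rational arithmetic so boundary cases are handled correctly.
def frac_part(a):
    return a - floor(a)

random.seed(1)
for _ in range(1000):
    x, y, z = (Fraction(random.randint(-50, 50), random.randint(1, 9))
               for _ in range(3))
    assert floor(x + y + z) == floor(x + y) + floor(z + frac_part(x + y))
```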
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What is the smallest possible cardinality of a non-finitely based magma? I was told that every magma $(S,*)$ whose base set $S$ has 2 elements has a finite basis of identities. The natural question is, what is the smallest possible cardinality of a non-finitely based magma? I would be very interested to see a 3-element set and a binary operation on it which is not finitely based.
V.L. Murskii in 1965 found an example of a 3 element magma that is not finitely based. Here is its multiplication table: $$\begin{array}{c|ccc} & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1\\ 2 & 0 & 2 & 2\\ \end{array}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is it possible to turn this geometric demonstration of the area of a circle into a rigorous proof? In this New York Times article, Steven Strogatz offers the following argument for why the area of a circle is $\pi r^2$. Suppose you divide the circle into an even number of pizza slices of equal arc length, and wedge them together in such a way that half of the slices have an arc at the bottom, and half of the slices have an arc at the top: Then, the base of the shape created has length $\pi r$, and its height is $r$. As the number of slices tends to infinity, the limiting case is that of a rectangle: Hence, the area of the circle is $\pi r^2$. Although this argument is very geometrically appealing, it also seems fairly difficult to make rigorous. I suppose the most challenging part is showing that the base of the shape really does become arbitrarily flat, and its height becomes arbitrarily vertical, if that makes sense. How might we convert this intuitive argument into a rigorous proof?
Whether this is rigorous will depend on your definitions of area and length, and whether this is a proof will depend on what we are allowed to presuppose, but I will try to show how to control the error. We only need to show that for small $x$, one of those segments with arc length $rx$ (making $x$ the angle) will be close enough in area to a rectangle with area $\frac{rx\cdot r}2$. Now from your usual diagram used to introduce trigonometric functions, we see that the area is between $\frac{r\cos x\cdot r\sin x}2$ and $\frac{r\cdot r\tan x}2$. Since the circumference of the circle is $2\pi r$, we will need $\frac{2\pi}x$ of these segments. (You can choose $x$ to make this an integer if you like, i.e. set $x=\frac{2\pi}n$ and let $n\to\infty$.) This sandwiches the area of the circle between $$\lim_{x\to0} \frac{2\pi}x \cdot \frac{r\cos x\cdot r\sin x}2 = \lim_{x\to0} \pi r^2\cdot \cos x\cdot\frac{\sin x}x = \pi r^2$$ and $$\lim_{x\to0} \frac{2\pi}x \cdot \frac{r\cdot r\tan x}2 = \lim_{x\to0} \pi r^2\cdot \frac1{\cos x}\cdot\frac{\sin x}x = \pi r^2.$$ So the area must be $\pi r^2$. Of course, this assumes that we already know that $\lim_{x\to0}\frac{\sin x}x=1$.
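The sandwich is easy to evaluate numerically (a small check of mine):

```python
from math import sin, cos, tan, pi

# With x = 2*pi/n, the two bounds from the answer,
#   lower = n * (r*cos x)(r*sin x)/2   and   upper = n * r*(r*tan x)/2,
# squeeze pi*r^2 from both sides and converge to it as n grows.
r = 2.0
for n in (10, 100, 10000, 1000000):
    x = 2 * pi / n
    lower = n * (r * cos(x)) * (r * sin(x)) / 2
    upper = n * r * (r * tan(x)) / 2
    assert lower <= pi * r**2 <= upper

x = 2 * pi / 10**6
assert abs(10**6 * (r * cos(x)) * (r * sin(x)) / 2 - pi * r**2) < 1e-3
```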
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 1 }
Finding unknown values given cumulative distribution for set of data. I am confused about the correct answers to this problem. The data are the ages of the first 44 presidents upon inauguration: $(57,61,57,57,58,57,61,54,68,51,49,64,50,48,65,52,56,46,54,49,51,47,55,55,54,42,51,56,55,51,54,51,60,61,43,55,56,61,52,69,64,46,54,47)$

1. Would the answer be $.2$ or $.22$? Where should I read the value off on the graph? I know the cumulative distribution function is right continuous, and if $F$ is the cumulative distribution function then $F_{44}(50)=\frac{1}{44}\{\text{number of observations less than or equal to }50\}$. I'm guessing it should be around $.22$. Any suggestions?
2. Again I am confused because of the right continuity of the graph. Would the answer be $.2$ or $.22$? I am guessing it should be exactly $.2$.
3. I suppose the answer would be about $51$ based on the graph. Is this correct?
For a discrete random variable (which is the case here) the vertical rise at a point on the x-axis is the probability of observing that point, and a flat line means that the probability of those values is $0$. Using this information, the proportion that was 50 years old or younger should be read from the top of the 50-year-old vertical line (so approximately 22%, as you say). The fraction older than 60 should be the distance from the top of the cdf (i.e. $1$) down to the top of the vertical line at 60, so approximately 20%. Finally, I believe your answer to the third question is correct: any age 51 or younger would be in the lower 1/3 of presidential ages.
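The exact proportions can be read off the raw data in the question (a quick script, not part of the original answer):

```python
# Ages of the 44 presidents at inauguration, copied from the question.
ages = [57,61,57,57,58,57,61,54,68,51,49,64,50,48,65,52,56,46,54,49,51,47,
        55,55,54,42,51,56,55,51,54,51,60,61,43,55,56,61,52,69,64,46,54,47]
n = len(ages)
assert n == 44

p_le_50 = sum(a <= 50 for a in ages) / n    # F(50): read at the TOP of the jump
p_gt_60 = sum(a > 60 for a in ages) / n     # 1 - F(60)
print(round(p_le_50, 4), round(p_gt_60, 4)) # 0.2273 0.2045

q = sorted(ages)[n // 3]                    # age with about 1/3 of the data at or below it
print(q)                                    # 51
```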
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $y = x^2$, then what is $dy$? (What's the definition of differential?) Suppose $y = f(x) = x^2$. We know that $dy = 2x\, dx$. In high school calculus, I overlooked a lot of details and mechanically calculated $dy$. Now, I am unsure how the definition of the differential is developed in elementary (calculus or analysis) language. Of course, if we let $F(x,y) = y - f(x) \equiv 0$ and take the exterior derivative, then $dF = dy - d(x^2) = dy - 2x dx = 0$, so we have the desired result. However, the exterior derivative is a purely algebraic concept and I feel that a lot of the elementary calculus perspective is missing. Here are a few explanations that I remember from high school calculus:

1. Naive approach: $\frac{dy}{dx} = 2x$, so "multiplying" by $dx$ yields $dy = 2x dx$. I am trying to stay away from this approach as much as possible.
2. Better alternative: if $\Delta x, \Delta y$ are infinitesimal changes, then $\Delta x$ and $\Delta y$ are related by $\Delta y \approx f' \Delta x$. Taking a Riemann sum, $\int_{[a,b]} f' \approx \sum\limits_{\|\Delta x\| \rightarrow 0} f'(x) \Delta x = \sum\limits_{\Delta y} \Delta y$, where $\Delta y = f'(x) \Delta x$. This naturally suggests the relation $dy = f'(x) dx$.

I am satisfied with 2) but it nevertheless fails to address the question: "what is" the differential? This being said, how should I understand what the differential is? Thank you in advance.
The differential $\mathrm df$ of a function $f$ is simply the linear function that realises the best linear approximation of the function at a point, in a precise sense given by asymptotic analysis: $$f(x+h,y+k)=f(x,y)+\mathrm df_{(x,y)}(h,k)+o\bigl(\|(h,k)\|\bigr).$$ This definition can be generalised to normed vector spaces, whether finite dimensional or not. Considering the example of your title, $\mathrm d_xy$ is the linear map of $h$: $\;h\longmapsto 2x h$.
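The defining $o(\|h\|)$ property can be observed numerically (my own sketch, for $f(x)=x^2$ at $x=3$): the error of the linear approximation with slope $f'(3)=6$ vanishes faster than $h$, while any other slope leaves an error of order $h$.

```python
# f(x+h) - f(x) - df_x(h) must be o(h): error/|h| -> 0 as h -> 0.
f = lambda x: x * x
x = 3.0
df = lambda h: 2 * x * h          # candidate differential at x = 3 (slope 6)

for h in (1e-1, 1e-3, 1e-5):
    err = f(x + h) - f(x) - df(h) # here the error is exactly h^2
    assert abs(err) / abs(h) <= abs(h) + 1e-9

# any slope other than f'(3) = 6 fails the o(h) test: the ratio stays ~0.1
wrong = lambda h: 5.9 * h
assert abs(f(x + 1e-5) - f(x) - wrong(1e-5)) / 1e-5 > 0.09
```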
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Harvard Stat 110 Strategic Practice 2, Fall 2011 - Inclusion Exclusion - Problem 1.1 Harvard Stat 110 Strategic Practice 2, Fall 2011 - Inclusion Exclusion - Problem 1.1 For a group of $7$ people, find the probability that all $4$ seasons (winter, spring, summer, fall) occur at least once each among their birthdays, assuming that all seasons are equally likely. I tried to solve it using another method (which came into my mind at that moment) but I must be doing a mistake. Maybe someone could help me finding the error and also explaining WHY I make the error, so I can avoid it in the future. I tried to apply the naive definition. So, I have $7$ people and $4$ seasons to choose. 1st person can have $4$ picks, the 2nd $4$ picks, etc etc. All are independent, so I have a total of $4^7$ possibilities. The number of favourable outcomes are when from the $7$ people, $4$ have each Spring, Summer, Fall, Winter and the other $3$ might get any choice. From $7$ people, I chose $4$ to fill Spring, Summer, Fall, Winter and the other $3$ can have whatever choice. So, result should be $$\binom{7}{4} \cdot \frac{4^3}{4^7}$$ which yields $0.546$ which is clearly different from the practice answer of $0.513$. Could somebody, please, point out what I am doing wrong? Thank you!
There are multiple mistakes here. First, once you choose the four people, you neglected to designate how the four different seasons are assigned to them, since in the denominator you are counting all possible sequences of seven season assignments. It should be $7 \cdot 6 \cdot 5 \cdot 4$ rather than $\binom{7}{4}$. However, this leads to a nonsensical probability larger than $1$, which I explain below. You are counting some outcomes multiple times. For example, if the seven people are ordered and their picks are Fall, Spring, Summer, Summer, Winter, Summer, Summer in that order, then this is counted four times in your computation, because there are four different ways it could have arisen from choosing the set of "four different seasons" people first, before letting the rest be Summer. For instance, it could have been the 1st, 2nd, 3rd, and 5th people that were chosen first, or it could have been the 1st, 2nd, 4th, and 5th people. The difficulty is that there isn't a uniform way to correct for this overcounting (like just dividing by 4 or something). For instance, Fall, Spring, Summer, Summer, Winter, Winter, Winter would be counted six times, owing to the different ways it could have arisen from a selection of "four different seasons" people. If you want to handle this case by case, you have to go through all possible frequencies of exactly how many occurrences of each season there are, which is a bit difficult to compute. Admittedly, the standard approach (inclusion-exclusion) doesn't have a simple expression either, but it is slightly easier to write down. The complement event is "at least one season does not appear." Letting $A_1, A_2, A_3, A_4$ be the events "season $i$ does not appear" respectively, we want $1-P(A_1 \cup A_2 \cup A_3 \cup A_4)$.
Using inclusion-exclusion, this is $$1-\left(4 (3/4)^7 - \binom{4}{2} (2/4)^7 + \binom{4}{3} (1/4)^7\right) \approx 0.5126953125 $$ which matches BruceET's simulation.
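The value can be confirmed both exactly and by simulation (a quick check of mine):

```python
from fractions import Fraction
import random

# Exact inclusion-exclusion value, then a Monte Carlo sanity check.
F = Fraction
exact = 1 - (4 * F(3, 4)**7 - 6 * F(2, 4)**7 + 4 * F(1, 4)**7)
assert exact == F(8400, 16384)
print(float(exact))        # 0.5126953125

random.seed(0)
trials = 200000
hits = sum(len({random.randrange(4) for _ in range(7)}) == 4
           for _ in range(trials))
assert abs(hits / trials - float(exact)) < 0.01
```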
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Find explicit form of following: $a_n=3a_{n-1}+3^{n-1}$ I wanted to find the explicit form of a recurrence relation, but I got stuck on the nonhomogeneous part. Find the explicit form of the following: $a_n=3a_{n-1}+3^{n-1}$ where $a_0=1, a_1 =4, a_2=15$. My attempt: For the homogeneous part, it is obvious that the solution is $c_13^n$. For the non-homogeneous part, I try a particular solution $C3^n$: substituting gives $C3^n=3C3^{n-1}+3^{n-1} = C3^n + 3^{n-1}$, so there is no solution for $C$. However, the answer is $n3^{n-1} + 3^n$. What am I missing?
Let $A(z)=\sum_{n \ge 0} a_n z^n$ be the ordinary generating function. The recurrence relation and initial condition imply that \begin{align} A(z) &= a_0 + \sum_{n \ge 1} a_n z^n \\ &= 1 + \sum_{n \ge 1} (3a_{n-1} + 3^{n-1}) z^n \\ &= 1 + 3z \sum_{n \ge 1} a_{n-1} z^{n-1} + z \sum_{n \ge 1} 3^{n-1} z^{n-1} \\ &= 1 + 3z \sum_{n \ge 0} a_n z^n + z \sum_{n \ge 0} (3z)^n \\ &= 1 + 3z A(z) + \frac{z}{1-3z}. \end{align} Solving for $A(z)$ yields \begin{align} A(z) &= \frac{1+z/(1-3z)}{1-3z} \\ &= \frac{1-2z}{(1-3z)^2} \\ &= \frac{2/3}{1-3z} + \frac{1/3}{(1-3z)^2} \\ &= \frac{2}{3}\sum_{n\ge 0}(3z)^n + \frac{1}{3}\sum_{n\ge 0}\binom{n+1}{1}(3z)^n \\ &= \sum_{n\ge 0}\left(\frac{2}{3}+\frac{1}{3}(n+1)\right)3^n z^n \\ &= \sum_{n\ge 0}(n+3)3^{n-1} z^n. \end{align} Hence $a_n=(n+3)3^{n-1}$ for $n \ge 0$.
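A short script (mine) confirming the closed form against the recurrence:

```python
# Check a_n = (n+3)*3^(n-1) against a_n = 3*a_(n-1) + 3^(n-1), a_0 = 1,
# using exact integer arithmetic.
def closed(n):
    return (n + 3) * 3**n // 3     # = (n+3)*3^(n-1), valid at n = 0 too

a = 1
assert closed(0) == 1
for n in range(1, 30):
    a = 3 * a + 3**(n - 1)
    assert a == closed(n)

# note (n+3)*3^(n-1) = n*3^(n-1) + 3^n, matching the form quoted in the question
assert all(closed(n) == n * 3**(n - 1) + 3**n for n in range(1, 30))
```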
{ "language": "en", "url": "https://math.stackexchange.com/questions/4155940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove there is only one main class latin square of order 4. I'm learning about Latin squares and orthogonal Latin squares. My question is: how can I prove there's only one main class Latin square of order $4$? I did this one $$\begin{array}{cccc}0&1&2&3\\1&2&3&0\\2&3&0&1\\3&0&1&2\end{array}$$ which is the normalized one, and there shouldn't be any other Latin square orthogonal to this one, but I don't get how to prove it formally. Thanks for the help.
I think you've written the wrong thing in the title. There are two main classes of Latin square of order $4$: $$\begin{array}{|cccc|} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \hline \end{array} \qquad \begin{array}{|cccc|} \hline 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 1 \\ 3 & 4 & 1 & 2 \\ 4 & 1 & 2 & 3 \\ \hline \end{array}$$ You can prove they're not in the same main class by counting $2 \times 2$ subsquares (intercalates): the one on the left has $12$ intercalates, and the one on the right has $4$ intercalates. As we'll see below, there is:

*one main class of sets of 2 orthogonal Latin squares of order $4$ (2-MOLS(4)), and
*one main class of sets of 3 orthogonal Latin squares of order $4$ (3-MOLS(4)).

The Latin square on the left above belongs to a set of 3 mutually orthogonal Latin squares (or as in PM 2Ring's comment). We can prove the one on the right has no orthogonal mate (it's the answer to this question), so it doesn't belong to any sets of orthogonal Latin squares. Since the Latin squares are small, we can find the orthogonal mates of the Latin square on the left manually. Since we can permute the symbols in an orthogonal mate to obtain another orthogonal mate, we can assume the first row is $(1,2,3,4)$. Afterwards, the cell $(2,1)$ can be filled in $2$ ways without violating the Latin property nor the orthogonal property. Once this decision is made, the remainder of the Latin square is determined (from the Latin property or the orthogonal property)---you can fill it in like a sudoku.
$$\begin{array}{|cccc|} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \hline \end{array} \qquad \begin{array}{|cccc|} \hline \bf 1 & \bf 2 & \bf 3 & \bf 4 \\ \bf \color{blue} 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ 2 & 1 & 4 & 3 \\ \hline \end{array} \qquad \begin{array}{|cccc|} \hline \bf 1 & \bf 2 & \bf 3 & \bf 4 \\ \bf \color{blue} 4 & 3 & 2 & 1 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ \hline \end{array} $$ So there is only one main class of sets of 3 orthogonal Latin squares. To check there is only one main class of sets of 2 orthogonal Latin squares, we need to check the two we incidentally found above are in the same main class: $$\left\{\begin{array}{|cccc|} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \hline \end{array}, \qquad \begin{array}{|cccc|} \hline \bf 1 & \bf 2 & \bf 3 & \bf 4 \\ \bf \color{blue} 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ 2 & 1 & 4 & 3 \\ \hline \end{array}\right\} $$ $$\left\{\begin{array}{|cccc|} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \hline \end{array}, \qquad \begin{array}{|cccc|} \hline \bf 1 & \bf 2 & \bf 3 & \bf 4 \\ \bf \color{blue} 4 & 3 & 2 & 1 \\ 2 & 1 & 4 & 3 \\ 3 & 4 & 1 & 2 \\ \hline \end{array}\right\} $$ To check this, we take the first pair and swap the rows 3 and 4, then swap the columns 3 and 4, then swap the symbols 3 and 4, which gives the second pair above.
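All of the claims about these particular squares are easy to verify by machine (a sketch of mine):

```python
from itertools import combinations

# The three mutually orthogonal squares found above.
L1 = [[1,2,3,4],[2,1,4,3],[3,4,1,2],[4,3,2,1]]
L2 = [[1,2,3,4],[3,4,1,2],[4,3,2,1],[2,1,4,3]]
L3 = [[1,2,3,4],[4,3,2,1],[2,1,4,3],[3,4,1,2]]

def is_latin(L):
    n = len(L)
    return all(len(set(r)) == n for r in L) and \
           all(len({L[i][j] for i in range(n)}) == n for j in range(n))

def orthogonal(A, B):
    # superimposing must produce all 16 ordered pairs exactly once
    pairs = {(A[i][j], B[i][j]) for i in range(4) for j in range(4)}
    return len(pairs) == 16

assert all(is_latin(L) for L in (L1, L2, L3))
assert all(orthogonal(A, B) for A, B in combinations((L1, L2, L3), 2))

def intercalates(L):
    # count 2x2 subsquares on exactly two symbols
    cnt = 0
    for r1, r2 in combinations(range(4), 2):
        for c1, c2 in combinations(range(4), 2):
            if L[r1][c1] == L[r2][c2] and L[r1][c2] == L[r2][c1]:
                cnt += 1
    return cnt

assert intercalates(L1) == 12   # the main class with orthogonal mates
```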
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f : \mathbb{R} \to \mathbb{R}$ be measurable and let $Z = \{x : f'(x)=0\}$. Prove that $\lambda(f(Z)) = 0$. The following is an exercise from Bruckner's Real Analysis: Let $f : \mathbb{R} \to \mathbb{R}$ be measurable and let $Z = \{x : f'(x)=0\}$. Prove that $\lambda(f(Z)) = 0$. For the case of $f$ being nondecreasing / nonincreasing, we can define $g=f^{-1}$ and then $g'$, and then use the following theorem from the book: Let $f$ be nondecreasing / nonincreasing / of bounded variation on $[a,b]$. Then $f$ has a finite derivative almost everywhere. Is it possible to "reduce" the evaluation of an arbitrary measurable $f$ to a nondecreasing one, or how else can the claim be proved for an arbitrary measurable $f$?
Fix $\epsilon>0$ and let $Z_n=\{x\in Z \mid \forall y,\ |x-y|<\frac{1}{n}\implies |f(x)-f(y)|<\epsilon|x-y| \}$. Since $f'(x)=0$ for every $x\in Z$, each point of $Z$ lies in $Z_n$ for all sufficiently large $n$; thus the sets $Z_n$ increase and $Z=\bigcup_n Z_n$. Fix $n$ and cover $Z_n\cap[a,b]$ by countably many intervals $I_1,I_2,\dots$, each of length less than $\frac{1}{n}$ and each meeting $Z_n$, chosen so that $\sum_k \lambda(I_k)<2(b-a)$. If $x\in Z_n\cap I_k$, then every $y\in I_k$ satisfies $|f(y)-f(x)|<\epsilon|x-y|\le\epsilon\lambda(I_k)$, so $f(Z_n\cap I_k)$ is contained in an interval of length at most $2\epsilon\lambda(I_k)$. As $f(Z_n\cap[a,b])\subset\bigcup_k f(Z_n\cap I_k)$, this gives $\lambda^*(f(Z_n\cap[a,b]))\le\sum_k 2\epsilon\lambda(I_k)<4\epsilon(b-a)$. Since the sets $Z_n\cap[a,b]$ increase to $Z\cap[a,b]$, letting $n\to\infty$ yields $\lambda^*(f(Z\cap[a,b]))\le 4\epsilon(b-a)$, and taking $\epsilon \to 0$ gives $\lambda(f(Z\cap[a,b]))=0$. As this holds for all $a,b$ we have that $\lambda(f(Z))=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Eigenvalues of $H=aX+bY+cZ+dI$ Suppose I have the hamiltonian $H=aX+bY+cZ+dI$, where $a,b,c,d$ are some real constants, and $X,Y,Z,I$ are Pauli matrices. I'm trying to figure out the range of possible energy eigenvalues. If I limit the range of $a,b,c,d$ to be $[-1,1]$, then I think the range of eigenvalues should be $[-4,4]$ (linear combination), since the eigenvalues of each Pauli matrix are $-1$ and $1$. is my assumption correct? However, I tried to calculate the eigenvalues using python with such conditions, and it looks like the range of eigenvalues goes from $-\sqrt2-1$ to $\sqrt2+1$. I don't know if my assumption is wrong or the calculation is not working. Thanks!
You would only get eigenvalues of $\pm (a+b+c+d)$ if the eigenvectors of the summands all lined up, so that each summand contributed its full $\pm a$, $\pm b$, etc. But the eigenvectors do not line up like that, so the $[-4,4]$ bound is looser than it has to be: the actual range of eigenvalues is a proper subset of it. $$ H = \begin{pmatrix} d + c & a - bi\\ a + bi & d - c\\ \end{pmatrix}\\ H - \lambda = \begin{pmatrix} d + c - \lambda & a - bi\\ a + bi & d - c - \lambda\\ \end{pmatrix}\\ \det (H - \lambda) = (d-\lambda + c)(d-\lambda - c) - (a-bi)(a+bi)\\ = (d-\lambda)^2 - c^2 - a^2 - b^2 = 0\\ (d-\lambda)^2 = a^2 + b^2 + c^2\\ (\lambda_{\pm}-d) = \pm \sqrt{a^2 + b^2 + c^2}\\ \lambda_{\pm} = d \pm \sqrt{a^2 + b^2 + c^2}\\ $$ $\sqrt{a^2+b^2+c^2}$ ranges from $0$ to $\sqrt{3}$ as $a,b,c$ vary in $[-1,1]$. So the lowest $\lambda_-$ can be is $-1 - \sqrt{3}$, attained when $d=-1$ and the square root equals $\sqrt{3}$. The highest $\lambda_+$ can be is $1 + \sqrt{3}$, attained when $d=1$ and the square root equals $\sqrt{3}$.
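As a sanity check (not part of the original answer), the closed form $\lambda_\pm = d \pm \sqrt{a^2+b^2+c^2}$ can be compared against numerically computed eigenvalues:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c, d = rng.uniform(-1, 1, size=4)
    H = a * X + b * Y + c * Z + d * I          # Hermitian, so eigvalsh applies
    r = np.sqrt(a**2 + b**2 + c**2)
    assert np.allclose(np.sort(np.linalg.eigvalsh(H)), [d - r, d + r])
```

In particular, random sampling over $a,b,c,d \in [-1,1]$ finds extremes approaching $\pm(1+\sqrt 3)$, never $\pm 4$.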
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Injective functions between sets Given the statement: The number of injective functions $f:\{1,2,3,4\} \to \{1,2,3,4,5\}$ such that $\{1,2,3\} \subseteq f[\{1,2,3,4\}]$ equals the number of injective functions $f:\{1,2,3,4\} \to \{1,2,3,4,5\}$ such that $\{1,2\} \not\subseteq f[\{1,2,3,4\}]$ My question is: is the statement true or false? MY APPROACH: First, I calculated the number of injective functions such that $\{1,2,3\} \subseteq f[\{1,2,3,4\}]$ and I got $3! \cdot 5=30$ Second, I calculated the number of all functions available from $A$ to $B$: $f:\{1,2,3,4\} \to \{1,2,3,4,5\}$, means: $5^4=625$ and then reduced the number of injective functions that do exist $\{1,2\} \subseteq f[\{1,2,3,4\}]: 2 \cdot 5= 50$ and then I got: $625-50 \neq 30$ But I think I have a mistake in my method any help?
As Eparoh pointed out in the comments, neither of your calculations is correct. The number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ such that $\{1, 2, 3\} \subseteq f[\{1, 2, 3, 4\}]$: The statement means that the set $\{1, 2, 3\}$ is in the range of the injective function $f$. There are four ways to select which element of the domain maps to $1$, three ways to select which of the remaining elements in the domain maps to $2$, and two ways to select which of the remaining elements in the domain maps to $3$. Since the function is injective, the remaining element in the domain must map to $4$ or $5$. Hence, there are $$4 \cdot 3 \cdot 2 \cdot 2 = 48$$ injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ such that $\{1, 2, 3\} \subseteq f[\{1, 2, 3, 4\}]$. The number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ such that $\{1, 2\} \not\subseteq f[\{1, 2, 3, 4\}]$: The statement means that the elements $1, 2$ cannot both appear in the range of the injective function. Since the function is injective, each of the four elements in the domain must map to a different element in the codomain, so the range of $f$ must include four of the five elements in the codomain. Therefore, we can find the number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ such that $\{1, 2\} \not\subseteq f[\{1, 2, 3, 4\}]$ by finding the number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ that exclude $1$ or exclude $2$. The number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ that exclude $1$ from the range: There are four ways to select which element of the domain maps to $2$, three ways to select which element of the domain maps to $3$, two ways to select which element of the domain maps to $4$, and one way to select which element of the domain maps to $5$. Hence, there are $$4! 
= 4 \cdot 3 \cdot 2 \cdot 1 = 24$$ injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ which exclude $1$ from the range. The number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ that exclude $2$ from the range: By symmetry, there are $$4! = 4 \cdot 3 \cdot 2 \cdot 1 = 24$$ injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ which exclude $2$ from the range. Since an injective function $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ must have four elements in its range, it is not possible for both $1$ and $2$ to be excluded from the range. Hence, the number of injective functions $f: \{1, 2, 3, 4\} \to \{1, 2, 3, 4, 5\}$ such that $\{1, 2\} \not\subseteq f[\{1, 2, 3, 4\}]$ is $$2 \cdot 4! = 48$$
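Both counts can be confirmed by brute force, since an injective $f:\{1,2,3,4\}\to\{1,2,3,4,5\}$ is just a $4$-permutation of the codomain:

```python
from itertools import permutations

injections = list(permutations([1, 2, 3, 4, 5], 4))  # one tuple per injective f

count_a = sum(1 for f in injections if {1, 2, 3} <= set(f))
count_b = sum(1 for f in injections if not {1, 2} <= set(f))
assert count_a == count_b == 48   # so the statement is true
```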
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $K=\mathbb{Q}(\theta)$ where $\theta$ is a root of $X^3-2X-2$. Integral basis for $\mathcal{O}_{K}$? Let $K=\mathbb{Q}(\theta)$ where $\theta$ is a root of $X^{3}-2X-2$. Compute an integral basis for $\mathcal{O}_{K}$. I have computed the discriminant as $\Delta_{K}=-2^2.19$. I want to apply the algorithm in the book of Frazer Jarvis Algebraic Number Theory. We know that if $\beta=\frac{u_0+u_1\theta+u_2\theta^2}{2}$ is an integral element then its trace and norm is an integer. $Trace(\beta)=\frac{3u_0+4u_2}{2}$. Since $0 \leq u_i<2$ for any i, $u_0=0.$ Then the only possible $\beta$ are : $\frac{\theta}{2},\frac{\theta^2}{2},\frac{\theta+\theta^2}{2}$. But when I computed the norm of all possiblities, Norms are 1/4, 1/2, 1/4 respectively, then they are not integer. Is there any computational mistakes or what is wrong?
The reason why none of the norms is an integer is that the elements listed are not algebraic integers. Indeed, for example $\frac{\theta + \theta^2}2$ has $4X^3 - 8X^2 - 4X - 1$ as its minimal polynomial, which is not monic over $\mathbb Z$. In fact, it turns out that the ring of integers $\mathcal O_K$ is equal to $\mathbb Z[\theta]$, which can be deduced by eliminating all the possibilities that come from applying the algorithm. So an integral basis of it is $\{1,\theta,\theta^2\}$. There is another way to compute the discriminant $d_K$ of the extension $K/\mathbb Q$, which is much more theory dependent but doesn't involve long computations. In short, the only unknown prime power of $d_K$ is the power of $2$. To find it, you make use of the fact that $X^3 - 2X - 2$ is an Eisenstein polynomial with respect to $2$, compute the $2$-adic different of $K/\mathbb Q$ and deduce that $4$ is the highest power of $2$ dividing $d_K$. So as $d_K = \operatorname{disc} f$ we deduce that $\mathcal O_K = \mathbb Z[\theta]$. However, I suspect that this method might be too advanced. But, if you want me to, I can include the computation steps, as well as give you references for the theoretical background.
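If SymPy is available, the non-integrality of e.g. $\frac{\theta+\theta^2}{2}$ can be checked mechanically; this is only a verification sketch, not part of the algorithm:

```python
from sympy import symbols, minimal_polynomial, CRootOf, Poly

x = symbols('x')
theta = CRootOf(x**3 - 2*x - 2, 0)      # the (unique) real root of X^3 - 2X - 2
beta = (theta + theta**2) / 2
p = minimal_polynomial(beta, x)          # expect 4*x**3 - 8*x**2 - 4*x - 1
# a nonzero algebraic number is an algebraic integer iff its minimal
# polynomial is monic over Z; here the leading coefficient is 4
assert abs(Poly(p, x).LC()) != 1
```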
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
On solving $z^n = (1 + z)^n$ with root of unity trick. I know that this question has already been sorta asked in this post, but as my solving strategy differs a bit, I didn't find the said post helpful. This is the c part of the problem 64 from chapter 1.3. of Complex Analysis with Applications by Asmar. So I am trying to solve the real and imaginary part of $z$ where $z^n = (1 + z)^n$, which are claimed to be $\frac{-1}{2}$ and $\frac{1}{2}\cot(k\pi/n)$, respectively. One way to do this is to note that $z^n \neq 0$, divide by $(1 + z)^n$ and to conclude that $\frac{z}{z + 1}$ is a root of unity, so that $z = (z + 1)\omega_k$ for some root of unity $\omega_k$. So far I have tried to solve the components of $z$ by both writing $z$ as $z = \frac{\omega_k}{1 - \omega_k}$ and just inspecting the real and imaginary components of $z = (1 + z)\omega_k$. Let $\mathrm{Arg(\omega_k)} = \theta_k = \frac{2\pi k}{n}$. The former way gives me the seemingly unhelpful solution of $z = \frac{1}{\sqrt{2(1 - \cos(\theta_k)}}(\cos(\theta_k) - 1 + i\sin(\theta_k))$, and the latter approach yields eventually $\mathrm{Re}(z) = \frac{\sin(\theta_k) - \cos(\theta_k) + \sin(\theta_k)\cos(\theta_k)}{\cos(\theta_k) - 1 - \sin(\theta_k)\cos(\theta_k)}$, and I don't know how to manipulate it any further. Is there any way of salvaging either of there approaches so that in the end we get the desired $\mathrm{Re}(z) = \frac{-1}{2}$?
A root of unity has modulus $1$, so $z=(z+1)\omega_k$ gives $|z|=|z+1|$. Thus $z$ is equidistant from $0$ and $-1$, which forces $\Re(z)=-\frac12$.
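Both claimed component formulas can also be checked numerically from the question's own expression $z = \omega_k/(1-\omega_k)$, taking $k=1,\dots,n-1$ (for $\omega_k=1$ there is no solution):

```python
import numpy as np

n = 7
k = np.arange(1, n)
w = np.exp(2j * np.pi * k / n)           # the nontrivial n-th roots of unity
z = w / (1 - w)                          # the solutions of z^n = (1+z)^n

assert np.allclose(z.real, -0.5)
assert np.allclose(z.imag, 0.5 / np.tan(np.pi * k / n))
assert np.allclose(z**n, (1 + z)**n)
```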
{ "language": "en", "url": "https://math.stackexchange.com/questions/4156945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$τ: Y \to Χ$ is a continuous map and $A: C(X)\to C(Y)$ is defined by $(Af)(y) = f(τ(y))$. How $||A||=1$? The following is from Conway's Functional Analysis : If $X$ and $Y$ are compact spaces and $τ: Y \to Χ$ is a continuous map, define $A: C(X) \to C(Y)$ by $(Af)(y) = f(τ(y))$. Then $A \in \mathcal{B} (C(X), C(Y))$ and $||A||=1$. Q.1 - $f$ is continuous because $f \in C(X)$ and $τ$ is continuous by hypothesis, then so is their composition with respect to y meaning that $Af$ is continuous so I couldn't reduce it to the continuity of $A$ and use equivalence of boundedness and continuity; so how $A \in \mathcal{B} (C(X), C(Y))$? Q.2 - Why $||A||=1$? An attempt solution in here is unclear for both inequalities to reach $||A||=1$.
Showing linearity is not hard, simply check the axioms. Showing boundedness is easy: since $\tau(Y) \subseteq X$ is a compact set, $$||Af-Ag|| = \max_{y \in Y} |f( \tau (y)) - g( \tau (y))| = \\ = \max_{x \in \tau (Y)} |f(x)-g(x)| \le \max_{x \in X} |f(x)-g(x)| = ||f-g||$$ Hence $||A|| \le 1$. For the reverse inequality, take the constant function $f \equiv 1$ on $X$: then $(Af)(y) = f(\tau(y)) = 1$ for every $y$, so $\|Af\| = \|f\| = 1$, which gives $\|A\| \ge 1$. Therefore $\|A\| = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4157263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Lagrange multiplicator/unit sphere let $\vec{v} = (2,1,-2)^{T} \in \mathbb{R}^3$ and $S^2$ the unit sphere hence $\mathbb{S} = \{ (x,y,z)^{T} \in \mathbb{R}^3 | x^2+y^2+z^2=1 \} $ Define the shortest distance from $\vec{v}$ to $\mathbb{S}$ The function I set up was $$F(x,y,z, \lambda) = \sqrt{(2-x)^2+(1-y)^2+(-2-z)^2}+\lambda(x^2+y^2+z^2-1)$$ but I fear that this is not the way how I can calculate the distance, could someone help me? Thanks in advance
To finish the shorter way, just subtract $1$ (the radius of the unit sphere) from the distance from $v$ to the origin, which was $3$; so the shortest distance is $2$. You may want to draw a picture of the sphere and your point $v$, which is outside it, to see what is going on. The shortest distance from a point $p$ outside the unit sphere to the surface of the unit sphere is always one less than the distance from the origin to $p.$ Note that Diego's suggestion would make the Lagrange multiplier method work fairly well. If you get it, I suggest putting it in your question.
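A quick numerical check of the geometric argument (the nearest point of the sphere is $v/\lVert v\rVert$):

```python
import numpy as np

v = np.array([2.0, 1.0, -2.0])
r = np.linalg.norm(v)                 # distance from v to the origin
assert r == 3.0

closest = v / r                       # the sphere point on the ray through v
assert np.isclose(np.linalg.norm(v - closest), 2.0)

# no randomly sampled point of the sphere is closer
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
assert np.all(np.linalg.norm(pts - v, axis=1) >= 2.0 - 1e-12)
```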
{ "language": "en", "url": "https://math.stackexchange.com/questions/4157409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that $ 1 + x + x^2 +x^3 +...= \frac{1}{1-x} $? By long division, it is easy to show that $$ \frac{1}{1-x} = 1 + x + x^2 +x^3 +... $$ But how to show that $$ 1 + x + x^2 +x^3 +...= \frac{1}{1-x} $$
Suppose $1+x+x^2+x^3+\cdots=S$ where $S \in \mathbb R$. If $x=0$ it is trivial, so suppose $x \neq 0$. Subtract $1$ from each side and divide both sides by $x$. This leaves: $S=(S-1)x^{-1}$. Multiplying both sides by $x$ yields $Sx=S-1$, which rearranges to $S(x-1)=-1$, i.e. $S=\frac{1}{1-x}$. Hence $1+x+x^2+x^3+\cdots=(1-x)^{-1}$. Note that the domain restriction $|x|<1$ applies: otherwise the series diverges and the assumption that the sum $S$ exists fails.
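The manipulation above assumes the series converges. A rigorous route uses the telescoping partial-sum identity $(1-x)(1+x+\dots+x^n)=1-x^{n+1}$ and lets $n\to\infty$; a quick numerical illustration:

```python
x = 0.3
n = 60
partial = sum(x**k for k in range(n + 1))
assert abs((1 - x) * partial - (1 - x**(n + 1))) < 1e-12   # telescoping identity
assert abs(partial - 1 / (1 - x)) < 1e-12                  # x^(n+1) ~ 0 for |x| < 1
```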
{ "language": "en", "url": "https://math.stackexchange.com/questions/4157510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Solving an inequality using integrals Prove that for $a_i\in\mathbb R$, $$\sum_{i=1}^n\sum_{j=1}^{n} \frac{a_ia_j}{|i-j|+1}\geq 0$$ The first step I did was to write the above as $$\int_0^1\sum_{i=1}^n\sum_{j=1}^{n}a_ia_jt^{|i-j|}dt$$ Had the $|i-j|$ been $i+j$, the problem would have been done because we can complete the square. The problem is that modulus. Another thing that's came to my mind was to let $p(t)=\sum a_it^i$ then the thing inside the integral is almost $p(t)p(1/t)$ but the negative powers of $t$ are also present... Maybe we should try substituting $u=1/t$? But that brings with it a $1/u^2$ and the limits also change. Maybe some clever identity can help but I cannot find any good one. This problem is given as an unsolved exercise in PFTB where the trick of interchanging between $|\bullet-\bullet|$ and $\min(\bullet,\bullet)$ is frequently used but the modulus is in the exponent so even that does not look promising.... I want any solution using integrals please because the problem is given in that section
Overgenerous hint: For given $t\in[0,1]$ let $M(t)$ be the matrix whose $i,j$ entry is $t^{|i-j|}$. For your plan to work, you need all the $M(t)$ matrices to be positive definite. To prove this, you need to show that the function $f(x)=\exp(-|x|)$ is positive definite. Proving this involves a certain amount of integral calculus.
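Numerical evidence for the hint (this checks the claimed inequality directly, and the positive definiteness of $M(t)$, but it is of course not a proof):

```python
import numpy as np

n = 8
i, j = np.indices((n, n))
rng = np.random.default_rng(1)

# the quadratic form sum_{i,j} a_i a_j / (|i-j|+1) is nonnegative
for _ in range(200):
    a = rng.normal(size=n)
    assert np.sum(np.outer(a, a) / (np.abs(i - j) + 1)) >= -1e-12

# the matrices M(t) with entries t^{|i-j|} are positive definite for 0 < t < 1
for t in [0.1, 0.5, 0.9]:
    assert np.all(np.linalg.eigvalsh(t ** np.abs(i - j).astype(float)) > 0)
```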
{ "language": "en", "url": "https://math.stackexchange.com/questions/4157766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $\lim_{x\to x_0+}\frac{df}{dx}=K$ exists then $f$ has right derivative and is equal to $K$ Let $f$ be continuous at $[x_0;x_0+h]$ and have finite derivative at $(x_0;x_0+h)$. If the limit $$ \lim_{x\to x_0^+}\frac{df}{dx}=K $$ exists then $f$ has right derivative and is equal to $K$ ($K$ can be finite or infinite). Can you help how from this we get that derivative of $f$ is continuous or has only second order discontinuous points?
If $K$ is finite: for all $\epsilon>0$, there is a constant $\delta>0$ such that $$|f'(x)-K|<\epsilon$$ for all $x_0<x<x_0 +\delta$. By the mean value theorem, for all $x,y \in (x_0,x_0+\delta)$ with $x\neq y$, $$\left| \frac{f(x)-f(y)}{x-y} -K\right|<\epsilon$$ By continuity of $f$ at $x_0$, letting $x\to x_0^+$ gives $$\left| \frac{f(x_0)-f(y)}{x_0-y} -K\right|\le\epsilon$$ for all $y \in (x_0,x_0+\delta)$. Hence $\frac{f(y)-f(x_0)}{y-x_0}\to K$ as $y\to x_0^+$, i.e. the right derivative of $f$ at $x_0$ exists and equals $K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4157951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does the parameter $p$ refer to in this context? I'm reading about the axioms of set theory (from Jech), and I'm having some confusion. There are various parts where it talks about well-formed formulas that include a parameter $p$. This appears here: Also here: Why do we need to include $p$? Couldn't I have just written $\varphi(x, y)$ instead of $\varphi(x, y, p)$? What kind of parameter is it anyways? Is it a natural number that you plug in? These are very basic questions, but I genuinely don't understand.
I think it may help to see a specific (if rather boring) application. For example, suppose we want to show the following: $(*)\quad$ For every $p$ and every set $A$, the class $$\{\{a, p\}: a\in A\}$$ is a set. This is a variant of the argument that for every set $A$ the class $\{\{a\}: a\in A\}$ is a set. The $p$ above enters the Replacement scheme as a parameter: the result $(*)$ follows from the instance of replacement corresponding to the three-variable formula $$\varphi(x,y,z)\equiv y=\{x,z\}$$ (or a bit more precisely, "$\forall u(u\in y\leftrightarrow u=x\vee u=z)$"), since this instance says exactly "For all $A$ and all $p$ the class of $y$ such that for some $x\in A$ we have $\varphi(x,y,p)$ is a set." Now it turns out that we ultimately don't need to include parameters in our axioms after all - see here. But this is a very nontrivial and context-specific result; "morally speaking," it is important to include parameters in the Separation and Replacement axioms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Asymptotic expansions (large argument) for Bessel K function The $K$ Bessel function is defined as (let's assume $x$ is real): $$ K_{\nu}=\frac{\pi}{2}\frac{I_{-\nu}(x)-I_{\nu}(x)}{\sin(\nu\pi)}, $$ where $$ I_{\nu}(x)=\sum_{n=0}^{\infty}\frac{1}{n!\Gamma(1+n+\nu)}\Big(\frac{x}{2}\Big)^{2n+\nu}. $$ I would to know how it is possible to deduce the asymptotic expansions for large $x$: $$ I_{\nu}(x)\sim \frac{e^x}{\sqrt{2\pi x}}\Big(1-\frac{4\nu^2-1}{8x}+\frac{(4\nu^2-1)(4\nu^2-9)}{2!(8x)^2}+\ldots\Big) $$ and $$ K_{\nu}(x)\sim \sqrt{\frac{\pi}{2x}}e^{-x}\Big(1+\frac{4\nu^2-1}{8x}+\frac{(4\nu^2-1)(4\nu^2-9)}{2!(8x)^2}+\ldots\Big). $$
You can establish them by applying Watson's lemma to the integral representations $$ I_\nu (z) = \frac{1}{{\sqrt \pi \Gamma \left( {\nu + \frac{1}{2}} \right)}}(2z)^\nu e^z \int_0^1 {e^{ - 2zt} t^{\nu - 1/2} (1 - t)^{\nu - 1/2} dt} $$ and $$ K_\nu (z) = \frac{{\sqrt \pi }}{{\Gamma \left( {\nu + \frac{1}{2}} \right)}}\left( {\frac{2}{z}} \right)^\nu e^{ - z} \int_0^{ + \infty } {e^{ - 2zt} t^{\nu - 1/2} (1 + t)^{\nu - 1/2} dt} $$ where $\Re z>0$ and $\Re \nu >- \frac{1}{2}$. See https://en.wikipedia.org/wiki/Watson%27s_lemma
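The $K_\nu$ expansion can also be tested numerically. The sketch below evaluates $K_\nu(x)$ through another standard representation, $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$ (an assumption of this sketch, not taken from the question), using a plain trapezoidal rule:

```python
import numpy as np

def K_num(nu, x, tmax=12.0, n=240001):
    # K_nu(x) = ∫_0^∞ e^{-x cosh t} cosh(nu t) dt; the integrand decays
    # double-exponentially, so the trapezoidal rule is very accurate here
    t, dt = np.linspace(0.0, tmax, n, retstep=True)
    f = np.exp(-x * np.cosh(t)) * np.cosh(nu * t)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

nu, x = 1.0 / 3.0, 30.0
m = 4 * nu**2
asym = np.sqrt(np.pi / (2 * x)) * np.exp(-x) * (
    1 + (m - 1) / (8 * x) + (m - 1) * (m - 9) / (2 * (8 * x) ** 2)
)
exact = K_num(nu, x)
assert abs(exact - asym) / exact < 1e-5
```

For half-integer $\nu$ the factors $(4\nu^2-1)(4\nu^2-9)\cdots$ eventually vanish, so the series terminates and the "asymptotic" expansion is exact (e.g. $K_{3/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}(1+1/x)$).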
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Any two components of a topological group are homeomorphic Let $G$ be a topological group. Prove that any two components of $G$ are homeomorphic. My idea: let $C_x$ and $C_y$ be two components for some $x,y\in G$. Since $G$ is a group we can find $g\in G$ such that $gx=y$. Note that the map $\mu:G\to G$ given by $h\mapsto gh$ is a homeomorphism. So it suffices to show that $\mu(C_x)=gC_x=C_y$. Now this is where I'm a little unsure... I'd like to do the following: $$ \begin{align*} gC_x&=g\bigcup_{\substack{x\in C\\C\text{ connected}}}C\\ &=\bigcup_{\substack{x\in C\\C\text{ connected}}}gC\\ &=\bigcup_{\substack{gx\in C\\C\text{ connected}}}C\tag{???}\\ &=\bigcup_{\substack{y\in C\\C\text{ connected}}}C\\ &=C_y. \end{align*} $$ My question is about the $(???)$ part above. Is that the correct way to do it? It seems like simple reindexing but for some reason it's tripping me up thinking about it. Is it because $gC$ is a connected set containing $gx$?
What you did is essentially correct. It's also not really necessary: suppose $C_x$ and $C_y$ are the components of $x$, resp. $y$. We have a homeomorphism $h:X \to X$ such that $h(x)=y$ (we only need that $X$ is homogeneous; the full force of being a topological group is not needed, and homogeneity is all you really use anyway). As $C_x$ is connected and contains $x$, $h[C_x]$ is connected ($h$ is continuous) and contains $h(x)=y$, so $h[C_x] \subseteq C_y$ (by maximality). OTOH, $h^{-1}[C_y]$ is connected (as $h^{-1}$ is also continuous) and contains $x$, so $h^{-1}[C_y] \subseteq C_x$ (likewise by maximality), i.e. $C_y \subseteq h[C_x]$, and so $h[C_x]= C_y$ by the two inclusions. So $h\restriction_{C_x}: C_x \to C_y$ is also a homeomorphism and indeed $C_x \simeq C_y$. I only use the simple fact: The component $C_x$ of $x \in X$ is the maximal (by inclusion) connected subset of $X$ that contains $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing that $1$ is the only unit in a ring with identity $R$ such that $a^2 = a$ for all a in $R$ So I've been given a ring $R$ with identity (no further information on what the identity is or what kinds of elements are in $R$ or operation definitions) such that $a^2 = a$ for all $a \in R$. I need to show that the only unit in $R$ is $1$. I've kind of assumed a proof by contradiction approach. I'm assuming there are other units in $R$ which I've called $u$, called its inverse $x$ and called the identity $I$. So I've done the following $ux=1$ $(ux)I=I$ $ux=I$ $u^2=u$ $(u^2)I=uI$ $u^2=u(ux)$ $u^2=(u^2)x$ $1=x$ $ux = 1$ $u(1) = 1$ $u = 1$ Contradiction, $1$ is the only unit in $R$. I know I've got the right answer but I'm a little apprehensive about it. Do I need to do the left-side multiplications too, since there's nothing about commutativity in the original question?
You have all the right pieces. It's just overly complicated. Let $1$ be the multiplicative identity of $R$ and assume $u$ is some unit of $R$. Then $$u = u1 = u(uu^{-1}) = u^2u^{-1} = uu^{-1} = 1,$$ so $u = 1$ is the only unit in $R$.
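The statement can be sanity-checked on a concrete Boolean ring, e.g. $(\mathbb Z/2\mathbb Z)^n$ with pointwise operations:

```python
from itertools import product

n = 3
elements = list(product([0, 1], repeat=n))     # the Boolean ring (Z/2Z)^n
one = (1,) * n
mul = lambda a, b: tuple(u * v for u, v in zip(a, b))

assert all(mul(a, a) == a for a in elements)   # a^2 = a holds for every a
units = [a for a in elements if any(mul(a, b) == one for b in elements)]
assert units == [one]                          # 1 is the only unit
```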
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Exercise with cartesian closed category Suppose that the category $\mathbf C$ is cartesian closed: I must prove that, chosen three objects $x$, $y$, and $z$, the object $(x\times y)^z$ is isomorphic to $x^z\times y^z$. My idea was to define a morphism $(x\times y)^z\to x^z\times y^z$, one in the inverse direction, and prove that the composites are the identities. (Notation: given a product $a\times b$ I will call $\pi_a$ and $\pi_b$ the projections, and given $h:c\to a$, $k:c\to b$, the morphism $\langle h,k \rangle :c\to a\times b$ is the one defined using the universal property of products). Now, for $f:(x\times y)^z\to x^z\times y^z$ one can choose $\langle (\pi_x)^z, (\pi_y)^z \rangle$; for the other direction things are more difficult, basically because it seems that I should at least have a map from $x$ to $y$, and using the adjunction (with the Hom-set definition) doesn't look helpful in this context. Actually, I prefer not to see the solution yet, but only to know if this direction can be the right one, and I just need to work with the adjunction and the properties of products (and the terminal object eventually), or if this approach is inconclusive. Thanks in advance
The easiest proof is by using that right adjoints preserve limits. The exponent functor $(-)^z$ is right adjoint to the product functor $(-) \times z$, that is: $(-) \times z \dashv (-)^z$. So this is just a direct application of that fact. The proof of the above fact is constructive. So if you want an explicit proof you can just follow any (reasonable) proof, but just with the arbitrary adjunction and limit substituted by the exponential adjunction and product respectively. In fact, if you would follow the proof in the above link, then you get the proof you came up with in your comment.
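In the category of sets the isomorphism is just the familiar fact that a function into a product is the same thing as a pair of functions. A small Python illustration (names are ad hoc):

```python
def split(f):
    # (x × y)^z  →  x^z × y^z : post-compose f with the two projections
    return (lambda c: f(c)[0], lambda c: f(c)[1])

def pair(g, h):
    # x^z × y^z  →  (x × y)^z : the universal property of the product
    return lambda c: (g(c), h(c))

f = lambda c: (c + 1, 2 * c)
g, h = split(f)
assert pair(g, h)(10) == f(10) == (11, 20)     # the two maps are mutually inverse
```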
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral $\int{\frac{x\cos(x)-\sin(x)}{2x^2+\sin^2(x)}dx}$ I need help with this integral: $$I=\int{\frac{x\cos(x)-\sin(x)}{2x^2+\sin^2(x)}dx}$$ I was given this integral in a Calculus worksheet, and I've tried every technique I know, specifically substitutions that just seem to make the integral harder. For reference, I am a first year Math student. I just thought it wouldn't have a closed form but according to WolframAlpha the integral is $$ I=- \frac{\arctan(\sqrt{2}x\csc(x))}{\sqrt{2}}+C$$ Edit: For instance, making the substitution $t = \tan(x)$, yielding $$ \int{\frac{\sqrt{\frac{\arctan^2(t)}{1+t^2}}-\sqrt{\frac{t^2}{1+t^2}}}{2\arctan^2(t)(1+t^2)+t^2}}dt $$ I have achieved similar results with $t=\sin(x)$, $t=\cos(x)$, and that is where I'm stuck.
\begin{aligned} \int \frac{x \cos x-\sin x}{2 x^{2}+\sin ^{2} x} d x &=\int \frac{\frac{x \cos x-\sin x}{x^{2}}}{2+\left(\frac{\sin x}{x}\right)^{2}} d x \\ &=\int \frac{d\left(\frac{\sin x}{x}\right)}{2+\left(\frac{\sin x}{x}\right)^{2}} \\ &=\frac{1}{\sqrt{2}} \tan ^{-1}\left(\frac{\sin x}{x \sqrt{2}}\right)+C \end{aligned}
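The antiderivative can be double-checked by differentiating; a verification sketch using SymPy:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.atan(sp.sin(x) / (sp.sqrt(2) * x)) / sp.sqrt(2)
integrand = (x * sp.cos(x) - sp.sin(x)) / (2 * x**2 + sp.sin(x)**2)

# F'(x) - integrand should vanish identically; check at a few points x != 0
diff_err = sp.lambdify(x, sp.diff(F, x) - integrand)
assert all(abs(diff_err(v)) < 1e-12 for v in [0.7, 1.3, 2.9, -4.1])
```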
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
About the height of primes associated to a squarefree monomial ideal $I\subseteq J$ Edit Let $S=K[x_1,\dots,x_n]$ be a polynomial ring in $n$ indeterminates with coefficients in a field $K$. For a monomial ideal $I$ of $S$, $G(I)$ denotes the minimal generating set of $I$. For example, if $I=(x_1x_2,x_2x_3,x_1x_2x_3)$ then $G(I)=\{x_1x_2,x_2x_3\}$, since $x_1x_2x_3$ is not minimal. Given two squarefree monomial ideals $I,J$ such that $G(I)\subseteq G(J)$, let \begin{align} I&=\mathfrak{p}_1\cap\dots\cap\mathfrak{p}_r,\\ J&=\mathfrak{q}_1\cap\dots\cap\mathfrak{q}_s, \end{align} be the minimal primary decompositions of $I$ and $J$, respectively. It is known that each associated prime of $I$ (and $J$) is of the form $(x_{i_1},\dots,x_{i_\ell})$ for some subset $\{i_1,\dots,i_\ell\}\subseteq\{1,\dots,n\}$. Moreover it is known that each associated prime is minimal. Suppose that all primes $\mathfrak{q}_i$ have the same height: $q=\text{height}(\mathfrak{q}_1)=\ldots=\text{height}(\mathfrak{q}_s)$. Is it true that $$ \text{height}(\mathfrak{p}_i)\le q, $$ for all $i=1,\dots,r$? Many examples seems to suggest this is true. However, I haven't found a way to prove it. I thought that maybe I could use Alexander duality, but didn't progress. Any help is appreciated, also references to the literature are welcome.
No, this is not true. For a simple example, take $S=K[x_1,x_2,x_3]$, let $J=(x_3)$, and let $I=(x_1,x_2) \cap (x_3)$. Of course, $I \subseteq J$, but $I$ has a minimal prime of height $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4158988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability that $010$ is present in an $n$-length binary sequence Imagine a memoryless source that outputs 0's and 1's with probabilities $P_X(0)$ and $P_X(1)$. For example, $P_{X^2}(00)=P_X(0)P_X(0)$. How would you calculate the probability that the sequence $010$ is present in an $n$-length binary sequence? What I have thought so far is that, $$P[010 \text{ is in an } n\text{-length sequence}]=(n-3)P_X(0)^{\#0}P_X(1)^{\#1}$$ I am sure that I have to multiply the probabilities $P_X(0)$ and $P_X(1)$ by $n-3$, because I need to take into consideration all the possible combinations in which $010$ can appear (e.g. $\{010...x\}$,$\{x010...x\}$,etc.). But I am not sure about the number of zeros $\#0$ and the number of ones $\#1$.
Let $p=P_X(0)$ and, for $n\ge 2$, let $a(00,n), a(01,n), a(10,n), a(11,n)$ be the probabilities that an $n$-length sequence does not contain $010$ and ends in the indicated two-symbol suffix. We get: $a(00,n+1)=p(a(00,n) + a(10,n))$ $a(01,n+1) = (1-p)(a(00,n)+a(10,n))$ $a(10,n+1) = pa(11,n)$ $a(11,n+1) = (1-p)(a(01,n) + a(11,n))$ (The third recurrence has no $a(01,n)$ term because appending $0$ to a sequence ending in $01$ would create the forbidden word $010$.) We can write this as: $\begin{pmatrix} p & 0 & p & 0 \\ 1-p & 0 & 1-p & 0 \\ 0 & 0 & 0 & p \\ 0 & 1-p & 0 & 1-p \\ \end{pmatrix} \begin{pmatrix} a(00,n) \\ a(01,n) \\ a(10,n)\\ a(11,n) \end{pmatrix} = \begin{pmatrix} a(00,n+1) \\ a(01,n+1) \\ a(10,n+1)\\ a(11,n+1) \end{pmatrix} $ When $n=2$ the values are $(p^2,p(1-p),(1-p)p,(1-p)^2)$. Hence we have: $\begin{pmatrix} p & 0 & p & 0 \\ 1-p & 0 & 1-p & 0 \\ 0 & 0 & 0 & p \\ 0 & 1-p & 0 & 1-p \\ \end{pmatrix}^{n-2} \begin{pmatrix} p^2 \\ p(1-p)\\ (1-p)p\\ (1-p)^2 \end{pmatrix} = \begin{pmatrix} a(00,n) \\ a(01,n) \\ a(10,n)\\ a(11,n) \end{pmatrix} $ The probability that $010$ is present in an $n$-length sequence is then $1-\big(a(00,n)+a(01,n)+a(10,n)+a(11,n)\big)$. If the matrix happens to be diagonalizable you can get explicit formulas; even if it isn't, you can still put it in a good form. You can also use exponentiation by squaring for rapid computations.
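A sketch of the computation, checked against brute-force enumeration over all $n$-length sequences (here $p = P_X(0)$):

```python
import numpy as np
from itertools import product

def p_contains(n, p):
    # probability that "010" appears, via the suffix recurrence
    a = np.array([p * p, p * (1 - p), (1 - p) * p, (1 - p) ** 2])  # suffixes 00,01,10,11 at n=2
    M = np.array([[p,     0,     p,     0    ],
                  [1 - p, 0,     1 - p, 0    ],
                  [0,     0,     0,     p    ],
                  [0,     1 - p, 0,     1 - p]])
    for _ in range(n - 2):
        a = M @ a
    return 1 - a.sum()

def brute(n, p):
    return sum(p ** s.count('0') * (1 - p) ** s.count('1')
               for s in map(''.join, product('01', repeat=n)) if '010' in s)

p = 0.3
assert all(abs(p_contains(n, p) - brute(n, p)) < 1e-12 for n in range(2, 12))
```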
{ "language": "en", "url": "https://math.stackexchange.com/questions/4159144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The modular curve $X_0(11)$ has a quotient isomorphic to Riemann sphere Consider the congruence subgroup $$\Gamma_0(11)=\left\{\begin{pmatrix}a&b\\11c&d\end{pmatrix}\in SL(2,\mathbb{Z}) \right\}$$ and the associated modular curve $X_0(11).$ I can prove that the Fricke involution $\omega_{11}:H\to H$ (where $H$ is the Poincaré half-plane) $$\tau\mapsto-\frac{1}{11\tau}$$ induces an involution $\omega_{11}$on the modular curve $X_0(11).$ I can consider the quotient $X_0^+(11)=X_0(11)/\langle\omega_{11}\rangle,$ which is also a compact Riemann surface. The projection $\pi:X_0(11)\to X_0^+(11)$ has degree $2.$ I want to prove that $X_0^+(11)$ has genus $0,$ and hence is isomorphic to the Riemann sphere. I want to prove this fact using the Hurwitz formula. The only thing I miss is how to calculate the points with multiplicity. How can I do that?
We know already that $X_0(11)$ has genus $1$. Now in general suppose that $X$ is a curve (Riemann surface etc) of genus $1$, and that there is a degree $2$ map $X \to X'$ where $X'$ has genus $0$. By Riemann-Hurwitz you should have $$2g - 2 = 2(2g' - 2) + \sum_p(e_p - 1).$$ So there should be 4 ramification points (since $0 = -4 + \sum(e_p -1)$). Aside: One can think about this geometrically - let $E$ be an elliptic curve with a degree $2$ mapping to a copy of $\mathbb{P}^1$, then choosing this $\mathbb{P}^1$ as the $x$-coordinate we see this double cover is ramified above the 2-torsion points (this is essentially what we do when we put $E$ in Weierstrass form). Now to the original question, let $g = g(X_0(11))$ and $g^+ = g(X_0^+(11))$. You know that $2g -2 = 0$, and you know there is a ramification point (clearly $\sqrt{-1/11}$ works, as noted in the comments). Therefore the sum $ \sum_p(e_p -1 ) \geq 1$. In particular we then must have $2(2g^+ - 2) < 0$, which can only occur when $g^+ <1$. However $g^+$ is a nonnegative integer, so it must be zero, and there must be $4$ ramification points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4159308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calculate quantile function for Birnbaum–Saunders distribution? According to wikipedia the quantile function of for Birnbaum–Saunders distribution, $ G(p)$, depends on the quantile function of the standard normal distribution. For example, in the paper https://arxiv.org/pdf/1805.06730.pdf it comes "evidently" on page 8, but I don't understand clear why it is almost the same as equation for $T$. How it was found?
I use the notation in the paper you've linked (in case the link in the question dies, it is Birnbaum-Saunders Distribution: A Review of Models, Analysis and Applications by N. Balakrishnan and Debasis Kundu). We have that if $T \sim \text{BS}(\alpha, \beta)$ that $$F_{T}(t) = \Phi\left[\dfrac{1}{\alpha}\left\{\left(\dfrac{t}{\beta}\right)^{1/2} - \left(\dfrac{\beta}{t}\right)^{1/2} \right\}\right]\text{, } \quad t > 0\text{, } \alpha > 0\text{, } \beta > 0\text{.}$$ The $q$th quantile, by definition, is the value $t_q$ (which we assume is $>0$) satisfying $$F_{T}(t_q) = \Phi\left[\dfrac{1}{\alpha}\left\{\left(\dfrac{t_q}{\beta}\right)^{1/2} - \left(\dfrac{\beta}{t_q}\right)^{1/2} \right\}\right] = q\text{.}$$ As $\Phi$ is invertible, we obtain $$\dfrac{1}{\alpha}\left\{\left(\dfrac{t_q}{\beta}\right)^{1/2} - \left(\dfrac{\beta}{t_q}\right)^{1/2} \right\} = \Phi^{-1}(q) := z_q \tag{1}$$ because $\Phi^{-1}(q)$ is the $q$th quantile of a standard normal random variable. To make the algebra easier, we first observe that $$\left(\dfrac{t_q}{\beta}\right)^{1/2} - \left(\dfrac{\beta}{t_q}\right)^{1/2} = \dfrac{t_q^{1/2}}{\beta^{1/2}} - \dfrac{\beta^{1/2}}{t_q^{1/2}} = \dfrac{t_q - \beta}{t_q^{1/2}\beta^{1/2}} = \dfrac{1}{\sqrt{\beta}}\left(\dfrac{t_q - \beta}{\sqrt{t_q}}\right)\text{.}$$ From $(1)$ and our work above, we obtain that $$\alpha \sqrt{\beta} z_q = \dfrac{t_q - \beta}{\sqrt{t_q}} \implies t_q - \alpha\sqrt{\beta}z_q\sqrt{t_q} - \beta = 0\text{.}$$ Let $u = \sqrt{t_q}$, then we have the quadratic $$u^2 - \alpha\sqrt{\beta}z_qu - \beta = 0\text{.}$$ It follows from the quadratic formula that $$\begin{align} u &= \sqrt{t_q} \\ &= \dfrac{\alpha\sqrt{\beta}z_q \pm \sqrt{\alpha^2\beta z_q^2 - 4(1)(-\beta)}}{2} \\ &= \dfrac{\sqrt{\beta}}{2}\left(\alpha z_q \pm \sqrt{\alpha^2 z_q^2 + 4} \right) \\ &= \dfrac{\sqrt{\beta}}{2}\left(\alpha z_q \pm \sqrt{(\alpha z_q)^2 + 4} \right)\text{.} \end{align}$$ Now we observe that $$\sqrt{(\alpha z_q)^2 + 4} > \sqrt{(\alpha 
z_q)^2} = \alpha |z_q| \geq \alpha z_q$$ hence $$ \alpha z_q - \sqrt{(\alpha z_q)^2 + 4} < 0$$ so, with the condition that $\sqrt{t_q} \geq 0$, we obtain the unique solution $$\sqrt{t_q} = \dfrac{\sqrt{\beta}}{2}\left(\alpha z_q + \sqrt{(\alpha z_q)^2 + 4} \right)$$ or $$t_q = \dfrac{\beta}{4}\left(\alpha z_q + \sqrt{(\alpha z_q)^2 + 4} \right)^2$$ This matches equation $(8)$ in the paper.
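The derivation above can be sanity-checked numerically. The sketch below uses only the standard library; the helper names `bs_cdf` and `bs_quantile` are just chosen here for illustration. It implements the CDF $F_T$ and equation $(8)$, and verifies the round trip $F_T(t_q)=q$:

```python
import math
from statistics import NormalDist

def bs_cdf(t, alpha, beta):
    # F_T(t) = Phi((1/alpha) * ((t/beta)^(1/2) - (beta/t)^(1/2)))
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return NormalDist().cdf(z)

def bs_quantile(q, alpha, beta):
    # equation (8): t_q = (beta/4) * (alpha*z_q + sqrt((alpha*z_q)^2 + 4))^2
    z_q = NormalDist().inv_cdf(q)
    return (beta / 4) * (alpha * z_q + math.sqrt((alpha * z_q) ** 2 + 4)) ** 2

alpha, beta = 0.5, 2.0
checks = [(q, bs_cdf(bs_quantile(q, alpha, beta), alpha, beta))
          for q in (0.1, 0.25, 0.5, 0.75, 0.9)]
```

Note that $q=\tfrac12$ gives $z_q=0$ and hence $t_q=\beta$, which is the median of the distribution.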
{ "language": "en", "url": "https://math.stackexchange.com/questions/4159423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closed form of the sum $s_4 = \sum_{n=1}^{\infty}(-1)^n \frac{H_{n}}{(2n+1)^4}$ I am interested to know if the following sum has a closed form $$s_4 = \sum_{n=1}^{\infty}(-1)^n \frac{H_{n}}{(2n+1)^4}\tag{1}$$ I stumbled on this question while studying a very useful book about harmonic series and logarithmic integrals which has appeared recently [1]. Checking it for possible missing entries I was led to consider this family of alternating Euler sums $$s(a) = \sum_{n=1}^{\infty}(-1)^n \frac{H_{n}}{(2n+1)^a}$$ where $H_{n} =\sum_{k=1}^{n}\frac{1}{k}$ is the harmonic number, as well as the corresponding integrals $$i(a) = \int_0^1 \frac{\log ^{a-1}\left(\frac{1}{x}\right) \log \left(x^2+1\right)}{x^2+1} \, dx$$ These are related by $$i(a) = - \Gamma(a) s(a)$$ As listed in detail in the appendix, closed forms exist for all odd $a=2q+1$. For even $a=2q$ only the case $a=2$, $q=1$ is known. Hence, the natural question is to ask for a closed form for the smallest case open up to now, $a = 4$. What I did so far The integral $i(a)$ can be found by differentiation with respect to a parameter $u$ from the generating integral $$g_i(u) = \int_0^1 \frac{t^u \log \left(t^2+1\right)}{t^2+1} \, dt$$ which is evaluated by Mathematica in terms of hypergeometric functions as follows $$g_i(u) = \frac{1}{4} \left(-\frac{2^{\frac{u+5}{2}} \, _3F_2\left(\frac{1}{2}-\frac{u}{2},\frac{1}{2}-\frac{u}{2},\frac{1}{2}-\frac{u}{2};\frac{3}{2}-\frac{u}{2},\frac{3}{2}-\frac{u}{2};\frac{1}{2}\right)}{(u-1)^2}\\-2 \pi H_{-\frac{u}{2}-\frac{1}{2}} \sec \left(\frac{\pi u}{2}\right)-\log (4) B_{\frac{1}{2}}\left(\frac{1-u}{2},\frac{u+1}{2}\right)\right)$$ As the parameter $u$ appears in 7 places each derivative generates a factor 7 in the length of the result. Unless someone comes up with a very clever idea to simplify the hypergeometric expressions this path seems to be hopeless. 
Appendix: known closed forms For positive integer values of $a$ the following results have been obtained: a) for odd $a=2q+1$ the closed form was calculated in [1], 4.1.15 (4.91) as: $$s(2q+1) = (2q+1)\beta(2q+2) + \frac{\pi}{(2q)! 4^{q+1}}\lim_{m\to \frac12 }\frac{\mathrm{d}^{2q}}{\mathrm{d} m^{2q}} \frac{\psi(1-m) + \gamma}{\sin(m\pi)}$$ Here $$\beta(z)=\sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)^z}$$ is Dirichlet's beta function. As for a simplification of the r.h.s., see https://math.stackexchange.com/a/4139359/198592 b) for even $a=2q$ there is a closed form just for $a=2$, i.e. $q=1$, found in [1] 4.5.5 (4.187) $$s(2) = 2 \;\Im \text{Li}_3(1-i)+ \frac{3\pi^3}{32}+\frac{\pi}{8}\log^2(2) -\log(2) G$$ where $G = \beta(2)$ is Catalan's constant. References [1] Ali Shadhar Olaikhan, "An introduction to harmonic series and logarithmic integrals", April 2021, ISBN 978-1-7367360-0-5
The interest of pisco and FDP encouraged me try out simple transformations on the integral. 1) Integration by parts IBP with $$U=\int \frac{\log \left(t^2+1\right)}{t \left(t^2+1\right)} \, dt = -\frac{\text{Li}_2\left(-t^2\right)}{2}-\frac{1}{4} \log ^2\left(t^2+1\right)$$ $$V=t \log ^3(t)$$ leads to $$\begin{align}i(3) =& -3 (4 C+2 i \text{Li}_3(-i)-2 i \text{Li}_3(i)+2 i \text{Li}_4(-i)-2 i \text{Li}_4(i)\\&+\pi -16+\log (4))+\frac{3}{4} A+\frac{1}{4}B\end{align}\tag{s1.1}$$ Where C = Catalan's constant and $$A = \int_0^1 \log ^2(t) \log ^2\left(t^2+1\right) \, dt\tag{s1.2}$$ $$B = \int_0^1 \log ^3(t) \log ^2\left(t^2+1\right) \, dt\tag{s1.3}$$ 2) Substitution of integration variable Letting $x\to \tan(\phi)$ we obtain $$i(3) = \int_0^{\frac{\pi }{4}} \log ^3(\tan (\phi )) \log \left(\sec ^2(\phi )\right) \, d\phi\tag{s2.1}$$ Observing $\log \left(\sec ^2(\phi )\right) = - 2 \log \left(\cos(\phi)\right) $ and expanding $\log ^3(\tan (\phi ))=\left(\log(\sin(\phi)) -\log(\cos(\phi))\right)^3 $ we end up with four nice integrals of the type $$i(p,q) = \int_0^{\frac{\pi }{4}} \log ^p(\cos (\phi )) \log ^q(\sin (\phi )) \, d\phi\tag{s2.2}$$ I was able (via the antiderivative, using Mathematica) to solve only this one: $$\begin{align}i(4,0)=\int_0^{\frac{\pi }{4}} \log ^4(\cos (\phi )) \, d\phi =-\frac{1}{480} \pi \left(15 \left(48 \zeta (3) \log (4)+\log ^4(4)\right)+19 \pi ^4+30 \pi ^2 \log ^2(4)\right)+\frac{1}{192} \left(48 \left(48 \sqrt{2} \, _6F_5\left(\{\frac{1}{2}\}^6;\{\frac{3}{2}\}^5;\frac{1}{2}\right)\\ +\log (2) \left(24 \sqrt{2} \, _5F_4\left(\{\frac{1}{2}\}^5;\{\frac{3}{2}\}^4,\frac{3}{2};\frac{1}{2}\right)\\ +\sqrt{2} \log (64) \, _4F_3\left(\{\frac{1}{2}\}^4;\{\frac{3}{2}\}^3;\frac{1}{2}\right)\\ -2 i \text{Li}_2\left(-\frac{1+i}{\sqrt{2}}\right) \log ^2(2)+2 i \text{Li}_2\left(1-\frac{1+i}{\sqrt{2}}\right) \log ^2(2)\right)\right)\\ -11 i \pi ^2 \log ^3(2)+3 \pi \left(\log (2)+8 \log \left(1+\frac{1+i}{\sqrt{2}}\right)\right) \log 
^3(2)\right)\end{align}\tag{s2.3}$$ Numerically we have $$i(4,0) \simeq 0.00115068$$. Notice that the components in $(s2.3)$ are similar to those of @pisco's answer: hypergeometric functions of order up to (6/5), and (poly)logs. No derivatives of the hypergeometric functions appear. The other three integrals $i(3,1)$, $i(2,2)$, and $i(1,3)$ have resisted up to now and are open to attack. 3) Collecting knowledge and tentative bottom line Being grateful for the hints I have consulted this article here and the references therein, and the answer of pisco. I have not found any expression for e.g. $i(4,0)$ which does not contain at least one hypergeometric function. Hence it may well be that the list of admissible components for "closed forms" must be enlarged to include hypergeometric functions. After all, roots of polynomials also can't always be expressed in closed form, i.e. in radicals, once the degree surpasses a critical value ...
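The quoted value $i(4,0)\simeq 0.00115068$ is easy to cross-check numerically. Here is a minimal sketch with a hand-rolled composite Simpson rule (the helper name `simpson` is mine, not from the post); the integrand is smooth on $[0,\pi/4]$, so a moderate number of subintervals suffices:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# i(4,0) = integral over [0, pi/4] of log^4(cos(phi))
i40 = simpson(lambda p: math.log(math.cos(p)) ** 4, 0.0, math.pi / 4)
```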
{ "language": "en", "url": "https://math.stackexchange.com/questions/4160026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Definition of $(\exists_1x)\mathscr B(x)$ This question is from Introduction to Mathematical Logic by Elliot Mendelson, fourth edition, page 99, about the definition of $(\exists_1x)\mathscr B(x)$. In the book, the definition is written like this: $(\exists_1x)\mathscr B(x)$ stands as an abbreviation for $$(\exists x)\mathscr B(x) \land (\forall x)(\forall y)(\mathscr B(x) \land \mathscr B(y) \to x = y)$$ In this definition, the new variable $y$ is assumed to be the first variable that does not occur in $\mathscr B(x)$. A similar convention is to be made in all other definitions where new variables are introduced. My question is, what exactly do they mean by "...assumed to be the first variable..."?
The important point is that $y$ should not clash with the free variables of $\mathscr{B}(x)$. Requiring $y$ to be the first variable in the sequence of all variables $x_1, x_2, \ldots$ that does not occur in $\mathscr{B}$ is just a definite way of ensuring that $y$ does not clash. To see why this is necessary, take $\mathscr{B}(x) \equiv x > y$ so that, in $(\Bbb{Q}, >)$, for example, $(\exists_1 x) \mathscr{B}(x)$ is false. Then if we allow $y$ in the definition to clash with the free variable $y$ of $\mathscr{B}(x)$, the definition would give us: $$(\exists x)x > y \land (\forall x)(\forall y)(x > y \land y > y \to x = y)$$ which, unlike $(\exists_1 x) \mathscr{B}(x)$ is true, because $y > y$ is always false. If we avoid the clash, by picking a variable $z$ that is different from $x$ and $y$, we get the correct equivalent of $(\exists_1 x) \mathscr{B}(x)$: $$(\exists x)x > y \land (\forall x)(\forall z)(x > y \land z > y \to x = z)$$ which is false, as we would expect.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4160168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Fredholm Alternative for Self-Adjoint Operators Let $H=(H,(\cdot,\cdot)_H)$ be a Hilbert space and $A:D(A) \subset H \longrightarrow H$ be a self-adjoint operator, not necessarily bounded. Question. Is it true that $\text{Ker}(A)^{\perp}=\text{Range}(A)$? Note that, given any $v \in \text{Range}(A)$ there exists $u \in D(A)$ such that $A(u)=v$. Hence, if $w \in \text{Ker}(A)$ is arbitrary, then since $A$ is self-adjoint we obtain $$ 0=(A(w),u)_H=(w,A(u))_H=(w,v)_H $$ that is, $v \in \text{Ker}(A)^{\perp}$. Thus, $ \text{Range}(A) \subset \text{Ker}(A)^{\perp}$. But does the reverse inclusion hold? If so, would this be in a sense a Fredholm Alternative for self-adjoint operators? Or is there a Fredholm Alternative for self-adjoint operators? I only know the version for compact operators.
If $T$ is densely defined with domain $\mathcal{D}(T)$ then $T^*$ is closed and $$y \perp \text{Range}(T)\iff \forall x\in \mathcal{D}(T):\langle Tx,y\rangle=0 \iff y\in\mathcal{D}(T^*) \land T^*(y) = 0$$ This proves $$\text{Range}(T)^\perp = \text{Ker}(T^*)$$ and shows, inter alia, that $\text{Ker}(T^*)$ is a closed subspace, because the orthogonal complement of any set is a closed subspace. If $T$ is densely defined and closable then $T^*$ is densely defined (and, as before, closed). Additionally, $T^{**} = \overline{T}$, the closure of $T$ obtained by taking the closure of the graph of $T$. From this we have $$\text{Range}(T^*)^\perp = \text{Ker}(T^{**})=\text{Ker}(\overline{T})$$ Taking orthogonal complements once more, for a self-adjoint $A$ (so $A=A^*=\overline{A}$) this gives $\text{Ker}(A)^\perp=\overline{\text{Range}(A)}$; hence $\text{Ker}(A)^{\perp}=\text{Range}(A)$ holds precisely when the range of $A$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4160338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $(x-2)^{x^2-6x+8} >1$ Solving this inequality: $(x-2)^{x^2-6x+8} >1$, by taking log to base $(x-2)$ on both sides, I get the solution as $x>4$. My work: Let $(x-2)>0$ $$(x-2)(x-4)\log_{(x-2)} (x-2) > \log_{(x-2)} 1 =0\implies x<2 \text{ or } x>4$$ But this doesn't appear to be the complete solution; for instance $x=5/2$ is also a solution. I would like to know how to solve it completely.
$(x-2)^{x^2-6x+8}\gt 1$. If $x^2 -6x + 8=0$, i.e. $x=2$ or $x=4$, then the left-hand side is $0^0$ (undefined) or $2^0=1$, so $x\ne2$ and $x\ne4$. At $x=3$ the base is $1$, so $(x-2)^{x^2-6x+8}=1$ again, and $x=3$ is excluded too. On the remaining intervals compare the base with $1$ and check the sign of the exponent $(x-2)(x-4)$: for $x\in(2,3)$ the base lies in $(0,1)$ and the exponent is negative, so the power exceeds $1$; for $x\in(3,4)$ the base exceeds $1$ but the exponent is still negative, so the power is less than $1$; for $x>4$ the base exceeds $1$ and the exponent is positive, so the power exceeds $1$. Hence $$x\in (2,3)\cup (4,\infty)$$ As a check, for $x=\frac{5}{2}$: $(0.5)^{-0.75}=1.68179\ldots>1$.
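The case analysis above can be sanity-checked on a grid. A small illustrative sketch (the helper name `satisfies` is just chosen here), restricting to base $x-2>0$ so that the real power is defined:

```python
def satisfies(x):
    # (x-2)^((x-2)(x-4)) > 1, taking only base x-2 > 0 (real powers)
    base, expo = x - 2, (x - 2) * (x - 4)
    return base > 0 and base ** expo > 1

grid = [2 + k / 100 for k in range(1, 800)]            # x in (2, 10)
claimed = [x for x in grid if (2 < x < 3) or x > 4]    # the solution set above
found = [x for x in grid if satisfies(x)]
```

On this grid the two lists agree, matching $x\in(2,3)\cup(4,\infty)$.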
{ "language": "en", "url": "https://math.stackexchange.com/questions/4162534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Does $\int_0^{2\pi}\frac{d\phi}{2\pi} \,\ln\left(\frac{\cos^2\phi}{C^2}\right)\,\ln\left(1-\frac{\cos^2\phi}{C^2}\right)$ have a closed form? I am wondering if anyone has a nice way of approaching the following definite integral $\newcommand{\dilog}{\operatorname{Li}_2}$ $$\int_0^{2\pi}\frac{d\phi}{2\pi} \,\ln\left(\frac{\cos^2\phi}{C^2}\right)\,\ln\left(1-\frac{\cos^2\phi}{C^2}\right)\,.$$ Here $C$ is a positive, real constant that satisfies the constraint $C>1$. So far I have tried a simple $u$ substitution, $u = \cos\left(\phi\right)/C$. However, this doesn't get me anywhere. I have also tried performing a series expansion in small $1/C$ in the second log, performing the integration, and then summing in powers of $1/C$. However, the sum does not simplify nicely. I have also tried relating the expression to $\dilog\left(1-\frac{\cos^2\phi}{C^2}\right)$ and $\dilog\left(\frac{\cos^2\phi}{C^2}\right)$ and performing the integration of these polylog functions using their series representation. However I have trouble performing the summation for the $\dilog\left(1-\frac{\cos^2\phi}{C^2}\right)$ term.
This integral has a closed form in terms of dilogarithms; the idea is that the series $$\ell(r,\phi)=\log(1-2r\cos\phi+r^2)=-2\sum_{n=1}^\infty\frac{r^n}{n}\cos n\phi\qquad(|r|<1)$$ (obtained from $1-2r\cos\phi+r^2=(1-re^{i\phi})(1-re^{-i\phi})$ and the power series of $\log$) may be considered as a Fourier series, giving $\int_0^{2\pi}\ell(r,2\phi)\,d\phi=0$ for $|r|\leqslant 1$ and, by Parseval's identity, $$\int_0^{2\pi}\ell(a,2\phi)\ell(b,2\phi)\,d\phi=4\pi\sum_{n=1}^\infty\frac{a^n}{n}\frac{b^n}{n}=4\pi\operatorname{Li}_2(ab)$$ for $|a|,|b|\leqslant 1$ (all the boundary cases are attainable in the limit). Now $$ \log\left(\frac{\cos^2\phi}{C^2}\right)=\ell(-1,2\phi)-\log(4C^2),\\ \log\left(1-\frac{\cos^2\phi}{C^2}\right)=\ell(r,2\phi)-\log(4rC^2),\\ \color{blue}{r:=2C^2-1-2C\sqrt{C^2-1}}, $$ reducing the given integral to the above. The answer is $\color{blue}{2\operatorname{Li}_2(-r)+\log(4C^2)\log(4rC^2)}$.
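Since the closed form only involves a dilogarithm at the small argument $-r$, it can be checked with a short self-contained script. The series-based `dilog` and the Simpson integrator below are quick stand-ins chosen here, not production code; by symmetry of $\cos^2\phi$, the mean over $[0,2\pi]$ equals $\frac{2}{\pi}\int_0^{\pi/2}$:

```python
import math

def dilog(z, terms=200):
    # power series Li_2(z) = sum_{n>=1} z^n / n^2; fine here since |z| < 1
    return sum(z ** n / n ** 2 for n in range(1, terms + 1))

def integral_numeric(C, n=20000):
    # (1/2pi) * integral over [0, 2pi]  =  (2/pi) * integral over [0, pi/2]
    def f(p):
        c2 = math.cos(p) ** 2 / C ** 2
        return math.log(c2) * math.log(1 - c2)
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (s * h / 3) * 2 / math.pi

def closed_form(C):
    r = 2 * C ** 2 - 1 - 2 * C * math.sqrt(C ** 2 - 1)
    return 2 * dilog(-r) + math.log(4 * C ** 2) * math.log(4 * r * C ** 2)
```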
{ "language": "en", "url": "https://math.stackexchange.com/questions/4162664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Modules and direct sum Let $A$ be a ring and $M$ an $A$-module, given $ g\colon M \to A$ a surjective morphism of $A$-modules, prove that $ M \cong \ker(g) \oplus A$. Here's what I thought: first, the following short sequence is exact $$ 0 \to \ker(g)\to M\xrightarrow{g} A \to 0 $$ and if I show that it splits I am done. I define a morphism of $A$-modules $ f:A \to M$ the following way $$ f(a) = a\cdot1_M \space\space \forall a \in A $$ and since for all $a \in A$ we have $$ (g\circ f)(a) = g(a\cdot1_M ) = ag(1_M)=a\cdot1_A = a $$ we get $gf = \operatorname{id}_A$, the sequence splits and $ M \cong \ker(g) \oplus A$. Is what I've done a possible solution? As a last question, if it is right, could this hold for every $M,N$ that are $A$-modules and every $h\colon M \to N$ with $h$ a surjective morphism, under the hypothesis that $N$ is generated by a single element? Thank you very much.
Question: "Could this hold for every $M,N$ that are $A$-modules and every $h\colon M \to N$ with $h$ a surjective morphism, under the hypothesis that $N$ is generated by a single element?" Answer: The map $f\colon M \to A$ has a (non-unique) section $s: A\to M$: pick any $m\in M$ with $f(m)=1$ and define $s(a):=am$. It follows that $f \circ s =\operatorname{id}_A$. Define $\phi:= s \circ f: M \to M$. It follows that $\phi^2 =\phi$ and $M \cong \ker(\phi) \oplus \operatorname{im}(\phi)=\ker(f)\oplus A$. There are isomorphisms $$ a\colon M \cong \ker(\phi) \oplus \operatorname{im}(\phi)$$ defined by $a(m):=(m-sf(m), f(m))$ and $$b\colon \ker(\phi)\oplus \operatorname{im}(\phi) \to M$$ defined by $b(u,v):=u+s(v)$. You may verify that $a\circ b=b \circ a=\operatorname{id}$, hence $a$ and $b$ are isomorphisms of $A$-modules. Note: There is no "identity element" $1_M \in M$. You must choose an element $m\in M$ mapping to $1\in A$ - the element $m$ is in general not unique. Note: We define an $A$-module $N$ to be "projective" iff for any surjective map of $A$-modules $f\colon M\to N$ there is a section $s\colon N \to M$ with $f \circ s =\operatorname{id}_N$. There are non-projective $A$-modules.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4162818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit of Function raised to a power of another function I'm trying to evaluate the following limit: $$A =\lim_{x\to\infty}\left(\frac{x^2-3x+1}{x^2+x+2}\right)^{2x-5}$$ So far, I have exponentiated the limit and whatnot and now I am at this stage: $$A =\exp\left(\lim_{x\to\infty}(2x-5)\ln\frac{x^2-3x+1}{x^2+x+2}\right)$$ Now, I don't know what to do here, I know via simple algebra that: $$\lim_{x\to\infty}\ln\frac{x^2-3x+1}{x^2+x+2}=0$$ But $2x-5$ has no limit, and thus I cannot separate $A$ into the product of two limits. Perhaps I am missing something? WolframAlpha says $$\lim_{x\to\infty}(2x-5)\ln\frac{x^2-3x+1}{x^2+x+2}=-8$$ And that therefore $A = e^{-8}$, but gives no insight as to how this is the case.
As I like to say (jokingly), "we are always closer to $0$ than to $\infty$". So, start by making $x=\frac 1 y$ $$A=\left(\frac{x^2-3 x+1}{x^2+x+2}\right)^{2 x-5}\implies A=\left(\frac{1-3 y+y^2}{1+y+2 y^2}\right)^{\frac{2}{y}-5}$$ Take the logarithm $$\log(A)=\left(\frac{2}{y}-5\right)\Big[\log(1-3 y+y^2)-\log(1+y+2 y^2)\Big]$$ Expand the logarithms separately as Taylor series around $y=0$ ... and just finish $\log(A)$. Continue with Taylor using $$A=e^{\log(A)}$$ and you are done.
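The claimed value $A=e^{-8}$ is easy to confirm numerically (an illustrative check; the expression is simply evaluated at a few large $x$ and compared to the limit):

```python
import math

def f(x):
    # the original expression ((x^2 - 3x + 1) / (x^2 + x + 2))^(2x - 5)
    return ((x * x - 3 * x + 1) / (x * x + x + 2)) ** (2 * x - 5)

vals = [f(10.0 ** k) for k in (3, 4, 5, 6)]   # evaluate at increasingly large x
limit = math.exp(-8)
```

The values decrease monotonically toward $e^{-8}\approx 0.000335$, consistent with $\log A = -8 + O(1/x)$.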
{ "language": "en", "url": "https://math.stackexchange.com/questions/4162919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
functional iteration convergence The functional iteration sequence $x_{n+1} = 2 - (1+c)x_n + cx_{n}^3$ will converge for some values of $c$ to $ \alpha = 1$. For what values of $c$ will this sequence converge? My attempt was to use the contractive mapping theorem: first find a closed set $ D = [1-a, 1+b] $, show that $ f: D \rightarrow D $ and also $ |f^{'}(x)| < 1$; then by the contractive mapping theorem the sequence converges to a unique fixed point in $D$, which is $1$. $f^{'}(x) = -1 - c + 3cx^2$ is, for some values of $c$, always less than $1$ in absolute value, but for other values it is not. I'm not sure how to determine the admissible values of $c$.
You get as the progression of the distance to the fixed point $$ \frac{x_{n+1}-1}{x_n-1}=c(x_n^2+x_n)-1. $$ To get convergence you need the right side to be smaller than $1$ in absolute value. This gives $$ 0<c(x^2+x)< 2\iff \frac14<\left(x+\frac12\right)^2<\frac2c+\frac14 $$ To get $x=α=1$ inside the contracting interval one needs $0<c<1$. The interval is then $$ 0<x<\sqrt{\frac2c+\frac14}-\frac12=1+\frac{4(1-c)}{\sqrt{8c+c^2}+3c} $$
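A short sketch iterating the map for one admissible $c$ (the specific $c=0.5$ and starting point are arbitrary choices here) confirms both the contraction-ratio identity and convergence to $\alpha=1$:

```python
def step(x, c):
    # x_{n+1} = 2 - (1+c) x_n + c x_n^3
    return 2 - (1 + c) * x + c * x ** 3

c = 0.5                                  # any 0 < c < 1 should work
upper = (2 / c + 0.25) ** 0.5 - 0.5      # right end of the interval above
traj = [1.5]                             # start inside (0, upper)
for _ in range(60):
    traj.append(step(traj[-1], c))
# (x_{n+1} - 1) / (x_n - 1), compared below with c(x_n^2 + x_n) - 1
ratios = [(traj[k + 1] - 1) / (traj[k] - 1) for k in range(5)]
```

Note that for $c=\tfrac12$ the ratio $c(x^2+x)-1$ vanishes at $x=1$, so the convergence near the fixed point is in fact superlinear.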
{ "language": "en", "url": "https://math.stackexchange.com/questions/4163290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What would be the gradient function Suppose we have the function $f(x) = \frac{1}{2} x^T Qx - x^Tb.$ What would be $\nabla f(x)$? I think I am just getting confused about having both $x^T$ and $x.$ My intuitive answer would be: $\nabla f(x) = \frac{1}{2} Qx - b$ but I'm not sure if that is correct though it seems like it would be since $x^T$ and $x$ represent the same vector.
The gradient of $g(x) = x^T A x$ is $g'(x) = (A + A^T) x$. The gradient of $h(x) = x^T b$ relies on the fact that $x^T b$ is a scalar, and a scalar transposed is itself, so $x^T b = b^T x$; therefore, the gradient of $h$ is $h'(x) = b$. One can think of $g(x) = x^T A x$ as analogous to $\tilde{g}(x)=kx^2 = x k x$ when $k$ and $x$ are both scalars. The derivative of $\tilde{g}$ is $2 k x$. But, since matrices are more interesting than scalars, instead of $2 k$ we have $(A + A^T)$. It's at least interesting to note that if $A$ were a scalar, then $A+A^T = 2A$, which matches the derivative when working with scalars. From here, you can use the linearity of the derivative and distribute over the sum to get your answer: $$\nabla f(x) = \tfrac{1}{2}(Q + Q^T)x - b,$$ which reduces to $Qx - b$ when $Q$ is symmetric.
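The formula $\nabla f(x)=\frac12(Q+Q^T)x-b$ can be checked against central finite differences. A small pure-Python sketch (names chosen here for illustration), with a deliberately non-symmetric $Q$ so that the transpose term matters:

```python
def f(x, Q, b):
    # f(x) = 0.5 * x^T Q x - x^T b
    n = len(x)
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))
    return 0.5 * quad - sum(x[i] * b[i] for i in range(n))

def grad(x, Q, b):
    # claimed gradient: 0.5 * (Q + Q^T) x - b  (equals Qx - b for symmetric Q)
    n = len(x)
    return [0.5 * sum((Q[i][j] + Q[j][i]) * x[j] for j in range(n)) - b[i]
            for i in range(n)]

def grad_fd(x, Q, b, h=1e-6):
    # central finite differences as an independent check
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((f(xp, Q, b) - f(xm, Q, b)) / (2 * h))
    return out

Q = [[4.0, 1.0], [3.0, 2.0]]   # deliberately non-symmetric
b = [1.0, -2.0]
x = [0.7, -1.3]
```

Since $f$ is quadratic, the central difference is exact up to rounding, so the two gradients agree to high precision.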
{ "language": "en", "url": "https://math.stackexchange.com/questions/4163540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trigonometric Identities Using De Moivre's Theorem I am familiar with solving trigonometric identities using De Moivre's Theorem, where only $\sin(x)$ and $\cos(x)$ terms are involved. But could not use it to solve identities involving other ratios. For example, (1) $\tan\left(\frac{\theta}{2}\right)\sec(x)+\tan\left(\frac{\theta}{2^2}\right)\sec\left(\frac{x}{2}\right)+...+\tan\left(\frac{\theta}{2^n}\right)\sec\left(\frac{x}{2^{n-1}}\right)$ (2) $\csc(x)+\csc(2x)+...+\csc(2^nx)$ Is there any way to simplify this kind of problems and express them in smaller terms using De Moivre's Theorem?
We have $$\sin x=\frac{ e^{ix}-e^{-ix}}{2i}$$ and therefore its reciprocal is $$\csc x=\frac{2i}{ e^{ix}-e^{-ix}}$$ which can also be written as $$\csc x=\frac{2i\,e^{ix}}{ e^{2ix}-1}$$ Therefore your series becomes $$\frac{2i\,e^{ix}}{ e^{2ix}-1}+\frac{2i\,e^{2ix}}{ e^{4ix}-1}+\frac{2i\,e^{4ix}}{ e^{8ix}-1}+\cdots+\frac{2i\,e^{2^{n}ix}}{ e^{2^{n+1}ix}-1}$$ $$=2i\left(\frac{e^{ix}}{ e^{2ix}-1}+\frac{e^{2ix}}{ e^{4ix}-1}+\frac{e^{4ix}}{ e^{8ix}-1}+\cdots+\frac{e^{2^{n}ix}}{ e^{2^{n+1}ix}-1}\right)$$ $$=2i\left(\frac{e^{ix}+1-1}{ (e^{ix}-1)(e^{ix}+1)}+\frac{e^{2ix}+1-1}{ (e^{2ix}-1)(e^{2ix}+1)}+\cdots+\frac{e^{2^{n}ix}+1-1}{ (e^{2^{n}ix}-1)(e^{2^{n}ix}+1)}\right)$$ $$=2i\left(\left(\frac{1}{e^{ix}-1}-\frac{1}{e^{2ix}-1}\right)+\left(\frac{1}{e^{2ix}-1}-\frac{1}{e^{4ix}-1}\right)+\left(\frac{1}{e^{4ix}-1}-\frac{1}{e^{8ix}-1}\right)+\cdots+\left(\frac{1}{e^{2^{n}ix}-1}-\frac{1}{e^{2^{n+1}ix}-1}\right)\right)$$ which at last telescopes to $$2i\left(\frac{1}{e^{ix}-1}-\frac{1}{e^{2^{n+1}ix}-1}\right)$$ Now you may take it forward.
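The telescoped result can be verified numerically. The sketch below also checks it against the real form $\cot(x/2)-\cot(2^n x)$, which is what the final expression simplifies to (stated here as a check, not part of the original answer):

```python
import cmath
import math

def csc_sum(x, n):
    # csc(x) + csc(2x) + ... + csc(2^n x)
    return sum(1 / math.sin(2 ** k * x) for k in range(n + 1))

def telescoped(x, n):
    # 2i (1/(e^{ix}-1) - 1/(e^{2^{n+1} i x}-1)), the telescoped form above
    return 2j * (1 / (cmath.exp(1j * x) - 1)
                 - 1 / (cmath.exp(1j * 2 ** (n + 1) * x) - 1))

x, n = 0.3, 4
lhs = csc_sum(x, n)
rhs = telescoped(x, n)
```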
{ "language": "en", "url": "https://math.stackexchange.com/questions/4163707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the limit of the sequence $x_n = 0.1\underbrace{00 \ldots0}_{n}1$? Given a sequence, $$(x_n) = (0.101,0.1001,0.10001,\ldots)$$How do I proceed with finding the limit of this sequence? Note that I'm asking for a "method" to solve/find the limit. The best I can come up with is write $x_n = 0.1\underbrace{00 \ldots0}_{n}1$ and "guess" that the limit should be $0.1$ and prove that $$\left|0.1\underbrace{00 \ldots0}_{n}1 - 0.1\right|= 10^{-(n+2)} < \epsilon, \quad \forall \epsilon > 0$$ when $n\ge m=\lfloor \log_{10}(1/\epsilon) \rfloor$. But that being said, I want to know how to correctly and/or mathematically find the limit without just "guessing". And how do I find out such limits in general, for instance when the sequence is something like $(0.101001, 0.10010001,0.1000100001, \ldots)$ where my $x_n = 0.1\underbrace{00 \ldots0}_{n}1\underbrace{00 \ldots 0}_{n+1}1$
In general, this is not an easy task to do. There are a few tricks we can use though, especially if we have a closed form representation for the elements. For example, for the first example, we can write $x_n = 0.1 + 10^{-(n+2)}$. Then: \begin{align*} \lim_{n \to \infty} x_n &= \lim_{n\to \infty} (0.1 + 10^{-(n+2)}) \\ &= \lim_{n \to \infty} (0.1) + \lim_{n \to \infty} (10^{-(n+2)}) \\ &= \lim_{n \to \infty} (0.1) + 10^{-2} \lim_{n \to \infty} \left(\left(\frac{1}{10}\right)^n\right) \end{align*} We know the limit of a constant sequence is just the constant, and the limit of $a^{n}$ as $n \to \infty$ where $|a| < 1$ is $0$, so we can simplify to just $0.1 + 0 = 0.1$. For the second example you gave, we again look for a nice representation of $x_n$. What I see here is $x_n = 0.1 + 10^{-(n+2)} + 10^{-(2n+4)}$. Again, both of the second terms vanish in the limit, so we are left with just the constant part. In general, it will be helpful to know some rules about how limits work. For instance, you should try to find out (and prove!) when it is valid to write things like: * *$\lim(a_n+b_n) = \lim(a_n) + \lim(b_n)$ *$\lim(a_n b_n) = \lim(a_n)\lim(b_n)$ *$\lim(f(a_n)) = f(\lim(a_n))$ where $a_n, b_n$ are sequences and $f$ is a function.
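The index bookkeeping $x_n = 0.1 + 10^{-(n+2)}$ can be verified exactly with rationals (a small illustrative sketch; the helper names are chosen here):

```python
from fractions import Fraction

def x_n(n):
    # closed form from above: x_n = 1/10 + 10^(-(n+2)), done exactly
    return Fraction(1, 10) + Fraction(1, 10 ** (n + 2))

def from_digits(n):
    # the literal decimal 0.1 0...0 1 with n zeros between the two 1s
    return Fraction(10 ** (n + 1) + 1, 10 ** (n + 2))

gaps = [x_n(n) - Fraction(1, 10) for n in range(1, 6)]   # distance to the limit
```

The gap to $0.1$ is exactly $10^{-(n+2)}$, which vanishes as $n\to\infty$, so the limit is $0.1$.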
{ "language": "en", "url": "https://math.stackexchange.com/questions/4163881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
For how many integer values of $m$ parabola $y=(m-2)x^2+12x+m+3$ passes through only three quadrants? For how many integer values of $m$ the graph of parabola $y=(m-2)x^2+12x+m+3$ passes through only three quadrants? $1)0\qquad\qquad2)7\qquad\qquad3)12\qquad\qquad4)\text{infinity}$ If the parabola $ax^2+bx+c=0$ passes through three quadrant I think we should have $\frac ca>0$ and $\Delta>0$: $$\frac{m+3}{m-2}>0\Rightarrow m\in(-\infty,-3] \cup(2,+\infty)$$ $$\Delta'>0\Rightarrow 36-(m-2)(m+3)>0\Rightarrow m^2+m-42<0\Rightarrow m\in [-7,6]$$ Also for $m=2$ we have $y=12x+5$ and it passes through three quadrants. So $m$ can be $-7,-6,\cdots,-3, $ or $2,3,4,5,6$ . so there are $10$ possible values for $m$ but this isn't in the options. What am I missing?
One mistake is that $$ m^2+m-42<0$$ gives $m\in (-7,6)$ instead of $m\in [-7, 6]$. So $\{-7, 6\}$ are excluded. If one excludes also $m=2$ (which does not give a parabola), then there are only $7$ possible choices: $\{-6, -5, -4, -3, 3, 4, 5\}$.
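A brute-force check (sampling each graph on a grid; purely illustrative and not a proof) reproduces the seven values:

```python
def quadrants(m):
    # which open quadrants the graph y = (m-2)x^2 + 12x + (m+3) meets,
    # sampled on x in [-50, 50] with step 0.01
    quads = set()
    for k in range(-5000, 5001):
        if k == 0:
            continue
        x = k / 100
        y = (m - 2) * x * x + 12 * x + (m + 3)
        if abs(y) > 1e-9:          # skip samples sitting on an axis
            quads.add((x > 0, y > 0))
    return quads

# parabolas only (m != 2) meeting exactly three quadrants
three = {m for m in range(-9, 10) if m != 2 and len(quadrants(m)) == 3}
```

The boundary cases $m=-7$ and $m=6$ give perfect squares $-(3x-2)^2$ and $(2x+3)^2$, which touch only two quadrants.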
{ "language": "en", "url": "https://math.stackexchange.com/questions/4164033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
$\measuredangle C=120^\circ$ and two altitudes $AH$ and $BD$ are altitudes of $\triangle ABC$ and $\measuredangle ACB=120^\circ$. If $S_{\triangle HCD}=\dfrac{15\sqrt3}{4},$ find the area of $\triangle ABC$. $$S_{\triangle HCD}=\dfrac12\cdot CH\cdot CD\cdot\sin\measuredangle HCD=\dfrac{\sqrt3}{4}CH\cdot CD=\dfrac{15\sqrt{3}}{4}\ \implies CH\cdot CD=15$$ On the other hand $$S_{\triangle ABC}=\dfrac12\cdot AC\cdot BC\cdot\sin\measuredangle ACB=\dfrac{\sqrt3}{4}AC\cdot BC=?$$ I noted that $ABDH$ is inscribed, because $\measuredangle ADB=\measuredangle AHB=90^\circ$, so $$AC\cdot CD=BC\cdot CH.$$ I am stuck here. Thank you in advance!
Observe, $$\angle ACH=\angle BCD=60^{\circ}\implies \angle CAH=\angle CBD=30^{\circ}$$ In right triangle $ AHC$, $$\sin \angle CAH=\frac{CH}{AC}\implies AC=2\cdot CH $$ In right triangle $ BDC$, $$\sin \angle CBD=\frac{CD}{BC}\implies BC=2\cdot CD $$ Since you have arrived at $CH\cdot CD=15$, $$\begin{align*} \text{area}(\triangle ABC)&=\frac{1}{2}\cdot AC\cdot BC\cdot \sin \angle ACB \\ &=\frac{1}{2}\cdot (2\cdot CH)\cdot (2\cdot CD)\cdot \sin 120^{\circ}\\ &=\sqrt{3}\cdot CH\cdot CD\\ &=\boxed{15\sqrt{3}} \end{align*}$$
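The relations $AC=2\,CH$, $BC=2\,CD$ and the resulting ratio $S_{\triangle ABC}=4\,S_{\triangle HCD}$ can be verified with coordinates (an illustrative sketch; the side lengths $a,b$ are arbitrary positive choices):

```python
import math

def areas(a, b):
    # C at the origin, CA = b along the x-axis, CB = a at 120 degrees
    ang = math.radians(120)
    A = (b, 0.0)
    B = (a * math.cos(ang), a * math.sin(ang))
    D = (B[0], 0.0)                       # foot of altitude from B onto line CA
    u = (math.cos(ang), math.sin(ang))    # unit vector along CB
    t = A[0] * u[0] + A[1] * u[1]         # signed projection of A on CB
    H = (t * u[0], t * u[1])              # foot of altitude from A onto line CB
    tri = lambda P, Q: abs(P[0] * Q[1] - P[1] * Q[0]) / 2   # area of (C, P, Q)
    return tri(A, B), tri(H, D)           # S_ABC, S_HCD

S_abc, S_hcd = areas(3.0, 5.0)
```

With $S_{\triangle HCD}=\frac{15\sqrt3}{4}$, the factor of $4$ immediately gives $S_{\triangle ABC}=15\sqrt3$.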
{ "language": "en", "url": "https://math.stackexchange.com/questions/4164169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does Collinearity Justify This? I cannot make sense of one line in the given solution I am reading to this question: Problem: Let $A_0,A_1,\cdots,A_6$ be a regular $7$-gon. Prove that $\displaystyle \frac1{A_0A_1}=\frac1{A_0A_2}+\frac1{A_0A_3}$. Solution: Let $\varepsilon = e^{2i\pi/7}$. Take $a_k=\varepsilon^k$ to be the complex coordinates of $A_k$ where $k$ ranges from $0$ to $6$. Rotate $a_1$ (to $a_1^\prime$) and $a_2$ (to $a_2^\prime$) around $a_0$ by $2\pi/7$ and $\pi/7$ radians, respectively, so that they are collinear with $a_3$. It suffices, now, to show that: \begin{equation} \frac1{a_1^\prime-1}=\frac1{a_2^\prime-1}+\frac1{a_3-1} \end{equation} Why are we justified in writing the above, as opposed to: \begin{equation} \frac1{|a_1^\prime-1|}=\frac1{|a_2^\prime-1|}+\frac1{|a_3-1|} ? \end{equation} I suspect it has something to do with the fact all three lie on one line, but I am missing something obvious?
Since you've rotated $a_1$ and $a_2$ around $a_0=1$ to get $a'_1$ and $a'_2$, we have more than just collinearity of $(a'_1-1, a'_2-1,a_3-1)$: the line they share is a ray going out from the origin. So what you need is that $$ \frac{1}{z_1} = \frac{1}{z_2} + \frac{1}{z_3} \iff \frac{1}{|z_1|} = \frac{1}{|z_2|} + \frac{1}{|z_3|} $$ when $z_1, z_2, z_3$ have the same argument. Call the common argument $\theta$ so we have $z_n = e^{i\theta}|z_n|$. But then $$ \frac{1}{z_n} = e^{-i\theta}\frac{1}{|z_n|}$$ so every term in the left equation is simply a constant (that is, $e^{-i\theta}$) times the corresponding term in the right equation.
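The original heptagon identity itself is easy to verify numerically from the coordinates $a_k=\varepsilon^k$ (an illustrative check; since $A_0A_j=|1-\varepsilon^j|=2\sin(j\pi/7)$, it is the same as $\csc\frac{\pi}{7}=\csc\frac{2\pi}{7}+\csc\frac{3\pi}{7}$):

```python
import cmath
import math

eps = cmath.exp(2j * math.pi / 7)
A = [eps ** k for k in range(7)]   # vertices A_0, ..., A_6 of the regular 7-gon

def d(j):
    # chord length A_0 A_j = |1 - eps^j|
    return abs(A[0] - A[j])

lhs = 1 / d(1)
rhs = 1 / d(2) + 1 / d(3)
```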
{ "language": "en", "url": "https://math.stackexchange.com/questions/4164432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conditions on Lebesgue's Dominated Convergence In Shiryaev's probability, the formulation of the Lebesgue Dominated Convergence Theorem is as it follows: Let $\eta, \xi, \xi_1 \xi_2 \ldots $ be random variables such that $| \xi_n | \leq \eta$, $E \eta < \infty$ and $\xi_n \to \xi$ almost-surely. Then $E|\xi| < \infty$ and: $$ E \xi_n \to E \xi $$ and $$ E |\xi_n - \xi | \to 0 $$ I know that if I remove the "dominated" from the DCT, the convergence can fail, but I'm thinking how critical the other conditions of the theorem are to ensure convergence. A few questions arised: * *If I drop the condition that $\xi_n \to \xi$ almost-surely and choose other modes of convergence, will the theorem still hold? Like, how "strong" should the convergence from $\xi_n \to \xi$ be? *Why state it in separate that $E \xi_n \to E \xi$ AND $E|\xi_n - \xi| \to 0 $? Is there a case where $E \xi_n \to E \xi$, but $E|\xi_n - \xi| \to 0 $ doesn't hold?
Convergence in probability is enough for the conclusion of the DCT to hold. Conversely, if $E|\xi_n-\xi| \to 0$ then $\xi_n \to \xi$ in probability, so the $L^1$ statement is genuinely stronger than the convergence of the expectations alone. As for the second question: if $X\sim N(0,1)$ and $X_n=-X$ for all $n$, then $EX_n \to EX$ (both are $0$), but $E|X_n-X| = E|2X| = 2E|X| > 0$ does not tend to $0$.
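The counterexample can be illustrated with a quick Monte Carlo estimate (the sample size and seed are arbitrary choices here): with $X_n=-X$ we have $EX_n=0=EX$, yet $E|X_n-X|=E|2X|=2\sqrt{2/\pi}\approx 1.596$ for every $n$:

```python
import math
import random

random.seed(12345)
N = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]   # samples of X ~ N(0,1)

mean_xn = sum(-x for x in xs) / N                  # empirical E X_n = E(-X)
mean_gap = sum(abs(-x - x) for x in xs) / N        # empirical E|X_n - X|
exact_gap = 2 * math.sqrt(2 / math.pi)             # E|2X| = 2 E|X|
```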
{ "language": "en", "url": "https://math.stackexchange.com/questions/4164608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a rigorous proof for this beautiful property of the function of type $f(x) \sin x$? I just noticed today that the graph of $x \sin x$ is like a $\sin x$ trapped in $x$ and $-x$. Upon this realisation, I tried to plot some graphs by hand, others by desmos. I tried to investigate this property according to which always $\sin x$ would be trapped inside the $+f(x)$ and $-f(x)$ for a function $g(x)=f(x) \sin x$ and its shape would change in order to fit the function at varying x coordinates. But, rather than doing induction I wanted to prove that this type of property will always be valid. I defined a function; $$g(x) = f(x) \sin x$$ $$-f(x)\le g(x) \le f(x)$$ $$-1 \le \sin x\le 1$$ Now we can argue that $f(x)$ will act like a varying amplitude for $\sin x$ wave(/graph) and thus it should be trapped. But this is not satisfactory enough. Thus my question is, “Is there a rigorous proof for this sort of property?” Following are the graphs I tried to analyse the property off of: 1. $x \cdot \sin x$ 2. $x^2 \sin x$ 3. $x^3 \sin x$ 4. $\frac{\sin x}{x^2+1}$ 5. $\ln x \cdot \sin x$ 6. $(3x^2-2x^3) \sin x$ 7. $((1-x^{\frac{2}3})^{\frac{3}2}) \sin x$ 8. $\sqrt{(x-1)(x-2)(x-3)} \cdot \sin x $ 9. $x \sqrt{\frac{x+5}{x-5}} \cdot \sin x $
Notice that $\sin(x)$ is bounded, indeed $|\sin(x)|\leq 1$. This gives us: $$|f(x)\sin(x)|\leq|f(x)|\cdot|\sin(x)|\leq|f(x)|\cdot 1=|f(x)|.$$ In other words, the graph of $g(x)=f(x)\sin(x)$ can never rise above the graph of $|f(x)|$ or fall below the graph of $-|f(x)|$; where $f(x)\ge 0$ these envelopes are exactly $f(x)$ and $-f(x)$. Can you try to generalize this for a function $h(x)$ such that $|h(x)|\leq n$ for $n$ a natural number? What would happen to the graph of $h(x)f(x)$?
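A small numerical spot-check of this envelope bound (a sketch; the particular functions, taken from the question's list, and the grid are arbitrary choices):

```python
import math

def envelope_holds(f, xs, tol=1e-12):
    """Check |f(x) * sin(x)| <= |f(x)| (up to rounding) at each sample point."""
    return all(abs(f(x) * math.sin(x)) <= abs(f(x)) + tol for x in xs)

xs = [k / 100.0 for k in range(1, 2001)]  # 0.01, 0.02, ..., 20.0

results = [
    envelope_holds(lambda x: x, xs),                    # x sin x
    envelope_holds(lambda x: x * x, xs),                # x^2 sin x
    envelope_holds(math.log, xs),                       # ln(x) sin x
    envelope_holds(lambda x: 1.0 / (x * x + 1.0), xs),  # sin(x) / (x^2 + 1)
]
```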
{ "language": "en", "url": "https://math.stackexchange.com/questions/4164829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Understanding Theorem 12.38 Bruckner's Real Analysis The following is a theorem from Bruckner's Real Analysis: How the underlined formulas (red and green) hold?
Red: We can write $y_1 = \frac{\alpha}{h_0} \left( \frac{h_0}{\alpha} y + x_0 \right)$ and estimate its norm as follows: $$||y_1|| = \left| \frac{\alpha}{h_0} \right| \left| \left| \left( \frac{h_0}{\alpha} y + x_0 \right) \right| \right| \geq \left| \frac{\alpha}{h_0} \right| \inf_y\left| \left| \left( \frac{h_0}{\alpha} y + x_0 \right) \right| \right| = \left| \frac{\alpha}{h_0} \right| h_0 = |\alpha|$$ Green: You already have that $||y^*(y_1)|| = |\alpha| \leq ||y_1||$. If you pick $\alpha = ||y_1||$, you know equality holds, and after normalisation you get the desired equality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Having trouble understanding proof of continuity I am currently trying to understand continuity. I was given the following example: \begin{equation*} D=[0,1]\cup\{2\},~~~f(x)=\begin{cases}x^2 & 0\leq x<1,\\ 0 & x = 1, \\ 1&x=2.\end{cases} \end{equation*} Obviously: $f$ is continuous at $x\in[0,1)$. $1)~~~\underline{x_0=1:}$ Let $x_n=1-\frac{1}{n}$ $(n\in\mathbb{N}).$ Then $x_n\rightarrow1$ but $f(x_n)=x^2_n\rightarrow1\neq0=f(1).$ Therefore: $f~$ is not continuous at $x_0=1$. $2)~~~\underline{x_0=2:}$ Let $(x_n)$ be a sequence in $D$ and $x_n\rightarrow2$. Then $x_n=2~$ and $f(x_n)=1$ for almost all $n\in\mathbb{N}$. Further: $f(x_n)\rightarrow1=f(2)$. Therefore: $f$ is continuous at $x_0=2$. I'm having troubles understanding the explanations. How come I can't just apply $~2)$'s$~$ argument to $~1)~$ (or vice versa). For example, why can't I just say (in case of $x_0=1$): Let $(x_n)$ be a sequence in $D$ and $x_n\rightarrow1$. Then $x_n=1~$ and $f(x_n)=0$ for almost all $n\in\mathbb{N}$. Further: $f(x_n)\rightarrow0=f(1)$.
If $(x_n)$ is a sequence in $D$ such that $x_n \to 1$ it is not necessarily true that $x_n=1$ for all large $n$. (Consider the given example $x_n = 1-\frac{1}{n}$.) If $(x_n)$ is a sequence in $D$ such that $x_n \to 2$, we must necessarily have $x_n=2$ [for all large $n$] because there are no other points near $2$ in $D$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $(x,y)=(4,5), (5,4)$ are the only positive integer solutions to $x+y=3n, x^2+y^2-xy=7n$ Show that $(x,y)=(4,5), (5,4)$ are the only positive integer solutions to $x+y=3n, x^2+y^2-xy=7n.$ I'm not very certain how to proceed on this problem. I know $x^2+y^2=(x+y)^2-2xy,$ so we essentially have $x+y=3n, (x+y)^2-3xy=7n$ for positive integers $x, y, n.$ However, this doesn't really help. I've also tried writing it as a fraction and doing some algebraic manipulations, but I haven't gotten anywhere either. May I have some help? Thanks in advance.
Because $x$ and $y$ are exchangeable, I represent them as $$ x=m+d \qquad y=m-d $$ Then $$ x+y=3n \qquad \to \qquad 2m = 3n \qquad \to \qquad 7n= 14/3m \tag 1$$ $$ x^2+y^2-xy = 7n \qquad \to \\ 2m^2+2d^2 - (m^2-d^2)= 7n \qquad\to \\ m^2+3d^2 = 14/3m \qquad\to \tag 2 $$ $$ m^2-14/3m + (7/3)^2 = (7/3)^2-3d^2 \qquad\to $$ $$ m= 7/3 \pm \sqrt{ (7/3)^2-3d^2} \tag 3 $$ The integer resp. half-integer values of the term $7/3 \pm \sqrt{ (7/3)^2-3d^2}$ can be enumerated for $d \in \{0,1/2,1,3/2,...\}$, and for $d \gt 1$ the square root becomes imaginary. Of these, only $d=0$ and $d=1/2$ give an integer or half-integer $m$ and lead to the solutions: d m x=m+d y=m-d --------------------------------------- for 7/3 + sqrt(...) 0 4.66666666667 4+2/3 4+2/3 0.5 4.50000000000 5 4 <---- the single integral solution 1 3.89680525327 <fractional> --------------------------------------- for 7/3 - sqrt(...) 0 0 "trivial" 0.5 0.166666666667 <fractional> 1 0.769861413392 <fractional> Conclusion: The only positive integer solutions are $(x,y)=(5,4)$ and $(4,5)$
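The enumeration can also be cross-checked by brute force. Since $x^2-xy+y^2 \ge (x+y)^2/4$, any solution satisfies $(x+y)^2/4 \le 7(x+y)/3$, i.e. $x+y \le 28/3$, so a search window of $100$ is far more than enough (a quick sketch):

```python
# Brute-force search for positive integers with x + y = 3n and
# x^2 + y^2 - xy = 7n for the same n.
solutions = []
for x in range(1, 101):
    for y in range(1, 101):
        s = x + y
        if s % 3 == 0 and x * x + y * y - x * y == 7 * (s // 3):
            solutions.append((x, y))
```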
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Indirect application of dominated convergence theorem Let $f:[0,1]\to \mathbb{C}$ be Lebesgue integrable and be continuous at $1$. Prove that $$\lim_{n\to \infty} n\int_0^1 f(x)x^n dx=f(1)$$ My attempt: In order to use the fact that $f$ is continuous at $1$ i.e. $|x-1|<\delta \implies |f(x)-f(1)|<\epsilon$. So if $x\in (1-\delta,1)$, then either $f(x)<f(1)+\epsilon$ or $f(x)>f(1)-\epsilon$. In the following split up of integral, I am considering $f(x)<f(1)+\epsilon$: \begin{align*} \lim_{n\to \infty} n\int_0^1 f(x)x^n dx &=\lim_{n\to \infty} n\int_0^{1-\delta} f(x)x^n dx + \lim_{n\to \infty}n\int_{1-\delta}^1 f(x)x^n dx \\ &< \lim_{n\to \infty} n\int_0^{1-\delta} f(x)x^n dx + (f(1)+\epsilon) \lim_{n\to \infty} n\int_{1-\delta}^1 x^n dx \\ &= \lim_{n\to \infty} n\int_0^{1-\delta} f(x)x^n dx + (f(1)+\epsilon) \lim_{n\to \infty} n \left[\frac{x^{n+1}}{n+1}\right]_{1-\delta}^1 \\ &= \lim_{n\to \infty} n\int_0^{1-\delta} f(x)x^n dx + (f(1)+\epsilon) \end{align*} But I'm not sure how to proceed and apply LDCT?
On $(0,1-\delta)$ we have $|nf(x)x^{n}|\leq n(1-\delta)^{n} |f(x)|$. Note that $n(1-\delta)^{n} \to 0$ as $ n \to \infty$. In particular this sequence is bounded so we have a dominating function of the type constant times $|f(x)|$. You should also note that $|f(x)-f(1)| <\epsilon$ implies $f(x) <f(1)+\epsilon$ and $f(x) >f(1)-\epsilon$. You can get $\lim \sup n\int f(x)x^{n}dx \leq f(1)$ and $\lim \inf n\int f(x)x^{n}dx \geq f(1)$ to finish the proof.
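A numerical illustration of the limit (a sketch only; the choice $f=\cos$ and the midpoint-rule grid size are arbitrary):

```python
import math

def scaled_integral(f, n, panels=400_000):
    """Midpoint-rule approximation of n * integral_0^1 f(x) x^n dx."""
    h = 1.0 / panels
    total = 0.0
    for k in range(panels):
        x = (k + 0.5) * h
        total += f(x) * x ** n
    return n * total * h

# For f continuous at 1, the claim is that this tends to f(1).
val = scaled_integral(math.cos, 2000)  # should be close to cos(1)
```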
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Generalized Method to find nth power of matrix in $P^n = 5I - 8P$ Question: Let $I$ be an identity matrix of order $2 \times 2$ and $P = \begin{bmatrix} 2 & -1 \\ 5 & -3 \\ \end{bmatrix}$. Then the value of $n ∈ N$ for which $P^n = 5I - 8P$ is equal to _______. Answer: 6 Question Source: JEE Mains $18^{th}$ March Shift-2 2021 By characteristic Equation: $P^2-(\operatorname {Tr}(P))P+(\det (P))I=0$ we can find $n=6$ by hit and trial. i.e. multiplying equation by P gives $P^3=(\operatorname {Tr}(P))P^2-(\det (P))P = (\operatorname {Tr}(P))[(\operatorname {Tr}(P))P+(\det(P))I]$ And going on solving for $P^6$. But is there a generalized method because it can't be done for high values of $n$ (eg:$100$)?
We can diagonalize the matrix $P$ because it has distinct eigenvalues. One of the benefits of this is in finding matrix powers. For your example, we have eigenvalues $$\lambda_{1,2} = \frac{1}{2} \left(\mp\sqrt{5}-1\right)$$ The corresponding eigenvectors are $$v_{1,2} = \begin{pmatrix} \dfrac{1}{10} \left(5\mp\sqrt{5}\right) \\ 1\end{pmatrix}$$ Placing the eigenvectors as the columns of $V$, we can write $P = V D V^{-1}$ as $$\begin{pmatrix} \dfrac{1}{10} \left(5-\sqrt{5}\right) & \dfrac{1}{10} \left(5+\sqrt{5}\right) \\ 1 & 1 \\ \end{pmatrix} \begin{pmatrix} \dfrac{1}{2} \left(-\sqrt{5}-1\right) & 0\\ 0 & \dfrac{1}{2} \left(\sqrt{5}-1\right) \end{pmatrix}\begin{pmatrix} -\sqrt{5} & \dfrac{1}{2} \left(\sqrt{5}+1\right) \\ \sqrt{5} & \dfrac{1}{2} \left(1-\sqrt{5}\right) \\ \end{pmatrix}$$ Now $P^n = V D^n V^{-1}$ is $$\begin{pmatrix} \dfrac{1}{10} \left(5-\sqrt{5}\right) & \dfrac{1}{10} \left(5+\sqrt{5}\right) \\ 1 & 1 \\ \end{pmatrix} \begin{pmatrix} \left(\dfrac{1}{2} \left(-\sqrt{5}-1\right)\right)^n & 0\\ 0 & \left(\dfrac{1}{2} \left(\sqrt{5}-1\right)\right)^n \end{pmatrix}\begin{pmatrix} -\sqrt{5} & \dfrac{1}{2} \left(\sqrt{5}+1\right) \\ \sqrt{5} & \dfrac{1}{2} \left(1-\sqrt{5}\right) \\ \end{pmatrix}$$
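Whichever route one takes, the target identity itself is quick to confirm with exact integer arithmetic (a plain-Python sketch, no libraries; the search window $1..12$ is an arbitrary choice):

```python
def matmul(a, b):
    """Product of 2x2 matrices given as nested lists."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

P = [[2, -1], [5, -3]]
target = [[-11, 8], [-40, 29]]  # 5I - 8P

power = [[1, 0], [0, 1]]
hits = []
for n in range(1, 13):
    power = matmul(power, P)  # power is now P^n
    if power == target:
        hits.append(n)
```

Only $n = 6$ turns up in the window.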
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 2 }
Using definition prove that $\frac{1}{n}+(-1)^n$ is not Cauchy Using definition prove that $\frac{1}{n}+(-1)^n$ is not Cauchy Let $n,m \in \mathbb{N}$ and $m=2n+1$ then $|x_{m}-x_{2n}|=|x_{2n+1}-x_{2n}|=\bigg|\frac{1}{2n+1}-1-\frac{1}{2n}-1\bigg|=\bigg|2+\frac{1}{2n}-\frac{1}{2n+1}\bigg|>2$. Thus, taking $\epsilon = 2$, this contradicts the definition: for every $\epsilon > 0$ there should exist $N \in \mathbb N$ such that for all $n,m > N$ we have $|x_n - x_m| < \epsilon$. Can anyone verify my answer?
Your answer is right since, as you say, if $\varepsilon\leq 2$ the definition is not fulfilled.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve $3xdy = y(1+x\sin(x) - 3y^3 \sin(x))dx$ Solve $3xdy = y(1+x\sin(x) - 3y^3 \sin(x))dx$ My Attempt $$3xy' = y(1+x\sin(x) - 3y^3 \sin(x)) \rightarrow 3xy'-y(1+x\sin(x)) = -3y^3\sin(x)$$ $$y'-y\frac{(1+x\sin(x))}{3x}= -\frac{y^3\sin(x)}{x}$$ To me, this is the Bernoulli form. Hence let $u = y^{1-3} = y^{-2}$ and $u' = -2y^{-3}y'$ Divide by $y^3$ $$\frac{1}{y^3}y'-\frac{1}{y^2}\frac{(1+x\sin(x))}{3x}= -\frac{\sin(x)}{x}$$ $$\frac{-1}{2}u'-u\frac{(1+x\sin(x))}{3x}= -\frac{\sin(x)}{x}$$ $$u'+u\frac{2(1+x\sin(x))}{3x}= \frac{2\sin(x)}{x}$$ The integrating factor and several steps become very messy, so it is probably wrong somewhere here. Is this the right approach?
Your approach is correct, but you have made a mistake. The correct differential equation would be: $$y'-y\frac {1+x\sin x}{3x}=-\frac {y^{\mathbf 4} \sin x}{x}$$ This is indeed the Bernoulli form, so the procedure ahead is clear. The IF would not come out to be messy or complicated, in fact it is of the form $x e^{-\cos x}$, which when multiplied with $\frac {\sin x}{x}$ gives an elementary integral solvable by substituting $t=\cos x$..
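The integrating factor can be sanity-checked numerically: with the corrected substitution $u=y^{-3}$, the linear equation becomes $u' + \left(\frac1x+\sin x\right)u = \frac{3\sin x}{x}$, and an integrating factor $\mu$ must satisfy $\mu' = \mu\left(\frac1x+\sin x\right)$. A quick central-difference check (the sample points are arbitrary):

```python
import math

def mu(x):
    """Proposed integrating factor x * e^(-cos x)."""
    return x * math.exp(-math.cos(x))

def p(x):
    """Coefficient of u in the linearized equation u' + p(x) u = 3 sin(x)/x."""
    return 1.0 / x + math.sin(x)

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2.0 * h)

max_err = max(abs(num_deriv(mu, x) - mu(x) * p(x)) for x in (0.5, 1.0, 2.0, 3.5, 7.0))
```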
{ "language": "en", "url": "https://math.stackexchange.com/questions/4165999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rudin Principles of Mathematical Analysis Chapter 5 Exercise 15 I tried to solve it using generalized mean value theorem (Theorem 5.9 in the book), but ran into a wrong conclusion. Let $f$ be twice-differentiable real function on $(a, \infty)$. Let $M_{0}, M_{1}, M_{2}$ be the least upper bounds of $|f(x)|, |f'(x)|, |f''(x)|$, respectively, on $(a, \infty)$. Prove that $$M_{1}^{2} \leq 4M_{2}M_{0}$$ Fix $y\in (a,\infty)$. Then for any $z\in(a,\infty)$ where $y<z$, for some $t\in(y,z)$, $$[f(z)-f(y)]f'(t)=[f'(z)-f'(y)]f(t)$$ Since $y\neq z$, $$\frac{[f(z)-f(y)]f'(t)}{z-y}=\frac{[f'(z)-f'(y)]f(t)}{z-y}$$ If I take the limit as $z$ goes to $y$, then it becomes: $$[f'(y)]^{2}=f''(y)f(y)$$ which implies: $$[f'(y)]^{2} \leq M_{2}M_{0}$$ and therefore $$M_{1}^{2} \leq M_{2}M_{0}$$ But clearly, this is wrong from the hint in the book since $M_{1}^{2}= 4M_{2}M_{0}$ can actually happen. I'm not sure what I did wrong. Could you point to where the error occurred?
Your error is in this line $$[f(z)-f(y)]f'(t)=[f'(z)-f'(y)]f(t)$$ which is not true. The generalized mean value theorem applied to the pair of functions $f$ and $f'$ would instead read $$[f(z)-f(y)]f''(t)=[f'(z)-f'(y)]f'(t)$$ for some $t\in(y,z)$, and passing to the limit $z\to y$ in that identity only yields the triviality $f'(y)f''(y)=f''(y)f'(y)$, not $[f'(y)]^2 = f''(y)f(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4166175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the probability that the message consists of $2 A$’s , $2 B$’s, and $1 C\;$? A message of length $5$ letters is to be sent. You can pick three letters $A$, $B$ and $C$ to make a message of length $5$. So the total sample space is $243 = 3^5$ Two Questions: * *What is the probability of that the message consists of $2 A$’s , $2 B$’s and $1 C$ ? *And what is the probability that it contains at least $1 A$ in the message ? First Question: At first I thought it was ${5\choose2} + {5\choose2}+ 5 = 10 + 10 + 5 = 25$. So the probability $25/243 = 0.1029 $ or ${5\choose2} + {5\choose2} = 20$, so the probability is $20/243 = 0.082$ but that wasn’t correct. Help wanted for both questions. Hints, proof or answer.
NOTE: The actual question has been changed. Please find below an answer to the original question. You can take this as a 'hint' to solving the new question. Also, I agree, the new question is best solved by enumerating number of possibilities with "no A's". The second question asks (asked!) "What if there is one A?". If so, these are the possibilities: $$\{A,C,C,C,C\}$$ $$\{ A, B, C,C,C \}$$ $$\{ A, B, B,C,C \}$$ $$\{ A, B,B,B,C\}$$ $$\{A, B,B,B,B\}$$ along with all the permutations of these. That would be: $$\begin{aligned}\pmatrix{5\\1}+ \pmatrix{5\\1}\pmatrix{4\\1}+\pmatrix{5\\1}\pmatrix{4\\2}+ \pmatrix{5\\1}\pmatrix{4\\3} + \pmatrix{5\\1} &= 5\cdot\left(1+4+6+4+1\right)\\&=80 \end{aligned}$$ so $$P(\text{one }A)=\frac{80}{243}\approx 0.329218$$
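Both parts of the (new) question can also be settled by enumerating all $3^5 = 243$ messages directly (a short sketch): the count for two A's, two B's and one C is $\binom{5}{2}\binom{3}{2} = 30$, and "at least one A" is easiest by complement, $243 - 2^5 = 211$.

```python
from itertools import product

messages = list(product("ABC", repeat=5))  # all 3^5 = 243 messages

two_a_two_b_one_c = sum(
    1 for m in messages
    if m.count("A") == 2 and m.count("B") == 2 and m.count("C") == 1
)
exactly_one_a = sum(1 for m in messages if m.count("A") == 1)
at_least_one_a = sum(1 for m in messages if "A" in m)
```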
{ "language": "en", "url": "https://math.stackexchange.com/questions/4166338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Finding cumulative distribution function, given density function using integration I am trying to find the cumulative distribution function given the following density function: $$f_X(x)= \begin{cases} \frac{2}{3\sqrt{x}}, &0\leq x < \frac{9}{16}\\ 0, &\text{otherwise.} \end{cases} $$ I know I need to integrate $f_X(x)$ to find $F_X(x)$ but I am unsure what the initial value for this integral must be. Would this be correct integral, $\int\limits_{0}^{x}f_X(x)dx=\frac{4\sqrt{x}}{3}=F_X(x)$?
Yes, this is correct $$ F_X(x) = P(X \le x) = \int_{-\infty}^x f_X(t)dt = \int_{0}^x f_X(t)dt = \frac{4}{3}\sqrt{x}, $$ for $x\in[0, 9/16)$, for $x \ge 9/16$, $F_X(x) = 1$, and for $x<0$, $F_X(x) = 0$. Check $$ \int_{0}^{9/16} f_X(x)dx = F_X(9/16) - F_X(0) = 1 - 0 =1. $$
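A quick numerical cross-check of the antiderivative from the question, $F_X(x)=\frac{4}{3}\sqrt{x}$ (a sketch; the midpoint-rule grid and the sample points are arbitrary):

```python
import math

def f(x):
    """The given density."""
    return 2.0 / (3.0 * math.sqrt(x)) if 0.0 < x < 9.0 / 16.0 else 0.0

def cdf_numeric(x, panels=200_000):
    """Midpoint-rule approximation of integral_0^x f(t) dt."""
    h = x / panels
    return sum(f((k + 0.5) * h) for k in range(panels)) * h

err = max(abs(cdf_numeric(x) - 4.0 * math.sqrt(x) / 3.0) for x in (0.1, 0.3, 0.5))
total_mass = cdf_numeric(9.0 / 16.0)  # total probability, should be close to 1
```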
{ "language": "en", "url": "https://math.stackexchange.com/questions/4166501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $x$ if $\cot(x)=\csc(12^{\circ})-\sqrt{3}$ Find $x$ in degrees if $\cot(x)=\csc(12^{\circ})-\sqrt{3}$ My attempt: $$\cot (x)=\frac{1}{\sin (12^{\circ})}-2 \sin \left(60^{\circ}\right)$$ $$\Rightarrow \cot x=\frac{1-2 \sin (12^{\circ}) \sin (60^{\circ})}{\sin \left(12^{\circ}\right)}$$ $$\Rightarrow \cot x=\frac{1-\cos 48^{\circ}+\cos 72^{\circ}}{\sin \left(12^{\circ}\right)}$$ Now let $, \theta=12^{\circ},s=\sin(\theta)$, then we get $$\cot x=\frac{1-\cos (4 \theta)+\cos (6\theta)}{\sin (\theta)}$$ Converting to rational function in $s$, we get $$\cot x=\frac{-32 s^{6}+40 s^{4}-10 s^{2}+1}{s}$$
This is a very similar yet a little different question compared to this problem, and the answer by @albert chan paves the way for a similar solution to this problem. $$ \sin(12°) = \sin(30°-18°) = \sin(30°)\cos(18°) - \cos(30°)\sin(18°)$$ $$= {1\over2} (\cos(18°) - \sqrt3 \sin(18°))$$ Converting $\sin$ to $\csc$ and rationalizing the denominator: $$\csc(12°) = \left({2 \over \cos(18°) - \sqrt3 \sin(18°)}\right) \left({\cos(18°) + \sqrt3 \sin(18°) \over \cos(18°) + \sqrt3 \sin(18°)}\right)$$ $$= {2(\cos(18°) + \sqrt3 \sin(18°)) \over \cos^2(18°) - 3\sin^2(18°)} $$ Using $\cot(x)\sin(x)=\cos(x)$ for numerator and Pythagorean trig identity for denominator: $$= \left({2\sin(18°) \over 1 -4 \sin^2(18°)}\right) (\cot(18°) + \sqrt3) $$ Let $s=\sin(18°)$. From the multiple-angle formula, $$\sin(90°) = \sin(5 \times 18°) = 16s^5 - 20s^3 + 5s = 1,$$ so $$16s^5 - 20s^3 + 5s - 1 = (s-1)(4s^2+2s-1)^2 = 0.$$ Since $s\neq 1$: $$4s^2+2s-1 = 0 \implies {2s \over 1-4s^2} = 1$$ $$\fbox{$ \cot(x) = (\cot(18°) + \sqrt3) - \sqrt3 = \cot(18°)$}$$
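A quick floating-point confirmation that $x = 18°$ satisfies the original equation:

```python
import math

lhs = 1.0 / math.sin(math.radians(12.0)) - math.sqrt(3.0)  # csc(12 deg) - sqrt(3)
rhs = 1.0 / math.tan(math.radians(18.0))                   # cot(18 deg)
```

Both sides evaluate to about $3.0777$.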
{ "language": "en", "url": "https://math.stackexchange.com/questions/4166594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Geodesic deviation in flat space Suppose that $x^\mu(t,s)$ represents a family of curves. Let $v^\mu$ represents the the tangent vector to a curve $x^\mu(t,s_0)$ with $s_0$ fixed that is $v^{\mu}=\partial x^{\mu} / \partial t$ and deviation vector is given by $\xi^{\alpha}=\partial x^{\alpha} / \partial s$. In the usual derivation of the geodesic deviation equation it is showed that $\xi$ is lie transposed through $v$ that is $$L_v\xi=0$$ where $L$ stands for the lie derivative. To make the discussion more clear let us suppose we are in flat spacetime with usual Cartesian coordinates $x^0=t,x^1,x^2,x^3$. We choose $x^\mu(t,s)=(t,st,0,0)$ and so $v^\mu=(1,s,0,0)$ and $\xi^{\alpha}=(0,t,0,0)$ we have $$L_v\xi=\left[\frac{\partial}{\partial t}+s\frac{\partial}{\partial x^{1}},t\frac{\partial}{\partial x^{1}} \right]=\frac{\partial}{\partial x^1}-\frac{\partial s}{\partial x^1}\frac{\partial}{\partial x^1}$$ So we choose the parameter $s$ for example to $2x^1$ we would have $L_v\xi \ne0$ Isn't this a contradiction?
Away from $t=0$ you can think of $x$ as a regular parametrization of a surface in your 4D space. The vector fields you call $v$ and $\xi$ are just the coordinate vector fields usually called $x_t$ and $x_s$. Their Lie bracket vanishes because $x_{st}=x_{ts}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4166777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Distinguishing differentiable and continuously differentiable functions I came across this question as stated below: let $g(x) = xf(x)$ where $ f(x)= \begin{cases} x \hspace{0.1cm}\sin(\frac{1} {x} ) &\text{if}\ x\neq 0\\ 0 &\text{if}\, x= 0 \end{cases} $ Show that $g(x)$ is differentiable at $x = 0 \hspace{0.2cm}$but $g'(x)$ is not continuous at $x=0$ Please note that I do NOT want answers to the question What I want to know is whether functions can exist which are differentiable but their derivative isn't continuous at that point. And, if they do exist, why would the derivative be discontinuous but the function be differentiable? Until now, whenever they asked to check differentiability, instead of going for the limit definition, I directly differentiate the function and then check the continuity of the new function which clearly goes wrong as in this example. Why are there functions that behave like that? *To know exactly what I mean to ask, re-read the lines below the bold sentence.
For most functions that we can apply the usual "rules" of differentiation to (such as the product rule), the derivative will be continuous over any interval in which it exists. That is convenient, as it saves a lot of laborious delta-epsilon work. In this particular case it is not hard to confirm that the derivative exists at all non-zero values of $x$, that the usual convenient rules of differentiation give the same formula for the derivative at all non-zero $x,$ and that there is no possible value of the derivative at $x$ that would make the derivative continuous there. On the other hand, if we compute the derivative from first principles (delta-epsilon), we find that it exists at $x = 0$ and we can find the value of the derivative there. For an intuitive notion of how to construct a function with such bizarre properties as this, note that the factor $\sin(1/x)$ puts an infinite number of oscillations into the function near $x = 0.$ The factor $x^2$ (in $g(x) = x f(x) = x^2 \sin(1/x)$ when $x\neq 0$) "suppresses" the effect of those oscillations on the slope of the secant lines through $(0,g(0)),$ but it still allows the derivative oscillate between values that don't converge as we approach $x = 0$ from either side. If we consider $f(x)$ then the oscillations of $\sin(1/x)$ near $x = 0$ would prevent convergence of the slopes of the secant lines at $(0,f(0))$. On the other hand, the derivative of $x^2 f(x)$ is continuous. That is, if we only multiply $\sin(1/x)$ by $x$ when $x \neq 0,$ it does not "suppress" the oscillations enough to allow defining the derivative at $x=0$; but if we multiply by $x^3$ it "suppresses" the oscillations so thoroughly that it suppresses the discontinuity of the derivative too.
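To make this concrete, here is a small numeric sketch with $g(x)=x^2\sin(1/x)$ (the sample points are arbitrary): the difference quotients at $0$ shrink like $|h|$, so $g'(0)=0$, while $g'(x)=2x\sin(1/x)-\cos(1/x)$ keeps hitting values near $\pm1$ arbitrarily close to $0$.

```python
import math

def g(x):
    return x * x * math.sin(1.0 / x) if x != 0.0 else 0.0

def g_prime(x):
    """Derivative for x != 0, from the product and chain rules."""
    return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# Difference quotients at 0: |g(h)/h| = |h sin(1/h)| <= |h|, so g'(0) = 0.
hs = [10.0 ** -k for k in range(1, 9)]
quotients = [abs(g(h) / h) for h in hs]

# Yet g' keeps taking values near -1 and +1 arbitrarily close to 0.
near_minus_one = [g_prime(1.0 / (2.0 * math.pi * k)) for k in (10, 100, 1000)]
near_plus_one = [g_prime(1.0 / ((2 * k + 1) * math.pi)) for k in (10, 100, 1000)]
```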
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Length of the straight path connecting two points in the taxicab geometry? Are such straight paths even permissible in this geometry? Let's equip $\mathbb R^2$ with the taxicab metric induced by the norm $||\cdot||_1$. Now let's look at this image on the Wikipedia page. The caption says "Taxicab geometry versus Euclidean distance: In taxicab geometry, the red, yellow, and blue paths all have the same shortest path length of 12. In Euclidean geometry, the green line has length $6\sqrt 2 \approx 8.49$ and is the unique shortest path." Let's name the black point at the bottom left as $A$ and the black point at the top right as $B$. Indeed, I understand that the red path, blue path and the yellow path all have the same length of $12$ units in this taxicab geometry. However, is there a notion of path length of the straight green line connecting $A$ and $B$ in this taxicab geometry? What's the length of the green line connecting $A$ and $B$ in this geometry? Or can such a path not even exist in this geometry? Are other "curved" paths between $A$ and $B$ permissible? Why or why not? I'm a bit confused about this issue. I keep seeing only these grid-like paths in all texts explaining taxicab geometry, but none of them explain the reason. Are other kinds of paths not permissible somehow? Some clarification would help.
It's a reasonable thing to be confused about. Presentations of taxicab geometry don't do any favors when they model it too closely after grid-based cities. The distance from $(0,0.5)$ to $(1,0.5)$ is one, even though a cab couldn't get you from the middle of a block of 14th Street to the middle of the same block of 15th street in a straight line. In a taxicab geometry, you can certainly talk about the green line segment, just like you could think about any other set of points in the plane. But the length of the line segment is $d_T(A,B)=12$, as all of the other paths demonstrate. There are other interesting phenomena that distinguish horizontal and vertical line segments from oblique line segments like $\overline{AB}$. For instance, if two points $X,Y$ form a horizontal or vertical line segment, then that line segment is $\{Z\mid d_T(X,Z)+d_T(Y,Z)=d_T(X,Y)\}$, like we're used to for the Euclidean metric. But, in your diagram, $\{C\mid d_T(A,C)+d_T(B,C)=d_T(A,B)\}$ is the entire rectangle whose opposite corners are $A$ and $B$.
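A small sketch of the first point: however finely you sample the straight "green" segment, the taxicab length of the resulting polygonal path is always $12$ (endpoints $(0,0)$ and $(6,6)$ chosen to match the figure; the sample counts are arbitrary).

```python
def taxicab_length(points):
    """L1 (taxicab) length of the polygonal path through the given points."""
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

A, B = (0.0, 0.0), (6.0, 6.0)

lengths = []
for n in (1, 2, 10, 1000):
    pts = [(A[0] + (B[0] - A[0]) * k / n, A[1] + (B[1] - A[1]) * k / n)
           for k in range(n + 1)]
    lengths.append(taxicab_length(pts))
```

No refinement ever beats $d_T(A,B)=12$, which is why the segment's taxicab length is $12$, not $6\sqrt2$.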
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Questions about following proof regarding why $\mathbb{Z}[x]$ is not a principal ideal domain Prove that $\Bbb{Z[x]}$ is not a principal ideal domain. Proof: Consider the ideal $I=\langle x,2\rangle$. We’ll show that this ideal is not principal. First note that I is not equal to $\Bbb{Z[x]}$ because 1 is not in I. If it were then $1=xf(x)+2g(x)$ hx, 2i because if it were then $1 = xf(x) + 2g(x)$ for f(x), g(x) ∈ Z[x], but xf(x) + 2g(x) has even constant term. Then suppose $I=\langle p(x) \rangle$ for some p(x) ∈ Z[x]. Then we must have x = p(x)f(x) and 2 = p(x)g(x) for some f(x), g(x) ∈ Z[x]. But the second implies that p(x) must be a constant polynomial, specifically p(x) = −2, −1, 1 or 2. We can’t have p(x) = ±1 because then I = Z[x] so p(x) = ±2. But then x = ±2f(x), a contradiction since ±2f(x) has even coefficients. I don't understand this part But then x = ±2f(x), a contradiction since ±2f(x) has even coefficients. What is the thing about the even coefficients, and why xf(x) + 2g(x) has even constant term in the first part? How do they lead us to the contradiction?
The proof shows that $x=\pm 2f(x)$. But no matter what $f(x)$ is, after multiplying by $\pm 2$, all of the coefficients of $f(x)$ will be even (as a multiple of $2$). But the coefficient of $x$ is $1$, which is odd. So if $x=\pm 2f(x)$, then $1$ is even, a contradiction. Does this make sense?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Variance of random walk Consider the Random Walk $$X_t = X_{t-1}+\epsilon_t$$ with $\epsilon_t \sim N(\mu,\sigma^2)$ and $X_0=0$. We can write $$X_t=X_0+\sum_{n=1}^t\epsilon_t.$$ Using the equation above we have $$\mathbb{E}[X_t]=\mu t,$$ but I'm strugling to calculate the variance. I have \begin{equation} \begin{split} Var[X_t] &=\mathbb{E}[X_t^2]-\mathbb{E}[X_t]^2 \\ & = \mathbb{E} \left[\left(X_0+\sum_{n=1}^t\epsilon_t\right)^2\right]-(\mu t)^2 \\ & = \mathbb{E} [X_0^2]-2X_0\mathbb{E}\left[\sum_{n=1}^t\epsilon_t\right]+\mathbb{E}\left[\left(\sum_{n=1}^t\epsilon_t\right)^2\right] - (\mu t)^2 \end{split} \end{equation} Well, now I think we can do $$\mathbb{E}\left[\left(\sum_{n=1}^t\epsilon_t\right)^2\right]=\sigma^2 t, \mathbb{E}[X_0^2]=0$$ and $$2X_0\mathbb{E}\left[\sum_{n=1}^t\epsilon_t\right]=2X_0\mu t=0$$ so we have $$Var(X_t) = \sigma^2 t - (\mu t)^2 .$$ But I'm not sure that this is right. Any help would be appreciated!
Assuming the $\epsilon_t$ are iid, you can simply use $$Var(X_t)=Var\Big(\sum_{n=1}^t \epsilon_n\Big)=\sum_{n=1}^t Var(\epsilon_n)=\sigma^2 t.$$ The slip in your computation is the step $\mathbb{E}\big[(\sum_{n=1}^t\epsilon_n)^2\big]=\sigma^2 t$, which holds only when $\mu = 0$; in general $$\mathbb{E}\Big[\Big(\sum_{n=1}^t\epsilon_n\Big)^2\Big] = Var\Big(\sum_{n=1}^t\epsilon_n\Big) + \Big(\mathbb{E}\sum_{n=1}^t\epsilon_n\Big)^2 = \sigma^2 t + (\mu t)^2,$$ and the $(\mu t)^2$ terms cancel, leaving $Var(X_t) = \sigma^2 t$.
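A Monte Carlo sanity check (a sketch; the drift, step count, trial count and seed are arbitrary choices): with $\mu = 0.3$, $\sigma = 1$ and $t = 50$, the sample variance lands near $\sigma^2 t = 50$, not near $\sigma^2 t - (\mu t)^2 = -175$.

```python
import random

random.seed(42)
mu, sigma, t, trials = 0.3, 1.0, 50, 20_000

finals = []
for _ in range(trials):
    x = 0.0
    for _ in range(t):
        x += random.gauss(mu, sigma)  # one step of the walk
    finals.append(x)

m = sum(finals) / trials
var = sum((x - m) ** 2 for x in finals) / (trials - 1)
# Theory: E[X_t] = mu * t = 15, Var(X_t) = sigma^2 * t = 50
```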
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can we always draw a circle that is internally tangent to three circles? Here is the picture of the problem: I'm looking for an intuitive explanation and also an outline of how a rigorous proof would look like. Our professor directly started to speak about finding the radius of the small circle, but it got me thinking. How can we ensure that such a circle always exists? I researched and came across Appolonius Problem, but I couldn't find an explanation for my problem. I know that three points define a circle, but how does that help me? Thank you in advance.
Hint: As can be seen in the figure: 1- Connect the centers A, B and C of the three circles. 2- Find the circumcenter D of ABC. 3- E is the center of the big circle and F that of the small one. By accurate drawing we can see that D is the midpoint of EF. The radius of the big circle is: $R=DA+\frac {AA'+BB'+CC'}3$ And that of the small one is: $r=DA-\frac {AA'+BB'+CC'}3$ 4- For the big circle take, for example, points A' and B' as centers of two circles which intersect at E; this is the center of the big circle. 5- For the small circle take, for example, two points, the intersections of DA and DB with circles A and B, as centers of two circles with radius r which intersect at F; this will be the center of the small circle. You may also connect E to D and extend it past D by a length equal to DE to find F.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If $\dfrac{1-\sin x}{1+\sin x}=4$, what is the value of $\tan \frac {x}2$? If $\dfrac{1-\sin x}{1+\sin x}=4$, what is the value of $\tan \frac {x}2$? $1)-3\qquad\qquad2)2\qquad\qquad3)-2\qquad\qquad4)3$ Here is my method: $$1-\sin x=4+4\sin x\quad\Rightarrow\sin x=-\frac{3}5$$ We have $\quad\sin x=\dfrac{2\tan(\frac x2)}{1+\tan^2(\frac{x}2)}=-\frac35\quad$. by testing the options we can find out $\tan(\frac x2)=-3$ works (although by solving the quadratic I get $\tan(\frac x2)=-\frac13$ too. $-3$ isn't the only possible value.) I wonder is it possible to solve the question with other (quick) approaches?
This is the same way to go as in the OP, maybe combining the arguments looks simpler. Let $t$ be $t=\tan(x/2)$ for the "good $x$" satisfying the given relation. Then $\displaystyle \sin x=\frac {2t}{1+t^2}$, so $$ 4= \frac{1-\sin x}{1+\sin x} = \frac{(1+t^2)-2t}{(1+t^2)+2t} =\left(\frac{1-t}{1+t}\right)^2\ . $$ This gives for $(1-t)/(1+t)$ the values $\pm 2$, leading to the two solutions $-3$ and $-1/3$ mentioned in the OP.
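Both roots of the quadratic can be checked against the original equation numerically (a quick sketch):

```python
roots = [-3.0, -1.0 / 3.0]

ratios = []
for t in roots:
    s = 2.0 * t / (1.0 + t * t)           # sin(x) in terms of t = tan(x/2)
    ratios.append((1.0 - s) / (1.0 + s))  # (1 - sin x)/(1 + sin x), should be 4
```

Both values of $t$ give $\sin x = -3/5$ and a ratio of $4$; only $-3$ appears among the listed options.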
{ "language": "en", "url": "https://math.stackexchange.com/questions/4167867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Why does Maple plot a different result for a 3D parametric circle for parameter t from -100 to 100 vs 0 to 2$\pi$? I have the following parametric equation in three dimensions: $$\vec{r}(t) = (\cos t) \vec{i} + (\sin t) \vec{j}$$ I believe this is the equation for a circle on the xy-plane, centered at 0, with radius 1. In Maple, if I plot this equation for values of t from 0 to $2\pi$ it looks fine. If I plot for t from -100 to 100, it does not look fine: Also note $0 to 8\pi$ What is happening here? Here are the commands in Maple: with(VectorCalculus) ihat := <1, 0, 0> jhat := <0, 1, 0> r := cos(t)*ihat + sin(t)*jhat plot3d(r, t = -100 .. 100)
This is a sampling artifact. When you say $t = -100$ to $100$, you are specifying a range of $200$ radians for Maple to plot. This corresponds roughly to about $32$ full rotations around the circle ($\frac{200}{2\pi} \approx 31.8$). So you are overlapping the same path many times. Since Maple is using an internal algorithm to decide how many points along this path to sample for plotting the curve, it sees this large range and chooses some points that it thinks are representative of the behavior of the curve. It can't choose every point along the path because there are infinitely many. So it picks and then connects them with lines because it assumes that the behavior of the curve is roughly linear between the points it samples. The result is this artifact. There is usually a plotting option to increase the sampling resolution, basically telling Maple to sample more points so that the plot doesn't look so strange. As long as the curve being plotted is "smooth" (i.e., continuous and differentiable), this usually produces good results even when the plotting range is very large. As I am not an expert in Maple, preferring to use Mathematica instead, all I can tell you is that in Mathematica, this is specified using the option PlotPoints and MaxRecursion. The first one increases the number of points to sample, and the second tells the program how many times to recursively bisect between the sampled points if it detects that the plot is not smooth. Increasing both of these will generally produce better results.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4168235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Property of Hausdorff distance on set minus I am trying to understand how the Hausdorff distance acts on complements but struggling to find any good resources. Is it true in general that if I have $3$ compact sets $A,B$ and $C$ that the following implication holds? $$d_H(A,B)\leq r \implies d_H(C\setminus A, C\setminus B) \leq r$$ Where $d_H$ is the Hausdorff distance (assume the sets $C\setminus A$ and $C\setminus B$ are non-empty). Also, does anyone know of any good resources for further reading on the Hausdorff distance? Any with exercises would really help!
Update: I think I have a counterexample in $\mathbb{R}^2$: Let $$A = \bigg(\bigg[\frac{1}{n},1\bigg]\times [0,1]\bigg) \cup ([0,1]\times [2,3])$$ $$B = ([0,1]\times [0,1]) \cup ([0,1]\times [2,3])$$ $$C = ([0,1]\times [0,1]) \cup ([0,1]\times [1,3])$$ We have that: $$C\setminus A =\bigg(\bigg[0,\frac{1}{n}\bigg)\times [0,1]\bigg) \cup ([0,1]\times (1,2))$$ and: $$C\setminus B =[0,1]\times (1,2)$$ Trivially we can make $d_H(A,B)$ as small as we like by increasing $n$. But $d_H(C\setminus A,C\setminus B)$ will always be at least $1$, since the points of $C\setminus A$ near the origin lie at distance about $1$ from $[0,1]\times (1,2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4168399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Hypothesis tests two means For a random person selected from a population, let $X$ be their Instagram followers and $Y$ be their Facebook followers. We randomly sample the followers of $n$ people, $(X_1, Y_1), ..., (X_n, Y_n)$. Let $D = X - Y$ and $D_i = X_i - Y_i$ for $i = 1, ..., n$. Suppose we can accurately model $D$ as a normal random variable, but whose mean $\mu_D$ and variance $\sigma^2_D$ are unknown. (a) We take $n = 7$ samples: $X_i: 118, 136, 125, 121, 111, 142, 114$ $Y_i: 81, 91, 63, 78, 59, 93, 83$ Determine a $95$% confidence interval for $\mu_D$. (b) Recent research suggests that $\mu_D = 40$. We claim that the average difference between Instagram and Facebook followers has increased, and we take $n = 17$ random samples. Use the test statistic $\frac{\bar{D} - 40}{\frac{S_D}{\sqrt{17}}}$ to form a test at the $0.05$ significance level. I tried to do this (a) I did a few calculations and found the sample statistics as: $\bar{x} = 123.86, s_x = 11.42, \bar{y} = 78.29, s_y = 13.00$ By the Welch statistic, we find the degrees of freedom as $$\Delta = \frac{(\frac{s_x^2}{n_x} + \frac{s_y^2}{n_y})^2}{\frac{1}{n_x - 1}(\frac{s_x^2}{n_x})^2 + \frac{1}{n_y - 1}(\frac{s_y^2}{n_y})^2}$$ Plugging in the values gives $\Delta = 11$. The value of $t_{\alpha /2}$ at $0.025$ with $11$ degrees of freedom, from the tables, is $2.01$. The interval is therefore $$((\bar{x} - \bar{y}) - t_{\alpha / 2}\cdot \sqrt{s_x^2/n_x + s_y^2/n_y}, (\bar{x} - \bar{y}) + t_{\alpha / 2}\cdot \sqrt{s_x^2/n_x + s_y^2/n_y})$$ $$= (34.6, 56.6)$$ Another way: $\mu_D = 45.57, s_D = 10.13$. $t_{\alpha/2}$ with $\alpha = 0.05$ and $6$ degrees of freedom is $2.447$. The interval is: $(\bar{d} - t_{\alpha/2} \cdot \frac{s_D}{\sqrt{n}}, \bar{d} - t_{\alpha/2} \cdot \frac{s_D}{\sqrt{n}})$ $= (36.20, 54.94)$ Both methods give similar values but I am not sure which is correct. (b) $H_0: \mu_D = 40$ and $H_1: \mu_D > 40$. $t_{\alpha}$ with $\alpha = 0.05$ and $16$ degrees of freedom is about $1.75$. 
The critical region is $$\frac{\bar{d} - 40}{\frac{s_d}{\sqrt{17}}} \geq 1.75$$ We reject the null hypothesis if the test statistic is greater than or equal to $1.75$. Is what I have done correct? For (a), which method is supposed to be the right one?
(b) looks good to me. For (a), the second method is correct. We want to treat each observation as one entity, i.e. $d_i=x_i-y_i$, and then construct the confidence interval treating $D$ as a single normal random variable with unknown mean and unknown variance; this is also called a paired analysis. Option 1 would be for comparing the means of two independent populations.
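For reference, the paired computation can be reproduced with SciPy (a sketch; `stats.t.ppf` replaces the table lookup):

```python
import numpy as np
from scipy import stats

x = np.array([118, 136, 125, 121, 111, 142, 114])
y = np.array([81, 91, 63, 78, 59, 93, 83])
d = x - y                          # paired differences, treated as one sample

n = len(d)
dbar = d.mean()                    # ~45.57
s_d = d.std(ddof=1)                # ~10.13
half = stats.t.ppf(0.975, df=n - 1) * s_d / np.sqrt(n)
ci = (dbar - half, dbar + half)    # ~ (36.20, 54.94)
```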
{ "language": "en", "url": "https://math.stackexchange.com/questions/4168529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f(x) = \frac{x^3}{3} -\frac{x^2}{2} + x + \frac{1}{12}$, then $\int_{\frac{1}{7}}^{\frac{6}{7}}f(f(x))\,dx =\,$? This is a question from a practice workbook for a college entrance exam. Let $$f(x) = \frac{x^3}{3} -\frac{x^2}{2} + x + \frac{1}{12}.$$ Find $$\int_{\frac{1}{7}}^{\frac{6}{7}}f(f(x))\,dx.$$ While I know that computing $f(f(x))$ is an option, it is very time consuming and wouldn't be practical considering the time limit of the exam. I believe there must be a more elegant solution. Looking at the limits, I tried to find useful things about $f(\frac{1}{7}+\frac{6}{7}-x)$ The relation I obtained was that $f(x) + f(1-x) = 12/12 = 1$. I don't know how to use this for the direct integral of $f(f(x)).$
In a comment after @Math Lover's elegant answer, I remarked that nobody had tried the change of variable $f(x)=y$. So, I tried it for the fun of it $$I=\int f[f(x)]\,dx=\int \Big[ \frac 1{12}+f(x)-\frac 12 f^2(x)+\frac 13 f^3(x)\Big]\,dx$$ Let $f(x)=y$. Solving the cubic with the hyperbolic method for only one real root $$x=\frac{1}{2}-\sqrt{3} \sinh \left(\frac{1}{3} \sinh ^{-1}\left(\frac{2-4 y}{\sqrt{3}}\right)\right)$$ $$dx=\frac{4 \cosh \left(\frac{1}{3} \sinh ^{-1}\left(\frac{2-4 y}{\sqrt{3}}\right)\right)}{\sqrt{48 (y-1) y+21}}\,dy$$ So, $$20480\,I=-10240 \sqrt{3} \sinh (t)+2700 \cosh (2 t)+1350 \cosh (4 t)+15 \cosh (8 t)+12 \cosh (10 t)$$ where $$t=\frac{1}{3} \sinh ^{-1}\left(\frac{2-4 y}{\sqrt{3}}\right)$$ Now, for $y$, the upper and lower bounds are $\frac{3223}{4116}$ and $\frac{893}{4116}$, and then for $t$ they are $$\mp \frac{1}{3} \sinh ^{-1}\left(\frac{1165}{1029 \sqrt{3}}\right)$$ Because of the symmetry in $t$, all the $\cosh$ terms cancel, and the result for the definite integral is just $$\int_{\frac{1}{7}}^{\frac{6}{7}}f[f(x)]\,dx =\sqrt{3} \sinh \left(\frac{1}{3} \sinh ^{-1}\left(\frac{1165}{1029 \sqrt{3}}\right)\right)$$ which, numerically, is $0.3571428571428571428571429$; its reciprocal is $2.8=\frac {14}5$, so the value is $\frac 5{14}$. For those who are curious, this took me close to one and a half hours.
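Independently of any of these manipulations, the value is easy to confirm by direct numerical quadrature (a quick SciPy sketch):

```python
from scipy.integrate import quad

def f(x):
    return x**3 / 3 - x**2 / 2 + x + 1 / 12

# the symmetry used in the short solutions: f(x) + f(1 - x) = 1
val, err = quad(lambda x: f(f(x)), 1 / 7, 6 / 7)   # val ~ 5/14
```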
{ "language": "en", "url": "https://math.stackexchange.com/questions/4168939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 1 }
Proving non-linear inequality doesn't have a solution [Does it have any positive solutions?] I am trying to find if following inequalities have a solutions for $x_i\in \mathbb{R+}$ $x_5(x_2-x_1)>x_0(x_4+x_5)$ $x_0x_3>(x_3+x_4)(x_2-x_1)$ How could I prove if this inequality has a solution or not? I attempted to reach a contradiction by combining the two in-equations but seems like it is not a straight-forward one. UPDATE: Some context where this inequality comes from. This was a calculation based on hedged trading. $x_0$ is the first buy position size, $x_1$ is the second buy size opened with $x_4$ diff from the first position along with a sell position at that price with size of $x_2$. If price moves down with diff of $x_5$ we should be able to make profit, on the other side if price moves up with diff of $x_3$ from the first opened buy we should also have positive profit. These inequalities come from there.
Note first that the first inequality forces $x_2-x_1>0$, since its right-hand side $x_0(x_4+x_5)$ is positive; hence every quantity below is positive and the multiplication and cancellation are valid. Multiplying the left sides together and the right sides together, we have $$ x_5(x_2-x_1)x_0x_3\gt x_0(x_4+x_5)(x_3+x_4)(x_2-x_1) $$ or, after cancelling the positive factor $x_0(x_2-x_1)$, $$ x_3x_5 \gt (x_4+x_5)(x_3+x_4) $$ which is false because $x_i\in \mathbb{R+}$: expanding the right side gives $x_3x_5$ plus strictly positive terms. Hence the system has no positive solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is meant by "side of a curve"? I'm currently reading "A Primer on Mapping Class Groups" by Farb–Margalit. A notion that often turns up is that of a "side of a curve". For example, in the proof of Proposition 3.2, or in the proof that the center of the mapping class group of $S_g$ is trivial whenever $g \geq 3$ (i.e. $Z(\operatorname{MCG}(S_g)) \cong 1$). I just can't seem to find any definition for "side" of a curve online. Also, what is meant by "two edges"? I can imagine what edges should be when we build a simplex of 2 points and 2 edges, but with one point and one edge?
It is related to the regular neighborhood of the curve: in an orientable surface, a regular neighborhood of a simple closed curve is an annular subsurface, while in a non-orientable surface it may instead be a Möbius band. The boundary of an annulus has two components (these give the two "sides" of the curve), while the boundary of a Möbius band has only one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $K_1$ bipartite? I'm thinking that it could be trivially bipartite, since it has only one vertex and no edges, but I am still a little unsure.
Depends on your definitions. (As pointed out in Matthew Daly's answer, $K_1$ should be bipartite, because it has no odd cycles, so it would otherwise be an awkward exception. But not all definitions will play nicely with this desired property.) West's Introduction to Graph Theory says 1.1.10. Definition. A graph $G$ is bipartite if $V(G)$ is the union of two disjoint (possibly empty) independent sets called partite sets of $G$. So under this definition, if $V(K_1) = \{v\}$, then we let $\{v\}$ be one partite set, and $\varnothing$ be the other; $K_1$ is bipartite. Bondy and Murty write A graph is bipartite if its vertex set can be partitioned into two subsets $X$ and $Y$ so that every edge has one end in $X$ and one end in $Y$ which still works just fine, setting $X = \{v\}$ and $Y = \varnothing$. They do not specifically point out that $X$ or $Y$ could be empty, but they do not rule it out either. (Elsewhere, Bondy and Murty talk about "nontrivial partitions" or "partitions into nonempty parts", which make it clear that a partition without this qualifier is allowed to have an empty part.) They are in better shape than Diestel, who has A graph $G = (V, E)$ is called $r$-partite if $V$ admits a partition into $r$ classes such that every edge has its ends in different classes with an earlier qualification that the classes of a partition may not be empty. Since $K_1$ should be bipartite by any reasonable definition, Diestel is in the wrong here. (Diestel later claims that all graphs with no odd cycles are bipartite, with no mention of $K_1$ as a special exception.) If we treat "bipartite" as synonymous with "$2$-colorable", then $K_1$ is happily bipartite, since any function on its vertex set is a $2$-coloring (and also a $1$-coloring).
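To make the coloring point of the last paragraph concrete, here is a small self-contained 2-coloring check (a hypothetical helper, not taken from any of the texts quoted above), under which $K_1$ comes out bipartite precisely because a partite set may be empty:

```python
def is_bipartite(vertices, edges):
    """BFS 2-coloring; a graph is bipartite iff a proper 2-coloring exists.
    One color class may end up empty, which is what makes K_1 bipartite."""
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False   # odd cycle found
    return True

k1 = is_bipartite([0], [])   # True: partite sets {0} and {}
```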
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
For how many two-element subsets $\{a, b\}$ of the set $\{1, 2, 3, 4, 5, 6, \dots , 36\}$ is the product $ab$ a perfect square? Problem: For how many two-element subsets $\{a, b\}$ of the set $\{1, 2, 3, 4, 5, 6, \dots , 36\}$ is the product $ab$ a perfect square? Note: The numbers $a$ and $b$ are distinct, so the $a$ and $b$ must be different integers from the set of numbers $1 - 36$. Reiteration: Using two integers selected from the number group, $1-36$, how many combinations are possible so that the product of the two numbers selected is a perfect square. Attempt: First list out the square numbers between 1-36 (inclusive): 1, 4, 9, 16, 25, 36. NOTE: The square number “1” does not work because a and b need to be distinct. Rewrite the numbers paired with the number “1” so that it is in the format {a, b}. (4, 1) (9, 1) (16, 1) (25, 1) (36, 1) Next, since the number “16” for example, it can be broken down into 4•4 or 8•2, the subset {8, 2} also works. 4 : 2•2 (Note: 2•2 does not work because a and b need to be distinct) 16 : 4•4 or 8•2 (Note: 4•4 does not work because a and b need to be distinct) 36 : 2•18 or 3•12 or 4•9 Currently, we have found 9 solutions. Issue: I currently found 9 out of the total number of solutions that satisfy this problem. But, I have ran into an issue : What is the most efficient way to organize your solutions and find other combinations? Attempt Continued: I then continued to list out the combinations but started with the sets with “1” in the front and then 2, then 3, then 4, etc. Subsets which start with “1”: {1, 4} {1, 9} {1, 16} {1, 25} {1, 36} Subsets which start with “2”: {2, 8} {2, 18} {2, 32} Subsets which start with “3”: {3, 12} {3, 27} Subsets which start with “4”: {4, 9} {4, 16} {4, 25} {4, 36} Subsets which start with “5”: {5, 20} Subsets which start with “6”: {6, 24} Subsets which start with “7”: {7, 28} Subsets which start with “8”: {8, 18} {8, 16} Conclusion: The list keeps going on. 
The correct answer for this question is 27. If you carefully calculate and list out the subsets, you can get the answer.
Every number $n$ can be written as the product of a square part and a square-free part $n=a^2b$ where no perfect square besides $1$ divides $b$. One way to see this is to look at the prime factorization: if $n=\prod p_i^{k_i}$, let $\epsilon_i=1$ if $k_i$ is odd and $0$ if $k_i$ is even. Then $a=\prod p_i^{\lfloor k_i/2 \rfloor}$ and $b=\prod p_i^{\epsilon_i}$. Then $mn$ will be a perfect square if and only if the squarefree part of $m$ equals the squarefree part of $n$. By getting rid of all the multiples of 4, 9, and 25, we see that the squarefree numbers less than 36 are $$1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 33, 34, 35$$ For each of these numbers, we can look at how many different square parts can go with those square-free parts to get a number less than or equal to 36. For any number greater than 9, there is at most 1 number with that square-free part: that number would be $1^2b$, but then $2^2b$ would already be too big. So, for example, there aren't two numbers less than 36 that both have square-free part 11, and so no way to multiply 11 by something less than 36 to get a perfect square. The squarefree numbers less than 9 are $$1, 2, 3, 5, 6, 7$$ and the numbers with those square-free parts are the following sets $$\{1,4,9,16,25,36\}, \{2, 8, 18, 32\}, \{3, 12, 27\}, \{5, 20\}, \{6, 24\}, \{7, 28\}.$$ There are $\binom{6}{2}$ ways to pick 2 numbers from the first set, $\binom{4}{2}$ ways to pick 2 numbers from the second set, $\binom{3}{2}$ ways to pick 2 numbers from the 3rd set, and 1 way to pick 2 numbers from each of the last 3 sets. This gives a total of $15+6+3+1+1+1=27$ different 2 element sets. Note that we could have figured out how big each of those sets were without listing out all the elements; for example, if the square-free part of a number is 3, and $3a^2\leq 36$, then $a\leq \sqrt{36/3}=\sqrt{12}$, but it is instructive to list out the full sets.
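A brute-force check confirms both the count and the squarefree-part criterion (a short Python sketch):

```python
import math

N = 36

def squarefree_part(n):
    """Strip every square prime-power factor, leaving the squarefree part b."""
    b, k = n, 2
    while k * k <= b:
        while b % (k * k) == 0:
            b //= k * k
        k += 1
    return b

pairs = [(a, b) for a in range(1, N + 1) for b in range(a + 1, N + 1)
         if math.isqrt(a * b) ** 2 == a * b]
# ab is a perfect square exactly when a and b share a squarefree part
criterion = all(squarefree_part(a) == squarefree_part(b) for a, b in pairs)
```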
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Problem 11, Chapter 18 of Calculus by Spivak The problem: Let $f$ be a nondecreasing function on $[1;\infty)$. Define $F(x)$ as follows: $$F(x) = \int_{1}^{x}\frac{f(t)}{t}dt.$$ Prove that $f$ is bounded on $[1;\infty)$ if and only if $\frac{F}{\log}$ is bounded on $[1;\infty)$. The left-to-right direction can be easily proved. Let's just consider the converse. Here's what I did: Since $f$ is nondecreasing, $\forall x\in[1;\infty)$: We have: $\forall t\in[1;x]: f(t) \leq f(x)$. So, $$F(x)=\int_{1}^{x}\frac{f(t)}{t}dt \leq \int_{1}^{x}\frac{f(x)}{t}dt = f(x).log(x) (*)$$ (In Spivak's Calculus, he defines $log(x) = \int_{1}^{x}\frac{1}{t}dt$) If $f<0$ on $[1;+\infty)$ then: $$F(x) = \int_{1}^{x}\frac{f(t)}{t}dt < \int_{1}^{x}0dt = 0$$. And: $(*)$ becomes: $$-|F(x)| \leq -|f(x)|.logx \iff |F(x)| \geq |f(x)|.logx \iff |\frac{F(x)}{logx}|\geq|f(x)|$$. Since $F/log$ bounded, so is $f$. If $\exists x\in[1;+\infty): f(x) \geq 0$ then the set $A=\{x\geq 1:f<0\ \text{on}\ [1;x]\}$ must be bounded above. Consider $\alpha = \text{sup}A$. So, $f<0$ on $[1;\alpha]$ but $\geq 0$ on $(\alpha;+\infty)$. Similarly, we can prove $f$ is bounded on $[1;\alpha]$. But I'm not quite sure how to handle $(\alpha;+\infty)$. On $(\alpha;+\infty)$, what we have is: $$F(x)=(\int_{1}^{\alpha}+\int_{\alpha}^{x})\frac{f(t)}{t}dt \leq c_0 + f(x)(logx-log(\alpha)) \iff |\frac{F(x)}{logx - log\alpha}| - \frac{c_0}{logx - log\alpha} \leq |f(x)|$$. We cannot conclude $f$ is bounded at all. And the relation "$\leq$" seems to be the only way to take out $f(x)$. Any ideas on how to proceed?
Why not prove it by contradiction? If $f$ is not bounded, then for any large $M > 0$ there exists $X$ such that $f(x) \geq M$ for all $x\geq X$. This means that $F(x)\geq \int_1^X \frac{f(t)}{t} dt + M\ln x - M\ln X$. Thus $$ \frac{F(x)}{\ln x} \geq \frac{C}{\ln x} + M $$ where $C$ is a constant depending on $M$ and $X$. Letting $x\to\infty$, we see that $F(x)/\ln x \geq M/2$ for all large $x$. This result holds for any $M$, and thus $F(x)/\ln x$ cannot be bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Don't know how to solve limits that have tan(x) as a power. I have these two problems. The first one is: $$\lim_{x \to \pi/2} \left(\frac{2x} \pi\right)^{\tan(x)}$$ and the second one is: $$\lim_{x \to \pi/4} (\tan(x))^{\tan(2x)}$$ The way I'm trying to solve them is by taking the natural logarithm of both sides and then trying to apply l'Hôpital's rule on the limit's side. However, in both cases, I found myself stuck trying to figure out how to find the limit of the denominator, which happens to be $$\frac{1} {\tan x}$$ And $$\frac{1}{\tan(2x)}$$.
Use l'Hôpital's rule: after taking logarithms you have a quotient, so differentiate the numerator and the denominator separately and then compute the limit. See: https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule
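For instance, for the first limit, taking logarithms and then applying the rule to the resulting quotient gives (a worked sketch):

```latex
\ln L=\lim_{x\to\pi/2}\tan(x)\,\ln\!\left(\frac{2x}{\pi}\right)
     =\lim_{x\to\pi/2}\frac{\ln\left(2x/\pi\right)}{\cot x}
     \overset{\text{l'H}}{=}\lim_{x\to\pi/2}\frac{1/x}{-\csc^{2}x}
     =\lim_{x\to\pi/2}\left(-\frac{\sin^{2}x}{x}\right)=-\frac{2}{\pi}
```

so the first limit equals $e^{-2/\pi}$; the same method applied to the second limit gives $e^{-1}$.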
{ "language": "en", "url": "https://math.stackexchange.com/questions/4169895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is the spectral norm of $I_n-1v^T$ bounded by $\sqrt{n}$? Let $I_n$ be $n$-dimensional identity matrix and $v$ be a stochastic vector, i.e., $v$ is non-negative and $v^T{\bf 1}=1$, where ${\bf 1}=[1,1,...,1]^T$. I wonder if the spectral norm of the matrix $I_n-{\bf 1}v^T$ is bounded by $\sqrt{n}$. It is easily seen that the spectral norm is bounded by $\sqrt{n}+1$ using the norm triangle inequality. However, I test several special cases such as $v_1=[1,0,..,0]^T$ and $v_2=[1,1,...,1]^T/n$ and the largest norm is $\sqrt{n}$ (attained at $v_1$). So I wonder if this bound can be made tighter.
You can check that $v \mapsto \lVert I-1v^T\rVert$ is convex using the definition of convexity and the properties of norms (it is a norm composed with an affine map of $v$). The maximum of a convex function on the simplex is attained at one of its extremal points, which are of the form $(0,\ldots,0,1,0,\ldots,0)$; by symmetry, the norm at each of these equals the value $\sqrt{n}$ you computed at $v_1$. This proves that $\sqrt{n}$ is not just attained but is the exact maximum: $\lVert I-1v^T\rVert\le\sqrt{n}$ for every stochastic $v$, and the bound is tight.
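A quick numerical spot-check of this argument (a numpy sketch; the random vectors only probe the simplex, they are not a proof):

```python
import numpy as np

def spec_norm(v):
    """Spectral norm of I - 1 v^T for a vector v."""
    n = len(v)
    return np.linalg.norm(np.eye(n) - np.outer(np.ones(n), v), 2)

n = 6
vertex = np.eye(n)[0]            # extremal point e_1 of the simplex
s_vertex = spec_norm(vertex)     # equals sqrt(n)

rng = np.random.default_rng(0)
vs = rng.random((200, n))
vs /= vs.sum(axis=1, keepdims=True)      # random stochastic vectors
s_random = max(spec_norm(v) for v in vs)
```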
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the area in this problem? I'm trying to solve this exercise about Stokes' theorem: prove the validity of Stokes' theorem when $F(x,y,z)=(y,2x,2z)$ and $S$ is the part of the plane $z=y$ that lies in the cylinder $x^2+y^2=4y$, and $C$ is the curve where the cylinder and the plane meet. Solution: using Stokes' theorem $$\oint F.dR = \int \int_{S}(\operatorname {curl} F).n ds$$ From here I did some straightforward calculations. $$\operatorname {curl}F=(0,0,1)$$ $$n= \frac {(x,y,-2)}{\sqrt{x^2+y^2+4}}$$ $$ds= \frac{1}{2}\sqrt{x^2+y^2+4}$$ And finally we have $$\oint F.dR= -\int \int 1 dxdy$$ Here's my question: first, why am I getting a negative integral? And second, the last double integral means the area of what? Or how do I determine the bounds for the integrals? Sorry if this is dumb, I'm just so confused!
Stokes theorem: $\iint \nabla\times F\ dS = \oint F\cdot dr$ $S$ is the piece of the plane $z = y$ bound by the cylinder. $dS = (-\frac {\partial z}{\partial x},-\frac {\partial z}{\partial y}, 1) = (0,-1,1)$ If we stay in Cartesian, the bounds would be $-\sqrt {4y-y^2}\le x\le \sqrt {4y-y^2}, 0\le y\le 4$ In polar the bounds would be $r \le 4\sin\theta, 0\le \theta\le \pi$ However, we can short-cut around any calculations based on these bounds. $\iint \nabla \times F\ dS = \iint dA = A$ where $A$ is the area of the cross-section of the cylinder, i.e. $4\pi$ A parameterization of the contour might be: $x = 2\cos t, y = 2\sin t+2, z = 2\sin t+2$ Verifying Stokes theorem: $4\pi = \int_0^{2\pi} (2\sin t+2, 4\cos t, 4\sin t + 4)\cdot (-2\sin t, 2\cos t, 2\cos t)\ dt$ When we multiply this all out, we will only need to care about $\sin^2 t$ terms and $\cos^2 t$ terms as $\int_0^{2\pi} \sin t \ dt = \int_0^{2\pi} \cos t\ dt = \int_0^{2\pi} \sin t\cos t\ dt = 0$ $\int_0^{2\pi} -4\sin^2 t + 8\cos^2 t\ dt = 4\pi$
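The contour side can also be spot-checked numerically (a SciPy sketch of the line integral):

```python
import numpy as np
from scipy.integrate import quad

def F(x, y, z):
    return np.array([y, 2 * x, 2 * z])

def r(t):    # boundary curve: x^2 + y^2 = 4y intersected with z = y
    return np.array([2 * np.cos(t), 2 * np.sin(t) + 2, 2 * np.sin(t) + 2])

def dr(t):
    return np.array([-2 * np.sin(t), 2 * np.cos(t), 2 * np.cos(t)])

line_integral, _ = quad(lambda t: F(*r(t)) @ dr(t), 0, 2 * np.pi)
# curl F = (0,0,1), so the flux side is the projected disc area 4*pi
```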
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What's the minimum amount of information required to solve a triangle? In school, I was always taught that given 3 pieces of information, with at least one of them the length of a side, it was possible to determine all the measures of a triangle. However, I recently stumbled on the following problem, to which I found two solutions: Find all the measures of the triangle that has a $64\unicode{xB0}$ angle opposed to a $48\ cm$ side and adjacent to a $50\ cm$ side. First solution -> The angles measure $64\unicode{xB0}$, $46.57\unicode{xB0}$ and $69.43\unicode{xB0}$ Second solution -> The angles measure $64\unicode{xB0}$, $5.43\unicode{xB0}$ and $110.57\unicode{xB0}$ So, what's the real minimum amount of information needed to solve a triangle?
See Solution of triangles on Wikipedia. The case you have is SSA (two sides and a non-included angle), which does not determine the triangle uniquely; this is the classical ambiguous case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How can I prove that $\lim _{x\to 0}\left(\frac{1}{f\left(x\right)}\right)=\frac{1}{L}$ using the $ε-δ $ argument I want to know how I can prove that $\lim _{x\to 0}\left(\frac{1}{f\left(x\right)}\right)=\frac{1}{L}$ using the $ε-δ $ argument? Question: Suppose $f :\mathbb R \to\mathbb R$ is a function such that: $\lim _{x\to 0}\left(f\left(x\right)\right)=L$ Prove that: $\lim _{x\to 0}\left(\frac{1}{f\left(x\right)}\right)=\frac{1}{L}$ using the $ε-δ $ argument My attempt: I know that for every $ε>0$, there exist a $δ>0$ such that $|x-a| < δ$ implies that $|f(x) - L| < ε$ Am I allowed to do : $|x-0| < δ$ implies $ |\frac{1}{f(x)} -$ $\frac{1}{L}| < ε$, so I need to prove this statement using the fact that $|f(x) - L| < ε$ Pick $ε = 1$, then $-1 + L < f(x) < 1 + L$ so then: $$\frac{1}{L\:-1}>\:\frac{1}{f\left(x\right)}>\:\frac{1}{L+\:1}$$ Looking back: $$ \left|\frac{1}{f(x)} -\frac{1}{L}\right| = \left|\frac{L - f(x) }{f(x)\times L} \right| < \frac{1}{L\:-1} \times \frac{1}{L} \times ε $$ And then I just don't know what to do, am I even on the right path?
Depending on how much "scaffolding" you want to have. If you just want a rigorous way of verifying that your claim is correct with the $\epsilon$-$\delta$ definition of limit, you can use the limit properties (proofs of which can easily be found online, such as in Paul's Online Notes) $$\lim_{x\to a}\left[ \frac{g(x)}{f(x)} \right] = \frac{\lim_{x\to a}g(x)}{\lim_{x\to a}f(x)}, \text{ provided that } \lim_{x\to a}f(x) \ne 0$$ and $$\lim_{x \to a} c = c.$$ But if you want an efficient proof directly from the definition, you can try the following (credit, still, to Paul's Online Notes "Proof of 4"); note that $L \neq 0$ is needed throughout. Because $\lim_{x \to 0}f(x) = L$, there is a $\delta_1 > 0$ such that $$\begin{aligned} 0 < |x-0| < \delta_1 && \text{implies} && |f(x)-L| < \frac{|L|}{2} \end{aligned}$$ Then, $$\begin{aligned} |L| =& |L-f(x)+f(x)| && \text{just adding zero to }L \\ \le& |L-f(x)| + |f(x)| && \text{using the triangle inequality} \\ =& |f(x)-L|+|f(x)| && |L-f(x)|=|f(x)-L| \\ <& \frac{|L|}{2}+|f(x)| && \text{assuming that } 0 < |x-0| < \delta_1 \end{aligned}$$ Rearranging it gives $$\begin{aligned}|L| < \frac{|L|}{2}+|f(x)| &&\Rightarrow&& \frac{|L|}{2} < |f(x)| &&\Rightarrow &&\frac{1}{|f(x)|} < \frac{2}{|L|}\end{aligned}$$ Now, let $\epsilon > 0$. There is also a $\delta_2$ such that, $$\begin{aligned} 0 < |x-0| < \delta_2 && \text{implies} && |f(x)-L| < \frac{|L|^2}{2} \epsilon \end{aligned}$$ Choose $\delta = \min{(\delta_1, \delta_2)}$. If $0<|x-0|<\delta$ we have, $$\begin{aligned} \left|\frac{1}{f(x)} - \frac{1}{L}\right| =& \left| \frac{L-f(x)}{Lf(x)} \right| && \text{common denominators} \\ =& \frac{1}{\left|Lf(x)\right|} \left| L-f(x) \right| && \\ =& \frac{1}{|L|}\frac{1}{|f(x)|}|f(x)-L| && \\ <& \frac{1}{|L|}\frac{2}{|L|}|f(x)-L| && \text{assuming that } 0<|x-0|<\delta \le \delta_1 \\ <& \frac{2}{|L|^2} \frac{|L|^2}{2} \epsilon && \text{assuming that } 0 < |x-0| < \delta \le \delta_2 \\ =& \epsilon && \end{aligned}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Can I do this rearrangement with an infinite series to arrive at divergent parts? $$\sum_{i=3}^{\infty}\frac{i}{\left(i-2\right)\left(i-1\right)\left(i+1\right)} = \sum_{i=3}^\infty \frac{2}{3\left(i-2\right)} - \frac{1}{2\left(i-1\right)} - \frac{1}{6\left(i+1\right)} \\**= \frac{2}{3}\sum_{i=1}^\infty\frac{1}{i} - \frac{1}{2}\left(\left(\sum_{i=1}^\infty\frac{1}{i}\right) - 1\right) - \frac{1}{6}\left(\left(\sum_{i=1}^\infty\frac{1}{i}\right) - 1 - \frac{1}{2} - \frac{1}{3}\right)** \\= \left(\frac{2}{3} - \frac{1}{2} - \frac{1}{6}\right) \left(\sum_{i=1}^\infty\frac{1}{i}\right) + \frac{1}{2} + \frac{1}{6} + \frac{1}{12} + \frac{1}{18} \\= 0\cdot\left(\sum_{i=1}^\infty\frac{1}{i}\right) + \frac{29}{36} \\= \frac{29}{36} \space or \space \infty?$$ When summing up the series above I came across the harmonic series multiplied by $0$. I simply equated it to $0$, while my friend keeps arguing that the harmonic series is infinite and infinity multiplied by $0$ is undefined. Now I find it difficult to argue with that. Does this mean that the series above diverges? Or maybe, I was thinking, in the starred line the $\frac{2}{3}\left(\sum_{i=1}^\infty\frac{1}{i}\right) - \frac{1}{2}\left(\sum_{i=1}^\infty\frac{1}{i}\right) - \frac{1}{6}\left(\sum_{i=1}^\infty\frac{1}{i}\right)$ is cancelling out anyway, right? I mean, I could just write it term by term and show every term cancels. If they are cancelling out, then this rearrangement should not be problematic, right? So, am I right to say that $0\cdot\left(\sum_{i=1}^\infty\frac{1}{i}\right) = 0$ to show its convergence?
UPDATE: This question was radically rewritten. I was answering the original question: what is $0$ multiplied by a divergent series whose partial sums tend to infinity? The series (the fixed variant) converges; its sum is $\frac{29}{36}.$ As to the separate question of zero multiplied by a divergent series: the main algebraic property of zero is that zero multiplied by anything is still zero, and this is usually taken as a stronger rule than those governing multiplication by something infinite. Even in the extended real line ${\overline {\mathbb {R} }}$, while $0\cdot\infty$ is often left undefined, in many cases there is a solid justification for defining it as $0$.
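The convergence to $\frac{29}{36}$ can be checked in exact rational arithmetic (a short Python sketch; the tail of the series is $O(1/N)$):

```python
from fractions import Fraction

# partial sum of the series sum_{i>=3} i / ((i-2)(i-1)(i+1))
partial = sum(Fraction(i, (i - 2) * (i - 1) * (i + 1))
              for i in range(3, 5001))
gap = Fraction(29, 36) - partial   # positive and shrinking like 1/N
```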
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Continuity of Probability if $An\to A$ but neither increasing or decreasing Perhaps this is a silly question, but most books of probability talk about continuity of probability if either $A_n$ increases to $A$ or if $A_n$ decreases to $A$. My question is, if we just have that $A_n \to A$, does continuity of probability still holds? If so, why not just state that instead of separating the increasing and decreasing cases?
The answer by @Momo proved that a probability measure is indeed continuous. Now, my other question was why books do not present such a property instead of separating continuity from above and from below. I think I've figured out why most books do this. The reason would be that continuity from above (or below) plus finite additivity is equivalent to $\sigma$-additivity. Hence why one would prove them instead of the stronger claim of continuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4170868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Galois group of $L=\mathbb{Q}(i,\sqrt[3]2,\sqrt3)$ over $\mathbb{Q}$ is $D_{12}$ Consider $L = \mathbb{Q}(i,\sqrt[3]2,\sqrt3)$. Prove that $\operatorname{Gal}(L/\mathbb{Q})\cong D_{12}$. My attempt: It is easy to verify that $[L:\mathbb{Q}]=12$. In particular, $L$ is the splitting field of the separable polynomial $(x^2+1)(x^3-2)(x^2-3)$ over $\mathbb{Q}$, so $L/\mathbb{Q}$ is Galois of degree 12. I'm trying to show that $\operatorname{Gal}(L/\mathbb{Q})\cong C_6\rtimes C_2$. The extension $\mathbb{Q}(i)/\mathbb{Q}$ is Galois, and therefore, $H:=\operatorname{Gal}(L/\mathbb{Q}(i))\trianglelefteq \operatorname{Gal}(L/\mathbb{Q})=: G$. Then $$[\mathbb{Q}(i):\mathbb{Q}]=2=[G:H] \Rightarrow |H|=6.$$ With the theorem of natural irrationalities, we find that $L/\mathbb{Q}(\sqrt[3]2,\sqrt3)$ is Galois as well, with $$ C_2\cong \operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q})\cong\operatorname{Gal}(L/\mathbb{Q}(\sqrt[3]2,\sqrt3))=: K\le G.$$ If $\sigma\in H\cap K$, then $\sigma$ fixes $L$, implying $\sigma=1$ and $H\cap K=1$. Then, by cardinality comparison, we get that $HK=G$. So, it remains to show that $H\cong C_6$. This is where I'm stuck. I think $\sigma\equiv (\sqrt[3]2 \quad \sqrt[3]2\zeta_3\quad\sqrt[3]2\zeta_3^2)(\sqrt3 \quad -\sqrt3)\in H$ is the element of order 6 that we're looking for, but I'm struggling to write this last reasoning down in a rigorous way. I would just write down the whole original group $G$, which is $$ G=\{1,(i\quad -i),(\sqrt3\quad -\sqrt3),(i\quad -i)(\sqrt3\quad -\sqrt3), (\sqrt[3]2 \quad\sqrt[3]2\zeta_3),\\(\sqrt[3]2 \quad\sqrt[3]2\zeta_3^2),(i\quad -i)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3), \quad (i\quad -i)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3^2),\\(\sqrt3 \quad -\sqrt3)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3),(\sqrt3 \quad -\sqrt3)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3^2), \\ (i\quad -i)(\sqrt3\quad -\sqrt3)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3),(i\quad -i)(\sqrt3\quad -\sqrt3)(\sqrt[3]2 \quad\sqrt[3]2\zeta_3^2)\}$$ I believe.
Then the 6 elements which fix $i$ are contained in $H$. So, I conclude that there is a $\sigma\in H$ of order 6, and therefore $H\cong C_6$. EDIT: I see that my reasoning does not make sense, as $\sigma$ is not even contained in $G$. So, I definitely need help with that. Is this alright? Thanks.
Let $M=\mathbf{Q}(i)$. You want to show that $\operatorname{Gal}(L/M)\cong C_6$. However, $M(\sqrt[3]{2})/M$ is not normal. Indeed, if it were, we would have $\zeta_3 \sqrt[3]{2}\in M(\sqrt[3]{2})$, which happens if and only if $\sqrt{-3}\in M(\sqrt[3]{2})$. As $i\in M$, this occurs if and only if $\sqrt{3}\in M(\sqrt[3]{2})$, which is impossible. This shows $L/M$ has more than 2 subfields of order 3; as there are only two subgroups of order 6, we have $\operatorname{Gal}(L/M)\cong S_3$ instead. Luckily, this argument shows that we should rather take $M'=\mathbf{Q}(\sqrt{-3})$ and consider $L/M'$. It then holds that $L$ is the compositum of the subfields $M'(\sqrt{3})$ and $M'(\sqrt[3]{2})$ (both Galois over $M'$) and $M'(\sqrt{3})\cap M'(\sqrt[3]{2})=M'$. It now holds that $$\operatorname{Gal}(L/M')\cong \operatorname{Gal}(M'(\sqrt{3})/M')\times \operatorname{Gal}(M'(\sqrt[3]{2})/M')\cong C_2\times C_3\cong C_6.$$ I think you can continue from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Wasserstein distance between two empirical measures Suppose that $\{X_i\}_{1\leq i\leq N}$ and $\{Y_i\}_{1\leq i\leq N}$ are $\mathbb{R}$-valued random variables. I found in some manuscripts the inequality $$\mathbb{E}\left[W^2_2\left(\frac{1}{N}\sum_{i=1}^N \delta_{X_i}, \frac{1}{N}\sum_{i=1}^N \delta_{Y_i}\right)\right] \leq \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^N |X_i-Y_i|^2\right] \quad (*).$$ I believe this is not that obvious... May I know how one can justify this upper bound on the squared Wasserstein distance (with exponent $2$)?
First prove the "deterministic" case $$W^2_2\left(\frac{1}{N}\sum_{i=1}^N \delta_{x_i}, \frac{1}{N}\sum_{i=1}^N \delta_{y_i}\right)\leq \frac{1}{N}\sum_{i=1}^N |x_i-y_i|^2. \qquad (**)$$ This is easy by the definition of the Wasserstein distance as infimum of costs of transport plans (=couplings) between the measures $\mu:= \frac{1}{N}\sum_{i=1}^N \delta_{x_i}$ and $\nu:=\frac{1}{N}\sum_{i=1}^N \delta_{y_i}$ w.r.t. the cost function $c(x,y)=|x-y|^2$. The right hand side of $(**)$ is exactly the cost $\int_{\mathbb{R} \times \mathbb{R}} c(x,y)\,\mathrm{d}\gamma(x,y)$ of the coupling $ \gamma := \frac{1}{N}\sum_{i=1}^N \delta_{(x_i,y_i)}$, i.e. you "transport" each $x_i$ to $y_i$. Now, given random variables $X_1,\dots,X_N,Y_1,\dots,Y_N$ you just need to integrate both sides of $(**)$ w.r.t. the joint law of these RVs to get $(*)$. Answer to the question why $\frac{1}{N}\sum_{i=1}^N \delta_{(x_i,y_i)}$ is a valid coupling: One has to show that the pushforward of $\frac{1}{N}\sum_{i=1}^N \delta_{(x_i,y_i)}$ under the projection on the $x$-component is $\frac{1}{N}\sum_{i=1}^N \delta_{x_i}$ (and an analogous statement for the $y$-component). For each measurable $A \subset \mathbb{R}$ it holds that $$\frac{1}{N}\sum_{i=1}^N \delta_{(x_i,y_i)}(A \times \mathbb{R}) = \frac{1}{N} \Big[\text{number of $i$ s.t. $x_i \in A$ }\Big] = \frac{1}{N}\sum_{i=1}^N \delta_{x_i}(A). $$
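In dimension one, $(**)$ can also be checked numerically: the optimal $W_2$ coupling between two equally weighted empirical measures is the monotone rearrangement (it pairs order statistics), so the inequality says the sorted matching costs no more than the identity matching $x_i \mapsto y_i$. A small sketch with arbitrarily chosen sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
x = rng.normal(size=N)
y = rng.normal(loc=1.0, size=N)

# Cost of the coupling gamma = (1/N) sum_i delta_{(x_i, y_i)},
# i.e. "transport each x_i to y_i" -- the right-hand side of (**)
identity_cost = np.mean((x - y) ** 2)

# In dimension one the optimal coupling pairs order statistics,
# so this equals W_2^2 of the two empirical measures
w2_squared = np.mean((np.sort(x) - np.sort(y)) ** 2)

assert w2_squared <= identity_cost
```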
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
probability question on drawing ball I have a bag containing $20$ red balls and $16$ blue balls. I uniformly randomly take balls out from the bag without replacement until all balls of both colors have been removed. If the probability that the last ball I took was red can be represented as $\frac{p}{q}$, where $p$ and $q$ are coprime positive integers. Find $p+q$ My solution: as the last ball is red, that means we have drawn $19$ red balls and $16$ blue balls so far. The total no of ways of doing it is $35 \choose19$. Now total no of ways of drawing all the balls is $36 \choose 20$. The probability is $$\frac{35\choose19}{36\choose20}=\frac{5}{9}$$ But the answer is $\frac{4}{9}$. Can you please correct my solution?
Let's look at another perspective. Assume that we are arranging all the balls in a line, where the line is constructed according to the selection order of the balls. Then, how many ways are there to align all of the balls? The answer is $\frac{36!}{20! \times 16!}$. Now assume that the last ball is red, so we place one red ball in the last position; we then need to align $19$ red balls and $16$ blue balls in a straight line (their positions are determined by the selection order). We can arrange them in $\frac{35!}{19! \times 16!}$ ways. As a result, $\frac{\frac{35!}{19! \times 16!}}{\frac{36!}{20! \times 16!}}=\frac{20}{36} = \frac{5}{9}$
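The alignment argument amounts to saying the last ball is a uniformly random one of the $36$, so it is red with probability $20/36 = 5/9$. A quick Monte Carlo simulation (illustrative only) agrees:

```python
import random

random.seed(1)
balls = ["R"] * 20 + ["B"] * 16
trials = 100_000

# random.sample(balls, len(balls)) returns a uniformly shuffled copy
last_red = sum(
    1 for _ in range(trials)
    if random.sample(balls, len(balls))[-1] == "R"
)
estimate = last_red / trials

assert abs(estimate - 20 / 36) < 0.01  # 20/36 = 5/9
```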
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What am I missing in this proof? Unbiased vectors I have a set of vectors $\{P_{1},P_2,...,P_{d+1}\}$ in a $d$-dimensional Hilbert space, which satisfy the following two conditions on the inner product $$|P_{i}|^2=d, \quad \forall i=1,...,d+1, \qquad \qquad P_i^*\cdot P_j =-1, \quad i,j=1,...,d+1 \text{ and } i\neq j.$$ I want to prove that this set is not linearly independent but if you remove any vector, it becomes linearly independent. The first part of the proof is easy if one notices that any of the vectors can be written as \begin{equation} P_i=-\sum_{\substack{j=1\\ i\neq j}}^{d+1}P_j, \tag{1} \label{eq:pi} \end{equation} because this satisfies both conditions on the inner products. Therefore, let us now remove $P_1$ from the set, for example. Is the set $\{P_2,P_3,...,P_{d+1}\}$ linearly independent? I would say that it is. Let us take $P_2$ for example and see if it can be written as a linear combination of $\{P_3,...,P_{d+1}\}$. From \eqref{eq:pi} we have that $$P_2=-P_1-P_3-P_4-...-P_{d+1},$$ and this doesn't seem like it can be written as a linear combination of $\{P_3,...,P_{d+1}\}$. However, I feel that this proof is missing rigour somewhere. What am I missing?
The vectors $P_1 - P_{d+1}, \dots , P_d - P_{d+1}$ are linearly independent, because their Gram matrix is positive definite. Indeed, for $i,j<d+1$ with $i\ne j$: $$ \| P_i - P_{d+1}\|^2 = d + 2 + d = 2d+2, \qquad (P_i- P_{d+1}, P_j- P_{d+1}) = -1 + 1 + 1 + d = d+1, $$ so the Gram matrix is $(d+1)(I_d + J_d)$, where $J_d$ is the $d\times d$ all-ones matrix; its eigenvalues $(d+1)^2$ (once) and $d+1$ (with multiplicity $d-1$) are all positive. Hence $\operatorname{span}\{P_1,\dots,P_{d+1}\}$ has dimension at least $d$. Since $\sum_{i=1}^{d+1} P_i = 0$ by $(1)$, removing any one vector leaves $d$ vectors with the same span, and $d$ vectors spanning a space of dimension $d$ are linearly independent.
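Such a configuration does exist, which makes the claim easy to sanity-check numerically. The construction below is one hypothetical realization (scaled simplex vertices), not forced by the problem:

```python
import numpy as np

d = 5
n = d + 1

# Hypothetical realization: scaled simplex vertices
# P_i = sqrt(d+1) * (e_i - ones/(d+1)) in R^{d+1}, lying in the
# d-dimensional subspace orthogonal to (1, ..., 1)
P = np.sqrt(n) * (np.eye(n) - np.ones((n, n)) / n)

G = P @ P.T
# |P_i|^2 = d on the diagonal, inner products -1 off the diagonal
assert np.allclose(G, n * np.eye(n) - np.ones((n, n)))

# The whole set is linearly dependent ...
assert np.linalg.matrix_rank(P) == d
# ... but removing any single vector leaves an independent set
for i in range(n):
    assert np.linalg.matrix_rank(np.delete(P, i, axis=0)) == d
```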
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Complex solutions of $z^4 = |z|^2 +2$ I'm struggling with the equation in the subject, I managed to find the real solutions ($\pm\sqrt{2}$) by setting $u = z^2$ and I know from Wolfram Alpha that the other two complex solutions should be $\pm i\sqrt{2}$ and I obviouvsly understand why those works but I'm unable to get the correct procedure to find them. I suspect that there is something around equations with modulus in them that I don't yet know. Could you point me to the right direction? Thanks!
If $z^4=|z|^2+2$, then$$|z|^4=\left|z^4\right|=\left||z|^2+2\right|=|z|^2+2,$$so $t=|z|^2$ satisfies $t^2=t+2$, whose only non-negative root is $t=2$; hence $|z|=\sqrt2$. So, $z=\sqrt2e^{i\theta}$ for some $\theta\in\Bbb R$. And$$z^4=|z|^2+2\iff4e^{4i\theta}=4.$$Can you take it from here?
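Taking it from there, $e^{4i\theta}=1$ forces $\theta\in\{0,\pi/2,\pi,3\pi/2\}$ modulo $2\pi$, i.e. $z\in\{\pm\sqrt2,\pm i\sqrt2\}$, matching the solutions reported by Wolfram Alpha. A direct numerical check:

```python
# The four candidate roots from theta in {0, pi/2, pi, 3*pi/2}
roots = [2 ** 0.5, -(2 ** 0.5), 2 ** 0.5 * 1j, -(2 ** 0.5) * 1j]

# Each satisfies z^4 = |z|^2 + 2
for z in roots:
    assert abs(z ** 4 - (abs(z) ** 2 + 2)) < 1e-9
```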
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Let $G$ be a finite group, $H\leq G$ s. t. $\forall g\in G$, either $\langle H, H^g \rangle$ is nilpotent or $H H^g = H^g H$. Show that $H \lhd\lhd G$ Let $G$ be a finite group, $H\leq G$ such that $\forall g\in G$, either $\langle H, H^g \rangle$ is nilpotent or $H H^g = H^g H$. Show that $H \lhd \lhd G$. I know how to show that if $\langle H,H^g \rangle$ is nilpotent $\forall g\in G$, then $H$ is contained in the Fitting subgroup of $G$, and hence is nilpotent. From here we could conclude that $H \lhd \lhd G$. But we can’t apply that argument here, since there can be some $g\in G$ such that $\langle H,H^g \rangle$ is not nilpotent. I’m a bit stuck.
This is Problem $2B.1$ of Isaacs' Finite Group Theory. You just need to combine elements from the proofs of Theorems $2.8$ and $2.12$ of the same book into one statement. Induct on $|G|$, the case $|G|=1$ being trivial. Let $K$ be a proper subgroup of $G$ containing $H$. Then for each $k \in K$ either $\langle H,H^k \rangle$ is nilpotent or $HH^k = H^kH$. So by the induction hypothesis $H \lhd \lhd K$. Suppose $H$ is not subnormal in $G$. Then the Zipper Lemma guarantees the existence of a unique maximal subgroup $M$ with $H \subseteq M$. Let $g \in G$; there are two cases. $(1)$ $\langle H,H^g \rangle$ is nilpotent. Since every subgroup of a finite nilpotent group is subnormal, $\langle H, H^g \rangle$ is proper in $G$. Therefore $\langle H, H^g \rangle \subseteq M$. $(2)$ $HH^g=H^gH$. Then $HH^g$ is a subgroup of $G$. Since $H$ is proper, $HH^g$ is proper as well (a group is never the product of two conjugates of a proper subgroup). Thus $HH^g \subseteq M$. In either case, $H^g \subseteq M$. Thus the normal closure $H^G \subseteq M$, whence $H^G$ is proper in $G$. But then we have $H \lhd \lhd H^G \lhd G$, contradicting the assumption that $H$ is not subnormal in $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4171956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $f\leq g$ implies $\underline{\int_Q} f \leq \underline{\int_Q} g$ and $\overline{\int_Q} f \leq \overline{\int_Q} g$ Let $f, g: Q \to \mathbb{R}$ be bounded functions such that $f(x)\leq g(x)$, for $x \in Q$. Show that $\underline{\int_Q} f \leq \underline{\int_Q} g$ and $\overline{\int_Q} f \leq \overline{\int_Q} g$. Note that the domain $Q$ denotes a rectangular region and may be a subset of $\mathbb{R}^n$ for any natural number $n$. If the domain is just $\mathbb{R}$, then $Q$ would be a closed interval, but that shouldn't change the proof much. Also $f$ and $g$ are not assumed to be Riemann integrable, else the proof would be trivial. What I have done so far is show that, based on the definitions, if $f$ and $g$ have the same partition, then the lower and upper sums of $f$ are less than or equal to the lower and upper sums of $g$. Even though $f\leq g$, it is possible for the upper sum of $f$ to be greater than that of $g$ for some very specific choices of different partitions. So for that reason, I think we should keep the partitions the same for $f$ and $g$. Now, one idea I had was to use a sequence of partitions where we keep each subinterval (or subrectangle) the same length and just keep doubling the number of subintervals per step. At each step the upper Riemann sum of $f$ is less than or equal to that of $g$, and the limit over the sequence of partitions should approach the infimum of the upper sums, thus $\overline{\int_Q} f \leq \overline{\int_Q} g$. And we can do the same for the lower sum. Is this roughly correct?
The monotonicity follows directly from the definition of the lower (upper) Riemann integral as the supremum of all lower Riemann sums (resp. infimum over all upper Riemann sums) over all partitions. For any partition $P$ we have $$ L(P, f) \le L(P, g) \le \underline{\int_Q} g $$ and that implies $$ \underline{\int_Q} f = \sup_P L(P, f) \le \underline{\int_Q} g \, . $$ Similarly, $$ \overline{\int_Q} f \le U(P, f) \le U(P, g) $$ for all partitions implies that $$ \overline{\int_Q} f \le \overline{\int_Q} g \, . $$
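A numerical illustration of the two displayed inequalities, approximating the per-cell infimum and supremum by sampling; the functions are hypothetical examples with $f \le g$ on $Q=[0,1]$:

```python
import numpy as np

# Hypothetical example pair with f <= g on Q = [0, 1]
f = lambda x: x * (1 - x)
g = lambda x: x

edges = np.linspace(0.0, 1.0, 101)  # uniform partition P with 100 cells

def darboux_sums(h, edges, samples=200):
    """Approximate L(P, h) and U(P, h) by sampling each cell."""
    L = U = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        v = h(np.linspace(a, b, samples))
        L += v.min() * (b - a)  # ~ inf of h over the cell
        U += v.max() * (b - a)  # ~ sup of h over the cell
    return L, U

Lf, Uf = darboux_sums(f, edges)
Lg, Ug = darboux_sums(g, edges)

assert Lf <= Lg and Uf <= Ug  # monotonicity on a common partition
```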
{ "language": "en", "url": "https://math.stackexchange.com/questions/4172456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A particular solution for $y''+a_1y' + a_2y = Ae^{i\omega x}$ I'm trying to show that for the equation $y''+a_1y' + a_2y = Ae^{i\omega x}$, there is a solution of $\phi(x)$ of the form $$\phi(x)=\frac{A}{\vert p(i\omega)\vert}e^{i(\omega x - \alpha)}$$ where $ p(i\omega)=\vert p(i\omega)\vert e^{ix}$, and $p(i\omega) \neq 0$. The last statement is confusing and I'm unsure how to interpret it. Why would $p(i\omega)=\vert p(i\omega)\vert e^{ix}$? I think that $$p(i \omega)=\omega^2+a_1i\omega+a_2$$ which is a complex number with $Re = \omega ^2+a_2$, and $Im = a_1\omega$. It does not seem like we will ever end up to such an identity, with the standard definition of complex norm. To be precise, we have that $$\vert p(i\omega)\vert=\sqrt{(\omega^2 +a_2)^2 + (a_1\omega)^2}$$ As for validating the solution, one would compute $L(\frac{A}{\vert p(i\omega)\vert}e^{i(\omega x - \alpha)})$, take the term with the norm as a common factor. I imagine the norm is cancelled at some point. What am I missing here?
I'm sure $p(i\omega)=\vert p(i\omega)\vert e^{ix}$ should be $p(i\omega)=\vert p(i\omega)\vert e^{i\alpha}$ The form $e^{i\omega x}$ is preserved by derivatives so for the particular solution try: $$y_p = K e^{i\omega x} \tag{1}$$ into $$ y''+a_1y' + a_2y = Ae^{i\omega x} \tag{2}$$ factor out $e^{i\omega x}$ $$-K {\omega}^2 + iKa_1\omega + Ka_2 = A \tag{3}$$ $$K = \frac{A}{-{\omega}^2 + ia_1\omega + a_2} \tag{4} $$ $$ y_p = \frac{Ae^{i\omega x}}{-{\omega}^2 + ia_1\omega + a_2} \tag{5} $$ The characteristic equation of this DE is: $$p(z) = z^2 + a_1z + a_2 \tag{6}$$ So $$p(i\omega) = -{\omega}^2 + ia_1\omega + a_2 \tag{7}$$ In $re^{i\theta}$form: $$p(i\omega) = \sqrt{(a_2-{\omega}^2)^2 + (a_1\omega)^2 } \, e^{i \arctan{\left(\frac{a_1\omega}{a_2-{\omega}^2}\right)}} \tag{8}$$ Substitute $(8)$ into $(5)$ $$ y_p = \frac{Ae^{i\omega x}}{\sqrt{(a_2-{\omega}^2)^2 + (a_1\omega)^2 } \, e^{i \arctan{\left(\frac{a_1\omega}{a_2-{\omega}^2}\right)}}} \tag{9} $$ $$ y_p = \frac{Ae^{i(\omega x - \alpha)}}{|p(i\omega)|} \tag{10} $$ Where $$\alpha = arg(p(i\omega)) = \arctan{\left(\frac{a_1\omega}{a_2-{\omega}^2}\right)}\tag{11}$$
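Formulas $(4)$–$(11)$ can be verified numerically at sample parameters (the values below are arbitrary, chosen only for illustration):

```python
import cmath

# Arbitrary illustrative parameters
a1, a2, omega, A = 0.5, 2.0, 1.3, 1.0

p = -omega ** 2 + 1j * a1 * omega + a2  # p(i*omega), equation (7)
alpha = cmath.phase(p)                  # arg p(i*omega), equation (11)

def y_p(x):                             # equation (10)
    return A * cmath.exp(1j * (omega * x - alpha)) / abs(p)

# Differentiating e^{i omega x} multiplies it by i*omega,
# so check equation (2) pointwise
x = 0.7
lhs = (1j * omega) ** 2 * y_p(x) + a1 * (1j * omega) * y_p(x) + a2 * y_p(x)
assert abs(lhs - A * cmath.exp(1j * omega * x)) < 1e-12
```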
{ "language": "en", "url": "https://math.stackexchange.com/questions/4172577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
The definition of simple function. Which definition should I choose? I'm studying Lebesgue integral theory and I came across the concept of Simple function. And there are some definitions of simple function. I wonder which definition of simple function I should recognize. On this page https://en.wikipedia.org/wiki/Simple_function, it is said that $f$ is a simple function if there exist a sequence of Lebesgue-measurable sets $\{ E_k \}_{k=1}^{N}$ and a sequence of real numbers $\{ a_k \}_{k=1}^N$ s.t. \begin{align} &\cdot f= \displaystyle\sum_{k=1}^N a_k \chi_{E_k} (x). \\ &\cdot E_i \cap E_j =\emptyset \ \text{for }i\neq j.\\ \end{align} But in my class, I learned the definition of simple function as below. $f$ is a simple function $\iff$ There exist a sequence of Lebesgue-measurable sets $\{ E_k \}_{k=1}^{N}$ and a sequence of real numbers $\{ a_k \}_{k=1}^N$ s.t. \begin{align} &\cdot f= \displaystyle\sum_{k=1}^N a_k \chi_{E_k} (x). \\ &\cdot m(E_k)<\infty. \end{align} Moreover, according to another website, $f: X \to \mathbb{R}$ is a simple function if there exist a sequence of Lebesgue-measurable sets $\{ E_k \}_{k=1}^{N}$ and a sequence of real numbers $\{ a_k \}_{k=1}^N$ s.t. \begin{align} &\cdot f= \displaystyle\sum_{k=1}^N a_k \chi_{E_k} (x). \\ &\cdot X=\bigcup_{k=1}^N E_k. \\ &\cdot E_i \cap E_j =\emptyset \ \text{for }i\neq j. \\ \end{align} Is each definition correct or equivalent to each other? I wonder which definition I should choose.
The definitions are not equivalent in general. Definitions 1 and 3 equivalently say that $f: X \to ℝ$ takes only finitely many values and is measurable. Definition 2 additionally demands that the preimages of all values but $0$ have finite measure (which is trivial if $X$ has finite measure) – to ensure that $f$ is integrable. The extra condition also ensures that simple functions form an algebra (you can add, subtract and multiply them, and also multiply them by a scalar). This, together with the simple observation that every characteristic function of a set of finite measure is simple, actually proves the equivalence of the definitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4172726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distance metric on Stiefel manifold vs Grassmannian Initially I was using a distance metric based on principle angles from this paper to calculate distances in the Grassmannian manifold. However, my project slightly changed and it seems like my manifold is now composed of ordered vectors. Hence I found that the Stiefel manifold might be more appropriate. However, I don't know if distance metrics for the Grassmannian can still be used in my case. From here it seems like I need a different distance metric for the Stiefel, but I'm not sure where to find any.
A distance function on the Grassmannian $d_G:Gr(n,k)\times Gr(n,k)\to\mathbb{R}$ won't suffice, in that we cannot define a distance function on the Stiefel manifold $d_S:V(n,k)\times V(n,k)\to\mathbb{R}$ by $d_S(E,F):=d_G(\operatorname{span}(E),\operatorname{span}(F))$. This is because there are distinct $k$-frames $E\neq F$ which span the same subspace, and defining $d_S$ in this manner would give $d_S(E,F)=0$. Still, there are many possible distance functions on $V(n,k)$, and which to choose depends on the properties you want this distance to have. Since $V(n,k)$ embeds canonically into $\mathbb{R}^{nk}$, one could simply use Euclidean distances in $\mathbb{R}^{nk}$, or alternatively use the geodesic distance from the Riemannian metric induced by this embedding.
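The distinction can be made concrete: two different $2$-frames spanning the same plane have all principal angles equal to zero (so any principal-angle-based Grassmannian distance vanishes), but nonzero Euclidean distance as points of $V(n,k)\subset\mathbb{R}^{nk}$. A small sketch:

```python
import numpy as np

# Two distinct 2-frames in R^3 spanning the same plane
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # columns e1, e2
F = E[:, ::-1]              # columns e2, e1: same span, different frame

# Principal angles between span(E) and span(F) via singular values
s = np.linalg.svd(E.T @ F, compute_uv=False)
theta = np.arccos(np.clip(s, -1.0, 1.0))
grassmann_dist = np.linalg.norm(theta)   # 0: the subspaces coincide

frobenius_dist = np.linalg.norm(E - F)   # Euclidean distance in R^{nk}

assert grassmann_dist < 1e-9
assert frobenius_dist > 1.0              # equals 2 here
```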
{ "language": "en", "url": "https://math.stackexchange.com/questions/4172951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Isomorphism between adjoint and coadjoint representations Given a complex Lie algebra $\mathfrak{g}$, with dual space $\mathfrak{g}^*$, fix $\operatorname{ad}: \mathfrak{g} \rightarrow \mathfrak{gl}(\mathfrak{g})$, $\operatorname{ad}(X) = [X,\_]$ the adjoint representation of $\mathfrak{g}$, and fix $\operatorname{ad}^*: \mathfrak{g} \rightarrow \mathfrak{gl}(\mathfrak{g^*})$, $\operatorname{ad}^*(X) = -\operatorname{ad}(X)^\top$ the coadjoint representation. I am trying to prove that the adjoint and coadjoint representations are isomorphic. I have been told that the Killing form $K: \mathfrak{g} \times \mathfrak{g} \rightarrow \mathbb{C}$ induces such isomorphism. To do so, I've defined a map $K^\flat: \mathfrak{g} \rightarrow \mathfrak{g}^*$ given by \begin{align} K^\flat(X): \mathfrak{g} & \rightarrow \mathbb{C} \\ Y & \mapsto K(X,Y). \end{align} Questions * *How do I prove that $K^\flat$ is a representation homomorphism? It feels very easy, but somehow I cannot seal the deal on this. *Is there a better approach to this problem? Any help would be greatly appreciated.
I'll denote the adjoint operator by $\operatorname{ad}_x \colon \mathfrak{g} \to \mathfrak{g}$, so that we have $\operatorname{ad}_x^* = - \operatorname{ad}_x^t$, which sends a linear functional $\varphi \colon \mathfrak{g} \to \mathbb{C}$ to $\operatorname{ad}_x^* \varphi = - \varphi \circ \operatorname{ad}_x$. Let $K \colon \mathfrak{g} \times \mathfrak{g} \to \mathbb{C}$ be any symmetric bilinear form, then we get a map $K^\flat \colon \mathfrak{g} \to \mathfrak{g}^*$ by defining $K^\flat(x) = K(x, -)$. Recall that for $K^\flat$ to give an isomorphism of representations between $\operatorname{ad}$ and $\operatorname{ad}^*$, we need the equation $K^\flat(\operatorname{ad}_x y) = \operatorname{ad}^*_x K^\flat(y)$ for all $x, y \in \mathfrak{g}$. Since both sides of this equation are elements of $\mathfrak{g}^*$, we can "check" both sides for all $z \in \mathfrak{g}$. A key property of the Killing form in particular is that it is invariant, meaning that $K([x, y], z) = K(x, [y, z])$ for all $x, y, z \in \mathfrak{g}$. Now we work out that $$ \begin{aligned} (\operatorname{ad}^*_x K^\flat(y))(z) &= -(K^\flat(y) \circ \operatorname{ad}_x)(z) \\ &= -K(y, [x, z]) \\ &= K([x, y], z) \\ &= K(\operatorname{ad}_x y, z) \\ &= (K^\flat(\operatorname{ad}_x y))(z). \end{aligned}$$ Notice that the invariance property is crucial to proving that $K^\flat$ is a homomorphism of representations. In order for $K^\flat$ to be an isomorphism as well, you need that the Killing form is nondegenerate (equivalently, $\mathfrak{g}$ is a semisimple Lie algebra).
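For a concrete semisimple example, the intertwining identity can be checked in coordinates on $\mathfrak{sl}_2$: writing $A_x$ for the matrix of $\operatorname{ad}_x$ in a basis and $K$ for the Gram matrix of the Killing form, the homomorphism property reads $A_x^t K + K A_x = 0$:

```python
import numpy as np

# sl(2) with basis (e, h, f): [h,e] = 2e, [h,f] = -2f, [e,f] = h
e = np.array([[0.0, 1.0], [0.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
basis = [e, h, f]

bracket = lambda a, b: a @ b - b @ a

def coords(m):
    # A traceless 2x2 matrix m = c0*e + c1*h + c2*f
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

# Matrix of ad_x in the basis (e, h, f)
ad = [np.column_stack([coords(bracket(x, b)) for b in basis]) for x in basis]

# Gram matrix of the Killing form K(x, y) = tr(ad_x ad_y)
K = np.array([[np.trace(A @ B) for B in ad] for A in ad])

# Intertwining identity in coordinates: ad_x^T K + K ad_x = 0
for A in ad:
    assert np.allclose(A.T @ K + K @ A, 0.0)

# Nondegeneracy (sl(2) is semisimple), so K^flat is an isomorphism
assert abs(np.linalg.det(K)) > 1e-9
```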
{ "language": "en", "url": "https://math.stackexchange.com/questions/4173081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving upper bound on a term involving binomial coefficients I am trying to show: $$\binom{n}{k}\left(1-\frac{k}{n}\right)^{2n-2k+2} \left(\frac{k-1}{n}\right)^{2k}\leq \frac{1}{n^2}$$ for $k\in \{2,3,\cdots,n-1\}$ for all $n$ I observe that numerically that is true but analytically, I am not able to look at the term on LHS as a term of a binomial expansion or something like that to get an upper bound. Any ideas?
Sketch of a proof: Fact 1: Let $x, y$ be real numbers with $x \ge 6$ and $3 \le y \le x - 2$. Then $$\ln \frac{\Gamma(x + 1)}{\Gamma(y + 1)\Gamma(x - y + 1)} + (2x - 2y + 2)\ln(1 - y/x) + 2y\ln \frac{y - 1}{x} + 2\ln x \le 0.$$ The proof of Fact 1 is given at the end. By Fact 1, when $n\ge 6$ and $3 \le k \le n - 2$, we have $$\ln \binom{n}{k} + (2n - 2k + 2)\ln(1 - k/n) + 2k\ln \frac{k - 1}{n} + 2\ln n \le 0.$$ Thus, the desired inequality is true. When $n \ge 6$ and $k = 2, n - 1$, the desired inequality is verified directly. When $n \le 5$, the desired inequality is verified directly. We are done. Proof of Fact 1: Denote the LHS by $F(x, y)$. We have \begin{align*} \frac{\partial^2 F}{\partial y^2} &= - \psi'(y + 1) - \psi'(x - y + 1)\\ &\qquad + {\frac {2{x}^{2}y - 2x{y}^{2} - 4{x}^{2} + 4xy - 2{y}^{2} + 2x + 2y - 2}{ \left( x - y \right) ^{2} \left( y - 1 \right) ^{2}}}\\[5pt] &\ge - \frac{1}{y + 1} - \frac{1}{(y + 1)^2} - \frac{1}{x - y + 1} - \frac{1}{(x - y + 1)^2}\\[8pt] &\qquad + {\frac {2{x}^{2}y - 2x{y}^{2} - 4{x}^{2} + 4xy - 2{y}^{2} + 2x + 2y - 2}{ \left( x - y \right) ^{2} \left( y - 1 \right) ^{2}}}\\[6pt] &= \frac{G}{(y + 1)^2(x - y + 1)^2(x - y)^2(y - 1)^2}\\ &\ge 0 \end{align*} where $\psi(\cdot)$ is the digamma function defined by $\psi(u) = \frac{\mathrm{d} \ln \Gamma(u)}{\mathrm{d} u} = \frac{\Gamma'(u)}{\Gamma(u)}$, and \begin{align*} G &= {x}^{4}{y}^{3}-3\,{x}^{3}{y}^{4}+3\,{x}^{2}{y}^{5}-x{y}^{6}+2\,{x}^{3} {y}^{3}-6\,{x}^{2}{y}^{4}+6\,x{y}^{5}-2\,{y}^{6}\\ &\qquad -3\,{x}^{4}y+10\,{x}^{ 3}{y}^{2}-11\,{x}^{2}{y}^{3}+2\,x{y}^{4}+2\,{y}^{5}-6\,{x}^{4}+18\,{x} ^{3}y\\ &\qquad -18\,{x}^{2}{y}^{2}+6\,x{y}^{3}-2\,{y}^{4}-11\,{x}^{3}+30\,{x}^{2 }y-23\,x{y}^{2}+4\,{y}^{3}\\ &\qquad -6\,{x}^{2}+12\,xy-2\,{y}^{2}-2\,x+2\,y-2, \end{align*} and we have used $\psi'(u) \le \frac{1}{u} + \frac{1}{u^2}$ for all $u > 0$, and we have used $G \ge 0$. 
Hint for the proof of $G \ge 0$: With the substitutions $x = 6 + t, \ y = \frac{1}{1 + s}\cdot 3 + \frac{s}{1 + s}\cdot (x - 2)$ for $t, s \ge 0$, we have $(1 + s)^6G$ is a polynomial in $s, t$ with non-negative coefficients. Thus, $F(x, y)$ is convex with respect to $y$. Also, $F(x, 3) \le 0$ and $F(x, x-2) \le 0$. Thus, $F(x, y) \le 0$ for all $x \ge 6$ and $3 \le y \le x - 2$. We are done.
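As a complement to the analytic argument, the original inequality can be spot-checked directly for small $n$ (pure floating point, illustrative only):

```python
from math import comb

# Direct spot-check of the claimed bound for 2 <= k <= n-1
for n in range(3, 60):
    for k in range(2, n):
        lhs = (comb(n, k)
               * (1 - k / n) ** (2 * n - 2 * k + 2)
               * ((k - 1) / n) ** (2 * k))
        assert lhs <= 1 / n ** 2
```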
{ "language": "en", "url": "https://math.stackexchange.com/questions/4173254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can every piecewise linear function be exactly realized as a neural network? Can every continuous piecewise linear function $[-1,1]^k \rightarrow \mathbb{R}^n$ be written as a composition of the following building blocks: * *Affine map: $x \mapsto Ax + b$ for some matrix $A$ and vector $b$ *Relu activation: $(x_1, x_2, ...) \mapsto (max(0, x_1), max(0, x_2),...)$ If so, how many composition factors are needed? Can every such function be represented by a network with "only one hidden layer": $affine \circ relu \circ affine \circ relu \circ affine$ By piecewise linear, I mean that there exists decomposition of the domain $[-1,1]^k$ into finitely many polytopes, such that the restriction of the function to each polytope is affine.
Long version of my short comment: First of all, not every piecewise affine linear function can be built by a ReLU neural network with only one hidden layer. The reason is that a compactly supported piecewise affine function, such as $$ \mathbb{R}^d \ni x \mapsto \max\{0, 1 - \max_{i=1, \dots, d} |x_i| \} $$ cannot be represented by sums of ReLUs. The simple reason is that this function is smooth outside of a compact domain whereas a sum of ReLUs is either affine linear or has at least one line along which it is not smooth. (This is of course something one would need to prove in more detail. A proof can be found in Theorem 4.1 of https://arxiv.org/pdf/1807.03973.pdf.) On the other hand, it was shown in https://arxiv.org/pdf/1807.03973.pdf that deep ReLU neural networks can represent linear finite elements. This is because one can write these hat functions as a combination of max and min operations. I can only do a worse job than the authors themselves in explaining how this is done. Their paper also has a lot of nice illustrations. Therefore I think it is best to just refer to Chapter 3 of https://arxiv.org/pdf/1807.03973.pdf. From the construction of hat functions, it follows essentially directly that also all continuous piecewise linear functions can be represented by ReLU neural networks, since every such function is a sum of hat functions.
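A small sketch of both halves of the story: in dimension one a single hidden layer already produces the hat function, while the standard building block $\max(a,b) = a + \operatorname{relu}(b-a)$ shows how nesting (i.e. depth) enters for the higher-dimensional pyramids:

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)

# Dimension 1: one hidden layer suffices for the hat max(0, 1 - |x|)
def hat(x):
    return relu(x + 1) - 2 * relu(x) + relu(x - 1)

xs = np.linspace(-3.0, 3.0, 1201)
assert np.allclose(hat(xs), np.maximum(0.0, 1.0 - np.abs(xs)))

# The ingredient for higher dimensions: max(a, b) = a + relu(b - a),
# whose nesting for max_i |x_i| requires composing ReLU layers
max2 = lambda a, b: a + relu(b - a)
a, b = np.array([0.3, -1.0]), np.array([0.2, 2.0])
assert np.allclose(max2(a, b), np.maximum(a, b))
```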
{ "language": "en", "url": "https://math.stackexchange.com/questions/4173446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Evaluating $\lim _{x\to 0}\left(\ln \left[\frac{1-x^2}{\ln \left(\cos \left(x\right)\right)}\right]\right)$ How do I evaluate the following limit? \begin{align*} \lim _{x\to 0}\left(\ln \left[\frac{1-x^2}{\ln \left(\cos \left(x\right)\right)}\right]\right) \end{align*} I tried it this way, but I'm not sure if it is correct. \begin{align*} \lim _{x\to 0}\left(\ln \left[\frac{1-x^2}{\ln \left(\cos \left(x\right)\right)}\right]\right) &=\lim _{x\to 0} \left( \ln[1-x^2]-\ln[\ln(\cos x)]\right)\\ &=\lim _{x\to 0}\ln(1-x^2)-\cos x\\ &=\ln(1)-1\\ &=-1 \end{align*}
There is a mistake in your solution, as others have already pointed out. Alternatively, you may proceed like this: Let $\displaystyle L=\ln\left(\frac{1-x^{2}}{\ln(\cos x)}\right)$ $\displaystyle \Longrightarrow e^{L} =\frac{1-x^{2}}{\ln(\cos x)} \Longrightarrow e^{-L} =\frac{\ln(\cos x)}{1-x^{2}} =\frac{\ln\left( 1-\sin^{2} x\right)}{\sin^{2} x} .\frac{\sin^{2} x}{2\left( 1-x^{2}\right)} =\frac{-1-o\left(\sin^{2} x\right)}{2\left( 1-x^{2}\right)} .\sin^{2} x$ (here we used $\ln(\cos x)=\tfrac{1}{2}\ln\left(1-\sin^{2}x\right)$). It follows that: As $\displaystyle x\rightarrow 0$, we have $\displaystyle e^{-L}\rightarrow 0$, which is possible only when $\displaystyle L\rightarrow \infty $.
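Numerically, the inner fraction $(1-x^2)/\ln(\cos x)$ behaves like $-2/x^2$ near $0$: it is negative (so the real logarithm of the fraction itself is undefined and only magnitudes are tracked), and $\ln|\cdot|$ grows without bound, consistent with the conclusion $L\to\infty$:

```python
import numpy as np

xs = np.array([1e-1, 1e-2, 1e-3])
ratio = (1 - xs ** 2) / np.log(np.cos(xs))  # ~ -2/x^2 near 0

vals = np.log(np.abs(ratio))
assert np.all(ratio < 0)           # the fraction is negative near 0
assert np.all(np.diff(vals) > 0)   # ln|ratio| increases as x -> 0
assert vals[-1] > 14               # ln(2/x^2) ~ 14.5 at x = 1e-3
```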
{ "language": "en", "url": "https://math.stackexchange.com/questions/4173582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Can this determinant ever vanish? Let $a_j=1 +2\cos\left(\frac{2\pi j}q\right)$ for $j=1,\dots,q$. Then consider the Hermitian matrix: $$A_q = \begin{pmatrix} a_1 & 1 & 0 & 0 & \ldots & 0 & 1 \\ 1 & a_2 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & a_3 & 1 & \ldots & 0 & 0 \\ 0 & 0 & 1 & a_4 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \ldots & a_{q-1} & 1 \\ 1 & 0 & 0 & 0 & \ldots & 1 & a_q \end{pmatrix}$$ Using Mathematica, I find: $$\begin{align} \det(A_2)&=-4\\ \det(A_3)&=-1\\ \det(A_4)&=-7\\ \det(A_5)&=-\frac52(-5+\sqrt5) \end{align}$$ Is there any general formula that describes this determinant? And, more importantly, can the determinant ever be zero? The one answer I got so far, suggests no, but it is of course not a proof. This problem is motivated by a quantum mechanics problem, where the Hermitian matrix describes an observable in QM. In particular, it originates from the study of an electron on a lattice in a commensurable magnetic field.
Because of the $1$'s in the upper-right and lower-left corners, I didn't have any luck trying to derive a recurrence relation. But using Matlab, I was able to at least get the first several terms:

| $q$ | $\text{det}\left(A_q\right)$ |
| --- | --- |
| $1$ | $3$ |
| $2$ | $-4$ |
| $3$ | $-1$ |
| $4$ | $-7$ |
| $5$ | $\frac{5-\sqrt{5}}{4}$ |
| $6$ | $5$ |
| $7$ | $12.95944337$ |
| $8$ | $-5.05887450$ |
| $9$ | $-12.24073273$ |
| $10$ | $-26.54915028$ |

The latter four decimal expansions don't even appear in OEIS, so as saulspatz notes in his comment, "pretty" closed-form expressions are unlikely. In terms of asymptotics, plotting the first 1000 terms shows an approximate exponential relationship: $\begin{align}\left|\text{det}\left(A_q\right)\right|\sim \exp\left(\beta_0+\beta_1 q\right) \hspace{1 em} \text{where} \hspace{1 em} &\beta_0 = -0.0103 \pm 0.09\\ &\beta_1 = 0.2514 \pm 0.0004\end{align}$ $\text{sgn}\left(\text{det}\left(A_q\right)\right)$ also appears to be somewhat periodic; the sign changes every $3$ or $4$ iterations from $2\leq q\leq 1000$. This is far from a complete answer, but I hope it's enough to serve as a branching off point.
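The table and the non-vanishing observation are easy to reproduce numerically (restricting to $q\ge 3$, where the corner $1$'s do not collide with the tridiagonal band):

```python
import numpy as np

def A(q):
    """The periodic tridiagonal matrix A_q (valid for q >= 3)."""
    j = np.arange(1, q + 1)
    a = 1 + 2 * np.cos(2 * np.pi * j / q)
    M = np.diag(a)
    for i in range(q):
        M[i, (i + 1) % q] = 1.0
        M[i, (i - 1) % q] = 1.0
    return M

dets = {q: np.linalg.det(A(q)) for q in range(3, 201)}

assert abs(dets[3] - (-1)) < 1e-9                  # matches det(A_3) = -1
assert abs(dets[4] - (-7)) < 1e-9                  # matches det(A_4) = -7
assert all(abs(d) > 1e-6 for d in dets.values())   # no zero up to q = 200
```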
{ "language": "en", "url": "https://math.stackexchange.com/questions/4173768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }