$m + n \sqrt{2}$ is invertible $\iff$ $m^2 - 2n^2 =\pm 1$ I'm having trouble proving that, for $m, n \in \mathbb{Z}$, the existence of a multiplicative inverse for $m + n \sqrt{2}$ implies that $m^2 - 2n^2 = \pm 1$. The first step, I believe, is to solve for the inverse, which is clearly $\frac{1}{m + n\sqrt{2}}$, provided that $m + n \sqrt{2} \neq 0$, as $m + n \sqrt{2}$ would otherwise not be invertible. From here, I'm unsure how to piece together this proof. I read through some hints on another answer on here, so it seems that a plausible step is to use the fact that $m^2 - 2n^2$ is a difference of squares and factors into $(m - \sqrt{2} n)(m + \sqrt{2}n)$. One of these factors is invertible, but we don't have any information on whether its conjugate is, or any way to equate the product with, say, $1$ or $-1$, so I can't quite figure out how to get there. In the opposite direction, the first step seems to be factoring into $(m + n \sqrt{2})(m - n \sqrt{2}) = \pm 1$. If this product is equal to $1$, then $m - n \sqrt{2}$ is clearly the inverse of $m + n \sqrt{2}$, as we end up with a product of $1$. If the product is $-1$, we can scale both sides by $-1$, so $-(m - n \sqrt{2})$ is the inverse. Any helpful thoughts and hints would be greatly appreciated.
As you said, the only candidate for the inverse is clearly $\frac1{m+n\sqrt{2}}$. We have $$\frac1{m+n\sqrt{2}} = \frac{m-n\sqrt{2}}{m^2-2n^2} = \frac{m}{m^2-2n^2} + \frac{-n}{m^2-2n^2}\sqrt{2}$$ This is an element of $\mathbb{Z}[\sqrt{2}]$ if and only if both $\frac{m}{m^2-2n^2}$ and $\frac{-n}{m^2-2n^2}$ are in $\mathbb{Z}$. Let $d = \gcd(m,n)$. Assume that $m^2-2n^2 \mid m$ and $m^2-2n^2 \mid n$. Then also $m^2-2n^2 \mid d$. On the other hand, $d \mid m$ and $d \mid n$ so $d^2 \mid m^2-2n^2$. Therefore $d^2 \mid d$ so $d = 1$. Therefore if $\frac{m}{m^2-2n^2}, \frac{-n}{m^2-2n^2} \in \mathbb{Z}$ then $m^2-2n^2$ divides $d = 1$ so we conclude $m^2-2n^2 = \pm 1$.
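Not part of the proof, but the equivalence is easy to sanity-check by machine: the inverse lies in $\mathbb{Z}[\sqrt{2}]$ exactly when $m^2-2n^2$ divides both $m$ and $n$, which the argument above shows happens precisely for $m^2-2n^2=\pm1$. A brute-force sketch (the search window $[-20,20]$ is an arbitrary choice):

```python
def inverse_in_ring(m, n):
    """True iff (m + n*sqrt(2))^(-1) lies in Z[sqrt(2)], i.e. the norm
    D = m^2 - 2n^2 divides both m and n, so that both coordinates of
    the inverse, m/D and -n/D, are integers."""
    D = m * m - 2 * n * n  # nonzero for (m, n) != (0, 0) since sqrt(2) is irrational
    return m % D == 0 and n % D == 0

# invertibility should coincide with m^2 - 2n^2 = +-1 on the whole window
checked = [(m, n) for m in range(-20, 21) for n in range(-20, 21) if (m, n) != (0, 0)]
assert all(inverse_in_ring(m, n) == (abs(m * m - 2 * n * n) == 1) for m, n in checked)
```

For instance $(3, 2)$ passes the check, matching $(3+2\sqrt2)(3-2\sqrt2)=1$, while $(2,1)$ fails, since $(2+\sqrt2)^{-1}=1-\frac{\sqrt2}{2}\notin\mathbb{Z}[\sqrt2]$.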
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Contour integral of $\int_{-\infty}^\infty \frac{e^{ax}}{1+e^x}dx $ with non-rectangular contour Is there a way to solve the integral $$\int_{-\infty}^\infty \frac{e^{ax}}{1+e^x}dx $$ for $$a\in (0,1)$$ without using the rectangular region like in this post, but still using a contour integral? Perhaps using a semicircular region, circular region, or Fresnel contour? I just don't have a lot of experience with the rectangular-region problems. Thanks.
(Not using contour integration ; Sorry ) $$I=\int_{-\infty}^{\infty} \frac {e^{ax}}{1+e^x} dx=\int_{-\infty}^{\infty} \frac {e^x\cdot e^{ax}}{e^x+e^{2x}} dx$$ Use the substitution $e^x=t$ $$I=\int_{0}^{\infty} \frac {t^{a-1}}{1+t}dt =B(a,1-a)=\Gamma(a)\Gamma(1-a)=\frac {\pi}{\sin (\pi a)}$$
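The closed form is easy to confirm numerically. The trapezoidal sum below (cutoff and step size are arbitrary choices) converges very quickly because the integrand decays exponentially in both directions:

```python
import math

def lhs(a, lo=-200.0, hi=200.0, n=40000):
    """Composite trapezoidal approximation of the integral of
    e^(a*x) / (1 + e^x) over [lo, hi]; for a well inside (0, 1)
    the tails beyond +-200 are negligible."""
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(a * x) / (1.0 + math.exp(x))
    return s * h

# compare against pi / sin(pi * a) for a few values of a
for a in (0.3, 0.5, 0.7):
    assert abs(lhs(a) - math.pi / math.sin(math.pi * a)) < 1e-6
```

At $a=\frac12$ the exact value is simply $\pi$, which the quadrature reproduces.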
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why is $[0,1]$ an open subset of $[0,1] \cup [2,3]$? Given a metric space $X = [0,1]\cup[2,3]$ I have to show $[0,1]$ is both open and closed in $X$. This question is also asked in this thread : Let $X = [0,1] \cup [2,3]$ be a metric space. Why is $[0,1]$ both open and closed? I understand why $[0,1]$ is closed, but am having trouble understanding why it is open. From the answer in the previous thread, I understand that $[0,1]$ is open as it is the complement of $[2,3]$ which is closed. But for the point $0$ in $[0,1]$ we can construct no open ball which lies in X. So then how is $[0,1]$ open in X?
Thanks a lot MichaelBurr and copper.hat. The key was to think in terms of the relative metric for $X$ and not the metric used for $\mathbb{R}$. The relative metric restricts the definition of the metric for only those points which belong to $[0,1]\cup[2,3]$. In this case, $[0,1/2)$ is also an open ball in [0,1] when X is the metric space under consideration. Alternatively,as suggested by copper.hat, one may look at $[0,1]$ as $X\cap[0,1]$ where $[0,1]$ is closed in $\mathbb{R}$. Also it can be seen as $X\cap(-1,3/2)$ where $(-1,3/2)$ is open in $\mathbb{R}$.
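A way to see this concretely: in $X$, the open ball of radius $\frac32$ around $0$ is already all of $[0,1]$, since every point of $[2,3]$ is at distance at least $2$ from $0$. A small sampling sketch (the grid resolution is an arbitrary choice):

```python
# sample of the metric space X = [0,1] u [2,3]
X = [i / 1000 for i in range(1001)] + [2 + i / 1000 for i in range(1001)]

# the open ball B(0, 3/2) taken *inside X*
ball = [p for p in X if abs(p - 0) < 1.5]

# it coincides with the sampled copy of [0,1], so [0,1] is itself an open ball in X
assert ball == [p for p in X if 0 <= p <= 1]
```

With radius $\frac32$ no point of $[2,3]$ can sneak in, which is exactly why the "missing" half-ball around $0$ in $\mathbb{R}$ causes no problem in the relative metric.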
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example of a sequence of functions where the limit cannot be interchanged Give an example of a sequence of continuous functions $f_n$ on $[0,1]$ with $f_n$ converges pointwise to a continuous function $f$ such that the following relation does't hold: $$\lim_{n \rightarrow \infty} \lim_{x \rightarrow 0} f_n(x)=\lim_{x \rightarrow 0}\lim_{n \rightarrow \infty} f_n(x)$$ I know such a convergence is not uniform. I already tried with this one: $f_n(x)= 2nx e^{-nx^2}$. Actually this one satisfies the given limit condition even though the convergence is not uniform! Any hint?
Since all $f_n$ are continuous we have $\lim_{x\to0} f_n(x) = f_n(0)$ and since $f = \lim_{n\to\infty} f_n$ is also continuous we have $\lim_{x\to 0} f(x) = f(0)$. So your relation boils down to $\lim_{n\to\infty} f_n(0) = f(0)$, which is true because $f_n \to f$ pointwise.
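This is consistent with the example attempted in the question: for $f_n(x) = 2nx\,e^{-nx^2}$ the pointwise limit is $f \equiv 0$, which is continuous, so both iterated limits are $0$ even though the convergence is not uniform; the suprema $\sup_{[0,1]} f_n = \sqrt{2n/e}$ blow up. A quick numerical illustration (grid sizes are arbitrary choices):

```python
import math

def f_n(n, x):
    return 2 * n * x * math.exp(-n * x * x)

# both iterated limits are 0: f_n(0) = 0 for every n, and f_n(x) -> 0 for fixed x
assert all(f_n(n, 0) == 0 for n in (1, 10, 100, 1000))
assert f_n(10**6, 0.5) < 1e-100  # pointwise limit at x = 0.5

# but the convergence is not uniform: sup over [0,1] grows like sqrt(2n/e),
# attained near x = 1/sqrt(2n)
for n in (1, 10, 100):
    sup = max(f_n(n, i / 10**5) for i in range(10**5 + 1))
    assert abs(sup - math.sqrt(2 * n / math.e)) < 1e-3
```

So the failure of uniform convergence alone cannot break the limit interchange here; the answer above shows a discontinuous limit function would be required, which the problem statement rules out.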
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Solve: $2^x\Bigl(2^x-1\Bigl) + 2^{x-1}\Bigl(2^{x-1} -1 \Bigl) + .... + 2^{x-99}\Bigl(2^{x-99} - 1\Bigl) = 0$ The question says to find the value of $x$ if, $$2^x\Bigl(2^x-1\Bigl) + 2^{x-1}\Bigl(2^{x-1} -1 \Bigl) + .... + 2^{x-99}\Bigl(2^{x-99} - 1 \Bigl)= 0$$ My approach: I rewrote the expression as, $$2^x\Bigl(2^x-1\Bigl) + \frac{2^x}{2}\Bigl(\frac{2^x}{2} -1 \Bigl) + .... + \frac{2^x}{2^{99}} \Bigl(\frac{2^x}{2^{99}} - 1 \Bigl)= 0$$ I then took $\bigl(2^x\bigl)$ common and wrote it as, $$2^x \Biggl[ \Bigl(2^x - 1\Bigl) + \frac{1}{2^1}\Bigl(2^x -2^1\Bigl) + \frac{1}{2^2}\Bigl(2^x - 2^2\Bigl) + \;\ldots + \frac{1}{2^{99}} \Bigl(2^x - 2^{99}\Bigl)\Biggl] = 0$$ After further simplification I got, $$\frac{2^x}{2^{99}} \Biggl[ \Bigl(2^x\cdot2^{99} - 2^{99}\Bigl) + \Bigl(2^x \cdot 2^{98} - 2^{99}\Bigl) + \ldots + \bigl(2^x -2^{99}\bigl)\Biggl] = 0$$ Taking $-2^{99}$ common I got, $$-2^x \Biggl[ \Bigl( 2^{x+99} + 2^{x+98} + \ldots + 2^{x+2} + 2^{x+1} + 2^x \Bigl)\Biggl]= 0$$ Now the inside can be expressed as $$\sum ^ {n= 99} _{n=1} a_n$$ Where $a_n$ are the terms of the GP. Thus we can see that either $$-2^x= 0$$ Or, $$\sum ^ {n= 99} _{n=1} a_n = 0$$ Since the first condition is not possible, thus, $$\sum ^ {n= 99} _{n=1} a_n = 0$$ So, $$2^{x + 99} \Biggl(\frac{1-\frac{1}{2^{100}}}{1-\frac{1}{2}} \Biggl) = 0$$ Either way once I solve this, I am not getting an answer that is even in the options. The answers are all in the form of logarithmic expressions. Any help would be appreciated. We have to find the value of $x$.
\begin{align} 2^x\Bigl(2^x-1\Bigl) + 2^{x-1}\Bigl(2^{x-1} -1 \Bigl) + .... + 2^{x-99}\Bigl(2^{x-99} - 1 \Bigl) &= 0 \\ \sum_{i=0}^{99}2^{x-i}\left(2^{x-i}-1\right) &= 0 \\ \sum_{i=0}^{99}\left[\left(2^{x-i}\right)^2-2^{x-i}\right] &= 0 \\ \sum_{i=0}^{99}\left(2^{x-i}\right)^2-\sum_{i=0}^{99}2^{x-i} &= 0 \\ 2^{2x}\sum_{i=0}^{99}2^{-2i}&=2^{x}\sum_{i=0}^{99}2^{-i} \\ 2^x&=\dfrac{\sum_{i=0}^{99}2^{-i}}{\sum_{i=0}^{99}2^{-2i}} \\ &= \dfrac{2\left(1-2^{-100}\right)}{\frac{4}{3}\left(1-4^{-100}\right)} \\ &= \dfrac{3}{2}\cdot\dfrac{1-2^{-100}}{\left(1-2^{-100}\right)\left(1+2^{-100}\right)} \\ &= \dfrac{3}{2\left(1+2^{-100}\right)} = \dfrac{3\cdot 2^{99}}{2^{100}+1}\\ x &= \log_2\left(\dfrac{3\cdot 2^{99}}{2^{100}+1}\right) \end{align} (Note that $\sum_i 2^{2x-2i} = 2^{2x}\sum_i 2^{-2i}$, a sum of quotients, not a quotient of sums, and that $1-4^{-100}$ factors as $\left(1-2^{-100}\right)\left(1+2^{-100}\right)$.)
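Whichever algebraic route one takes, a candidate closed form is easy to check with exact rational arithmetic, since in terms of $y = 2^x$ the equation is $\sum_{i=0}^{99}\frac{y}{2^i}\left(\frac{y}{2^i}-1\right)=0$. The sketch below verifies that $y = \frac{3\cdot 2^{99}}{2^{100}+1}$, the ratio of the two geometric sums $\sum 2^{-i}$ and $\sum 2^{-2i}$, makes the sum vanish exactly:

```python
from fractions import Fraction

# candidate value of y = 2^x, checked exactly rather than in floating point
y = Fraction(3 * 2**99, 2**100 + 1)

total = sum((y / 2**i) * (y / 2**i - 1) for i in range(100))
assert total == 0  # the 100-term sum vanishes exactly
```

Since $y$ is strictly between $1$ and $2$, the answer $x=\log_2 y$ lies in $(0,1)$.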
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
What do you call this property involving a function between two complete metric spaces? I have a notion, for which I am not able to find any reference name, as I am not that familiar with these concepts. Please help me by pointing to a definition for the below scenario. Is there a name for the following property of the setup? There is a a continuous and onto function $e : A \to B$, $A$ and $B$ being two different complete metric spaces. For any element $b\in B$, and for any element $a \in e^{-1}(\{b\})$, (where $e^{-1}(\{b\})$ is the pre-image of the element $b$ in the function $e$), For every punctured neighbourhood of $b$ denoted as $P_{\epsilon}(b)$, the pre-image $e^{-1}(P_{\epsilon}(b))$ contains a sequence $\{a_n\}$, such that $\{a_n\} \to a$
Let us call the property described in the question Property P. Continuing the observations made in Stefan Böttner's answer we get the following. Observation. Let $A$ and $B$ be metric spaces and $e\colon A\to B$ be a continuous function. Then $e$ has the Property P if and only if $A$ has no isolated points and $e$ is nowhere constant. (I am not sure to which extent this is a standard term, but it seems to be a natural name for this. It also appears in some books.) By nowhere constant I mean that there is no non-empty open subset $U\subseteq A$ such that $e|_U$ is constant. Proof. $\boxed{\Rightarrow}$ If $a$ is any point of $A$ then Property P implies the existence of a sequence converging to $a$ whose terms differ from $a$ (since $e(a_n)\ne e(a)$), hence $a$ is not isolated. Let $U\ne\emptyset$ be open and $a\in U$. Let $b=e(a)$. Let $\varepsilon>0$. The set $e^{-1}[P_\varepsilon(b)]$ contains a sequence $(a_n)$ converging to $a$. Starting with some $n_0$, the terms of this sequence belong to $U$, and we also have $e(a_n)\ne b = e(a)$. Therefore $e|_U$ is not constant. $\boxed{\Leftarrow}$ Let $B(b,\varepsilon)$ be the open ball around $b$. By continuity we get that there is a $\delta$ such that $B(a,\delta)\subseteq e^{-1}[B(b,\varepsilon)]$. Let us choose $n_0$ with $1/n_0<\delta$. Then each ball $B(a,\frac1{n_0+k})$ lies inside $e^{-1}[B(b,\varepsilon)]$. And since the function $e$ is not constant on this ball, we can choose $a_k\in B(a,\frac1{n_0+k})$ such that $e(a_k)\ne e(a)$, i.e., $a_k\in e^{-1}[P_\varepsilon(b)]$; clearly $a_k\to a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Splitting field of a polynomial $f(x) =(x^2-3)(x^2-5)(x^5-1)$ over $\mathbb{Q}$. I was considering the splitting field E of the polynomial $f(x) =(x^2-3)(x^2-5)(x^5-1)$ over $\mathbb{Q}$. I expected $E=\mathbb{Q}(\sqrt{5},\sqrt{3},\omega)$, where $\omega=e^{\frac{2\pi i}{5}}$. But I saw a textbook that claimed $E=\mathbb{Q}(\sqrt{3},\omega)$; believing that $\pm\sqrt{5}\in \mathbb{Q}(\omega)$. Can someone please show me how the latter answer is true if it is actually true? Thanks
Note that $$\begin{align}\frac{x^5-1}{x-1}&=x^4+x^3+x^2+x+1=x^2\,\left(t^2+t-1\right) \\&=x^2\,\left(t-\frac{-1+\sqrt{5}}{2}\right)\left(t-\frac{-1-\sqrt{5}}{2}\right)\,,\end{align}$$ where $t:=x+\dfrac{1}{x}$. Since $\omega$ is a root of $x^4+x^3+x^2+x+1$ and $\omega\ne 0$, the quantity $\omega+\omega^{-1}\in\mathbb{Q}(\omega)$ satisfies $t^2+t-1=0$; in fact $\omega+\omega^{-1}=2\cos\frac{2\pi}{5}>0$, so $\omega+\omega^{-1}=\frac{-1+\sqrt{5}}{2}$ and hence $\sqrt{5}=1+2\left(\omega+\omega^{-1}\right)$. Therefore, $\sqrt{5}$ is already in the field $\mathbb{Q}(\omega)$.
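The containment is easy to confirm with complex floating point, as a sanity check on the algebra:

```python
import cmath, math

w = cmath.exp(2j * math.pi / 5)  # primitive 5th root of unity

# w + w^4 = w + 1/w is a root of t^2 + t - 1, as in the factorisation above
assert abs((w + w**4)**2 + (w + w**4) - 1) < 1e-12

# hence 1 + 2*(w + w^4) should equal sqrt(5), a real number
val = 1 + 2 * (w + w**4)
assert abs(val.imag) < 1e-12
assert abs(val.real - math.sqrt(5)) < 1e-12
```

So $\sqrt5\in\mathbb{Q}(\omega)$, and the splitting field needs no separate adjunction of $\sqrt5$.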
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Set $S=\{(x,y,z)| x,y,z \in \mathbb{Z}\}$ is a subset of vector space $\mathbb{R}^3$, how do I show that it is not a subspace of $\mathbb{R}^3$. So I know that set $S=\{(x,y,z)| x,y,z\in \mathbb{Z}\}$ is a subset of vector space $\mathbb{R}^3$. Specifically, it is worded in our lecture that it is a " subset of $(\mathbb{R}^3, \oplus, \odot)$ , where $\oplus$ and $\odot$ are the usual vector addition and scalar multiplication." My teacher has stated in our lecture that this set $S$ is not a subspace of $\mathbb{R}^3$. But from what I can tell $S$ is: * *Closed under addition *Closed under multiplication *Contains a zero vector $(0,0,0)$ How is it not a subspace of $\mathbb{R}^3$, what am I missing?
Edit: User Randall pointed out that I misread the question. (I assumed $S$ was over some invalid field.) First let's look at the definition of a subspace: * *All sums of elements within the subspace are also in the subspace. *All elements in the subspace, scaled by any scalar of the surrounding vector space, remain in the subspace. *$0$ is also an element of the subspace. You remembered rules 1 and 3; however, it's clear $S$ violates rule 2. In order for $S$ to be a subspace of $\mathbb{R}^3$, every element of $S$ scaled by any number in $\mathbb{R}$ must also be an element of $S$. For example, $\frac{1}{2}\odot(1,1,1)=\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ is not in $S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is there a metric on the reals $\Bbb R$ so that a subset of $\Bbb R$ is open iff its complement is finite? I wanted to know if there is a distance function $d$ on $\Bbb R$ so that a nonempty subset $U$ of $\Bbb R$ is open with respect to $d$ if and only if its complement $\Bbb R \setminus U$ is finite?
Let $d$ be such a metric and $a,b$ any distinct points. Then the open balls around $a$ and $b$ of radius $\frac 12 d(a,b)$ are disjoint proper non-empty open sets, hence at most one of them can be co-finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Linear Programming optimization with multiple optimal solutions I am trying to solve the following optimization problem using linear programming (deterministic operations research). According to the book, there are multiple optimal solutions, and I don't understand why. I'll show you what I have done. The problem is: $\max Z=500x_{1}+300x_{2}$ s.t. $15x_{1}+5x_{2}\leq 300$ $10x_{1}+6x_{2}\leq 240$ $8x_{1}+12x_{2}\leq 450$ $x_{1},x_{2}\geqslant 0$ I have plotted the lines graphically to get: The intersection points are: $(15,15)$ $(\frac{135}{14},\frac{435}{14})$ $(\frac{5}{2},\frac{215}{6})$ The objective function equals $12000$ for two of these points. If this value were the maximum, then I would say that the entire line segment (edge) between these intersection points is the optimal solution (multiple solutions). However, this is not the maximum: the second intersection point I wrote gives a higher value, and therefore is the maximum. So I think there is a single solution. What am I missing? And generally speaking, what is the mathematical justification for having multiple solutions when two points give the maximum (or minimum)?
If you solve the problem graphically you should solve the objective function $Z$ for $x_2$ as well. $Z=500x_{1}+300x_{2}$ $Z-500x_{1}=300x_{2}$ $\frac{Z}{300}-\frac53x_1=x_2$ Now you set the level equal to zero, meaning $Z=0$, and draw the line. This line goes through the origin and has a slope of $-\frac53$. Then you shift the line upward, parallel to itself, until it touches the last possible point(s) of the feasible region. The graph below shows the process. All the points on the green line for $\frac52 \leq x_1\leq 15$ are optimal solutions. All the optimal solutions lie on the line of the second constraint. This result can be confirmed if we look at the coefficients of the second constraint and the objective function. The ratios of the coefficients are equal: $\frac{10}6=\frac{500}{300}$. Additionally, the second constraint is fulfilled as an equality. Conclusion: If the slope of the objective function is equal to the slope of one of the constraints, then there may exist a solution set which is a line segment and not a single point (in the two-variable case).
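The degeneracy can also be seen by brute force: enumerate the intersection points of all pairs of constraint boundaries (including the axes), keep the feasible ones, and evaluate $Z$ at each. This vertex-enumeration sketch uses exact fractions and is only practical for tiny LPs:

```python
from fractions import Fraction
from itertools import combinations

# boundary lines a*x1 + b*x2 = c, including the axes x1 = 0 and x2 = 0
lines = [(15, 5, 300), (10, 6, 240), (8, 12, 450), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2):
    return (x1 >= 0 and x2 >= 0 and 15*x1 + 5*x2 <= 300
            and 10*x1 + 6*x2 <= 240 and 8*x1 + 12*x2 <= 450)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue  # parallel lines, no intersection point
    x1 = Fraction(c1*b2 - c2*b1, det)  # Cramer's rule
    x2 = Fraction(a1*c2 - a2*c1, det)
    if feasible(x1, x2):
        vertices.append((x1, x2))

best = max(500*x1 + 300*x2 for x1, x2 in vertices)
optima = [v for v in vertices if 500*v[0] + 300*v[1] == best]
# best is 12000, attained at the two vertices (15, 15) and (5/2, 215/6)
```

The point $(\frac{135}{14},\frac{435}{14})$ from the question never appears among the feasible vertices: it violates the second constraint, which is why its higher objective value is irrelevant.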
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Gradient vector $\nabla F = (z_x, z_y, -1)$ is normal to the integral surface? I have an integral surface $z = z(x, y)$. Writing this integral surface in implicit form, we get $$F(x, y, z) = z(x, y) - z = 0$$ I am then told that the gradient vector $\nabla F = (z_x, z_y, -1)$ is normal to the integral surface $F(x, y, z) = 0$. First of all, how was this calculated? I understand how the gradient is calculated, but I don't understand how it was calculated in this case? And lastly, where did the $-1$ come from and why? Couldn't they also have had $\nabla F = (z_x, z_y, 1)$, where this would just be the normal vector in the other direction? Why and how did they pick the $-1$ direction instead? I apologise. My vector calculus understanding is not particularly strong, and I strive to improve it. Thank you for any help.
Let us start with an example. $$ z=x^2+y^2$$ $$ F(x,y,z)=x^2+y^2-z$$ $$\nabla F = (z_x, z_y, -1)=\langle 2x,2y,-1\rangle$$ If a point is given, for example $P(1,2,5)$, then at that point you have two normal vectors to the surface. Upward normal: $$\langle -2x,-2y,1\rangle = \langle-2,-4,1\rangle$$ Downward normal: $$\langle 2x,2y,-1\rangle = \langle2,4,-1\rangle$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2865943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Given $t = \tan \frac{\theta}{2}$, show $\sin \theta = \frac{2t}{1+t^2}$ Given $t = \tan \frac{\theta}{2}$, show $\sin \theta = \frac{2t}{1+t^2}$ There are a few ways to approach it; one of the ways I encountered is that, using the $\tan2\theta$ formula, we get $$\tan\theta = \frac{2t}{1-t^2}$$ By trigonometry, we know the side ratio of the triangle, that is, opposite to adjacent is $2t : 1-t^2$, and hence it follows the hypotenuse is $\sqrt{(2t)^2+(1-t^2)^2} = \sqrt{(t^2+1)^2} = (t^2+1)$ My question is, can the above square-rooted answer be $-(t^2+1)$ also? Why do we reject the negative answer?
No, because $$ \sqrt {x^2} = |x|$$ Thus $$ \sqrt{(t^2+1)^2} = |t^2+1|=t^2+1\,,$$ since $t^2+1>0$ for every real $t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
What are the steps involved in solving a quartic polynomial modulo a prime modulus? This: $$x^4 + 21x^3 + 5x^2 + 7x + 1 \equiv 0 \mod 23$$ Leads to: $$x = 18 || x = 19$$ I know this because of this WolframAlpha example and because a fellow member posted it in a since deleted & related question. What I don't understand are the steps involved in arriving at x = 18 || x = 19 from this equation. My question starts with the reduced terms mod 23 example in the linked question. I'm now trying to understand how to reduce this equation to x = 18 || x = 19. I have come across a few posts and theorems that hint at solutions, but I lack the math skills to connect any of it together. I am a software developer, not a mathematician. So if anyone can walk me through some steps on how to get from the equation to 18 || 19, that would be great! This is a toy example representing a new Elliptic Curve Crypto operation where the actual modulus is on the order of $2^{256}$. So, trying all possible values of x is not practical. WolframAlpha is capable of producing solutions to my large modulo equations in a fraction of a second, so I know they aren't trying all possible values of x. Fermat's Little Theorem seems the most promising so far, but I don't understand how to apply it to this equation. This post describes a solution but unfortunately their example is very basic and not very relatable to my equation. Anything would be helpful here. Steps would be great. Thanks!
Let $$f(x)=x^4 -2x^3 + 5x^2 + 7x + 1\tag{1}$$ be defined over the finite field $\mathbb{F}_{23}$; this is the original polynomial with $21$ reduced to $-2$ modulo $23$. Now check for a linear factor by checking for roots over $\mathbb{F}_{23}=\{0,\pm1,\pm2,\pm3,\pm4,\pm5,\pm6,\pm7,\pm8,\pm9,\pm10,\pm11\}$. We find $f(-4)=f(-5)=0$, so $(x+4)$ and $(x+5)$ are linear factors; equivalently, $x \equiv 19$ and $x \equiv 18 \pmod{23}$ are the roots sought. Now factor $f$ as two quadratics modulo $23$, where $x^2+9x-3 \equiv (x+4)(x+5)$: \begin{align*} f(x)&=(x^2+9x-3)(x^2+ax+b)\\ &=x^4+(9+a)x^3+(9a-3+b)x^2+(9b-3a)x-3b \end{align*} Comparing the coefficients in $(1)$ for the powers of $x$: \begin{array}\\ [x^3:] & -2=9+a\\ [x^2:] & 5=9a-3+b\\ [x:] & 7=9b-3a\\ [const:]& 1=-3b\\ \end{array} with $a$, $b\in\mathbb{F}_{23}$. Note this is a finite field so $-3b=1$ means $-3$ and $b$ are inverses mod $23$, making $b=15$. Now $a=-2-9=-11=12$ giving the factorization $$f(x)=(x^2+12x+15)(x+5)(x+4)$$ with the quadratic factor irreducible over $\mathbb{F}_{23}$ as it has no roots, since the discriminant of $x^2+12x+15$ is $15$, which is not a square modulo $23$.
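For a modulus this small the root search and the factorisation are both trivial to verify by machine; two quartics that agree at all $23$ points of $\mathbb{F}_{23}$ must be equal, since their difference has degree at most $4 < 23$ roots:

```python
p = 23

def f(x):
    # the original quartic; note 21 == -2 (mod 23)
    return (x**4 + 21*x**3 + 5*x**2 + 7*x + 1) % p

roots = [x for x in range(p) if f(x) == 0]
assert roots == [18, 19]  # i.e. x = -5 and x = -4 mod 23

# verify the factorisation f = (x^2 + 12x + 15)(x + 5)(x + 4) over F_23 pointwise
for x in range(p):
    assert f(x) == (x*x + 12*x + 15) * (x + 5) * (x + 4) % p

# the quadratic factor has no roots: its discriminant 15 is a non-residue mod 23
assert all(pow(s, 2, p) != 15 for s in range(p))
```

For a cryptographic-size modulus the brute-force loop is replaced by polynomial factoring algorithms (e.g. Cantor-Zassenhaus style root finding), but the factorisation being checked is the same.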
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 6 }
Cardinality of $A = \varnothing, B = \{ \varnothing \}, C = \{\{\varnothing\}\}$ Given three sets $A = \varnothing, B = \{ \varnothing \}, C = \{\{\varnothing\}\}$ what are the cardinalities of those sets ? Obviously cardinality of $A$ is $0$ and cardinality of $B$ is $1$, but I am not sure about set $C$, because some sources say that cardinality of such set is $2$. Can you please clarify this to me ?
$$A = \varnothing, B = \{ \varnothing \}, C = \{\{\varnothing\}\}$$ The way you have it $B$ and $C$ both have cardinality of $1$ My guess is that you wanted $$C = \{ \varnothing ,\{\varnothing\}\}$$ Which has cardinality $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Left Kan extension: switching K and F If $F: \mathcal{C} \longrightarrow \mathcal{D}$ and $K: \mathcal{C} \longrightarrow \mathcal{E}$ are functors, where $\mathcal{C}$ is small and both $\mathcal{D}$ and $\mathcal{E}$ are cocomplete, how do the left Kan extension of $F$ along $K$ (${\rm Lan}_K(F)$) and the left Kan extension of $K$ along $F$ (${\rm Lan}_F(K)$) relate to one another? (Is it safe to say that they are adjoint, for example?) EDIT: Actually, I asked the question in general but what I had in mind was the special case where $K := y: \mathcal{C} \longrightarrow \widehat{\mathcal{C}}$ is the Yoneda embedding, so any tip to show that ${\rm Lan}_y(F) ⊣ {\rm Lan}_F(y)$ (or that ${\rm Lan}_F(y) \cong N_F := Hom(F(-), -)$) would be helpful as well. Any insight on this would be very much appreciated!
If $y : A \to[A°,Set]$ is the Yoneda embedding, it is a general fact that, given a functor $f : A \to B$, there is the adjunction $$ Lan_yf \dashv Lan_fy $$ This is called the nerve-realization paradigm. As you can see e.g. here, asking that the two functors $F,K$ are related in this way is a rather strong request (in fact, the tentative proof in the answer contains a flaw, and to this day no convincing example, different from Yoneda extensions, has been given).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that $\int_0^1 4 \space\operatorname{li}(x)^3 \space (x-1) \space x^{-3} dx = \zeta(3) $ My mentor tommy1729 wrote $\int_0^1 4 \space \operatorname{li}(x)^3 \space (x-1) \space x^{-3} dx = \zeta(3) $ I wanted to prove it, thus I looked at some methods for computing integrals and also representations of $\zeta(3)$ that might be useful. But nothing was very helpful to me. In particular, the fact that the RHS is so short - just Apéry's constant - was surprising. I expected it longer and more complicated. So I tend to believe that either the integral computation requires many steps, and then finally we get a long expression with a lot of cancellation until we are left with Apéry's constant only, or there is a simple way to get Apéry's constant directly with a trick I missed. In either case it is amazing, I would say. So how to show that $$\int_0^1 4 \space \operatorname{li}(x)^3 \space (x-1) \space x^{-3} dx = \zeta(3) $$ I would like to see different ways to show it. I assume real analysis methods are simpler than complex analysis methods (contour integration in the complex plane). I also wondered if not knowing the RHS in advance would change the difficulty of this question. Also I wonder about $$ \int_0^1 5 \space \operatorname{li}(x)^4 \space (x-1) \space x^{-4} dx = ?? $$
This is not a complete answer, but just a description of two ideas that might help with the evaluation of the integral $$ I \equiv 4 \int \limits_0^1 \left(\frac{\operatorname{li}(x)}{x}\right)^3 (x-1) \, \mathrm{d} x \, . $$ They are based on methods that can be applied to find the easier integral $$ J \equiv \int \limits_0^1 \left(\frac{\operatorname{li}(x)}{x}\right)^2 \, \mathrm{d} x \, . $$ The first approach relies on integration by parts and the series $$ x-1 = \sum \limits_{k=1}^\infty \frac{1}{k!} \ln^k (x) \, , \, x > 0 \, .$$ In order to evaluate $J$ we can use the antiderivative $x \mapsto 1-\frac{1}{x}$ of $x \mapsto \frac{1}{x^2}$ to avoid problems with the singularity of $\operatorname{li}(x)$ at $x = 1$ . We get \begin{align} J &= 2 \int \limits_0^1 \frac{\operatorname{li}(x)}{x} \frac{1-x}{\ln(x)} \, \mathrm{d} x = - 2 \sum \limits_{k=1}^\infty \frac{1}{k!} \int \limits_0^1 \frac{\operatorname{li}(x)}{x} \ln^{k-1} (x) \, \mathrm{d} x\\ &= 2 \sum \limits_{k=1}^\infty \frac{1}{k! k} \int \limits_0^1 \ln^{k-1} (x) \, \mathrm{d} x = 2 \sum \limits_{k=1}^\infty \frac{1}{k! k} (-1)^{k-1} (k-1)! \\ &= 2 \sum \limits_{k=1}^\infty \frac{(-1)^{k-1}}{k^2} = 2 \eta (2) = \zeta(2) = \frac{\pi^2}{6} \, . \end{align} Similarly, we can use the antiderivative $x \mapsto \frac{(x-1)^2}{2 x^2}$ of $x \mapsto \frac{x-1}{x^3}$ to find \begin{align} I &= - 6 \int \limits_0^1 \left(\frac{\operatorname{li}(x)}{x}\right)^2 \frac{(x-1)^2}{\ln(x)} \, \mathrm{d} x \\ &= 6 \sum \limits_{k=1}^\infty \frac{1}{k!} \int \limits_0^1 \operatorname{li}^2 (x) \frac{1-x}{x} \frac{\ln^{k-1} (x)}{x} \, \mathrm{d} x \, . \tag{A} \end{align} We can now integrate by parts once more to obtain at least one term that reduces to a multiple of $\zeta(3)$ as in the simpler case. However, I have not managed to solve the remaining integrals yet.
We could of course use the series again to express the remaining $1-x$ in terms of logarithm powers, but that does not seem to solve the problem. The second suggestion employs the Fourier-Laguerre series $$ \operatorname{li} (x) = - x \sum_{n=0}^\infty \frac{\mathrm{L}_n (-\ln(x))}{n+1} \, , \, x \in (0,1) \, , \tag{B}$$ of the logarithmic integral. It can be proved by deriving a recurrence relation for the coefficients from that of the Laguerre polynomials. Using the substitution $x = \mathrm{e}^{-t}$ and the orthogonality relation of the Laguerre polynomials we immediately obtain $$ J = \sum \limits_{p=0}^\infty \sum \limits_{q=0}^\infty \frac{1}{(p+1)(q+1)} \int \limits_0^\infty \mathrm{L}_p (t) \mathrm{L}_q (t) \mathrm{e}^{-t} \, \mathrm{d} t = \sum \limits_{p=0}^\infty \frac{1}{(p+1)^2} = \zeta(2) = \frac{\pi^2}{6} \, .$$ Similarly, we have $$ I = 4\sum \limits_{p=0}^\infty \sum \limits_{q=0}^\infty \sum \limits_{r=0}^\infty \frac{1}{(p+1)(q+1)(r+1)} \int \limits_0^\infty \mathrm{L}_p (t) \mathrm{L}_q (t) \mathrm{L}_r (t) (1- \mathrm{e}^{-t}) \mathrm{e}^{-t} \, \mathrm{d} t \, .$$ General formulas for integrals involving three Laguerre polynomials appear to be known (see this paper or this one for a generalisation). I do not know whether they are nice enough to reduce the triple series to a representation of $\zeta(3)$ though. Remark: After doing some numerical calculations I now suspect that the triple series diverges. This is probably due to the fact the original series $(\mathrm{B})$ only converges in $L^2$, so it cannot be used here. For the simpler integral everything works out though. It is of course possible to combine the two methods by applying the Laguerre series $(\mathrm{B})$ in equation $(\mathrm{A})$. I do not know if these ideas can be used to get the final result, but maybe they can help someone else to find a way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
The conditions for parameterisation I have proved that $\gamma(t) = (1-\cos t, \tan t-\sin t)$ satisfies the equation for the conchoid $(x-1)^2(x^2+y^2)=x^2$. But is there any reason why this is not a parameterisation? How do I have to restrict the parameter $t$ to get a parameterisation for each branch of the curve? The graph of the conchoid is produced here, which is the union of two disjoint connected curves. What I can think about is the domain of $t$. In this problem, considering $\sin t$, $\cos t$ and $\tan t$, $t$ shouldn't be equal to $\frac{\pi}{2}+k\pi$, where $k$ is an integer. Hence $\cos t \neq 0$, so $1-\cos t\neq 1$, while $\tan t-\sin t$ tends to positive infinity and negative infinity from different directions, which seems to match the graph of the given conchoid. I am a bit confused here. Thanks in advance. :)
Usually, by a plane curve we mean a continuous function $\gamma \colon I \to \mathbb R^2$ where $I \subseteq \mathbb R$ is an interval. If $\gamma$ is defined by $\gamma(t) = (1 - \cos t, \tan t - \sin t)$, there are two natural choices for the interval: * *if we choose $I_1 = \left ( - \frac \pi 2, \frac \pi 2 \right )$ we obtain the branch of the conchoid containing the singular point $(0, 0)$, because as $t$ goes from $- \frac \pi 2$ to $0$ the abscissa $(1 - \cos t)$ goes from $1$ to $0$, and then as $t$ goes from $0$ to $\frac \pi 2$ the abscissa goes from $0$ to $1$; *if we choose $I_2 = \left ( \frac \pi 2, \frac 3 2 \pi \right )$ by a similar reasoning we obtain the other branch. Any other interval having length $\pi$ will give you the same points given by $I_1$ or $I_2$ because $\gamma$ has period $2 \pi$ and as you already said $\frac \pi 2 + k \pi$ must be excluded from the domain for any $k \in \mathbb Z$.
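Both branches can be confirmed numerically: sampling $t$ strictly inside $I_1$ and $I_2$ (thus avoiding $\frac\pi2+k\pi$), every point $\gamma(t)$ satisfies the implicit equation. A quick sketch (grid density is an arbitrary choice):

```python
import math

def gamma(t):
    return (1 - math.cos(t), math.tan(t) - math.sin(t))

# sample the interiors of I1 = (-pi/2, pi/2) and I2 = (pi/2, 3*pi/2)
for k in range(1, 200):
    for t0 in (-math.pi / 2, math.pi / 2):   # left endpoints of I1 and I2
        t = t0 + k * math.pi / 201           # stays strictly inside the interval
        x, y = gamma(t)
        # the implicit conchoid equation (x-1)^2 (x^2 + y^2) = x^2
        assert abs((x - 1)**2 * (x * x + y * y) - x * x) < 1e-6
```

Near the excluded values of $t$ the second coordinate blows up, which matches the two unbounded ends of each branch.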
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that $\mathbb{Z}_{12} $ is isomorphic to a subgroup of $S_7$? How to show that $\mathbb{Z}_{12}$ is isomorphic to a subgroup of $S_7$? My attempt: Using Cayley's theorem one can conclude $\mathbb{Z}_{12}$ is isomorphic to a subgroup of $S_{12}$. Or, if I use the generalised Cayley theorem, I can show that there is a homomorphism $\mathbb{Z}_{12}\rightarrow S_{\mathbb{Z}_{12}/H}$ where $H$ is a subgroup of order $3$ or $2^2$; therefore we have a group homomorphism from $\mathbb{Z}_{12}$ to $S_3$ or $S_4$. But these maps have a non-trivial kernel, namely $H$ itself. Therefore, I have not been able to conclude the required statement. Any help is appreciated.
In this case, as Nicky Hekster pointed out in the top answer, you know the general structure of elements in $S_7$ so finding one of order $12$ is pretty easy. I want to add that a more general procedure to look for a copy of $\mathbb{Z}_{12}$ in a group $G$ is to consider that $\mathbb{Z}_{12} = \mathbb{Z}_4 \times \mathbb{Z}_3$. So you can look for an element of order $3$ (computing the Sylow $3$-subgroup) and then seeing if the centraliser of this element has an element of order $4$ (or, the dual procedure, looking for a copy of $\mathbb{Z}_4$ in the Sylow $2$-subgroup of $G$ and then computing the order of its centraliser, if $3$ divides it you are done). Of course, the difficulty in this lies in the fact that computing Sylow subgroups or centralisers might not be easy, but in some cases it is. (First example I can think of, if $G=S_4 \times S_4$ you can take $((1234), 1)$ as the element of order $4$ and it is immediate that $(1,(123))$ is contained in its centraliser).
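For the concrete case of $S_7$, the cycle-type argument is easy to check by machine: a disjoint $3$-cycle times a $4$-cycle has order $\operatorname{lcm}(3,4)=12$, and its powers form a cyclic subgroup of order $12$, i.e. a copy of $\mathbb{Z}_{12}$. A small sketch with permutations as tuples:

```python
from math import lcm

def compose(p, q):
    """(p o q)(i) = p(q(i)); permutations of {0,...,6} as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

identity = tuple(range(7))
# disjoint 3-cycle (0 1 2) and 4-cycle (3 4 5 6)
sigma = (1, 2, 0, 4, 5, 6, 3)

# collect the powers of sigma until we return to the identity
powers = [identity]
while True:
    nxt = compose(sigma, powers[-1])
    if nxt == identity:
        break
    powers.append(nxt)

assert len(powers) == lcm(3, 4) == 12  # sigma has order 12
assert len(set(powers)) == 12          # <sigma> is cyclic of order 12, i.e. Z_12
```

This also shows why $12$ is the largest order of an element of $S_7$: the cycle types of a $7$-element permutation are the partitions of $7$, and $3+4$ maximises the lcm.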
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is a non-euclidean-norm preserving map necessarily linear? Let $V$ and $W$ be two normed vector spaces and let $f:V \rightarrow W$ be a norm preserving map. I know that if both norms correspond to some inner product then $f$ is necessarily linear, but I can't find the answer for the more general case of normed vector spaces. I suspect the answer is no, so I tried to come up with a counter-example involving "pseudo" rotations along non-euclideanly-spherical paths centered at the origin of $\mathbb R^2$, unsuccessfully. I'd most importantly like an answer that does not assume $f$ to be surjective. However, any additional information about that particular case would be appreciated as well.
This is the example from J. A. Baker's paper for easy access. Define $f:\mathbb{R}^2\to\mathbb{R}$ by $$f(x,y)=\begin{cases}y,&\text{ if }0\leq y\leq x\text{ or }x\leq y\leq 0\\x,&\text{ if }0\leq x\leq y\text{ or }y\leq x\leq 0\\0,&\text{ otherwise}\end{cases}$$ Then $f$ satisfies * *$f(tx,ty)=tf(x,y)$ *$|f(x,y)-f(u,v)|\leq\sqrt{(x-u)^2+(y-v)^2}$ *$f$ is not linear Put in $\mathbb{R}^2$ the usual norm, and $\mathbb{R}^3$ with the norm $$\|(x,y,z)\|=\max\left(\sqrt{x^2+y^2},|z|\right)$$ Define $F:\mathbb{R}^2\to\mathbb{R}^3$ by $F(x,y)=(x,y,f(x,y))$. Then $F$ is a non-affine isometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2866940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Minimizing properties of geodesics problem in do Carmo's book I'm reading do Carmo's book Riemannian Geometry, and in the section on minimizing properties of geodesics there is this proposition. I don't understand why $\langle\frac{\partial f} {\partial r}, \frac{\partial f} {\partial t} \rangle=0$. Can someone fill in the details? This is the Gauss lemma that he is talking about. So my question becomes: how did he apply this lemma in order to obtain that inner product equal to zero?
Note that $f(r,t) = \exp(rv(t))$, hence by the chain rule$$ \partial_r f(r_0,t_0)=(d\exp_p)_{r_0v(t_0)}[v(t_0)] $$ and $$ \partial _t f(r_0,t_0)=(d\exp_p)_{r_0v(t_0)}[r_0\dot v(t_0)]. $$ Hence $$ \langle \partial_r f(r_0,t_0)\vert \partial_t f(r_0,t_0)\rangle = \langle (d\exp_p)_{r_0v(t_0)}[v(t_0)]~\vert~ (d\exp_p)_{r_0v(t_0)}[r_0\dot v(t_0)]\rangle\\ =r_0^{-1}\langle (d\exp_p)_{r_0v(t_0)}[r_0v(t_0)]~\vert~ (d\exp_p)_{r_0v(t_0)}[r_0\dot v(t_0)]\rangle \overset{\text{Gauß}}{=} r_0^{-1} \langle r_0 v(t_0) \vert r_0\dot v(t_0) \rangle. $$ The latter is zero as it is a multiple to the derivative of $t\mapsto \langle v(t)\vert v(t) \rangle \equiv 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing the diagonal on $\mathbb{R}^2$ is closed $D=\{{(x,x)|x\in\mathbb{R}}\}$, I want to show $D$ is closed with the definition: $A$ is closed if and only if $\mathbb{R}^2\setminus A$ is open. So basically what I really want to show is that $A=\{{(x,y)\in \mathbb{R}^2|x\neq y\}}$ is open. My attempt so far was taking $a=(x,y)\in A$ and an open ball $B(a,r)$ with $r=\frac {|x-y|} {2\sqrt 2}$, which is the distance between $a$ and the line $x-y=0$, divided by two. It is easy to see geometrically why a ball with this radius is contained in $A$, but I wasn't able to complete the proof formally, using algebra.
Your approach is fine. In fact (as you can see geometrically), there's no need to divide by $2$. In other words, you can take $r=\frac{|x-y|}{\sqrt2}$. I will prove that the open disk $D\bigl((x,y),r\bigr)$ contains no element of $D$, that is, I will prove that if $z\in\mathbb R$, then $\bigl\|(x,y)-(z,z)\bigr\|\geqslant r$. In fact,\begin{align}\bigl\|(x,y)-(z,z)\bigr\|<r&\iff(x-z)^2+(y-z)^2<\frac{(x-y)^2}2\\&\iff2\bigl((x-z)^2+(y-z)^2\bigr)<(x-y)^2\\&\iff x^2+y^2+4z^2+2xy-4xz-4yz<0\\&\iff(x+y-2z)^2<0,\end{align}which is impossible, of course.
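As a numerical sanity check of the geometric claim (illustrative, not part of the proof): the distance from $(x,y)$ to the diagonal is attained at the point $(z,z)$ with $z=(x+y)/2$, and equals $|x-y|/\sqrt2$.

```python
import math

x, y = 3.0, -1.0                  # a point off the diagonal
r = abs(x - y) / math.sqrt(2)     # claimed distance to the line y = x
z = (x + y) / 2                   # closest diagonal point is (z, z)
assert math.isclose(math.hypot(x - z, y - z), r)
```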
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
Is it always necessary to prove the 'iff' in both directions? I have an exercise in my course, which asks to prove $A \cup B = B \iff A \subseteq B$. My proof is: Let $A \nsubseteq B$, that is, $\exists a \in A : a \notin B$. Then from the definition follows $a \in A \cup B = B$, in contradiction to the initial assertion. $\square$ Usually I see that it's much more rigorous to prove $\implies$, then $\impliedby$, but I'm not sure, if that's only an option or a strict rule — and specifically if my proof does the job in both directions or there are some gaps that I don't recognize. My script suggests a really long 10+ lines proof using the 'both directions style', but I myself don't really see this necessity at least here. This being said, is it always a must to prove the 'iff' in both directions?
It appears that you're trying (without making it completely clear) to prove $A\cup B=B \Leftrightarrow A\subseteq B$ by showing that $A\cup B=B$ together with $A\not\subseteq B$ leads to a contradiction. If you think that is a complete proof, how about this one, by the same principle: Claim: For any integer $n>2$, $$ n\text{ is prime} \iff n\text{ is odd} $$ Proof: Assume that $n$ is prime and that $n$ is not odd. Then $n$ is even, so $n=2k$ for some $k$. But then $2$ divides $n$, which is a contradiction with $n$ being prime, since $n>2$. $\Box$ This seems to follow exactly the same logic as your proof -- namely, considering $P\Leftrightarrow Q$ to be proved because I have shown that $\neg Q$ and $P$ together lead to a contradiction. But there are odd numbers that are not prime -- such as $9$ -- so the claim is not actually true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
GRE combinatorics question about counting the no. of sets questions satisfying a certain requirement. From ETS Major Field Test in Mathematics A student is given an exam consisting of 8 essay questions divided into 4 groups of 2 questions each. The student is required to select a set of 6 questions to answer, including at least 1 question from each of the 4 groups. How many sets of questions satisfy this requirement? I'm thinking $$\binom{2}{1}^4 \binom{4}{2}$$ because we have to pick 1 from each of the 4 groups of 2 and then from the remaining 4 questions we pick 2.
The student can choose two questions to omit, not both in the same group. There are $8$ options for the first question, and then there are $6$ options left for the second question. Of course the order in which the questions are chosen doesn't matter, so we get $$\frac{8\times6}{2}=24$$ options. Alternatively, the student can choose two groups to omit a question from, and then one question from each group. This way we get $$\binom{4}{2}\binom{2}{1}\binom{2}{1}=6\times2\times2=24$$ options. Finally, in line with your own approach, we can first choose one question from each group. We can indeed do so in $16$ ways. Then we can choose two more questions, indeed in $6$ ways. But now we have overcounted; we reach the same set of questions if we had first chosen other questions in the two groups we ended up choosing both questions from. In how many ways could we have chosen questions from these two groups? A total of $4$ ways. So by this method we have counted each set of questions $4$ times. Hence the total number of options is $\frac{96}{4}=24$.
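A brute-force enumeration (a quick Python sketch, not needed for the argument) confirms the count of $24$:

```python
from itertools import combinations

groups = [0, 0, 1, 1, 2, 2, 3, 3]  # question i belongs to group groups[i]
count = sum(
    1
    for chosen in combinations(range(8), 6)
    if {groups[i] for i in chosen} == {0, 1, 2, 3}  # at least one per group
)
print(count)  # -> 24
```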
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Sum of squared eigenvalues of $A$ equals $\operatorname{tr}(A^2)$? Is the following always true: $$\sum_i \lambda_i^2 = \operatorname{tr}(A^2)$$ where $\lambda_i$ are the eigenvalues of $A$. If it's not true in general, then under what conditions is it true? Is it always true if $A$ is square and positive semidefinite? Please provide a proof or reference. Thanks!
tr $A^2 $ = tr $AA$ = tr $UDU^{-1}UDU^{-1} $ = tr $UD^2 U^{-1}$ = tr $U^{-1}UD^2 =$ tr $ D^2$ = $\sum \lambda_i^2$. (This computation assumes $A$ is diagonalizable, $A = UDU^{-1}$. In general, take a Schur triangularization $A = UTU^{-1}$ with $T$ upper triangular and the eigenvalues of $A$ on its diagonal; the same cyclic-trace steps give tr $A^2$ = tr $T^2$ = $\sum \lambda_i^2$, since the diagonal entries of $T^2$ are the $\lambda_i^2$.)
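A quick numerical illustration with NumPy (not a proof; the second matrix has complex eigenvalues $\pm i$, whose squares still sum to a real number):

```python
import numpy as np

for A in (np.array([[2.0, 1.0], [0.0, 3.0]]),    # eigenvalues 2, 3
          np.array([[0.0, -1.0], [1.0, 0.0]])):  # eigenvalues i, -i
    eig = np.linalg.eigvals(A)
    assert np.isclose(np.trace(A @ A), np.sum(eig**2).real)
```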
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
$\arg(\overline{z}), \arg(z^2)$ A First Course in Complex Analysis by Matthias Beck, Gerald Marchesi, Dennis Pixton, and Lucas Sabalka Exer 3.42,43 Exer 3.42 multiple-valued: NO in general. $$\arg(1+i)=\frac \pi 4 + 2k \pi$$ $$\arg(1-i)=\frac {-\pi} 4 - 2l \pi$$ Choose $k=1$,$l=17$ to arrive at a contradiction. Exer 3.42 principal: NO, but YES iff $z \notin \mathbb R_{< 0}$ Pf: $$Arg(\overline{z}) = -Arg(z) \iff -Arg(z) \in (-\pi,\pi] \iff Arg(z) \in [-\pi,\pi) \iff Arg(z) \in (-\pi,\pi) \iff z \notin \mathbb R_{< 0}$$ QED Exer 3.43 multiple-valued: NO in general. For $z = 0$, yes vacuously. For $z \ne 0$, Observe that $\ln|z^2| = \ln(|z|^2) = 2\ln|z|$ $$\therefore, \ln(z^2) = 2\ln(z) \iff \arg(z^2) = 2\arg(z)$$ Consider $$\arg(1+i)=\frac \pi 4 + 2k \pi$$ $$\arg(2i) = \frac \pi 2 + 4l \pi$$ Choose $k=1$,$l=17$ to arrive at a contradiction. Exer 3.43 single-valued: NO, but YES iff $Arg(z) \in (-\frac \pi 2,\frac \pi 2]$. Pf: For $z = 0$, yes vacuously. For $z \ne 0$, Observe that $Ln|z^2| = Ln(|z|^2) = 2Ln|z|$ $$\therefore, Ln(z^2) = 2Ln(z) \iff Arg(z^2) = 2Arg(z)$$ $$Arg(z^2) = 2Arg(z) \iff 2Arg(z) \in (-\pi,\pi] \iff Arg(z) \in (-\frac \pi 2,\frac \pi 2]$$ QED * *Where have I gone wrong for above solutions? *Which are wrong? I believe they're all right, but I might have missed something (For $z=0$, I guess the relevant statements are vacuously true).
$$(2.1)\qquad \arg(\overline{z}) \equiv -\arg(z) \mod 2 \pi \ \forall z \in \mathbb R_{<0}$$ $$(2.2)\qquad Arg(\overline{z}) \equiv -Arg(z) \mod 2 \pi \ \forall z \in \mathbb R_{<0}$$ $$(2.3)\qquad \arg(\overline{z}) \equiv -\arg(z) \mod 2 \pi \ \forall z \notin \mathbb R_{<0}$$ $$(2.4)\qquad Arg(\overline{z}) \equiv -Arg(z) \mod 2 \pi \ \forall z \notin \mathbb R_{<0}$$ $$(2.5)\qquad \arg(z^2) \equiv 2\arg(z) \mod 2 \pi \ \forall z \in \mathbb R_{<0}$$ $$(2.6)\qquad Arg(z^2) \equiv 2Arg(z) \mod 2 \pi \ \forall z \in \mathbb R_{<0}$$ $$(2.7)\qquad \arg(z^2) \equiv 2\arg(z) \mod 2 \pi \ \forall z \notin \mathbb R_{<0}$$ $$(2.8)\qquad Arg(z^2) \equiv 2Arg(z) \mod 2 \pi \ \forall z \notin \mathbb R_{<0}$$ It seems 2.6 and 2.8 are right by Wiki. What about the others?
(Partial answer for 3.42 below, 3.43 can be worked out in a similar way.)  The following assumes that $z \ne 0$ and $\arg(ab)=\arg(a)+\arg(b)$ was already established for the multi-valued $\arg$. Exer 3.42 multiple-valued: NO in general. YES, in general, since $\arg(z\bar z) = \arg(z)+\arg(\bar z)$, but on the other hand $\arg(z \bar z) = \arg(|z|^2) = 0$ because $|z|^2$ is a positive real number, so $\,\arg(\bar z) = -\arg(z)\,$. $$\arg(1+i)=\frac \pi 4 + 2k \pi$$ $$\arg(1-i)=\frac {-\pi} 4 - 2l \pi$$ Choose $k=1$,$l=17$ to arrive at a contradiction. You don't get to choose $k,l$ here. The multi-valued $\arg$ equality holds if there exists any pair $k,l$ for which it holds, in this case $k=l=0$ for example. Exer 3.42 principal: NO, but YES iff $z \notin \mathbb R_{< 0}$ It looks like this uses the (common) definition of the principal value of $\operatorname{Arg}$ where the range is $(-\pi,\pi]$. In that case the answer is YES, except if $\,\operatorname{Arg}(z)=\pi\,$ (why?). If using the other (less common) definition for the principal value of $\operatorname{Arg}$ where the range is $[0,2 \pi)$ then the answer is of course NO, except if $\,\operatorname{Arg}(z)=0\,$ (why?).
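The principal-value failure in Exer 3.43 (which, as noted above, works out the same way) is easy to see numerically; here is a quick illustration using Python's `cmath.phase`, which returns the principal argument:

```python
import cmath, math

z = 1 + 1j                      # Arg(z) = pi/4, inside (-pi/2, pi/2]
assert cmath.isclose(cmath.phase(z**2), 2 * cmath.phase(z))

w = -1 + 1j                     # Arg(w) = 3*pi/4, outside (-pi/2, pi/2]
print(cmath.phase(w**2), 2 * cmath.phase(w))
# -> -1.5707... vs 4.7123...: they differ by exactly 2*pi
assert math.isclose(cmath.phase(w**2) + 2 * math.pi, 2 * cmath.phase(w))
```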
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does there exist an ideal sheaf $\mathcal F$ on some affine scheme $X$ such that $\mathcal F$ is not quasi-coherent? Please give an example such that $X$ is an affine scheme, and there exists an ideal sheaf $\mathcal F$ on $X$ such that $\mathcal F$ is not quasi-coherent.
It often helps to translate the problem into commutative algebra. I will define ideal sheaves on a scheme $X$ to be an $\mathcal{O}_X$ module $\mathcal{I}$ such that for all open sets $U\subset X$, $\mathcal{I}(U)$ is an ideal of $\mathcal{O}_X(U)$. So let $\mathcal{I}$ be an ideal sheaf on $X=\operatorname{Spec}A$. Recall that quasicoherence can be checked by showing that the natural map $\Gamma(\operatorname{Spec}A,\mathcal{I})_f\rightarrow \Gamma(\operatorname{Spec}A_f,\mathcal{I})$ is an isomorphism for all $f$. We clearly have $\Gamma(\operatorname{Spec}A,\mathcal{I})_f=I_f$ where $I$ is some ideal of $A$. Also $\Gamma(\operatorname{Spec}A_f,\mathcal{I})=J$ where $J$ is some ideal of $A_f$. Can you find an ideal $J$ of $A_f$ that doesn't come from localization of an ideal $I$ of $A$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $f(x,y)=\dfrac{\sin(x^4-y^4)}{x^2+y^2}$ is continuous on $\Bbb{R}^2$ My task is to show that $$f(x,y) = \begin{cases} \dfrac{\sin(x^4-y^4)}{x^2+y^2}, & \text{if $(x,y) \ne (0,0)$} \\[2ex] 0, & \text{else} \end{cases}$$ is continuous on $\Bbb{R}^2$. As a composition of continuous functions away from the origin, it is enough to show that $f(x,y)$ converges to $0$ for $(x,y) \rightarrow (0,0)$. I've started with the following: $$\tag{$\forall(x,y)\ne(0,0)$}x = \cos(\phi)\cdot r$$$$y = \sin(\phi)\cdot r$$ $$f(x,y) = f(\phi, r) = \dfrac{\sin(r^4(\cos^4(\phi)-\sin^4(\phi)))}{r^2(\cos^2(\phi)+\sin^2(\phi))}$$ $$=\dfrac{\sin(r^4(\cos^4(\phi)-\sin^4(\phi)))}{r^2}$$ With $$\lim_{(x,y)\rightarrow(0,0)}f(x,y)= \lim_{r\rightarrow 0}f(\phi, r)$$ It can be seen that the term converges to $0$ for $r\rightarrow 0$, but I wanted to show this mathematically: $$\lim_{r\rightarrow 0}f(\phi, r) = \lim_{r\rightarrow 0}\dfrac{d f(\phi, r)}{\color{red}{d?}} = \color{red}{?}$$ My question is: with respect to which variable would I need to take the derivative in order to do this in a mathematically correct way? I am very happy for any kind of help or advice. Greetings, Finn
$|f(x,y)| \le \frac{|x^4-y^4|}{x^2+y^2} \le \frac{x^4+y^4}{x^2+y^2} \le x^2+y^2$. Can you proceed ?
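Since this chain of bounds is the whole argument, here is a quick numerical illustration (not a proof) that $|f(x,y)| \le x^2+y^2$ on random points near the origin:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 10_000))   # random points, a.s. not the origin
f = np.sin(x**4 - y**4) / (x**2 + y**2)
# |sin t| <= |t| gives |f| <= (x^4 + y^4)/(x^2 + y^2) <= x^2 + y^2:
assert np.all(np.abs(f) <= x**2 + y**2 + 1e-12)
```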
{ "language": "en", "url": "https://math.stackexchange.com/questions/2867939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
how can I calculate the probability to get triples or better when throwing n 6-sided dice? I've been banging my head on a wall with this question. I'm designing a game and would like to implement a loot system inspired by a game called "Vermintide" where players roll a certain number of dice and gain loot according to the result. I want my players to get rewards depending on the result of the dice. but I need to evaluate the quality of items, and the number of dice they get to throw based on the probability. Essentially if they get doubles they get a standard item, if they get triples they get a magic item etc... I heard of the "birthday problem" and "multinomials" but most of what I could find about it seemed to be very particular cases (I found birthday problem for 2 or more but not for 3 or more, which seems much less trivial). Is there a smart way to go about this problem which I first thought was going to be trivial, and the more I search the more it seems complicated.
Hint You can compute the probability of not getting a triple. The number $f_n$ of possibilities of not getting triples in $n$ throws is: $$f_1 = \binom{6}{1}\\ f_2 = \binom{6}{2}2!+\binom{6}{1}\\ f_3=\binom{6}{3}3!+\binom{6}{1}\binom{5}{1}\frac{3!}{2!}\\ f_4=\binom{6}{4}4!+\binom{6}{1}\binom{5}{2}\frac{4!}{2!}+\binom{6}{2}\frac{4!}{2!2!}\\ f_5=\binom{6}{5}5!+\binom{6}{1}\binom{5}{3}\frac{5!}{2!}+\binom{6}{2}\binom{4}{1}\frac{5!}{2!2!}\\ f_6=\binom{6}{6}6!+\binom{6}{1}\binom{5}{4}\frac{6!}{2!}+\binom{6}{2}\binom{4}{2}\frac{6!}{2!2!}+\binom{6}{3}\frac{6!}{2!2!2!}\\ f_7=\binom{6}{1}\frac{7!}{2!}+\binom{6}{2}\binom{4}{3}\frac{7!}{2!2!}+\binom{6}{3}\binom{3}{1}\frac{7!}{2!2!2!}\\ f_8=\binom{6}{2}\frac{8!}{2!2!}+\binom{6}{3}\binom{3}{2}\frac{8!}{2!2!2!}+\binom{6}{4}\frac{8!}{2!2!2!2!}\\ f_9=\binom{6}{3}\frac{9!}{2!2!2!}+\binom{6}{4}\binom{2}{1}\frac{9!}{2!2!2!2!}\\ f_{10} = \binom{6}{4}\frac{10!}{2!2!2!2!}+\binom{6}{5}\frac{10!}{2!2!2!2!2!}\\ f_{11}=\binom{6}{5}\frac{11!}{2!2!2!2!2!}\\ f_{12}=\binom{6}{6}\frac{12!}{2!2!2!2!2!2!}$$ For $k\ge 13$ we have $f_k=0$. The number of all possible throws $\omega_n$ is: $$\omega_n=6^n$$ Now you can compute the probability of not getting a triple: $$p_n=\frac{f_n}{\omega_n}$$ and the probability of getting a triple: $$q_n=1-p_n$$
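The counts above can be verified by brute force for small $n$; the formulas give $f_3 = 210$ and $f_4 = 1170$, which the following Python sketch reproduces by direct enumeration:

```python
from itertools import product
from collections import Counter

def no_triple_count(n, faces=6):
    """Count length-n outcomes in which no face appears 3 or more times."""
    return sum(
        1
        for roll in product(range(faces), repeat=n)
        if max(Counter(roll).values()) <= 2
    )

for n, expected in [(3, 210), (4, 1170)]:
    assert no_triple_count(n) == expected
    print(n, 1 - no_triple_count(n) / 6**n)  # probability of a triple or better
```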
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How do I find the minimum of this function? This might seem trivial to some of you, but I can't for the life of me figure out how to solve this. $$\underset{x}\arg \min (x - b)^T Ax$$ $$x \in \mathbb{R^n}$$ We may assume A to be invertable, but it is not symmetric. My idea was to calculate the first and second derivative. I know that $\frac{dx^T}{dx} = (\frac{dx}{dx})^T$, but when I try to apply the chain rule, I get $$\frac{d}{dx} = Ax + (x-b)^Tx$$ which doesn't make sense, as it's a vector plus a scalar. Even if there is another way to find the x for which the function is minimal, I am now more interested in how to derive this kind of formula.
You can rewrite it to a standard quadratic program and use corresponding methods as follows: $(x-b)^T A x = x^T A x - b^T A x = x^T A x - c^T x$ for $c := A^T b$. Your method can work too, but your derivative calculation was wrong; since $A$ is not symmetric, the product rule gives $$ \frac{d}{dx} (x-b)^T A x = (x-b)^T A + x^T A^T = 0 \Leftrightarrow (A + A^T) x = A^T b. $$ Note that this stationary point is the minimizer only if the symmetric part $A + A^T$ is positive definite; otherwise the objective is unbounded below and no minimum exists.
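A quick numerical sanity check of the minimizer (a sketch; it assumes the symmetric part $A + A^T$ is positive definite, so the stationary point $x^* = (A+A^T)^{-1}A^T b$ is the global minimum):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [-1.0, 2.0]])   # invertible, not symmetric
b = np.array([1.0, 2.0])

def g(x):
    return (x - b) @ A @ x

# Stationary point of g: (A + A^T) x = A^T b
x_star = np.linalg.solve(A + A.T, A.T @ b)

# A + A^T is positive definite here, so x_star should beat nearby points.
for _ in range(100):
    x = x_star + 0.1 * rng.standard_normal(2)
    assert g(x) >= g(x_star) - 1e-12
```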
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Probability that a sum of uniformly distributed random variables is large Problem Let $\ell_1 \le \ell_2 \le \dots \ell_n$ be nonnegative real numbers, and $S$ a nonnegative real number that is smaller than the sum of the $\ell_i$. Suppose that for $i = 1, 2, \dots, n$, a number $a_i$ is picked from the interval $[0, \ell_i]$ uniformly at random. What is the probability that $$a_1 + a_2 + \dots + a_n \ge S\text{ ?}$$ Progress If $S > \ell_2 + \ell_3 + \dots + \ell_n$, it seems that the answer is just $$\frac{\left(\ell_1 + \ell_2 + \dots + \ell_n - S \right)^n}{n!\cdot \ell_1\ell_2\cdots \ell_n}.$$ I got this by computing the volume of the associated region, which in this case forms a simplex. I'm not sure what the answer is in the general case however. If there isn't a nice closed form, I'd still like to find an algorithmic approach that could determine the answer quickly.
It is a little easier to think about $\mathbb P( a_1+\dots+a_n\le S)$, then subtract from $1$. It turns out that $$P( a_1+\dots +a_n\le S)=\frac{1}{n!\ell_1\cdots \ell_n}\sum_{I\subseteq \{1,\dots,n\}}(-1)^{|I|}\Big(\Big(S-\sum_{i\in I}\ell_i\Big)^+\Big)^n,$$ where the notation $x^+$ means $\max(x,0)$. Essentially, the argument is inclusion-exclusion. We start by considering the volume of the simplex defined by $\{a_i\ge 0,\sum_i a_i\le S\}$. This is $S^n/n!$. However, this simplex might extend outside of the box. For each $j$, the simplex defined by $S_j=\{a_i\ge 0 \forall i,a_j\ge \ell_j ,\sum_i a_i\le S\}$ will be a part of the first simplex extending outside the box, which is nonempty when $S\ge \ell_j$. The volume of this simplex is the same as that of $S_j'=\{a_i\ge 0 \forall i ,\sum_i a_i\le S-\ell_j\}$, which is $(S-\ell_j)^n/n!$. However, for any $j,k$, the two simplexes $S_j$ and $S_k$ we subtracted might actually overlap. Their overlap is the volume of $S_{j,k}=\{a_i\ge 0\forall i,a_j\ge \ell_j,a_k\ge \ell_k,\sum_i a_i\le S\}$. This overlap is nonempty whenever $S\ge \ell_j+\ell_k$, and the volume is $(S-\ell_j-\ell_k)^n/n!$. This volume must be added back in, since it was subtracted out twice when subtracting the singly overflowing simplexes in the last paragraph. Continuing in this fashion, you get the displayed formula. The $(\cdot)^+$ notation takes care of all the casework automatically. There is probably a simpler way to derive this by finding the characteristic function of $a_1+\dots+a_n$ and using the inversion formula, but I haven't quite been able to get that to work out.
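The displayed formula is easy to test against a Monte Carlo simulation; here is a quick sketch (the helper name `p_sum_le` is mine):

```python
import itertools, math, random

def p_sum_le(S, ells):
    """P(a_1 + ... + a_n <= S) via the inclusion-exclusion formula."""
    n = len(ells)
    total = sum(
        (-1) ** r * max(S - sum(subset), 0.0) ** n
        for r in range(n + 1)
        for subset in itertools.combinations(ells, r)
    )
    return total / (math.factorial(n) * math.prod(ells))

ells, S = [1.0, 2.0, 3.0], 3.5
assert math.isclose(p_sum_le(sum(ells), ells), 1.0)  # sanity: probability 1 at the top

random.seed(1)
N = 200_000
hits = sum(sum(random.uniform(0, l) for l in ells) <= S for _ in range(N))
print(p_sum_le(S, ells), hits / N)  # the two estimates should agree closely
```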
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find all entire functions with $\int_{\Bbb C} |f(z)|\, dz= 1$ I know that an entire function with bounded $L^1$ norm is identically $0$, but I do not know how to attack this problem. Does this contradict the fact I stated about entire functions with bounded $L^1$ norm?
There are no such functions. By the area mean value property, for each $w \in \mathbb C$ and every $r>0$, $$ |f(w)| = \left|\frac{1}{\pi r^2}\int_{B(w,r)} f(z)\, dz\right| \le \frac{1}{\pi r^2}\int_{\Bbb C} |f(z)|\, dz = \frac{1}{\pi r^2}. $$ Taking any $r$ with $\pi r^2 \ge 1$ shows that $f$ is bounded. Since $f$ is entire, $f$ is constant, by Liouville's theorem. But the only constant function with finite integral over $\mathbb C$ is $f \equiv 0$, and then we cannot have $\int_{\Bbb C} |f(z)|\, dz= 1$. (Letting $r \to \infty$ in the estimate above even gives $f \equiv 0$ directly, without Liouville.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given some $n ∈ ℤ$ what conditions must $v$ satisfy for $n \left \lfloor {v} \right \rfloor $ = $\left \lfloor {n v} \right \rfloor $ I'm probably overthinking this. What constraints must you place on $v\in \mathbb R$ : $n \left \lfloor {v} \right \rfloor $ = $\left \lfloor {n v} \right \rfloor $ if $n$ is an arbitrary integer? I can tell that $v \in ℤ$ works eg. $2 \left \lfloor {3} \right \rfloor = \left \lfloor {2 \times 3} \right \rfloor, $ but I'm wondering if I'm missing a more subtle set of constraints on $v$. Any help (even instructive comments/hints) would go a long way. Thanks for your time.
Here is a powerful result (stronger than needed, though), which can be used to deal with this problem. From the theorem below, you would see that $\{x\}\in\left[0,\dfrac1n\right)$ is exactly the condition for $x\in\mathbb{R}$ to satisfy $$\lfloor nx\rfloor=n\,\lfloor x\rfloor\,,$$ where $n\in\mathbb{Z}_{>0}$ is fixed. Here, $\{x\}$ is the fractional part of a real number $x$. Theorem (Hermite's identity). For each $x\in\mathbb{R}$ and $n\in\mathbb{Z}_{>0}$, $$\lfloor nx\rfloor=\sum_{k=0}^{n-1}\,\left\lfloor x+\frac{k}{n}\right\rfloor\text{ and }\lceil nx\rceil=\sum_{k=0}^{n-1}\,\left\lceil x-\frac{k}{n}\right\rceil\,.$$ For $n\in\mathbb{Z}_{<0}$, use the fact that $\lfloor x\rfloor =-\big\lceil (-x)\big\rceil$ for all $x\in\mathbb{R}$: writing $n=-m$ with $m\in\mathbb{Z}_{>0}$, the equation $\lfloor nx\rfloor=n\,\lfloor x\rfloor$ becomes $\lceil mx\rceil=m\,\lfloor x\rfloor$, and since $\lceil mx\rceil\ge mx\ge m\,\lfloor x\rfloor$, equality forces $mx=m\,\lfloor x\rfloor$, i.e. $\{x\}=0$. So for a given $n\in\mathbb{Z}_{<0}$ only the integers $x$ work. For $n=0$, any $x$ works.
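An exhaustive check of the $n>0$ condition over a grid of exact rationals (illustrative only; `math.floor` is exact on `Fraction`):

```python
import math
from fractions import Fraction

n = 3
# floor(n*x) == n*floor(x) holds exactly when the fractional part is in [0, 1/n):
for x in (Fraction(k, 12) for k in range(-24, 25)):
    frac = x - math.floor(x)
    assert (math.floor(n * x) == n * math.floor(x)) == (frac < Fraction(1, n))
```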
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof that Vitali set is non-measurable. A Vitali set is a subset $V$ of $[0,1]$ such that for every $r\in \mathbb R$ there exists one and only one $v\in V$ for which $v-r \in \mathbb Q$. Equivalently, $V$ contains a single representative of every element of $\mathbb R / \mathbb Q$. The proof I read is in this short article on Wikipedia: https://en.wikipedia.org/wiki/Vitali_set Under "proof", the second to last inequality $1 \leq \sum \lambda (V_k) \leq 3$ is claimed to result from the previous inequality $[0,1] \subset \bigcup V_k \subset [-1,2]$ simply using sigma-additivity. There must be some missing argument to claim that the sum of the measures, although greater than the measure of the union, is still less than the measure of $[-1,2]$. What is the missing argument ?
The sets $V_k$ are pairwise disjoint and there are only countably many of them, so by countable additivity the measure of the union is exactly equal to the sum of the measures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $f$ is proper, lsc, and $\frac{f(x) + f(y)}{2} = f^{**}\left(\frac{x + y}{2}\right) \implies x = y$, is $f$ necessarily convex? Suppose $X$ is a real Hilbert Space and $f : X \to (-\infty, \infty]$ is a lower semicontinuous, proper function. Further, suppose $f$ satisfies the following, for all $x, y \in \operatorname{dom} f$: $$\frac{f(x) + f(y)}{2} = f^{**}\left(\frac{x + y}{2}\right) \implies x = y.$$ Is $f$ necessarily a convex function? Here $^*$ refers to the Fenchel conjugate, and $\operatorname{dom} f$ is the set of points $x \in X$ such that $f(x) \neq \infty$. I know that: * *$f^{**}(x) \le f(x)$ for all $x$ and $f^{**}(x) = f(x)$ for all $x$ if and only if $f$ is convex (and lsc). *In fact, $f^{**}$ is the greatest lsc convex minorant of $f$. *This means that $$\frac{f(x) + f(y)}{2} \ge \frac{f^{**}(x) + f^{**}(y)}{2} \ge f^{**}\left(\frac{x + y}{2}\right)$$ for all $x, y \in \operatorname{dom} f$. *Therefore, $$\frac{f(x) + f(y)}{2} = f^{**}\left(\frac{x + y}{2}\right)$$ implies that $f(x) = f^{**}(x)$ and $f(y) = f^{**}(y)$. *Another consequence is that $f^{**}(\lambda x + (1 - \lambda y)) = \lambda f^{**}(x) + (1 - \lambda)f^{**}(y)$ for all $\lambda \in [0, 1]$. My thoughts: * *Really, I just need to establish that $f^{**}(x) = f(x)$ for all $x$. *Despite biduals showing up both in the premises and the above desired conclusion, there doesn't seem to be a direct path to manipulate one to the other, especially since not every point in $\overline{\operatorname{conv}} \operatorname{dom} f$ can be expressed as $\frac{x+y}{2}$ where $x, y \in \operatorname{dom} f$. *The function $g(x, y) = \frac{f(x)+f(y)}{2} - f^{**}\left(\frac{x + y}{2}\right)$ is not a metric in general, even if $g(x, y) = 0 \implies x = y$. *I get a feeling that Stegall's variational principle might help, for a variety of reasons, but one handy reason is that we may add any linear functional to $f$, without changing $g$. Any thoughts are welcome!
The conjecture seems to be not true. Take $X=\mathbb R$, $$ f(x)=\sqrt{|x|}. $$ Then it holds $f^{**}\equiv 0$. Moreover, if $$ f(x)+f(y) = 2 f^{**}\left(\frac{x+y}2\right)=0, $$ then necessarily $x=y=0$. But $f$ is not convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Prove the following theorem in propositional calculus I have the following Hilbert-style propositional calculus, having the logical connectives $\{\neg,\rightarrow\}$ of arity one and two respectively, and the following axioms: A1: $A\rightarrow(B\rightarrow A)$ A2: $(A\rightarrow(B\rightarrow C))\rightarrow((A\rightarrow B)\rightarrow(A\rightarrow C))$ A3: $(\neg A\rightarrow \neg B)\rightarrow (B\rightarrow A).$ $A,B,C$ are any well-formed formulas (wffs). The only inference rule is modus ponens. I want to prove the following theorem: $\vdash (\neg A\rightarrow A)\rightarrow A$ for any wff $A$. You may use any or all of the following theorems without proof: $\vdash A\rightarrow A$ for any wff $A$. $A\rightarrow B, B\rightarrow C\vdash A\rightarrow C$ for any wffs $A,B,C$. $A\rightarrow (B\rightarrow C), B\vdash A\rightarrow C$ for any wffs $A,B,C$. $\vdash \neg A\rightarrow (A\rightarrow B)$ for any wffs $A,B$. Please avoid using the deduction theorem unless absolutely necessary.
Lemma 1) $\vdash \lnot A \to (A \to \lnot B)$ --- Th.4 (with $\lnot B$ in place of $B$) 2) $\vdash (\lnot A \to (A \to \lnot B)) \to ((\lnot A \to A) \to (\lnot A \to \lnot B))$ --- Ax.2 3) $\vdash (\lnot A \to A) \to (\lnot A \to \lnot B)$ --- from 1) and 2) 4) $\vdash (\lnot A \to \lnot B) \to (B \to A)$ --- Ax.3 5) $\vdash (\lnot A \to A) \to (B \to A)$ --- from 3) and 4) by Th.2 Proof 1) $\vdash (\lnot A \to A) \to ((\lnot A \to A) \to A)$ --- from the Lemma, taking $B := \lnot A \to A$ 2) $\vdash ((\lnot A \to A) \to ((\lnot A \to A) \to A)) \to (((\lnot A \to A) \to (\lnot A \to A)) \to ((\lnot A \to A) \to A))$ --- Ax.2 3) $\vdash ((\lnot A \to A) \to (\lnot A \to A)) \to ((\lnot A \to A) \to A)$ --- from 1) and 2) 4) $\vdash (\lnot A \to A) \to (\lnot A \to A)$ --- from Th.1 5) $\vdash (\lnot A \to A) \to A$ --- from 3) and 4)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2868878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An invisible ghost jumping on a regular hexagon Given a regular hexagon and an invisible ghost at one of the vertices of the hexagon (we don’t know which). We have a special gun that can kill ghosts. In a step we are able to shoot the gun twice (i.e. choose two vertices and see if the ghost is there). After every step, the ghost moves to an adjacent vertex. What is the minimum number of moves required to kill the ghost? I have an example with 6 steps. I am sure this is not the minimum. My friend has an example for 4. So what is the minimum? And can you generalize for a regular $n$-gon? Thanks!
Brute force exhaustion of possible strategies gives two solutions requiring four turns: * *Shoot at $(1,3)$ then $(4,6)$ then $(2,4)$ then $(1,5)$ *Shoot at $(1,3)$ then $(4,6)$ then $(4,6)$ then $(1,3)$ along with reflections and rotations of these basic solutions. There are none requiring three turns. To see that the second solution works, we can use the following method to analyze the locations where there could possibly be a live ghost: * *The ghost is among $1,2,3,4,5,6$. After shooting $(1,3)$, the ghost is among $2,4,5,6$. *The ghost is among $1,3,4,5,6$. After shooting $(4,6)$, the ghost is among $1,3,5$. *The ghost is among $2,4,6$. After shooting $(4,6)$, the ghost is at $2$. *The ghost is among $1,3$. After shooting $(1,3)$, it can't be anywhere.
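The set-based analysis in the last paragraph is easy to mechanize; the following Python sketch verifies both four-turn strategies and, by exhausting all $15^3$ three-turn strategies, that three turns never suffice:

```python
from itertools import combinations, product

def survives(strategy, n=6):
    """Track the set of vertices where a live ghost could still be."""
    alive = set(range(1, n + 1))
    for shots in strategy:
        alive -= set(shots)                       # shoot two vertices
        alive = {w for v in alive                 # survivors step to a neighbour
                   for w in (v % n + 1, (v - 2) % n + 1)}
    return bool(alive)

assert not survives([(1, 3), (4, 6), (2, 4), (1, 5)])  # solution 1
assert not survives([(1, 3), (4, 6), (4, 6), (1, 3)])  # solution 2

pairs = list(combinations(range(1, 7), 2))
assert all(survives(s) for s in product(pairs, repeat=3))  # no 3-turn strategy
```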
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
What is the plot of this implicit function: $|\sin x|^y+|\cos x|^y = 1$ I'm trying to manually plot the following function: $$ |\sin x|^y+|\cos x|^y = 1 $$ My basic approach for implicit functions is to try to express $y$ in terms of $x$ and plot it, or $x$ in terms of $y$ and then plot the inverse. Sometimes it's clear from the first glance if the equations is in some special form (for example a circumference). For the above I couldn't find an explicit expression. I've tried to manipulate the expression in different ways in order to take logarithms and get rid of the $y$ power. It's even harder to get an insight since neither W|A nor desmos is able to plot it. Below is the output from Mathematica which I don't really understand: I'm interested in the ways I could transform the equation above so that it's easier to see what the graph looks like. upd: As pointed in the comments the above graph shows various contours. Below is the one which reflects the initial function: Here is a Mathematica snippet for copy and paste: ContourPlot[Abs[Cos[x]]^y + Abs[Sin[x]]^y == 1, {x, -1, 1}, {y, -1, 10}] Just to be complete I'm adding the final plot from Mathematica (with some discrepancies which I assume are caused by the way Mathematica calculates the values) which reflects the answer by Michael Seifert.
We can see an obvious solution for the contour: if $y = 2$, we have $|\cos x|^2 + |\sin x|^2 = 1$, which is satisfied for all values of $x$. So the line $y = 2$ is part of the solution set. If $y > 2$, then since $0\leq |\cos x| \leq 1$, we have $|\cos x|^y \leq |\cos x|^2$, with equality iff $|\cos x| = 0$ or $|\cos x| = 1$. A similar relation holds for $|\sin x|$. Thus, $$ |\cos x|^y + |\sin x|^y \leq |\cos x|^2 + |\sin x|^2 = 1. $$ Since equality only holds if both $|\cos x|$ and $|\sin x|$ are either 0 or 1, we cannot have $|\cos x|^y + |\sin x|^y = 1$ unless this is so. This occurs when $x = n \pi/2$ for some integer $n$. A similar argument can be made for when $y < 2$; in this case, we have $|\cos x|^y \geq |\cos x|^2$ and similarly for $|\sin x|$. Thus, $x = n \pi/2$ is a solution when $y < 2$ as well. The only exception is that $0^0$ is indeterminate, so we cannot say that the points $x = n \pi/2$, $y = 0$ are part of the contour. Thus, the solution to the problem is the union of the sets $\{y = 2 \}$ and $\{x = n\pi/2, y \neq 0\}$ for $n \in \mathbb{Z}$.
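The claimed solution set is easy to sanity-check numerically (illustrative only):

```python
import numpy as np

f = lambda x, y: np.abs(np.sin(x))**y + np.abs(np.cos(x))**y

# The horizontal line y = 2 works for every x:
xs = np.linspace(-3, 3, 101)
assert np.allclose(f(xs, 2.0), 1.0)

# The vertical lines x = n*pi/2 work for every y != 0 (here n = 1):
ys = np.linspace(0.5, 10, 50)
assert np.allclose(f(np.pi / 2, ys), 1.0)
```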
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gauss elimination. Where did I go wrong? Gaussian elimination with back sub: So my starting matrix: \begin{bmatrix} 1 & -1 & 1 & -1 \\2 & 1 & -3 & 4 \\2 & 0 & 2 & 2 \end{bmatrix} add $-2\times$(first row) to the 2nd and 3rd rows: \begin{bmatrix} 1 & -1 & 1 & -1 \\0 & 3 & -5 & 6 \\0 & 2 & 0 & 4 \end{bmatrix} then add -1(third row) to the 2nd row-> \begin{bmatrix} 1 & -1 & 1 & -1 \\0 & 1 & -5 & 2 \\0 & 2 & 0 & 4 \end{bmatrix} add -2(2nd row) to the third row -> \begin{bmatrix} 1 & -1 & 1 & -1 \\0 & 1 & -5 & 2 \\0 & 0 & 10 & 0 \end{bmatrix} But then this seems to have no solution because $10z = 0$.... ugh EDIT As I was writing this, it occurred to me that $z = 0$, $y = 2$, $x = 1$. Is that right?
I'd use a more systematic method: \begin{align} \begin{bmatrix} 1 & -1 & 1 & -1\\ 2 & 1 & -3 & 4\\ 2 & 0 & 2 & 2 \end{bmatrix} &\to \begin{bmatrix} 1 & -1 & 1 & -1\\ 0 & 3 & -5 & 6\\ 0 & 2 & 0 & 4 \end{bmatrix} &&\begin{aligned} R_2&\gets R_2-2R_1 \\ R_3&\gets R_3-2R_1 \end{aligned} \\ &\to \begin{bmatrix} 1 & -1 & 1 & -1\\ 0 & 1 & -5/3 & 2\\ 0 & 2 & 0 & 4 \end{bmatrix} && R_2\gets\tfrac{1}{3}R_2 \\ &\to \begin{bmatrix} 1 & -1 & 1 & -1\\ 0 & 1 & -5/3 & 2\\ 0 & 0 & 10/3 & 0 \end{bmatrix} && R_3\gets R_3-2R_2 \\ &\to \begin{bmatrix} 1 & -1 & 1 & -1\\ 0 & 1 & -5/3 & 2\\ 0 & 0 & 1 & 0 \end{bmatrix} && R_3\gets\tfrac{3}{10}R_3 \\ &\to \begin{bmatrix} 1 & -1 & 0 & -1\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 0 \end{bmatrix} && \begin{aligned} R_2 &\gets R_2+\tfrac{5}{3}R_3 \\ R_1&\gets R_1-R_3\end{aligned} \\ &\to \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 0 & 2\\ 0 & 0 & 1 & 0 \end{bmatrix} && R_1\gets R_1+R_2 \end{align} The solution, which is explicit when the RREF is reached, is \begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix}
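As a sanity check on the arithmetic, the solution read off from the RREF can be substituted back into the original system (a small Python sketch):

```python
A = [[1, -1, 1],
     [2, 1, -3],
     [2, 0, 2]]
b = [-1, 4, 2]
x = [1, 2, 0]  # solution read off from the reduced row echelon form

# residual of each equation: A x - b, which should be all zeros
residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
assert residual == [0, 0, 0]
```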
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Equipotent Sets We know by definition that if a bijection between two sets A and B exists, then A and B are equivalent. The book I was reading took the function $f:\mathbb{Z}\to\mathbb{N}$, $$f(x)=\begin{cases}-2x, & x<0\\ 2x+1, & x\geq 0\end{cases}$$ which is a bijection (it can be easily proven), but I'm a little confused. How can $\mathbb{Z}$ and $\mathbb{N}$ be equivalent sets if $\mathbb{N}$ is a subset of $\mathbb{Z}$?
One immediate (though rather dangerous! .. see paragraph below) way to wrap your mind around this is to consider the fact that both sets are of infinite size. Yes, one is a strict subset of the other, but they are both infinite ... so maybe it's a little easier to digest that way. Then again, the really surprising thing about cardinality is that not every two infinite sets have the same cardinality .. that there are different 'kinds' or 'degrees' of infinity, if you want. As it so happens, $\mathbb{N}$ and $\mathbb{Z}$ are of the 'same' kind though; in both cases, you can put all the elements in a list ... something which is not possible for the real numbers. However, it's exactly this fact that makes the notion of cardinality and equipotence an interesting and useful notion!
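The bijection from the question can also be checked mechanically on a finite window of $\mathbb{Z}$ (a Python sketch; here $\mathbb{N}$ is taken to be the positive integers $\{1, 2, 3, \dots\}$):

```python
def f(x):
    # the book's map: -2x for x < 0, 2x + 1 for x >= 0
    return -2 * x if x < 0 else 2 * x + 1

window = range(-50, 50)
image = [f(x) for x in window]

# negatives hit the evens 2..100, nonnegatives hit the odds 1..99
assert len(set(image)) == len(image)      # injective on this window
assert set(image) == set(range(1, 101))   # exactly the integers 1..100
```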
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Notation of symmetric sum notation When you use the symmetric sum notation, for example, $$\sum_\text{sym}abc+a$$ if there are 3 variables, then does abc count once, 3 times or 6 times? I am confused about repetitions of the same expression in a symmetric sum notation.
We have that $$\sum_\mathrm{sym}Q(x_i)=\sum_\sigma Q(x_{\sigma(i)})$$ for all permutations of $1, \ldots , n$. Therefore it should be $$\sum_\text{sym}abc+a=Q(a,b,c)+Q(a,c,b)+Q(b,a,c)+Q(b,c,a)+Q(c,a,b)+Q(c,b,a)=$$ $$=2abc+2a+2abc+2b+2abc+2c=6abc+2(a+b+c)$$
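This expansion is easy to verify by brute force over all six permutations (a Python sketch with $Q(a,b,c) = abc + a$):

```python
from itertools import permutations

def Q(a, b, c):
    return a * b * c + a

def sym_sum(a, b, c):
    # sum Q over all 3! = 6 permutations of (a, b, c)
    return sum(Q(*p) for p in permutations((a, b, c)))

for (a, b, c) in [(1, 2, 3), (2, 5, 7), (-1, 4, 0)]:
    assert sym_sum(a, b, c) == 6 * a * b * c + 2 * (a + b + c)
```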
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
why radians can be converted to reals in calculus? Consider this integral: $$ \int \sin^2x dx = \frac x2 - \frac {\sin2x}4 + C $$ Note the first term $\frac x2$ is a real as opposed to radian and can, in fact, be substituted with a real number when taking the definite integral. To make the statement more clear, introduce trigonometric derivatives in degree form: $$ \frac {d}{dx} \sin^\circ x = \frac \pi {180} \cos^\circ x $$ However, this does not change the first term of the integral... $$ \int \sin^{\circ2}xdx = \int \frac 12 - \frac {\cos^\circ2x}2 = \frac x2 - \frac {180}\pi \times \frac {\sin^\circ2x}4 + C $$ Then in this context, what is $\frac x2$, real or radian?
Your distinction between "reals" and "radians" is not a meaningful one. Radians are a unitless measurement, so "$x$ radians" is, in fact, just the real number $x$ (understood in a particular context---that of angles). Now, $\sin(x)$ and $\sin^\circ(x)$ are both functions from $\mathbb{R}$ to $\mathbb{R}$, but they are DIFFERENT functions! Notice that $\sin(x)$ has period $2\pi$, but $\sin^\circ(x)$ has period 360. Yes, a term of $x/2$ appears in the antiderivative of both and there is a different "scaling" factor in front of the second term (but again, the second term is different between the two examples because $\sin(x)$ is not the same function as $\sin^\circ(x)$). But this doesn't mean anything a priori---the functions are different, so their integrals will be different. Of course, the form of the antiderivative is similar because the two functions are related by $\sin^\circ(x) = \sin(180x/\pi)$. Thats all that's really going on.
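The relation $\sin^\circ(x) = \sin(\pi x/180)$ makes both points concrete: the two functions have different periods, and the degree version picks up a factor of $\pi/180$ on differentiation (a Python sketch using a finite-difference check, which is only an approximation):

```python
import math

def sin_deg(x):
    # the "degree sine": a different function R -> R than math.sin
    return math.sin(math.pi * x / 180)

# Different periods: 2*pi for sin, 360 for sin_deg.
assert abs(math.sin(1.0) - math.sin(1.0 + 2 * math.pi)) < 1e-12
assert abs(sin_deg(1.0) - sin_deg(1.0 + 360)) < 1e-12

# d/dx sin_deg(x) = (pi/180) * cos_deg(x), checked by central differences.
h, x = 1e-6, 10.0
fd = (sin_deg(x + h) - sin_deg(x - h)) / (2 * h)
assert abs(fd - (math.pi / 180) * math.cos(math.pi * x / 180)) < 1e-9
```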
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $\int_0^{\frac{\pi}{2}}\frac{\sin x\cos x}{\sin^4x+\cos^4x}dx$ Evaluate $$ \int_0^{\frac{\pi}{2}}\frac{\sin(x)\cos(x)}{\sin^4(x)+\cos^4(x)}dx $$ I used the substitution $\sin x =t$, then I got the integral as $$\int_0^1 \frac{t}{2t^4-2t^2+1}dt $$ After that I don't know how to proceed. Please help me with this.
Hint: $$\dfrac{\sin x\cos x}{\sin^4x+\cos^4x}=\dfrac{\tan x\sec^2x}{\tan^4x+1}$$ Set $\tan^2x=y$ OR $$\dfrac{\sin x\cos x}{\sin^4x+\cos^4x}=\dfrac{\cot x\csc^2x}{\cot^4x+1}$$ Set $\cot^2x=u$
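Following the first hint ($u = \tan^2 x$, $du = 2\tan x\sec^2 x\,dx$), the integral reduces to $\frac12\int_0^\infty \frac{du}{1+u^2} = \frac{\pi}{4}$. Here is a numerical midpoint-rule check (a Python sketch; the grid size is an arbitrary choice):

```python
import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return s * c / (s ** 4 + c ** 4)

# Midpoint rule on (0, pi/2); the integrand is smooth and bounded here.
n = 200_000
h = (math.pi / 2) / n
total = h * sum(integrand((k + 0.5) * h) for k in range(n))

assert abs(total - math.pi / 4) < 1e-6
```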
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Proving limit of $a^n\to0$ for $|a|<1$ without use of logarithms Prove that $a^n\to0$ as $n\to∞$ for $|a|<1$ without use of logarithms by using properties of the sequence $u_n=|a|^n.$ I've noticed that I should use the subsequence $u_{2n}$, and the fact that $u_{2n}=u_n^2$. However, I don't know where to go from here. I'm not familiar with these types of proofs so a hint/solution would be greatly appreciated. Thank you!
By ratio test $$\frac{|a|^{n+1}}{|a|^n}=|a|<1 \implies |a|^n \to 0$$ then since $$-|a|^n\le a^n\le |a|^n$$ by squeeze theorem we conclude that $$a^n \to 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 6 }
Proof by induction on $r$ variables If there is a statement $P(n)$, proof by induction has three steps. Base case is to show $P(1)$ is true. Induction step is to assume $P(k)$ is true and then to show $P(k+1)$ is true. If our statement $P(n_1,n_2,n_3,\cdots, n_r)$ involves $r$ variables, then how do we prove it by induction?
Depends on context. In general it boils down to finding a suitable well order on $\mathbb N^r$. Then the induction step is proving that $P(n_1,\dots,n_r)$ implies $P(m_1,\dots,m_r)$ where $(m_1,\dots,m_r)$ denotes the successor of $(n_1,\dots,n_r)$. Sometimes it is possible to do it with induction on $n=n_1+\cdots+n_r$. Also you could use strong induction. Then it must be proved that $P(n_1,\dots,n_r)$ is true if $P(k_1,\dots,k_r)$ is true for every tuple $(k_1,\dots,k_r)$ with $k_i\leq n_i$ for $i=1,\dots,r$ and $\sum_{i=1}^rk_i<\sum_{i=1}^rn_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2869987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Munkres-Analysis on Manifolds: Theorem 20.1 I am studying Analysis on Manifolds by Munkres. I have a problem with a proof in section 20: It states that: Let $A$ be an $n$ by $n$ matrix. Let $h:\mathbb{R}^n\to \mathbb{R}^n$ be the linear transformation $h(x)=A x$. Let $S$ be a rectifiable set (the boundary $\operatorname{Bd} S$ of $S$ has measure $0$) in $\mathbb{R}^n$. Then $v(h(S))=|\det A|v(S)$ ($v=$volume). The author starts his proof by considering the case of $A$ being a non-singular matrix (invertible). I think I understand his steps in that case (I basically had to prove that $h(int S)=int$ $h(S)$ and $h(S)$ is rectifiable; if anybody knows a way these statements are proven automatically, please tell me). He proceeds by considering the case where $A$ is singular, so $\det A=0$. He tries to show now that $v(T)=0$. He states that since $S$ is bounded so is $h(S)$ (I think that's true because $|h(x)-h(a)|\leq n|A||x-a|$ for each $x$ in $S$ and fixed $a$ in $S$; if there is again a better explanation please tell me). Then he says that $h(\mathbb{R}^n)=V$ with $\dim V=p<n$ and that $V$ has measure $0$ (for each $ε>0$ it can be covered by countably many open rectangles of total volume less than $ε$), a statement that I have no clue how to prove. Then he says that the closure of $h(S)=cl(h(S))$ is closed and bounded and has measure $0$ (of course $cl(h(S))$ is closed, but why is it bounded with measure $0$?). Then he makes an additional step (which I understand) and proves the theorem for that case too. Could someone help me clarify the points of the proof that I don't understand? Thank you in advance!
As Munkres explains, since $\det A=0$, $h(S)(=T)$ is contained in a vector space of dimension smaller than $n$. But any subset of such a vector space has measure $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding a set of continuous functions with a certain property I need help finding the set of continuous functions $f : \Bbb R \to \Bbb R$ such that for all $x \in \Bbb R$, the following integral converges: $$\int_0^1 \frac {f(x+t) - f(x)} {t^2} \ \mathrm dt$$ I am thinking it could be the set of constant functions, but I haven't been able to prove it :( I have also noticed that if you take any two such functions and stick them together (continuously extend one into the other), the resulting function verifies the property in question. I hope you can provide some insight, and thank you.
Partial answer: if $f$ is differentiable then it is constant We write $f(x+h) = f(x) + h g(h)$ where $g(h)$ is continuous and $g(0) = f'(x)$. Then the required integral becomes: $$\int_0^1 \frac {g(t)} t \ \mathrm dt$$ If WLOG $g(0) > 0$ then there is $\delta > 0$ such that $g(t) > \frac12 g(0)$ for every $0 \le t < \delta$, and then: $$\begin{array}{rcl} \displaystyle \int_0^1 \frac {g(t)} t \ \mathrm dt &=& \displaystyle \int_0^\delta \frac {g(t)} t \ \mathrm dt + \int_\delta^1 \frac {g(t)} t \ \mathrm dt \\ &>& \displaystyle \int_0^\delta \frac {g(0)} {2t} \ \mathrm dt + \int_\delta^1 \frac {g(t)} t \ \mathrm dt \\ &=& \infty \end{array}$$ So $g(0) = 0$, and $f'(x) = g(0) = 0$ everywhere, so $f$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Doubt about how to find a Lipschitz constant I have a doubt about a sentence of my Calculus text. Let $f: [t_1, t_2]\times \mathbb{R}^n \to \mathbb{R}^n, (t,y)\to f(t,y)$ such that $|\partial_{y_j} f_i|$ is continuous and bounded for every $i,j=1,...n$. Then f has $\sqrt n L$ as Lipschitz constant (with respect to y uniformly in t) where $L>0$ is such that $|\partial_{y_j} f_i|\le L$. I don't know how to get $\sqrt n L$ as Lipschitz constant: for $i=1,...,n$ we have $|f_i(t,y)-f_i(t,z)|\le|\nabla f_i(t, \theta y)||y-z|$ for some $\theta\in[0,1]$; and since $|\nabla f_i(t, \theta y)|\le \sqrt{nL^2} $ we obtain $|\nabla f(t, \theta y)|\le \sqrt{n^2L^2}=nL$. Do you know ways to improve my inequality? Thanks in advance.
I suspect it is using the fact that $\|x\|_2 \le \|x\|_1 \le \sqrt{n} \|x\|_2$. Consider the path $x_0=x \to (y_1,x_2,...,x_n) \to (y_1,y_2,x_3,...,x_n) \to \cdots \to x_n=y$, where one component changes at a time. \begin{eqnarray} \|f(x,t)-f(y,t)\|_2 &\le& \sum_k \|f(x_{k+1},t)-f(x_k,t)\|_2 \\ &\le& \sum_k L \|x_{k+1}-x_k\|_2 \\ &=& L \sum_k |y_k-x_k| \\ &=& L \|x-y\|_1 \\ &\le& \sqrt{n}L \|x-y\|_2 \end{eqnarray}
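The norm comparison $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$ used in the last step can be spot-checked numerically (a Python sketch on random vectors; the tolerances guard against rounding):

```python
import math
import random

random.seed(0)
n = 7
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(n)]
    one_norm = sum(abs(t) for t in x)
    two_norm = math.sqrt(sum(t * t for t in x))
    assert two_norm <= one_norm + 1e-12
    assert one_norm <= math.sqrt(n) * two_norm + 1e-12
```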
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the probability that each inhabitant of a three-story building lives on a different floor? There is a multi-apartment building with $3$ stories and $4$ apartments on each story. In each apartment lives one person. Three random inhabitants of this building are standing outside the building. What is the probability that each of them lives on a separate floor (event B)? I want to solve this problem using a combinatorics approach. The answer that I have in my book is: $$P(B) = \frac{|B|}{|\Omega|} = \frac{12 \cdot 8 \cdot 4}{12 \cdot 11 \cdot 10}$$ To the best of my understanding the logic goes as follows: (a) total number of possibilities: ordered sample ($3$ out of $12$) (b) needed possibilities: first we take a person from any apartment ($12$ possibilities), then a person from the two remaining floors ($8$ possibilities) and finally a person from the one remaining floor ($4$ possibilities). My question: When I choose people in this problem I do not think we care about the order. So, I think the sample should be unordered. Mathematically it probably does not matter, because the “order” factor is the same in both numerator and denominator. But if indeed I solve the problem as an unordered sample, how do I calculate the number of possibilities in the numerator ($|B|$)? Many thanks.
Your understanding of the solution is correct. To do it without taking the order of selection into account, observe that there are $\binom{12}{3}$ ways to select three of the twelve apartments. The favorable cases are those in which one of the four apartments on each floor is occupied by the inhabitants standing outside the building. Hence, the probability that each of the three inhabitants of the three-story building lives on a different floor is $$\dfrac{\dbinom{4}{1}\dbinom{4}{1}\dbinom{4}{1}}{\dbinom{12}{3}}$$ You should verify that this gives the same probability as the solution stated in the book.
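Both counts can be compared exactly with rational arithmetic (a Python sketch):

```python
from fractions import Fraction
from math import comb

# Unordered: choose one of the 4 apartments on each of the 3 floors,
# out of all ways to choose 3 of the 12 apartments.
p_unordered = Fraction(4 * 4 * 4, comb(12, 3))

# Ordered (the book's count): 12 * 8 * 4 favorable ordered triples
# out of 12 * 11 * 10 ordered samples.
p_ordered = Fraction(12 * 8 * 4, 12 * 11 * 10)

assert p_unordered == p_ordered == Fraction(16, 55)
```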
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The product of three consecutive integers is ...? Odd? Divisible by $4$? by $5$? by $6$? by $12$? If I have the product of three consecutive integers: $n(n+1)(n+2)$, then the result is: $A)$ Odd $B)$ Divisible by $4$ $C)$ Divisible by $5$ $D)$ Divisible by $6$ $E)$ Divisible by $12$ My thought was: $i)$ If we have three consecutive numbers, $a, (a + 1), (a + 2)$, one of these three numbers must be divisible by $3$. $ii)$ If we have two consecutive numbers, $a, (a + 1)$, one of these two numbers must be divisible by $2$; also one of these numbers will be even and the other will be odd. $iii)$ A number is divisible by $6$ exactly when it is divisible by both $2$ and $3$. So, the correct answer must be $D)$ Well, I would like to know: * *Is my answer correct? *What is the formal proof of what I said in $i)$?
Looks good. The multiples of three are $3,6,9,12,15$, and so on; any three consecutive integers must include one of these, because there are only two integers between consecutive multiples of three.
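A quick exhaustive check of option $D$, plus counterexamples to the other options, over a range of integers (a Python sketch):

```python
for n in range(-100, 101):
    product = n * (n + 1) * (n + 2)
    assert product % 6 == 0      # always divisible by 6

# n = 1 gives 1 * 2 * 3 = 6, which rules out the other options:
assert 6 % 2 == 0                # even, so not odd
assert 6 % 4 != 0 and 6 % 5 != 0 and 6 % 12 != 0
```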
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
How to treat a constant when integrating I am wondering what to do when coming across an integral like this where $a$ is a constant: $$\int^{1000}_a (x-a){1\over1000}dx $$ As far as I can see, it should be ok to do this: $$ \left.{1\over 1000} \left({{{x^2}\over2} -ax}\right)\right|^{1000}_a$$ But the book I am using does: $$ \left.{1\over 1000} \left({{(x-a)^2}\over2}\right)\right|^{1000}_a$$ Which also seems correct, but is different. How does one decide which way to integrate? (The actual question in the book is regarding expected value of probability, and I can copy it in full if it makes a difference.)
The two antiderivatives differ only by a constant: $$\frac{(x-a)^2}{2000} = \frac{1}{1000}\left(\frac{x^2}{2}-ax\right) + \frac{a^2}{2000}.$$ When you evaluate at the bounds, the constant cancels, so both give the same result. In general, any two antiderivatives of the same function differ by a constant, and either may be used for a definite integral.
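The constant in question works out to $a^2/2000$, which can be confirmed numerically (a Python sketch with an arbitrary choice of $a$):

```python
a = 3.0  # arbitrary constant for the check

def F1(x):
    # antiderivative obtained by expanding first
    return (x ** 2 / 2 - a * x) / 1000

def F2(x):
    # the book's antiderivative
    return (x - a) ** 2 / 2000

# They differ by the constant a**2 / 2000 at every x ...
for x in [a, 10.0, 500.0, 1000.0]:
    assert abs(F2(x) - F1(x) - a ** 2 / 2000) < 1e-9

# ... so the definite integral from a to 1000 is the same either way.
assert abs((F1(1000) - F1(a)) - (F2(1000) - F2(a))) < 1e-9
```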
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integrate :- $\int dx/(\sin(x) + a\sec(x))^2$ Please help me in evaluating this integral $$ \int \frac{1}{(\sin(x) + a \sec(x))^2}\,dx $$ I tried by converting $\sec(x)$ to $\cos(x)$ and by solving it became more complicated so guys please guide me further.
Hint: $$\dfrac1{(\sin x+a\sec x)^2}=\dfrac1{2(\sin x\cos x+a)^2}+\dfrac{\cos2x}{2(\sin x\cos x+a)^2}$$ The second part is elementary. $$\dfrac1{(\sin x\cos x+a)^2}=\dfrac{\sec^2x(1+\tan^2x)}{(\tan x+a\tan^2x+a)^2}$$ Choose $\tan x=u$
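The first identity in the hint can be spot-checked numerically before integrating (a Python sketch; $a = 2.5$ is an arbitrary nonzero choice):

```python
import math

a = 2.5  # arbitrary nonzero constant

def lhs(x):
    return 1 / (math.sin(x) + a / math.cos(x)) ** 2

def rhs(x):
    d = 2 * (math.sin(x) * math.cos(x) + a) ** 2
    return 1 / d + math.cos(2 * x) / d

for x in [0.1, 0.5, 1.0, 1.4, -0.7]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
```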
{ "language": "en", "url": "https://math.stackexchange.com/questions/2870964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Isomorphism Clarification and Identification I understand that to claim isomorphism, algebraic properties must be preserved...so....a) closed under multiplication and closed under addition. However, I am unsure how to apply these condition-testing properties in the context of polynomial/space based questions For example: Which of the following is isomorphic to a subspace of R^3x4 * *P9 *P11 *Upper triangular matrices in R^2x3 *R12 How can I show closed under addition/multiplication in each of these contexts?
The simpler way to find out if two finite dimensional vector spaces are isomorphic, as others said, is to find the dimension of each vector space you're studying. So we can state a little "theorem" Two finite dimensional vector spaces $V,W$ over the same field are isomorphic iff $$\dim(V)=\dim(W)$$ After stating this we can easily find out which of your vector spaces is isomorphic to $\mathbb{R}^{3\times 4}$. First of all we have that $$\dim(\mathbb{R}^{3\times 4}) = 12$$ and it's easy to see because to write down a $3$ by $4$ matrix you need $12$ independent entries. Now let's see the other dimensions $$\begin{align} P_9 &&\rightarrow&&\dim(P_9)=9\tag1\\ P_{11} &&\rightarrow&&\dim(P_{11})=11\tag2\\ \text{upper triangular }\mathbb{R}^{2\times 3}&&\rightarrow&&\dim(\text{upper triangular }\mathbb{R}^{2\times 3})=5\tag3\\ \mathbb{R}^{12}&&\rightarrow&&\dim(\mathbb{R^{12}})=12\tag4 \end{align}$$ For the first and second one the dimension is easy to find: with the convention used here, $P_n$ is the space of polynomials with $n$ coefficients (degree less than $n$), so the vector space is $n$-dimensional. The third one is more tricky: one would think that the dimension would be $6$ but an upper triangular $2$ by $3$ matrix is of the form $$\left(\begin{matrix}a_{11}&a_{12}&a_{13}\\0&a_{22}&a_{23}\end{matrix}\right)$$ so the entry $a_{21}=0$ always. The only independent entries are the $5$ that remain. The fourth is just $\mathbb{R}^n$, so its dimension is $n$. Clearly the only space in this list isomorphic to $\mathbb{R}^{3\times 4}$ is $\mathbb{R}^{12}$.
If you want to find an isomorphism, which is not requested if the only thing you want is to find if to vector spaces are isomorphic, you could use this isomorphism $$\phi:\mathbb{R}^{3\times4}\rightarrow \mathbb{R}^{12} \\ \left(\begin{matrix}a_{11}&a_{12}&a_{13}&a_{14}\\a_{21}&a_{22}&a_{23}&a_{24}\\a_{31}&a_{32}&a_{33}&a_{34}\end{matrix}\right)\mapsto (a_{11},a_{12},a_{13},a_{14},a_{21},a_{22},a_{23},a_{24},a_{31},a_{32},a_{33},a_{34})$$ i.e. you take all the entries in the matrix and map them to a vector that has as components the $a_{ij}$ of the matrix.
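This flattening map is easy to realize and test in code (a Python sketch checking the map on an example and its additivity, which is part of linearity):

```python
def phi(M):
    # flatten a 3x4 matrix, row by row, into a vector in R^12
    return [entry for row in M for entry in row]

M = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
N = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]

assert phi(M) == list(range(1, 13))

# additivity: phi(M + N) equals phi(M) + phi(N) entrywise
S = [[M[i][j] + N[i][j] for j in range(4)] for i in range(3)]
assert phi(S) == [m + n for m, n in zip(phi(M), phi(N))]
```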
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Showing that $A=\{a_1,...,a_r\}$ is a closed set Let $A=\{a_1,\dots,a_r\}, a_i \in \mathbb{R}, i=1,...,r$. Show that $A$ is a closed set. As a bit of a beginner, I have written down a proof and I wanted to see if it is good enough/well structured. So I want to show that $\partial A\subseteq A$, where $\partial A=\{a \in A:D(a,\delta) \cap A\neq\emptyset, D(a,\delta)\cap(\mathbb{R}\setminus A)\neq\emptyset, \forall\delta>0\}$ is the set of boundary points of $A$. The neighbourhood of $A$ for any $\delta>0$ is denoted by $D(a,\delta)=(a-\delta,a+\delta)$. Let $a \in A$. For all $\delta>0$, $D(a,\delta)\cap A\neq\emptyset$ (since $D(a,\delta)\cap A=\{a\}$ even for arbitrarily small $\delta$). Also, $D(a,\delta)\cap(\mathbb{R}\setminus A)\neq\emptyset$. So $a \in \partial A$ for all $a \in A$. Now let $B=\mathbb{R}\setminus A$ and $y\in B$. If $y<\min A$, then $D(y, \frac{\min A-y}{2})\subseteq B$, so $y$ is exterior to $A$. If $y>\max A$, then $D(y,\frac{y-\max A}{2})\subseteq B$, so $y$ is exterior to $A$. If $a_j<y<a_k$, where $a_j\in A$ and $a_k=\min(A\setminus\{a\in A: a\leq a_j\})$, and $\delta=\frac{1}{2}\min(y-a_j,a_k-y)$, then $D(y,\delta)\subseteq B$, so $y$ is exterior to $A$. Therefore every element in $B=\mathbb{R}\setminus A$ is exterior to $A$. So $\partial A=\{a_1,\dots,a_r\}=A$, or $\partial A\subseteq A$, so $A$ is a closed set. 1) Is my conclusion (and the way I arrived at it) correct? Is every element in $A$ a boundary point of $A$? 2) I think I've made my proof a bit more complicated than it should be, is there a simpler way? 3) Tips for formatting/notation?
It seems to me the introduction of $\partial A$ into the discussion overly complicates things. I would argue it like this, which to my mind is somewhat simpler: For each $a_k$, $1 \le k \le r$, the set $\Bbb R \setminus \{ a_k \}$ is open; this is easy to see, since if $p \in \Bbb R \setminus \{a_k\}$, the open interval $(p - \delta, p + \delta) \subsetneq \Bbb R \setminus \{a_k\}, \tag 1$ where $\delta = \dfrac{\vert p - a_k \vert}{2}, \tag 2$ contains $p$ and is fully contained in $\Bbb R \setminus \{a_k \}$; thus $\Bbb R \setminus \{ a_k \}$ is open, since it contains an open neighborhood $(p - \delta, p + \delta)$ of any its points $p$; thus the complement of $\Bbb R \setminus \{a_k\}$, which is the singleton $\{a_k\}$, is closed. Now simply used the fact that a finite union of closed sets is closed, and $A = \{ a_1, a_2, \ldots, a_r \} = \displaystyle \bigcup_{k = 1}^r \{ a_k \}. \tag 3$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Finite morphisms of schemes are closed. Let $f : X \rightarrow Y$ be a finite morphism of schemes. I have to show $f$ is closed. I have been able to prove that for any open affine $V= \mathrm{Spec}(B)$ in $Y$, $f : f^{-1}V \rightarrow V$ is a closed morphism. But I am having trouble to extend this globally. I am arguing as follows: say $C$ is some closed set in $X$. Then $C \cap f^{-1}V$ being closed in $f^{-1}V, f(C \cap f^{-1}V)$ is closed in $V$ for any open affine $V$. But how to conclude from this $f(C)$ is closed in $Y$ without any assumption of quasicompactness ?
It follows from the following lemma: Lemma: Let $X$ be a topological space, and let $\{U_{i}\}_{i \in I}$ be an open cover for $X$. Then a subset $C \subset X$ is closed if and only if $C \cap U_{i}$ is closed in $U_{i}$ for each $i \in I$. (Note that there are no assumptions on the cardinality of the index set $I$!) Proof: We prove the interesting direction, namely that if $C_{i} := C \cap U_{i}$ is closed in $U_{i}$ for each $i \in I$, then $C$ is closed in $X$. It suffices to show that $O := X\setminus C$ is open in $X$. We have $$X \setminus C = (\bigcup_{i \in I} U_{i}) \setminus C = \bigcup_{i \in I} U_{i} \setminus C_{i} $$ Since $C_{i}$ is a closed subset of $U_{i}$ for each $i \in I$, $U_{i} \setminus C_{i}$ is an open subset of $U_{i}$, hence an open subset of $X$. Thus, $O$ is a union of open subsets, hence open as desired. $\square$ Now, your claim follows from the observation that $f(C \cap f^{-1}V) = f(C) \cap V$ and taking an appropriate affine cover of $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Surface integral - cone below plane After several years I suddenly need to brush up on surface integrals. Looking through my old Calculus book I have been attempting to solve some problems, but the following problem has really made me hit a wall, even though it probably is quite easy to solve: Find $\int \int_S y dS$, where $S$ is part of the cone $z=\sqrt{2(x^2 + y^2)}$ that lies below the plane $z=1+y$. So far I have found that $dS = \sqrt{3}$, which then means I have to solve the integral: $$\sqrt{3}\int \int_S y dx dy$$. However, I am really stuck on how to proceed from here. I have tried looking at the intersection between the cone and the plane, and transforming the integral to polar coordinates, but can't seem to get anywhere. If someone can help me out a bit here, then I would greatly appreciate it!
The domain of integration is the projection on the $xy$-plane of the intersection between the plane and the cone (an ellipse in 3D space). Using cylindrical coordinates we get: $$ x = r\cos \theta\\y=r\sin\theta\\ z =z\\ \sqrt 2 r= r\sin\theta + 1 $$ We can thus determine the expression for the ellipse: $$ r = \frac{1}{\sqrt 2 - \sin \theta} = \rho(\theta)$$ The area element with these coordinates has the form: $$ dS = \sqrt 3 dx dy = \sqrt 3 r drd\theta $$ Which yields the final integral: $$ \sqrt 3 \int_{0}^{2\pi} \sin \theta \int_{0}^{\rho(\theta)} r^2 dr d\theta $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Example of two subgroups $H$ and $K$ of a non-abelian group $G$ such that $HK$ is not a subgroup of $G$? I need to show that the set $HK$ need not be a subgroup of $G$ when $G$ is non-abelian. Does anyone have a trivial or easy counterexample? I cannot think of one...
Take $G=S_3,H=\left<(1,2)\right>$ and $K=\left<(2,3)\right>$. Then $H$ and $K$ are subgroups of $G$ (each containing two elements) and $$HK=\{1,(12),(23),(132)\}$$ a set of size $4$. Therefore, $HK$ is not a subgroup of $G$ (by Lagrange's Theorem), since $4$ does not divide $6$.
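The product set can be computed directly with permutations represented as tuples (a Python sketch using the apply-left-factor-first convention, under which $(12)(23) = (132)$):

```python
def compose(p, q):
    # apply p first, then q (left-to-right composition)
    return tuple(q[p[i]] for i in range(3))

e = (0, 1, 2)      # identity
t12 = (1, 0, 2)    # transposition (1 2), written 0-indexed
t23 = (0, 2, 1)    # transposition (2 3), written 0-indexed

H = {e, t12}
K = {e, t23}
HK = {compose(h, k) for h in H for k in K}

assert compose(t12, t23) == (2, 0, 1)   # the 3-cycle (132), 0-indexed
assert len(HK) == 4                     # 4 does not divide |S3| = 6
```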
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Bayesian Update in the Presence of Noise - Estimating the Ratio of Balls in a Jar There are two jars with red balls and blue balls. Your goal is to estimate the ratio of red to blue for each jar, assuming some initial prior for each jar. On each iteration, you are handed a ball. You can see its color, and are told which jar it came from. However, for some known fraction, f, of the iterations, the information about which jar the ball came from is false. Whether the jar information is true or false is determined independently for each iteration. The ball is then replaced into the jar from which it actually came. What is the correct update rule for the ratios of each jar on each iteration?
I'll assume that, as specified in a comment, the balls are known to come from either jar with equal probability, independently chosen for each ball. I take your first paragraph to imply that your initial prior for the ratios factorizes into a product of marginal priors for the individual jars. This factorizability won't be preserved by the updates. For example, for $f=\frac12$ and a prior that's indifferent between all-red and all-blue jars (and excludes all fractional proportions), if you get a red ball, your prior becomes $\frac12$ for two all-red jars, $\frac14$ for each combination of mixed jars and $0$ for two all-blue jars, which doesn't factor. Thus we might as well start with a general joint prior $p(\lambda_1,\lambda_2)$ for the proportions of red balls in the jars. But then we can map the problem to the simpler problem of drawing directly from two jars. Consider two virtual jars, one for each possible announcement where a ball came from. Then the “announcement $k$” jar has an effective proportion $(1-f)\lambda_k+f\lambda_{\overline k}$ of red balls (where $\overline k$ is the jar other than $k$). The transformation matrix $$ \pmatrix{1-f&f\\f&1-f} $$ is invertible as long as $f\ne\frac12$, so you have a one-to-one map between the real ratios and the virtual ratios. You can transform your prior to the virtual ratios, perform standard updates for two jars on the virtual ratios, and transform back to the real ratios. The case $f=\frac12$ has to be treated separately, because you're not getting any information on which jar the balls are coming from. 
In this case, you should transform your prior to new variables $\lambda_\pm=\frac{\lambda_1\pm\lambda_2}2$, treat the marginal prior for $\lambda_+$ as the prior for a single jar, update it in the standard way with the balls you receive (ignoring the random information about the origin of the balls), and calculate the updated full prior as $$ p(\lambda_+,\lambda_-\mid\text{data})=p(\lambda_+,\lambda_-)\frac{p(\lambda_+\mid\text{data})}{p(\lambda_+)}\;. $$
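For $f \ne \tfrac12$, the real-to-virtual change of variables and its inverse can be sketched concretely; the matrix $\pmatrix{1-f&f\\f&1-f}$ has determinant $1-2f$ (a Python sketch with an arbitrary noise rate $f = 0.2$):

```python
f = 0.2  # example noise rate, f != 1/2

def to_virtual(l1, l2):
    # effective red-ball proportions of the two "announcement" jars
    return ((1 - f) * l1 + f * l2, f * l1 + (1 - f) * l2)

def to_real(m1, m2):
    # invert [[1-f, f], [f, 1-f]]; determinant is 1 - 2f
    det = 1 - 2 * f
    return (((1 - f) * m1 - f * m2) / det, ((1 - f) * m2 - f * m1) / det)

l1, l2 = 0.7, 0.3
m1, m2 = to_virtual(l1, l2)
r1, r2 = to_real(m1, m2)
assert abs(r1 - l1) < 1e-12 and abs(r2 - l2) < 1e-12
```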
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a Polynomial Limit where x → 0 Prove $$\lim \limits_{x\to 0} x^3+x^2+x = 0$$ Note $|f(x)-L| = |x^3+x^2+x|$ Assume $\ \ |x-c|<\delta \implies |x| < \delta$ $\implies |x^3+x^2+x|<\delta\cdot|x^2+x+1|$ Assume $|x| < 1 \implies -1 < x < 1 \implies 0 < x+1 < 2$ And I am not sure where to go from there, since I can't multiply the inequality by $x$ in order to get $x^2$, because $x$ could be negative or positive.
As an alternative by squeeze theorem assuming $|x|<1$ $$0\le |x^3+x^2+x|=|x||x^2+x+1|\le 3|x| \to 0$$
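The bound $|x^3+x^2+x| \le 3|x|$ for $|x| < 1$ is easy to confirm on a fine grid (a Python sketch; the small tolerance guards against rounding):

```python
for k in range(-999, 1000):
    x = k / 1000  # sample points with |x| < 1
    # |x^3 + x^2 + x| = |x| * |x^2 + x + 1| <= 3|x| since |x| < 1
    assert abs(x ** 3 + x ** 2 + x) <= 3 * abs(x) + 1e-15
```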
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Maximizing multinomial distribution I have an exercise in probability theory which I can't solve: There are 3 factories A B C, which produce 3 types of light bulbs. Factory A / B / C makes 40 / 20 / 40 percent of all light bulbs. The probability of manufacturing the first type of light bulb in factory A / B / C is 0.6 / 0.3 / 0.5. The probability of manufacturing the second type of light bulb in factory A / B / C is 0.3/0.4/0.2. The probability of manufacturing the third type of light bulb in factory A / B / C is 0.1/0.3/0.3. We buy 6 light bulbs. What is the most probable set of light bulbs (number of each type)? What is its probability? So I figured out it's a multinomial distribution with parameters 6 and probabilities $p_1=0.5$, $p_2=0.28$, $p_3=0.22$, where $p_i$ is the probability of buying a light bulb of the $i$-th type. Let $X_i$ denote the number of light bulbs of the $i$-th type in the set of 6 light bulbs we bought. To solve the first problem we want to maximize $P(X_1=k_1, X_2=k_2, X_3=k_3) = \frac{6!}{k_1!\cdot k_2! \cdot k_3!}0.5^{k_1} \cdot 0.28^{k_2} \cdot 0.22^{k_3}$ So we can write a function $f(x,y,z)=\frac{6!}{x!\cdot y! \cdot z!} 0.5^{x}\cdot 0.28^{y}\cdot 0.22^{z}$. Since the sum of all variables must be 6, we can eliminate one variable. Then we want to maximize the function $g(x,y)=\frac{6!}{x!\cdot y!\cdot (6-x-y)!}0.5^{x}\cdot 0.28^{y}\cdot 0.22^{6-x-y}$. I don't know how to do it because of the factorials. And also the $k_i$ are natural numbers, so I doubt that this approach is good. I think you can examine this formula with sequences, but there is a lot of work to do then. I think there should be a theorem, which I don't know, that would help, because it was for an exam in some non-mathematical (economics-related) studies and they have only a small amount of math classes there.
Max $P = \dfrac{6!}{x_1!\,x_2!\,x_3!\,x_4!\,x_5!\,x_6!\,x_7!\,x_8!\,x_9!}\,(0.24)^{x_1}(0.12)^{x_2}(0.04)^{x_3}(0.06)^{x_4}(0.08)^{x_5}(0.06)^{x_6}(0.20)^{x_7}(0.08)^{x_8}(0.12)^{x_9}$ with $x_1$ - Type 1 made in Factory A - probability $= 0.6\cdot 0.4 = 0.24$; $x_2$ - Type 2 made in Factory A - probability $= 0.3\cdot 0.4 = 0.12$; $x_3$ - Type 3 made in Factory A - probability $= 0.1\cdot 0.4 = 0.04$; $x_4$ - Type 1 made in Factory B - probability $= 0.3\cdot 0.2 = 0.06$; $x_5$ - Type 2 made in Factory B - probability $= 0.4\cdot 0.2 = 0.08$; $x_6$ - Type 3 made in Factory B - probability $= 0.3\cdot 0.2 = 0.06$; $x_7$ - Type 1 made in Factory C - probability $= 0.5\cdot 0.4 = 0.20$; $x_8$ - Type 2 made in Factory C - probability $= 0.2\cdot 0.4 = 0.08$; $x_9$ - Type 3 made in Factory C - probability $= 0.3\cdot 0.4 = 0.12$; subject to $x_1+x_2+x_3+x_4+x_5+x_6+x_7+x_8+x_9=6$ and $x_i \ge 0$. Running a simulation shows that the most probable configuration is $x_1 = 2$, $x_2 = 1$, $x_7 = 2$ and $x_9 = 1$, with maximum probability $\approx 0.005972$.
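The simulation can be replaced by exhaustive enumeration — there are only $\binom{14}{8}=3003$ ways to split the 6 purchases over the 9 categories (category order as in the list above; a quick sketch):

```python
from math import factorial

# Per-(factory, type) probabilities, in the order x_1 ... x_9 listed above.
p = [0.24, 0.12, 0.04, 0.06, 0.08, 0.06, 0.20, 0.08, 0.12]

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative ints."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def prob(counts):
    """Multinomial probability of a given count vector."""
    coef = factorial(6)
    for c in counts:
        coef //= factorial(c)
    out = float(coef)
    for c, pi in zip(counts, p):
        out *= pi ** c
    return out

best = max(compositions(6, 9), key=prob)
print(best, round(prob(best), 6))
```

This confirms the mode and its probability reported above.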
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
if $a^3+b^3 +3(a^2+b^2) - 700(a+b)^2 = 0$ then find the sum of all possible values of $a+b$ If $a+b$ is a positive integer, $a\ge b$, and $a^3+b^3 +3(a^2+b^2) - 700(a+b)^2 = 0$, then find the sum of all possible values of $a+b$. I tried a lot to solve it, and I reached a step beyond which I was not able to proceed: $(a+b)^3 - 41\cdot17 (a+b)^2 - 3ab(a+b) - 6ab = 0$ Please help me to solve this.
Here is a start, not a full answer. Let $s=a+b$ and $p=ab$. Then $3 p (s + 2) = (s - 697) s^2$ as WA tells us. Then $s+2$ divides $(s - 697) s^2$; reducing modulo $s+2$ (where $s \equiv -2$) gives $(s-697)s^2 \equiv (-699)\cdot 4 = -2796$, and so $s+2$ divides $2796 = 2^2 \cdot 3 \cdot 233$.
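The divisor hunt can be finished numerically, under the extra assumption that $a$ and $b$ are themselves integers (the problem as quoted only requires $a+b$ to be a positive integer — this is a sketch under that reading):

```python
from math import isqrt

# Extra assumption: a and b are integers (the problem only says a + b is).
solutions = []
for s in range(1, 2795):
    if 2796 % (s + 2):
        continue                         # need (s + 2) | 2796
    num = (s - 697) * s * s              # 3p(s + 2) = (s - 697)s^2
    if num % (3 * (s + 2)):
        continue                         # p = ab must then be an integer
    p = num // (3 * (s + 2))
    disc = s * s - 4 * p                 # a, b are the roots of x^2 - sx + p
    if disc < 0:
        continue
    r = isqrt(disc)
    if r * r == disc and (s + r) % 2 == 0:
        a, b = (s + r) // 2, (s - r) // 2
        # check against the original equation
        assert a**3 + b**3 + 3*(a*a + b*b) - 700*(a + b)**2 == 0
        solutions.append(s)

print(solutions, sum(solutions))
```

Under this integrality assumption, only $s=697$ (with $a=697,b=0$) and $s=2794$ (with $a=b=1397$) survive.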
{ "language": "en", "url": "https://math.stackexchange.com/questions/2871998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
compute the summation $\sum_ {n=1}^\infty \frac{2n-1 }{2\cdot4\cdots(2n)}= \text{?}$ Compute the summation $$\sum_ {n=1}^\infty \frac{2n-1 }{2\cdot4\cdots(2n)}= \text{?}$$ My attempt: I take $a_n =\frac{2n-1}{2\cdot4\cdots(2n)}$. Now \begin{align} & = \frac{2n}{2\cdot4\cdot6\cdots2n} -\frac{1}{2\cdot4\cdot6\cdots2n} \\[10pt] & =\sum_ {n=1}^\infty \left(\frac{2n}{2\cdot4\cdot6\cdots2n} -\frac{1}{2\cdot4\cdot6\cdots2n}\right) \\[10pt] & =\sum_ {n=1}^\infty \left(\frac{1}{1\cdot2\cdot3\cdots(n-1)} -\frac{1}{2(1\cdot2\cdot3\cdots n)}\right) \\[10pt] & =\sum_ {n=1}^\infty \frac{1}{(n-1)!} - \frac{1}{2}\sum_ {n=2}^\infty \frac{1}{n!} \\[10pt] & = e - \frac{1}{2} (e- 1)= \frac{1}{2}(e+1) \end{align} Is this correct? If not, any hints/solutions will be appreciated. Thanks in advance.
Using a telescoping approach with $$a_n=\dfrac{1}{2\cdot4\cdot6\cdots(2n)},\qquad a_0=1,$$ note that $$a_{n-1}-a_n=\frac{2n}{2\cdot4\cdots(2n)}-\frac{1}{2\cdot4\cdots(2n)}=\frac{2n-1}{2\cdot4\cdots(2n)},$$ so $$S=\sum_{n=1}^{\infty}(a_{n-1}-a_{n})=a_0-\lim_{n\to\infty}a_n=1.$$ (In particular the proposed value $\frac12(e+1)$ is not correct; the slip is that $2\cdot4\cdots(2n)=2^n\,n!$, not $2\cdot n!$.)
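Numerically, the telescoping is easy to see: $\frac{2n-1}{2\cdot4\cdots(2n)}=a_{n-1}-a_n$ with $a_0=1$, so the partial sums are $1-a_N\to 1$ (a quick check):

```python
N = 30
term_sum, prod = 0.0, 1.0
for n in range(1, N + 1):
    prod *= 2 * n                       # 2·4···(2n) = 2^n n!
    term_sum += (2 * n - 1) / prod      # each term equals a_{n-1} - a_n
a_N = 1.0 / prod                        # a_N is astronomically small here

print(term_sum, 1.0 - a_N)   # both ≈ 1
```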
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Polynomial Division: dividing by a double root I have this fairly interesting problem that is based on polynomial division and/or the factor/remainder theorem. Determine the values of $a$ and $b$ such that $ax^4 + bx^3 -3$ is divisible by $(x-1)^2$. This is interesting because the root we are dividing by is a double root, so it's difficult to get $2$ equations in $2$ unknowns. (Note that one cannot use ideas from calculus.) The only approach I have been able to come up with is to expand the perfect-square divisor to a full quadratic and perform a brute-force division, but it did not really lead me to a solution.
Method one (compare the coefficients): $$\begin{eqnarray}ax^4+bx^3-3 &=& (x-1)^2(cx^2+dx+e)\\ &=& (x^2-2x+1)(cx^2+dx+e)\\ &=& cx^4+(d-2c)x^3+(e-2d+c)x^2+(-2e+d)x+e\end{eqnarray}$$ From here we see that: $$ \begin{eqnarray*} c&=&a\\ e&=& -3\\ -2e+d&=& 0\implies d=-6\\ e-2d+c&=&0 \implies c=-9\implies a=-9\\ d-2c&=&b\implies b=12 \end{eqnarray*}$$ Method two (Vieta formulas), $x_1=x_2=1$: $$ 2+x_3+x_4 = -{b\over a}$$ $$ 1+2(x_3+x_4)+x_3x_4 = 0$$ $$ 2x_3x_4+ x_3+ x_4 =0$$ $$ x_3x_4 = -{3\over a}$$ from the 2nd and 3rd equations we get $$x_3+x_4 = -{2\over 3}\;\;\;{\rm and}\;\;\;x_3x_4 ={1\over 3}$$ From the 4th equation we get $a=-9$ and from the 1st equation we get $b=12$. Method three: Since $1$ is a root of the polynomial $p(x)=ax^4+bx^3-3$ we get $b=3-a$, so we have $$p(x)=ax^4+3x^3 -ax^3-3 $$ $$= ax^3(x-1)+3(x-1)(x^2+x+1) $$ $$= (x-1)\underbrace{\Big(ax^3+3(x^2+x+1)\Big)}_{q(x)} $$ Now since $1$ is a double root we also have $q(1)=0$, so $a+9=0$. Method four (Horner scheme): $$\begin{array}{cccccc} & a & b & 0 & 0 & -3 \\ \hline 1 & & a & a+b & a+b & a+b \\ \hline & a & a+b & a+b & a+b & \color{red}{a+b-3} \\ \hline 1 & & a & 2a+b & 3a+2b & \\ \hline & a & 2a+b & 3a+2b & \color{red}{4a+3b} & \\ \end{array}$$ Both red expressions must be zero... Method five: Direct (long) division. What you are left with (a first-degree remainder) must be identically $0$, so you get $2$ equations again...
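The two red conditions in method four, $a+b-3=0$ and $4a+3b=0$, give $a=-9$, $b=12$; a synthetic-division (Horner) check that $(x-1)^2$ then divides the polynomial:

```python
def divide_by_x_minus(coeffs, r):
    """Synthetic division of coeffs (highest degree first) by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]            # (quotient, remainder)

p = [-9, 12, 0, 0, -3]                  # -9x^4 + 12x^3 - 3
q1, r1 = divide_by_x_minus(p, 1)        # divide by (x - 1) once ...
q2, r2 = divide_by_x_minus(q1, 1)       # ... and again
print(q2, r1, r2)   # [-9, -6, -3] 0 0
```

Both remainders vanish, and the final quotient $-9x^2-6x-3$ matches $cx^2+dx+e$ from method one.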
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\iint_D x \sin (y -x^2) \,dA$. Let $D$ be the region, in the first quadrant of the $x,y$ plane, bounded by the curves $y = x^2$, $y = x^2+1$, $x+y=1$ and $x+y=2$. Using an appropriate change of variables, compute the integral $$\iint_D x \sin (y -x^2) \,dA.$$ I've been reviewing for an upcoming test and this problem was recommended to do for study -- I just can't get it. I've tried many changes of variables and nothing has worked. I would really appreciate a hint or a solution. Thanks in advance!
Another interesting change of variables, which takes us further and shows that this integral cannot be evaluated in closed form without a series expansion, is to remove $x$ from the integral. Note that we have $x^2$ inside $\sin(\cdot)$, which makes this possible. Let $k = x^2$, keeping $y$ as it is; this gives a Jacobian factor of $1/(2x)$, which cancels the $x$. Now, although we have a $\sin(y-k)$ structure, which we could expand by the trigonometric identity for $\sin(A-B)$ into two parts once the functions of $k$ and $y$ are independent, it is better to make another change of variables so that the integral limits associated with $y$ become fixed. Note that the boundary curves are now of the form $y-k = c$ and $\sqrt k + y = c$. So we can take the new variable $t = y-k$, which does exactly this, making the limits for $t$ run from $0$ to $1$. The limits for $k$ depend on $t$ and are of the form $\sqrt k + k + t = c$. The inner function is now just $\sin(t)$, which can be pulled out of the integral over $k$. To evaluate the integral over $k$ we just have to evaluate the limits, i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve these as quadratic equations in $\sqrt k$ and then square the positive root, which is $\left(-1 + \sqrt{1 + 4(c-t)}\right)/2$, we get constants, terms in $t$, and terms of the form $\sqrt{c-t}$. And remember, a $\sin(t)$ is waiting for us, with limits $0$ to $1$ in the variable $t$! Now $\int\sin(t)\,dt$ is elementary, and $\int t \sin(t)\,dt$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t}\, \sin(t)$. To get rid of $c$ (which is $1$ and $2$ here), we can substitute $p = c-t$, which brings us to the structure $\sqrt p\, \sin(c - p)$, reducible to the general structure $\sqrt p\, \sin(p)$ through expansion of $\sin(c - p)$.
So, now to this $\sqrt{p} \sin(p)$: when we do integration by parts, we end up having to evaluate an integral of $\cos(p)/\sqrt{p}$, which under the substitution $p = u^2$ becomes an integral of $\cos(u^2)$. This cannot be simplified further by hand and is known as a Fresnel integral, which has a series expansion (see https://en.wikipedia.org/wiki/Fresnel_integral). PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts, just to give a transparent idea of what creates the problem. I hope it is understandable; let me know if some part is unclear.
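A numerical cross-check of the substitution $u=y-x^2$, $v=x+y$ (equivalently the $k$, $t$ change above): the Jacobian gives $x\,dx\,dy \mapsto \frac{x}{2x+1}\,du\,dv$ with $\frac{x}{2x+1}=\frac12\bigl(1-(1+4(v-u))^{-1/2}\bigr)$, over the rectangle $[0,1]\times[1,2]$. A stdlib-only sketch (the Simpson helper is ad hoc, not from the thread):

```python
from math import sin, sqrt

def simpson(f, a, b, n=200):                       # ad-hoc composite Simpson rule
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))) * h / 3

# Direct: x in [0,1], y between max(x^2, 1-x) and min(x^2+1, 2-x);
# both y-bounds switch at x = c where x^2 + x = 1, so split there.
c = (sqrt(5) - 1) / 2

def inner(x):
    lo, hi = max(x * x, 1 - x), min(x * x + 1, 2 - x)
    return simpson(lambda y: x * sin(y - x * x), lo, hi)

I_direct = simpson(inner, 0.0, c) + simpson(inner, c, 1.0)

# After u = y - x^2, v = x + y:  x dx dy -> x/(2x+1) du dv on [0,1]x[1,2].
def inner_uv(u):
    return simpson(lambda v: 0.5 * (1 - 1 / sqrt(1 + 4 * (v - u))) * sin(u),
                   1.0, 2.0)

I_changed = simpson(inner_uv, 0.0, 1.0)
print(I_direct, I_changed)
```

The two computations agree to high accuracy, confirming the reduction described above.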
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A conditional probability involving order statistic Here is the original question: Let $X_1,X_2,\cdots$ be i.i.d. random variables uniform on $[0,1]$. Let $N$ be the smallest integer such that $X_N$ is smaller than exactly one of its predecessors $X_1,X_2,\cdots,X_{N-1}$. Find the cumulative distribution function of $N$. I saw that the key trick is to realize $\mathbb{P}\left\{ N = n \mid N > n-1\right\} = \frac{1}{n}$. But how to rigorously prove it? I attempted to write that $$\mathbb{P}\left\{N = n\mid N > n-1\right\} = \mathbb{P}\left\{X_N=X_{(N-1)}\mid N>n-1 \right\}$$ But I don't know how to proceed. I know that all orderings of $X_1,\cdots,X_N$ are equally probable, but I don't know how that would be helpful. Intuitively, I am thinking that once we fix $X_1,\cdots,X_{n-1}$, then there are $n$ possible "slots" that we can "insert" $X_N$, so that gives us $\frac{1}{N}$.
Let $X^{(i)}$ be the $i$-th order statistic from $n$ draws of the uniform distribution. (To be clear, $X^{(1)}$ is the smallest draw while $X^{(n)}$ is the largest. I chose slightly different notation than the linked Wikipedia article to avoid confusion with the notation you've chosen.) Conditioning on $\{N>n\}$, $N=n+1$ whenever $X_{n+1}$ is larger than all but the largest of $n$ draws from the uniform distribution. That is, we must have that $$ \left\{ X^{(n-1)} \le X_{n+1} < X^{(n)} \right\}.$$ In other words, $$ \Pr(N=n+1 \vert N>n) = \Pr \left( X^{(n-1)} \le X_{n+1} < X^{(n)} \right).$$ Using the joint density of $\left( X^{(n-1)} , X^{(n)} \right)$, we have that \begin{align*} \Pr \left( X^{(n-1)} \le X_{n+1} < X^{(n)} \right) &= n(n-1) \int_{0}^1\int_{0}^v u^{n-2}(v-u)\,\mathrm du\,\mathrm dv \\ &= \int_0^1 v^n \, \mathrm dv \\ &= \frac{1}{n+1}. \end{align*} Hence, $$ \Pr(N=n+1 \vert N>n) = \frac{1}{n+1}. $$
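A quick Monte Carlo sanity check of the final identity (my own sketch, with $n=4$): $X_{n+1}$ lands between the two largest of the first $n$ draws with probability $\frac1{n+1}$:

```python
import random

random.seed(0)
n, trials, hits = 4, 200_000, 0
for _ in range(trials):
    xs = sorted(random.random() for _ in range(n))
    x_new = random.random()
    if xs[-2] <= x_new < xs[-1]:      # between the two largest of the first n
        hits += 1

print(hits / trials)   # ≈ 1/(n+1) = 0.2
```

This also matches the exchangeability intuition: the rank of $X_{n+1}$ among $n+1$ i.i.d. draws is uniform on the $n+1$ possible positions.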
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the limit rule $\lim_{x \rightarrow 0}\frac{\sin x}{x}=1$ apply to $\lim_{x \rightarrow \pi}\frac{\sin\left(x-\pi\right)}{\left(x-\pi\right)}=1$? In my textbook, I was given an example below : $$\lim_{x \rightarrow \pi}\frac{\sin\left(x-\pi\right)}{\left(x-\pi\right)}=1$$ Previously I was taught that this formula : $$\lim_{x \rightarrow 0}\frac{\sin x}{x}=1$$ only applies when $x$ approaches $0$. Can someone explain to me?
Set $y=x-\pi$. Then $y$ approaches $0$ if and only if $x$ approaches $\pi$. So we may write the following $$\lim_{x \rightarrow \pi}\frac{\sin\left(x-\pi\right)}{\left(x-\pi\right)}=\lim_{y \rightarrow 0}\frac{\sin y}{y}=1$$ EDIT: One may also see it as a composition of limits.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Notation for element of an ordered tuple? When $X$ is a set, we can write: for all $x\in X$ ... But say that $X=(a, b, c, \dots, n)$, i.e. an ordered tuple. Is it standard notation to still say the following? for all $x\in X$... It might be confusing, because if you interpret it as a sentence in ZFC, then you're not quantifying over the thing you want (you're including the sets representing the order). But how would we write a statement that quantifies over just the elements of the tuple?
If we insist on rigor here, we can proceed in a slightly ugly way. If we define a tuple as a function from sets of integers into the target set then we can proceed as follows. Let $$ [n] = \{m \in \mathbb{N}: m<n\} = \{0, \ldots, n-1\} = n $$ Where the last equality follows if the natural numbers are defined as the Von Neumann ordinals. We define an $n$-tuple $X$ over the set $S$ as the function $$ X:[n] \to S $$ Then the $i^{\text{th}}$ element of $X$ where $i \in [n]$ can be found by $$ X(i) = X_i $$ We write $X= (X_0, \ldots, X_{n-1})$ with $X_i \in S$. If we want the set $\{X_0, \ldots, X_{n-1}\}$ then we can take the image of $[n]$ under $X$. That is $$ X([n]) $$ So we could write, for example, $$ \forall s \in X([n])\ldots $$ If you want you could introduce a special notation for this like $$ \text{Set}_n(X), \text{Set}(X), \text{Img}_X([n]), \text{Img}(X) $$ to indicate where converting the tuple $X$ into the set containing elements of its range. None of this is really standard or pretty but I guess it's possible. I answer here because I had the same question as you and this is as far as I've gotten.
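In Python terms (an informal illustration, not ZFC — a dict on $\{0,\dots,n-1\}$ standing in for the function $X:[n]\to S$):

```python
# A 4-tuple over S = {'a', 'b', 'c'} viewed as a function [4] -> S
X = {0: 'a', 1: 'b', 2: 'a', 3: 'c'}

def image(X):
    """Img(X): the set of entries, forgetting order and multiplicity."""
    return set(X.values())

print(len(X), image(X))   # 4 positions, but only 3 distinct elements
```

Quantifying "for all $s\in X([n])$" then means ranging over `image(X)`, not over the positions, which is exactly the distinction being made above.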
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Ito's formula, and the relationship between dt and dB(t) I am reading Bernt Oksendal's Stochastic Differential Equations. I have just seen Ito's formula; after this, the author says that $dX(t)^{2}$ is calculated using $$dt\cdot dt=dt \cdot dB(t) =dB(t)\cdot dt =0$$ and $$dB(t) \cdot dB(t) = dt$$ (page 45, at the top). Where do these relations come from, and what is the intuition behind them?
$dt\cdot dt=0$ is shorthand notation for the fact that the quadratic variation of the identity function with itself is $0$. $dt\cdot dB(t)=0$ is shorthand notation for the fact that the cross variation of Brownian motion with the identity function is $0$. $dB(t)\cdot dB(t)=dt$ is shorthand notation for the fact that the quadratic variation of Brownian motion with itself over a time interval of length $t$ is equal to $t$.
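A quick simulation of why these shorthands hold (my own sketch): partition $[0,1]$ into $n$ steps and sum the products of increments:

```python
import math, random

random.seed(1)
n, T = 100_000, 1.0
dt = T / n
dB = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]   # Brownian increments

qv_B  = sum(b * b for b in dB)      # "dB·dB": quadratic variation, tends to T
cross = sum(dt * b for b in dB)     # "dt·dB": cross variation, tends to 0
qv_t  = n * dt * dt                 # "dt·dt": quadratic variation of t, tends to 0

print(qv_B, cross, qv_t)
```

As the mesh shrinks, the first sum concentrates at $T$, while the other two vanish — exactly the three rules above.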
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
the derivative of $f(x)=|x|^{\frac 52}$ From my point of view the answer must be $\frac 52|x|^{\frac 32}$, but the answer in my textbook is $\frac 52 x |x|^{\frac 12}$. Help me to resolve this. I tried breaking the function into positive and negative parts and differentiating directly, but my answer did not match.
Breaking it up into a piecewise function is fine. $f(x) = x^{\frac 52}$ (for $x \geq 0$) [$1$] $f(x) = (-x)^{\frac 52}$ (for $x < 0$) [$2$] Differentiating them separately, $f'(x) = \frac 52x^{\frac 32}$(for $x \geq 0$) [$1$] $f'(x) = -\frac 52(-x)^{\frac 32}$(for $x < 0$) [$2$] The second part can be rearranged to: $f'(x) = -\frac 52(-x)(-x)^{\frac 12} = \frac 52 x (-x)^{\frac 12} = \frac 52 x|x|^{\frac 12}$, which holds for $x < 0$. Note where the minus signs cancel out. And for the first piece, a similar rearrangement holds trivially, $f'(x) = \frac 52x^{\frac 32} = \frac 52 x (x)^{\frac 12} = \frac 52 x|x|^{\frac 12}$, which holds for $x \geq 0$. So $f'(x) = \frac 52 x|x|^{\frac 12}$ for the entire domain. Your book is right.
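A quick finite-difference check that the book's single formula matches $f'$ on both sides of $0$ (and at $0$ itself):

```python
def f(x):  return abs(x) ** 2.5
def fp(x): return 2.5 * x * abs(x) ** 0.5    # the book's formula

h = 1e-6
errs = [abs((f(x + h) - f(x - h)) / (2 * h) - fp(x))
        for x in (-2.0, -0.5, 0.0, 0.7, 3.0)]
print(errs)
```

All the errors are at the level of the finite-difference noise, for negative, zero, and positive $x$ alike.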
{ "language": "en", "url": "https://math.stackexchange.com/questions/2872899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Derivative to function ratio Is there a physical meaning for the derivative-to-function ratio? That is, the quantity $$ Q(x) = \frac{1}{f(x)}\frac{df(x)}{dx} $$ For instance, if $f$ is the potential energy, this would be a work-to-potential-energy ratio. More generally, what can we say about $Q(x)$, say when $Q(x)<0$?
If $f(t)$ measures the size of the population, and $t$ is time, then $Q(t)=f'(t)/f(t)$ is the rate of change of the population per capita. For example, if each individual reproduces at a constant rate $r$, then $f'(t)/f(t)=r$, so we get exponential population growth $f(t)=f(0) \, e^{rt}$. But if the per capita growth rate decreases linearly with the size of the population (due to limited resources, for example), $f'(t)/f(t)=r(1-f(t)/K)$, we get logistic growth.
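A small numerical illustration of the logistic case (my own parameter choices): the closed-form logistic solution satisfies $f'/f = r(1-f/K)$ at every $t$:

```python
import math

r, K, f0 = 0.8, 100.0, 5.0

def f(t):
    # closed-form logistic solution with f(0) = f0
    return K / (1.0 + (K / f0 - 1.0) * math.exp(-r * t))

h = 1e-6
errs = []
for t in (0.0, 1.0, 5.0):
    per_capita = (f(t + h) - f(t - h)) / (2 * h) / f(t)   # Q(t) = f'/f
    errs.append(abs(per_capita - r * (1 - f(t) / K)))
print(errs)
```

Note $Q(t)>0$ while $f<K$ (the population still grows per capita) and $Q\to 0$ as $f\to K$.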
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Question of Automorphism $T$ on Finite group with property that $T(x)=x$ only for $x=e$ Let $G$ be a finite group and $T$ an automorphism of $G$ with the property that $T(x)=x$ only for $x=e$. Then 1) every $g \in G $ can be written as $g=T(x)x^{-1}$ for some $x\in G$; 2) furthermore, if $T^2=\mathrm{Identity}$ then $G$ is an abelian group. My first question is the following: 0) What is so important about the condition $T(x)=x$ only for $x=e$? Is there any significance associated with it? My attempt for 1: Let $G =\{x_1,x_2,\dotsc,x_n\}$ be a finite group. After applying the required transformation we get $\{T(x_1)x_1^{-1},T(x_2)x_2^{-1},\dotsc,T(x_n)x_n^{-1}\}$. Now suppose to the contrary that they are not all distinct: $T(x_i)x_i^{-1}=T(x_j)x_j^{-1}$ for some $i\neq j$, i.e. $T(x_j)^{-1}T(x_i)=x_j^{-1}x_i$, which gives $T(x_j^{-1}x_i)=x_j^{-1}x_i$. By the given condition, $x_j^{-1}x_i=e$, i.e. $x_i=x_j$, a contradiction. Hence all are distinct, and by the pigeonhole principle we get the required result. Am I giving the right argument? 2) For the second part I am not able to use the given data. Any help will be appreciated.
Since $G$ is finite, the map $f\colon G\to G$ defined by $x\mapsto f(x):=T(x)x^{-1}$ is surjective (so the property 1) holds) if and only if it is injective. And it is injective, because: \begin{alignat}{1} &f(x)=f(y)&&\Longrightarrow \\ &T(x)x^{-1}=T(y)y^{-1}&&\Longrightarrow \\ &T(y^{-1}x)=y^{-1}x&&\Longrightarrow \\ &y^{-1}x=e&&\Longrightarrow \\ &x=y \\ \end{alignat} where the middle implication holds precisely by the assumption $T(g)=g\Longrightarrow g=e$. For the second question, note that if, in addition, $T^2=Id$, then $\forall g \in G,\exists x\in G$ such that: \begin{alignat}{1} T(g) &= T(T(x)x^{-1}) \\ &= T(T(x))T(x^{-1}) \\ &= xT(x)^{-1} \\ &= (T(x)x^{-1})^{-1} \\ &= g^{-1} \end{alignat} Therefore, for every $x,y\in G$, both: $$T(xy)=T(x)T(y)=x^{-1}y^{-1}=(yx)^{-1}$$ and: $$T(xy)=(xy)^{-1}$$ whence $yx=xy$.
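A finite sanity check (my own example): take $G=\mathbb Z/7\mathbb Z$ written additively, with the automorphism $T(x)=-x$, which fixes only $0$ and satisfies $T^2=\mathrm{id}$; property 1 says $x\mapsto T(x)-x$ (the additive version of $T(x)x^{-1}$) is onto:

```python
n = 7                                   # G = Z/7Z, written additively
T = lambda x: (-x) % n                  # the automorphism x -> -x

fixed = [x for x in range(n) if T(x) == x]
involution = all(T(T(x)) == x for x in range(n))
image = sorted({(T(x) - x) % n for x in range(n)})   # additive "T(x)x^{-1}"

print(fixed, involution, image)   # [0] True [0, 1, 2, 3, 4, 5, 6]
```

Here $T(x)-x=-2x$, which is a bijection of $\mathbb Z/7\mathbb Z$ since $\gcd(2,7)=1$ — exactly the surjectivity in property 1.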
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving a formula for $\pi$ I found a formula for $\pi$ in this paper. However, I could not find any proof of this formula, and I don't know how to approach to it. Is there good explanation for it? $$ \pi + 3 = \sum_{n=1}^\infty \frac{n2^nn!^2}{(2n)!} $$
Here's part of an exercise from the book Pi and the AGM by Borwein and Borwein. Prove that $$\frac{2x\sin^{-1}x}{\sqrt{1-x^2}}=\sum_{m=1}^\infty \frac{m!^2(2x)^{2m}}{m(2m)!}.\tag{1}$$ Hint: show that $f=(\sin^{-1}x)/\sqrt{1-x^2}$ satisfies $(1-x^2)f'=1+xf$. Granted $(1)$, applying the operator $\frac{x}{2}\frac{d}{dx}$ gives a formula for $$\sum_{m=1}^\infty \frac{m!^2(2x)^{2m}}{(2m)!}.$$ Doing it again gives a formula for $$\sum_{m=1}^\infty m\frac{m!^2(2x)^{2m}}{(2m)!}.$$ Finally set $x=1/\sqrt{2}$, so that $(2x)^{2m}=2^m$.
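A numerical check of the stated identity, building the terms $t_n = n\,2^n (n!)^2/(2n)!$ via the ratio $t_{n+1}/t_n = (n+1)^2/\bigl(n(2n+1)\bigr)$ (which decays toward $\frac12$, so convergence is geometric):

```python
from math import pi

s, t = 0.0, 1.0                 # t_1 = 1·2·(1!)²/2! = 1
for n in range(1, 200):
    s += t
    t *= (n + 1)**2 / (n * (2*n + 1))   # t_{n+1}/t_n

print(s, pi + 3)
```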
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_M(x-y^2+z^3)dS$ Evaluate $\int_M(x-y^2+z^3)dS$ when $M$ is the part of the cylinder $x^2+y^2=a^2$ where $a>0$ which is between the two planes $x-z=0$ and $x+z=0$. So I did not manage to use green/gauss/stocks, so I tried to solve it as a surface integral. first to find $\|n\|$ we use the parameterisation $\phi(u,v)=(a\cos (u),a\sin (u),v)$ $\phi_u\times\phi_v=(a\cos(u),a\sin(u),0)$ So $\|n\|=a$ So the integral is $\iint (a \cos(u)-a^2\sin^2(u)+v^3)adudv$ but I can I find the limit of integration? I know that $u\in[0,2\pi]$ and as for $v$ is is bounded by $x$ and $-x$ P.S or I can say that $F=\nabla\cdot(\frac{x^2}{2},-\frac{y^3}{3},\frac{z^4}{4})$ and so I can use gauss? On $\phi(r,t,v)=(r\cos t,r\sin t,v)$ where $\theta\in[0,2\pi],r\in[0,a],v\in[-r \cos \theta,\cos\theta]$
You have to compute this as a surface integral. Stokes' and Gauss' theorems deal with the flux of certain vector fields across a surface, but here a scalar function (representing, e.g., a temperature) is integrated over $M$. Your parametrization is fine, and leads to the scalar surface element $${\rm d}S=|\phi_u\times\phi_v|\>{\rm d}(u,v)=a\>{\rm d}(u,v)\ .$$ In a sketch of the $(x,z)$-plane you can see that the part of the cylinder we are interested in is characterized by $|z|\leq |x|$, or $|v|\leq a|\cos u|$. Therefore we have to split the integral ($=:J$) into two parts corresponding to $-{\pi\over2}\leq u\leq{\pi\over2}$ and ${\pi\over2}\leq u\leq{3\pi\over2}$. In this way we obtain $$J=a\int_{-\pi/2}^{\pi/2}\int_{-a\cos u}^{a\cos u}\Psi(u,v)\>dv\>du+a\int_{\pi/2}^{3\pi/2}\int_{a\cos u}^{-a\cos u}\Psi(u,v)\>dv\>du\ .$$ Here $\Psi(u,v)$ denotes the pullback $$\Psi(u,v)=(x-y^2+z^3)\biggr|_{(x,y,z):=\phi(u,v)}=a\cos u-a^2\sin^2 u +v^3\ .$$ It becomes obvious that the contribution of the $v^3$ term is zero, by symmetry. I may leave the rest to you.
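Carrying out the remaining elementary integrals by hand gives $J=-\frac83 a^4$ (my own computation — worth double-checking); a stdlib numerical check with $a=1$, using an ad-hoc Simpson rule:

```python
from math import sin, cos, pi

def simpson(f, a, b, n=400):                       # ad-hoc composite Simpson rule
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
            + 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))) * h / 3

a = 1.0

def inner(u):
    # integrate Psi(u, v) in v over |v| <= a|cos u| (covers both u-ranges)
    lo, hi = sorted((-a * cos(u), a * cos(u)))
    return simpson(lambda v: a * cos(u) - a*a*sin(u)**2 + v**3, lo, hi)

J = a * (simpson(inner, -pi/2, pi/2) + simpson(inner, pi/2, 3*pi/2))
print(J)    # ≈ -8/3, i.e. -8a^4/3 for a = 1
```

The $v^3$ term indeed drops out by symmetry, and the $\pi a^3$ contributions from the two $u$-ranges cancel, leaving only the $a^4$ part.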
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Runs Application Probability Problem Feller Taken from a problem in Feller's Probability Theory Vol. 1, Chapter 2, Section 5: Suppose that an observation yielded the following arrangement of empty and occupied seats along a lunch counter: EOEEOEEEOEEEOEOE. There are $11$ runs here, and Feller argues that the probability of eleven runs is $0.0578...$, so it is unlikely that this arrangement is due to chance; this excess of runs (with five occupied and eleven empty seats it is impossible to get more than eleven runs) points to intentional mixing (people not wanting to sit next to each other if possible). Feller grabs this number out of the air, so does anyone know how he calculates $0.0578...$, and how one would do this in a more general case? Thank you.
I think it is possible that Feller's calculator was off by 0.0001, since I am getting the answer to be 0.0577 after rounding. (It is 3/52.) So, how do we go about calculating this? It is a simple counting argument: assuming a length-16 string with 11 Es and 5 Os, how do we count the probability of getting a string that has 11 "runs" (by a run, we refer to a contiguous sequence of Es or Os)? We quickly note (like OP noted) that 11 is indeed the maximum, and the only way that can happen is if the string starts and ends with an E and no two Os are adjacent. This reduces the problem to a stars and bars problem, since any such string must be of the form: E $B_1$ O E $B_2$ O E $B_3$ O E $B_4$ O E $B_5$ O E $B_6$ where each $B_i$ contains some number of Es and the total number of Es in all those buckets is 5 (we used 6 of them to guarantee that we have 11 runs). Now, by a stars and bars argument, the number of such strings is $\binom{10}{5}$ (to count, just note that permutations of 5 stars and 5 bars correspond to fillings of 6 buckets with 5 Es). Now just count the total number of options, which is $\binom{16}{5}$, and thus the probability is indeed $\frac{\binom{10}{5}}{\binom{16}{5}}$, which is approximately 0.0577.
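Since there are only $\binom{16}{5}=4368$ arrangements, brute force confirms the count (a quick sketch):

```python
from itertools import combinations, groupby
from fractions import Fraction

N, occupied = 16, 5
total = favorable = max_runs = 0
for occ in combinations(range(N), occupied):
    s = ''.join('O' if i in occ else 'E' for i in range(N))
    runs = len(list(groupby(s)))        # number of maximal blocks
    total += 1
    max_runs = max(max_runs, runs)
    if runs == 11:
        favorable += 1

print(favorable, total, Fraction(favorable, total))   # 252 4368 3/52
```

The enumeration also confirms that 11 runs is the maximum, and that Feller's observed arrangement attains it.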
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differential Polynomials(?) Consider an equation of the form: $ay''+by'+cy$ or something of that form. Essentially, it's a polynomial, but instead of powers there are derivatives. Do these kinds of things have a name? Or are they completely useless? Note: I KNOW what Taylor polynomials and the like are, but I mean something of the form shown.
Yes: the polynomial associated with $$ ay'' + by' +cy =0$$ is $$P(\lambda )= a \lambda ^2 + b \lambda +c,$$ which is called the characteristic polynomial. This polynomial plays a very important role in finding the solutions to your differential equation. When $P$ has distinct roots $\lambda_1$ and $\lambda_2$, the general solution to the differential equation is $$ y=C_1 e^{\lambda _1 t} +C_2 e^{\lambda _2 t}.$$ (A repeated root $\lambda$ instead contributes $(C_1+C_2 t)e^{\lambda t}$.)
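A quick numerical illustration with, say, $a=1$, $b=-3$, $c=2$ (characteristic roots $1$ and $2$), checking via finite differences that $y=e^{\lambda t}$ solves the ODE exactly when $P(\lambda)=0$:

```python
import math

a, b, c = 1.0, -3.0, 2.0        # P(λ) = λ² - 3λ + 2, roots 1 and 2
h = 1e-4
residuals = []
for lam in (1.0, 2.0):
    y = lambda t, lam=lam: math.exp(lam * t)
    for t in (0.0, 0.5, 1.0):
        y1 = (y(t + h) - y(t - h)) / (2 * h)               # y'
        y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2       # y''
        residuals.append(abs(a * y2 + b * y1 + c * y(t)))

print(max(residuals))           # ≈ 0
```

Plugging in $y=e^{\lambda t}$ gives $ay''+by'+cy=P(\lambda)e^{\lambda t}$, so the residual vanishes precisely at the roots of $P$.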
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What algorithm do scientific calculators use to calculate Logarithms I have been introduced to numerical analysis and have been researching quite a bit on its applications recently. One specific application would be the scientific calculator. From the information that I found, computers typically use the Taylor series to compute trigonometric functions, while calculators usually take a different approach and use the CORDIC algorithm to compute such problems, including hyperbolic functions, inverse trig, square roots, etc. However, how would they calculate logarithms? I couldn't really find any information about this and was hoping someone would point me in the right direction or provide any insights on what kind of algorithms are used in such calculations. Thanks in advance.
Modern Computer Arithmetic suggests using an arithmetic-geometric mean algorithm. I'm not sure if this approach is meant for the low amount of precision one typically works with or if it's meant for calculation in very high precision. Another approach is to observe that the Taylor series for $\ln(x)$ is efficient if $x$ is very close to $1$. We can use algebraic identities to reduce the general case to this special case. One method is to use the identity $$ \ln(x) = 2 \ln(\sqrt{x})$$ to reduce the calculation of $\ln(x)$ to that of an argument closer to 1. We could use a similar identity for more general radicals if we can compute those efficiently. By iteratively taking roots until we get an argument very close to $1$, we can reduce to $$ \ln(x) = m \ln(\sqrt[m]{x})$$ which can be computed by the Taylor series. If you store numbers in mantissa-exponent form in base 10, an easy identity to exploit is $$ \ln(m \cdot 10^e) = e \ln(10) + \ln(m)$$ so the plan is to precompute the value of $\ln(10)$, and then use another method to obtain $\ln(m)$, where $m$ is not large or small. A similar identity holds in base 2, which a computer is likely to use. A way to use lookup tables to accelerate the calculation of $\ln(x)$ when $x$ is not large or small is to observe that $$ \ln(x) = \ln(k) + \ln(x/k) $$ The idea here is that you store a table of $\ln(k)$ for enough values of $k$ so that you can choose the $k$ nearest $x$ to make $x/k$ very near $1$, and then all that's left is to compute $\ln(x/k)$.
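A toy version of the "repeated square roots + short Taylor series" reduction described above (a sketch only — not how any particular calculator actually does it):

```python
import math

def my_ln(x, tol=1e-5, terms=8):
    """ln(x) via repeated square roots until x is near 1, then a short
    Taylor series for ln(1 + u), scaled back by 2**k. A toy sketch."""
    assert x > 0
    k = 0
    while abs(x - 1.0) > tol:       # ln(x) = 2^k * ln(x^(1/2^k))
        x = math.sqrt(x)
        k += 1
    u = x - 1.0
    s, power = 0.0, 1.0
    for n in range(1, terms + 1):   # ln(1+u) = u - u²/2 + u³/3 - ...
        power *= u
        s += (-1) ** (n - 1) * power / n
    return s * 2 ** k

for v in (0.1, 0.5, 2.0, 10.0, 12345.678):
    print(v, my_ln(v), math.log(v))
```

Only a handful of square roots are needed (roughly $\log_2$ of $|\ln x|/\text{tol}$), after which the series converges extremely fast.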
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Definition of dependence in probability Here is classical definition and example of dependent events. "When two events are said to be dependent, the probability of one event occurring influences the likelihood of the other event. For example, if you were to draw a two cards from a deck of $52$ cards. If on your first draw you had an ace and you put that aside, the probability of drawing an ace on the second draw is greatly changed because you drew an ace the first time". Let’s consider another scenario: Suppose I apply to a job. There are two interviews. The second interview ($B$) will take place only if I pass the first interview ($A$). So, we have probabilities of $P(A)$ and $P(B)$. Two events per se do not depend on each other because different people conduct the interviews. But $B$ will not take place if $A$ was a failure. So $P(B)\ne P(B\mid A)$. So, can it be said that events $A$ and $B$ are dependent? Thanks!
Perhaps the most concise definition of independent events is $P(A\,\text{and}\,B)=P(A)P(B)$. In your example, constants $p,\,q$ exist for which $P(A)=p,\,P(A\,\text{and}\,B)=P(B)=q$ (since $B\subseteq A$), and then independence forces $q=pq$, i.e. $q(1-p)=0$ — so the events are independent only if $p=1$ or $q=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Simple problem on conditional geometric probability The occurrence of the event $A$ is equally likely in every moment of the interval $[0, T]$. The probability of event $A$ occurring at all in this interval is $p$. Given that $A$ hasn't occurred in the interval $[0,t]$ what's the probability that $A$ will occur in $[t, T]$? I am getting a different answer than the one given in the book. I wonder which one is correct. My answer is $\dfrac{Tp-tp}{T-pt}$ The book's answer is my answer divided by $p$.
We have $$\Pr(A\in (0,T)\mid A\not\in(0,t))=\frac{\Pr(A\in (t,T))}{\Pr(A\not\in(0,t))}.$$ The top probability is $p\frac{T-t}{T}$ and the bottom is $1-p\frac{t}{T}$; rearranging this gives the same answer you have.
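A Monte Carlo sanity check of $\frac{Tp-tp}{T-pt}$ (my own sketch, with arbitrary values $p=0.6$, $t=0.3$, $T=1$):

```python
import random

random.seed(2)
p, t, T = 0.6, 0.3, 1.0
trials = 200_000
later = not_early = 0
for _ in range(trials):
    occurred = random.random() < p                  # does A happen at all?
    when = random.uniform(0, T) if occurred else None
    if not (occurred and when < t):                 # A did not occur in [0, t]
        not_early += 1
        if occurred:                                # then A occurs in [t, T]
            later += 1

est = later / not_early
exact = (T * p - t * p) / (T - p * t)               # = p(T - t)/(T - pt)
print(est, exact)
```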
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Types of undefined for removable discontinuities and vertical asymptotes Consider this rational function: $$ f(x) = \frac{x^2 - 2x - 24}{x^2 + 10x + 24} $$ I have been taught that to solve for a removable discontinuity, I find the $x$ values such that both the numerator and denominator are equal to $0$; and to solve for vertical asymptotes, I find the $x$ values that make just the denominator equal to zero. So: $$ f(x) = \frac{(x-6)(x+4)}{(x+6)(x+4)} = \frac{x-6}{x+6}, \quad x \neq -4 $$ What this means is that we have a vertical asymptote at $x = -6$ and a removable discontinuity at $x = -4$. Great. I can compute these. But why? We have two kinds of undefined here, $f(x_0) = \frac{0}{0}$ and $f(x_0) = \frac{g(x)}{0}$. Why do these result in different types of undefined behavior?
We have that: at $x=-4$ the function is not defined, but we don't have any vertical asymptote, since for $x\to -4$ we have $f(x) \to -5$; at $x=-6$ the function is also not defined, but we have a vertical asymptote, since for $x\to -6$ we have $f(x) \to \pm \infty$. Note that in the case of $x=-4$ we can remove the discontinuity by defining $f(-4)=-5$, but for $x=-6$ we can't, since $|f(x)| \to \infty$. Note also that a discontinuity of the form $f(x_0) = \frac{0}{0}$ cannot always be removed, as for example for $$f(x)=\frac{\log (1+x)}{x^2}$$ at $x=0$, since $|f(x)| \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2873940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove $x^2<4 \implies|x|<2$? How to prove $x^2<4 \implies|x|<2$? I don't know exactly there is a proof for this or we take this as an axiom. Please help. What I did so far is, $$x^2-4<0$$ $$(x-2)(x+2)<0$$ $$-2<x<2$$ From this step can we directly say? $|x|<2$
What I did so far is, $$x^2-4<0$$ $$(x-2)(x+2)<0$$ Since $\,x^2=|x|^2\,$, you could also factor it as: $$\left(|x|-2\right)\left(|x|+2\right) \lt 0$$ Given that $\,|x|+2 \gt 0\,$ it follows that $\,|x| - 2\lt 0\,$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Mapping and conservation law What is a general map from a point sitting in one dimensional space, to a set of points sitting in two dimensional space, to a set of points sitting in three dimensional space, to a set of points embedded in a torus sitting in four dimensional space? This is what I have but I'm not sure if it's correct notation. The parentheses are to show that those mappings inside the parentheses occur before the mapping to the torus. The motivation for this comes from physics and viewing these points as fixed points in space throughout all transformations, which lends itself to a conservation of momentum law for these points, which can be thought of as particles. The last embedding onto the torus is to provide a ring structure to the solution space. Physicists are not necessarily known as tidy mathematicians so I thought I'd seek help from the pros. $ F : ((\Bbb R \to \Bbb R^2) \to \Bbb R^3) \hookrightarrow \Bbb T^4 $. Thanks.
If I understand correctly you have first a mapping $\mathbb R \stackrel{f}{\longrightarrow} \mathscr P(\mathbb R^2),$ where $\mathscr P$ denotes power set, and then a chain of mappings $\mathbb R^2 \stackrel{g}{\longrightarrow} \mathbb R^3 \stackrel{\iota} {\hookrightarrow} \mathbb T^4$ acting on each element in $f(t)$ for $t \in \mathbb R.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finite dimensional subspace is weak star closed I want to show the weak star closed convex hull of a finite set of points is contained in the linear span of those points. It's enough to show that any finite dimensional subspace $V$ of a Banach space $Z$ is weak star closed in $Z$. Since $V$ is finite dimensional, it is a closed subspace of $Z$ in the norm topology. How can I show that it is also weak star closed? Also, it this result true for arbitrary subspaces?
This is true not only for arbitrary subspaces of Banach spaces, but in fact for arbitrary subspaces of locally convex spaces. For any locally convex space $X$, the weak and original closures of any convex set $E \subset X$ are the same (see, for instance, Theorem 3.12 here). Since Banach spaces are locally convex and subspaces of any space are convex, this shows that the weak and original closures of any subspace of a Banach space are the same.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is $\epsilon^{p,q}(X):=\Gamma(X,\bigwedge^p\Omega^1_X\otimes\overline{\bigwedge^q\Omega^1_X})$? Here $\Omega^1_X=(T^*X)^{1,0}$; from these notes we have $\epsilon^{p,q}(X):=\Gamma(X,\bigwedge^p\Omega^1_X\otimes\overline{\bigwedge^q\Omega^1_X})$. But I wonder why it is a tensor product and not a wedge. Shouldn't it be $\epsilon^{p,q}(X):=\Gamma(X,\bigwedge^p\Omega^1_X\wedge\overline{\bigwedge^q\Omega^1_X})$? See Wells's book (p. 33) or here
Let us reduce ourselves to the fiber. Let $V$ and $\bar V$ be two different $\Bbb R$-spaces with $W=V\oplus \bar V$, and assume there is an isomorphism over $\Bbb R$, $v\mapsto\bar v$, between the two summands. We fix a basis $B$ of $V$, and thus also one, $\bar B$, for $\bar V$. Now let us consider an element in $$\wedge^\cdot W\ .$$ It can be written uniquely "adapted" w.r.t. the basis $B\sqcup \bar B$ of $W$ over $\Bbb R$. Adapted means the following. We order $B$ and correspondingly $\bar B$. Each wedge product in $\wedge^\cdot W$ is rewritten so that we have first a wedge in $\wedge^\cdot V$, and then one in $\wedge^\cdot \bar V$. This gives a well-defined map $$ \wedge^\cdot W\to \wedge^\cdot V\otimes \wedge^\cdot \bar V\ ,$$ which is inverse to the obvious map $$ \wedge^\cdot W\leftarrow \wedge^\cdot V\otimes \wedge^\cdot \bar V\ ,$$ in the opposite direction. So why $X\otimes Y$ and not $X\wedge Y$ for $X=\wedge^\cdot V$, $Y=\wedge^\cdot \bar V$? Because these are (seen as) different spaces from the start. (And there is no alternation in general for two different spaces $X,Y$. There is a space $X\otimes Y$, and a space $Y\otimes X$, defined using universal constructions, but there is no $X\wedge Y$. We would have to identify the two tensor products in a common world, but in our case we definitely do not want to identify, for instance, $dz_k$ with $d\bar z_k$ in the tangent space; we did our best to keep them different.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what is the difference between $D(g\circ f)(x)$ and $Dg(f(x))$? Let $A$ be an open set in $\mathbb R^m$. Let $B$ be open in $\mathbb R^n$. Let $f: A \to \mathbb R^n$ and $g: B \to \mathbb R^p$ where $B = f(A)$. If $f$ is differentiable at $a$ and $g$ is differentiable at $f(a) = b$, then $D(g\circ f)(a) = Dg(f(a))\,Df(a)$. I got this theorem in Munkres's Analysis on Manifolds. I cannot understand what the difference is between $D(g\circ f)(x)$ and $Dg(f(x))$. Can anyone please help me understand? Thank you in advance.
I think you are missing parentheses, which makes your question hard to parse. Here is how I parse it: I can not understand what is the difference between $D(g\circ f)(x)$ and $Dg(f(x))$. Think of it this way: $D$ is an operator, which "takes" as input a function and outputs another function. Therefore, if $f$ is the input, then $D(f)$ (often abbreviated $Df$, which does not help with the confusion) is another function, the "output" of $D$ applied to $f$. Therefore, $D(g\circ f)(x)$ is what you get when applying the function $D(g\circ f)$ on the point $x$. On the other hand, $Dg(f(x))$ is what you get when applying the function $Dg$ (that is, $D(g)$) on the point $f(x)$. Put differently: $h_1= D(g\circ f)$ and $h_2= D(g)$ are two functions, while $x$ and $y=f(x)$ are two points. With this notation, you are asking about the difference between $h_1(x)$ and $h_2(y)$.
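To make the distinction concrete, here is a small numeric check (my own illustration, with hypothetical maps $f:\mathbb R\to\mathbb R^2$ and $g:\mathbb R^2\to\mathbb R$ chosen just for this sketch): $D(g\circ f)(x)$ differentiates the composite at the point $x$, while $Dg(f(x))$ is the differential of $g$ evaluated at the point $f(x)$; the chain rule says the former equals the latter multiplied by $Df(x)$.

```python
import math

# Example maps: f : R -> R^2 and g : R^2 -> R
def f(x):
    return (math.sin(x), x * x)

def g(u, v):
    return u * v + v * v

# Df(x) is a 2x1 matrix (a pair), Dg(u, v) is a 1x2 matrix (a pair)
def Df(x):
    return (math.cos(x), 2 * x)

def Dg(u, v):
    return (v, u + 2 * v)

x = 0.7
# D(g o f)(x): derivative of the composite, here via central differences
h = 1e-6
composite = lambda t: g(*f(t))
D_gf_numeric = (composite(x + h) - composite(x - h)) / (2 * h)

# Dg(f(x)) . Df(x): the matrix product of the two differentials
grad = Dg(*f(x))   # evaluated at the POINT f(x), not at x
vel = Df(x)
D_gf_chain = grad[0] * vel[0] + grad[1] * vel[1]
```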
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Euler Lagrange, weird Second Order DE I have found the Euler-Lagrange equation of the functional $$\int_{a}^{b} y'^2+y^4\,\mathrm dx$$ to be $y'' = 2y^3$. How do I solve this non-linear DE?
$$y''=2y^3$$ $$2y''y'=4y^3y'$$ $$y^4=y'^2+c_1$$ $$y'=\pm \sqrt{y^4-c_1}$$ $$x=\pm\int \frac{\mathrm dy}{\sqrt{y^4-c_1}}+c_2$$ This is an elliptic integral : http://mathworld.wolfram.com/EllipticIntegral.html The inverse function $y(x)$ involves the sn Jacobi elliptic function : http://mathworld.wolfram.com/JacobiEllipticFunctions.html $$y(x)= C_1 \text{sn}\left(C_1(x+c_2)\:|\:-1 \right)$$ with $C_1=c_1^{1/4}$
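The first reduction step, $y'^2=y^4-c_1$, can be checked numerically: along any solution of $y''=2y^3$, the quantity $y'^2-y^4$ must stay constant. A small RK4 sketch (my own, with arbitrary initial data, not part of the answer):

```python
# Integrate y'' = 2 y^3 as the first-order system (y, p) with p = y',
# and check that the invariant y^4 - p^2 (= c_1) is conserved.
def rhs(y, p):
    return p, 2 * y ** 3

def rk4_step(y, p, h):
    k1y, k1p = rhs(y, p)
    k2y, k2p = rhs(y + h * k1y / 2, p + h * k1p / 2)
    k3y, k3p = rhs(y + h * k2y / 2, p + h * k2p / 2)
    k4y, k4p = rhs(y + h * k3y, p + h * k3p)
    return (y + h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6,
            p + h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

y, p, h = 0.1, 1.0, 1e-3
c1_start = y ** 4 - p ** 2        # from y^4 = y'^2 + c_1
for _ in range(1000):             # integrate up to x = 1
    y, p = rk4_step(y, p, h)
c1_end = y ** 4 - p ** 2
drift = abs(c1_end - c1_start)
```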
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
In a village, $90\%$ of people drink Tea, $80\%$ Coffee, $70\%$ Whiskey, $60\%$ Gin. Nobody drinks all four. What percentage of people drinks alcohol? In a small village $90\%$ of the people drink Tea, $80\%$ Coffee, $70\%$ Whiskey and $60\%$ Gin. Nobody drinks all four beverages. What percentage of people of this village drinks alcohol? I got this riddle from a relative and first thought it can be solved with the inclusion-, exclusion principle. That the percentage of people who drink alcohol has to be in the range from $70\%$ to $100\%$ is obvious to me When $T$, $C$, $W$, and $G$ are sets, and I assume a village with $100$ people, then what I am looking for is $$\lvert W\cup G\rvert = \lvert W\rvert+\lvert G\rvert-\lvert W\cap G\rvert$$ I know that $$\lvert T \cap C \cap W \cap G \rvert = 0$$ and also the absolute values of the singletons. But I do not see how this brings me any closer, since I still need to figure out what $\lvert W\cap G\rvert$ is and that looks similar hard at this point On the way there I also noticed that $\lvert T\cap C\rvert \ge 70$ and similar $\lvert W\cap G\rvert \ge 30$ By now I think there is too little information to solve it precisely.
You can use inclusion/exclusion, but you might not have enough information. Or then again you might. The number of people who are in $A$ or $B$ is $A + B - (A\cap B)$, and so if $A+B > 100$ percent we can conclude $A+B - 100\le A\cap B \le \min (A,B)$. So $WHISKEY + GIN - 100 = 70+60-100 = 30 \le (WHISKEY \cap GIN) \le \min(WHISKEY, GIN) = GIN = 60$. Likewise $TEA + COFFEE - 100 = 90+80-100 = 70 \le (COFFEE \cap TEA) \le \min(COFFEE, TEA) = COFFEE = 80$. Let $A = (COFFEE \cap TEA)$ and $B = (WHISKEY \cap GIN)$; then $70 + 30 = 100 \le A + B$, so $A+B-100 \le A\cap B$. But we know that $A \cap B = (COFFEE \cap TEA \cap WHISKEY \cap GIN) = 0$. That can only happen if $A = 70$ and $B = 30$. So the percentage of people who drink alcohol is $WHISKEY + GIN - (WHISKEY \cap GIN) = 70 + 60 - 30 = 100$ percent. (We have everybody drinking alcohol and everybody drinking caffeine, and everybody drinks two of one and one of the other: $30\%$ drink gin, tea, and coffee; $40\%$ drink whiskey, tea, and coffee; $10\%$ drink whiskey, gin, and coffee; $20\%$ drink whiskey, gin, and tea.)
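The concrete split at the end can be verified mechanically; a short check (my own sketch) that it meets all the stated percentages, that nobody drinks all four, and that 100% drink alcohol:

```python
# Each group: (percent, set of beverages), matching the split in the answer.
groups = [
    (30, {"gin", "tea", "coffee"}),
    (40, {"whiskey", "tea", "coffee"}),
    (10, {"whiskey", "gin", "coffee"}),
    (20, {"whiskey", "gin", "tea"}),
]

totals = {b: sum(p for p, s in groups if b in s)
          for b in ("tea", "coffee", "whiskey", "gin")}
all_four = sum(p for p, s in groups if len(s) == 4)
alcohol = sum(p for p, s in groups if s & {"whiskey", "gin"})
```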
{ "language": "en", "url": "https://math.stackexchange.com/questions/2874859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 7, "answer_id": 0 }
Metric on the unit circle Let $d$ be a metric on the unit circle $S^1$ which defines the usual topology. Let $D = \sup_{x, y \in S^1} d(x,y)$ be the diameter for $S^1$ under this metric. Is it true that the map $d : S^1 \times S^1 \to [0,D]$ is an open map? Thank you!
The map is not always open. Here is a simple counter-example. The unit circle is homeomorphic to the following figure. This figure is endowed with the euclidean metric of the plane, that we then transport to a metric on the unit circle using the homeomorphism. Now consider the two points in red on the figure (or more exactly, their image on the unit circle by the homeomorphism). Let us call them $x$ and $y$. The distance will send any small neighborhood of $(x,y)$ to a set of the form $(\delta, \delta']$, with $\delta' < D$. Consequently, this map is not open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Weird Notation for Trace of an Endomorphism I am having some difficulty understanding a piece of notation from Riemannian Geometry: and Introduction to Curvature by John M. Lee. In Section 2 just under equation 2.3 Lee defines the trace operator which lowers the rank of a tensor by 2. He defines the map: $$\mathrm{tr}:T_{l+1}^{k+1}(V)\longrightarrow T_l^k(V)$$ By letting: $$\mathrm{tr}\; F(\omega^1,\dots\omega^l,V_1,\dots,V_k)$$ be the trace of the endomorphism: $$F(\omega^1,\dots,\omega^l,\bullet,V_1,\dots,V_k,\bullet)$$ But how is it that $F(\omega^1,\dots,\omega^l,\bullet,V_1,\dots,V_k,\bullet) \in\mathrm{End}(V)$, it looks like it should belong to $T_{l+1}^{k+1}$, I think my confusion lies with the $\bullet$ in the above expression. Unforturnately I cannot find any explanation of this notation in the textbook. Is this notation common for something that I am not aware of?
For fixed $\omega^1, \ldots, \omega^l \in V^*$ and $V_1, \ldots, V_k \in V$ the notation $F(\omega^1,\dots,\omega^l,\bullet,V_1,\dots,V_k,\bullet)$ signifies an element $G \in T_1^1(V)$ such that $$G(\omega^{l+1}, V_{k+1}) = F(\omega^1, \ldots, \omega^l, \omega^{l+1}, V_1, \ldots, V_k, V_{k+1}).$$ Then $\operatorname{tr} F \in T_l^k(V)$ is defined by $$(\operatorname{tr} F)(\omega^1, \ldots, \omega^l, V_1, \ldots, V_k) = \operatorname{tr}G.$$
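In components (my own illustration, for the case $k=l=1$ and with indices placed by my own convention) the definition reads $(\operatorname{tr}F)^{a}{}_{c} = \sum_i F^{ai}{}_{ci}$. A small numeric sketch checking that contracting the components agrees with taking the trace of the endomorphism $G$ for fixed arguments:

```python
import random

n = 3  # dim V
random.seed(0)
# Components F^{ab}_{cd} of F in T^2_2(V), upper indices a,b and lower c,d.
F = [[[[random.random() for _ in range(n)] for _ in range(n)]
      for _ in range(n)] for _ in range(n)]

# Contraction over the last upper and last lower slot: (tr F)^a_c = sum_i F^{ai}_{ci}
trF = [[sum(F[a][i][c][i] for i in range(n)) for c in range(n)]
       for a in range(n)]

# Fix a covector w (components w_a) and a vector v (components v^c);
# G = F(w, . , v, .) is an endomorphism with matrix G^b_d = sum_{a,c} w_a F^{ab}_{cd} v^c,
# and the claim is tr G = (tr F)(w, v).
w = [1.0, -2.0, 0.5]
v = [0.3, 0.7, -1.1]
G = [[sum(w[a] * F[a][b][c][d] * v[c] for a in range(n) for c in range(n))
      for d in range(n)] for b in range(n)]
trace_G = sum(G[i][i] for i in range(n))
trF_applied = sum(w[a] * trF[a][c] * v[c] for a in range(n) for c in range(n))
```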
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Help with Calculus Optimization Problem! We wish to construct a rectangular auditorium with a stage shaped as a semicircle of radius $r$, as shown in the diagram below (white is the stage and green is the seating area). For safety reasons, light strips must be placed on the perimeter of the seating area. If we have $45\pi + 60$ meters of light strips, what should $r$ be so that the seating area is maximized? So I first set the width of the seating area to 2r, and the depth to be x. The perimeter would then be $2x + 2r + \pi r$ = $45\pi + 60$. The problem asks us to maximize the area, though, so it's $2rx - (\pi r^2)/2$. I can solve the equation in terms of x so that it becomes $x = (45\pi + 60 - r(\pi + 2))/2$. Unfortunately, I'm stuck from this point on, so any hint that you could give me would be great. Thanks!
You are on the right track. You have $$2x+2r+\pi r=45\pi+60$$ so $$2x=45\pi+60-2r-\pi r.$$ Now you have $$A=2xr-\frac{\pi r^2}{2}.$$ Replace the value of $2x$ in the area equation using the first equation, to obtain the area as a function of the single variable $r$, so that you can differentiate with respect to $r$: $$A=(45\pi+60-2r-\pi r)\,r-\frac{\pi r^2}{2}$$ The area becomes $$A=45\pi r+60r-2r^2-\pi r^2-\frac{\pi r^2}{2}$$ Differentiate with respect to $r$: $$\frac{dA}{dr}=45\pi+60-2\pi r-4r-\pi r=0$$ thus $$r=\frac{45\pi+60}{3\pi+4}=15$$ so $$r=15$$ and $$2x=45\pi+60-2\cdot 15-\pi\cdot 15=30+30\pi.$$ The maximum area is $$A=45\pi\cdot 15+60\cdot 15-2\cdot 15^2-\pi\cdot 15^2-\frac{\pi\cdot 15^2}{2}$$ Also, check the second derivative to confirm that it is indeed a maximum.
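A quick numeric cross-check (my own sketch) that $r=15$ maximizes the seating area $A(r)=(45\pi+60-2r-\pi r)r-\pi r^2/2$ over the feasible range:

```python
import math

def area(r):
    # 2x = 45*pi + 60 - 2r - pi*r from the perimeter constraint
    two_x = 45 * math.pi + 60 - 2 * r - math.pi * r
    return two_x * r - math.pi * r ** 2 / 2

# Scan the feasible range (2x must stay nonnegative => r <= (45*pi+60)/(2+pi))
r_max = (45 * math.pi + 60) / (2 + math.pi)
grid = [i * 0.01 for i in range(1, int(r_max / 0.01))]
best_r = max(grid, key=area)
```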
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Trying to evaluate $\int_{0}^{\infty}\frac{\ln^{s}(t)\ln(1+t)}{t(1+t^2)}\mathrm dt$ $$G(s)=\int_{0}^{\infty}\frac{\ln^{s}(t)\ln(1+t)}{t(1+t^2)}\mathrm dt$$ Trying a substitution of $t=e^x$ $$G(s)=\int_{-\infty}^{\infty}\frac{x^s\ln(1+e^x)}{1+e^{2x}}\mathrm dx$$ I trying to evaluate $G(s)$, but unable to. Can anyone help? Thank you.
Assuming $s\in\mathbb{N}$, you may consider that $$ \int_{0}^{+\infty}\frac{t^{a-1}\log(1+t)}{1+t^2}\,\mathrm dt=\frac{\pi}{4\sin(\pi a)}\left[H_{-1/2-a/4}-H_{-a/4}+2\log(2)\cos\frac{\pi a}{2}+\pi\sin\frac{\pi a}{2}\right] $$ for any $a$ in a neighbourhood of the origin. In order to compute the wanted integral you may simply differentiate (wrt to $a$) the RHS multiple times, then consider the limit as $a\to 0$. This should work also by considering fractional derivatives, if defined in terms of the Laplace (inverse) transform. Ultimately we are just dealing with $D^{\alpha}\log\Gamma(z)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Explain the odd/even inequality in the heights of numbers under the Collatz $(3x+1)/2$ transformation? My kid asked me a question and I'm finding it hard to answer: if every number under repeated application of the Collatz transformation1 eventually reaches $1$, then it must be true for both even and for odd numbers. Why, then, for numbers of a given "height" (the number of divisions before reaching $1$, A006666 in the OEIS) are there so many more even numbers than odd numbers? If you halve an even number then $50$% of the time the result should be odd. And if there are more even numbers than odd ones at every given height, won't you run out of even numbers more quickly? In fact the ratio of even:odd numbers of a given height is approximately $3:1$ (Pari/GP code below).2 My answer wasn't satisfactory: there are infinitely many of both odd and even numbers, so you won't run out of either; and transforming an odd number can only lead to a subset of even numbers ($x: x= 3k+1$) so the other ones don't really count. Is there a more intuitive way to explain it? 1 $C(x)= \begin{cases} \frac{x}{2}&\text{when $x$ is even; and}\\ {3x+1}&\text{when $x$ is odd.} \end{cases}$ 2 heights(n)= if(n==0,return([1]), n==1, return([2]), n==2, return([4]), my(h=heights(n-1)); my(l=List()); for(x=1,#h, listput(l,h[x]*2); if(h[x]%3==2, listput(l,(h[x]*2-1)/3)));return(Vec(l))) \\ Returns a vector of numbers with a given Collatz height
The reason why one would expect more even numbers to appear at a given height rather than odd numbers is the following: the number of even numbers of height $n$ should equal the number of odd numbers of height $<n$. Why is that? Let the height of odd $g$ be $k<n$. Hence, $g\cdot 2^{n-k}$ will have height $n$. And under most circumstances, we expect the number of odd numbers of height less than $n$ to be higher than the number of odd numbers of height exactly $n$, especially for large $n$. I do believe your discovery of a $3:1$ ratio to be a coincidence.
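The backward construction in the question's footnote can be ported from PARI/GP to Python (my own sketch below): the odd predecessor $(2h-1)/3$ of $2h$ exists exactly when $h\equiv 2 \pmod 3$, and every even number of height $n$ is twice a number of height $n-1$, which is exactly the counting argument above.

```python
def height(n):
    """Number of halving steps before reaching 1 (the 'height')."""
    c = 0
    while n != 1:
        if n % 2 == 0:
            n //= 2
            c += 1
        else:
            n = 3 * n + 1
    return c

def numbers_of_height(n):
    """All numbers of a given height, built backwards as in the PARI code."""
    if n == 0:
        return [1]
    if n == 1:
        return [2]
    if n == 2:
        return [4]
    out = []
    for h in numbers_of_height(n - 1):
        out.append(2 * h)             # even predecessor of h
        if h % 3 == 2:                # odd predecessor x of 2h: 3x + 1 = 2h
            out.append((2 * h - 1) // 3)
    return out

level = numbers_of_height(15)
evens = sum(1 for m in level if m % 2 == 0)
odds = len(level) - evens
```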
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find all $n \in \mathbb{N}$ such that ${{2n}\choose{n}} \equiv (-1)^n\pmod{2n+1}$ Find all $n \in \mathbb{N}$ such that $${{2n}\choose{n}} \equiv (-1)^n\pmod{2n+1}.$$ I know that if $2n+1$ were prime number, then $${{2n}\choose{n}} = \frac{(2n) (2n-1) \cdots (n+1)}{n!} \equiv \frac{(-1)(-2)\cdots(-n)}{n!} = (-1)^n \pmod{2n+1}.$$ However, I'm not sure whether $\{ n \,|\, 2n+1 \text{ are prime}\}$ are the only possible solutions.
The solutions are those $n \in \mathbb N$ for which $2n+1$ is either prime or a Catalan pseudoprime. We say that $2n+1$ is a Catalan pseudoprime if it is a composite number and $$(-1)^n\, C_n \equiv 2 \pmod{2n+1}$$ where $C_n$ is the $n$-th Catalan number, that is, $$C_n = \frac 1 {n+1} \binom {2n} n.$$ Rewriting the definition, we see that this means $$(-1)^n \frac 1 {n+1} \binom {2n} n \equiv 2 \pmod {2n+1}$$ and multiplying by $(-1)^n (n+1)$, $$\binom {2n} n \equiv (-1)^n (2n + 2) \equiv (-1)^n \pmod {2n+1}$$ which is the original congruence. The only known Catalan pseudoprimes are $5907$, $1194649$ and $12327121$. So the only other solutions to the congruence that we know of are $2953$, $597324$ and $6163560$.
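This characterization can be brute-forced for small $n$ (a sketch of mine): below $n=3000$ the congruence should hold exactly for the $n$ with $2n+1$ prime, together with $n=2953$, i.e. $2n+1=5907$, the first Catalan pseudoprime.

```python
from math import comb, isqrt

def is_prime(m):
    # simple trial division, enough for m <= 6001
    if m < 2:
        return False
    for d in range(2, isqrt(m) + 1):
        if m % d == 0:
            return False
    return True

N = 3000
# n satisfies the congruence binom(2n, n) == (-1)^n (mod 2n+1)
solutions = {n for n in range(1, N + 1)
             if comb(2 * n, n) % (2 * n + 1) == (-1) ** n % (2 * n + 1)}
prime_cases = {n for n in range(1, N + 1) if is_prime(2 * n + 1)}
```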
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Exercise 2.4.1 of Durrett's Probability: Theory and Examples, 4th ed. - Is this correct? In exercise 2.4.1 of Durrett's Probability book, we're looking to construct a sequence of r.v. $X_n\in \{ 0,1 \}$, $X_n\rightarrow 0$ in prob., $N(n)\rightarrow \infty$ a.s., and $X_{N(n)}\rightarrow 1$ a.s. Here's a picture of the solution suggested by Durrett in his solution manual. However, I disagree with it, since the claim $X_k \rightarrow 0$ in prob. does not seem entirely correct to me. Here $k=2^n+m$ and $P(|X_k-0|<\epsilon)= 1- 1/2^n$. So if I increase $m$ instead of $n$, that probability doesn't go to 1 but stays constant: I can always find a sufficiently big $m$ for $n=1$ where the probability is only $1/2$. Am I correct?
For $2^{n}\leq k <2^{n+1}$ we have $P\{X_k >\epsilon \} =\frac 1 {2^{n}}<\frac 2 k \to 0$ as $k \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Monotonicity of trigonometric function Consider the function $$f(x)=\frac{\sin(x)}{\sin((2k+1)x)}$$ with $k$ a positive integer. It seems that $f$ is strictly increasing in $[0,\frac{\pi}{2(2k+1)}]$. Is there some easy proof of this monotonicity property that does not involve differentiation (through suitable inequalities)?
You have $\sin2x=2\sin x \cos x$ and $\sin 3x=\sin x \cos 2x + \cos x \sin 2x =\sin x (\cos 2 x + 2 \cos^2 x)$, and similar formulas for higher $\sin (kx)$. Then $$\frac{\sin(3x)}{\sin x} = \cos 2 x + 2\cos^2x $$ where the RHS is decreasing in $x$ and positive as long as $\cos 2x>0$. That gives you an interval where $\frac{\sin x}{\sin 3x}$ is increasing. Of course turning this into a proof for general $k$ is a lot more complicated than differentiating, but it should be possible. (You don't get non-integer $k$, though).
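For general $k$ the claim can be probed numerically (my own sketch): sample $f(x)=\sin x/\sin((2k+1)x)$ on a grid in $(0,\pi/(2(2k+1))]$ and check that the values increase.

```python
import math

def check_increasing(k, steps=2000):
    m = 2 * k + 1
    end = math.pi / (2 * m)
    # sample strictly inside (0, end]; at x = 0 the function is 0/0
    xs = [end * i / steps for i in range(1, steps + 1)]
    vals = [math.sin(x) / math.sin(m * x) for x in xs]
    return all(b > a for a, b in zip(vals, vals[1:]))

results = [check_increasing(k) for k in range(1, 6)]
```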
{ "language": "en", "url": "https://math.stackexchange.com/questions/2875945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Proof of $dy=f'(x)dx$ I’ve been wondering about the usage of $dy=f'(x)dx$ in my textbook. There’s not a single justification of how it is proved; it just states that it is true. Since $dy/dx$ can’t be treated as a fraction, I’m guessing there’s more to it than just multiplying by $dx$ on both sides. Are there any proofs of this equation? Also, with some research, I found this “proof”. Can it be done this way? $dy=f'(x)dx$ “proof” (Please keep this at high school level)
Well, the derivative is given by: $$\lim_{dx \to 0} \frac{f(x+dx)-f(x)}{dx}=\lim_{dx\to 0} \frac{dy}{dx}$$ By definition the derivative is the rate of change of $y$ with respect to $x$; that is what the right-hand side expresses. As you can see, $\frac{dy}{dx}$ is not just notation; it is essentially how the derivative is defined. Since $f'(x)=\frac{dy}{dx}$ with $dx\to 0$, the equation $dy=f'(x)dx$ holds.
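One way to read the relation: the error in the approximation $f(x+dx)-f(x)\approx f'(x)\,dx$ vanishes faster than $dx$ itself. A numeric illustration of mine, with $f(x)=x^3$ chosen just as an example:

```python
# For f(x) = x^3, f'(x) = 3x^2: as dx shrinks, the gap between
# dy = f(x+dx) - f(x) and f'(x)*dx shrinks like dx^2, i.e. faster than dx.
f = lambda x: x ** 3
fp = lambda x: 3 * x ** 2

x = 2.0
ratios = []
for k in range(1, 7):
    dx = 10.0 ** -k
    dy = f(x + dx) - f(x)
    ratios.append(abs(dy - fp(x) * dx) / dx)   # should shrink roughly like dx
```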
{ "language": "en", "url": "https://math.stackexchange.com/questions/2876054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proof Verification: If $A \subset B$ and $B \subset C$, then $ A \cup B \subset C$ I am trying to prove that: If $A \subset B$ and $B \subset C$, then $ A \cup B \subset C$ My proof is: Given some $x \in A \cup B$, it is true that $x \in A$ and/or $x \in B$. In the case that $x \in A$ it is true that $x \in B$, as $A \subset B$, and that $x \in C$, as $B \subset C$. In the case that $x \in B$ it is true that $x \in C$, as $B \subset C$. Therefore, $A\cup B \subset C$. Is this correct? Any tips to improve this would be appreciated, as I am self-taught and new to proof writing.
Another approach is by using Venn diagrams. Draw circles $A$, $B$ and $C$ for three sets such that $A$ is contained in $B$ and $B$ is contained in $C$ (according to the given set inclusions). So you have $A$ as the innermost, $B$ in the middle and $C$ as the outermost of them. Now $A\cup B$ is given by the middle circle, which is of course contained in $C$ (the outer circle).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2876194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Implicit Differentiation- Related Rates Suppose that price, $p$ in dollars, and the number of items sold, $x,$ are related by: $p^2 - xp + x^2 = 175.$ If $p$ and $x$ are functions of time, how fast is $p$ changing with respect to time when $p=\$ 10$ and $x$ is increasing by $5$ units per day? So far I’ve got: $$\frac{dp}{dt} = \frac{-2x+p}{2p-x}.$$ Not sure what to do after this? Help would be extremely appreciated :(
Let's see: \begin{align*} \frac{d}{dt}[\,p^2-xp+x^2&=175\,] \\ \underbrace{2p\dot{p}}_{\text{Chain}}-\underbrace{(x\dot{p}+\dot{x}p)}_{\text{Product}}+\underbrace{2x\dot{x}}_{\text{Chain}}&=0 \\ \dot{p}(2p-x)&=\dot{x}(p-2x) \\ \dot{p}&=\frac{\dot{x}(p-2x)}{2p-x}. \end{align*} The issue is that the problem statement doesn't give you the value of $x$, but of $\dot{x}.$ You can find $x$ by solving the original equation for it when you've plugged in $p=10.$ That is, you are solving $$100-10x+x^2=175,\qquad\text{or}\qquad x^2-10x-75=0.$$ The solutions are $x=-5, 15.$ Can you rule out one of these? Why?
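Plugging numbers in (my own sketch; $x=-5$ is ruled out since the number of items sold can't be negative, so $x=15$, and the price turns out to be falling at $\$20$ per day):

```python
import math

# Solve x^2 - 10x - 75 = 0 for x at p = 10, then evaluate
# dp/dt = (dx/dt) * (p - 2x) / (2p - x) with dx/dt = 5.
p, xdot = 10.0, 5.0
disc = math.sqrt(10 ** 2 + 4 * 75)
roots = [(10 - disc) / 2, (10 + disc) / 2]   # -5 and 15
x = max(roots)                               # keep the nonnegative root
pdot = xdot * (p - 2 * x) / (2 * p - x)
```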
{ "language": "en", "url": "https://math.stackexchange.com/questions/2876293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how to demonstrate that $\lim_{x\to2}3x^2=12$ using $\gamma$ and $\epsilon$ definition how to demonstrate that $\lim_{x\to2}3x^2=12$ using $\gamma$ and $\epsilon$ definition? My current steps I have in order from top to bottom: $|3x^2-12|<\epsilon$ and $0<|x-2|<\gamma$ assume $z=x-2$ $0<|z|<\gamma$ and $3z^2+12|z|<\epsilon$ $3z^2+12|z|<3\gamma^2+12\gamma=3\gamma(4+\gamma)$ Then assume $\gamma<1$ $\gamma=15$ hence $\gamma=\frac{\epsilon}{15}$ then now i am stuck and have no clue how to proceed
Let $|x-2|\lt 1$; then $-1<x-2<1$, or $3<x+2<5$. $|3x^2-12| = 3|x-2||x+2|$. Let $\epsilon >0$ be given. Choose $\delta = \min(1,\epsilon/15)$. Then $|x-2| \lt \delta$ implies $3|x-2||x+2| \lt (3)(5)|x-2| \lt 15\delta \le \epsilon$.
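The choice $\delta=\min(1,\epsilon/15)$ can be stress-tested numerically (a sketch of mine), sampling $x$ with $|x-2|<\delta$ for several values of $\epsilon$:

```python
def delta(eps):
    return min(1.0, eps / 15.0)

def works(eps, samples=10001):
    d = delta(eps)
    ok = True
    for i in range(samples):
        # sample x strictly inside (2 - d, 2 + d)
        x = 2 - d + 2 * d * (i + 0.5) / samples
        if abs(3 * x * x - 12) >= eps:
            ok = False
    return ok

checks = [works(eps) for eps in (10.0, 1.0, 0.1, 1e-3)]
```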
{ "language": "en", "url": "https://math.stackexchange.com/questions/2876398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Necessary and sufficient condition for convergent series Let $(a_i)_{i \in \mathbb{N}}$ be a sequence of positive reals such that $$ \limsup_{i \rightarrow \infty} a_i \, i =0. $$ Is this condition necessary and sufficient for $\sum\limits_{i=1}^\infty a_i < \infty$? Of course, if $a_i = 1/i$, then the series is infinite and if $a_i = (1/i)^{1 + \varepsilon}$, for some $\epsilon >0$, then it is finite, but it is not clear to me what happens if I choose a sequence which decays faster than $1/i$ but not faster than $(1/i)^{1 + \varepsilon}$ for arbitrary small $\varepsilon>0$.
Note that $\sum_{n=2}^\infty\frac1{n\log n}$ diverges (this follows from the integral test), but $\lim_{n\to\infty}n\frac1{n\log n}=0$.
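Numerically (my own sketch): the partial sums of $\sum 1/(n\log n)$ keep growing like $\log\log N$ even though $n\,a_n = 1/\log n \to 0$:

```python
import math

def partial_sum(N):
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

s_small, s_big = partial_sum(10 ** 3), partial_sum(10 ** 6)
growth = s_big - s_small          # ~ log log 10^6 - log log 10^3 = log 2
n_times_a = 1.0 / math.log(10 ** 6)   # n * a_n at n = 10^6
```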
{ "language": "en", "url": "https://math.stackexchange.com/questions/2876467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Two distinct differentials at the same point, with respect to different norms Is it possible for a function $f:E\to F$ (where $E,F$ are normed vector spaces) to admit two distinct differentials in a point $a\in E$ ? Of course one for every pair of norms on $E$ and $F$. It is known that if the two norms on $E$, and the other two of $F$ are equivalent, then it is not possible. So an example must be found on an infinite dimensional space (like a space of functions). Can you suggest such an example?
Sure. Let $F$ have infinite dimension and let $x_1,x_2,\dots\in F$ be linearly independent. We can then pick two different norms for which this sequence of linearly independent vectors converges to two different vectors. Say $\|\cdot\|_1$ is a norm on $F$ such that the sequence $(x_n)$ converges to $x$ and let $\|\cdot\|_2$ be a norm on $F$ such that $(x_n)$ converges to $y$, where $x\neq y$. Define $f:\mathbb{R}\to F$ by $f(1/n)=x_n/n$ if $n$ is a positive integer, interpolating linearly in between these values, defining $f(t)$ arbitrarily for $t>1$, $f(0)=0$, and $f(t)=-f(-t)$ for $t$ negative. Then it is easy to see that $f$ is differentiable at $0$ with respect to both norms on $F$, but the derivative $f'(0)$ with respect to $\|\cdot \|_1$ is $x$ while with respect to $\|\cdot\|_2$ it is $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2877124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate matrix powering given one outer product: $(x\cdot{y}^T)^k$ It is an exercise from chapter one of a book. Book: "Matrix Computations 4th edition" by Golub and Van Loan. It reads: Give an $O(n^2)$ algorithm for computing $C=(x\cdot{y}^T)^k$ where $x$ and $y$ are $n$-vectors. I did a lot of research because I am not great at linear algebra; I am just starting to learn. I know some things: $x\cdot{y}^T$ is a matrix, so this can be translated to matrix powering. However, I believe knowing one outer product gives more information than just a given matrix. I was trying to figure out if I can use that to my benefit, but I am not sure how. I also know algorithms for fast exponentiation, but they will not help here. I believe there is a solution, because this book seems very good and I read good reviews. I would love hints at it. Can I translate $(x\cdot{y}^T)^k$ to something more useful, for example?
If $k=1$, then $C=xy^T$ itself, which already takes $O(n^2)$ operations (one multiplication per entry). If $k>1$, then $(x y^T)^k = x (y^T x)^{k-1} y^T = (y^T x)^{k-1}\, xy^T$. Computing the scalar $(y^Tx)^{k-1}$ takes $O(n)$ operations, and then there are $n^2$ multiplications to compute $(y^T x)^{k-1}\, xy^T$.
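A sketch of mine (pure Python to keep it self-contained) comparing the $O(n^2)$ formula $(xy^T)^k=(y^Tx)^{k-1}\,xy^T$ against naive repeated matrix multiplication:

```python
import random

def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(1)
n, k = 5, 4
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]

# O(n^2) route: scalar s = y^T x, then C = s^(k-1) * (x y^T)
s = sum(yi * xi for yi, xi in zip(y, x))
fast = [[s ** (k - 1) * xi * yj for yj in y] for xi in x]

# Naive route: k-fold matrix product, O(k n^3)
M = outer(x, y)
naive = M
for _ in range(k - 1):
    naive = matmul(naive, M)

max_err = max(abs(fast[i][j] - naive[i][j]) for i in range(n) for j in range(n))
```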
{ "language": "en", "url": "https://math.stackexchange.com/questions/2877235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }