What is a matrix $A$ satisfying $A^m=I$ called? We have matrices that are idempotent $A^2=A$ or nilpotent $A^m=0$ for some $m$. Question: What is a square matrix $A$ called for which $A^m=I$ for some integer $m$, where $I$ is the identity matrix? Examples are rotations by $2\pi/m$. Any reference would be appreciated!
Such a matrix $A$ is called a matrix of finite order. You can find a classification in this article for matrices over $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{Q}$.
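The question's own example can be checked numerically: a plane rotation by $2\pi/m$ satisfies $A^m = I$ but no smaller power does. A minimal sketch (the helper name `rotation` is mine):

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

m = 5
A = rotation(2 * np.pi / m)          # rotation by 2*pi/m
Am = np.linalg.matrix_power(A, m)
order_is_m = np.allclose(Am, np.eye(2))   # A^m = I, so A has finite order
```

Powers below $m$ are rotations by proper fractions of a full turn, so they are not the identity.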
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the surface area of a sphere not $ 2 ( \pi r)^2 $? I was trying to derive the formula for the surface area of a sphere and thought of deriving it this way. If we have a circle with radius $r$ and we rotate it along its center by $180$ degrees, the circumference of the circle would cover each part of the sphere once, so, the circumference multiplied by the amount it rotated, which was half the circumference, would give us the surface area. This would result in $$ 2 \pi r \times \pi r ,$$ or equivalently $$ 2 (\pi r)^2.$$ But this differs from the actual surface area of a sphere which is $$ 4 \pi r^2.$$ Though I understood the derivation of the second formula using Cavalieri's principle, I couldn't understand what was wrong with my approach. Can anyone explain using high school level mathematics?
As in my comment: in your rotation, the distance traveled by a point is variable. Only two points travel a distance equal to a half circumference; the rest travel less.
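The comment can be made quantitative and checked numerically: a point of the circle at angle $\theta$ sits at distance $r|\sin\theta|$ from the rotation axis, so a half-turn moves it through $\pi r|\sin\theta|$ rather than $\pi r$; integrating that over the circle's arc length recovers $4\pi r^2$. A minimal numeric sketch (the parametrization is my own, not from the post):

```python
import numpy as np

r = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 200001)   # parametrize the circle
# distance from each point of the circle to the rotation axis (a diameter):
dist_to_axis = r * np.abs(np.sin(theta))
# under a half-turn, each point travels a semicircle of radius dist_to_axis:
travel = np.pi * dist_to_axis
# integrate travel over arc length ds = r * dtheta (trapezoid rule):
y = travel * r
area = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)))

naive = 2.0 * (np.pi * r) ** 2   # the guess from the question, about 19.74
exact = 4.0 * np.pi * r ** 2     # the true surface area, about 12.57
```

The quadrature lands on $4\pi r^2$, confirming that the naive formula overcounts by assuming every point travels a full half circumference.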
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f(x)>0$ near $x_0$ and $\lim_{x\to x_0}f(x)$ exists, then why is it always $\lim_{x\to x_0}f(x)\geq0$? If $f(x)>0$ near $x_0$ and $\lim_{x\to x_0}f(x)$ exists, then why is it always $\lim_{x\to x_0}f(x)\geq0$? The basic limit theorems state that if $\lim_{x \to x_0}f(x)>0$, then $f(x)>0$ near $x_0$. Why does the converse have to contain an equality?
Consider the function $f:\mathbb{R}\to\mathbb{R}$ given by $f(x)=|x|.$ Then $f(x)>0$ for all $x \neq 0,$ but $\lim_{x\to 0} f(x)=0.$ However, if $\lim_{x\to x_0}f(x)=L>0,$ then corresponding to $\frac{L}2>0,$ there exists $\delta >0$ such that $|f(x)-L|<\frac{L}{2}$ whenever $|x-x_0| <\delta.$ So near $x_0,$ we have $f(x)>L-\frac{L}2=\frac{L}{2}>0.$
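The counterexample $f(x)=|x|$ is easy to probe numerically: the function is strictly positive at every nonzero point near $0$, yet its values shrink to $0$, which is why the limit can only be guaranteed to be $\ge 0$. A quick illustration:

```python
f = abs  # f(x) = |x|: positive for every x != 0, yet lim_{x->0} f(x) = 0

xs = [10 ** (-k) for k in range(1, 16)]   # points approaching 0 from the right
values = [f(x) for x in xs]

all_positive = all(v > 0 for v in values)   # f > 0 near 0 ...
limit_is_zero = values[-1] < 1e-12          # ... but the values tend to 0
```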
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Construct a circle tangent to sides $BC$ and $CD$ and s.t. its meetings with the diagonal $BD$ are tangent points from tangents drawn from point $A$ Given square $ABCD$ I want to construct (with ruler and compass) the circle in the interior of the square such that it is tangent to sides $BC$ and $CD$ and such that its meetings with the diagonal $BD$ are tangent points from tangents drawn from point $A$: It is clear that the center of the circle must lie on $AC$. I tried finding some cyclic quad somewhere and I failed miserably. I then thought about putting $K$ on a hyperbola with foci $A$ and the center $O$ of the square. Then again $K$ lies outside the segment $AO$. This problem is hard because we would think of looking at the locus of the centers of circles such that the meetings of the circle with line $BD$ are the tangent points from $A$. But this locus is exactly the same as the locus of the centers of the circles tangent to $CD$ and $BC$: line $AC$. The proof is simple: as the tangents from $A$ must have the same length, the meetings $M$ and $N$ of $BD$ with the circles must be reflections of each other with respect to the center $O$ of the square $ABCD$, thus the center of the circle must lie on line $AO$, which is line $AC$. The real geometric constraint is between the distance of the centers (all of which lie on line $AC$) to point $A$ and the radius of the circles. Let $P$ be in line segment $OC$, let $PA = x$, let $r$ be the radius of the circle centered at $P$, and let $a=AB$. We have that $r^2 = x^2 - x \frac{a\sqrt2}2$ and $x = a\frac{\sqrt2}4 + \sqrt{r^2+\frac{a^2}8}$, and these weird relations are the "locus" that I desire to work with.
Let center $O$ of the circle lie on diagonal $\overline{AB}$ with midpoint $M$, and define $a:=|OA|$, $b:=|OB|$. Let the circle meet the other diagonal at $R$, and define $r:=|OR|$; note that $r=b/\sqrt{2}$. $$\begin{align} \underbrace{\frac{|OR|}{|OA|}=\frac{|OM|}{|OR|}}_{\triangle ORA\sim\triangle OMR} &\quad\to\quad \frac{r}{a}=\frac{a-\frac12(a+b)}{r} =\frac{a-b}{2r}\tag{1} \\ &\quad\to\quad a(a-b)=2r^2=b^2 \tag{2} \\[8pt] &\quad\to\quad \frac{a}{b}=\frac{b}{a-b}=\phi \tag{3} \end{align}$$ (ignoring a negative solution) where $\phi := \frac12(1+\sqrt{5})$ is the Golden Ratio. Consequently, the construction reduces to dividing diagonal $\overline{AB}$ in the ratio $\phi:1$. A simple method for doing so is described under "Dividing a line segment by interior division" in the Wikipedia entry. $\square$
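The $\phi:1$ division can be sanity-checked with explicit coordinates. Place the original square at $A=(0,0)$, $B=(1,0)$, $C=(1,1)$, $D=(0,1)$ (my choice of coordinates, not the answer's relabeled diagonal): tangency to $BC$ and $CD$ forces the center $P=(c,c)$ with radius $1-c$, and requiring the meetings with $BD$ to be tangent points from $A$ forces $c = 1/\phi$, i.e. the center divides $AC$ in the ratio $\phi:1$ from $A$.

```python
import math

phi = (1 + math.sqrt(5)) / 2
# Square A=(0,0), B=(1,0), C=(1,1), D=(0,1); circle center P=(c,c) on AC,
# tangent to BC (x=1) and CD (y=1), so its radius is 1 - c.
c = 1 / phi          # claimed position of the center: c/(1-c) = phi
r = 1 - c

# Intersections of the circle with diagonal BD (the line x + y = 1):
# points (t, 1-t) with (t-c)^2 + (1-t-c)^2 = r^2, i.e. 2t^2 - 2t + c^2 = 0.
disc = math.sqrt(1 - 2 * c * c)
ts = [(1 + disc) / 2, (1 - disc) / 2]

def tangency_defect(t):
    """(M - A) . (M - P): zero exactly when AM is tangent at M."""
    mx, my = t, 1 - t
    return mx * (mx - c) + my * (my - c)

defects = [tangency_defect(t) for t in ts]   # both should vanish
ratio = c / (1 - c)                          # how P splits the diagonal
```

Both tangency defects vanish and the ratio comes out to $\phi$, matching the answer's conclusion.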
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$\int x^{dx}-1$ If you go to Flammable Maths's YouTube channel and scroll through some of his videos you see him solving the following integral: $$\int x^{dx}-1$$ He explains that this is a product integral. My questions are the following: 1 - What is the geometric meaning of a product integral? 2 - Does it make sense to have $$\int f(x,dx),$$ where if $f(x,dx) = g(x)\,dx$ then it's just a regular integral, and if $f(x,dx) = g(x)^{dx}$ it's just a product integral? I'll leave the link to the video here.
$$\int x^{dx}-1 = \int \frac{x^{dx}-1}{dx}\, dx = \int \left(\lim_{h \to 0}\frac{x^h-1}{h}\right) dx = \int \ln x \,dx = x\ln x-x+\text{const.}$$ $$\therefore \int x^{dx}-1 = x \ln x - x + \text{const.}$$
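One hedged way to make the manipulation concrete: the product integral $\prod (x^{dx})$ over $[1,X]$, interpreted as a limit of finite products $\prod_k x_k^{h}$, equals $\exp\left(\int_1^X \ln x\,dx\right) = \exp(X\ln X - X + 1)$, consistent with the antiderivative $x\ln x - x$ found above. A numeric sketch (the function name and grid are my own):

```python
import math

def product_integral(X, n=200000):
    """Approximate the product integral of x^{dx} over [1, X] as a finite
    product of x_k^h, which equals exp(h * sum(log x_k)) (midpoint rule)."""
    h = (X - 1) / n
    log_sum = sum(math.log(1 + (k + 0.5) * h) for k in range(n))
    return math.exp(h * log_sum)

X = 2.0
numeric = product_integral(X)
closed_form = math.exp(X * math.log(X) - X + 1)  # exp(x ln x - x) from 1 to X
```

With $X=2$ both sides come out to $4/e \approx 1.4715$.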
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Find good bounds for the inverse of a function The general case: Let $f : A\subset \mathbb R \to B = f(A)\subset \mathbb R$ be a $\mathcal C^\infty$-diffeomorphism. Find good polynomial bounds for $f$ at any order (i.e. $P,Q \in \mathbb R_n[X]$ such that $\forall x \in B, P(x)\leq f(x) \leq Q(x)$, for every $n$). What I mean by that is, for example, we can show that if $f^{-1}=\cos$, then $\sum_{k=0}^{2n+1} \frac{(-1)^kx^{2k}}{(2k)!}\leq \cos(x) \leq \sum_{k=0}^{2n}\frac{(-1)^kx^{2k}}{(2k)!}$ for $x\geq 0$. A specific case: Let $f : (2\alpha/\pi,+\infty) \to f((2\alpha/\pi,+\infty)), s \mapsto A(\cos(\frac{\alpha}{s}))^{-s}\sin(\frac{2\alpha}{s})+B(\cos(\frac{\alpha}{s}))^{-s}\sin(\frac{\alpha}{s})$ with $A,B,\alpha >0$ three fixed parameters. I have to consider this function for some modeling I'm working on. I want to find good bounds on its inverse, but I don't really know how to do it. I would have considered the Taylor expansion of the inverse $f^{-1}$, but I don't know how to deduce bounds from it.
Concerning your special case, $\lim_{s\to\infty} f(s) = 0$. Also there is a point $s_0$ with $f$ decreasing on $[s_0, +\infty)$. Exactly how low $s_0$ can be taken is difficult to determine, but because of the $\sin \frac{2\alpha}s$ factor, it is going to be somewhere around $\frac {4\alpha}\pi$, well above the asymptote caused by the cosine factor. If we restrict $f$ to $[s_0,\infty)$, then it is injective and can be inverted. $f^{-1}$ has domain $(0,f(s_0)]$, but $\lim_{x \to 0+} f^{-1}(x) = \infty$, from which it decreases to $s_0$. Again, you have a function with a vertical asymptote. It will not be boundable by any polynomial over this entire domain. So you cannot use polynomials to bound an inverse of $f$ whose domain extends down to $0$. Because $f$ itself is infinite at $\frac {2\alpha}\pi$ there must be at least two other domains on which $f$ is injective. These domains are bounded, so if you restrict $f$ to them, the corresponding $f^{-1}$ will have bounded codomain - i.e., you can bound it with constants. I do not know what is appropriate for your situation.
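The tail behavior the answer relies on is easy to observe numerically for sample parameters (the choice $A=B=\alpha=1$ is mine, not from the question): $f(s)\to 0$ and $f$ is decreasing along the tail, so the inverse branch defined there blows up as its argument approaches $0$ and cannot be polynomially bounded.

```python
import math

A, B, alpha = 1.0, 1.0, 1.0   # sample parameters, not from the question

def f(s):
    sec = math.cos(alpha / s) ** (-s)   # the (cos(alpha/s))^(-s) factor
    return A * sec * math.sin(2 * alpha / s) + B * sec * math.sin(alpha / s)

tail = [f(10.0 ** k) for k in range(1, 7)]   # s = 10, 100, ..., 1e6
# the values decrease toward 0 (roughly like 3*alpha/s for large s),
# so f^{-1} on this branch tends to infinity near 0.
```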
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The Number of Hyperplanes Intersecting a Unit Hypercube Prove that the number of hyperplanes such that $$c_1x_1 + c_2 x_2 + ... + c_n x_n = 0, \pm 1, \pm 2, \pm 3, ...$$ which intersect the unit $n$-cube, $0< x_i < 1,$ is at most $$|c_1| + |c_2| + ... + |c_n|.$$ I started out by plotting small values of $c_i$ in the 3D Geogebra grapher and this is true. But I don't know how to prove this. Perhaps we could use induction with decomposing to lower-dimensional hypercubes?
First assume that each $c_i\ge 0$, and set $C:=c_1+\dots+c_n$. Consider the family of hyperplanes $$H_t=\{(x_1,\dots, x_n)\, :\, c_1x_1+\dots+c_nx_n= t\} $$ for all $t\in\Bbb R$. Note that these hyperplanes are all parallel and have the normal vector $(c_1,\dots, c_n)$. Geometrically it's obvious, but you can also derive it algebraically, that $H_t$ doesn't meet the cube if $t\le 0$ or $t\ge C$ (in the edge cases $t=0$ and $t=C$, if all $c_i>0$, the hyperplane $H_t$ intersects the closed cube in the single corner points $(0,\dots, 0)$ and $(1,\dots, 1)$, respectively; if some $c_i=0$ we get some lower dimensional boundary faces at these edge cases, so they just miss the open cube). Specifically, the value of $c_1x_1+\dots+c_nx_n$ is always between $0$ and $C$ for the points in the cube, as each $x_i\in (0,1)$. It follows that among the ones with integer indices, exactly $H_0,\dots, H_{\lfloor C\rfloor}$ will intersect the closed cube, thus $H_1,\dots, H_{\lfloor C\rfloor}$ intersect the open cube if $C\notin\Bbb Z$, and exactly $H_1,\dots, H_{C-1}$ intersect it if $C\in\Bbb Z$. For the general case, you can apply symmetries of the cube in order to reduce it to the above special case. More explicitly, if $c_i<0$, consider the reflection through the midplane $x_i=\frac12$, that is $\phi(x_1,\dots, x_n)=(x'_1,\dots, x'_n)$ where $x'_j=x_j$ and $x_i'=(1-x_i)$. Then $\phi(H_t)=\{(x_1,\dots, x_n) \, :\, c_1x_1+\dots -c_ix_i+\dots+c_nx_n=t-c_i\}$. This way we swapped the sign of $c_i$ (the normal vector is also reflected), so, e.g. using induction on the number of negative coefficients, we can conclude that $t-c_i$, and thus also $t$, must lie in an open interval of length $C$. Consequently, $\phi(H_t)$ intersects the open cube at most for $\lfloor C\rfloor$ or $C-1$ integer values of $t$. Since the cube is invariant under $\phi$, it intersects $H_t$ iff it intersects $\phi(H_t)$.
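The bound can be brute-force checked for random integer coefficient vectors. The key fact from the argument: $c\cdot x$ ranges over the open interval from $\sum_i \min(c_i,0)$ to $\sum_i \max(c_i,0)$ on the open cube, an interval of length $\sum_i |c_i|$, so the hyperplane $c\cdot x = t$ meets the open cube iff $t$ lies strictly inside it. A sketch assuming integer $c_i$ (as the claimed bound implicitly does):

```python
import random

def count_intersecting(c):
    """Number of integers t for which c.x = t meets the open unit cube."""
    lo = sum(min(ci, 0) for ci in c)   # inf of c.x over (0,1)^n
    hi = sum(max(ci, 0) for ci in c)   # sup of c.x over (0,1)^n
    # integers strictly inside the open interval (lo, hi), both ends integer:
    return max(0, hi - lo - 1)

random.seed(0)
ok = True
for _ in range(1000):
    n = random.randint(1, 6)
    c = [random.randint(-5, 5) for _ in range(n)]
    if not any(c):
        continue  # the zero vector defines no hyperplane
    bound = sum(abs(ci) for ci in c)
    ok = ok and count_intersecting(c) <= bound
```

For integer coefficients the count is in fact exactly $\sum|c_i| - 1$, comfortably below the stated bound.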
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What are the eigenvalues of $T(A) = A+ (A)_{2,2}I$, where $T\colon M_2(\mathbb{C})\to M_2(\mathbb{C})$ is a linear operator Let $T$ be the linear transformation $T:M_2(\mathbb{C})\to M_2(\mathbb{C})$, $$T(A) = A+ (A)_{2,2}I.$$ I need to find its eigenvalues; does the answer not depend on the value of $(A)_{2,2}$? I don't understand why $T(A) = 2A$ is the answer (according to the answers the eigenvalue is $2$). Thanks for the help.
One approach here is to select a basis, find the $4 \times 4$ matrix of the linear transformation relative to this basis, then find the eigenvalues and eigenvectors of this matrix in the usual way. In this case, however, I prefer to work directly from the definition of eigenvalues and eigenvectors. Note that $\lambda$ is an eigenvalue of the transformation $T$ if there is a non-zero matrix $A$ for which $T(A) = \lambda A$. So, we are looking for values of $\lambda$ and $A$ for which the equation $$ T(A) = \lambda A \implies A + A_{22}I = \lambda A \implies A_{22} I = (\lambda - 1)A $$ holds. First, note that if $\lambda = 1$, then any $A$ for which $A_{22} = 0$ is a solution to this equation. Equivalently, we can say that the matrices $$ \pmatrix{1&0\\0&0}, \quad \pmatrix{0&1\\0&0}, \quad \pmatrix{0&0\\1&0} $$ form a basis for the eigenspace of $T$ corresponding to $\lambda = 1$. If $\lambda \neq 1$, then we can rewrite this equation in the form $$ A = \frac{A_{22}}{\lambda - 1} I. $$ We can see that two things need to be true in order for this equation to hold. First, $A$ has to be a multiple of the identity matrix, which means that $A = A_{22}I$. With that, this equation becomes $$ A_{22}I = \frac{A_{22}}{\lambda - 1}I. $$ If $A = A_{22}I$ is non-zero, then this can only happen when $\lambda - 1 = 1 \implies \lambda = 2$. All together, we see that $T$ has an eigenvalue of $\lambda = 2$, and every associated eigenvector is a multiple of the identity. So, the eigenvalues are $1$ (with multiplicity $3$) and $2$ (with multiplicity $1$).
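The first approach the answer mentions (pick a basis, form the $4\times 4$ matrix) takes only a few lines to verify numerically. In the basis $E_{11}, E_{12}, E_{21}, E_{22}$, the operator fixes the first three and sends $E_{22}$ to $E_{11} + 2E_{22}$:

```python
import numpy as np

# Matrix of T(A) = A + A_{2,2} I in the basis E11, E12, E21, E22.
# T fixes E11, E12, E21; T(E22) = E22 + I = E11 + 2*E22.
M = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 2]], dtype=float)

eigs = sorted(np.linalg.eigvals(M).real)   # -> [1, 1, 1, 2]
```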
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
proving that $f_n$ doesn't uniformly converge I am trying to prove that $$f_n(x)=\frac{nx}{1+n^2x^4}$$ does not converge uniformly on $[0,1]$. For that I'm looking for a fixed real number $K>0$ and $m,n \in \mathbb{N}$ which achieve the following inequality: $$\left|\frac{nx}{1+n^2x^4} - \frac{mx}{1+m^2x^4}\right| \ge K. $$ Why am I doing this? Because we learnt that $f_n$ converges uniformly when for every $\varepsilon > 0$ there is a natural number $N$ such that for every $m,n > N$ and every $x$ in the domain, $|f_n(x)-f_m(x)| \le \varepsilon$. I wasn't successful solving this and uploaded the following question: https://math.stackexchange.com/posts/3685851 Given $n \in \mathbb{N}$, I'm looking for $m \in \mathbb{N}$ which achieves the following (note: $m$ is expected to be related to $n$, for example $m=2n$): $$\left|\frac{n}{1+n^2} - \frac{m}{1+m^2}\right| \ge K, $$ where $K > 0$ is a constant, for example $K = 0.5$. It seems that people are struggling with it; am I doing something wrong here?
It is easier to observe that for $n>0$ $$\sup_{x\in \Bbb R}|f_n(x)-0|\ge f_n\left(\frac{1}{\sqrt{n}}\right)=\frac{\sqrt{n}}{2},$$ thus $$\lim_{n\to +\infty}\sup_{x\in\Bbb R}|f_n(x)-0|=+\infty.$$ The convergence is not uniform.
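The key evaluation $f_n(1/\sqrt{n}) = \sqrt{n}/2$ is easy to confirm numerically, showing the sup norm of $f_n$ is unbounded:

```python
import math

def f(n, x):
    return n * x / (1 + n**2 * x**4)

ns = (1, 4, 100, 10000)
peaks = [f(n, 1 / math.sqrt(n)) for n in ns]
# f_n(1/sqrt(n)) = sqrt(n)/2, unbounded in n, so sup|f_n - 0| cannot
# tend to 0 and the convergence is not uniform.
```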
{ "language": "en", "url": "https://math.stackexchange.com/questions/3685879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Unit vectors and quaternions This might be a dumb question but recently I came to know about the quaternion number system. I can't stop wondering, "Are they related to unit vectors in any way? They have similar notations and both seem to be related to Spatial dimensions."
As Don Thousand explains, you do need a bit of Linear Algebra/Abstract Algebra to fully appreciate the relationships. I would also add some elementary Complex Analysis (just the operations and basics) to also appreciate/understand. But for an application of quaternions and their vector connections, with a bit of the Math thrown in, I really do not think you can do better (at an elementary level) than the videos by 3Blue1Brown: Visualizing quaternions (4d numbers) with stereographic projection Quaternions and 3d rotation, explained interactively
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sum of digits of sum of digits of powers of 12345 The sum of digits of $12345$ is $1+2+3+4+5=15$. The sum of digits of the sum of digits is $1+5=6$. I have plotted the sum of digits of powers of $12345$ with blue dots (x-axis is the power). As the average digit is $4.5$, we can see the roughly linear rise with slope $\log_{10}(12345) \times 4.5 \approx 18.4$. The red dots represent the sum of digits of the sum of digits. Any hint why they are multiples of $9$? (except the first red dot with value of $6$)
As $12345$ is a multiple of $3$, the powers of $12345$, excluding itself, will be multiples of $9$. By the divisibility rule of $9$ the sum of the digits, and the sum of the sum of the digits will also be divisible by $9$. (That is why the first dot alone was an exception).
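The divisibility argument can be checked directly: $12345$ is divisible by $3$ but not by $9$, so $12345^k$ is divisible by $9$ for every $k \ge 2$, and digit sums preserve divisibility by $9$.

```python
def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

base = 12345  # divisible by 3 but not by 9
double_sums = [digit_sum(digit_sum(base ** k)) for k in range(1, 9)]
# base^k for k >= 2 carries at least two factors of 3, so it is divisible
# by 9; a number is divisible by 9 iff its digit sum is, so every entry
# after the first is a multiple of 9. The first entry is 6.
```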
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Interpretation of the notation $x = (x_1,x_2)\in \{0,1\}^2$? I have a few questions regarding the following notation: $$ x = (x_1,x_2)\in \{0,1\}^2 $$ Question 1: Is the following correct? $\{0,1\}^2$ is the Cartesian product of the 2 sets $\{0,1\}$ and $\{0,1\}$, i.e. \begin{align} \{0,1\}^2 &= \{0,1\} \times \{0,1\} \\ &= \{(0,0),(0,1),(1,0),(1,1)\} \end{align} Question 2: With the notation we mean "$(x_1,x_2)$ is an element of the set $\{0,1\}^2$", so we can write: $$ (x_1,x_2)\in \{(0,0),(0,1),(1,0),(1,1)\} $$ So $(x_1,x_2)$ can take the values \begin{align} (x_1,x_2) &= (0,0)\\ (x_1,x_2) &= (0,1)\\ (x_1,x_2) &= (1,0)\\ (x_1,x_2) &= (1,1) \end{align} ? Question 3: Does the notation mean that $(x_1,x_2)$ can only take ONE value of $\{0,1\} \times \{0,1\}$? I.e. for $(x_1,x_2)$ we have 4 explicit cases: \begin{align} (x_1,x_2) &= (0,0) \\ \text{or} \quad (x_1,x_2) &= (0,1)\\ \text{or} \quad (x_1,x_2) &= (1,0)\\ \text{or} \quad (x_1,x_2) &= (1,1) \end{align}
Yes, yes and yes. In general, the notation $A^n$, for a set $A$ and natural number $n$, means $$ \underbrace{A \times \ldots \times A}_{n \text{ times}}. $$ So $(x_1, \ldots, x_n) \in A^n$ means that each $x_i$, for $1 \leq i \leq n$, is an element of $A$. Thus this is a tuple of $n$ elements of $A$ (allowing duplicates).
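The Cartesian power described here is exactly what `itertools.product` enumerates, which makes a nice concrete check:

```python
from itertools import product

A = {0, 1}
pairs = sorted(product(A, repeat=2))   # the Cartesian square {0,1} x {0,1}
# pairs lists the four values (x1, x2) can take:
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```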
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does this cross ratio equal infinity? I'm currently studying linear fractional transformations and cross ratios and came across this in a book (this is translated from Korean, so I apologize if there are any errors or ambiguities): We can define the cross ratio for complex numbers as: $$[ z, z_2, z_3, z_4 ] = \dfrac{\dfrac{z - z_3}{z - z_4}}{\dfrac{z_2 - z_3}{z_2 - z_4}}$$ If we think of this as a function of complex number $z$ and express it as $S(z)$, we can easily observe that: $$\begin{align}S(z_2) & = 1 \\ S(z_3) & = 0 \\ S(z_4) & = \infty\end{align}$$ How was the $\infty$ derived? The other two are easy because you just plug in the values, but plugging in $z_4$ gives: $$S(z_4) = \left( \dfrac{z_4 - z_3}{z_4 - z_4} \middle/ \dfrac{z_2 - z_3}{z_2 - z_4} \right)$$ This may be due to my lack of background, but how does this result in $\infty$? Thanks.
$z_4 - z_4 = 0$, so you have $0$ at a denominator. Now recall that on the Riemann sphere we have: $$ \frac{1}{0}:= \infty $$ (note that we are not using a sign). This explains why $S(z_4)= \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The family $f_n=\arctan(nx)$ is not equicontinuous. I am studying equicontinuous families and the book says that $\arctan(nx)$ is not equicontinuous since the definition is violated at $x=0$. I would really appreciate it if someone could explain to me what part of the definition is violated. My definition is: Let $F$ be a collection of real functions on a metric space $X$ with metric $d$. We say that $F$ is equicontinuous if to every $\varepsilon>0$ corresponds a $\delta>0$ such that $|f(x)-f(y)|<\varepsilon\ \forall f\in F$ and for all pairs of points $x,y$ with $d(x,y)<\delta$ (In particular, every $f\in F$ is then uniformly continuous.) Thanks in advance!
Denote $f_n(x) = \arctan(nx)$. We know that for $x>0$, $\lim_{n\rightarrow \infty} f_n(x) = \pi/2$; for $x<0$, $\lim_{n\rightarrow \infty} f_n(x) = -\pi/2$. Also, $f_n(0) = 0$. This breaks equicontinuity at $0$: take $\epsilon = 1 (<\pi/2)$. For every $\delta>0$ and any fixed $x\in (-\delta,\delta)$ with $x \neq 0$, there exists $N$ such that $d(f_n(x),f_n(0)) = |\arctan(nx)| \ge \epsilon$ whenever $n>N$, even though $d(x,0)<\delta$. Since no single $\delta$ works for all members of the family at once, this violates the definition of equicontinuity.
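A numeric illustration of the failure (the particular test points are my own): for any candidate $\delta$, a point $x = \delta/2$ and a large enough $n$ already produce a gap of at least $\varepsilon = 1$.

```python
import math

eps = 1.0   # a fixed epsilon below pi/2
gaps = []
for delta in (1.0, 0.1, 0.001):
    x = delta / 2                           # a point within delta of 0
    n = math.ceil(math.tan(eps) / x) + 1    # arctan(n*x) -> pi/2, so n*x > tan(eps)
    gaps.append(abs(math.atan(n * x) - math.atan(n * 0)))
# every gap is >= eps even though |x - 0| < delta: no single delta can
# work for all f_n at once, so the family is not equicontinuous at 0.
```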
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate the determinant $\begin{vmatrix} y+z&z&y\\z&z+x&x\\y&x&x+y\end {vmatrix}$ Performing the operation $R_1\rightarrow R_1-R_2-R_3$ $$\begin{vmatrix} 0&-2x&-2x \\ y&z+x&x \\ z & x&x+y \end{vmatrix}$$ Pulling $-2x$ out and performing $C_2\rightarrow C_2-C_3$ $$-2x\begin{vmatrix} 0&0&1\\ y&z&x \\ z&-y&x+y \end{vmatrix}$$ $$=-2x(-y^2-z^2)$$ However, this is not given in the options, which are $4xyz$, $xyz$, $xy$, $x^3+y^3$ What am I doing wrong?
You made an error right off the bat: $y$ and $z$ in the first column have somehow gotten swapped after your first manipulation. However, instead of manipulating the matrix as is, it’s often fruitful and less work to examine special cases first. Set $x=0$ and observe that the rows of the resulting matrix are linearly dependent. The same is true of the other two variables, so the determinant must have the form $kxyz$ for some unknown constant $k$. You can determine the constant by evaluating the determinant for some other convenient combination of values, such as $x=y=z=1$, for which you can use the well-known eigenvalues of a matrix of the form $aI+b\mathbf 1$ to compute it without expanding further. Since this is multiple-choice, however, it’s pretty obvious that the determinant of $I+\mathbf 1$ isn’t $1$, so the only remaining possibility is $4xyz$.
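The symmetry argument can be confirmed symbolically; a quick sympy check of the determinant (an illustration, not part of the original answer):

```python
from sympy import Matrix, symbols, simplify

x, y, z = symbols('x y z')
M = Matrix([[y + z, z, y],
            [z, z + x, x],
            [y, x, x + y]])
det = M.det()   # expands to 4*x*y*z, matching the multiple-choice answer
```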
{ "language": "en", "url": "https://math.stackexchange.com/questions/3686851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve the characteristic equations $\frac{dx_1}{x_1} = \frac{dx_2}{x_2} = \frac{dV}{2V}$? How to solve the following $$\frac{dx_1}{x_1} = \frac{dx_2}{x_2} = \frac{dV}{2V},$$ where $V = V(x_1,x_2)$. My effort is that I take $$\int dx_1/x_1 = \int dV/2V + C \, \Rightarrow \, V = C_1(x_2) x_1^2$$ and $$\int dx_2/x_2 = \int dV/2V + D \, \Rightarrow \, V = D_1(x_1) x_2^2.$$ Since they are the same $V$, we get $$C_1(x_2)x_1^2=D_1(x_1)x_2^2 \, \Rightarrow \, C_1(x_2) = (\frac{x_2}{x_1})^2D_1(x_1).$$ So we get $$V(x_1,x_2) = (\frac{x_2}{x_1})^2D_1(x_1) x_1^2$$ Am I on the correct way? Thanks!
Probably you want to solve the PDE: $$x_1\frac{\partial V}{\partial x_1}+x_2\frac{\partial V}{\partial x_2}=2V,$$ and you correctly wrote the Charpit-Lagrange system of characteristic ODEs: $$\frac{dx_1}{x_1} = \frac{dx_2}{x_2} = \frac{dV}{2V}.$$ A first characteristic equation comes from $\frac{dx_1}{x_1} = \frac{dV}{2V}$. You correctly get $V=C_1x_1^2$. Here $C_1$ is an arbitrary parameter which sets a particular characteristic curve, on which $$\frac{V(x_1,x_2)}{x_1^2}=C_1.$$ A second characteristic equation comes from $\frac{dx_1}{x_1} = \frac{dx_2}{x_2}$, leading to $$\frac{x_2}{x_1}=C_2.$$ Again $C_2$ is an arbitrary parameter, and to each value of the parameter there corresponds a characteristic curve. The general solution of the PDE (see the reference added at the end) is expressed in the form of an implicit equation $$\Phi(C_1,C_2)=0,$$ where $\Phi$ is an arbitrary function of two variables, or equivalently $$C_1=F(C_2)\quad\text{or}\quad C_2=G(C_1),$$ where $F$ and $G$ are arbitrary functions. Here $$C_1=F(C_2)=\frac{V}{x_1^2}=F\left( \frac{x_2}{x_1}\right),$$ so $$\boxed{V(x_1,x_2)=x_1^2\:F\left( \frac{x_2}{x_1}\right)}$$ If boundary conditions are correctly specified, the function $F$ can be determined. NOTE: Alternatively one could consider $\frac{dx_2}{x_2} = \frac{dV}{2V}$ instead of $\frac{dx_1}{x_1} = \frac{dV}{2V}$. This leads to $V(x_1,x_2)=x_2^2\:H\left( \frac{x_2}{x_1}\right)$ where $H$ is an arbitrary function related to the above function $F$: $F\left( \frac{x_2}{x_1}\right)=\left( \frac{x_2}{x_1}\right)^2H\left( \frac{x_2}{x_1}\right)$. Since both $F$ and $H$ are arbitrary functions, the result is the same as above. REFERENCE: Copy from https://www.math.ualberta.ca/~xinweiyu/436.A1.12f/PDE_Meth_Characteristics.pdf The symbols are not the same as those used in my answer above. Be careful about possible confusion.
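The boxed general solution can be verified symbolically for an arbitrary function $F$; the chain-rule terms cancel exactly:

```python
from sympy import symbols, Function, diff, simplify

x1, x2 = symbols('x1 x2', positive=True)
F = Function('F')

V = x1**2 * F(x2 / x1)   # candidate general solution from the answer
# plug into the PDE  x1*V_x1 + x2*V_x2 - 2V:
residual = simplify(x1 * diff(V, x1) + x2 * diff(V, x2) - 2 * V)   # -> 0
```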
{ "language": "en", "url": "https://math.stackexchange.com/questions/3687200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How were complex geometric shapes drawn without computers? How did mathematicians create drawings of complex geometric shapes in the past, without 3d graphics in computers? Here is one example of what I’m talking about, drawn in the 16th century:
Quite a bit of calculation! And very precise measurements. When I was a kid, in place of internet and all that, a person could do the calculations to draw 2D projections (or, alternatively, in-perspective versions) of aesthetically pleasing objects, such as polyhedra. Some challenging trigonometry, yes. In those days, there was a standard part of many curricula in the U.S., around 7th and 8th grade, "drafting", which required drawing orthographic projections of sometimes-complicated 3D objects... by measuring, and so on. Yes, being able to visualize things is a great help in doing these things! Then having the precedent of that in mind, branching out to objects that are more complicated than cubes and spheres was an interesting thing... (So, I'm thinking that if a kid in 1962 with no computers could draw such stuff...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3687316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Suppose $f(x) \rightarrow M$ as $x \rightarrow a$. Prove that if $f(x) \leq L$ for all $x$ near $a$, then $M \leq L$. I am a student taking a real analysis paper at university. I'm going through some problems on my problem sheet and I've been asked the question above. I'm still getting the hang of what it means for $x$ to be "near" $a$, but I gather that if $f(a) \leq L$ and $f(a) = M$ then $M \leq L$ is implied. If anyone can help me understand the mathematical definition of something being "near" something else, with some tips on how to craft a rigorous proof of the above, it would be much appreciated! Thank you for your time!
Given: $\lim_{x \rightarrow a} f(x)=M$, and $f(x)\le L$ in a $\delta$-neighbourhood of $a$, i.e. there is a $\delta >0$ such that $0<|x-a| \lt \delta$ implies $f(x)\le L.$ Assume $M >L$. Pick any sequence $(x_n)$ with $x_n \ne a$ and $|x_n-a|<1/n$, so that $x_n \to a$. Let $n_0 >1/\delta$ (Archimedean principle). For $n\ge n_0$ we have $|x_n-a| <1/n \le 1/n_0 <\delta$, i.e. the $x_n$ are near enough to $a$, hence $f(x_n) \le L <M$. By the sequential characterization of limits, $M=\lim_{n \rightarrow \infty}f(x_n)\le L<M$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3687722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Does Thomae's function have an antiderivative? Let $f(x)=0$ if $x$ is irrational and $f(x)=1/q$ if $x=p/q\in \mathbb{Q}$, with $p,q$ coprime natural numbers (restrict to $[1,2]$ to make things easier). By boundedness, and as it has at most countably many points of discontinuity, we know that $f$ is integrable on $[1,2]$. Does it have an antiderivative on this compact interval?
Here is another argument. Suppose there is an anti-derivative $F$. By Fundamental Theorem of Calculus we have $$F(x) =F(1)+\int_{1}^{x}f(t)\,dt$$ for all $x\in[1,2]$. Since $F'=f$ it follows from above equation that we have $g'=f$ where $g$ is defined by $$g(x) =\int_{1}^{x}f(t)\,dt$$ But notice that integral of Thomae function is $0$ over any interval and hence $g$ and $g'$ are both identically $0$. And this gives our contradiction as $f=g'$ is not identically zero (it is non-zero on rationals).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3687878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do you show that 2 lines $L_{1}$ and $L_{2}$ are perpendicular (orthogonal) using the result $\|u+v\|=\|u-v\|$? My question has 2 parts; I'm struggling with the second one. The first: Prove that the vectors $u$ and $v$ are orthogonal if and only if $\| u + v \| = \| u - v \|$. $\Leftarrow$ If $u,v$ are orthogonal vectors, then (by Pythagoras' theorem) $$\| u + v \|^2 = \|u\|^2 + \|v\|^2$$ $$\| u - v \|^2 = \|u\|^2 + \|-v\|^2 = \|u\|^2 + \|v\|^2$$ so $\| u + v \|^2 = \| u - v \|^2$; but norms are nonnegative, therefore $\| u + v \| = \| u - v \|$. $\Rightarrow$ Now, if $\| u + v \| = \| u - v \|$ we have: $\| u + v \|^2 = \|u\|^2 + 2u\cdot v + \|v\|^2$ $\| u - v \|^2 = \|u\|^2 - 2u\cdot v + \|v\|^2$ By the equality, $\|u\|^2 + 2u\cdot v + \|v\|^2 = \|u\|^2 - 2u\cdot v + \|v\|^2$ if and only if $2u\cdot v = -2u\cdot v \Leftrightarrow 4u\cdot v = 0 \Leftrightarrow u\cdot v = 0$, that is, $u$ and $v$ are orthogonal. I understand this part. OK, let's see the second part of the question. Considering the previous result of the proof (orthogonal vectors), show that $L_{1}=(1,2,3)+t(1,-1,1)$ and $L_{2}=(0,3,2)+t(5,2,-3)$ are perpendicular. I think that the main point here is that $u\cdot v=0$ means orthogonal, but I have no idea how to use this result to show the lines $L_{1}$ and $L_{2}$ are perpendicular (orthogonal). Suggestions or hints will be welcome. Thanks
The direction vectors are $u=(1,-1,1)$ and $v=(5,2,-3)$, and $$\|(1,-1,1)+(5,2,-3)\|=\sqrt{41}=\|(1,-1,1)-(5,2,-3)\|,$$ so by the first part the direction vectors are orthogonal, hence the lines are perpendicular.
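The arithmetic behind this one-line answer checks out both ways: the two norms agree, and the dot product of the direction vectors is $0$.

```python
u = (1, -1, 1)    # direction of L1
v = (5, 2, -3)    # direction of L2

dot = sum(a * b for a, b in zip(u, v))                       # 5 - 2 - 3 = 0
norm_sum = sum((a + b) ** 2 for a, b in zip(u, v)) ** 0.5    # ||u + v||
norm_diff = sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5   # ||u - v||
# both norms equal sqrt(41), consistent with orthogonality
```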
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Residue and removable singularity I have the function $\displaystyle f(z)=\frac{z^2+z+1}{z^2(z-1)}$. I have to calculate the residues at the isolated singularities (including infinity). I calculated the residues at $z = 0$ and $z = 1$, but I don't know how to calculate the one at infinity. I don't understand whether infinity is a removable singularity or not.
The Residue at Infinity of $f(z)$ is given by $$\text{Res}\left(f(z),z=\infty\right)=\text{Res}\left(-\frac1{z^2}f\left(\frac1z\right),z=0\right)$$ So, for $f(z)=\frac{z^2+z+1}{z^2(z-1)}$, we have $$\begin{align} \text{Res}\left(f(z),z=\infty\right)&=\text{Res}\left(-\frac1{z^2}\frac{1/z^2+1/z+1}{(1/z^2)(1/z-1)},z=0\right)\\\\ &=-\text{Res}\left(\frac{z^2+z+1}{z(1-z)},z=0\right)\\\\ &=-\lim_{z\to 0}\left(z\,\frac{z^2+z+1}{z(1-z)}\right)\\\\ &=-1 \end{align}$$
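The computation can be confirmed with sympy, along with the sanity check that the residues at $0$, $1$, and $\infty$ sum to zero:

```python
from sympy import symbols, residue

z = symbols('z')
f = (z**2 + z + 1) / (z**2 * (z - 1))

res_inf = residue(-f.subs(z, 1 / z) / z**2, z, 0)   # residue at infinity: -1
res_0 = residue(f, z, 0)                            # residue at 0: -2
res_1 = residue(f, z, 1)                            # residue at 1: 3
# for a rational function, the residues including infinity sum to zero
```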
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Am I going correctly? Let $f(x)$ be a polynomial in $x$ and let $a, b$ be two real numbers where $a \neq b$ Show that if $f(x)$ is divided by $(x-a)(x-b)$ then the remainder is $\frac{(x-a) f(b)-(x-b) f(a)}{b-a}$ MY APPROACH:- Let $Q(x)$ be quotient so that : $(x-a)(x-b)Q(x)+{ Remainder }=f(x)$ L.H.S, $(x-a)(x-b) \cdot a(x)+ \cfrac{(x-a)f(b)-(x-b) f(b)}{b-a}$ So if we take $x=a$ : $ \text { Remainder }=\left.f(x)\right|_{x=a} $ $0+\frac{0-(x-b) f(b)}{(b-a)}=f(a) \\ f(b)=\frac{-(b-a) f(a)}{(x-b)}=\frac{(a-b){f}(a)}{x-b}$ again if we take $x=b,$ then $0+\frac{(x-a) f(b)}{b-a}=f(b)$ Substituting, $f(b)$ value in the equation then $\frac{(x-a) f(b)}{b-a}=\frac{(a-b) f(a)}{x-b}$
I think you have the right idea but have not set out your argument very clearly; it looks as if you assume your conclusion about half way through. A better layout might be: When dividing $f(x)$ by $(x-a)(x-b)$ you obtain, $$ f(x) = (x-a)(x-b) q(x) + r(x) $$ where $r(x)$ is a polynomial of degree one or less. Thus $r(x) = \alpha x + \beta$ for some constants $\alpha$ and $\beta$. Then substitute the values $x=a$ and $x=b$ in turn to derive, $$ f(a) = r(a) = \alpha a + \beta, \quad f(b) = r(b) = \alpha b + \beta. $$ You can now solve for $\alpha$ and $\beta$, giving the result you wanted, $$\alpha =\frac{f(b)-f(a)}{b-a} \text{ and } \beta = \frac{bf(a) - af(b)}{b-a}, $$ which combine to $r(x) = \dfrac{(x-a) f(b)-(x-b) f(a)}{b-a}$.
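The remainder formula is easy to verify symbolically for a sample polynomial; the values $a=2$, $b=5$ and the polynomial $x^5+3x+1$ below are my own illustrative choices:

```python
from sympy import symbols, rem, simplify

x = symbols('x')
a_val, b_val = 2, 5            # sample distinct values for a and b
f = x**5 + 3 * x + 1           # an arbitrary sample polynomial

r = rem(f, (x - a_val) * (x - b_val), x)   # remainder on division

# the claimed remainder ((x-a)f(b) - (x-b)f(a)) / (b-a):
formula = ((x - a_val) * f.subs(x, b_val)
           - (x - b_val) * f.subs(x, a_val)) / (b_val - a_val)
# r and formula agree as polynomials in x
```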
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving the following matrix is diagonalizable I'm asked to prove that the matrix $A\in M_{n}(\mathbb C)$ that satisfies $A^8+A^2=I$ is diagonalizable. I've tried looking at the equation $x^8+x^2-1=0$ and determining whether $M_A$ has any repeated roots, but this got me nowhere. Afterwards, I thought about trying to determine whether its Jordan form is diagonal (I know such a form exists since $\mathbb C$ is algebraically closed, so $P_A$ splits into linear factors), but still got nowhere. Is there a right approach to the question, or is there something I'm missing?
Look at $g(x)=x^4+x-1$. Observe that $g'(x)=4x^3+1$ has only one real root $x_0=-(1/4)^{1/3}$, and $g(x_0)=\frac34 x_0-1<0$, so $g$ and $g'$ have no common root and $g$ has no repeated root in $\mathbb R$. Moreover, we have $g(0)=-1$ and $g(x)\to \infty$ for $x\to \infty$ and $x\to -\infty$, so $g$ has exactly two distinct real roots. The other two roots form a non-real complex-conjugate pair, and they are simple as well (a repeated non-real root, together with its repeated conjugate, would already account for four roots, leaving no room for the two real ones). Now we have $f(x)=x^8+x^2-1=g(x^2)$. $$\therefore \{\alpha \in \mathbb C\mid f(\alpha)=0\}=\{\alpha \in \mathbb C\mid\alpha^2 \text{ is a root of }g\}$$ The four roots of $g$ are distinct and nonzero, each of them has two distinct square roots, and square roots of different numbers never coincide, so the eight roots of $f$ are distinct and $f$ is a product of distinct linear factors. Now, the minimal polynomial $M_A$ divides $f$ in $\mathbb C[x]$, since $f$ annihilates $A$. Hence $M_A$ is itself a product of distinct linear factors, and therefore $A$ is diagonalizable.
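For what it's worth, the squarefreeness of $f(x)=x^8+x^2-1$ — equivalently, $\gcd(f,f')$ being constant — can be verified with exact rational arithmetic (a verification sketch I added, not part of the argument above):

```python
from fractions import Fraction

# Coefficients, lowest degree first: f(x) = x^8 + x^2 - 1, f'(x) = 8x^7 + 2x
f = [Fraction(c) for c in [-1, 0, 1, 0, 0, 0, 0, 0, 1]]
df = [Fraction(c) for c in [0, 2, 0, 0, 0, 0, 0, 8]]

def polymod(a, b):
    # remainder of a divided by b (coefficient lists, low degree first)
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        coef = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] -= coef * c
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def polygcd(a, b):
    while b:
        a, b = b, polymod(a, b)
    return a

g = polygcd(f, df)
# deg gcd(f, f') = 0  <=>  f is squarefree  <=>  f has 8 distinct roots
print(len(g) - 1)  # 0
```

A nonconstant gcd would flag a repeated root; degree $0$ confirms the eight roots are distinct.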
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A Double Integral For Calculus I I stumbled upon this question and really messed around with it for more than half an hour but did not get anywhere. I checked back and forth and do not think I copied the question incorrectly. Question: Evaluate the double integral: $\displaystyle \int_{1}^{2} \int_{\frac{1}{y}}^{y} (x^2+y^2)dxdy$ by using the substitution: $x = \frac{u}{v}, y = uv$. I drew the "region" bounded by the limits of the integral in both $xy$ and $uv$ coordinates but still don't see the "rectangle" that is supposed to come out.
Wouldn't this just be $\int_1^2\int_{1/y}^y(x^2+y^2)\,dx\,dy=\int_1^2\frac{y^3}{3}+y^3-\left(\frac{1}{3y^3}+y\right) dy$? In doing so we're treating the inner integral as simplifying the area, in a sense, to something of a single variable. If I did the algebra correctly, after full evaluation the integral should equal $\frac{27}{8}$.
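A crude midpoint-rule evaluation (my addition) confirms the value $27/8 = 3.375$:

```python
def inner(y, n=500):
    # midpoint rule for the inner integral of x^2 + y^2 over [1/y, y]
    a, b = 1.0 / y, y
    h = (b - a) / n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        s += x * x + y * y
    return s * h

def outer(n=500):
    # midpoint rule in y over [1, 2]
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        s += inner(1.0 + (i + 0.5) * h)
    return s * h

val = outer()
print(val)  # ≈ 3.375
```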
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stuck on volume of a solid of revolution The problem I've been working on is where the curves are y=sqrt(x), y=1/x, and x=5, which are rotated around the y-axis. I am able to do this problem when rotating around the x-axis, but I have no clue how to calculate the inner radius when neither curve allows for me to determine the inner radius. I just need an idea of how to start this problem. I do not know how to post a graph or something here or if that's even allowed for a visual picture, so sorry about that! Thanks!
I find it often helps to make a graph of the equations, even when I think I know what the inner and outer radius will be. For the three equations $y= \sqrt x,$ $y = 1/x,$ $x = 5$ the enclosed region lies between $x=1$ (where the two curves cross at $(1,1)$) and $x=5$, above $y=1/x$ and below $y=\sqrt x$. Now your problem, if you want to use the disk/washer method, is that the curves $y=\sqrt x$ and $y= 1/x$ are each closer to the $y$ axis than the line $x = 5,$ but neither curve is the inner radius for the whole region. If you draw a horizontal line at $y=1/2$ the intersection with the region starts at $y=1/x$ and ends at $x=5$, but if you draw a horizontal line at $y=2$ the intersection starts at $y=\sqrt x$ instead. One thing you can do in a case like this is to break the region in two pieces along the line $y=1$: the lower piece is bounded by $y=1/x,$ $x=5,$ and $y=1$, and the upper piece is bounded by $y=\sqrt x,$ $x=5,$ and $y=1$. You can work out the volumes swept by each of the smaller regions and add them together to get the total volume swept by the whole region. Alternatively, you can use the shell method as suggested by others. This lets you compute the volume using just one integral. Eventually, however, you will probably encounter a problem that cannot be done in just one integral by either the shell method or the washer method.
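As a cross-check (my addition), both routes give the same number for this region — the shells run over $1\le x\le 5$, while the washers split at $y=1$ as described above:

```python
from math import pi, sqrt

def midpoint(f, a, b, n=4000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Shell method: one integral, radius x, height sqrt(x) - 1/x
shells = 2 * pi * midpoint(lambda x: x * (sqrt(x) - 1 / x), 1, 5)

# Washer method: outer radius 5 throughout; inner radius 1/y below y = 1
# (coming from y = 1/x) and y**2 above it (coming from y = sqrt(x))
lower = pi * midpoint(lambda y: 5**2 - (1 / y) ** 2, 1 / 5, 1)
upper = pi * midpoint(lambda y: 5**2 - (y**2) ** 2, 1, sqrt(5))

print(shells, lower + upper)  # both ≈ 112.85
```

Both agree with the exact shell-method value $2\pi\left(10\sqrt5-\tfrac{22}{5}\right)\approx 112.85$.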
{ "language": "en", "url": "https://math.stackexchange.com/questions/3688974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Closed ball is weakly closed The problem is in a Banach space, if $||x_n||\leq 1$ and $x_n\to x$ weakly, then $||x||\leq 1$ This question has an answer here: math.stackexchange.com/questions/714049/closed-unit-ball-in-a-banach-space-is-closed-in-the-weak-topology But for convenience, I will repost the answer. "If $x_n \to x$ weakly then we have that $\lambda x_n \to \lambda x$ for all $\lambda \in V^*$ and $|\lambda x_n| \leq \|\lambda\| \|x_n\|$. Dividing both sides by $\|\lambda\|$ gives $$ \frac{|\lambda x_n|}{\|\lambda \|} \leq \|x_n\|.$$ Taking $n \to \infty$ and substituting in $\|x\| = \sup_{\lambda \in V^*} \frac{|\lambda x|}{\|\lambda \|}$ gives $$\|x\| \leq \liminf_{n \to \infty} \|x_n\|.$$ Then any limit $x$ of $x_n$ with $\|x_n \| \leq 1$ for all $n$ will necessarily have $\|x\| \leq 1$. " My question is, how does "liminf" appear? My understanding is that when we take $n\to \infty$, the left side is $||x||$, but would the right side be $\lim ||x_n||$?
Weak convergence does not guarantee existence of $\lim \|x_n\|$. So we cannot take the limit on both sides of the inequality $|\lambda x_n | \leq \|\lambda\| \|x_n\|$. But we can always take $\liminf$ or $\limsup$. When you take $\liminf$ on both sides, the LHS becomes $|\lambda x|$, because $\lambda x_n \to \lambda x$ and if a sequence is convergent then its $\liminf$ is the same as its limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3689074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why am I getting the wrong answer when I factor an $i$ out of the integrand? Consider the following definite integral: $$I=\int^{0}_{-1}x\sqrt{-x}dx \tag{1}$$ With the substitution $x=-u$, I got $I=-\frac{2}{5}$ (which seems correct). But I then tried a different method by first taking out $\sqrt{-1}=i$ from the integrand: $$I=i\int^{0}_{-1}x\sqrt{x}dx=\frac{2i}{5}[x^{\frac{5}{2}}]^{0}_{-1}=\frac{2i}{5}{(0-(\sqrt{-1})^5})=-\frac{2i^6}{5}=+\frac{2}{5} \tag{2}$$ which is clearly wrong. I understand that $x\sqrt{x}$ is not even defined within $(-1,0)$, but why can't we use the same 'imaginary approach' ($\sqrt{-1}=i$) to treat this undefined part of the function (i.e. the third equality in $(2)$). I can't find a better way of phrasing my question so it may seem gibberish, but why is $(2)$ just invalid?
If $x\in[-1,\,0)$ then the principal square root is $\sqrt{x}=i\sqrt{-x}$, so $\sqrt{-x}=\sqrt{x}/i=-i\sqrt{x}$. In other words, the factor you should pull out of the integrand is $-i$, not $i$. Repeating the computation in $(2)$ with the correct factor gives $$I=-i\int^{0}_{-1}x\sqrt{x}\,dx=-\frac{2i}{5}\left[x^{\frac52}\right]^{0}_{-1}=-\frac{2i}{5}\left(0-(\sqrt{-1})^5\right)=\frac{2i}{5}\,i^5=\frac{2}{5}\,i^6=-\frac25,$$ as expected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3689229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 0 }
Probability of an exam, what do you do? Suppose you have an exam that consists of 10 questions, each having two options to choose from. Every correctly answered question gives you 1 point, every unanswered question gives you 0 points, and every wrongly answered question gives you -0.5 points. If you want to maximise your score and you do not know how to answer two out of the ten question, what do you do? What if you only didn't know one? What if each question had three options and you didn't know how to answer one out of the ten questions?
We can just consider maximizing the score on the two questions out of the ten. If I don't answer, I'll get 0 points. If I answer randomly, in expectation I'll get 1 point from one and $-0.5$ from the other; so I'll get 0.5 on average. So I should try answering. You can repeat the same argument if you have just one unknown question: skipping gives 0, while guessing gives $\frac12\cdot 1+\frac12\cdot(-0.5)=0.25$ in expectation. If there are three options and I don't answer, I'll get 0; if I answer, I'll get $1\cdot\frac 13+\frac 23\cdot(-0.5)=0$. So the two options are equally good.
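The expected values above can be confirmed by brute-force enumeration over all equally likely outcomes (a small sketch I added; designating option $0$ as the correct one is harmless by symmetry):

```python
from itertools import product

def expected_guess_score(k, m):
    """Expected score from guessing on k questions with m options each."""
    total = 0.0
    for outcome in product(range(m), repeat=k):
        # option 0 is taken to be the correct answer on every question
        total += sum(1.0 if choice == 0 else -0.5 for choice in outcome)
    return total / m**k

print(expected_guess_score(2, 2))  # 0.5  (guessing beats the 0 from skipping)
print(expected_guess_score(1, 2))  # 0.25
print(expected_guess_score(1, 3))  # 0.0  (indifferent with three options)
```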
{ "language": "en", "url": "https://math.stackexchange.com/questions/3689360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Rudin's POMA Chapter 1 exercise 5 Hi, I am writing to check if the proof that I wrote is valid; it seems kind of right to me, but as I am only a beginner in writing proofs I feel like I might've missed something. Question: Let $A$ be a nonempty set of real numbers which is bounded below. Let $-A$ be the set of all numbers $-x$, where $x$ is an element of $A$. Prove that $$\inf A=-\sup(-A)$$ My solution is as follows. Let $\alpha$ be a lower bound of $A$. Then $\alpha = \inf (A)$ if $x\geq\alpha$ for every $x\in A$ and there does not exist a $\beta$ where $\beta\gt\alpha$ and $x\geq\beta$ for $x\in A$. As $x \geq \alpha$ for $x \in A$, then $-x \leq -\alpha$ for all $-x \in -A$. Since there is no $\beta \gt \alpha$ where $x \geq \beta$ for $x \in A$, there is no $-\beta \lt -\alpha$ where $-x \leq -\beta$ for $-x \in -A$; thus $-\alpha = \sup (-A)$ and $\inf (A) = -\sup (-A)$. Is it right for me to just assume the existence of a greatest lower bound like that? Also, what are some good ways to self-check my work?
If $A$ is bounded below then there exists $M$ such that $a\in A\implies M\le a$ for all $a\in A$. Therefore, $-a\le -M$ for all $-a\in -A$ and so, by the least upper bound property of the real numbers, there exists $U$ such that $U$ is a least upper bound for $-A$. Since $-x\le U$ for all $-x\in -A$, $-U\le x$ for all $x\in A$ and therefore is a lower bound for $A$. If $L$ is another lower bound for $A$ then $L\le x\implies -x\le -L$ so $-L$ is also an upper bound for $-A$. Since $U$ is the least upper bound for $-A$, $U\le -L\implies L\le -U$ so $-U$ is the greatest lower bound. Now $-U= -\text{sup(-A)}=\text{inf(A)}$ so the theorem is proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3689617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Approximation of smooth diffeomorphisms by polynomial diffeomorphisms? Is it possible to (locally) approximate an arbitrary smooth diffeomorphism by a polynomial diffeomorphism? More precisely: Let $f:\mathbb{R}^d\rightarrow\mathbb{R}^d$ be a smooth diffeomorphism. For $U\subset\mathbb{R}^d$ bounded and open and $\varepsilon>0$, is there a diffeomorphism $p=(p_1, \cdots, p_d) : U\rightarrow\mathbb{R}^d$ (with inverse $q:=p^{-1} : p(U)\rightarrow U$) such that both * *$\|f - p\|_{\infty;\,U}:=\sup_{x\in U}|f(x) - p(x)| < \varepsilon$, $\ \textbf{and}$ *each component of $p$ and of $q=(q_1,\cdots,q_d)$is a polynomial, i.e. $p_i, q_i\in\mathbb{R}[x_1, \ldots, x_d]$ for each $i=1, \ldots, d$? Clearly, by Stone-Weierstrass there is a polynomial map $p : \mathbb{R}^d\rightarrow\mathbb{R}^d$ with $\|f - p\|_{\infty;\,U} < \varepsilon$ and such that $q:=(\left.p\right|_U)^{-1}$ exists; in general, however, this $q$ will not be a polynomial map. Do you have any ideas/references under which conditions on $f$ an approximation of the above kind can be guaranteed nonetheless?
As pointed out by Robert Bryant over at https://mathoverflow.net/questions/364099/approximation-of-smooth-diffeomorphisms-by-polynomial-diffeomorphisms , the answer to this question is 'no'.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3689873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How do I show that if $p_n q_{n-1} - p_{n-1}q_n = 1$, then $p_n/q_n$ converges? Let $p_{n}$ and $q_{n}$ be strictly increasing, integer valued, sequences. Show that if $$p_{n}q_{n-1}-p_{n-1}q_{n}=1,$$ for each integer $n \ge 1$, then the sequence of quotients $\frac{p_{n}}{q_{n}}$ converges. My attempt: For given $\epsilon >0$, there exists a positive integer $N$ such that for $m \geq n \geq N$, and $\frac{1}{q_{i}} \neq 0$ for $i=1,2,3,\dotsc$ \begin{align} \left| \frac{p_{n}}{q_{n}} - \frac{p_{m}}{q_{m}} \right| &=\left| \frac{p_{m}}{q_{m}} - \frac{p_{n}}{q_{n}} \right| \\ &\leq \left| \frac{p_{m}}{q_{m}} - \frac{p_{m-1}}{q_{m-1}} \right| + \left|\frac{p_{m-1}}{q_{m-1}} - \frac{p_{m-2}}{q_{m-2}} \right| +\dotsb +\left|\frac{p_{n+1}}{q_{n+1}} - \frac{p_{n}}{q_{n}} \right| \\ &\leq \left| \frac{1}{q_{m}q_{m-1}} \right| + \left| \frac{1}{q_{m-1}q_{m-2}} \right|+\dotsb+ \left| \frac{1}{q_{n+1}q_{n}} \right| \\ &\leq \frac{1}{q_{m-1}^{2}} + \frac{1}{q_{m-2}^{2}}+\dotsb+\frac{1}{q_{n}^{2}} \\ &= \sum_{i=n}^{m-1} \frac{1}{q_{i}^{2}} \\ & \leq \sum_{k=1}^{\infty} \frac{1}{k^{2}}. \end{align} We know that $\sum_{k=1}^{\infty} \frac{1}{k^{2}}$ is convergent. This implies $\sum_{k=n}^{m} \frac{1}{k^{2}} \leq \epsilon$ if $m \geq n \geq N$. This implies $|\frac{p_{n}}{q_{n}} - \frac{p_{m}}{q_{m}}|\leq \epsilon$ for all $m \geq n \geq N$. Hence, $\frac{p_{n}}{q_{n}}$ converges. Can anyone please suggest any improvement to justify this claim? Are my last few steps correct, where I apply the comparison test?
The question is tagged proof-writing, hence this answer is intended to address the style of the proof given, rather than to offer a more slick or elegant proof. My general critiques are as follows: * *The introduction of $\varepsilon$ is somewhat confusing. The argument presented relies on the fact that the sequence of partial sums $$ S_n = \sum_{k=1}^{n} \frac{1}{k^2} $$ is Cauchy which means that for any $\varepsilon > 0$, there exists an $N$ so large that $n > m > N$ implies that $$ |S_n - S_m| = \sum_{k=m+1}^{n} \frac{1}{k^2} < \varepsilon. $$ Thus I would start the proof by fixing $\varepsilon$ and choosing $m$ and $n$ which get this job done. *The statement of the exercise gives several hypotheses. The argument given does not explicitly invoke these hypotheses at any point, and uses them in a way which require some computation. While this computation is routine, it detracts from the clarity of the argument. It is good style to note when hypotheses are being used, particularly when they are being used in a way that is not immediately obvious. Remember that the goal of a proof is not to show off your knowledge or ability, but to clearly communicate an idea to the reader. *In the first step of the computation, the writer states $$ \left| \frac{p_n}{q_n} - \frac{p_m}{q_m} \right| = \left| \frac{p_m}{q_m} - \frac{p_n}{q_n} \right|.$$ I understand the reasoning behind this—the author wants the indexing to run in the "correct" order. However, this step is completely unnecessary. Skip routine computation when it won't interfere with the reader's comprehension. This seems to contradict the previous point. I will claim that it doesn't, really. There is a balance between saying too much and saying too little. The precise boundary between the two is a matter of taste, but I would say that properties of the absolute value of a difference are "obvious", while the use of hypotheses in an argument should be highlighted. 
*This is very much a matter of personal style, and one can have a difference of opinion without being wrong, but I (personally) don't really like ellipses in proofs. They are used to hide induction arguments in a way that I find vaguely off-putting. This isn't an absolute rule and is entirely a matter of personal choice, but I prefer to avoid ellipses unless avoiding them has a negative impact on readability. In this case, I think that sigma notation can eliminate the ellipses without hurting the clarity of the argument. *The given proof shows that the sequence $\{p_n/q_n\}$ is Cauchy. Cauchy sequences are not a priori convergent—it is only in a complete metric space that Cauchy implies convergent. I think that it is worth spending a couple of words to point this out. The following is my (rough, unedited) presentation of the original proof: Proposition: Let $p_{n}$ and $q_{n}$ be strictly increasing, integer valued, sequences. Show that if $$p_{n}q_{n-1}-p_{n-1}q_{n}=1$$ holds for each integer $n \ge 1$, then the sequence of quotients $\frac{p_{n}}{q_{n}}$ converges. Proof: Fix $\varepsilon > 0$. The sequence $$ S_n = \sum_{k=1}^{n} \frac{1}{k^2} $$ is Cauchy, hence there is an $N$ so large that if $m$ and $n$ are integers with $n \ge m \ge N$, then $$ |S_n - S_m| = \sum_{k=m+1}^{n} \frac{1}{k^2} < \varepsilon. $$ Then for any such $m$ and $n$, \begin{align} \left| \frac{p_n}{q_n} - \frac{p_m}{q_m} \right| &= \left| \frac{p_n}{q_n} + \sum_{k=m+1}^{n-1} \left( \frac{p_k}{q_k} - \frac{p_k}{q_k} \right) - \frac{p_m}{q_m} \right| \\ &\le \sum_{k=m+1}^{n} \left| \frac{p_k}{q_k} - \frac{p_{k-1}}{q_{k-1}} \right|. && (\text{triangle inequality}) \end{align} By hypothesis, $$ p_n q_{n-1} - p_{n-1} q_{n} = 1 \implies \frac{p_n}{q_{n}} - \frac{p_{n-1}}{q_{n-1}} = \frac{1}{q_n q_{n-1}} $$ for all $n \in \mathbb{N}$.
Thus \begin{align} \sum_{k=m+1}^{n} \left| \frac{p_k}{q_k} - \frac{p_{k-1}}{q_{k-1}} \right| &= \sum_{k=m+1}^{n} \left| \frac{1}{q_k q_{k-1}} \right| \\ &\le \sum_{k=m+1}^{n} \frac{1}{q_{k-1}^2}. && (\text{$q_n > 0$ is a strictly increasing sequence}) \end{align} As $q_n$ is a strictly increasing sequence of positive integers, it follows that $q_{k-1} \ge k$ for each integer $k \ge 1$, therefore \begin{align} \left| \frac{p_n}{q_n} - \frac{p_m}{q_m} \right| \le \sum_{k=m+1}^{n} \frac{1}{q_{k-1}^2} \le \sum_{k=m+1}^{n} \frac{1}{k^2} < \varepsilon. \end{align} This demonstrates that the sequence $\{p_n/q_n\}$ is a Cauchy sequence in $\mathbb{R}$. Since $\mathbb{R}$ is complete, this sequence converges in $\mathbb{R}$.
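To make the hypothesis concrete, here is a toy example of my own construction (not from the problem): $p_n = 2n-1$, $q_n = n$ for $n\ge 1$ are strictly increasing integer sequences satisfying the relation, and their quotients $2-1/n$ converge to $2$:

```python
def p(n):
    return 2 * n - 1

def q(n):
    return n

# the hypothesis p_n q_{n-1} - p_{n-1} q_n = 1, checked for n = 2, ..., 199
for n in range(2, 200):
    assert p(n) * q(n - 1) - p(n - 1) * q(n) == 1

# successive gaps are exactly 1/(q_n q_{n-1}), as used in the proof
for n in range(2, 200):
    gap = p(n) / q(n) - p(n - 1) / q(n - 1)
    assert abs(gap - 1 / (q(n) * q(n - 1))) < 1e-12

print(p(10**6) / q(10**6))  # ≈ 2.0, the limit of the quotients
```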
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does $AB + BC + CA = (A\oplus B)C + AB$ not imply $BC + CA = (A\oplus B)C$ in boolean algebra? I am new to Logical Inequalities. Please bear with me if I am inexplicably stupid. The following is a Proven Equality: $$AB + BC + CA = (A\oplus B)C + AB$$ I noticed that I cannot "cancel out" $AB$ from both sides of the identity, i.e. $$BC + CA \neq (A\oplus B)C$$ In mathematical equations, we cancel out stuff all the time. For example, $$X + 3 = Y + 3 \quad\implies\quad X = Y$$ Can this not be done with Logical Statements? Is there a reason why this is such a way?
It is worth clarifying here: there is only one single rule at all times for equations: If $x=y$, then $f(x) = f(y)$, where $f$ is some mapping. Nothing more, nothing less. What about cancellation? Isn't it true that, if $f(x) = x+ 3$, then $f(x)=f(y)$ implies $x = y$? Does this make a rule? No. Forget it. What really happens behind this is that we define $g(x)=x-3$, so $g(f(x)) = x$. And we apply the one single rule and we get $g(f(x)) = g(f(y))$, which simplifies to $x=y$. Is that clear? Now go ahead and find a function $g$ such that, when composed with $f(x) = x + AB$ (the OR that you want to cancel), it recovers $g(f(x))=x$. There is no such function, because $f$ is not injective: for instance $f(0) = f(AB) = AB$. And this is why cancellation doesn't hold.
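An exhaustive truth table (my addition) makes both halves concrete: the original identity holds for all eight assignments, while the "cancelled" equation fails at $A=B=C=1$ (below, `&`, `|`, `^` denote AND, OR, XOR):

```python
from itertools import product

for A, B, C in product([0, 1], repeat=3):
    lhs = (A & B) | (B & C) | (C & A)
    rhs = ((A ^ B) & C) | (A & B)
    assert lhs == rhs  # the proven identity holds everywhere

# ...but "cancelling" the OR-ed term AB does not survive:
counterexamples = [(A, B, C)
                   for A, B, C in product([0, 1], repeat=3)
                   if ((B & C) | (C & A)) != ((A ^ B) & C)]
print(counterexamples)  # [(1, 1, 1)]: LHS = 1 but RHS = 0 there
```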
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
An integration inequality $f$ is differentiable on [-1,1], $M=\sup|f'|$. There is $a \in (0,1)$ such that $\int_{-a}^a f(x)dx=0$. Prove that $$ \left|\int^1_{-1}f(x)dx\right| \le M(1-a^2) $$ I don't know how to use $\int^a_{-a} f(x) dx = 0$.
$$ \left| \int_{-1}^1 f(x) \mathrm{d}x \right|= \left|\int_{-1}^{-a} f(x) \mathrm{d}x + \int_{a}^1 f(x) \mathrm{d}x\right| \\ \le \int_{-1}^{-a} |f(x)| \, \mathrm{d}x + \int_a^1 |f(x)| \, \mathrm{d}x, $$ where the first equality uses $\int_{-a}^a f(x)\,\mathrm{d}x=0$. As mentioned in the comments, $\int_{-a}^a f(x)\mathrm{d}x=0$ implies that $f(x)=0$ for some $x\in(-a,a)$, since if $f(x)$ were either strictly positive or strictly negative, then its integral over $(-a,a)$ could not be $0$. Let $x_0\in(-a,a)$ be a point where $f(x_0)=0$. Then, as $\left\lvert f'(x) \right\rvert \le M$, we have that for every $x\in[-1,1]$, $$|f(x)| =|f(x)-f(x_0)| \le M|x-x_0|.$$ Therefore $$\int_{-1}^{-a} |f(x)| \, \mathrm{d}x\le M\int_{-1}^{-a} (x_0-x) \, \mathrm{d}x=M\int_a^1(x_0+x)\mathrm{d}x,$$ $$\int_{a}^{1} |f(x)| \, \mathrm{d}x\le M\int_{a}^{1} (x-x_0) \, \mathrm{d}x,$$ which gives us the bound $$M\int_a^1(x_0+x)\mathrm{d}x+M\int_{a}^{1} (x-x_0) \, \mathrm{d}x=M\int_a^1 2x\mathrm{d}x=M(1-a^2).$$
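A numerical illustration with a sample function of my choosing: $f(x)=x^2-\frac{a^2}{3}$ satisfies $\int_{-a}^a f = 0$ and has $M=\sup_{[-1,1]}|f'|=2$; with $a=\tfrac12$ the left side is $\tfrac12$ and the bound is $\tfrac32$:

```python
a, M = 0.5, 2.0

def f(x):
    # sample function: integrates to 0 over [-a, a], and |f'| <= 2 on [-1, 1]
    return x * x - a * a / 3

def midpoint(g, lo, hi, n=20000):
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

assert abs(midpoint(f, -a, a)) < 1e-6   # the hypothesis holds
lhs = abs(midpoint(f, -1, 1))
print(lhs, M * (1 - a * a))             # ≈ 0.5 and 1.5: the bound holds
```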
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Another series involving $\log (3)$ I will show that $$\sum_{n = 0}^\infty \left (\frac{1}{6n + 1} + \frac{1}{6n + 3} + \frac{1}{6n + 5} - \frac{1}{2n + 1} \right ) = \frac{1}{2} \log (3).$$ My question is can this result be shown more simply then the approach given below? Perhaps using Riemann sums? Denote the series by $S$ and let $S_n$ be its $n$th partial sum. \begin{align} S_n &= \sum_{k = 0}^n \left (\frac{1}{6k + 1} + \frac{1}{6k + 3} + \frac{1}{6k + 5} - \frac{1}{2k + 1} \right )\\ &= \sum_{k = 0}^n \left (\frac{1}{6k + 1} + \frac{1}{6k + 2} + \frac{1}{6k + 3} + \frac{1}{6k + 4} + \frac{1}{6k + 5} + \frac{1}{6k + 6} \right )\\ & \quad - \sum_{k = 0}^n \left (\frac{1}{2k + 1} + \frac{1}{2k + 2} \right ) - \frac{1}{2} \sum_{k = 0}^n \left (\frac{1}{3k + 1} + \frac{1}{3k + 2} + \frac{1}{3k + 3} \right )\\ & \qquad + \frac{1}{2} \sum_{k = 0}^n \frac{1}{k + 1}\\ &= H_{6n + 3} - H_{2n + 2} - \frac{1}{2} H_{3n + 3} + \frac{1}{2} H_{n + 1}. \end{align} Here $H_n$ denotes the $n$th harmonic number $\sum_{k = 1}^n \frac{1}{k}$. Since $H_n = \log (n) + \gamma + o(1)$ where $\gamma$ is the Euler-Mascheroni constant we see that $$S_n = \log (6n) - \log (2n) - \frac{1}{2} \log (3n) + \frac{1}{2} \log (n) + o(1) = \frac{1}{2} \log (3) + o(1).$$ Thus $$S = \lim_{n \to \infty} S_n = \frac{1}{2} \log (3).$$
Simpler, I do not know (except with integration as @J.G. answered). Otherwise, in order to have have a good approximation of the partial sums, you can start with $$s_p=\sum_{n=0}^p \frac 1{a n+b}=\frac{\psi \left(p+1+\frac{b}{a}\right)-\psi \left(\frac{b}{a}\right)}{a}$$ and consider that, for large values of $p$, $$s_p=\frac{\log \left({p}\right)-\psi ^{(0)}\left(\frac{b}{a}\right)}{a}+\frac{a+2 b}{2 a^2 p}-\frac{a^2+6 a b+6 b^2}{12a^3 p^2}+\frac{a^2 b+3 a b^2+2 b^3}{6 a^4 p^3}+O\left(\frac{1}{p^4}\right)$$ which, for your case, would give $$S_p=\frac 12{\log (3)}-\frac{1}{54 p^2}+\frac{1}{27 p^3}+\cdots$$
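The predicted rate is easy to observe numerically (my check, in double precision): at $p=1000$ the partial sum falls short of $\tfrac12\log 3$ by about $1/(54p^2)$:

```python
from math import log

def partial_sum(p):
    s = 0.0
    for n in range(p + 1):
        s += 1 / (6*n + 1) + 1 / (6*n + 3) + 1 / (6*n + 5) - 1 / (2*n + 1)
    return s

p = 1000
S = partial_sum(p)
target = 0.5 * log(3)
print(target - S)                 # ≈ 1.85e-08, on the order of 1/(54 p^2)
print((target - S) * 54 * p * p)  # ≈ 1, consistent with the expansion above
```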
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to show that $\{(x, y) \in \mathbb{R}^2\mid\cos(x^2) + x^3 - 47y > e^x - y^2\}$ is open or closed? Show that the set of points $(x, y) \in \mathbb{R}^2$ such that $\cos(x^2) + x^3 - 47y > e^x - y^2$ is an open subset of $\mathbb{R}^2$. I was reading the fact that if $(X,d)$ and $(Y,d)$ are metric spaces and $f:(X,d) \rightarrow (Y,d)$ is continuous and $v$ is an open set (or closed set) in $Y$, then $f^{-1}(v)$ is open (closed) in $X$. I don't really understand how to apply this lemma. So far I have proved using this theorem that the identity function and the constant function are continuous. I want help to understand this problem, and for similar problems I want to know what I am looking for here: what are the steps to show that a given set is closed or open using this lemma?
$f(x,y)=\cos (x^{2})+x^{3}-47y-e^{x}+y^{2}$ defines a continuous function, and the given set is $f^{-1}\big((0,\infty)\big)$: the preimage of the open interval $(0,\infty)$ under a continuous map, hence open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A set has measure zero iff for every $\epsilon>0$ there is a countable covering of open rectangles such that $ \sum_{i=1}^\infty v(Q_i)<\epsilon $ What is shown below is a quotation from "Analysis on manifolds" by James R. Munkres. Definition Let $A$ be a subset of $\Bbb{R}^n$. We say $A$ has measure zero in $\Bbb{R}^n$ if for every $\epsilon>0$, there is a covering $Q_1,Q_2,...$ of $A$ by countably many rectangles such that $$ \sum_{i=1}^\infty v(Q_i)<\epsilon $$ Theorem A set $A$ has measure zero in $\Bbb{R}^n$ if and only if for every $\epsilon>0$ there is a countable covering of $A$ by open rectangles $\overset{°}Q_1,\overset{°}Q_2,...$ such that $$ \sum_{i=1}^\infty v(Q_i)<\epsilon $$ Proof. If the open rectangles $\overset{°}Q_1,\overset{°}Q_2,...$ cover $A$, then so do the rectangles $Q_1,Q_2,...$ . Thus the given condition implies that $A$ has measure zero. Conversely, suppose $A$ has measure zero. Cover $A$ by rectangles $Q'_1,Q'_2,...,$ of total volume less than $\frac{\epsilon}2$. For each $i$, choose a rectangle $Q_i$ such that $$ 1.\quad Q'_i\subset\overset{°}Q_i\text{ and }v(Q_i)\le 2v(Q'_i) $$ (This we can do because $v(Q)$ is a continuous function of the end points of the component intervals of $Q$.) Then the open rectangles $\overset{°}Q_1,\overset{°}Q_2,...$ cover $A$ and $\sum v(Q_i)<\epsilon$. So I don't understand why it is possible to choose the rectangles $Q_i$ so that they satisfy condition $1$. Munkres' explanation is not clear to me, so I would appreciate either a fuller explanation of what Munkres says or a different argument. Could someone help me, please?
Consider a rectangle $R=[a_1,b_1]\times [a_2,b_2] \times ... \times [a_n,b_n]$. For $\epsilon >0$ sufficiently small, $R'=[a_1-\epsilon ,b_1+\epsilon ]\times [a_2-\epsilon ,b_2+\epsilon ] \times ... \times [a_n-\epsilon ,b_n+\epsilon ]$ contains $R$ in its interior, and its volume tends to the volume of $R$ as $\epsilon \to 0$. Hence the volume of $R'$ is at most $2v(R)$ for $\epsilon$ sufficiently small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Recurrence relation $a_{n+1}=a_n^2-2$ The sequence $a_n$ is defined by $a_{n+1}=a_n^2-2,\ a_0=\alpha$. I know that a closed form for $a_n$ is $a_n=\beta^{2^n}+\frac{1}{\beta^{2^n}}$, where $\beta$ satisfies $\beta+\frac{1}{\beta}=\alpha$, and I can prove it by induction. But this way, we have to know the answer in advance. Is there any other solution that leads us to this form?
So far as I know, there is no general method for solving non-linear difference equations, even quadratic ones like this. The equation may suggest the clever transformation $$ \begin{align} a_n&=b_n+\frac1{b_n}\\ a_n^2&=b_n^2+\frac1{b_n^2}+2\\ a_{n+1}&=b_n^2+\frac1{b_n^2}\\ b_{n+1}&=b_n^2 \end{align}$$ but I don't know that I'd have come up with that if I hadn't seen the answer.
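A numerical check of the closed form (my addition), with the sample starting value $\alpha=3$, for which $\beta=(3+\sqrt5)/2$ solves $\beta+1/\beta=3$:

```python
from math import sqrt

alpha = 3.0
beta = (alpha + sqrt(alpha * alpha - 4)) / 2   # root of b + 1/b = alpha

a = alpha
for n in range(1, 6):
    a = a * a - 2                                # iterate the recurrence
    closed = beta ** (2**n) + beta ** -(2**n)    # the claimed closed form
    assert abs(a - closed) / a < 1e-9
print(a)  # a_5, which matches beta**32 + beta**(-32)
```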
{ "language": "en", "url": "https://math.stackexchange.com/questions/3690963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proof gone wrong: why don't all domains have characteristic $0$? I attempted a proof for: the characteristic of a subdomain of an integral domain $D$ is equal to the characteristic of $D$ Proof: Assume $D$ is an integral domain with characteristic $r$. Since $D$ is a ring with unity $1 \neq 0$ and no $0$ divisors, it is easily seen that $n \cdotp 1 \neq 0$ $\forall n \in \mathbb{Z}^+$ because $1 \neq 0$ and $n \neq 0$. Then, $r = 0$. Now, let $A$ be a subdomain of $D$ which means that $A$ is an integral domain contained in $D$. Let $a \in A^*$. Since $A$ contains no divisors of $0$, it follows that $n \cdotp a \neq 0$ $\forall n \in \mathbb{Z}^+$ which shows that $A$ has characteristic $0$ as well. My professor says this proof is completely wrong because I have basically shown that all integral domains have characteristic $0$. Of course, this can't be true. My question is: where have I gone wrong in writing this proof? I was just following the theorems in the book I am reading and somehow ended up with a completely bogus proof! Thanks!
Answer to get this out of the list of unanswered questions, and to help future readers As has been indicated in the comments, it is not always true that $n\cdot 1\neq 0$ for all $n$. For example, in a prime-order field, which is clearly a domain, $p\cdot 1=0$ and this is no contradiction. Probably part of the misconception comes from conflating two notions of what $n\cdot a$ could mean: * *The ring product of two things in the ring *The module action of $\mathbb Z$ on the abelian group $(D,+)$ Of course, when the ring has identity, these two things amount to the same thing, but let me say a sentence or two on the distinction and how it could have contributed to the confusion that came up in the proposed proof. Interpreting $p\cdot 1=0$ as the ring product between $p, 1\in D$, there is no contradiction with the definition of domain, because $p=0\in D$ already. Interpreting $p\cdot 1=0$ as $p\in \mathbb Z$ and $1\in D$, we indeed have that both things in the product are nonzero, and the binary operation yields a zero composition. (This would be an example of nonzero torsion on the abelian group $D$.) But remember this is the operation from $\mathbb Z\times D\to D$, not the ring multiplication, so the criterion for a domain doesn't apply. At any rate, just remember this for rings with identity: the characteristic is the additive order of the identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are the $p$-oldforms $f(z)$ and $f(pz)$ linearly independent at level $\Gamma_0(pN)$? Let $f$ be a newform (normalized eigenform) of weight $k$ and level $\Gamma_0(N)$. Fix $p$ not dividing $N$ and set $f_p(z)=f(pz)$. Viewing $f$ and $f_p$ at level $\Gamma_0(pN)$, why are they linearly independent? Here is what I'm thinking: Suppose there is some nontrivial relation $$ af+bf_p=0. \tag{1}$$ The Hecke eigenvalues for $f$ are the same at level $pN$, except possibly at $p$. Thus, for any $\ell\neq p$, applying $T_\ell-a_\ell(f)$ to equation (1) shows that $T_\ell f_p=a_\ell(f)f_p$. Thus, $f$ and $f_p$ have the same Hecke eigenvalues outside $p$. Now, at this point, I have heard mumblings that the "multiplicity one theorem" shows that $f=f_p$, but I can't seem to find a suitable reference. Simply Googling "multiplicity one" in this context leads to many high-brow results... Diamond and Shurman do not state the multiplicity one theorm in their book (my main reference thus far), but they do cite Miyake's book as a reference. I looked up the theorem there (I believe it is Theorem 4.6.12), and it seems to require that $f$ and $f_p$ both be of level $N$, which (I am fairly certain) they are not. Can someone clarify this for me and possibly provide a good reference for the classical multiplicity one theorem?
From the Fourier series it is obvious that $f \ne cf(pz)$ for any constant $c$. $$f(z)=\sum_n a(n) e^{2i\pi n z},\qquad f(pz)=\sum_n a(n) e^{2i\pi np z}=\sum_n a\left(\frac{n}p\right)1_{p|n} e^{2i\pi n z}$$ The sequence $a(n)$ is not a scalar multiple of $a(\frac{n}p)1_{p|n}$: since $f$ is normalized, $a(1)=1\ne 0$, while the $n=1$ coefficient of $f(pz)$ vanishes because $p\nmid 1$. The Fourier series is uniquely determined by the function: $$a(n)=\int_{iy}^{iy+1}f(z)e^{-2i\pi nz}dz$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating $E[Z(Z-1)(Z-2)(Z-3)]$ where $Z$ is Poisson If $Z$ is a variable that is Poisson distributed, with expected value $E(Z) = 2.5$, I need to solve: $E[Z(Z-1)(Z-2)(Z-3)]$ So what I thought to do first is: $E[(Z^2-Z)(Z^2-2Z)(Z^2-3Z)]$ $[E(Z^2)-E(Z)][E(Z^2)-2E(Z)][E(Z^2)-3E(Z)]$ From here I'm not sure how I can insert $E(Z) = 2.5$, since what I get in the end is $E(Z^2)$
Here is a plain solution, because the exercise was rather designed to have this solution path. We have: $$ \begin{aligned} &\Bbb E[\ Z(Z-1)(Z-2)(Z-3)\ ]\\ &= \sum_{k\ge 0}k(k-1)(k-2)(k-3)\cdot e^{-\lambda}\frac {\lambda^k}{k!}\\ &= \sum_{k\ge 4}k(k-1)(k-2)(k-3)\cdot e^{-\lambda}\frac {\lambda^k}{k!}\\ &= \lambda^4\sum_{k-4\ge 0} e^{-\lambda}\frac {\lambda^{k-4}}{(k-4)!}\\ &=\lambda^4\cdot 1\ . \end{aligned} $$ With the same argument, the general expectation of a random variable of the shape $Z(Z-1)\dots(Z-(m-1))$ is $\lambda^m$.
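A quick numerical check of the closing remark (my own sketch, not part of the argument): summing the series term by term for $\lambda = 2.5$ and $m=4$ reproduces $\lambda^4 = 39.0625$.

```python
import math

def factorial_moment(lam, m, terms=100):
    """E[Z(Z-1)...(Z-m+1)] for Z ~ Poisson(lam), summed term by term."""
    total = 0.0
    pmf = math.exp(-lam)              # P(Z = 0)
    for k in range(terms):
        falling = 1.0
        for j in range(m):
            falling *= (k - j)        # k(k-1)...(k-m+1), zero whenever k < m
        total += falling * pmf
        pmf *= lam / (k + 1)          # P(Z = k+1) from P(Z = k)
    return total

lam = 2.5
print(factorial_moment(lam, 4), lam**4)   # both ≈ 39.0625
```

The same loop with $m=1$ recovers $E(Z)=\lambda$, matching the general formula $\lambda^m$ above.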
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
No sequence of rv's such that $X_n\overset{P}{\rightarrow}0$ and $\mathbb{E}(X_n)\to 2$ and also $\sup\mathbb{E}(X_n^2)<\infty$. Prove that there is no sequence $(X_n)$ such that $X_n\overset{P}{\rightarrow}0$ and $\mathbb{E}(X_n)\to 2$ and also $\sup_n\mathbb{E}(X_n^2)<\infty.$ Attempt. If we didn't have the last restriction, then on $([0,1],\mathcal{B}_{[0,1]},\lambda)$, the sequence $X_n:=2n{1}_{[0,1/n]}$ would give $X_n\overset{P}{\rightarrow}0$ and $\mathbb{E}(X_n)= 2$ (whereas $\sup\mathbb{E}(X_n^2)= +\infty$). Back to the nonexistence part, Markov's inequality: $$\mathbb{P}(|X_n|>\epsilon)\leqslant \frac{\mathbb{E}(X_n^2)}{\epsilon^2}\leqslant \frac{\sup_n\mathbb{E}(X_n^2)}{\epsilon^2}$$ seems compelling to use, but I have not reached a contradiction. Thanks in advance.
Assume that $X_n \to 0$ in probability and $\sup_n \mathbb{E}(X_n^2) < \infty$. Then $\mathbb{E}(X_n) \to 0$. Indeed, let $\varepsilon >0$ and write $$ \begin{align}|\mathbb{E}(X_n)| &\leq \mathbb{E}\left(|X_n| \mathbf{1}_{|X_n| > \varepsilon}\right) + \mathbb{E}\left(|X_n| \mathbf{1}_{|X_n|\leq \varepsilon}\right)\\ &\leq \sqrt{\mathbb E \left(X_n^2\right)\mathbb P\left(|X_n|>\varepsilon\right)}+\varepsilon \end{align} $$ where in the last inequality I use Cauchy-Schwarz. The assumptions imply that $$ \limsup_n \left|\mathbb E (X_n)\right| \leq \limsup_n \sqrt{\mathbb E \left(X_n^2\right)\mathbb P\left(|X_n|>\varepsilon\right)}+\varepsilon = \varepsilon. $$ Since this is true for every $\varepsilon >0$, letting $\varepsilon \to 0$ gives $\mathbb{E} (X_n) \to 0$. The above argument is elementary. As suggested in Oliver Diaz's comment, you can also argue that the sequence $(X_n)_n$ is uniformly integrable (as it is bounded in $L^2$) and it converges to $0$ in probability, hence it converges to $0$ in $L^1$.
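As a footnote, the question's example $X_n = 2n\,\mathbf{1}_{[0,1/n]}$ shows exactly which hypothesis does the work: its first moment is constant while its second moment blows up, so the sequence is not bounded in $L^2$. A quick sketch of the exact moments (my own illustration; the computation is closed-form, the code just evaluates it):

```python
def moments(n):
    """Exact moments of X_n = 2n on [0, 1/n] (and 0 elsewhere) under
    Lebesgue measure on [0, 1]:
        E[X_n]   = 2n   * (1/n) = 2
        E[X_n^2] = 4n^2 * (1/n) = 4n
    """
    return 2 * n * (1 / n), (2 * n) ** 2 * (1 / n)

for n in (1, 10, 100, 1000):
    m1, m2 = moments(n)
    print(n, m1, m2)   # E[X_n] stays at 2 while E[X_n^2] = 4n diverges
```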
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I prove the floor identity $⌊x + n⌋ = ⌊x⌋ + n$ in a more precise way? I am having trouble understanding the proof provided by the author for the property stated after "Goal:". Excerpted from the text, here is a list of useful properties: (PROPERTY 1a) $⌊x⌋ = n$ if and only if $n ≤ x < n + 1$ (1b) $⌈x⌉ = n$ if and only if $n − 1 < x ≤ n$ (1c) $⌊x⌋ = n$ if and only if $x − 1 < n ≤ x$ (1d) $⌈x⌉ = n$ if and only if $x ≤ n < x + 1$ (2) $x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1$ (3a) $⌊−x⌋ = −⌈x⌉$ (3b) $⌈−x⌉ = −⌊x⌋$ (4a) $⌊x + n⌋ = ⌊x⌋ + n$ (4b) $⌈x + n⌉ = ⌈x⌉ + n$ Goal: As an exercise, prove the property $⌊x + n⌋ = ⌊x⌋ + n$ Proof: We will prove the property using a direct proof. Suppose that $⌊x⌋ = m$, where $m$ is a positive integer. By property (1a), it follows that $m ≤ x < m + 1$. Adding $n$ to all three quantities in this chain of two inequalities shows that $m + n ≤ x + n < m + n + 1$. Using property (1a) again, we see that $⌊x + n⌋ = m + n = ⌊x⌋ + n$. This completes the proof. Proofs of the other properties are left as exercises. From: Rosen, Kenneth. Discrete Mathematics and Its Applications (p. 159) My understanding of the property we are tasked with proving is that it is an identity that is stating the proposition $∀x∀n(⌊x + n⌋ = ⌊x⌋ + n)$ where the domains of discourse for $x$ and $n$ are the set of all real numbers and the set of all integers, respectively. With that in mind, since this is a direct proof, shouldn't we start with arbitrary values of the domains (represented by real number $x$ and integer $n$, and NOT JUST POSITIVE INTEGER $n$) and try to show that their properties imply the equation $⌊x + n⌋ = ⌊x⌋ + n$? (Ignoring the positive-n issue, how could we even do this if the only properties we are allowed to assume about $x$ and $n$ are that $x$ is real and $n$ is an integer?) Also, why was he able to start with and assume $⌊x⌋ = m$ ("suppose $⌊x⌋ = m$") out of nowhere?
If anyone can provide a version of his proof that is more precise and doesn't skip steps of reasoning or explanations, that would answer my question.
First, the statement that $m$ is positive appears to simply be an error in the text and should be ignored (just remove the word "positive" from the proof). Then, when it says "Suppose that $\lfloor x\rfloor =m$", that is not an assumption but rather a definition of the variable $m$. In other words, they are just defining $m$ as a name for $\lfloor x\rfloor$. Since the floor of a number is by definition an integer, you then know that $m$ is an integer (and so the later steps that use property (1a) are valid).
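A quick empirical check of property (4a), using exact rational arithmetic so floating-point rounding cannot interfere (an illustration, not a substitute for the proof):

```python
import math
import random
from fractions import Fraction

random.seed(0)
for _ in range(1000):
    # an "arbitrary" real (here a rational, for exactness) and an arbitrary integer
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**3))
    n = random.randint(-50, 50)
    assert math.floor(x + n) == math.floor(x) + n
print("floor(x + n) == floor(x) + n held for every sampled pair")
```

Note that $n$ ranges over negative integers too, matching the student's observation that the word "positive" in the book's proof is unnecessary.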
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Numerical approximation of derivatives I am taking a course on deep learning and I have a good understanding of high school math. In the course they gave a formula saying that the derivative of a function $f(x)$ is $f'(x) = \lim_{h\to0}\frac {f(x+h)-f(x)}{h}$ (formula 1), which is the first principle of derivatives. Then they gave another formula, saying that in numerical approximation the first principle doesn't work well. The formula was $f'(x)=\lim_{h\to0} \frac{f(x+h)-f(x-h)}{2h}$ (formula 2). They also said that the order of error for the first method is $h$ and for the second is $h^2$, which is much better assuming $h$ is less than one. I don't understand how this works, because take $f(x)=x^3$: the triangle considered when we take $(x,f(x))$ and $(x+h,f(x+h))$ will be smaller than the triangle considered when taking $(x-h,f(x-h))$ and $(x+h,f(x+h))$, so shouldn't formula 1 give a better approximation of the slope than the second? Since bigger triangles tend to make more errors?
In the first case, the error term (using Taylor) is of the form ${ 1 \over 2} h f''(\xi)$ whereas for the second it is ${1\over 4} h (f''(\xi_+)- f''(\xi_-))$ and if $f$ is smooth then $|f''(\xi_+)- f''(\xi_-)| \le L|\xi_+- \xi_-| \le L |h|$, which is where the $h^2$ comes from. Roughly speaking the even terms tend to cancel.
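The two error orders are easy to see numerically (my own illustration, using $f=\sin$ at $x=1$, where the exact derivative is $\cos 1$):

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # one-sided difference, error O(h)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # symmetric difference, error O(h^2)

x, exact = 1.0, math.cos(1.0)
for h in (1e-1, 1e-2, 1e-3):
    e_fwd = abs(forward(math.sin, x, h) - exact)
    e_cen = abs(central(math.sin, x, h) - exact)
    print(f"h={h:g}  forward error={e_fwd:.2e}  central error={e_cen:.2e}")
```

Shrinking $h$ by a factor of $10$ cuts the forward error by roughly $10$ but the central error by roughly $100$, matching the $O(h)$ versus $O(h^2)$ claim.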
{ "language": "en", "url": "https://math.stackexchange.com/questions/3691954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How do I calculate the limit of $\lim_{n\to \infty} (1-\frac{\theta^2}{2n^2})^{2(n+1)}$ I'm going through some physics problems about polarizers, and one problem is about the case where $n+1$ polarizers are stacked up and I have to look at the case where $n \to \infty$. Now I came up with a solution for the intensity in the case of $n+1$ polarizers: $$I_{n+1}=I_0\left(\cos^2\left(\frac{\theta}{n}\right)\right)^{n+1}$$ Doing the Taylor expansion of $\cos(x)$ I get: $$I_{n+1}=I_0\left(1-\frac{\theta^2}{2n^2}\right)^{2(n+1)}$$ I know that the limit of this as $n \to\infty$ should be just $I_0$, but I don't really know how to get to that result. I also saw someone saying that $$\left(1-\frac{\theta^2}{2n^2}\right)^{2(n+1)} \approx e^{-\theta^2/(n+1)}$$ which would indeed give me $1$ as $n \to \infty$, but I don't want to use something that I don't fully understand how to get to. Would be really great if someone could help me out! Thank you!
$$\lim\left(1-\frac{\theta^2}{2n^2}\right)^{2(n+1)}=\lim\left(1-\frac{\theta^2}{2n^2}\right)^{2n^2/n}\left(1-\frac{\theta^2}{2n^2}\right)^2=\left(e^{-\theta^2}\right)^{\lim 1/n}1^2=1.$$
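A numeric check (my own sketch, taking $\theta = 1$) shows the same behaviour: the base tends to $1$ like $1/n^2$ while the exponent only grows like $n$, so the whole expression behaves like $e^{-\theta^2/n}\to 1$.

```python
theta = 1.0
for n in (10, 100, 1000, 10**5):
    val = (1 - theta**2 / (2 * n**2)) ** (2 * (n + 1))
    print(n, val)   # climbs towards 1, roughly like exp(-theta**2 / n)
```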
{ "language": "en", "url": "https://math.stackexchange.com/questions/3692119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How would you simplify the following boolean expression $(!A B)+(B !C)+(BC)+(A !B !C)$? How would you simplify the following boolean expression $(!A B)+(B !C)+(BC)+(A !B !C)$? I factorised B and managed to get $B(!A+!C+C)+(A !B !C) = B+(A !B !C)$, but I do not know how to continue. Using a K-map, I managed to get the result of $B+A!C$ and I am trying to achieve the same result using regular identities and laws of boolean algebra. By the way, sorry for poor formatting, but I do not know how I could paste an expression from word to make it look better and easier to read.
Actually, there is a non-intuitive (doesn't hold in ordinary algebra) Boolean algebra law that can be applied here: distributivity of disjunction over conjunction expressed as follows $$x+yz=(x+y)(x+z).$$ Using your notation $x=B$, $y=!B$, $z=A!C$. Hence we have $$B+!BA!C=(B+!B)(B+A!C)=1\cdot(B+A!C)=B+A!C.$$
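With only three variables, an exhaustive truth-table check confirms the simplification (a quick sketch; `a`, `b`, `c` play the roles of A, B, C):

```python
from itertools import product

def original(a, b, c):
    # (!A B) + (B !C) + (BC) + (A !B !C)
    return ((not a) and b) or (b and not c) or (b and c) or (a and not b and not c)

def simplified(a, b, c):
    # B + A !C
    return b or (a and not c)

for a, b, c in product([False, True], repeat=3):
    assert original(a, b, c) == simplified(a, b, c)
print("B + A·!C agrees with the original expression on all 8 rows")
```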
{ "language": "en", "url": "https://math.stackexchange.com/questions/3692255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An interesting property of a particular set of triples - multiplying two and adding the other always gives 1 Find all triples of real numbers such that multiplying any two in a triple and adding the third always gives $1$. When will this be the case? How can we find all such triples? So far, I've let the numbers be $a$, $b$ and $c$. Therefore, $ab+c$ etc must $= 1$ but how can I restrict the possibilities to find all possible triples? I seem to think this has something to do with 1s and 0s, eg. $0,0,1$ or $0,1,1$ Many thanks guys!!
If $ab+c=ac+b=bc+a=1$, then $a(b-c)+(c-b)=0$. Factoring out the common factor $b-c$, we get $(a-1)(b-c)=0$. Hence, either $a=1$ or $b=c$. Likewise, either $b=1$ or $a=c$, and either $c=1$ or $a=b$. Suppose that $a=1$. Then, $b+c=bc+1=1$, so $bc=0$. Hence, one of $b$ and $c$ must be $0$ and the other must be $1$, giving the solutions $(1,0,1)$ and $(1,1,0)$. Likewise, if $b=1$, then similar steps will give the solutions $(0,1,1)$ and $(1,1,0)$. Finally, if $c=1$, then similar steps will give the solutions $(0,1,1)$ and $(1,0,1)$. If none of $a$, $b$, and $c$ are equal to $1$, then they must all be equal. This reduces to solving the equation $a^2+a=1$, or $a^2+a-1=0$. By the quadratic formula, the two resulting solutions are $a=\frac{-1+\sqrt{5}}{2}$ and $a=\frac{-1-\sqrt{5}}{2}$. Hence, the $5$ solutions are $(0,1,1)$, $(1,0,1)$, $(1,1,0)$, $(\frac{-1+\sqrt{5}}{2},\frac{-1+\sqrt{5}}{2},\frac{-1+\sqrt{5}}{2})$, and $(\frac{-1-\sqrt{5}}{2},\frac{-1-\sqrt{5}}{2},\frac{-1-\sqrt{5}}{2})$. Using $\phi$, the golden ratio, the last $2$ solutions can be written as $(\phi-1,\phi-1,\phi-1)$ and $(-\phi,-\phi,-\phi)$ respectively.
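All five triples can be checked numerically (a small sketch; here $\phi = (1+\sqrt5)/2$, so $\phi-1$ and $-\phi$ are the two roots of $a^2+a=1$):

```python
import math

phi = (1 + math.sqrt(5)) / 2
solutions = [
    (0, 1, 1), (1, 0, 1), (1, 1, 0),
    (phi - 1, phi - 1, phi - 1),
    (-phi, -phi, -phi),
]

for a, b, c in solutions:
    # multiplying any two of the three and adding the third must give 1
    for value in (a*b + c, a*c + b, b*c + a):
        assert abs(value - 1) < 1e-12
print("all 5 triples satisfy the three equations")
```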
{ "language": "en", "url": "https://math.stackexchange.com/questions/3692438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Matrices: how to prove $A^{-1} = A$ Apologies for the mix-up from earlier; the wrong values were placed in $x_2$ and $x_3$. Question 1: Prove that the following is true for matrix $A$: $A^{-1}$ = $A^{T}$ = $A$ $A$= $$ 1/7 \begin{pmatrix} 2 & 3 & 6 \\ 3 & -6 & 2 \\ 6 & 2 & -3 \\ \end{pmatrix} $$ $A^T$= $$ 1/7 \begin{pmatrix} 2 & 3 & 6 \\ 3 & -6 & 2 \\ 6 & 2 & -3 \\ \end{pmatrix} $$ The determinant is $343$ The rule has already been applied to the matrix $(+ - +) $ $A^{-1}$= $$ 1/343 \begin{pmatrix} 14 & 21 & 42 \\ -14 & -21 & -42 \\ 42 & 14 & 21 \\ \end{pmatrix} $$ This is as far as I can go; the identity rule is not producing $1$ on the diagonal. How can I solve it from here? $A^{-1}$ $x_1$ = $2/49$ $A$ $x_1$ = $2/7$
To check that $A^{-1}=A$, you don't need to "calculate" $A^{-1}$. If $A^{-1}=A$, then $A^2=A^{-1}A=I$; and viceversa, if $A^2=I$, then you know that $A^{-1}=A$. Here you can calculate directly that $A^2=I$. Now, in light of the above, your calculation of $A^{-1}$ is wrong. You don't say what computations you made, so I cannot comment on that.
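For the matrix in the question this check takes two lines (a NumPy sketch, added for illustration):

```python
import numpy as np

A = np.array([[2, 3, 6],
              [3, -6, 2],
              [6, 2, -3]]) / 7

print(np.allclose(A @ A, np.eye(3)))   # True: A^2 = I, hence A^{-1} = A
print(np.allclose(A, A.T))             # True: A is symmetric, so A^T = A as well
```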
{ "language": "en", "url": "https://math.stackexchange.com/questions/3692555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How should I calculate $||\underline{u}-\underline{w}||_{2}$? I'm trying to calculate $||\underline{u}-\underline{w}||_{2}$ where: $$ u=\begin{bmatrix}1 & 3\\ 2 & 2\\ 3 & 1 \end{bmatrix},\,\,\, w=\begin{bmatrix}3 & 1\\ 2 & 2\\ 1 & 3 \end{bmatrix} $$ I'm not familiar with the $|| \cdot ||_2$ operator and I'm not sure how to search it in the search engine. How should I calculate $||\underline{u}-\underline{w}||_{2}$?
What you're looking for is the matrix norm $\|\cdot\|_p$ given by $$\|A\|_p:=\max_{|x|_p=1}|Ax|_p$$ where $|x|_p$ is the vector $p$-norm. In particular, for $p=2,$ $$\|A\|_2=\sqrt{\lambda_{\max}(A^*A)}$$ where $\lambda_{\max}(A)$ denotes the largest eigenvalue of $A.$
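Assuming $\|\cdot\|_2$ here means this operator norm, both formulas give the same value for the matrices in the question (a NumPy sketch; the answer comes out to $4$):

```python
import numpy as np

u = np.array([[1, 3], [2, 2], [3, 1]])
w = np.array([[3, 1], [2, 2], [1, 3]])
d = u - w

n1 = np.linalg.norm(d, 2)                        # operator 2-norm = largest singular value
n2 = np.sqrt(max(np.linalg.eigvalsh(d.T @ d)))   # sqrt of largest eigenvalue of d^T d
print(n1, n2)   # both 4.0
```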
{ "language": "en", "url": "https://math.stackexchange.com/questions/3692749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem about showing a $k$-coloring of the graph $G$ Let $G$ be a $k$-chromatic graph and $$f: V(G) \to [k]$$ a $k$-coloring of $G$. Show that for each $i \in \{1, \ldots, k\}$, there exists $u \in f^{-1}(i)$ such that for each $j \in \{1, \ldots, k\} \setminus \{i\}$, there exists $v \in N(u)$ of color $j$. I have been stuck on this problem for quite a while. We have just started studying colorings in graphs; can someone give me some advice?
here is an outline of the proof, hopefully you should be able to finalise it.

* Reason by contradiction.
* Start with a valid $k$-coloring that does not verify your property.
* Then there is one color $k$ such that every vertex $u$ colored $k$ has some color $j_u$ not used by its neighbours.
* Can you change the $k$-coloring in order to use one less color (and remain a valid coloring)?
* If so, you will have a valid $(k-1)$-coloring of the graph, hence a contradiction.

Spoiler: Suppose that there is a color (for instance color $k$) such that for any vertex $u$ colored $k$ (i.e. $u\in f^{-1}(k)$), there exists one color $j\in \{1,\ldots, k-1\}$ such that $u$ has no neighbour with color $j$.

Remark: Let $U=f^{-1}(k)$ be the set of vertices with color $k$. Note that because our coloring is a valid one, the set $U$ is an independent set, i.e. for any two vertices $u,v\in U$ there is no edge between $u$ and $v$.

Changing the coloring: Now for any vertex $u$ in $U$, because there is one color $j_u$ not used by its neighbours, you can change its color from $k$ to $j_u$, and this will still be a valid coloring of the graph.

Finishing: Because the set $U$ is an independent set, changing the coloring of $u$ won't affect the other vertices in $U$. Therefore they will still have the property of having one color not used by their neighbours, and you can iterate over all vertices in $U$.

Conclusion: You've built a valid $(k-1)$-coloring of your graph $G$. But the chromatic number of $G$ is $k$. Hence a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3693077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In a triangle, G is the centroid of triangle ADC. AE is perpendicular to FC. BD = DC and AC = 12. Find AB. G is the centroid of the triangle ADC. AE is perpendicular to FC. BD = DC and AC = 12. Find AB. According to the solution manual, we can let the midpoint of AC be H. D, G, and H are collinear as G is the centroid. Given that AGC is a right triangle, AG is 6, and DG is 12. How come AG is 6 and DG is 12?
Given AG $\perp$ CG, the midpoint H of AC is the circumcenter of the right triangle AGC, which yields GH = $\frac12$AC = 6. Since the centroid G divides the median so that DG = 2GH, we get DH = DG + GH = 3GH = 18. Then AB = 2DH = 36, since D and H are the midpoints of BC and AC, making DH a midline parallel to AB.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3693233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $|\sin(0.1) - 0.1| \leq 0.001$ with the Lagrange remainder Show that $|\sin(0.1) - 0.1| \leq 0.001$ I know that's a basic exercise on the Taylor polynomial, but I have made a mistake somewhere that I can't find. Anyway, here's my attempt: Because the function $f: \mathbb{R} \rightarrow \mathbb{R}$, $x \rightarrow \sin(x)$ is once differentiable, by the Taylor polynomial formula we find: \begin{equation*} \sin(x) = x + R^1_0 \sin(x) \end{equation*} Therefore, \begin{equation*} |\sin(0.1) - 0.1| = |R^1_0 \sin(0.1)| \end{equation*} Because $f$ is twice differentiable, by the Lagrange remainder formula, $\exists c \in ]0, 0.1[$ such that \begin{align*} R^1_0 \sin(0.1) &= f^{(2)}(c) \frac{(0.1)^2}{2!} \\ &= - \sin(c) \frac{0.01}{2} \\ |R^1_0 \sin(0.1)| &= |\sin(c) \cdot 0.005| \end{align*} Because $|\sin(x)| \leq 1$, $\forall x \in \mathbb{R}$, \begin{align*} |R^1_0 \sin(0.1)| &\leq |0.005| \end{align*} However, $0.005 > 0.001$, so I'm wondering where I made a mistake?
You also know that $\vert \sin x \vert \le \vert x \vert$. Hence $$\vert \sin(c) \vert \frac{0.01}{2} \le 0.1 \frac{0.01}{2} = 0.0005 < 0.001$$
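A direct computation agrees with both bounds (illustration only; the exercise of course asks for the Taylor estimate, not a calculator check):

```python
import math

err = abs(math.sin(0.1) - 0.1)
print(err)                      # ≈ 1.67e-4
assert err <= 0.1 * 0.01 / 2    # the sharper bound 0.0005 derived above
assert err <= 0.001             # the bound requested by the exercise
```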
{ "language": "en", "url": "https://math.stackexchange.com/questions/3693584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does Grinberg's theorem work? Grinberg's theorem is a condition used to prove the existence of a Hamiltonian cycle on a planar graph. It is formulated in this way: Let $G$ be a finite planar graph with a Hamiltonian cycle $C$, with a fixed planar embedding. Denote by $f_k$ and $g_k$ the number of $k$-gonal faces of the embedding that are inside and outside of $C$, respectively. Then $$ \sum_{k \geq 3} (k-2)(f_k - g_k) = 0 $$ While I think I understood the definition, I do not know how to apply it to a real problem. For instance, in a graph like this: how can I identify the internal/external faces of a hypothetical Hamiltonian cycle $C$ if what I want to do is actually find one (a Hamiltonian cycle)? I mean, the theorem should be used (as far as I understood) to prove (or disprove) the existence of a Hamiltonian cycle, yet the definition implies that I have to find one to use the whole theorem. Can anybody help me understand? I'd like to see an example; even one different from the graph I brought would be fine.
Of course, before we find a Hamiltonian cycle or even know if one exists, we cannot say which faces are inside faces or outside faces. However, if there is a Hamiltonian cycle, then there is some, unknown to us, partition for which the sum equals $0$. So the general idea for using the theorem is this: if we prove that no matter how you partition the faces into "inside" and "outside", we cannot make the sum equal to $0$, then there cannot be a Hamiltonian cycle. (In subtler applications, I can imagine making arguments such as "if all of these faces are inside faces of a Hamiltonian cycle, then this other face cannot be an outside face". But I don't know of any applications where we can't just take "inside" and "outside" to be an arbitrary partition of the faces and get a contradiction.) In the example you give, I count $21$ faces with $5$ sides, $3$ faces with $8$ sides, and $1$ face with $9$ sides (the external face). So in order to make the sum equal $0$, we must have $$ 3(f_5 - g_5) + 6 (f_8 - g_8) + 7 (f_9 - g_9) = 0 $$ where $f_5 + g_5 = 21$, $f_8 + g_8 = 3$, and $f_9 + g_9 = 1$. Taking the sum mod $3$, we get $f_9 - g_9 \equiv 0 \pmod 3$, which cannot happen if one of $f_9, g_9$ is $1$ and the other is $0$. So it's impossible to make the sum equal $0$, and therefore there cannot be a Hamiltonian cycle in this graph. This strategy, by the way, cannot prove the existence of a Hamiltonian cycle. Just because there's an arbitrary partition of the faces into an "inside" category and an "outside" category for which the sum is $0$, doesn't mean there is actually a Hamiltonian cycle that contains all of the "inside" faces and none of the "outside" faces.
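The mod-$3$ argument can also be brute-forced over every conceivable inside/outside split of the face counts (a sketch; `f5` counts pentagons assumed inside, and so on):

```python
def grinberg_sums():
    # 21 pentagons, 3 octagons, 1 nine-gon (the outer face)
    for f5 in range(22):            # pentagons inside the hypothetical cycle
        for f8 in range(4):         # octagons inside
            for f9 in range(2):     # nine-gon inside (0 or 1)
                g5, g8, g9 = 21 - f5, 3 - f8, 1 - f9
                yield 3 * (f5 - g5) + 6 * (f8 - g8) + 7 * (f9 - g9)

assert all(s != 0 for s in grinberg_sums())
print("no inside/outside split makes Grinberg's sum zero, so no Hamiltonian cycle exists")
```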
{ "language": "en", "url": "https://math.stackexchange.com/questions/3693993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Units of $R[X]/(aX-1)$ In the most upvoted answer to another question here, the author states that: $R\to R[x]$ followed by the quotient map $R[x]\to R[x]/(ax-1)$. Call this $f$. Note that $f(a)$ is a unit in $R[x]/(ax-1)$ If I understand correctly, the result is $f(a)=ax^0\mod{ax-1}=ax^0$, and I fail to see why this is necessarily a unit, as $a\in R$ is just a non-zero element in an integral domain, so it doesn't necessarily have an inverse. Why is $f(a)$ a unit? Am I calculating it wrong?
Let the ideal $(ax-1) = I$. The map $f$ sends $a \mapsto a+I$ in the quotient ring and $(x+I)(a+I) = ax+I$ but $-ax+1 \in I$ so $(x+I)(a+I) = 1+I$. Hence $f(a)$ is a unit in $R[x]/I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3694169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Doubt in area of an infinitesimally thin ring I want to find out the area of a ring of infinitesimal width in a derivation in electrostatics. So here's how my teacher explained it. Let the inner radius of the thin ring be $r$ and the outer be $r + dr$. Area of the thin ring $$dA = π(r + dr)² - πr² = π(r² + (dr)² + 2r dr) - πr² = π(dr)² + π(2r dr) = π(2r dr).$$ My doubt is: why do we ignore the term with $(dr)²$? What determines whether to ignore a term or to keep it, as the term $π(2r dr)$ is kept? If we say that $(dr)²$ is infinitesimally small, then can't we say that the term with $dr$ is also infinitesimally small enough to be ignored?
Notice, $dr$ is infinitesimally small: it tends to zero, i.e. $dr\to 0$, but $dr\ne0$. Now the square of an infinitesimally small length, $(dr)^2$, is smaller still. We can say that $2\pi r\,dr$ is much, much larger than $\pi(dr)^2$. Thus adding $\pi(dr)^2$ to $2\pi r\,dr$ doesn't make any valuable difference. Therefore $\pi(dr)^2$ is neglected when added to $2\pi r\,dr$, even though $2\pi r\,dr$ is itself very small. $$2\pi r\,dr+\pi(dr)^2\approx 2\pi r\,dr\quad \quad (\because \ \ \ \pi(dr)^2\ll2\pi r\,dr)$$ Take a simple example: for a very small number, say $10^{-15}$, adding its square $(10^{-15})^2$ makes no difference; the sum is still approximately $10^{-15}$, i.e. $10^{-15}+(10^{-15})^2\approx10^{-15}$
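The same point, numerically (my own illustration): the discarded term $\pi(dr)^2$ shrinks relative to the kept term $2\pi r\,dr$ like $dr/(2r)$, so for small $dr$ it contributes essentially nothing.

```python
import math

r = 1.0
for dr in (1e-2, 1e-4, 1e-6):
    exact = math.pi * (r + dr)**2 - math.pi * r**2   # = pi*(2r dr) + pi*dr^2
    kept = 2 * math.pi * r * dr
    dropped = math.pi * dr**2
    print(f"dr={dr:g}  dropped/kept = {dropped / kept:.1e}")
```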
{ "language": "en", "url": "https://math.stackexchange.com/questions/3694381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Geometry behind $\int_{0}^{2π}\frac{e^{ix}}{e^{ix}-z}~dx=2\pi \ (|z|<1)$ It's a nice exercise to prove $$\int_{0}^{2π}\frac{e^{ix}}{e^{ix}-z}~dx=2\pi \quad (|z|<1)$$ using Leibniz's rule. But what's the geometrical interpretation of this? Any ideas?
Multiplying by $i$, we get, with $w=e^{ix}$, $$ \oint_{|w|=1}\frac{\mathrm{d}w}{w-z}=2\pi i\,[|z|\lt1]\tag1 $$ The function $\frac1{w-z}$ has residue $1$ at $w=z$, and this simply states that $z$ is inside the contour $|w|=1$ when $|z|\lt1$ and outside when $|z|\gt1$. When $|z|=1$, $(1)$ only converges in the principal value sense to $\pi i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3694502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Proving a quadratic inequality Given that $(x+y)^2 \geq 4xy$, then prove $1/x^2 + 1/y^2 \geq 4/(x^2 + y^2)$. I have tried taking the reciprocal, squaring, square rooting, expanding, factoring and various other algebraic manipulations, but nothing has worked so far. The question seems to be structured in a way which suggests that I should be manipulating the first inequality (which is already given) to produce the second inequality. Does anyone know how to do this?
We can not make it because it's wrong! Try $x=0$. The condition gives $$y^2\geq0,$$ which is true, but the statement, which we need to prove is wrong because we can not divide by $0$. For $xy>0$ we obtain: $$(x+y)^2\geq4xy$$ it's $$\frac{x^2+2xy+y^2}{xy}\geq4$$ or $$\frac{x}{y}+1+\frac{y}{x}+1\geq4$$ or $$(x+y)\left(\frac{1}{x}+\frac{1}{y}\right)\geq4.$$ Now, after replacing $x$ on $x^2$ and $y$ on $y^2$ we obtain: $$(x^2+y^2)\left(\frac{1}{x^2}+\frac{1}{y^2}\right)\geq4$$ or $$\frac{1}{x^2}+\frac{1}{y^2}\geq\frac{4}{x^2+y^2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3694710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A property of functorially finite subcategory Let $A$ be a finite dimensional algebra and $\mathcal{T}$ a full subcategory of mod$A$. $\mathcal{T}$ is said to be contravariantly finite in mod$A$ if for every module $M \in mod A$, there is some $X \in \mathcal{T}$ and a morphism $f:X \rightarrow M$ such that for every $X' \in \mathcal{T}$, the sequence $Hom_A(X',X) \overset{f_{\ast}}{\rightarrow} Hom_A(X',M) \rightarrow 0$ is exact. Dually, we can define covariantly finite subcategories. $\mathcal{T}$ is said to be functorially finite in mod$A$ if it is both contravariantly and covariantly finite. $X \in \mathcal{T}$ is Ext-projective if $Ext_A^1(X,\mathcal{T})=0$. If $\mathcal{T}$ is a torsion class of a torsion pair and functorially finite in mod$A$, then how can one show that there are finitely many indecomposable Ext-projective modules in $\mathcal{T}$ up to isomorphism?
This essentially follows from the results of Auslander, M.; Smalø, Sverre O., Preprojective modules over Artin algebras, J. Algebra 66, 61-122 (1980). ZBL0477.16013. and Auslander, M.; Smalø, Sverre O., Almost split sequences in subcategories, J. Algebra 69, 426-454 (1981). ZBL0457.16017. but an explicit statement and proof can be found in Smalø, Sverre O., Torsion theories and tilting modules, Bull. Lond. Math. Soc. 16, 518-522 (1984). ZBL0519.16016. Concretely, if $\mathcal{T}$ is a functorially finite torsion class and $$A\stackrel{\alpha}{\longrightarrow} T_0\longrightarrow T_1\longrightarrow0$$ is an exact sequence with $\alpha$ a minimal left $\mathcal{T}$-approximation, then the indecomposable Ext-projective modules are the indecomposable summands of $T_0\oplus T_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3694877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to prove that $f(x)=x+\frac{1}{x}$ is not cyclic? Let $f(x)=x+\frac{1}{x}$ and define a cyclic function as one where $f(f(...f(x)...))=x$. How do I prove that $f(x)$ is not cyclic? What I tried was to calculate the first composition: $f(f(x))=x+\frac{1}{x}+\frac{1}{x+\frac{1}{x}}=\frac{x^4+3x^2+1}{x^3+x}$ Intuitively, I feel that this is clearly not going to simplify down to $x$, but how can I prove this beyond reasonable doubt?
Note that $ f(\frac{1}{x})=f(x) $ while $\frac{1}{x}\neq x$ for $x\neq\pm1$, so $f$ is not injective. If $f^n(x)=x, \ n\geq2, $ for all $x$, then $f$ is injective (it has $f^{n-1}$ as an inverse), which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Lottery variance The chance to win in a lottery game is $0.1$. Michael decided to buy a ticket every week until he wins or until he has bought 5 tickets. If $X$ is the number of weeks Michael bought a lottery ticket, what is the variance of $X$? So I calculated for $X=5$, which means LLLLL or LLLLW. I calculated the probabilities for both, added them up together, and then calculated the variance by the formula $\frac{1-p}{p^2}$. I get $0.798$, which I'm not sure makes sense. Am I doing something wrong? Do I need to compute all weeks? Because if so, in the end I get 1.
The issue is that $X$ does not have a geometric distribution, precisely because you have the extra condition that $X \leq 5$. e.g. A geometric distribution would have $\mathbb{P}(X = 6) > 0$, but clearly $X$ can never be 6. So $X$ is a discrete random variable that takes values in $\{1,2,3,4,5\}$. Let $Y$ be a geometrically distributed random variable with $p = 0.1$. If $k \leq 4$, then $X$ is behaving geometrically, i.e. $$ \mathbb{P}(X = k) = \mathbb{P}(Y = k) = (1-p)^{k-1}p $$ But we have $$ \mathbb{P}(X = 5) = \mathbb{P}(Y \geq 5). $$ Can you see now how to proceed computing statistics for $X$?
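Following this recipe, the whole truncated distribution of $X$ fits in a few lines (a sketch; note how far the result is from the untruncated geometric variance $(1-p)/p^2 = 90$):

```python
p = 0.1
probs = {k: (1 - p)**(k - 1) * p for k in range(1, 5)}  # X behaves geometrically for k <= 4
probs[5] = (1 - p)**4                                   # P(X = 5) = P(Y >= 5): four losses, then stop

mean = sum(k * q for k, q in probs.items())
var = sum(k**2 * q for k, q in probs.items()) - mean**2
print(sum(probs.values()), mean, var)   # 1.0, ≈ 4.0951, ≈ 1.9881
```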
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Writing explicitly $(s^2-1)^2+(t^2-1)^2$ as a polynomial in $st$ and $s+t$? Consider the symmetric polynomial $$ P(s,t)=(s^2-1)^2+(t^2-1)^2.$$ How can we write $P$ as a polynomial in the variables $st,t+s$? The Fundamental theorem of symmetric polynomials implies this is possible, but I am having trouble doing it in practice.
$P(s,t)=s^4+t^4-2s^2-2t^2+2$. Now, $s^2+t^2=\sigma^2-2\pi$, where $\sigma$ and $\pi$ are the sum and product of $s$ and $t$. Thus, $s^4+t^4=(s^2+t^2)^2-2s^2t^2=(\sigma^2-2\pi)^2-2\pi^2=\sigma^4-4\pi\sigma^2+2\pi^2$. So $P(s,t)=2-2\sigma^2+4\pi+\sigma^4-4\pi\sigma^2+2\pi^2$.
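A spot check at a few points (my own sketch) confirms the final identity:

```python
def P(s, t):
    return (s**2 - 1)**2 + (t**2 - 1)**2

def Q(sigma, pi):
    # P rewritten in terms of sigma = s + t and pi = s*t
    return 2 - 2*sigma**2 + 4*pi + sigma**4 - 4*pi*sigma**2 + 2*pi**2

for s, t in [(0, 0), (1, 2), (2, 3), (-1.5, 0.5), (0.3, -0.7)]:
    assert abs(P(s, t) - Q(s + t, s * t)) < 1e-9
print("identity verified at all sample points")
```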
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Let $X$ be connected and $f:X\to\mathbb{R}$ continuous s.t. each point $x\in X$ has a nbh $U$ with $f(x)=\min_{y\in U} f(y)$. Show $f$ is constant. Let $X$ be a connected topological space and $f:X\to\mathbb{R}$ a continuous map such that each point $x\in X$ has a neighborhood $U$ with $f(x)=\min_{y\in U} f(y)$. Show that $f$ is constant. My attempt: Consider $x\in X$ and $V:=\{ y\in X: f(y) \ge f(x)\}$. There exists a neighborhood $U$ of $x$ such that $U\subseteq V$. Now $V$ is closed, since $X\backslash V = f^{-1}(]-\infty, f(x)[)$ is open. I know that $f(X)$ is an interval in $\mathbb{R}$. I want to use the connectedness of $X$, but $V$ is not open, so $X=V\cup X\backslash V$ will give no additional information. I have no idea how to proceed.
You are very close; in fact $V$ is open because $\forall y \in V$, $\exists U \subset X$ open such that $f(y) = \min_{a \in U} f(a)$, so then $f(a) \geqslant f(y) \geqslant f(x)$ $\forall a \in U$. Hence $U \subset V$, so $V$ is open. Therefore by connectedness of $X$ either $V$ or $X \setminus V$ is empty. $V$ is not empty, so $\forall y\in X$, $f(y) \geqslant f(x)$; but the choice of $x$ was arbitrary, so we can use the same argument to conclude that $\forall x,y \in X$, $f(x) \geqslant f(y)$. Hence $f$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A is diagonalizable if and only if its minimal polynomial is a product of distinct monic linear factors I have to prove that a matrix A is diagonalizable if and only if its minimal polynomial is a product of distinct monic linear factors. I have already proved it in one direction, meaning that if its minimal polynomial is a product of distinct monic linear factors, then it is diagonalizable. I can't find a way to prove the other direction, meaning that if A is diagonalizable, then its minimal polynomial is a product of distinct monic linear factors. Thank you so much for the help!
Suppose A is diagonalizable with a basis $\{ v_k \}$ of eigenvectors and corresponding eigenvalues $\{ \lambda_k \}$. If $\mu_1,\dots,\mu_m$ are the distinct eigenvalues, then you can check that $(A-\mu_1 I)(A-\mu_2 I)\cdots(A-\mu_m I)$ annihilates every eigenvector and, hence, must be the $0$ matrix. So the minimal polynomial $q$ divides $p(\lambda)=(\lambda-\mu_1)(\lambda-\mu_2)\cdots(\lambda-\mu_m)$. And if any factor $(\lambda-\mu_j)$ were omitted from the minimal polynomial, you could easily argue that $q(A)$ would not annihilate the eigenvectors corresponding to the eigenvalue $\mu_j$ of the missing factor. So $q=p$, which is a product of distinct monic linear factors.
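A quick numerical sanity check of the key step (a sketch assuming `numpy` is available): for a diagonalizable matrix with eigenvalues $1,1,2$, the product of the linear factors over the *distinct* eigenvalues alone already annihilates the matrix.

```python
import numpy as np

# A is diagonalizable with eigenvalues 1, 1, 2 (distinct values: 1 and 2)
A = np.array([[1., 0., 0.],
              [0., 1., 1.],
              [0., 0., 2.]])
I = np.eye(3)

# product over the DISTINCT eigenvalues only
product = (A - 1*I) @ (A - 2*I)
# product is the zero matrix, so the minimal polynomial divides (t-1)(t-2)
```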
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Functional equation with the property $P(x+1)=P(x)+2x+1$ Find all polynomials with real-valued coefficients for which the following property $$P(x+1)=P(x)+2x+1$$ holds for all $x \in \mathbb{R}.$ This seems to be a functional equation, so the initial approach would be to try some values. For $x=0$ we would have that $$P(1)=P(0)+1.$$ For $x=1$ $$P(2)=P(1)+3.$$ For $x=2$ $$P(3) = P(2)+5.$$ Now, if I knew something about $P(0)$, that would seem helpful. If I assume that $P(0)=0$ I would get that $$P(1)=1$$ $$P(2)=4$$ $$P(3)=9.$$ From here it would seem that $P(n)=n^2;$ however, I'm not sure whether I can make the assumption that $P(0)=0$. Can I?
Big hint: suppose $P(0)=c$ and work towards proving $P(x)=x^2+c$. Solution: $$P(x+1)=P(x)+2x+1$$ From the looks of it, our initial guess is that $P(x)=x^2+c$, but we'll work towards proving these are the only solutions. First of all, $$P(x+1)-P(x)=2x+1.$$ Because $P$ is a polynomial and its first difference has degree $1$, this directly forces $P(x)=ax^2+bx+c$; this is the well-known method of finite differences (check it on the Brilliant wiki): the second difference of the values of the polynomial is constant. (One can also see this by differentiating.) So $$a((x+1)^2-x^2)+b((x+1)-x)=2x+1$$ $$\iff 2ax+a+b=2x+1.$$ Thus, by equating coefficients, $a=1$ and $b=0$, so $$P(x)=x^2+c$$ for any $c \in \mathbb{R}$.
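As a quick sanity check (a throwaway sketch, not part of the proof), one can verify numerically that every polynomial of the form $P(x)=x^2+c$ does satisfy the functional equation:

```python
def P(x, c):
    # candidate solution P(x) = x^2 + c
    return x**2 + c

# check P(x+1) == P(x) + 2x + 1 for a few sample values of x and c
ok = all(
    abs(P(x + 1, c) - (P(x, c) + 2*x + 1)) < 1e-9
    for c in (-2.0, 0.0, 3.5)
    for x in (-1.7, 0.0, 2.3)
)
# ok is True
```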
{ "language": "en", "url": "https://math.stackexchange.com/questions/3695896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Let $A,B,X$ be sets such that $A\cup B = X$ and $A \cap B = ∅$. Show that $A = X\backslash B$ and $B = X\backslash A$. I'm trying to prove this Let $A,B,X$ be sets such that $A\cup B = X$ and $A \cap B = ∅$. Show that: (1) $A = X\backslash B$ and (2) $B = X\backslash A$. My proof is Let $x \in A$. We know that $x \notin B$ by definition of intersection. It follows that $x \in A \cup B$ by definition of union so we have $x \in X$. Therefore all elements of A must not be in B and must also be in X so $A=X \backslash B$ by definition of difference sets. We can repeat the same process on an arbitrary element of B to get $B= X \backslash A$. Is this correct and is there any way I could word my argument better?
This argument is correct but not complete. In your proof of (1) you proved only that if $x \in A$ then $x \in X\backslash B$. This is just half of proving (1). To have a complete proof of (1) you also need to prove the reverse inclusion, i.e. you need to prove that if $x \in X\backslash B$ then $x \in A$. Statement (2) you don't really need to prove; it follows by symmetry, as you noticed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interpretation of $\mathbb P(A|X=x)$ in two ways Let $X:(\Omega,\mathscr A) \to (\mathbb R,\mathscr{B})$ be a random variable between two measurable spaces (the latter being the Borel measurable space over $\mathbb R$). Let $x\in \mathbb R$. Let $\mathbb P$ be a probability on $(\Omega,\mathscr A)$. Assuming $\mathbb P(X=x)\ne 0$ ,I have two interpretations for $\mathbb P(A|X=x)$: (1) Naive definition: Simply use the elementary definition of the conditional probability: $$\mathbb P(A|X=x)=\frac{\mathbb P(A\cap[X=x])}{\mathbb P([X=x])}.$$ (2) Official definition of such conditional probability: $$\mathbb P(A|X=x)=\mathbb E(\mathbb 1_A|X=x)=\varphi(x),$$ where the conditional probability $\mathbb E(\mathbb 1_A|X=x)=\varphi(x)$ is defined via the "factorization lemma", which states that there exists a measurable $\varphi:(\mathbb R,\mathscr{B})\to (\mathbb R,\mathscr{B})$ such that $\varphi(X)=\mathbb E(\mathbb 1_A|X):=\mathbb E(\mathbb 1_A|\sigma(X))$. ($\mathbb E(\mathbb 1_A|\sigma(X))$ is the conditional probability in the usual sense). The definition in (2) can be found for example in Probability Theory: A Comprehensive Course, by Achim Klenke, page 180-181. Do the conditional probabilities in (1) and (2) agree (at least almost surely)? If they only agree under some further assumptions, please let me know.
Given that the set $G := \{\omega: X(\omega) = x\}$ has positive probability, by the definition of conditional expectation, $\varphi(X)$ satisfies $$P(A \cap G) = \int_G \varphi(X(\omega))dP = \int_{\{x\}}\varphi(y)\mu(dy) = \varphi(x)\mu{(\{x\})} = \varphi(x)P[X = x],$$ where $\mu$ is the induced probability measure on $(\mathbb{R}, \mathscr{B})$. In the above, the second equality follows from the change-of-variable formula, and the third from the definition of the integral. Dividing by $P[X=x]$ shows that the two definitions agree, so your conjecture is correct, as it should be.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $\lim (f(x) + 1/f(x)) = 2 $ prove that $\lim_{x \to 0} f(x) =1 $ Let $f:(-a,a) \setminus \{ 0 \} \to (0 , \infty) $ and assume $\lim_{x \to 0} \left( f(x) + \dfrac{1}{f(x) } \right) = 2$. Prove using the definition of limit that $\lim_{x \to 0} f(x) = 1$ Attempt: Let $L = \lim_{x \to 0} f(x) $. Let $\epsilon > 0$ be given. If we can find some $\delta > 0$ with $|x| < \delta $ such that $|f(x) - 1 | < \epsilon $ then we will be done. We know that since $f(x) > 0$, then $\lim 1/f(x) $ is defined. In fact, applying the limit to hypothesis, we end up with $$ L+ \dfrac{1}{L} = 2 $$ and certainly $L=1$ as desired. I am having difficulties making this proof formal in $\delta-\epsilon$ language. Can someone assist me?
You cannot assume the limit of $f$ exists, since the statement doesn't include it. By the definition of limit, given $\varepsilon>0,$ there exists $\delta>0$ such that $$2-\varepsilon <f(x)+\frac 1 {f(x)}< 2+\varepsilon$$ for $0<|x|<\delta.$ Since $f(x)+\frac 1 {f(x)}-2=\left(\sqrt{f(x)}-\frac 1 {\sqrt{f(x)}}\right)^2$, it follows that $$|\sqrt{f(x)}-\frac 1 {\sqrt{f(x)}}|<\sqrt \varepsilon.$$ Hence, multiplying by $\sqrt{f(x)}$ and using $f(x)<2+\varepsilon$, $$|f(x)-1|<\sqrt{\varepsilon f(x)}<\sqrt{\varepsilon(2+\varepsilon)}$$ and the conclusion follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Why is the stereographic projection bijective? I know this might be a very basic question, but I am just not able to wrap my head around it. Why is the map $$ S:\mathbb{S}^n-\{e_{n+1}\}\rightarrow \mathbb{R}^n \quad \textrm{such that } \bar{x}\mapsto (\frac{x_1}{1-x_{n+1}},...,\frac{x_n}{1-x_{n+1}})$$ a bijective function? I tried to understand it geometrically as well, but I just could not see how it was formulated in that manner. Here $\bar{x}=(x_1,...,x_n)$ and $e_{n+1}=(0,...0,1)$. Any help will be much appreciated!
It's certainly not obvious (except, like many things in mathematics, in hindsight). One can show directly that this map is both injective and surjective. It's easier perhaps to simply write down the inverse function $S^{-1}\colon \Bbb R^n \to \Bbb S^n - \{e_{n+1}\}$: $$ S^{-1}\big( (y_1,\dots,y_n) \big) = \frac1{1+y_1^2+\cdots+y_n^2}\big( 2y_1,\dots,2y_n,y_1^2+\cdots+y_n^2-1 \big). $$ (Of course one has to verify that the domain/codomain of this function are correct and that both $S\circ S^{-1}$ and $S^{-1}\circ S$ are the identity maps.)
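One can also check the round trip numerically (a small `numpy` sketch for $n=3$, using the inverse formula above with a randomly chosen point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
y = rng.normal(size=n)              # arbitrary point of R^n
s = y @ y                           # |y|^2

# S^{-1}(y): a point of the unit sphere S^n minus the north pole
x = np.append(2*y, s - 1) / (1 + s)

back = x[:n] / (1 - x[n])           # S(S^{-1}(y))
# x lies on the unit sphere and back equals y up to rounding
```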
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
digit product is $20$, digit sum is $12$, find the least number This is from a math olympiad: The product of the digits of positive integer $n$ is $20$, and the sum of the digits is $12$. What is the smallest possible value of $n$? I started with the prime factors of $20 = 2*2*5$. Then I tried to sum them up: $2+2+5\neq12$, so I started adding $1$s to the sum, since they wouldn't affect the product, and came up with $2+2+5+1+1=12$. Hence, arranging the digits in ascending order, we get the least $n = 111225$. Is this too childish an approach? Is there any better way to go about this?
This is too long for a comment, so I'm writing it here. This is not a solution. I think your solution is not childish at all. Since it's a competition-type problem, no one should expect a super elegant solution for such a problem. I am rephrasing your solution. First of all, it's impossible to have such a single-digit number. So, we begin our journey with a $2$-digit number. There are only two $2$-digit numbers whose product of digits is $20$, namely $45$ and $54.$ The sum of the digits of each of these numbers is $9$. So, we place some $1$s at the left side of $45$ until we get a number whose digits add up to $12.$ This gives the number $11145.$ Since we're looking for the smallest such number, we did not choose $54.$ Note that you started with a $3$-digit block, namely $225.$ That's why you got a bigger number. I wish someone would write some cool solutions. Thanks!
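For what it's worth, a brute-force search (a throwaway sketch) confirms that $11145$ is indeed the smallest integer with digit product $20$ and digit sum $12$:

```python
def fits(n):
    ds = [int(d) for d in str(n)]
    prod = 1
    for d in ds:
        prod *= d
    return prod == 20 and sum(ds) == 12

# smallest positive integer satisfying both conditions
smallest = next(n for n in range(1, 200000) if fits(n))
# smallest == 11145
```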
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Lifting submanifolds Let $\Sigma$ be a submanifold of $M$ and let $\pi\colon \widetilde{M} \rightarrow M$ be a covering map. I would like to know if it is always true that $\pi^{-1}(\Sigma)$ is a submanifold of $\widetilde{M}$.
Let $\hat x\in\pi^{-1}(\Sigma)$; there exists an open subset $\hat U \ni \hat x$ such that the restriction $\pi_{\mid \hat U}:\hat U\rightarrow\pi(\hat U)=U$ is a diffeomorphism. Since $\Sigma$ is a submanifold, there exists a neighborhood $V$ of $x=\pi(\hat x)$ and a submersion $f:V\rightarrow \mathbb{R}^p$ such that $V\cap \Sigma= f^{-1}(0)$. Let $\hat V=(\pi_{\mid \hat U})^{-1}(V)$ and let $g:\hat V\rightarrow \mathbb{R}^p$ be defined by $g=f\circ\pi_{\mid\hat U}$; then $g$ is a submersion and $g^{-1}(0)=\hat V\cap \pi^{-1}(\Sigma)$. This implies that $\pi^{-1}(\Sigma)$ is a submanifold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3696862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a real-valued positive function such that it and its square integrate to $1$ Does there exist a function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f > 0$ and $$ \int_{-\infty}^\infty f(x) dx = \int_{-\infty}^\infty f(x)^2 dx = 1. $$ I suspect the answer is yes. I have looked at taking $f$ to be the PDF of a normal distribution $\mathcal N(0, \sigma)$ to guarantee that $f > 0$ and that it integrates to $1$. I'm thinking about using the IVT to find a value of $\sigma$ that works. However, in order to do this, I'm not entirely sure how to integrate $e^{x^4}$.
For $a>0$, $p>1$, we have $$ \int _0^{\infty} \frac{1}{(a+x)^p}\,dx=\frac{a^{1-p}}{p-1} $$Similarly, for $b>0$, we have $$ \int _{-\infty}^0 e^{bx}\,dx = \frac{1}{b} $$So if we let $f(x):\mathbb{R}\to\mathbb{R}$, $$ f(x) = \begin{cases} e^{bx}, & x <0\\ \frac{1}{(a+x)^p},& x\geq 0 \end{cases} $$the question amounts to finding $(a,b,p)$ such that: $$ \begin{cases} \frac{1}{b} + \frac{a^{1-p}}{p-1} & = 1\\ \frac{1}{2b} + \frac{a^{1-2p}}{2p-1} & = 1\\ \end{cases} $$Mathematica gives one solution as (approximately) $( 0.80297, 2.50859,5)$. You could probably modify this example to be continuous or even differentiable. If you don't require $f(x)>0$, a well-known example is $\sin(x)/x$, where both it and its square integrate to $\pi$.
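One can plug the quoted approximate solution back into the two closed-form equations derived above to check it (the numbers below are the approximate values attributed to Mathematica, so agreement only up to that precision is expected):

```python
a, b, p = 0.80297, 2.50859, 5

# closed forms of the two integrals derived above
eq1 = 1/b + a**(1 - p)/(p - 1)           # integral of f
eq2 = 1/(2*b) + a**(1 - 2*p)/(2*p - 1)   # integral of f^2
# both values are 1 up to the precision of the quoted solution
```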
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 1 }
Curious tautological pattern on "p->p" I found something that boggles me (I'm really a beginner in symbolic logic, so maybe it's very trivial). I was practicing with truth-tables, and I found that: (1) "p->p" is a tautology; (2) "(p->p)->p" is not a tautology. I decided to go further, and: (3) "((p->p)->p)->p" is again a tautology, but (4) "(((p->p)->p)->p)->p" is not, and it keeps alternating. I checked with an online logic calculator, and it seems correct. Now, do you know why that is? Is there any particular reason for this pattern? Cheers
1) For any formula $p$, $p \to p$ is a tautology. 2) For any tautology $T$, $T \to p$ is logically equivalent to $p$. (Check it out with a truth table.) So an even number of occurrences of $p$ will give you tautologies (by 1)); appending another $p$ will give you something that behaves like $p$ (by 2)). And if you take that $p$-equivalent proposition and append another $p$, you will, by 1), get a tautology again, etc. The core answer to your question lies in 2).
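The alternating pattern is easy to verify mechanically; here is a small truth-table sketch checking left-associated chains of implications with $n$ occurrences of $p$:

```python
def implies(a, b):
    return (not a) or b

def chain(n, p):
    # ((...(p -> p) -> p ...) -> p) with n occurrences of p, left-associated
    val = p
    for _ in range(n - 1):
        val = implies(val, p)
    return val

# a chain is a tautology iff it is true for both truth values of p
pattern = [all(chain(n, p) for p in (True, False)) for n in range(2, 8)]
# pattern == [True, False, True, False, True, False]
```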
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
What does it mean when $|z| = 2$ is the curve in a contour integral? B) Evaluate: $$\oint_{|z|=2} \tan{z}\,dz$$ I'm looking specifically at part B. What is meant by $|z|=2$?
$|z| = 2$ is the circle of radius $2$ centered at the origin. Typically, when one writes an integral like \begin{align} \oint_{|z| = r} f(z) \, dz, \end{align} what is meant is that we have to consider the path $\gamma: [0,2\pi] \to \Bbb{C}$ given as $\gamma(t) := re^{it}$ (so the orientation of the path is counter clockwise). And then, really we're supposed to compute a line integral: \begin{align} \int_{\gamma} f(z) \, dz &= \int_0^{2\pi} f(\gamma(t)) \cdot \gamma'(t) \, dt \\ &= \int_0^{2\pi} f(r e^{it}) \cdot ir e^{it} \, dt. \end{align}
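As an illustration (not required by the definition), one can evaluate the integral in part B numerically with exactly this parametrization. By the residue theorem, $\tan z$ has simple poles at $\pm\pi/2$ inside $|z|=2$, each with residue $-1$, so the answer should be $-4\pi i$; the periodic trapezoidal sum below is a quick sketch of that check:

```python
import cmath
import math

# Riemann sum on the parametrization z = 2 e^{it}, t in [0, 2*pi]
# (equivalent to the trapezoidal rule for a periodic integrand)
N = 4000
total = 0j
for k in range(N):
    t = 2*math.pi*k/N
    z = 2*cmath.exp(1j*t)
    total += cmath.tan(z) * 2j*cmath.exp(1j*t)   # f(gamma(t)) * gamma'(t)
integral = total * (2*math.pi/N)
# integral is approximately -4*pi*i
```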
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Polynomials that form $1+xy+x^2 y^2$ Show that there are no polynomials $a(x), b(x) \in R[x]$ and $c(y), d(y) \in R[y]$ such that $1+xy +x^2 y^2 = a(x) c(y) + b(x) d(y)$
Expanding on @MikeDaas's comment, we have$$\begin{align}1&=a(x)c(0)+b(x)d(0),\\1+x+x^2&=a(x)c(1)+b(x)d(1),\\1-x+x^2&=a(x)c(-1)+b(x)d(-1)\\\implies x&=\frac{c(1)-c(-1)}{2}a(x)+\frac{d(1)-d(-1)}{2}b(x),\\x^2&=\frac{c(1)-2c(0)+c(-1)}{2}a(x)+\frac{d(1)-2d(0)+d(-1)}{2}b(x).\end{align}$$Since $1,\,x,\,x^2$ are all linear combinations of $a,\,b$, the span of these two functions would contain a $3$-dimensional space, a contradiction: two functions can span a space of dimension at most $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Olympiad Minimization Problem I've been struggling to find a solution to this problem that I found in the archive of my country's Olympiad questions. I'm particularly interested in a solution that doesn't involve the use of calculus, since I know that Olympiad questions do not require knowledge of calculus to solve, but I would also like to see one that uses it. Here's the problem: Find the minimum value of $\frac{18}{a+b} + \frac{12}{ab} + 8a + 5b$ when $a$ and $b$ are positive real numbers.
Use AM-GM by rearranging terms creatively Hint: A good start when using AM-GM is to consider the following: $ \frac{ 12}{ab} + K a + L b \geq 3 \sqrt[3]{ 12 K L }$, with equality when $ \frac{12}{ab} = K a = Lb$. $ \frac{18}{a+b} + M(a+b) \geq 2 \sqrt{ 18 M }$, with equality when $\frac{18}{a+b} = M (a+b)$. Now, pick suitable $K, L, M$, so that equality holds throughout for the same values of $a, b$. Hence, the minimum of the expression is ... which is achieved when ... How to pick suitable $K, L, M$: (I strongly encourage you to think about this before reading on; write down whatever equations/motivations you can think of.) We want $ K + M = 8, L + M = 5$. We wishfully think that $ 12 K L$ is a perfect cube, and $ 18 M$ is a perfect square. An obvious choice is $ M = 2, K = 6, L = 3$. We just need to verify that equality holds, and for the same values, which thankfully it does with $a = 1, b = 2$, giving the minimum value of 30. (Otherwise, do some other wishful thinking, pick some other value of $M$ and try again.)
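A crude grid search (a sanity check only, not a proof) agrees with the AM-GM answer: the minimum is $30$, attained at $a=1$, $b=2$.

```python
def f(a, b):
    return 18/(a + b) + 12/(a*b) + 8*a + 5*b

# grid search over 0.01 <= a, b <= 4.00 in steps of 0.01
best = min(
    (f(i/100, j/100), i/100, j/100)
    for i in range(1, 401)
    for j in range(1, 401)
)
# best == (30.0, 1.0, 2.0)
```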
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why are there multiple base cases in this strong induction? My understanding of needing a base case, in general, is that after proving the induction step, we can assert that the proposition is true for all values from the base case. This question ($∀n ∈ Z, n≥12$) $\implies$ ( $∃x, y ∈N$ such that $n= 4x + 5y$) uses four base cases for its strong induction solution: Base cases: $n = 12, 13, 14, 15$ Clearly, $12= 4(3) + 5(0)$, so $P(12)$ is true. Also, $13= 4(2) + 5(1)$, so $P(13)$ is true. And, $14= 4(1) + 5(2)$, so $P(14)$ is true. And, $15= 4(0) + 5(3)$, so $P(15)$ is true. Meanwhile, I only used the base case $n=12$. Their rationale for providing multiple cases is as follows: If we didn't prove $P(13), P(14), P(15)$ as base cases, then the inductive step to get $k+1 = 13, 14, 15$ will fail, since proving these assumes $P(r)$, for $r < 12$, to be true, which we didn't prove. By establishing base cases $n = 12$ to $n = 15$, the inductive step can then work forward from $n ≥ 15$. The first line, starting from "If we..will fail", already contradicts my understanding of the base case mentioned above, as proving the base case of $12$ should suffice to prove the inductive steps for $k+1=13,14,15$. The part where they said "since proving..we didn't prove" doesn't even make sense to me. Why are they talking about $r < 12$? And the last sentence saying "work forward from n ≥ 15" confuses me; shouldn't we want it to work from $n ≥ 12$? My main question is essentially: why do we need multiple base cases here? The sub-questions are the 3 paragraphs talking about the solution's rationale for providing multiple cases. Induction Step (included after a comment by fleabloods): Note: If the sub-questions should be separately posted, do let me know
Your understanding of base case is incorrect. The base cases are simply those cases needed to provide a sufficient foundation for the induction step. Here the induction step to prove that $P(k+1)$ is true relies on knowing that $P(k-3)$ is true: if there are $x,y\in\Bbb N$ such that $4x+5y=k-3$, then clearly $$4(x+1)+5y=k+1\;,$$ where $x+1,y\in\Bbb N$, so $P(k+1)$ is true. If we’ve verified only that $P(12)$ is true, that argument won’t tell us that $P(13)$ is true: in that case $k=12$, so $k-3=9$, and we’d need to know that $P(9)$ is true in order to use that argument. But as your source says, we didn’t verify that $P(9)$ is true, so that argument simply isn’t valid. As it happens, $P(9)$ is true, so if we’d checked it as a base case, we could have used the induction step to see that $P(13)$ is true. $P(10)$ is also true, so we could have verified it as part of the base case and used the induction step to see that $P(14)$ is true. But $P(11)$ is not true, so there is no possible way to use the induction step to derive $P(15)$. It turns out that $11$ is the last failure: $12$ is the smallest integer such that $P(n)$ is true for it and every larger integer. The induction works only after we know that $P(n)$ is true for four consecutive integers $n$: if $P(n)$, $P(n+1)$, $P(n+2)$, and $P(n+3)$ are true, the truth of $P(n)$ allows us to use the induction step to prove that $P(n+4)$ is true, the truth of $P(n+1)$ allows us to use the induction step to prove that $P(n+5)$ is true, and so on. But we need those four consecutive integers: if $P(n)$ and $P(n+1)$ were true, but $P(n+2)$ were not, the induction step would give us the truth of $P(n+4)$ and $P(n+5)$, but not the truth of $P(n+6)$, because for that we’d need to know that $P(n+2)$ was true. Thus, the induction argument in this example really does use all four parts of the base case, $P(12)$, $P(13)$, $P(14)$, and $P(15)$: without them, the argument falls apart at one of the next four cases.
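A brute-force check (a quick sketch) confirms the pattern described above: $P(11)$ fails, while $P(n)$ holds for every $n \ge 12$ in the tested range.

```python
def representable(n):
    # P(n): n = 4x + 5y for some nonnegative integers x, y
    return any((n - 5*y) >= 0 and (n - 5*y) % 4 == 0 for y in range(n//5 + 1))

bad = representable(11)                        # False: 11 is the last failure
good = all(representable(n) for n in range(12, 100))
# bad is False and good is True
```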
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Representing a group as a quotient of a free group Consider $G=F \rtimes T$, where $F=\mathbb{Z}_3 \times \mathbb{Z}_3$ and $T=\mathbb{Z}_5$. Let $\phi : \mathbb{Z}_5 \rightarrow Aut(\mathbb{Z}_3 \times \mathbb{Z}_3)$. It is said that any group is a quotient of a free group. How can I represent the above group $G$ as a quotient of a free group? Do I have to specify $\phi$? If I know $\phi$, can someone explain what steps I should follow to represent it as a quotient of a free group? An answer for an example of $\phi$ is also OK. Many thanks in advance.
You have to specify $\phi$ when different choices of $\phi$ can give non-isomorphic groups. In this case $Aut(\mathbb{Z}_3 \times \mathbb{Z}_3) \cong GL(2,3)$, but the order of $GL(2,3)$ is $(3^2-1)(3^2-3)=48$, and since $5$ does not divide $48$, the only $\phi$ is the trivial one. You can always represent a finite group as a quotient of a free group; you have to work with the generators of the group. In this case your group is isomorphic to $\mathbb{Z}_5 \times \mathbb{Z}_3 \times \mathbb{Z}_3$, so there are $3$ generators, namely $a$, $b$, $c$. Since the group is abelian, the subgroup of $F_3$ you quotient by contains the derived subgroup of $F_3$, so it contains $\{ [a,b] ; [a,c] ; [b,c] \}$. Moreover, we have to impose the conditions on the orders of the generators, so the subgroup has to contain $a^5$, $b^3$ and $c^3$. So, you can represent this group as $$ \frac{F_3}{\langle a^5,b^3,c^3,a^{-1}b^{-1}ab,a^{-1}c^{-1}ac,b^{-1}c^{-1}bc \rangle }$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3697933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ways to obtain networks from multivariate time-series I recently became aware of a bridge between (dynamical) properties of time-series and (topological) features of an associated network representation. A variety of methods exist to embed the time-series into a network (see e.g., Transforming Time Series into Complex Networks by Michael Small, Jie Zhang and Xiaoke Xu). As far as I understand, to each univariate time-series is associated a single network. I wonder if you are aware of a method to embed a set of univariate time-series into a single network.
Yes, there are so-called functional networks, where you associate each time series with a node in the network. You then determine the existence or weights of edges by applying interaction measures to the respective pair of time series. The most simple such interaction measure would be the correlation coefficient, but you can also apply information theory, phase reconstructions, etc. There is an entire subfield dedicated to properly deducing interactions from time series. This technique has been applied in several fields, most prominently brain and climate science, where networks have been deduced from time series measured with sensors placed on or in the brain or on earth respectively. I am not aware of a recent good overview focussing exclusively on this, but this and this paper provide some starting point.
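As a toy illustration of the simplest variant mentioned above (a correlation-based functional network), here is a small `numpy` sketch with synthetic data; the four-node setup and the $0.5$ threshold are arbitrary assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500                                   # length of each time series

# nodes 0 and 1 share a common signal; nodes 2 and 3 are independent noise
base = rng.normal(size=T)
X = np.stack([
    base + 0.1*rng.normal(size=T),
    base + 0.1*rng.normal(size=T),
    rng.normal(size=T),
    rng.normal(size=T),
])

C = np.corrcoef(X)                        # pairwise interaction measure
A = (np.abs(C) > 0.5) & ~np.eye(4, dtype=bool)   # thresholded adjacency, no self-loops
# A[0, 1] is True (an edge between the coupled nodes); A[2, 3] is False
```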
{ "language": "en", "url": "https://math.stackexchange.com/questions/3698060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Genus $3$ curves with a couple of distinct points $P,Q$ such that $4P \sim 4Q$ Let $C$ be a smooth curve of genus $3$ over $\mathbb{C}$. Is it true that there exist $P\neq Q \in C$ such that $4P \sim 4Q$? ($\sim$ denotes linear equivalence.) Notice that if $C$ is hyperelliptic then this is true (just take two different points fixed by the hyperelliptic involution). My (very optimistic) guess is that the converse should hold. If we denote by $f \in K(C)$ the function whose divisor is $div(f)=4P-4Q$, one should try to verify that $f$ admits a square root in $K(C)$, the function field of $C$. There is another heuristic reason (that maybe can be made precise) that makes me think that in general such a couple of points does not exist. Namely, if I take a very general smooth quartic $C \subset \mathbb{P}^2$, I may suppose that it does not have flexes of order $4$ (equivalently, $4P$ does not belong to the canonical system for any $P \in C$); then the $g^1_4$'s induced by divisors of the form $4P$ are a $1$-parameter family and I expect that the ramification is $3P+\sum_{i=1}^9P_i$, so that generically the $P_i$'s are distinct and in "singular" cases we have at worst points of multiplicity $2$ apart from $P$ (at least for a generic $C$).
The dimension of the subvariety $Z$ of the moduli space $M_3$ of genus 3 curves that have a pair of points $P \ne Q$ with $4P \sim 4Q$ is 5, so it is a divisor in $M_3$. Indeed, the linear system generated by the divisors $4P$ and $4Q$ defines a morphism $$ f \colon C \to \mathbb{P}^1 $$ which has ramification index 4 at $P$ and $Q$. By Hurwitz formula it follows that $f$ has at most 8 branch points, so the position of these points depend on $8 - 3 = 5$ parameters. The rest of the ramification data is discrete, hence $\dim(Z) \le 5$. However, the hyperelliptic locus (which also has dimension 5) is not the only component of $Z$. For instance, the curve $$ x^3y + xy^3 + z^4 = 0 $$ has $4P \sim K_C \sim 4Q$, where $P = (1,0,0)$ and $Q = (0,1,0)$, and is not hyperelliptic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3698189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's the relationship between Banach space and inner product space I know that a Hilbert space is a special Banach space, and an inner product space is a special normed space. A Hilbert space is a complete inner product space. A Banach space is a complete normed space. I'm wondering what the relationship is between a Banach space and an inner product space.
Every inner product induces a norm, so an inner product space is a normed vector space. Banach spaces are complete, normed vector spaces. So, the relation between inner product spaces and Banach spaces is that they're both normed vector spaces. However, (1) an inner product space might lack the completeness that a Banach space possesses and (2) a Banach space norm need not be induced by an inner product. The norm needs to satisfy the parallelogram law for this to be the case (see https://en.wikipedia.org/wiki/Parallelogram_law#Normed_vector_spaces_satisfying_the_parallelogram_law).
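A small numerical illustration of the parallelogram-law criterion mentioned above: the Euclidean ($\ell^2$) norm satisfies the law, while the $\ell^1$ norm does not, so the $\ell^1$ norm on $\mathbb{R}^2$ gives a Banach space whose norm is not induced by any inner product.

```python
def norm_p(v, p):
    return sum(abs(t)**p for t in v) ** (1/p)

def parallelogram_gap(u, v, p):
    # ||u+v||^2 + ||u-v||^2 - 2||u||^2 - 2||v||^2; zero iff the law holds here
    s = [a + b for a, b in zip(u, v)]
    d = [a - b for a, b in zip(u, v)]
    return norm_p(s, p)**2 + norm_p(d, p)**2 - 2*norm_p(u, p)**2 - 2*norm_p(v, p)**2

u, v = [1.0, 0.0], [0.0, 1.0]
gap2 = parallelogram_gap(u, v, 2)   # 0: the l2 norm comes from an inner product
gap1 = parallelogram_gap(u, v, 1)   # 4: the l1 norm violates the law
```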
{ "language": "en", "url": "https://math.stackexchange.com/questions/3698348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving that removing any vector of the linearly dependent set gives a linearly independent set Consider the matrix representing 6 linearly dependent vectors: $$\left(\begin{array}{llllll} 1 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{array}\right)$$ I know how to prove that the vectors in this matrix are linearly dependent, but how can I show (concisely) that removing any one of the vectors gives a linearly independent set?
Here is the fastest method I can think of. If (for example) the first 5 columns are linearly dependent, you can find a vector in the right kernel of the form $$\left[ \begin{array}{c} a\\ b\\ c\\ d\\ e\\ 0 \end{array}\right], $$ that is, a vector that ends with $0$. In general, if you leave out the $i$-th column and the rest are linearly dependent, you find a vector in the right kernel with a $0$ in the $i$-th position. Since the only vectors in the right kernel of your matrix are the multiples of $$\left[ \begin{array}{c} 1\\ 1\\ 1\\ 1\\ 1\\ -1 \end{array}\right], $$ and this vector has no zero entries, you can deduce that any 5 columns are linearly independent.
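This can be confirmed directly (a `numpy` sanity check): the given kernel vector annihilates the matrix, and every $5$-column submatrix obtained by deleting one column has full rank $5$.

```python
import numpy as np

M = np.array([
    [1, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
])

kernel_vec = np.array([1, 1, 1, 1, 1, -1])     # spans the right kernel
ranks = [np.linalg.matrix_rank(np.delete(M, i, axis=1)) for i in range(6)]
# M @ kernel_vec is the zero vector, and every entry of `ranks` equals 5
```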
{ "language": "en", "url": "https://math.stackexchange.com/questions/3698549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Can we always construct a matrix using its eigenvectors? In physics, a Hermitian matrix represents an observable and can be constructed using its eigenvalues and eigenvectors in the following way: $$ A = \sum_i \lambda_i v_iv_i^\dagger \qquad \qquad (1)$$ where $\lambda_i$ and $v_i$ are the $i^{th}$ eigenvalue and eigenvector and $v_i^\dagger$ is the conjugate transpose of $v_i$. The proof is the following: If the eigenvectors form an orthonormal basis, $\{v_i\}$, then we have: $$ \sum_i v_iv_i^\dagger =1$$ This must be true because we can write a vector $u$ in the $\{v_i\}$ basis by: $$ u = \sum_i v_i v_i^\dagger u $$ Therefore, we can apply this identity twice to $A$ and get: $$ A = \sum_i \sum_j v_iv_i^\dagger A v_jv_j^\dagger = \sum_i \sum_j v_iv_i^\dagger \lambda_j v_jv_j^\dagger= \sum_i \sum_j \lambda_j v_iv_i^\dagger v_jv_j^\dagger = \sum_i \sum_j \lambda_j v_i \delta_{ij} v_j^\dagger= \sum_i \lambda_i v_i v_i^\dagger$$ Is equation (1) only valid for matrices having eigenvectors that form a basis? Or can all matrices be constructed in this way?
If the eigenvalues of $A$ are real, but $A\neq A^\dagger$, then the right hand side of (1) is Hermitian, but the left hand side is not, so (1) fails. An example is $$\left(\begin{array}{cc} 1&1\\0&2\end{array}\right)$$.
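A concrete check of this example with `numpy`: the matrix has real eigenvalues and its eigenvectors form a basis, yet the sum in (1) (built from normalized eigenvectors) is Hermitian and therefore cannot reproduce the non-Hermitian matrix.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
w, V = np.linalg.eig(A)       # real eigenvalues 1, 2; eigenvectors form a basis

# the sum in (1), using the normalized eigenvectors returned by eig
recon = sum(w[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(2))
# recon is Hermitian, but it differs from A
```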
{ "language": "en", "url": "https://math.stackexchange.com/questions/3698916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y)$ using differentiation While proving $\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y)$ by this $$\operatorname{sin}(x+y)=\operatorname{sin}(x)\operatorname{cos}(y)+\operatorname{cos}(x)\operatorname{sin}(y) \\ \text{differentiating both sides w.r.t } x \\ \operatorname{cos}(x+y) \left(1+\frac{dy}{dx}\right)=(\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y))\left(1+\frac{dy}{dx}\right)\\ \text{for $\frac{dy}{dx} \neq -1$}\\\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y) $$ Now I am confused about what happens when $\frac{dy}{dx} = -1$.
To elaborate on my comment about taking a partial derivative: you are assuming that $y = y(x)$. Your expression is of the form $$f(x,y)(1 + y'(x)) = g(x,y)(1 + y'(x))$$ and you are trying to conclude the equality $$f(x,y) = g(x,y), \ \forall (x,y)\in \mathbb{R}^2 $$ But you chose $y$ to be an arbitrary function of $x$. So suppose $$y'(x_0) = -1$$ Then of course you cannot claim that $$f(x_0,y(x_0)) = g(x_0,y(x_0))$$ So simply pick a new function $y_2$ with $y_2'(x_0) \neq -1$ and $y_2(x_0) = y(x_0)$. Then you have the desired equality $$f(x_0,y(x_0)) = g(x_0,y(x_0))$$ In particular, if $y$ is constant with respect to $x$ then this is just the partial derivative, since $y'\equiv 0$. If you choose a "bad" $y= y(x)$ to begin with, then your proof will not work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$P(x)=P(-x)$ holds for all values of $x$ ,two conditions I was doing a question on polynomials , where it was found that $P(x)=P(-x)$ in the interval $[-\sqrt2,\sqrt 2 ]$ It was then concluded that $P(x)=P(-x)$ holds for all values of $x$ ,"since it is a polynomial". Can someone help me understand why it could be generalized ? Edit - $P(x)$ is a polynomial with real coefficients .
Let $Q(x)=P(x)-P(-x)$, then $Q(x)$ is a real polynomial since $P(x)$ is (make sure you can show this!). By the assumption, $Q$ has infinitely many roots. But the only real polynomial with infinitely many roots is the zero polynomial. Hence $Q(x)=0$ for all real $x$, so $P(x)=P(-x)$ for all real $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Matrix of the differentiation operation Exercise: Find the matrix of the derivative operation $D$ related to the base $\{1, t, t^2,..., t^n\}$ $$D: \mathcal P_{n} \to \mathcal P_{n}$$ I found a possible solution to this exercise, given that $D(t^k)=kt^{k-1}$ $$ \begin{equation*} D_{n+1,n+1} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 & 0 & 0 &\cdots & n \\ 0 & 0 & 0 &\cdots & 0 \\ \end{pmatrix} \end{equation*}$$Nevertheless, it doesn't convince me at all, because when multiplying the matrix with the vectors in $\mathcal P_{n}$, the exponent remains the same. Is this solution correct?
The numbers in your vectors represent the linear combination of the basis elements needed to form a polynomial. For example, \begin{equation*} v = \begin{pmatrix} 3 \\ 4 \\ \vdots \\ 6 \\ 7 \\ \end{pmatrix} \end{equation*} The polynomial represented by this vector is $3+4x+\cdots+6x^{n-1}+7x^n$. Now, for a polynomial like $p(x)=1$, it is represented by \begin{equation*} v = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \end{pmatrix} \end{equation*} This extracts out the first column of $D_{n+1, n+1}$ after left-multiplying by $D$, which gives you the zero vector, corresponding to the polynomial $p'(x)=0$. For another example, if $p(x)=x$, the vector representing it is \begin{equation*} v = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ \end{pmatrix} \end{equation*} After $D$ acts on it, the second column is returned, which is $p'(x)=1$. So $D$ works as expected.
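A small stdlib-only check makes this concrete (the degree $n=3$ and the sample polynomial are arbitrary choices): build the matrix and apply it to a coefficient vector.

```python
def derivative_matrix(n):
    # (n+1) x (n+1) matrix of D on the basis {1, t, ..., t^n}:
    # column k holds the coefficients of D(t^k) = k t^(k-1)
    size = n + 1
    D = [[0] * size for _ in range(size)]
    for k in range(1, size):
        D[k - 1][k] = k
    return D

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

D = derivative_matrix(3)
# p(t) = 3 + 4t + 5t^2, written in the basis {1, t, t^2, t^3}
p = [3, 4, 5, 0]
dp = matvec(D, p)  # coefficients of p'(t) = 4 + 10t
```

The matrix multiplication acts on coefficients, not on exponents, which is why the superdiagonal of integers suffices.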
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to visualize Euler's number? I am interested if there is geometric meaning (using graphs) of $(1 + \frac{1}{n})^n$ when $n \rightarrow \infty$. Also, is there visual explanation of why is $e^x = (1 + \frac{x}{n})^n$ when $n \rightarrow \infty$ and why is $\frac{d}{dx}e^x = e^x$? I see that this kind of question is not posted yet.
I think of my favorite, and pretty geometric, proof of this limit, using the squeeze or sandwich theorem for limits. You can do it using an upper and a lower Riemann sum, each with one subdivision, for the integral of $1/t$. For $x>0$ one has $$L\le\int_1^{1+x/n}\frac{\mathrm dt}{t}\le U\implies \frac xn\cdot\frac{1}{1+x/n}\le\ln\left(1+\frac xn\right)\le \frac xn\cdot 1\implies \frac{x}{n+x}\le\ln\left(1+\frac xn\right)\le\frac xn\implies e^{x/(n+x)}\le 1+\frac xn\le e^{x/n}\implies e^{nx/(n+x)}\le\left(1+\frac xn\right)^n\le e^x,$$ and take limits: since $nx/(n+x)\to x$ as $n\to\infty$, both bounds squeeze $(1+x/n)^n$ to $e^x$.
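A stdlib-only sketch of the squeeze (the values of $x$ and $n$ below are arbitrary choices):

```python
import math

x = 1.0  # any fixed x > 0
squeezed = True
for n in (10, 100, 10_000):
    lower = math.exp(n * x / (n + x))
    middle = (1 + x / n) ** n
    upper = math.exp(x)
    squeezed = squeezed and (lower <= middle <= upper)
# the gap between e^x and (1 + x/n)^n shrinks as n grows
gap = math.exp(x) - (1 + x / 10_000) ** 10_000
```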
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Characteristic Subspace of a k-form I have to show that two decomposable k-forms on an n-dimensional vector space $\Bbb V^n$ have the same characteristic subspace (of dimension $n-k$) if and only if one is scalar multiple of the other. $\underline{\text{Definition}}$: Characteristic subspace of a $k$-form $\omega$ on $\Bbb V$, is the set of all vectors in $\Bbb V$ such that $\iota_u\omega=0$, where $\iota : \Bbb V^n\to\Lambda^{k-1}(\Bbb V)$ is the interior product operation. Any advice would be very helpful. Thank you in advance.
Let $Z$ denote the characteristic subspace, and let $\{z_1,\cdots,z_{n-k}\}$ be a basis for $Z$. You can extend this to a basis $\{z_1,...,z_{n-k},w_1,...,w_k\}$ of $V$. Define an inner product on $V$ by saying the dot product of a vector in the basis with itself is $1$, and is $0$ with any other vector in the basis, and extending by bilinearity. From this, it follows that $\{w_1,...,w_k\}$ is a basis for $Z^\perp = W$. Now $\omega$ is decomposable, so it is of the form $\alpha_1 \wedge ... \wedge \alpha_k$. In particular, for each $\alpha_i$, we can find a $w_i'$ such that $\alpha_i(w_i') = 1$ (this also relies on the inner product). You can see that $\{z_1,...,z_{n-k},w_1',...,w_k'\}$ is also a basis for $V$, and there is a change of basis matrix that is invertible. Since the top left of the matrix is an identity block, the bottom $k \times k$ block is also invertible. So the projections of $\{w_i'\}$ onto $W$ are also a basis for $W$. Let us call this block $T$. Then, note that $$\omega(w_1,...,w_k) = \omega(Tw_1',...,Tw_k') = (\det T)\, \omega(w_1',...,w_k') = \det T \neq 0$$ Last, we note that this works for any extension of the $\{z_i\}$ to a basis of $V$, so the choice is irrelevant. So $\omega_W$ is an alternating, multilinear form on $W$ whose value on a basis is non-zero and so is some multiple of $\det_W$. Since $\omega$ is essentially the same as $\omega_W$, this shows that $\omega$ is some multiple of $\det_W$. This is true for any such $\omega$, so they are all multiples of each other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Schlag's proof of the big Picard theorem I am trying to understand the proof of Picard's big theorem which is theorem $4.20$ in Wilhelm Schlag's book A course in complex analysis and Riemann surfaces. The theorem is stated as follows If $f$ has an isolated essential singularity at $z_0$, then in any small neighborhood of $z_0$ the function $f$ attains every complex value infinitely often with one possible exception. Schlag claims this is an application of Montel's normality test which says Any family of functions $\mathcal{F}$ in $\mathcal{H}(\Omega)$ which omits two distinct values in $\mathbb{C}$ is a normal family. Schlag proves the big Picard theorem in a few lines as follows: Let $z_0=0$ and define $f_n(z) = f(2^{-n}z)$ for an integer $n\geq 1$. We take $n$ so large that $f_n$ is analytic on $0<\lvert z\rvert < 2$. Then $f_{n_k}(z)\rightarrow F(z)$ uniformly on $1/2\leq \lvert z\rvert \leq 1$ where either $F$ is analytic or $F\equiv\infty$. In the former case, we infer from the maximum principle that $f$ is bounded near $z=0$, which is therefore removable. In the latter case, $z=0$ is a pole. I understand the steps that are taken in the proof but I am confused as to how it proves the statement. I guess he uses some sort of contraposition and assumes that $f$ omits two values in a neighborhood of $z_0$. But I am not sure how that shows $f$ hits every complex value infinitely often with one exception (which can be reached finitely many times).
He is proving that a function holomorphic in an annulus $\{z:0<|z|<R\}$ which omits two values has either a removable singularity or a pole at $z=0$, and certainly does not have an essential singularity there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show $n I_n I_{n-1} = \frac{\pi}{2}$ where $I_n = \displaystyle\int_0^{\pi/2} \cos^n x dx$. Consider the sequence $(I_n)_{n \in \mathbb{N}}$: $$ I_0 = \frac{\pi}{2} $$ $$ I_n = \int_0^{\pi/2} \cos ^n x dx$$ For this sequence I have to prove that the following is true: $$n I_n I_{n-1} = \frac{\pi}{2}$$ for $n \in \mathbb{N}^*$. This is what I've done so far: $$I_n = \int_0^{\pi/2} \cos^n x dx = \int_0^{\pi/2} \cos x \cos ^{n-1} x dx$$ $$= \int_0^{\pi/2} (\sin x)' \cos ^{n-1} x dx = \sin x \cos ^{n-1} x \Bigg|_0^{\pi/2} + (n-1)\int_0^{\pi/2} \cos ^{n-2}x \sin^2x dx$$ $$ = 0 - 0 + (n-1)\int_0^{\pi/2} \cos ^{n-2} x (1 - \cos ^2 x) dx $$ $$ = (n-1) \int_0^{\pi/2} (\cos ^ {n-2} x - \cos^n x) dx $$ $$= (n-1)(I_{n-2} - I_{n})$$ So that means we have: $$ I_n = nI_{n-2} - nI_n - I_{n-2} + I_n $$ $$nI_n = (n-1) I_{n-2}$$ And if we multiply with $I_{n-1}$ we get: $$n I_n I_{n-1} = (n-1) I_{n-2} I_{n-1}$$ But that's as far as I got. I don't see how I could show that the above is equal to $\frac{\pi}{2}$.
I'll continue from where you got to : $$nI_{n}I_{n-1} = (n-1)I_{n-2}I_{n-1} $$ Now, $$nI_{n}I_{n-1} = (n-1)I_{n-2}I_{n-1}$$ $$(n-1)I_{n-1}I_{n-2} = (n-2)I_{n-3}I_{n-2}$$ $$(n-2)I_{n-2}I_{n-3} = (n-3)I_{n-4}I_{n-3}$$ $$.....$$ $$(2)I_2I_1 =I_0I_1$$ Multiply all these equations: You can see most of the terms get cancelled.....you end up with: $$nI_nI_{n-1} = I_0I_1 $$ $$=\int_0^{\frac{\pi}{2}} \cos^0x\,dx\int_0^{\frac{\pi}{2}} \cos x\,dx$$ $$nI_nI_{n-1} = \left(\frac{\pi}{2}\right) (1)$$ Alternatively, $$nI_nI_{n-1}=n\left(\int_0^\frac{\pi}{2} \cos^nx\,dx \right)\left(\int_0^\frac{\pi}{2} \cos^{n-1}x\,dx \right) =A$$ Use the substitution : $$\cos^2x=t$$ $$\cos x = t^{\frac{1}{2}}$$ $$-\sin x \,dx = \frac{1}{2} t^{-\frac{1}{2}}\,dt$$ $$\,dx=-\frac{1}{2} t^{-\frac{1}{2}} (1-t)^{-\frac{1}{2}}\,dt $$ $$$$ $$\therefore A = n \left( \int_0^1 \frac{1}{2} (t)^{-\frac{1}{2}} (1-t)^{-\frac{1}{2}} (t)^\frac{n}{2} \,dt \right) \left( \int_0^1 \frac{1}{2} (t)^{-\frac{1}{2}} (1-t)^{-\frac{1}{2}} (t)^\frac{n-1}{2} \,dt \right)$$ $$A=\frac{n}{4}\left( \int_0^1(t)^{\frac{n+1}{2} -1} (1-t)^{\frac{1}{2}-1}\,dt \right) \left( \int_0^1(t)^{\frac{n}{2} -1} (1-t)^{\frac{1}{2}-1}\,dt \right)$$ $$=\frac{n}{4}\beta\left( \frac{n+1}{2},\frac{1}{2} \right).\beta\left( \frac{n}{2},\frac{1}{2} \right)$$ Here, $\beta(x,y)$ is the Beta Function. Now, we can use the relation between the beta function and the Gamma function ($\Gamma (n)$) : $$\beta(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$ $$$$ $$A = \frac{n}{4} \frac{\Gamma\left( \frac{n+1}{2} \right) \Gamma\left( \frac{1}{2} \right) }{\Gamma\left( \frac{n}{2} +1 \right)}.\frac{\Gamma\left( \frac{n}{2} \right) \Gamma\left( \frac{1}{2} \right)}{\Gamma\left( \frac{n+1}{2} \right)} $$ The $\Gamma\left( \frac{n+1}{2} \right)$ gets cancelled out from num and denom. 
The $\Gamma\left( \frac{1}{2} \right)$ terms have the known value $\sqrt\pi$. The remaining ratio $\frac{\Gamma\left( \frac{n}{2} \right)}{\Gamma\left( \frac{n}{2} +1 \right)}$ can be reduced by the property $\Gamma(x+1)=x\,\Gamma(x)$ of the Gamma function: $\frac{\Gamma\left( \frac{n}{2} \right)}{\Gamma\left( \frac{n}{2} +1 \right)} = \frac{2}{n}$. So we are left with: $$A=\frac{n}{4} \left(\Gamma\left(\frac{1}{2} \right)\right)^2 \frac{2}{n}$$ $${A}=\frac{(\sqrt\pi)^2}{2} = \frac{\pi}{2}$$
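As a numerical sanity check of $n I_n I_{n-1} = \frac{\pi}{2}$ (not needed for either proof), one can approximate the integrals with a stdlib-only composite Simpson rule; the number of subintervals $m$ is an arbitrary choice.

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

def I(n):
    return simpson(lambda x: math.cos(x) ** n, 0.0, math.pi / 2)

# each of these products should agree numerically with pi/2
products = [n * I(n) * I(n - 1) for n in range(1, 8)]
```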
{ "language": "en", "url": "https://math.stackexchange.com/questions/3699899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$C(S)$ is dense implies $S$ is metrizable given $S$ is compact and Hausdorff For a compact Hausdorff space $S$, show that the following are equivalent: (a) $S$ is metrizable. (b) $S$ has a countable base. (c) $C(S)$ is separable, where $C(S)$ denotes the space of all continuous functions on $S$. (a)$\implies$(b) This is trivial. For every $N\in\mathbb Z_{>0}$, take a finite $1/N$-net of $S$, which exists since $S$ is metrizable and compact. Then for each $1/N$-net, we take the countable balls with diameter $1/M$ where $M$ ranges over $\mathbb Z_{>0}$. And we let $N\to\infty$. Thus (a) implies (b). (b)$\implies$(c) This is again trivial. Since compactness together with Hausdorff implies normal, we can easily construct a countable dense set of $C(S)$ by invoking the Tietze extension theorem. (c)$\implies (a)$ Suppose $\{f_n\}_{n=1}^\infty$ is a dense subset of $C(S)$ and we construct a metric: $$ d(x,y):=\sum_{n=1}^\infty 2^{-n}\min\{|f_n(x)-f_n(y)|,1\}. $$ How to show that the topology of $S$ is induced by this metric?
Show that $d$ is continuous on $S \times S$. Also show that $d$ is actually a metric on $S$ (there is something to be shown for $d(x,y)=0 \to x=y$, e.g.) Let $\mathcal{T}_d$ be the topology on $S$ that is generated by $d$ (i.e. in which the balls $B_d(x,r)$ are open for all $x \in S$ and all $r>0$), and $\mathcal{T}$ the given topology on $S$. As the continuity of $d$ implies that $$\forall x \in S: \forall r>0: B_d(x,r)= d_x^{-1}[(-\infty, r)] \text{ where } d_x: S \to \Bbb R \text{ is defined by } d_x(y)=d(x,y)$$ we have that by continuity of $d$ and hence all maps $d_x$, that $\mathcal{T}_d \subseteq \mathcal{T}$ and so the identity map $(S,\mathcal{T}) \to (S, \mathcal{T}_d)$ is a continuous bijection from a compact space to a Hausdorff space and hence closed and a homeomorphism, so $\mathcal{T}=\mathcal{T}_d$ and $d$ induces the topology on $S$. Now just fulfill the promise in the first sentence and you're done. I also don't agree that (b) to (c) is "trivial". Give an actual proof! Or maybe you already covered a version of Stone-Weierstraß? (a) to (b) could use a more fleshed out version of the argument as well (as the theorem I proved in this answer, which is more general).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's the distribution of $xy+xz+yz$ where $x,y,z $ are independent standard normal? We know the product of two independent Normal random variables has a normal product distribution, or Variance Gamma distribution if they are correlated. But, what if there are three Normal random variables? So, here is the question: Suppose $x,y,z$ are three independent normal random variables ($x, y, z\sim N(0,1)$), what's the distribution of $xy+xz+yz$?
The probability density function $f$ is given by $$f(x)=\begin{cases} \dfrac1{\surd3}\mathrm e^x & \text{if $x<0$}, \\ \dfrac2{\surd3}\mathrm e^x[1-\Phi(\sqrt{3x})] & \text{if $x\geqslant0$}, \end{cases}$$where $\Phi(x):=\dfrac1{\surd(2\pi)}\int_{-\infty}^x\exp\dfrac{-t^2}2\mathrm dt$ is the standard-normal cumulative distribution function. It is convenient to work at first with $2(YZ+ZX+XY)$, since this is$$(X+Y+Z)^2-(X^2+Y^2+Z^2).$$This may be written as $R^2(H^2-1)$, where $R:=\sqrt{X^2+Y^2+Z^2}$, $H:=0$ when $R=0$, and $$ H:=\frac{|X+Y+Z|}{|R|}\quad(R\neq0).$$Since $X$, $Y$, and $Z$ are independent, their joint distribution function is the product of their individual distribution functions. From the definition of the standard-normal distribution, and by expressing the product of exponentials as the exponential of their sum, it is easy to see that the density in $(x,y,z)$ space then depends only on (the square of) $r:=\surd(x^2+y^2+z^2)$. So, if we divide the space into spherical shells centered on the origin, the density is constant on each shell. By choosing the equatorial plane of the shell of radius $r$ to be $x+y+z=0$, and slicing into thin, equally spaced, rings parallel to the equator, it is easy to show that the “mass” of each ring is the same. The plane of each ring has the form $x+y+z=hr$, where $0\leqslant |h|\leqslant\surd3$. Thus the random variable $H$ has a uniform distribution with support $[0\,\pmb,\, \surd3]$. It is clear also that $H$ and $R$ are independent. It remains to find the distributions of $R^2$ and $H^2-1$.
The former is $\chi^2_3$ by a standard result:$$f_{R^2}(x)=\chi^2_3(x)=\frac1{\surd(2\pi)}x^{1/2}\mathrm e^{-x/2}\quad(x\geqslant0).$$ The distribution of $H^2-1$ is obtained by first considering the cumulative distribution function of $H$, getting that of $H^2$ from it, differentiating, and then shifting by $1$: $$f_{H^2-1}(x)=\frac{\pmb1\{-1<x\leqslant2\}}{2\surd3\surd(x+1)}.$$ Next we use the formula for the distribution $f_{12}$ of the product of two independent random variables with distribution functions $f_1$ and $f_2$: $$f_{12}(x)=\int_{-\infty}^\infty f_1(t)f_2\left(\frac xt\right)\frac{\mathrm dt}{|t|}.$$Thus, substituting $f_1=f_{H^2-1}$ and $f_2=\chi_3^2$ , and discarding the range of integration ($t<0$) outside the support of $f_2=\chi_3^2$ , yields$$f_{12}(x)=\int_0^\infty\frac{t^{1/2}\mathrm e^{-t/2}}{\surd(2\pi)}\frac{\pmb1\{-1<x/t<2\}}{2\surd3\surd(x/t+1)}\frac{\mathrm dt}{t}=\int_0^\infty\frac{\pmb1\{-t<x<2t\}\mathrm e^{-t/2}}{2\surd(6\pi)\surd(x+t)}\mathrm dt.$$After the substitution $u^2=x+t$, with $\mathrm dt=2u\,\mathrm du$, this may be written$$f_{12}(x)=\frac{\mathrm e^{x/2}}{2\surd(6\pi)}\int_0^\infty\pmb1\{u^2>3x/2\}\mathrm e^{-u^2/2}\,\mathrm du.$$This reduces to $f_{12}(x)=\dfrac{\mathrm e^{x/2}}{2\surd3}$ when $x<0$, and $f_{12}(x)=\dfrac{\mathrm e^{x/2}}{\surd3}\left[1-\Phi\left(\sqrt{3x/2}\right)\right]$ when $x\geqslant0$. Finally, the distribution is scaled according to the formula $$f_{\alpha S}(x)=\frac1\alpha f_S\left(\frac x\alpha\right)$$for any random variable $S$ and scaling constant $\alpha$, which in this case is $\frac12$. 
Remark: This approach may readily be generalized to any number of independent standard-normal random variables $X_i\;(i=1,...,n)$ to obtain the distribution of $\sum_{i<j}X_iX_j$: As in the above case ($n=3$), the distribution can be boiled down to that of the product of just two independent random variables—one having a $\chi_n^2$ distribution while the other's distribution is derived, as above, from a uniform distribution.
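A quick Monte Carlo sketch ties the pieces together: since $2(YZ+ZX+XY)=R^2(H^2-1)$ with $R^2>0$ almost surely, the sign of $XY+XZ+YZ$ is the sign of $H^2-1$, so the analysis predicts $P(XY+XZ+YZ<0)=P(H<1)=1/\surd3\approx0.5774$, matching $\int_{-\infty}^0\frac1{\surd3}\mathrm e^x\,\mathrm dx$ from the stated density. Sample size and seed below are arbitrary choices.

```python
import math
import random

random.seed(0)
N = 200_000
neg = 0
for _ in range(N):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    if x * y + x * z + y * z < 0:
        neg += 1
p_hat = neg / N  # should be close to 1/sqrt(3) ~ 0.5774
```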
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Reference for very basic books in Functional analysis I'm confused about which books I have to read for Functional analysis for the beginner level. I need references for very basic books in Functional analysis and that book must contain given Topics below $1.$ Normed linear spaces, $2.$ Banach spaces, $3.$ Hilbert spaces, $4.$ Compact operators. $5.$ Properties of $ C[0;1]$ and $L^p[0;1]$ $6.$ Continuous linear maps (linear operators). $7.$ Hahn-Banach Theorem, Open mapping theorem, $8.$ Closed graph theorem and the uniform boundedness principle.
My top two recommendations would be Functional Analysis: A First Course by M.T Nair (very beginner friendly) and Introductory Functional Analysis with Applications by E. Kreyszig. I also recommend the notes by V. S. Sunder for more of a spectral theoretic focus. You can also check out Functional Analysis by S. Kesavan.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Definition of eigenspace. I am studying linear algebra and got confused in defining the eigenspace corresponding to an eigenvalue. The thing puzzling me is that the same object is defined in two different books in different manners. Let $\lambda$ be an eigenvalue of a matrix $A$ of order $n$ over the field $\mathbb{F}$. Then Hoffman & Kunze define the eigenspace as follows: Definition: The collection of all $x\in\mathbb{F^n}$ such that $Ax=\lambda x$ is called the eigenspace associated with $\lambda$. While in another book, A first course in module theory by M. E. Keating, he defines an eigenspace as follows: Definition: Given an n x n matrix $A$ over a field $\mathbb{F}$, an eigenspace for $A$ is a nonzero subspace $U$ of $\mathbb{F^n}$ with the property that $$Ax=\lambda x$$ $\forall x\in U$ I think the two definitions are not equivalent: let $x_1,x_2$ be two linearly independent eigenvectors corresponding to the eigenvalue $\lambda$. Then according to the first definition the subspace generated by $x_1,x_2$ is the eigenspace of $A$ corresponding to $\lambda$, while according to definition 2 there are three eigenspaces, generated by (1) $x_1$ only, (2) $x_2$ only, (3) $x_1$ and $x_2$. Am I right in my understanding? Since my graduation days I have been familiar with the first definition and have used it in my problems. Could anybody help me understand? Thanks in advance
You are right: they are not equivalent. The first definition is the usual one. The second one is more general: if $F_\lambda$ is the eigenspace corresponding to $\lambda$ (with respect to the first definition), then the author of the second definition is saying that any non-zero subspace of $F_\lambda$ is an eigenspace for $A$ corresponding to the eigenvalue $\lambda$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove the difference is more than $n$ and less than $2n$ We chose $n + 2$ numbers from the set $\{1,2,....3n\}$ . Prove that there are always two among the chosen numbers whose difference is more than $n$ but less than $2n$. Though I can understand it by taking examples but I really struggle when it comes to prove. Also is there really good book that will structure my thinking so that I can proof these type of questions.
Like Alexey suggested, you can consider the remainders of division by $n+1$ of these $n+2$ numbers. Since there are only $n+1$ possible remainders and you have $n+2$ chosen numbers, you know, thanks to the pigeonhole principle, that at least $2$ different numbers, $x$ and $y$, have the same remainder. You can suppose that $y>x$, and you know that $n+1$ divides $y-x$. Since the numbers are different and all lie in $\{1,\dots,3n\}$, you know that $y-x= n+1$ or $y-x=2n+2$. In the first case you have found the numbers you need. In the second case, you know that $x \le n$ and $y= 2n+2+x \ge 2n$. Now we can consider the other $n$ numbers. The only numbers whose differences from $x$ and from $y=2n+x+2$ are not between $n$ and $2n$ are the ones below $x+3$ and the ones above $x+2n-3$, up to $3n$. These numbers, excluding $x$ and $y$, number $x+3 + (3n-(2n+x-3)) -2 = n+4$, but you can't take them all. You can freely take all the numbers below $x$ and above $y$, and you get $x + (3n - (2n+x+2)) = n-2 $ numbers, but you can't choose all of the $6$ different numbers remaining because, for example, if you choose $x+3$ you can't choose $ 2n+x-3$ or $2n+x-2$ or $2n+x-1$. You have to choose $4$ numbers among these $6$; these are only $15$ cases, and checking them you notice that you can't do it.
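The statement can also be brute-force verified for small $n$ with a short stdlib-only script; this is a sanity check, not a proof. (Note that $n=1$ is degenerate: no integer difference lies strictly between $1$ and $2$, so the check starts at $n=2$.)

```python
from itertools import combinations

def holds(n):
    # every (n+2)-subset of {1,...,3n} contains a pair whose
    # difference is strictly between n and 2n
    universe = range(1, 3 * n + 1)
    for subset in combinations(universe, n + 2):
        if not any(n < b - a < 2 * n
                   for a, b in combinations(subset, 2)):
            return False
    return True

checked = all(holds(n) for n in range(2, 6))
```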
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $f:\mathbb R^n\to \mathbb R$ be a continuous function with the property that there exist two sequences $(b_n)$ and $(c_n)\in \mathbb R^n$ such that Let $f:\mathbb R^n\to \mathbb R$ be a continuous function with the property that there exist two sequences $(b_n)$ and $(c_n)\in \mathbb R^n$ such that $f(b_n)\to \infty$ and $f(c_n)\to -\infty $ Now I want to prove that $\lim\limits_{n\to \infty}||b_n||\to\infty$ and $\lim\limits_{n\to \infty}||c_n||\to\infty$ and I want to show that for $k>1$ there exists some $(a_n)\in\mathbb R^k$ such that $f(a_n)=0$ for all $n\in\mathbb N$ and $||a_n||\to \infty$ My work for the $\lim\limits_{n\to \infty}||b_n||\to\infty$ and $\lim\limits_{n\to \infty}||c_n||\to\infty$ Assume not, so $||b_n||\to b\in \mathbb R$ and similarly for $c_n$, $||c_n||\to c\in \mathbb R$. Since $||.||$ is a continuous function, $b_n\to \vec b\in\mathbb R^n$ and $c_n\to \vec c\in\mathbb R^n$, and since $f$ is continuous, $$f(b_n)\to f(\vec b)\in\mathbb R$$ so it cannot be infinity. How to do it properly, and can you give me a hint about the second question, where I am stuck: for $n>1$ there exists some $(a_n)\in\mathbb R^n$ such that $f(a_n)=0$ for all $n\in\mathbb N$ and $||a_n||\to \infty$
Suppose that $\lim_n\|b_n\|$ is not $\infty$. Then there exists $C$ such that for every $n$ there exists $p_n>n$ with $\|b_{p_n}\|<C$. Let $B(0,C)$ be the closed ball of radius $C$. Since $f$ is continuous, $f(B(0,C))$ is compact, hence bounded, and contained in some $[-P,P]$. Since $\lim_nf(b_n)=\infty$, there exists $n_P$ such that $n>n_P$ implies $f(b_n)>P$. But $p_{n_P}>n_P$ while $b_{p_{n_P}}\in B(0,C)$, so $f(b_{p_{n_P}})\le P$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3700940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Calculating $\frac{1}{a}$ using Newton-Raphson method I have a computer that doesn't implement division operation (it has only addition, substraction and multiplication). I need to find a method to find the approximate value of $\frac{1}{a}$, where $a\in \mathbb R \setminus\{0\}$. I'm supposed to do that with Newton-Raphson method ($x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}$) and there mustn't be any division operation in the final formula. So far I have tried obvious $f(x)=ax-1$, but then: $$ x_{k+1}=x_k-\frac{a\cdot x_k-1}{a}=\frac{1}{a} $$ which obviously haven't brought me any closer to the answer. Do you have any ideas what $f$ function should I take to solve this?
We take function: $f(x)=a-\frac{1}{x}$. $$ x_{k+1}=x_k-\frac{a-\frac{1}{x_k}}{\frac{1}{x_k^2}}=\frac{\frac{1}{x_k}-a+\frac{1}{x_k}}{\frac{1}{x_k^2}}=2x_k-a\cdot x_k^2 $$
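A minimal Python sketch of this iteration (the starting guess and iteration count are arbitrary choices): the update $x_{k+1}=2x_k-ax_k^2$ uses only multiplication and subtraction, and converges quadratically when $0<x_0<2/a$ for $a>0$.

```python
def reciprocal(a, x0, iterations=10):
    # Newton's method on f(x) = a - 1/x: x <- 2x - a*x*x, no division used
    x = x0
    for _ in range(iterations):
        x = 2 * x - a * x * x
    return x

approx = reciprocal(7.0, 0.1)  # approximates 1/7 without ever dividing
```

The error $e_k = 1 - a x_k$ satisfies $e_{k+1} = e_k^2$, which is the quadratic convergence visible in practice.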
{ "language": "en", "url": "https://math.stackexchange.com/questions/3701131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Uniform convergence and integrals. I'm asked to tell if the following integral is finite: $$\int_0^1 \left(\sum_{n=1}^{\infty}\sin\left(\frac{1}{n}\right)x^n \right)dx$$ I studied the series (which converges uniformly on $(-1,1)$ by d'Alembert's Criterion and in $-1$ by Leibniz's Criterion, so in general the convergence is uniform in $[-1,1)$). In $1$ we have that the series goes like $\frac{1}{n}$ and so diverges. Can I exchange integral and sum if the convergence is not uniform in $1$? I'd say yes because I can write $\int_0^1$ as $\lim_{\epsilon \to 1} \int_0^{\epsilon}$ but I'd like a confirmation.
We know that $f_{n}(x)= \sin \Big( \frac{1}{n} \Big) x^{n}$ are Lebesgue integrable on $(0,1)$ and positive. Since $$ \sum_{n=1}^{ \infty} \int_{0}^{1} \sin \Big( \frac{1}{n} \Big) x^{n} dx = \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) \frac{1}{n+1}, $$ which converges, because $$ \sin \Big( \frac{1}{n} \Big) \leq \frac{1}{n} \Rightarrow \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) \frac{1}{n+1} \leq \sum_{n=1}^{ \infty} \frac{1}{n} \frac{1}{n+1} < \infty $$ we then know by Levi's Theorem for Series of Lebesgue Integrable Functions that $$ \int_{0}^{1} \sum_{n=1}^{ \infty} \sin \Big( \frac{1}{n} \Big) x^{n} dx = \sum_{n=1}^{ \infty} \int_{0}^{1} \sin \Big( \frac{1}{n} \Big) x^{n} dx$$
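A short stdlib-only probe of the key hypothesis, namely that the series of integrals $\sum_n \sin(1/n)\frac{1}{n+1}$ converges (the cutoffs are arbitrary choices):

```python
import math

def partial_sum(N):
    return sum(math.sin(1.0 / n) / (n + 1) for n in range(1, N + 1))

s_small, s_big = partial_sum(1_000), partial_sum(100_000)
# the partial sums are increasing and bounded by sum 1/(n(n+1)) = 1,
# and the tail beyond n = 1000 is already below 1/1001
tail = s_big - s_small
```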
{ "language": "en", "url": "https://math.stackexchange.com/questions/3701461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the coordinate definition of holomorphic vector fields? We can write a holomorphic vector field in local co-ordinates as $X=X^i\dfrac{\partial}{\partial z_i}$, where $\dfrac{\partial}{\partial z_i}$ forms a local frame for $T^{(1,0)}M$. My questions are: 1. Do the $X^i$ have to be holomorphic functions? 2. Can we say that $\dfrac{\partial X^i}{\partial \bar{z}^j}=0$ for all $i,j$? 3. If 2 is not true, then what is the right co-ordinate condition on the $X^i$s?
1. is right, by the definition of a holomorphic vector field. 2. is also right: the condition $\dfrac{\partial X^i}{\partial \bar{z}^j}=0$ for all $i,j$ says precisely that the coefficient functions $X^i$ are holomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3701628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\ker(T) + \ker(S) = V \implies {\rm Im}(T + S) ={\rm Im}(T) +{\rm Im}(S)$ Let $V$ be a vector space. Let $T, S$ be two linear operators $T:V \rightarrow V$, $S: V \rightarrow V$, such that $$\ker(T) + \ker(S) = V$$, then we must have $${\rm Im}(T+S) = {\rm Im}(T) + {\rm Im}(S)$$. If the statement is true, then give the proof. If the statement is false, give an counterexample. Attempt: So far, I am trying to use the $rank-nullity$ theorem and the theorem $\dim(U + W) + \dim(U \cap W) = \dim(U) + \dim(W)$, but didn't get any luck and I am getting stuck for this question for hours. Could someone tell me how to solve this question? Thanks!
True. Proof: Clearly $\text{Im}(T+S) \subseteq \text{Im}(T)+\text{Im}(S)$, so we just need to show $\text{Im}(T)+\text{Im}(S) \subseteq \text{Im}(T+S)$. As another answer suggested, it suffices to show $\text{Im}(T) \subseteq \text{Im}(T+S)$ and $\text{Im}(S) \subseteq \text{Im}(T+S)$, because in general if $U, W$ are contained in a subspace $X$, then $U+W$ is also contained in $X$. Thus, let $v \in V$ be any vector, hence $v = k_t + k_s$ for some $k_t \in \ker(T), k_s \in \ker(S)$. Then $T(v) = T(k_t) + T(k_s) = T(k_s) = T(k_s) + S(k_s) = (T+S)(k_s)$, showing that $\text{Im}(T) \subseteq \text{Im}(T+S)$. The case $\text{Im}(S) \subseteq \text{Im}(T+S)$ is similar. $\square$
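A concrete numerical instance of the key step (the matrices are an illustrative choice, not part of the original argument): take $T$ with kernel spanned by $(1,-1)$ and $S$ with kernel spanned by $(0,1)$, so $\ker T+\ker S=\Bbb R^2$; decomposing $v=k_t+k_s$ then exhibits $T(v)$ as $(T+S)(k_s)$, exactly as in the proof.

```python
def matvec(A, v):
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

T = ((1, 1), (0, 0))   # ker T = span{(1, -1)}
S = ((1, 0), (0, 0))   # ker S = span{(0, 1)}
TS = ((2, 1), (0, 0))  # the matrix of T + S

v = (3.0, 5.0)
k_t = (v[0], -v[0])        # component of v in ker T
k_s = (0.0, v[1] + v[0])   # component of v in ker S; k_t + k_s = v
lhs = matvec(T, v)         # T(v)
rhs = matvec(TS, k_s)      # (T + S)(k_s), which equals T(v)
```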
{ "language": "en", "url": "https://math.stackexchange.com/questions/3701793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that if $E[X_1^p]<\infty$, then $\frac{\max_{1\le i\le n} X_i}{n^{1/p}} \rightarrow 0$ in probability where $\{X_n\}$ is i.i.d and non-negative Suppose $\{X_n\}_{n\geq 1}$ are iid and non negative. Define $M_n=\max \limits_{i=1,\ldots,n}\{X_i\}.$ Prove if $E[X_1^p]<\infty$, then $\frac{M_n}{n^{1/p}}\rightarrow 0$ in probability. As a previous result I have that $P[M_n>x]\leq nP[X_1>x].$ Is this result useful to prove this convergence? I would appreciate any hint.
You can use the result $P[M_n>x]\leq nP[X_1>x]$. Observe that $$P[M_n/n^{1/p}>\epsilon]=P[M_n>\epsilon n^{1/p}] \le nP[X>\epsilon n^{1/p}]=nP[X^p>\epsilon^pn].$$ Now use the inequality $P[Y>a]\le E\left[\frac YaI(Y>a)\right]$ to conclude $$nP[X^p>\epsilon^pn]\le E\left[\frac{X^p}{\epsilon^p}I(X^p>\epsilon^pn)\right],$$ and finally argue that the RHS tends to $0$ as $n\to\infty$.
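A small simulation hints at the convergence (illustration only: here $X_i=|Z_i|$ with $Z_i$ standard normal, so $E[X_1^2]<\infty$ and we take $p=2$; seed and sample sizes are arbitrary choices):

```python
import random

random.seed(1)

def ratio(n, p=2):
    # M_n / n^(1/p) for one sample of size n
    m = max(abs(random.gauss(0, 1)) for _ in range(n))
    return m / n ** (1 / p)

ratios = [ratio(n) for n in (10**3, 10**4, 10**5)]
# for |N(0,1)| the max grows only like sqrt(2 log n), so the ratios shrink
```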
{ "language": "en", "url": "https://math.stackexchange.com/questions/3701965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Basic question about inequality let $x>1$ Obviously $x^2>x$ Then $x^2>x>1$ Taking $x^2>1$, we can assert that this holds true for all the values for $x>1$ and $x<-1$ But if I take $-5$ such that $x<-1$, then $x^2>x$ holds but $x>1$ doesn't. Why is it so? Doesn't $x^2>x>1$ mean that all of the three must be true?
When you deduced that $x^2>x>1$, you specifically had the constraint that $x > 1$. Of course, that chain is not applicable at $x=-5$, since $x=-5$ does not satisfy $x>1$. More specifically, $x > 1 \Rightarrow x^2 > x > 1$ is a true statement, but $x^2 > x \iff x> 1$ is clearly not a true statement, as you have demonstrated: $x^2>x$ also holds for every $x<0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }