Show that a ring with only trivial right ideals is either a division ring or $|R|=p$ and $R^2=\{0\}$. Why would $R$ be finite? Let $R$ be a ring such that the only right ideals of $R$ are $(0)$ and $R$. Prove that either $R$ is a division ring or that $R$ is a ring with a prime number of elements in which $ab= 0$ for all $a,b\in R$. I don't want the proof. I am stuck at one point. Why does $R$ have to be finite here?
Hint: any infinite group has infinitely many subgroups. If the ring multiplication is trivial, what are the right ideals?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Non-trivial example of algebraically closed fields I'm beginning an introductory course on Galois Theory and we've just started to talk about algebraic closed fields and extensions. The typical example of algebraically closed fields is $\mathbb{C}$ and the typical non-examples are $\mathbb{R}, \mathbb{Q}$ and arbitrary finite fields. I'm trying to find some explicit, non-typical example of algebraically closed fields, but it seems like a complicated task. Any ideas?
One interesting thing to note is that the first-order theory of algebraically closed fields of characteristic $0$ is $\kappa$-categorical for $\kappa > \aleph_0$. That means that any other algebraically closed field of characteristic $0$ of the same cardinality as $\mathbb{C}$ is isomorphic to $\mathbb{C}$! Maybe not the most helpful response, but I think quite interesting.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
Find the ratio of diagonals in Trapezoid Given $ABCD$ a rectangular trapezoid, $\angle A=90^\circ$, $AB\parallel DC$, $2AB = CD$ and $AC \perp BD$. What is the value of $AC/BD$ ? Attempts so far: I have tried using the ratio of the areas of triangles $AOB$ and $DOC$, which is $\frac14$ (where $O$ is the intersection of the diagonals), but I couldn't get anything useful. I don't know how to use the fact that the diagonals are perpendicular.
Put $AB=1$, $DC=2$, $AD=x$, and set the trapezoid with $A$ in the origin of a cartesian plane. Take the vector $AC=(x,2)$ and the vector $DB=(-x,1)$ and impose that they be orthogonal, i.e. their dot product be null, i.e. $x^2=2$. Now you have all the data to continue.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does a linear map transform the unit ball? Let $f:\mathbb{R}^n \to \mathbb{R}^n$ be a linear map, and suppose that $f$ is symmetric ($\langle f(x),y\rangle=\langle x, f(y)\rangle$). Without using the spectral theorem, how can we see that $f$ maps the unit ball onto an ellipsoid? Or how can we prove the spectral theorem geometrically (intrinsically)? Can this reasoning be expanded to see why not every square matrix is diagonalizable?
$\newcommand{\Reals}{\mathbf{R}}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$An invertible linear operator $f$ on $\Reals^{n}$ maps the unit ball to an ellipsoid whether or not $f$ is symmetric (or even diagonalizable): One strategy is to write the unit ball as the locus of a quadratic inequality and perform a linear change of variables, concluding the image is compact and defined by a quadratic inequality, hence an ellipsoid (for some value of "hence"). In other words, knowing the image of the unit ball is an ellipsoid does not imply the spectral theorem, so if I understand what you're asking, the answer to your third question is "no". As for proving the spectral theorem geometrically, one approach is to maximize the quadratic function $$ F(x) = \Brak{x, f(x)} $$ restricted to the unit sphere, i.e., subject to the constraint $g(x) = \|x\|^{2} = 1$. By Heine-Borel and the extreme value theorem, $F$ has an absolute maximum, $x_{0}$; Lagrange multipliers and symmetry of $f$ show that $$ 2f(x_{0}) = \nabla F(x_{0}) = \lambda \nabla g(x_{0}) = 2\lambda x_{0}; $$ that is, $x_{0}$ is an eigenvector of $f$, with eigenvalue equal to the maximum value of $F$. Now induct, restricting everything to the orthogonal complement of $x_{0}$, and iteratively constructing (for some value of "construct") an orthonormal $f$-eigenbasis of $\Reals^{n}$.
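For what it's worth, here is a small numerical illustration of this variational argument (my own sketch, not part of the answer above): projected gradient ascent of $F(x)=\langle x, f(x)\rangle$ along the unit sphere, for a random symmetric matrix and a generic random start, converges to a maximizer that is indeed an eigenvector for the top eigenvalue. The step size and iteration count are ad hoc choices.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    S = (M + M.T) / 2                     # a random symmetric operator on R^5

    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)
    for _ in range(20000):
        g = 2 * S @ x                     # Euclidean gradient of F(x) = <x, Sx>
        g -= (g @ x) * x                  # project onto the sphere's tangent space
        x += 0.01 * g
        x /= np.linalg.norm(x)            # retract back onto the sphere

    lam = x @ S @ x                       # the Lagrange multiplier = max of F
    print(np.allclose(S @ x, lam * x, atol=1e-6))        # x is an eigenvector (generic seed)
    print(np.isclose(lam, np.linalg.eigvalsh(S).max()))  # lam is the top eigenvalue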
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Does every finite dimensional real nil algebra admit a multiplicative basis? We say that a finite dimensional real commutative and associative algebra $\mathcal{A}$ is nil if every element $a \in \mathcal{A}$ is nilpotent. By multiplicative basis, I mean a basis $\{ v_1, \dots , v_n \}$ for $\mathcal{A}$ as a real vector space such that for each $v_i$ and $v_j$, the algebra multiplication $v_i \star v_j = c v_k$ for some $c \in \mathbb{R}$, and some other element $v_k$ of the basis. Given such a nil algebra $\mathcal{A}$, does it always admit a multiplicative basis in the sense described above? If not, what is an example of a nil algebra which does not admit a multiplicative basis?
No. The following constructs a counterexample. Let $R$ be the graded ring $\mathbb{R}[x_1, \ldots, x_5]$ and $I = (x_1, \ldots, x_5)$. In any homogeneous degree $d$, define the "pure" polynomials to be the products of $d$ linear polynomials, and let the rank of a homogeneous polynomial $f$ be the minimum number of terms needed to express $f$ as a sum of pure polynomials. I assert the following: * *The grade one piece $R_1$ is isomorphic to the vector space of $1 \times 5$ matrices *The grade two piece $R_2$ is isomorphic to the vector space of symmetric $5 \times 5$ matrices *The product $R_1 \times R_1 \to R_2$ corresponds to the symmetrized outer product $(v,w) \mapsto \frac{1}{2}(v^Tw + w^Tv)$ (phrased differently: $R_1$ is the space of linear forms, and $R_2$ is the space of symmetric bilinear forms) The rank of a matrix has a similar characterization: $\text{rank}(A)$ is the smallest number of terms you need to express $A$ as a sum of outer products $\sum_i v_i^T w_i$. Of particular note is that if a homogeneous quadratic polynomial $f$ corresponds to the matrix $A$, then $\text{rank}(A) \leq 2 \text{rank}(f)$. Consequently, there exists a homogeneous quadratic polynomial $f$ such that $\text{rank}(f) \geq 3$. One such example is $f = \sum_i x_i^2$. Now, consider the graded algebra $A = I / (I^3 + fR)$. Its grade 1 piece is 5-dimensional and its grade 2 piece is 14-dimensional. Suppose we have a collection of polynomials of $I$ that form a multiplicative basis for $A$. The basis must consist of at least five polynomials that span $I/I^2$. There are 15 products of pairs of these polynomials, and they are all distinct elements of $I^2 / I^3$. Suppose two of these products were the same in $A$. That would imply we have two rank one quadratic polynomials $g$ and $h$ with the property that $rg = sh + tf$ for some scalars $r,s,t$. However, we would have $rt^{-1}g + (-st^{-1})h = f$ which is impossible, because the left hand side has rank at most 2, but the right hand side has rank 3. Consequently, the 15 pairwise products of the multiplicative basis for $A$ are all distinct (and nonzero) elements of $A$, and they are distinct from the original $5$ polynomials as well. Consequently, the basis must have at least 20 elements, contradicting the fact that $A$ is 19-dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1858832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How many integers $\leq N$ are divisible by $2,3$ but not divisible by their powers? How many integers in the range $\leq N$ are divisible by both $2$ and $3$ but are not divisible by whole powers $>1$ of $2$ and $3$ i.e. not divisible by $2^2,3^2, 2^3,3^3, \ldots ?$ I hope by using the inclusion–exclusion principle one may derive such a formula and part of the formula has a form $$ N-\left[\frac{N}{2} \right]+\left[\frac{N}{2^2} \right]-\left[\frac{N}{2^3} \right]+\cdots -\left[\frac{N}{3} \right]+\left[\frac{N}{3^2} \right]-\left[\frac{N}{3^3} \right]+\cdots+\left[\frac{N}{2 \cdot 3} \right]+\text{some terms like as $\pm \left[\frac{N}{2^i \cdot 3^j} \right]$} $$ Question. What is the exact sign for a term $ \left[\frac{N}{2^i \cdot 3^j} \right]$?
The rules permit all numbers divisible by $6$, but exclude those also divisible by $4$ or $9$. The count is given by: $$\lfloor\frac{N}{6}\rfloor-\lfloor\frac{N}{12}\rfloor-\lfloor\frac{N}{18}\rfloor+\lfloor\frac{N}{36}\rfloor$$ First, count the numbers divisible by 6. Next term: remove the numbers divisible by 6 and 4. These are all numbers divisible by 12, since $\operatorname{lcm}(6,4)=2^2\times 3=12$. Next term: remove the numbers divisible by 6 and 9. These are all numbers divisible by 18, since $\operatorname{lcm}(6,9)=2\times 3^2=18$. Next term: add back in the multiples of 36, since these are the numbers we have deducted twice; divisible by 6, 4, and 9, with $\operatorname{lcm}(6,4,9)=2^2\times 3^2=36$.
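As a sanity check, here is a quick brute-force comparison of the formula against a direct count (a sketch added here; the helper names are made up):

    def by_formula(N):
        return N//6 - N//12 - N//18 + N//36

    def by_counting(N):
        # divisible by 2 and 3 (i.e. by 6), but not by 4 and not by 9
        return sum(1 for k in range(1, N + 1)
                   if k % 6 == 0 and k % 4 != 0 and k % 9 != 0)

    assert all(by_formula(N) == by_counting(N) for N in range(1, 1000))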
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find the probability of getting two sixes in $5$ throws of a die. In an experiment, a fair die is rolled until two sixes are obtained in succession. What is the probability that the experiment will end in the fifth trial? My work: The probability of not getting a $6$ in the first roll is $\frac{5}{6}$ Similarly for the second and third throw. Again the probability of getting a $6$ is fourth roll is $\frac{1}{6}$. So the probability of ending the game in the fifth roll is $\frac{5^3}{6^3}\times\frac{1}{6^2}=\frac{125}{6^5}$. But the answer is not correct. Where is my mistake? Help please.
The probability of getting exactly two sixes, in any order, in $6$ rolls of a die can be determined using the binomial distribution. One particular ordering of the rolls has probability $(1/6)^2 (5/6)^4$. That very simply finds the chance of two sixes in fixed positions, but there is more than one way to place the two sixes among the six rolls, so you need to multiply that value by $\binom{6}{2}=\frac{6!}{2!(6-2)!}=15$. So the probability is $15 \times (1/6)^2 \times (5/6)^4 \approx 0.2009$, so basically $1/5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 5 }
nature of the series $\sum \tfrac{(-1)^{n}\ln(n)}{\sqrt{n+2}}$ I would like to prove the following series convergent $$\dfrac{(-1)^{n}\ln(n)}{\sqrt{n+2}},\quad \dfrac{(-1)^{n}\ln(2)}{\sqrt{n+2}}$$ using the Alternating series test: * *$u_n=\dfrac{(-1)^{n}\ln(n)}{\sqrt{n+2}}$, so $|u_n|=\dfrac{\ln(n)}{\sqrt{n+2}}$. We have : $$\dfrac{\ln(n)}{\sqrt{n+2}}\sim \dfrac{\ln(n)}{\sqrt{n}}.$$ As $\ln(n)=\mathcal{o}\left( \sqrt{n}\right)$, then $\dfrac{\ln(n)}{\sqrt{n+2}} \underset{ \overset { n \rightarrow +\infty } {} } {\longrightarrow }0 $ Still I need to show that $|u_n|$ decreases monotonically: I'm stuck here, I tried $|u_{n+1}|-|u_n|$ and $\dfrac{|u_{n+1}|}{|u_n|}$ and I tried to show that if $|u_n|=f(n)=\dfrac{\ln(n)}{\sqrt{n+2}}$ then $$f'(x)=\dfrac{2x+4-x\ln(x)}{2x\left(x+2 \right)^{\frac{3}{2}}},$$ but I can't tell if it's negative or positive. * *$u_n=\dfrac{(-1)^{n}\ln(2)}{\sqrt{n+2}}$, so $|u_n|=\dfrac{\ln(2)}{\sqrt{n+2}}$ and $|u_n|=\dfrac{\ln(2)}{\sqrt{n+2}} \underset{ \overset { n \rightarrow +\infty } {} } {\longrightarrow }0 $ *If $|u_n|=g(n)=\dfrac{\ln(2)}{\sqrt{n+2}}$ then $g'(n)=\frac{-1}{2\sqrt{(n+2)^{3}}}\leq 0$ then by the Alternating series test $$\sum_{n\geq 0}\dfrac{(-1)^{n}\ln(2)}{\sqrt{n+2}}$$ is convergent.
Note that the sign of the first derivative is the sign of its numerator $$2x + 4 - x \ln x,$$ since the denominator $2x(x+2)^{3/2}$ is positive for $x>0$. Consider $g(x) = (2 - \ln x)x + 4$; $g'(x) = 1 - \ln x < 0$ for $x > e$, and (say) $g(e^3) = 4-e^3 < 0$, so for $n \ge \lceil e^3 \rceil = 21$, $|u_n|$ is decreasing and we can apply the AST.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Fibonacci using Matrix Representation. Fibonacci using matrix representation is of the form: Fibonacci Matrix. This claims to be of $O(\log n)$. However, isn't computing a matrix multiplication of order $O(n^3)$, or $O(n^{2.81})$ using Strassen's algorithm? How can this be solved in $O(\log n)$?
Yes, using the Fibonacci matrix $\begin{pmatrix} 1&1\\1&0\end{pmatrix}$ is the way to calculate the $n$th Fibonacci number in $O(\log(n))$ time. Note that the $O(n^3)$ (or Strassen $O(n^{2.81})$) cost of matrix multiplication is for $n\times n$ matrices; here the matrices are a fixed $2\times 2$ size, so each multiplication costs $O(1)$, and exponentiation by squaring needs only $O(\log(n))$ of them. I put an example code:

    #include <string.h>

    /* n-th Fibonacci number via exponentiation by squaring of {{1,1},{1,0}}. */
    long long fibonacci(int n) {
        long long fib[2][2] = {{1,1},{1,0}};   /* the Fibonacci matrix     */
        long long ret[2][2] = {{1,0},{0,1}};   /* accumulator, starts as I */
        long long tmp[2][2];
        int i, j, k;
        while (n) {
            if (n & 1) {                       /* odd bit: ret = ret * fib */
                memset(tmp, 0, sizeof tmp);
                for (i = 0; i < 2; i++)
                    for (j = 0; j < 2; j++)
                        for (k = 0; k < 2; k++)
                            tmp[i][j] += ret[i][k] * fib[k][j];
                memcpy(ret, tmp, sizeof ret);
            }
            memset(tmp, 0, sizeof tmp);        /* square: fib = fib * fib  */
            for (i = 0; i < 2; i++)
                for (j = 0; j < 2; j++)
                    for (k = 0; k < 2; k++)
                        tmp[i][j] += fib[i][k] * fib[k][j];
            memcpy(fib, tmp, sizeof fib);
            n /= 2;
        }
        return ret[0][1];
    }
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Real Projective $n$ space $\mathbb{R}P^{n}$ In example 0.4 of Hatcher, he says that $\mathbb{R}P^{n}$ is just the quotient space of the sphere $S^{n}$ with antipodal points identified. He then says that this is equivalent to the quotient of a hemisphere $D^{n}$ with the antipodal points of the boundary identified. I don't understand why those spaces are equivalent. Could someone please explain?
So, a "point" in $\mathbb{RP}^n$ is secretly the same thing as a pair of antipodal points. But, if you look at two antipodal points in $S^n$, one of two things occurs: * *One of the points lies in the open hemisphere $\mathrm{Int}(D^n)$, and the other is in the opposite open hemisphere. *Both points are on the equator $\partial D^n$. So, every point in $\mathbb{RP}^n$ has a representative in $D^n$, and it has in fact only one such representative, except if it lies on the equator. So you get $\mathbb{RP}^n$ by considering $D^n$ and "correcting" the only injectivity default by collapsing the pairs of antipodal points in the equator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
$\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor$ means that $x$ is close to an integer Suppose $x>30$ is a number satisfying $\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor$. Prove that $\{x\}<\frac{1}{2700}$, where $\{x\}$ is the fractional part of $x$. My heuristic is that $x$ needs to be "small": i.e. as close to $30$ as possible to get close to the upper bound on $\{x\}$, but I'm not sure how to make this a proof.
Let $\lfloor x \rfloor =y$ and $\{x\}=b$. Then the condition $\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor$ reads $y\lfloor y^2+2by+b^2 \rfloor= \lfloor y^3+3y^2b+3yb^2+b^3\rfloor$. One way this can happen is that $b$ is small enough that all the terms involving $b$ are less than $1$, which makes both sides $y^3$. This requires $3y^2b \lt 1$, which for $y\geq 30$ gives $b \lt \frac 1{2700}$ as required. Now you have to argue that if $2by+b^2 \ge 1$ the right side will be too large.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
$\mathbb Z$ basis of the module $\mathbb Z [\zeta]$ Given an $n$-th root of unity $\zeta$, consider the $\mathbb Z$-module $M := \mathbb Z[\zeta]$. * *Does this module have a special name? *Does a basis exist for every $n$? And if so, is there an algorithm to find a basis given an $n$? I was just playing around with this, and noticed that for $n=3$ we have e.g. the basis $(1,\zeta)$ because $1+\zeta = -\zeta^2$. For $n=4$ we obviously have the basis $(1,i)$, but I was unable to generalize this for an arbitrary $n$.
The ring $\mathbb Z[\zeta]$ is called the ring of cyclotomic integers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A closed form for $1^{2}-2^{2}+3^{2}-4^{2}+ \cdots + (-1)^{n-1}n^{2}$ Please look at this expression: $$1^{2}-2^{2}+3^{2}-4^{2} + \cdots + (-1)^{n-1} n^{2}$$ I found this expression in a math book. It asks us to find a general formula to calculate it in terms of $n$. The formula that the book suggests is this: $$-\frac{1}{2}\times (-1)^{n} \times n(n+1)$$ Would you mind explaining to me how we get this formula?
We wish to show that $$ 1^{2}-2^{2}+3^{2}-4^{2} + \dotsb + (-1)^{n-1} n^{2}= (-1)^{n+1}\frac{n(n+1)}{2}\tag{1} $$ To do so, induct on $n$. The base case $n=1$ is simple to verify. Now, suppose that $(1)$ holds. Then \begin{align*} 1^{2}-2^{2}+3^{2}-4^{2} + \dotsb + (-1)^{n} (n+1)^{2} &= (-1)^{n+1}\frac{n(n+1)}{2}+(-1)^{n} (n+1)^{2} \\ &= (-1)^n\left\{ -\frac{1}{2}\,n^2-\frac{1}{2}\,n+n^2+2\,n+1 \right\} \\ &= (-1)^{n+2}\left\{ \frac{1}{2}\,n^2+\frac{3}{2}\,n+1 \right\} \\ &= (-1)^{n+2}\frac{1}{2}\left\{ n^2+3\,n+2 \right\} \\ &= (-1)^{n+2}\frac{(n+1)(n+2)}{2}\\ \end{align*} This closes the induction.
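A quick computational check of $(1)$ for small $n$ (my own addition, not part of the proof):

    def alternating_sum(n):
        return sum((-1)**(k - 1) * k**2 for k in range(1, n + 1))

    # n(n+1) is always even, so integer division by 2 is exact here
    assert all(alternating_sum(n) == (-1)**(n + 1) * n * (n + 1) // 2
               for n in range(1, 50))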
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 5 }
Find the maximum of $U (x,y) = x^\alpha y^\beta$ subject to $I = px + qy$ Let $U (x,y) = x^\alpha y^\beta$. Find the maximum of the function $U(x,y)$ subject to the equality constraint $I = px + qy$. I have tried to use the Lagrangian function to find the solution for the problem, with the equation $$\nabla\mathscr{L}=\vec{0}$$ where $\mathscr{L}$ is the Lagrangian function and $\vec{0}=\pmatrix{0,0}$. Using this method I have a system of $3$ equations with $3$ variables, but I can't simplify this system: $$\alpha x^{\alpha-1}y^\beta-p\lambda=0$$ $$\beta y^{\beta-1}x^\alpha-q\lambda=0$$ $$I=px+qy$$
The solution The answer can be found on the internet in any number of places. The function $U$ is a Cobb-Douglas utility function. The Cobb-Douglas function is one of the most commonly used utility functions in economics. The demand functions you should get are: $$x(p,I)=\frac{\alpha I}{(\alpha+\beta)p}\qquad y(q,I)=\frac{\beta I}{(\alpha+\beta)q}$$ The solution has a nice interpretation: the consumer spends a fraction $\frac{\alpha}{\alpha+\beta}$ of their income on good $x$ and fraction $\frac{\beta}{\alpha+\beta}$ on good $y$. If you want to find the full working spend a minute or two searching the internet. A simplification Note here that you can simplify things by instead maximizing the function $V$ where $$V(x,y)=\ln U(x,y)=\alpha \ln x+\beta\ln y$$ Since $V$ is an increasing transformation of $U$ it will have the same maximizer. In fact you could simplify the working further by maximizing $W$ where $$W(x,y)=\frac{V(x,y)}{\alpha+\beta}=\bar{\alpha}\ln x+(1-\bar{\alpha})\ln y$$ where $\bar{\alpha}=\frac{\alpha}{\alpha+\beta}$.
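If you would rather see the working done mechanically, here is a short sympy sketch (my own addition) that maximizes the log-transformed objective $V$ suggested above; the variable names are mine:

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', positive=True)
    alpha, beta, p, q, I = sp.symbols('alpha beta p q I', positive=True)

    # Lagrangian of the log-transformed problem
    L = alpha*sp.log(x) + beta*sp.log(y) + lam*(I - p*x - q*y)
    sol = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
                   [x, y, lam], dict=True)[0]
    print(sol[x])   # expected: I*alpha/(p*(alpha + beta))
    print(sol[y])   # expected: I*beta/(q*(alpha + beta))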
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Positive integer solution $pm = qn+1$ Let $m,n$ be relatively prime positive integers. Prove that there exist positive integers $p,q$ such that $pm = qn+1$. We know by Bézout's identity that there exist integers $p,q$ such that $pm+qn = 1$, but how do we know we can get positive integers $p,q$ with $pm = qn+1$?
(1). Given integers $p',q'$ with $p'm+q'n=1,$ the set of all $(p'',q'')$ such that $p''m+q''n=1$ is $\{(p'+xn, q'-xm): x\in Z\}.$ If $p'>0$ then $q'<0,$ so let $p=p'$ and $q=-q'.$ If $p'\leq 0$ take $x\in Z^+$ where $x$ is large enough that $p'+xn>0 $ and $ xm-q'>0 . $ Let $p=p'+xn$ and $q=-q'+xm.$ (2). A one-step general way is that $$p'm\equiv 1\pmod n\implies \forall x\in Z\;((p'+xn)m\equiv 1\pmod n).$$ So for $x$ positive and large enough that $p'+xn>0$ we have $0<(p'+xn)m=1+qn$ for some $q\in Z,$ so $q>0.$
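Method (2) is easy to carry out computationally; here is a small sketch (the helper name is mine; pow(m, -1, n) needs Python 3.8+):

    from math import gcd

    def positive_solution(m, n):
        assert gcd(m, n) == 1
        p = pow(m, -1, n)              # p*m ≡ 1 (mod n), with 0 <= p < n
        if p == 0:                     # happens only when n == 1
            p = n
        q = (p * m - 1) // n
        if q <= 0:                     # shift by n until q is positive too
            p += n
            q = (p * m - 1) // n
        return p, q

    print(positive_solution(5, 7))     # (3, 2): 3*5 = 2*7 + 1
    print(positive_solution(7, 5))     # (3, 4): 3*7 = 4*5 + 1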
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The Thirty-one Game: Winning Strategy for the First Player I am going through UCLA's Game Theory, Part I. Below is an exercise on page 6: The Thirty-one Game. (Geoffrey Mott-Smith (1954)) From a deck of cards, take the Ace, 2,3,4,5, and 6 of each suit. These 24 cards are laid out face up on a table. The players alternate turning over cards and the sum of the turned over cards is computed as play progresses. Each Ace counts as one. The player who first makes the sum go above 31 loses. (The following words are left out.) (a) (omitted) (b) Nevertheless, the first player can win with optimal play. How? Here is the solution for question (b): (In the text below, a target position is a P-position, a position that are winning for the previous player. On that position, the next player has no way to win if the previous player uses the optimal strategy.) Start with 5. If your opponent chooses 5 to get in the target series, you choose 2, and repeat 2 every time he chooses 5. When the sum is 26, it is his turn and there are no 5's left, so you will win. But if he ever departs from the target series, you can enter the series and win. I do not quite understand the solution. The game is easy when the opponent chooses only 2 or 5. However, if the opponent departs from the target series, I think that it may go wrong. Let's consider the example below: number 5 3 4 3 4 3 4 5 player 1 2 1 2 1 2 1 2 The first player chooses 5 initially, and then the second player chooses 3. In order to enter the series, the first player chooses 4 so that 3 + 4 = 7. However, in the last step, the second player chooses 5, making the sum 31, and thus the first player loses. I believe that I must have misunderstood the solution. Please point out where I've made a mistake, and give me a detailed description and explanation on the optimal play for the first player. Thanks in advance.
The main thing to note here is that this is analogous to the game where one has as many of each card as desired, rather than just four. In particular, it is easy to see that, in this modified game, the winning positions are exactly the positions where the sum is of the form $31-7n$ for some $n$. This is presumably what is meant by the "target series". Therefore, if you play $5$ and your opponent plays $3$, then your next move should be to play $2$, not $4$, since $2$ brings the sum of all the flipped cards to $10=31-7\cdot 3$. That is, the strategy is as follows: On the first move play $5$. As long as your opponent continues to choose $5$ on their move, play $2$. Once they deviate, make a move that brings you to a number of the form $31-7n$ and end your turn on such numbers for all subsequent moves. I think the misunderstanding is in what it means to enter the "target series". In particular, you seem to have understood this as meaning that a player should always make sure that the sum of their move and their opponent's last move is equal to $7$. While it is true that this will happen once you are in the target series, in order to move from not being in the series to being in the series, some other sum is desired.
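To see the target series fall out mechanically, here is a small retrograde-analysis sketch (my own addition) of the modified game in which every card value is available without limit:

    TARGET = 31
    win = {}             # win[s]: can the player to move from running total s force a win?
    for s in range(TARGET, -1, -1):
        moves = [s + c for c in range(1, 7) if s + c <= TARGET]
        win[s] = any(not win[t] for t in moves)   # no legal move => you must bust => lose

    print([s for s in range(TARGET + 1) if not win[s]])
    # [3, 10, 17, 24, 31] -- exactly the totals of the form 31 - 7n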
{ "language": "en", "url": "https://math.stackexchange.com/questions/1859965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove That If $(a + b)^2 + (b + c)^2 + (c + d)^2 = 4(ab + bc + cd)$ Then $a=b=c=d$ If the following equation holds $$(a + b)^2 + (b + c)^2 + (c + d)^2 = 4(ab + bc + cd)$$ Prove that $a$,$b$,$c$,$d$ are all the same. What I did is I let $a$,$b$,$c$,$d$ all equal one number. Then I substituted and expanded. I'm sort of proud of myself (first proof I've done). I'm wondering, is there another way? (I'm teaching myself maths and I'm only a humble precalc student)
Consider the following steps $$\begin{align} (a + b)^2 + (b + c)^2 + (c + d)^2 &= 4(ab + bc + cd) \\ \left[ (a + b)^2-4ab \right] + \left[ (b + c)^2-4bc \right] + \left[ (c + d)^2-4cd \right] &=0 \\ \left[ a^2+b^2+2ab-4ab \right] + \left[ b^2+c^2+2bc-4bc \right] + \left[ c^2+d^2+2cd-4cd \right] &=0 \\ \left[ a^2+b^2-2ab \right] + \left[ b^2+c^2-2bc \right] + \left[ c^2+d^2-2cd \right] &=0 \\ (a-b)^2 + (b-c)^2 + (c-d)^2 &=0 \end{align}$$ and the sum of three nonnegative numbers is zero if and only if they are all zero. So you will get $$a=b=c=d$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof of the square root inequality $2\sqrt{n+1}-2\sqrt{n}<\frac{1}{\sqrt{n}}<2\sqrt{n}-2\sqrt{n-1}$ I stumbled on the following inequality: For all $n\geq 1,$ $$2\sqrt{n+1}-2\sqrt{n}<\frac{1}{\sqrt{n}}<2\sqrt{n}-2\sqrt{n-1}.$$ However I cannot find the proof of this anywhere. Any ideas how to proceed? Edit: I posted a follow-up question about generalizations of this inequality here: Square root inequality revisited
\begin{align*} 2\sqrt{n+1}-2\sqrt{n} &= 2\frac{(\sqrt{n+1}-\sqrt{n})(\sqrt{n+1}+\sqrt{n})}{(\sqrt{n+1}+\sqrt{n})} \\ &= 2\frac{1}{(\sqrt{n+1}+\sqrt{n})} \\ &< \frac{2}{2\sqrt{n}} \text{ since } \sqrt{n+1} > \sqrt{n}\\ &=\frac{1}{\sqrt{n}} \end{align*} Similar proof for the other inequality.
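A quick numeric sanity check of both bounds (added as a sketch):

    for n in [1, 2, 10, 100, 10**6]:
        lower = 2 * ((n + 1)**0.5 - n**0.5)
        upper = 2 * (n**0.5 - (n - 1)**0.5)
        assert lower < n**-0.5 < upper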
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
If $U$ is a vector subspace of a Hilbert space $H$, then each $x∈H$ acts on $U$ as a bounded linear function $〈x〉$. Is $x↦〈x〉$ injective? If $H$ is a $\mathbb R$-Hilbert space, then the duality pairing $$\langle\;\cdot\;,\;\cdot\;\rangle_{H,\:H'}:H\times H'\;,\;\;\;(x,\Phi)\mapsto\Phi(x)$$ can be considered as being a mapping $H\times H\to\mathbb R$ which is identical to the inner product $\langle\;\cdot\;,\;\cdot\;\rangle_H$ in $H$ (by Riesz's representation theorem). If $U$ is a vector subspace of $H$, then each $x\in H$ acts on $U$ in a natural way as a bounded linear functional on $U$ via $$\langle x\rangle_{U'}:=\left.\langle\;\cdot\;,x\rangle_{H,\:H'}\right|_{U}\in U'\;.\tag 1$$ Which assumptions on $U$ do we need, if we want that $$H\to U'\;,\;\;\;x\mapsto\langle x\rangle_{U'}\tag 2$$ is injective? Obviously, if $x,y\in H$ with $\langle x\rangle_{U'}=\langle y\rangle_{U'}$, then $$\langle u,x-y\rangle_H=0\;\;\;\text{for all }u\in U\;.\tag 3$$ If $U$ were dense in $H$, we would find some $(u_n)_{n\in\mathbb N}\subseteq U$ with $$0=\langle u_n,x-y\rangle_H\stackrel{n\to\infty}\to\langle x-y,x-y\rangle_H\;,\tag 4$$ i.e. $x=y$. So, do we need density of $U$ in $H$, if we want $(2)$ to be injective? As a second question: I've seen that $\langle x\rangle_{U'}(u)$ for $x\in H$ and $u\in U$ is usually denoted by $\langle u,x\rangle_{U,\:U'}$, but isn't that a misleading notation (because $\langle\;\cdot\;,\;\cdot\;\rangle_{U,\:U'}$ should denote the duality pairing between $U$ and $U'$)?
For the map $\Phi \colon H \to U'$, $x \mapsto \langle \cdot, x \rangle$ we have $\ker \Phi = U^\perp$. Hence $\Phi$ is injective if and only if $U^\perp = 0$. Because $H = U^\perp \oplus \overline{U}$ we have $U^\perp = 0$ if and only if $\overline{U} = H$, i.e. if $U$ is dense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Switching limits of integration The solution in my textbook wrote $$\int_{\alpha \epsilon}^{\alpha N} \frac{f(u)}{u} \, du-\int_{\beta \epsilon}^{\beta N} \frac{f(u)}{u} \, du = \int_{\alpha \epsilon}^{\beta \epsilon} \frac{f(u)}{u} \, du-\int_{\alpha N}^{\beta N} \frac{f(u)}{u} \, du.$$ How can the limits of integration be switched like that?
If you write the equation with sums instead of differences, it reads: $$ \int_{\alpha \epsilon}^{\alpha N} \dfrac{f(u)}{u}du +\int_{\alpha N}^{\beta N} \dfrac{f(u)}{u}du = \int_{\alpha \epsilon}^{\beta \epsilon} \dfrac{f(u)}{u}du +\int_{\beta \epsilon}^{\beta N} \dfrac{f(u)}{u}du $$ Now if you use the additive rule for integrals over adjacent intervals, you see that each of these is equal to $$ \int_{\alpha \epsilon}^{\beta N} \dfrac{f(u)}{u}du $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability problem with a die I've been practicing probability problems lately and I came to this problem A number is formed in the following way. You throw a six-sided die until you get a 6 or until you have thrown it three times at the most. A sequence of dice throws forms either a one, two or three-digit number. How many distinct numbers can be formed as a result of this experiment? I thought about solving the problem this way: First, if the die is thrown and it lands on a 6, that's the first number. The other numbers are: 16 26 36 46 56. Now the next numbers are: 123, 124, 125... and so on. Is this right, and if it is, how do you approach this problem faster? It would take too long to solve it this way.
For case 3: Not $6$, Not $6$, Any value. (Then use the multiplication principle: $5\times 5\times 6 = 150$ three-digit numbers, plus the $5$ two-digit numbers and the single one-digit number $6$, giving $156$ distinct numbers in total.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A Riemannian manifold with constant sectional curvature is Einstein. A Riemannian manifold with constant sectional curvature is Einstein. Why? Is the converse true?
By definition, a Riemannian manifold has constant sectional curvature if the sectional curvature $K$ is a constant that is independent of the point and $2$-plane chosen. If $R$ denotes the covariant curvature tensor and $g$ is the metric then, as a consequence of the definition of $K$, the components satisfy the relation $$R_{ljhk} = K(g_{lh}g_{jk}-g_{lk}g_{jh}).$$ Multiplication with the contravariant metric tensor $g^{jk}$ yields $$R_{lh} = K(ng_{lh} - g_{lh}) = K(n-1)g_{lh},$$ from which we conclude that our manifold is Einstein. For a counterexample of the converse, note that $\mathbb{C}P^n$ is Einstein but its sectional curvature is not constant (except for the sphere $n=1$). However I believe that the converse is true for manifolds of dimension $\leq 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why must $|z|\gt 1$ be the necessary condition Question:- If $\left|z+\dfrac{1}{z} \right|=a$ where $z$ is a complex number and $a\gt 0$, find the greatest value of $|z|$. My solution:- From the triangle inequality we have $$|z|-\left|\dfrac{1}{z}\right|\le\left|z+\dfrac{1}{z} \right|\le|z|+\left|\dfrac{1}{z}\right| \implies |z|-\left|\dfrac{1}{z}\right|\le a\le|z|+\left|\dfrac{1}{z}\right|$$ Now on solving the inequalities separately, we get the following $$\begin{equation}\tag{1}|z|-\left|\dfrac{1}{z}\right|\le a \implies \dfrac{a-\sqrt{a^2+4}}{2}\le|z|\le\dfrac{a+\sqrt{a^2+4}}{2}\end{equation}$$ $$\begin{equation}\tag{2}|z|+\left|\dfrac{1}{z}\right|\ge a \implies |z| \in \mathbb{R}-\left(\dfrac{a-\sqrt{a^2-4}}{2},\dfrac{a+\sqrt{a^2-4}}{2} \right)\end{equation}$$ From $(1)$ and $(2)$, we get $$\boxed{|z|_{max}=\dfrac{a+\sqrt{a^2+4}}{2}}$$ My problem with the question:- The book from which I am solving tells us to take note of the following point for the question. $|z_1+z_2|\ge |z_1|-|z_2|$ and $|z_1+z_2|\ge |z_2|-|z_1|$. Here we have taken $|z|-\dfrac{1}{|z|}$ since we have to find the greatest value of $|z|$ and hence we take the case $|z| \gt 1$. Now all this does is make the bound tighter, nothing else, so why the need for the specific condition $|z|\gt 1$, and why does only $|z|-\dfrac{1}{|z|}$ provide the maximum value? From this, what I mean to ask is: how can we tell, even before solving for $|z|$, that $|z|-\dfrac{1}{|z|}$ provides the maximum value?
why the need of the specific condition $|z|\gt 1$ $$a=\left|z+\frac 1z\right|\ge|z|-\frac{1}{|z|}\tag1$$ If $0\lt |z|\le 1$, then $-\frac{1}{|z|}\le -1$, so $$|z|-\frac{1}{|z|}\le 1-1=0\tag2$$ From $(1)$, we have $$a=\left|z+\frac 1z\right|\ge |z|-\frac{1}{|z|}=(\text{non-positive})$$ which is true since $a\gt 0$, so in this case the maximum value of $|z|$ is $1$. Now, of course, we are interested in the case when $|z|\gt 1$. (so, I think that the book does not say that $|z|\gt 1$ is the necessary condition, and that the book implies that the case $0\lt |z|\le 1$ is trivial.) why, only $|z|-\dfrac{1}{|z|}$ provides the maximum value. If we take $$a\ge \frac{1}{|z|}-|z|$$ we have $$|z|^2+a|z|-1\ge 0$$ which is not useful to find the maximum value of $|z|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does every locally compact group $G$ have a nontrivial homomorphism into $\mathbb{R}$? Does every locally compact group (second countable and Hausdorff) topological group $G$ that is not compact have a nontrivial continuous homomorphism into $\mathbb{R}$? Obviously for compact groups it is not possible since continuous functions send compact sets to compact sets, and there is only one (trivial) compact subgroup of $\mathbb{R}$.
For a connected example one can take $G=\mathrm{SL}_2(\mathbb{R})$ (or any connected simple Lie group). If $f:G\to \mathbb{R}$ is a continuous homomorphism then $f(G)$ is a connected simple subgroup of $\mathbb{R}$, hence trivial. EDIT: I see now someone already mentioned this in the comments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Closure of an operator I am wondering what the closure of the domain of the operator $A_0:D(A_0)(\subset H)\to H$ in $H=L^2(0,1)$ is, where $$A_0 f= f^{(4)}-f^{(6)}$$ $$D(A_0)=\big\{ f\in H^6(0,1)\cap H_0^3(0,1) \;\big|\; f^{(3)}(1)=f^{(4)}(1)=f^{(5)}(1)=0\big\}$$
The closure of the domain in $L^2$ is simply $L^2$: Obviously it holds that $C_0^\infty(0,1)\subset D(A_0)$. The set of smooth functions with compact support is dense in $L^2(0,1)$, hence its closure is $L^2(0,1)$. This implies that the closure of $D(A_0)$ is $L^2(0,1)$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Consider the function $f(x) = x^2 + 4/x^2$ a) Find $f^\prime(x)$ b) Find the values of $x$ at which the tangent to the curve is horizontal. So far I have this... a) $f^\prime(x) = 2x + \dfrac{(0)(x^2)-(4)(2x)}{(x^2)^2}$ $= 2x - \dfrac{8x}{x^4}$ $= \dfrac{2x^5 - 8x}{x^4}$ $= \dfrac{2(x^4 - 4)}{x^3}$ I believe I derived this correctly. But I am not sure how to do part b). I know the horizontal slope $= 0$, but when I solve it I get $2(x^4 - 4) = 0$ and don't know how to go from there.
Your derivative is correct. You could have saved yourself some work by using the power rule. \begin{align*} f(x) & = x^2 + \frac{4}{x^2}\\ & = x^2 + 4x^{-2} \end{align*} Using the power rule yields \begin{align*} f'(x) & = 2x^1 - 2 \cdot 4x^{-3}\\ & = 2x - 8x^{-3}\\ & = 2x - \frac{8}{x^3} \end{align*} which is equivalent to your expression $$f'(x) = \frac{2(x^4 - 4)}{x^3}$$ To find the values of $x$ at which the tangent line is horizontal, set $f'(x) = 0$, which yields \begin{align*} \frac{2(x^4 - 4)}{x^3} & = 0\\ 2(x^4 - 4) & = 0\\ x^4 - 4 & = 0\\ (x^2)^2 - 2^2 & = 0\\ (x^2 + 2)(x^2 - 2) & = 0\\ \end{align*} Setting each factor equal to zero yields \begin{align*} x^2 + 2 & = 0 & x^2 - 2 & = 0\\ x^2 & = -2 & x^2 & = 2\\ x & = \pm i\sqrt{2} & x & = \pm\sqrt{2} \end{align*} Since there can only be a tangent at real values of $x$, we conclude that the only horizontal tangents of the graph occur at $x = \pm\sqrt{2}$.
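For what it's worth, a two-line sympy check of this computation (my own sketch):

    import sympy as sp

    x = sp.symbols('x')
    fprime = sp.diff(x**2 + 4/x**2, x)
    print(fprime)               # 2*x - 8/x**3
    print(sp.solve(fprime, x))  # the real roots ±sqrt(2), plus ±sqrt(2)*I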
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bending a line segment REVISED QUESTION With the help of the existing answers I have been able to put together this clearer animation, and I asked this question to discover the shape is called a cochleoid. What I am really trying to find out at this point is the following: It seems like this curve should be perfectly smooth through the point at $(0, 1)$, but because the curve is based on $\sin(x)/x$ it is technically undefined at this point, and if I were programming a function to evaluate this curve I would have to add a special case for values near this singularity. I'm curious if there is any way to re-phrase this equation to remove the singularity and make evaluation of it near zero more numerically stable. --- Original text of question preserved below ---------------------------------------------------- Imagine I have a unit line segment going from $(0, 0)$ to $(0, 1)$. Over time, I want to bend this segment such that it always forms a circular arc (the initial configuration can be considered an arc on a circle with infinite radius). What shape will the end point trace out, and how can I get the coordinates of that point if I am given as input the angle that should be spanned by the arc length? Edit: Thanks to John Bales below, this animation accurately depicts what I am trying to describe, although I'm not sure how to render the bent segment itself, which would always connect back to the origin. Is this shape a cardioid? Is there a way to rephrase the equation so it doesn't become undefined when the input is zero?
The angle in radians subtended by an arc of length $s$ on the circumference of a circle of radius $r$ is given by \begin{equation} \theta=\dfrac{s}{r} \end{equation} In this instance $s=1$ and $r\ge\tfrac{1}{\pi}$. The circle has center $(r,0)$ and radius $r$. The arc $s$ of length $1$ extends upward along the circumference with one end fixed at $(0,0)$ and the other end ends at the point \begin{equation} (x,y)=\left(r-r\cos\left(\frac{1}{r}\right),r\sin\left(\frac{1}{r}\right)\right) \end{equation} These are parametric equations of the curve. Here is a desmos.com animation of the curve along which the point moves. https://www.desmos.com/calculator/row6dlgqom This curve has the following polar equation: \begin{equation} r=\dfrac{\sin\left(\frac{\pi}{2}-\theta\right)}{\frac{\pi}{2}-\theta} \end{equation} Here is a desmos.com graph of the polar curve with the particle moving along it. You want just the portion of the curve in the first quadrant. https://www.desmos.com/calculator/rbusr85zsc I think it is interesting that this turns out to be a $\dfrac{\sin x}{x}$ curve but in the polar coordinate system. If instead of the interval from $(0,0)$ to $(0,1)$ we bend the unit interval on the horizontal axis along circles with centers $\left(0,\frac{1}{2\theta}\right)$ and radii $\frac{1}{2\theta}$ as $0\le\theta\le\frac{\pi}{2}$ then the ends of the unit arcs along the circles from the origin will trace out the portion of the polar curve \begin{equation} r=\dfrac{\sin\theta}{\theta} \end{equation} in quadrant I. The following is a link to a GeoGebra animation illustrating this variation of the problem. https://www.geogebra.org/m/THDbt3Ad
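Regarding the revised question about the singularity: one practical option, sketched below (the function name and cutoff are my own choices), is to write the endpoint as $x=(1-\cos t)/t$, $y=\sin(t)/t$ with $t=1/r$ the subtended angle, and switch to the Taylor expansions of these two smooth functions near $t=0$, so the evaluation stays numerically stable:

    import math

    def endpoint(theta):
        # Free end of a unit-length segment bent into an arc subtending
        # `theta` radians (theta = 1/r in the parametrization above).
        # sin(t)/t and (1 - cos t)/t extend smoothly through t = 0, so near
        # zero we evaluate their Taylor expansions instead of dividing by t.
        t = float(theta)
        if abs(t) < 1e-4:
            y = 1.0 - t*t/6.0          # sin(t)/t    = 1 - t^2/6   + O(t^4)
            x = t/2.0 - t**3/24.0      # (1-cos t)/t = t/2 - t^3/24 + O(t^5)
        else:
            y = math.sin(t) / t
            x = (1.0 - math.cos(t)) / t
        return x, y

    print(endpoint(0.0))        # (0.0, 1.0): the unbent segment
    print(endpoint(math.pi))    # (2/pi, ~0): the segment bent into a half circle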
{ "language": "en", "url": "https://math.stackexchange.com/questions/1860923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can $\frac{\sqrt{3}}{\sin20^{\circ}}-\frac{1}{\cos20^{\circ}}$ have two values? I would like to confirm a solution. The question goes as: Show that $$\frac{\sqrt{3}}{\sin20^{\circ}}-\frac{1}{\cos20^{\circ}}=4$$ Firstly I combined the two terms to form something like: $$\dfrac{\sqrt{3}\cos20^{\circ} - \sin20^{\circ}}{\sin20^{\circ}\cos20^{\circ}}.$$ Clearly, the numerator is of the form $a\cos\theta+b\sin\theta$, and can thus be expressed as $R\sin(\theta+\alpha)$, where $R=\sqrt{a^2+b^2}$, and $\alpha=\arctan(\dfrac{a}{b})$. Following this method, I obtain something like this: $$\dfrac{2\sin(20^{\circ}-60^{\circ})}{\sin20^{\circ}\cos20^{\circ}},$$ which results in the expression to be equal to $-4$. But, we took $R$ to be the principal square root, which was $2$. If we take it to be the negative root, $-2$, it follows that the expression is equal to $4$. So, can this expression have two values? Edit: I would also like to mention how I computed the numerator: Numerator: $\sqrt{3}\cos20^{\circ} - \sin20^{\circ}$. Now, this can be represented as $R\sin(\theta+\alpha)$. Computation of $R$: $R=\sqrt{a^2+b^2}\implies R=\sqrt{(3+1)}=2$ Computation of $\alpha$: $\tan\alpha=\dfrac{a}{b}\implies \tan\alpha=\dfrac{\sqrt3}{-1}\implies \alpha=-60^{\circ}$, because $\tan(-x)=-\tan x$, and in this case $\tan\alpha=-\sqrt3$, and $-\tan(60^{\circ})=-\sqrt3$, so $\alpha=-60^{\circ}$. Hence, we conclude that numerator:$$2\sin(20+(-60))=2\sin(-40)=-2\sin(40^{\circ})$$
When $R=-2$: $-2\sin\theta=\sqrt3\iff\sin\theta=-\frac{\sqrt3}{2}$, and $-2\cos\theta=1\iff\cos\theta=-\frac{1}{2}$. Observe that $\theta$ then lies in the third quadrant: $\theta=(2n+1)180^\circ+60^\circ$ where $n$ is any integer. Hence $$\sin\{20^\circ-((2n+1)180^\circ+60^\circ)\}=-\sin220^\circ$$ and $$\sin220^\circ=\sin(180+40)^\circ=-\sin40^\circ,$$ so the choice $R=-2$ leads to the same value $4$ of the expression, not a second value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equivalence of surjectivity and injectivity for linear operators on finite dimensional vector spaces I'd like to show that for a linear operator $T$ and finite-dimensional vector space $V$ such that $T:V\rightarrow V$, $T$'s injectivity is equivalent to its surjectivity. I started by trying to show $T$'s surjectivity implies its injectivity by Surjectivity of $T \leftrightarrow \forall w \in V, \exists v \in V$ s.t. $ Tv = w.$ Let $v = v^ie_i$ for some basis $\{e_i\}$ of $V$. $w = v^i(T e_i) = v^ie'_i$. Surjectivity of $T$ now implies that the $\{ e'_i\}$ are another (linearly independent) set of basis vectors. Linear independence of $\{e'_i\}$ implies that $i\neq j \rightarrow e_i'-e'_j \neq 0$ or $ e_i'-e'_j = 0 \rightarrow i = j$ or $Te_i = Te_j \rightarrow e_i = e_j \leftrightarrow T$ is injective. Firstly, is this reasoning sound? Secondly, how would I go about showing the opposite statement, that $T$'s injectivity implies its surjectivity?
No such statement can be true for infinite-dimensional vector spaces. For example, let $V$ be a vector space with a countable basis $\left\{e_n\right\}_{n\in{\mathbb N}}$, then $$Te_i=e_{i+1}\ \forall i\in{\mathbb N}$$ defines an injective but not surjective operator, and $$Te_0=e_0, Te_i=e_{i-1}\ \forall i\ge 1$$ defines a surjective but not injective operator. However the equivalence is true for finite-dimensional vector spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Show that there is a subsequence of $(f_n)_n$ that converges to $f$ almost everywhere. Let $(X,\mathcal{B}, \mu)$ be a measure space and assume the sequence $(f_n)_n$ converges to $f$ in $L^p(\mu)$, where $1\leq p<\infty$. Show that there is a subsequence of $(f_n)_n$ that converges to $f$ almost everywhere. Isn't it true for every subsequence of $(f_n)_n$? Attempt: Since $f_n\to f$ in $L^p$, for any $\epsilon>0$, there exists $N\in\mathbb{N}$ such that for all $n,m\geq N$, $\|f_m-f_n\|_p<\epsilon /2$ and $\|f_n-f\|_p<\epsilon /2$. Let $(f_{n_k})_k$ be any subsequence of $(f_n)_n$. Then $$\|f_{n_k}-f\|_p\leq \|f_{n_k}-f_n\|_p+\|f_n-f\|_p< \epsilon /2+\epsilon /2=\epsilon.$$ I don't know what is wrong here. Can anyone check my proof? Thanks!
The standard counterexample to your claim that the pointwise convergence holds for every subsequence is the following. Set $$ A_{n,m}:=[(n-1)/m, n/m] $$ Then $$1_{A_{1,1}}, 1_{A_{1,2}}, 1_{A_{2,2}}, 1_{A_{1,3}}, \dots$$ converges in $L_p[0,1]$ to the zero function. But it does not converge pointwise to the zero function (in fact it diverges at every point).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Show that $\text{rank}(Df)(A) = \frac{n(n+1)}{2}$ for all $A$ such that $A^TA = I_n$ We identify $\mathbb R^{n \times n}$ with $\mathbb R^{n^2}$ and define $f:\mathbb R^{n^2} \to \mathbb R^{n^2}, A \mapsto A^TA$. Show that $\text{rank}(Df)(A) = \frac{n(n+1)}{2}$ for all $A$ such that $A^TA = I_n$. The solution by Omnomnomnom written in a more accessible (for me) way: We firstly note that $$f(A+H)=(A + H)^T(A + H) = A^TA + H^TA + A^TH +H^TH$$ Consider the following homomorphism $\Phi:\mathbb R^{n^2} \to \mathbb R^{n^2}, H \mapsto H^TA + A^TH$ Thus the equation above can be written as: $$f(A+H)= f(A) + \Phi(H) +H^TH$$ Remember that according to the definition $(Df)(A)$ is a homomorphism from $\mathbb R^{n^2}$ to $\mathbb R^{n^2}$ such that $$\lim_{H\to 0, H \ne 0} \frac{f(A+H)-f(A)-(Df)(A)H}{\|H\|}=0$$ Because $\lim_{H\to 0, H \ne 0}\|\frac{H^TH}{\|H\|}\| \le \lim_{H\to 0, H \ne 0}\frac{\|H^T\|\|H\|}{\|H\|}=\lim_{H\to 0, H \ne 0}\|H^T\|=0$, it follows that: $$\lim_{H\to 0, H \ne 0} \frac{f(A+H)-f(A)-\Phi (H)}{\|H\|}=\lim_{H\to 0, H \ne 0} \frac{H^TH}{\|H\|}=0$$ and consequently $Df(A)=\Phi$. Now we note that $\{\Phi(H):H \in \mathbb R^{n^2}\}=\{X\in \mathbb R^{n^2} : X^T = X\}$, because: 1) $(H^TA + A^TH)^T = A^TH^{TT} + H^TA^{TT} = H^TA + A^TH$ 2) if $S$ is symmetric, then $\Phi(\frac{1}{2}AS)=\frac{1}{2}(SA^TA+A^TAS)=S$ Now it follows: $$\frac{n(n+1)}{2}=\dim \{X\in \mathbb R^{n^2} : X^T = X\}=\dim\{\Phi(H):H \in \mathbb R^{n^2}\}=\text{rank}\Phi=\text{rank}(Df)(A)$$
First, let's compute the derivative as it is defined here, noting that this coincides with the usual definition except that $Df(A)$, rather than producing an explicit $n^2 \times n^2$ matrix, produces a linear map from $\Bbb R^{n \times n}$ to $\Bbb R^{n \times n}$. In particular, we compute that $$ (A + H)^T(A + H) = A^TA + H^TA + A^TH + o(\|H\|) $$ So, the map we are considering (at any fixed $A \in \Bbb R^{n \times n}$) is the map $H \mapsto H^TA + A^TH$. Explanatory note: This map is a homomorphism from $\mathbb R^{n\times n}$ to $\mathbb R^{n\times n}$ which is representable by a matrix in $\mathbb R^{n^2 \times n^2}$, whose rank we are trying to find. To go from this version of the derivative to the usual matrix, it suffices to apply the vectorization operator. Note, however, that the output of this map is always symmetric. In particular, $$ (H^TA + A^TH)^T = A^TH^{TT} + H^TA^{TT} = H^TA + A^TH $$ Thus, the image of this map is necessarily a subset of the space of symmetric matrices, which is a space of dimension $n(n+1)/2$. Thus, the rank of $Df(A)$ will be at most $n(n+1)/2$. In fact, there is no better upper bound that applies to all $A$. For example, if we take the derivative at $A = I$, then we find that $$ [Df(I)](H) = H + H^T $$ which is onto the space of symmetric matrices, and therefore has rank exactly equal to $n(n+1)/2$. Proof that $Df(A)$ will have the same rank whenever $A^TA = I$: We wish to show, in other words, that if $A^TA = I$, then the map $H \mapsto H^TA + A^TH$ is onto. To that effect, it suffices to break this map down into the composition of two maps: $$ T_1(H) = A^TH\\ T_2(H) = H + H^T $$ It is clear that $T_1$ is an isomorphism, with inverse $H \mapsto AH$ (recall that $AA^T = A^TA = I$). We note moreover from our analysis above that $T_2$ is a map with rank $n(n+1)/2$. It follows that the map $$ [Df(A)](H) = [T_2 \circ T_1](H) $$ must also be a linear transformation of rank $n(n+1)/2$, as desired.
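A numerical sanity check of the rank claim (my own sketch): assemble the $n^2\times n^2$ matrix of $H\mapsto H^TA+A^TH$ at a random orthogonal $A$ and compute its rank.

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal: A^T A = I

    # Build the matrix of H -> H^T A + A^T H column by column.
    J = np.zeros((n * n, n * n))
    for k in range(n * n):
        H = np.zeros((n, n))
        H.flat[k] = 1.0
        J[:, k] = (H.T @ A + A.T @ H).ravel()

    print(np.linalg.matrix_rank(J), n * (n + 1) // 2)  # both are 10 for n = 4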
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding the rank of an endomorphism Recently I tried to prove a statement I know should be easy, but for some reason I just can't prove it. The statement is: given a $9 \times 9$ matrix $N$ such that $N^3 = 0$ and $rk(N^2) = 3$, prove that $rk(N)=6$. I tried to prove this by using the rank-nullity formula, but I got stuck. Help would be greatly appreciated!
That the rank of $N^2$ is $3$ means that the image of $N^2$ is three dimensional. That $N^3$ is $0$ means that $N$ restricted to the image of $N^2$ is $0$. Thus the kernel of $N$ restricted to the image of $N^2$ has dimension $3$. This implies that the kernel of $N$ has dimension at least $3$, it also implies that the kernel of $N$ restricted to the image of $N$ has dimension at least $3$. From the fact that the rank of $N^2$ is $3$ we know that the kernel of $N^2$ has dimension $6$. Yet the dimension of the kernel of $N^2$ is the dimension of the kernel of $N$ plus the dimension of the kernel of $N$ restricted to the image of $N$. Thus they are both $3$ and the claim follows.
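A concrete instance of such an $N$ (added as a sketch): three $3\times3$ nilpotent Jordan blocks stacked into a $9\times9$ matrix have exactly these properties.

    import numpy as np

    J = np.zeros((3, 3))
    J[0, 1] = J[1, 2] = 1                      # a 3x3 nilpotent Jordan block
    N = np.kron(np.eye(3), J)                  # direct sum of three blocks: 9x9

    print(np.allclose(np.linalg.matrix_power(N, 3), 0))   # True: N^3 = 0
    print(np.linalg.matrix_rank(N @ N))                   # 3:    rk(N^2) = 3
    print(np.linalg.matrix_rank(N))                       # 6:    rk(N)   = 6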
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimum value of algebraic expression. If $0\leq x_{i}\leq 1\;\forall i\in \left\{1,2,3,4,5,6,7,8,9,10\right\}$ and $\displaystyle \sum^{10}_{i=1} x^2_{i}=9$, find the maximum and minimum values of $\displaystyle \sum^{10}_{i=1} x_{i}$. $\bf{My\; Try::}$ Using the Cauchy-Schwarz Inequality, $$\left(x^2_{1}+x^2_{2}+.......+x^2_{10}\right)\cdot (1^2+1^2+....1^2)\geq \left(x_{1}+x_{2}+....+x_{10}\right)^2$$ So we get $$\left(x_{1}+x_{2}+....+x_{10}\right)\leq \sqrt{90}$$ Now how can I calculate its minimum value? Help required, thanks.
You have already got the right inequality for the maximum, all you need to add is that equality can be achieved when $x_i = 3/\sqrt{10}$. For the minimum, note $x_i\in [0,1]\implies x_i^2\leqslant x_i\implies 9=\sum x_i^2\leqslant \sum x_i$ Equality is possible here when one of the $x_i$ is $0$ and all others $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Does subtracting a positive semi-definite diagonal matrix from a Hurwitz matrix keep it Hurwitz? I am having a linear algebra problem here. I will be grateful if someone can help me. Let $A\in \mathbb{R}^{n\times n}$ be Hurwitz and diagonalizable, and let $B$ be a diagonal matrix whose diagonal elements are non-negative. Is $A-B$ still Hurwitz? I know that if $B=cI$, where $c$ is a positive scalar, $A-B$ is a Hurwitz matrix. However, I am not sure whether $A-B$ is still a Hurwitz matrix when some diagonal elements in $B$ are zero and the others are positive. Are there any general results on this topic? Thanks in advance!
If $A$ is not only Hurwitz, but also symmetric, then it is negative definite and, thus, $-A$ is positive definite. Let $$D := \mbox{diag} (d_1, d_2, \dots, d_n)$$ where $d_i \geq 0$, be a positive semidefinite diagonal matrix. Hence, $-(A-D) = -A + D \succ 0$ and, thus, $A - D \prec 0$. As $A-D$ is negative definite, it is also Hurwitz. We conclude that Hurwitz-ness is preserved when $A$ is symmetric.
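A quick numerical illustration of the symmetric case (my own sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    A = -(M @ M.T) - 0.1 * np.eye(5)        # symmetric negative definite => Hurwitz
    D = np.diag(rng.uniform(0, 2, size=5))  # nonnegative diagonal, i.e. PSD

    print(np.linalg.eigvalsh(A).max() < 0)      # True
    print(np.linalg.eigvalsh(A - D).max() < 0)  # True: A - D is still Hurwitz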
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Russell's paradox from Cantor's I learnt how Russell's paradox can be derived from Cantor's theorem here, but also from S C Kleene's Introduction to Metamathematics, page 38. In his book, Kleene says that if $M$ is the set of all sets, then $\mathcal P(M)=M$, but since this implies $\mathcal P(M)$ has the same cardinality as $M$, there exists a subset $T$ of $M$ which is not an element of the power set $\mathcal P(M)$. This $T$ is the desired set for Russell's paradox, i.e., it is the set of all sets which are not members of themselves. I can't understand how $T$ is the desired set for Russell's paradox. Also, how is Kleene's argument similar to the quora answer?
Cantor's theorem shows that for any set $X$ and any function $f:X\to \mathcal{P}(X)$, there is some subset $T\subseteq X$ that is not in the image of $f$. Specifically, $T=\{x\in X:x\not\in f(x)\}$. Kleene is saying that if you apply this theorem to the identity function $f:M\to\mathcal{P}(M)$, the counterexample $T$ you get is exactly Russell's set. Indeed, this is immediate from the definition of $T$ given above. The Quora answer is just carrying out the diagonal proof that $T$ is not in the image of $f$ in this particular example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Intuition behind proof of bounded convergence theorem in Stein-Shakarchi Theorem 1.4 (Bounded convergence theorem) Suppose that $\{f_n\}$ is a sequence of measurable functions that are all bounded by $M$, are supported on a set $E$ of finite measure, and $f_n(x) \to f(x)$ a.e. $x$ as $n \to \infty$. Then $f$ is measurable, bounded, supported on $E$ for a.e. $x$, and$$\int |f_n - f| \to 0 \text{ as } n \to \infty.$$Consequently,$$\int f_n \to \int f \text{ as } n \to \infty.$$ Proof. From the assumptions one sees at once that $f$ is bounded by $M$ almost everywhere and vanishes outside $E$, except possibly on a set of measure zero. Clearly, the triangle inequality for the integral implies that it suffices to prove that $\int |f_n - f| \to 0$ as $n$ tends to infinity. The proof is a reprise of the argument in Lemma 1.2. Given $\epsilon > 0$, we may find, by Egorov's theorem, a measurable subset $A_\epsilon$ of $E$ such that $m(E - A_\epsilon) \le \epsilon$ and $f_n \to f$ uniformly on $A_\epsilon$. Then, we know that for all sufficiently large $n$ we have $|f_n(x) - f(x)| \le \epsilon$ for all $x \in A_\epsilon$. Putting these facts together yields\begin{align*} \int |f_n(x) - f(x)|\,dx & \le \int_{A_\epsilon} |f_n(x) - f(x)|\,dx + \int_{E - A_\epsilon} |f_n(x) - f(x)|\,dx \\ & \le \epsilon m(E) + 2M\,m(E - A_\epsilon)\end{align*}for all large $n$. Since $\epsilon$ is arbitrary, the proof of the theorem is complete.$$\tag*{$\square$}$$ For reference, we include the statement of Lemma 1.2 here. Lemma 1.2 Let $f$ be a bounded function supported on a set $E$ of finite measure. If $\{\varphi_n\}_{n = 1}^\infty$ is any sequence of simple functions bounded by $M$, supported on $E$, and with $\varphi_n(x) \to f(x)$ for a.e. $x$, then: (i) The limit $\lim_{n \to \infty} \int \varphi_n$ exists. (ii) If $f = 0$ a.e., then the limit $\lim_{n \to \infty} \int \varphi_n$ equals $0$. My question is, could anybody supply me their intuitions behind the proof of the bounded convergence theorem here? What are the key steps I should distill the proof into so as to be able to recreate it from scratch?
To remember the proof, maybe it is best to keep a particular example in mind. Let $E = [0,1]$, the closed unit interval on the line. Let $f_n(x) = x^n$, which is bounded by $M=1$. Then $f_n \rightarrow 0$ almost everywhere on $E$ but not uniformly. But we can exclude the bits where uniform convergence fails (this is Egorov's theorem). In this particular case, we can take $A_\epsilon = [0, 1-\epsilon]$. Then $f_n \rightarrow 0$ uniformly on $A_\epsilon$, i.e. for large enough $n$ we have that $|f_n(x) - 0| < \epsilon$ on $A_\epsilon$. Now add up the two pieces and let $\epsilon$ get arbitrarily small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1861987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find the values of $b$ for which the equation $2\log_{\frac{1}{25}}(bx+28)=-\log_5(12-4x-x^2)$ has only one solution Find the values of 'b' for which the equation $$2\log_{\frac{1}{25}}(bx+28)=-\log_5(12-4x-x^2)$$ has only one solution. Converting to base $5$, this becomes $$\frac{-2}{2}\log_{5}(bx+28)=-\log_5(12-4x-x^2)$$ My try: After removing the logarithmic terms I get the quadratic $x^2+x(b+4)+16=0$. Putting the discriminant equal to $0$ I get $b\in\{4,-12\}$. But $-12$ cannot be a solution as it makes $12-4x-x^2$ negative, so I get $b=4$ as the only solution. But the answer given is $(-\infty,-14]\cup\{4\}\cup[14/3,\infty)$. I've no idea how. Help me please.
You have $$x^2+(4+b)x+16=0\tag1$$ This is correct. However, note that when we solve $$2\log_{\frac{1}{25}}(bx+28)=-\log_5(12-4x-x^2)$$ we have to have $$bx+28\gt 0\quad\text{and}\quad 12-4x-x^2\gt 0,$$ i.e. $$bx\gt -28\quad\text{and}\quad -6\lt x\lt 2\tag2$$ Now, from $(1)$, we have to have $(4+b)^2-4\cdot 16\geqslant 0\iff b\leqslant -12\quad\text{or}\quad b\geqslant 4$. Case 1 : $b\lt -14$ $$(2)\iff -6\lt x\lt -\frac{28}{b}$$ Let $f(x)=x^2+(4+b)x+16$. Then, since the equation has only one solution, we have to have $$f(-6)f\left(-\frac{28}{b}\right)\lt 0\iff b\lt -14$$ So, in this case, $b\lt -14$. Case 2 : $-14\leqslant b\leqslant -12$ or $4\leqslant b\lt \frac{14}{3}$ $$(2)\iff -6\lt x\lt 2$$ $b=4$ is sufficient, and $b=-12$ is not sufficient. For $b\not=4,-12$, $$f(-6)f(2)\lt 0\iff b\lt -14\quad\text{or}\quad b\gt \frac{14}{3}$$ So, in this case, $b=4$. Case 3 : $b\geqslant \frac{14}{3}$ $$(2)\iff -\frac{28}{b}\lt x\lt 2$$ $b=\frac{14}{3}$ is sufficient. For $b\gt\frac{14}{3}$, $$f\left(-\frac{28}{b}\right)f(2)\lt 0\iff b\gt \frac{14}{3}$$ So, in this case, $b\geqslant 14/3$. Therefore, the answer is $$\color{red}{(-\infty,-14)\cup\{4\}\cup\bigg[\frac{14}{3},\infty\bigg)}$$
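To double-check the case analysis numerically, here is a short Python sketch (mine). Note that at a root of $(1)$ we automatically get $bx+28=12-4x-x^2\gt 0$, so it suffices to count roots of the quadratic in the open interval $(-6,2)$.

```python
import math

def num_solutions(b, tol=1e-12):
    # roots of x^2 + (4+b)x + 16 = 0 lying in the open domain -6 < x < 2
    disc = (4 + b) ** 2 - 64
    if disc < -tol:
        return 0
    s = math.sqrt(max(disc, 0.0))
    roots = {(-(4 + b) + s) / 2, (-(4 + b) - s) / 2}  # a set merges a double root
    return sum(1 for x in roots if -6 < x < 2)

for b in [-20, -14, -13, -12, 0, 4, 14 / 3, 5, 10]:
    print(f"b = {b:>8.4f}  solutions: {num_solutions(b)}")
```

The output shows exactly one solution for $b\lt -14$, for $b=4$, and for $b\geqslant \frac{14}{3}$, and none otherwise, in agreement with the boxed answer.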
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Confusion about geometric interpretation of proof that $\mathbb R[X,Y,Z]/ \left\langle X^2+Y^2+Z^2 -1 \right\rangle $ is a UFD I'm working through a proof that $R=\mathbb R[X,Y,Z]/ \left\langle X^2+Y^2+Z^2 -1 \right\rangle $ is a UFD. The idea is to localize at $1-x$ and show the result is a UFD. Since $R$ is atomic as a quotient of a Noetherian ring, and $1-x$ is prime, Nagata's lemma will imply the result. Let $x,y,z$ be the images of $X,Y,Z$ in the quotient. By exactness of localization $R_{1-x}\cong \mathbb R[X,Y,Z]_{1-X}/ \left\langle X^2+Y^2+Z^2 -1 \right\rangle$. If I understand correctly, $\mathbb R[X,Y,Z]_{1-X}$ means we're deleting the plane $X=1$ from $\mathbb R^3$. Take $T=(1-X)^{-1}$ and note $\mathbb R[X,Y,Z]_{1-X}\cong \mathbb R[X,Y,Z,T]$. Descending to $R_{1-x}$ as the quotient above, some manipulations enable me to show $\mathbb R[x,y,z,t]\cong \mathbb R[ty,tz,t^{-1}]$, and since the latter is the localization at $t$ of a UFD, Nagata ends the proof. I would like to understand what exactly is going on geometrically here and I really have no clue where to start because already at the quotient $R_{1-x}\cong \mathbb R[X,Y,Z]_{1-X}/ \left\langle X^2+Y^2+Z^2 -1 \right\rangle$ I have no idea what I should be visualizing.
Given a noetherian integrally closed domain $A$ we have the equivalence $$ A \text{ is a UFD} \iff Cl(A)=0$$where $Cl(A)$ is the class group of $A$. In your case $\operatorname {Spec}(R)$ is smooth, so that $Cl(R)=Pic(R)$, the Picard group of $R$. So the problem boils down to proving that the algebraic Picard group $Pic(S^2)$ of a sphere is trivial. I don't know a way of showing that simpler than the calculation you made but intuitively this is not so surprising since all continuous (or differentiable) line bundles on the sphere are trivial because $H^1(S^2,\mathbb Z/2\mathbb Z)=0$. Contrastingly the ring $\mathcal O(S^1)=\mathbb R[X,Y]/\langle X^2+Y^2-1 \rangle$ is not a UFD since the circle $S^1$ possesses a non-trivial line-bundle, the notorious Möbius line bundle, which can be given an algebraic structure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral of the product $x^n e^x$ I would be very pleased if you could give me your opinion about this way of integrating the following expression. I think that it has no issues, but just wanted to confirm: $$ \int x^n e^x dx $$ $$ e^x = u $$ $$ e^x dx = du $$ $$ \int x^n e^x dx = \int \ln(u)^n du = nu(\ln(u) - 1) + C $$ $$ \int x^n e^xdx = ne^x (x-1) + C $$ Many thanks!
Here is one way to proceed. Note that $$\begin{align} \int x^ne^x\,dx&=\left.\left(\frac{d^n}{da^n}\int e^{ax}\,dx\right)\right|_{a=1}\\\\ &=\left.\left(\frac{d^n}{da^n}\frac{e^{ax}}{a}\right)\right|_{a=1}\\\\ &=\left.\left(\sum_{k=0}^n\binom{n}{k}\frac{d^k a^{-1}}{da^k}\frac{d^{n-k} e^{ax}}{da^{n-k}}\right)\right|_{a=1}\\\\ &=e^x\sum_{k=0}^n (-1)^k \left(\binom{n}{k}\,k!\right)x^{n-k}\\\\ &=e^x\left(x^n+\sum_{k=1}^n (-1)^k\left(n(n-1)(n-2)\cdots (n-k+1)\right)x^{n-k}\right) \end{align}$$ which gives the explicit form of the polynomial, $p(x)$, as discussed in the post by egreg.
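As a quick symbolic sanity check of this closed form, here is a short SymPy sketch (my addition; `sp.ff` is SymPy's falling factorial $n(n-1)\cdots(n-k+1)$, which equals $\binom{n}{k}k!$):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 6):
    F = sp.integrate(x**n * sp.exp(x), x)
    # closed form above: e^x * sum_k (-1)^k * binom(n,k) * k! * x^(n-k)
    p = sum((-1)**k * sp.ff(n, k) * x**(n - k) for k in range(n + 1))
    assert sp.simplify(F - sp.exp(x) * p) == 0
print("closed form matches sympy.integrate for n = 1..5")
```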
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Analytic continuation of $\sum (z/a)^n$ I'm having trouble continuing this function beyond its convergence radius, $R=a$. $$f(z)=\sum (z/a)^n$$ Given the context (a textbook in complex analysis) I suspect it should have a simple closed-form expression. I've tried differentiating and trying to relate it to the geometric series, but so far I haven't had any success. Any hint or idea on how to analytically continue it? Thanks in advance!
For $|z/a|<1$ the sum is $\frac{1}{1-(z/a)} = \frac{a}{a-z}$. That is the continuation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A series involving digamma function I am trying to solve the series $$\sum_{k=1}^\infty\frac{1}{k(k^2+n^2)}$$ The best I got is $$\frac{\Re\left\{\psi(1+in)\right\}+\gamma}{n^2}$$ I am not able to simplify it more. Maybe there is another approach to solve the series. Any idea how? You can assume that $n$ is an integer if that simplifies the solution.
In fact, your result is correct and it agrees with Eq. (6.3.17) of Abramowitz & Stegun, where you'll find a nice zeta series representation for it. See link http://people.math.sfu.ca/~cbm/aands/page_259.htm
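A quick numeric confirmation of the closed form in the question (my addition, using mpmath):

```python
from mpmath import mp, nsum, psi, euler, mpc, re, inf

mp.dps = 25
for n in [1, 2, 5]:
    series = nsum(lambda k: 1 / (k * (k**2 + n**2)), [1, inf])
    closed = (re(psi(0, 1 + mpc(0, n))) + euler) / n**2
    print(n, series, closed)   # the two columns agree
```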
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fundamental group of $S^{1}$ unioned with its two diameters Is my solution correct? Call $X$ the riflescope space (I made this name up). I let $p$ be the point of intersection of the two diameters, and $q$ be the right point of intersection of the horizontal diameter with the circle. I let $A=X-\{p\}$ and $B=X-\{q\}$. Then $A \cap B = X-\{p,q\}$. $A$ and $B$ are open path connected with $X = A \cup B$. Furthermore $A \cap B$ is simply connected since it deformation retracts to a point. $A$ deformation retracts onto $S^{1}$ so has fundamental group $\mathbb{Z}$ and $B$ deformation retracts onto a space homotopy equivalent to the theta space which I know has fundamental group $\mathbb{Z}*\mathbb{Z}$. I conclude by van Kampen that $\pi_{1}(X) \simeq \mathbb{Z}*\mathbb{Z}*\mathbb{Z}$. Okay so my original solution isn't correct. I have a feeling that it's the product of 4 copies of $\mathbb{Z}$ but I'm not sure how to decompose $X$ yet.
If your space looks like $\bigoplus$ then it is homotopy equivalent to a wedge of four circles (collapse the diameters to a point) so Van Kampen gives its fundamental group is the free group on four generators (not three).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Diameter of set in metric space I do agree with the statement that $$d(A) = \sup{\{d(x, y):x, y \in A\}}$$ But why can't we use the maximum? It seems to me that the max would also give the diameter. I know it should not be correct, so please give me the correct explanation, with an example where the $\max$ formulation goes wrong.
It is possible for the supremum of a set to be a value that is not in the set, and as a result the set has no maximum value (since the maximum of a set is always taken to be the largest element in the set). For a simple example, consider the open interval $A = (0, 1) = \{x \in \mathbb{R}: 0 < x < 1\}$. What's the largest distance two elements of $A$ can be? It's not 1, because there are no elements in $A$ that are 1 apart (since 0 and 1 aren't elements of $A$). However, for any distance less than 1, you can find two elements of $A$ that are further apart than that. In fact, the set of distances is equal to $[0, 1) = \{x \in \mathbb{R}: 0 \le x < 1\}$. Since 1 is not an element of that set, the maximum of the set cannot be 1. However, it also cannot be any number less than 1, since there is always a larger number than it in the set. Hence, the set has no maximum value. However, the set does have a supremum, and the value of that supremum is 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Action of $\mathbb{Z}/3\mathbb{Z}$ on $P^{1}$ I am reading from the book Topics in Galois theory by Serre. I have the following question: take $G=\mathbb{Z}/3\mathbb{Z}$. The group $G$ acts on $P^1$ by $$\sigma x\;=\;1/(1-x)$$ where $\sigma$ is a generator of $G$. Am I interpreting this action correctly? I am thinking of it as follows: think of $P^1$ as the extended complex plane and of $x$ as a complex number. I am not able to interpret this action geometrically, thinking of $P^1$ as a set of lines. If we write $T=x+ \sigma x + \sigma^{2} x$, how does $T$ give a map $Y=P^1\rightarrow P^1/G$?
Thinking first of $\mathbb{C}^2$, the action of the group $G = \mathbb{Z}/3\mathbb{Z}$ is generated by the linear transformation $$\begin{pmatrix}0 & 1 \\ -1 & 1 \end{pmatrix} \cdot \begin{pmatrix}w \\ z \end{pmatrix} = \begin{pmatrix}z \\ -w+z \end{pmatrix} $$ Under the projection map $\mathbb{C}^2 - \{0\} \to P^1$, all the points in a single "line" $aw+bz=0$ get mapped to a single element of $P^1$. If $a \ne 0$ then we can rewrite this equation as $w = - \frac{b}{a} z$, set $\zeta = -\frac{b}{a}$ (the "minus slope"), rewrite the equation of the line as $w=\zeta z$, and instead map that line to $\zeta \in \mathbb{C} \subset \mathbb{C} \cup \{\infty\}$. If instead $a=0$ we map the line (whose equation may be rewritten $z=0$) to $\infty$. When this is done, the point $\frac{w}{z}=\zeta$ is mapped to the point $$\frac{z}{-w+z} = \frac{1}{-\frac{w}{z}+1} = \frac{1}{-\zeta+1} $$ In other words, the action is generated by $\sigma \zeta = \frac{1}{-\zeta + 1}$, where your argument $x$ has been changed to the "minus slope" variable $\zeta$. Thus, the $\mathbb{C} \cup \{\infty\}$ model for $P^1$ can be thought of as "minus slope space". Let me now stick with your variable $x$. Let me explain how to understand the formula $$\sigma x = \frac{1}{-x+1} $$ One thing to do is to find the fixed points $$\sigma x = x $$ Substituting the formula we obtain $$\frac{1}{-x+1}=x $$ and solving we obtain $$x = \frac{1}{2} \pm \frac{\sqrt{3}}{2} i $$ Now, working back to the $P^1$ model, you can think of the two roots of this equation as the "minus slopes" of the two eigenspaces of the linear transformation $\begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix}$. One eigenspace is $$w = \bigl(\frac{1}{2} + \frac{\sqrt{3}}{2}i\bigr) \, z $$ and the other eigenspace is similar but with the slope changed to its complex conjugate. Now you ask about the map $T = x + \sigma x + \sigma^2 x$. Here, I must admit, I am unsure what you are looking for, but I will make some guesses. In the $\mathbb{C} \cup \{\infty\}$ model, where $x$ is "minus slope", this formula for $T$ makes no sense: adding "minus slopes", or adding ordinary "slopes" for that matter, has no meaning. On the other hand this map does make sense in $\mathbb{C}^2$ using vector addition, where $x$ is replaced by a vector variable $\vec v = \begin{pmatrix}w \\ z \end{pmatrix}$ and where $\sigma$ is replaced by the matrix $M = \begin{pmatrix}0 & 1 \\ -1 & 1 \end{pmatrix}$. Since $M$ is a linear transformation taking lines to lines, it therefore also makes sense in the space of lines $P^1$. The point seems to be that $T\sigma = T(M \sigma)$ for all $\sigma$ (do the calculation to verify that $T=TM$). So $T$ can be thought of as a map $T : P^1 \to P^1$ having the property that it maps each $G$-orbit in the domain $P^1$ to a single point in the range $P^1$. Thus the orbit space $P^1 / G$ is shown, using $T$, to be equivalent to $P^1$ itself.
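A quick symbolic check of the two claims, that $\sigma$ has order $3$ on $P^1$ and that $T$ is constant on orbits (my addition):

```python
import sympy as sp

x = sp.symbols('x')
sigma = lambda t: 1 / (1 - t)

s1 = sp.simplify(sigma(x))        # 1/(1 - x)
s2 = sp.simplify(sigma(s1))       # (x - 1)/x
s3 = sp.simplify(sigma(s2))       # x again, so sigma^3 = id
T = sp.together(x + s1 + s2)      # T as a rational function of x
print(s3)                                      # -> x
print(sp.simplify(T.subs(x, s1) - T))          # -> 0, i.e. T(sigma x) = T(x)
```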
{ "language": "en", "url": "https://math.stackexchange.com/questions/1862896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The relation between axes of 3D rotations Let's suppose we have two rotations about two different axes represented by vectors $v_1$ and $v_2$: $R_1(v_1, \theta_1)$, $R_2(v_2,\theta_2)$. It's relatively easy to prove that composition of these two rotations gives rotation about axis $v_3$ distinct from axes $v_1$ and $v_2$ . Indeed if for example $v_3=v_1$ then $R_1(v_1, \theta_1) R_2(v_2,\theta_2)=R_3(v_1,\theta_3)$ leads to $R_2(v_2,\theta_2)=R_1^T(v_1, \theta_1)R_3(v_1,\theta_3)=R(v_1,\theta_3 -\theta_1)$ what gives $v_1=v_2$. ... Contradiction... We see that composition of two rotations about different axes always generates a new axis of rotation. The problem can be extended for condition of the plane generated by the axes. Question: * *Is it true that composition of two rotations generates the axis which doesn't belong to the plane which is constructed by the original axes of rotations ? *How to prove it ? *If the statement is not however true what are conditions for not changing a plane during the composition of rotations $ ^{[1]}$ ? $ ^{[1]}$ It can be observed that even in the case of quite regular rotations the above statement is true Let's take $Rot(z,\dfrac{\pi}{2})Rot(x,\dfrac{\pi}{2})= \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \\ \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{bmatrix} = Rot([1,1,1]^T, \dfrac{2}{3}\pi)$ or $Rot(x, \pi )Rot(z, \pi )= \begin{bmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \\ \end{bmatrix} \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \\ \end{bmatrix} = Rot( y, \pi)$ So I suppose it is generally true but how to prove it ?
This is easily seen if we assume familiarity with the use of unit quaternions in representing rotations. A rotation $R$ about the axis given by $\vec{v}=v_1\bf{i}+v_2\bf{j}+v_3\bf{k}$ by the angle $\theta$ is represented by the quaternion $$ q=\cos\frac\theta2+\sin\frac\theta2\vec{v}. $$ Here it is essential that $\vec{v}$ is a unit vector. The connection is that the rotated version $R\vec{u}$ of a vector $\vec{u}$ is then given by the quaternion product $$ R\vec{u}=q\vec{u}\overline{q}, $$ where $\overline{q}=\cos\frac\theta2-\sin\frac\theta2\vec{v}$ is the conjugate quaternion. The composition of two such rotations is then faithfully reproduced as a product of the representing quaternions. So if another rotation $R'$ is represented by $q'=\cos\frac\alpha2+\sin\frac\alpha2\vec{v}'$, the composition $R\circ R'$ (apply $R'$ first) is represented by the product $$ \begin{aligned} qq'&=\left(\cos\frac\alpha2\cos\frac\theta2-\sin\frac\alpha2\sin\frac\theta2\,\vec{v}'\cdot\vec{v}\right)+\\ &+\cos\frac\alpha2\sin\frac\theta2\vec{v}+\cos\frac\theta2\sin\frac\alpha2\vec{v}'+\sin\frac\alpha2\sin\frac\theta2\,\vec{v}\times\vec{v}'. \end{aligned} $$ From the second row we can read the axis of the composition - it is the unit vector parallel to that linear combination of $\vec{v}$, $\vec{v}'$ and their cross product. The first two terms are in the plane $T$ spanned by $\vec{v}$ and $\vec{v}'$, but the cross product is perpendicular to $T$. Therefore the axis of the combined rotation is in the plane $T$ if and only if that cross product term is zero. Either of the sines vanishes only when the rotation is trivial ($\alpha=0$ or $\theta=0$). The cross product vanishes iff $\vec{v}$ and $\vec{v}'$ are parallel. In other words, your hunch is correct.
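Here is a small NumPy sketch confirming the conclusion on the question's first example (my addition; the helper names are mine, and `rot` is the standard Rodrigues formula):

```python
import numpy as np

def rot(axis, theta):
    """Rotation matrix about unit vector `axis` by angle `theta` (Rodrigues)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def axis_of(R):
    """Rotation axis = eigenvector of R for eigenvalue 1."""
    w, V = np.linalg.eig(R)
    v = V[:, np.argmin(np.abs(w - 1))].real
    return v / np.linalg.norm(v)

v1, v2 = np.array([0, 0, 1.0]), np.array([1.0, 0, 0])   # z- and x-axes
R = rot(v1, np.pi / 2) @ rot(v2, np.pi / 2)
v3 = axis_of(R)                                          # (1,1,1)/sqrt(3) up to sign
normal = np.cross(v1, v2)                                # normal of the plane of v1, v2
print(v3, np.dot(v3, normal / np.linalg.norm(normal)))   # nonzero -> axis out of plane
```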
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Building volume using Lagrange multipliers A rectangular building with a square front is to be constructed of materials that cost 20 dollars per square foot for the flat roof, 20 dollars per square foot for the sides and the back, and 14 dollars per square foot for the glass front. We will ignore the bottom of the building. If the volume of the building is 5,600 cubic feet, what dimensions will minimize the cost of materials? (Round your answers to the nearest hundredth such that the dimensions increase from the smallest to the largest.) I am trying to do this problem, and I went through it twice using y as the length and x as the width and height. I tried substituting and integrating the volume and surface area formula, but my answer, $11.33 \times 22.23 \times 22.23$, was wrong and I can't really figure out where to go next with this problem.
Since the building has a square front its dimensions are $x \times x \times y$ where $x$ is both the height and width of the building and $y$ is its length. The areas of the front and back are $x^2$, and the areas of the sides and roof are $xy$. The cost of the materials to construct the building is given by $$C(x,y) = 20xy + 20xy + 20xy + 20x^2 + 14 x^2 = 60xy + 34 x^2$$ by considering, in order, the roof, the two sides, the back, and the front. You wish to minimize $C(x,y)$ subject to the constraint that $$V(x,y) = x^2y = 5600.$$ The Lagrange multiplier is a number $\lambda$ satisfying $\nabla C(x,y) = \lambda \nabla V(x,y)$. Thus you get the system of equations $$\begin{array}{rl} 60 y + 68 x &= 2\lambda xy \\ 60x &= \lambda x^2 \\ x^2y &= 5600.\end{array}$$ Solving nonlinear systems can be a bit ad-hoc. Since $x=0$ isn't a meaningful dimension for the building the solution requires $x \not= 0$. You can divide the middle equation by $x$ to get $$60 = \lambda x.$$ You can plug this into the first equation to obtain $$60 y + 68 x = 120 y$$ so that $68 x = 60 y$. Finally multiply the last equation by $68^2$ to get $$(68 x)^2 y = (68)^2\cdot 5600$$ so that $$60^2 y^3 = (68)^2\cdot 5600.$$ This gives you $y = 19.30$ and consequently $x = 17.03$.
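To confirm the numbers, here is a short SymPy sketch of the same Lagrange system (my addition; the variable names are mine):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
C = 60*x*y + 34*x**2          # cost of materials
V = x**2 * y                  # volume

sols = sp.solve([sp.Eq(sp.diff(C, x), lam * sp.diff(V, x)),
                 sp.Eq(sp.diff(C, y), lam * sp.diff(V, y)),
                 sp.Eq(V, 5600)], [x, y, lam], dict=True)
for s in sols:
    print(sp.N(s[x], 6), sp.N(s[y], 6))   # x ~ 17.03, y ~ 19.30
```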
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given $P(C)$ and $P(A\mid C)$, what is $P(C\mid A)$? I am wondering if there's a way to find the solution if we know: $P(C) = 0.01$ $P(A\mid C) = 0.7$ what is $P(C\mid A)$? I think we need to know $P(A)$ to answer this question right? There is no other way around it? Thank you!
Considering the given data, without knowing anything more, one may just write (using $P_A(C)$ to denote $P(C\mid A)$) $$ P_A(C)=\frac{P(A \cap C)}{P(A)}=\frac{P(A \cap C)}{P(C)}\cdot \frac{P(C)}{P(A)}=0.7\frac{0.01}{P(A)}=\color{red}{\frac{0.007}{P(A)}}. $$ So yes: without $P(A)$ the answer cannot be pinned down to a single number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Applying distortion to Bézier surface I am trying to simulate the image warp effect, that is used in Adobe Photoshop. The rectangular image is warped according to a cubic Bézier surface (in 2D, all Z components are 0). Having any Bézier surface, vertical distortion $d \in[0,1]$ can be applied to it. Left: input bézier surface, $d=0$, Right: output surface, $d=0.8$ Do you have any idea, what is done to the Bézier surface (16 points), when converting from the version on the left to the output on the right?
Clearly the $y$-coordinates of the Bezier patch control points are being left unchanged, and the $x$-values are being "tapered". When I say "tapered", I mean that the upper edge of the patch is being shrunk inwards, the lower edge is being expanded outwards, and the mid-height curve is being left unchanged. I don't really know what the "$d=0.8$" value means. You say $d=0$ corresponds to no tapering, and I'd guess that $d=1$ is very strong tapering, where the top edge of the patch gets shrunk to a point (which means that the lower edge will double in width). Let's assume that the left-hand patch has corners at $(-1,-1)$, $(-1,1)$, $(1,-1)$, $(1,1)$. If my guess about $d$ is correct, then the required transformation is: $$ T(x,y) = \big( (1-dy)x, y\big) $$ It's easy to confirm that: $$ T(x,0) = (x,0) \quad \text{for all } x $$ $$ T(1,-1) = (1+d, -1) $$ $$ T(1,1) = (1-d, 1) $$ and so on.
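A minimal NumPy sketch of this taper applied to a grid of control points (my illustration; the $4\times 4$ grid values are made-up example data matching the $[-1,1]^2$ normalization above):

```python
import numpy as np

def taper(points, d):
    """Apply the vertical-taper warp T(x, y) = ((1 - d*y) * x, y).

    `points` is an (n, 2) array of control points with y in [-1, 1]."""
    pts = np.asarray(points, float).copy()
    pts[:, 0] *= 1 - d * pts[:, 1]
    return pts

# 4x4 control grid of a flat patch on [-1, 1]^2
u = np.linspace(-1, 1, 4)
grid = np.array([[xx, yy] for yy in u for xx in u])
print(taper(grid, 0.8))   # top row shrinks to 0.2x, bottom row expands to 1.8x
```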
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does it follow that $\mu$ is a measure? Suppose $\mu_n$ is a sequence of measures on $(X, \mathcal{A})$ such that $\mu_n(X) = 1$ for all $n$ and $\mu_n(A)$ converges as $n \to \infty$ for each $A \in \mathcal{A}$. Call the limit $\mu(A)$. Does it follow that $\mu$ is a measure?
This is overkill, but it locates exactly where the difficulty sits. A word of caution first: for general sequences in $l_1$, pointwise convergence does not imply norm convergence (the standard basis vectors $e_n \to 0$ pointwise while $\|e_n\|_1 = 1$), so the passage below from coordinatewise convergence to convergence of the $l_1$-norms is genuinely non-trivial. It is supplied by the Nikodym convergence theorem (a consequence of the Vitali-Hahn-Saks theorem): a setwise convergent sequence of measures is uniformly countably additive, and uniform countable additivity is exactly what lets the norms pass to the limit. Granting that, it is straightforward to verify that $\mu \emptyset = 0$ and $\mu A \ge 0$ for any $A \in \cal A$. Suppose $A_k \in \cal A$ are disjoint; let $x_n(k) = \mu_n A_k$, $x(k) = \mu A_k$. Then $x_n(k) \to x(k)$ for each $k$, and hence $\|x_n\|_1 \to \|x\|_1$. In particular, we have $\mu( \cup_k A_k) = \lim_n \mu_n ( \cup_k A_k) = \lim_n \sum_k \mu_n A_k = \lim_n \|x_n\|_1 = \|x\|_1 = \sum_k \mu A_k$. Hence $\mu$ is a measure and $\mu X = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve functional equation $f(x^4+y)=x^3f(x)+f(y)$ I need help solving this equation, please: $$f(x^4+y)=x^3f(x)+f(y),$$ where $ f:\Bbb{R}\rightarrow \Bbb{R}$ is differentiable. I've found that $f(0)=0$ and $f(y+1)=f(1)+f(y)$, but I couldn't continue. I think the solution is $f(x)=ax$. Thanks for your help
To help clarify or expand the hint by @xidgel and avoid any possible confusion: first introduce $t(x,y) = x^4+y$, so we can write $$f(t) = x^3f(x)+f(y)$$ Now we can express differentiation with respect to $x$ and $y$ with the chain rule: $$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial t} \cdot \frac{\partial t}{\partial x} \text{ and } \frac{\partial f}{\partial y} = \frac{\partial f}{\partial t} \cdot \frac{\partial t}{\partial y}$$ Now proceed as in the hint by calculating these derivatives on both sides of the equation above. (Differentiating in $y$, for instance, gives $f'(x^4+y)=f'(y)$ for all $x,y$, which forces $f'$ to be constant; combined with $f(0)=0$ this yields $f(x)=ax$, as you conjectured.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Is there an intuitive meaning of $p - p^2$ If $p$ is the probability of an event occurring, does $p - p^2$ have an intuitive meaning?
Since $p-p^2=p(1-p)$, it is the probability of the event occurring multiplied by the probability of it not occurring. For example, if $p$ is the probability of a coin coming up heads, then $p(1-p)$ is the probability that it comes up heads then tails in two throws.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Let $f$ be a Lebesgue measurable function on $\Bbb{R}$ satisfying some properties, prove $f\equiv 0$ a.e. Let $f$ be a Lebesgue measurable function on $\Bbb{R}$ satisfying: i) there is $p\in (1,\infty)$ such that $f\in L^p(I)$ for any bounded interval $I$. ii) there is some $\theta \in (0,1)$ such that: $$\left|\int_I f\ dx\right|^p\leq \theta (\mu(I))^{p-1}\int_I |f|^p\ dx$$ Prove that $f\equiv 0 $ a.e. This is a previous qual question (3, beware: PDF). My thoughts. We assume that there is some $E$ where without loss of generality $f>0$ on $E$ and $\mu(E)>0$. Using regularity of Lebesgue measure, for all $\epsilon>0$ there is some open $G_{\epsilon}$ with $\mu(G_{\epsilon}\setminus E)<\epsilon$ and $E \subseteq G_{\epsilon}$. We may decompose $G_{\epsilon}$ as a countable disjoint union $I_k$. So we must only show that there is a contradiction if $f>0$ on some interval. I tried arguing like this: $$\begin{align*}\left|\int_I f\ dx\right|^p&\leq \theta (\mu(I))^{p-1}\int_I |f|^p\ dx\\ &\leq\theta (\mu(I))^{p-1} \mu(I) \operatorname{essup}(|f|^p)\\&=\theta(\mu(I))^p\operatorname{essup}_I(|f|^p)\end{align*}$$ I feel like there should be a general contradiction here (for example if $I=(0,1)$ and $f=1$ this inequality doesn't hold). Can someone help me? I would like hints only please.
Hint: Suppose $\theta = 1/2$ just to scratch around a bit. For $h>0$ we have $$|\int_a^{a+h} f\,\,|^p \le \frac{1}{2}\cdot h^{p-1}\int_a^{a+h} |f|^p \implies |\frac{1}{h} \int_a^{a+h} f|^p \le \frac{1}{2}\cdot \frac{1}{h}\int_a^{a+h} |f|^p.$$ In the last inequality, let $h\to 0^+$ and apply the Lebesgue differentiation theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Domain of $f(x)=x^{\frac{1}{\log x}}$ What is the domain of $$f(x)=x^{\frac{1}{\log x}}$$ Since there is a logarithm, the domain is $(0, \infty)$. But the book answer is $(0, \infty)-\{1\}$. Yet if $x=1$, $$f(x)=1^\infty=1$$ So is it necessary to exclude $1$?
Just as MathematicsStudent1122 answered, if $$f=x^{\frac{1}{\log (x)}}$$ then $$\log(f)=\frac{1}{\log(x)}\times\log(x)=1 \implies f=e$$ provided $x>0$ and $x\neq 1$. At $x=1$ the exponent $\frac{1}{\log x}$ involves division by zero, so $1$ must indeed be excluded from the domain; the manipulation $1^{\infty}=1$ is not valid here, since $\frac{1}{\log 1}=\frac{1}{0}$ is simply undefined (it is not $\infty$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The set of all real or complex invertible matrices is dense I'm trying to show that the set of all invertible matrices $\Omega$ is dense over $F=\mathbb R$ or $\mathbb C$. Let $A\in\Omega$ and $C\in M_{n\times n}(F)$. Since $\|A-C\|<\frac{1}{||A^{-1}||}$, and $\lambda\neq 0$, $A\in\Omega\implies \lambda A\in\Omega$, we have for $\lambda$ large $\|\lambda A-C\|<\frac{1}{\lambda \|A^{-1}\|}$ is small. So for any open ball $B_{\epsilon}(C)$ centered at $C$, we have $\frac{1}{\epsilon\|A^{-1}\|}<|\lambda|\implies \|\lambda A-C\|<\epsilon$ and $\lambda\in\Omega$. Does this proof look correct? Also, could you use the same idea to prove that the diagonalizable matrices are dense in $M_{n\times n}(F)$, by just letting $A$ be a diagonal matrix with nonzero eigenvalues?
No, it is all wrong. What you need to show is that given $C \in M_{n \times n}(F)$ and $\epsilon > 0$ there exists $A \in \Omega$ with $\|A - C\| < \epsilon$. But you are assuming that $A \in \Omega$ with $\|A - C\| < 1/\|A^{-1}\|$. Then you make the absurd assertion that $\|\lambda A - C \| < \dfrac{1}{\lambda \|A^{-1}\|}$. Actually $\|\lambda A - C\| \ge |\lambda| \|A\| - \|C\| \to +\infty$ as $\lambda \to \infty$, not $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1863916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $g>0$ is in $L\ln\ln L$, then $\#\{n: g(\theta x)+\cdots+g(\theta^nx)\le t\,g(\theta^nx)\}\le Ct$ when $t\to\infty$ Here are two theorems: * *For every dynamical system $(X, Σ, m, T )$ and function $f \in L \ln \ln L(X,m)$ (that is, such that $\int |f| \ln^+ \ln^+ |f|\, {\rm d}m$ is finite), $$N^∗f(x)=\sup_{n\ge 1} \left( \dfrac{1}{n} \# \left\lbrace i\ge 1: \dfrac{f(T^ix)}{i} \ge \dfrac{1}{n} \right\rbrace \right)$$ is finite for $m$-almost every $x$. *Let $(S, A, µ)$ be a probability space, and $θ$ a $µ$-measure preserving transformation on $S$. Let $g > 0$ such that $\int g \ln^+ \ln^+ g dµ$ is finite. Define $$b_n=\dfrac{\sum_{k=1}^n g(\theta^kx)}{g(\theta^n x)}.$$ If $\theta $ is ergodic then $$\limsup\limits_{t\to\infty} \dfrac{ \# \left\lbrace n\ge 1: b_n\le t \right\rbrace }{t}\ \text{is finite}.$$ According to author, to prove the second theorem, we must use the pointwise ergodic theorem and the first theorem. Can you explain this clearly to me?
By the pointwise ergodic theorem, we have (by ergodicity) that $$\frac 1n\sum_{j=1}^ng\circ \theta^k(x)\geqslant \frac{\mathbb E\left[g\right]}2$$ for each $n\geqslant n_0(x)$. Therefore, we have $$\left\{n\geqslant n_0(x)\mid b_n\leqslant t\right\} \subset \left\{n\geqslant n_0(x)\mid \frac{g\left(\theta^nx\right)}n \geqslant \frac{\mathbb E\left[g\right]}{2t}\right\}=\left\{i\geqslant n_0(x)\mid \frac{g\left(\theta^ix\right)}i \geqslant \frac{\mathbb E\left[g\right]}{2t}\right\}.$$ Now, assume that $t$ is such that $\mathbb E\left[g\right]/(2t)\in \left[1/u,1/(u-1)\right)$ for some integer $u$. Then $$\operatorname{Card}\left\{n\geqslant n_0(x)\mid b_n\leqslant t\right\} \leqslant \operatorname{Card}\left\{i\geqslant n_0(x)\mid \frac{g\left(\theta^ix\right)}i \geqslant \frac{\mathbb E\left[g\right]}{2t}\right\}\leqslant \operatorname{Card}\left\{i\geqslant n_0(x)\mid \frac{g\left(\theta^ix\right)}i \geqslant \frac 1u\right\}.$$ Therefore, $$\frac{\operatorname{Card}\left\{n\geqslant n_0(x)\mid b_n\leqslant t\right\}}t\leqslant \operatorname{Card}\left\{i\geqslant n_0(x)\mid \frac{g\left(\theta^ix\right)}i \geqslant \frac 1u\right\}\cdot \frac 2{\mathbb E\left[g\right] (u-1)} \leqslant N^*g(x)\frac{2u} {\mathbb E\left[g\right] (u-1)}$$ and we get that $$\limsup_{t\to +\infty}\frac{\operatorname{Card}\left\{n\geqslant n_0(x)\mid b_n\leqslant t\right\}}t\leqslant N^*g(x)\frac{2} {\mathbb E\left[g\right]}.$$ Since $\limsup_{t\to +\infty}\operatorname{Card}\left\{n\lt n_0(x)\mid b_n\leqslant t\right\}/t=0$, we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier transform of cos(kx) using the formula given. I want to find the Fourier transform of $$ f(x) = \cos (kx)$$ using the Fourier transform formula $$\hat f(\omega)={1\over \sqrt{2\pi}}\int _{-\infty}^\infty f(x)\, e^{i\omega x}\,dx$$ How can I do that?
As tired mentioned, a delta function, $\delta(x)$, is a distribution that satisfies $$\int _{-\infty}^\infty\delta(x-a)f(x)\,dx=f(a)$$ Indeed the Fourier transform of the function $g(x)=1$ is a delta function. With this convention we should write $$\int _{-\infty}^\infty e^{i\omega x}\,dx=2\pi\,\delta(\omega)$$ Then $$\mathcal{F}[f(x)](\omega)={1\over \sqrt{2\pi}}\int _{-\infty}^\infty \cos (kx)\,e^{i\omega x}dx={1\over \sqrt{2\pi}}\int _{-\infty}^\infty \left(\frac{e^{ikx}+e^{-ikx}}{2}\right) e^{i\omega x}dx$$ so we have $$\mathcal{F}[f(x)](\omega)={1\over 2\sqrt{2\pi}}\int _{-\infty}^\infty \left(e^{ix(\omega +k)}+e^{ix(\omega -k)}\right) dx$$ $$\mathcal{F}[f(x)](\omega)=\sqrt{\frac{\pi}{2}}\,\delta(\omega+k)+\sqrt{\frac{\pi}{2}}\,\delta(\omega-k)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve 3 exponential equations $z^x=x$, $z^y=y$, $y^y=x$ to get $x$, $y$, $z$. The main question is : $z^x=x$, $z^y=y$, $y^y=x$ Find $z$, $y$, $x$. My method : I first attempted to get two equations for the unknowns $x$ and $y$. We can happily write : $z=x^{1/x}$ and $z=y^{1/y}$ Thus we get, $x^{1/x}=y^{1/y}$ Which is, $x^y=y^x$. I can't go any farther than this. Please help me.
$$z^{y}=y$$ $$(z^y)^y=y^y$$ $$z^{y^2}=x=z^x$$ therefore (assuming $z\neq 1$) $$x=y^2$$ On the other hand, $$y^y=x=y^2\implies y=2$$ for $y\neq 1$ (the degenerate case $y=1$ gives the trivial solution $x=y=z=1$), thus $$x=4\quad,\quad z=\sqrt{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Matrix decomposition into square positive integer matrices This is an attempt at an analogy with prime numbers. Let's consider only square matrices with positive integer entries. Which of them are 'prime' and how to decompose such a matrix in general? To illustrate, there is a product of two general $2 \times 2$ matrices: $$AB=\left[ \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{matrix} \right] \left[ \begin{matrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{matrix} \right]=\left[ \begin{matrix} a_{11} b_{11}+a_{12} b_{21} & a_{11} b_{12}+a_{12} b_{22} \\ a_{21} b_{11}+a_{22} b_{21} & a_{21} b_{12}+a_{22} b_{22} \end{matrix} \right]$$ Exchanging $a$ and $b$ we obtain the expression for the other product $BA$. Now, if we allow zero, negative and/or rational entries we can probably decompose any matrix in an infinite number of ways. However, if we restrict ourselves to $$a_{jk},~b_{jk} \in \mathbb{N},$$ the problem becomes well defined. Is there an algorithm to decompose an arbitrary square positive integer matrix into a product of several positive integer matrices of the same dimensions? There is a set of matrices which can't be decomposed, just like the prime numbers (or irreducible polynomials, for example). The most trivial one is (remember, zero entries are not allowed): $$\left[ \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right]$$ There are no natural numbers $a_{11},b_{11},a_{12},b_{21}$, such that: $$a_{11} b_{11}+a_{12} b_{21}=1$$ The same extends to any dimension $d$. Any 'composite' $d \times d$ matrix will have all entries $ \geq d$. Thus, for square matrices we can name several more 'primes': $$\left[ \begin{matrix} 2 & 1 \\ 1 & 1 \end{matrix} \right],~~~\left[ \begin{matrix} 1 & 2 \\ 1 & 1 \end{matrix} \right],~~~\left[ \begin{matrix} 1 & 1 \\ 2 & 1 \end{matrix} \right],~~~\left[ \begin{matrix} 1 & 1 \\ 1 & 2 \end{matrix} \right],~~~\left[ \begin{matrix} 2 & 2 \\ 1 & 1 \end{matrix} \right],~~~\left[ \begin{matrix} 1 & 1 \\ 2 & 2 \end{matrix} \right], \dots$$ And in general, any matrix which has at least one entry equal to $1$. It makes sense that most entries in 'composite' matrices will be large, since we are multiplying and adding natural numbers. For example: $$\left[ \begin{matrix} 1 & 2 & 4 \\ 3 & 3 & 1 \\ 3 & 4 & 4 \end{matrix} \right] \left[ \begin{matrix} 2 & 5 & 5 \\ 4 & 5 & 5 \\ 5 & 1 & 4 \end{matrix} \right]=\left[ \begin{matrix} 30 & 19 & 31 \\ 23 & 31 & 34 \\ 42 & 39 & 51 \end{matrix} \right]$$ $$\left[ \begin{matrix} 2 & 5 & 5 \\ 4 & 5 & 5 \\ 5 & 1 & 4 \end{matrix} \right] \left[ \begin{matrix} 1 & 2 & 4 \\ 3 & 3 & 1 \\ 3 & 4 & 4 \end{matrix} \right] =\left[ \begin{matrix} 32 & 39 & 33 \\ 34 & 43 & 41 \\ 20 & 29 & 37 \end{matrix} \right]$$ If no decomposition algorithm for this case exists, is it at least possible to recognize a matrix that can't be decomposed according to the above rules?
It's a strange question... Let $A\in M(N)$ s.t. $A=PQ$ where $P,Q\in M(N)$ are random. I calculate "the" Smith normal decomposition of $A$: $A=UDV$ where $U,V\in GL(\mathbb{Z})$ and $D$ is a diagonal matrix in $M(\mathbb{Z})$. During each Maple test, I consider the matrix $UD=[C_1,\cdots,C_n]$, where $(C_i)_i$ are its columns; curiously, (P) for every $i$, $C_i\geq 0$ or $-C_i\geq 0$. Is it true for all such matrices $A$? EDIT. Answer to @You're In My Eye. I conjectured that property (P) above and, for every $i,j$, $a_{i,j}\geq n$ characterize the decomposable matrices $A\in M(N)$. Unfortunately, the matrix $A=\begin{pmatrix}10&13\\9&5\end{pmatrix}\in M(N)$ satisfies (P) but is indecomposable. Remark 1. If $A=UV$ is decomposable, then there are many other decompositions: $A=(UP)(P^TV)$ where $P$ is any permutation matrix. Remark 2. We can consider the permanent function; if $A=UV$, then $per(A)> per(U)per(V)$ and in particular $per(U)<\dfrac{per(A)}{n!}$. If we look for a possible decomposition of the $A$ above, then we obtain $\det(U)\in\{\pm 67,\pm 1\}$ and $per(U)\leq 83$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove that $(p \to q) \to (\neg q \to \neg p)$ is a tautology using the law of logical equivalence I'm new to discrete maths and I have been trying to solve this: Decide whether $$(p \to q) \to (\neg q \to \neg p)$$ is a tautology or not by using the law of logical equivalence I have constructed the truth table and concluded that it is indeed a tautology. However, I am having difficulty proving it using the law of logical equivalence. I can only realize that I can use $$(p \to q ) \equiv (\neg p \lor q)$$ but after that I have no idea how to continue. Any help would be appreciated.
The following line of reasoning may help: $\qquad\begin{align} (p\to q)\to(\neg q\to\neg p)&\equiv\neg(\neg p\lor q)\lor(q\lor\neg p)&&\text{material implication}\\[1em] &\equiv\neg(\neg p\lor q)\lor(\neg p\lor q)&&\text{commutativity}\\[1em] &\equiv \neg M\lor M&&{M:\neg p\lor q}\\[1em] &\equiv \mathbf{T}&&\text{negation law} \end{align}$ Is the above clear? It makes minimal use of other logical equivalences.
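Alongside the algebra, here is a brute-force confirmation in Python (my addition): enumerate all four truth assignments and check the formula is true on each.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([False, True], repeat=2):
    row = implies(implies(p, q), implies(not q, not p))
    print(p, q, row)
    assert row          # true on all four rows, hence a tautology
```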
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Matrices that represent rotations So the question is What 3 by 3 matrices represent the transformations that a) rotate the x-y plane, then x-z, then y-z through 90°? I believe this is the matrix that rotates the xy plane \begin{bmatrix} 0 &-1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \\ \end{bmatrix} But I couldn't think of a rotation that rotates the xz plane. Okay, I'm going to make the y axis stay where it is, and rotate the $xz$ plane around it, but still, which way should I rotate it? I have no idea. There are two ways to rotate it: one in which the z axis comes to the position of the x axis while the x axis becomes -z, the other in which the x axis becomes the z axis and the z axis becomes -x. Which one to choose? And why?
Hint: since you are a beginner I give you a general, simple and powerful method that works well in many situations. You can interpret the action of your matrix \begin{bmatrix} 0 &-1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \\ \end{bmatrix} by looking at the columns. The first column is the transformation of the vector $[1,0,0]^T$, the second column is the transformation of the vector $[0,1,0]^T$ and the third column is the transformation of $[0,0,1]^T$ (you can easily test this; it is a general result, true for all matrices). These three vectors are the standard basis of the vector space, and any vector can be expressed as a linear combination of them, and the transformed vector is the same linear combination of the transformed vectors of the basis (this is what ''linear transformation'' means). Now you can see that your matrix does not change the component of a vector in the direction of the $z$ axis, but changes a component in the direction of the $x$ axis (the direction of $[1,0,0]^T$) to $[0,1,0]^T$ and a component in the direction of the $y$ axis ($[0,1,0]^T$) to $[-1,0,0]^T$. A simple visualization of this transformation shows that it is a rotation around the $z$ axis (the fixed points of the rotation) of $90°$ counterclockwise, as you have found. Now you can use the same reasoning to find a rotation around the $y$ axis. If the angle of rotation is $90°$ counterclockwise (this is the usual convention) then you see that the $x$ axis transforms as: $$ \begin{bmatrix} 1\\0\\0 \end{bmatrix} \quad \rightarrow \quad \begin{bmatrix} 0\\0\\-1 \end{bmatrix} $$ and the $z$ axis transforms as: $$ \begin{bmatrix} 0\\0\\1 \end{bmatrix} \quad \rightarrow \quad \begin{bmatrix} 1\\0\\0 \end{bmatrix} $$ and, since the $y$ axis does not change, the matrix (whose columns are exactly these images) is: $$ \begin{bmatrix} 0&0&1\\ 0&1&0\\ -1&0&0 \end{bmatrix} $$ Now you can use this method to find the other matrices in your question. Note that your doubt, ''the other one is x axis becoming the z axis and z axis becoming -x'', is simply the clockwise rotation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find the n'th number in this sequence The first numbers of the sequence are {2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 6, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 7, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2}. I.e. the even indexed elements are 1. Removing those yields a new sequence where the odd indexed elements are 2. Removing those yields a new sequence where the odd indexed elements are 3. Etc. In fact what I need is the sum of the first $n$ elements for all $n\in\mathbb{N}$. I tried to use the binary digit counts of $n$ but I haven't found anything useful.
The n-th term is equal to the exponent of the largest power of 2 which divides $(n+1)$, plus 1. In other words, if $n+1=2^lm$ in which $l$ is non-negative and $m$ is odd, then $a_n=l+1$. (For example, the 11th term is equal to $2+1$, since $2^2$ divides 12 but not $2^3$; the 23rd term is equal to $3+1$, because $2^3$ is the greatest power of 2 which divides 24.) Also we have $a_1+a_2+...+a_n=$ {the exponent of 2 in $(n+1)!$}$+n$. In a more formal language, $a_n=v_2(n+1)+1$, and the sum of the first $n$ elements $=n+v_2((n+1)!)$, where $v_2(t)$ stands for the exponent of the largest power of 2 which divides $t$.
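Both formulas are easy to verify against the listed terms; here is a short Python sketch doing so (my addition; `v2_factorial` uses Legendre's formula):

```python
def a(n):
    """n-th term: 1 + (exponent of 2 in n+1)."""
    m, v = n + 1, 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v + 1

seq = [2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5, 1,
       2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 6]
assert [a(n) for n in range(1, len(seq) + 1)] == seq

def v2_factorial(m):
    """Exponent of 2 in m! (Legendre's formula)."""
    s, p = 0, 2
    while p <= m:
        s += m // p
        p *= 2
    return s

n = 20
assert sum(a(k) for k in range(1, n + 1)) == n + v2_factorial(n + 1)
print("both formulas verified")
```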
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Prove or refute that $\{p^{1/p}\}_{p\text{ prime}}$ to be equidistributed in $\mathbb{R}/\mathbb{Z}$ I've tried follow the Example 3 (see minute 30'40" of the reference), where is required the related Theorem (stated at minute 21') combined with Serre's formalism for $\mathbb{R}/\mathbb{Z}$ (also explained in the video) from a video in YouTube, from the official channel matsciencechannel, An Introduction to Analytic Number Theory Lecture 07 Equidistribution by Ram Murty to ask myself Question. Is the sequence $\{p^{1/p}\}$, as $p$ varies over all primes, equidistributed in $\mathbb{R}/\mathbb{Z}$? I need to check if $$\frac{1}{\pi(N)}\sum_{p\leq N}e^{2\pi i m p^{1/p}}$$ tends to zero as $N\to\infty$, $\forall m\neq 0$, where we are denoting the prime-counting function by $\pi(x)$ and $p$ the sequence of prime numbers in increasing order. I take the definition following the teacher in this video of the more high quality, to do the calculation $$L(s,\psi_m)=\prod_{p}\left(1-\frac{p^{\frac{2\pi i m}{p}}}{p^s}\right)^{-1}=\zeta(s-\frac{2\pi i m}{p}).$$ Now I believe that it is required to say that one has a pole at $s=1+\frac{2\pi i m}{p}$ (notice that I believe that I have a sequence of poles, Murty's example only depends on $m$), and say that I can use the theorem to show that our sum $\rightarrow 0$ as $N\to\infty$, thus our sequence is equidistributed in the (compact) group $\mathbb{R}/\mathbb{Z}$. Can you explain if it is feasible such calculations or do it? I don't understand well if it is possible to apply the theorem for my example, if you can state in details how one can use the theorem, then it is the best, and this example will be here as a reference. Thanks in advance.
As $n$ goes to infinity, $n^{1/n}$ approaches $1$ from above. In particular the fractional part of $n^{1/n}$ approaches $0$, so the $p^{1/p}$ are not equidistributed modulo $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Laplace transform in ODE Use any method to find the Laplace transform of $\cosh bt$. Looking to get help with this example for my exam review.
The Laplace transform of $\cosh bt$ is the following: $$\int_0^\infty \cosh({bt})e^{-st} dt$$ $$= \int_0^\infty \frac{(e^{bt} + e^{-bt})e^{-st}}{2} dt$$ $$= \frac{1}{2} \int_0^\infty e^{-st + bt} + e^{-st - bt} dt $$ $$= \frac{1}{2} \int_0^\infty e^{(-s + b)t} + e^{(-s - b)t} dt$$ $$=\frac{1}{2} \begin{bmatrix} \frac {e^{(-s+ b)t}}{-s+b} + \frac{e^{(-s-b)t}}{-s-b} \end{bmatrix}^\infty_0$$ You can probably take it from here.
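For the record, evaluating the bracket (for $s>|b|$, so both exponentials vanish at infinity) gives $\frac{1}{2}\left(\frac{1}{s-b}+\frac{1}{s+b}\right)=\frac{s}{s^2-b^2}$. A one-line SymPy check of that result (my addition, a sketch assuming `sympy` is available):

```python
import sympy as sp

t, s, b = sp.symbols('t s b', positive=True)
L = sp.laplace_transform(sp.cosh(b * t), t, s, noconds=True)
print(sp.simplify(L))   # s/(s**2 - b**2)
```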
{ "language": "en", "url": "https://math.stackexchange.com/questions/1864754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help with Telescopic Series with 3 terms in denominator All the examples I have done and seen only have 2 terms in the denominator, so I am a bit stuck with this one. I have attached what I have done so far; not sure how to proceed with it. Thank you for the hints, they were useful; after working it out more I ended up with the following, but now I am confused about what to do next. Do I have to do another partial fraction decomposition? My work after the hints
You are close to the end. Express what you got as $$\left(\frac{1/2}{n}-\frac{1/2}{n+1}\right)-\left(\frac{1/2}{n+1}-\frac{1/2}{n+2}\right).$$ It looks a little better as $$\frac{1/2}{n(n+1)}-\frac{1/2}{(n+1)(n+2)}.$$ Now add up, and (in either version) watch almost all the terms cancel.
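A quick numeric check (my addition; the decomposition above expands back to the summand $\frac{1}{n(n+1)(n+2)}$, which I take to be the series in the attached work): after the cancellation, only the first term $\frac{1/2}{1\cdot 2}=\frac14$ survives.

```python
partial = 0.0
for n in range(1, 10001):
    partial += 0.5 / (n * (n + 1)) - 0.5 / ((n + 1) * (n + 2))
print(partial, "vs", 0.5 / (1 * 2))   # both ~ 0.25
```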
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Fundamental theorem of calculus statement Let f be an integrable real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by F(x)=$\int _{a}^{x}\!f(t)\,dt$ Doesn't this make F(a) = $0$ ?
Yes, yes it does. In this particular formulation of the FTOC, you are defining a function F(x) such that it represents the area underneath the function f(t) as t ranges from a to x. When x = a, you are measuring an area that is f(a) high, and of zero width, so therefore it must be zero area. EDIT To respond to your comment: For any integrable function f(x), there are an infinite number of antiderivatives F(x). It is true that $F(x) = \frac{x^2}{2}$ is an antiderivative of $f(x) = x$. However, $F(x) = \frac{x^2}{2} + 7$ is also an antiderivative, as is $F(x) = \frac{x^2}{2} - \pi$, and in fact so is $F(x) = \frac{x^2}{2} + C$, where $C$ is any real value. You know how when you take the indefinite integral you're supposed to include a "+C" term? Yeah, it's that one. So, if we are interested in $F(x) = \int_1^x f(t) dt$, because this is a definite integral, we want to find the function that specifically measures the area between the vertical line $t = 1$ and the vertical line $t = x$, and as I said above that is a function that must equal $0$ at $x = 1$. So, clearly, in this particular case we are interested in the function $F(x) = \frac{x^2}{2} - \frac{1}{2}$. On the other hand, if we were interested in the function $F(x) = \int_2^x f(t) dt$, then we would have $F(x) = \frac{x^2}{2} - 2$, noting that $F(2) = 0$ and, in fact, $F(x) < 0 $ for $x \in [1, 2)$, which is because we are, in this case, effectively measuring the area backwards.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Give an approximation for $f(-1)$ with an error margin of less than $0.01$ $f$ is defined by the power series: $f(x) = \sum_{n=1}^{\infty}\frac{x^n}{3^n (n+2)}$ I need to find an approximation for $f(-1)$ such that the error margin will be less than $0.01$. I know I need to use the Taylor remainder and the Laggrange theorem, but I'm not exactly sure how. All the other times I had a function (not a series) and I knew how to calculate. Now I have a series and I don't really understand what to do
Just for your curiosity. As @gammatester answered, you are looking for $n$ such that $$\frac 1{3^n(n+2)}\lt \frac 1{100}$$ which can be rewritten $$3^n(n+2)\gt 100$$ Just by inspection, $n=3$ is the smallest value for which the inequality holds. In fact, there is an analytical solution to the equation $$x^n(n+k)=\frac 1 \epsilon$$ It is given by $$n=\frac{W\left(\frac{x^k \log (x)}{\epsilon }\right)}{\log (x)}-k$$ where $W(z)$ is the Lambert function. As you will see in the Wikipedia page, since, in a case like yours, the argument is quite large, you have a good approximation using $$W(z)=L_1-L_2+\frac{L_2}{L_1}+\cdots$$ where $L_1=\log(z)$ and $L_2=\log(L_1)$. Applied to your case $(x=3, k=2,\epsilon=\frac 1 {100})$, this would give, as a real, $$n\approx \frac{5.24544}{\log(3)}-2\approx 2.7746$$ May I suggest you play with this to see how many terms would be required for an error margin of, say, $10^{-6}$? Sooner or later, you will learn that any equation which can be written $$A+Bx+C\log(D+Ex)=0$$ has an analytical solution in terms of the beautiful Lambert function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Are all the zeros of $1-a_2x^2+a_4x^4-a_6x^6+\cdots$ real for $a_{2n}>a_{2(n+1)}$ with $a_{2n+1}=0$ and $a_{2n}>0$? This question is related to a previous question of mine. I was not pleased about the conditions I provided there. I had something different in mind but I failed in stating it. So here are the premises. Suppose I have a power series $\sum_{k=0}^{\infty}a_{k}x^{k}$ and: * *$a_{0}=1$; *$a_{2n+1}=0$; *$a_{2n}>a_{2(n+1)}$; *$a_{2n}>0$. My questions: * *Does this kind of power series always have real zeros? *If not, are there counterexamples? I've done some quick checking for $\cos(x)$, shifting the value of $a_{2n}$, and it seems that all the roots remain real provided that we obey the above conditions; nevertheless, the number of roots changes from infinite to finite. Thanks.
The same objection as before holds. If we consider $$ f(z)=1-a_2 z+a_4 z^2-a_6 z^3 +\ldots $$ the fact that $\{a_{2n}\}_{n\geq 1}$ is a positive decreasing sequence does not give that Newton's inequalities are fulfilled. If Newton's inequalities are not fulfilled, $f(z)$ cannot have only real roots and the same applies to your original function. For instance, the discriminant of the third-degree polynomial $$ p(z) = 1-\frac{z}{2}+\frac{z^2}{4}-\frac{z^3}{8} $$ is negative, hence $p(z)$ has some complex root and the sequence $a_2=\frac{1}{2},a_4=\frac{1}{4},a_6=\frac{1}{8},$ $a_8=\varepsilon,a_{10}=\frac{\varepsilon}{2},a_{12}=\frac{\varepsilon}{4},\ldots$ gives a counter-example for any sufficiently small $\varepsilon>0$. The only question that makes sense is the following: If $\{a_n\}_{n\geq 0}$ is a real sequence fulfilling Newton's inequalities and $$f(x)=\sum_{n\geq 0}a_n x^n $$ is an analytic/entire function, can we say that all the roots of $f$ are real? Unluckily, the answer is negative also in that case, always by a perturbation argument: it is enough to consider $f(x)=\varepsilon+e^{-x}$ for some small $\varepsilon>0$. So Newton's inequalities can be used to prove the existence of some complex root, but not to prove that every root is real. At some meta-level, theorems ensuring that every root of something is real have to be fairly complex (pun intended). Otherwise, RH would have been solved centuries ago.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question about the term "depressed cubic" The depressed cubic equation is a cubic equation of the form $x^3+px+q=0$. This expression sounds strange, especially for someone whose mother tongue is not English. Why is this equation called "depressed"? What is so depressing about it? Thanks!
Why this equation is called "depressed"? It is from Latin deprimitur : lowered. It seems that the terminology was introduced by François Viète (1540 – 1603) in his posthumous : * *Francisci Vietae Fontenaeensis ab aequationum recognitione et emendatione (1615), page 79: Anastrophe [anastrophe] is the transformation of inverse negative equations into their correlatives. It is carried out so that the original equation, with the help of its correlative, can be reduced [reducatur ad depressiorem] [...] to a lower [power] and, therefore, be more easily solved. [...] The work of anastrophe is performed this way : [...] and the equation, otherwise soluble only with difficulty, can be depressed [deprimitur] to one that is easily solved by means of a pretty operation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A series with logarithms Can we express in terms of known constants the sum: $$\mathcal{S}=\sum_{n=1}^{\infty} \frac{\log (n+1)-\log n}{n}$$ First of all it converges, but no matter what I try or whatever technique I am about to apply, it fails. In the meantime, if we split it apart (let us take the partial sums) then: $$\sum_{n=1}^{N} \frac{\log (n+1) - \log n}{n}= \sum_{n=1}^{N} \frac{\log (n+1)}{n} - \sum_{n=1}^{N} \frac{\log n}{n}$$ The last sum at the RHS does resemble a zeta function derivative taken at $1$. Of course the zeta function diverges at $1$ but its PV exists, namely $\mathcal{P}(\zeta(1))=\gamma$. Maybe we have a PV for the derivative also? The other sum at the RHS is nearly the last sum at the right. This is as much as I have noticed in this sum. Any help? Addendum: I was trying to evaluate the integral: $$\mathcal{J}=\int_0^1 \frac{(1-x) \log (1-x)}{x \log x} \, {\rm d}x$$ This is what I got. \begin{align*} \int_{0}^{1}\frac{(1-x) \log(1-x)}{x \log x} \, {\rm d}x &=-\int_{0}^{1} \frac{1-x}{x \log x} \sum_{n=1}^{\infty} \frac{x^n}{n} \, {\rm d}x \\ &= -\sum_{n=1}^{\infty}\frac{1}{n} \int_{0}^{1}\frac{x^{n-1} (1-x)}{\log x} \, {\rm d}x\\ &=\sum_{n=1}^{\infty} \frac{1}{n} \int_0^1 \frac{x^n-x^{n-1}}{\log x} \, {\rm d}x \\ &\overset{(*)}{=} \sum_{n=1}^{\infty} \frac{\log(n+1) -\log n}{n} \\ &= ? \end{align*} $(*)$ since it is quite easy to see that: $$\int_{0}^{1}\frac{x^a-x^{a-1}}{\log x} \, {\rm d}x = \log (a+1) - \log a , \; a \geq 1$$ due to DUTIS (differentiation under the integral sign). Maybe someone else can tackle the integral in a different manner?
We may exploit Frullani's theorem to get an integral representation of our series. $$\begin{eqnarray*}S=\sum_{n\geq 1}\frac{\log(n+1)-\log(n)}{n}&=&\int_{0}^{+\infty}\sum_{n\geq 1}\frac{e^{-nx}-e^{-(n+1)x}}{nx}\,dx\\ &=&\int_{0}^{+\infty}\frac{1-e^{-x}}{x}\left(-\log(1-e^{-x})\right)\,dx\\&=&\int_{0}^{1}\frac{x\log x}{(1-x)\log(1-x)}\,dx\tag{1}\end{eqnarray*}$$ In terms of Gregory coefficients $$ \frac{x}{\log(1-x)}=-1+\sum_{n\geq 1}|G_n|x^n\tag{2}$$ gives: $$ S = \zeta(2)+\sum_{n\geq 1}|G_n|\int_{0}^{1}\frac{x^n \log(x)}{1-x} \,dx = \boxed{\zeta(2)-\sum_{n\geq 1}|G_n|\,\zeta(2,n+1)}.\tag{3}$$
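A numeric cross-check of representation $(1)$ with mpmath (my addition):

```python
from mpmath import mp, nsum, quad, log, inf

mp.dps = 20
S = nsum(lambda n: (log(n + 1) - log(n)) / n, [1, inf])
J = quad(lambda x: x * log(x) / ((1 - x) * log(1 - x)), [0, 1])
print(S)
print(J)   # the two values agree
```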
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Arrangement of 12 boys and 2 girls in a row. 12 boys and 2 girls in a row are to be seated in such a way that at least 3 boys are present between the 2 girls. My result: Total number of arrangements = 14! P1 = number of ways girls can sit together = $2!×13!$ Now I want to find P2 the number of ways in one boy sits between the two girls and then P3 the number of ways in which two boys sit between the two girls. How to find these two?
The number of arrangements in which exactly one boy sits between the girls is $$12 \cdot 2! \cdot 12!$$ since there are twelve ways to choose the boy who sits between the girls, two ways of choosing the girl who sits to his left, one way of choosing the girl who sits to his right, and $12!$ ways of arranging the block of three people and the other eleven boys. The number of arrangements in which exactly two boys sit between the girls is $$12 \cdot 11 \cdot 2! \cdot 11!$$ since there are twelve ways to choose the boy who sits in the first seat between the two girls, eleven ways to choose the boy who sits in the second seat between the two girls, two ways to choose the girl who sits to their left, one way of choosing the girl who sits to their right, and $11!$ ways to arrange the block of four people and the other ten boys. Notice that $$14! - 2!13! - 12 \cdot 2!12! - 12 \cdot 11 \cdot 2!11! = \binom{11}{2}2!12!$$ in agreement with the answers provided by drhab and Henning Makholm.
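A quick arithmetic check of the counts above (a Python sketch of my own; `math.comb` assumes Python 3.8+):

```python
from math import factorial, comb

total       = factorial(14)                # all seatings of 14 people
adjacent    = 2 * factorial(13)            # girls sit together
one_between = 12 * 2 * factorial(12)       # exactly one boy between the girls
two_between = 12 * 11 * 2 * factorial(11)  # exactly two boys between the girls

lhs = total - adjacent - one_between - two_between
rhs = comb(11, 2) * 2 * factorial(12)
print(lhs == rhs)  # True, in agreement with the other answers
```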
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
nilpotent linear transformation and invariant subspaces I'm trying to prove a biconditional statement about a nilpotent linear transformation, and I think I have already proved it one way, but I'm stuck on the other. The statement is as follows: Let $\phi: \mathbb{R}^3 \rightarrow \mathbb{R}^3$ be a nilpotent linear transformation; then $\phi^2 = 0$ if and only if $\phi$ has infinitely many $\phi$-invariant subspaces. I already proved that if $\phi^2 = 0$, then there exist infinitely many invariant subspaces; my proof is as follows: We know that $\phi^2 = 0$, so $\phi$ is nilpotent and therefore there exists a basis $B = \{ v_1, v_2,v_3\}$ such that for all $v_i \in B$ we have $\phi(v_i) = v_{i+1}$ or $\phi(v_i)=0$. One can easily find that because $\phi^2 = 0$, we have $\phi(v_1) = v_2$ and $\phi(v_2)=\phi(v_3)=0$. Consider now all hyperplanes generated by $v_2$ and a vector $b$ such that $b = \lambda \cdot v_1 + \mu \cdot v_3$ with $\lambda \neq 0$. All these hyperplanes contain the line $L_{v_2} = \{ \lambda \cdot v_2 \mid \lambda \in \mathbb{R}\}$. By a straightforward calculation we also find that all these hyperplanes are mapped by $\phi$ into this line, so they are all $\phi$-invariant, and there are infinitely many of them. I'm not quite sure whether this proof is correct, and I'm stuck on the reverse: I don't even know where to start with that statement. Also, I'd love some good tips on how to prove the converse statement without using the Jordan canonical form, considering that it hasn't come up yet in the textbook that I'm using. Any help would be greatly appreciated.
Your first proof is correct. In other words, if $\phi^2 = 0$, then the dimension of the kernel is at least $2$. On the other hand, suppose $\phi^2$ fails to be zero. Then every nonzero invariant subspace of $\phi$ contains an eigenvector, but the only eigenvalue $\phi$ can have is $0$. So every nonzero invariant subspace of $\phi$ contains $\ker(\phi)$, which in this case is a $1$-dimensional subspace. So there is exactly one $1$-dimensional invariant subspace. Now consider any vector $v$ for which $\phi^2(v) \neq 0$. We could use such a vector to build a basis $(v,\phi(v),\phi^2(v))$, so the only invariant subspace containing $v$ is $\Bbb R^3$. Hence any two-dimensional invariant subspace is a subset of $\ker \phi^2$, which is itself a $2$-dimensional invariant subspace. So there is exactly one $2$-dimensional invariant subspace.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why can we change a limit's function/expression and claim that the limits are identical? Say you have the limit as $x$ approaches $0$ of $x$. You could just write it as $\frac{1}{\frac{1}{x}}$, and then the expression would be undefined at $x=0$. So what are you really doing when you "rearrange" an expression or function so its limit "works" and doesn't have any division by zero? Why can we suddenly change one expression to another, when they are not exactly the same, and say the limit is the same? For example, the expression $x$ is not equal to the expression $\frac{1}{\frac{1}{x}}$, because the latter is not valid for $x=0.$
Remark: sort of a long comment. What you are rediscovering are so-called removable singularities: Note that the functions $$f(x)=x$$ and $$\tilde{f}(x)=\frac{1}{\frac{1}{x}}$$ are not the same, as their domains of definition differ: the first is defined for all real numbers, whereas the second is not defined at zero (as you noticed). But one can adjust the second by defining $$\bar{f}(x):=\begin{cases}\tilde{f}(x)&, x\neq 0\\0 &, x=0\end{cases}.$$ As explained in the other answers, $\lim_{x\rightarrow 0}\bar{f}(x)=0,$ which implies that the function is continuous at $0$ (as was clearly expected). Such an adjustment is called removal of the singularity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Non-negative, integrable random variables which converge in probability and whose expected values have a finite limit Suppose we have a sequence $X_1, X_2,...$ of non-negative, real random variables (not necessarily increasing) in $L^1$ which converge in probability to an integrable, non-negative random variable $X \in L^1$. Moreover, let's assume \begin{equation} E(X_n) \rightarrow E(X). \end{equation} Since the sequence converges in probability, there exists a subsequence converging to $X$ a.s.. This, together with $E(X_n) \rightarrow E(X)$, implies that this subsequence converges in $L^1$ to $X$. But how can one prove that $X_n \rightarrow X$ in $L^1$?
It seems that you know how to handle the case where the convergence in probability is replaced by almost sure convergence. Let's do the general case. As David Mitra suggests, the key point is to extract an almost everywhere convergent subsequence. Suppose that we do not have the convergence in $\mathbb L^1$. Then there exists a positive $\delta$ and an increasing sequence of integers $\left(n_j\right)_{j\geqslant 1}$ such that for any $j\geqslant 1$, $$\lVert X_{n_j}-X\rVert_1\gt \delta.$$ Now define $Y_j :=X_{n_j}$. We have for any $j\geqslant 1$, $$\lVert Y_j-X\rVert_1\gt \delta.$$ Moreover, the sequence $\left( Y_j\right)_{j\geqslant 1}$ converges to $X$ in probability, hence we can find a subsequence $\left( Y_{j_l} \right)_{l\geqslant 1}$ which converges almost surely to $X$. The fact that $$\lVert Y_{j_l} -X\rVert_1\gt \delta$$ for any $l$ together with the case mentioned in the opening post gives a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What are the constraints on $\alpha$ so that $AX=B$ has a solution? I found the following problem and I'm a little confused. Consider $$A= \left( \begin{array}{cccc} 3 & 2 & -1 & 5 \\ 1 & -1 & 2 & 2\\ 0 & 5 & 7 & \alpha \end{array} \right)$$ and $$B= \left( \begin{array}{cc} 0 & 3 \\ 0 & -1 \\ 0 & 6 \end{array} \right)$$ What are the constraints on $\alpha$ so that the matrix equation $AX=B$ has a solution? Since neither $A$ nor $B$ is square, I can't take their inverses. Is the problem wrong?
Ignoring the fourth column, notice that $$\begin{pmatrix} 3 & 2 & -1 \\ 1 & -1 & 2\\ 0 & 5 & 7 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}=\begin{pmatrix}0 \\ 0 \\ 0 \end{pmatrix}$$ and $$\begin{pmatrix} 3 & 2 & -1 \\ 1 & -1 & 2\\ 0 & 5 & 7 \end{pmatrix} \begin{pmatrix} \frac15 \\ \frac65 \\ 0 \end{pmatrix}=\begin{pmatrix}3 \\ -1 \\ 6 \end{pmatrix}.$$ Can you comment on whether $\alpha$ influences the existence of solutions? Are you able to construct an $X$ for the original problem? Remark: In general, you might like to perform row operations on the system of equations. Tips: If $A_1$ is a matrix that consists of some columns of $A$ and $A_1$ is non-singular, a solution to $AX=B$ always exists. Do you see why?
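To make the hint concrete, here is a small NumPy sketch (my own illustration, not part of the hint): it builds an explicit $X$ from the invertible $3\times3$ block formed by the first three columns of $A$, padding the fourth row of $X$ with zeros, so the value of $\alpha$ plays no role at all.

```python
import numpy as np

alpha = 7.0  # arbitrary: alpha does not affect solvability
A = np.array([[3, 2, -1, 5],
              [1, -1, 2, 2],
              [0, 5, 7, alpha]])
B = np.array([[0, 3],
              [0, -1],
              [0, 6]], dtype=float)

# Solve using only the invertible 3x3 block; the fourth row of X stays zero.
X = np.zeros((4, 2))
X[:3, :] = np.linalg.solve(A[:, :3], B)
print(np.allclose(A @ X, B))  # True, for any alpha
```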
{ "language": "en", "url": "https://math.stackexchange.com/questions/1865900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why would we use the radius of a circle instead of the diameter when calculating circumference? Forgive me if this question is a little too strange or maybe even off. Mathematics has never been my strong point, but I definitely think it's the coolest... Anyway, I was looking into tau, pi's up-and-coming sibling. I started rethinking why pi worked. The thing about tau is that it supposedly skips the step of doubling πr, since tau is twice pi. I tried this, and it works! (Of course it works. The fact that this surprises and amazes me shows how little I get out...) Then I started wondering why we do 2πr instead of πd. It does give the same answer... I checked. Is there any reason why using the radius is preferred over using the diameter? Here's my work: $r = 6$ (ergo the diameter must equal $12$), and $$C = 2\pi r = 37.68, \qquad C = \tau r = 37.68, \qquad C = \pi d = 37.68.$$
You can define a circle knowing the centre and the radius (distance $r$).   A circle is the set of all points, on a 2D-plane, at distance $r$ from the centre. That's a concise and elegant definition; try doing so using the diameter (distance $d$). Then, having defined circles using the radius, it becomes convenient to also define the radian measure of angles in terms of the radius of a circle.   One radian is the measure of an angle subtended at the centre of a circle by an arc length equal to the radius. Then we asked: what is the radian measure of a straight angle (that formed by two rays of a line)?   Well, it is that irrational number we have decided to call $\pi$ (the first letter of the Greek word for perimeter). What then is the angle subtended by the circumference of a circle?   Well, we could call it $\tau$, but it is $2\pi$, and we just happened to have named $\pi$ first. Thus $$\begin{array}{cc}C&=&2\pi r &=& \pi d &=& \tfrac 12 \tau d &=& \tau r & \text{circumference of circle}\\[1ex]A&=& \pi r^2 &=& \tfrac 14 \pi d^2 &=& \tfrac 18 \tau d^2 &=& \tfrac 12\tau r^2 & \text{area of circle/disc} \\[2ex] S &=& 4\pi r^2 &=& \pi d^2 &=&\tfrac 12 \tau d^2 &=&2\tau r^2 & \text{surface area of sphere}\\[1ex] V &=& \tfrac 43 \pi r^3 &=& \tfrac 16 \pi d^3 &=&\tfrac 1{12}\tau d^3 &=& \tfrac 23 \tau r^3 & \text{volume of sphere/ball}\end{array}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\int\sin^{7}x\cos^4{x}\,dx$ $$\int \sin^{7}x\cos^4{x}\,dx$$ \begin{align*} \int \sin^{7}x\cos^4{x}\,dx&= \int(\sin^{2}x)^3 \cos^4{x}\sin x \,dx\\ &=\int(1-\cos^{2}x)^{3}\cos^4{x}\sin x\,dx,\quad u=\cos x, du=-\sin x\,dx\\ &=-\int(1-u^{2})^3u^4{x}\,du\\ &=-\int (1-3u^2+3u^4-u^6)u^4\,du\\ &=u^4-3u^6+3u^8-u^{10}\\ &=\frac{u^5}{5}-\frac{3u^7}{7}+\frac{3u^9}{9}-\frac{u^{11}}{11}+c \end{align*} So we get: $$\frac{\cos^5x}{5}-\frac{3\cos^7x}{7}+\frac{3\cos^{9}x}{9}-\frac{\cos^{11}x}{11}+c$$ Where did I go wrong?
is it correct? No, it isn't. You have errors in the following part : $$=-\int(1-u^{2})^3u^4{x}du=-\int (1-3u^2+3u^4-u^6)u^4du$$ $$=u^4-3u^6+3u^8-u^{10}=\frac{u^5}{5}-\frac{3u^7}{7}+\frac{3u^9}{9}-\frac{u^{11}}{11}+c$$ They should be $$-\int(1-u^{2})^3u^4du=-\int (1-3u^2+3u^4-u^6)u^4du$$ $$=\int \left(-u^4+3u^6-3u^8+u^{10}\right)du=-\frac{u^5}{5}+\frac{3u^7}{7}-\frac{3u^9}{9}+\frac{u^{11}}{11}+c$$
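A symbolic double-check of the corrected antiderivative (a SymPy sketch of my own addition): differentiating it should reproduce the integrand $-(1-u^2)^3u^4$, so the sum below must vanish.

```python
import sympy as sp

u = sp.symbols('u')
F = -u**5/5 + 3*u**7/7 - u**9/3 + u**11/11   # note 3u^9/9 = u^9/3
print(sp.simplify(sp.diff(F, u) + (1 - u**2)**3 * u**4))  # 0
```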
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Notice that the sum of the powers of $2$ from $1$, which is $2^0$, to $2^{n-1}$ is equal to $2^n-1$. Please explain the statement in quotation marks! "Notice that the sum of the powers of $2$ from $1$, which is $2^0$, to $2^{n-1}$ is equal to $2^n-1$." In a very simple case, for $n = 3$: $1 + 2 + 4 = 7 = 8 - 1$.
If you’re familiar with binary, simply note that $$2^n-1 = {1\underbrace{000\dots00}_{\text{$n$ zeros}}}\text{$_2$} - 1 = \underbrace{111\dots11}_{\text{$n$ ones}}\text{$_2$}$$
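If you prefer to see it computationally, here is a one-line verification of the identity for small $n$ (a Python sketch of my own):

```python
for n in range(1, 11):
    assert sum(2**k for k in range(n)) == 2**n - 1   # 1 + 2 + ... + 2^(n-1) = 2^n - 1
print("verified for n = 1..10")
```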
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
related rates- rate a man's shadow changes as he walks past a lamp post (is a fixed distance away from it) A $186$ cm man walks past a light mounted $5$ m up on the wall of a building, walking at $2\ m/s$ parallel to the wall on a path that is $2$ m from the wall. At what rate is the length of his shadow changing when he is $4$ m past the point where the light is mounted? ($4$ m is the distance along the wall). I have been doing related rates in my year 12 maths class and I know how to figure it out if the person is walking towards or away from the light but I've never come across a question like this where they are walking past the light, and have no idea where to begin. Some help getting started would be a massive help. (does it still involve similar triangles but in a 3D way?)
The first thing to do with a problem like this is to draw a diagram. The man (at $M$) is walking parallel to the wall at $2\mbox{ m/s}$ and $2\mbox{ m}$ from the wall. The horizontal (plan) distance between the point on the wall below the lamp and the man's feet is: $$ p=\sqrt{2^2+x^2} $$ where $x$ is the distance of the man past his closest point of approach to the lamp. Then the triangle formed by the lamp, the man's head and the point on the wall below the lamp at the same height as the man is similar to triangle LSO and to the triangle formed by the man's head, his feet and $S$. So the length of the shadow $s=\mbox{ MS}$ satisfies: $$ \frac{(5-1.86)}{p}=\frac{1.86}{s} $$ So the length of the shadow is: $$ s=\frac{1.86\sqrt{2^2+x^2}}{3.14} $$ Now you are asked to find $\frac{ds}{dt}$ when $x=4$ and $\frac{dx}{dt}=2$
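To finish the computation numerically, here is a short SymPy sketch (my own addition; it assumes the man passes the closest point at $t=0$, so $x=2t$ and $x=4$ corresponds to $t=2$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = 2 * t                               # metres past closest approach, walking at 2 m/s
s = 1.86 * sp.sqrt(4 + x**2) / 3.14     # shadow length from the similar triangles
print(sp.diff(s, t).subs(t, 2))         # ds/dt at x = 4: about 1.06 m/s
```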
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Derivation of the Euler Lagrange Equation I'm self studying a little bit of physics at the moment and for that I needed the derivation of the Euler Lagrange Equation. I understand everything but for a little step in the proof, maybe someone can help me. That's where I am: $$ \frac{dJ(\varepsilon=0 )}{d\varepsilon } = \int_{a}^{b}\left[\eta(x)\frac{\partial F}{\partial y}+\eta'(x)\frac{\partial F}{\partial y'}\right]dx = 0 $$ Then the second term is integrated by parts: $$ \frac{dJ(\varepsilon =0)}{d\varepsilon } = \int_{a}^{b}\eta(x)\frac{\partial F}{\partial y}\,dx + \left [ \frac{\partial F}{\partial y'}\eta(x) \right ]_a^b-\int_{a}^{b}\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\eta(x)\,dx = 0 $$ And the equation is simplified to: $$ \frac{dJ(\varepsilon =0)}{d\varepsilon } = \int_{a}^{b}\left[\frac{\partial F}{\partial y}-\frac{d}{dx}\left(\frac{\partial F}{\partial y'}\right)\right]\eta(x)\,dx = 0 $$ What I don't understand is why you can just omit the $$ \left [ \frac{\partial F}{\partial y'}\eta(x) \right ]_a^b $$ Why does that equal zero, while the integral following it, which also goes from $a$ to $b$, isn't left out? I think it's pretty obvious, but I'm just too stupid to see it. I'd appreciate it if someone could help me!
In the Euler-Lagrange equation, the function $\eta$ has by hypothesis the following properties: * *$\eta$ is continuously differentiable (for the derivation to be rigorous) *$\eta$ satisfies the boundary conditions $\eta(a) = \eta(b) = 0$. In addition, $F$ should have continuous partial derivatives. This is why $\left [ \frac{\partial F}{\partial y'}\color{red}{\eta(x)} \right ]_a^b$ simplifies to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Extreme points of the unit ball of the space $c_0 = \{ \{x_n\}_{n=1}^\infty \in \ell^\infty : \lim_{n\to\infty} x_n = 0\}$ I want to prove that the closed unit ball of $$ c_0 = \{ \{x_n\}_{n=1}^\infty \in \ell^\infty : \lim_{n\to\infty} x_n = 0\} $$ does not have any extreme points. Would you please help me? (Extreme point) Let $X$ be a vector space and $A \subset X$ be convex. We say $x\in A$ is an extreme point if $x = (1-t)y + tz$ with $y,z\in A$, $t\in(0,1)$ implies $y = z = x$. What I tried is as follows: Let $B$ be the closed unit ball of $c_0$, that is, $$B = \{\{x_n\}_{n=1}^\infty \in \ell^\infty : \lim_{n\to \infty} x_n = 0 \text{ and } \|x\|_{\ell^\infty}\le 1\}.$$ If there is an extreme point $b = \{b_n\}_{n=1}^\infty\in B$, then $$ b = (1-t)y + tz, \quad y,z\in B,\quad t\in (0,1) $$ implies $$ y = z = b. $$ But I cannot do any more here. Would you please help me?
Hint: if $x$ is in the unit ball of $c_0$, there is some $i$ such that $|x_i| < 1$. What happens if you increase or decrease $x_i$ a little bit?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve this Sturm–Liouville problem? $\dfrac{d^2\phi}{dx^2} + (\lambda - x^4)\phi = 0$ I would really appreciate a solution or a significant hint, because I couldn't find anything helpful in my textbook. Thanks!
Hint: Let $\phi=e^{ax^3}y$ , Then $\dfrac{d\phi}{dx}=e^{ax^3}\dfrac{dy}{dx}+3ax^2e^{ax^3}y$ $\dfrac{d^2\phi}{dx^2}=e^{ax^3}\dfrac{d^2y}{dx^2}+3ax^2e^{ax^3}\dfrac{dy}{dx}+3ax^2e^{ax^3}\dfrac{dy}{dx}+(9a^2x^4+6ax)e^{ax^3}y=e^{ax^3}\dfrac{d^2y}{dx^2}+6ax^2e^{ax^3}\dfrac{dy}{dx}+(9a^2x^4+6ax)e^{ax^3}y$ $\therefore e^{ax^3}\dfrac{d^2y}{dx^2}+6ax^2e^{ax^3}\dfrac{dy}{dx}+(9a^2x^4+6ax)e^{ax^3}y+(\lambda-x^4)e^{ax^3}y=0$ $\dfrac{d^2y}{dx^2}+6ax^2\dfrac{dy}{dx}+((9a^2-1)x^4+6ax+\lambda)y=0$ Choose $9a^2-1=0$ , i.e. $a=\dfrac{1}{3}$ , the ODE becomes $\dfrac{d^2y}{dx^2}+2x^2\dfrac{dy}{dx}+(2x+\lambda)y=0$ Let $t=bx$ , Then $b^2\dfrac{d^2y}{dt^2}+\dfrac{2t^2}{b}\dfrac{dy}{dt}+\left(\dfrac{2t}{b}+\lambda\right)y=0$ $\dfrac{d^2y}{dt^2}+\dfrac{2t^2}{b^3}\dfrac{dy}{dt}+\left(\dfrac{2t}{b^3}+\dfrac{\lambda}{b^2}\right)y=0$ Choose $b^3=2$ , i.e. $b=\sqrt[3]2$ , the ODE becomes $\dfrac{d^2y}{dt^2}+t^2\dfrac{dy}{dt}+\left(t+\dfrac{\lambda}{\sqrt[3]4}\right)y=0$ Which relates to Heun's Triconfluent Equation.
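The first substitution step can be verified symbolically; the following SymPy sketch (my own addition) plugs $\phi=e^{x^3/3}\,y(x)$ into the original ODE and recovers the reduced equation $y''+2x^2y'+(2x+\lambda)y=0$:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
y = sp.Function('y')
phi = sp.exp(x**3 / 3) * y(x)                 # the substitution with a = 1/3
lhs = sp.diff(phi, x, 2) + (lam - x**4) * phi
print(sp.expand(sp.simplify(lhs / sp.exp(x**3 / 3))))
# -> y'' + 2*x**2*y' + (2*x + lambda)*y, up to term ordering
```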
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral of solid angle of closed surface from the exterior Jackson derives Gauss's Law for electrostatics by transforming the surface integral of the electric field due to a single point charge over a closed surface into the integral of the solid angle, demonstrating that the integral depends only on the charge enclosed by the surface. $$\mathbf{E}\cdot\mathbf{n}\, da = q\frac{\cos\theta}{r^2}\,da$$ $$\mathbf{E}\cdot\mathbf{n}\, da = q\, d\Omega$$ And, apparently, it is "easy to see" that $$\oint_S \mathbf{E}\cdot\mathbf{n}\,da=\begin{cases}\begin{align}&4\pi q & ~~~~~~~~~~~~~~~~~~~~&\text{if q lies inside S} \\&0 & ~~~~~~~~~~~~~~~~~~~~~& \text{if q lies outside S}\\\end{align}\end{cases}$$ Now, intuitively, this is pretty obvious, but I have no idea how to demonstrate that the integral of the solid angle of some closed surface at a point outside the surface is equal to zero. Even inside the surface, I wouldn't know how to show that the integral is equal to $4\pi$ for an implicit function $f(r,\theta,\phi)=c$ where I can't just use the spherical Jacobian transformation. I'd like to be able to somehow generically parameterize a closed surface, or find a generic Jacobian for the surface area element, but it's just really not clear where to begin. I'd like to be able to show that $$\frac{\hat{r}\cdot\hat{n}\,da}{r^2} = \nabla\times A$$ when the point P is outside the surface, but no approach is presenting itself. Thanks
Jacob, you probably won't like this, but the easiest way I see to do the calculation rigorously is to use differential forms. (One reference that's somewhat accessible is my textbook Multivariable Mathematics ..., but you can find plenty of others.) If $S$ is a closed (oriented) surface in $\Bbb R^3$ not containing the origin and $S^2$ is the unit sphere, then we consider the mapping $f\colon S\to S^2$ given by $f(\mathbf x)=\dfrac{\mathbf x}{\|\mathbf x\|}=\dfrac{\mathbf x}r$. Let $\omega = x\,dy\wedge dz + y\,dz\wedge dx + z\,dx\wedge dy$ be the area $2$-form on $S^2$. (We can also think of $\omega$ as the restriction to the unit sphere of the $2$-form $$\eta =\frac{x\,dy\wedge dz + y\,dz\wedge dx + z\,dx\wedge dy}{r^3}$$ on $\Bbb R^3-\{0\}$. This is the $2$-form corresponding to the electric field of a point charge with strength $1$ at the origin.) It follows from the change of variables theorem that $$\int_S f^*\omega = \int_{S^2}\omega = 4\pi.$$ (This takes care of the subtle cancellation issues you were worrying about when $f$ is not one-to-one. Because $S$ is a smooth surface, the projection map $f$ has degree $1$.) The crucial calculation (which embodies the surface area statement in your question) is this: $f^*\omega = \eta$. To verify this, you'll need to know that $f^*(\mathbf x) = f(\mathbf x) = \dfrac{\mathbf x}r$, and $f^*(d\mathbf x) = df = \dfrac{d\mathbf x}r - \dfrac{\mathbf x}{r^2}dr$. So (take a deep breath), remembering that $\wedge$ is skew-symmetric: \begin{align*}f^*\omega &= \frac xr\left(\frac{dy}r-\frac y{r^2}dr\right)\wedge\left(\frac{dz}r-\frac z{r^2}dr\right)+\frac yr\left(\frac{dz}r-\frac z{r^2}dr\right)\wedge\left(\frac{dx}r-\frac x{r^2}dr\right)+\\& \hspace{1.5in}\frac zr\left(\frac{dx}r-\frac x{r^2}dr\right)\wedge\left(\frac{dy}r-\frac y{r^2}dr\right)\\ &= \eta - \frac1{r^4}\left(xz\,dy\wedge dr+xy\,dr\wedge dz + xy\,dz\wedge dr + yz\,dr\wedge dx + yz\,dx\wedge dr + xz\,dr\wedge dy\right) \\ &= \eta,\end{align*} as required. For further motivation, in spherical coordinates $\eta = \sin\phi\,d\phi\wedge d\theta$ (using mathematicians' convention that $\phi$ is the angle from the positive $z$-axis). But you still ultimately need to use my remark above that the degree of $f$ is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given a function, how can one tell if it doesn't have a limit at $x=a$ due to a discontinuity? For example, if you have the $$\lim_{x \to 2} \frac{1}{x-2},$$ the limits approaching from the positive and negative sides are different. You can tell because the $x-2$ becomes $0$ and the entire binomial is raised to an odd power. How would you tell if a function has a removable (point) discontinuity, a jump (step) discontinuity, or a vertical asymptote, such that it does not have a valid limit at a given point?
You have to look at the one-sided limits $$ \lim_{x \to a^-} f(x) \quad \text{and} \quad \lim_{x \to a^+} f(x). $$ The 2-sided limit exists iff both one-sided limits exist and are equal to each other; the function is continuous at $a$ iff, in addition, $f(a)$ exists and has that common value. If the one-sided limits exist and are equal, but $f(a)$ has a different value (or is undefined), you have a point discontinuity. When the limits are not equal (like in the greatest integer function) it is a jump discontinuity. When the limits are infinite and different (i.e. one is $+\infty$ and one is $-\infty$) you get an asymptote in different directions from each side...
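If you want to experiment with one-sided limits yourself, a computer algebra system handles them directly (a SymPy sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x - 2)
print(sp.limit(f, x, 2, dir='-'))  # -oo : left-hand limit
print(sp.limit(f, x, 2, dir='+'))  #  oo : right-hand limit, so no 2-sided limit
```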
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove by induction that $a^{4n+1}-a$ is divisible by 30 for any $a$ and $n\ge1$ It is valid for $n=1$, and if I assume that $a^{4n+1}-a=30k$ for some $n$, I continue from there with $a^{4(n+1)+1}-a=a^{4n+5}-a=a^4\cdot a^{4n+1}-a$. Then I try to write this in the form $a^4(a^{4n+1}-a)-X$ so I could use my assumption, but I can't find any $X$ that would set the two expressions equal. Then I tried factoring $a^{4n+1}-a=30k$ as $a(a^n-1)(a^n+1)(a^{2n}+1)=30k$ and using that as my assumption. Then I also factor $a^{4n+5}-a=a(a^{n+1}-1)(a^{n+1}+1)(a^{2n+2}+1)$ and again I'm stuck, not being able to use my assumption. Please note that I strictly need to use induction in this problem.
Here is how I would write up the main part of the induction proof (DeepSea and Bill handle the base case easily), in the event that you may find it useful: \begin{align} a^{4k+5}-a&= a^4(a^{4k+1}-a)+a^5-a\tag{rearrange}\\[1em] &= a^4(30\eta)+a^5-a\tag{by ind. hyp.; $\eta\in\mathbb{Z}$}\\[1em] &= a^4(30\eta)+30\ell\tag{by base case; $\ell\in\mathbb{Z}$}\\[1em] &= 30(a^4\eta+\ell)\tag{factor out $30$}\\[1em] &= 30\gamma.\tag{$\gamma\in\mathbb{Z}$} \end{align}
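As a quick brute-force confirmation for small values (a Python sketch of my own; of course this is only evidence, and the induction above is the proof):

```python
for a in range(-50, 51):
    for n in range(1, 6):
        assert (a**(4*n + 1) - a) % 30 == 0
print("a^(4n+1) - a is divisible by 30 for all tested a and n")
```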
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
I don't know what this symbol in root systems means (of Coxeter groups) I'm reading Humphreys, Reflection groups and Coxeter groups. In the section "Construction of root systems", the book uses the symbol $ \mathop {\alpha}\limits^{\sim} $ to denote a special element, but I don't know what it is. I looked for it but found nothing about it.
I believe it is just a formal symbol that represents the element. They wanted something that was a modified $\alpha$ for continuity reasons, and decided to go with that symbol for unknown reasons. It's the same as if they had written $\alpha'$ (or even $x$ for that matter)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1866955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $E=\mathbb{Q}(2^{1/3})$. What is the normal closure of $E/E$? Let $E=\mathbb{Q}(2^{1/3})$. What is the normal closure of $E/E$? My thought is that it is $A(2^{1/3})$, where $A$ is an algebraic closure of $\mathbb{Q}$. But I am not sure whether it is correct, and why... Thanks for your time.
The normal closure of $E/E$ is $E$. Note that a normal closure of a finite extension is always a finite extension, which rules out your answer. Also note that $A[\sqrt[3]{2}]$ is in fact equal to $A$. Recall a characterization of normality: "Every irreducible polynomial in $K[X]$ that has one root in $L$ has all of its roots in $L$; that is, it decomposes into linear factors in $L[X]$." Yet the only way an irreducible polynomial $f \in E[X]$ can have a root in $E$ is if it is of degree $1$. Note that the polynomial $x^3 -2$ is not irreducible over $E$, so it is no longer a problem that it does not decompose. Or, for example, the polynomial $x^2 - 2$ has no root in $E$, so it is not relevant that it does not decompose.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
counting number of steps using permutation-combination We need to climb 10 stairs. At each step, we can climb one stair or jump two stairs. In how many different ways can we climb the ten stairs? How can this problem be solved easily, using little calculation?
Consider $f(n)$ as the number of ways to climb $n$ stairs. We note that $f(1) = 1$ and $f(2) = 2$. Now consider the case when $n>2$. The first action one can take is climb $1$ or $2$ stairs. Suppose $1$ stair is climbed for the first step. Then there are $n-1$ stairs left and thus there are $f(n-1)$ alternate ways to climb $n$ stairs with the first step being to take $1$ stair. Similarly, there are $f(n-2)$ alternate ways to climb $n$ stairs with the first step being to take $2$ stairs. Then we have $$f(n) = f(n-1) + f(n-2)$$ This is the Fibonacci relation. Thus, $f(10) = 89$.
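The recurrence translates directly into a few lines of code; this sketch (my own addition) computes $f(10)=89$ iteratively:

```python
def climbs(n):
    """Number of ways to climb n stairs taking 1 or 2 stairs per move."""
    a, b = 1, 2                # f(1), f(2)
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b        # f(k) = f(k-1) + f(k-2)
    return b

print(climbs(10))  # 89
```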
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem including three circles which touch each other externally The circles $C_{1},C_{2},C_{3}$ with radii $1,2,3$ respectively, touch each other externally. The centres of $C_{1}$ and $C_{2}$ lie on the x-axis, while $C_{3}$ touches them from the top. Find the ordinate of the centre of the circle that lies in the region enclosed by the circles $C_{1},C_{2},C_{3}$ and touches all of them. Okay, I can see that the lines joining the centres of the circles form a right-angled triangle with sides $3,4,5$. But I can't progress further. Any hint or solution?
HINT...if $(a,b)$ is the centre of the circle and its radius is $r$ you can set up and solve a system of three simultaneous equations. So for example, for circle $C_1$ you have $$(a+1)^2+b^2=(r+1)^2$$ and likewise for the other two circles. Of course, quoting Descartes' Theorem will be a short-cut to finding $r$ but you will still need two of these equations to find $(a,b)$
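Carrying the hint through with a computer algebra system (a SymPy sketch of my own; the coordinate placement is one convenient choice): put $C_1$ (radius $1$) at the origin and $C_2$ (radius $2$) at $(3,0)$; the external-tangency conditions then force $C_3$ (radius $3$) to sit at $(0,4)$, and the three equations from the hint determine the inner circle.

```python
import sympy as sp

a, b, r = sp.symbols('a b r', positive=True)
eqs = [a**2 + b**2 - (1 + r)**2,            # tangent to C1, centre (0, 0), radius 1
       (a - 3)**2 + b**2 - (2 + r)**2,      # tangent to C2, centre (3, 0), radius 2
       a**2 + (b - 4)**2 - (3 + r)**2]      # tangent to C3, centre (0, 4), radius 3
print(sp.solve(eqs, [a, b, r], dict=True))  # a = 21/23, b = 20/23, r = 6/23
```

With this placement the ordinate of the centre is $20/23$, and the radius $6/23$ agrees with Descartes' theorem.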
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A question about the product functor on finite sets I am a beginner in Category Theory so please excuse me if this is a trivial question. Let $\mathbf{FSet}$ denote the category of finite sets. The product functor $X\times -:\mathbf{FSet}\to \mathbf{FSet}$ has a right adjoint for every finite set $X$. My question is, does it also have a left adjoint? Thanks!
No, it doesn't unless $X$ is a singleton. The very first condition to check for a functor to have a left adjoint is that it should preserve limits (such as products, equalizers...). But clearly in general, if $X$ has at least two element, $$X \times (Y \times Z) \not\cong (X \times Y) \times (X \times Z),$$ and so the functor doesn't preserve products, thus it doesn't have a left adjoint. If $X = \varnothing$ we do have the equality above, but then $\varnothing \times \{*\} = \varnothing \neq \{*\}$ and the functor does not preserve the terminal object. So again it doesn't have a left adjoint. However if $X = \{*\}$ itself then your functor is the identity functor, which indeed has a left adjoint: itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Squeeze fractions with $a^n+b^n=c^n+d^n$ Let $0<x<y$ be real numbers. For which positive integers $n$ do there always exist positive integers $a,b,c,d$ such that $$x<\frac ab<\frac cd<y$$ and $a^n+b^n=c^n+d^n$? For $n=1$ this is true. Pick any $a,b$ such that $x<\frac ab<y$ -- this always exists by the density of the rationals. Since $\frac{a}{b}=\frac{ka}{kb}$ for any positive integer $k$, it suffices to choose $c=ka+1$ and $d=kb-1$. Since $\lim_{k\rightarrow\infty}\frac{ka+1}{kb-1}=\frac{a}{b}$, there exists $k$ such that $\frac{ka+1}{kb-1}<y$.
Partial answer I: if $x < 1 < y$, then we can find $a,b$ with $\frac{a}{b}, \frac{b}{a}$ arbitrarily close to one, satisfying the requirements for any $n$. Then it can be seen that that it suffices to prove the result for $y<1$ or $1<x$, since we have symmetry about 1 by inversion: $$x < \frac{a}{b} < \frac{c}{d} < y < 1 \iff 1 < \frac{1}{y} < \frac{d}{c} < \frac{b}{a} < \frac{1}{x}$$ Partial answer II: it can always be done for $n=2$. First suppose we can find $a, b, c, d$ with $x < \frac{a}{b} < \frac{c}{d} < y$ s.t. $a^2 + b^2 = j^2$ and $c^2 + d^2 = k^2$ for some integers $j, k$. Then we would have $(ka)^2 + (kb)^2 = (jc)^2 + (jd)^2$, and hence we would be done. So it suffices to prove that $S = \{ \frac{a}{b} \ | \ a, b\neq0 \in \mathbb{Z}, \exists c \in \mathbb{Z} \ \text{s.t.} \ a^2 + b^2 = c^2 \}$ is dense in $(0, 1)$. To see that this is the case, consult the identity giving Pythagorean triples: $$(m^2-n^2)^2 + (2mn)^2 = (m^2+n^2)^2;$$ $$v(m,n) := \frac{m^2-n^2}{2mn} = \frac{m}{2n} - \frac{n}{2m}$$ Given some small positive $\varepsilon$, we can take $n$ s.t. $\frac{1}{n} < \varepsilon$. Observe that the derivative of $v$ with respect to $m$ is always positive but strictly decreasing, going to $\frac{1}{2n}$ from above as $m$ goes to infinity. Thus for any positive $k$ we can infer that: $$\frac{1}{2n} < v(n+k+1,n)-v(n+k,n) \leq v(n+1,n)-v(n,n) = \frac{2n+1}{2n+2} \cdot \frac{1}{n} < \varepsilon$$ Since $v(n,n) = 0$, it follows that any value in $(0,1)$ is within $\varepsilon$ of a value in $\{ v(m, n) \ | \ m, n \in \mathbb{Z}, m > n \} \subset S$, so we are done.
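If one just wants explicit numbers, a small brute-force search (my own Python sketch, independent of the construction above) finds quadruples for $n=2$ directly:

```python
from collections import defaultdict

def find_pairs(x, y, bound=80):
    """Look for a/b < c/d in (x, y) with a^2 + b^2 = c^2 + d^2."""
    by_norm = defaultdict(set)
    for a in range(1, bound):
        for b in range(1, bound):
            if x < a / b < y:
                by_norm[a * a + b * b].add((a, b))
    for s, pairs in sorted(by_norm.items()):
        if len({a / b for a, b in pairs}) >= 2:
            return s, sorted(pairs, key=lambda p: p[0] / p[1])[:2]

print(find_pairs(0.3, 0.4))  # e.g. (5525, [(22, 71), (25, 70)])
```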
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding a tricky composition of two piecewise functions I have a question about finding the formula for a composition of two piecewise functions. The functions are defined as follows: $$f(x) = \begin{cases} 2x+1, & \text{if $x \le 0$} \\ x^2, & \text{if $x > 0$} \end{cases}$$ $$g(x) = \begin{cases} -x, & \text{if $x < 2$} \\ 5, & \text{if $x \ge 2$} \end{cases}$$ My main question lies in how to approach finding the formula for the composition $g(f(x))$. I have seen a couple of other examples of questions like this online, but the domains of each piecewise function were the same, so the compositions weren't difficult to determine. In this case, I have assumed that, in finding $g(f(x))$, one must consider only the domain of $f(x)$. Thus, I think it would make sense to test for individual cases: for example, I would try to find $g(f(x))$ when $x \le 0$. $g(f(x))$ when $x \le 0$ would thus be $-2x-1$, right? However, I feel like I'm missing something critical, because I'm just assuming that the condition $x < 2$ for $g(x)$ can just be treated as $x \le 0$ in this case. Sorry for my rambling, and many thanks to anyone who can help lead me to the solution.
You're correct about the value of $g(f(x))$ when $x\leq 0$; since $f(x)$ will be at most $2\cdot0+1=1$, $g$ is only going to evaluate $f(x)$ according to the definition for $x<2$. Testing for cases here is a good approach, and you've just resolved the $x\leq0$ case. When $x>0$, consider the values of $f(x)$: when will they be less than $2$, and when will they be greater? This will determine where $g(f(x))$ takes on its values.
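Working out those cases gives a complete piecewise formula, which the following sketch (my own addition) checks against the direct composition at a few sample points:

```python
def f(x):
    return 2 * x + 1 if x <= 0 else x**2

def g(x):
    return -x if x < 2 else 5

def gof(x):
    """Derived piecewise formula for g(f(x))."""
    if x <= 0:
        return -(2 * x + 1)        # f(x) = 2x + 1 <= 1 < 2
    elif x < 2**0.5:
        return -x**2               # 0 < f(x) = x^2 < 2
    else:
        return 5                   # f(x) = x^2 >= 2

for x in (-3, -0.5, 0, 0.7, 1.4, 2**0.5, 5):
    assert g(f(x)) == gof(x)
print("formula matches the composition at all sample points")
```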
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Gradient and Hessian of function on matrix domain Let $A \in R^{k \times p}$. Define $f(X) : R^{p \times k} \rightarrow R$ to be $f(X) = \log \det(XA + I_{p})$, where $I_{p}$ is a $p \times p$ identity matrix. I want to know what is the gradient and hessian of $f(X)$ with respect to $X$. Thank you!
Let $f(X)=\log(|\det(I+XA)|)$; we calculate $Df_X$ at a point $X$ s.t. $I+XA$ is invertible, that is, $-1$ is not an eigenvalue of $XA$. $Df_X:H\in M_{p,k}\rightarrow tr(HA(I+XA)^{-1})=tr((I+XA)^{-T}A^TH^T)$ or $Df_X(H)=<(I+XA)^{-T}A^T,H>$ (the scalar product over the matrices). In other words, the gradient of $f$ is $\nabla(f)(X)=(I+XA)^{-T}A^T$, which is lynn's result. The Hessian is the bilinear symmetric function: $Hess(f)(X):(H,K)\in M_{p,k}\times M_{p,k}\rightarrow -tr(HA(I+XA)^{-1}KA(I+XA)^{-1})$, which is equivalent to $\dfrac{\partial^2f}{\partial x_{i,j}\partial x_{k,l}}=-tr(E_{i,j}A(I+XA)^{-1}E_{k,l}A(I+XA)^{-1})$ where $X=[x_{i,j}]$ and $E_{i,j}=e_ie_j^T$.
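A numerical cross-check of the gradient formula against finite differences (a NumPy sketch, my own addition; the random sizes $p=4$, $k=3$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 4, 3
A = rng.standard_normal((k, p))
X = 0.1 * rng.standard_normal((p, k))

f = lambda X: np.log(abs(np.linalg.det(X @ A + np.eye(p))))
grad = np.linalg.inv(np.eye(p) + X @ A).T @ A.T   # claimed gradient, a p x k matrix

h = 1e-6
for (i, j) in [(0, 0), (2, 1), (3, 2)]:           # spot-check a few entries
    E = np.zeros_like(X); E[i, j] = h
    fd = (f(X + E) - f(X - E)) / (2 * h)           # central difference
    assert abs(fd - grad[i, j]) < 1e-5
print("finite differences agree with (I + XA)^{-T} A^T")
```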
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Summing a series of integrals I asked this question on MathOverflow, but it was off-topic there (though it is related to my research...) and I was told to ask it here. I have a series of integrals I would like to sum, but I don't understand how I would begin to do that, considering the structure of the integrals. Question: How do I sum (or at the very least approximate the sum of) these integrals? Question 2: Is this question deceptively difficult, or actually difficult? The sum is written as follows: $$\sum_{i=m}^n \left(\int_{0}^i\frac{i}{(i+x^2)^\frac{3}{2}} \, \mathrm{d}x \right)^2$$ where $m$ and $n$ are integers such that $n>m> 0$. This problem looks ridiculously difficult.
This really isn't so bad. $$\int_0^i\frac{i}{(i+x^2)^{3/2}}dx=\frac{i}{\sqrt{i(i+1)}}.$$ So, after squaring each term as in your sum, you're summing: $$\sum_{i=m}^n\left(\frac{i}{\sqrt{i(i+1)}}\right)^2=\sum_{i=m}^n\frac{i}{i+1}.$$ The latter sum unfortunately doesn't have an explicit form, unless you're willing to use digamma functions.
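A numerical confirmation (my own sketch; the use of SciPy is an assumption): each integral matches the closed form, and the sum of their squares matches $\sum_{i=m}^n \frac{i}{i+1}$.

```python
from math import sqrt
from scipy.integrate import quad

m, n = 3, 12
total = 0.0
for i in range(m, n + 1):
    val, _ = quad(lambda x, i=i: i / (i + x * x) ** 1.5, 0, i)
    assert abs(val - i / sqrt(i * (i + 1))) < 1e-10   # closed form of each integral
    total += val ** 2                                  # the original sum squares each term

print(total, sum(i / (i + 1) for i in range(m, n + 1)))  # the two agree
```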
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Correlated Brownian motions and Lévy's theorem $W^{(1)}_t$ and $W^{(2)}_t$ are two independent Brownian motions. How can I use Lévy's theorem to show that $$W_t:=\rho W^{(1)}_t+\sqrt{1-\rho^2}\, W^{(2)}_t$$ is also a Brownian motion for a given constant $\rho\in(0,1)$? Also, it is clear why the $\rho$ in front of the first Brownian term is there: to get the correlation $E[W_t\, W^{(1)}_t] = \rho\, t$. But I don't understand why the term $\sqrt{1-\rho^2}$ needs to be there in that form.
If $W_t:=\rho W^{(1)}_t+\sqrt{1-\rho^2} W^{(2)}_t$ then we can show that $W_t$ is a Brownian motion. Proof Let $(\Omega, \mathcal{F},\mathbb{P},\{\mathcal{F_t}\})$ be a filtered probability space. Clearly, $W_t$ has continuous sample paths and $W_0=0$. $$\mathbb{E}[W_t|\mathcal{F_s}]=\rho\,\mathbb{E}[W^{(1)}_t|\mathcal{F_s}]+\sqrt{1-\rho^2}\,\mathbb{E}[W^{(2)}_t|\mathcal{F_s}]=\rho W^{(1)}_s+\sqrt{1-\rho^2} W^{(2)}_s=W_s$$ So $W_t$ is a martingale. Now we should show $W_t^2-t$ is a martingale. By an application of Ito's lemma, we have $$dW_t^2=2W_tdW_t+d[W_t,W_t]$$ $$dW_t^2=2W_tdW_t+\rho^2 d[W_t^{(1)},W_t^{(1)}]+(1-\rho^2) d[W_t^{(2)},W_t^{(2)}]+2\rho\sqrt{1-\rho^2}d[W_t^{(1)},W^{(2)}_t]$$ Since $W^{(1)}_t$ and $W^{(2)}_t$ are two independent Brownian motions, $d[W_t^{(1)},W^{(2)}_t]=0$, hence $$dW_t^2=2W_tdW_t+dt$$ and consequently $$d(W_t^2-t)=2W_tdW_t+dt-dt=2W_tdW_t$$ that is, $$d(W_t^2-t)=2W_tdW_t$$ Indeed $$d(W_t^2-t)=2\rho W_t dW^{(1)}_t+2\sqrt{1-\rho^2}\,W_t dW^{(2)}_t$$ Therefore $W_t^2-t$ is a martingale (because its SDE has null drift), and by Lévy's theorem $W_t$ is a standard Brownian motion.
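A Monte Carlo illustration of both points (a NumPy sketch of my own): the $\sqrt{1-\rho^2}$ is exactly what keeps $\mathrm{Var}(W_t)=\rho^2 t+(1-\rho^2)t=t$, while $\rho$ is the correlation of $W$ with $W^{(1)}$.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, T, n = 0.6, 1.0, 1_000_000
W1 = rng.normal(0, np.sqrt(T), n)          # samples of W^(1)_T
W2 = rng.normal(0, np.sqrt(T), n)          # samples of W^(2)_T, independent
W = rho * W1 + np.sqrt(1 - rho**2) * W2

print(W.mean(), W.var())                   # ~0 and ~T, as for a Brownian motion at T
print(np.mean(W * W1) / T)                 # ~rho: the correlation with W^(1)
```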
{ "language": "en", "url": "https://math.stackexchange.com/questions/1867936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the way to show the following derivative problem? If $f$ is a function twice differentiable with $|f''(x)|<1$ for $x\in [0,1]$, and $f(0)=f(1)$, then $|f'(x)|<1$ for all $x\in [0,1]$. I have tried Rolle's theorem, but failed.
Hint: For some $c\in[0,1]$, $f'(c)=0$ (can you see why?). Now apply the mean value theorem to $f'$ to bound $f'(x)$ for any other $x\in[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $A$ is infinite and bounded, the infimum of the difference set of $A$ is zero. Let $A$ be a non-empty subset of $\mathbb{R}$. Define the difference set to be $A_d := \{b-a\;|\;a,b \in A \text{ and } a < b \}$ If $A$ is infinite and bounded then $\inf(A_d) = 0$. Since $a < b$ we have $b - a > 0$. Thus zero is a lower bound for $A_d$ and $\inf(A_d) \geq 0$. I then want to show that if $\inf(A_d) = \epsilon > 0$ and $A$ is bounded, then $A$ is finite. Let $\inf(A) = \beta$ and $\sup(A) = \alpha$. Then there can be at most $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ real numbers in $A$. Suppose that there are more than $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ numbers in $A$. Since $b - a \geq \epsilon$ for each $a < b$ in $A$, we have $\alpha \geq (\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)(\epsilon) + \beta$. However this is a contradiction, since $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1 > \frac{\alpha - \beta}{\epsilon}$, so that $(\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1)(\epsilon) + \beta > ( \frac{\alpha - \beta}{\epsilon} )(\epsilon) + \beta = \alpha$. Thus the cardinality of $A$ must be less than or equal to $\lfloor \frac{\alpha - \beta}{\epsilon} \rfloor + 1$ and thus finite. We have shown that if $\inf(A_d) > 0$ and $A$ is bounded then $A$ cannot be infinite. One question I have is whether this would be enough to prove the theorem. I'm sure that there are more efficient ways to formulate the above argument. I feel like this is a good opportunity for the pigeon hole principle but I don't really know how to "invoke" it. Critique is welcomed and appreciated.
Your argument is ok for me. If you want to apply the Pigeon-Hole Principle: We have $A\subset [\inf A, \sup A]=[x,y]$ with $x<y$. For any $r>0$ take $n\in N$ such that $(y-x)/n<r.$ The set of $n$ intervals $S= \{[x+j(y-x)/n,x+(j+1)(y-x)/n] : 0\leq j<n\}$ covers $[x,y].$ Take any set $B$ of $n+1$ members of $A.$ At least two distinct $c,d\in B $ belong to the same member of $S.$ So $\exists c,d\in A\;(0<|c-d|\leq (y-x)/n<r).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Boundedness and convergence of $x_{n+1} = x_n ^2-x_n +1$ Suppose that $x_0 = \alpha \in \mathbb{R}$ and $x_{n+1} = x_n ^2-x_n +1$. I am asked to study the boundedness of $(x_n)$ and then asked if $(x_n)$ converges. How can I show that $(x_n)$ is bounded? I have noted that $$x_{n+1}-x_n = (x_n-1)^2\geq 0$$ so $x_n$ is increasing. Suppose that $(x_n)$ converges, then $l = l^2-l+1$ so $l = 1$. This means that the sequence converges only when $\alpha \leq 1$. And so $(x_n)$ is bounded above by $1$ if $\alpha\leq 1$ (since an increasing sequence converges to its supremum) otherwise the sequence is not bounded above. Is my reasoning correct?
Notice that $x_{n+1}=1-x_n(1-x_n)$. Thus $x_{n+1}\in[0,1]$ if and only if $x_n\in[0,1]$. Thus the sequence is bounded above by $1$ if and only if $\alpha\in[0,1]$.
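Iterating the recursion numerically illustrates the dichotomy (my own Python sketch):

```python
def orbit(alpha, n=12):
    x = alpha
    for _ in range(n):
        x = x * x - x + 1
    return x

print(orbit(0.3))    # ~0.94, slowly increasing toward the limit 1
print(orbit(1.2))    # astronomically large: unbounded for alpha > 1
print(orbit(-0.2))   # likewise unbounded for alpha < 0
```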
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What does the notation $\overline{\mathbb R}$ mean in that context? In an old question, it can be read that "the finiteness of $\text{Gal}(\overline{\mathbf R}/\mathbf R)$" is one of the "impressive finiteness results in mathematics". I commented the question to know what was meant by the notation $\overline{\mathbf R}$, but I got no answer. Hence my question: what is $\overline{\mathbf R}$, in that context? For me, this doesn't denote an algebraic closure of $\Bbb R$, because then it would be isomorphic to $\Bbb C$, and the finiteness of $\text{Gal}(\Bbb C/\Bbb R)$ is not hard to establish, in my opinion. Any comment would be appreciated!
My reading of this quote is that * *$\overline{\bf R}$ refers to $\Bbb{C}$, and that *"impressive finiteness result" refers to the (not that obvious) fact that $\Bbb{C}$ is algebraically closed. Most of us hear about $\Bbb{C}$ being algebraically closed early in our studies, maybe in the same course where the complex numbers are first introduced? More often than not the first proof we see is one of the highlights of that sophomore/junior complex analysis course. I am not conversant with the history of the FTA, so I don't know if that proof was the first one discovered. Surely people had suspected this to be the case much earlier (or were totally uninterested). Personally I have a soft spot for the Galois-theoretic proof of $[\overline{\Bbb{R}}:\Bbb{R}]=2$ that only needs the following pieces of analysis: * *All odd degree polynomials with real coefficients have a real zero. *All the complex numbers have a complex square root (ok, this part really only needs trigonometry). Still, whichever way you look at it, the finiteness of $[\overline{\Bbb{R}}:\Bbb{R}]$ is a non-trivial fact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the ring of integers of $\Bbb Q(\sqrt[4]{2})$ I know$^{(1)}$ that the ring of integers of $K=\Bbb Q(\sqrt[4]{2})$ is $\Bbb Z[\sqrt[4]{2}]$ and I would like to prove it. A related question is this one, but it doesn't answer mine. I computed quickly the discriminant $\text{disc}(1,\sqrt[4]{2},\sqrt[4]{4},\sqrt[4]{8})=-2^{11}$. According to this answer, this means that $\mathcal{O}_K \subset \frac{1}{m}\Bbb Z\left[\sqrt[4]{2}\right]$ where $m$ is an integer whose square divides $2^{11}$, so $m=1,2,2^2,\dots,2^5$ are possible. But how could I rule out the values $m>1$? I am aware that it can be a tricky problem. Any reference providing a description of $\mathcal{O}_{\Bbb Q(\sqrt[4]{2})}$ would be satisfactory. I will be grateful for any help! $^{(1)}$ I tested with SAGE the following code K.<a> = NumberField([x^4-2]); K.integral_basis() and I got the expected answer, namely $[1,a,a^2,a^3]$.
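The discriminant claim in the question is easy to double-check symbolically (a SymPy sketch, my own addition):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.discriminant(x**4 - 2))  # -2048 = -2**11
```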
Following the approach of Keith Conrad, suppose that $$\alpha = a + b \sqrt[4]2+c\sqrt[4]4+d\sqrt[4]8,\quad a,b,c,d\in\mathbb Q$$ is an element of $\mathcal O_K$. We will show that $\alpha\in\mathbb Z[\sqrt[4]2]$. Calculating traces, $$ \mathrm{Tr}_{K/\mathbb Q}(\alpha) = 4a\\ \mathrm{Tr}_{K/\mathbb Q}(\sqrt[4]2\alpha) = 8d\\ \mathrm{Tr}_{K/\mathbb Q}(\sqrt[4]4\alpha) = 8c\\ \mathrm{Tr}_{K/\mathbb Q}(\sqrt[4]8\alpha) = 8b $$ are all integers, and therefore, the denominators of $a,b,c$ and $d$ can only involve powers of $2$. This enables us to solve our problem $2$-adically - indeed, it suffices to show that $\mathcal O_{\mathbb Q_2(\sqrt[4]2)} = \mathbb Z_2[\sqrt[4]2]$, since if $\alpha=\frac{1}{2^k}\alpha'$, where $\alpha'\in\mathbb Z[\sqrt[4]2]$, then $\alpha$ can only be an element of $\mathbb Z_2[\sqrt[4]2]$ if $k\le 0$. But $\mathbb Q_2(\sqrt[4]2)$ is totally ramified with uniformiser $\sqrt[4]2$ (by observation, or since $X^4-2$ is Eisenstein at $2$), so it follows by Lemma $1$ in Conrad's notes that $\mathcal O_{\mathbb Q_2(\sqrt[4]2)} = \mathbb Z_2[\sqrt[4]2]$. Hence $\mathcal O_K = \mathbb Z[\sqrt[4]2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Getting characteristic polynomial from a small matrix Sorry I don't know how to format matrices, but if I have this matrix $\pmatrix{1& 1& 0\\ 0& 0& 1\\ 1 &0& 1\\}$ How is the characteristic polynomial $λ^3 − 2λ^2 + λ − 1$? Is there some methodical approach to getting the characteristic polynomial from a matrix? EDIT: $\text{Det}(A - \lambda I)$ means $\pmatrix{1-\lambda& 1& 0\\ 0& -\lambda& 1\\ 1 &0& 1-\lambda\\}$ and so the determinant of this matrix is $= (1-\lambda)((-\lambda)(1-\lambda) - (1)(0)) - (1)((0)(1-\lambda)-(1)(1)) + (0)((0)(0) - (-\lambda(1)))$ $= -\lambda^3+2 \lambda^2-\lambda+1$ Huh, seems to be similar, but the signs are different?
As mentioned in the comments, you just find $\det(A-\lambda I)$ (or $\det(\lambda I-A)$ if you want the leading term positive). The two conventions differ only by the factor $(-1)^n$ (here $n=3$), which is exactly why your computation of $\det(A-\lambda I)$ came out as $-\lambda^3+2\lambda^2-\lambda+1$, the negative of the stated characteristic polynomial. Alternatively, if you find all of the (complex) eigenvalues $\lambda_1, \lambda_2, \lambda_3$, counted with multiplicity, then the characteristic polynomial will be $(\lambda-\lambda_1)(\lambda-\lambda_2)(\lambda-\lambda_3)$. In this particular case you'll want to do the first method because the roots of the characteristic polynomial of $A$ are pretty gnarly: ~ WolframAlpha
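A SymPy check of both conventions (my own sketch):

```python
import sympy as sp

A = sp.Matrix([[1, 1, 0],
               [0, 0, 1],
               [1, 0, 1]])
lam = sp.symbols('lambda')
print(A.charpoly(lam).as_expr())             # lambda**3 - 2*lambda**2 + lambda - 1
print((A - lam * sp.eye(3)).det().expand())  # -lambda**3 + 2*lambda**2 - lambda + 1
```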
{ "language": "en", "url": "https://math.stackexchange.com/questions/1868618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }