A Taylor series question The Taylor series for $\cos(x)$ about $x=0$ is $1-x^2/(2!)+x^4/(4!)-x^6/(6!)+\cdots$ If $h$ is a function such that $h'(x) = \cos(x^3)$, then what is the coefficient of $x^7$ in the Taylor series for $h(x)$ about $x = 0$?
The Taylor expansion of $\cos t$ is $$1-\frac{t^2}{2!}+\frac{t^4}{4!}-\frac{t^6}{6!}+\cdots.$$ Let us cross our fingers and treat this as a "long" polynomial. Then substituting $x^3$ for $t$, we get $$\cos(x^3)=1-\frac{x^6}{2!}+\frac{x^{12}}{4!}-\frac{x^{18}}{6!}+\cdots.$$ This is $h'(x)$. Integrate term by term, again treating the expression as if it were a long polynomial. We get $$h(x)=C-\frac{x^7}{7\cdot 2!}+\frac{x^{13}}{13\cdot 4!}-\frac{x^{19}}{19\cdot 6!}+\cdots.$$
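So the coefficient of $x^7$ is $-\frac{1}{7\cdot 2!}=-\frac{1}{14}$. Here is a quick symbolic check of the term-by-term integration (a sketch using sympy; the expansion order $20$ is an arbitrary cutoff):

```python
import sympy as sp

x = sp.symbols('x')
series = sp.cos(x**3).series(x, 0, 20).removeO()   # 1 - x^6/2! + x^12/4! - x^18/6!
h = sp.integrate(series, x)                        # integrate term by term
print(h.coeff(x, 7))                               # -1/14, i.e. -1/(7*2!)
```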
{ "language": "en", "url": "https://math.stackexchange.com/questions/751025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is Cramer's rule used for? Cramer's rule appears in introductory linear algebra courses without comments on its utility. It is a flaw in our system of pedagogy that one learns answers to questions of this kind in courses only if one takes a course on something in which the topic is used. On the discussion page to Wikipedia's article on Cramer's rule, we find this detailed indictment on charges of uselessness, posted in December 2009. But in the present day, we find in the article itself the assertion that it is useful for * *solving problems in differential geometry; *proving a theorem in integer programming; *deriving the general solution to an inhomogeneous linear differential equation by the method of variation of parameters; *(a surprise) solving small systems of linear equations. This one is what it superficially purports to be in linear algebra texts, but then elementary row operations turn out to be what is actually used. At some point in its history, the Wikipedia article asserted that it's used in proving the Cayley–Hamilton theorem, but that's not there now. To me the Cayley–Hamilton theorem has always been a very memorable statement, but at this moment I can't recall anything about the proof. What enlightening expansions on these partial answers to this question can the present company offer?
In control theory, the closely related rule $$A^{-1} = \frac{1}{\det A} \operatorname{Adj}(A)$$ where $\operatorname{Adj}(A)$ is the adjugate matrix, is used for going from a state space representation to a transfer function description (doing all computations by hand). Explicitly, if we are given a state space representation of a system $$\begin{align} \dot x &= Ax + Bu \\ y &= Cx + Du \end{align}$$ and want to find the transfer function from the input $u$ to the output $y$, we can Laplace transform the equations to: $$\begin{align} sX &= AX + BU \\ Y &= CX + DU \end{align}$$ and from the first equation we get $X = (sI-A)^{-1}BU$, which inserted into the second equation yields $Y = (C(sI - A)^{-1}B +D)U$, so the transfer function $G(s)$ is $$G(s) = C(sI-A)^{-1}B + D$$ and we use the inversion rule above to calculate $(sI-A)^{-1}$. Of course, since we can write $$G(s) = C\frac{\operatorname{Adj}(sI-A)}{\det (sI - A)}B + D$$ we can directly observe that the poles of the system are among the eigenvalues of $A$, which provides information on the system's stability. Example We want the transfer function $G(s)$ from $u$ to $y$ in this system: $$\begin{align} \dot x &= \begin{pmatrix} 1 & 3 \\ 1 & 6 \end{pmatrix} x + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u \\ y &= \begin{pmatrix} 3 & 1 \end{pmatrix} x \end{align}$$ so when computing $G(s)$ we have to compute: $$(sI-A)^{-1} = \begin{pmatrix} s - 1 & -3 \\ -1 & s - 6 \end{pmatrix}^{-1} = \frac{1}{s^2-7s+3} \begin{pmatrix} s - 6 & 3 \\ 1 & s - 1 \end{pmatrix}$$ so our transfer function becomes: $$\begin{align} G(s) &= \begin{pmatrix} 3 & 1 \end{pmatrix}\frac{1}{s^2-7s+3} \begin{pmatrix} s - 6 & 3 \\ 1 & s - 1 \end{pmatrix} \begin{pmatrix}1 \\ 0\end{pmatrix} = \\ &= \frac{1}{s^2-7s+3}\begin{pmatrix} 3 & 1 \end{pmatrix}\begin{pmatrix} s - 6 \\ 1 \end{pmatrix} = \\ &= \frac{3s-17}{s^2-7s+3} \end{align}$$
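The computation is easy to confirm symbolically; here is a minimal sympy sketch of the same example (the matrices are the ones from the example above):

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, 3], [1, 6]])
B = sp.Matrix([1, 0])
C = sp.Matrix([[3, 1]])

# G(s) = C (sI - A)^{-1} B, with D = 0
G = sp.simplify((C * (s * sp.eye(2) - A).inv() * B)[0, 0])
print(G)   # (3*s - 17)/(s**2 - 7*s + 3)
```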
{ "language": "en", "url": "https://math.stackexchange.com/questions/751089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
Jumping back into Calculus III At the age of 30 I am going back to school for Electrical Engineering. Because of the way higher education works, all of my previous college coursework is being transferred, which does not allow you to retake classes that were already successfully completed. Since I took Calc I and II and did rather well, I cannot retake them. It has been almost 10 years since I have had any formal math class, and I am nervous about jumping right into Calc III. What are some recommended textbooks or other tools that I can use to prepare? I have until the end of August; it is mid-April now. I have been working with Khan Academy, but it seems to be very unorganized. I have worked my way up, going along, from pre-algebra up through precalculus. I would like to change gears and start working with books, focusing on PreCalc topics to start and working towards Calc II. Any insights greatly appreciated!!!
Calculus: Early Transcendentals by James Stewart — I think it is one of the best books out there. Apart from Khan Academy on YouTube, you have MIT's Multivariable Calculus lectures and UC Berkeley's.
{ "language": "en", "url": "https://math.stackexchange.com/questions/751154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Finding an expression for the complex number Z^-1 So I want to find out an expression to express: $$z^{-1}$$ I know the answer is: $$z^{-1} = \frac{x-iy}{x^2+y^2}$$ But how would I go about proving this/the steps to this?
Our intuition from the real numbers tells us that the inverse of a real number $c$ is $1/c$. Likewise, we conjecture that the inverse of a complex $z \in \mathbb{C}$ is $\frac{1}{z} = \frac{1}{x + iy}$. To get this into a more recognizable form, multiply the numerator and denominator by $z^* = x - iy$. From here, we arrive at $\frac{x-iy}{x^2+y^2} = \frac{x}{x^2 + y^2} + \frac{y}{x^2+y^2}i$. Certainly, our purported $z^{-1}$ is of the form $a+bi$ for $a, b \in \mathbb{R}$. Next, we can simply multiply $z$ by our purported $z^{-1}$ to confirm that we do indeed get $1 + 0i$, which is the multiplicative identity in $\mathbb{C}$.
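As a sanity check, here is a one-line sympy verification of $z\,z^{-1}=1$ (a sketch, assuming $x,y$ real and not both zero):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
z_inv = (x - sp.I * y) / (x**2 + y**2)
print(sp.simplify(z * z_inv))   # 1
```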
{ "language": "en", "url": "https://math.stackexchange.com/questions/751238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Which triangular numbers are also squares? I'm reading Stopple's A Primer of Analytic Number Theory: Exercise 1.1.3: Which triangular numbers are also squares? That is, what conditions on $m$ and $n$ will guarantee that $t_n=s_m$? Show that if this happens, then we have: $$(2n+1)^2-8m^2=1,$$ a solution to Pell's equation, which we will study in more detail in Chapter $11$. I thought about the following: $$\begin{eqnarray*} {t_n}&=&{s_m} \\ {\frac{n^2+n}{2}}&=&{m^2} \\ {n^2+n}&=&{2m^2} \end{eqnarray*}$$ I've solved for $n$ and $m$ but I still have no clue of how to proceed. I've looked at the book's solution and the solution is as follows: $$\begin{eqnarray*} {\frac{n(n+1)}{2}}&=&{m^2} \\ {n(n+1)}&=&{2m^2} \\ {\color{red}{4n(n+1)}}&\color{red}{=}&{\color{red}{8m^2}} \\ {4n^2+4n+1-1}&=&{8m^2} \\ {(2n+1)^2-1}&=&{8m^2} \\ \end{eqnarray*}$$ In the red line, he multiplies the equation by $4$; I don't understand why to do it nor how the condition is achieved.
As the other answers have explained, the multiplication by $4$ is to make things neater. However, on closer look, the formulation $(2n+1)^2 - 1 = 8m^2$ doesn't really simplify the situation. This is because $(2n+1)$, an odd number, when squared, will always be one more than $8$ times a triangular number. This formulation simply restates the problem: when is $m^2$ a triangular number? The way I have gone about this problem is to state it as follows: $$n(n+1) = 2m^2$$ $n$ and $(n+1)$ can share no common factors. Since all the prime factors except $2$ on the RHS are raised to an even power, one of $n$ and $(n+1)$ must be a square and the other twice a square. The situation now becomes: $$a^2 = 2b^2 \pm 1$$ To illustrate, the first triangular and square number after $1$ is $36$, because $3^2 = 2\cdot 2^2 + 1$. The series of 'square triangulars' can be found by finding all $a$-$b$ pairs which fulfil the above equation. The first few pairs are: $(1,1)$; $(3,2)$; $(7,5)$; $(17,12)$. These yield the square triangular numbers $1, 36, 1225, 41616$. An infinite series of these pairs can be generated as follows: take one pair $(a,b)$; the next pair is $(a+2b, a+b)$.
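A short sketch of this pair recurrence: each pair $(a,b)$ yields the square triangular number $(ab)^2$ (e.g. $36=(3\cdot 2)^2$, $1225=(7\cdot 5)^2$), and the check below confirms each value really is triangular:

```python
from math import isqrt

a, b = 1, 1
for _ in range(6):
    N = (a * b) ** 2                 # the square triangular number
    n = (isqrt(8 * N + 1) - 1) // 2
    assert n * (n + 1) // 2 == N     # confirm N is indeed triangular
    print(N)                         # 1, 36, 1225, 41616, 1413721, 48024900
    a, b = a + 2 * b, a + b          # the pair recurrence (a, b) -> (a+2b, a+b)
```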
{ "language": "en", "url": "https://math.stackexchange.com/questions/751316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Checking a solution of a PDE I have the following PDE: \begin{equation} -yu_x + xu_y = 0 \quad\text{where } u(0, y) = f(y) \end{equation} I derived a solution as follows: \begin{align} -yu_x + xu_y =& 0 \\ \iff& \nabla u(x,y)\cdot \langle -y, x\rangle = 0 \\ \implies& \frac{dy}{dx} = \frac{-x}{y} \\ \iff& ydy = -xdx \\ \iff& \int ydy = \int -xdx \\ \iff& y^2 = -x^2 + c_2 \\ \iff& y^2 + x^2 = c_2 \end{align} Since $u(x,y)$ is constant along the ODE $\frac{dy}{dx}$, we have: \begin{align} u(x,y) =& c_1 \\ =& f(c_2) \quad\text{for some function $f$} \\ =& f(y^2+x^2) \end{align} I want to check that this satisfies the PDE. Specifically, any function $f$ should satisfy the PDE. My calculus is a bit rusty, and I am not exactly sure how to do this. Here is my reasoning: Since $u(x,y) = -yu_x + xu_y = 0$ we have to substitute in for $f$ which yields \begin{equation} u(x, y) = -yf_x2x + xf_y2y = 2xyf_y - 2xyf_x \end{equation} We need the above to equal $0$. The $2xy$ and $-2xy$ give me evidence that it should cancel, and that my calculus is off... What is not clear to me is that, since we don't know what $f$ is, I can't find out what $f_x$ or $f_y$ are. Though, the problem does look symmetrical, and I could see a potential solution involving polar coordinates, but I'm not quite sure how to solve it in this way either. * *How can I verify that the above is indeed equal to $0$ and hence satisfies the PDE? Thanks for all the help!
Since we conjecture $u(x,y)=f(x^2+y^2)$, let us (carefully) apply calculus and verify $-yu_x+xu_y=0$. Note (by the chain rule), $$ u_x = f'(x^2+y^2)\cdot 2x, \qquad u_y = f'(x^2+y^2)\cdot 2y. $$ Thus, $-yu_x+xu_y=-2xyf'+2xyf'=0$. So, you had it, except a small detail with the chain rule. Another way to have caught the mistake -- $f$ is a function of a single variable, so what does $f_x$ or $f_y$ mean?
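sympy carries out exactly this chain-rule computation, if you want to double-check (a minimal sketch with $f$ left as an abstract function):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')
u = f(x**2 + y**2)
print(sp.simplify(-y * sp.diff(u, x) + x * sp.diff(u, y)))   # 0
```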
{ "language": "en", "url": "https://math.stackexchange.com/questions/751433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The group of rigid motions of the cube is isomorphic to $S_4$. I want to solve the following exercise from Dummit & Foote. My attempt is down below. Is it correct? Thanks! Show that the group of rigid motions of a cube is isomorphic to $S_4$. My attempt: Let us denote the vertices of the cube so that $1,2,3,4,1$ trace a square and $5,6,7,8$ are the vertices opposite to $1,2,3,4$. Let us also denote the pairs of opposite vertices $d_1,d_2,d_3,d_4$, where vertex $i$ is in $d_i$. To each rigid motion of the cube we associate a permutation of the set $A=\{ d_i \}_{i=1}^4$. Denote this association by $\varphi:G \to S_4$, where $G$ is the group of those rigid motions, and we identified $S_A$ with $S_4$. By definition of function composition we can tell that $\varphi$ is a group homomorphism. We prove that $\varphi$ is injective, using the trivial kernel characterisation: Suppose $\varphi(g)=1$, i.e., $g$ fixes all of the pairs of opposite vertices (that is, we have $g(i) \in \{i,i+4 \}$ for all $i$, where the numbers are reduced mod 8). Suppose $g$ sends vertex $1$ to its opposite $5$. Then the vertices $2,4,7$ adjacent to $1$ must be mapped to their opposite vertices as well. This is because out of the two seemingly possible options for their images, only one (the opposite vertex) is adjacent to $g(1)=5$. This completely determines $g$ to be the negation map, which is not included in our group. The contradiction shows that we must have $g(1)=1$, and from that we can find similarly that $g$ is the identity mapping. Since $\ker \varphi$ is trivial, $\varphi$ is injective. In order to show that it is surjective, observe that $S_4$ is generated by $\{(1 \; 2),(1 \; 2 \; 3 \; 4) \}$ (this is true because products of these two elements allow us to sort the numbers $1,2,3,4$ in any way we like). We now find elements in $G$ with images under $\varphi$ being those generators. Observe that if $s$ is a $90^\circ$ rotation around the axis through the centres of the squares $1,2,3,4$ and $5,6,7,8$, such that $1$ is mapped to $2$, followed by a rotation by $120^\circ$ around the line through $2,6$ (so that $1$ is mapped to $3$), we have $\varphi(s)=(1 \; 2)$. Observe also that if $t$ is a $90^\circ$ rotation around the axis through the centres of the squares $1,2,3,4$ and $5,6,7,8$, such that $1$ is mapped to $2$, we have $\varphi(t)=(1 \; 2 \; 3 \; 4)$. Now if $\sigma \in S_4$ is any permutation, we express it as a product involving $(1 \; 2),(1 \; 2 \; 3 \;4)$, and the corresponding product involving $s,t$ is mapped to $\sigma$ by $\varphi$. This proves $\varphi$ is surjective. We conclude that $\varphi$ is an isomorphism, so $G \cong S_4$.
Surjective: The group of rigid motions of a cube contains $24$ elements, the same as $S_4$. Proof - A cube has $6$ sides. If a particular side is facing upward, then there are four possible rotations of the cube that will preserve the upward-facing side. Hence, the order of the group is $6\times 4 = 24$. Injective: A cube has $4$ diagonals. Attach the label $1$ to both endpoints of one diagonal, $2$ to the next, and so on. We label the head and the tail of a diagonal the same, since no rigid motion of the cube can swap the two endpoints of a diagonal while keeping that diagonal in the same position. To the first diagonal you can attach any one of $1$, $2$, $3$, or $4$, just as you could choose for the first place in $S_4$. For the second diagonal there remain $3$ numbers to choose from, the same as for the second place in $S_4$. And so on until the last one, and we are done.
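Both counts can be verified by brute force. The sketch below (my own labelling: the $8$ vertices are the points of $\{0,1\}^3$, and vertex $i$ is opposite vertex $7-i$) generates the rotation group from two $90^\circ$ face rotations and checks that its $24$ elements induce all $24$ permutations of the $4$ diagonals:

```python
import itertools
import numpy as np

verts = list(itertools.product([0, 1], repeat=3))        # cube vertices
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])        # 90 deg about the z-axis
Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]])        # 90 deg about the x-axis

def perm(R):
    """Rotation about the cube's centre, as a permutation of vertex indices."""
    c = np.array([0.5, 0.5, 0.5])
    img = [tuple((R @ (np.array(v) - c) + c).round().astype(int)) for v in verts]
    return tuple(verts.index(w) for w in img)

gens, identity = [perm(Rz), perm(Rx)], tuple(range(8))
group, frontier = {identity}, [identity]
while frontier:                                          # close under the generators
    g = frontier.pop()
    for s in gens:
        h = tuple(s[g[i]] for i in range(8))
        if h not in group:
            group.add(h)
            frontier.append(h)

diag = lambda v: min(v, 7 - v)                           # diagonal containing vertex v
induced = {tuple(diag(g[d]) for d in range(4)) for g in group}
print(len(group), len(induced))                          # 24 24: a bijection onto S4
```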
{ "language": "en", "url": "https://math.stackexchange.com/questions/751507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Limit of a polynomic-exponential sequence I have to calculate the following limit: $$L=\lim \limits_{n \to \infty} -(n-n^{n/(1+n)})$$ I get the indeterminate form $\infty - \infty$ and I don't know how to follow. Any idea? Thank you very much.
HINT: Note that $n/(1+n) = 1 - 1/(1+n)$, so $$n - n^{n/(1+n)} = n - n^{1-1/(1+n)} = n\left(1 - n^{-1/(1+n)}\right) = \frac{1 - n^{-1/(1+n)}}{1/n}$$ Now use L'Hôpital's rule (treating $n$ as a continuous variable). Since $n\left(1 - e^{-\ln n/(1+n)}\right) \sim \ln n \to \infty$, the limit $L$, with its leading minus sign, is $-\infty$.
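A quick numeric sketch of the growth; the values track $-\ln n$, consistent with $L = -\infty$:

```python
import math

for n in (1e2, 1e4, 1e6, 1e8):
    val = -(n - n ** (n / (1 + n)))
    print(f"{n:.0e}  {val:10.4f}  {-math.log(n):10.4f}")   # val ~ -ln n
```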
{ "language": "en", "url": "https://math.stackexchange.com/questions/751584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Showing an endomorphism is not surjective Let $$A=\begin{pmatrix}2&-2\\2&-2\end{pmatrix}$$ and the endomorphism $f_A:M_2(\mathbb R)\longrightarrow M_2(\mathbb R); B\longmapsto AB$. I want to show that $f_A$ is not surjective. My try: $\ker f_A$ is clearly shown to be containing elements other than the null matrix, so $f_A$ is not injective. Since this is endomorphism in finite dimension then $f_A$ is not injective if and only if $f_A$ is not surjective. Is there any other way to show that $f_A$ is not surjective?
Your approach works (it'd be better to explicitly write down a nonzero element of $\mathrm{ker} \, f_A$). Alternatively, every element of the image satisfies $\det(f_A(B)) = \det(AB) = 0 \cdot \det(B) = 0$. Of course there are matrices with nonzero determinant, and they are therefore not in the image.
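A two-line numeric illustration of the determinant argument (a sketch; the random $B$ stands in for an arbitrary input):

```python
import numpy as np

A = np.array([[2, -2], [2, -2]])
B = np.random.rand(2, 2)
print(np.linalg.det(A @ B))   # always ~0: e.g. the identity matrix is never hit
```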
{ "language": "en", "url": "https://math.stackexchange.com/questions/751717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $\int_\Gamma\frac{2z+j}{z^3(z^2+1)}\mathrm{d}z$ where $\Gamma:|z-1-i| = 2$ pls, some ideas for integral solution (residue theory)? $$\int_\Gamma\dfrac{2z+j}{z^3(z^2+1)}\mathrm{d}z$$ Where $\Gamma:|z-1-i| = 2$ is a positively oriented circle. Thx, for help!
So, my solution is: Is it correct?? Writing $i$ for $j$: pole $z_1 = 0$ (order-$3$ pole); pole $z_2 = -i$ (simple pole); pole $z_3 = i$ (simple pole). $z_1$: $|0-1-i| = \sqrt 2 < 2$, so inside the circle; $z_2$: $|-i-1-i| = \sqrt 5 > 2$, so not inside the circle; $z_3$: $|i-1-i| = 1 < 2$, so inside the circle. $$ \underset{z=0}{res}\frac{2z+i}{z^3(z^2+1)}=-i $$ $$ \underset{z=i}{res}\frac{2z+i}{z^3(z^2+1)}=\frac{3i}{2} $$ $$ \int_\Gamma\dfrac{2z+i}{z^3(z^2+1)}\mathrm{d}z = 2\pi i\left(\underset{z=0}{res} f(z) + \underset{z=i}{res} f(z)\right) = 2\pi i\left(-i+\dfrac{3i}{2}\right) = 2\pi i\cdot\dfrac{i}{2} = -\pi $$
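The result $-\pi$ matches a direct numerical evaluation of the contour integral (a sketch; the circle is parametrised as $z = 1 + i + 2e^{it}$ and the integral approximated with a midpoint rule):

```python
import numpy as np

f = lambda z: (2 * z + 1j) / (z**3 * (z**2 + 1))
N = 200_000
t = (np.arange(N) + 0.5) * 2 * np.pi / N        # midpoints on [0, 2*pi]
z = 1 + 1j + 2 * np.exp(1j * t)
dz = 2j * np.exp(1j * t) * (2 * np.pi / N)
print((f(z) * dz).sum())                        # ~ -3.14159 + 0j
```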
{ "language": "en", "url": "https://math.stackexchange.com/questions/751953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to calculate this area in $\mathbb{R}^2$? Write the area $D$ as the union of regions. Then, calculate $$\int\int_Rxy\textrm{d}A.$$ First of all I do not get a lot of parameters because they are not defined explicitly (like what is $A$? what is $R$?). Here is what I did for the first question: The area $D$ can be written as: $$D=A_1\cup A_2\cup A_3\cup A_4\cup A_5.$$ Where: $$A_1=\{(x, y)\in\mathbb{R}^2: x\geq-1\}.$$ $$A_2=\{(x, y)\in\mathbb{R}^2: y\geq-1\}.$$ $$A_3=\{(x, y)\in\mathbb{R}^2: x\leq1\}.$$ $$A_4=\{(x, y)\in\mathbb{R}^2: x\leq y^2\}.$$ $$A_5=\{(x, y)\in\mathbb{R}^2: y\leq1+x^2\}.$$ First, for me I see that $D$ is the intersection of these regions and not the union. Am I wrong? P.S. This is a homework.
You are right: $D$ is the intersection of the sets $A_1,\dots,A_5$, not their union. Concretely, you should write $$ D =\{(x,y): -1<x<1,\ -1<y<1+x^2 \} \setminus \{ (x,y): 0<x<1,\ -\sqrt{x}<y<\sqrt{x} \} $$
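To see that this set difference agrees with the intersection of your five conditions, here is a quick Monte Carlo membership check (a sketch; the strict vs. non-strict inequalities differ only on a measure-zero boundary, which random points never hit):

```python
import random

def in_intersection(x, y):   # A1 ∩ A2 ∩ A3 ∩ A4 ∩ A5
    return x >= -1 and y >= -1 and x <= 1 and x <= y * y and y <= 1 + x * x

def in_D(x, y):              # the description above: box minus parabola interior
    box = -1 < x < 1 and -1 < y < 1 + x * x
    hole = 0 < x < 1 and -x**0.5 < y < x**0.5
    return box and not hole

random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 3)) for _ in range(100_000)]
print(sum(in_intersection(x, y) != in_D(x, y) for x, y in pts))   # 0
```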
{ "language": "en", "url": "https://math.stackexchange.com/questions/752045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
When is a symmetric 2-tensor field globally diagonalizable? Suppose that $\mathbb{R}^n$ has a Riemannian metric $g$. Let $h$ be a smooth symmetric 2-tensor field on $\mathbb{R}^n$. At any point $p \in \mathbb{R}^n$, there is a basis of $T_p \mathbb{R}^n$ in which $h$ is diagonal. Is it always possible to find a global orthonormal frame $\{E_i\}$ that diagonalizes $h$? If not, what are the obstructions to the existence of such a frame?
Once I had a similar question. I asked: if you are given a continuous matrix-valued function $A:\Omega \rightarrow \mathbb{R}^{n\times n}$, can you find continuous matrix-valued functions $D$ diagonal and $S$ orthogonal, such that $$ A(x) = S(x)D(x)S^T(x) $$ for all $x\in \Omega$? The answer is negative. Take this matrix-valued function: $$ A(x)=\left( \begin{matrix} 1 + \phi(x) \sin^2{\theta(x)} & - \phi(x)\cos{\theta(x)} \sin{\theta(x)}\\ - \phi(x)\cos{\theta(x)} \sin{\theta(x)} & 1 + \phi(x) \cos^2{\theta(x)} \end{matrix} \right) = \left( \begin{matrix} \cos{\theta(x)} & -\sin{\theta(x)} \\ \sin{\theta(x)} & \cos{\theta(x)} \end{matrix} \right) \left( \begin{matrix} 1 & 0 \\ 0 & 1+\phi(x) \end{matrix} \right) \left( \begin{matrix} \cos{\theta(x)} & \sin{\theta(x)} \\ -\sin{\theta(x)} & \cos{\theta(x)} \end{matrix} \right) = S(x)D(x)S^T(x) $$ If you now take $\Omega = \mathbb{R}$, $\phi(x) = |x|$, $\theta(x) = \frac1{|x|}$, $\theta(0)=0$, then $A$ is continuous but $S$ is not. And since the eigenvalue decomposition is unique (apart from some signs and permutations), this is a counterexample. If you require $A$ to be differentiable, then pick $\phi(x) = |x|^n$ with sufficiently high $n$. If you need $A$ to be infinitely differentiable, then I would pick $\phi(x) = e^{-\frac1{x^2}}$
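You can watch the failure numerically: with $\phi(x)=|x|$ and $\theta(x)=1/|x|$, the eigenvector of the larger eigenvalue keeps spinning as $x \to 0$, so no continuous choice exists (a sketch using numpy's symmetric eigensolver):

```python
import numpy as np

def A(x):
    phi, theta = abs(x), 1 / abs(x)
    c, s = np.cos(theta), np.sin(theta)
    S = np.array([[c, -s], [s, c]])
    return S @ np.diag([1.0, 1.0 + phi]) @ S.T

for x in (1e-1, 1e-2, 1e-3, 1e-4):
    w, V = np.linalg.eigh(A(x))       # eigenvalues in ascending order: 1, 1 + |x|
    print(x, V[:, 1])                 # direction +-(cos t, sin t) keeps rotating
```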
{ "language": "en", "url": "https://math.stackexchange.com/questions/752115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Geometry Right triangles in a rectangle, find the area. Please help, I've been struggling to figure out this problem for too long... Given the area of rectangle $ABCD = 1200 \text{ unit}^2$, find the area of right triangle $ABE$
We have $[ABCD]=1200$, therefore the area of $\Delta{ABD}=\dfrac{1}{2}[ABCD]=600$. Now, calculate lengths $AD$ and $BD$ (the figure gives $AB=40$). $$ \begin{align} [ABD]&=600\\ \dfrac{1}{2}AB\cdot AD&=600\\ \dfrac{1}{2}\cdot40\cdot AD&=600\\ 20\cdot AD&=600\\ AD&=30 \end{align} $$ Using Pythagoras' theorem, we get $$ BD^2=AB^2+AD^2\quad\Rightarrow\quad BD=\sqrt{40^2+30^2}=50. $$ Now, calculate length $AE$. $$ \begin{align} [ABD]&=600\\ \dfrac{1}{2}BD\cdot AE&=600\\ \dfrac{1}{2}\cdot50\cdot AE&=600\\ 25\cdot AE&=600\\ AE&=24. \end{align} $$ Again we use Pythagoras' theorem to obtain $BE$. $$ AB^2=AE^2+BE^2\quad\Rightarrow\quad BE=\sqrt{AB^2-AE^2}=32. $$ Thus, the area of $\Delta{ABE}$ is $$ \begin{align} [ABE]&=\dfrac{1}{2}AE\cdot BE\\ &=\dfrac{1}{2}\cdot24\cdot 32\\ &=384. \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/752186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Are curves closed in $\mathbb{R}\times \mathbb{R}$ with the standard topology? Given the graph of the curve $y=\frac{1}{x}$, can we determine if the curve is closed or open in $\mathbb{R}^{2}$ with the standard topology?
Perhaps the easiest way to show closedness of $$G=\{(x,\ 1/x)\mid x\ne 0\}\subset\Bbb R^2$$ is to note that it is the preimage of $\{1\}$ under the map $$\Bbb R\times\Bbb R\to\Bbb R,\qquad (x,y)\mapsto x\cdot y$$ and this map is continuous and $\{1\}$ is closed. Alternatively, note that $G=G(f)$, the graph of the continuous map $$f:\Bbb R\setminus\{0\}\to\Bbb R,\qquad x\mapsto f(x)=1/x$$ and if $f:X\to Y$ is continuous and $Y$ is Hausdorff, then $G(f)$ is closed in $X\times Y$. This, however, only gives closedness of $G$ in $(\Bbb R\setminus\{0\})\times\Bbb R$. You would still have to show that no point of $\{0\}\times\Bbb R$ is in the closure of $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/752269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Functions definition + question Am I correct in saying that for Functions, the below is the correct definition: For each value of x in the domain there is only one value of y in the range. Hence, the picture below means that it is not a many-to-one function as the values of x do not map onto a single value of y in the range (they map onto two values of y in the range) and therefore it is not a function but an operation (square root)
A function is a relation between a set of inputs and a set of permissible outputs (determined by the relation), with the property that each input is related to exactly one output. So you need to make sure to change "only" to "exactly" or to " one and only one". I emphasize one and only one because each input value must be related to one output value, and only one output value. The picture cannot depict a function from the set on the left to the set on the right, for the reason you give. However, if the direction of all the arrows were reversed, then we would have a function from the set on the right to the set on the left, though it would not be a one-to-one function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/752325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
So-called Artin-Schreier Extension Let $F$ be a field of characteristic $p$. Let $K$ be a cyclic extension of $F$ of degree $p$. Prove that $K=F(\alpha)$ where $\alpha$ is a root of the polynomial $p(x) = x^{p} - x - a$ for some $a \in F$. I've seen, and attempted, a lot of problems that look similar. But I'm not really sure about this one.
Let $\sigma$ be a generator of $G_{K/F}$. The equation $x^p-x=a$ can be written in two different ways: $$\begin{cases}x(\sigma x)\cdots(\sigma^{p-1}x) & =a \\ x(x+1)\cdots(x+p-1) & =a\end{cases}$$ The first follows because $a$ is the opposite of the constant term of $x$'s minimal polynomial thus giving its norm (as either the polynomial is odd-degree or $1=-1$) and the second follows from simply factoring the polynomial $x^p-x$ in characteristic $p$. Wouldn't it be nice if the stars aligned and $\sigma x=x+1$, thus guaranteeing that not only were the products equal (both being $a$) but each of their factors were equal in the listed order? Try to show that given $K/F$ is cyclic $C_p$ there exists an $x\in K$ such that $(\sigma -1)x=1$. Necessarily this would mean that $x\not\in F$ hence $K=F(x)$ and would imply $x^p-x\in F$. (My preferred method would be to construct $x$ as an appropriate polynomial in $\sigma$ applied to a normal basis generator.)
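For the smallest instance of the statement, $F=\mathbb{F}_2$ and $K=\mathbb{F}_4$: the polynomial $x^2-x-1$ (which is $x^2+x+1$ in characteristic $2$) is irreducible over $\mathbb{F}_2$, so adjoining one of its roots gives the cyclic degree-$2$ extension. A one-line sympy check:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.Poly(x**2 - x - 1, x, domain=sp.GF(2)).is_irreducible)   # True
```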
{ "language": "en", "url": "https://math.stackexchange.com/questions/752410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A bounded integral I want to show that there exists $K\in\mathbb{R}^+$ such that $$\left|\int_{1}^x \sin(t+t^7)dt \right|<K$$ for all $x\ge 1$. Intuitively, I'm quite sure it is true, but I can't find a formal proof. Any idea?
Let $f(x) = x + x^7$, and $g(t)$ its inverse function on $[0,\infty)$. Then $$\int_0^x \sin(u + u^7)\ du = \int_0^{f(x)} \sin(t) g'(t)\ dt$$ It can be shown that as $t \to \infty$, $$g(t) = t^{1/7} - \dfrac{1}{7} t^{-5/7} + O(t^{-11/7})$$ and $$g'(t) = \dfrac{1}{7 g(t)^6 + 1} = \dfrac{1}{7} t^{-6/7} + O(t^{-12/7})$$ Now $\int_1^\infty |\sin(t)| t^{-12/7}\ dt < \infty$. On the other hand, using Integration by Parts, $$ \int_1^R \sin(t) t^{-6/7}\ dt = \left. - \cos(t) t^{-6/7} \right|_1^R - \dfrac{6}{7} \int_1^R \cos(t) t^{-13/7}\ dt$$ and again the term on the right is bounded.
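Numerically the partial integrals indeed stay small; a brute-force sketch (the step size is chosen well below the fastest oscillation scale $\sim 1/(7x^6)$, so only modest $x$ are feasible this way):

```python
import numpy as np

def I(x, n=4_000_000):
    t = np.linspace(1, x, n, endpoint=False) + (x - 1) / (2 * n)   # midpoints
    return np.sin(t + t**7).sum() * (x - 1) / n

for x in (1.5, 2.0, 2.5, 3.0, 3.5):
    print(x, I(x))          # small and bounded: the oscillations cancel
```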
{ "language": "en", "url": "https://math.stackexchange.com/questions/752606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Jacobian Linearisation of Nonlinear System Can anyone please solve the problem below? It is related to Jacobian linearisation of a nonlinear system. I have only got till here
You can try something like this: $$ \dfrac{dx_1}{dt}=f_1(u,x_1,x_2)$$ $$ \dfrac{dx_2}{dt}=f_2(u,x_1,x_2)$$ $$ \dfrac{d\Delta x_1}{dt}=(\dfrac{df_1}{du})_0\Delta u + (\dfrac{df_1}{d x_1})_0\Delta x_1 +(\dfrac{df_1}{dx_2})_0\Delta x_2 $$ $$ \dfrac{d\Delta x_2}{dt}=(\dfrac{df_2}{du})_0\Delta u + (\dfrac{df_2}{d x_1})_0\Delta x_1 +(\dfrac{df_2}{dx_2})_0\Delta x_2 $$ Notice that I use the $\delta_x=\Delta x$ notation... Now evaluate all the derivatives at the linearization point, for example at values $(x_1)_0=\hat x_1$, so we have: $$ \dfrac{df_1}{du}=c_1\dfrac{dT}{du}|_{u=u_0} =b_1$$ $$ \dfrac{df_1}{dx_1}= -C_2x_2 |_{x_2=x_{20}} =a_{11}$$ $$ \dfrac{df_1}{dx_2}=-C_2x_1|_{x_1=x_{10}}=a_{12} $$ Furthermore, you have: $$ \dfrac{df_2}{du}=0 =b_2$$ $$ \dfrac{df_2}{dx_1}= \frac{c_3}{J_e}=a_{21} $$ $$ \dfrac{df_2}{dx_2}=\frac{1}{J_e} (-0.106-2c_4x_2)|_{x_1=x_{10},x_2=x_{20}}=a_{22} $$ Now you have a set of linear equations that you can use to calculate state variables, equilibrium points, and so on... $$\left[\begin{matrix} \Delta\dot{x}_1 \\ \Delta\dot{x}_2 \end{matrix}\right]=\left[\begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{matrix}\right]\left[\begin{matrix}\Delta x_1 \\\Delta x_2 \end{matrix}\right]+\left[\begin{matrix}b_1 \\ b_2 \end{matrix}\right]\Delta u$$ Also, if you consider $v$ to be the output variable, you get: $$ v=y=\left[\begin{matrix} 1 & 0\end{matrix}\right]\left[\begin{matrix}\Delta x_1 \\ \Delta x_2\end{matrix}\right]+0\cdot\Delta u$$
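Since the original system is only given as an image, here is the same Jacobian recipe run symbolically on a stand-in toy system (the dynamics and the equilibrium point are my own illustrative choices, not the problem's):

```python
import sympy as sp

u, x1, x2 = sp.symbols('u x1 x2', real=True)
f = sp.Matrix([u - x1 * x2, x1 - x2**2])     # toy nonlinear dynamics
x = sp.Matrix([x1, x2])

A = f.jacobian(x)                            # state matrix of the linearisation
B = f.jacobian(sp.Matrix([u]))               # input matrix
point = {u: 1, x1: 1, x2: 1}                 # an equilibrium: f vanishes there
print(f.subs(point).T)                       # Matrix([[0, 0]])
print(A.subs(point))                         # Matrix([[-1, -1], [1, -2]])
print(B.subs(point))                         # Matrix([[1], [0]])
```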
{ "language": "en", "url": "https://math.stackexchange.com/questions/752721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why isn't the zero after the decimal in $0.01$ significant? Why isn't the zero after the decimal in $0.01$ significant? Although it is pretty obvious that the zero before the decimal is insignificant, I don't understand why the zero after the decimal is not significant.
Significant figures are used to denote the precision of a measurement. The leading zeros are not significant because they don't give us information about the precision of the measurement. Let's say you measure something with a meter stick that only has centimeter markings (no millimeters). You get that the object is $8.5\,\mathrm{cm}$ long, but you want to use your measurement in a formula that expects units of meters. When you convert from $8.5\,\mathrm{cm}$ to $0.085\,\mathrm{m}$, you haven't improved the precision of the measurement, but you gain the leading zeros. For more information: * *This significant figures overview talks about how significant figures are tied to the precision of measurements, with an introduction that covers the meaning of precision (and how it's not the same as accuracy). *See this helpful video from KhanAcademy
{ "language": "en", "url": "https://math.stackexchange.com/questions/752884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving $\gcd \left(\frac{a}{\gcd (a,b)},\frac{b}{\gcd (a,b)}\right)=1$ How would you go about proving that $$\gcd \left(\frac{a}{\gcd (a,b)},\frac{b}{\gcd (a,b)}\right)=1$$ for any two integers $a$ and $b$? Intuitively it is true because when you divide $a$ and $b$ by $\gcd(a,b)$ you cancel out any common factors between them resulting in them becoming coprime. However, how would you prove this rigorously and mathematically?
Assume WLOG that $a, b \geq 1$. Let $m = \dfrac{a}{\gcd(a,b)}$, and $n = \dfrac{b}{\gcd(a,b)}$, and let $c = \gcd(m,n)$. Then $c \mid m$, and $c \mid n$. This means: $(c\cdot \gcd(a,b)) \mid a$, and $(c\cdot \gcd(a,b)) \mid b$. So $(c\cdot \gcd(a,b)) \mid \gcd(a,b)$. but $\gcd(a,b) \mid (c\cdot \gcd(a,b))$. Thus: $c\cdot \gcd(a,b) = \gcd(a,b)$, and this means $c = 1$.
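A brute-force confirmation over random pairs (a sketch, not a proof, of course):

```python
from math import gcd
from random import randint, seed

seed(0)
for _ in range(100_000):
    a, b = randint(1, 10**9), randint(1, 10**9)
    g = gcd(a, b)
    assert gcd(a // g, b // g) == 1
print("ok")
```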
{ "language": "en", "url": "https://math.stackexchange.com/questions/752928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 9, "answer_id": 8 }
Finitely many prime ideals lying over $\mathfrak{p}$ Let $A$ be a commutative ring with identity and $B$ a finitely generated $A$-algebra that is integral over $A$. If $\mathfrak{p}$ is a prime ideal of $A$, there are only finitely many prime ideals $P$ of $B$ such that $P\cap A=\mathfrak{p}$. Let me say that I am aware of this answer, but I can't follow through the hint. Also, I don't know how to extend to work for algebras rather than extension rings.
One very useful tool is: For a ring map $A\to B$, and $\mathfrak{p}$ a prime ideal of $A$, the prime ideals which contract to $\mathfrak{p}$ are in $1:1$ correspondence with the prime ideals of $\kappa(\mathfrak{p})\otimes_AB$, where $\kappa(\mathfrak{p})=Q(A/\mathfrak{p})$ is the quotient field of the domain $A/\mathfrak{p}$. Now we come back to the question. Since $\kappa(\mathfrak{p})\otimes_AB$ is integral over $\kappa(\mathfrak{p})$, all prime ideals of $\kappa(\mathfrak{p})\otimes_AB$ must be maximal (in fancy words, the Krull dimension is zero). And note that $\kappa(\mathfrak{p})\otimes_AB$ is finitely generated over $\kappa(\mathfrak{p})$, hence $\kappa(\mathfrak{p})\otimes_AB$ is noetherian. A noetherian ring whose prime ideals are all maximal is an Artinian ring. An Artinian ring has only finitely many prime ideals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/753042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Set of solutions for a binomial inequality I bumped into the following inequality: $${a-b\choose c}{a\choose c}^{-1} \le \exp\left(-\frac{bc}{a}\right)$$ Playing with it a little bit, trying to bound it asymptotically for large $a$'s, using Stirling's approximation, I ended up with nothing. Finally I decided to put some numbers and check it, and figured out it is wrong. Moreover: it looked like it is always wrong. Can you prove this inequality? Edit: since it wasn't clear, I'll add that $a,b,c$ are all positive integers. Edit: I also forgot to add the assumption $a>b>c$.
Actually, I don't know if you want LHS $\ge $ RHS or LHS $\le $ RHS. But here is an example: If you take $a=b+1=c+2$ --- e.g. $a=5,b=4,c=3$, then LHS $=0$ and the RHS is positive. So LHS $<$ RHS. I think that given your restrictions $a>b>c$, it can never hold that LHS $\ge $ RHS.
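An exhaustive check over small triples is consistent with this: LHS $<$ RHS on every triple with $a>b>c\ge 1$, i.e. the posted inequality LHS $\le$ RHS actually holds (a sketch; note `math.comb` returns $0$ when the top argument is smaller than the bottom one, covering the $a-b<c$ cases):

```python
from math import comb, exp

for a in range(3, 60):
    for b in range(2, a):
        for c in range(1, b):
            assert comb(a - b, c) / comb(a, c) < exp(-b * c / a)
print("LHS < RHS on every tested triple")
```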
{ "language": "en", "url": "https://math.stackexchange.com/questions/753146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Sum of fractions of squared sines I'm trying to prove the following approximate identity for $p$ integer: $$ \sum_{l=1}^m\frac{\sin^2\left(\frac{\pi l}{p}\right)}{\sin^2\left(\frac{\pi l}{mp}\right)}\sim \frac{m^2(p-1)}{2}+O(m) $$ Things I have tried: * *Convert to an integral through a Riemann sum, however, the function $1/\sin^2(x)$ and it's derivatives are unbounded for small $x$ *I tried to relate it to this problem, but I found it impossible to eliminate the numerator, simply averaging over one period of $p$ doesn't work. Any help would be much appreciated.
For large $m$, the quantity $\pi \ell/(m p)$ is small except where $\ell \approx m$. Even then, the argument of the sine is small for even moderate values of $p$, so to first order we can replace the sine by its argument. Thus the ratio looks like, approximately, $$\left (\frac{m p}{\pi} \right )^2 \sum_{\ell=1}^m \frac{\sin^2{\frac{\pi \ell}{p}}}{\ell^2}$$ Now, you can evaluate the sum as $m \to \infty$ using the following relation, derived here: $$\sum_{\ell=-\infty}^{\infty} \frac{\sin^2{a \ell}}{\ell^2} = \pi a$$ when $a \in [0,\pi)$. Here, $a=\pi/p$, so we are OK. Thus we have that the fraction is approximately $$\frac12 \left (\frac{m p}{\pi} \right )^2 \left (\frac{\pi^2}{p}-\frac{\pi^2}{p^2} \right ) = \frac12 m^2 \left (p-1 \right )$$ ADDENDUM You can also show that the next order term is $O(m)$ by considering the next term in the expansion of the sine in the denominator. That is, we can easily show that, when the argument is "small": $$\frac1{\sin^2{\frac{\pi \ell}{m p}}} \sim \left (\frac{m p}{\pi \ell} \right )^2 + \frac13$$ The first term gives the above result. The second term may be evaluated exactly: $$\frac13 \sum_{\ell=1}^m \sin^2{\frac{\pi \ell}{p}} = \frac1{12} \left [2 m+1 - \frac{\sin{\left ( (2 m+1) \frac{\pi}{p}\right )}}{\sin{\frac{\pi}{p}}} \right ] \sim \frac{m}{6}$$ for large $m$. Thus the next order term is $O(m)$ as expected.
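A numeric check of the leading term (a sketch; the ratio should drift toward $1$ as $m$ grows, with the expected $O(1/m)$ error):

```python
import numpy as np

def S(m, p):
    l = np.arange(1, m + 1)
    return np.sum(np.sin(np.pi * l / p)**2 / np.sin(np.pi * l / (m * p))**2)

for p in (3, 5, 8):
    for m in (200, 400, 800):
        print(p, m, S(m, p) / (m**2 * (p - 1) / 2))   # -> 1 as m grows
```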
{ "language": "en", "url": "https://math.stackexchange.com/questions/753276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Centralizer of $SO(n)$ Given the set $M(n,\mathbb C)$ of all complex $n\times n$ matrices, what's the centralizer of $SO(n)$ in $M(n,\mathbb C)$? For $n=2$, the centralizer must be the matrices $A$ such that $RA=AR$ where $R$ is a rotation matrix. Since 2D rotations commute, I can see $A$ is probably a rotation matrix itself. But let's say $n \ge 3$. Then rotations don't commute in general. Can you still find a nontrivial matrix $A$ that commutes with $R$, or is $A$ just a multiple of the identity?
If $A$ centralises $SO(n)$, it commutes with every $R$ of the form $R=P\left[\pmatrix{0&-1\\ 1&0}\oplus I_{n-2}\right]P^T$ where $P$ is a permutation matrix. Therefore, when $n\ge3$, $A$ must be a diagonal matrix. Consider the equality $AR=RA$ again, we can further infer that $A$ is a scalar multiple of $I_n$.
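For $n=3$ one can confirm this numerically: stacking the linear conditions $RA - AR = 0$ over the finitely many rotations $R$ used above already cuts the commutant down to the scalars (a sketch; with column-stacking, $RA \mapsto (I\otimes R)\,\mathrm{vec}\,A$ and $AR \mapsto (R^T\otimes I)\,\mathrm{vec}\,A$):

```python
import numpy as np
from itertools import permutations

n = 3
block = np.eye(n)
block[:2, :2] = [[0, -1], [1, 0]]            # the 2x2 rotation direct-sum I_{n-2}

Ms = []
for p in permutations(range(n)):
    P = np.eye(n)[list(p)]                   # permutation matrix
    R = P @ block @ P.T
    Ms.append(np.kron(np.eye(n), R) - np.kron(R.T, np.eye(n)))

M = np.vstack(Ms)                            # all conditions R A = A R at once
print(n * n - np.linalg.matrix_rank(M))      # 1: only scalar multiples of I
```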
{ "language": "en", "url": "https://math.stackexchange.com/questions/753373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find an $n\times n$ integer matrix with determinant 1 and $n$ distinct eigenvalues Pretty much what the title suggests: for any positive integer $n$, I'm looking for an $n$-by-$n$ matrix with integer entries, determinant $1$ and $n$ distinct eigenvalues. In case it is absolutely useless to come up with such a matrix, I'm looking for a proof that such a matrix exists.
Take any representation of degree $n$ of any symmetric group $S_m$. All the matrix entries will be integers but the determinant could be $\pm 1$. As suggested by others, we can change all the signs in the first row, if needed, and get integer matrices of determinant $+1$. As all these matrices are of finite order, they will be diagonalizable. Note, though, that diagonalizability alone only gives $n$ eigenvalues counted with multiplicity; to get $n$ distinct eigenvalues, take the permutation matrix of an $n$-cycle, whose eigenvalues are the $n$ distinct $n$-th roots of unity. Its determinant is $(-1)^{n-1}$, so for even $n$ negate the single nonzero entry in the first row: the result is the companion matrix of $x^n+1$, which has determinant $1$ and the $n$ distinct roots of $x^n+1$ as eigenvalues.
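Here is the cycle/companion construction checked numerically (a sketch; eigenvalues are rounded before comparing for distinctness):

```python
import numpy as np

def example(n):
    # companion matrix of x^n - 1 (n odd) or x^n + 1 (n even)
    M = np.zeros((n, n), dtype=int)
    M[1:, :-1] = np.eye(n - 1, dtype=int)   # subdiagonal of ones
    M[0, -1] = 1 if n % 2 else -1
    return M

for n in range(2, 9):
    M = example(n)
    ev = np.round(np.linalg.eigvals(M), 8)
    print(n, round(np.linalg.det(M)), len(set(ev)))   # det 1, n distinct values
```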
{ "language": "en", "url": "https://math.stackexchange.com/questions/753485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving that the line integral $\int_{\gamma_{2}} e^{ix^2}\:\mathrm{d}x$ tends to zero Let $f(z) = e^{iz^2}$ and $\gamma_2 = \{ z : z = Re^{i\theta}, 0 \leq \theta \leq \frac{\pi}{4} \} $. All the sources I have found online, says that the line integral $$ \left| \int_{\gamma_2} e^{iz^2}\mathrm{d}z \right| $$ tends to zero as $R \to \infty$. By using the ML-inequality one has $$ \left| \int_{\gamma_2} e^{iz^2}\mathrm{d}z \right| \leq \frac{R\pi}{4} \max_{\theta \in [0,\pi/4]} \left| e^{iR\exp(i2\theta)} \right| \leq \frac{R\pi}{4} \max_{\theta \in [0,\pi/4]} e^{-R^2 \sin 2 \theta} $$ The problem is now that this is a decreasing function, and if one inserts $\theta=0$, then the inequality becomes$\pi R e^{0}/4 = \pi R / 4$, which does not tend to zero. If one instead looks at the interval $(0,\pi/4)$ then everything works out. This document instead tells us to look at Jordans lemma, which one can use by rewriting the function as $e^{ix^2} = e^{ix^2-ix}e^{ix}$. However I run into exactly the same problems here if one studies $[0,\pi/4]$, as then one get $\pi/4 \cdot \cos R(R-1)$. Both Jordan's lemma and the $ML$-inequality clearly states that one should include the endpoints, and clearly this does not work here? What does one do instead? Why is it wrong to look at $[0,\pi/4]$ and correct to ignore the endpoints?
You can do this as follows. Start as you did with $$\left\vert\int_{\gamma_2} e^{iz^2}dz\right\vert\leq R\,\int_0^{\frac\pi4} e^{-R^2\sin 2\theta} d\theta\, . $$ Then observe that on $[0,\frac\pi4]$ you have $$\sin 2\theta \geq \frac4\pi \, \theta\, ,$$ thanks to the concavity of the sine function on $[0,\frac\pi2]$. It follows that $$\left\vert\int_{\gamma_2} e^{iz^2}dz\right\vert\leq R\,\int_0^{\frac\pi4} e^{-\frac{4R^2}\pi \theta} d\theta=R\times\frac{\pi}{4R^2}(1-e^{-R^2})\leq \frac{\pi}{4R}\, ,$$ which gives the result.
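A numeric check of the final bound (a sketch using scipy's quad):

```python
import numpy as np
from scipy.integrate import quad

for R in (1.0, 2.0, 5.0, 10.0):
    val, _ = quad(lambda th: np.exp(-R**2 * np.sin(2 * th)), 0, np.pi / 4)
    print(R, R * val, np.pi / (4 * R))   # R*integral <= pi/(4R), and both -> 0
```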
{ "language": "en", "url": "https://math.stackexchange.com/questions/753676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Inequality with two binomial coefficients I am having trouble seeing why $$ \binom{k}{2} + \binom{n - k}{2} \le \binom{1}{2} + \binom{n - 1}{2} = \binom{n - 1}{2} $$
Assuming $1\le n$ and $0\le k\le n$, it's equivalent to \begin{align*}k(k-1)+(n-k)(n-k-1)&\le(n-1)(n-2)\\ (n-k)(n-k-1)&\le(n-k-1+k)(n-2)-k(k-1)\\ (n-k)(n-k-1)&\le(n-k-1)(n-2)+k(n-2)-k(k-1)\\ 0&\le(n-k-1)(k-2)+k(n-k-1)\\ 0&\le 2(n-k-1)(k-1)\end{align*} So it holds for all $0<k<n$ (and fails at $k=0$ or $k=n$ for $n\ge 2$, where the left side is $\binom{n}{2}>\binom{n-1}{2}$).
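And an exhaustive check of the claimed range (a sketch; `math.comb` returns $0$ for $\binom{k}{2}$ with $k<2$):

```python
from math import comb

assert all(comb(k, 2) + comb(n - k, 2) <= comb(n - 1, 2)
           for n in range(2, 100) for k in range(1, n))
print("holds for all 0 < k < n")
```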
{ "language": "en", "url": "https://math.stackexchange.com/questions/753799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Angle in figure consisting of a square surrounded by semi circles I'd like to know how to get the angle in the following problem: It is a square with side equal to 1. The radius of each semi circle is equal to the side of the square. How can this angle be determined?
It's $30^\circ$. Let the point of intersection of "upper" arcs $BD$ and $AC$ be called $E$, and of upper $BD$ with lower arc $AC$ be called $F$. You should recognize that $\triangle ABE$ is equilateral (why?). What about $\triangle ADF$? Now finish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/753874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
What is a zero morphism in an abelian category I am trying to familiarize myself with some basic category theory and I am getting confused with what a $0$-morphism is. If we are in category of say $k$-vector spaces then I am guessing $0$-morphism would be the map that sends everything to $0$. In these examples it makes more sense to me because each object has this $0$ element. But it's not really clear to me what happens in more abstract cases. I would appreciate if someone could explain to me how I should think of these $0$-morphisms. Thanks!
The zero morphism $A \to B$ can be factored into $$ A \to 0 \to B $$ where $0$ is a zero object. (i.e. it is a terminal object and an initial object) As an aside, when you wrote it as "$0$-morphism", my first reaction was that you were referring to the concept from higher category theory; e.g. in $\mathbf{Cat}$, categories are $0$-morphisms (i.e. objects), functors are $1$-morphisms, and natural transformations are $2$-morphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/753963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that there is no integer n with $\phi(n)$ = 14 I did the following proof and I was wondering if its valid. It feels wrong because I didn't actually test the case when purportedly $n$ is not prime, but please feel free to correct me. Assume there exists $n$ such that $\phi(n) = 14$. Assume $n$ is prime. Then $\phi(n) = n-1$. Then here, n-1 = 14, so n = 15. We know since Euler's totient function is multiplicative that $\phi(xy)$ = $\phi(x)\phi(y)$, so $\phi(15) = \phi(3)\phi(5)$, but alas $\phi(3)\phi(5) = 2\cdot4 = 8 \ne 14$. If $n$ is not prime a similar argument follows since we know then that $n$ must be composed of prime numbers by the prime factorization theorem.
This does not hold. What kind of similar argument are you then talking about? You can use the fact that $\phi$ is multiplicative. Assume $n = p_1^{a_1}p_2^{a_2}\cdots p_t^{a_t}$; then $$\phi(n) = \phi(p_1^{a_1})\phi(p_2^{a_2})\cdots\phi(p_t^{a_t}) = 2 \cdot 7 = 1 \cdot 14.$$ Use this to arrive at a contradiction.
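A brute-force confirmation (since $\phi(n)\ge\sqrt{n/2}$, a standard bound, any $n$ with $\phi(n)=14$ would satisfy $n\le 2\cdot 14^2=392$, so the search below is exhaustive):

```python
from sympy import totient

print([n for n in range(1, 393) if totient(n) == 14])   # []
```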
{ "language": "en", "url": "https://math.stackexchange.com/questions/754023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Counting triangles How do I count the number of triangles in this figure? Any help? I want to calculate it by using a formula.
My answer has mathematics and observation. My approach is to get the number of triangles of a particular size, and then add that to the next size, and so on. First, as your figure has four small triangles on each side, I take $t$ as $4$. But that comes later. Starting with triangles of Size $1$. The number of Size $1$ is $1$ on the top row, $3$ on the second, $5$ on the third, and so on. In fact, it starts with $1$ and goes in an Arithmetic Progression of common difference $2$. This makes it the series of odd numbers from $1$. As we all know, the sum of the odd numbers from the first to the $n^{th}$ is $n^2$. That is, $1+3+5+7+...+(2n-1) = n^2$ And since that is what we have here, The number of triangles of Size $1$ is $t^2$. Onto Size $2$. You will see that the triangles of Size $2$ begin at the second line from the top. In fact, for this kind of figure, triangles of size $a$ will begin at the $a^{th}$ line from the top. If we look here, there is $1$ such triangle on the second line, $2$ on the third line, and so on. But here, it continues only until $(t-1)$. As we know, $1+2+3+4+...+n = {{n(n+1)}\over2}$. Here, $n$ is $(t-1)$. So, it is changed to $(t-1)\{(t-1)+1\}\over2$, on simplifying which we get $t(t-1)\over2$. So, The number of triangles of Size $2$ is $t(t-1)\over2$. For Size $3$, the triangles will start from the third line from the top, and continue from $1$ till $(t-2)$. Going by the same method applied above, we get that The number of triangles of Size $3$ is $(t-1)(t-2)\over2$. Taking into account upside down triangles of sizes $2$ and above in figures of sizes $4$ and above gets us ${\sum_{n=4}^{t}{2(n-4)}}+1$ This goes on and on, so the final formula for the total number of triangles $n$ in a figure of side $t$ is: $$\small{n=t^2+{{t(t-1)}\over{2}}+{{(t-1)(t-2)}\over2}+{{(t-2)(t-3)}\over2}+{\cdots}+{{1\times 0}\over2}+\{\sum_{n=4}^{t}{2(n-4)\}}+1}$$ Which can also be written as: $$t^2+{{\sum_{n=0}^{t-1} {(t-n)(\{t-(n+1)\}\over2}}}+{\{{\sum_{n=4}^{t}{2(n-4)}}\}+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/754116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving minimum vertex cover Every vertex cover of a graph contains a minimum vertex cover. I know the statement to be true, but how do I go about proving it?
Let $C = \{v_{1}, ..., v_{k} \}$ be a vertex cover. By definition of a vertex cover, every edge is incident to some vertex in $C$. First, a caveat: what $C$ must contain is a minimal vertex cover (one from which no vertex can be removed), not a minimum-size one. For the path $a-c-b$, the set $\{a,b\}$ is a vertex cover that does not contain the minimum cover $\{c\}$. With that reading, remove vertices inductively: as long as some $v \in C$ can be deleted with $C \setminus \{v\}$ still covering every edge, delete it. The process terminates since $C$ is finite, and it cannot end with the empty set unless $G$ has no edges (in which case $\emptyset$ is itself the minimal cover). When it stops, we hold a subset $C' \subseteq C$ that covers $G$ and from which no vertex can be removed — that is, a minimal vertex cover contained in $C$.
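The removal argument is easy to run as code; a sketch, using the path $a-c-b$ from the note above:

```python
def minimalize(edges, cover):
    """Greedily shrink a vertex cover to a minimal cover contained in it."""
    cover = set(cover)
    for v in sorted(cover):
        smaller = cover - {v}
        if all(u in smaller or w in smaller for u, w in edges):
            cover = smaller          # v was removable; drop it
    return cover

edges = [("a", "c"), ("c", "b")]
print(minimalize(edges, {"a", "b"}))        # {'a', 'b'}: minimal, but not minimum
print(minimalize(edges, {"a", "b", "c"}))   # {'c'}: here we reach the minimum
```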
{ "language": "en", "url": "https://math.stackexchange.com/questions/754201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that this triangle is equilateral? Given $\triangle ABC$. Let $D$ be the point where the altitude from the $A$ vertex intersects $\overline{BC}$, and let $E$ be the intersection of the bisector of $\angle ABC$ with $\overline{AC}$. Let $P$ be the point of intersection of $\overline{AD}$ with $\overline{BE}$. Prove that if $AP=2PD$ and $BP=2PE$, then $\triangle ABC$ is equilateral. This is essentially what I've tried. But I don't know how to continue; I can't find any useful congruences.
By the Angle Bisector Theorem in $\triangle ABD$, $$\frac{|BA|}{|BD|} = \frac{|PA|}{|PD|} = \frac{2}{1}$$ Therefore, $\triangle ABD$ is a $30^\circ$-$60^\circ$-$90^\circ$ triangle; and, then, so is $\triangle BPD$. This implies that your single-tick-mark segments are congruent to your double-tick-mark segments, so that $\triangle APE \cong \triangle BPD$ (SAS). The conclusion follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/754293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Application of Kodaira Embedding Theorem I am going to give a talk on Kähler manifolds. In particular, I will outline a proof of the Kodaira Embedding theorem. I also wish to give some applications of the theorem. One of the applications would be the Riemann bilinear relations on complex tori. I am searching for other applications. Does anyone have a good suggestion?
Let me mention some important theorems around the Kodaira embedding theorem. Let $X$ be a compact complex manifold, and $L$ be a holomorphic line bundle over $X$ equipped with a smooth Hermitian metric $h$ whose curvature form (locally given by $-\frac{i}{2\pi}\partial\bar\partial\log h$) is a positive definite real $(1,1)$-form, and so defines a Kähler metric $\omega$ on $X$. Then the Kodaira embedding theorem states that there is a positive integer $k$ such that $L^k$ is globally generated (i.e. for every $x\in X$ there is a global holomorphic section $s\in H^0(X,L^k)$ with $s(x)\neq 0$) and the natural map $X\to\mathbb P(H^0(X,L^k)^*)$, which sends a point $x$ to the hyperplane of sections which vanish at $x$, is an embedding. In particular, $X$ is a projective manifold. Theorem 1.1 of this paper extends this theorem of Gang Tian to the case of $X$ not necessarily compact, with compact analytic subvariety $\Sigma$ and holomorphic-Hermitian line bundle $(L,h)$ such that $h$ is continuous on $X\setminus\Sigma$ and has semi-positive curvature current $\gamma=c_1(L,h)$. In this context the authors consider the spaces of $L^2$-holomorphic sections of the tensor powers $L^p|_{X\setminus\Sigma}$, the Bergman density functions $P_p$ associated with orthonormal bases, and the Fubini-Study $(1,1)$-currents $\gamma_p$ for which the $P_p$ serve as potentials. Under these conditions, it is shown in Theorem 1.1 that each $\gamma_p$ extends to a closed positive current on $X$, and that $\frac{1}{p}\gamma_p$ approaches $\gamma$ weakly if $\frac{1}{p}\log P_p\to 0$ locally uniformly on $X\setminus\Sigma$, as $p\to\infty$. We also have the following theorem: If $X$ is a normal compact Kähler variety with isolated singularities that admits a holomorphic line bundle $L$ that is positive when restricted to the regular part of $X$, then $X$ is biholomorphic to a projective-algebraic variety.
{ "language": "en", "url": "https://math.stackexchange.com/questions/754354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
valuation ring is a field? Suppose $a$ and $a'$ are units of $B$, and $b$ and $b'$ are elements of any ideal of $B$. $x$ is an element of $K$. $K$ consists of the quotients $a/a'$, $a/b$, $b/a$, $b/b'$. $\color{green} x=a/a' \Rightarrow x\in B~and~x^{-1}\in B$ $\color{green} x=a/b \Rightarrow x^{-1}\in B $ $\color{green} x=b/a \Rightarrow x\in B $ $\color{red} x=b/b' \Rightarrow x\notin B~and~x^{-1}\notin B $ So there would be no ideal in $B$ if $B$ is a valuation ring. Something seems wrong. Can someone fix it and tell me what a valuation ring is?
I don't follow your reasoning. I think you may be arguing that $K$ has no (nonzero) ideals. Here is a simple nondegenerate example: let $B$ the the ring of all rational numbers with odd denominator. It is a valuation ring of $\mathbf{Q}$. $B$ is also a local ring, whose maximal ideal is the one generated by $2$; it is the set of all rational numbers with even numerator and odd denominator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/754442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Problem with trigonometric equation I am having trouble solving this equation $$4\cdot \sin \theta + 2 \cdot \sin 2\theta =5$$ Thank you for your help.
If you put $t=\tan \frac {\theta}2$ you obtain $$4\cdot\sin \theta+4\cdot \sin \theta\cdot\cos\theta=4\left(\frac {2t}{1+t^2}\right)\left(1+\frac{1-t^2}{1+t^2}\right)=5$$ Multiply through by $(1+t^2)^2$ to obtain $$16t=5(1+t^2)^2$$ From which it is clear that any solution has $t$ positive (the right-hand side is positive), and a quick sketch graph shows there will be two solutions. $t=0$: $16t=0$, $5(1+t^2)^2=5\gt 0$. $t=.5$: $16t=8$, $5(1+t^2)^2=\frac {125}{16}\lt 8$. $t=1$: $16t=16$, $5(1+t^2)^2=20\gt 16$. So there is one solution for $t$ in $(0,0.5)$ and another in $(0.5,1)$. The equation can be rewritten as a quartic $$5t^4+10t^2-16t+5=0$$
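Numerically, the quartic has exactly the two real roots predicted above, and both solve the original equation (a quick numpy check):

```python
import numpy as np

roots = np.roots([5, 0, 10, -16, 5])                  # 5t^4 + 10t^2 - 16t + 5
t = np.sort(roots[np.isclose(roots.imag, 0)].real)    # two real roots in (0, 1)
theta = 2 * np.arctan(t)
print(t)
print(4 * np.sin(theta) + 2 * np.sin(2 * theta))      # both ~ 5
```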
{ "language": "en", "url": "https://math.stackexchange.com/questions/754542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find bases of matrix without multiplying This question is related to a solved problem in Gilbert Strang's 'Introduction to Linear Algebra' (Chapter 3, Question 3.6A, Page 190). Q) Find bases and dimensions for all four fundamental subspaces of A if you know that $A = \begin{bmatrix}1 & 0 & 0\\2 & 1 & 0 \\ 5 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 3 & 0 & 5\\0 & 0 & 1 & 6\\ 0 & 0 & 0 & 0\end{bmatrix} = LU = E^{-1}R$ Answer given in the text: This matrix has pivots in columns $1$ and $3$. Its rank is $r=2$. Column space : Basis $(1,2,5)$ and $(0,1,0)$ from $E^{-1}$ Why does he choose the first two columns of $E^{-1}$ as a basis? If anything, the pivot columns of $R$ are an obvious choice for basis.
If you think of the product $E^{-1}R$ as the composition of linear maps, then $E^{-1}$ acts "last", and its columns largely determine the image of $E^{-1}R$ (of course, it also depends on $R$). More generally, for any functions between any sets that you can compose (assuming $Im(g) \subset D(f)$, where $D(f)$ is the domain where $f$ is defined), we have $Im(f \circ g) = f(Im(g)) \subset Im(f)$, because by definition $Im(f \circ g) = \{ f(g(x)) \mid x \in D(g)\} = \{ f(y) \mid y \in Im(g) \} \subset \{ f(z) \mid z \in D(f) \}$ If $g$ is a constant function, so is $f \circ g$, and even if the image of $f$ can be very large, the image of $f \circ g$ is just a singleton. On the other hand, if $g$ is the identity, then $Im(f \circ g) = Im(f)$ is maximal. You can have every case in between. More specifically here, in a linear setting where $R : \mathbb{R}^4 \rightarrow \mathbb{R^3}$ and $E^{-1} : \mathbb{R^3} \rightarrow \mathbb{R^3}$, $Im(E^{-1}R)=E^{-1}(Im R)=E^{-1}(span\{Re_1,Re_2,Re_3,Re_4\})$ if $(e_1,e_2,e_3,e_4)$ is the canonical basis of $\mathbb{R^4}$. If one writes $(e'_1,e'_2,e'_3)$ for the canonical basis of $\mathbb{R}^3$, since the last element of each column of $R$ is zero (in other words: the last row is null), this shows that $Im(R) \subset span\{e'_1,e'_2\}$ and $rank(R) \leq 2$. But the rank of $R$ is at least $2$ (the first and last columns are linearly independent, e.g.), hence it is exactly $2$, and $Im(R) = span\{e'_1,e'_2\}$. Finally, $Im(E^{-1}R)=span\{E^{-1}(e'_1),E^{-1}(e'_2)\}$, which is spanned precisely by the first two columns of $E^{-1}$.
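A small numpy check that the first two columns of $E^{-1}$ really span the column space (a sketch: appending them to $A$ must not raise the rank):

```python
import numpy as np

E_inv = np.array([[1, 0, 0], [2, 1, 0], [5, 0, 1]])
R = np.array([[1, 3, 0, 5], [0, 0, 1, 6], [0, 0, 0, 0]])
A = E_inv @ R

print(np.linalg.matrix_rank(A))                               # 2
print(np.linalg.matrix_rank(np.hstack([A, E_inv[:, :2]])))    # still 2
```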
{ "language": "en", "url": "https://math.stackexchange.com/questions/754644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I solve this definite integral: $\int_0^{2\pi} \frac{dx}{\sin^{4}x + \cos^{4}x}$? $$\int_0^{2\pi} \frac{dx}{\sin^{4}x + \cos^{4}x}$$ I have already solved the indefinite integral by transforming $\sin^{4}x + \cos^{4}x$ as follows: $\sin^{4}x + \cos^{4}x = (\sin^{2}x + \cos^{2}x)^{2} - 2\cdot\sin^{2}x\cdot\cos^{2}x = 1 - \frac{1}{2}\cdot\sin^{2}(2x) = \frac{1 + \cos^{2}(2x)}{2}$, and then using the $\tan(2x) = t$ substitution. But if I do the same with the definite integral, both bounds of the integral become $0$.
\begin{aligned} & \int_{0}^{2 \pi} \frac{d x}{\sin ^{4} x+\cos ^{4} x} \\ =& \int_{0}^{2 \pi} \frac{d x}{\left(\sin ^{2} x+\cos ^{2} x\right)^{2}-2 \sin ^{2} x \cos ^{2} x} \\ =& \int_{0}^{2 \pi} \frac{d x}{1-\frac{\sin ^{2} 2 x}{2}} \\ =& 16 \int_{0}^{\frac{\pi}{4}} \frac{d x}{1+\cos ^{2} 2 x} \\ =& 16 \int_{0}^{\frac{\pi}{4}} \frac{\sec ^{2} 2 x}{\sec ^{2} 2 x+1}d x \\ =& 8 \int_{0}^{\frac{\pi}{4}} \frac{d(\tan 2 x)}{\tan ^{2}(2 x)+2} \\=&4 \sqrt{2}\left[\tan ^{-1}\left(\frac{\tan 2 x}{\sqrt{2}}\right)\right]_{0}^{\frac{\pi}{4}} \\ =& 2 \sqrt{2} \pi \end{aligned}
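A quick numerical confirmation of $2\sqrt{2}\,\pi \approx 8.8858$:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1 / (np.sin(x)**4 + np.cos(x)**4), 0, 2 * np.pi)
print(val, 2 * np.sqrt(2) * np.pi)   # both ~ 8.885766
```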
{ "language": "en", "url": "https://math.stackexchange.com/questions/754750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
concerning a cheque A man went into a bank to cash a check. In handing over the money the cashier, by mistake, gave him dollars for cents and cents for dollars. He pocketed the money without examining it and on the way home he spent a nickel. Later, on examining it, he found that he had twice the amount of money written on the check. He had no money in his pocket before going to the bank. How can we solve this problem? I tried solving it by making equations but it does not work. The answer is given as $31.63$, on which he received $63.31$ and spent a nickel. But other than guessing, is there any way to find the answer to this problem?
Let $D$ be the check's actual number of dollars and $C$ be its actual number of cents. Then the actual amount of the check, expressed in pennies, is $$A=100D+C$$ The amount the man is given is $$G=100C+D$$ The pertinent equation is $$G-5=2A$$ Can you take it from there? Added later: A couple of people correctly admonished me for leaving the hard part of the solution for the OP to do. That wasn't really my intention; I had miscounted the three equations as having three unknowns. Let me try to atone for that by suggesting a fairly slick way to get to the final answer. As others have found, the problem boils down to finding a solution in non-negative integers to the equation $$98C-199D=5$$ with $C\lt100$. Since $98$ and $199$ have no common factor, the equation has a unique solution with $C\lt199$. Moreover, if you can find any integer solution, then you get to the solution with $C\lt199$ by subtracting an appropriate multiple of $199$. (We might note at this point that there's no guarantee that the solution with $C\lt199$ will actually satisfy $C\lt100$. If the problem had specified some amount other than a nickel, there might not be a solution.) The standard way to solve the equation $98C-199D=5$ is to run the Euclidean Algorithm on it. But let's see if we can eyeball our way more quickly. The fact that $199$ is close to $2\times98=196$ suggests a clever multiplication of the equation by $2$: $$196C-199(2D)=10$$ which can be rewritten as $$199(C-2D)-3C=10$$ If we now note that $$1990-1980=10$$ we see that $$C={1980\over3}=660$$ is a solution (with $660-2D=10$ giving an integer value to $D$). To get it below $199$, we need to subtract the appropriate multiple of $199$: $$660-3\times199=63$$ The corresponding value of $D$ is now $${98\times63-5\over199}=31$$
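Of course the whole search space is tiny, so the non-slick route is a four-line brute force over dollar/cent pairs:

```python
for D in range(100):          # dollars written on the check
    for C in range(100):      # cents written on the check
        if 100 * C + D - 5 == 2 * (100 * D + C):
            print(D, C)       # 31 63: a $31.63 check, $63.31 handed over
```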
{ "language": "en", "url": "https://math.stackexchange.com/questions/754864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
Prove that this function is injective I need to prove that this function is injective: $$f: \mathbb{N} \times \mathbb{N} \to \mathbb{N}$$ $$f: (x, y) \to (2y-1)(2^{x-1})$$ Sadly, I'm stumbling over the algebra. Here is what I have so far: Suppose $f(x, y) = f(a, b)$. We want to show that $x = a$ and $y = b$. $$(2y-1)(2^{x-1}) = (2b-1)(2^{a-1})$$ $$(2y-1)\dfrac{2^{x}}{2} = (2b-1)\dfrac{2^a}{2}$$ At this point I got stuck, and I don't know how to get it to a place where I can solve and conclude that $x = a$ and $y = b$.
$$(2y-1)2^{x-1}= (2b-1)2^{a-1}$$ If $x\ne a$ then either $x>a$ or $x<a$. Just call whichever one is bigger $a$, so that $x<a$. Divide both sides by $2$, and repeat $x-1$ times. For example, say $(2y-1)2^{x-1}=1344$. Dividing by $2$ gives $672$; dividing by $2$ again gives $336$; dividing by $2$ again gives $168$; dividing by $2$ again gives $84$; dividing by $2$ again gives $42$; dividing by $2$ again gives $21$, an odd number, so $2y-1=21$ and $y=11$. You should also get $21$ on the right side after all those divisions by $2$. But you get $(2b-1)2^{a-x}$. Since $a>x$, that's an even number, so it can't be $21$. Hence $a$ must be equal to $x$, so $2y-1=2b-1$, and finally $y=b$.
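The underlying fact is that every positive integer factors uniquely as an odd number times a power of two, which is exactly the repeated halving above. A small Python sketch of the map and its inverse (an illustration added here, not part of the original argument):

    def f(x, y):
        return (2 * y - 1) * 2 ** (x - 1)

    def f_inverse(n):
        x = 1
        while n % 2 == 0:         # divide by 2 until an odd number remains
            n //= 2
            x += 1
        return (x, (n + 1) // 2)  # the remaining odd part is 2y - 1

    assert f_inverse(1344) == (7, 11)   # 1344 = 21 * 2**6, as in the example
    assert all(f_inverse(f(x, y)) == (x, y)
               for x in range(1, 8) for y in range(1, 8))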
{ "language": "en", "url": "https://math.stackexchange.com/questions/754986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The derivative of $\rho e^{it}$ Why is $${df \over dz} \rho e^{it} = i \rho e^{it} \text{?}$$ The product rule states that $$ {df\over dz}(f_1 \cdot f_2) = f_1 f'_2 + f'_1 \cdot f_2 $$ so why doesn't this imply that $$ {df\over dz}(\rho \cdot e^{it}) = \rho e^{it} + 0 \cdot e^{it} \text{?} = \rho e^{it} \ne i \rho e^{it} \text{?} $$
You're confusing three different things with each other: $$ \frac{df}{dz}, \qquad \frac{d}{dz}, \qquad\frac{d}{dt} $$ If you had written $$ \frac{d}{dt} \rho e^{it} = \rho ie^{it} $$ then it would be correct, but what you have written is at best a misunderstanding of notation. Applying the product rule, one gets $$ \begin{align} \frac{d}{dt} \rho e^{it} & = \rho\frac{d}{dt} e^{it} + e^{it}\frac{d\rho}{dt} \\[12pt] & = \rho\frac{d}{dt}e^{it} + e^{it}\cdot 0 \\[12pt] & = \rho\frac{d}{dt}e^{it}. \end{align} $$ Next, use the chain rule: $$ \rho\frac{d}{dt} e^{it} = \rho e^{it} \frac{d}{dt}(it). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/755095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Give an equational proof $ \vdash (p \lor \lnot r) \rightarrow (p \lor q) \equiv \lnot q \rightarrow (r \lor p)$ Give an equational proof $$ \vdash (p \lor \lnot r) \rightarrow (p \lor q) \equiv \lnot q \rightarrow (r \lor p)$$ What I tried $(p \lor \lnot r) \rightarrow (p \lor q)$ Applying De morgan $\lnot(\lnot p \land r) \rightarrow (p \lor q)$ Applying Implication rule $ (\lnot p \land r) \lor (p \lor q)$ Distributing $[(p \lor q) \lor \lnot p] \land [(p \lor q) \lor r] $ And I couldn't continue. See George Tourlakis, Mathematical Logic (2008) or this post for a list of axioms and theorems.
Here is a shorter and more 'documented' proof, compared to the original answer to this old question.$ \newcommand{\calc}{\begin{align} \quad &} \newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}} \newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} } \newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & } \newcommand{\endcalc}{\end{align}} \newcommand{\Ref}[1]{\text{(#1)}} \newcommand{\then}{\rightarrow} \newcommand{\when}{\leftarrow} \newcommand{\true}{\top} \newcommand{\false}{\bot} $ Our strategy will be to start at the most complex side of the equivalence (the left hand side), simplify as much as possible, and then from that point work towards the other side. So we calculate: $$\calc p \lor \lnot r \;\then\; p \lor q \op\equiv\hints{expand $\;\then\;$ using theorem $(2.4.11)$ -- since $\;\then\;$ is usually more}\hint{difficult to manipulate, and this is the shortest way to expand it} \lnot (p \lor \lnot r) \lor p \lor q \op\equiv\hints{DeMorgan: theorem $(2.4.17)$; double negation: theorem $(2.4.4)$}\hint{-- this looks like the only way to make progress} (\lnot p \land r) \lor p \lor q \op\equiv\hints{distribute $\;{}\lor p\;$ over $\;\land\;$ by theorem $(2.4.23)(ii)$ and axiom $(6)$}\hint{-- to bring both $\;p\;$'s together in the hope of simplifying} ((\lnot p \lor p) \land (r \lor p)) \lor q \op\equiv\hint{excluded middle: axiom $(9)$ -- to simplify} (\true \land (r \lor p)) \lor q \op\equiv\hint{$\;\true\;$ is identity of $\;\land\;$, theorem $(2.4.20)$ -- to simplify some more} r \lor p \lor q \tag{*} \op\equiv\hints{reorder using symmetry of $\;\lor\;$, i.e., axiom $(6)$}\hint{-- working towards our goal} q \lor r \lor p \op\equiv\hint{reintroduce $\;\then\;$ using theorem $(2.4.11)$} \lnot q \;\then\; r \lor p \endcalc$$ which completes the proof. Note how the steps up until $\Ref{*}$ were all more or less 'forced' by our strategy of simplifying, and from there the goal was directly within reach. Also, note that his proof format implicitly uses transitivity $(1.4.13)(c)$ and Leibniz $(2.1.16)$. Also, I implicitly used associativity of $\;\lor\;$ $(5)$ by leaving out the parentheses in $\;A \lor B \lor C\;$, and occasionally symmetry of $\;\equiv\;$ $(2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/755170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does a positive definite matrix with a repeated eigenvalue have infinitely many square roots? So, if we consider a positive definite matrix $A$, (meaning that $A$ is self-adjoint $(Ax,x) > 0$ and also that $A$ has strictly positive eigenvalues) we see right away that since it is self adjoint, and has an orthonormal basis of eigenvectors $h_{i}$, that it has a uniquely determined positive square root. We see this, since for the basis \begin{align} \{h_{1},...,h_{n}\} \end{align} each $x$ can be written as \begin{align} x &= \sum_{i = 1}^{n}c_{i}h_{i} \\ \Rightarrow Hx &= \sum_{i=1}^{n}c_{i}Hh_{i} = \sum_{i=1}^{n}c_{i}\lambda_{i}h_{i}. \end{align} We let $\sqrt{H}x$ be represented by \begin{align} \sum_{i=1}^{n}c_{i}\sqrt{\lambda_{i}}h_{i}. \end{align} where we take $\sqrt{\lambda_{i}}$ to be the positive square root of $\lambda_{i}$. Naturally, one sees that this is the unique positive definite square root, and that there are $2^{n}$ possible square roots, however, I keep reading that if $A$ has a repeated eigenvalue there are infinitely many square roots. Why is this? Please note by square root I mean $S$ such that $S^{2} = A$.
In case you mean "square root" as $C C^T = A:$ Suppose $A = \lambda^2 I$ two by two, $$ C \; = \; \lambda \left( \begin{array}{rr} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right) , $$ which is to say that a rotated basis of a two-dimensional eigenspace is being used.
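A quick numerical illustration of this one-parameter family, written as a Python/numpy sketch (an addition to the answer): every angle $\theta$ yields a different $C$ with $CC^T = A$.

    import numpy as np

    lam = 3.0
    A = lam**2 * np.eye(2)
    for theta in np.linspace(0.0, np.pi, 7):
        c, s = np.cos(theta), np.sin(theta)
        C = lam * np.array([[c, s], [-s, c]])   # scaled rotation matrix
        assert np.allclose(C @ C.T, A)          # holds for every theta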
{ "language": "en", "url": "https://math.stackexchange.com/questions/755268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Which means adjoint problem of a differential equation? I wanted to know if anyone can help me with the following problem: Get the adjoint problem (differential equation and boundary conditions) for the problem given by: $$\frac{d^2 u}{dx^2}=f(x)$$ $$0<x<1$$ $$u(0)=\frac{du}{dx}(0)=0$$ Actually I do not know to be the "adjoint problem", any help, example or reference would help me too.
You need to find an operator adjoint to the given one $$ L = \frac {d^2}{dx^2} $$ Condition on real adjoint operator is $$ \left \langle Lu, v\right \rangle = \left \langle u, L^*v\right \rangle \Longleftrightarrow \int_0^1 \left(Lu\right ) v\ dx = \int_0^1 u \left ( L^* v\right) dx $$ Now, just do the integration by parts, twice. $$ \int_0^1 u'' v dx = \left . u' v \right |_0^1 - \int_0^1 u'v'dx = \left . u' v \right |_0^1 - \left . u v' \right |_0^1 + \int_0^1 u v'' dx $$ Expand boundary conditions $$ \left . u' v \right |_0^1 - \left . u v' \right |_0^1 = \left ( \left . u' v \right |_1 - \left . u v' \right |_1 \right )- \left( \left . u' v \right |_0 - \left . u v' \right |_0 \right ) = \left . u' v \right |_1 - \left . u v' \right |_1 $$ Analyzing it you can deduce that if $\left . v\right |_1 = \left . v' \right |_1 = 0$, then $$ \left \langle Lu, v\right \rangle = \int_0^1 u'' v dx = \int_0^1 uv'' dx = \left \langle u, L^*v\right \rangle $$ so, final answer is $$ L^* = \frac {d^2}{dx^2} $$ with boundary conditions $$ v(1) = \frac {dv}{dx}(1) = 0 $$ or in terms of ODE $$ \frac {d^2v}{dx^2} = f \\ v(1) = \frac {dv}{dx}(1) = 0,\qquad x \in [0,1] $$
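One can spot-check the integration-by-parts identity with sympy; here is a minimal sketch (an addition, assuming sympy is available) using sample functions that satisfy the respective boundary conditions:

    import sympy as sp

    x = sp.symbols('x')
    u = x**3            # satisfies u(0) = u'(0) = 0
    v = (1 - x)**2      # satisfies v(1) = v'(1) = 0
    lhs = sp.integrate(sp.diff(u, x, 2) * v, (x, 0, 1))   # <Lu, v>
    rhs = sp.integrate(u * sp.diff(v, x, 2), (x, 0, 1))   # <u, L*v>
    assert sp.simplify(lhs - rhs) == 0                    # both equal 1/2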
{ "language": "en", "url": "https://math.stackexchange.com/questions/755368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the reasoning/algebra for my proof correct? (musical tuning theory proof) This isn't for a class, I was just wondering if I would be able to work out a proof for something like this myself for fun, and wanted to verify that my methods are correct. Basically, what I'm trying to prove, in terms of music theory is: Prove that it is impossible to stack a number of pure 5ths in just intonation, and to end up with a perfectly tuned octave, or multiple of octaves. Or more formally... Let $ (\frac{3}{2})^m = (\frac{1}{2})^n $, where n and m are positive integers. Show that there exists no positive integers m and n such that the equation is true. Assume the equation is true: $ (\frac{3}{2})^m = (\frac{1}{2})^n $ $ (\frac{3}{2})^m = \frac{1}{2^n} $ $ (\frac{3}{2})^m = 2^{-n} $ $ \log{_2}[(\frac{3}{2})^m] = -n $ $ n = -\log{_2}[(\frac{3}{2})^m] $ $ n = -m\log{_2}[\frac{3}{2}] $ $ n \not\in Z^{+} $ $ \therefore $ by contradiction, there exists no n, m $\in Z^{+}$ such that $ (\frac{3}{2})^m = (\frac{1}{2})^n $. Just for reference, showing a more elegant proof of the same thing would also be appreciated (maybe from using other proven theorems in mathematics).
Yes, your proof is correct. Here's another proof; the key fact in both arguments is that $\log_23$ is irrational (if $\log_23=p/q$ then $3^q=2^p$, which is impossible by parity): $$\left(\dfrac{3}{2}\right)^m=\left(\dfrac{1}{2}\right)^n\\ \implies 3^m=2^{m-n}\\ \implies m-n=m\log_23\not\in\mathbb{Z}\text{ as $\log_23$ is irrational and $m$ is a positive integer.}\\ \implies \{n,m\}\not\subset\mathbb{Z}^+$$ Contradiction!
{ "language": "en", "url": "https://math.stackexchange.com/questions/755462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
fourier series analysis, show that for every integer n, using euler's formulas relating trigonometric and exponential functions Show that for every integer $n$, $$\int_0^{\pi} \cos nt~\sin t~\mathrm{d}t = \begin{cases} \dfrac{2}{1-n^2} & \text{if } n \text{ is even} \\[10pt] 0 &\text{if } n \text{ is odd} \end{cases}$$ by using Euler's formulas relating trigonometric and exponential functions: $$\cos x=\frac{1}{2}(\mathrm{e}^{ix}+\mathrm{e}^{-ix}), \ \ \ \sin x=\frac{1}{2i}(\mathrm{e}^{ix}-\mathrm{e}^{-ix})$$ (Here $i$ is the imaginary unit; remember that $\mathrm{e}^{\pi i} = -1$) I am not sure how to go about doing this problem, if anyone could help it would be much appreciated, thank you for your help in advance.
Consider if the function is even or odd for $n$ even or odd (with respect to the mid point $\pi /2$), this will get your odd case dealt with. To deal with the even case, use the double angle formula repeatedly.
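If you want to check the claimed closed form for a few values of $n$ before proving it, here is a short sympy sketch (added as a sanity check, assuming sympy is available):

    import sympy as sp

    t = sp.symbols('t')
    for n in range(7):
        val = sp.integrate(sp.cos(n * t) * sp.sin(t), (t, 0, sp.pi))
        expected = sp.Rational(2, 1 - n**2) if n % 2 == 0 else 0
        assert sp.simplify(val - expected) == 0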
{ "language": "en", "url": "https://math.stackexchange.com/questions/755581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Wronskian Bessel Equations I need to compute the wronskian of $J_n$ and $Y_n$ (the Bessel functions of the first and second kinds). I've been able to find in many sources that it is $$W(J_n,Y_n)=\frac{\pi}{2x}$$, but I haven't been able to prove it. I already could use Abel's formula to get $$ W(J_n,Y_n)=\frac{c}{x}$$, but I can't find the value of $c$. Any idea? Thanks.
Hint: Bessel functions of all kinds satisfy the following recurrences: $$\frac{2n}{x} R_n(x) = R_{n-1}(x) + R_{n+1}(x)$$ $$2\frac{dR_n}{dx} = R_{n-1}(x) - R_{n+1}(x),$$ where $R_n$ can be $Y_n$ or $J_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/755672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Take 2: When/Why are these equal? This didn't go right the first time, so I'm going to drastically rephrase the query. As per this previous question, I am wondering if the two series $$\frac{f(a)+f(b)}{2}\frac{(b-a)}{1!}+\frac{f'(a)-f'(b)}{2}\frac{(b-a)^2}{2!}+\frac{f''(a)+f''(b)}{2}\frac{(b-a)^3}{3!}+\cdots$$ and $$\frac{f(a)+f(b)}{2}\frac{(b-a)}{1!}+\frac{f'(a)-f'(b)}{2^2}\frac{(b-a)^2}{2!}+\frac{f''(a)+f''(b)}{2^3}\frac{(b-a)^3}{3!}+\cdots$$ could possibly be equal. The only difference is the powers of $2$ in the denominator. NOTE: the numerator is $f^{(k)}(a)+(-1)^kf^{(k)}(b)$ in general.
Denote a primitive of $f$ by $F$ and add ${1\over2}\bigl(F(a)-F(b)\bigr)$ to both series. Then the first series becomes $$\eqalign{{1\over2}\sum_{k=0}^\infty \bigl(F^{(k)}(a)-(-1)^kF^{(k)}(b)\bigr){(b-a)^k\over k!}&={1\over2}\sum_{k=0}^\infty F^{(k)}(a){(b-a)^k\over k!} -{1\over2}\sum_{k=0}^\infty F^{(k)}(b){(a-b)^k\over k!}\cr &={1\over2}\bigl(F(b)-F(a)\bigr)\ ,\cr}$$ and similarly the second series becomes $$-{1\over2}\bigl(F(a)-F(b)\bigr)+\sum_{k=0}^\infty \bigl(F^{(k)}(a)-(-1)^k F^{(k)}(b)\bigr)\>{\bigl({b-a\over2}\bigr)^k\over k!}$$ $$={1\over2}\bigl(F(b)-F(a)\bigr)+F\left({a+b\over2}\right)-F\left({a+b\over2}\right)={1\over2}\bigl(F(b)-F(a)\bigr)\ .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/755750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
A question on basis of vectorspaces and subspaces Let $V$ be a finite dimensional vector space and $W$ be any subspace . It is known that if $A$ is any basis of $W$ then by "extension-theorem" , there is a basis $A'$ of $V$ such that $A \subseteq A'$. Is the reverse true ? that is if $B$ is any basis of $V$ , does there exist a basis $B'$ of $W$ such that $B'\subseteq B$ ?
No. Suppose that $V$ is a $k$-dimensional vector space over some infinite field, such as $\mathbb{R}$ or $\mathbb{C}$. Then since every basis contains exactly $k$ members, it follows that any given basis $B$ has exactly $2^k$ subsets, and thus bases for $2^k$ different subspaces of $V$. But since $V$ has infinitely many subspaces, it follows that $V$ has infinitely many subspaces which do not have subsets of $B$ as bases! The other answers, by Brian Fitzpatrick and jswiegel, give the specific example of $k=2$, $V=\mathbb{R}^2$ and $B=\{\mathbf{e}_{1},\mathbf{e}_{2}\}$. This is probably the simplest example (it was also the example I used in the original version of this answer), but the result is far more general.
{ "language": "en", "url": "https://math.stackexchange.com/questions/755809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solving integral $\int\frac{\sin x}{1+x\cos x}dx$ How can I find the anti-derivative? $$\int\frac{\sin x}{1+x\cos x}dx$$
Sorry for “cheating”, but it is as it seems: Wolfram|Alpha states it is not solvable “in terms of standard mathematical functions” (which should then be true). http://www.wolframalpha.com/input/?i=%E2%88%ABsinx%2F%281%2Bxcosx%29dx Are you sure this is the correct integral?
{ "language": "en", "url": "https://math.stackexchange.com/questions/755903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Automorphisms of group extensions Assume we have a group extension $1 \to N \to G \to H \to 1$, and an automorphism $\phi: G \to G$. Is it correct that this automorphism induces automorphisms $\phi_N : N \to N$ and $\phi_H : H \to H$ ? If so, this would mean that the image by $\phi$ of elements of the form $(n,1_H) \in G$ are the elements $(\phi_N(n),1_H)$, and that elements of the form $(n,h) \in G$ are sent to $(\phi_N(n)\cdot n'(h),\phi_H(h))$, where $n'(h)$ is an element of $N$ which depends on $h$. Is this also correct ?
Not true. Take an automorphism which is not inner. For an abelian example, take $G$ to be the points of the plane under vector addition, and $N$ to be any line through the origin. Now rotation of the plane by any angle (not $0$ or $\pi$) is an automorphism which does not take $N$ to $N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/756001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Laplacian $\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 \frac{\partial \phi }{\partial r})= \frac{1}{r} \frac{\partial ^2 }{\partial r^2}(r \phi )$ Does anyone have any intuition on remembering or very quickly deriving that $$\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 \frac{\partial \phi }{\partial r}) = \frac{1}{r} \frac{\partial ^2 }{\partial r^2}(r \phi )$$ holds for the Laplacian in spherical coordinates? Doing the IBP is too long and slow; maybe there is some obvious physical reason for this simplification that makes it easy to remember? Edit: In other words, is there a nice natural way to start with $$\frac{1}{r^2}\frac{\partial}{\partial r}(r^2 \frac{\partial \phi }{\partial r})$$ and derive the other side of the equality intuitively, without remembering it beforehand?
Maybe you want to take a look at how the Laplacian is derived for any system of curvilinear coordinates: http://en.wikipedia.org/wiki/Curvilinear_coordinates
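For what it's worth, the identity can also be checked directly: both sides expand to $\phi'' + \frac{2}{r}\phi'$. A short verification in sympy (a sketch added here, assuming sympy is available):

    import sympy as sp

    r = sp.symbols('r', positive=True)
    phi = sp.Function('phi')(r)
    lhs = sp.diff(r**2 * sp.diff(phi, r), r) / r**2   # (1/r^2) d/dr (r^2 dphi/dr)
    rhs = sp.diff(r * phi, r, 2) / r                  # (1/r) d^2/dr^2 (r phi)
    assert sp.simplify(lhs - rhs) == 0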
{ "language": "en", "url": "https://math.stackexchange.com/questions/756111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find $m$ and $n$ Two finite sets have $m$ and $n$ elements. The total number of subsets of the first set is $56$ more than the total number of subsets of the second set. Find the values of $m$ and $n$. The equation for this question will be $2^m - 2^n = 56$. But I don't know how to solve this equation.
Let $k = m-3$ and $l=n-3$, then $$ 2^k-2^l = 56/2^3 = 7. $$ Now determine the values of $k$ and $l$.
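An exhaustive check over a small range confirms the unique answer; a throwaway Python sketch (added as a verification, not part of the hint):

    solutions = [(m, n) for m in range(1, 20) for n in range(1, 20)
                 if 2**m - 2**n == 56]
    print(solutions)   # [(6, 3)], i.e. 2**6 - 2**3 = 64 - 8 = 56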
{ "language": "en", "url": "https://math.stackexchange.com/questions/756211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Using Polar Integrals to find Volume of surface Here's the question and the work that I've done so far to solve it: Use polar coordinates to find the volume of the given solid, enclosed by the hyperboloid $-x^2 - y^2 + z^2 = 61$ and the plane $z = 8$. I set up $\int_0^{\sqrt{3}} \left(8r-\sqrt{61+r^2}\,r\right) dr$ (with respect to $r$), which I evaluated to be $12 - 512/2 + \dfrac{61^3}{3}$. I then took the integral with respect to $\theta$ from $0$ to $2\pi$ because it is a circle covering the whole $xy$-plane. This gave the above evaluation multiplied by $2\pi$. However, this answer is wrong. Can anyone see something wrong with my reasoning?
Notice that $-x^2-y^2+z^2=61$ can be rewritten as $x^2+y^2=z^2-61$ or $$x^2+y^2=\sqrt{z^2-61}^2$$ Given a horizontal slice of the upper sheet of the hyperboloid (i.e. given $z$), it should be plain to see from the graph (and the equation) that we have a circle of radius $\sqrt{z^2-61}$. The vertex of the upper sheet occurs at $(0,0,\sqrt{61})$ because $-0^2-0^2+\sqrt{61}^2=61$, so $z$ will range from $\sqrt{61}$ to 8. So what we would like to evaluate would be this integral, where $r(\theta)=\sqrt{z^2-61}$, which is a constant with respect to $\theta$ $$\int_{\sqrt{61}}^8\int_0^{2\pi}\frac{1}{2}r(\theta)^2d\theta dz$$ $$=\int_{\sqrt{61}}^8\int_0^{2\pi}\frac{1}{2}\sqrt{z^2-61}^2d\theta dz$$ $$=\int_{\sqrt{61}}^8\int_0^{2\pi}\frac{1}{2}(z^2-61)d\theta dz$$ $$=\int_{\sqrt{61}}^8\frac{1}{2}(z^2-61)dz\int_0^{2\pi}1d\theta$$ $$=\pi\int_{\sqrt{61}}^8(z^2-61)dz$$ Alternatively, I think what you're trying to do is compute a volume by cylindrical shells, in which case the integral should look like this (where $h(r)$ is the height of a cylinder). $$\int_0^{\sqrt{3}}2\pi r h(r)dr$$ $$=\int_0^{\sqrt{3}}2\pi r (8-\sqrt{r^2+61})dr$$ $$=2\pi\int_0^{\sqrt{3}} (8r-r\sqrt{r^2+61})dr$$ $$=2\pi\bigg( 12 -\int_0^{\sqrt{3}} (r\sqrt{r^2+61})dr\bigg)$$ It seems that where you went wrong was in evaluating $$\int_0^{\sqrt{3}} r\sqrt{r^2+61}dr$$ $$=\int_0^{\sqrt{3}} r\sqrt{r^2+61}dr\cdot\frac{\frac{d(r^2+61)}{dr}}{\frac{d(r^2+61)}{dr}}$$ $$=\int_0^{\sqrt{3}} \frac{r\sqrt{r^2+61}}{2r}d(r^2+61)$$ $$=\frac{1}{2}\int_0^{\sqrt{3}} \sqrt{r^2+61}d(r^2+61)$$ $$=\frac{1}{2}\int_{61}^{64} \sqrt{u}du,\:u=r^2+61$$ $$=\frac{\sqrt{u}^3}{3}\bigg|_{61}^{64}$$ $$=\frac{8^3}{3} - \frac{\sqrt{61}^3}{3}$$ so the final answer would be $$\int_0^{2\pi}\int_0^{\sqrt{3}}(8r-r\sqrt{r^2+61})drd\theta=2\pi\bigg(12-\frac{8^3}{3} + \frac{\sqrt{61}^3}{3}\bigg)$$
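Both set-ups (horizontal slices and cylindrical shells) can be cross-checked numerically against the closed form; here is a scipy sketch (an addition to the answer, assuming scipy is available):

    import numpy as np
    from scipy.integrate import quad

    slices, _ = quad(lambda z: np.pi * (z**2 - 61), np.sqrt(61), 8)
    shells, _ = quad(lambda r: 2*np.pi*r*(8 - np.sqrt(r**2 + 61)), 0, np.sqrt(3))
    closed = 2*np.pi*(12 - 8**3/3 + 61**1.5/3)
    print(slices, shells, closed)    # all three agree (~0.8914)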
{ "language": "en", "url": "https://math.stackexchange.com/questions/756305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Lie algebra of $\mathbb{R}^{n}$ Until now the only examples of Lie groups I have seen are subgroups of $GL_n$. Today I had the idea that $G=(\mathbb R^n,+)$ must also be a Lie group ($(\mathbb R^n,+)$ is a group with the differentiable group operation $+$). Is it right that the Lie algebra of this Lie group is $\mathfrak g = \mathbb R^n$ with the exponential map $\exp : \mathfrak g \rightarrow G: x \mapsto x$? What is the Lie bracket of $\mathfrak g$? Is $[x,y]$ always zero, because $+$ is commutative?
Solution 1 The left translation maps have the form $L_a:\mathbb R^n \rightarrow \mathbb R^n: x\mapsto a+x$. So $D L_a(x)=\operatorname{id}$ for all $a,x\in\mathbb R^n$ ($D L_a$ is the total derivative of $L_a$). So the set of all left-invariant vector fields is the set of all constant vector fields $\mathfrak g=\{f:\mathbb R^n\rightarrow T\mathbb R^n: f = \operatorname{const}\}$ (which is isomorphic to $\mathbb R^n$ because $\mathfrak g$ is $n$-dimensional). Because $X,Y\in \mathfrak g$ are both constant, one has $[X,Y]=\mathcal L_X(Y)=0$. Solution 2 $(\mathbb R^n, +)$ can be embedded into $GL_n$ via $$f:\mathbb R^n \rightarrow GL_n: (x_1,x_2,\ldots,x_n) \mapsto \left(\begin{matrix} e^{x_1} & & & & \\ & e^{x_2} & & \\ & & \ddots & \\ & & & e^{x_n}\end{matrix}\right)$$ $f$ is well defined because the determinant $e^{x_1 + x_2 + \ldots + x_n}$ is always positive, and because of $e^{a+b}=e^a\cdot e^b$ the function $f$ is a group homomorphism. Via $f$ one can show that $(\mathbb R^n,+)$ is isomorphic to the set $D^+$ of diagonal matrices with positive entries on the diagonal. The Lie algebra $\mathfrak g$ of $D^+$ is given by $$\mathfrak g = \{\dot\gamma(0): \gamma:(-\epsilon,\epsilon)\rightarrow D^{+},\ \gamma(0)=I\}$$ which is the set of all real diagonal matrices (as one can easily show). Because diagonal matrices commute with one another, the Lie bracket $[X,Y]=XY-YX$ is always zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/756365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Error solving "stars and bars" type problem I have what I thought is a fairly simple problem: Count non-negative integer solutions to the equation $$x_1 + x_2 + x_3 + x_4 + x_5 = 23$$ such that $0 \leq x_1 \leq 9$. Not too hard, right? Simply ignore the upper-bound, count the $$\begin{pmatrix}23 + (5-1) \\ (5-1)\end{pmatrix} = \begin{pmatrix}27 \\ 4\end{pmatrix} = 17550$$ solutions. Subtract from this all (non-negative integer) solutions to the equation $$y_1 + 10 + x_2 + x_3 + x_4 + x_5 = 23,$$ and there are $\begin{pmatrix}17 \\ 4\end{pmatrix} = 2380$ of these "bad" solutions we shouldn't have counted earlier, but did. Thus we find $17550 - 2380 = 15170$ solutions. Since this is a prohibitively large number of solutions to check by hand, I wrote a simple Python program to verify whether this answer is correct. It does indeed say there are $17550$ solutions without upper bounds, and $2380$ solutions to the equation for counting "bad" solutions. However, when I ask it throw away all solutions to the non-upper-bounded problem for which $x_1 \geq 10$, it tells me it's found $15730$ solutions. My question is: do I not understand the combinatorial calculation so that there are not actually $\begin{pmatrix}27\\4\end{pmatrix}-\begin{pmatrix}17\\4\end{pmatrix}$ solutions, or have I made some kind of programming mistake? Of course, both are also possible.
Your method is correct, there are 15170 solutions. It seems like your python script is doing something wrong at the end. I wrote a C++ program myself to confirm this:

    #include <stdio.h>

    int main()
    {
        int a, b, c, d, e, sum = 0;
        // a is capped at 9 by the upper bound; the others can reach 23
        for (a = 0; a < 10; a++)
        for (b = 0; b < 24; b++)
        for (c = 0; c < 24; c++)
        for (d = 0; d < 24; d++)
        for (e = 0; e < 24; e++) {
            if (a + b + c + d + e == 23)
                sum++;
        }
        printf("sum=%d\n", sum);
        return 0;
    }

And it prints out sum=15170.
{ "language": "en", "url": "https://math.stackexchange.com/questions/756462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find Normalizing constant let $f(x,\theta)=C_\theta \exp(-\sqrt{x}/\theta)$ where $x$ and $\theta$ are both positive. Find the normalising constant $C_\theta$. I get $C_\theta=\sqrt{2}/\theta$ but my book says $C_\theta=1/(2\theta^2)$. Who is right?
\begin{align} u & = \sqrt{x}/\theta \\ u^2 & = x/\theta^2 \\ 2u\,du & = dx/\theta^2 \end{align} $$ \int_0^\infty e^{-\sqrt{x}/\theta} \, dx = \theta^2\int_0^\infty e^{-u} \Big( 2u\,du \Big) = 2\theta^2. $$ The integral can be done by parts, thus: $$ \int u \Big(e^{-u}\,du\Big) = \int u\,dv=uv-\int v\,du = -ue^{-u}-\int -e^{-u}\,du,\text{ etc.} $$ Then there's the problem of finding $-ue^{-u}\Big\vert_{u=0}^{u\to\infty}$. That can be done by L'Hopital's rule.
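So the book is right: the integral is $2\theta^2$, giving $C_\theta = 1/(2\theta^2)$. The same computation can be confirmed symbolically; a minimal sympy sketch (an addition, assuming a sympy version that can evaluate this integral):

    import sympy as sp

    x, theta = sp.symbols('x theta', positive=True)
    print(sp.integrate(sp.exp(-sp.sqrt(x) / theta), (x, 0, sp.oo)))  # 2*theta**2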
{ "language": "en", "url": "https://math.stackexchange.com/questions/756560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sigma Algebra Measurable R.V I am trying to figure out what random variables are measurable with respect to sigma algebra given by $[1-4^{-n}, 1]$ where $n= 0, 1, 2, ....$ if $[0,1]$ is the sample space. I believe I can do with with indicator functions but I'm not sure how to write this. Thanks!
Well, not just any indicator function will do. You can see that there is no way to write $(0.25, 0.5)$ as an element of the $\sigma$-algebra generated by those sets, and so the indicator of that set won't be measurable. But the set of simple functions which are measurable in that space is most likely dense with respect to any measure that is absolutely continuous with respect to Lebesgue measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/756662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Least Squares of Symmetric Positive Semidefinite Matrices What's the best (in terms of computation time and numerical robustness) way to find the least squares solution of $$Ax = b$$ if $A$ is symmetric and positive semi-definite? If $A$ were symmetric and positive definite, the Cholesky decomposition would seem to be the way to go to build the inverse. But that doesn't work for positive semidefinite cases. Performing a spectral decomposition to get $A = Q \Lambda Q^T$, and then forming the pseudoinverse $A^+ = Q \Lambda^+ Q^T$ to get $x = A^+ b$ works, but I'm not convinced it's the most efficient.
According to the solver-selection logic of MATLAB's mldivide(): if the Cholesky decomposition doesn't work (i.e., the matrix is symmetric but not positive definite), you should use the LDL decomposition.
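To make this concrete, here is a small numpy sketch (an addition to the answer, assuming numpy is available) comparing the spectral-pseudoinverse route from the question with a library least-squares solve; both return a least-squares solution even though $A$ is rank-deficient:

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 3))
    A = M @ M.T                        # symmetric PSD, rank 3 (singular)
    b = rng.standard_normal(5)

    w, Q = np.linalg.eigh(A)           # spectral decomposition A = Q diag(w) Q^T
    w_inv = np.array([1/lam if lam > 1e-12 else 0.0 for lam in w])
    x_spec = Q @ (w_inv * (Q.T @ b))   # pseudoinverse solution

    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based solve
    assert np.allclose(A @ x_spec, A @ x_lstsq)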
{ "language": "en", "url": "https://math.stackexchange.com/questions/756724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Create a Huge Problem I am wondering if any problems have been designed that test a wide range of mathematical skills. For example, I remember doing the integral $$\int \sqrt{\tan x}\;\mathrm{d}x$$ and being impressed at how many techniques (substitution, trig, partial fractions etc.) I had to use to solve it successfully. I am looking for suggestions/contributions to help build up such a question. For example, one problem could have as its answer $\tan x$ which would then be used in the integral, and something about the answer to the integral could lead into the next part. The relevant subjects would be anything covered in the first few years of an undergraduate degree in mathematics. The main reason I ask is that I want to work on developing some more integrated ways to practice mathematics "holistically" which I feel is very lacking in the current educational model.
In terms of an integral question, I tried to come up with a double integral for my students that would test many first year integral techniques. It requires both integration by parts and multiple subsitutions, as well as an understanding of double integral regions. Here it is - Sketch the region below $y=\sqrt{\sin x}$ and above $\displaystyle y=\frac{2x}{\pi}$ in the first quadrant. Find and mark the lower intersection point, $a$, and the upper intersection point, $b$. The volume under the surface $f(x,y) = y$ over the region is given by $$ V=\int\limits^b_a \! \int\limits_{ 2x/\pi}^{\sqrt{\sin x}} y \ \mathrm{d}y \, \mathrm{d}x, $$ where $a$ and $b$ are the intersection points of the curves. Evaluate this double integral. Verify your result by switching the order of integration. Show all working.
{ "language": "en", "url": "https://math.stackexchange.com/questions/756817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 2, "answer_id": 1 }
Explanation of recursive function Given is a function $f(n)$ with: $f(0) = 0$ $f(1) = 1$ $f(n) = 3f(n-1) + 2f(n-2)$ $\forall n≥2$ I was wondering if there's also a non-recursive way to describe the same function. WolframAlpha tells me there is one: $$g(n) = \frac{(\frac{1}{2}(3 + \sqrt{17}))^n - (\frac{1}{2}(3 - \sqrt{17}))^n}{\sqrt{17}}$$ However, I have absolutely no clue how to determine this function, especially the $\sqrt{17}$ makes no sense to me. Could anyone maybe explain why $f(n)$ and $g(n)$ are the same?
This is called a linear recurrence. Solving them is fairly straightforward, and is explained here: http://en.wikipedia.org/wiki/Linear_recurrence#Solving. The key thing to note is that if $f_1$ and $f_2$ are solutions of this recurrence, then $f_1 + f_2$ is as well (except for the initial conditions). The trick is to assume that there is a solution of the form $f = cr^n$, and see where that leads you. Let's see how it works out: $f(n) = 3f(n-1) + 2f(n-2)$, so $cr^n = 3cr^{n-1} + 2cr^{n-2}$. Dividing everything by $cr^{n-2}$ and rearranging gives: $r^2 - 3r - 2 = 0$. Factor, to get: $\frac{-1}{4} \left(-2 r+\sqrt{17}+3\right) \left(2 r+\sqrt{17}-3\right) = 0$ I conclude that $r = r_1 = \frac{3+\sqrt{17}}{2}$, or $r = r_2 = \frac{3-\sqrt{17}}{2}$. Using the fact that I can add solutions together and still have a solution to the recurrence relation, I'll write: $f(n) = c_1r_1^n + c_2r_2^n$ From here, you can plug in the initial conditions to solve for $c_1$ and $c_2$.
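A quick numerical check of the closed form against the recurrence, as a Python sketch (added for verification):

    from math import sqrt

    def f_rec(n):
        a, b = 0, 1                    # f(0) = 0, f(1) = 1
        for _ in range(n):
            a, b = b, 3*b + 2*a
        return a

    def g(n):
        r1, r2 = (3 + sqrt(17)) / 2, (3 - sqrt(17)) / 2
        return (r1**n - r2**n) / sqrt(17)

    assert all(round(g(n)) == f_rec(n) for n in range(15))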
{ "language": "en", "url": "https://math.stackexchange.com/questions/756931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
What's the relation between prime spectrum and affine space? Let $A$ be a ring, $X$ be the set of all prime ideals of $A$. For each subset $E$ of $A$, let $V(E)$ denote the set of all prime ideals of $A$ which contain $E$. We have: * *$V(0)=X,V(1)=\emptyset$ *$V(\bigcup_{i \in I} E_i)=\bigcap_{i \in I} V(E_i)$ *$V(E)=V(a),\text{if a is the ideal in A generated by}~E$ *$V(ab)=V(a)\bigcup V(b) \text{ for any ideals a,b in}~A$ Let $k$ be any field. By $\mathbb{A}^n(k)$ we shall mean the cartesian product of $k$ with itself $n$ times. If $F \in k[x_1,...,x_n]$, the set of zeros of $F$ is called a hypersurface, and is denoted by $V(F)$. We have: * *$V(0)=\mathbb{A}^n(k),V(1)=\emptyset$ *$V(\bigcup_{\alpha} E_{\alpha})=\bigcap_{\alpha} V(E_{\alpha})$ *$V(S)=V(I),\text{if I is the ideal in k[x_1,...,x_n] generated by }~S$ *$V(PQ)=V(P)\bigcup V(Q) \text{ for any polynomials P,Q in k[x_1,...,x_n]}$ It seems that the prime spectrum and affine space are the same kind of thing: a polynomial can be regarded as (generating) an ideal, and the closed sets of the Zariski topology can be regarded as affine algebraic sets. How can one explain hypersurfaces and dimension in the algebraic view, and why do the two pictures look like the same thing?
This is really only a partial answer that was too long for a comment but I hope it is helpful. If you consider the prime spectrum of the ring of regular functions on an affine variety $X$, where by affine we mean the zeros of a collection of polynomials over an algebraically closed field, you almost get back the variety $X$ but not quite. The maximal spectrum will be the same as $X$ since by the nullstellensatz points in $X$ correspond to maximal ideals in the ring of regular functions on $X$. But in the prime spectrum you have additional points, which correspond to the irreducible subvarieties of $X$. These points are not closed, but rather their closure consists exactly of their irreducible subvarieties. You can think of the prime spectrum of a commutative ring as a generalization of affine varieties, which plays a very important role from the modern perspective of the theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that $\{(a,b):a,b\in\mathbb N, a\geq b\}$ is denumerable. If $S=\{(a,b):a,b\in\mathbb N, a\geq b\}$, how do I prove that $S$ is denumerable? Work: Since $S \subseteq\mathbb{N\times N}$ I know that $S$ is denumerable. But I don't know how to structure the proof clearly. I know that the two theorems : every infinite subset of a denumerable set is denumerable and the theorem: If $A$ and $B$ are denumerable sets, then $A\times B$ is denumerable are useful to this proof, but I don't know how to apply it here.
If you're allowed to use that any infinite subset of a denumerable set is denumerable, then just use the fact that $\mathbb{N} \times \mathbb{N}$ is denumerable, and prove it if necessary by using the injection $f(m,n) = 2^m 3^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
prove that if A is a subset of B, B is a subset of C, and C is a subset of A, then A=B and B=C To prove A=B, I must prove that A is a subset of B and B is a subset of A. A is a subset of B is already given. So all that is left is to prove B is a subset of A. Does it suffice to say that, since A is a subset of B, B is a subset of C, and C is a subset of A, by the transitive property B is a subset of A?
In fact, the problem deals with two separate issues! Check transitivity: $$A\subseteq B\subseteq C\implies A\subseteq C$$ Check antisymmetry: $$A\subseteq C\subseteq A\implies A=C$$ (These are rather different!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/757214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Markov Chain Ergodic Theorem (Proof references) Where can I find a proof of the ergodic theorem for Markov chains that doesn't use Birkhoff? The theorem states the following: Let $(X_n)_{n\in \mathbb{N}}$ be an irreducible and positively recurrent Markov chain in a countable state space $E$ with invariant measure $\pi$. Then for every function $f \in L^{1}(E,\pi)$ $$\frac{1}{N}\sum_{n=0}^{N-1}f(X_n) \stackrel{N\to \infty}{\longrightarrow} \int_E fd\pi=\sum_{k\in E}\pi_k f(k) \quad\mathrm{ a.e.}$$
You can find an elementary proof in Durrett's Essentials of Stochastic Processes, at the end of Chapter 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding power series of function Could anyone help me answer this question? $$F(x)=\ln\left(\dfrac{7+x}{7-x}\right)$$ Find a power series representation for the function.
When I was young (that is to say a loooong time ago), one of the series the professor asked us to always remember (because of its extreme simplicity) is precisely $$\ln\left(\frac{1+x}{1-x} \right)=2\sum_{n=0}^{\infty}\frac {x^{2n+1}}{2n+1}$$ For your function, write $\dfrac{7+x}{7-x}=\dfrac{1+x/7}{1-x/7}$ and substitute $x/7$ for $x$ to get $$F(x)=2\sum_{n=0}^{\infty}\frac {x^{2n+1}}{(2n+1)\,7^{2n+1}},\qquad |x|<7.$$ Thank you for reminding me of my youth
{ "language": "en", "url": "https://math.stackexchange.com/questions/757376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
The splitting fields of two irreducible polynomials over $Z / p Z$ both of degree 2 are isomorphic Here $p$ is a prime. Let $ f_1, f_2 \in Z / p Z [t]$ both be of degree 2 and irreducible. Show that they have isomorphic splitting fields. My approach: let $ K_1 = F(\alpha_1, \beta_1) / F$ be the splitting field of $f_1$, and let $K_2 = F( \alpha_2, \beta_2) / F$ be the splitting field of $ f_2$. Then I have trouble finding any relations between the $ \alpha$'s and $\beta$'s. Any help is appreciated.
Some ideas (hopefully you've already studied this stuff's details): For a prime $\;p\;$ and for any $\;n\in\Bbb N\;$, prove that the set of all the roots of the polynomial $\;f(x):=x^{p^n}-x\in\Bbb F_p[x]\;$ in (the, some) algebraic closure $\;\overline{\Bbb F_p}\;$ of $\;\Bbb F_p:=\Bbb Z/p\Bbb Z\;$ , with the usual operations modulo $\;p\;$ , is a field with $\;p^n\;$ elements, which we denote by $\;\Bbb F_{p^n}\;$ From the above it is immediate that $\;\Bbb F_{p^n}\;$ is the minimal field which contains all the roots of $\;f(x)\;$ and is thus this polynomial's splitting field over $\;\Bbb F_p\;$ . Now, apply the above to the particular case $\;n=2\;$ and deduce at once your claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are good resources for learning Numerical methods for Partial Differential Equations? I'm taking an undergraduate course on Numerical Solutions to Ordinary and Partial Differential Equations. I need online resources to supplement my study, preferably videos and books. I want to build a good understanding of the subject so that I can easily apply it to fields like computer vision and robotics.
Try: * *Numerical Solution of Partial Differential Equations: Finite Difference Methods, by G. D. Smith Also, use the open courseware at: * *MIT Open Courseware
{ "language": "en", "url": "https://math.stackexchange.com/questions/757511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Properties of compact set: non-empty intersection of any system of closed subsets with finite intersection property Let $X$ be a Hausdorff topological vector space. Let $C$ be a nonempty compact subset of $X$ and $\{C_\alpha\}_{\alpha \in I}$ be a collection of closed subsets such that $C_\alpha \subset C$ for each $\alpha \in I$ with $I$ is an infinite index set. Assume that the intersection of any finite sets among $\{C_\alpha\}_{\alpha \in I}$ is nonempty. Can we conclude that $\bigcap_{\alpha \in I}C_\alpha \neq \emptyset$? Thanks in advance!
The more general fact is true: if $(C_\alpha)_{\alpha\in I}$ is a collection of closed subsets with the finite intersection property, then all these subsets have a common point. Assume $\cap_{\alpha\in I}C_\alpha=\varnothing$; then for the open subsets of $C$ which we denote $U_\alpha=C\setminus C_\alpha$ we have $\cup_{\alpha\in I}U_\alpha=C$. Since $C$ is compact, there is a finite collection $\{\alpha_1,\ldots,\alpha_n\}\subset I$ such that $C=U_{\alpha_1}\cup\ldots\cup U_{\alpha_n}$. Taking complements we get that $C_{\alpha_1}\cap\ldots\cap C_{\alpha_n}=\varnothing$. Contradiction, so $\cap_{\alpha\in I}C_\alpha\neq\varnothing$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Show that $(x + 1)^{(2n + 1)} + x^{(n + 2)}$ can be divided by $x^2 + x + 1$ without remainder I am in my pre-academic year. We recently studied the Remainder Theorem (at least that's how I think it translates), which states that any polynomial can be written as $P = Q\cdot L + R$. I am unable to solve the following: Show that $(x + 1)^{(2n + 1)} + x^{(n + 2)}$ can be divided by $x^2 + x + 1$ without remainder.
Suppose $a$ is a root of $x^2+x+1=0$, then we have both $$a+1=-a^2$$ and $$a^3=1$$ Let $f(x)=(x+1)^{2n+1}+x^{n+2}$ then $$f(a)=(-a^2)^{2n+1}+a^{n+2}=-a^{4n+2}+a^{n+2}=-a^{n+2}+a^{n+2}=0$$ Since the two distinct roots of the quadratic are also roots of $f(x)$ we can use the remainder theorem to conclude that the remainder is zero.
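One can confirm the divisibility for small $n$ with sympy; a short sketch (added as a check, assuming sympy is available):

    import sympy as sp

    x = sp.symbols('x')
    for n in range(8):
        remainder = sp.rem((x + 1)**(2*n + 1) + x**(n + 2), x**2 + x + 1, x)
        assert remainder == 0          # divisible for every n tested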
{ "language": "en", "url": "https://math.stackexchange.com/questions/757702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
For which values of the real parameter the following... How should I solve this exercise: For which values of the real parameter $a$ is the following equality true: $$\lim_{x\to 0}{1-\cos{ax}\over x^2}=\lim_{x\to \pi}{\sin{x}\over \pi-x}$$
As pointed out by Claude Leibovici. Rewrite $$ \lim_{x\to \pi}{\sin{x}\over \pi-x}=\lim_{\pi-x\to 0}{\sin{(\pi-x)}\over \pi-x}=1.\tag1 $$ As pointed out by Your Ad Here, you will get $$ \begin{align} \lim_{x\to 0}{1-\cos{ax}\over x^2}&\stackrel{\text{l'Hospital}}=\lim_{x\to 0}{a\sin{ax}\over 2x}\\ &\stackrel{\text{l'Hospital}}=\lim_{x\to 0}{a^2\cos{ax}\over 2}\\ &=\frac{a^2}2.\tag2 \end{align} $$ $(1)$ and $(2)$ yield $\ \Large a=\large\pm\sqrt{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of the series $\sum_{n=0}^\infty \frac{1}{n+1}\sin\bigl(\frac{p\pi u_n}{q}\bigr)$ Let $(u_n)_{n\in \mathbb{N}}$ be defined by: $u_0=1, u_1=1$ and, for all integers $n\geq 1$, $u_{n+1}=3u_n-u_{n-1}$. Study the convergence of $$\displaystyle\sum_{n=0}^\infty \frac{1}{n+1}\sin\left(\frac{p\pi u_n}{q}\right)$$ with $p,q \in \mathbb{N}^*$. I have $ S_N = \sum_{n=0}^N \frac{1}{n+1} \sin \left( \frac{2 \pi p u_n}{q} \right) = \frac{a_N}{N+1} - \sum_{n=0}^{N-1} \frac{a_n}{(n+1)(n+2)}$ where $$a_n = \sum_{k=0}^n \sin \left( \frac{2 \pi p u_k}{q} \right)$$ Now if I show that $a_n$ is bounded, then $$\frac{a_N}{N+1} \xrightarrow[N \to \infty]{} 0 \mbox{ and } \frac{a_n}{(n+1)(n+2)} \underset{n \to \infty}{=} \mathcal{O} \left( \frac{1}{n^2} \right)$$ Unfortunately I did not manage to prove it. I tried another method: $$\forall n \in \mathbb{N}, \left( \begin{array}{c} u_n \\ u_{n+1} \end{array} \right) = A \left( \begin{array}{c} u_{n-1} \\ u_n \end{array} \right)$$ with $$ A = \left[ \begin{array}{ c c } 0 & 1 \\ -1 & 3 \end{array} \right] $$ Then, $$ \forall n \in \mathbb{N}, \left( \begin{array}{c} u_n \\ u_{n+1} \end{array} \right) = A^n \left( \begin{array}{c} 1 \\ 1 \end{array} \right) $$ As before, I don't see how I can continue. Thank you in advance
When $u$ is an integer, $\sin(u\pi/q)$ only depends on the value of $u$ modulo $2q$. So in your case, only the value of $u_n$ modulo $2q$ is important. If you look at the sequence $v_n = u_n \pmod {2q}$, we still have $v_{n+1} = 3v_n - v_{n-1}$ (as well as the backwards recurrence $v_{n-1} = 3v_n - v_{n+1}$), and because the pair $(v_n, v_{n+1})$ can take at most $(2q)^2$ different values, the sequence $v_n$ has to be purely periodic: some pair of consecutive values must repeat, and the backwards recurrence propagates the repetition all the way to the start. Hence the sequence $\sin(\frac{p\pi u_n}q)$ is periodic as well. Let $T$ be the period of the sequence and let $S = \sum_{n=1}^{T} \sin (\frac {p\pi u_n} q)$. If $S \neq 0$ then $\sum_{n=a}^{a+T-1} \frac 1 {n+1} \sin(\frac {p\pi u_n} q) \sim \frac S a$ when $a \to \infty$, so the series diverges. If $S = 0$ then $\sum_{n=a}^{a+T-1} \frac 1 {n+1} \sin(\frac {p\pi u_n} q) = O(\frac 1 {a^2})$ when $a \to \infty$, so the series converges.
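To see the periodicity in action, here is a tiny Python sketch (an addition, for illustration only) that finds the period of $(v_n)$ for a sample modulus; it tracks consecutive pairs, since a pair determines the whole sequence in both directions:

    q = 5
    pairs = []
    a, b = 1 % (2*q), 1 % (2*q)        # (u_0, u_1) = (1, 1) mod 2q
    for n in range(500):
        pairs.append((a, b))
        a, b = b, (3*b - a) % (2*q)
    period = pairs[1:].index(pairs[0]) + 1
    print(period)                      # 30 when q = 5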
{ "language": "en", "url": "https://math.stackexchange.com/questions/757889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why does the result follow? How does this theorem follow? Theorem. If $g$ is differentiable at $a$ and $g(a) \neq 0$, then $\phi = 1/g$ is also differentiable at $a$, and $$\phi'(a) = (1/g)'(a) = -\frac{g'(a)}{[g(a)]^2}.$$ Proof. The result follows from $$\frac{\phi(a+h)-\phi(a)}{h} = \frac{g(a)-g(a+h)}{hg(a)g(a+h)}.$$
The result follows because: * *if $\lim\limits_{h\to 0}\dfrac{\phi(a+h)-\phi(a)}{h}=\ell$ exists ($\in\mathbb{R}$) then $\ell=\phi'(a)$. *Calculate the limit of this ratio $\dfrac{\phi(a+h)-\phi(a)}{h}=\dfrac{g(a)-g(a+h)}{h}\cdot \dfrac{1}{g(a)g(a+h)}$ knowing that $g$ is differentiable at $a$. *You find that $\lim\limits_{h\to 0}\dfrac{\phi(a+h)-\phi(a)}{h}=\phi'(a)=\lim\limits_{h\to 0}\dfrac{g(a)-g(a+h)}{h}\cdot \dfrac{1}{g(a)g(a+h)}=\dfrac{-g'(a)}{g^2(a)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/757956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Critical points and Convexity? Function $f(x)$ has no critical points in $M$, can we say $f(x)$ is either convex or concave over $M$?
If $f(x) = 2x+\sin(x), x\in \mathbb{R}$, then $f^\prime(x) = 2+\cos(x)$ is nowhere $=0$, hence there is no critical point. However, $f^{\prime\prime}(x)=-\sin(x)$ changes sign, so $f$ is neither convex nor concave. By scaling you can do that on any interval, as small as you like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove $x^3$ is strictly increasing I am trying to use $f(x)=x^3$ as a counterexample to the following statement. If $f(x)$ is strictly increasing over $[a,b]$ then for any $x\in (a,b), f'(x)>0$. But how can I show that $f(x)=x^3$ is strictly increasing?
You want to show that the function $f(x) = x^3$ is strictly increasing on $\mathbb{R}$. Maybe you can just use the definition. That is, let $a < b$. Assume that $0<a$ (you can do the other cases I am sure). Let $h = b - a > 0$. You want to show that $f(a) < f(b)$. So $$\begin{align} f(a) = a^3 &= (b - h)^3\\ &= b^3 - 3b^2h +3 bh^2 - h^3 \\ &<b^3 \\ &= f(b). \end{align} $$ This is clear since $h > 0$, so $-3b^2h + 3bh^2 = -3hb(b - h) = -3hba < 0$ (and $-h^3 < 0$ as well). All that said, you could, of course just use $f(x) = x$ as a counter example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 2 }
Compatible system $A^TAx=A^Tb$ Let $A\in\mathbb{R}^{n\times n}$ be a singular matrix. Prove that the system $$A^TAx=A^Tb$$ is compatible for any $b\in\mathbb{R}^n$. I want to prove that $A^Tb\in \operatorname{Ran}(A^TA)$, i.e. $A^Tb\perp \operatorname{Ker}(A^TA)$.
Equivalent statements: * *$x\in\mathrm{ker}(A^TA)$, *$Ax=0$. Proof: $x\in\mathrm{ker}(A^TA)$ implies that $x^TA^TAx=(Ax)^T(Ax)=\|Ax\|_2^2=0$. Hence $Ax=0$. The other direction is trivial. Hence $A^Tb\perp\mathrm{ker}(A^TA)$, because $y^TA^Tb=(Ay)^Tb=0^Tb=0$ for any $y\in\mathrm{ker}(A^TA)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Splitting an Indefinite Matrix into 2 definite matrices I'm attempting to use some quadratic programming techniques to solve a particular optimization problem and my chosen Objective Function is indefinite. I've found some texts online which regard splitting the objective function into two components of which one is positive semi-definite and one is negative semi-definite. All these concerns aside, my question becomes this: If an nxn matrix Q is indefinite, is there an algorithm which produces matrices A, and B, both nxn such that A and B are both positive semidefinite and Q = A - B? Thanks, James
You can just set $A = Q + kI$ where $k$ is at least the absolute value of the most negative eigenvalue of $Q$ (so that $Q + kI$ has only non-negative eigenvalues), and put $B = kI$.
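A minimal numpy sketch of this shift construction (added for illustration, assuming numpy is available):

    import numpy as np

    rng = np.random.default_rng(1)
    S = rng.standard_normal((4, 4))
    Q = (S + S.T) / 2                           # symmetric, generically indefinite
    k = max(0.0, -np.linalg.eigvalsh(Q).min())  # |most negative eigenvalue|
    A, B = Q + k*np.eye(4), k*np.eye(4)         # both positive semidefinite
    assert np.allclose(A - B, Q)
    assert np.linalg.eigvalsh(A).min() >= -1e-12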
{ "language": "en", "url": "https://math.stackexchange.com/questions/758339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Another Divergent Series Question Suppose the series $$\sum{a_n}$$ diverges where $a_n\ge 0$ and the sequence is monotone non-increasing. If exactly one element is chosen from each interval of size $k$ -- i.e., one element from $[a_0,a_1,...,a_{k-1}]$, one element from $[a_k,...,a_{2k-1}]$, etc. -- must this series diverge? Must $$\sum{a_{n_i}}=\infty,\;\;\;n_i\in[ik,(i+1)k)$$
Clearly there is some choice of elements from each interval for which the sub-series diverges, or else the overall series would converge. Now note that for any choice of element, you can bound that element as being greater than or equal to the element that was chosen from the next interval (the sequence is non-increasing, and the element from the next interval comes later in the sequence). Thus, for any choice of elements from intervals of size $k$, the summation is greater than or equal to the sum, from the $2$nd term onward, of the divergent choice of elements from intervals of size $k$. Thus all choices of elements lead to a divergent subseries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Basic question about dimensionality of Euclidean group I have a basic question about the dimensionality of the Euclidean group. Why are degrees of freedom greater than the dimension? I thought that a degree of freedom is the same as a dimension, as in, the $x$-$y$ plane is of dimension $2$ and therefore has $2$ degrees of freedom. Right? So I don't understand how the degrees of freedom of the Euclidean group $E(n)$, defined by: $$\dfrac{n(n+1)}{2}$$ turn out to be greater than the dimension $n$. For example, for dimension $n=2$, the degrees of freedom are $3$, and for dimension $n=3$, the degrees of freedom are $6$. I don't understand this. Please explain. Thanks.
First of all, each Euclidean transformation (an element of $E(n)$) has the form $Ax+b$ where $A\in O(n)$ (an orthogonal matrix) and $b\in \mathbb R^n$. Clearly, $b$ depends on $n$ parameters and we just have to compute the dimension of $O(n)$. Orthogonality of a matrix means that each column is a unit vector and any two distinct columns are orthogonal. Thus, you have $n^2$ variables (entries of the matrix) and $\frac{n(n+1)}{2}$ equations. By subtracting, this leaves you with $\frac{n(n-1)}{2}$ parameters for the orthogonal group $O(n)$. (This argument sounds like cheating but there is a way to justify it using the implicit function theorem.) Now, you get $$ n+ \frac{n(n-1)}{2}= \frac{n(n+1)}{2}, $$ for the dimension of $E(n)$, which is the number you found somewhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interesting association between tangent lines of slope one and ellipses Why is it that a tangent line with slope $1$ to an ellipse centered at the origin will have a vertical translation ($y$-intercept) of $\pm \sqrt{a^2 +b^2}$, where $a$ and $b$ are the semi-major and semi-minor axes of the ellipse? For example: The tangent line of slope one to the ellipse $\frac{x^2}{9} + \frac{y^2}{4} = 1$ is $y=x+\sqrt{13}$.
The relation you are looking for can be derived algebraically. Start with some general line $y=mx+c$ for the tangent. Substitute it into the equation of your ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. This gives: $$ \frac{x^2}{a^2} + \frac{(mx+c)^2}{b^2} = 1 \\ b^2 x^2 + a^2 (m^2 x^2 + 2mxc + c^2) = a^2 b^2 \\ (b^2 + m^2 a^2)x^2 + 2a^2 mc x + a^2 (c^2 - b^2) = 0 \\ $$ Now since the line is a tangent the discriminant of this quadratic equation will be zero: $$ \Delta = (2a^2 mc)^2 - 4(b^2 + a^2 m^2)[a^2(c^2 - b^2)] = 0 \\ $$ Rearrange it to obtain the value of $c$: $$ 4a^4 m^2 c^2 - 4a^2 (b^2 c^2 - b^4 + a^2 m^2 c^2 - a^2 m^2 b^2 ) = 0 \\ b^2 c^2 - b^4 - a^2 m^2 b^2 = 0 \\ b^2 c^2 = b^4 + a^2 m^2 b^2 \\ c^2 = b^2 + a^2 m^2 \\ \therefore c = \pm \sqrt{a^2 m^2 + b^2} $$ So the tangent line equation is $y = mx \pm \sqrt{a^2 m^2 + b^2}$. In your case when the gradient $m=1$ then we have the required transformation $y=x \pm \sqrt{a^2 + b^2}$. NB: Geometrically speaking the tangents to the ellipse pass through the foci of a hyperbola with the same $a$ and $b$ values e.g. for the case of $a=3, b=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Using Rolle's Theorem to show that $3^x+4^x=5^x$ iff $x=2$ I need to use Rolle's Theorem to show that the only real solution to $3^x+4^x=5^x$ is $x=2$. Here's what I have: Proof: Note that a number $x$ satisfies $3^x+4^x=5^x$ if and only if $f(x)=0$ where $f(x)=3^x+4^x-5^x$. Obviously $x=2$ is solution since $f(2)=0$. Suppose that there exists a second solution $x_2$. By Rolle's Theorem, there exists a number $c$ between $2$ and $x_2$ such that $f'(c)=0$. Note that $f'(x)=3^x\ln(3)+4^x\ln(4)-5^x\ln(5)$ and that $f'(1.287)=0$. Rolle's Theorem doesn't help us here since there is a value where the derivative is zero - so I can't get a contradiction. That is, there could be another solution that is less than 1.287. Could someone maybe point me in the right direction here? Am I approaching the proof correctly?
What about the function $(3/5)^x+(4/5)^x-1$? Its derivative, $(3/5)^x\ln(3/5)+(4/5)^x\ln(4/5)$, is a sum of two strictly negative terms, so it does not have a root; your Rolle's theorem argument then rules out a second solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How exactly does the response "infinitely many" answer the question of "how many"? I admit that the level of this question is roughly about middle school, but this is what the question asks: The ratio of nickels to dimes to quarters is 3:8:1. If all the coins were dimes, the amount of money would be the same. Show that there are infinitely many solutions to this problem.
Three nickels and a quarter make up $40$ cents, as do four dimes. As a consequence, the second sentence in your problem does not amount to an additional condition. It follows that any multiple of the package "$3$ nickels, $8$ dimes, and $1$ quarter" solves the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to prove a set of vectors does not span a space. Ok, so I'm a bit curious as to how you can prove a set does not span a vector space. For example, let ${S}$ be the vector set \begin{bmatrix} 1\\ 0\\ 0\\ 0\\ \end{bmatrix} \begin{bmatrix} 0\\ 1\\ 0\\ 0\\ \end{bmatrix} \begin{bmatrix} 0\\ 0\\ 1\\ 0\\ \end{bmatrix} \begin{bmatrix} 0\\ 0\\ 0\\ 1\\ \end{bmatrix} So how would you prove that it is not in ${R^3}$. Thanks!
Plug the vectors in as rows in a matrix, then row-reduce to find a basis for the row space. Remember the row space of a matrix is the subspace spanned by the initial row vectors. If you end up with one or more rows of zeros after row-reduction, then that indicates that your initial row vectors were not linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/758928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Is a knot $K$ and its mirror image $^*K$ considered the same knot in terms of tabulating prime knots? If so, why? I'm just wanting to confirm whether this is the case and why. Is it purely to do with the sheer number of knot projections that would have to be dealt with?
A knot and its mirror image are not always the same, but when it comes to knot tables, most people do not bother drawing both. I believe this is just for space and the fact that we can pretty much visualize what the mirror image looks like. But there is a way to tell. Most tables will then provide the Jones polynomial of the knot, which can often distinguish mirror images. It is usually given by some sequence of numbers like $\{-4\}(-1, 1, 0, 1)$, meaning the leading term has exponent $-4$ and coefficient $-1$, followed by the remaining coefficients. So the actual polynomial is $-t^{-4}+t^{-3}+t^{-1}$, which is the trefoil's. A key fact, and part of why the Jones polynomial is so powerful, is that substituting $t^{-1}$ for $t$ yields the polynomial of the mirror image. If the two polynomials you get this way are not equal, the knot and its mirror image are not equivalent; notice they are not equal for the trefoil. But for the figure eight knot, $\{-2\}(1,-1,1,-1,1)$ is symmetric under this substitution, so by the Jones polynomial it may be equivalent to its mirror image, as pointed out by Grumpy Parsnip, and in this case it is.
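The symmetry test is easy to run with sympy (a small sketch using the two polynomials quoted above):

```python
import sympy as sp

t = sp.symbols('t')
trefoil = -t**(-4) + t**(-3) + t**(-1)
figure_eight = t**(-2) - t**(-1) + 1 - t + t**2

# substitute t -> 1/t and compare with the original polynomial
print(sp.simplify(trefoil - trefoil.subs(t, 1/t)) == 0)            # False: chiral
print(sp.simplify(figure_eight - figure_eight.subs(t, 1/t)) == 0)  # True: test inconclusive
```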
{ "language": "en", "url": "https://math.stackexchange.com/questions/759013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Looking for notation of set of all entries of some matrix? I'm busy writing my thesis, and I'm looking for some concise notation to denote the supremum of the matrix entries of, say $A \in M_n(\mathbb{R})$. How should I do this? Looking for something like $$\sup_{a_{i,j} \in A}|a_{i,j}|$$ but the notation $a_{i,j} \in A$ in reality doesn't make much sense in my opinion. What else can I do? EDIT: Even more ideally I want to denote $\sup_{a_{i,j}\in (A-B)}|A - B|$, but I might just introduce general notation for the "norm" to simplify this.
Although not really a notation but a combination of notations, the quantity in question is $\|\operatorname{vec}(A)\|_\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/759087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Order of groups and group elements? Let G be a group and let p be a prime. Let g and h be elements of G with order p. I am wondering how I can use group theory to find the possible orders of the intersection between $\def\subgroup#1{\langle#1\rangle}\subgroup g$ and $\subgroup h$ and also to prove that the number of elements of order $p$ in $G$ is a multiple of $p-1$. I've been looking for the path for ages and got nothing really. These are presented as typical applications to group theory and I'm not at ease with the subject so I'd like to see how you think on this example (in order to get a better idea). Can you hint me? Thank you.
As noted by Potato, the first thing to notice is that the intersection of two subgroups, $\langle g\rangle $ and $\langle h\rangle$ is a subgroup of $G$, but moreover, it is also a subgroup of both $\langle g\rangle$ and $\langle h\rangle$. A cyclic group of prime order, such as $\langle g\rangle$ only has two subgroups $\langle 1 \rangle$ and $\langle g\rangle$, so we see that the intersection is either trivial or the two subgroups are the same. Consider what the above shows about subgroups of order $p$. What can their overlap look like? If you focus on $\langle g\rangle$, how many elements of order $p$ does it have? Can you use these ideas to get what you want?
{ "language": "en", "url": "https://math.stackexchange.com/questions/759185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Can anyone explain a residue in fairly simple terms? I'm studying Complex Analysis and everything up to this point has been pretty straightforward to visualise, but I can't get my head around residues, especially as they seem to have two very different definitions (as a Laurent series coefficient and as an expression involving an integral on a closed path) that I can't understand why they equate. Can anyone give a fairly simple explanation of residues? Thanks!
To see why they equate, just take the following contour integral around the unit circle: $$ \oint \frac{dz}{z^k}=\int_{0}^{2\pi}\frac{d(e^{i\theta})}{e^{ik\theta}}=\int_{0}^{2\pi}\frac{ie^{i\theta}d\theta}{e^{ik\theta}}=\int_{0}^{2\pi}ie^{i(1-k)\theta}d\theta. $$ If $k=1$, then the integral is $2\pi i$. If it's any other integer, then it's $$ \frac{ie^{i(1-k)\theta}}{i(1-k)}\bigg\vert_{0}^{2\pi}=0, $$ because $e^{i(1-k)\theta}$ is periodic with period $2\pi$ for integer $k$, so the antiderivative takes the same value at both endpoints. Integrating a Laurent series term by term, only the $\frac{1}{z}$ term contributes to a contour integral around a point (with a factor of $2\pi i$); and its coefficient gets a special name for being so special.
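You can also see this numerically by parametrizing the circle (a rough check, not a proof):

```python
# Integrate z^(-k) over the unit circle z = e^(i*theta); only k = 1 survives.
import numpy as np

theta = np.linspace(0, 2*np.pi, 100001)
z = np.exp(1j * theta)
for k in range(-2, 4):
    integrand = 1j * z * z**(-k)      # dz = i e^{i*theta} d(theta)
    print(k, np.round(np.trapz(integrand, theta), 6))  # ~2*pi*i only for k = 1
```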
{ "language": "en", "url": "https://math.stackexchange.com/questions/759315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integral $\int_0^{\pi/2} \theta^2 \log ^4(2\cos \theta) d\theta =\frac{33\pi^7}{4480}+\frac{3\pi}{2}\zeta^2(3)$ $$ I=\int_0^{\pi/2} \theta^2 \log ^4(2\cos \theta) d\theta =\frac{33\pi^7}{4480}+\frac{3\pi}{2}\zeta^2(3). $$ Note $\zeta(3)$ is given by $$ \zeta(3)=\sum_{n=1}^\infty \frac{1}{n^3}. $$ I have a previous post related to this except the logarithm power is squared and not to the 4th power. If you are interested in seeing this result go here: Integral $\int_0^\pi \theta^2 \ln^2\big(2\cos\frac{\theta}{2}\big)d \theta$.. However, I am wondering how to calculate the result shown above. Thanks.
From Table of Integrals, Series, and Products Seventh Edition by I.S. Gradshteyn and I.M. Ryzhik, equation $3.631\ (9)$, we have $$ \int_0^{\Large\frac\pi2}\cos^{n-1}x\cos ax\ dx=\frac{\pi}{2^n n\ \operatorname{B}\left(\frac{n+a+1}{2},\frac{n-a+1}{2}\right)} $$ Proof. Integrating $(1+z)^p z^q$, for $p,q\ge0$, in the $z=u+iv$ plane around the contour bounded by the $u$-axis from $-1$ to $1$ and the upper semicircle of unit radius yields $$ \int_{-1}^1(1+z)^p z^q\ dz=-i\int_0^\pi\left(1+e^{it}\right)^p e^{i(q+1)t}\ dt, $$ since $(1+z)^p z^q$ is holomorphic within and continuous on and within the given contour. The imaginary part of the RHS is $$ \Im\left[-i\int_0^\pi\left(1+e^{it}\right)^p e^{i(q+1)t}\ dt\right]=-\int_0^\pi\left(2\cos\frac t2\right)^p \cos bt\ dt, $$ where $b=q+\frac12p+1$. The LHS integral is equal to $$ \int_{0}^1(1+u)^p u^q\ du+e^{i\pi q}\int_{0}^1(1-u)^p u^q\ du $$ of which the imaginary part is \begin{align} \operatorname{B}\left(p+1,q+1\right)\sin\pi q&=\frac{\Gamma\left(p+1\right)\Gamma\left(q+1\right)}{\Gamma\left(p+q+2\right)}\sin\pi q\\ &=-\frac{\Gamma\left(p+1\right)\Gamma\left(b-\frac12p\right)}{\Gamma\left(b+\frac12p+1\right)}\sin\pi \left(b-\frac12p\right)\\ &=-\frac{\pi\Gamma\left(p+1\right)}{\Gamma\left(1+\frac12p+b\right)\Gamma\left(1+\frac12p-b\right)}, \end{align} where the second equality uses $\sin\pi q=\sin\pi\left(b-\frac12p-1\right)=-\sin\pi\left(b-\frac12p\right)$ and the third uses the reflection formula. The final step is setting $t=2x$, $p=n-1$, and $a=2b$. $\qquad\color{blue}{\mathbb{Q.E.D.}}$ Thus, since $\frac{\partial^4}{\partial n^4}(2\cos\theta)^{n-1}=(2\cos\theta)^{n-1}\ln^4(2\cos\theta)$ and $\frac{\partial^2}{\partial a^2}\cos a\theta\big|_{a=0}=-\theta^2$, \begin{align} \int_0^{\Large\frac\pi2}\theta^2\ln^4(2\cos \theta)\ d\theta&=-\lim_{n\to1}\lim_{a\to0}\frac{\partial^6}{\partial n^4\partial a^2}\left[\int_0^{\Large\frac\pi2}(2\cos \theta)^{n-1}\cos a\theta\ d\theta\right]\\ &=-\lim_{n\to1}\lim_{a\to0}\frac{\partial^6}{\partial n^4\partial a^2}\left[\frac{\pi}{2^n n\ \operatorname{B}\left(\frac{n+a+1}{2},\frac{n-a+1}{2}\right)}\right]\\ &=\large\color{blue}{\frac{33\pi^7}{4480}+\frac{3\pi}{2}\zeta^2(3)}. \end{align}
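Independently of the proof, the closed form can be checked numerically to high precision (a sanity check with mpmath, not part of the argument):

```python
from mpmath import mp, quad, cos, log, pi, zeta

mp.dps = 30
# the integrand has an integrable log^4 singularity at pi/2,
# which mpmath's default tanh-sinh quadrature handles
lhs = quad(lambda t: t**2 * log(2*cos(t))**4, [0, pi/2])
rhs = 33*pi**7/4480 + 3*pi/2 * zeta(3)**2
print(lhs)
print(rhs)   # agrees with lhs to working precision
```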
{ "language": "en", "url": "https://math.stackexchange.com/questions/759513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Smooth approximation of maximum using softmax? Look at the Wiki page for Softmax function (section "Smooth approximation of maximum"): https://en.wikipedia.org/wiki/Softmax_function It is saying that the following is a smooth approximation to the softmax: $$ \mathcal{S}_{\alpha}\left(\left\{x_i\right\}_{i=1}^{n}\right) = \frac{\sum_{i=1}^{n}x_i e^{\alpha x_i}}{\sum_{i=1}^{n}e^{\alpha x_i}} $$ * *Is it an approximation to the Softmax? * *If so, Softmax is already smooth; why do we create another smooth approximation? *If so, how do we derive it from Softmax? *I don't see why this might be better than Softmax for gradient descent updates.
This is a smooth approximation of the maximum function $$ \max\{x_1,\dots, x_n\}, $$ not of softmax, and $\alpha$ controls the "softness" of the maximum. The detailed explanation is available here: http://www.johndcook.com/blog/2010/01/13/soft-maximum/ This soft maximum is better than the true maximum for gradient descent, because it is a smooth function, while $\max$ is not smooth and does not always have a gradient.
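A small demo of the behaviour (a sketch; larger $\alpha$ gives a "harder" maximum):

```python
import numpy as np

def smooth_max(x, alpha):
    w = np.exp(alpha * (x - x.max()))   # shift by the max for numerical stability
    return np.sum(x * w) / np.sum(w)

x = np.array([1.0, 2.0, 3.5, 3.6])
for alpha in (1, 10, 100):
    print(alpha, smooth_max(x, alpha))  # approaches max(x) = 3.6 as alpha grows
```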
{ "language": "en", "url": "https://math.stackexchange.com/questions/759637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Book recommendation for Linear algebra. I am looking for suggestions; it has to be a self-study book and should relate to applications to real-world problems. If it is more computer science oriented, that would be great.
There is an innovative course Coding the Matrix offered by Philip Klein, which consists of a book and a course offered on Coursera and other places. It even has a Twitter account for keeping updated. The reviews are controversial, see also here and here, but it looks like an interesting challenge to try. According to the author's website, it is designed "to provide students interested in computer science an introduction to vectors and matrices and their use in CS applications".
{ "language": "en", "url": "https://math.stackexchange.com/questions/759717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
What is a good topic for an essay on applications of Calculus 3? In a class I have, the professor has offered extra credit for 1 page paper on a topic in Calculus 3 that has an application in the real world. I know calculus is used a lot in physics but I do not know physics very well. What is a good topic that is understandable to a layman and that I might write an essay on?
I have two ideas for you. Since you are only allowed to write one page, you are not going to be able to do much. But maybe one of these will work: * *Look at electromagnetism and Maxwell's equations. Some completely random notes: http://www.phys.ufl.edu/~thorn/homepage/emlectures1.pdf. Take a look at chapter 2. *Again, since you just have one page to write, you could also just consider how calculus is used in business. In business calculus people are interested in optimizing functions of several variables. You could for example discuss the terms consumer's surplus, producer's surplus, and market equilibrium. If you study the non-linear functions, that already takes you outside of what a lot of economics majors study these days.
{ "language": "en", "url": "https://math.stackexchange.com/questions/759788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Probability, random line up Five distinct families arrive at a party. Each family consists of 3 people. The 15 participants of the party are arranged randomly in a line. Let X be the number of families whose members all stand next to each other. Find E[X] and Var(X). My attempt: Just go straight to finding the pmf of X: P(X=1), P(X=2), ... up to P(X=5). Does the question require all members of a family to be together, or only at least 2 of them? If it is the second case, I have no idea how to do it.
Hint: The random variable takes values $0,1,2, \cdots, 5$. Find the probability of each event. Also, $X^2$ takes values $0,1,4,9,16,25$ with the same probabilities, as computed above, and $Var(X)=E(X^2)-E(X)^2$
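If you want to check your exact answer, a Monte Carlo estimate is easy to set up (a sketch; families are labelled $0,\dots,4$ with three members each):

```python
import numpy as np

rng = np.random.default_rng(0)
people = np.repeat(np.arange(5), 3)          # 15 people in 5 families
samples = []
for _ in range(100_000):
    perm = rng.permutation(people)
    # family f counts iff its three members occupy three consecutive spots
    x = sum(any((perm[i:i+3] == f).all() for i in range(13)) for f in range(5))
    samples.append(x)
samples = np.array(samples)
print(samples.mean(), samples.var())         # compare with E[X] and Var(X)
```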
{ "language": "en", "url": "https://math.stackexchange.com/questions/759873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Distance and speed of two people walking 100 miles This is for GRE math prep. Can you explain why the answer to this is 54? Five hours after Sasha began walking from A to B, a distance of 100 miles, Mario started walking along the same road from B to A. If Sasha's walking rate was 2 miles per hour and Mario's was 3 miles per hour, how many miles had Mario walked when they met? [ ] 42 [ ] 46 [X] 54 [ ] 58 [ ] 64 I know the formula $$Rate=\frac{Distance}{Time}$$ but not sure how to use it here to solve this. Do you have a general rule for helping solve these mind-twisting word puzzles? Thank you.
Let $t$ be the number of hours that Mario walks before they meet. Then Sasha has walked $5+t$ hours, and $$2(5+t)+3t=100.$$ Or else, without "algebra": Sasha has walked $10$ miles before Mario sets out, so at that time they are $90$ miles apart. Then distance between them shrinks $5$ miles per hour, so it takes $18$ hours to shrink to $0$.
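The same computation, done symbolically (a sketch with sympy; here $t$ is Mario's walking time in hours):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
t_meet = sp.solve(sp.Eq(2*(5 + t) + 3*t, 100), t)[0]
print(t_meet, 3*t_meet)   # t = 18 hours, so Mario walked 3*18 = 54 miles
```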
{ "language": "en", "url": "https://math.stackexchange.com/questions/759978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to show that $5xy\sqrt{(x^2+y^2)^3}$ can be written as a sum of four 5th powers of positive integers. Find all positive integers $x,y$ such that $$5xy\sqrt{(x^2+y^2)^3}$$ can be written as a sum of four 5th powers of positive integers. In other words: there exist $a,b,c,d\in N^{+}$ such that $$5xy\sqrt{(x^2+y^2)^3}=a^5+b^5+c^5+d^5$$ This problem is from a math competition simulation test. I searched for this problem and found that its background is Euler's sum of powers conjecture (see the link). Maybe this problem is not hard, because it is from a competition. Since $$5xy\sqrt{(x^2+y^2)^3}=5xy(x^2+y^2)\sqrt{x^2+y^2}$$ we need $$x^2+y^2=m^2,$$ for example $$x=3,y=4,m=5,$$ and in general $$x=(a'^2-b'^2),y=2a'b',m=a'^2+b'^2,$$ but then I am stuck. Thank you for your help.
Something that might help. Using what you said, namely $x^2+y^2=k^2$ with $x=m^2-n^2$, $y=2mn$ (so $k=m^2+n^2$), the left hand side of the equation becomes: $LHS=10(m^9n-mn^9+2m^7n^3-2m^3n^7)$ Since $LHS\equiv 0$ modulo 2 and modulo 5, so is the RHS; and because $a^5\equiv a\pmod{2}$ and $a^5\equiv a\pmod{5}$ by Fermat's little theorem, we get $a+b+c+d\equiv 0 \pmod{2}$ and $a+b+c+d\equiv 0 \pmod{5}$.
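The expansion is easy to confirm with sympy (a sketch; $m,n>0$ as in the parametrization):

```python
import sympy as sp

m, n = sp.symbols('m n', positive=True)
x, y = m**2 - n**2, 2*m*n
k = m**2 + n**2                        # k = sqrt(x^2 + y^2)

lhs = sp.expand(5*x*y*k**3)            # 5*x*y*(x^2+y^2)^(3/2) with x^2+y^2 = k^2
claimed = sp.expand(10*(m**9*n - m*n**9 + 2*m**7*n**3 - 2*m**3*n**7))
print(sp.simplify(lhs - claimed) == 0)   # True
```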
{ "language": "en", "url": "https://math.stackexchange.com/questions/760071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Continuity of bilinear maps Given a vector space $V$ over $\mathbb{R}$ with a norm $||*||$. Can $(x,y)\rightarrow(x+y)$ be an example of a continuous bilinear map? If yes, can you please explain why? The definition of a continuous bilinear map $\lambda$ on $V\times V \rightarrow V$ is: for all $v,w\in V$, there is $C>0$ such that $||\lambda(v,w)||\le C||v||\,||w||$. How can I proceed from here?
Your map $\lambda:V\times V\rightarrow V$ is continuous, but not bilinear: For $\mu\not=0\in \mathbb{R}$ and $v,w\not=0\in V$: $$\lambda(\mu v,w)=\mu v+ w\not = \mu (v+w)=\mu\cdot\lambda(v,w)$$ However, $\lambda$ is a linear map from the vector space $V\times V$ to $V$. Therefore it is continuous if and only if there exists $C>0$ such that $$\|\lambda(x,y)\|_V \le C \|(x,y)\|_{V\times V}\tag{1}$$ Note that $\|x\|_V \|y\|_V$ is not a norm on $V\times V$. A suitable norm (where I mean by suitable that it generates the product topology induced by $\|\cdot\|_V$) would for example be $$\|(x,y)\|_{V\times V} = \|x\|_V + \|y\|_V$$ With respect to that norm, $\lambda$ clearly satisfies $(1)$ with $C=1$ by the triangle inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/760248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quadratic equations and inequalities $\sqrt{4n+1}<\sqrt{n} + \sqrt{n+1}<\sqrt{4n+2}$ and $[\sqrt{n}+\sqrt{n+1}] = [\sqrt{4n+1}]$ For every positive integer $n$, prove that $$\sqrt{4n+1}<\sqrt{n} + \sqrt{n+1}<\sqrt{4n+2}$$ Hence or otherwise, prove that $[\sqrt{n}+\sqrt{n+1}] = [\sqrt{4n+1}]$, where $[x]$ denotes the greatest integer not exceeding $x$. This question was posed to me in class by my teacher. Since we are discussing quadratic equations, I can only imagine that this question is related to that topic. Actually, by squaring the terms on both sides of the inequality, the first part of the question is solved easily. It is the second half that is causing me trouble. Clearly we have to show that if $x <\sqrt{4n+1} < x+1$ where $x$ is a natural number, then $x <\sqrt{n} + \sqrt{n+1} < x+1$, but how? I am in high school, so please use techniques appropriate for my level.
Observe that $$\lfloor\sqrt{4n+1}\rfloor=\lfloor\sqrt{4n+2}\rfloor$$ unless $4n+2$ is a perfect square: the two floors can only differ if some integer $m$ satisfies $4n+1<m^2\le 4n+2$, i.e. $m^2=4n+2$. But any square is $\equiv0,1\pmod4$, while $4n+2\equiv2\pmod4$, so this never happens. Combining this with the inequality $\sqrt{4n+1}<\sqrt n+\sqrt{n+1}<\sqrt{4n+2}$ from the first part, $$\implies\lfloor\sqrt{4n+1}\rfloor=\lfloor\sqrt n+\sqrt{n+1}\rfloor=\lfloor\sqrt{4n+2}\rfloor$$
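An empirical confirmation for small $n$ (not a proof, just a check):

```python
from math import isqrt, sqrt

for n in range(1, 10_000):
    assert isqrt(4*n + 1) == int(sqrt(n) + sqrt(n + 1))
print("identity holds for all n < 10000")
```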
{ "language": "en", "url": "https://math.stackexchange.com/questions/760330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Evaluate $\displaystyle\lim_{j\to0}\lim_{k\to\infty}\frac{k^j}{j!\,e^k}$ I found this problem in my deceased grandpa's note today when I was visiting my grandma's home. \begin{equation} \lim_{j\to0}\lim_{k\to\infty}\frac{k^j}{j!\,e^k} \end{equation} I asked my brother and he said the answer is $\cfrac{1}{2}$, but as usual, he didn't give me any explanation why the answer is $\cfrac{1}{2}$. I only get the indeterminate form $\infty^0$ in the numerator part if I substitute $j=0$ and $k=\infty$. I have no idea how to answer this problem. Could anyone here help me to answer it? I really appreciate your help. Muchas gracias!
The sum was probably transcribed wrong somewhere along the way, because it doesn't make much sense as written (the limit as $j \rightarrow 0$ of something involving $j!$ ?). But since the answer is supposed to be $1/2$, I'm guessing that the intended formula was something like $$ \lim_{k \rightarrow \infty} \sum_{j=0}^k \frac{k^j} {j! e^k}, $$ which I'm sure I've seen before (possibly on this forum or on mathoverflow) but can't easily locate a reference. If we had replaced the $\sum_{j=0}^k$ by $\sum_{j=0}^\infty$ then the sum would equal $1$ for all $k$, because on writing $$ \sum_{j=0}^\infty \frac{k^j} {j! e^k} = \frac1{e^k} \sum_{j=0}^\infty \frac{k^j}{j!} $$ we recognize the sum as the power series for $e^k$. So the problem is asking in effect to prove that as $k\rightarrow\infty$ the $\sum_{j=0}^k$ part of the sum is asymptotically half of the entire $\sum_{j=0}^\infty$ sum. To see that this is at least plausible, observe that the $j=k$ term is the largest, but still accounts for only $O(1/\sqrt k)$ of the total, and the $j=k+1, \, k+2, \, k+3, \, \ldots$ terms are approximately equal to the $j=k-1, \, k-2, \, k-3, \, \ldots$ terms respectively. There are various ways to finish the proof, and your brother can probably point you towards one of them :-)
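One can watch the convergence numerically (a sketch, assuming the reading of the formula suggested above; the sum is computed in log space to avoid overflow):

```python
from math import exp, lgamma, log

def partial_sum(k):
    # sum over j = 0..k of k^j / (j! e^k)
    return sum(exp(j*log(k) - lgamma(j + 1) - k) for j in range(k + 1))

for k in (1, 10, 100, 1000, 10000):
    print(k, partial_sum(k))   # tends to 0.5, with error on the order of 1/sqrt(k)
```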
{ "language": "en", "url": "https://math.stackexchange.com/questions/760451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Estimate standard deviation of sample You have a random sample of 25 objects with mean weight of 24 grams, estimate the standard deviation of the sample. In addition, you know it's supposed to be 25 grams with a deviation of 1 gram, but this has no relevance to the above question, right? How is this done? Looking in my formula reference this is not enough information to give an estimate.
A sample mean of $24$ is unlikely from a sample of twenty-five items with a population mean of $25$ and population standard deviation of $1$. So the sample casts doubt on the population parameters. But conditioned on the data given and ignoring issues such as finite populations, a good estimate of the variance (or standard deviation) of the sample, conditioned on the population parameters and the sample mean, is simply the population value: $1$ in this case. Here is an illustration of ten thousand trials in R from a normal distribution, suggesting no relationship between the sample standard deviation and the difference between the population mean and the sample mean:

```r
library(matrixStats)

samplesize <- 25
cases <- 10000
popmean <- 25
popsd <- 1

set.seed(1)
# each row of matdat is one sample of 25 draws
matdat <- matrix(rnorm(samplesize * cases, mean = popmean, sd = popsd),
                 ncol = samplesize)
samplemeans <- rowMeans(matdat)
samplesds <- rowSds(matdat)

# no visible relationship between sample sd and sample mean
plot(samplesds ~ samplemeans)
```
{ "language": "en", "url": "https://math.stackexchange.com/questions/760526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Looking for an introductory Algebraic Geometry book I am looking for recommendations on an AG text to work through this summer, possibly with the help of a mentor. I would want this book to have some introduction to categories, and then develop the modern methods (some development of sheaves and schemes), hopefully up to some realization of Riemann-Roch. Any recommendations?
These notes by Andreas Gathmann are precisely what you're asking for. He starts very gently, schemes being introduced in Chapter 5, but he ends with sheaf cohomology (including Riemann-Roch) and some intersection theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/760579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The eigenvalue of $A^TA$ If $\lambda$ is an eigenvalue of matrix $A$, what is the eigenvalue of $A^TA$? I have no clue about it. Can anyone help with that?
Generally, you won't be able to say much about them: the eigenvalues of $A^TA$ are the squares of the singular values of $A$, which are not determined by the eigenvalues of $A$ alone. However, if $A$ is for instance real symmetric, it is diagonalizable with real eigenvalues, meaning there is an orthogonal matrix $B$ (that is, with $B^{-1}=B^T$) and a diagonal matrix $D$ such that: $$A = B D B^T$$ $$A^TA = (B D B^T)^T B D B^T = B D^T B^T B D B^T = BD^2B^T$$ In other words, for symmetric $A$ the eigenvalues of $A^TA$ are the squares of the eigenvalues of $A$.
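A numerical illustration with numpy (the matrices here are random examples):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                              # symmetric case
print(np.sort(np.linalg.eigvalsh(A)**2))
print(np.sort(np.linalg.eigvalsh(A.T @ A)))    # matches the line above

B = rng.standard_normal((4, 4))                # generic, non-symmetric case
print(np.sort(np.linalg.svd(B, compute_uv=False)**2))
print(np.sort(np.linalg.eigvalsh(B.T @ B)))    # squares of singular values, not eigenvalues
```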
{ "language": "en", "url": "https://math.stackexchange.com/questions/760654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }