Three couples sit at random in a line of six seats, probability that no couple sits together? If three married couples (so 6 people) sit in a row of six seats at random, what is the probability that no couples sit together? Another way to think about it (couples are AB, CD, and EF)
There are $6!$ possible seating arrangements. From these, we must exclude those in which one or more couples sit in adjacent seats. There are three ways to select a couple who sit in adjacent seats. That gives us five objects to arrange, the couple and the other four people. The objects can be arranged in $5!$ ways. The couple that sits together can be arranged internally in $2!$ ways. Hence, there are $$\binom{3}{1}5!2!$$ seating arrangements in which a couple sits in adjacent seats. However, if we subtract these seating arrangements from the total, we will have subtracted too much since we have counted seating arrangements in which two couples sit together twice, once for each way we could designate one of the couples as the couple that sits in adjacent seats. Since we only want to subtract such arrangements once, we must add them back. There are $\binom{3}{2}$ ways to select two couples that sit together. That gives us four objects to arrange, the two couples and the two other people. The objects can be arranged in $4!$ ways. Each of the two couples that sit in adjacent seats can be arranged internally in $2!$ ways. Hence, there are $$\binom{3}{2}4!2!2!$$ seating arrangements in which two couples sit together. When we subtracted arrangements in which a couple sits together, we counted seating arrangements in which all three couples sit together three times, once for each way we could have designated one of those couples as the couple that sits together. When we added arrangements in which two couples sit together, we counted seating arrangements in which all three couples sit together three times, once for each of the $\binom{3}{2}$ ways we could have designated two of the three couples as the ones that sit together. Therefore, we have not excluded seating arrangements in which all three couples sit together at all. There are $3!$ ways to arrange three couples. Each couple can be arranged internally in $2!$ ways. Hence, the number of seating arrangements in which all three couples sit together is $$\binom{3}{3}3!2!2!2!$$ By the Inclusion-Exclusion Principle, the number of seating arrangements of the three couples in which no couples sit together is $$6! - \binom{3}{1}5!2! + \binom{3}{2}4!2!2! - \binom{3}{3}3!2!2!2! = 720 - 720 + 288 - 48 = 240$$ The probability that no couple sits together is $$\frac{6! - \dbinom{3}{1}5!2! + \dbinom{3}{2}4!2!2! - \dbinom{3}{3}3!2!2!2!}{6!} = 1 - \frac{\dbinom{3}{1}5!2! - \dbinom{3}{2}4!2!2! + \dbinom{3}{3}3!2!2!2!}{6!} = \frac{240}{720} = \frac{1}{3}$$
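For anyone who wants to sanity-check the count, here is a minimal brute-force sketch in Python (the labels are hypothetical: people $0$–$5$, with couples $(0,1)$, $(2,3)$, $(4,5)$); it enumerates all $6! = 720$ orderings and confirms $240/720 = 1/3$.

```python
from itertools import permutations

couples = [(0, 1), (2, 3), (4, 5)]  # three couples, people labeled 0..5

def no_couple_adjacent(seating):
    # True when no couple occupies adjacent seats in this ordering.
    return all(abs(seating.index(a) - seating.index(b)) != 1 for a, b in couples)

good = sum(no_couple_adjacent(p) for p in permutations(range(6)))
print(good, good / 720)  # expected: 240 0.333...
```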
{ "language": "en", "url": "https://math.stackexchange.com/questions/2498773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Let $A$ be a $2\times 2$ matrix with eigenvalues $1,2$. What is $\det(A^3-3A^2+A+5I)$ Let $A$ be a $2\times 2$ matrix with eigenvalues $1,2$. What is $\det(A^3-3A^2+A+5I)$ I know that $\det(A)=$ product of eigenvalues = $2$. I also know that since $A$ has 2 distinct eigenvalues, it is diagonalizable. $A=UDU^T$, where $U$ is the orthogonal matrix such that $UU^T=I$, and $D$ is the diagonal matrix whose diagonal entries are the eigenvalues. $$D=\begin{bmatrix}1&0\\0&2\end{bmatrix} $$ I don't know what $U$ is. If we have no information about what $A$ looks like, how can we calculate $U$, which contains the eigenvectors? Suppose I have $U$. Then I know that $A^2=UD^2U^T$, but again I have no information about what $A$ is.
By Cayley–Hamilton, $A^2-3A+2I=0$ and so $$A^3-3A^2+A+5I=A(A^2-3A+2I)-A+5I=-A+5I$$ The eigenvalues of $-A+5I$ are $-1+5=4$ and $-2+5=3$ and so $$\det(A^3-3A^2+A+5I) = \det(-A+5I) = 4 \cdot 3 = 12$$ Or you could argue directly that the eigenvalues of $P(A)$ are $P(1)$ and $P(2)$ and so $\det P(A) = P(1)P(2)$.
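As a quick numerical sanity check (a sketch, not part of the original argument): conjugate $\operatorname{diag}(1,2)$ by a random invertible matrix to get an arbitrary matrix with eigenvalues $1$ and $2$, then evaluate the determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2))
while abs(np.linalg.det(P)) < 1e-3:  # make sure P is invertible
    P = rng.standard_normal((2, 2))

A = P @ np.diag([1.0, 2.0]) @ np.linalg.inv(P)  # eigenvalues are 1 and 2
val = np.linalg.det(A @ A @ A - 3 * (A @ A) + A + 5 * np.eye(2))
print(val)  # expected: 12, up to floating-point error
```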
{ "language": "en", "url": "https://math.stackexchange.com/questions/2498901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
l'Hôpital vs Other Methods Consider the first example using repeated l'Hôpital: $$\lim_{x \rightarrow 0} \frac{x^4}{x^4+x^2} = \lim_{x \rightarrow 0} \frac{\frac{d}{dx}(x^4)}{\frac{d}{dx}(x^4+x^2)} = \lim_{x \rightarrow 0} \frac{4x^3}{4x^3+2x} = ... = \lim_{x \rightarrow 0}\frac{\frac{d}{dx}(24x)}{\frac{d}{dx}(24x)} = \frac{24}{24}=1 $$ Consider the following example using a different method: $$ \lim_{x \rightarrow 0} \frac{x^4}{x^4+x^2} = \lim_{x \rightarrow 0}\frac{\frac{x^4}{x^4}}{\frac{x^4}{x^4}+\frac{x^2}{x^4}} = \lim_{x \rightarrow 0} \frac {1}{1 +\frac{1}{x^2}} = \frac {1}{1+\infty} = \frac{1}{\infty}=0 $$ The graph here clearly tells me the limit should be $0$, but why does l'Hôpital fail?
After differentiating one more time, the denominator becomes $12x^2 + 2$, which does not go to $0$ as $x \to 0$. The quotient is then no longer of the form $\frac{0}{0}$, so l'Hôpital's rule cannot be applied again: the limit is $\frac{0}{2} = 0$.
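A one-line symbolic check (a sketch, assuming SymPy is available) confirms the limit the graph suggests:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit(x**4 / (x**4 + x**2), x, 0))  # expected: 0
```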
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 2 }
Difference between torsion and Lie bracket After viewing a lecture on torsion, the lecturer said that the torsion is the failure of curves to close. Since this is almost also what I have read about the Lie bracket, I want to know their difference and also understand it geometrically. The Lie bracket is included in the definition of torsion, so I am guessing that the "non-closure" of curves due to torsion has to do with a non-zero Lie bracket. 1) But what if we have a zero Lie bracket and non-zero torsion? Does this mean that the curves will again fail to close? 2) And what if we have zero torsion and a non-zero Lie bracket? Does this mean that the curves will again fail to close? If not, in what way, graphically, can we say that the covariant derivatives found in the definition of torsion compensate for the effect of the Lie bracket? EDIT: For completeness, I also give the definition of the torsion tensor: $T(X,Y):=\nabla_XY-\nabla_YX-[X,Y]$
In simple words (not formal): the torsion describes how the tangent space is twisted when it is parallel transported along a geodesic. The Lie bracket of two vectors measures, as you said, the failure of the flow lines of these vectors to close. The main difference is that torsion uses parallel transport whereas the Lie bracket uses flow lines.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Maximum value of $\gcd(u^2-v^2, 2uv, u^2+v^2)$ What is the maximum of $\gcd(u^2-v^2, 2uv, u^2+v^2)$, where $u$ and $v$ are integers and $u\neq v\neq 0$ ? I have tried to find it but I haven't got anywhere. I got this question when I was watching this video, where he says, at 9:36, that we need not scale down by less than $1/2$ (you might want to rewind the video). How do we prove it? Follow up question: What about $\gcd(u^2-v^2-w^2, 2uv, 2uw, u^2+v^2+w^2)$ and so on?
Let $d=\gcd(u,v)$, $u=dx$, $v=dy$, so that $\gcd(x,y)=1$. Then $$\gcd(u^2-v^2,2uv,u^2+v^2)=d^2\gcd(x^2-y^2,2xy,x^2+y^2).$$ Now if $p$ is an odd prime dividing $2xy$, then $p\mid x$ or $p\mid y$; in either case $p\nmid x^2+y^2$, since $\gcd(x,y)=1$. Moreover $4$ cannot divide the gcd, because $x^2+y^2\equiv 1$ or $2\pmod 4$. Hence $\gcd(x^2-y^2,2xy,x^2+y^2)$ can be at most $2$, and this happens iff $x,y$ are both odd.
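A quick exhaustive check (a sketch; it restricts to coprime pairs, matching the reduction $u=dx$, $v=dy$ above) supports the bound:

```python
from math import gcd

best = 0
for u in range(1, 50):
    for v in range(1, 50):
        if u == v or gcd(u, v) != 1:
            continue  # coprime pairs only, per the reduction above
        g = gcd(gcd(u**2 - v**2, 2 * u * v), u**2 + v**2)
        best = max(best, g)
print(best)  # expected: 2, attained exactly when u and v are both odd
```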
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Number of Integer Solutions for $x_1 + x_2 + x_3 + x_4 = 15$ where $-5 \le x_i \le 10$ I am trying to find the number of integer solutions for $x_1 + x_2 + x_3 + x_4 = 15$ where $-5 \le x_{i} \le 10$ for each $i \in [4]$. I know if the $x_i$ are all non-negative integers, it is a number partition of $15$; however, this case is a bit tricky with the possible negative integers. Any hint on how to approach this problem?
Let $y_i = x_i + 5$ for each $i$. Then you're trying to find the number of integer solutions to $y_1 + y_2 + y_3 + y_4 = 35$ with $0 \le y_i \le 15$.
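Following the hint through (a brute-force sketch): the count below, $2396$, agrees with the inclusion–exclusion value $\binom{38}{3}-\binom{4}{1}\binom{22}{3}+\binom{4}{2}\binom{6}{3}$.

```python
from itertools import product

# Solutions of y1 + y2 + y3 + y4 = 35 with 0 <= yi <= 15, which are in
# bijection with the original solutions satisfying -5 <= xi <= 10.
count = sum(1 for y in product(range(16), repeat=4) if sum(y) == 35)
print(count)  # expected: 2396
```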
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find real and complex $A_{m \times n}$ such that $\operatorname{Ran} A = \operatorname{Ker} A^T$ where $\operatorname{Ran} A$ is column space of $A$. $\newcommand{\Ran}{\operatorname{Ran}} \newcommand{\Ker}{\operatorname{Ker}}\newcommand{\b}{\mathbf}$ If $\b y \in \Ran A$ and $\b y \in \Ker A^T$, then $\b y = A \b x$ and $A^T \b y = \b 0 $ or $A^TA \b x = \b 0$. Therefore either $\b x = \b 0$ or $A^TA=0$. I need an $A$ such that $A^TA = 0_{m \times m}$. For a complex matrix $A$, I easily found $$\begin{bmatrix}1 & -i\\i &1\end{bmatrix}$$ but I can't find a real matrix $A$. If I expand the product $A^TA = 0$, I get $$0= (A^TA)_{ij} = \sum^m_{k=1} \left(A^T\right)_{ik}\left(A\right)_{kj} = \sum^m_{k=1} \left(A\right)_{ki}\left(A\right)_{kj}$$ for $i = j$, $\sum^m_{k=1} \left(A\right)_{ki}\left(A\right)_{ki} = 0$ which implies $A_{ki} = 0$ for all $1\le k \le m$. Since $1\le i \le m$, therefore $A_{ki} = 0$ for all $1 \le k,i\le m$. The matrix $$B = \begin{bmatrix} 0 & 0 &0 \\ 0 &0 & 2 \end{bmatrix}$$ satisfies the condition $B_{ij} = 0$ for all $1 \le i,j\le m = 2$ but $B^TB \ne 0$. My question: Is there a real matrix (non-trivial) for which $\Ran A = \Ker A^T$?
Here is an argument for a real Euclidean space. Note that if $A \neq 0$ then $\langle A v, A v \rangle > 0$ for some $v$. Then since $\langle A^{\mathrm{t}} A v, v \rangle = \langle A v, A v \rangle > 0$ it follows that $A^{\mathrm{t}} A v \neq 0$. So $A v$ is in the range of $A$ but not in the kernel of $A^{\mathrm{t}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does $f_n$ uniformly converge to $f$? $f_n :[0, 1]\rightarrow\mathbb{R} \qquad x \mapsto x^n - x^{n+1}$ The sequence converges pointwise to the zero function. It converges uniformly if $$\sup_{x \in [0, 1]} \; \big| \, f_{n}(x) - f(x) \, \big|$$ tends to zero. But I am not sure whether it does, or how to prove it.
Notice that $f_{n+1} = x f_n$; since $x\in[0,1]$, the sequence $(f_n)$ is monotone. Use Dini's theorem to complete the proof.
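Alternatively (a direct computation, not part of the original answer), one can maximize $f_n$ on $[0,1]$ explicitly: $f_n'(x)=nx^{n-1}-(n+1)x^n=0$ gives $x_n=\frac{n}{n+1}$, so $$\sup_{x\in[0,1]}|f_n(x)| = f_n\!\left(\frac{n}{n+1}\right)=\left(\frac{n}{n+1}\right)^n\frac{1}{n+1}\le\frac{1}{n+1}\longrightarrow 0,$$ which proves the uniform convergence directly.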
{ "language": "en", "url": "https://math.stackexchange.com/questions/2499961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Determining Whether a function is linear I'm positive this function is linear but am having trouble showing it: Determine whether $T:V \to W$ defines a linear transformation: $V=R^3$, $W=R$ $T((a_1, a_2, a_3)) = 3a_1 +2a_2 + a_3 $ I know I have to show that $T(u+v) = T(u) + T(v)$ and $T(cu) = cT(u)$ but am unsure how to proceed.
These are the steps you should take to prove $T(u+v)=T(u)+T(v)$. Set $u=(u_1,u_2,u_3)$ and $v=(v_1,v_2,v_3)$.

1. Write down what $T(u)$ is.
2. Write down what $T(v)$ is.
3. Write down what $T(u)+T(v)$ is.
4. Write down what $u+v$ is.
5. Write down what $T(u+v)$ is.
6. Compare results from (3) and (5).

Which of these steps were you able to do, and where are you stuck?
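If it helps to see the algebra confirmed symbolically, here is a small sketch (assuming SymPy; $T$ below is the map from the question):

```python
import sympy as sp

u1, u2, u3, v1, v2, v3, c = sp.symbols('u1 u2 u3 v1 v2 v3 c')
T = lambda a: 3*a[0] + 2*a[1] + a[2]

u = (u1, u2, u3)

# Additivity: T(u + v) - (T(u) + T(v)) should simplify to 0.
print(sp.simplify(T((u1 + v1, u2 + v2, u3 + v3)) - (T(u) + T((v1, v2, v3)))))
# Homogeneity: T(c*u) - c*T(u) should simplify to 0.
print(sp.simplify(T((c*u1, c*u2, c*u3)) - c*T(u)))
```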
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the Lagrange Dual I'm working on the following (convex) optimization problem: Let $Q$ be an $n \times n$ positive semidefinite matrix, $A$ an $m\times n$ matrix and $b\in \mathbb{R}^m$. Determine the Lagrange dual of \begin{align} \min\{x^TQx \ | \ Ax\leq b , \ x\in\mathbb{R}^n \} \end{align} My problem is specifically the case where $Q$ is not invertible; that is where I get stuck. Attempt: I have calculated the Lagrangian $\phi(x,u)$ and that is: \begin{align} \phi(x,u)=x^TQx + u(Ax-b) \end{align} The Lagrange dual function $\Theta(u)$ is: \begin{align} \Theta(u) = \inf_{x\in\mathbb{R}^n} \phi(x,u) \end{align} To have an explicit form for it I minimized $\phi$ w.r.t. $x$. So I differentiated and equated to zero: \begin{align} \nabla_x \phi(x,u) = 2x^TQ + uA = 0 \end{align} That gives: \begin{align} x^TQ= -\frac{u}{2}A \end{align} Now two cases: Case I: $Q$ invertible, then it is easy. I get $x=-\frac{1}{2}(Q^{-1})^TA^Tu^T$. Then I put it into $\phi(x,u)$ and get an expression for $\Theta(u)$, which automatically gives me the Lagrange dual. Case II: $Q$ is not invertible, and then I really do not know how to proceed.
The primal model is \begin{array}{ll} \text{minimize} & x^T Q x \\ \text{subject to} & A x \leq b \end{array} The Lagrangian is $$L(x,u) = x^TQx - u^T(b-Ax)$$ where the Lagrange multiplier $u$ is nonnegative. The dual function is $$g(u) = \inf_x x^TQx - u^T(b-Ax)$$ The optimality conditions for the infimum are indeed $$2Qx+A^Tu=0$$ as you have pointed out. Because the infimum expression is convex, all values of $x$ that satisfy the optimality conditions must result in the same value of $g(u)$. Therefore, $$g(u) = \begin{cases} x^TQx - u^T(b-Ax) & \exists x ~\text{s.t.}~ 2Qx+A^Tu=0 \\ -\infty & \not\exists x~\text{s.t.}~2Qx+A^Tu=0 \end{cases}$$ The key is to realize that when the optimality condition is satisfied, we have \begin{aligned} x^TQx - u^T(b-Ax) &= x^TQx - b^T u + (A^Tu)^Tx \\&= x^TQx-b^Tu-2x^TQx \\&= -b^Tu - x^TQx \end{aligned} This last form of the expression is useful to us because it is concave in both $x$ and $u$! Therefore, we can write the dual as follows: \begin{array}{ll} \text{maximize} & -b^T u - x^T Q x \\ \text{subject to} & 2Qx + A^T u = 0 \\ & u \geq 0 \end{array} It might seem strange to leave the primal variable in the Lagrange dual, but this is exactly what the Wolfe dual does for general NLP. But unlike the general Wolfe dual, the objective function is jointly concave in $x$ and $u$, so it can be solved as-is.
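Here is a small numerical sketch (assuming the CVXPY package; the data below are an arbitrary toy instance) that solves the primal and this Wolfe-style dual and compares optimal values, with a deliberately singular $Q$ to stress the point of the question:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4
M = rng.standard_normal((1, n))
Q = M.T @ M                      # rank-1 PSD matrix, hence singular
A = rng.standard_normal((m, n))
b = rng.random(m) + 1.0          # b > 0, so x = 0 is primal feasible

# Primal: minimize x'Qx subject to Ax <= b
x = cp.Variable(n)
primal = cp.Problem(cp.Minimize(cp.quad_form(x, Q)), [A @ x <= b])
primal.solve()

# Dual: maximize -b'u - x'Qx subject to 2Qx + A'u = 0, u >= 0
xd, u = cp.Variable(n), cp.Variable(m)
dual = cp.Problem(cp.Maximize(-b @ u - cp.quad_form(xd, Q)),
                  [2 * Q @ xd + A.T @ u == 0, u >= 0])
dual.solve()

print(primal.value, dual.value)  # the two optimal values should agree
```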
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An example of a unipotent matrix which is NOT upper triangular Define::= $I_n$ -- the $n \times n$ identity matrix. Let $A$ be an $n \times n$ real matrix. Define::= nilpotent matrix -- an $n \times n$ real matrix $X$ such that $X^k$ is the zero matrix for some positive integer $k$. Define::= unipotent matrix -- a matrix $A$ such that $A - I$ is nilpotent. I'd like an example of a unipotent matrix which is not upper triangular.
$$A=\left[ \begin{array}{ccc}1&1&0\\ 0&1&0\\ 0&1&1\end{array}\right]$$ is unipotent because $(A-I)^2=0$, so $A-I$ is nilpotent, where $I$ is the identity (or one can argue that the characteristic polynomial of $A$ is $(\lambda-1)^3$), whereas $A$ is neither upper nor lower triangular.
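A quick check of the nilpotency claim (a NumPy sketch):

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]])
N = A - np.eye(3, dtype=int)
print(N @ N)  # expected: the 3x3 zero matrix, so A - I is nilpotent
```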
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove 11 does not divide $3^{3k-1}+5*3^k$ for any odd k. First I did an induction proof that it does work for even k. Then I started the proof as so. Suppose there exists a k of the form 2n+1, s.t. 11 divides $3^{3k-1}+5*3^k$. After some algebra I can arrive at this point $5*2^{6n-1}+3(2^{6n-1}+5*3^{2n})$ Since I proved separately this works for even k, the right side sum is of that form, so $(2^{6n-1}+5*3^{2n})$ is divisible by 11. So I believe if I can somehow prove that 11 does not divide any power of 2, I would have finished the proof. However I don't know how to do that. Someone may have to fix the tags as I'm not entirely sure what is appropriate here, sorry.
$3^{3k-1}+5\times 3^k=3^{k-1}(3^{2k}+15)=3^{k-1}(x^2+15)$ with $x=3^k$ Also $x^2+15\equiv x^2+4\pmod{11}$ $\begin{array}{l} k=0: & 3^0\equiv 1\pmod{11} & x^2+4\equiv 5\pmod{11}\\ k=1: & 3^1\equiv 3\pmod{11} & x^2+4\equiv 13\equiv 2\pmod{11}\\ k=2: & 3^2\equiv 9\pmod{11} & x^2+4\equiv 85\equiv 8\pmod{11}\\ k=3: & 3^3\equiv 5\pmod{11} & x^2+4\equiv 29\equiv 7\pmod{11}\\ k=4: & 3^4\equiv 4\pmod{11} & x^2+4\equiv 20\equiv 9\pmod{11}\\ k=5: & 3^5\equiv 1\pmod{11} & \text{and it cycles there...}\\ \end{array}$ So $x^2+4$ is never a multiple of $11$ for any $x=3^k$, and since $3^{k-1}$ is not divisible by $11$ either, we have our result for any $k\ge 1$.
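A direct computational check of the claim (a sketch):

```python
# Verify that 11 divides 3^(3k-1) + 5*3^k for no k in 1..200.
for k in range(1, 201):
    assert (3**(3*k - 1) + 5 * 3**k) % 11 != 0
print("11 divides none of the values for k = 1..200")
```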
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Stirling numbers of first kind proof For the Stirling numbers of the first kind, show that \begin{align*} (x)^{(n)} =\sum_{k=0}^n s'(n,k)x^k. \end{align*} For this proof we can proceed with induction, by proving the base cases first, $n=0$ and $n=1$: \begin{eqnarray*} (x)^{(0)}=s'(0,0)x^0 = 1 \\ (x)^{(1)}=s'(1,1)x^1 = (1)(x)= x \end{eqnarray*} The statement holds for the base case $n=1$. Now assume the statement holds for $n=m-1$: \begin{align*} (x)^{(m-1)} =\sum_{k=0}^{m-1} s'(m-1,k)x^k \end{align*} To obtain the equation for $n=m$, we use the case $m-1$: \begin{equation*} (x)^{(m)}=(x)^{(m-1)}(x+m-1) = x\cdot (x)^{(m-1)}+(m-1)(x)^{(m-1)}\end{equation*} \begin{equation*} =x\sum_{k\geq 0} s'(m-1,k)x^k + (m-1)\sum_{k\geq 0} s'(m-1,k) x^k \end{equation*} \begin{equation*} =\sum_{k\geq 0} s'(m-1,k)x^{k+1} + (m-1)\sum_{k\geq 0} s'(m-1,k) x^k \end{equation*} \begin{equation*} =\sum_{k\geq 1} s'(m-1,k-1)x^{k} + (m-1)\sum_{k\geq 0} s'(m-1,k) x^k \end{equation*} \begin{equation*} =\sum_{k\geq 0}[s'(m-1,k-1)+(m-1)s'(m-1,k)]x^k \end{equation*} (with the convention $s'(m-1,-1)=0$) \begin{equation*} (x)^{(m)} =\sum_{k\geq 0} s'(m,k)x^k \end{equation*} This was my proof, but it is not clear; I feel I did my induction wrong. Any ideas on where I can improve my proof?
The Stirling numbers of the first kind $s(n,k)$ satisfy the recurrence: $$ s(n,k) = s(n-1,k-1) + (n-1) s(n-1,k) $$ The rising factorials are defined by: $$(x)^n = x(x+1)(x+2) \cdots (x+n-1) $$ We wish to show that: $$ (x)^n = \sum_k s(n,k) x^k $$ The induction hypothesis is that: $$ (x)^{n-1} = \sum_k s(n-1,k) x^k $$ Using our recurrence we have: $$ \sum_k s(n,k) x^k = \sum_k s(n-1,k-1)x^k + (n-1) \sum_k s(n-1,k) x^k \\ = x \sum_k s(n-1,k-1)x^{k-1} + (n-1) (x)^{n-1} \\ = (x + n-1) (x)^{n-1} \\ = (x)^n $$
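The identity is easy to check symbolically for small $n$ (a sketch assuming SymPy, whose `stirling(n, k, kind=1)` gives the unsigned Stirling numbers of the first kind):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
for n in range(1, 7):
    rising = sp.expand(sp.prod([x + i for i in range(n)]))  # x(x+1)...(x+n-1)
    lhs = [rising.coeff(x, k) for k in range(n + 1)]
    rhs = [stirling(n, k, kind=1) for k in range(n + 1)]
    assert lhs == rhs, (n, lhs, rhs)
print("identity verified for n = 1..6")
```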
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Reference request: Toroidal graph I have asked a similar question here but am not sure if it has reached the right community. I need references to learn about graphs that have genus 1, i.e. toroidal graphs. Specifically, I am trying to find answers to the questions below.

1. Since toroidal graphs can be recognized in polynomial time, what are the different known characterizations of toroidal graphs?
2. It is known that there are more than a thousand forbidden minors for the toroidal graph class, and only four of them do not contain $K_{3,3}$ as a subdivision (this paper). Where can I find a bigger list of forbidden structures of toroidal graphs?
3. Two disjoint copies of $K_5$ are not toroidal. Is it true that if a graph $G$ has two vertex-disjoint non-planar induced subgraphs, then $G$ is not toroidal? If not, then what is special about disjoint copies of $K_5$?
For (2), take a look at this paper: Myrvold, Wendy; Woodcock, Jennifer, A large set of torus obstructions and how they were discovered, Electron. J. Comb. 25, No. 1, Research Paper P1.16, 17 p. (2018). ZBL1380.05134. Abstract: We outline the progress made so far on the search for the complete set of torus obstructions and also consider practical algorithms for torus embedding and their implementations. We present the set of obstructions that are known to-date and give a brief history of how these graphs were found. We also describe a nice algorithm for embedding graphs on the torus which we used to verify previous results and add to the set of torus obstructions. Although it is still exponential in the order of the graph, the algorithm presented here is relatively simple to describe and implement and fast-in-practice for small graphs. It parallels the popular quadratic planar embedding algorithm of Demoucron, Malgrange, and Pertuiset. In section 6 of the paper, the authors provide links to a database of torus obstructions: 6 The Torus Obstructions The torus obstructions described in the paper are available from: http://www.combinatorics.org/ojs/index.php/eljc/article/view/v25i1p16/html and http://webhome.cs.uvic.ca/~wendym/torus/torus_obstructions.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/2500884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Elementary asymptotics of $\sum_{k=0}^\infty \sqrt k \frac{x^k}{k!}$ as $x\to \infty$ Consider the power series $\sum_{k=0}^\infty \sqrt k \frac{x^k}{k!}$. It is easily seen that its radius of convergence is $\infty$. I'm looking for an elementary proof of the asymptotic expansion $$\sum_{k=0}^\infty \sqrt k \frac{x^k}{k!}=e^x\left(\sqrt x - \frac{1}{8 \sqrt x} + o\left(\frac{1}{\sqrt x} \right)\right)$$ as $x$ goes to $\infty$. Using Stirling's estimate and an asymptotic property of power series, one may derive $$\sum_{k=0}^\infty \sqrt k \frac{x^k}{k!}\sim \frac{1}{\sqrt{ 2\pi}}\sum_{k=0}^\infty \frac{(ex)^k}{k^k}$$ so the question boils down to finding an estimate of $$\sum_{k=0}^\infty \frac{x^k}{k^k}$$ I find this answer quite unconvincing since it makes heavy use of asymptotics of special functions and is not very rigorous . I'd be satisfied if someone showed how to derive the simpler estimate $$\sum_{k=0}^\infty \sqrt k \frac{x^k}{k!}\sim e^x \sqrt x$$
This answer deals with different approaches to the asymptotics of the series in question. Here I will try to give a probabilistic argument. The series has the following probabilistic interpretation: if $N_t$ has a Poisson distribution with rate $t > 0$, then we have $$ \mathbb{E}[\sqrt{N_t}] = \sum_{k=0}^{\infty} \sqrt{k} \, \frac{t^k}{k!}e^{-t}. $$ Together with some basic inequalities we can get the leading order of the asymptotics. For instance, applying Jensen's inequality with the concave function $x \mapsto \sqrt{x}$ tells us that $$ \mathbb{E}[\sqrt{N_t}] \leq \sqrt{\mathbb{E}[N_t]} = \sqrt{t} $$ whereas writing $ \mathbb{E}[\sqrt{N_t}] = t \sum_{k=0}^{\infty} \frac{t^k}{k!} \frac{1}{\sqrt{k+1}} e^{-t} = t \mathbb{E}\left[ (N_t + 1)^{-1/2} \right] $ and applying Jensen's inequality with the convex function $x \mapsto (x+1)^{-1/2}$ gives $$ \mathbb{E}[\sqrt{N_t}] \geq \frac{t}{\sqrt{\mathbb{E}[N_t] + 1}} = \frac{t}{\sqrt{t+1}}. $$ In terms of the original series, this reads as $$ \frac{x}{\sqrt{x+1}} e^x \leq \sum_{k=0}^{\infty} \sqrt{k} \, \frac{x^k}{k!} \leq \sqrt{x} e^x, $$ which is enough to derive the asymptotics $\sim \sqrt{x} e^x$. Higher order terms can also be extracted by utilizing various concentration behaviors of $N_t$.
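A quick numerical sketch (standard library only) comparing $e^{-x}\sum_k \sqrt{k}\,x^k/k!$ with the claimed expansion $\sqrt{x}-\frac{1}{8\sqrt{x}}$ at a moderately large $x$:

```python
from math import exp, lgamma, log, sqrt

def scaled_series(x, terms=2000):
    # e^{-x} * sum_{k >= 1} sqrt(k) x^k / k!, summed in log space
    return sum(sqrt(k) * exp(k * log(x) - lgamma(k + 1) - x)
               for k in range(1, terms))

x = 100.0
print(scaled_series(x))             # ~9.98749, very close to the line below
print(sqrt(x) - 1 / (8 * sqrt(x)))  # 9.9875
```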
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compact Hausdorff Topologies on a set are all incomparable. Is the same true for the $\sigma$-compact Hausdorff case? It is well known that compact Hausdorff topologies on a set are all incomparable; i.e. if $X$ is a set and $\tau_1$ and $\tau_2$ are compact Hausdorff topologies on $X$ then the identity map $(X,\tau_1)\to (X,\tau_2)$ is continuous if and only if $\tau_1 =\tau_2$. Is the same true for the $\sigma$-compact Hausdorff case? What if the spaces are also locally compact?
You can use $\mathbb{Z}$ with two topologies: the discrete topology, and a topology making it homeomorphic to the convergent sequence $\{0\}\cup\{1/n : n\in\mathbb{N}\}\subseteq\mathbb{R}$. Both are $\sigma$-compact and Hausdorff (the second is even compact), they are not the same, and the second is coarser than the discrete topology, so the identity map between them is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Pythagorean Triple where $a=b$? I cannot find even a single webpage mentioning this topic. I'm a programmer and I'm looking for a 45-45-90 triangle where all of the sides are whole numbers. In the video I am watching, they say to use $ a = 10 $, $ a = 10 $, $ c = 14 $ because $ 10 \sqrt{2} $ is close enough to $ 14 $. In my program I am worried this could have serious consequences because it's not accurate. Does there exist a case where $ 2 a^2 = c^2 $ where a and c are whole numbers? If it does not exist, why? Does it revolve around the fact that $ \sqrt{2} $ is irrational?
Yes, it is just the fact that $\sqrt{2}$ is irrational: $2a^2=c^2$ with positive integers $a,c$ would force $\sqrt{2}=c/a$ to be rational. Another way to see it: suppose there were such a right triangle. Then $n=1$ would be a congruent number, i.e., the area of a right triangle with rational sides. By Fermat, for exponent $4$, it isn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Proving $\lim_{x \to \infty}\frac{\ln x}{x^r}=0$ and $\lim_{x \to 0^+}x^r\ln x=0$ for $r>0$ I am trying to prove i) $\lim_{x \to \infty}\frac{\ln x}{x^r}=0$ and ii) $\lim_{x \to 0^+}x^r\ln x=0$ for $r>0$. This is how I went about proving i) and ii): i) $$\lim_{x \to \infty}\frac{\ln x}{x^r}=\lim_{x \to \infty}(\frac{1}{x^{r-1}}\cdot\frac{\ln x}{x})=\lim_{x \to \infty}(\frac{1}{x^{-1}}\cdot\frac{1}{x^{r}}\cdot\frac{\ln x}{x})=\lim_{x \to \infty}x\cdot\lim_{x \to \infty}\frac{1}{x^r}\cdot \lim_{x \to \infty}\frac{\ln x}{x}={\infty}\cdot\frac{1}{\infty}\cdot 0=1\cdot 0=0$$ ii) $$\lim_{x \to 0^+}x^r\ln x=\lim_{x \to 0^+}(x^{r-1}\cdot x \ln x)=\lim_{x \to 0^+}x^{r-1}\cdot \lim_{x \to 0^+}x\ln x=\lim_{x \to \infty}\frac{1}{x^{r-1}}\cdot \lim_{x \to 0^+}x\ln x=\lim_{x \to \infty}\frac{1}{x^{-1}}\cdot \lim_{x \to \infty}\frac{1}{x^r}\cdot\lim_{x \to 0^+}x\ln x=\lim_{x \to \infty}x\cdot\lim_{x \to \infty}\frac{1}{x^r}\cdot\lim_{x \to 0^+}x\ln x=\infty\cdot\frac{1}{\infty}\cdot 0=1\cdot 0=0$$ For i) I wasn't sure if writing $\lim_{x \to \infty}x$ as $\infty$ and $\lim_{x \to \infty}\frac{1}{x^r}$ as $\frac{1}{\infty}$ so that they cancelled out to give $1$ was the right thing to do, since the limits of $x$ and $x^r$ are different as $x$ gets very large, although they both tend to infinity. For ii) I wasn't sure if writing $\lim_{x \to 0^+}x^{r-1}$ as $\lim_{x \to \infty}\frac{1}{x^{r-1}}$ was right to do, even though they both give the value $0$.
Using $\infty\cdot 0$ in limit formulae is not valid. Rather, perhaps, you need to show that, for $r>0$, there exists some constant $C_{r}>0$ such that $\log x\leq C_{r}x^{r/2}$ for $x\geq1$. Then $\left|\dfrac{\log x}{x^{r}}\right|\leq C_{r}\dfrac{1}{x^{r/2}}$, taking $x\rightarrow\infty$ finishes the job. For the second part, use the transformation $x\rightarrow1/x$ and appeal to the first result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
There is no straight line in $C\setminus\{3i\}$ which is mapped onto a straight line in $C$ by $f$ (True/False) Let $\displaystyle f \colon \Bbb C\setminus \{3i\} \to \Bbb C$ be defined by $$f(z)=\frac{z-i}{iz+3}.$$ Which of the following statements is true? 1) $f$ maps circles in $C\setminus\{3i\}$ onto circles in $C$. 2) There is no straight line in $C\setminus\{3i\}$ which is mapped onto a straight line in $C$ by $f$. My attempt: a Möbius transformation maps circles to circles. As $C\setminus\{3i\}$ is connected, and the continuous image of a connected set is connected, option 1 is true. I don't know about option 2. Please help me.
Hint: $$ \frac{z-i}{iz+3}=-i+\frac{2}{z-3i} $$ If $z-3i$ is bounded away from $0$, $f$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
the set of compact operators on $H$ is nonunital How can one prove that $K(H)$ is nonunital, where $K(H)$ is the set of compact operators on $H$ and $H$ is an infinite-dimensional Hilbert space? Can anyone give me some hints? Thanks
For a projection $P\in B(H)$ to be compact, it has to be finite-rank. This is because its range is $PH$, and the unit ball of a subspace will be compact if and only if said subspace is finite-dimensional. Since $I$ is also a projection, it can only be compact when its range is finite-dimensional; that is, when $H$ is finite-dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Covariance and contravariance Given a vector space $V$, a vector $v \in V$ can be written in components with respect to different bases, say $X$ and $Y$. Now when I make a transformation from $X$ to $Y$, the components of the vector transform contravariantly. Now the dual space $V^*$ of $V$ is also a vector space, but the components of a vector there transform differently under a change of dual basis, i.e. covariantly. My question is: if we view the dual space $V^*$ as a vector space $W$, having no relation with the vector space $V$, will we then say that the components of a vector $w \in W$ transform contravariantly?
The condition "$V$ is a left or right vector space over a field $F$" is purely algebraic. It tells us nothing about the type of transformations of tensors on it. That is actually due to further definitions, i.e. those of vectors/vector fields and covectors/covector fields on $V$. In other words, being a vector space doesn't imply that vectors transform "covariantly" or "contravariantly". Vectors just transform according to a particular law which involves a matrix $J$ if you change basis. Correspondingly, covectors change according to a law which involves $J^{-1}$, and this is a consequence of the definition of the dual basis. Because of this duality, we give these two types of transformations two different names. You can see how one comes from the other; they are not intrinsic properties of the structure of a vector space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2501862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solution of $\cos(2x) - A\sin(2x) = 0?$ The physics exercises of today's lecture made me face the following equation: $$\cos(2x) - A\sin(2x) = 0$$ I was not able to solve for $x$. Do you know how to proceed? (Note: $A \approx -1/7$)
You can rearrange to get $$\tan(2x)=\frac{1}{A}\,.$$ The general solution is $$x=\frac{1}{2}\arctan\left(\frac{1}{A}\right)+\frac{n\pi}{2},$$ where $n$ is an integer.
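A quick numerical check of this solution (a sketch) with the stated value $A \approx -1/7$:

```python
from math import atan, cos, pi, sin

A = -1.0 / 7.0
for n in range(3):
    x = 0.5 * atan(1.0 / A) + n * pi / 2
    print(cos(2 * x) - A * sin(2 * x))  # expected: ~0 for every n
```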
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
help me verify my proof ($\lfloor x-1 \rfloor = \lfloor x \rfloor - 1$) prove the following statement: $\forall x \in \mathbb{R}, \lfloor x-1 \rfloor = \lfloor x \rfloor - 1$ suppose $x \in \mathbb{Z}$, then $\lfloor x-1 \rfloor = x-1 $ and $ \lfloor x \rfloor -1 = x-1 $ since the floor of any integer is itself. suppose $x \in \mathbb{R} $, then $\lfloor x-1 \rfloor$ will give an integer that is also given when taking $\lfloor x \rfloor -1$. e.g. $\lfloor 1.5-1 \rfloor = \lfloor .5\rfloor = 0 = \lfloor 1.5 \rfloor - 1$ I think I've almost got this proof correct, but something about it just doesn't seem quite right. Can someone please help me verify? Thanks
Good start. You have shown that for $x \in \mathbb{Z}$, $\lfloor x-1 \rfloor = x-1 $ and $ \lfloor x \rfloor -1 = x-1 $. For $x \in \mathbb{R} $, let $x=y+\delta$, where $y \in \mathbb{Z}$ and $0 \le \delta <1$. Then $\lfloor x-1 \rfloor = \lfloor y+ \delta -1 \rfloor = \lfloor y-1 + \delta \rfloor = y-1$. and $\lfloor x \rfloor-1 = \lfloor y+ \delta \rfloor -1= y-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Tricky induction proof: pile of stones split into n groups So I understand setting up this proof and the basis step just fine; however, it is the induction step where I am completely lost. I went and asked the math tutors at my school, and the tutor that spoke with me even had a tough time with this problem... it is supposed to be tricky, so any help would be appreciated, thank you! Suppose you begin with a pile of n stones (n ≥ 2) and split this pile into n piles of one stone each by successively splitting a pile of stones into two smaller piles. Each time you split a pile you multiply the number of stones in each of the two smaller piles you form, so that if these piles have p and q stones in them, respectively, you compute pq. Show that no matter how you split the piles (eventually into n piles of one stone each), the sum of the products computed at each step equals n(n − 1)/2. (Hint: use strong induction on n.) My attempt at this question made no sense, and I just couldn't make sense of the tutor's words in the time I had with her. Again, any help would be much appreciated!
I am sure your instructor will want an algebraic proof, so don't turn this in as your proof, but here's a 'Proof by Picture' for the inductive step: Explanation: The claim is that $n$ stones you will eventually end up with $\frac{n(n-1)}{2}$ stones, which is the sum of all numbers $1$ through $n-1$ ... which is the number of little squares in the figure above. Now, if you divide your pile into two piles of $k$ and $n-k$, you gain $k(n-k)$ points (the number of blue squares), plus whatever points you can get by dividing the pile with $k$ stones and the pile with $n-k$ stones. By inductive hypothesis, however, the $k$ stones will give you the sum of $1$ through $k-1$ (the white squares above the blue squares), and the $n-k$ stones will give you the sum of $1$ through $n-k-1$ (the white squares to the right of the blue squares), and so we see that indeed this all adds up to $\frac{n(n-1)}{2}$
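For completeness (not part of the original answer), the algebra the picture encodes is the identity $$k(n-k)+\frac{k(k-1)}{2}+\frac{(n-k)(n-k-1)}{2}=\frac{n(n-1)}{2},$$ which is exactly the inductive step: splitting a pile of $n$ stones into piles of $k$ and $n-k$ scores $k(n-k)$, and by the strong induction hypothesis the two smaller piles contribute $\frac{k(k-1)}{2}$ and $\frac{(n-k)(n-k-1)}{2}$.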
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How to find $x$ given $\log_{9}\left(\frac{1}{\sqrt3}\right) =x$ without a calculator? I was asked to find $x$ when: $$\log_{9}\left(\frac{1}{\sqrt3}\right) =x$$ Step two may resemble: $${3}^{2x}=\frac{1}{\sqrt3}$$ I was not allowed a calculator and was told that it was possible. I put it into my calculator and found out that $x$=-0.25 but how do you get that?
$\log_{9}\left(\frac{1}{\sqrt3}\right) = \log_{9}(3^{-\frac 12})=\log_{9}((\sqrt{9})^{-\frac 12})=\log_{9}((9^{\frac 12})^{-\frac 12}) = \log_{9}(9^{\frac 12\cdot(-\frac 12)}) =\log_9(9^{-\frac 14}) =-\frac 14$ ..... or ..... $\log_9(\frac 1{\sqrt 3}) =x$ So $9^x = \frac 1{\sqrt 3}= \frac 1{\sqrt{\sqrt 9}} = \frac 1{\sqrt[4]{9}} = 9^{-\frac 14}$ $x = -\frac 14$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 10, "answer_id": 8 }
joint pdf of two random variables A pair of random variables $(X, Y)$ is uniformly distributed in the quadrilateral region with vertices $(0,0),(a,0),(a,b),(2a,b)$, where $a,b$ are positive real numbers. What is the joint pdf $f(X,Y)$? Find the marginal probability density functions $f_X (x)$ and $f_Y (y)$. Find $E(X)$ and $E(Y)$. Find $E(XY)$. My understanding is the uniformly distributed pdf $=\frac1{\text{area}}=\frac1{ab}$. It seems that $X$ and $Y$ are independent R.V., because it seems the joint pdf can be factored as $\frac1a \frac1b$ if the pdf is correct. However, I then found the marginal pdfs $f_X=\frac2{a^2}; f_Y=\frac2{b}-\frac{2}{b^2}$, which shows $X, Y$ are not independent. If I cannot get the correct marginal pdfs, I cannot finish questions d and e. Could anyone help me out? Thank you!
Note that \begin{align}f_X(x) &= \int_{0}^b f_{X,Y}(x,y) \,dy \\ &= \begin{cases} \int_0^{\frac{b}{a}x} f_{X,Y}(x,y) \, dy & , 0 \leq x \leq a \\ \int_{\frac{b}{a}x-b}^b f_{X,Y}(x,y) \, dy &, a < x \leq 2a\end{cases}\end{align} Recheck your value for $f_Y$ as well; you should get a simpler expression. Notice that a pdf should integrate to $1$, a test which both of your proposed density functions fail.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How prove this $\sum_{k=2}^{n}\frac{1}{3^k-1}<\frac{1}{5}$ show that $$\sum_{k=2}^{n}\frac{1}{3^k-1}<\dfrac{1}{5}\tag1$$ I tried to use this well-known fact: if $a>b>0,c>0$, then we have $$\dfrac{b}{a}<\dfrac{b+c}{a+c}$$ so $$\dfrac{1}{3^k-1}<\dfrac{1+1}{3^k-1+1}=\dfrac{2}{3^k}$$ and hence $$\sum_{k=2}^{n}\dfrac{1}{3^k-1}<\sum_{k=2}^{n}\dfrac{2}{3^k}=\dfrac{1}{3}$$ but this is bigger than $\dfrac{1}{5}$, so how can inequality (1) be proved?
Since $3^k > 6$ for $k \geq 2$, $$\frac{1}{3^k-1}<\frac{1}{3^k-\frac{3^k}{6}}=\frac{6}{5}\cdot\frac{1}{3^k}$$ so $$\sum_{k=2}^{n}\dfrac{1}{3^k-1}<\frac{6}{5}\sum_{k=2}^{n}\dfrac{1}{3^k}<\frac{6}{5}\sum_{k=2}^\infty\dfrac{1}{3^k}=\dfrac{1}{5}$$ as required.
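A numerical sanity check of the bound (a sketch):

```python
# Partial sums of 1/(3^k - 1) for k >= 2 approach ~0.18215 < 1/5.
s = 0.0
for k in range(2, 60):
    s += 1.0 / (3**k - 1)
print(s, s < 0.2)  # expected: ~0.18215 True
```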
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Show that $T$ is an isomorphism if and only if $T$ is invertible Let $T : V → W$ be a linear transformation of vector spaces. We say that $T$ is invertible if and only if there exists a map $S : W → V$ such that $S \circ T = 1_V$ and $T \circ S = 1_W$. Show that $T$ is an isomorphism if and only if $T$ is invertible. My thoughts on the problem are as follows: I know we call $2$ vector spaces isomorphic if and only if there exist linear maps $α: V → W$ and $β: W → V$ such that $α \circ β = \text{Id}_W$ and $β \circ α = \text{Id}_V$. Thus if $T$ is invertible if and only if there exists a map $S : W → V$ such that $S \circ T = 1_V$ and $T \circ S = 1_W$, then it should suffice to say that $T$ is an isomorphism if and only if $T$ is invertible. Although I believe my logic is correct, I am sure this is not an acceptable proof. I was just looking for any insight on how I could possibly improve my proof so that it would become acceptable. Thanks for any help!
Let $V$ and $W$ be two vector spaces and let $T$ be a linear transformation. $T$ is said to be an isomorphism if $T$ is also a bijection. Bijectivity implies that there exists another linear transformation $S:W\to V$ such that $S$ and $T$ are each other's inverse. So essentially the statements that $T$ is an isomorphism and that $T$ is invertible are one and the same. Let me give an outline of the proof. Let $T$ be an isomorphism. To prove $T$ is invertible, we need to show that there exists a linear transformation $S:W\to V$ that maps each element of $W$ uniquely into $V$. Let $y\in W$. Since $T$ is onto, there exists an $x\in V$ such that $T(x) = y$. Uniqueness of $x$ is established by $T$ being one-to-one. So, $\forall y\in W, \exists$ unique $x \in V$, and hence we may define $$S:W\to V \quad \text{such that} \quad S(y) = x$$ where $T(x) = y$. Thus $T$ is invertible; checking the linearity of $S$ is simple. For the other direction, using the two linear maps that compose to give the identity, we can prove that $T$ is one-to-one and onto and therefore an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are corner points of feasible region candidates in solving linear programming problem? In a linear programming problem, when the goal is to optimize a linear combination of variables with some constraints, it is said that the corners of the feasible region (the polyhedron determined by the constraints) are candidates for the optimization problem. More description is here. It seems obvious that one of the corners should be the solution (as the simplex algorithm uses this fact). But is there any proof showing this?
Here is a picture-style argument that might help (the original answer illustrated each step with a figure). Suppose we are working with a system that has three constraints, so that our feasible space is the inside of a triangle. If we look at any point in the interior of that space, we see that we can improve (get a more extreme value of) our objective function by moving closer to one of the boundaries of the feasible space. Once we are on a boundary, we still have another degree (coordinate) of freedom, so we can do no worse by going to one of the extremes in that dimension (along the constraint we are currently on). Then we land on a corner point and can't go any farther; this must be the best we can do in this direction. We then reason that we only need to check the corners of our feasible space.
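The phenomenon is easy to observe numerically (a sketch assuming SciPy; the data form an arbitrary toy instance): the solver's optimum lands on a vertex of the feasible polygon.

```python
from scipy.optimize import linprog

# maximize x + 2y  subject to  x + y <= 4, x <= 3, y <= 2, x, y >= 0
c = [-1, -2]  # linprog minimizes, so negate the objective
A = [[1, 1], [1, 0], [0, 1]]
b = [4, 3, 2]
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x)  # expected: [2. 2.], the corner where x + y = 4 meets y = 2
```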
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
show that $\sum_{k=1}^{n}(1-a_{k})<\frac{2}{3}$ Let $a_{1}=\dfrac{1}{2}$, and such that $a_{n+1}=a_{n}-a_{n}\ln{a_{n}}$; show that $$\sum_{k=1}^{n}(1-a_{k})<\dfrac{2}{3}$$ My attempt: let $1-a_{n}=b_{n}$; then we have $$b_{n+1}=b_{n}+(1-b_{n})\ln{(1-b_{n})}<b^2_{n}<\cdots<(b_{1})^{2^{n}}=\dfrac{1}{2^{2^n}}$$ where we use $\ln{(1+x)}<x,x>-1$, so $$\sum_{k=1}^{n}(1-a_{k})<\sum_{k=1}^{n}\dfrac{1}{2^{2^{k-1}}}?$$ But $$\sum_{k=1}^{+\infty}\dfrac{1}{2^{2^{k-1}}}=0.816\cdots$$ is bigger than $\frac{2}{3}$, so this bound is too weak. How can the inequality be proved?
We can prove that $0< a_{n}< 1$ inductively by making use of the graph of $y= x\left ( 1- \ln x \right )$. Let $$b_{n}:= 1- a_{n},\qquad b_{1}= \frac{1}{2},\qquad b_{n+ 1}= b_{n}+ \left ( 1- b_{n} \right )\ln\left ( 1- b_{n} \right ).$$ Since $a_{n+ 1}- a_{n}= -a_{n}\ln a_{n}> 0$, we have $b_{n+ 1}< b_{n}$, so $0< b_{n}\le b_{1}= \frac{1}{2}$. By the inequality $$\ln\left ( 1- x \right )< -x,\quad x\in\left ( 0, 1 \right ),$$ for $n\ge 3$ we get $$b_{n}= b_{n- 1}+ \left ( 1- b_{n- 1} \right )\ln\left ( 1- b_{n- 1} \right )< b_{n- 1}- \left ( 1- b_{n- 1} \right )b_{n- 1}= b_{n- 1}^{2}.$$ Now $b_{2}= \frac{1- \ln 2}{2}< 0.15343$ and $b_{3}= b_{2}+ \left ( 1- b_{2} \right )\ln\left ( 1- b_{2} \right )< 0.01243$, while for $k\ge 4$ we have $b_{k}< b_{3}^{2^{k- 3}}$, so $$\sum_{k= 4}^{\infty}b_{k}< b_{3}^{2}+ b_{3}^{4}+ \cdots< 2b_{3}^{2}< 0.00031.$$ Hence $$\sum_{k= 1}^{n}b_{k}< \frac{1}{2}+ 0.15343+ 0.01243+ 0.00031< 0.66618< \frac{2}{3}.$$
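It is worth noting how tight the inequality is: a direct numerical evaluation (a sketch) gives $\sum_k(1-a_k)\approx 0.6659$, just below $\frac{2}{3}\approx 0.6667$.

```python
from math import log

a, total = 0.5, 0.0
for _ in range(60):           # the terms 1 - a_k decay doubly exponentially
    total += 1.0 - a
    a = a - a * log(a)        # a_{n+1} = a_n - a_n ln(a_n)
print(total, total < 2 / 3)   # expected: ~0.6659 True
```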
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Prove that: $1-\frac 12+\frac 13-\frac 14+...+\frac {1}{199}- \frac{1}{200}=\frac{1}{101}+\frac{1}{102}+...+\frac{1}{200}$ Prove that: $$1-\frac 12+\frac 13-\frac 14+...+\frac {1}{199}- \frac{1}{200}=\frac{1}{101}+\frac{1}{102}+...+\frac{1}{200}$$ I know only this method: $\frac {1}{1×2}+\frac {1}{2×3}+\frac {1}{3×4}+....=1-\frac {1}{2}+\frac {1}{2}-\frac {1}{3}+\frac {1}{3}-...$ But, unfortunately, I could not find a way to apply it here.
My method: $$\left\{ 1-\frac {1}{2}-\frac {1}{4}-...-\frac {1}{128} \right\}+\left\{ \frac {1}{3}-\frac {1}{6}- \frac{1}{12}-...- \frac{1}{192}\right\}+\left\{\frac {1}{5}-\frac{1}{10}-\frac{1}{20}-...- \frac{1}{160}\right\}+...+\left\{ \frac{1}{99}-\frac{1}{198}\right\}+\left\{ \frac{1}{101}+\frac{1}{103}+\frac{1}{105}+...+\frac{1}{199}\right\}=\frac{1}{101}+\frac{1}{102}+\frac{1}{103}+...+\frac{1}{200}$$ Each group in braces telescopes: for odd $m$, $$\frac1m - \frac1{2m} - \frac1{4m} - \cdots - \frac1{2^Jm} = \frac1{2^Jm},$$ where $2^Jm$ is the largest power-of-two multiple of $m$ not exceeding $200$ (so $100 < 2^Jm \le 200$). The groups together use each odd unit fraction once with a plus sign and each even unit fraction up to $\frac1{200}$ once with a minus sign, which is exactly the left-hand side; and together with the odd terms $\frac1{101}, \frac1{103}, \dots, \frac1{199}$, every denominator $101, 102, \dots, 200$ is produced exactly once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2502982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Linear Algebra Challenge Question involving Vector Spaces. Problem: Let $V$ be the space of continuously differentiable maps $f:\mathbb{R}\to\mathbb{R}$ and let $W$ be the subspace of those maps $f$ for which $f(0)=f^\prime(0)= 0.$ Let $Z$ be the subspace of $V$ consisting of maps $x\to ax+b$, with $a,b \in \mathbb{R}$. Prove that $V=W\oplus Z.$ In order to solve the problem, we have to show that for all $v\in V$ $$v=w+z,$$ where $w\in W$ and $z\in Z.$ I don't know why this must be true, however I can show that $W\cap Z=\{0\}$ since if $z(x)=ax+b$ and $z\in W$ then $z(0)=z^\prime(0)=0$ implies that $a=b=0.$ and so the intersection of the two subspaces must be the zero function denoted by $0.$ How do I proceed to show that every function can be represented as the sum of the functions $w\in W$ and $z\in Z$?
Hint Suppose there is a decomposition $V = W \oplus Z$, so that for any $v(x) \in C^1(\Bbb R)$ we have $$v(x) = w(x) + z(x)$$ for $w(x) \in W, z(x) \in Z$. Now, we want to express $w(x)$ and $z(x)$ in terms of $v(x)$, and the only information available is the definitions of $W, Z$. Since $W$ is characterized by the evaluation of functions (and their derivatives) at $x = 0$, this suggests evaluating our above expression at $x = 0$, giving $$v(0) = w(0) + z(0) .$$ Now, what does the definition of $W$ tell us about $w(0)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Airy's equation 2nd order ODE Suppose we have the equation $$ \frac{ d^2 y}{d z^2}= z\, y $$ for a function $y(z)$. We would like to find the first 6 coefficients of the series solution $y(z)=a_0+a_1z+a_2z^2+a_3z^3+...+a_5z^5$, given the boundary conditions $y=2$ and $\dfrac{dy}{dz}=1$ at $z=0$. I've arrived at the recurrence relation $$a_{k+3}=\frac{a_k}{(k+3)(k+2)} $$ Can anyone kindly help me with this? Would be really grateful.
You have a third-order recurrence relation for the coefficients. Since the recursion is third order, it needs three initial conditions. $a_0$ and $a_1$ come directly from $y(0),y'(0)$. (If you had different boundary conditions, the problem would turn out differently.) To find $a_2$, notice that putting the expansion into the equation gives you $2a_2 + 6a_3 z + \cdots = a_0 z + a_1 z^2 + \cdots$. Since the Frobenius expansion is supposed to work in a neighborhood of $z=0$, you can in particular send $z \to 0$ in this equation to deduce that $a_2$ must be $0$. Then you have all three initial conditions for the recurrence and you can iterate to solve it.
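Iterating the recurrence with $a_0=2$, $a_1=1$, $a_2=0$ (a sketch using exact fractions) produces the first six coefficients:

```python
from fractions import Fraction

a = [Fraction(2), Fraction(1), Fraction(0)]  # a0 = y(0), a1 = y'(0), a2 = 0
for k in range(3):
    a.append(a[k] / ((k + 3) * (k + 2)))     # a_{k+3} = a_k / ((k+3)(k+2))
print(a)  # expected: [2, 1, 0, 1/3, 1/12, 0], i.e. y = 2 + z + z^3/3 + z^4/12 + ...
```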
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding Laurent and Taylor series I know that we should use Laurent series to expand a function around a singularity and Taylor series otherwise. But there are a few aspects that I don't understand. Imagine $\sin\left(\frac{1}{z}\right)$. I know $\sin\left(\frac{1}{z}\right)$ has an essential singularity at $z=0$. I don't understand how to expand $\sin\left(\frac{1}{z}\right)$ in a Laurent series (I know how to expand $\sin(z)$ in a Taylor series). Basically I don't understand the difference between the formulas of the Laurent and Taylor series. Can someone give me some intuition? Also, how could I expand $\sin\left(\frac{1}{z}\right)$ in a Laurent series around $z=0$, and how can I tell that it is an essential singularity based on the expansion? Thanks!
$ \sin z = \displaystyle \sum_{n=0}^{\infty} \frac{ (-1)^n z^{2n+1}}{(2n+1)!}$ if $|z|< \infty$ Then, for $\displaystyle 0<|z| < \infty$, $ \sin{\frac{1}{z}} = \displaystyle \sum_{n=0}^{\infty} \frac{ (-1)^n}{z^{2n+1}(2n+1)!}$ And $0$ is an essential singularity because we have infinitely many nonzero terms $b_n$, where $b_n= \displaystyle \frac{1}{2\pi i} \int \frac{f(z)}{z^{-n+1}}\,dz$. Basically a point $z_0$ is an essential singularity if the Laurent series of $f(z)$ in $R_1<|z-z_0|<R_2$ has infinitely many nonzero terms $b_n$, where the Laurent series is $$\sum_{n=0}^{\infty} a_n(z-z_0)^n +\sum_{n=1}^{\infty} \frac{b_n}{(z-z_0)^n} $$ with $a_n=\frac{1}{2 \pi i} \int \frac{f(z)}{(z-z_0)^{n+1}}\,dz$ and $b_n= \frac{1}{2 \pi i}\int \frac{f(z)}{(z-z_0)^{-n+1}}\,dz$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Hermitian form induces an isomorphism between $V$ and $V^*$? Let $H(-,-):V \times V \rightarrow \mathbb{C}$ be an Hermitian form, linear in the first factor. My question is, how can one show that this induces an isomorphism from $V$ to its dual space $V^*$? Here's what I've tried so far: We must check that $H(-,v):V \rightarrow \mathbb{C}$ given by $H(-,v)(u)=H(u,v)$ is linear. But since $H$ is linear in the first factor, this is clear and so $H(-,v)$ takes values in $V^*=\{ \text{ linear maps } \, V \rightarrow \mathbb{C} \, \}$. Now consider $\Phi_H : V \rightarrow V^* $ given by $\Phi_H : v \mapsto H(-,v)$. We need that $\Phi_H$ is a vector space isomorphism. Checking the condition on vector addition is fine, but it's the scalar multiplication condition which seems to go wrong for me: \begin{equation} \Phi_H(cv) = H(-,cv) = \bar{c}H(-,v) \neq c\Phi_H(v) \end{equation} Any advice on where I'm going wrong would be much appreciated. Thank you!
There's nothing wrong in your derivation. Actually, what the Hermitian form $H$ induces is a conjugate-linear bijection, not a vector space isomorphism. However, in some sense we can say that $V$ and $V^*$ are "the same". In an infinite-dimensional Hilbert space $H$, we have an analogous one-to-one correspondence between $H$ and $H^*$, i.e. the famous Riesz Representation Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proofs on Convex Sets. Prove that the set $C:=\{ x : x+S_2 \subset S_1 \}$, with $S_1,S_2\subset \Bbb R^n$, is convex if $S_1$ is convex. I understand that a vector space is a convex set. So $S_1$ and $S_2$ are both convex sets. But I do not understand how to continue with the idea.
Suppose $x, y \in C$ and $0 \le t \le 1$. You want to show $tx + (1-t)y \in C$, i.e. $tx + (1-t)y + S_2 \subset S_1$. If $s_2 \in S_2$, $tx + (1-t) y + s_2 = t (x + s_2) + (1-t) (y + s_2)$ with $x + s_2 \in S_1$ and $y + s_2 \in S_1$. Since $S_1$ is convex, you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Difficulty with a "trivial" probability question: drawing balls from an urn --- is event probability affected by order in which balls are drawn? Say you have an urn with $2$ red balls, $2$ white balls, and $2$ black balls. Draw $3$ balls from this urn. What is the probability that all balls have a different colour? I am going to answer this question in 4 different ways, and arrive at 3 different answers: * *We need not need to care about the order in which the balls are drawn to answer our question. Thus, the sample space (the space of all possibilities) is: $\Omega_1 = \{\text{rrw}, \text{rrb}, \text{wwr}, \text{wwb}, \text{bbr}, \text{bbw}, \text{rbw}\}$. The event corresponding to the case where all balls have a different colour is $E_1 = \{\text{rbw}\}$. The probability of this event occurring, i.e. the relative size of this event, compared to the entire sample space is trivial to obtain: $E_1/\Omega_1 = 1/7$. *Say that we did care consider the order in which balls are drawn when determining the answer. Each ball is labeled with two indices: the first is the label of the ball within its own colour group, and the second is the draw order of the ball. So the event $\{\text{r}_{1,2}, \text{r}_{2, 2}, \text{w}_{2, 3}\}$ corresponds to the case where red ball $1$ is drawn first, followed by red ball $2$ second, followed by white ball $2$ third. Let us determine the size of $\Omega_2$ based on the elements of $\Omega_1$: each element in $\Omega_1$ with at least one off-colour ball corresponds to $12$ elements in $\Omega_2$ because we have $6$ choices for the order in which the $3$ balls are selected, and $2$ choices for which off-colour ball is selected. The last element in $\Omega_1$, $\text{rbw}$, corresponds to $8 \times 6 = 48$ elements in $\Omega_2$, since I have $2$ choices for each of the $3$ ball colours ($2 \times 2 \times 2 = 8$), and $6$ ways to order the selected balls. Note that $|E_2|$, the size of the event where all balls have a different colour, is thus $48$. So, $|\Omega_2| = 6 \times 12 + 6 \times 8 = 6 \times 20 = 120$, and $|E_2|/|\Omega_2| = 48/120 = 8/20 = 2/5$. *Say that we cared about the order in which the balls are drawn without distinguishing between balls of the same colour when determining the answer. Let the sample space this time be denoted $\Omega_3$. Note that in $\Omega_1$, there are 6 cases where there is at least one off-colour ball, and each one of these corresponds to $3$ different cases in $\Omega_3$, since there are 3 different ways we can place the off-colour ball. The element where all 3 ball colours are different in $\Omega_1$ corresponds $6$ elements in $\Omega_3$, because there are $6$ ways to order 3 distinct objects. Thus, $|E_3| = 6$, and $|\Omega_3| = 6\times 3 + 1\times 6 = 24$. In particular, $|E_3|/|\Omega_3| = 6/24 = 1/4$. *Let us calculate the probability that we draw 3 balls of different colour using a conditional probability approach. Draw the first ball. Now there are $5$ balls left in the urn, and $4/5$ of the balls are of a different colour than the first ball. Draw the second ball from this pool of $4$ balls. Now there are $4$ balls left, and $2$ are balls of a different colour from those already drawn, so there is a $2/4$ chance we draw a ball of yet another colour. Thus, the probability of drawing $3$ balls of different colours is $4/5 \times 2/4 = 2/5$. I am doing something wrong, but I am not sure what. In particular, why is answer 3 different from answer 4? Second, why are answers 1, 2 and 3 different? 
Have I simply not restated the same problem in 4 different ways?
There is yet another way to get the probability: label the two balls of each color differently (as in your method 2) so that each of the six balls is uniquely identified and you can distinguish exactly which three of them were selected, but do not consider the order in which the three balls were drawn. There are $\binom 63 = 20$ possible ways to choose three items out of six distinguishable items without regard to the order of choosing. Since all six balls are equally uniquely identified, by symmetry all $20$ possible outcomes are equally likely. Of those $20$ outcomes, the ones that have one ball of each color are the ones with either the first or second red ball, either the first or second white ball, and either the first or second black ball. There are $2\times 2\times 2 = 8$ of these combinations. Hence the probability is $8/20 = 2/5.$
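This count is easy to confirm by enumeration (a sketch; the ball labels are hypothetical):

```python
from itertools import combinations

balls = ['r1', 'r2', 'w1', 'w2', 'b1', 'b2']  # two balls of each colour
draws = list(combinations(balls, 3))          # all 20 unordered draws
favorable = [d for d in draws if len({ball[0] for ball in d}) == 3]
print(len(favorable), len(draws), len(favorable) / len(draws))  # 8 20 0.4
```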
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I prove $\begin{equation*} \lim_{n \rightarrow \infty} \frac{n}{\sqrt{n^2 + n+1}}=1 \end{equation*}$ How can I prove (using the sequence convergence definition): $$\begin{equation*} \lim_{n \rightarrow \infty} \frac{n}{\sqrt{n^2 + n+1}}=1 \end{equation*}$$ I need to cancel the $n$ in the numerator; any hint will be appreciated.
Note that$$\lim\limits_{x\to\pm\infty}\left(\frac 1{x}\right)^n=0$$With simple limits to infinity type problems, we can divide both the numerator and denominator by the greatest power and take the limit as each term tends towards infinity$$\begin{align*}\lim\limits_{x\to\infty}\frac x{\sqrt{1+x+x^2}} & =\lim\limits_{x\to\infty}\frac {x}{\sqrt{1+x+x^2}}\frac {\tfrac 1x}{\tfrac 1x}\\ & =\lim\limits_{x\to\infty}\frac 1{\sqrt{\frac 1{x^2}+\frac 1x+1}}\end{align*}$$The limit of the denominator is simple one, so therefore,$$\lim\limits_{x\to\infty}\frac x{\sqrt{1+x+x^2}}=\color{blue}{\lim\limits_{x\to\infty}\frac 1{\sqrt{\frac 1{x^2}+\frac 1x+1}}=1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Prove a property of the weight decomposition of representations of $\mathfrak{sl}(2,\mathbb{C})$ I am trying to prove the following property of the complete decomposition of representations of $\mathfrak{sl}(2,\mathbb{C})$: Show that if $V$ is a finite-dimensional representation of $\mathfrak{sl}(2,\mathbb{C})$, then $$V\cong \bigoplus n_k V_k$$ and $n_k=\dim V[k]-\dim V[k+2]$. The last part (about $n_k$ and the dimension of the eigenspace) seems like a very surprising result to me, since $V_k$ does not contain $V[k+2]$, and we can take any number of copies of $V_k$. So I am really confused here, since I think the two numbers on each side are totally unrelated. So I must be missing something very crucial. Please point it out. Thanks!
I assume that $V_k$ are the standard irreps of weight $k$. Then, consider multiplication by $x : V[k] \to V[k+2]$. Clearly, $x$ is surjective and moreover, $x(v) = 0$ iff $v$ is a highest weight vector of weight $k$. So $\dim \ker(x) = \dim V[k] - \dim V[k+2]$ is indeed $n_k$. Here is an intuitive explanation: there are two kinds of eigenvectors of weight $k$, those which span an irrep $V_k$ and those which come from irreps of bigger weight. For example, if $w \in V_{k+2}$, then $y(w) \in V_k$ is a vector of the "second kind". Now, by definition, the "second kind" vectors are exactly the vectors which are not killed when multiplied by $x$. So $\dim \ker x$ is the number you were looking for, and it is $\dim V[k] - \dim V[k+2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2503970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Every day a student solves one, two or three problems. Find the number of distinct ways Every day a student solves one, two or three problems. Find the number of distinct ways a) he solves problems in 30 days, b) he solves 50 problems, c) he solves 50 problems in 30 days. Answer. For part a) I guess: there are three options for the first day (1, 2 or 3), the same for the second day, and it goes on like that, so the answer is $3^{30}$. But for parts b) and c) I couldn't think of anything. Thanks for any help, guys.
Hint. As regards b), note that if for $d_i$ days he solves $i$ problems for $i=1,2,3$ then $d_1+2d_2+3d_3=50$, and this can be done in $$\frac{(d_1+d_2+d_3)!}{d_1!\cdot d_2!\cdot d_3!}$$ ways. For c) we have another constraint: $d_1+d_2+d_3=30$. P.S. Generating function approach. The answers for b) and c) are given respectively by (WA-link) $$[x^{50}]\sum_{k=0}^{\infty}(x+x^2+x^3)^{k}=[x^{50}]\frac{1}{1-(x+x^2+x^3)},$$ and (WA-link) $$[x^{50}]((x+x^2+x^3)^{30}).$$
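A small dynamic-programming cross-check of these coefficient counts (a sketch; the function name and interface are my own invention):

```python
from functools import lru_cache

def count_ways(total, days=None):
    """Ordered sequences of daily counts from {1,2,3} summing to `total`;
    if `days` is given, the sequence must have exactly that length."""
    @lru_cache(maxsize=None)
    def f(s, k):
        if s == 0:
            return 1 if days is None or k == days else 0
        if s < 0 or (days is not None and k >= days):
            return 0
        return sum(f(s - p, k + 1) for p in (1, 2, 3))
    return f(total, 0)

print(3**30)                 # part a)
print(count_ways(50))        # part b): the coefficient of x^50 above
print(count_ways(50, 30))    # part c)
```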
{ "language": "en", "url": "https://math.stackexchange.com/questions/2504139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Irreducible cubic curve in normal form I have been given a cubic curve $F : X^3 - XZ^2 + YZ^2 = 0$ in the projective plane. I have that a singular point of the curve is $[0,1,0]$. I am asked to find a change of variable which puts $F$ into 'normal form', i.e $Y^2Z - G(X,Z)$ where $G(X,Z) = X^3 + bX^2Z + cXZ^2 + dZ^3$ for $b,c,d \in \mathbb{C}$. I have been trying to get this for quite a while now and can't seem to get the change correct. I seem to be stuck with the two $Z^2$ terms in $F$. I have a method but that relies on the point being a flex, which clearly isn't the case here. My notes indicate that it is common to just go with "guess work"... Is there a better method I could be using or is this just down to practice (and if so does any one know any good resources or problem sources for this)? Many thanks!
There is an "algorithm" for transforming cubic curves to the (long) Weierstrass form, and from that to the short Weierstrass form, which is usually introduced when studying elliptic curves. You can find it in its various forms here or here. But that involves a lot of tedious work. Here is how I'd solve it manually: Usually it is helpful to derive the transformation step by step: First note that $X^3 -XZ^2+YZ^2 = X^3 - (X-Y)Z^2$. By looking at the degrees we can easily see that whatever variables in the linear combination that we replace $X$ with will appear as third degree terms, and similarly what we plug in $Z$ will appear as only second degree terms. This suggests that $Z$ will be our future $Y$, so let us apply $Z \mapsto Y$ and $Y \mapsto Z$. We get $$X^3 - (X-Z)Y^2$$ So now we want to get rid of the $X$ as a coefficient of $Y^2$ as the term with $Y^2$ should not include a factor $X$. We can do this by replacing $Z$ with $X+Z$ so this yields $$Y^2Z + X^3$$ Now we are almost there, we need to change the sign of the $X^3$ term which helpfully has and odd degree so we can just replace $X$ by $-X$ and get $$Y^2Z -X^3$$ The important thing we need to check is whether our compound linear transformation is actually invertible. If we write the substitution as a matrix vector equation $$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} * & * & * \\ * & * & * \\ * & * & * \end{bmatrix}\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}$$ this is equivalent to this matrix being invertible. If you recall from linear algebra: The modulus of the determinant does not change if we swap rows, or add the multiple of one one row to another. This corresponds to swapping variables and replacing one variable by itself plus a multiple of another. And this is all we did above so this is indeed an invertible transformation. Alternatively you can also go through each step and actually determine this transformation matrix. Lastly: If you want to play around with that without too much writing use some CAS, I used Maxima, and you can try my "program" online: Try it online!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2504361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Pigeonhole principle, for a chessboard problem. Assume we have a chessboard of 8 by 8 squares. Let's define the "prince" piece: the "prince" can move from any given square to any other square the Queen can move to, or to any other square the Knight (Horse) can move to. This means that if, from some square on the chessboard, the set of all squares the Queen can go to is Q = {$x_1,...,x_m$} and the set of all squares the Knight can go to is H = {$y_1,...,y_n$}, then the set of all squares the Prince can go to is P = Q $\cup$ H. Prove that for 7 "princes" on the board there must be 2 that threaten each other. I'm not sure if the statement is still correct for even 6 pieces, but I know that for fewer than 6 pieces it isn't.
HINT Note that a prince ends up attacking all of the 24 squares around it with itself being in the middle of a $5 \times 5$ square. So think of it how you can place such $5 \times 5$ squares on a board, noting of course that a prince placed along the side of the board threatens less than $24$ actual squares of the board (so it is still not an easy problem this way, but maybe a little easier nevertheless...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2504505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help solving variable separable ODE: $y' = \frac{1}{2} a y^2 + b y - 1$ with $y(0)=0$ I am studying for an exam about ODEs and I am struggling with one of the past exam questions. The past exam shows one exercise which asks us to solve: $$y' = \frac{1}{2} a y^2 + b y - 1$$ with $y(0)=0$. The solution is given as $$y(x) = \frac{2 \left( e^{\Gamma x} - 1 \right)}{(b + \Gamma)(e^{\Gamma x} - 1) + 2\Gamma},$$ with $\Gamma = \sqrt{b^2 + 2a}$ I am really getting stuck at this exercise and would love to have someone show me how this solution is derived. One thing I did find out is that this ODE is variable separable. That is, $$y' = g(x)h(y) = (1) \cdot \left(\frac{1}{2} ay^2 + by - 1\right),$$ and therefore the solution would result from solving $$\int \frac{1}{\frac{1}{2} ay^2 + by - 1} dy = \int dx + C,$$ where $C$ is clearly zero because $y(0) = 0$. I am now getting stuck at solving the left integral. Could anyone please show me the steps? UPDATE So I came quite far with @LutzL's solution; however, my answer seems to deviate slightly from the solution given above. These are the steps I performed (continuing from @LutzL's answer): You complete the square $\frac12ay^2+by-1=\frac12a(y+\frac ba)^2-1-\frac{b^2}{2a}$ and use this to inspire the change of coordinates $u=ay+b$ leading to $$ \int \frac{dy}{\frac12ay^2+by-1}=\int\frac{2\,du}{u^2-2a-b^2} $$ and for that your integral tables should give a form using the inverse hyperbolic tangent. Or you perform a partial fraction decomposition for $$ \frac{2Γ}{u^2-Γ^2}=-\frac{1}{u+Γ}+\frac{1}{u-Γ} $$ and find the corresponding logarithmic anti-derivatives, $$ \ln|u-Γ|-\ln|u+Γ|=Γx+c,\\ \frac{u-Γ}{u+Γ}=Ce^{Γx},\ C=\pm e^c $$ which you now can easily solve for $u$ and then $y$. Given that $y(0) = 0$ we have $u(y(0)) = u(0) = b$ and therefore the final equation becomes $$\frac{u(0)-Γ}{u(0)+Γ}= \frac{b-Γ}{b+Γ}=Ce^{Γ\cdot0} = C$$ Now by first isolating $u$ I get $$u - \Gamma = u C e^{\Gamma x} + \Gamma C e^{\Gamma x} \Rightarrow \\ u \left( 1 - C e^{ \Gamma x} \right) = \Gamma \left( 1 + C e^{\Gamma x} \right) \Rightarrow \\ u = \frac{\Gamma \left( 1 + C e^{\Gamma x} \right)}{\left( 1 - C e^{ \Gamma x} \right)}$$ Now substituting $u$ and $C$ gives $$ay + b= \frac{\Gamma \left( 1 + \frac{b-Γ}{b+Γ} e^{\Gamma x} \right)}{\left( 1 - \frac{b-Γ}{b+Γ} e^{ \Gamma x} \right)} \Rightarrow \\ y = \frac{\Gamma \left( 1 + \frac{b-Γ}{b+Γ} e^{\Gamma x} \right) - b \left( 1 - \frac{b-Γ}{b+Γ} e^{ \Gamma x} \right)}{a \left( 1 - \frac{b-Γ}{b+Γ} e^{ \Gamma x} \right)}$$ Now using the fact that $\Gamma = \sqrt{b^2 + 2a} \Rightarrow a = \frac{(\Gamma + b)(\Gamma - b)}{2}$ we get that $$y = \frac{2 \left( \Gamma \left( 1 + \frac{b-\Gamma }{b+\Gamma } e^{\Gamma x} \right) - b \left( 1 - \frac{b-\Gamma }{b+\Gamma } e^{ \Gamma x} \right) \right)}{(\Gamma + b)(\Gamma - b) \left( 1 - \frac{b-\Gamma }{b+\Gamma } e^{ \Gamma x} \right)} \\ = \frac{2 \left( (\Gamma - b) + (\Gamma + b) \frac{b-\Gamma }{b+ \Gamma } e^{\Gamma x} \right)}{(\Gamma + b)(\Gamma - b) \left( 1 - \frac{b-\Gamma }{b+\Gamma } e^{ \Gamma x} \right)} \\ = \frac{2 \left( (\Gamma - b) - (\Gamma + b) \frac{\Gamma - b }{b+ \Gamma } e^{\Gamma x} \right)}{(\Gamma + b)(\Gamma - b) \left( 1 - \frac{b-\Gamma }{b+\Gamma } e^{ \Gamma x} \right)}$$ Now cancelling the terms $(b + \Gamma)$ and $(b - \Gamma)$ wherever possible and multiplying denominator and numerator by $-1$ gives $$y = \frac{2 \left( e^{\Gamma x} - 1 \right)}{(\Gamma + b) \left( \frac{b-\Gamma }{b+\Gamma } e^{ \Gamma x} - 1 \right)} $$ So clearly, I got the numerator right, but I cannot seem to get the denominator to equal $(b + \Gamma)(e^{\Gamma x} - 1) + 2\Gamma$. Can someone rescue me and show me what I did wrong? Maybe it helps if I say that $x$ is always positive?
You complete the square $\frac12ay^2+by-1=\frac12a(y+\frac ba)^2-1-\frac{b^2}{2a}$ and use this to inspire the change of coordinates $u=ay+b$ leading to $$ \int \frac{dy}{\frac12ay^2+by-1}=\int\frac{2\,du}{u^2-2a-b^2} $$ and for that your integral tables should give a form using the inverse hyperbolic tangent. Or you perform a partial fraction decomposition for $$ \frac{2Γ}{u^2-Γ^2}=-\frac{1}{u+Γ}+\frac{1}{u-Γ} $$ and find the corresponding logarithmic anti-derivatives, $$ \ln|u-Γ|-\ln|u+Γ|=Γx+c,\\ \frac{u-Γ}{u+Γ}=Ce^{Γx},\ C=\pm e^c $$ which you now can easily solve for $u$ and then $y$.
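A numerical sanity check of the implicit solution $(u-\Gamma)/(u+\Gamma)=Ce^{\Gamma x}$ for one (arbitrary) choice of parameters — a sketch, not part of the original answer:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 2.0, 1.0
G = np.sqrt(b**2 + 2*a)                        # Gamma

sol = solve_ivp(lambda x, y: 0.5*a*y**2 + b*y - 1,
                (0.0, 2.0), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(0.0, 2.0, 9)
u = a*sol.sol(xs)[0] + b                       # u = a*y + b
print((u - G) / ((u + G)*np.exp(G*xs)))        # constant along the solution
print((b - G) / (b + G))                       # = C fixed by y(0) = 0
```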
{ "language": "en", "url": "https://math.stackexchange.com/questions/2504760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why does a partition of unity have the finite intersection property of compact sets? I have a question regarding partitions of unity. I use the same definition and notation as 2. here (Wikipedia). Let $K \subseteq X$ be a compact set. Why do we have that $K \cap \text{supp}\rho_j \neq \varnothing$ for only finitely many $j \in J$? (we have that $\text{supp}\rho_j \subseteq U_i$ as in Wikipedia)
For each $x\in K$, there exists an open neighbourhood $U_x$ of $x$ where all but finitely many of the functions are $=0$. By compactness, $K$ is covered by finitely many of the $U_x$. All but the finitely many functions that are non-zero in at least one of these finitely many $U_x$ are $=0$ on all of $K$. This does not yet make $\operatorname{supp}(\rho_j)\cap K=\emptyset$, but if we can shrink $U_x$ (replace it with open $V_x$ such that $\overline{V_x}\subseteq U_x$) then all is fine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2504906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Clean way to expand product $\prod_{k=1}^n (1 + x_k)$ Is there a clean way to write the expansion of: $$\prod_{k=1}^n (1 + x_k) = (1 + x_1)(1 + x_2)\dots(1 + x_n)$$ The expansion may be written: $$1+\sum_{1\le i \le n} x_i +\sum_{i \le n}\sum_{j\lt i} x_j x_i + \sum_{i \le n}\sum_{j\lt i}\sum_{k\lt j} x_kx_jx_i+\cdots+ x_1x_2\cdots x_n$$ But it would be nice if there were a more compact way of writing this. I tried making use of the Levi-Civita tensor over an indexed set: $$1 + \sum_{k=1}^n \sum_{(i_j)_{j \in \{1 \dots k\}} \in \{1 \dots n\}} \frac{1}{2} |\epsilon_{i_1 \dots i_k}| \prod_{j=1}^k x_{i_j}$$ But that seems a little messy.
As explained in Richard Stanley's book we have: $$\prod_{k=1}^n (1 + x_k) = (1 + x_1)(1 + x_2)\dots(1 + x_n) =\sum_{A\subseteq [n]}\prod_{i\in A}x_i$$ where $\displaystyle{\prod_{i\in\varnothing}x_{i}:=1}$ as Ethan remarks. More generally, you can even consider the multiset case: $$\prod_{k=1}^n (1 + x_k+x_k^{2}+\cdots) = (1 + x_1+x_1^{2}+\cdots)(1 + x_2+x_2^{2}+\cdots)\dots(1 + x_n+x_n^{2}+\cdots)$$ $$=\sum_{(M,\,\mu)\subseteq [n]}\prod_{i\in (M,\mu)}x_i^{\mu(i)}$$ where $(M,\mu)$ is a multi-subset of $[n]=\{1,2,\ldots,n\}.$
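The subset identity is easy to confirm symbolically for a small $n$ (a sketch; any CAS would do):

```python
import sympy as sp
from itertools import combinations

n = 4
x = sp.symbols(f"x1:{n+1}")

lhs = sp.expand(sp.prod(1 + xi for xi in x))
rhs = sum(sp.prod(xs) for k in range(n + 1)
          for xs in combinations(x, k))        # the empty product is 1

print(sp.simplify(lhs - rhs))                  # 0
```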
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $\gamma=[-1, 1+i]+\beta+[-1+i,1]$, where $\beta(t)=i+e^{it}$ for $0\leq t\leq 3\pi.$ Compute: $\int_{\gamma}(z^2+1)^{-1}dz$ Let $\gamma=[-1, 1+i]+\beta+[-1+i,1]$, where $\beta(t)=i+e^{it}$ for $0\leq t\leq 3\pi.$ Compute: $\int_{\gamma}(z^2+1)^{-1}dz$ I have come to the following but I do not know what else to do: $\int_{\gamma}\frac{1}{z^2+1}dz=\int_{[-1,1+i]}\frac{1}{2i(z-i)}dz+\int_{\beta}\frac{1}{2i(z-i)}dz+\int_{[-1+i, 1]}\frac{1}{2i(z-i)}dz-(\int_{[-1,1+i]}\frac{1}{2i(z+i)}dz+\int_{\beta}\frac{1}{2i(z+i)}dz+\int_{[-1+i, 1]}\frac{1}{2i(z+i)}dz)$ Could anyone help me, please? Thank you very much.
Let's make this problem easier by deforming your contour $\gamma$ into $\gamma^*$ where $\gamma^*=[-1,0]+\beta^*+[0,1]$ and $\beta^*=i+e^{it}$ for $-\frac{\pi}{2}\leq t \leq \frac{7\pi}{2}$ (draw a picture!). We can do this because our integrand is analytic everywhere besides $z=\pm i$. Now we have \begin{align} \int_\gamma\frac{dz}{1+z^2}&=\int_{\gamma^*}\frac{dz}{1+z^2} \\ &=\int_{-1}^1\frac{dz}{1+z^2}+\int_{\beta^*}\frac{dz}{1+z^2} \end{align} Our first integral above can be evaluated via the fundamental theorem of calculus. The second integral above can be evaluated via the residue theorem (https://en.wikipedia.org/wiki/Residue_theorem); the factor $2$ below appears because $\beta^*$ winds twice around $i$. \begin{align} \int_\gamma\frac{dz}{1+z^2}&=\int_{-1}^1\frac{dz}{1+z^2}+\int_{\beta^*}\frac{dz}{1+z^2} \\ &=\tan^{-1}(z)\bigg|_{-1}^1+2\cdot2\pi i\cdot\text{Res}\left((1+z^2)^{-1},i\right) \\ &=\tan^{-1}(1)-\tan^{-1}(-1)+4\pi i\cdot\text{Res}\left(\frac{1}{2i(z-i)}-\frac{1}{2i(z+i)},i\right) \\ &=\frac{\pi}{4}-\left(-\frac{\pi}{4}\right)+4\pi i\cdot\frac{1}{2i} \\ &=\frac{\pi}{2}+2\pi \\ &=\frac{5\pi}{2} \end{align}
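A direct numerical evaluation along the original contour supports this value (a sketch; the parametrizations follow the problem statement):

```python
import numpy as np

f = lambda z: 1/(1 + z**2)

def seg(z0, z1, n=200_001):                    # straight segment z0 -> z1
    t = np.linspace(0, 1, n)
    z = z0 + t*(z1 - z0)
    return np.trapz(f(z)*(z1 - z0), t)

t = np.linspace(0, 3*np.pi, 600_001)           # beta(t) = i + e^{it}
beta = np.trapz(f(1j + np.exp(1j*t))*1j*np.exp(1j*t), t)

total = seg(-1, 1 + 1j) + beta + seg(-1 + 1j, 1)
print(total, 5*np.pi/2)                         # both approximately 7.853981...
```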
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the largest interval for solution If $y(x)$ is the solution of the differential equation $\frac {dy}{dx}=2(1+y)(\sqrt y)$ satisfying $y(0)=0,y(\frac {\pi}{2})=1 $ then the largest interval (to the right of the origin) on which the solution exists is (a)$[0,\frac {3\pi}{4}]$ (b)$[0,\pi)$ (c)$[0,2\pi)$ (d) $[0,\frac {2\pi}{3}]$ This question was asked in the GATE Mathematics exam. My try: I solved the given differential equation and the solution is $$\tan^{-1}(\sqrt y)=x+c$$ Can anyone give me a hint on this question?
The curve $$y=\tan^2 x\qquad(x\geq0)\tag{1}$$ and all its horizontal translates are solution curves of this ODE. To this family of curves its envelope $y(x)\equiv0$ has to be added, since it is a solution as well. But this is not all: Since the ODE does not satisfy a crucial assumption of the existence and uniqueness theorem in the points $(x,0)$ we have to accept that for such initial points there are several solutions. Intuitively: We may choose an arbitrary $p\in{\mathbb R}$, then follow the envelope as long as $x\leq p$, and for $x\geq p$ go along a translate of the curve $(1)$. Now you want the solution passing through the point $\bigl({\pi\over2},1\bigr)$. It so happens that the following "splined" function does the trick: $$y(x)=\left\{\eqalign{0\qquad\qquad&\bigl(x\leq{\pi\over4}\bigr)\cr \tan^2\bigl(x-{\pi\over4}\bigr)\qquad&\bigl({\pi\over4}\leq x<{3\pi\over4}\bigr)\ .\cr}\right.$$ This solution lives in the $x$-interval $\ \bigl]-\infty,{3\pi\over4}\bigr[\ $. It follows that answer (a) comes nearest.
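One can confirm numerically that the splined branch satisfies the ODE on $(\pi/4, 3\pi/4)$ (a quick sketch):

```python
import numpy as np

x = np.linspace(np.pi/4 + 0.05, 3*np.pi/4 - 0.05, 9)
u = x - np.pi/4
y = np.tan(u)**2
dy = 2*np.tan(u)/np.cos(u)**2                  # exact derivative of tan^2(u)

print(np.max(np.abs(dy - 2*(1 + y)*np.sqrt(y))))   # ~0 up to rounding error
```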
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Asymptotic solutions to ODE Wondering where I can find resources to do more questions of the following type and also if you guys can help me answer this problem. Consider the differential equation: $$u'' + \left( 1-\frac{\gamma}{x^2} \right) u = 0$$ for $x > 4$. Obtain the first two terms of the asymptotic solution for each of the two real solutions of this equation. We start by writing out $u = u_0+u_1$ and consider the zeroth order solution which is $$u_0''+u_0=0$$ This has solution $u_0=e^{ix}$. Basically how do I proceed from here? Also where can I find more problems requiring this method of solution. Thanks!
An expansion of $u $ derived from the perturbation theory would be $u=u_0 +\gamma u_1 + \dots$ The $0$th-order and $1$st-order equations (in terms of powers of $\gamma$) are respectively \begin{aligned} u''_0 + u_0 &= 0 \, ,\\ u''_1 + u_1 &= \frac{u_0}{x^2} \, . \end{aligned} The $0$th-order solution is $$u_0(x) = a_0\cos x + b_0\sin x \, ,$$ whereas there is more work to find the $1$st-order solution. Indeed, $$u_1(x) = a_1\cos x + b_1\sin x + u_p(x)\, ,$$ where $u_p$ is a particular solution to the non-homogeneous ODE satisfied by $u_1$. The initial conditions $u(4) = U$, $u'(4) = V$ are first applied on $u_0$, $u'_0$: \begin{aligned} a_0\cos 4 + b_0\sin 4 &= U \, ,\\ b_0\cos 4 - a_0\sin 4 &= V \, , \end{aligned} which yields $a_0 = U\cos 4 - V\sin 4$ and $b_0 = U\sin 4 + V\cos 4$. Then, they are applied on $u_0 +\gamma u_1$, $u'_0 + \gamma u'_1$, etc. There is much more to read about such methods in the book Perturbation Methods by A.H. Nayfeh (Wiley, 2008). This linear ODE may be solved analytically. The general solution involves Bessel functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can we prove that $1/\sqrt2\in$ Cantor set on $[0,1]$? Today, I was trying to prove the Cantor set is uncountable and I completed it just a while ago. So, I know that the end-points of each $A_n$ are elements of $C$ and those end-points are rational numbers. But since $C$ is uncountable, $C$ must contain uncountably many irrational numbers. Then, is there a way to prove whether a specific irrational number (say $1/\sqrt2$ or $1/4\pi$) belongs to the set $C$ or not? (Description of notation can be found in the link given above or here) Let's say: Prove or disprove that $1/\sqrt2\in$ Cantor set on $[0,1]$. Can we do that? Or is there a way to solve such a problem?
Any number in $[0, 1]$ which has (at least one) base $3$ expansion without a $1$ will be in the Cantor set. More precisely, $$\mathcal{C} = \left\{ \sum_{n=1}^{\infty} \frac{c_n}{3^n} : (\forall n)( c_n \in \{ 0, 2 \} ) \right\}.$$ It is known that a number as above is irrational if and only if the expansion is non-recurring. So for example $0{.}202202220222202222202\ldots$ will be an irrational number in the Cantor set. But you seem to be looking for an irrational number expressed by radicals, for example $\frac{\sqrt{7-\sqrt{3}}}{4}$. This appears to be difficult, since there is no obvious connection between an expression of an irrational number by radicals and its base $3$ expansion. As others have indicated, $\frac{1}{\sqrt{2}} \notin \mathcal{C}$. This is quite understandable, because it's like a random guess, and since the Cantor set has measure $0$, it's difficult to hit the Cantor set with such a guess.
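For $1/\sqrt2$ specifically, the base-$3$ digits can be computed exactly with integer arithmetic: the $k$-th digit is $\lfloor 3^k/\sqrt2\rfloor \bmod 3$, and one can check that $\lfloor 3^k/\sqrt2\rfloor = \lfloor\sqrt{9^k/2}\rfloor = \operatorname{isqrt}(\lfloor 9^k/2\rfloor)$. A digit $1$ already shows up in the third place, and since an irrational number has a unique base-$3$ expansion, this confirms $1/\sqrt2\notin\mathcal C$ (a sketch):

```python
from math import isqrt

digits = [isqrt(9**k // 2) % 3 for k in range(1, 21)]
print(digits)   # [2, 0, 1, ...] -- the 1 in position 3 rules 1/sqrt(2) out
```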
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proving that $s$ is a reflexive closure of $r$ I have this problem that I cannot figure out. Could you please help me out with this? Let $r$ be a relation on the set $X$ and let $$R:=\{t:t \text{ is a reflexive relation on } X \text{ with } r\subseteq t \}.$$ For $x,y \in X$ we have $x \, s \, y$ iff for each $t\in R$ we have $x \, t \, y$. Show that $s$ is the reflexive closure of $r$. My idea so far was as follows. Suppose $x,y \in X$ and $x \ s \ y$. It follows that for all $t\in R$ we have $x \ t \ y$. Since $r\subseteq t$ for all $t\in R$, it follows $r \subseteq s$ and also $s \subseteq t$. Since each $t\in R$ is reflexive, $s$ is also reflexive. Could you please verify my idea? I'm sure it has some mistakes.
Yeah, this has some problems. Since $r\subseteq t$ for all $t\in R$, it follows $r \subseteq s$ and also $s \subseteq t$. How does it follow that $r \subseteq s$? That's not clear. Fortunately, to show that $s$ is reflexive we don't need it ... yes, we need that $s$ is reflexive, but given its definition that's the easy part. The harder part is that at some point you need to show that $s$ is the closure of $r$, i.e. that $s$ is the smallest reflexive relation for which $r \subseteq s$. That part you haven't addressed at all yet.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2505855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How many functions/bijections X → X exist, if the set X has 4 elements? So I believe the number of functions is $4^4$. But I can't seem to find the right explanation to prove it. My idea was that $a\mapsto a,b,c,d$ $b \mapsto a,b,c,d$ ... ... are the possible ways the elements are related, ... and if you change/exclude the different elements you get to $4^4$. But I know that this is not a real proof and I can't seem to find one. And the number of bijections is $4!$, because the definition of a bijection is that one element can't be related to a different number of elements than exactly one element. Would you have suggestions on how to do that?
Suppose $X = \{a,b,c,d\}$. Consider a bijection $f\colon X \rightarrow X$. You are looking for all the possible combinations of $f(a),f(b),f(c),f(d)$. Start with $f(a)$. You have $4$ possibilities for it, i.e. $a,b,c$ or $d$. Now $f(a)$ is fixed, so $f(b)$ will have to be different from $f(a)$ in order for $f$ to be injective. Then $f(b)$ can be chosen in $3$ ways. Similarly, $f(c)$ can be chosen in $2$ ways and $f(d)$ is then forced to be the only remaining element in $X$. To sum up, you got $4$ possibilities for $f(a)$, and for each of them $3$ possibilities for $f(b)$, then again $2$ for $f(c)$ and $1$ for $f(d)$. Hence $4 \cdot 3 \cdot 2 \cdot 1 = 4!$ possibilities.
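Both counts are small enough to confirm by brute force (a minimal sketch):

```python
from itertools import product, permutations

X = ["a", "b", "c", "d"]

n_functions = sum(1 for f in product(X, repeat=len(X)))   # one image per element
n_bijections = sum(1 for f in permutations(X))

print(n_functions, n_bijections)   # 256 24, i.e. 4^4 and 4!
```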
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Book recommendation about mathematics I'm looking for a book about mathematics, but not about calculus, algebra or any field of math. I want to read a book about the use of mathematics in the world, for example, why math is so useful. Or maybe something about the point of view of a mathematician in the world, or even a biography of someone math-related. Any book related to math (but not with the theorems/definitions/integrals/etc.) I will take into consideration. Thanks
A good idea-since you are after all a Maths student-would be "Mathematics Made Difficult" by Carl E. Linderholm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 0 }
$\varepsilon$-$\delta$ definition of a limit [Calculus] $$\lim_{x\to 2} x^2 - 8x + 8= -4 $$ Now to express $f(x) - l$ in terms of $x - x_0$, we write $x = (x - x_0) + x_0$. So \begin{align} f(x) - l &= x^2 - 8x + 8 + 4\\ &= (x - 2 + 2)^2 - 8(x - 2 + 2)+ 12\\ &= (x - 2)^2 +4(x-2) + 4 - 8(x - 2) - 16+ 12 \\ &= (x - 2)^2 - 4(x-2)\\ &= |f(x) - l|\\ &\le |x - 2|^2 -4|x-2| \end{align} We choose $\delta$ such that $\delta^2 -4\delta \le \varepsilon$. For $\delta \le 1$, $$\delta -4\delta \le \varepsilon$$ $$\delta \le -\frac\varepsilon3$$ Is my $\delta$ correct? Could someone guide me through this? Only then, I guess, can I verify this limit by the $\varepsilon$-$\delta$ definition. Thanks.
No, it is not correct. If you choose $\delta\leqslant-\frac\varepsilon3$, then $\delta<0$. And it is part of the $\varepsilon-\delta$ definition of limit that $\delta>0$. Note that the inequality $\bigl|f(x)-l\bigr|\leqslant|x-2|^2-4|x-2|$ is false, since it leads to situations in which $\bigl|f(x)-l\bigr|<0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show $\int_0^{2\pi}\cos(n\phi')\cos^l(\phi-\phi')\mathrm{d}\phi'=\frac{2\pi}{2^l}\cos(l\phi)\delta_{l,n}$ I have to show $$\int_0^{2\pi}\cos(n\phi')\cos^l(\phi-\phi')\mathrm{d}\phi'=\frac{2\pi}{2^l}\cos(l\phi)\delta_{l,n}$$ where $l,n$ are positive integers such that $l\leq n$. I'm supposed to use the fact $$ \int_0^{2\pi}e^{i(l-n)\phi}\mathrm{d}\phi=2\pi\delta_{l,n}$$ But I'm really lost. I tried to rewrite the cosines as the real part of $e^{in\phi}$ et cetera and to expand the power with the binomial theorem, but it didn't work. I don't see a useful way to use that identity. Any input will be appreciated.
Use $\cos x=(e^{ix}+e^{-ix})/2$ to rewrite $$ \int_0^{2\pi}\cos(n\phi')\cos^l(\phi-\phi')\mathrm{d}\phi'=\frac{1}{2^{l+1}}\int_0^{2\pi}(e^{in\phi'}+e^{-in\phi'})(e^{i(\phi-\phi')}+e^{-i(\phi-\phi')})^l d\phi'\ . $$ Then use the binomial theorem to rewrite $$ \frac{1}{2^{l+1}}\sum_{k=0}^l {l\choose k}\int_0^{2\pi}(e^{in\phi'}+e^{-in\phi'})e^{ik(\phi-\phi')}e^{-i(l-k)(\phi-\phi')} d\phi'=\frac{1}{2^{l+1}}\sum_{k=0}^l {l\choose k} e^{i\phi(2k-l)}2\pi(\delta_{n-k+(l-k),0}+\delta_{-n-k+(l-k),0})=\frac{2\pi}{2^{l+1}}\left[{l\choose (l+n)/2}e^{in\phi}+{l\choose (l-n)/2}e^{-in\phi}\right]\ , $$ from which you should be able to complete.
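A numerical spot check of the identity for a few pairs $(l,n)$ with $l\le n$ (a sketch; $\phi$ is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

phi = 0.7
for l, n in [(3, 3), (4, 4), (2, 5), (3, 7)]:
    val, _ = quad(lambda p: np.cos(n*p)*np.cos(phi - p)**l, 0, 2*np.pi, limit=200)
    target = 2*np.pi/2**l*np.cos(l*phi) if l == n else 0.0
    print(l, n, round(val, 10), round(target, 10))
```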
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The limit of $f_n = \frac {e^x \sin(x) \sin(2x) \cdots \sin(nx)}{\sqrt n}$ I want to calculate a limit of $f_n = \frac {e^x \sin(x) \sin(2x) \cdots \sin(nx)}{\sqrt n}$ if $n$ goes to an infinity. I was wondering if I can use this: $ \lim_{n\to \infty} \frac {\sin(nx)}{n} = 0$, because I have doubts. I know that $$ \lim_{x\to x_0} f(x)g(x) = \lim_{x\to x_0}g(x)\lim_{x \to x_0} f(x)$$ (or I suppose), but what if the limit of $f(x)$ or $g(x)$ is zero My attempt of a solution: $$ \lim_{n\to \infty} \frac {e^x \sin(x) \sin(2x) ... \sin(nx)}{\sqrt n} = \lim_{n\to \infty} \frac {\sqrt ne^x \sin(x) \sin(2x) ... \sin(nx)}{ n}\\ = \lim_{n\to \infty} {\sqrt ne^x \sin(x) \sin(2x)} \frac {\sin(nx)}{n} = \lim_{n\to \infty} {\sqrt ne^x \sin(x) \sin(2x)} * 0 = 0$$ And the range for the convergence is $\infty$ , right?
The pointwise convergence towards zero is trivial, we may actually prove we have uniform convergence over any compact subset of $\mathbb{R}$. Let $$ g_n(x) = \sin(x)\sin(2x)\cdots \sin(nx). $$ The supremum of $g_n$ over $\mathbb{R}$ is attained at a point of the interval $\left(0,\frac{\pi}{n}\right)$. Due to the approximation $\sin(x)\leq x e^{-x^2/6}$ over the interval $(0,\pi)$ we have $$ \sup g_n(x) \leq \sup n! x^n \exp\left(-\frac{n(n+1)(2n+1)}{36}x^2\right)= n!\left(\frac{18}{e(n+1)(2n+1)}\right)^{n/2}$$ and $$ \sup g_n(x) \leq \frac{6}{5}\sqrt{n}\,e^{-\frac{2}{5}n}. $$ It follows that on the interval $\left[-\frac{n}{5},\frac{n}{5}\right]$ the absolute value of $f_n(x)$ is bounded by $\frac{6}{5}e^{-n/5}$. This bound implies U.C. towards zero on any compact subset of $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Compute $m$ so that the polynomial $R(x)$ has a root at $x=-3$ Apologies upfront, this is quite an elementary question, but I'm stuck here. I'm asked to figure out the value of $m$ so that this polynomial: $$R(x)=x^2-mx+3$$ has a root at $x=-3$. Following the definition of the root of a polynomial, $x=-3$ is a root if and only if $R(-3)=0$, which yields $\boxed{m=-12}$. But on the other hand, following the Factor Theorem, if $x=-3$ is a root, then the polynomial $S(x)=(x+3)$ needs to be a factor of $R(x)$. Therefore, the remainder of $R(x)/S(x)$ needs to be $0$. Applying Ruffini's rule with $x=-3$ I get the condition: $$3+3(3+m)=0$$ which is satisfied only in the case $\boxed{m=-4}$ So I'm getting 2 different values for $m$, which is driving me crazy. I know this is a very basic question but I'm still wondering where I went wrong... Thanks
It looks like you incorrectly performed your original calculation using the definition of a root of a polynomial. The calculation should look like this: $$(-3)^2 - m(-3) + 3=0$$ $$9 + 3m + 3 = 0$$ $$12 + 3m = 0$$ $$12 = -3m$$ $$m=-4$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Verify my proof (Prove that for all sets A, B, C, $(A \cup B)-C = (A-C) \cup (B-C)$) Prove that for all sets A, B, C, $(A \cup B)-C = (A-C) \cup (B-C)$ suppose $x \in A,B,C$ if $x\in A$ or $B$ or both, then by definition of union, $x\in A\cup B$. Since $x\in (A \cup B) $ and $x\in C$, by definition of subtraction, $x\notin (A \cup B)-C$ Since $x \in A$ and $x \in C$, by definition of subtraction, $x \notin (A-C)$; the same is true if $x \in B$, then $x \notin (B-C)$. Now if $x \notin (A-C)$ and $x \notin (B-C)$ then $x \notin (A-C) \cup (B-C)$ and $x \notin (A \cup B) -C$ thus $(A \cup B)-C = (A-C)\cup (B-C)$ Verify my proof? - thanks
After rethinking my proof I came up with: suppose $x \in (A \cup B) -C$. If $x \in (A \cup B)-C$ then $x \notin C$, and thus $x \in A$ and/or $x\in B$. Since $x \notin C$ and $x\in (A \cup B)-C$, we get $x \in (A-C)$ and/or $x \in (B-C)$. Thus if $x \in (A-C)$ and/or $x \in (B-C)$, then $x \in (A-C) \cup (B-C)$ by the definition of union; the same result occurs if $x$ is in only one of $A$ and $B$. Hence, $(A\cup B) - C = (A-C)\cup (B-C)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2506790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Divergence of reciprocal of primes, Euler On the Wikipedia article (linked) there is currently: \begin{align} \ln \left( \sum_{n=1}^\infty \frac{1}{n}\right) & {} = \ln\left( \prod_p \frac{1}{1-p^{-1}}\right) = -\sum_p \ln \left( 1-\frac{1}{p}\right) \\ & {} = \sum_p \left( \frac{1}{p} + \frac{1}{2p^2} + \frac{1}{3p^3} + \cdots \right) \\ & {} = \sum_{p}\frac{1}{p}+ \sum_{p}\frac{1}{2p^2} + \sum_{p}\frac{1}{3p ^3} + \sum_{p}\frac{1}{4p^4} + \cdots \\ & {} = \left( \sum_p \frac{1}{p} \right) + K \end{align} And then Wikipedia says that $K<1$, without any explanation. How do we know that $$\sum_{p}\frac{1}{2p^2} + \sum_{p}\frac{1}{3p ^3} + \sum_{p}\frac{1}{4p^4} + \cdots$$ is equal to a constant $K<1$?
Hint : $$\sum_p \frac {1}{p^s} < \sum_{n=1}^{\infty} \frac 1{n^s} = \zeta(s) $$ where $\zeta(s)$ is the Riemann zeta function, and $\zeta(s)$ converges for every real $s>1$. In fact a similar comparison gives $K<1$ directly: $$K=\sum_{k\geq 2}\frac 1k\sum_p \frac{1}{p^k}\leq \frac 12\sum_p\sum_{k\geq 2}\frac{1}{p^k}=\frac 12\sum_p \frac{1}{p(p-1)}\leq \frac 12\sum_{n\geq 2}\frac{1}{n(n-1)}=\frac 12<1,$$ where the last sum telescopes to $1$.
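Numerically, $K=\sum_p\bigl(-\ln(1-1/p)-1/p\bigr)$ is in fact about $0.3157$ — comfortably below $1$ (a quick sketch):

```python
from math import log
from sympy import primerange

K = sum(-log(1 - 1/p) - 1/p for p in primerange(2, 10**6))
print(K)   # ~ 0.3157, well below 1 (the tail beyond 10^6 is negligible)
```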
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question on the proof that a closed subset of a compact set is compact I was reading the proof here on the claim that a closed subset of a compact set is compact, which reads: Say F ⊂ K ⊂ X where F is closed and K is compact. Let $\{V_α\}$ be an open cover of F. Then $F^c$ is a trivial open cover of $F^c$. Consequently { $F^c$} ∪ $\{V_α\}$ is an open cover of K. By compactness of K it has a finite sub-cover – which gives us a finite sub-cover of F. The proof has to add $\{F^c\}$ to an open cover of F so that it covers K. What about the open cover $\{V_α\}$ without $\{F^c\}$? How can one be sure that it has a finite subcover for F since it is not necessarily true that $\{V_α\}$ covers K? UPDATE I want to put the question differently. How do I know that there is not an open cover that covers F but not K?
You don't have to know whether $\{V_\alpha\}$ covers $K$: if it happens to cover $K$, then adding $F^c$ still wouldn't alter anything - the resulting cover would still cover $K$. Now of course, from the finite subcover of $K$ that may include $F^c$, we must know that the subcover excluding $F^c$ still covers $F$, but this is rather obvious: if $x\in F$ then it lies in at least one of the sets in the subcover, but it can't be in $F^c$ by definition, so it must be in at least one of the other sets in the subcover.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What are vertex fields, gradient and divergence on graphs? I have a few questions about these two slides on the topic of calculus on graphs:

1. What are the vertex fields defined here? My understanding is that it is a set of functions that takes in a vertex and gives a real number output. And because each vertex may need to undergo a different transformation, each vertex $v_i$ has its corresponding function $f_i$. Is that right?
2. What does the inner product here mean?
3. Why is there a square root of the weight in the gradient and divergence operators? Is it necessary? My understanding is that multiplying by the weight, and not the square root of the weight, is sufficient.
4. What is $F$ in the divergence operator?

These are all the slides that I have and I am having a lot of trouble understanding them. Is it that they are badly written? If not, can someone kindly explain to me please? Thanks
1. A vertex field is a square-summable function from the set of vertices into $\mathbb{R}$. (If there are only finitely many vertices, saying "square-summable" is unnecessary.) Imagine a graph, say the triangle $K_3$. Put a number next to each vertex, say $3, 6, -2$. You have a vertex field.
2. The concept of inner product is explained on Wikipedia. Here the inner product of two vertex fields $f,g$ means: multiply each value of $f$ by the corresponding value of $g$ and by the weight of that vertex. Add the results.
3. The reason for having a square root in $\sqrt{w_{ij}}$ will become apparent on a later slide, where the graph Laplacian is defined as the divergence of the gradient. Since both the gradient and the divergence involve multiplying by $\sqrt{w_{ij}}$, the Laplacian will have $w_{ij}$. The author would rather have a simpler formula for the Laplacian, because it will be used often in the future.
4. $F$ is a square-summable function defined on the edges. (If there are only finitely many edges, saying "square-summable" is unnecessary.) This is what the first line of the definition "$\operatorname{div}:L^2(\mathcal E)\to L^2 (\mathcal V)$" is for, to state what are the domain and codomain of this map.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Simple group problem. I am reading an algebra book and I have difficulty understanding one thing. In the book it is written that there is no simple group of order 528, and the explanation is as follows. Let $H$ be a Sylow 11-subgroup. Then $n_{11}=12$ (which I understand why), $|N(H)|=44$, and $G$ is isomorphic to a subgroup of $A_{12}$. Could you please explain?
For the subgroup of $A_{12}$ part, note that $G$ acts transitively by conjugation on the $12$ Sylow $11$-subgroups. Consider the kernel of this action - it is a normal subgroup and the action is non-trivial, so given $G$ is simple, the kernel must be the trivial subgroup, and the image is isomorphic to $G$. The image is contained in $S_{12}$ because it consists of permutations of the $12$ subgroups. Now if the image contained an odd permutation, it would have a subgroup of index $2$, and this subgroup would be normal. But the image is known to be simple, so it cannot contain any odd permutations, and must be wholly contained in $A_{12}$. I have glossed over some parts of this which you should already know. The technique of looking at the action of $G$ on the Sylow subgroups as a homomorphism to the relevant symmetric group is likely to come up again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the derived category of a full subcategory full? Certainly not, but I cannot find a good counterexample. I tried to do something like $\mathbf Z\text{-Mod}\subseteq \mathbf Z[x]\text{-Mod}$, but without success. Does anyone have a short counterexample?
Thanks to the comments by @MarianoSuárez-Álvarez, I can formulate the following answer: Consider a field $k$ and the ring $k[x]$. Of course, $\hom_{D(k)}(k, k[i])=\begin{cases}k & \text{if } i=0,\\0& \text{o/w}\end{cases}$ in the derived category $D(k)$. Now in the category $k[x]\text{-Mod}$, there is a nontrivial extension $0\to k\xrightarrow{x} k[x]/(x^2) \to k\to 0$, where we consider $k$ a $k[x]$-module by $k=k[x]/(x)$. This extension yields a map $$\begin{matrix}k\\\simeq\:\downarrow\phantom{\simeq} \\k[x]/(x^2) &\rightarrow& k\\&&\downarrow\\&&k\end{matrix}$$ which is a nontrivial map $k\to k[1]$ in $D(k[x])$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to solve differential equation $y'=\frac {3x-y+4}{x+y}$ EDIT: Found the second mistake (I went wrong when calculating $u_{1,2}$)! I might have made a mistake, however, I am not able to detect it. Here we go: $$y'=\frac{3x-y+4}{x+y}, \quad y(1)=1$$ 1.) set $x = X+a$ and $y=Y+b$, so $$\frac{dY}{dX} = \frac{3X-Y+(3a-b+4)}{X+Y+(a+b)}$$ 2.) Choose $a,b$ with $a=-1$ and $b=1$, then the differential equation is: $$\frac{dY}{dX} = \frac{3X-Y}{X+Y} = \frac{3-\frac{Y}{X}}{1+\frac{Y}{X}}$$ 3.) Now substitution: $Y = uX$, so $Y' = u+ Xu'$ and we have $$\frac{dY}{dX} = \frac{3-u}{1+u} = u+Xu'$$ 4.) Solve this, ending with: $$\frac{1+u}{-u^2-2u+3} du = \frac{dX}{X}$$ 5.) Solving this by integration, ending with: $$-\frac{1}{2} \ln (-u^2-2u+3) = \ln(X) + \ln(C)$$ $$-u^2-2u+3 = \exp(-2(\ln(CX))) = e^{\ln((CX)^{-2})} = \frac{1}{(CX)^2}$$ $$-u^2-2u+3-\frac{1}{(CX)^2} = 0$$ 6.) So I get $$u_{1,2} = -1 \pm \sqrt{16-\frac{4}{(CX)^2}}$$ Because of the substitution and by having chosen $Y=y-1$ and $X=x+1$ it follows that $$y-1 = -x-1 \pm (x+1)\cdot \sqrt{16-\frac{4}{(C(x+1))^2}}$$ Finally: $$y(x) = -x \pm \sqrt{16(x+1)^2-\frac{4}{C^2}}$$ 7.) With the initial values I get $$C=\frac{1}{\sqrt{15}}$$ 8.) However, my result is not true for the differential equation I started with, since the left-hand side and the right-hand side are not equal. Thanks for any advice/hint :)
Your mistake started from 6 (I'm writing $c=C^2$ here) $$ u^2 + 2u = 3 - \frac{1}{cX^2} $$ $$ (u+1)^2 = 4 - \frac{1}{cX^2} $$ $$ u= -1 \pm \sqrt{4-\frac{1}{cX^2}} $$ or $$ \frac{y-1}{x+1} =-1 \pm \sqrt{4-\frac{1}{c(x+1)^2}} $$ Using the condition $x = 1, y = 1$ we get $$ -1 \pm \sqrt{4-\frac{1}{4c}} = 0 $$ Only the plus sign satisfies $$ c = \frac{1}{12} $$ Final solution $$ y(x) = 1 + (x+1)\left(-1 + \sqrt{4-\frac{12}{(x+1)^2}} \right) $$ Or $$ y(x)= -x + 2\sqrt{x^2+2x-2}$$
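The final formula can be verified symbolically (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols("x", positive=True)
y = -x + 2*sp.sqrt(x**2 + 2*x - 2)

print(sp.simplify(sp.diff(y, x) - (3*x - y + 4)/(x + y)))   # 0: the ODE holds
print(y.subs(x, 1))                                         # 1, so y(1) = 1
```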
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove this sequent by natural deduction? How do I prove $$\forall x\forall y\forall z(S(x, y)\land S(y, z) \Rightarrow S(x, z)), \forall x\neg S(x, x) \vdash \forall x\forall y(S(x, y) \Rightarrow \neg S(y, x)).$$ by natural deduction? 1 $\quad \forall x\forall y\forall z(S(x, y)\land S(y, z) \Rightarrow S(x, z))\quad \text{premise}$ 2 $\quad\forall x\neg S(x, x) \quad\text{premise}$ I don't know what's the next step, replace $x$ by some term? I got it!!
Hint: Assume $S(x,y)$ and $S(y,x)$. Then $S(x,x)$ by the first premise, contradicting the second premise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to take a partial derivative of $\|y - Xw\|^2$ with respect to $w$? So I've tried to solve this problem where we are asked to compute the partial derivative of the function $\sum(y - Xw)^2$, or $\|y-Xw\|^2$, and then minimize it. I've never done any linear algebra aside from some really basic stuff, and I can't seem to find any information on how to take the partial derivative of such a function. I know how to take the partial derivative of a simple function, but not of functions with $\|x\|^2$ notation. In this case it would probably help to denote some variable, e.g. $z=y-Xw$; that way we get $\|z\|^2$. Is this even a right approach? I have no clue what to do after this.
An easy way to do it is to write it as a scalar product and then use the properties of the scalar product. I assume you are working with a real matrix and real vectors (it really seems that you are dealing with Ordinary Least Squares). \begin{align} \|y-Xw\|^2 &=\langle y-Xw,y-Xw \rangle \\ &=y^Ty-2w^TX^Ty+w^TX^TXw. \end{align} I've just used the fact that the scalar product is bilinear and symmetric. Now just take the derivative and use the product rule: $$\implies -2X^Ty+2X^TXw=0 \\ \implies X^Ty=X^TXw \\ \implies w=(X^TX)^{-1}X^Ty.$$ If you have problems with the derivation in general, just write down the two-dimensional case.
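The normal-equations solution agrees with a generic least-squares routine (a quick numerical sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)

w_normal = np.linalg.solve(X.T @ X, X.T @ y)    # (X^T X)^{-1} X^T y
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_normal, w_lstsq))           # True
```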
{ "language": "en", "url": "https://math.stackexchange.com/questions/2507923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Showing that $\{1 + i, -1 + i\}$ is a basis for the vector space $\mathbb C$ over $\mathbb R$ I feel my justification for this is weak, and I'm seeking improvement. $$Span((1+i), (-1+i)) = a(1+i) + b(-1+i)$$ I have two conclusions from this: 1. $$a(1+i) + b(-1+i) = a + ai -b + bi$$ $$a(1+i) + b(-1+i) = (a - b) + i (a +b)$$ And then, by setting $a$ and $b$ to any real numbers, any vector in $\mathbb C$ can be created. 2. $a(1+i)$ is a vector that resembles the image of $y = x$ in that it can extend to any point in that direction. $b(-1+i)$ is a vector that resembles the image of $y=-x$ in that it can extend to any point in that direction. Since these two vectors are orthogonal, any linear combination of the two can form any vector in $\mathbb C$. Is this a valid proof? Or is there a better way, or proper way, to show this? Besides, the span of any two linearly independent vectors in $\mathbb C$ should be able to span all of $\mathbb C$, anyway. An example would be $(1+i)$ and $(3+2i)$.
By definition, the numbers $1$ and $i$ are a basis of the vector space $\mathbb C$ over $\mathbb R$, so this is a two-dimensional $\mathbb R \text{-}$vector space. In general, a nonzero vector $\vec{w}$ is linearly independent from a nonzero $\vec{v}$ iff for any $\lambda$, $\; \vec{w} \ne \lambda \vec{v}$. In particular, the vectors $\quad +1 + i$ $\quad -1 + i$ are linearly independent, and therefore form a basis. You can also span all vectors with these two complex numbers. To see this, we can easily get back the 'canonical' basis in the span, $(1 + i) + (-1 + i) = 2i,$ so $i = (0,1)$ is a $\checkmark$, $(1 + i) - (i) = 1,\; \; \;\;\;\;\;\;\;$ so $1 = (1,0)$ is a $\checkmark$, and since the span is closed under any (recursive) linear combinations you can form, you can now 'get to' any vector (number) in $\mathbb C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Calculating variance for a sequence of i.i.d. variables which seems non-partitionable Let $(Z_n : 1 \leq n < \infty )$ be a sequence of independent and identically distributed (i.i.d.) random variables with $$\mathbb{P}(Z_n=0)=q=1-p,\quad \mathbb{P}(Z_n=1)=p.$$ Let $A_i$ be the event $$\{Z_i=0\}\cap\{Z_{i-1}=1\}.$$ If $U_n$ is the number of times $A_i$ occurs for $2 \leq i \leq n$, prove $\mathbb{E}(U_n)=(n-1)pq$, and find the variance of $U_n$. I was able to solve this question; only calculating the variance didn't work. I think it's best to use $$\mathrm{Var}(U_n)=\mathbb{E}(U_n^2)-\mathbb{E}(U_n)^2,$$ because the second expectation is already calculated. However, I have no idea how to calculate $\mathbb{E}(U_n^2)$. Can anyone provide some help? EDIT: By the way, the variance should be $\mathrm{Var}(U_n)=(n-1)pq-(3n-5)(pq)^2$, so the second moment of $U_n$ should be $(n-1)pq+(n-3)(n-2)(pq)^2$. Still, I have no clue how to calculate this second moment. EDIT 2: Using Dean's answer, I tried solving for the variance. We have a random sum for the number of zeros in $\{2,3,\ldots,n\}$, which has a binomial distribution with parameters $n-1$ and $q$. Then we obtain in total $j$ zeros, which can be preceded by ones. The preceding elements follow a Bernoulli distribution with parameter $p$. Thus, we can set up the probability generating functions to obtain: \begin{eqnarray} G_{zero's}(s) & = & (p+qs)^{n-1}\\ G_{ones}(s) & = &q+ps\\ G_{U_n}(s) & = & G_{zero's}(G_{ones}(s))=(p+q(q+ps))^{n-1}\\ G_{U_n}'(s) & = & (n-1)(p+q(q+ps))^{n-2}pq\\ G_{U_n}''(s) & = & (n-1)(n-2)(p+q(q+ps))^{n-3}(pq)^2\\ \mathrm{Var}(U_n) & = & G_{U_n}''(1)+G_{U_n}'(1)-G_{U_n}'(1)^2\\ & = & (n-1)(n-2)(pq)^2+(n-1)pq-(n-1)^2(pq)^2\\ & = & (n-1)pq+(n-1)(n-2-n+1)(pq)^2\\ & = & (n-1)pq-(n-1)(pq)^2. \end{eqnarray} However, the answer for this question says that the variance should be $\mathrm{Var}(U_n)=(n-1)pq-(3n-5)(pq)^2$, and since this is an old Oxford examination question, it seems unlikely that their answer is wrong. Also, the probability generating function seems all right, since $G_{U_n}'(1)$ indeed yields $\mathbb{E}(U_n)$. Any ideas on where I might have made a mistake? It also seems strange to me that we determine a random sum which can yield at most $n-1$ zeros, and then treat this like every one of these zeros can have a preceding one. For example, when all tosses yield a zero, we know for sure that $U_n=0$, while the above implies that we have $n-1$ Bernoulli trials, so quite likely $U_n\neq 0$. How to fix this?
The following is an incorrect treatment (incorrectly assuming independence)... just leaving in place for reference. Let $J$ be the number of zeros in the sequence, not counting the first element. $J$ is binomial with $n-1$ trials and probability $q$. For a sample with $j$ zeros beyond the first element, the number of those zeros that are preceded by a one is $U$, which is binomial with $j$ trials and probability $p$. The pmf for $U$ is therefore given by: $$f(u) = \sum_{j=0}^{n-1} B(j|n-1,q)B(u|j,p)$$ From that you can work out the second moment.
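Since the thread leaves the variance unresolved, here is a Monte Carlo check of the quoted target $\mathrm{Var}(U_n)=(n-1)pq-(3n-5)(pq)^2$ (a sketch; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 12, 0.3, 500_000
q = 1 - p

Z = rng.random((trials, n)) < p                 # rows of i.i.d. Bernoulli(p)
U = (Z[:, :-1] & ~Z[:, 1:]).sum(axis=1)         # occurrences of the pattern (1, 0)

print(U.mean(), (n - 1)*p*q)                          # both around 2.31
print(U.var(),  (n - 1)*p*q - (3*n - 5)*(p*q)**2)     # both around 0.943
```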
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Final step in proof of the Hellinger-Toeplitz Theorem Consider the following statement: (Hellinger-Toeplitz Theorem) Let $H$ be a Hilbert space and $A : H \to H$ be linear and symmetric, i.e. $$\langle x,Ay \rangle = \langle Ax,y \rangle$$ holds for all $x,y \in H$. Then $A$ is bounded. I prove this using the Banach-Steinhaus theorem. I defined $\varphi_y : H \to \mathbb{C}$ by $\varphi_y(x) := \langle A(y),x \rangle$ and $$\mathcal{F} := \{\varphi_y : y \in \partial B_1(0)\}$$ It is easy to show that $\mathcal{F}$ satisfies the prerequisites for the Banach-Steinhaus theorem and thus $$\sup_{T \in \mathcal{F}} \|T\| < \infty$$ Now I want to use this to show that $A$ is bounded. For $x \in H$ I compute $$\|A(x)\|^2 = \langle A(x),A(x) \rangle = \|x\|\langle A(x/\|x\|),A(x)\rangle = \|x\|\varphi_{x/\|x\|}(A(x))$$ But somehow I cannot get rid of the $A(x)$ in the argument of $\varphi$. Can anyone help me how to proceed?
Because $\varphi_{x/ \|x\|}(A(x)) \leq c \|A(x)\|$ where $c = \sup_{ T \in \mathcal{F}} \|T\|$ we have $$ \|A(x)\|^2 \leq c \|x \| \|A(x)\|. $$ Then, if $A(x)\neq 0$, dividing by $\|A(x)\|$ gives $\|A(x)\|\leq c\|x\|$; since this bound holds trivially when $A(x)=0$, $A$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How do I use the Leibniz formula to solve this difficult equation? Suppose $y$ is defined by $$y \equiv \frac{d^n}{dx^n}e^{-x^2/2}.$$ Prove that $$\frac{d^2y}{dx^2} + x\frac{dy}{dx} + (n+1)y = 0$$ I'm not sure where to start, as the Leibniz formula requires at least 2 functions to begin with, and there are no clear two functions in this problem. My thought process:

1. I could possibly factor out the $y$ in all 3 terms, but this will make the derivatives invalid, wouldn't it?
2. So since approach 1 wouldn't work, I could try integrating the whole equation, but I wouldn't get an exponential somewhere.
3. I can try to sub $y$ into the equation, but I don't know how to proceed from here.
Using the product rule iteratively, we have \begin{eqnarray*} x \frac{d^{n+1}}{dx^{n+1}} e^{-x^2/2} &=&\frac{d}{dx} \left(x \frac{d^n}{dx^n} e^{-x^2/2} \right) - \frac{d^{n}}{dx^{n}} e^{-x^2/2} \\ &=&\frac{d^2}{dx^2} \left(x \frac{d^{n-1}}{dx^{n-1}} e^{-x^2/2} \right) - 2\frac{d^{n}}{dx^{n}} e^{-x^2/2} \\ & \vdots & \\ &=&\frac{d^n}{dx^n} \left(x \frac{d}{dx} e^{-x^2/2} \right) - n\frac{d^{n}}{dx^{n}} e^{-x^2/2} . \end{eqnarray*} Now the equation can be rewritten as \begin{eqnarray*} \frac{d^n}{dx^n} \left( \frac{d^2}{dx^2} e^{-x^2/2} +x \frac{d}{dx} e^{-x^2/2} -ne^{-x^2/2}+(n+1)e^{-x^2/2} \right) \end{eqnarray*} and it is easy to show that the content of the bracket is zero.
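The identity is easy to confirm symbolically for small $n$ (a sketch):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.exp(-x**2/2)

for n in range(1, 6):
    y = sp.diff(f, x, n)
    print(n, sp.simplify(y.diff(x, 2) + x*y.diff(x) + (n + 1)*y))   # all 0
```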
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Let $\sum_{n=-\infty}^{\infty}a_{n}(z+1)^n$ be the Laurent series expansion of $f(z)=\sin(\frac{z}{z+1})$. Then $a_{-2}$ is Let $\sum_{n=-\infty}^{\infty}a_{n}(z+1)^n$ be the Laurent series expansion of $f(z)=\sin(\frac{z}{z+1})$. Then $a_{-2}$ is (a)$1$ (b)$0$ (c)$\cos(1)$ (d)$-\frac{1}{2}\sin(1)$ Reference: Complex analysis by Joseph Bak and Donald J. Newman I am trying to use the result, $\int_{C(-1;R)}(z+1)\sin(\frac{z}{z+1})dz$. How do I find $R$ here? In this disk $C(-1;R),$ $f(z)$ has an essential singularity at $z=-1$. Am I right? I am not able to evaluate the integral. Please help me.
HINT: $$\sin\left( \frac {z}{z+1}\right)=\sin(1)\cos\left( \frac1{z+1}\right)-\cos(1)\sin\left( \frac1{z+1}\right)$$
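With $v=1/(z+1)$, the coefficient of $v^2$ in the hinted expansion is exactly $a_{-2}$, which a CAS confirms (a sketch):

```python
import sympy as sp

v = sp.symbols("v")   # v = 1/(z+1)
expr = sp.sin(1)*sp.cos(v) - sp.cos(1)*sp.sin(v)
print(sp.series(expr, v, 0, 4).removeO().coeff(v, 2))   # -sin(1)/2, option (d)
```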
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Olympiad inequality Let $a,b,c,d \in [0,1]$. Prove $$ \frac {a}{1+b}+\frac{b}{1+c}+\frac{c}{1+d}+\frac{d}{1+a}+abcd\le3$$ Hi. I'm a school student, so I've been unable to find ways to attack this problem. I attempted to use AM-GM on $1+a$ etc., but the bound $2\sqrt{a}$ was too large and was easily larger than 3. Thus I'd like some help solving this problem.
We need to prove that $$\sum_{cyc}\left(\frac{a}{1+b}-a\right)+abcd\leq3-a-b-c-d$$ or $$3-a-b-c-d+\sum_{cyc}\frac{ab}{1+b}\geq abcd.$$ Now, by C-S and the condition $a,b,c,d\in[0,1]$, $$\sum_{cyc}\frac{ab}{1+b}=\sum_{cyc}\frac{abcd}{cd+bcd}\geq\frac{16abcd}{\sum\limits_{cyc}(cd+bcd)}\geq\frac{16abcd}{8}=2abcd.$$ Thus, it's enough to prove that $$3-a-b-c-d+abcd\geq0,$$ which is an inequality that is linear in each variable. Thus, $$\min_{\{a,b,c,d\}\subset[0,1]}(3-a-b-c-d+abcd)=\min_{\{a,b,c,d\}\subset\{0,1\}}(3-a-b-c-d+abcd)=0$$ and we are done!
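The reduced inequality $3-a-b-c-d+abcd\ge0$ is cheap to double-check on a grid (a sketch):

```python
import itertools
import numpy as np

grid = np.linspace(0, 1, 21)
worst = min(3 - a - b - c - d + a*b*c*d
            for a, b, c, d in itertools.product(grid, repeat=4))
print(worst)   # 0.0, attained at a = b = c = d = 1
```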
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A nonzero unital division ring $D$ A nonzero unital ring $D$ in which every nonzero element is invertible is called a division ring. I think when the ring is divided, the following two are equivalent? Is it right? 1: For all $a, b \in D$ with $a \neq 0$, the equations $ax = b$ and $ya = b $ have unique solutions in $D$. 2: For all $a, b \in D$ with $a \neq 0$, the equation $ax = b$ has a solution in $D$. Can the statement below be obtained from the second case above? 3: $D^{2} \neq 0$ and $D$ has no right ideals other than $0$ and $D$. Does the phrase 3 alone indicate a unital ring??
As I showed you here, (2) implies there are only trivial right ideals. Then here I showed you that as long as multiplication isn't zero ($D^2\neq \{0\}$) that having only trivial right ideals implies $D$ has an identity. Then obviously there are no proper left ideals either, and $ya=b$ has a solution when $a$ is nonzero. Finally, the solutions to $ax=b$ and $ya=b$ are unique since left and right multiplication by nonzero elements is injective, again as previously mentioned to you here. So (1) does indeed follow from (2) as long as you are still talking about $D^2\neq \{0\}$. I think when the ring is divided, I don't know for sure what you mean, but I suspect you mean "the ring is divisible" like a divisible module, meaning $D=xD$ for any nonzero $x\in D$, which is just a fancy way of saying $D$ is a division ring when $D^2\neq\{0\}$. Does the phrase 3 alone indicate a unital ring?? At first glance, perhaps no, but it is true, and you already asked this and got the positive answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counterexample to Bolzano-Weierstrass in infinite dimension From Bolzano-Weierstrass we can demonstrate that in a normed vector space $E$ of finite dimension, every bounded sequence admits a limit point. What are some counterexamples in infinite dimension? Does there exist a counterexample in every infinite dimensional normed space? I believe this one works: Let $E$ be the space of sequences of real numbers with finite support, equipped with the norm $\| (a_k)_{k \in \mathbb{N} } \|=\sup |a_k|$. Then define the sequence $(s_n)$ as follows: $s_n$ is the sequence whose $n$th term is $1$ and every other term is $0$. Then $(s_n)$ is bounded, and we can easily show that it has no limit point.
Yes, that's the obvious counterexample. You don't need to require finite support: when using the supremum norm you only need the sequences to be bounded. Let the $j$th sequence be $k\mapsto \delta_{jk}$ where $\delta_{jk}$ is the Kronecker delta. Every element in the sequence has norm $1$ and the distance between any two elements is $1$, so clearly this is a bounded sequence of which no subsequence is Cauchy, and therefore no subsequence is convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Show that $n(n^2-1)$ is divisible by 24, if $n$ is an odd integer greater than $2$. How can I show that $n(n^2-1)$ is divisible by 24, if $n$ is an odd integer greater than $2$? I am thinking that, since odd integers greater than $2$ can be written in the form $2k+1$ with $k \ge 1$, would it be correct to use this and try solving through induction?
Easier with congruences:

* $n^3-n\equiv 0\mod 3$ for all $n$ (that's Fermat's little theorem), so $3\mid n(n^2-1)$;
* if $n$ is odd, $n\equiv \pm 1,\pm3\mod 8$, so $n^2\equiv 1\mod8$ and $8\mid n^2-1$;
* last, combine the two with the Chinese remainder theorem (or just note $\gcd(3,8)=1$).
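If you want to convince yourself empirically first, a brute-force check in Python:

```python
# Empirical check: n(n^2 - 1) is divisible by 24 for every odd n > 2 tested.
for n in range(3, 10_001, 2):
    assert n * (n**2 - 1) % 24 == 0
print("verified for all odd n in [3, 9999]")
```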
{ "language": "en", "url": "https://math.stackexchange.com/questions/2508927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Distribution function related to the limit from the left Show that the distribution function $F$ of a random variable $f$ takes only the values $0$ and $1$ iff there exists a real number $c$ such that $P(f=c)=1$. I started solving the exercise and proved one direction: if $P(f=c)=1$ for some $c$, then $F$ takes only the values $0$ and $1$. For the other direction I ran into a problem: supposing $F$ takes only the values $0$ and $1$, we must find $c$ satisfying $P(f=c)=1$. I know that $P(f=c)=P(f\leq c)-P(f<c)$, and I know that $P(f<c)=\lim F(x)$ as $x$ approaches $c$ from the left.
Hint: Define $c:=\inf\{x\in \mathbb R\mid F(x)=1\}$. Then, since $F$ is continuous from the right, we have $F(c)=1$. What can be said about $F(x)$ if $x<c$? What does that say about $P(f=c)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Piecewise step function for changing dollar amounts? Visiting the Wikipedia article for continuous functions I found this: As an example, consider the function h(t), which describes the height of a growing flower at time t. This function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so the function M(t) is discontinuous. However, a quick investigation of money-, economics-related formulae showed that no function depicts money's innate discreteness (jumping from dollar to next dollar, cent to next cent). Obviously a bank account jumps up or down in dollars/cents, hence, discreteness. But how would you construct such a function. Typically, a discrete function looks like this: $$ f(x) = \left\{ \begin{array}{r@{\quad \mathrm{if} \quad}l} 1 & x \geq 0, \\ \!\! -1 & x < 0. \end{array} \right. $$ that is, your classic step function. It seems to me a "money step function" would have to have a condition for literally every dollar -- or my thinking is way off here. So my question is, how can I depict the discrete nature of money jumping up and down by dollar/cent amounts over time?
The jumps come in increments of time, not in increments of dollars. You can use the floor or ceiling to describe the function. Say I open a bank account with $\$100$ that pays $\$5$ of simple interest every year. Let $t$ be the time in years. The account value is then $100+5\lfloor t \rfloor$. You can also use the floor and ceiling to describe the effect of rounding.
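As a minimal sketch in Python (the function name is mine, nothing standard):

```python
# Sketch of the example above: $5 of simple interest credited once per year.
import math

def account_value(t: float) -> int:
    """Dollars in the account at time t (in years)."""
    return 100 + 5 * math.floor(t)

print([account_value(t) for t in (0, 0.5, 0.99, 1.0, 2.7)])
# [100, 100, 100, 105, 110] -- the jumps happen at whole years
```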
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to express the size of a dimension of a matrix Let's say I have a matrix $A\in \mathbb{C}^{4\times6}$. How can I express mathematically that I want to input $A$ and output $4$ as an answer, for example? In code we might do size(A), A.shape, A.RowCount(), depending on the computer language we are using. But what about in math? Note that putting $A_{4\times 6}$ is not what I'm looking for because $4$ or $6$ is the desired answer, not part of the formula for finding the answer. And putting $A_{m\times n}$ doesn't derive the answer either; it just states $A$ has dimensions $m$ and $n$. Update I found this, if this gives anyone any ideas. It gets the columns. $\text{rk}(A)+\text{nul}(A)=n$, given $A\in \mathbb{C}^{m\times n}$. Update 2 I decided to use $\#_1 A$ for the row count and $\#_2 A$ for the column count. I welcome feedback as to why that would confuse mathematicians who may read my notebook.
You would say that's the number of rows of the matrix $A$. EDIT: If roundabout ways are okay, one route is to identify an $m\times n$ matrix $M$ with some linear transformation $T:\mathbb{R}^n\longrightarrow\mathbb{R}^m$. Then we can say $\dim(\text{Codomain}(M))=m$ and $\dim(\text{Domain}(M))=n$. If you think there's no ambiguity with what is meant by $\dim(X)$ when $X$ is a square matrix, then one can note that $MM^T$ is $m\times m$ and $M^TM$ is $n\times n$, so that $\dim(MM^T)=m$ and $\dim(M^TM)=n$.
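In code this is of course just the shape, e.g. with NumPy (I use the conjugate transpose, the natural analogue of $M^T$ for complex matrices):

```python
import numpy as np

A = np.zeros((4, 6), dtype=complex)    # A in C^{4 x 6}
m, n = A.shape
print(m, n)                            # 4 6
# The roundabout route from the answer: A A^H is m x m, A^H A is n x n.
print((A @ A.conj().T).shape, (A.conj().T @ A).shape)   # (4, 4) (6, 6)
```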
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Intermediate value theorem for $\sin x.$ How is the intermediate value theorem valid for $\sin x$ in $[0,\pi]$? It has maximum value $1$ on the interval $[0,\pi]$, which doesn't lie between the values given by $\sin0$ and $\sin\pi$.
In general, any time you have a theorem, "If ____, then ____," it really means, "If ____, then ____ and maybe some other stuff not mentioned here also happens." Because when you're working in just about any branch of mathematics, no matter how thoroughly you describe the implications of any mathematical fact there is always some other possible case you could have said something about but didn't. The only things you can rule out are the things the theorem explicitly says cannot happen. Unless a theorem says "and there are no other values outside this interval," don't assume all the values are inside the interval. In your example, the value $\sin(\pi/2)=1$ is part of the "other stuff" that "also happens."
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Weird Combinatorial Identitiy A friend of mine came across this rather odd combinatorial identity. We've spent a while but haven't been able to prove it. Any ideas? The following holds exactly for even integers $n$, and is approximately true for odd integers $n$: $$n = \dfrac{n+1}{n^n - 1} \sum_{k=1}^{n/2} \dbinom{n-k}{k-1} n^k (n - 1)^{n+1-2k}$$
So we want to prove that for even $n$ $$ \frac{n^n-1}{n+1}=\sum_{l\ge0}\binom{n-1-l}l n^l(n-1)^{n-1-2l}. $$ The RHS is the coefficient of $t^{n-1}$ in $\frac1{1-(n-1)t-nt^2}$. But $$ \frac1{1-(n-1)t-nt^2}=\frac1{(1+t)(1-nt)}=(1-t+t^2-\ldots)(1+nt+n^2t^2+\ldots), $$ so this coefficient is the sum of a geometric progression with ratio $-n$; it equals $\frac{n^n+(-1)^{n-1}}{n+1}$, and we're done (as a bonus we get the correct version of the identity for odd $n$).
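A quick numerical confirmation of the closed form $\frac{n^n-(-1)^n}{n+1}$ for the coefficient, over a small (arbitrary) range of $n$:

```python
# Check: the coefficient equals (n^n - (-1)^n) / (n + 1) for all n, matching
# the even-n identity and its odd-n correction.
from math import comb

for n in range(2, 40):
    rhs = sum(comb(n - 1 - l, l) * n**l * (n - 1)**(n - 1 - 2*l)
              for l in range((n - 1) // 2 + 1))
    assert (n + 1) * rhs == n**n - (-1)**n
print("verified for n = 2..39")
```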
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Finding the $cdf$ of the sample minimum, $X_{(1)}$ Consider iid random variables $X_1$ and $X_2$, having $pdf$ $$f_X(x) = 4(1-2x)I_{(0,1/2)}(x).$$ Give the $cdf$ of the sample minimum, $X_{(1)}$. $$\begin{align*} F_{X_{(1)}}(x) &= P(X_{(1)} \leq x) \\\\ &= 1 - P(\min\{X_1, X_2\} \gt x) \\\\ &= 1 - P(X_1 \gt x, X_2 \gt x) \\\\ &= 1 - P(X_1 \gt x)\cdot P(X_2 \gt x) \\\\ &= 1 - [1-F_X(x)]^2 \\\\ &= 1 - \left[1-\int_0^x 4(1-2t)\,dt\right]^2 \\\\ &= 1 - [1-(4x-4x^2)]^2 \\\\ \end{align*}$$ Did I do this correctly?
Seems good to me. Just a baseline check: $F(1/2) = 1$, and as long as you understand that this expression is valid for $x \in (0, 1/2)$ we are ok. What would the values of $F$ be for negative $x$ and for $x \ge 1/2$?
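If you want a symbolic double check of the baseline $F(1/2)=1$, a small SymPy sketch:

```python
# Symbolic confirmation of the baseline check F(1/2) = 1.
import sympy as sp

x, t = sp.symbols('x t')
F = 1 - (1 - sp.integrate(4*(1 - 2*t), (t, 0, x)))**2   # valid on (0, 1/2)
print(sp.simplify(F.subs(x, sp.Rational(1, 2))))        # 1
```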
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $f\in C([-1,1],\mathbb{R})$ and that $\int_{-1}^{1}f(x)\,dx=0$. Consider $X=C([-1,1],\mathbb{R})=\{f:[-1,1]\rightarrow \mathbb{R}\,:\,f \text{ is continuous}\}$ with the supremum metric defined by $$d(f,g)=\text{sup}_{x\in[-1,1]}|f(x)-g(x)|$$ for all $f,g \in X$. Show that if $(f_n)_{n=1}^{\infty}$ is a sequence in $X$ such that $\int_{-1}^{1}f_{n}(x)\,dx=0$ for all $n\in\mathbb{N}$, and that $f_n \rightarrow f$ as $n\rightarrow\infty$ in $(X,d)$ for some function $f$, then $$f\in X\,\,\,\,\,\,\, \text{ and } \,\,\,\, \int_{-1}^{1}f(x)\,dx=0.$$ I'm firstly a little confused about how to show $f \in X$. Do I look at the usual $\epsilon-\delta$ definition for continuity of real functions (since isn't $X$ just the set of all real, continuous functions $f:[-1,1]\rightarrow \mathbb{R}$?), or do I instead need to consider $d$? Either way I've got to somehow get $f_n$ involved in that, and I guess I'll need to use the obvious integral inequality somewhere ($\left\lvert\int_a^b f(x)\,dx\right\rvert\leq\int_a^b\lvert f(x)\rvert dx$), but I'm not sure how to get the ball rolling.
If $(f_n)_{n \in \mathbb{N}}$ converges in $(X,d)$ to some function $f$, that means: $$\sup_{x \in [-1,1]} |f_n(x)-f(x)| \stackrel{n \rightarrow +\infty}\rightarrow 0$$ which implies $|f_n(x)-f(x)| \rightarrow 0$ for any $x \in [-1,1]$. This is called uniform convergence (because you are not fixing $x$ in your domain, as you would do for pointwise convergence) and trivially implies pointwise convergence (check!). For $f$ continuous you would like to check that for any $\varepsilon > 0$ you can find $\delta > 0$ such that $|f(x)-f(x_0)| < \varepsilon$ when $x \in (x_0-\delta,x_0+\delta)$. Then you can observe that by the triangle inequality $$|f(x)-f(x_0)| \leq |f(x)-f_n(x)|+|f_n(x)-f_n(x_0)|+|f_n(x_0)-f(x_0)|.$$ I'll let you conclude this part. About the integral: we had $$\forall \varepsilon > 0 \mbox{ }\exists N \in \mathbb{N}: |f_n(x)-f(x)| < \varepsilon \mbox{ when } n > N, \forall x \in [-1,1].$$ Then $$-\varepsilon +f_n(x) < f(x) < f_n(x)+\varepsilon,$$ hence, integrating over $[-1,1]$ and using $\int_{-1}^1 f_n(x)\,dx=0$, you get $\left|\int_{-1}^1 f(x)\,dx\right| \le 2\varepsilon$ for every $\varepsilon > 0$, and therefore $\int_{-1}^1 f(x)\,dx = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Projection onto the second-order cone I'm having difficulties in proving that the projection of $$(s,y)\in R \times R^{n}$$ onto the second-order cone $$Q^{n+1} = \{(t,x) \in R \times R^n : \|x\|_2 \leq t \}$$ is $$ \frac{s+\|y\|_2}{2\|y\|_2} (\|y\|_2,y)$$ when $\|y\|_2 > s $ and $ \|y\|_2 > -s $. I tried to first show that to minimize the distance, $x$ must be parallel to $y$; then I construct $$\alpha (\|y\|_2 + \epsilon,y)$$ and minimize the distance over $\alpha$ and $\epsilon$. I indeed come up with two quadratic functions which individually attain their minima when $$\epsilon = 0$$ and $$\alpha = \frac{s+\|y\|_2}{2\|y\|_2},$$ but I still have $$-\frac{s^2(\|y\|_2+\epsilon)}{2\|y\|_2} - \frac{{\|y\|_2}^3}{2\|y\|_2 + \epsilon}$$ which attains its maximum when $$\epsilon = 0.$$ As a result, I can't conclude that the distance attains its minimum accordingly. Is there any other, preferably elementary, method to prove that the optimal solution is $$ \frac{s+\|y\|_2}{2\|y\|_2} (\|y\|_2,y)$$ when $\|y\|_2 > s $ and $ \|y\|_2 > - s $?
Let $(t^*,x^*)$ be the projection. The optimality condition implies that for any $(t,x) \in Q^{n+1}$ we must have $$\langle(t-t^*, x-x^*), (s-t^*, y-x^*)\rangle \le 0.$$ Applying this inequality to $(t,x)=(0,0)$ and $(t,x) = 2(t^*,x^*)$ shows that in particular we have $$\langle(t^*, x^*), (s-t^*, y-x^*)\rangle = 0\tag{1}$$ and consequently, the first inequality can be rewritten as $$\langle(t,x), (s - t^*, y - x^*)\rangle \le 0.\tag{2}$$ These last two lines actually characterize $(t^*,x^*)$. That is, $(t^*,x^*)$ is the unique element of $Q^{n+1}$ satisfying both (1) and (2). Note that $(s-t^*, y -x^*) = \frac{\|y\|_2 - s}{2} (-1, y/\|y\|_2)$. From here (1) is easily verified. To verify (2), note that the left-hand side is $$\frac{\|y\|_2 - s}{2} (-t + x^\top y / \|y\|_2),$$ and the first factor is positive by the assumption $\|y\|_2 > s$, while the second factor is nonpositive by Cauchy-Schwarz and the definition of $Q^{n+1}$.
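For the numerically inclined, here is a small NumPy sketch that checks the claimed formula against the characterization (1) and (2) on random data (dimension, seed, and tolerances are arbitrary choices):

```python
# Check the formula against (1) and (2) on random data with ||y||_2 > |s|.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    y = rng.normal(size=5)
    ny = np.linalg.norm(y)
    s = rng.uniform(-0.9, 0.9) * ny          # guarantees ||y||_2 > |s|
    t_star = (s + ny) / 2
    x_star = (s + ny) / (2 * ny) * y
    # (1): the projection is orthogonal to the residual
    assert abs(t_star*(s - t_star) + x_star @ (y - x_star)) < 1e-9
    # (2): any cone element has nonpositive inner product with the residual
    x = rng.normal(size=5)
    t = np.linalg.norm(x) + rng.uniform(0, 1)   # (t, x) lies in the cone
    assert t*(s - t_star) + x @ (y - x_star) < 1e-9
print("optimality conditions (1) and (2) hold on all samples")
```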
{ "language": "en", "url": "https://math.stackexchange.com/questions/2509986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determine Galois group $Gal \Big(\frac{\mathbb{Q}(\sqrt[3]{3},\sqrt{-3})}{\mathbb{Q}}\Big)$ I've had a hard time determining the Galois group $Gal \Big(\frac{\mathbb{Q}(\sqrt[3]{3},\sqrt{-3})}{\mathbb{Q}}\Big)$ because I didn't know exactly how to compute the order of the elements. See here for the computation of the orders and the order of the group. However, I'm arriving at two contradictory conclusions. I get that the order of the group is 6 and that all the non-identity elements are of order 2. However, I know this cannot be the case since the only groups of order 6 are the dihedral group and the cyclic group, and in both you find elements of order three. What am I doing wrong? \begin{array}{|c|c|c|c|} \hline notation & \sqrt[3]{3} & \sqrt{-3} & order \\ \hline Id & \sqrt[3]{3} & \sqrt{-3} & 1\\ \hline & \sqrt[3]{3} & -\sqrt{-3} & 2\\ \hline & \omega \sqrt[3]{3}& \sqrt{-3}& 2\\ \hline & \omega \sqrt[3]{3}& -\sqrt{-3}& 2\\ \hline & \omega^2 \sqrt[3]{3}& \sqrt{-3}& 2\\ \hline & \omega^2 \sqrt[3]{3}& -\sqrt{-3}& 2\\ \hline \end{array} I do the computations considering that $\omega = -\frac{1}{2}+\frac{\sqrt{-3}}{2}$. So for instance if I compute the order of the automorphism that sends $\sqrt[3]{3} \mapsto \omega^2 \sqrt[3]{3}$ and $\sqrt{-3} \mapsto -\sqrt{-3}$, I observe that two applications of this automorphism on $\sqrt{-3}$ will give the identity, and for the other element I have the following chain: $\sqrt[3]{3} \mapsto \omega^2 \sqrt[3]{3} \mapsto \omega \omega^2 \sqrt[3]{3}$, observing that $\omega^2 \mapsto \omega$. Edit Let me update the table as I go along: \begin{array}{|c|c|c|c|} \hline notation & \sqrt[3]{3} & \sqrt{-3} & order \\ \hline Id & \sqrt[3]{3} & \sqrt{-3} & 1\\ \hline & \sqrt[3]{3} & -\sqrt{-3} & 2\\ \hline & \omega \sqrt[3]{3}& \sqrt{-3}& 3\\ \hline & \omega \sqrt[3]{3}& -\sqrt{-3}& 2\\ \hline & \omega^2 \sqrt[3]{3}& \sqrt{-3}& 3\\ \hline & \omega^2 \sqrt[3]{3}& -\sqrt{-3}& 2\\ \hline \end{array} So I conclude the group is the dihedral group with 6 elements. Thanks everyone.
Wouldn't it be simpler to remark that by definition, $\omega$ is a primitive cube root of $1$, so that your field $K$ is just the splitting field of the polynomial $X^3 - 3$? As such, $K/\mathbf Q$ is normal, with Galois group $G$ isomorphic to the permutation group of the roots, so $G\cong S_3 \cong D_6$, generated by the transposition $\tau:\omega \to \omega^2, \sqrt [3] 3 \to \sqrt [3] 3$ and the 3-cycle $\sigma: \sqrt [3] 3 \to \omega \sqrt [3] 3, \omega \to \omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Unique Cyclic Subgroups If we let $$S = \{ e^{q\pi i} : q\in \mathbb{Q} \}, $$ prove that for each $n \ge 1$ there is a unique cyclic subgroup of order $n$ in $S$, and that the union of these cyclic subgroups is $S$. Any help on this?
The elements of $S$ are complex numbers. The group operation is complex multiplication and the identity element is $1 = e^{2\pi i}$. So $x \in S$ lies in a copy of $\mathbb Z/ n \mathbb Z$ if and only if it is an $n$th root of unity. We have explicit formulas for the $n$th roots of unity; this will prove existence. From the fundamental theorem of algebra, we know there are exactly $n$ $n$th roots of unity; this will prove uniqueness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Every order interval in $l^1$ is norm compact Let $l^1$ denote the space of sequences $(x_n)\subset \mathbb{R}$ with $\Vert (x_n)\Vert_1:=\sum_{n\geq 1} |x_n|<\infty$. We say that $(x_n^1)\leq (x_n^2)$ whenever $x_n^1\leq x_n^2$ for every $n\in\mathbb{N}$. It is well-known that $(l^1,\Vert\cdot\Vert_1)$ is a Banach space. Given $(x_n^1),(x_n^2)\in l^1$ with $(x_n^1)\leq (x_n^2)$ we define the order interval $$[(x_n^1),(x_n^2)]:=\{ (y_n)\in l^1\colon (x_n^1)\leq (y_n)\leq (x_n^2)\}.$$ I suspect that this set is norm compact. Any hint to prove that?
We may consider only the set $S=[0, (x_n)]$, without loss of generality. Let $(y_n^k)_k$ be a sequence in $S$. We want to show that it has a convergent subsequence in $S$. A short but not elementary proof: the set $S$ is compact in the product topology. Thus, there is a pointwise convergent subsequence $(y_n^{k_m})$ with a limit $(y_n)\in S$. By the dominated convergence theorem (the dominating sequence being $(x_n)\in\ell^1$), it converges in $\ell^1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If $AT=TA$ with $A\geq0$. Why $A^{1/2}T=TA^{1/2}$? Let $\mathcal{H}$ be a complex Hilbert space. Let $T\in \mathcal{B}(\mathcal{H})$ and let $A\in \mathcal{B}(\mathcal{H})^+$ (i.e. $A^*=A$ and $\langle Ax\;| \;x\rangle \geq0,\;\forall x\in \mathcal{H}$). Assume that $AT=TA$. Why does $A^{1/2}T=TA^{1/2}$ hold? Thank you
You can check that $p(A)T=Tp(A)$ for any polynomial $p$. For any continuous function $f$ on $\sigma(A)$, there is a sequence $\{p_n\}$ of polynomials such that $p_n$ converges to $f$ uniformly on $\sigma(A)$ (Stone-Weierstrass). Hence $p_n(A)$ converges to $f(A)$ in operator norm, and thus $f(A)T=Tf(A)$. In particular, $f(x)=\sqrt{x},\ x\in \sigma(A)\subset [0,\infty)$, is your case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to calculate $a,b,c,d$ given the eigenvectors Given the matrix $$A=\begin{bmatrix} a & b & 2 \\ c & d & 6 \\ 3 & 4 & -3 \end{bmatrix}$$ with eigenvectors $$v_1=\begin{bmatrix} 5 \\ 1 \\ 3 \end{bmatrix} \quad\text{and}\quad v_2=\begin{bmatrix} 7 \\ 4 \\ 3 \end{bmatrix}$$ find $a,b,c,d$. After this I know I should compute $Av_1$ and $Av_2$. $$Av_1=\begin{bmatrix} 5a+b+6 \\ 5c+d+18 \\ 10 \end{bmatrix}$$ So what comes next in order to find values for $a,b,c,d$?
We have $AV_1 = \lambda V_1$. Here $$AV_1 = \begin{bmatrix} 5a+b+6 \\ 5c+d+18 \\ 10 \end{bmatrix}, \qquad \lambda V_1 = \begin{bmatrix} 5\lambda \\ \lambda \\ 3\lambda \end{bmatrix}.$$ Equating the third components, we see that $\lambda = \frac{10}{3}$. Similarly solve $AV_2 = \mu V_2$ for the other eigenvalue. Then, from the equations formed, get the desired values.
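If you'd rather let a computer do the bookkeeping, here is a small SymPy sketch that treats both eigenvalues as extra unknowns and solves the resulting linear system (the symbol names are my own):

```python
# Each eigenvector equation A v = lambda v gives three scalar equations;
# with the two unknown eigenvalues that is six equations in six unknowns.
import sympy as sp

a, b, c, d, l1, l2 = sp.symbols('a b c d lambda1 lambda2')
A = sp.Matrix([[a, b, 2], [c, d, 6], [3, 4, -3]])
v1 = sp.Matrix([5, 1, 3])
v2 = sp.Matrix([7, 4, 3])
eqs = list(A * v1 - l1 * v1) + list(A * v2 - l2 * v2)
sol = sp.solve(eqs, [a, b, c, d, l1, l2], dict=True)[0]
print(sol[l1])   # 10/3, exactly as found by equating third components
print(sol)       # the values of a, b, c, d and both eigenvalues
```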
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does there exist a non-measurable function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f ^{−1} (y) $is measurable for any $y \in \mathbb{R}$? Does there exist a non-measurable function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f ^{−1} (y) $is measurable for any $y \in \mathbb{R}$?
Let $A \subseteq \mathbb{R}$ be a non-measurable set such that $|A| = |\mathbb{R} \setminus A| = \mathfrak{c}$. Let $f : A \to \{x \in \mathbb{R} \mid x < 0\}$ and $g : \mathbb{R} \setminus A \to \{x \in \mathbb{R} \mid x \geq 0\}$ be bijections. Then $(f \cup g) : \mathbb{R} \to \mathbb{R}$ is a non-measurable bijection (as the preimage of the open set $\{x \in \mathbb{R} \mid x < 0\}$ is $A$) . As the preimage of each point is a singleton, your condition is satisfied.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2510877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove: $\int_a^b f(x) g(x) dx = f(a) \int_a^c g(x) dx + f(b) \int_c^b g(x) dx$ Let $f$ and $g$ be real-valued continuous functions on a closed and bounded interval $[a,b]$. Given that $f$ is increasing and $g$ is positive on $[a,b]$. Prove the following: $$ \int_a^b f(x) g(x) dx = f(a) \int_a^c g(x) dx + f(b) \int_c^b g(x) dx $$ where $c$ $ϵ$ $[a,b]$. I would have attempted this by working from the RHS, which gives the following: $$f(a) \int_a^c g(x) dx + f(b) \int_c^b g(x) dx = f(a) [g(c)-g(a)] + f(b) [g(b)-g(c)]$$ by Fundamental Theorem of Calculus (Part II). Subsequently, i would expand it as: $$ f(a)g(c)- f(a)g(a) + f(b)g(b) - f(b)g(c) $$ Working from LHS, $$ \int_a^b f(x) g(x) dx = f(b)g(b) - f(a)g(a) $$ As such, $ f(a)g(c) - f(b)g(c) \implies g(c) [f(a) - f(b)] $ must be $0$. How do i derive the above to complete this proof?
Write $H(x)=f(a)\int_a^x g(t)\,dt+f(b)\int_x^b g(t)\,dt-\int_a^b f(t)g(t)\,dt$. Then $H(a)\geq 0$ and $H(b)\leq 0$: indeed, $H(a)=\int_a^b(f(b)-f(x))g(x)\,dx\geq 0$ since $f(b)\geq f(x)$ and $g(x)\geq 0$, and $H(b)=\int_a^b(f(a)-f(x))g(x)\,dx\leq 0$. Since $H$ is continuous, the intermediate value theorem gives a $c\in[a,b]$ such that $H(c)=0$, i.e. $f(a)\int_a^c g(x)\,dx+f(b)\int_c^b g(x)\,dx=\int_a^b f(x)g(x)\,dx$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\sum_{n=1}^{\infty}\frac{1}{n^2} <\frac{33}{20}$ using elementary inequalities There are many ingenious ways for proving $$\zeta(2)=\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6} \approx 1.6449$$ Using the inequality $$\frac{1}{n^2} < \frac{1}{n^{2}-\frac{1}{4}} =\frac{1}{n-\frac{1}{2}}-\frac{1}{n+\frac{1}{2}}$$ we can see that $$\sum_{n=2}^{\infty}\frac{1}{n^2} <\frac{2}{3} \Rightarrow \zeta(2)<\frac{5}{3}\approx 1.6667$$ Can we improve upon these bounds using elementary inequalities? For instance, is it possible to show (of course without assuming $\zeta(2)\approx 1.6449$) that $\zeta(2)<\frac{33}{20}$? If there are much nicer bounds which follow using elementary inequalities I would be happy to see them.
You only need a few more terms: $$\zeta(2)<1+\frac14+\frac19+\sum_{n=4}^\infty\frac1{n^2-1/4} =1+\frac14+\frac19+\frac27$$ which is already less than $33/20$.
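In exact arithmetic the four terms sum to $\frac{415}{252}\approx 1.6468<\frac{33}{20}$; a two-line check with Python's `fractions`:

```python
from fractions import Fraction as Fr

bound = 1 + Fr(1, 4) + Fr(1, 9) + Fr(2, 7)
print(bound, float(bound), bound < Fr(33, 20))   # 415/252 1.6468... True
```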
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
For a fixed element $a$ of a ring $R$, show that the set is a subring of $R$. The question follows. For a fixed element $a$ of a ring $R$, show that the set $S=\{x\in R\mid ax=0\}$ is a subring of $R$. To show that $S$ is a subring of $R$, it must meet the following conditions: $(i)$ $S \neq \emptyset$; $(ii)$ $x \in S$ and $y \in S$ imply that $x-y$ and $xy$ are in $S$. This is my proof: $(i)$ Since S is a subring of R, then the additive identity is in S. Thus $a*0=0 \rightarrow 0=0$ and $0\in S$. Thus $S \neq \emptyset.$ $(ii)$ Let $x,y \in S$, then $ax=0$ and $ay=0$. Thus $a(x-y)=0 \rightarrow ax-ay=0 \rightarrow 0-0=0.$ Thus $x-y\in S.$ The same approach works for $xy$: let $x,y \in S$, then $ax=0$ and $ay=0$, then $a(xy)=0 \rightarrow (ax)y=0 \rightarrow 0*y=0 \rightarrow 0=0$, thus $xy \in S.$ Is it right? Any modifications that I can do to make it better?
You have the right ideas, you just need to write them down better. For instance, you want to prove that $S$ is a subring, so you cannot start with “since $S$ is a subring of $R$”. Also the argument in (ii) is backwards. Avoid those pesky arrows! ;-) It seems that you start from $a(xy)=0$ (which instead is what you need to prove) and deduce that $0=0$. But $0=0$ can be deduced from everything, even from a false assumption! Revised version (i) Since $a0=0$, we see that $0\in S$, so $S\ne\emptyset$. (ii) Suppose $x,y\in S$. Then $ax=0$ and $ay=0$ by definition; so $$ a(x-y)=ax-ay=0-0=0 $$ and therefore $x-y\in S$. Similarly, $$ a(xy)=(ax)y=0y=0 $$ which proves $xy\in S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $d_1\sim d_2$ and whether $(X,d)$ is complete Let $X = (-\pi/2,\pi/2), d_1(x,y) = | \tan x - \tan y|,\ d_2(x,y) = | x - y|$. Show that $d_1\sim d_2$, the space $(X,d_1)$ is complete, whereas the space $(X,d_2)$ is not. Conclude that completeness is not a topological invariant. For the first metric, I know that $((-\pi/2,\pi/2),d_1)$ is isometric to $(\mathbb R,d_\mathbb R)$ where $d_\mathbb R$ is the standard distance in $\mathbb R$. So by using this fact, I found that $(X,d_1)$ is complete. But I could not find a counterexample for the second metric, and I also do not know how to show that $d_1\sim d_2$.
To show $d_1 \sim d_2$ it is enough to show that for an arbitrary open ball $B_2(x_0, r)$ with respect to $d_2$, there exists $r' > 0$ such that $B_1(x_0, r') \subseteq B_2(x_0, r)$, and vice versa. Let $x, y \in X$. Using the mean value theorem, we have: $$\left|\tan x - \tan y\right| = \frac1{\cos^2\theta}|x - y|$$ for some $\theta \in \langle x, y\rangle$. Therefore: $$|x - y| = (\cos^2\theta)\cdot\left|\tan x - \tan y\right| \le \left|\tan x - \tan y\right|$$ Hence, for any ball $B_2(x_0, r)$ we have $B_1(x_0, r) \subseteq B_2(x_0, r)$. Now let $B_1(x_0, r)$ be a ball with respect to $d_1$. Since $\tan$ is continuous, taking $\varepsilon = r$, there exists a $\delta > 0$ such that $$|x - x_0| < \delta \implies \left|\tan x - \tan x_0\right| < r$$ Therefore, $B_2(x_0, \delta) \subseteq B_1(x_0, r)$. Hence, $d_1 \sim d_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to expand $(x^{n-1}+\cdots+x+1)^2$ (nicely) Sorry if this is a basic question, but I am trying to show the following expansion holds over $\mathbb{Z}$: $(x^{n-1}+\cdots+x+1)^2=x^{2n-2}+2x^{2n-3}+\cdots+(n-1)x^n+nx^{n-1}+(n-1)x^{n-2}+\cdots+2x+1$. Now I can show this by sheer brute force, but it wasn't nice and certainly wasn't pretty. So I am just wondering if there are any snazzy ways to show this? If it helps, I am assuming $x^m=1$ for some $m>n-1$.
HINT: Why not use $x^{n-1}+\cdots+x+1=\dfrac{x^n-1}{x-1}$ and divide $x^{2n}-2x^n+1$ by $x^2-2x+1$? You will easily find, successively, the coefficients $$1,2,3,4,\cdots,(n-2),(n-1),n,(n-1),(n-2),\cdots,4,3,2,1$$ with a symmetry like that of the binomial coefficients.
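A quick SymPy sanity check of the coefficient pattern for small $n$ (the range is arbitrary):

```python
# Verify the coefficient pattern 1, 2, ..., n, ..., 2, 1 for small n.
import sympy as sp

x = sp.symbols('x')
for n in range(2, 7):
    p = sp.expand(sum(x**i for i in range(n))**2)
    coeffs = [p.coeff(x, i) for i in range(2*n - 1)]
    assert coeffs == list(range(1, n + 1)) + list(range(n - 1, 0, -1))
print("pattern confirmed for n = 2..6")
```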
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
if $\int_Q f$ exists then $\int_{y \in B} f(x, y)$ exists for each $x \in A − D$, where $D$ is of measure zero. Let $f : Q = A \times B \to \Bbb R$ be bounded, where $A, B$ are respectively rectangles in $\Bbb R^l$ and $\Bbb R^k$. Show that if $\int_Q f$ exists then $\int_{y \in B} f(x, y)$ exists for each $x \in A − D$, where $D$ is of measure zero. I would appreciate hints for solving this problem.
Hints: (if you are working with Riemann integration) (1) For $f$ bounded and Riemann integrable on $Q = A \times B$ show that $$\int_Q f = \int_{x \in A} \underline{\int}_{y \in B}f(x,y) = \int_{x \in A}\overline{\int}_{y \in B}f(x,y)$$ where $\underline{\int}$ and $\overline{\int}$ denote lower and upper Darboux integrals. (2) Since it must hold that $$ \int_{x \in A} \left( \overline{\int}_{y \in B}f(x,y)- \underline{\int}_{y \in B}f(x,y) \right) = 0,$$ what can you conclude about where the upper and lower integrals are equal and consequently the existence of the integral $\int_{y \in B} f(x,y)$ for fixed $x$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability that sum is odd but not divisible by $3$ Out of $20$ consecutive natural numbers two are chosen randomly. Find the probability that the sum is odd but not divisible by $3$. We have the denominator as $\binom{20}{2}$. Now any number will be of the form $3k$, $3k+1$ or $3k-1$. Since the sum of $3p+1$ and $3q-1$ is divisible by $3$, our chosen two numbers must fall in the following two cases: Case $1.$ one number is of the form $3k$ and another $3p+1$. So select any multiple of $3$ from the $20$ numbers and select any number which leaves remainder $1$ from among the $20$, which makes the sum not divisible by $3$. Case $2.$ one number is of the form $3k$ and another $3q-1$. So select any multiple of $3$ from the $20$ numbers and select any number which leaves remainder $2$ from among the $20$, which makes the sum not divisible by $3$. But now how do I choose so that their sum is also odd?
Because with $20$ numbers you do not have the same number of numbers of the form $3k$, $3k+1$, and $3k+2$, I would recommend considering three separate cases, namely where the series starts with a number of the form $3k$, where it starts with $3k+1$, and where it starts with $3k+2$. For each, figure out how many pairs there would be of the desired property (because of the asymmetry, probably you don't get the same number for each), add them all up, and divide by $3 \cdot {20 \choose 2}$. Just to show how to do this for one case, consider where the first number of the series is of the form $3k$. Now, to add a second number and get an odd sum, we need to either add the second number, or the fourth number, or ... Of those, the 4th, 10th, and 16th will be of the form $3p$, so we rule those out, leaving $7$ numbers that can be added to the first with the desired property. The same $7$ numbers can be added to the 7th, 13th, and 19th number, giving $28$ pairs. Similarly, to the 3rd, 9th, and 15th we can add all but the 2nd, 8th, 14th, and 20th, so that is another $18$ pairs. Finally, to the 5th, 11th, and 17th we can add all but the 6th, 12th, and 18th, giving another $21$ pairs, for a total of $67$ pairs. Now do the same analysis for the first number being of the form $3k+1$, and then for $3k+2$. Like I said, you may get a slightly different number of pairs for those, but add them all up, and divide by the total number of possible pairs you can get between all these three different sequences, i.e. divide by $3 \cdot {20 \choose 2}$.
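If you want to confirm the counting, a brute-force enumeration over the three residue cases is easy (the starting values $3,4,5$ are just representatives of the residues $0,1,2$ mod $3$); it should give $67$, $66$, and $67$ pairs respectively, hence a probability of $\frac{200}{570}=\frac{20}{57}$:

```python
# Brute-force check of the case-by-case counting above.
from itertools import combinations
from math import comb

total = 0
for start in (3, 4, 5):              # one representative per residue class mod 3
    nums = range(start, start + 20)  # 20 consecutive naturals
    good = sum(1 for a, b in combinations(nums, 2)
               if (a + b) % 2 == 1 and (a + b) % 3 != 0)
    print(start % 3, good)           # per-case pair counts
    total += good
print(total / (3 * comb(20, 2)))     # 20/57 = 0.3508...
```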
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
show this inequality $\left(\sum_{i=1}^{n}x_{i}+n\right)^n\ge \left(\prod_{i=1}^{n}x_{i}\right)\left(\sum_{i=1}^{n}\frac{1}{x_{i}}+n\right)^n$ Let $x_{i}\ge 1$; show that $$\left(\sum_{i=1}^{n}x_{i}+n\right)^n\ge \left(\prod_{i=1}^{n}x_{i}\right)\left(\sum_{i=1}^{n}\dfrac{1}{x_{i}}+n\right)^n$$ or $$\left(\dfrac{\sum_{i=1}^{n}x_{i}+n}{\sum_{i=1}^{n}\dfrac{1}{x_{i}}+n}\right)^n\ge \prod_{i=1}^{n}x_{i}.$$ It seems AM-GM should be used: $$\sum_{i=1}^{n}x_{i}\ge n\sqrt[n]{x_{1}x_{2}\cdots x_{n}}$$ $$\sum_{i=1}^{n}\dfrac{1}{x_{i}}\ge \dfrac{n}{\sqrt[n]{x_{1}x_{2}\cdots x_{n}}}$$ Let $\sqrt[n]{x_{1}x_{2}\cdots x_{n}}=t$; this suggests the comparison $$\left(\dfrac{t+1}{\frac{1}{t}+1}\right)^n\ge t^n,$$ but since $\frac{t+1}{\frac{1}{t}+1}=t$ this is just an equality, so I can't finish this way.
We have to prove: $$\left(\dfrac{\sum_{i=1}^{n}x_{i}+n}{\sum_{i=1}^{n}\dfrac{1}{x_{i}}+n}\right)^n\ge \prod_{i=1}^{n}x_{i}$$ Let $x_i\geq 1$ be real numbers; then we have: $$\frac{1}{\sum_{i=1}^{n}S_i}\left(\sum_{i=1}^{n}S_i[(-x_i+\frac{(\sum_{i=1}^{n}x_i)+n}{(\sum_{i=1}^{n}\frac{1}{x_i})+n})(n)x_i^{n-1}+x_i^n]\right)\ge \prod_{i=1}^{n}x_{i}$$ where $$S_i=\frac{\prod_{i=1}^{n}x_{i}}{|((-x_i+\frac{(\sum_{i=1}^{n}x_i)+n}{(\sum_{i=1}^{n}\frac{1}{x_i})+n})(n)x_i^{n-1}+x_i^n)-1|}$$ and $$S_{min}=\frac{n\prod_{i=1}^{n}x_{i}}{|((-x_{min}+\frac{(\sum_{i=1}^{n}x_{i})+n}{(\sum_{i=1}^{n}\frac{1}{x_{i}})+n})(n)x_{min}^{n-1}+x_{min}^n)-1|}$$ After that we use Theorem 5 of this link: you just have to replace the function by $\phi(x)=x^n$ with domain $[x_{min};x_{max}]$ and $d=\frac{(\sum_{i=1}^{n}x_i)+n}{(\sum_{i=1}^{n}\frac{1}{x_i})+n}$, remark that we have $x_{min}\leq d \leq x_{max}$, and put $S_i=W_i$ to get the LHS.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Show that $|\sin x−\cos x|≤ 2$ for all $x.$ Please help me out in answering this: Show that $|\sin x−\cos x|≤ 2$ for all $x.$ I don't know where to start and I think we have to use the mean value theorem to show this.
The mean value theorem also works, since $$ \sin(x+\pi/2)-\sin x=\frac{\pi}{2}\cos\xi, $$ where $\xi\in(x,x+\pi/2)$. Since $\sin(x+\pi/2)=\cos x$, it follows that $|\cos x-\sin x|\le \pi/2\le 2$. [This is, of course, not as sharp as achille hui's approach.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/2511917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 8, "answer_id": 6 }
Prove that $\langle H, K\rangle = HK$ This is a question from an exam I sat about two hours ago, and I still couldn't figure it out on the bus home, so I've decided to ask for a clue here. Let $G$ be a group, and $H, K \leq G$. Let $HK := \{hk: h\in H, k\in K\}$, and $KH := \{kh: h\in H, k\in K\}$. Suppose $HK = KH$.

* Show that $HK \leq G$. (This is fine)
* Show that $\langle H, K\rangle = HK$. (This wasn't fine!)

Showing that $HK \subset \langle H, K\rangle$ is trivial, but this was all I could write for the reverse set inclusion: "Let $x \in \langle H, K\rangle$. Then $x = h_1^{n_1}k_1^{m_1}h_2^{n_2}k_2^{m_2}\cdots$. As $H$ and $K$ are closed under multiplication, for each $h_i, k_i$ we can find $s_i \in H, t_i \in K$ such that $s_i = h_i^{n_i}, t_i = k_i^{m_i}$, so that $x = s_1t_1s_2t_2\cdots$. Furthermore, we know that $HK = KH$, so for each $a\in H, b\in K$, we can find $\alpha \in H, \beta \in K$ such that $ab = \beta\alpha$. It follows by induction that $x = hk$ for some $h\in H, k\in K$, so $x \in HK$." I'm unhappy with this because of the potential "infiniteness" of $x = h_1^{n_1}k_1^{m_1}h_2^{n_2}k_2^{m_2}\cdots$. I handwaved and said "by induction", but induction only proves that "$s_1t_1s_2t_2\cdots s_nt_n = hk$ for some $h, k$ for all $n \in \mathbb{N}$". This means it verifies that all elements of $\langle H, K\rangle$ with finitely many symbols are of the form $hk$, but not a situation like $x = hkhkhkhkhk\cdots$. How could I have avoided this issue?
Let me first introduce a standard notation in computer science. Given a subset $S$ of $G$, let $S^0 = \{1\}$ and $S^{n+1} = S^nS$ for all $n \geqslant 0$. Finally, let $S^* = \cup_{n \geqslant 0} S^n$. Hint. First prove that $(HK)^*$ is equal to $\langle H, K\rangle$. Next, using the relations $HK = KH$, $HH = H$ and $KK = K$, show that $(HK)^* = HK$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2512042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integral of a power combined with a Gaussian hypergeometric function I think the following is true for $k \ge 3$, $$ \int_0^{\infty } (w+1)^{\frac{2}{k}-2} \, _2F_1\left(\frac{2}{k},\frac{1}{k};1+\frac{1}{k};-w\right) \, dw = \frac{\pi \cot \left(\frac{\pi }{k}\right)}{k-2} . $$ I have checked Table of Integrals, Series, and Products without any hit. Has anyone seen any formula that resembles the left hand side? Edit: This came up in my research when I tried to show another integral is equals the RHS. I was able to transform the integral into the LHS and it looked promising. But eventually I used another proof. Anyway, it's still nice to see this direction also works.
Using the integral representation 15.6.6 $$ \, _2F_1\left(\frac{2}{k},\frac{1}{k};1+\frac{1}{k};-w\right)=\frac{\Gamma\left(1+1/k\right)}{2\pi i\Gamma\left(1/k\right)\Gamma\left(2/k\right)}\int_{-i\infty}^{i\infty}\frac{\Gamma\left(1/k+t\right)\Gamma\left(2/k+t\right )\Gamma\left(-t\right)}{\Gamma\left(1+1/k+t\right)}w^{t}dt, $$ where the contour of integration separates the poles of $\Gamma\left(1/k+t\right)\Gamma\left(2/k+t\right)$ from the poles of $\Gamma(-t)$, interchanging the order of integration and calculating the integral over $w$ $$ \int_0^{\infty } (w+1)^{\frac{2}{k}-2} w^tdw=B(t+1,1-2/k-t) $$ we get \begin{align} &\int_0^{\infty } (w+1)^{\frac{2}{k}-2} \, _2F_1\left(\frac{2}{k},\frac{1}{k};1+\frac{1}{k};-w\right) \, dw\\ &=\frac{1/k}{2\pi i\Gamma\left(2/k\right)}\int_{-i\infty}^{i\infty}\frac{\Gamma\left(1/k+t\right)\Gamma\left(2/k+t\right )\Gamma\left(-t\right)\Gamma(t+1)\Gamma\left(1-2/k-t\right)}{\Gamma\left(1+1/k+t\right)\Gamma\left(2-2/k\right)}dt\\ &=\frac{\sin \left(\frac{2 \pi }{k}\right)}{2i(k-2)} \int_{-i\infty}^{i\infty}\frac{dt}{t\sin (\pi/k-\pi t) \sin \left(\pi/k+\pi t\right)}. \end{align} The contour of integration here is so that the origin lies to the left. We see that the integrand is odd. Therefore if we choose the contour as $(-i\infty,-ir)\cup \Gamma_r\cup(ir,i\infty)$,$r>0$, where $\Gamma_r=\left\{|z|=r,\text{Re}~z>0\right\}$, the integrals along $(-i\infty,-ir)$ and $(ir,i\infty)$ cancel out. The integral along $\Gamma_r$ can be calculated in the limit $r\to +0$ and equals $$ \frac{\pi i}{\sin^2(\pi/k)}. $$ Thus we get $$ \frac{\sin \left(\frac{2 \pi }{k}\right)}{2i(k-2)} \cdot \frac{\pi i}{\sin^2(\pi/k)}= \frac{\pi \cot \left(\frac{\pi }{k}\right)}{k-2}. $$
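For anyone who wants a quick numerical spot check of the identity before (or instead of) wading through the contour argument, here is a SciPy sketch (the sample values of $k$ are arbitrary):

```python
# Numerical spot check of the stated integral identity for a few k.
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

for k in (3, 4, 7, 12):
    lhs, _ = quad(lambda w: (w + 1)**(2/k - 2) * hyp2f1(2/k, 1/k, 1 + 1/k, -w),
                  0, np.inf)
    rhs = np.pi / (np.tan(np.pi / k) * (k - 2))
    print(k, lhs, rhs)    # the two columns agree to quadrature accuracy
```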
{ "language": "en", "url": "https://math.stackexchange.com/questions/2512169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Can we split a function while doing differentiation? I have a silly question here. Assume a function $$f(x)= \frac{x}{(x-1)^2} \cdot (x+2)$$ Can I write $$\frac{d}{dx} f(x) = \frac{d}{dx} x \cdot \frac{d}{dx} (x-1)^{-2} \cdot \frac{d}{dx} (x+2)^{-1}$$ If not, then why?
Yes, you can "split" a function; in fact, that's the standard way for computing derivatives algebraically. However, you have to know what to do with the pieces. Some examples (the "addition rule", "product rule", and "chain rule") are $$ \frac{d}{dx} \big( f(x) + g(x) \big) = f'(x) + g'(x) $$ $$ \frac{d}{dx} \big( f(x) \cdot g(x) \big) = f(x) g'(x) + f'(x) g(x) $$ $$ \frac{d}{dx} \big( f(g(x)) \big) = g'(x) \cdot f'(g(x)) $$
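To see the difference concretely on the $f$ from the question, here is a SymPy check; note I split $f$ into its actual factors $x$, $(x-1)^{-2}$ and $x+2$ (the question's display had $(x+2)^{-1}$, presumably a slip):

```python
# The naive "split" from the question versus the product rule, in SymPy.
import sympy as sp

x = sp.symbols('x')
u, v, w = x, (x - 1)**-2, x + 2              # the three factors of f
f = u * v * w
naive = sp.diff(u, x) * sp.diff(v, x) * sp.diff(w, x)
product_rule = sp.diff(u, x)*v*w + u*sp.diff(v, x)*w + u*v*sp.diff(w, x)
print(sp.simplify(naive - sp.diff(f, x)) == 0)          # False: splitting fails
print(sp.simplify(product_rule - sp.diff(f, x)) == 0)   # True: product rule works
```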
{ "language": "en", "url": "https://math.stackexchange.com/questions/2512258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }