Could "$\infty$" be understood by taking the reciprocals of the Hyperreal numbers? When learning mathematics we are told that infinity is undefined. (*) Recently I read about the infinitesimal version of calculus and how we can in fact treat $dy/dx$ as a fraction under this approach (something we can't do with limits). This is achieved by constructing the hyperreal numbers $^*\mathbb{R}$, which contain the real numbers and the infinitesimal numbers (all positive numbers greater than zero, yet less than any positive real number). Thinking back to (*), I wonder: could sense be made of infinity somehow by including the reciprocals of the infinitesimal numbers? Presumably these reciprocals would be defined, since the infinitesimal numbers are never zero.
In fact, infinite numbers are a part of the Hyperreal Numbers. Just check this out on Wikipedia.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Evaluate this infinite product involving $a_k$ Let $a_0 = 5/2$ and $a_k = a_{k-1}^{2} - 2$ for $k \ge 1$. Compute: $$\prod_{k=0}^{\infty} \left(1 - \frac{1}{a_k}\right)$$ Off the bat, we can separate $a_0$: $$= -3/2 \cdot \prod_{k=1}^{\infty} \left(1 - \frac{1}{a_k}\right)$$ Let's see: $a_0 = 5/2$ $a_1 = 25/4 - 2 = 17/4$ $a_2 = 289/16 - 2 = 257/16$ $$P = (-3/2)\cdot(-13/4)\cdot(-241/16)\cdots$$ Let's compute the first three for $P_3$: $P_1 = -3/2 = (-192/128)$ $P_2 = (-3/2)(-13/4) = (39/8) = (624/128)$ $P_3 = (39/8)(-241/16) = (-9399/128)$ But it is difficult to find a pattern for $P_k$. Help? Thanks =)
$\textbf{Hint:}$ show this by induction on $n\ge0$ $$a_n=2^{2^n}+2^{-2^n},\quad n=0,1,2\dots$$ and notice that $a_k+1=a_{k-1}^2-1=(a_{k-1}-1)(a_{k-1}+1),$ $$1-\frac{1}{a_k}=\frac{a_k-1}{a_k}=\frac{a_{k+1}+1}{a_k+1}\cdot\frac{1}{a_k}$$
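Not part of the original answer, but the hint is easy to sanity-check numerically. A small Python sketch (reading the product as $\prod_k(1-1/a_k)$, as the hint does) verifies the claimed closed form $a_n=2^{2^n}+2^{-2^n}$ in exact rational arithmetic; the partial products appear to settle very quickly near $3/7$, which is what the telescoping in the hint produces.

```python
from fractions import Fraction

# Recurrence a_0 = 5/2, a_k = a_{k-1}^2 - 2, in exact rational arithmetic
a = Fraction(5, 2)
partial = Fraction(1)
terms = []
for k in range(7):
    # Closed form claimed in the hint: a_k = 2^(2^k) + 2^(-2^k)
    closed = Fraction(2 ** (2 ** k)) + Fraction(1, 2 ** (2 ** k))
    assert a == closed
    partial *= 1 - Fraction(1) / a   # multiply in the factor (1 - 1/a_k)
    terms.append(float(partial))
    a = a * a - 2                    # advance to a_{k+1}

limit_guess = float(partial)         # numerically very close to 3/7
```

The first factor is $1-1/a_0 = 3/5 = 0.6$, and the partial products converge doubly exponentially fast.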
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Checking where the complex derivative of a function exists I have the following function: $$f(x+iy) = x^2+iy^2$$ My textbook says the function is only differentiable along the line $x = y$, can anyone please explain to me why this is so? What rules do we use to check where a function is differentiable? I know the Cauchy-Riemann equations, and that $u=x^2$ and $v=y^2$ here.
A perhaps more accessible perspective is to understand that the definition of the complex derivative, as in the real case, relies on the existence and uniqueness of a certain limit. In particular, given a point and a function, one considers evaluation of the function in a small neighbourhood of the point in question in the domain of the function. The limit in question is of course the ratio of the difference in the value of the function between the distinguished point and close neighbours to the difference in the argument. In the familiar real case, the limit must be uniquely defined, regardless of the direction from which the point is approached (from above or below). For example, a piecewise linear function has an upper derivative and a lower derivative everywhere; it only has a full derivative where they are equal (which may not be true at the boundaries of the pieces). Failure to meet this condition corresponds to a loss of regularity of the function; it is both intuitively and rigorously meaningless to assign a value to the derivative or slope of such a function at points where the full derivative does not exist. The intuition is identical in the complex case; the difference is that the distinguished point may be approached from an infinity of directions. In common with the real case, the derivative is defined iff the limit agrees regardless of this choice. Similarly, the intuition behind this is captured by the idea that the "slope" - here generalised to include the phase as well as the magnitude of the defining ratio, since a ratio of complex numbers is in general complex - must agree at a point, independent of the direction from which it is approached. This is reflected in the structure of the Cauchy-Riemann equations. 
A slightly counterintuitive aspect of complex analysis is that the derivative can fail to exist even if the real and imaginary parts of the function under consideration are smooth as real functions; agreement with the complex structure is a rigid constraint on differentiability. Try evaluating the limit from several directions for your function and this will persuade you that the limit is not independent of the choice. If the gradient of this function represented a force on a particle in the complex plane, which direction would it move in? It wouldn't be able to make its mind up.
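To make the directional dependence concrete, here is a small numerical sketch (my addition, not from the original answer): it evaluates the difference quotient of $f(x+iy)=x^2+iy^2$ along the real and imaginary directions. On the line $y=x$ the two values agree; off it they do not.

```python
def f(z):
    # f(x + iy) = x^2 + i*y^2
    x, y = z.real, z.imag
    return x * x + 1j * (y * y)

def quotient(z, direction, h=1e-6):
    # difference quotient of f at z along a given complex direction
    step = h * direction
    return (f(z + step) - f(z)) / step

# On the line y = x the real- and imaginary-direction quotients agree...
on_line = abs(quotient(1 + 1j, 1) - quotient(1 + 1j, 1j))
# ...off the line they differ (here roughly 2 versus 0)
off_line = abs(quotient(1 + 0j, 1) - quotient(1 + 0j, 1j))
```

This matches the Cauchy-Riemann analysis: $u_x = 2x$ and $v_y = 2y$ agree exactly when $x = y$.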
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find point on line with given start point, distance, and line equation I have the line equation $$ Ax +By + C = 0.$$ I have a start point (on this line): $ P_0 = (X_0, Y_0)$. I also have a distance $d$. I need to find a point $P_2$ on this line at distance $d$ from $P_0$. I know that there are 2 points at this distance. But how do I calculate them? I need some programmatic solution.
$(A, B)$ is a normal vector for the line, therefore $v = (-B, A)$ is a direction vector and you get all points on the line with $$ (x, y) = P_0 + t \, v = (X_0, Y_0) + t (-B, A), \quad t \in \mathbb R. $$ Now choose $t$ such that the length of $t(-B,A)$ is equal to the given distance $d$; this gives the two points $$ (X_2, Y_2) = (X_0, Y_0) \pm \frac d{\sqrt { A^2 + B^2}} (-B, A) \quad . $$
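Since a programmatic solution was requested, here is a minimal Python sketch of the formula above (the function name is my own choice):

```python
import math

def points_at_distance(A, B, C, x0, y0, d):
    """Return the two points on the line Ax + By + C = 0 at distance d
    from the point (x0, y0), which is assumed to lie on the line."""
    norm = math.hypot(A, B)      # length of the normal vector (A, B)
    t = d / norm                 # scale factor for the direction (-B, A)
    return ((x0 - t * B, y0 + t * A),
            (x0 + t * B, y0 - t * A))
```

For example, on the line $x+y-2=0$ through $(1,1)$ with $d=\sqrt2$, this returns the points $(0,2)$ and $(2,0)$.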
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate a limit of exponential function Calculate this limit: $$ \lim_{x \to \infty } \left(\frac{1}{5} + \frac{1}{5x}\right)^{\frac{x}{5}} $$ I did this: $$ \left(\frac{1}{5}\right)^{\frac{x}{5}}\left[\left(1+\frac{1}{x}\right)^{x}\right]^\frac{1}{5} $$ $$ \left(\frac{1}{5}\right)^{\frac{x}{5}}\left(\frac{5}{5}\right)^\frac{1}{5} $$ $$ \left(\frac{1}{5}\right)^{\frac{x}{5}}\left(\frac{1}{5}\right)^\frac{5}{5} $$ $$ \lim_{x \to \infty } \left(\frac{1}{5}\right)^\frac{x+5}{5} $$ $$ \lim_{x \to \infty } \left(\frac{1}{5}\right)^\infty = 0 $$ Now I checked on Wolfram Alpha and the limit is $1$. What did I do wrong? Is this the right approach? Is there an easier way? :) Edit: Can someone please show me the correct way to solve this? Thanks.
$$\lim_{x\to\infty}\left(\frac15\right)^x=0$$ $$\lim_{x\to\infty}\left(1+\frac1x\right)^x=e$$
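The two facts above settle it: the first factor dies off geometrically while the second stays bounded near $e$, so the product tends to $0$ (which suggests Wolfram Alpha parsed the input differently from the intended expression). A quick numerical check, not part of the original answer:

```python
import math

def g(x):
    # the original expression (1/5 + 1/(5x))^(x/5)
    return (1 / 5 + 1 / (5 * x)) ** (x / 5)

val_small = g(10.0)       # already well below 1
val_large = g(200.0)      # collapses toward 0
# the second factor (1 + 1/x)^x approaches e
euler_approx = (1 + 1 / 1e6) ** 1e6
```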
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
What is the combination of Complex, Split-Complex and Dual Numbers If $a+bi:i^2=-1$ is a complex number, $a+cj:j^2=+1$ is a split-complex number, and $a+d\epsilon:\epsilon^2=0$ is a dual number; what is the term for the combination $a+bi+cj+d\epsilon:i^2=-1,j^2=+1,\epsilon^2=0$?
There is no special term as far as I know. You can call them hypercomplex numbers defined by a certain Clifford algebra $\mathcal{Cl}(1,1,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Is Dodecahedron tessellation somehow possible? In this video (at 3:25) there is an animation of planets inside a dodecahedron matrix (or whatever data structure best fits this 3d mosaic). I tried reproducing it with 12-sided dice, or in Blender, but it looked impossible, because there is a growing gap in between faces. The article what-are-the-conditions-for-a-polygon-to-be-tessellated helped me understand which polyhedra are and which are not possible to tessellate. So for regular matrices, we are limited to some irregular polyhedra, like the Rhombic dodecahedron. My questions on this: *What is the technique used to represent a dodecahedron matrix in this video? *Is it only a 3D Voronoi tiling, and we're lucky it has 12 pentagonal sides in a region? *Is it an optical illusion? Maybe the underlying structure is just a 3d hyperbolic tiling, and it only looks like a regular dodecahedron matrix from our perspective? *If so, are there more of these "distorted representations of space" used to tessellate more polygons/polyhedra? *Are there online resources for writing geometry shaders with this technique?
As User7530 pointed out, there is - in the video - a heavy distortion of the Earths near the left side of the screen, and it must therefore be a hyperbolic tiling. Also, L'Univers Chiffonné is a book that explains the possibility of living in a hyperbolic (non-Euclidean) space. It's written by the French astrophysicist Jean-Pierre Luminet.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A question about essential ideal Let $I$ be a nonunital C*-algebra and $I\subset B(H)$ be any nondegenerate representation and define $$M(I)=\{T\in B(H): Tx\in I~and ~xT\in I, ~for ~all~ x\in I\}.$$ Then, how to prove $I$ sits in $M(I)$ as an essential ideal? Definition An ideal $I\triangleleft E$ is essential if it has "no orthogonal complement" - i.e., $$I^{\bot}:=\{e\in E: ex=xe=0,~ for ~all~x\in I\}=\{0\}.$$
First of all, see this question. If $Tx=xT=0$ for every $x\in I$, then for every $h\in H$ we have $Txh=0$. Now the set $\{xh:\ x\in I,\ h\in H\}$ spans a dense subspace of $H$, since the representation is non-degenerate (by the question above). Since $\ker(T)$ is closed in $H$ and contains this dense set, $\ker(T)=H$, i.e. $T=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the fraction where the decimal expansion is infinite? Find the fraction with integers for the numerator and denominator, where the decimal expansion is $0.11235\ldots$ The numerator and denominator must be less than $100$. Find the fraction. I believe I can use generating functions here to get $1+x+2x^2+3x^3+5x^4+\cdots$, but I do not know how to apply it.
An exhaustive search (by computer) of all fractions with numerator and denominator $< 100$ shows there is only one whose decimal representation starts 0.11235: $$ \frac{10}{89} $$ The decimal expansion continues $\ldots 955056179775\ldots$ (I haven't bothered computing enough digits to see the period) - that's completely impossible to guess; I would call the question poorly worded.
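The exhaustive search mentioned above is a few lines of Python (my reconstruction, not the answerer's code); integer arithmetic avoids any floating-point rounding issues.

```python
# Find all n/d with 1 <= n, d < 100 whose decimal expansion starts 0.11235,
# i.e. 0.11235 <= n/d < 0.11236, checked exactly as floor(10^5 * n / d) == 11235.
hits = [(n, d)
        for d in range(1, 100)
        for n in range(1, 100)
        if (n * 100000) // d == 11235]
```

`hits` comes out as `[(10, 89)]`, confirming the uniqueness claim.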
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Notation of inf In this paper (equation 4.1) the following formula is listed: $\inf_{u \in R} \left \{ \frac{\partial V}{\partial \boldsymbol{x}}f(\boldsymbol{x},u) \right \} < 0, \quad \forall \boldsymbol{x} \neq \boldsymbol{0} $ Now I don't understand what the term $\inf_{u \in R}$ indicates. I know that inf stands for infimum, but I can not make any sense out of this notation.
The subscript gives context to the infimum. You could also write it as $$ \inf\left\{\frac{\partial V}{\partial x} f(x,u)\ \bigg| \ u \in R\right\} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1075979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that a matrix equals its transpose Let $A$ be an $(n\times n)$ matrix that satisfies $AA^t=A^tA$. Let $B$ be the matrix $B=2AA^t(A^t-A)$. Prove/disprove that $B^t=B$. I started with: $$\begin{align} B &=2AA^t(A^t-A) \\ B^t &=(2AA^t(A^t-A))^t \\ B^t &=((A^t-A)^t(A^t)^tA^t2) \\ B^t &=((A-A^t)AA^t2) \\ B^t &=(AAA^t2-A^tAA^t2) \end{align} $$ At this point I see no clue how to turn it into the form $2AA^t(A^t-A)$. I thought it might not be true, but I have no idea how to disprove it either. Suggestions? Thanks.
$$ \begin{eqnarray*} B^T&{}={}&2\left(A-A^T\right)AA^T\newline &{}={}&2\left(AAA^T-A^TAA^T\right)\newline &{}={}&2A\left(AA^T-A^TA^T\right)\newline &{}={}&2AA^T\left(A-A^T\right)\newline &{}\neq{}& B\,. \end{eqnarray*} $$ Almost, but not quite, in general.
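A concrete instance (my addition): the rotation matrix $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ is normal ($AA^T=A^TA=I$) but not symmetric, and for it $B^T=-B\neq B$, matching the computation above. Plain-Python $2\times2$ matrix helpers keep the check dependency-free.

```python
def mul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

A = [[0, -1], [1, 0]]            # 90-degree rotation: normal, not symmetric
At = transpose(A)
assert mul(A, At) == mul(At, A)  # A commutes with its transpose

B = scale(2, mul(mul(A, At), sub(At, A)))
```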
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 4 }
Analysis: Prove divergence of sequence $(n!)^{\frac2n}$ I am trying to prove that the sequence $$a_n = (n!)^{\frac2n}$$ tends to infinity as $ n \to \infty $. I've tried different methods but I haven't really got anywhere. Any solutions/hints?
Let $c_n=(n!)^2$; then $\displaystyle\frac{c_{n+1}}{c_n}=\frac{((n+1)!)^2}{(n!)^2}=(n+1)^2\to\infty,$ $\;\;\;$so $(n!)^{\frac{2}{n}}=(c_n)^{\frac{1}{n}}\to\infty$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
3D coordinates of circle center given three point on the circle. Given the three coordinates $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, $(x_3, y_3, z_3)$ defining a circle in 3D space, how to find the coordinates of the center of the circle $(x_0, y_0, z_0)$?
In this formula the center $O$ of a circumscribed circle of $\triangle ABC$ is expressed as a convex combination of its vertices in terms of coordinates $A,B,C$ and corresponding side lengths $a,b,c$, suitable for both 2d and 3d: \begin{align} O&= A\cdot \frac{a^2\,(b^2+c^2-a^2)}{((b+c)^2-a^2)(a^2-(b-c)^2)} \\ &+B\cdot \frac{b^2\,(a^2+c^2-b^2)}{((a+c)^2-b^2)(b^2-(a-c)^2)} \\ &+C\cdot \frac{c^2\,(b^2+a^2-c^2)}{((b+a)^2-c^2)(c^2-(b-a)^2)} . \end{align}
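A direct Python transcription of this formula (my own sketch) works unchanged in 2D or 3D. Since all three denominators in the formula equal $16\,(\text{area})^2$, normalizing the three numerators by their sum gives the same convex combination.

```python
def circumcenter(A, B, C):
    """Circumcenter of triangle ABC; points are tuples of equal dimension."""
    dim = len(A)
    a2 = sum((B[i] - C[i]) ** 2 for i in range(dim))  # |BC|^2
    b2 = sum((C[i] - A[i]) ** 2 for i in range(dim))  # |CA|^2
    c2 = sum((A[i] - B[i]) ** 2 for i in range(dim))  # |AB|^2
    # barycentric weights from the formula above (common denominator cancels)
    wa = a2 * (b2 + c2 - a2)
    wb = b2 * (c2 + a2 - b2)
    wc = c2 * (a2 + b2 - c2)
    s = wa + wb + wc
    return tuple((wa * A[i] + wb * B[i] + wc * C[i]) / s for i in range(dim))
```

For the right triangle $(0,0,0)$, $(2,0,0)$, $(0,2,0)$ this returns the midpoint of the hypotenuse, $(1,1,0)$, as it should.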
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 11, "answer_id": 7 }
Counting in other bases While this could be considered opinionated to a certain degree, by setting the requirement as ease of use: is there a base that is better for performing simple arithmetic operations ($+$, $-$, $\times$, $\div$) than base ten? I recently attended a lecture about whether cover stories affect the learning of students. One of the controls was not having foreign exchange students. Questions regarding this control started, and eventually it was implied that foreign number systems (Chinese) might be better than ours. I attributed this to them using a different base.
Some points to consider in choosing a base: (1) In lower bases there are very few $1$-digit facts to learn, but the numbers are longer. (2) In higher bases, the numbers are shorter, but there are a lot more facts to learn. (3) Certain bases give quick divisibility tests for certain numbers. For instance in base $12$, there are instant divisibility tests for $2, 3, 4, 6, 12$, just by checking the last digit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the equation $2x^2+5y^2+6xy-2x-4y+1=0$ in real numbers Solve the equation $2x^2+5y^2+6xy-2x-4y+1=0$ The problem does not say it but I think solutions should be from $\mathbb{R}$. I tried to express the left sum as a sum of squares but that does not work out. Any suggestions?
You can solve for $x$: $(2)x^2+(6y-2)x+(5y^2-4y+1)=0\implies$ $x_{1,2}=\frac{-(6y-2)\pm\sqrt{(6y-2)^2-4\cdot2\cdot(5y^2-4y+1)}}{2\cdot2}=\frac{-6y+2\pm\sqrt{-4y^2+8y-4}}{4}=\frac{-6y+2\pm\sqrt{-4(y-1)^2}}{4}=\frac{-6y+2\pm2i(y-1)}{4}$ Then the only real solution is with $y=1$, hence $x=-1$.
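Not in the original answer, but the key algebraic step - the discriminant collapsing to $-4(y-1)^2$ - is easy to verify mechanically: two quadratics that agree at more points than their degree agree identically.

```python
def disc(y):
    # discriminant of 2x^2 + (6y-2)x + (5y^2 - 4y + 1) = 0, viewed in x
    return (6 * y - 2) ** 2 - 4 * 2 * (5 * y ** 2 - 4 * y + 1)

# agreement at 7 integer points forces the degree-2 identity everywhere
checks = [disc(y) == -4 * (y - 1) ** 2 for y in range(-3, 4)]

def F(x, y):
    # left-hand side of the original equation
    return 2 * x ** 2 + 5 * y ** 2 + 6 * x * y - 2 * x - 4 * y + 1
```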
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Convergence of summable sequences If $(a_n)$ is a sequence such that $$\lim_{n\to\infty}\frac{a_1^4+a_2^4+\dots+a_n^4}{n}=0.$$ How do I show that $\lim_{n\to\infty}\dfrac{a_1+a_2+\dots+a_n}{n}=0$?
Since: $$\left|\sum_{k=1}^{n} a_k\right|\leq \sqrt{n}\sqrt{\sum_{k=1}^{n} a_k^2}\leq n^{\frac{3}{4}}\left(\sum_{k=1}^{n}a_k^4\right)^{\frac{1}{4}}$$ by applying the Cauchy-Schwarz' inequality twice (or the Holder's inequality once), we have: $$\frac{1}{n}\left|\sum_{k=1}^{n}a_k\right|\leq\left(\frac{1}{n}\sum_{k=1}^{n}a_k^4\right)^{\frac{1}{4}}, $$ so, if the RHS tends to zero, so does the LHS.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
No. of different real values of $x$ which satisfy $17^x+9^{x^2} = 23^x+3^{x^2}.$ Number of different real values of $x$ which satisfy $17^x+9^{x^2} = 23^x+3^{x^2}.$ $\bf{My\; Try::}$ Using trial and error, $x=0$ and $x=1$ are solutions of the above exponential equation. Now we will check whether any other solution exists. If $x\geq 2\;,$ then $17^x+9^{x^2}>9^{x^2} = (6+3)^{x^2}>6^{x^2}+3^{x^2} = (6^x)^x+3^{x^2}>23^x+3^{x^2}\;,$ because $6^x>23$ for all $x\geq 2.$ So there is no solution in $x\in \left[2,\infty\right)$. Now I do not understand how to handle the cases $x<0$ and $0<x<1$. Help me, thanks.
Using derivatives, study the functions $ f, g: \mathbb{R} \rightarrow \mathbb{R}$, $f(x)= 9^{x^2}-3^{x^2}$, $g(x)=23^x-17^x$. One finds: * *$f$ has a minimum point in the interval $(0, 1)$, and its limits at both $-\infty$ and $+\infty$ are $+\infty$; *$g$ has a negative minimum point, its limit at $-\infty$ is $0$, and its limit at $+\infty$ is $+\infty$. For these reasons, and noting that $f$ grows faster than $g$ at infinity, it follows that their graphs have exactly two points in common. Conclusion: the equation has exactly two real roots.
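A quick numeric sanity check of this conclusion (my addition): the difference $h(x)=17^x+9^{x^2}-23^x-3^{x^2}$ vanishes at $x=0$ and $x=1$, and sample values around them are consistent with exactly two sign changes.

```python
def h(x):
    # difference of the two sides of the equation
    return 17 ** x + 9 ** (x * x) - 23 ** x - 3 ** (x * x)

roots_exact = (h(0), h(1))          # both exactly 0 (integer arithmetic)
samples = [h(-0.5), h(0.5), h(2)]   # signs +, -, + around the two roots
```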
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove that a square of a positive integer cannot end with $4$ same digits different from $0$ Prove that the square of a positive integer cannot end with $4$ identical digits different from $0$. I have already proved that a square cannot end in four repeated copies of any of the digits $1,2,3,5,6,7,8,9$, using remainders of division by $3,4,8,10$. Now the problem is how to prove that a square cannot end with $4444$.
A number ending in $4444$ is $\equiv 4444\equiv 12 \pmod{16}$ (since $10^4$ is divisible by $16$), while squares are $\equiv 0,1,4,9 \pmod{16}$; so no square can end in four $4$'s.
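Spelling the argument out with a quick check (my addition): the last four decimal digits determine the residue mod $16$ because $10^4 = 16\cdot625$.

```python
# Quadratic residues modulo 16: only 0, 1, 4, 9 can be squares mod 16
square_residues = sorted({(i * i) % 16 for i in range(16)})

# 10^4 is divisible by 16, so a number ending in 4444 is ≡ 4444 (mod 16)
assert 10 ** 4 % 16 == 0
tail_residue = 4444 % 16   # = 12, which is not a square residue
```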
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Multitangent to a polynomial function I'm trying to build some exercises on tangents of functions for beginner students in mathematical analysis. In particular I would like to suggest the study of polynomial functions $ y = p (x) $ of which it is possible to determine the graph with elementary methods, and also to determine (if it exists) the $n$-tangent, i.e. a straight line $ y = mx + q $ (with $ m \ne 0$ to avoid trivial solutions) which has $ n $ distinct points of tangency with the graph and no other intersection points with it, so that the system $$ \begin{cases} y=p(x)\\ y=mx+q \end{cases} $$ has $n$ double solutions. For bi-tangents I find, for example, functions of the form: $$ y= a(x^4-3k^2x^2+2k^3x) $$ that have as bi-tangents the straight lines $$ y=2ak^3x-\dfrac{9}{4}ak^3 $$ or: $$ y=a\left( \dfrac{1}{4}x^4 -\dfrac{3k}{2}x^3+\dfrac{9k^2}{4}x^2-k^3x\right) $$ with bi-tangents $y=ak^3x$. I cannot, however, find an example with a $3$-tangent, i.e. a polynomial of sixth degree $ y = p (x) $ such that $ p (x) $ and $ p '(x) $ are decomposable (more or less easily) into factors of degree $n \le 2$, and that at the same time has a $3$-tangent such that its points of tangency can be determined without using the general formula for solving a cubic equation. Does anyone know a function of this type, or can anyone suggest an efficient way to find one? In other words: find a function: $$ y=a_6x^6+a_5x^5+a_4x^4+a_3x^3+a_2x^2+a_1x+a_0 $$ such that $ y $ and $y'$ are factorizable with factors of degree $n \le 2$ and there exist $m,q \in \mathbb{R}$ (or better $\in \mathbb{Q}$) such that $$ a_6x^6+a_5x^5+a_4x^4+a_3x^3+a_2x^2+(a_1-m)x+a_0-q =a_6\left( x^3+Bx^2+Cx+D\right)^2 $$ and the latter $3^{rd}$ degree polynomial is also factorizable. Added after the Answer. Michael Burr's answer doesn't fit the request that $f(x)$ and $f'(x)$ have roots that we can find by solving equations of degree $\le 2$.
Here I sum up and slightly generalize the problem: Let $f(x) \in \mathbb{R}[x]$ be a polynomial of degree $2n > 4$ and $f'(x)$ its derivative. I want to determine the coefficients of $f(x)$ in such a way that: all the real roots of $f(x)$ and $f'(x)$ can be found by solving equations of degree $\le 2$, and there are $m,q \in \mathbb{R}$ such that the polynomial $g(x)=f(x)+mx+q$ has $n$ double roots. Or prove that no such polynomial exists.
After three solutions, I found the easy way! Start with $g(x)=(x-p)^2(x-r)^2(x-s)^2$. For any $m$ and $q$, consider $y=m(x-p)+q$. Add this to $g$, to get $f$, then $f$ passes through $(p,q)$ and the line $y=m(x-p)+q$ is tangent to the curve of $f$ at $p$, $r$, and $s$.
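A concrete check of this construction (my example, with $p=0$, $r=1$, $s=2$, $m=1$, $q=0$): the polynomial $f(x)=x^2(x-1)^2(x-2)^2+x$ should touch the line $y=x$ at $x=0,1,2$ with matching slope.

```python
def g(x):
    return (x * (x - 1) * (x - 2)) ** 2   # (x-p)^2 (x-r)^2 (x-s)^2

def f(x):
    return g(x) + x                        # g plus the line y = m(x-p)+q = x

def fprime(x, h=1e-5):
    # central-difference approximation of f'
    return (f(x + h) - f(x - h)) / (2 * h)

tangency_values = [f(t) - t for t in (0, 1, 2)]   # f meets the line y = x
tangency_slopes = [fprime(t) for t in (0, 1, 2)]  # with slope m = 1 there
```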
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Orbits in $G = \mathbb{Z}_6$ by listing 2-element subsets of $G$. 1) Let $G = \mathbb{Z}_6$. List all 2-element subsets of $G$, and show that under the regular action of $G$ (by left addition) there are 3 orbits, 2 of length 6, one of length 3. Deduce that the stabilizers of 2-element subsets of $G$ have order 1 or 2. 2) Carry out the same calculations as in the previous question for $S_3$, showing that there are four orbits, of lengths 3, 3, 3 and 6. These are the questions - I have the answers but what I am really struggling with is getting to grips with what it all means. Firstly, I know the definition of an orbit to be the equivalence class for $x\sim y$ if $\theta(g)(x)=y$ for some $g\in G$, with $x,y\in X$, where $G$ is the group and $X$ the set on which $G$ acts. But what I am not seeing is how I can apply this to the questions. Is my "set" the set of all two-element subsets? And if so, what is my group action $\theta$ and how can I apply it? Thanks in advance.
The two element subsets: $$\{0,1\},\{0,2\},\{0,3\},\{0,4\},\{0,5\},\{1,2\},\{1,3\},\{1,4\},\{1,5\},\{2,3\},\{2,4\},\{2,5\},\{3,4\},\{3,5\},\{4,5\}.$$ Left addition: Take $\left[a\right]_6 \in \mathbb{Z}_6$ and do $a+H$ (where $H$ is a two element subset of $G$). For example: $$1+\{0,1\} =\{1,2\}, 1+\{1,2\} = \{2,3\}, 1+\{2,3\} = \{3,4\}, 1+\{3,4\} = \{4,5\}, 1+\{4,5\} = \{5,0\}, 1+\{5,0\} = \{0,1\} $$ Here is an example of an orbit of length $6$. $\textbf{Lastly}$: To do a problem like this with maximum efficiency think of these sets as ordered pairs $(x,y)$. If you want to show that there are three orbits, think of all the ordered pairs $(x,y)$ in which you can add $1,2,3,4,5$ to and get back to the original ordered pair.
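The whole computation fits in a few lines of Python (my sketch), confirming the claimed orbit lengths $6,6,3$ and, via orbit-stabilizer, stabilizer orders $|G|/|\text{orbit}| \in \{1,2\}$.

```python
from itertools import combinations

G = range(6)
subsets = [frozenset(s) for s in combinations(G, 2)]   # all 15 two-element subsets

def act(g, S):
    # regular action by left addition in Z_6
    return frozenset((g + x) % 6 for x in S)

orbits = []
seen = set()
for S in subsets:
    if S not in seen:
        orbit = {act(g, S) for g in G}
        orbits.append(orbit)
        seen |= orbit

orbit_lengths = sorted(len(o) for o in orbits)             # [3, 6, 6]
stabilizer_orders = sorted(6 // n for n in orbit_lengths)  # [1, 1, 2]
```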
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Abelian group of order 99 has a subgroup of order 9 Prove that an abelian group $G$ of order 99 has a subgroup of order 9. I have to prove this, without using Cauchy theorem. I know every basic fact about the order of a group. I've distinguished two cases : * *if $G$ is cyclic, since $\mathbb Z/99\mathbb Z$ has an element of order $9$, the problem is solved. *if $G$ isn't cyclic, every element of $G$ has order $1,3,9,11,33$. I guess I need to prove the existence of an element of order $9$. How should I do that ? Note that $G$ is abelian (I haven't used it yet). Context: This was asked at an undergraduate oral exam where advanced theorems (1 and 2) are not allowed.
There exists an element $a$ of order $3$. If there were none, every non-identity element would have order $11$ (an element of order $9$, $33$, or $99$ would yield one of order $3$ by taking a suitable power). But then a counting argument leads to a contradiction. Take the quotient group $G/\langle a\rangle$, which has order $33$. The same argument shows that it contains an element $b$ of order $3$. Let $p: G \rightarrow G/\langle a\rangle$ be the natural projection. The pre-image of a subgroup is a subgroup, so consider the subgroup $p^{-1}(\langle b\rangle)$ of $G$. This subgroup has order $9=|\ker p|\,|\langle b\rangle|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
Prove $4^k - 1$ is divisible by $3$ for $k = 1, 2, 3, \dots$ For example: $$\begin{align} 4^{1} - 1 \mod 3 &= \\ 4 -1 \mod 3 &= \\ 3 \mod 3 &= \\3*1 \mod 3 &=0 \\ \\ 4^{2} - 1 \mod 3 &= \\ 16 -1 \mod 3 &= \\ 15 \mod 3 &= \\3*5 \mod 3 &= 0 \\ \\ 4^{3} - 1 \mod 3 &= \\ 64 -1 \mod 3 &= \\ 21 \mod 3 &= \\3*7 \mod 3 &= 0\end{align} $$ Define $x = \frac{4^k - 1}{3}$. So far I have: $$k_1 \to 1 \Longrightarrow x_1 \to 1 \\ k_2 \to 2 \Longrightarrow x_2 \to 5 \\ k_3 \to 3 \Longrightarrow x_3 \to 21 \\ k_4 \to 4 \Longrightarrow x_4 \to 85$$ But then it's evident that $$4^{k_n} = x_{n+1} - x_n$$ I don't know if this helps, these are ideas floating in my head.
Hint: $$4 \equiv 1 \mod 3 \Rightarrow 4^n \equiv 1^n \mod 3 \Rightarrow 4^n \equiv 1 \mod 3$$
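The congruence in the hint, together with the pattern $x_{n+1}-x_n=4^n$ spotted in the question, can be checked directly:

```python
# x_k = (4^k - 1) / 3 is an integer for every k >= 1,
# and consecutive values differ by exactly 4^k
xs = [(4 ** k - 1) // 3 for k in range(1, 50)]
divisible = all((4 ** k - 1) % 3 == 0 for k in range(1, 50))
differences_match = all(xs[i + 1] - xs[i] == 4 ** (i + 1)
                        for i in range(len(xs) - 1))
```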
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 8, "answer_id": 2 }
Random variables and Linearity I have an equation $Y = 5 + 3\times X$ and I assume that $X$ is a random variable taking values from a uniform distribution. Can I conclude that $Y$ is also a random variable with a uniform distribution, but on a different interval? Thank you in advance
If $X$ is uniform on $(0,1)$ then $Y=5+3X$ is uniform on $(5,8)$. More generally, for every nonzero $b$, $Z=a+bX$ is uniform on the interval with endpoints $a$ and $a+b$.
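A seeded simulation sketch (my addition) illustrating the claim - an affine map sends uniform to uniform, just on a shifted and rescaled interval:

```python
import random

random.seed(42)
xs = [random.random() for _ in range(100_000)]   # X uniform on (0, 1)
ys = [5 + 3 * x for x in xs]                     # Y = 5 + 3X

in_range = all(5 <= y < 8 for y in ys)
mean_y = sum(ys) / len(ys)                       # should be close to 6.5
```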
{ "language": "en", "url": "https://math.stackexchange.com/questions/1076996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Stopped process of Brownian motion I am baffled about the following problem: Let $(B_t)$ be a standard Brownian motion. Let $$ \tau:= \inf\{ t \geq 0 :B_t = x \} \wedge \inf\{ t \geq 0 :B_t = -y \}$$ be a stopping time, where $x,y >0$. I am eager to know why the stopped process $(B_{t \wedge \tau})_{t \geq 0}$ is U.I.. Moreover, we know that $(\tilde{B}_t := B^2_t -t)_{t \geq 0}$ is a martingale. But the book also claims that $(\tilde{B}_{t \wedge \tau \wedge n})_{t \geq 0}$ is U.I., for any $n \in \mathbb{N}$. Why is that?
We know that $\mathbb{E}[\tau] < \infty$ and $B_{t \wedge \tau}$ is a martingale. It is also easy to see that $-y \leq B_{t \wedge \tau} \leq x$. Any bounded martingale is uniformly integrable. Hence, $B_{t \wedge \tau}$ is a UI martingale.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Hitting time process of Brownian motion I am stuck with this problem: Let $(B_t)$ be a standard Brownian motion in $\mathbb{R}$. For $t \geq 0$, let $$ H_t = \inf \{ s \geq 0 : B_s = t \}, \quad S_t = \inf \{ s \geq 0 : B_s > t \}. $$ I want to prove three things: $1.$ Given $t \geq 0$, $H_t$ and $S_t$ are a.s. equal. (I tried to use the continuity property of Brownian paths, but still can't show this.) $2.$ Give an example of a sample path of $\{B_t\}$ such that $\{S_t\}$ and $\{H_t\}$ are not equal on a set of positive measure. $3.$ Prove that ${S_t}$ is almost-surely nowhere continuous. Any ideas on how to approach this hard problem?
1) As Brownian paths are continuous, it is clear that $H_t \leq S_t$ a.s. Also, we know that $B_{\tau + s} - B_{\tau}$ is a Brownian motion for any finite stopping time $\tau$. Now take $\tau = H_t$, which means $B_{H_t+s}-t$ is a Brownian motion. Brownian motion oscillates infinitely often near $0$, hence we can find a sequence of times $s_n$ such that $s_n \to 0$ and the $B_{H_t+s_n}-t$ are positive and converge to $0$, which means $H_t \geq S_t$ a.s., and hence $H_t = S_t$ a.s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Height and coheight of an ideal Given an ideal $\mathfrak{a}$, Matsumura defined the height of $\mathfrak{a}$ as: $$\text{ht}(\mathfrak{a})=\inf_{\mathfrak{p}\in V(\mathfrak{a})}\text{ht}(\mathfrak{p})$$ He states that: $$\text{ht}(\mathfrak{a})+\dim(A/\mathfrak{a})\leq \dim(A)$$ Any ideas on how to show this? I know I have to use the correspondence theorem. I'm just having a hard time writing it properly with the definitions of height and dimension. Thanks in advance!
Hints. $\operatorname{ht}\mathfrak{p}+\dim A/\mathfrak{p}\leq \dim A$ for any prime ideal $\mathfrak p$ (why?). $\dim A/\mathfrak a=\sup_{\mathfrak{p}\in V(\mathfrak{a})}\dim A/\mathfrak p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Definite integral $\int_0^{2\pi}\frac{1}{\cos^2(x)}dx$ I encountered this very simple problem recently, but I got stuck on it because I think I am missing something. It is easy to see that the indefinite integral $\int\frac{1}{\cos^2(x)}dx$ is $\tan(x)+C$. Also, because $\frac{1}{\cos^2(x)}\geq0$ with the inequality strict on some interval, the definite integral $\int_0^{2\pi}\frac{1}{\cos^2(x)}dx$ should be strictly positive. But when I evaluate it using the antiderivative, I get $\tan(2\pi)-\tan(0)=0$, which puzzles me. I guess this may be common in trigonometric integration and that there should be a trick to get the actual result; could someone shed some light on this problem?
This is a trigonometric analog to the classic paradox that $$\int_{-1}^1{1\over x^2}dx={-1\over x}\Big|_{-1}^1=-1-1=-2$$ despite the fact that $1/x^2$ is strictly positive, hence its definite integral should give a positive result for the area beneath the curve. The explanation (as given by k170), lies in the fact that improper integrals have to be given careful treatment.
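Numerically, the blow-up of the antiderivative is easy to see (my illustration): $\tan x$ diverges at the interior singularities $x=\pi/2$ and $x=3\pi/2$, so the fundamental theorem of calculus cannot be applied across $[0,2\pi]$, and the integral is in fact divergent.

```python
import math

# tan(pi/2 - eps) ~ 1/eps blows up as eps -> 0, so the antiderivative
# tan(x) is unbounded inside [0, 2*pi]
values = [math.tan(math.pi / 2 - eps) for eps in (1e-1, 1e-3, 1e-6)]
```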
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Choosing a contour to integrate over. What are the guidelines for choosing a contour? For example to integrate a real function with a singularity somewhere. What type of contour from Square, keyhole, circle, etc should be chosen for integration? (1) In what cases would you choose square? (2) In what cases would you choose a circle? (3) In what cases would you choose a keyhole etc....? In general, what are the guidelines? Thanks!
Here are some brief general guidelines for the three contours you mentioned: * *The circular contour is used when we have an integral from $0$ to $2\pi$, with the integrand consisting of trigonometric functions, such as sine and cosine. Thinking about polar coordinates, Euler's identity ($e^{ix}=\cos(x)+i\sin(x)$) and the circumference of a circle of radius $1$ (i.e., $2\pi\times 1$), the unit circle is the most natural contour to use. *Using the square contour really depends on the nature of the poles and residues of your function. If $\cosh z$ is the denominator of the integrand, then this has infinitely many poles given by $z=i(\frac{\pi}{2}+n\pi)$ with $n$ an integer. Hence you choose the contour such that you enclose a finite, rather than an infinite, number of poles. *We use the keyhole contour when we have branch cuts (an arbitrarily small circuit around a point in $\mathbb{C}$ across which an analytic and multivalued function is discontinuous). For example, take the integral \begin{equation*} \int^{\infty}_{x=0}\frac{x^{-s}}{x+1}dx,~0<s<1. \end{equation*} The numerator of the integrand can be written $e^{-s\ln x}$, which has a branch point at $x=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Sum of roots: Vieta's Formula The roots of the equation $x^4-5x^2+2x-1=0$ are $\alpha, \beta, \gamma, \delta$. Let $S_n=\alpha^n +\beta^n+\gamma^n+\delta^n$ Show that $S_{n+4}-5S_{n+2}+2S_{n+1}-S_{n}=0$ I have no idea how to approach this. Could someone point me in the right direction?
Here is a hint: multiply your equation by $x^n$. When you substitute $\alpha$ in the modified equation, what do you find? How can you relate that to the equation for $S_n$?
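A numerical sanity check of the resulting recurrence, using numpy's root finder (an illustration, not a proof):

```python
import numpy as np

# The roots of x^4 - 5x^2 + 2x - 1 satisfy
#   α^(n+4) - 5·α^(n+2) + 2·α^(n+1) - α^n = 0,
# so the power sums S_n = Σ α^n inherit the same recurrence.
roots = np.roots([1, 0, -5, 2, -1])
S = [np.sum(roots ** n) for n in range(12)]
for n in range(8):
    assert abs(S[n + 4] - 5 * S[n + 2] + 2 * S[n + 1] - S[n]) < 1e-6
```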
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
If $T : F^{2 \times 2} \to F^{2\times 2}$ is $T(A) = PA$ for some fixed $2 \times 2$ matrix $P$, why is $\operatorname{tr} T = 2\operatorname{tr} P$? I am asked to prove that if $T$ is a linear operator on the space of $2 \times 2$ matrices over a field $F$ such that $T(A) = PA$ for some fixed $2 \times 2$ matrix $P$, then $\DeclareMathOperator{\tr}{tr} \tr T = 2 \tr P$. At first I thought about considering the isomorphism between $F^{2 \times 2}$ and $F^4$ so that the matrix of $T$ as it would act on $F^4$ would be a $4 \times 4$ block matrix $(P,0;0,P)$ with trace $2\tr P$, but the trace is not an invariant under isomorphism, only under change of basis. Furthermore, the definition of the trace of a matrix is for a linear operator acting on an arbitrary vector space $V$ over a field $F$, not specifically on $F^n$ for some $n$, so taking an isomorphism shouldn't be necessary at all. Taking a basis of $F^{2 \times 2}$ and using linearity of the trace as a linear functional gives $$ \begin{align} \tr T &= P_{11} \tr \left[\begin{array}{cc}1&0\\0&0\end{array}\right] + P_{12} \tr \left[\begin{array}{cc}0&1\\0&0\end{array}\right] + P_{21} \tr \left[\begin{array}{cc}0&0\\1&0\end{array}\right] + P_{22} \tr \left[\begin{array}{cc}0&0\\0&1\end{array}\right] \\&= P_{11} + P_{22} \\&= \tr P, \end{align} $$ which is not the desired result. (Since the trace is invariant under change of basis, it should not matter which basis we choose for $F^{2 \times 2}$.) What am I missing?
If we represent $T$ as a $4 \times 4$ matrix with respect to your basis, we have $$ [T]_B = \pmatrix{P_{11}&0&P_{12}&0\\0&P_{11}&0&P_{12}\\P_{21}&0&P_{22}&0\\0&P_{21}&0&P_{22}} $$ whose trace is $2P_{11}+2P_{22}=2\operatorname{tr}P$. The point you are missing is that $\operatorname{tr}T$ is not $\sum_{ij}P_{ij}\operatorname{tr}(E_{ij})$; it is the sum of the diagonal entries of $[T]_B$, i.e. the sum over the basis of the coefficient of $E_{ij}$ in $T(E_{ij})$.
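A numerical check of this block structure (assuming row-major flattening of $A$, under which $A\mapsto PA$ becomes the Kronecker product $P\otimes I_2$):

```python
import numpy as np

# For T(A) = P·A on 2×2 matrices, flattening A row by row turns T into
# the 4×4 matrix kron(P, I₂), whose trace is tr(P)·tr(I₂) = 2·tr(P).
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 2))
T = np.kron(P, np.eye(2))

# The columns of T are the flattened images of the basis matrices E_ij.
A = rng.standard_normal((2, 2))
assert np.allclose(T @ A.reshape(4), (P @ A).reshape(4))
assert np.isclose(np.trace(T), 2 * np.trace(P))
```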
{ "language": "en", "url": "https://math.stackexchange.com/questions/1077969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How do I prove the circumference of the Koch snowflake is divergent? How do I prove that the circumference of the Koch snowflake is divergent? Let's say that the line in the first picture has a length of $3$ cm. Since the middle part ($1$ cm) gets replaced with a triangle with side lengths of $1$ cm each, the circumference increases by a factor of $\frac{4}{3}$ at every step. I guess to calculate the circumference the following term should work, no? $\lim\limits_{n \rightarrow \infty}{3\,\mathrm{cm}\cdot\left(\frac{4}{3}\right)^n}$ I know that the limit of the circumference is divergent ( $+\infty$). I also know that a divergent sequence is defined as : But how do I prove syntactically and mathematically correct that the sequence diverges to $+\infty$ ?
Hint:$$\frac{4^n}{3^n}=L\quad\Rightarrow\quad n=\log_{4/3}L$$
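To turn the hint into something concrete: for any bound $L$, the perimeter $3\cdot(4/3)^n$ eventually exceeds $L$, and the threshold $n$ is essentially $\log_{4/3}(L/3)$. A small sketch (the function name is mine):

```python
import math

# The perimeter after n replacement steps is P_n = 3·(4/3)^n (cm).
# Given any bound L, this finds an n with P_n > L, which is exactly the
# divergence statement: the perimeters exceed every real number.
def iterations_to_exceed(L, P0=3.0):
    n = max(1, math.ceil(math.log(L / P0, 4 / 3)))
    while P0 * (4 / 3) ** n <= L:  # guard against float rounding
        n += 1
    return n
```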
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Power series for the rational function $(1+x)^3/(1-x)^3$ Show that $$\dfrac{(1+x)^3}{(1-x)^3} =1 + \displaystyle\sum_{n=1}^{\infty} (4n^2+2)x^n$$ I tried with the partial frationaising the expression that gives me $\dfrac{-6}{(x-1)} - \dfrac{12}{(x-1)^2} - \dfrac{8}{(x-1)^3} -1$ how to proceed further on this having doubt with square and third power term in denominator.
The simplest way to prove your identity, IMHO, is to multiply both sides by $(1-x)^3$. This leads to: $$ 1+3x+3x^2+x^3\stackrel{?}{=}(1-3x+3x^2-x^3)\left(1+\sum_{n\geq 1}(4n^2+2)\,x^n\right).\tag{1}$$ If we set $a_n=(4n^2+2)$, for any $n\geq 4$ the coefficient of $x^n$ in the RHS is given by $a_n-3a_{n-1}+3a_{n-2}-a_{n-3}$, which is zero, since we are applying three times the backward difference operator to a polynomial in $n$ having degree two. So we just have to check that the first four coefficients, $[x^0],[x^1],[x^2],[x^3]$, match.
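A quick check of the expansion with sympy, comparing Taylor coefficients against $4n^2+2$:

```python
import sympy as sp

# Verify (1+x)³/(1-x)³ = 1 + Σ (4n²+2)·xⁿ by comparing Taylor
# coefficients at 0 up to a modest order.
x = sp.symbols('x')
f = (1 + x) ** 3 / (1 - x) ** 3
series = sp.series(f, x, 0, 10).removeO()
assert series.coeff(x, 0) == 1
for n in range(1, 10):
    assert series.coeff(x, n) == 4 * n ** 2 + 2
```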
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Differentiate Archimedes's spiral I read that the only problem of differential calculus Archimedes solved was constructing the tangent to his spiral, $$r = a + b\theta$$ I would like to differentiate it but I don't know much about differentiating polar functions and can't find this particular problem online. Without giving me a full course in differential geometry, how does one calculate the tangent to the curve at $\theta$?
Let $r(\theta)=a+b\theta$ the equation of the Archimedean spiral. The Cartesian coordinates of a point with polar coordinates $(r,\theta)$ are $$\left\{\begin{align} x(r,\theta)&=r\cos\theta\\ y(r,\theta)&=r\sin\theta \end{align}\right. $$ so a point on the spiral has coordinates $$\left\{\begin{align} x(\theta)&=r(\theta)\cos\theta=(a+b\theta)\cos\theta\\ y(\theta)&=r(\theta)\sin\theta=(a+b\theta)\sin\theta \end{align}\right. $$ Differentiating we have $$ \left\{\begin{align} x'(\theta)&=b\cos\theta-(a+b\theta)\sin\theta\\ y'(\theta)&=b\sin\theta+(a+b\theta)\cos\theta \end{align}\right. $$ The parametric equation of a line tangent to the spiral at the point $(r(\theta_0),\theta_0)$ is $$ \left\{\begin{align} x(\theta)&=x(\theta_0)+x'(\theta_0)(\theta-\theta_0)=x(\theta_0)+[b\cos\theta_0-(a+b\theta_0)\sin\theta_0](\theta-\theta_0)\\ y(\theta)&=y(\theta_0)+y'(\theta_0)(\theta-\theta_0)=y(\theta_0)+[b\sin\theta_0+(a+b\theta_0)\cos\theta_0](\theta-\theta_0) \end{align}\right. $$ or in Cartesian form $$ y(\theta)-y(\theta_0)=\frac{y'(\theta_0)}{x'(\theta_0)}[x(\theta)-x(\theta_0)]=\frac{b\sin\theta_0+(a+b\theta_0)\cos\theta_0}{b\cos\theta_0-(a+b\theta_0)\sin\theta_0}[x(\theta)-x(\theta_0)] $$ where the slope of the line is $$ \frac{y'(\theta)}{x'(\theta)}=\frac{\operatorname{d}y}{\operatorname{d}x}=\frac{b\sin\theta+(a+b\theta)\cos\theta}{b\cos\theta-(a+b\theta)\sin\theta}=\frac{b\tan\theta+(a+b\theta)}{b-(a+b\theta)\tan\theta} $$
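A quick check of the slope formula against a finite-difference approximation (the parameter values are arbitrary):

```python
import math

# Compare the closed-form slope dy/dx of the tangent with a central
# finite difference along the spiral r = a + b·θ.
def xy(theta, a, b):
    r = a + b * theta
    return r * math.cos(theta), r * math.sin(theta)

def slope_formula(theta, a, b):
    r = a + b * theta
    return (b * math.sin(theta) + r * math.cos(theta)) / \
           (b * math.cos(theta) - r * math.sin(theta))

def slope_numeric(theta, a, b, h=1e-6):
    (x1, y1), (x2, y2) = xy(theta - h, a, b), xy(theta + h, a, b)
    return (y2 - y1) / (x2 - x1)
```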
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
"Almost mean" of a set of integer I have three integers $(a, b, c)$ (but this can be generalized to any size). I want to redistribute the sum $a+b+c$ as equally as possible between three variables. In our case, we have three cases: If $a+b+c=3k$, then the solution is $(k, k, k)$. If $a+b+c=3k+1$, then the solution is $(k, k, k+1)$. If $a+b+c=3k+2$, then the solution is $(k+1, k+1, k)$. Does this notion have a name in mathematics? Any references? Thanks.
This really is just the "division algorithm." Let $m$ be the sum you want, and $n$ be the number of variables. Use the division algorithm to write $m=nq+r$ with $q$ the quotient and $0\leq r<n$ the remainder. Then you can write $m$ as the sum of $r$ instances of $q+1$ and $n-r$ instances of $q$.
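In code, the whole construction is a single `divmod` call; here is a small sketch (the function name is mine):

```python
def spread(total, n):
    # Redistribute `total` into n integer parts that differ by at most 1,
    # exactly as in the division-algorithm argument: total = n·q + r,
    # giving (n - r) copies of q and r copies of q + 1.
    q, r = divmod(total, n)
    return [q] * (n - r) + [q + 1] * r

print(spread(14, 3))  # → [4, 5, 5], the 3k+2 case with k = 4
```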
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Splitting up A Conditional Probability When, if ever, is it true for random variables $a,b,c,d$ that $P(a,b,c\mid d) = P(a\mid d)P(b\mid d)P(c\mid d)$? Is it related to the dependencies among the variables?
It can be true, and yes, it is related to the dependencies among the variables. It holds when variables $a$, $b$ and $c$ are conditionally independent given $d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing $(Tp)(x) = x^2p(x)$ is a linear map (transformation) Define a linear map function $T: \mathcal{P}(\mathbb{R}) \to \mathcal{P}(\mathbb{R})$ where $\mathcal{P}(\mathbb{R})$ is the set of all polynomials with real-valued coefficients. Now let $T$ belong to the set of all possible linear maps from $\mathcal{P}(\mathbb{R})$ to itself, i.e.: $T \in \mathcal{L}(\mathcal{P}(\mathbb{R}), \mathcal{P}(\mathbb{R}))$ whereby we define this to be, $$(Tp)(x) = x^2p(x) \tag{1}$$ for $x \in \mathbb{R}$ and where $p : \mathbb{R} \to \mathbb{R}$ is a polynomial function. To show that (1) is a linear map we must show that it follows additivity and homogeneity. I'm not quite sure how to show that (1) homogeneous. Thanks for any help! Note: the above can be found in Axler's Linear Algebra Done Right on page 39.
Let $p(x)=a_0+a_1x+a_2x^2+\cdots+a_nx^n$. Now let $\lambda\in \mathbb R$; then $\lambda p(x)=\lambda a_0+(\lambda a_1)x+(\lambda a_2)x^2+\cdots+(\lambda a_n)x^n$. Then $T(\lambda p(x))=(\lambda a_0)x^2+(\lambda a_1)x^3+(\lambda a_2)x^4+\cdots+(\lambda a_n)x^{n+2}=\lambda x^2(a_0+a_1x+ a_2x^2+\cdots+a_nx^n)=\lambda T(p(x))$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Expected value of time integral of a gaussian process While working on a problem I've stumbled upon some expected values of time integrals of Gaussian stochastic processes. Some of them were addressed this question, but I have found also this one $$\left\langle\int _0^tdt_1\int_0^{t_1}dt_2\left(B\left(t_1\right)-B\left(t_2\right)\right)\int _0^tdt_3\int_0^{t_3}dt_4\left(B\left(t_3\right)-B\left(t_4\right)\right)\right\rangle$$ Here $B(t)$ is a stationary Gaussian process with known autocorrelation function $K(t_1-t_2)=\langle B(t_1)B(t_2) \rangle$ and $\langle \cdot \rangle$ is the expected value over all possible realizations of the stochastic process. I tried writing the integral as $$\left\langle\int _0^tdt_1\int_0^{t_1}dt_2\int _0^tdt_3\int_0^{t_3}dt_4\left[B\left(t_1\right)B(t_3)-B(t_1)B(t_4)-B(t_2)B\left(t_3\right)+B(t_2)B\left(t_4\right)\right]\right\rangle$$ and then exchanging the expectation value with the integral. This way I can integrate the known form of $K(t_i-t_j)$. However there must be something wrong with my reasoning because I expect this quantity to be positive while, for example by choosing a Ornstein-Uhlenbeck process, with $K(t_i-t_j) = \frac \gamma 2 e^{-\gamma |t_i-t_j|}$, I obtain non-positive function, namely $$ \frac{4}{\gamma ^3} -\frac{5 t^2}{4 \gamma } + \frac{5 t^3}{12} - e^{-\gamma t} \left(\frac{4}{\gamma ^3}+ \frac{4 t}{\gamma ^2} +\frac{t^2}{\gamma }\right) $$
Indeed, the result is $$\int _0^t\mathrm dt_1\int_0^{t_1}\mathrm dt_2\int _0^t\mathrm dt_3\int_0^{t_3}\mathrm dt_4\left[K(t_1-t_3)-K(t_1-t_4)-K(t_2-t_3)\color{red}{+}K(t_2-t_4)\right],$$ which should be nonnegative. Note that the last sign $\color{red}{+}$ reads $-$ in your post. Edit: One asks to compute the second moment of $$X_t=\int_0^t(2s-t)B(s)ds,$$ hence a formula equivalent to the integral in the question is $$\langle X_t^2\rangle=2\int_0^t\mathrm ds(2s-t)\int_0^s\mathrm du(2u-t)K(s-u).$$ When $K(s)=\mathrm e^{-s}$ for every $s\geqslant0$, one finds $$\langle X_t^2\rangle=4-t^2+\tfrac13t^3-(t+2)^2\mathrm e^{-t},$$ which is nonnegative for every $t\geqslant0$, from which one deduces the result for every $\gamma$, by homogeneity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do we really need reals? It seems to me that the set of all numbers really used by mathematics and physics is countable, because they are defined by means of a finite set of symbols and, eventually, by computable functions. Since almost all real numbers are not computable, it seems that real numbers are a set too big and most of them are unnecessary. So why do mathematicians love this plethoric set of numbers so much?
Suppose you have a square of area $1\,\mathrm{cm}^2$ and you want to calculate the length of the diagonal. If you don't have the notion of real numbers, then you cannot express the length of the diagonal. Another way to say this is that in a countable system you can always find some equation that has no solution in that system. For example, in the rational number system there exists no solution of the equation $x^2-2=0$. Note: mathematicians are always concerned with mathematical structures in which every equation has a solution. In the natural number system there exists no solution of the equation $x+2=1$, so mathematicians extended the system of natural numbers to the set of integers. Again, there exists no solution of the equation $x^2-2=0$ in the rational number system, so mathematicians extended the rational number system to the real number system. Again, the equation $x^2+1=0$ has no solution in the real numbers, so mathematicians extended the real number system to the complex number system. Indeed, in the complex number system every polynomial equation has a solution. So they stopped...:) Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55", "answer_count": 13, "answer_id": 10 }
Easy example why complex numbers are cool I am looking for an example explainable to someone only knowing high school mathematics why complex numbers are necessary. The best example would be possible to explain rigourously and also be clearly important in a daily day sense. I.e. complex Laplace transform has applications in pricing of options in mathematical finance which is somewhat easy to sell as important, but impossible to explain the details of. It is easy to say: Then we can generalise the square root! - but it is harder to argue why that makes any difference in the real world. The question has edited the wording cool out of it replaced with a description to stop it from being opinion based. I hope it helps :)
Complex numbers can be used to give a very simple and short proof that the sum of the angles of a triangle is 180 degrees. Construct a triangle in the plane whose vertices are complex numbers $Z_1$, $Z_2$ and $Z_3$. Recall that the angle $\theta$ at a vertex like $ABC$ is the angular part $\theta$ of $(C-B)/(A-B) = r\exp(i\theta)$. It is written $\theta = \arg(r\exp(i\theta))$ and is defined only modulo $2\pi$, although conventionally a representative in the interval $(-\pi, \pi]$ is taken. Now recall that $\arg(z w) =\arg(z) + \arg(w)$ for any nonzero complex numbers $z$ and $w$: this follows immediately from the multiplicative property of the exponential, $\exp(i\theta)\exp(i\phi) = \exp(i(\theta + \phi))$. The sum of the three angles at vertices $Z_1$, $Z_2$, and $Z_3$ is obtained merely by rearranging the product of three numbers in the denominator of a fraction. So now: $$\eqalign{\arg\frac{Z_2-Z_1}{Z_3-Z_1} + \arg\frac{Z_3-Z_2}{Z_1-Z_2}+\arg\frac{Z_1-Z_3}{Z_2-Z_3} &= \arg\left(\frac{Z_2-Z_1}{Z_3-Z_1}\cdot\frac{Z_3-Z_2}{Z_1-Z_2}\cdot\frac{Z_1-Z_3}{Z_2-Z_3}\right) \\ &= \arg\left(\frac{Z_2-Z_1}{Z_1-Z_2}\cdot\frac{Z_3-Z_2}{Z_2-Z_3}\cdot\frac{Z_1-Z_3}{Z_3-Z_1}\right) \\ &= \arg\left((-1)^3\right) = \arg(-1) \\ &= \pi. }$$ Ligget se. I'll always remember Ravi Kulkarni showing us this trick at the end of one of his lectures and quoting his granddaughter, "Cool." I hope your students agree-especially if they've just been tortured with the painfully tedious classical proof of the result.
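A numerical spot-check of the identity with a concrete triangle (the vertices are arbitrary; `cmath.phase` plays the role of $\arg$):

```python
import cmath
import math

# The interior angles of a triangle with vertices given as complex
# numbers sum to π.
def interior_angle(A, B, C):
    # unsigned angle at vertex B; it lies in (0, π) for a
    # nondegenerate triangle, so the absolute value of arg is enough
    return abs(cmath.phase((C - B) / (A - B)))

Z1, Z2, Z3 = 0 + 0j, 1 + 0j, 0.3 + 0.8j
total = (interior_angle(Z2, Z1, Z3)    # angle at Z1
         + interior_angle(Z1, Z2, Z3)  # angle at Z2
         + interior_angle(Z1, Z3, Z2)) # angle at Z3
assert abs(total - math.pi) < 1e-9
```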
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85", "answer_count": 26, "answer_id": 23 }
Connections of theory of computability and Turing machines to other areas of mathematics The question is quite straightforward: Could you point out some reference papers that highlight (in a way that is fairly accessible) the connections between (1) theory of computability, algorithms, and Turing machines and (2) other areas of mathematics.
You can consider Computability and Complexity : [for the] study and classification of which mathematical problems are computable and which are not. In addition, there is an extensive classification of computable problems into computational complexity classes according to how much computation — as a function of the size of the problem instance — is needed to answer that instance. See e.g. :

* Sanjeev Arora & Boaz Barak, Computational complexity : A Modern Approach (2009)
* Pavel Pudlak, Logical Foundations of Mathematics and Computational Complexity (2013).

And also Computable Analysis i.e. : is the study of mathematical analysis from the perspective of computability theory. It is concerned with the parts of real analysis and functional analysis that can be carried out in a computable manner. See e.g. :

* Klaus Weihrauch, Computable Analysis : An Introduction (2000) [and also here].
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Rewriting the heat diffusion equation with temperature dependent diffusion coefficient to include joule heating. I am modelling heat flow in a solid round copper conductor with a set area. I plan to discretize and solve numerically in Python. However, I only have a curve fit for thermal conductivity and specific heat. I'm hoping this won't be a problem as I am not seeking an analytical solution. Here is where I have started: \begin{equation} \frac{\partial u}{\partial t} = \frac{k(u)}{\rho C(u)}\frac{\partial^2 u}{\partial x^2} \end{equation} Where $k(u)$ and $C(u)$ are the thermal conductivity and specific heat, respectively. $\rho$ is density. Now, in the initial estimation of conductor heat load, in watts, I used the following: \begin{equation} \frac{dQ}{dt} = I^2r\frac{L}{A} + k\frac{A}{L}\Delta u \end{equation} Where $I$ is electric current, $r$ is resistivity, $k$ is thermal conductivity, $\Delta u$ is temperature change, and $L$ and $A$ are length and area of the conductor, respectively. I interpreted this answer to mean that the cold side of the conductor is seeing $\frac{dQ}{dt}$ joules per second (or watts), some of which is coming from the temperature difference in the thermally anchored ends, $k\frac{A}{L}\Delta u$, and some which is coming from the current passing through, $I^2r\frac{L}{A}$. Heat is related to temperature change by the following equation: \begin{equation} Q=mc\Delta T \end{equation} Is this relationship what will allow me to incorporate the resistive heating $I^2r\frac{L}{A}$ into the original pde? Do I take the time derivative of both sides? My related background: BSEE and one ode class. I have read quite a bit about discretizing pde's and also about solving 2nd order linear pde's. From all the searching I've done it seems like nonlinear pdes are very problem specific in what works. While an analytic solution would be nice (if possible), I'd be quite happy with a non linear pde that incorporates resistive heating. 
From there I feel I could find a numerical solution. Thank you in advance for any help, comments, or observations.
$$ \rho \cdot C(u)\frac{\partial u}{\partial t} = k(u)\cdot\frac{\partial^2u}{\partial x^2} $$ Due to the different type of terms, I was unable to use the equation in the form from the OP. I was able to get reasonable answers after switching the terms to what is shown above.
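To make the numerical route concrete, here is a minimal explicit finite-difference sketch of $\rho\, C(u)\,u_t = k(u)\,u_{xx} + q$ with a volumetric Joule source. Assumptions of mine, not from the thread: the source is taken as $q = I^2 r/A^2$ in $\mathrm{W/m^3}$ (the OP's $I^2 r L/A$ power divided by the conductor volume $AL$), constant stand-ins replace the $k(u)$ and $C(u)$ curve fits, and the ends are pinned at a fixed temperature:

```python
import numpy as np

# Explicit finite-difference update for rho·C(u)·u_t = k(u)·u_xx + q,
# with fixed-temperature (Dirichlet) ends.
def step(u, dt, dx, rho, C, k, q):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    un = u + dt * (k(u) * lap + q) / (rho * C(u))
    un[0], un[-1] = u[0], u[-1]  # thermally anchored ends
    return un

# toy constants (roughly copper); the real curve fits for k(u) and C(u)
# can replace these lambdas unchanged
rho = 8960.0
C = lambda u: 385.0 + 0 * u
k = lambda u: 400.0 + 0 * u
u = np.full(21, 300.0)
for _ in range(200):
    u = step(u, dt=0.05, dx=0.01, rho=rho, C=C, k=k, q=1.0e6)
```

The explicit scheme needs roughly $dt \lesssim \rho C\,dx^2/(2k)$ for stability; with the toy numbers above that bound is about $0.4\,\mathrm{s}$, so $dt=0.05$ is safe.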
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Basis if and only if $\varphi$ is an isomorphism Let $V$ be a finite dimensional vector space. I am aware of the theorem stating that if $\varphi \colon V \rightarrow V$ is an automorphism and $\mathcal{A}=(a_1, \ldots, a_n)$ is a basis of V then $\varphi(\mathcal{A})=(\varphi(a_1), \ldots, \varphi(a_n))$ is another basis of $V$. However, suppose we have some other set $\mathcal{B}=(b_1, \ldots, b_n)$ and we want to prove that it is also a basis of V. I know how to prove it working from definitions but i was wondering if you could construct a linear map $f$ $$ f(x_1 a_1 + \ldots + x_n a_n) = y_1 b_1 + y_n b_n $$ such that if $\mathcal{A}$ is a basis then $\mathcal{B}$ is another basis if and only if $f$ is an isomorphism. That way we could reduce a problem of showing that $\mathcal{B}$ is a basis to show that a map $f$ is an isomorphism.
Yes, you are right. If you can show that $b_1,\ldots,b_n$ is the image of a basis $a_1,\ldots,a_n$ by an isomorphism $f$, then you have shown that $\left\{b_i\right\}_{i=1,\ldots,n}$ is a basis. This happens because since $f$ is an isomorphism, the image of $f$ has dimension $n$. But the image of $f$ is generated by $f(a_1),\ldots,f(a_n)$, hence the vectors $b_i=f(a_i)$ generate a space of dimension $n$. There are $n$ such vectors, and so they form a basis for the space they generate (which is $V$ itself).
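A small numerical illustration with matrices (the specific matrix $F$ is arbitrary, chosen only because its determinant is nonzero):

```python
import numpy as np

# If f is an isomorphism (here: an invertible matrix F) and the columns
# of A form a basis, then the columns of F·A form a basis too; a handy
# numerical witness is a nonzero determinant.
A = np.eye(3)                                     # standard basis, as columns
F = np.array([[2., 1, 0], [0, 1, 1], [1, 0, 1]])  # det = 3, so invertible
B = F @ A
assert abs(np.linalg.det(B)) > 1e-12              # images again form a basis
```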
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate $ \int \sec^3x \, \mathrm{d}x$? How to evaluate $ \int \sec^3x \, \mathrm{d}x$ ? I tried integration by parts and then I got $$\int \sec^3x \, \mathrm{d}x=\sec x \tan x - \int\sec x \tan^2 x \, \mathrm{d}x.$$ Now I'm stuck solving $\int\sec x \tan^2 x \, \mathrm{d}x$? How to solve that and how to solve my initial question? What is the smartest way solving $ \int \sec^3x \, \mathrm{d}x$?
This is a well-known problem. Integrating by parts with $ \left( u, \mathrm{d}v \right) = \left( \sec x, \sec^2 x \, \mathrm{d}x \right) $ gives $$ \begin {align*} \displaystyle\int \sec^3 x \, \mathrm{d}x &= \sec x \tan x - \displaystyle\int \sec x \tan^2 x \, \mathrm{d}x \\&= \sec x \tan x - \displaystyle\int \sec^3 x \, \mathrm{d}x + \displaystyle\int \sec x \, \mathrm{d}x, \end {align*} $$ using $\tan^2 x = \sec^2 x - 1$ in the second step. If we let our integral be $I$, then $ 2I = \sec x \tan x + \displaystyle\int \sec x \, \mathrm{d}x = \sec x \tan x + \ln \left| \sec x + \tan x \right| + \mathcal{C}_0 $, so we have $$ \displaystyle\int \sec^3 x \, \mathrm{d}x = \boxed {\dfrac {\sec x \tan x + \ln \left| \sec x + \tan x \right|}{2} + \mathcal{C}}. $$
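As a quick check, we can differentiate the boxed antiderivative symbolically and confirm that it reproduces $\sec^3 x$ (numeric evaluation at sample points, at which $\sec x + \tan x > 0$ so the absolute value can be dropped):

```python
import sympy as sp

# Differentiate the boxed antiderivative and check it reproduces sec³x.
x = sp.symbols('x')
F = (sp.sec(x) * sp.tan(x) + sp.log(sp.sec(x) + sp.tan(x))) / 2
residual = sp.diff(F, x) - sp.sec(x) ** 3
for x0 in (sp.Rational(3, 10), sp.Rational(7, 10)):
    assert abs(float(sp.N(residual.subs(x, x0)))) < 1e-9
```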
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Is $x\in\{\{x\}\}$ Is $x\in\{\{x\}\}$. I understand that $x\in\{x\}\in\{\{x\}\}$ does this mean $x\in\{\{x\}\}$? Very simple just unsure about the properties of $\in$, not looking for an extravagant answer, thanks in advance for the help.
The answer is no. $\{\{x\}\}$ has exactly one element, namely $\{x\}$. In general, to test if $a\in b$, we do the following:

* Is $b$ a set?
* If so, is $a$ one of the elements in the set $b$? $a$ can be a set itself, or not, doesn't matter.
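Python's `frozenset` gives a concrete model of the same membership test (`frozenset` rather than `set`, because ordinary sets are unhashable and cannot be nested):

```python
# frozenset mirrors the set-membership rules: x ∈ {x} and {x} ∈ {{x}},
# but x ∉ {{x}} — membership is not transitive.
x = 1
s = frozenset([x])    # {x}
ss = frozenset([s])   # {{x}}
assert x in s
assert s in ss
assert x not in ss
```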
{ "language": "en", "url": "https://math.stackexchange.com/questions/1078981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Vector Field in a complex projective space This question is motivated by this answer here. Let $\mathbb{C}P^{n}$ be a complex projective space. Let $X\in\Gamma(T\mathbb{C}P^{n})$, be a vector field. It seems, by the answer I got in the mentioned link, that the zeros of $X$ are the points $[z]\in\mathbb{C}P^{n}$ such that $X([z])=\lambda[z]$, for some $\lambda$. Can anyone please make this clear for me ? In another way : What does it mean to be a zero of a vector field in a complex projective space ? Thanks!
Let's try a different approach. We can think of $\Bbb CP^n$ as the usual quotient of $\Bbb C^{n+1}-\{0\}$ with the projection map $\pi(z) = [z]$. Let's think of a vector field $X$ on $U\subset\Bbb CP^n$ as being pushed down from an appropriate vector field $\tilde X$ on $\pi^{-1}(U)$ with the property that $\pi_{*z}\tilde X(z) = \pi_{*\lambda z}\tilde X(\lambda z)$ for all nonzero (functions) $\lambda$ on $U$. Now, we can visualize a zero $[z]$ of $X$ in terms of saying that $\pi_{*z}\tilde X(z) = 0$, which means that $\tilde X(z)$ is a multiple of $z\in\Bbb C^{n+1}$. Along these lines, also see my answer to this question about the Euler sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the indicator function of the rationals Riemann integrable? $f(x) = \begin{cases} 1 & x\in\Bbb Q \\[2ex] 0 & x\notin\Bbb Q \end{cases}$ Is this function Riemann integrable on $[0,1]$? Since rational and irrational numbers are dense on $[0,1]$, no matter what partition I choose, there will always be rational and irrational numbers in every small interval. So the upper sum and lower of will always differ by $1$. However, I know rational numbers in $[0,1]$ are countable, so I can index them from 1 to infinity. For each rational number $q$ in $[0,1]$, I can cover it by $[q-\frac\epsilon{2^i},q+\frac\epsilon{2^i}]$. So all rational numbers in $[0,1]$ can be covered by a set of measure $\epsilon$. On this set, the upper sum is $1\times\epsilon=\epsilon$. Out of this set, the upper sum is 0. So the upper sum and lower sum differ by any arbitrary $\epsilon$. Thus, the function is integrable. One of the above arguments must be wrong. Please let me know which one is wrong and why. Any help is appreciated.
Recall that a function is Riemann integrable if and only if, for any $\varepsilon > 0$, there exists a partition $P$ such that $U(f, P) - L(f, P) < \varepsilon$. Now consider any partition $P$ of $[0,1]$. The lower sum is always zero since the infimum of the function values along any interval is zero. Further, the supremum of the function values along any interval is $1$ as every interval contains a rational number, so we have: $$U(f, P) - L(f, P) = U(f, P) = \sum_{k = 1}^n (x_k - x_{k-1}) = 1$$ And so the integrability criterion in the first line fails for any $\varepsilon < 1$. This is also why your second argument fails: covering the rationals by countably many intervals of total length $\varepsilon$ computes a Lebesgue-style outer measure, but a Riemann partition consists of only finitely many intervals, each of which necessarily contains rationals, so the upper sum can never be pushed below $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 2 }
The finite field extension Let field $K$ embedded into the finite field $M$. Prove that $M = K(\theta)$ for some $\theta \in M$. I have tried 2 ways but got stuck at both. 1) Let $|K| = p^s$ and $|M| = p^{st}$ for prime $p$ and $s, t \in \mathbb N$. And if we find the irreducible polynomial $f(x) \in K[x]$ where $deg f(x) = t$, and $f(x)$ has at least one root in $M$ we will solve the problem. But I can not prove there is always such polynomial. 2) The second idea was to represent $K$ and $M$ as $L(\theta)$ where $L = \{n \cdot 1: n = 0,1,\ldots,p-1\}$, where $p$ is characteristic of fields. And if $K = L(\theta_1)$ and $M = L(\theta_2)$ we can say that $M = K(\theta_2)$. But I do not know how to prove that we always can represent a finite field as $L(\theta)$ where $L = \{n \cdot 1: n = 0,1,\ldots,p-1\}$, and moreover I do not know is it truth or not. Thank you for your help!
One easy approach is to appeal to the following theorem: If $F$ is any field, and $G$ is a finite subgroup of $F^\times$, then $G$ is cyclic. Since $M$ is finite, we can take $G=M^\times$, so there is some $\theta\in M$ such that every non-zero element of $M$ is equal to $\theta^n$ for some $n$. Clearly, this $\theta$ will have the property that $M=K(\theta)$.
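A small illustration of the quoted theorem in the prime field $\mathbb{F}_7$ (the choice $\theta=3$ is just one generator; modular exponentiation stands in for field arithmetic):

```python
# The multiplicative group of F_7, namely {1,...,6}, is cyclic and is
# generated by θ = 3: every nonzero element is a power of 3, so
# F_7 = F_7(3) trivially.
p, g = 7, 3
powers = {pow(g, k, p) for k in range(1, p)}
assert powers == set(range(1, p))
```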
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
AlgebraII factoring polynomials equation: $2x^2 - 11x - 6$ Using the quadratic formula, I have found the zeros: $x_1 = 6, x_2 = -\frac{1}{2}$ Plug the zeros in: $2x^2 + \frac{1}{2}x - 6x - 6$ This is where I get lost. I factor $-6x - 6$ to: $-6(x + 1)$, but the answer says otherwise. I am also having trouble factoring the left side. Could someone please explain to me why the answer to the question was: $(x - 6)(2x + 1)$. How does $-\frac{1}{2}$ become $1$?
When you factor the polynomial $2x^2-11x-6$, you get $(x-6)(2x+1)$ (David Peterson did the factoring process). This shows that the function has two zeros on the graph. Thus, we set $(x-6)(2x+1)$ equal to $0$: $$(x-6)(2x+1)=0.$$ Then we find the zeros: $$(x-6)(2x+1)=0$$ $$x-6=0$$ $$\boxed{x=6}$$ $$2x+1=0$$ $$2x=-1$$ $$\boxed{x=-\frac{1}{2}}.$$ Therefore, we can clearly see that by factoring and solving for $x$, we get $$\boxed{x=6,-\frac{1}{2}}.$$
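The factorization and the zeros can be double-checked symbolically with sympy:

```python
import sympy as sp

# Check the factorization and the zeros of 2x² - 11x - 6.
x = sp.symbols('x')
poly = 2 * x ** 2 - 11 * x - 6
assert sp.expand((x - 6) * (2 * x + 1) - poly) == 0   # same polynomial
assert set(sp.solve(poly, x)) == {6, sp.Rational(-1, 2)}
```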
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
To show that $f (x) = | \cos x | + |\sin x |$ is not one one and onto and not differentiable Let $f : \mathbb{R} \longrightarrow [0,2]$ be defined by $f (x) = | \cos x | + |\sin x |$. I need to show that $f$ is not one one and onto. I have only intuitive idea that $\cos x$ is even function so image of $x$ and $-x$ are same. Not one to one , but how do I properly check for other things. Hints ? Thanks
hints: 1) Both $\sin$ and $\cos $ share the same period. Use that to show the function is not one to one. 2) Each of $\sin $ and $\cos$ is bounded between $-1$ and $1$. Use that to show that the function is not onto by showing it is bounded between $0$ and $2$ (assuming you take as the codomain $\mathbb R$, otherwise, the question has no meaning). It appears you edited the question so the domain is now $[0,2]$. So, now to show the function is not onto, show that $0$ is never attained by remembering basic facts about the trigonometric functions. 3) Apply the limit definition of the derivative at $0$, and show the limit does not exist. Alternatively, if the function was differentiable at $0$, then in a small enough neighborhood of $0$ the function $f(x)-\cos x$ would also be differentiable (as the difference of differentiable functions). However, that function is $|\sin x|$, and you can show that function is not differentiable at $0$ by applying the definition again.
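A numerical illustration of the non-injectivity (e.g. $f(0)=f(\pi/2)=1$) and of hint 3, the mismatched one-sided difference quotients at $0$:

```python
import math

f = lambda t: abs(math.cos(t)) + abs(math.sin(t))

# Not one-to-one: f(0) = f(π/2) = 1.
assert abs(f(0.0) - f(math.pi / 2)) < 1e-12

# Not differentiable at 0: the one-sided difference quotients disagree.
h = 1e-7
right = (f(h) - f(0.0)) / h   # near +1, the derivative of cos t + sin t
left = (f(0.0) - f(-h)) / h   # near -1
assert abs(right - 1) < 1e-3 and abs(left + 1) < 1e-3
```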
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
prove that $a^2 b^2 (a^2 + b^2 - 2) \ge (a + b)(ab - 1)$ Good morning help me to show the following inequality for all $a$, $b$ two positive real numbers $$a^2 b^2 (a^2 + b^2 - 2) \ge (a + b)(ab - 1)$$ thanks you
Let $a+b=2u$ and $ab=v^2$, where $v>0$. Hence, we need to prove that $2v^4u^2-(v^2-1)u-v^6-v^4\geq0$, for which it's enough to prove that $u\geq\frac{v^2-1+\sqrt{(v^2-1)^2+8v^4(v^4+v^6)}}{4v^4}$ or $(4v^5-v^2+1)^2\geq(v^2-1)^2+8v^4(v^4+v^6)$ because $u\geq v$, or $(v-1)^2(v+1)(v^2+v+1)\geq0$. Done!
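Before (or after) working through the algebra, a random spot-check of the inequality can build confidence (sampling only, not a proof; the ranges and tolerance are arbitrary):

```python
import random

# Spot-check a²b²(a²+b²-2) ≥ (a+b)(ab-1) for positive a, b.
random.seed(0)
for _ in range(10_000):
    a = random.uniform(0.01, 10.0)
    b = random.uniform(0.01, 10.0)
    lhs = a * a * b * b * (a * a + b * b - 2)
    rhs = (a + b) * (a * b - 1)
    # small tolerance to absorb float rounding near the equality case a=b=1
    assert lhs >= rhs - 1e-9 * max(1.0, abs(lhs), abs(rhs))
```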
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
A probability function is determined on a dense set- Where is density used in the following proof? A probability function is determined on a dense set- Where is density used in the following proof? Consider the following theorem and proof from Resnick's book A probability path. I cannot really see where the assumption that $D $ is dense in $R $ is used. Can you enlighten me? Is it needed for $(8.2) $? I think there would be sufficient if there existed an $x' \in D$ such that $x' \ge x $ for it to hold. Thanks in advance!
You’re right: as long as $D$ contains arbitrarily large reals, $F$ is right continuous. However, $F$ need not extend $F_D$ even if $D$ is dense in $\Bbb R$, so the last sentence of the statement of the lemma is a non sequitur. To see this, let $D=\Bbb R\setminus\{1\}$, and let $F_D(x)=0$ if $x\le 0$ and $F_D(x)=1$ if $0<x\in D$. Then $$F(0)=1\ne 0=F_D(0)\;,$$ and $F\upharpoonright D\ne F_D$. If the real goal was to prove that if $F$ and $G$ are right continuous df’s that agree on a dense $D\subseteq\Bbb R$, then $F=G$, it could have been accomplished much more simply. Let $x\in\Bbb R\setminus D$; $D$ is dense, so there is a sequence $\langle x_n:n\in\Bbb N\rangle$ in $D$ converging to $x$ from the right. $F$ and $G$ are right continuous, and $F(x_n)=G(x_n)$ for $n\in\Bbb N$, so $$F(x)=\lim_nF(x_n)=\lim_nG(x_n)=G(x)\;,$$ and therefore $F=G$. Also, one can prove that if $D$ is dense in $\Bbb R$, and $F_D$ is right continuous on $D$ as well as satisfying the hypotheses of Lemma $8.1.1$, then $F$ is a right continuous df extending $F_D$: the extra hypothesis that $F_D$ is right continuous on $D$ allows us to show that if $x\in D$, then $F(x)=F_D(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the area under a speed time graph I recently learned about integration and I wondered how it could be applied to a speed time graph since it does not have a particular equation of a line that one can integrate. Do you split it into parts? I have heard of a type of integration which specialises in putting shapes under th graph to find the area so that might be the answer. An example of one can be something like this : Thanks for the clarification Gedr Edit : I'm mainly interested in points C to D since all others are basically triangles.
For a speed–time graph, the area under the curve gives the displacement (the antiderivative of speed is position). To calculate the area under this graph you can separate it into subregions: (figure: the graph split into rectangles, triangles and one curved piece). Then each area is just a simple geometric figure: for the rectangles you use the very well known formula $base\cdot height$ and for the triangles $ \dfrac {base\cdot height} 2$. Notice, however, that to calculate the area under the curved region, we need the formula that describes the speed in that timeframe. Edit: I didn't see your edit; we can approximate that area without knowing the formula by splitting that region into smaller rectangles, triangles, etc.
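The "smaller rectangles" idea for the curved C-to-D piece is just a midpoint Riemann sum. A minimal sketch, where the speed function $v(t)$ is a made-up example (not read off the graph in the question):

```python
# Midpoint-rule sketch: approximate the area under a speed-time curve by
# summing thin rectangles sampled at interval midpoints.
def v(t):
    return 3.0 * t * (4.0 - t)   # hypothetical curved speed segment, 0 <= t <= 4

def area_under(v, a, b, n=100000):
    """Sum n rectangles of width h whose heights are sampled at midpoints."""
    h = (b - a) / n
    return sum(v(a + (i + 0.5) * h) for i in range(n)) * h

distance = area_under(v, 0.0, 4.0)   # exact value is 32 for this v(t)
```

With enough rectangles the approximation converges to the exact displacement, which is the point of the splitting idea.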
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why the differentiation of $e^x$ is $e^x?$ $$\frac{d}{dx} e^x=e^x$$ Please explain simply as I haven't studied the first principle of differentiation yet, but I know the basics of differentiation.
The number $e$ is defined by $e=\sum\limits_{n=0}^\infty\dfrac{1}{n!}$, but historically I think that the definition of the $\exp$ function is $e^x=\lim\limits_{n\to+\infty}\left(1+\dfrac{x}{n}\right)^n$, and the properties of this function (in particular $\frac{d}{dx}e^x=e^x$) follow from this definition, as shown in this article.
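Neither limit is a proof, but both are easy to check numerically; the test point $x=1.3$ and the step sizes below are arbitrary choices:

```python
import math

# The difference quotient of exp at x should approach exp(x) itself,
# and (1 + x/n)^n should approach exp(x) as n grows.
x = 1.3
h = 1e-6
quotient = (math.exp(x + h) - math.exp(x)) / h   # ~ d/dx e^x at x
limit_form = (1.0 + x / 10**7) ** (10**7)        # ~ e^x
```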
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 9, "answer_id": 7 }
Does equicontinuity imply uniform continuity? If $\{f_n(x)\}$ is an equicontinuous family of functions, does it follow that each function is uniformly continuous? I am a bit confused since in the Arzela-Ascoli theorem, equicontinuity is seems to mean "$\delta$ depends neither on $n$ nor $x$." For example, Wikipedia says: "The sequence is equicontinuous if, for every $\epsilon > 0$, there exists $\delta>0$ such that $$|f_n(x)-f_n(y)| < \epsilon$$ whenever $|x − y| < \delta$  for all functions  $f_n$  in the sequence." but I also seem to recall such a term as "uniform equicontinuity." Is the definition not universal?
No, it does not. The singleton family $F=\{f\}$, with $f(x)=x^2$ on $\mathbb R$, is not uniformly continuous, but it is (trivially) equicontinuous at every point. However, if $K$ is a compact metric space and $\mathcal F\subset C(K)$ is an equicontinuous family, then $\mathcal F$ is also uniformly equicontinuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1079905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How to prove that $\frac{\zeta(2) }{2}+\frac{\zeta (4)}{2^3}+\frac{\zeta (6)}{2^5}+\frac{\zeta (8)}{2^7}+\cdots=1$? How can one prove this identity? $$\frac{\zeta(2) }{2}+\frac{\zeta (4)}{2^3}+\frac{\zeta (6)}{2^5}+\frac{\zeta (8)}{2^7}+\cdots=1$$ There is a formula for $\zeta$ values at even integers, but it involves Bernoulli numbers; simply plugging it in does not appear to be an efficient approach.
$\newcommand{\angles}[1]{\left\langle{#1}\right\rangle} \newcommand{\braces}[1]{\left\lbrace{#1}\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack{#1}\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left({#1}\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert{#1}\right\vert}$ \begin{align} &\bbox[10px,#ffd]{\sum_{n = 1}^{\infty}{\zeta\pars{2n} \over 2^{2n - 1}}} = \sum_{n = 2}^{\infty}{\zeta\pars{n} \over 2^{n - 1}} - \sum_{n = 1}^{\infty}{\zeta\pars{2n + 1} \over 2^{2n}} \\[3mm] = &\ -\sum_{n = 2}^{\infty}\pars{-1}^{n}\zeta\pars{n}\pars{-\,\half}^{n - 1} - \sum_{n = 1}^{\infty}\bracks{\zeta\pars{2n + 1} - 1}\pars{\half}^{2n}\ -\ \underbrace{\sum_{n = 1}^{\infty}\pars{\half}^{2n}}_{\ds{1 \over 3}} \\ = &\ -\bracks{\Psi\pars{1 + z} + \gamma}_{\ z\ =\ -1/2} \\[3mm] & - \bracks{% {1 \over 2z} - \half\,\pi\cot\pars{\pi z} - {1 \over 1 - z^{2}} + 1 - \gamma - \Psi\pars{1 + z}}_{\ z\ =\ 1/2} - {1 \over 3} \\[8mm] = &\ -\Psi\pars{\half} - {2 \over 3}\ +\ \underbrace{\Psi\pars{3 \over 2}}_{\ds{\Psi\pars{1/2} + 1/\pars{1/2}}} - {1 \over 3} = \color{#f00}{1} \end{align} $\Psi$ and $\gamma$ are the Digamma function and the Euler-Mascheroni constant, respectively. We used the Digamma Recurrence Formula $\ds{\Psi\pars{z + 1} = \Psi\pars{z} + 1/z}$ and the identities $\mathbf{6.3.14}$ and $\mathbf{6.3.15}$ in this link.
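A quick numerical check of the identity (the truncation depths below are arbitrary; the tail of each Dirichlet series is patched with the integral-test estimate $N^{1-s}/(s-1)$):

```python
# Numerically verify  sum_{n>=1} zeta(2n) / 2^(2n-1) = 1.
def zeta(s, N=1000):
    # truncated series plus an integral-test tail correction
    return sum(k ** (-s) for k in range(1, N + 1)) + N ** (1 - s) / (s - 1)

total = sum(zeta(2 * n) / 2 ** (2 * n - 1) for n in range(1, 30))
```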
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88", "answer_count": 4, "answer_id": 3 }
Solving recursive integral equation from Markov transition probability How do I solve something like: $$f(x) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty e^{\frac{-(y - x/2)^2}{2}}f(y)\:\mathrm{d}y$$ for $f(x)$? Is there also a general formula that this falls under? The closest thing I found was the Fredholm integral equation, but those (I believe) assume that the eigenfunction is linear. In case it helps, here's my motivation for this problem: I'm trying to find the stationary distribution of a continuous state, discrete time Markov process. I came up with this transition function: $$P(s_i = x | s_{i-1} = y) = p(x,y) = \frac{1}{\sqrt{2 \pi}} e^{\frac{-(x - y/2)^2}{2}}$$ The idea is that the probability function always brings the mean closer towards $0$ by using a Normal distribution with mean $\mu = y/2$. I then tried to solve the stationary distributions as follows $$\pi_x = \int_{-\infty}^{\infty}p(y, x)\pi_y \:\mathrm{d}y$$ At this point, I'm stuck. (I got the original equation by setting $\pi_x = f(x)$ for clarity.) This might deserve its own questions, but does solving for stationary distributions have its own technique?
We are given a discrete step continuous state transition system from state $y$ to state $x$ as $$M[y, x] = \frac 1 {\sqrt{2\pi}} e^{-\frac 12(x - y/2)^2}$$ which has the following characteristics * *If you are at $y$, your destination is normally distributed over $(-\infty,\infty)$, centered at $y/2$ with a variance of $1$ So you can see that in this system, you are statistically going to approach $0$ with some variance. Define $*$ as $$(M*N)[y, x] = \int_{-\infty}^{\infty} M[y, u]\cdot N[u, x] du$$ The steady state transition matrix is given by $M * M * M \dots = M^{\infty}$. We can tell right away that $$\begin{align} M^2[y, x] &= \int_{-\infty}^{\infty} M[y, u]\cdot M[u, x] du \\ &= \int_{-\infty}^{\infty} \left(\frac 1 {\sqrt{2\pi}} e^{-\frac 12(u - y/2)^2}\right) \left(\frac 1 {\sqrt{2\pi}} e^{-\frac 12(x - u/2)^2}\right) du \\ &= \sqrt{\frac{2}{5\pi}} e^{-\frac 1{40}y^2 + \frac 15 xy - \frac 25 x^2} \end{align}$$ Which is a normal curve peaking at $x = y/4$. Similarly, $$M^4[y, x] = \frac 45 \sqrt{\frac{10}{17\pi}}e^{-\frac 1{680}y^2 + \frac 4{85} xy - \frac {32}{85} x^2}$$ which is a normal curve peaking at $x = y/16$. So each step of the transition does indeed move the distribution closer to a normal curve peaking at $x = 0$. Define the ansatz: $$M^{\infty}[y, x] = N[y, x] = Re^{-Ax^2}$$ with the constraint that $N$ is stochastic, so $\int_{-\infty}^{\infty} N[y, x] dx = 1 \quad \Rightarrow \quad \sqrt{\pi}R = \sqrt{A}$. We wish to find: $$\begin{align} N &= N*M \\ Re^{-Ax^2} &= \int_{-\infty}^{\infty} Re^{-Au^2} \cdot \frac 1 {\sqrt{2\pi}} e^{-\frac 12(x - u/2)^2} du \\ \sqrt{\frac{A}{\pi}}e^{-Ax^2} &= \sqrt{\frac{A}{\pi}} \cdot \frac 1 {\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-Au^2} \cdot e^{-\frac 12(x - u/2)^2} du \\ \sqrt{2\pi}e^{-Ax^2} &= \sqrt{\frac{8\pi}{8A + 1}} e^{-4Ax^2/(8A + 1)} \\ A &= \frac 38 \end{align}$$ So your final steady state is going to be $$M^\infty[y, x] = \sqrt{\frac{3}{8\pi}}e^{-\frac 38 x^2}$$
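The fixed point can also be corroborated by simulation — a Monte-Carlo sketch, not a proof; the burn-in and sample counts are arbitrary. The predicted stationary law $\sqrt{3/(8\pi)}\,e^{-3x^2/8}$ is a centered normal with variance $1/(2\cdot\frac38)=\frac43$:

```python
import random

# Iterate x_i ~ Normal(x_{i-1}/2, 1) and compare the long-run sample
# mean/variance with the predicted stationary values 0 and 4/3.
random.seed(0)
x, samples = 0.0, []
for step in range(200000):
    x = random.gauss(x / 2.0, 1.0)
    if step >= 1000:              # discard a burn-in prefix
        samples.append(x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```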
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove that a $1$-periodic function $\phi$ is constant if $\phi(\frac{x}{2}) \phi(\frac{x+1}{2})$ is a constant multiple of $\phi$ For $0 < x< \infty$, let $\phi (x)$ be positive and continuously twice differentiable satisfying: (a) $\phi (x+1) = \phi (x)$ (b) $\phi(\frac{x}{2}) \phi(\frac{x+1}{2}) = d\phi(x),$ where $d$ is a constant. Prove that $\phi$ is a constant. I am trying to answer this as the first step in the proof of Euler's reflection formula. I was am given the hint "Let $g(x) = \frac{\mathrm{d}^2}{\mathrm{d}x^2} \log \phi (x)$ and observe that $g(x+1)=g(x)$ and $\frac{1}{4}(g(\frac{x}{2}) + g(\frac{x+1}{2})) = g(x)$" Firstly, I don't understand how they get the second bit of the hint, and then even assuming that I'm still not sure what to do. Any help would be much appreciated. Thanks.
Taking logarithms in (b), we obtain the identity $$\log \phi\biggl(\frac{x}{2}\biggr) + \log \phi\biggl(\frac{x+1}{2}\biggr) = \log \phi(x) + \log d.\tag{1}$$ Differentiating $(1)$ twice, that becomes $$\frac{1}{4}\Biggl(g\biggl(\frac{x}{2}\biggr) + g\biggl(\frac{x+1}{2}\biggr)\Biggr) = g(x).\tag{2}$$ So $g$ is a continuous function with period $1$ that satisfies the relation $(2)$. By periodicity, we can assume $g$ is defined on all of $\mathbb{R}.$ We want to show that $\phi$ is constant, so in particular that $g\equiv 0$. Choose $x_1,x_2 \in [0,1]$ so that $$g(x_1) = \min \{ g(x) : x\in [0,1]\};\qquad g(x_2) = \max \{ g(x) : x\in [0,1]\}.$$ Since $g$ is continuous and $[0,1]$ is compact, such points exist. Since $g$ has period $1$, $g(x_1)$ is also the global minimum that $g$ attains on $\mathbb{R}$, and $g(x_2)$ the global maximum. By $(2)$, we have $$g(x_1) = \frac{1}{4}\Biggl(g\biggl(\frac{x_1}{2}\biggr) + g\biggl(\frac{x_1+1}{2}\biggr)\Biggr) \geqslant \frac{1}{4}\bigl(g(x_1) + g(x_1)\bigr) = \frac{1}{2}g(x_1),$$ so $g(x_1) \geqslant 0$. The same argument shows $g(x_2) \leqslant 0$, hence $g\equiv 0$, as desired. Therefore, it follows that $$\biggl(\frac{d}{dx}\log \phi\biggr)(x) \equiv a = \text{const},$$ and hence $$\log \phi(x) = ax+b$$ and $\phi(x) = e^{ax+b}$ for some constants $a,b\in\mathbb{R}$. It remains to show that $a = 0$: by the periodicity (a), $e^{a(x+1)+b} = e^{ax+b}$ for all $x$, so $e^a = 1$ and $a = 0$; hence $\phi \equiv e^b$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Elements with infinite roots in $p$-adic integers Let $\mathbb{Q}_p$ the $p$-adic completion of $\mathbb{Q}$ and $$S=\{x\in\mathbb{Q}_p:1+x\mbox{ has $n^{th}$ roots in }\mathbb{Q}_p\mbox{ for infinitely many }n\in\mathbb{N}\}$$ I have to show that $p\mathbb{Z}_p\subseteq S\subseteq \mathbb{Z}_p$ where $\mathbb{Z}_p=\{x\in\mathbb{Q}_p:|x|_p\le 1\}$. Any hint on how to start? I'm totally stuck at the moment, trying to find out the structure of those $n^{th}$ roots but going nowhere.
So let's start the easy way: if $x\not\in\Bbb Z_p$ we have $v_p(x)<0$, so that $v_p(1+x)<0$ by the strong triangle inequality. Then say $y_n^n=1+x$ for some infinite sequence $y_n$. Then as $$v_p(y_n)={1\over n}v_p(1+x)$$ we see that as $n\to\infty$ we have that $v_p(y_n)\to 0$. But the valuation is discrete (integer-valued), hence for large enough $n$ we must have $v_p(y_n)=0$, contradicting $v_p(1+x)<0$. Now let's show that $1+pz\in S$ for every $z\in\Bbb Z_p$. Then we need only show that $$t^n-1-pz\in\Bbb Z_p[t]$$ has a solution in $\Bbb Z_p$ for infinitely many $n$. However, we can do this by using Hensel's lemma to reduce the problem to one modulo $p$ (or $8$ in the case $p=2$) and Dirichlet's theorem on primes in arithmetic progressions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
I can't understand a step in the proof of the associativity of matrix multiplication Matrix multiplication associativity is proven by the following reasoning: Let there be matrices $A^{m \times n}$, $B^{n \times k}$ and $C^{k \times l}$. Then $$ \{(AB)C\}_{ij}=\sum\limits_{p=1}^k{\{AB\}_{ip}c_{pj} \\=\sum\limits_{p=1}^k \left(\sum\limits_{q=1}^n a_{iq}b_{qp}\right)}c_{pj} \\=\sum\limits_{q=1}^n a_{iq} \left(\sum\limits_{p=1}^k b_{qp}c_{pj}\right) \\= \{A(BC)\}_{ij}. $$ I don't understand how we get the third line from the second.
Here is, I think, a more intuitive way to prove it. Letting $A$ be an $m \times n$ matrix, it follows that $B$ must have $n$ rows for $AB$ to exist. So letting $B$ be an $n \times p$ matrix, $AB$ will be an $m \times p$ matrix. For $(AB)C$ to exist, $C$ must have $p$ rows, so let $C$ be a $p\times r$ matrix; then $(AB)C$ will be an $m \times r$ matrix. Since $C$ is a $p\times r$ matrix, $B$ must have $p$ columns for $BC$ to exist. So (again) letting $B$ be an $n \times p$ matrix, $BC$ will be an $n \times r$ matrix. For $A(BC)$ to be allowed, $A$ must have $n$ columns. So (again) letting $A$ be an $m \times n$ matrix, $A(BC)$ will be an $m \times r$ matrix. Thus $(AB)C$ and $A(BC)$ have the same shape precisely when $A$ has the same number of columns as $B$ has rows, and $C$ has the same number of rows as $B$ has columns. When this is the case it will be true (as shown below) that $A(BC) = (AB)C$. All we have to do is prove that an arbitrary column of $(AB)C$ will be equal to the same column of $A(BC)$ (I'll call this column the $j$th column of both matrices.) $$\text{Notation: } K_i \text{ represents the } i\text{th column of the matrix } K.$$ $$B = \left[ \begin{array}{ccc} B_1 & \cdots & B_p \end{array} \right]$$ $$C = \left[ \begin{array}{ccccc} c_{11} & \cdots & c_{1j} & \cdots & c_{1r}\\ \vdots & & \vdots & & \vdots \\ \underbrace{c_{p1}}_{C_1} & \cdots & \underbrace{c_{pj}}_{C_j} & \cdots & \underbrace{c_{pr}}_{C_r} \end{array} \right]$$ $$AB = A\left[ \begin{array}{ccc} B_1 & \cdots & B_p \end{array} \right] = \left[ \begin{array}{ccc} AB_1 & \cdots & AB_p \end{array} \right]$$ $$(BC)_j = BC_j = \left[ \begin{array}{ccc} B_1 & \cdots & B_p \end{array} \right]\left[ \begin{array}{c} c_{1j}\\ \vdots \\ c_{pj} \end{array} \right] = c_{1j}B_1 + \cdots + c_{pj}B_p$$ $$\left((AB)C\right)_j = (AB)\,C_j = \left[ \begin{array}{ccc} AB_1 & \cdots & AB_p \end{array} \right]\left[ \begin{array}{c} c_{1j}\\ \vdots \\ c_{pj} \end{array} \right] = c_{1j}AB_1 + \cdots + c_{pj}AB_p$$ $$\left(A(BC)\right)_j = A(BC)_j = A\left( c_{1j}B_1 + \cdots + c_{pj}B_p \right) = c_{1j}AB_1 + \cdots + c_{pj}AB_p = \left((AB)C\right)_j$$
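The column argument can be spot-checked in exact arithmetic (a sketch; the shapes and the random integer entries are arbitrary choices):

```python
import random
from fractions import Fraction

# Check (AB)C == A(BC) for random rectangular matrices of compatible
# shapes m x n, n x p, p x r, using exact Fractions so equality is exact.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rand_matrix(rows, cols):
    return [[Fraction(random.randint(-9, 9)) for _ in range(cols)]
            for _ in range(rows)]

random.seed(1)
A, B, C = rand_matrix(3, 4), rand_matrix(4, 5), rand_matrix(5, 2)
associative = matmul(matmul(A, B), C) == matmul(A, matmul(B, C))
```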
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How close apart are two message - "Document Distance" algorithm I was looking at this algorithm that computes how close apart are two texts from one another and the formula seems a bit weird to me. The basic steps are: * *For each word encountered in a text you let a vector hold its frequency. For $n$ different words our vectors would then be $R^n$. You can assume that the word of row $i$ of vector $1$ equals row $j$ of vector $2$. For example row $0$ of vector $1$ has the value $4$ and row $0$ of vector $2$ has a value of $3$, both referring to the word Dog but one text having $4$ occurrences of the word Dog while the other text having only $3$ occurrences of the word Dog. If a word appears in one text but not in the other text then the vector that corresponds to the text not having that word will have a value of $0$. *Compute the distance between the two vectors using the formula: $$\arccos\left(\frac{L1\cdot L2}{\sqrt{(L1\cdot L1)(L2\cdot L2)}}\right)\tag1$$ Why use the above equation when we could use Euclidean distance instead? $$d(p,q) = \sqrt{(p_1-q_1)^2 + (p_2-q_2)^2 + \cdots + (p_i-q_i)^2 + \cdots + (p_n-q_n)^2}.$$ It seems like a simpler way to go about finding the distance between two vectors. I don't really understand the equation (1) too well so i am not sure if it is more appropriate in finding how close apart are two texts.
The first formula is the angle given by the dot product $$\mathbf u\cdot \mathbf v = \|\mathbf u\|\ \|\mathbf v\|\cos \theta$$ One property (compared with Euclidean distance) is that the result is bounded: the angle always lies in $[0,\pi]$. Also, it does not depend so much on the lengths of the texts, because each vector is normalised to unit length in the denominator: doubling every word count in one text leaves the angle unchanged, whereas the Euclidean distance between the frequency vectors would grow.
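A minimal sketch of formula (1) on toy frequency vectors (the counts are invented); note that scaling one vector leaves the angle unchanged, which is exactly the length-invariance argued above:

```python
import math

# Angle between two word-frequency vectors, formula (1).
def angle(L1, L2):
    dot = sum(a * b for a, b in zip(L1, L2))
    norm = math.sqrt(sum(a * a for a in L1) * sum(b * b for b in L2))
    return math.acos(dot / norm)

same     = angle([4, 1, 2], [4, 1, 2])   # identical texts -> angle 0
doubled  = angle([4, 1, 2], [8, 2, 4])   # one text repeated -> still 0
disjoint = angle([1, 0, 0], [0, 1, 0])   # no words in common -> pi/2
```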
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Borel $\sigma$-Algebra definition. Definition: The Borel $\sigma$-algebra on $\mathbb R$ is the $\sigma$-algebra B($\mathbb R$) generated by the $\pi$-system $\mathcal J$ of intervals $\ (a, b]$, where $\ a<b$ in $\mathbb R$ (We also allow the possibility that $\ a=-\infty\ or \ b=\infty$) Its elements are called Borel sets. For A $\in$ B($\mathbb R$), the $\sigma$-algebra $$B(A)= \{B \subseteq A: B \in B(\mathbb R)\}$$ of Borel subsets of A is termed the Borel $\sigma$-algebra on A. I struggle with this part especially "generated by the $\pi$-system $\mathcal J$ of intervals (a, b]" In addition could someone please provide an example of a Borel set, preferably some numerical interval :) Also is $\mathbb R$ the type of numbers that the $\sigma$-algebra is acting on?
Ignore the phrase "$\pi$-system" for the time being: what you are given is a collection $\mathcal{J}$ of subsets of $\mathbb{R}$, and the $\sigma$-algebra you seek is the smallest $\sigma$-algebra that contains $\mathcal{J}$. This is the definition of the Borel $\sigma$-algebra. For example $\{1\}$ is a Borel set since $$ \{1\} = \bigcap_{n=1}^{\infty} (1-1/n,1] = \mathbb{R}\setminus \left(\bigcup_{n=1}^{\infty} \mathbb{R}\setminus (1-1/n,1]\right) $$ Does this help you understand what this $\sigma$-algebra can contain? It is not possible to list down all the elements in $B(\mathbb{R})$ though. Now, the reason we choose this $\sigma$-algebra is simple: we want continuous functions to be measurable - a rather reasonable requirement which is often imposed when dealing with measure spaces that are also topological spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Number of $ 6 $ Digit Numbers with Alphabet $ \left\{ 1, 2, 3, 4 \right\} $ with Each Digit of the Alphabet Appearing at Least Once Find the number of 6 digit numbers that can be made with the digits 1,2,3,4 if all the digits are to appear in the number at least once. This is what I did - I fixed four of the digits to be 1,2,3,4 . Now remaining 2 places can be filled with 4 digits each. Number of 6 digit numbers if two places are filled with same digit are 4 * 6!/3! and if filled by different digits are 12 * 6!/(2!*2!). Therefore, total such numbers are 2880. But the correct answer is 1560. Any hint would be appreciated.
A good way (not necessarily the best way) of doing such a problem, as advised by my high school teacher, is to first determine the number of combinations, then permute their arrangements. For your question there are two cases: one of the four digits appearing three times, or two of the digits appearing twice each. For the first case, the number of ways to choose the digit repeating three times is $\binom41$, and the number of arrangements for this case is $\frac{6!}{3!}$. The number of six-digit numerals in this case is the product of these two numbers. For the second case, repeating the previous argument, we have $\binom42$ combinations and $\frac{6!}{2!\,2!}$ arrangements; again, the product of these two numbers is the number of six-digit numerals in this case. Adding the two cases together, you will obtain the answer.
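A brute-force enumeration over all $4^6=4096$ strings confirms both the case count and the total:

```python
from itertools import product
from math import factorial, comb

# Enumerate all 6-digit strings over {1,2,3,4} that use every digit.
count = sum(1 for digits in product((1, 2, 3, 4), repeat=6)
            if set(digits) == {1, 2, 3, 4})

# The two cases: one digit appearing three times, or two digits twice each.
cases = (comb(4, 1) * factorial(6) // factorial(3)
         + comb(4, 2) * factorial(6) // (factorial(2) * factorial(2)))
```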
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Biholomorphic function between given set and open unit disk Let $A=\{z\in\mathbb{C}:|z|<1, |z-1|<1\}$. From Riemann mapping theorem we know that there exist biholomorphic function between $A$ and open unit disk. How to construct these function ? I tried to apply Möbius transformation and classical mappings ( exponent, $z^2$, etc.) but I doesn't work. I need only hints for this problem.
The circles $\{|z|=1\}$ and $\{|z-1|=1\}$ intersect at the two points $a,\bar a\in\mathbb{C}$, $\operatorname{Im}a>0$ (explicitly $a=\frac12+\frac{\sqrt3}{2}i$). The Möbius mapping $$ \phi(z)=\frac{z-a}{z-\bar a} $$ transforms $A$ into an angle (a sector) with vertex at $0$; here the opening is $2\pi/3$, since two unit circles whose centers are a distance $1$ apart meet at that angle. After rotating the sector onto $\{0<\arg w<2\pi/3\}$, a power function (a suitable branch of $w\mapsto w^{3/2}$) transforms this into a half plane, and another Möbius mapping transforms the half plane onto the open unit disk.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integral of $\log(\sin(x))$ using contour integrals I know the integral is possible with a simple fourier series expansion of $-\log(\sin(x))$ But I am interested in complex analysis, so I want to try this. $$I = \int_{0}^{\pi} \log(\sin(x)) dx$$ The substitution $x = \arcsin(t)$ first comes to mind. But that substitution isnt valid as, The upper and lower bounds would both be $0$ because $\sin(\pi) = \sin(0) = 0$ What is a workaround using inverse trig or some other way? Inverse sine is good, because that gives us a denominator from which we will be able to find poles. $$x = arctan(t)$$ Is also good, but it would hard to do. Idea?
We have \begin{align}\int_0^{\pi} \log(\sin x)\, dx &= \int_0^{\pi} \log\left|\frac{e^{ix}-e^{-ix}}{2i}\right|\, dx\\ &= \int_0^{\pi} \log\left|\frac{e^{2ix}-1}{2}\right|\, dx\\ &= \int_0^{\pi} \log|e^{2ix}-1|\, dx - \int_0^{\pi} \log 2\, dx\\ &= \frac{1}{2}\int_0^{2\pi} \log|1 - e^{ix}|\, dx - \pi\log 2\\ &= \lim_{r\to 1^{-}} \frac{1}{2}\int_0^{2\pi} \log|1 - re^{ix}|\, dx - \pi\log 2\\ &= \lim_{r\to 1^{-}}\bigl(\pi\log|1 - z||_{z = 0}\bigr) - \pi \log 2 \quad (\text{by Gauss's mean value theorem})\\ &= -\pi\log 2. \end{align}
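The contour result is easy to corroborate numerically; a midpoint rule avoids the logarithmic singularities at the endpoints (the number of panels is an arbitrary choice):

```python
import math

# Midpoint-rule estimate of the integral of log(sin x) over [0, pi],
# to compare with the exact value -pi * log 2.
n = 200000
h = math.pi / n
approx = sum(math.log(math.sin((i + 0.5) * h)) for i in range(n)) * h
target = -math.pi * math.log(2.0)
```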
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How can I prove $|a_{m+1}-a_{m+2}+a_{m+3}-...\pm a_{n}| \le |a_{m+1}|$? Give a sequence $(a_n)$ that satisfying: (i) the sequence is decreasing: $$a_n \ge a_{n+1} \forall n \in \mathbb N$$ (ii) the sequence is converging to $0$: $$(a_n) \rightarrow 0$$ Problem:Prove that $|a_{m+1}-a_{m+2}+a_{m+3}-...\pm a_{n}| \le |a_{m+1}|$ $\forall n \gt m \ge N$ for some $N \in \mathbb N$ I think we shall do this by induction. Let $n=1$, then clearly $|a_{m+1}| \le |a_{m+1}|$. Suppose $n=k$ holds true, then for $n=k+1$, $|a_{m+1}-a_{m+2}+a_{m+3}-...\pm a_{m+k} \mp a_{m+k+1}|$ ... For the step $n=k+1$, I give up(the triangle inequality will not work!). So, how do we continue to do this? I have a strange feeling that I have made a big mistake somewhere because this induction is really weird. Please help me proceed with how to prove this strange inequality. I thank you very much. Note: every term in this sequence MUST BE positive(proven).
Let us write $n=m+p$, so that the sum has $p$ elements. We prove that $$\left| a_{m+1}-a_{m+2} +\cdots \pm a_{m+p} \right| \leq |a_{m+1}|=a_{m+1} \tag{1} $$ for all $m,p$: First note that the $\pm$ actually equals $(+)$ if $p$ is odd, and is $(-)$ if $p$ is even. This motivates breaking the proof into two cases: * *If $p=2k+1$ is odd, summing the inequalities $a_{m+2l+1}-a_{m+2l+2} \geq 0$ from $l=0$ to $k-1$, shows that we can take the absolute value in $(1)$ off, as both expressions $$(a_{m+1}-a_{m+2})+(a_{m+3}-a_{m+4})+ \cdots (a_{m+p-2}-a_{m+p-1}) $$ and $$a_{m+p} $$ are nonnegative in that case. Once the absolute value is gone, it's easy to see that $(1)$ holds by considering both expressions above again. *If $p=2k$ is even, we can similarly break the sum into a sum of nonnegative summands, which allows us to take the absolute value off. Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding singularities in circle of convergence of $f(z)$ and showing taylor series diverges there $$f(x)=\arctan(x)$$ I know that $$\dfrac{1}{1+x^2}=1-x+x^2-x^3+\cdots=\sum_{i=0}^\infty (-x)^i$$ Also: $$\arctan(x)=\int \dfrac{1}{1+x^2}dx = \int \sum_{i=0}^\infty (-x)^i dx = x - 0.5x^2+1/3x^3+\cdots=\sum_{i=0}^\infty \frac{(-1)^i(x)^{i+1}}{2i+1}$$ Radius of convergence $R = 1$ using ratio test $\implies |z|\le1$ is the circle of convergence Singularities are at $\pm i$ and both lie in $|z|$ however how do i show that the series diverges there? EDIT: $$\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 +\cdots + (-1)^kx^{2k} + \cdots\to \int \dfrac{dx}{1+x^2} = \sum_{k=0}^\infty (-1)^k\frac{x^{2k+1}}{2k+1}.$$
At the singularities $z=\pm i$ you can test the series directly. At $z=i$ the general term is $$\frac{(-1)^k i^{2k+1}}{2k+1}=\frac{(-1)^k (i^2)^k\, i}{2k+1}=\frac{i}{2k+1},$$ so the series becomes $i\sum_{k\ge0}\frac{1}{2k+1}$, which diverges by comparison with the harmonic series; at $z=-i$ one gets $-i\sum_{k\ge0}\frac{1}{2k+1}$ in the same way. (This behaviour is special to $\pm i$: elsewhere on the circle of convergence, e.g. at $z=\pm1$, the series actually converges, to $\pm\pi/4$, by the alternating series test, even though $R=1$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1080921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show without differentiation that $\frac {\ln{n}}{\sqrt{n+1}}$ is decreasing Show that the function $\displaystyle \frac {\ln{n}}{\sqrt{n+1}}$ is decreasing from some $n_0$ My try: $\displaystyle a_{n+1}=\frac{\ln{(n+1)}}{\sqrt{n+2}}\le \frac{\ln{(n)+\frac{1}{n}}}{\sqrt{n+2}}$ so we want to show that $\ln{n}\cdot(\sqrt{n+1}-\sqrt{n+2})+\frac{\sqrt{n+1}}{n}\le 0$ or equivalently $n\cdot \ln{n} \cdot (\sqrt{\frac{n+2}{n+1}}-1)\ge1$ and I'm stuck here.
If you compare $f(n+1)$ with $f(n)$, the inequality $f(n+1)\le f(n)$ comes down to $\frac{(n+1)^{\sqrt{n+1}}}{n^{\sqrt{n+2}}}\leq 1$; take logs (it becomes $\sqrt{n+1}\,\ln(n+1)\le\sqrt{n+2}\,\ln n$) and it should be easy to finish.
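Numerically the turning point is visible: $f$ still rises from $n=8$ to $9$ but decreases from $n=9$ on, so one may take $n_0=9$ (the cutoff 2000 in the scan is an arbitrary choice):

```python
import math

# Where does f(n) = ln(n) / sqrt(n+1) start decreasing?
def f(n):
    return math.log(n) / math.sqrt(n + 1)

still_rising_at_8 = f(9) > f(8)
decreasing_from_9 = all(f(n + 1) < f(n) for n in range(9, 2000))
```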
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Sum of the series $\sinθ\sin2θ + \sin2θ\sin3θ + \sin3θ\sin4θ + \sin4θ\sin5θ + \cdots+\sin n\theta\sin(n+1)\theta$ terms The series is given: $$\sum_{i=1}^n \sin (i\theta) \sin ((i+1)\theta)$$ We have to find the sum to n terms of the given series. I could took out the $2\sin^2\cos$ terms common in the series. But what to do further, please guide me.
Hint1: $$\sum_{i=1}^n \sin (i\theta) \sin ((i+1)\theta) = 1/2\sum_{i=1}^n \left(\cos (\theta)-\cos ((2i+1)\theta)\right)$$ Hint2: $$ 2\sin (\theta)\sum_{i=1}^n \cos ((2i+1)\theta) = \sum_{i=1}^n \left(\sin ((2i+2)\theta) - \sin (2i\theta)\right) $$
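Following the two hints through (product-to-sum, then telescoping) leads to the closed form $S_n=\dfrac{n\cos\theta}{2}-\dfrac{\sin((2n+2)\theta)-\sin 2\theta}{4\sin\theta}$, which can be checked numerically ($\theta$ and $n$ below are arbitrary test values):

```python
import math

# Compare the direct sum with the telescoped closed form.
theta, n = 0.7, 25
direct = sum(math.sin(i * theta) * math.sin((i + 1) * theta)
             for i in range(1, n + 1))
closed = n * math.cos(theta) / 2 - (
    math.sin((2 * n + 2) * theta) - math.sin(2 * theta)) / (4 * math.sin(theta))
```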
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Number of Solutions of $y^2-6y+2x^2+8x=367$? Find the number of solutions in integers to the equation $$y^2-6y+2x^2+8x=367$$ How should I go about solving this? Thanks!
Note: this method requires basic knowledge of conic sections. First of all, completing the squares turns the equation into $$(y-3)^2+2(x+2)^2=384,$$ which is an ellipse. Its vertices are $(-2,3-8\sqrt6)$ and $(-2,3+8\sqrt6)$, approximately $(-2,-16.6)$ and $(-2,22.6)$. So you can vary $y$ from $-16$ to $22$ (checking for each $y$ whether $x$ comes out an integer) and you will get all integral solutions.
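A direct scan over the bounded ranges settles the count (the ranges only need to cover the ellipse):

```python
# Integer points on y^2 - 6y + 2x^2 + 8x = 367, i.e. (y-3)^2 + 2(x+2)^2 = 384.
solutions = [(x, y)
             for x in range(-16, 13)
             for y in range(-17, 24)
             if y * y - 6 * y + 2 * x * x + 8 * x == 367]
```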
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Simplification a trigonometric equation $$16 \cos \frac{2 \pi}{15} \cos\frac{4 \pi}{15} \cos\frac{8 \pi}{15} \cos\frac{14 \pi}{15}$$ $$=4\times 2 \cos \frac{2 \pi}{15} \cos\frac{4 \pi}{15} \times2 \cos\frac{8 \pi}{15} \cos\frac{14 \pi}{15}$$ I am intending in this way and then tried to apply the formula, $2\cos A \cos B$ but i think I might not get the answer. What to do now? the result will be 1.
Using the identity $\cos\theta\cos2\theta\cos2^2\theta\cdots\cos2^{n-1}\theta=\dfrac{\sin2^n\theta}{2^n\sin\theta}$: putting $n=4$ and $\theta=\dfrac{\pi}{15}$ you get $$\cos\dfrac{\pi}{15}\cos \dfrac{2 \pi}{15} \cos\dfrac{4 \pi}{15} \cos\dfrac{8 \pi}{15}=\dfrac{\sin\dfrac{16\pi}{15}}{16\sin\dfrac{\pi}{15}}.$$ Now use $\cos\dfrac{14\pi}{15}=\cos\left(\pi-\dfrac{\pi}{15}\right)=-\cos\dfrac{\pi}{15}$: multiplying both sides by $-16$ gives the required product, $$16\cos \dfrac{2 \pi}{15} \cos\dfrac{4 \pi}{15} \cos\dfrac{8 \pi}{15}\cos\dfrac{14\pi}{15}=-16\cos\dfrac{\pi}{15}\cos \dfrac{2 \pi}{15} \cos\dfrac{4 \pi}{15} \cos\dfrac{8 \pi}{15}=-\dfrac{\sin\dfrac{16\pi}{15}}{\sin\dfrac{\pi}{15}}.$$ Finally, $\sin\dfrac{16\pi}{15}=\sin\left(\pi+\dfrac{\pi}{15}\right)=-\sin\dfrac{\pi}{15}$, so $$-\dfrac{\sin\dfrac{16\pi}{15}}{\sin\dfrac{\pi}{15}}=\dfrac{\sin\dfrac{\pi}{15}}{\sin\dfrac{\pi}{15}}=1.$$
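And a one-line numerical confirmation of the product (pure floating point, so equality holds only up to round-off):

```python
import math

# 16 * cos(2pi/15) cos(4pi/15) cos(8pi/15) cos(14pi/15) should equal 1.
value = 16 * math.prod(math.cos(k * math.pi / 15) for k in (2, 4, 8, 14))
```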
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinatoric proof for $\sum_{k=0}^n{n\choose k}\left(-1\right)^k\left(n-k\right)^4 = 0$ ($n\geqslant5$) I'm trying to prove the following: For every $n \ge 5$: $$\sum_{k=0}^n{n\choose k}\left(-1\right)^k\left(n-k\right)^4 = 0$$ I've tried cancelling one $(n-k)$, and got this: $$n\sum_{k=0}^{n-1}{n-1\choose k}\left(-1\right)^k\left(n-k\right)^3 = 0$$ I've also tried expressing the first formula as such: $$\sum_{k=0}^na_kb_k$$ Where $a_k = {n \choose k}\left(-1\right)^k$ and $b_k = \left(n-k\right)^4 = \sum_{j=0}^4{4\choose j}n^j\left(-k\right)^{4-j}$ It's easy to see that $\sum_{k=0}^n a_k = \left(1-1\right)^n = 0$ by the binomial theorem. But I'm lost as to why this work only for n>=5. What am I missing?
It will be useful to know: $\textbf{Theorem.}$ Let $p(x)= a_0+a_1x+\cdots +a_nx^n$ be $\textit{any}$ polynomial in $\mathbb{C}[x]$ (of degree $\leq n$); then $$ \sum_{k} {n\choose k}(-1)^k p(k)=(-1)^n n! a_n.$$ So in particular, when $p$ has degree $<n$, such sums are $0$. Proof: see Graham, Knuth, Patashnik, Concrete Mathematics, Addison Wesley, 1989, formula (5.42). (As was suggested above, the proof is by using difference operators.) $\Box$ Concerning the original question: since $x\mapsto (n-x)^4$ is of degree $4$ in $x$ but $n\geq 5$, we get $0$.
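A quick numeric check of the theorem (a sketch; `alt_sum` is a made-up helper name) confirms both the vanishing for degree $<n$ and the $(-1)^n n!\,a_n$ value when the degree equals $n$:

```python
from math import comb

def alt_sum(n, p):
    # sum_{k=0}^{n} C(n,k) * (-1)^k * (n-k)^p
    return sum(comb(n, k) * (-1) ** k * (n - k) ** p for k in range(n + 1))

# degree p < n: the alternating sum vanishes
print(alt_sum(5, 4), alt_sum(6, 4))   # 0 0
# degree p = n: the theorem gives (-1)^n * n! * a_n; here p(x) = (n-x)^4
# has leading coefficient a_4 = 1, so for n = 4 the sum is 4! = 24
print(alt_sum(4, 4))                  # 24
```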
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 4 }
Differential geometry problem about curves Please help me to solve the following differential geometry problem. The equations $F(x, y, z) = 0$ and $G(x, y, z) = 0$ define a space curve $L$. The gradients of $F$ and $G$ are not collinear at some point $M(x_0, y_0, z_0)$ which belongs to the curve $L$. Find: * *The tangent line (its equation) at the point $M$ *The osculating plane at the point $M$ *The curvature at the point $M$ The 1st part is quite evident to me: one can simply differentiate $F(x(t), y(t), z(t))$ and $G(x(t), y(t), z(t))$, obtaining two equations in $x'$, $y'$ and $z'$. These simultaneous equations have a solution due to the condition of non-collinearity. Then, assuming $x' = 1$, it's easy to find $y'$ and $z'$, which gives us a vector $(x', y', z')$. This vector together with the point $M$ determines the desired tangent line. But what to do with the other two parts? Thanks in advance.
The curvature vector is defined to be the derivative of the unit tangent vector. Let us say that $\gamma(s)$ is the arc length parametrization of your curve. That is to say: $$F(\gamma(s))=0$$ $$G(\gamma(s))=0.$$ To find the tangent vector you differentiated and obtained: $$\nabla F(\gamma)\cdot\gamma'=0$$ $$\nabla G (\gamma)\cdot \gamma'=0.$$ To answer the second question you need the direction of $\gamma''$ (having supposed $\|\gamma'\|=1$). So you differentiate again and get: $$\mathcal{H}F(\gamma)[\gamma',\gamma']+ \nabla F(\gamma)\cdot\gamma''=0$$ $$\mathcal{H}G(\gamma)[\gamma',\gamma']+ \nabla G(\gamma)\cdot\gamma''=0.$$ Here $\mathcal{H}F$ is the Hessian matrix and the square brackets denote the associated bilinear form. Once you solve this system you will get the answer. It answers your third question too, since for an arc length parametrization the curvature is $\|\gamma''\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inequality in Integral Show that $\dfrac{28}{81}<\int_0^\frac{1}{3}e^{x^2}dx<\dfrac{3}{8}$. It would be great if a solution based on the Mean Value Theorem for Integrals were posted.
To obtain the upper bound, note that for $|x| < 1$ $$e^{x^2} = \sum_{k=0}^{\infty}\frac{x^{2k}}{k!} < \sum_{k=0}^{\infty}x^{2k} = \frac1{1-x^2}.$$ Hence, $$\int_0^{1/3}e^{x^2}\,dx < \int_0^{1/3}\frac{dx}{1-x^2} < \frac1{3}\frac{1}{1-(1/3)^2}= \frac1{3}\frac{9}{8}= \frac{3}{8}.$$ The lower bound is easily derived by truncating the Taylor series.
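For completeness, the hinted lower bound: every term of the Taylor series is nonnegative, so $e^{x^2}>1+x^2$ for $x\ne0$, and hence
$$\int_0^{1/3}e^{x^2}\,dx>\int_0^{1/3}(1+x^2)\,dx=\frac13+\frac{1}{3}\left(\frac13\right)^3=\frac{27+1}{81}=\frac{28}{81}.$$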
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding linearly independent vectors modulo $W$ We've learned in class: Let $W \subseteq V$ be a subspace. $v_1, \ldots, v_k \in V$ are said to be linearly independent modulo $W$ if for all $\alpha_1, \ldots, \alpha_k$: $\sum_{i=1}^{k} \alpha_i v_i \in W \implies \alpha_1 = \ldots = \alpha_k = 0$ Can you explain it to me / give some intuition or an example? Thanks.
Consider the quotient space $V/W.$ Then the given condition says exactly that the image of the vectors $v_1, \ldots , v_k \in V$ under the natural map $V \rightarrow V/W$ is linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Irrational number not occurring in the cover of the rational numbers Write each rational number from $(0,1]$ as a fraction $a/b$ with $\gcd(a,b)=1$, and cover $a/b $ with the interval $$ \left[\frac ab-\frac 1{4b^2}, \frac ab + \frac 1{4b^2}\right]. $$ Prove that the number $\frac 1{\sqrt{2}}$ is not covered. What I did was the following: denote the union of the given intervals by $P$. To show that $\frac 1{\sqrt{2}}$ is not in $P$, set $\frac ab=k=\frac 1{\sqrt{2}}+x$. Therefore we have to prove that $$ \begin{align}x\gt\frac 1{4b^2}&\implies x \gt \frac {k^2}{4a^2}\\ &\implies4a^2x\gt\left(\frac 1{\sqrt{2}}+x\right)^2\\ &\implies4a^2x\gt x^2+x\sqrt{2}+\frac 12\\ &\implies x^2-x(4a^2-\sqrt{2})+\frac 12\lt 0\end{align} $$ Hence $$ \begin{align} D\lt 0 & \implies (4a^2-\sqrt{2})^2-4\cdot\frac 12 \lt 0 \\ & \implies(4a^2-\sqrt{2})^2-\sqrt{2}^2\lt 0 \\ & \implies(4a^2-2\sqrt{2})\cdot4a^2\lt 0\\ &\implies4a^2\lt2\sqrt{2}\\ &\implies a^2\lt \frac 1{\sqrt{2}}\end{align} $$ This is impossible since $a$ is a natural number. So, what do I do?
I got a good solution from one of my teachers: $$\left|\frac ab-\frac 1{\sqrt2}\right|\left(\frac ab+\frac 1{\sqrt2}\right)=\left|\frac {a^2}{b^2}-\frac 12\right|=\frac {|2a^2-b^2|}{2b^2}\ge\frac 1{2b^2},$$ since $2a^2-b^2$ is a nonzero integer (it cannot vanish, because $\sqrt2$ is irrational). Also we know that $\frac ab+\frac 1{\sqrt2}\lt2$, hence $\left|\frac ab-\frac 1{\sqrt2}\right|\gt\frac 1{4b^2}$. Hence proved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Showing the equation is true I have just finished writing my class test and I failed to answer this question. None of my friends could help me after the test was over. The question reads: If $P$ is the length of the perpendicular from the origin to the line which intercepts the axes at points $A$ and $B$, then show that $$\frac{1}{P^2}=\frac{1}{A^2}+\frac{1}{B^2}$$ Can someone help me do this?
Hint: Let $Q$ be the foot of the perpendicular from $O$ to the line $AB$. Then $\triangle AOQ$, $\triangle BOQ$, and $\triangle OAB$ are all similar.
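As a cross-check by coordinates rather than similar triangles: the line with intercepts $A$ and $B$ is $\frac xA+\frac yB=1$, and the distance from the origin to it is
$$P=\frac{1}{\sqrt{\dfrac{1}{A^2}+\dfrac{1}{B^2}}},\qquad\text{i.e.}\qquad \frac{1}{P^2}=\frac{1}{A^2}+\frac{1}{B^2}.$$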
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to compare products of prime factors efficiently? Let's say that $n$ and $m$ are two very large natural numbers, both expressed as products of prime factors, for example: $n = 3×5×43×367×4931×629281$ $m = 8219×138107×647099$ Now I'd like to know which is smaller. Unfortunately, all I have is an old pocket calculator that can show at most (say) ten digits. So while there are enough digits to enter each factor individually, $n$ and $m$ are both too large to fit in the calculator. To my disappointment, they are also so close that even their logarithms are indistinguishable looking at the first 10 digits. Question: how would one go about determining which one of two integers is smaller in a case like this? Is there any easier alternative to calculating the full decimal expansion of both products with pen and paper?
As a practical example, in a programming language like C or C++, you can easily calculate both products using floating-point arithmetic, and you can easily calculate both products modulo 2^64 using unsigned integer arithmetic. If you can estimate the rounding error in the floating-point arithmetic products, and prove that the total rounding error is less than 2^63, then it's easy: Calculate the difference d between the products in floating-point arithmetic. If the difference is >= 2^63 or <= -2^63 then that decides. Otherwise, you know that the exact difference is greater than $d - 2^63$ and less than $d + 2^63$. We also know the exact difference modulo 2^64; this is enough to determine the exact difference. This should work for numbers up to 33 or 34 digits.
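To make the recipe concrete, here is a sketch of the same dual-track comparison in Python (doubles play the role of the C floating-point product, and reducing modulo $2^{64}$ plays the role of unsigned 64-bit arithmetic; `compare_products` is a made-up helper name, and the rounding-error bound of $2^{63}$ is assumed to hold for the inputs, as it does for products of this size):

```python
from functools import reduce

TWO63, TWO64 = 1 << 63, 1 << 64

def compare_products(factors1, factors2):
    """Return -1, 0 or 1 as prod(factors1) - prod(factors2) is <0, =0, >0."""
    # Track 1: floating point (assumed total rounding error < 2**63).
    d = reduce(lambda a, b: a * b, map(float, factors1), 1.0) \
        - reduce(lambda a, b: a * b, map(float, factors2), 1.0)
    if abs(d) >= TWO63:
        return -1 if d < 0 else 1
    # Track 2: exact difference modulo 2**64 (unsigned wraparound).
    m1 = reduce(lambda a, b: (a * b) % TWO64, factors1, 1)
    m2 = reduce(lambda a, b: (a * b) % TWO64, factors2, 1)
    dm = (m1 - m2) % TWO64
    if dm >= TWO63:          # reinterpret as a signed 64-bit value
        dm -= TWO64
    # The exact difference lies in (d - 2**63, d + 2**63), an interval of
    # length 2**64, so its residue mod 2**64 pins it down uniquely.
    k = round((d - dm) / TWO64)
    exact = dm + k * TWO64
    return 0 if exact == 0 else (-1 if exact < 0 else 1)
```

For the numbers in the question the two tracks together recover the exact sign — Python's native big integers make the trick unnecessary here, but this mirrors what fixed-width hardware would do.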
{ "language": "en", "url": "https://math.stackexchange.com/questions/1081963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
What parts of a pure mathematics undergraduate curriculum have been discovered since $1964?$ What parts of an undergraduate curriculum in pure mathematics have been discovered since, say, $1964?$ (I'm choosing this because it's $50$ years ago). Pure mathematics textbooks from before $1964$ seem to contain everything in pure maths that is taught to undergraduates nowadays. I would like to disallow applications, so I want to exclude new discoveries in theoretical physics or computer science. For example I would class cryptography as an application. I'm much more interested in finding out what (if any) fundamental shifts there have been in pure mathematics at the undergraduate level. One reason I am asking is my suspicion is that there is very little or nothing which mathematics undergraduates learn which has been discovered since the $1960s$, or even possibly earlier. Am I wrong?
As elementary number theory is still considered to be a pure mathematics course, much has entered this field which is currently being applied. In 1964, there was no Diffie-Hellman nor RSA public key cryptography; nor were elliptic curves being used for digital signatures, key agreements, or to generate ``random'' numbers; nor were computers an integral tool in cryptography. Nor, as far as I know, were sieve methods being taught at the undergraduate level. And besides, the main focus of cryptography in 1964 was on encryption. All that has changed---semiprimes, academically speaking, are now in vogue; a solution to the discrete logarithm problem ranks much higher (I dare say) than it did a half century ago; and even the amateur is trying to grasp at the notion of what quantum computing is all about.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "123", "answer_count": 19, "answer_id": 18 }
Equation with three variables I am confused as to how to solve an equation with three squared variables to get its integer solutions, such as: $$x^2+y^2+z^2=200$$ Thanks!
If you are doing it to find the solution to your previous question about the area of that triangle, then you should use Heron's formula and simplify it in terms of the sides $x, y, z$: $\sqrt{s(s-x)(s-y)(s-z)}=\dfrac{1}{4}\sqrt{(x+y+z)(x+y-z)(x-y+z)(-x+y+z)}$, which simplifies to $\dfrac{1}{4}\sqrt{-(x^4+y^4+z^4)+2(x^2y^2+y^2z^2+z^2x^2)}.$ Now you can substitute the values to get the required area.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How are Zeta function values calculated from within the Critical Strip? We note that for $Re(s) > 1$ $$ \zeta(s) = \sum_{i=1}^{\infty}\frac{1}{i^s} $$ Furthermore $$\zeta(s) = 2^s \pi^{s-1} \sin \left(\frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s)$$ Allows us to define the zeta function for all values where $$Re(s) < 0$$ By using the values where $$Re(s) > 1$$ But how do we define it over $$ 0 \le Re(s) \le 1$$ Which is where most of the "action" regarding the function happens anyways...
An extension of the area of convergence can be obtained by rearranging the original series. The series $$\zeta(s)=\frac{1}{s-1}\sum_{n=1}^\infty \left(\frac{n}{(n+1)^s}-\frac{n-s}{n^s}\right)$$ converges for $\Re s > 0$. See here.
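As a quick numerical sanity check of this representation (a sketch; the truncation point is arbitrary), comparing against the known value $\zeta(2)=\pi^2/6$:

```python
import math

def zeta_strip(s, terms=10000):
    # partial sum of the rearranged series; converges for Re(s) > 0, s != 1
    total = sum(n / (n + 1) ** s - (n - s) / n ** s for n in range(1, terms + 1))
    return total / (s - 1)

print(zeta_strip(2), math.pi ** 2 / 6)  # both ≈ 1.6449
```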
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 6 }
How to prove that $\sin(\theta)= 2\sin(\theta/2)\cos(\theta/2)$? $$\sin (\theta) = 2 \sin \left(\frac{\theta}{2}\right) \cos \left(\frac{\theta}{2}\right)$$ How? Please help. Thanks in advance.
If you know complex numbers, you can also use: $e^{i\theta} = \cos(\theta)+i\sin(\theta)$ with $e^{i\theta} = (e^{i\frac{\theta}{2}})^2$ (this "trick" yields De Moivre's formula). Look at the imaginary part of that expression.
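Spelling the hint out:
$$\cos\theta+i\sin\theta=e^{i\theta}=\left(e^{i\theta/2}\right)^2=\left(\cos\tfrac{\theta}{2}+i\sin\tfrac{\theta}{2}\right)^2=\cos^2\tfrac{\theta}{2}-\sin^2\tfrac{\theta}{2}+2i\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2},$$
and comparing imaginary parts gives $\sin\theta=2\sin\tfrac{\theta}{2}\cos\tfrac{\theta}{2}$ (comparing real parts gives the cosine double-angle formula for free).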
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of valid NxN Takuzu Boards a.k.a 0h h1 (details inside)? Takuzu is a logic puzzle played on an $N \times N$ grid filled with $0$'s and $1$'s following these rules: * *Every row/column has an equal number of $0$'s and $1$'s *No two rows/columns are the same *No three adjacent (all three horizontal or all three vertical) numbers are the same For more details: Takuzu. It has also recently been popular as the game 0h h1 I was wondering how many boards of size $N \times N$ ($N$ is even) are possible? For any odd $N$ it's $0$ (since rule 1 would be violated), For $N=2$, it is $2$, i.e. the boards $[01,10]$ and $[10,01]$, For $N=4$, I think it is $72$ but I'm not quite sure, For any other $N$ I'm not sure how to count them.
This is a hard problem and I doubt it's been studied seriously. However, my computer tells me that for the first few boards, you can get: * *2 x 2 ... 2 *4 x 4 ... 72 *6 x 6 ... 4,140 *8 x 8 ... 4,111,116 After that my algorithm is too slow to keep going. I searched on OEIS for any variation of this sequence but couldn't find it. So I double-checked with a different method and then tried to type it into the database, at which point the website warned me that the sequence was already there — it was just too recent and hadn't been accepted yet! Almost certainly, it's there because you asked about the problem. I was worried about stealing somebody's thunder, but it's been a week, and it's published, so here it is: A253316. Unfortunately, at the moment the answer for 10 x 10 is still a mystery.
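The small values can be reproduced by brute force (a sketch; it enumerates all $2^{N^2}$ fillings, so it is only feasible for $N\le4$ — beyond that one needs the kind of smarter search alluded to above):

```python
from itertools import product

def count_takuzu(n):
    count = 0
    for bits in product((0, 1), repeat=n * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        cols = list(zip(*rows))
        if any(sum(line) != n // 2 for line in rows + cols):
            continue                      # rule 1: balanced 0s and 1s
        if len(set(rows)) != n or len(set(cols)) != n:
            continue                      # rule 2: no repeated row/column
        if any(line[i] == line[i + 1] == line[i + 2]
               for line in rows + cols for i in range(n - 2)):
            continue                      # rule 3: no three in a row
        count += 1
    return count

print(count_takuzu(2), count_takuzu(4))  # 2 72
```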
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
The locus of points $z$ which satisfy $|z - k^2c| = k|z - c|$, for $k \neq 1$, is a circle Use algebra to prove that the locus of points z which satisfy $|z - k^2c| = k|z - c|$, for $k \neq 1$ and $c = a + bi$ any fixed complex number, is a circle centre $O$. Give the radius of the circle in terms of $k$ and $|c|$. I squared both sides and got this: $$(k^2−1)x^2+(k^2−1)y^2+(a^2+b^2-k^2a^2-k^2b^2)k^2=0$$ I might have gone wrong somewhere though. Edit. Never mind, I didn't go wrong. $$(k^2-1)x^2+(k^2-1)y^2-(k^2-1)k^2a^2-(k^2-1)k^2b^2=0$$ $$x^2+y^2=k^2(a^2+b^2)$$ $$r^2=k^2(a^2+b^2)$$ $$r=k|c|$$
Hint: If you square both sides, and expand out (initially writing things in terms of your variables and their complex conjugates), you will get a lot of useful cancellation. Here is the beginning of such a calculation, to help you out. Squaring the left hand side yields: $$(z-k^2c)(\overline{z}-k^2\overline{c})=|z|^2+k^4|c|^2-k^2(z\overline{c}+\overline{z}c).$$ Similarly, squaring the right hand side yields: $$k^2(|z|^2+|c|^2-(z\overline{c}+\overline{z}c)).$$ Setting the two sides equal and rearranging, we have $$(k^2-1)|z|^2=(k^4-k^2)|c|^2.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Integration by differentiating under the integral sign $I = \int_0^1 \frac{\arctan x}{x+1} dx$ $$I = \int_0^1 \frac{\arctan x}{x+1} dx$$ I spent a lot of time trying to solve this integral by differentiating under the integral sign, but I couldn't get anything useful. I already tried: $$I(t) = \int_0^1 e^{-tx}\frac{\arctan x}{x+1} dx ; \int_0^1 \frac{(t(x+1)-x)\arctan x}{x+1} dx ; \int_0^1 \frac{\arctan tx}{x+1} dx ; \int_0^1 \frac{\ln(te^{\arctan x})}{x+1} dx $$ and similar parametrizations. One problem is that we need to recover the constant of integration at the very end, but calculating it causes the same integration problems. For these two integrals: $$I(t) = \int_0^1 e^{-tx}\frac{\arctan x}{x+1} dx ; \int_0^1 \frac{\arctan tx}{x+1} dx$$ finding the constant isn't a problem, but solving the integrals themselves by differentiating under the integral sign is still complicated. Any ideas? I know how to solve this in other ways (at least one), but I'm particularly interested in differentiation.
I have an answer, though it does not use differentiation under the integral sign — substitute $x=\tan\theta$, then apply $\theta\mapsto\pi/4-\theta$ and add the two forms: $$I=\int_{0}^1 \frac{\tan^{-1}x}{1+x}dx=\int_{0}^{\pi/4}\frac{\theta \sec^2\theta}{1+\tan \theta}d\theta\\ =\int_{0}^{\pi/4}\frac{(\pi/4-\theta)\sec^2(\pi/4-\theta)}{1+\frac{1-\tan\theta}{1+\tan\theta}}d\theta\\=\int_{0}^{\pi/4}\frac{(\pi/4-\theta)}{1+\tan\theta}\sec^2\theta d\theta\\ \Rightarrow 2I=\frac\pi4\int_{0}^{\pi/4}\frac{\sec^2\theta }{1+\tan\theta}d\theta=\frac\pi4\ln 2\\ \Rightarrow I=\frac\pi8\ln 2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 0 }
If $(a,4)=2=(b,4)$, prove $(a+b,4)=4$. I'm almost embarrassed to be asking about a problem such as this one (exercise 12 in Niven 1.2), but here goes: Given $(a,4)=(b,4)=2$, show that $(a+b,4)=4$. I have plenty of tricks for working with gcds multiplicatively, however I have honestly no idea how to attack this problem. I sort of halfheartedly tried using Bezout's identity, writing $ax_0+4y_0 = 2$ and $bx_1+4y_1 = 2$. Adding the equations gives $$ax_0+bx_1+4(y_0+y_1) = 4$$ and hence if I were able to use that $x_0=x_1$ I'd only have to show minimality. However, seeing the exercises around this one I am completely sure I am overcomplicating things here. What am I missing?
$2$ divides both $a$ and $b$, but $4$ divides neither; hence $a\equiv b\equiv 2 \pmod 4$, so $a+b\equiv 0\pmod 4$, i.e. $a+b$ is a multiple of $4$ and $(a+b,4)=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Find the value of : $\lim_{ x \to \infty} \left(\sqrt{x+\sqrt{x+\sqrt{x}}}-\sqrt{x}\right)$ I need to calculate the limit of the function below: $$\lim_{ x \to \infty} \left(\sqrt{x+\sqrt{x+\sqrt{x}}}-\sqrt{x}\right)$$ I tried multiplying by the conjugate, substituting $x=\frac{1}{t^4}$, and both led to nothing.
Multiply both numerator and denominator by $\sqrt{x+\sqrt{x+\sqrt{x}}}+\sqrt{x}$ You will get $$\dfrac{\sqrt{x+\sqrt{x}}}{\sqrt{x+\sqrt{x+\sqrt{x}}}+\sqrt{x}}$$ Divide both numerator and denominator by $\sqrt{x}$ $$\dfrac{\sqrt{1+\dfrac{1}{\sqrt{x}}}}{\sqrt{1+\sqrt{\dfrac{1}{x}+\dfrac{1}{x\sqrt{x}}}}+1}$$ On finding the limit to infinity, you get $$\dfrac{\sqrt{1+0}}{\sqrt{1+0}+1} = \dfrac{1}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 0 }
What is the difference between a function and a formula? I think that the difference is that the domain and codomain are part of a function definition, whereas a formula is just a relationship between variables, with no particular input set specified. Hence, for two functions $f$ and $g$, $f(x)$ can be equal to $g(x)$ for all integers, say, but if the domain of $f$ is {2, 3, 4} and the domain of $g$ is {6, 7, 8, 9}, the two functions will be different. And on the converse, if the functions 'do different things' - i.e. $f(x) = x$ and $g(x) = x^3$ - but the domains of $f$ and $g$ (these are the same) are set up such that the values of the functions are the same over the domain (this would work in this case for {-1, 0, 1}), then the functions are the same, even though the formulas are different. Is this correct? Thank you.
A function is a map from one set to another. Two functions are the same if they have the same domain (and codomain) and take the same value at every point of the domain — the defining formulas need not look alike. A formula, on the other hand, is a word physicists and chemists like to use for a function that expresses a relation between variables that arise in nature.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
How can I use these two bijections to form a bijection $\mathbb{R}^{\mathbb{N}} \to \mathbb{R}$? Build a bijection $\mathbb{R}^{\mathbb{N}} \to \mathbb{R}$ by using the two following known bijections $\varphi:{\mathbb{N}} \to {\mathbb{N}} \times {\mathbb{N}}$ and $\psi:{\mathbb{R}} \to \{0,1\}^{{\mathbb{N}}}$. Edit. My solution. Use the classical bijection $\varphi: \mathbb{R}^2 \to \mathbb{R}.$ Now construct a bijection $\Phi: \mathbb{R}^{\mathbb{N}} \to \mathbb{R}$ by $$ \Phi(x_1,x_2,\ldots,x_n, \ldots)=\varphi(...\varphi(\varphi(x_1,x_2),x_3)...) $$ I know that it doesn't use those two proposed bijections, but is it correct?
Hint: Note that $\mathbb R^{\mathbb N}=(2^{\mathbb N})^{\mathbb N}=2^{\mathbb N\times\mathbb N}=2^{\mathbb N}=\mathbb R$. If you can find a bijection witnessing each equality, composing them gives you your desired bijection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
A continuous function on $[0,1]$ not of bounded variation I'm looking for a continuous function $f$ defined on the compact interval $[0,1]$ which is not of bounded variation. I think such function might exist. Any idea? Of course the function $f$ such that $$ f(x) = \begin{cases} 1 & \text{if $x \in [0,1] \cap \mathbb{Q}$} \\\\ 0 & \text{if $x \notin [0,1] \cap \mathbb{Q}$} \end{cases} $$ is not of bounded variation on $[0,1]$, but it is not continuous on $[0,1]$.
Consider any continuous function with $f(0)=0$ passing through the points $(\frac1{2n},\frac1n)$ and $(\frac1{2n+1},0)$ for every $n\ge1$, e.g. composed of linear segments (the value $f(0)=0$ makes it continuous at $0$, since $1/n\to0$). It must have infinite variation because $\sum\frac1n=\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 4, "answer_id": 2 }
Sum of a series of a number raised to incrementing powers How would I estimate the sum of a series of numbers like this: $2^0+2^1+2^2+2^3+\cdots+2^n$. What math course deals with this sort of calculation? Thanks much!
late to the party but i think it's useful to have a way of getting to the general formula. this is a geometric series which means it's the sum of a geometric sequence (a fancy word for a sequence where each successive term is the previous term times a fixed number). we can find a general formula for geometric series following the logic below $$ a = \text{first term}\\ r = \text{common ratio}\\ n = \text{number of terms}\\ S_n = \text{sum of first n terms}\\ S_n = a + ar + ar^2 + \dots+ar^{n-1}\\ \\ \\ -rS_n = -ar-ar^2-ar^3 - \dots-ar^{n}\\ S_n-rS_n = a + ar + ar^2 + \dots+ar^{n-1}-ar-ar^2-ar^3 - \dots-ar^{n}\\ S_n-rS_n =a-ar^n = a(1-r^n)\\ S_n(1-r) = a(1-r^n)\\ S_n = \frac{a(1-r^n)}{(1-r)} $$ then using $a = 1$ and $r = 2$ $$ S_n = \frac{1(1-2^n)}{(1-2)} = 2^n-1 $$ so the sum $2^0+2^1+\dots+2^n$, which has $n+1$ terms, equals $2^{n+1}-1$
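the closed form is easy to sanity-check against a direct sum (a sketch; `geom_sum` is just a made-up name):

```python
def geom_sum(a, r, n):
    # sum of the first n terms: a + a*r + ... + a*r**(n-1)
    return a * n if r == 1 else a * (1 - r ** n) // (1 - r)

# 2^0 + 2^1 + ... + 2^10 has 11 terms, so a=1, r=2, n=11
assert geom_sum(1, 2, 11) == sum(2 ** k for k in range(11)) == 2 ** 11 - 1
```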
{ "language": "en", "url": "https://math.stackexchange.com/questions/1082963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 5 }
Recommended Books for AIME/USAMO Preparation? What books would you guys recommend to learn number theory, geometry, combinatorics, and algebra at a level appropriate for the AIME and/or USAMO? With the month of February approaching, the month the AMC (American Mathematics Competition) series takes place in, I'm realizing how much more I have left to learn. Thanks
I find Larson "Problem Solving Through Problems" a great book. It's problem after problem organized by approaches and topics. Many of the solutions are particularly clever. Yet he breaks the thought process into steps so you can really gain an insight as to how to solve problems that look pretty challenging at first glance. http://www.amazon.com/Problem-Solving-Through-Problems-Problem-Mathematics/dp/0387961712
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Do there exist nontrivial global solutions of the PDE $ u_x - 2xy^2 u_y = 0 $? Consider the following PDE, $$ u_x - 2xy^2 u_y = 0 $$ Does there exist a non-trivial solution $u\in \mathcal{C}^1(\mathbb{R}^2,\mathbb{R})$? It is clear that all solutions for $u\in \mathcal{C}^1( \mathbb{R}^2_+,\mathbb{R})$ are given by $u(x,y) = f\left( x^2 - \tfrac{1}{y}\right)$ where $f\in \mathcal{C}^1(\mathbb{R}_+,\mathbb{R})$. But can we extend such solutions to the entire plane?
The only potential problem is at $y = 0$. For continuity we need $f(t)$ to go to a limit as $t \to \pm \infty$, and then we can define $u(x,0)$ as that limit (let's call it $c$). For partial differentiability wrt $y$ we need existence of $$ \lim_{y \to 0} \dfrac{u(x,y) - u(x,0)}{y} = \lim_{y \to 0} \dfrac{f(x^2 - 1/y) - c}{y} = \lim_{t \to \pm \infty} (x^2 - t) (f(t) - c) $$ For example, you could take $f(t) = \dfrac{1}{t^2+1}$, so that $$ u(x,y) = \dfrac{y^2}{(x^2 y - 1)^2 + y^2} $$ which is $C^\infty$.
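A quick finite-difference check (a sketch; the step size and sample points are arbitrary) that the proposed extension $u(x,y)=\dfrac{y^2}{(x^2y-1)^2+y^2}$ really does satisfy $u_x-2xy^2u_y=0$:

```python
def u(x, y):
    return y ** 2 / ((x ** 2 * y - 1) ** 2 + y ** 2)

def residual(x, y, h=1e-6):
    # u_x - 2*x*y^2*u_y via central differences
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux - 2 * x * y ** 2 * uy

print(residual(0.7, 0.3), residual(-1.2, 0.5))  # both ≈ 0 up to difference error
```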
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simulating elastic collision I wrote a simple program where i can move around some objects. Every object has a bounding box and I use hooke's law to apply forces to the colliding objects. On every tick, I calculate the forces, divide them by the masses, multiply by elapsed time to get the velocities and then move the objects. However, this leads to perfect elastic collision. Each of my bounding boxes has a priority value. Lower priority boxes can't push higher priority ones. This is important, because I want to have immovable objects later on. I want my spring formula to work like a rubber ball that you let go of above the ground. In other words, I want to be able to control how much of the momentum gets absorbed (it's supposed to turn into heat energy afaik) in the collision. How should I go about this in a real-time simulation? (a before-after formula is no good, the collision happens in a time interval)
You want to implement a coefficient of restitution. The common way of doing this with a penalty (spring-based) method is to modify your force depending on the relative velocity. You don't list how you estimate the closest distance between the two objects, but the main idea is to multiply the force (exerted on both objects) by $\sigma$, where $$\sigma = \begin{cases}1, & \textrm{objects are approaching}\\\textrm{CoR}, & \textrm{objects are separating}\end{cases},$$ where CoR is a number between 0 and 1 (closer to 0 for less bouncy objects). By the way, the more principled way to handle immovable objects is to ditch the priority system, and give the immovable objects infinite mass.
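A sketch of that per-tick force computation (the name `penalty_force` and the sign convention — negative relative velocity means approaching — are my own choices, not part of the answer):

```python
def penalty_force(penetration, rel_vel, k, cor):
    """Spring (penalty) contact force with a coefficient of restitution.

    penetration: overlap depth (> 0 while the boxes intersect)
    rel_vel:     rate of change of separation; < 0 means approaching
    k:           spring stiffness (Hooke's law)
    cor:         coefficient of restitution in [0, 1]
    """
    if penetration <= 0:
        return 0.0                       # no contact, no force
    sigma = 1.0 if rel_vel < 0 else cor  # full force in, damped force out
    return sigma * k * penetration

# approaching: full spring force; separating: scaled down by cor
print(penalty_force(0.5, -1.0, 200.0, 0.25))  # 100.0
print(penalty_force(0.5, 2.0, 200.0, 0.25))   # 25.0
```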
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Functional inequalities Let $x,y,z$ be the lengths of the sides of a triangle, and let $$f(x,y,z)=\left|\frac {x-y}{x+y}+\frac {y-z}{y+z}+\frac {z-x}{z+x}\right|.$$ Find the upper limit of $f(x,y,z)$. I simply used the fact that $|x-y|\le z$ and the other 3 to prove that $f(x,y,z)\le \frac 18=0.125$. But the answer given is in terms of irrational numbers. Actually it is $f(x,y,z)\le \frac {8\sqrt2-5\sqrt5}{3}\approx0.04446.$ How do irrational numbers come into the picture... and could you give the solution as well?
Solution: let $x\ge y\ge z$. First note that $$I=\dfrac{x-y}{x+y}+\dfrac{y-z}{y+z}+\dfrac{z-x}{z+x}=\dfrac{(x-y)(x-z)(y-z)}{(x+y)(y+z)(x+z)}.$$ Now substitute $$x=c+b,\ y=c+a,\ z=a+b,\qquad c\ge b\ge a>0,$$ so that $$I=\dfrac{(c-b)(c-a)(b-a)}{(2a+b+c)(2b+a+c)(2c+a+b)}<\dfrac{(c-b)cb}{(b+c)(2b+c)(2c+b)}=\dfrac{1}{F}.$$ So we need only find the minimum of $F$. Let $\dfrac{c}{b}=k>1$; then $$F=\dfrac{(1+k)(2+k)(1+2k)}{k(k-1)},\quad k>1.$$ Setting $$F'_{k}=0\Longrightarrow \dfrac{2(k^4-2k^3-7k^2-2k+1)}{(k-1)^2k^2}=0,$$ and since $k>1$ this equation has only one root, $$k=\dfrac{1}{2}(1+\sqrt{10}+\sqrt{7+2\sqrt{10}})=\dfrac{1}{2}(1+\sqrt{2}+\sqrt{5}+\sqrt{10}),$$ we get $$F(k)\ge F\left(\dfrac{1}{2}(1+\sqrt{2}+\sqrt{5}+\sqrt{10})\right)=\left(\dfrac{8\sqrt{2}-5\sqrt{5}}{3}\right)^{-1},$$ so $$\left|\dfrac{x-y}{x+y}+\dfrac{y-z}{y+z}+\dfrac{z-x}{z+x}\right|\le \dfrac{8\sqrt{2}-5\sqrt{5}}{3}.$$
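A quick numeric sanity check (a sketch in Python) that the critical point and the constant $\frac{8\sqrt2-5\sqrt5}{3}\approx 0.04446$ quoted in the question agree:

```python
from math import sqrt, isclose

k = (1 + sqrt(2) + sqrt(5) + sqrt(10)) / 2
F = (1 + k) * (2 + k) * (1 + 2 * k) / (k * (k - 1))
bound = (8 * sqrt(2) - 5 * sqrt(5)) / 3

print(1 / F)   # ≈ 0.04446
assert isclose(1 / F, bound, rel_tol=1e-9)
```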
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding the inverse of a non-linear function Let $F:\mathbb{R}^2\to \mathbb{R}^2$ be the diffeomorphism given by $$F(x,y)=(y+\sin x, x) $$ Find $F^{-1}$. I know that the answer is $F^{-1}(x,y)=(y,x-\sin y$), and this can be shown to be true by taking $F\circ F^{-1}(x,y)=F(F^{-1}(x,y))=F(y,x-\sin y)=(x-\sin y+\sin y,y)=(x,y)$ If the function is linear, we can use $F(x,y)=A\cdot (x,y)$ for $A\in\mathbb{R}^{2\times 2}$ and then find the inverse of $A$, giving us the inverse map. This cannot be done here as it is not linear; is there a standard method for solving non-linear inverse function problems like these, instead of just playing around with the equation until you reach a solution? I don't think I could solve a more complex problem than this, as I kind of used trial and error. Is there a standard concrete process that one can use?
Just found the answer — the same swap-and-solve procedure, illustrated here on the linear example $f:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ defined by $f\left(x,y\right)=\left(x+y+1,x-y-1\right)$: * *Replace $f\left(x,y\right)$ by $\left(u,v\right)$ resulting in: $x+y+1=u$ and $x-y-1=v$ *Switch $x$ and $u$ and switch $y$ and $v$ resulting in: $u+v+1=x$ and $u-v-1=y$ *Solve for $u$ and $v$ resulting in: $u=\frac{1}{2}x+\frac{1}{2}y$ and $v=\frac{1}{2}x-\frac{1}{2}y-1$ *Replace $\left(u,v\right)$ with $f^{-1}\left(x,y\right)$ resulting in: $f^{-1}\left(x,y\right)$=$\left(\frac{1}{2}x+\frac{1}{2}y,\frac{1}{2}x-\frac{1}{2}y-1\right)$ So, same procedure: following these steps for $F(x,y)=(y+\sin x, x)$ gives $u=y$ and $v=x-\sin y$, i.e. $F^{-1}\left(x,y\right)=\left(y,\,x-\sin y\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recurrence Relation Involving the Gamma Function I'm having some doubts about my approach to the following problem. I am given that the function $k(z)$ is defined such that, $$k(z)=\Gamma\left(\frac{1}{2}+z\right)\Gamma\left(\frac{1}{2}-z\right)\cos{\pi z}$$ I'm required to find the recurrence relation linking $k(z+1)$ and $k(z)$ and to then evaluate $k(z)$ for one specific integer value and thus find $k(z)$ for any integer value. My attempt was as follows. Note that $\Gamma(s+1)=s\Gamma(s)$ and so \begin{align*} k(z+1)&=\Gamma\left(\frac{1}{2}+z+1\right)\Gamma\left(\frac{1}{2}-z+1\right)\cos{(\pi z + \pi)} \\ &=\left(\frac{1}{2}+z\right)\Gamma\left(\frac{1}{2}+z\right)\left(\frac{1}{2}-z\right)\Gamma\left(\frac{1}{2}-z\right)\cos{(\pi z + \pi)} \end{align*} Then since $\cos{(\pi z + \pi)}=-\cos{\pi z}$ we have that $$k(z+1)=\left(z^2-\frac{1}{4}\right)k(z)$$ If we then consider the case $z=0$, we have $$k(0)=\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{1}{2}\right)=\pi$$ From this we see that \begin{align*} &k(1)=-\frac{1}{4}\pi\\ &k(2)=-\frac{3}{16}\pi\\ &k(3)=-\frac{45}{64}\pi\\ &\vdots \end{align*} I can't spot any pattern here other than the $4^z$ in the denominator, which is making me think I've done something wrong, maybe in my use of $\Gamma(s+1)=s\Gamma(s)$? Any advice would be really appreciated.
You made a sign error. \begin{align} \frac{k(n+1)}{k(n)} &=\frac{\Gamma\left(\frac{1}{2}+n+1\right)\Gamma\left(\frac{1}{2}-n\color{red}{-1}\right)(-\cos(\pi n))}{\Gamma\left(\frac{1}{2}+n\right)\Gamma\left(\frac{1}{2}-n\right)\cos(\pi n)}\\ &=\frac{\left(\frac{1}{2}+n\right)\Gamma\left(\frac{1}{2}+n\right)\frac{\Gamma\left(\frac{1}{2}-n\right)}{\left(\frac{1}{2}-n-1\right)}(-1)}{\Gamma\left(\frac{1}{2}+n\right)\Gamma\left(\frac{1}{2}-n\right)}\\ &=-\frac{\frac{1}{2}+n}{-\frac{1}{2}-n}\\ &=1 \end{align} Hence $k(n)=k(0)=\pi$.
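The closed form $k(n)=\pi$ is easy to corroborate numerically; in fact, by the reflection formula $\Gamma(\frac12+z)\Gamma(\frac12-z)=\pi/\cos\pi z$, one has $k(z)=\pi$ wherever $k$ is defined. A small Python check (my own addition, not part of the original answer):

```python
import math

# k(z) = Gamma(1/2 + z) * Gamma(1/2 - z) * cos(pi z); check k(n) = pi
# at small integers n. math.gamma accepts negative non-integer arguments,
# which is what 1/2 - n is for n >= 1.
def k(n):
    return math.gamma(0.5 + n) * math.gamma(0.5 - n) * math.cos(math.pi * n)

for n in range(6):
    assert abs(k(n) - math.pi) < 1e-9, (n, k(n))
print("k(n) == pi for n = 0..5")
```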
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find the number of polynomial zeros of $z^4-7z^3-2z^2+z-3=0$. Find the number of solutions of $$z^4-7z^3-2z^2+z-3=0$$ inside the unit disc. The Rouche theorem fails obviously. Is there any other method that can help? I have known the answer by Matlab, but I have to prove it by complex analysis. Thanks!
Note that $|z^4-2z^2+z-3|<7=|-7z^3|$ if $|z|=1$. The triangle inequality only says "$\leq$", but equality can only occur if $z^4$, $-2z^2$, $z$ and $-3$ all have the same argument; since $-3$ has argument $\pi$, this forces $z=-1$, so it suffices to check $z=-1$, where $z^4=1$ has argument $0$ and equality fails. The inequality is therefore strict on the whole circle, and Rouché's theorem (comparing with $-7z^3$, which has a triple zero at the origin) shows that the polynomial has exactly three zeros inside the unit disc.
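The count can be corroborated numerically via the argument principle: the number of zeros inside the unit disc equals the winding number of $p(e^{i\theta})$ about the origin. A sketch of mine (not from the answer), using only the standard library:

```python
import cmath
import math

def p(z):
    return z**4 - 7*z**3 - 2*z**2 + z - 3

# Winding number of p around 0 along |z| = 1 equals the number of zeros
# of p inside the unit disc (p has no zeros on the circle itself, since
# |z^4 - 2z^2 + z - 3| < 7 there).
N = 50000
total = 0.0
prev = cmath.phase(p(1 + 0j))
for m in range(1, N + 1):
    cur = cmath.phase(p(cmath.exp(2j * math.pi * m / N)))
    d = cur - prev
    if d > math.pi:        # unwrap jumps across the branch cut of phase
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    total += d
    prev = cur

winding = round(total / (2 * math.pi))
print(winding)  # -> 3
```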
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
proof: primitive pythagorean triple, a or b has to be divisible by 3 I'm reading "A friendly introduction to number theory" and I'm stuck on this exercise. I'm mentioning this because what I need is a basic answer; all I know about primitive pythagorean triples is that they satisfy $a^2 + b^2 = c^2$ and that a, b and c have no common factors. Now, my approach (probably kind of silly) was to "classify" the odd numbers not divisible by 3 as $6k+1$ and $6k+5$, and the even numbers not divisible by 3 as $6k+2$ and $6k+4$; then, trying different combinations of those, I could prove all the cases when I assume c is not divisible by 3, but I still have to prove that c cannot be divisible by 3 and I don't know how to do it. Anyway, probably there is a better, simpler solution. (Sorry if this is a stupid question; I'm trying to teach myself number theory without much math background.)
\begin{array}{|c|c|} \hline n \pmod 3 & n^2 \pmod 3 \\ \hline 0 & 0 \\ 1 & 1 \\ 2 & 1 \\ \hline \end{array} If neither $a$ nor $b$ is a multiple of $3$, then $a^2 + b^2 \equiv c^2 \pmod 3$ becomes $1 + 1 \equiv c^2 \pmod 3$, which simplifies to $c^2 \equiv 2 \pmod 3$; which has no solution.
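The mod-3 argument above can be backed up by brute force (my own illustration, not part of the answer): enumerate primitive triples with small legs and check that $3$ divides $a$ or $b$ in every one.

```python
from math import gcd, isqrt

# Enumerate primitive Pythagorean triples a^2 + b^2 = c^2 with
# a <= b < 200 and gcd(a, b, c) = 1, then check 3 | a or 3 | b.
triples = []
for a in range(1, 200):
    for b in range(a, 200):
        c2 = a * a + b * b
        c = isqrt(c2)
        if c * c == c2 and gcd(gcd(a, b), c) == 1:
            triples.append((a, b, c))

assert all(a % 3 == 0 or b % 3 == 0 for (a, b, c) in triples)
print(len(triples), "primitive triples checked, e.g.", triples[:3])
```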
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is $\ln(1+\frac{1}{x-1}) \ge \frac{1}{x}$ for all $x \ge 2$? Plotting both functions $\ln(1+\frac{1}{x-1})$ and $\frac{1}{x}$ in $[2,\infty)$ gives the impression that $\ln(1+\frac{1}{x-1}) \ge \frac{1}{x}$ for all $x \ge 2$. Is it possible to prove it?
hint : let $$f(x)=\ln\left(1+\frac{1}{x-1}\right)-\frac{1}{x},\qquad x\ge 2.$$ Then $$f'(x)=\frac{1}{x}-\frac{1}{x-1}+\frac{1}{x^2}=-\frac{1}{x^2(x-1)}<0,$$ so $f$ is a decreasing function on $[2,\infty)$, but $f$ stays above the $x$ axis: since $f$ is decreasing and $\lim_{x\to\infty}f(x)=0$, we get $f(x)>0$ for all $x\ge 2$ (in particular $f(2)=\ln 2-\frac{1}{2}>0$). So $$\ln\left(1+\frac{1}{x-1}\right)-\frac{1}{x}\ge 0,\qquad\text{i.e.}\qquad \ln\left(1+\frac{1}{x-1}\right)\ge\frac{1}{x}.$$
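The hint can also be spot-checked numerically. A sketch of mine (grid sampling, which supports but does not prove the claim):

```python
import math

# f(x) = ln(1 + 1/(x-1)) - 1/x should be positive and decreasing
# on [2, oo); sample it on a grid and check both properties.
def f(x):
    return math.log(1 + 1 / (x - 1)) - 1 / x

xs = [2 + 0.1 * i for i in range(2000)]              # x from 2 to 201.9
assert all(f(x) > 0 for x in xs)                     # f stays above the axis
assert all(f(u) > f(v) for u, v in zip(xs, xs[1:]))  # f is decreasing
print("checks passed; f(2) =", f(2))
```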
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
A union in the proof of Egorov's theorem Egorov's Theorem: Let $(X,M,\mu)$ be a finite measure space and $f_n$ a sequence of measurable functions on $X$ that converges pointwise a.e. on $X$ to a function $f$ that is finite a.e. on $X$. Then for each $\epsilon>0$, there is a measurable subset $X_{\epsilon}$ of $X$ for which $f_n→f$ uniformly on $X_{\epsilon}$ and $\mu(X\backslash X_\epsilon)<\epsilon$. Proof: Let $X_0$ be the set on which $f_n \to f$, so $\mu(X \backslash X_0)=0$. Let $m \in \Bbb N$. For every $x \in X_0$ we can find $n \in \Bbb N : |f_k(x)-f(x)|< \frac 1m\ \forall k \gt n$. Now define $$A_n^m = \{x\in X_0 : |f_n(x)-f(x)|< \frac 1m\},$$ and set $X_n^m=\bigcap_{k=n}^{\infty}A_k^m$. Since the $A_n^m$ are measurable and $X_n^m$ is the countable intersection of measurable sets, the $X_n^m$ are measurable. The $X_n^m$ are ascending, since as $n$ grows one is intersecting fewer and fewer of the $A_n^m$. And so on... And question is: How can I show that $\bigcup_{n=1}^{\infty}X_n^m=X_0$? It is taken as obvious in books, but if my teacher asks me about it I will not have an answer.
An arguably more efficient way to see this is to keep in mind the close relation between the logical quantifiers $\forall,\exists$ and the set operations $\cap,\cup$, respectively. In this regard, an argument showing $\bigcup_n X_n^m=X_0$ might go like this: Let $x\in X_0$. Since $f_n(x)\to f(x)$, $\forall\epsilon>0,\exists n\geq1,\forall k\geq n: |f_k(x)-f(x)|<\epsilon$. Let $m\in\mathbb{N}$. Then \begin{align} &\exists n\geq1,\forall k\geq n:|f_k(x)-f(x)|<\epsilon:=1/m\\ &\implies\exists n\geq1,\forall k\geq n:x\in A_k^m\\ &\implies\exists n\geq1, x\in \bigcap_{k\geq n} A_k^m=X_n^m\\ &\implies x\in \bigcup_{n\geq1} X_n^m \left(= \bigcup_{n\geq1} \bigcap_{k\geq n} A_k^m = \liminf_n A_n^m \right). \end{align} This gives $X_0\subseteq\bigcup_n X_n^m$; the reverse inclusion holds because each $A_k^m$, and hence each $X_n^m$, is a subset of $X_0$ by definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }