Which side of a 2d curve is a point on? Given a point $Q$ and a 2d cubic Bezier curve: $$P = A(1-t)^3 + 3Bt(1-t)^2 + 3Ct^2(1-t) + Dt^3$$ Is there a way to know which side of the curve the point lies on? I know that the term "side" is a bit strange since there can be a loop, but I'm wondering if answering this question might be more easily done than finding a point's distance to the curve, which would require solving a cubic equation. I found info about finding the distance via a cubic root here: http://www.pouet.net/topic.php?which=9119&page=1 That isn't super complex, but if answering the side question makes things easier, I'm hoping it'll extend to higher-order curves or higher-dimension curves. If this is more easily solved with a quadratic curve instead of a cubic Bezier curve, that would also be helpful! $$P = A(1-t)^2 + 2Bt(1-t) + Ct^2$$ Thanks for any info!!
I ended up finding a solution to this that I like better than implicitization (I'm a better programmer than I am a mathematician!). I am trying to use this in a computer graphics situation, and had found a paper that talked about how to use graphics hardware to draw a triangle in a specific way to make it so you could use the $u,v$ texture coordinates to test if a point was inside or outside of the halfspace defined by the curve. You can see it here: Resolution Independent Curve Rendering using Programmable Graphics Hardware In my situation, I didn't have the ability to do it the way they did because I'm not doing triangle rendering, I'm doing an operation per pixel. So, to solve it, I ended up just calculating the barycentric coordinates of a point within the triangle defined by the control points, and did the technique manually. It ended up working pretty well. Here's the technique in action, implemented in GLSL at shadertoy: Shadertoy: 2D Quadratic Bezier II
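Here is a minimal Python sketch of that manual barycentric version of the test (the original is GLSL on Shadertoy; NumPy and all names below are my own assumptions, not the paper's code):

```python
import numpy as np

def side_of_quadratic_bezier(q, p0, p1, p2):
    """Sign of the Loop-Blinn implicit function u^2 - v at point q.

    p0, p1, p2 are the quadratic Bezier control points; the result is
    negative on one side of the curve and positive on the other
    (meaningful for q inside the control triangle), and zero on the curve.
    """
    # Barycentric coordinates (w0, w1, w2) of q in triangle (p0, p1, p2)
    m = np.column_stack((p1 - p0, p2 - p0))
    w1, w2 = np.linalg.solve(m, q - p0)
    w0 = 1.0 - w1 - w2
    # Interpolate the paper's texture coordinates (0,0), (1/2,0), (1,1)
    u = 0.5 * w1 + 1.0 * w2
    v = 1.0 * w2
    return u * u - v
```

As a sanity check, a point on the curve at parameter $t$ has barycentric weights $((1-t)^2,\,2t(1-t),\,t^2)$, giving $u=t$ and $v=t^2$, so $u^2-v=0$ exactly on the curve.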
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Order on eigenvalues on diagonal matrix If the eigenvalues are say $-1$, $-1$ and $2$ for a $3$ x $3$ matrix, then when comes to the diagonal matrix, is it (from top left, to bottom right) $-1$ $-1$ $2$ or $2$ $-1$ $-1$ or $-1$ $2$ $-1$? Does the order matter? How do you know what the order is?
It does not matter, it just has to be coherent with the order of the columns of $P$ in $A=PDP^{-1}$. In some applications we prefer to order the eigenvalues but in theory it's not necessary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help for a problem with inscribed triangles If we have a triangle $ABC$ with $AB = 3\sqrt 7$, $AC = 3$, $\angle{ACB} = \pi/3$, $CL$ is the bisector of angle $ACB$, $CL$ lies on line $CD$ and $D$ is a point of the circumcircle of triangle $ABC$, what's the length of $CD$? Here I attach my solution. The problem is that I get $CD = 5\sqrt 3$ while in my text book the solution is given as $4\sqrt 3$ and I really cannot understand where I'm doing wrong. Could somebody help me out?
Your problem is in stating that $\cos \beta = \frac {\sqrt {7}} {14}$. This is incorrect. Although it is true that $\cos^2 \beta = \frac {7} {196}$, there are two possible values for $\cos \beta$. If you were to make a scale drawing, you would find that $AB$ is over twice as long as $AC$, making $\beta$ an obtuse angle. Thus $\cos \beta = -\frac {\sqrt {7}} {14}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding cubic bezier curve endpoints based on relationship between endpoints and a point on the curve. I have the following information about a bezier curve: * *The curve begins at $x=0$ and ends at $x=1$. *The curve has two control points each at the same height as their closest endpoints, one at $x=.25$ and one at $x=.75$. *The curve can be represented by $$y(x,a,b)=(1-x)^3a+3(1-x)^2xa+3(1-x)x^2b+x^3b$$ *Some point $(x,y)$ resides on the curve. *$a$ and $b$ are related, such that $b=f(a)$ and $a=f^{-1}(b)$ *$f$ is a piecewise linear function as pictured in an example below: * *$f$ is variable, however it is always increasing and passes both the vertical and horizontal line tests. My goal is to find $a(x,y)$ given some point $(x,y)$ and the function $b=f(a)$. I've tried solving the curve's equation for $a$, resulting in: $$a(b,x,y)=\frac{(2x-3)bx^2+y}{(x-1)^2(2x+1)}$$ However, substituting for $b$ results in $$a(b,x,y)=\frac{f(a(b,x,y))(2x-3)x^2+y}{(x-1)^2(2x+1)}$$ which is recursive. While I would prefer not to estimate the result, if anyone has an accurate estimation method that would be very helpful.
Plugging the known point $(x,y)$ into the equation gives $$\underbrace{\bigl((1-x)^3+3(1-x)^2x\bigr)}_\alpha a + \underbrace{\bigl(3(1-x)x^2+x^3\bigr)}_\beta b=y$$ That's the equation of a line in $a,b$-space. You can intersect that with each linear piece of your piecewise linear function. If you have $$(\alpha a_i+\beta b_i-y)(\alpha a_{i+1}+\beta b_{i+1}-y)\le 0$$ then the linear segment from $(a_i,b_i)$ to $(a_{i+1},b_{i+1})$ will intersect the line, and will contribute one possible solution (except for the special case where a zero at a shared segment endpoint gets counted once for each adjacent linear piece, even though it only contributes one solution).
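A small Python sketch of this intersection test, assuming the piecewise linear $f$ is given by its list of breakpoints (function and variable names are mine):

```python
def solve_a(x, y, breakpoints):
    """Intersect the line alpha*a + beta*b = y with each linear piece of
    b = f(a), given breakpoints [(a_0, b_0), (a_1, b_1), ...].
    Returns the a-value of every candidate solution."""
    alpha = (1 - x)**3 + 3 * (1 - x)**2 * x
    beta = 3 * (1 - x) * x**2 + x**3
    g = lambda a, b: alpha * a + beta * b - y    # signed test against the line
    solutions = []
    for (a0, b0), (a1, b1) in zip(breakpoints, breakpoints[1:]):
        g0, g1 = g(a0, b0), g(a1, b1)
        if g0 * g1 <= 0 and g0 != g1:            # sign change => crossing
            t = g0 / (g0 - g1)                   # linear interpolation parameter
            solutions.append(a0 + t * (a1 - a0))
    return solutions
```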
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing line integral using Stokes' theorem Use Stokes' theorem to show that $$\int_C ydx+zdy+xdz=\pi a^2\sqrt{3}$$ where $C$ is the curve of intersection of the sphere $x^2+y^2+z^2=a^2$ and the plane $x+y+z=0$ My attempt: By Stokes' theorem I know that $$\int_S (\nabla \times F) \cdot n \ dS=\int_c F \cdot d\alpha$$ In this case the intersection curve $C$ is a circle and $S$ is "half" of the sphere using $r(u,v)=(acos(u)sin(v), a sin(u)cos(u), acos(v))$ $0\le v \le \pi$, $-\pi/4 \le u \le 3\pi/4$ as a parametrization of the sphere and computing $\nabla \times F=(-1,-1,-1)$ (where $F=(y,z,x)$), and $${\partial r\over \partial u}\times {\partial r\over \partial v}$$ the surface integral becomes: $$\int_{-\pi/4}^{3\pi/4}\int_{0}^{\pi}({a^2sin(v)sin(2u)\over 2}-a^2sin(v)sin^2(u)+{a^2sin(2v)\over 2})dv\ du$$, but after computing the integral I don't get the answer. Can you please tell me where my mistake is?
Your parameterization doesn't look right. In particular, note that if $x = a\cos u \sin v$, $y = a\sin u \cos u$, and $z = a\cos v$, then $x^2+y^2+z^2 = a^2\cos^2u\sin^2v+a^2\sin^2u\cos^2u+a^2\cos^2v$, which does not simplify to $a^2$. One correct parameterization of the sphere would be $\vec{r}(u,v) = (a\cos u \sin v, a\sin u \color{red}{\sin v}, a\cos v)$, for appropriate bounds on $u$ and $v$. Instead of letting $S$ be the half-sphere, why not let $S$ be the flat circular disk in the plane $x+y+z = 0$? Both of these surfaces have $C$ as their boundary, but one is easier to integrate over. You know that $\nabla \times F = (-1,-1,-1)$ is constant on this disk. Also, the unit normal to the disk is the same as the unit normal to the plane $x+y+z = 0$, which is $\hat{n} = \dfrac{1}{\sqrt{3}}(1,1,1)$. Now, it is easy to compute $(\nabla \times F) \cdot \hat{n}$, which happens to be constant on this disk. Also, for any constant $c$, we have $\displaystyle\iint\limits_S c\,dS = c \cdot \text{Area}(S)$. Can you figure out the integral from these hints? Also: note that if you get $-\pi a^2\sqrt{3}$ instead of $+\pi a^2\sqrt{3}$, it's OK, since the problem didn't specify in which direction $C$ was traversed.
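For completeness, here is where the hints lead (a sketch): the plane passes through the center of the sphere, so the disk has radius $a$, and
$$\iint_S (\nabla \times F)\cdot \hat n\, dS = \left((-1,-1,-1)\cdot \tfrac{1}{\sqrt 3}(1,1,1)\right)\operatorname{Area}(S) = -\frac{3}{\sqrt 3}\,\pi a^2 = -\pi a^2\sqrt 3,$$
with the sign flipping to $+\pi a^2\sqrt 3$ for the opposite orientation of $C$.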
{ "language": "en", "url": "https://math.stackexchange.com/questions/1288934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Evaluate $ \lim_{n\to\infty} \frac{(n!)^2}{(2n)!} $ I'm completely stuck evaluating $ \lim_{n\to\infty} \frac{(n!)^2}{(2n)!} $ how would I go about solving this?
As all terms are positive, we have $$0 \leq \frac{(n!)^2}{(2n)!} = \frac{n!}{2n \cdot \dots \cdot (n+1)} = \prod_{k=1}^n \frac{k}{k+n} \leq \prod_{k=1}^n \frac{1}{2} = \left(\frac{1}{2}\right)^n$$ So then as $$\lim\limits_{n\rightarrow\infty} \left(\frac{1}{2}\right)^n = 0$$ It follows that $$\lim\limits_{n\rightarrow\infty} \frac{(n!)^2}{(2n)!} = 0$$ Edit: For what it's worth if you want quick justification of the second inequality step: $$k \leq n \implies 2k \leq k + n \implies \frac{k}{k+n} \leq \frac{1}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Tough Differential equation Can anyone help me solve this question? $$ \large{y^{\prime \prime} + y = \tan{t} + e^{3t} -1}$$ I have gotten to the part where I know $r = \pm i$, and then plugging them into a simple differential equation. I do not know how to do the next step. Thank you very much for all your help.
Find the complementary solution by solving \begin{equation*} y''+y=0. \end{equation*} Substitute $y=e^{\lambda t}$ to get \begin{equation*} (\lambda ^2+1)e^{\lambda t}=0. \end{equation*} Therefore the zeros are $\lambda=i$ or $\lambda =-i.$ The general solution is given by \begin{equation*} y=y_1+y_2=c_1e^{it}+\frac{c_2}{e^{it}}. \end{equation*} Apply Euler's identity and regroup the terms to get \begin{equation*} y=(c_1+c_2)\cos(t)+i(c_1-c_2)\sin(t) \\ =c_3\cos(t)+c_2\sin(t). \end{equation*} For the particular solution, try $y_{b_1}=\cos(t)$ and $y_{b_2}=\sin(t).$ Calculating the Wronskian $W$ gives $1$. Let $f(t)$ be RHS of the differential equation. Use the two formulae \begin{equation*} v_1=-\int \frac{f(t)y_{b_2}}{W},~v_2=\int \frac{f(t)y_{b_1}}{W} \end{equation*} to get the particular solution \begin{equation*} y_p=v_1y_{b_1}+v_2y_{b_2}. \end{equation*}
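Carrying those formulae through (a sketch; the $e^{3t}$ and constant parts can equally be handled by undetermined coefficients), one particular solution is
$$y_p = -\cos(t)\,\ln\lvert\sec(t)+\tan(t)\rvert + \tfrac{1}{10}e^{3t} - 1,$$
so the general solution is $y = c_3\cos(t) + c_2\sin(t) + y_p$.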
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
undefined angles with arcsin I have this problem but I couldn't solve it. In a paper I'm reading for controlling a device, I need to generate the following angle $$ \theta = \tan^{-1}\left( \frac{Y_{2} - Y_{1}}{ X_{2} - X_{1}} \right) $$ where $Y_{2} = 10, Y_{1} = 0, X_{2} = 10$ and $X_{1} = 0$. Now I need to generate the following angle $$ \phi = \sin^{-1} ( 0.401*\sin(\theta) - 2.208*\cos(\theta) ) $$ where $\theta = 0.7854$ (rad). The next angle $\psi$ is then generated as follows $$ \psi = \sin^{-1} \left( \frac{0.401*\sin(\theta) + 2.208*\cos(\theta)}{\cos\phi} \right) $$ In my code, both angles $\phi$ and $\psi$ are undefined. I know that $\phi$ should be wrapped so that $\cos\phi \neq 0$ to avoid singularity but the problem $\phi$ is already undefined.
When $\theta=0.7854\approx \frac{\pi}{4}$, $\sin{\theta}=\cos{\theta}=\frac{\sqrt{2}}{2}$. Evaluating your definition for $\phi$ at $\frac{\pi}{4}$ comes out to $-1.278$, which is not in the domain of $\sin^{-1}{\theta}$ (because it is not in the range of $\sin{\theta}$). The problem seems to be in the part of your expression for $\phi$ that is inside the inverse sine function, since it is generating values that are not in $[-1,1]$.
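A quick numerical check of that value (a Python sketch of my own, not code from the paper):

```python
import numpy as np

theta = np.arctan2(10 - 0, 10 - 0)                # 0.78539816... rad
s = 0.401 * np.sin(theta) - 2.208 * np.cos(theta)
print(s)                                          # -1.2777..., outside [-1, 1]
print(np.arcsin(s))                               # nan: phi really is undefined
```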
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many $s,t,u$ satisfy: $s +2t+3u +\ldots = n$? Given $n\in \mathbb{N}^+$, what is the possible number of combinations $s,t,u,\ldots\in\mathbb{N}$, such that: $$s +2t+3u +\ldots = n\quad?$$ Additionally, is there an efficient way to find these combinations other than an elimination process? This problem comes from the formula for series reversion, and gives the number of terms in each inverse coefficient.
The number of solutions of the equation equals the coefficient of $z^n$ in the expression $$ \frac{1}{(1-z)(1-z^2)(1-z^3)\ldots }=\sum_{n}p(n)z^n, $$ where $p(n)$ is the Euler partition function, so that the number of combinations is $p(n)$. (As Christoph notes:) there is a bijection between partitions of $n$ and the desired combinations, since from a partition $a_1+a_2+\ldots+a_k=n$ you set $s$ to the number of $1$s appearing, $t$ to the number of $2$s appearing, etc.
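For example, for $n=4$ the $p(4)=5$ solutions of $s+2t+3u+4v=4$ are
$$(s,t,u,v)\in\{(4,0,0,0),\,(2,1,0,0),\,(0,2,0,0),\,(1,0,1,0),\,(0,0,0,1)\},$$
corresponding to the partitions $1+1+1+1$, $2+1+1$, $2+2$, $3+1$ and $4$.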
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why does the general formula of the Taylor series for $\ln(x)$ not work for $n=0$? I need to find the Taylor series for $\ln(x)$ about $a = 2$, and I have found the following solution, but I don't understand why the general formula does not work for $n = 0$.
The general formula would be $\frac{(-1)^{0+1} (-1)!}{2^0} = -(-1)!$. The factorial of $-1$ is undefined, so the general formula isn't defined for $n=0$. However $f^{(0)}(2) = f(2) =\ln 2$. That's what is meant by "not working". Note that this is closely related to $\int x^{n-1}\ \mathrm dx = \frac1n x^n$, which is also not defined for $n=0$, but $\int x^{-1} \ \mathrm dx = \ln x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral equation involving Planck radiation formula I am stuck in solving the following integral equation: $$\sigma T^4=\pi\int_{\lambda_0}^{\lambda_1}d\lambda W_{\lambda,T}$$ where: $$W_{\lambda,T}=\dfrac{C_1}{\lambda^5\left(\exp\left(\frac{C_2}{\lambda T}\right)-1\right)}$$ and $C_1,C_2$ are constant coefficients. Fixing $\lambda_1$ and $\lambda_2$ and plotting the left and the right side of the equation, I found the approximate numerical value of the variable $T$. Is it possible to solve the previous equation analytically? Thanks.
Ok, as suggested in my comment, substitute $b/\lambda=x$, $d\lambda=-\frac{b}{x^2}\,dx$, with $b=C_2/T$. Therefore our integral becomes: $$ I(C_1,b,\lambda_1,\lambda_0)=\frac{C_1 \pi}{b^4}\underbrace{\int^{b/\lambda_0}_{b/\lambda_1}\frac{x^3}{e^x-1}}_{J(b/\lambda_1,b/\lambda_0)}dx $$ (the minus sign from $d\lambda$ is absorbed by swapping the limits). Therefore (I will not care about possible convergence issues, I leave this to you, but there is no real difficulty) using the geometric series $\frac{1}{e^x-1}=\sum_{n=1}^{\infty}e^{-nx}$, $$ {J(b/\lambda_1,b/\lambda_0)}=\sum_{n=1}^{\infty}\int^{b/\lambda_0}_{b/\lambda_1}x^3e^{-nx}dx=\left[-\sum_{n=1}^{\infty}e^{-nx}\left(\frac{x^3}{n}+\frac{3x^2}{n^2}+\frac{6x}{n^3}+\frac{6}{n^4}\right)\right]_{x=b/\lambda_1}^{x=b/\lambda_0} $$ You may conclude by using the definition of the polylogarithm $$Li_s(z)=\sum_{n=1}^{\infty}\frac{z^n}{n^s},$$ which turns the bracket into $-\left(x^3\,Li_1(e^{-x})+3x^2\,Li_2(e^{-x})+6x\,Li_3(e^{-x})+6\,Li_4(e^{-x})\right)$ evaluated at the endpoints. Because the result is rather messy, I spare it here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can $a=\left(\sqrt{2(\sqrt{y}+\sqrt{z})(\sqrt{x}+\sqrt{z})}-\sqrt{y}-\sqrt{z}\right)^2$ be an integer if $x$, $y$, and $z$ are not squares? Let $\gcd(x,y,z)=1$. Can we find 3 non-perfect squares $x,y,z\in \mathbb{Z},$ such that $a \in \mathbb{Z} \geq 2$ $$a=\left(\sqrt{2(\sqrt{y}+\sqrt{z})(\sqrt{x}+\sqrt{z})}-\sqrt{y}-\sqrt{z}\right)^2$$ I cannot seem to find any such triplets. Any hints on how to prove it?
Let $y=k^2$, $z=r^2$ and $x=c^2$ (with $k,r,c$ integers); then the expression becomes: $$(\sqrt {2\cdot (k+r)(c+r)}-k-r)^2$$ Now for $a$ to be an integer $$2(k+r)(c+r)=n^2$$ with $n$ integer. We put $$k+r=2^{2p+1}\cdot t^s$$ And $$c+r=t^v$$ (or vice versa, with $v$ and $s$ both odd or even). If $r=1$, $$k=2^{2p+1}\cdot t^s-1$$ And $$c=t^v-1$$ This can be a solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
"Rationalizing" an equation $$x=\sqrt[3]{p}+\sqrt[3]{q}$$ I'm trying to figure out some way to "rationalize" the previous equation, meaning to rewrite it purely in terms of whole number powers of $p$, $q$, and $x$. It seems quite simple but I've been stuck trying to do it. I'd appreciate anyone's help with this.
Hint: Use the high school identity: $$(a+b)(a^2-ab+b^2)=a^3+b^3.$$
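One way to carry this out (using the cube-of-a-sum expansion rather than the hint directly): cubing $x=\sqrt[3]{p}+\sqrt[3]{q}$ with $(u+v)^3=u^3+v^3+3uv(u+v)$ gives
$$x^3 = p+q+3\sqrt[3]{pq}\,x,$$
and isolating the remaining radical and cubing once more yields
$$(x^3-p-q)^3 = 27\,pq\,x^3,$$
which contains only whole-number powers of $p$, $q$ and $x$.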
{ "language": "en", "url": "https://math.stackexchange.com/questions/1289915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
For which values of $\alpha \in \mathbb{R}$, does the series $\sum_{n=1}^\infty n^\alpha(\sqrt{n+1} - 2 \sqrt{n} + \sqrt{n-1})$ converge? How do I study for which values of $\alpha \in \mathbb{R}$ the following series converges? (I have some troubles because of the form [$\infty - \infty$] that arises when taking the limit.) $$\sum_{n=1}^\infty n^\alpha(\sqrt{n+1} - 2 \sqrt{n} + \sqrt{n-1})$$
Hint: $\sqrt{n+1}-2\sqrt{n}+\sqrt{n-1} = \frac{-2}{(\sqrt{n}+\sqrt{n+1})(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n}+\sqrt{n-1})}$ So for big $n$ this term is approximately $-\frac{1}{4}n^{-3/2}$, and the series behaves like $-\frac14\sum n^{\alpha-3/2}$, which converges exactly when $\alpha-\frac32<-1$, i.e. $\alpha<\frac12$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Solving an integral with trig substitution I'm looking to solve the following integral using substitution: $$\int \frac{dx}{2-\cos x}$$ Let $z=\tan\frac{x}{2}$ Then $dz=\frac 1 2 \sec^2 \frac x 2\,dx$ $$\sin x=\frac{2z}{z^2+1}$$ $$\cos x =\frac{1-z^2}{z^2+1}$$ $$dx=\frac{2\,dz}{z^2+1}$$ $$\int \frac{dx}{2-\cos x} = \int \frac{\frac{2\,dz}{z^2+1}}{2-\frac{1-z^2}{z^2+1}} =\int \frac{2\,dz}{3z^2+1}$$ But this is where things start to look at bit sticky. If I integrate this last fraction, then I get a very complex expression that seems to defeat the point of z-substitution. Any suggestions for where I may be going wrong? Thanks! Edit: Thank you for your feedback. I've completed my work as per your suggestions: $$\int \frac{2\,dz}{3z^2+1} = 2\cdot\left(\frac{\tan^{-1} \frac{z}{\sqrt{3}}}{\sqrt{3}} \right) = \frac{2\tan^{-1} \left(\sqrt{3}\tan{\frac{x}{2}}\right)}{\sqrt{3}}+c$$
HINT: $$\int \frac{1}{a^2+x^2}dx=\frac1a \arctan(x/a)+C$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Identification of a quadrilateral as a trapezoid, rectangle, or square Yesterday I was tutoring a student, and the following question arose (number 76): My student believed the answer to be J: square. I reasoned with her that the information given only allows us to conclude that the top and bottom sides are parallel, and that the bottom and right sides are congruent. That's not enough to be "more" than a trapezoid, so it's a trapezoid. Now fast-forward to today. She is publicly humiliated in front of the class, and my reputation is called into question once the student claims to have been guided by a tutor. The teacher insists that the answer is J: square ("obviously"... no further proof was given). * *Who is right? Is there a chance that we're both right? *How should I handle this? I told my student that I would email the teacher, but I'm not sure that's a good idea.
Of course, you are right. Send an email to the teacher with a concrete example, given that (s)he seems to be geometrically challenged. For instance, you could attach the following pictures with the email, which are both drawn to scale. You should also let him/her know that you need $5$ parameters to fix a quadrilateral uniquely. With just $4$ pieces of information as given in the question, there exist infinitely many possible quadrilaterals, even though all of them have to be trapeziums, since the sum of adjacent angles being $180^{\circ}$ forces the pair of opposite sides to be parallel. The first one is an exaggerated example where the trapezium satisfies all conditions but is nowhere close to a square, even visually. The second one is an example where the trapezium visually looks like a square but is not a square. Not only should you email the teacher, but you should also direct him/her to this math.stackexchange thread. Good luck! EDIT: Also, one might try to explain to the teacher, using the picture below, that for the question only the first criterion is met, i.e., only one pair of opposite sides has been made parallel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "210", "answer_count": 15, "answer_id": 10 }
What will happen if we remove the hypothesis that $V$ is finite-dimensional in this problem Original problem: Suppose $V$ is finite-dimensional, $S, T, U \in L(V)$ and $STU = I$. Show that $T$ is invertible and that $T^{-1} = US$. I know that it is because of the hypothesis of finite-dimensionality that we can take advantage of a basis or spanning list of vectors. But I'm not quite clear on the power of "finite-dimensional" in linear algebra. Please help me with some examples and insights. Really really appreciate it.
This is definitely false in infinite dimensions. Let $V$ be a vector space of countably infinite dimension, and pick a basis $\{v_n\}_{n \in \mathbb{N}}$ for $V$. Let $T$ be the map which sends $v_1 \mapsto 0$ and $v_n \mapsto v_n$ for $n\geq 2$. Let $U$ be the map which sends $v_n \mapsto v_{n+1}$. Let $S$ be the map which sends $v_1 \mapsto 0$ and $v_n \mapsto v_{n-1}$ for $n \geq 2$. Then $STU=I_V$ but $T$ is not invertible since nothing maps to $v_1$ under $T$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What does $\mathbb{R} \setminus S$ mean? What does $\mathbb{R}\setminus S$ mean? I am not getting what it actually means. I have found it in many places in real analysis, for example in the definition of boundary points of a set. Can anyone tell me what it really means?
That symbols means set difference. It is called \setminus in $\TeX.$ If $A$ and $B$ are sets, then $A \setminus B$ is the set of elements in $A$ but not in $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that $H$ is a normal subgroup of $G$ I have the following question. Let $p$ be a prime, let $G$ be a group and let $H$ be a subgroup of $G$: $$ G = \left\{ \begin{bmatrix} a & b \\ 0 & 1 \end{bmatrix} : a,b \in \mathbb{Z}_p, a \neq 0 \right\}$$ and $$ H = \left\{ \begin{bmatrix} 1 & b \\ 0 & 1 \end{bmatrix} : b \in \mathbb{Z}_p\right\} $$ Prove that $H$ is a normal subgroup. My textbook gives two definitions of a normal subgroup. The first one is that if the left and right cosets coincide, the subgroup is normal. The other definition is: let $H$ be a subgroup of $G$; then $H$ is a normal subgroup of $G$ iff $xhx^{-1} \in H$ for every $h \in H$ and every $x \in G$. I have tried the first definition and got $$ xH = \left\{ \begin{bmatrix} a & ac+b \\ 0 & 1 \end{bmatrix} \right\} $$ and for the right coset $$ Hx = \left\{ \begin{bmatrix} a & b+c \\ 0 & 1 \end{bmatrix} \right\} $$ and I can't see that $xH \subset Hx$ and $Hx \subset xH$. And I don't know how to apply the second definition. Need help!
To follow Solid Snake's suggestion, it will help if you know that $$ \begin{bmatrix} a & b \\ 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} a^{-1} &-ba^{-1}\\ 0 & 1 \end{bmatrix} $$
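Spelling the second definition out with this inverse: for $x=\begin{bmatrix} a & b \\ 0 & 1 \end{bmatrix}\in G$ and $h=\begin{bmatrix} 1 & c \\ 0 & 1 \end{bmatrix}\in H$,
$$xhx^{-1}=\begin{bmatrix} a & ac+b \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a^{-1} & -ba^{-1} \\ 0 & 1 \end{bmatrix}=\begin{bmatrix} 1 & ac \\ 0 & 1 \end{bmatrix}\in H,$$
so $xhx^{-1}\in H$ for every $x\in G$ and $h\in H$, i.e. $H$ is normal.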
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How to compute $\int_0^\infty \frac{x^4}{(x^4+ x^2 +1)^3} dx =\frac{\pi}{48\sqrt{3}}$? $$\int_0^\infty \frac{x^4}{(x^4+ x^2 +1)^3} dx =\frac{\pi}{48\sqrt{3}}$$ I have difficulty to evaluating above integrals. First I try the substitution $x^4 =t$ or $x^4 +x^2+1 =t$ but it makes integral worse. Using Mathematica I found the result $\dfrac{\pi}{48\sqrt{3}}$ I want to know the procedure of evaluating this integral.
Here is an approach. You may write $$\begin{align} \int_0^{\infty}\frac{x^4}{\left(x^4+x^2+1\right)^3}dx &=\int_0^{\infty}\frac{x^4}{\left(x^2+\dfrac1{x^2}+1\right)^3\,x^6}dx\\\\ &=\int_0^{\infty}\frac{1}{\left(x^2+\dfrac1{x^2}+1\right)^3}\frac{dx}{x^2} \\\\ &=\int_0^{\infty}\frac{1}{\left(x^2+\dfrac1{x^2}+1\right)^3}dx\\\\ &=\frac12\int_0^{\infty}\frac{1}{\left(x^2+\dfrac1{x^2}+1\right)^3}\left(1+\dfrac1{x^2}\right)dx\\\\ &=\frac12\int_0^{\infty}\frac{1}{\left(\left(x-\dfrac1{x}\right)^2+3\right)^3}d\left(x-\dfrac1{x}\right)\\\\ &=\frac12\int_{-\infty}^{+\infty}\frac{1}{\left(u^2+3\right)^3}du\\\\ &=\frac14\:\partial_a^2\left(\left.\int_{-\infty}^{+\infty}\frac{1}{\left(u^2+a\right)}du\right)\right|_{a=3}\\\\ &=\frac14\:\partial_a^2\left.\left(\frac{\pi}{\sqrt{a}}\right)\right|_{a=3}\\\\ &=\color{blue}{\frac{\pi }{48 \sqrt{3}}} \end{align}$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
If $\forall V\subseteq X$ where $x\in \overline V; f(x) \in \overline{f(V)}$, then $f$ is continuous at $x$ Let $f:(X,\tau_X)\to (Y,\tau_Y)$ Prove: If $\forall V\subseteq X$ where $x\in \overline V; f(x) \in \overline{f(V)}$, then $f$ is continuous at $x$. Could someone verify the following proof? Proof If $f$ were not continuous at $x$, then $$(\exists U \text{ neighbourhood of } f(x))(f^{-1}(U) \text{ is not a neighbourhood of } x) $$ $U$ is a neighbourhood of $f(x)$, so $f(x)$ is an interior point of $U \Rightarrow x \in f^{-1}(U). \qquad \color{red}{(A)}$ However $f^{-1}(U)$ is not a neighbourhood of $x$, thus $x$ is not an interior point of $f^{-1}(U)$. Or $x\in X\setminus (f^{-1}(U))°$. Which is equivalent with $x\in \overline{X\setminus f^{-1}(U)}$. Since $X\setminus f^{-1}(U) = \{x\in X: f(x) \notin U\}$ then $f(X\setminus f^{-1}(U)) = \{f(x)\in f(X): f(x) \notin U\}$. Then $f(x) \in \overline{f(X\setminus f^{-1}(U))}$ means $f(x)\not \in U$ or $f(x)$ adherent to $f(X\setminus f^{-1}(U))$. At $\color{green}{(B)}$ we reach a contradiction with $\color{red}{(A)}$. Remarks I think the general idea of the proof is good, but I'm not really sure if the notation at $\color{green}{(B)}$ is correct... How do I write down the contradiction?
Same idea, better put: Suppose that $U$ is a neighbourhood of $f(x)$ such that $f^{-1}[U]$ is not a neighbourhood of $x$, which indeed means that $x \in \overline{X \setminus f^{-1}[U]}$. So by assumption $f(x) \in \overline{f[X \setminus f^{-1}[U]]}$. As $U$ is a neighbourhood of $f(x)$, $U$ intersects $f[X \setminus f^{-1}[U]]$. So some $y = f(p) \in U$ exists such that $p \in X \setminus f^{-1}[U]$, and the latter means that $f(p) \notin U$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
False $\Sigma_1$-sentences consistent with PA I'm preparing for an exam and encountered the following exercise in the notes I use. In the next chapter we shall see that there are $\Sigma_1$-sentences which are false in $\mathcal{N}$ but consistent with PA. Use this to show that the following implication does not hold: for a $\Sigma_1$-formula $\phi(w)$ with only free variable $w$, if $\exists!w\phi(w)$ is true in $\mathcal{N}$, then $\mathrm{PA} \vdash \exists!w\phi(w)$. So we want to find a $\Sigma_1$-formula $\phi(w)$ such that $\mathcal{N} \models \exists!w\phi(w)$, but $\mathrm{PA} \not\vdash \exists!w\phi(w)$. We are given the fact that there exists a $\Sigma_1$-sentence $\psi$, such that $\mathcal{N} \not\models \psi$ and $\mathrm{PA} \not\vdash \psi \to \bot$. But then also $\mathcal{N} \models \psi \to \bot$; hence the sentence $\psi \to \bot$ is already what we are looking for. But this $\psi$ doesn't have a free variable. Now we could tack on a $\wedge\, w = 0$ or something like that, but I suspect I've made an error somewhere.
The answer you give works, but I think here's what they're looking for: replace $\varphi(x)$ with $\varphi'(x)\equiv\varphi(x)\wedge\forall y(y<x\implies \neg\varphi(y))$. By induction, $\varphi$ has a solution iff $\varphi'$ has a unique solution. This is a trick which is frequently useful and nontrivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving $e^\frac1x = x$ non-graphically? This question has come up twice in different tests and the instructions always point out that it should be solved using a graphic calculator. Fair enough, the answer is ≈ 1.76322...(goes on forever?). But how do you approach $e^\frac1x = x$ analytically for that solution? Is there a way?
Note that $e^{\frac{1}{x}}>0$ while $x<0$ for negative $x$, so there is no solution with $x<0$; hence we only have to consider $x>0$. If $e^{\frac{1}{x}}=x$, then $1/x=\ln(x)$ for $x>0$, which means that $x\ln(x)=1$. Now write $x\ln(x)=\ln(x)e^{\ln(x)}=1$ and note that Lambert's W function then gives $\ln(x)=W(1)$, hence $x=e^{W(1)}$. Since $W(1)e^{W(1)}=1$, this equals $1/W(1)\approx 1.76322$: the solution is exact, but it cannot be written in elementary functions.
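If you just want the number, a quick check (a sketch assuming SciPy's `lambertw`):

```python
import numpy as np
from scipy.special import lambertw

w = lambertw(1).real      # the omega constant W(1) ~ 0.567143
x = np.exp(w)             # the claimed solution e^{W(1)}
print(x)                  # 1.763222834... matches the calculator value
print(np.exp(1 / x) - x)  # ~0, so the equation is satisfied
```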
{ "language": "en", "url": "https://math.stackexchange.com/questions/1290950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Derivative of matrix exponential w.r.t. to each element of the matrix I have $x= \exp(At)$ where $A$ is a matrix. I would like to find the derivative of $x$ with respect to each element of $A$. Could anyone help with this problem?
I arrive at the following. The $ij$ element of $e^A$ is $$(e^A)_{ij} = \sum_{n=0}^\infty \frac{1}{n!}\, A_{ik_1} A_{k_1k_2} \cdots A_{k_{n-1}j} ~,$$ with summation over the repeated indices $k_1,\ldots,k_{n-1}$. Each matrix element can be seen as an independent variable, so the derivative with respect to $A_{kl}$ is $$\frac{\partial (e^A)_{ij}}{\partial A_{kl}} = \sum_{n=1}^\infty \frac{1}{n!} \sum_{p=0}^{n-1} (A^p)_{ik}\, (A^{n-1-p})_{lj} ~.$$ For my purpose $t$ is less relevant. It can easily be added in without changing my result.
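A numerical sanity check of that series formula against a central finite difference (a sketch assuming NumPy/SciPy; the truncation depth is an arbitrary choice of mine):

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
k, l, eps = 1, 2, 1e-6

# Central finite difference of expm(A) with respect to A[k, l]
E = np.zeros_like(A)
E[k, l] = 1.0
fd = (expm(A + eps * E) - expm(A - eps * E)) / (2 * eps)

# Truncated series: sum_{n>=1} (1/n!) sum_{p=0}^{n-1} A^p E A^{n-1-p}
S = np.zeros_like(A)
for n in range(1, 30):
    inner = sum(np.linalg.matrix_power(A, p) @ E @
                np.linalg.matrix_power(A, n - 1 - p) for p in range(n))
    S += inner / math.factorial(n)

print(np.max(np.abs(fd - S)))  # small (~1e-9): the formula checks out
```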
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 4 }
Trigonometric double angle formulas problem I want to simplify the answer to an equation I had to compute, namely, simplifying $\sin^2 (2y) + \cos^2 (2y)$. I know that $\sin^2 (y) + \cos^2 (y) = 1$ but is there anything like that I can use at all?
In general $$\sin^2(\color{red}{\rm something}) + \cos^2(\color{red}{\text{the same thing}})=1.$$So your expression is just $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
quaternions - understanding a formula Quaternions are new to me. I am trying to understand the following formula: What are: * *$\large{q^x}$ ? I don't think it is a power. *$\large{q^t}$ ? Just a transposition of the quaternion $q$? Do the subscripts next to the $q$'s represent entire rows or columns of the quaternion in question? This should normally give me a $3 \times 3$ matrix $R$, if I understood it correctly. source: http://www.dept.aoe.vt.edu/~cdhall/courses/aoe4140/attde.pdf page 4-14
Addressing your questions in order: * *$\mathbf q^\times$ is the $3\times3$ matrix that represents the operation of taking the cross product with $\mathbf q$, i.e. $(\mathbf q^\times)\mathbf x=\mathbf q\times\mathbf x$. *No, $\mathbf q$ is not a quaternion. The quaternion is being denoted by $\bar{\mathbf q}$, and $\mathbf q$ is the vector of its components $1$ to $3$. The symbol $^\top$ represents transposition. *The subscripts refer to the four components of the quaternion, which are here being indexed with $1$ to $4$.
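For reference, the cross-product matrix of $\mathbf q=(q_1,q_2,q_3)^\top$ is the standard skew-symmetric form
$$\mathbf q^\times = \begin{bmatrix} 0 & -q_3 & q_2 \\ q_3 & 0 & -q_1 \\ -q_2 & q_1 & 0 \end{bmatrix}.$$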
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving two Recurrence Relations I encountered this problem in the International baccalaureate Higher Level math book, and my teacher could not help. If anyone could please take the time to look at it, that would be great. I need help with question (b) in the attached picture. All of the relevant information is in there; I'm just conflicted about how to solve it. Thanks :)
Maybe try this: $$ a_{n+2} = 3a_{n+1}+b_{n+1}=9a_n+3b_n+4a_n-b_n=14a_n+2b_n \\ 2a_{n+1}+8a_n=6a_n+2b_n+8a_n=14a_n+2b_n $$ They are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many 3 letters-long codes can be made by 5 different letters? You have five letters: C, H, E, S, T How many different codes, consisting of three letters, can be made from the above letters? I'd say ${5}\choose{3}$ is the correct answer, since the order of the letters doesn't matter. Is this (that the order doesn't matter) why $\frac{5!}{3!}$ isn't the correct answer?
Letters can be repeated. The first letter can be chosen from 5 different letters, so can the second and third letter. Thus the answer is $5^{3} = 125$ different codes. Thank you @Yinon Eliraz!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Direct Proof on Divisibility Using induction for proofs makes sense to me and I know how to do it, but I am having a problem using a direct proof for a practice problem that was given to us. The problem is: For all natural numbers $n$, $2n^3 + 6n^2 + 4n$ is divisible by 4. We are to use a direct proof as a way of proving it. I have no clue where to start.
For all $\,n\!:\,$ $\,4\mid 2f(n)$ $\!\iff\! 2\mid f(n)$ $\!\iff\! f(0)\equiv 0\equiv f(1)\pmod{\!2}$ $\!\iff 2\mid f(0),f(1)$ for any polynomial $\,f\,$ with integer coefficients (see also the Parity Root Test). This applies to the OP's $\,f(n) = n^3+3n^2+2n\,$ since $\ 2\mid f(0) = 0\ $ and $\ 2\mid f(1) = 6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
find the derivative of the integral Prove that the following integral $F(x)$ is differentiable for every $x \in \mathbb{R}$ and calculate its derivative. $$F(x) = \int\limits_0^1 e^{|x-y|} \mathrm{d}y$$ I don't know how to get rid of the absolute value in the integral. Any ideas?
Let $\phi_y(x) = e^{|x-y|}$. The mean value theorem shows that $|e^a-e^b| \le e^{\max(a,b)} |a-b|$, and also $||a|-|b|| \le |a-b|$, hence we have $|{\phi_y(x+h)-\phi_y(x) \over h}| \le e^{\max(|x-y|,|x+h-y|)}$ for $h \neq 0$. If $\phi_y(x) = e^{|x-y|}$ then $\phi_y'(x) = e^{|x-y|}$ for $x>y$ and $\phi_y'(x) = -e^{|x-y|}$ for $x <y$. If we fix $x$ and take $|h| < 1$, then we have $e^{\max(|x-y|,|x+h-y|)} \le M$ for some $M$ and all $y \in [0,1]$, hence the quotient ${\phi_y(x+h)-\phi_y(x) \over h}$ is uniformly bounded by $M$ and ${\phi_y(x+h)-\phi_y(x) \over h} \to \phi_y'(x)$ for a.e. $y$. The dominated convergence theorem gives $F'(x) = \int_0^1 \phi_y'(x) dy$, hence we have $F'(x) = \int_0^1 \operatorname{sgn}(x-y)\phi_y(x) dy = \int_0^1 \operatorname{sgn}(x-y)e^{|x-y|} dy$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How do I read this triple summation? $\sum_{1\leq i < j < k \leq 4}a_{ijk}$ How do I read this triple summation? $$\sum_{1\leq i < j < k \leq 4}a_{ijk}$$ The exercise asks me to express it as three summations and to expand them in the following way: 1) Summing first on $k$, then on $j$ and last on $i$. 2) Summing first on $i$, then on $j$ and last on $k$. My attempt: $$ \sum_{k=3}^4\sum_{j=2}^{k-1}\sum_{i=1}^{j-1} a_{ijk}=\sum_{j=2}^2\sum_{i=1}^{j-1}a_{ij3}+\sum_{j=2}^3\sum_{i=1}^{j-1}a_{ij4}=\sum_{i=1}^1a_{i23}+\sum_{i=1}^{1}a_{i24}+\sum_{i=1}^2a_{i34} \\=a_{123}+a_{124}+a_{134}+a_{234}$$ Is this correct so far? I don't know how to do the following parts of the exercise. Could someone give me a general explanation on how to read this type of multiple summation? Thanks all in advance.
You are correct. For (1), $\displaystyle\sum_{1\leq i<j<k\leq 4}a_{ijk}=\sum_{1\leq i<j< 3}a_{ij3}+\sum_{1\leq i<j<4}a_{ij4}=\sum_{1\leq i<2}a_{i23}+\sum_{1\leq i<2}a_{i24}+\sum_{1\leq i<3}a_{i34}=a_{123}+a_{124}+a_{134}+a_{234}.$ (2) is analogous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Find an example of a sequence not in $l^1$ satisfying certain boundedness conditions. This question is about getting a concrete example for this question on bounded holomorphic functions posed by @user122916 (something that he really expected as explained in the comments). Give an example of a sequence of complex numbers $(a_n)_{n\ge 0}$ so that \begin{eqnarray} |\sum_{n\ge 0} {a_n z^n} | &\le &1 \text{ for all }z \in \mathbb{C}, |z| < 1 \\ \sum_{n\ge 0} |a_n| &=& \infty \end{eqnarray} Such sequences exist because there exist bounded holomorphic functions on the unit disk that do not have a continuous extension to the unit circle ( one finds a bounded Blaschke product with zero set that contains the unit circle in its closure). However, a concrete example escapes me. Note that all this is part of the theory of $H^{\infty}$ space, so the specialists might have one at hand.
An example is $f(z) = \exp {(-\frac{1+z}{1-z})}.$ As $-\frac{1+z}{1-z}$ is a conformal map of the open unit disc $\mathbb {D}$ onto the left half plane, $f$ is bounded and holomorphic in $\mathbb {D}.$ We have $f$ continuous on $\overline {\mathbb {D}} \setminus \{1\}.$ Check that on $\partial \mathbb {D}\setminus \{1\},$ we have $|f| = 1.$ However $f(r) \to 0$ (to say the least) as $r\to 1^-.$ It follows that $f$ does not have a continuous extension to $\overline {\mathbb {D}}.$ Hence for this $f, \sum_{n=1}^{\infty}|a_n| = \infty.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1291914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Is the semidirect product of normal complementary subgroups a direct product. If $G$ is a group with $K$ and $N$ as normal complementary subgroups of $G$, then we can form $G \cong K \rtimes_\varphi N$ where $\varphi:N \to Aut(K)$ is the usual conjugation. But someone told me that $G \cong K \times N$ also. Is this true? It suffices to prove that $\varphi$ is trivial, but why is this the case? (This question came up in an attempt at a different problem. In the specific case that this relates to, I also know that $K$ and $N$ are simple groups, but I don't know if this is relevant.)
This is a consequence of the fact that if two normal subgroups intersect trivially, then elements of the first group commute with elements of the second group, so conjugation does nothing. This is proved by showing the commutator of the two elements is contained in the intersection of the subgroups, which means it is the identity.
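Explicitly: for $k\in K$ and $n\in N$,
$$[k,n]=knk^{-1}n^{-1}=\underbrace{(knk^{-1})}_{\in N}\,n^{-1}\in N \qquad\text{and}\qquad [k,n]=k\,\underbrace{(nk^{-1}n^{-1})}_{\in K}\in K,$$
so $[k,n]\in K\cap N=\{e\}$, i.e. $kn=nk$, and the conjugation action $\varphi$ is trivial.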
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $\overline A\cap B = A\cap \overline B = \varnothing$, $A\cup B$ is disconnected. I'm trying to show that if $\overline A\cap B = A\cap \overline B = \varnothing$, $A\cup B$ is disconnected. First of all, I think I have to assume that $A$ and $B$ are nonempty, or else the statement would not be true if I just let $A = \varnothing$ and let $B$ be a connected set. I'm working in a metric space where the definition of a set $S$ being open is that $\forall x\in S$, $\exists \varepsilon > 0: B(x,\varepsilon)\subseteq S$, and the definition of a set $T$ being closed is that $T$ is the complement of an open set. So assuming $A$ and $B$ are nonempty and $\overline A\cap B = A\cap \overline B = \varnothing$, to demonstrate that $S = A\cup B$ is disconnected I must find open sets $U_1$ and $U_2$ such that * *$U_1\cap U_2 = \varnothing$ *$S = (S\cap U_1) \cup (S\cap U_2)$ *$S\cap U_1\neq \varnothing$ and $S\cap U_2\neq \varnothing$. At first I thought of taking $U_1 = (\overline A)^c$ and $U_2 = (\overline B)^c$, but I don't necessarily know that these are disjoint. My second thought was this: Let $x\in A$. Then since $x\in (\overline B)^c$ and this set is open, $\exists \varepsilon_x > 0: B(x,\varepsilon_x)\subseteq (\overline B)^c$. Then I wanted to define $U_1 = \cup_{x\in A} B(x,\varepsilon_x)$ so that $A\subseteq U_1 \subseteq (\overline B)^c$, and then similarly define $U_2$ so that $B\subseteq U_2\subseteq (\overline A)^c$. However, I still don't think this works, since $U_1$ and $U_2$ could just share a point that is neither in $A$ nor $B$, but is in $(\overline A)^c \cap (\overline B)^c$. Any suggestions?
\begin{align*} U_1 &= \{x : \mathop{\text{dist}}(x,A) < \mathop{\text{dist}}(x,B) \} \\ U_2 &= \{x : \mathop{\text{dist}}(x,A) > \mathop{\text{dist}}(x,B) \} \end{align*} Both sets are open since $x\mapsto \mathop{\text{dist}}(x,A)-\mathop{\text{dist}}(x,B)$ is continuous, and $A\subseteq U_1$, $B\subseteq U_2$: for $x\in A$ we have $\mathop{\text{dist}}(x,A)=0$ while $\mathop{\text{dist}}(x,B)>0$ because $x\notin\overline B$, and symmetrically for $B$. (The use of this kind of distance trick is suggested by the fact that the result is not true in general topological spaces; a nice counterexample is the co-finite topology on $\mathbb N$, which produces exactly the problem you were worried about, where $U_1$ and $U_2$ end up having to share points.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Characterizing affine subspaces order-theoretically Let $V$ denote a real vectorspace and $\mathrm{Con}(V)$ denote the poset of convex subsets of $V$. The goal is to identify those elements of $\mathrm{Con}(V)$ that happen to be affine subspaces of $V$ in a purely order-theoretic manner. Something like so: Definition. Let $P$ denote a poset satisfying [blah]. Then $x \in P$ is called affine iff... Proposition. The affine elements of $\mathrm{Con}(V)$ are precisely the affine subspaces of $V$. What definition(s) are available to do this?
Clearly, it suffices to define lines in an order-theoretic way, as $X$ is affine if and only if it contains (is bigger than) all the lines through its points. All sets below are supposed convex. The order relation is: "$X$ is smaller than $Y$ iff $X\subset Y$". Definition: $X$ is polygonal if $\exists$ a convex $Y\supset X$ such that $Y\setminus X$ is convex. (In practice $X$ is polygonal if and only if it is convex but not strictly convex.) Definition: $Y$ is long if it is polygonal and for any $X\subset Y$ such that $Y\setminus X$ is convex, we have $Y\setminus X$ is unbounded. Definition: a line is a minimal long set; more precisely, $Y$ is a line if and only if $Y$ is long and has no long subset. Fact: lines with this definition coincide with affine lines. Proof: Any convex subset of a line is a segment or a half line. These are not long with respect to the present definition. On the other hand, let $Y$ be long and of dimension $>1$, and let $\pi$ be the affine plane containing $Y$. If $Y=\pi$ then it is not minimal, as it contains a line as a long subset. If $Y\neq \pi$ then it has a boundary and, by pushing that boundary inside $Y$, we find a convex subset of $Y$ which is long. Definition: $X$ is affine if and only if it is a single point or, for any line $L$, we have $$X\cap L\neq \emptyset \text{ in at least two points } \Rightarrow L\subset X$$ Fact: $X$ is affine iff it is an affine subspace of $V$. So, in general, following this scheme, you need: 1) a poset $(P,<)$ such that i) there is a maximal element $V$ ii) for any $Y\in P$ there is an involution map on $\{X\ :\ X<Y\}$ which reverses the order: $X\mapsto Y\setminus X$. 2) a class of convex elements of $P$. 3) a class of bounded (convex) elements of $P$. Now, points are defined as minimal elements of $P$, and $X\cap Y\neq \emptyset$ reads: "there is a point $q$ so that $q<X$ and $q<Y$". Next, you can give the definitions of polygonal and long as above and define lines as minimal long elements. Finally, you have the definition of affine as an element $X$ which is either a point or such that if $X\cap L\neq\emptyset$ in two points for a line $L$, then $L<X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to prove $\int_0^1 \ln\left(\frac{1+x}{1-x}\right) \frac{dx}{x} = \frac{\pi^2}{4}$? Can anyone suggest a method of computing $$\int_0^1 \ln\left(\frac{1+x}{1-x}\right) \frac{dx}{x} = \frac{\pi^2}{4}\quad ?$$ My attempt is the following: first set $t =\frac{1-x}{1+x}$, which gives $x=\frac{1-t}{t+1}$. Then \begin{align} dx = -\frac{2}{(1+t)^2} dt, \quad [x,0,1] \rightarrow [t,1, 0] \end{align} [Thanks to @Alexey Burdin, I found what I did wrong in the substitution.] Then the integral reduces to \begin{align} \int^{0}_1 \ln(t)\frac{2}{1-t^2} dt \end{align} How can one obtain the above integral? Please post an answer if you know how to evaluate this integral, or if you know other methods for the original one. Thanks!
Hint. By the change of variable $$ t =\frac{1-x}{1+x} $$ you get $$ \int_0^1 \ln\left(\frac{1+x}{1-x}\right) \frac{dx}{x}=-2\int_0^1 \frac{\ln t}{1-t^2} dt=-2\sum_{n=0}^{\infty}\int_0^1t^{2n}\:\ln t \:dt=2\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}= \frac{\pi^2}{4}. $$
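The two ingredients used in the last two equalities: integration by parts gives
$$\int_0^1 t^{2n}\ln t\,dt = -\frac{1}{(2n+1)^2},$$
and the odd-index part of the Basel sum is
$$\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}=\frac{\pi^2}{6}-\frac{1}{4}\cdot\frac{\pi^2}{6}=\frac{\pi^2}{8}.$$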
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Non-constant entire function - bounded or not? Show that if $f$ is a non-constant entire function, it cannot satisfy the condition: $$f(z)=f(z+1)=f(z+i)$$ My line of argument so far is based on Liouville's theorem, which states that every bounded entire function must be constant. So I try, to no avail, to show that if $f$ satisfies the given condition, it must be bounded. I haven't made much progress with this, so any hints or solutions are welcome.
First you can prove that $$f(z+m+ni) = f(z)\qquad\text{for all } m,n\in\mathbb Z,$$ so $f$ is determined by its values on the closed square with vertices $0,\,1,\,i,\,1+i$. Entire functions are bounded on precompact sets, so $f$ is bounded on all of $\mathbb C$ and hence constant by Liouville's theorem, a contradiction. QED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Squaring a trigonometric inequality A very, very basic question. We know $$-1 \leq \cos x \leq 1$$ However, if we square all sides we obtain $$1 \leq \cos^2(x) \leq 1$$ which is only true for some $x$. The result desired is $$0 \leq \cos^2(x) \leq 1$$ Which is quite easily obvious anyway. So, what rule of inequalities am I forgetting?
Squaring does not preserve inequality since it is a 2-to-1 function over $\mathbb{R}$, hence nonmonotone. Mind you, a monotone decreasing function would reverse the inequality: there would still be a valid inequality, just pointing the opposite way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 4 }
Suppose $A^2B+BA^2=2ABA$. Prove that there exists a positive integer $k$ such that $(AB-BA)^k=0$. Let $A, B \in M_n(\mathbb{C})$ be two $n \times n$ matrices such that $$A^2B+BA^2=2ABA$$ Prove that there exists a positive integer $k$ such that $(AB-BA)^k=0$. Here is the source of the problem. Under the comment by Carcul, I don't understand why $[A,[A,B]]=0$ implies that $[A,B]$ is a nilpotent matrix. Can anyone explain it to me? Clarification: I have a problem understanding the solution given by Carcul. I don't see the link between $[A,[A,B]]=0$ and $[A,B]$ being nilpotent.
From $[A,[A,B]] = 0$, we deduce by induction that $$ [A^k, B] = kA^{k-1}[A,B] \tag 1 $$ For $k = 0$, (1) is obvious; if (1) holds for $k-1$, we have \begin{align*} [A^k, B] &= A^kB - BA^k\\ &= A^{k-1}AB - BAA^{k-1}\\ &= A^{k-1}[A,B] + A^{k-1}BA - BA^{k-1}A\\ &= A^{k-1}[A,B] + [A^{k-1},B]A\\ &= A^{k-1}[A,B] + (k-1)A^{k-2}[A,B]A\\ &= kA^{k-1}[A,B] \qquad \text{ as $[A,B]A = A[A,B]$} \end{align*} As $[\cdot, B]$ is linear, we have that for any polynomial $p$ $$ [p(A), B] = p'(A)[A,B] $$ Now let $\mu_A$ denote the minimal polynomial of $A$; we will show by induction that $$ \tag 2 \mu^{(k)}_A(A)[A,B]^{2^k- 1} = 0 $$ holds for any $k$. For $k = 0$, we have nothing to show, as $\mu_A(A) = 0$ by definition of the minimal polynomial. Suppose $k \ge 1$ and (2) holds for $k-1$; then \begin{align*} 0 &= [\mu^{(k-1)}_A(A)[A,B]^{2^{k-1}-1}, B]\\ &= [\mu^{(k-1)}_A(A), B][A,B]^{2^{k-1}-1} + \mu_A^{(k-1)}(A)[[A,B]^{2^{k-1}-1}, B]\\ &= \mu^{(k)}_A(A)[A,B]^{2^{k-1}} + \mu_A^{(k-1)}(A)\sum_{l=1}^{2^{k-1}-1} [A,B]^{l-1} [[A,B],B][A,B]^{2^{k-1}-1-l}\\ \end{align*} As $[A,[A,B]] = 0$, any polynomial in $A$ commutes with $[A,B]$. Multiplying the last equation with $[A,B]^{2^{k-1}-1}$ from the left, we have \begin{align*} 0 &= [A,B]^{2^{k-1}-1}\mu^{(k)}_A(A)[A,B]^{2^{k-1}} + [A,B]^{2^{k-1}-1}\mu_A^{(k-1)}(A)\sum_{l=1}^{2^{k-1}-1} [A,B]^{l-1} [[A,B],B][A,B]^{2^{k-1}-1-l}\\ &= \mu^{(k)}_A(A)[A,B]^{2^k-1} + \sum_{l=1}^{2^{k-1}-1}\mu_A^{(k-1)}(A) [A,B]^{2^{k-1}-2+l} [[A,B],B][A,B]^{2^{k-1}-1-l}\\ &= \mu^{(k)}_A(A)[A,B]^{2^k-1} \quad\text{by the induction hypothesis} \end{align*} This proves (2). Now, in (2), let $k = \deg \mu_A$ to get $$ \tag 3 (\deg \mu_A)!\,[A,B]^{2^{\deg \mu_A} - 1} = 0 $$ Hence, $[A,B]$ is nilpotent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Finding the period of $f(x) = \sin 2x + \cos 3x$ I want to find the period of the function $f(x) = \sin 2x + \cos 3x$. I tried to rewrite it using the double angle formula and addition formula for cosine. However, I did not obtain an easy function. Another idea I had was to calculate the zeros and find the difference between the zeros. But that is only applicable if the function oscillates around $y = 0$, right? My third approach was to calculate the extrema using calculus and from that derive the period. Does anybody have another approach? Thanks
Hint The period of $\sin(2x)$ is $\pi$, and the period of $\cos(3x)$ is $2\pi/3$. Can you find a point where both will be at the start of a new period?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Distribution of server utilisations in an M/M/c queuing model with an unusual dispatching discipline I'm studying an M/M/c queuing model with an unusual (?) dispatching discipline: * *Servers are numbered 1...c *The servers have an identical mean service time, exponentially distributed (as usual), which does not vary with time or load *If all servers are busy, the transaction is allocated to the first server that becomes free *If any servers are free, the transaction is allocated to the free server with the lowest number. I am especially interested in the mean server utilisation ($\rho_i$) for each server (which could be derived from the proportion $p_i$ of jobs served by server $i$), and also its distribution (though I guess that is more difficult). What results are available which give the distribution of traffic going to each server?
This is (an attempt at) a partial answer. I think that you should be able to decouple the server selection aspect from the composite load aspect, since the servers are identical with exponentially distributed service time $\mu$. That is, one can analyze the number in system (irrespective of distribution across the servers) as an ordinary M/M/$c$ system with the usual distribution: $$ p_k = \begin{cases} \hfill p_0 \frac{\sigma^k}{k!} \hfill & k \leq c \\ \hfill p_0 \frac{\sigma^k}{c!c^{k-c}} \hfill & k \geq c \end{cases} $$ where $\sigma = \lambda/\mu$ and $$ p_0 = \left(\sum_{k=0}^{c-1} \frac{\sigma^k}{k!} + \sum_{k=c}^\infty \frac{\sigma^k}{c!c^{k-c}}\right)^{-1} = \left(\frac{c\sigma^c}{c!(c-\sigma)} + \sum_{k=0}^{c-1} \frac{\sigma^k}{k!}\right)^{-1} $$ For the simplest case $c=2$, we can then disentangle the individual states $(1, 0)$ (server $1$ busy, server $2$ idle) and $(0, 1)$ (server $1$ idle, server $2$ busy) by writing $$ \lambda p_0 = \mu p_{1, 0} + \mu p_{0, 1} = \mu p_1 $$ $$ (\lambda+\mu) p_{1, 0} = \lambda p_0 + \mu p_2 $$ $$ (\lambda+\mu) p_{0, 1} = \mu p_2 $$ $$ 2\mu p_2 = \lambda p_{1, 0} + \lambda p_{0, 1} = \lambda p_1 $$ Together, these equations yield $$ p_{1, 0} = \frac{2+\sigma}{2+2\sigma} \, p_1 $$ $$ p_{0, 1} = \frac{\sigma}{2+2\sigma} \, p_1 $$ I'm still thinking about (a) whether this is all correct for $c = 2$, and (b) if it is, how one might generalize for all $c$.
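To validate (or explore generalizations of) these formulas empirically, here is a small event-driven simulation of the dispatching discipline (a sketch; the rates, horizon, and names are my own choices):

```python
import random

def utilisations(lam=1.0, mu=1.5, horizon=2e5, seed=0):
    """M/M/2 where arrivals go to the lowest-numbered idle server and
    waiting jobs go to the first server that frees up. Returns the
    long-run fraction of time each server is busy."""
    rng = random.Random(seed)
    INF = float("inf")
    t = 0.0
    next_arrival = rng.expovariate(lam)
    done = [INF, INF]          # scheduled completion time per server
    queue = 0                  # jobs waiting for any server
    busy = [0.0, 0.0]
    while t < horizon:
        t_next = min(next_arrival, done[0], done[1])
        for i in (0, 1):
            if done[i] < INF:            # server i is busy over [t, t_next]
                busy[i] += t_next - t
        t = t_next
        if t == next_arrival:            # arrival event
            idle = [i for i in (0, 1) if done[i] == INF]
            if idle:                     # lowest-numbered free server wins
                done[min(idle)] = t + rng.expovariate(mu)
            else:
                queue += 1
            next_arrival = t + rng.expovariate(lam)
        else:                            # a service completion
            i = 0 if done[0] <= done[1] else 1
            if queue:                    # freed server takes the next job
                queue -= 1
                done[i] = t + rng.expovariate(mu)
            else:
                done[i] = INF
    return [b / t for b in busy]

print(utilisations())   # server 1 should come out noticeably busier than server 2
```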
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Changing argument into complex in the integral of Bessel multiplied by cosine I got a problem solving the equation below: $$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cosh(cx) dx$$ where $J_0$ is the zeroth order of Bessel function of the first kind. I found the integral expression below on Gradshteyn and Ryzhik's book 7th edition, section 6.677, number 6: $$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cos(cx) dx = \frac{\sin\left(a\sqrt{b^2+c^2}\right)}{\sqrt{b^2+c^2}}.$$ My naive intuition says that I can solve the integral above by changing $c\to ic$, so I would get $$ \int_0^a J_0\left(b\sqrt{a^2-x^2}\right)\cosh(cx) dx = \begin{cases} \frac{\sin\left(a\sqrt{b^2-c^2}\right)}{\sqrt{b^2-c^2}}, & \text{if } b > c\\ \frac{\sinh\left(a\sqrt{c^2-b^2}\right)}{\sqrt{c^2-b^2}}, & \text{if } b < c\\ a, & \text{otherwise.} \end{cases} $$ I am not quite sure about this because it contains substitution into complex number. If anyone could confirm that this is correct/incorrect, it would be much appreciated! Confirmation with some proof will be better. Many thanks!
$$I=\int_{0}^{a}J_0(b\sqrt{a^2-x^2})\cosh(cx)\,dx = a\int_{0}^{1}J_0(ab\sqrt{1-z^2})\cosh(acz)\,dz$$ The trick is now to expand both $J_0$ and $\cosh$ as Taylor series, then to exploit: $$ \int_{0}^{1}(1-z^2)^{n}z^{2m}\,dz = \frac{\Gamma(n+1)\,\Gamma\left(m+\frac{1}{2}\right)}{2\cdot\Gamma\left(m+n+\frac{3}{2}\right)}\tag{2}$$ hence, by assuming $b>c$ and applying the Legendre duplication formula twice: $$\begin{eqnarray*} I &=& a\int_{0}^{1}\sum_{n\geq 0}\frac{(-1)^n(ab)^{2n}(1-z^2)^n}{n!^2 4^n}\sum_{m\geq 0}\frac{(ac)^{2m}z^{2m}}{(2m)!}\,dz\\&=&\frac{a}{2}\sum_{n,m\geq 0}\frac{(-1)^n a^{2n+2m}b^{2n}c^{2m}}{4^n}\cdot\frac{\Gamma\left(m+\frac{1}{2}\right)}{\Gamma(n+1)\Gamma(2m+1)\Gamma\left(m+n+\frac{3}{2}\right)}\\&=&\frac{a}{2}\sum_{n,m\geq 0}\frac{(-1)^n a^{2n+2m}b^{2n}c^{2m}}{4^{n+m}}\cdot\frac{\sqrt{\pi}}{\Gamma(n+1) \Gamma(m+1)\Gamma\left(m+n+\frac{3}{2}\right)}\\&=&\frac{a}{2}\sum_{s=0}^{+\infty}\frac{a^{2s}\sqrt{\pi}\,(c^2-b^2)^s}{\Gamma\left(s+\frac{3}{2}\right)4^s\,\Gamma(s+1)}\\[5pt]&=&\color{red}{\frac{\sin\left(a\sqrt{b^2-c^2}\right)}{\sqrt{b^2-c^2}}}\tag{3}\end{eqnarray*}$$ as you claimed. This, in fact, proves also the Gradshteyn and Ryzhik's formula: we just need to replace the $(-1)^n$ factor in the last lines with $(-1)^{n+m}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The convergence of the series $\sum (-1)^n \frac{n}{n+1}$ and the value of its sum This sum seems convergent, but how to find its precise value? $$\sum\limits_{n=1}^{\infty}{(-1)^{n+1} \frac{n}{n+1}} = \frac{1}{2}-\frac{2}{3}+\frac{3}{4}-\frac{4}{5}+...=-0.3068... $$ Any help would be much appreciated.
Hint: This series can't converge, since $a_n \rightarrow 0$ does not hold: $\frac{n}{n+1}\to 1$ as $n \rightarrow \infty$, and the terms of a convergent series must tend to $0$ (this is a necessary condition).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Find the ratio of curved surface area of frustum to the cone. In the figure, a cone is cut into three segments having heights $h_1,h_2$ and $h_3$, with the radii of their bases $1$ cm, $2$ cm and $3$ cm. Find the ratio of the curved surface area of the second largest segment to that of the full cone. $\color{green}{a.)2:9}\\ b.)4:9\\ c.)\text{cannot be determined }\\ d.) \text{none of these}\\$ I found that $h_1=h_2=h_3\\$ and $ \dfrac{A_{\text{2nd segment}}}{A_{\text{full cone}}}=\dfrac{\pi\times (1+2)\times \sqrt{h_1^2+1} }{\pi\times 3\times \sqrt{(3h_1)^2+3^2} }=\dfrac13$ But the book is giving option $a.)$
Area is proportional to the square of linear dimension. So the area of the full cone is $k(3^2)$ for some $k$. The area of the second largest segment is $k(2^2) - k(1^2)$, so the ratio is $3:9 = 1:3$. You are right, and the book is wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1292998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove that a straight line is an infinite set of points? From the basic elementary level, when we start reading geometry, we get the idea that a straight line is the union of infinitely many points. But how do we prove this? I mean, is this an axiom, or is it provable?
Take your endpoints, call them $S$ and $E$. Pick some point between $S$ and $E$, let us call it $M$. Then, pick a point between $S$ and $M$. Call it $M_2$. Pick a point between $S$ and $M_2$ .... If you can keep going like that, it is easy to see that the line must contain infinitely many points: you can keep going on like that forever, and the points never run out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Explicit form of this series expansion? I am considering the following series expansion: $$f(k):=\sum_{n\geq 1} e^{-k n^2}$$ with $k>0$ a fixed parameter. Is it possible to find a closed form expression for $f(k)$? Or at least an upper bound of the type $|f(k)|\leq \frac{C}{k^p}$ for some constant $C$ and power $p$? Thanks in advance!
For $k\ge 1,$ $$\sum_{n=1}^{\infty}e^{-kn^2} \le \sum_{n=1}^{\infty}e^{-kn} = \frac{e^{-k}}{1-e^{-k}}\le \frac{1}{1-e^{-1}}\cdot e^{-k}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that $f$ is differentiable at $0$ Let's consider the following function: $$f(x,y)=\begin{cases} (x^2+y^2)\sin\left(\dfrac{1}{x^2+y^2}\right) & \text{if }x^2+y^2\not=0 \\{}\\ 0 & \text{if }x=y=0 \end{cases}$$ I know that $f_x$ and $f_y$ are not continuous at $0$. How to prove that $f$ is differentiable at $0$?
Hint: Change to polar coordinates and show $$\lim_{r\to 0}\frac{r^2\sin(r^{-2})-0}{r}=0$$ Hint 2: The sine function is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Horse racing question probability Been thinking about this for a while. * *Horse Campaign length: 10 starts *Horse Runs this campaign: 5 *Horse is guaranteed to win 1 in 10 this campaign Question: what is the probability of winning at the sixth start if it hasn't won in the first five runs? 20%? Thanks for looking at this. I would love to know how to calculate this.
I assume that the horse will win only one race, and that the (guaranteed) win and the starts are equally likely across the races. The probability, that the horse has had no starts ($s_0$) is $\frac{{5 \choose 0}\cdot {5 \choose 5}}{{10 \choose 5}}$ and the probability that then the horse will win at the 6th start is 1 divided by the number of remaining starts. $P(w_6 \cap s_0)=P(s_0)\cdot P(w_6|s_0)=\frac{{5 \choose 0}\cdot {5 \choose 5}}{{10 \choose 5}}\cdot \frac{1}{5}$ The probability, that the horse has had one start ($s_1$) is $\frac{{5 \choose 1}\cdot {5 \choose 4}}{{10 \choose 5}}$ and the probability that then the horse will win at the 6th start is 1 divided by the number of remaining starts. $P(w_6 \cap s_1)=P(s_1)\cdot P(w_6|s_1)=\frac{{5 \choose 1}\cdot {5 \choose 4}}{{10 \choose 5}}\cdot \frac{1}{4}$ And so on ... The probability of winning at the 6th start is $P(w_6)=P(w_6 \cap s_0)+P(w_6 \cap s_1)+P(w_6 \cap s_2)+P(w_6 \cap s_3)+P(w_6 \cap s_4)$
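Evaluating this sum exactly is easy by machine; the sketch below just implements the decomposition written in this answer ($s$ is the number of starts used in the first five races, and $\frac{1}{5-s}$ the conditional win chance used above):

```python
from fractions import Fraction
from math import comb

P_w6 = sum(Fraction(comb(5, s) * comb(5, 5 - s), comb(10, 5)) * Fraction(1, 5 - s)
           for s in range(5))
print(P_w6, float(P_w6))
```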
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Computing the value of a series by telescoping cancellations vs. infinite limit of partial sums $$\sum_{m=5}^\infty \frac{3}{m^2+3m+2}$$ Given this problem my first approach was to take the limit of partial sums. To my surprise this didn't work. Many expletives later I realized it was a telescoping series. My question is why my first approach failed. My expectation is that both approaches would produce the same answer. Why didn't they? First approach: $$\sum_{m=5}^\infty \frac{3}{m^2+3m+2} = \lim \limits_{N \to \infty} \int_{m=5}^N \frac{3}{m^2+3m+2} = \lim \limits_{N \to \infty} 3 \left [ \ln \left ( \frac{N+1}{N+2} \right ) + \ln \left ( \frac{7}{6} \right ) \right ]$$$$ = 3 \ln(7/6) \approx 0.46245$$ An empirical check showed that the above approach is wrong. After I realized it was a telescoping series I was able to produce a sequence of partial sums: $$S_{m} = \left ( \frac{3}{6}-\frac{3}{m+2} \right ) $$ And the limit of this sequence gets to an answer that agrees with a crude empirical spreadsheet validation: $$\lim \limits_{m \to \infty} \left ( \frac{3}{6}-\frac{3}{m+2} \right ) = \frac{1}{2}$$ So clearly my initial intuition and understanding was wrong. But why was it wrong? I thought I could take the limit of an integral to calculate the value of a series. In what case do we apply the first approach I took? I've used it before, but I must have forgotten how to apply it correctly (and all my searches come up with references to computing convergence, not actual values).
The integral test simply tells you if an infinite sum is convergent, it will not necessarily tell you what the sum converges to. Imagine back to when you first started learning about the Riemann integral. You might remember seeing pictures of a smooth curve with rectangular bars pasted over the curve, approximating the area below the curve. The idea being that the more narrow the width you make the bars, the closer you will get to the actual area under the curve. The integral you calculated gives you the area under the curve as if $\frac{3}{m^2+3m+2}$ were a smooth curve. Here we think of $m$ taking on every real number from $5$ to $\infty$. The infinite sum gives you the area of the rectangles with height $\frac{3}{m^2+3m+2}$ and width $1$ (not at all narrow for a rectangle). In this case we strictly treat $m$ as an integer. As such, the integral and the sum can only be rough approximations of each other.
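To see concretely which value the series actually approaches, here is a small Python comparison (a sanity check, nothing more):

```python
import math

def partial_sum(N):
    return sum(3 / (m * m + 3 * m + 2) for m in range(5, N + 1))

for N in (10, 100, 10000):
    print(N, partial_sum(N))          # approaches 1/2, not 3 ln(7/6)
print("telescoping limit:", 0.5)
print("integral value:   ", 3 * math.log(7 / 6))
```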
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$\lim \limits_{n \to \infty}$ $\prod_{r=1}^{n} \cos(\frac{x}{2^r})$ $\lim \limits_{n \to \infty}$ $\prod_{r=1}^{n} \cos(\frac{x}{2^r})$ How do I simplify this limit? I tried multiplying and dividing by $\sin(\frac{x}{2^r})$ to use the half angle formula, but it didn't give me a telescoping product, which would have simplified it.
$$\prod_{r=1}^{\infty} \frac{\cos (x/2^r) \sin (x/2^r)}{\sin (x/2^r)} = \prod_{r = 1}^{\infty} \frac{\sin (x/2^{r-1})}{2\sin(x/2^r)} = \lim_{r \to \infty} \frac{\sin x}{2^r \sin (x/2^r)}$$ Can you take it from here? Recall that $\lim_{t \to 0} \frac{\sin \alpha t}{t} = \alpha$.
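If you carry the hint through, the product should come out to $\frac{\sin x}{x}$ (for $x\neq 0$). A quick numerical confirmation:

```python
import math

x = 1.7                  # any nonzero test value
p = 1.0
for r in range(1, 40):
    p *= math.cos(x / 2**r)
print(p, math.sin(x) / x)   # should agree to machine precision
```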
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Little confusion about connectedness Consider $X=\{(x,\sin(1/x)):0<x<1\}$. Then clearly $X$ is connected, as it is a continuous image of the connected set $(0,1)$. So $\overline X$ is also connected, as the closure of a connected set is connected. Now if we look at the set $\overline X$, then $$\overline X=X\cup B$$ where $B=\{(0,y):-1\le y\le 1\}$. Now $X$ and $B$ are both connected, and $X\cap B=\emptyset$. So $\overline X$ is disconnected. Where is my mistake?
The set $B$ is NOT open in $\overline X$, so this is not a partition of $\overline X$ into open sets, and hence not a disconnection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Prove that for any given $c_1,c_2,c_3\in \mathbb{Z}$, the system of equations has an integral solution. $$ \left\{ \begin{aligned} c_1 & = a_2b_3-b_2a_3 \\ c_2 & = a_3b_1-b_3a_1 \\ c_3 & = a_1b_2-b_1a_2 \end{aligned} \right. $$ Given $c_1,c_2,c_3\in \mathbb{Z}$, prove that there exist $a_1,a_2,a_3,b_1,b_2,b_3\in\mathbb{Z}$ satisfying the system. Apparently, the question amounts to asking how to decompose an integer vector into the cross product (vector product) of two integer vectors, and that is really what I want to ask.
First let us assume that $c_1, c_2, c_3$ are coprime, that is $(c_1, c_2, c_3) = 1$. Suppose first that there exist integer numbers $u_1, u_2, u_3$ and $v_1, v_2, v_3$ such that $$ \begin{vmatrix} c_1 & c_2 & c_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = 1 $$ In that case we can take $a = c \times u$ and $b = c \times v$. Then $a \times b = (c \times u) \times (c \times v) = (c \cdot (u \times v))c$ (this is one of triple product properties, see http://en.wikipedia.org/wiki/Triple_product for details). Now $c \cdot (u \times v)$ is equal to the determinant above and hence is equal to one. So $a \times b = c$. Now let us go back and prove the assumption. Let $d = (c_2, c_3)$ and pick $u_2, u_3$ such that $c_2 u_3 - c_3 u_2 = d$. Also pick $u_1 = 0$. Now we have $$ \begin{vmatrix} c_1 & c_2 & c_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} = v_1 d - v_2 (c_1 u_3 - c_3 u_1) + v_3 (c_1 u_2 - c_2 u_1) = v_1 d + c_1(v_3 u_2 - v_2 u_3). $$ Now $(c_1, d) = 1$ and $(u_2, u_3) = 1$. Let us pick $v_2$ and $v_3$ such that $v_3 u_2 - v_2 u_3 \equiv c_1^{-1} \pmod d$. In that case the right hand side is congruent to $1$ modulo $d$. Since it is congruent to $1$ modulo $d$, we can finally choose $v_1$ so that the whole expression equals $1$. Lastly, if $g=(c_1,c_2,c_3)>1$, write $c = gc'$ with $c'$ primitive, find integer vectors with $a\times b = c'$ as above, and replace $b$ by $gb$ to get $a\times(gb)=gc'=c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
How write a periodic number as a fraction? What I call a periodic number is, for example, $$0.\underbrace{13}_{period}131313...$$ or $$42.\underbrace{465768}_{period}465768465768.$$ So how can we write these numbers as a fraction of integers, i.e. in the form $\frac{a}{b}$ with $a,b\in\mathbb Z$?
Multiplying your number $x$ by a suitable power of $10$ makes one full period of the number jump to the left of the decimal point, leaving an identical fractional part. That is, $10^k x$ and $x$ have the same fractional part, where $k$ is the length of the period. So their difference is an integer $a$: that is, $a = (10^k-1)x$, which shows $x = \frac{a}{10^k-1}$ is a rational number.
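Here is a minimal sketch in Python for a purely periodic decimal (a pre-period, as in $42.465768\ldots$, can be handled by first multiplying by a power of $10$ and splitting off the integer part):

```python
from fractions import Fraction

period = "13"                         # 0.131313...
k = len(period)
x = Fraction(int(period), 10**k - 1)  # since (10^k - 1) x = 13
print(x)                              # 13/99
```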
{ "language": "en", "url": "https://math.stackexchange.com/questions/1293994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Evaluate the double sum $\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m n\left(m^2-n^2\right)^2}$ As a follow up of this nice question I am interested in $$ S_1=\sum_{m=1}^{\infty}\sum_{n=1}^{m-1}\frac{ 1}{m n\left(m^2-n^2\right)^2} $$ Furthermore, I would be also very grateful for a solution to $$ S_2=\sum_{m=1}^{\infty}\sum_{n=m+1}^{\infty}\frac{ 1}{m n\left(m^2-n^2\right)^2} $$ Following my answer in the question mentioned above and the numerical experiments of @Vladimir Reshetnikov it's very likely that at least $$ S_1+S_2 = \frac{a}{b}\pi^6 $$ I think both sums may be evaluated by using partial fraction decomposition and the integral representation of the Polygamma function but I don't know how exactly and I guess there could be a much more efficient route.
Numerically, I get $$ S_1+S_2 = 0.14836252987273216621 $$ which agrees with $$ \frac{\pi^6}{6480} $$ Also numerically, $$ S_1 = 0.074181264936366083104 \\ S_2 = 0.074181264936366083104 $$ are seemingly equal. In fact $S_1=S_2$ exactly: the summand $\frac{1}{mn(m^2-n^2)^2}$ is symmetric under swapping $m$ and $n$, and $S_2$ runs over precisely the swapped index pairs of $S_1$, so $S_1+S_2=2S_1$.
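A brute-force truncation reproduces these numbers ($M$ is an arbitrary cutoff):

```python
import math

M = 1500
S1 = sum(1.0 / (m * n * (m * m - n * n) ** 2)
         for m in range(2, M + 1) for n in range(1, m))
print(S1, 2 * S1, math.pi ** 6 / 6480)   # S1 + S2 = 2*S1 by the symmetry
```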
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 2, "answer_id": 1 }
Solving the differential equation $y'=-\frac{x}{y}+\frac{y}{x}+1$ I've tried to solve this equation, but I ran into problems along the way. Please help me understand. $$y'=-\frac{x}{y}+\frac{y}{x}+1$$ Rewrite it in a normal form: $$y'=-\frac{1}{\frac{y}{x}}+\frac{y}{x}+1$$ Make the substitution $$\frac{y}{x}=U$$ $$y'=U'x+U$$ Substitute into the original equation: $$U'x+U=-\frac{1}{U}+U+1$$ $$\frac{dU}{dx} \cdot x = -\frac{1}{U}+1$$ What to do with it?
HINT: separating variables, we have $$\frac{du}{1-\frac{1}{u}}=\frac{dx}{x}$$ and for the integrand on the left, note that $$\frac{1}{1-\frac{1}{u}}=\frac{u}{u-1}=\frac{u-1+1}{u-1}=1+\frac{1}{u-1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
limit of sin function as it approches $\pi$ In my assignment I have to find the Classification of discontinuities of the following function: $$f(x)=\frac{\sin^2(x)}{x|x(\pi-x)|}$$ I wanted to look what happens with the value $x=\pi$ because the function doesn't exist in that value. I have to check if some $L \in \Bbb R$ exists such that $$\lim _{x \to \pi} f(x)=L $$ I didn't have much success in making the argument simpler, so I thought to make a little 'trick', and I know it works for sequences. I am not sure if it's "legal" to do in function. Here it is: $$\frac{\sin^2(x)}{x|x(\pi-x)|} < \frac{\sin^2(x)}{x|x(\pi )|}$$ Now find the limit for the "bigger" function: $$\lim _{x \to \pi} \frac {sin^2(x)}{x|x(\pi )|} = \frac{0}{\pi}=0$$ Is my solution "legal" or valid? Thanks, Alan
Since $\sin x=\sin (\pi -x)$ then \begin{eqnarray*} \lim_{x\rightarrow \pi }\frac{\sin ^{2}(x)}{x\left\vert x(\pi -x)\right\vert } &=&\lim_{x\rightarrow \pi }\left( \frac{\sin (\pi -x)}{(\pi -x)}\right) ^{2}\frac{\left\vert \pi -x\right\vert }{x\left\vert x\right\vert } \\ &=&\lim_{x\rightarrow \pi }\left( \frac{\sin (\pi -x)}{(\pi -x)}\right) ^{2}\cdot \lim_{x\rightarrow \pi }\frac{\left\vert \pi -x\right\vert }{% x\left\vert x\right\vert } \\ &=&1^{2}\cdot \frac{\left\vert \pi -\pi \right\vert }{\pi \left\vert \pi \right\vert } \\ &=&1\cdot \frac{0}{\pi ^{2}}=0. \end{eqnarray*} I have used the standard limit \begin{equation*} \lim_{u\rightarrow 0}\frac{\sin (u)}{(u)}=1. \end{equation*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Number of words which can be formed with INSTITUTION such that vowels and consonants alternate Question: How many words can be formed with INSTITUTION such that vowels and consonants alternate? My Attempt: There are 11 letters in total in the word INSTITUTION. The 6 consonants are {NSTTTN} and the 5 vowels are {IIUIO}. So if we begin with a consonant then we can have $6!$ different arrangements of consonants and $5!$ different arrangements of vowels. But I is repeated 3 times, T is repeated 3 times and N is repeated 2 times. Thus we get $$\frac{6! \cdot 5!}{3! \cdot 3! \cdot 2!} = 1200$$ different words. Now, it is also possible that the word begins with a vowel, thus we will have another $1200$ words. Thus the total number of words formed is $1200 + 1200 = 2400$ But the answer given is 1200. Am I missing something?
You do not have the option of either starting with a vowel or starting with a consonant since you must alternate and you have 6 consonants and 5 vowels. It must start with a consonant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
$xf(y)+yf(x)\leq 1$ for all $x,y\in[0,1]$ implies $\int_0^1 f(x) \,dx\leq\frac{\pi}{4}$ I want to show that if $f\colon [0,1]\to\mathbb{R}$ is continuous and $xf(y)+yf(x)\leq 1$ for all $x,y\in[0,1]$ then we have the following inequality: $$\int_0^1 f(x) \, dx\leq\frac{\pi}{4}.$$ The $\pi$ on the right hand side suggests we have to do something with a trigonometric function. Letting $f(x) = \frac{1}{1+x^2}$ we have equality but this function does not satisfy $xf(y)+yf(x)\leq 1$.
Let $I = \displaystyle\int_{0}^{1}f(x)\,dx$. Substituting $x = \sin \theta$ yields $I = \displaystyle\int_{0}^{\pi/2}f(\sin \theta)\cos\theta\,d\theta$. Substituting $x = \cos \theta$ yields $I = \displaystyle\int_{0}^{\pi/2}f(\cos \theta)\sin\theta\,d\theta$. Hence, $I = \dfrac{1}{2}\displaystyle\int_{0}^{\pi/2}\left[f(\sin \theta)\cos\theta+f(\cos\theta)\sin\theta\right]\,d\theta \le \dfrac{1}{2}\displaystyle\int_{0}^{\pi/2}1\,d\theta = \dfrac{\pi}{4}$.
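The two substitutions are easy to confirm numerically; the sketch below checks the averaged identity for one sample $f$ satisfying the hypothesis (here $f(x)=\tfrac12-x$, for which $xf(y)+yf(x)=\frac{x+y}{2}-2xy\le 1$ on $[0,1]^2$):

```python
import numpy as np
from scipy import integrate

f = lambda x: 0.5 - x
I, _ = integrate.quad(f, 0, 1)
J, _ = integrate.quad(
    lambda t: 0.5 * (f(np.sin(t)) * np.cos(t) + f(np.cos(t)) * np.sin(t)),
    0, np.pi / 2)
print(I, J, np.pi / 4)   # I == J, and both are <= pi/4
```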
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 0 }
Upper bound for truncated Taylor series A paper claimed the following but I can't figure out why it's true: For all $1/2> \delta > 0$, $k\le n^{1/2-\delta}$, and $j\le k-1$ where $n$, $j$, and $k$ are positive integers, the following holds: $$\sum_{i=j}^k \frac{(2/n^{2 \delta})^i}{i!} \le \frac{4}{n^{2\delta j}}.$$ Why is this the case? Upper-bounding the sum by $(k-j+1)$ times the largest term doesn't seem to work, and it is not obvious to me that replacing $k$ with infinity would work either. (This paper has many typos in it, so it's also possible that the inequality might not even be true. The inequality in question is in the proof of Lemma 5.2 of this paper.)
Notes: * *If I read the notation of the paper correctly (and the putative inequality is on line 16 of page 19), the authors' claim is weaker than in your question, namely $$ \sum_{i=j}^{k} \frac{(2/n^{2\delta})^{i}}{i!} \leq \frac{4}{n^{\delta j}} $$ (n.b. $n^{\delta j}$ in the right-hand denominator rather than $n^{2\delta j}$). This doesn't seem to matter, however, since $n^{2\delta}$ can be arbitrarily close to $1$ (if $\delta \ll 1$). *Since $n$ (and therefore $k$) can be arbitrarily large while $\delta$ can be arbitrarily close to $0$, you can't avoid estimating the infinite tail of the exponential series. *If $j = 2$, the integers $n$ and $k$ are large, and we let $\delta \to 0$, so in the limit $2/n^{2\delta} \approx 2$, the inequality as stated becomes, approximately, $$ e^{2} - 1 - 2 = \sum_{i=2}^{\infty} \frac{2^{i}}{i!} \leq 4; $$ this is false, since $e^{2} - 3 > 4.38$. However, the inequality is true as stated for $j \geq 3$. (The inequality is a straightforward consequence of Taylor's theorem if $j \geq 5$; with a bit more work, one gets $j \geq 3$.) ($j \geq 5$) If $x > 0$ is real and $j$ is a positive integer, Taylor's theorem with the Lagrange form of the remainder applied to $e^{x}$ implies there exists a $z$ with $0 < z < x$ such that $$ \sum_{i=j}^{\infty} \frac{x^{i}}{i!} = e^{x} - \sum_{i=0}^{j-1} \frac{x^{i}}{i!} = R_{j}(x) = e^{z} \cdot \frac{x^{j}}{j!}. $$ Setting $x = 2/n^{2\delta}$ and letting $k > j$ be an arbitrary integer, we have $$ \sum_{i=j}^{k} \frac{(2/n^{2\delta})^{i}}{i!} \leq \sum_{i=j}^{\infty} \frac{(2/n^{2\delta})^{i}}{i!} = e^{z} \cdot \frac{2^{j}}{j!\, n^{2\delta j}} = e^{z} \cdot \frac{2^{j-2}}{j!}\, \frac{4}{n^{2\delta j}} \tag{1} $$ for some positive real number $z < 2/n^{2\delta} \leq 2$. Since $e^{2} \approx 7.39$ and $2^{j-2}/j! \leq 1/2$ (with equality if and only if $j = 1$ or $j = 2$), you get the stated inequality within a factor of $e^{2}/2 < 4$ for all $j \geq 1$, and the stated inequality for $j \geq 5$. ($j \geq 3$) The goal is to sharpen the estimate of $e^{z}$. To this end, use the well-known geometric series estimate of the infinite tail: $$ \sum_{i = j}^{\infty} \frac{x^{i}}{i!} = \frac{x^{j}}{j!} \sum_{i = 0}^{\infty} \frac{j!\, x^{i}}{(i+j)!} \leq \frac{x^{j}}{j!} \sum_{i = 0}^{\infty} \left(\frac{x}{j + 1}\right)^{i} = \frac{x^{j}}{j!} \cdot \frac{1}{1 - x/(j + 1)}. \tag{2} $$ (The inequality follows because $j!\, (j + 1)^{i} \leq (i + j)!$ for all positive integers $i$ and $j$.) Substituting the remainder into (2) gives $$ e^{z} \cdot \frac{x^{j}}{j!} = \sum_{i = j}^{\infty} \frac{x^{i}}{i!} \leq \frac{x^{j}}{j!} \cdot \frac{1}{1 - x/(j + 1)}, $$ or $$ e^{z} \leq \frac{1}{1 - x/(j + 1)} = \frac{j + 1}{j + 1 - x}. $$ If $j \geq 3$, then (since $x < 2$) we have $e^{z} \leq (j + 1)/(j - 1) \leq 2$. Inequality (1) therefore implies $$ \sum_{i=j}^{k} \frac{(2/n^{2\delta})^{i}}{i!} \leq \frac{4}{n^{2\delta j}}. $$
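As a numerical complement, the following sketch spot-checks the corrected claim (restriction $j \geq 3$, exponent $2\delta j$) on a small grid of sample values of $n$ and $\delta$; the term $x^i/i!$ is accumulated iteratively to avoid overflowing floating-point factorials:

```python
def tail(x, j, k):
    term, total = 1.0, 0.0            # term holds x^i / i!
    for i in range(1, k + 1):
        term *= x / i
        if i >= j:
            total += term
    return total

for n in (10, 1000, 10**6):
    for delta in (0.05, 0.2, 0.45):
        k = int(n ** (0.5 - delta))   # for small k the j-loop is empty
        x = 2 / n ** (2 * delta)
        for j in range(3, k):
            assert tail(x, j, k) <= 4 / n ** (2 * delta * j)
print("all sampled cases pass")
```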
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Average distance between two randomly chosen points in unit square (without calculus) Imagine that you choose two random points within a 1 by 1 square. What is the average distance between those two points? Using a random number generator, I'm getting a value of ~0.521402... can anyone explain why I'm getting this value, or what this number means? More importantly, is there a way to solve this without using calculus and/or large random sampling?
Let $(X_i, Y_i)$ for $i=1,2$ be i.i.d. with uniform distribution on $[0, 1]^2$. Then $|X_1 - X_2|$ has PDF $$ f(x) = \begin{cases} 2(1-x), & 0 \leq x \leq 1 \\ 0, & \text{otherwise} \end{cases}. $$ This shows that the average distance is \begin{align*} \int_{0}^{1}\int_{0}^{1} 4(1-x)(1-y)(x^2 + y^2)^{1/2} \, dxdy &= \frac{1}{15}(2+\sqrt{2}+5\log(1+\sqrt{2})) \\ &\approx 0.52140543316472067833 \cdots. \end{align*} If $l_n$ denotes the average distance between two uniformly chosen points in $[0, 1]^n$, then the following formula may help us estimate the growth of $l_n$ as $n \to \infty$: $$ l_n = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \left \{1 - \left( \frac{\sqrt{\pi} \operatorname{erf}(u)}{u} - \frac{1 - e^{-u^2}}{u^2} \right)^{n} \right\} \frac{du}{u^2}. $$ Using an estimate (which I believe to be true but was unable to prove) $$ e^{-u^2 / 6} \leq \frac{\sqrt{\pi} \operatorname{erf}(u)}{u} - \frac{1 - e^{-u^2}}{u^2} \leq 1 $$ we get $$ l_n \leq \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{1 - e^{-nu^2/6}}{u^2} \, du = \sqrt{\frac{n}{6}}. $$ For example, this gives $l_2 \leq \sqrt{1/3} \approx 0.577$, consistent with the exact value above.
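Since the question came from a random-number experiment, here is a short Monte Carlo sketch set against the closed form (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
p, q = rng.random((n, 2)), rng.random((n, 2))
est = np.mean(np.hypot(*(p - q).T))

exact = (2 + np.sqrt(2) + 5 * np.log(1 + np.sqrt(2))) / 15
print(est, exact)                     # both near 0.5214...
```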
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 2, "answer_id": 0 }
Complex numbers modulo integers Is there a "nice" way to think about the quotient group $\mathbb{C} / \mathbb{Z}$? Bonus points for $\mathbb{C}/2\mathbb{Z}$ (or even $\mathbb{C}/n\mathbb{Z}$ for $n$ an integer) and how it relates to $\mathbb{C} / \mathbb{Z}$. By "nice" I mean something like: * *$\mathbb{R}/\mathbb{Z}$ is isomorphic to the circle group via the exponential map $\theta \mapsto e^{i\theta}$, and *$\mathbb{C}/\Lambda$ is a complex torus for $\Lambda$ an integer lattice (an integer lattice is a discrete subgroup of the form $\alpha\mathbb{Z} + \beta\mathbb{Z}$ where $\alpha,\beta$ are linearly independent over $\mathbb{R}$.) Intuitively, it seems like it should be something like a circle or elliptic curve.
$\mathbb C/\mathbb Z$ is isomorphic to $\mathbb C/ n\mathbb Z$ for any non-zero complex number $n$. You can show that $\mathbb C/\mathbb Z\cong (\mathbb C\setminus\{0\},\times)$ via the homomorphism $z \to e^{2\pi iz}$. You can also think of this as $S^1\times\mathbb R$, which is topologically a cylinder.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Find all solutions in N of the following Diophantine equation $(x^2 − y^2)z − y^3 = 0$ I divide by $z^3$ and look for rational solutions of the equation $A^2 − B^2 − B^3 = 0.$ The point $(A,B) = (0, 0)$ is a singular point, that is, any line through this point will meet the curve twice in $(0, 0)$. Now I want to use Diophantus' chord method with the lines passing through $(0, 0)$, but I can't seem to get past this point.
At least for a nonvertical line $B=kA$ when plugged into $$A^2-B^2-B^3=0 \tag{1}$$ gives $$A^2-k^2A^2-k^3A^3=0,$$ where here the double root corresponds to $A^2$ being a factor. After dividing by that and solving for $A$ one gets $A=(1-k^2)/k^3,$ and so also $B=kA=(1-k^2)/k^2.$ Finally a check reveals these values of $A,B$ satisfy $(1).$ So this is a parametrization of the solution set for rationals, which covers most points on the curve. There of course would still be a lot of work to go from such a rational parametrization to finding all positive integer solutions to the starting equation. Added note: For the starting equation $$(x^2-y^2)z-y^3=0, \tag{2}$$ if one puts the $k$ parameter in the above rational parametrization equal to $m/n,$ where we assume $0<m<n$ in order that we have $0<k<1,$ then we can work backwards and get a collection of positive integer solutions $(x,y,z)$ to equation $(2).$ The resulting expressions are $$x=n(n^2-m^2),\ \ y=m(n^2-m^2),\ \ z=m^3.\tag{3}$$ Note that if given a positive integer solution to $(2)$ we multiply each of $x,y,z$ by some positive integer $k$ say, another solution results. So one could call a solution "primitive" provided the gcd of $x,y,z$ is $1$. In this sense, the remaining question is whether there are primitive solutions not included in the equations $(3).$ I've had some success so far on this, having shown at least that $z$ must be a cubic factor of $y^3.$
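One can at least confirm mechanically that the family $(3)$ solves $(2)$ for small parameters:

```python
# Checking that (x, y, z) from (3) really solves (x^2 - y^2) z - y^3 = 0.
for n in range(2, 8):
    for m in range(1, n):
        x = n * (n * n - m * m)
        y = m * (n * n - m * m)
        z = m ** 3
        assert (x * x - y * y) * z - y ** 3 == 0
print("all checked")
```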
{ "language": "en", "url": "https://math.stackexchange.com/questions/1294993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the mathematical distinction between closed and open sets? If you wanted me to spell out the difference between closed and open sets, the best I could do is to draw you a circle one with dotted circumference the other with continuous circumference. Or I would give you an example with a set of numbers $(1, 2)$ vs $[1,2]$ and tell you which bracket signifies open or closed. But in many theorems the author is dead set about using either closed or open sets. What is the strict mathematical difference that distinguish between the two sets and signifies the importance for such distinction? Can someone demonstrate with an example where using closed set for a theorem associated with open set would cause some sort of a problem?
Let's talk about real numbers here, rather than general metric or topological spaces. This way we don't need notions of Cauchy sequences or open balls, and can talk in more familiar terms. We define that a set $X \subset \mathbb{R}$ is open if for every $x \in X$ there exists some interval $(x-\epsilon,x+\epsilon)$ with $\epsilon > 0$ such that this interval is also fully contained in $X$. An example is the inverval $(0,1) =\{x \in \mathbb{R} : 0 < x < 1\}$. Note that this is an infinite set, because there are infinitely many points in it. If you choose a number $a \in (0,1)$ and let $\epsilon = \min\{a-0, 1-a\}$ then we can guarantee that $(a-\epsilon,a+\epsilon) \subset X$. The set $X$ is open. A set $X$ is defined to be closed if and only if its complement $\mathbb{R}- X$ is open. For example, $[0,1]$ is closed because $\mathbb{R}-[0,1]= (-\infty,0)\cup(1,\infty)$ is open. It gets interesting when you realise that sets can be both open and closed, or neither. This is a case where strict adherence to the definition is important. The empty set $\emptyset$ is both open and closed and so is $\mathbb{R}$. Why? The set $[1,2)$ is neither open nor closed. Why?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 8, "answer_id": 3 }
Help With SAT Maths Problem (Percentages and Numbers) I usually solve SAT questions easily and fast, but this one got me thinking for several minutes and I cannot seem to find an answer. Here it is: In 1995, Diana read $10$ English and $7$ French books. In 1996, she read twice as many French books as English books. If 60% of the books that she read during the 2 years were French, how many English and French books did she read in 1996? (A) $16$, (B) $26$, (C) $32$, (D) $39$, (E) $48$ Could you please help me by either giving hints or explaining how to solve the problem? I cannot find, in any way, the number of books of any language she read in 1996. I have tried a lot of operations with percentages, but no results. Sorry for this question, as I am not very good at Maths. Thank you.
Total English books: $x=10+E$ Total French books: $y=7+F$ Where $E$ and $F$ are English and French books in 1996. We know $F=2E$ Thus: Total books: $T=x+y=17+E+F$ We know that $0.6T=y$, that is $0.6(17+E+F)=7+F\implies 0.6(17+3E)=7+2E$ - solve this for $E$ Which is sufficient to work it out :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
How can I prove the Carmichael theorem I am trying to prove that these two definitions of Carmichael function are equivalent. I am using this definition of Carmichael function: $\lambda(n)$ is the smallest integer such that $k^{\lambda(n)}\equiv1 \pmod n\,$ $\forall k$ such that $\gcd(k,n)=1$. to prove this one: $\lambda(n) = \begin{cases}\phi(p^a) & (p=2\text{ and }a\le 2)\text{ or }(p\ge 3)\\\dfrac 1 2 \phi{(2^a)} & a\ge 3\\ \text{lcm}\left((\lambda(p_1^{a_1}),\cdots,\lambda(p_k^{a_k})\right)& \text{for }n=\prod_{i=1}^k p_i^{a_i} \end{cases}$ Since $a^{\lambda(n)}\equiv 1 \pmod n$, and $a^{\lambda(m)}\equiv1 \pmod m$ with $\gcd(m,n)=1 \implies a^{\text{lcm}({\lambda(n)},\lambda(m)}\equiv1 \pmod {nm} $ Hence $\lambda (mn)=\text{lcm}(\lambda(n), \lambda(n))$ for $m$ and $n$ coprime to each other. How can I prove the other relations?
You need to prove that there are primitive roots $\bmod p^k$ when $p$ is an odd prime. We can do this by induction over $k$. To do this, prove the base case using the fact that multiplicative groups of finite fields are cyclic and the fact that $\mathbb Z_p$ is a field. After this prove it for $k=2$. Take $a$ a primitive root $\bmod p$; the order of the subgroup generated by $a$ in $\mathbb Z_{p^2}$ is either $p-1$ or $p(p-1)$ by Lagrange's theorem and the fact it is a multiple of $p-1$ since $a$ is a primitive root $\bmod p$. If it is not a primitive root $\bmod p^2$ then we have $a^{p-1}\equiv 1 \bmod p^2$ We now prove $a+p$ is a primitive root $\bmod p^2$. This is because $(a+p)^{p-1}=a^{p-1}+(p-1)a^{p-2}p+\binom{p-1}{2}a^{p-3}p^2+\dots +p^{p-1}\equiv a^{p-1}+(p-1)a^{p-2}p\equiv 1+(p-1)a^{p-2}p\bmod p^2$ Since $(p-1)a^{p-2}p$ is not a multiple of $p^2$ we have that $1+(p-1)a^{p-2}p$ is not $1\bmod p^2$. Hence the order of $a+p$ is not $p-1$ and since $a+p$ is also a primitive root $\bmod p$ we conclude it is a primitive root $\bmod p^2$. We now use the lifting the exponent lemma to prove that a primitive root $\bmod p^2$ is a primitive root $\bmod p^k$ for any $k$. Proof: Let $a$ be a primitive root $\bmod p^2$; to prove $a$ is a primitive root $\bmod p^k$ we have to prove $a^{p^{k-2}(p-1)}\not\equiv 1\bmod p^k$. In other words we must prove that $a^{p^{k-2}(p-1)}-1$ is not a multiple of $p^k$. We rewrite this as $a^{p^{k-2}(p-1)}-(1)^{p^{k-2}(p-1)}$ and we apply the lifting the exponent lemma to obtain that the maximum power of $p$ dividing $a^{p^{k-2}(p-1)}-(1)^{p^{k-2}(p-1)}$ is equal to the maximum power of $p$ dividing $a^{p-1}-1^{p-1}$ plus the maximum power of $p$ dividing $p^{k-2}$, in other words $k-2+1=k-1$. So $p^k$ does not divide $a^{p^{k-2}(p-1)}-1$ as desired. This proves that there are primitive roots modulo $p^k$ for all powers of odd primes. In other words this proves $\lambda (p^k)=(p-1)p^{k-1}$ (because there are $(p-1)p^{k-1}$ residue classes that are relatively prime to $p$, and we have proven that there is an element that reaches them all with its powers). Proving $\lambda(2)=1$ and $\lambda (4)=2$ is easy by inspection. Proving $\lambda(2^k)=2^{k-2}$ for $k\ge 3$ can also be done using ideas similar to the previous ones. Sketch of the proof: Look at the problem $\bmod 8$. The residue classes are $1,3,5,7$, but the powers of any of these congruence classes $a$ are always $1$ and $a\bmod 8$. This means that an element can generate at most half of the residue classes (this is because it can only reach at most two of the four classes $\bmod 8$). Prove that the element $3$ does in fact have order $2^{k-2}$ and you are done.
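None of this needs code, but a brute-force comparison of $\lambda(p^a)$ with the closed form is an easy sanity check for small prime powers:

```python
from math import gcd

def carmichael(n):
    """Smallest e >= 1 with a**e = 1 (mod n) for every unit a."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    e = 1
    while any(pow(a, e, n) != 1 for a in units):
        e += 1
    return e

def lam_formula(p, a):
    if p == 2 and a >= 3:
        return 2 ** (a - 2)
    return (p - 1) * p ** (a - 1)     # phi(p^a) in the other cases

for p, a in [(2, 1), (2, 2), (2, 3), (2, 4), (2, 5),
             (3, 1), (3, 2), (3, 3), (5, 2), (7, 1)]:
    assert carmichael(p ** a) == lam_formula(p, a)
print("all prime-power cases match")
```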
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Problem about covering space Let $p:\tilde{X}\to X$ be a covering space, $\tilde{X}$ and $X$ are both path-connected and locally path-connected, if $p(x_1)=p(x_2)=x$, is $p_\ast(\pi_1(\tilde{X},x_1))=p_\ast(\pi_1(\tilde{X},x_2))$ always true?
Not in general. These subgroups will be conjugate but might not be equal. You can check this by taking a path between your points in $\widetilde{X}$ and projecting it down to a loop in $X$. It is true if $p$ is a regular covering. Also, you don't need the "locally path connected" assumption for any of these arguments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Learning spectral methods in numerical analysis I'm trying to learn the theory about spectral methods without any specific ties to a particular program like MATLAB. I tried to search for some lecture videos but it seems very limited and I'm not sure which books are well suited for self-learning about this topic. Does anyone know of any good books and/or lecture videos for learning about this method?
I would recommend: * *Canuto C. et al. Spectral Methods (springer link). *Trefethen L. Spectral Methods in Matlab. (This last one guides you through the implementation of the codes in Matlab but it can be easily extended to other programming language). Hope it helps. Cheers!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the range of a $4$th-degree function For the function $y=(x-1)(x-2)(x-3)(x-4)$, I see graphically that the range is $\ge-1$. But I cannot find a way to determine the range algebraically?
Non-calculus Approach/ Completing Square Approach Let $u=(x-2)(x-3)=x^2-5x+6$ $(x-1)(x-4)=x^2-5x+4=u-2$ So \begin{align} y&=(x-1)(x-2)(x-3)(x-4)\\&=u(u-2)\\&=u^2-2u\\&=(u-1)^2-1 \end{align} As $(u-1)^2\ge0$ for all $u\in{\Bbb{R}}$ $y\ge0-1=-1$ Moreover, the value $u=1$ is attained: $x^2-5x+6=1$ at $x=\frac{5\pm\sqrt5}{2}$, so the minimum $y=-1$ is achieved. Since $y$ is continuous and $y\to\infty$ as $x\to\infty$, the range is exactly $[-1,\infty)$.
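A quick numerical confirmation of the minimum (the grid and window are arbitrary):

```python
import numpy as np

x = np.linspace(-2, 7, 1_000_001)
y = (x - 1) * (x - 2) * (x - 3) * (x - 4)
print(y.min())                        # approximately -1
print(x[np.argmin(y)])                # near (5 - sqrt(5))/2 or (5 + sqrt(5))/2
```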
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that a linear transformation $T$ is one-to-one Problem: Consider the transformation $T : P_1 -> \Bbb R^2$, where $T(p(x)) = (p(0), p(1))$ for every polynomial $p(x) $ in $P_1$. Where $P_1$ is the vector space of all polynomials with degree less than or equal to 1. Show that T is one-to-one. My thoughts: I'm a little stuck on this one. I've already proven that T is a linear transformation. I know that to prove that T is one to one, for any $p(x)$ and $q(x)$ in $P_1$, (1) $T(p(x) + q(x)) = T(p(x)) + T(q(x))$ (2) $T(cp(x)) = cT(p(x))$ I'm going about it in a proof oriented way: Let p(x), q(x) be a polynomial in P1 such that T(p(x)) = T(q(x)). Let $c$ be a real number. (1) $T(p(x)) = T(q(x))$ -> $(p(0), p(1)) = (q(0), q(1))$ This means $p(0) = q(0)$ and $p(1) = q(1)$. I can't figure out how to prove that $p(x) = q(x)$ from knowing $p(0) = q(0)$ and $p(1) = q(1)$. Am I going about this in the wrong way? I know that another way to prove that $T$ is one to one is if you can find that the standard matrix $A$ for $T$ is invertible, but I can't figure out how to get a standard matrix for the linear transformation $T$ in this case. Any help or hints would be appreciated.
Yet another way is with the useful result that a linear map $L$ between algebraic structures is injective iff the kernel of $L$ is trivial. This means it suffices to show $T(p)=0$ iff $p=0$, i.e., iff $p$ is the $0$ polynomial. Assume then that $T(p)=(p(0),p(1))=(0,0)$. This means, for $p=ax+b$: $$p(0)=a(0)+b=b=0 $$ together with $$p(1)=a+b=a+0=0$$ It follows that $p=0$ necessarily, so the kernel of $T$ is trivial and the map is then injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Ratios and percents Mrs. Smith has 80 birds: geese, hens and ducks. The ratio of geese to hens is 1:3. 60% of the birds are ducks. How many geese does Mrs. Smith have? a) 16 b) 8 c) 12 d) 11 I know that if 60% if the birds are ducks, 40% are geese and hens. Now I have to find 40% of 80 so in order to do this I change 40% into a decimal, which is 0.4, and then multiply this by 80. The result of this is 32 which means there are 32 birds not including the ducks. I am sure I got this part of the question correct but after this I am not sure what to do. I started off by creating a chart because for every one geese there are three hens, so for every two geese there are six hens (multiples of three) and so on. Since this does not go evenly into 32, I know I am probably doing something wrong. I was told the answer is 8 but I do not understand why. I just know you add the 1 and the 3 and then divide this by 32 which gives you 8, but why do you do this? I understand it now. Thanks.
Let $G$ be the number of geese and $H$ be the number of hens. You said $G+H=32$. And you are also given $1 \cdot H=3 \cdot G$ or $H=3G$. So $G+3G=32$ I think you can finish solving that for $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Verify solution to ODE I am given the ODE $$\left(f''(x)+\frac{f'(x)}{x} \right) \left(1+f'(x)^2 \right) = f'(x)^2f''(x)$$ and I already know that the solution to this ODE is given by $$f(x)= c \cdot \operatorname{arcosh} \left( \frac{x}{c} \right) + d$$ where $0<|c| < x$ and $d \in \mathbb{R}.$ The problem is I want to show that this is an actual solution by direct integration (so I want to derive it) and not just verify it by plugging it in. Does anybody know how this can be done? After rearranging as Daniel Fischer proposed, I end up with $$f''(x) = -\frac{f'(x)^3+f'(x)}{x}$$
$$ \left(f'' +\frac{f'}{x}\right)(1+f'^2)=f'^2f'' $$ multiply by $x$ $$ (xf''+f')(1+f'^2) = \left(\dfrac{d}{dx}xf'\right)(1+f'^2) = xf'^2f'' $$ then we have $$ \frac{1}{xf'}\left(\dfrac{d}{dx}xf'\right) = \frac{f'f''}{(1+f'^2)} $$ thus $$ \dfrac{d}{dx}\ln(xf') = \frac{1}{2}\dfrac{d}{dx}\ln(1+f'^2) $$ which is $$ \ln (xf') = \frac{1}{2}\ln(1+f'^2) + C $$ or $$ x^2f'^2 = C_1(1+f'^2) $$ or $$ (x^2-C_1)f'^2 = C_1\implies f' = \sqrt{\frac{C_1}{x^2-C_1}} $$ or $$ \dfrac{df}{dx} = \frac{1}{\sqrt{\left(\frac{x}{\sqrt{C_1}}\right)^2-1}} $$ you should see what the solution of the last equation is.
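As a sanity check, here is a sketch that plugs the resulting $f'$ and $f''$ into the original ODE and confirms the residual vanishes (the value of $c$ and the sample points are arbitrary):

```python
import numpy as np

c = 0.8                               # arbitrary constant with 0 < c < x
x = np.linspace(1.5, 5.0, 9)
fp = ((x / c) ** 2 - 1) ** -0.5       # f'(x) from the last equation
fpp = -(x / c**2) * ((x / c) ** 2 - 1) ** -1.5

residual = (fpp + fp / x) * (1 + fp**2) - fp**2 * fpp
print(np.max(np.abs(residual)))       # ~1e-16, i.e. zero up to round-off
```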
{ "language": "en", "url": "https://math.stackexchange.com/questions/1295951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a book on proofs with solutions? I am a biochemistry graduate student who works on cancer. I am interested in learning proofs as a personal interest. I use math as a tool, but would like to start building a deeper understanding on my own. I am not taking any course. Hence, I am looking for a book with theory, exercises, and a solution manual, in case I am stuck. I find this forum extremely helpful, but I would still like to have a reference. Most books that I have started looking to buy do not have a solution manual. Can anyone recommend an author? Sorry for this general question. Thank you! EDIT: I watched the movie Good Will Hunting, so I feel confident! lol
Well, there are lots and lots of books with solutions. An answer to your question depends on which part of mathematics you would like to explore. For instance, a series of great ones about real analysis comes from Kaczor & Nowak
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Checking psd-ness of matrix I have the following problem and don't know how to proceed... I want to check if \begin{equation} \frac{1}{2}(B^\top A^\top A + A^\top A B) - \frac{1}{4}B^\top A^\top A A^\top (AA^\top AA^\top)^{-1}AA^\top A B \end{equation} is positive semi-definite (psd). We have $A \in \mathbb{R}^{m \times n}$ with full row rank. In addition, $B \in \mathbb{R}^{n \times n}$ is psd and symmetric. Could anyone please give me a hint?
I haven't finished the problem, but maybe these equalities can help you. Since $A$ has full row rank, $AA^T$ is invertible, so $$(AA^T AA^T)^{-1}=\left((AA^T)(AA^T)\right)^{-1}=(AA^T)^{-1}(AA^T)^{-1}.$$ Note that: $$B^T A^T A A^T (AA^T AA^T)^{-1}AA^T AB=B^T A^T (AA^T) (AA^T)^{-1}(AA^T)^{-1} (AA^T) AB$$ Then, $$B^T A^T A A^T (AA^T AA^T)^{-1}AA^T AB=B^T A^T AB=(AB)^T AB.$$ And $$B^T A^T A + A^T AB=(AB)^T A + A^T AB. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $G$ be an abelian group of order $(p^n)m$, where $p$ is a prime and $m$ is not divisible by $p$. Let $P = \{a\in G\mid a^{p^k} = e\text{ for some $k$ depending on $a$}\}$. Prove that (a) $P$ is a subgroup of $G$. (b) $G/P$ has no elements of order $p$. (c) the order of $P$ is $p^n$.
I like the rest above, so I just wanted to suggest an argument for (b). Suppose there is an $\overline{x} \in G/P$ with order $p$; then $\overline{x}^p=\overline{e}$, i.e. $x^pP=P$. But this means $x^p \in P$, so $(x^p)^{p^k}=e$ for some $k$, i.e. $x^{p^{k+1}}=e$. But then $x\in P$, and thus $\overline{x}$ is actually the identity in $G/P$. But the identity cannot have order $p$, so this is not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determinant of block matrix with commuting blocks I know that given a $2N \times 2N$ block matrix with $N \times N$ blocks like $$\mathbf{S} = \begin{pmatrix} A & B\\ C & D \end{pmatrix}$$ we can calculate $$\det(\mathbf{S})=\det(AD-BD^{-1}CD)$$ and so clearly if $D$ and $C$ commute this reduces to $\det(AD-BC)$, which is a very nice property. My question is, for a general $nN\times nN$ matrix with $N\times N$ blocks where all of the blocks commute with each other, can we find the determinant in a similar way? That is, by first finding the determinant treating the blocks like scalars and then taking the determinant of the resulting $N\times N$ matrix. Thanks in advance!
The answer is affirmative. See, e.g., theorem 1 of John Silvester, Determinants of Block Matrices.
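Here is a numerical illustration of the general statement, with a $3\times 3$ array of commuting $3\times 3$ blocks (all of them polynomials in one fixed matrix $M$, so they commute pairwise; the random coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
M = rng.standard_normal((N, N))

def poly_in_M(coeffs):                # Horner evaluation of a polynomial in M
    P = np.zeros((N, N))
    for c in coeffs:
        P = P @ M + c * np.eye(N)
    return P

Bl = [[poly_in_M(rng.standard_normal(3)) for _ in range(3)] for _ in range(3)]
S = np.block(Bl)

# the "determinant with blocks treated as scalars", expanded by cofactors
(a, b, c), (d, e, f), (g, h, i) = Bl
formal = a @ (e @ i - f @ h) - b @ (d @ i - f @ g) + c @ (d @ h - e @ g)
print(np.linalg.det(S), np.linalg.det(formal))   # these agree
```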
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What is a Galois closure and Galois group? I was reading the Wikipedia article on Galois groups when this term suddenly appeared, with no definition given for it. What is a Galois closure of a field $F$? Does this mean a maximal Galois extension of $F$, so that it merely means a separable closure of $F$? Secondly, what is a Galois group of an arbitrary extension $E/F$? Wikipedia states that $Gal(E/F)$ is defined as $Aut(G/F)$ where $G$ is a Galois closure of $E$. (Since I don't know what a Galois closure is, if you don't mind, I will add this part after I know what a Galois closure is. Otherwise, I will post another one)
Two points: One, Galois closure is a relative concept: it is not defined for a single field, but for a given extension of fields. Second, it is not something maximal. To the contrary, it is something minimal. Given an extension of fields $F\subset E$, if it is not Galois, then the smallest extension of $F$ containing $E$ that is a Galois extension of $F$ is called the Galois closure. EDIT (added on 8th Sep 2020) Given a finite extension $E$ of $F$ (when the extension is separable, by the primitive element theorem) we can obtain it as $F[X]/(f(X))$ for an irreducible polynomial $f(X)$ with coefficients in the field $F$. So $E$ is one field that contains a root of $f(X)$. Now the Galois closure is theoretically the field generated by all the roots of $f(X)$. Example: Let $b=\sqrt[3]{2}$ be the positive real cube root of 2. So the field $E=\mathbf{Q}[b]$ is an extension of degree 3 over $F=\mathbf{Q}$ completely contained inside the real numbers. The $f(X)$ in this case is $X^3-2$. This cubic polynomial has 2 other roots in the complex number system. Let $\omega= \cos 120^\circ + i\sin 120^\circ$ be a cube root of 1. Then $\omega b, \omega ^2b$ are the other two roots of $f(X)$. The new field $K=\mathbf{Q}[b,\omega]$ is the Galois closure of $E$ over $\mathbf{Q}$ (also called the splitting field of $f(X)$ over the rationals). So the normal closure for $E=F[X]/(f(X))$ can be defined as the splitting field for the polynomial $f(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Modified version of Simpson Rule I'm supposed to use some different version of Simpson's Rule in my Numerical Methods homework to compute some areas, considering the non-uniform spacing case. Namely, I've got two equal-length vectors $x,y$ representing the pairs $(x_i,f(x_i))$, and the components of the $x$ array aren't equally spaced. Is there some modified version of Simpson's Rule that fits my purposes? I couldn't find anything online. I know that I can just use the Trapezoidal Rule, but I was specifically asked to use Simpson's Rule.
If you want to use Simpson's rule with Taylor polynomials and unequally spaced intervals you can look at the derivation in https://www.math.ucla.edu/~yanovsky/Teaching/Math151A/hw6/Numerical_Integration.pdf (see Section 3, page 2). It doesn't spell out the full derivation, but it has enough to serve as a guide. I'll do my part and try to write out some of it. Consider the integral of $f(x)$ between the points $x_0$, $x_1$, and $x_2$. Here the three points are needed to describe a parabolic function. $\int_{x_0}^{x_2} f(x) dx$ = $\int_{x_0}^{x_2} \left [f(x_1) + f'(x_1) (x - x_1) + (1/2) f''(x_1) (x-x_1)^2 + (1/6) f'''(x_1) (x-x_1)^3 + \mathcal{O}\left(f^{(4)}(\xi)\right) \right ] dx$. So now, taking this integral you'll get: $f(x_1)(x_2 - x_0) + \frac{1}{2} f'(x_1) \left[(x_2-x_1)^2 - (x_0 -x_1)^2 \right] + \frac{1}{6} f''(x_1) \left[(x_2 - x_1)^3 - (x_0 - x_1)^3 \right] + \frac{1}{24} f'''(x_1) \left[(x_2-x_1)^4 -(x_0 - x_1)^4 \right]+ \mathcal{O}\left(f^{(4)}(\xi)\right)$. Now for equally spaced intervals $x_1 - x_0 = x_2 - x_1$, and so this reduces to the common expression for Simpson's rule (more in the referenced pdf). If you can't accurately estimate the derivatives you could use a different polynomial approximant besides Taylor polynomials, such as the Lagrange polynomials (http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html). To see a derivation of it, see Easa, S. M. (1988). Area of irregular region with unequal intervals. Journal of Surveying Engineering, 114(2), 50-58. That approximant doesn't require knowledge of the derivatives of the function, but it'll only be accurate to $f'''(\xi)$, at least from what I can remember, rather than $f^{(4)}$, like the Taylor polynomial approximant.
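Since the question asks for a method rather than a specific library, here is a minimal sketch of the Lagrange-polynomial route on an unequally spaced grid: fit the exact quadratic through each consecutive triple of points and integrate it in closed form (it assumes an odd number of sample points; the test grid is arbitrary):

```python
import numpy as np

def simpson_nonuniform(x, y):
    total = 0.0
    for i in range(0, len(x) - 2, 2):
        coeffs = np.polyfit(x[i:i+3], y[i:i+3], 2)   # quadratic through 3 points
        P = np.polyint(coeffs)                       # its antiderivative
        total += np.polyval(P, x[i+2]) - np.polyval(P, x[i])
    return total

# test on an irregular grid: f(x) = sin(x) on [0, pi], exact integral = 2
rng = np.random.default_rng(0)
x = np.sort(np.concatenate(([0.0, np.pi], np.pi * rng.random(13))))
y = np.sin(x)
print(simpson_nonuniform(x, y))       # close to 2
```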
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
For an infinite cardinal $\kappa$, $\aleph_0 \leq 2^{2^\kappa}$ I'm trying to do a past paper question which states: $$ \text{For all infinite cardinals $\kappa$, we have } \aleph_0 \leq 2^{2^\kappa}. $$ I'm supposed to be able to do this without the axiom of choice, but I can't see how. I know that there can be no bijection (or surjection) between any natural number and $X$, where $|X| = \kappa$, but I can't get any further. A solution or hint would be great, thanks! EDIT: Something that bugs me about this question is that they've written $\aleph_0 \leq 2^{2^\kappa}$, whereas the solution below using Cantor's theorem shows that $\aleph_0 < 2^{2^\kappa}$, that is, strict inequality holds. Is this just a mistake, or to throw people off?
HINT: It suffices to find a surjection from $2^\kappa$ onto the natural numbers; now look at cardinalities of finite sets. You can also show that equality is never possible. But that is beside the point. You are asked to find an injection, not to prove there are no bijections.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Proof that a degree $4$ polynomial has a minimum Let $$f(x) = x^4+a_3x^3+a_2x^2+a_1x+a_0.$$ Prove that $f(x)$ has a minimum point in $\Bbb{R}$. The Extreme Value Theorem implies that the minimum exists in some $[a,b]$, but how do I find the right $a$ and $b$?
If $|x|>|a_3|+\sqrt{|a_2|}+\sqrt[3]{|a_1|}+\sqrt[4]{|a_0|}$ then $f(x)>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Ring with no identity (that has a subring with identity) has zero divisors. Let $L$ be a non-trivial subring with identity of a ring $R$. Prove that if $R$ has no identity, then $R$ has zero divisors. So I assumed that there $\exists$ $e \in L$, such that $ex=xe=x$, $\forall$ $x\in L$ and there $\exists$ $x' \in R\setminus L$ such that $ex'\neq x'$ and $x'e\neq x'$. I then show (we obtain it from the last two inequalities), that we have $y=ey=ye,$ where $y=ex', y\neq x'$, which is a contradiction because $y\notin L.$ Is there a way to show from this that $R$ has zero divisors? Because in one of the steps I have $e(x'-y)=0$, but I can't seem to figure out if this can directly lead to the desired result.
Actually you can prove something even better: a nonzero idempotent in a rng without nonzero zero divisors is actually an identity for the rng. This applies to your case since the identity of the subrng would be a nonzero idempotent. The linked solution proves the contrapositive of your statement very simply, and along analogous lines to yours.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Symmetry argument and WLOG Suppose that $f:[0,1]\rightarrow \mathbb{R}$ has a continuous derivative and that $\int_0^1 f(x)dx=0$. Prove that for every $\alpha \in (0,1)$, $$\left|\int_0^{\alpha}f(x)dx\right| \leq \frac{1}{8}\max_{0\leq x\leq 1}|f'(x)|.$$ The solution goes like this: Let $g(x)=\int_0^x f(y)dy$, then $g(0)=g(1)=0$, so the maximum value of $|g(x)|$ must occur at a critical point $y\in (0,1)$ satisfying $g'(y)=f(y)=0$. We may take $\alpha = y$ hereafter. Since $\int_0^{\alpha}f(x)dx= - \int_0^{1-\alpha}f(1-x)dx$, we may without loss of generality assume that $\alpha \leq 1/2$. I don't understnd why the WLOG argument works in this case. Could someone please explains? Thanks in advance.
Consider an $\alpha$ that is greater than $\frac{1}{2}$. Then you know that $1-\alpha \leq \frac{1}{2}$, and we can then make the substitution they give you to "pretend" we have the kind of $\alpha$ that we want (less than or equal to $\frac{1}{2}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1296955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that $f$ in monotonic In my assignment I have to prove the following: Let $f$ a continuous function in $\Bbb R$. Prove the following: if $|f|$ is monotonic increasing, in R then $f$ is monotonic in R. Hint: Prove that $f(x)\ne 0$ for every $x \in \Bbb R$ My proof is rather long and I didn't use the fact the $f$ is continuous, which is kind of worrying. Is there a way proving that with using the continuity? Please let me know if I have a mistake somewhere, as I want to know if I'm correct. Proof Assume in contradiction that there is some $x=0$ such that $f(x)=0$ $$==>f(x)=0\to|f(x)=0|$$ Since $|f|$ in increasing, there is some $f(x_1)$ such that $$|f(x_1)|<|f(x)| =>|f(x_1)|< 0$$ That means the I found two values which in them |f| is decreasing which is a contradiction. Therefore, $$f(x)\ne0$$ for all $x$. I claim that $x>0$ for all x. Proof of that statement: Assume in contradiction that there is $x<0$ such that $f(x)=y$ * *if $f(x)<0$ then there is some $t$ such that $f(t)=0$ according to the intermidate value theorem. That is a contradiction to the fact that $f(x)\ne0$ *if $f(x)>0$ then $|f(x)|=f(x)$ so according to intermidate value theorem there is some $t$ such that $f(t)=0$, in contradiction to my previous proof.That is also a contradiction to the fact the $|f(x)|$ is increasing. Therefore, x>0 and |f| is increasing. * *If $f(x)>0$ then $f$ is increasing since $|f|$ is increasing. *if $f(x)<0$ then $f$ is decreasing since $|f|$ in increasing. I assume is contradiction the opposite and prove accordingly. I'm sorry it's that long. Is my solution true? Thank you for reading!
It's too complicated, and some arguments seem to be unjustified: e.g., why do you try to prove $x>0$ for all $x$? $x$ is the variable; it can take any value. As you noticed, we can't have $f(x)=0$, because this is the same as $\lvert f(x)\rvert=0$, which would imply that for $x'<x$ we have $\lvert f(x')\rvert<\lvert f(x)\rvert=0$, which is impossible. From there, as $f$ is continuous, $f(x)$ always has the same sign (else it would have a root by the IVT). If $f(x)>0$, then $f=\lvert f\rvert$, and it is increasing. If $f(x)<0$, then $f=-\lvert f\rvert$, and it is decreasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which values of $x$ is the following series convergent: $\sum_0^\infty \frac{1}{n^x}\arctan\Bigl(\bigl(\frac{x-4}{x-1}\bigr)^n\Bigr)$ For which values of $x$ is the following series convergent? $$\sum_{n=1}^{\infty} \frac{1}{n^x}\arctan\Biggl(\biggl(\frac{x-4}{x-1}\biggr)^n\Biggr)$$
If $x>1$ the series converges absolutely because $\arctan$ is a bounded function: $$ \Biggl|\frac{1}{n^x}\,\arctan\Bigl(\frac{x-4}{x-1}\Bigr)^n\Biggr|\le\frac{\pi}{2\,n^x}. $$ If $x<1$ then $(x-4)/(x-1)>1$ and $$ \lim_{n\to\infty}\arctan\Bigl(\frac{x-4}{x-1}\Bigr)^n=\frac{\pi}{2}. $$ It follows that the series diverges in this case: its terms are eventually comparable to $\frac{\pi}{2}\cdot\frac{1}{n^x}$, and $\sum 1/n^x$ diverges for $x<1$. (At $x=1$ the expression is not defined.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Residue Calculus - Showing that the quotient of polynomials have integral $0$ in a simple closed contour in a special case. I'm having difficulty understanding the solution to the following problem. In the solution below, I can't understand why since $b_m\neq 0$, the quotient of these polynomials is represented by a series of the form $d_0+d_1 z+d_2 z^2 +\cdots$. In fact, since the degree of the denominator is greater, shouldn't the quotient include negative powers of $z$? I would greatly appreciate it if anyone could explain this to me. [The problem statement and the quoted solution were attached as images in the original post.]
What I'd do: We have $|P(z)/Q(z)|$ on the order of $1/|z|^2$ as $|z| \to \infty.$ By Cauchy, $$\int_C \frac{P(z)}{Q(z)}\,dz = \int_{|z|=R} \frac{P(z)}{Q(z)}\,dz$$ for large $R.$ Use the "M-L" estimate on the latter to see its absolute value is no more than a constant times $(1/R^2)\cdot2\pi R.$ The latter $\to 0$ as $R\to \infty.$ Conclusion: $\int_C \frac{P(z)}{Q(z)}\,dz = 0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding a nullspace of a matrix - what should I do after finding equations? I am given the following matrix $A$ and I need to find a nullspace of this matrix. $$A = \begin{pmatrix} 2&4&12&-6&7 \\ 0&0&2&-3&-4 \\ 3&6&17&-10&7 \end{pmatrix}$$ I have found a row reduced form of this matrix, which is: $$A' = \begin{pmatrix} 1&2&0&0&\frac{23}{10} \\ 0&0&1&0&\frac{13}{10} \\ 0&0&0&1&\frac{22}{10} \end{pmatrix}$$ And then I used the formula $A'x=0$, which gave me: $$A' = \begin{pmatrix} 1&2&0&0&\frac{23}{10} \\ 0&0&1&0&\frac{13}{10} \\ 0&0&0&1&\frac{22}{10} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$ Hence I obtained the following system of linear equations: $$\begin{cases} x_1+2x_2+\frac{23}{10}x_5=0 \\ x_3+\frac{13}{10}x_5=0 \\ x_4+\frac{22}{10}x_5=0 \end{cases}$$ How should I proceed from this point? Thanks!
Here is an algorithm to find a basis for the null space of a matrix $A$: (a) row reduce $A$; (b) identify the free and pivot variables: the variables corresponding to the non-pivot columns, here $x_2$ and $x_5$ (the pivots sit in columns $1$, $3$ and $4$), are called the free variables and the rest are called pivot variables; (c) set one of the free variables to one and the rest to zero, then solve for the pivot variables. That gives you one solution in the null space; cycle through the free variables to find a basis for the null space. I will show you how to find one solution; you can find the other. Take $x_2 = 1$, $x_5 = 0$: the equations give $x_4 = 0$, then $x_3 = 0$ and $x_1 = -2$, so one vector in the basis is $$ \pmatrix{-2, &1, &0, &0, &0}^\top.$$
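If you want to check such a computation by machine, here is a minimal sketch using SymPy (only the matrix from the question is assumed; rref and nullspace are standard SymPy methods that carry out exactly the steps above):

    from sympy import Matrix

    # the matrix from the question
    A = Matrix([[2, 4, 12,  -6,  7],
                [0, 0,  2,  -3, -4],
                [3, 6, 17, -10,  7]])

    # rref() returns the reduced row echelon form and the pivot columns
    R, pivots = A.rref()
    print(pivots)        # (0, 2, 3): pivots in columns 1, 3, 4, so x2 and x5 are free

    # nullspace() carries out steps (b)-(c) for each free variable
    for v in A.nullspace():
        print(v.T)       # [-2, 1, 0, 0, 0] and [-23/10, 0, -13/10, -11/5, 1]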
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
An introduction to algebraic topology from the categorical point of view I'm looking for a modern algebraic topology textbook from a categorical point of view. Basically, I'd like a textbook that uses the language of functors, natural transformations, adjunctions, etc. right from the start. I'm comfortable with general topology and category theory, but I haven't had much exposure to algebraic topology beyond the basics of the fundamental group(oid) and de Rham cohomology. In particular, I'd like to learn about the various homology and cohomology theories.
Rotman's An Introduction To Algebraic Topology is a great book that treats the subject from a categorical point of view. Even just browsing the table of contents makes this clear: Chapter 0 begins with a brief review of categories and functors. Natural transformations appear in Chapter 9, followed by group and cogroup objects in Chapter 11. The aspect I like most about this book is that Rotman makes a clear distinction between results that are algebraic and topological. E.g., he proves several statements about group actions before then applying them to the particular topological setting of covering spaces and the action of the fundamental group on a fiber.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
Visualize $z+\frac{1}{z} \ge 2$ As we know, $$z+\frac{1}{z} \ge 2,~~~~~~~~~ z\in \mathbb{R}^+$$ always holds. However, is there any geometric way to visualize this inequality for someone who is not that expert in math? I know this question might have various answers.
Note that the inequality only holds for $x\in (0,\infty)$, which matches your restriction $z\in\mathbb{R}^+$. You could just try plotting the function $x\mapsto x+\frac1x$ and see that it is always at least $2$. http://www.wolframalpha.com/input/?i=plot+z+%2B+1%2Fz Or, you can plot $x\mapsto x$ and $x\mapsto \frac1x$ and try to understand what is happening on $(0,\infty)$. It's clear you only need to look at the interval $(\frac12, 2)$, since the inequality obviously holds outside it. https://www.wolframalpha.com/input/?i=plot+x+and+1%2Fx+from+x%3D1%2F2+to+2 Now, on this graph, I think it's possible to explain what is happening: * *It is clear that for $x=1$, the sum of the two functions is precisely $2$. *If you go to the right of the point $x=1$, then the bottom function $\frac1x$ is decreasing more slowly than the top function $x$ is increasing, so their sum will be above $2$, because it will be $$(1 + \text{something}) + (1 - \text{something smaller}) = 2 + (\text{something} - \text{something smaller})$$ so it will be $2$ plus something positive. *If you go to the left of the point $x=1$, then the bottom function $x$ is decreasing more slowly than the top function $\frac1x$ is increasing, so their sum will again be above $2$, because it will be $$(1 + \text{something}) + (1 - \text{something smaller}) = 2 + (\text{something} - \text{something smaller})$$ so it will be $2$ plus something positive.
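If you'd rather reproduce the picture locally than follow the Wolfram Alpha links, here is a minimal matplotlib sketch (the plotting window is my choice, not part of the question):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0.25, 4, 400)          # stay inside (0, infinity)
    plt.plot(x, x + 1/x, label='x + 1/x')
    plt.plot(x, x, '--', label='x')
    plt.plot(x, 1/x, '--', label='1/x')
    plt.axhline(2, color='gray', lw=0.8)   # the bound x + 1/x >= 2
    plt.legend()
    plt.show()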
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Finding the solutions of $x^2\equiv 9 \pmod {256}$. Find the solutions of $x^2\equiv 9 \pmod {256}$. I try to follow an algorithm shown to us in class, but I am having trouble doing so. First I have to check how many solutions there are. Since $9\equiv 1 \pmod {2^{\min\{3,k\}}}$, where $k$ fulfills $256=2^k$, and since $k\ge 3$, there are 4 solutions. As I understood it, I have to take a smaller power of 2 and check $x^2\equiv 9 \pmod{2^m}$. I took $m=4$ and found that $x=5$ is a solution. According to the algorithm, I should take $x=5+A\,t$ where $A$ is a power of 2, square it, and check congruence modulo 256 (I guess?), but the examples I was given didn't tell me exactly what to pick. I could really use your help on this.
The answers above explain the idea well. If you need a general formula for solving all questions of this type, then check this out: consider $x^2 \equiv a \pmod {2^n}$ where $a$ is odd. For $n = 1$ there is one solution, namely $x \equiv 1$. For $n = 2$ we separate into two cases: if $a \equiv 1$ then we have two solutions, $x \equiv 1,3$; if $a \equiv 3$ then we have no solution. Now, in your case, when $n \ge 3$ and $a \equiv 1 \pmod 8$, we always have 4 solutions. All you need to do is find one easy solution - let's call it $x_0$. In your case $(x-3)(x+3) \equiv 0 \pmod {256}$, so let's pick $x_0 = 3$ as our easy solution. Then all 4 solutions are $$x_0,\; -x_0,\; x_0 + 2^{n-1},\; -x_0 + 2^{n-1},$$ which gives the answers as above: $\{3,125,131,253 \equiv -3\}$. Write if you need a proof of the above.
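Both the exhaustive search and the formula are easy to script; a minimal Python sketch (the modulus and residue are the ones from the question, nothing else is assumed):

    N = 256   # 2**8, so n = 8 in the notation above

    # all solutions of x^2 = 9 (mod 256) by brute force
    sols = [x for x in range(N) if (x * x - 9) % N == 0]
    print(sols)                # [3, 125, 131, 253]

    # the formula x0, -x0, x0 + 2^(n-1), -x0 + 2^(n-1) with x0 = 3
    x0 = 3
    formula = sorted({x0 % N, -x0 % N, (x0 + N // 2) % N, (-x0 + N // 2) % N})
    print(formula == sols)     # True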
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Interpolation with nonvanishing constraint Let $x_1,x_2,\ldots,x_n$ be distinct complex numbers. Let $y_1,y_2,\ldots,y_n$ be nonzero complex numbers, and let $K$ be a bounded subset of $\mathbb C$. Does there always exist a polynomial $P$ such that $P(x_i)=y_i$ for every $i$ and $P$ is everywhere nonzero on $K$ ? This is obviously false when $\mathbb C$ is replaced by $\mathbb R$.
Yes, such a polynomial always exists: By scaling and replacing $K$ with a superset, we may assume that $K = \overline{\mathbb{D}} = \{ z : \lvert z\rvert \leqslant 1\}$ and that $\lvert x_i\rvert \leqslant 1$ for all $i$. For each $i$, consider the interpolation polynomial $$P_i(z) = \prod_{\substack{k=1\\k\neq i}}^n \frac{z-x_k}{x_i-x_k}$$ and define $C_i = \max \{ \lvert P_i(z)\rvert : z\in K\}$, and $C = \max \{ C_i : 1 \leqslant i \leqslant n\}$. Lagrange interpolation gives us a polynomial $L$ with $L(x_i) = \log y_i$ for $1 \leqslant i \leqslant n$, whichever logarithm of $y_i$ we choose. Then $f(z) = e^{L(z)}$ is an entire function with $f(x_i) = y_i$ for $1\leqslant i \leqslant n$, and $f(z) \neq 0$ for all $z\in \mathbb{C}$. Thus $m := \min \{ \lvert f(z)\rvert : z\in K\} > 0$. Since the Taylor series of $f$ converges uniformly to $f$ on $K$, there is a Taylor polynomial $T$ of $f$ with $$\lvert T(z) - f(z)\rvert < \frac{m}{n(C+1)}$$ for $z \in K$. Finally, consider the polynomial $$P(z) = T(z) + \sum_{j=1}^n \bigl(y_j - T(x_j)\bigr)P_j(z).$$ Then we have $$P(x_i) = T(x_i) + \sum_{j=1}^n \bigl(y_j - T(x_j)\bigr)P_j(x_i) = T(x_i) + \bigl(y_i - T(x_i)\bigr) = y_i$$ for $1 \leqslant i \leqslant n$, and \begin{align} \lvert P(z)\rvert &\geqslant \lvert T(z)\rvert - \sum_{j=1}^n \lvert y_j - T(x_j)\rvert\cdot \lvert P_j(z)\rvert\\ &\geqslant m - \sum_{j=1}^n \lvert y_j - T(x_j)\rvert C_j\\ &\geqslant m - nC\frac{m}{n(C+1)}\\ &= \frac{m}{C+1}\\ &> 0 \end{align} for $z\in K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a Lyapunov function for a given system I need to find a Lyapunov function for $(0,0)$ in the system: \begin{cases} x' = -2x^4 + y \\ y' = -2x - 2y^6 \end{cases} A graph built using this tool showed that there should be stability but not asymptotic stability. I'd like to prove that fact by means of a Lyapunov function but cannot find an appropriate one among the most common ones such as $Ax^{2n}+By^{2m}$, $Ax^2+By^2+Cxy$, etc. Please give me some hint about how to proceed with the search for a suitable function, or even the desired function itself if you know an easy way of finding it in this case. Thanks in advance
As explained in the comments, even though simulated phase diagrams seem to exhibit cycling trajectories near the origin, they are not conclusive enough to decide whether the origin is stable or not, that is, whether trajectories cycle, spiral outward, or spiral inward. Caution is advised about approximation errors in simulations. (The Wolfram Alpha query used for the phase portrait: streamplot{{-2x^4+y,-2x-2y^6},{x,-1,1},{y,-1,1}}.)
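The phase portrait can also be reproduced locally; a minimal matplotlib sketch of the vector field (the window and grid density are arbitrary choices of mine, and the same caution about reading stability off a picture applies):

    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
    u = -2 * x**4 + y            # x' component of the system
    v = -2 * x - 2 * y**6        # y' component of the system

    plt.streamplot(x, y, u, v, density=1.4)
    plt.xlabel('x'); plt.ylabel('y')
    plt.show()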
{ "language": "en", "url": "https://math.stackexchange.com/questions/1297904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Having a problem with the last step in proving by induction $\sum^{2n}_{i=n+1}\frac{1}{i}=\sum^{2n}_{i=1}\frac{(-1)^{1+i}}{i}$ for $n\ge 1$ The question I am asked is to prove by induction $\sum^{2n}_{i=n+1}\frac{1}{i}=\sum^{2n}_{i=1}\frac{(-1)^{1+i}}{i}$ for $n\ge 1$. It's easy to prove this holds for $n =1$: it gives $\frac{1}{2}=\frac{1}{2}$. Now, assuming it is true for $n$, I want to show it is true for $n+1$. So, $\sum^{2n}_{i=n+1}\frac{1}{i}=\sum^{2n}_{i=1}\frac{(-1)^{1+i}}{i}$ $\sum^{2n}_{i=n+1}\frac{1}{i}+\frac{(-1)^{1+(2n+1)}}{2n+1}+\frac{(-1)^{1+(2n+2)}}{2n+2}=\sum^{2n}_{i=1}\frac{(-1)^{1+i}}{i}+\frac{(-1)^{1+(2n+1)}}{2n+1}+\frac{(-1)^{1+(2n+2)}}{2n+2} $ $\sum^{2n}_{i=n+1}\frac{1}{i}+(-1)^{2n+2}\Bigl[ \frac{1}{2n+1}+\frac{(-1)}{2n+2}\Bigr]=\sum^{2n+2}_{i=1}\frac{(-1)^{1+i}}{i} $ $\sum^{2n}_{i=n+1}\frac{1}{i}+ \frac{1}{2n+1}+\frac{(-1)}{2n+2}=\sum^{2(n+1)}_{i=1}\frac{(-1)^{1+i}}{i} $ $\sum^{2n+1}_{i=n+1}\frac{1}{i}+\frac{(-1)}{2n+2}=\sum^{2(n+1)}_{i=1}\frac{(-1)^{1+i}}{i} $ I don't know what I can do next. If the numerator of $\frac{1}{i}+\frac{(-1)}{2n+2}$ were positive, I would know what to do. Is there a way I can turn it positive? Or is my approach wrong?
Since an answer on how to continue your proof was already given, I want to give another proof not using induction (which is, in my eyes, not the right way to prove this equality). We have $$\sum_{i=1}^{2n}\frac{(-1)^{i+1}}{i} \overset{(1)}{=} \sum_{i=1}^{n}\frac{1}{2i-1}-\sum_{i=1}^{n}\frac{1}{2i} = \sum_{i=1}^{n}\frac{1}{2i-1}+\sum_{i=1}^{n}\frac{1}{2i}-2\sum_{i=1}^{n}\frac{1}{2i}$$ $$\overset{(2)}{=}\sum_{i=1}^{2n}\frac{1}{i}-\sum_{i=1}^{n}\frac{1}{i} = \sum_{i=n+1}^{2n}\frac{1}{i},$$ where $(1)$ splits the sum into its odd and even parts and $(2)$ recombines them, using $2\sum_{i=1}^n\frac{1}{2i}=\sum_{i=1}^n\frac1i$.
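The identity is also easy to sanity-check numerically with exact arithmetic; a minimal Python sketch:

    from fractions import Fraction

    def lhs(n):   # sum_{i=n+1}^{2n} 1/i
        return sum(Fraction(1, i) for i in range(n + 1, 2 * n + 1))

    def rhs(n):   # sum_{i=1}^{2n} (-1)^(i+1)/i
        return sum(Fraction((-1) ** (i + 1), i) for i in range(1, 2 * n + 1))

    print(all(lhs(n) == rhs(n) for n in range(1, 20)))   # True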
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Confusion with Summations I am having a little bit of confusion regarding summations. I know that $$\sum_{i=m}^n a_i = a_{m}+a_{m+1}+\cdots +a_{n-1}+a_n$$ Here is my confusion. How do we interpret/decompose the following: $$ \sum_{i=m}^n a_i~~~~,~~~m=0,1,2,3,\ldots,k~~? $$
Is it the $m=0,1,2,3,\dots,k$ that's confusing you? That just means there are several sums: $$ \text{$\sum_{i=0}^na_i$, $\ \ \sum_{i=1}^na_i$, $\ \ \sum_{i=2}^na_i$, $\ \ \dots\ $, $\ \sum_{i=k}^na_i$}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Kernel Principal Component Analysis (PCA) I am learning kernel PCA from Wikipedia. In that article, the eigenvalue equation is \begin{equation} N \lambda \vec{\alpha} = \boldsymbol{K} \vec{\alpha} \end{equation} where $\lambda$ is the eigenvalue, $\vec{\alpha}$ is the eigenvector for $\lambda$ and $\boldsymbol{K}$ is the kernel matrix. Why is the left side multiplied by $N$? What is the meaning and effect of $N$?
Recall that for linear PCA, $N$ is the number of data points in the set; it appears in the eigendecomposition of the covariance matrix $C$, where $C = \frac{1}{N} \sum_{i=1}^N x_i x_i^\top$ (we look for eigenvectors $v$ such that $\lambda v = C v$). So we simply multiply through by the constant $N$ to bring the $\frac{1}{N}$ to the other side. The same multiplication by this constant happens in the kernel method, which is where the $N$ in $N\lambda\vec\alpha = \boldsymbol K\vec\alpha$ comes from.
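To see the factor $N$ at work numerically: with a linear kernel, the eigenvalues recovered from $N\lambda\vec\alpha=\boldsymbol K\vec\alpha$ coincide with those of the covariance matrix. A minimal NumPy sketch (the random centered data is my choice, not from the article):

    import numpy as np

    rng = np.random.default_rng(0)
    N, d = 50, 3
    X = rng.standard_normal((N, d))
    X -= X.mean(axis=0)             # center the data

    C = X.T @ X / N                 # covariance matrix, d x d
    K = X @ X.T                     # linear-kernel Gram matrix, N x N

    cov_eigs = np.sort(np.linalg.eigvalsh(C))[::-1]
    ker_eigs = np.sort(np.linalg.eigvalsh(K))[::-1][:d] / N   # note the division by N

    print(np.allclose(cov_eigs, ker_eigs))   # True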
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove result about basis of a linear map with specific properties I am working on the following problem. Let $V$ be an $n$-dimensional vector space over $K$ and $T: V\to V$ a linear map. For $k = 1, \ldots, n$ let $x_k \in V \smallsetminus \{0\}$ and $\lambda_k \in K$ be given such that $T(x_k) = \lambda_k x_k$ for all $k$. Assume that $\lambda_k \not= \lambda_l$ whenever $k \not= l.$ Prove that $(x_1, \ldots, x_n)$ is a basis of $V$. Hint: assume $\sum_{k = 1}^n \alpha_k x_k = 0$, where $l:= \max\{k \mid \alpha_k \not= 0\}$ is as small as possible. Then apply $\lambda_l\text{Id}_V - T$ to both sides. So far, I have managed to prove that, for $n \ge 1$, $x_1, \ldots, x_n$ must be linearly independent (by applying the hint). However, I am not yet sure how to show that $x_1, \ldots, x_n$ span $V$. I would appreciate help with this.
If $V$ is a vector space of dimension $n$ over $K$, then any $n$ linearly independent vectors in $V$ form a basis of $V$. Hence $x_1, \ldots, x_n$ span $V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does it mean to complete a pair of vectors $(w, s)$ to an arbitrary basis of $\mathbb R^d$? I found this in an article: Let $B = (b_1, b_2, \ldots, b_d)$ be an orthonormal basis of $\mathbb R^d$ such that $\langle b_1, b_2 \rangle = \langle w, x \rangle$ (where $\langle \cdots \rangle$ denotes linear span). In order to construct $B$, first complete $(w, x)$ to an arbitrary basis of $\mathbb R^d$ and then apply the Gram-Schmidt orthonormalization process. My question is: what does it mean to complete a pair of vectors $(w, s)$ to an arbitrary basis of $\mathbb R^d$? I read this: how to complete an arbitrary basis knowing 2 orthonormal vectors of $\mathbb R^d$ ($d > 2$), but I didn't understand how to choose these vectors randomly: "Practically, one could choose $d−2$ random vectors $x_1,…,x_{d−2}$ (e.g. by choosing each coordinate following a Gaussian distribution)." Please use as many details as possible in your explanations.
To complete a pair of vectors $(w,s)$ to an arbitrary basis of $\mathbb R^d$ means to find an ordered basis of $\mathbb R^d$ such that $w,s$ are the first two vectors of it. Of course there are a lot of different possible choices. Not at all completely random, however. The third vector $v_3$ should be chosen so as not to belong to $\langle w,s\rangle$. The fourth vector $v_4$ (if $d\ge 4$) should be chosen so as not to belong to $\langle w,s, v_3\rangle$, and so on... In this way each vector is not a linear combination of the previous ones, and this is enough to show that they are all linearly independent. When you've got $d$ of them you have a basis. Nothing you've done so far guarantees that this basis is orthonormal. At this point you apply Gram-Schmidt.
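In code the construction is short; a minimal NumPy sketch (the function name and the resampling strategy are mine) that completes two given linearly independent vectors to a basis with random draws and then orthonormalizes. The rank check guards against the probability-zero event of a dependent draw:

    import numpy as np

    def orthonormal_basis_from(w, s):
        # returns an orthonormal basis b1,...,bd of R^d with <b1,b2> = <w,s>;
        # assumes w and s are linearly independent
        d = len(w)
        vectors = [np.asarray(w, float), np.asarray(s, float)]
        rng = np.random.default_rng()
        while len(vectors) < d:                    # complete to an arbitrary basis
            v = rng.standard_normal(d)             # Gaussian coordinates, as suggested
            if np.linalg.matrix_rank(np.vstack(vectors + [v])) == len(vectors) + 1:
                vectors.append(v)                  # keep v only if still independent
        basis = []
        for v in vectors:                          # Gram-Schmidt
            for b in basis:
                v = v - (v @ b) * b
            basis.append(v / np.linalg.norm(v))
        return np.array(basis)

    B = orthonormal_basis_from([1, 0, 0, 0], [1, 1, 0, 0])
    print(np.allclose(B @ B.T, np.eye(4)))         # True: the rows are orthonormal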
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $\int_0^{+\infty}\frac{e^{-\alpha x^2} - \cos{\beta x}}{x^2}dx$ I need to evaluate $$\int_0^{+\infty}\frac{e^{-\alpha x^2} - \cos{\beta x}}{x^2}dx$$ I know that the Leibniz rule can help but I don't know how to use it. Could you help me please? Thank you.
The integral does converge for $\alpha>0$: near $x=0$ we have $e^{-\alpha x^2}-\cos\beta x=\bigl(\tfrac{\beta^2}{2}-\alpha\bigr)x^2+O(x^4)$, so the integrand extends continuously to $x=0$ with value $\tfrac{\beta^2}{2}-\alpha$, while at infinity both $e^{-\alpha x^2}/x^2$ and $\cos(\beta x)/x^2$ are absolutely integrable. Writing the numerator as $(1-\cos\beta x)-(1-e^{-\alpha x^2})$ and using the standard integrals $$\int_0^{\infty}\frac{1-\cos\beta x}{x^2}\,dx=\frac{\pi|\beta|}{2}, \qquad \int_0^{\infty}\frac{1-e^{-\alpha x^2}}{x^2}\,dx=\sqrt{\pi\alpha}$$ (both follow from integration by parts: the first reduces to the Dirichlet integral $\int_0^\infty\frac{\sin t}{t}\,dt=\frac{\pi}{2}$, the second to the Gaussian integral), we get $$\int_0^{+\infty}\frac{e^{-\alpha x^2}-\cos\beta x}{x^2}\,dx=\frac{\pi|\beta|}{2}-\sqrt{\pi\alpha}.$$ The Leibniz rule works here too: differentiating under the integral sign with respect to $\beta>0$ gives $\int_0^\infty\frac{\sin(\beta x)}{x}\,dx=\frac{\pi}{2}$, which recovers the $\frac{\pi\beta}{2}$ term.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Understanding the difference between Span and Basis I've been reading a bit around MSE and I've stumbled upon some similar questions as mine. However, most of them do not have a concrete explanation to what I'm looking for. I understand that the Span of a Vector Space $V$ is the linear combination of all the vectors in $V$. I also understand that the Basis of a Vector Space V is a set of vectors ${v_{1}, v_{2}, ..., v_{n}}$ which is linearly independent and whose span is all of $V$. Now, from my understanding the basis is a combination of vectors which are linearly independent, for example, $(1,0)$ and $(0,1)$. But why? The other question I have is, what do they mean by "whose span is all of $V$" ? On a final note, I would really appreciate a good definition of Span and Basis along with a concrete example of each which will really help to reinforce my understanding. Thanks.
The span of a finite subset $S$ of a vector space $V$ is the smallest subspace of $V$ that contains all vectors in $S$. One shows easily that it is the set of all linear combinations of elements of $S$ with coefficients in the base field (usually $\mathbf R$, $\mathbf C$ or a finite field). A basis of the vector space $V$ is a subset of linearly independent vectors that span the whole of $V$. If $S=\{x_1, \dots, x_n\}$ is a basis, this means that for any vector $u\in V$, there exists a unique system of coefficients such that $$u=\lambda_1 x_1+\dots+\lambda_n x_n. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 2 }
Intuition behind independence result The following problem is from Wasserman's $\textit{All of Statistics}$. I have worked through the algebra to arrive at the result, but it still seems very strange to me, so I would appreciate any intuition that could be offered. Let $N$~Poisson($\lambda$). Suppose we toss a coin $N$ times, with $p$ being the probability of heads, and let $X$ be the number of heads, and $Y$ be the number of tails. Show that $X$ and $Y$ are independent. Now, as I said, I have shown that $X$ and $Y$ are independent, but it $\textit{feels}$ like they shouldn't be. At first glance it seems like a very large value of $X$ should make $P(Y|X)$ smaller for large $Y$. What's going on here?
Suppose you're talking about a fair coin, and you're told that 50 tosses (out of an unknown number of tosses) are heads. Intuitively, to infer $Y=N-X$ given $X=50$, you would need to first infer $N$, which in turn is inferred from the probability of heads relative to tails (the next best thing you have, as you don't know $N$), and this probability has absolutely nothing to do with your observed values. Essentially, the point is that you have no idea what the entire sample size is, so if you are only told that a million people in city X are male, the best guess you have for the number of females is also a million, i.e. $\mathbb P [\text{individual is female}]=0.5$ regardless of the number of males in the city when you don't know the full population.
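This "Poisson thinning" fact is also easy to see empirically; a minimal simulation sketch ($\lambda$ and $p$ are arbitrary values of my choosing):

    import numpy as np

    rng = np.random.default_rng(1)
    lam, p, trials = 6.0, 0.3, 200_000

    N = rng.poisson(lam, trials)
    X = rng.binomial(N, p)            # heads among the N tosses
    Y = N - X                         # tails

    # X and Y come out Poisson(lam*p) and Poisson(lam*(1-p)), and uncorrelated
    print(X.mean(), lam * p)          # both about 1.8
    print(Y.mean(), lam * (1 - p))    # both about 4.2
    print(np.corrcoef(X, Y)[0, 1])    # close to 0

Zero correlation is of course weaker than the independence proved in the exercise, but comparing the empirical distribution of $Y$ across different observed values of $X$ tells the same story.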
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to find the greatest prime number that is smaller than $x$? I want to find the greatest prime number that is smaller than $x$, where $x \in \mathbb N$. I wonder whether there is any formula or algorithm to find such a prime?
As Emilio Novati stated in a comment, the sieve of Eratosthenes will work. The sieve will probably be fast enough for your needs, although potentially faster approaches exist (see Lykos's answer). I didn't want to bother converting it to pseudocode, so here is a function written in C that returns the greatest prime less than or equal to $N$ (for the greatest prime strictly smaller than $x$, call it with $x-1$). Note that the flag array needs $N+1$ entries, since index $N$ is used.

    #include <stdlib.h>

    /* Returns the greatest prime <= N; assumes N >= 2. */
    unsigned long primeNoGreaterThan(unsigned long N)
    {
        unsigned long i, j, winner = 2;
        _Bool* primes = (_Bool*) malloc((N + 1) * sizeof(_Bool)); /* N+1 flags, so primes[N] is valid */
        primes[0] = primes[1] = 0;
        for(i = 2; i <= N; ++i)
            primes[i] = 1;                /* assume prime until crossed out */
        for(i = 2; i <= N; ++i)
        {
            if(primes[i])
            {
                winner = i;               /* largest prime found so far */
                for(j = i + i; j <= N; j += i)
                    primes[j] = 0;        /* cross out the multiples of i */
            }
        }
        free(primes);
        return winner;
    }
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate $\lim_{x \to 0} \frac{e^{3x} - 1}{e^{4x} - 1}$ Question: Calculate $$\lim_{x \to 0} \frac{e^{3x} - 1}{e^{4x} - 1}$$ using substitution, cancellation, factoring etc. and common standard limits (i.e. not by L'Hôpital's rule). Attempted solution: It is not possible to solve by evaluating it directly, since it leads to a "$\frac{0}{0}$" situation. These kinds of problems are typically solved by cancellation and/or direct application of a standard limit after some artful substitution in a precalculus context. It was a bit hard to find a good substitution, so I tried several: $t = 3x$ $$\lim_{x \to 0} \frac{e^{3x} - 1}{e^{4x} - 1} = \{t=3x\} = \lim_{t \to 0} \frac{e^{t} - 1}{e^{4\frac{t}{3}} - 1}$$ Although this gets me closer to the standard limit $$\lim_{x \to 0} \frac{e^x - 1}{x} = 1,$$ ...it is not good enough. $t = e^{4x} - 1$ $$\lim_{x \to 0} \frac{e^{3x} - 1}{e^{4x} - 1} = \{t=e^{4x} - 1\} = \lim_{t \to 0} \frac{e^{\frac{3 \ln (t+ 1)}{4}} - 1}{t}$$ Not sure where to move on from here, if it is at all possible. Looks like the denominator can be factored by the conjugate rule: $$\lim_{x \to 0} \frac{e^{3x} - 1}{e^{4x} - 1} = \lim_{x \to 0} \frac{e^{3x} - 1}{(e^{2x} - 1)(e^{2x} + 1)}$$ Unclear where this trail can lead. What are some productive substitutions or approaches to calculating this limit (without L'Hôpital's rule)?
$$\frac{(e^x)^3-1}{(e^x)^4-1}=\frac{e^x+1}{2(e^{2x}+1)}+\frac{1}{2(e^x+1)}.$$ The identity can be checked by putting the right-hand side over the common denominator $2(e^x+1)(e^{2x}+1)$ and comparing with $\frac{u^3-1}{u^4-1}=\frac{u^2+u+1}{(u+1)(u^2+1)}$ for $u=e^x$. Since the right-hand side is continuous at $x=0$, the limit is $\frac{2}{2\cdot 2}+\frac{1}{2\cdot 2}=\frac{3}{4}$.
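A quick symbolic check of both the identity and the limit; a minimal SymPy sketch, with u standing in for $e^x$:

    from sympy import symbols, exp, cancel, limit

    x, u = symbols('x u')

    # the partial-fraction identity, as a rational function of u = e^x
    identity = (u**3 - 1)/(u**4 - 1) - ((u + 1)/(2*(u**2 + 1)) + 1/(2*(u + 1)))
    print(cancel(identity))                               # 0

    print(limit((exp(3*x) - 1)/(exp(4*x) - 1), x, 0))     # 3/4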
{ "language": "en", "url": "https://math.stackexchange.com/questions/1298780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 4 }