Number of solutions to $x^2\equiv b \mod p^n$ For an odd prime $p$ and integers $b,n$, I'm interested in finding the number of solutions to $$x^2 \equiv b \mod p^n$$ Researching this led me to Hensel's lemma, but I want to verify I understood it correctly. By Hensel's lemma, a solution $x_i$ to $f(x)\equiv 0 \text{ mod } p^i$ lifts to a unique solution modulo $p^{i+1}$ if and only if $p\nmid f'(x_i)$. In the given case, $f'(x_i)=2x_i$. Thus if $x_1$ is a solution of $x^2\equiv b \text{ mod } p$ (with $p\nmid b$), then $p\nmid f'(x_1)$, so $x_1$ lifts to a single solution, which keeps lifting by the same argument: always $x_i\equiv x_1\text{ mod } p$, which means that always $p\nmid x_i$ and $p\nmid 2x_i$. Does that mean that the original congruence will always have exactly two solutions if $b$ is a quadratic residue mod $p$, or am I doing something wrong?
Since $p$ is odd and $p\nmid b$, the proof given by @André Nicolas can be easily formalized as follows : the cyclic group $W_n := (Z/p^nZ)^* $ admits exactly one cyclic subgroup of order $(p-1)$ and one of order $p^{n-1}$, and since these orders are co-prime, $W_n \cong F_p^* \times C_{p^{n-1}}$. Because $p$ is odd, $C_{p^{n-1}}=C_{p^{n-1}}^2$ and so $W_n / {W_n}^2 \cong F_p^* / {F_p^*}^2$. The desired result follows.
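To sanity-check this numerically, here is a small brute-force sketch (plain Python; the choice of $p=5$, $n=3$ is an arbitrary example) counting the square roots of each unit residue modulo $p^n$:

```python
# Count solutions of x^2 = b (mod p^n) by brute force.
def count_sqrts(b, p, n):
    m = p ** n
    return sum(1 for x in range(m) if (x * x - b) % m == 0)

p, n = 5, 3  # arbitrary odd prime and exponent
for b in range(1, p ** n):
    if b % p != 0:  # restrict to units, where Hensel's lemma applies
        roots = count_sqrts(b, p, n)
        assert roots in (0, 2)  # a unit quadratic residue has exactly two roots
print("every unit residue mod", p ** n, "has 0 or 2 square roots")
```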
{ "language": "en", "url": "https://math.stackexchange.com/questions/1769291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove that $\sum_{k=1}^n \frac{1}{n+k} = \sum_{k=1}^{2n} \frac{1}{k}(-1)^{k-1}$ using induction I'm trying to prove (using induction) that: $$\sum_{k=1}^n \frac{1}{n+k} = \sum_{k=1}^{2n} \frac{1}{k}(-1)^{k-1}.$$ I have found problems when I tried to establish an induction hypothesis and solving this because I've learned to do things like: $$ \sum_{k=1}^{n+1} \frac{1}{k}= \sum_{k=1}^{n} \frac{1}{k} + \frac{1}{n+1}.$$ But, in this case, $n$ appears in both parts of summation and I have no idea how make a relation with $$\sum_{k=1}^n \frac{1}{n+k} $$ and $$\sum_{k=1}^{n+1} \frac{1}{n+1+k}. $$ Because I've seen tha, the case with "$n+1$" should be like: $$\sum_{k=1}^{n+1} \frac{1}{n+1+k} = \sum_{k=1}^{2n+2} \frac{1}{k}(-1)^{k-1}$$ and I cant find a connection between $$\sum_{k=1}^{n+1} \frac{1}{n+1+k} $$ and $$\sum_{k=1}^{n} \frac{1}{n+k}.$$ Could anyone help me with this?
This is kind of a cute induction proof. The key in this case will be a careful rearrangement of the terms. Let's get started. I'll assume the base case has been checked, so let's move on to the induction step. That is, we assume then that $$ \sum_{k=1}^{2n} (-1)^{k-1}\frac{1}{k} = \sum_{k=1}^n \frac{1}{n+k}. $$ It seems hard (as you've noted!) to work with the right-hand side, so let us focus on manipulating the left-hand side and see where that gets us. Looking at the two new terms that appear at step $n+1$, we should expect $$ \frac{1}{2n+1} - \frac{1}{2n+2} + \sum_{k=1}^{2n} (-1)^{k-1}\frac{1}{k}. $$ Now, by the induction hypothesis we can replace this last bit with its equivalent form. That is, what we have just written is equal to $$ \frac{1}{2n+1} - \frac{1}{2n+2} + \sum_{k=1}^n \frac{1}{n+k}. $$ So how can this help us? Let us explicitly write this sum out: the whole expression we have is $$ \frac{1}{n+1} + \underbrace{\frac{1}{n+2} + \cdots + \frac{1}{2n} + \frac{1}{2n + 1}}_{\text{this is almost the good part!}} - \frac{1}{2n+2} $$ so we just need to see that the two end terms combine to $+\frac{1}{2n + 2}$. Can you see how that could happen?
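Before writing the induction out, it can also help to check the identity exactly; a quick sketch in Python using the standard fractions module for exact rational arithmetic:

```python
from fractions import Fraction

def lhs(n):  # sum_{k=1}^{n} 1/(n+k)
    return sum(Fraction(1, n + k) for k in range(1, n + 1))

def rhs(n):  # sum_{k=1}^{2n} (-1)^(k-1)/k
    return sum(Fraction((-1) ** (k - 1), k) for k in range(1, 2 * n + 1))

assert all(lhs(n) == rhs(n) for n in range(1, 25))
print("identity verified exactly for n = 1, ..., 24")
```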
{ "language": "en", "url": "https://math.stackexchange.com/questions/1769421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find $\lim_{n \to \infty} \left( \frac{3^{3n}(n!)^3}{(3n)!}\right)^{1/n}$ Find $$\lim_{n \to \infty} \left( \frac{3^{3n}(n!)^3}{(3n)!}\right)^{1/n}$$ I don't know what method to use, if we divide numerator and denominator with $3^{3n}$, I don't see that we win something. I can't find two sequences, to use than the squeeze theorem. I'm stuck.
Use equivalents and Stirling's formula: $\;n!\sim_\infty \sqrt{2\pi n}\Bigl(\dfrac n{\mathrm e}\Bigr)^{\!n}$: $$\biggl(\!\frac{3^{3n}(n!)^3}{(3n)!}\!\biggr)^{1/n}\!\sim_\infty\left(\frac{3^{3n}\sqrt{(2\pi n)^3\strut}\Bigl(\dfrac{n}{\mathrm e}\Bigr)^{\!3n}}{\sqrt{6\pi n}\Bigl(\dfrac{3n}{\mathrm e}\Bigr)^{\!3n}}\right)^{\!\tfrac 1n}=\Bigl(\sqrt{\tfrac 43} \pi n\Bigr)^{\!\tfrac 1n}\to 1.$$
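For a numerical sanity check of this (slow) convergence, one can evaluate $a_n^{1/n}$ in log space; a small sketch using Python's math.lgamma so that $n!$ never overflows:

```python
from math import lgamma, log, exp

def term(n):
    # log of a_n = 3^{3n} (n!)^3 / (3n)!, then a_n^{1/n} = exp(log(a_n)/n)
    log_a = 3 * n * log(3) + 3 * lgamma(n + 1) - lgamma(3 * n + 1)
    return exp(log_a / n)

for n in (10, 100, 1000, 100000):
    print(n, term(n))  # values decrease toward 1, consistent with the estimate
```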
{ "language": "en", "url": "https://math.stackexchange.com/questions/1769570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Prove that if $x$ is conjugate to $x^{-1}$ and $y$, that $y$ is conjugate to $y^{-1}$ The full question: Prove the following: a) If $x$ is conjugate to $x^{-1}$ and $y$, then $y$ is conjugate to $y^{-1}$ b) If $x$ is conjugate to $x^{-1}$ in a finite group, $G$, and $x \neq x^{-1}$, then the conjugacy class of $x$ has an even number of elements. c) If $G$ has odd order, and $x \in G$ is not the identity element, then $x$ is not conjugate to $x^{-1}$. So far I have: a) $x = g^{-1}x^{-1}g$ for some $g$ and $x = h^{-1}yh$ for some $h$. Therefore $y = hxh^{-1}$ and so $y$ is conjugate to $x$. Now, I presume the theory is that because $x$ is conjugate to its inverse, then $y$ is conjugate to its inverse - but how do I prove this? b) Using part a, each element also has its inverse in the conjugacy class and therefore there is an even number of elements. c) I'm not sure how to prove it, but it would use the idea that because the conjugacy class has an odd number of elements, it cannot consist of element-inverse pairs, so therefore $x$ is not conjugate to $x^{-1}$
From $h^{-1}yh=x=g^{-1}x^{-1}g$ we obtain by inverting $h^{-1}y^{-1}h=g^{-1}xg=g^{-1}(h^{-1}yh)g$, which finally gives $$ y^{-1}=hg^{-1}h^{-1}yhgh^{-1}=(hgh^{-1})^{-1}y(hgh^{-1}). $$ Part c) has been shown already in this MSE question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1769683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier decomposition of solutions of the wave equation with respect to the spatial variable Say I have a wave equation of the form $$\nabla^{2}f(t,\mathbf{x})=\frac{1}{v^{2}}\frac{\partial^{2}f(t,\mathbf{x})}{\partial t^{2}}$$ which is clearly a partial differential equation (PDE) in $\mathbf{x}$ and $t$. Is it valid to consider a solution in which one Fourier decomposes the spatial part of the function $f(t,\mathbf{x})$ and not its temporal part. That is, is it ok to express the solution in the form $$f(t,\mathbf{x})=\int_{-\infty}^{+\infty}\frac{d^{3}k}{(2\pi)^{3}}\tilde{f}(t,\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}$$ And if so, why is it ok to do this? [Is it simply that one Fourier decomposes the solution at a particular fixed instant in time, $t$ and then requires that the solution has this form for all $t$, hence the mode functions $\tilde{f}(t,\mathbf{k})$ must satisfy the PDE $$\frac{\partial^{2}\tilde{f}(t,\mathbf{k})}{\partial t^{2}}+v^{2}\mathbf{k}^{2}\tilde{f}(t,\mathbf{k})=0$$ or is there some other reasoning behind it?]
When considering a function $f(t,\mathbf x)$ on $[0,\infty)\times \mathbb{R}^n$, we can focus on one time slice at a time, fixing $t$ and dealing with a function of $\mathbf x$ only. The Fourier transform can be applied to this slice, since it's just a function on $\mathbb{R}^n$. The result can be denoted $\tilde f(t,\mathbf k)$, and is usually called the Fourier transform with respect to the spatial variable (or sometimes, "partial Fourier transform"). As long as $t$ is fixed, nothing new happens: for example, $$f(t,\mathbf{x})=\int_{\mathbb{R}^n}\frac{d\mathbf k}{(2\pi)^{n}}\tilde{f}(t,\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}\tag{1}$$ is just the inversion formula for the Fourier transform. One eventually wants to understand how the solution evolves in time, so the time derivative has to be taken. Formally speaking, we differentiate (1) under the integral sign, which of course needs justification, as always when this trick is performed. The lecture notes Using the Fourier Transform to Solve PDEs by Joel Feldman present such calculations for the wave equation.
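As a concrete 1-D illustration of this mode-by-mode evolution (a sketch; the grid size, pulse shape, and wave speed below are arbitrary choices), one can evolve each spatial Fourier mode exactly and compare with d'Alembert's solution, using numpy's FFT:

```python
import numpy as np

# f_tt = v^2 f_xx on a periodic grid; with f_t(0,x) = 0 each mode obeys
# f~(t,k) = f~(0,k) * cos(v k t), the solution of the mode ODE above.
L, N, v, t = 40.0, 1024, 2.0, 3.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

f0 = lambda u: np.exp(-u ** 2)                 # arbitrary initial pulse
f_hat = np.fft.fft(f0(x)) * np.cos(v * k * t)  # evolve mode by mode
f_spectral = np.fft.ifft(f_hat).real

f_exact = 0.5 * (f0(x - v * t) + f0(x + v * t))  # d'Alembert's solution
print(np.max(np.abs(f_spectral - f_exact)))      # close to machine precision
```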
{ "language": "en", "url": "https://math.stackexchange.com/questions/1769930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove algebraically that, if $x^2 \leq x$ then $0 \leq x \leq 1$ It's easy to just look at the graphs and see that $0 \leq x \leq 1$ satisfies $x^2 \leq x$, but how do I prove it using only the axioms from inequalities? (I mean: trichotomy and given two positive numbers, their sum and product is also positive).
Start from $x \geq x^2$. For the boundary case $x = x^2$: subtracting $x^2$ from both sides gives $x - x^2 = 0$, and factoring the left side gives $x(1 - x) = 0$. Setting each factor equal to $0$ gives the critical points $x = 0$ and $x = 1$. Now check the intervals between the critical points by testing values in the original inequality: for $x < 0$ it fails (there $x^2 > 0 > x$), for $0 \leq x \leq 1$ it holds, and for $x > 1$ it fails again (there $x^2 > x$). So the solution set is $0 \leq x \leq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Counter example for uniqueness of second order differential equation I have a second order differential equation, \begin{eqnarray} \dfrac{d^2 y}{d x^2} = H\left(x\right) \hspace{0.05ex}y \label{*}\tag{*} \end{eqnarray} where $\,H\left(x\right) = \dfrac{\mathop{\rm sech}\nolimits^2\left(x\right)}{x + \ln\big(2\cosh\left(x\right)\big)}$. A plot of the function $\,H\left(x\right)$ is shown below: I need to find a solution of the equation $\eqref{*}$ for the boundary conditions $$\begin{aligned} y\left(x\right)\bigg\rvert_{-\infty} &= 0, & \left.\dfrac{d\hspace{0.1ex}y\left(x\right)}{d\hspace{0.1ex}x}\right\rvert_{ -\infty} &= 0 \end{aligned} \label{**}\tag{**}$$ An obvious solution of the problem is $\,y=0$. But $\,y = x + \ln\big(2 \cosh\left(x\right)\big)\,$ also satisfies the differential equation $\eqref{*}$ and the boundary conditions $\eqref{**}$. A plot of $y\left(x\right)$ is shown below: As far as I know there cannot be two solutions of the differential equation satisfying the given boundary conditions. What am I missing here? Is the uniqueness theorem not valid if the boundary conditions are applied at $\,\pm\infty$? EDIT Thanks to the comment by Santiago, the apparent contradiction is seen more clearly in this example: the differential equation $y'\left(x\right) = y\left(x\right)$ with boundary condition $\displaystyle\lim_{x \to -\infty}y\left(x\right) = 0$. There are infinitely many solutions to this problem, all of the form $y\left(x\right) = k\exp\left(x\right)$, where $k$ is some constant. Post Edit Is it possible to generalize the observation above that boundary conditions at $\pm \infty$ may not yield a unique solution?
Consider the constant coefficient first order linear dynamical system of dimension $n$ \begin{equation}{\bf{x}}'=A{\bf{x}},\end{equation} where $A$ is an $n\times n$ constant matrix and ${\bf{x}}$ is an $n$-dimensional vector. If $A$ has $k$ eigenvalues with positive real parts, then the system above has a $k$-dimensional space of solutions satisfying $\displaystyle{\lim_{t\rightarrow-\infty}{\bf{x}}=0}$. More generally, we now consider a class of equations which the system (*) in the question is part of. More precisely, we consider systems of the form \begin{equation}{\bf{x}}'=A(t){\bf{x}},\end{equation} where $A$ is $t$-dependent and such that the limit $A^{-\infty}\equiv \displaystyle{\lim_{t\rightarrow -\infty} A(t)}$ exists. Assume also that $A(t)$ approaches $A^{-\infty}$ exponentially fast as $t\rightarrow -\infty$. Then the solutions of the system above behave like the solutions of the constant coefficient system $$ {\bf{x}}'=A^{-\infty}{\bf{x}}, $$ as $t\rightarrow -\infty$, which in particular implies that if $A^{-\infty}$ has $k$ eigenvalues with positive real parts, then the non-constant system above has a $k$-dimensional space of solutions satisfying $\displaystyle{\lim_{t\rightarrow -\infty}{\bf{x}}=0}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
If we have a real line and $X$ is any subset of it, let $Y$ be a set such that $X\subseteq Y \subseteq \bar{X}$ If we have a real line and $X$ is any subset of it, let $Y$ be a set such that $X\subseteq Y \subseteq \bar{X}$. Prove that $\bar{X}=\bar{Y}$ My Attempt We can use the definition and show that $\bar Y$ is contained in $\bar X$. For example, let's say we have an element $z$ in $\bar Y$; then for every $\epsilon$ we have to look for an element $x$ in $X$ such that $$|x-z|\leq \epsilon$$ $z$ is in $\bar Y$ so we can have a $y$ in $Y$ such that $$|z-y|\leq \epsilon/2$$ Since $y$ is in $\bar{X}$, we can find an $x$ in $X$ such that $$|x-y|\leq \epsilon /2$$ Now how do I put this together with the triangle inequality to draw my conclusion?
Another answer, more general (this property is true in every topological space): $\overline{X}$ is the smallest (for the inclusion) closed set that contains $X$. $Y$ is contained in $\overline{X}$, which is closed, so $\overline{Y}\subset \overline{X}$ because $\overline{Y}$ is the smallest closed set that contains $Y$ (hence $\overline{Y}$ is smaller than $\overline{X}$). $X$ is contained in $\overline{Y}$, which is closed (because $X\subset Y \subset\overline{Y}$), so $\overline{X}\subset \overline{Y}$ because $\overline{X}$ is the smallest closed set that contains $X$. And then we have the equality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving no vector potential for gravitation field defined on all of $\mathbb{R}^3 -$ origin Let: $$F=\frac{(x,y,z)}{(x^2+y^2+z^2)^{3/2}}$$ Show that there is no vector potential for $F$ which is defined on all of $\mathbb{R}^3 - \text{origin}$. I can find a vector potential which is not well-behaved on the z-axis quite easily, but I'm not sure how to show that it's impossible to find one for all of $\mathbb{R}^3 -$ origin. My professor suggested I assume a vector potential exists then calculate the following in two different ways: $$\iint_SF\bullet\mathbf{n} dS$$ where $S$ is the unit sphere. I'm not sure how to calculate this integral though, and I don't see how it would help.
Hint: If there were a vector field $G$ defined in the complement of the origin such that $F = \nabla \times G$, then Stokes' theorem would imply $$ \iint_{S} F \cdot n\, dS = 0 $$ because the sphere has empty boundary. On the other hand, you can calculate $$ \iint_{S} F \cdot n\, dS $$ explicitly (without calculus, even!), since on the unit sphere you have $n = (x, y, z)$ and $x^{2} + y^{2} + z^{2} = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the volume of the region bounded by $z=\sqrt{\frac{x^2}{4}+y^2}$ and $x+4z=a$. Cylindrical coordinates. I would like the answer to preferably be done using either a surface integral, or an integral with substitutions. But anything other than this is alright, if nothing else exists. I have to find the volume of the region bounded by $z=\sqrt{\frac{x^2}{4}+y^2}$ and $x+4z=a$. So, here we have a cone and a plane "cutting" it. I definitely must do this using some sort of coordinate substitution. When doing a problem in class, that is, the area to be found being bounded in between $(z-1)^2=x^2+y^2$ and $z=0$, the substitution was made (which would be logical here to do as well): $$x=r\cos\varphi \\ y=r \sin\varphi \\ z=z$$ and I also understand the boundaries being $0\leq r\leq 1,0\leq\varphi\leq2\pi,0\leq z\leq 1-r.$ But in the problem I gave, that is a lot more difficult to do; I know that the boundaries for $\varphi$ should be the same, but with $z$ and $r$ I find it impossible. Should I find these conditional extreme points on the cone with the plane equation or am I not seeing something quite obvious here?
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ With $\ds{\mathcal{V} = \braces{\pars{x,y,z}\ \left.\vphantom{A^A}\right\vert\ \root{{x^{2} \over 4} + y^{2}}\ <\ z\ <\ {a - x \over 4}}}$: \begin{align} \color{#f00}{\iiint_{\mathcal{V}}\dd x\,\dd y\,\dd z} & = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Theta\pars{z - \root{{x^{2} \over 4} + y^{2}}}\Theta\pars{{a - x \over 4} - z} \,\dd x\,\dd y\,\dd z \end{align} Where $\Theta$ is the Heaviside Step function. Then, \begin{align} \color{#f00}{\iiint_{\mathcal{V}}\dd x\,\dd y\,\dd z} & = 2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Theta\pars{z - \root{x^{2} + y^{2}}}\Theta\pars{{a - 2x \over 4} - z} \,\dd x\,\dd y\,\dd z \\[3mm] & = 2\int_{-\infty}^{\infty}\int_{0}^{2\pi}\int_{0}^{\infty} \Theta\pars{z - \rho}\Theta\pars{{a - 2\rho\cos\pars{\phi} \over 4} - z} \rho\,\dd\rho\,\dd\phi\,\dd z \\[3mm] & = 2\int_{0}^{2\pi}\int_{0}^{\infty} \bracks{{a - 2\rho\cos\pars{\phi} \over 4} - \rho} \Theta\pars{{a - 2\rho\cos\pars{\phi} \over 4} - \rho}\rho\,\dd\rho\,\dd\phi \\[3mm] & = \color{#88f}{2\int_{0}^{2\pi}\int_{0}^{\infty} \braces{{1 \over 4}\,a\rho - \half\bracks{2 + \cos\pars{\phi}}\rho^{2}} \Theta\pars{{a/2 \over 2 + \cos\pars{\phi}} - \rho}\,\dd\rho\,\dd\phi} \\[3mm] & = 2\int_{0}^{2\pi} \braces{{1 \over 8}\,a\,\bracks{a/2 \over 2 + \cos\pars{\phi}}^{2} - \half\bracks{2 + \cos\pars{\phi}} {1 \over 3}\bracks{{a/2 \over 2 + \cos\pars{\phi}}}^{3}}\,\dd\phi \\[3mm] & = {1 \over 48}\,a^{3}\ \overbrace{\int_{0}^{2\pi}{\dd\phi \over \bracks{2 + \cos\pars{\phi}}^{\,2}}} ^{\ds{{4\root{3} \over 9}\,\pi}}\ =\ \color{#f00}{{\root{3} \over 108}\,\pi a^{3}} \end{align} Note that the $\color{#88f}{\mbox{"blue integration"}}$ requires $a > 0$ as we can see from the $\Theta$ argument. Otherwise it vanishes out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Bijection from $\mathbb {Z}^3$ to $\mathbb {Z}$ I am not a mathematician. Let $\mathbb {Z}$ be the set of integers. I need to know whether there exists a bijection from $\mathbb {Z}^3$ to $\mathbb {Z}$, and what might be a possible mapping? I know that a bijection exists from $\mathbb {R}^3$ to $\mathbb {R}$.
You have an injective map $f:\mathbb Z\rightarrow \mathbb N$ defined by $f(n)=2^n$ for $n>0$ and $f(n)=3^{-n}$ for $n\leq 0$; this induces an injective map $g:\mathbb Z^3\rightarrow \mathbb N$ defined by $g(a,b,c)=2^{f(a)}3^{f(b)}5^{f(c)}$. The image of $g$, being an infinite subset of $\mathbb N$, is in bijection with $\mathbb N$; now take a bijection between $\mathbb N$ and $\mathbb Z$ and compose with $g$.
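For what it's worth, here is a direct transcription of these maps into Python (the names f and g follow the answer; Python's arbitrary-precision integers make the large values harmless), with a spot check of injectivity on a small box:

```python
def f(n):  # injective map from Z into the positive integers
    return 2 ** n if n > 0 else 3 ** (-n)

def g(a, b, c):  # injective Z^3 -> positive integers, by unique factorization
    return 2 ** f(a) * 3 ** f(b) * 5 ** f(c)

R = range(-4, 5)
images = {g(a, b, c) for a in R for b in R for c in R}
assert len(images) == len(R) ** 3  # no collisions on the tested box
print("g is injective on", len(R) ** 3, "test points")
```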
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Solving Trig Equation with Unknown Inside and Outside of Function In my physics course, we're covering physical pendulums, and we are to essentially analyze the range of angles within the interval $\left[0, \frac{\pi}{6}\right]$ to show that $\sin\theta \approx \theta$. (I completed my analysis using Desmos.) After creating our analyses, we are to estimate an angle, in radians, which has an error margin of approximately one percent. The error estimation function I have is $$ E_1(x) = \frac{x - \sin x}{\sin x} \cdot 100, $$ where values of $x$ are in radians. I then rewrote the RHS as $$ E_1(x) = (x\csc x - 1) \cdot 100. $$ So if the error threshold is one percent, I let $E_1(x) = 1$. And so I have been trying to figure out how to solve $$ x \csc x - 1.01 = 0. $$ The best I could do was to graph the function on Desmos to find the roots. However, I was hoping to get some pointers in the right direction as to how to solve this equation algebraically. While searching the Math Stack Exchange, I happened upon a related question, but the best I could gather is that the approach depends on the type of equation you have. Is there a purely algebraic approach to solve this equation? Any advice and/or pointers to further reading would be appreciated.
Likely, your instructor wants you to find the approximate value where the error is 1%. Your equation is transcendental and won't be solved algebraically. You could use a numerical method such as Newton's Method to approximate the roots to desired accuracy. That presupposes your familiarity with calculus (or at least with differentiation). Another alternative - which is what you did - is to solve it graphically, plotting $y_1=1$ and $y_2=\frac{x-\sin x}{\sin x}$ and assessing where they intersect visually. From your visual judgment, you can test (i.e. plug in) nearby values and, via trial and error, estimate where the error converges to 1%.
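For example, a bare-bones Newton iteration for $g(x) = x\csc x - 1.01$ looks like the sketch below (the starting guess of 0.3 is an assumption based on where the plot suggests the root lies):

```python
from math import sin, cos

g = lambda x: x / sin(x) - 1.01                       # x*csc(x) - 1.01
dg = lambda x: (sin(x) - x * cos(x)) / sin(x) ** 2    # derivative of x/sin(x)

x = 0.3  # initial guess near the expected root, an assumption
for _ in range(10):
    x -= g(x) / dg(x)
print(x)  # about 0.2441 rad, where the small-angle error reaches ~1%
```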
{ "language": "en", "url": "https://math.stackexchange.com/questions/1770780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How is the null space related to singular value decomposition? It is said that a matrix's null space can be derived from QR or SVD. I tried an example: $$A= \begin{bmatrix} 1&3\\ 1&2\\ 1&-1\\ 2&1\\ \end{bmatrix} $$ I'm convinced that QR (more precisely, the last two columns of Q) gives the null space: $$Q= \begin{bmatrix} -0.37796& -0.68252& -0.17643& -0.60015\\ -0.37796& -0.36401& 0.73034& 0.43731\\ -0.37796& 0.59152& 0.43629& -0.56293\\ -0.75593& 0.22751& -0.4951& 0.36288\\ \end{bmatrix} $$ However, neither $U$ nor $V$ produced by SVD ($A=U\Sigma V^*$) make $A$ zero (I tested with 3 libraries: JAMA, EJML, and Commons): $$ U= \begin{bmatrix} 0.73039& 0.27429\\ 0.52378& 0.03187\\ -0.09603& -0.69536\\ 0.42775& -0.66349\\ \end{bmatrix} $$ $$ \Sigma= \begin{bmatrix} 4.26745& 0\\ 0& 1.94651\\ \end{bmatrix} $$ $$ V= \begin{bmatrix} 0.47186& -0.88167\\ 0.88167& 0.47186\\ \end{bmatrix} $$ This is contradiction to Using the SVD, if $A=U\Sigma V^*$, then columns of $V^*$ corresponding to small singular values (i.e., small diagonal entries of $\Sigma$ ) make up the a basis for the null space.
For an $m \times n$ matrix, where $m \ge n$, the "full" SVD is given by $$ A = U\Sigma V^t $$ where $U$ is an $m \times m$ matrix, $\Sigma$ is an $m \times n$ matrix and $V$ is an $n \times n$ matrix. You have calculated the "economical" version of the SVD, where $U$ is $m \times n$ and $\Sigma$ is $n \times n$. Thus, you have missed the information about the left null space given by the "full" matrix $U$. The full SVD is given by $$ U = \left[ \begin{array}{cc} -0.7304 & -0.2743 & -0.1764 & -0.6001 \\ -0.5238 & -0.0319 & 0.7303 & 0.4373\\ 0.0960 & 0.6954 & 0.4363 & -0.5629 \\ -0.4277 & 0.6635 & -0.4951 & 0.3629 \end{array} \right], $$ $$ \Sigma = \left[ \begin{array}{cc} 4.2674 & 0 \\ 0 & 1.9465 \\ 0 & 0 \\ 0 & 0 \end{array} \right], $$ $$ V = \left[ \begin{array}{cc} -0.4719 & 0.8817 \\ -0.8817 & -0.4719 \end{array} \right]. $$ If you need the null spaces then you should use the "full" SVD. However, most problems do not require the "full" SVD.
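In numpy, for instance, the distinction is the full_matrices flag (a sketch using the matrix from the question):

```python
import numpy as np

A = np.array([[1., 3.], [1., 2.], [1., -1.], [2., 1.]])

U_full, s, Vt = np.linalg.svd(A, full_matrices=True)     # "full": U is 4x4
U_econ, s2, Vt2 = np.linalg.svd(A, full_matrices=False)  # "economy": U is 4x2
print(U_full.shape, U_econ.shape)  # (4, 4) (4, 2)

# The last two columns of the full U span the left null space of A:
print(np.allclose(U_full[:, 2:].T @ A, 0))  # True
# A has full column rank here, so no column of V corresponds to a zero
# singular value and the (right) null space of A is trivial.
```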
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 1, "answer_id": 0 }
What does the notation $\int{R\left( \cos{x}, \sin{x} \right)\mathrm{d}x} $ mean? Sometimes I find this notation and I get confused: $$\int{R\left( \cos{x}, \sin{x} \right)\mathrm{d}x} $$ Does it mean a rational function or taking rational operations between $\cos{x}$ and $\sin{x}$? Can you explain please? Update: I think you did not understand the question well. Here is an example (maybe it is a lemma or a theorem): All the integrals of the form $\int{R\left( \cos{x}, \sin{x} \right)\mathrm{d}x} $ can be evaluated using the substitution $u=\tan{\dfrac{x}{2}} $. I think that $R$ here does not stand for a rational function but for taking rational operations (addition, subtraction, multiplication, division) between $\cos{x} $ and $\sin{x}$. Update: I did not notice that $R$ is a rational function of two variables, and that means exactly that we are taking rational operations.
Here $R$ is a function of two variables $s$ and $t$. For instance, if $$R(s,t) = \frac{s}{1+t}$$ then $$R(\cos x , \sin x) = \frac{\cos x}{1 + \sin x}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
bijective function from [a,b] to [c,d] I'm trying to think about a bijective function from the closed interval $[a,b]$ to the closed interval $[c,d]$, where $a,b,c,d \in \mathbb{R}$ and $a < b,\;c < d$. Is there such a function?
The idea is to construct a line whose domain is $[a, b]$ and whose range is $[c, d]$. Hence, two points on the line will be $$ p_0 = (a, c) \\ p_1 = (b, d) $$ since we want $a \to c$, $b \to d$, and a straight line between them. The slope of such a line will be $$ m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{d - c}{b - a}$$ Using the point-slope form of a line, $$ y - y_1 = m \left ( x - x_1 \right) \\ y - c = \frac{d - c}{b - a} \left (x - a\right) $$ is the required equation; explicitly, $y = c + \frac{d - c}{b - a}(x - a)$, which is a bijection from $[a,b]$ to $[c,d]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
if $p\mid a$ and $p\mid b$ then $p\mid \gcd(a,b)$ I would like to prove the following property : $$\forall (p,a,b)\in\mathbb{Z}^{3} \quad p\mid a \mbox{ and } p\mid b \implies p\mid \gcd(a,b)$$ Knowing that : Definition Given two natural numbers $a$ and $b$, not both zero, their greatest common divisor is the largest divisor of $a$ and $b$. * *If $\operatorname{Div}(a)$ denotes the set of divisors of $a$, the greatest common divisor of $a$ and $b$ is $\gcd(a,b)=\max(\operatorname{Div}(a)\cap\operatorname{Div}(b))$ *$$d=\operatorname{gcd}(a,b)\iff \begin{cases}d\in \operatorname{Div}(a)\cap\operatorname{Div}(b) & \\ & \\ \forall x \in \operatorname{Div}(a)\cap\operatorname{Div}(b): x\leq d \end{cases}$$ *$$\forall (a,b) \in \mathbb{N}^{2}\quad a\mid b \iff Div(a) \subset Div(b)$$ *$$\forall x\in \mathbb{Z}\quad \operatorname{Div}(x)=\operatorname{Div}(-x) $$ *If $a,b\in\mathbb{Z}$, then $\gcd(a,b)=\gcd(|a|,|b|)$, adding $\gcd(0,0)=0$ Indeed, Let $(p,a,b)\in\mathbb{Z}^{3} $ such that $p\mid a$ and $p\mid b$ then : $p\mid a \iff \operatorname{Div}(p)\subset \operatorname{Div}(a)$ and $p\mid b \iff \operatorname{Div}(p)\subset \operatorname{Div}(b)$ then $\operatorname{Div}(p)\subset \left( \operatorname{Div}(a)\cap \operatorname{Div}(b)\right) \iff p\mid \gcd(a,b)$ Am I right?
It depends on what definition of greatest common divisor you use. You probably use the second one. Definition 1 Given natural numbers $a$ and $b$, the natural number $d$ is their greatest common divisor if * *$d\mid a$ and $d\mid b$ *for all $c$, if $c\mid a$ and $c\mid b$, then $c\mid d$ Theorem. The greatest common divisor exists and is unique. Proof. Euclidean algorithm. Definition 2 Given two natural numbers $a$ and $b$, not both zero, their greatest common divisor is the largest divisor of $a$ and $b$. If $\operatorname{Div}(a)$ denotes the set of divisors of $a$, the greatest common divisor of $a$ and $b$ is $\gcd(a,b)=\max(\operatorname{Div}(a)\cap\operatorname{Div}(b))$ Extension to $\mathbb{Z}$, for both definitions If $a,b\in\mathbb{Z}$, then $\gcd(a,b)=\gcd(|a|,|b|)$, adding $\gcd(0,0)=0$ for definition 2. Proof of the statement using definition 1 With this definition, the statement is obvious. Proof of the statement using definition 2 Let $p\mid a$ and $p\mid b$. We need to show that $p\mid\gcd(a,b)$. It is not restrictive to assume $p,a,b>0$. It is true that $\operatorname{Div}(p)\subseteq\operatorname{Div}(a)\cap\operatorname{Div}(b)$, but this just implies that $p\le\gcd(a,b)$, not that it is a divisor thereof. The proof can be accomplished by using the fact that $\gcd(a,b)=ax+by$ for some integers $x$ and $y$ (Bézout's theorem). With this it is easy: $a=pr$, $b=ps$, so $$ \gcd(a,b)=ax+by=prx+psy=p(rx+sy) $$ How to prove Bézout's theorem is beyond the scope of this answer.
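Since the proof leans on Bézout's theorem, it may help to see the coefficients computed explicitly; a sketch of the extended Euclidean algorithm (the inputs 84 and 30 are an arbitrary example):

```python
def egcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    d, x, y = egcd(b, a % b)
    return (d, y, x - (a // b) * y)

a, b = 84, 30
d, x, y = egcd(a, b)
print(d, x, y, a * x + b * y == d)  # 6 -1 3 True

# Any common divisor p of a and b then divides d = a*x + b*y:
for p in (2, 3, 6):
    assert a % p == 0 and b % p == 0 and d % p == 0
```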
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Hyperbola equation proof I've been trying to prove the canonical form of the hyperbola by myself. $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1 $ I started from the statement that "A hyperbola may be defined as the curve of intersection between a right circular conical surface and a plane that cuts through both halves of the cone." (Wikipedia) The conical surface is constructed around the vertical $y$ axis, with its apex at $(0,0,0)$. I assumed, without loss of generality, that the plane is parallel to the $x$ axis, so, for any point on that plane, we have $z = ay+b$, with $a, b$ nonzero. Also, any point lying on the conical surface satisfies $y^2 = c^2(x^2+z^2)$, with $c$ nonzero. Combining the 2 equations, we get $y^2 = c^2x^2+c^2a^2y^2+2c^2aby+c^2b^2$ We can now rewrite it $y^2(c^2a^2-1)+x^2c^2+2c^2aby=-c^2b^2$ $\frac{y^2(c^2a^2-1)}{-c^2b^2} + \frac{x^2c^2}{-c^2b^2} + \frac{2c^2aby}{-c^2b^2} = 1$ So now we have the sum of $x^2$, $y^2$, and $y$, each multiplied by some coefficient, equal to 1. This does not resemble the canonical form of the hyperbola. The presence of the $y$ term is the most notable difference. I know $x$ and $y$ in my equations are in the 3D space, and the canonical form takes them in the 2D plane space, but the 2D equations should only be a scaled version of the 3D ones, because the plane is parallel to the x-axis, isn't it? $x^2 p_1 + y^2 p_2 + y\, p_3 = 1$ I am sure I have overlooked something, but I can't figure out what. I appreciate any assistance. Thank you.
If $a \neq 0,$ the cutting plane is parallel to the $x$ axis but not parallel to the $y$ axis. Starting at the point $(x,y,z) = (0,0,b)$ on this plane, and traveling along the line of intersection of the cutting plane and the $y,z$ plane (that is, the line that simultaneously satisfies $z = ay+b$ and $x = 0$), we can move toward the point where this line intersects the $y$ axis or we can move in the opposite direction. Assuming that you actually have a hyperbola, the intersection with the cone is closer in one direction than the other, that is, one intersection is at $(0,y_1,ay_1+b)$ and the other is at $(0,y_2,ay_2+b)$ with $y_1 \neq -y_2.$ But these two points are the two vertices of the hyperbola, and the center of the hyperbola is halfway between them, at $y_0 = \frac12(y_1 + y_2) \neq 0.$ So the equation that you end up with will be equivalent to something in the form $$ \frac{(y - y_0)^2}{B^2} - \frac{x^2}{A^2} = 1,$$ which is what you get when you translate a hyperbola with the equation $\frac{y^2}{B^2} - \frac{x^2}{A^2} = 1$ so that its center moves from $(0,0)$ to $(0,y_0).$ The extra term in $y$ (without the square) comes from the expansion of $(y - y_0)^2.$ The other thing that may be confusing is that you have set up your cone and plane in such a way as to produce a hyperbola whose axis is in the $y,z$ plane. If you eliminate the $z$ coordinate, the axis of the hyperbola becomes the $y$ axis. But the equation $\frac{x^2}{A^2} - \frac{y^2}{B^2} = 1$ is the equation of a hyperbola whose axis is the $x$ axis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$T$ can be $\infty$ with positive probability From Williams' Probability with Martingales: How exactly do we know $T$ can be $\infty$ with positive probability, or $$P(T = \infty) > 0 \text{ ?}$$ I'm guessing that means there is a positive probability that for some $c$, the sum, $X_1 + \cdots + X_r$, will never exceed $c$ no matter how large $r$ is. If the partial sums are bounded, then no matter how many terms in the sequence we add, they won't exceed some $c$, so $$\{r \mid |X_1 + X_2 + \cdots + X_r| > c\} = \emptyset \text{ ?}$$ Why can't we say that $P(T = \infty) = 1$?
Indicating the dependence of $T$ on $c$ explicitly, you have $$\{T_c=\infty\}=\{\sup_n|M_n|\le c\}$$ Therefore, $$\bigcup_{c>0,c\in\Bbb Q}\{T_c=\infty\}=\{\sup_n|M_n|<\infty\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Diophantine Equation $x^2+y^2+z^2=c$ $x^2+y^2+z^2=c$ Find the smallest integer $c$ that gives this equation one solution in natural numbers. Find the smallest integer $c$ that gives this equation two distinct solutions in natural numbers. Find the smallest integer $c$ that gives this equation three distinct solutions in natural numbers. Clearly the first answer is c = 3. I know how to do linear diophantine equations, but I am stumped on this one. By distinct solutions, I am looking for different threesomes (unordered triples) Can you help?
Since the equation is symmetric: if $(x,y,z)$ is a solution then every permutation of $(x,y,z)$ is a solution too. So it's impossible for this equation to have exactly 2 solutions (because if, for example, $x\neq y$, then we have at least three distinct permutations). The smallest $c$ for three distinct solutions is $6$: $(2,1,1), (1,2,1), (1,1,2)$.
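A brute-force check, counting ordered triples of positive integers as this answer does (a quick Python sketch):

```python
from itertools import product

def reps(c):  # ordered triples (x, y, z) of positive integers, x^2+y^2+z^2 = c
    r = int(c ** 0.5) + 1
    return [t for t in product(range(1, r), repeat=3)
            if t[0] ** 2 + t[1] ** 2 + t[2] ** 2 == c]

for c in range(1, 30):
    if len(reps(c)) in (1, 2, 3):
        print(c, reps(c))
# c = 3 is the smallest with exactly one solution: (1,1,1);
# c = 6 the smallest with three: the permutations of (1,1,2);
# no c produces exactly two, matching the symmetry argument.
```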
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Projectivised tangent bundle of 2 sphere I'm trying to understand how rotations act on the "projectivised" tangent bundle of the sphere. Let $S^2$ be the two-sphere and denote by $P(TS^2)$ the tangent bundle where each tangent space $T_xS^2$ is taken to be a projective vector space. I'm trying to show that given any $x$ and $y$ in $P(TS^2)$ there is a rotation of the sphere that maps $x$ to $y$. If we use coordinates $(\theta,\phi)$ on the sphere then the bundle $P(TS^2)$ is locally $\{(\theta, \phi, [u:v])\}$ where $(u,v)$ are the projective coordinates corresponding to the basis $\frac{d}{d\theta},\frac{d}{d\phi}$. We can write $x = (\theta_1,\phi_1, [u_1:v_1])$ and $y = (\theta_2,\phi_2, [u_2:v_2])$. Now it's clear that there is a rotation that takes the "manifold part" of $x$ and $y$ onto each other. Now intuitively I'd like to rotate about that point until the "tangent space parts" also match up. I'm not sure if this is a good approach and I'm struggling to see how a rotation acts on the projective vector spaces. I'd also be interested in how one geometrically visualises such a projectivised tangent bundle - is there even a natural geometric interpretation in this case?
You can visualize this action very explicitly: a tangent vector to a point $p \in S^2$ is literally a little vector tangent to $S^2$ inside of $\mathbb{R}^3$, and rotation acts in the obvious way. The rotations around the axis through $p$ act transitively on unit tangent vectors at $p$ (and so act transitively on the projectivized tangent space at $p$), and rotations also act transitively on $S^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If it has an emergency locator, what is the probability that it will be discovered? Okay, so here's the question: Seventy percent of the light aircraft that disappear while in flight in a certain country are subsequently discovered. Of the aircraft that are discovered, 60% have an emergency locator, whereas 90% of the aircraft not discovered do not have such a locator. Suppose that a light aircraft has disappeared. If it has an emergency locator, what is the probability that it will be discovered? And here's my answer. The answer to this question was, however, given as 93%. I don't understand how they got that answer and I was pretty confident in my solution. Can someone either tell me the answer given in the text is incorrect or what's wrong with my solution? Thanks so much!
$$\begin{align} P(D \mid E) &= \frac{P(D \cap E)}{P(E)} \\ &= \frac{P(D \cap E)}{P\bigl((E \cap D) \cup (E \cap \overline{D})\bigr)} \\ &= \frac{P(D \cap E)}{P(E \cap D) + P(E \cap \overline{D})} \\ &= \frac{P(D) \cdot P(E \mid D)}{P(D) \cdot P(E \mid D) + P(\overline{D}) \cdot P(E \mid \overline{D})} \\ &= \frac{0.70 \cdot 0.60}{0.70 \cdot 0.60 + 0.30 \cdot 0.10} \\ &\approx 0.93 \end{align}$$
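The same computation in a few lines of Python, for anyone who wants to check the arithmetic:

```python
p_d = 0.70          # P(discovered)
p_e_d = 0.60        # P(locator | discovered)
p_e_nd = 1 - 0.90   # P(locator | not discovered) = 0.10

posterior = p_d * p_e_d / (p_d * p_e_d + (1 - p_d) * p_e_nd)
print(posterior)    # 0.9333..., i.e. about 93%
```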
{ "language": "en", "url": "https://math.stackexchange.com/questions/1771920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Still not clear why longest path problem is NP-hard but shortest path is not? I've heard/read many times that the shortest path problem is in P, but the longest path problem is NP-hard. But I have a problem with this: we say the longest path problem is NP-hard because of graphs with positive cycles. But if you think about graphs with one or more negative cycles, we don't have any polynomial time algorithm for the shortest simple path either. In fact, one can be reduced to the other. To summarize, we can find a shortest/longest simple path only if there are no negative/positive cycles. Also, we can detect negative/positive cycles with Bellman-Ford in both cases. So under identical criteria, both problems should be considered NP-hard.
The longest path problem is commonly understood as follows: given a graph, find the longest simple path. Simple means that no vertex is visited more than once. Only in graphs with cycles can a vertex be visited more than once. The shortest path problem, however, is commonly defined for simple paths in acyclic graphs. The hardness of finding longest simple paths in graphs (possibly containing cycles) is easy to prove by reduction from the Hamiltonian path problem, i.e., finding a path that visits all the vertices. The reduction is easy: just assign length 1 to all edges. Then a Hamiltonian path exists iff the longest path has length n-1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is the domain of these two functions different? I tried graphing the functions $y = x^{.42}$ and $y = (x^{42})^{\frac{1}{100}}$. From my understanding of the laws of exponentiation these two expressions should be equivalent. What I found out after graphing these two functions was that they were equivalent for all non negative real numbers. The difference between these two functions was that the domain of $y = (x^{42})^{\frac{1}{100}}$ is $( -\infty, \infty)$ whereas the domain of $y = x^{.42}$ is $[0, \infty)$. I was wondering whether someone could explain why the domain of these two functions is different?
For $b > 0$, $b^{n/m}=\sqrt [m]{b^n} $ is well defined, whereas the same definition for $b < 0$ is not. If $b < 0$, for example, then $b^{3/2} = \sqrt {b^3} $ is not defined, as $b^3$ is negative. But $3/2 = 6/4$ makes $b^6$ positive, so a 4th root is possible. So note: $.42=42/100 =21/50$, but $42/100$ is not in lowest terms. This matters, as $b^{21}$ will be negative if $b$ is, but $b^{42} $ will be positive. So for non-negative $b$, $b^{n/m} = b^{2n/2m}$, but for negative $b$ that no longer holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Can the sum be simplified? $\sum_\limits{x=y}^{\infty} {x \choose y}\left(\frac{1}{3}\right)^{x+1}$ Let: $$f(y) = \sum_{x=y}^{\infty} {x \choose y} \left(\frac{1}{3}\right)^{x+1}$$ Can this be simplified somehow? Is it a standard probability distribution? I can only get as far as: $$f(y) = \frac{1}{3} \sum_{x=y}^{\infty} \frac{x!}{y!(x-y)!} \left(\frac{1}{3}\right)^{x}$$ This arose in the context of a probability problem in which X is a geometrically distributed random variable ($p=1/2$) and $Y$ is a binomially distributed random variable ($p=1/2$, $n=x$)
$$f(y)=\sum_{x=y}^\infty \binom{x}{y}\left(\tfrac 13 \right)^{x+1} = \left(\tfrac 12\right)^{y+1}$$ This arose in the context of a probability problem in which X is a geometrically distributed random variable (p=1/3) and Y is a binomially distributed random variable (p=1/2, n=x) In effect you have a sequence of trials each with three equally probable outcomes: head, tail, end, and $Y$ is the count of heads before the (first) end.   $Y$ is also the count of heads before the end in all trials which don't show tails.   In any trial that doesn't show tails, there is an equal probability of instead showing heads or an end. Thus $Y\sim\mathcal{Geom}_0(1/2)$ and hence $f_Y(y) = (\tfrac 1 2)^{y+1}$ (Didn't I answer this earlier?)
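A quick numerical check of the closed form (a Python sketch; truncating the series at 200 terms is far more than enough, since the tail decays geometrically):

```python
from fractions import Fraction
from math import comb

def f(y, terms=200):  # partial sum of sum_{x>=y} C(x,y) (1/3)^(x+1)
    return sum(comb(x, y) * Fraction(1, 3) ** (x + 1)
               for x in range(y, y + terms))

for y in range(6):
    print(y, float(f(y)), 0.5 ** (y + 1))  # the two columns agree
```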
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
DTFT of the unit step function If I apply the DTFT to the unit step function, then I get the following: $$\mathrm{DTFT}\{u[n]\}=\sum_{n=-\infty}^{\infty}u[n]e^{-j\omega n}=\sum_{n=0}^{\infty}e^{-j\omega n} = \frac{1}{1-e^{-j\omega}}.$$ Now I have the problem that if $|e^{-j\omega}| = 1$, the sum diverges. To handle this case, knowing that $e^{-j\omega}$ is $2\pi$-periodic, I get $$\frac{1}{1-e^{-j\omega}}+\underbrace{e^{-j0}}_1 \sum_{k=-\infty}^{\infty}\delta(\omega+2\pi k).$$ In books I found that the DTFT of the unit step is $$\frac{1}{1-e^{-j\omega}}+\pi \sum_{k=-\infty}^{\infty}\delta(\omega+2\pi k).$$ Can anyone explain to me where the $\pi$ comes from in the DTFT of the unit step?
I think this approach is cleaner than the others. Write $u[n]=f[n] + g[n]$ where: $f[n]= {1\over2}$ for $-\infty<n<\infty $ and $g[n]=\left\{ \begin{array}{c} {1\over2} \text{ for } n\ge 0 \\ {-1\over2} \text{ for } n<0 \end{array} \right.$ Then $ \delta [n] = g[n] - g[n-1]$. We know the DTFT of $\delta[n]$ is $1$, and the DTFT of $g[n] - g[n-1]$ is $G(e^{j\omega})-e^{-j\omega}G(e^{j\omega})$, so: $1=G(e^{j\omega})-e^{-j\omega}G(e^{j\omega})$, therefore $G(e^{j\omega})={1\over 1-e^{-j\omega}}$. We also know that the DTFT of $f[n]$ is $F(e^{j\omega})=\pi\sum_{k=-\infty}^\infty\delta(\omega -2\pi k)$. Finally: $u[n] = f[n]+g[n] \to U(e^{j\omega})=F(e^{j\omega})+G(e^{j\omega})$, i.e. $U(e^{j\omega})={1\over 1-e^{-j\omega}}+\pi\sum_{k=-\infty}^\infty\delta(\omega -2\pi k)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $\left| \int_a^b f(t) dt \right| \leq \int_a^b \left| f(t) \right| dt$ I've been given a proof that shows the following: If $f:[a,b]\to \mathbb C$ is a continuous function and $f(t)=u(t)+iv(t)$ then $$\left| \int_a^b f(t) dt \right| \leq \int_a^b \left| f(t) \right| dt$$ The proof begins by letting $\theta$ be the principle argument of the complex number $\int_a^b f(t)dt$ and there is one step in the proof I don't understand; can anyone explain to me why $$\int_a^b e^{-i\theta}f(t) dt=\Re\bigg(\int_a^be^{-i\theta}f(t)dt\bigg)$$ Would this not imply that the imaginary part of the left hand side is equal to $0$? I'm not sure why the LHS and RHS would be equal here.
If $A = \int_a^b f(t)dt$, notice that, since $\theta$ is the principal argument of $A$, $$A = |A| e^{i \theta}$$ Which gives: $$|A| = Ae^{-i\theta} = \left(\int_a^b f(t)dt \right) e^{-i\theta} = \int_a^b e^{-i\theta} f(t)dt$$ Hence $\int_a^b e^{-i\theta} f(t) dt = |A| \in \Bbb R$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\bigcup X$ finite implies $\mathcal P(X)$ is finite. Can anyone help with this past paper question from a Set Theory exam? Prove that, for all sets $X$, $\bigcup X$ finite implies $\mathcal P(X)$ finite. I am using the Kuratowski definition of finiteness, i.e., $A$ is finite if every Kuratowski inductive set for $A$ contains $A$. Thanks in advance!
The key facts are: * *If $B$ is finite and $A\subseteq B$, then $A$ is finite; $\quad(1)$ *If $A$ is finite, then $\mathcal{P}A$ is finite. $\quad(2)$ Notice that $A\subseteq\mathcal{P}(\bigcup A)$. If this is not immediately obvious: fix $a\in A$, then every $\gamma\in a$ must also be in $\bigcup A$ by definition of the union; it is then clear that $$a=\{\gamma\in\bigcup A:\gamma\in a\}\in\mathcal{P}(\bigcup A).$$ Hence, by $(1)$, if $\mathcal{P}(\bigcup A)$ is finite, $A$ is finite.$\quad(3)$ Putting these facts together: $$\bigcup X\text{ finite}\quad\stackrel{(2)}{\Rightarrow}\quad\mathcal{P}(\bigcup X)\text{ finite}\quad\stackrel{(3)}{\Rightarrow}\quad X\text{ finite}\quad\stackrel{(2)}{\Rightarrow}\quad \mathcal{P}X\text{ finite}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to write the real projective plane as a pushout of a disk and the mobius strip? I heard in topology class that the real projective plane is obtained by gluing a disk along the boundary of the mobius strip. I was wondering - how can I write this as a pushout? Also, how can I write a mobius strip itself as a pushout?
We have two copies of $S^1$ here: the $S^1$ that is the boundary of $D^2$, and the $S^1$ that forms the boundary of the Möbius band. $$\require{AMScd} \begin{CD}S^1 @>>> D^2 \\ @VVV @VVV \\ M @>>> \mathbb{RP^2}\end{CD}$$ Actually showing that these are the same might take a little more work, depending on how familiar you are with manipulating these spaces! Perhaps the easiest way to see it is to do the gluing and then shrink $M$ onto the middle of the band - after you've done this you see you've identified antipodal points on the boundary of $D^2$, which is $\mathbb{RP}^2$ as desired. I don't know how you'd write a Möbius strip as a pushout - I can't see how you can express it as gluing two separate topological spaces together, which is what a pushout does. (Normally you obtain the Möbius strip as a quotient space of the unit square $I^2$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equations of Custom Curves How would one find an equation or a function to describe a custom curve in a coordinate system? For example, when someone presents you with a graph of a repeating curve, how would one find an equation to describe it? Here is my sketch of the example: What would be the equation to describe the curve that looks something like the repeating curve on my sketch, which follows through a given line in a coordinate system? I guess we would need to have parameters in our equation to determine the width and height of the loops, and the length of the curved lines connecting the loops. All in all, I wouldn't know where to start, so any hints you can provide are helpful.
There's no general process for finding a parametrisation of a given curve. It is a question of having some experience and then trying things until you find something that looks good. If we look at one copy of your curve, we can see that $y$ increases, then decreases by about double the increase, and then increases back to the starting level. That sounds like how $\sin t$ behaves. If we then look at $x$, we can see that it changes direction four times, so if we want to use $\sin$/$\cos$ we'll need a factor $2$ on the period, and as it has to end in a different position than where it started we'll have to add something to $\sin$/$\cos$; a good starting point is just to take the parameter. That leads to the guess that $(t+\sin(2t), \sin(t))$ might look like the curve. If you try to draw that (Wolfram Alpha is good for that), you get a promising result that can be improved; I would start by trying a lower factor on the $t$ in the equation for $x$, but I'll leave that to you, so you can gain some experience in what changes have what effect. Edit: Regarding the added question in a comment: One way of making the curve shrink (or grow) would be to multiply the equation for $y$ by something. For high values of $x$, $\frac{1}{x}$ has a nice slow decrease, so a factor like $\frac{1}{t+N}$ sounds like it might have the desired effect. Again, I recommend you try it for yourself.
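If you want to experiment locally, a minimal plotting sketch (numpy/matplotlib; the parameter range and the shrinking constant in the comment are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 6 * np.pi, 2000)
x = t + np.sin(2 * t)        # try factors smaller than 2 to reshape the loops
y = np.sin(t)
# y = np.sin(t) / (t + 5)    # shrinking variant suggested in the edit

plt.plot(x, y)
plt.gca().set_aspect("equal")
plt.show()
```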
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Third Order Differential Equations I am having trouble solving the third order differential equation $y'''+y'=0$ It was given to me in a quiz (which I got wrong) with boundary conditions $y(0) = 0$ $y'(0)=2$ $y(\pi)=6$ I know that the obvious trial solution is $y=Ae^{rx}$ but I kind of get stuck after here. I have seen in many solutions to other problems when there are complex roots of the complementary equation that this can be expressed as a combination of sine and cosine (I imagine using Euler's Formula). Is that what happens here?
Let's see: If $y=e^{rx}$, then the Chain Rule tells us that $y' = re^{rx}$ and $y''' = r^3e^{rx}$. If $y$ is a solution, that means $y'''+y'=0$. In other words, $r^3e^{rx} + re^{rx}=0$ If you factor, you find that $(r^3+r)e^{rx}=0$. This happens when the (characteristic) polynomial $p(r) = r^3+r = r(r^2+1)$ has roots: $r = 0,i,-i$. From here you know that each of $y_1=e^{0x}=1$, $y_2=e^{ix}$, $y_3=e^{-ix}$ are solutions. Alternatively, using Euler's Identity, the real valued functions $y_1=e^{0x}=1$, $y_2=\sin(x)$, $y_3=\cos(x)$ are solutions. Using linearity, the general (real valued) solution takes the form $$y = A + B\sin(x) + C\cos(x)$$ You should be able to use the boundary conditions now to solve for the $A, B, C$.
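If you want to double-check the result symbolically, here is a sketch with sympy (solving the boundary conditions by hand rather than passing them to dsolve):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 3) + y(x).diff(x), y(x)))
# y(x) = C1 + C2*sin(x) + C3*cos(x)  (up to constant labels)

A, B, C = sp.symbols('A B C')
expr = A + B * sp.sin(x) + C * sp.cos(x)
eqs = [expr.subs(x, 0),                  # y(0) = 0   ->  A + C = 0
       sp.diff(expr, x).subs(x, 0) - 2,  # y'(0) = 2  ->  B = 2
       expr.subs(x, sp.pi) - 6]          # y(pi) = 6  ->  A - C = 6
print(sp.solve(eqs, [A, B, C]))          # {A: 3, B: 2, C: -3}
```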
{ "language": "en", "url": "https://math.stackexchange.com/questions/1772908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Minimizing quadratic objective function on the unit $\ell_1$ sphere I would like to solve the following optimization problem using a quadratic programming solver $$\begin{array}{ll} \text{minimize} & \dfrac{1}{2} x^T Q x + f^T x\\ \text{subject to} & \displaystyle\sum_{i=1}^{n} |x| = 1\end{array}$$ How can I re-write the problem using linear constraints? Note: I have read other similar questions. However, they define the absolute value differently.
In one of the other answers (unfortunately now deleted) it is incorrectly assumed that we can apply a standard variable splitting technique, without worrying that both the positive and negative parts can become nonzero. I posted a comment already that this probably needs some additional binary variables to fix this. Let me try to show how I would implement this. I assume we have some upper bounds and lower bounds $U_i, L_i$ on $x_i$, i.e. $x_i \in [L_i,U_i]$ where $U_i>0$ and $L_i<0$. Then: $$\begin{align} &x_i = x^{plus}_i - x_i^{min} \\ &x_i^{abs} = x^{plus}_i + x_i^{min} \\ &x_i^{plus} \le U_i \delta_i \\ &x_i^{min} \le -L_i (1-\delta_i) \\ &\sum x_i^{abs} =1 \\ &\delta_i \in \{0,1\}\\ &x^{plus}_i \ge 0 \\ &x^{min}_i \ge 0 \\ &x^{abs}_i \ge 0 \\ \end{align}$$ You need a MIQP solver to handle this (Cplex and Gurobi can do this).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Can geometric programs be solved more efficiently than general convex optimization problems? I want to solve an optimization problem for which I have already proven that it is feasible and convex. Introducing further variables and considering a special case of the problem, I can formulate it as a geometric programming problem. Now I am wondering whether there is any advantage in having a geometric program. In this introduction to geometric programming it is only said that geometric programs are easy to solve, because it is possible to transform them into convex optimization problems, which in turn can be solved efficiently. However, considering that I already know that my problem is convex, do I gain anything if I bring it down to the structure of a geometric program? If yes, how/why?
Depends on your alternative; I'm one of the gpkit developers, and in comparisons we've run, GPs solve much faster than naive gradient descent (it's worth noting that not all GPs are convex without the transformation). However, if your problem can be solved by another convex solver (e.g. it's also a valid LP), then that solver is likely to be a little bit faster, because GP solvers are not as heavily developed as most other convex solvers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the number of trials required to verify that a failed intermittent test is fixed? Say I've got a software test that fails randomly one out of ten times. I make a change to the code which I hope will fix it. I know ten trials is not sufficient to verify the fix. How many trials do I need so I can be X percent certain the fix is successful.
Let $ 0 < p < 1$ be the probability that the test fails and $X$ the random variable that represents the number of tests needed to get a (first) fail. Suppose you tried $N$ tests and got no fails. The probability that you get this result under the hypothesis $$ H_0 : \text{the software still has a problem} $$ is $$\begin{align} p_N &= P(X=N+1) + P(X=N+2) + \dotsb \\ &= (1 - p)^Np + (1 - p)^{N+1}p + \dotsb \\ &= (1 - p)^Np\frac{1}{1 - (1 - p)} \\ &= (1 - p)^N. \end{align}$$ So if this probability satisfies an inequality $$p_N \leq \alpha$$ for (small enough) $\alpha$, then you might be able to say you are $100(1 - \alpha)$% certain that the software is fixed. In other words, the probability that you mistakenly think that the software is fixed is at most $\alpha$ under $H_0$ (cf. Type I error).
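Solving $(1-p)^N \le \alpha$ for $N$ gives $N \ge \log\alpha / \log(1-p)$; a two-line helper, using your one-in-ten failure rate as the example:

```python
from math import ceil, log

def trials_needed(p, alpha):
    # smallest N with (1 - p)^N <= alpha
    return ceil(log(alpha) / log(1 - p))

print(trials_needed(0.10, 0.05))  # 29 trials for 95% confidence
print(trials_needed(0.10, 0.01))  # 44 trials for 99% confidence
```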
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Range of a P-name I am working on a set theory problem from Kunen's Set Theory book, and it involves knowing $\text{ran}(\tau)$ where $\tau$ is a $\mathbb{P}$-name. The entire section loves to talk about the domain of things like $\tau$, but not the range. Is $$\text{ran}(\tau) = \{p \in \mathbb{P} : \exists \sigma \in M^{\mathbb{P}}(\langle \sigma, p \rangle \in \tau)\}$$ correct? I don't think I have the right thing since this doesn't seem to help me solve my problem.
The name $\tau\in M^\mathbb{P}$ is an element of the set $M$, and in particular, it is a relation. So you can compute its range $$ \mathrm{ran}(\tau) = \{p\in \mathbb{P} : \exists \sigma\,(\langle\sigma,p\rangle\in\tau)\}. $$ But in this particular problem, $\tau$ is the name of a function and you have the expression $$ p\Vdash \check b\in\mathrm{ran}(\tau). $$ So you have to read the expression to the right of $\Vdash$ as a formula of the forcing language. Therefore, $p$ forces that $b$ belong to the range of the function denoted by $\tau$. In other words, using the Fundamental Theorem, $$ M[G]\models b\in \mathrm{ran}(\tau_G) $$ for every generic such that $p\in G$ and $b\in M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show a set is open using open balls The set is $ \{ (x_1 , x_2) : x_1 + x_2 > 0 \}$ I wanted to solve this using open balls, so I said let $y = (y_1, y_2)$ be in the stated set. Then create an open ball $ B_r (y)$ around this point with radius $r = \frac {y_1 + y_2}{2} $. (My professor suggested this radius, but I understand why it'll work) Next, pick some point $k = (k_1, k_2)$ such that $k$ is in this ball. We must show that $k$ must also be in the original set. So then I though, ok, since k is in the ball then: $$ \sqrt { (k_1 - y_1)^2 + (k_2 - y_2)^2 } < \frac {y_1 + y_2}{2} $$ which implies that $$ (k_1 - y_1)^2 + (k_2 - y_2)^2 < (y_1 + y_2)^2 $$ I tried playing around with this, but couldn't arrive at the needed result of $k_1 + k_2 > 0$. It's frustrating, because it's a problem we did long ago in class, but it never clicked :( Thank you for your help :D
Actually there is a general result: If function $f:\mathbb{R}^n\to \mathbb{R}$ is continuous on $\mathbb{R}^n,$ then for every real number $a,$ the set $\{x\in\mathbb{R}^n\mid f(x)>a\}$ is open in $\mathbb{R}^n$ (with respect to the natural Euclidean topology). The proof of this result is elementary. For convenience, I give its proof here. Proof. Let $f$ be a real-valued function which is continuous on $\mathbb{R}^n,$ and $a$ be a real number. Let $E=\{x\in\mathbb{R}^n\mid f(x)>a\}.$ Suppose $x\in E.$ Then $f(x)>a$ and $f$ is continuous at $x.$ Assume $a<c<f(x).$ Put $\epsilon=f(x)-c.$ Then $\epsilon>0.$ By definition of continuity of $f$ at $x,$ there exists some $\delta>0$ such that for every $y$ in the ball $\mathbb{B}(x,\delta)$ of center $x$ with radius $\delta,$ we have $|f(y)-f(x)|<\epsilon.$ Thus we deduce that $f(y)>f(x)-\epsilon=f(x)-\big(f(x)-c\big)=c>a,$ which implies that $y\in E,$ and so $\mathbb{B}(x,\delta)\subset E.$ Since $x$ is chosen arbitrarily, we have proved that $E$ is open. Finally, what you need to do is to let $n=2, a=0,$ and $f(x_1,x_2)=x_1+x_2,$ which is surely continuous on $\mathbb{R}^n,$ and then, the desired result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can I show that there cannot exist a homotopy from $\mathcal{C_1}$ to $\mathcal{C_2}$? Consider the diagram below with an annulus $\mathcal{A}$ and two circles in the annulus $\mathcal{C}_1$ and $\mathcal{C}_2$. In $\mathbb{R^n}$, there clearly exists a homotopy between any two circles. But if we have this scenario here, where we are restricted by the annulus, it doesn't seem like there exists a homotopy from $\mathcal{C}_1$ to $\mathcal{C}_2$. This is just based on my intuition. But how can I actually prove this?
I think you mean "homotopy" instead of "homeomorphism". If that's the case, then you can use the fact that "homotopic" is an equivalence relation, in particular it is transitive. The circle $\mathcal{C}_2$ is obviously homotopic to the trivial circle. So if you prove that $\mathcal{C}_1$ is not homotopic to the trivial circle, you're done. You can prove that using complex analysis. $\mathcal{C}_1$ in $\mathcal{A}$ is homeomorphic to $\gamma=\{z \in \Bbb C \ | \ |z|=2\}$ in $D=\{z\in \Bbb C \ | \ 1<|z|<3\}$. Now, $f(z)=\frac 1 z$ is holomorphic on $D$. So by Cauchy's theorem, $\int_\lambda f=0$ for every null-homotopic closed path $\lambda$. But $\int_\gamma f\ne 0$ (easy to compute), so $\gamma$ (and thus $\mathcal{C}_1$) are not null-homotopic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In a loop, if $(xy)x^r = y$ then $x(yx^r)=y$ Consider a loop $L$, that is, a quasigroup with an identity, and recall that a quasigroup $L$ is a set together with a binary operation such that, for every $a$ and $b$ in $L$, the equations $ax=b$ and $ya=b$ have unique solutions $x$ and $y$ in $L$. Further denote $x^r$ the right-side inverse of $x$. I am trying to prove the following: Let $L$ be a loop such that for all $x$ and $y$ in $L$ we have $(xy)x^r = y$. Then prove that $x(yx^r)=y$ also holds for all $x$ and $y$ in $L$. Here is my attempt: If $(xy)x^{r}=y$, then $xy=y(x^r)^r$ and this implies $x=[y(x^r)^r]y^r = (x^r)^r$. Now we have $xy = yx$ for all $x$, $y$. Then using this information, we get $(xy)x^r = (yx)x^r=y$. And finally, $x(yx^r)=(yx^r)x=(yx^r)(x^r)^r=y$. Unfortunately, this uses the associative law in the very beginning. Is there a way to correct my mistake, or can someone provide a different proof?
Substituting $yx^r$ for $y$ in the premise yields $ \left(x\left(yx^r\right)\right)x^r=yx^r$. The conclusion follows since the solution of $ux^r=b$ is unique.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Radius of convergence of two power series I am trying to find the radius of convergence and trying to figure out the behaviour on the frontier of the disk of convergence of the following power series: a) $\sum_{n=1}^{\infty} \dfrac{n!}{(2-i)n^2}z^n$ b) $\sum_{n=1}^{\infty} \dfrac{1}{1+(1+i)^n}z^n$ I know that the radius of convergence of a power series is $R$ where $\dfrac{1}{R}=\limsup_{n\to\infty} \sqrt[n]{|a_n|}$ So, in a), we have $\sqrt[n]{|a_n|}=\dfrac{\sqrt[n]{n!}}{\sqrt[n]{\sqrt{5}\,n^2}}$, and I have no idea how to calculate this limit. And in b) I have the same problem: how do I calculate $\lim_{n \to \infty} \dfrac{1}{\sqrt[n]{|1+(1+i)^n|}}$? I would really appreciate help calculating these limits. Thanks in advance.
One would rather use the ratio test. For a), one obtains, as $n \to \infty$, $$ \left|\dfrac{(n+1)!}{(2-i)(n+1)^2}\times \dfrac{(2-i)n^2}{n!}\right|=\frac{n^2}{(n+1)} \to \infty $$ thus $R=0$. For b), one obtains, as $n \to \infty$, $$ \left|\dfrac{1}{1+(1+i)^{(n+1)}}\times \dfrac{1+(1+i)^n}{1}\right|=\left|\dfrac{1}{1+i}\right| \times \left|\dfrac{1+(1+i)^{-n}}{1+(1+i)^{-(n+1)}}\right|\to \frac1{\sqrt{2}} $$ thus $R=\sqrt{2}$. On the frontier of the disk of convergence, one may write $z=\sqrt{2}e^{i\theta}$, $\theta \in [0,2\pi]$ then one may observe the behaviour of the general term, $$ \dfrac{\left(\sqrt{2}\right)^n}{1+(1+i)^n}\,e^{in\theta} $$ giving, as $n \to \infty$: $$ \left|\dfrac{\left(\sqrt{2}\right)^n}{1+(1+i)^n}\,e^{in\theta}\right|=\left|\dfrac{\sqrt{2}}{1+i}\right|^n \times \left|\dfrac1{1+(1+i)^{-n}}\right|\to 1 \neq0, $$ the series is divergent everywhere on the frontier of the disk.
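If it helps to see the root-test limit from the question numerically, here is a tiny Python sketch of my own (just for intuition) estimating $|a_n|^{-1/n}$ for part b):

```python
# |a_n|^(-1/n) for a_n = 1/(1 + (1+i)^n); plain complex arithmetic suffices.
for n in (50, 100, 200):
    a_n = 1.0 / (1.0 + (1.0 + 1.0j) ** n)
    print(n, abs(a_n) ** (-1.0 / n))   # tends to sqrt(2) ~ 1.41421...
```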
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is the general equation for a straight line not considered a linear function in linear algebra? Is the general equation for a straight line, which we called a linear function in highschool, i.e. $$f(x)=mx+c \tag{1}$$ not considered to be a linear function according to the linear algebra definition that it need to satisfy $$f(\alpha x_{1} + \beta x_{2}) = \alpha f(x_{1}) + \beta f(x_{2}) \tag{2}$$ since the two sides of equation $(2)$ would differ by the constant term if using equation $(1)$ as the definition of $f$? As far as I understand the first equation I posted would be considered an affine function but not a linear function in linear algebra (right?). Are there just (annoyingly) two completely different definitions of what a linear function is or is there a reason that these two definitions have the same name (i.e. are the concepts related)?
You are correct, the term linear is slightly misused in high school, when affine functions from $\mathbb R$ to $\mathbb R$ are also called linear. That said, two points that make this a little less annoying: * *"linear" in this term simply means "forming a line", and you can't argue with the fact that $f(x)=kx+n$ forms a line in $\mathbb R^2$... *Technically, you could say that functions are always mappings from some set to the field $\mathbb R$, and you could then maybe get away with it by saying that we have one definition for linear functions (that of $y=kx+n$), and another for linear mappings (the one with additivity). It's stretching it a bit, I know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1773857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Cardinality of equivalence relations on N I asked a similar question yesterday about well ordered sets, now I am having troubles with equivalence relations. Could someone suggest an injection from a well known set of cardinality $2^{\aleph_{0}}$ to the set of all equivalence relations of $\mathbb{N}$? Many thanks in advance!
For each subset $S$ of $\mathbb{N}$, declare all elements of $S$ to be equivalent, and all elements outside $S$ to only be equivalent to themselves. It's not quite an injection, but if you ignore singleton and empty $S$, then it is. And because there are uncountably many subsets of $\mathbb{N}$, removing countably many won't hurt, and so the domain of this map is uncountable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem in Linear Algebra about dimension of vector space Let $U$ and $V$ be finite dimensional vector spaces over $\mathbb R$, let $L(U,V)$ be the vector space of linear transformations from $U$ to $V$, and let $W$ be a vector subspace of $U$. If $Z$= {$T$ $\in L(U,V)$: $T(w)=0$ for all w $\in W$}, then what is the vector space dimension $\dim(Z)$ of $Z$ in terms of the vector space dimensions of $U$, $V$, and $W$? I have no idea how to start on this problem.
Hint: Choose a basis of $W$ and extend it to a basis of $U$. Express $T$ in that basis as a matrix. How many degrees of freedom are left?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the relationship between the width, height, and radius of an arc? What is the relationship between the width ($w$), height ($h$), and radius ($r$) of an arc? Specifically, the relationship in terms of $h$. I know this is a simple question - I'm a hobbyist engineer, and I'm having one of those moments, where your mind goes blank and you can't remember the simplest thing.
Another take. Let $(0,-r)$ and $(x, y)$ be the endpoints of the arc. Then $h = |r - y|$ and $w = |x|$. We know $x^2 + y^2 = r^2$ so $h = |r \pm \sqrt{r^2 - w^2}| = r \pm \sqrt{r^2 - w^2}$. And $w = |x| = \sqrt{r^2 - y^2} = \sqrt{r^2 - (h-r)^2} = \sqrt{2hr - h^2}$ ===== However, note: your interpretation of width and height is a little odd. You assume you are laying the arc on its tangent (like pressing one end of a clipped toenail to the floor) and measuring the displacement. I think most people would assume the width is the measure from endpoint to endpoint and the height to be the displacement of that. (Like leaving the toenail balancing on its center and measuring how high and far apart the ends are.) (The difference between the new picture and your picture is that in the new picture the radius goes through the arc cutting it in half. The height is from the midpoint of the chord to the tip of the circle, and the width is the length of a chord.) (In your picture the chord is the hypotenuse slanting upward. Width is the horizontal leg and the height is the vertical leg. I think most would view the chord itself as the width, and the height as the perpendicular bisector of the chord to the circle.) In this case the endpoints are $(-x,y)$ and $(x,y)$ and $w = 2x$ and $h = r - y$. Using $x^2 + y^2 = r^2$ we get: $w = 2\sqrt{r^2 - (r-h)^2} = 2\sqrt{2hr - h^2}$ and $h = r \pm \sqrt{r^2 - ((1/2)w)^2} = r \pm \sqrt {r^2 - w^2/4}$. ===== Note. Given an arc's width and height you can calculate the radius of the circle. I probably should have included those as well. In your picture: $(r-h)^2 + w^2 = r^2$ so $w^2 = 2hr - h^2$ so $r = (h^2 + w^2)/(2h)$. In mine: $(r-h)^2 + (w/2)^2 = r^2$ so $w^2/4 = 2hr - h^2$ so $r = (h^2 + w^2/4)/(2h)$.
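Since you mention being a hobbyist engineer, here is a small Python sketch of my own implementing the second (chord-and-sagitta) interpretation; the function names are made up:

```python
import math

def radius_from_chord(w, h):
    """Circle radius from full chord width w and arc height (sagitta) h:
    r = (h^2 + w^2/4) / (2h)."""
    return (h * h + w * w / 4.0) / (2.0 * h)

def height_from_radius(w, r):
    """Sagitta of the minor arc: h = r - sqrt(r^2 - (w/2)^2)."""
    return r - math.sqrt(r * r - (w / 2.0) ** 2)

r = radius_from_chord(w=8.0, h=2.0)      # -> 5.0
print(r, height_from_radius(8.0, r))     # -> 5.0 2.0, round-tripping h
```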
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate the double integral $\iint_D\sqrt{4-x^2-y^2}$ bounded by semi-circle I would appreciate it if someone can help me solve this question, as I'm struggling to get its answer. Q: Evaluate the double integral $$\iint_D\sqrt{4-x^2-y^2}dxdy$$ bounded by semi-circle $$x^2+y^2=2x$$ and lying in first quadrant Thanks
We have: $$x^2+y^2=2x$$ Moving everything to the left: $$x^2-2x+y^2=0$$ Completing the square: $$(x-1)^2+y^2=1$$ Which is a circle with centre $(1,0)$ and radius $1$, hence: $$0\le x\le2$$ Solving for $y$ (taking the upper semicircle, since the region lies in the first quadrant): $$y=\sqrt{1-(x-1)^2}$$ So the integral becomes: $$\int_{x=0}^{x=2}\int_{y=0}^{y=\sqrt{1-(x-1)^2}}\sqrt{4-x^2-y^2}\ \mathrm dy\ \mathrm dx$$
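In case a numerical cross-check is useful, here is a SciPy sketch of my own (not part of the original answer) evaluating the iterated integral above; for comparison, a polar-coordinates computation over $0\le\theta\le\pi/2$, $0\le r\le 2\cos\theta$ gives $\frac{4\pi}{3}-\frac{16}{9}\approx 2.411$:

```python
from math import sqrt, pi
from scipy.integrate import dblquad

# dblquad integrates the inner variable (y) first; max(0, .) guards
# against tiny negative arguments from floating-point rounding.
val, err = dblquad(
    lambda y, x: sqrt(max(0.0, 4.0 - x * x - y * y)),
    0.0, 2.0,                                        # x-limits
    lambda x: 0.0,                                   # y lower limit
    lambda x: sqrt(max(0.0, 1.0 - (x - 1.0) ** 2)),  # y upper limit
)
print(val, 4 * pi / 3 - 16 / 9)   # both ~ 2.4110
```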
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A union of $n$ sets in a different order; how can I prove this? Let $k_1,k_2,\ldots,k_n$ be any permutation of the indices $1,2,\ldots,n$; then $$\bigcup_{i=1}^{n}A_{k_i}=\bigcup_{i=1}^{n}A_i.$$ I tried the double containment technique with no good results; is there a handy theorem I can use?
Let $x\in \bigcup_{i=1}^{n}A_{k_i}$. Then for some $m$, $x\in A_{k_m}$. But $A_{k_m}$ is one of $A_1,\dots,A_n$. So $x$ is in one of $A_1,\dots,A_n$. Hence $x \in\bigcup_{i=1}^{n}A_i$. We proved so far: $\bigcup_{i=1}^{n}A_{k_i}\subseteq\bigcup_{i=1}^{n}A_i$. Similarly: $\bigcup_{i=1}^{n}A_i\subseteq \bigcup_{i=1}^{n}A_{k_i}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determine the coefficient of $x^{18}$ in $\left(x+\frac{1}{x}\right)^{50}$ Determine the coefficient of $x^{18}$ in $\left(x+\frac{1}{x}\right)^{50}$. I know the Binomial Theorem will be useful here, but I am struggling to use it with any certainty.
Hint The general term of the binomial expansion is $${50 \choose k} x^k \left(\frac{1}{x}\right)^{50 - k} = {50 \choose k} x^{2k - 50} .$$
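In case it helps to finish the hint: the exponent $2k-50$ equals $18$ precisely when $k=34$, so the coefficient is $\binom{50}{34}$. A one-line Python check of my own:

```python
from math import comb

# 2k - 50 = 18 forces k = 34, so the coefficient of x^18 is C(50, 34).
print(comb(50, 34))
```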
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why do authors make a point of $C^1$ functions being continuous? I've just got a little question on why authors specify things the way they do. Is there some subtlety I'm missing or are they just being pedantic? I've encountered the function spaces $C^k[a,b]$ a few times this year and usually the author will make a point that the functions are continuous and have continuous first derivatives, continuous second derivatives, and so on up to $k$. Why bother specifying it like this? A differentiable function is necessarily continuous so couldn't we just state $C^k[a,b]$ as the space of real/ complex -valued functions with continuous $k$th derivatives? Then the functions themselves and their less-than-$k$th derivatives would have to be continuous as well.
As Dave Renfro commented, this may be useful for pedagogical reasons, even if it's logically unnecessary. One difficulty that people often have is putting too much trust in formulas. Of course if $f'$ is to exist, $f$ must be continuous, but a formula for $f'$ might sometimes exist and be continuous without $f$ being continuous. If you don't first check for discontinuities of $f$, you might miss them. As a simple example, consider $$ f(x) = \arctan(\tan(x))$$ The naive student, asked to check if $f$ is $C^1$, might start by computing with the Chain Rule $$ f'(x) = 1 $$ see no sign of discontinuity there, and conclude that $f$ is $C^1$. Of course, it's easy for a somewhat less naive student to see the error in this case, but more complicated examples can arise that can even catch experts off guard. For example, a "closed-form" antiderivative of a meromorphic function will typically have branch cuts, even though the poles are not real; whether the branch cuts intersect the real axis, and if so where, is often not obvious. This often arises with antiderivatives produced by Computer Algebra systems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Continuity of the Box-Cox transform at λ = 0: Why is it the log? The Box-Cox power transform frequently used in statistical analysis takes the value $(x^\lambda-1)/\lambda$ for $\lambda$ not equal to zero, and $\ln(x)$ for $\lambda=0$. I would like to see a demonstration, that need not be a full formal proof, that this family of transformations is continuous in the neighborhood of $\lambda=0$. Though I do not necessarily need all the superstructure of a formal proof, I very much want a demonstration that provides an intuitive understanding of why this is the right thing to do for this parameter value. I observe that $\exp((x^\lambda-1)/\lambda)$ has the nice property that it returns $\exp(\ln(x))=x$ at $\lambda=0$, dividing the outcomes between those that compress the extreme values, those that inflate them, and those that do neither. But I still don't understand how that is related to the values of the transformed variable for $\lambda$ in the near vicinity of $0$. I am asking this here rather than in CrossValidated because I am looking at this as a question about analytic properties of a class of functions rather than about any statistical properties of distributions before and after the transform.
Since $\lim_{\lambda \rightarrow 0} x^\lambda = x^0 = 1$, the quotient is of the form $0/0$ and we can use L'Hôpital's rule (differentiating with respect to $\lambda$) to get $$ \lim_{\lambda \rightarrow 0} \frac{x^\lambda -1}{\lambda} = \lim_{\lambda \rightarrow 0} \frac{\ln x\, e^{\lambda \ln x}}{1} = \ln x\, e^0 = \ln x, $$ showing the continuity.
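To see the continuity numerically as well, here is a small Python sketch of my own (the function name `box_cox` is just a label):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 is defined as log(x)."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

x = 3.0
for lam in (0.1, 0.01, 0.001, 0.0):
    print(lam, box_cox(x, lam))
# The values approach log(3) ~ 1.0986 as lam -> 0, illustrating continuity.
```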
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continuous deformation of loop to point. Suppose I have a homotopy from a loop around the origin to a constant loop which is not the origin. Prove that the origin is in the image of the homotopy. Basically prove that if I deform a loop to a point, at some point in time it has to cross the origin. I have tried to prove this but I get stuck trying to express mathematically that the loop can't break.
If the origin is not in the image, then consider your original loop as a loop in the space $\mathbf{R}^2 \setminus\{(0,0)\}$. Since your deformation avoids the origin, it can be viewed, again, as a deformation in the "punctured" space that deforms the loop encompassing the origin to the constant map. But a loop encompassing the origin is not null-homotopic, since the punctured space is homotopy equivalent to the circle and the loop is nontrivial (homotopically speaking, of course). [You may need to put more flesh on these!]
{ "language": "en", "url": "https://math.stackexchange.com/questions/1774919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove $(x_n)$ defined by $x_n= \frac{x_{n-1}}{2} + \frac{1}{x_{n-1}}$ converges when $x_0>1$ $x_n= \dfrac{x_{n-1}}{2} + \dfrac{1}{x_{n-1}}$ I know it converges to $\sqrt2$ and I do not want the answer. I just want a prod in the right direction. I have tried the following and none have worked: $x_n-x_{n-1}$, and this got me nowhere. I have tried $\dfrac{x_{n+1}}{x_{n}}$ with no luck, and I was expecting this to help.
We have: By the AM-GM inequality, $x_n > 2\sqrt{\dfrac{x_{n-1}}{2}\cdot \dfrac{1}{x_{n-1}}}=\sqrt{2}, \forall n \geq 1$. Thus: $x_n-x_{n-1} = \dfrac{1}{x_{n-1}} - \dfrac{x_{n-1}}{2}= \dfrac{2-x_{n-1}^2}{2x_{n-1}} < 0$. Hence $(x_n)$ is a decreasing sequence, and is bounded below by $\sqrt{2}$. So it converges, and passing to the limit in the recursion gives $L=\frac L2+\frac 1L$, i.e. $L^2=2$, so the limit is $\sqrt{2}$.
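Incidentally, this recursion is the classical Babylonian/Newton iteration for $\sqrt 2$ (Newton's method on $x^2-2$), and the monotone decrease from any $x_0>1$ is easy to watch numerically. A small Python sketch of my own:

```python
import math

x = 5.0                      # any starting value x0 > 1
for n in range(1, 7):
    x = x / 2.0 + 1.0 / x    # the recursion from the question
    print(n, x)
print("sqrt(2) =", math.sqrt(2.0))   # iterates decrease toward 1.41421...
```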
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How can you check if there exists a valid magic square with given initial conditions? For example, if I have a $4\times4$ magic square that looks like so: \begin{pmatrix} \hspace{0.1ex}2 & 3 & \cdot & \cdot\hspace{1ex} \\ \hspace{0.1ex}4 & \cdot & \cdot & \cdot\hspace{1ex} \\ \hspace{0.1ex}\cdot & \cdot & \cdot & \cdot\hspace{1ex} \\ \hspace{0.1ex}\cdot & \cdot & \cdot & \cdot\hspace{1ex} \\ \end{pmatrix} How can I determine if there exists a valid magic square for which these initial conditions hold?
In the present case, you can determine it by checking this list. There seems to be no such magic square. In the general case, you can treat the empty squares as variables, introduce the $2n+2$ constraints and solve the corresponding system of linear equations. If you prescribe $k$ squares, this will leave $n^2-k-(2n+2)$ variables to choose freely. You can assign all possible values to them and check whether any of them make the entries come out to form the set $\{1,\ldots,n^2\}$; if so, you've found a magic square of the desired form. In the present case, you'd have $4^2-3-(2\cdot4+2)=3$ free variables, and $4^2-3=13$ numbers left to choose, so you'd have $13\cdot12\cdot11=1716$ combinations to try; an easy task for a computer.
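If you want to carry out such a search in practice, here is a brute-force backtracking sketch in Python of my own (it prunes on partial line sums rather than using the linear-algebra reduction described above, but should still finish quickly for a 4×4 square):

```python
def solve(cells):
    """Try to complete a 4x4 magic square over {1..16}; `cells` is a
    row-major list of 16 entries with None for the empty squares."""
    S = 34                              # magic constant for n = 4
    used = {v for v in cells if v is not None}

    def lines_ok():
        groups = ([cells[4*i:4*i+4] for i in range(4)]      # rows
                  + [cells[i::4] for i in range(4)]         # columns
                  + [cells[0::5], cells[3:13:3]])           # diagonals
        for g in groups:
            vals = [v for v in g if v is not None]
            s = sum(vals)
            if s > S or (len(vals) == 4 and s != S):
                return False
        return True

    def backtrack(i):
        if i == 16:
            return True
        if cells[i] is not None:
            return backtrack(i + 1)
        for v in range(1, 17):
            if v not in used:
                cells[i] = v
                used.add(v)
                if lines_ok() and backtrack(i + 1):
                    return True
                cells[i] = None
                used.discard(v)
        return False

    return cells if lines_ok() and backtrack(0) else None

grid = [2, 3, None, None,
        4] + [None] * 11
print(solve(grid))   # None means no magic square extends this prefix
```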
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of distinct values Question: How many possible values of (a, b, c, d), with a, b, c, d real, are there such that abc = d, bcd = a, cda = b and dab = c? I tried multiplying all the four equations to get: $$(abcd)^2 = 1$$ Not sure how to proceed on from here. Won't there be infinitely many values satisfying this equation?
1) If $abcd = 0$, i.e., if one at least among $a,b,c,d$ is zero, all are clearly zero. Thus there is a solution: $$a=b=c=d=0$$ 2) If $abcd \neq 0$, let $A=\ln(|a|), B=\ln(|b|), C=\ln(|c|), D=\ln(|d|).$ Taking absolute values and then logarithms of the 4 equations, we obtain the following linear homogeneous system $$\begin{bmatrix}1&1&1&-1\\-1&1&1&1\\1&-1&1&1\\1&1&-1&1 \end{bmatrix}\begin{bmatrix}A\\B\\C\\D \end{bmatrix}=\begin{bmatrix}0\\0\\0\\0 \end{bmatrix}$$ The determinant of the matrix $M$ of the system is $16 \neq 0$, thus the kernel of $M$ is reduced to $$(A,B,C,D)=(0,0,0,0)=(\ln(1),\ln(1),\ln(1),\ln(1))$$ Therefore $$|a|=1, |b|=1, |c|=1, |d|=1 \ \ \ (1)$$ All solutions of (1) may not be solutions to the initial system (because, by taking absolute values, we possibly have enlarged the set of solutions). Thus, we have to check the 16 different possible sign combinations for $a,b,c$ and $d$. Doing this, 8 solutions remain: $$(a,b,c,d)=(-1, -1, -1, -1), (-1, 1, -1, 1), (-1, 1, 1, -1), (-1, -1, 1, 1), (1, 1, -1, -1), (1, -1, -1, 1) , (1, -1, 1, -1), (1, 1, 1, 1).$$
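For what it's worth, the sign check in the last step can be automated; a tiny Python sketch of my own:

```python
from itertools import product

# Which sign patterns (a,b,c,d) in {-1,1}^4 satisfy all four equations?
sols = [s for s in product((-1, 1), repeat=4)
        if s[0]*s[1]*s[2] == s[3] and s[1]*s[2]*s[3] == s[0]
        and s[2]*s[3]*s[0] == s[1] and s[3]*s[0]*s[1] == s[2]]
print(len(sols))   # 8, matching the list above
print(sols)
```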
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Question involving functions and permutations If $A=\{1,2,3,4\},B=\{a,b,c\}$, how many functions $A\to B$ are not onto? My Try: so one element in $B$ shouldn't have a preimage in $A$, so one element is excluded (for convenience); so for $4$ elements in $A$ there are $2$ in $B$, hence total ways are $16$; then $2$ elements in $B$ don't have a pre-image, so $2^3$ ways. Thus total ways are $16+8=24$. So this can be done for all $3$ elements. Hence total ways are $72$, but the answer given is $45$
You overcounted the "one element excluded" mappings (assuming they were supposed to be "exactly one element excluded"). Yes, there are $16$ ways to map $A$ onto the two remaining elements of $B$ after one element of $B$ is excluded, but if you exclude $a$ (for example), then two of the mappings you produce in this way are the one that maps everything to $b$ and the one that maps everything to $c$. So if you take your $16$ mappings and multiply by $3$, you have not only already counted all the mappings that exclude two elements of $B$, you have counted each of those mappings twice. So the Inclusion-Exclusion Principle applies, and rather than add the "two elements excluded" mappings, you should subtract them from the total. Moreover, there are not $2^3$ ways to map the elements of $A$ if two elements of $B$ are excluded; there is only one way: all elements of $A$ onto the one remaining element of $B$. But there are three different elements of $B$ that might be the one remaining, so there are a total of $3$ mappings excluding two elements. In conclusion, $$ 48 - 3 = 45 $$ is the correct answer. An alternative is to remove the "only one element of $B$" mappings from the "one element excluded" mappings, so instead of $16$ mappings, you have only the $14$ mappings that actually use both remaining elements of $B$. Then you can compute the total number of mappings as $$ (14 \cdot 3) + 3 = 42 + 3 = 45. $$
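This kind of count is also easy to verify by exhaustion; a short Python sketch of my own:

```python
from itertools import product

maps = list(product("abc", repeat=4))            # all functions A -> B
not_onto = [m for m in maps if set(m) != {"a", "b", "c"}]
print(len(maps), len(not_onto))                  # 81 45
```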
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding the Exponential of a Matrix that is not Diagonalizable Consider the $3 \times 3$ matrix $$A = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{pmatrix}.$$ I am trying to find $e^{At}$. The only tool I have to find the exponential of a matrix is to diagonalize it. $A$'s eigenvalue is 1. Therefore, $A$ is not diagonalizable. How does one find the exponential of a non-diagonalizable matrix? My attempt: Write $\begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{pmatrix} = M + N$, with $M = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and $N = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & -4 \\ 0 & 0 & 0 \end{pmatrix}$. We have $N^3 = 0$, and therefore $\forall x > 3$, $N^x = 0$. Thus: $$\begin{aligned} e^{At} &= e^{(M+N)t} = e^{Mt} e^{Nt} \\ &= \begin{pmatrix} e^t & 0 & 0 \\ 0 & e^t & 0 \\ 0 & 0 & e^t \end{pmatrix} \left(I + \begin{pmatrix} 0 & t & 2t \\ 0 & 0 & -4t \\ 0 & 0 & 0 \end{pmatrix}+\begin{pmatrix} 0 & 0 & -2t^2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}\right) \\ &= e^t \begin{pmatrix} 1 & t & 2t \\ 0 & 1 & -4t \\ 0 & 0 & 1 \end{pmatrix} \\ &= \begin{pmatrix} e^t & te^t & 2t(1-t)e^t \\ 0 & e^t & -4te^t \\ 0 & 0 & e^t \end{pmatrix}. \end{aligned}$$ Is that the right answer?
this is my first answer on this site so if anyone can help to improve the quality of this answer, thanks in advance. That said, let us get to business. * *Compute the Jordan form of this matrix; you can do it by hand or check this link (or both). *Now, we have the following case: $$ A = S J S^{-1}.$$ You will find $S$ and $S^{-1}$ on the previous link. The piece that actually matters is $$ J = \begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1\\ \end{pmatrix} $$ because: *$e^A = e^{SJS^{-1}} = S\,e^J S^{-1}$, so it suffices to compute $e^J$. And the matrix $J$ can be written as: $J = \lambda I + N$, where $I$ is the identity matrix and $N$ a nilpotent matrix. *Since $\lambda I$ and $N$ commute, $e^J = e^{\lambda I + N} = \mathbf{e^{\lambda} \cdot e^N}$ By simple inspection, we get that: $$ J = \lambda I + N = 1 \cdot \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} + \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\\ \end{pmatrix} $$ where you can check that $\lambda =1$ and N is $$ \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0\\ \end{pmatrix} $$ *So, $e^J = e \cdot e^N$ we just apply the definition $ e^N \equiv \sum^{\infty}_{k=0} \frac{1}{k!} N^k$. And, of course, it terminates fast: $N^2 \neq 0$ but $N^3=0$. *Finally: $$ e^J = e \cdot \left[ 1 \cdot I + 1 \cdot N^1 + \frac{1}{2} N^2 \right] $$ where $$ N^2 = \begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ \end{pmatrix} $$ then: $$ \mathbf{ e^J = e \cdot \begin{pmatrix} 1 & 1 & 1/2\\ 0 & 1 & 1\\ 0 & 0 & 1\\ \end{pmatrix} }\qquad\text{and}\qquad e^A = S\,e^J S^{-1}. $$ Last but not least, replacing $N$ by $Nt$ in the same expansion, $$ e^{Jt} = e^{\lambda t} \cdot e^{N t} = e^t \cdot \begin{pmatrix} 1 & t & 1/2\, t^2\\ 0 & 1 & t\\ 0 & 0 & 1\\ \end{pmatrix}, \qquad e^{At} = S\,e^{Jt} S^{-1}, $$ and that's it.
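As a numerical sanity check, a minimal Python/SciPy sketch of my own comparing `scipy.linalg.expm` against the closed form derived directly in the question (the variable names are mine):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, -4.0],
              [0.0, 0.0, 1.0]])

t = 0.7
# The question's result: e^{At} = e^t [[1, t, 2t(1-t)], [0, 1, -4t], [0, 0, 1]].
claimed = np.exp(t) * np.array([[1.0, t, 2.0*t*(1.0 - t)],
                                [0.0, 1.0, -4.0*t],
                                [0.0, 0.0, 1.0]])
print(np.allclose(expm(A * t), claimed))   # expect True
```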
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 7, "answer_id": 3 }
Find the values of $p$ such that the series converge $$\sum_{n=2}^{\infty} \frac{1}{\log^p(n)} \tag{1.}$$ $$\sum_{n=2}^{\infty} e^{n(p+1)} \tag{2.}$$ In 1, if $p=0$ then every term of the series is 1. In 2 I can look at some individual values. The question is what is the correct way to answer this.
The Ratio Test settles (2), but it is inconclusive for (1), and in fact (1) diverges for every $p$. For (1), $$a_n=\ln(n)^{-p}$$ The ratio is, $$\frac{a_{n+1}}{a_n}=\left(\frac{\ln(n)}{\ln(n+1)}\right)^p$$ which tends to $1$, so the Ratio Test gives no information (the ratio being below $1$ for each $n$ is not enough). Instead, compare with the harmonic series: for $p\le 0$ the terms do not tend to $0$, and for $p>0$ we have $\ln(n)^p\le n$ for all large $n$, hence $a_n\ge \frac1n$ and the series diverges for every $p$. For (2), $$a_n=e^{n(p+1)}$$ The ratio is, $$\frac{a_{n+1}}{a_n}=\left(\frac{e^{(n+1)}}{e^n}\right)^{(p+1)}=e^{p+1}$$ Thus for any $p\lt -1$, the ratio is less than 1 and the series absolutely converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A weaker characterization of convergence in distribution $X_n$ and $X$ are real-valued random variables. If $X_n \to X$ in distribution, then we know that $$P\{X_n \leq a\} \to P\{X \leq a\}$$ for every $a$ at which $x\mapsto P\{X \leq x\}$ is continuous. For a certain problem I have at hand this condition is too strong to check. Instead, I only have the following. $$P\{X_n \leq a\} \to P\{X \leq a\}$$ for every $a \in D$ where $D = \{d_1,d_2,\ldots\}$ is a countable dense set in $\mathbb{R}$ such that $x\mapsto P\{X \leq x\}$, $x\mapsto P\{X_1 \leq x\}$, $x\mapsto P\{X_2 \leq x\},\ldots$ are continuous at $d_i$ for every $i = 1,2,\ldots$. I would like to show that this condition is sufficient for convergence in distribution. I fix an arbitrary $d \notin D$ such that $x\mapsto P\{X \leq x\}$ is continuous at $d$. By denseness of $D$ I can find a sequence $(d_k)_k$ such that $d_k \downarrow d$. By right-continuity and monotonicity of the distribution functions $$P\{X_n \leq d_k\} \to P\{X_n \leq d\} \quad \text{and} \quad P\{X \leq d_k\} \to P\{X \leq d\}$$ for every $n$ as $k \to \infty$. Furthermore, by definition of the set $D$ $$P\{X_n \leq d_k\} \to P\{X \leq d_k\}$$ for every $k$ as $n\to\infty$. How do I conclude rigorously from these that $$P\{X_n \leq d\} \to P\{X \leq d\}$$
If a sequence $(d_k)$ from $D$ decreases to $x$, then $F_n(x)\le F_n(d_k)$, so $\limsup_nF_n(x)\le F(d_k)$ for all $k$, so $\limsup_nF_n(x)\le F(x)$. If $(d_k)\subset D$ increases to $x$, and $x$ is a continuity point of $F$, then in like fashion $\liminf_nF_n(x)\ge F(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this algorithmic math olympiad problem? So, today we had a local contest in my state to find eligible people for the international math olympiad "IMO" ... I was stuck with this very interesting algorithmic problem: Let $n$ be a natural number $\geq 2$; we take the biggest divisor of $n$ that is different from $n$ itself and subtract it from $n$. We repeat this until we get $1$. Example: Let $n = 30$. Then we have to subtract its biggest proper divisor, that is, 15. So 30 - 15 = 15, now we do the same: * *5 is the biggest divisor for 15, so 15 - 5 = 10 *5 is the biggest divisor for 10, so 10 - 5 = 5 *1 is the biggest divisor for 5, so 5 - 1 = 4 *2 is the biggest divisor for 4, so 4 - 2 = 2 *1 is the biggest divisor for 2, so 2 - 1 = 1. And we're done! It took 6 steps to get 1. If $n = 2016^{155}$, how many steps do we need to get 1 at the end? I'm a programmer, and I used to rock with logical puzzles, but this time I'm completely lost. So please help me.
Firstly, note that: $$n=2^{775}3^{310}7^{155}$$ Let the number of steps to get from $x$ to $1$ be $f(x)$. Then, note that the biggest divisor of $2x$ is always $x$. Therefore: $$f(2x)=f(x)+1$$ For example: $$f(30)=f(15)+1$$ Applying to here: $$f(n)=f(3^{310}7^{155})+775$$ Now, when $x$ is not divisible by $2$, the biggest divisor of $3x$ is always $x$. Therefore: $$f(3x)=f(3x-x)+1=f(2x)+1=f(x)+2$$ For example: $$f(15)=f(5)+2$$ Applying to here: $$f(n)=f(7^{155})+2\times310+775=f(7^{155})+1395$$ Now, where $x$ is not divisible by $2$, $3$, or $5$, the biggest divisor of $7x$ is always $x$. Therefore: $$f(7x)=f(6x)+1=f(3x)+2=f(x)+4$$ For example: $$f(77)=f(11)+4$$ Applying to here: $$f(n)=f(1)+4\times155+1395=2015$$ Is it just a coincidence? Extra: I wrote a program in Pyth to confirm this (takes a while to calculate). This is for smaller numbers. I used this to generate $f(x)$ for $x$ from $1$ to $100$. A quick search returns OEIS A064097.
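For anyone who wants to replicate the check in plain Python rather than Pyth, here is a small sketch of my own. It uses the fact that the largest proper divisor of $n$ is $n$ divided by its smallest prime factor, and tests the recurrences above (e.g. $f(2^3 3^2 7)=3\cdot1+2\cdot2+4\cdot1=11$):

```python
def steps(n):
    """Number of subtract-largest-proper-divisor steps from n down to 1."""
    count = 0
    while n > 1:
        d = next(k for k in range(2, n + 1) if n % k == 0)  # smallest prime factor
        n -= n // d     # largest proper divisor of n is n / (smallest prime factor)
        count += 1
    return count

print(steps(30))               # 6, as in the worked example
print(steps(2**3 * 3**2 * 7))  # 11, matching the recurrences above
```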
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 3, "answer_id": 1 }
Prove that the line integral of a vector valued function does not depend on the particular path Let C denote the path from $\alpha$ to $\beta$. If $\textbf{F}$ is a gradient vector, that is, there exists a differentiable function $f$ such that $$\nabla f=F,$$ then \begin{eqnarray*} \int_{C}\textbf{F}\; ds &=& \int_{\alpha}^{\beta} \textbf{F}(\vec{c}(t)).\vec{c'}(t)\; dt \\ &=&\int_{\alpha}^{\beta} \nabla f(\vec{c}(t)).\vec{c'}(t)\; dt \\ &=&\int_{\alpha}^{\beta} \frac{\partial f(\vec{c}(t))}{\partial t}\; dt \\ &=&f(\vec{c}(\beta))- f(\vec{c}( \alpha)) \end{eqnarray*} That is, the integral of $\textbf{F}$ over $C$ depends on the values of the end points $c(\beta)$ and $c(\alpha)$ and is thus independent of the path between them. This proof is true if and only if $\textbf{F}$ is a gradient vector, what if not ?
In general, for a smooth vector field $\vec F(\vec r)$, Helmholtz's Theorem guarantees that there exists a scalar field $\Phi(\vec r)$ and a vector field $\vec A(\vec r)$ such that $$\vec F(\vec r)=\nabla \Phi(\vec r)+\nabla \times \vec A(\vec r)$$ Then, forming the line integral along a path $C$, from $\vec r_1$ to $\vec r_2$, we see that $$\begin{align} \int_C \vec F(\vec r)\cdot \,d\vec \ell&=\int_C\left(\nabla \Phi(\vec r)+\nabla \times \vec A(\vec r)\right)\cdot \,d\vec \ell\\\\ &=\Phi(\vec r_2)-\Phi(\vec r_1)+\int_C \nabla \times \vec A(\vec r) \cdot \,d\vec \ell \tag 1\\\\ \end{align}$$ Inasmuch as the integral on the right-hand side of $(1)$ is, in general, path dependent, then the path integral of $\vec F(\vec r)$ is also, in general, path dependent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Linear exponential sum over $(\mathbb{Z}/(p^t))^*$ Let $p$ be a prime and $t$ a natural number. Let us denote $(\mathbb{Z}/(p^t))^*$ to be the group of units of $\mathbb{Z}/(p^t)$. I have the following exponential sum $$ S = \sum_{w \in (\mathbb{Z}/(p^t))^*} e^{2 \pi i a w/ p^t}, $$ which I am convinced is $0$ if $a \not \equiv 0 (\mod p^t)$. However, I am not too sure how to proceed to prove this statement. I would greatly appreciate any hints or comments or explanations! Thanks!
Write that sum as $S(a,t)$. Here are some thoughts: * *If $\gcd(a,p)=1$, then $S(a,t)=S(1,t)$ because raising a primitive $p^t$-root of unity to $a$ still gives you a primitive $p^t$-root of unity. *If $\gcd(a,p)>1$, then $S(a,t)=m S(a',t')$ for some $m\in \mathbb N$ and $a'<a$ with $\gcd(a',p)=1$ and $t'<t$. So, the only sum that you need to care about is $S(1,t)$. For $t=1$, we have $$ S(1,1) = \theta+\theta^2+\cdots+\theta^{p-1}=\dfrac{\theta^p-1}{\theta-1}-1=-1 $$ For $t=2$, we have $$ \begin{eqnarray} S(1,2) &=&1+\theta+\theta^2+\cdots+\theta^{p^2-1} -(1+\theta^p+\theta^{2p}+\cdots+\theta^{(p-1)p}) \\ & = & 0-(1+\theta^p+\theta^{2p}+\cdots+\theta^{(p-1)p}) =-\dfrac{\theta^{p^2}-1}{\theta^p-1}=0 \end{eqnarray} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1775936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can three vectors have dot product less than $0$? Can three vectors in the $xy$ plane have $uv<0$ and $vw<0$ and $uw<0$? If we take $u=(1,0)$ and $v=(-1,2)$ and $w=(-1,-2)$ $$uv=1\times(-1)=-1$$ $$uw=1\times(-1)+0\times(-2)=-1$$ $$vw=-1\times(-1)+2\times(-2)=-3$$ is there any way to show that without examples, just working with $u=(u_1,u_2)$, $v=(v_1,v_2)$ and $w=(w_1,w_2)$, I mean analytically?
The dot product between two vectors is negative when the angle $\theta$ between them is greater than a right angle, since $$ u \cdot v = |u||v|\cos(\theta). $$ So when three vectors point more or less to the vertices of an equilateral triangle all three dot products will be negative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Question on Indefinite Integration: $\int\frac{2x^{12}+5x^9}{\left(x^5+x^3+1\right)^3}\,\mathrm{d}x$ Give me some hints to start with this problem: $${\displaystyle\int}\dfrac{2x^{12}+5x^9}{\left(x^5+x^3+1\right)^3}\,\mathrm{d}x$$
Hint: Take $x^5$ common from the denominator; since the denominator is cubed, it comes out as $x^{15}$. Take $x^{15}$ common from the numerator as well, so that it cancels. Put the remaining denominator as $t$; the numerator then becomes $-\,dt$. Hope you can do the rest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Calculation of total number of real ordered pairs $(x,y)$ Calculation of total number of real ordered pairs $(x,y)$ in $x^2-4x+2=\sin^2 y$ and $x^2+y^2\leq 3$ $\bf{My\; Try::}$ Given $x^2-4x+2=x^2-4x+4-2=\sin^2 y\Rightarrow (x-2)^2-2=\sin^2 y$ Now Using $0 \leq \sin^2 y\leq 1$. So we get $0\leq (x-2)^2-2\leq 1\Rightarrow 2\leq (x-2)^2\leq 3$ So we get $\sqrt{2}\leq |x-2|\leq \sqrt{3}\Leftrightarrow \bigg(-\sqrt{3}\leq (x-2)\leq \sqrt{3}\bigg)\;\;\cap\;\;\bigg((x-2)\leq -\sqrt{2} \;\cup\; (x-2)\;\geq \sqrt{2}\bigg)$ So we get $$x\in \left[2-\sqrt{3}\;\;,2-\sqrt{2}\right]\;\cup \;\left[2+\sqrt{2}\;,2+\sqrt{3}\right]$$ Now How can I solve it after that, Help me, Thanks
Now How can I solve it after that I don't know how to continue from that. So, let us take another approach. Solving $x^2-4x+2-\sin^2 y=0$ for $x$ gives $$x=2\pm\sqrt{2+\sin^2y}$$ Now $x=2+\sqrt{2+\sin^2y}$ does not satisfy $x^2\le 3$. So, we have $x=2-\sqrt{2+\sin^2y}$. Now $$\left(2-\sqrt{2+\sin^2y}\right)^2+y^2\le 3$$ is equivalent to $$3+\sin^2y+y^2\le 4\sqrt{2+\sin^2y}\tag1$$ By the way, for $-\pi/4\le y\le \pi/4$, we have $$y^2\le \frac{\pi^2}{16},\quad 0\le \sin^2y\le \frac 12$$ and so using $\pi\le 4$ gives $$3+\sin^2y+y^2\le 3+\frac 12+\frac{\pi^2}{16}\le 3+\frac 12+\frac{4^2}{16}=4\times 1.125\le 4\sqrt 2\le 4\sqrt{2+\sin^2y}$$ Hence, if $-\pi/4\le y\le \pi/4$, then $(1)$ holds. It follows from this that there are infinitely many such pairs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I show the uniqueness of homomorphism? Let $R$ be a commutative ring and let $k(x)$ be a fixed polynomial in $R[x]$. Prove that there exists a unique homomorphism $\varphi:R[x]\rightarrow R[x]$ such that $\varphi(r)=r\;\mathrm{for\; all\;}r\in R\quad\mathrm{and}\qquad\varphi(x)=k(x)$ and I found such ring homomorphism as follows: For any $f(x)=\sum_{i=0}^{n}a_{i}x^{i}\in R[x]$, define $\varphi(f(x))=\sum_{i=0}^{n}a_{i}\left[k(x)\right]^{i}$ Then I can easily show that $\varphi(r)=r$ $\mathrm{and}$ $\varphi(x)=k(x)$ But it's hard for me to prove the uniqueness..
Let's say we have a homomorphism $\phi$ that satisfies $\phi(r)=r$ for $r \in R$ and $\phi(x)=k(x)$. We need to prove that this homomorphism equals your homomorphism above, which we can do simply by using the definition of homomorphisms and the values of $\phi$ that are given. Consider $\phi(f(x))$: $$\phi\left(\sum_{i=0}^n a_ix^i\right)$$ Distribute $\phi$ over the summation: $$\sum_{i=0}^n\phi(a_ix^i)$$ Distribute $\phi$ over multiplication: $$\sum_{i=0}^n\phi(a_i)\phi(x)^i$$ Substitute: $$\sum_{i=0}^n a_i[k(x)]^i$$ Thus, using the definition of homomorphisms and the hypothesis, we were able to prove that any homomorphism satisfying the hypothesis needs to have the above definition. Since they all have the same definition, they are all equal and thus there is only one unique homomorphism satisfying the hypothesis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are these sets Borel-measurable? Are the following sets Borel-measurable and if so, what is the value of the measure? 1) A = {(x,y) ∈ $[0,1]^2$| x and y rational} 2) B = {(x,y) ∈ $[0,1]^2$ | x or y rational} 3) C = {(x,y) ∈ $[0,1]^2$ | x and y irrational} 4) D = {(x,y) ∈ $[0,1]^2$ | x=y}
Some hints to help you further in this. 1) Singletons are Borel-measurable and countable unions of Borel-measurable sets are Borel-measurable. Conclusion: countable sets (which are countable unions of singletons) are measurable. The Lebesgue measure of a singleton is $0$. What can you conclude then about the Lebesgue measure of a countable set? 2) What do you think: are sets of the form $\{x\}\times[0,1]$ or $[0,1]\times\{y\}$ Borel-measurable? If so then what is the Lebesgue measure on these sets? Note that $B$ can be written as a countable union of this sort of sets. 3) $C$ is the complement of $B$. What is your conclusion? 4) $D$ is closed. What can be concluded from that? Be aware that the Borel-measurable sets together form the $\sigma$-algebra generated by the open sets. edit: First let me emphasize what I said about Borel-measurable sets under 4). That implies directly that open (hence also their complements, the closed sets) are Borel-measurable. Consequently singletons are measurable and for a countable set $A$ we have $A=\bigcup_{a\in A}\{a\}$ which is a countable union of closed sets. Also sets of the form $\{x\}\times[0,1]$ or $[0,1]\times\{y\}$ are closed, hence are Borel measurable. Note that $B$ is a countable union of sets like this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Factor $6x^2​ −7x−5=0$ I'm trying to factor $$6x^2​ −7x−5=0$$ but I have no clue about how to do it. I would be able to factor this: $$x^2-14x+40=0$$ $$a+b=-14$$ $$ab=40$$ But $6x^2​ −7x−5=0$ looks like it's not following the rules because of the coefficient of $x$. Any hints?
When in doubt, you always have the quadratic equation. But if you really want to do it this way, you have to consider how the dominant term can factor. Here, you have either $6=6\times 1$ or $6=3\times 2$, so $(6x+a)(x+b)$ or $(3x+a)(2x+b)$. Try both. In both cases, you get the Viete-like formulas, just like you wrote down, but with a few prefactors ($a+6b=-7$ and $ab=-5$ in the first case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 10, "answer_id": 5 }
Curious inequality: $(1+a)(1+a+b)\geq\sqrt{27ab}$ I was recently trying to play with mean inequalities and Jensen inequality. My question is, if the following relation holds for any positive real numbers $a$ and $b$ $$(1+a)(1+a+b)\geq\sqrt{27ab}$$ and if it does, then how to prove it. By AM-GM inequality we could obtain $1+a\geq\sqrt{4a}$ and $1+a+b\geq\sqrt[3]{27ab}$, so by putting this together we obtain$$(1+a)(1+a+b)\geq\sqrt{4a}\sqrt[3]{27ab},$$but this is slightly different from what I wanted. It's probably true that for all positive $a$ and $b$ there is $\sqrt{4a}\sqrt[3]{27ab}\geq\sqrt{27ab}$, but could there be equality? Thanks a lot.
Use AM-GM: $$\frac{1}{2}+\frac{1}{2}+a\ge3\sqrt[3]{\frac{a}{4}}\\ \frac{1}{2}+\frac{1}{2}+a+\frac{b}{3}+\frac{b}{3}+\frac{b}{3}\ge6\sqrt[6]{\frac{ab^3}{108}}\\\therefore(1+a)(1+a+b)\ge\left(3\sqrt[3]{\frac{a}{4}}\right)\left(6\sqrt[6]{\frac{ab^3}{108}}\right)=\sqrt{27ab}$$ Equality holds iff $\frac{1}{2}=a=\frac{b}{3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
What is the sum of this alternating series: $\sum_{n=1}^\infty \frac{(-1)^n}{4^nn!}$? I need to find the sum of an alternating series correct to 4 decimal places. The series I am working with is: $$\sum_{n=1}^\infty \frac{(-1)^n}{4^nn!}$$ So far I have started by setting up the inequality: $$\frac{1}{4^nn!}<.0001$$Eventually I arrived at $$n=6$$ giving the correct approximation, which is approximately equal to $$\frac{1}{2,949,120}$$ But this is not the answer WolframAlpha gets.
The sum of this series is known, since it is the expansion of $\;\mathrm e^{-x}-1\;$ for $x=\frac14$.
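A quick numerical cross-check of my own in Python: the partial sum through $n=6$ already agrees with $e^{-1/4}-1$ to more than four decimal places.

```python
import math

partial = sum((-1) ** n / (4 ** n * math.factorial(n)) for n in range(1, 7))
print(partial)                  # -0.221199..., the 6-term approximation
print(math.exp(-0.25) - 1.0)    # -0.221199..., the closed form
```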
{ "language": "en", "url": "https://math.stackexchange.com/questions/1776931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Bayes Theorem probability Past Exam Paper Question - Prof. Smith is crossing the Pacific Ocean on a plane, on her way to a conference. The Captain has just announced that an unusual engine fault has been signalled by the plane’s computer; this indicates a fault that only occurs once in 10,000 flights. If the fault report is true, then there’s a 70% chance the plane will have to crash-land in the Ocean, which means certain death for the passengers. However, the sensors are not completely reliable: there’s a 2% chance of a false positive; and there’s a 1% chance of the same fault occurring without the computer flagging the error report. Question Formulate this problem in terms of conditional probabilities of outcomes, existence of a fault and whether or not it is reported and use Bayes’ rule to compute Prof. Smith’s chances of survival. My Attempt P(Fault) - 0.0001 P(Crash | Fault) - 0.7 P(FalsePositive | Fault) - 0.02 P(NoReport | Fault) - 0.01 I have no idea what to do next, every example I look at seems a lot easier than this. Could someone help me out?
When confused, it is useful to break the problem into parts, and work through a fictitious scenario. The prof will survive all false alarms and $30\%$ of correct alarms Suppose the prof takes $1,000,000$ trips (to avoid decimals) A fault is likely to occur $100$ times, of which $99\% \;\; or\;\;99$ would sound the alarm, and not occur $999,900$ times, of which $2\%\;\; or\;\; 19,998\;$ would sound false alarms Thus of the $99+19,998 = 20097$ times the alarm sounds, the prof will survive in $19,998+ 30\%$ of $99 = 20027.7$ times and P(prof survives) $= \dfrac{20027.7}{20097} = 99.6551724..\%$ Added Once you have got the scenario clearly mapped out, you should easily be able to put it in the usual mould for Bayes' rule. The important thing is to first get it clearly into your head. PS: To match with terminology in main answer, P(Correct alarm) =P(fault)*P(alarm|fault) = $\dfrac1{10,000}\times(1-0.01) = \dfrac{99}{1,000,000}$ P(False alarm) = P(no fault)*P(alarm|no fault) $= \dfrac{9999}{10,000}\times(0.02) = \dfrac{19,998}{1,000,000}$ P(survive|alarm) $$= \dfrac{\text{P(survive|false alarm)*P(false alarm) + P(survive|correct alarm)*P(correct alarm)}}{\text{P(false alarm) + P(correct alarm)}}$$ Cancelling out the denominators, $$P(survive|alarm) =\dfrac{1*19,998+0.3*99}{19,998+99}= 99.6551724..\%$$
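Translating the fictitious-count argument into a direct Bayes computation, a Python sketch of my own (variable names are mine):

```python
p_fault = 1e-4
p_alarm_given_fault = 0.99        # 1% chance the fault goes unreported
p_alarm_given_no_fault = 0.02     # false-positive rate
p_survive_given_fault = 0.30      # 70% chance of a fatal crash-landing

p_true_alarm = p_fault * p_alarm_given_fault
p_false_alarm = (1 - p_fault) * p_alarm_given_no_fault

p_survive = (p_false_alarm * 1.0 + p_true_alarm * p_survive_given_fault) \
            / (p_false_alarm + p_true_alarm)
print(p_survive)   # ~0.9965517..., matching the 99.655% above
```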
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
Infinite Product $\prod_n^\infty \frac{1}{1-\frac{1}{n^s}} \rightarrow$? Can this $$P_n(s)=\prod_{m=2}^n \frac{1}{1-\frac{1}{m^s}}$$ for s>1 and $\lim_{n\rightarrow\infty}$ be written any simpler (does it converge)? When $m$ runs here only over the primes this is the famous Euler-Product form of the Riemann zeta function. Is there anything known for this product when n runs over all natural numbers? For checking the convergence one could look at $$\log P_m(s) = -\sum_{m=2}^n \log(1-\frac{1}{m^s})$$ and use the integral criterion. I somehow did not succeed to integrate it. (The idea how I came up with this was somehow to write the zeta function as quotient of this function and a similiar one with terms from the Erathostenes sieve: $$ \zeta(s) = \frac{P(s)}{\prod_{m,n=1}^\infty \left[ 1-((m+1)(n+1))^{-s}\right]^{-\frac{1}{M(m,n)}}}.$$ Here $M(n,m)$ is obviously the number of divisors $\sigma_0$ of $nm$ minus two: $$M(n,m)=\sigma_0(mn) -2,$$ using the prime factorisation of $nm=\prod_{i}p_i^{\alpha_i}$ it is in turn $$\sigma_0(nm)=\prod_i (\alpha_i + 1).$$ )
Hint: Note that $1-\frac1x = \frac{x-1}{x}$. Then you can simplify $$\begin{align*}\log P_n(s) &= -\sum_{m=2}^n \log\left(1-\frac{1}{m^s}\right) \\ &= -\sum_{m=2}^n \log\left(\frac{m^s-1}{m^s}\right) \\ &= \sum_{m=2}^n\big(\log(m^s)-\log(m^s-1)\big) \end{align*}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Fermat's little theorem question: why isn't $a^p \equiv 1$? Fermat's little theorem says that $a^p \equiv a \pmod p$. I have kind of a stupid question. Since $p \equiv 0\pmod p $, why isn't $a^p \equiv a^0 \equiv 1 \pmod p$ ?
While $x\equiv y\pmod p$ implies $a+x\equiv a+y\pmod p$ as well as $a\cdot x\equiv a\cdot y\pmod p$ (and even $x^a\equiv y^a\pmod p$), it does not imply $a^x\equiv a^y\pmod p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
For $R$-modules M, is $M\cong R^{\oplus n}\otimes_RM\cong M^{\oplus n}$? I wanted to explicitly give a bilinear map $R^{\oplus n}\times M\longrightarrow M^{\oplus n}$ when trying to prove that $R^{\oplus n}\otimes_RM\cong M^{\oplus n}$ and ended up with $(r_1,...,r_n,m)\mapsto ((\prod_{i=1}^nr_i)m,...,(\prod_{i=1}^nr_i)m)$ which seems to work in showing that every bilinear map to some $R$-module P : $R^{\oplus n}\times M\longrightarrow P$ factors through $M^{\oplus n}$. But I think the same works for $R^{\oplus n}\times M\longrightarrow M$, $(r_1,...,r_n,m)\mapsto (\prod_{i=1}^nr_i)m$. So is this true or did I do a mistake? I'm in a commutative ring $R$ with identity. Edit: I probably should have made clear that I am trying to find a map that together with $M^{\oplus n}$ satisfies the universal property for the tensor product, and am not trying to give an isomorphism.
The map you wrote is not an isomorphism. It is instead a map that arguably parametrizes (though not uniquely) all multiples of the diagonal map $M\rightarrow M^{\oplus n}$ (where the multiple, depending on an element $(r_1,\ldots,r_n)$ of $R^{\oplus n}$, is $\prod_ir_i$). This is why, as you observe, you could make a similar argument for $R^{\oplus n}\times M \rightarrow M$. Instead, what you want is the map $(r_1,\ldots,r_n,m) \mapsto (r_1m,\ldots,r_nm)$. You can check that this induces the isomorphism $R^{\oplus n}\otimes M \rightarrow M^{\oplus n}$ that you want. If you know about the distributive property of the tensor product, you can also deduce this same result by noting the following: $$ \begin{array}{rcl} R^{\oplus n}\otimes M & = & (R \oplus \cdots \oplus R)\otimes M\\ & \cong & (R\otimes M)\oplus \cdots \oplus (R\otimes M)\\ & \cong & M\oplus \cdots \oplus M\\ & = & M^{\oplus n} \end{array} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Sum of squares of integers divisible by 3 Suppose that $n$ is a sum of squares of three integers divisible by $3$. Prove that it is also a sum of squares of three integers not divisible by $3$. From the condition, $n=(3a)^2+(3b)^2+(3c)^2=9(a^2+b^2+c^2)$. As long as the three numbers inside are divisible by $3$, we can keep pulling out a factor of $9$, until we get $n=9^k(x^2+y^2+z^2)$. Modulo $3$, squares leave a remainder of either $0$ or $1$. Therefore one or three of $x,y,z$ are divisible by $3$.
Induction on the power of $9$ dividing the number. Begin with any number not divisible by $9,$ although it is allowed to be divisible by $3.$ The hypothesis at this stage is just that this number is the sum of three squares, say $n = a^2 + b^2 + c^2.$ Since this $n$ is not divisible by $9,$ it follows that at least one of $a,b,c$ is not divisible by $3.$ Let us order so that we can demand $c$ not divisible by $3.$ If $a+b+c$ is divisible by $3,$ replace $c$ by $-c$ and use the same name. We now have $$ a + b + c \neq 0 \pmod 3. $$ Induction: let $n = a^2 + b^2 + c^2,$ with $a+b+c \neq 0 \pmod 3.$ Then $$ 9n = (a-2b-2c)^2 + (-2a+b-2c)^2 + (-2a-2b+c)^2, $$ where all three summands are nonzero $\pmod 3.$ We can multiply by another $9,$ then another, and so on. The formula is $- \bar{q} p q $ in the quaternions with rational integer coefficients, where $q=i+j+k$ and $p= ai+bj+ck.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How can I show that $\arcsin\left(\frac{b}{c}\right)-\arcsin\left(\frac{a}{c}\right)=2\arcsin\left(\frac{b-a}{c\sqrt2}\right)$, where $c^2=a^2+b^2$ is the Pythagorean relation and $a$, $b$, $c$ are the sides of a right-angled triangle? Show that $$\arcsin\left(\frac{b}{c}\right)-\arcsin\left(\frac{a}{c}\right)=2\arcsin\left(\frac{b-a}{c\sqrt2}\right)$$ How do I go about proving this identity? Can anybody help? I know of the arctan.
HINT: $$\dfrac a{\sin A}=\dfrac b{\sin B}=\dfrac c1$$ $\implies\arcsin\dfrac ac=\arcsin(\sin A)=A$ as $0<A<\dfrac\pi2$ Now $A=\dfrac\pi2-B$ and $\dfrac{b-a}{\sqrt2c}=\dfrac{\sin B-\sin A}{\sqrt2}=\dfrac{\sin B-\cos B}{\sqrt2}=\sin\left(B-\dfrac\pi4\right)$ Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Discreet weighted mean inequality Let ${p_{1}},{p_{2}},\ldots,{p_{n}}$ and ${a_{1}},{a_{2}},\ldots,{a_{n}}$ be positive real numbers and let $r$ be a real number. Then for $r\ne0$ , we define ${M_{r}}(a,p)={\left({\frac{{{p_{1}}a_{1}^{r}+{p_{2}}a_{2}^{r}+\cdots+{p_{n}}a_{n}^{r}}}{{{p_{1}}+{p_{2}}+\cdots+{p_{n}}}}}\right)^{1/r}}$ and for $r=0$ , we define ${M_{0}}(a,p)={\left({a_{1}^{{p_{1}}}a_{2}^{{p_{2}}}\cdots a_{n}^{{p_{n}}}}\right)}^{1/\sum\nolimits _{i=1}^{n}p_i}$ . Then prove that $ {M_{{k_{1}}}}(a,p)\geqslant{M_{{k_{2}}}}(a,p) $ if $k_{1}\geqslant k_{2}$. How to prove this generalized theorem? I have found this in a book without any proof. So can anyone show me?
Let $$w_{i} = \frac{p_{i}}{\sum_{i=1}^np_{i}}\implies\sum_{i=1}^nw_{i}=1 $$ we recall that by the Hölder inequality one has $$\sum_{i=1}^nw_{i}A_i B_i \le \left(\sum_{i=1}^nw_{i}A_i^{\color{red}{q}}\right)^{\color{red}{1/q}}\left(\sum_{i=1}^nw_{i}B_i^{\color{red}{q'}}\right)^{\color{red}{1/q'}}~~~~~~{\color{red}{1/q}}+{\color{red}{1/q'}}=1 $$ with $q\ge1$. Therefore, for $k\ge r>0$, if we let $q=\frac{k}{r}\ge 1$ we have, $$\color{blue}{M_r(a,p)}={\left({\frac{{{p_{1}}a_{1}^{r}+{p_{2}}a_{2}^{r}+\cdots+{p_{n}}a_{n}^{r}}}{{{p_{1}}+{p_{2}}+\cdots+{p_{n}}}}}\right)^{1/r}} =\left(\sum_{i=1}^nw_{i}a_{i}^{r}\right)^{1/r} \\=\left(\sum_{i=1}^nw_{i}\left(a_{i}^{k}\right)^{\color{red}{r/k}}\right)^{1/r}=\left(\sum_{i=1}^nw_{i}\left(a_{i}^{k}\right)^{\color{red}{1/q}}\cdot\color{blue}{1}\right)^{1/r} \\\overset{\text{Hölder}}{\le}\left(\left(\sum_{i=1}^nw_{i}a_{i}^{k}\right)^{\color{red}{1/q}}\left(\sum_{i=1}^nw_{i}\right)^{\color{red}{1/q'}}\right)^{1/r} \\=\left(\sum_{i=1}^nw_{i}a_{i}^{k}\right)^{1/qr}=\left(\sum_{i=1}^nw_{i}a_{i}^{k}\right)^{1/k}=\color{blue}{M_k(a,p)}~~~~~~qr=k $$ that is $$M_{k}(a,p)\geqslant M_{r}(a,p)~~~~~k\ge r$$ (this handles the positive exponents; the remaining cases with $r\le 0$ follow by similar arguments and limiting passages).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Evaluation of $\int_{0}^{2}\frac{(2x-2)dx}{2x-x^2}$ Evaluate $$I=\int_{0}^{2}\frac{(2x-2)dx}{2x-x^2}$$ I used two different methods to solve this. Method $1.$ Using the property that if $f(2a-x)=-f(x)$ Then $$\int_{0}^{2a}f(x)dx=0$$ Now $$f(x)=\frac{(2x-2)}{2x-x^2}$$ So $$f(2-x)=\frac{2(2-x)-2}{2(2-x)-(2-x)^2}=\frac{2-2x}{2x-x^2}=-f(x)$$ Hence $$I=0$$ Method $2.$ By partial fractions $$I=\int_{0}^{2}\frac{-1}{x}-\frac{1}{x-2} dx$$ So $$I=-\log |x| -\log |x-2| \vert_{0}^{2}$$ which gives $$I=\infty$$ which is correct?
Hint: $u=2x-x^2$ and $du=(2-2x)dx$. You will get: $$I=-\ln(2x-x^2)|_{x=0}^{x=2}$$ Let's treat both boundary values as variables $a$ and $b$: $$I_{a,b}=-\ln(2x-x^2)|_{x=a}^{x=b}=-\ln(2b-b^2)+\ln(2a-a^2)=\ln\left(\frac{2a-a^2}{2b-b^2}\right)=\ln\left(\frac{a(2-a)}{b(2-b)}\right)$$ Now take the limit $(a,b)\to(0,2)$: $$\lim_{(a,b)\to(0,2)}I_{a,b}=\lim_{(a,b)\to(0,2)}\ln\left(\frac{a(2-a)}{b(2-b)}\right)$$ Notice that you can set $a=0$ in $2-a$ and $b=2$ in $b$. $$\lim_{(a,b)\to(0,2)}I_{a,b}=\lim_{(a,b)\to(0,2)}\ln\left(\frac{2a}{2(2-b)}\right)=\lim_{(a,b)\to(0,2)}\ln\left(\frac{a}{2-b}\right)$$ In the last step we substitute $b=2-u$ and then $u=ka$ (to show that the limit does not exist). $$\lim_{(a,b)\to(0,2)}I_{a,b}=\lim_{(a,u)\to (0,0)}\ln\left(\frac{a}{u}\right)=\lim_{(a,u)\to (0,0)}\ln\left(\frac{1}{k}\right)=-\ln(k)$$ The limit depends on $k$, hence the limit does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1777926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are these facts about the Poisson process correct? Before studying the theorems one by one, I want to check whether what I know about the Poisson process is right. Let $\left\{N(t)\right\}$ be a Poisson process. Then * *the number of events occurring during time $t\sim{}Poisson(\lambda{}t)$ *Each time interval between adjacent events $\sim{}Exponential(\lambda)$ *From any time, the time taken until the next event occurs $\sim{}Exponential(\lambda)$ *Immediately after an event occurs, the time until $n$ further events occur $\sim{}\Gamma(n, \lambda)$ *From any time, the time taken until $n$ events occur $\sim{}\Gamma(n, \lambda)$ Are any of these statements wrong? If so, let me know which. Thank you.
Yes, those are correct. Here is more useful information: * *The interarrival times are i.i.d. *The conditional distribution of arrival time $T_1$, $\:P[T_1 \leq \tau \mid N(t)=1]$ with $\tau \leq t$ is uniformly distributed over $(0,t)$, $\:P[T_1 \leq \tau \mid N(t)=1] = \frac{\tau}{t}$. And this generalizes to later times. *A PP has independent increments. *A PP has stationary increments. *A PP is nonstationary, like any process with stationary independent increments. *A PP is a renewal process with exponentially distributed intervals.
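These facts are easy to probe by simulation; a minimal sketch (Python/NumPy assumed) that builds the process from i.i.d. exponential gaps and checks the first fact:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 5.0, 100_000

# arrivals from Exp(lam) interarrival gaps; 40 gaps is plenty since lam*t = 10
gaps = rng.exponential(1 / lam, size=(trials, 40))
counts = (np.cumsum(gaps, axis=1) <= t).sum(axis=1)   # N(t) per trial

print(counts.mean(), lam * t)   # Poisson(lam*t) mean
print(counts.var(), lam * t)    # Poisson variance equals the mean
```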
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How can I prove that $S=\{x_k:k\in \Bbb N\}\cup\{x_0\}$ is closed in $\Bbb R^n$? How can I prove that if $\{x_k\}$ is a convergent sequence in $\Bbb R^n$ with limit $x_0$, then $S=\{x_k:k\in \Bbb N\}\cup\{x_0\}$ is closed in $\Bbb R^n$? This is my attempt: It suffices to show that $x_0$ is the only limit point of $S$. Suppose to the contrary that $\lim_{k\to \infty} x_k=x_0, \lim_{k\to \infty} x_k=x_0' $ and $x_0\neq x_0'$. Let $\epsilon=\frac1 2 \Vert x_0 -x_0'\Vert >0$ be given. Then there exist $ k_1, k_2 \in \Bbb N$ such that $k \ge k_1 \Rightarrow \Vert x_k -x_0\Vert < \frac1 2 \Vert x_0 -x_0'\Vert$ and $k \ge k_2 \Rightarrow \Vert x_k -x_0'\Vert < \frac1 2 \Vert x_0 -x_0'\Vert$ Let $k_0=\max\{k_1,k_2\}$ Then for $k\ge k_0$ $\Vert x_0 -x_0'\Vert \le \Vert x_0 -x_k\Vert + \Vert x_k -x_0'\Vert < \frac1 2 \Vert x_0 -x_0'\Vert+\frac1 2 \Vert x_0 -x_0'\Vert = \Vert x_0 -x_0'\Vert $ (contradiction) What I proved above is that $x_0$ is the only limit point of $\{x_k\}$. How can I prove it is the only limit point of $S$, not just of $\{x_k\}$?
Proceed this way: Let $x=\lim_\infty x_k$ and $A=\{x_k\vert k\in\mathbb N\}\cup\{x\}$. You want to prove that for any sequence $(y_k)_{k\in\mathbb N}\in A^{\mathbb N}$, if $(y_k)$ has a limit $y\in \mathbb R^n$, then $y\in A$. Now there are two cases: * *either $(y_k)$ is eventually constant, in which case the result is trivial *Otherwise, you can extract a sequence $(y_{\sigma(k)})$ from $(y_k)$ such that * *$\forall k,y_{\sigma(k)}\in \{x_k\vert k\in\mathbb N\}$ *$\sigma$ is strictly increasing In this case, $(y_{\sigma(k)})$ is also a subsequence of $(x_k)$, so its limit is $x$, hence $\lim_\infty y_k=\lim_\infty y_{\sigma(k)}=x\in A$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Banach Contraction mapping of $\Phi(f)(x)=\int_0^x \frac{1}{1+f(t)^2}dt$, find a fixed point. Let $(X,d)$ be a metric space with $d(f,g)=\sup |f(x)-g(x)|$ where $X$ is the set of continuous functions on $[0,1/2]$. Show $\Phi:X\rightarrow X$ $$\Phi(f)(x)=\int_0^x \frac{1}{1+f(t)^2}dt$$ has a unique fixed point $f$, which satisfies $f(0)=0$. b) Show that it satisfies $\frac{df}{dx}=\frac{1}{1+f(x)^2}$ My attempt: I assumed $(X,d)$ is complete. I need to show that: $$d(\Phi(f)(x),\Phi(g)(x))=\sup \Big|\int_0^x \frac{1}{1+f(t)^2}dt-\int_0^x \frac{1}{1+g(t)^2}dt\Big|\leq\alpha d(f,g)$$ Now, $$\sup \Big|\int_0^x \frac{1}{1+f(t)^2}dt-\int_0^x \frac{1}{1+g(t)^2}dt\Big|=\sup \Big|\int_0^x \frac{1}{1+f(t)^2+g(t)^2}dt\Big|$$ Now ideally I need to somehow change this integral to include $d(f,g)=\sup|f(x)-g(x)|$, but I don't know how. Then I would integrate, and get the integration constant to be $<1$, which will be my contraction constant.
A useful approach for these sorts of estimates is to use the mean value theorem. (a) Let $g(x) = {1 \over 1+x^2}$, then $g'(x) = -{2x \over (1+x^2)^2 }$ and $g''(x) = 2 { 3 x^2 -1 \over (1+x^2)^3 }$. It is not hard to show that $|g'|$ has a maximum of $L={3 \sqrt{3} \over 8} <1 $ at $x = \pm { 1\over \sqrt{3}}$. Hence $|g(x)-g(y)| \le L |x-y|$ for all $x,y$. Then \begin{eqnarray} |\Phi(f_1)(x)-\Phi(f_2)(x)| &\le& \int_0^x |{1 \over 1+f_1(t)^2} - {1 \over 1+f_2(t)^2} | dt \\ &\le& \int_0^x L|f_1(t)-f_2(t)| dt \\ &\le& \int_0^x L \|f_1 - f_2\|_\infty dt \\ &\le& { 1\over 2} L \|f_1 - f_2\|_\infty \end{eqnarray} It follows that $\Phi$ has a unique fixed point $\hat{f}$, and it is clear that $\hat{f}(0) = \Phi(\hat{f})(0) = \int_0^0 {1 \over 1+\hat{f}(t)^2} dt = 0$. (b) follows from the fundamental theorem of calculus.
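Numerically, the fixed point can be approximated by iterating $\Phi$ (Picard iteration), and part (b) can be sanity-checked through the separated form $f+\frac{f^3}{3}=x$ of the ODE with $f(0)=0$. A sketch assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0, 0.5, 1001)
f = np.zeros_like(x)                        # start the iteration at f = 0
for _ in range(30):                         # contraction => geometric convergence
    f = cumulative_trapezoid(1 / (1 + f**2), x, initial=0.0)

print(np.max(np.abs(f + f**3 / 3 - x)))    # ~0 up to discretization error
```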
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How do I find the smallest positive integer $a$ for which $a^n \equiv x \pmod{2^w}$? $x$ is fixed odd positive integer value. $n$ and $w$ are fixed positive integer values. $a$ is positive integer value. I am interested for $n=41$ and $w=160$, but would appreciate a general algorithm. I know how to find any $a$ for which $a^n \equiv x \pmod{2^w}$. Algorithm requires $w$ steps: * *Let $a\leftarrow1$ *Iterate $i$ from 1 to $w-1$ and for each $i$ do: * *if $a^n \equiv x \mod(2^{i+1})$, do nothing. *otherwise assign $a \leftarrow a+2^i$ Is this algorithm giving the smallest value $a$ for which $a^n \equiv x \pmod{2^w}$? How to find smallest $a$ for which $a^n \equiv x \pmod{2^w}$?
Note that here, the exponent $n$ is odd, while the order of every element modulo $2^w$ divides $\phi(2^w)=2^{w-1}$. Since $n$ and $2^{w-1}$ are relatively prime, the solution to $a^n\equiv x\pmod{2^w}$ exists and is unique modulo $2^w$. So if you have a solution $a<2^w$, then it is indeed the smallest positive solution. Another way of solving these problems is to find integers $y$ and $z$ for which $ny+2^{w-1}z=1$ (always possible when $n$ and $2^{w-1}$ are relatively prime), for then $$ (x^y)^n = x^{ny} = x^{1-2^{w-1}z} = x\cdot(x^{2^{w-1}})^{-z} \equiv x\cdot1^{-z} = x\pmod{2^w} $$ (where the congruence is due to Euler's theorem). Therefore $a\equiv x^y\pmod{2^w}$ is the solution. Both finding $y$ (from the extended Euclidean algorithm) and computing $x^y\pmod{2^w}$ are extremely fast when implemented correctly.
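In code this is two lines in Python 3.8+, where `pow(n, -1, m)` performs the extended Euclidean step; a sketch under the stated hypotheses, i.e. odd $x$ and odd $n$:

```python
def odd_nth_root_mod_2w(x, n, w):
    """The unique a (mod 2**w) with a**n == x (mod 2**w), for odd x, odd n."""
    y = pow(n, -1, 2**(w - 1))   # n*y == 1 (mod 2**(w-1)) via extended Euclid
    return pow(x, y, 2**w)       # a = x**y (mod 2**w), by Euler's theorem

n, w = 41, 160                   # the asker's parameters
x = 123456789                    # any odd target
a = odd_nth_root_mod_2w(x, n, w)
assert pow(a, n, 2**w) == x % 2**w
print(a)
```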
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A pair of continued fractions that are algebraic numbers and related to $a^2+b^2=c^m$ Similar to the cfracs in this post, define the two complementary continued fractions, $$x=\cfrac{-(m+1)}{km\color{blue}+\cfrac{(-1)(2m+1)} {3km\color{blue}+\cfrac{(m-1)(3m+1)}{5km\color{blue} +\cfrac{(2m-1)(4m+1)}{7km\color{blue}+\cfrac{(3m-1)(5m+1)}{9km\color{blue}+\ddots}}}}}\tag1$$ $$y=\cfrac{-(m+1)}{km\color{red}-\cfrac{(-1)(2m+1)} {3km\color{red}-\cfrac{(m-1)(3m+1)}{5km\color{red}-\cfrac{(2m-1)(4m+1)}{7km\color{red}-\cfrac{(3m-1)(5m+1)}{9km\color{red}-\ddots}}}}}\tag2$$ The first one is the superfamily which contains Nicco's cfracs in another post. Let $i$ be the imaginary unit. For $k>1$ and $m>1$, it can be empirically observed that $x$ obeys, $$\left(\frac{(x+i)^m-(x-i)^m}{(x+i)^m+(x-i)^m}\right) \color{blue}{\left(\frac{(k+i)^{m+1}+(k-i)^{m+1}}{(k+i)^{m+1}-(k-i)^{m+1}}\right)^{(-1)^m}}=1\tag3$$ while $y$ obeys, $$\left(\frac{(y+1)^m+(y-1)^m}{(y+1)^m-(y-1)^m}\right) \color{blue}{\left(\frac{(k+1)^{m+1}+(k-1)^{m+1}}{(k+1)^{m+1}-(k-1)^{m+1}}\right)^{(-1)^{m+1}}}=-1\tag4$$ where the colored part is a constant that depends on the choice of $k,m$. Hence, as shown in this post, $x,y$ are radicals and algebraic numbers of degree $m$. Question: How do we prove that $(3)$ and $(4)$ are indeed true? P.S. Since, $$\left(\frac{(z+i)^m+(z-i)^m}{2}\right)^2+i^2\left(\frac{(z+i)^m-(z-i)^m}{2}\right)^2 = (z^2+1)^m$$ then the structure of $(3)$ explains the observations about $a^2+b^2=c^m$ in Nicco's post.
Too long for a comment. If you let $a=-1$ and $b=2m+1$ of the general continued fraction in this post, it reduces to the first continued fraction in this post (with $k=1$) and is expressible as a quotient of gamma functions, $$x=-\tan\Big(\frac{\pi(m+1)}{4m}\Big)=\frac{\tan\Big(\frac{\pi}{4m}\Big)+1}{\tan\Big(\frac{\pi}{4m}\Big)-1}=-\frac{(m+1)}{4m}\frac{\Gamma\Big(\frac{3m+1}{4m}\Big)\Gamma\Big(\frac{m-1}{4m}\Big)}{\Gamma\Big(\frac{5m+1}{4m}\Big)\Gamma\Big(\frac{3m-1}{4m}\Big)}$$ like you did on the other post. So the conjecture for general $k$ in this post becomes a generalisation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is a function of $\mathbb N$ known that produces only prime numbers? It is well known that a polynomial $$f(n)=a_0+a_1n+a_2n^2+\cdots+a_kn^k$$ takes a composite value for some number $n$. What about the function $f(n)=a^n+b$? Do positive integers $a$ and $b$ exist such that $a^n+b$ is prime for every natural number $n\ge 1$? I searched for long chains in order to find out whether there is an obvious upper bound. For example $4^n+4503$ is prime for $n=1,\ldots,14$.
The answer to your question is no. For any $a>1,b\geq 1$ there always exists $n$ such that $a^n+b$ is composite. Suppose that $a+b$ is prime (otherwise $n=1$ already gives a composite value and we are finished) and consider $n=a+b$ and look at $a^{a+b}+b$ modulo $a+b$. Then by Fermat's little theorem, which states that $$a^p\equiv a\pmod{p}$$ for any prime $p$, it follows that $$a^{a+b}+b\equiv a+b\equiv 0\pmod{a+b},$$ and so $a^{a+b}+b$ is divisible by $a+b$; since $a^{a+b}+b>a+b$, it is composite.
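On the asker's example: $4+4503=4507$ happens to be prime, so the argument above guarantees that $n=4507$ gives a composite value; a one-line check (Python sketch):

```python
# Fermat: 4**4507 == 4 (mod 4507) because 4507 is prime,
# hence 4**4507 + 4503 == 4 + 4503 == 0 (mod 4507)
p = 4 + 4503
assert (pow(4, p, p) + 4503) % p == 0
print(f"{p} divides 4**{p} + 4503, which is therefore composite")
```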
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Proving divisibility for $256 \mid 7^{2n} + 208n - 1$ I can't come up with a way of proving this: $$256 \mid 7^{2n} + 208n - 1\\ \forall n \in \Bbb N$$ I've tried by induction but couldn't see when to apply the inductive hypothesis... $$P(n+1) = 7^{2n+2}+208n+207$$ How can I continue? Thank you
Transforming step by step, $$7^{2n}+208n-1=(48+1)^n+(256-48)n-1=(2^4\cdot3+1)^n+(256-2^4\cdot3)n-1$$ $$=\sum_{k=0}^{n-1}\binom nk (2^4\cdot3)^{n-k}+1+256n-2^4\cdot3n-1$$ $$=\sum_{k=0}^{n-2}\binom nk (2^4\cdot3)^{n-k}+2^4\cdot3n+256n-2^4\cdot3n$$ $$=\sum_{k=0}^{n-2}\binom nk (2^4\cdot3)^{n-k}+256n\equiv 0\pmod{256},$$ since in the remaining sum every exponent satisfies $n-k\ge 2$, and $(2^4\cdot3)^2=2304=9\cdot256$ is already divisible by $256$.
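A brute-force check of the claim (Python sketch):

```python
assert all((pow(7, 2*n, 256) + 208*n - 1) % 256 == 0 for n in range(1, 1000))
print("256 divides 7^(2n) + 208n - 1 for n = 1..999")
```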
{ "language": "en", "url": "https://math.stackexchange.com/questions/1778846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
counting probability with multiple cases There are four different colors of paint one can use for four different houses. If one color can be used up to three times, how many total possibilities are there? I approached the problem by splitting it into cases: 1) each color used once 2) one color used twice 3) two colors used twice 4) one color used thrice but I don't know how to move on from here or whether I'm right
Choose colours, then choose houses for each colour choice. 1) each color used once $$\binom{4}{4}\cdot\binom{4}{1}\binom{3}{1}\binom{2}{1}$$ 2) one color used twice (plus two colours used once) $$\binom{4}{1}\binom{3}{2}\cdot\binom{4}{2}\binom{2}{1}$$ 3) two colors used twice $$\binom{4}{2}\cdot\binom{4}{2}$$ 4) one color used thrice (and one colour used once) $$\binom{4}{1}\binom{3}{1}\cdot\binom{4}{3}$$ but i dont know how to move on from here or if im right It's a right approach, but the long one.   It is easier to work with complements.   How many ways can you paint the houses with four colours and not use one colour for all houses. $$4^4-4$$
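Both the case count and the complement count can be confirmed by brute force over all $4^4$ colourings (Python sketch):

```python
from itertools import product
from collections import Counter

total = sum(1 for colouring in product(range(4), repeat=4)
            if max(Counter(colouring).values()) <= 3)   # no colour on all 4 houses
print(total)                  # 252
print(4**4 - 4)               # 252, the complement count
print(24 + 144 + 36 + 48)     # 252, the four cases above, evaluated
```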
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Prove or disprove that $\sqrt[3]{2}+\sqrt{1+\sqrt2}$ is a root of a polynomial Prove or disprove that there is a polynomial with integer coefficients such that the number $\sqrt[3]{2}+\sqrt{1+\sqrt2}$ is a root. My work so far: Let $P(x)=x^3-2$. Then $\sqrt[3]{2}$ is a root of $P(x)$ Let $Q(x)=x^4-2x^2-1$. Then $\sqrt{1+\sqrt2}$ is a root of $Q(x)$. $\left(x=\sqrt{1+\sqrt2}\Rightarrow x^2=1+\sqrt2\Rightarrow x^2-1=\sqrt2 \Rightarrow x^4-2x^2+1=2 \Rightarrow x^4-2x^2-1=0\right)$ I need help here.
$$\mathbb{Q}[\alpha,\beta]/(\alpha^3-2,\beta^4-2\beta^2-1)$$ is a vector space over $\mathbb{Q}$ with dimension $12$: a basis is given by $\alpha^n \beta^m$ for $0\leq n\leq 2$ and $0\leq m \leq 3$. It follows that if we represent $(\alpha+\beta)^k$ for $k=0,1,\ldots,11,12$ with respect to such a basis, we get $13$ vectors in a vector space with dimension $12$, hence we may find a linear combination of them that equals zero, hence a polynomial with coefficients in $\mathbb{Q}$ and degree $\leq 12$ that vanishes at $\alpha+\beta$. Namely, such a polynomial is: $$x^{12}-6x^{10}-8x^9 +9 x^8+28 x^6-144x^5 + 63 x^4+96x^3-78 x^2-168 x-41. $$
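Since the claimed polynomial is explicit, a floating-point evaluation is a cheap plausibility check (not a proof); with the sign of the $x^9$ term as corrected above (Python sketch):

```python
x = 2**(1/3) + (1 + 2**0.5)**0.5           # alpha + beta numerically
coeffs = [1, 0, -6, -8, 9, 0, 28, -144, 63, 96, -78, -168, -41]  # x^12 .. x^0
value = sum(c * x**(12 - i) for i, c in enumerate(coeffs))
print(value)                                # ~1e-9, i.e. zero up to rounding
```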
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
A combinatorial task I just can't solve Suppose you have $7$ apples, $3$ bananas, $5$ lemons. How many ways are there to form $3$ equal-sized baskets ($5$ fruits in each)? At first I wrote: $\displaystyle \frac{15!}{7!3!5!} $ But that's definitely not right, because I do not care about permutations of apples within one $5$-sized basket. So the real value in the denominator is smaller than $7!$ (the same logic applies to the other fruits). The textbook says that the answer is $\displaystyle\frac{15!}{2^93^35^3}$ The method of getting the numerator is clear, but the denominator...
We need to be very careful about what we are assuming is distinguishable and what is not distinguishable. Both @almagest and @windircursed have correct answers with different sets of assumptions: @almagest assumed that both the baskets and the fruit are indistinguishable while @windircursed assumed that only the baskets were indistinguishable. A third option would we to assume that all objects and baskets are distinguishable and then the correct answer would be simply $15!$. Clearly after inspecting all the possibilities, we can see which interpretation the authors of the problem were assuming when they wrote their answer. Also the form of their answer was simply a way to mask how they came up with the answer: $2^93^35^3$ is simply the prime factorization of $(5!)^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Calculate $ \lim_{x\to0}\frac{\ln(1+x+x^2+\dots +x^n)}{nx}$ My attempt: \begin{align*} \lim_{x\to0}\frac{\ln(1+x+x^2+\dots +x^n)}{nx} &\quad \left(\text{a } \tfrac{0}{0} \text{ form since } \ln 1 = 0\text{; we apply L'Hôpital's rule}\right) \\ &= \lim_{x \to0}\frac{\frac{nx^{n-1}+(n-1)x^{n-2}+\dots+2x+1}{x^n+x^{n-1}+\dots+1}}{n} \\ &= \lim_{x \to0}\frac{nx^{n-1}+(n-1)x^{n-2}+\dots+2x+1}{n(1+x+\dots+x^n)} \\ &= \frac{1}{n}. \end{align*} Are my steps correct? Thanks.
Besides using L'Hôpital's rule, one can argue by the definition of the derivative: $\displaystyle\lim \limits_{x\to0}\frac{\ln(1+x+x^2+...+x^n)}{nx}=\lim \limits_{x\to0} \frac{\ln(1+x+x^2+...+x^n)-\ln1}{n(x-0)}=\left.\frac{1}{n}\frac{d}{dx}(\ln(1+x+x^2+...+x^n))\right|_{x=0}$ $=\displaystyle\left.\frac{1}{n}\frac{1+2x+...+nx^{n-1}}{1+x+x^2+...+x^n}\right|_{x=0}=\frac{1}{n}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
integration using change of variables find $$\iint_{R}x^2-xy+y^2 dA$$ where $R: x^2-xy+y^2=2$ using $x=\sqrt{2}u-\sqrt{\frac{2}{3}}v$ and $y=\sqrt{2}u+\sqrt{\frac{2}{3}}v$ To calculate the Jacobian I take $$\begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v}\\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix}=\begin{vmatrix} \sqrt{2} &-\sqrt{\frac{2}{3}}\\ \sqrt{2} & \sqrt{\frac{2}{3}} \end{vmatrix}=\frac{4}{\sqrt{3}}dudv$$ So the integral I have to calculate is now: $\iint_{R} u^2+v^2\frac{4}{\sqrt{3}}dudv$ or $\iint_{R} u^2+v^2\frac{\sqrt{3}}{4}dudv$ ?
You're sloppy with notation: you're just gluing differentials next to the Jacobian after you get the determinant. Let's settle this question once and for all: The Jacobian is used in place of the chain rule, so $$\left|\frac{\partial (x,y)}{\partial (u,v)}\right|=\frac{4}{\sqrt3}$$ Now, just like you can write $dx=\frac{dx}{du}du$ in one dimension, you write $$dx\,dy=\left|\frac{\partial (x,y)}{\partial (u,v)}\right| du\,dv$$ Now there's no ambiguity about which way to flip the Jacobian when you do the substitution. It's obvious: $dx$ and $dy$ are on top on both sides, and $du$ and $dv$ are top and bottom on the right, effectively "cancelling out". I deliberately wrote the Jacobian in compact notation that just records what's on top and bottom, but didn't write out the entire matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proposition 13 in Royden's Real Analysis According to the Proposition 13 Ch.1 in the book Real Analysis by Royden: Let $F$ be a collection of subsets of a set $X$. Then the intersection $A$ of all σ-algebras of subsets of $X$ that contain $F$ is a σ-algebra that contains $F$. Moreover, it is the smallest σ-algebra of subsets of $X$ that contains $F$ in the sense that any σ-algebra that contains $F$ also contains $A$. I don't understand what this theorem says at all, especially there are many "contain"s and I am confused. Maybe because I am not a native speaker.. Please help!
Mathematical "translation": Let $F\subset2^{X}$. Let $\mathcal{B}$ be the set of all $\sigma$-algebras $B$ on $X$ such that $F\subset B$. Then, $A=\cap_{B\in\mathcal{B}}B$ is a $\sigma$-algebra such that $F\subset A$. Moreover, for any $\sigma$-algebra $A^{\prime}$ on $X$ such that $F\subset A^{\prime}$, $A\subset A^{\prime}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can multiplication and division be treated as logical operations? A few of my friends and I were playing around with math (more specifically, why (-1)(-1)=1) and we figured out that multiplication (with regards to signs) was an "nxor" operation (I.E. If we treat "1" as "true" and "-1" as "false," than the values of multiplication, again with regards to signs, are the same as the "nxor" operation.) Now, we've begun thinking about redefining multiplication as other logical operations (For example: under "and" (-1)(-1)=-1) My questions are these: is this line of thought similar to any current area of mathematical research? If so, where can I go to find more information on it. I am especially interested in any proven theorems or open conjectures on this topic.
" My questions are these: is this line of thought similar to any current area of mathematical research?" Yes, absolutely. The area you rediscovered is called algebraic logic. I think it is not a very active area of research any more, but in 50's and 60's it was rather active. Especially Tarski school of logic did many things in this area. You may want to check this wkikipedia page https://en.wikipedia.org/wiki/Algebraic_logic
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $11^{2n}+5^{2n+1}-6$ is divisible by $24$ for $n∈ℤ^+$ Prove that $11^{2n}+5^{2n+1}-6$ is divisible by $24$ for $n∈ℤ^+$ I've been trying to solve it using modular arithmetic: $11^{2n}+5^{2n+1}-6≡ (11^2 \bmod 24)^n + 5\cdot(5^2 \bmod 24)^n-6 = 1^n + (5\cdot 1^n)-6 = 0$ Is this the right way to tackle the problem? I am not certain whether I am placing the "$\bmod$" marks in the right places.
$$11^{2n}+5^{2n+1}-6=(5\cdot24+1)^n+5(24+1)^n-6\equiv 1+5-6\equiv0\pmod{24}$$
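A brute-force confirmation (Python sketch):

```python
assert all((11**(2*n) + 5**(2*n + 1) - 6) % 24 == 0 for n in range(1, 200))
print("24 divides 11^(2n) + 5^(2n+1) - 6 for n = 1..199")
```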
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$2^n=n$ and similar equations Is it possible to solve equations of the form $k^n=n$ for $n$, and if so, how? I am new to logarithms and so would be glad if someone could explain, even if there is an obvious answer. Also, what about $k^{a+b}=a$ for $a$? Or $k^{ab}=a$?
All these equations can be standardized to the form $$xe^x=y$$ for which the general solution has been studied in depth and is denoted as the function $x=W(y)$ known as Lambert's. * *$k^n=n$ Write $k^n=e^{-x}$, i.e. $x=-n\log(k)$, and $e^{-x}=-\dfrac x{\log(k)}$ or $xe^x=-\log(k)$. * *$k^{a+b}=a=k^ak^b$ Like before, $xe^x=-\log(k)k^b$, with $x=-a\log(k)$. * *$k^{ab}=a$. Can be rewritten $bk^{ab}=ab$, and $xe^x=-\log(k)b$, with $x=-ab\log(k)$. In general, there is no simpler approach, but you can create equations with a known solution by working in reverse. For instance, taking $x=-\log(2)$ so that $\log(k)=\dfrac{\log(2)}2$ yields the equation $$(\sqrt2)^n=n$$ with the solution $n=2$. For those not willing to take Lambert's $W$ for granted we can discuss the real roots of $xe^x=y$. The derivative is $(x+1)e^x$ so that the function is decreasing from a horizontal asymptote at $0$ (as $x\to-\infty$) to the minimum at $(-1,-1/e)$, then increasing exponentially to $(\infty,\infty)$ after crossing the origin $(0,0)$. So there are no solutions in $x$ for $y<-\frac1e$, two negative solutions on both sides of $x=-1$ for $-\frac1e<y<0$, and a single solution for $y\ge0$.
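Numerically, the reduction above maps directly onto a library Lambert $W$ (a sketch assuming SciPy); for $k=\sqrt2$ the two real branches recover both solutions of $(\sqrt2)^n=n$, namely $n=2$ and $n=4$:

```python
import numpy as np
from scipy.special import lambertw

def solve_pow(k, branch=0):
    # k^n = n  <=>  x e^x = -log(k)  with  x = -n log(k)
    x = lambertw(-np.log(k), branch)
    return (-x / np.log(k)).real

k = np.sqrt(2)
for branch in (0, -1):            # two real branches since -1/e < -log(k) < 0
    n = solve_pow(k, branch)
    print(n, k**n)                # n = 2.0 and n = 4.0, with k**n == n
```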
{ "language": "en", "url": "https://math.stackexchange.com/questions/1779919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Generalised Gauss sums Let $\chi$ be a non-trivial Dirichlet character modulo an odd prime $p$ and let $f(x) \in \mathbb{Z}[x]$ be a polynomial. We define the generalised Gauss sum $$ G(\chi, f):=\sum_{y \in \mathbb{F}_p^*} \chi(y) \left(\frac{f(y)}{p}\right).$$ Under which conditions for $f$ can we prove that $$ G(\chi,f)\ll \deg(f) \sqrt{p},$$ with an absolute implied constant ?
Let $\eta$ be a generator of the character group $\widehat{\Bbb{Z}_p^*}$. Therefore $\chi=\eta^k$ for some integer $k$, $0<k<p-1$ and the Legendre symbol is equal to $\eta^{(p-1)/2}$. Therefore $$ G(\chi,f)=\sum_{y\in\Bbb{F}_p}\eta(y^kf(y)^{(p-1)/2}). $$ The general Weil bound for multiplicative character sums says that the sum $$ S(g)=\sum_{y\in\Bbb{F}_p}\eta(g(y)) $$ is in the non-trivial cases bounded by $$ |S(g)|\le (d-1)\sqrt p, $$ where $d$ is the number of distinct zeros of $g(y)$ in its splitting field over $\Bbb{F}_p$. The sum is trivial if $g(y)$ is of the form $g(y)=c h(y)^{p-1}$ for some $c\in\Bbb{F}_p$ and $h(y)\in\Bbb{F}_p[y]$. Obviously the number of zeros of $g(y)$ is bounded from above by $\deg g$. In your case the number of zeros of $y^kf(y)^{(p-1)/2}$ is bounded from above by $\deg f+1$ (add one for the power of $y$ factor, unless $f(0)=0$). Therefore, when your sums are non-trivial, they are bounded as guessed, with absolute constant $1$ and without the logarithmic factor. The sum is trivial if $y^kf(y)^{(p-1)/2}$ is a constant times a $(p-1)$th power of a polynomial (modulo $p$). This happens for example when $f(y)=y$ and $\chi$ is also the Legendre character.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove that if $f$ is continuous a.e., then it is measurable. Definition of simple function * *$f$ is said to be a simple function if $f$ can be written as $$f(\mathbf{x}) = \sum_{k=1}^{N} a_{_k} \chi_{_{E_k}}(\mathbf{x})$$ where $\{a_{_k}\}$ are distinct values for $k=1, 2, \cdots, N$ and $\chi_{_{E_k}}(\mathbf{x})=\cases{1&if $\mathbf{x}\in E_k$\\0&if $\mathbf{x}\notin E_k$}$. Theorem $(4.12)$ *If $\{f_k\}$, is a sequence of measurable functions, then $\displaystyle\limsup_{k\to\infty}f_k$ and $\displaystyle\liminf_{k\to\infty}f_k$ are measurable. In particular if $\displaystyle\lim_{k\to\infty}f_k$ exists a.e., it is measurable. Theorem $(4.13)$ *Every function $f$ can be written as the limit of a sequence $\{f_k\}$ of simple functions. Problem If $f(x)$, $x\in\mathbb{R}^1$, is continuous at almost every point of an interval $[a, b]$, show that $f$ is measurable on $[a, b]$. Generalize this to functions defined in $\mathbb{R}^n$. [For a constructive proof, use the subintervals of a sequence of partitions to define a sequence of simple measurable functions converging to $f$ a.e. in $[a, b]$. Use $(4.12)$.] * *By the theorem $(4.13)$, we can choose $\{f_k\}$, which are simple functions defined on $[a, b]$ and approaching to $f$. That is, choose $\{f_k\}$ such that, for $k=1, 2, \cdots$, $$f_k=\sum_{i=1}^{N} a_{_i}^{(k)} \chi^{(k)}_{_{E_i}} \quad \text{and} \quad \displaystyle\lim_{k\to\infty}f_k=f$$ *Since $f$ is continuous a.e., $f_k$ are measurable for all $k\in\mathbb{N}$. *Then, by the theorem $(4.12)$, $f$ is measurable. Q1) Why are $f_1, f_2, \cdots, f_N$ all measurable? Q2) Morever, if $f_k$ are all measurable, then how can I ensure that $\displaystyle \lim_{k\to\infty}f_k$ exists? Is it ensured by theorem $(4.13)$? If there is any advice or other proofs, please let me know them. Thank you.
Suppose $f: [a,b]\to \mathbb R$ is continuous at a.e. point of $[a,b].$ Let $D$ be the set of points of discontinuity of $f$ in $[a,b].$ Then $m(D)=0.$ We therefore have $E = [a,b]\setminus D$ measurable, and $f$ is continuous on $E.$ Let $c\in \mathbb R.$ Then $$\tag 1 f^{-1}((c,\infty)) = [f^{-1}((c,\infty))\cap E] \cup [f^{-1}((c,\infty))\cap D].$$ Because $f$ is continuous on $E,$ the first set on the right of $(1)$ is open in $E.$ Thus it equals $E \cap U$ for some $U$ open in $\mathbb R.$ Because $E,U$ are both measurable, so is $f^{-1}((c,\infty))\cap E.$ And because $m(D) = 0,$ any subset of $D$ is measurable. It follows that $(1)$ is the union of two measurable sets, hence is measurable, and we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How to integrate $\int_{0}^{1}\text{log}(\text{sin}(\pi x))\text{d}x$ using complex analysis Here is exercise 9, chapter 3 from Stein & Shakarchi's Complex Analysis II: Show that: $$\int_{0}^{1}\text{log}(\text{sin}(\pi x))\text{d}x=-\text{log(2)}$$ [Hint: use a contour through the set $\{ri\,|\,\,r\geq0\} \cup\{r\,\,|\,\, r\in[0, 1]\}\cup\{1+ri\,\,|\,\,r\geq0\}$] That contour doesn't seem to make sense to me for this integral in particular. I really have no clue for this one. Any ideas?
Let $I$ be the integral given by $$I=\int_0^1 \log(\sin(\pi x))\,dx \tag 1$$ Enforcing the substitution $ x \to x/\pi$ in $(1)$ reveals $$\begin{align} I&=\frac{1}{\pi}\int_0^\pi \log(\sin(x))\,dx \\\\ &=\frac{1}{2\pi}\int_0^\pi \log(\sin^2(x))\,dx\\\\ &=\frac{1}{2\pi}\int_0^\pi \log\left(\frac{1+\cos(2x)}{2}\right)\,dx \tag 2 \end{align}$$ where the last step used $\int_0^\pi \log(\sin^2(x))\,dx=\int_0^\pi \log(\cos^2(x))\,dx$ (shift $x\to x+\pi/2$ and use periodicity) together with $\cos^2(x)=\frac{1+\cos(2x)}{2}$. Next, enforcing the substitution $x\to x/2$ in $(2)$ yields $$\begin{align} I&=\frac{1}{4\pi}\int_0^{2\pi} \log\left(\frac{1+\cos(x)}{2}\right)\,dx \\\\ &=-\frac{\log(2)}{2}+\frac{1}{4\pi}\int_0^{2\pi}\log(1+\cos(x))\,dx\tag 3 \end{align}$$ We now move to the complex plane by making the classical substitution $z=e^{ix}$. Note that $1+\frac{z+z^{-1}}{2}=\frac{(z+1)^2}{2z}$, so $\log\left(1+\frac{z+z^{-1}}{2}\right)=2\log(z+1)-\log(z)-\log(2)$, and the constant term contributes $\frac{1}{4\pi i}\oint_{|z|=1}\frac{-\log(2)}{z}\,dz=-\frac{\log(2)}{2}$. Proceeding, $(3)$ becomes $$\begin{align} I&=-\frac{\log(2)}{2}+\frac{1}{4\pi}\oint_{|z|=1}\log\left(1+\frac{z+z^{-1}}{2}\right)\,\frac{1}{iz}\,dz \\\\ &=-\log(2)+\frac{1}{4\pi i}\oint_{|z|=1}\frac{2\log(z+1)-\log(z)}{z}\,dz \tag 4 \end{align}$$ Note that the integrand in $(4)$ has branch points at $z=0$ and $z=-1$, and a first-order pole at $z=0$. We choose to cut the plane with branch cuts from $z=0$ to $z=-\infty$, and from $z=-1$ to $z=-\infty$, along the non-positive real axis. We can deform the contour $|z|=1$ around the branch cuts and the pole and write $$\begin{align} \oint_{|z|=1}\frac{2\log(z+1)-\log(z)}{z}\,dz &=\lim_{\epsilon \to 0^+}\left(\int_{-\epsilon}^{-1} \frac{2\log(x+1)-\log|x| -i\pi}{x}\,dx \right.\\\\ &-\int_{-\epsilon}^{-1} \frac{2\log(x+1)-\log|x|+i\pi}{x}\,dx\\\\ &\left. +\int_{-\pi}^{\pi}\frac{2\log(1+\epsilon e^{i\phi})-\log(\epsilon e^{i\phi})}{\epsilon e^{i\phi}}\,i\epsilon e^{i\phi}\,d\phi\right)\\\\ &=0 \tag 5 \end{align}$$ Finally, using $(5)$ in $(4)$, we obtain the coveted equality $$I=-\log(2)$$ as was to be shown!
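As a numerical sanity check on the final constant (a sketch assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# the endpoint log-singularities are integrable, so quad handles them
val, err = quad(lambda x: np.log(np.sin(np.pi * x)), 0, 1)
print(val, -np.log(2))   # both ~ -0.693147
```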
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Weak convergence and strong convergence on $B(H)$ Let $\mathcal{A} \subset B(H)$ be a weakly closed, convex, bounded set of self-adjoint operators. If $A_n \rightarrow_{wo} A\in \mathcal{A}$, do we have $A_n \rightarrow A$ strongly? ($A_n$ is a sequence in $\mathcal{A}$.)
Let $H=\ell^2(\mathbb{N})$ and let $\mathcal{A}$ be the set of all self-adjoint elements in the unit ball of $B(H)$. Define $S_n\colon H\to H, S_n \xi(k)=\xi(k+n)$. Then $\|S_n\|=1$, hence $A_n:=\frac 1 2(S_n+S_n^\ast)\in \mathcal{A}$. The adjoint of $S_n$ is given by $$ S_n^\ast\xi(k)=\begin{cases}\xi(k-n)&\colon k\geq n\\0&\colon k<n\end{cases} $$ It is easy to see that $S_n\to 0$ strongly and consequently $S_n^\ast\to 0$ weakly. Thus, $A_n\to 0$ weakly. However, let $\xi=\delta_0=(1,0,\dots)$. Then $A_n\xi=\frac 1 2 \delta_n$ has norm $\frac 1 2$. Hence $A_n$ does not converge to $0$ strongly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Special Gamma function integral I'm trying to evaluate this integral $$\int_{0}^{1} \sin (\pi x)\ln (\Gamma (x)) dx$$ and I got to the point where I need to find $\displaystyle \int_{0}^{\pi } \sin (x)\ln (\sin (x)) dx$ but everything I tried either failed outright or I was not able to evaluate it at the limits of integration. Could you please help me? Thanks
Let us put everything together. $$ I = \int_{0}^{1}\sin(\pi x)\log\Gamma(x)\,dx = \int_{0}^{1}\sin(\pi z)\log\Gamma(1-z)\,dz \tag{1}$$ leads to: $$ I = \frac{1}{2}\int_{0}^{1}\sin(\pi x)\log\left(\Gamma(x)\,\Gamma(1-x)\right)\,dx \tag{2}$$ but $\Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin(\pi x)}$, hence: $$ I = \frac{\log \pi}{\pi}-\frac{1}{\pi}\int_{0}^{\pi/2}\sin(x)\log\sin(x)\,dx\tag{3} $$ or, with a change of variable and integration by parts: $$ I = \frac{\log \pi}{\pi}-\frac{1}{\pi}\int_{0}^{1}\frac{x\log x}{\sqrt{1-x^2}}\,dx = \frac{\log \pi}{\pi}+\frac{1}{\pi}\int_{0}^{1}\frac{1-\sqrt{1-x^2}}{x}\,dx\tag{4}$$ so: $$ \int_{0}^{1}\sin(\pi x)\log\Gamma(x)\,dx = \color{red}{\frac{1}{\pi}\left(1+\log\frac{\pi}{2}\right)}\tag{5}$$ since a primitive for $\frac{1-\sqrt{1-x^2}}{x}=\frac{x}{1+\sqrt{1-x^2}}$ is given by $\log(1+\sqrt{1-x^2})-\sqrt{1-x^2}$. By using the reflection formula, $(5)$ can be seen as a consequence of Raabe's formula, too. It also follows from Kummer's Fourier series expansion.
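A numerical check of $(5)$ (a sketch assuming SciPy; `gammaln` is $\log\Gamma$ for positive arguments):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

val, err = quad(lambda x: np.sin(np.pi * x) * gammaln(x), 0, 1)
print(val, (1 + np.log(np.pi / 2)) / np.pi)   # both ~ 0.4621
```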
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Relative entropy (KL divergence) of sum of random variables Suppose we have two independent random variables, $X$ and $Y$, with different probability distributions. What is the relative entropy between the pdfs of $X$ and $X+Y$, i.e. $$D(P_X||P_{X+Y})$$ (assume all support conditions are met)? I know that in general the pdf of $X+Y$ is the convolution of the pdfs of $X$ and $Y$, but is there an easier way to calculate the relative entropy, or at least simplify it?
Let $f(t)$ be the PDF of $X$ and $g(t)$ be the PDF of $Y$. $$D_{KL}(P_X\parallel P_{X+Y}) = \int_{-\infty}^{+\infty}f(x)\log\frac{f(x)}{(f*g)(x)}\,dx$$ does not admit any obvious simplification, but the term $$\log\frac{f(x)}{(f*g)(x)}=\log\frac{\int_{-\infty}^{+\infty} f(t)\,\delta(x-t)\,dt}{\int_{-\infty}^{+\infty} f(t)\,g(x-t)\,dt} $$ can be effectively controlled if some information about the concentration/decay of $g(t)$ is known. Is this the case?
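One fully concrete case to test any approach against is Gaussian: if $X\sim N(0,\sigma_a^2)$ and $Y$ is an independent centred normal, then $X+Y$ is again centred normal, and the KL divergence between $N(0,\sigma_a^2)$ and $N(0,\sigma_b^2)$ has the closed form $\log(\sigma_b/\sigma_a)+\frac{\sigma_a^2}{2\sigma_b^2}-\frac12$. A grid-based sketch (Python/NumPy assumed) comparing the convolution route with this closed form:

```python
import numpy as np

x = np.linspace(-12, 12, 4001)                 # symmetric grid so 'same' aligns
dx = x[1] - x[0]
pdf = lambda s: np.exp(-x**2 / (2*s**2)) / (s * np.sqrt(2*np.pi))

f, g = pdf(1.0), pdf(1.0)                      # X, Y ~ N(0, 1)
conv = np.convolve(f, g, mode='same') * dx     # numerical pdf of X + Y
kl_numeric = np.trapz(f * np.log(f / conv), x)

s_a, s_b = 1.0, np.sqrt(2.0)                   # closed form: N(0,1) vs N(0,2)
kl_exact = np.log(s_b / s_a) + s_a**2 / (2 * s_b**2) - 0.5
print(kl_numeric, kl_exact)                    # both ~ 0.09657
```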
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find all integral solutions of the equation $x^n+y^n+z^n=2016$ Find all integral solutions of the equation $$x^n+y^n+z^n=2016,$$ where $x,y,z,n$ are integers and $n\ge 2$ My work so far: 1) $n=2$ $$x^2+y^2+z^2=2016$$ I used WolframAlpha for $n=2$ and received the answer to the problem (number of integer solutions: 144) 2) $n=3$ I used WolframAlpha for $n=3$ and did not receive an answer to the problem How can this be done without WolframAlpha?
An approach that can sometimes help to find solutions to this type of equation is to consider the prime factors of the given integer. In this case: $$2016 = 2^5.3^2.7 = 2^5(2^6-1) = 2^5((2.2^5)-1)$$ Hence: $$\mathbf{2016 = 4^5 + 4^5 + (-2)^5}$$ And also (finding a solution of $x^3+y^3+z^3=252$ by trial and error or from this list): $$2016 = 2^3.252 = 2^3(7^3 - 3^3 - 4^3)$$ Hence: $$\mathbf{2016 = 14^3 + (-6)^3 + (-8)^3}$$
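Both displayed solutions, and the count of $144$ quoted in the question for $n=2$, are easy to confirm by brute force (Python sketch):

```python
assert 4**5 + 4**5 + (-2)**5 == 2016
assert 14**3 + (-6)**3 + (-8)**3 == 2016

# count integer solutions of x^2 + y^2 + z^2 = 2016; |x|,|y|,|z| <= 44 suffices
count = sum(1 for x in range(-44, 45) for y in range(-44, 45)
            for z in range(-44, 45) if x*x + y*y + z*z == 2016)
print(count)   # 144, matching the question
```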
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What is the 'meaning' of a nowhere dense set? In some books, a nowhere dense set is defined by $\operatorname{int}(\bar A)=\emptyset$, while in other books (e.g. Munkres) it is defined by $\operatorname{int}(A)=\emptyset$. So what is the 'meaning' (i.e. motivation, intuitive/geometric meaning, etc.) of a nowhere dense set? Thank you.
A set $A$ is nowhere dense if every nonempty open set contains a nonempty open set which is disjoint from $A.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1780973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Counterexamples of cumulative distribution functions (multidimensional) For simplicity, we consider the $2$-dimensional case. We consider a function $F:\mathbb{R}^2\to\mathbb{R}$. Let $F$ satisfy: * *$0\leq F(x_1,x_2)\leq1$ for $(x_1,x_2)\in\mathbb{R}^2$ *$\left\{\displaystyle\begin{matrix} \displaystyle\lim_{x_1,x_2\uparrow\infty}F(x_1,x_2)=1, &\\ \displaystyle\lim_{x_1\downarrow-\infty}F(x_1,x_2)=0&\text{for } x_2\in\mathbb{R}, \\ \displaystyle\lim_{x_2\downarrow-\infty}F(x_1,x_2)=0&\text{for } x_1\in\mathbb{R}. \\ \end{matrix}\right.$ *$\forall(x_1,x_2)\in\mathbb{R}^2:\ \forall\epsilon>0:\ \exists\delta>0\text{ s.t. }\\ \forall(t_1,t_2)\in\mathbb{R}^2:\ 0\leq t_1-x_1<\delta, 0\leq t_2-x_2<\delta\Rightarrow0\leq F(t_1,t_2)-F(x_1,x_2)<\epsilon$ In addition, if $F$ satisfies: *$\forall(x_1,x_2)\in\mathbb{R}^2:\ \forall(h_1,h_2)\in\mathbb{R}_+^2:\\ F(x_1+h_1,x_2+h_2)-F(x_1+h_1,x_2)-F(x_1,x_2+h_2)+F(x_1,x_2)\geq0$ then $F$ is a cumulative distribution function. Now, let $F^{\prime}$ satisfy 1, 2, and 3, and suppose $F^{\prime}$ is monotonically increasing. Does this $F^{\prime}$ satisfy 4? I.e., is $F^{\prime}$ a cumulative distribution function? No. We can construct counterexamples in which $F^{\prime}$ does not satisfy 4. This means 4 is strictly stronger than monotonicity. But I don't know how to construct such an $F^{\prime}$. Could you give me some counterexamples?
The function $$ F(x, y):=\begin{cases} 1& \text{if $x+y\ge0$}\\ 0& \text{otherwise} \end{cases} $$ satisfies conditions (1) and (2). It also meets condition (3), since for any $(x, y)$ there is $h>0$ such that $F(x+h_1, y+h_2)=F(x, y)$ whenever $0\le h_1,h_2<h$. However, condition (4) is not satisfied since $$ F(1,1) + F(-1,-1) = 1 + 0 < 1 + 1 = F(1,-1) + F(-1,1). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1781101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenvalues of a matrix formed by a column vector multiplied by a row vector. Let $u$ and $v$ be column vectors in $\mathbb{R}^n$. Let $A = uv^T$, that is, the matrix formed by a column vector multiplied by a row vector. What are all the eigenvalues and eigenvectors of $A$? What is the rank of $A$?
If one of $u$ and $v$ is zero, then $A=0$ and the case is trivial. Suppose then $u\ne0$ and $v\ne0$. Then $A\ne0$ and has rank at most $1$ (the rank of a product can't be greater than the rank of the factors). So the rank is $1$. Consider $x=u$; then $uv^Tu=(v^Tu)u$ by direct computation, so if $v^Tu\ne0$ then $u$ is an eigenvector with eigenvalue $v^Tu$. The other eigenvalue is $0$, whose eigenvectors are exactly the $x$ with $v^Tx=0$ (if $v^Tu=0$, then $0$ is the only eigenvalue). Since $uv^Tx=(v^Tx)u$, you should be able to finish.
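A quick numerical illustration of all three claims (Python/NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.normal(size=4), rng.normal(size=4)
A = np.outer(u, v)                       # the matrix u v^T

print(np.linalg.matrix_rank(A))          # 1
print(np.sort(np.linalg.eigvals(A)))     # v^T u once, 0 with multiplicity n-1
print(v @ u)                             # the nonzero eigenvalue
print(np.allclose(A @ u, (v @ u) * u))   # True: u is an eigenvector
```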
{ "language": "en", "url": "https://math.stackexchange.com/questions/1781189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }