How do I make a primitive recursive function that does division? I am trying to define a primitive recursive function that does division. I looked at this answer but it seems wrong to me, because according to Wikipedia: The primitive recursive functions are among the number-theoretic functions, which are functions from the natural numbers (nonnegative integers) {0, 1, 2, ...} to the natural numbers So the inequality $x - t\cdot y \ge 0$ will always be true and the function will always keep adding +1. The function given in the answers seems right, but only under the assumption that negative numbers are available. Now how could I build a PRF with just natural numbers? EDIT: I found a way to either make a division that always rounds up or always rounds down. But I haven't found one yet that always does the correct thing. So far: Div(x,y,0) = 0 Div(x,y,S(m)) = A(Div(x,y,m),V(D(x,M(y,S(m))))) where S(m) is successor, A is addition, V is 0 if 0 and 1 otherwise, D is subtraction and M is multiplication. Now the above always rounds down and the next one always rounds up: Div(x,y,0) = 0 Div(x,y,S(m)) = A(Div(x,y,m),V(D(x,M(y,m))))
Definition by cases is a valid, derived principle of definition for primitive recursive functions. So is subtraction, and so is equality. I will therefore use them freely. Moreover, it is a good idea to define not just integer division $d(m,n)$ but also the remainder function $r(m,n)$. One can then write \begin{align} r(0,n) & = 0 \\ r(m+1,n) & = \begin{cases} 0 & \text{if}\; n-1 = r(m,n) \\ r(m,n) + 1 &\text{otherwise} \end{cases} \end{align} and define integer division by \begin{align*} d(0,n) & = 0 \\ d(m+1,n) & = \begin{cases} d(m,n) + 1 & \text{if}\; r(m,n) = n-1 \\ d(m,n) & \text{otherwise} \end{cases} \end{align*}
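As a sanity check (my addition, not part of the original answer), here is a minimal Python sketch of these two recurrences; the names `r` and `d` mirror the definitions above, with recursion only on the first argument.

```python
def r(m, n):
    """Remainder of m modulo n, following the primitive recursion above."""
    if m == 0:
        return 0
    prev = r(m - 1, n)
    return 0 if prev == n - 1 else prev + 1

def d(m, n):
    """Integer division of m by n, following the primitive recursion above."""
    if m == 0:
        return 0
    return d(m - 1, n) + 1 if r(m - 1, n) == n - 1 else d(m - 1, n)

# Spot check against Python's built-in operators (n > 0):
for m in range(60):
    for n in range(1, 10):
        assert d(m, n) == m // n and r(m, n) == m % n
```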
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Where can I search pairs of primes, of certain gap? Is there an online database that lets me search prime pairs by the gap between them? I know of the primes.utm.edu search, and I see twin primes (gap = $2$) can be searched by typing "twin", but how can I search any other valid gap, for example a gap of $12$? I couldn't find on their site a guide to valid comment commands for searching specific types of primes. "Advanced Search" mentions some commands, but I couldn't figure out how to set a gap other than the obvious "twin" for gap $2$. I tried "sexy" for a gap of $6$ and "cousin" for a gap of $4$, but that does not work; I need to be able to search any valid gap.
Here's an online table of prime gaps: http://www.trnicely.net/gaps/gaplist.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Question on limits in $\; \mathbb R^n $ including norm Let $\;f:\mathbb R \rightarrow \mathbb R^n \;$ a Lipschitz continuous map such that $\; \lim_{x \to x_0} f(x)=0\;$. I want to see the behaviour of the following limit: $\; \lim_{x \to x_0} \frac{f(x)}{\vert f(x) \vert}\;$ where $\;\vert \;\; \vert \;$ is the Euclidean norm in $\; \mathbb R^n\;$. EDIT: My initial purpose was to solve this $\; \lim_{x \to x_0} \frac{f(x)g(x)}{\vert f(x) \vert}\;$ where $\;g:\mathbb R \rightarrow \mathbb R^n\;$ such that $\; \lim_{x \to x_0} g(x)=0\;$ If $\; f\;$ was a function in $\; \mathbb R\;$ then I could write $\; \lim_{x \to x_0} \frac{f(x)}{\vert f(x) \vert}=\begin{cases} \frac{f(x)}{f(x)} & \text{if $f \ge 0$} \\ \frac{f(x)}{- f(x)} & \text{if $f \lt 0$} \end{cases}\;$ since the norm would be the absolute value of $\;f\;$ and so the limit wouldn't exist. My question : Can I say something similar to the above assuming $\;f \in \mathbb R^n\;$? How should I handle this $\; \lim_{x \to x_0} \frac{f(x)g(x)}{\vert f(x) \vert}\;$? I would appreciate any help! Thanks in advance!
This limit need not exist! For example take \begin{align} \newcommand{\R}{\mathbb{R}} f:\mspace{0.3em} \begin{array}{rcl} \R &\to &\R^2\\ x &\mapsto& x \left(\begin{array}{cc}\cos x \\ \sin x\end{array}\right) \end{array} % \end{align} Note that $\|f(x)\|_2=|x|$ and $\frac{x}{|x|} = \operatorname{sgn}(x)$ if $x\neq 0$. Now $\lim_{x\to 0} f(x) = 0$ but \begin{align} \lim_{x\to 0+}\frac{f(x)}{\|f(x)\|_2} &= \lim_{x\to 0+}\left(\begin{array}{cc}\cos x \\ \sin x\end{array}\right) = \left(\begin{array}{cc}1 \\ 0\end{array}\right) \\ \lim_{x\to 0-}\frac{f(x)}{\|f(x)\|_2} &= \lim_{x\to 0-}-\left(\begin{array}{cc}\cos x \\ \sin x\end{array}\right) = \left(\begin{array}{cc}-1 \\ 0\end{array}\right) \end{align} So we have that $\lim_{x\to 0+}\frac{f(x)}{\|f(x)\|_2}\neq \lim_{x\to 0-}\frac{f(x)}{\|f(x)\|_2}$ which implies that $\lim_{x\to 0}\frac{f(x)}{\|f(x)\|_2}$ does not exist. For your initial purpose this doesn't matter, because $\frac{f(x)}{\|f(x)\|_2}$ is bounded and $g(x)$ converges to $0$, so the product also converges to $0$. However, maybe you intended $f:\R\to\R$ and $g:\R\to\R$, or your product is the scalar product on $\R^n$. Either way this will converge to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\sin 9^{\circ}$ or $\tan 8^{\circ} $ which one is bigger? $\sin 9^{\circ}$ or $\tan 8^{\circ} $ which one is bigger ? someone ask me that , and said without using calculator !! now my question is ,how to find which is bigger ? Is there a logical way to find ? I s there a mathematical method to show which is greater ? I am thankful for your guide , hint or solution
I would check the first two terms of the Taylor series. $\sin 9^\circ \approx \frac \pi{20}-\frac {\pi^3}{6 \cdot 20^3}, \tan 8^\circ \approx \frac {2\pi}{45}+\frac {8\pi^3}{3 \cdot 45^3}$, so $$\sin 9^\circ -\tan 8^\circ\approx \frac \pi{20}-\frac {\pi^3}{6 \cdot 20^3}-\frac {2\pi}{45}-\frac {8\pi^3}{3 \cdot 45^3}\\\approx \frac \pi{180}-\frac{(3^6+2^{10})\pi^3}{2^73^75^3}\\ \approx \frac \pi{180}(1-\frac {1753}{2^43^55}) \\ \approx \frac \pi{180}(1-\frac {1753}{19440})\\ \gt 0$$ where I used $\pi^2 \approx 10$. Alpha agrees, but I didn't check until I was done.
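A quick numerical sanity check (my addition, not part of the original answer) confirms both the sign and the size of the two-term estimate:

```python
import math

sin9 = math.sin(math.radians(9))   # ≈ 0.156434
tan8 = math.tan(math.radians(8))   # ≈ 0.140541
estimate = math.pi / 180 * (1 - 1753 / 19440)  # the answer's final expression

print(sin9 - tan8, estimate)  # ≈ 0.01589 vs ≈ 0.01588, both positive
```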
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Smooth fibration is a submersion In Wikipedia and the following questions [1], [2], or their respective answers and comments, it is said that a smooth fibration is a submersion. To make clear what I mean: Definition. A smooth map $p\colon E \to B$ is said to satisfy the homotopy lifting property in the smooth category if given the following commutative diagram where all maps are smooth: there exists a smooth map $\widetilde{F}$ making the following diagram commute: Definition. A smooth map is said to be a smooth (Hurewicz) fibration if it satisfies the homotopy lifting property in the smooth category for all manifolds $Y$. Definition. A smooth map is said to be a smooth Serre fibration if it satisfies the homotopy lifting property in the smooth category for all discs $I^n$, $n\ge 0$. Question. Could you please help me to understand why smooth Serre fibrations or smooth Hurewicz fibrations are submersions. I have tried using the characteristic property of submersions and so on.....but I haven't been able to prove anything. By the way, I know that the projections in smooth fiber bundles are smooth submersions. That is straightforward. But I would like to generalize it to fibrations. Remark. If necessary we can assume both $B$ and $E$ are compact since it is the case I am interested in.
Let $x$ be a point of $B$, consider a chart $f:U\simeq I^n\rightarrow B$ whose image contains $x$, with $f(0)=x$. Write $Y=\{x\}$ and consider a point $z\in p^{-1}(x)$. You can define $\tilde f:Y\times \{0\}\rightarrow E$ by $\tilde f(x,0)=z$. Let $F:\{x\}\times I^n\rightarrow B$ be defined by $F(x,y)=f(y)$. We have $p\circ \tilde f=F\circ i$. The homotopy lifting property implies the existence of $H:\{x\}\times I^n\rightarrow E$ such that $p\circ H=F$. Since the tangent map $df_0:T_0\mathbb{R}^n\rightarrow T_xB$ is bijective, and $df_0=dp_z\circ dH_{(x,0)}$, we deduce that $dp_z$ is surjective, and hence $p$ is a submersion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Determine the value without solving for a I got this question at the bottom of this page, the last question on the page. I've been taught how to handle this question but I couldn't figure this one out, so I'd like to show how the website solved it and have someone explain to me the steps they took to get to their answer. Assume that I know nothing about algebra. Given that $a+\frac{1}{3a}=2$, determine the value of $a^3+\frac{1}{27a^3}$ without solving for $a$. $$a^3+\frac{1}{27a^3} = (a+\frac{1}{3a})(a^2-\frac{1}{3}+\frac{1}{9a^2})=2(a^2-\frac{1}{3}+\frac{1}{9a^2})$$ $$a^2-\frac{1}{3}+\frac{1}{9a^2}=(a+\frac{1}{3a})^2-1=4-1=3$$ $$a^3+\frac{1}{27a^3}=2(3)=6$$ Apologies, I don't know how to put new lines in the middle of equations so if you want clarity you can check on the site, it's literally the last question on the page.
hint use $$(a+b)^3=a^3+3ab (a+b)+b^3$$ with $b=\frac {1}{3a} $, observe that $3ab=1$, and $$27=3^3$$ With $a+b=2$, the result is $$2^3-2=6$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find $\sin(A)$ and $\cos(A)$ given $\cos^4(A) - \sin^4(A) = \frac{1}{2}$ and $A$ is located in the second quadrant. Question: Find $\sin(A)$ and $\cos(A)$, given $\cos^4(A)-\sin^4(A)=\frac{1}{2}$ and $A$ is located in the second quadrant. Using the fundamental trigonometric identity, I was able to find that: • $\cos^2(A) - \sin^2(A) = \frac{1}{2}$ and • $$ \cos(A) \cdot \sin(A) = -\frac{1}{4} $$ However, I am unsure about how to find $\sin(A)$ and $\cos(A)$ individually after this. Edit: I solved the problem by using the fundamental trigonometric identity with the difference of $\cos^2(A)$ and $\sin^2(A)$.
Hint $$\left( \cos(A)+ \sin(A) \right)^2 = 1+2 \sin(A) \cos(A)=\frac{1}{2} \\ \left( \cos(A)- \sin(A) \right)^2 = 1-2 \sin(A) \cos(A)=\frac{3}{2} $$ Take the square roots, and pay attention to the quadrant and the fact that $\cos^4(A) >\sin^4(A)$ to decide whether the terms are positive or negative. Alternate simpler solution $$2 \cos^2(A)= \left( \cos^2(A)+\sin^2(A)\right)+\left( \cos^2(A)-\sin^2(A)\right)=1+\frac{1}{2} \\ 2 \sin^2(A)= \left( \cos^2(A)+\sin^2(A)\right)-\left( \cos^2(A)-\sin^2(A)\right)=1-\frac{1}{2} \\$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2290899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Weierstrass factorization theorem for sin Wikipedia says that $$\sin \pi z = \pi z \prod_{n\neq 0} \left(1-\frac{z}{n}\right)e^{z/n} = \pi z\prod_{n=1}^\infty \left(1-\left(\frac{z}{n}\right)^2\right).$$ Where did the second equality come from?
The first product is absolutely convergent, so the order of the terms can be changed without changing its value. We pair the $\pm n$ terms, so $$ \left( 1 - \frac{z}{n} \right)e^{z/n}\left( 1 - \frac{z}{-n} \right)e^{-z/n} = \left( 1 - \frac{z^2}{n^2} \right). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that if $G$ is abelian, and $|G| \equiv2 \mod 4$, then the number of elements of order $2$ in $G$ is $1$. I've tried proving it by contradiction, assuming the number of elements is different from one, which, by Sylow's third theorem, implies that $|G|=2^xm$ with $m$ odd. With that I managed to show that $m\equiv1\mod4$, but kinda got stuck there...
Here's an alternative solution without using Sylow's theorems. Note that $|G|\equiv 2 \pmod 4 \Rightarrow |G|=2(2k+1)$ for some $k\in \mathbb{Z}$. Since $2$ divides $|G|$, by Cauchy's theorem there exists an element $g\in G$ of order $2$. Now $\langle g\rangle$ is a subgroup of order $2$. Now since $G$ is abelian we have that $\langle g\rangle $ is a normal subgroup, and hence $G/\langle g\rangle $ is a group. Now by Lagrange's theorem $|G/\langle g\rangle | = 2k+1$, which is odd. Suppose there was another element $h\in G$ with order $2$ and $h\neq g$. Then we have that $h\langle g \rangle$ is an element of order $2$ in $G/\langle g\rangle$ (note $h\notin\langle g\rangle$, since $h\neq g$ and $h\neq e$). Thus $\langle h\langle g\rangle \rangle$ is a subgroup of $G/\langle g\rangle$ of order $2$. However, this is a contradiction since $|G/\langle g\rangle|$ is odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Show $PQ$ and $QP$ have the same eigenvalues with density of $GL_n$ There is a wonderful series of lectures on YouTube by Dr. Tadashi Tokieda on Geometry and Topology. In the fourth video in this playlist Tadashi sketches an argument for why if $P$ and $Q$ are $n$ by $n$ matrices then $PQ$ and $QP$ have the same eigenvalues using the density of invertible matrices in $M_n$, the space of $n$ by $n$ matrices. The argument goes as follows: (1) Let $(*)$ denote the statement "$PQ$ and $QP$ have the same eigenvalues." (2) Note that $GL_n$ is an open dense subset of $M_n$. If $Q$ is not invertible, then let $Q_n$ be a sequence of invertible matrices converging to $Q$ (with whatever norm you like). (3) Let $(*)_n$ denote the statement "$PQ_n$ and $Q_nP$ have the same eigenvalues." Since $Q_n$ is invertible, the statement $(*)_n$ is true for every $n$. (Tadashi has already shown the claim is true in the case that $Q$ is invertible.) (4) Now, Tadashi claims that the statement $(*)_n$ depends continuously on $n$. Therefore, as $Q_n\to Q$, and the statement $(*)_n$ is true for every $n$, the statement $(*)$ is also true. Can someone flesh out this step (4)? How exactly does $(*)_n$ depend continuously on $n$? Many thanks.
The roots of a complex polynomial are continuous in its coefficients. Hence, if $A_n$ is a sequence of matrices converging to $A$, then the eigenvalues of $A_n$ converge to those of $A$. Applied here: $PQ_n \to PQ$ and $Q_nP \to QP$, and the spectra of $PQ_n$ and $Q_nP$ coincide for every $n$, so passing to the limit shows that $PQ$ and $QP$ have the same eigenvalues.
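A small numerical illustration of the statement (my addition), with a deliberately singular $Q$, the case the density argument is needed for:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4))
Q[:, 0] = 0.0  # force Q to be singular

eig_PQ = np.sort_complex(np.linalg.eigvals(P @ Q))
eig_QP = np.sort_complex(np.linalg.eigvals(Q @ P))
print(np.allclose(eig_PQ, eig_QP))  # True: PQ and QP share the same spectrum
```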
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Determining coefficients of an infinite series Here is my question: Let $s=\sum_{i\ge 0} a_ix^i = (1-4x^2)^{-20}$ $t=\sum_{j\ge 0} b_jx^j = (1+x^5)^{-17}$ Determine $a_i$ and $b_j$ for all $i,j \ge 0$. (The answer will be divided into cases. Ex $a_i$ will depend on whether $i\equiv0\pmod 2$) Then, determine the coefficient of $x^{16}$ in $3x^4st$ I believe I have the correct first steps but I'm not sure how to proceed further: $(1-4x^2)^{-20} = \sum_{k\ge0} \binom{19+k}{k}(4x^2)^k = \sum_{k\ge0} \binom{19+k}{k}4^kx^{2k}$ $(1+x^5)^{-17} =(1-(-x^5))^{-17}= \sum_{l\ge0} \binom{16+l}{l}(-x^5)^l= \sum_{l\ge0} \binom{16+l}{l}(-1)^lx^{5l}$ For the second part, it should be the same as calculating the coefficient of $x^{12}$ in $3st$, which I believe is straightforward once I know all the coefficients in $s$ and $t$.
It is convenient to use the coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ in a series. In order to determine the coefficient of $x^{16}$ in $3x^4s(x)t(x)$ we obtain \begin{align*} \color{blue}{[x^{16}]3x^4 s(x)t(x)} &=3[x^{12}]\left(\sum_{k\geq 0}\binom{19+k}{k}4^kx^{2k}\right) \left(\sum_{l\geq 0}\binom{16+l}{l}(-1)^lx^{5l}\right)\tag{1}\\ &=3\sum_{k=0}^6\binom{19+k}{k}4^k[x^{12-2k}]\sum_{l\geq 0}\binom{16+l}{l}(-1)^lx^{5l}\tag{2}\\ &=3\left(\binom{20}{1}4[x^{10}]+\binom{25}{6}4^6[x^0]\right)\sum_{l\geq 0}\binom{16+l}{l}(-1)^lx^{5l}\tag{3}\\ &=12\binom{20}{1}\binom{18}{2}+3\cdot 4^6\binom{25}{6}\binom{16}{0}\tag{4}\\ &\color{blue}{=2176241520} \end{align*} Comment: * *In (1) we use the linearity of the coefficient of operator and apply the rule $[x^{p-q}]A(x)=[x^p]x^qA(x)$. *In (2) we apply the same rule as in (1) to the left series. We restrict the upper limit of the left-hand series with $6$ since the exponent of $x^{12-2k}$ is non-negative. *In (3) we skip terms $[x^p]$ which are not multiples of $5$ in the exponent. *In (4) we select the coefficients of $x^{10}$ and $x^0$ accordingly.
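The arithmetic can be verified with a computer algebra system; a minimal SymPy check (my addition):

```python
from sympy import symbols

x = symbols('x')
expr = 3*x**4 / ((1 - 4*x**2)**20 * (1 + x**5)**17)

# Expand the series far enough to read off the x^16 coefficient
coeff = expr.series(x, 0, 17).removeO().coeff(x, 16)
print(coeff)  # 2176241520
```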
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the solution to a PDE with an initial condition Find the solution to $u_x + y u_y = u$ with initial condition $u(0,y) = \cos(y)$. Attempted solution - Suppose we parametrize a curve $(x,y)$ by a parameter $\xi$. So that $$ u=u(x(\xi),y(\xi)) $$ $$ \frac{\mathrm{d}u}{\mathrm{d}\xi}=\frac{\mathrm{d}x}{\mathrm{d}\xi}\frac{\partial u}{\partial x}+\frac{\mathrm{d}y}{\mathrm{d}\xi}\frac{\partial u}{\partial y} $$ Comparing to our original PDE $$ u = \frac{\partial u}{\partial x}+y\frac{\partial u}{\partial y} $$ We then set $$ \frac{\mathrm{d}u}{\mathrm{d}\xi}=u $$ $$ \frac{\mathrm{d}x}{\mathrm{d}\xi}=1 $$ $$ \frac{\mathrm{d}y}{\mathrm{d}\xi}=y $$ We then have $$ \mathrm{d}\xi=\frac{\mathrm{d}u}{u}= \mathrm{d}x = \frac{\mathrm{d}y}{y} $$ Integrating we have $$\xi = \ln(u) = x = \ln(y) + C$$ Now for the general solution we have $$\frac{\mathrm{d}u}{\mathrm{d}\xi} = u \Rightarrow \mathrm{d}\xi = \frac{\mathrm{d}u}{u}\Rightarrow \xi = \ln(u) + C \Rightarrow u = Ce^{\xi}$$ The initial condition implies... I am following the structure from another user that answered a similar question I had but I am now a bit lost applying the initial condition. I do know that $x = \xi$ but I am not sure how to get the rest of the solution from there.
$$u_x+yu_y=u$$ Set of characteristic ODEs: $\quad \frac{dx}{1}=\frac{dy}{y}=\frac{du}{u}$ First family of characteristics curves, from $\frac{dx}{1}=\frac{dy}{y} \quad\to\quad ye^{-x}=c_1$ Second family of characteristics curves, from $\frac{dy}{y}=\frac{du}{u} \quad\to\quad \frac{u}{y}=c_2$ General solution, with any differentiable function $F$ : $$\frac{u}{y}=F(ye^{-x}) \quad\to\quad u=y\:F(ye^{-x}) $$ Condition : $\quad u(0,y)=\cos(y)=y\:F(ye^{0}) \quad$ determines the function $F$ : $$F(t)=\frac{\cos(t)}{t} \quad \text{any } t\neq 0$$ Bringing this function $F$ into the above general solution gives $\quad u=y\:\frac{\cos(ye^{-x})}{ye^{-x}}$ $$u(x,y)=e^x\cos(y\:e^{-x})$$
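One can confirm the result symbolically; a short SymPy check (my addition) that the proposed solution satisfies both the PDE and the initial condition:

```python
from sympy import symbols, exp, cos, diff, simplify

x, y = symbols('x y')
u = exp(x) * cos(y * exp(-x))

print(simplify(diff(u, x) + y*diff(u, y) - u))  # 0, so u_x + y*u_y = u
print(u.subs(x, 0))                             # cos(y), the initial condition
```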
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lemma about ordinal arithmetic I'm trying to prove the following lemma: For any three ordinals $\alpha, \alpha' >0$, $\beta >1$, if $\alpha > \alpha'$, then we have $$ \beta^\alpha > \beta^{\alpha'} \cdot \delta + \gamma $$ For $0 < \delta < \beta$ and $\gamma < \beta^{\alpha'}$. My question is, is this even true, and if so what is the best way to prove it?
Just note that $$\beta^\alpha\geq\beta^{\alpha'+1}=\beta^{\alpha'}\cdot \beta\geq\beta^{\alpha'}\cdot (\delta+1)=\beta^{\alpha'}\cdot\delta+\beta^{\alpha'}>\beta^{\alpha'}\cdot\delta+\gamma.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
(How to show that something isn't an) Irreducible Polynomial in $\mathbb{Z}_5$ Let $f(x) = x^4+x+p$, where $p$ is a prime. I'm asked to show that if $p \not\equiv -1 \pmod 5$, then $f(x)$ is not irreducible in $\mathbb{Z}_5$. I guess I could try to show that $f(x)$ is not irreducible for $p\equiv 0,1,2,3 \pmod 5$, but that seems like a lot of work and I'm guessing there's a more efficient way which I don't know.
Let $f(x) = x^{p-1}+x+q$, $p$ is a prime number, $q$ is any integer, we want to find a root of $f(x)$ in $\mathbb{Z}_p$. By Fermat's Little Theorem, for any $a$ with $p\nmid a$, $a^{p-1} \equiv 1 \pmod p$, so $$f(0) = q$$ $$f(1) \equiv 1+1+q\pmod p$$ $$f(2) \equiv 1+2+q\pmod p$$ $$...$$ $$f(p-2) \equiv 1+(p-2)+q\pmod p$$ $$f(p-1) \equiv 1+(p-1)+q\pmod p$$ Thus if $f(x)$ is irreducible in $\mathbb{Z}_p$, then it has no root in $\mathbb{Z}_p$; that can only happen if all of $q, q+2, q+3, \ldots, q+p$ are not divisible by $p$. Since $q+2, \ldots, q+p$ are $p-1$ consecutive integers, they (together with $q \equiv q+p$) cover every residue mod $p$ except $q+1$. Thus $p \mid q+1$, that is, $q \equiv -1 \pmod p$. Thus if $q \not\equiv -1 \pmod p$, then $f(x)$ is reducible in $\mathbb{Z}_p$. Let $p = 5$, and that's the answer to your question.
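A quick brute-force check of this for $p=5$ (my addition):

```python
# x^4 + x + q over Z_5: a root exists exactly when q is not congruent to -1 (mod 5)
p = 5
for q in range(p):
    roots = [x for x in range(p) if (x**4 + x + q) % p == 0]
    print(q, roots)
# Roots appear for q = 0, 1, 2, 3 and none for q = 4, i.e. q ≡ -1 (mod 5)
```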
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
From $a_{n+1}= \frac{3a_n}{(2n+2)(2n+3)}$ to $a_n$, Case 2 Find and prove by induction an explicit formula for $a_n$ if $a_1=1$ and, for $n \geq 1$, $$P_n: a_{n+1}= \frac{3a_n}{(2n+2)(2n+3)}$$ Checking the pattern: $$a_1=1 $$ $$a_2= \frac{3}{4 \cdot 5}$$ $$a_3= \frac{3^2} { 4 \cdot 5 \cdot 6 \cdot 7}$$ $$a_4= \frac{3^3} { 4 \cdot 5 \cdot 6 \cdot 7 \cdot 8 \cdot 9 }$$ $$a_n = \frac{3^{n-1}}{ \frac {(2n+1)!}{3!} }$$ $$a_n = \frac {3! \cdot 3^{n-1}} {(2n+1)!}$$ Proof by induction: $$a_1 = \frac {3! \cdot 3^{0}} {3!} =1$$ $$a_{n+1} = \frac {3! \cdot 3^{n}} {(2n+3)!}$$ Very grateful for the feedback given before. I am new to this. I am a bit stuck at the end, what is the most efficient way to reach back to $a_n$?
$a_{n+1} = \frac{3a_n}{(2n+2)(2n+3)} = \frac{3\cdot 3!\cdot 3^{n-1}}{(2n+2)(2n+3)(2(n-1)+3)!} = \frac{3!3^n}{(2n+3)!}$.
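An exact check of the closed form against the recurrence (my addition), using rational arithmetic to avoid rounding:

```python
from fractions import Fraction
from math import factorial

def a_closed(n):
    # a_n = 3! * 3^(n-1) / (2n+1)!
    return Fraction(factorial(3) * 3**(n - 1), factorial(2*n + 1))

a = Fraction(1)  # a_1 = 1
for n in range(1, 12):
    assert a == a_closed(n)
    a = 3 * a / ((2*n + 2) * (2*n + 3))  # the given recurrence
print("closed form matches the recurrence")
```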
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Difference between units and dimensions Though this question may seem related to Physics, I think that at the very root this is a mathematical question and so I have posted this on math.stackexchange. Background: Initially I thought that the terms-unit and dimension, refer to the same thing. Physical quantities are categorised into fundamental/basic physical quantities and derived physical quantities. A fundamental physical quantity cannot be broken down into simpler physical quantitities, cannot be obtained from other fundamental quantities and all the known physical quantities can be obtained using fundamental quantities. There was a line in my book which stated Mass, length, time, thermodynamic temperature, electric current, amount of substance, luminous intensity are the seven fundamental quantities and are often called the seven dimensions of the world. Thus the dimension of mass is [M], that of length is [L] and so on. The dimensions of a derived physical quantity are the powers to which the units of the fundamental physical quantities have to be raised in order to represent that derived physical quantity completely. This is very confusing and I am finding it difficult to understand the difference between the two terms-dimension and unit. Question: What exactly is the difference between the meaning of the terms unit and dimension?
I think there is no difference between dimension and unit in your case. They can be used interchangeably. However, the same word "dimension" is also used in another context, namely, describing the amount of numbers needed to describe a point in your space uniquely. These two use cases should not be confused. They are very different. For example, the space we live in is 3-dimensional. This means we describe a point inside by giving three values of the dimension (= unit) [M].
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 1 }
Identifying the Probability distribution. If I am certain to receive a phone call in a span of 60 minutes, what distribution does the phone call follow with time, within the 60 minutes period? Obviously, all instances of time cannot have same probability because if it did, then it would imply that there are chances of not receiving the call even at the end of 60 minutes period, which contradicts our initial assumption. To further corroborate on unequal distribution of probability, the chances of a 30 year old getting married is more than that of 18 year old. I believe it follows Poisson distribution. I would appreciate very much if anyone derived the probabilities for this question.
The Poisson distribution would allow more than one call in a given hour; it would also allow zero calls with non-zero probability. I would not recommend this to model a call that is absolutely certain to occur within a given 60-minute interval. It's unclear why marriage rates should have anything to do with the question; 18-year-olds have different lives than 30-year-olds, but your life typically doesn't change much between 9 o'clock and 10 o'clock. Personally, I would let $X$ be the number of minutes after the start of the period at which the call is received, with a uniform distribution on $[0,60].$ This will give you a $\frac1{60}$ probability to get the call in any given one-minute interval, but unlike a Poisson process, the probability of a call in one interval is not independent of the probability of a call in any other interval. For example, the probability that the call occurs in the last minute of the hour is $\frac1{60}$ a priori but the probability that the call occurs in the last minute, given that it did not occur in the previous $59$ minutes, is $1.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $ \ln\left(\frac{49}{50}\right)<\sum^{98}_{k=1}\int^{k+1}_{k}\frac{k+1}{x(x+1)}dx<\ln(99)$ If $\displaystyle I = \sum^{98}_{k=1}\int^{k+1}_{k}\frac{k+1}{x(x+1)}dx.$ Then prove that $\displaystyle \ln\left(\frac{49}{50}\right)<I <\ln(99)$ Attempt: $$I = \sum^{98}_{k=1}\int^{k+1}_{k}(k+1)\bigg[\frac{1}{x(x+1)}\bigg]dx = \sum^{98}_{k=1}\int^{k+1}_{k}(k+1)\bigg[\frac{1}{x}-\frac{1}{x+1}\bigg]dx$$ So $$I = \sum^{98}_{k=1}(k+1)\bigg[\ln(x)-\ln(x+1)\bigg]\bigg|_{k}^{k+1}$$ $$I= \sum^{98}_{k=1}(k+1)\bigg[\bigg(\ln(k+1)-\ln(k+2)\bigg)-\bigg(\ln(k)-\ln(k+1)\bigg)\bigg]$$ $$ \sum^{98}_{k=1}(k+1)\ln(k+1)-k\ln(k)-\sum^{98}_{k=1}(k+1)\ln(k+2)-k\ln(k+1)+\sum^{98}_{k=1}\ln(k+1)-\ln(k)$$ So we have $$I = \ln(2)+\ln \bigg(\frac{99}{100}\bigg)^{100}$$ could some help me how to prove $$\ln\left(\frac{49}{50}\right)<I <\ln(99)$$
Note that, for $x\in[k,k+1]$, $$ k<x<k+1\Rightarrow\frac{1}{k+1}<\frac{1}{x}<\frac{1}{k}, \frac{1}{k+2}<\frac{1}{x+1}<\frac{1}{k+1}$$ and hence \begin{eqnarray} I &=& \sum^{98}_{k=1}\int^{k+1}_{k}\frac{k+1}{x(x+1)}dx\\ &\le&\sum^{98}_{k=1}\int^{k+1}_{k}\frac{1}{x}dx\\ &=&\sum^{98}_{k=1}[\ln(k+1)-\ln(k)]\\ &=&\ln99 \end{eqnarray} and \begin{eqnarray} I&=&\sum^{98}_{k=1}\int^{k+1}_{k}\frac{k+1}{x(x+1)}dx\\ &\ge&\sum^{98}_{k=1}\int^{k+1}_{k}\frac{1}{x+1}dx\\ &=&\sum^{98}_{k=1}[\ln(k+2)-\ln(k+1)]\\ &=&\ln 50>\ln(\frac{49}{50}). \end{eqnarray}
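A numeric check of both bounds (my addition), using the telescoped antiderivative of each term of $I$:

```python
import math

# Each integral equals (k+1)*[ln(x) - ln(x+1)] evaluated from k to k+1
I = sum((k + 1) * (math.log((k + 1) / (k + 2)) - math.log(k / (k + 1)))
        for k in range(1, 99))
print(math.log(50), I, math.log(99))  # ln 50 ≈ 3.912, ln 99 ≈ 4.595; I lies strictly between
```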
{ "language": "en", "url": "https://math.stackexchange.com/questions/2291997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove that function $f$ is injective if its Jacobian matrix is positive definite. Assume that $\Omega\subseteq\mathbb{R}^m$ is an open convex set and the vector-valued function $f:\Omega\rightarrow\mathbb{R}^m$ is differentiable. If the Jacobian matrix $J_f(x)$ is positive definite for all $x\in\Omega$, prove that $f$ is an injective function on $\Omega$. I have no way of dealing with it. Is there a theorem that deals with it?
Hint: suppose $x\neq y$ but $f(x)=f(y).\ $ On $\Omega$, define for $0\le t\le 1,\ \gamma (t)=ty+(1-t)x,\ $ which is well-defined because $\Omega$ is convex. Now, compute $D\langle ((f\circ \gamma)(t)-f(x)), (y-x)\rangle$ and use the Mean Value Theorem and your hypothesis on the Jacobian, to arrive at a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is my proof for $f(0)=1$ for a specific continuous function correct? Alright, I think I have found a much simpler proof to a question than the one I was provided with, and wanted to hear how it is inevitably incorrect. Let $f$ be a continuous function that you can always get the derivative of and that is always positive. Additionally, $f'(0)=\lim\limits_{x\rightarrow 0}\frac{f(x)-1}{x}$ prove $f(0)=1$ I proved it as such: According to the definition of a continuous function: $\lim\limits_{x\rightarrow c}f(x)=f(c)$ Therefore: $f'(0)=\lim\limits_{x\rightarrow c}\frac{f(x)-1}{x}=\frac{f(c)-1}{c}\Rightarrow c\cdot f'(0)=f(c)-1$ Let $c=0:$ $f(0)-1=0\Rightarrow f(0)=1 \blacksquare$
The solution is not correct. By definition, $$f'(0)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=\lim_{x\to 0}\frac{f(x)-1}{x}$$ so, $$\lim_{x\to 0}\frac{f(x)-f(0)}{x}-\lim_{x\to 0}\frac{f(x)-1}{x}=0\to \lim_{x\to 0}\frac{1-f(0)}{x}=0\quad (1)$$ but $$\lim_{x\to 0}\frac{c}{x}=c\cdot \lim_{x\to 0}\frac{1}{x}$$ does not exist if $c\ne 0$ is a constant number. Then if $(1)$ is true you must have $f(0)=1$. P.S.: Your solution is not correct because when you write $c\cdot f'(0)=f(c)-1$ you are assuming that $c\ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Finding the Jordan form for a $4\times4$ matrix $$A:= \begin{bmatrix}4 & -4 & -11 & 11\\3 & -12 & -42 & 42\\ -2 & 12 & 37 & -34 \\ -1 & 7 & 20 & -17 \end{bmatrix}$$ I'm struggling with this matrix: it has $p_A(x) = (x-3)^4 $, yet $\ker(A - 3I)^3 $ is already the whole space $ \mathbb C^4 $. I read on another post that (for the $ \mathbb C^3 $ case of a similar matrix) I should take this to be the size of the biggest Jordan block, thus leaving me with $1$ standard eigenvector and a Jordan chain (image of the image of $\ldots$ ) of a vector $v$ not in $\ker(A - 3I)^2$. But this doesn't work for any $v$. What steps should I follow to find the Jordan form of $A$?
The issue here is that the method you're trying to apply is not quite right. My guess is that the result being applied is that "the algebraic multiplicity of $3$ in the minimal polynomial of your matrix is the size of the largest Jordan block" (you're looking at the geometric multiplicity of your eigenvalue in the minimal polynomial instead). This is true, and it's also true that the minimal polynomial is $(x-3)^{3}$. This tells me that the size of the largest Jordan block is $3$. Once you've applied this, we're done: we only have one eigenvalue, the largest Jordan block is of size $3$, the dimension of our space is $4$, we only have room for one more block (of size $1$), so we know what the Jordan Canonical form is (right?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that for every integer $n$, $n^3 - n$ is divisible by 3 using modular arithmetic Problem: Show that for every integer $n$, $n^3 - n$ is divisible by 3 using modular arithmetic I was also given a hint: $$n \equiv 0 \pmod3\\n \equiv 1 \pmod3\\n \equiv 2 \pmod3$$ But I'm still not sure how that relates to the question.
Using the hint means trying the three cases: Case 1: $n \equiv 0 \mod 3$ Remember if $a \equiv b \mod n$ then $a^m \equiv b^m \mod n$ [$*$] So $n^3 \equiv 0^3 \equiv 0 \mod 3$ Remember if $a \equiv c \mod n$ and $b \equiv d \mod n$ then $a+b \equiv c + d \mod n$ [$**$] So $n^3 - n\equiv 0 - 0 \equiv 0 \mod 3$. Case 2: $n \equiv 1 \mod 3$ Then $n^3 \equiv 1^3 \mod 3$ and $n^3 - n \equiv 1^3 - 1 \equiv 0 \mod 3$. Case 3: $n \equiv 2 \mod 3$ Then $n^3 \equiv 2^3 \equiv 8 \equiv 2 \mod 3$. So $n^3 - n \equiv 2- 2 \equiv 0 \mod 3$. So in either of these three cases (and there are no other possible cases [$***$]) we have $n^3 - n \equiv 0 \mod 3$. That means $3\mid n^3 - n$ (because $n^3 - n \equiv 0 \mod 3$ means there is an integer $k$ so that $n^3 - n = 3k + 0 = 3k$.) I find that if I am new to modulo notation and I haven't developed the "faith" I like to write it out in terms I do have "faith": Let $n = 3k + r$ where $r = 0, 1$ or $2$ Then $n^3 - n = (3k+r)^3 -(3k+r) = r^3 - r + 3M$ where $M = [27k^3 + 27k^2r + 9kr^2 - 3k]/3$ (I don't actually have to figure out what $M$ is... I just have to know that $M$ is some combination of powers of $3k$ and those must be some multiple of $3$. In other words, the $r$s are the only things that aren't a multiple of three, so they are the only terms that matter.) and $r^3 -r$ is either $0 - 0$ or $1 - 1 = 0$ or $8 - 2 = 6$. So in every event $n^3 - n$ is divisible by $3$. That really is the exact same thing that the $n^3 - n \equiv 0 \mod 3$ notation means. [$*$] $a\equiv b \mod n$ means there is an integer $k$ so that $a = kn + b$ so $a^m = (kn + b)^m = b^m + \sum c_ik^in^ib^{m-i} = b^m + n\sum c_ik^{i}n^{i-1} b^{m-i}$. So $a^m \equiv b^m \mod n$. [$**$] $a\equiv c \mod n$ means $a= kn + c$ and $b\equiv d \mod n$ means $b = jn + d$ for some integers $j,k$. So $a + b = c+ d + n(j+k)$. So $a+b \equiv c + d \mod n$. [$***$]. For any integer $n$ there are unique integers $q, r$ such that $n = 3q + r ; 0 \le r < 3$. Basically this means "If you divide $n$ by $3$ you will get a quotient $q$ and a remainder $r$; $r$ is either $0,1$ or $2$". In other words for any integer $n$ then $n \equiv r \mod 3$ where $r$ is either $0,1,$ or $2$. These are the only three cases. P.S. That is how I interpreted the hint. As others have pointed out, a (arguably) more elegant proof is to note $n^3 - n = (n-1)n(n+1)$ For any value of $n$ one of those three, $n$, $n-1$, or $n+1$ must be divisible by $3$. This actually proves $n^3 - n$ is divisible by $6$ as one of $n$, $n-1$ or $n+1$ must be divisible by $2$. There is also induction. As $(n+ 1)^3 - (n+1) = n^3 - n + 3n^2 + 3n$, this is true for $n+1$ if it is true for $n$. As it is true for $0^3 - 0 = 0$ it is true for all $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Minimal polynomial of a power of an element over $\mathbb F_{2}$ Let $\beta \in \mathbb F_{16}$ whose minimal polynomial over $\mathbb F_{2}$ is $x^4+x+1$. What is the minimal polynomial of $\beta^7$ over $\mathbb F_{2}$? I know from this answer that $\beta$ generates $\mathbb F_{16}$ but I don't know if that helps at all. Can this be done without first finding out what element $\beta$ is?
Daniel Schepler's suggestion in the comments is a good way to approach this problem. Let me flesh out this method, as well as a few computational tricks/sanity checks. First, as you note, $\beta$ is a generator for $\mathbb{F}_{16}^{\times}$, which is a cyclic group of order $15$. Since $\gcd(7, 15) = 1$, it follows that $\beta^{7}$ also generates $\mathbb{F}_{16}^{\times}$, and so is a generator for $\mathbb{F}_{16}$ as a field extension of $\mathbb{F}_{2}$. Thus, we expect the minimal polynomial of $\beta^{7}$ over $\mathbb{F}_{2}$ to have degree $4$. We need to compute $\beta^{7}, \beta^{14}, \beta^{21}, \beta^{28}$ in terms of the $\mathbb{F}_{2}$-basis $1, \beta, \beta^{2}, \beta^{3}$ by using the relation $\beta^{4} + \beta + 1 = 0$, i.e. $\beta^{4} = \beta+1$. This is easy enough for $\beta^{7}$: $$\beta^{7} = \beta^{4}(\beta^{3}) = (\beta+1)(\beta^{3}) = \beta^{4}+\beta^{3} = \beta^{3} + \beta+1$$ Now, it is not too hard to do this for $\beta^{14}$, etc., but one thing which makes computations quicker is to use the fact that $\mathbb{F}_{16}$ has characteristic $2$, so $(a+b)^{2} = a^{2}+b^{2}$ for any $a, b \in \mathbb{F}_{16}$. Thus, we note $$\beta^{14} = (\beta^{7})^{2} = (\beta^{3}+\beta+1)^{2} = \beta^{6}+\beta^{2}+1 = \beta^{2}(\beta+1)+\beta^{2}+1 = \beta^{3}+1$$ and $$\beta^{28} = (\beta^{14})^{2} = (\beta^{3}+1)^{2} = \beta^{6}+1 = \beta^{2}(\beta+1)+1 = \beta^{3}+\beta^{2}+1$$ From here, it is not too hard to compute $\beta^{21}$ by computing $\beta^{7}\beta^{14}$. I leave this (and the final computation of the minimal polynomial of $\beta^{7}$) to you. You can check these computations (spoilers): $\beta^{21} = \beta^{3}+\beta^{2}$. Thus, $\beta^{7}$ is a root of $X^{4}+X^{3}+1 \in \mathbb{F}_{2}[X]$. As noted above, the minimal polynomial of $\beta^{7}$ over $\mathbb{F}_{2}$ must have degree $4$, so this must be the minimal polynomial of $\beta^{7}$ over $\mathbb{F}_{2}$; however, it is also easy to check directly that this polynomial is irreducible over $\mathbb{F}_{2}$.
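These hand computations are easy to confirm by machine. A small Python sketch (my addition) representing elements of $\mathbb{F}_{16}$ as bit masks (bit $i$ = coefficient of $\beta^i$) and reducing modulo $x^4+x+1$:

```python
MOD = 0b10011  # x^4 + x + 1 over GF(2)

def gf_mul(a, b):
    """Multiply two F_16 elements given as 4-bit masks."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:   # degree reached 4: reduce using x^4 = x + 1
            a ^= MOD
    return result

def gf_pow(a, n):
    result = 1
    for _ in range(n):
        result = gf_mul(result, a)
    return result

b7 = gf_pow(0b10, 7)          # beta^7
print(bin(b7))                # 0b1011, i.e. beta^3 + beta + 1
print(gf_pow(b7, 4) ^ gf_pow(b7, 3) ^ 1)  # 0: beta^7 is a root of X^4 + X^3 + 1
```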
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Moment generating function within another function (a) Let $X$ be an exponential random variable with parameter $\lambda$. Find the moment generating function of $X$. (b) Suppose a continuous random variable $Y$ has moment generating function $M_Y(s)= \frac{\lambda^2}{(\lambda-s)^2}$ for $s<\lambda$ and $M_Y(s)=+\infty$ for $s\ge \lambda$. Find the probability density function of $Y$. So (a) is simple. Using $M_X(s)=E(e^{sx})=\int_0^\infty e^{sx}\lambda e^{-\lambda x }dx = \frac{\lambda}{\lambda-s }$ if $s<\lambda$ and $+\infty$ for $s\ge \lambda$. For (b) I do see a relation between $X$ and $Y$, that is, $M_Y(s)=(M_X(s))^2$. Some of my ideas. If I differentiate $M_Y(s)$ I can obtain the expected value, but I am not sure as to how this can be helpful. I know that this requires manipulation of $M_X(s)$ but I also know that I cannot square the pdf of $X$. I would appreciate some help.
If you add two independent random variables, you convolve their PDF's (i.e. if the densities are $f, g$, their sum has density $h(t) = \int f(s)g(t-s) ds$) or multiply their moment generating functions.
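Carrying the hint through for part (b) (my completion, not spelled out in the original answer): since $M_Y = M_X^2$, $Y$ is distributed as the sum of two independent exponentials, so with $f(t)=\lambda e^{-\lambda t}$ for $t\ge 0$, $$f_Y(t)=\int_0^t \lambda e^{-\lambda s}\,\lambda e^{-\lambda (t-s)}\,ds=\lambda^2 e^{-\lambda t}\int_0^t ds=\lambda^2 t e^{-\lambda t},\qquad t\ge 0,$$ which is the Gamma$(2,\lambda)$ (Erlang) density.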
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that if $F$ is a splitting Field of $S$ over $K$ and $E$ is an intermediate field then $F$ is a splitting field of $S$ over $E$. This happens to be Hungerford problem 5.3.2. Here $S$ is a set of polynomials in $K[x]$. $F$ is a splitting field of $S$ over $E$ if * *Every $f\in S$ splits in $F$ *$F= E(X)$ where $X=\{ \text{roots of all polynomials in }$ S} Property 1 follows quickly from the fact that $K[x] \subset E[x]$ but I am stuck on property 2. By assumption, we know that $F=K(X)\subseteq E(X)$ so $F\subseteq E(X)$. For the reverse inclusion I am stuck and in general I don't see why it should be true.
$E \subseteq F$ because it's an intermediate field, and $X \subseteq F$ because $X$ is the set of roots of the polynomials in $S$ and $X \subseteq K(X) = F$ by hypothesis. Therefore $F= K(X) \subseteq E(X) \subseteq F$: since $E$ and $X$ are both contained in $F$, the field they generate, $E(X)$, is contained in $F$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\sin(x) \leq x$ on the interval $[0,1]$ I need to prove that: $\sin(x) \leq x$ on the interval $[0,1]$ using Calculus. First I calculated the area between the graph of the function $x$ and $\sin(x)$ on the interval $[0,1]$, if it is positive then $\sin(x) \leq x$: $$\int_0^1 x - \sin(x) dx = \frac{x^2}{2} \vert_0^1 +\cos(x) \vert_0^1=\frac{1}{2} - 0 + \cos(1) -1=\cos(1)-\frac{1}{2}$$ With the help of the calculator I know that $\cos(1) > \frac{1}{2}$, but I don't know how to prove it without help of the calculator. Any idea of how to prove that $\cos(1) > \frac{1}{2}$ or another way to prove $\sin(x) \leq x$?
Fix $x \in [0,1]$,and write $\sin x = \displaystyle \int_{0}^x \cos t dt, x = \displaystyle \int_{0}^x 1dt\implies x - \sin x = \displaystyle \int_{0}^x (1-\cos t)dt\ge \displaystyle \int_{0}^x 0dt = 0\implies x \ge \sin x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2292982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Balls and vase $-$ A paradox? Question I have infinity number of balls and a large enough vase. I define an action to be "put ten balls into the vase, and take one out". Now, I start from 11:59 and do one action, and after 30 seconds I do one action again, and 15 seconds later again, 7.5 seconds, 3.75 seconds... What is the number of balls in the vase at 12:00? My attempt It seems like that it should be infinity (?), but if we consider the case: Number each balls in an order of positive integers. During the first action, I put balls no. 1-10 in, and ball no.1 out, and during the $n^{\text{th}}$ action I take ball no. $n$ out. In this way, suppose it is at noon, every ball must have been taken out of the vase. So (?) the number of balls in the vase is Zero??? My first question: if I take the ball randomly, what will be the result at noon? (I think it may need some probability method, which I'm not familiar enough with.) Second one: is it actually a paradox? Thanks in advance anyway.
What you just discovered is that the cardinality of a set (the number of elements) is not a continuous function, that is, for a convergent sequence $S_n$ of sets you may have $$\lim_{n\to\infty}\left|S_n\right|\ne \left|\lim_{n\to\infty} S_n\right|$$ where $\left|S_n\right|$ is the cardinality of $S_n$ (e.g. $\left|\{2,3,4,5\}\right|=4$). In your case, the left hand side diverges (giving you the infinite number of balls), while the right hand side gives $0$ (the cardinality of the empty set). This is not a paradox, but a warning that you have to be careful with such limits.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
Analytic number theory question. Let $n$ be a positive integer, and define $f(n)$ as $n +\lfloor\sqrt{n}\rfloor$, where $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. Prove that the sequence $n, f(n), f(f(n)), f(f(f(n))), \ldots$ contains a perfect square.
If $n$ is a square, you're done. If $n=m^2+k$ with $0\lt k\lt2m+1$, then in either one or two steps you'll be at a number of the form $(m+1)^2+k'$ with $0\le k'\lt k$. Induction (on $k$) now tells you that eventually you'll land on a square.
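The proof guarantees a terminating iteration, which is easy to test (my addition):

```python
import math

def f(n):
    return n + math.isqrt(n)  # n + floor(sqrt(n))

# Every orbit n, f(n), f(f(n)), ... should hit a perfect square
for n in range(1, 2000):
    m = n
    while math.isqrt(m) ** 2 != m:
        m = f(m)
print("every starting value reached a perfect square")
```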
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $\|UVU^{-1}V^{-1}-I\|\leq 2\|U-I\|\|V-I\|$ $U,V$ are unitary $n\times n$ matrices, and the norm is the operator norm (so we can use $\|UV\|\leq\|U\|\|V\|$). I've noticed that \begin{align} \|UVU^{-1}V^{-1}-I\|&= \|(UV-VU)U^{-1}V^{-1}\|\\ &\leq \|UV-VU\|\|U^{-1}V^{-1}\| \end{align} I can bound the first term by $\|UV\|+\|VU\|$, but I don't think this is useful. Hints (rather than complete answers) would be appreciated. The question comes from here (exercise 1)
(I cannot prove the inequality as stated in the exercise, but here is some information that maybe will help you). Note that $\|UVU^{-1}V^{-1}\|=1$ for all unitaries $U,V$. We would like to show that $$\tag{1} \|UVU^{-1}V^{-1}-I\|\leq 2\|U-I\|\,\|V-I\|. $$ The left-hand-side, as hinted in the exercise, is $\|UV-VU\|$. Now $$\tag{2} \|UV-VU\|\leq\|UV-V\|+\|V-VU\|=2\|U-I\|. $$ As the roles of $U$ and $V$ are symmetric, we can get, by multiplying $(2)$ and the corresponding inequality for $V$, $$ \|UV-VU\|\leq 2\|U-I\|^{1/2}\|V-I\|^{1/2}, $$ which is sharper than $(1)$ when $\|U-I\|>1$ and $\|V-I\|>1$.
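A numerical test of the weaker inequality actually proven above (my addition); random unitaries are obtained from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(n):
    # QR of a complex Gaussian matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

op = lambda a: np.linalg.norm(a, 2)  # operator (spectral) norm
I = np.eye(4)
for _ in range(1000):
    U, V = random_unitary(4), random_unitary(4)
    lhs = op(U @ V @ U.conj().T @ V.conj().T - I)
    rhs = 2 * np.sqrt(op(U - I) * op(V - I))
    assert lhs <= rhs + 1e-9
print("bound 2*sqrt(||U-I|| ||V-I||) held in all trials")
```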
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find the limit of $(1-\cos x)/(x\sin x)$ as $x \to 0$ Can you please help me solve: $$\lim_{x \rightarrow 0} \frac{1- \cos x}{x \sin x}$$ Every time I try to calculate it I find another solution and before I get used to bad habits, I'd like to see how it can be solved right, so I'll know how to approach trigonometric limits. I tried to convert $\cos x$ to $\sin x$ by $\pi -x$, but I think it's wrong. Should I use another identity?
We can also use L'Hospital's Rule $$\lim_{x\to0}\frac{1-\cos x}{x\sin x}=\lim_{x\to0}\frac{\sin x}{x\cos x+\sin x}=\lim_{x\to0}\frac{\cos x}{-x\sin x+2\cos x}=\frac{1}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 3 }
What is the value of the intersection of X and the set containing X? How to calculate X $\cap$ $\{X\}$ for finite sets to develop an intuition for intersections? If $X$ = $\{$1,2,3$\}$, then what is $X$ $\cap$ $\{X\}$?
As far as developing intuition for intersection, the idea of $A \cap B$ is the set of elements that $A$ and $B$ have in common. So if we're looking at $X \cap \left\{ X \right\}$ where $X = \left\{ 1,2,3 \right\}$ then it is a matter of $$X \cap \left\{ X \right\} = \left\{ 1,2,3 \right\} \cap \left\{ \left\{ 1,2,3 \right\} \right\}.$$ However, there is a rather subtle difference here between the left and right side of the intersection. The left side, $\left\{ 1,2,3\right\} = X$, is the set at hand. The right side, $\left\{ X \right\}$, views the set $X$ as an element, which is different from any of $1, 2, 3$, so the two sets have nothing in common. Hence, $$X \cap \left\{ X \right\} = \emptyset.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Equality/Equivalence of functions When we say something like: $$ \frac{dy}{dx} = x$$ we're describing the way a function $y$ changes with respect to $x$. To solve the differential equation we integrate both sides. Is there a proper way to do this, or is it simply the case that if have a reliable method to get the correct answer then that's fine. For example: Method 1: $$ \int\frac{dy}{dx} dx = \int x \, dx$$ $$ \int dy= \int x \, dx$$ $$ y + c_1 = \frac{x^2}{2} + c_2$$ Method 2: $$ \frac{dy}{dx} = x$$ $$ dy = x \, dx$$ $$ \int dy= \int x \, dx$$ $$ y + c_1 = \frac{x^2}{2} + c_2$$ This is the thinking that lead me to the question to the title. * *Why is it okay to integrate both sides of an equation? Are we basing that on the fact that indefinite integration is simply the reverse of differentiation? Will integrating both sides always make sense with continuous, integrable, elementary functions? *Can we integrate without respect to a variable? In method 2 I employed the integration operator on both sides without adding a differential (I know it was already there). *If I write $ y'' + 5xy = 2x^2$, is it acceptable to integrate both sides of the equation with respect to y, or x, and get a valid equality as a result? *When I write: $ xy + y^x = 2x + 3$, maybe this is bad example, but am I stating that the left side is the same as the right side, or am I proposing that there are certain values of x and y that satisfy that equation. I'm having sort of a fundamental crisis here. I don't think I was ever properly taught what equality means. If you were, where did you learn it?
Method $2$ would be considered more appropriate; note $\int x \, dx = \frac{1}{2}x^2 + C$. However, you only need one arbitrary constant in the solution: $y = \frac{1}{2}x^2 + C$ would suffice, since $c_2 - c_1$ would result in a new constant, which we call $C$. 1) It is okay to integrate both sides of an equation when we have separated our variables. In other words, when we have $f(x,y)dx + g(x,y)dy = h(x,y)$, we manipulate it to look like $\alpha(x)dx = \beta(y)dy$. However, sometimes this is not feasible, so the topic of Ordinary Differential Equations (ODEs) is introduced to cover cases where it is not separable. And yes, integrating both sides will always make sense when the function is Riemann integrable. 2) No, we cannot integrate without respect to a variable. This would not make sense. You integrated both sides because you already had it in the form involving a differential. You integrated in order to remove the differential. 3) When given $y'' + 5xy = 2x^2$, assuming $y''$ is the second derivative with respect to $x$, you would have to tackle solving this using techniques from ODE; it is not as simple as integrating twice! 4) When you have $xy + y^x = 2x + 3$, you are treating $x$ and $y$ as variables here. You are implying that there may exist solutions, i.e. $x$ and $y$ that satisfy this equation. Clearly, it is not true for every $x$ and $y$, so the right hand side does not equal the left hand side in general. However, there may be certain $x$ and $y$ that make the equality hold true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Vector valued analytic functions near essential singularity Suppose $f:\mathbb{C}\setminus\{0\}\to X$ is a vector valued analytic function, that has an essential singularity at $0$ ($X$ is some Banach space). It can be easily shown that, in this case, $f$ must be unbounded near $0$. I am intrested whether a stronger condition holds, namely, is it true that for any sequence $z_n\to 0$, we have that $||f(z_n)||\to\infty$? This is not true in the $\mathbb{C}$-valued setting. Indeed, by Picard's theorem, such a $f:\mathbb{C}\setminus\{0\}\to \mathbb{C}$, while still unbounded near $0$, will attain any value, except perhaps one, in any neighborhood of $0$. Thus this function will be constant on many sequences $z_n$ converging to $0$. However, there is no Picard in the vector-valued setting, so perhaps there is a chance that the stronger requirement will hold? Edit: The situation I am interested is more concrete, when $f(z)=(zI-T)^{-1}x$, where $T$ is quasinilpotent and injective, and $x$ is some fixed non-zero vector.
This need not be the case. We could have a function of the form $f(z) = g(z)\cdot x$ where $g \colon \mathbb{C}\setminus \{0\} \to \mathbb{C}$ is holomorphic with an essential singularity at $0$, and $x \in X \setminus \{0\}$. But it can be the case, e.g. if $f(z) = g(z)\cdot x + h(z)\cdot y$, where $x$ and $y$ are linearly independent, $g$ has an essential singularity at $0$ and $h$ has a pole at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Statistical Mechanics - Canonical Partition Function - Anharmonic Oscillator * *The problem statement, all variables and given/known data With the Hamiltonian here: Compute the canonical ensemble partition function given by $\frac{1}{h} \int dq\, dp\, e^{-\beta H(p,q)}$ for 1-d, where $h$ is Planck's constant *Relevant equations *The attempt at a solution I am okay for the $p^2/2m$ term and the $aq^2$ term via a simple change of variables and using the Gaussian integral result $\int e^{-x^2} dx = \sqrt{\pi}$. I am stuck on the $\int dq\, e^{\beta b q^{3}}$ and $\int dq\, e^{\beta c q^{4}}$ terms. If these were of the form $\int dq\, e^{-\beta b q^{3}}$ I could evaluate via $\int dx\, e^{-x^n} = \frac{1}{n} \Gamma (1/n)$ where $\Gamma(1/n)$ is the gamma function; however because it is a plus sign I have no idea how to integrate forms of $\int dq\, e^{x^n}$. Or should I be considering the integral over $q$ all together and there is another way to simplify: $\int dq\, e^{-\beta(aq^2-bq^3-cq^4)}$ PART B) To compute the grand canonical ensemble of $N$ oscillators where $E_n=\left((n+\frac{1}{2})-x(n+\frac{1}{2})^2\right)\hbar\omega$ to leading order in $x$ I have that $\zeta(\beta,\mu)=\sum_{N=0}^{\infty}z^N Z(\beta,N)$ where $\zeta(\beta,\mu)$ is the grand canonical partition function, $Z(\beta,N)$ is the canonical partition function and $z=e^{\beta\mu}$ is the fugacity. I have attempted to answer this question via summing first over $N$ in hope of getting a simplified expression for $Z_1$ and then I will raise this to the power of $N$, dividing by $N!$ (Gibbs factor) and then use (1) above to get the grand canonical partition function. My attempt is as follows: $Z_1 = \sum_n e^{-\beta((n+\frac{1}{2})\hbar\omega-x(n+\frac{1}{2})^2\hbar\omega)}=e^{-\frac{\beta \hbar \omega}{2}}\sum_n e^{-\beta \hbar \omega n} e^{\beta \hbar \omega x (n+\frac{1}{2})^2}$ I will expand out the exponential $x$ term to get $ e^{-\frac{\beta \hbar \omega}{2}} \sum_n e^{-\beta \hbar \omega n} (1+\beta\hbar\omega x(n+\frac{1}{2})^2) + O(x^2)$ $e^{-\frac{\beta \hbar \omega}{2}}\sum_n e^{-\beta \hbar \omega n} + e^{-\frac{\beta \hbar \omega}{2}}e^{-\beta \hbar \omega n} (\beta \hbar \omega x) \sum_{n=0}^{n=\infty} (n^2+ \frac{1}{4} + n)$ which diverges... Many thanks in advance
This is my first answer, so I hope I'm doing it right. As pointed out in an earlier comment, I think you need to start off by getting the limits straight, which will answer a couple of your questions. The integral over $p$ is independent and easily done as you've stated yourself. The integral over $q$ goes from $-\infty$ to $+\infty$, as it is the position in one dimension. Note in passing that it is $$\int_0^{\infty} e^{-x^n}\,dx = \frac{1}{n}\Gamma\left(\frac{1}{n}\right)$$ but your lower limit is $-\infty$, and so this cannot be used. (Incidentally, $\int_{-\infty}^{+\infty}e^{\pm x^3}dx$ does not converge to the best of my knowledge). But all of this is beside the point: unless I've misunderstood you (please correct me if I'm wrong!), you're claiming that $$\int_{-\infty}^{+\infty}dq \,\,e^{-\beta a q^2 + \beta b q^3 + \beta c q^4} = \int_{-\infty}^{+\infty}dq \,\,e^{-\beta a q^2} \int_{-\infty}^{+\infty}dq \,\,e^{\beta b q^3} \int_{-\infty}^{+\infty}dq \,\,e^{ \beta c q^4}$$ which is clearly not true. So performing the integrals separately is not the way to go and you must consider the integral over all the functions of $q$ together. If the extra terms had been linear in $q$ you could have used the "completing the square" trick, but I don't think there is anything similar for higher powers. For an exact (and possibly useless) answer, you might be able to use this formula that is completely beyond me, but I don't think it's helpful. With a little research I found the original question online. If it's the same question, it says You should work in the approximation where the anharmonicity is small In other words, when $$\frac{b}{a} \ll 1 \quad \quad \quad \frac{c}{a^2}\ll 1$$ Which leads me to believe that they require you to perform some sort of series in the two "problematic" terms. If I were to solve this question, I would do the following. I'd begin by making a simple variable substitution: $u = \sqrt{a} q$, which makes my integral $$\frac{1}{\sqrt{a}} \int_{-\infty}^\infty du\,\,\exp{\left(-\beta u^2 + \frac{\beta b}{a\sqrt{a}}u^3 + \frac{\beta c}{a^2} u^4\right)} = \frac{1}{\sqrt{a}} \int_{-\infty}^\infty du\,\, e^{-\beta u^2}e^{\frac{\beta b}{a\sqrt{a}}u^3} e^{\frac{\beta c}{a^2} u^4}$$ I would then approximate the anharmonic terms using $e^{ax}\approx 1 + ax + \frac{a^2 x^2}{2}...$ $$I = \frac{1}{\sqrt{a}} \int_{-\infty}^\infty du\,\, e^{-\beta u^2} \left(1 + \frac{\beta b}{a\sqrt{a}}u^3 + \frac{\beta^2 b^2}{2 a^3}u^6\right) \left(1+ \frac{\beta c}{a^2} u^4\right)$$ You'll notice I took three terms for the first approximation and only two for the second. Why I did this will become clear in a moment. This is now just a sum of integrals of the form $$\int_{-\infty}^\infty du\,\, e^{-\beta u^2}u^n$$ Clearly, when $n$ is an odd integer, these integrals are zero, since the function is odd and its positive and negative contributions cancel each other. This is why when expanding the series earlier I stopped at the first even power, to get the leading dependencies in $b$ and $c$. We are thus left with three integrals (the fourth term involving $\frac{b^2 c}{a^5}$ can be neglected since it is of a much higher order.)
$$I = \frac{1}{\sqrt{a}}\left( \int_{-\infty}^\infty du\,\, e^{-\beta u^2} + \frac{\beta c}{a^2} \int_{-\infty}^\infty du\,\, e^{-\beta u^2}u^4 + \frac{\beta^2 b^2}{2 a^3}\int_{-\infty}^\infty du\,\, e^{-\beta u^2}u^6\right)$$ The remaining (even) integrals can be easily computed realising that whenever $2z-1$ is even, $$\int_{-\infty}^{\infty} u^{2z-1} e^{-\beta u^2}du = \beta^{-z} \Gamma[z]$$ Thus, $$I \approx \sqrt{\frac{\pi}{\beta a}}\left( 1 + \frac{3 c}{4 a^2 \beta} + \frac{15 b^2}{16 a^3 \beta}\right)$$ EDIT: Upon some introspection I've decided that it does indeed make sense to have a term in $\frac{b^2}{a^3}$. I've made the necessary changes. Furthermore, the complete canonical partition function (including the integral over $p$ and the $1/h$ prefactor from the problem statement) is thus $$Z = \frac{\pi}{h\beta}\sqrt{\frac{2 m}{a}}\left( 1 + \frac{3 c}{4 a^2 \beta} + \frac{15 b^2}{16 a^3 \beta}\right)$$
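A quick numerical sanity check of the approximation (my own addition, with illustrative parameter values). Note that for $c>0$ the exact integral actually diverges at large $|q|$, so the check truncates the domain where the Gaussian has already decayed, in the spirit of the asymptotic expansion:

```python
# Hedged sketch: compare the truncated integral with the perturbative formula.
import numpy as np
from scipy.integrate import quad

beta, a, b, c = 1.0, 1.0, 0.02, 0.005    # assumed values with b/a, c/a^2 << 1

integrand = lambda q: np.exp(-beta * (a*q**2 - b*q**3 - c*q**4))
numeric, _ = quad(integrand, -8, 8)      # truncated: the full-line integral diverges

approx = np.sqrt(np.pi/(beta*a)) * (1 + 3*c/(4*a**2*beta) + 15*b**2/(16*a**3*beta))
print(numeric, approx)                   # agree up to the neglected higher-order terms
```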
{ "language": "en", "url": "https://math.stackexchange.com/questions/2293920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to reduce congruence systems with moduli not coprime? I'm given a system of congruences and want to apply the Chinese remainder theorem (CRT) on it, but the greatest common divisor (GCD) of the moduli is not $1$. Well, I need to reduce the system, such that the GCD is $1$, but I don't know how the "reducing" works. I mean I need an explanation. I found an example, but don't get how they magically transform it from $$x \equiv 1 \mod 108$$ $$x \equiv 13 \mod 40$$ $$x \equiv 28 \mod 225$$ to $$x \equiv 1 \mod 27$$ $$x \equiv 5 \mod 8$$ $$x \equiv 3 \mod 25$$ I see that $$108 = 2^2 \times 3^3$$ $$ 40 =5 \times 2^3$$ $$225=3^2 \times 5^2$$ but not sure when and how they used it... thanks for the help!
Reducing is trivial: If $a \equiv b \mod n$ then $a \equiv b \mod k$ for all $k|n$. Just think about it...... ======= $x \equiv 1 \mod 108 \implies x= 108k + 1 = 27(4k) + 1 \implies x \equiv 1 \mod 27$. $x \equiv 13 \mod 40 \implies x = 40k + 13 = 8(5k) + 13 \implies x \equiv 13 \equiv 5 \mod 8$. etc. ======= It's going the other way that requires (minor) stipulation. If $a \equiv b \mod k$ then $a \equiv b + ck \mod n$ for any multiple $n$ of $k$, where $c$ is some integer (exactly which integer is not necessarily known). ==== So $x \equiv 1 \mod 3^3*2^2$ $x \equiv 13 \mod 2^3*5$ $x \equiv 28 \mod 5^2*3^2$ Pick the largest of the mutually prime factors: $3^3;2^3;5^2$ So $x \equiv 1 \mod 3^3$ $x \equiv 13\mod 2^3$ $x \equiv 28 \mod 5^2$ (i.e. $x \equiv 3 \mod 25$).
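A small implementation sketch (mine, not part of the original answer) that recombines the reduced coprime congruences with the Chinese remainder theorem and checks the result against the original non-coprime system:

```python
# Hedged sketch: pairwise CRT for coprime moduli via the modular inverse.
def crt_pair(r1, m1, r2, m2):
    # solve x = r1 (mod m1), x = r2 (mod m2), assuming gcd(m1, m2) = 1
    inv = pow(m1, -1, m2)                      # modular inverse, Python 3.8+
    x = (r1 + m1 * ((r2 - r1) * inv % m2)) % (m1 * m2)
    return x, m1 * m2

# reduced system from the answer: x = 1 (mod 27), x = 5 (mod 8), x = 3 (mod 25)
x, m = crt_pair(1, 27, 5, 8)
x, m = crt_pair(x, m, 3, 25)
print(x, m)                                    # 2053 modulo 27*8*25 = 5400

# sanity check against the original (non-coprime) system
assert x % 108 == 1 and x % 40 == 13 and x % 225 == 28
```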
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Find the simplified form of $\frac {1}{\cos x + \sin x}$ Find the simplified form of $\dfrac {1}{\cos x + \sin x}$. a). $\dfrac {\sin (\dfrac {\pi}{4} +x)}{\sqrt {2}}$ b). $\dfrac {\csc (\dfrac {\pi}{4} + x)}{\sqrt {2}}$ c). $\dfrac {\sin (\dfrac {\pi}{4} + x)}{2}$ d). $\dfrac {\csc (\dfrac {\pi}{4} + x)}{2}$ My Attempt: $$=\dfrac {1}{\cos x + \sin x}$$ $$=\dfrac {1}{\cos x + \sin x} \times \dfrac {\cos x - \sin x}{\cos x - \sin x}$$ $$=\dfrac {\cos x - \sin x}{\cos^2 x - \sin^2 x}$$ $$=\dfrac {\cos x - \sin x}{\cos 2x}$$
This is the graph of the function $\sin x+\cos x$ (figure omitted). Notice that it itself looks like a wave. So, we should be able to deduce (since only the amplitude and phase seem to have changed) that $$\sin x+\cos x=A\sin( x+\phi)$$ Expanding using the identity for $\sin(A+B)$, $$\sin x+\cos x=A\sin( x)\cos\phi+A\cos(x)\sin\phi$$ Comparing the coefficients of $\sin$ and $\cos$, $$1=A\cos\phi$$ $$1=A\sin\phi$$ Assuming non-zero $\cos\phi$, dividing the second equation by the first gives: $$1=\tan\phi\implies \phi=\frac{\pi}{4}$$ We only need one solution so we won't worry about other solutions. Now squaring the equations and adding, $$2=A^2\implies A=\sqrt{2}$$ Hence $\sin x+\cos x=\sqrt{2}\sin\left(x+\frac{\pi}{4}\right)$, so $$\dfrac{1}{\cos x + \sin x}=\dfrac{\csc\left(\dfrac{\pi}{4}+x\right)}{\sqrt{2}},$$ which is option (b).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinatorics struggles I am stuck in a combinatorics problem that there must be a solution to, but every experiment is leaving me stuck. I am needing to build a bracket for some games at a graduation event: * *There will be $8$ teams doing eight events. They will compete against each other in each event. The eight events will be in two different time blocks. *So events $1$-$4$ will happen simultaneously and will be repeated for a total of four times, then events $5$-$8$ will happen simultaneously and will be repeated for a total of four times. $\textsf{My goal}$ is to have each team face all the other teams. I realize each team will have to repeat one event and face one team twice. It is quite similar to a round robin tournament but I can't figure out how to keep them from repeating events. Help would be greatly appreciated.
For any one team, there is a maximum of $7$ distinct pairings with other teams for the various events. Since you have $8$ events, you must have at least one pairing repeat. Another way of thinking about this would be the fact that there are only 28 distinct team pairings and you are trying to uniquely assign 32 pairings, which is impossible. The easiest solution would be to eliminate one of the events. For example, Teams $1-8$ competing in Events $A-G$ could be matched up as follows: $$ \begin{array}{c|cccccccc} &1&2&3&4&5&6&7&8\\ \hline 1& &A&B&C&D&E&F&G\\ 2&A& &C&D&E&F&G&B\\ 3&B&C& &E&F&G&A&D\\ 4&C&D&E& &G&A&B&F\\ 5&D&E&F&G& &B&C&A\\ 6&E&F&G&A&B& &D&C\\ 7&F&G&A&B&C&D& &E\\ 8&G&B&D&F&A&C&E& \end{array} $$ If you insist on $8$ events, assign event $H$ the same pairings as event $A$. Now you'll want to run events $A,B,C,D$ as your first group of events and $E,F,G,H$ as your second group of events.
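A sketch (my own helper, not from the answer) of the standard "circle method" that generates a 7-round round-robin for 8 teams; each round can then be assigned to one event. It produces a valid schedule, though not necessarily the exact table above:

```python
# Hedged sketch: circle-method round robin (fix one team, rotate the rest).
def round_robin(teams):
    n = len(teams)
    fixed, rest = teams[0], teams[1:]
    rounds = []
    for i in range(n - 1):
        rot = rest[i:] + rest[:i]                         # rotate the non-fixed teams
        line = [fixed] + rot
        rounds.append([(line[j], line[n - 1 - j]) for j in range(n // 2)])
    return rounds

for event, pairs in zip("ABCDEFG", round_robin(list(range(1, 9)))):
    print(event, pairs)   # every pair of teams meets exactly once across the 7 rounds
```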
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute $gcd\left(1714, 1814\right)$ using Euclidean Algorithm So I know the answer for this is $2$, but based on my own work, I can't get to that solution. I haven't done a gcd before where $b>a$. I thought I could just flip the numbers and use the same method but that didn't seem to work. Here's what I have so far, what I am doing wrong? $$\begin{align} \mathrm{gcd}(1714, 1814) &= \mathrm{gcd}(1814, 1714) \\ \mathrm{gcd}(1814, 1714) &= (1714, 100)\\ &= (100, 14)\\ &= (14, 9)\\ &= (9, 5)\\ &= (5, 4)\\ &= (4, 1)\\ &= (1, 0)\\ &= 1\\ \end{align} $$ I basically tried using the Euclidean algorithm method where you keep doing long division into each number to get the remainder and continue with that process.
\begin{align} GCD(1814,1714)&=(1714,100)\\ &=(100,14)\\ &=(14,2)\\ \end{align} So, $2$ is the $GCD$
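A tiny sketch (mine, not part of the answer) that prints each division step; seeing the quotients and remainders makes slips like the $100 \bmod 14$ step in the question easy to catch:

```python
# Hedged sketch: trace the Euclidean algorithm step by step.
def gcd_steps(a, b):
    while b:
        print(f"{a} = {a // b} * {b} + {a % b}")
        a, b = b, a % b
    return a

print("gcd =", gcd_steps(1814, 1714))
# 1814 = 1 * 1714 + 100
# 1714 = 17 * 100 + 14
# 100  = 7 * 14 + 2     <- the step that went wrong in the question
# 14   = 7 * 2 + 0
# gcd = 2
```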
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How to find the radix in a number system? I have $(132)_{10} = (2010)_r$ I have tried the above and got the equation $2r^3+r =132$. From this I am unable to find the value of $r$. Can anyone help me solve this problem?
Our goal is to find a positive integer solution to $2 r^3 + r - 132 = 0$. To proceed, we use the rational root theorem, which states that any rational solution $\frac{p}{q}$ (in lowest terms) must have $p$ dividing $-132$ and $q$ dividing $2$. Factoring $132$ gives you $2^2 \cdot 3 \cdot 11$. So here are all the possibilities (ignoring sign): $q = 1$ or $2$ $p = 1$ or $2$ or $3$ or $4$ or $6$ or $11$ or $\dots$ or $132$ Now observe $4$ is a solution to $2 r^3 + r - 132 = 0$.
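A brute-force sketch (my own) confirming the radix: interpret the digits $2,0,1,0$ in base $r$ and compare with $132$:

```python
# Hedged sketch: evaluate the digit string in each candidate base.
def value(digits, r):
    v = 0
    for d in digits:
        v = v * r + d
    return v

for r in range(3, 20):                 # the digit 2 forces r >= 3
    if value([2, 0, 1, 0], r) == 132:
        print("radix:", r)             # prints 4
```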
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Calculating the remainder of the series $1/n!$ (euler number) I am currently studying series and in my textbook and there's an example of calculating the remainder of the series which I don't understand completely The series in question is: $$\sum_{n=0}^\infty \frac{1}{n!} = e,$$ so the remainder is \begin{align} \sum_{k=n+1}^\infty \frac{1}{k!} &= \frac{1}{(n+1)!}\left(1 + \frac{1}{n+2} + \frac{1}{(n+2)(n+3)} + ...\right) \\&< \frac{1}{(n+1)!}\left(1+\frac{1}{(n+1)}+\frac{1}{(n+1)^2} + ...\right) \\&= \frac{1}{(n+1)!}\frac{1}{1-\frac{1}{(n+1)}} \\&= \frac{1}{n!n} \end{align} What I don't understand and I need clarification about is the inequality part (why is the first term less than the second) and why the term after the inequality equals what the textbook says it equals. Thanks in advance!
For $k > 1$ we have $$\frac{1}{n+k}<\frac{1}{n+1},$$ which is why each term in the first bracket is bounded by the corresponding power of $\frac{1}{n+1}$; that gives the inequality. The step after the inequality is just the geometric series $\sum_{j=0}^\infty r^j=\frac{1}{1-r}$ with $r=\frac{1}{n+1}$, and then $$\frac{1}{(n+1)!}\cdot\frac{1}{1-\frac{1}{n+1}}=\frac{1}{(n+1)!}\cdot\frac{n+1}{n}=\frac{1}{n!\,n}.$$
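A quick numerical sketch (mine) of the bound: the tail of $\sum 1/k!$ past $n$ indeed stays below $\frac{1}{n!\,n}$:

```python
# Hedged sketch: compare the actual tail of the series with the bound 1/(n! * n).
from math import factorial, e

for n in range(1, 8):
    tail = e - sum(1/factorial(k) for k in range(n + 1))
    bound = 1 / (factorial(n) * n)
    print(n, tail, bound, tail < bound)   # the last column is always True
```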
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is this time-frequency representation of a pure sine function with the Stockwell Transform different at the boundary? This time-frequency decomposition was computed with the Stockwell Transform of a pure sine function and plotted with MATLAB. But as you can see, at the boundary of the image the pattern is different. I need to know why this is happening. (Figure: the time-frequency representation.)
The S-transform is defined using an integral over an infinite time interval. Supposedly your pure sine function is considered zero outside some finite interval? Or it wraps around to the other end, but the phases at the ends do not match? Then it's not so pure at the ends of that interval. There will be some discontinuity in the cropped/wrapped-around signal or its derivative; such discontinuities spread the spectrum. As with Fourier analysis, use of window functions helps in reducing such phenomena.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2294930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating closed form of $I_n=\int_0^{\pi/2} \underbrace{\cos(\cos(\dots(\cos}_{n \text{ times}}(x))\dots))~dx$ for all $n\in \mathbb{N}$. I was wondering if there is any way to evaluate a general closed form solution to the following integral for all $n\in \mathbb{N}$. $$I_n=\int_0^{\pi/2} \underbrace{\cos(\cos(\cos(\dots(\cos}_{n \text{ times}}(x))\dots)))~dx \tag{1}$$ I have already evaluated closed forms to this integral for certain values of $n$, however I am still missing a closed form for a large number of values of $n$. Those in $\color{red}{\text{red}}$ I numerically evaluated, meaning that I currently do not have a closed form for them. $$\begin{array}{c|c}n&I_n\\\hline0&\dfrac{\pi^2}{8}\\1&1\\2&\dfrac{\pi J_0(1)}{2}\\\color{red}{3}&\color{red}{\approx 1.11805}\\\color{red}{4}&\color{red}{\approx 1.18186} \\ \color{red}{5} & \color{red}{\approx 1.14376} \\ \color{red}{6}&\color{red}{\approx 1.17102}\\\color{red}{\vdots}&\color{red}{\vdots}\\\infty&\alpha\cdot \dfrac{\pi}{2} \approx 1.16095\end{array}$$ Where $J_p(\cdot )$ is the Bessel function of the first kind, and $\alpha$ is the Dottie Number. The cases $n=0$ and $n=1$ are trivial, hence I will not show how I derived these solutions. Hence, I will show how I derived the case where $n=2$ and $n\to \infty$. Evaluating $I_2$: i.e. $\int_0^{\pi/2} \cos(\cos(x))~dx$. Introducing the definition of the Bessel Function of the first kind: $$J_{\beta}(z)=\frac{1}{\pi}\int_0^{\pi} \cos(z\sin{\theta}-\beta \theta)~d\theta$$ We can use the substitution $\theta=u+\frac{\pi}{2}$ to obtain: $$\begin{align} J_{\beta}(z)&=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \cos\left(z\sin\left(u+\frac{\pi}{2}\right)-\beta\left(u+\frac{\pi}{2}\right)\right)~du\\&=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \cos\left(z\cos(u)-\beta\left(u+\frac{\pi}{2}\right)\right)~du \end{align}$$ To get it into a similar form to our case, notice that we can let $\beta=0$ and $z=1$. Therefore: $$J_0(1)=\frac{1}{\pi}\int_{-\pi/2}^{\pi/2} \cos(\cos(u))~du$$ At first sight, it may seem like the bounds are problematic. However, note that $f(x)=\cos(\cos(x))$ is an even function, hence we know that: $$J_0(1)=\frac{2}{\pi}\int_0^{\pi/2} \cos(\cos(u))~du \iff \int_0^{\pi/2} \cos(\cos(u))~du=\frac{\pi J_0(1)}{2}$$ Evaluating $\lim\limits_{n\to \infty} I_n$: I realized that as $n\to \infty$, the integrand will converge to a constant function for all $x\in \mathbb{R}$, as shown below. The blue, yellow, green and red curves are for $n=1,2,5,10$ respectively. I figured that we can represent the repeated composition of functions by the following recurrence $x_{n+1}=\cos(x_n)$. Using the principles of fixed point iteration, we thus know that the value it tends to is the unique solution to $x=\cos(x)$. This turns out to be the Dottie Number, which I evaluated numerically using the Newton-Raphson Method and denoted this value by $\alpha$. I obtained: $$\alpha\approx 0.739085133215161$$ Hence: $$\lim_{n\to \infty} I_n=\int_0^{\pi/2}\cos(\cos(\cos(\dots(\cos(x))\dots)))~dx=\alpha\cdot \frac{\pi}{2} \approx 1.160952212443092$$ As mentioned, I am unsure how to evaluate closed forms for the cases when $n\geq 3$. I've checked some other definitions such as the Struve function $\mathbf{H}_{\gamma}(\cdot)$, though it only seems to be useful when evaluating $\int_0^{\pi/2} \sin(\sin(x))~dx$, which is not the integral we are considering. Hence, I would appreciate some guidance on how to evaluate a general closed form for $(1)$ for all $n\in \mathbb{N}$, if possible.
The mere fact that even the simple case $n=2$ ceases to possess a meaningful closed form in terms of elementary functions, and an entirely new function had to be invented from scratch in order to express its value, should be enough to settle all questions one might have concerning the possibility of finding such a form for larger values of the argument. Indeed, the very next case, $n=3,$ is not known to be expressible even in terms of special functions. That $\cos^{[\infty]}(x)$ is a constant $($since the function is both bound and monotone$)$ certainly constitutes a blessing, but such clearly does not hold for finite values of the iterator. $($ See also Liouville's theorem and the Risch algorithm $).$
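Since numerics is all that is available for $n\ge3$, here is a short sketch (my own, not from the answer) reproducing the table in the question with plain quadrature:

```python
# Hedged sketch: numerically evaluate I_n for the first few n.
from math import cos, pi
from scipy.integrate import quad

def iterated_cos(n):
    def f(x):
        for _ in range(n):
            x = cos(x)
        return x
    return f

for n in range(7):
    print(n, quad(iterated_cos(n), 0, pi/2)[0])
# n=0 gives pi^2/8, n=1 gives 1, n=2 gives pi*J_0(1)/2, then the red values above
```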
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 1, "answer_id": 0 }
Solving the functional equation $\tau \left(\frac{-1}{z}\right) = - \tau(z)$ Let $\mathbb{H} \subset \mathbb{C}$ be the upper half plane. Find $\tau: \mathbb{H} \to \mathbb{C} $, holomorphic and non-constant, satisfying $\tau \left( \frac{-1}{z} \right) = - \tau(z)$. There is a very good answer already. But since the question was put on hold, I'll add some context. This question came up in the context of Klein's $j$ - invariant. If you change coordinates from $z \to w=z-i$ and expand in $w$, you'll see from symmetry arguments that the first two coefficients must be $0$. From this, I'm wondering if $j$ is locally the square of a coordinate around $i$. This makes sense if you look at the fundamental domain for $\mathbb{H}$ mod the $SL(2, \mathbb{Z})$ action, and note that $j$ defines a complex analytic structure on it.
EqWorld is the right resource here. In fact this functional equation belongs to the form of http://eqworld.ipmnet.ru/en/solutions/fe/fe1121.pdf. The general solution is $\tau(z)=C\left(z~,-\dfrac{1}{z}\right)$, where $C(u,v)$ is any antisymmetric function. For a concrete example, $C(u,v)=u-v$ gives $\tau(z)=z+\dfrac{1}{z}$, which is holomorphic and non-constant on $\mathbb{H}$ and satisfies $\tau\left(-\dfrac{1}{z}\right)=-\dfrac{1}{z}-z=-\tau(z)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Suppose $G$ is a non-abelian group and $H,K$ are two abelian subgroups of $G$. Must $HK$ then be an abelian subgroup of $G$? Suppose $G$ is a non-abelian group and $H,K$ are two abelian subgroups of $G$. Then must $HK$ be an abelian subgroup of $G$? I know an example, but I am confused, so I just want to check this.
This is not true. There are even metabelian groups (non-abelian, but $G'$ abelian) which are the product of two abelian subgroups $A$ and $B$, i.e., $G=AB$. A concrete example: $G=S_3$ with $A=\langle (1\,2\,3)\rangle$ and $B=\langle (1\,2)\rangle$; both factors are abelian (indeed cyclic) and $AB=S_3$ by counting, yet $S_3$ is non-abelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove the perpendicular bisector of a chord passes through the centre of the circle Hello, can someone please give me a simple proof of the following theorem: "The perpendicular bisector of a chord passes through the centre of the circle." I have attached a diagram of what I mean and a web link to a proof that I did not understand below. https://proofwiki.org/wiki/Perpendicular_Bisector_of_Chord_Passes_Through_Center Please explain simply and fully because I have an exam on this tomorrow. Also, could you explain the converse theorems: if the bisector of a chord passes through the centre of the circle, prove it is perpendicular to the chord; and if a line through the centre is perpendicular to a chord, prove it bisects the chord.
The proof in the picture is the simplest possible one. All you have to do is write the conditions for congruence, thus proving that the triangles in the above picture are congruent. This is the best possible approach to the given question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 5 }
Is $f(x) = x^{10}-x^5+1$ solvable by radicals? Is $f(x) = x^{10}-x^5+1$ solvable by radicals? So far I've showed that $f$ is irreducible because if we let $y=x^5$ then $f(y)=y^2-y+1$ which is irreducible because it has a negative discriminant. I also know that $f$ has no real roots so I've concluded that $Gal(L_f/\mathbb{Q}) \subset S_{10}$ and that there is a 10-cycle and a transposition in $Gal(L_f/\mathbb{Q})$. However I have no clue as to what $Gal(L_f/\mathbb{Q})$ could be.
Yes, of course. Let $x+\frac{1}{x}=a$. Hence, $$x^{10}-x^5+1=x^{10}+x^7-x^7+x^6-x^5-x^6+1=$$ $$=(x^2-x+1)(x^7(x+1)-x^5-(x^3-1)(x+1))=$$ $$=(x^2-x+1)(x^8+x^7-x^5-x^4-x^3+x+1)=$$ $$=(x^2-x+1)x^4\left(x^4+\frac{1}{x^4}+x^3+\frac{1}{x^3}-x-\frac{1}{x}-1\right)=$$ $$=(x^2-x+1)x^4(a^4-4a^2+2+a^3-3a-a-1)=$$ $$=(x^2-x+1)x^4(a^4+a^3-4a^2-4a+1)=$$ $$=(x^2-x+1)x^4\left(a^2+\frac{1-\sqrt5}{2}a+\frac{-3-\sqrt5}{2}\right)\left(a^2+\frac{1+\sqrt5}{2}a+\frac{-3+\sqrt5}{2}\right)...$$
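A sympy sketch (my own) double-checking the factorization chain above, including the sign of the constant in the first quadratic factor in $a$:

```python
# Hedged sketch: verify both factorization steps symbolically.
from sympy import symbols, sqrt, expand, simplify

x, a = symbols('x a')
f = x**10 - x**5 + 1
g = (x**2 - x + 1)*(x**8 + x**7 - x**5 - x**4 - x**3 + x + 1)
print(expand(f - g))                       # 0

quartic = a**4 + a**3 - 4*a**2 - 4*a + 1
q1 = a**2 + (1 - sqrt(5))/2*a + (-3 - sqrt(5))/2
q2 = a**2 + (1 + sqrt(5))/2*a + (-3 + sqrt(5))/2
print(simplify(expand(q1*q2) - quartic))   # 0
```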
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Taking logarithm of sum and products $\newcommand{\cost}{\operatorname{cost}}$My cost-metric is of the following form \begin{equation} \cost(x,y) = A(x,y_1) \times \sum_{i}b_i B_i(x,y_i)\tag{1} \end{equation} where $A$ and the $B_i$'s follow normal distributions. For my computer implementation, I am thinking of taking $\log(\cdot)$ to avoid computing exponentials. By taking the log of (1), I get \begin{equation} \log(\cost(x,y)) = \log(A(x,y_1)) + \log\left(\sum_i b_i B_i(x,y_i)\right) \end{equation} which simplifies the first term, but the second term remains unchanged. Can I bring $\log(\cdot)$ inside the summation?
No, you can't bring the logarithm inside the summation because that's equivalent to saying that the logarithm of a sum is the sum of logarithms.
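Not part of the answer above, but for the implementation goal in the question: a standard numerically stable way to evaluate $\log\sum_i b_i B_i$ without leaving log-space is the log-sum-exp trick. A sketch, with made-up sample values:

```python
# Hedged sketch: log-sum-exp, assuming the log-domain terms log(b_i * B_i) are available.
import numpy as np

def log_sum_exp(log_terms):
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))   # shift prevents under/overflow

log_terms = np.array([-1050.0, -1052.3, -1049.7])      # e.g. log(b_i) + log(B_i)
print(log_sum_exp(log_terms))                          # finite, ~ -1049.1
# the naive np.log(np.sum(np.exp(log_terms))) would underflow to -inf here
```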
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simplifying a Boolean expression (confused) Could somebody help me make sense of how to use the Boolean rules to simplify this expression? $$(x'+(yz)')(x + z')'$$ I used distributivity to get $$(x'+y')(x'+z')(x'+z)$$ I don't know if that was the right path to go down, or where to go from here. Thanks.
I'd rather take this from the start: $$(x'+(yz)')(x+z')'$$ Apply De Morgan's laws to $(yz)'$ and $(x+z')'$: $$=(x'+y'+z')zx'$$ Distribute: $$=zx'x'+zx'y'+zx'z'$$ $x'x'=x'$ and $zz'=0$: $$=zx'+zx'y'$$ $zx'$ absorbs $zx'y'$: $$=zx'$$ This is the simplest form.
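A brute-force sketch (mine) checking the simplification over all eight assignments:

```python
# Hedged sketch: exhaustive truth-table check of (x' + (yz)')(x + z')' == z x'.
from itertools import product

for x, y, z in product([0, 1], repeat=3):
    lhs = ((1 - x) | (1 - (y & z))) & (1 - (x | (1 - z)))
    rhs = z & (1 - x)
    assert lhs == rhs
print("zx' is equivalent on all 8 assignments")
```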
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Continued Fraction pattern I have been given the continued fraction $\dfrac{1}{\dfrac{1}{\dfrac{1}{x-1}-1}-1}$ If I let $x=5$, I get the following pattern... $\frac{1}{4}, \, -\frac{4}{3}, \,-\frac{3}{7}, \,-\frac{7}{10}, \,-\frac{10}{17}, \,...$ It appears that (excluding the first case) the previous denominator becomes the numerator and the denominator becomes the sum of the previous denominator and numerator. Is there a way to write that pattern as a sequence so I can find the 653rd instance where $x=56$?
Rather than starting with $x=5$, it's more enlightening to just write down the sequence in terms of $x$: $$\frac1{x-1}, -\frac{x-1}{x-2}, -\frac{x-2}{2x-3}, -\frac{2x-3}{3x-5}, -\frac{3x-5}{5x-8}, -\frac{5x-8}{8x-13}, \dots$$ The coefficients in these fractions might look familiar: these are the Fibonacci numbers! More precisely, the $n^{\text{th}}$ fraction in the list is $$-\frac{ F_{n-1}x - F_n}{F_n x - F_{n+1}}$$ where $F_n$ is the $n^{\text{th}}$ Fibonacci number. Since each fraction is obtained by subtracting $1$ from the previous then taking the reciprocal, this pattern is not hard to show by induction. So the answer to your question should be $$-\frac{56 \cdot F_{652} - F_{653}}{56 \cdot F_{653} - F_{654}}.$$ If you want an explicit answer, you can either substitute in the closed-form expression for Fibonacci numbers, or use the matrix form of the Fibonacci recurrence together with exponentiation by squaring to compute $F_{652}$, $F_{653}$, and $F_{654}$, which have quite a few digits.
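A sketch (mine, not part of the answer) evaluating the 653rd term both directly from the recurrence and via the Fibonacci closed form, with exact rational arithmetic:

```python
# Hedged sketch: compare the direct iteration with -(F_{n-1}x - F_n)/(F_n x - F_{n+1}).
from fractions import Fraction

def nth_term(x, n):
    t = Fraction(1, x - 1)
    for _ in range(n - 1):
        t = 1 / (t - 1)
    return t

def fib(n):                         # F_0 = 0, F_1 = 1; simple iteration is fine here
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n, x = 653, 56
closed = Fraction(-(x*fib(n - 1) - fib(n)), x*fib(n) - fib(n + 1))
print(nth_term(x, n) == closed)     # True
```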
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we use the defining property of weak derivatives when integrating over general measurable sets? Let $f \in H^1(\mathbb{R}^d)$. Then, we have that for every smooth function $\phi$ with compact support: $$ \int_{\mathbb{R}^d} f \, \nabla \phi dx= - \int_{\mathbb{R}^d} \nabla f \, \phi dx $$ where $\nabla f$ denotes the weak derivative of $f$. This is the defining property of a function which is weakly differentiable, i.e. $f$ is in $H^1$ if and only if the above property holds for some vector of $L^2$ functions $g=\nabla f$. Now let $B$ be any Borel measurable set. Does the following still hold true? $$ \int_{B} f \, \nabla \phi dx= - \int_{B} \nabla f \, \phi dx $$
No. What you are integrating is actually $\chi_B$ times the integrand over the whole space, i.e. $$ \int f \chi_B \nabla \phi$$ so what you are asking is whether $f\in H^1 \Rightarrow f\chi_B \in H^1$ (which is not true). Also note that for $B$ with smooth boundary and smooth $f$ there are theorems which tell you that you get boundary terms when you want to relate $\int f\nabla \phi$ and $\int \nabla f \phi$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2295986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Surface integral - spherical I'm trying to calculate the following surface integral $$\int \int_{s_r} \frac{z-R}{(x^2+y^2+(z-R)^2)^{3/2}} dS $$, where $s_r=\{(x,y,z)\in \mathbb{R}^3 : x^2+y^2+z^2=r^2 \}. $ I've switched to spherical coordinates but don't really know how to do it.
The reason to use spherical coordinates is that the surface over which we integrate takes on a particularly simple form: instead of the surface $x^2+y^2+z^2=r^2$ in Cartesians, or $z^2+\rho^2=r^2$ in cylindricals, the sphere is simply the surface $r'=r$, where $r'$ is the variable spherical coordinate. This means that we can integrate directly using the two angular coordinates, rather than having to write one coordinate implicitly in terms of the others. So in spherical coordinates, $dS = r^2 \sin{\theta} \, d\theta \, d\phi$. The integral becomes $$ \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \frac{r\cos{\theta}-R}{(r^2 -2 rR\cos{\theta} + R^2)^{3/2}} r^2\sin{\theta} \, d\phi \, d\theta = 2\pi r^2 \int_{\theta=0}^{\pi} \frac{r\cos{\theta}-R}{(r^2 -2 rR\cos{\theta} + R^2)^{3/2}} \sin{\theta} \, d\theta. $$ Putting $u=\cos{\theta}$ gives (I will drop the overall factor $2\pi$ from here on) $$ \int_{-1}^1 \frac{u-(R/r)}{(1-2u(R/r)+(R/r)^2)^{3/2}} \, du $$ It makes sense to write $R/r=a$ at this point, so $$ \int_{-1}^1 \frac{u-a}{(1-2ua+a^2)^{3/2}} \, du. $$ The easiest way to do this integral is by a sort of partial fractions idea: the integrand is $$ \frac{u-a}{(1-2ua+a^2)^{3/2}} = \frac{1}{2a}\left(\frac{1-a^2}{(1-2ua+a^2)^{3/2}} - \frac{1}{(1-2ua+a^2)^{1/2}} \right), $$ which has either $$ \frac{1-ua}{a^2(1-2ua+a^2)^{1/2}} \quad \text{or} \quad \frac{1/a-u}{a^2(1/a^2-2u/a+1)^{1/2}} $$ as continuous antiderivative, depending on whether $R$ is smaller or larger than $r$.
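A numerical sketch (mine) checking the partial-fraction split and the antiderivative $F(u)=\frac{1-ua}{a^2\sqrt{1-2ua+a^2}}$ on both sides of $a=R/r=1$:

```python
# Hedged sketch: compare quadrature of the 1-D integral with F(1) - F(-1).
import numpy as np
from scipy.integrate import quad

def F(u, a):
    return (1 - u*a) / (a**2 * np.sqrt(1 - 2*u*a + a**2))

for a in (0.5, 2.0):                            # r > R and r < R respectively
    integrand = lambda u: (u - a) / (1 - 2*u*a + a**2)**1.5
    numeric, _ = quad(integrand, -1, 1)
    print(a, numeric, F(1, a) - F(-1, a))       # 0 for a < 1, -2/a^2 for a > 1
```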
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Maximal ideal of $\mathbb{Q}[x,y]$ I want to show that $ \ A =\langle x,y \rangle$, the ideal generated by $x$ and $y$, is maximal in $R = \mathbb{Q}[x,y]$. I have seen a different solution somewhere but it was kind of longer so I tried something else. I just want to know if it is correct. Let $A \subseteq B \subseteq R$. We need to show that $B = A$ or $B=R$. If $B=A$, then we are done. Suppose $B \neq A$. Then there exists $p(x,y) \in B \setminus A$. As $p(x,y) \notin A =\langle x,y \rangle$, the constant term of $p$ is not zero. Thus $$p(x,y) = \sum_{i=0}^n a_i x^i, \hspace{0.5cm} a_0 \neq0, a_i \in \mathbb{Q}[y]$$ Now $\sum_{i=1}^n a_i x^i \in A \subseteq B$. This implies $p(x,y)-\sum_{i=1}^n a_i x^i \in B$. Therefore, $a_0 \in B$. As $a_0 \neq 0$, therefore $1 \in B$ which shows $B=R$.
This is almost correct but not quite. You cannot deduce that $1\in B$ from $a_0\in B$ since you only know that $a_0\in\mathbb{Q}[y]$, not that $a_0\in\mathbb{Q}$. What you can say is that $a_0$ has the same constant term as $p$ and hence has nonzero constant term, so you can write $a_0=\sum_{j=0}^m b_jy^j$ for $b_j\in \mathbb{Q}$, $b_0\neq 0$. Since $\sum_{j=1}^mb_jy^j\in B$, you get that $b_0\in B$, and now you can conclude that $1\in B$ since $b_0$ must be a unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to (or why cannot) define complex conjugate in the structure $(\mathbb{Z}+i\mathbb{Z},+)$? Let $(\mathbb{Z}+i\mathbb{Z},+)$, where $i$ is the imaginary unit, be a structure with a single operation $+$, the ordinary addition of Gaussian integers, and with no constant symbols. In this structure, the number zero and the inverse of any number are definable (I think), for they can be defined by $\forall x(x=0 \leftrightarrow\forall y(x+y=y))$ and $\forall x\forall y(x=-y\leftrightarrow x+y=0)$. Then is there any way to define the complex conjugate of any number, or to define some specific complex numbers, like $i$, $1+i$, etc., within the given structure?
There is no way to do this. Remember that for a function $f: \mathcal{M}\rightarrow\mathcal{M}$ (or indeed any function or relation in general) to be definable in $\mathcal{M}$, it must be fixed by automorphisms: if $\alpha$ is an automorphism of $\mathcal{M}$, we must have that $f(\alpha(m))=\alpha(f(m))$ for all $m\in\mathcal{M}$. Note that this is not a sufficient condition - it's a good exercise to prove this. Now observe that since we don't have the multiplicative structure on $\mathbb{Z}+i\mathbb{Z}$ in this case, the "switching" map $\alpha: a+bi\mapsto b+ai$ is an automorphism of the structure; but it doesn't preserve conjugation, since e.g. $$conj(\alpha(1-2i))=conj(-2+i)=-2-i\quad\mbox{ but }\quad\alpha(conj(1-2i))=\alpha(1+2i)=2+i.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counterexamples for uniqueness of viscosity solutions Recall a classical comparison result for viscosity solutions: Let $H:[0,T]\times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ satisfy: (1) $H$ is continuous; (2) $\vert H(t,x,p)-H(t,x,q) \vert \le C \vert p-q \vert$; (3) $\vert H(t,x,p) - H(s,y,p)\vert \le C(\vert t-s \vert+ \vert x-y \vert)(1+\vert p \vert)$; for some $C>0$. Let $\underline{u},\overline{u}$ bounded and uniformly continuous be sub- and supersolutions of the Cauchy problem for $$\frac{\partial u}{\partial t}+H(t,x,Du) = 0.$$ If $\underline{u}(0,x) \le \overline{u}(0,x)$ for all $x \in \mathbb{R}^n$, then $\underline{u} \le \overline{u}$ in $[0,T]\times \mathbb{R}^n.$ Can you give counterexamples that show that the claim does not hold if we remove any one of assumptions (1) or (2), or remove parts of (3)?
I assume you are interpreting viscosity solutions via Ishii's notion of solution when $H$ is discontinuous (where you replace $H$ by its upper and lower semicontinuous envelopes in the super and subsolution properties, respectively). In this case consider the PDE (or ODE rather) $$u'(x) = f(x)$$ where $f(x)=1$ for $x$ rational, and $f(x)=0$ for $x$ irrational. Comparison does not hold for this PDE (any linear function $u(x)=mx+b$ with $0 \leq m \leq 1$ is a viscosity solution in the Ishii sense). It is possible to prove uniqueness when $H$ is discontinuous, but the discontinuity has to be relatively mild (say, a jump discontinuity along a Lipschitz surface), and other conditions have to be placed on $H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inverse or approximation to the inverse of a sum of block diagonal and diagonal matrix I need to calculate $(A+B)^{-1}$, where $A$ and $B$ are two square, very sparse and very large matrices. $A$ is block diagonal, real symmetric and positive definite, and I have access to $A^{-1}$ (which in this case is also sparse, and block diagonal). $B$ is diagonal and real positive. In my application, I need to calculate the inverse of the sum of these two matrices where the inverse of the non-diagonal one (e.g. $A^{-1}$) is updated frequently, and readily available. Since $B$ is full rank, the Woodbury lemma is of no use here (well, it is, but it's too slow). Other methods described in this nice question are of no use in my case as the spectral radius of $A^{-1}B$ is much larger than one. Methods based on a diagonalisation assume that it is the diagonal matrix that is being updated frequently, whereas that's not my case (i.e., diagonalising $A$ is expensive, and I'd have to do that very often). I'm quite happy to live with an approximate solution.
Let $A_1,A_2,\cdots,A_q$ be the diagonal blocks of $A$, and $a_{1,1},a_{1,2},\cdots,a_{1,n_1},a_{2,1},a_{2,2},\cdots,a_{2,n_2},\cdots,a_{q,1},a_{q,2},\cdots,a_{q,n_q}$ the diagonal elements of $B$; then the inverse of the sum is simply a block diagonal matrix with blocks ${(A_i+\operatorname{diag}(a_{i,1},\cdots,a_{i,n_i}))}^{-1}$ for $i\in(1,2,\cdots,q)$. So the problem is reduced to finding the inverse of the sum of a matrix and a diagonal matrix. Fortunately, $A$ is symmetric positive definite and $B$ is diagonal with positive entries, so each shifted block $A_i+\operatorname{diag}(a_{i,1},\cdots,a_{i,n_i})$ is itself symmetric positive definite, hence diagonalizable: $$ A_i+\operatorname{diag}(a_{i,1},\cdots,a_{i,n_i})=P_iD_i{P_i}^{T}=P_i\begin{bmatrix} \mu_{i,1} & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \mu_{i,n_{i}} \end{bmatrix}{P_i}^{T} $$ where $\mu_{i,1},\cdots,\mu_{i,n_{i}}>0$ are the eigenvalues of the shifted block. (Note that in general these are not $\lambda_{i,j}+a_{i,j}$, where $\lambda_{i,j}$ are the eigenvalues of $A_i$ alone — that identity only holds when the $a_{i,j}$ within a block are all equal, since $P_i$ does not diagonalize an arbitrary diagonal shift; each shifted block has to be diagonalized directly.) Since the $\mu_{i,j}$ are positive, $$ {(A_i+\operatorname{diag}(a_{i,1},\cdots,a_{i,n_i}))}^{-1}=P_i\begin{bmatrix} \frac{1}{\mu_{i,1}} & 0 & \cdots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \frac{1}{\mu_{i,n_{i}}} \end{bmatrix}{P_i}^{T}=P_i{D_i}^{-1}{P_i}^{T} $$ Hence ${(A+B)}^{-1}$ is a block diagonal matrix with diagonal blocks being the matrices above. You can rewrite that as $$ {(A+B)}^{-1}=\begin{bmatrix} P_1 & & \\ & \ddots & \\ & & P_q \end{bmatrix}\times\begin{bmatrix} {D_1}^{-1} & & \\ & \ddots & \\ & & {D_q}^{-1} \end{bmatrix}\times\begin{bmatrix} {P_1}^{T} & & \\ & \ddots & \\ & & {P_q}^{T} \end{bmatrix} $$
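A numpy sketch (my own, with made-up block sizes) of the block-by-block inverse via the eigendecomposition of each shifted block:

```python
# Hedged sketch: blockwise eigh-based inverse, verified against the dense inverse.
import numpy as np
from scipy.linalg import eigh, block_diag

rng = np.random.default_rng(0)
blocks = []
for n in (3, 4):                               # assumed block sizes
    M = rng.standard_normal((n, n))
    blocks.append(M @ M.T + n*np.eye(n))       # symmetric positive definite A_i

A = block_diag(*blocks)
d = rng.uniform(0.1, 1.0, size=A.shape[0])     # positive diagonal of B

inv_blocks, start = [], 0
for Ai in blocks:
    n = Ai.shape[0]
    mu, P = eigh(Ai + np.diag(d[start:start + n]))   # diagonalize the shifted block
    inv_blocks.append(P @ np.diag(1/mu) @ P.T)
    start += n

assert np.allclose(block_diag(*inv_blocks), np.linalg.inv(A + np.diag(d)))
print("blockwise inverse matches the dense inverse")
```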
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Two individuals are walking around a cylindrical tower. What is the probability that they can see each other? It'd be of the greatest interest to have not only a rigorous solution, but also an intuitive insight onto this simple yet very difficult problem: Let there exist some tower which has the shape of a cylinder and whose radius is A. Further, let this tower be surrounded by a walking lane whose width is B. Now, there are two individuals who are on the walk; what is the probability that they're able to see each other?
Hint: By symmetry, one may assume that the first individual is located on the horizontal radius on the left. The surface he can see is the portion of the lane cut by the tangents to the tower. The area of this surface, $R(r)$, can be computed with a little bit of trigonometry, as the sum of an annular sector, two right triangles and two segments. We can use a reduced area, i.e. the fraction of the whole ring which is visible. Then you need to compute the average area of this zone for all positions on the radius. As we assume a model of uniform distribution, the positions must be weighted by the distance to the center, as longer circumferences have higher probabilities (said differently, the element of area in polar coordinates has a factor $r\,dr$). $$P=\frac{\displaystyle\int_A^{A+B}R(r)\,r\,dr}{\pi((A+B)^2-A^2)}.$$ The aperture of the annular sector is $2\alpha=2\arccos\dfrac Ar$. Then the equation of a tangent: $$(-r,0)+t(\sin\alpha,\cos\alpha).$$ The tangency point is given by $$t_t=r\sin\alpha=\sqrt{r^2-A^2},$$ $$(x_t,y_t)=\frac Ar(-A,\sqrt{r^2-A^2}),$$ and the intersection with the outer circle, by $$t_i=r\sin\alpha+\sqrt{r^2\sin^2\alpha+(A+B)^2-r^2}=\sqrt{r^2-A^2}+\sqrt{(A+B)^2-A^2},$$ $$(x_i,y_i)=(-r,0)+t_i(\sin\alpha,\cos\alpha).$$ A triangle has height $B$ and basis $t_i-t_t$, and a segment has radius $A+B$ and aperture angle $$\arctan\frac{y_i}{x_i}+\pi-\alpha.$$ Computing the integral looks like a tremendous task. With $S$ the area of the annulus, $$SR(r)=\frac S\pi\arccos\frac Ar+B\sqrt{(A+B)^2-A^2}+(A+B)^2\left(\beta-\sin\frac{\beta}2\right)$$ where $$\beta=\arctan\frac{\left(\sqrt{r^2-A^2}+\sqrt{(A+B)^2-A^2}\right)\dfrac Ar}{-r+\left(\sqrt{r^2-A^2}+\sqrt{(A+B)^2-A^2}\right)\dfrac{\sqrt{r^2-A^2}}r}\\+\pi-\arccos\frac Ar.$$ Sigh.
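Since the analytic route is messy, here is a Monte Carlo sketch (mine, not part of the answer) to sanity-check any closed form: sample both walkers uniformly by area on the annulus and test whether the segment between them misses the inner disk of radius $A$:

```python
# Hedged sketch: visibility = closest point of the segment to the origin lies
# outside the tower of radius A.
import numpy as np

def estimate(A, B, n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.uniform(A*A, (A + B)**2, size=(n, 2)))    # area-uniform radii
    th = rng.uniform(0.0, 2*np.pi, size=(n, 2))
    p = np.stack([r[:, 0]*np.cos(th[:, 0]), r[:, 0]*np.sin(th[:, 0])], axis=1)
    q = np.stack([r[:, 1]*np.cos(th[:, 1]), r[:, 1]*np.sin(th[:, 1])], axis=1)
    d = q - p
    t = np.clip(-np.einsum('ij,ij->i', p, d) / np.einsum('ij,ij->i', d, d), 0.0, 1.0)
    closest = p + t[:, None]*d
    return np.mean(np.einsum('ij,ij->i', closest, closest) > A*A)

print(estimate(A=1.0, B=1.0))   # estimated probability the two can see each other
```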
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 2, "answer_id": 1 }
Are there infinite cardinals $\kappa$, $\lambda$ with $\kappa^\lambda = \kappa$? Going through an exam question for revision. I need to prove the following: Are there infinite cardinals $\kappa$, $\lambda$ with $\kappa^\lambda = \kappa$? I really am not sure, though intuition says no. I am then asked to state and prove Cantor's Theorem, i.e. there is no surjection from a set to its power set. This is also simple to prove. The final part of the question asks Let $\kappa$ be a cardinal number. Prove $2^\kappa \neq \aleph_0$ This I am also basically clueless for. Perhaps I can show that for any subset of the naturals, its power set has size either less than or greater than $\aleph_0$, depending on whether or not the subset is finite? But I am not sure if this would even work, let alone constitute a proof. Any hints anyone can provide would be helpful, since it would help me learn better if I can come across the solutions to these by myself! Thanks
Yes. Try $\kappa=2^{\aleph_0}$ and $\lambda=\aleph_0$. We need to biject sequences of 0-1 sequences with 0-1 sequences. But sequences of sequences are just a 2-dimensional array, and this can be treated with Cantor's zigzag enumeration. If $\kappa$ is finite, then $2^\kappa$ is finite. If $\kappa\ge\aleph_0$, then $2^\kappa\ge 2^{\aleph_0}$, and by Cantor's theorem $2^{\aleph_0}>\aleph_0$; so in neither case can $2^\kappa$ equal $\aleph_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2296915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding the proof of convergence criterion for infinite products via the relation of series? In the text "Functions of one Complex Variable" I'm having trouble understanding the proof for convergence criteria of an infinite product via its relation to infinite series as seen in Corollary $(8.1.4)$ $Corollary \, (8.1.3)$: If $a_{j} \in \mathbb{C}$, $|a_{j}| < 1$ then the partial product $P_{n}$ for $$\prod_{j=1}^{\infty} (1+|a_{j}|)$$ satisfies: $$\exp(\frac{1}{2}\sum_{}^{}|a_{j}|) \leq P_{n}\leq \exp(\sum_{}^{}|a_{j}|). $$ $Corollary \, (8.1.4)$ If: $$\sum_{}^{}|a_{j}| < \infty$$ then: $$\prod_{j=1}^{\infty} (1+|a_{j}|)$$ converges. I observed that the author applied the previous result in Corollary $(8.1.3)$ directly to $(8.1.4)$. This initially begins by allowing the series in $(9.1)$ to exist $(9.1)$ $$\sum_{}^{}|a_{j}| = M$$ Initially from $(9.1)$ the following observations can be made: $$\sum_{}^{}|a_{j}| = a_{1}+a_{2}+a_{3}+a_{4}+a_{5}+a_{6}+ \cdot \cdot \cdot + a_{n}=M$$. Now the partial product for $a_{j}$ can be defined as follows: $$\prod_{j=1}^{\infty} (1+|M|)$$ Our product satisfies the following inequality stated below: $$\exp(\frac{1}{2}\sum_{}^{}|M|) \leq P_{n}\leq \exp(\sum_{}^{}|M|)$$. The final result which concludes the proof is the following: $$P_{n} \leq \exp M$$ In summary, my question is how the inequality in Corollary $(8.1.3)$ was used to show that our infinite product converges; I'm missing some small but fundamental observation.
Note that $1 + |a_{n+1}| \geqslant 1$ for all $n$. Hence, $P_{n+1} = P_n(1 + |a_{n+1}|)\geqslant P_n$. Since the sequence $(P_n)$ is nondecreasing and bounded above by $\exp(\sum_{n=1}^\infty |a_n|)$, it converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a way to see this geometrically? In my answer to this question - Finding the no. of possible right angled triangle. - I derived this result: If a right triangle has integer sides $a, b, c$ and integer inradius $r$, then all possible values of $a$ and $b$ can be gotten in terms of $r$ as follows: For every possible divisor $d$ of $2r^2$, $a = 2r+d$ and $b = 2r+\dfrac{2r^2}{d}$. These are exactly the solutions. From this, of course, the number of solutions depends only on the prime factorization of $r$. My answer involved some annoyingly complicated algebra. My question is "is there a geometrical way to show that the expressions for $a$ and $b$ are true?" (Added later) Another way to phrase this, without mentioning divisibility: Take a rectangle of area $2r^2$. Extend the sides by $2r$. Then the inradius of the resulting right triangle is $r$.
As shown in this answer by using Heron's formula, if a generic triangle has integer inradius $r$ and integer sides $a$, $b$, $c$, then its sides can be written as $a=x+y$, $b=x+z$, $c=y+z$, where positive integers $x$, $y$, $z$ satisfy $$ r^2(x+y+z)=xyz. $$ For a right triangle, Pythagoras' constraint $a^2+b^2=c^2$ translates into the additional relation $x+y+z=yz/x$, which substituted into the above equation gives $x=r$. One can then set $x=r$ in the same equation and solve for $z$, to find: $$ z=r{y+r\over y-r}, \quad\hbox{that is:}\quad z=r+{2r^2\over d}, \ \hbox{where}\ d=y-r. $$ For $z$ to be integer $d$ must be a divisor of $2r^2$ and substituting $x=r$, $y=r+d$, $z=r+2r^2/d$ into the expressions for $a$, $b$ one finds the expressions reported in the question: $$ a=2r+d,\quad b=2r+{2r^2\over d}. $$ EDIT. There is a simpler geometrical way of getting the same result. As one can see from the diagram below, in a right triangle with legs $a$, $b$, hypotenuse $c$ and inradius $r$ one has: $$ c=a+b-2r. $$ Squaring both sides of that equation gives $$ 2r(a+b)=2r^2+ab, $$ and one can solve for $b$ to obtain: $$ b=2r{a-r\over a-2r}. $$ Defining $d=a-2r$ this can be rewritten as $$ b=2r{d+r\over d}=2r+{2r^2\over d}. $$ From that it is apparent that $d$ must be a divisor of $2r^2$ for $a$, $b$ and $r$ to be integers, and one recovers the same results obtained above.
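A sketch (mine) enumerating every integer right triangle with a given inradius via the divisor characterization above (note $d$ and $2r^2/d$ produce the same triangle with $a$ and $b$ swapped):

```python
# Hedged sketch: list (a, b, c) for each divisor d of 2r^2.
def right_triangles_with_inradius(r):
    out = []
    for d in range(1, 2*r*r + 1):
        if (2*r*r) % d == 0:
            a, b = 2*r + d, 2*r + 2*r*r // d
            c = a + b - 2*r
            assert a*a + b*b == c*c            # Pythagoras holds by construction
            out.append((a, b, c))
    return out

print(right_triangles_with_inradius(1))        # [(3, 4, 5), (4, 3, 5)]
print(right_triangles_with_inradius(2))        # (5,12,13), (6,8,10), ... and swaps
```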
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$C^\infty_c (K)$ is separable for $K$ compact $C^\infty_c (K)$ is the space of smooth functions supported on $K$ a compact subset of $\mathbb{R}^d$. For simplicity assume $K$ is just a ball centered at the origin. This has the smooth topology in which convergence is uniform convergence of the function and all derivatives. Note that convergence in the smooth topology of $C^\infty_c(K)$ is the same as uniform convergence of all derivatives. I'm trying to show this space is separable. I initially wanted to use polynomials with rational coefficients, as these are countable and dense in $C_c(K)$ with uniform topology. Then using the fundamental theorem of calculus, we can easily show that this set is also dense in the smooth topology. But the polynomials are not supported on $K$ so this fails. I was thinking about multiplying by a smooth function supported on $K$ that equals $1$ on most of $K$, but then I'm not sure if the fundamental theorem of calculus argument will still work. So what can I do?
For simplicity let's assume $K$ is the closed unit ball in $R^n.$ Let $\lambda_n$ be a sequence of positive numbers such that $\lambda_n \nearrow 1.$ Define $$A_{n} = \{P : K \overset{smooth}{\longrightarrow} R ~| P ~\text{is a rational polynomial on}~ B_{\lambda_n} \text{ and } P=0 ~\text{on}~ K\setminus B_{\lambda_{n+1}} \}$$ This set is well-defined by picking a suitable bump function (cut-off); we may also assume $A_n$ is countable by selecting only one $P$ for each rational polynomial. Clearly $A_{n} \subset C^\infty_c (K).$ We show that $\bigcup_{n \in N} A_n$ is dense in $C^\infty_c (K)$. To this end, pick $f \in C^\infty_c (K)$ and $\epsilon > 0.$ One can find a large enough $n \in N$ such that first $\sup_{x \in K\setminus B_{\lambda_{n+1}}} |f (x)| \leq \frac{\epsilon}{4} $ and second $\sup_{x \in B_{\lambda_{n+1}}\setminus B_{\lambda_{n}} } |f (x)| \leq \frac{\epsilon}{4}$ (these two are possible since $f$ is close to zero near the boundary of $K$). Now choose $P \in A_{n}$ such that first $\sup_{x \in B_{\lambda_{n}}}|f(x) - P (x)| \leq \frac{\epsilon}{4}$ (Weierstrass approximation theorem) and second $\sup_{x \in B_{\lambda_{n+1}}\setminus B_{\lambda_{n}} } |P (x)| \leq \frac{\epsilon}{4}$ ($P$ is close to zero near the boundary of $K$). Then $$ \| f- P \| \leq \sup_{x \in B_{\lambda_{n}}}|f(x) - P (x)| + \sup_{x \in B_{\lambda_{n+1}}\setminus B_{\lambda_{n}} } |f (x)|+ \sup_{x \in B_{\lambda_{n+1}}\setminus B_{\lambda_{n}} } |P (x)|+\sup_{x \in K\setminus B_{\lambda_{n+1}}} |f (x)-0| \leq \epsilon $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that if $(x_n) \rightarrow x$ then $(\sqrt{x_n}) \rightarrow \sqrt{x}$. Let $x_n \ge 0$ for all $n \in \mathbf{N}$ and $x>0$, show that if $(x_n) \rightarrow x$ then $(\sqrt{x_n}) \rightarrow \sqrt{x}$. My textbook does the following proof: Let $\epsilon >0$, we must find an $N$ such that $n \ge N$ implies $|\sqrt{x_n} - \sqrt{x}|< \epsilon$ for all $n \ge N$. \begin{align} |\sqrt{x_n} - \sqrt{x}| &= |\sqrt{x_n} - \sqrt{x}|\left(\frac{\sqrt{x_n} + \sqrt{x}}{\sqrt{x_n} + \sqrt{x}}\right) \\ & = \frac{|x_n - x|}{\sqrt{x_n} + \sqrt{x}} \\ & \le \frac{|x_n - x|}{\sqrt{x}} \ \ \ \ \ \ \cdots \ \ \ \ \ \ (1) \end{align} Since $(x_n) \rightarrow x$ and $x>0$, we can choose $N$ such that $|x_n - x| < \epsilon\sqrt{x}$ whenever $n \ge N$ and so for all $n \ge N$, $|\sqrt{x_n} - \sqrt{x}| < \frac{\epsilon \sqrt{x}}{\sqrt{x}} = \epsilon$. What I'm wondering is, in Eqn.$(1)$, why is $\sqrt{x}$ kept in the denominator? Couldn't one just have simply $\le |x_n - x|$ and choose $N$ such that $|x_n - x| < \epsilon$ for $n \ge N$ and the rest follows?
Because $\sqrt{x_n}+\sqrt{x}\geq \sqrt x$ and thus $$\frac{1}{\sqrt{x_n}+\sqrt x}\leq \frac{1}{\sqrt x}.$$ You cannot simply drop the denominator: bounding by $|x_n-x|$ alone would require $\sqrt{x_n}+\sqrt x\geq 1$, which fails when $x$ is small, so the $\sqrt x$ must be kept. Once you get to continuous functions, such a proof is easier using the continuity of $x\longmapsto \sqrt x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find $\lim_{x\to 1}\frac{p}{1-x^p}-\frac{q}{1-x^q}$ Find $\lim_{x\to 1}(\frac{p}{1-x^p}-\frac{q}{1-x^q})$ My attempt: I took the LCM and applied L'Hôpital but am not getting the answer. Please help
Hint: write your term in the form $$\frac{p(1-x^q)-q(1-x^p)}{(1-x^p)(1-x^q)}$$ and use L'Hospital. The result is $$\frac{1}{2}(p-q)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to solve the recurrence relation $f(1,n)=n+2$? Given the following information: $f(0,n)= n+1$ $ \ \ $ $\forall n$ $f(m,0)=f(m-1,1)$ when $m>0$ $f(m,n)=f(m-1, f(m,n-1))$ when $m>0$ and $n>0$ I have worked out that $f(1,n)=f(0,f(1,n-1))= f(1,n-1) + 1$ But I am unsure of how to get $f(1,n) =n+2$ from this stage ?
You already have: $$f(1,n)=f(1,n-1) + 1$$ Notice $f(0,1)=f(1,0)=2$, and this is because $f(0,n)= n+1$, and you let $n=1$, you get $f(0,1)= 2$ $f(m,0)=f(m-1,1)$, and you let $m=1$, you get $f(1,0)=f(0,1)$ Thus $f(1,0)=f(0,1)=2$ Let $a_n=f(1,n)$ So you have $a_n=a_{n-1}+1$, and $a_0=2$ Then $$a_n=(a_n-a_{n-1}) + (a_{n-1} - a_{n-2}) + \ldots +(a_1-a_0)+a_0=n+ 2$$
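A sketch (mine) evaluating the recurrence directly to confirm $f(1,n)=n+2$:

```python
# Hedged sketch: memoized evaluation of the Ackermann-style recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def f(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return f(m - 1, 1)
    return f(m - 1, f(m, n - 1))

print(all(f(1, n) == n + 2 for n in range(50)))   # True
```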
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Limit of function calculations I must solve the limit of the following function: $$\lim_{x\to \infty}\frac{2x^3+x-2}{3x^3-x^2-x+1}$$ Are my calculations correct? If not, where is my mistake? $$=\lim_{x\to \infty}\frac{x^3\left(2+\frac{1}{x^2}-\frac{2}{x^3}\right)}{x^3\left(3-\frac{1}{x}-\frac{1}{x^2}+\frac{1}{x^3}\right)} \\ \ =\lim_{x\to \infty}\frac{x^3\left(2+0-0\right)}{x^3\left(3-0-0+0\right)} \\ \ =\frac{2}{3}$$
You are correct, if you have the ratio of two polynomial of the same degree $n$ then $$\lim_{x\to +\infty}\frac{a_nx^n+a_{n-1}x^{n-1}+\dots +a_0}{b_nx^n+b_{n-1}x^{n-1}+\dots +b_0}=\lim_{x\to +\infty}\frac{x^n(a_n+\frac{a_{n-1}}{x}+\dots +\frac{a_0}{x^n})}{x^n(b_n+\frac{b_{n-1}}{x}+\dots +\frac{b_0}{x^n})}\\=\lim_{x\to +\infty}\frac{a_n+\frac{a_{n-1}}{x}+\dots +\frac{a_0}{x^n}}{b_n+\frac{b_{n-1}}{x}+\dots +\frac{b_0}{x^n}}=\frac{a_n+0+\dots +0}{b_n+0+\dots +0}=\frac{a_n}{b_n}$$ where $a_n$ and $b_n$ are different from zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Find all the values of a for a system that has a) no solution b) 1 solution c) infinitely many solutions Solving a system for $a$ with matrices: $$x + ay - z = 1$$ $$-2x - ay + 3z = -4$$ $$-x - ay + az = a + 1$$ This is the solution I found: $$\left(\begin{array}{ccc|c} 1 & a & -1 & 1\\ 0 & a & 1 & -2\\ 0 & 0 & a-1 & a+2 \end{array}\right)$$ And reduced it even further: $$\left(\begin{array}{ccc|c} 1 & 0 & -2 & 3\\ 0 & 1 & 1/a & -2/a\\ 0 & 0 & a-1 & a+2 \end{array}\right)$$ 1. Is it possible to have infinitely many solutions? Would no solution be when $a = 1$ or $a = -2$? 2. What is also confusing is that in my first solution, the first row works with $a =1$ but not $a = -2$? Does this mean that $a = -2$ is not a solution. I apologize about my formatting; if someone can let me know how to properly format my matrices that would be great.
You can only perform your final reduction if $a\ne0$; so you need to split off $a=0$ as a separate case and investigate it individually. You also point out the cases $a=1$ and $a=-2$. Why don't you investigate these too? For instance when $a=1$ you get the final row $0\ 0\ 0\ 3$ which is the equation $0x+0y+0z=3$: impossible. But what happens with $a=-2$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2297934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about statement of axiom of choice Here is a statement of the axiom of choice given in Folland's book Real Analysis. If $\{X_\alpha\}$ is a nonempty collection of nonempty sets, then $\Pi X_\alpha$ is nonempty. What does it mean to say "a nonempty collection"? Isn't it just enough to say "a collection of nonempty sets"?
What does it mean to say "a nonempty collection"? It means that the collection $\{X_\alpha\}_{\alpha \in A}$ itself is non-empty, i.e., that the indexing set $A$ is non-empty. Isn't it just enough to say that a collection of nonempty sets? No, because $\emptyset$ is a collection of non-empty sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proving $\mathcal{F}^{-1}\left\{\frac{i}{2t}\hat{f}(t)\right\}=-\int_{-\infty}^tf(s)\,ds$ in the sense of distributions Let $f \in C^\infty_0(\mathbb{R})$ with $\operatorname{supp}f\subset(0,\infty)$. I would like to prove that $$\mathcal{F}^{-1}\left\{\frac{i}{2t}\hat{f}(t)\right\}=-\int_{-\infty}^tf(s)\,ds,\qquad t\in\mathbb{R}\qquad(\star)$$ using rigorous distribution theory. "Unrigorous" Proof: The approach I considered is to use the fact that the Fourier transform of the Heaviside function is given by $$\hat{H}(\omega)=\frac{1}{2}\delta(\omega)-\frac{i}{2\pi \omega}.$$ Hence, we may write $$ \begin{aligned} \mathcal{F}^{-1}\left\{\frac{i}{2t}\hat{f}(t)\right\}&=\pi\mathcal{F}^{-1}\left\{\frac{i}{2\pi t}\hat{f}(t)\right\} \\ &=-\pi\mathcal{F}^{-1}\{\hat{H}(\omega)\hat{f}(t)\}\qquad(\omega\ne 0) \end{aligned}$$ where we have used that $0\notin\operatorname{supp}f$ and $\operatorname{supp}\delta=\{0\}$. Then $(\star)$ follows via an application of the convolution theorem. Now, my question is: Can we rigorously prove $(\star)$ in the sense of distributions?
As definition of the Fourier transform we take $$\hat\phi(\xi) = \int \phi(x) e^{-i\xi x} \, dx$$ Then we have $\hat\delta = 1$ since $\langle \hat\delta(t), \phi(t) \rangle = \langle \delta(t), \hat\phi(t) \rangle = \hat\phi(0) = \int \phi(x) 1 \, dx = \langle 1(t), \phi(t) \rangle.$ Now, $H = \frac12 (1 + \theta),$ where $\theta(x) = -1$ when $x<0$, and $\theta(x) = +1$ when $x>0$. For the constant part we have $\hat 1(t) = \hat{\hat\delta}(t) = 2\pi \, \delta(-t) = 2\pi \, \delta(t).$ For $\theta$ we have $\theta' = 2\delta$ so $it \hat\theta(t) = \widehat{\theta'}(t) = 2\hat\delta(t) = 2\cdot 1(t).$ Thus $\hat\theta(t) = -2i \, \operatorname{pv}\cfrac{1}{t}$ (the possible extra term $c\,\delta(t)$ vanishes since $\theta$ is odd, so $\hat\theta$ must be odd too). Summarizing we get $$\hat H(t) = \frac12 (\hat 1 + \hat\theta) = \pi\,\delta(t) - i\,\operatorname{pv}\cfrac{1}{t}$$ If we instead define the Fourier transform as $$\hat\phi(\xi) = \frac{1}{2\pi} \int \phi(x) e^{-i\xi x} \, dx$$ we get $$\hat H(t) = \frac12\,\delta(t) - \frac{i}{2\pi} \,\operatorname{pv}\cfrac{1}{t}$$ which agrees with the formula used in the question, with $\frac{1}{t}$ there understood as a principal value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why does the Category Theory Definition of $\mathbf{Set}$ Product Not Define a Subset of a Product? 1 Definition of Product From the Wikipedia definition of a categorical product (in the simple binary case): a product of two objects $X_1$ and $X_2$ is an object $X$ together with morphisms $\pi_1 : X \to X_1$ and $\pi_2 : X \to X_2$ such that for every object $Y$ and every pair of morphisms $f_1 : Y \to X_1$, $f_2 : Y \to X_2$ there exists a unique morphism $f : Y \to X$ with $\pi_1 \circ f = f_1$ and $\pi_2 \circ f = f_2$. 2 Question Now let us focus our attention on $\mathbf{Set}$. $$ Y = \{1, 2 \} \\ X_1 = \{a, b \} \\ X_2 = \{c, d \} \\ $$ Then if $1 \mapsto a, c$ and $2 \mapsto b, d$ (under each respective $f_1$ and $f_2$), then wouldn't we have that $Y$ maps to $\{ (a, c), (b, d) \}$ rather than the full $X_1 \times X_2$? If that's the case, then how does the categorical definition of a product square with the set-theoretic definition (in $\mathbf{Set}$)?
I think you're not being precise enough with the crucial question: wouldn't we have that $Y$ maps to $\{ (a, c), (b, d) \}$ Well, certainly there does exist a function $g:Y\to \{(a,c),(b,d)\}$ such that $g(1)=(a,c)$ and $g(2)=(b,d)$. But this $g$ is not the product of $f_1$ and $f_2$, precisely because of the definition of a product: the codomain of $g$ is not (the Cartesian product of sets) $X_1\times X_2$. (And yes, we know that the Cartesian product of sets is indeed the categorical product. It sounds like you don't doubt that part.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding solutions to $2^x+17=y^2$ Find all positive integer solutions $(x,y)$ of the following equation: $$2^x+17=y^2.$$ If $x = 2k$, then we can rewrite the equation as $(y - 2^k)(y + 2^k) = 17$, so the factors must be $1$ and $17$, and we must have $x = 6, y = 9$. However, this approach doesn't work when $x$ is odd.
It looks like I need to spell out the details for insipidintegrator. If $x$ is even, the prime $17$ is the product of $y+2^{x/2}$ and $y-2^{x/2}.$ Averaging, we find $y=9$ whence $x=6.$ If $x$ is odd, write $y-2^{x/2}=\frac{17}{y+2^{x/2}}$. Letting $x=2n+1$, we have $\Big|\frac{y}{2^n}-\sqrt{2}\Big|=\frac{17}{2^n(y+2^{n+.5})}.$ From Beukers' thesis, If $q=2^k, \ \Big|\frac{p}{q}-\sqrt{2}\Big|>2^{-43.9}q^{-1.8}$ Thus $\Big|p-q\sqrt{2}\Big|>2^{-43.9}q^{.2}$ In our case, $p=y,q=2^n$ so $p+q\sqrt{2}>2q\sqrt{2}.$ Multiplying, $$17=y^2-2^x>2^{-43.9}q^{.2}\cdot 2q\sqrt{2}$$ or $$q^{1.2}<17\cdot2^{42.4}$$ and $$n<38.73955$$ A computer check shows the only solutions are $n=1,2,4.$ These values correspond to $x=3,5,9.$
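The "computer check" is easy to reconstruct; a sketch of my own for the odd case, scanning $x=2n+1$ for $n<39$:

```python
# Hedged sketch: test whether 2^x + 17 is a perfect square for odd x.
from math import isqrt

sols = []
for n in range(39):
    x = 2*n + 1
    v = 2**x + 17
    y = isqrt(v)
    if y*y == v:
        sols.append((x, y))
print(sols)   # [(3, 5), (5, 7), (9, 23)]
```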
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Dedekind complete model Prove that there is no first order theory $T$ in $ L=\{<\}$ such that for every linearly ordered set $A$, $A$ is a model of $T$ iff $A$ is Dedekind complete.
Here is a slight variation of Wore's argument: Suppose that $T$ would characterize Dedekind complete linear orders. Let $T^*$ be $T$ together with $\operatorname{DLONE}$ - the theory of dense linear orders without endpoints. Now $T^*$ characterizes Dedekind complete dense linear orders without endpoints and is consistent with infinite models (as $(\mathbb R;<)$ is such a model). Hence, by Löwenheim-Skolem, it has a countable model $(X; \prec)$. It follows, by the $\aleph_0$-categoricity of $\operatorname{DLONE}$, that $(X; \prec) \cong (\mathbb Q; <) \models T^*$. Contradiction!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is this a valid proof to prove that if $a_n$ converges, then $a_{n+1}-a_{n}$ converges to $0$? by definition I saw someone's comment on another website presenting this proof (directly from the definition): by the definition of a convergent sequence, for every $\epsilon _1 >0$ there is an $N_1$ such that $n> N_1$ ensures $|a_n - L| < \epsilon _1$, where $L$ is the limit of $a_n$. This means that $a_{n+1}$ is also a convergent sequence, so for every $\epsilon _2 > 0$ there is an $N_2$ such that $n> N_2$ ensures $|a_{n+1} - L| < \epsilon_2$, where $L$ is the limit of $a_{n+1}$. Now we want $|a_{n+1} - a_{n}| < \epsilon$ for every $\epsilon > 0$ once $n>N$. If we pick $N = \max\{N_1,N_2\}$, then we have our result.
Yes. This is correct. Using the triangle inequality: given $\epsilon>0$, choose $N$ so large that $|a_{n+1} - L|<\epsilon/2$ and $|a_n - L|<\epsilon/2$ for all $n>N$. Then $$|a_{n+1} - a_n| = |a_{n+1} -L + L - a_n| \le |a_{n+1} - L| + |a_n - L| < \epsilon,$$ so $a_{n+1} - a_n \to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is there a systematic way of determining the "correct" asymptotic approximation? Consider these two quadratic equations, $$\text{i)} \quad x^2+4x-5-\epsilon = 0$$ $$\text{ii)} \quad \quad x^2+(4+\epsilon)x+4-\epsilon = 0$$ If we attempt to find an asymptotic approximation of the form $$x = x_0 + \epsilon x_1+...$$ for i) this works out fine; in ii) we arrive at the equation $-3 = 0$, which is rubbish. From the quadratic formula we find $$\text{i)} \quad x = -2 \pm \sqrt{9+\epsilon}$$ $$\text{ii)} \quad x = \frac{-4-\epsilon \pm \sqrt{\epsilon}\sqrt{\epsilon+12}}{2}$$ The radical epsilon factor leads me to believe that we should try an approximation of the form $$x = x_0 + \sqrt{\epsilon} x_1+ \epsilon x_2 + ...$$ This works out fine. From this I would like to know: 1) Was this the correct way to deduce the new form of the asymptotic approximation? 2) Is there a way to "spot" that a standard asymptotic approximation is going to fail? For instance, above I had no idea that it was going to fail until it failed!
Compute the differential of the equation. * *$(2x+4)dx-d\epsilon=0$. This shows that $\dfrac{dx}{d\epsilon}$ is finite at the roots. *$(2x+4+\epsilon)dx+(x-1)d\epsilon=0$. At the double root $x=-2$, $\dfrac{dx}{d\epsilon}$ is infinite, so that an "ordinary" approximation (a series in whole powers of $\epsilon$) cannot work.
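As an illustrative aside (not in the original answer; it assumes numpy), one can watch the root of case ii) near the double root move like $\sqrt{\epsilon}$, which is exactly why a series in whole powers of $\epsilon$ must fail:

    import numpy as np
    for eps in [1e-2, 1e-4, 1e-6]:
        roots = np.roots([1, 4 + eps, 4 - eps])   # x^2 + (4+eps)x + (4-eps)
        x = roots[np.argmin(np.abs(roots + 2))]   # root closest to -2
        print(eps, abs(x + 2)/np.sqrt(eps))       # ratio settles near sqrt(3)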
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivatives of trigonometric functions: $y= \frac{x \sin(x)}{1+\cos(x)}$ I'm trying to find the derivative of: $$y= \frac{x \sin(x)}{1+\cos(x)}$$ I've tried but I can't achieve the simplified form - Here's my try- $$y' = \left(\frac{x \sin(x)}{1+\cos(x)}\right)'$$ $$y' = \frac{x\sin^2(x) + (\cos(x)+1 )(\sin(x)+x\cos(x))}{(\cos(x)+1)^2}$$ I'm pretty sure the above is correct that is why I didn't show the steps in between ... but I can't simplify it until - $$\frac{x+\sin(x)}{1+\cos(x)}$$ Which concept or formula am I missing out from in order to simplify it further? Or what should I do next? Thanks!
Note that $x\sin^2 x = x(1 - \cos^2 x)$. So we can rewrite the numerator as $$x-x\cos^2 x + x\cos^2 x+(1+\cos x)\sin x +x\cos x = (1+\cos x)(x+\sin x)$$ so $$y'=\frac{(1+\cos x)(x+\sin x)}{(1+\cos x)(1+\cos x)} = \frac{x+\sin x}{1+\cos x}$$
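For readers who want a machine check, here is an optional sketch (assuming sympy is installed; not part of the original answer):

    import sympy as sp
    x = sp.symbols('x')
    y = x*sp.sin(x)/(1 + sp.cos(x))
    claimed = (x + sp.sin(x))/(1 + sp.cos(x))
    print(sp.simplify(sp.diff(y, x) - claimed))   # should print 0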
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Showing that $\sum_{k=1}^n\dfrac{1}{2^{k-1}}$ for $n \geq 2$ is not an integer. Suppose $n\geq2$ and let $s(n)=\sum_{k=1}^n\dfrac{1}{2^{k-1}}$. Then $$s(2)=1+\frac 12=1.5,\quad s(3)=1+\frac12+\frac14=1.75 ,\quad\vdots$$ Is there an elementary proof that $s(n)$ can never be an integer? To be honest: one of my students (K-12) asked this question. I said I would think about it and answer, but I got stuck. I am thankful for your hints, guidance or solutions in advance.
Note that $2^{n-1} s(n)=\sum_{k=1}^n 2^{n-k}=2^n-1$ is odd, so $s(n)=\dfrac{2^n-1}{2^{n-1}}$, and for $n\ge 2$ the denominator $2^{n-1}$ is even: $\frac{\text{odd}}{\text{even}\neq 0}$ cannot be an integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2298890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Solution to this integral? Could anyone solve this integral? $$\int_0^\infty \frac{e^{-x}\sin(x)\cos(ax)}x~\mathrm dx$$ I have tried expanding $\sin\cdot\cos$ using product-to-sum trigonometric identities, but that didn't help much.
Note that we can write $$\begin{align} e^{-x}\sin(x)\cos(ax)&=\text{Re}\left(e^{-x}e^{iax}\sin(x)\right)\\\\ &=\text{Re}\left(\frac{e^{i(a+1+i)x}-e^{i(a-1+i)x}}{2i}\right) \end{align}$$ Hence, we have from the Generalized Frullani's Theorem $$\begin{align} \int_0^\infty \frac{e^{-x}\sin(x)\cos(ax)}{x}\,dx&=\text{Re}\left(\frac{1}{2i}\int_0^\infty \frac{e^{i(a+1+i)x}-e^{i(a-1+i)x}}{x}\,dx\right)\\\\ &=\frac12\arctan\left(\frac{\text{Im}((a+1-i)(a-1+i))}{\text{Re}((a+1-i)(a-1+i))}\right)\\\\ &=\frac12 \arctan(2/a^2) \end{align}$$
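A numeric sanity check of the closed form (an assumption-laden sketch: it uses scipy's quad, and np.sinc to handle the removable singularity at $x=0$):

    import numpy as np
    from scipy.integrate import quad
    for a in [0.5, 1.0, 3.0]:
        # sin(x)/x written as np.sinc(x/pi) so the integrand is finite at 0
        f = lambda x: np.exp(-x)*np.cos(a*x)*np.sinc(x/np.pi)
        val, _ = quad(f, 0, np.inf)
        print(a, val, 0.5*np.arctan(2/a**2))   # the two columns should agree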
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
How many terms (summands) are in the sum? I realize there are similar questions to mine, such as: How many summands are there. However, I require further clarification to understand. Consider the sequence: 4 + 11 + 18 + 25 + ... + 249. 1) How many summands are in the sum? 2) Compute the sum.
By simple investigation, you can observe that the sequence you're dealing with is an arithmetic progression. Hence, if you name it $a_n$, you're interested in summing the sequence $a_n$ defined by: $$ a_n=4+7n, n\in\mathbb N $$ And to know how many summands there are, you solve for $n$ such that: $$ a_n=249=4+7n $$ Hence $n=35$, so there are $36$ summands. The sum is simply: $$ a_0+a_1+...+a_{35}=\sum_{n=0}^{35}(4+7n)=4\times 36+7\times\frac{35\times 36}{2}=4554 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Periodic function $f$ with period $1$ satisfying $f(x)+f(x+1/2)=f(2x)$ is zero. Let $f$ be continuously differentiable on $\Bbb R$, periodic with period $1$, and $f(x)+f(x+1/2)=f(2x)$ for all $x\in\Bbb R$. Show that $f\equiv 0$. A natural attempt is to use Fourier series. Let $f(x)=a_0/2+\sum_{n=1}^\infty(a_n\cos 2 n\pi x+b_n\sin 2n\pi x)$. Then after comparing both sides of $f(x)+f(x+1/2)=f(2x)$, we find $a_0=0$, $2a_{2n}=a_n$, $2b_{2n}=b_n$. From here I have no idea how to proceed.
You can inductively prove that $$ f(x) = \sum_{k=0}^{2^n - 1} f\left(\frac{x}{2^n} + \frac{k}{2^n}\right) $$ holds for all $x \in \Bbb{R}$ and $n \geq 1$. Since $f \in C^1$, this implies $$ f'(x) = \sum_{k=0}^{2^n - 1} f'\left(\frac{x}{2^n} + \frac{k}{2^n}\right) \frac{1}{2^n} \xrightarrow[n\to\infty]{} \int_{0}^{1} f'(t) \, dt = f(1) - f(0) = 0. $$ So $f$ is constant, which then implies $f \equiv 0$ by the functional equation. Edit (2017/08/20). Here are two remarks: * *We can give an alternative solution using Fourier series: Differentiating the functional equation, we have $f'(x) + f'(x+\frac{1}{2}) = 2f'(2x)$. So it follows that $$ \hat{f'}(k) := \int_{0}^{1} f'(x)e^{-2\pi i k x} \, dx = \int_{0}^{1} \frac{f'(\frac{x}{2}) + f'(\frac{x+1}{2})}{2} \, e^{-2\pi i k x} \, dx = \hat{f'}(2k). $$ So by the Parseval's identity, for any $k \neq 0$ and $m \geq 1$ we have $$ |\hat{f'}(k)|^2 = \frac{1}{m}\sum_{j=0}^{m-1} |\hat{f'}(2^j k)|^2 \leq \frac{1}{m} \sum_{n \in \mathbb{Z}} |\hat{f'}(n)|^2 = \frac{1}{m} \int_{0}^{1} |f'(x)|^2 \, dx $$ and letting $m\to\infty$ gives $\hat{f'}(k) = 0$. This tells that $f'$ is constant, from which we easily deduce $f' \equiv 0$ and consequently $f \equiv 0$. *Without differentiability, we have non-trivial solutions such as $$ f(x) = \sum_{n=1}^{\infty} \frac{\cos(2^n \pi x)}{2^n}. $$ Indeed, Weierstrass $M$-test tells that $f$ is continuous. Also, it follows that $$ f(x) + f(x + \tfrac{1}{2}) = \sum_{n=1}^{\infty} \frac{\cos(2^n \pi x) + \cos(2^n \pi x + 2^{n-1}\pi)}{2^n} = \sum_{n=2}^{\infty} \frac{2\cos(2^n \pi x)}{2^n} = f(2x).$$ So the differentiability condition is essential.
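To see the non-differentiable example in action, here is a hedged numeric check (not from the original answer; plain Python with numpy assumed). With the partial sum $f_N$, the identity holds up to a truncation tail of size about $2^{-N}$:

    import numpy as np
    N = 30
    n = np.arange(1, N + 1)
    f = lambda x: np.sum(np.cos(2.0**n*np.pi*x)/2.0**n)
    for x in [0.123, 0.377, 0.81]:
        print(f(x) + f(x + 0.5) - f(2*x))   # ~1e-9, i.e. zero up to the tail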
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the eigenvalues and a basis for an eigenspace of matrix A Find the eigenvalues and a basis for each eigenspace of matrix A: \begin{bmatrix} 1 & -3 & 3 \\ 2 & -2 & 2 \\ 2 & 0 & 0 \\ \end{bmatrix} I found the eigenvalues by computing $|A-\lambda I|$: $\lambda_1 = 0,$ $\lambda_2 = 1,$ $\lambda_3 = -2$ How do I find a basis for each eigenspace of matrix A? I tried the following. $\lambda = 0:$ \begin{bmatrix} 1 & -3 & 3 \\ 2 & -2 & 2 \\ 2 & 0 & 0 \\ \end{bmatrix} Do reduced row echelon form: \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \\ \end{bmatrix} Does this mean one of the results is \begin{bmatrix} 0\\ t\\ t\\ \end{bmatrix} and, to find the other two answers, do I do the same thing except set $\lambda$ equal to the other two values? Or does finding the basis for each eigenspace of matrix A mean something different?
Correct, you subtract each eigenvalue from the main diagonal and use row reduction to find the eigenvectors. You may also want to spend a minute or two trying to spot the eigenvectors because that tends to be faster than row reduction. For instance in your case the second and third columns are scalar multiples of each other and you can use this information to get the eigenvector $(0,1,1)^T$, which corresponds to the eigenspace $\{(0,t,t)^T : t \in \mathbf{R} \}$. For the eigenvalue of $1$ you are looking for a vector $v$ with $Av = v$. If $v = (a, b, c)^T$ then $Av = (a - 3b + 3c, 2a - 2b + 2c, 2a)^T$. Thus $2a = c$ and we can now do this again with $A(a,b,2a)^T = (7a - 3b, 6a - 2b, 2a)^T$. This gives you the equations $7a - 3b = a$ and $6a - 2b = b$, both equivalent to $6a - 3b = 0$. Hence one eigenvector ($a = 1$) is $(1,2,2)^T$. This process is basically row reduction. I am making use of the fact that the third row has just one entry to try to move through the steps a bit faster. For practice, use row reduction to verify that this is the correct eigenvector.
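A quick numerical confirmation (an optional sketch assuming numpy; the $\lambda=-2$ eigenvector below is computed the same way but was not worked out above):

    import numpy as np
    A = np.array([[1, -3, 3], [2, -2, 2], [2, 0, 0]])
    pairs = [(0, [0, 1, 1]), (1, [1, 2, 2]), (-2, [1, 0, -1])]
    for lam, v in pairs:
        v = np.array(v)
        print(lam, A @ v - lam*v)   # each line should be the zero vector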
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that the integral of $1/(1-x^x)$ from $0$ to $1/e$ is divergent How can you show that $\int_0^{1/e}\frac{dx}{1-x^x}$ diverges? Do you have to substitute $x = \frac1u$?
Using the inequality: $e^{-y} > 1 - y, 0 < y < 1$. Put $u = x\ln x$. Observe that $x \to 0^{+} \implies u \to 0^{-}$. Thus you can write $u = -y, 0 < y < 1$. The inequality $e^{-y} > 1 - y$ can be proved easily on $y \in (0,1)$. Thus $\dfrac{1}{1- x^x} = \dfrac{1}{1-e^{x\ln x}} = \dfrac{1}{1-e^{-y}} > \dfrac{1}{y} = -\dfrac{1}{x\ln x}\implies \displaystyle \int_{0}^{1/e}\dfrac{dx}{1-x^x}> \displaystyle \int_{0}^{1/e} \dfrac{dx}{-x\ln x}= +\infty$ by a simple substitution $m = \ln x$, proving divergence.
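To make the slow divergence visible, here is a sketch (not from the original answer; scipy assumed, and the adaptive quadrature is given a generous subdivision limit):

    import numpy as np
    from scipy.integrate import quad
    for eps in [1e-3, 1e-6, 1e-12]:
        val, _ = quad(lambda x: 1/(1 - x**x), eps, np.exp(-1.0), limit=500)
        print(eps, val, np.log(-np.log(eps)))   # grows like ln|ln(eps)|, without bound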
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Neglect $1/2 \ln(2\pi n)$ in Stirling's approximation formula, but this term is not bounded and does not get smaller, but larger One form of Stirling's approximation reads $$ \ln(n!) \approx n\ln(n) - n + \frac{1}{2} \ln(2\pi n) $$ another $$ \ln(n!) \approx n\ln n - n. $$ But that makes me wonder: the difference of the two is $\frac{1}{2}\ln(2\pi n)$, which gets arbitrarily large (very slowly, surely, but it is not bounded), so the error between both approximations grows without bound as $n \to \infty$. But isn't the point of an approximation formula to give a smaller error as $n \to \infty$? So, in what sense is the second approximation valid, when the difference between both terms nonetheless becomes larger and larger for $n\to \infty$? Could anybody please explain this to me?
This is called an asymptotic expansion. Since you have $$\frac{\ln(n!)}{n\ln(n)}\xrightarrow[n\to\infty]{} 1,$$ $n\ln(n)$ is a valid approximation for $\ln(n!)$. You actually have that the error goes to $0$ at the speed $\frac 1{\ln(n)}$ which is very slow. So if you want a better approximation, you can notice that $$\frac{\ln(n!)}{n\ln(n)-n}\xrightarrow[n\to\infty]{} 1,$$ and the error goes to $0$ at the speed $\frac {1/2 \ln(2\pi n)}{n\ln(n)-n}$ which is better already. And so on... you will always have better approximations as the error goes to $0$ faster and faster.
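A small numeric illustration of the two speeds (a sketch in plain Python, using math.lgamma for $\ln(n!)$; not part of the original answer):

    import math
    for n in [10, 10**3, 10**6]:
        exact = math.lgamma(n + 1)           # ln(n!)
        crude = n*math.log(n) - n
        finer = crude + 0.5*math.log(2*math.pi*n)
        print(n, (exact - crude)/exact, (exact - finer)/exact)
    # both relative errors shrink, the second one much faster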
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
An infinite series expansion for $e^e$. How can $e^e$ be expressed as an infinite series, with as much simplification as possible? * *I wrote the series of $e^x$ with $x$ replaced by $e$, and from there I also expanded every $e$ in this expansion. Now I was thinking about expanding it further by the binomial theorem, but I am not able to understand how the binomial theorem can be used here, and how much this can be simplified. In other words, I am trying to write this series as simply as the expansion of $e$; is that possible, and how can it be done? *Any help will be highly appreciated. Thanks in advance.
Let $f(x)=e^x$. Then $$f^{(1)}(x)=f^{(2)}(x)=...=f^{(n)}(x)=...=e^x, \forall x\in\Bbb R.$$ Applying Taylor's theorem with the Lagrange remainder, we get $$f(x)=f(0)+xf^{(1)}(0)+\frac {x^2}{2!} f^{(2)}(0)+...+\frac {x^n}{n!} f^{(n)}(\phi)$$ for some $\phi$ between $0$ and $x$; letting $n\to\infty$, the remainder vanishes. Then you will get $$e^x=1+x+\frac{x^2}{2!}+...$$ i.e. $$e^x=\sum_{n=0}^{\infty}\frac {x^n}{n!}$$ And hence $$e^e=\sum_{n=0}^{\infty}\frac {e^n}{n!}$$ $$=\sum_{n=0}^{\infty}\frac{\sum_{i=0}^{\infty}\frac{n^i}{i!}}{n!}.$$
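Those with a computer at hand can watch the partial sums converge (an optional sketch in plain Python, not part of the original answer):

    import math
    target = math.e**math.e                  # about 15.15426
    s = 0.0
    for n in range(25):
        s += math.e**n/math.factorial(n)
    print(s, target)                         # agree to near machine precision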
{ "language": "en", "url": "https://math.stackexchange.com/questions/2299994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Converting decision tree into a logical expression I need to convert this decision tree into a logical expression by using "and", "or" and "not" logical operators. I have been trying to solve this for 3 days. Any help would be appreciated.
It's $$(\neg F \wedge \neg H) \vee (\neg F \wedge H \wedge J) \vee (F \wedge G) \vee (F \wedge \neg G \wedge K)$$ (Here '$\wedge$' means 'and', '$\vee$' means 'or' and '$\neg$' means 'not'.)
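Since the decision tree itself is an image that is not reproduced here, the following is only a hedged sketch (plain Python) that tabulates the expression given in this answer, which is one way to compare it against the tree:

    from itertools import product
    expr = lambda F, G, H, J, K: ((not F and not H) or (not F and H and J)
                                  or (F and G) or (F and not G and K))
    for F, G, H, J, K in product([False, True], repeat=5):
        print(F, G, H, J, K, expr(F, G, H, J, K))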
{ "language": "en", "url": "https://math.stackexchange.com/questions/2300372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Reason for order of multiplication of ordinals I understand that addition and multiplication on ordinal numbers cannot be commutative and I know why $1+\omega$ and $2\times\omega$ must be different from $\omega+1$ and $\omega\times2$, respectively. For addition, I see why $1+\omega$ is just $\omega$, because it represents a position after $1$ and then $\aleph_0$ elements, and I can include the first element into the rest and the position wouldn't change. What I am intrigued about is why $\omega+\omega$ is equal to $\omega\times2$ and not $2\times\omega$, as I would think. How I interpret an expression like $5×2$ is "five-times two", like two seen five-times, as in $2+2+2+2+2$. In the context of ordinals, I would also imagine the expression $\omega+\omega$ (as I see omega two-times) to be $2\times\omega$. In contrast, I would interpret an expression like $\omega\times2$ to be $\underbrace{2+2+2+...}_\omega$, which also corresponds to a set of $\aleph_0$ elements and thus be equal to $\omega$. However, the usual definition is the exact opposite. I understand that notation is not the underlying mathematics and one may freely redefine the operator in terms of changing the order of operands, but this definition seems to me arbitrary and inconsistent. Is there any particular reason why it was defined so?
Well, it is quite arbitrary. But one soft reason to define it this way is that it makes ordinal arithmetic left distributive, i.e. for all ordinals $\alpha, \beta, \gamma$ $$\alpha \cdot (\beta + \gamma) = \alpha \cdot \beta + \alpha \cdot \gamma$$ and allows for left cancellation $$ \alpha > 0 \wedge \alpha \cdot \beta = \alpha \cdot \gamma \implies \beta = \gamma. $$ Now, since historically these (and similar) laws have been stated predominantly for 'left variants' rather than their 'right' counterparts, it feels natural to me to define ordinal arithmetic the way it is. edit: I actually also see an argument for the other position. If we consider $\alpha \cdot \beta$ it's $\beta$ that behaves more like a scalar in that, for every $\alpha > 0$ $$\beta \mapsto \alpha \cdot \beta$$ is strictly increasing and, as above, $\alpha \cdot (\beta + \gamma) = \alpha \cdot \beta + \alpha \cdot \gamma$. Since modules usually have their scalars to the left (and I consider your 'beer example' to be a natural module), this might serve as an argument to define at least ordinal multiplication the other way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2300478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to find power series of $f(z)=\frac{e^z}{1-z}$ at $z_0=0$? I tried to calculate a few derivatives, but I can't get $f^{(n)}(z)$ from them. Any other way? $$f(z)=\frac{e^z}{1-z}\text{ at }z_0=0$$
Hint: $$\frac1{1-z}=\sum_{n=0}^\infty z^n$$ $$e^z=\sum_{n=0}^\infty\frac{z^n}{n!}$$ Now apply Cauchy products to see that $$\frac{e^z}{1-z}=\sum_{n=0}^\infty z^n\sum_{k=0}^n\frac1{k!}=\sum_{n=0}^\infty e_n(1)z^n$$ where $e_n(x)$ is the exponential sum formula.
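If sympy is available, the Cauchy-product coefficients can be checked mechanically (a sketch; not from the original answer):

    import sympy as sp
    z = sp.symbols('z')
    taylor = sp.series(sp.exp(z)/(1 - z), z, 0, 6).removeO()
    for n in range(6):
        e_n = sum(sp.Rational(1, sp.factorial(k)) for k in range(n + 1))
        print(n, taylor.coeff(z, n), e_n)    # the two columns agree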
{ "language": "en", "url": "https://math.stackexchange.com/questions/2300613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Degenerate eigenvalues problem for a 4x4 system In summary, my question is whether or not I'm allowed to have the zero vector as my generalised eigenvector. I'm given the system $$x'=\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -2 & 2 & -3 & 1 \\ 2 & -2 & 1 & -3 \end{bmatrix}x$$ I also found two eigenvalues: 0 & -2. For 0, I have an eigenvector $$ \begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}$$ and for -2, I have $$ \begin{bmatrix} -1 \\ 0 \\ 2 \\ 0 \end{bmatrix} \ \text{and}\ \begin{bmatrix} 0 \\ -1 \\ 0 \\ 2 \end{bmatrix}$$ I tried finding a third generalised eigenvector using eigenvalue $-2$, but it just doesn't exist (I get a row of zeros equal to $4$). What's going on and how do I proceed?
Let's name your matrix A. The matrix $(A + 2E) = \begin{pmatrix} 2 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 \\ -2 & 2 & -1 & 1 \\ 2 & -2 & 1 & -1 \end{pmatrix}$ has two linearly independent ordinary eigenvectors with eigenvalue 0: $\begin{pmatrix} -1 \\ 0 \\ 2 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ -1 \\ 0 \\ 2 \end{pmatrix}$, which you have already found. They are A's eigenvectors with eigenvalue -2. There are no other ordinary eigenvectors of A with eigenvalue -2 that are linearly independent with these two, because rk(A + 2E) = 2. The matrix $(A + 2E)^2 = \begin{pmatrix} 2 & 2 & 1 & 1 \\ 2 & 2 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$ has three linearly independent ordinary eigenvectors with eigenvalue 0: $\begin{pmatrix} -1 \\ 0 \\ 2 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 0 \\ -1 \\ 0 \\ 2 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix}$. The first two ones are A's ordinary eigenvectors with eigenvalue -2, which you have already found. And the vector $\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix}$ is a generalised eigenvector of rank 2 of the matrix A with eigenvalue $-2$, making it the thing you were looking for.
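A quick numpy verification of the chain (an optional sketch, not part of the original answer):

    import numpy as np
    A = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-2, 2, -3, 1], [2, -2, 1, -3]])
    B = A + 2*np.eye(4)
    v = np.array([1, -1, 0, 0])
    print(B @ v)        # nonzero, so v is not an ordinary eigenvector
    print(B @ (B @ v))  # the zero vector: v is a rank-2 generalised eigenvector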
{ "language": "en", "url": "https://math.stackexchange.com/questions/2300725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculating $\sum_{k=1}^\infty \frac{k^2}{2^k}=\frac{1}{2}+\frac{4}{4}+\frac{9}{8}+\frac{16}{16}+\frac{25}{32}+\cdots+\frac{k^2}{2^k}+\cdots$ I want to know the value of $$\sum_{k=1}^\infty \frac{k^2}{2^k}=\frac{1}{2}+\frac{4}{4}+\frac{9}{8}+\frac{16}{16}+\frac{25}{32}+\cdots+\frac{k^2}{2^k}+\cdots$$ I added up to $k=50$ and got the value $5.999999999997597$, so it seems that it converges to $6.$ But, I don't know how to get the exact value. Is there any other simple method to calculate it?
If we start with the power series $$ \sum_{k=0}^{\infty}x^k=\frac{1}{1-x} $$ (valid for $|x|<1$) and differentiate then multiply by $x$, we get $$ \sum_{k=1}^{\infty}kx^k=\frac{x}{(1-x)^2}$$ If we once again differentiate then multiply by $x$, the result is $$ \sum_{k=1}^{\infty}k^2x^k=\frac{x(x+1)}{(1-x)^3}$$ and setting $x=\frac{1}{2}$ shows that $$ \sum_{k=1}^{\infty}k^22^{-k}=\frac{\frac{3}{4}}{\frac{1}{8}}=6 $$ as you guessed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2300889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve the Inequality $x+\frac{x}{\sqrt{x^2-1}} \gt \frac{35}{12}$ Solve the inequality $$x+\frac{x}{\sqrt{x^2-1}} \gt \frac{35}{12}$$ First of all, the domain of the LHS is $(-\infty \:\: -1) \cup (1 \:\: \infty)$. So I assumed $x=\sec y$, since the range of $\sec y$ is $(-\infty \:\: -1) \cup (1 \:\: \infty)$. So $$\sec y+ |\csc y| \gt \frac{35}{12}$$ Any help here to proceed?
HINT: Clearly, we need $x>0$ so will be $\sec y,\csc y\implies0< y<\dfrac\pi2$ Now $\sec y+\csc y>\dfrac{35}{12}$ $\iff\left(\dfrac{35}{12}\right)^2<\sec^2y+\csc^2y+2\sec y\csc y=\sec^2y\csc^2y+2\sec y\csc y$ as $\sec^2y\csc^2y=\sec^2y+\csc^2y$ Set $\sec y\csc y=u$ to find $$u^2+2u>\left(\dfrac{35}{12}\right)^2\iff(u+1)^2>\left(\dfrac{37}{12}\right)^2$$ As $u>0,$ $$u+1>\dfrac{37}{12}\iff\dfrac{25}{12}<u=\dfrac2{\sin2 y}\iff\dfrac{24}{25}>\sin2y=\dfrac{2\tan y}{1+\tan^2y}$$ Can you find the range of $\tan y?$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Suppose $h:M \rightarrow E$ is a homeomorphism onto its image . Show that $h(M)$ is a $G_{\delta}$-set. Let $M$ and $E$ be complete metric spaces. Suppose $h:M \rightarrow E$ is a homeomorphism onto its image (i.e. $h$ is a continuous one-to-one map, and $h^{-1}|_{h(M)}$ is continuous). Show that $h(M)$ is a $G_{\delta}$-set. My attempt: I think it has something to do with $$h(M) = \bigcap_{n \in \mathbb{N}}h(?)$$ where $?$ is open in $E.$ However, I do not know what precisly is $?$.
Let $B_E = \{ e \in E: \| e \| < 1 \}$ be the open unit ball centered at the origin with radius $1.$ Let $(\varepsilon_n)_{n \in \mathbb{N}}$ be a sequence of positive real numbers such that $\varepsilon_n \rightarrow 0$ as $n \rightarrow \infty.$ We claim that $$h(M) = \bigcap_{n \in \mathbb{N}}\bigcup_{y \in h(M)}(y+ \varepsilon_n \cdot B_E).$$ For each $n \in \mathbb{N},$ clearly we have $h(M) \subseteq \cup_{y \in h(M)}(y + \varepsilon_n \cdot B_E).$ Therefore, $h(M) \subseteq \bigcap_{n \in \mathbb{N}}\bigcup_{y \in h(M)}(y+ \varepsilon_n \cdot B_E).$ I constructed the following proof based on Batominovski's comment. To show the other inclusion, let $z \in \bigcap_{n \in \mathbb{N}}\bigcup_{y \in h(M)}(y+ \varepsilon_n \cdot B_E).$ By definition, for each $n \in \mathbb{N},$ there exists $y_n \in h(M)$ such that $z \in y_n+\varepsilon_n \cdot B_E,$ i.e. $\|z - y_n\| < \varepsilon_n.$ Letting $n \rightarrow \infty,$ we get $y_n \rightarrow z,$ so $z = \lim_n y_n \in h(M).$ Hence, the reverse inclusion holds. Since $\bigcup_{y \in h(M)}(y+ \varepsilon_n \cdot B_E)$ is open in $E,$ we conclude that $h(M)$ is a $G_{\delta}$-set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $a^4+b^4$ factorable in two different ways, and why are the solutions not the same? I am working on factoring $a^4+b^4$ and I have found two different solutions to this. First, I have factored it to $$a^4+b^4=(a+b)[a^3-a^2b+ab^2+b^3]-2ab^3$$ But then I also found that the equation is factorable to $$a^4+b^4=(a^2+b^2-\sqrt{2}ab)(a^2+b^2+\sqrt{2}ab)$$ First of all, why are there two ways of factoring this equation, and why are the solutions not the same? Additionally, is it possible to convert either solution so it is the same as the other?
Contrary to what's being said in the comments, neither of those expressions is wrong because both yield $a^4 + b^4$ when expanded. It is, however, incorrect to call this a factorization: $$a^4+b^4=(a+b)[a^3-a^2b+ab^2+b^3]-2ab^3$$ When you factor an expression, say $F(x)$, you break it down into two or more factors, such as $F(x) = G(x)H(x)K(x)$. That is not what you did here. You broke it down into two factors plus a remainder of $-2ab^3$. That is not factoring. To address the question more generally, since $\Bbb R$ (or $\Bbb C$, or $\Bbb Q$, whichever you're working with) is a field, then $\Bbb R[x]$ is a unique factorization domain (among other things, but those other things are irrelevant here). This means that every expression in $\Bbb R[x]$ has one and only one factorization (up to ordering of the factors, for example $x(x-1)$ and $(x-1)x$ are considered the same factorization of $x^2-x$). An expression may have two factorizations that look different but actually aren't. For a simple example over $\Bbb C[x]$, we could say $x^4 - 1 = (x^2 + 1)(x^2 - 1)$ and we could say $x^4 - 1 = (x^2 - x + i(1-x))(x^2 + x + i(1+x))$. These look very different but they're really the same. How can we be sure they're the same? Break it into linear factors over $\Bbb C$: $$ x^4 - 1 = (x-1)(x+1)(x+i)(x-i)$$ $(x-1)(x+1) = x^2-1$ and $(x+i)(x-i) = x^2+1$. That's how we can get the first factorization. Alternatively, $(x-1)(x-i) = x^2 - x + i(1-x)$ and $(x+1)(x+i) = x^2 + x + i(1+x)$. That's how we can get the second factorization.
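For the skeptical reader, both displayed identities can be expanded mechanically (a sketch assuming sympy; not part of the original answer):

    import sympy as sp
    a, b = sp.symbols('a b')
    e1 = (a + b)*(a**3 - a**2*b + a*b**2 + b**3) - 2*a*b**3
    e2 = (a**2 + b**2 - sp.sqrt(2)*a*b)*(a**2 + b**2 + sp.sqrt(2)*a*b)
    print(sp.expand(e1 - a**4 - b**4), sp.expand(e2 - a**4 - b**4))   # 0 0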
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\left[\mathbb{Q}(\sqrt[3]{5}+\sqrt{2}):\mathbb Q\right]=6$ Prove that $\left[\mathbb{Q}(\sqrt[3]{5}+\sqrt{2}):\mathbb Q\right]=6$ My idea was to find the minimal polynomial of $\sqrt[3]{5}+\sqrt{2}$ over $\mathbb{Q}$ and to show that $\deg p(x)=6$ Attempt: Let $u:=\sqrt[3]{5}+\sqrt{2}\\ u-\sqrt[3]{5}=\sqrt 2\\ (u-\sqrt[3]{5})^2=2\\ u^2-2\sqrt[3]{5}u+5^{2/3}-2=0\\ u^2+5^{2/3}-2=2\sqrt[3]{5}u\\ (u^2+5^{2/3}-2)^3=2^3\cdot 5 \cdot u^3$ I'm stuck here. Here is Wolfram's result. My previous question over $\mathbb{Q}(\sqrt[3]{5})$
First, $p:=x^3-5$ and $q:=x^2-2$ are the minimal polynomials of $\sqrt[3]{5}$ and $\sqrt{2}$ over $\mathbb{Q}$ since they are monic and using Eisenstein's criterion, they are irreducible over $\mathbb{Q}$. Then, the following is an annihilator polynomial with rational coefficients of their sum: $$\textrm{res}_y(p(y),q(x-y))=x^6 - 6 x^4 - 10 x^3 + 12 x^2 - 60 x + 17.$$ Indeed, $p(y)$ and $q(\sqrt[3]{5}+\sqrt{2}-y)$ both vanish on $\sqrt{2}$. To conclude, it suffices to see that the above polynomial is irreducible over $\mathbb{Q}$ and I have no trick to do so: reduction mod $2$ and $3$ fail, the reduced polynomial has a root. Feel free to share one.
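The resultant and the missing irreducibility check can both be delegated to a computer algebra system (a sketch assuming sympy; this does not replace a hand proof of irreducibility, it only reports what the system finds):

    import sympy as sp
    x, y = sp.symbols('x y')
    r = sp.resultant(y**3 - 5, (x - y)**2 - 2, y)
    print(sp.expand(r))          # x**6 - 6*x**4 - 10*x**3 + 12*x**2 - 60*x + 17
    print(sp.factor_list(r, x))  # a single degree-6 factor: irreducible over Q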
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What does Liu mean by "topological open/closed immersion" in his book "Algebraic Geometry and Arithmetic Curves"? In his book "Algebraic Geometry and Arithmetic Curves", Liu defines open/closed immersions of locally ringed spaces in terms of topological open/closed immersions: What does he mean by the terms "topological open (resp. closed) immersion"? Does he mean that * *$f(X)$ is an open (resp. closed) subset of $Y\!,\,$ and *the induced map $X\to f(X); \;x \mapsto f(x)$ is a homeomorphism? Many thanks! :)
Yes, that's a correct definition. Yours (1.) is also equivalent to 2. below. * *$f(X)$ is open (closed) and $f$ is a homeomorphism on its image *$f$ is open (closed) and a homeomorphism on its image If we then define an immersion to be a homeomorphism on its image, then an open (closed) immersion really is an immersion that is open (closed). Note: it is also called an embedding, which is safer to use than immersion, because it is closer to the terminology used in differential geometry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
Prove fact about polynomial with coefficients from $\mathbb{Z}$ Let $f \in \mathbb{Z}[x]$, and suppose that for more than $3$ (i.e. $\geqslant 4$) distinct $a \in \mathbb{Z}$ we have $f(a) = 1$. Prove that $\forall a \in \mathbb{Z} \ f(a)\neq -1$. I clearly have no idea how to tackle this. The only thing I've noticed (I'm pretty sure it is an obvious fact for most of you) is that, since a polynomial is a continuous function, we have to prove that $f(a) > -1\ \forall a \in \mathbb{Z}$.
Let $$ f(x)=(x-1)(x-3)(x-5)(x-7)+1. $$ Then $f(a)=1$ for $a=1,3,5,7$. Nevertheless we have $f(6)=-14<-1$. As for the first claim, compare with this question, for an idea how to tackle this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Heegaard Splittings of Non-orientable 3 manifolds A well known and oft-utilized fact from 3-manifold topology is that all closed, orientable 3-manifolds admit Heegaard splittings. I am trying to understand what the appropriate notion of Heegaard splitting for a closed, nonorientable 3-manifold should be, assuming I want lots of familiar facts to carry over to this setting. I'm also curious about the interaction with the orientable case. In particular, some things I am pondering include: Given a closed, nonorientable 3-manifold $Y$, 1) Can one decompose $Y = H_{1} \cup_{\Sigma}H_{2}$, for some surface $\Sigma \hookrightarrow Y$, and some (possibly nonorientable) handlebodies $H_{1},H_{2}$ ? 2) Can one decompose $Y = \Sigma \coprod \tilde{H}$, for a one-sided surface $\Sigma \hookrightarrow Y$ and an open handlebody $\tilde{H}$? And in the same vein as this question: 3) Can one realize $Y$ as a quotient $M/h$ of a free, involutive, orientation reversing homeomorphism $h:M \rightarrow M$ of an orientable 3-manifold $M$, where $h$ exchanges the two sides $U,V$ of some Heegaaard splitting $M= U \cup V$?
Considering question 2), let me exhibit a 3-manifold which can be split along an orientable surface: Let $N_3$ be the nonorientable genus three surface. Take $E=N_3\times S^1$. Since $N_3=T_0\cup_C M\ddot{o}$, where $T_0$ is a punctured 2-torus, $M\ddot{o}$ is a Möbius band and $C=\partial T_0=\partial M\ddot{o}=T_0\cap M\ddot{o}$, then $E=(T_0\times S^1)\cup_U (M\ddot{o}\times S^1)$, where $U$ is the 2-torus $C\times S^1$. But one can consider $M\ddot{o}\times S^1=\overline{{\cal N}(C\ddot{o}\times S^1)}$ (a regular neighbourhood, which has as boundary another connected 2-torus), where $C\ddot{o}$ is the core of $M\ddot{o}$. Then $E=(T_0\times S^1)\cup \overline{{\cal N}(C\ddot{o}\times S^1)}$. That is $$E\smallsetminus{\rm int}\,{\cal N}(C\ddot{o}\times S^1) =T_0\times S^1.$$ That is, the torus $C\ddot{o}\times S^1$ (an orientable surface) splits $E$ into (a regular neighbourhood of) $C\ddot{o}\times S^1$ and $T_0\times S^1$, both orientable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2301919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Given that $f(E)$ is compact if and only if $E$ is, can we deduce the continuity of $f$? Let $(X_1,d_1)$ and $(X_2,d_2)$ be metric spaces. A criterion for the global continuity of some $f:X_1\to X_2$ is that for all closed $E\subseteq X_2$, $f^{-1}(E)$ is closed. This is a corollary of the analogous theorem for counterimages of open subsets of the codomain. Another necessary condition is that the image of any compact subset of $X_1$ be compact. But it is not sufficient: e.g. in $\mathbb{R}$, $$ f(x)= \begin{cases} x & x\ge 0\\ \sin\frac1x & x <0 \end{cases} $$ satisfies it, but it is discontinuous in $x_0=0$. I noticed that, on the other hand, $f((-1,1))=[-1,1]$ so I tried strengthening the condition, i.e. requiring the preimage of any compact subset of $X_2$ to be compact. Vacuously, it works if $X_1$ and $X_2$ are discrete and finite because then all of their subsets are compact and $f $ is continuous in every point of $X_1$ because each one is isolated. By contrapositive, I think I have proved it also for $X_1\subseteq\mathbb{R}=X_2$, considering a point of discontinuity of the different kinds (except removable discontinuites not included in the domain, since thus the function is still continuous on it). I haven't really tried to generalise, so here's the question: Let $(X_1,d_1)$ and $(X_2,d_2)$ be metric spaces. Let $f:X_1\to X_2$ be such that for all $E\subseteq X_1$, $f(E)$ is compact if and only if $E$ is. Must $f$ be continuous?
Yes, $f$ must be continuous. For metric spaces, continuity is equivalent to sequential continuity (this requires some choice, but topology without choice is very strange, so we assume choice anyway). So suppose that $X_1, X_2$ are metric spaces, and $f \colon X_1 \to X_2$ is not continuous at $p$. Then there is a sequence $(x_n)_{n\in\mathbb{N}}$ in $X_1 \setminus \{p\}$ and an $\varepsilon > 0$ such that $x_n \to p$ and $d_2(f(x_n),f(p)) \geqslant \varepsilon$ for all $n$. The set $A := \{ x_n : n \in \mathbb{N}\}$ is not compact, while $A \cup \{p\}$ is compact. But since $f(p)$ has positive distance from $f(A)$, the set $f(A)$ is compact if and only if $f(A) \cup \{f(p)\} = f(A\cup \{p\})$ is compact. So either we have found a compact set whose image isn't compact, or we have found a non-compact set whose image is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Normed vector space inequality $|\|x\|^2 - \|y\|^2| \le \|x-y\|\|x+y\|$ I'm looking at an old qualifying exam, and one question is to prove the following inequality in any normed vector space: $$ |\|x\|^2 - \|y\|^2| \le \|x-y\|\|x+y\| $$ My initial thought was that $$ |\|x\|^2 - \|y\|^2| = |(\|x\|+\|y\|)(\|x\|-\|y\|)|=\left|(\|x\|+\|y\|)\right||(\|x\|-\|y\|)|,$$ and it's easy to show $|\|x\|-\|y\||$ is less than both $\|x-y\|$ and $\|x+y\|$, but it isn't true that $\|x\|+\|y\|$ is less than either in general (by the triangle inequality it's 'usually' larger than the latter), so I'm unsure what to do. Any guidance is appreciated.
We may assume w.l.o.g. that $\|x\|^2 \geq \|y\|^2$. Write $x = u + v$ and $y = u - v$. Now the inequality can be rewritten as $$ \|u + v\|^2 \leq 4 \|u\| \|v\| + \|u - v\|^2. $$ But this is the inequality one gets by combining $\|u + v\|^2 \leq (\|u\| + \|v\|)^2$ and $|\|u\| - \|v\||^2 \leq \|u - v\|^2$.
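A randomised spot check in a norm with no inner product, e.g. the $\ell^1$ norm on $\mathbb R^5$ (a sketch assuming numpy; not part of the original answer):

    import numpy as np
    rng = np.random.default_rng(0)
    n1 = lambda v: np.linalg.norm(v, 1)
    for _ in range(5):
        x, y = rng.normal(size=5), rng.normal(size=5)
        lhs = abs(n1(x)**2 - n1(y)**2)
        print(lhs <= n1(x - y)*n1(x + y) + 1e-12)   # True every time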
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
Bayes Theorem Coin Problem There are 3 coins. One is regular (both head and tail) and the other two only have head sides. Now, flip one coin and get a head. The question is: what is the probability that this coin is the regular one? Or, twisted: there are 3 coins, one regular (both head and tail) and the other two with only head sides; flip one coin and get a head; what is the probability that you picked up the fair coin and it came up heads? P(H)=P(A)⋅P(H∣A)+P(B)⋅P(H∣B)+P(C)⋅P(H∣C) = (1/3)(2/2) + (1/3)(2/2) + (1/3)*(1/2) = 5/6 P(A∣H)=P(A)⋅P(H∣A)/P(H) = (1/3)/(5/6) =2/5 I am confused as to whether this is the correct answer or not.
you are using $C$ for the regular coin. So you should be calculating $P(C|H)$, which is $$P(C|H)=\frac{P(H|C)*P(C)}{P(H)}=\frac{\frac{1}{6}}{\frac{5}{6}}=\frac{1}{5}$$
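A Monte Carlo sanity check (an optional sketch in plain Python, not part of the original answer):

    import random
    heads = regular_and_heads = 0
    for _ in range(10**6):
        coin = random.choice(['HH1', 'HH2', 'fair'])
        face = 'H' if coin != 'fair' else random.choice(['H', 'T'])
        if face == 'H':
            heads += 1
            regular_and_heads += (coin == 'fair')
    print(regular_and_heads/heads)   # close to 1/5 = 0.2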
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to find $\lim\limits_ {n\to\infty}n^5\int_n^{n+2}\frac{{x}^2}{{ {2+x^7}}}\ dx$? How to find $$\displaystyle\lim_ {n\to\infty}n^5\int_n^{n+2}\dfrac{{x}^2}{{ {2+x^7}}}\ dx$$Can I use Mean Value Theorem? Someone suggested I should use Lagrange but I don't know how it would help.
$$ \begin{align} 2n^5\frac{n^2}{2+(n+2)^7}&\le n^5\int_n^{n+2}\frac{x^2}{2+x^7}\,\mathrm{d}x\le2n^5\frac{(n+2)^2}{2+n^7}\\[12pt] \frac2{\frac2{n^7}+\left(1+\frac2n\right)^7}&\le n^5\int_n^{n+2}\frac{x^2}{2+x^7}\,\mathrm{d}x\le\frac{2\left(1+\frac2n\right)^2}{\frac2{n^7}+1} \end{align} $$ Apply the Squeeze Theorem.
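Numerically the squeezed quantity is already close to its limit for moderate $n$ (a sketch assuming scipy; not part of the original answer):

    import numpy as np
    from scipy.integrate import quad
    for n in [10, 100, 1000]:
        val, _ = quad(lambda x: x**2/(2 + x**7), n, n + 2)
        print(n, n**5*val)   # approaches 2, the common limit of the two bounds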
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $A$ is densely defined symmetric operator, $\lambda-A$ is onto $X \implies \lambda \in \rho(A)$ I want to understand the following statement: If $\lambda \in \mathbb{C}\setminus\mathbb{R}$ and $A$ is densely defined symmetric operator ($A:D(A)\to X$), $\lambda-A$ is onto $X \implies \lambda \in \rho(A)$ I think there is a missing assumption: that $A$ has to be closed. But I'm not entirely sure; I know that if $A$ is symmetric, then with given $\lambda$, $\lambda-A$ is injective, so this means $\lambda - A$ is a bijection onto $X$. Now if $A$ was closed, then we would have a conclusion. But it's not included in the assumption, so I'm wondering if I'm just missing something, or assumption really need closedness?
One does not need closedness. Let $z=x+iy$, then we compute $$ \Vert (A-z)\phi \Vert^2 = \Vert (A-x)\phi\Vert^2 + y^2 \Vert \phi \Vert^2 + \langle (A-x)\phi , -iy \phi \rangle + \langle -i y \phi , (A-x) \phi \rangle $$ Now use the fact that $A$ is symmetric to show the last two terms cancel. Then you end up with $$ \Vert (A-z)\phi \Vert^2 \geq y^2 \Vert \phi \Vert^2.$$ Thus, if $\psi = (A-z)\phi$ you have $$ \Vert (A-z)^{-1}\psi \Vert \leq \frac{1}{\vert y \vert} \Vert \psi \Vert.$$ I.e. $(A-z)^{-1}$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The approximation function of $\frac{x}{y}$ Is there an approximation of $$\frac{x}{y},$$ where the approximation is of the form $f(x) + f(y)$ or $f(x) - f(y)$? That is to say, the approximation should split $x$ and $y$.
Though the question is unspecific about what constitutes an "approximation", the answer appears to be "no". As lulu notes in the comments, an approximation $\frac{x}{y} \approx f(x) + f(y)$ leads (for $x = y$) to $$ 1 = \frac{x}{x} \approx f(x) + f(x) = 2f(x)\quad\text{for all $x$.} $$ Similarly, an approximation $\frac{x}{y} \approx f(x) - f(y)$ leads (for $x = y$) to $$ 1 = \frac{x}{x} \approx f(x) - f(x) = 0. $$ From the other direction (i.e., starting with customary notions of approximation and seeing where they lead): If $y_{0} \neq 0$, then for $|y - y_{0}| < |y_{0}|$ the geometric series gives the first-order approximation \begin{align*} \frac{x}{y} &= \frac{x}{y_{0} + (y - y_{0})} = \frac{x}{y_{0}} \cdot \frac{1}{1 + (\frac{y - y_{0}}{y_{0}})} \\ &= \frac{x}{y_{0}} \cdot \left[1 - \frac{y - y_{0}}{y_{0}} + \bigg(\frac{y - y_{0}}{y_{0}}\biggr)^{2} - \cdots\right] \\ &\approx \frac{x}{y_{0}} - \frac{x(y - y_{0})}{y_{0}^{2}}, \end{align*} which is not of the form you seek.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find the range of $f(x) = 3x^4 - 16x^3 + 18x^2 + 5$ without applying differential calculus Find the range of $f(x) = 3x^4 - 16x^3 + 18x^2 + 5$ without applying differential calculus. I tried to express $$f(x)=3x^4-16x^3+18x^2+5=A(ax^2+bx+c)^2+B(ax^2+bx+c)+C $$ which is a quadratic in $ax^2+bx+c$ which itself is quadratic in $x$. Comparing coefficients, we get $$Aa^2=3 \tag{1}$$ $$2abA=-16$$ $$A(b^2+2ac)+aB=18$$ $$2bcA+bB=0$$ $$Ac^2+Bc+C=5$$ But I felt its very lengthy to solve these equations. Any hints?
The range is $[k,+\infty)$ where $k$ is the minimum value of the function, i.e. the largest constant $k$ such that the inequality $$ 3x^4-16x^3+18x^2+5\ge k $$ is true for every $x \in \mathbb{R}$. Such a $k$ is among the values for which the equation $$ 3x^4-16x^3+18x^2+5- k=0 $$ has a double root, that is, for which the discriminant of $3x^4-16x^3+18x^2+5- k$ vanishes. The calculation of the discriminant for a quartic polynomial is a bit ''heavy'', but Wolfram Alpha gives: $$\Delta= -6912(k-5)(k-10)(k+22)$$ The candidates are therefore $k=5,10,-22$; since the global minimum of an upward-opening quartic is the smallest of its critical values, the minimum value is $k=-22$.
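The quoted discriminant can be reproduced with a computer algebra system (a sketch assuming sympy):

    import sympy as sp
    x, k = sp.symbols('x k')
    d = sp.discriminant(3*x**4 - 16*x**3 + 18*x**2 + 5 - k, x)
    print(sp.factor(d))   # should match -6912*(k-5)*(k-10)*(k+22) up to ordering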
{ "language": "en", "url": "https://math.stackexchange.com/questions/2302781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }