Seeking other methods to evaluate $\int_0^{\infty} \frac{\ln \left(x^n+1\right)}{x^n+1} dx$ for $n\geq 2$. Inspired by my post, I went further to investigate the general integral and found a formula: $$ I_n=\int_0^{\infty} \frac{\ln \left(x^n+1\right)}{x^n+1} dx =-\frac{\pi}{n} \csc \left(\frac{\pi}{n}\right)\left[\gamma+\psi\left(1-\frac{1}{n}\right)\right] \tag*{} $$ Let’s start with its partner integral $$ I(a)=\int_0^{\infty}\left(x^n+1\right)^a d x $$ and transform $I(a)$, by putting $y=\frac{1}{x^n+1}$, into a beta function: $$ \begin{aligned} I(a) &=\frac{1}{n} \int_0^1 y^{-a-\frac{1}{n}-1}(1-y)^{\frac{1}{n}-1} d y \\ &=\frac{1}{n} B\left(-a-\frac{1}{n}, \frac{1}{n}\right) \end{aligned} $$ Differentiating $I(a)$ w.r.t. $a$ yields $$ I^{\prime}(a)=\frac{1}{n} B\left(-a-\frac{1}{n}, \frac{1}{n}\right)\left(\psi(-a)-\psi\left(-a-\frac{1}{n}\right)\right) $$ Then putting $a=-1$ gives our integral: $$ \begin{aligned} I_n&=I^{\prime}(-1) \\&=\frac{1}{n} B\left(1-\frac{1}{n}, \frac{1}{n}\right)\left[\psi(1)-\psi\left(1-\frac{1}{n}\right)\right] \\ &=-\frac{\pi}{n} \csc \left(\frac{\pi}{n}\right)\left[\gamma+\psi\left(1-\frac{1}{n}\right)\right]\end{aligned} $$ For example, $$ \begin{aligned}& I_2=-\frac{\pi}{2} \csc \frac{\pi}{2}\left[\gamma+\psi\left(1-\frac{1}{2}\right)\right]=\pi \ln 2,\\ & I_3=-\frac{\pi}{3} \csc \left(\frac{\pi}{3}\right)\left[\gamma+\psi\left(\frac{2}{3}\right)\right]=\frac{\pi \ln 3}{\sqrt{3}}-\frac{\pi^2}{9} ,\\ &I_4=-\frac{\pi}{4} \csc \left(\frac{\pi}{4}\right)\left[\gamma+\psi\left(\frac{3}{4}\right)\right]=\frac{3 \pi}{2 \sqrt{2}}\ln 2-\frac{\pi^2}{4 \sqrt{2}},\\ & I_5=-\frac{\pi}{5} \csc \left(\frac{\pi}{5}\right)\left[\gamma+\psi\left(\frac{4}{5}\right)\right]=-\frac{2 \sqrt{2} \pi}{5 \sqrt{5-\sqrt{5}}}\left[\gamma+\psi\left(\frac{4}{5}\right)\right], \\ & I_6=-\frac{\pi}{6} \csc \left(\frac{\pi}{6}\right)\left[\gamma+\psi\left(\frac{5}{6}\right)\right]=\frac{2 \pi}{3} \ln 2+\frac{\pi}{2} \ln 3-\frac{\pi^2}{2 \sqrt{3}}, 
\end{aligned} $$ Furthermore, putting $a=-m$, gives $$\boxed{I(m,n)=\int_0^{\infty} \frac{\ln \left(x^n+1\right)}{(x^n+1)^m} dx = \frac{1}{n} B\left(m-\frac{1}{n}, \frac{1}{n}\right)\left[\psi(m)-\psi\left(m-\frac{1}{n}\right)\right] }$$ For example, $$ \begin{aligned} \int_0^{\infty} \frac{\ln \left(x^6+1\right)}{(x^6+1)^5} dx & =\frac{1}{6} B\left(\frac{29}{6}, \frac{1}{6}\right)\left[\psi(5)-\psi\left(\frac{29}{6}\right)\right] \\ & =\frac{1}{6} \cdot \frac{21505 \pi}{15552} \cdot\left(-\frac{71207}{258060}-\frac{\sqrt{3} \pi}{2}+\frac{3 \ln 3}{2}+2 \ln 2\right) \\ & =\frac{21505 \pi}{93312}\left(-\frac{71207}{258060}-\frac{\sqrt{3} \pi}{2}+\frac{3 \ln 3}{2}+2 \ln 2\right) \end{aligned} $$ Are there any other methods? Your comments and alternative methods are highly appreciated.
For large $n$, a simple asymptotic behaviour of the integral can be deduced. Let us first examine the structure of the integrand $$y(x)=\frac{\ln \left(x^n+1\right)}{x^n+1} $$ Take the derivative of the function and set the result to zero to get the point $x=x_{m}$ where $y(x)$ reaches its maximum: $$1+(x_{m})^{n}=e$$ Putting this into $y(x)$ gives $$y_{max}=y(x_{m})=\frac{\ln e}{e}=\frac{1}{e}$$ Note that $y_{max}$ is independent of $n$. For integrals whose integrand $y(x)$ has a sharp maximum, a simple asymptotic formula exists: $$I\approx \sqrt{2\pi\frac{y^{3}(x_{m})}{\left|y''(x_{m}) \right|}}$$ Here $y''(x_{m})$ is the second derivative of $y(x)$ at $x=x_{m}$. I will skip the elementary computations and write down the final result: $$I_{n}\approx \frac{\sqrt{2\pi}}{n(e-1)^{1-\frac{1}{n}}}$$ Below are a few numerical examples of the approximation error produced by the last formula: for $n=10$, the error is about $0.03$; for $n=20$, about $0.01$; for $n=30$, about $0.007$. The higher $n$ is, the higher the accuracy of the last formula.
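Both the closed form and the asymptotic formula are easy to sanity-check numerically. The following is a quick sketch of mine using only the standard library; the helper names and the finite-difference digamma are my own choices, not from the posts above.

```python
import math

def I_numeric(n, steps=100000):
    # midpoint rule for the integral over [0, oo) via x = t/(1-t)
    total = 0.0
    h = 1.0 / steps
    for k in range(steps):
        t = (k + 0.5) * h
        x = t / (1.0 - t)
        total += math.log(x**n + 1) / (x**n + 1) / (1.0 - t)**2 * h
    return total

def digamma(x, eps=1e-5):
    # central difference of log-gamma: crude but accurate enough for a check
    return (math.lgamma(x + eps) - math.lgamma(x - eps)) / (2 * eps)

def I_closed(n):
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    return -math.pi / n / math.sin(math.pi / n) * (gamma + digamma(1 - 1/n))

def I_approx(n):
    # the large-n asymptotic formula from the answer above
    return math.sqrt(2 * math.pi) / (n * (math.e - 1)**(1 - 1/n))
```

For $n=2$ both `I_numeric` and `I_closed` agree with $\pi\ln 2\approx 2.1776$, and for $n=30$ the asymptotic formula differs from the closed form by roughly $0.007$, as claimed.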
{ "language": "en", "url": "https://math.stackexchange.com/questions/4577083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Possible values for the infimum of a polynomial with integer coefficients I'm trying to find all natural numbers $a\in\mathbb{N}$ such that there is a polynomial with integer coefficients $P\in \mathbb{Z}[x]$ with $$\inf_{x\in\mathbb{R}}\, P(x) = \sqrt{a}.$$ If $a=b^2$ for $b\in\mathbb{N}$ we can obviously take $P(x)=x^2 + b$, but what about $a$ that are not perfect squares? I have tried to find one but did not succeed.
The polynomial $$P(x) = (2a)^8x^8-(2a)^5x^6-(2a)^5x^4+6a^2x^2+a^2$$ satisfies $$\min_{x\in\mathbb{R}} P(x) = \sqrt{a}.$$ If there is a polynomial of lower degree with the same property, then it must have degree $6$ (see below), assuming $a$ is not a perfect square. There is however the degree-$4$ polynomial $$R(x) = -(2a)^4x^4+(2a)^2x^3+(2a)^3x^2-3ax-a^2$$ that satisfies $$\max_{x\in\mathbb{R}} R(x) = \sqrt{a},$$ leading to the surprising conclusion that achieving a minimum of $\sqrt{a}$ requires a strictly higher degree than achieving a maximum of $\sqrt{a}$. Note that any $P(x)$ satisfying the original requirements necessarily has even degree. To see why $P(x)$ cannot have degree $4$, let $\beta$ be one of the real algebraic numbers where $P(x)$ achieves its minimum of $\sqrt{a}$. Then $\sqrt{a} = P(\beta)$ is an element of the field $\mathbb{Q}(\beta)$. Moreover, if there is an automorphism $\sigma$ of $\mathbb{Q}(\beta)$ satisfying $$\sigma(\sqrt{a}) = -\sqrt{a} \qquad \text{and} \qquad \sigma(\beta) \in \mathbb{R},$$ then $$P(\sigma(\beta)) = \sigma(P(\beta)) = \sigma(\sqrt{a}) = -\sqrt{a},$$ meaning $\sqrt{a}$ is not actually the minimum of $P(x)$. (This is the step where the asymmetry between minimum and maximum becomes apparent.) In summary, any automorphisms of $\mathbb{Q}(\beta)$ that send $\sqrt{a}$ to $-\sqrt{a}$ must send $\beta$ to a non-real number. In particular, $\beta \not \in \mathbb{Q}(\sqrt{a})$, so $\mathbb{Q}(\sqrt{a})$ is a proper subfield of $\mathbb{Q}(\beta)$. Since degrees of field extensions are multiplicative, this means that the degree of $\mathbb{Q}(\beta)/\mathbb{Q}$ is even and strictly greater than $2$; so $\beta$ has degree at least $4$. Finally, since $P'(\beta) = 0$, $P(x)$ must have degree greater than $4$. Here's how I found $P(x)$: The simplest $\beta$ that satisfies the above restrictions on $\mathbb{Q}(\beta)$ is $\beta = \sqrt[4]{a}$. 
I first constructed a polynomial $Q(x)$ with rational coefficients to have minimum $\sqrt{a}$ at $x=\pm\sqrt[4]{a}$. Here $Q(x)$ necessarily has the form $$Q(x) = (x^4-a)q(x)+x^2$$ such that $q(x)$ has positive leading coefficient and $Q'(x)$ is divisible by $x^4-a$. This quickly leads to $$Q(x) = (x^4-a)^2 -\frac{1}{2a}x^2(x^4-a) + x^2.$$ Finally, to get integer coefficients without affecting the minimum, we scale $Q(x)$ horizontally; $P(x) = Q(2ax)$ does the job.
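Both claims (minimum of $P$ and maximum of $R$ equal to $\sqrt{a}$) can be checked numerically. This is a grid-search sketch of mine, with $a=2$ as an arbitrary non-square choice; the leading terms dominate outside $[-1,1]$, so a grid on that interval suffices.

```python
import math

a = 2  # any non-square natural number works the same way

def P(x):
    return (2*a)**8 * x**8 - (2*a)**5 * x**6 - (2*a)**5 * x**4 + 6*a*a * x*x + a*a

def R(x):
    return -(2*a)**4 * x**4 + (2*a)**2 * x**3 + (2*a)**3 * x**2 - 3*a*x - a*a

# both extrema lie well inside [-1, 1], since the leading terms dominate outside
xs = [k / 100000 for k in range(-100000, 100001)]
min_P = min(P(x) for x in xs)
max_R = max(R(x) for x in xs)
```

With $a=2$, `min_P` and `max_R` both come out equal to $\sqrt{2}\approx 1.41421$ to within the grid resolution.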
{ "language": "en", "url": "https://math.stackexchange.com/questions/4577306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Collatz-like problem involving prime factors Unfortunately I am not well-versed in LaTeX so I will try my best to keep this looking presentable. As an overview, I was investigating a variation of the Collatz conjecture: Define $f(1) = 1$. If $n$ is even, $f(n) = \frac{n}{2}$. Otherwise, let $n$'s smallest prime factor be $p$; then $f(n) = pn+1$. So then I was trying it out for small values: $f(1) = 1$; $f(2) = 1, f(1) = 1$; $f(3) = 10, f(10) = 5, f(5) = 26, f(26) = 13, \dots$ and so on. Eventually you get to $f(213) = 640$, which reduces down to $10$. Evidently, loops/cycles can form. $f(4) = 2, f(2) = 1$. $f(5) = 26$, which is already a part of $3$'s cycle. $f(6) = 3$, which is, again, part of $3$'s cycle. By now, I was thinking that this function had a nice trend of either reducing to $1$ or joining onto another number's cycle. However, $f(7)$ causes an issue. I have iterated $f(7)$ around $100$ times using Python; however, it is still unclear whether $f(7)$ converges or not (for want of a better word). I also noticed that convergence of $f(9)$ depends on convergence of $f(7)$. There is no reason why it should converge: I saw on another page that the average smallest prime factor of integers up to $n$ is asymptotic to $\frac{n}{2\log(n)}$, so $f(n)$ has average order $\frac{n^2}{2\log(n)}$. My question then is ultimately: for any integer $n$, will $f(n)$ eventually reach $1$ or form a cycle? And, if not, is there an interesting reason/proof why not? This question could be (and probably is) fairly difficult, so as a weaker question: does $f(7)$ converge? If this question is similar to, or a corollary of, another question asked here, I apologise; please let me know and I will remove the question. Edit: I wrote some basic Python code to do the computations for me; however, it is not very efficient. With that being said, I also have an interest in the computational complexity of this problem, e.g. finding the smallest prime factor, checking for repeated values (i.e. checking if a cycle has been formed), etc.
Summary of new information that has been commented so far: * The scale to which the iterates can grow really bogs down total computation time; the problem isn't quite prime factorisation, but a weaker form of it. In most cases, the numbers grow so large that computation beyond them is extremely slow (thanks to Peter/Gottfried, it is confirmed that $f(7)$ reaches values as large as ~$10^{100}$). * Thanks to Gottfried, we have a list of non-powers of two whose trajectories also converge to 1. Namely $\small \begin{array} {rr} n & :&factorization \\ \hline 14941629968793 & : & 3.11.17.53.79.313.20323 \\ 11206222476595 & : & 5.2241244495319 \\ 109435766373 & : & 3^2.12159529597 \\ 20519206195 & : & 5.2129.1927591 \\ 200382873 & : & 3.19.3515489 \\ 150287155 & : & 5.37.812363 \\ 5733 & : & 3^2.7^2.13 \\ 1075 & : & 5^2.43 \\ 21 & : & 3.7 \\ 1 & : & 1 \\ \end{array}$ Any of these numbers multiplied by a power of two will also converge. However, it appears that the smallest prime factor of each of the numbers in the table alternates between 3 and 5 (unknown whether it's coincidental or not). * Further exploration of cycles forming, including 1-cycles (i.e. powers of 2) and Gottfried's list, leads to the conclusion that convergence onto a cycle is rare, making divergence the average outcome rather than an anomalous one. Perhaps a better question to ask is: other than the cycle generated by 3, do other cycles even exist (not including the 1-cycle)?
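For anyone who wants to experiment, here is a minimal sketch of the iteration with naive trial division and cycle detection. The function names `spf` and `iterate` are mine, not from the post; trial division is of course hopeless once iterates reach the ~$10^{100}$ scale reported above.

```python
def spf(n):
    # smallest prime factor by trial division (fine for small n only)
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n itself is prime

def iterate(n, max_steps=1000):
    # follow the map f until we reach 1, revisit a value (cycle),
    # or give up after max_steps
    seen = set()
    while n != 1 and max_steps > 0:
        if n in seen:
            return "cycle"
        seen.add(n)
        n = n // 2 if n % 2 == 0 else spf(n) * n + 1
        max_steps -= 1
    return "reached 1" if n == 1 else "undecided"
```

For example, `iterate(4)` reaches 1 while `iterate(3)` and `iterate(6)` fall into the cycle through 10 described above; `iterate(7)` is left alone here, since its iterates quickly outgrow trial division.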
{ "language": "en", "url": "https://math.stackexchange.com/questions/4577513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
general solution beta $(\sin(\alpha+\beta)+\cos(\alpha +2\beta)\sin(\beta))^2 = 4\cos(\alpha)\sin(\beta)\sin(\alpha+ \beta);\tan(\alpha)=3\tan(\beta)$ We need to find the general solution for $\beta$ of $$(\sin(\alpha+\beta)+\cos(\alpha +2\beta)\sin(\beta))^2 = 4\cos(\alpha)\sin(\beta)\sin(\alpha+ \beta)$$ $$\tan(\alpha)=3\tan(\beta)$$ I took the first equation and managed to simplify it to this point: $$(\cos(\beta)\sin(\alpha+2\beta))^2=4\cos(\alpha)\sin(\beta)\sin(\alpha+ \beta)$$ What should I do next? I tried to use $2\sin(\beta)\sin(\alpha+\beta)=\cos(\alpha)-\cos(\alpha+2\beta)$ on the RHS in the hope of cancelling or doing something about the $\alpha+2\beta$ terms, but I cannot seem to boil it down further. Thanks!
By request, here's a sketch of the solution I mentioned in a comment to the question. (There's probably a quicker path.) I'll ignore the case $\beta=\pm\pi/2$. Now, start by expanding the sides of OP's first equation, express them in terms of $p:=\tan\alpha$ and $q:=\tan\beta$, and then apply the substitution $p=3q$ from OP's second equation: $$\begin{align} \left(\sin(\alpha+\beta) + \cos(\alpha + 2\beta) \sin\beta\right)^2 &= \left( 2\sin\alpha\cos^2\beta + 2 \cos\alpha \cos\beta \sin\beta-\sin\alpha \right)^2 \cos^2\beta \\[4pt] &= \frac{(2\tan\alpha + 2 \tan\beta-\tan\alpha\sec^2\beta)^2}{\sec^4\beta} \cos^2\alpha\cos^2\beta\\[4pt] &= \frac{(2p + 2q-p(1+q^2))^2}{(1+q^2)^2}\cos^2\alpha\cos^2\beta\\[4pt] &= \frac{(p + 2q-pq^2)^2}{(1+q^2)^2}\cos^2\alpha\cos^2\beta \\[4pt] &\overset{p=3q}{=}\;\frac{q^2(5-3q^2)^2}{(1+q^2)^2}\cos^2\alpha\cos^2\beta \tag1\\[4pt] 4 \cos\alpha \sin\beta \sin(\alpha+\beta) &= 4 \sin\beta (\sin\alpha\cos\beta + \cos\alpha \sin\beta)\cos\alpha \\[4pt] &= 4\tan\beta\,(\tan\alpha + \tan\beta) \cos^2\alpha \cos^2\beta \\[4pt] &= 4q(p+q)\cos^2\alpha \cos^2\beta \\[4pt] &\overset{p=3q}{=} 16q^2\cos^2\alpha \cos^2\beta \tag2 \end{align}$$ So, OP's first equation, $(1)=(2)$, becomes $$\begin{align} \frac{q^2(5-3q^2)^2}{(1+q^2)^2}\cos^2\alpha\cos^2\beta &\;=\; 16q^2\cos^2\alpha\cos^2\beta \\[4pt] q^2(5-3q^2)^2 &\;=\; 16q^2(1+q^2)^2\\[6pt] q^2 (7 q^2-1) (q^2+9) &\;=\; 0 \tag3 \end{align}$$ Therefore, ignoring non-real values, we have $\tan\beta=0$ or $\tan\beta=\pm1/\sqrt{7}$; and, respectively, $\tan\alpha=0$ or $\tan\alpha=\pm3/\sqrt{7}$. $\square$
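The conclusion is easy to confirm in floating point. This is a throwaway check of mine, not part of the derivation: take $\tan\beta=1/\sqrt7$ and $\tan\alpha=3/\sqrt7$ and verify that the original equation holds.

```python
import math

beta = math.atan(1 / math.sqrt(7))
alpha = math.atan(3 / math.sqrt(7))   # enforces tan(alpha) = 3 tan(beta)

lhs = (math.sin(alpha + beta) + math.cos(alpha + 2*beta) * math.sin(beta))**2
rhs = 4 * math.cos(alpha) * math.sin(beta) * math.sin(alpha + beta)
```

Both sides agree to machine precision, as the derivation predicts.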
{ "language": "en", "url": "https://math.stackexchange.com/questions/4577719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Evaluating this definite integral Given: $$I=\int_0^1 \frac{\sin(\ln(x))}{\ln(x)}dx$$ I have tried every strategy I could think of: substitution, integration by parts (which returns the same $I$ after a few steps), etc. In the solution provided, they introduce fancy complex numbers, which I have no idea about. Note that I am a normal high school student, so I cannot understand high-level calculus. Please help here.
Using $$ \sin x=\sum_{n=1}^\infty\frac{(-1)^{n-1}x^{2n-1}}{(2n-1)!}, \int_0^1(\ln x)^ndx=(-1)^nn! $$ one has \begin{eqnarray} I&=&\int_0^1 \frac{\sin(\ln(x))}{\ln(x)}dx\\ &=&\int_0^1\sum_{n=1}^\infty\frac{(-1)^{n-1}(\ln x)^{2n-2}}{(2n-1)!}dx\\ &=&\sum_{n=1}^\infty\frac{(-1)^{n-1}(2n-2)!}{(2n-1)!}\\ &=&\sum_{n=1}^\infty\frac{(-1)^{n-1}}{2n-1}\\\\ &=&\frac\pi4. \end{eqnarray}
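One can sanity-check the value $\pi/4$ numerically; this is a quick midpoint-rule script of mine, not part of the argument.

```python
import math

def integrand(x):
    # sin(ln x)/ln x; tends to 0 as x -> 0+ and to 1 as x -> 1-
    return math.sin(math.log(x)) / math.log(x)

steps = 100000
h = 1.0 / steps
# midpoint rule avoids the endpoints x = 0 and x = 1 exactly
approx = sum(integrand((k + 0.5) * h) for k in range(steps)) * h
```

The result agrees with $\pi/4\approx 0.7853982$ to several digits.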
{ "language": "en", "url": "https://math.stackexchange.com/questions/4577918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find All Complex solutions for $z^3+3i\bar{z}=0$ Find All Complex solutions for $z^3+3i \bar z =0$. I tried substituting $z=a+bi$ and simplifying as much as I can and this is what I ended up with: $a^3-2b^2a+3b+i(3ba^2-b^3+3a)=0$. I just did not understand how do I get the values of $z$ from this equation
This is a commentary on all of the existing answers, rather than a suggested method of solution. It seems fashionable to state that there are $9$ solutions, when in reality there are $5$ solutions $z$ to the equation $$z^3+3i\overline z = 0$$ Writing $z=x+iy$, we obtain the following polynomial equations by taking real and imaginary parts of the given equation: \begin{align*} &x^3-3xy^2+3y = 0 \\ &3x^2y+3x-y^3 = 0 \end{align*} By Bézout's Theorem, the corresponding projective curves have $9$ intersections, counting multiplicity. However, we are seeking real solutions $(x,y)$ in the affine plane, so there could be fewer than $9$ solutions to the equation $z^3+3i\overline z = 0$. By standard polynomial algebra, this system is equivalent to \begin{align*} &9x-8y^7+51y^3 = 0 \\ &64y^9-432y^5+81y=0 \end{align*} Factoring $y$ out of the second equation and performing the substitution $Y=y^4$, we obtain \begin{equation*} 64Y^2-432Y+81 = 0 \end{equation*} The roots are $$Y = \frac{27}8 \pm \frac94\sqrt2$$ These are positive real numbers, and so the solutions for $y$ are \begin{equation*} y = 0, \pm\left(\frac{27}8 \pm \frac94\sqrt2\right)^{1/4} \end{equation*} For each $y$, the coordinate $x$ is uniquely determined and real. This means that there are $5$ solutions to the equation $$z^3+3i\overline z = 0$$
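The count of $5$ can also be confirmed directly: for $z\ne0$, multiplying the equation by $z$ gives $z^4=-3i|z|^2$, so $|z|=\sqrt3$ and $z^4=-9i$. Here is a short verification script of mine:

```python
import cmath
import math

def F(z):
    return z**3 + 3j * z.conjugate()

# z = 0 together with the four fourth-roots of -9i, all of modulus sqrt(3)
solutions = [0j] + [math.sqrt(3) * cmath.exp(1j * (-math.pi/8 + k * math.pi/2))
                    for k in range(4)]
residuals = [abs(F(z)) for z in solutions]

# the largest imaginary part should match the quartic-in-Y formula above
y_big = (27/8 + 9 * math.sqrt(2) / 4) ** 0.25
```

All five residuals vanish to machine precision, and the largest imaginary part among the solutions equals $\left(\frac{27}{8}+\frac94\sqrt2\right)^{1/4}$.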
{ "language": "en", "url": "https://math.stackexchange.com/questions/4578068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Find the measure of angle $AMN=\theta$ Let $ABC$ be an isosceles triangle with $AB = AC$ and $\angle BAC = 20°$. Also let $M$ be the projection of the point $C$ on the side $AB$, and let $N$ be a point on the side $AC$ such that $2CN = BC$. What is the measure of angle $AMN$? (Answer: $60^o$) Here is my progress: I made the drawing and marked all the angles I could find, but I still need to find the path... maybe an additional segment.
Here is a solution using trigonometry. We will prove $IN = IM$. First, in the triangle $INC$, we have $$\frac{IN}{ \sin(70°)} = \frac{NC}{\sin(60)} \Longleftrightarrow IN = \frac{ \sin(70°)}{\sin(60°)}NC \tag{1}$$ In the triangle $BMC$ and $IHC$, we have: $$MC = \cos(10°) BC\tag{2}$$ and $$\frac{IC}{ \sin(50°)} = \frac{HC}{\sin(120°)} \Longleftrightarrow IC = \frac{ \sin(50°)}{\sin(60°)}HC \tag{3}$$ From $(2),(3)$, we deduce that $$\begin{align} IM &= MC - IC\\&=\left(2\cos(10°)-\frac{ \sin(50°)}{\sin(60°)}\right)HC\\ &=\frac{2\sin(60°)\cos(10°)-\sin(50°)}{\sin(60°)}HC\\ &=\frac{(\sin(70°)+\sin(50°))-\sin(50°)}{\sin(60°)}HC \quad \text{because: } 2\sin(a)\cos(b)=\sin(a+b)+\sin(a-b)\\ &=\frac{ \sin(70°)}{\sin(60°)}HC \tag{4} \end{align}$$ From $(4),(1)$, we deduce that $IN = IM$. So the triangle $IMN$ is isosceles, then $$90°-\theta = \theta -30° \Longrightarrow \theta = 60°$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4578211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Durrett 2.5.8: Question about probability of limsup I am interested in this problem from Durrett's Probability: Theory and Examples. Let $X_1,X_2, \dots$ be i.i.d. and not $\equiv0$. If $E|X_1| < \infty$, show that $$\limsup_{n \to \infty} |X_n|^{1/n} = 1$$ almost surely. One idea I had was trying to show that $P(\limsup |X_n|^{1/n} > 1) = 0$ and $P(\limsup |X_n|^{1/n} < 1) = 0$. This would give us the result. Now for the first one, we have $$P(\limsup |X_n|^{1/n} > 1) \leq P(|X_n|^{1/n} > 1 \text{ i.o.}).$$ I want to show that the last probability above is $0$, maybe by Borel-Cantelli, using that $E|X_1| < \infty$ if and only if $\sum_nP(|X_1| \geq n) < \infty$. I am not sure if this is the right direction. Please help.
From the last two lines of the OP, it follows that $$\sum_nP[|X_n|>n]<\infty$$ (since $X_n\stackrel{law}{=}X_1$). The direct Borel-Cantelli lemma implies that $P[\{|X_n|>n,\, \text{i.o.}\}]=0$, and so for $P$-almost all $\omega$ there is $N_\omega$ such that $|X_n(\omega)|\leq n$ for all $n\geq N_\omega$. Hence $\limsup_n|X_n|^{1/n}\leq \lim_n\sqrt[n]{n}=1$ almost surely. As $P[|X_1|>0]>0$, there is $\delta>0$ such that $P[|X_1|>\delta]>0$; hence $\sum_nP[|X_n|>\delta]=\infty$ and so, by the converse Borel-Cantelli lemma (independence is used here), $P[\{|X_n|>\delta,\,\text{i.o.}\}]=1$; hence for almost all $\omega$ there is an increasing sequence $n_k(\omega)$ of integers such that $|X_{n_k(\omega)}(\omega)|>\delta$, whence it follows that $\limsup_n|X_n|^{1/n}\geq\lim_n\sqrt[n]{\delta}=1$ almost surely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4578407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
A Cantor-type set Let $A=\{\sum_{k=1}^\infty \frac{a_k}{k!} : a_k\in\{0,\,1\}\}$. We can prove that $A$ is a closed set with $\operatorname{int}(A)=\emptyset$ and Lebesgue measure of $0$. Is there a $m \in \mathbb{N}$ s.t. $A+A+\cdots+A$ ($m$ times) is of positive Lebesgue measure? (Note that for the Cantor set $C$ we have $C+C=[0,\,2]$.)
Here is a reference... Erdős, Pál; Volkmann, B., Additive Gruppen mit vorgegebner Hausdorffscher Dimension [Additive groups with given Hausdorff dimension], J. Reine Angew. Math. 221, 203-208 (1966). ZBL0135.10202. For each $\alpha \in (0,1)$, they define an additive [sigma-compact] subgroup $G(\alpha)$ of $\mathbb R$ with Hausdorff dimension $\alpha$. [For more detail, see below.] For every $\alpha$, your set $A$ is a subset of $G(\alpha)$. Since $G(\alpha)$ is a group, $A+A+\dots +A \subseteq G(\alpha)$. So $A+A+\dots+A$ has Hausdorff dimension at most $\alpha < 1$, and thus contains no interval. [Of course, this is true for all $\alpha \in (0,1)$, so $A+A+\dots+A$ has Hausdorff dimension $0$.] More detail from the Erdős & Volkmann paper. Each real number $x$ has a unique "Cantor expansion" of the form $$ x = \lfloor x \rfloor + \sum_{k=2}^\infty \frac{a_k(x)}{k!} \tag1 $$ where $a_k(x) \in \mathbb Z$ and $0 \le a_k(x) < k$. Fix $\alpha \in (0,1)$. Let $G(\alpha)$ be the set of all reals $x$ such that: there exists $\kappa = \kappa(x) > 0$ and $k_0 \in \mathbb N$ such that, for all $k \ge k_0$ \begin{align} \text{either}\qquad & a_k(x) \le \kappa\;k^\alpha \\ \text{or}\qquad & a_k(x) \ge k - \kappa\;k^\alpha . \tag 2\end{align} Satz 1. $G(\alpha)$ is an additive group with $\dim G(\alpha) = \alpha$. The proof that $G(\alpha)$ has dimension $\alpha$ is relatively easy. The more involved part is the proof that $G(\alpha)$ is a group. They have to keep track of "carrying" when you add two expansions of the form $(1)$ with restrictions $(2)$. Plug: my paper with Miller, Edgar, G. A.; Miller, Chris, Borel subrings of the reals, Proc. Am. Math. Soc. 131, No. 4, 1121-1129 (2003). ZBL1043.28003. We show that a Borel set $\subseteq \mathbb R$ which is a subring is either all of $\mathbb R$ or else has Hausdorff dimension zero. That was a question asked by Volkmann in 1960.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4578819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Positive Solution of Poisson Equation Given the Poisson's equation \begin{equation} -\nabla^2 u = f \quad \mathrm{in} \ \Omega \end{equation} \begin{equation} u=0 \quad \mathrm{on} \ \Gamma_D, \quad \frac{\partial u}{\partial n} = 0 \quad \mathrm{on} \ \Gamma_N \end{equation} Is it possible to prove that if $f(x) \geq 0 \ \forall x \in \Omega$ then $u(x) \geq 0 \ \forall x \in \Omega$?
As long as $\Gamma_D$ is nonempty, yes. To prove this, consider the perturbed problem \begin{equation} -\nabla^2 u_\varepsilon = f + \varepsilon \quad \mathrm{in} \ \Omega \end{equation} \begin{equation} u_\varepsilon=0 \quad \mathrm{on} \ \Gamma_D, \quad \frac{\partial u_\varepsilon}{\partial n} = \varepsilon \quad \mathrm{on} \ \Gamma_N \end{equation} where $\varepsilon > 0$. Here $\nabla^2$ is the Laplacian. Since $f\geq 0$, we have $\nabla^2 u_\varepsilon = -(f+\varepsilon) < 0$ everywhere, and so $u_\varepsilon$ cannot have an interior minimum: at any interior minimum the Hessian would be positive semidefinite, so the Laplacian (the trace of the Hessian) would be nonnegative. Likewise, since the normal derivative is strictly positive, the minimum cannot occur on $\Gamma_N$. Hence, the minimum of $u_\varepsilon$ occurs on $\Gamma_D$, where $u_\varepsilon=0$. So $u_\varepsilon\geq0$. Then send $\varepsilon\to 0$, and use that $u_\varepsilon \to u$ uniformly.
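The maximum-principle mechanism can be illustrated in one dimension with a small finite-difference experiment: $-u''=f$ on $(0,1)$ with $u(0)=0$ (Dirichlet) and $u'(1)=0$ (Neumann). This is an illustrative sketch of mine (first-order Neumann discretization, Thomas solver), not part of the proof.

```python
def solve_poisson_1d(f, N=200):
    # tridiagonal system for u_1..u_N on a uniform grid with u_0 = 0
    h = 1.0 / N
    a = [0.0] * N   # sub-diagonal
    b = [0.0] * N   # diagonal
    c = [0.0] * N   # super-diagonal
    d = [0.0] * N   # right-hand side
    for i in range(N - 1):                     # interior rows for u_1..u_{N-1}
        a[i], b[i], c[i] = -1.0, 2.0, -1.0
        d[i] = h * h * f((i + 1) * h)
    a[0] = 0.0                                 # u_0 = 0 folded into the first row
    a[N-1], b[N-1], c[N-1], d[N-1] = -1.0, 1.0, 0.0, 0.0   # u_N = u_{N-1}
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, N):
        w = a[i] / b[i-1]
        b[i] -= w * c[i-1]
        d[i] -= w * d[i-1]
    u = [0.0] * N
    u[N-1] = d[N-1] / b[N-1]
    for i in range(N - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i+1]) / b[i]
    return u
```

With $f\equiv1$ the exact solution is $u=x-x^2/2$ (so $u(1)=1/2$), and the discrete solution stays nonnegative, consistent with the argument above.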
{ "language": "en", "url": "https://math.stackexchange.com/questions/4579069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
AM - GM inequality with Bernoulli's inequality I am stuck on this proof: The AM-GM Inequality is Equivalent to the Bernoulli Inequality. I get the first part with the Bernoulli Inequality, but how did he get from $\frac{A_n}{A_{n-1}}\ge \frac{x_n}{A_{n-1}}$ to $A_n^n\ge x_{n}A_{n-1}^{n-1}$ Can someone explain to me what I am missing there? Thanks in advance!
The initial inequality is not $\frac{A_n}{A_{n-1}} \ge \frac{x_n}{A_{n-1}}$ but $$\left(\frac{A_n}{A_{n-1}}\right)^n \ge \frac{x_n}{A_{n-1}}.$$ Multiplying both sides by $A_{n-1}^n$, we get $$ \left(\frac{A_n}{A_{n-1}}\right)^n A_{n-1}^n \ge \frac{x_n}{A_{n-1}} \cdot A_{n-1}^n \implies A_n^n \ge x_n \cdot A_{n-1}^{n-1}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4579189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the directional derivative of function $$f(x,y) = \begin{cases} \displaystyle \frac{x^{3}-3xy^{2}}{x^{2}+y^{2}}, & (x,y) \neq (0,0) \\ 0 , & (x,y) = (0,0) \end{cases}$$ Find the directional derivative at $(0,0)$ in the direction that makes an angle of $135°$ with the positive $x$-direction. My attempt: $f_x(0,0)=\lim_{h\to 0}\frac{f(0+h,0)-f(0,0)}{h}=\lim_{h\to 0}\frac{1}{h}\cdot\frac{(0+h)^3-3(0+h)\cdot 0^2}{(0+h)^2+0^2}=1$ and $f_y(0,0)=\lim_{h\to 0}\frac{f(0,0+h)-f(0,0)}{h}=\lim_{h\to 0}\frac{1}{h}\cdot\frac{0^3-3\cdot 0\cdot(0+h)^2}{0^2+(0+h)^2}=0.$ The direction is $(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$; I have to normalize the vector, so $(-1,1)$. The directional derivative is $(1,0)\cdot(-1,1)=-1$. My solution isn't correct, and I can't see why; I hope for some help, thanks!
Let $v=(v_1,v_2)$ be a direction in $\mathbb{R}^2$: $$\dfrac{\partial f}{\partial v}(0,0)=\lim_{t \to 0}\dfrac{f(0+tv_1,0+tv_2)-f(0,0)}{t}=\lim_{t \to 0}\dfrac{1}{t}\dfrac{(tv_1)^3-3tv_1(tv_2)^2}{(tv_1)^2+(tv_2)^2}=\dfrac{v_1^3-3v_1v_2^2}{v_1^2+v_2^2}$$ Now you can substitute. Alternatively, check whether $f$ is differentiable at $(0,0)$; if so, you can use the formula $$\dfrac{\partial f}{\partial v}(x_0,y_0)=\langle\nabla f(x_0,y_0),v\rangle$$
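A small numerical check of mine makes the point concrete: along the unit vector $v=(-1/\sqrt2,1/\sqrt2)$, the difference quotient gives $1/\sqrt2$, while the dot product $(1,0)\cdot v$ from the gradient formula gives $-1/\sqrt2$, so $f$ cannot be differentiable at the origin and the gradient formula does not apply.

```python
import math

def f(x, y):
    return (x**3 - 3*x*y**2) / (x**2 + y**2) if (x, y) != (0, 0) else 0.0

v1, v2 = -1/math.sqrt(2), 1/math.sqrt(2)   # unit vector at 135 degrees

t = 1e-6
quotient = f(t*v1, t*v2) / t               # difference quotient at (0,0)
formula = (v1**3 - 3*v1*v2**2) / (v1**2 + v2**2)
grad_dot = 1*v1 + 0*v2                     # <(f_x, f_y)(0,0), v>: wrong here
```

Since $f$ is homogeneous of degree 1, the quotient is in fact independent of $t$ and matches the formula from the answer exactly.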
{ "language": "en", "url": "https://math.stackexchange.com/questions/4579484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How far can we go with the integral $I_n=\int_0^1 \frac{\ln \left(1-x^n\right)}{1+x^2} d x$ Inspired by my post, I decided to investigate the integral in general $$ I_n=\int_0^1 \frac{\ln \left(1-x^n\right)}{1+x^2} d x,$$ where $n$ is a natural number, using the powerful substitution $x=\frac{1-t}{1+t}$. Let’s start with the easy one: \begin{aligned} I_1 &=\int_0^1 \frac{\ln \left(\frac{2 t}{1+t}\right)}{1+t^2} d t \\ &=\int_0^1 \frac{\ln 2+\ln t-\ln (1+t)}{1+t^2} d t \\&=\frac{\pi}{4} \ln 2+\int_0^1 \frac{\ln t}{1+t^2}-\int_0^1 \frac{\ln (1+t)}{1+t^2} d t \\&=\frac{\pi}{4} \ln 2-G-\frac{\pi}{8} \ln 2 \\ &=\frac{\pi}{8} \ln 2-G\end{aligned} By my post $$I_2= \frac{\pi}{4} \ln 2-G $$ and $$\begin{aligned}I_4 &=\frac{3 \pi}{4} \ln 2-2 G \end{aligned} $$ $$ \begin{aligned} I_3=& \int_0^1 \frac{\ln (1-x)}{1+x^2} d x +\int_0^1 \frac{\ln \left(1+x+x^2\right)}{1+x^2} d x \\=& \frac{\pi}{8} \ln 2-G+\frac{1}{2} \int_0^{\infty} \frac{\ln \left(1+x+x^2\right)}{1+x^2} d x-G \\ =& \frac{\pi}{8} \ln 2-\frac{4G}{3} +\frac{\pi}{6} \ln (2+\sqrt{3}) \end{aligned} $$ where the last integral refers to my post. Let’s skip $I_5$ for now. $$ I_6=\int_0^1 \frac{\ln \left(1-x^6\right)}{1+x^2} d x=\int_0^1 \frac{\ln \left(1+x^3\right)}{1+x^2} d x+I_3\\ $$ $$\int_0^1 \frac{\ln \left(1+x^3\right)}{1+x^2} d x = \int_0^1 \frac{\ln (1+x)}{1+x^2} d x +\int_0^1 \frac{\ln \left(1-x+x^2\right)}{1+x^2} d x\\=\frac{\pi}{8}\ln 2+ \frac{1}{2} \int_0^{\infty} \frac{\ln \left(1-x+x^2\right)}{1+x^2} d x-G \\= \frac{\pi}{8}\ln 2+ \frac{1}{2}\left( \frac{2 \pi}{3} \ln (2+\sqrt{3})-\frac{4}{3} G \right)- G \\= \frac{\pi}{8} \ln 2+\frac{\pi}{3} \ln (2+\sqrt{3})-\frac{5}{3} G $$ Hence $$I_6= \frac{\pi}{4} \ln 2+\frac{\pi}{2} \ln (2+\sqrt{3})-3 G$$ How far can we go with the integral $I_n=\int_0^1 \frac{\ln \left(1-x^n\right)}{1+x^2} d x?$
We can proceed to $I_{8}$ now. $$ \int_0^1 \frac{\ln \left(1-x^8\right)}{1+x^2} d x=\int_0^1 \frac{\ln \left(1-x^4\right)}{1+x^2} d x +\int_0^1 \frac{\ln \left(1+x^4\right)}{1+x^2} dx\\ \qquad\qquad =\frac{3 \pi}{4} \ln 2-2 G+ \int_0^1 \frac{\ln \left(1+x^4\right)}{1+x^2} dx. $$ In my post, two beautiful formulas were found: $$\boxed{\begin{align} &\int_0^\infty \frac{\ln(1+x^{4m})}{1+x^2}dx =2\pi \ln \bigg( 2^m \prod_{k=1}^m \cos\frac{(2k-1)\pi}{8m}\bigg)\ \\ &\int_0^\infty \frac{\ln(1+x^{4m+2})}{1+x^2}dx = \pi\ln2 + 2\pi \ln \bigg( 2^m \prod_{k=1}^m \cos\frac{k\pi}{2(2m+1)}\bigg) \end{align}}$$ As $$ \int_0^1 \frac{\ln \left(1+x^n\right)}{1+x^2} d x=\frac{1}{2}\left[\int_0^{\infty} \frac{\ln \left(1+x^n\right)}{1+x^2} d x-n G\right] $$ Putting $m=1$ into the first formula in the box yields $$ \int_0^1 \frac{\ln \left(1+x^4\right)}{1+x^2} d x=\frac{1}{2}\left[2 \pi \ln \left(2 \cos \frac{\pi}{8}\right)-4 G\right] $$ Hence $$\boxed{\int_0^1 \frac{\ln \left(1-x^8\right)}{1+x^2} d x = \frac{3 \pi}{4} \ln 2-4 G+\pi \ln (\sqrt{2+\sqrt{2}}) }$$ Similarly, we can go further to $I_{12}$ now. $$ \int_0^1 \frac{\ln \left(1-x^{12}\right)}{1+x^2} d x=\int_0^1 \frac{\ln \left(1-x^6\right)}{1+x^2} d x +\int_0^1 \frac{\ln \left(1+x^6\right)}{1+x^2} dx\\ \qquad\qquad = \frac{\pi}{4} \ln 2+\frac{\pi}{2} \ln (2+\sqrt{3})-3 G + \int_0^1 \frac{\ln \left(1+x^6\right)}{1+x^2} dx. $$ Putting $m=1$ into the second formula in the box yields $$\boxed{\int_0^1 \frac{\ln \left(1-x^{12}\right)}{1+x^2} d x= \frac{\pi}{4} \ln 2+\frac{\pi}{2} \ln (2+\sqrt{3})-3 G + \frac{\pi}{2} \ln 6-3G= \frac{\pi}{4} \ln (72(7+4 \sqrt{3}))-6 G }$$
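As a numerical cross-check of the boxed $I_8$ value, here is a quick script of mine (the Catalan constant is hard-coded; the log singularity at $x=1$ is integrable, so a midpoint rule suffices):

```python
import math

G = 0.915965594177219  # Catalan's constant

def I8_numeric(steps=200000):
    # midpoint rule for the integral of ln(1 - x^8)/(1 + x^2) over (0, 1)
    h = 1.0 / steps
    return sum(math.log(1 - ((k + 0.5) * h)**8) / (1 + ((k + 0.5) * h)**2)
               for k in range(steps)) * h

closed = (3 * math.pi / 4 * math.log(2) - 4 * G
          + math.pi * math.log(math.sqrt(2 + math.sqrt(2))))
```

Both evaluate to about $-0.1016$, confirming the closed form.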
{ "language": "en", "url": "https://math.stackexchange.com/questions/4579985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
Prove that $\lim\limits_{n\to\infty} n^2 \int_0^{1/n} x^{x+1} dx = 1/2.$ Prove that $\lim\limits_{n\to\infty} n^2 \int_0^{1/n} x^{x+1} dx = 1/2.$ Suppose we've shown that $\lim\limits_{n\to\infty} n^2\int_0^{1/n} x(x^x-1) dx = 0.$ Then the desired limit equals $\lim\limits_{n\to\infty} n^2 \int_0^{1/n} xdx + n^2\int_0^{1/n} x(x^x-1) dx = \lim\limits_{n\to\infty} n^2 \int_0^{1/n} xdx = 1/2.$ Let $0<a\leq 1$. First, consider evaluating the integral $I_{m,n}(a) := \int_0^a x^m (\ln x)^n dx$ for some $m,n\ge 0$. Observe that $I_{m,0}(a) = \dfrac{a^{m+1}}{m+1}$ for all m. Assume that $n>0$. We have that $I_{m,n}(a) = \int_0^a x^{m}(\ln x)^n dx = \dfrac{x^{m+1}}{m+1} (\ln x)^n|_0^a - \dfrac{n}{m+1} \int_0^a x^{m} (\ln x)^{n-1} dx \Rightarrow I_{m,n}(a) = \dfrac{a^{m+1}}{m+1} (\ln a)^n - \dfrac{n}{m+1} I_{m,n-1} (a)$, since $\lim\limits_{x\to 0} \dfrac{x^{m+1}}{m+1} (\ln x)^n = 0.$ In particular, $\sum_{i=0}^\infty \int_0^a x\dfrac{(x\ln x)^i}{i!}- \dfrac{(x\ln x)^i}{i!} dx = \sum_{i=0}^\infty I_{i+1,i}(a) - I_{i,i}(a)$. I'm not sure how to evaluate the latter sum, since I haven't really come up with a good recurrence relation for it. $\int_0^{a} x(e^{x\ln x} -1) dx = \int_0^a \sum_{i=0}^\infty x(\dfrac{(x\ln x)^i}{i!} - 1) dx = \sum_{i=0}^\infty \int_0^a x\dfrac{(x\ln x)^i}{i!} - \dfrac{(x\ln x)^i}{i!} dx.$ Let $f(x)$ be the function $f(x)=x$ or $f(x)\equiv 1$. Since the series $\sum_{i\ge 0} x\dfrac{(x\ln x)^i}{i!}dx$ and $\sum_{i=0}^\infty \dfrac{(x\ln x)^i}{i!}$ are convergent for any real number x, their difference equals the value of the series $\sum_{i=0}^\infty x(\dfrac{(x\ln x)^i}{i!} - 1)$. Since for $x\in (0,1], \sum_{i=0}^\infty f(x)\dfrac{(-x\ln x)^i}{i!}$ is dominated by the integrable function $f(x)e^{-x\ln x}$, by the Dominated Convergence Theorem, we may interchange the summation and limit in the last equality above. 
In more detail, for all $n$, $\sum_{i=0}^n\left(x\dfrac{(x\ln x)^i}{i!}- \dfrac{(x\ln x)^i}{i!}\right) \leq \sum_{i=0}^n\left|x\dfrac{(x\ln x)^i}{i!}- \dfrac{(x\ln x)^i}{i!}\right| \leq \sum_{i\ge 0} |x-1| \dfrac{(-x\ln x)^i}{i!}$, and the latter function is integrable.
The key fact here is that $$\lim_{x \rightarrow 0+} x^x = 1$$ There are various ways to show the above. As a result, for any $1 > \epsilon > 0$, there is an $N$ such that for $n > N$, on $(0,1/n]$ one has $$(1 - \epsilon)x < x^{x + 1} < (1 + \epsilon) x$$ Insert this into your integral and let $n \rightarrow \infty$...
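The squeeze is easy to see numerically; this is a quick sketch of mine, not part of the argument. For $n=1000$, the scaled integral is already within about $0.003$ of $1/2$.

```python
def scaled_integral(n, steps=20000):
    # midpoint rule for n^2 * integral of x^(x+1) over [0, 1/n]
    h = 1.0 / (n * steps)
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += x ** (x + 1) * h
    return n * n * total
```

The correction term behaves like $-\ln n/(3n)$, so convergence to $1/2$ is slow but clearly visible.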
{ "language": "en", "url": "https://math.stackexchange.com/questions/4580339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Asymptotic stability of linear time varying system Consider a linear time-varying homogeneous system: $$\dot{x}=A(t)x$$ where $x\in\mathbb{R}^n$ and $A(t)$ is an $n\times n$ real symmetric matrix satisfying $A(t)\to -I_n$ as $t\to\infty$. Suppose $A$ is a continuously differentiable function on $[0,\infty)$. Here $I_n$ is the $n\times n$ identity matrix. Is it true that the system is asymptotically stable? Is it true that it is Lyapunov stable? For Lyapunov stability, I found a related question. Since $A(t)\to -I_n$, $A(t)$ will eventually be negative definite for $t$ large enough, so by using the result in the link, it seems we have Lyapunov stability. For asymptotic stability, it seems to be possible because here we don't have the situation that $A(t)$ vanishes very fast, as shown in several counterexamples in the above link. It is obvious that if $A(t)$ is exactly $-I_n$, the system is asymptotically stable. Also, the fact that $A(t)$ is symmetric may be essential.
Let $n(t) = \lambda_1(A(t))$, the $\max$ eigenvalue of $A(t)$. $n$ is continuous and $n(t) \to -1$. Choose $T>0$ such that $n(t) \le -{1 \over 2}$ for $t > T$ and let $B=\sup_{t \in [0,T]} n(t)$. Pick some initial condition $x_0$ and let $V_x(t) = {1 \over 2} \|x(t)\|^2$, where $x(t)$ is the response of the system to $x_0$. We see that $\dot{V_x}(t) = x(t)^T A(t) x(t) \le 2n(t) V_x(t)$. On $[0,T]$, $\dot{V_x}(t) \le 2B V_x(t)$ and so $V_x(t) \le V_x(0) e^{2Bt} \le V_x(0) e^{2BT}$ and similarly, for $t >T$, $V_x(t) \le V_x(T) e^{-(t-T)} \le V_x(0) e^{2BT} e^{-(t-T)}$. Let $\epsilon>0$ and choose $\delta = \epsilon e^{-BT}$; then if $\|x_0\| < \delta$ we see that $\|x(t)\| < \epsilon$ for $t \ge 0$. Furthermore, $x(t) \to 0$, hence the system is asymptotically stable (in fact exponentially stable).
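The argument can also be watched numerically. The matrix below is my own illustrative choice (diagonal, hence symmetric), starting indefinite and tending to $-I_2$; a crude Euler simulation shows the transient growth followed by exponential decay:

```python
import math

def simulate(x0, T=10.0, dt=1e-3):
    # Explicit Euler for x' = A(t) x with the illustrative choice
    # A(t) = diag(-1 + 2 e^{-t}, -1 + e^{-t}), which tends to -I_2.
    x = list(x0)
    t = 0.0
    for _ in range(int(T / dt)):
        a1 = -1.0 + 2.0 * math.exp(-t)
        a2 = -1.0 + math.exp(-t)
        x[0] += dt * a1 * x[0]
        x[1] += dt * a2 * x[1]
        t += dt
    return x

x0 = (1.0, 1.0)
xT = simulate(x0)
norm0 = math.hypot(*x0)
normT = math.hypot(*xT)
# The first diagonal entry is positive until t = ln 2, so the norm can
# grow at first (the e^{2BT} factor in the answer), but it then decays
# exponentially, as the answer's estimate predicts.
```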
{ "language": "en", "url": "https://math.stackexchange.com/questions/4580627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relationship between two triangle side lengths where one other side is shared This question was asked on an Australian year 10 (15 to 16 year olds) practice exam. Diagram of two triangles with sides a and b indicated: "Determine the relationship between the values of $a$ and $b$ by writing $a$ in terms of $b$". The solution given was simply the following. $a=\dfrac{b}{b \sqrt 3 - 1} \tag{1}\label{1}$ My attempt to solve this used the cosine rule on each of the two smaller triangles to get the side length opposite the $30°$ angle, then on the larger triangle for the side length opposite the $60°$ angle, giving the following relationship between a and b. $\left(\sqrt{a^2 + 1 - a \sqrt 3} + \sqrt{b^2 + 1 - b \sqrt 3}\right)^2 = a^2 + b^2 - ab \tag{2}\label{2}$ However I was not able to simplify (\ref{2}) to get equation (\ref{1}). My question is: * *How can equation (\ref{2}) be simplified to give equation (\ref{1}) using algebra that is accessible to a high school student? *Is there another way, perhaps using other trigonometric identities, that does not use the form of (\ref{2})?
I would like to further add that this problem can be solved fairly simply using the method below: 1.) First, we label the triangle $\triangle ABC$ with a point $D$ that lies on $AC$ for our ease. We can also label $\angle ACB=\alpha$. Now, we draw a line $DE$ from $D$ onto a point $E$ that lies on $AB$, such that $DE\parallel BC$. 2.) This gives us an isosceles triangle $\triangle BED$ where $\angle EDB=\angle EBD=30^\circ$. We also know that $BE=ED$; furthermore, we can calculate the length of $BE$ and/or $ED$ as we know the length of $BD$. There are multiple ways of doing this (law of cosines, dropping a perpendicular, etc.), but it's fairly trivial so I'll just use the result and say that $BE=ED=\frac{1}{\sqrt3}$. This also implies that the remaining line segment $EA=b-\frac{1}{\sqrt3}$ or $\frac{b\sqrt3 -1}{\sqrt3}$. 3.) With this information now, we can finish off the problem. Notice that $\triangle AED$ is similar to $\triangle ABC$ via the AAA property. In that case, we can use Thales' theorem and say that: $$\frac{1}{a\sqrt3}=\frac{b\sqrt3 -1}{b\sqrt3}$$ $$\frac{1}{a}=\frac{b\sqrt3 -1}{b}$$ Now we can just "flip" the expressions and get the desired result: $$a=\frac{b}{b\sqrt3 -1}$$ To answer your "first" question, it may be possible to simplify that algebraic expression and find an equation for $a$ in terms of $b$, but as you stated too, this is not something that would typically be expected of a high-schooler. Even if you can simplify it, it's simply not an efficient way to solve the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4580792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Probability of an event (two random variables are independent) Let $n\ge 1$ and $X_1, ...,X_n$ be i.i.d. $N(0,1)$ random variables. Let $S_n=\sum_{i=1}^n X_i$. Let $\varepsilon=(\varepsilon_1, ..., \varepsilon_n)$, where all $\varepsilon_i$ are i.i.d and $\varepsilon_1=+1$ or $-1$ with equal probability of $\frac{1}{2}$. Define $S_{n,\epsilon}=\sum_{i=1}^n \varepsilon_i X_i$. My question is how do we calculate $P[S_n \mbox{ and } S_{n,\varepsilon} \mbox{ are independent}]$ with respect to the distribution of $\varepsilon$? i.e. What is $\frac{\text{the numbers of $\varepsilon$ making $S_n$ and $S_{n,\varepsilon}$ independent}}{\text{the numbers of all possible $\varepsilon$}}?$ Thanks. I can imagine in some situations $S_n$ and $S_{n,\varepsilon}$ are dependent (cf. here), but I cannot even think of a concrete situation where $S_n$ and $S_{n,\varepsilon}$ are independent. Any help will be appreciated.
Fix $\varepsilon\in\{\pm 1\}^n$. Let $A=\{i:\varepsilon_i=1\}$, and let $B=\{i:\varepsilon_i=-1\}$. If $B={\large{\varnothing}}$ then $S_{n,\varepsilon}=S_n$, and if $A={\large{\varnothing}}$ then $S_{n,\varepsilon}=-S_n$, so in both of those cases, the random variables $S_{n,\varepsilon},S_n$ are dependent. Next assume $A,B$ are both nonempty. Let random variables $Y,Z$ be given by $$ \left\{ \begin{align*} Y&=\sum_{i\in A}\varepsilon_iX_i\\[4pt] Z&=\sum_{i\in B}\varepsilon_iX_i\\[4pt] \end{align*} \right. $$ Since $X_1,...,X_n$ are i.i.d. $\mathcal{N}(0,1)$ random variables, we get that * *$Y,Z$ are independent.$\\[4pt]$ *$(Y,Z)$ are jointly normal.$\\[4pt]$ *$Y\sim\mathcal{N}(0,|A|)$ and $Z\sim\mathcal{N}(0,|B|)$. It follows that $(Y+Z,Y-Z)$ are also jointly normal. Noting that $S_{n,\varepsilon}=Y+Z$ and $S_n=Y-Z$, we get that \begin{align*} & S_{n,\varepsilon},S_n\;\,\text{are independent} \\[4pt] \iff\;& Y+Z,Y-Z\;\,\text{are independent} \\[4pt] \iff\;& \text{Cov}(Y+Z,Y-Z)=0 \\[4pt] \iff\;& E\Bigl((Y+Z)(Y-Z)\Bigr) = E(Y+Z)E(Y-Z) \\[4pt] \iff\;& E\Bigl((Y+Z)(Y-Z)\Bigr) = 0 \\[4pt] \iff\;& E(Y^2-Z^2) = 0 \\[4pt] \iff\;& E(Y^2)=E(Z^2) \\[4pt] \iff\;& |A|=|B| \\[4pt] \end{align*} Note that $|A|+|B|=n$, hence $|A|=|B|$ implies $n$ is even and $|A|=|B|={\large{\frac{n}{2}}}$. Hence for a random choice of $\varepsilon$ from $\{\pm 1\}^n$, the probability that $S_{n,\varepsilon},S_n$ are independent is equal to zero if $n$ is odd, and is equal to $$ \frac { \large{ n\choose{m} } } {2^n} $$ if $n$ is even, where $m={\large{\frac{n}{2}}}$.
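The final count can be verified by brute force for small $n$ (the enumeration code is my own addition):

```python
import itertools, math

def balanced_count(n):
    # Number of sign vectors eps in {+1, -1}^n with |A| = |B|, i.e.
    # equally many +1's and -1's (equivalently sum(eps) == 0); by the
    # covariance criterion above these are exactly the eps for which
    # S_n and S_{n, eps} are independent.
    return sum(1 for eps in itertools.product((1, -1), repeat=n)
               if sum(eps) == 0)

c4, c5, c6 = balanced_count(4), balanced_count(5), balanced_count(6)
p4 = c4 / 2 ** 4   # probability for n = 4: C(4, 2) / 2^4 = 6/16
```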
{ "language": "en", "url": "https://math.stackexchange.com/questions/4581018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cauchy principal value: methods I don't understand yet the logic in the Cauchy Principal Value (P.V.) calculations. Let the residue theorem: $$\color{red}{\oint_Cf(z) \ dz = 2\pi i \sum_{k=1}^n \underset{z=z_k}{Res}\{f(z)\}}$$ (we supposed that $C$ was a "nice" closed curve and $f(z)$ analytic $\forall z \in \mathrm{int} \ C$ except in $z_k$ ($1 \le k \le n$)) I've got 2 poorly justified examples in my exercise book (forgive me for not MathJaxing all that follows): Apparently, the corrector found a nice closed curve, but doesn't want to share how he did it. Usually, when $f(z)$ is even, we can define such a curve $C$: but it doesn't explain why the 2 factor is missing in the answer. Why $\pi i$ instead of $2\pi i$? Second example: $$P.V. \int_{-\infty}^{+\infty} \frac{\exp(-4ix)}{(x+i)^2} \ dx$$ Here, we have a reversed cup (why reversed?) $\gamma (R)$ with the segment $C(R)$: Again, I don't understand what has been done. Is there a general method I can understand to solve those 2 exercises? EDIT: corrected one error pointed out in the comments ($(x+i)^2$ instead of $(x+1)^2$)
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{{\displaystyle #1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\sr}[2]{\,\,\,\stackrel{{#1}}{{#2}}\,\,\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Contour: Semi-circle in the upper complex half plane with an indent around $\ds{x = -1}$. Integration along the upper arc vanishes out as $\ds{R \to \infty}$. \begin{align} & \color{#44f}{\lim_{\epsilon \to 0^{+}}\ \left\{% \int_{-\infty}^{-1 - \epsilon}{\dd x \over \pars{x + 1}\pars{x^{2} + 2}} + \int_{\pi}^{0}{\epsilon\expo{\ic\theta}\ic\,\dd\theta \over \pars{\epsilon\expo{\ic\theta}}\bracks{\pars{-1 + \epsilon\expo{\ic\theta}}^{2} + 2}}\right.} \\ & \color{#44f}{\left.\phantom{\lim_{\epsilon \to 0^{+}}\,\,\,}\int_{-1 + \epsilon}^{\infty}\,\, {\dd x \over \pars{x + 1}\pars{x^{2} + 2}}\right\}} = 2\pi\ic\on{Res}\bracks{{1 \over \pars{x + 1}\pars{x^{2} + 2}}, x = \root{2}\ic} \\[5mm] & \implies \mbox{P.V.}\int_{-\infty}^{\infty}{\dd x \over \pars{x + 1}\pars{x^{2} + 2}} - {\pi\ic \over 3} = 2\pi\ic\bracks{-\,{1 \over 2\root{2}\pars{\root{2} - \ic}}} \\ - & ----------------------------------- \\[5mm] & \implies \mbox{P.V.}\int_{-\infty}^{\infty}{\dd x \over \pars{x + 1}\pars{x^{2} + 2}} = 2\pi\ic\bracks{-\,{1 \over 2\root{2}\pars{\root{2} - \ic}}} + {\pi \ic \over 3} \\[5mm] = & \ \bbx{\color{#44f}{{\root{2} \over 6}\,\pi}} \approx 0.7405 \\ & \end{align}
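A numerical cross-check of the value $\frac{\sqrt{2}}{6}\pi \approx 0.7405$ (my own addition). Pairing $x=-1\pm t$ cancels the simple pole: for $f(x)=\frac{1}{(x+1)(x^2+2)}$ one finds $f(-1+t)+f(-1-t)=\frac{4}{(t^2-2t+3)(t^2+2t+3)}$, so the principal value becomes an ordinary integral of a smooth function:

```python
import math

def g(t):
    # f(-1 + t) + f(-1 - t) for f(x) = 1/((x+1)(x^2+2)); the two
    # 1/(x+1) singularities cancel, and g(0) = 4/9.
    return 4.0 / ((t * t - 2 * t + 3) * (t * t + 2 * t + 3))

def simpson(f, a, b, m):
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    for i in range(1, 2 * m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

pv = simpson(g, 0.0, 200.0, 20000)   # tail beyond t = 200 is ~ 4/(3 * 200^3)
expected = math.sqrt(2.0) * math.pi / 6.0
```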
{ "language": "en", "url": "https://math.stackexchange.com/questions/4581220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does Serge Lang mean when he says $\mathbb{R}^4=\Bbb R^3\times \mathbb{R}^1$ is equal to $\mathbb{R}^4=\mathbb{R}^2\times \mathbb{R}^2$? Serge Lang equates $\mathbb{R}^4=\mathbb{R}^3\times \mathbb{R}^1$ to $\mathbb{R}^4=\mathbb{R}^2\times \Bbb R^2$ in terms of constructing a $4$-space, but intuitively I am unable to understand the first case, where he takes $x_1,x_2$ and $x_3,x_4$ separately. The part where the $3$ coordinates $x_1,x_2,x_3$ are considered separately with $x_4$ comes intuitively to me, to make a $4$-space.
He’s most likely talking about vector space isomorphisms. The isomorphism in this case would be: $T:\mathbb{R}^3 \times \mathbb{R}^1 \to \mathbb{R}^2 \times \mathbb{R}^2 $, $T((x_1,x_2,x_3),y) = ((x_1,x_2),(x_3,y))$. It is easy to check this is in fact an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4581789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $ \lim_{n \to \infty} \, \left(\frac{2}{3}\right)^n = 0$ I'm having trouble with a proof of this question: $$ \lim_{n \to \infty} \, \left(\frac{2}{3}\right)^n = 0$$ I know $N(\varepsilon)$ is a logarithm of $\varepsilon$ with base $\frac{2}{3}$ but I can't proceed with a proof.
The limit exists because the sequence is monotonic and bounded. L'Hôpital: $$c:=\lim_{n\to\infty}\left(\dfrac 23\right)^n=\lim_{n\to\infty}\dfrac {2^n}{3^n}=\lim_{n\to\infty}\dfrac {\ln2\cdot 2^n}{\ln3\cdot 3^n}=\dfrac {\ln2}{\ln3}\lim_{n\to\infty}\dfrac {2^n}{3^n}=\dfrac {\ln2}{\ln3}\lim_{n\to\infty}\left(\dfrac 23\right)^n=\dfrac {\ln2}{\ln3}c.$$ Thus $c=0$. (You actually don't need L'Hôpital. You can just split off a factor of $\frac 23$.)
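To connect with the asker's idea that $N(\varepsilon)$ is a base-$\frac{2}{3}$ logarithm (my own sketch): for any $\varepsilon>0$, taking $N$ just past $\log_{2/3}\varepsilon$ gives $(2/3)^N<\varepsilon$, which is exactly the $\varepsilon$-$N$ definition of the limit being $0$.

```python
import math

def N_of(eps):
    # Smallest N with (2/3)^N < eps. Note log(2/3) < 0, which flips
    # the inequality when dividing: N > log(eps) / log(2/3).
    return math.floor(math.log(eps) / math.log(2.0 / 3.0)) + 1

checks = []
for eps in (1e-1, 1e-3, 1e-9):
    N = N_of(eps)
    checks.append((2.0 / 3.0) ** N < eps <= (2.0 / 3.0) ** (N - 1))
```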
{ "language": "en", "url": "https://math.stackexchange.com/questions/4582354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
integration of a log-normal random variable Let $X$ be a log-normally distributed random variable. For all $a\in\mathbb{R}$, compute: $$\int_{\mathbb{R}}|a+X|dP.$$ I get that when $a\ge0$, since $X$'s support is $(0,\infty)$, the following holds: $$\int_{\mathbb{R}}|a+X|dP=\int_{\mathbb{R}}(a+X)dP=a+E[X].$$ When $a<0$, is there a way of computing the integral without using the pdf like I did before?
Note $|a+X|=(a+X)^+-(a+X)^-=\max(a+X,0)-\min(a+X,0)$. If $a<0$ set $b:=-a,\,b>0$ and we have $$|X-b|=(X-b)^+-(X-b)^-=\max(X-b,0)-\min(X-b,0)$$ The expectation $E[(X-b)^+]$ is given in closed form by the Black-Scholes formula for call options. On the other hand: $$E[(X-b)^-]=-E[(b-X)^+]$$ which is again given by the Black-Scholes formula of put options.
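To make this concrete (the parameter choices $\mu=0$, $\sigma=1$, $b=1$ and the helper names are mine): with $X=e^Z$, $Z\sim N(\mu,\sigma^2)$, the two Black-Scholes-style pieces are $E[(X-b)^+]=F\Phi(d_1)-b\Phi(d_2)$ and $E[(b-X)^+]=b\Phi(-d_2)-F\Phi(-d_1)$, where $F=e^{\mu+\sigma^2/2}$, $d_1=\frac{\mu+\sigma^2-\ln b}{\sigma}$, $d_2=d_1-\sigma$. A Monte Carlo run agrees:

```python
import math, random

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def e_abs(mu, sigma, b):
    # E|X - b| = E[(X-b)^+] + E[(b-X)^+] for X = exp(Z), Z ~ N(mu, sigma^2).
    F = math.exp(mu + 0.5 * sigma * sigma)
    d1 = (mu + sigma * sigma - math.log(b)) / sigma
    d2 = d1 - sigma
    call = F * Phi(d1) - b * Phi(d2)
    put = b * Phi(-d2) - F * Phi(-d1)
    return call + put

closed = e_abs(0.0, 1.0, 1.0)

random.seed(0)
n = 400_000
mc = sum(abs(random.lognormvariate(0.0, 1.0) - 1.0) for _ in range(n)) / n
```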
{ "language": "en", "url": "https://math.stackexchange.com/questions/4582666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Maximum number of concave quadrilaterals in n points in regular position Consider N points in a plane. What is the maximum number of quadrilaterals that are concave? The minimum is always zero: form a convex polygon with all N points, and no 4 of the N will form a concave quadrilateral. But what is the maximum? I think it goes (based on my software): N points,Max(Concave Quadrilaterals) 4,1 5,4 6,12 ... Is there an existing formula? To be specific, here I am interested in all combinations of four points, so out of combin(N 4), what is the most that can be concave. So a triangle containing two points counts as two concave quadrilaterals. I read as much as I could before asking, so, sorry if this is well known, I'm an amateur (actually hobbyist) mathematician at best. Thank you for this site, this is my first post after getting a lot out of it.
This is more of a long commentary on the question. This is a very interesting question, thank you OP. This is a revised version of my answer. Perhaps the question can be reworded this way. Let $N$ be a positive integer. If $X$ is a set of points in the plane such that $|X|=N$ and no three of which lie on the same line, then denote by $c_N(X)$ the number of four-point subsets of $X$ whose convex hull is a triangle. What is the maximal value of $c_N(X)$? This problem can also be reformulated as follows. For a given integer $N$, find the maximum possible integer $\bar{c}_N$ such that any set of $N$ points in the plane in general position has at least $\bar{c}_N$ convex quadrilaterals. It is clear that $$c_N+\bar{c}_N=\binom{N}{4}.$$ Further, we can observe that the number of convex quadrilaterals on a set $X$ of points in the plane in general position is equal to the number of crossings of the edges of the complete graph on the set $X$ when its edges are drawn as straight segments. Therefore the convex quadrilateral problem is equivalent to the following problem. Find the rectilinear crossing number of the complete graph $K_N$, i.e., determine the minimum number of crossings in a drawing of $K_N$ in the plane with straight edges and the nodes in general position. We denote this number by the symbol $\bar{v}(K_N)$. We see that $\bar{c}_N=\bar{v}(K_N)$. It turns out to be a pretty old problem. Here are the first few values $\bar{c}_N=1,3,9,19,\ldots$ for $N=5,6,7,8,\ldots$ (OEIS A014540). Quite a few values are known for small $N$; see here. To illustrate what we have said, here is one picture. On the left is the configuration $X$ of $5$ points, for which $\bar{v}(K_5)=\bar{c}_5(X)=1$. The configuration $X$ of $6$ points is shown on the right, for which $\bar{v}(K_6)=\bar{c}_6(X)=3$.
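A brute-force counter for small configurations is a useful companion here (my own sketch; it assumes general position and uses orientation tests). A triangle with two interior points realizes $\bar{v}(K_5)=1$, i.e. four concave quadruples out of $\binom{5}{4}=5$, matching the table in the question, while five points in convex position give none:

```python
from itertools import combinations

def orient(p, q, r):
    # Sign of the cross product (q - p) x (r - p).
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def inside(p, a, b, c):
    # Strict point-in-triangle test: all three orientations agree.
    s1, s2, s3 = orient(a, b, p), orient(b, c, p), orient(c, a, p)
    return (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0)

def quad_counts(points):
    # A 4-subset is concave iff one point lies inside the triangle
    # of the other three; otherwise its convex hull is a quadrilateral.
    concave = convex = 0
    for q in combinations(points, 4):
        if any(inside(q[i], *(q[j] for j in range(4) if j != i))
               for i in range(4)):
            concave += 1
        else:
            convex += 1
    return concave, convex

five_optimal = [(0, 0), (10, 0), (5, 9), (4, 3), (6, 3)]   # triangle + 2 inside
five_convex = [(t, t * t) for t in range(5)]               # on a parabola
```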
{ "language": "en", "url": "https://math.stackexchange.com/questions/4583186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does this space exist? Does the space $[0,1]^r$ exist for $r\in\mathbb R$, with Hausdorff dimension $r$? Thank you very much.
Not directly: In the notation $[0,1]^n$ with $n\in\mathbb{N}$, we mean $[0,1]\times[0,1]\times\cdots\times[0,1]$, i.e. the $n$-fold cartesian product. There is no $r$-fold cartesian product for $r\notin\mathbb{N}$, because you cannot repeat an operation (taking a cartesian product) a non-natural number of times. Of course, you could construct a space with Hausdorff dimension $r$ for any $r\in\mathbb{R}_{>0}$, but this would not be related directly to the notation $[0,1]^n$ for $n\in\mathbb{N}$, and there is probably more than one way to do it. One such construction would be to take $[0,1]^r = [0,1]^{\lfloor r\rfloor}\times S_{\mathrm{frac}(r)}$ where $\mathrm{frac}(r)=r-\lfloor r\rfloor$ is the fractional part of $r$ and $S_{\mathrm{frac}(r)}$ is your favorite set of Hausdorff dimension $\mathrm{frac}(r)$, e.g. some generalised Cantor set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4583333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to solve $\cos^{40}x-\sin^{40}x=1$ for real and imaginary solutions Solve: $$\cos^{40}x-\sin^{40}x=1$$ I found by inspection that $\sin(x)=0$ satisfies the equation, so $$x=n\pi$$ must be a family of solutions, but there could be more solutions, real or imaginary. How do I find them?
HINT I would recommend you to notice that \begin{align*} \cos^{40}(x) - \sin^{40}(x) = 1 & \Longleftrightarrow \cos^{40}(x) = \sin^{40}(x) + 1 \geq 1 \end{align*} Based on such relation, can you proceed from here?
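Numerically the hint checks out on the reals (my own spot-check, not part of the original hint): at $x=n\pi$ both sides are $1$, and wherever $\sin x\neq 0$ we get $\cos^{40}x\le 1<\sin^{40}x+1$, so the equation fails. The inequality argument is specific to real $x$; over $\mathbb C$ the functions are unbounded, so the complex case needs separate treatment.

```python
import math

def f(x):
    return math.cos(x) ** 40 - math.sin(x) ** 40

# x = n*pi solves the equation (up to floating-point rounding):
sols_ok = all(abs(f(n * math.pi) - 1.0) < 1e-12 for n in range(-3, 4))

# On a grid avoiding multiples of pi, f stays strictly below 1:
grid_ok = all(f(0.05 + 0.1 * k) < 1.0 for k in range(60))
```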
{ "language": "en", "url": "https://math.stackexchange.com/questions/4583530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Explain $\Bbb Q$ is a dense set This is approximately my explanation for 14-year-old students in saying that $\Bbb Q$ is a dense set. In contrast to the set $\Bbb N$ and the set $\Bbb Z$, between any two rationals another rational is always included, and thus we can say that between two rationals infinitely many rationals are included. For example, let us put the numbers $0$ and $1$ on the straight line. Now let us mark on the line a rational number between $0$ and $1$, for example their midpoint. Now we mark on the line a rational number between $0$ and $\frac 12$, for example their midpoint. We obtain the sequence $0, \frac 14, \frac 12$. Now we mark on the line a rational number between $0$ and $\frac 14$, for example their midpoint $\frac 18$. Continuing to halve in this way we obtain the fractions $\dfrac{1}{2^n}$ with $n\in\Bbb N$, and for large $n$ the points of the subdivision all accumulate toward $0$ (zero becomes an accumulation point). For this reason we can say that the set $\Bbb Q$ is a DENSE set. By this expression we mean an ordered set in which, given any INTERVAL, THERE IS AT LEAST ONE ELEMENT INSIDE it. Is there another easy explanation to give my students or is the one I have given enough?
I would slightly modify your explanation by adding one preliminary step and one final step. Step 1: Motivate the definition of "dense", which is jargon, but students already have a sense of what it is. Let $X$ be a collection of points that can be (totally) ordered: for all $x,y$ in $X$, either $x<y$ or $y<x$ or $x=y$. For example, $X$ may look like $$x_1<x_2<x_3,\qquad X=\{x_1,x_2,x_3\}$$ or $$x_1<x_2<\cdots<x_{999}<\cdots<x_{n}<y,\qquad X=\{y,x_1,x_2,\ldots\}$$ or $$x_1<x_2<\cdots<x_{999}<\cdots<x_{n}<y<z,\qquad X=\{z,y,x_1,x_2,\ldots\}$$ Henceforth assume $X$ is an infinite collection. Say $X$ is dense if for each $y$ in $X$, there exists an infinite sequence of points $(x_1,x_2,\ldots)$ such that the points $x_n$ never coincide with $y$ (i.e. $x_n\ne y$) and either $$x_1<x_2<\cdots<y\qquad\text{i.e.}\qquad \begin{cases}m<n\Rightarrow x_m<x_n\\n=1,2,\ldots\Rightarrow(x_n<y)\end{cases}$$ OR $$y<\cdots<x_2<x_1\qquad\text{i.e.}\qquad \begin{cases}m<n\Rightarrow x_n<x_m\\n=1,2,\ldots\Rightarrow(y<x_n)\end{cases}$$ I recommend having a chalkboard for this previous part, and I recommend saying "the $x_n$ approach $y$ either from the left or from the right." Also, for the children, please do not use logic notation such as $\wedge$ or $\exists$. Step 2: [insert your explanation that $\mathbb{Q}$ is dense] Step 3: Remark that there is a simpler and equivalent stricter definition of dense-ness. Say $X$ is dense if for all $x,y$ in $X$, if $x\ne y$, then there exists $z$ in $X$ such that either $x<z<y$ or $y<z<x$. [justify the equivalence with intuition learned from the explanation in Step 2] Let the students ponder why this definition is "stricter" (I almost missed it entirely before logging off MSE, for example)
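If the class has any programming exposure, the midpoint idea in Step 3 can even be demonstrated exactly with rational arithmetic (my own addition, using Python's `fractions` module):

```python
from fractions import Fraction

def between(a, b):
    # The midpoint of two rationals is rational and strictly between
    # them: this is exactly the "stricter" density property of Step 3.
    return (a + b) / 2

lo, hi = Fraction(0), Fraction(1)
points = []
for _ in range(10):
    mid = between(lo, hi)
    assert lo < mid < hi
    points.append(mid)
    hi = mid          # repeat on the left half, as in the halving example
```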
{ "language": "en", "url": "https://math.stackexchange.com/questions/4583695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
properties of the Mandelbrot set and complex dynamical system I want to learn about complex dynamical systems, especially the properties of the Mandelbrot set. Are there any references on this topic?
* *The Science of Fractal Images chapter 4 and appendix D H-O. Peitgen et al (eds.) https://archive.org/details/scienceoffractal0000unse_1987 *interactive demonstrations in Mandel software Wolf Jung http://mndynamics.com/indexp.html *Dynamics in one complex variable John W. Milnor https://abel.math.harvard.edu/archive/118r_spring_05/docs/milnor.pdf *Exploring the Mandelbrot set. The Orsay Notes Adrien Douady, John H. Hubbard https://pi.math.cornell.edu/~hubbard/OrsayEnglish.pdf *Internal addresses in the Mandelbrot set and Galois groups of polynomials Dierk Schleicher https://arxiv.org/abs/math/9411238 *various preprints on arXiv by the above and others
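Alongside the reading list, it helps to have the basic escape-time experiment in hand. The following is a minimal membership test (my own sketch; `max_iter` is a heuristic cutoff, so "no escape" only means "probably inside"):

```python
def escapes(c, max_iter=500, bailout=2.0):
    # Iterate z -> z^2 + c from z = 0; c lies outside the Mandelbrot set
    # iff the orbit escapes to infinity, and |z| > 2 guarantees escape.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return True
    return False

inside_0 = not escapes(0j)         # fixed point z = 0
inside_m1 = not escapes(-1 + 0j)   # period-2 orbit 0, -1, 0, -1, ...
outside_1 = escapes(1 + 0j)        # 0, 1, 2, 5, 26, ... diverges
```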
{ "language": "en", "url": "https://math.stackexchange.com/questions/4583890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding how to get composition of disjoint cycles from composition of nondisjoint cycles in a proof I am looking at the proof of the theorem given on page 19 of this document as: "Writing $\sigma \in S_n$ as a product of transpositions in different ways, $\sigma$ is either always composed of an even number of transpositions, or always an odd number of transpositions." One confusing part of this proof to me is the line: $$(c \enspace a_2 \enspace \cdots \enspace a_{k-1} \enspace d \enspace a_{k+1} \enspace \cdots \enspace a_{k+l})(c \enspace d) = (d \enspace a_2 \enspace \cdots \enspace a_{k-1})( c \enspace a_{k+1} \enspace \cdots \enspace a_{k+l}) $$ How do I understand this composition of cycles on the left-hand side as the composition of cycles on the right-hand side? Is there a step-by-step way to show how to get from one composition to the other? Thanks.
I explain how I do it here for specific permutations. For generic ones like this one the process is similar. It's important to know if you compose permutations left-to-right or right-to-left, though... Your equation suggests you compose them right-to-left. Meaning that if you have two cycles/permutations, $\sigma_1$ and $\sigma_2$, and you write $\sigma_1\sigma_2$, then you mean that $\sigma_2$ is applied first, and then $\sigma_1$. Consider your left hand side (I will use commas instead of forcing space): $$(c, a_2, \ldots, a_{k-1}, d,a_{k+1}, \ldots,a_{k+l})(c, d)$$ What happens to $c$? First we apply the transposition $(c,d)$, which sends $c$ to $d$. Then we apply the cycle, which sends $d$ to $a_{k+1}$. Thus, $c$ is sent to $a_{k+1}$. So we start writing the result as: $$(c,a_{k+1},$$ Next, $a_{k+1}$ is left alone by $(c,d)$, and then sent to $a_{k+2}$; then $a_{k+2}$ is left alone by $(c,d)$, and then sent to $a_{k+3}$, and so on until we get to $a_{k+l}$. So now we have: $$(c,a_{k+1},a_{k+2},\ldots,a_{k+l},$$ Next, $a_{k+l}$ is left fixed by $(c,d)$, and then sent to $c$ by the long cycle, so this "closes" this cycle and we get $$(c,a_{k+1},\ldots,a_{k+l})\cdots$$ Next, what happens to $d$? First it is sent to $c$, then $c$ is sent to $a_2$; so $d$ is sent to $a_2$: $$(c,a_{k+1},\ldots,a_{k+l})(d,a_2,$$ then $a_2$ is fixed by $(c,d)$ and then sent to $a_3$, and so on until we get to $a_{k-1}$. Then $a_{k-1}$ is fixed by $(c,d)$, and sent to $d$ by the cycle, so we get $$(c,a_{k+1},\ldots,a_{k+l})(d,a_2,a_3,\ldots,a_{k-1}).$$ And this covers all the elements that appear in either of the two permutations, so we get the final result: $$(c,a_2,\ldots,a_{k-1},d,a_{k+1},\ldots,a_{k+l})(c,d) = (c,a_{k+1},\ldots,a_{k+l})(d,a_2,a_3,\ldots,a_{k-1}).$$ Which is what you had. If after completing the second cycle there were still terms left over, you would start "running them through" to see what happens to them, leading to further cycles on the right hand side.
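The run-through above is easy to mechanize. Here is a small checker (my own code) confirming the identity for the concrete case $k=3$, $l=2$, i.e. $(c\,a_2\,d\,a_4\,a_5)(c\,d)=(c\,a_4\,a_5)(d\,a_2)$, composing right-to-left:

```python
def cycle_map(cycle, domain):
    # The permutation (as a dict) given by one cycle, fixing the rest.
    f = {x: x for x in domain}
    for i, x in enumerate(cycle):
        f[x] = cycle[(i + 1) % len(cycle)]
    return f

def compose(f, g):
    # Right-to-left convention: (f g)(x) = f(g(x)), so g acts first.
    return {x: f[g[x]] for x in g}

domain = ['c', 'd', 'a2', 'a4', 'a5']
lhs = compose(cycle_map(['c', 'a2', 'd', 'a4', 'a5'], domain),
              cycle_map(['c', 'd'], domain))
rhs = compose(cycle_map(['c', 'a4', 'a5'], domain),
              cycle_map(['d', 'a2'], domain))
```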
{ "language": "en", "url": "https://math.stackexchange.com/questions/4584313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $(x^2−1) \bmod 8$ $\in \{ 0 , 3 , 7 \}, \forall x \in \mathbb{Z}$. It must be verified that for all $x \in \mathbb{Z}$ it holds that $x^2 - 1 \bmod{8} \in \{0, 3, 7\}$. First some definitions. Using the following theorem a definition for $\bmod$ is provided: Theorem. For all $a \in \mathbb{Z}$ and $d \in \mathbb{N}$ unique integers q, r exist satisfying $a = q \cdot d + r \wedge 0 \leq r <d$ The integers $q$, $r$ correspond to the quotient and remainder, respectively, of $a$, these being defined as: $a = (a / d) \cdot d + a \bmod d \wedge 0 \leq a \bmod d < d$ With $q = (a/d)$ and $r = a \bmod d$. To verify the claim I use induction. I use a brute force approach where I search for all numbers $x$ such that $x^2 - 1 \bmod 8 = 0$, $x^2 - 1 \bmod 8 = 3$ or $x^2 - 1 \bmod 8 = 7$. First I search for all $x$ such that $x^2 - 1 \bmod 8 = 0$. If $x^2 - 1 \bmod 8 = 0$ then 8 is a divisor of $x^2 - 1$, i.e.: $8 \mid x^2 - 1$. Now I search for some other $x$ such that 8 is a divisor of $x^2 - 1$. Let $f(x) = x^2 - 1$ and $8 \mid f(x)$ then $8 \mid f(x + a)$ for some $a \in \mathbb{N}$. I compute this a: $8 \mid f(x) = 8 \mid x^2 - 1 \implies 8 \mid (x + a)^2 - 1 = 8 \mid x^2 + a^2 + 2ax - 1 = 8 \mid x^2 - 1 + a^2 + 2ax$ The implication holds for e.g.: $a = 4$ since: $8 \mid x^2 - 1 + a^2 + 2ax \implies 8 \mid x^2 - 1 + 16 + 8x = 8 \mid f(x) \wedge 8 \mid 16 + 8x = true$ So it can be concluded that if $8 \mid f(x)$ then $8 \mid f(x + 4)$. In fact it may be concluded that if $8 \mid f(x)$ then $8 \mid f(x + 4k), \forall x \in \mathbb{Z} \wedge k \in \mathbb{N}$, since: $8 \mid (x + 4k)^2 - 1 = 8 \mid x^2 - 1 + 16k^2 + 8kx = 8 \mid f(x) \wedge 8 \mid 16k^2 + 8kx$ So it holds that $8 \mid f(x) \implies 8 \mid f(x + 4k)$. By inspection one finds that $8 \mid f(1) = 8 \mid 1^2 - 1 = 8 \mid 0 = true$ and so $f(x) \bmod 8 = 0$ for all $x \in \{1, 5, 9, 13, ...\}$. 
Since $f(x)$ is symmetric it holds that $f(-x) = f(x)$ and so it follows that $f(x) \bmod 8 = 0$ for all $x \in \{-1, -5, -9, -13, ...\}$. Upon further inspection one finds that $8 \mid f(3) = 8 \mid 3^2 - 1 = 8 \mid 8 = true$ and so it follows that $x^2 - 1 \bmod 8 = 0$ for all $x \in \{3, 7, 11, 15, ...\}$ and all $x \in \{-3, -7, -11, -15, ...\}$ Next I search for all x such that $x^2 - 1 \bmod 8 = 3$, or equivalently $x^2 - 4 \bmod 8 = 0$. Using the same approach I find that $x^2 - 1 \bmod 8 = 3$ for all $x \in \{2, 6, 10, 14, ...\}$ and all $x \in \{-2, -6, -10, -14, ...\}$. Finally $x^2 - 1 \bmod 8 = 7$ for all $x \in \{0, 4, 8, 12, ...\}$ and $x \in \{0, -4, -8, -12, ...\}$. And so one concludes that for all $x \in \mathbb{Z}$ it holds that $x^2 - 1 \bmod{8} \in \{0, 3, 7\}$. This question comes from a course in discrete mathematics in computer science. I feel that my approach is overkill and that I'm doing something wrong and that there must be some cleverer way of solving the problem. If anyone can help with a better, cleaner, approach, or point out errors, it will be greatly appreciated :).
First of all, if $x^2 - 1 \equiv y \pmod 8$, then $$(x+4)^2 - 1 = (x^2 - 1) + 8(x + 2) \equiv y \pmod 8;$$ that is to say, $x^2-1$ and $(x+4)^2 - 1$ have the same remainder upon division by $8$, for all integers $x$. Hence it suffices to test $x \in \{0, 1, 2, 3\}$. We have $$\begin{align} 0^2 - 1 &\equiv 7 \pmod 8, \\ 1^2 - 1 &\equiv 0 \pmod 8, \\ 2^2 - 1 &\equiv 3 \pmod 8, \\ 3^2 - 1 &\equiv 0 \pmod 8. \end{align}$$ This concludes the proof.
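A short exhaustive check (my own addition) confirms both the statement and the period-4 reduction used in the proof:

```python
residues = {(x * x - 1) % 8 for x in range(-100, 101)}
# Python's % with modulus 8 always lands in {0, ..., 7}, matching the
# remainder r in the division theorem quoted in the question.

period_ok = all((x * x - 1) % 8 == ((x + 4) ** 2 - 1) % 8
                for x in range(-50, 50))
```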
{ "language": "en", "url": "https://math.stackexchange.com/questions/4584496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Verify taking the derivative of this polynomial I've been splitting my head over this, and most likely it's obvious but I can't see it. Taken from Understanding Analysis, 2nd Edition, Stephen Abbott. Exercise $\mathbf{6.6.9}$ (Cauchy's Remainder Theorem) Let $f$ be differentiable $N+1$ times on $(-R,R)$. For each $a\in(-R,R)$, let $S_N(x,a)$ be the partial sum of the Taylor series for $f$ centered at $a;$ in other words, define $$S_N(x,a)=\sum_{n=0}^{N}{c_n(x-a)^n} \text{ where } c_n=\frac{f^{(n)}(a)}{n!}.$$ Let $E_N(x,a)=f(x)-S_N(x,a).$ Now fix $x\neq0$ in $(-R,R)$ and consider $E_N(x,a)$ as a function of $a$. Explain why $E_N(x,a)$ is differentiable with respect to $a$, and show $$E'_N(x,a)=-\frac{f^{(N+1)}(a)}{N!}(x-a)^N. $$ My attempt. $E_N$ is differentiable w.r.t $a$ because $f(x)$ is constant, and $S_N$ is a polynomial in $a$ and hence differentiable. In fact we can say $E_N$ is infinitely differentiable. Now, \begin{align} E'_N=-S'_N &=-\sum_{n=1}^{N}{nc_n(x-a)^{n-1}(-1)}\\ &=\sum_{n=1}^{N}{nc_n(x-a)^{n-1}}\\ &=\sum_{n=1}^{N}{n\frac{f^{(n)}(a)}{n!}(x-a)^{n-1}}\\ &=\sum_{n=1}^{N}{\frac{f^{(n)}(a)}{(n-1)!}(x-a)^{n-1}}\\ \end{align} I'm stuck here. How to proceed?
When you differentiate $S_N$ term-wise, you can't ignore that $c_n$ depends on $a$ (and so $S_N(x, a)$ isn't polynomial in $a$). Correct way will be $$E_N' = \frac{\partial}{\partial a} \left(- \sum_{n = 0}^N\frac{f^{(n)}(a)}{n!}(x - a)^n\right) \\ = - f^{(1)}(a) - \sum_{n=1}^N \frac{\partial}{\partial a} \left(\frac{f^{(n)}(a)}{n!}(x - a)^n\right)\\ = -f^{(1)}(a) - \sum_{n=1}^N \frac{f^{(n + 1)}(a)}{n!}(x - a)^n - \sum_{n=1}^N-\frac{f^{(n)}(a)}{(n - 1)!}(x - a)^{n - 1} \\ = -f^{(1)}(a) - \sum_{n=1}^N \frac{f^{(n + 1)}(a)}{n!}(x - a)^n + \sum_{n = 0}^{N - 1}\frac{f^{(n + 1)}(a)}{n!}(x - a)^n\\ = \color{red}{-f^{(1)}(a)} - \color{blue}{\sum_{n=1}^{N - 1} \frac{f^{(n + 1)}(a)}{n!}(x - a)^n} - \frac{f^{(N + 1)}(a)}{N!}(x - a)^N + \color{red}{f^{(1)}(a)} + \color{blue}{\sum_{n = 1}^{N - 1}\frac{f^{(n + 1)}(a)}{n!}(x - a)^n}\\ = -\frac{f^{(N + 1)}(a)}{N!}(x - a)^N $$
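One can sanity-check the closed form numerically with a concrete $f$ (my choice: $f=\exp$, so every derivative equals $e^a$), comparing a central difference in $a$ against $-\frac{f^{(N+1)}(a)}{N!}(x-a)^N$:

```python
import math

def E(N, x, a):
    # Taylor error f(x) - S_N(x, a) for f = exp: c_n = e^a / n!.
    s = sum(math.exp(a) * (x - a) ** n / math.factorial(n)
            for n in range(N + 1))
    return math.exp(x) - s

def E_prime(N, x, a):
    # Claimed closed form -f^{(N+1)}(a) (x - a)^N / N! with f = exp.
    return -math.exp(a) * (x - a) ** N / math.factorial(N)

N, x, a, h = 4, 1.3, 0.2, 1e-5
numeric = (E(N, x, a + h) - E(N, x, a - h)) / (2 * h)
exact = E_prime(N, x, a)
```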
{ "language": "en", "url": "https://math.stackexchange.com/questions/4584626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Parity of the class number of cyclotomic fields I am interested if there are any heuristics on the parity of the class number of $\mathbb{Q}(\zeta_p)$ where $\zeta_p$ is a primitive root of unity. Is it true that it is odd infinitely many often? Is the density of primes for which it is odd known or conjectured?
Let $h_p$ be the class number of $\Bbb Q(\zeta_p)$ and $h_p^+$ the class number of the maximal real subfield; then $h_p=h_p^+h_p^-$ for some integer $h_p^-$. Hasse proved that $h_p$ and $h_p^-$ have the same parity. It is conjectured that the class number of $\Bbb Q(\zeta_p)$ is odd if $q=\frac{p-1}{2}$ is also prime, i.e. if $p$ is a safe prime (equivalently, $q$ is a Sophie Germain prime). (This has been proven by Estes if one assumes that $2$ is a primitive root modulo $p$.) It is conjectured that there are infinitely many such primes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4584791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Need help to calculate the integral $\int\limits_{-1}^{1}{(1-x^{2})^{n}dx}$ I want to calculate this integral $$\int\limits_{-1}^{1}{(1-x^{2})^{n}dx}$$ but I don't know how to start. Should I use the method of changing variables? I somehow know that its value should be $${{2^{2n+1}(n!)^{2}}\over{(2n+1)!}}$$ but I don't know how to prove it. Edit: Using the link that Martin introduced, I managed to calculate the integral. Integrating by parts we get $$I_{n}=\int\limits_{-1}^{1}{(1-x^{2})^{n}dx}=\left[x(1-x^{2})^{n}\right]_{x=-1}^{x=1}-\int\limits_{-1}^{1}{-2nx^{2}}(1-x^{2})^{n-1}dx=2n\int\limits_{-1}^{1}{x^{2}}(1-x^{2})^{n-1}dx$$ Now we transform the expression inside the integral in this way $$I_{n}=2n\int\limits_{-1}^{1}{x^{2}}(1-x^{2})^{n-1}dx=2n\int\limits_{-1}^{1}{\left(1-(1-x^{2})\right)(1-x^{2})^{n-1}}dx$$ $$=2n\left[{\int\limits_{-1}^{1}{(1-x^{2})^{n-1}dx-\int\limits_{-1}^{1}{(1-x^{2})^{n}dx}}}\right]=2n(I_{n-1}-I_{n})$$ $$\rightarrow I_{n}=2n(I_{n-1}-I_{n})\rightarrow I_{n}+2nI_{n}=2nI_{n-1}\rightarrow I_{n}={{2n}\over{2n+1}}I_{n-1}$$ Using this recursive relation, we can write $$I_{n}={{2n}\over{2n+1}}{{2(n-1)}\over{2n-1}}I_{n-2}=\cdots={{2n}\over{2n+1}}{{2(n-1)}\over{2n-1}}\cdots{{2(2)}\over{5}}{{2(1)}\over{3}}I_{0}$$ where $I_{0}=2$. Thus $$I_{n}=2{{2^{n}(n!)}\over{(2n+1)(2n-1)\cdots 3\cdot 1}}=2{{2^{n}(n!)}\over{{{(2n+1)!}\over{2^{n}(n!)}}}}=2{{2^{2n}(n!)^{2}}\over{(2n+1)!}}$$
The idea is to rewrite your problem to $$B(p,q)=\int_0^1 t^{p-1}(1-t)^{q-1} dt=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$$ where $Re(p),Re(q)>0$. Exercise: Rewrite your integral to $$\int_0^1t^{-1/2}(1-t)^ndt$$ For suitable $p$ and $q$ you will obtain your answer. We want to show that $$\frac{\Gamma(\tfrac{1}{2})\Gamma(n+1)}{\Gamma(n+\frac{3}{2})}={{2^{2n+1}(n!)^{2}}\over{(2n+1)!}}$$ First we know that $\Gamma(\tfrac{1}{2})=\sqrt{\pi}$. Second we know that $\Gamma(n+1)=n\Gamma(n)$. On the other hand we know that $\Gamma(n)=(n-1)!$ so $n\cdot (n-1)!=n!$ hence $$\frac{\Gamma(\tfrac{1}{2})\Gamma(n+1)}{\Gamma(n+\frac{3}{2})}=\frac{n!\sqrt{\pi}}{\Gamma(n+\frac{3}{2})}$$ So we have to deal with $\Gamma(n+\frac{3}{2})$. Now $$\Gamma(n+\tfrac{3}{2})=(n+\tfrac{1}{2})!=(n+\tfrac{1}{2})(n-\tfrac{1}{2})!=(n+\tfrac{1}{2})\Gamma(n+\tfrac{1}{2})$$ An identity from Wiki (https://en.wikipedia.org/wiki/Gamma_function#General) tells us that $$\Gamma(n+\tfrac{1}{2})=\frac{(2n)!}{n!4^n}\sqrt{\pi}$$ Hence $$\frac{n!\sqrt{\pi}}{\Gamma(n+\frac{3}{2})}=\frac{n!\sqrt{\pi}}{(n+\frac{1}{2})\Gamma(n+\tfrac{1}{2})}=\frac{n!\sqrt{\pi}}{(n+\frac{1}{2})\frac{(2n)!}{n!4^n}\sqrt{\pi}}$$ From here it should be easy to see that $$\frac{n!\sqrt{\pi}}{(n+\frac{1}{2})\frac{(2n)!}{n!4^n}\sqrt{\pi}}=\frac{2\cdot 4^{n} (n !)^{2}}{\left(2 n +1\right)!}=\frac{2^{2n+1}(n!)^2}{(2n+1)!}$$ Notice that I used the fact that $n+\frac{1}{2}=\frac{1}{2}(2n+1)$ and $(2n+1)\cdot (2n)!=(2n+1)!$. Hope it helps.
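A quick numerical sanity check of the identity (my own addition, using only Python's standard library; `math.gamma` stands in for $\Gamma$):

```python
import math

def beta(p, q):
    """B(p, q) = Gamma(p) * Gamma(q) / Gamma(p + q)."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

for n in range(10):
    # B(1/2, n+1) is the value of the integral of t^{-1/2} (1-t)^n over [0, 1] ...
    lhs = beta(0.5, n + 1)
    # ... and it should equal the claimed closed form 2^(2n+1) (n!)^2 / (2n+1)!
    rhs = 2 ** (2 * n + 1) * math.factorial(n) ** 2 / math.factorial(2 * n + 1)
    assert math.isclose(lhs, rhs, rel_tol=1e-12), (n, lhs, rhs)
```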
{ "language": "en", "url": "https://math.stackexchange.com/questions/4584921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
An estimate for the number of integers with no small factors Given integers $x > y$, I am interested in the number of positive integers $\leq x$ and free of factors $\leq y$. In my case $x$ and $y$ will be large. For a concrete example we can take $x=2^{100}$ and $y=2^{40}$. Is there a computationally efficient method to estimate this number? I was looking at Mertens' third theorem but I can't see how to use it to make an efficient algorithm. I also found On the number of positive integers ≦ x and free of prime factors > y but that is the opposite of what I want. I would like the number of positive integers $\leq x$ and free of factors $\leq y$.
If $y \gt \sqrt[3]x$ you are just looking for primes in the range from $y$ to $x$ plus the pairs of primes greater than $y$ that multiply to less than $x$. Let $\pi(z)$ be the prime-counting function. For the primes we want $\pi(x)-\pi(y)\approx \operatorname{li}(x)-\operatorname{li}(y)$. For the pairs of primes we get an estimate by saying that each number $z$ has a $\frac 1{\ln(z)}$ chance of being prime. The number of pairs is then about $$\int_y^{\sqrt x} \frac{dt}{\ln(t)} \int _t^{\frac xt}\frac{du} {\ln u}=\int_y^{\sqrt x} \frac{dt}{\ln(t)}\left(\operatorname{li}\left(\frac xt\right) - \operatorname{li}(t)\right)$$ which is an integral with a nice smooth integrand, so it should be easy to evaluate numerically. If $y$ is between $\sqrt [4]x$ and $\sqrt[3]x$ you can do the same thing but add a triple integral for numbers with three large factors.
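For modest sizes the structural claim in the first sentence can be verified exactly. The sketch below is my own illustration (standard-library Python, toy parameters): it sieves out everything with a prime factor $\le y$ and checks that, when $y>\sqrt[3]{x}$, what survives in $(1,x]$ is exactly the primes in $(y,x]$ together with products of two primes $>y$:

```python
def sieve(limit):
    """Return all primes <= limit (simple sieve of Eratosthenes)."""
    is_p = [True] * (limit + 1)
    is_p[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if is_p[p]:
            is_p[p * p :: p] = [False] * len(is_p[p * p :: p])
    return [i for i, b in enumerate(is_p) if b]

x, y = 50_000, 40                   # toy sizes; note y > x ** (1/3) here
primes = sieve(x)

# Exact count of 1 < n <= x with no prime factor <= y.
rough = [True] * (x + 1)
for p in (q for q in primes if q <= y):
    rough[p::p] = [False] * len(rough[p::p])
exact = sum(rough[2:])

# Structural claim for y > x^(1/3): such n are primes in (y, x], or
# products p*q with y < p <= q both prime and p*q <= x.
large = [p for p in primes if p > y]
num_pairs = 0
for i, p in enumerate(large):
    if p * p > x:
        break
    for q in large[i:]:
        if p * q > x:
            break
        num_pairs += 1
assert exact == len(large) + num_pairs
```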
{ "language": "en", "url": "https://math.stackexchange.com/questions/4585082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
I am confused on how to find a function from a set that is also a 3-to-1 correspondence? I have a homework problem that asks me to find a function from the set $\{1, 2, \ldots, 30\}$ to $\{1, 2, \ldots, 10\}$ that is a $3$-to-$1$ correspondence. I am confused about how to even derive a function from these sets that is also a $3$-to-$1$ correspondence. In class we have had $k$-to-$1$ rule examples but never done $3$-to-$1$ or any $x$-to-$1$ problems, so I am confused about how to apply it to my homework. I will rewrite the problem for context: Find a function from the set $\{1, 2, \ldots, 30\}$ to $\{1, 2, \ldots, 10\}$ that is a $3$-to-$1$ correspondence. (You may find that the division, ceiling or floor operations are useful).
The 'obvious' way to do this would be to send $1,2,3$ to $1$, then $4,5,6$ to $2$ and so on, so $28,29,30$ goes to $10$. This would be $f(n) = \lceil \frac{n}{3} \rceil$. Another (possibly neater) idea would be $1,11,21 \rightarrow 1$, then $2,12,22 \rightarrow 2$ etc. This would be $f(n) = ((n-1) \bmod 10) + 1$; note that plain $n \bmod 10$ would send $10, 20, 30$ to $0$, which is not in the codomain, so the residue $0$ has to be written as $10$. In fact, there are many different possible answers - most will not have a neat closed form like these, but any partition of $\{1, \ldots, 30\}$ into $10$ groups of $3$ can be such a function, by sending the first group of $3$ to $1$, the second group to $2$ and so on.
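Both closed forms are easy to verify mechanically; the check below (my own addition, plain Python) confirms each map sends $\{1,\dots,30\}$ onto $\{1,\dots,10\}$ with every value hit exactly three times:

```python
from collections import Counter

def f1(n):
    return -(-n // 3)            # ceil(n / 3) via integer division

def f2(n):
    return (n - 1) % 10 + 1      # last digit, with residue 0 written as 10

for f in (f1, f2):
    images = Counter(f(n) for n in range(1, 31))
    assert set(images) == set(range(1, 11))      # onto {1, ..., 10}
    assert all(c == 3 for c in images.values())  # each value hit 3 times
```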
{ "language": "en", "url": "https://math.stackexchange.com/questions/4585281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does CR for positive definite matrices have the same convergence theorem as CG? The conjugate residual (CR) algorithm (specifically for symmetric positive definite matrices) differs from the conjugate gradient (CG) algorithm in a very slight way. I am using this as a reference. On page 6, the CG algorithm is described. CG uses $$\alpha_k=\frac{r_k^Tr_k}{p_k^TAp_k}$$ and $$p_{k+1}=r_{k+1}+\frac{r_{k+1}^Tr_{k+1}}{r_k^Tr_k}p_k$$ while CR uses $$\alpha_k=\frac{r_k^TAr_k}{p_k^TA^2p_k}$$ and $$p_{k+1}=r_{k+1}+\frac{r_{k+1}^TAr_{k+1}}{r_k^TAr_k}p_k.$$ Clearly these algorithms are almost identical despite the slight change of how we pick a step size (this is related to which norm is minimized: CG minimizes the $A$-norm of the error, while CR minimizes the $2$-norm of the residual). On page 8 Theorem 4.4 describes the convergence bound for the CG algorithm. The proof of this theorem, on page 9, uses Chebyshev polynomials to find an upper bound for the error of residuals. My question is, should CR have the exact same upper bound as given in the convergence theorem for CG since this bound only relied on the Chebyshev polynomials which would be the same for both CR and CG?
In exact arithmetic, conjugate residual and MINRES both minimize the 2-norm of the residual over the Krylov subspace; see for instance Theorem 2 in the original CR paper. Thus, bounds similar to the one you quote for CG can be derived using Chebyshev polynomials. However, you need to be careful which norm you are working in, as the bounds will not apply to the $A$-norm of the error, but rather to the 2-norm of the residual.
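To make the comparison concrete, here is a from-scratch pure-Python sketch of the two recurrences (textbook CG and CR updates in their standard form, not copied from the linked notes); on a $3\times 3$ SPD system both terminate, up to roundoff, within $3$ steps:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def axpy(a, x, y):                       # returns a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def solve(A, b, variant, iters):
    """Textbook CG ('cg') or CR ('cr') for symmetric positive definite A."""
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    for _ in range(iters):
        Ap = matvec(A, p)
        if variant == "cg":
            num = dot(r, r)              # r^T r
            alpha = num / dot(p, Ap)     # / p^T A p
        else:
            num = dot(r, matvec(A, r))   # r^T A r
            alpha = num / dot(Ap, Ap)    # / p^T A^2 p  (A symmetric)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        new = dot(r, r) if variant == "cg" else dot(r, matvec(A, r))
        p = axpy(new / num, p, r)        # p <- r + beta * p
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
for variant in ("cg", "cr"):
    x = solve(A, b, variant, iters=3)
    res = axpy(-1.0, matvec(A, x), b)    # b - A x
    assert dot(res, res) < 1e-18, (variant, res)
```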
{ "language": "en", "url": "https://math.stackexchange.com/questions/4585732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving for travel time given the Nth derivative of a particle's position in terms of distance traveled Suppose a particle is traveling along a path in $\mathbb{R}$, with (nondecreasing) position function $s(t)$. The $n^\text{th}$ derivative of $s(t)$ is known as a function not of time, but of its current position, i.e. $s^{(n)}(t) = f(s(t))$ for some $f$. How long will it take for the particle to travel a given length? That is, how to solve for $s^{-1}(x)$? For $n=1$, by the inverse function rule, we see $s^{-1}(x)$ has derivative $\frac{1}{f(x)}$, and therefore $s^{-1}(x) = s^{-1}(0) + \int_0^x \frac{1}{f(u)} \,du$. For $n=2$, we can use the work-energy theorem to deduce the velocity at position $x = s(t)$: $$s'(x) = \sqrt{s'(0)^2 + 2 \int_0^x f(u) \,du}$$ and if we know the velocity at every location during its path, it reduces to the case $n=1$ above. For $n=3$ and above, is there a general method?
I'll write $y$ instead of $s$ and $x$ instead of $t$. You wonder, when can the solution of $$\tag{1} y^{(n)}(x)=f(y) $$ be reduced to quadratures? (As you have done for $n=1,\ 2$ in the OP.) For $n=2$ you used a conserved quantity (energy) to reduce the order of the equation. The 'general (theoretical) procedure' is to find as many conserved quantities as possible, and use each to reduce the order by one. For $n\geq 3$ only one conserved quantity is known, and that's only when $n$ is even, so the short answer is: there is no complete general procedure of the type you describe for $n\geq 3$. Looking in Polyanin and Zaitsev: Handbook of exact solutions to ODEs, eq. $5.2.6.8$, we find that $$\tag{2} C=\int dy \ f(y)+\sum\limits_{k=1}^{m-1}(-1)^ky^{(k)}y^{(2m-k)}+\frac{(-1)^m}{2}\left[y^{(m)}\right]^2 $$ is a first integral of (1) when $n=2m$. Equation (2) is an autonomous ODE of order $n-1$, which can be reduced to a non-autonomous order $n-2$ ODE by the usual substitution $y'(x)=v(y)$. Another 'trick' is to recast (1) as $n$ coupled first order equations. Define $y^{(k)}=y_k$ for $k=0,1, \dots n-1$, then we have the system $$\tag{3} y_0'=y_1 \\ y_1'=y_2 \\ \cdots \\ y_{n-1}'=f(y_0) $$ A formal solution to (3) may be written using an exponential operator $$\tag{4} y_k(x)=\exp\left(x y_1 \frac{\partial}{\partial y_0} +x y_2 \frac{\partial}{\partial y_1}+\cdots +xf(y_0)\frac{\partial}{\partial y_{n-1}}\right) \ y_k(0) $$ where the exponential of an operator $A$ is defined using the power series $$\tag{5} \exp(A) =\sum\limits_{j=0}^\infty \frac{A^j}{j!} $$ See chapter 5, CJ Papachristou: Aspects of integrability for details. By truncating the exponential power series, eq. (4) furnishes a systematic method of generating approximate solutions to (3) in powers of $x$ (i.e. as $x\to 0$). A concise discussion of (1), its Lagrangian and Hamiltonian, and the problem with $n\geq 3$ may be found in this article by G Lopez.
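For $n=1$ the quadrature in the OP is easy to test numerically. The sketch below is my own illustration with an arbitrarily chosen law $f(s)=1+s^2$ (so $s(t)=\tan t$ with $s(0)=0$, and the travel time to position $x$ is $\arctan x$); the integral $\int_0^x du/f(u)$ is evaluated with a composite Simpson rule:

```python
import math

def f(s):                          # assumed speed law: s'(t) = f(s) = 1 + s^2
    return 1.0 + s * s

def travel_time(x, steps=10_000):
    """t(x) = integral_0^x du / f(u), composite Simpson's rule (steps even)."""
    h = x / steps
    total = 1.0 / f(0.0) + 1.0 / f(x)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) / f(i * h)
    return total * h / 3.0

# With f(s) = 1 + s^2 the motion is s(t) = tan(t), so t(x) = arctan(x).
for x in (0.5, 1.0, 3.0):
    assert math.isclose(travel_time(x), math.atan(x), rel_tol=1e-9)
```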
{ "language": "en", "url": "https://math.stackexchange.com/questions/4585941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the following factor group: $\mathbb{C}^* / \mathbb{R}_{>0}$ Using the first isomorphism theorem, determine the following factor group: $\mathbb{C}^* / P$ with $P = \mathbb{R}_{>0}$ How to find the factor group: Find a surjective group homomorphism from the respective group whose kernel is the respective subgroup: $\phi : \mathbb{C}^* \rightarrow G $ with ker$(\phi) = \{x \in \mathbb{C}^*, \phi(x) = $ neutral element of G$ \} = P = \mathbb{R}_{>0} $ After that use the isomorphism theorem to describe the respective factor group down to isomorphism. My problem here is that I can't find a suitable group G so that exactly the kernel is equal to P. Which group G would fit here and what would be the corresponding mapping rule and the neutral element of G?
The obvious answer is the circle group $\Bbb T$, denoted that way because it is a $1$-torus. $\Bbb T$ is a group under the operation of complex multiplication, with the role of the identity played by $z=1$. The homomorphism is $\Bbb C^*\ni z\mapsto \dfrac z{\lvert z\rvert}\in\Bbb T$, and its kernel is $P$. This is an important example.
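A quick numeric illustration (my own addition; Python's built-in `complex` type): the map $z\mapsto z/|z|$ is multiplicative, lands on the unit circle, and fixes exactly the positive reals:

```python
import math
import random

def phi(z):                      # the homomorphism C* -> T
    return z / abs(z)

random.seed(1)
for _ in range(100):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(phi(z * w) - phi(z) * phi(w)) < 1e-9   # homomorphism property
    assert math.isclose(abs(phi(z)), 1.0)             # image lies on T

assert phi(3.7) == 1.0           # positive reals are in the kernel
assert phi(-3.7) == -1.0         # negative reals are not
```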
{ "language": "en", "url": "https://math.stackexchange.com/questions/4586389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what are accumulation points in easy terms? and how do we use them? I've read the definition and have seen videos where they all say the same thing. We have a $\epsilon$ neighborhood around $l$ and if it contains a point of a set $A$ other than $l$, then we call $l$ an accumulation point of $A$. But what's the point of this? why do we need accumulation points? and is my description correct?
Your description is not correct. You say: "We have an epsilon neighborhood". The definition says: "For every epsilon neighborhood". The answer to "Why do we need them" is not easily summarized here. Suffice it to say, this is one way to think about convergence, closeness, separation, and related ideas. Its value will become apparent as you continue to study topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4586557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solving Kronecker product equation I am trying to solve for $\mathbf{X}$ the following equation $\mathbf{A}\otimes \mathbf{X}=\mathbf{B}$ Is there a closed form solution to this? Thank you very much in advance!
Assume the following dimensions for the variables $$\eqalign{ \def\sz{\operatorname{size}} m,n=\sz(A) \qquad p,q=\sz(X) \qquad mp,nq=\sz(B) \\ }$$ and let $K_{m,n}$ denote the Commutation Matrix which satisfies $$\eqalign{ \def\vc{\operatorname{vec}} \vc(A^T) &= K_{m,n}\vc(A) \\ mn,mn &= \sz(K_{m,n}) }$$ Then the Kronecker product term can be vectorized $$\eqalign{ \def\LR#1{\left(#1\right)} \def\BR#1{\Big(#1\Big)} a &= \vc(A), \qquad x = \vc(X), \qquad b = \vc(B) \\ b &= \vc(A\otimes X) \\ &= \LR{I_n\otimes K_{q,m}\otimes I_p}\LR{a\otimes I_{pq}}x \\ b &= Rx \\ }$$ Since $R$ is rectangular $\big(\sz(R)=mnpq,pq\big)$ the solution requires the pseudoinverse $R^+$ $$\eqalign{ R^+ &= \LR{\frac{a}{\|a\|^2}\otimes I_{pq}}^T\BR{I_n\otimes K_{m,q}\otimes I_p} \\ x &= R^+b \\ X &= \operatorname{Reshape}(x, p,q) \\ }$$ The Reshape() operation can be expressed using even more Kronecker products. Update A computationally and conceptually simpler approach is to * *Partition $B$ into blocks of size $p\times q$ *Locate any non-zero component of $A,\:$ e.g. $\,A_{ij}$ *Note that the $(i,j)^{th}$ block of $B$ must equal $A_{ij}X$ Therefore $$ \def\m#1{\begin{bmatrix}#1\end{bmatrix}} \def\R{{\mathbb R}} X=\frac{{\large\tt[}B{\large\tt]}_{ij}}{A_{ij}} \;=\; \frac{(e_i\otimes I_p)^TB\,(f_j\otimes I_q)}{e_i^TAf_j} $$ where $\big\{e_i\in\R^m,\;f_j\in\R^n\big\}$ are the standard basis vectors for their respective dimensions.
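The 'Update' recipe is one line of index arithmetic once $B$ is partitioned into $p\times q$ blocks; here is a small self-contained sketch (my own illustration, matrices stored as lists of rows):

```python
def kron(A, B):
    """Kronecker product of dense matrices stored as lists of rows."""
    return [
        [a * b for a in Arow for b in Brow]
        for Arow in A
        for Brow in B
    ]

def recover_X(A, B, p, q):
    """Given B = A kron X with X of size p x q, read X off one block of B."""
    # locate any nonzero entry A[i][j]; the (i, j) block of B equals A[i][j] * X
    i, j = next((i, j) for i, row in enumerate(A) for j, a in enumerate(row) if a != 0)
    return [
        [B[i * p + r][j * q + c] / A[i][j] for c in range(q)]
        for r in range(p)
    ]

A = [[0, 2], [3, -1]]
X = [[1.0, 4.0], [-2.0, 0.5]]
B = kron(A, X)
assert recover_X(A, B, 2, 2) == X
```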
{ "language": "en", "url": "https://math.stackexchange.com/questions/4586801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $2^\frac{2x-1}{x-1}+2^\frac{3x-2}{x-1}=24$, find all values of $x$ that satisfy this As title suggests, the problem is as follows: Given that $$2^\frac{2x-1}{x-1}+2^\frac{3x-2}{x-1}=24$$ find all values of $x$ that satisfy this. This question was shared in an Instagram post a few months ago that I came across today. Examining it at first, it seems there are many ways to solve this. I'll show my own approach here, please let me know if there are any issues in my solution and please share your own solution too! Here's my approach for the problem: Let $a=2^\frac{2x-1}{x-1}$ and $b=2^\frac{3x-2}{x-1}$ We then get $a+b=24$. Now notice that: $$(3x-2)-(2x-1)=x-1$$ That gives us a motivation to perform division with $a$ and $b$ (as the denominator and numerator of the exponent will be equal, hence reducing the exponent) thus: $$\frac{b}{a}=\frac{2^\frac{3x-2}{x-1}}{2^\frac{2x-1}{x-1}}$$ $$\frac{b}{a}=2^\frac{x-1}{x-1}=2$$ $$b=2a$$ Therefore: $$2a+a=24$$ $$3a=24$$ $$a=8$$ $$2^\frac{2x-1}{x-1}=8$$ $$2^\frac{2x-1}{x-1}=2^3$$ $$\frac{2x-1}{x-1}=3$$ $$2x-1=3x-3$$ Thus, $x=2$
Here's my resolution :) $$ 2^\frac{2x-1}{x-1} + 2^\frac{3x-2}{x-1} = 24 \\ \left(2^\frac{1}{x-1}\right)^{2x} \ \left(2^\frac{1}{x-1}\right)^{-1} + \left(2^\frac{1}{x-1}\right)^{3x} \ \left(2^\frac{1}{x-1}\right)^{-2} = 24 \\ 2^\frac{2x-1}{x-1} \ \left[ 1 + 2^\frac{x-1}{x-1} \right] = 24\\ 2^\frac{2x-1}{x-1} \cdot 3 = 24\\ 2^\frac{2x-1}{x-1} = 8 \\ \frac{2x-1}{x-1} \cdot \log_22= \log_2 8\\ 2x-1=3(x-1)\\ x=2 $$
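A one-line check (my own addition) that $x=2$ satisfies the original equation, the exponents being $3$ and $4$ so that $8+16=24$:

```python
def lhs(x):
    return 2 ** ((2 * x - 1) / (x - 1)) + 2 ** ((3 * x - 2) / (x - 1))

assert lhs(2) == 24                      # 2^3 + 2^4
assert lhs(3) != 24 and lhs(4) != 24     # nearby integers do not work
```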
{ "language": "en", "url": "https://math.stackexchange.com/questions/4587013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Sequence does not converge weakly in $\ell^\infty$ Let $k\in\mathbb{N}$ and define $x^k\in\ell^\infty$ by $$ x_n^k:=\begin{cases}1, & n\leqslant k\\0, & \textrm{otherwise}\end{cases}. $$ I am searching for a proof that $(x_n^k)_{k\in\mathbb{N}}$ does not converge weakly in $\ell^\infty$. I know that there (and also here) is a similar result (using Banach limits) for the sequence where the first $n$ entries are $0$'s and $1$'s otherwise. But here, it's vice versa and I am not sure how to perform the proof. Edit I wonder if the following is a valid argument (without referring to the given link at all): We have $\ell^\infty=(\ell^1)'$ and an isomorphism is given by $T(x)(y)=\sum_{i=1}^\infty x_iy_i$ for $x=(x_i)\in\ell^\infty$ and $y=(y_i)\in\ell^1$. Thus, for the given sequence $(x_n^k)_k$, $$ T(x_n^k)(y)=\sum_{n=1}^k y_n\to a< \infty~\quad\textrm{as }k\to\infty $$ for $y=(y_i)\in\ell^1$. But, as far as I see, $a$ is not unique here, it depends on $y\in\ell^1$, doesn't it? Thus, $(x_n^k)_k$ cannot converge weakly in $\ell^\infty$ as there is no (unique) limit with respect to the weak* topology. In other words, it does not converge in the weak* topology; however, if it converged weakly this would imply weak* convergence.
Let $y=(1,1,1,\ldots)$ and let $y^k=y-x^k$. This flips the zeros and ones. Then $(x^k)_k$ is weakly convergent if and only if $(y^k)_k$ is. This will turn your question into the linked question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4587208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$b_n \to +\infty$, $b_n > 0$, $n \to \infty$. Prove $a_n$ converges to $0$ if $a_n b_n$ is bounded. $b_n \to +\infty$, $b_n > 0$, $n \to \infty$. Prove $a_n$ converges to $0$ if $a_n b_n$ is bounded. There are a lot of questions like "$a_n$ is convergent, $b_n$ is bounded. Prove $a_n b_n$ is convergent/bounded/evaluate lim etc". So my question is, I guess, pretty basic, but a little bit different (at least I hope so). Just to clarify, the question is why $a_n b_n$ being bounded, while $b_n \to +\infty$, implies ($\implies$) that $a_n \to 0$. Intuitive thinking is the following: if $b_n \to +\infty$, then $a_n$ must be decreasing (at least as fast as $b_n$ grows). So, for example, $a_n = \frac{k}{b_n}$, $k$ -- some fixed number. Hence, $a_n \to 0$, $\lim_{n \to \infty} \frac{k \times b_n}{ b_n} = k$. Unfortunately, it can't be counted as proof. Also I'm not so good at $\epsilon$-notation. But, my guess, we must somehow express $a_n$ through $b_n$. So I've got the idea "what to do" without the knowledge "how to do it".
$b_n \to +\infty$ means that for every $A>0$ there is $N \in \mathbb{N}$ so that $b_n>A$ whenever $n \geq N$. $\{a_nb_n\}$ bounded means that there is $L>0$ so that $|a_nb_n| \leq L$ for every $n \in \mathbb{N}$. Let $\varepsilon>0$ be given. Since $\{a_nb_n\}$ is bounded, there is $L>0$ so that $|a_nb_n| \leq L$ for every $n \in \mathbb{N}$. Set $A=\max\{L/\varepsilon,L\}$. Since $A>0$, there is $N \in \mathbb{N}$ so that $b_n>A$ whenever $n \geq N$. We combine to conclude \begin{align*} |a_n| \leq |a_n|\frac{b_n}{A}=\frac{1}{A}|a_nb_n| \leq \frac{L}{A} \leq \varepsilon \text{ whenever } n \geq N. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4587870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that the analytic mean value of an arithmetic function equals the logarithmic mean value Let f be an arithmetic function and let $F(s)$ be its Dirichlet series $\sum_{n=1}^{\infty}f(n)n^{-s}$. We say f has an analytic mean value A if $F(s)=\frac{A}{s-1}+o(\frac{1}{s-1})$ as $s\rightarrow 1^+$. My question is, how would we show that the existence of the logarithmic mean value for f implies the existence of the analytic mean value, i.e. if the following limit exists: $\lim{x\to\infty}\frac{1}{logx}\sum_{n\leq x}\frac{f(n)}{n}$ then the analytic mean value exists, and the two values are equal? So far I have tried using summation by parts to get the following: $\sum_{n\leq x}f(n)n^{-s}=x^{1-s}\sum_{n\leq x}\frac{f(n)}{n}+\int_{1}^{x}\frac{(s-1)\sum_{n\leq t}\frac{f(n)}{n}}{t^{s-2}}dt$ This appears to get us part of the way there, since there is an $f(n)/n$ sum on the right hand side which looks similar to the logarithmic mean value (minus the log), but I'm not sure where to go from there. Any help is appreciated!
For convenience, let $$ B(x)=\sum_{n\le x}{f(n)\over n}=A\log x+o(\log x). $$ Then for every $u>0$ we have $$ F_T(1+u)=\sum_{n\le T}{f(n)\over n^{1+u}}=\int_{1^-}^T{\mathrm dB(t)\over t^u}={B(T)\over T^u}+u\int_1^T{B(t)\over t^{u+1}}\mathrm dt. $$ Since $B(T)=O(T)=o(T^u)$, we see that for all $u>0$, as $T\to+\infty$, $$ F(1+u)=u\int_1^\infty{B(t)\over t^{u+1}}\mathrm dt=Au\underbrace{\int_1^\infty{\log t\over t^{u+1}}\mathrm dt}_{I_1}+u\underbrace{\int_1^\infty{B(t)-A\log t\over t^{u+1}}\mathrm dt}_{I_2} $$ For $I_1$, substitution gives $$ I_1=\int_0^\infty ye^{-uy}\mathrm dy={1\over u^2}. $$ For $I_2$, by the conditions for $B(x)$ we know that $$ \forall\varepsilon>0,\exists X>0,(t>X\Rightarrow|B(t)-A\log t|<\frac12\varepsilon\log t) $$ and $$ \exists M>0,(1\le t\le X\Rightarrow|B(t)-A\log t|\le M\log t), $$ so we see that $I_2$ is bounded by \begin{aligned} |I_2| &<M\int_1^X{\log t\over t^{u+1}}\mathrm dt+\frac12\varepsilon \int_X^\infty{\log t\over t^{u+1}}\mathrm dt \\ &\le M\int_1^X{\log t\over t}\mathrm dt+\frac12\varepsilon \int_1^\infty{\log t\over t^{u+1}}\mathrm dt \\ &=\frac12 M(\log X)^2+{\varepsilon\over2u^2}. \end{aligned} Now, for $0<u<\sqrt{\varepsilon/(M(\log X)^2)}$ we get $u^2|I_2|<\varepsilon$, so that $uI_2=o(1/u)$ as $u\to0^+$. Combining all these results, we see that $$ F(s)=\sum_{n\ge1}{f(n)\over n^s}={A\over s-1}+o\left(1\over s-1\right). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4588063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that linear combination of self adjoint maps is also self adjoint. I want to show that if $V$ is an inner product space and $S,T\in \mathcal{L}(V)$ are self-adjoint linear maps, then $aS+bT$ is a self-adjoint linear map for all $a,b\in \mathbb{R}$. From what I tried, I feel like I need to play with the properties of the inner product in order to show that $aS$ and $bT$ are self adjoint, but I am not sure how to proceed; any help is appreciated.
An operator $T:V\rightarrow V$ is self adjoint if $$\left<Tu,v\right> = \left<u,Tv\right>$$ for all $u,v\in V$. Fix $u,v\in V$ we want to prove that if $T$ and $S$ are self adjoint, then $$\left<(aT+bS)u,v\right> = \left<u,(aT+bS)v\right>$$ Indeed, by the bi-linearity of the inner product $\left<(aT+bS)u,v\right> = \left<aTu + bSu,v\right> = \left<aTu,v\right>+\left<bSu,v\right>=...=\left<u,(aT+bS)v\right>$ can you complete the "..." on your own?
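In the real finite-dimensional case self-adjoint operators are symmetric matrices, so the identity can also be spot-checked numerically (my own illustration, standard-library Python):

```python
import random

random.seed(0)

def random_symmetric(n):
    M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def apply_mat(M, v):
    return [inner(row, v) for row in M]

n, a, b = 4, 2.5, -1.3
S, T = random_symmetric(n), random_symmetric(n)
L = [[a * S[i][j] + b * T[i][j] for j in range(n)] for i in range(n)]

for _ in range(20):
    u = [random.uniform(-1, 1) for _ in range(n)]
    v = [random.uniform(-1, 1) for _ in range(n)]
    # <(aS + bT)u, v> = <u, (aS + bT)v>
    assert abs(inner(apply_mat(L, u), v) - inner(u, apply_mat(L, v))) < 1e-12
```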
{ "language": "en", "url": "https://math.stackexchange.com/questions/4588209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the stationary distribution of this Markov chain? A Markov chain is shown in the figure above. I am writing the transition matrix as: $$P = \begin{bmatrix} 0.6 & 0.4 & 0 & 0\\ 0 & 0 & 0.5 & 0.5\\ 0 & 0 & 0.25 & 0.75\\ 0.7 & 0.3 & 0 & 0 \end{bmatrix}$$ I am using the given formula to find the stationary distribution: $$AP=A$$ where $A=[w_1 \; w_2\; w_3\; w_4]$ is the stationary distribution. The answer for the stationary distribution is given as: $$w_1=\frac{4}{12} \quad w_2=\frac{3}{12} \quad w_3=\frac{3}{12} \quad w_4=\frac{2}{12}$$ but I can easily check that this does not satisfy the formula $$AP=A$$ Where am I going wrong?
You should be able to get the exact answers. In the steady state, one more iteration won't make any difference, so $\displaylines{(3/5)w_1 + (7/10)w_4 = w_1,\\(2/5)w_1+ (3/10)w_4= w_2,\\(1/2)w_2+ (1/4)w_3 = w_3,\\ (1/2)w_2+(3/4)w_3 = w_4\\w_1+w_2+w_3+w_4=1}$ Solving the above system of linear equations, I get $w_1 = 21/53, w_2 = 12/53, w_3 = 8/53, w_4 = 12/53$. In order to get the exact answer, you need to use fractions rather than decimals.
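With exact rational arithmetic the stationarity equations can be checked directly, and floating-point power iteration converges to the same vector (my own verification sketch):

```python
from fractions import Fraction as F

P = [
    [F(3, 5), F(2, 5), F(0), F(0)],
    [F(0), F(0), F(1, 2), F(1, 2)],
    [F(0), F(0), F(1, 4), F(3, 4)],
    [F(7, 10), F(3, 10), F(0), F(0)],
]

def step(w, M):                  # one step of the row-vector iteration w -> wM
    return [sum(w[i] * M[i][j] for i in range(4)) for j in range(4)]

w = [F(21, 53), F(12, 53), F(8, 53), F(12, 53)]
assert step(w, P) == w           # exact stationarity: wP = w
assert sum(w) == 1

Pf = [[float(x) for x in row] for row in P]
v = [0.25] * 4
for _ in range(500):             # power iteration from the uniform start
    v = step(v, Pf)
assert max(abs(v[j] - float(w[j])) for j in range(4)) < 1e-9
```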
{ "language": "en", "url": "https://math.stackexchange.com/questions/4588661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In a composition of two inversions, characterize straight lines that remain straight lines Let $S_1$ and $S_2$ be the inversions in the circles $x^2+y^2 = 16$ and $(x−8)^2+y^2 = 1$ respectively. Consider the composition $W$ of these inversions, $W = S_2 \circ S_1$. Which lines under the transformation $W$ will become lines? From my knowledge, I know that lines are inverted to lines when they pass through the center of the circle of inversion. So for the line to remain a line under inversion does the non-inverted line have to pass through the centers of both the circles above? Is that the only way it remains a line?
Circles and lines mapped to lines by circle inversion are exactly those which pass through the center of inversion; the center is the only point mapped to infinity. However, inversion is a bijection so there is exactly one point that $S_1$ maps to the center of $S_2$. It is the point on the line between the centers of $S_1$ and $S_2$ a distance $d$ from the center of $S_1$ such that $$ 8d = 16 \implies d = 2 $$ since the center of $S_2$ is 8 units away from the center of $S_1$ and 16 is the squared radius of $S_1$. This point has coordinates $(2, 0)$. The set of all lines which map to lines under $S_2\circ S_1$ is exactly all lines which pass through this point. Note that the radius of $S_2$ is irrelevant.
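This can be checked numerically with Python's `complex` type (my own illustration): points on the line $x=2$, which passes through $(2,0)$, have collinear images under $W=S_2\circ S_1$, while points on the nearby line $x=3$ do not:

```python
def invert(z, center, r2):
    """Circle inversion with the given center and squared radius r2."""
    w = z - center
    return center + r2 * w / (abs(w) ** 2)

def W(z):
    """S2 after S1: invert in x^2 + y^2 = 16, then in (x-8)^2 + y^2 = 1."""
    return invert(invert(z, 0j, 16.0), 8 + 0j, 1.0)

def collinear(a, b, c, tol=1e-9):
    """True if the three points (as complex numbers) lie on one line."""
    return abs(((b - a).conjugate() * (c - a)).imag) < tol

# The line x = 2 passes through (2, 0); its images stay collinear.
imgs = [W(complex(2, t)) for t in (1, 2, 3, -1.5)]
assert collinear(imgs[0], imgs[1], imgs[2])
assert collinear(imgs[0], imgs[1], imgs[3])

# The line x = 3 misses (2, 0); its image is bent.
imgs2 = [W(complex(3, t)) for t in (1, 2, 3)]
assert not collinear(imgs2[0], imgs2[1], imgs2[2])
```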
{ "language": "en", "url": "https://math.stackexchange.com/questions/4588846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can a stochastic process with independent random variables be a stochastic process with independent increments? We know that if a stochastic process is a stochastic process with independent increments, we must have: For any natural number $n$ and all real $0≤α_1<β_1≤α_2<β_2≤\cdots ≤α_n<β_n$, the increments $X(β_1)−X(α_1),\cdots,X(β_n)−X(α_n)$ are mutually-independent random variables. So my question is: Can a stochastic process with independent random variables be a stochastic process with independent increments?
I will answer the question when the index set is $\mathbb N$ but the idea of the proof works for other index sets like $[0,\infty)$ too. If $(X_n)$ is independent and has independent increments then $X_{n+1}-X_{n-1}=[X_{n+1}-X_{n}]+[X_{n}-X_{n-1}]$, a sum of two independent r.v.'s. Denoting by $h_n$ the characteristic function of $X_n$ we get $$h_{n+1}(t)h_{n-1}(-t)=[h_{n+1}(t)h_{n}(-t)][h_{n}(t)h_{n-1}(-t)].$$ This implies that $|h_n(t)|\equiv 1$ in a neighborhood of $0$. Hence, $X_n$ is a.s. constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4588997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Formal Powerseries: Compute the reciprocal of $1-A(z)$ for $a_0 \ne 0$ Let $A(z)$ be a formal powerseries with $a_0=0$. Show that the reciprocal of $1-A(z)$ is given by $$B(z) := \sum_{n=0}^\infty A(z)^n = 1 + A(z)^1 + A(z)^2 + \ldots$$ I realise that when we rewrite $C(z) := 1-A(z)$, then the reciprocal of $C$ has to exist since $c_0 \ne 0$. If we let $D(z) := C(z)B(z)$ we are done if we can show that $d_0 = 1$ and $d_n = 0$ for all $n > 0$. We can readily see that $b_0, c_0 = 1$, so $d_0 = 1$. So we are left to compute the remaining $d_n$ for $n > 0$ via the formula $$d_n = \sum_{k=0}^n c_k b_{n-k}.$$ It is clear that $c_0 = 1$ and $c_n = -a_n$ for $n > 1$, but I do not see how to find a formula for the $b_n$ with $n > 1$. Could you please help me?
We consider $\mathbb{C}[[z]]$, the ring of formal power series with coefficients in $\mathbb{C}$ and show for $A(z)\in\mathbb{C}[[z]]$ with $a_0=[z^0]A(z)=0$: \begin{align*} \color{blue}{\left(1-A(z)\right)^{-1}=\sum_{n=0}A^n(z)}\tag{1} \end{align*} We can show (1) essentially in two steps. Step 1: At first we claim \begin{align*} \color{blue}{\left(1-z\right)^{-1}=\sum_{n=0}^{\infty}z^n}\tag{2} \end{align*} In order to show (2) we calculate \begin{align*} \color{blue}{(1-z)\sum_{n=0}^{\infty}z^n} =1+(1-1)z+(1-1)z^2+\cdots\color{blue}{=1}\tag{3} \end{align*} and the claim (2) follows. In this proof we use two facts about formal power series * *The calculation of the coefficients of the product of two formal power series is done using the Cauchy product in the same way as for ordinary generating functions. \begin{align*} A(z)B(z)=\left(\sum_{k=0}^{\infty}a_kz^k\right)\left(\sum_{l=0}^{\infty}b_lz^l\right) =\sum_{n=0}^{\infty}\left(\sum_{k=0}^n a_kb_{n-k}\right) z^n \end{align*} In (3) we have $A(z)=1-z$ and $B(z)=\sum_{n=0}^{\infty}z^n$. *Two formal power series are equal iff their coefficients are equal for all $n\geq 0$. \begin{align*} A(z)=B(z)\qquad\longleftrightarrow\qquad [z^n]A(z)=[z^n]B(z)\quad n\geq 0 \end{align*} This is used in (3) with $A(z)=(1-z)\sum_{n=0}^{\infty}z^n$ and $B(z)=1$. Intermezzo: We want to use (3) by substituting $z$ with a formal power series $D(z)$. To guarantee that this kind of substitution is valid we have to assure that always a finite number of operations is involved when doing the calculation. \begin{align*} A(z)=\sum_{n=0}^{\infty}a_nz^n\qquad\longrightarrow\qquad A(D(z))=\sum_{n=0}^{\infty}a_nD^n(z)\tag{4} \end{align*} This finite number of operations does not refer to the countably infinite number of terms in $A(D(z))$ we usually have, but to the number of terms we have to consider when calculating a coefficient $[z^k]A(D(z))$. 
In fact this is assured iff \begin{align*} \color{blue}{[z^0]D(z)=0}\tag{5} \end{align*} because then we have \begin{align*} [z^n]A(D(z))&=[z^n]\sum_{k=0}^{\infty}a_kD(z)^k\\ &=[z^n]\sum_{k=0}^{\infty}a_k\left(d_1z+d_2z^2+\cdots\right)^k\\ &=\sum_{k=0}^n[z^n]\left(d_1z+d_2z^2+\cdots\right)^k\\ \end{align*} Since the coefficient $[z^0]D(z)=0$ the smallest power of $z$ in $D(z)^k$ is greater or equal to $k$ and we can restrict the calculation of $[z^n]$ in $A(D(z))$ to the finite number of the first $n+1$ summands of the formal power series. * *A family of series is called locally finite if for each coefficient $[z^n]$ the number of series in this family with non-zero coefficient of $[z^n]$ is finite. We observe if (5) is given, the family \begin{align*} \{D^n(z): n\geq 0\} \end{align*} is locally finite. It turns out that a substitution (4) with a formal power series $D(z)$ is valid whenever the family $\{D^n(z): n\geq 0\}$ is locally finite. We are now well prepared for the second step. Second step: Let $A(z), B(z), C(z)$ be formal power series in $\mathbb{C}[[z]]$. Let $D(z)\in\mathbb{C}[[z]]$ with $[z^0]D(z)=0$. The following is valid \begin{align*} \color{blue}{A(z)B(z)=C(z)\qquad \longrightarrow \qquad A(D(z))B(D(z))=C(D(z))}\tag{6} \end{align*} In order to prove the equality of the right-hand side in (6) we have to show that $[z^n]A(D(z))B(D(z))=[z^n]C(D(z))$ for all $n\geq 0$. 
We obtain \begin{align*} \color{blue}{[z^n]}&\color{blue}{A(D(z))B(D(z))}\\ &=\sum_{k=0}^n[z^k]A(D(z))[z^{n-k}]B(D(z))\\ &=\sum_{k=0}^n\left(\sum_{q=0}^na_q[z^k]D^q(z)\right)\left(\sum_{r=0}^nb_r[z^{n-k}]D^r(z)\right)\\ &=\sum_{q=0}^n\sum_{r=0}^na_qb_r\sum_{k=0}^n[z^k]D^q(z)[z^{n-k}]D^{r}(z)\\ &=\sum_{q=0}^n\sum_{r=0}^na_qb_r[z^n]D^{q+r}(z)\tag{7}\\ &=\sum_{{0\leq q+r\leq n}\atop{q,r\geq 0}}a_qb_r[z^n]D^{q+r}(z)\\ &=\sum_{m=0}^{n}\sum_{q=0}^{m}a_qb_{m-q}[z^n]D^{m}(z)\\ &=[z^n]\sum_{m=0}^{\infty}\sum_{q=0}^{m}a_qb_{m-q}D^{m}(z)\\ &\;\color{blue}{=[z^n]C(D(z))} \end{align*} and the claim (6) follows. Comment: In (7) we observe that $[z^n]D^{q+r}=0$ if $q+r>n$, so that we can restrict the index region in the next line. Looking at OPs stated problem and noting that $[z^0]A(z)=0$, so that $\{A^n(z): n\geq 0\}$ is a locally finite family we can apply (6) and obtain from (3) by substituting $z$ with $A(z)$: \begin{align*} \color{blue}{\left(1-A(z)\right)\sum_{n=0}^{\infty}A^n(z)=1} \end{align*} where we set in (6) $A(z):=1-z, B(z):=\sum_{n=0}^{\infty}z^n, C(z):=1$ and $D(z):= A(z)$. Note: This approach is nicely elaborated in chapter 7 of Discrete Calculus: Methods for Counting by C. Mariconda and A. Tonolo.
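The whole argument can be exercised mechanically with truncated coefficient lists. The sketch below (my own addition, exact rational coefficients) builds $\sum_{n\ge0}A^n(z)$ for a series with $[z^0]A(z)=0$ and confirms that multiplying by $1-A(z)$ returns $1$ up to the truncation order:

```python
from fractions import Fraction

N = 12                               # truncation order: keep coefficients of z^0 .. z^N

def mul(A, B):
    """Cauchy product of truncated series given as coefficient lists."""
    return [sum(A[k] * B[n - k] for k in range(n + 1)) for n in range(N + 1)]

def geometric(A):
    """sum_{n >= 0} A(z)^n, truncated; needs A[0] == 0 (locally finite family)."""
    assert A[0] == 0
    total = [0] * (N + 1)
    power = [1] + [0] * N            # A^0 = 1
    for _ in range(N + 1):           # A^n contributes nothing to z^0..z^N once n > N
        total = [t + c for t, c in zip(total, power)]
        power = mul(power, A)
    return total

A = [0, Fraction(1), Fraction(-2), Fraction(3, 5), Fraction(7)] + [0] * (N - 4)
B = geometric(A)
one_minus_A = [1 - A[0]] + [-c for c in A[1:]]
assert mul(one_minus_A, B) == [1] + [0] * N      # (1 - A(z)) * sum A^n(z) = 1
```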
{ "language": "en", "url": "https://math.stackexchange.com/questions/4589281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What happens when you find the derivative of $f(x) =|x^2-1|$. Let $f(x) =|x^2-1|$. I'm trying to see if this function has a point of inflection and even though looking at the graph tells me the answer already, I was just curious. What happens to the modulus part when you differentiate it? Would you just take the piecewise version of the function and differentiate each function?
$$\dfrac{\text{d}}{\text{d}x} \vert f(x)\vert = f'(x) \text{sgn}(f(x))$$ Where $\text{sgn}(\cdot)$ is the signum function. $$\text{sgn}(f(x)) = \dfrac{\vert f(x) \vert}{f(x)}$$ This is valid wherever $f(x)\neq 0$; at the zeros of $f$ (here $x=\pm 1$) the derivative of $\vert f\vert$ need not exist, and indeed the graph of $\vert x^2-1\vert$ has corners there. So yes: differentiating the piecewise version on each piece gives the same result.
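A quick finite-difference check of this formula for $f(x)=x^2-1$, with sample points chosen away from the corners $x=\pm1$ where $|f|$ is not differentiable:

```python
import math

def f(x):
    return abs(x * x - 1.0)

def sgn(t):
    return (t > 0) - (t < 0)

def claimed_derivative(x):
    # f'(x) * sgn(f(x)) with f(x) = x^2 - 1 inside the absolute value
    return 2.0 * x * sgn(x * x - 1.0)

h = 1e-6
max_err = 0.0
for x in [-2.0, -1.3, -0.5, 0.0, 0.7, 1.5, 3.0]:
    numeric = (f(x + h) - f(x - h)) / (2.0 * h)
    max_err = max(max_err, abs(numeric - claimed_derivative(x)))
```

The central differences agree with $2x\,\mathrm{sgn}(x^2-1)$ to high precision at every test point.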
{ "language": "en", "url": "https://math.stackexchange.com/questions/4589466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find $\Bbb{Z}[i]/(i)$ and $\Bbb{Z}[i]/(1+i)$ and find GCD of $85$ and $1+13i$ in $\Bbb{Z}[i]$. So for finding $\Bbb{Z}[i]/(i)$ I have that an ideal is the whole ring if it contains a unit. So $(i)$ contains a unit, namely $i$, as $$(-i)(i)=1$$ thus $i$ is a unit, forcing $(i)$ to be all of $\Bbb{Z}[i]$, and thus $\Bbb{Z}[i]/(i)$ is the trivial ring. For $\Bbb{Z}[i]/(1+i)$ I know $1+i=0$, so $i= -1$ and $-i= 1$, so we only have $1$ and $-1$, which is isomorphic to $\Bbb{Z}_2$. Is this correct? And for dividing $85$ by $1+13i$ I know the rounded quotient is $$\frac{85}{1+13i}\approx-6i$$ but I wanna know why we round the fractions $\frac{1}{2}$ and $-\frac{13}{2}$ to $0$ and $-6$. Then when I plug this in for $q$ into $$85=q(1+13i)+r$$ and solve for $r$ I get $7+6i$. Where do I go from there? Do I divide $1+13i$ by $7+6i$ to show the next remainder is $0$? Cause that quotient equals $1+i$ exactly, which means $r=0$ in the next step. So is the GCD just $$7+6i$$? Thanks in advance! EDIT: this post has been up for over 12 hours and no one has helped; can someone comment if I'm on the right track or not
You solved everything you asked, but for your last question it helps to talk about prime factorization in the Gaussian integers. Here, $85=(2+i)(2-i)(4+i)(4-i)$ and $1+13i=(2+i)(4+i)(1+i)$ are prime factorizations of these numbers up to units, so up to units, $GCD(85,1+13i)=(2+i)(4+i)=7+6i$. I learned these things today from: https://en.wikipedia.org/wiki/Gaussian_integer#:~:text=Basic%20definitions,-The%20Gaussian%20integers&text=In%20other%20words%2C%20a%20Gaussian,is%20thus%20an%20integral%20domain. It also says: Unfortunately, except in simple cases, the prime factorization is difficult to compute, and the Euclidean algorithm leads to a much easier (and faster) computation. I think this is a simple case.
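The Euclidean algorithm the question carries out can be run mechanically: divide, round the quotient to the nearest Gaussian integer (this is exactly the rounding step being asked about; it guarantees $|r|<|b|$, since the rounded quotient is within distance $\frac{\sqrt2}{2}<1$ of the true one), and repeat. A quick sketch in Python using complex floats:

```python
def g_round(z):
    """Round a complex number to the nearest Gaussian integer."""
    return complex(round(z.real), round(z.imag))

def g_mod(a, b):
    """Remainder of a divided by b in Z[i]; rounding the quotient gives |r| < |b|."""
    q = g_round(a / b)
    return a - q * b

def g_gcd(a, b):
    while b != 0:
        a, b = b, g_mod(a, b)
    return a

g = g_gcd(85 + 0j, 1 + 13j)
# a gcd in Z[i] is only determined up to the units 1, -1, i, -i
associates = {g, -g, 1j * g, -1j * g}
```

Running this reproduces the computation in the question: the first remainder is $7+6i$, the next is $0$, and the gcd is $7+6i$ up to units.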
{ "language": "en", "url": "https://math.stackexchange.com/questions/4589615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to evaluate $\int\frac{x^ndx}{\sqrt{ax^2+bx+c}}$ for natural $n$ How do we evaluate the following integral? $$\int\frac{x^ndx}{\sqrt{ax^2+bx+c}}$$ I am strictly looking for a solution for natural $n$. I wanted to try a trig substitution $$\int\frac{x^ndx}{\sqrt{ax^2+bx+c}}=\int\frac{x^ndx}{\sqrt{a\left(\left(x+\frac{b}{2a}\right)^2+\frac{c}{a}-\frac{b^2}{4a^2}\right)}}$$$$=\int\frac{(u-\frac{b}{2a})^ndu}{\sqrt{a\left(u^2+\frac{c}{a}-\frac{b^2}{4a^2}\right)}}$$ with $u=x+\frac{b}{2a}$, but I didn't know how to deal with the $(\tan\theta+k)^n$. I can accept an answer with special functions.
$$\int_0^xt^r(t-a)^{-\frac12}(t-b)^{-\frac12}dt\mathop=^{t=ux}x^{r+1}\int_0^1 u^{(r+1)-1}\bigl((ux-a)(ux-b)\bigr)^{-\frac12}du=\frac{x^{r+1}}{\sqrt{ab}}\int_0^1 u^{(r+1)-1}\left(1-\frac xau\right)^{-\frac12}\left(1-\frac xbu\right)^{-\frac12}du$$ using the factorization $(t-a)(t-b)=ab\left(1-\frac ta\right)\left(1-\frac tb\right)$. Using this Appell F$_1$ integral representation: $$\text F_1(a;b_1,b_2;c;x,y)=\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_0^1\frac{t^{a-1}(1-t)^{c-a-1}}{(1-xt)^{b_1}(1-yt)^{b_2}}dt$$ with $a=r+1$, $b_1=b_2=\frac12$, $c=r+2$, so that $\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}=r+1$. Therefore: $$\boxed{\int_0^x\frac{t^r}{\sqrt{(t-a)(t-b)}}dt=\frac{x^{r+1}}{\sqrt{ab}(r+1)}\text F_1\left(r+1;\frac12,\frac12;r+2;\frac xa,\frac xb\right)}$$ Shown here in the "alternate forms" section. For $r\in\frac{\Bbb Z}2$ the Appell function reduces to expressions involving the elliptic integrals F and E, and for $r\in\Bbb Z$ inverse trigonometric and algebraic functions appear.
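A numerical spot check of the boxed formula, using the double power series $\text F_1(a;b_1,b_2;c;x,y)=\sum_{m,n\geq0}\frac{(a)_{m+n}(b_1)_m(b_2)_n}{(c)_{m+n}\,m!\,n!}x^my^n$ (valid for $|x|,|y|<1$), compared against direct quadrature; the sample values $r=1$, $a=-2$, $b=-3$, $x=1$ are arbitrary choices for which both sides are real:

```python
import math

def appell_F1(a, b1, b2, c, x, y, terms=80):
    """Double power series for Appell F1; converges for |x|, |y| < 1."""
    def poch(s, n):
        out = 1.0
        for k in range(n):
            out *= s + k
        return out
    total = 0.0
    for m in range(terms):
        for n in range(terms - m):
            total += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                      / (poch(c, m + n) * math.factorial(m) * math.factorial(n))
                      * x**m * y**n)
    return total

r, a, b, x = 1, -2.0, -3.0, 1.0

def integrand(t):
    return t**r / math.sqrt((t - a) * (t - b))

# Simpson's rule on [0, x]
n = 2000
h = x / n
simpson = integrand(0.0) + integrand(x)
for k in range(1, n):
    simpson += (4 if k % 2 else 2) * integrand(k * h)
simpson *= h / 3

closed = (x**(r + 1) / (math.sqrt(a * b) * (r + 1))
          * appell_F1(r + 1, 0.5, 0.5, r + 2, x / a, x / b))

rel_err = abs(simpson - closed) / abs(simpson)
```

The two evaluations agree to many digits, consistent with the boxed identity.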
{ "language": "en", "url": "https://math.stackexchange.com/questions/4589950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are polar coordinates of the origin and why is $\arg(0) =$ undefined, an option? I have come here from reading this, but it didn't really answer my question. Wikipedia says the Cartesian coordinates are converted using $$x=r \cos \theta$$ $$y = r \sin \theta$$ where $r\geq0$, but that doesn't make sense because those conversions are found using the definitions of $\sin$ and $\cos$, which are $$\sin \theta = \frac{y}{r},$$ $$\cos \theta =\frac{x}{r}$$ where $r>0$ and thus $r\neq0$. So for $(0,0)$ those conversions are not valid. Also from this answer, we may let $\theta = \arg (0,0)$ be undefined or $\mathbb{R}$. My question is: why is "$\arg (0) =$ undefined" even an option when all real numbers are perfect candidates for it? Why is the "convention" of leaving it undefined even a thing? What are we afraid of? Multiple values? Because surely, $\arg(z)$ has multiple values even when $z\neq0$. So why "undefined" for $z=0$?
The origin has polar coordinates $ (r,\theta)$ defined as $$ ( 0, t )$$ for any polar angle $t$. Let me say with some caution... since in rectangular coordinates the origin is $$ (0,0)$$ we tend to continue the habit or custom and take $(0,0)$ as some sort of convention in polar coordinates as well. In cases where our curve is going through the origin this presents a minor difficulty. For example the circle with polar equation $r= \sin (\theta-\dfrac{\pi}{4} )$ can still be thought of as having polar coordinates $(0,0)$ at the origin. However we like to take the point representation as $$(0, \dfrac{\pi}{4} )$$ rather than $(0,0) $ in order to avoid a discontinuity in the polar angle at $\theta=\dfrac{\pi}{4}$, so that derivatives taken there remain smooth and continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4590101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can equivalence relations be axiomatized using just one elementary sentence? Equivalence relations are traditionally axiomatized by the Reflexivity, Symmetry, and Transitivity axioms. However, they can also be axiomatized by Reflexivity and Circularity. (Circularity is this axiom: $(xRy \land yRz) \rightarrow zRx$). I am wondering if we can make do with just one elementary sentence. By elementary sentence, I mean either an atomic formula, or a negated atomic formula, or finite disjunctions of the previous two. For example, Circularity can be rewritten as $(\neg xRy \vee \neg yRz \vee zRx)$, which is an elementary sentence by my definition. If it can't be axiomatized by just one elementary sentence, I want to see a proof that it can't be so axiomatized.
The answer to your question is: this is not possible. I'll assume that our non-logical vocabulary is exactly $\{R\}$. Let $F$ denote the always-false binary relation and $T$ denote the always-true binary relation. Lemma 101: If $\varphi$ is negation-free, then $\varphi$ cannot require $R$ to be transitive without requiring it to be $T$. Suppose $\varphi$ is negation-free. If $\varphi$ is of the form $xRx$, then $R$ must be a superset of the identity relation. This does not require $R$ to be transitive. If $\varphi$ is of the form $xRy$, then $R$ must be $T$. This requires $R$ to be transitive, but misses out on the identity relation, which is a potential equivalence relation that we need to allow. For the general case, if $\varphi$ is negation-free, then, for any $S$ satisfying $\varphi$, any superset $S' \supset S$ will also satisfy $\varphi$. Therefore, all the relations satisfying $\varphi$ are transitive if and only if the only relation satisfying $\varphi$ is $T$. End of Lemma 101. Lemma 102: If $\varphi$ contains negation, then it allows $F$. Suppose $\varphi$ contains at least one negated clause $\lnot xRy$ where $x$ and $y$ are not necessarily distinct. If $R$ is $F$, then $xRy$ will always be false and hence $\varphi$ will be vacuously true. End of Lemma 102. Theorem: Equivalence relations cannot be axiomatized using an elementary sentence. No elementary sentence can be both negation-free and contain a negation; however, by Lemmas 101 and 102, we need both kinds of sentences in order to insist on transitivity without insisting that we have $T$, and to rule out $F$. This is however possible to do if you cheat and add a unary function symbol $f$ to the language. The sentence below does the job and defines an equivalence relation. $$ f(x) = f(y) \to xRy \;\;\text{or equivalently}\;\; \lnot(f(x) = f(y)) \lor xRy $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4590299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Asymptotic of $\sum (H_n-H_{k-1})\ln(k)$ where $H_k$ are the harmonic numbers I want to find the asymptotic of $f(n)=$ \begin{equation} \sum_{k=2}^n (H_n-H_{k-1})\ln(k)=\left(\frac{1}{2}+\cdots+\frac{1}{n}\right)\ln(2)+\left(\frac{1}{3}+\cdots+\frac{1}{n}\right)\ln(3)+\cdots+\frac{1}{n}\ln(n), \end{equation} up to the constant level. Using Abel's summation formula, let $a_k=H_n-H_{k-1}$, $A_k=\sum_{i=1}^k a_i$, so that $A(u)=[u]\left(1+\frac{1}{[u]+1}+\cdots+\frac{1}{n}\right)$, and $\phi(x)=\ln(x)$. We have \begin{align} &\sum_{1<k\leq n}a_k\phi(k)=A(n)\phi(n)-A(1)\phi(1)-\int_1^n A(u)\phi'(u)du\\ =&n\ln n-\int_1^n \frac{[u]}{u}\left(1+\frac{1}{[u]+1}+\cdots+\frac{1}{n}\right)du. \end{align} How to proceed from here? It seems the integral part is not small.
Using summation by parts: Let $f_k = H_n- H_k$ and $G_k = \log k = g_{k+1} -g_k$, and $g_k = \sum_{j=2}^{k-1} \log j$ Then $$ \sum_{k=2}^n f_k G_k = f_n g_{n+1} - f_2 g_2 - \sum_{k=3}^n g_k (f_k-f_{k-1}) = \sum_{k=3}^n \frac{g_k}{k} $$ because $f_n= g_2=0$. (Note $f_k$ uses $H_k$ rather than the $H_{k-1}$ of the question; the difference $\sum_k \frac{\log k}{k}=O(\log^2 n)$ does not affect the leading asymptotics.) Now, by Stirling, $$ g_{k+1} = \log k! = k \log k - k +\frac12 \log k + \frac12 \log 2\pi + o(1)$$ $$ \frac{g_{k}}{k} = \log k -1 - \frac12 \frac{\log k}{k} + \frac12 \log 2\pi \frac{1}{k} + o(1/k)$$ $$ \sum_{k=3}^{n} \frac{g_{k}}{k} = n \log n - 2n + o(n) $$
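A numerical check of the leading asymptotics $n\log n-2n$, computing the original sum $\sum_{k=2}^n(H_n-H_{k-1})\log k$ directly:

```python
import math

def S(n):
    """sum_{k=2}^n (H_n - H_{k-1}) * ln k, computed directly from harmonic numbers."""
    H = [0.0] * (n + 1)
    for j in range(1, n + 1):
        H[j] = H[j - 1] + 1.0 / j
    return sum((H[n] - H[k - 1]) * math.log(k) for k in range(2, n + 1))

n = 5000
exact = S(n)
leading = n * math.log(n) - 2 * n
rel_err = abs(exact - leading) / abs(leading)
```

At $n=5000$ the relative deviation from $n\log n - 2n$ is well under a percent, consistent with an error of only polylogarithmic size.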
{ "language": "en", "url": "https://math.stackexchange.com/questions/4590434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve $\frac{\partial^2u}{\partial x\partial t}=-\frac{\partial u}{\partial t}+\sin(x)e^{-x}$ How to solve $$\frac{\partial^2u}{\partial x\partial t}=-\frac{\partial u}{\partial t}+\sin(x)e^{-x}$$ I'm looking for one solution only. I was reading about separation of variables, but I'm not sure if it will work in this case, as it has an extra term and a cross derivative. Could anyone suggest a way to solve it? Is $u(t,x)=X(x)T(t)$ still a good choice?
Since the equation involves $u$ only through $u_t$, set $w=u_t$; then $w$ satisfies the first-order linear equation (in $x$) $$w_x+w=\sin(x)e^{-x}.$$ Since the free term is the product of an exponential and a sine, it is reasonable to guess a solution of the form $$w=e^{-x}(a\cos x+b\sin x).$$ Plugging this in gives $e^{-x}(b\cos x - a\sin x)=\sin(x)e^{-x}$, so $a=-1$, $b=0$, i.e. $w=-e^{-x}\cos x$. Integrating in $t$, one solution of the original equation is $$u(t,x)=-t\,e^{-x}\cos x$$ (any function of $x$ alone may be added to it; note that a $t$-independent guess cannot work, since it makes both $u_t$ and $u_{xt}$ vanish).
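A finite-difference sanity check of the candidate $u(t,x)=-t\,e^{-x}\cos x$, which satisfies the stated equation (the check computes the residual $u_{xt}+u_t-\sin(x)e^{-x}$ at a few sample points):

```python
import math

def u(t, x):
    # candidate particular solution u(t, x) = -t * e^{-x} * cos x
    return -t * math.exp(-x) * math.cos(x)

def residual(t, x, h=1e-4):
    # central differences for the mixed derivative u_xt and for u_t
    u_xt = (u(t + h, x + h) - u(t + h, x - h)
            - u(t - h, x + h) + u(t - h, x - h)) / (4 * h * h)
    u_t = (u(t + h, x) - u(t - h, x)) / (2 * h)
    return u_xt + u_t - math.sin(x) * math.exp(-x)

max_res = max(abs(residual(t, x))
              for t in (-1.0, 0.5, 2.0)
              for x in (-0.7, 0.3, 1.9))
```

The residual is at the level of the finite-difference truncation error, i.e. numerically zero.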
{ "language": "en", "url": "https://math.stackexchange.com/questions/4590684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that solution to wave equation can be used to solve the Heat equation Let $u$ be a regular and bounded solution of $u_{tt}-u_{xx}=0$ in $\mathbb{R} \times \mathbb{R}$ with $u(0,x) = g(x)$, $u_t(0,x)=0$ in $\mathbb{R}$. Define $$ w(t,x) = \frac{1}{\sqrt{4 \pi t}} \int_{\mathbb{R}} u(s,x)e^{-\frac{s^2}{4t}}ds $$ Show that $w$ solves the heat equation $w_t - w_{xx} = 0$ in $(0,\infty) \times \mathbb{R}$ with $w(0,x) = g(x)$ in $\mathbb{R}$. So far I have tried this: The solution to the wave equation $u_{tt} - u_{xx}=0$ must be of the form found in d'Alembert's formula. So I plugged this formula into $w$ and then calculated $w_t$ and $w_{xx}$. But I don't see any obvious simplifications that lead to the result.
Notice that $$w(t,x)=\int_\mathbb R u(s,x) K(t,s) ds$$ where $K$ is the heat kernel which we know satisfies $(\partial_t + \Delta)K=0$ (where I use the Laplacian with a negative sign). Now when you compute $(\partial_t+\Delta)w$, using your favorite theorem to justify the interchange of differentiation and integration, you can bring $(\partial_t+\Delta)$ inside the integral. Then, the time derivative will only interact with the heat kernel $K$ and the Laplacian will only act on $u$. With the information we have about $u$ and $K$, we arrive at an integral appearing in Green's second identity (by which I mean an integral of the form $\int (f\Delta g - g\Delta f) ds$) and thus show $(\partial_t+\Delta)w=0$.
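A numerical illustration (not a proof): for the sample data $g(x)=\cos x$, d'Alembert gives $u(s,x)=\cos x\cos s$, and the Gaussian integral evaluates $w$ in closed form to $e^{-t}\cos x$, which indeed solves the heat equation with $w(0,x)=g(x)$. A quadrature check of this:

```python
import math

def g(x):
    return math.cos(x)

def u(s, x):
    # d'Alembert with u_t(0, x) = 0: u(s, x) = (g(x + s) + g(x - s)) / 2
    return 0.5 * (g(x + s) + g(x - s))

def w(t, x, L=30.0, n=4000):
    """w(t,x) = (4 pi t)^(-1/2) * integral of u(s,x) exp(-s^2/(4t)) ds, trapezoid on [-L, L]."""
    h = 2.0 * L / n
    total = 0.0
    for k in range(n + 1):
        s = -L + k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * u(s, x) * math.exp(-s * s / (4.0 * t))
    return total * h / math.sqrt(4.0 * math.pi * t)

# for g = cos, the integral has the closed form e^{-t} cos x, a heat-equation solution
t0, x0 = 0.8, 1.1
err = abs(w(t0, x0) - math.exp(-t0) * math.cos(x0))
```

The truncation at $|s|=L$ is harmless because the Gaussian factor is astronomically small there.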
{ "language": "en", "url": "https://math.stackexchange.com/questions/4590935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of ellipses with a given area that pass through three given points Suppose you are given three arbitrary points in the $xy$ plane. You want to generate all possible ellipses that pass through the three points, and at the same time have a given area of $S$. If $S$ is greater than the minimum possible area, then is it true that there will be exactly six possible ellipses? My progress: I first transformed the three given points (which are $A, B, C$) to the vertices of an equilateral triangle, call them $A',B',C'$, with $A' = (0,0), B' = (1, 0), C' = (\dfrac{1}{2}, \dfrac{\sqrt{3}}{2} )$. From symmetry, we can pass three congruent ellipses through $A',B', C'$, the first with one axis parallel to $A'B'$, the second with one axis parallel to $B'C'$ and the third with axis parallel to $A'C'$. The area of each of these three ellipses is $S \ |\det(T)| $ where $T$ is the transformation matrix that sends $A,B,C$ to $A', B', C'$. Since the ellipses are congruent, and symmetrical about the centroid of $\triangle A'B'C'$ it suffices to find the first one, and then rotate it by $120^\circ$ and $240^\circ$ about the centroid to generate the second and third. The equation of the ellipses with an axis parallel to the $x$ axis (and the other one to the $y$ axis), that passes through $A',B',C'$ takes the form $ \dfrac{(x - \dfrac{1}{2})^2}{a^2} + \dfrac{(y - k)^2}{b^2} = 1 $ Substituting $A'$ or $B'$ gives $\dfrac{1}{4 a^2} + \dfrac{k^2}{b^2} = 1 \dots (1)$ Substituting $C'$ gives $ \dfrac{\sqrt{3}}{2} - k = b \dots (2)$ And finally, $S' = S \ | \det(T) | $ will be the area of the ellipse. So $ S' = S \ | \det(T) | = \pi a b \dots (3) $ Substituting for $k$ from $(2)$ in $(1)$: $\dfrac{1}{4 a^2} + \dfrac{3}{4 b^2} - \dfrac{\sqrt{3}}{b} = 0 \dots (4)$ Next, I checked numerically the solution of $(3)$ and $(4)$. And it turns out there are two possible solutions.
The figure below shows the two ellipses for an area that is $1.5 \times S_{min} $ Hence the total number of possible ellipses is $3 \times 2 = 6$ ellipses. Putting this all together, I picked three points: $A = (3, 1), B = (16, 5), C = (7, 13) $ and drew the possible ellipses through them, each having an area of $1.5 S_{min}$ where $S_{min} $ is the minimum possible area of an ellipse passing through $A,B,C$. This is shown below: I would appreciate it if someone could confirm my findings.
Let $ABC$ be an equilateral triangle, $D$, $E$ and $F$ the midpoints of its sides and $G$ its centre. Consider then any ray $r$ with origin $G$ and a point $O$ on $r$ inside triangle $DEF$. The ellipse through $ABC$ centred at $O$ has its minimum area when $O=G$ (the ellipse is the circumcircle of $ABC$), but as $O$ approaches the boundary of triangle $DEF$ the area of the ellipse tends to $+\infty$, because the ellipse degenerates into a pair of parallel lines when $O$ is on a side of $DEF$. By continuity, we can then choose $O$ on the ray such that the area of the ellipse has any fixed value $S$ greater than the minimum area. Hence there are infinitely many ellipses with area $S$, passing through $ABC$. EDIT. Here's a plot (made with Mathematica) of the loci of centres (blue), when the related ellipse has area equal to (from left to right) $1.1$, $1.2$, $1.5$, $2$ times the minimum area, for an equilateral triangle centred at the origin with radius $1$. Loci are shown only for the inner triangle on the right and should be rotated by $\pm120°$ about the origin to get their full extent.
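For what it's worth, the axis-aligned count in the question does check out numerically: eliminating $a$ via the area constraint $\pi a b = S' = 1.5\,S_{\min} = \pi/2$ (the circumcircle of the unit equilateral triangle has area $\pi/3$) turns $(4)$ into a one-variable equation in $b$ with exactly two roots, while the answer above explains why allowing arbitrary centres gives infinitely many ellipses in total. A quick root count:

```python
import math

# area constraint: pi*a*b = pi/2, i.e. a = 1/(2b); substitute into equation (4)
def F(b):
    a = 1.0 / (2.0 * b)
    return 1.0 / (4 * a * a) + 3.0 / (4 * b * b) - math.sqrt(3.0) / b

def bisect(lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# scan (0, 3] for sign changes of F and refine each by bisection
grid = [0.05 * k for k in range(1, 61)]
roots = []
for lo, hi in zip(grid, grid[1:]):
    if F(lo) * F(hi) < 0:
        roots.append(bisect(lo, hi))
```

Two roots appear (one with $b<1/\sqrt3$, one with $b>1/\sqrt3$), i.e. two axis-aligned ellipses per orientation, matching the question's count of $3\times2=6$ for the symmetric configurations.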
{ "language": "en", "url": "https://math.stackexchange.com/questions/4591191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Group of order $630$ is solvable I tried using the Sylow theorems. $|G|=630=2\cdot 3^2 \cdot 5 \cdot 7$. Denote the number of Sylow $p$-subgroups by $k_p$; then: $k_2 \in \{1, 3, 5, 7, 9, 15, 21, 35, 45, 63, 105, 315\}$ $k_3 \in \{1, 7, 10, 70\}$ $k_5 \in \{1, 6, 21, 126\}$ $k_7 \in \{1, 15\}$ What restrictions can I use to narrow down these options? How do I continue from here?
As I have explained in the comments, a trick using the Cayley embedding and the Feit-Thompson theorem imply that every group whose order is not divisible by $4$ is solvable. Here's a more elementary solution: we apply the same trick as in the comments: let $G$ be a group of order $2k$, where $k$ is odd, then consider the Cayley embedding $i:G \to S_{2k}$. The image of an element of order $2$ is a product of $k$ transpositions and thus an odd permutation. It follows that $i(G)$ is not contained in $A_{2k}$ and hence $i^{-1}(A_{2k})$ is a normal subgroup of index two (i.e. order $k$). Thus for the question it suffices to show that a group of order $315=3^2 \cdot 5 \cdot 7$ is solvable. So let $G$ be of order $315$. If $n_3(G)=1$, then we are done, because then the $3$-Sylow is normal and hence we are reduced to showing that a group of order $35$ is solvable. But all groups with order $pq$ for $p,q$ prime are solvable, so we are done. So suppose that $n_3(G)=7$. Then let $P$ be a $3$-Sylow subgroup of $G$, then we have $|N_G(P)|=45$. We know that either $P \cong C_9$ or $P \cong C_3 \times C_3$. In the first case $\mathrm{Aut}(P) \cong (\Bbb Z/9\Bbb Z)^\times$, which has order $\varphi(9)=6$. In the latter case, we have $\mathrm{Aut}(P) \cong \mathrm{GL}(2,3)$, which has order $48$. None of these is divisible by $5$, thus we get that there is a $5$-Sylow subgroup $Q$ of $G$ such that $Q \leq C_G(P)$ and hence $P \leq N_G(Q)$. From this it follows that $n_5(G)$ is not divisible by $3$, which implies that $n_5(G) = 1$. Thus there's a normal subgroup $Q$ of order $5$ and $G/Q$ has order $63=3^2 \cdot 7$. So it only remains to show that groups of order $63$ are solvable. I'll leave that as an exercise.
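The candidate Sylow counts can be recomputed mechanically ($k_p$ must divide $|G|/p^{a}$ and satisfy $k_p\equiv1 \pmod p$):

```python
def sylow_counts(order, p):
    """Candidates for the number of Sylow p-subgroups:
    divisors of |G| / p^a that are congruent to 1 mod p."""
    m = order
    while m % p == 0:
        m //= p
    return [d for d in range(1, m + 1) if m % d == 0 and d % p == 1]

counts = {p: sylow_counts(630, p) for p in (2, 3, 5, 7)}
```

In particular $k_3\in\{1,7,10,70\}$ and the full odd-divisor list for $k_2$ includes $105$.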
{ "language": "en", "url": "https://math.stackexchange.com/questions/4591380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to know the shape of trajectories of Prey Predator Model and existence of orbit Consider the Prey Predator Model, $$ \begin{align} \frac{du}{dt}&=(a_1-b_1 v)u,\\ \frac{dv}{dt}&=(-a_2+b_2 u)v\\ u(0)&=u_{10},v(0)=v_{20} \end{align}\tag 1 $$ Although the model $(1)$ looks simple, a complete analytic solution of these equations is not possible for general $a_1, a_2, b_1, b_2, u_{10}, v_{20}$. By variable separation, $$ \begin{align} \frac{du}{dv}&=\frac{(a_1-b_1 v)u}{(-a_2+b_2 u)v}\\ \left(\frac{u}{u_{10}}\right)^{-a_2}e^{b_2(u-u_{10})}&=\left(\frac{v}{v_{20}}\right)^{a_1}e^{-b_1(v-v_{20})} \quad\textrm{ Integrating from 0 to t}\\ e^{b_2u+b_1v}&=\lambda v^{a_1}u^{a_2} \quad\text{ Where }\lambda=\frac{v_{20}^{-a_1}e^{b_1v_{20}}}{u_{10}^{a_2}e^{-b_2u_{10}}}\tag 2 \end{align} $$ Both the $u$ and $v$ axes are orbits of $(1)$, and the other orbits are the family of curves defined by $(2)$, and these curves are closed. This implies that each solution $u(t)$ and $v(t)$ of $(1)$ which starts in the first quadrant $u>0,v>0$ at time $t=t_0$ will remain there for all future time $t\geq t_0$. I didn't understand this paragraph: how did they conclude that the orbits are closed without any justification? Especially, "each trajectory is a bounded closed curve". This system has two equilibrium points: the zero equilibrium point $(0,0)$ and the positive equilibrium point $(a_2/b_2,a_1/b_1)$. In fact, I couldn't find any way to conclude this. * *How could I get some idea of the shapes of the trajectories? *How could I know there exists an orbit? We have recently been introduced to population dynamics and have only lecture notes to follow. That's why so many unclear paragraphs need to be resolved before the exam :) Any help will be appreciated. TIA
The terms $b_1v-a_1\log v$ and $b_2u-a_2\log u$ in the "logarithmic" first integral are both convex functions with minima $m_v$ and $m_u$, going to $+\infty$ towards $0$ and $\infty$. If the sum of both is equal to some constant $C$, then we know that the first term is bounded above by $C-m_u$ and the second by $C-m_v$. Both bounds give a finite interval where they are valid. This means that the level sets have to consist of closed curves. A change of direction requires a stationary point, and stationary points do not exist outside the one equilibrium point at the center/minimum. So the level curves are followed in only one orientation. As the first integral is also jointly convex as a function of both variables, all level sets are convex, so the level curves all have only one component.
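A numerical illustration: integrating $(1)$ with a Runge-Kutta scheme and monitoring the first integral $V(u,v)=b_2u-a_2\log u+b_1v-a_1\log v$ (the logarithm of relation $(2)$) shows $V$ staying constant along the computed orbit, consistent with closed level curves. The parameter values and the starting point here are arbitrary:

```python
import math

a1, b1, a2, b2 = 1.0, 0.5, 0.8, 0.4

def rhs(u, v):
    return (a1 - b1 * v) * u, (-a2 + b2 * u) * v

def V(u, v):
    # first integral: constant along orbits of (1)
    return b2 * u - a2 * math.log(u) + b1 * v - a1 * math.log(v)

def rk4_step(u, v, h):
    k1 = rhs(u, v)
    k2 = rhs(u + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = rhs(u + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = rhs(u + h * k3[0], v + h * k3[1])
    return (u + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

u, v = 3.0, 1.0
V0 = V(u, v)
h = 0.001
drift = 0.0
for _ in range(20000):   # integrate up to t = 20, several revolutions
    u, v = rk4_step(u, v, h)
    drift = max(drift, abs(V(u, v) - V0))
```

The drift in $V$ stays at round-off level, and the trajectory remains in the open first quadrant.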
{ "language": "en", "url": "https://math.stackexchange.com/questions/4591910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that the rational function $f(x)= \frac {x+1} {x^2 +3}$ behaves like $y= \frac 13 x +\frac 13$ around $x=0$ Source : Thomas, Calculus ( Chapter on limits). Context: The question asks for the behaviour of the function $f(x)= \frac {x+1} {x^2 +3}$as $x$ goes to $+\infty$ and $ - \infty$. The answer being that function $f$ admits of $y=0$ as asymptote, both to the right and to the left. But, in order to explain the graph of $f$, I'd like to go a bit further than the original question and to determine its behaviour around $x=0$. To do this I rewrite $f$ as : $\large f(x)= \frac {x+1} {x^2 +3} = \frac { \frac {x+1} {3} } {\frac {x^2+3} {3}} = \frac {x+1} {3} \times \frac { 1} {\frac {x^2+3} {3}} = \bigg{(}\frac 13 x + \frac 13 \bigg{)}\times \frac { 1} { 1+ \frac {x^2} {3}}$ From this I would like to conclude that, since the second factor goes to $1$ as $x$ goes to $0$ , the first factor is preponderant , in such a way that, near $0$ , function $f$ behaves like $y= \frac 13 x + \frac 13$. Desmos construction : https://www.desmos.com/calculator/heksb1tgoj My question is : to which extent is this reasoning rigorous, as it stands? what could be answered to the following objections (1) the first factor goes to $1/3$ as $x$ goes to zero, so why not conclude that function $f$ behaves overall like $y=(1/3) \times 1$ as $x$ approaches $0$ ? why applying the limiting process only to the second factor? (2) if the second term were not $\frac { 1} { 1+ \frac {x^2} {3}}$ but $\frac { 1} { 1+ \frac {x} {3}}$, this second term would also go to $1$ ( as $x$ goes to $0$), but one would not want to conclude from this that ( in this hypothetical case) the global function behaves like $y= \frac 13 x + \frac 13$ near $x=0$, so why using ths reason in the first case, and not in the second?
When you're talking about limiting behavior, it's not just about what the limit of the function is, but also the "speed" with which the function approaches that limit. The limits of both $x$ and $x^2$ are $\infty$ as $x\rightarrow\infty$, yet $$ \lim_{x\rightarrow\infty}\frac{x}{x^2} \neq 1 $$ because $x^2$ gets so much bigger so much faster than $x$. So for your first question, yes you can say that the function acts like $y=\frac{1}{3}$ around $x=0$, but if you want to be more accurate you'd say it acts like $y=\frac{x+1}{3}$, and if you wanted to be even more accurate than that you'd add in the quadratic term. As for your second question of why we can't say the same thing if you change that one term: with $\frac{1}{1+x/3}$ the correction to $1$ is of order $x$, the same order as the linear term you kept, so it changes the linear behavior itself, whereas the correction from $\frac{1}{1+x^2/3}$ is of order $x^2$ and leaves the linear part alone. If you Taylor series expand around $0$ you get $$ f(x) = \frac{1}{3}+\frac{x}{3}-\frac{x^2}{9}+\mathcal{O}(x^3) $$ and wherever you cut it off determines your approximation; the first omitted term estimates the error.
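The quality of the order-2 truncation is easy to see numerically: the error $f(x)-\left(\frac13+\frac x3-\frac{x^2}9\right)$ shrinks like $x^3$, with the ratio hovering near the magnitude of the next coefficient, $\frac19$:

```python
def f(x):
    return (x + 1) / (x * x + 3)

def taylor2(x):
    # 1/3 + x/3 - x^2/9, the expansion of f about 0 up to order 2
    return 1 / 3 + x / 3 - x * x / 9

# the truncation error should be O(x^3): the ratio |f - taylor2| / |x|^3 stays bounded
ratios = []
for x in [0.1, 0.03, 0.01, 0.003, 0.001]:
    ratios.append(abs(f(x) - taylor2(x)) / x ** 3)
```

(The sample points stop at $10^{-3}$ to keep floating-point cancellation out of the picture.)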
{ "language": "en", "url": "https://math.stackexchange.com/questions/4592105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Write an equation for a sphere passing through a circle and tangent to a plane I'm trying to solve this task: '''Write an equation for a sphere passing through the circle $x^2 + y^2 = 11$, $z=0$, and tangent to the plane $x + y + z - 5 = 0$.''' The center of the sphere must lie on the $z$-axis, so it has coordinates $(0, 0, \alpha)$. I also found that $\alpha^2 = R^2 - 11$, where $R$ is the radius of the sphere. I came to this system: * *$x_0^2 + y_0^2 + (z_0 - \sqrt{R^2 - 11})^2 = R^2$ *$x_0 + y_0 + z_0 = 5$ where $x_0, y_0, z_0$ are the coordinates of the point where the sphere touches the plane. The correct answer is two spheres: $x^2 + y^2 + (z + 1)^2 = 12$ $x^2 + y^2 + (z + 4)^2 = 27$ I can't figure out how to find these two values of the parameter $\alpha$. Could somebody please explain how to do it? Thanks in advance.
If the center of the sphere is $(x_0, y_0, z_0)$, and its radius is $R$, then its equation is $ (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 = R^2 $ Set $z = 0 $, then you get $ (x - x_0)^2 + (y - y_0)^2 + z_0^2 = R^2 $ Comparing this with $ x^2 + y^2 = 11 $ we deduce that $x_0 = y_0 = 0 $ and $ 11 = R^2 - z_0^2 $ Since the sphere is tangent to $ x+ y+z = 5 $, the distance from the center $(0, 0, z_0)$ to the plane is equal to $R$, thus $ R = \dfrac{| 0 + 0 + z_0 - 5 |} {\sqrt{3}} $ Squaring, $ R^2 = \dfrac{(z_0 - 5)^2}{3} $ Thus $ 11 + z_0^2 = \dfrac{(z_0 - 5)^2 }{3} $ So that, $ 33 + 3 z_0^2 = z_0^2 - 10 z_0 + 25 $ From which, $ 2 z_0^2 + 10 z_0 + 8 = 0 $ Dividing through by $2$, $ z_0^2 + 5 z_0 + 4 = 0 $ Factoring, $ (z_0 + 4)(z_0 + 1) = 0 $ Therefore, $z_0 = -4$ or $z_0 = -1$, and corresponding to that, $R^2 = 27 $ or $ R^2 = 12 $ And the equations of the two spheres are $ x^2 + y^2 + (z + 4)^2 = 27 $ and $ x^2 + y^2 + (z + 1)^2 = 12 $
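A quick numerical verification of both spheres (cross-section at $z=0$ is the given circle, and the distance from the centre to the plane equals the radius):

```python
import math

def check_sphere(z0, R2):
    # passes through the circle x^2 + y^2 = 11 in the plane z = 0:
    # plugging z = 0 into the sphere equation gives x^2 + y^2 = R^2 - z0^2
    through_circle = abs((R2 - z0 * z0) - 11) < 1e-12
    # tangent to x + y + z = 5: distance from (0, 0, z0) equals the radius
    dist = abs(z0 - 5) / math.sqrt(3)
    tangent = abs(dist - math.sqrt(R2)) < 1e-12
    return through_circle and tangent

ok1 = check_sphere(-1.0, 12.0)
ok2 = check_sphere(-4.0, 27.0)
```

A sphere through the circle but with some other centre, e.g. $z_0=-2$, $R^2=15$, fails the tangency test, as expected.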
{ "language": "en", "url": "https://math.stackexchange.com/questions/4592308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
First differential in cellular homology How do I compute the first differential of cellular homology in general for a CW complex $X$, without using for example that the first differential is zero if $X$ is path-connected and has one zero cell? (Question: doesn't path connectedness follow from having exactly one zero cell?) By the definition in my lecture, fixing a one cell and a zero cell, we have to consider the map $$ S^0 \rightarrow X_0 \rightarrow X_0 / X_{-1} \cong \bigvee_{\text{$0$-cells}} S^0 \rightarrow S^0 \,. $$ The first map is the attaching map of the one cell, and the last map is the projection corresponding to the fixed zero cell in the wedge sum. The isomorphism to the wedge sum comes in higher dimensions from identifications $D^k / S^{k-1} \cong S^k$. But in our case we would have $D^0 / S^{-1} = \mathrm{pt} \neq S^0$. Then it isn't clear what the map $\mathrm{pt} \rightarrow S^0$ would look like, so I cannot determine the degree of the total map $S^0 \rightarrow S^0$. I don't see how this definition is working out for the first differential or what I can do instead to compute it directly, for example in the case of a $1$-sphere with two $0$-cells and two $1$-cells.
First, we have $S^{-1} = \emptyset$, and by convention (in this context) $X / \emptyset \cong X_+$, i.e., $X$ with a disjoint point added, for any space $X$. Hence, we have $D^0 / S^{-1} \cong S^0$, which is consistent with the same fact in higher dimensions. Second, there is an easy way to read off the differential of $1$-cell in (cellular) homology. In terms of the attaching map $\varphi: S^0 \to X^0$ of a $1$-cell (edge) $e$, write $S^0 \cong \{1, -1\}$. Then, $$de = \varphi(1) - \varphi(-1).$$ In other words, the differential of a $1$-cell is just the signed sum of its two boundary $0$-cells.
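For the example at the end of the question ($S^1$ with two $0$-cells $a,b$ and two $1$-cells, both running from $a$ to $b$), the recipe gives $d(e_i)=b-a$ for both edges, and the chain complex $0\to\mathbb Z^2\xrightarrow{d_1}\mathbb Z^2\to 0$ recovers $H_0\cong H_1\cong\mathbb Z$. A small rank computation over $\mathbb Q$ (no torsion can occur here, since the image of $d_1$ is generated by the primitive vector $b-a$):

```python
from fractions import Fraction

# rows indexed by the 0-cells (a, b), columns by the 1-cells (e1, e2);
# d(e1) = d(e2) = b - a
d1 = [[Fraction(-1), Fraction(-1)],
      [Fraction(1), Fraction(1)]]

def rank(mat):
    """Rank over Q by Gauss-Jordan elimination."""
    m = [row[:] for row in mat]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

r1 = rank(d1)        # rank of d_1
betti0 = 2 - r1      # dim C_0 - rank d_1  (d_0 = 0)
betti1 = 2 - r1      # H_1 = ker d_1, of dimension 2 - rank d_1 (no 2-cells)
```

Both Betti numbers come out as $1$, matching $H_*(S^1)$.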
{ "language": "en", "url": "https://math.stackexchange.com/questions/4592783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $Y=h(X),$ Find $E\{Y\}$ Problem: Let $X: (\Omega, \mathscr{A}) \rightarrow (\mathbb{R},B)$ be a random variable with the uniform distribution $P^X=\frac{1}{2\pi}\mathbb{1}_{\{(0,2\pi)\}}$ on the interval $(0,2\pi),$ and $h:(\mathbb{R},B) \rightarrow (\mathbb{R},B)$ be given by $h(x)=\sin(x).$ Let $Y=h(X)$ Find $E\{Y\}$ Attempt I want to use the expectation rule so: $$E\{Y\}=E\{h(X)\}=\int h(x)P^X(dx)=\int \sin(x) \frac{1}{2\pi}\mathbb{1}_{\{(0,2\pi)\}} (dx)=$$ $$\frac{1}{2\pi} \int_0^{2\pi} \sin(x)dx=0$$ I have tried a few things and this is my "best" attempt.. but it still just doesn't seem right.
Well I think I did find this incomplete. * *What's $\Omega$? $\mathbb R$? [Edit: Ah ok, it's not necessarily $\mathbb R$. I was figuring it out as I was typing this.] * *Also I think it's $\mathbb{1}_{(0,2\pi)}$ not $\mathbb{1}_{\{(0,2\pi)\}}$. $\mathbb{1}_{\{(0,2\pi)\}}$ makes it sound like the interval $(0,2\pi)$ is just one element of a collection of sets instead of a subset of $\mathbb R$, as in $\Omega = \{\{3\}, (0,2\pi), \{4,7\}, [10,16] \cup \{28.7\}\}$ with a uniform discrete distribution on those 5 elements or something. * *Also I'm not sure about writing $\int h(x)P^X(dx)$ over $\Omega$: an integral over $\Omega$ should involve some $\omega$, as in $\int_{\Omega} h(X(\omega))\,\mathbb P(d\omega)$. I believe you're also missing the probability measure for the probability space, so let's say it's $\mathbb P$, s.t. the probability space is $(\Omega, \mathscr A, \mathbb P)$. So I think it's supposed to be $$E\{Y\}=E\{h(X)\}=\int_{\Omega} h(X(\omega))\,\mathbb P(d\omega)=\int_{\mathbb R} h(x)\,P^X(dx)$$ by the change-of-variables formula, and then, since $P^X$ has density $\frac{1}{2\pi}\mathbb{1}_{(0,2\pi)}$ with respect to Lebesgue measure, $$=\int_{\mathbb R} \sin(x)\, \frac{1}{2\pi}\mathbb{1}_{(0,2\pi)}(x)\, dx=\frac{1}{2\pi} \int_{\mathbb R} \sin(x)\, \mathbb{1}_{(0,2\pi)}(x)\, dx=\frac{1}{2\pi} \int_{(0,2\pi)} \sin(x)\, dx=\frac{1}{2\pi} \int_0^{2\pi} \sin(x)\,dx=0$$
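A Monte Carlo sanity check of the result, sampling $X\sim\mathrm{Unif}(0,2\pi)$ and averaging $\sin X$:

```python
import math
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.uniform(0.0, 2.0 * math.pi)
    total += math.sin(x)
estimate = total / N   # should be close to E[sin X] = 0
```

The Monte Carlo standard error is about $\sqrt{1/2}/\sqrt{N}\approx0.0016$, so the estimate lands within a few thousandths of $0$.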
{ "language": "en", "url": "https://math.stackexchange.com/questions/4593091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Relating the two error terms in the prime number theorem I would like to show that the two error terms in the prime number theorem $\pi(x)-\frac{x}{\log{x}}$ and $\psi(x)-x$ are quite similar (or differ by something like a factor of $(\log{x})^{\varepsilon}$). This is my attempt: by partial summation, $$ \pi(x)= \frac{\psi(x)}{\log{x}}+\int_{2}^{x}\frac{\psi(t)}{t\log^2(t)}dt + O(\sqrt{x}). $$ This shows that $$ \pi(x)-\frac{x}{\log{x}} = \frac{\psi(x)-x}{\log{x}}+\int_{2}^{x}\frac{\psi(t)}{t\log^2(t)}dt + O(\sqrt{x}). $$ The problem now is that the order of the integral is bigger than the error term. So I was wondering what would be the way of relating the two errors. Intuitively they should be the same, but I've not been able to prove this formally.
In fact they're not the same! We do expect $\psi(x)-x$ to be very roughly $\sqrt x$ in size (that's the Riemann hypothesis). However, we would only expect something this good for $\pi(x)-\mathop{\rm li}(x)$ where li$(x) = \int^x \frac{dt}{\log t}$ is the logarithmic integral. For $\pi(x)-\frac x{\log x}$ the error term is provably of size $\frac x{(\log x)^2}$.
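To see the two error terms side by side numerically, here is a small sketch of my own in Python: it sieves the primes up to $x$ and approximates the offset integral $\int_2^x \frac{dt}{\log t}$ by a trapezoidal sum (this differs from $\mathrm{li}(x)$ only by the constant $\mathrm{li}(2)\approx 1.045$, which is irrelevant at this scale).

```python
import math

# Compare the two error terms at x = 10**4.
x = 10_000

# Sieve of Eratosthenes to get pi(x).
sieve = [True] * (x + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(x ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
pi_x = sum(sieve)  # pi(10^4) = 1229

# Trapezoidal approximation of integral_2^x dt / log(t).
steps = 200_000
h = (x - 2) / steps
li_x = sum(
    0.5 * h * (1 / math.log(2 + i * h) + 1 / math.log(2 + (i + 1) * h))
    for i in range(steps)
)

err_simple = pi_x - x / math.log(x)   # ~ +143: of order x/(log x)^2
err_li = pi_x - li_x                  # much smaller, as predicted above
print(err_simple, err_li)
```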
{ "language": "en", "url": "https://math.stackexchange.com/questions/4593332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Joint distribution of $\max(X, Z)$ and $\max(Y, Z)$ where $X$, $Y$, $Z$ are independent exponential variables with mean $1$ Assume there are three independent exponential random variables $X$, $Y$, and $Z$ with mean $1$. I am trying to look at the joint distribution of $[U = \max(X, Z), V = \max(Y, Z)]$. I want to find $F_{\, U,V}(u, v)$, which is the cumulative distribution function of the joint distribution $(U,V)$. What I am doing is dividing the possible relations of $X$, $Y$, $Z$ like $X < Y < Z$, $X < Z < Y$, etc. But I'm getting stuck at cases like $u < v$ and $Z < X < Y$. What are some other ways to find the cumulative distribution of this joint distribution?
I like your solution. As you mentioned, $$P(U<V \textrm{ and } Z<X<Y)= P(Z<X<Y) = P(Z=\min(X,Y,Z))P(X<Y) = \frac{1}{3}\cdot \frac{1}{2}$$ because the event $Z<X<Y$ is contained in the event $U<V$. Now, you can consider all 6 cases, $$P(U<V \textrm{ and } Z<X<Y)$$ $$P(U<V \textrm{ and } X<Z<Y)$$ $$P(V<U \textrm{ and } Z<Y<X)$$ $$P(V<U \textrm{ and } Y<Z<X)$$ $$P(U<V \textrm{ and } X<Y<Z)$$ $$P(U<V \textrm{ and } Y<X<Z)$$ Does that make sense? Here is another idea: \begin{align} &P(U<u, V<v, z<Z<z+dz) \\& =P(U<u, V<v |z<Z<z+dz) P(z<Z<z+dz) =\\ &=P(U<u |z<Z<z+dz) P(V<v|z<Z<z+dz) P(z<Z<z+dz) \end{align} Therefore, \begin{align} &P(U<u, V<v, z<Z<z+dz) \\& =P(X<u, Z<u |z<Z<z+dz) P(Y<v, Z<v |z<Z<z+dz) \exp(-z)dz\end{align} If $u>z$ and $v>z$ \begin{align} &P(U<u, V<v, z<Z<z+dz) \\& =(1-\exp(-u) )(1-\exp(-v) ) \exp(-z)dz\end{align} Do the other cases follow similarly?
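Integrating the conditional expression above over $z$ from $0$ to $\min(u,v)$ should give the closed form $F_{U,V}(u,v)=(1-e^{-u})(1-e^{-v})(1-e^{-\min(u,v)})$, which also follows directly from independence of $X$, $Y$, $Z$. Here is a Monte Carlo cross-check (my own Python sketch, using the rate-1 parametrization):

```python
import math
import random

# Monte Carlo check of F_{U,V}(u, v) = P(max(X,Z) <= u, max(Y,Z) <= v)
# for independent Exp(1) variables.  Independence gives the closed form
#   F(u, v) = (1 - e^{-u}) (1 - e^{-v}) (1 - e^{-min(u, v)}).
random.seed(1)
n = 200_000
u, v = 1.0, 2.0

hits = 0
for _ in range(n):
    x, y, z = (random.expovariate(1.0) for _ in range(3))
    if max(x, z) <= u and max(y, z) <= v:
        hits += 1

empirical = hits / n
theory = (1 - math.exp(-u)) * (1 - math.exp(-v)) * (1 - math.exp(-min(u, v)))
print(empirical, theory)  # both near 0.3456
```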
{ "language": "en", "url": "https://math.stackexchange.com/questions/4593484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
need help with expressing uniqueness of elements that have the same property In class we were discussing how to translate the sentence "there's a sucker born every minute". The teacher showed us two examples. 1 (the wrong way): $\exists x [ sucker(x) \land \forall y(minute(y) \rightarrow bornAt(x,y))]$ 2 (the correct way): $\forall y[minute(y) \rightarrow \exists x(sucker(x) \land bornAt(x,y))]$ I wanted to take my understanding further and make a wff that says there's a sucker born every minute and each sucker is also different from the others, without paraphrasing the original sentence. Are any of these a correct way of translating it? $\forall y[minute(y) \rightarrow \exists x(sucker(x) \land bornAt(x,y) \land \forall z (sucker(z) \rightarrow z \neq x))]$ $\forall y[minute(y) \rightarrow \exists x(sucker(x) \land bornAt(x,y) \land \lnot\exists z (sucker(z) \land z=x))]$
Neither of those express what we want them to. $\forall z \mathop. (S(z) \to z \neq x)$ is equivalent to $\lnot \exists z \mathop. (S(z) \land z = x)$. Both mean all suckers $z$ are distinct from $x$ (where $x$ is a free variable denoting a sucker). Those statements are false when $z$ and $x$ refer to the same entity. There are a few properties of suckers and when they're born that you might be talking about by saying that each sucker is different from the others. That could be taken to imply that there's a 1:1 mapping between suckers and minutes. I'm interpreting the sentence as, instead, a consequence of observation that suckers are born at most once and therefore any sucker we find representing some minute can't be a representative of any other minute. What we want to do is express two different facts about suckers: * *$\forall y \in M \mathop. \exists x \in S \mathop. B(x, y)$ -- for every minute $y$, there's a sucker born at $y$. *$\forall y \in S \mathop. \exists_{\le 1} x \mathop. B(x, y)$ -- every sucker is born at most once. This is equivalent to: * *$\forall y \mathop. ( M(y) \to \exists x \mathop. S(x) \land B(x, y))$ *$\forall y \mathop. (S(y) \to \forall a \forall b \mathop. (B(y, a) \land B(y, b) \to a = b)) $
{ "language": "en", "url": "https://math.stackexchange.com/questions/4593733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equivalence of two expressions involving the derivative of the exponential map While working on a problem involving the derivative of the exponential map, I came across an interesting identity that seems to be true but I can't prove it. Here is the identity: $$\frac\partial{\partial A}\mathrm{tr}\left(S\exp(A)\right)=\lim_{t\to0}\frac d{dt}\left[\exp(A+tS)\right]$$ where $S$ is a symmetrical definite-positive $n\times n$ matrix, $A$ is a symmetrical $n\times n$ matrix, and $t\in\mathbb{R}$. The function $\exp$ is the usual matrix exponential. The function $\mathrm{tr}$ is the usual matrix trace. All matrices are restricted to real numbers. Is there a straightforward argument why these two expressions are the same? In this, the matrices $A$ and $S$ are not expected to commute, i.e.: $AS\ne SA$. This implies in particular that $S\exp(A)\ne\exp(A)S$. Using Duhamel's formula, I can write that: $$\lim_{t\to0}\frac d{dt}\left[\exp(A+tS)\right]=\int_0^1\exp(\tau A)S\exp((1-\tau)A)d\tau$$ but I have no idea yet how to simplify the other part of the identity. I can show that: $$\frac\partial{\partial A}\mathrm{tr}(\exp(A))=\exp(A)$$ but adding the (non-commuting) $S$ inside the trace makes a direct generalization difficult.
$ \def\o{{\tt1}}\def\p{\partial} \def\BR#1{\left[#1\right]} \def\LR#1{\left(#1\right)} \def\op#1{\operatorname{#1}} \def\trace#1{\op{Tr}\LR{#1}} \def\qiq{\quad\implies\quad} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\Sk{\sum_{k=\o}^\infty} \def\Sj{\sum_{j=\o}^k} \def\Skj{\Sk\Sj} \def\fracLR#1#2{\LR{\frac{#1}{#2}}} \def\k{\frac{\o}{k!}} \def\c#1{\color{\red}{#1}} $Define the matrix variables $$\eqalign{ B &= A+St \qiq \dot B = S \\ F &= \exp(B) \;=\; I + \Sk\k\:B^k \\ }$$ then use differentials to calculate the derivative of $F$ $$\eqalign{ dF &= \Sk\k\:\c{dB^k} \\ &= \Sk\k\c{\Sj\LR{B^{k-j}\:dB\:B^{j-\o}}} \\ \dot F &= {\Skj\k\LR{B^{k-j}SB^{j-\o}}} \\ \lim_{t\to 0}\dot F &= \Skj\k\LR{A^{k-j}SA^{j-\o}} \\ }$$ Now consider the gradient of the trace expression $$\eqalign{ \phi &= S:e^A \\ &= S:\LR{I+\Sk\k A^k} \\ d\phi &= S:\LR{\Sk\k\:dA^k} \\ &= S:\LR{\Sk\k\Sj A^{k-j}\:dA\:A^{j-\o}} \\ &= \LR{\Skj\k\LR{A^{k-j}SA^{j-\o}}}:dA \\ \grad{\phi}{A} &= \Skj\k\LR{A^{k-j}SA^{j-\o}} \\ }$$ The above derivation uses the Frobenius product, which is a concise notation for the trace $$\eqalign{ A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \trace{A^TB} \\ A:A &= \|A\|^2_F \\ }$$
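The identity can also be checked numerically. Below is a sketch of my own (random symmetric $A$, symmetric positive-definite $S$, a plain Taylor-series matrix exponential, and finite differences), comparing the entrywise gradient of $\phi(A)=\operatorname{tr}(S e^A)$ with the directional derivative $\frac{d}{dt}e^{A+tS}\big|_{t=0}$; the symmetry of $A$ and $S$ is what makes the two series above coincide.

```python
import numpy as np

def expm(M, terms=40):
    # Plain Taylor series for the matrix exponential; adequate for the
    # small, modest-norm matrices used in this check.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2             # symmetric
S = rng.standard_normal((n, n)); S = S @ S.T + n * np.eye(n)   # SPD

phi = lambda X: np.trace(S @ expm(X))

# Left side: entrywise finite-difference gradient of phi at A.
h = 1e-6
G = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = h
        G[i, j] = (phi(A + E) - phi(A - E)) / (2 * h)

# Right side: d/dt expm(A + t S) at t = 0, by a central difference.
t = 1e-6
D = (expm(A + t * S) - expm(A - t * S)) / (2 * t)

print(np.max(np.abs(G - D)))  # small: the two derivatives agree
```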
{ "language": "en", "url": "https://math.stackexchange.com/questions/4593906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
isomorphism between field extensions Assume $F$ is a field, and $E/F$ and $H/F$ are two field extensions. If $E$ is isomorphic to $H$, does there exist an isomorphism $\varphi:E \rightarrow H$ such that $\varphi|_F = Id_F$? I suspect this is false, but I cannot find any counterexamples. If $F = \mathbb{Q}$, then it is obviously true, but what about other fields? I think it may be true if $E/F$ is a finite extension. Could someone give me some references or hints? Thanks.
A rather general way of constructing an example is to take a Galois extension $E/K$ which admits an intermediate field $F$ such that $F/K$ is not Galois. In other words, there is $\varphi \in \operatorname{Gal}(E/K)$ such that $\varphi(F) \ne F$. Now just take $H = E$. For instance you may take $K = \mathbb{Q}$, $E = H = K(\sqrt[3]{2}, \omega)$, where $\omega$ is a primitive third root of unity. $E/K$ is Galois, as $E$ is the splitting field over $K$ of the polynomial $x^{3} - 2$. Let $F = K(\sqrt[3]{2})$. There is an element $\varphi$ of $\operatorname{Gal}(E/K)$ such that $\varphi(\sqrt[3]{2}) = \omega \sqrt[3]{2}$, so that $\varphi(F) = K(\omega \sqrt[3]{2}) \ne F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4594183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that a polynomial $p(x)$ of degree $n$ is equal to $cP_{n}(x)$, where $P_{n}(x)$ is the Legendre polynomial. This is problem $5$ from chapter $45$ on properties of Legendre polynomials from Simmons' book "Differential Equations with Applications and Historical Notes". If $p(x)$ is a polynomial of degree $n ≥ 1$ such that $$\int_{-1}^{1} x^{k}p(x) dx = 0 \\ \text{for} \ k=0,1,2,\cdots,n-1$$ Show that $p(x)=cP_{n}(x)$ for some constant $c$ I gave it a go, trying to use properties of even and odd functions. I suspect that I have to use the orthogonality of Legendre polynomials. Sadly I am stuck and have no idea what to do. I was also thinking about using induction out of desperation, but that seems "wrong" to me in this case.
First of all, you should prove that $$\int_{-1}^1 x^k P_n(x)\ \mathrm dx=0$$ for all $0\le k<n$. Then, notice instead that $$\int_{-1}^1 x^n P_n(x)\ \mathrm dx\neq0$$ and call $C$ that value. Now, define your $c$ as $$c:=\frac1C\int_{-1}^1 x^n p(x)\ \mathrm dx$$ You proved that, for any $0\le k\le n$, it holds $$\int_{-1}^1 x^k (p(x)-cP_n(x))\ \mathrm dx=0$$ but now, by linearity, for ANY polynomial $q$ of degree $\le n$ it holds $$\int_{-1}^1 q(x) (p(x)-cP_n(x))\ \mathrm dx=0$$ But now, $p(x)-cP_n(x)$ is a polynomial of degree $\le n$, and so $$\int_{-1}^1 (p(x)-cP_n(x))^2\ \mathrm dx=0$$ and the only possibility is $p(x)-cP_n(x)=0$.
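The two facts used above — $\int_{-1}^1 x^k P_n\,dx = 0$ for $k<n$ and $\neq 0$ for $k=n$ — can be verified in exact arithmetic. A Python sketch of my own, building $P_n$ from the standard three-term recurrence:

```python
from fractions import Fraction

# Legendre polynomials as exact rational coefficient lists, via the
# recurrence (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x).
def legendre_coeffs(n):
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]   # P_0 = 1, P_1 = x
    for k in range(1, n):
        a = [Fraction(0)] + P[k]                      # x * P_k
        b = P[k - 1] + [Fraction(0)] * 2              # pad P_{k-1}
        nxt = [(Fraction(2 * k + 1) * ai - Fraction(k) * bi) / (k + 1)
               for ai, bi in zip(a, b)]
        P.append(nxt)
    return P[n]

def integral_xk_Pn(k, n):
    # integral over [-1, 1] of x^k P_n(x), using int_{-1}^1 x^m dx.
    total = Fraction(0)
    for m, c in enumerate(legendre_coeffs(n)):
        e = m + k
        if e % 2 == 0:
            total += c * Fraction(2, e + 1)
    return total

n = 5
moments = [integral_xk_Pn(k, n) for k in range(n)]
print(moments)                 # all zero for k < n
print(integral_xk_Pn(n, n))    # nonzero: this is the constant C above
```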
{ "language": "en", "url": "https://math.stackexchange.com/questions/4594316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $x=a$ is not a function, nor an equation—as in its graph is not the set of all points which satisfy that equation—, then what is it? In precalculus, I was introduced to conic sections and their equations. I learned how parabolas, for example, aren't always formed from quadratic functions and how they're oftentimes better described with the use of equations. A parabola is then better defined as the set of all points which satisfy either of the two following equations:$$(y-k)^2 = 4p(x-h)$$ $$or$$ $$(x-h)^2 = 4p(y-k)$$ That being the case, what exactly is $x=a$? It's not a function because it's not 'one-to-one' and it's not the set of all points which satisfy that equation because if that were the case then its graph would be a single coordinate-point, $(x, a)$. If it's not a function, nor an equation, then what is it? Why is $x=a$ graphed as a vertical line, and likewise, why is $y=b$ graphed as a horizontal line?
In fact $x=a$ is very much an equation, just as much as $y=a$. The graph is a vertical line. You can write $x=a$ as $0×y + x = a$ if you prefer to see both variables participate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4594699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is there a noncommutative ring with prime characteristic? Is there a noncommutative ring with prime characteristic? My first answer is “no”, because $(R, +)$ is an abelian group, and I think that the characteristic is related to (or the same as) the order of the group $(R, +)$, i.e., every ring with prime characteristic is either $\mathbb{Z}_p$ or something related to $\mathbb{Z}_p$. But I'm not able to prove it, or give a counterexample. Where am I wrong?
I think that the characteristic is related to (or the same as) the order of the group $(R,+)$ Not really, unless $R$ is finite, when by Lagrange's theorem the maximum order of an element must divide $|R|$. $|R|$ can easily be infinite while having positive characteristic, for example. The connection is not strong. i.e., every ring with prime characteristic is either $\mathbb{Z}_p$ or something related to $\mathbb{Z}_p.$ The only relationship is that a ring (with identity) with characteristic $p$ contains a copy of $\mathbb{Z}_p$. This is true for $p$ not prime as well. It is just the subring generated by the identity $1$. Even when $R$ is finite and has positive characteristic, it is completely possible for it to be noncommutative: a small example would be the upper triangular matrices over the field of two elements. It has eight elements, is noncommutative, and has characteristic $2$.
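The closing example can be checked by brute force; a small Python sketch of my own, enumerating the eight upper-triangular matrices over $\mathbb{F}_2$:

```python
from itertools import product

# The ring of 2x2 upper-triangular matrices over F_2: eight elements,
# characteristic 2, yet noncommutative.
def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def add(M, N):
    return tuple(tuple((M[i][j] + N[i][j]) % 2 for j in range(2)) for i in range(2))

ring = [((a, b), (0, c)) for a, b, c in product(range(2), repeat=3)]
zero = ((0, 0), (0, 0))

char2 = all(add(M, M) == zero for M in ring)                    # x + x = 0
noncommutative = any(mul(M, N) != mul(N, M) for M in ring for N in ring)
print(len(ring), char2, noncommutative)  # 8 True True
```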
{ "language": "en", "url": "https://math.stackexchange.com/questions/4595075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finitely generated modules in short exact sequences Let $0\to A\to B\to C\to 0$ be a short exact sequence of $R$-modules where $R$ is a ring. * *Show that if $B$ is finitely generated, then $A$ and $C$ don't have to be finitely generated. *If $R$ is a PID and $B$ is finitely generated, then are $A$ and $C$ finitely generated? I know that if $A$ and $C$ are finitely generated, then $B$ is also finitely generated. It would be nice if this statement were an iff statement, but it isn't; I suppose, though, that it is true if $R$ is a PID. For the first question I thought of finding a counterexample, but I can't come up with one. And for the second question I can't even see why it would be true by just knowing that $R$ is a PID (it might not be true though). Any help is greatly appreciated.
You can take $R$ to be a non-Noetherian ring, for example $\mathbb{R}[X_1,X_2,\dots]$. Then $R$ is obviously a finitely generated module over $R$, but $I=\langle X_1, X_2,\dots\rangle$ is not a finitely generated module over $R$. You get the following exact sequence $0\to I\to R\to R/I\to 0$, with $R$ finitely generated over $R$ but $I$ isn't. Note that as noted in the comments, $R/I$ would necessarily be finitely generated over $R$. Remark: Every non-Noetherian ring would do, as a ring is Noetherian iff every submodule of finitely generated module is finitely generated. This gives you that in the case of $2$, $A$ and $C$ must be finitely generated, as $A$ can be considered as a submodule of the finitely generated module $B$, and since being PID implies being Noetherian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4595340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Chess sum symmetry for which natural numbers? The entries of an $n\times n$ chess table are black and white. For which natural numbers $n$ can we fill the table entries with numbers $\{1,2,...,n\}$ where: * *In each row, we have all of $\{1,2,...,n\}$, and the sum of the black entries equals the sum of the white entries. *In each column, we have all of $\{1,2,...,n\}$, and the sum of the black entries equals the sum of the white entries. My attempt: $1+2+...+n$ should be even, so $4|n(n+1)$.
We can build such a table if and only if $\mathbf{4 | n}$. Proving $4 | n$ works Here's a diagram that helped me see what's going on here: As that diagram shows, it can be useful to break the chessboard down into 4 colors instead of 2. (Then just think of red/blue cells as secretly white and green/yellow cells as black.) Notice how the red cells form a (spaced out) $\frac n 2 \times \frac n 2$ grid, and likewise for the other colors. Requiring the sum of whites vs blacks to be equal in each row/column is the same as requiring that the row/column sums of the $\frac n 2 \times \frac n 2$ single-color matrices are all equal. We assumed $4 | n$, so we can divide the numbers $\{1, 2, \cdots, n\}$ into two piles $A$ and $B$ such that $|A| = |B| = \frac n 2$ and $\text{sum}(A) = \text{sum}(B) = \frac{n(n+1)}{4}$. Now fill the red $\frac n 2 \times \frac n 2$ submatrix with the elements of $A$ in any Latin square configuration, and do the same with the blue $\frac n 2 \times \frac n 2$ submatrix. Next, fill the green and yellow submatrices with a Latin square using $B$'s elements. The resulting matrix satisfies all required conditions. Proving other values of $n$ don't work As already stated in the question, $n \equiv 1 \text{ or } 2 \mod 4$ are doomed from the start because $\frac{n(n+1)}{2}$ will be odd. So we only need to worry about $n \equiv 3 \mod 4$. We reuse the same "4 colors" idea from the proof above. This time $n$ is odd, so the 4 colored subgrids will have different sizes: Now, let's compute the total value across all red+yellow cells in two different ways. First: Every column in the grid has a sum of $\frac{n(n+1)}{2}$. There are $\frac{n+1}{2}$ red/yellow columns, so the total of all numbers in red and yellow spaces is $\left(\frac{n(n+1)}{2} \right) \left( \frac{n+1}{2}\right) = \frac{n(n+1)^2}{4}$. Second: In each row, the red or yellow cells must contribute exactly half of the row total, or $\frac{n(n+1)}{4}$. 
There are $n$ rows total, so the sum over all red and yellow spaces is $\frac{n^2 (n+1)}{4}$. At this point we've derived two conflicting formulas for the sum across all red and yellow squares, so we conclude it's not possible to satisfy the listed requirements in this case.
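The construction in the first half of the answer can be carried out programmatically. Here is a Python sketch of my own for $n=8$; the split of $\{1,\dots,n\}$ into the two half-sets and the choice of cyclic Latin squares are arbitrary.

```python
# Construct the n x n table described above for n = 8.  Cells are colored
# by parity of i + j; white cells get values from half-set A and black
# cells from half-set B, each laid out as a cyclic Latin square over the
# n/2 x n/2 single-color subgrid.
n = 8
pairs = [(k, n + 1 - k) for k in range(1, n // 2 + 1)]   # each pair sums to n+1
half = [x for p in pairs[: n // 4] for x in p]           # set A, sum n(n+1)/4
other = [x for p in pairs[n // 4 :] for x in p]          # set B, sum n(n+1)/4

def latin(vals, r, c):
    return vals[(r + c) % len(vals)]

board = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        vals = half if (i + j) % 2 == 0 else other
        board[i][j] = latin(vals, i // 2, j // 2)

for row in board:
    print(row)
```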
{ "language": "en", "url": "https://math.stackexchange.com/questions/4595518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding The Fundamental Theorem of Calculus, Part 1 The First Part of the Fundamental Theorem of Calculus says that "the derivative of a definite integral with respect to its upper limit is the integrand evaluated at the upper limit." So it means that the antiderivative of the integrand evaluated at the upper limit is its integral? Can you please explain to me how it works? So it says that for $$g(x) = \int_a^x f(t) dt$$ $$\frac{d}{dx}g(x) = f(x)$$ So does it mean that we can find an integral from $a$ to $x$ by finding an antiderivative of $f(x)$? If so, where does $a$ go?
You asked whether it means: ...we can find an integral from $a$ to $x$ by finding an antiderivative of $f(x)$? No, this is not what it means. What it means is exactly the opposite: ... we can find an antiderivative of $f(x)$ by finding an integral from $a$ to $x$. You can tell that's what the theorem says if you pay attention not only to the conclusion of the theorem, but also to its hypothesis. To add to the excellent comment of @geetha290krm, it's a bad idea to state a theorem in math avoiding its hypothesis. The hypothesis of this theorem is: Let $f : [a,b] \to \mathbb R$ be a continuous function. Right away you can see: you already have the function $f$ in your hand. Your goal is to find an antiderivative of it. And the way you find that antiderivative is given in the conclusion of the theorem: For $g(x) = \int_a^x f(t) \, dt$, we have $\frac{d}{dx} g(x) = f(x)$.
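A numerical illustration (my own sketch): build $g$ as a trapezoidal approximation of $\int_a^x f$ for $f=\cos$ and differentiate it numerically. The result recovers $f(x)$ for any choice of $a$ — which is "where $a$ goes": it only shifts $g$ by a constant, and constants vanish under differentiation.

```python
import math

# g(x) = integral_a^x f(t) dt, approximated by a trapezoidal sum.
f = math.cos

def g(x, a, steps=20_000):
    h = (x - a) / steps
    return sum(0.5 * h * (f(a + i * h) + f(a + (i + 1) * h)) for i in range(steps))

# Central-difference derivative of g at x, for two different lower limits.
x, dx = 1.3, 1e-4
deriv1 = (g(x + dx, 0.2) - g(x - dx, 0.2)) / (2 * dx)
deriv2 = (g(x + dx, -1.0) - g(x - dx, -1.0)) / (2 * dx)
print(deriv1, deriv2, f(x))  # all approximately cos(1.3)
```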
{ "language": "en", "url": "https://math.stackexchange.com/questions/4595698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Japanese Temple Geometry Problem: Radii of inner circles inside quarter arcs I was able to get the equation for the radius of the larger circle but couldn't work out the smaller one. Source: wu riddles
Since you've already shown how to determine the radius of the larger circle, here's a method for calculating the smaller circle's radius. First, below is your diagram with a few lines, points and line lengths added to it: Note the tangent line at $G$ to the smaller circle, and the one for the circular arc with center at $A$, are the same. Thus, the perpendicular line to that tangent from $G$ to $C$ (the center of the smaller circle), and from $G$ to $A$, are colinear (i.e., the two circle's tangent point's normal line goes through their centers), so $\lvert AC\rvert = a + r$. Similarly, the line joining $B$ to $F$ goes through $C$, thus $\lvert CF\rvert = a - r$. With $CE$ being perpendicular to $AF$, then $\triangle AEC$ and $\triangle FEC$ are both right-angled triangles. Thus, using the Pythagorean Theorem to relate the squares of the various triangle side lengths to the square of the length of their common side of $CE$ gives $$\begin{equation}\begin{aligned} \lvert AC\rvert^2 - \lvert AE\rvert^2 & = \lvert CF\rvert^2 - \lvert EF\rvert^2 \\ (a+r)^2 - (a-r)^2 & = (a-r)^2 - r^2 \\ (a^2 + 2ar + r^2) - (a^2 - 2ar + r^2) & = (a^2 - 2ar + r^2) - r^2 \\ 4ar & = a^2 - 2ar \\ 6r & = a \\ r & = \frac{a}{6} \end{aligned}\end{equation}$$
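As a sanity check of the algebra (my own addition), the key relation from the two right triangles $AEC$ and $FEC$ — equal squared lengths of the shared side $CE$ — holds identically once $r=a/6$ is substituted:

```python
from fractions import Fraction

# Exact-arithmetic check of (a + r)^2 - (a - r)^2 = (a - r)^2 - r^2
# with r = a/6; the relation simplifies to 6r = a, so r = a/6 is the
# unique positive solution for a > 0.
ok = True
for a in (Fraction(1), Fraction(6), Fraction(7, 3)):
    r = a / 6
    ok &= ((a + r) ** 2 - (a - r) ** 2 == (a - r) ** 2 - r ** 2)
print(ok)  # True
```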
{ "language": "en", "url": "https://math.stackexchange.com/questions/4595910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
doubt about probability solution with $2$ girls and $2$ boys selection Q: A couple decides to have $4$ children. If they succeed in having $4$ children and each child is equally likely to be a boy or a girl, what is the probability that they will have exactly $2$ girls and $2$ boys? Explanation of the solution: Total number of ways of having $4$ kids is $1/2 \cdot 1/2 \cdot 1/2 \cdot 1/2 = 1/16$. Total number of ways of having exactly $2$ girls and $2$ boys is: First, count the $2$ girls and $2$ boys as $2$ girls glued together as one and $2$ boys glued together as one so there are $2!$ ways to move them. Second, count the number of ways the $2$ girls can be moved within themselves $= 2!$ Third, count the number of ways the $2$ boys can be moved within themselves $= 2!$ So total is $2! + 2! + 2! = 6$ So the probability of then is $6/16$ or $3/8$. What about GIRL, BOY, GIRL, BOY and BOY, GIRL, BOY, GIRL?? I am unable to understand where that was counted?
Solution : We know, total number of outcomes that fulfill the requirement Probability = ------------------------------------------------------ total number of all possible outcomes In the case we want the probability of a girl being born ; among all possible outcomes of (girl, boy) our desired outcome is the birth of a girl. So total number of outcomes that fulfill the requirement = 1 while total number of outcomes = 2 So ; Probability of Girl Birth P(G) = 1/2 ; Similarly ; Probability of Boy Birth P(B) = 1/2 Now consider a case where the first birth is a girl ; second birth is a girl ; third birth is a boy ; fourth birth is a boy ; let the above case be represented as GGBB. Probability of GGBB is given by : P(GGBB) = P(G) x P(G) x P(B) x P(B) = 1/16 GGBB is just one of many possible outcomes where 2 girls and 2 boys are born in 4 births. There are other combinations of births that can give 2 girls and 2 boys, including GGBB. They are : GGBB BBGG BGBG BGGB GBBG GBGB where each has probability 1/16. So probability of (2 girls and 2 boys) = sum of the probabilities of the cases that result in the birth of two boys and two girls = P(GGBB)+P(BBGG)+P(BGBG)+P(BGGB)+P(GBBG)+P(GBGB) = 1/16 + 1/16 + 1/16 + 1/16 + 1/16 + 1/16 = 6/16 So basically, probability of (2 girls and 2 boys), i.e. P(2G2B) = total number of ways 2 girls and 2 boys can be born * probability of any one case where 2 boys and 2 girls are born = 6 * P(GGBB) So if we can find the total no. of ways 2 girls and 2 boys can be born, then we can find P(exactly 2G2B) without having to list all the ways 2 girls and 2 boys could be born. How do you find the total no. of ways 2 girls and 2 boys can be born? We use the formula nCk = n!/(k!(n-k)!) . Understanding nCk before using it : nCk gives the total no. of ways we can arrange a total of n objects into combinations of k, where the ordering of the objects within a combination doesn't matter. Meaning if we have 5 cars - A B C D E, 5C3 gives 10.
Meaning if we select and order the cars in groups of 3: ABC ABD ABE ACD ACE ADE BCD BCE BDE CDE total = 10 And "the ordering of the objects within a combination doesn't matter" means we didn't include variations of combinations. E.g. variations of the first combination ABC are BCA, ACB, CAB and so on; they're considered the same as ABC. So nCk gives the total number of combinations of k possible from n total objects. Now how does this help to find the total no. of ways 2 girls and 2 boys can be born? Consider " 1 2 3 4 " as 1st-birth 2nd-birth 3rd-birth and 4th-birth. Let us use 4C2 on the above ; meaning 4C2 gives the total no. of ways the objects 1 2 3 4 can be arranged in pairs of 2. I.e. 1 2 1 3 1 4 2 3 2 4 3 4 so 4C2 = 6 Also, indirectly the above combinations I wrote give you all the positions at which a boy could be born ; 1 2 or 1 3 or 1 4 or 2 3 or 2 4 or 3 4 so automatically the remaining positions become girl births. So the combinations could be - BBGG BGBG BGGB GBBG GBGB GGBB Or, vice versa, these positions could be used for the birth of a girl, so automatically the remaining positions become boy births. So indirectly 4C2 gave us the total no. of ways 2 girls and 2 boys could be born. So the way of solving the question without having to list all the ways 2 girls and 2 boys could be born is = 4C2 * P(GGBB) = 6 * 1/16 = 6/16 [Soln]
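The count can be confirmed by direct enumeration; a short Python sketch of my own:

```python
from itertools import product
from math import comb
from fractions import Fraction

# Enumerate all 2^4 equally likely birth sequences and count those with
# exactly two girls; compare with the binomial-coefficient shortcut 4C2/16.
sequences = list(product("GB", repeat=4))
favourable = [s for s in sequences if s.count("G") == 2]

p_enum = Fraction(len(favourable), len(sequences))
p_comb = Fraction(comb(4, 2), 2 ** 4)
print(sorted("".join(s) for s in favourable))  # the six orderings listed above
print(p_enum, p_comb)                          # both 3/8
```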
{ "language": "en", "url": "https://math.stackexchange.com/questions/4596165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Does the forgetful functor from $\mathbf{VEC}_{\mathbf{R}}$ to $\mathbf{SET}$ have right adjoint? Let $\mathbf{SET}$ be a category of sets, and $\mathbf{VEC}_{K}$ be a category of vector spaces over a field $K$. In my course on Category Theory we discussed the concept of adjoint functor. For example, we have constructed a functor $F$ that is left adjoint to the forgetful functor from $\mathbf{VEC}_{K}$ to $\mathbf{SET}$. Indeed, it is enough to take a functor that sends every set $A$ to some vector space with a basis $A$. But what about the right adjoint functor to this forgetful functor in the case of $K$ equal to the field of real numbers $\mathbf{R}$? After several unsuccessful attempts to come up with such a functor, it began to seem to me that such a functor does not exist at all. Is it so? If so, why? If not, then how to build such a functor? Any hints or advices would really help me, thank you!
If the functor forgetful functor $F$ were to admit a right adjoint, then it would preserve initial objects. Therefore, it would need to map the zero vector space to the empty set. This is clearly not the case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4596416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding the laurent expansion of $\frac{z}{(z-1)(z-2)}$ I want to find the Laurent series for $\frac{z}{(z-1)(z-2)}$ in the region $1 < |z| < 2$. This implies that $\frac{1}{|z|} < 1$, so noticing that $(z-1) = z(1 - \frac 1 z)$ I can rewrite the desired function as $$\frac{z}{z(1- \frac 1 z)(z-2)} = \frac{1}{z-2} \cdot \frac{1}{1 - \frac 1 z}.$$ Now using the definition of the geometric series I rewrite it as $$\sum_{k=0}^{\infty} \frac{1}{z^k(z-2)}$$ Is this the right Laurent series, and if not, where did I go wrong? Please note I am trying to understand where I made a mistake, not simply finding any solution. I've read a similar question at Finding the Laurent series of $f(z)=1/((z-1)(z-2))$ and one answer uses the fact that $\frac{|z|}2 < 1$, but I am not sure if my method is also valid.
Is this the right Laurent series, and if not, where did I go wrong? The condition $1<|z|<2$ itself is sufficient to tell that each term of the Laurent series expansion must be a (negative or non-negative) power of $z$, because the annular region $r<|z-z_0|<R$ between two concentric circles has center $z_0$, which is also the center of the series; in your case it is $0$. Your sum $\sum_{k=0}^{\infty} \frac{1}{z^k(z-2)}$ is not of that form, so although it converges to the function, it is not a Laurent series. The correct expansion is obtained by using partial fractions, as displayed in other answers.
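For reference, the partial-fraction route can be tested numerically; in the sketch below (my own code) $-1/(z-1)$ contributes the negative powers and $2/(z-2)$ the non-negative ones:

```python
# Partial fractions give z/((z-1)(z-2)) = -1/(z-1) + 2/(z-2).  In the
# annulus 1 < |z| < 2 these expand as
#   -1/(z-1) = -sum_{m>=1} z^{-m}     (negative powers, since |1/z| < 1)
#    2/(z-2) = -sum_{k>=0} (z/2)^k    (non-negative powers, since |z/2| < 1).
def f(z):
    return z / ((z - 1) * (z - 2))

def laurent(z, K=200):
    neg = -sum(z ** (-m) for m in range(1, K + 1))
    pos = -sum((z / 2) ** k for k in range(K + 1))
    return neg + pos

# Sample points inside the annulus 1 < |z| < 2.
for z in (1.5, 1.2 + 0.5j, -1.7):
    print(z, f(z), laurent(z))  # the two values agree
```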
{ "language": "en", "url": "https://math.stackexchange.com/questions/4596764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Finding $x_1x_2+x_1x_3+x_2x_4+x_3x_4$ without explicitly finding the roots of $x^4-2x^3-3x^2+4x-1=0$ The equation $x^4-2x^3-3x^2+4x-1=0$ has $4$ distinct real roots $x_1,x_2,x_3,x_4$ such that $x_1\lt x_2\lt x_3\lt x_4$ and product of $2$ roots is unity, then find the value of $x_1x_2+x_1x_3+x_2x_4+x_3x_4$ This question has an answer on this link but I am trying to solve it without explicitly finding the roots because the question tells us that the product of $2$ roots is unity. I want to use it. My Approach: Using Descartes rule, I can see that there is one negative root and three positive roots. Also, at $x=0, 1, -1$, the value of the polynomial is negative. Thus, $x_1\lt-1, x_4\gt1$ and $x_2,x_3$ lies between $0$ to $1$. Thus, I am concluding that $x_2x_4=1$ and $x_1x_3=-1$ (because product of roots is $-1$) How to conclusively reject the case $x_3x_4=1$? For $\alpha\gt1, \beta\gt1$, $x_1=-\beta, x_2=\frac1\alpha, x_3=\frac1\beta, x_4=\alpha$ Sum of roots$=-\beta+\frac1\alpha+\frac1\beta+\alpha=2\implies\frac1\beta-\beta=2-(\alpha+\frac1\alpha)$ Sum of product of roots taken $3$ at a time$=-\frac1\alpha+\frac1\beta-\alpha-\beta=-4\implies\frac1\beta-\beta=-4+(\alpha+\frac1\alpha)$ Therefore, $\alpha+\frac1\alpha=3, \frac1\beta-\beta=-1$ Multiplying these two, $\frac\alpha\beta-\alpha\beta+\frac1{\alpha\beta}-\frac\beta\alpha=-3$ The question asks us to find $\frac\alpha\beta-\frac\beta\alpha$, that means $-3+\alpha\beta-\frac1{\alpha\beta}$ Can we conclude this approach?
For $ \ f(x) \ = \ x^4 - 2x^3 - 3x^2 + 4x-1 \ \ , \ $ the "depressed" polynomial is $ \ f \left(x + \frac12 \right) $ $ \ = \ f(y) \ = \ y^4 - \frac92·y^2 + \frac{1}{16} \ \ , \ $ which, as an even function, has symmetrically-arranged zeroes given by $ \ y^2 \ = \ \frac14·( \ 9 \pm \sqrt{80} \ ) \ \ . \ $ We can approximate the locations as $$ y^2 \ \ = \ \ \frac14·\left( \ 9 \ \pm \ 9·\sqrt{1 \ - \ \frac{1}{81}} \ \right) \ \ \approx \ \ \frac94·\left( \ 1 \ \pm \ \left[ \ 1 \ - \ \frac{1}{2·81} \ \right] \ \right) \ \ \approx \ \ \frac{1}{8·9} \ \ , \ \ \frac92 \ \ . $$ The four zeroes of $ \ f(x) \ $ are thus estimated by $$ x_1 \ \approx \ \frac12 - \frac{3}{\sqrt2} \ < \ 0 \ \ \ , \ \ \ x_2 \ \approx \ \frac12 - \frac{1}{6\sqrt2} \ \ \ , \ \ \ x_3 \ \approx \ \frac12 + \frac{1}{6\sqrt2} \ \ \ , \ \ \ x_4 \ \approx \ \frac12 + \frac{3}{\sqrt2} \ > \ 2 \ \ . $$ What is clear from this is that $ \ x_3·x_4 \ $ cannot be equal to $ \ 1 \ $ and that the only product of two zeroes than can be is $ \ \mathbf{x_2}·x_4 \ \ . \ $ [The exact values are in fact $ \ -\phi \ \ , \ \ 2 - \phi \ \ , \ \ \frac{1}{\phi} \ = \ \phi - 1 \ \ , \ \ \phi + 1 \ \ , \ $ but we don't need to know that in order to resolve this particular issue.] To return to the main question, if we label the four zeroes of $ \ f \left(x + \frac12 \right) \ $ as $ \ -\gamma \ , \ -\delta \ , \ +\delta \ , \ +\gamma \ \ , \ $ then their product is $ \ \gamma^2·\delta^2 \ = \ \frac{1}{16} \ \Rightarrow \ \gamma·\delta \ = \ \frac14 \ \ . \ $ (The estimates above actually happen to fit this nicely.) We may then express the zeroes of $ \ f(x) \ $ as (using your notation) $$ x_1 \ \ = \ \ \frac12 \ - \ \gamma \ \ = \ \ -\beta \ \ \ , \ \ \ x_2 \ \ = \ \ \frac12 \ - \ \frac{1}{4 \ \gamma} \ \ = \ \ \frac{1}{\alpha} \ \ \ , $$ $$ x_3 \ \ = \ \ \frac12 \ + \ \frac{1}{4 \ \gamma} \ \ = \ \ \frac{1}{\beta} \ \ \ , \ \ \ x_4 \ \ = \ \ \frac12 \ + \ \gamma \ \ = \ \ \alpha \ \ . 
$$ We can establish from these results that $$ \alpha·\frac{1}{\alpha} \ \ = \ \ \left( \ \gamma \ + \ \frac12 \ \right)·\left( \ \frac12 \ - \ \frac{1}{4 \ \gamma} \ \right) \ \ = \ \ \frac{4·\gamma^2 \ - \ 1}{8 \ \gamma} \ \ = \ \ 1 $$ $$ \Rightarrow \ \ 4·\gamma^2 \ - \ 8·\gamma \ - \ 1 \ \ = \ \ 0 \ \ \Rightarrow \ \ \gamma \ \ = \ \ \frac{8 \ + \ \sqrt{64 \ + \ 16}}{8} \ \ = \ \ 1 \ + \ \frac{ \sqrt5}{2} \ \ , $$ where we have retained only the positive solution (we obtain the same result from $ \ \beta·\frac{1}{\beta} \ \ ) $ $$ \Rightarrow \ \ \alpha \ \ = \ \ \frac{3 + \sqrt5}{2} \ \ ( \ = \ x_4 \ ) \ \ \ , \ \ \ \beta \ \ = \ \ \frac{1 + \sqrt5}{2} \ \ ( \ = \ -x_1 \ ) \ \ $$ $$ \Rightarrow \ \ \alpha·\beta \ \ = \ \ \frac{8 \ + \ 4\sqrt5}{4} \ \ = \ \ \ 2 \ + \ \sqrt5 \ \ \Rightarrow \ \ \frac{1}{\alpha·\beta} \ \ = \ \ \frac{ 2 \ - \ \sqrt5 }{4 \ - \ 5} \ \ = \ \ \sqrt5 \ - \ 2 $$ $$ \Rightarrow \ \ \alpha·\beta \ - \ \frac{1}{\alpha·\beta} \ \ = \ \ 4 \ \ \Rightarrow \ \ x_1 x_2 \ + \ x_1 x_3 \ + \ x_2 x_4 \ + \ x_3 x_4 \ \ = \ \ -3 \ + \ 4 \ \ = \ \ 1 \ \ . $$ [Using the exact value of the zeroes written above in terms of the Golden Ratio $ \ \phi \ \ , \ $ we verify that $$ (-2 \phi \ + \ \phi^2) \ + \ (-\phi^2 \ + \ \phi) \ + \ (2 \phi \ - \ \phi^2 \ + \ 2 \ - \ \phi) \ + \ (\phi^2 \ - \ 1) \ \ = \ \ 1 \ \ . \ ] $$ So while we have succeeded in finding the value of the sum of the specified pair-products of zeroes without explicitly determining all of the zeroes, it appears to be necessary to characterize those zeroes to a certain extent to evaluate the sum by your approach.
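A quick numerical confirmation of the final value, using the closed-form zeros in terms of the golden ratio quoted above (my own Python sketch):

```python
import math

# Zeros of x^4 - 2x^3 - 3x^2 + 4x - 1 per the answer:
# x1 = -phi, x2 = 2 - phi, x3 = phi - 1, x4 = phi + 1.
phi = (1 + math.sqrt(5)) / 2
x1, x2, x3, x4 = sorted([-phi, 2 - phi, phi - 1, phi + 1])

quartic = lambda x: x ** 4 - 2 * x ** 3 - 3 * x ** 2 + 4 * x - 1
residuals = [abs(quartic(x)) for x in (x1, x2, x3, x4)]

unit_product = x2 * x4                               # the unit product: 1
target_sum = x1 * x2 + x1 * x3 + x2 * x4 + x3 * x4   # the requested sum: 1
print(residuals, unit_product, target_sum)
```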
{ "language": "en", "url": "https://math.stackexchange.com/questions/4597005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
One of f and g has to be injective for g ◦ f to be injective. Is this a counter-example? In this example that I have created, f and g are both surjective and total (total is a requirement for both f and g in this question) and the composite is bijective, meaning it is injective. Is this a counter-example? Context: I'm studying first-year CS in the UK and one of the modules is discrete maths
Suppose $f$ is not injective. Then there exist some $x_1$ and $x_2$ with $x_1\neq x_2$ such that $f(x_1)=f(x_2)$. It follows then that $g(f(x_1))=g(f(x_2))$ and so by definition $(g\circ f)(x_1)=(g\circ f)(x_2)$ and therefore $(g\circ f)$ is also not injective. That is to say, if $g\circ f$ is injective then $f$ must also be injective.
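If it helps to see this with concrete (made-up) finite functions: $g\circ f$ can be injective even when $g$ is not, but $f$ itself is forced to be injective, exactly as argued above.

```python
# Finite functions modelled as dicts; compose(g, f) applies f first.
f = {1: 'a', 2: 'b'}                # injective
g = {'a': 10, 'b': 20, 'c': 10}     # NOT injective: g('a') == g('c')

def compose(g, f):
    return {x: g[f[x]] for x in f}

def is_injective(h):
    return len(set(h.values())) == len(h)

gf = compose(g, f)
assert is_injective(gf)         # g∘f is injective...
assert not is_injective(g)      # ...even though g is not.

# Conversely, if f is not injective, g∘f cannot be:
f2 = {1: 'a', 2: 'a'}           # f2(1) == f2(2)
assert not is_injective(compose(g, f2))
```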
{ "language": "en", "url": "https://math.stackexchange.com/questions/4597173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A group of $2\times 2$-matrices which is isomorphic to $\left(\mathcal{O}/24\mathcal{O}\right)^{\times}$ where $\mathcal{O}=\mathbb{Z}[\sqrt{-3}]$ Is there a subgroup $G\leq\mathrm{Gl}_2(\mathbb{Z}_n)$ such that $G\cong\left(\mathcal{O}/n\mathcal{O}\right)^{\times}$ where $\mathcal{O}=\mathbb{Z}[\sqrt{-3}]$ and, for a ring $R$, $R^{\times}$ denotes the group of units of $R$? If yes (which intuitively seems to be the case), how does one construct it? Some context: I am particularly looking for a group of $2\times 2$-matrices which is isomorphic to $\left(\mathcal{O}/24\mathcal{O}\right)^{\times}$. I would very much appreciate it if somebody could point it out to me.
Let $M$ be any $2$-by-$2$ matrix with entries in $\mathbb{Z}_n$ for which $M^2=-3I$, but for which $M$ and $I$ do not have a nonzero common scalar multiple. Then, there is a map $\left(\mathcal{O}/n\mathcal{O}\right)^{\times} \to \mathrm{Gl}_2(\mathbb{Z}_n)$ that sends the coset of $a+b\sqrt{-3}$ to $aI+bM$. Using the fact that $M^2=-3I$, this map is easily seen to be a well-defined group homomorphism. It is also injective, since if $aI+bM=I$, then $bM=(1-a)I$ is a common scalar multiple of $M$ and $I$, so $a \equiv 1 \pmod n$ (since $1-a \equiv 0 \pmod n$) and $b \equiv 0 \pmod n$, which means that $a+b\sqrt{-3} \equiv 1 \pmod n$. So, the image is a subgroup of $\mathrm{Gl}_2(\mathbb{Z}_n)$ isomorphic to $\left(\mathcal{O}/n\mathcal{O}\right)^{\times}$. Such an $M$ is easy to construct for any $n$ (namely, consider $\begin{pmatrix}1&1\\-4&-1\end{pmatrix}$).
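A quick computational check of this construction for $n=24$ (the random sampling is just for testing): we verify $M^2 \equiv -3I \pmod{24}$ and that $a+b\sqrt{-3} \mapsto aI+bM$ respects multiplication, i.e. the matrix product of $aI+bM$ and $cI+dM$ is $(ac-3bd)I + (ad+bc)M$.

```python
import random

n = 24
M = [[1, 1], [-4, -1]]

def matmul_mod(A, B, n):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % n
             for j in range(2)] for i in range(2)]

# M^2 should be -3*I modulo n
assert matmul_mod(M, M, n) == [[(-3) % n, 0], [0, (-3) % n]]

def rep(a, b):
    """The matrix a*I + b*M, reduced mod n."""
    return [[(a + b * M[0][0]) % n, (b * M[0][1]) % n],
            [(b * M[1][0]) % n, (a + b * M[1][1]) % n]]

random.seed(0)
for _ in range(100):
    a, b, c, d = (random.randrange(n) for _ in range(4))
    lhs = matmul_mod(rep(a, b), rep(c, d), n)
    rhs = rep((a * c - 3 * b * d) % n, (a * d + b * c) % n)
    assert lhs == rhs   # the map is multiplicative
```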
{ "language": "en", "url": "https://math.stackexchange.com/questions/4597564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove that $ \ln{G(z+n)}+\ln{Γ(z+n)}=\ln{G(z)} +(n+1)\ln{Γ(z)}+\displaystyle \sum_{k=1}^{n} k\ln(z+n-k) $? where Γ(z) is the gamma function, G(z) is Barnes G Function. In Barnes' original 1857 article on the G function, on page 266 of this issue of "The Quarterly Journal Of Pure And Applied Mathematics", he starts off by stating an equality equivalent to the one in the title. I'm trying to understand how he arrived at his definition of the G function and this already got me baffled.
Recall that $G(z+1)=\Gamma(z)G(z)$ and $\Gamma(z+1)=z\Gamma(z)$. Repeatedly applying the functional equation for the G-function yields $$G(z+n)\Gamma(z+n) =\Gamma(z+n)\Gamma(z+n-1)\Gamma(z+n-2)\cdots\Gamma(z)G(z).\qquad(1)$$ Similarly, repeatedly applying the functional equation for the gamma function gives $$\Gamma(z+l)=\Gamma(z)\prod_{j=0}^{l-1}(z+j).$$ We may replace each gamma function (except for the rightmost $\Gamma(z)$) in the RHS of $(1)$ with the above product to acquire \begin{align} G(z+n)\Gamma(z+n) &=G(z)\Gamma(z)\prod_{l=1}^n\left(\Gamma(z)\prod_{j=0}^{l-1}(z+j)\right), \\ &\qquad\text{expand out the double product and} \\ &\qquad\text{ collect similar factors} \\ &= G(z)\Gamma^{n+1}(z)\prod_{k=1}^n(z+n-k)^k. \end{align} So we have $$G(z+n)\Gamma(z+n)=G(z)\Gamma^{n+1}(z)\prod_{k=1}^n(z+n-k)^k$$ and taking the logarithm gives you what you're after.
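Both product identities used above are easy to sanity-check numerically with Python's standard library (Barnes G itself isn't in the stdlib, so this only checks the gamma recurrence and the double-product collapse, the latter exactly with rationals):

```python
import math
from fractions import Fraction

# Check Γ(z+l) = Γ(z) * ∏_{j=0}^{l-1} (z+j)
z, n = 1.5, 6
for l in range(1, n + 1):
    prod = 1.0
    for j in range(l):
        prod *= z + j
    assert math.isclose(math.gamma(z + l), math.gamma(z) * prod, rel_tol=1e-12)

# Check the double-product collapse:
#   ∏_{l=1}^{n} ∏_{j=0}^{l-1} (z+j)  =  ∏_{k=1}^{n} (z+n-k)^k
zq = Fraction(3, 2)
lhs = Fraction(1)
for l in range(1, n + 1):
    for j in range(l):
        lhs *= zq + j
rhs = Fraction(1)
for k in range(1, n + 1):
    rhs *= (zq + n - k) ** k
assert lhs == rhs
```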
{ "language": "en", "url": "https://math.stackexchange.com/questions/4597763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is wrong in Method $3$? Let $z,w$ be complex numbers such that $\bar z+i\bar w=0$ and $\arg(zw)=\pi$, find $\arg(z)$ Method $1:$ $\bar z=-i\bar w\implies z=iw$ Also, $\arg z+\arg w=\pi\implies\arg z+\arg\frac zi=\pi$ Therefore, $2\arg z-\frac\pi2=\pi\implies\arg z=\frac{3\pi}4 $ Method $2:$ $zw=-k, k\gt0$ Also, $w=\frac zi$ Therefore, $z^2=-ik$ It means, $\arg z^2=\frac{3\pi}2$ Therefore, $\arg z=\frac{3\pi}4$ Method $3:$ Let $z=x+iy, w=a+ib$ Given, $\bar z+i\bar w=0\implies x-iy+i(a-ib)=0$ Therefore, $x=-b, y=a$ Also, $\arg(zw)=\pi\implies\arg(x+iy)(a+ib)=\pi$ It means, $\arg (ax-by+i(bx+ay))=\pi$ Therefore, $\tan\pi=\frac{bx+ay}{ax-by}\implies ay=-bx$ Putting $b=-x, a=y,$ I get, $y^2=x^2$ Therefore, $y=x$ or $y=-x$ $y=x$ gives $\frac{\pi}4$. Is this incorrect? If yes, how to eliminate this?
(After reading the comments) I got $\arg (ax-by+i(bx+ay))=\pi$ Putting $b=-x, a=y,$ I get, $$\arg(2xy+i(y^2-x^2))=\pi$$ Making $x^2=y^2$ could mean the argument is zero or $\pi$. For it to be necessarily $\pi$, $2xy$ should be negative i.e. $x,y$ are of opposite sign. Therefore, the answer is $\frac{3\pi}4$ and not $\frac\pi4$
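A quick `cmath` check, taking a concrete $z$ with $\arg z = \frac{3\pi}{4}$ (and, for contrast, one with $\arg z = \frac{\pi}{4}$, which fails):

```python
import cmath, math

# Take z with arg(z) = 3π/4 and w = z/i, as in Methods 1–2.
z = cmath.exp(1j * 3 * math.pi / 4)
w = z / 1j

# The constraint conj(z) + i*conj(w) = 0 holds:
assert abs(z.conjugate() + 1j * w.conjugate()) < 1e-12
# and arg(zw) = π (compare |phase| with π, since cmath.phase
# returns values in (-π, π] and rounding may land on -π):
assert abs(abs(cmath.phase(z * w)) - math.pi) < 1e-9

# With arg(z) = π/4 instead, zw lands on the positive real axis: arg = 0, not π.
z2 = cmath.exp(1j * math.pi / 4)
assert abs(cmath.phase(z2 * (z2 / 1j))) < 1e-9
```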
{ "language": "en", "url": "https://math.stackexchange.com/questions/4597987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Schaum's outlines general topology problem $38$ Let $\mathcal{T}$ be the topology on the real line $\mathbb{R}$ generated by the class $\mathcal{S}$ of all closed intervals $[a,b]$ where $a$ and $b$ are rational. Determine the closure of each of the following subsets of $\mathbb{R}$: $(a) \ (2,4), \ (b) \ (\sqrt2, 5], \ (c) \ (-3, \pi), \ (d) \ A= \{1, \frac{1}{2}, \frac{1}{3}, \dots \}.$ Hello everyone! This is a problem from Schaum's outlines (general topology) regarding subbases for topologies. I'm trying to work on this one and got stuck already on part $(a)$! What I tried to do is to understand the closed sets on this topology and I found that at least $$[a,b]^c = (- \infty, a) \cup (b, \infty)$$ should be closed. The closure of $(2,4)$ is now by definition the smallest closed set containing it. So at least $(- \infty, 4) \cup (b, \infty)$ contains this for any $b \ge 4$. I think I cannot make progress here before I completely understand the closed sets of this topology. Can I have some help understanding those?
$(2, 4)$ is already closed in this topology. To see that, recall that the open sets of the topology generated by a subbase are the (arbitrary) unions of finite intersections of subbasic sets. If you get a sense of what the open sets are then you can also get an idea of what the closed sets look like (by taking complements). For each $n \in \mathbb{N}$, consider $C_n = [-n, 2]$. This is a subbasic set in your topology, so it is open. Thus $C:=\bigcup_{n=1}^{\infty}C_n = (-\infty, 2]$ is open. Similarly, let $D_n = [4, n]$. Then $D:=\bigcup_{n=5}^{\infty}D_n = [4, \infty)$ is also open. Therefore $C \cup D = (-\infty, 2] \cup [4, \infty)$ is open, so $\mathbb{R} \setminus (C \cup D) = (2, 4)$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4598130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the following integral exist Let $(F_n : n ∈ \mathbb{N})$ be an increasing sequence in $\mathcal{B}(\mathbb{R})$ and $f$ be a measurable function defined on $\cup_{n∈\mathbb{N}}F_n$. Suppose $f$ is integrable on $F_n$ for every $n∈\mathbb{N}$ and $\lim_{n \to \infty} \int_{F_n}{f}dλ$ exists in $\mathbb{R}$. Does $\int_{\cup_{n∈\mathbb{N}}F_n}f dλ$ exist? So far, I have used a definition I found online: Let $\delta>0$, $F_n=F$, and $\forall n>0$: $\int_F{|f_n|<\epsilon}$, whenever $\mu(F)<\delta$. Then using this definition: $\int_{\cup_{n=1}^{M}F}|f_n|\leq \sum_{n=1}^{M}{\int_F{|f_n|}} \leq \sum_{n=1}^{M} \epsilon = \epsilon M$. So, I guess the integral exists but I'm not sure. I might be completely wrong but that's my only solution so far. Any help is appreciated!
It's not true in general. Let $F_{n}=[-n,n]$ and let $f(x)=x$ . Then $\int_{F_{n}} f\,d\lambda= 0$ for all $n$ and hence the limit $\lim_{n\to\infty}\int_{F_{n}} f\,d\lambda =0$ . Now $\bigcup_{n\in\mathbb{N}}F_{n}=\mathbb{R}$ but $x$ is not integrable over the real line. However it's true if $f$ is non-negative. In that case let $g_{n}=f\cdot\mathbf{1}_{F_{n}}$ . Then $g_{n}$ is an increasing sequence of non-negative measurable functions and $\lim_{n\to\infty} g_{n}= f\cdot\mathbf{1}_{\{\bigcup_{n\in\mathbb{N}} F_{n}\}}$ . So By Monotone Convergence Theorem you get your required result as $\displaystyle\lim_{n\to\infty}\int_{F_{n}}f\,d\lambda=\lim_{n\to\infty}\int_{\Bbb{R}} g_{n}\,d\lambda = \int_{\Bbb{R}} f\cdot\mathbf{1}_{\bigcup_{n\in\mathbb{N}}F_{n}}\,d\lambda = \int_{\bigcup_{n\in\mathbb{N}}F_{n}}f\,d\lambda$
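To make the counterexample concrete (midpoint Riemann sums, purely for intuition — both integrals are elementary): $\int_{-n}^{n} x\,d\lambda = 0$ for every $n$, while $\int_{-n}^{n} |x|\,d\lambda = n^2$ grows without bound.

```python
def midpoint_sum(func, a, b, m=20000):
    """Midpoint Riemann sum approximation of the integral of func over [a, b]."""
    h = (b - a) / m
    return sum(func(a + (i + 0.5) * h) for i in range(m)) * h

for n in (1, 10, 100):
    # ∫_{-n}^{n} x dx = 0 by symmetry, for every n
    assert abs(midpoint_sum(lambda x: x, -n, n)) < 1e-6
    # but ∫_{-n}^{n} |x| dx = n^2, which is unbounded in n
    assert abs(midpoint_sum(abs, -n, n) - n**2) < 1e-4 * n**2
```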
{ "language": "en", "url": "https://math.stackexchange.com/questions/4598297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $\frac{\sum_{j=1}^nX_j}{\sqrt{\sum_{j=1}^nX_j^2}}$ converges to $\mathcal{N}(0,1)$ in distribution Suppose $\{X_n\}$ iid, $E(X_1)=0$, and $\operatorname{Var}(X_1)=1$. Prove that $\frac{\sum_{j=1}^nX_j}{\sqrt{\sum_{j=1}^nX_j^2}}$ converges to $\mathcal{N}(0,1)$ in distribution. Actually, up to now I have no feasible approach, because I think the main difficulty is that a random quantity appears in the denominator, so the CLT can't be used directly. And I have no further ideas for proving convergence in distribution.
Actually, this is true as long as $\operatorname{Var}(X_1) < \infty$. We'll use a fairly standard stochastic asymptotic toolkit: * *Strong law of large numbers (SLLN), although the weak law of large numbers would suffice. *Continuous mapping theorem (CMT) *Standard central limit theorem (CLT) *Slutsky's theorem *Multiplying by $1 = \sqrt n/ \sqrt n = \sqrt{\operatorname{Var}(X_1)} / \sqrt{\operatorname{Var}(X_1)}$ where convenient We'll apply these with an excessive use of underbraces: $$ \frac{\sum_{j=1}^n X_j}{ \sqrt{\sum_{j=1}^n X_j^2 } } = \underbrace{ \underbrace{ \underbrace{ \bigg( \underbrace{\frac{1}{n} \sum_{j=1}^n X_j^2}_{\overset{\text{a.s.}}{\to} EX_1^2 = \text{Var}(X_1) \;\text{ by SLLN}} \bigg)^{-1/2} }_{\overset{\text{a.s.}}{\to} [\operatorname{Var}(X_1)]^{-1/2} \; \text{by CMT}}\cdot \sqrt{\operatorname{Var}(X_1)} }_{\overset{\text{a.s.}}{\to} 1 \; \text{by CMT}}\cdot \underbrace{ \frac{1}{\sqrt{n \cdot \operatorname{Var}(X_1)}} \sum_{j=1}^n X_j}_{\overset{\mathcal D}{\to} \mathcal N(0,1) \; \text{by CLT}} }_{\overset{\mathcal D}{\to} \mathcal N(0,1) \; \text{by Slutsky's theorem}} \overset{\mathcal D}{\longrightarrow} \mathcal N(0,1). $$
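If you want to see the theorem in action, here is a small Monte Carlo sketch (the uniform distribution and the sample sizes are arbitrary choices of mine; $\mathrm{Unif}(-\sqrt3,\sqrt3)$ has mean $0$ and variance $1$):

```python
import random, statistics

random.seed(42)
SQRT3 = 3 ** 0.5

def self_normalized_sum(n):
    # X_j ~ Uniform(-√3, √3): mean 0, variance 1
    xs = [random.uniform(-SQRT3, SQRT3) for _ in range(n)]
    return sum(xs) / (sum(x * x for x in xs) ** 0.5)

samples = [self_normalized_sum(300) for _ in range(2000)]

# The empirical distribution should be close to N(0,1)
# (loose tolerances, since this is Monte Carlo):
assert abs(statistics.fmean(samples)) < 0.1
assert abs(statistics.stdev(samples) - 1.0) < 0.1
```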
{ "language": "en", "url": "https://math.stackexchange.com/questions/4598514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find vertical asymptotes of function I have a problem solving this task. Find the vertical asymptotes of this function: $$f(x)=\frac{\sqrt{x^2-4}}{2x+1}$$ I found the domain: $$D=(-\infty,-2]\cup[2,+\infty)$$ I computed the limits: $$\lim\limits_{x\to-2^{-}}\frac{\sqrt{x^2-4}}{2x+1}=0$$ and $$\lim\limits_{x\to2^{+}}\frac{\sqrt{x^2-4}}{2x+1}=0$$ And I concluded that there are no vertical asymptotes? Is that right?
To find the largest possible domain, we want $x^2-4 \ge 0$ and $2x+1 \ne 0$, which corresponds to $$((-\infty, -2] \cup [2, \infty))\setminus\left\{-\tfrac12\right\}=(-\infty, -2] \cup [2, \infty).$$ Since the denominator $2x+1$ never takes the value $0$ on this domain, there is no vertical asymptote. I have attached a plot of the function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4598709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that area of parabola is $\frac{2}{3}$ of rectangle's area My math book has the following problem in the chapter on basic integration: Show that the area between the parabola and the x-axis is 2/3 of the rectangle's area. The book has no examples for a situation like this and I could not find anything helpful on the internet. (Maybe I don't know the correct English terms.) I thought I could calculate the two areas and compare them. $$ A_{\text{rectangle}} = (b-a)*c $$ If the parabola is drawn by a function $f(x)$ then I can get its area by integration. $$ A_{\text{parabola}} = \int_a^b f(x) dx $$ The parabola goes through the points $(a,0)$, $(b,0)$ and $(\frac{a+b}{2}, -c)$. Using those points and the quadratic equation I could find an equation for the parabola and integrate it. But I can't figure out how I could compare the area resulting from this with the area of the rectangle. What would be the correct approach to this problem?
You have already correctly identified the area of the rectangle but you are vague in stating the integral to compute the area between the parabola and the x-axis. Your approach is correct but there are some simplifications we can use to make the computation easier. We first translate the parabola to the origin and let $b-a=2w$. This gives the area of the rectangle as $A_{\text{rec}}=2wc$ and makes the algebra easier. Since the parabola whose vertex is the origin is of the form $y=kx^2$ we immediately get that $$kw^2=c\implies k=\frac{c}{w^2}\implies y=\frac{c}{w^2}x^2$$ Although we want the area bounded by the parabola and the line $y=c$, we can take the area under the parabola and subtract that from the area of the rectangle to prove the result. We exploit the symmetry of the parabola to get $$A_{\text{par}}=2\int_{0}^{w}\frac{c}{w^2}x^2 dx=\frac{2}{3}wc=\frac{1}{3}2wc=\frac{1}{3}A_{\text{rec}}$$ Subtracting the result from the area of the rectangle, we get that the area bounded by the parabola and $y=c$ is $A_{\text{rec}}-\frac{1}{3}A_{\text{rec}}=\frac{2}{3}A_{\text{rec}}$
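A quick exact check of the $\frac13$–$\frac23$ split with rational arithmetic (illustrative only; the $w,c$ values are arbitrary):

```python
from fractions import Fraction

def areas(w, c):
    """Area under y = (c/w^2)x^2 on [-w, w] (via the antiderivative x^3/3),
    and the area of the rectangle of width 2w and height c."""
    w, c = Fraction(w), Fraction(c)
    a_par = 2 * (c / w**2) * w**3 / 3   # 2 * ∫_0^w (c/w²)x² dx
    a_rect = 2 * w * c
    return a_par, a_rect

for w, c in [(1, 1), (3, 2), (5, 7)]:
    a_par, a_rect = areas(w, c)
    assert a_par == Fraction(1, 3) * a_rect            # area under parabola
    assert a_rect - a_par == Fraction(2, 3) * a_rect   # area bounded above by y = c
```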
{ "language": "en", "url": "https://math.stackexchange.com/questions/4598875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
If $\lim_{x \to 0} f(x)/x = 1$, then $\lim_{x \to 0} f(x)=0$ This statement is said to be true, but I am not sure how to illustrate it. I found an old thread that discusses using the product limit law. Just intuitively I am struggling as well. The only function for $f(x)$ I can think of that would make $\lim_{ x\to 0} f(x)/x = 1$ is $x$. Therefore $x/x=1$ So if we know the limit goes to $1$ then that means we know that the function $f(x)$ must be cancelled to make a $1$. Furthermore I thought we couldn't have $0/0$? How do I look at this problem?
I am wondering whether the title should say "lim x -> 0 f(x) = 0"? I will assume that this is what you meant. As you mention, it does follow from the fact that a product of limits is equal to the limit of the product whenever the first two limits exist. In your case, \begin{align*} \lim_{x \to 0} f(x) = \lim_{x \to 0} \frac{f(x)}{x} \cdot x = \left(\lim_{x \to 0} \frac{f(x)}{x}\right)\left(\lim_{x \to 0} x\right) = 1 \cdot 0 = 0 \end{align*} as required.
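Numerically, with a sample $f$ such as $f(x) = x + x^2$ (my choice; any $f$ with $f(x)/x \to 1$ behaves the same way), you can watch both limits at once:

```python
# f(x)/x = 1 + x -> 1, and therefore f(x) = (f(x)/x) * x -> 0
f = lambda x: x + x * x

for x in (0.1, 0.01, 0.001, 1e-6):
    assert abs(f(x) / x - 1) <= 2 * x      # the ratio approaches 1
    assert abs(f(x)) <= 2 * abs(x)         # so f(x) itself approaches 0
```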
{ "language": "en", "url": "https://math.stackexchange.com/questions/4599036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Median of medians algorithm My question is I don't understand how we determine the constant in T(T(n/3), T(n/5)). For example, we divide elements into 9 groups and get a formula like this T(n) <= T(n/9) + T(7n/9) + O(n) I don't understand how we get 7n/9, why is it not 5n/9, 8n/9, or something else? This question comes from the task. "Does the algorithm "Median of Medians" run in linear time, if one uses blocks of three or blocks of nine?"
Wikipedia has a good explanation here. The relevant sections are "Properties of pivot" and "Proof of $O(n)$ running time". I'll still explain the idea here, but you can find more details on the Wikipedia page. In the quickselect / "median of medians" algorithm, you do three things at each recursive step. First, you break into small chunks (you're using chunk size 9) and find the median of all those chunks in $O(n)$ time. Second, you find the median of all the chunk medians; there are $\frac n 9$ of these, so this will take $T\left(\frac n 9 \right)$ time. Finally, once you compute that median-of-medians (call it $M$), you'll use it as a pivot. $M$ is guaranteed to be bigger than $\frac{n/9}{2}$ of the chunk medians, and if it's bigger than the median of a chunk then it must be bigger than the 4 smaller elements in that chunk as well. Therefore $M$ will be greater than at least $5 \cdot \frac{n/9}{2} = \frac{5n}{18}$ elements from the array. Using essentially the same argument we can show $M$ will also be smaller than at least $\frac{5n}{18}$ elements. We'll need to recursively call our algorithm on the larger of the two "sides" created by using $M$ as a pivot. Due to the bounds above, the larger side will have size $\le n - \frac{5n}{18} = \frac{13n}{18}$. That means the final term we need to add on the RHS is $\mathbf{T\left(\frac{13n}{18}\right)}$. Note I got a slightly different term from the one you quoted: $\frac{13n}{18} < \frac{7n}{9}$. Mine gives a slightly tighter bound. I'm not sure where you got your term but I'm guessing it was obtained by being a little less careful when deciding how many elements must be smaller than $M$. It doesn't really matter to the proof in the end, since you'll still get the $O(n)$ time complexity either way, but I think my version of the term is "slightly better".
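In case it helps to experiment, here is a short Python sketch (my own, not from the linked page) of deterministic selection with blocks of 9, following the three steps described above:

```python
def select(arr, k):
    """k-th smallest element (0-indexed) via median-of-medians, blocks of 9."""
    if len(arr) <= 9:
        return sorted(arr)[k]
    # Step 1: medians of the blocks of 9 (last block may be shorter)
    blocks = [arr[i:i + 9] for i in range(0, len(arr), 9)]
    medians = [sorted(b)[len(b) // 2] for b in blocks]
    # Step 2: recurse on ~n/9 items to find the median of medians
    pivot = select(medians, len(medians) // 2)
    # Step 3: partition around the pivot; the larger side has <= ~13n/18 items
    lo = [x for x in arr if x < pivot]
    eq = [x for x in arr if x == pivot]
    hi = [x for x in arr if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))

import random
random.seed(1)
data = [random.randrange(1000) for _ in range(500)]
for k in (0, 10, 250, 499):
    assert select(data, k) == sorted(data)[k]
```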
{ "language": "en", "url": "https://math.stackexchange.com/questions/4599201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum binomial coefficients $$\sum_{k=0}^{n} {\frac{k^2+k}{3^{k+2}} {n \choose k}}=?$$ What I've tried: $$(k^2+k){n \choose k}=k(k+1){\frac{n!}{k!(n-k)!}}$$ $$k^2+k = k^2-k+2k=k(k-1)+2k$$ => $$ \begin{align} (k^2+k){n \choose k} &= k(k-1){\frac{n!}{k!(n-k)!}}+2k{\frac{n!}{k!(n-k)!}}\\&={\frac{n!}{(k-2)!(n-k)!}}+2{\frac{n!}{(k-1)!(n-k)!}}\\&=n(n-1){n-2 \choose k-2}+2n{n-1 \choose k-1} \end{align} $$ and $${\frac{1}{3^{k+2}}}={\frac{1}{9}}({\frac{1}{3}})^k$$ So I have $$\sum_{k=0}^{n} {\frac{n(n-1){n-2 \choose k-2}+2n{n-1 \choose k-1}}{9*3^k}}$$ And I'm stuck... Can anyone help me?
As in the first hint $$(\ x(1+x)^n\ )''= \left(\sum_{k=0}^n {n\choose k}x^{k+1}\right)''= \sum_{k=1}^n k(k+1){n\choose k}x^{k-1}= f(x)$$ So if you take $x= \frac{1}{3}$ : $$\frac{1}{27}f(\frac{1}{3})= \frac{1}{3^3}\sum_{k=1}^n k(k+1){n\choose k}\frac{1}{3^{k-1}}= \sum_{k=1}^n \frac{k(k+1)}{3^{k+2}}{n\choose k}= \sum_{k=0}^n \frac{k(k+1)}{3^{k+2}}{n\choose k}- 0$$ And differentiate : $$f(x)= (\ (1+x)^n+ nx(1+x)^{n-1}\ )'= 2n(1+x)^{n-1}+ n (n-1)x(1+x)^{n-2}$$ You can compute $f(\frac{1}{3})= 2n(1+\frac{1}{3})^{n-1}+ n (n-1)\frac{1}{3}(1+\frac{1}{3})^{n-2}= 2n(\frac{4}{3})^{n-1}+ n(n-1)\frac{1}{3}(\frac{4}{3})^{n-2}$
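We can verify the closed form against the raw sum with exact rational arithmetic (a quick check, not part of the derivation):

```python
from fractions import Fraction
from math import comb

def direct(n):
    """The original sum, computed term by term."""
    return sum(Fraction(k * k + k, 3 ** (k + 2)) * comb(n, k)
               for k in range(n + 1))

def closed_form(n):
    """(1/27) * f(1/3) with f(x) = 2n(1+x)^{n-1} + n(n-1)x(1+x)^{n-2}."""
    x = Fraction(1, 3)
    f = 2 * n * (1 + x) ** (n - 1) + n * (n - 1) * x * (1 + x) ** (n - 2)
    return f / 27

for n in range(2, 12):
    assert direct(n) == closed_form(n)
```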
{ "language": "en", "url": "https://math.stackexchange.com/questions/4599408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
find the probability the black marble is from box A and the blue one is from C (HMMT 2000 Guts Round #43) Box A contains 3 black marbles and 4 blue marbles. Box B has 7 black marbles and 1 blue marble. Box C has 2 black marbles, 3 blue marbles, and 1 green marble. Person A closes their eyes and picks two marbles from 2 different boxes. If it turns out that A gets $1$ black and 1 blue marble, what is the probability that the black marble is from box A and the blue one is from C? My question is at the bottom. We assume the process is equivalent to uniformly choosing 2 distinct boxes and then independently and randomly choosing a marble from each box (!) Let the colours black, blue, and green be denoted by the numbers 1, 2, 3. A has 3 black and 4 blue marbles, B has 7 black and 1 blue, and C has 2 black, 3 blue, and 1 green. Consider the sample space $\Omega := \{\{(m_1,b_1),(m_2,b_2)\} : b_i \in \{A,B,C\}, m_i \in \{1,2,3\}, i = 1,2, b_1\neq b_2\}$. Here, the $b_i$'s in a pair refer to the boxes chosen. Let $X = \{\{(m_1,b_1),(m_2,b_2)\} \in \Omega : (m_1 = 1 \wedge m_2 = 2)\}, Y = \{\{(m_1,b_1),(m_2,b_2)\} \in \Omega : (m_1 = 1 \wedge m_2 = 2 \wedge b_1 = A \wedge b_2 = C) \}.$ For $R\neq S \in \{A,B,C\},$ let $P_{R,S} = \{\{(m_1,b_1),(m_2,b_2)\} \in \Omega : b_1 = R \wedge b_2 = S\}.$ We want to find $P(Y | X) = P(Y\cap X)/P(X)$. To find $P(X),$ we first find $P(X\cap P_{A,B}), P(X\cap P_{A,C}), P(X\cap P_{B,C})$. Now $P(X\cap P_{A,B}) = P(X | P_{A,B}) \cdot P(P_{A,B})$. $P(P_{A,B}) = P(P_{A,C}) = P(P_{B,C}) = 1/3,$ since every pair of two distinct boxes is selected with equal probability (by (!)). Now $P(X | P_{A,B}) = 3/7\cdot 1/8 + 4/7\cdot 7/8 = 31/56.$ Similarly, $P(X | P_{A,C}) = 3/7\cdot 3/6 + 4/7\cdot 2/6 = 17/42, P(X|P_{B,C}) = 7/8\cdot 3/6 + 1/8\cdot 2/6 = 23/48.$ Thus $P(X) = 1/3(31/56+17/42+23/48) = 483/1008.$ To find $P(Y\cap X),$ we find $P(Y\cap X \cap P_{A,B}), P(Y\cap X \cap P_{A,C}), P(Y\cap X \cap P_{B,C})$.
$P(Y\cap X \cap P_{A,B}) = P(Y\cap X \cap P_{B,C}) = P(\emptyset) = 0.$ Now $P(Y\cap X \cap P_{A,C}) = P(Y\cap X | P_{A,C}) P(P_{A,C}) = 1/3 \cdot (3/7\cdot 3/6) = 1/14.$ So the desired probability is $\dfrac{24}{161}$. This is different from the solution on HMMT.org, which is $120/1147$. Did I make a mistake somewhere, and if so, where and how can I get the correct answer?
The simplest way is to make an array. Since the probabilities of choosing each box is equal, we can tabulate conditional probabilities of drawing black-blue in that order given we have chosen the indicated boxes \begin{array}{| c | c | c | c | c | c |}\hline \\Combo &AB & BA & AC & CA & BC & CB \\\hline Black-Blue &\frac37\frac18 & \frac78\frac47 & \frac37\frac36 &\frac26\frac47 & \frac78\frac36 & \frac26\frac18 \\ \hline \end{array} $$Pr =\dfrac{\frac37\frac36}{\frac37\frac18 + \frac78\frac47 + \frac37\frac36 +\frac26\frac47\ + \frac78\frac36 + \frac26\frac18 }=\frac{24}{161}$$
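For the record, the computation in the array can be done with exact fractions:

```python
from fractions import Fraction as F

# P(draw black from box X) and P(draw blue from box Y)
black = {'A': F(3, 7), 'B': F(7, 8), 'C': F(2, 6)}
blue  = {'A': F(4, 7), 'B': F(1, 8), 'C': F(3, 6)}

# One term per ordered pair (black-source, blue-source) of distinct boxes
pairs = [('A', 'B'), ('B', 'A'), ('A', 'C'), ('C', 'A'), ('B', 'C'), ('C', 'B')]
terms = {(x, y): black[x] * blue[y] for x, y in pairs}

total = sum(terms.values())
answer = terms[('A', 'C')] / total
assert answer == F(24, 161)
```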
{ "language": "en", "url": "https://math.stackexchange.com/questions/4599636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Critical points of a system of linear differential equations I have the following system of linear ODEs $$ \begin{aligned} x' &= x + 2y \\ y' &= 2x + 4y \end{aligned} $$ Looking for critical points and their classification, I set $x'=0$ and $y'=0$, ending up with $x=-2y$. Does this mean that I have infinitely many critical points? Is there another way to approach this problem?
The system is easy to solve: just divide $y'$ by $x'$ to get $$\frac{dy/dt}{dx/dt}=\frac{dy}{dx}=\frac{2x+4y}{x+2y}=2\\ y=2x+c_1$$ using the chain rule (valid wherever $x+2y\neq 0$). Substituting $y=2x+c_1$ back into the first equation gives $x'=x+2y=5x+2c_1$. We then solve for $x$ $$x'-5x=2c_1\\ \left(xe^{-5t}\right)'=2c_1e^{-5t}\\ x=-\frac{2c_1}{5}+c_2e^{5t}$$ and for $y$ $$y=2x+c_1\\ y=\frac{c_1}{5}+2c_2e^{5t}.$$ (Note the $e^{5t}$: the coefficient matrix has eigenvalues $0$ and $5$.) We want $$x'=y'=0\\ 5c_2e^{5t}=10c_2e^{5t}=0\\ c_2=0.$$ But what exactly is $c_2$? To solve for it, we need to choose a set of initial conditions that we will solve for, for example $x(0)=x_0$ and $y(0)=y_0$, to get rid of the exponential terms. $$x_0=-\frac{2c_1}{5}+c_2\\ y_0=\frac{c_1}{5}+2c_2$$ Add the first equation to twice the second to get $$c_2=\frac{x_0+2y_0}{5}.$$ So we essentially need $$x_0+2y_0=0.$$ That is $$x(0)+2y(0)=0,$$ which is exactly the line $x=-2y$ that you found. Whenever that happens, we get a critical point. So yes, we get an infinite number of critical points (a whole line of them), since we have no condition on $c_1.$
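As a sanity check (the constants below are arbitrary): the right-hand side of the system vanishes exactly on the line $x=-2y$, trajectories off that line have slope $2$, and the general solution built from $y=2x+c_1$ and $x'=x+2y=5x+2c_1$ does satisfy the system.

```python
import math

def rhs(x, y):
    """Right-hand side of the system x' = x + 2y, y' = 2x + 4y."""
    return (x + 2*y, 2*x + 4*y)

# Every point on the line x = -2y is an equilibrium:
for y0 in (-3.0, -1.0, 0.0, 2.5, 7.0):
    assert rhs(-2*y0, y0) == (0.0, 0.0)

# Off that line the field is nonzero and dy/dx = 2:
dx, dy = rhs(1.0, 1.0)
assert (dx, dy) != (0.0, 0.0) and dy / dx == 2.0

# The solution x = -2c1/5 + c2 e^{5t}, y = c1/5 + 2 c2 e^{5t}
# satisfies the system (checked at a few times):
c1, c2 = 1.3, -0.7
for t in (0.0, 0.5, 1.0):
    e = math.exp(5*t)
    x, y = -2*c1/5 + c2*e, c1/5 + 2*c2*e
    xdot, ydot = 5*c2*e, 10*c2*e          # time derivatives of the formulas
    fx, fy = rhs(x, y)
    assert abs(xdot - fx) < 1e-9 * max(1.0, abs(fx))
    assert abs(ydot - fy) < 1e-9 * max(1.0, abs(fy))
```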
{ "language": "en", "url": "https://math.stackexchange.com/questions/4599809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show the function for which the Dirichlet generating series is $\zeta(2s)$ using only $\tau,\varphi,\sigma\text{ and }\mu$ or some explicit formula. I'm trying to find the function with Dirichlet generating series $\zeta(2s)$, I know that this relates somehow to the Liouville function but I am trying to express it in terms of only the standard arithmetic functions $\varphi,\tau,\sigma,\mu$ or some explicit formula. What I have tried so far is I know, $$F_f(s)=\sum_{j=1}^\infty\frac{f(j)}{j^s}$$ $$F_f(s-1)=\sum_{j=1}^\infty\frac{jf(j)}{j^s}$$ And I have tried to find the g such that $$F_g(s)=\sum_{j=1}^\infty\frac{g(j)}{j^{2s}}$$ But everything I have tried from this point onwards has been unsuccessful. Any help would be great.
In this sort of question, it is often a good idea to consider the effect on the Euler product: $$ \zeta(2s) = \prod_p \left( 1 - \frac{1}{p^{2s}} \right)^{-1}. $$ This particular question happens to be relatively straightforward, as this is obviously multiplicative and this expression indicates that the coefficients are defined on prime powers by the function $$ a(p^k) = \begin{cases} 1 & 2 \mid k \\ 0 & \text{else}. \end{cases} $$ This is, of course, another name for the "is a square" function. Visually inspecting, we can quickly convince ourselves of this as well: $$ \zeta(2s) = 1 + \frac{1}{4^s} + \frac{1}{9^s} + \frac{1}{16^s} + \frac{1}{25^s} + \cdots$$
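A quick numerical check that the "is a square" indicator really gives $\zeta(2s)$: after the substitution $j=m^2$, the partial sums of $\sum_j a(j)/j^s$ agree term for term with the partial sums of $\zeta(2s)$.

```python
from math import isqrt

def lhs(s, N):
    """Partial sum of Σ a(j)/j^s with a(j) = 1 iff j is a perfect square."""
    return sum(1 / j**s for j in range(1, N + 1) if isqrt(j)**2 == j)

def zeta2s_partial(s, N):
    """Partial sum of ζ(2s) over m <= √N, i.e. the same sum with j = m²."""
    return sum(1 / m**(2 * s) for m in range(1, isqrt(N) + 1))

for s in (1.0, 1.5, 2.0):
    assert abs(lhs(s, 10000) - zeta2s_partial(s, 10000)) < 1e-12
```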
{ "language": "en", "url": "https://math.stackexchange.com/questions/4600006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Joined squares with concyclic edges [EDITED after feedback. Thanks, YNK] Two squares are joined at a corner, I am trying to figure out what conditions are needed to ensure that the vertices of one edge from each square all lie on the same circle. In the diagram below, the edge vertex sets that I would like to be concyclic are $\{A,B, F, G\}$ or $\{A,B, G, H\}$. Three ways to start with one of the squares and generate another that meets the condition are shown below. I guess my questions are: Are these the only solutions? If so, how can we prove this? Solution 1 Draw any circle through $A$,$B$ Take rays from $B$ through the $C$ and the centre of the circle, $O$, label the points where they cross the circle again as $F$ and $G$. $\angle CFG = \angle BFG = 90^{\circ}$. I think it should be easy to show that $A$, $C$, $G$ are collinear using symmetry. This makes $\angle GCF = \angle CGF = 45^{\circ}$, so that $\triangle GCF$ is isosceles and $CF=FG$. So we can 'complete the square' to get what we want. In terms of transformations, this solution can be obtained from the initial square by a rotation about $C$ of $135^{\circ}$ and a scale (centre $C$) by a suitable factor to make $F$ coincide with the circle. Solution 2 Starting as before, with any circle through $A$, $B$, we can draw a line through the centre of the circle and $C$. Reflecting the square in this line gives a congruent one with the desired property. Solution 3 We can obtain another solution by applying starting with solution 1 and applying the operation (reflection) used for solution 2. This leads to the square labelled '3' below. We could have started with solution 2 and applied the operations that were used for solution 1 (rotate and scale) as, I believe, these operations commute. There is a slight difference between solution 3 and solutions 1 and 2. 
Starting from the point of contact and reading the edges in anti-clockwise order, the second edge coincides with the circle for solutions 1 and 2, while the third edge coincides for solution 3. Update Some playing with Geogebra seems to confirm that there are only 4 squares with the property, once the joining point $C$ is fixed. In the illustration below, $C$ is chosen, point $H$ can move freely around the circumference of the circle, parameterised by $\alpha$, the angle that the radius to $H$ makes with the positive $x$-axis. Extending $HC$ to $E$ and using the diameter from $E$ leads to $F$ for which $\angle FHC = 90^{\circ}$. Varying $\alpha$ between $0$ and $2 \pi$, we can generate a square whenever $|HF|=|CH|$. The right hand plot shows the graphs for $|HF|$ and $|CH|$ as $\alpha$ varies, this seems to confirm four solutions that can lead to a square.
It is sufficient to place the given squares so that the side of each lies in a straight line with the diagonal of the other. Thus $AC$ is collinear with $CG$ and $FC$ with $CB$. Then by similar triangles$$\frac{AC}{BC}=\frac{FC}{GC}$$making$$AC\cdot GC=BC\cdot FC$$ Hence $AG$ and $BF$ are intersecting chords in a circle and points $A$, $B$, $G$, $F$ are concyclic. Addendum: The condition can be met in a second way of course, by rotating $CG$ $180^o$ about point $C$ as in the figure below, where collinear side and diagonal now overlap instead of running in opposite directions. Since by similar triangles$$\frac{CA}{CB}=\frac{CF}{CG}$$then$$CA\cdot CG=CB\cdot CF$$and points $A$, $B$, $F$, $G$ are concyclic. There seem to be only two situations where given square $ABCD$ and non-equal square $CEGF$, in contact at $C$, can have points $A$, $B$ concyclic with $F$, $G$. But there are likewise two situations where points $G$, $E$ are concyclic--not with $A$, $B$, however, but with $D$, $B$. For $CG$ rotating $360^o$ about point $C$, yields side/diagonal alignment four times, with $A$, $B$, $F$, $G$ concyclic in two of them and $D$, $B$, $E$, $G$ in the other two. The first two we've seen, where $CG$ runs "east-west"; in the second two, shown below, $CG$ runs "north-south". Thus side/diagonal collinearity is sufficient to make either $A$, $B$, $F$, $G$ or $D$, $B$, $E$, $G$ concyclic. I think side/diagonal collinearity is also a necessary condition, but can't yet prove it.
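The key step used here — the converse of the intersecting-chords relation — can be checked numerically. The directions and lengths below are arbitrary choices of mine satisfying $CA\cdot CG = CB\cdot CF$, not coordinates taken from the figures.

```python
import math

def circumcircle(p, q, r):
    """Center and radius of the circle through three non-collinear points."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

# Two chords through C = origin, along unit directions u and v;
# A, G on opposite rays of u and B, F on opposite rays of v.
t1, t2 = 0.3, 1.9
u = (math.cos(t1), math.sin(t1)); v = (math.cos(t2), math.sin(t2))
CA, CG, CB, CF = 2.0, 3.0, 1.5, 4.0          # CA*CG == CB*CF == 6
A = ( CA*u[0],  CA*u[1]); G = (-CG*u[0], -CG*u[1])
B = ( CB*v[0],  CB*v[1]); F = (-CF*v[0], -CF*v[1])

(ox, oy), rad = circumcircle(A, G, B)
assert abs(math.hypot(F[0]-ox, F[1]-oy) - rad) < 1e-9   # F lies on the circle

# If the products differ, F falls off the circle through A, G, B:
B2 = (3.0*v[0], 3.0*v[1])                    # now CB*CF = 12 != 6
(ox2, oy2), rad2 = circumcircle(A, G, B2)
assert abs(math.hypot(F[0]-ox2, F[1]-oy2) - rad2) > 1e-3
```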
{ "language": "en", "url": "https://math.stackexchange.com/questions/4600170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Does almost sure convergence imply that a sequence is Cauchy with probability one? Well, if $P\left((X_{n}) \ \text{is Cauchy}\right)=1$ then $X_n \longrightarrow X \ \text{a.s.}$ But is the converse true? We have $$(X_n\longrightarrow X) := \bigcap\limits_{ε\in\mathbb{Q^+}}\bigcup\limits_{N=1}^{\infty}\bigcap\limits_{n=N}^{\infty}(|X_n-X|\leq\epsilon) $$ And $$(X_n \ \text{is Cauchy}) := \bigcap\limits_{ε\in\mathbb{Q^+}}\bigcup\limits_{N=1}^\infty\bigcap\limits_{n=N}^{\infty}\bigcap\limits_{m=N}^{\infty}(|X_n-X_m|\leq\epsilon).$$ We see that $(X_n \ \text{is Cauchy}) \subset (X_n\longrightarrow X)$, so $P(X_n \ \text{is Cauchy}) \leq P(X_n\longrightarrow X)$, and since $P\left((X_{n}) \ \text{is Cauchy}\right)=1$, hence $P(X_n\longrightarrow X)=1$. But what about the converse: if $(X_n)$ converges a.s., is $(X_n)$ Cauchy with probability one? Intuitively, if $(X_n)$ converges almost surely to a limit $X$, then taking $N\in\mathbb{N}$ large enough, $\forall\epsilon>0\ \exists \ N\in\mathbb{N} \ \forall n,m\geq N :(|X_n-X_m|\leq \epsilon)$, which is the Cauchy condition, and it holds with probability one, right? Hope someone can clarify this. Thanks in advance.
No, both notions are not equivalent if we consider random variables with values in the extended real line, namely $\overline{\mathbb{R}}:=\mathbb{R}\cup \{-\infty ,+\infty \}$, as $\overline{\mathbb{R}}$ cannot be a metric space with the usual metric from $\mathbb{R}$. This means that sequences such that $\lim_{n\to \infty }X_n(\omega )\in\{-\infty ,+\infty \}$ cannot be defined as Cauchy as there is no metric in $\overline{\mathbb{R}}$ that generalizes the metric in $\mathbb{R}$. However if $\Pr [\{\omega \in \Omega :\limsup_{n\to\infty}|X_n(\omega )|<+\infty \}]=1$ then both conditions, being almost sure convergent to some random variable $X$ and being almost sure Cauchy are equivalent because the set where this equivalence fails have zero probability. To be more precise: $\lim_{n\to \infty }X_n(\omega )\in \mathbb{R}$ if and only if $\{X_n(\omega )\}_{n\in\mathbb{N}}$ is Cauchy, therefore $$ \left\{\omega \in \Omega : \lim_{n\to \infty }X_n(\omega )\in \mathbb{R}\right\}=\left\{\omega \in \Omega : \{X_n(\omega )\}_{n\in\mathbb{N}}\text{ is Cauchy}\right\} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4600354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Nature of critical points of $f(x,y,z)$ How can I study the nature of the critical points of $f(x,y,z)=3xy^2+6y^2+x^2z+3z^2$? Since $$ \begin{cases} f'_x=3y^2+2xz\\ f'_y=6xy+12y\\ f'_z=x^2+6z \end{cases} $$ the only critical point is $(0,0,0)$. The Hessian matrix is $$ H_f(0,0,0)=\left( \begin{matrix} 0 & 0 & 0\\ 0 & 12 & 0\\ 0 & 0 & 6 \end{matrix}\right), $$ whose determinant is zero. Now I'm stuck, because I've tried to study $f$ along many restrictions, and every time it looked like $O$ is a minimum, but I am not able to prove (or disprove) that $O$ is indeed a minimum. What can I do now?
$3xy^2 +6y^2$ is certainly nonnegative in small enough balls around the origin, but the other part, $x^2z+3z^2$, isn't obviously always nonnegative, so let's work there. It would be convenient to deal with homogeneous polynomials, probably. So try curves of the form $(t, 0, at^2)$ for various values of $a$.
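Following the hint, along $(t,0,at^2)$ the function reduces to $f(t,0,at^2)=(a+3a^2)\,t^4$, which is negative for $-1/3<a<0$ and positive for $a>0$. A minimal numeric check of this (a Python sketch, not part of the original hint):

```python
def f(x, y, z):
    return 3*x*y**2 + 6*y**2 + x**2*z + 3*z**2

# Along (t, 0, a*t^2):  f = (a + 3a^2) * t^4,
# negative for -1/3 < a < 0 and positive for a > 0.
t = 0.1
assert f(t, 0.0, -t**2 / 6) < 0   # a = -1/6:  a + 3a^2 = -1/12 < 0
assert f(t, 0.0,  t**2)     > 0   # a = 1:     a + 3a^2 = 4 > 0
```

Since $t$ can be taken arbitrarily small, $f$ takes both signs in every ball around the origin, so $O$ is a saddle point, not a minimum.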
{ "language": "en", "url": "https://math.stackexchange.com/questions/4600549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
show that all $n+1$ cyclic shifts of an augmented Dyck path are distinct Define an augmented Dyck path from $(-1,-1)$ to $(2n,0)$ to be a path where the first step is an up step (corresponding to the vector $(1,1)$), every other step is either an up step or a down step (corresponding to the vector $(1,-1)$), and the path never goes below the $x$-axis. Let $X_n$ be the set of all paths from $(-1,-1)$ to $(2n,0)$ where the first step is an up step and every other step is either an up step or a down step. To each augmented Dyck path, associate a word $w_n$ consisting of the letters $U$ for an up step in the corresponding part of the path and $D$ for a down step in the corresponding part, and let $W_n$ be the set of these words. Define the $k$th cyclic shift of an element $w = w_1\cdots w_{2n+1}\in W_n$, where $w_k = U$, to be $w_k \cdots w_{2n+1}w_1\cdots w_{k-1}$. Prove that all $n+1$ cyclic shifts of any element of $W_n$ are distinct. Source: this is inspired by problem #16 of the 2001 HMMT Guts Round. Choose $k > 0$ to be minimal such that cyclically shifting by $k$ places fixes a word $w\in W_n$. We claim that $k=2n+1,$ which shows that all $n+1$ cyclic shifts beginning with a distinct $U$ are distinct. Consider the equivalence relation on $[2n+1] := \{1,\cdots, 2n+1\}$ given by $i\sim j$ iff $i\equiv j+kx \pmod{2n+1}$ for some $x\in \mathbb{Z}$. For each equivalence class under $\sim$, all of the letters in those positions must be the same. We now find the shortest period $d$ of the word $w$, which is defined to be the smallest $c$ so that $w_{i+c} = w_i$ for all $i$, where we take indices modulo $2n+1$ so that $w_{i+2n+1}=w_i$ for all $i$, with $w_1,\cdots, w_{2n+1}$ as a base case. One can show using the division algorithm that if $j$ is a period for $w$, then $d \mid j.$ By assumption, both $k$ and $2n+1$ are periods of $w$ (rotating by $k$ and fixing $w$ is equivalent to $w$ having a period of $k$), so $d \mid k$ and $d \mid 2n+1.$ Now we just need to show that $d = \gcd(k,2n+1)$.
To see why, note first that $d \mid k$ and $d \mid 2n+1$ give $d \mid \gcd(k,2n+1)$. Conversely, by Bézout we can write $\gcd(k,2n+1) = ak+(2n+1)b$ for some integers $a$ and $b$, and then, using the fact that $k$ and $2n+1$ are periods of $w$, we see that $w_{i+\gcd(k,2n+1)} = w_i$ for all $i$. Now if an element $j$ belongs to the equivalence class of the element $i$, then by definition $j-i$ is a period of $w$ (since $w_{a+i} = w_{a+j}$ for all indices $a$), and so $d \mid j-i,$ which implies that $i \equiv j\pmod d.$ Now the number of solutions $j\in \{1,\cdots, 2n+1\}$ to $j\equiv i\pmod d$ is precisely $(2n+1)/d.$ To see why this holds, it suffices to show that all solutions are given by $i + ad$ for $a\in \{0,1,\cdots, (2n+1)/d - 1\}.$ For a solution $j$, write $j= i+dt.$ Then if $t\equiv a\pmod{(2n+1)/d},$ we have $dt\equiv da\pmod{2n+1},$ so $j\equiv i+da\pmod{2n+1}.$ Finally, all the solutions $i+ad$ for $a\in \{0,1,\cdots, (2n+1)/d-1\}$ are distinct, since if $i+a_1 d \equiv i+a_2 d\pmod{2n+1},$ then $d(a_1-a_2) \equiv 0\pmod{2n+1}\Rightarrow a_1 - a_2\equiv 0\pmod{(2n+1)/d}\Rightarrow a_1 = a_2$ as $a_1,a_2\in \{0,1,\cdots, (2n+1)/d-1\}$. Hence the claim that the cardinality of each equivalence class is $(2n+1)/\gcd(2n+1,k)$ indeed holds. But I'm not sure how to show $k$ must equal $2n+1$ in this case. Of course, writing $m := \gcd(2n+1,k)$, it would be enough to show that $m = 1$. The latter would follow if one knew that $m$ must divide both the number of up steps and the number of down steps in the path corresponding to $w$, but that doesn't seem obvious to me.
If we slant the graph of the augmented Dyck path down by slope $-\frac{1}{2n+1}$, it starts and ends at the same $y$-coordinate, and in between is always above that $y$-coordinate. So after any cyclic shift, a similar slant downward will expose the unique global minimum as the shifted position of the original start of the augmented Dyck path. More formally: Define the function $\delta: \{\mathrm U,\mathrm D\} \to \mathbb{R}$ with $\delta(\mathrm U) = +1$ and $\delta(\mathrm D) = -1$. To every word $w_1\ldots w_{2n+1}$ containing only letters U and D, associate the sequence $(s_0, \ldots, s_{2n+1})$ defined by $$ s_j = \sum_{i=1}^j \left(\delta(w_i) - \frac{1}{2n+1}\right) $$ If the word corresponds to an augmented Dyck path: $s_0 = s_{2n+1} = 0$. Since $w_1 = \mathrm U$, $s_1 = \frac{2n}{2n+1}$. The constraint that the path doesn't go below the $x$-axis after the first step from $(-1,-1)$ to $(0,0)$ means that $\sum_{i=2}^j \delta(w_i) \geq 0$ for every $j$. Therefore when $2 \leq j \leq 2n$, $s_j \geq s_1 - \frac{j-1}{2n+1} > 0$. The minimum value of the sequence is $0$, attained only at $s_0$ and $s_{2n+1}$. Since $s_0 = s_{2n+1}$, we can extend this sequence to an infinite repeating sequence with period $2n+1$. The $k$-th cyclic shift of an augmented Dyck path then has associated sequence $(t_0, \ldots)$ with terms defined by $$ t_j = \sum_{i=1}^j \left(\delta(w_{k+i-1}) - \frac{1}{2n+1}\right) = s_{k+j-1} - s_{k-1} $$ The minimum value of $t_j$ is $-s_{k-1}$, attained only when $k+j-1 \equiv 0 \pmod{2n+1}$. Since each value of $k$ puts the minimum value in a different position $j$, two different cyclic shifts must have different sequences $(t_j)$ and therefore different shifted words.
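As a sanity check on this argument (a brute-force Python sketch; the enumeration below is mine, not part of the answer), one can list every augmented Dyck word for small $n$ and verify directly that its $n+1$ cyclic shifts starting at a $U$ are pairwise distinct:

```python
from itertools import product

def augmented_dyck_words(n):
    """Words U w_2 ... w_{2n+1} whose path from (-1,-1) never goes below
    the x-axis after the first step and ends at height 0 (equivalently,
    the last 2n letters form an ordinary Dyck word)."""
    words = []
    for rest in product("UD", repeat=2 * n):
        w = "U" + "".join(rest)
        h, ok = -1, True
        for i, c in enumerate(w):
            h += 1 if c == "U" else -1
            if i >= 1 and h < 0:   # may not dip below the x-axis after step 1
                ok = False
                break
        if ok and h == 0:          # ends at (2n, 0), i.e. exactly n+1 U's
            words.append(w)
    return words

for n in (1, 2, 3):
    for w in augmented_dyck_words(n):
        # cyclic shifts that begin at a U, as in the problem statement
        shifts = [w[k:] + w[:k] for k in range(2 * n + 1) if w[k] == "U"]
        assert len(shifts) == n + 1        # one shift per U ...
        assert len(set(shifts)) == n + 1   # ... and all pairwise distinct
```

The counts $1, 2, 5$ of augmented Dyck words for $n = 1, 2, 3$ are the Catalan numbers, consistent with the bijection to ordinary Dyck paths noted in the docstring.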
{ "language": "en", "url": "https://math.stackexchange.com/questions/4600904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I got a different value than Wolfram when evaluating $~\int_{0}^{2\pi}\left|\sin\theta+\cos\theta\right|\mathrm d \theta$ $$\begin{align} A&:=\int_{0}^{2\pi}\left|\sin\theta+\cos\theta\right|\mathrm{d}\theta\\ \end{align}$$ To find where the integrand is negative (assuming $\theta\neq{\pi\over2},{3\pi\over 2}$ so that $\tan\theta$ is defined): $$\begin{align} \sin\theta+\cos\theta&<0\\ \cos\theta&<-\sin\theta\\ 1&<-\tan\theta\\ \iff\tan\theta&<-1\\ f(\theta)&:=\sin\theta+\cos\theta \\ A&=\int_{0}^{{\pi\over 2}}f(\theta)\mathrm d\theta-\int_{{\pi\over 2}}^{{\pi\over 2}+{\pi\over 4}}f(\theta)\mathrm d\theta+\int_{{\pi\over 2}+{\pi\over 4}}^{3\pi\over 2}f(\theta)\mathrm d\theta\\~~~&-\int_{{3\pi\over 2}}^{{3\pi\over2}+{\pi\over 4}}f(\theta)\mathrm d\theta+\int_{{3\pi\over 2}+{\pi\over 4}}^{2\pi}f(\theta)\mathrm d\theta \end{align}$$ And I got $~ A=0 ~$, but Wolfram shows that the value of $~ A ~$ should be $~ 4\sqrt{2} $. Where have I made a mistake (or mistakes)?
Note $$\int_{0}^{2\pi}\left|\sin\theta+\cos\theta\right|\mathrm d \theta=\int_{0}^{2\pi}\sqrt2\left|\sin(\theta+\frac\pi4)\right|\mathrm d \theta.$$ Since the integrand is periodic with period $2\pi$, shifting by $\frac\pi4$ does not change the integral over a full period, so $$\int_{0}^{2\pi}\left|\sin\theta+\cos\theta\right|\mathrm d \theta=\int_{0}^{2\pi}\sqrt2\left|\sin(\theta+\frac\pi4)\right|\mathrm d \theta=\sqrt2\int_0^{2\pi}|\sin\theta|\mathrm d \theta=4\sqrt2\int_0^{\pi/2}|\sin\theta|\mathrm d \theta=4\sqrt2.$$
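As a numeric cross-check of this value (a Python sketch using a composite Simpson rule; none of this is needed for the argument above):

```python
import math

def simpson(f, a, b, m=4000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(lambda t: abs(math.sin(t) + math.cos(t)), 0.0, 2 * math.pi)
assert abs(val - 4 * math.sqrt(2)) < 1e-4   # val ~ 5.6569
```

With this grid the kinks of the integrand at $3\pi/4$ and $7\pi/4$ land on panel boundaries, so the rule stays accurate despite the absolute value.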
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Find all three complex solutions of the equation $z^3=-10+5i$ Let $z\in \mathbb{C}$. I want to calculate the three solutions of the equation $z^3=-10+5i$. Give the result in Cartesian and in exponential form. Let $z=x+yi$. Then we have $$z^2=(x+yi)^2 =x^2+2xyi-y^2=(x^2-y^2)+2xyi$$ And then $$z^3=z^2\cdot z=[(x^2-y^2)+2xyi]\cdot [x+yi ] =(x^3-xy^2)+2x^2yi+(x^2y-y^3)i-2xy^2=(x^3-3xy^2)+(3x^2y-y^3)i$$ So we get $$z^3=-10+5i \Rightarrow (x^3-3xy^2)+(3x^2y-y^3)i=-10+5i \\ \begin{cases}x^3-3xy^2=-10 \\ 3x^2y-y^3=5\end{cases} \Rightarrow \begin{cases}x(x^2-3y^2)=-10 \\ y(3x^2-y^2)=5\end{cases}$$ Is everything correct so far? How can we calculate $x$ and $y$? Or should we do this another way?
There is another way $$z^3+10-5i=0$$ Let's write the complex constant as $a := 10-5i$ $$z^3+a=0$$ $$z^3+(a^{1/3})^3=0$$ Using the sum of cubes $$(z+a^{1/3})(z^2-a^{1/3}z+a^{2/3})=0$$ We get $z=-a^{1/3}$ as one solution; for the quadratic factor $$z^2-a^{1/3}z+a^{2/3}=0$$ $$z=\frac{-(-a^{1/3})±\sqrt{(-a^{1/3})^2-4(1)(a^{2/3})}}{2(1)}=\frac{a^{1/3}±\sqrt{-3a^{2/3}}}{2}=\frac{a^{1/3}±\sqrt3ia^{1/3}}{2}$$ $$=\left(\frac{1±i\sqrt3}{2}\right)a^{1/3}=a^{1/3}e^{±\pi i/3}$$ Now we find a cube root of $a$ using polar form $$(10-5i)^{1/3}=\left(\sqrt{10^2+5^2}\exp\left(i\arctan\left(-\frac{5}{10}\right)\right)\right)^{1/3}$$$$=\left(5\sqrt5\exp\left(-i\arctan\left(\frac{1}{2}\right)\right)\right)^{1/3}=\sqrt{5}\exp\left(-\frac{i}{3}\arctan{\frac{1}{2}}\right)$$ We get the solutions $${z=\sqrt{5}\exp\left(-\frac{i}{3}\arctan{\frac{1}{2}}\right)e^{k\pi i/3}}, \quad k=-1,1,3$$ (note $k=3$ gives the root $-a^{1/3}$, since $e^{3\pi i/3}=-1$). NOTE: Instead of using the sum of cubes, we can notice the following $$z^3=-a$$ $$z^3=ae^{k\pi i}, \quad k=-1,1,3$$ Take the cube root on both sides $$z=(ae^{k\pi i})^{1/3}=a^{1/3}e^{k\pi i/3}, \quad k=-1,1,3$$ Then compute the cube root of $a$ as above and finish
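A quick verification of the three roots (a Python sketch using the standard library `cmath`; the variable names are mine, and the roots are enumerated here via the equivalent $e^{2\pi i k/3}$ spacing):

```python
import cmath
import math

target = -10 + 5j                  # we solve z^3 = target
r, phi = cmath.polar(target)       # r = 5*sqrt(5), phi = pi - arctan(1/2)
roots = [cmath.rect(r ** (1 / 3), (phi + 2 * math.pi * k) / 3)
         for k in range(3)]

for z in roots:
    assert abs(z ** 3 - target) < 1e-9   # each root really satisfies z^3 = -10+5i

# All three roots have modulus (5*sqrt(5))^(1/3) = sqrt(5)
assert all(abs(abs(z) - math.sqrt(5)) < 1e-12 for z in roots)
```

This matches the answer's formula: the three roots share modulus $\sqrt{5}$ and differ by factors of $e^{2\pi i/3}$.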
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Let $x_i$ be positive, even integer such that $x_1 + x_2 + x_3 + x_4 = 100.$ How many solutions are there for this expression? Let $x_i$ be positive, even integer such that $$x_1 + x_2 + x_3 + x_4 = 100.$$ How many solutions are there for this expression? I think I've understood something incorrectly about the stars and bars idea. If we drop the "even" condition this has $$103\choose 3$$ solutions. Now keeping the evenness we can express $x_i=2a_i$ for $a_i$ a positive integer. We thus get $$2a_1 + 2a_2 + 2a_3 + 2 a_4 = 100$$ which is equivalent to $$a_1 + a_2 + a_3 + a_4 = 50$$ and this has $$53 \choose 3$$ solutions. However $53 \choose 3$ is not apparently the correct answer here? Is there some implication here that doesn't go backwards or what am I missing?
You are correct that the problem reduces to finding the number of positive integers $a_1, \ldots, a_4$ summing to $50$. Write $a_i = b_i + 1$ for each $i$. Then we wish to find the number of nonnegative $b_1, \ldots, b_4$ such that $b_1 + 1 + b_2 + 1 + b_3 + 1 + b_4 + 1 = 50$. That is equivalent to $b_1 + b_2 + b_3 + b_4 = 46$. Apply the stars and bars method to get the answer, $\binom{49}{3}$.
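A brute-force count confirms this (a Python sketch, not part of the original answer): enumerate all positive even triples $(x_1,x_2,x_3)$ and check whether the forced $x_4$ is a positive even integer.

```python
from math import comb

# Count positive even (x1, x2, x3, x4) with x1 + x2 + x3 + x4 = 100 directly.
count = 0
for x1 in range(2, 99, 2):
    for x2 in range(2, 99, 2):
        for x3 in range(2, 99, 2):
            x4 = 100 - x1 - x2 - x3
            if x4 >= 2:            # x4 is automatically even here
                count += 1

assert count == comb(49, 3)        # = 18424, matching stars and bars
```

The direct count agrees with $\binom{49}{3}$, confirming that the substitution argument (and not $\binom{53}{3}$) gives the right answer.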
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }