Using Inclusion-Exclusion, probability of occurrence of exactly $m$ events out of $\{A_1,\ldots,A_n\}$ Using Inclusion-Exclusion, prove that the probability of occurrence of exactly $m$ events out of $\{A_1,\ldots,A_n\}$ is $\sum_{i=0}^{n-m} (-1)^i \binom{m+i}{m} P_{m+i}$, where $P_k = \sum_{1 \leq i_1<\cdots < i_k \leq n} P(A_{i_1} \cap\cdots \cap A_{i_k})$. Attempt: I was trying to understand this term on the RHS: $\binom{m+i}{m} P_{m+i}$; then the inclusion-exclusion principle should follow. This is what I thought in order to be able to use inclusion-exclusion, but I am not very sure if this is the right approach. The $m$ events can occur either from the first $m$ events, or from the first $m+1$ events, or from the first $m+2$ events, $\ldots$, or from the $n$ events. $$E_k = \text{the $m$ events occur from the first $m+k$ events}$$ Then $P(E_k) = \binom{m+k}{m} \sum_{1\leq i_1<\cdots< i_m\leq m+k} P(A_{i_1} \cap \cdots \cap A_{i_m})$. But then I could not move forward from here. I tried understanding the solution given here, but I would really appreciate something more intuitive. Could someone please explain how to move forward from here?
I discovered a marvelous proof. However, since it's long, I haven't been able to type it. Posting clear images for the sake of anyone who might be searching for the same answer.
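Since the posted proof is only available as images, here is a quick machine check of the identity itself on a small finite probability space with counting measure (the universe and the events below are arbitrary illustrative choices, not from the original post):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

universe = range(12)
events = [{0, 1, 2, 3}, {2, 3, 4, 5, 6}, {1, 3, 5, 7, 9}]  # A_1, A_2, A_3
n, m = len(events), 1

def P(s):
    # uniform counting measure on the universe
    return Fraction(len(s), len(universe))

def P_k(k):
    # P_k = sum over all k-subsets of the probability of their intersection
    return sum((P(set.intersection(*c)) for c in combinations(events, k)),
               Fraction(0))

# left side: probability that exactly m of the events occur
lhs = P({w for w in universe if sum(w in A for A in events) == m})
# right side: the alternating sum from the question
rhs = sum((-1)**i * comb(m + i, m) * P_k(m + i) for i in range(n - m + 1))
print(lhs == rhs)  # -> True
```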
{ "language": "en", "url": "https://math.stackexchange.com/questions/4061433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Norm of functional I'm having a little trouble understanding a proof I saw on Math.SE. Let's consider the functional $$f\colon (C[0,1], \Vert\cdot \Vert_\infty) \ni g\mapsto g(0) - 7g(\tfrac12).$$ The calculation of $\Vert f\Vert$ was the following: The norm of $f$ is equal to $8$, since for the map $h(x)=\vert 4x - 2 \vert -1$ we have $\Vert h \Vert_\infty = 1$ and $f(h)=8$. I'm not exactly sure how this completes the justification. Could you please tell me why, referring to the theorems on which this way of thinking is based?
I'm going to call the operator $T$ for clarity (instead of $f$). For any $g$ which is continuous on $[0,1]$, we have that $$|Tg|\leq |g(0)|+7|g(1/2)|\leq 8\|g\|_\infty.$$ Thus, $$\sup_{\|g\|_\infty=1}|Tg|=\|T\|\leq 8.$$ The particular function in your post shows that this upper bound can be achieved, and since the operator norm is defined as the least upper bound, the above is actually an equality.
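For completeness, a small numerical check of this computation (the grid size is an arbitrary choice):

```python
import numpy as np

# the extremal function from the answer: ||h||_inf = 1 and T(h) = 8
h = lambda x: np.abs(4 * x - 2) - 1

xs = np.linspace(0, 1, 100001)      # grid containing 0, 1/2 and 1
sup_norm = np.max(np.abs(h(xs)))    # approximates ||h||_inf
Th = h(0.0) - 7 * h(0.5)            # T(h) = h(0) - 7 h(1/2)
print(sup_norm, Th)  # -> 1.0 8.0
```

So the upper bound $|Tg|\leq 8\|g\|_\infty$ is attained, which is exactly why the supremum equals $8$.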
{ "language": "en", "url": "https://math.stackexchange.com/questions/4061637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why can't the Augmenting Path Algorithm be used for matching in general graphs? I'm told that the problem lies in the possible odd cycles in general graphs. However, I can't find any counterexamples and am getting confused. For example, one says that if we do DFS in the graph below starting from 6 in the order 4-5-1-2-3, we'll miss the augmenting path. But I think starting a search from every unsaturated vertex would solve the problem, since a DFS from 1 in the order 6-5-1-2-3-4 results in an augmenting path. For another example, one says that if we do DFS in an odd cycle, we'll get stuck in an infinite loop. But why would we visit a vertex that has already been visited? I think that's unnecessary. I would be extremely appreciative of any assistance!
It's possible for the same issue to arise starting from every vertex. Consider the following diagram: Here, $1$ and $10$ are the two vertices unsaturated by the matching $\{23, 45, 67, 89\}$, and $(1,2,3,4,5,6,7,8,9,10)$ is an augmenting path. But we won't find it starting from vertex $1$ if we happen to explore from $1$ to $5$ first: we'll walk $(1,5,4,3,2)$ and get stuck. Similarly, we won't find it starting from vertex $10$ if we happen to explore from $10$ to $6$ first: we'll walk $(10,6,7,8,9)$ and get stuck.
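The failure mode can be sketched in code. The graph below is assumed from the description of the diagram (the path $1$-$2$-$\cdots$-$10$ plus the chords $1$-$5$ and $6$-$10$); the search is a deliberately naive alternating DFS with a global visited set, not a full blossom algorithm:

```python
adj = {
    1: [5, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 6, 1],
    6: [5, 7, 10], 7: [6, 8], 8: [7, 9], 9: [8, 10], 10: [9, 6],
}
match = {2: 3, 3: 2, 4: 5, 5: 4, 6: 7, 7: 6, 8: 9, 9: 8}

def augmenting_path(start):
    """Alternate: leave a vertex by an unmatched edge, then follow the
    matched edge. Returns an augmenting path from `start`, or None."""
    visited = set()
    def dfs(u, path):
        visited.add(u)
        for v in adj[u]:
            if v in visited or match.get(u) == v:
                continue           # skip visited vertices and the matched edge
            if v not in match:     # v is unsaturated: augmenting path found
                return path + [v]
            visited.add(v)
            found = dfs(match[v], path + [v, match[v]])
            if found:
                return found
        return None
    return dfs(start, [start])

print(augmenting_path(1))   # explores 1->5 first: None (the path is missed)
adj[1] = [2, 5]             # flip the exploration order
print(augmenting_path(1))   # -> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

After walking $(1,5,4,3,2)$ the vertices $2,\dots,5$ are marked visited, so the search can never re-enter them along the other side of the odd structure; this is precisely what blossom contraction fixes.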
{ "language": "en", "url": "https://math.stackexchange.com/questions/4061766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Infinite type algebra Maybe this question is trivial: most of the algebras I encounter (arising from invariant theory!) are of finite type. Does anyone have a 'simple' example of an algebra/subalgebra of infinite type? In fact, I don't see immediately how to show that some algebra is of infinite type. For instance, if we take the subalgebra $$\mathbb{R}[x^{k}+k,\quad k\geq 2]$$ it seems to me to be of infinite type... Same for $$\mathbb{R}[x^{k_1}y^{k_2},k_1\geq 1,k_2\geq 2]$$ Is it obvious?
Any finite type $K$-algebra is isomorphic to $K[X_1,\ldots,X_n]/I$, for some $n\geq 0$ and some ideal $I$. In particular, a finite type $K$-algebra is a Noetherian ring. Hence any non-Noetherian ring containing a field $K$ as a subring will give you a counterexample. The simplest one is $K[X_m, m\geq 0]$. If you want an example of a finite type algebra having a subalgebra not of finite type, you can take $K[X,Y]$ and $K[XY^k, k\geq 0]$, since the second one is not Noetherian (see How to prove that $k[x, xy, xy^2, \dotsc]$ is not noetherian?)
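To see directly that $K[X_m, m\geq 0]$ fails to be Noetherian (a one-line sketch), exhibit a strictly increasing chain of ideals that never stabilizes:

```latex
(X_0) \subsetneq (X_0, X_1) \subsetneq (X_0, X_1, X_2) \subsetneq \cdots
```

Each inclusion is strict since $X_{n+1}$, being a variable absent from $X_0,\ldots,X_n$, cannot lie in the ideal $(X_0,\ldots,X_n)$.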
{ "language": "en", "url": "https://math.stackexchange.com/questions/4062433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show this integral inequality? For $t \in \mathbb{R}$, let $$ I_1(t) = \int_{0}^1 \frac{x e^x}{1 + \cosh(t x)} dx $$ and $$ I_2(t) = \int_{0}^\infty \frac{x e^{-x}}{1 + \cosh(t x)} dx. $$ I want to show $I_1(t) \geq I_2(t)$ with equality only if $t = 0$. Numerical integration suggests this is true. Any suggestions?
This is only a partial answer: We have $I_1 (0) = I_2 (0) = \frac{1}{2}$ and since $I_1$ and $I_2$ are symmetric functions we can assume $t > 0$ from now on. Then \begin{align} I_1 (t) &= \int \limits_0^1 \frac{x \mathrm{e}^{x}}{1 + \cosh(tx)} \, \mathrm{d} x \stackrel{t x = 2 u}{=} \frac{2}{t^2} \int \limits_0^{t/2} \frac{u \mathrm{e}^{2u/t}}{\cosh^2(u)} \, \mathrm{d} u \, , \\ I_2 (t) &= \int \limits_0^\infty \frac{x \mathrm{e}^{-x}}{1 + \cosh(tx)} \, \mathrm{d} x \stackrel{t x = 2 u}{=} \frac{2}{t^2} \int \limits_0^\infty \frac{u \mathrm{e}^{-2u/t}}{\cosh^2(u)} \, \mathrm{d} u \, . \end{align} Using Taylor series and a few known integrals we can derive the asymptotic behaviour of the integrals (working with the $x$-integrals for small and the $u$-integrals for large values of $t$, of course): \begin{align} I_1(t) &\sim \begin{cases}\frac{1}{2} \left[1 - \frac{3 - \mathrm{e}}{2} t^2 + \mathcal{O} (t^4)\right] , & t \to 0^+ \\ \frac{2 \log(2)}{t^2} \left[1 + \frac{\pi^2}{6 \log(2) t} + \mathcal{O} \left(\frac{1}{t^2}\right)\right], & t \to \infty\end{cases}, \\ I_2(t) &\sim \begin{cases}\frac{1}{2} \left[1 - \frac{3}{2} t^2 + \mathcal{O} (t^4)\right] , & t \to 0^+ \\ \frac{2 \log(2)}{t^2} \left[1 - \frac{\pi^2}{6 \log(2) t} + \mathcal{O} \left(\frac{1}{t^2}\right)\right], & t \to \infty\end{cases}. \end{align} These results show that $I_1(t) > I_2(t)$ holds for sufficiently small or large $t$. 
In order to obtain the inequality on a larger interval we write \begin{align} I_2 (t) &= 2 \int \limits_0^\infty x \mathrm{e}^{-x} \frac{\mathrm{e}^{-t x}}{(1 + \mathrm{e}^{-t x})^2} \, \mathrm{d} x \stackrel{\text{IBP}}{=} \frac{2}{t} \int \limits_0^\infty \frac{(x-1) \mathrm{e}^{-x}}{1 + \mathrm{e}^{-tx}} \, \mathrm{d}x \\ &= \frac{2}{t} \sum \limits_{k=0}^\infty (-1)^k \int \limits_0^\infty (x-1) \mathrm{e}^{-(1+kt)x} \, \mathrm{d} x = 2 \sum \limits_{k=1}^\infty \frac{(-1)^{k-1} k}{(1+kt)^2} \end{align} (interchanging integration and summation is justified by the dominated convergence theorem). Doing the same for $I_1$ we find the series representation $$ I_1(t) - I_2(t) = \frac{2}{t} \sum \limits_{k=1}^\infty (-1)^{k-1} f(kt) \, , \tag{1}$$ where $f \colon \mathbb{R}^+ \to \mathbb{R}^+$ is defined by $$ f(r) = r \left(\frac{1 - r \mathrm{e}^{1-r}}{(1-r)^2} - \frac{1}{(1+r)^2}\right) \, . $$ The apparent singularity at $r=1$ is removed by setting $f(1) = \frac{1}{4}$. $f$ is indeed a positive function, since letting $r = \frac{1 - s}{1 + s}$ and rearranging the inequality shows that $$ f(r) > 0 ~ \forall ~ r \in (0,\infty) \setminus \{1\} ~ \Leftrightarrow ~ \log(1+s) > \frac{s}{1+s} ~ \forall ~ s \in (-1,1) \setminus \{0\}$$ and this elementary inequality is true. Moreover, taking the derivative we see that $f$ increases from $f(0^+) = 0$ to a maximum at $r_0 \simeq 1.8143$ before monotonically approaching zero again. In particular, the sequence $(f(kt))_{k \in \mathbb{N}}$ is strictly decreasing to zero if $t \geq r_0$ (and, in fact, for some smaller $t$ as well). But this implies that the alternating series in equation $(1)$ is positive, so $I_1(t) > I_2(t)$ holds for $t \in [r_0,\infty)$. This method does not work on the remaining interval $(0,r_0)$, however (ironically, this is where $I_1(t) - I_2(t)$ is largest numerically). At the moment, I can only think of piecing together a few Taylor expansions and their error estimates in this region.
We have, for example ($\operatorname{Li}_2$ is the dilogarithm), \begin{align} I_1 (1) &= \frac{\pi^2}{6} - 2 \log(2) + 2 \left[2 \log(1 + \mathrm{e}) + \operatorname{Li}_2 \left(-\mathrm{e}\right) - \frac{\mathrm{e}}{1 + \mathrm{e}}\right] \, , \\ I_2 (1) &= \frac{\pi^2}{6} - 2 \log(2) \end{align} and the expression in the square brackets is positive, so the inequality holds in some neighbourhood $(\varepsilon, 2 -\varepsilon)$ of $1$. It should be possible (but probably tedious) to make $\varepsilon > 0$ small using Taylor polynomials. Combining this with the previous results would then complete the proof, but this is clearly not a very elegant method.
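A quick numerical spot check of the inequality and of the closed form for $I_2(1)$ (the sample values of $t$ and the truncation of the infinite integral at $50$ are arbitrary choices; the integrand decays like $\mathrm{e}^{-x}$, so the truncation error is negligible):

```python
import numpy as np
from scipy.integrate import quad

def I1(t):
    return quad(lambda x: x * np.exp(x) / (1 + np.cosh(t * x)), 0, 1)[0]

def I2(t, upper=50.0):  # truncated version of the integral over (0, inf)
    return quad(lambda x: x * np.exp(-x) / (1 + np.cosh(t * x)), 0, upper)[0]

for t in (0.5, 1.0, 2.0, 5.0):
    assert I1(t) > I2(t)   # the conjectured inequality at a few sample points

# compare with the closed form I_2(1) = pi^2/6 - 2 log 2 from above
print(abs(I2(1.0) - (np.pi**2 / 6 - 2 * np.log(2))))  # should be tiny
```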
{ "language": "en", "url": "https://math.stackexchange.com/questions/4062585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Quadratic and Linear Function relationship I just stumbled across this problem and would appreciate some help with it, as I'm not getting far with it. Problem: You are given a quadratic function $f(x)=ax^2+bx+c$ and a linear function $g(x)$. The two functions intersect at $x=0$ and also at an $x$ with $g(x)=f(x)=0$ and $x<0$. Which of the two could, for some values of $a,b,c$, be an expression for $g(x)$: * *$g(x)=bx+c$ *$g(x)=ax+c$ My Progress thus far: We know that the $y$-intercept must be $c$, because $f(x)$ and $g(x)$ intersect at $x=0$. So that makes perfect sense. However, I fail to see how for some values of $a,b,c$ the gradient of $g(x)$ could be either $a$ or $b$. Any help would be greatly appreciated.
Given $f(x)=ax^2+bx+c$ with $a \neq 0$, suppose $g(x)=bx+c$ and that the second intersection point is at some $x_1<0$. Then $$f(x_1)=ax_1^2+bx_1+c \tag{I}$$ $$g(x_1)=bx_1+c \tag{II}$$ Subtracting (II) from (I) gives $ax_1^2 = 0$, and since $x_1<0$ this forces $a=0$, which is not allowed. So this is not the right choice; the only option is (2), $g(x)=ax+c$. You can prove that it works from here.
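A small symbolic check with illustrative values $a=1$, $b=3$, $c=2$ (chosen only to show that option (2) can occur):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c = 1, 3, 2                 # illustrative values, not from the problem
f = a * x**2 + b * x + c          # f(x) = x^2 + 3x + 2
g = a * x + c                     # option (2): g(x) = ax + c

roots = sorted(sp.solve(sp.Eq(f, g), x))
print(roots)                           # -> [-2, 0]
print(f.subs(x, -2), g.subs(x, -2))    # -> 0 0  (both vanish at x = -2 < 0)
```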
{ "language": "en", "url": "https://math.stackexchange.com/questions/4062692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What's wrong in my approach in evaluating $\cos^{-1}(\cos\ 10)$ The Question $\large\cos^{-1}(\cos 10)$ My Approach $\large\cos^{-1}(\cos\ (10-2\pi))$, as $2\pi$ is the period of $\cos x$. Now we know that $\large\cos^{-1}(\cos x) = x$ holds for $\large x \in [0,\pi]$, but currently our angle is in the 3rd quadrant, so we need to get it into the 1st or 2nd quadrant. Therefore let us subtract $\large \pi$ from our current angle, i.e. $\large 10-2\pi$: $\large\cos^{-1}(\cos\ (-((10-2\pi)-\pi)))$. Now you must be wondering why I put this minus sign: we know that $\large \cos x$ is negative in the third quadrant, and by subtracting $\large \pi$ we brought the angle into the 1st quadrant, where $\large \cos x$ is positive, so to compensate I have put a minus sign. Now it is within range, so we can use $\large\cos^{-1}(\cos x) = x$. So the final answer is $\large -((10-2\pi)-\pi) = 3\pi - 10$. But this answer is wrong and the correct answer is $\large 4\pi - 10$. They have used the property $\large \cos\ (\pi - \theta) = \cos\ (\pi + \theta)$. But why is my approach wrong? Kindly help.
If $x$ is in the third quadrant, its mirror image across the $x$-axis is in the second quadrant and has the same cosine. This symmetric point has curvilinear abscissa $6\pi - x$, as you've found. To determine the value which is represented by the same point on the trigonometric circle, let's just make some elementary manipulations on inequalities. If $x=10$, we have $3\pi < x <\dfrac{7\pi}2$, so \begin{align} &&\frac{5\pi}2 &<6\pi -x< 3\pi \\[1ex] &\llap{\text{whence}}\qquad& \frac \pi 2 &<(6\pi-x)-2\pi=4\pi -x <\pi&&\qquad \end{align} The solution follows.
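A one-line numerical confirmation of the value $4\pi-10$:

```python
import math

val = math.acos(math.cos(10))
print(val, 4 * math.pi - 10)  # both ~2.56637, and 2.56637 lies in [0, pi]
```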
{ "language": "en", "url": "https://math.stackexchange.com/questions/4062893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Composition of uniform function and continuous function The following is given: $$g: \mathbb{R}\to\mathbb{R}\colon\quad x \mapsto f\left(\frac{1}{1+x^2}\right),$$ with $f:\mathbb{R}^+\to\mathbb{R}$ continuous. Is $g$ uniformly continuous? I answered yes, and I thought I could prove it by splitting the problem: looking inside an interval and outside that interval, where the interval is big enough that $g(x)$ goes to $f(0)$. Then using the triangle inequality I proved the claim. However, the next question is: is $h\colon\mathbb{R}\to\mathbb{R}: x \mapsto g(x)^2$ also uniformly continuous? My gut feeling says no, but also yes because I can use the same method; yet I can neither prove this nor give a counterexample.
We need $f$ defined and continuous at $x = 0$ for $g$ to be uniformly continuous. For example, if $f:\mathbb{R}^+ = (0,\infty) \to \mathbb{R}$ with $f(x) = 1/x$, then $f$ is continuous on its domain, but $g(x) = 1 + x^2$ is not uniformly continuous on $\mathbb{R}$. The same applies to $h = g^2$. Otherwise, if $f$ is continuous on $[0,\infty)$ then both $g$ and $h=g^2$ are uniformly continuous on $\mathbb{R}$. The proofs are similar and the one for $h$ follows. Since $\lim_{x \to \pm \infty}h(x) = f^2(0)$, there exists $X>0$ such that $|h(x) - f^2(0)| < \epsilon/3$ for all $x \in [X, \infty)$ and $x \in (-\infty,-X]$. Since $h$ is continuous on the compact set $[-X,X]$, it is uniformly continuous there and there exists $\delta > 0$ such that $|h(x) - h(y)| < \epsilon/3$ for $x,y \in [-X,X]$ when $|x-y| < \delta$. Now we can show that for all $x, y \in \mathbb{R}$ such that $|x - y| < \delta$, we have $|h(x) - h(y)| < \epsilon$. There are a few cases to consider. For example if $x \in [-X,X]$, $y \in [X,\infty)$ and $|x-y| < \delta$, then $$|h(x) - h(y)| \leqslant |h(x) - h(X)| + |h(X) - f^2(0)| + |f^2(0) - h(y)| < \frac{\epsilon}{3} + \frac{\epsilon}{3}+ \frac{\epsilon}{3} = \epsilon$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Factorization $x^4+px^3+qx^2+r x +s=(x^2+a x +b)(x^2+\bar a x +\bar b)$ Question: Under what conditions does the quartic polynomial with rational coefficients $p$, $q$, $r$ and $s$ factorize as $$x^4+px^3+qx^2+r x +s= (x^2+a x +b)(x^2+\bar a x +\bar b) $$ with $a$, $b$ complex numbers, along with their conjugates $\bar a $, $\bar b$? Examples: $$x^4+2x^3+6x^2+2x+1=( x^2 +(1-i \sqrt3)x +1) (x^2 +(1+i \sqrt3)x +1) $$ $$x^4+2x^3+4x^2+2=( x^2 +(1+i)x +(1-i)) (x^2 +(1-i)x +(1+i)) $$ Note that the symmetry of coefficients leads to such a factorization, as seen in the first example; but not exclusively so, as shown by the second example. Is there any test on the coefficients $p$, $q$, $r$ and $s$ that can be carried out to determine the possibility of such a factorization? I reviewed the discriminant tests here on the nature of roots for quartic equations and did not find anything applicable.
Short answer: nothing here involves any special manipulation; it is just simple algebra. Let $a,b,c,d \in\mathbb R$. Then we have $$\left(x^2+(a+bi)x+(c+di)\right)\left(x^2+(a-bi)x+(c-di)\right)=x^4+2ax^3+(a^2+b^2+2c)x^2+(2ac+2bd)x+(c^2+d^2).$$ $$x^4+px^3+qx^2+rx+s=x^4+2ax^3+(a^2+b^2+2c)x^2+(2ac+2bd)x+(c^2+d^2)$$ which gives the system $$\begin{cases} p=2a \\ q=a^2+b^2+2c \\ r=2ac+2bd \\s=c^2+d^2\end{cases} $$ * *If $d=0$, then the system of equations becomes extremely simple. I'll leave this case to you. Here we will work with $d≠0.$ *If $d≠0$, then $s-c^2≠0$. We have $$\begin{cases} a=\frac p2 \\b^2+2c=q-\frac{p^2}{4} \\pc+2bd=r \\c^2+d^2=s \end{cases}$$ $$\implies \begin{cases}\left( \frac{r-pc}{2d}\right)^2+2c-q+\frac{p^2}{4} =0 \\ c^2+d^2=s \end{cases}$$ Finally we get $$\frac{(r-pc)^2}{4(s-c^2)}+2c-q+\frac{p^2}{4} =0$$ $$\color {gold}{\boxed {\color{black}{8 c^3-4qc^2+(2pr-8s)c+(4qs-sp^2-r^2)=0.}}}$$ Thus we obtain a cubic equation in $c$. Since all of the coefficients of the cubic equation are real numbers, it has at least one real root. Therefore, we can always choose $c$ to be a real number. If you want $a,b,c,d$ to be rational, then we can immediately use the Rational Root theorem. For this, all we need is to find the factors of the expression $\color{red}{\dfrac{4qs-sp^2-r^2}{8}.}$ We're almost done; there are two last restrictions. After finding a real $c$, we must check the following two conditions: * *$s-c^2≥0$ *$p^2-4q+8c≤0.$ If these conditions hold, then you can easily find $a,b,d$ from the system of equations. End of the answer.
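As a sketch, the recipe above can be run on the second example from the question, $x^4+2x^3+4x^2+2$, where $c=1$ turns out to be the rational root of the cubic:

```python
import sympy as sp

x, c = sp.symbols('x c')
p, q, r, s = 2, 4, 0, 2          # the example x^4 + 2x^3 + 4x^2 + 2

cubic = 8*c**3 - 4*q*c**2 + (2*p*r - 8*s)*c + (4*q*s - s*p**2 - r**2)
c0 = sp.Integer(1)               # rational root, found via the Rational Root theorem
assert cubic.subs(c, c0) == 0
assert s - c0**2 >= 0 and p**2 - 4*q + 8*c0 <= 0   # the two restrictions

a0 = sp.Rational(p, 2)           # a = p/2
d0 = sp.sqrt(s - c0**2)          # d^2 = s - c^2 (choosing the + sign)
b0 = (r - p*c0) / (2*d0)         # from r = pc + 2bd

quartic = sp.expand((x**2 + (a0 + b0*sp.I)*x + (c0 + d0*sp.I))
                    * (x**2 + (a0 - b0*sp.I)*x + (c0 - d0*sp.I)))
print(quartic)  # -> x**4 + 2*x**3 + 4*x**2 + 2
```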
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
prove $\left\{\begin{array}{l}X^{\prime \prime}+\lambda X=0,\quad 0<x<1\\ X(0)=0,\quad X(1)+X^{\prime}(1)=0\end{array}\right.$ has only eigenvalues $\lambda>0$, without solving Can you help me with some idea to prove it, or point me to a similar question about it? I know proving $\lambda>0$ is easy when solving it explicitly; it's just like a normal differential equation.
The idea is that the differential equation either has sines and cosines for solutions (if $\lambda > 0$) or exponentials (if $\lambda < 0$) or linear ($\lambda = 0$). The second boundary condition looks a bit strange. If it had been $X(1) = 0$, that would have been enough to conclude it must be trig functions right there. The $\lambda = 0$ case can be ruled out because $X(1) + X'(1) = 0$ means that there must be a positive slope to a negative value, or vice versa, which doesn't work starting from the origin except with a zero solution.
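For completeness, here is a sketch of the standard energy-method argument, assuming the boundary conditions $X(0)=0$ and $X(1)+X'(1)=0$ referred to above: multiply the equation by $X$ and integrate by parts,

```latex
0 \;=\; \int_0^1 X\,\bigl(X'' + \lambda X\bigr)\,\mathrm{d}x
  \;=\; \bigl[X X'\bigr]_0^1 \;-\; \int_0^1 (X')^2\,\mathrm{d}x
  \;+\; \lambda \int_0^1 X^2\,\mathrm{d}x .
```

With $X(0)=0$ and $X'(1)=-X(1)$ the boundary term equals $-X(1)^2$, so $\lambda \int_0^1 X^2 = X(1)^2 + \int_0^1 (X')^2 \geq 0$. Moreover $\lambda = 0$ would force $X' \equiv 0$, which together with $X(0)=0$ gives only the trivial solution; hence every eigenvalue satisfies $\lambda > 0$.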
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Reduction formula for $\int \frac { x ^ n } { \sqrt { 1 - x ^ 2 } } \ \mathrm d x $ I'm struggling to go any further. Anyone have any hints? $$ I _ n = \int \frac { x ^ n } { \sqrt { 1 - x ^ 2 } } \ \mathrm d x $$ I used integration by parts where I differentiated $ x ^ n $, but it resulted in an $ \arcsin x $ term which didn't get me anywhere. $$ u = x ^ n $$ $$ v = \arcsin x $$ $$ \mathrm d v = \frac { \mathrm d x } { \sqrt { 1 - x ^ 2 } } $$ $$ I _ n = \int u \ \mathrm d v = u v - \int v \ \mathrm d u = x ^ n \arcsin x - n \int x ^ { n - 1 } \arcsin x \ \mathrm d x $$
$$I_n=\int\frac{x^n}{\sqrt{1-x^2}}dx$$ Let $x=\sin y\implies dx=\cos y \,dy$ $$I_n=\int\sin^ny \,dy$$ which has a well known reduction formula
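Carrying this out with the well-known reduction formula for $\int \sin^n y\,\mathrm{d}y$ and substituting back ($\sin y = x$, $\cos y = \sqrt{1-x^2}$), one obtains the formula in terms of $x$:

```latex
I_n \;=\; -\,\frac{x^{\,n-1}\sqrt{1-x^2}}{n} \;+\; \frac{n-1}{n}\, I_{n-2},
\qquad n \ge 2,
```

with base cases $I_0 = \arcsin x + C$ and $I_1 = -\sqrt{1-x^2} + C$; differentiating the right-hand side confirms the formula.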
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Stopped martingale Given a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a $\mathbb{P}$-Brownian motion $B$ starting from $0$, consider $\tau$ the first hitting time of $1$. As $\tau<\infty$ $\mathbb{P}$-a.s., the optional stopping theorem tells us that the stopped process $B^\tau$ is a martingale starting from zero and ending at $1$. However, $1=\mathbb{E}[B_\tau]\neq \mathbb{E}[B_{\tau\wedge 0}]=0$, indicating it should not be a martingale. How can we reconcile this? Where does the argument go wrong?
Yes, the process $(B_{t \wedge \tau})_{t \geq 0}$ is a martingale, but you are considering the random variable $B_{\tau}$, which you cannot find among the family $(B_{t \wedge \tau})_{t \geq 0}$ for any fixed $t$, since $\tau$ is not bounded. Indeed $B_{t \wedge \tau} \to B_\tau$ almost surely as $t \to \infty$, but the convergence does not hold in $L^1$, which is why $\mathbb{E}[B_\tau] = 1$ while $\mathbb{E}[B_{t \wedge \tau}] = 0$ for every fixed $t$.
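A small simulation illustrates the point (the step count, horizon, and seed are arbitrary choices; the overshoot of the discrete walk at the hitting time is ignored):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 4000, 1500, 4.0
dt = T / n_steps

steps = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)             # discretised Brownian paths
hit = paths.max(axis=1) >= 1.0               # did the path reach 1 before T?
stopped = np.where(hit, 1.0, paths[:, -1])   # the variable B_{T ∧ τ}

print(stopped.mean())  # ~0: consistent with E[B_{T ∧ τ}] = 0 for fixed T
print(hit.mean())      # strictly below 1: many paths have not hit 1 yet
```

Every path that has hit contributes exactly $1$, but the paths that have not hit yet sit well below zero on average, and that balance is exactly the martingale property at time $T$.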
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many possible seat arrangements are there? Four boys, Bobby, Benson, Benny, and Benjamin, and three girls, Grace, Gloria, and Georgia, are to be seated in a row according to the following rules: * *a boy will not sit next to another boy and a girl will not sit next to another girl. *Benjamin must not sit next to Grace. How many possible seat arrangements are there? My solution: $$3!4! - 6 = 138$$ The $3!4!$ is the answer if rule #$1$ is the only rule. I subtracted $6$ because it's supposedly the number of possibilities with Benjamin and Grace sitting next to each other. I'm sure there are a lot of flaws in my solution since, in the answer key, the answer is $72$. I have just recently dived into these types of problems, so please give me constructive criticism for the many flaws, and preferably some detailed explanations. Thank you.
Let us name them $B_1,B_2,B_3,B_4$ and $G_1,G_2,G_3$, respectively. Arrange the girls as $-G_1-G_2-G_3-$; there are $4$ places for the boys, and we do not want $B_4$ next to $G_1$. So there are $3$ possible choices for the seat to the left of $G_1$ and $2$ for the seat to its right (because we cannot place $B_4$ on either side of $G_1$). The remaining two boys then fill the remaining two places in $2 \times 1$ ways, so there are $3 \times 2 \times 2 \times 1 = 12$ arrangements for the boys and $3!$ arrangements for the girls. Hence the answer is $12 \times 3! = 72$.
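A brute-force count over all $7!$ seatings confirms the answer (labels as above, with $B_4$ = Benjamin and $G_1$ = Grace):

```python
from itertools import permutations

boys = {"B1", "B2", "B3", "B4"}
people = ["B1", "B2", "B3", "B4", "G1", "G2", "G3"]

def ok(row):
    pairs = list(zip(row, row[1:]))
    if any((a in boys) == (b in boys) for a, b in pairs):
        return False                   # two boys or two girls are adjacent
    return not any({a, b} == {"B4", "G1"} for a, b in pairs)

count = sum(ok(row) for row in permutations(people))
print(count)  # -> 72
```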
{ "language": "en", "url": "https://math.stackexchange.com/questions/4063969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is the primitive of a density function Hölder continuous? Let $I=[-1,1]$ and let $f\in L^1(I)$. Is it true that the function $F:I\to\mathbb{R}^+$ defined as $$F(s)=\int_{-1}^s |f(x)|dx$$ belongs to $C^{0,\alpha}(I)$ for a certain $\alpha>0$?
For $j=1,2,\dots$ let $\delta_j=2^{-j^2}$. Define $$c_j=\delta_j^{-1+\frac1j},$$ $$I_j=(0,\delta_j)$$ and $$f=\sum_1^\infty c_j\chi_{I_j}.$$ Note that $f\in L^1(I)$, since $\sum_j c_j\delta_j=\sum_j \delta_j^{1/j}=\sum_j 2^{-j}<\infty$. Since all the terms are non-negative, for every $\alpha>0$ we have $$F(\delta_j)-F(0)=F(\delta_j)\ge\int_0^{\delta_j}c_j\chi_{I_j}\,dx=c_j\delta_j=\delta_j^{1/j}\ne O(\delta_j^\alpha),$$ because $\delta_j^{1/j}=2^{-j}$ while $\delta_j^\alpha=2^{-\alpha j^2}$ decays much faster.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to see that existence of a non-trivial parallel $p$-form implies $b_p\geq 1$ using de Rham cohomology? Suppose $(M,g)$ is a (for simplicity, closed) Riemannian manifold. Because every parallel $p$-form $\omega$ is harmonic, the $p$-th Betti number should be positive, i.e. $b_p\geq 1$. How can one see this using de Rham cohomology (of course without using properties from Hodge theory like $\delta$, the 1-1 correspondence between de Rham classes and harmonic forms, etc.)? It is well known that parallel forms are closed. What is the next step? Since the argument based on harmonic forms is so easy, I think there must be a similarly easy argument based on de Rham cohomology; or does it need a somewhat difficult argument?
You should read a reference on Hodge theory. The conclusion is not hard, that the map $\mathcal H^k \to H^k_{dR}$ is injective (that is, that harmonic forms are closed and exact harmonic forms are zero). The hard part is arguing that this map is surjective. Lemma: A $p$-form is harmonic, $\Delta \alpha = 0$, if and only if $d \alpha = 0$ and $d^* \alpha = 0$. Proof: Recall that $\Delta = \pm (d^* d + d d^*)$, where the sign depends on dimension of manifold and degree $p$ of the form. $$\langle \pm \Delta \alpha, \alpha \rangle = \langle d^* d \alpha, \alpha \rangle + \langle d d^* \alpha, \alpha \rangle = \langle d^* \alpha, d^*\alpha \rangle + \langle d\alpha, d\alpha \rangle = |d^*\alpha|^2 + |d\alpha|^2,$$ here using that $d^*$ is the $L^2$-adjoint of $d$; the inner product is $\langle \alpha, \beta \rangle = \int \alpha \wedge *\beta$. The RHS can only be zero if $d^* \alpha$ and $d\alpha$ are both zero. QED. Corollary: if $\alpha$ is an exact harmonic form, then $\alpha = 0$. Proof: we know that $d^*\alpha = 0$. If $\alpha = d\beta$, it follows that $d^* d\beta = 0$. But then $$0 = \langle d^* d\beta, \beta \rangle = \langle d\beta, d\beta \rangle = |d\beta|^2,$$ so that $0 = d\beta = \alpha,$ as desired. The same argument shows that coexact harmonic forms ($\alpha = d^* \beta$) are also zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Volume of solid of revolution by revolving the region $y=x^2$,$x=0$,$y=9$ Find the volume generated by revolving the region shown below: I have done as follows: Revolve the rectangle formed by the vertices $(0,0),(3,0),(3,9),(0,9)$ about $X$ axis. The required volume generated is the volume of cylinder of radius $9$ and height $3$ which is $$V_1=\pi (9)^2(3)=243\pi$$ Now revolve the region formed by $y=x^2$,$x=3$ and $X$ axis about the $X$ axis. By Disk method the volume is: $$V_2=\pi \int_{0}^{3}(x^2)^2dx=\frac{243\pi}{5}$$ So the required volume is $$V_1-V_2=\frac{972\pi}{5}$$ Is there any other approach?
We could set it up by integrating volume of cylindrical shells as well (shell method). As we are rotating the region around x-axis, At distance $y$ from x-axis, the shell width = $ \sqrt y \ $ and $0 \leq y \leq 9$. So $ \ V = \displaystyle \int_0^9 2 \pi y \ \sqrt y \ dy = \frac{972 \pi}{5}$ We could also set up using the disk method as (similar to what you did), $\displaystyle \int_0^3 \int_{x^2}^{9} 2 \pi y \ dy \ dx = \frac{972 \pi}{5}$
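Both set-ups can be verified symbolically, for instance with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)

# shell method: radius y, shell length sqrt(y), for 0 <= y <= 9
shell = sp.integrate(2 * sp.pi * y * sp.sqrt(y), (y, 0, 9))

# disk-style double integral: y from x^2 up to 9, x from 0 to 3
disk = sp.integrate(sp.integrate(2 * sp.pi * y, (y, x**2, 9)), (x, 0, 3))

print(shell, disk)  # -> 972*pi/5 972*pi/5
```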
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I solve this maximization problem with inequality constraint on the Frobenius norm? $$ \max_{C\in \mathbb{S}^n} x^TCx\\ s.t. \|C\|_{tr}^2\leq b $$ where $\|C\|_{tr}^2=tr(C^TC)=tr(CC)$, $C$ is a symmetric matrix.
Through some calculations, I have an idea, as follows. Form the Lagrangian $$ L=x^TCx-\lambda(\|C\|_{tr}^2-b). $$ Setting the derivatives to zero gives $$ \frac{\partial L}{\partial C}=xx^T-2\lambda C=0, \qquad \frac{\partial L}{\partial \lambda}=\operatorname{tr}(CC^T)-b=0, $$ that is, $$ C=\frac{xx^T}{2\lambda}. $$ Plugging this into the constraint: $$ \operatorname{tr}(CC^T)=\frac{\operatorname{tr}(xx^Txx^T)}{4\lambda^2}=\frac{\|x\|^4}{4\lambda^2}=b \quad\Longrightarrow\quad 2\lambda=\frac{\|x\|^2}{\sqrt{b}}. $$ Hence the solution is $$ C^*=\sqrt{b}\,\frac{xx^T}{\|x\|^2}. $$
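A numerical sanity check of $C^*$ (the dimension, seed, and value of $b$ are arbitrary): by Cauchy-Schwarz for the Frobenius inner product, $x^TCx=\langle C,xx^T\rangle_F \le \|C\|_F\,\|xx^T\|_F=\sqrt{b}\,\|x\|^2$, and $C^*$ attains this bound:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)
b = 3.0

C_star = np.sqrt(b) * np.outer(x, x) / (x @ x)

assert np.isclose(np.trace(C_star @ C_star), b)          # constraint is tight
assert np.isclose(x @ C_star @ x, np.sqrt(b) * (x @ x))  # attains sqrt(b)*||x||^2

# no feasible random symmetric C should beat it
for _ in range(100):
    A = rng.standard_normal((5, 5))
    C = (A + A.T) / 2
    C *= np.sqrt(b) / np.linalg.norm(C, 'fro')  # scale onto the boundary
    assert x @ C @ x <= x @ C_star @ x + 1e-9
print("ok")
```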
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What use is $L^2$-convergence for Fourier series? I'm working through some notes for my signal processing class and there's something elementary that baffles me. We spent many hours and dozens of pages setting up the entire theory of Hilbert spaces in order to define the Fourier series of a square integrable periodic function in terms of the orthogonal basis of exponentials $ e_n(t):[0,2\pi] \to \mathbb{C}: t \mapsto e^{int} $. Then all of a sudden, out of the blue, the notes assault me with a seemingly unrelated theorem about pointwise convergence of the series, and I find out that pointwise convergence isn't guaranteed by $L^2$-convergence. So my (probably naive) question is: what was all that work good for? What use is $L^2$-convergence if it doesn't guarantee pointwise convergence?
In addition to paul's answer, one thing to keep in mind is that Hilbert spaces are typically meaningless when it comes to function values at individual points. An often-overlooked fact is that if you take a function $f\in L^2$ and modify it at a single point $x_0$, calling the result $g$, then $\|f-g\|_2=0$. More generally, the elements of $L^2$ are representatives of equivalence classes under the relation $f\equiv g$ defined by $\|f-g\|_2=0$. This can also be expressed in terms of measure theory: $\|f-g\|_2=0$ iff $f$ and $g$ differ on a set of measure $0$. So, when speaking about convergence in these spaces, typically the best you can do is almost-everywhere convergence, not pointwise convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Inequality on the family of intersecting antichain Let $\{A_1, A_2,..., A_m\}$ be an intersecting antichain of subsets of $[n]$ s.t. $|A_i| \leq n/2$ for each $i$. Prove that $$\sum_{i=1}^m \binom{n-1}{|A_i|-1}^{-1} \leq 1.$$ I know it originally appeared in Sperner Systems Consisting of Pairs of Complementary Subsets [Bollobas, 1972], but the proof in the paper is somehow confusing to me. Could anyone give me more explanations on that? Thanks a lot!
I think I can answer it myself now. For $A \in \mathcal{F}$ and a permutation $\pi$ of $[n]$, let $w(\pi, A) = 1/|A|$ if $\pi(A)$ is cyclically consecutive and $w(\pi, A) = 0$ otherwise. We do double counting on $\sum_{\pi, A} w(\pi, A)$. First we fix $A$: there are $n\,|A|!\,(n-|A|)!$ permutations placing $A$ in cyclically consecutive positions, so $$\sum_{\pi} w(\pi, A) = \frac{n\,|A|!\,(n-|A|)!}{|A|}, \qquad \sum_{\pi, A} w(\pi, A) = \sum_{A \in \mathcal{F}} \frac{n\,|A|!\,(n-|A|)!}{|A|}.$$ Then we fix $\pi$, and by the Lemma in the paper we have $$\sum_{\pi, A} w(\pi, A) = \sum_\pi \sum_{A \in \mathcal{F}} w(\pi, A) \leq \sum_\pi 1 = n!.$$ Since $\frac{n\,|A|!\,(n-|A|)!}{|A|} = \frac{n!}{\binom{n-1}{|A|-1}}$, combining the two counts and dividing by $n!$ completes the proof.
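The key count, the number of permutations placing a fixed $A$ in cyclically consecutive positions, can be brute-force checked for small parameters, e.g. $n=5$, $|A|=2$ (the adjacency test below is specific to $|A|=2$):

```python
from itertools import permutations
from math import factorial

n, A = 5, {0, 1}

def cyclically_consecutive(perm, A):
    # for |A| = 2: the two positions are adjacent on the cycle of length n
    pos = sorted(perm.index(a) for a in A)
    return (pos[1] - pos[0]) % n in (1, n - 1)

count = sum(cyclically_consecutive(p, A) for p in permutations(range(n)))
print(count, n * factorial(len(A)) * factorial(n - len(A)))  # -> 60 60
```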
{ "language": "en", "url": "https://math.stackexchange.com/questions/4064834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Basis of $V=\{a\cdot(1,2,3)^T\}$ I have a vector space $V=\left\{a\left(\begin{array} {l}1 \\ 2 \\ 3\end{array}\right)\right\}$ where $a$ is any real number. Can I choose its basis as $\left(\begin{array}{l}1 \\ 2 \\ 3\end{array}\right)$? I think that it spans the space, but then I could also take as its basis the set of these three vectors $\left(\begin{array}{l}1 \\ 0 \\ 0\end{array}\right)$ $\left(\begin{array}{l}0 \\ 1\\ 0\end{array}\right)$$\left(\begin{array}{l}0 \\ 0 \\ 1\end{array}\right)$. Something is wrong with my logic. Can anyone please point out my error? Thank you
Yes, the set $A$ containing only one element, $\begin{pmatrix}1\\2\\3\end{pmatrix}$, is indeed a basis for $V$. It is very easy to show that the set satisfies both conditions that are in the definition of basis: * *It is true that the span of $A$ is equal to $V$, i.e. $\mathrm{span}(A)=V$. *It is true that the set $A$ is linearly independent. On the other hand, the set $B=\left\{\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right\}$, containing the three vectors you list is not a basis for $V$, because the first condition fails. The span of $B$ includes the vector $\begin{pmatrix}1\\0\\0\end{pmatrix}$ which is not an element of $V$. This means that the span is not equal to $V$, so $B$ cannot be a basis for $V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4065208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A reference book to study Thom-Pontryagin Theory from basics I am a beginner in Algebraic Topology and Differential Topology. Please suggest a reference book for studying Thom-Pontryagin Theory from the basics that is easy for a beginner to understand. Thank you.
The standard reference for this would be Robert Stong's book "Notes on Cobordism Theory". The first chapter sets up the theory of cobordism categories, and then in the second chapter, the Pontryagin-Thom Theorem is proved in the generality of cobordism classes of manifolds with normal/tangential structure. I think this book is not too difficult to understand. Nonetheless, the first chapter requires some basic category theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4065356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About the transference of perfect sets under Polish group actions Let $G$ be a Polish group, $X$ a Polish space on which $G$ acts continuously and consider the orbit equivalence relation on $X$ with respect to $G$. Suppose $A\subseteq X$ is a perfect set of pairwise non-orbit-equivalent elements, $B\subseteq X$ is closed and for all $a\in A$, the $G$ orbit of $a$ contains exactly one $G$ orbit of $B$. My question: Does it follow that $B$ contains a perfect set of pairwise non-orbit-equivalent elements? I assume the answer is positive but I cannot prove it. One idea I have been entertaining is to show that for some $g\in G$, there are uncountably many $a\in A$ such that $g\circ a\in B$, where $\circ$ denotes the action of $G$ on $X$. Clearly, if this can be done, then by the perfect set theorem the question is answered. Any help is appreciated.
Sorry about my earlier false attempt. I think this one is slightly better. Consider the Polish space $A\times G$ with a subset $S$, such that $(a,g)\in S$ iff $g\circ a\in B$. Clearly $S$ is closed, and $\operatorname{proj}_A(S)=A$ by assumption. By Jankov–von Neumann uniformization, there is a $\sigma(\mathbf{\Sigma^1_1})$ measurable function $f:A\to G$ such that $f\subseteq S$. In particular, $f$ is Baire measurable. Recall that every Baire measurable function is continuous on a comeager set, i.e. we may fix some comeager $G_\delta$ set $C\subseteq A$ such that $f\upharpoonright C$ is continuous. Now, because for any $a\ne a'\in A$ we have $f(a)\circ a\nsim_G f(a')\circ a'$, the set $T=\{f(a)\circ a\mid a\in C\}\subseteq B$ is $G$-independent. As $C$ is a comeager $G_\delta$ set, the set $T$ is also an uncountable Borel set. By the perfect set property of Borel sets, $T$ has a perfect $G$-independent subset as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4065513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Convergent sequence/series proof Let $\{x_n\}$ for $n \geq 1$ be a sequence of positive real numbers and let $s_n = \frac{x_n}{n}$. Let $\{y_n\}$ for $n \geq 1$ be another sequence such that $y_n = n^{-1} \sum_{i = 1}^{n} x_i$. Prove that if $\sum_{j = 1}^{\infty} s_j$ converges, then $\{y_n\}$ converges to $0$. A proof I was shown: Let $S_n = \sum_{j=1}^{n} s_j = \sum_{j=1}^n \frac{x_j}{j}$. Because it converges there is some $S$ such that $\lim_{n \rightarrow \infty} S_n = \sum_{j=1}^{\infty} \frac{x_j}{j} = S$. We can also write: $$y_n =n^{-1}\sum_{k=1}^n x_k = n^{-1} \sum_{k=1}^nk \frac{x_k}{k} = n^{-1} \sum_{k=1}^nk s_k$$ Using summation by parts, with $S_0 := 0$, $$\begin{align*}y_n &= n^{-1} \sum_{k=1}^nk(S_k- S_{k-1})\\ &= n^{-1}\left(nS_n + \sum_{k=1}^{n-1}S_k (k - (k+1)) \right)\\ &= S_n - n^{-1}\sum_{k=1}^{n-1}S_k \end{align*}$$ We need to show $n^{-1}\sum_{k=1}^{n-1}S_k \to S$ as $n \to \infty$ when $S_n \to S$. Since $S_n$ converges to $S$, there is some $N \in \mathbb{N}$ such that for $n \geq N$ $$\begin{align*} \left|n^{-1} \sum_{k=1}^n (S_k- S_{k-1})\right| &\leq n^{-1} \sum_{k=1}^N |S_k- S_{k-1}| \\ &\qquad+ n^{-1} \sum_{k=N + 1}^n |S_k- S_{k-1}|\\ &= n^{-1} \sum_{k=1}^N |S_k- S_{k-1}| + \epsilon\left(1 - \frac{N}{n}\right) \end{align*}$$ Indeed by taking the limsup for both sides we see that for any $\epsilon > 0$ $$0 \leq \limsup_{n \rightarrow \infty} \left|n^{-1} \sum_{k=1}^n (S_k- S_{k-1})\right| \leq \epsilon$$ By squeeze theorem, $$\lim_{n \rightarrow \infty} \left|n^{-1} \sum_{k=1}^n (S_k- S_{k-1})\right| = \limsup_{n \rightarrow \infty} \left|n^{-1} \sum_{k=1}^n (S_k- S_{k-1})\right| = 0$$ It then follows that $y_n \to S-S=0$ as $n \to \infty$. QED. Is this proof correct? If so, could somebody provide some intuition about it. Any insight much appreciated.
It’s not quite correct, but it can be fixed. The first part is correct, but in case you’re not accustomed to summation by parts, I’ll note that you don’t need that technique: $$\begin{align*} \sum_{k=1}^nks_k&=\sum_{k=1}^n\sum_{\ell=1}^ks_k=\sum_{\ell=1}^n\sum_{k=\ell}^ns_k\\ &=\sum_{\ell=1}^n(S_n-S_{\ell-1})=nS_n-\sum_{\ell=1}^nS_{\ell-1}\tag{1}\\ &=nS_n-\sum_{k=1}^{n-1}S_k\,, \end{align*}$$ since $S_0=0$, and that gives us $$y_n=S_n-n^{-1}\sum\limits_{k=1}^{n-1}S_k\,.\tag{2}$$ The one sequence whose convergence we do know is the sequence of partial sums $S_n$ of the series $\sum_{k\ge 1}s_k$, so expressing the numbers $y_n$ in terms of those partial sums is a natural thing to try, and $(2)$ accomplishes it. Once we get that, it’s clear that we want $S_n-n^{-1}\sum_{k=1}^{n-1}S_k$ to approach $0$ as $n\to\infty$. We don’t know much about $\sum_{k=1}^{n-1}S_k$, but we do know that when $n$ and $k$ are large, $|S_n-S_k|$ is small, and the first summation in line $(1)$ shows that $$y_n=n^{-1}\sum_{k=1}^n(S_n-S_k)\,,$$ and if $n$ is large enough, ‘most’ of the terms in the summation ought to be small. Let $\epsilon>0$. There is an $N\in\Bbb N$ such that $|S_n-S_k|<\epsilon$ whenever $n,k\ge N$, so for $n\ge N$ we have $$\begin{align*} |y_n|&=\left|n^{-1}\sum_{k=1}^n(S_n-S_k)\right|\\ &\le n^{-1}\sum_{k=1}^N|S_n-S_k|+n^{-1}\sum_{k=N+1}^n|S_n-S_k|\\ &<n^{-1}\sum_{k=1}^N|S_n-S_k|+\left(1-\frac{N}n\right)\epsilon\,. \end{align*}$$ It follows that $\lim\limits_{n\to\infty}|y_n|\le\epsilon$ for all $\epsilon>0$ and hence that $\lim\limits_{n\to\infty}y_n=0$.
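The conclusion can be illustrated numerically. The sequence below is my own choice, not from the question: $x_n = 2^k/k^2$ when $n = 2^k$ and $x_n = 1/n$ otherwise, so that $\sum_n x_n/n$ converges even though $x_n$ is unbounded; the averages $y_n$ still tend to $0$:

```python
def x(n):
    # Spikes of size 2**k / k**2 at n = 2**k, tiny values 1/n elsewhere.
    # Then s_n = x_n / n is 1/k**2 at the spikes and 1/n**2 otherwise,
    # so the sum of s_n converges although x_n itself is unbounded.
    k = n.bit_length() - 1
    if k >= 1 and n == 1 << k:
        return (1 << k) / k**2
    return 1.0 / n

N = 1 << 15
total, averages = 0.0, {}
for n in range(1, N + 1):
    total += x(n)
    if n in (1 << 5, 1 << 10, 1 << 15):
        averages[n] = total / n

print(averages)  # the averages y_n shrink toward 0 even though x_n blows up
```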
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
finding the remainder of a variable n where n is not an integer multiple of 4 I started off by saying that $(n+1)$ must be a multiple of $4$ for $n(n+1)$ to be a multiple of 4. Hence $n$ when divided by $4$ would have a remainder of $3$, i.e. $n = 4k+3$. $n(n+1)=(4k+3)(4k+3+1) = 4(4k^2+7k+3)$. Hence $n(n+1)$ is a multiple of 4. I'm not sure if this explanation is adequate and how to progress from here if it is not. Please advise, thanks!
"$n(n+1)$ is an integer multiple of 4" We know we can write integers as prime factors, so $n(n+1)$ has 4 in its prime factors, and there are only two cases for this to happen: * *Both $n$ and $(n+1)$ are even, which is a refused case, because two successive numbers can't both be even. *$n$ and $(n+1)$, one of them has to be odd while the other is a multiple of 4, and we know from the given that $n$ is not a multiple of 4, which leaves us with the case that $n$ is odd while $(n+1)$ is a multiple of 4. For the second part of the proof, I'm going to represent $(n+1)$ as $4k$ such that $k \in \mathbb{Z}$, this means that $n=4k-1$, then we can do this: $$n = 4k - 1 +4 - 4 \\ = (4k -4) + (4-1) \\ = 4(k-1) + 3$$ And according to closure property for addition on integers, $(k-1)$ is also an integer. Then we calculate the reminder: $$\frac n 4 = \frac {4(k-1) +3} {4} = (k - 1) + \frac 3 4$$ Which means we are left with 3.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Approximate: $\sum_{\Im(\rho)>0}\frac{1}{|\rho-\frac{1}{2}|^2}$ where $\rho$ is non trivial zeros of the zeta function I am reading Kevin Broughan's Equivalents of Riemann Hypothesis Vol. 1 (p. 38). If $\rho$ denote the non trivial zeros of Riemann zeta function in the strip $0<\Re(\rho)<1$, consider,$$S=\sum_{\Im(\rho)>0}\frac{1}{|\rho-\frac{1}{2}|^2}.$$ Question: Find the approximate value of $S$. Attempt: Riemann $\xi$ function can be written as $$\xi(s)=\xi(0)\prod_{\Im(\rho)>0}(1-\frac{s}{\rho})(1-\frac{s}{\bar{\rho}}),$$ where $\bar{\rho}$ denotes complex conjugate. Taking logarithm on both sides, $$\log(\xi(s))=\log(\xi(0))+\sum_{\Im(\rho)>0}\log(1-\frac{s}{\rho})+\sum_{\Im(\rho)>0}\log(1-\frac{s}{\bar{\rho}})$$ and differentiating both sides w.r.t. $s$, $$\frac{\xi'(s)}{\xi(s)}=\sum_{\Im(\rho)>0}[\frac{1}{s-\rho}+\frac{1}{s-\bar{\rho}}].$$ After this I cannot think on it. Please find an approximate value of $$S=\sum_{\Im(\rho)>0}\frac{1}{|\rho-\frac{1}{2}|^2}.$$ Edit $$\xi(s)=\xi(0)\prod_{\rho}(1-\frac{s}{\rho})$$ Taking log we get $$\log(\xi(s))=\log(\xi(0))+\sum_{\rho}\log(1-\frac{s}{\rho})$$ Differentiating w.r.t. s, $$\frac{\xi'(s)}{\xi(s)} = \sum_{\rho}\frac{1}{s-\rho}$$ Differentiating w.r.t. s again, $$\frac{d}{ds}\frac{\xi'(s)}{\xi(s)} =- \sum_{\rho}\frac{1}{(s-\rho)^2}$$ Putting $s=1/2$ and using $\xi'(1/2)=0$, $$\frac{\xi''(\frac{1}{2})}{\xi(\frac{1}{2})}=-\sum_{\rho} \frac{1}{(\frac{1}{2}-\rho)^2}$$
$$\xi(s)=\xi(0)\prod_{\rho}(1-\frac{s}{\rho})$$ Taking log we get $$\log(\xi(s))=\log(\xi(0))+\sum_{\rho}\log(1-\frac{s}{\rho})$$ Differentiating w.r.t. s, $$\frac{\xi'(s)}{\xi(s)} = \sum_{\rho}\frac{1}{s-\rho}$$ Differentiating w.r.t. s again, $$\frac{d}{ds}\frac{\xi'(s)}{\xi(s)} =- \sum_{\rho}\frac{1}{(s-\rho)^2}$$ Putting $s=1/2$ and using $\xi'(1/2)=0$, $$\frac{\xi''(\frac{1}{2})}{\xi(\frac{1}{2})}=-\sum_{\rho} \frac{1}{(\frac{1}{2}-\rho)^2}$$
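For a numerical value: under RH, $|\rho-\frac{1}{2}|^2 = (\Im\rho)^2$, and pairing $\rho$ with $\bar\rho$ in the last display gives $S = \frac{\xi''(\frac12)}{2\,\xi(\frac12)}$. A sketch evaluating this with mpmath (assuming mpmath is available; the function `xi` is defined by hand, since mpmath has no built-in $\xi$):

```python
from mpmath import mp, mpf, pi, gamma, zeta, diff

mp.dps = 30  # working precision

def xi(s):
    # Riemann xi function: xi(s) = s(s-1)/2 * pi^(-s/2) * Gamma(s/2) * zeta(s)
    return s * (s - 1) / 2 * pi**(-s / 2) * gamma(s / 2) * zeta(s)

half = mpf(1) / 2
# xi'(1/2) = 0 by the functional equation, so the derivative of xi'/xi at 1/2
# equals xi''(1/2)/xi(1/2), and S = xi''(1/2) / (2 xi(1/2)) under RH.
S = diff(xi, half, 2) / (2 * xi(half))
print(S)  # approximately 0.0231
```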
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find maximum natural number k such that for any odd n $n^{12} - n^8 - n^4 + 1$ is divisible by $2^k$ Given $n^{12} - n^8 - n^4 + 1$ it's easy to factorize it: $(n-1)^2(n+1)^2(n^2+1)^2(n^4+1)$. It's stated that this should be divisible by $2^k$ for any odd $n$. So I think that such maximum $k$ can be found by putting the least possible odd $n$ in the expression (because we can easily find such a $k$, that for a bigger $n$ the expression above is divisible by $2^k$, but for a smaller $n$ it isn't). So the least possible $n=3$ (for $n=1$ the expression is $0$) and we have $4 \cdot 16 \cdot 100 \cdot 82 = 25 \cdot 41 \cdot 2^9$, so the $k$ we are looking for is $9$. I am rather weak in number theory so I hope to get some feedback on whether my solution is correct.
Write $n=2r+1$. Then $$(2r+1)^2=8\cdot\dfrac{r(r+1)}2+1=8p+1\text{ (say)}\implies(2r+1)^2+1=8p+2\equiv2\pmod8$$ $$(2r+1)^4+1=(8p+1)^2+1\equiv2\pmod{16}$$ Now $n^2-1=(2r+1)^2-1=8\cdot\dfrac{r(r+1)}2$, so in $(n^2-1)^2(n^2+1)^2(n^4+1)$ the first factor contributes at least $2\cdot3$ twos, the second exactly $2\cdot1$, and the third exactly $1$: $\implies k\ge2\cdot3+2\cdot1+1=9$
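A brute-force cross-check that $k=9$ is never undershot for odd $n > 1$, and is attained at $n=3$ (a sketch):

```python
def v2(m):
    """2-adic valuation: the largest e with 2**e dividing m."""
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

vals = [v2(n**12 - n**8 - n**4 + 1) for n in range(3, 400, 2)]
print(min(vals))  # 9: every odd n > 1 gives at least 2^9, and n = 3 attains exactly 9
```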
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
two numbers chosen from 0 to 1, expected val of larger number If I pick two numbers in the interval (0,1), what is the expected value of the larger number? Please help. I know that the expected value of one number is 0.5 but I'm not sure how to find the expected value of the larger number here.
Suppose the larger number has the value $x$. For a given one of the two numbers to be the larger with value $x$, the other number must be less than $x$, and that happens with probability $x$. Therefore the expected value of the larger number is: $$2\int_0^1 x\cdot x\, dx=\frac23, $$ where the factor 2 counts the ways to choose which number is the larger one.
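The value $2/3$ is easy to corroborate with a quick Monte Carlo run (a sketch):

```python
import random

random.seed(0)  # reproducible run
N = 200_000
est = sum(max(random.random(), random.random()) for _ in range(N)) / N
print(est)  # close to 2/3
```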
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find if two series are analytic continuations of each other Show that the series $a) \sum_{n=0}^{\infty} \frac{z^n}{2^{n+1}}$ and $b) \sum_{n=0}^{\infty} \frac{(z-i)^n}{(2-i)^{n+1}}$ are analytic continuations of each other. I have performed the ratio test for both series and found that $a)$ converges for $|z|<2$ and $b)$ for $|\frac{z-i}{2-i}|<1$. I have a hint that says $b)$ also converges for $|z-i|< \sqrt{5}$. I am not sure where the second convergence of $b)$, ($|z-i|< \sqrt{5}$), comes from. I am also not really sure what to do next. My notes are very vague. Wouldn't the convergence of $|\frac{z-i}{2-i}|<1$ not be an analytic continuation since $1 \ngtr 1$. But the second convergence would. I don't know if I am saying that right, but that is what I am getting from my notes.
For $|z|<2$, the series $\sum_{n=0}^\infty \frac{z^n}{2^{n+1}}$ represents the function $f(z)= \frac1{2-z}$. Then, note that we can write for $|z-i|<|2-i|=\sqrt 5$ $$\begin{align} f(z)&=\frac{1}{2-z}\\\\ &=\frac{1}{(2-i)-(z-i)}\\\\ &=\frac1{2-i}\left(\frac1{1-\frac{z-i}{2-i}}\right)\\\\ &=\sum_{n=0}^\infty \frac{(z-i)^n}{(2-i)^{n+1}} \end{align}$$ Hence, we see that $f(z)=\sum_{n=0}^\infty \frac{(z-i)^n}{(2-i)^{n+1}}$ for $|z-i|<\sqrt 5$ is indeed the analytic continuation of $f(z)=\sum_{n=0}^\infty \frac{z^n}{2^{n+1}}$ for $|z|<2$.
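Both truncated series can be compared against $\frac{1}{2-z}$ at a point lying in both disks of convergence, e.g. $z = \frac12 + \frac12 i$ (a sketch):

```python
z = 0.5 + 0.5j              # |z| < 2 and |z - i| < sqrt(5)
f = 1 / (2 - z)             # the common analytic function

series_a = sum(z**n / 2**(n + 1) for n in range(200))
series_b = sum((z - 1j)**n / (2 - 1j)**(n + 1) for n in range(200))

print(abs(series_a - f), abs(series_b - f))  # both essentially zero
```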
{ "language": "en", "url": "https://math.stackexchange.com/questions/4066951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Heron's Formula Heron's Formula gives the area of a triangle when the length of all three sides are known. There is no need to calculate angles or other distances in the triangle first to determine its area. The formula is given by $Area=\sqrt{p(p-a)(p-b)(p-c)}$, where $p=\frac{a+b+c}{2}$, $a, b, c$ are sides of the triangle and $p$ is the semi-perimeter of the triangle. The following is my concern: If one of the sides of the triangle is greater than $p$, then the Area will not be a real number (which shouldn't be true). Example. Let the sides of a triangle be 175 metre, 88 metre and 84 metre, then $p=173.5$. Therefore, $Area=\sqrt{-1991498.0625}$, which is not a real value. Therefore, the following is my question: Why shouldn't the area be expressed as $Area=\sqrt{|p(p-a)(p-b)(p-c)|}$?
In addition to the correct answers' "that's not a triangle!", the imaginary values are still suggestive. Kendig wrote an American Mathematical Monthly article titled "Is a 2000-Year-Old Formula Still Keeping Some Secrets?" exploring ways to make sense of those values. (Currently) you can read the entire article here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4067114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Intuition for polynomial long division So I am trying to learn some basic algebra and go over some precalc material and I am terribly confused on why or how polynomial long division works. I have looked at many sources online and they all seem to suggest that division of numbers is identical to division of polynomials, since a number in its decimal expansion can be viewed as a sort of polynomial, e.g. $a_n10^n+\cdots+100a_2+10a_1+a_0$. What I don't understand is why we only use the term with the highest degree in the denominator as the divisor for the whole polynomial, as shown here Polynomial Long Division, whereas with normal division we divide the whole divisor into the dividend. For example, when calculating $88÷32$ we look at how many times $32$ goes into $88$ and not how many times $30$ does, whilst with polynomial division only the highest-degree term is considered (which corresponds to $30$ in this example, ignoring the $2$). So how are they the same? Some intuition behind why polynomial division works would be greatly appreciated. Thanks in advance.
Hint: in the algebra of polynomials there is no concept of "carrying" or "borrowing" when we add two polynomials, because there is no constraint on the size of the coefficients. In the algebra of integers expressed as decimals, when we add or subtract two decimal representations, we have to make adjustments to make the digits (coefficients) lie between $0$ and $9$. The intuition behind polynomial division is the same as the intuition behind integer division (subtract the biggest allowable multiple on each iteration), but the implementation is simpler, because there is no trial and error required to find the biggest allowable multiple.
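The "subtract the biggest allowable multiple" loop is short enough to write out. Unlike integer division, there is no guessing: the leading coefficients alone determine each quotient term. A sketch (coefficient lists in descending degree; the names are my own):

```python
def polydiv(num, den):
    """Polynomial long division on descending-degree coefficient lists.

    Returns (quotient, remainder) with num = quotient * den + remainder.
    """
    num = [float(c) for c in num]
    quot = []
    while len(num) >= len(den):
        # The leading coefficients fix the next quotient term; no trial and error.
        c = num[0] / den[0]
        quot.append(c)
        for i, d in enumerate(den):
            num[i] -= c * d
        num.pop(0)  # the leading term is now zero
    return quot, num

# (x^3 - 12x^2 - 42) / (x - 3): quotient x^2 - 9x - 27, remainder -123
print(polydiv([1, -12, 0, -42], [1, -3]))
```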
{ "language": "en", "url": "https://math.stackexchange.com/questions/4067510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Probabilities in a game John plays a game with a die. The game is as follows: He rolls a fair six-sided die. If the roll is a $1$ or $2$, he gets $0$ dollars. If he rolls a $3, 4$ or $5$, he gets $5$ dollars. If he rolls a $6$, he wins $X$ dollars where $X$ is a continuous random variable that is uniform on the interval $(10, 30)$. Let $Y$ be the amount of money John wins by playing the game. (i) Compute the cumulative distribution function (cdf) of $Y$. (ii) What is the probability that John rolled a $6$ given that he won less than $15$ dollars? (iii) Compute $E(Y)$. My attempt: (i) If $Y > 5$ then $P(Y \leq y) = P(X \leq y)$. Then $P(Y \leq 5) = P(Y = 0) + P(Y = 5) = P(roll = 1 or 2) + P(roll = 3, 4 or 5) = \frac{1}{3} + \frac{1}{2} = \frac{5}{6}$. Not sure if this is even correct. (ii) $P(roll = 6 | Y < 15) = \frac{P(roll = 6 \land Y < 15)}{P(Y<15)}$ But by Bayes theorem $P(roll = 6 | Y < 15) = \frac{P(Y < 15 | roll = 6) \cdot P(roll = 6)}{P(Y<15)}$ We know $P(roll = 6) = \frac{1}{6}$. Given that he rolled a $6$, the probability that he won less than $15$ dollars is equal to $P(X < 15) = \frac{15 - 10}{30 - 10} = \frac{1}{4}$ Lastly, $P(Y < 15) = 1 - P(Y \geq 15)$. However, since the only situation in which John wins $15$ dollars or more is if he rolls a $6$. We may infer that $P(Y \geq 15) = P(X \geq 15) = 1 - P(X < 15) = \frac{3}{4}$. So $P(Y < 15) = 1 - \frac{3}{4} = \frac{1}{4}$. So $P(roll = 6 | Y < 15) = \frac{\frac{1}{4} \cdot \frac{1}{6}}{\frac{1}{4}} = \frac{1}{6}$ Not sure how to determine the other probabilities. (iii) We can check $Y$ conditioned on $X = x$: $E(Y|X=x) = (0 \cdot \frac{1}{3}) + (5 \cdot \frac{1}{2}) + (x \cdot \frac{1}{6}) = \frac{5}{2} + \frac{x}{6} = \frac{x + 15}{6}$. But $E(E(Y|X)) = E(Y)$ by the law of iterated expectation. Applying it we have $E(Y) = E(E(Y|X)) = E(\frac{X + 15}{6}) = \frac{1}{6} E(X) + \frac{15}{6}$ Since $X$ is uniform on $(10, 30)$ we know $E(X) = \frac{1}{2} (10 + 30) = 20$ Therefore $E(Y) = \frac{35}{6}$. 
Is this correct? I have a feeling that my attempt for (i) is incorrect, and I am unsure about (ii) and (iii). Any assistance is much appreciated.
Here is a simulation of $10^5$ iterations of your process. Except possibly for the conditional probability, it should be accurate to a couple of decimal places. Also the ECDF plot from the simulation is very nearly the CDF of $Y,$ which you have not drawn.

set.seed(2021)
die = sample(1:6, 10^5, rep=T)
y = runif(10^5, 10, 30)
y[die <= 2.1] = 0                 # rolls of 1 or 2 pay 0
y[die >= 2.9 & die <= 5.1] = 5    # rolls of 3, 4 or 5 pay 5
mean(y)
[1] 5.838549     # aprx E(Y)
35/6
[1] 5.833333     # exact
mean(die[y < 15] == 6)
[1] 0.04736204   # aprx P(Die = 6 | Y < 15)

hdr = "ECDF: Mixture of Discrete and Continuous"
plot(ecdf(y), ylab="ECDF", xlab="y", lwd=2, main=hdr)

At the 'jumps', the CDF takes the upper value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4067662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Von-Neumann-Morgenstern Axioms Clarification Question I want to solidify my understanding of the von-Neumann-Morgenstern axioms. How would you analyze these scenarios using the lens of these axioms? Suppose that Person A owns a potentially expensive baseball card. All people within this universe abide by maximizing subjective utility, aligning with the von-Neumann-Morgenstern axioms. a. In a universe in which both person A and person B have the same utility function and same wealth to start off with, why would person A want to sell the baseball card to person B instead of owning it, and, in the same instance, why would person B want to purchase the card instead of returning it? (Note: Technically, person A owns the baseball card, so he does have slightly more money) b. In this scenario, person B wants to purchase a different card from person C. Is there ever a scenario where person B would purchase a card from person C for a set amount, and then immediately flip the purchase to sell to person D for less money?
In the context of just the Von Neumann theorem, I don't think there is anything sophisticated going on here. As the problem is stated, there don't seem to be any sort of transaction costs being incurred. For part a: Person A will sell the card if and only if he receives a monetary amount of equal or greater value to that of the card. Additionally, person B will buy the card if and only if they receive the card for less than or equal value to that of the card. Since persons A and B have the same utility function, this transaction occurs if and only if the card is sold for its exact value. For part b: The answer is no, by identical reasoning.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4067823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to Solve for t and C in S'(t) = −420t^{2/5} Annual sales of a certain cell phone have been declining at a rate of S′(t) = −420t^{2/5} phones per year, where S(t) is the number of cell phones sold in t years. The company is planning on withdrawing this model when annual sales reach 60,000 cell phones. If current annual sales of these phones are 100,000, find S(t). How long will the company continue to manufacture this phone? I started by integrating: S(t) = −300t^{7/5} + C. But this is where I'm stuck. Since I'm not given any indication of the starting t when S(t) = 100000 nor S(t) = 60000, how would I go about solving this equation? Should I start by finding dS/dt? But yet again I don't have any indication for that. Any pointers would be appreciated. Thank you.
You got that $S(t)=-300 t^{7/5}+C.$ Since you know the current $S$ you can assume that now is when $t=0.$ That is: $S(t)=100000-300 t^{7/5}.$ How long will the company continue to manufacture this phone? We have to solve $S(t)=60000.$ In other words: $$100000-300 t^{7/5}=60000.$$
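Numerically, the last equation gives $t^{7/5}=400/3$, i.e. $t=(400/3)^{5/7}\approx 32.9$ years (a sketch):

```python
t = (400 / 3) ** (5 / 7)      # from 300 t^(7/5) = 40000
print(t)                      # about 32.9 years
print(100000 - 300 * t**1.4)  # sanity check: annual sales back at 60000
```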
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linearity of Power of Point I have 4 questions about this: 1) What is meant by $\mathbb{R}^2 \rightarrow \mathbb{R}$ in this context? 2) What is meant by $C=kA+(1−k)B$? Aren't $A,B,C$ just points in the euclidean plane? How can they be related by an equation? 3) Why is it sufficient to show the condition for points $A$ and $B$? 4) What is meant by "$F$ is linear"? How to prove it, and what is its use? Sorry if that's too obvious. I am not aware of the terminology used here, I guess.
1)What is meant by R2→R in this context? $F$ is a function whose domain is $\mathbb{R}^2$ and codomain is $\mathbb{R}$. The domain is $\mathbb{R}^2$ because its inputs are points, and the codomain is $\mathbb{R}$ because its outputs are real numbers. Since what $F$ does is ignore the $\omega$ terms in $\mathbb{P}$, it is what is called a `partial function' in programming. Put another way, the domain of $\mathbb{P}$ is $\mathbb{R}^2 \times \mbox{the space of circles}^2$ (sorry, I don't know what space $\omega$ lives in), and its codomain is $\mathbb{R}$. What $F$ does is ignore some of the inputs. 2)What is meant by C=kA+(1−k)B?Aren't A,B,C just points in euclidean plane?How can they be related by an equation? It's a statement that $C$ is on the line segment between $A$ and $B$. This kind of formula: $kx + (1-k)y$ is called a convex combination of $x$ and $y$. In the context of two points in Euclidean space, it is a way to associate the points on the line thru $x$ and $y$ with values of $k$. If $0 \leq k\leq 1$, then the point is between $x$ and $y$. 3)Why is it sufficient to show the condition for points A and B? 4)What is meant by F is linear?How to prove it and what is it's use? The answers to 3 and 4 are connected. A function $f$ is linear if $f(a + cb) = f(a) + c f(b)$ no matter what $a$ and $b$ (and $c$) are. So the statement about sufficiency is just a bit of semantic fluff to get the proof started. Any time you want to prove a function is linear, you start with a statement similar to: "Let $a$ and $b$ be in the domain of the function".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If for the real numbers $a,b(a\ne b)$ it is true that $a^2=2b+15$ and $b^2=2a+15$, then what is the value of the product $ab$? If for the real numbers $a,b(a\ne b)$ it is true that $a^2=2b+15$ and $b^2=2a+15$, then what is the value of the product $ab$? I tried to solve it as follows: I state that $p=ab$ $p^2=(2b+15)(2a+15)$ $p^2=4ab+30(a+b)+225$ $p^2=4p+30(a+b)+225$ and this is where I got stuck. I don't know how to get over this hurdle. could you please explain to me how to solve the question?
Let $ a + b = c$. Since $b = c - a$, the first equation says that $a$ satisfies $x^2 - 2 (c-x) -15 = x^2 +2x + (-15-2c) = 0 $, and by symmetry so does $b$; as $a \ne b$, they are the two distinct roots of this quadratic. Thus, by Vieta's formulas, $ a+b = - 2 \Rightarrow c = -2$ and $ ab = -15 - 2c = -11$.
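Explicitly, the quadratic is $x^2+2x-11=0$ with roots $-1\pm 2\sqrt3$, and both the original system and the product check out numerically (a sketch):

```python
import math

a = -1 + 2 * math.sqrt(3)
b = -1 - 2 * math.sqrt(3)

assert abs(a * a - (2 * b + 15)) < 1e-9  # a^2 = 2b + 15
assert abs(b * b - (2 * a + 15)) < 1e-9  # b^2 = 2a + 15
print(a * b)  # -11 up to rounding
```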
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
$63^{63^{63}} \mod 100$ I need to find $63^{63^{63}} \bmod 100$. This is what I've got so far: Since $\gcd(63,100)=1$ we can use Euler's theorem. We have $\phi (100)=40$ so $63^{40} \equiv 1 \mod 100$ Again $\gcd(63,40)=1$ and $\phi (40)=16$, that is $63^{16} \equiv 1 \mod 40$ Using this I got that $63^{63} \equiv 7 \mod 40 $ which led me to $63^{63^{63}} \equiv 63^7 \mod 100$ I'm stuck here and don't know what to do next, what could I do now?
You might use Chinese Remainder Theorem. $100 = 4 \cdot 25$, and $$63^7 \equiv (-1)^7 \equiv 3 \pmod{4}.$$ Then $$63^7 \equiv 9^7 7^7 \equiv 729\cdot 729 \cdot 9 \cdot 49^3\cdot 7 \equiv 4\cdot 4 \cdot 9 \cdot (-1)^3 \cdot 7 $$ $$\equiv 144(-7) \equiv (-6)(-7) \equiv 42 \pmod{25}.$$ So you just have to add or subtract various multiples of $25$ to $42$ until you get a number that's $3 \pmod{4}.$ And $42+25 = 67$ works.
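Python's three-argument `pow` makes a direct check cheap; the inner exponent $63^{63}$ has only about 114 digits, so the whole tower can be reduced directly (a sketch):

```python
# the value the hand reduction arrives at:
print(pow(63, 7, 100))       # 67

# the full tower, computed directly:
print(pow(63, 63**63, 100))  # 67 as well
```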
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
How to show that there is no positive rational number a such that $a^3$ = 2? I'm stuck on this one. So far, I've tried this: \begin{align} a^3 &= 2 \\ a &= \left(\frac mn\right) \\ a^3 &= \left(\frac mn\right)^3 = 2 \\ m^3 &= 2n^3 \\ m &= 2p \\ m^3 &= (2p)^3 \end{align} I'm really confused about what to do after this—the book answer says that $m^3$ becomes $m^3 = 2(4p)^3$. And $2n^3 = 2(4p)^3$. I'm super confused here and can't really understand what's going on here. Can someone tell what exactly is going on here and how should I prove this?
Assume that $a=p/q$, where $p$ and $q$ are not both even (or make the stronger assumption that $\gcd(p,q)=1$), and that $a^3=2$. Then $(p/q)^3=2$, or $p^3=2q^3$. Since the right-hand side is even, the left-hand side must be as well; and since the cube of an odd number is odd, $p$ must be even. So $p=2p'$ for some integer $p'$. Now $8(p')^3=2q^3$, or $q^3=4(p')^3$, and hence $q$ must be even as well. This contradicts the original assumption, completing the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
How to uncurl $\mathbf{B}=\mathbf{\nabla\times A}$ if we only know $\mathbf{B}$ at $z=z_0$? Let $$\mathbf{B} = \mathbf{\nabla\times A},$$ where $$\mathbf{B}(x,y,z) = B_x(x,y,z)\mathbf{\hat{x}} + B_y(x,y,z)\mathbf{\hat{y}} + B_z(x,y,z)\mathbf{\hat{z}},$$ $$\mathbf{A}(x,y,z) = A_x(x,y,z)\mathbf{\hat{x}} + A_y(x,y,z)\mathbf{\hat{y}} + A_z(x,y,z)\mathbf{\hat{z}},$$ Assume that $$\mathbf{\nabla \cdot A} = 0.$$ This ensures that $\mathbf{A}$ is uniquely defined by the top equation. Is it possible to calculate $A_x(x,y,z_0)$, $A_y(x,y,z_0)$ if we only know $\mathbf{B}(x,y,z)$ at $z=z_0$? Here is my naive attempt. Assume that $$A_x(x,y,z_0) = \frac{\partial \phi}{\partial y},$$ $$A_y(x,y,z_0) = -\frac{\partial \phi}{\partial x},$$ where $$\phi=\phi(x,y).$$ Substituting this into the $z$-component of the top equation gives the following 2D Poisson equation $$\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)\phi(x,y) = -B_z(x,y,z_0).$$ We can solve this for $\phi(x,y)$ which we can use to calculate $A_x(x,y,z_0)$ and $A_y(x,y,z_0)$. Is this a valid solution? There seem to be problems. For example, since $$\mathbf{\nabla\cdot A} = 0,$$ this implies that $$\left.\frac{\partial A_z}{\partial z}\right|_{z=z_0}=0,$$ but $z_0$ is arbitrary so $\partial A_z / \partial z = 0$ $\forall z$?
Given $\mathbf{B}$ on $z=z_0$ we can extend the field to all of $\mathbb{R}^3$ by $$\begin{cases} B_x(x,y,z) := B_x(x,y,z_0) \\ B_y(x,y,z) := B_y(x,y,z_0) \\ B_z(x,y,z) := B_z(x,y,z_0) - (z-z_0) \left( \partial_x B_x(x,y,z_0) + \partial_y B_y(x,y,z_0)\right) \end{cases}$$ This extension satisfies $\nabla\cdot \mathbf{B} = 0$ but is not the only possible extension. Then, given $\mathbf{B}$ in all of $\mathbb{R}^3$ we can construct $\mathbf{A}$ such that $\nabla\times \mathbf{A} = \mathbf{B}$ and $\nabla\cdot \mathbf{A}=0$ by the convolution $$\mathbf{A} = -G * (\mathbf{\nabla\times B}),$$ where $\nabla^2 G = \delta,$ i.e. $G(x,y,z) = -\frac{1}{4\pi\sqrt{x^2+y^2+z^2}}.$
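That the extension is divergence-free can be verified symbolically for arbitrary boundary data (a sketch assuming sympy is available; $f, g, h$ stand for $B_x, B_y, B_z$ on the plane $z=z_0$, written with the offset $z-z_0$, which reduces to the displayed formula when $z_0=0$):

```python
from sympy import symbols, Function, diff, simplify

x, y, z, z0 = symbols('x y z z0')
f, g, h = (Function(name)(x, y) for name in ('f', 'g', 'h'))

Bx, By = f, g                                  # independent of z
Bz = h - (z - z0) * (diff(f, x) + diff(g, y))  # the proposed extension

div = diff(Bx, x) + diff(By, y) + diff(Bz, z)
print(simplify(div))  # 0
```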
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find a function $f$ which is Riemann-integrable on $[0,T]$, and so that $\int_0^T f(t)^2 dt$ is infinite. I am reading "Linear Algebra, Signal Processing, and Wavelets - A Unified Approach: Python Version" by Øyvind Ryan. There is the following exercise in this book. Exercise 1.18: Riemann-Integrable Functions Which Are Not Square Integrable Find a function $f$ which is Riemann-integrable on $[0,T]$, and so that $\int_0^T f(t)^2 dt$ is infinite. $\int_0^1 \frac{dt}{\sqrt{t}} = 2$, but $\int_0^1 \frac{dt}{t} = \infty$. But do we say $f(x) = \frac{1}{\sqrt{x}}$ is Riemann integrable on $[0,1]$? I think if $f$ is Riemann integrable on $[0,T]$, $f$ must satisfy the condition that $f$ is bounded on $[0,T]$ at least. Is there really $f$ such that $f$ is bounded on $[0,T]$ and $\int_0^T f(t) dt$ exists and $\int_0^T f(t)^2 dt$ is infinite?
A function $g$ is Riemann integrable on $[0,T]$ iff $g$ is bounded and continuous almost everywhere on $[0,T]$. If your $f$ is Riemann integrable, then it satisfies the latter two properties; but then so does $f^2$, so $f^2$ is necessarily Riemann integrable as well. It is impossible for the integral of a Riemann integrable function on a compact interval to take the value $\infty$. So perhaps you are looking for improper Riemann integrability. In such a case, we can take exactly your function $f(x)=\dfrac{1}{\sqrt{x}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4068999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the Mandelbrot Set with a power of -1 produce a Farey Sequence, and if so, can this be connected to the Riemann Hypothesis? I'm by no means an expert in Farey Sequences nor understanding the Riemann Hypothesis - but I do know there is some relationship between them, as alluded to in this question. As such, if the Mandelbrot Set with a -1 power (see image below) produces a Farey Sequence along the imaginary axis, could this be used to help solve/prove the non-trivial zeros for the Riemann Hypothesis? Has something like this already been attempted? Note that the radii of the following circles will be reduced if either the bailout/threshold or the maximum iterations are increased, I don't know if they converge or not. The Escape-Time algorithm is used, so more detail might be revealed with a different algorithm. Edit: On the subject, I noticed that as the Mandelbrot's power approaches zero, what would correspond to the first elements to the Farey Sequence move slower to the left: I'm pretty sure that this is the Farey Sequence at this point, but all I have to show for it is just by looking at it. I wonder if algebraically solving for the boundary of this fractal will yield interesting insight...
Here is pixels $c$ coloured by iteration index $p$ where $|z_p|$ is minimized under iterations of $z_{n+1} = \frac{1}{z_n} + c$. The right hand side is clamped to $\Re(c) = 0.01$ to make the structure clearer. The Farey sequence seems to be visible (but this is no mathematical proof).
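A minimal sketch of the colouring rule described above: iterate $z_{n+1} = \frac{1}{z_n} + c$ and record the index $p$ at which $|z_p|$ is smallest. The seed $z_0 = c$, the iteration cap, and the pole guard are my assumptions, since the answer does not spell them out:

```python
def min_modulus_index(c, n_iter=100):
    """Return the iteration index p (0-based) minimizing |z_p| for z -> 1/z + c.

    z_0 = c is an assumed seed; the small-modulus guard avoids the pole at z = 0.
    """
    z = c
    best_p, best_mod = 0, abs(z)
    for p in range(1, n_iter):
        if abs(z) < 1e-12:   # would divide by (nearly) zero
            break
        z = 1 / z + c
        if abs(z) < best_mod:
            best_p, best_mod = p, abs(z)
    return best_p

p = min_modulus_index(complex(0.01, 0.5))
```

Mapping `p` to a colour per pixel `c` reproduces the kind of picture the answer describes.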
{ "language": "en", "url": "https://math.stackexchange.com/questions/4069114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the interior of this set? (weak topology) Here $\mathbb R$ is the set of real numbers. Let $T$ be the topology on $\mathbb R$ with basis $B =\{(a,b] \mid a,b\in \mathbb R\}$. Define $f_1, f_2 : (\mathbb R,T_1) \rightarrow (\mathbb R,T)$ by $f_1(x) =x^2$ and $f_2(x) = -x^2$ respectively. Here $T_1$ is the smallest topology on $\mathbb R$ making both $f_1$ and $f_2$ continuous. Find the interior of $[0,1)$ in $(\mathbb R, T_1)$. In my solution all of $a$ to $e$ are positive real numbers. In the $f_1$ case, the open set $G_1$ can be $\{0\}$ or $[-b,-a) \cup (a,b]$, considering the sets $f_1^{-1}((-a,0])$ and $f_1^{-1}((a^2,b^2])$ in order. Likewise in the $f_2$ case, the open set $G_2$ would be $(-c,c)$ or $(-c,-d] \cup [d,c)$, considering $f_2^{-1}((-c^2,0])$ and $f_2^{-1}((-c^2,-d^2])$ in sequence. Consequently, $T_1$ has the sets $G_1$ and $G_2$ as a subbasis. The open sets of $T_1$ then have forms like the following: (1) $\{0\}$, as the intersection of $G_1 =\{0\}$ and $G_2 = (-c,c)$; (2) $(-e,-d) \cup (d,e)$, as the intersection of $G_1 = [-b,-a) \cup (a,b]$ and $G_2= (-c,-d] \cup [d,c)$. So my answer is "the interior of $[0,1)$ is $\{0\}$". But the answer in my workbook was $\emptyset$. What did I do wrong?
A base for $\mathcal{T}_1$ is given by all sets of the form $$f_1^{-1}[O]\cap f_2^{-1}[O'] \text{, where } O,O' \in \mathcal{T}$$ So $\{0\} = f_1^{-1}[(-1,0]] \cap f_2^{-1}[(-1,0]] = \{0\}\cap (-1,1)$ is indeed an open neighbourhood of $0$, so the interior of $[0,1)$ contains at least that. So your book's answer of $\emptyset$ is not correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4069386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $3(x +7) - y(2x+9)$ is the same for all values of $x$, what number must $y$ be? If we let $x = 0$: \begin{align*} 3(0+7)-y(2(0)+9) \\ 21-9y \\ \end{align*} Then $9y$ should always equal $21$? Solving for $y$ finds $\frac{7}{3}$. But $3(x+7)-\frac{7}{3}(2(x)+9)$ does not have the same result for different values of $x$. Where am I going wrong?
You have the function: $$ f \left( x, y \right) = 3 \left( x + 7 \right) - y \left( 2 x + 9 \right) $$ Since the function is constant in $ x $, it means: $$ \nabla_{x} f \left( x, y \right) = 0 = 3 - 2 y \Rightarrow y = \frac{3}{2} $$ Another way of thinking about it: by definition the function must be constant with respect to $ x $ (a special case of linear). Hence its coefficient in $x$ (which is given by the derivative, since the function is linear in $x$) must vanish (as seen in @Deepak's answer).
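A quick exact check that $y = \frac{3}{2}$ makes the expression independent of $x$, while the asker's $y=\frac73$ does not:

```python
from fractions import Fraction

def expr(x, y):
    return 3 * (x + 7) - y * (2 * x + 9)

y = Fraction(3, 2)
# With y = 3/2, every x gives the same value: 3x + 21 - 3x - 27/2 = 15/2.
values = {expr(Fraction(x), y) for x in range(-5, 6)}
```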
{ "language": "en", "url": "https://math.stackexchange.com/questions/4069499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How do you solve this recurrence relation with divides? I am trying to find all sequences $(a_n)$ of positive integers with the following property: for all $m$ and $n$, $a_ma_n$ divides $a_{m+n}$. I’m not sure whether this counts as a recurrence relation or not, given the presence of “divides”. Two sequences that satisfy it are $b^n$ for any positive integer $b$, and $n!$. But how would I find the general solution?
For any prime $p$ and $k\in \mathbb N^+$, let $\nu_p(k)$ be the exponent of the largest power of $p$ dividing $k$. The divisibility condition implies $\nu_p(a_{n+m})\ge \nu_p(a_n)+\nu_p(a_m)$, so that the sequence $\nu_p(a_n)$ is super-additive in $n$ for all $p$. Clearly, this is sufficient as well. This gives a full characterization of all sequences $a_n$ which work; they are defined by a collection of super-additive sequences $\nu_p(a_n)$ for each prime $p$, with the only constraint between the sequences being that for each $n\in\mathbb N$, $\nu_p(a_n)$ must be nonzero for only finitely many $p$. There are several infinite classes of examples here.
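The characterization can be sanity-checked against the two examples from the question, $a_n = b^n$ and $a_n = n!$, with a brute-force test of the divisibility property (an illustration, not a proof):

```python
from math import factorial

def has_property(a, N):
    """Check that a(m) * a(n) divides a(m + n) for all m, n >= 1 with m + n <= N."""
    return all(a(m + n) % (a(m) * a(n)) == 0
               for m in range(1, N) for n in range(1, N - m + 1))

fact_ok = has_property(factorial, 12)            # (m+n)! / (m! n!) is a binomial coefficient
geom_ok = has_property(lambda n: 3 ** n, 12)     # 3^m * 3^n = 3^(m+n) exactly
linear_ok = has_property(lambda n: n, 12)        # fails: a_1 * a_2 = 2 does not divide a_3 = 3
```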
{ "language": "en", "url": "https://math.stackexchange.com/questions/4069782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Coin Distribution Problem- Combinations In how many ways can we distribute a single quarter, a single dime, a single nickel and 25 separate cents between $5$ children if a) we have no restrictions? b) such that the oldest kid gets either $20$ cents or $25$ cents. a) $$\;\binom{1+5-1}{1}\times \binom{1+5-1}{1}\times \binom{1+5-1}{1}\times \binom{25+5-1}{5} = 14,844,375$$ b) My textbook is a spanish translation of the original and the exercise is badly worded and I cannot find it in the english version, so I am confused. Please help me out.
In how many ways can we distribute a single quarter, a single dime, a single nickel, and $25$ pennies between five children if we have no restrictions? There are five ways to distribute the quarter, five ways to distribute the dime, and five ways to distribute the nickel. Since there are no restrictions on the distribution of the pennies, the number of ways they can be distributed is the number of solutions of the equation $$x_1 + x_2 + x_3 + x_4 + x_5 = 25 \tag{1}$$ in the nonnegative integers, where $x_i$ represents the number of pennies received by the $i$th oldest child. A particular solution of equation 1 corresponds to the placement of $5 - 1 = 4$ addition signs in a row of $25$ ones. For instance, $$1 1 1 1 1 1 1 + 1 1 1 1 1 1 + 1 1 1 1 1 + 1 1 1 1 + 1 1 1$$ corresponds to the solution $x_1 = 7$, $x_2 = 6$, $x_3 = 5$, $x_4 = 4$, $x_5 = 3$. The number of such solutions corresponds to the number of ways we can select which $4$ of the $29$ positions required for $25$ ones and four addition signs will be filled with addition signs, which is $$\binom{25 + 5 - 1}{5 - 1} = \binom{29}{4}$$ Hence, you should have obtained the answer $$5^3\binom{29}{4}$$ When choices are made independently, you should be multiplying rather than adding. Addition is used when choices are mutually exclusive. In how many ways can we distribute a single quarter, a single dime, a single nickel, and $25$ pennies between five children if the oldest kid gets either $20$ cents or $25$ cents? Consider cases, depending on how many cents the oldest child receives and how that amount is distributed. Once you have chosen which coins the oldest child has received, distribute the remaining coins to the remaining children. For instance, if the oldest child receives $25$ cents by receiving the dime, the nickel, and ten pennies, distribute the quarter and the remaining $15$ pennies to the other four children.
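The count for part (a) can be computed directly (this uses Python's `math.comb`):

```python
from math import comb

pennies = comb(25 + 5 - 1, 5 - 1)   # solutions of x1 + ... + x5 = 25, i.e. C(29, 4)
total = 5 ** 3 * pennies            # five independent choices each for quarter, dime, nickel
```

Note the question's figure of $14{,}844{,}375$ comes from using $\binom{29}{5}$ instead of $\binom{29}{4}$.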
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can anyone tell me if my proof is right? Let $0<a<b,$ compute $\lim_{n\to \infty}{\frac{a^{n+1}{+}b^{n+1}}{a^n+b^n}}$ My proof is as follows: Assume that $r=b-a>0,$ so $b=a+r.$ Then the original expression equals $\frac{a\cdot a^{n}{+}{(a+r)\cdot}b^{n}}{a^n+b^n}=a+\frac{r\cdot b^n}{a^n+b^n}=a+\frac{r}{1+(a/b)^n},$ so the limit is $a+r=b.$ From my point of view, I think the result should contain $a$; however my result contains no $a.$ I am a self learner; I don't know whether my result is right. Hope that someone can help me out.
What you showed is $\frac{a^{n+1}{+}b^{n+1}}{a^n+b^n}=a+\frac{b-a}{1+(a/b)^n}$ and, using $\left( \frac{a}{b}\right)^n\to 0$ as $n\to\infty$, you obtain the answer $b,$ so everything is correct. Another possible way is $$\frac{a^{n+1}{+}b^{n+1}}{a^n+b^n} = b\frac{1+\left( \frac{a}{b}\right)^{n+1}}{1+\left( \frac{a}{b}\right)^{n}}\to b,\text{ as }n\to\infty.$$
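Numerically the ratio approaches $b$, e.g. with $a = 1$, $b = 3$ (a quick check, not part of the proof):

```python
def ratio(a, b, n):
    return (a ** (n + 1) + b ** (n + 1)) / (a ** n + b ** n)

# The ratio increases toward b = 3: roughly 2.5, 2.99, 3.0 for n = 1, 5, 50.
vals = [ratio(1.0, 3.0, n) for n in (1, 5, 50)]
```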
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Discarding apples - why is my reasoning wrong? There are two types of apples - red and green. The ratio of red to green is $4:1$. They are mixed together before they are boxed. There is $\frac{1}{50}$ chance that a green apple is discarded, and $\frac{1}{100}$ chance that a red apple is discarded. What is the probability that an apple selected will be discarded? Attempt: I have Green apple discarded OR red apple discarded $\frac{4}{5}\frac{1}{50}$ + $\frac{1}{5}\frac{1}{100}$ Question: Why is this incorrect? It seems I have to multiply by $\frac{2}{3}$ to get the correct answer.
Haven't you mixed it up a bit? Let $P(G)$ be probability of a green apple being selected and $P(R)$ be probability of a red apple being selected. Let $P(D|G)$ be probability of discard if green apple and $P(D|R)$ be probability of discard if red apple. Probability of an apple being discarded $P(D) = P(G)\cdot P(D|G) + P(R)\cdot P(D|R) = \frac 15 \frac 1{50} + \frac 45 \cdot \frac 1{100} = \frac 3{250}$
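The same computation in exact arithmetic:

```python
from fractions import Fraction as F

p_green, p_red = F(1, 5), F(4, 5)   # ratio red : green = 4 : 1, so P(G) = 1/5
p_discard = p_green * F(1, 50) + p_red * F(1, 100)
# 1/250 + 2/250 = 3/250
```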
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Formula for number of surjective functions in case where codomain is greater than domain We were deriving the formula for number of surjective functions from f: $X \to Y$, where $|X| = n$ and $|Y| = m$. The formula that we have is: $$\sum_{k=0}^m(-1)^k (m-k)^n {m \choose k}$$ Now, if $m>n$ we have no surjective functions, as codomain is greater than domain and the answer should be zero. The formula gives the right answer, but I cannot figure out why does it work algebraically. Any hints on why does this sum equals to zero when $m > n$?
We have $$\begin{align*} \sum_{k=0}^m (-1)^k (m-k)^n {m\choose k} &= n! [z^n] \sum_{k=0}^m (-1)^k \exp((m-k)z) {m\choose k}\\ &= n! [z^n] \exp(mz) \sum_{k=0}^m (-1)^k \exp(-kz){m\choose k}\\ &= n! [z^n] \exp(mz) (1-\exp(-z))^m\\ &= n! [z^n] (\exp(z)-1)^m. \end{align*}$$ Now $\exp(z)-1 = z + \cdots$ so $(\exp(z)-1)^m = z^m + \cdots$ and hence $n! [z^n] (\exp(z)-1)^m = 0$ when $n\lt m.$
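The vanishing for $m > n$ is easy to check numerically; the same sum also reproduces known surjection counts for $m \le n$:

```python
from math import comb

def surjections(n, m):
    """Inclusion-exclusion count of surjections from an n-set onto an m-set."""
    return sum((-1) ** k * (m - k) ** n * comb(m, k) for k in range(m + 1))
```

For example, `surjections(5, 5)` counts the bijections of a 5-set, i.e. $5! = 120$, while `surjections(3, 5)` is $0$.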
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Condensation points and Lebesgue-zero measure set Definition. $x \in \mathbb{R}$ is a condensation point of a subset $A \subseteq \mathbb{R} \iff$ the intersection of every neighbourhood of $x$ with $A$ is uncountable. Can one construct a subset $N \subseteq \mathbb{R}$ that has lebesgue measure zero, such that every point in $\mathbb{R}$ is a condensation point of $N$ ?
Let $I_n = [a_n, b_n]$ be an enumeration of the set of intervals with rational endpoints. For each $n$ let $C_n \subseteq I_n$ be a Cantor set contracted to $I_n$. To do so, if $C$ is the Cantor set and $l_n$ is the length of $I_n$, we let $C_n = (l_nC) + a_n$ where $l_nC = \{l_n x\;|\;x \in C\}$. Then, recall that the Cantor set has Lebesgue measure 0, denoted $|C| = 0$. Then, by countable subadditivity we have that $$\left| \bigcup_{n \in \mathbb{N}} C_n \right| \leq \sum_{n \in N} |C_n| = \sum_{n \in N} |C| = 0.$$ Moreover, if $p \in \mathbb{R}$ and $(p - \epsilon, p + \epsilon)$ is contained in some neighborhood of $p$, we know that there are distinct rational numbers $q, q' \in (p - \epsilon, p + \epsilon)$ with $q < q'$. Then, $[q, q'] = I_k$ for some $k$ and $[q, q'] = I_k \subseteq (p - \epsilon, p + \epsilon)$. By construction, $C_k \subseteq I_k \subseteq (p - \epsilon, p + \epsilon)$. Since $C_k$ is in bijection with $C$, which is uncountable, we find that $p$ is a condensation point of $\bigcup_{n \in \mathbb{N}} C_n $. Hence the set $\bigcup_{n \in \mathbb{N}} C_n $ is what we're looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Indefinite integral $\int \frac{1}{1+\sin^4(x)} \, \mathrm dx$ I'm a bit lost in this integral: $$\int \frac{1}{1+\sin^4(x)} \, \mathrm dx$$ I have tried solving with Wolfram, but I was getting a cosecant solution which doesn't seem like the correct method. Do you have any ideas? :) EDIT: Could you please give a step-by-step solution, because I am now somewhat lost. Using the substitution $t=\tan(x)$, I got to $$\int \left(\frac{t^2}{2t^4+2t^2+1}+\frac{1}{2t^4+2t^2+1}\right)\mathrm dt$$ By expanding with 1: $$\int \frac{1}{1+\sin^4x}\cdot \frac{\frac{1}{\cos^4x}}{\frac{1}{\cos^4x}}\mathrm dx$$ $$\int \:\frac{1}{\frac{1}{\cos^4x}+ \frac{\sin^4x}{\cos^4x}}\cdot \frac{1}{\cos^4x} \mathrm dx$$ $$\int \:\frac{1}{\left(\frac{1}{\cos^2x}\right)^2+ \tan^4x}\cdot \frac{1}{\cos^2x}\cdot \frac{1}{\cos^2x}\mathrm dx$$ And using the substitution: $t=\tan\left(x\right)$ $$\mathrm dt=\frac{1}{\cos^2x}\mathrm dx$$ $$t^2=\tan^2\left(x\right)$$ $$t^2=\frac{\sin^2x}{\cos^2x}$$ $$t^2=\frac{1-\cos^2x}{\cos^2x}$$ $$t^2=\frac{1}{\cos^2x}-\frac{\cos^2x}{\cos^2x}=\frac{1}{\cos^2x}-1$$ $$t^2+1=\frac{1}{\cos^2x}$$ Using it: $$\int \:\frac{t^2+1}{2t^4+2t^2+1}\mathrm dt$$ I don't think I got to the expected result but I can't seem to be able to find why…
Following your substitution $t= \tan x$, integrate the resulting rational function as follows \begin{align} &\int \frac1{1+\sin^4x}dx\\ =&\int \:\frac{t^2+1}{2t^4+2t^2+1}dt =\int \frac{1+\frac1{t^2}}{(\sqrt2t)^2 + \frac1{t^2}+2}dt\\ =&\frac{\sqrt2+1}{2\sqrt2} \int \frac{\sqrt2+\frac1{t^2}}{(\sqrt2t)^2 + \frac1{t^2}+2}dt -\frac{\sqrt2-1}{2\sqrt2} \int \frac{\sqrt2-\frac1{t^2}}{(\sqrt2t)^2 + \frac1{t^2}+2}dt\\ =&\frac{\sqrt2+1}{2\sqrt2} \int \frac{d(\sqrt2t-\frac1{t})}{(\sqrt2t-\frac1t)^2 + 2(\sqrt2+1)} -\frac{\sqrt2-1}{2\sqrt2} \int \frac{d(\sqrt2t+\frac1{t})}{(\sqrt2t+\frac1t)^2 -2(\sqrt2-1)}\\ =&\frac{\sqrt{\sqrt2+1}}4\tan^{-1} \frac{t-\frac1{\sqrt2 t}}{\sqrt{\sqrt2+1}} +\frac{\sqrt{\sqrt2-1}}4\coth^{-1} \frac{t+\frac1{\sqrt2 t}}{\sqrt{\sqrt2-1}}+C \end{align}
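The final closed form can be checked numerically by comparing a central difference of the antiderivative against the integrand, on $(0, \pi/2)$ where $t = \tan x > 0$ and the $\coth^{-1}$ argument exceeds $1$ (a sanity check, not part of the derivation):

```python
import math

def antiderivative(x):
    """Closed form from the answer, valid for x in (0, pi/2)."""
    t = math.tan(x)
    r2 = math.sqrt(2)
    a = math.sqrt(r2 + 1)
    b = math.sqrt(r2 - 1)
    acoth = lambda z: 0.5 * math.log((z + 1) / (z - 1))  # inverse coth, needs |z| > 1
    return (a / 4) * math.atan((t - 1 / (r2 * t)) / a) \
         + (b / 4) * acoth((t + 1 / (r2 * t)) / b)

h = 1e-6
max_err = max(
    abs((antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
        - 1 / (1 + math.sin(x) ** 4))
    for x in (0.3, 0.7, 1.1, 1.4)
)
```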
{ "language": "en", "url": "https://math.stackexchange.com/questions/4070996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Let $G=\Bbb Z\times\Bbb Z$ be additive and $H=\{0\}\times\Bbb Z$. Show that: a) $H\le G$. b) $H\cong \Bbb Z$. I more or less did item a): $H$ is a subset of $G$, so we need to show that $H$ is a group. $H$ is associative because $$\begin{align} [(a, b) + (c, d)] + (e, f) &= (a + c, b + d) + (e, f) \\ &= (a + c + e, b + d + f) \\ &= (a, b) + [(c + e, d + f)] \\ &= (a, b) + [(c, d) + (e, f)]. \end{align}$$ $H$ has a neutral element because $$(0,0) + (a, b) = (a, b)$$ Does every element of $H$ have an inverse? (Note: because elements of $H$ are of the form $(0, x)$ where $x\in\Bbb Z$,) $$(-a, -b) + (a, b) = (0,0)$$ Thank you in advance for any help.
$H $ is a subset and a group (easy). $\varphi:\Bbb Z\to\{0\}\times\Bbb Z $ by $x\mapsto (0,x) $ is an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4071283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Suppose a sequence $a_n$ is greater than 0 for all positive integers n, and that $\sum \frac {a_n} n$ converges; then does the following also converge? I was wondering if the following is true. Suppose a sequence $a_n$ is greater than 0 for all positive integers $n$, and that $\sum \frac {a_n}n$ converges; then is $\displaystyle \lim_{m\to \infty}\sum_{n= 1}^m {a_n \over m+n} = 0$? It seems to be true because if $\sum {a_n\over n}$ converges, then ${a_n\over n }\to 0$ as $n\to \infty$. This means, neglecting $n$, ${a_n \over m+n}$ will also tend to 0, and thus the summation would be equal to 0, but I don't know if this is true.
Let $\varepsilon$ be a strictly positive real. $\sum\dfrac{a_n}{n}$ converges, so: $\displaystyle \ \ \exists N \in \mathbb N^{\star} \ , \ \sum_{n=N}^{+\infty} \dfrac{a_n}{n} \leqslant \dfrac{\varepsilon}{2} $ Then: $\ \ \displaystyle \forall m \geqslant N \ , \ \ \sum_{n=1}^{m} \dfrac{a_n}{m+n} \leqslant \sum_{n=1}^{N-1} \dfrac{a_n}{m} + \sum_{n=N}^m \dfrac{a_n}{n} \leqslant \dfrac{\varepsilon}{2}+\dfrac{1}{m} \sum_{n=1}^{N-1} a_n $ Now: $\\ \displaystyle \exists M\geqslant N \ , \ \forall m\geqslant M , \dfrac{1}{m} \sum_{n=1}^{N-1} a_n \leqslant \dfrac{\varepsilon}{2}$ We can conclude that: $$ \forall \varepsilon > 0 \ , \ \exists M \in \mathbb N \ , \ \forall m\geqslant M \ ,\ 0\leqslant \sum_{n=1}^m\dfrac{a_n}{n+m} \leqslant \varepsilon $$ And $\ \ \displaystyle \lim_{m\rightarrow +\infty} \sum_{n=1}^m\dfrac{a_n}{n+m} = 0$
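A numerical illustration with $a_n = \frac1n$ (so $\sum \frac{a_n}{n} = \sum \frac1{n^2}$ converges): the sums $\sum_{n=1}^m \frac{a_n}{m+n}$ shrink toward $0$ as $m$ grows. This only illustrates the claim; the epsilon argument above is the proof.

```python
def s(m):
    # sum_{n=1}^{m} a_n / (m + n) with a_n = 1/n
    return sum(1 / (n * (m + n)) for n in range(1, m + 1))

vals = [s(10), s(100), s(1000), s(10000)]   # roughly ln(m/2)/m, decreasing to 0
```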
{ "language": "en", "url": "https://math.stackexchange.com/questions/4071418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute $Df^{-1}(2,5)$ given $f(u,v)=(uv,u^2+v^2)$ using the Inverse Function Theorem (IFT) I need help figuring out where my solution below goes wrong, because the question was phrased as if the derivative would exist. $$f(u,v)=(uv,u^2+v^2)$$ We prove $D {f}^{-1}(2,5)$ does not exist. Clearly $ {f}$ is $\mathcal{C}^1$ on $\mathbb{R}^2$. Note that $ {f}(1,2)=(2,5)= {f}(2,1)$. We consider the Jacobian of $ {f}$ at $(1,2)$ and at $(2,1)$: $$\Delta_{ {f}}(1,2)=\begin{vmatrix} 2 & 1 \\ 2 & 4 \end{vmatrix}=2(4-1)=6.$$ $$\Delta_{ {f}}(2,1)=\begin{vmatrix} 1 & 2 \\ 4 & 2 \end{vmatrix}=2(1-4)=-6.$$ Since $\Delta_{ {f}}(1,2)\neq 0$ and $\Delta_{ {f}}(2,1)\neq 0$, by the IFT there exists an open $V$ and $W$ respectively in $\mathbb{R}^2$ such that $ {f} ^{-1} $ is $\mathcal{C}^1$ on $ {f}(V)$ and $ {f}(W)$. We know $ {f}$ is $\mathcal{C}^1$ on $\mathbb{R}^2$ and $V$, $W$ are open in $\mathbb{R}^2$, thus $ {f}(V)$, $ {f}(W)$ are open in $\mathbb{R}^2$. Note that $ {f}( {f} ^{-1}(2,5))= {f}(1,2)= {f}(2,1)$. Since $(2,5)= {f}(1,2)\in {f}(V)$ and $(2,5)= {f}(2,1)\in {f}(W)$ by the IFT we have: $$D {f} ^{-1}(2,5) = [D {f}( {f} ^{-1}(2,5))] ^{-1} = [D {f}(1,2)] ^{-1} $$ and $$D {f} ^{-1}(2,5) = [D {f}( {f} ^{-1}(2,5))] ^{-1} = [D {f}(2,1)] ^{-1}.$$ Thus $[D {f}(1,2)] ^{-1} =[D {f}(2,1)] ^{-1} $. We know $[D {f}(u,v)] ^{-1} =\begin{bmatrix} v & u \\ 2u & 2v \end{bmatrix} ^{-1} $ so for $|u|\neq|v|$ we have: $$\displaystyle[D {f}(u,v)] ^{-1} =\frac{1}{2v^2-2u^2}\cdot\begin{bmatrix} 2v & -u \\ -2u & v \end{bmatrix}.$$ Since $1\neq 2$ we have: $$D {f} ^{-1}(2,5)=[D {f}(1,2)] ^{-1} =\frac{1}{6}\cdot\begin{bmatrix} 4 & -1 \\ -2 & 2 \end{bmatrix}$$ $$D {f} ^{-1}(2,5)=[D {f}(2,1)] ^{-1} =-\frac{1}{6}\cdot\begin{bmatrix} 2 & -2 \\ -4 & 1 \end{bmatrix}$$ However, it is clear that $\displaystyle \frac{1}{6}\cdot\begin{bmatrix} 4 & -1 \\ -2 & 2 \end{bmatrix}\neq -\frac{1}{6}\cdot\begin{bmatrix} 2 & -2 \\ -4 & 1 \end{bmatrix}$. 
Thus $D {f} ^{-1}(2,5)\neq D {f} ^{-1}(2,5)$, a contradiction, so $D {f} ^{-1}(2,5)$ does not exist. What I really don’t understand is that this contradiction implies that $Df^{-1}(2,5)$ does not exist, and thus that $f^{-1}$ is not differentiable. This would contradict the IFT, since one of the implications of the theorem is that $f^{-1}$ must be differentiable on the open sets $f(V)$/$f(W)$.
Notice that you're applying the IFT twice here: once at the point $(1,2)$ and another time around $(2,1).$ By applying the IFT around the point $(1,2)$ you get an open set $W_1$ containing $(1,2)$ and a second open set $V_1 = f(W_1)$ containing $f(1,2) = (2,5)$. The IFT tells you that the restriction of $f$ to $W_1$ is invertible and that its inverse $f_{W_1}^{-1}$ is also differentiable. By applying the theorem again at the point $(2,1)$ you will find open sets $W_2$ and $V_2 = f(W_2)$, and the IFT tells you that the restriction of $f$ to $W_2$, $f_{W_2}$, is invertible and that its inverse $f_{W_2}^{-1}$ is also differentiable. We are therefore dealing with two different functions here: $f_{W_1}$ and $f_{W_2}$! In your solution you do not take this into account and are giving these functions the same name $f$. The apparent contradiction that you found isn't one, because you're calling two different matrices $D{f_{W_1}^{-1}}(2,5)$ and $Df_{W_2}^{-1}(2,5)$ by the same name $Df^{-1}(2,5)$. It might be surprising to you that $$ D{f_{W_1}^{-1}}(2,5) \neq Df_{W_2}^{-1}(2,5)$$ but when you think about it, $f_{W_1}$ and $f_{W_2}$ are just different functions, so one shouldn't always expect the above quantities to be the same. That doesn't mean that they aren't related, however! I suspect that we have $f_{W_1} = f_{W_2} \circ \pi_{W_1}$ where $\pi_{W_1}$ is the smooth invertible function $$ \pi_{W_1} : W_1 \rightarrow W_2 : x,y \mapsto \pi_{W_1}(x,y) = (y,x)$$ We would then have $f_{W_1}^{-1} = \pi_{W_1}^{-1}\circ f_{W_2}^{-1}$ and using the chain rule you will find that $$ Df_{W_1}^{-1}(2,5) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \times Df_{W_2}^{-1}(2,5).$$
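The row-swap relation at the end can be checked in exact arithmetic: the inverse Jacobians of $f(u,v)=(uv,u^2+v^2)$ at $(1,2)$ and $(2,1)$ differ exactly by the permutation matrix $P$.

```python
from fractions import Fraction as F

def inv2(m):
    """Inverse of a 2x2 matrix given as nested lists of Fractions."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def Df(u, v):  # Jacobian of f(u, v) = (uv, u^2 + v^2)
    return [[F(v), F(u)], [F(2 * u), F(2 * v)]]

P = [[F(0), F(1)], [F(1), F(0)]]      # row swap
lhs = inv2(Df(1, 2))
rhs = matmul(P, inv2(Df(2, 1)))
```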
{ "language": "en", "url": "https://math.stackexchange.com/questions/4071610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it true that any set which contains elements with contradictory properties is the empty set? Here's what I've "discovered" but can't verify. Let's take a set $A$ defined as "the set of all $x$ which don't belong to $A$". Obviously this definition wouldn't hold in particular axiomatic systems, but it would in others. Let's analyze this in NST. $X\in A$ iff $X\not\in A$. This is a logical contradiction, but it still doesn't prove that $A$ doesn't exist. In fact, I would argue that $A$ is the empty set. If we put all of the elements of $A$ into the empty set, we get $(X∈A \text { iff } X∉A) \text { iff } X\in \emptyset$. This is a tautology for whatever $X$ and $A$. Now, if we switch two of the atomic sentences, we get $X\not\in A \text { iff } (X\in A \text { iff } X\in\emptyset)$. We know this to also always be true, which means that either $A$ doesn't exist or $A$ is the empty set. Therefore, the empty set does contain every element belonging to any set it doesn't belong to. Is there some error in my proof?
The statement $$(*)\quad(X∈A \text { iff } X∉A) \text { iff } X\in \emptyset$$ does not mean what you seem to think it means (though I'm not sure exactly what you think it means). If $A=\{X:X\not\in A\}$, then $(X∈A \text { iff } X∉A)$ must be true for all $X$. When you assert $(*)$ you are asserting that $(X∈A \text { iff } X∉A)$ is true if $X\in\emptyset$, but also that it is false if $X\not\in\emptyset$. So if $A$ actually is equal to $\{X:X\not\in A\}$, then this statement (with an implicit universal quantifier on $X$) is false, since $(X∈A \text { iff } X∉A)$ would be true for all $X$, even $X$ such that $X\not\in\emptyset$. Again, I'm not entirely sure what your train of thought is, but it sounds like you are confusing $$A=\{X:X\not\in A\}$$ with $$A=\{X:X∈A \text { iff } X∉A\}.$$ If $A$ was supposed to satisfy the latter equation, then $A=\emptyset$ would be equivalent to $(*)$ (and indeed, your argument then correctly shows that $A=\emptyset$ satisfies the latter equation). But the first equation is quite different.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4071921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $f$ is a polynomial of degree $k$, can we quickly compute $f(1),f(2),\dots,f(n)$? $f(x)=\left(\sum_{i=1}^k{a_ix^i}\right) \pmod P$ $P > n \gg k.$ Obviously calculating $f(1),f(2),\dots,f(n)$ takes $O(nk)$. So is there a faster way to speed up the process? $k$ is not very big. $k$ is only about $30.$
Thanks for your reply. In fact, both $n$ and $P$ are large, with $P>n$, and $k$ is small. I know how to use FFT to speed up multipoint evaluation of polynomials; it takes $O(n\log^2 n)$. Because $k$ is small here, and $x$ runs over $1,2,3,\dots,n$, I guess there's a faster way to do it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4072072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find all positive $x,y\in\Bbb{Q}$ such that $x+\frac{1}{y}$ and $y+\frac{1}{x}$ are natural numbers. What I did: the product of the two is also a natural number, and since $(x+\frac1y)(y+\frac1x)=xy+\frac{1}{xy}+2$, the quantity $xy+\frac{1}{xy}=m$ is an integer as well. Setting $xy=z$, we get the equation $z^2-mz+1=0$. Hence, $z=\frac{m\pm \sqrt{m^2-4}}{2}$. Since $z$ is rational, $\sqrt{m^2-4}$ has to be rational. It is rational only when $m=2$ (not sure). So $z=xy=1$, and thus $y=\frac{1}{x}$. Hence $2x$ and $2y$ are natural numbers. I can't proceed further. Is my way correct? Any help please.
If $x,y$ are rational $x = \frac pq$ and $y = \frac ab$ (assume in lowest terms) so $\frac pq + \frac ba = \frac {pa + qb}{aq}\in \mathbb Z$ And $\frac ab +\frac qp = \frac {pa+qb}{pb}\in \mathbb Z$ So $q|pa+qb$ so $q|pa$ but $q$ and $p$ are relatively prime so $q|a$. Likewise $a|pa + qb$ so $a|qb$ so $a|q$ so $a= q$. And similarly $p = b$. So $x = \frac 1y$. So we must have $x + \frac 1y = x + x = 2x$ and $y + \frac 1x = 2y$ are natural numbers. Whoo boy. Okay. Case one $x = 1 \in \mathbb Z$ then $y = \frac 11 = 1$ and $x = y = 1$ are obviously solutions. Case 2: $x \in \mathbb Z$ but $x > 1$. Then $y=\frac 1x \not \in \mathbb Z$. But $2y=\frac 2x\in \mathbb N$ so $x|2$ so $x = 2$ and $y= \frac 12$ is a solution ($2+ \frac 1{\frac 12} = 4$ and $\frac 12 + \frac 12 = 1$). Case 3: $x\not \in \mathbb Z$ then $2x \in \mathbb Z$ so $x = \frac m2$ for some odd $m$. But then $y =\frac 2m$ and $2y=\frac 4m$ is an integer so $m|4$. But $m$ is odd so $m =1$. So those are the only three possibilities. $x = y = 1; x = 2, y =\frac 12$; and $x = \frac 12, y = 2$.
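The three solutions are easy to verify in exact arithmetic:

```python
from fractions import Fraction as F

def both_natural(x, y):
    """Check that x + 1/y and y + 1/x are both positive integers."""
    u, v = x + 1 / y, y + 1 / x
    return u == int(u) and u > 0 and v == int(v) and v > 0

solutions = [(F(1), F(1)), (F(2), F(1, 2)), (F(1, 2), F(2))]
```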
{ "language": "en", "url": "https://math.stackexchange.com/questions/4072247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
nth term of $2,2+\frac{1}{2},2+\frac{1}{2+\frac{1}{2}},2+\frac{1}{2+\frac{1}{2+\frac{1}{2}}},2+\frac{1}{2+\frac{1}{2+\frac{1}{2+\frac{1}{2}}}}...$ What is the nth term of the sequence: $$2,2+\frac{1}{2},2+\frac{1}{2+\frac{1}{2}},2+\frac{1}{2+\frac{1}{2+\frac{1}{2}}},2+\frac{1}{2+\frac{1}{2+\frac{1}{2+\frac{1}{2}}}}...$$ in terms of $s_{n-1}$. I have tried for some time to calculate this but to no avail. Hopefully somebody with a better understanding of the world of sequences could help/point me in the right direction!
In my answer to this question, I detailed the steps for solving a first-order rational difference equation such as $${ a_{n+1} = \frac{ma_n + x}{a_n + y} }=m+\frac{x-m y}{a_n+y}$$ For your case $m=2$, $x=1$ and $y=0$. So, using the initial condition, $$a_n=\frac{\left(1+\sqrt{2}\right)^n-\left(1-\sqrt{2}\right)^n } { \left(1+\sqrt{2}\right) \left(1-\sqrt{2}\right)^n+\left(\sqrt{2}-1\right)\left(1+\sqrt{2}\right)^n}$$ Edit In the documentation of sequence $A000129$ in $OEIS$, there is superb formula given by Peter Luschny in year $2018$. It write $$a_n=\frac 1{\sqrt{2}}\, e^{\frac{i \pi n}{2}}\,\sinh \left(n \cosh ^{-1}(-i)\right)$$
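Comparing the closed form against the recursion $s_1 = 2$, $s_{k+1} = 2 + \frac{1}{s_k}$ suggests the indexing is offset by one: numerically $s_k$ matches the formula evaluated at $n = k+1$. The offset is my reading of the formula, not something stated in the answer.

```python
import math

r = math.sqrt(2)

def closed(n):
    num = (1 + r) ** n - (1 - r) ** n
    den = (1 + r) * (1 - r) ** n + (r - 1) * (1 + r) ** n
    return num / den

s, max_err = 2.0, 0.0          # s_1 = 2, the first term of the sequence
for k in range(1, 16):
    max_err = max(max_err, abs(s - closed(k + 1)))
    s = 2 + 1 / s              # s_{k+1} = 2 + 1/s_k
```

Both sides also tend to $1+\sqrt2$, the value of the infinite continued fraction.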
{ "language": "en", "url": "https://math.stackexchange.com/questions/4072610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
"Obvious" integral inequality for radially decreasing function I would like to show the following inequality in $\mathbb{R}^2$: $$ \iint_{B\left( x_0, R \right)} \frac{1}{\lVert x \rVert} \,\mathrm{d}A \leq \iint_{B\left( 0,R \right)} \frac{1}{\lVert x \rVert} \,\mathrm{d}A. $$ Here, $\mathrm{d}A$ denotes the usual Lebesgue measure on $\mathbb{R}^2$. Intuitively, this should be true since points closer to the origin should on average give larger values, but I'm having a lot of trouble proving it rigorously. Any guidance would be appreciated.
Let $A=B(x_0,R)\cap B(0,R)$ with $A_1=B(x_0,R)-A$ and $A_0=B(0,R)-A$. Note that $A_1$ and $A_0$ have the same area. $\int_{A_1} \frac{1}{||x||}dx \le \int_{A_0} \frac{1}{||x||}dx$ since $||x||\ge R$ for all points in $A_1$ while $||x||\le R$ for all points in $A_0$. Add $\int_A \frac{1}{||x||}dx $ to both and get the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4072732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
For a regular Borel measure with infinite support, how to find a decreasing sequence of positive measure open sets? Let $K$ be a compact Hausdorff space and $\mu$ be a regular Borel measure on $K$ with infinite support. How to find a decreasing sequence of open sets $\{V_{n}\}$ such that $\mu(V_{n}) > 0$ and $\lim_{n \rightarrow \infty} \mu(V_{n}) = 0$? If $(X, \tau)$ is a topological Hausdorff space and $\mu$ is a measure on $X$, then definition of support of a measure is $\{x \in X: x \in N_{x} \in \tau \implies \mu(N_{x})>0\}$. I am sure that the regularity of $\mu$ plays a role in finding the above decreasing sequence, but I don't know how. Please provide hints.
Consider $\{x\in supp (\mu): \mu (\{x\}) >0\}$. If this is infinite then there exist $x_n$'s with $\mu (\{x_n\}) >0$ for all $n$ and $\sum \mu (\{x_n\})<\infty$. Hence, $\mu (\{x_n\}) \to 0$. By going to a subsequence we may suppose $\mu (\{x_n\}) <\frac 1 {2^{n}}$. Now there exist open sets $U_n$ with $\mu(U_n)<\mu (\{x_n\})+\frac 1 {2^{n}}$ and $x_n \in U_n$. Note that $\mu(U_n) >0$ for all $n$ because $x_n$ belongs to the support of $\mu$. Take $V_n=\bigcup_{k\geq n} U_k$. The case where $\{x\in supp (\mu): \mu (\{x\}) >0\}$ is a finite set is easier. Here there exists at least one point $x$ in the support with $\mu (\{x\})=0$. Can you finish?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4072860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $x$ and $y$ be real numbers such that $6x^2 + 2xy + 6y^2 = 9$. Find the maximum value of $x^2+y^2$. I tried to re-arrange the terms $6x^2 + 2xy + 6y^2 = 9 $ $6x^2 + 6y^2 = 9 - 2xy $ $6 (x^2 + y^2) = 9 - 2xy $ $x^2 + y^2 = \frac{9 - 2xy}{6} $ Using AM $\geq$ GM $\frac{x^2 + y^2}{2} \geq xy $ Can anyone help me from here? Am I on the right track?
For AM-GM, $-2xy \le 2|xy| \le x^2+y^2$ so $2xy\ge -x^2-y^2$. $$5(x^2+y^2)\le 6x^2+2xy+6y^2 =9.$$ Then $x^2+y^2\le 9/5$. The maximum is reached when $x=-y=\pm\sqrt{9/10}$. You can also get the min value of $x^2+y^2$. Because $2xy\le 2|xy|\le x^2+y^2$ $$7(x^2+y^2)\ge 6x^2+2xy+6y^2=9.$$ Then $x^2+y^2\ge 9/7$. The minimum is reached when $x=y=\pm\sqrt{9/14}$. I think the shape $6x^2+2xy+6y^2=9$ is an ellipse with major/minor axes given by the lines $x=\pm y$. The axes have length $2\sqrt{9/5}$ and $2\sqrt{9/7}$.
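Both extremal points can be verified exactly by working with the rational quantities $x^2$, $y^2$, $xy$:

```python
from fractions import Fraction as F

def on_ellipse(x2, y2, xy):
    return 6 * x2 + 2 * xy + 6 * y2 == 9

# maximum: x = -y = sqrt(9/10), so x^2 = y^2 = 9/10 and xy = -9/10
max_ok = on_ellipse(F(9, 10), F(9, 10), F(-9, 10))
max_val = F(9, 10) + F(9, 10)        # x^2 + y^2 = 9/5

# minimum: x = y = sqrt(9/14), so x^2 = y^2 = xy = 9/14
min_ok = on_ellipse(F(9, 14), F(9, 14), F(9, 14))
min_val = F(9, 14) + F(9, 14)        # x^2 + y^2 = 9/7
```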
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Does every bounded, divergent sequence contain only convergent subsequences with at least two different limits? Claim: Every bounded and divergent sequence $(a_n)$ contains only convergent subsequences. A bounded, divergent sequence $(a_n)$ implies that there is at least one convergent subsequence (by Bolzano Weierstrass) $(a_{nk}) \rightarrow l$. Now, because $(a_n)$ diverges, there exists some subsequence $(a_{jk})$ which isn't in the $\epsilon -$ neighborhood of $l$. This means that either $(a_{jk})$ is divergent or converges to some other point (as it cannot be unbounded). If it converges to some other point $(a_{jk}) \rightarrow k$ we have shown that there exist two limit points and we are done. If, however, $(a_{jk})$ is divergent itself, I can apply Bolzano Weierstrass again and basically have a recursive loop until I am only left with only convergent subsequences that all converge to a different limit. Is this valid? I'm not really sure if this is sufficient or even correct...
As the comments already mentioned, the claim is incorrect (the sequence itself is one of its subsequences, for example). The flaw in your reasoning is in your recursive loop: you implicitly assume this loop will end after finitely many steps. This is by no means clear, since there can be infinitely many different subsequences.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Please help me clear confusion over principal roots and identities for n-th radicals From my old high school math textbook: If ${a{\geq }0}$ and $n\in \mathbb{N} ^{\ast }$, then ${\sqrt[{n}] {a}}$ is the non-negative solution of ${{x}^{n}}=a$. It then goes on to infer a number of properties and identities in the case that $\alpha \geq 0$ and $n\in \mathbb{N} ^{\ast }$. However, the choice of defining the n-th root of something only as the positive solution of this equation leaves me quite confused. For example, we could define $\sqrt[3] {-8}=-2$ but we don't. Instead the book goes on to break down the solution of ${{x}^{n}}=a$ based on $\pm \sqrt[n] {\left| a\right| }$ instead of just $\sqrt[n] {a}$. Why do we do this? The second part is please tell me if these expressions are in fact correct because the book doesn't mention them (although it might be using them) and I've seen them elsewhere online. * *If $x \geq 0$ and $a\geq 0$ and $n\in \mathbb{N} ^{\ast }$, then $\ x^{n}=a\Leftrightarrow x=\sqrt[n] {a}$ *$\left( \sqrt[n] {a}\right) ^{n}=a$ holds for every $a\in \mathbb{R}$, while $\sqrt[n] {a^{n}}=a$ holds only for $a\geq 0$
It is quite likely that your textbook deals with it that way in order not to have to treat separately the case in which $n$ is even (in which case no negative number has an $n$th root) and the case in which $n$ is odd (in which case each number has one and only one $n$th root). And, yes, those assertions are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the trigonometric limit Find the limit $$\lim_{x \to 0} \frac{\tan x-\sin x}{x^4} $$ Here is my solution: $$\lim_{x \to 0} \frac{\tan x-\sin x}{x^4} = \lim_{x \to 0}\frac{\sin x(\frac{1}{\cos x}-1)}{x^4}=\lim_{x \to 0}\frac{\sin x}{x^4}*(\frac{1}{\cos x}-1)=\lim_{x \to 0}\frac{\sin x}{x^4}*(0)$$ $$ \lim_{x \to 0}\frac{\sin x}{x^4}$$ $$ \to \lim_{x \to 0+0}\frac{\sin(0+0)}{(0+0)^4}=\frac{0}{0} \space AND \space \lim_{x \to 0-0}\frac{\sin(0-0)}{(0-0)^4}=\lim_{x \to 0-0}\frac{-\sin(0)}{(0)^4}=-\frac{0}{0}, Hence \space the \space limit \space DNE $$ So, is that an appropriate way? I have troubles with the limits that have no solution (do not exist). I can easily find the limit with L'Hôpital's Rule, but then it turns out that the limit one-sided and do not exist. Is there any universal method for that type of limits?
You were on the right track with the second equality; all that was left was to multiply numerator and denominator by a "conjugate": $$\frac{\sin x(\sec x - 1)}{x^4}\cdot\frac{\sec x+1}{\sec x + 1} = \frac{\sin x}{x}\cdot \frac{\tan^2x}{x^2} \cdot\frac{1}{x} \cdot\frac{1}{\sec x + 1}$$ Writing $f(x)$ for the original quotient, the limit of $xf(x)$ as $x\to 0$ is $$ 1\cdot 1^2 \cdot \frac{1}{1+1} = \frac{1}{2}$$ Thus the limit of $f$ at $0$ does not exist: $f(x)\sim\frac{1}{2x}$ near $0$, so the one-sided limits are $+\infty$ and $-\infty$.
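A quick numerical look at the same quotient (my own check; the sample points are arbitrary) shows exactly this behaviour: $xf(x)$ settles near $1/2$, while $f$ itself blows up with opposite signs on the two sides of $0$.

```python
import math

def f(x):
    # the original quotient (tan x - sin x) / x^4
    return (math.tan(x) - math.sin(x)) / x ** 4

left, right = f(-1e-3), f(1e-3)   # large, with opposite signs
scaled = 1e-3 * f(1e-3)           # x*f(x), near 1/2
```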
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $1+a$ is a unit for all non-units $a$ in a nonzero ring $R$, then $R-R^*$ is an ideal Suppose that $R$ is a nonzero commutative ring with $1$. Suppose $1+a$ is a unit for all non-units $a$ in a nonzero ring $R$. I'd like to understand why this implies that $R-R^*$ is an ideal. In particular, I'd like to know why the sum of two non-units is a non-unit. So suppose $a,b$ are non-units. We'd like to show that $a+b\in R-R^*$ as well. From the assumption, we have that $1+a,1+b\in R^*$, and so $(1+a)(1+b)=1+a+b+ab\in R^*$. But now I'm stuck. Any suggestions?
These are equivalent definitions of local rings. For a nonzero commutative ring $R$, we show the equivalence of the following statements: * *For every $a \in R \backslash R^\times$, we have $1 + a \in R^\times$. *$R$ has a unique maximal ideal. *If $a, b \in R \backslash R^\times$, then $a + b \in R\backslash R^\times$. We may prove in the following order: $1 \implies 2 \implies 3 \implies 1$. $1 \implies 2$: let $I$ be a maximal ideal of $R$. We want to show that any proper ideal $J \subseteq R$ is contained in $I$. Suppose there is a proper ideal $J$ not contained in $I$. Then the ideal $I + J$ must be $R$, as $I$ is maximal. Thus there exist $i \in I$ and $j \in J$ such that $i + j = 1$. But both $-i$ and $1 + (-i) = j$ are non-units, contradiction. $2 \implies 3$: both $aR$ and $bR$ are proper ideals of $R$, hence are contained in the unique maximal ideal $M$. Therefore their sum is also in $M$ and is not a unit. $3 \implies 1$: suppose there exists $a \in R$ such that both $a$ and $1 + a$ are non-units. Then $1 = (1 + a) + (-a)$ is a non-unit, contradiction.
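As a concrete illustration (my addition, with made-up helper names), condition 1 can be tested on the rings $\mathbb{Z}/n\mathbb{Z}$, where it should hold exactly when $n$ is a prime power, i.e. exactly when $\mathbb{Z}/n\mathbb{Z}$ is local:

```python
from math import gcd

def condition_one(n):
    # 1 + a is a unit for every non-unit a in Z/nZ
    nonunits = [a for a in range(n) if gcd(a, n) != 1]
    return all(gcd((1 + a) % n, n) == 1 for a in nonunits)

def is_prime_power(n):
    for p in range(2, n + 1):
        if n % p == 0:  # p is the least prime factor of n
            while n % p == 0:
                n //= p
            return n == 1
    return False

agreement = all(condition_one(n) == is_prime_power(n) for n in range(2, 100))
```

For example $\mathbb{Z}/8\mathbb{Z}$ passes (its non-units are the even residues, and adding $1$ gives odd residues), while $\mathbb{Z}/6\mathbb{Z}$ fails at $a=2$.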
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$x,y\in \mathbb Z^+$. Show that $P(x)=P(y) \implies x=y$. Let $x,y\in \mathbb Z^+$; we define $P(a)$ as the product of all the divisors of $a$, including $1$ and $a$. Show that $$P(x)=P(y)\implies x=y.$$ I'm going to show my attempt, which is based on cases. Case 1: $x,y$ are primes. $$x,y\in \mathbb P\implies \cases{P(x)=x\\P(y)=y}\implies (P(x)=P(y) \iff x=y)$$ where $\mathbb P$ is the set of prime numbers. Case 2: $x,y$ are prime powers. $$\cases{x=p^n\\ y=q^m } \implies \cases{P(x)=1\cdot p\cdot p^2\cdots p^n=p^{\frac{n(n+1)}{2}}\\P(y)= 1\cdot q\cdot q^2\cdots q^m= q^{\frac{m(m+1)}{2}}} $$ So to prove the statement we set $$p^{\frac{n(n+1)}{2}} =q^{\frac{m(m+1)}{2}}.$$ The next step is to show that this implies $x=y$; I have no idea how to do that. (By the way, the next case is to consider $x,y$ as products of primes.)
Deep breath. So every positive integer is going to have a unique prime factorization. Each factor is going to be a product of these primes to some power, and so $P(x)$, being a product of these factors, is going to be a product of these primes to powers. So if $x = \prod p_i^{k_i}$ is the unique prime factorization of $x$ then $P(x) = \prod p_i^{m_i}$ where the $m_i$ are natural numbers determined by some manipulation of the values of $k_i$ and possibly, but probably not, the values of $p_i$. Just what this manipulation is ... well, we'll figure that out later. So if $P(x) = P(y) = M = \prod p_i^{m_i}$ then it must be that $x$ and $y$ have the exact same prime factors, namely $\{p_i\}$, and $x = \prod p_i^{k_i}$ and $y = \prod p_i^{w_i}$, and that somehow the set $\{k_i\}$ and the set $\{w_i\}$ both get manipulated to the same set $\{m_i\}$. We have to show nothing more or less than this: if two sets $\{k_i\}$ and $\{w_i\}$ both get manipulated to the same set $\{m_i\}$, then each $k_i$ is equal to $w_i$ respectively. Okay... so just what is the manipulation[1]? ...... Well. Consider $x = \prod p_i^{k_i}$ and let's consider one particular prime designated $Q$ (with a capital) so that $x = Q^K \prod_{p_i\ne Q} p_i^{k_i}$, and let $N = \frac x{Q^K}$, so that $Q\nmid N$. Okay.... Let $F = \{$ factors of $N\}$, and so each factor of $x$ will be expressible as $Q^jf$ for some $f\in F$ and some power $j$ where $j = 0,\dots, K$. So $P(x) = \prod_{j= 0}^K [\prod_{f\in F} Q^j f]=$ $(\prod_{j=0}^K Q^j)^{|F|}\left(\prod_{f\in F} f\right)^{K+1}$ Okay... $\prod_{f\in F} f = P(N) = P(\frac x{Q^K})$ and $\prod_{j=0}^K Q^j= Q^{\sum_{j=0}^K j} =Q^{\frac {K(K+1)}2}$ and $|F|$.... well.... if a number has $\prod p_i^{k_i}$ as its prime factorization then a factor $f$ is of the form $\prod p_i^{a_i}$ where $0 \le a_i \le k_i$. There are $k_i +1$ options for $a_i$ so there are $\prod (k_i + 1)$ factors. So $|F| =\prod_{p_i \ne Q} (k_i + 1)$.
So we have $P(x) = Q^{\frac K2\cdot (K+1)\prod_{p_i \ne Q} (k_i + 1)}\cdot P\left(\frac x{Q^K}\right)^{K+1}$, but we can put that $(K+1)$ inside the product, so $P(x) = Q^{\frac K2\prod(k_i + 1)}\cdot P\left(\frac x{Q^K}\right)^{K+1}$, where the product now runs over all the $k_i$. And so recursively we have $P(x) = \prod p_j^{\frac {k_j}2 \prod(k_i + 1)}$. ........ So if $P(x) = P(y) = \prod p_j^{m_j}$ then $x = \prod p_j^{k_j}$ and $y = \prod p_j^{w_j}$ and we have for each $j$ that $m_j = \frac {k_j}2(\prod (k_i +1)) =\frac {w_j}2(\prod (w_i +1))$ and our task is simply to show that this implies each $k_j = w_j$. ....Phew..... Okay so we know that each $k_j = w_j \cdot \frac {\prod(w_i+1)}{\prod(k_i+ 1)}$ but $\frac {\prod(w_i+1)}{\prod(k_i+ 1)}=\prod \frac {w_i + 1}{k_i + 1}$ is a positive constant. If we call it $C$ we have: $C = \prod \frac {w_i + 1}{k_i + 1} =\prod \frac {w_i + 1}{Cw_i + 1}$ Now we just have to show that this is only possible if $C = 1$. And if $C < 1$ then $\frac {w_i + 1}{Cw_i + 1} > 1$ and $C = \prod \frac {w_i + 1}{Cw_i + 1} > 1$. A contradiction. And if $C > 1$ then $\frac {w_i + 1}{Cw_i + 1} < 1$ and $C = \prod \frac {w_i + 1}{Cw_i + 1} < 1$. A contradiction. ====== [1] Credit where credit is due. Calvin Lin had a far more elegant calculation and expression of $P(x)$ that escaped me. Let $x =\prod p_i^{k_i}$ and let $\tau(x) = \prod(k_i + 1)=$ the number of factors of $x$. Now $d\mid x\iff\frac xd\mid x$, obviously. So.... $P^2(x) = \left(\prod_{d|x} d\right)\cdot \left(\prod_{d|x}\frac xd\right) = \prod_{d|x} x= x^{\tau(x)} = x^{\prod(k_i + 1)}$ So $P(x) = x^{\frac {\prod(k_i + 1)}2}$.... which, as $\sqrt{x} = \prod p_i^{\frac {k_i}2}$, is the same result as mine.
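Both the closed form from the footnote and the injectivity claim itself can be brute-force checked for small arguments (a sanity check of mine, not part of the proof):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def P(n):
    # product of all divisors of n, computed directly
    prod = 1
    for d in divisors(n):
        prod *= d
    return prod

# the identity P(n)^2 = n^tau(n) from footnote [1]
closed_form_ok = all(P(n) ** 2 == n ** len(divisors(n)) for n in range(1, 400))

# and P takes distinct values here, as the exercise asserts
values = [P(n) for n in range(1, 400)]
injective = len(values) == len(set(values))
```

For instance $P(12)=1\cdot2\cdot3\cdot4\cdot6\cdot12=1728=12^{6/2}$, matching $\tau(12)=6$.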
{ "language": "en", "url": "https://math.stackexchange.com/questions/4073932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove $\pi=\lim_{n\to\infty}2^{4n}\frac{\Gamma ^4(n+3/4)}{\Gamma ^2(2n+1)}$ How could it be proved that $$\pi=\lim_{n\to\infty}2^{4n}\frac{\Gamma ^4(n+3/4)}{\Gamma ^2(2n+1)}?$$ What I tried Let $$L=\lim_{n\to\infty}2^{4n}\frac{\Gamma ^4(n+3/4)}{\Gamma ^2(2n+1)}.$$ Unwinding $\Gamma (n+3/4)$ into a product gives $$\Gamma \left(n+\frac{3}{4}\right)=\Gamma\left(\frac{3}{4}\right)\prod_{k=0}^{n-1}\left(k+\frac{3}{4}\right).$$ Then $$\lim_{n\to\infty}\frac{(2n)!}{4^n}\prod_{k=0}^{n-1}\frac{16}{(3+4k)^2}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ Since $$\frac{(2n)!}{4^n}\prod_{k=0}^{n-1}\frac{16}{(3+4k)^2}=\prod_{k=1}^n \frac{4k(4k-2)}{(4k-1)^2}$$ for all $n\in\mathbb{N}$, it follows that $$\prod_{k=1}^\infty \frac{4k(4k-2)}{(4k-1)^2}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ But note that this actually gives an interesting Wallis-like product: $$\frac{2\cdot 4\cdot 6\cdot 8\cdot 10\cdot 12\cdots}{3\cdot 3\cdot 7\cdot 7\cdot 11\cdot 11\cdots}=\frac{\Gamma ^2(3/4)}{\sqrt{L}}.$$ I'm stuck at the Wallis-like product, though.
I suppose you could do it the cheap way and use Stirling's approximation: $$n! \sim \sqrt{2\pi n} (n/e)^n$$ implies $$\Gamma^4(n+3/4) \sim 4\pi^2 \frac{(n-1/4)^{4n+1}}{e^{4n-1}},$$ and $$\Gamma^2(2n+1) \sim 2\pi \frac{(2n)^{4n+1}}{e^{4n}};$$ hence $$2^{4n} \frac{\Gamma^4(n+3/4)}{\Gamma^2(2n+1)} \sim \pi \left(1 - \frac{1}{4n}\right)^{4n+1} e,$$ and the rest is straightforward.
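The convergence is easy to observe numerically through the log-gamma function (my own check, with an arbitrary sample size):

```python
import math

def a(n):
    # 2^{4n} * Gamma(n + 3/4)^4 / Gamma(2n + 1)^2, evaluated via logarithms
    # to avoid overflow for large n
    log_a = 4 * n * math.log(2) + 4 * math.lgamma(n + 0.75) - 2 * math.lgamma(2 * n + 1)
    return math.exp(log_a)

approx = a(10 ** 6)
```

At $n=10^6$ the value agrees with $\pi$ to roughly six digits, consistent with the $O(1/n)$ error visible in the Stirling computation.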
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Does the power rule come from the generalised binomial theorem or the other way around? Remember how we say $\lim_{x\to 0} \frac{\sin{x}}{x}=1$ doesn't come from L'Hôpital's rule, because the differentiation of $\sin x$ from first principles uses the fact that $\lim_{x\to 0} \frac{\sin{x}}{x}=1$? A similar situation is arising for the power rule and the generalised binomial theorem. The generalised binomial theorem can be proven as the Taylor expansion of $(1+x)^r$. This derivation uses the power rule, as we need the power rule to calculate derivatives of $(1+x)^r$. The derivation of the power rule from first principles would need the generalised binomial theorem, as we'd need to expand the term $(x+h)^r$. So which of these two results is more fundamental? Which one comes from the other and why?
I'm guessing: $\lim_{h\rightarrow 0} \frac{(x+h)^r-x^r}{h}$ $=\lim_{h\rightarrow 0} \frac{e^{r\ln(x+h)}-x^r}{h}$ Now we use L'Hôpital's rule: $=\lim_{h\rightarrow 0} e^{r\ln(x+h)}\cdot r\cdot \frac{1}{x+h}$ $= rx^{r-1}$ So the generalised power rule does not depend on the generalised binomial theorem.
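One can check the resulting rule numerically for a non-integer exponent, with $x^r$ computed as $e^{r\ln x}$ as above (my sketch; the particular numbers are arbitrary):

```python
import math

def power(r, t):
    # t^r defined as e^{r ln t}, for t > 0
    return math.exp(r * math.log(t))

r, x, h = 2.5, 3.0, 1e-7
diff_quotient = (power(r, x + h) - power(r, x)) / h
exact = r * power(r - 1, x)   # the claimed derivative r * x^(r-1)
```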
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
minimal polynomial of $7^{th}$ root of unity Let $\alpha=\zeta_7+\zeta_7^2+\zeta_7^4$. What's the minimal polynomial of $\zeta_7$ over $\mathbb{Q}(\alpha).$ I have managed to show that $[\mathbb{Q}(\alpha):\mathbb{Q}]=2$ and $[\mathbb{Q}(\zeta):\mathbb{Q}]=6$. By tower rule, the minimal polynomial has degree 3. As long as we find a polynomial with degree 3 such that $\zeta$ is a root, then we are done. But I got stuck in finding such polynomial. Any hint is appreciated.
Following your analysis of the indices, the Galois group of the extension $\mathbb{Q}(\zeta_7)\supseteq \mathbb{Q}(\alpha)$ is a (the) subgroup of order $3$ of $\text{Gal}(\mathbb{Q}(\zeta_7)/\mathbb{Q})\cong (\mathbb{Z}/7\mathbb{Z})^\times\cong\mathbb{Z}/6\mathbb{Z}$. This subgroup is the cyclic subgroup generated by the element $$ \sigma\in\text{Gal}(\mathbb{Q}(\zeta_7)/\mathbb{Q}), \,\,\,\sigma(\zeta_7)=\zeta_7^2 $$ A great trick which often helps in finding polynomials (or really any type of object in the context of Galois/representation theory) which are invariant under the action of some group of transformations is taking the 'average'. More explicitly, if we're interested in finding a polynomial in $\mathbb{Q}(\alpha)[x]=(\mathbb{Q}(\zeta_7))^{\langle \sigma \rangle}[x]$ which has $\zeta_7$ as a root, we can construct it by taking the product of the orbit of $(x-\zeta_7)$ under $\langle \sigma \rangle$: $$ p(x)=(x-\zeta_7)(x-\sigma(\zeta_7))(x-\sigma^2(\zeta_7))=(x-\zeta_7)(x-\zeta_7^2)(x-\zeta_7^4) $$ Because of the construction, clearly $p(x)$ is invariant under the action of $\langle \sigma \rangle$ (thought of as an automorphism of the ring $\mathbb{Q}(\zeta_7)[x]$ by acting on coefficients) and hence the coefficients of the polynomial $p(x)$ must all lie in $(\mathbb{Q}(\zeta_7))^{\langle \sigma\rangle}=\mathbb{Q}(\alpha)$. At this point, I think it's quite satisfying to verify this last statement directly:$$ p(x)=(x-\zeta_7)(x-\zeta_7^2)(x-\zeta_7^4)= $$ $$=x^3-(\zeta_7+\zeta_7^2+\zeta_7^4)x^2+(\zeta_7^3+\zeta_7^5+\zeta_7^6)x-1=x^3-\alpha x^2 + \frac{\alpha^2-\alpha}{2}x-1. $$ And lo and behold, the coefficients in fact are in $\mathbb{Q}(\alpha)$. Hope this helps :) P.S.: bear in mind that this is all possible because $\mathbb{Q}(\zeta_7)\supseteq \mathbb{Q}$ is Galois.
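A floating-point verification of the final polynomial (my addition; it also checks the relation $\alpha^2+\alpha+2=0$, which follows from $\zeta_7+\zeta_7^2+\cdots+\zeta_7^6=-1$):

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 7)
alpha = zeta + zeta ** 2 + zeta ** 4

# p(zeta) for p(x) = x^3 - alpha*x^2 + (alpha^2 - alpha)/2 * x - 1
p_at_zeta = zeta ** 3 - alpha * zeta ** 2 + (alpha ** 2 - alpha) / 2 * zeta - 1

# the middle coefficient equals zeta^3 + zeta^5 + zeta^6
middle_gap = abs((alpha ** 2 - alpha) / 2 - (zeta ** 3 + zeta ** 5 + zeta ** 6))

alpha_relation = alpha ** 2 + alpha + 2
```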
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Clarifying Polygonal Paths in Complex Analysis Introduction to Complex Analysis by H. A. Priestley (Revised ed.) shows a diagram (page 40) which is a polygonal path in a set $G$ (link below). Their proof claims we can polygonally pave our way from $z$ to $a$ by using the fact that open discs centred at $z$ enclose a point $w$ such that the line segment $[z,w]\subset D(z;r),r>0$. I am confident in that notion, but maybe it's their diagram which confuses me. It displays large line segments paving through the set $G$ which if you were to draw open discs centred from the original point, the open disc is then not contained by $G$. I assume they could have used smaller open discs, therefore, more $w$s to get to $a$. Any clarification on this topic would help me out a lot. Thank you very much.
H. A. Priestley, Introduction to Complex Analysis, Revised Edition (Oxford University Press 1990), p.39f.: Let $G_1 = \{z \in G \colon\ \text{there exists a polygonal path in } G \text{ with endpoints } a \text{ and } z\}$ and let $G_2 = G \setminus G_1.$ We require $G_1 = G.$ We shall prove that each of $G_1$ and $G_2$ is open. Connectedness of $G$ will then imply that one of these sets is empty. This cannot be $G_1,$ since $a \in G_1.$ We now establish our claim that $G_1$ and $G_2$ are open. For any $z \in G,$ we can find $r$ such that $D(z; r) \subseteq G.$ For each $w \in D(z; r),$ $[z, w] \subseteq G.$ It follows that $z$ can be joined to $a$ by a polygonal path in $G$ if and only if $w$ can be (see Fig. 3.3). Hence, for $k = 1, 2,$ $z \in G_k$ implies $D(z;r) \subseteq G_k.$ The diagram illustrates the implication that if $z \in G_1$ then $w \in G_1.$ A similar diagram could be drawn to illustrate the converse implication that if $w \in G_1$ then $z \in G_1;$ but that would be overkill. Instead, just imagine that the penultimate segment of the polygonal path from $a$ terminates at $w$ rather than $z.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Strang: Simple Calculus Question I'm looking at Gilbert Strang's introductory calculus textbook "Calculus". https://ocw.mit.edu/ans7870/resources/Strang/Edited/Calculus/Calculus.pdf. In chapter 1, fig 1.5, Strang illustrates what happens when you alter the function f(t) = 2t + 1. He shows the following equations along with two similar graphs, the second shifted downward. f(t) = 2t + 1 f(t) - 2 = 2t - 1 I realize that taking the first equation and subtracting 2 from the RHS alone will nudge the graph down 2 tics, but he's done it to both sides. Shouldn't this result in the same function, and the same graph? My only guess is that maybe he forgot to relabel the axes? Screenshot: https://i.stack.imgur.com/HOtXF.png
If you rename $$g(t) := f(t) -2$$ then you have the function $g(t) = 2t - 1$ which is what he illustrated. If he would write $f(t) = 2t - 1$ then it would be ambiguous because you would label two different objects with the same name. This could lead to the wrong perception that $$2t - 1 = 2t + 1$$ i.e. $$-1 = +1$$ which is obviously not true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to learn the necessary proof writing skills / set theory to work through Spivak's Calculus I am planning on being a math major this coming fall and will likely be enrolled in honors calculus. In preparation for this course and having found myself with an abundance of free time, I have begun working through Spivak's calculus to get out ahead of the curve. Unfortunately, I have found that my proof writing ability, my knowledge/intuition of mathematic logic, and my understanding of the requisite set theory to write these proofs is not at the level of Spivak's questions. What should I do to bring myself up to speed (books, resources etc.)? For reference, I have solid knowledge of computational single and multi variable calculus, but little to no experience writing formal proofs besides a super dumbed-down delta-epsilon proof. Please delete if this breaks any rules, as this is my first post here.
Do you know Daniel Velleman's fine book How To Prove It (CUP)? This could well be worth looking at as a first port of call. From his blurb: Many students have trouble the first time they take a mathematics course in which proofs play a significant role. This new edition of Velleman's successful text will prepare students to make the transition from solving problems to proving theorems by teaching them the techniques needed to read and write proofs. The book begins with the basic concepts of logic and set theory, to familiarize students with the language of mathematics and how it is interpreted. These concepts are used as the basis for a step-by-step breakdown of the most important techniques used in constructing proofs. The author shows how complex proofs are built up from these smaller steps, using detailed 'scratch work' sections to expose the machinery of proofs about the natural numbers, relations, functions, and infinite sets. Many students have found this book of great help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4074818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Corollary of convergence in distribution Can I say that if $\xi_n \xrightarrow {d} \xi$ and $\xi_n, \xi$ are non-negative random variables with finite expected value then $$\mathbb{E}\xi \le \liminf_{n\to\infty}\mathbb{E}\xi_n?$$ I am having trouble proving or disproving it. Could somebody help me? I would be grateful.
Write each expectation as the integral of the tail over the positive real line, $\mathbb E\xi_n=\int_0^\infty \mathbb P(\xi_n>t)\,dt$, and apply Fatou's lemma on the positive real line (with Lebesgue measure) to the sequence $f_n(t)=\mathbb P(\xi_n>t)$. Convergence in distribution gives $f_n(t)\to\mathbb P(\xi>t)$ at every continuity point of the limit distribution function, hence for almost every $t$, so Fatou yields $\mathbb E\xi\le\liminf_{n\to\infty}\mathbb E\xi_n$.
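To see that the inequality can be strict, a standard example (my illustration, not part of the answer) is $\xi_n = n$ with probability $1/n$ and $0$ otherwise: then $\xi_n \to 0 =: \xi$ in distribution, $\mathbb E\xi = 0$, but $\mathbb E\xi_n = 1$ for every $n$. The tail-integral representation makes this visible numerically:

```python
def tail(n, t):
    # P(xi_n > t) for xi_n = n with probability 1/n, 0 otherwise
    if t < 0:
        return 1.0
    return 1 / n if t < n else 0.0

def expectation(n, tmax=200.0, grid=100000):
    # E[xi_n] = integral_0^infty P(xi_n > t) dt (truncated at tmax,
    # harmless here since the tail vanishes beyond t = n)
    dt = tmax / grid
    return sum(tail(n, k * dt) for k in range(grid)) * dt

e10, e50 = expectation(10), expectation(50)
```

So $\liminf_n \mathbb E\xi_n = 1 > 0 = \mathbb E\xi$, and the inequality is strict here.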
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Plot in Octave of Integral on unit circle $\int_{-\pi}^\pi \frac{(1-r^2)\cos(x)}{1-2r \cos(\theta-x)+r^2}dx $ I want to get a 3D plot in Octave of the function $$f(r\cdot e^{i\theta})=\int_{-\pi}^\pi \frac{(1-r^2)\cdot\cos(x)}{1-2\cdot r\cdot\cos(\theta-x)+r^2}dx, $$ which is called the Poisson integral of the function $\cos(x)$, for $0\leq r<1$. I really don't know how to get it and didn't find any answers on the internet, so maybe someone can help me. Until now I got the following code, but the output is far away from what I need:

r = linspace(0,0.999,10)
theta = pi*linspace(-1,1,10)
z = r.*exp(1i*theta)
[Y1,Y2]=meshgrid(sqrt(real(z).^2+imag(z).^2),imag(z))
for i=1:10
  for j=1:10
    n=Y1(i,j)
    m=Y2(i,j)
    y=@(x) (1-n.^2).*cos(x)/(1-2.*n.*cos(m-x)-n.^2)
    Z(i,j)= quad(y,-pi,pi)
  end
end
plot3(real(z),imag(z),Z)

I'm thankful for every answer or attempt to help me.
Let $t= x -\theta$. The integrand is $2\pi$-periodic in $t$, so we may shift the interval of integration back to $[-\pi,\pi]$; there the $\sin t$ part of the numerator is odd and integrates to zero, while the $\cos t$ part is even: \begin{align} \int_{-\pi}^\pi \frac{(1-r^2)\cos x}{1-2r\cos(\theta-x)+r^2}dx =&(1-r^2) \int_{-\pi-\theta}^{\pi -\theta}\frac{\cos t\cos\theta- \sin t \sin\theta}{1-2r\cos t+r^2}dt\\ =& 2\cos\theta (1-r^2)\int_{0}^{\pi}\frac{\cos t}{1-2r\cos t+r^2}dt\\=& 2\pi r\cos\theta \end{align} where, with $y=\tan\frac t2$, \begin{align} &\int_{0}^{\pi}\frac{\cos t}{1-2r\cos t+r^2}dt =\frac{1}{2r}\left(-\pi+ \frac{2(1+r^2)}{(1+r)^2} \int_{0}^{\infty}\frac{dy}{y^2+ \left(\frac{1-r}{1+r}\right)^2}\right) =\frac{\pi r}{1-r^2} \end{align}
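A quadrature sanity check of the closed form $2\pi r\cos\theta$ (my own; the parameter values are arbitrary):

```python
import math

def poisson_integral(r, theta, n=20000):
    # midpoint rule on [-pi, pi]; the integrand is smooth and 2*pi-periodic,
    # so this converges very fast
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        x = -math.pi + (k + 0.5) * h
        total += (1 - r * r) * math.cos(x) / (1 - 2 * r * math.cos(theta - x) + r * r)
    return total * h

r, theta = 0.7, 1.1
numeric = poisson_integral(r, theta)
closed_form = 2 * math.pi * r * math.cos(theta)
```

For the surface plot the question asks about, one can therefore plot $2\pi r\cos\theta$ over the unit disc directly and skip the quadrature entirely.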
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uniform convergence of $(f_n)_{n \in \Bbb{N}}$ on $D_1 \cup D_2$ I want to show using the Cauchy criterion that if $(f_n)_{n \in \Bbb{N}}$ is uniformly convergent on both $D_1$ and $D_2$, then it must be uniformly convergent on $D_1 \cup D_2$ Here's what I have: $(f_n)_{n \in \Bbb{N}}$ is uniformly convergent on $D_1$, therefore, $\forall \varepsilon \gt 0, \exists n_0$ such that if $ n,m \ge n_0$ then $\vert f_n(x) - f_m(x) \vert \lt \frac{\varepsilon}2$. Similarly, $(f_n)_{n \in \Bbb{N}}$ is uniformly convergent on $D_2$, therefore, $\forall \varepsilon \gt 0, \exists k_0$ such that if $ p,q \ge k_0$ then $\vert f_p(x) - f_q(x) \vert \lt \frac{\varepsilon}2$. Hence letting all $n, m, p, q \ge max\{n_0, k_0\}$: $$\vert \vert f_n(x)- f_m(x)\vert -\vert f_p(x) - f_q(x)\vert \vert \le \vert f_n(x)- f_m(x)\vert + \vert f_p(x) - f_q(x)\vert \lt \frac{\varepsilon}2 + \frac{\varepsilon}2 = \varepsilon$$ Then it would follow that $$\vert \vert f_n(x)- f_m(x)\vert -\vert f_p(x) - f_q(x)\vert \vert \lt \varepsilon $$ Which I want to think implies that $(f_n)_{n \in \Bbb{n}}$ is convergent on $D_1 \cup D_2$ I don't know if what I did was correct, or if I made a mistake somewhere, so I would appreciate any feedback on my take and some hints on what direction my proof should go assuming it went wrong.
Let $N=\max \{n_0,k_0\}$ and take any $x \in D_1 \cup D_2$; then $|f_{n_1}(x)-f_{n_2}(x)|< \frac{\epsilon}{2}$ for all $n_1, n_2 \geq N$. Indeed, if $x \in D_1$ we use the uniform Cauchy condition on $D_1$ (the inequality you've written in the third paragraph), and similarly if $x \in D_2$. So the sequence is uniformly Cauchy, hence uniformly convergent, on $D_1 \cup D_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove if $x > 0$, $1+\frac{1}{2}x-\frac{1}{8}x^2 \le \sqrt{1+x}$ Prove if $x > 0$, $1+\frac{1}{2}x-\frac{1}{8}x^2 \le \sqrt{1+x}$. I found online that one person suggested using Taylor's theorem to expand the right-hand side and apply Bernoulli's inequality. So, if $x_0 = 0$, $\sqrt{1+x} = 1+\frac{1}{2}x-\frac{1}{4(2!)}x^2+R$, where $R$ is larger than $0$; this makes sense to me, but I'm trying to find another way to prove the inequality, like using the Mean Value Theorem as for the inequality $\sqrt{1+x} \le 1+\frac{1}{2}x$. I see the part where $1+\frac{1}{2}x$ is the same, but have trouble putting $\frac{1}{8}x^2$ into the argument.
Well, putting $x=\sinh^2(a)$, we have to show (using $\cosh^2(a)-\sinh^2(a)=1$): $$\cosh(a)\geq 1+\frac{1}{2}\sinh^2(a)-\frac{1}{8}\sinh^4(a)$$ or: $$\sinh^6\Big(\frac{a}{2}\Big)(\cosh(a)+3)\geq 0,$$ which is clearly true!
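Both the original inequality and the hyperbolic identity behind the last display can be checked numerically (a sketch of mine):

```python
import math

# the inequality itself, on a grid of x > 0
worst = min(math.sqrt(1 + x) - (1 + x / 2 - x * x / 8)
            for x in (k / 100 for k in range(1, 20001)))

# the identity cosh a - (1 + sinh^2(a)/2 - sinh^4(a)/8) = sinh^6(a/2)*(cosh a + 3)
def lhs(a):
    return math.cosh(a) - (1 + math.sinh(a) ** 2 / 2 - math.sinh(a) ** 4 / 8)

def rhs(a):
    return math.sinh(a / 2) ** 6 * (math.cosh(a) + 3)

identity_gap = max(abs(lhs(a) - rhs(a)) for a in (0.3, 1.0, 2.0))
```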
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can I calculate win percentage needed based on risk/reward ratio? Let's say I have calculated the probability of a win in two games to be $60\%$. I have $2$ bets I am looking at. For the first bet I need to risk $165$ to win $85$. For the second bet I need to risk $263$ to win $237$. Risk/Reward for first bet is $1.94$ and for second $1.1$. If I calculate first bet to be $(0.6 * 85) - (0.4 * 165) = -15$ it is losing. If I calculate second bet to be $(0.6 * 237) - (0.4 * 263) = 37$ it is winning. But how do I get the minimum win percentage I need for a bet based of money I am risking? I get confused because the risk is more than reward. All calculations I have seen assume risk is less than reward.
I feel like in your case you would win the amount over the bet: on a win you get back your stake plus the winnings, so $win\implies 165+85$ returned, while the stake of $165$ is paid either way. So, assuming that $p=P(win)$, your expected value should be something like $$\mathbb{E} (bet)= p(165+85)-165>0, $$ equivalently $p\cdot 85-(1-p)\cdot 165>0$. Solving this for $p$ will give you the interval on which the bet has a positive expected value, namely $p>\frac{165}{165+85}=0.66$. Please note however that this is only the probability; people might have different preferences associated to how much they'd like to risk.
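In the net-profit framing that the question's own calculations use, the break-even probability for a bet risking $R$ to win $W$ works out to $R/(R+W)$; a small sketch of mine:

```python
def break_even(risk, reward):
    # smallest p with p*reward - (1 - p)*risk >= 0
    return risk / (risk + reward)

def expected_value(p, risk, reward):
    # expected net profit per bet
    return p * reward - (1 - p) * risk

bet1 = break_even(165, 85)    # need p > 0.66 to profit
bet2 = break_even(263, 237)   # need p > 0.526 to profit
ev1 = expected_value(0.6, 165, 85)    # the question's -15
ev2 = expected_value(0.6, 263, 237)   # the question's +37
```

With a $60\%$ win estimate, the first bet sits below its $66\%$ break-even and the second above its $52.6\%$, matching the question's two calculations.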
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is another more direct proof of this exercise? The exercise states that " If we know that $(A \cup B)-(A \cap B)=(A \cup C)-(A \cap C)$, then $B=C$ ". At first sight, I didn't know how to do that. However, when I saw another exercise later in the book, I found a way to work it out. That exercise states that we can define $+$ and $.$, such that $A+B:=(A \cup B)-(A \cap B), A.B=A \cap B$. Then it asks me to prove that: $1.A+B=B+A$ $2.A+\emptyset=A$ $3.A+A=\emptyset$ $4.A.A=A$ $5.A+(B+C)=(A+B)+C$ These are very easy to prove. And from these properties, I start my proof of the former exercise. Because now I can translate $(A \cup B)-(A \cap B)=(A \cup C)-(A \cap C)$ to $A+B=A+C$. Adding $A$ to both sides, $A+A+B=A+A+C$; using property 5, I can conclude that $(A+A)+B=(A+A)+C$, so by properties 3 and 2, $B=C$. Though I finally worked out the exercise, I think that there must be some simpler and more direct proof, since this exercise is situated before those five properties. Can anyone help with a more direct and more intuitive proof of this?
A direct proof would be to take $x\in B$ and then to deduce $x\in C$, that is, $B\subset C$. If we can do that then we may exchange $B$ and $C$ and reach $B=C$. So, let us assume $x\in B$. Now either $$x\in (A\cup B) - (A\cap B)\tag{1}$$ or $$x\notin (A\cup B) - (A\cap B) \tag{2}$$ Our assumption $x\in B$ certainly implies $x\in A\cup B$, hence in case (1) $x\notin A$. But by the assumption on $C$ we have $$(A\cup C) - (A\cap C) = (A\cup B) - (A\cap B)\ni x$$ and since $x\notin A$, $x$ must belong to $C$. The case (2) can be handled similarly, I leave the fun for you.
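Since the statement only involves finitely many set operations, it can also be brute-force checked over a small universe (a sanity check of mine, not a proof):

```python
from itertools import combinations

def subsets(universe):
    for r in range(len(universe) + 1):
        for combo in combinations(universe, r):
            yield frozenset(combo)

U = list(range(4))

# over all triples (A, B, C): if the symmetric differences with A agree,
# then B and C must be equal
holds = all(B == C
            for A in subsets(U)
            for B in subsets(U)
            for C in subsets(U)
            if (A | B) - (A & B) == (A | C) - (A & C))
```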
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Two estimations in analytic number theory I want to show the following two estimations: $$ \sum_{p|n}\frac{\log p}{p}=O(\log\log n) $$ And $$ \sum_{p|n}\frac1p=O(\log\log\log n) $$ First assume $$n=p_1^{\alpha_1}...p_s^{\alpha_s}$$ then the first can be rewritten as: $$ \sum_{i=1}^s\frac{\log p_i}{p_i} $$ I think I should build some connection between $s$ and $n$. Could I do the following? $$ \sum_{i=1}^s\frac{\log p_i}{p_i}=O(\sum_{i=1}^s\frac{1}{i}) $$ If this is true, why?
Take the first as an example: from what we have done, it suffices to estimate $$ \sum_{i=1}^s\frac{\log p_i}{p_i} $$ Classify $p_1,...,p_s$ as follows: $p_1,...,p_{r_0}\leq\log n$, $p_{r_0+1},...,p_s>\log n$, so: $$ \sum_{i=1}^{r_0}\frac{\log p_i}{p_i}\leq\sum_{p\leq\log n}\frac{\log p}{p}\leq\log\log n+O(1) $$ We then estimate how large $s-r_0$ is: $$ (\log n)^{s-r_0}<p_{r_0+1}...p_s\leq n\to s-r_0\leq\frac{\log n}{\log\log n} $$ Since $t\mapsto\frac{\log t}{t}$ is decreasing for $t\geq e$, each remaining term is at most $\frac{\log\log n}{\log n}$, so $$ \sum_{i=r_0+1}^s\frac{\log p_i}{p_i}\leq\frac{\log\log n}{\log n}\times\frac{\log n}{\log\log n}=1 $$ I.e., $$ \sum_{i=1}^s\frac{\log p_i}{p_i}=O(\log\log n) $$
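The bound is tightest when every prime factor of $n$ is as small as possible, i.e. when $n$ is a primorial; a quick numerical look at $\sum_{p\mid n}\frac{\log p}{p}-\log\log n$ along primorials (my illustration, not a proof):

```python
import math

# primes up to 101 by trial division
primes = [p for p in range(2, 102) if all(p % q for q in range(2, p))]

gaps = []
log_n = 0.0   # log of the primorial built so far
s = 0.0       # sum of (log p)/p over its prime factors
for p in primes:
    log_n += math.log(p)
    s += math.log(p) / p
    gaps.append(s - math.log(log_n))
worst = max(gaps)
```

The gap stays bounded (it is largest for $n=2$ and then decreases), which is what the $O(\log\log n)$ estimate predicts on this extremal family.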
{ "language": "en", "url": "https://math.stackexchange.com/questions/4075896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find integral limits for step functions I have an equation of the form $$f(t) + \int_0^t H(t') dt' = c$$. For now, assume that $f(t)$ is linear, and $H(t)$ is some arbitrary step function (not necessarily just a standard Heaviside step function). Is there any easy/logical way of solving this to find the integral limit $t$? It feels like it should be easy, as the integral of the step function is 'just' a piecewise linear function, but I'm really struggling to do this. While I don't have any specific values in mind, I am most interested in the following: * *The values of the n components of H(t) form a non-negative decreasing sequence. *$f(t) = kt$, where $k > 0$. *$c > 0$. Under these conditions, I think there should be a unique solution (though would be interested to hear if I am wrong in assuming this!).
For any function $h$ (not necessarily step), $$f(t)+\int_0^t h(s)\,ds=c$$ is $$f(t)+H(t)-H(0)=c$$ where $H$ is an antiderivative of $h$. To obtain $t$, you solve this ordinary, univariate equation. If $h$ is a step function, its antiderivative is piecewise linear and continuous. You can solve the equation in every piece independently (and check that the solution(s) belong to the definition interval of the piece). If $f$ is $kt$, then $t=0$ is a solution exactly when $c=0$. Under your conditions ($k>0$, $h\ge 0$), the left-hand side is strictly increasing in $t$, so there is at most one solution; more than one solution could only occur if $k+h$ changed sign, i.e. if the values of $h$ were first larger, then smaller, than $-k$.
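Under the question's assumptions ($f(t)=kt$ with $k>0$ and non-negative step values), each piece gives a linear equation; here is a small sketch (the data layout, breakpoints plus per-interval values, is my own invention, since the question leaves $H$ abstract):

```python
def solve(k, c, breakpoints, values):
    # Solve k*t + integral_0^t h = c, where h equals values[i] on
    # the interval [breakpoints[i], breakpoints[i+1]).
    acc = 0.0  # integral of h from 0 up to breakpoints[i]
    for i, v in enumerate(values):
        a, b = breakpoints[i], breakpoints[i + 1]
        # on [a, b): k*t + acc + v*(t - a) = c
        if k + v != 0:
            t = (c - acc + v * a) / (k + v)
            if a <= t < b:
                return t
        acc += v * (b - a)
    return None

# h = 3 on [0,1), 1 on [1,2), 0.5 on [2,5); k = 2, c = 6
t_star = solve(2.0, 6.0, [0.0, 1.0, 2.0, 5.0], [3.0, 1.0, 0.5])
```

With $k>0$ and $h\ge 0$ the left-hand side is strictly increasing, so at most one piece can contain a solution, matching the uniqueness the question expects.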
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find polynomials $M_1(x)$ and $M_2(x)$ such that $(x+1)^2M_1(x) + (x^2 + x + 1)M_2(x) = 1$ I have been trying to solve this with no success. Could you suggest me a solution?
Here is an alternate basic approach based upon coefficient comparison of polynomials. We are looking for polynomials $M_1(x), M_2(x)$ so that the identity \begin{align*} \left(x^2+2x+1\right)M_1(x)+\left(x^2+x+1\right)M_2(x)=1\tag{1} \end{align*} is valid. We start with some observations: * *Since the right-hand side of (1) is a polynomial of degree zero, the left-hand side has also to be a polynomial of degree zero. *We are looking for polynomials $M_1$ and $M_2$ which have smallest possible degree. We observe setting $M_1(x)=1$ and $M_2(x)=-1$ is not appropriate since we get rid of the square terms but not of the linear terms. *Ansatz: We start with linear polynomials \begin{align*} M_1(x)=ax+b\qquad M_2(x)=cx+d \end{align*} with unknown coefficients $a,b,c,d\in\mathbb{R}$, multiply out and make a comparison of coefficients of LHS and RHS of (1). We denote with $[x^n]$ the coefficient of $x^n$ of a polynomial and obtain from (1) \begin{align*} [x^3]:\qquad\qquad &a+c&=0\\ [x^2]:\qquad\qquad&(2a+b)+(c+d)&=0\\ [x^1]:\qquad\qquad&(a+2b)+(c+d)&=0\\ [x^0]:\qquad\qquad&b+d&=1\\ \end{align*} Coefficient comparison of $[x^3]$ gives: $c=-a$. Putting this in the other three equations results in \begin{align*} a+b+d&=0\\ 2b+d&=0\\ b+d&=1\\ \end{align*} from which we easily find $a=-1, b=-1, c=1, d=2$. We finally obtain \begin{align*} \color{blue}{M_1(x)=-x-1\quad M_2(x)=x+2} \end{align*} in accordance with the other given answer.
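The result is easy to double-check by multiplying out coefficient lists (constant term first); a small self-contained sketch:

```python
def polymul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

f  = [1, 2, 1]   # (x+1)^2 = 1 + 2x + x^2
g  = [1, 1, 1]   # x^2 + x + 1
m1 = [-1, -1]    # M1(x) = -x - 1
m2 = [2, 1]      # M2(x) = x + 2
total = polyadd(polymul(f, m1), polymul(g, m2))
print(total)     # -> [1, 0, 0, 0], i.e. the constant polynomial 1
```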
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the free abelian group on $\mathbb{N}$? I learnt that the free abelian group on a set $X$ is the group $(\operatorname{Hom}(X, \mathbb{Z}), +)$. Okay, this sounds all right, but I also know the famous result that $\mathbb{Z}^{\mathbb{N}}$ is not free abelian. Since $\mathbb{Z}^{\mathbb{N}}=\operatorname{Hom}({\mathbb{N}, \mathbb{Z}})$, I am pretty confused and I have two questions: * *Are the concepts of the "free abelian group on some set" and that of "free abelian group" (this is the one where we take the definition as that of a free $\mathbb{Z}$-module) two different things? *What is the free abelian group on $\mathbb{N}$, since it doesn't seem that $\operatorname{Hom}({\mathbb{N}, \mathbb{Z}})$ is what we are looking for?
The free Abelian group on a set $X$ is not given by the set of functions from $X$ to $\mathbb{Z}$. It is given by the subset of those functions that take a nonzero value on only finitely many inputs. If you re-do your investigations with that change, everything should make a lot more sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Orthogonality Relations Exercise 2.19, Brezis' Book Functional Analysis I am studying some topics of Brezis's book and I am trying to solve this exercise: Let $E$ be a Banach space and let $A\,\colon\, D(A)\subset E \rightarrow E^*$ be a densely defined unbounded operator. Assume that there exists a constant $C$ such that $$ \langle Au, u \rangle \geq -C \|Au\|^2\quad \forall u\in D(A). $$ Prove that $N(A)\subset N(A^*)$. I tried some approaches, but I couldn't solve it. In fact, I looked up the hint in the back of the book, which says: Recall that $N(A^*)=R(A)^{\perp}$. Let $u\in N(A)$ and $v\in D(A)$; we have $$ \langle A(u+tv),u+tv\rangle \geq -C \|A(u+tv)\|^2\quad \forall t\in\mathbb{R}, $$ which implies that $\langle A v,u\rangle=0$. Thus $N(A)\subset R(A)^\perp$. Could anyone explain to me why this inequality implies the result?
$R(A)^{\perp} \subset N(A^{*})$: If $ \langle y, Ax \rangle=0$ for all $x \in D(A)$ then $ \langle A^{*}y, x \rangle=0$ for all $x \in D(A)$. Since $D(A)$ is dense it follows that $A^{*}y=0$ so $y \in N(A^{*})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Volume integral $\int_{0}^{2\pi}\int_{0}^{1}\sqrt{r^2-2r\cos(t)+1}\>r dr dt$ I need to calculate the volume of the solid $$E=\{(x,y,z)\in\mathbb{R}^{3}\mid x^2+y^2-2y\leq0,0\leq z\leq\sqrt{(x-1)^2+(y-1)^2}\}$$ I know $$x^2+y^2-2y=0$$ can be rewritten in polar coordinates $$x=r\cos(t),\quad y-1=r\sin(t)$$ with $0\leq r\leq1,0\leq t\leq2\pi$. Then the volume integral for the solid is $$\int_{0}^{2\pi}\int_{0}^{1}\sqrt{r^2-2r\cos(t)+1}\>r dr \space dt$$ But I have no idea how to evaluate this integral.
While I am late, I wanted to share this answer. This is the volume inside the cylinder $x^2+(y-1)^2 \leq 1$, above the plane $z = 0$ and below the surface of the cone $z = \sqrt{(x-1)^2+(y-1)^2}$. As you can see, the vertex of the cone $(1, 1, 0)$ is on the surface of the cylinder. For the integral, it is easier to parametrize using coordinates centered at the cone's axis: $x = 1 + r\cos \theta, y = 1 + r\sin \theta, z = r, 0 \leq \theta \leq 2\pi$. The equation of the cylinder then becomes $r \leq -2 \cos \theta$, which requires $\frac{\pi}{2} \leq \theta \leq \frac{3\pi}{2}$. $V = \displaystyle \int_{\pi/2}^{3\pi/2} \int_0^{-2\cos\theta} r^2 \ dr \ d\theta = \frac{32}{9}$
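The value can be checked numerically: the inner integral is $\int_0^{-2\cos\theta} r^2\,dr = \tfrac{1}{3}(-2\cos\theta)^3$, and a midpoint rule over $\theta$ reproduces $\frac{32}{9}\approx 3.5556$ (the grid size is an arbitrary choice):

```python
from math import cos, pi

N = 20000
a, b = pi / 2, 3 * pi / 2
h = (b - a) / N
# midpoint rule applied to the closed-form inner integral (-2*cos(t))**3 / 3
V = sum((-2.0 * cos(a + (i + 0.5) * h)) ** 3 / 3.0 for i in range(N)) * h
print(V, 32 / 9)
```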
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
GAP checking for symmetric group In GAP one can use StructureDescription(G) to find out that $G$ is isomorphic to (say) $S_{120}.$ ($G$ might be defined by a collection of permutations of $140$ elements). One can also ask IsSymmetricGroup(G) which is a lot faster but just returns a boolean, so if the answer is true, we don't know if it is isomorphic to $S_{120}$ or $S_{7}.$ Is there some trick to get the degree of the permutation rep? StructureDescription is much slower.
As noted, StructureDescription (which is a toy routine for small examples) will perform horribly badly on this question. But there are two special functions for this task: * *IsNaturalSymmetricGroup takes a permutation group and checks whether it is the symmetric group on its MovedPoints. (That is, it returns true on the group $\langle (4,5,6),(4,5)\rangle$.) The algorithm used will be much faster than even computing the order and should easily work for degrees into the 10000s. The degree is then given by NrMovedPoints. *IsSymmetricGroup in contrast tests whether a group (of arbitrary representation) is isomorphic to a symmetric group. If so, SymmetricDegree will return the degree of this symmetric group. The algorithm calculates the order and then a composition series and tests (with special treatment for $n=6$) that the group has a normal subgroup of index 2 isomorphic to $A_n$ (which, apart from $A_8$ can be identified from its order once we know it is simple) on which the group induces an outer automorphism.
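For readers without GAP, the idea behind the natural-symmetric test can be imitated in a few lines of Python for tiny degrees: a group of permutations moving $n$ points is the full symmetric group on them exactly when its order is $n!$. This naive closure is nothing like GAP's stabilizer-chain machinery, just an illustration:

```python
from math import factorial

def compose(p, q):
    # (p∘q)(i) = p[q[i]], permutations stored as tuples of images
    return tuple(p[i] for i in q)

def closure(gens):
    # naive breadth-first closure of the generated group
    n = len(gens[0])
    elems = {tuple(range(n))}
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                e = compose(h, g)
                if e not in elems:
                    elems.add(e)
                    new.append(e)
        frontier = new
    return elems

def moved_points(elems):
    n = len(next(iter(elems)))
    return {i for i in range(n) if any(g[i] != i for g in elems)}

# S_4 acting on 6 points (two of them fixed), in the spirit of GAP's
# <(4,5,6),(4,5)> example of a group not moving all points:
t = (1, 0, 2, 3, 4, 5)  # transposition (0 1)
c = (1, 2, 3, 0, 4, 5)  # 4-cycle (0 1 2 3)
G = closure([t, c])
m = moved_points(G)
print(len(G), len(m), len(G) == factorial(len(m)))  # -> 24 4 True
```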
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the Euler-Lagrange equations for the functional and show that the Euler-Lagrange equations have a solution Consider the functional $J_2$ of two functions $y(x)$ and $z(x)$ given by $$J_2[y,z] = \int_{x_0}^{x_1}\sqrt{{1+(z')^{2}+z^2(y')^2}\over{C^2+z^2}}\,dx$$ where $y' = dy/dx, z'= dz/dx$, and $C$ is a non-zero constant. * *Calculate the Euler-Lagrange equations for the functional $J_{2}$ *By substitution or otherwise, show that the Euler-Lagrange equations in (1) have a solution of the form $$y(x) = Ax + B, z(x) = D$$ where $A, B$ and $D$ are appropriate constants and $D$ does not equal zero. For part (1) I have worked out that there are two Euler-Lagrange equations. The first is $$\frac{d}{dx}\frac{\partial L}{\partial y'}-\frac{\partial L}{\partial y}=0, \quad\frac{\partial L}{\partial y}=0$$ and so $$0=\frac{d}{dx} \left(\frac{z^2y'}{(z^2+C^2)^\frac{1}{2}(z^2(y')^2+(z')^2+1)^{\frac{1}{2}}} \right)$$ The second one is \begin{align} \frac{d}{dx}\frac{\partial L}{\partial z'}-\frac{\partial L}{\partial z} &= \frac{d}{dx} \left(\frac{z'}{(C^2+z^2)^{\frac{1}{2}}((z')^2+(y')^2z^2+1)^\frac{1}{2}} \right) - \frac{\partial L}{\partial z} \\ &= \frac{d}{dx} \left(\frac{z'}{(C^2+z^2)^\frac{1}{2}((z')^2+(y')^2z^2+1)^\frac{1}{2}} \right) - \frac{z(C^2(y')^2-(z')^2-1)}{(z^2+C^2)^\frac{3}{2}((y')^2z^2+(z')^2+1)^\frac{1}{2}} \\ &= 0 \end{align} I am unsure how to carry out the $\frac{d}{dx}$ in part (1), and I would appreciate any help with part (2). Feel free to adapt the tags if you think any don't fit.
* *OP's Lagrangian $$ L ~=~ \sqrt{\frac{1+\dot{z}^2+z^2\dot{y}^2}{C^2+z^2}} \tag{1}$$ has momenta $$ p_y~:=~\frac{\partial L}{\partial \dot{y}}~=~\frac{z^2\dot{y}}{\sqrt{(C^2+z^2)(1+\dot{z}^2+z^2\dot{y}^2)}}, \tag{2}$$ $$ p_z~:=~\frac{\partial L}{\partial \dot{z}}~=~\frac{\dot{z}}{\sqrt{(C^2+z^2)(1+\dot{z}^2+z^2\dot{y}^2)}}, \tag{3}$$ and energy $$E~:=~\dot{y}p_y+\dot{z}p_z-L~=~\frac{-1}{\sqrt{(C^2+z^2)(1+\dot{z}^2+z^2\dot{y}^2)}}~\neq~ 0.\tag{4}$$ *Since $L$ does not depend explicitly on $y$ and $x$, we have 2 constants of motion $p_y$ and $E$, cf. Noether's theorem. *Eq. (2) simplifies to $$\frac{p_y}{E}~=~ -z^2\dot{y}.\tag{5}$$ Eq. (4) simplifies to a first-order ODE $$\frac{1}{E^2(C^2+z^2)}~=~1+\dot{z}^2+\frac{p_y^2}{E^2z^2}\tag{6}$$ for $z\neq 0$. *Eqs. (5) & (6) are 2 first integrals to the 2 EL equations. *Clearly the Ansatz $$y(x) ~=~ Ax + B,\qquad z(x) ~=~ D \tag{7}$$ is a solution to eqs. (5) & (6) for appropriate constants $A$, $B$ and $D$.
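One can also verify part 2 directly on the Lagrangian: along $y=Ax+B$, $z=D$ we have $\dot z=0$, so the $z$-equation reduces to $\partial L/\partial z=0$, which for $D\neq 0$ forces $A^2C^2=1$ (the $y$-equation holds automatically, since $p_y$ is constant along the path). A quick numerical sketch with arbitrarily chosen constants:

```python
from math import sqrt

C, D = 2.0, 1.5
A = 1.0 / C  # the "appropriate" slope: A**2 * C**2 == 1

def L(z, zdot, ydot):
    return sqrt((1 + zdot ** 2 + z ** 2 * ydot ** 2) / (C ** 2 + z ** 2))

# on the Ansatz, dL/dz evaluated at (z, zdot, ydot) = (D, 0, A) must vanish:
h = 1e-6
dLdz = (L(D + h, 0.0, A) - L(D - h, 0.0, A)) / (2 * h)
print(dLdz)  # ~ 0 (in fact L is constant in z when A**2 * C**2 == 1)
```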
{ "language": "en", "url": "https://math.stackexchange.com/questions/4076976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If A is an n×n invertible symmetric matrix, then $(A^{-1})^k$ is symmetric, for any positive integer k. I have this true/false question, with a proof required: If $A$ is an $n×n$ invertible symmetric matrix, then $(A^{-1})^k$ is symmetric, for any positive integer $k$.
Firstly, note that the inverse of a symmetric matrix is also symmetric. I.e. if $A$ is an invertible symmetric matrix, then $A^{-1} = {A^{-1}}^T$. (Is the inverse of a symmetric matrix also symmetric?) Thus, the question can be simplified to: If a matrix $A$ is invertible and symmetric, are all of its powers $A^k$ also symmetric? The answer to this question is yes as well. You can see this through induction: For the base case, we have $AA = A^T A^T = (AA)^T$. Now, assume $A^k = (A^k)^T$. Then $A^{k+1} = A^k A = (A^k)^T A^T = (A A^k)^T = (A^{k+1})^T$. Q.E.D. Thus, the statement "If $A$ is an $n×n$ invertible symmetric matrix, then $(A^{-1})^k$ is symmetric, for any positive integer $k$." is true.
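A brute-force spot check of the statement, using exact rational arithmetic on a small symmetric matrix (the 2×2 inverse helper and the sample matrix are just illustrative choices):

```python
from fractions import Fraction

def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    n, m = len(A), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(m)]
            for i in range(n)]

def inv2(M):
    # inverse of an invertible 2x2 matrix, kept exact with Fractions
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, 1], [1, 2]]  # symmetric, det = 3, so invertible
Ainv = inv2(A)
P = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
for k in range(1, 6):
    P = matmul(P, Ainv)  # P = (A^{-1})^k
    assert P == transpose(P), f"power {k} not symmetric"
print("(A^-1)^k symmetric for k = 1..5")
```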
{ "language": "en", "url": "https://math.stackexchange.com/questions/4077328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there some graphical way to (intuitively) understand the reverse triangle inequality in the $\ell^p$ spaces? Given $(\eta_i), (\xi_i) \in \ell^p$, for some $p \geq 1$ in $\mathbb{R}$, is there a graphical way to see the inequality $$\left|(\sum_{i=1}^{\infty}|\eta_i|^p)^{1/p} - (\sum_{i=1}^{\infty}|\xi_i|^p)^{1/p}\right| \leq (\sum_{i=1}^{\infty}|\xi_i-\eta_i|^p)^{1/p}\:\:?$$
Triangle inequality? Indeed, $$\big|\|y\| - \|x\| \big| \le \|x-y\|$$ in any normed space. This follows from the usual triangle inequality. Proof. First, $y + (x-y) = x$, so $$ \|x\| \le \|y\| + \|x-y\| \tag1$$ Next, $x + (y-x) = y$, so $$ \|y\| \le \|x\| + \|y-x\| \tag2$$ Now from $(1)$ we get $$ \|x\|-\|y\| \le \|x-y\| $$ and from $(2)$, since $\|y-x\|=\|x-y\|$, we get $$ \|y\| - \|x\| \le \|x-y\| \\ -\big(\|x\|-\|y\|\big) \le \|x-y\| $$ But from $$ \|x\|-\|y\| \le \|x-y\|\quad\text{and}\quad -\big(\|x\|-\|y\|\big) \le \|x-y\| $$ we conclude $$ \big|\|x\|-\|y\|\big| \le \|x-y\| $$
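Not graphical, but the inequality is also cheap to spot-check numerically on finite vectors (which sit inside $\ell^p$ after padding with zeros); the sample vectors and exponents here are arbitrary:

```python
def lp_norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

x = [3.0, -1.0, 0.5, 2.0]
y = [1.0, 4.0, -2.0, 0.0]
d = [a - b for a, b in zip(x, y)]
for p in (1, 2, 3, 7):
    lhs = abs(lp_norm(x, p) - lp_norm(y, p))
    assert lhs <= lp_norm(d, p) + 1e-12
print("reverse triangle inequality holds at the sampled exponents")
```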
{ "language": "en", "url": "https://math.stackexchange.com/questions/4077530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a function that maps each element of $\mathbb{Q}$ between 0 and 1 to $\mathbb{N}$? Kepler devised a method to enumerate the rational numbers between 0 and 1. This clip from his Harmonices Mundi illustrates the method. It works like this: Let $m=n=1$. Then $\tfrac{m}{n}$ is the first rational number. The next two are $\tfrac{m}{m+n}$ and $\tfrac{n}{m+n}$. These last two rationals become $m$ and $n$ for the next iteration. Wash, rinse, repeat for eternity. Done. If we read Kepler's diagram left to right; down to up, the first 10 rationals are: Suppose we want to know where, say, $\tfrac{6067}{9000}$ appears on that list. We could systematically run through Kepler's tree. Eventually the process will terminate. But is there a better way to do this? Is there some function that maps each element of $\mathbb{Q}$ between 0 and 1 to $\mathbb{N}$? EDIT: I'd like to specify that I'd like to know what that function is. I'd like to be able to plug in, say, $\tfrac{3}{8}$, and get the number $10$.
Yes. Just define $f(\frac{p}{q})=42$. But maybe you want an injective map. Then you can use $f(\frac{p}{q})=q^2 + p$. Surjective: $f(p/q)=q$ Bijective is more difficult. You've presented one possibility already, but it sounds like you're looking for functions that are easier to compute. But maybe Kepler's function is easier to compute than you think. Let's say we're given the fraction (in lowest terms) $p/q$. We know that $p<q$. So $r = q-p$ is a positive number. Now one of $p/r$ or $r/p$ will be less than 1, whichever one that is, we rename it to $p'/q'$. We also know that $p/q = p / (p+r)$. So in the binary tree above, $p'/q'$ will be the fraction just above $p/q$. Then we repeat, starting from $p'/q'$ this time. A little bit of extra bookkeeping will tell us exactly which branch of the binary tree of fractions we're on, and from that we can compute our natural number output. If you're observant, you'll notice that this algorithm is pretty much just Euclid's algorithm with extra steps. And Euclid's algorithm is fast.
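The walk-up described here is straightforward to code. One caveat: the question's diagram fixes a reading order for each level that the text alone doesn't pin down (it sends $3/8$ to $10$), so the sketch below just picks one concrete convention, breadth-first with the "left" child $a/(a+b)$ listed before the "right" child $b/(a+b)$, under which $3/8 \mapsto 15$; only this bookkeeping differs, the Euclid-style walk is the same:

```python
from fractions import Fraction
from collections import deque

def kepler_bfs(levels):
    # breadth-first listing: 1/1 first, then the binary tree below 1/2
    out = [Fraction(1, 1)]
    q = deque([(1, 2)])
    while q and len(out) < 2 ** levels:
        a, b = q.popleft()
        out.append(Fraction(a, b))
        q.append((a, a + b))  # "left" child  a/(a+b)
        q.append((b, a + b))  # "right" child b/(a+b)
    return out

def kepler_index(frac):
    # walk up the tree (essentially Euclid's algorithm), then use
    # heap numbering 1, 2, 3, ... for the tree rooted at 1/2
    p, q = frac.numerator, frac.denominator
    if (p, q) == (1, 1):
        return 1
    bits = []
    while (p, q) != (1, 2):
        r = q - p
        if p < r:   # p/q = a/(a+b) with (a, b) = (p, r): left child
            bits.append(0); q = r
        else:       # p/q = b/(a+b) with (a, b) = (r, p): right child
            bits.append(1); p, q = r, p
    heap = 1
    for bit in reversed(bits):
        heap = 2 * heap + bit
    return heap + 1  # shift by one for the leading 1/1

fracs = kepler_bfs(5)
print([f"{f} -> {kepler_index(f)}" for f in fracs[:8]])
```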
{ "language": "en", "url": "https://math.stackexchange.com/questions/4077662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove by Induction -Inductive Step problem The question is prove by induction that $2^n\le n!$ for all $n\ge 4$ So far I have completed the base case so, Base case $n=4$ $ 2^4\le4!$ $16\le24$ Therefore the base case holds Inductive Step Assume F(n) is true $2^{n+1}\le (n+1)!$ $2^{n+1}\le n!(n+1)$ $2\cdot 2^n \le n!(n+1)$ <- I don’t know what do after this to get New Approach $2\cdot n!\le n!(n+1)$ $ by F(n) $ $2^{n+1}\le (n+1)!$
When $n\geqslant 4$, by assumption we have $$2^n \leqslant n! $$ Let's multiply both sides by $2$, which gives $$ 2^{n+1} \leqslant 2 \cdot n!\quad(1)$$ Now, knowing that for $n\geqslant 4$ we have $2 \lt n+1$, we obtain $$2 \cdot n! \lt (n+1) \cdot n! = (n+1)! \quad(2)$$ $(1)$ and $(2)$ together give $ 2^{n+1} \lt (n+1)!$.
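Before (or after) running the induction, the claim is cheap to spot-check by brute force:

```python
from math import factorial

assert 2 ** 3 > factorial(3)  # the bound genuinely fails below n = 4
assert all(2 ** n <= factorial(n) for n in range(4, 30))
print("2^n <= n! verified for 4 <= n < 30")
```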
{ "language": "en", "url": "https://math.stackexchange.com/questions/4078986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Every non-generic point in a curve is closed? Let $X$ be an integral scheme of dimension 1. If $X$ is affine, then it is clear that every non-generic point is closed. I wonder if this is true in general. If not, is it true if we suppose that $X$ is a curve? (A variety of dimension 1.)
If $x\in\overline{\{y\}}$, we say $x$ is a specialization of $y$ or $y$ is a generalization of $x$ and denote this $y\leadsto x$. Observe that in a scheme, a chain of specializations is in one-to-one correspondence with a chain of inclusions of irreducible closed subsets: in one direction, we take the closure of the points, while in the other, we take the generic points of the irreducible closed subsets. Therefore the dimension of a scheme is exactly the number of specializations in a maximal chain of nontrivial specializations. Now suppose $X$ is irreducible of dimension one with generic point $\eta_X$. If $\overline{\{x\}}=\{x\}$, $x$ is closed. If $\overline{\{x\}}\neq\{x\}$, then let $y$ be some element distinct from $x$ in $\overline{\{x\}}$, and we get a chain of specializations $\eta_X\leadsto x\leadsto y$. As $x\neq y$, we must have $x=\eta_X$ otherwise we contradict the fact that $X$ has dimension one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4079383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Using a 'Similarity Variable' to transform a PDE into an ODE? I have a PDE: $$\frac{\partial y}{\partial t}=\alpha\frac{\partial^2y}{\partial x^2}\tag{1}$$ So we're looking for a function: $$y=f(x,t)$$ The following substitution with a Similarity Variable then transforms the PDE into a simple ODE: $$z=\frac{x}{2\sqrt{\alpha t}}\tag{2}$$ $$\frac{\mathrm{d}^2y(z)}{\mathrm{d}z^2}=-2z \frac{\mathrm{d}y(z)}{\mathrm{d}z}\tag{3}$$ The latter solves easily to: $$y=c_1\int e^{-z^2}\mathrm{d}z+c_2$$ The trouble is I can't seem to carry out this substitution to get from $(1)$ to $(3)$, using $(2)$. I thought of extracting $x$ and $t$ from $(2)$ and differentiating them as $\mathrm{d}t$ and $\mathrm{d}x^2$ but couldn't make that work. So how to make this substitution work?
I don't know anything about similarity transformations, but this looks pretty straightforward: By the chain rule, $$\frac{\partial y}{\partial t} = \frac{\partial y}{\partial z} \frac{\partial z}{\partial t} = -\frac{\alpha x}{4(\alpha t)^{3/2}}\frac{\partial y}{\partial z}=-\frac{2\alpha z^3}{x^2}\frac{\partial y}{\partial z}$$ $$\frac{\partial y}{\partial x}=\frac{\partial y}{\partial z}\frac{\partial z}{\partial x}$$ $$\frac{\partial^2y}{\partial x^2} = \frac{\partial}{\partial x}\left(\frac{\partial y}{\partial z}\right)\frac{\partial z}{\partial x}+\frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right)\frac{\partial y}{\partial z} = \frac{1}{2\sqrt{\alpha t}}\frac{\partial^2 y}{\partial z^2}\frac{\partial z}{\partial x} = \frac{1}{4\alpha t}\frac{\partial^2 y}{\partial z^2}=\frac{z^2}{x^2}\frac{\partial^2 y}{\partial z^2}$$ Hence, $$-\frac{2\alpha z^3}{x^2}\frac{\partial y}{\partial z} = \alpha \frac{z^2}{x^2}\frac{\partial^2 y}{\partial z^2}$$ Rearranging, $$\frac{d^2 y}{d z^2}=-2z\frac{d y}{d z}$$
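As a cross-check, $y(z)=\operatorname{erf}(z)$ (one instance of the general solution $c_1\int e^{-z^2}\,dz + c_2$) should satisfy the original PDE $y_t=\alpha y_{xx}$; finite differences confirm this at a few arbitrarily chosen points:

```python
from math import erf, sqrt

alpha = 0.7  # arbitrary positive diffusivity

def u(x, t):
    # self-similar solution: y(z) = erf(z) with z = x / (2*sqrt(alpha*t))
    return erf(x / (2.0 * sqrt(alpha * t)))

def u_t(x, t, h=1e-5):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def u_xx(x, t, h=1e-4):
    return (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2

for x in (0.3, 1.0, 2.5):
    for t in (0.5, 2.0):
        assert abs(u_t(x, t) - alpha * u_xx(x, t)) < 1e-4
print("u_t = alpha * u_xx holds at the sampled points")
```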
{ "language": "en", "url": "https://math.stackexchange.com/questions/4079557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Upper bound for $\sum_{i = 0}^{k-1} {n \choose i} (1 - \varepsilon)^i\varepsilon^{n-i}$ What are tight upper and lower bounds for the following expression, where $0 < \varepsilon < 1$ and $1\leq k \leq n$? $\sum_{i = 0}^{k-1} {n \choose i} (1 - \varepsilon)^i\varepsilon^{n-i}$
For a simpler solution, use the bound $\binom{n}{i} \leq \frac{n^{i}}{i!}$; then the summand is at most $$ \varepsilon^n \frac{n^{i}(1-\varepsilon)^{i}\varepsilon^{-i}}{i!} $$ and the upper bound can be found by extending the sum to the full Taylor series of the exponential function: $$ S_{n} \leq \varepsilon^{n}\exp\bigg(\frac{n(1-\varepsilon)}{\varepsilon}\bigg) $$
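Concretely, with the summand as written in the question, factoring out $\varepsilon^{n}$ and summing the exponential series gives $S_n \leq \varepsilon^{n}\exp\big(\frac{n(1-\varepsilon)}{\varepsilon}\big)$; a quick numeric spot check of that bound at a few arbitrary parameter choices:

```python
from math import comb, exp

def tail(n, k, eps):
    return sum(comb(n, i) * (1 - eps) ** i * eps ** (n - i) for i in range(k))

def bound(n, eps):
    # eps**n * exp(n*(1-eps)/eps), from binom(n, i) <= n**i / i!
    return eps ** n * exp(n * (1 - eps) / eps)

for n, k, eps in [(20, 5, 0.6), (30, 10, 0.8), (15, 15, 0.5)]:
    assert tail(n, k, eps) <= bound(n, eps)
print("bound holds at the sampled parameters")
```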
{ "language": "en", "url": "https://math.stackexchange.com/questions/4079685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Level curves of a joint normal distribution I need to plot the level curves of a joint distribution of two standard normal distributions. I'm trying to prove that the level curves are circles. $$p_{X,Y}\left(x,y\right)=\frac{1}{2\pi}\exp\left(\frac{-x^{2}-y^{2}}{2}\right)$$ $$\frac{1}{2\pi}\exp\left(\frac{-x^{2}-y^{2}}{2}\right)=c\Leftrightarrow$$ $$\Leftrightarrow\exp\left(\frac{-x^{2}-y^{2}}{2}\right)=2\pi c\Leftrightarrow$$ $$\Leftrightarrow\ln\left(\exp\left(\frac{-x^{2}-y^{2}}{2}\right)\right)=\ln\left(2\pi c\right)\Leftrightarrow$$ $$\Leftrightarrow\frac{-x^{2}-y^{2}}{2}=\ln\left(2\pi c\right)\Leftrightarrow$$ $$\Leftrightarrow x^{2}+y^{2}=-2\ln\left(2\pi c\right)$$ Is this a correct proof that the level curves are circles here? I'm a bit confused about the minus in the last equation.
Looks good to me. Regarding the minus: when you wrote $$\frac{1}{2\pi}\exp\left(\frac{-x^{2}-y^{2}}{2}\right)=c,$$ then you are implicitly assuming that such a $c$ exists. In other words you can not put a $c$ there which is larger than the maximum value of the left side, i.e. of $\frac{1}{2\pi}\exp\left(\frac{-x^{2}-y^{2}}{2}\right)$. Finally, the maximum of this expression is $1/{2\pi}$. Therefore $0<c<\frac{1}{2\pi}$, which gives $-2\ln\left(2\pi c\right)>0$. Hope this clarifies your doubt.
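To make the constraint concrete: for any admissible level $0<c<\frac{1}{2\pi}$, every point on the circle of squared radius $-2\ln(2\pi c)$ really does sit at density $c$; a short numeric check (the particular $c$ and angles are arbitrary):

```python
from math import exp, log, pi, cos, sin

def pdf(x, y):
    return exp((-x * x - y * y) / 2.0) / (2.0 * pi)

c = 0.05  # any value in (0, 1/(2*pi)) ≈ (0, 0.159)
r2 = -2.0 * log(2.0 * pi * c)  # squared radius of the level curve
assert r2 > 0
r = r2 ** 0.5
for t in (0.0, 1.0, 2.5, 4.0):
    assert abs(pdf(r * cos(t), r * sin(t)) - c) < 1e-12
print("level set of", c, "is the circle of radius", round(r, 4))
```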
{ "language": "en", "url": "https://math.stackexchange.com/questions/4079836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Incorrect exam question? A package moving up an inclined plane, find range of values of coefficient of friction The following question was in a recent exam: A package P of weight $10$ N is moving up an inclined plane under the action of a horizontal force of magnitude $20$ N. The plane is inclined at angle $\alpha$ to the horizontal, where $\tan{\alpha}=\frac{3}{4}$. Package P is modelled as a particle. Find the range of values $\mu$, the coefficient of friction. To answer this I found $R=20$N and the force acting up the plane which is $10$ N. From here I feel like there is no way to continue as I do not know acceleration. The given answer is $\mu \leq \frac{1}{2} $, as they use $20\mu \leq 10. $ This assumes acceleration is either $0$ or positive. I argue there is nothing in the question that eliminates the possibility that acceleration could be negative and that the particle is slowing down, in which case $\mu$ could be greater than $\frac{1}{2}$. Can someone confirm this, or explain why I am wrong? Many thanks.
Supposedly, in this case it is to be assumed that the force pushed the particle from rest, despite the fact that under the ideal conditions outlined the force would not be sufficient to overcome friction if $\mu=\frac{1}{2}$. So then we can assume the acceleration is 0 or positive in the direction of movement, thus getting the desired answer. Applied math. Yuck.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How one makes a nice approximation using Taylor series How does one approximate expressions like $2^{0.6}$ or $3^{0.7}$? For the first one I know the binomial expansion can help when the $x$ we need is very small, but is it better to do a Taylor expansion around $\sqrt{2}$ than a binomial expansion if we just need up to three decimal places? (I know the log values needed for a Taylor expansion, so basically: is it better to do a Taylor expansion around $\sqrt{2}$ or some better value, or is the binomial expansion best, if we need up to 3 decimal places?)
Write $2^{0.6}$ as $\exp(0.6 \ln 2)$ where $\exp(x) = e^x$, and where you know $\ln 2$ to three decimal places. Then since $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + O(x^5)$, and for $0<x<1$ the Lagrange remainder after degree $n$ is at most a small constant times the first omitted term, we want $\frac{(0.6 \ln 2)^n}{n!} < 0.001$. Some trial and error gives $n > 4$, so we keep terms up to $x^5$: $$2^{0.6} \approx1 + 0.6 \ln 2 + \frac{1}{2!}(0.6 \ln 2)^2 + \frac{1}{3!}(0.6 \ln 2)^3 + \frac{1}{4!}(0.6 \ln 2)^4 + \frac{1}{5!}(0.6 \ln 2)^5$$ and the approximation with $\ln 2 \approx 0.693$ gives $1.51558$, whereas the actual value of $2^{0.6}$ is around $1.51572$, an absolute error of around $1.4 \times 10^{-4}$, and a relative error of around $9.3 \times 10^{-5}$.
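The arithmetic above can be reproduced in a few lines, using the same three-decimal value $\ln 2 \approx 0.693$:

```python
from math import factorial

ln2 = 0.693  # three-decimal value of ln 2, as in the text
x = 0.6 * ln2
approx = sum(x ** n / factorial(n) for n in range(6))  # terms through x**5
exact = 2 ** 0.6
print(round(approx, 5), round(exact, 5))  # -> 1.51558 1.51572
```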
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cartwright's proof that $\pi$ is irrational induction part I am currently proving that $\pi$ is irrational, and I am basing from Cartwright's proof. I let \begin{equation*} A_k(x) = \int_{-1}^{1} (1-y^2)^k \cos(xy)dy \end{equation*} where $k \in \{0\} \cup \mathbb{Z}^+$. I then used integration by parts and got \begin{equation*} x^2A_k(x) = 2k(2k-1)A_{k-1}(x) - 4k(k-1)A_{k-2}(x), k \geq 2. \end{equation*} Next, I let $B_k(x) = x^{2k+1}A_k(x)$ for all nonnegative integer $k$. I have already shown \begin{align*} B_k(x) &= 2k(2k-1)B_{k-1}(x) - 4k(k-1)x^2B_{k-2}(x)\\ B_0(x) &= 2\sin x\\ B_1(x) &= 4\sin x -4x\cos x\\ \end{align*} I am struggling to show \begin{equation} B_k(x) = k!(M_k(x)\sin x + N_k(x)\cos x) \end{equation} where $M_k(x)$ and $N_k(x)$ are polynomials in $x$ with integer coefficients and of degree less than or equal to $k$ for all nonnegative integer $k$. How do I show this? Based on what I have seen online, I have to use mathematical induction and show it holds for $k+1$ i.e., \begin{equation*} B_{k+1}(x) = (k+1)!(M_{k+1}(x)\sin x + N_{k+1}(x)\cos x) \end{equation*}
Notation: $\Bbb Z[x]$ is a standard notation for the set of polynomials in the free variable $x,$ with integer co-efficients. Let $S(k)$ be the statement $T(k)\land T(k+1)$ where $T(k)$ is the statement that $B_k(x)=k![M_k\sin x +N_k\cos x]$ where $M_k, N_k\in \Bbb Z[x]$ with $\max (\deg M_k, \deg N_k)\le k.$ You have $S(0).$ Now prove that $\forall k\in \Bbb N_0\,[S(k)\implies S(k+1)].$ Note that $S(k)\implies T(k+1)$ while $[S(k+1)]\iff [T(k+1)\land T(k+2)] .$ So to prove $S(k)\implies S(k+1),$ it suffices to prove that $S(k)\implies T(k+2).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use mean value theorem to prove $(1+x)^3 \ge (1+2x)$, where $x\ge 1$ We know $(1+x)^3 \ge (1+3x) \ge (1+2x)$. Also, if we take $f(x)=(1+x)^3$, we can prove that $f'(x)=3(1+x)^2\ge (1+2x)$. But I don't have any idea how to use the MVT for this. Can anyone help?
Note that $f'(x)\geqslant12\geqslant2$ if $x\geqslant1$. So, if $x\geqslant1$, then, by the mean value theorem, $$f(x)-f(1)\geqslant2(x-1)\quad(\iff f(x)\geqslant2x-2+8\geqslant2x+1).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Ring endomorphism of $p$-adic integers I am doing an individual study of an abstract algebra for number theory course online. I just started, so I hope my question does not come off as too trivial. The lecture notes state that the ring of $p$-adic integers does not have a ring endomorphism. Questions: 1. Does not the identity mapping work as a counterexample? Then, assuming they meant "no endomorphism except the trivial case", so the entire thing is not just a mistake: 2. I still cannot convince myself that there is no other ring endomorphism of $p$-adic integers. Could you please give me a hint how to prove it or point me to literature where such a proof is shown?
For your reference, there is a more general perspective from the theory of "Witt vectors". Throughout, we let $p$ denote a fixed prime and $\mathbb{F}_p$ the finite field with $p$ elements. The p-adic integers $\mathbb{Z}_p$ are an example of a strict $p$-ring, which is a ring $A$ such that * *$p$ is not a zero divisor in $A$, *$A$ is $p$-adically complete and Hausdorff, and *$A/(p)$ is a perfect ring (for example, a finite field). Then we have the following theorem: if $A$ and $B$ are strict $p$-rings, then there is a natural "mod $p$" bijection between ring homomorphisms $A \to B$ and ring homomorphisms $A/(p) \to B/(p)$ (see theorem 1.2 in https://arxiv.org/abs/1409.7445, which is an expository paper on Witt vectors, and/or Chapter II of Serre's Local Fields). Taking $A = \mathbb{Z}_p = B$, we have a bijection between endomorphisms of $\mathbb{Z}_p$ and endomorphisms of $\mathbb{Z}_p/(p) = \mathbb{F}_p$. The only ring endomorphism of $\mathbb{F}_p$ is the identity, so the same holds for $\mathbb{Z}_p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
$\int_{1}^{\infty} \dfrac{1}{x\ln(x + 1)} \ dx$ Comparison Test? So I am having a bit of trouble with this integral, and I would like some verification to see if I am on the right track. Q: Determine if $\displaystyle \int_{1}^{\infty} \dfrac{1}{x\ln(x + 1)} \ dx$ converges or diverges. So I know that we need to find a function that is comparable to the original function. So I chose $\displaystyle \dfrac{1}{x\ln(x)}$ because \begin{equation*} \dfrac{1}{x\ln(x + 1)} \leq \dfrac{1}{x\ln(x)} \end{equation*} for $x \geq 1$. So we take the integral: $\displaystyle \int_{1}^{\infty} \dfrac{1}{x\ln(x)} \ dx$. Let $u = \ln(x)$, then $du = \dfrac{1}{x} \ dx$. \begin{align*} \int_{1}^{\infty} \dfrac{1}{x\ln(x)} \ dx &= \int_{x = 1}^{x = \infty} \dfrac{1}{u} \ du \\ &=\lim_{t \to \infty}\left[\ln|u|\right]_{x = 1}^{x = t} \\ &= \lim_{t \to \infty} \left[\ln|\ln(x)|\right]_{1}^{t} \\ &= \lim_{t \to \infty} \ln|\ln(t)| - \lim_{t \to \infty} \ln|\ln(1)| \\ &= \infty - DNE \end{align*} The last part is where I am having a bit of trouble. I know that the integral diverges with the infinity, which means that $\displaystyle \int_{1}^{\infty} \dfrac{1}{x\ln(x + 1)} \ dx$ also diverges. But if $\ln|\ln(1)|$ does not exist, does that mean this integral still diverges? I am not sure how to explain this part well. Would appreciate some tips.
OP wrote: I know that the integral diverges with the infinity, which means that $\displaystyle \int_{1}^{\infty} \dfrac{1}{x\ln(x + 1)} \ dx$ also diverges. You can't conclude this from the comparison test. The comparison test says that: when for all $x$ we have $0 ≤ f(x) ≤ g(x)$, if $\displaystyle \int f(x) \ dx$ diverges, then $\displaystyle \int g(x) \ dx$ diverges, but not the opposite! I think this is what you're looking for: $\begin{equation*} \dfrac{1}{(x+1)\ln(x + 1)} \leq \dfrac{1}{x\ln(x+1)} \end{equation*}$. After that you can simply substitute $x+1=t$ and figure out that $\displaystyle \int_{1}^{\infty} \dfrac{1}{(x+1)\ln(x+1)} \ dx$ diverges! Now it's simple to figure out that $\displaystyle \int_{1}^{\infty} \dfrac{1}{x\ln(x + 1)} \ dx$ diverges from the comparison test
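To see the divergence concretely: $\frac{d}{dx}\ln(\ln(x+1)) = \frac{1}{(x+1)\ln(x+1)}$, so after the comparison step the partial integrals grow like $\ln\ln$, without bound but extremely slowly; a small numeric sketch:

```python
from math import log

def F(x):
    # an antiderivative of 1/((x+1)*log(x+1)) for x > 0
    return log(log(x + 1.0))

h = 1e-6
for x in (1.0, 5.0, 50.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - 1.0 / ((x + 1) * log(x + 1))) < 1e-6
# partial integrals from 1 to T are unbounded, growing like log(log(T)):
print(F(10.0) - F(1.0), F(1000.0) - F(1.0), F(10 ** 6) - F(1.0))
```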
{ "language": "en", "url": "https://math.stackexchange.com/questions/4080915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Must convex function $f(x)$ bounded by $\|x\|_0$ be bounded by $\|x\|_1$? One of my professors stated the following result without proof: Suppose that $f \colon \mathbb{R}^p \to \mathbb{R}$ is convex such that $f({\bf x}) \leq \|{\bf x}\|_0$, then $f({\bf x}) \leq \|{\bf x}\|_1$. (Recall that $\|{\bf x}\|_0$ counts the number of non-zero components of ${\bf x}$). The argument for $p = 1$ is fairly easy, but I did not see a way to prove this result in higher dimensions. Thank you very much! Edit: we might need $f$ to be defined only on a bounded (convex) subset of $\mathbb{R}^p$ in order to avoid triviality.
Here is a proof when $f : X \to \mathbb{R}$ such that $\{\mathbf{x} \in \mathbb{R}^p : \|\mathbf{x}\|_1 \leq p\} \subseteq X$ that requires no special tools. Let $\mathbf{x} \in X$. If $\|\mathbf{x}\|_1 \not\in (0, p)$, then $f(\mathbf{x}) \le \|\mathbf{x}\|_0 \leq \|\mathbf{x}\|_1,$ so we may assume $\|\mathbf{x}\|_1 \in (0,p)$. Then $\frac{p}{\|\mathbf{x}\|_1}\mathbf{x} \in X$ and $$\begin{align*}f(\mathbf{x}) &= f\left(\left(1 - \frac{\|\mathbf{x}\|_1}{p}\right)\mathbf{0} + \frac{\|\mathbf{x}\|_1}{p}\left(\frac{p}{\|\mathbf{x}\|_1}\mathbf{x}\right)\right) \\ &\leq \left(1 - \frac{\|\mathbf{x}\|_1}{p}\right)f(\mathbf{0}) + \frac{\|\mathbf{x}\|_1}{p}f\left(\frac{p}{\|\mathbf{x}\|_1}\mathbf{x}\right) \\ &\leq \left(1 - \frac{\|\mathbf{x}\|_1}{p}\right)\|\mathbf{0}\|_0 + \frac{\|\mathbf{x}\|_1}{p}\left\|\frac{p}{\|\mathbf{x}\|_1}\mathbf{x}\right\|_0 \\ &\leq \left(1 - \frac{\|\mathbf{x}\|_1}{p}\right)(0) + \frac{\|\mathbf{x}\|_1}{p}(p) \\ &= \|\mathbf{x}\|_1\end{align*}$$ Note that as long as $X$ contains a neighborhood of $\mathbf{0}$, it becomes more difficult to prove as $X$ shrinks, so my result here is not unexpected if you accept the proof that it's true when $X$ is the unit ball in $(\mathbb{R}^p, \|\cdot\|_\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4081124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the minimum of $v^2 + x^2 - 2xv\gamma$ Let's say I have this expression $v^2 + x^2 - 2xv\gamma$, in which $\gamma$ is a constant. I want to find values for $v$ and $x$ to minimize the expression. Sounds easy: just differentiate and set the result to zero. However, differentiating with respect to $v$ yields different results compared to differentiating w.r.t $x$. $\partial/\partial v \rightarrow 2v -2x \gamma = 0 \rightarrow v = x\gamma$ $\partial/\partial x \rightarrow 2x -2v \gamma = 0 \rightarrow x = v\gamma$ Why is there a difference? Like, the first equation implies that this function hits an extremum when $v = x \gamma$. The value of the expression at that extremum is now $\gamma^2 x^2 + x^2 - 2x^2 \gamma^2 = x^2 - x^2 \gamma^2$. On the other hand, the second equation implies that this function hits an extremum when $v = x / \gamma$, and that the value at the extremum is actually $x^2/\gamma^2 - x^2$. The two expressions are plainly different (unless $\gamma = \pm 1$). How do I interpret this discrepancy? What exactly are the two equations $v = x\gamma$ and $x = v \gamma$? What is the true value of the expression at the extremum?
$v=x\gamma$ and $x=v\gamma$ give $v=v\gamma^{2}$. If $\gamma \neq \pm 1$ we get $v=0$ and $x=v\gamma =0$. If $\gamma =\pm 1$ the expression becomes $(x+v)^{2}$ or $(x-v)^{2}$. Can you minimize this?
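To complete the picture: the two stationarity conditions are just the components of the gradient, and whether the common critical point is a minimum is decided by the Hessian $\begin{pmatrix}2 & -2\gamma\\ -2\gamma & 2\end{pmatrix}$, whose eigenvalues are $2\mp 2\gamma$. A tiny sketch of the three regimes:

```python
def hessian_eigs(g):
    # Hessian of v**2 + x**2 - 2*g*v*x is [[2, -2g], [-2g, 2]];
    # for [[a, b], [b, a]] the eigenvalues are a + b and a - b
    return (2 - 2 * g, 2 + 2 * g)

for g, regime in [(0.5, "strict minimum at (0, 0)"),
                  (1.0, "degenerate: minimum 0 along v = x"),
                  (2.0, "saddle: unbounded below")]:
    print(g, sorted(hessian_eigs(g)), regime)
```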
{ "language": "en", "url": "https://math.stackexchange.com/questions/4081257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }