Find $\int\ln^{n} x\,dx$ . My observation: $\int\ln^{1} x\,dx= x(\ln x- 1)+ constant$ $\int\ln^{2} x\,dx= x(\ln^{2} x- 2\ln x+ 2)+ constant$ $\int\ln^{3} x\,dx= x(\ln^{3} x- 3\ln^{2} x+ 6\ln x- 6)+ constant$ $\int\ln^{4} x\,dx= x(\ln^{4} x- 4\ln^{3} x+ 12\ln^{2} x- 24\ln x+ 24)+ constant$ $$\ddots$$ We have $\frac{{\rm d}}{{\rm d}\ln x} (\ln^{4} x- 4\ln^{3} x+ 12\ln^{2} x- 24\ln x+ 24)= 4(\ln^{3} x- 3\ln^{2} x+ 6\ln x- 6)$ $\frac{{\rm d}}{{\rm d}\ln x} (\ln^{3} x- 3\ln^{2} x+ 6\ln x- 6)= 3(\ln^{2} x- 2\ln x+ 2)$ $\frac{{\rm d}}{{\rm d}\ln x} (\ln^{2} x- 2\ln x+ 2)= 2(\ln x- 1)$ I used these to prep for my tests, thanks!
$$ \int \ln^{n} x\, dx = x \ln^{n}x - n\int\ln^{n-1}x\, dx $$ Please see: https://www.youtube.com/watch?v=xE0Pp4I7PiA
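As a numerical sanity check (my own sketch, not part of the linked video): the pattern in the question suggests the closed form $F_n(x)=x\sum_{k=0}^{n}(-1)^{n-k}\frac{n!}{k!}\ln^{k}x$, and differentiating it should reproduce the integrand $\ln^n x$:

```python
import math

def F(n, x):
    # Conjectured antiderivative read off from the pattern:
    # F_n(x) = x * sum_{k=0}^{n} (-1)^(n-k) * (n!/k!) * ln(x)^k
    return x * sum((-1) ** (n - k) * (math.factorial(n) // math.factorial(k)) * math.log(x) ** k
                   for k in range(n + 1))

def check(n, x, h=1e-6):
    # |central difference of F_n minus the integrand ln(x)^n|; should be ~0
    deriv = (F(n, x + h) - F(n, x - h)) / (2 * h)
    return abs(deriv - math.log(x) ** n)
```

For $n=1,\dots,4$ this reproduces the four antiderivatives listed in the question, and it matches the reduction formula term by term.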
{ "language": "en", "url": "https://math.stackexchange.com/questions/3307965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Limit Cycle at $r=1$? Consider the non-linear ODE $$u''+(u^2+u'^2-1)u'+u=0.$$ Transforming this to polar coordinates: $$r'=-(r^2-1)r\sin^2(\theta),$$ $$\theta'=-\sin(\theta)\cos(\theta)(r^2-1)-1.$$ If we consider an annulus (trapping region), $\frac{1}{2}<x^2+y^2<2,$ how do we deal with the $\sin^2(\theta)$ term? We can use the fact that $$\sin^2(\theta)=\frac{1-\cos(2\theta)}{2}.$$ If we take $\cos(2\theta)=-1\implies r'>0$ for $r<\frac{1}{2}$ and $r'<0$ for $r>2$. But, if we take $\cos(2\theta)=1\implies r'=0$ for $r<\frac{1}{2}$ and $r'=0$ for $r>2$. Ideally $r'$ should point towards the annular region for both cases. I can't see an error in my logic.
Hint. Putting the system in the form $$ \dot u_1 = u_2\\ \dot u_2 = -(u_1^2+u_2^2-1)u_2 - u_1 $$ we have $$ \frac 12(u_1^2+u_2^2)' = -u_2^2(u_1^2+u_2^2-1) $$ and the stream plot shows the behaviour near the unit circle (plot omitted). NOTE When $u_1^2+u_2^2 = 1$ then $\dot u_1 = u_2$ and also $$ 2u_1\dot u_1 +2u_2\dot u_2 = 0\Rightarrow \dot u_2 = - u_1 $$ and those points have the dynamics dictated by $$ \dot u_1 = u_2\\ \dot u_2 = -u_1 $$ which describes concentric circles.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving the following inequality using Mathematical Induction I need to prove the following inequality is true for $n > 2$: $n^3 > 2n^2 + 3$. *Prove using base case $n = 3$: $3^3 > 2(3)^2 + 3$, i.e. $27 > 18 + 3 = 21$ (true). *Assume true for $n = k$: $k^3 > 2k^2 + 3$. *Prove for $n = k + 1$: $(k+1)^3 > 2(k+1)^2 + 3$. I'm not sure where to go from here. I'm thinking I may need to prove it for $n = k + 2$ (since I'm proving the expression is true for $n > 2$), but I'm not sure. How would I be able to prove this using $n = k + 1$?
Now, $$(k+1)^3=k^3+3k^2+3k+1=2(k+1)^2+3+\left(k^3+k^2-k-4\right)>2(k+1)^2+3,$$ where the last inequality holds because the induction hypothesis $k^3>2k^2+3$ gives $k^3+k^2-k-4>3k^2-k-1>0$ for $k\ge 3$.
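A quick brute-force check of the algebra (my own sketch, just to build confidence in the identity and the surplus term):

```python
def leftover(k):
    # (k+1)^3 - (2(k+1)^2 + 3) simplifies to k^3 + k^2 - k - 4
    return k ** 3 + k ** 2 - k - 4

# the algebraic identity used in the induction step
ok_identity = all((k + 1) ** 3 - (2 * (k + 1) ** 2 + 3) == leftover(k) for k in range(200))
# the inequality n^3 > 2n^2 + 3 itself, for a range of n > 2
ok_ineq = all(n ** 3 > 2 * n ** 2 + 3 for n in range(3, 200))
# positivity of the surplus for k >= 3, which closes the induction step
ok_step = all(leftover(k) > 0 for k in range(3, 200))
```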
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Let $U\neq \emptyset$ In $\mathbb{R}^n$ Be Open, Then $U$ Is Not Compact Prove: Let $U\neq \emptyset$ in $\mathbb{R}^n$ be open; then $U$ is not compact. How can I approach this? I know that in $\mathbb{R}^n$ we have that $U$ is compact $\iff$ $U$ is closed and bounded. So we can start by assuming that $U$ is compact, but I cannot see which path will lead to a contradiction.
There are many good answers already, but here's another approach. First, we can solve this for $\mathbb{R}$. An open set $U \subset \mathbb{R}$ is the (countable) disjoint union of open intervals, $U = \sqcup_{j \geq 1}(a_j,b_j)$. We can assume $U$ is bounded, otherwise it will not be compact from the get-go. Thus all intervals have finite endpoints. Now, we have that $a_1 \not \in U$ but you can certainly find a sequence in $(a_1,b_1)$ that converges to $a_1$. Hence $U$ is not closed, and in particular, it is not compact. Finally, if $U \subset \mathbb{R}^n$, note that the projection mapping $$ \pi : (x_1, \dots, x_n) \in \mathbb{R}^n \mapsto x_1 \in \mathbb{R} $$ is not only continuous, but open (i.e. $\pi(U)$ is open, if $U$ is open). Therefore, if $U$ were compact, the set $\pi(U) \subset \mathbb{R}$ would be open and compact (in particular, bounded). By the one-dimensional case this implies $\pi(U) = \emptyset$, which can only occur if $U = \emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Prove that a group of order 5 doesn't have any self-inverse member other than the identity member. Assuming you only know the most basic ideas about group properties: if a group $G$ contains 5 members, how do you prove that no member other than the identity member is self-inverse? *I'd rather have a clue on how to begin solving it, or on how to approach this problem, than have the whole proof :)
Assume that $a=a^{-1}$ and $a \neq e$. $G$ can be partitioned into two disjoint sets: put $S = \{x \in G: x=x^{-1}\}$ and $T=\{x \in G: x \neq x^{-1}\}$, then $G = S \cup T$ and since $a, e \in S$, $|S| \geq 2$. Note that $|T|$ is even (elements of $T$ come in pairs), which leaves us with $|T|=0$ or $2$ (if $|T|$ were equal to $4$, then $5 = |S| + |T| \geq 2+4=6$, which is absurd). If $T$ is empty then every element of $G$ equals its inverse. Next to $e$ and $a$, let $b$ be a third element of $G$ different from $e$ and $a$. Then clearly, $ab \in G$ and $ab \notin \{e,a,b\}$. Note that $ab=(ab)^{-1}=b^{-1}a^{-1}=ba$. So $G=\{e,a,b,ab,c\}$ for some $c$ not equal to any of the other elements. Since $ac \in G$ we now reach a contradiction: go verify that $ac \neq e$, $ac \neq a$, $ac \neq b$, $ac \neq ab$, $ac \neq c$. If $|T|=2$, then $S=\{e,a,b\}$ and $T=\{c, c^{-1}\}$, for certain $b,c \in G$ with $b \neq a$ and $b \neq e$. Since $G$ is closed, $ab \in G$ and it is easy to see that $ab \notin \{e,a,b\}$. Hence (after appropriate renaming) we can assume $ab=c$ and thus $(ab)^{-1}=b^{-1}a^{-1}=ba=c^{-1}$. So after all, $G=\{e,a,b,ab, ba\}$. Now for the final contradiction, we must have that $bab \in G$, but go verify it is not equal to any of the elements of $G$ (for example, if $bab=e$, then $ba=b^{-1}=b$, whence $a=e$, etc.).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Argand formula and more for quaternions? Is it possible to define a similar form of Argand's formula but for quaternions? In the sense $$ \cos(nA)+i\cos(nB)+j\cos(nC)+k\cos(nD) =(\cos(A)+i\cos(B)+j\cos(C)+k\cos(D))^{n}, $$ where $A, B, C, D$ are the angles of the quaternion with respect to the axes $x,y,z,t.$ Also, for a quaternion, why can you not define the 'numbers' $ \frac{i+j}{k} $ or $ i^{j+k} $ ... where the quaternion is defined in 4 dimensions as $ a+ib+cj+dk =z? $
Since @LordSharktheUnknown discussed the trigonometry, I'll answer your later questions. Do you want $w:=z_1/z_2$ to satisfy $z_1=z_2w$ or $z_1=wz_2$? It matters, which is why we don't usually write such expressions as $\frac{i+j}{k}$; you'd want to say $(i+j)k^{-1}$ or $k^{-1}(i+j)$ instead. (These are respectively $-ik-jk=ki-jk=j-i,\,i-j$.) That quaternions don't commute also introduces problems with defining exponentiation. Do we want $z_1^{z_2}$ to mean $\exp(z_2\ln z_1)$ or $\exp((\ln z_1)z_2)$?
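A quick machine check of those two products (a minimal sketch of my own, with quaternions written as $(a,b,c,d)$ tuples):

```python
def qmul(p, q):
    # Hamilton product of quaternions written as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
            a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
            a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
            a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
k_inv = (0, 0, 0, -1)     # k^{-1} = -k, since k^2 = -1
i_plus_j = (0, 1, 1, 0)

right_quotient = qmul(i_plus_j, k_inv)  # (i+j) k^{-1}
left_quotient = qmul(k_inv, i_plus_j)   # k^{-1} (i+j)
```

The two "quotients" really are different, which is exactly why $\frac{i+j}{k}$ is ambiguous.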
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How to show that $\sqrt[4]{13}$ is not in $\mathbb{Q}_{13}(\sqrt[4]{26})$ Approach * *I was able to show that $[\mathbb{Q}_{13}(\sqrt[4]{26}):\mathbb{Q}_{13}] = 4$ since $x^4 - 26$ is an irreducible polynomial in $\mathbb{Q}_{13}[x]$ (this can be shown by using Eisenstein's criterion and Gauss's lemma). Therefore, a basis of $\mathbb{Q}_{13}(\sqrt[4]{26})$ as a $\mathbb{Q}_{13}$-vector space is $$ (1, \sqrt[4]{26}, \sqrt{26}, \sqrt[4]{26^3}).$$ *If $\sqrt[4]{13}$ were in $\mathbb{Q}_{13}(\sqrt[4]{26})$, then there would be coefficients $c_0,c_1,c_2,c_3 \in \mathbb{Q}_{13}$ such that $$\Big(\sum_{k=0}^{3} c_k \sqrt[4]{26}^k\Big)^4 = 13. $$ However, getting a contradiction out of this equation seems to be super complicated. Could you please help me with this problem? Thank you!
Note that $2^{(13^2-1)/4}=(2^6)^7$ is congruent to $-1$ mod $13$, thus $2$ is not a fourth power in the quadratic extension of $\mathbb{F}_{13}$. Therefore, $X^4-2$ is irreducible over $\mathbb{F}_{13}$, thus over $\mathbb{Z}_{13}$, thus over $\mathbb{Q}_{13}$. So $K=\mathbb{Q}_{13}[X]/(X^4-2)=\mathbb{Q}_{13}(2^{1/4})$ is a field with dimension $4$ over $\mathbb{Q}_{13}$. So it is enough to prove that ${\sqrt{13}} \notin K$. Now, if $\sqrt{13} \in K$, there are $a,b,c,d \in \mathbb{Q}_{13}$ such that $(a+b2^{1/4}+c2^{2/4}+d2^{3/4})^2=13$. Expanding and comparing coefficients of $1, 2^{1/4}, 2^{2/4}, 2^{3/4}$, the equations are $a^2+4bd+2c^2=13$, $2ac+b^2+2d^2=0$, $4cd+2ab=0$, $2ad+2bc=0$. Thus $ab=-2cd$, $ad=-bc$, $2ac+b^2+2d^2=0$, $a^2+4bd+2c^2=13$. Assume $a=0$; then $c=0$ or $d=0$, and $b=0$ or $c=0$. If $c=0$, then $b^2+2d^2=0$ and $4bd=13$, which is impossible because the $13$-adic valuations of $b$ and $d$ must both be the same and have an odd sum. So if $a=0$, then $c \neq 0$, thus $b=d=0$, therefore $2c^2=13$, impossible. So $a \neq 0$, thus $b=-2cd/a$, thus $d=-bc/a=2c^2d/a^2$, hence $d=0$ or $(c/a)^2=1/2$. The latter is impossible, because $2$ is not a square mod $13$ (and $\mathbb{Z}_{13}$ is integrally closed and surjects onto $\mathbb{F}_{13}$), thus $b=d=0$, thus $2ac=0$, thus $c=0$, thus $a^2=13$, impossible. So $\sqrt{13} \notin K$. Thus there is no field extension of $\mathbb{Q}_{13}$ with dimension $4$ containing $2^{1/4}$ and $13^{1/2}$, let alone $26^{1/4}$ and $13^{1/4}$.
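The finite-field facts used above are easy to confirm by brute force (a small sketch of my own, not part of the original argument; note $2^{42} \bmod 13$ is meaningful because $2^{12}\equiv 1 \bmod 13$ already):

```python
# Euler-style criterion: 2^((13^2 - 1)/4) mod 13
euler_criterion = pow(2, (13 ** 2 - 1) // 4, 13)

# fourth powers and squares of the nonzero residues mod 13
fourth_powers_mod_13 = {pow(x, 4, 13) for x in range(1, 13)}
squares_mod_13 = {pow(x, 2, 13) for x in range(1, 13)}
```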
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $\lim_{x\to \frac{\pi}{2}} \frac{1}{\big(x-\frac{\pi}{2}\big)}+{\tan(x)}=0$. Prove that $$\lim_{x\to \frac{\pi}{2}} \frac{1}{\big(x-\frac{\pi}{2}\big)}+{\tan(x)}=0.$$ I'm not really sure how to proceed. I know that I should not try L'Hôpital's rule (tried that) but not sure how I would incorporate into the Squeeze Theorem or how I would use continuity. Thanks! Edit: Turns out I was really dumb and you do use L'Hôpital's rule twice. I made the mistake of differentiating the whole quotient rather than the function on top and the bottom of the vinculum separately.
First note the fundamental trigonometric limits $$\lim_{t\to0}\frac{\sin(t)}{t}=1,\qquad \lim_{t\to0}\frac{t}{\sin(t)}=1,\qquad \lim_{t\to0}\frac{1-\cos(t)}{t}=0.$$ Substituting $t=x-\frac{\pi}{2}$ (so that $\tan(x)=-\cot(t)$), we have $$\lim_{x\to\frac{\pi}{2}} \frac{1}{\left(x-\frac{\pi}{2}\right)}+\tan(x)$$ $$=\lim_{t\to0} \frac{1}{t}-\cot(t)$$ $$=\lim_{t\to0}\frac{\sin(t)-t\cos(t)}{t\sin(t)}.$$ Now split the numerator as $\sin(t)-t\cos(t)=(\sin(t)-t)+t(1-\cos(t))$, so that $$\lim_{t\to0}\frac{\sin(t)-t\cos(t)}{t\sin(t)} =\lim_{t\to0}\left(\frac{t}{\sin(t)}\cdot\frac{\sin(t)-t}{t^{2}}\right) +\lim_{t\to0}\left(\frac{1-\cos(t)}{t}\cdot\frac{t}{\sin(t)}\right).$$ For the first term, the inequality $t-\frac{t^{3}}{6}\le\sin(t)\le t$ for small $t\ge0$ (and its mirror image for $t\le0$) gives $\left|\frac{\sin(t)-t}{t^{2}}\right|\le\frac{|t|}{6}\to0$ by the squeeze theorem, so the first term is $1\cdot0=0$. The second term is $0\cdot1=0$ by the fundamental limits. Hence the limit is $0$. Let me know if this helps you.
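To see the behaviour numerically (a quick sketch of my own): after the substitution the expression is $\frac1t-\cot(t)$, which behaves like $t/3$ near $0$.

```python
import math

def g(t):
    # 1/t - cot(t): the expression after substituting t = x - pi/2
    return 1.0 / t - math.cos(t) / math.sin(t)

# the values shrink toward 0 as t -> 0
samples = [g(10.0 ** (-p)) for p in range(1, 6)]
```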
{ "language": "en", "url": "https://math.stackexchange.com/questions/3308938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Two Solutions for an ODE: $x' = x^{\frac45}$ Find two different solutions $x_1, x_2 : \mathbb{R} \to \mathbb{R}$ of $$ \dot{x} = x^{\frac45}, \quad x(1) = 1. $$ This is a problem in a 60-minute exam, so it should be quite simple but still I'm failing. I can get $x(t) = \left(\frac{t+4}{5} \right)^5$ via separation of variables (might have miscomputed but that's not too important) but how can we find another solution? The solutions on Wolfram-Alpha do not look too simple...
To get two different solutions, you need to have a violation of the Lipschitz condition, even the local one. This can be tested by looking for singularities of the derivative of the right side. You will find that there is such a singularity at $x=0$, and find further that the constant-zero function is a solution. You can check that your solution has derivative $0$ where it takes the value $0$, so that a continuation to the left with the zero solution is also a solution of the IVP. $$ x(t)=\begin{cases}0,&t<-4,\\\left(\frac{t+4}5\right)^5,&t\ge-4.\end{cases} $$ There are other solutions in-between these two.
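Both branches can be checked with a finite difference (a sketch of my own, under the convention $x^{4/5}=(x^{1/5})^{4}$; the right-hand side is only evaluated where $x\ge0$ here):

```python
def x_smooth(t):
    # the solution found by separation of variables
    return ((t + 4) / 5.0) ** 5

def x_glued(t):
    # the same solution continued to the left by the zero solution
    return 0.0 if t < -4 else ((t + 4) / 5.0) ** 5

def rhs(x):
    # x^(4/5), evaluated only for x >= 0 in this sketch
    return x ** 0.8

def residual(sol, t, h=1e-6):
    # |finite-difference derivative minus rhs| at time t
    return abs((sol(t + h) - sol(t - h)) / (2 * h) - rhs(sol(t)))
```

Both functions satisfy the initial condition $x(1)=1$ and the ODE, yet they differ for $t<-4$.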
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Double torus embedding of non-hamiltonian bicubic graphs Can the Georges graph (or any other non-hamiltonian bicubic graph) be embedded on an orientable surface of genus $2$, i.e. a double torus? If it helps, it would have $F=E+\chi-V=75-2-50=23$ faces...
A double torus is a sphere with two handles, so its genus (which is the number of handles; see, for instance, p. 133 in “Chromatic Graph Theory” by Gary Chartrand and Ping Zhang, CRC Press, 2009) is $2$. My search at the House of Graphs for bicubic non-hamiltonian graphs of genus at most two provided the following three graphs: #6923, #27548, and #27678.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does combining these two linear diophantine equations yield solutions, when each individually has none? The two following linear diophantine equations have no solutions: $$412x + 18y = 49$$ $$33x + 99y = 15$$ We can however combine them: $412x + 18y - 49=0, 33x + 99y - 15 = 0$ gives $412x+33x+18y+99y-49-15=0 \to $ $$445x + 117y = 64$$ This has solutions $x=-3584 + 117k, y=13632-445k$. Why? Doesn't $A+C=B+D$ imply $A=B, C=D$, and if not, why does it work fine when we combine equations so many other times in math?
You lost some information. You have $a=0$ and $b=0$ and (essentially) conclude that $a=b$, which is correct. But you've lost the information that each side equals $0$. You'd get the same conclusion if $a=5$ and $b=5$. Concluding $a=b$ is correct, but you've lost the $5$-ness. Since you lost information, you have fewer restrictions on your variables, and so the new equation allows more solutions.
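The cancellation can be seen concretely (a sketch of my own; $(x,y)=(-3584,\,13632)$ is one solution of the combined equation): the two residuals are huge but exactly opposite, so their sum vanishes even though neither equation is satisfied.

```python
from math import gcd

x, y = -3584, 13632                 # one solution of 445x + 117y = 64

r1 = 412 * x + 18 * y - 49          # residual of the first equation
r2 = 33 * x + 99 * y - 15           # residual of the second equation

# the individual equations really are unsolvable, by the gcd test
unsolvable_1 = 49 % gcd(412, 18) != 0   # gcd = 2 does not divide 49
unsolvable_2 = 15 % gcd(33, 99) != 0    # gcd = 33 does not divide 15
```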
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find the intersection of the sets given If $aN= \{ ax:x\in N \}$ then $3N\cap 7N$ = Options are a) $3N$ b) $7N$ c) $N$ d) $21N$ I think the answer should be $3N$, as $7N$ would contain $3N$ and we have to find their intersection. But the answer is $21N$, and I am not able to figure out the reason for that answer.
Think about it this way: $3N=\{n:n\in N\land 3\mid n\}$ $7N=\{n:n\in N\land 7\mid n\}$ For any two sets $A$ and $B$, such that $A=\{a:\varphi(a)\}$ and $B=\{b:\psi(b)\}$, the intersection is $A\cap B= \{c:\varphi(c)\land\psi(c)\}$ So... $$3N\cap7N=\{n:n\in N\land (3\mid n)\land (7\mid n)\}$$ If $3$ divides $n$ and $7$ divides $n$, then $n$ must be a multiple of $21$ (since $3$ and $7$ are prime and $21$ is their least common multiple). Hence, $3N\cap7N=21N$.
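A quick finite check (my own sketch, truncating at 1000) also dispels the idea that $7N$ contains $3N$:

```python
N = 1000
multiples_3 = {n for n in range(1, N) if n % 3 == 0}
multiples_7 = {n for n in range(1, N) if n % 7 == 0}
multiples_21 = {n for n in range(1, N) if n % 21 == 0}
```

Neither set contains the other; their intersection is exactly the multiples of 21.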
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
n empty balls and an observer Consider $n$ empty circles in a row. An observer sits in one of the $n−1$ gaps between them. A random subset of r circles is colored black. Show that the expected number of empty circles the observer sees is at most $ \frac{2(n − r)}{(r + 1)}$. (The observer can see through empty circles but not through black ones.) Hint: Instead of coloring r circles color $r + 1$ of them black first and then make a random black circle empty again. The end result seems dubious to me, let alone attempting to prove it. Even for the case of $r = 1$, I think I should get $\frac{n-1}{2}$. I say this because the number of empty balls the observers sees must be the number of balls in-between two black balls (or one black ball in the case the observer is at the two ends of the line up). Could you explain to me why this is not the case? I already saw the proof from the book but I don't believe this result is correct.
There is no inconsistency between the claimed result and the value you think you get in the case $r=1$, since the result says "at most", and $\frac{n-1}{2}$ is certainly at most $n-1$. Also, $\frac{n-1}{2}$ isn't correct for $r=1$. For example, if $n=2$ the expected value is precisely $1$ (there is one uncoloured circle, and the observer can see both circles), and for $n=3$ the expected value is $\frac 53$, since if the black circle is in the middle (probability $1/3$) the observer can only see one uncoloured circle, but otherwise (s)he can see both.
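These small cases are easy to brute-force. Below is a sketch under my reading of the setup: the observer sits in the gap between circles $g$ and $g+1$ and counts empty circles scanning left and right, stopping at the first black one.

```python
from fractions import Fraction
from itertools import combinations

def seen(n, black, gap):
    # empty circles visible from the gap between circles `gap` and `gap + 1`
    count = 0
    for c in range(gap, 0, -1):         # look left
        if c in black:
            break
        count += 1
    for c in range(gap + 1, n + 1):     # look right
        if c in black:
            break
        count += 1
    return count

def expected(n, r, gap):
    # exact expected number of visible empty circles over all r-subsets
    subsets = list(combinations(range(1, n + 1), r))
    return Fraction(sum(seen(n, set(b), gap) for b in subsets), len(subsets))
```

This reproduces the values $1$ (for $n=2$) and $\frac53$ (for $n=3$) above, and the book's bound holds in every small case I tried.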
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Diagonalising matrices over different fields examples Say $M=\left[ {\begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array} } \right]$, so its characteristic polynomial is $x^2-x-1$, and the matrix is diagonalisable if the field chosen is $\mathbb{R}$ but not if the field is $\mathbb{Q}$. How can I determine if such a matrix is diagonalisable over the finite field $\mathbb{F}_p$, for some prime $p$? Also, if a matrix is diagonalisable over $\mathbb{C}$ but not $\mathbb{R}$, am I right in thinking that it cannot be diagonalisable over $\mathbb{F}_p$?
Careful with the last claim: diagonalisability over $\mathbb{C}$ or $\mathbb{R}$ neither implies nor rules out diagonalisability over a finite field. For instance, $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is diagonalizable over $\mathbb{C}$ but not over $\mathbb{R}$, yet over $\mathbb{F}_5$ its characteristic polynomial splits as $x^2+1=(x-2)(x-3)$, so it is diagonalizable over $\mathbb{F}_5$. Look at your example. I reduce it here to the case $p = 2$. The elements of $\mathbb{F}_2$ are $0$ and $1$. But since $$ 0^2 - 0 - 1 = 1 = 1^2 - 1 -1$$ in $\mathbb{F}_2$, we see that the matrix has no eigenvalues in $\mathbb{F}_2$ and is therefore not diagonalizable over $\mathbb{F}_2$. In general, also in finite fields, you always have to calculate the characteristic polynomial and investigate the eigenvalues and their algebraic and geometric multiplicities.
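A small scan (my own sketch) shows how the behaviour of $x^2-x-1$ varies with $p$: two distinct roots for some primes, no roots for others, and a repeated root at $p=5$ (where the geometric multiplicity still has to be checked separately).

```python
def roots_mod_p(p):
    # roots of the characteristic polynomial x^2 - x - 1 in F_p
    return [x for x in range(p) if (x * x - x - 1) % p == 0]

primes = [2, 3, 5, 7, 11, 13, 17, 19, 29, 31]
two_distinct_roots = [p for p in primes if len(roots_mod_p(p)) == 2]
```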
{ "language": "en", "url": "https://math.stackexchange.com/questions/3309894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relations between two applications of Catalan Numbers Say I am looking at how many different balanced parenthesizations I can make. Then I look at how many ways $n$ triangles can be made from an $(n+2)$-gon. Because they are both counted by the Catalan numbers, I know there is a bijection between them. What I am having trouble understanding is HOW they are in bijection. Does anyone have an answer?
I suppose we are talking about convex polygons on $n+2$ vertices. Index the vertices sequentially; I will give them indices in the following way: $0, 0, 1, 2, \ldots, n$. Algorithm $1$ (from triangulation to parentheses): * *Mark the $0$-$0$ edge as "seen". *Start at vertex $1$ (at each vertex go over all the outgoing edges in a clockwise manner, i.e. the indexing was done in the positive direction). *If the edge under inspection closes a triangle with two other marked edges, mark the edge as ')' (and write it down). *Otherwise, ignore the edge, unless it is an edge of the polygon itself; in this case mark it as '(' (and write it down). Proceed to the next vertex on the polygon. E.g. (example figure omitted; the numbers in parentheses there denoted the indices of the edges during the algorithm execution). Now, ignoring the $0$th edge and writing out the marks in order, we get '(())(((()())))'. [Need to prove all legal parenthesizations are produced. In sketch: convex hull edges could be '(', except between the right $0$ and $n$, and between the two $0$s, so those are $n$. There are $n$ triangles, so there are $n-1$ inner edges, none of which could be '(', and $1$ convex hull edge marked as ')': so there are $n$ ')'s. Legality probably stems from the triangulation (open for scrutiny). Different triangulations obviously produce different strings.] The other direction is basically building triangles according to the string, in the opposite direction: you close a triangle if two other sides are set and the edge starts at the current vertex; if there is an open parenthesis, you proceed along a convex hull edge to the next vertex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find range of parameters for which a given curve is a geodesic I am working on the following problem: Given the parameterised surface $X(u,v)=(u \cdot \cos v,u \cdot \sin v,v)$, determine for which values of $\alpha$ the curve $\gamma_{\alpha}=(t \cdot \cos (\alpha t),t \cdot \sin(\alpha t),\alpha t)$ is a geodesic. According to a theorem, a curve $\gamma = X \circ \beta $ is a geodesic if and only if $\beta(t)=(u(t),v(t))$ satisfies $\frac{d}{dt}(E\dot{u}+F\dot{v})=\frac{1}{2}(E_{u}(\dot{u})^{2}+2F_{u}\dot{u}\dot{v} +G_{u}(\dot{v})^{2})$ $\frac{d}{dt}(F\dot{u}+G\dot{v})=\frac{1}{2}(E_{v}(\dot{u})^{2}+2F_{v}\dot{u}\dot{v} +G_{v} (\dot{v})^{2})$. Given the data in this problem we have $X_{u}=(\cos v,\sin v,0)$ $X_{v}=(-u \cdot \sin v, u \cdot \cos v, 1 )$ Hence $\langle X_{u},X_{u}\rangle=E=1$ $\langle X_{u},X_{v}\rangle=F=0$ $\langle X_{v},X_{v}\rangle=G=u(t)^2+1$ Therefore we get the system $0=\alpha^2\cdot t$ $\alpha\cdot 2t=0$ which means that $\alpha$ must be zero. This however doesn't seem quite right. Can anyone see where I go wrong and what the right answer should be?
There are some mistakes on the LHS of your equations. The corrected equations are as follows: $\displaystyle \frac{d(E \dot{u} + F \dot{v})}{dt} = \frac{1}{2}\left(E_u \dot{u}^2 + 2F_u \dot{u}\dot{v} + G_u \dot{v}^2 \right)$ and $\displaystyle \frac{d(F \dot{u} + G \dot{v})}{dt} = \frac{1}{2}\left(E_v \dot{u}^2 + 2F_v \dot{u}\dot{v} + G_v \dot{v}^2 \right)$ If $\alpha$ is a constant, it must be zero. What if $\alpha$ is a function of $t$? Then $\dot v = \alpha + \dot{\alpha}t$, and the equations become different. The first reads $\frac{1}{2} \left( 2t(\alpha + \dot{\alpha}t)^2\right) = 0,$ hence $\alpha + \dot{\alpha}t = 0,$ and the second, $\frac{d}{dt}\left((t^2+1) (\alpha + \dot{\alpha}t)\right) = 0,$ then holds automatically. Solving $\alpha + \dot{\alpha}t = 0$, $\displaystyle \alpha = \frac{C}{t}$ where $C$ is a constant
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
The confusing usage of atlas and maximal atlas I'm studying Loring Tu's An Introduction to Manifolds. On p. 60 he says that, given a smooth manifold $M=(\underline{M},\Phi_{\text{maxi}})$, it is understood that there exists a maximal atlas $\Phi_{\text{maxi}}$ on the underlying set $\underline{M}$. However, what does the "atlas" refer to in (ii)? Does it belong to the original maximal atlas (given immediately when he says "Let $M$ be a manifold ..."), or could it be any atlas, even one outside the original maximal atlas? How can I tell from the context?
It isn't required that the atlas in (ii) be maximal, but such an atlas can be extended, preserving the property, to a maximal atlas, namely the maximal atlas defining the smooth structure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivative is just speed of change? In school we've been told that the derivative of $x^2$ is $2x$. Also I've read that the derivative is simply the speed of a value's change. So if $$f(x)=x^2$$ then, using this simple explanation, the derivative of that function would be $$f'(x)=f(x+1)-f(x).$$ Now if we take the derivative at $x=2$ we will get $$f'(2)=f(2+1)-f(2)=3^2-2^2=9-4=5.$$ But if we take the conversion rule (from school) which says that $[x^2]'$ is $2x$, then $$f'(x)=2x$$ and if we put the same point here we will get $$f'(2)=2\cdot2=4,$$ so the first result gives me $5$ and the second result gives me $4$. And this problem seems to appear for every number: the number calculated by the simplified interpretation is always bigger by $1$. My only guess is that the simplified explanation is missing something. Or, maybe, I made a mistake somewhere. Can you, please, help me figure it out? Update: I spent 2 days trying to figure this out! Thanks to all of you, guys!!! Now I got it!))))
You are right, the derivative is a speed (or rate) of change. People are often familiar with the idea that the slope of a line is its rate (speed) of change. I'd just like to add to Marco's answer and tell you to pay particular attention to his very first equation: $gradient =\frac{f(x + h) - f(x)}{(x+h) - x}$. What he calls a gradient is more commonly known as a slope, and what you are looking at is the slope of a line that passes through the two points $(x, f(x))$ and $(x+h, f(x+h))$. When $h$ is a very small quantity, that line is very, very close to the tangent line at $(x, f(x))$, and the slope of the tangent line is the speed of change for that function when it is at the point $(x, f(x))$. The problem with finding the slope of the tangent line, and thus the instantaneous speed of change, is that we only know one point on the tangent line, namely $(x, f(x))$. By also looking at the point $(x+h, f(x+h))$ and making $h$ very small, we are finding the slope of a line that is very close to the tangent line. Its slope is an approximation of the tangent's slope. As we make $h$ smaller and smaller, our lines get closer to the tangent line and we get better approximations of its slope. When we take the limit of our slope formula as $h$ approaches $0$, the limit of our lines' slopes will be the tangent line's slope. That is why the derivative is defined as the limit of the slope of our lines as $h$ approaches $0$: $$derivative = \lim_{h \to 0}{\frac{f(x+h) - f(x)}{(x+h) - x}}$$
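The asker's $5$ vs $4$ puzzle is exactly this picture with $h=1$ (a tiny sketch of my own): the step-$1$ difference quotient of $x^2$ at $x=2$ is $5$, and shrinking $h$ drives the quotient toward the derivative $4$.

```python
def f(x):
    return x * x

def diff_quotient(x, h):
    # slope of the line through (x, f(x)) and (x + h, f(x + h))
    return (f(x + h) - f(x)) / h

step_one = diff_quotient(2, 1)                       # the "speed with step 1"
shrinking = [diff_quotient(2, h) for h in (1.0, 0.1, 0.01, 0.001)]
```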
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 4 }
How to solve this weird ODE? I came across this differential equation which I'm having trouble finding an analytic solution to: $$\frac{dy}{dx}=\frac{A}{xy}+\frac{B}{(xy)^2}$$ I'm trying to solve for y. I have initial conditions as $x_0=0.02$ and $y_0=100000$, and A and B are known constants. I don't have a very heavy differential equations background so all I know is that I can't use separation of variables--what kind of method should I use to solve this equation? Thank you!!
Assume $A,B\neq0$ for the key case: Hint: Let $u=xy$ , Then $y=\dfrac{u}{x}$ $\dfrac{dy}{dx}=\dfrac{1}{x}\dfrac{du}{dx}-\dfrac{u}{x^2}$ $\therefore\dfrac{1}{x}\dfrac{du}{dx}-\dfrac{u}{x^2}=\dfrac{A}{u}+\dfrac{B}{u^2}$ $\dfrac{1}{x}\dfrac{du}{dx}=\dfrac{u}{x^2}+\dfrac{A}{u}+\dfrac{B}{u^2}$ $\dfrac{1}{x}\dfrac{du}{dx}=\dfrac{(Au+B)x^2+u^3}{x^2u^2}$ $((Au+B)x^2+u^3)\dfrac{dx}{du}=xu^2$ Let $v=x^2$ , Then $\dfrac{dv}{du}=2x\dfrac{dx}{du}$ $\therefore\dfrac{(Au+B)x^2+u^3}{2x}\dfrac{dv}{du}=xu^2$ $((Au+B)x^2+u^3)\dfrac{dv}{du}=2u^2x^2$ $((Au+B)v+u^3)\dfrac{dv}{du}=2u^2v$ This belongs to an Abel equation of the second kind.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the Automorphism Group of the following graph (picture below) I'm studying for an exam and found the following task concerning automorphism groups: "Find the automorphism group $Aut(G)$ of the following graph $G$." On the outside, the graph looks like a regular pentagon, which would make for an easy task. I figured out that $Aut(G)$ doesn't contain any rotations, since the middle part wouldn't be mapped to itself. I think I have found one automorphism, which mirrors $G$ across the central vertical axis. This would be the permutation $(2\,4)(9\,10)(5\,6)(7\,8)$. I can't find any other automorphisms, which is hard to believe for me since this task gave 4 out of 40 points on last year's exam. Can anyone help me here?
Let $I$ be the identity automorphism, and let $M$ be the mirroring automorphism. We want to determine whether there exist any other automorphisms. In order to do this, let's imagine that there exists an automorphism that I'll denote $F_1$ which is unequal to either $I$ or $M$. Let's see what we can learn about $F_1$; maybe we will be able to prove a contradiction, or maybe we will be able to find a construction of $F_1$. An automorphism must respect the valences of vertices. There are exactly two vertices of valence $2$, namely vertex 5 and vertex 6, and therefore $F_1$ either fixes 5 and 6, or $F_1$ interchanges them. Define a new automorphism $F_2$: * *Case 1: if $F_1$ interchanges $5$ and $6$ then $F_2 = M F_1$ fixes $5$ and $6$; *Case 2: if $F_1$ fixes $5$ and $6$ then $F_2 = F_1$. Either way, we have produced an automorphism $F_2$ that fixes $5$ and $6$. Furthermore, $F_2$ is different from $I$ (in Case 1, if $F_2=I$ then $M F_1 = I$ and so $F_1 = M^{-1}=M$; in Case 2, $F_2=I$ would mean $F_1=I$; and in either case we get a contradiction). Next, I look in the graph for all paths between $5$ and $6$. There is just one path of length $3$, namely 5---8---7---6. An automorphism must take paths to paths and must respect lengths of paths. Therefore $F_2$ takes this path to itself, and since it fixes the endpoints, it follows that $F_2$ fixes $7$ and $8$. Since $F_2$ fixes $5$ it must either fix both $2$ and $8$ or interchange them, but we've just shown it fixes $8$, so it also fixes $2$. By a similar argument $F_2$ fixes $4$. I think you should probably be able to continue from here, showing that $F_2$ fixes $1$ and $9$ and $10$, and therefore $F_2 = I$. This is a contradiction, and therefore $F_1$ did not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can I write $y$ only in terms of $x$ in the following equation? How would you write $y$ only in terms of $x$ in this equation? $$x^2 + xy + y^2=100$$
This is a quadratic equation in $y$: $$y^2+xy+x^2-100=0$$ As such, the quadratic formula gives an expression for $y$ in $x$: $$y=\frac{-x\pm\sqrt{x^2-4(x^2-100)}}2=\frac{-x\pm\sqrt{400-3x^2}}2$$
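A numeric spot-check of both branches (my own sketch; real solutions require $400-3x^2\ge0$, i.e. $|x|\le\sqrt{400/3}$):

```python
import math

def y_branches(x):
    # the two roots of y^2 + xy + (x^2 - 100) = 0 from the quadratic formula
    disc = 400 - 3 * x * x
    return ((-x + math.sqrt(disc)) / 2, (-x - math.sqrt(disc)) / 2)

def residual(x, y):
    # should vanish on the curve x^2 + xy + y^2 = 100
    return x * x + x * y + y * y - 100
```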
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier series with all coefficients $\frac1n$ The function with Fourier series given by $$f(x)=\sum_{n=1}^\infty \frac{\cos{(nx)}+\sin{(nx)}}n$$ appears to be a curve with vertical asymptotes at $x=2\pi k$ where $k\in\mathbb{Z}$. Is there an elementary closed form for $f(x)$? Wolfram gives us $$f(x)=-\frac12(1+i)(\ln{(1-e^{-ix})}-i\ln{(1 - e^{ix})})$$ but is there a way to simplify the above expression into one which does not involve complex numbers as the function $f(x)$ is clearly real? Edit: I found that this question may have been asked in different contexts before. I have provided a proof of the related results as one of the answers below.
$$ f(x) = - \frac{\ln(2-2\cos(x))}{2} + \arctan\left(\frac{\sin(x)}{1-\cos(x)}\right)$$
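A numerical comparison of a long partial sum with this closed form (my own sketch; the series converges only like $O(1/N)$, so the tolerance is loose):

```python
import math

def closed_form(x):
    # valid for x not a multiple of 2*pi
    return -0.5 * math.log(2 - 2 * math.cos(x)) + math.atan(math.sin(x) / (1 - math.cos(x)))

def partial_sum(x, N):
    # N-term truncation of the Fourier series in the question
    return sum((math.cos(n * x) + math.sin(n * x)) / n for n in range(1, N + 1))
```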
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
differential equations - exponential growth and decay The population $P$ of bacteria in an experiment grows according to the equation $\frac{dP}{dt}=kP$, where $k$ is a constant and $t$ is measured in hours. If the population of bacteria doubles every $24$ hours, what is the value of $k$? I was given this problem and I'm not sure what to do with it. I know the formula for this kind of equation is $ce^{kx}$. But, how do you plug in the values given?
From $\dfrac{dP}{dt} = kP, \tag 1$ assuming $P \ne 0, \tag 2$ we deduce that $\dfrac{1}{P}\dfrac{dP}{dt} = k; \tag 3$ we integrate 'twixt $t_0$ and $t$, assuming $P$ takes the value $P(t_0)$ at $t = t_0$: $\ln P(t) - \ln P(t_0) = \displaystyle \int_{t_0}^t \dfrac{1}{P(s)}\dfrac{dP(s)}{ds} \; ds = \int_{t_0}^t k \; ds = k(t - t_0), \tag 4$ or $\ln \left (\dfrac{P(t)}{P(t_0)} \right ) = k(t - t_0); \tag 5$ we apply the function $\exp(\cdot)$ to this to obtain $\dfrac{P(t)}{P(t_0)} = e^{k(t - t_0)}, \tag 6$ whence $P(t) = P(t_0) e^{k(t - t_0)}. \tag 7$ Now given that $P(t)$ doubles every $24$ hours, starting at any $t_0$ we take $t = t_0 + 24, \tag 8$ and thus $2P(t_0) = P(t_0 + 24) = P(t_0) e^{24k}, \tag 9$ leading to $e^{24k} = 2; \tag{10}$ solving for $k$ we conclude $k = \dfrac{\ln 2}{24}. \tag{11}$
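A small numeric confirmation (mine, not the answer's): with $k=\frac{\ln 2}{24}$ the population doubles over any 24-hour window:

```python
import math

k = math.log(2) / 24  # per hour

def P(t, P0=1.0):
    return P0 * math.exp(k * t)

# Doubling over any 24-hour window, regardless of the starting time.
for t0 in [0.0, 7.5, 100.0]:
    assert abs(P(t0 + 24) / P(t0) - 2.0) < 1e-12
```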
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to get the value of $A + B ?$ I have this statement: If $\frac{x+6}{x^2-x-6} = \frac{A}{x-3} + \frac{B}{x+2}$, what is the value of $A+B$ ? My attempt was: $\frac{x+6}{(x-3)(x+2)} = \frac{A(x+2) + B(x-3)}{(x-3)(x+2)}$ $x+6=(x+2)A + B(x-3)$: But from here, I don't know how to get $A + B$, any hint is appreciated.
Comparing the coefficients of $x$ on both sides of your last equation immediately gives $A+B=1$. Alternatively, $$\frac{x+6}{x^2-x-6}=\frac{x+6}{(x-3)(x+2)}=\frac{x-3+9}{(x-3)(x+2)}=$$ $$=\frac{1}{x+2}+\frac{9}{5}\left(\frac{1}{x-3}-\frac{1}{x+2}\right)=\frac{\frac{9}{5}}{x-3}+\frac{-\frac{4}{5}}{x+2},$$ which gives $A+B=\frac{9}{5}-\frac{4}{5}=1.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3310929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
I'm not understanding combinations and counting The question: An urn has 10 black balls numbered from 1 to 10, and 10 white balls numbered from 1 to 10. In how many ways can we choose 5 balls from the urn? (There are more questions, which is why there are balls of different colors with numbers. This is just one of them.) I did answer ${20 \choose 5}$ which is the correct answer. But I don't know why, other than the word choose being in the question. Intuitively, I don't understand why $20 \times 19 \times 18 \times 17 \times 16$ isn't an answer for this question. I know it's incorrect and that they're very different answers, but I don't know why. My thinking is that the first ball you choose, you pick out of 20 possible choices, then there's 19, then 18, then 17 choices to pick from.
You have to choose $5$ balls from $20$ balls, so the order of the balls does not matter. Your count of $20×19×18×17×16$ is the number of ordered selections, and it counts each unordered combination of $5$ balls exactly $5!$ times; hence the number of combinations is $\frac{20×19×18×17×16}{5!}=\binom{20}{5}$. If the order of the drawn balls did matter, then yours would be the correct answer.
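The arithmetic is easy to confirm (my check, not part of the answer): dividing the ordered count by $5!$ recovers $\binom{20}{5}$:

```python
from math import comb, factorial

ordered = 20 * 19 * 18 * 17 * 16        # ordered draws of 5 distinct balls
unordered = ordered // factorial(5)     # each 5-ball subset was counted 5! times

assert unordered == comb(20, 5) == 15504
```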
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Number of ways to split $N$ up into $k$ baskets such that different arrangements of the $k$ baskets are considered equivalent? I've been considering the problem of integer partitions and while there have been some answers for related questions, I haven't come across a solution for my following problem. Suppose you have $N$ balls and wish to throw them into $k$ indistinguishable baskets. Find the number of ways to do this. Then $S_1+S_2+...+S_k= N$ where each $S_i$ can only take on integer values. So if $k=3$ and $N=5$, then something like $(1,1,3)$ will be equivalent to $(1,3,1)$ and $(3,1,1)$. I've thought about generating polynomials, and if I wanted the number of non-distinct ways to do this, I would take the coefficient of $x^5$ in the expansion of $(x^1+x^2+x^3)^3$, which also can be evaluated by the multinomial coefficient formula to give $6$. It makes sense as the only sets of values $(S_1,S_2,S_3)$ can take are $(1,1,3)$ and $(1,2,2)$, both of which can be permuted $3$ times. There was another solution to a related problem, and it involved the number of ways to split $N$ up into $N$ integers or less such that no two numbers are the same. For our problem, it would be the sum of the number of ways to split $5$ into $1$ number, split $5$ into $2$ numbers, split $5$ into $3$ numbers... such that $S_i \neq S_j, \forall i \neq j$. In this case, integer partitions of $5$ into $3$ numbers will not be considered, since both $(1,1,3)$ and $(1,2,2)$ contain repetitions. The $3$ ways that this can be done are $(5,0), (4,1), (3,2)$. But obviously this is not what I want as it doesn't count $(1,1,3)$ and $(1,2,2)$. Is there a formula to do this? A related question is here, but no explicit algorithm/formula is given. EDIT: @marcelgoh said that Stirling numbers of the second kind would work. I have a follow-up question: Is there a way to iterate through permutations of numbers making up $N$, but in a 'Stirling' sense?
For instance, if I wanted to express: $$\frac{20!}{(2*1+1)!(2*1+1)!(2*3+1)!} + \frac{20!}{(2*1+1)!(2*3+1)!(2*1+1)!} + \frac{20!}{(2*3+1)!(2*1+1)!(2*1+1)!} + \frac{20!}{(2*2+1)!(2*2+1)!(2*1+1)!} + \frac{20!}{(2*2+1)!(2*1+1)!(2*2+1)!} + \frac{20!}{(2*1+1)!(2*2+1)!(2*2+1)!}$$ I could use: $$\sum_{i+j+k=5, i,j,k\geq 1}\frac{20!}{(2i+1)!(2j+1)!(2k+1)!}$$ But what if I just wanted: $$\frac{20!}{(2*1+1)!(2*1+1)!(2*3+1)!} + \frac{20!}{(2*2+1)!(2*2+1)!(2*1+1)!}$$ Could I use something like: $$\sum_{i+j+k=5, 1\leq i\leq j\leq k}\frac{20!}{(2i+1)!(2j+1)!(2k+1)!}$$ Or is there some less messy notation for the same concept?
I believe that the Stirling numbers of the second kind $\big\{{n\atop k}\big\}$ are what you need. This is the number of ways to partition $n$ labelled elements into $k$ unlabelled non-empty subsets. EDIT: If we're trying to partition $n$ unlabelled elements into $k$ subsets, then the function we're actually looking to use is $p_k(n)$. According to Wikipedia, this function satisfies the recurrence relation $$p_k(n) = p_k(n-k) + p_{k-1}(n-1),$$ with initial conditions $p_0(0) = 1$ and $p_k(n) = 0$ if either of $n$ or $k$ is non-positive.
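A short sketch (my illustration, not from the answer) of the recurrence above: `p(k, n)` counts partitions of $n$ into exactly $k$ positive parts, and `p(3, 5) == 2` matches the question's example $(1,1,3)$ and $(1,2,2)$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(k, n):
    # p_k(n) = p_k(n - k) + p_{k-1}(n - 1), with p_0(0) = 1 and
    # p_k(n) = 0 whenever k or n is non-positive (apart from that base case).
    if k == 0 and n == 0:
        return 1
    if k <= 0 or n <= 0:
        return 0
    return p(k, n - k) + p(k - 1, n - 1)

assert p(3, 5) == 2                            # {3,1,1} and {2,2,1}
assert sum(p(k, 5) for k in range(6)) == 7     # all partitions of 5
```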
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are the $\Bbb S^2\times \Bbb R^2$ and $\Bbb R^2\times \Bbb S^2$ homeomorphic? Are $\Bbb S^2\times \Bbb R^2$ and $\Bbb R^2\times \Bbb S^2$ homeomorphic? I know that the answer is certainly yes, but what has confused me is the following: * *$\Bbb R^2\times \Bbb S^2$: Consider a plane and attach a $2$-sphere to each point of it, *$\Bbb S^2\times \Bbb R^2$: Consider a $2$-sphere and attach a plane $\Bbb R^2$ to each point of it. How can one justify geometrically that these two are exact copies of each other? Update: How to resolve this apparent paradox: in the first case we have one plane with many spheres, and in the second case one sphere with many planes.
As you said, the spaces are homeomorphic. You imagine a product $X \times Y$ in two different ways: * *A copy of $Y$ attached at each point of $X$. *A copy of $X$ attached at each point of $Y$. The copies $\{x\} \times Y$ are pairwise disjoint, and I guess you imagine them as "isolated bags hanging on string". However, they are not isolated, for each $y \in Y$ the collection of points $(x,y)$ with $x \in X$ forms again a string going through the bags. Thus you see that you do not have a string with isolated bags, but a web which is on a par with respect to vertical and horizontal threads. Edited: For any product $X \times Y$ you have two projections $p_X : X \times Y \to X, p_X(x,y) = x$, and$p_Y : X \times Y \to Y, p_Y(x,y) = y$. This gives you two directions to look at $X \times Y$: * *Look from $X$ at $X \times Y$. For each $x \in X$ you see the "fiber" $p_X^{-1}(x) = \{x\} \times Y$, in the case of $\mathbb R^2 \times S^2$ a sphere "attached" at each point of the plane. *Look from $Y$ at $X \times Y$. For each $y \in Y$ you see the "fiber" $p_Y^{-1}(y) = X \times \{y\}$, in the case of $\mathbb R^2 \times S^2$ a plane "attached" at each point of the sphere. There is no paradox. It is just a matter of perspective. Perhaps a simpler example will illustrate this. Consider the set $P = [0,1] \times \mathbb \{0,1\}$ which is a subset of the plane $\mathbb R^2$. Looking at $P$ from the left (i.e. in the direction of the $x$-axis) you see two intervals. each attached at the points $0,1$. Looking at $P$ from below (i.e. in the direction of the $y$-axis) you see a collection of two-points sets, each attached at a point of $[0,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Points A,B,C (fixed) and X (variable) such that |AX| + |BX| = |CX| Let $A,B,C$ be fixed points of the plane. Where are the points $X$ (variable) in the plane with $|AX| + |BX| = |CX|$? It seems $X$ needs to lie on the arc under $AB$ of the circumscribed circle of the triangle $ABC$. How can I prove that? Note: There is no trapezium $AXBC$.
From the equation for $X$ we can get $$4 |A-X|^2 |B-X|^2 = (|C-X|^2 - |A-X|^2 - |B-X|^2)^2$$ which gives you a quartic polynomial in the coordinates of $X$. If this doesn't factor (which in general it won't), the locus of $X$ will not be an arc of a circle, nor any conic section. Here's a picture of it in the case $A = (0,0)$, $B=(1,0)$, $C = (0,1)$. EDIT: In this particular case, the curve has genus $0$ and therefore has a rational parametrization: $$ \eqalign{X_1 &= {\frac {64\,{t}^{3}-1024\,{t}^{2}+4096\,t}{9\,{t}^{4}-96\,{t}^ {3}+512\,{t}^{2}-4096\,t+16384}}\cr X_2 &= {\frac {12\,{t}^{4}-160\,{t}^{3}+512\,{t}^{2}}{9\,{t}^{4}-96\, {t}^{3}+512\,{t}^{2}-4096\,t+16384}}\cr 0 \le &t \le 8 }$$ In most cases, it appears the curve has genus one and is an elliptic curve, with no rational parametrization.
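One can verify the stated parametrization numerically (a check I added): sampling $t\in[0,8]$, the point $X(t)$ satisfies $|AX|+|BX|=|CX|$ to machine precision:

```python
import math

def X(t):
    # The rational parametrization above, for A=(0,0), B=(1,0), C=(0,1).
    den = 9 * t**4 - 96 * t**3 + 512 * t**2 - 4096 * t + 16384
    x1 = (64 * t**3 - 1024 * t**2 + 4096 * t) / den
    x2 = (12 * t**4 - 160 * t**3 + 512 * t**2) / den
    return x1, x2

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
for i in range(81):
    t = 8 * i / 80
    P = X(t)
    assert abs(dist(A, P) + dist(B, P) - dist(C, P)) < 1e-9
```

For example $X(4)=(16/17,\,4/17)$, where $|AX|=4\sqrt{17}/17$, $|BX|=\sqrt{17}/17$ and $|CX|=5\sqrt{17}/17$.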
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Program to find intersection of subgroups of free groups As the title says, I am working on examples for a research project I'm doing, and I need a way to efficiently calculate the intersection of subgroups of a free group (say, of rank 2). Are there any computer programs to do this, or any papers explaining how such a program could be written?
The algorithm is easy and well known. Let $A, B$ be finitely generated subgroups of a free group $F$. Construct the Stallings cores $U,V$ of these subgroups. These are labeled graphs whose labels-generators of $F$ and which have basepoints $u,v$. Then $A$ (resp. $B$) consists of all labels of reduced loops of $U$ (resp. $V$) at $u$ (resp. $v$). Then consider the pull-back graph $U*V$ which has vertices $(x,y)$ where $x$ is a vertex of $U$, $y$ is a vertex of $V$. The edges have the form $(x,y)--(xs,ys)$ where $[x,xs]$ (resp $[y,ys]$) is an edge of $U$ (resp. $V$) starting at $x$ (resp. $y$) and labeled by $s$. Then $A\cap B$ is the set of labels of reduced loops of $U*V$ at $(u,v)$. That is $A\cap B$ is the fundamental group of $U*V$ with basepoint $(u,v)$.
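As a toy illustration of the pullback construction (my sketch, not part of the answer), take the simplest case $A=\langle a^2\rangle$, $B=\langle a^3\rangle$ in the free group $F(a)$: each Stallings core is a single labelled cycle, and the basepoint component of the pullback is a $6$-cycle, recovering $A\cap B=\langle a^6\rangle$:

```python
def cycle(n):
    # Stallings core of <a^n>: an n-cycle with every edge labelled 'a'.
    # Edges are stored as edges[(vertex, letter)] = target vertex.
    return {(i, 'a'): (i + 1) % n for i in range(n)}

def pullback(U, V):
    # Vertices are pairs (x, y); there is an s-edge (x, y) -> (xs, ys)
    # exactly when both factor graphs have an s-edge at x and y.
    return {((x, y), s): (U[(x, s)], V[(y, s)])
            for (x, s) in U for (y, t) in V if s == t}

U, V = cycle(2), cycle(3)
W = pullback(U, V)

# Follow the unique 'a'-path from the basepoint (0, 0) until it closes up.
state, steps = (0, 0), 0
while True:
    state = W[(state, 'a')]
    steps += 1
    if state == (0, 0):
        break

assert steps == 6   # the loop reads a^6, so A ∩ B = <a^6>
```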
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Calculating 2 indefinite integrals Calculate the following: A) $\int \sqrt{3x^4 +x^6 +9x^2} \, dx$ B) $\int \sqrt[3]{{\frac{1}{x^2 +1}}}\, dx$ A) I managed to write $\int x \sqrt{3x^2 +x^4 +9} \, dx$, but then I didn't know what to do because of the square root, even with integration by parts. B) I tried substituting $u=x^2 +1$ so $du=2x\,dx$, but then I don't have any $x$ for $du$.
In the integral $$\int x \sqrt{x^4+3x^2+9} \ dx$$ substitute $u=x^2$, $du=2xdx$ to get $$\frac{1}{2} \int \sqrt{u^2+3u+9} \ du = \frac{1}{2} \int \sqrt{ \left( u+\frac{3}{2} \right)^2 + \frac{27}{4} } \ du$$ and to continue, substitute $$u+\frac{3}{2}= \frac{3\sqrt 3}{2} \tan \theta$$ you will need to remember the integral of $\sec^3 \theta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On proving that the geometric realization of $\Delta^n = \Delta(-,n)$ is homeomorphic to $|\Delta^n|$. I'm trying to prove that the geometric realization of $\Delta^n = \Delta(-,n) : \Delta^{op} \to \mathsf{Set}$ coincides with the geometric realization of $\Delta^n$ as a simplicial complex (I will note $|\Delta^n|$ only for the latter, to avoid confusion). I'm working with the definition of geometric realization given by $$ |X| = \left(\coprod_{k \geq 0} X_k \times |\Delta^k|\right) \Big/ \sim \tag{1} $$ with each $X_k$ discrete, identifying $$(x,|d^i|(p)) \sim (d_i(x),p) \text{ and }(x,|s^i|(p)) \sim (s_i(x),p),$$with $d_i,s_i$ the face and degeneracy maps of the simplicial set $X$ and $d^i, s^i$ the coface and codegeracy maps of the standard simplices. My idea was to prove that $\{id_n\} \times |\Delta^n|$ is a fundamental domain for $\sim$, since it is both compact and homeomorphic to $|\Delta^n|$. Given a point $(f,q)$, we know that $f : k \to n$ can be written as $$ f = f_1 \circ \cdots \circ f_n $$ with $f_i = s^{j_i}$ or $f_i = d^{j_i}$ being some coface/codegeneracy maps. Since the face and degeneracy maps on $\Delta^n$ are given by the precomposition of coface and codegeneracy maps, we have that $$ \begin{align} (f,q) = (f_n^* \cdots f_1^* (id),q) = (id, |f_1 \cdots f_n|(q)) \in \{id\} \times |\Delta^n|. \end{align} $$ This shows that each point of the geometric realization has a representative in $\{id\} \times |\Delta^n|$. Now, it is intuitive to me that the relation by which we divide in $(1)$ does not identify points between simplices of $X$ of the same dimension. Is this the case? If true, how can this be shown?
Just observe that there is a map $|X|\to |\Delta^n|$: each element of $X_k=\Delta(k,n)$ determines a map $|\Delta^k|\to|\Delta^n|$, and so these maps together over all values of $k$ give a map $\coprod_kX_k\times|\Delta^k|\to|\Delta^n|$. This map respects the equivalence relation $\sim$, essentially by definition (if two elements of $X_k\times|\Delta^k|$ and $X_{k+1}\times|\Delta^{k+1}|$ are related by a face or degeneracy, the corresponding maps $|\Delta^k|\to |\Delta^n|$ and $|\Delta^{k+1}|\to|\Delta^n|$ are related by composing with a map between $|\Delta^k|$ and $|\Delta^{k+1}|$ and this gives exactly the relation required by $\sim$), and thus descends to a map $f:|X|\to |\Delta^n|$. If $i:|\Delta^n|\to |X|$ is the map given by the inclusion of $\{id_n\}\times|\Delta^n|$, then it is clear that $fi$ is the identity, and so in particular $i$ is injective. On the other hand, the work you have done proves exactly that $i$ is surjective. Thus $i$ is a bijection, and $f$ is its inverse, and so they are inverse homeomorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3311960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How is $\sin 90° = 1$ possible? How can two angles of a triangle be equal to $90°$? If two angles were $90°$, this would mean that the two sides would be parallel and the angle of the third side would be equal to 0. Thus, there would be only two vertices and this wouldn't be a triangle at all, ultimately making $\sin 90° = 1$ impossible.
Consider polar coordinates, $(r \cos\theta, r \sin \theta)$, on a unit circle so that $r=1$. Then the mapping in the first quadrant of the unit circle is just $(\cos \theta, \sin \theta)$. Now, consider a moving point $A$ travelling from $\theta=0°$ to $\theta=90°$ along the circumference of the circle, and let $OA$ be the hypotenuse of the right triangle inscribed in the circle. At $\theta=90°$, the hypotenuse and the perpendicular become the same segment along the $y$-axis (i.e., they coincide). So, $\sin(90°)=\frac{p}{h}=1$. As for your argument: the case here is two sides coinciding rather than being parallel, because by the Pythagorean theorem we have $h^2=p^2+b^2$, so if $p$ increases then $b$ must decrease, and $h=p$ iff $b=0$. Here $h$ is the radius of the unit circle, $p$ is the perpendicular (the height) dropped from the point on the circumference to the $x$-axis, and $b$ is the distance from the origin to the foot of that perpendicular.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Green's identity and gradient estimate After the proof of the Green's identity in the book "Han Q., Lin F. - Elliptic partial differential equations - AMS (1997)", they state at page 9: We may employ the local version of the Green's identity to get gradient estimates without using mean value property. Suppose $u \in C(\bar{B_1})$ harmonic in $B_1$. For any fixed radius $0<r<R<1$ choose a cut-off function $\varphi \in C_{0}^{\infty}(B_R)$ such that $\varphi =1$ in $B_r$ and $0 \leq \varphi \leq 1$. Apply Green's formula to $u$ and $\varphi \Gamma(a, \cdot)$ in $B_1 \setminus B_{\rho}(a)$ for $a \in B_r$ and $\rho$ small enough. We proceed as in the proof of theorem 1.17 ( which is the proof of Green's identity) and we obtain \begin{align} u(a)=- \int_{r < |x| < R} u(x) \Delta_x \big(\varphi(x) \Gamma(a,x)\big)dx \quad \quad (\star) \end{align} for any $a \in B_r(0)$ Hence one may prove $\sup_{B_{1/2}}{Du} \leq C \max_{B_1} {|u|}$ where $\Gamma(a, x)$ is the fundamental solution I can't understand how to derive that bound! I think I should take the derivative of $u(a)$ w.r.t $a_i$ and get \begin{align} | \partial_{a_i} u(a)| \leq \int_{r<|x|<R} |u(x)|\left |\Delta_x\big(\varphi(x) \partial_{a_i} \Gamma(a,x)\big)\right| dx \end{align} Now I should take outside the maximum of $|u|$ over the closure of the unitary ball, but I don't know how to treat the Laplacian term in the right way
You barely have to treat it. You have already shown $$|\partial_{a_i} u(a)| \le \int_{r < |x| < R} |u(x)| \left| \Delta_x(\varphi(x)\partial_{a_i} \Gamma(a,x))\right|dx,$$ so $$\sup_{a \in B_{1/2}} |\partial_{a_i} u(a)| \le \sup_{a \in B_{1/2}}\int_{r < |x| < R} |u(x)| \left| \Delta_x(\varphi(x)\partial_{a_i} \Gamma(a,x))\right|dx.$$ Let $r=\frac{3}{4}$ and $R = \frac{4}{5}$. $\varphi$ is fixed and so all you need is that $$\sup_{a \in B_{1/2}} \int_{3/4 < |x| < 4/5} |\Delta_x(\varphi(x)\partial a_i \Gamma(a,x))|dx < \infty$$ (then just call it $C$), but the integrand changes continuously in $a$ and $\overline{B_{1/2}}$ is compact, so you're good.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to evaluate the following integral$\int_a^{pa}\frac{ax}{\sqrt{(a-x)(x-pa)}}dx$? Can anyone help me to evaluate the definite integral $\int_a^{pa}\frac{ax}{\sqrt{(a-x)(x-pa)}}dx$?I encountered this integral while doing a problem of particle dynamics in Ganguly Saha(Applied Mathematics).Can this integral be evaluated without the substitution of $x=asin^2\theta+pacos^2\theta$.Please anyone suggest some other method i.e. some direct method to calculate this integral.
With the variable change $u=x/a-1$ and the shorthand $q=p-1$, $$I=\int_a^{pa}\frac{ax}{\sqrt{(a-x)(x-pa)}}dx=a^2\int_0^{q}\frac{u+1}{\sqrt{u(q-u)}}du=a^2(I_1+I_2)$$ where $I_1$ and $I_2$ are given below, integrated with the convenient substitution $u=q\sin^2\theta$. $$I_1=\int_0^{q}\sqrt{\frac{u}{q-u}}du=q\int_0^{\pi/2}2\sin^2\theta \,d\theta=\frac{\pi}{2}q$$ $$I_2=\int_0^{q}\frac{du}{\sqrt{u(q-u)}}=\int_0^{\pi/2}2\,d\theta=\pi$$ Thus, $$I=a^2\left(\frac{\pi}{2}q+\pi\right)=\frac{\pi}{2}(1+p)a^2$$
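A brute-force numerical check of the closed form $\frac{\pi}{2}(1+p)a^2$ (added by me; the mesh size and tolerance are arbitrary choices). The inverse-square-root endpoint singularities are integrable, so a midpoint rule converges, if slowly:

```python
import math

def I_numeric(a, p, N=400_000):
    # Midpoint rule on the original integral over (a, pa), assuming a > 0, p > 1.
    lo, hi = a, p * a
    h = (hi - lo) / N
    total = 0.0
    for i in range(N):
        x = lo + (i + 0.5) * h
        total += a * x / math.sqrt((a - x) * (x - p * a)) * h
    return total

def I_closed(a, p):
    return math.pi / 2 * (1 + p) * a**2

for a, p in [(1.0, 3.0), (2.0, 2.0)]:
    assert abs(I_numeric(a, p) - I_closed(a, p)) < 0.01 * I_closed(a, p)
```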
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Partial Fractions: Why does this shortcut method work? Suppose I want to resolve $1/{(n(n+1))}$ into a sum of partial fractions. I solve this by letting $1/{(n(n+1))} = {a/n} + {b/(n+1)}$ and then solving for $a$ and $b$, which in this case gives $a=1$ and $b=-1$. But I learnt about a shortcut method. It says suppose $1/{(n(n+1))} = {a/n} + {b/(n+1)}$, then find $a$ by finding the value which makes its denominator in the RHS equal to $0$ and computing the LHS with the $0$ term (or $a$'s denominator in RHS) removed so we get $a = {1/(0+1)} = 1$ [as $n=0$], and we get $b = {1/(-1)} = -1$ [as $n+1=0$]. Another example, if I am not clear, suppose $$\frac{1}{n(n+1)(n+2)} = \frac{a}{n} + \frac{b}{n+1} + \frac{c}{n+2};$$ then $$ \begin{eqnarray} a &=& \frac{1}{(0+1)(0+2)}=\frac{1}{2}, \\ b &=& \frac{1}{(-1)(-1+2)}=-1, \\ c &=& \frac{1}{(-2)(-2+1)}=\frac{1}{2}. \end{eqnarray} $$ Why does this shortcut method work?
Let's take your example. We have \begin{align}\frac{1}{n(n+1)(n+2)} = \frac{a}{n} + \frac{b}{n+1} + \frac{c}{n+2}&=\color{red}{\boxed{\frac an+\frac{b(n+2)+c(n+1)}{(n+1)(n+2)}\cdot\frac nn}}\quad(\text{group together}\,b,c)\\&=\color{blue}{\boxed{\frac b{n+1}+\frac{a(n+2)+cn}{n(n+2)}\cdot\frac{n+1}{n+1}}}\quad(\text{group together}\,a,c)\\&=\color{green}{\boxed{\frac c{n+2}+\frac{a(n+1)+bn}{n(n+1)}\cdot\frac{n+2}{n+2}}}\quad(\text{group together}\,a,b)\end{align} so we get $$\color{red}{\frac a{\color{black}{\boldsymbol{n}}}=\frac{1-n[b(n+2)+c(n+1)]}{n(n+1)(n+2)}\implies \color{red}a=\frac{1-\color{black}{\boldsymbol{n}}\boldsymbol{[b(n+2)+c(n+1)]}}{(n+1)(n+2)}}\\\phantom{2cm}\\\color{blue}{\frac b{\color{black}{\boldsymbol{n+1}}}=\frac{1-(n+1)[a(n+2)+cn]}{n(n+1)(n+2)}=\frac{1-\color{black}{\boldsymbol{(n+1)}}\boldsymbol{[a(n+2)+cn]}}{n(n+2)}}\\\phantom{2cm}\\\color{green}{\frac c{\color{black}{\boldsymbol{n+2}}}=\frac{1-(n+2)[a(n+1)+bn]}{n(n+1)(n+2)}=\frac{1-\color{black}{\boldsymbol{(n+2)}}\boldsymbol{[a(n+1)+bn]}}{n(n+1)}}$$ Notice that in each case, when you set $n,n+1,n+2=0$ respectively, the terms in bold disappear, so you get $$\color{red}{a=\frac{1-0}{(0+1)(0+2)}=\frac12}\\\color{blue}{b=\frac{1-0}{(-1)(-1+2)}=-1}\\\color{green}{c=\frac{1-0}{-2(-2+1)}=\frac12}.$$
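The shortcut (often called the Heaviside cover-up method) works for any product of distinct linear factors. A small sketch of it (my illustration, not from the answer): for $\frac{1}{\prod_i (n-r_i)}$, the coefficient over $(n-r)$ is $1/\prod_{s\ne r}(r-s)$:

```python
def coverup_coeffs(roots):
    # Coefficient over (n - r) in 1 / prod_i (n - r_i), for distinct roots.
    coeffs = {}
    for r in roots:
        prod = 1.0
        for s in roots:
            if s != r:
                prod *= (r - s)
        coeffs[r] = 1.0 / prod
    return coeffs

c = coverup_coeffs([0, -1, -2])   # factors n, n+1, n+2
assert abs(c[0] - 0.5) < 1e-12
assert abs(c[-1] + 1.0) < 1e-12
assert abs(c[-2] - 0.5) < 1e-12

# Sanity check: the partial fractions reassemble to the original at n = 5.
lhs = 1 / (5 * 6 * 7)
rhs = sum(c[r] / (5 - r) for r in c)
assert abs(lhs - rhs) < 1e-12
```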
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
Boyd & Vandenberghe, problem 4.55 — how to show that solution is Pareto optimal? In problem 4.55 of Boyd & Vandenberghe's Convex Optimization, the authors ask the following. Show that in a multicriterion optimization problem, a unique solution of the scalar optimization problem $$ \min. \max._{i =1,2\cdots q}F_i (x) $$ $$\text{s.t. } f_i(x)\leq 0 $$ $$ h_i(x)=0 $$ is Pareto optimal. I know that, in a multicriterion optimization problem a solution is pareto optimal point if we can not find a better point. I assume that $x^*$ is the solution of the above scalar optimization problem. Now I have to show that for every feasible $y\neq x^*$ we have $$[F_1(x^*),~F_2(x^*), \cdots F_q(x^*)]\preceq [F_1(y),~F_2(y), \cdots F_q(y)].$$ I think the only other information that I have is that one of the $F_i(x^*)$ is greater than all of the rest of $F_j(x^*)'s$ for $j\neq i$. How to solve this problem? Thanks in advance.
Assume $x^*$ is not Pareto optimal then wlog there exists $y \neq x^*$ such that $F_1(y) < F_1(x^*)$ and $F_i(y) \leq F_i(x^*)$ for all $i =2\cdots q$. Now, this implies $$ \max_{i =1,2\cdots q}F_i (y) \leq \max_{i =1,2\cdots q}F_i (x^*) = \min_{x} \max_{i =1,2\cdots q}F_i (x),$$ by definition of $x^*$. Therefore, $y$ is also a minimum to the original scalar optimization problem. By assumption, this minimum is unique, therefore $y = x^*$, a contradiction. Therefore, $x^*$ is Pareto optimal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given a knight on an infinite chess board that moves randomly, what's the expected number of distinct squares it reaches in 50 moves? I was asked this in an interview and wasn't sure how to frame the answer. Basically as in the question you have a knight on an infinite chess board and it chooses one of its valid 8 moves uniformly at each move. After 50 moves, the question was to give (as tight as possible) a lower and upper bound on the expected number of distinct squares it reached. I got as far as realizing that the knight must live in a 200x200 square, and that it can only reach half of the squares (since it must end at the same colour as it started). However this doesn't really address the randomness aspect of the question.
My experiments in Mathematica give $40.06$ as the average number of distinct cells (I've done several tries for $100'000$ trials). I'm counting the initial square though, because it makes the program more simple. I wouldn't know how to approach this problem theoretically, but it's simple enough to do tests. Knight's moves shift it $2$ cells in one direction and $1$ cell in the other direction. Which gives us $8$ options on an infinite board. Here's the code I used and a sample of the results: Tm = 100000; Ds = Table[1, {t, 1, Tm}]; Do[ Nm = 50; P = Table[{0, 0}, {n, 1, Nm}]; M = {{1, 2}, {-1, 2}, {1, -2}, {-1, -2}, {2, 1}, {2, -1}, {-2, 1}, {-2, -1}}; Do[R = RandomInteger[{1, 8}]; P[[n + 1]] = P[[n]] + M[[R]], {n, 1, Nm - 1}]; Ds[[t]] = CountDistinct[P], {t, 1, Tm}]; Histogram[Ds] N[Mean[Ds], 10] Note that the distribution is asymmetrical. Increasing the number of steps to $100$ I get around $77.36$. Which agrees well with the $4/5$ ratio. Made $1'000'000$ tests with $100$ steps and got $77.38$, so the numbers don't change much for larger samples. I wonder how we can get the $4/5$ estimate theoretically. In general, this falls under the topic of random walks and their return probabilities. Edit It would be better to set $N_m=51$ in my program, then it corresponds to $50$ moves and the resulting average $40.82$ fits well with the other answer.
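For readers without Mathematica, here is a Python port of the same experiment (my translation; the seed, trial count, and move list are my choices, not the answer's). With 50 moves and the starting square counted, the average should land near the 40.8 reported above:

```python
import random

MOVES = [(1, 2), (-1, 2), (1, -2), (-1, -2),
         (2, 1), (2, -1), (-2, 1), (-2, -1)]

def distinct_squares(n_moves, rng):
    pos = (0, 0)
    seen = {pos}                      # count the starting square too
    for _ in range(n_moves):
        dx, dy = rng.choice(MOVES)
        pos = (pos[0] + dx, pos[1] + dy)
        seen.add(pos)
    return len(seen)

rng = random.Random(0)
trials = 2000
avg = sum(distinct_squares(50, rng) for _ in range(trials)) / trials
assert 38.0 < avg < 43.0
```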
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
How can one prove this property of integrals?$\int_0^bf(x)(b-x)\,dx=\int_0^b\left(\int _0^xf(t)\,dt\right)\,dx$ $$\int_0^bf(x)(b-x)\,dx=\int_0^b\left(\int _0^xf(t)\,dt\right)\,dx$$ I can't understand how this property can be proven but it has held true for everything I have tried. How do you even approach this? I have tried substitution but that gets you no where.
\begin{align} & \int_0^b\left(\int _0^xf(t)\,dt\right)\,dx \\[10pt] = {} & \iint\limits_{(t,x)\,:\,0\,<\,t\,<\,x\,<\,b} f(t)\, d(t,x) \\[10pt] = {} & \int_0^b \left( \int_t^b f(t) \, dx \right) dt \\[10pt] = {} & \int_0^b f(t)(b-t) \, dt. \end{align}
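A concrete check of the identity (my addition): take $f(x)=x^2$ and $b=1$, where both sides equal $\frac{1}{12}$; Simpson's rule is exact for cubic integrands, so the numerics agree to machine precision:

```python
def simpson(g, a, b, n=200):
    # Composite Simpson's rule (n must be even); exact for cubic integrands.
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

f = lambda x: x**2
b = 1.0
lhs = simpson(lambda x: f(x) * (b - x), 0.0, b)
rhs = simpson(lambda x: simpson(f, 0.0, x), 0.0, b)
assert abs(lhs - 1 / 12) < 1e-12
assert abs(rhs - 1 / 12) < 1e-12
```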
{ "language": "en", "url": "https://math.stackexchange.com/questions/3312960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
proof of some statement about finite field extension I would like to have some explanation for the following statement Let $K$ be an algebraically closed field of characteristic $p>0$, and $K((t))$, the field of Laurent series with coefficients in $K$. The Galois group of the polynomial $X^{p^n}-X=t^{-1}$ is isomorphic to the additive group of $F_{p^n}$, i.e. to $(\mathbb{Z}/p\mathbb{Z})^n$. Another question: Are there some extensions with Galois group isomorphic to $(\mathbb{Z}/p^n\mathbb{Z})$ with $n>1$
Say that $\alpha$ is a root of your polynomial $X^{p^n}-X-t^{-1}=0.$ Then it is obvious that if $a \in \mathbb{F}_{p^n},$ that $\alpha+a$ is a root as well, since $(\alpha+a)^{p^n}= \alpha^{p^n}+a^{p^n} = \alpha^{p^n}+a.$ So the Galois group is as claimed. There are finite extensions with Galois groups isomorphic to $\mathbb{Z}/p^n\mathbb{Z}$ with $n>1.$ This can be done using the Witt polynomials, see for example: Cyclic Artin-Schreier-Witt extension of order $p^2$ .
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solutions for inequality $\frac{1}{x} + \frac{1}{1-x} \gt 0$ How to find all real solutions for $$\frac{1}{x} + \frac{1}{1-x} \gt 0$$ I came up with $\frac{1}{x(1-x)} \gt 0$ implies $x(1-x)>0$ and finally ended with $0<x<1$ but the answer provided was $0<x<1$ or $x>1$. I tried sample values for $x>1$ but they don't satisfy the inequality.
It should be $\frac{1}{x(1-x)} \gt 0$ thus, $x(1-x)>0$ and so, $0 < x < 1$ is the only solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What series should I use to compare? Direct comparison test Use direct comparison test to prove if the following series converge or not. A) $\sum_{n=0}^\infty \frac{1}{3^n -1}$ B) $\sum_{n=0}^\infty\frac{1}{\sqrt{n+2}}$ In A) I wrote $3^n -1<3^n$ so $\frac{1}{3^n -1}>\frac{1}{3^n}$, but that is useless because $\frac{1}{3^n}$ converges and it's smaller than $\frac{1}{3^n-1}$ so I can't conclude anything. And then in B) I don't know what series I should use to compare.
You have the right idea for A). Try to compare it with a geometric series: for $n \geq 1$ one can use (and prove by induction) the inequality $2^n \leq 3^n - 1$, which gives $\frac{1}{3^n-1} \leq \frac{1}{2^n}$. (Note that the $n=0$ term is undefined, so the series should start at $n=1$.) Let me give a hint for B): We have $$\frac{1}{\sqrt{n+2}} \geq \frac{1}{\sqrt{n+n}} = \frac{1}{\sqrt{2}}\frac{1}{\sqrt{n}}$$ for all $n \geq 2$. Does this help you?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Combinatorics: Partnerships Problem Overcounting My textbook presents the following story proof for partnership counting: Let's use a story proof to show that $$\dfrac{(2n)!}{2^n \cdot n !} = (2n - 1)(2n - 3) \dots 3 \cdot 1$$ Story proof: We will show that both sides count the number of ways to break $2n$ people into $n$ partnerships. Take $2n$ people, and give them ID numbers from $1$ to $2n$. We can form partnerships by lining up the people in some order and then saying the first two are a pair, the next two are a pair, etc. This overcounts by a factor of $n! \cdot 2^n$ since the order of pairs doesn't matter, nor does the order within each pair. Alternatively, count the number of possibilities by noting that there are $2n - 1$ choices for the partner of person 1, then $2n - 3$ choices for person 2 (or person 3, if person 2 was already paired to person 1), and so on. I'm struggling to understand how this is overcounting by a factor of $2^n \cdot n !$. I would appreciate it if people could please take the time to break down as to how this was figured out.
This overcounts by a factor of $n! \cdot 2^n$ since the order of pairs doesn't matter, nor does the order within each pair. We assume at the start that the pairs formed are ordered, both within the pair (A, B not the same as B, A) and between the pairs (this can be interpreted as the pairs standing side-by-side, in a row). Now there are certain transformations that we can make to this actual arrangement of people so that the same pairs still result, and the number of such transformations will be the overcounting factor. We can treat pairs as units, and permute all of them; the same pairs will still be there. Since all pairs are distinct, there are $n!$ ways to do this. Independently of this (justifying the multiplication), we can choose whether or not to swap the people in each pair, which is one of $2$ possibilities for each of $n$ pairs, for $2^n$ ways in all. Thus, multiplying, we see that each combination of pairs has been counted $n!2^n$ times, so we divide by that number to get the true amount.
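A brute-force cross-check (mine, not from the book): pair person 0 with each possible partner and recurse, then compare the count with both sides of the identity:

```python
from math import factorial

def count_pairings(people):
    # Pair the first person with each possible partner, then recurse.
    if not people:
        return 1
    rest = people[1:]
    total = 0
    for i in range(len(rest)):
        total += count_pairings(rest[:i] + rest[i + 1:])
    return total

for n in range(1, 5):
    formula = factorial(2 * n) // (2**n * factorial(n))
    odd_product = 1
    for j in range(2 * n - 1, 0, -2):
        odd_product *= j                 # (2n-1)(2n-3)...3*1
    assert count_pairings(list(range(2 * n))) == formula == odd_product
```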
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Definition of 'product' for ordered pairs In 'Mathematics Form and Function' ch.2, section 4, 'Integers' by Saunders Mac Lane (p.50 in the 96 edition) I came across the following definitions of sum and product for ordered pairs: (m, n) + (m', n') = (m + m', n + n') (m, n) (m', n') = (mm' + nn', mn' + m') My understanding is that each m or n are related to m' and n' in such a way that, for example, if m' is m + 1, then n' must be n + 1; the relationship must be consistent between both the original and subsequent numbers. Given that, if I plug in some real numbers such as m = 2 and n = 3 (and in this case the number with the prime symbol is equivalent to adding 1), I would get this: (2 + 3, 3 + 4) = (5, 7) # still obtain an ordered pair, so far so good ((2 * 3) + (3 * 4), (2 * 4) + 3) = (18, 11) # not an ordered pair! My understanding is the result of the product should be an ordered pair with the 2nd element greater than the 1st, but that's the not case because 18 > 11. Is this an error, typo, or am I fundamentally misunderstanding the maths here? Thanks for any advice you can give.
The word ordered in ordered pair $(a,b)$ does not mean $a<b$. It simply means that $a$ is the first element and $b$ is the second one. The difference between a set and an ordered pair is that $\{a,b\}=\{b,a\}$ but $(a,b)\ne (b,a)$.
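The distinction is easy to see in any language that has both tuples and sets, for instance in Python:

```python
# in an ordered pair only position matters, not magnitude
assert (2, 3) != (3, 2)
assert (18, 11) == (18, 11)   # perfectly valid ordered pair, even though 18 > 11
# in a set, order does not matter
assert {2, 3} == {3, 2}
```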
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the series $\sum_{n=1}^{\infty} \frac{e^{-n^2 x}}{n}$? Following Passare: How to compute $\sum 1/n^2$ by solving triangles, I tried the following $$ \int_0^{\infty}\frac{e^{-nx}}{n^2} dx = \frac{1}{n^3} $$ So we can write (with some help from Wolfram Alpha) $$ \sum_{n=1}^{\infty} \int_0^{\infty}\frac{e^{-nx}}{n^2} dx = \int_0^{\infty} \sum_{n=1}^{\infty} \frac{e^{-nx}}{n^2} dx = \int_0^{\infty} Li_2(e^{-x}) dx = \zeta(3) $$ where $Li_2$ is the https://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm . But it is also true that $$ \int_0^{\infty}\frac{e^{-n^2x}}{n} dx = \frac{1}{n^3} $$ so that one can write $$ \sum_{n=1}^{\infty} \int_0^{\infty}\frac{e^{-n^2x}}{n} dx = \int_0^{\infty} \sum_{n=1}^{\infty} \frac{e^{-n^2x}}{n} dx = \int_0^{\infty} ?? dx = \zeta(3) $$ The problem here is the evaluation of the series $$ \sum_{n=1}^{\infty} \frac{e^{-n^2x}}{n} = ?? $$ which I (and also Wolfram Alpha) don't know how to evaluate. Is this series known in the literature and is there any way to evaluate it or express it somehow in terms of some special functions?
Let's replace: $$x=y^2/4$$ Then we have: $$e^{-n^2 y^2/4}= \frac{1}{\sqrt{\pi}} \int_{-\infty}^\infty e^{-t^2+i n y t} dt$$ Thus, provided the integral exists, we should have: $$g(y)=\sum_{n=1}^\infty \frac{e^{-n^2 y^2/4}}{n}=-\frac{1}{\sqrt{\pi}} \int_{-\infty}^\infty e^{-t^2} \log \left(1-e^{i y t} \right) dt$$ Extracting the real part and using symmetry, we obtain: $$g(y)=-\frac{1}{\sqrt{\pi}} \int_0^\infty e^{-t^2} \log \left(1-\cos (y t) \right) dt- \frac{\log 2}{2}$$ Now getting back to $x$: $$f(x)=\sum_{n=1}^\infty \frac{e^{-n^2 x}}{n}$$ $$f(x)=- \frac{\log 2}{2}-\frac{1}{\sqrt{\pi}} \int_0^\infty e^{-t^2} \log \left(1-\cos (2 t \sqrt{x}) \right) dt$$ or: $$f(x)=-\frac{1}{\sqrt{\pi}} \int_0^\infty e^{-t^2} \log \left(2-2\cos (2 t \sqrt{x}) \right) dt$$ This works numerically, even though the integrand has an infinite number of singularities. Let's substitute: $$t= \sqrt{x} u$$ $$f(x)=-\frac{\sqrt{x}}{\sqrt{\pi}} \int_0^\infty e^{-x u^2} \log \left(2-2\cos (2 u x) \right) du$$ Now there might be a chance to integrate w.r.t. $x$ as well: $$I=-\frac{1}{\sqrt{\pi}} \int_0^\infty \int_0^\infty \sqrt{x} e^{-x u^2} \log \left(2-2\cos (2 u x) \right) du dx$$ But I somehow doubt the integral converges, though I will check numerically later.
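The key step above is the Gaussian integral representation $e^{-n^2 y^2/4} = \frac{1}{\sqrt{\pi}} \int_{-\infty}^\infty e^{-t^2}\cos(nyt)\, dt$ (the imaginary part vanishes by symmetry). A quick numerical check with a truncated trapezoid rule; the truncation limit and step count below are arbitrary choices of mine:

```python
import math

def gauss_cos(a, steps=40_000, lim=8.0):
    # (1/sqrt(pi)) * integral over [-lim, lim] of exp(-t^2) cos(a t) dt
    h = 2 * lim / steps
    total = 0.0
    for i in range(steps + 1):
        t = -lim + i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * math.exp(-t * t) * math.cos(a * t)
    return total * h / math.sqrt(math.pi)

# should reproduce exp(-a^2/4) for every a
for a in (0.5, 1.0, 2.0, 3.0):
    assert abs(gauss_cos(a) - math.exp(-a * a / 4)) < 1e-9
```

With $a = ny$ this is exactly the identity the interchange of sum and integral rests on.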
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Rank of a Differential This a mostly a sanity check sort of thing. I am working with the map given by $$F:M_{2\times 2}(\mathbb{R})\rightarrow S_{2\times 2}(\mathbb{R}):A\mapsto A^t J A$$ where J is the matrix $$J = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}.$$ I am trying to show that the set $S = \{A : F(A) = J\}$ is a smooth submanifold of $M$. I know that I need to use the Regular Level Set Theorem to do this, but I am having trouble showing $DF$ will have maximal rank for that the matrices in $S$ (which I have already shown contains only invertible matrices). Using the canonical isomorphism on $M_{2\times 2}(\mathbb{R})\cong \mathbb{R}^4$ described by $\begin{pmatrix}a_1 & a_2\\ a_3 & a_4\end{pmatrix}\mapsto (a_1,a_2,a_3,a_4)$ and the similar one on $S_{2\times 2}(\mathbb{R})\cong \mathbb{R}^3$, I computed $$F(a_1,a_2,a_3,a_4) = (a_1^2 - a_3^2, a_1a_2 - a_3a_4, a_2^2 - a_4^2)$$ which gives us that $$DF = \begin{pmatrix} 2a_1 & 0 & -2a_3 & 0\\ a_1 & a_2 & -a_3 & -a_4\\ 0 & 2a_2 & 0 & -2a_4\end{pmatrix}.$$ My problem is that I think that this matrix can never be full rank since the center row can always be eliminated. Have I done something wrong here? Can anyone verify for me under what conditions $DF$ would be surjective?
Ok, I just wanted to add a post showing another way of answering this question that is more in line with the structure posted in the question itself. This is NOT an elegant solution like the one provided above by Ted Shifrin (Thank you again!), but I think it might be useful for some people seeing this later to have both methods available to them. Anyway, the matrix for $DF$ that is provided above is wrong. The correct matrix for the differential is given by $$DF = \begin{pmatrix} 2a_1 & 0 & -2a_3 & 0\\ a_2 & a_1 & -a_4 & -a_3\\ 0 & 2a_2 & 0 & -2a_4 \end{pmatrix},$$ and to use the Regular Level Set Theorem, we need the map $DF$ to be surjective, which is equivalent to the matrix representation of $DF$ having maximal rank (3 in this case). There are two ways for the above matrix to drop rank: either we eliminate the middle row, or we eliminate the right two columns. Using row operations we can see that an attempt to reduce the middle row would result in the matrix $$\begin{pmatrix} 2a_1 & 0 & -2a_3 & 0\\ 2(a_1 + a_2) & 2(a_1 + a_2) & -2(a_3 + a_4) & -2(a_3 + a_4)\\ 0 & 2a_2 & 0 & -2a_4 \end{pmatrix}$$ which will drop rank precisely when $a_1 = -a_2$ and $a_3 = -a_4$ $(*)$. Similarly, an attempt to reduce the right two columns will result in the matrix $$\begin{pmatrix} 2a_1 & 0 & 2(a_1 - a_3) & 0\\ a_2 & a_1 & a_2-a_4 & a_1-a_3\\ 0 & 2a_2 & 0 & 2(a_2 - a_4) \end{pmatrix}$$ which will drop rank when $a_1 = a_3$ and $a_2 = a_4$ $(**)$. If a matrix $A$ satisfies $(*)$ or $(**)$, we will have that $\det(A) = 0$, so, as long as all of the matrices $A$ that satisfy the condition $A^tJA = J$ do not have determinant 0, we will have that $J$ is a regular value and the set $S = \{A : A^tJA = J\} = F^{-1}(J)$ is a regular level set. This does turn out to be the case, as can be verified using properties of the determinant, so $S$ is a smooth submanifold by the Regular Level Set Theorem as desired.
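As a numerical sanity check of the corrected matrix: boost matrices $A=\begin{pmatrix}\cosh s & \sinh s\\ \sinh s & \cosh s\end{pmatrix}$ satisfy $A^tJA = J$, so $DF$ should have rank $3$ at them, while any matrix with $a_1 = a_3$, $a_2 = a_4$ should drop rank by $(**)$. A rough sketch (the rank routine is a hand-rolled helper of mine, not from any text):

```python
import math

def matrix_rank(M, tol=1e-9):
    # Gaussian elimination with partial pivoting on a copy of M
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def dF(a1, a2, a3, a4):
    return [[2 * a1, 0, -2 * a3, 0],
            [a2, a1, -a4, -a3],
            [0, 2 * a2, 0, -2 * a4]]

s = 0.7
boost = (math.cosh(s), math.sinh(s), math.sinh(s), math.cosh(s))
assert matrix_rank(dF(*boost)) == 3       # a point of S: full rank
assert matrix_rank(dF(1, 1, 1, 1)) < 3    # a1 = a3, a2 = a4: rank drops
```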
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the maximum possible perimeter of a right triangle The ratio between the perimeter of a right triangle and its area is 2:3. The sides of the triangle are integers. Find the maximum possible perimeter of the triangle. If the sides of the triangle are $A$, $B$ and $C$ (the hypotenuse), I have deduced that: $$A+B+\sqrt{A^2+B^2}=\frac{AB}{3}$$ I am stuck here. Any hints? From the above I know that $A^2+B^2$ should be a square and that $AB$ should be divisible by 3.
There are naturals $m$ and $n$ such that $m>n$, $a=m^2-n^2$, $b=2mn$ and $c=m^2+n^2$. See here: https://en.wikipedia.org/wiki/Pythagorean_triple Thus, $$\frac{m^2-n^2+2mn+m^2+n^2}{mn(m^2-n^2)}=\frac{2}{3}.$$ Can you finish it now? I got $56$ as the answer.
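A brute-force search over integer right triangles with perimeter : area $= 2:3$ backs this up (the search bound $200$ is a heuristic cutoff of mine, comfortably larger than any side that can occur):

```python
import math

perimeters = set()
for a in range(1, 200):
    for b in range(a, 200):
        c2 = a * a + b * b
        c = math.isqrt(c2)
        if c * c != c2:
            continue
        # perimeter : area = 2 : 3  <=>  3(a + b + c) = 2 * (ab/2) = ab
        if 3 * (a + b + c) == a * b:
            perimeters.add(a + b + c)

assert perimeters == {36, 40, 56}
assert max(perimeters) == 56
```

The three triangles found are $(9,12,15)$, $(8,15,17)$ and $(7,24,25)$; the maximum perimeter is indeed $56$.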
{ "language": "en", "url": "https://math.stackexchange.com/questions/3313890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence integral given weak convergence of measures and functions/random variables Let $X\subset\mathbb{R}^d$ be compact. Given sequences of real-vauled random variables $f_n\to f$ and positive radon measures $\mu_n\to \mu$ both converging weakly for $n\to\infty$. Under which further conditions can we deduce that $$\lim_{n\to\infty}\int_X f_nd\mu_n =\int_X f d\mu ?$$
Claim: the conclusion holds for every sequence $(\mu_n)$ converging weakly to $\mu$ iff $f_n \to f$ uniformly. Since $\mu$ is Radon and $X$ is compact, $\mu$ is a finite measure. Since $\mu_n(X) \to \mu(X)$ it follows that $\sup_n \mu_n(X)<\infty$. So, if $f_n \to f$ uniformly then $\int f_n d\mu_n -\int fd\mu_n \to 0$, from which the conclusion follows easily. Now suppose the conclusion holds for every sequence $(\mu_n)$ converging weakly to $\mu$. Let $x_n \to x$ and $\mu_n=\delta_{x_n}, \mu =\delta_x$. Then $\mu_n \to \mu$ weakly, so $f_n(x_n)=\int f_n d\mu_n \to \int f d\mu=f(x)$. Since $X$ is compact, the statement $f_n(x_n) \to f(x)$ whenever $x_n \to x$ is equivalent to uniform convergence of $f_n$ to $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inverse of a structured matrix of sines. Suppose I have a matrix $P$ defined by $$ P =\begin{pmatrix} \sin(\frac{\pi}{n+1}) & \sin(\frac{2\pi}{n+1}) & \cdots & \sin(\frac{n\pi}{n+1}) \\ \sin(\frac{2\pi}{n+1}) & \sin(\frac{4\pi}{n+1}) & \cdots & \\ \vdots & & \ddots & \\ \sin(\frac{n\pi}{n+1}) & \cdots & & \sin(\frac{n^2\pi}{n+1}) \end{pmatrix} $$ and suppose I wish to find its inverse. I claim that $P^2 = \frac{n+1}{2} I_n$ and hence $P^{-1} = \frac{2}{n+1} P$ and indeed computing this in Matlab for a variety of $n$ it appears to be true, but I've been struggling to show this rigorously. I suspect I'm just very rusty with my trig manipulations but any help would be appreciated. So far, I have that $$ (P^2)_{kl} = \sum_{j=1}^n \sin(\frac{kj\pi}{n+1})\sin(\frac{lj\pi}{n+1}) $$ which looks somewhat similar to the stuff you get in Fourier analysis where $\sin$ and $\cos$ form orthogonal systems, but I've been struggling to adapt it to this sum situation. From here I've been trying to use that $$ (P^2)_{kl} = \frac{1}{2}\sum_{j=1}^n \cos\left(\frac{(k-l)j}{n+1} \pi\right) - \cos\left(\frac{(k+l)j}{n+1} \pi\right) $$ but I really haven't made much progress. Even the diagonal case where $k = l$ seems to not work out as immediately as I hoped, since I'm a bit lost on what to do with the second term. I suspect I am missing something obvious...
I managed to solve it. It was a bit of a headache but here is my proof. For $k, l \in \mathbb{N}$ and $n \in \mathbb{N}$ consider the sum \begin{equation} A_{kl} = \sum_{j=1}^n \sin \left(\frac{k \pi}{n+1}j\right) \sin \left(\frac{l \pi}{n+1}j\right). \end{equation} Claim: \begin{equation} A_{kl} = \frac{n+1}{2}\delta_{kl}. \end{equation} We start by noting that \begin{equation} A_{kl} = \frac{1}{2} \sum_{j=1}^n \cos \left(\frac{(k-l)\pi}{n+1} j\right) - \cos \left(\frac{(k+l)\pi}{n+1} j\right) \end{equation} and letting $\theta_\pm = \frac{(k\pm l)\pi}{n+1}$ we have \begin{equation} A_{kl} = \frac{1}{2} \sum_{j=1}^n \cos( j\theta_- ) - \cos(j\theta_+ ). \end{equation} We now recall the following trigonometric identity due to Lagrange: \begin{equation} \sum_{j=1}^n \cos(j \theta) = \frac{1}{2} \left[\frac{\sin((n+\frac{1}{2})\theta)}{\sin(\frac{1}{2}\theta)}-1 \right] \end{equation} which holds when $\theta \neq 0$. Note as well that \begin{align*} \sin((n+\frac{1}{2})\theta) &= \sin((n+1)\theta - \frac{1}{2}\theta) \\ &= \sin((n+1)\theta)\cos(\frac{1}{2}\theta) - \cos((n+1)\theta)\sin(\frac{1}{2}\theta). \end{align*} and hence the Lagrange identity reduces to \begin{equation} \sum_{j=1}^n \cos(j \theta) =\frac{1}{2} \left[ \sin((n+1)\theta)\cot(\frac{1}{2}\theta) - \cos((n+1)\theta) - 1\right]. \end{equation} We start by considering the diagonal case, when $k = l$. In this situation $\theta_- = 0$ and $\theta_+ = 2\pi k$. Then \begin{align*} A_{kk} &= \frac{1}{2} \sum_{j=1}^n (1 - \cos(j\theta_+)) \\ &= \frac{n}{2} - \frac{1}{4} \left[\frac{\sin((n+\frac{1}{2})\theta_+)}{\sin(\frac{1}{2}\theta_+)}-1 \right] \\ &= \frac{n}{2}-\frac{1}{4}\left[\sin(2k\pi)\cot(\frac{k\pi}{n+1}) - \cos(2 k \pi) - 1\right] \\ &= \frac{n}{2} - \frac{1}{4}\left[ -2 \right] \\ &= \frac{n+1}{2}, \end{align*} since $\sin(2k\pi) = 0$ and $\cos(2k\pi) = 1$ for all $k$. If we now consider $k \neq l$, then the same method follows but we have to expand both terms. 
\begin{align*} A_{kl} &= \frac{1}{4} \left[\left[\frac{\sin((n+\frac{1}{2})\theta_-)}{\sin(\frac{1}{2}\theta_-)}-1 \right] - \left[\frac{\sin((n+\frac{1}{2})\theta_+)}{\sin(\frac{1}{2}\theta_+)}-1 \right] \right] \\ &= \frac{1}{4}\left[\sin((k-l)\pi)\cot(\frac{1}{2}\theta_-) - \cos((k-l)\pi) - \sin((k+l)\pi)\cot(\frac{1}{2}\theta_+) + \cos((k+l)\pi)\right] \\ &= \frac{1}{4}\left[\cos((k+l)\pi) - \cos((k-l)\pi) \right]\\ &= \frac{1}{4} \left[\cos{k\pi}\cos{l\pi} - \sin{k\pi}\sin{l\pi} - \cos{k\pi}\cos{l\pi} - \sin{k\pi}\sin{l\pi}\right] \\ &= -\frac{1}{2}\sin{k\pi}\sin{l\pi} \\ &=0 \end{align*} since $\sin(m \pi) = 0$ for all $m \in \mathbb{Z}$.
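The orthogonality relation $A_{kl} = \frac{n+1}{2}\delta_{kl}$, and with it $P^2 = \frac{n+1}{2}I_n$, can be confirmed numerically for a few sizes with a plain-Python matrix product (no libraries, helper names are mine):

```python
import math

def P(n):
    return [[math.sin(math.pi * i * j / (n + 1)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

for n in (3, 5, 8):
    M = matmul(P(n), P(n))
    for i in range(n):
        for j in range(n):
            expected = (n + 1) / 2 if i == j else 0.0
            assert abs(M[i][j] - expected) < 1e-9
```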
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Need help with concept of expectation! I have been reading blogs on Expectation. I am quite familiar with what it is, but I still don't understand what we infer from it, or what it tells us about the experiment. For example: the expected number of coin flips for two consecutive heads is 6. So what does this "6" say... is it the most probable number of coin flips that would get me two consecutive heads? But then the most probable value is different from the expected value... Can someone please explain it? PS: please recommend a good book/blog that covers expectation in depth.
Intuitively, the expectation is the average of a random variable over an infinite number of drawings; e.g. with a fair coin, a proportion of exactly $\dfrac12$ heads. It is the "real" average. The Law of Large Numbers tells us that the average computed over an increasing number of drawings does converge to the expectation.
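The coin example from the question illustrates this nicely: simulating many runs of "flip until two consecutive heads" and averaging the run lengths gives a number close to the expectation 6, just as the Law of Large Numbers predicts. A small simulation (seed and trial count are arbitrary choices of mine):

```python
import random

def flips_until_two_heads(rng):
    count, prev_heads = 0, False
    while True:
        count += 1
        heads = rng.random() < 0.5
        if prev_heads and heads:
            return count
        prev_heads = heads

rng = random.Random(0)
trials = 100_000
average = sum(flips_until_two_heads(rng) for _ in range(trials)) / trials
assert abs(average - 6) < 0.1   # empirical average is close to the expectation
```

Note that 6 is not the most likely single outcome (runs of length 2 or 3 are individually more probable); it is the long-run average.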
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to solve this word problem using graph theory? Suppose that there is a global network of one hundred airports, and that between each pair of airports there is a direct connection. In connection with budget cuts, the governments of various countries want to eliminate as many connections as possible. It must, however, still be possible to travel from any airport to any other, if necessary with transfers. The first question is easy. It asks: How many connections can you eliminate if you do not put any limit on the number of transfers? (multiple choice) We're dealing with a complete graph, and so $\frac{n(n-1)}{2}$ gives the number of connections when all airports are directly connected $(4950)$. A connected network needs only $99$ connections (a spanning tree), thus the number we can eliminate is: 4950 - 99 = 4851. So there it is obvious we're using a graph-theoretic concept to solve it. The next problem I don't know how to solve: How many connections do you need if you want to be able to travel between any two airports with at most one transfer? multiple choice answers: 99, 2424, 50, 99!, 49, 100!-99!, 4851, 2425, 4850, 2475 This must also involve some graph-theoretic concept(s), but since I haven't had the course yet I don't know which. I would really like to know though, then I know what I should study. (any recommendations on books I should read?)
Hint: Consider one really big airport and 99 small ones.
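The hint can be checked directly: connect one hub airport to the other 99. That uses 99 connections, and any two small airports are joined through the hub with a single transfer. A sketch:

```python
n = 100
hub = 0
adjacency = {i: set() for i in range(n)}
for i in range(1, n):               # star graph: hub connected to everyone
    adjacency[hub].add(i)
    adjacency[i].add(hub)

edges = sum(len(neigh) for neigh in adjacency.values()) // 2
assert edges == 99

def reachable_with_one_transfer(u, v):
    # direct flight, or a common neighbour (one transfer)
    return v in adjacency[u] or any(w in adjacency[v] for w in adjacency[u])

assert all(reachable_with_one_transfer(u, v)
           for u in range(n) for v in range(n) if u != v)
```

So 99 connections suffice, matching one of the multiple-choice options.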
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I perform 5 "independent" card draws from a deck w/ replacement. All drawn cards are queens, what is the probability of queen of spades drawn twice? So, drawing a queen of spades on the first draw has probability $\frac{1}{52}$. As there is replacement involved, the next card, assuming it is again the queen of spades, is also drawn with probability $\frac{1}{52}$. As I need exactly two queens of spades, the rest of the draws must be queens but not of the spade suit. So the probability of drawing a queen that is not a spade is $\frac{3}{52}$. Hence the answer is $\frac{1}{52}\cdot\frac{1}{52}\cdot\frac{3}{52}\cdot\frac{3}{52}\cdot\frac{3}{52}$. I am absolutely new to the idea of probability, or even if I have learnt it before I have completely forgotten everything about it. The answer I have calculated seems too absurd to be true, so kindly help me out with this problem. Thanks.
Pretending that the deck has only the four queens is great advice. Let's assume that you need exactly two queen of spades. Choose which two of the five draws you get your queen of spades. How many ways? How many ways can you get three non-queen of spades in the other three slots? Divide by the total number of ways to draw the five cards. Spoiler: There are $_5C_2 = 10$ ways to choose the particular draws for the two queen of spades. There are $3^3 = 27$ ways to draw the other three non-queen of spades, and $4^5 = 1024$ ways to draw five queens. So the probability is $10 \times 27 / 1024 = 270/1024.$
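Since the answer conditions on all five cards being queens, only the suit sequence matters, and the sample space of $4^5$ equally likely suit sequences can be enumerated outright:

```python
from itertools import product

# all 4^5 suit sequences; count those with exactly two spades
favorable = sum(1 for seq in product("SHDC", repeat=5) if seq.count("S") == 2)

assert 4 ** 5 == 1024
assert favorable == 270          # C(5,2) * 3^3 = 10 * 27
# probability of exactly two queen-of-spades draws, given five queens
probability = favorable / 4 ** 5
assert probability == 270 / 1024
```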
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do we want lots of $0$’s in a matrix? I am working thru Axler’s “Linear Algebra Done Right.” I am okay with the math, but I’m losing sight of the forest. Like all books on linear algebra, there’s a lot of time and energy spent on finding, proving existence of, and interpreting matrices with “lots of zeros” - that is, matrices with as many zeros as possible given some particular vector space and transformation. But I cannot see why a simple matrix generally, or one with lots of 0’s in particular, is very important. Furthermore, discussions of matrices with lots of zeros closely correspond to discussions of the eigenvalues and eigenvectors (or generalized eigenvectors) of the transformation. I see why eigenvectors and values are important for understanding a transformation, but we certainly don’t need a simple matrix to calculate the eigenvalues and vectors. So, why are we spending so much time and energy finding matrices of the transformations with lots of zeros, and especially how such matrices relate to eigenvalues? Given my lack of understanding as to why linear algebra procedes along this course, my question is necessarily vague. Consequently, I am hoping only for a discussion of the issues, more so than specific mathematical derivations. (Followup in response to comments): And even if a sparse matrix allows easier computations, don’t the computations needed to find that sparse matrix generally negate any benefit?
There is a whole theory about SPARSE MATRICES (matrices in which most of the elements are zero), which are used in computer science and numerical analysis. You can find many papers on this subject if you are interested in knowing more about it.
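One concrete payoff of having many zeros: a sparse matrix can be stored and multiplied by touching only its nonzero entries. A toy dictionary-of-keys sketch (illustrative only; production libraries use more compact layouts):

```python
def sparse_matvec(entries, x, nrows):
    # entries: {(row, col): value} holding only the nonzero elements
    y = [0.0] * nrows
    for (i, j), value in entries.items():
        y[i] += value * x[j]
    return y

# a 3x3 matrix with just two nonzeros stored instead of nine entries
A = {(0, 0): 2.0, (2, 1): -1.0}
assert sparse_matvec(A, [1.0, 2.0, 3.0], 3) == [2.0, 0.0, -2.0]
```

The work and the storage scale with the number of nonzeros, not with $n^2$, which is why so much effort goes into change-of-basis computations that create zeros.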
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In the expansion of $(1+x+x^3+x^4)^{10}$, find the coefficient of $x^4$ In the expansion of $(1+x+x^3+x^4)^{10}$, find the coefficient of $x^4$. What's the strategy to approach such problems? Writing out the expansion seems tedious here.
Referring to Jack Crawford's comment above, there are three possible ways to get $x^4$: $$\underbrace{1\times1\times\cdots\times 1}_{9}\times x^4\\ \underbrace{1\times1\times\cdots\times 1}_{8}\times x\times x^3\\ \underbrace{1\times1\times\cdots\times 1}_{6}\times x\times x\times x \times x$$ The first is the combination: ${10\choose 1}=10$. The second is the permutation: $P(10,2)=\frac{10!}{8!}=90$. The third is again combination: ${10\choose 4}=210$. Hence: $10+90+210=310$.
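Multiplying the polynomial out by machine confirms the count $10+90+210 = 310$:

```python
factor = [1, 1, 0, 1, 1]          # coefficients of 1 + x + x^3 + x^4
poly = [1]
for _ in range(10):               # raise to the 10th power by repeated multiplication
    result = [0] * (len(poly) + len(factor) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(factor):
            result[i + j] += a * b
    poly = result

assert poly[4] == 310             # coefficient of x^4
```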
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that $ I_{R} (A) $ is the "biggest" subring in $R$ in which $A$ is an ideal. Let $A$ be a subring of a ring $R$. Let $ I_{R} (A) = \{ x \in R \colon xa, ax \in A, \forall a \in A \} $. Prove that $ I_{R} (A) $ is the maximal (biggest) subring of $R$ in which $A$ is an ideal. I easily proved that $I_{R} (A) $ is a subring of $R$, and that $A$ is an ideal of $I_{R} (A) $. I am having trouble with proving that $ I_{R} (A) $ is the maximal subring. I tried to prove it directly, i.e. to check whether every subring of $R$ in which $A$ is an ideal is a subset of $I_{R} (A)$, but since I am still a beginner in this area I got stuck. Any hint helps.
Let $A\subseteq S\subseteq R$ with $S$ a ring and suppose $A$ is an ideal of $S$. If $x\in S$, then $xa$, $ax\in A$ for all $a\in A$. Therefore $x\in I_R(A)$. So $S\subseteq I_R(A)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3314970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exercise in differential geometry using Gauss-Bonnet For a positive real number $r$ let $M_{r}$ be the regular surface $M_{r}=\{(x,y,z) \mid x^2+y^2=z<r^2, x>0,y>0\}$. Let $K$ denote the Gaussian curvature of $M_{r}$. Determine $\int_{M_{r}}KdA$ and $\lim_{r \rightarrow \infty}\int_{M_{r}}KdA$. Solution: Using Gauss-Bonnet we know $\int_{M_{r}}KdA=\frac{\pi}{2}$, since the surface is the positive quadrant of a paraboloid. But how does one solve the limit?
As a complement to the other answer here is a solution that uses Gauss-Bonnet: Let $S_{r}=\{(x,y,z) \mid x^2+y^2=z\leq r^2\}$. By symmetry $\int_{M_{r}}K\;dA=\frac 14\int_{S_{r}}K\;dA$. Since $S_{r}$ is a compact two-dimensional Riemannian manifold by Gauss-Bonnet $$\int_{S_{r}} K\;dA+\int_{\partial S_{r}}k_g\;ds=2\pi\chi(S_{r})$$ As $S_{r}$ is homeomorphic to a disc, $\chi(S_{r})=1$. The boundary $\partial S_{r}$ can be parametrized by the curve $\gamma(t)= (r\cos t,r\sin t,r^2)$. The unit tangent vector is $T=(-\sin t,\cos t,0)$ and the inward-pointing unit normal to the boundary $\partial S_{r}$ on the surface $S_r$ at $\gamma(t)$ is $N=\frac {-1}{\sqrt{1+4r^2}}(\cos t,\sin t,2r)$. Hence $$\int_{\partial S_{r}}k_g\;ds=\int_{0}^{2\pi}\langle T',N\rangle \,dt=\int_{0}^{2\pi}\frac 1{\sqrt{1+4r^2}}\,dt=\frac {2\pi}{\sqrt{1+4r^2}}$$ which implies $$\int_{S_{r}} K\;dA=2\pi\left(1-\frac{1}{\sqrt{1+4r^2}}\right)\;,\;\text{so}\;\int_{M_{r}}K\;dA=\frac{\pi}{2}\left(1-\frac{1}{\sqrt{1+4r^2}}\right)$$
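The closed form can be cross-checked by direct quadrature: for the paraboloid $z = x^2+y^2$ the Gaussian curvature is $K = 4/(1+4\rho^2)^2$ and the area element is $\sqrt{1+4\rho^2}\,\rho\,d\rho\,d\theta$ with $\rho^2 = x^2+y^2$, so over the quarter surface $\int_{M_{r}}K\,dA = \frac{\pi}{2}\int_0^r 4\rho(1+4\rho^2)^{-3/2}\,d\rho$. A midpoint-rule check against the Gauss-Bonnet answer (step count is an arbitrary choice of mine):

```python
import math

def curvature_integral(r, steps=200_000):
    # midpoint rule for (pi/2) * integral_0^r 4*rho*(1+4 rho^2)^(-3/2) d rho
    h = r / steps
    total = 0.0
    for i in range(steps):
        rho = (i + 0.5) * h
        total += 4 * rho * (1 + 4 * rho * rho) ** -1.5 * h
    return (math.pi / 2) * total

for r in (0.5, 1.0, 3.0):
    exact = (math.pi / 2) * (1 - 1 / math.sqrt(1 + 4 * r * r))
    assert abs(curvature_integral(r) - exact) < 1e-8
```

As $r\to\infty$ the formula tends to $\frac{\pi}{2}$, as claimed.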
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the average rate of change of $g(t)=t^2+3t+1$ on the interval [0,a] I am working on a textbook question "Find the average rate of change of $g(t)=t^2+3t+1$ on the interval $[0,a]$". The solution provided, along with the steps in between is: Avg. rate of change: = $\frac{g(a)-g(0)}{a-0}$ = $\frac{(a^2+3a+1)-(0^2+3(0)+1)}{a-0}$ = $\frac{a^2+3a+1-1}{a}$ # Here - how did they get rid of -0 in denominator? = $\frac{a(a+3)}{a}$ = $a+3$ Where I'm stuck is between the second and third step. How does one go from denominator of $a-0$ to just $a$?
For any number $a, a-0=a$. Subtracting $0$ makes no change.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is this limit for the sum of remainders correct? I found the following problem: $$\lim_{n\to\infty}\left(\frac{1}{n^2}\sum_{m=1}^{n}n\space\text{mod}\space m\right)$$ and decided to give it a go. I got an answer of $1-\frac{\pi^2}{12}$, but I am not sure if this is correct. It was posted with no source/context, and I wasn't able to find the solution online. I did find that the inner sum is sometimes referred to as the "sum of remainders" function, however. Does anyone know where this problem comes from, and if my answer is correct? Edit: Here's my working out: I first noticed that, for large $n$: $$n\space\text{mod}\space \left(n-k\right)\equiv k, \space\space\space n-k>\frac{n}{2}$$ $$n\space\text{mod}\space \left(\left[\frac{n}{2}\right]-k\right)\equiv 2k, \space\space\space \left[\frac{n}{2}\right]-k>\frac{n}{3}$$ $$n\space\text{mod}\space \left(\left[\frac{n}{3}\right]-k\right)\equiv 3k, \space\space\space \left[\frac{n}{3}\right]-k>\frac{n}{4}$$ $$\vdots$$ I visualised these as right triangles with base $\frac{n}{m(m+1)}$ and height $\frac{n}{m+1}$, which gave a new expression for the sum for large $n$ as the sum of areas of these triangles: $$\frac{1}{2}\sum_{m=1}^{a}\frac{n^2}{m(m+1)^2}, \space\space\space a=O(\sqrt{n})$$ The new limit is then: $$\lim_{n\to\infty}\left(\frac{1}{2}\sum_{m=1}^{n}\frac{1}{m(m+1)^2}\right)$$ Partial fraction decomposition into a telescoping sum gave me: $$\frac{1}{2}\sum_{m=1}^{\infty}\frac{1}{m(m+1)^2}=\frac{1}{2}\left(2-\sum_{n=1}^{\infty}\frac{1}{n^2}\right)=1-\frac{\pi^2}{12}$$
Using $$n\bmod m = n-m\left\lfloor \frac{n}{m}\right\rfloor$$ you get: $$\sum_{m=1}^{n} n\bmod m = n^2-\sum_{m=1}^{n} m\left\lfloor \frac{n}{m}\right\rfloor$$ or: $$\frac{1}{n^2}\sum_{m=1}^{n} n\bmod m = 1-\frac{1}{n}\sum_{m=1}^{n} \frac{m}{n}\left\lfloor \frac{n}{m}\right\rfloor$$ But $\frac{1}{n}\sum_{m=1}^{n} \frac{m}{n}\left\lfloor \frac{n}{m}\right\rfloor$ is a Riemann sum for $$\int_{0}^{1}x\lfloor 1/x\rfloor \,dx.$$ You can re-write this integral as: $$\sum_{n=1}^{\infty}n\int_{1/(n+1)}^{1/n}x\,dx =\frac{1}{2}\sum_{n=1}^{\infty}n\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right)\tag{1}$$ More generally, we have $\sum_{n=1}^{\infty} n(f(n)-f(n+1))=\sum_{m=1}^{\infty} f(m)$ when $nf(n+1)\to 0.$ This means that $(1)$ is $\frac{1}{2}\zeta(2)=\frac{\pi^2}{12}$, so the limit is $1-\frac{\pi^2}{12}$, confirming your answer.
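The value $1-\frac{\pi^2}{12}\approx 0.1775$ is also easy to check numerically; convergence is slow (the error term is of order $\frac{\log n}{n}$), but $n=10^5$ already gets close:

```python
import math

def average_remainder(n):
    # (1/n^2) * sum of n mod m for m = 1..n
    return sum(n % m for m in range(1, n + 1)) / n ** 2

limit = 1 - math.pi ** 2 / 12
assert abs(average_remainder(100_000) - limit) < 1e-2
```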
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Domain of $7^{\log_7(x^2-4x+5)}$ If $$7^{\log_7(x^2-4x+5)}=x-1$$ then $x$ may have values... My attempt: $$x^2-4x+5=x-1$$ So, $$x^2-5x+6=0$$ So, $$x=2,3$$ To check the domain of log, $$x^2-4x+5>0$$ i.e., $$(x-2+i)(x-2-i)>0$$ That gives me, $x<2-i$ and $x>2+i$. Is this a valid way of writing domain here? If No, how to write the domain of $7^{\log_7(x^2-4x+5)}$? Also, if I put the value of $x$ as $2$ or $3$ in the given equation, it satisfies, but if I compare it with the inequalities $x<2-i$ or $x>2+i$, then I am not able to get a satisfactory answer.
Note that $$x^2-4x+5 = (x-2)^2 +1 >0 $$ for all $x$, so there is no problem with the logarithm and we have $$7^{\log_7(x^2-4x+5)}=x^2-4x+5$$ Therefore $$ 7^{\log_7(x^2-4x+5)}=x-1 \iff x^2-5x+6=0$$ Thus the solutions are $x=2$ and $x=3$.
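A quick sanity check of both the positivity of the quadratic and the two roots, nothing more:

```python
import math

def left_side(x):
    return 7 ** math.log(x * x - 4 * x + 5, 7)

# both claimed solutions satisfy the original equation
for x in (2.0, 3.0):
    assert abs(left_side(x) - (x - 1)) < 1e-9

# (x-2)^2 + 1 >= 1 > 0: equivalently, the discriminant of x^2 - 4x + 5 is negative
assert 4 ** 2 - 4 * 5 < 0
```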
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Modular arithmetic $(2n+1)x \equiv -7 \pmod 9$ Find a solution $(2n+1)x \equiv -7 \pmod 9$ I’m sure this is trivial but I still have doubts about it. I know the equation has solution for certain $n \in \mathbb {Z}$. Actually I have tried a few and got a similar results (with Diophantine equations ). I wonder if there’s general solution for the equation without changing the n for an integer. Thanks in advance.
Since $2n + 1$ 'cycles through' the modulo $9$ residues, the problem is reduced to solving $$\tag 1 x'x \equiv 2 \pmod 9$$ This is equivalent to $x'x = 9k +2$ and we need only look for solutions $$ 0 \le x' \lt 9 \text{ and } 0 \le x \lt 9$$ We represent both $x'$ and $x$ in $\text{base-}3$ format, $$\tag 2 x' = a' + b'3 \text{ and } x = a + b3 \quad \text{with } a',b',a,b \in \{0,1,2\}$$ Multiplying, $$ x'x = a'a + (a'b+ab')3 + bb'3^2$$ Since $a'a + (a'b+ab')3 \le 28 \lt 29 = 2 + 3 \times 9$, we segment the work into 3 parts. Part 1: $a'a + (a'b+ab')3 = 2$ $\quad$ Ans: [$x' = 1$ and $x = 2$] OR [$x = 1$ and $x' = 2$] Part 2: $a'a + (a'b+ab')3 = 11$ $\quad$ Ans: [$x' = 4$ and $x = 5$] OR [$x = 4$ and $x' = 5$] Part 3: $a'a + (a'b+ab')3 = 20$ $\quad$ Ans: [$x' = 7$ and $x = 8$] OR [$x = 7$ and $x' = 8$] We only work out the details for Part 3: Since $3 \nmid 20$, $\,3 \nmid 19$ and $3 \nmid 16$, if we have any solutions at all we must have $\quad a'a = 2$ $\quad (a'b+ab') = 6$ If we set $a' = 2$ and $a = 1$ we get $2b + b' = 6$. So $b = 2$ and $b' =2$. So $x' = 2 + 2 \times 3 = 8$ and $x = 1 + 2 \times 3 = 7$. Up to an interchange, there can be no other solutions.
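Since the modulus is tiny, the whole analysis can be confirmed by exhaustive search: writing $w$ for the residue of $2n+1$, the solutions of $wx \equiv 2 \pmod 9$ (note $-7 \equiv 2$) are exactly the six pairs found in the three parts above, up to interchange:

```python
solutions = {(w, x) for w in range(9) for x in range(9) if (w * x) % 9 == 2}
expected = {(1, 2), (2, 1), (4, 5), (5, 4), (7, 8), (8, 7)}
assert solutions == expected

# 2n + 1 really does hit every residue mod 9 as n varies
assert {(2 * n + 1) % 9 for n in range(9)} == set(range(9))
```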
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 7 }
Find every $n$: $n^2 + 340 = m^2$ Let $n$, $m \in N$. The problem asks to find every natural number $ n $ such that: $ n^2 + 340 = m^2 $ I tried to solve the equation like this: $ m^2 - n^2 = 340 $ $ (m + n)(m - n) = 2^2 \cdot 5 \cdot 17 $ I listed all possible pairs of divisors of 340: $(1, 340), (2, 170), (4, 85), (5, 68), (10, 34), (20, 17)$ I set up six linear systems; only two gave me integers. $$ \left\{ \begin{array}{c} m+n=170 \\ m-n=2 \\ \end{array} \right. $$ $m = 86, n = 84$ $$ \left\{ \begin{array}{c} m+n=34 \\ m-n=10 \\ \end{array} \right. $$ $m = 22, n = 12$ I posted this problem because I don't have the solution. Did I make any mistake? Could the problem be solved in a quicker way? Thanks.
I think the way you did it is the best way. A worse way would be that if we let $m = n + k$ then $n^2 + 2nk + k^2 = m^2$ so $2nk + k^2 = 340$. Clearly $k$ is even, so if $k=2k'$ then $nk' + k'^2 = 85$, i.e. $k'(n+k') = 85$. Since $k' < n+k'$ we need $k' \le \sqrt{85}$ and $k' \mid 85 = 5\cdot 17$, in other words $k'=1$ or $k'=5$, which give $n=84, 12$ and $k=2, 10$, so $(m,n) = (86,84)$ or $(22, 12)$. Hmm... I guess that wasn't that much worse. In essence it was basically the same thing. Still I prefer your way, which would have been my first choice.
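A brute-force scan backs up that these are the only solutions (the bound $200$ is generous: the factor pair $(2, 170)$ already forces $m \le 86$):

```python
solutions = [(m, n) for n in range(1, 200) for m in range(n + 1, 200)
             if m * m - n * n == 340]
assert solutions == [(22, 12), (86, 84)]
```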
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does there exist an area-preserving map from the hyperbolic plane to the Euclidean plane? Fairly simple question: does there exist an area-preserving map from the hyperbolic plane to the Euclidean plane? If not, does there exist an area-preserving map from an arbitrarily large subset of the hyperbolic plane, to an arbitrarily large subset of the Euclidean plane? If so, what does the map look like? It would basically be similar to the "Mollweide projection."
There does in fact exist an area-preserving map, as demonstrated in this video at 11:20: the Lambert azimuthal equal-area projection. The idea is that you take polar coordinates of the hyperbolic plane and map them to polar coordinates of the euclidean plane via a map $(r, \theta) \mapsto (f(r), \theta)$ where $f$ is chosen such that area is preserved. Let's derive $f$: Let $h(r)$ be the area of a hyperbolic ball of radius $r$, and $e(r)$ be the area of a euclidean ball of radius $r$. Assuming that the hyperbolic plane has curvature $-1$, it is true that $h(r) = 2\pi(\cosh(r)-1)$ and $e(r) = \pi r^2$. We want to ensure $h(r) = e(f(r))$. Substitution and rearrangement yields $f(r) = \sqrt{2\cosh(r)-2}$. Another area-preserving map, which is analogous to the sinusoidal projection: Pick any geodesic $g$ in the hyperbolic plane, and mark a special point $O$ on it. For any point $P$ on the hyperbolic plane, project $P$ perpendicularly onto $g$ to obtain a point $Q$. Let $x$ denote the signed distance $OQ$, let $y$ denote the signed distance $QP$. Then the map that takes $P$ to the point $(x \cosh y, y)$ is equal-area. The similarity is obtained as such: If you replace the hyperbolic plane with a sphere, and let $g$ be the equator in particular, and replace $\cosh$ by $\cos$, you get the sinusoidal projection. Finally, let's do an analogue of the Lambert cylindrical equal-area projection of the sphere, by taking the previous setup and mapping $P$ to the point $(x, \sinh y)$ instead. Again, this map is equal-area. The analogue: In the spherical case, the map maps $P$ to $(x, \sin y)$.
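For the azimuthal map, the choice of $f$ makes disks about the center match by construction; the real content is that thin annuli match too, since $2\pi(\cosh r_2 - \cosh r_1) = \pi\bigl(f(r_2)^2 - f(r_1)^2\bigr)$ exactly. A quick numerical confirmation:

```python
import math

def f(r):
    # radial profile of the equal-area azimuthal projection
    return math.sqrt(2 * math.cosh(r) - 2)

def hyperbolic_annulus_area(r1, r2):
    # area between hyperbolic circles of radii r1 < r2 (curvature -1)
    return 2 * math.pi * (math.cosh(r2) - math.cosh(r1))

def euclidean_annulus_area(s1, s2):
    return math.pi * (s2 * s2 - s1 * s1)

for r1, r2 in ((0.0, 0.7), (0.3, 1.1), (2.0, 2.5)):
    assert abs(hyperbolic_annulus_area(r1, r2)
               - euclidean_annulus_area(f(r1), f(r2))) < 1e-9
```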
{ "language": "en", "url": "https://math.stackexchange.com/questions/3315718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How to find the value of $k$ and $p$ I'm so confused. How can I solve the problems given below? 1. Find $k$: $2k+1,3k+4,7k+6$ in geometric progression. 2. Find $k$: $k-3,k+2,k+3$ in geometric progression. 3. Find $p$: $p+7,3p+9,p+3$ in arithmetic progression. Thanks for the solution; any answer is highly appreciated.
Hint: for three consecutive terms $a,b,c$ in a geometric progression, we must have $ac=b^2$, and if they are instead in an arithmetic progression we must have $a+c=2b$. Thus you can formulate a quadratic equation for the geometric problems and a linear one for the arithmetic problems.
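To illustrate the hint (this worked solution is my own addition): problem 1 reduces to the quadratic $(2k+1)(7k+6)=(3k+4)^2$, problem 2 to a linear equation after the $k^2$ terms cancel, and problem 3 is linear. A small Python check:

```python
import math

def is_geometric(a, b, c):
    return abs(a * c - b * b) < 1e-9

def is_arithmetic(a, b, c):
    return abs((a + c) - 2 * b) < 1e-9

# Problem 1: (2k+1)(7k+6) = (3k+4)^2 reduces to 5k^2 - 5k - 10 = 0,
# i.e. k^2 - k - 2 = 0, with roots k = -1 and k = 2.
disc = (-1) ** 2 - 4 * 1 * (-2)
k_roots = sorted([(1 - math.sqrt(disc)) / 2, (1 + math.sqrt(disc)) / 2])

# Problem 2: (k-3)(k+3) = (k+2)^2 becomes linear: 4k = -13.
k2 = -13 / 4

# Problem 3: (p+7) + (p+3) = 2(3p+9) is linear: 2p + 10 = 6p + 18, so p = -2.
p = -2

checks = (all(is_geometric(2 * k + 1, 3 * k + 4, 7 * k + 6) for k in k_roots)
          and is_geometric(k2 - 3, k2 + 2, k2 + 3)
          and is_arithmetic(p + 7, 3 * p + 9, p + 3))
```

For $k=2$ the geometric terms are $5, 10, 20$; for $p=-2$ the arithmetic terms are $5, 3, 1$.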
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can anyone help with a geometry (area with an unknown length) question? I would really appreciate it. Note: the problem I'm struggling with is how to calculate the area of APBQ (the last question). Figure 1 on the right shows a right-angled triangle ABC where AB = 1 cm, AC = 2 cm, and angle BAC = 90°. Triangle PAB is an isosceles triangle where AP = AB and sides PA and BC are parallel. Assume point P is located opposite to point C with respect to line AB. Answer the following questions. 〔Question 1 〕 Consider the case in Figure 1 where the magnitude of angle APB is a°. Find the magnitude of angle ACB in terms of a. 〔Question 2 〕 Figure 2 on the right shows the case in Figure 1 where a perpendicular line to side BC is drawn from vertex A. Let Q be the intersection of side BC and the perpendicular line. Answer (1) and (2). (1) Prove triangle ABQ is similar to triangle CAQ. (2) Calculate the area of quadrilateral APBQ.
Following on from your comment: $\angle ABC = \tan^{-1} (2)$ (since $\tan\angle ABC = AC/AB = 2$), and since $PA$ and $BC$ are parallel, $\angle PAB = \tan^{-1} (2)$ as well (alternate angles). Now if you split $\Delta PAB$ in half where $M$ is the midpoint of $PB$, you will have $\sin \angle PAM = \frac{PM}{PA} = \frac{PM}{1}$. This gives me a value of $PM = \sqrt{\frac{2}{5+\sqrt5}}$ and $PB= \sqrt{2 - \frac{2}{\sqrt5}}$.
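A numeric check of these values (my own addition; the coordinate placement $A=(0,0)$, $B=(1,0)$, $C=(0,2)$ with $P$ below the $x$-axis is a convenient choice, not given in the problem):

```python
import math

A = (0.0, 0.0)
B = (1.0, 0.0)          # AB = 1
C = (0.0, 2.0)          # AC = 2, right angle at A

# P satisfies AP = 1, AP parallel to BC, on the opposite side of AB from C
d = math.hypot(C[0] - B[0], C[1] - B[1])      # |BC| = sqrt(5)
P = ((B[0] - C[0]) / d, (B[1] - C[1]) / d)    # (1, -2)/sqrt(5), below the x-axis

angle_PAB = math.atan2(-P[1], P[0])           # should equal arctan(2)
PB = math.hypot(P[0] - B[0], P[1] - B[1])     # and PM = PB / 2
```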
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Parallel system functioning problem I am currently solving the following problem about conditional probability: "A parallel system functions whenever at least one of its components works. Consider a parallel system of n components, and suppose that each component works independently with probability $\frac{1}{2}$. Find the conditional probability that component 1 works given that the system is functioning." I think that I do have the answer to this problem; however, since the textbook does not contain an answer to this one, I am sharing it with the community to poke holes in my logic, if any.
Let's call event "whole parallel system works" as $W$ and event "first component works" as $W_1$. Our task is to find $P(W_1|W)$. Let's use conditional probability definition to expand it: $$P(W_1|W) = \frac {P(W_1 \cap W)}{P(W)} = \frac {P(W | W_1)\cdot P(W_1)}{P(W)}$$ Now, $P(W_1) = \frac{1}{2}$. $P(W) = 1 - P(\bar W) = 1 - (\frac{1}{2})^n = 1 - \frac{1}{2^n}$. And $P(W | W_1) = 1$, since the whole system is active if first component is active. As the result, we have: $$P(W_1|W) = \frac{\frac{1}{2}}{1 - \frac {1}{2^n}} = \frac{\frac{1}{2}}{ \frac {2^n-1}{2^n}} = \frac{2^{n-1}}{2^n-1}$$
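The closed form can be confirmed by exhaustively enumerating the $2^n$ equally likely component states (a verification sketch I added, not part of the original answer):

```python
from itertools import product
from fractions import Fraction

def conditional_prob(n):
    # Enumerate all 2^n equally likely states and condition on
    # "at least one component works".
    working = 0   # outcomes where the system functions
    first_on = 0  # of those, outcomes where component 1 works
    for state in product((0, 1), repeat=n):
        if any(state):
            working += 1
            if state[0]:
                first_on += 1
    return Fraction(first_on, working)

# Should match 2^(n-1) / (2^n - 1): 1, 2/3, 4/7, 8/15, ...
results = {n: conditional_prob(n) for n in (1, 2, 3, 4)}
```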
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the ReLU function not differentiable at 0? I'm kind of rusty in calculus. Why is the ReLU function $f$ not differentiable at $x = 0$? $$ f(x) = \begin{cases} 0 & \text{if $x \leq 0$} \\ x & \text{if $x > 0$}. \end{cases} $$
If you look at $x > 0$, or the righthand derivative, $$\frac{df}{dx} = \frac{d}{dx} x = 1$$ for all $x$. If you look at $x \le 0$, or the lefthand derivative, $$\frac{df}{dx} = \frac{d}{dx} 0 = 0$$ for all $x$. Since $x = 0$ is the "break" point, the lefthand and righthand derivatives are not the same, and thus, the derivative is not defined at $x = 0$.
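A quick numerical illustration of the mismatched one-sided difference quotients (my addition):

```python
def relu(x):
    return x if x > 0 else 0.0

h = 1e-6
# righthand difference quotient at 0: the x > 0 branch has slope 1
right = (relu(0 + h) - relu(0)) / h
# lefthand difference quotient at 0: the x <= 0 branch is constant 0
left = (relu(0) - relu(0 - h)) / h
```

Since the two one-sided limits ($1$ and $0$) disagree, the two-sided derivative at $0$ does not exist.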
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Analytic Continuation of Complex Function I am trying to solve the following problem in Brown and Churchill's complex variables textbook. Show that the function $f_2 (z) = 1/z^2$ ($z \neq 0$) is the analytic continuation of the function \begin{align*} f_1 (z) = \sum\limits_{n=0}^{\infty} (n+1)(z + 1)^n \ \ \ (|z+1| < 1) \end{align*} into the domain consisting of all points in the $z$ plane except $z = 0$. As a first note, I am having difficulty mapping the definition of analytic continuation to this problem. The definition in the textbook is that if we have two domains, say $D_1$ and $D_2$, where some function $f_1$ is analytic on $D_1$, some function $f_2$ is analytic on $D_2$, and $f_1 (z) = f_2 (z)$ on $D_1 \cap D_2$, where this intersection is nonempty, then $f_2$ is the analytic continuation of $f_1$ into $D_2$. Assuming that I have not misstated that (please tell me if I have), we have: \begin{align*} D_1 = \{z \in \mathbb{C} : |z + 1| < 1\}, \ \ \ D_2 = \{z \in \mathbb{C} : z \neq 0\}. \end{align*} So we have \begin{align*} D_1 \cap D_2 = \{z \in \mathbb{C} : |z + 1| < 1 \text{ and } z \neq 0\}. \end{align*} From here, I am stuck. I know I need to prove that $\frac{1}{z^2} = \sum\limits_{n=0}^{\infty} (n+1)(z + 1)^n$ for any $z \in D_1 \cap D_2$. I don't know if I should try to demonstrate that the moduli are equal or expand $\frac{1}{z^2}$ in a power series and hope that these results will match, subject to the given constraint. Any help would be greatly appreciated. EDIT: I do not believe this question is a duplicate. I looked through the link below, and it does not address this problem, nor does it seem to deal with concepts in complex analysis.
We need to show that $1/z^2$ is the analytic continuation of $S$, so amongst other things we need to show they are equivalent on $D_1\cap D_2$. Splitting the sum, \begin{eqnarray} S&=&\sum_{n=0}^\infty(z+1)^n+\sum_{n=0}^\infty n(z+1)^n\\ &=&\frac{1}{1-(z+1)}+(1+z)\frac{d}{dz}\sum_{n=0}^\infty (1+z)^n\\ &=&-\frac{1}{z}-(1+z)\frac{d}{dz}\frac{1}{z}\\ &=&-\frac{1}{z}+\frac{1+z}{z^2}\\ &=&\frac{1}{z^2}. \end{eqnarray} Since $S$ and $1/z^2$ are both analytic functions in the domains $D_1$ and $D_2$ respectively, and the intersection $D_1\cap D_2\neq\emptyset$, and furthermore $S=1/z^2$ on $D_1\cap D_2$, as shown above, then $1/z^2$ is the analytic continuation of $S$ to $\mathbb{C}\setminus\{0\}$, and vice-versa. Furthermore, this analytic continuation from $S$ to $1/z^2$ is unique.
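One can also confirm the identity $S = 1/z^2$ numerically at a few points of $D_1\cap D_2$ (a sanity check I added, not part of the original answer):

```python
def partial_sum(z, N=200):
    # S_N(z) = sum of (n+1)(z+1)^n for n = 0, ..., N-1
    w = z + 1
    return sum((n + 1) * w ** n for n in range(N))

# Points in the overlap D1 ∩ D2, i.e. |z+1| < 1 and z != 0
pts = [-0.5, -1 + 0.3j, -0.7 - 0.2j]
max_err = max(abs(partial_sum(z) - 1 / z ** 2) for z in pts)
```

Since $|z+1| \le 0.5$ at these points, $200$ terms are far more than enough for the truncation error to be negligible.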
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why is $\sum_{k = 0}^{\infty}(-x^{2})^{k}$ not uniformly convergent on $(-1,1)$ but it is on $[-r,r] \subset (-1,1)$? Clarification Note: I'm aware there are similar postings about the same idea, but I'm attempting to develop understanding of the concept through my explanation of what is happening. Why is $\sum_{k = 0}^{\infty}(-x^{2})^{k}$ not uniformly convergent on $(-1,1)$ but it is on $[-r,r] \subset (-1,1)$? I'm trying to understand precisely why we cannot use the M-Test in such situations. From the notes I took in class we demonstrated this by showing that the series is not uniformly Cauchy. Proof Let $S_{n} = \sum_{k = 0}^{n}(-x^{2})^{k}$ Therefore: $$\|S_{n+m}(x) - S_{n}(x)\|_{\infty} = \sup_{x \in (-1,1)}\Bigg|\sum_{k = n}^{n+m}(-x^{2})^{k}\Bigg|$$ If we let $$x_{0} := \Bigg(\frac{1}{2}\Bigg)^{\frac{1}{2N}} \in (-1,1)$$ and $n + m \leq N$, where $N$ is the integer that satisfies the Cauchy sequence criterion for $n,m \geq N$. Then $$\sup_{x \in (-1,1)}\Bigg|\sum_{k = n}^{n+m}(-x^{2})^{k}\Bigg| \geq \Bigg|\sum_{k = n}^{n+m}(-1)^{k}\Bigg(\frac{1}{2}\Bigg)^{\frac{k}{N}}\Bigg|$$ Interpretation: So if I am understanding this properly, this shows that we can always choose an $N > 0$ such that if $n + m \geq N$ then $$x_{0} := \Bigg(\frac{1}{2}\Bigg)^{\frac{1}{2N}} \in [-r,r]$$, this is because $\frac{1}{2N} > \frac{1}{n+m}$ and in which case our series would converge uniformly. But on the other hand if we choose $N$ such that $n+m < N$ then $$x_{0} := \Bigg(\frac{1}{2}\Bigg)^{\frac{1}{2N}} \notin [-r,r]$$ but is in $(-1,1)$ because $\frac{1}{2N} < \frac{1}{n+m}$. As such there will always be an $\epsilon$ such that $$\|S_{n+m}(x) - S_{n}(x)\|_{\infty} > \epsilon$$. But I thought the negation of uniformly Cauchy in this case would be: "There exists $\epsilon > 0$ such that for all $N > 0$, $\|S_{n+m}(x) - S_{n}(x)\|_{\infty} > \epsilon$ for all $x \in (-1,1)$ and $n, m > N$"? So how would I be able to choose my $N$ such that $n+m < N$?
We will use the following result to prove that the series $\sum_{k=0}^\infty(-x^2)^k$ does not converge uniformly. Result: Let $D$ be a subset of a metric space $(X,d)$, and let a series of functions $\sum{f_k}$ be uniformly convergent on $D$ to a function $f$. Let $x_0$ be a limit point of $D$ and $\lim_{x\rightarrow x_0}f_k(x)=a_k$. Then the series $\sum{a_k}$ converges, $\lim_{x\rightarrow x_0}f(x)$ exists and $\lim_{x\rightarrow x_0}f(x)=\sum{a_k}$. We know that for $x\in(-1,1)$, $\sum_{k=0}^\infty(-x^2)^k=\frac{1}{1+x^2}$ pointwise. Now if the convergence were uniform, then since $1$ is a limit point of $(-1,1)$ and $\lim_{x\rightarrow 1}(-x^2)^k=(-1)^k$, the stated result would imply that $\sum_k(-1)^k$ converges, which is not true. To prove the second part, observe that if $x\in[-r,r]\subset(-1,1)$, then $|(-x^2)^k|\leq r^{2k}$ and $\sum_kr^{2k}$ converges as $0\leq r<1$. Therefore by the Weierstrass M-test, $\sum_{k=0}^\infty(-x^2)^k$ converges uniformly on $[-r,r]\subset(-1,1)$.
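To see both phenomena concretely: summing the geometric tail gives $|f(x)-S_n(x)| = x^{2n+2}/(1+x^2)$, whose sup stays near $1/2$ on $(-1,1)$ but is geometrically small on $[-0.9, 0.9]$. A small numeric sketch (my addition):

```python
def tail(x, n):
    # |f(x) - S_n(x)| = x^(2n+2) / (1 + x^2), summed geometric tail
    return x ** (2 * n + 2) / (1 + x ** 2)

n = 50
# On (-1,1) the error sup does not go to 0: near x = 1 it approaches 1/2
sup_open = max(tail(1 - 10.0 ** (-j), n) for j in range(1, 8))
# On [-0.9, 0.9] the error is at most 0.9^(2n+2), which is tiny for n = 50
sup_compact = tail(0.9, n)
```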
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What are bases, really? I'm taking a course in Linear Algebra right now, and am having a hard time wrapping my head around bases, especially since my prof didn't really explain them fully. I would really appreciate any insight you could give me as to what bases are! Also, can there can be multiple different bases for a single subspace? Thanks in advance.
While a bit late to the game, I thought another perspective might help. Consider the following physical example. Now, without being too pedantic about definition, a basis for a vector space is much like a building block of a biological system. We can build a human body from a set of cells. That is, we can construct all aspects of our anatomy beginning with a certain set of cells (e.g. nerve cells, blood cells, germ cells, epithelial cells, etc). Thus, if we take our various tissue as vectors, then we have as a basis our cells. But we could certainly have another biological basis from which to build our biological vectors. Namely, biomolecules. Indeed, we could express our other defined basis using this basis. Thus, our biological vector space has more than one biological basis. Some might argue that there's not a "full correspondence" here with the mathematical notion of a basis for a vector space because, for example, how could one exhibit a change of basis from biomolecules into cells (i.e. how does one express a biomolecule as "linear combination" of cells)? But I argue the idea of building blocks captures the underlying spirit of a basis for a first pass.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The name for equations/problems like "How many four-digit numbers have four different digits (without a leading $0$)?" I'm not very quick at these kinds of problems so I want to improve on them. But I don't know what topics to search for. Should I be searching for significant figures + base arithmetic? How many four-digit decimal numbers are made from four (4) different digits between $0$ and $9$? Here, a four-digit decimal number has a non-zero leading digit.
As you were told in the comments, the topic is called permutation. But since you don't use the whole set of digits, some texts may refer to this as a variation. I have read that a variation is a permutation of $r$ elements from $n$. By this definition we get the closed formula $$V(n,r)=\frac{n!}{(n-r)!}.$$ This term is introduced because some people link the idea of permutation to the factorial. To solve your example, this "new" idea should be used together with the multiplication principle as follows: $$9\times V(9,3)$$ First you choose the leading digit ($9$ possibilities, since it cannot be $0$), and then the problem reduces to choosing the remaining three digits, all different, from the nine digits left (here a leading zero would be allowed).
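A brute-force check (my addition) confirms that $9\times V(9,3)=4536$:

```python
# Brute force: four-digit numbers (no leading zero) with all digits distinct
count = sum(1 for n in range(1000, 10000) if len(set(str(n))) == 4)

# Multiplication principle: 9 choices for the leading digit, then V(9,3)
formula = 9 * (9 * 8 * 7)
```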
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Algorithm for computing algebraic numbers? (Why are algebraic numbers computable?) Suppose $b$ is algebraic over the rationals. In other words: $p(b) = 0$ for some polynomial where all the coefficients are rational. I am told $b$ is computable. But why? (1) Can I derive a polynomial from $p$ that I can evaluate to get $b$? (Edit: a commenter clearly pointed out no; the rationals are closed under those operations.) (2) Is there some other algorithm for computing $b$ given the polynomial? (Edit: accepted answer: a root-finding algorithm.) (3) If not, what is the general argument that algebraic numbers are computable? (Edit: an argument that the root-finding algorithm will converge.)
The answer is that iterative root-finding algorithms, like the Aberth method, exist to numerically find the roots of polynomials whose coefficients are themselves computable. Therefore if p(x) has rational coefficients, there exists a program that will produce b. So b is computable.
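For a concrete (simplified) illustration, here is a bisection sketch rather than the Aberth method: exact rational arithmetic on a polynomial with rational coefficients, approximating $b=\sqrt2$ as a root of $x^2-2$ to any requested precision. This is my own minimal example, not the algorithm referenced above:

```python
from fractions import Fraction

def bisect(p, lo, hi, eps):
    # p: polynomial coefficients (constant term first), exact rationals.
    # Requires a sign change on [lo, hi]; halves the interval until its
    # width drops below eps. Every step is exact rational arithmetic.
    def ev(x):
        return sum(c * x ** k for k, c in enumerate(p))
    assert ev(lo) * ev(hi) < 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if ev(lo) * ev(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return lo

# b = sqrt(2) as the root of p(x) = x^2 - 2 on [1, 2], to within 10^-6
approx = bisect([Fraction(-2), Fraction(0), Fraction(1)],
                Fraction(1), Fraction(2), Fraction(1, 10 ** 6))
```

Running this with smaller `eps` produces arbitrarily good rational approximations, which is exactly what "b is computable" asks for.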
{ "language": "en", "url": "https://math.stackexchange.com/questions/3316901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Renting Vs Buying?? Define Function? A family has $100,000 in savings accounts. They seek financial advice to develop a ten-year housing strategy. The two options are: continue renting the apartment or take a bank loan to buy a property. (a) Assume that they spend USD1900 per annum on renting the apartment and put their $100,000 savings in a bank deposit at the interest rate 2% per annum compounded annually. Find the family net worth (calculated as the money on the deposit minus the rent paid, in thousand dollars) after t years. Define the function frent : [0, 10] → R by allowing t to be a real number in the expression for the net worth. (b) Now assume that the family takes a bank loan of USD400,000 for the period of ten years to buy a property worth USD500,000. This is an interest only loan at 5% per annum so the total sum to be repaid to the bank after ten years is $600,000 (one and a half of the original amount). Suppose the loan is paid regularly so it is described by a linear function, whose value at t = 10 equals the total sum to be repaid to the bank. The property value is expected to grow at the rate of 2% per annum. Similarly to (a), define the function fbuy : [0, 10] → R corresponding to the family net worth (calculated as the property value minus the loan paid, in thousand dollars) after t years? FYI, I've never done Personal finance math (I have CS background), anyone wants to shed a light on this math problem? Have no idea how to approach this problem.
At a) the family starts with $100,000$. This amount is compounded once. At the end of the year they pay $1900$. The net value after one year is therefore $NV_1=100,000\cdot 1.02-1900=100,100$. Now this value is compounded one year: $(100,000\cdot 1.02-1900)\cdot 1.02=102,102$. Here we see that the rent is compounded as well. And the second rent is paid, which gives a net value of $NV_2=102,102-1900=100,202$. This was an iterative method. In general the net value after $t$ years is $$NV_t=100000\cdot (1.02)^t-1900\cdot \frac{1.02^t-1}{0.02}$$ Let's check for $t=2$ if we get the same value as the value which was obtained by the iterative method. $$NV_2=100000\cdot (1.02)^2-1900\cdot \frac{1.02^2-1}{0.02}=100,202 \ \checkmark$$ I hope it has become a little bit clearer why the formula is valid. At b) the family has to pay back 10 times an annuity of $r$ which equals $600,000$ at $t=10$. Thus the equation is $$r\cdot \frac{1.05^{10}-1}{1.05-1}=600$$ I take $600$ here since it is required that the unit of the function is "in thousand dollars". After the value of $r$ has been calculated, the function $f_{\text{buy}}$ is $$f_{\text{buy}}(t)=500\cdot 1.02^t-r\cdot \frac{1.05^{t}-1}{1.05-1}$$ $$\textrm{property value - loan paid}$$
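The two net-worth functions can be written out directly; a small Python sketch I added (all amounts in thousand dollars):

```python
def f_rent(t):
    # deposit grows at 2% per annum; rent of 1.9 per year is compounded too
    return 100 * 1.02 ** t - 1.9 * (1.02 ** t - 1) / 0.02

# annuity r (thousand dollars per year) whose accumulated value at t = 10
# equals the 600 owed to the bank
r = 600 * 0.05 / (1.05 ** 10 - 1)

def f_buy(t):
    # property appreciates at 2%; loan repayments accumulate at 5%
    return 500 * 1.02 ** t - r * (1.05 ** t - 1) / 0.05
```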
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f$ convex $\iff f(y)\geq f'(x)(y-x)+f(x).$ I want to prove that $f$ is convex $\iff f(y)\geq f'(x)(y-x)+f(x)$. The implication is fine, but I have difficulties with the converse implication. I tried to prove that $f'$ is increasing, but still, it doesn't work : Let $y>x$, then $$\frac{f(y)-f(x)}{y-x}\geq f'(x),$$ but taking $x\to y$ we get $f'(y)\geq \lim_{x\to y}f'(x).$ So if $f$ is $\mathcal C^1$ we don't get better than $f'(y)\geq f'(y)$... any idea ?
Graphically, this means that the tangent at any point of a convex function lies below the graph. Convexity implies $ f(y)\geq f'(x)(y-x)+f(x)$ Now, take $0 < \theta < 1$, we know that for $x,y \in \operatorname{dom}(f)$, we also have that $x + \theta(y-x) \in \operatorname{dom}(f)$. Using the definition of convexity, we can say $$f(x + \theta(y-x)) \leq (1-\theta)f(x) + \theta f(y)$$ which implies $$f(y) \geq f(x) + \frac{f(x + \theta(y-x)) -f(x)}{\theta}$$ Taking $\theta \rightarrow 0$, we get the desired result. Now, to show the other way around, $ f(y)\geq f'(x)(y-x)+f(x)$ implies convexity Choose any $x \neq y$, and $0 < \theta < 1$, let $z = \theta x + (1-\theta) y$. Applying the equation twice we have $$ f(x)\geq f'(z)(x-z)+f(z) \tag{1}$$ and $$ f(y)\geq f'(z)(y-z)+f(z) \tag{2}$$ Now, multiplying $(1)$ by $\theta$ and $(2)$ by $1 - \theta$, we get $\theta f(x) + (1-\theta)f(y) \geq f(z)$ which means we have convexity.
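A numeric spot check of the equivalence for the convex function $f=\exp$ (my addition; a finite grid check, not a proof):

```python
import math

f = math.exp        # a convex function
fprime = math.exp   # its derivative

xs = [i / 10 for i in range(-20, 21)]
# First-order (tangent-line) condition: f(y) >= f(x) + f'(x)(y - x)
tangent_ok = all(f(y) >= f(x) + fprime(x) * (y - x) - 1e-12
                 for x in xs for y in xs)

# Defining convexity inequality: f(tx + (1-t)y) <= t f(x) + (1-t) f(y)
ts = [0.1, 0.3, 0.5, 0.7, 0.9]
convex_ok = all(f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12
                for x in xs for y in xs for t in ts)
```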
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Continuity of $x\sin\frac{1}{y}$ at $(x, 0)$ I need to check whether the function is continuous at $(x,0)$ $$f(x,y) = \left\{ \begin{array}{ll} x\sin\frac{1}{y} & \mbox{if } y \ne 0 \\ 0 & \mbox{if } y = 0 \end{array} \right.$$ Can someone help me understand if I approached this correctly? First, I check the following limit $$\lim_{(x,y) \to (x,0)} x\sin\frac{1}{y}$$ which does not exist, since I can write it as $$\lim_{(x,y) \to (x,0)} x \lim_{(x,y) \to (x,0)} \sin\frac{1}{y}$$ The first limit tends to $x$ itself, the second one diverges. For $x=0$ the limit $$\lim_{(x,y) \to (0,0)} x\sin\frac{1}{y} = 0$$ Indeed, by the squeeze theorem I can write $$-1\leq\sin\frac{1}{y} \leq 1 $$ $$-|x| \leq x\sin\frac{1}{y} \leq |x|$$ Thus reducing the limit to $$ \lim_{(x,y) \to (0,0)} |x| = 0$$ Thus $f(x,y)$ is continuous $\{ \forall (x, y) \in \Re^2 \mid x \ne 0 \}$ I'm a beginner, thank you in advance!
It's easier to use sequences: Fix $0\neq x\in \mathbb R$. There are sequences $(y_n)$ and $(z_n)$ such that $y_n\to 0$ and $z_n\to 0$ and such that $\sin(1/y_n)=1$ and $\sin(1/z_n)=-1$ (why?). Then, $\alpha_n=(x,y_n)\to (x,0)$ and $\beta_n=(x,z_n)\to (x,0)$ but $f(\alpha_n)\to x$ and $f(\beta_n)\to -x$ so $f$ is not continuous at $(x,0).$ The case $x=0$ is even easier.
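Concretely (my addition), one can take $y_n = 1/(\pi/2+2\pi n)$ and $z_n = 1/(-\pi/2+2\pi n)$, so that $\sin(1/y_n)=1$ and $\sin(1/z_n)=-1$ while both sequences tend to $0$:

```python
import math

def f(x, y):
    return x * math.sin(1 / y) if y != 0 else 0.0

x0 = 2.0
# sin(1/y_n) = +1 and sin(1/z_n) = -1, with y_n, z_n -> 0
y = [1 / (math.pi / 2 + 2 * math.pi * n) for n in range(1, 6)]
z = [1 / (-math.pi / 2 + 2 * math.pi * n) for n in range(1, 6)]
vals_y = [f(x0, t) for t in y]   # all close to  x0 = 2
vals_z = [f(x0, t) for t in z]   # all close to -x0 = -2
```

The two sequences of function values converge to $x_0$ and $-x_0$, so no single limit exists at $(x_0, 0)$ when $x_0 \ne 0$.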
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
If $f$ and $g$ are paths with $g$ constant, then $f'\simeq g'$ rel$\{1\}$ if and only if there exists a free homotopy $f'\simeq g'.$ Using Theorem 1.6 (http://www.ugr.es/~acegarra/Rotman.pdf page 17), show (using the notation of exercise 3.2 (page 40)) that if $f$ and $g$ are paths with $g$ constant, then $f'\simeq g'$ rel$\{1\}$ if and only if there exists a free homotopy $f'\simeq g'.$ Attempt at the proof $\implies$ direction Follows immediately. $\Longleftarrow$ direction Suppose we have a homotopy $H:I\times I\to X$ such that $$H(t,0)=f'(t)$$ $$H(t,1)=g'(t)$$ We only need to find $H$ such that $H(1,t)=f'(1)=g'(1).$ Applying exercise 3.2 i) to $f$ and $g$, I have $f'\simeq g'.$ Since $g'$ is now constant, then $f'$ is nullhomotopic. Now I am not sure if I can use iii) from Theorem 1.6 because I have $f':S^1\to X$ and I need $f':S^n\to X$. I am wrong in the previous line, in fact I can use the Theorem 1.6 with $n=1.$ I think now I can conclude that there exists a homotopy $H:f'\simeq g'$ such that $H(cte,t)=f'(cte),$ for all $t\in I.$ That is $H(1,t)=f'(1)$ for all $t\in I$. I can have $H(1,t)=f'(1)=g'(1)$ only if $H(t=1,1).$ How can I correct this? Am I on the right track? If someone could help me, thank you.
You confuse $f,g : I \to X$ which are closed paths and $f',g' : S^1 \to X$. Thus your homotopy $H : I \times I \to X$ cannot be a homotopy from $f'$ to $g'$. So take a (free) homotopy $H : S^1 \times I \to X$ such that $H(z,0) = f'(z), H(z,1) = g'(z) = x_0$. In Example 1.7 Rotman shows that if $f : S^n \to X$ is map and $F : S^n \times I \to X$ a homotopy such that $F(z,0) = f(z), F(z,1) = \xi$ for all $x$, then $f$ has a continuous extension $F' : D^{n+1} \to X$. So let $H': D^2 \to X$ be a continuous extension of $f'$. Define $$R : S^1 \times I \to D^2, R(z,t) = t \cdot 1 + (1-t)\cdot z$$ where we regard $D^2$ as a subset of the complex plane. The map $R$ is continuous. Define $$H'' = H' \circ R : S^1 \times I \to X .$$ Then $H''(z,0) = H'(z) = f'(z)$, $H''(z,1) = H'(1) = f'(1) = x_0 = g'(z)$ and $H''(1,t) = H'(1) = x_0$. Thus $H''$ is the desired basepoint-preserving homotopy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computation of Integrals Compute the integral: $$\int\frac{2x+1}{(x-1)(x-2)}dx$$ Now using partial fractions we can write $$\frac{2x+1}{(x-1)(x-2)}=\frac{A}{x-1}+\frac{B}{x-2},$$ so we get $$\frac{2x+1}{(x-1)(x-2)}=\frac{A(x-2)+B(x-1)}{(x-1)(x-2)}.$$ Now for all $x$ not equal to $1, 2$ we can cancel out the denominator to get $$2x+1=A(x-2)+B(x-1).$$ Now to find $A$ and $B$, how can we put $x=1$ and $x=2$ in this identity, as this identity is valid if and only if $x$ is not equal to $1, 2$?
The identity $\frac{2x+1}{(x-1)(x-2)} = \frac{A}{x-1} + \frac{B}{x-2}$ only makes sense for $x \neq 1, 2$, where both sides are defined. But once we rewrite it as $2x+1 = A(x-2) + B(x-1)$, both sides are polynomials, and polynomials are continuous everywhere. Two continuous functions that agree for all $x \neq 1, 2$ must also agree at $x = 1$ and $x = 2$ (take limits as $x \to 1$ and $x \to 2$). Hence we can put $x = 1$ and $x = 2$ to find $A$ and $B$.
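The coefficients can be checked quickly with the cover-up method (a verification sketch I added):

```python
from fractions import Fraction

# Cover-up method: A = (2x+1)/(x-2) at x = 1,  B = (2x+1)/(x-1) at x = 2
A = Fraction(2 * 1 + 1, 1 - 2)   # -3
B = Fraction(2 * 2 + 1, 2 - 1)   #  5

# Verify the polynomial identity 2x + 1 = A(x-2) + B(x-1) at many points
ok = all(2 * x + 1 == A * (x - 2) + B * (x - 1) for x in range(-5, 6))

# The integral is then A*ln|x-1| + B*ln|x-2| + C = -3 ln|x-1| + 5 ln|x-2| + C
```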
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$f(z)=\sqrt{1-z^2}$ pole at infinity Consider integrating $f(z)=\sqrt{1-z^2}$ with a branch cut of $[-1,1]$ around the following contour. $\gamma_1:[-1,1]\to\mathbb{C}, t\mapsto t+\epsilon i$ $\gamma_2:[-\pi/2,\pi/2]\to\mathbb{C}, t \mapsto 1+\epsilon e^{-it}$ $\gamma_3:[-1,1]\to\mathbb{C}, t\mapsto -t-\epsilon i$ $\gamma_4:[-3\pi/2,-\pi/2]\to\mathbb{C}, t \mapsto -1+\epsilon e^{-it}$ As $\epsilon$ tends to $0$ we have that $\int_{\gamma_2}f(z)dz$ and $\int_{\gamma_4}f(z)dz$ tend to $0$. Now, $\int_{\gamma_1}f(z)dz$ tends to $\int_{-1}^{1}\sqrt{1-x^2}dx:=I$ and $\int_{\gamma_3}f(z)dz$ also tends to $I$ (one minus sign from being on the other side of the branch cut, another one from reversing the lower/upper limits). Now $I=\pi/2$, so the integral around the closed contour is $\lim_{\epsilon\to 0}\int_{\gamma_1+\gamma_2+\gamma_3+\gamma_4} f(z) dz=\pi$ Note that $f$ is analytic in $\mathbb{C}\setminus[-1,1]$ with our choice of branch cut and so we can consider the contour integral around $\gamma=\gamma_1+\gamma_2+\gamma_3+\gamma_4$ as a closed contour around infinity. i.e. $\int_{\gamma}f(z)dz=-2\pi i Res[f,z=\infty]$ ($-2\pi i$ instead of $2\pi i$ because we are going clockwise around infinity.) Now, $Res[f,z=\infty]=Res[\sqrt{1-1/z^2},z=0]=\lim_{z\to 0}z\sqrt{1-1/z^2}=\lim_{z\to 0}\sqrt{z^2-1}=i$ So $\int_{\gamma}f(z)dz=-2\pi i (i)=2\pi$ And the two results do not agree. I am not confident in my argument about the pole at infinity. What exactly did I do wrong there?
See this answer. You have implicitly used the condition that $f(x + i0) > 0$ for $-1 < x <1$ (otherwise you would get $I = -\pi$). With this condition, $f$ can be written as $$f(z) = -i z \sqrt {1 - \frac 1 {z^2}},$$ where $\sqrt z$ is the principal value of the square root. $\gamma$ goes around the origin clockwise, therefore $I = +2 \pi i \operatorname{Res}_{z = \infty} f(z)$. By the binomial theorem, the Laurent expansion of $\sqrt {1 - 1/z^2}$ around infinity is $1 - 1/(2 z^2) + O(1/z^3)$, which gives $\operatorname{Res}_{z = \infty} f(z) = 1/(2 i)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find distribution function through moment generating function Suppose that the moment generating function $M_X(t)$ of a random variable $X$ is given by $$ M_X(t)=\frac{e^t+e^{-t}}{6} + \frac 23 $$ I need to find the distribution function $F_X(x)$. Until now, I have been given (in my lecture notes) that I can express $E(X) = M_X^{(1)}(0)$. But I can't use this here to find the distribution function $F_X(x)$ (or at least I have no idea how to do it). Could you please tell me how to proceed?
Hint: From the moment generating function we can determine the distribution of $X$, which is $P(X=1)=P(X=-1)=\frac16$, $P(X=0)=\frac23$. I believe that you can move on now.
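A quick check (my addition) that this three-point distribution reproduces the given MGF:

```python
import math

dist = {-1: 1 / 6, 0: 2 / 3, 1: 1 / 6}

def mgf(t):
    # M_X(t) = E[e^{tX}] for the discrete distribution above
    return sum(p * math.exp(t * x) for x, p in dist.items())

def target(t):
    return (math.exp(t) + math.exp(-t)) / 6 + 2 / 3

err = max(abs(mgf(t) - target(t)) for t in (-2.0, -0.5, 0.0, 0.5, 2.0))
```

The distribution function $F_X$ is then the step function jumping by $1/6$, $2/3$, $1/6$ at $x=-1,0,1$.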
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is $\nabla \cdot u=0$? (If $u=v\cos(k\cdot x)$) Suppose $k\in \mathbb R^3$ and $v \in \mathbb S^2$ (2D sphere) and $k\cdot v=0$. Let $u=v\cos(k\cdot x)$, then why is $\nabla \cdot u=0$? To calculate the divergence, I look at one of the components $\partial_{j} u_{j}$. $$\partial_{j} u_{j}=\partial_{j} v_{j}\cos(k_{j}x_{j})-v_{j}[\partial_{j} k_{j}x_{j}\sin(k_{j}x_{j})+\partial_{j} x_{j}k_{j}\sin(k_{j}x_{j})]$$ I can see the term $k_{j}v_{j}$, which will be zero after summing since $k\cdot v=0$, but I don't know how to treat the rest.
I like to avoid indices altogether whenever possible, and would use the product rule for divergence instead: $$\nabla \cdot \left[f(x)v(x)\right] = \langle \nabla f(x), v(x)\rangle + f(x) \nabla \cdot v(x).$$ In particular, if $v$ is constant, the second term vanishes and you just get $$\nabla \cdot u = v \cdot \nabla \cos(k\cdot x) = -(v\cdot k) \sin(k\cdot x) = 0.$$
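A finite-difference spot check (my addition; the particular vectors $k$ and $v$ with $k\cdot v=0$ are arbitrary choices):

```python
import math

k = (1.0, 2.0, 2.0)
v = (2.0, 1.0, -2.0)            # k . v = 2 + 2 - 4 = 0
assert sum(a * b for a, b in zip(k, v)) == 0

def u(x):
    c = math.cos(sum(a * b for a, b in zip(k, x)))
    return [vi * c for vi in v]

def div_u(x, h=1e-6):
    # central-difference approximation of the divergence at x
    total = 0.0
    for j in range(3):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        total += (u(xp)[j] - u(xm)[j]) / (2 * h)
    return total

d = div_u([0.3, -0.7, 1.1])     # should be ~0 up to discretization error
```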
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How dense are primes congruent to 1 and 3 (mod 4)? There are infinitely many primes of the form $4n+1$ and $4n+3$. In a given interval $[0,N]$ for a large enough $N$ do we expect to see the same number of primes congruent to $1$ and $3$ (mod 4)?
Dirichlet's theorem on primes in arithmetic progressions says that the number of primes up to $x$ of the form $4k+1$ and of the form $4k+3$ are both asymptotic to $\frac{x}{2\log x}$. However, on small scales we observe a phenomenon called Chebyshev's bias, in which primes of the form $4k+3$ are slightly more numerous than those of the form $4k+1$. The first violation of this bias occurs only at $x = 26861$.
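Both facts are easy to observe with a sieve (a verification sketch I added; the first lead change at $x=26861$ is Leech's classical computation):

```python
def counts_mod4(limit):
    # Sieve of Eratosthenes, then count odd primes in each class mod 4
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    c1 = sum(1 for p in range(3, limit + 1, 2) if sieve[p] and p % 4 == 1)
    c3 = sum(1 for p in range(3, limit + 1, 2) if sieve[p] and p % 4 == 3)
    return c1, c3

before = counts_mod4(26860)   # up to just below the first sign change
after = counts_mod4(26861)    # 26861 is prime and congruent to 1 mod 4
```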
{ "language": "en", "url": "https://math.stackexchange.com/questions/3317919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding a basis for the Eisenstein space $\mathcal{E}_1(12,\chi)$ I am trying to find a basis of the Eisenstein space $\mathcal{E}_1(12,\chi)$ of modular forms of weight 1, level 12 and Dirichlet character \begin{equation*} \chi(m)=\genfrac{(}{)}{}{}{-12}{m}=\begin{cases} 1 ~~&\text{ if } m\equiv 1,\,7\pmod{12}\\ -1 ~~&\text{ if } m\equiv 5,\,11\pmod{12}\\ 0 ~~&\text{ if } m\equiv 0,\,2,\,3,\,4,\,6,\,8,\,9,\,10 \pmod{12}. \end{cases} \end{equation*} For this I am following the approach described in Section 4.8 of "A first course in modular forms" by Diamond and Shurman. Since $\chi$ is induced by the primitive character modulo 3, it has conductor 3. Therefore I am led to believe that a basis of $\mathcal{E}_1(12,\chi)$ is given by $E_1^{\mathbf{1}_1,\chi,1}(z)=E_1^{\mathbf{1}_1,\chi}(z)$, $E_1^{\mathbf{1}_1,\chi,2}(z)=E_1^{\mathbf{1}_1,\chi}(2z)$ and $E_1^{\mathbf{1}_1,\chi,4}(z)=E_1^{\mathbf{1}_1,\chi}(4z)$. Here $E_1^{\mathbf{1}_1,\chi}(z)$ has $q$-expansion $$ E_1^{\mathbf{1}_1,\chi}(z)=L(0,\chi) + 2 \sum_{m=1}^{\infty}\sigma_0^{\mathbf{1}_1,\chi}(m)q^m, $$ where $$\sigma_0^{\mathbf{1}_1,\chi}(m) = \sum_{\substack{d|m\\d>0}}\chi(d).$$ Writing out the first few coefficients in the $q$-expansions I obtain \begin{align*} E_1^{\mathbf{1}_1,\chi,1}(z) &= \frac{2}{3} + 2q + 2q^2 + 2q^3 + 2q^4 + 2q^6 + 4q^7 + 2q^8 + \dots \\ E_1^{\mathbf{1}_1,\chi,2}(z) &= \frac{2}{3} + 2q^2 + 2q^4 + 2q^6 + 2q^8 + \dots \\ E_1^{\mathbf{1}_1,\chi,4}(z) &= \frac{2}{3} + 2q^4 + 2q^8 + \dots . \end{align*} However, the span of these forms does not agree with the span of the forms that Sage outputs as a basis for $\mathcal{E}_1(12,\chi)$ and I can not manage to express the theta function of the quadratic form $Q(X)=X_1^2 + 3X_2^2$ in terms of the basis I obtained. Something is going wrong here, but I can not seem to put my finger on where I am making a mistake. Any help will be greatly appreciated!
I don't know about how to generate that Eisenstein space, but from quadratic reciprocity for $O_K = Z[\frac{\sqrt{-3}+1}{2}]$ which is a PID we have $$\sum_{a,b \ne (0,0)} |a+b \frac{\sqrt{-3}+1}{2}|^{-2s} = |O_K^\times| \zeta_K(s) =6 \zeta(s) L(s,(\frac{-3}{.}))=6 \zeta(s) L(s,(\frac{.}{12}))\\ =6 \sum_{n=1}^\infty n^{-s} \sum_{d | n, d \ odd} (\frac{d}{3})(-1)^{(d-1)/2}$$ So that with the quadratic form $f(a,b) = |a+b \frac{\sqrt{-3}+1}{2}|^2$ we have $$\sum_{a,b} q^{f(a,b)} = 1 +6 \sum_{n=1}^\infty q^n \sum_{d | n, d \ odd} (\frac{d}{3})(-1)^{(d-1)/2} \in M_1(\Gamma_1(12))$$ Then with the quadratic form $Q(a,b) = |a+b\sqrt{-3}|^2$ I think it is not of level $12$ but of level $12 [O_K : Z[\sqrt{-3}]] = 24$, that's why you won't find it in your Eisenstein space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$ab$ divides $a^2+b^2 \implies a=b$ Let $a$ and $b$ be two positive integers. If $ab$ divides $a^2+b^2$ then $a=b$. I can show that $a$ divides $b^2$ and $b$ divides $a^2$ but then I get stuck. Any ideas?
Hint $\ n = \dfrac{a^2\!+b^2}{ab} = \dfrac{a}b + \dfrac{b}a =\, x+x^{-1}\,\overset{\large {\times\, x}}\Longrightarrow\,x^2-n\,x + 1 = 0$ By RRT = Rational Root Test $\ a/b\, =\, x\, = \pm 1.\,$ It is special case $\, j = 1 = k,\, c_1 = 0\,$ of below. Generally applying RRT as above yields the degree $\,j+k\,$ homogeneous generalization $$a,b,c_i\in\Bbb Z,\,\ a^{\large j}b^{\large k}\mid \color{#c00}{\bf 1}\:\! a^{\large j+k}\! + c_1 a^{\large j+k-1} b + \cdots + c_{\large j+k-1} a b^{\large j+k-1}\! + \color{#c00}{\bf 1}\:\!b^{\large j+k}\Rightarrow\, a = \pm b \qquad $$ $\qquad\qquad\ \ \ \ \ \ $ e.g. $\ a^2b \mid a^3 + c_1 a^2b + c_2 ab^2 + b^3\,\Rightarrow\, a = \pm b,\ $ e.g. here (see also here). Alternatively the statement is homogeneous in $\,a,b\,$ so we can cancel $\,\gcd(a,b)^{\large j+k}$ to reduce to the case $\,a,b\,$ coprime. The dividend $\,c\,$ has form $\,a^{\large n}\!+b^{\large n}\! + abm\,$ so by Euclid it is coprime to $a,b$ thus $\,a,b\mid c\,\Rightarrow\, a,b = \pm1$.
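Not a proof, but a quick exhaustive sanity check of the statement over small integers:

```python
# Verify: for 1 <= a, b <= 300, ab | a^2 + b^2 forces a = b.
solutions = [(a, b)
             for a in range(1, 301)
             for b in range(1, 301)
             if (a * a + b * b) % (a * b) == 0]
assert all(a == b for a, b in solutions)
assert len(solutions) == 300  # exactly the diagonal pairs (a, a)
```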
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Distance from any point in the plane to vertices of a triangle related to its sides I was writing a program calculating all possible configurations of $n$ random circles with random radius and center coordinate in the plane and met this problem. Being puzzled by it for quite a long time, I decided to have a try at this site. Problem statement. Given a triangle $ABC$, pick a random point $P$ in its plane (not restricting it to the inside of the triangle). It is known that the three sides of $ABC$ are $a$, $b$ and $c$; the distances from $P$ to $A$, $B$ and $C$ are $a'$, $b'$ and $c'$ respectively. Question: Find an equation $F(a, b, c, a', b', c') = 0$, with $F$ preferably a polynomial in $a'$, $b'$ and $c'$. After days of searching, I actually came across an unnamed theorem quite close to what I was looking for: $a'^2+b'^2+c'^2 = (a^2+b^2+c^2)/3 + 3\,PG^2$, where $PG$ represents the distance from $P$ to the centroid of triangle $ABC$. I was somewhat satisfied with this result but still want to get an explicit form without involving extra terms. Considering that this problem is itself quite neat and probably of interest to some of you, I really want to get some hints from you guys. Thanks for the help.
$a',b',c'$ are the tripolar coordinates of $P$. MathWorld gives two equations $F$ by Euler, one of which is reproduced below (using $x,y,z$ for $a',b',c'$): $$(a^2+b^2-c^2)(x^2y^2+c^2z^2)+(a^2-b^2+c^2)(b^2y^2+x^2z^2)+(-a^2+b^2+c^2)(a^2x^2+y^2z^2)-(a^2x^4+b^2y^4+c^2z^4)-a^2b^2c^2=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Extending Pre-measure into two different measures. I am trying to find an example of an algebra $\mathcal{A}$ and a pre-measure $\mu_0$ such that you can extend $\mu_0$ on the $\sigma$-algebra generated by $\mathcal{A}$ to two different measures. By Carathéodory's extension theorem, such a pre-measure cannot be $\sigma$-finite. I was thinking of doing something in $\mathbb{R}^{\mathbb{R}}$ and working with integrals (I cannot use $L^p(\mathbb{R})$ because it is separable), but from that idea all I have are failed attempts. Any help is appreciated.
On $[0,1)$, consider the algebra $\mathcal{A}$ of all finite unions of half-open intervals, of the form $\bigcup_{i=1}^n [x_i, x_{i+1})$. This generates the Borel $\sigma$-algebra. Consider the pre-measure $\mu_0$ which assigns measure $+\infty$ to every non-empty set in $\mathcal{A}$. Then one extension of $\mu_0$ to the Borel $\sigma$-algebra is counting measure $\mu$. But $c \mu$ is also an extension of $\mu_0$ for any $0 < c \le \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the derivative of an improper triple integral I want to find the density function of the random variable $Y=X_1+X_2+X_3$, where the random variables $X_1, X_2$ and $X_3$ have a joint density function $$f_{X_1, X_2, X_3}(x_1, x_2, x_3) = (2\pi)^{-\frac{3}{2}}e^{-\frac{1}{2}(x_1^2+x_2^2+x_3^2)},\qquad -\infty<x_1, x_2,x_3<\infty.$$ It is a known result that the sum of three independent normally distributed random variables with mean 0 and variance 1 is normal, with its mean being 0 and variance being 3, so that we know $f_Y(y)=\frac{1}{\sqrt{6\pi}}e^{-\frac{y^2}{6}},\quad -\infty<y<\infty.$ However, I want to verify this by finding the derivative of $F_Y(y):=P(X_1+X_2+X_3\leq y)$ with respect to $y$. In other words, I need to find $$\frac{d}{dy}\Bigg[\int_{-\infty}^{\infty}\int_{-\infty}^{y-x_3}\int_{-\infty}^{y-x_2-x_3}(2\pi)^{-\frac{3}{2}}e^{-\frac{1}{2}(x_1^2+x_2^2+x_3^2)}dx_1dx_2dx_3\Bigg].$$ My questions are: (1) Are the upper and lower bounds correct? (2) How can I evaluate this expression? I have tried applying Leibniz's rule but end up getting something impossible to integrate. I have also looked at the Reynolds transport theorem, but I don't quite understand it, especially since the region is unbounded in this case. Any help is greatly appreciated.
Since no one has answered it for 2 years and I now know where I went wrong, I'll post my answer here. The short answer is: (1) The bounds are incorrect. (2) Once we fix that, the expression can be evaluated quite easily using Leibniz's rule and the Gaussian integral. Let's consider the case with 2 random variables first, i.e. $Y = X_1 + X_2$. In this case, we do have $$\frac{d}{dy} F_Y(y)=\frac{d}{dy}P(X_1+X_2\leq y)=\frac{d}{dy}\Bigg[\int_{-\infty}^{\infty}\int_{-\infty}^{y-x_2}(2\pi)^{-1}e^{-\frac{1}{2}(x_1^2+x_2^2)} \: dx_1dx_2\Bigg].$$ We then use the Leibniz integral rule for differentiation under the integral sign. Noting that the upper bound of the inner integral involves $y$ but the integrand does not involve $y$, we get \begin{align} \frac{1}{2\pi}\frac{d}{dy}\Bigg[\int_{-\infty}^{\infty}\int_{-\infty}^{y-x_2}e^{-\frac{1}{2}(x_1^2+x_2^2)} \: dx_1dx_2\Bigg] &= \frac{1}{2\pi}\int_{-\infty}^{\infty}\Bigg[e^{-\frac{1}{2}[(y-x_2)^2+x_2^2]}\cdot 1+\int_{-\infty}^{y-x_2}0 \: dx_1\Bigg]dx_2 \\ &= \frac{1}{2\pi} \int_{-\infty}^{\infty}e^{-\frac{1}{2}[(y-x_2)^2+x_2^2]} \: dx_2 \\ &= \frac{1}{2\pi} \int_{-\infty}^{\infty}e^{-(x_2-\frac{y}{2})^2-\frac{y^2}{4}} \: dx_2 \\ &= \frac{1}{2\pi} e^{-\frac{y^2}{4}} \int_{-\infty}^{\infty}e^{-(x_2-\frac{y}{2})^2} \: dx_2 \\ &= \frac{1}{2\pi} e^{-\frac{y^2}{4}} \sqrt{\pi} \\ &= \frac{1}{\sqrt{2} \cdot \sqrt{2\pi}} e^{-\frac{1}{2}(\frac{y}{\sqrt{2}})^2}, \end{align} where the second-to-last equality is obtained by a simple change of variables and the well-known Gaussian integral. We see that the resulting function is indeed the density function of the normal distribution with mean 0 and variance 2. Now consider the original 3-variable problem, i.e. $Y = X_1+X_2+X_3$. To see why the bounds that the OP (that's me) gave are incorrect, consider the point $P := (x_1, x_2, x_3) := (-1, y, 1) \in \mathbb{R}^3$. We have $x_1+x_2+x_3 \leq y$. (In fact, $P$ lies on the plane $X_1+X_2+X_3=y$.)
However, it is false that $-\infty < x_2 \leq y-x_3$, as described by OP's bounds. The reason why I made this mistake was that I was too used to the standard Calculus 3 exercises where we are asked to find the volume of a $\textbf{bounded}$ solid. In our case here, we really don't have any constraints on $x_2$ and $x_3$: given any $x_2$ and $x_3$, we only need to make sure that $-\infty < x_1 \leq y - x_2 - x_3$. Therefore, very similar to the 2 variables case, we get: \begin{align} \frac{d}{dy} F_Y(y) &=\frac{d}{dy}P(X_1+X_2+X_3\leq y) \\ &=\frac{d}{dy}\Bigg[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{y-x_2-x_3}(2\pi)^{-\frac{3}{2}}e^{-\frac{1}{2}(x_1^2+x_2^2+x_3^2)} \: dx_1dx_2dx_3\Bigg] \\ &=(2\pi)^{-\frac{3}{2}} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{1}{2}[(y-x_2-x_3)^2+x_2^2+x_3^2]} \: dx_2dx_3\\ &=(2\pi)^{-\frac{3}{2}} e^{-\frac{1}{6}y^2} \int_{-\infty}^{\infty} e^{-\frac{3}{4}\big(x_3-\frac{y}{3}\big)^2} dx_3 \int_{-\infty}^{\infty} e^{-\big(x_2 + \frac{x_3-y}{2}\big)^2} dx_2 \\ &=(2\pi)^{-\frac{3}{2}} e^{-\frac{1}{6}y^2}\bigg(\frac{2}{\sqrt{3}}\sqrt{\pi} \bigg) \sqrt{\pi} \\ &=\frac{1}{\sqrt{3} \cdot \sqrt{2\pi}} e^{-\frac{1}{2}(\frac{y}{\sqrt{3}})^2} \end{align} So there we have it, the density function of the normal distribution with mean 0 and variance 3.
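As a final sanity check, a small Monte Carlo simulation (standard library only, fixed seed) compares the empirical CDF of $X_1+X_2+X_3$ with the $N(0,3)$ CDF:

```python
import math
import random

random.seed(0)
n = 200_000
samples = [random.gauss(0, 1) + random.gauss(0, 1) + random.gauss(0, 1)
           for _ in range(n)]

def normal_cdf(y, var):
    return 0.5 * (1 + math.erf(y / math.sqrt(2 * var)))

# With n = 200000 the Monte Carlo error of each CDF estimate is about 0.001.
for y in (-2.0, 0.0, 1.5):
    empirical = sum(s <= y for s in samples) / n
    assert abs(empirical - normal_cdf(y, 3)) < 0.01, y
```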
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\lim\limits_{n \to \infty} \frac{n^a}{c^n} = 0$ using L'Hôpital's Rule I am trying to prove $\displaystyle \lim_{n \to \infty} \frac{n^a}{c^n} = 0$ using L'Hôpital's Rule, but I'm stuck. Here's what I have so far: $$ \lim_{n \to \infty} \frac{n^a}{c^n} = \lim_{n \to \infty}\frac{an^{a-1}}{c^n \ln c} = \lim_{n \to \infty}\frac{a(a-1)n^{a-2}}{c^n(\ln c)^2}$$ All three limits above seem to evaluate to $\frac{\infty}{\infty}$, so I feel like I'm not getting anywhere. Any ideas? Edit: So, with the help of the hints below, I was able to figure out that $$ \lim_{n \to \infty} \frac{n^a}{c^n} = \frac{a}{\ln c} \cdot \lim_{n \to \infty} \frac{n^{a-1}}{c^n} = \frac{a}{\ln c} \cdot \frac{a - 1}{\ln c} \cdot \lim_{n \to \infty} \frac{n^{a-2}}{c^n} = \cdots $$ So, disregarding the constant, it looks like the numerator keeps decreasing, while the denominator stays the same. I can also see that if I let $a = 2$, for instance, I end up with $0$ after applying L'Hopital's $2$ times: $$ \begin{aligned} \lim_{n \to \infty} \frac{n^2}{c^n} &\overset{LH}= \lim_{n \to \infty} \frac{2n}{c^n \ln c} \\ &= \frac{2}{\ln c} \lim_{n \to \infty} \frac{n}{c^n} \\&\overset{LH}= \frac{2}{\ln c} \lim_{n \to \infty} \frac{1}{c^n \ln c} \\ &= \frac{2}{(\ln c)^2} \lim_{n \to \infty} \frac{1}{c^n} \\ &= 0 \end{aligned} $$ So it seems reasonable to conclude that for an arbitrary $a > 0$, I will end up with $0$ after applying L'Hopital's $a$ times. But I'm not sure how to go about using induction to prove it formally. I've only proven very simple sums by induction so far. Do I have to apply it to a product here?
Hint If $c>1$, the limit is trivial for $a \leq 0$. For $a>0$, show instead that $$\lim\limits_{n \to \infty} \left( \frac{n^a}{c^n} \right)^\frac{1}{a}=\lim\limits_{n \to \infty} \frac{n}{(c^{1/a})^n}=0,$$ which is just the case $a=1$ with the base $c^{1/a}>1$. Then raise both sides to the power $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding angle between two points on circle given cartesian coordinates of the points If I am given the coordinates of two points on a circle of radius $R$, how can I find the angle between those two points, as well as the area of the arc between them (created on the circumference of the circle)?
If we take the dot product of the vectors from the center to these points, then $u\cdot v = |u||v|\cos \theta$, where $\theta$ is the angle you seek. And since this is a circle, $|u| = |v| = R$. If your circle is centered at the origin and your points are $(x_1,y_1),(x_2,y_2)$, then $$\theta = \arccos \frac {x_1x_2 + y_1y_2}{R^2}.$$
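In code this might look as follows (a sketch assuming the circle is centered at the origin; the angle then also gives the arc length $R\theta$ and the circular-sector area $\tfrac12 R^2\theta$, which is presumably the "area" the question is after):

```python
import math

def central_angle(p1, p2, r):
    """Angle (radians) between two points on a circle of radius r centered at the origin."""
    dot = p1[0] * p2[0] + p1[1] * p2[1]
    # Clamp to guard against tiny floating-point overshoot before acos.
    return math.acos(max(-1.0, min(1.0, dot / (r * r))))

def arc_length(p1, p2, r):
    return r * central_angle(p1, p2, r)

def sector_area(p1, p2, r):
    return 0.5 * r * r * central_angle(p1, p2, r)

# Quarter circle of radius 2: angle pi/2, arc length pi, sector area pi.
theta = central_angle((2, 0), (0, 2), 2)
assert abs(theta - math.pi / 2) < 1e-12
```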
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Methods for finding the minimum coordinate of an equation (any shortcuts?) I want to know if there's a quicker method to find the minimum coordinates of a quadratic function other than the one described below. Some background: I was reviewing "Completing Squares and Inequalities" when I came to this inequality: $ y \leq x^2 - 2x +2$. The right-hand side ends up being $(x-1)^2 + 1$, and I now want to find the minimum coordinates of this expression. The vertex can be found with $\frac{-b}{2a}$, and my current rationale is the following: Premise I: since $a > 0$, this equation has a minimum. Premise II: the vertex can be found with $[\frac{-b}{2a}, f(\frac{-b}{2a})]$. * Find the vertex: $\frac{-b}{2a} = \frac{-(-2)}{2(1)} = \frac{2}{2} = 1$ * Then substitute that into the original equation: $(1)^2 - 2(1) +2 = 1 - 2 + 2 = 1$ Alternatively, I can make use of the "vertex form" $a(x - h)^2 + k$ and get that from the "completed square" $(x - 1)^2 + 1$, so we know that the vertex is at $x=1$, and repeat the two steps above. Other than this method, do we have a shortcut for finding the minimum coordinates? The book says it is $(1,1)$, but doesn't show the method.
The derivative is also a useful tool: $\frac{dy}{dx}=2x-2 =0$ at $x=1$, and there $\frac{d^2y}{dx^2} = 2 >0$, so we get a minimum at $x=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to find number of words made using letters of word 'EQUATION' if order of vowels do not change Find the number of words made using the letters of the word 'EQUATION' if the order of the vowels does not change. My attempt: since we do not have to change the order of the vowels, writing _E_U_A_I_O_ we have $6$ places in which to fill the remaining letters. Therefore the total number of cases $= C(6,3) \times 3! + 6 \times 3!= 156$. But the answer is $336$.
You need to put Q, T and N into a blank eight-letter word, and fill in the remaining five places with the vowels in the prescribed order. There are $3!=6$ ways to order the consonants, and $\binom{8}{3}=56$ choices of three positions to put them in. So the answer is $6\times56=336$.
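The count is small enough to brute-force, which confirms $\binom{8}{3}\cdot 3! = 336$:

```python
from itertools import permutations

letters = "EQUATION"          # 8 distinct letters
vowel_order = "EUAIO"         # the vowels in their original relative order

count = sum(
    1
    for w in permutations(letters)
    if "".join(c for c in w if c in vowel_order) == vowel_order
)
assert count == 336
```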
{ "language": "en", "url": "https://math.stackexchange.com/questions/3318935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Parametrisation of a surface in $\mathbb{R}^3$ I am trying to parameterise $M=\bigl\{(x,y,z) \mid e^z=\frac{\cos x}{\cos y}\bigr\}\subset \mathbb{R}^3$ where $x,y \in \bigl(-\frac{\pi}{2},\frac{\pi}{2}\bigr)$ in $(u,v)$, i.e. $2$ variables, but without any luck. Can someone see how that is supposed to be done?
Since $z=\ln\frac{\cos x}{\cos y}$, we immediately get the parametrisation $$\left(x,y,\ln\frac{\cos x}{\cos y}\right)\qquad x,y\in(-\pi/2,\pi/2)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Colors and corresponding numbers The 10 colors Green, Blue, Violet, Red, Orange, Yellow, Cyan, Magenta, Fuchsia, Brown are associated with the numbers 1, 2,…, 10, but we don't know which color corresponds to each number. In a large box there are infinitely many sealed envelopes, each containing one card in one of the above colors. Our target is to find which number corresponds to each color. There is a keypad with the 10 numbers outside this box. Each time, I type a sequence of 5 numbers (10 is considered as ONE number), and 5 envelopes come out of the box (but not in the order I typed the numbers). We can open the envelopes and see the colors but we will still not know which is which. We can repeat this process only 3 times. Is it possible to determine the correspondence of colors with the numbers? What combinations of numbers must we use each of the 3 times? Let's say we first type 11223. Then we get 5 envelopes, of which $2+2$ will have matching colors of cards. So now we know the color that corresponds to number 3, and we also know that 1 & 2 correspond to two other colors (that we also know - but we don't know which is which). We repeat the same process with 44556. Again we know 6, and 4 & 5. In our last turn, we type 1, 4, 7, 8, 9. In the 5 envelopes, we will see 2 of the colors we have already seen in the first two draws, and we will now know 1, 4 and 7 & 8 & 9, but not which is which. We will also know number 10. We can also do 11223, 14456, 57789 but I am still missing one number :(
I think this works: 11234 implies (1)(234) 25567 implies (1)(2)(34)(5)(67) 36889 implies (1)(2)(3)(4)(5)(6)(7)(8)(9) and the only color you haven't seen is (10)
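This strategy can be verified mechanically: simulate a hidden color-number assignment, answer each typed query with its five colors in random order, and check that the deduction recovers everything. A sketch (the bookkeeping follows the (1)(234), (34)(67) narrowing above):

```python
import random
from collections import Counter

COLORS = ["Green", "Blue", "Violet", "Red", "Orange",
          "Yellow", "Cyan", "Magenta", "Fuchsia", "Brown"]

def deduce(hidden, rng):
    """Recover the number -> color map using queries 11234, 25567, 36889."""
    def query(nums):
        cols = [hidden[n] for n in nums]
        rng.shuffle(cols)               # the box ignores the typing order
        return Counter(cols)
    known = {}
    r1 = query([1, 1, 2, 3, 4])
    known[1] = next(c for c, k in r1.items() if k == 2)
    set234 = set(r1) - {known[1]}
    r2 = query([2, 5, 5, 6, 7])
    known[5] = next(c for c, k in r2.items() if k == 2)
    known[2] = next(c for c in r2 if c in set234)
    set34 = set234 - {known[2]}
    set67 = set(r2) - {known[5], known[2]}
    r3 = query([3, 6, 8, 8, 9])
    known[8] = next(c for c, k in r3.items() if k == 2)
    known[3] = next(c for c in r3 if c in set34)
    known[4] = (set34 - {known[3]}).pop()
    known[6] = next(c for c in r3 if c in set67)
    known[7] = (set67 - {known[6]}).pop()
    known[9] = next(c for c in r3
                    if c not in {known[3], known[6], known[8]})
    known[10] = (set(COLORS) - set(r1) - set(r2) - set(r3)).pop()
    return known

rng = random.Random(0)
for _ in range(200):
    colors = COLORS[:]
    rng.shuffle(colors)
    hidden = {n: colors[n - 1] for n in range(1, 11)}
    assert deduce(hidden, rng) == hidden
```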
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
If $\sec A-\cos A=1$, then determine the value of $\tan^2\frac A2$ This is what I tried: $\sec A=\frac{1}{\cos A}$, so the equation becomes $1-\cos^2A=\cos A$. If we solve the above quadratic equation, we get the values of $\cos A$ as $\frac{-1\pm \sqrt5}{2}$. Therefore, $\tan\frac A2$ becomes $$\sqrt \frac{3-\sqrt 5}{1+\sqrt 5}$$ Squaring that value, the answer still looks meaningless. The options are A) $\sqrt 5+ 2$ B) $\sqrt 5-2$ C) $2-\sqrt5$ D) $0$ Since the options are not matching, where am I going wrong?
Let $t=\tan^2\dfrac A2$. Then $\cos A=\dfrac{1-t}{1+t}$. $\dfrac{1+t}{1-t}-\dfrac{1-t}{1+t}=1$ $4t=1-t^2$ $(t+2)^2=5$ $t=-2+\sqrt5$
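A quick numeric check of the result, using the admissible root $\cos A=\frac{\sqrt5-1}{2}$ of $\cos^2A+\cos A-1=0$:

```python
import math

cos_A = (math.sqrt(5) - 1) / 2                # the root of c^2 + c - 1 = 0 in [-1, 1]
assert abs(1 / cos_A - cos_A - 1) < 1e-9      # sec A - cos A = 1 holds

A = math.acos(cos_A)
t = math.tan(A / 2) ** 2
assert abs(t - (math.sqrt(5) - 2)) < 1e-9     # option B
```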
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Operator that has no fixed point Let $X = \{f \in C[0,1]; \|f\|_{\infty}\leq1, f(0)=0, f(1)=1\}$ be a subset of $C[0,1]$ and define the operator $T:X \rightarrow X$ by $Tf(t)=f(t^{2})$. Show that $T$ is continuous and has no fixed point. Could you help me with the latter question?
Suppose $f(t)=f(t^2)$ for some $f\in C[0,1]$ such that $f(0)=0$ and $f(1)=1$. Then $$f(x^{2^n})=f(x)$$ for all $n\geq 1$. As $f$ is continuous and $x^{2^n}$ converges to $0$ as $n\to\infty$ for all $x\in [0,1)$, we have that $f(x)=f(0)=0$ for all $x\in [0,1)$, which contradicts the continuity of $f$ at $1$ together with $f(1)=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Coloring the grid points with three colors About half a year ago I posted a problem: "Coloring grid points with two colors" (The problem) I found it really interesting, so I thought I would do some research. Now I need help with the following question, and I am thankful for every idea, hint and solution. Let $S$ be a finite set of grid points (points in the coordinate system with integer coordinates). Is it always possible to color them with three colors, red, green and blue, such that on each vertical and horizontal line the following statement is true: if there are $R$ red, $G$ green and $B$ blue points, then $|R-G|\leq 1, \ |G-B|\leq 1, \ |B-R|\leq 1$?
Yes, this can always be done. Lemma. This can be done when every vertical and horizontal line with points on it contains exactly $3$ points. Proof. In this case, all three points on a line must receive different colors. We can think of this problem as a graph theory problem. Consider the bipartite graph with vertices on one side corresponding to the horizontal lines, and vertices on the other side corresponding to the vertical lines. Put an edge between two vertices when the corresponding lines intersect. This is a regular graph, since every vertex has three edges out of it. Every regular bipartite graph has a perfect matching (this can be proven using Hall's theorem, for example here): a set of edges covering each vertex exactly once. Back in the grid, this corresponds to a set of points such that every line (vertical or horizontal) contains exactly one of them. Color this set of points red, and remove the corresponding edges from the graph. The remainder is still regular and bipartite (every vertex has two edges left coming out of it), so there is another perfect matching, giving us another set of points with this property. Color this second set of points green, and the leftover points blue. Now every line has exactly one red, blue, and green point on it. In general, we can reduce the problem for an arbitrary grid to an instance of the lemma above. First of all, we can get rid of horizontal lines with more than $3$ points on them. If a line has $k>3$ points, split it up into $\lfloor \frac k3\rfloor$ lines with $3$ points on them, and maybe a leftover line with $1$ or $2$ points. To do this, move the points so that they still have their old $x$-coordinates (and therefore lie on their old vertical lines) but instead of all having the same $y$-coordinate, only share $y$-coordinates in groups of $3$ or less. If we can color the new arrangement of points, we could color the old arrangement. 
On each line with $3$ points, each color is used once; if there is a leftover line of $1$ or $2$ points, no color repeats on it. So each color is used at least $\lfloor \frac k3\rfloor $ times, with $1$ or $2$ colors possibly used $\lfloor \frac k3\rfloor + 1$ times, which still satisfies the conditions. Then do the same thing for the vertical lines. Second, we can get rid of horizontal lines with $1$ or $2$ points on them. On every such line, add new points to get up to $3$, making sure not to reuse $x$-coordinates (so that every point added lies on a new vertical line). The condition on the resulting line is that all $3$ points must be different colors, so if we get rid of the new points, the old line still satisfies the coloring condition. Then do the same thing for the vertical lines. Now all vertical lines have exactly $3$ points on them, but there are some horizontal lines with $1$ point on them (the rest have $3$). The total number of points must be a multiple of $3$ now. So the number of horizontal lines with $1$ point on them is also a multiple of $3$. Group them up in threes, and for every three points $(x_1, y_1)$, $(x_2, y_2)$, $(x_3,y_3)$ we group together, add more points $(x_4,y_1)$, $(x_4,y_2)$, $(x_4,y_3)$ and $(x_5,y_1)$, $(x_5,y_2)$, $(x_5,y_3)$. This creates two new vertical lines with $3$ points on them, and fills the horizontal lines with $1$ point up to $3$. Now we are in the case of the lemma, and so we can color the points in a way that satisfies the condition. Undo everything we've done (deleting points we added, and merging together lines we split up) and we get a coloring of the original grid.
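For small instances the claim can be stress-tested by exhaustive search over all $3^n$ colorings; a minimal sketch (tiny random point sets only, since the search is exponential):

```python
import random
from itertools import product

def balanced_coloring_exists(points):
    """Brute-force: is there a 3-coloring such that on every vertical and
    horizontal line the three color counts pairwise differ by at most 1?"""
    pts = list(points)
    for coloring in product(range(3), repeat=len(pts)):
        ok = True
        for axis in (0, 1):
            lines = {}
            for p, c in zip(pts, coloring):
                lines.setdefault(p[axis], [0, 0, 0])[c] += 1
            if any(max(cnt) - min(cnt) > 1 for cnt in lines.values()):
                ok = False
                break
        if ok:
            return True
    return False

rng = random.Random(1)
for _ in range(30):
    pts = {(rng.randrange(4), rng.randrange(4)) for _ in range(8)}
    assert balanced_coloring_exists(pts)
```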
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does unit group of valuation ring contain $K^{\times}$ Let $K$ be a number field, $K_v$ the completion of $K$ at a non-archimedean valuation $v$ of $K$, and $U_v$ the unit group of the valuation ring $R_v$. Then why is $K^{\times}\subset U_v$ (for almost all $v$)? This would imply that every principal idele is an idele. Edit: this is wrong.
Let $K = \Bbb{Q}$; then $$\Bbb{A_Q} = \{ a_\infty\prod_p a_p \in \Bbb{R} \times \prod_p \Bbb{Q}_p, \text{ for all but finitely many } p, a_p \in \Bbb{Z}_p\}$$ It is a commutative unital ring whose $1$ is $1_\infty \prod_p 1_p$ (with $1_p$ the $1$ of $\Bbb{Q}_p$) and $$\Bbb{A_Q}^\times = \{ a \in \Bbb{A_Q}, \exists b \in \Bbb{A_Q}, ab = 1\}$$ $$=\{ a_\infty\prod_p a_p \in \Bbb{R}^\times \times \prod_p \Bbb{Q}_p^\times, \text{ for all but finitely many } p, a_p \in \Bbb{Z}_p^\times\}$$ For $x \in \Bbb{Q}$ let $x_p$ be its image in $\Bbb{Q}_p$; then the diagonal embedding $x \mapsto x_\infty \prod_p x_p$ is a ring homomorphism $\Bbb{Q \to A_Q}$ and it sends $\Bbb{Q^\times \to A_Q^\times}$. With $x = \frac{n}{m}$ we have $x \in \Bbb{Z}_p^\times$ whenever $p \nmid nm$. The other thing to know is that $\prod_p \Bbb{Z}_p = \varprojlim \Bbb{Z} /(n)$ is the set of limits of sequences of integers that converge $\bmod n$ for every $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Maximizing $\frac{a^2+6b+1}{a^2+a}$, where $a=p+q+r=pqr$ and $ab=pq+qr+rp$ for positive reals $p$, $q$, $r$ Given $a$, $b$, $p$, $q$, $r \in\mathbb{R_{>0}}$ s.t. $$\begin{cases}\phantom{b}a=p+q+r=pqr \\ab =pq+qr+rp\end{cases} $$ Find the maximum of $$\dfrac{a^2+6b+1}{a^2+a}$$ This question is so terrifying that I don't even know how to start it. I've tried different approaches, such as multiplying the numerator and denominator by $a$ and substituting, but it does not seem useful (maybe?). Anyway, please comment or answer if you solve it or have clues that may help solve this question.
Here is an approach, which provides, so I think, a certain understanding of the "working domain", prior to the consideration of maximisation issue. Your issue is to find values of $a$ and $b$ such that polynomial equation : $$(x-p)(x-q)(x-r)=0 \ \ \iff$$ $$x^3-(p+q+r)x^2+(qr+pr+pq)x-pqr=0 \ \ \iff \tag{2}$$ $$x^3-ax^2+abx-a=0 \tag{1}$$ * *(i) has three real roots ($p,q$ and $r$), *(ii) that are all positive (implying that $a$ and $b$ are themselves positive). Take a look at the following figure. Each little red circle represents a couple of (random) values $(a,b)$ ($a$ : abscissas, $b$ : ordinates) fulfilling condition (i). Among them, clearly, only the circles with a star also fulfill condition (ii), occupying a very tiny domain... What is the limit of these circles, i.e., how has the green curve been obtained ? Its equation is $d=0$ where $d$ denotes the so-called discriminant $d$ of parametric equation (1), a particular case of a "resultant", concepts that you will see in University if you do a degree in mathematics. See remark 3 below. Here it is given under the form of a determinant : $$d=\begin{vmatrix} 1&-a&ab&-a&0\\ 0&1&-a&ab&-a\\ 3&-2a&ab&0&0\\ 0&3&-2a&ab&0\\ 0&0&3&-2a&ab\\ \end{vmatrix}=-a^2(a^2b^2-4a^2-4ab^3+18ab-27)\tag{3}$$ (maybe, you have recognized in the two first rows of (3) the entries of polynomial (1) and on the 3rd, 4th and 5th rows the entries of its derivative, another concept that hopefuly you haven't met yet. Please note the progressive shifting). Now, the maximisation issue : one finds (I don't give a proof) that it is the point at the extreme left of the "good" spiky region that achieves the maximality, which is exactly point $(3\sqrt{3},\sqrt{3})$ found by @Cesareo. 
Remarks: 1) Considering $d=0$ as a quadratic in the variable $a$, with coefficients depending on the parameter $b$, one can express $a$ as a function of $b$: $$a=\dfrac{2b^3-9b\pm\sqrt{\delta}}{b^2-4} \ \ \text{where} \ \ \delta=(2b^3-9b)^2+27(b^2-4)=4(b^2-3)^3,$$ allowing one to plot the green curve rather easily. 2) Formulas dealing with $p+q+r$, $pq+qr+rp$ and $pqr$ are called Vieta's formulas. 3) Ask your professor why a discriminant equal to $0$, already for a quadratic equation, expresses the fact that there are double roots.
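Numerically, the constraint set is convenient to scan because $p+q+r=pqr$ with $p,q,r>0$ is exactly the identity $\tan A+\tan B+\tan C=\tan A\tan B\tan C$ for the angles of an acute triangle. A sketch confirming the maximum $1+\frac{\sqrt3}{9}\approx 1.19245$ at $p=q=r=\sqrt3$ (i.e. $a=3\sqrt3$, $b=\sqrt3$):

```python
import math

def objective(p, q, r):
    a = p + q + r                      # equals p*q*r on the constraint set
    b = (p * q + q * r + r * p) / a    # from a*b = pq + qr + rp
    return (a * a + 6 * b + 1) / (a * a + a)

best = 0.0
steps = 300
for i in range(1, steps):
    for j in range(1, steps):
        A = i * math.pi / 2 / steps
        B = j * math.pi / 2 / steps
        C = math.pi - A - B
        if not 0 < C < math.pi / 2:    # acute triangles only: all tangents > 0
            continue
        best = max(best, objective(math.tan(A), math.tan(B), math.tan(C)))

target = 1 + math.sqrt(3) / 9          # value at the equilateral point
assert best <= target + 1e-9
assert abs(best - target) < 1e-3
```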
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Help with this inequality : $-1 \le \frac{1+x^2}{2x} \le 1$ I have to solve $$ -1 \le \frac{1+x^2}{2x} \le 1 $$ My attempt at a solution: $$ -1 \le \frac{1+x^2}{2x} \quad\text{and}\quad \frac{1+x^2}{2x} \le 1 $$ $$ 0\le \frac{1+x^2}{2x} + 1 \quad\text{and}\quad \frac{1+x^2}{2x} - 1 \le 0, $$ i.e. $\frac{x^2+2x+1}{2x}$ and $\frac{x^2-2x+1}{2x}$, so $$ 0\le \frac{(x+1)^2}{2x} \quad\text{and}\quad \frac{(x-1)^2}{2x} \le 0. $$ Since the numerator is positive, the sign depends on the denominator, i.e. $0\le x$ and $x\le 0$, therefore the answer is $0$, which is wrong, as $1$ and $-1$ also satisfy the inequality, and zero would make $\frac{1+x^2}{2x}$ undefined. So where did I go wrong?
This problem is equivalent to $$\left|\frac{1+x^2}{2x}\right|\le 1,$$ which gives $$(1+x^2)^2\le(2x)^2,$$ which gives $$(1-x)^2(1+x)^2\le 0.$$ Since LHS of last inequality can never be negative, it follows that the only solution will occur when LHS vanishes. Can you continue now?
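A quick numeric scan over $[-3,3]$ (step $0.01$, skipping $x=0$) confirms that the solution set is exactly $\{-1,1\}$:

```python
sols = [x / 100 for x in range(-300, 301)
        if x != 0 and -1 <= (1 + (x / 100) ** 2) / (2 * (x / 100)) <= 1]
assert sols == [-1.0, 1.0]
```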
{ "language": "en", "url": "https://math.stackexchange.com/questions/3319993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Prove that $T=\frac{2X}{\sqrt{Y}} \sim t_4$ where $X \sim N(0,1)$ and $Y \sim \chi^2_4$ Question. Let $X,Y$ be independent random variables. Suppose $X \sim N(0,1)$ and $Y \sim \chi^2_4$; use a bivariate transformation to show that $T=\frac{2X}{\sqrt{Y}} \sim t_4$. Attempt. Use the bivariate transformation $T=\frac{2X}{\sqrt{Y}}, S=\sqrt{Y}$. So the inverse transform is given by $X=\frac{1}{2}TS, Y=S^2$. The Jacobian is thus $s^2$. Then I used the transformation formula to derive the joint density function $f_{T,S}$ and then integrated over $s$ to get the marginal density function of $T$. But the expression just got too messy, so I wonder if my initial transform was the best way. Would appreciate any help/hint.
If $U,\,V$ are independent continuous random variables of respective pdfs $f_U,\,f_V$ with $V$ of support $[0,\,\infty)$, $W:=U/V$ has pdf $f_W(w):=\int_0^\infty vf_U(wv)f_V(v)dv$. With the choice$$f_U(u)=\frac{1}{\sqrt{2\pi}}\exp-\frac{u^2}{2},\,f_V(v)=\frac12 v^3\exp -\frac{v^2}{2}$$so $U\sim N(0,\,1),\,V\sim\chi_4$, $$f_{U/V}(w)=\int_0^\infty\frac{1}{\sqrt{8\pi}}v^4\exp-\frac{(1+w^2)v^2}{2}dv\propto(1+w^2)^{-5/2}.$$Similarly, $2U/V$ has a pdf $\propto\left(1+\frac{w^2}{4}\right)^{-5/2}$, so is $t_4$-distributed. You're welcome as an exercise to double-check this method gets the right coefficients for unitarity.
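The claim can also be spot-checked by simulation (standard library only: the $\chi^2_4$ draw is a sum of four squared standard normals, and the closed-form $t_4$ CDF used below, $F(x)=\tfrac12+\tfrac34\bigl(s-\tfrac{s^3}{3}\bigr)$ with $s=x/\sqrt{4+x^2}$, follows from integrating the density via the substitution $x=2\tan\theta$):

```python
import math
import random

random.seed(0)
n = 200_000

def t4_cdf(x):
    s = x / math.sqrt(4 + x * x)
    return 0.5 + 0.75 * (s - s ** 3 / 3)

samples = []
for _ in range(n):
    X = random.gauss(0, 1)
    Y = sum(random.gauss(0, 1) ** 2 for _ in range(4))   # chi-squared, 4 df
    samples.append(2 * X / math.sqrt(Y))

# Monte Carlo error of each empirical CDF value is about 0.001 here.
for x in (-1.0, 0.5, 2.0):
    empirical = sum(s <= x for s in samples) / n
    assert abs(empirical - t4_cdf(x)) < 0.01, x
```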
{ "language": "en", "url": "https://math.stackexchange.com/questions/3320125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find "A" in this equation. $$ \sqrt[3] {A-15\sqrt{3}} + \sqrt[3] {A+15\sqrt{3}} = 4 $$ Find $A$. Cubing repeatedly took too much time; is there any easier method?
If there exists a cubic polynomial with integer coefficients and roots $$0,\ \sqrt[3] {A-15\sqrt{3}}\,,\ \sqrt[3] {A+15\sqrt{3}}, $$ then it will be of the form $x^3-4x^2 + \sqrt[3]{A^2-675}\,x$. You just need to find $A$ such that $A^2-675$ is a perfect cube. Trivially $A^2 = 676$ will do, giving $A=26$; indeed $26\pm 15\sqrt{3}=(2\pm\sqrt{3})^3$, so the sum of the cube roots is $(2+\sqrt{3})+(2-\sqrt{3})=4$.
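A one-line numeric check of $A=26$ (both radicands are positive, so real cube roots via fractional powers are safe):

```python
import math

A = 26
u = (A - 15 * math.sqrt(3)) ** (1 / 3)
v = (A + 15 * math.sqrt(3)) ** (1 / 3)
assert abs(u + v - 4) < 1e-9     # the original equation holds
assert A * A - 675 == 1          # A^2 - 675 = 1 is a perfect cube
```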
{ "language": "en", "url": "https://math.stackexchange.com/questions/3320229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
constant evaluation when using differential equations. This is in regard to constant evaluation when using differential equations. * A solution is given to be: $$y=(e^{2x}+e^x ) \ln(1+e^{-x} )-(c_1+1) e^x+(c_2-1) e^{2x}$$ * A simplified solution in an answer book is given as: $$y=(e^{2x}+e^x ) \ln(1+e^{-x} )+(c_1 ) e^x+(c_2 ) e^{2x}$$ There is a change of sign of $c_1$ in the third term. $c_1$ is a constant and is not specified to be positive or negative; or is it supposed to be positive, with that information simply not specified? I never know how to interpret this kind of result. Can someone explain, please? Thank you. Sincerely, Mary A. Marion
Both solutions are correct and they are equivalent. The constants $C_1$ and $C_2$ are just placeholders for numbers to be found from initial values, and you may as well call them $-C_1-1$ or $C_2+1$. Once the initial values are given, the constants are found and the final result is unique regardless of the notation for the constants.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3320339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why must there be disjoint sets $A$ and $B$ such that $m^*(A \cup B) < m^*(A) + m^*(B)$? (Theorem 18, Royden) I am trying to follow Theorem 18 in Royden's Real Analysis book (fourth edition). It says the following (the theorem and its proof appear as an image in the original post). The "preceding theorem" is Vitali's theorem, which says that any set with positive outer measure contains a non-measurable subset. I am trying to follow the proof but am not understanding why the definition of measurability (combined with the assumption that $m^*(A \cup B) = m^*(A) + m^*(B)$) implies that every set must then be measurable. I feel like I'm missing something obvious here, because he doesn't elaborate on that fact. Second, why does the contradiction from assuming equality prove the theorem? Couldn't it still be the case that $m^*(A \cup B) > m^*(A) + m^*(B)$ for two disjoint sets $A$ and $B$?
Any outer measure (like $m^*$) must satisfy countable subadditivity for all sets, in particular for $A$ and $B$, which is why $m^*(A\cup B)>m^*(A)+m^*(B)$ cannot occur. Regarding your first question, the equality for all sets means that the outer measure $m^*$ actually satisfies all the conditions of a measure, and moreover it inherits from the outer measure the property that it is defined for all sets. So combining the "best" of both worlds, it is a measure defined for all sets, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3320469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }