Restriction of adjoint map for normal subgroup of Lie group Let $G$ and $H$ be Lie groups, such that $H$ is a normal subgroup of $G$. Let $\mathfrak{g},\mathfrak{h}$ denote the corresponding Lie algebras. This means that $$ \forall g\in G, h \in H, ghg^{-1} \in H.$$ In particular, since $\forall X\in\mathfrak{g}, e^{tX} \in G$, we have $$ \forall X \in \mathfrak{g}, h \in H, e^{tX}h e^{-tX} \in H.$$ Does it follow from this, that $$\forall X\in\mathfrak{g}, Y\in\mathfrak{h}, e^{tX}Ye^{-tX} \in \mathfrak{h}?$$ Phrased differently, can one restrict the adjoint map $\text{Ad}_{e^{tX}} : \mathfrak{g}\rightarrow\mathfrak{g}$ to $\mathfrak{h}$? I feel like the answer is yes, and the explanation is simple, but I am not seeing it, so perhaps it is not true?
Each $g\in G$ defines a map $C_g:H\rightarrow H$ by $C_g(h)=ghg^{-1}$; this lands in $H$ precisely because $H$ is normal in $G$. The differential of $C_g$ at the neutral element $e\in H$ is the adjoint map $\mathrm{Ad}_g = d(C_g)_e:\mathfrak{h}\rightarrow \mathfrak{h}$, where $\mathfrak{h}$ is the Lie algebra of $H$. In particular, taking $g=e^{tX}$ for $X\in\mathfrak{g}$ shows that $\mathrm{Ad}_{e^{tX}}$ maps $\mathfrak{h}$ into $\mathfrak{h}$, which is exactly the desired restriction.
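As a concrete sanity check (my own toy example, not part of the answer): take $G$ to be the invertible upper-triangular $2\times2$ real matrices and $H$ its normal subgroup of unipotent matrices $\begin{pmatrix}1&b\\0&1\end{pmatrix}$; then $\mathfrak{h}$ consists of the strictly upper-triangular matrices, and $e^{tX}Ye^{-tX}$ indeed stays in $\mathfrak{h}$:

```python
import numpy as np
from scipy.linalg import expm

# G = invertible upper-triangular 2x2 matrices, H = unipotent matrices.
# g = upper-triangular matrices, h = strictly upper-triangular matrices.
X = np.array([[0.7, 1.3], [0.0, -0.2]])   # X in g
Y = np.array([[0.0, 2.5], [0.0, 0.0]])    # Y in h

for t in (0.1, 1.0, 3.0):
    g = expm(t * X)
    AdY = g @ Y @ np.linalg.inv(g)        # Ad_{e^{tX}} Y
    # the result must again be strictly upper triangular, i.e. lie in h
    assert abs(AdY[0, 0]) < 1e-10
    assert abs(AdY[1, 0]) < 1e-10
    assert abs(AdY[1, 1]) < 1e-10
```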
{ "language": "en", "url": "https://math.stackexchange.com/questions/4174179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Confusion understanding probability I am a beginner in this forum - please don't judge me too harshly. I understand that my question is a newbie one, but I have read a lot and couldn't understand the concept of summing probabilities. Here is the problem I cannot understand: We have a die with 6 possible outcomes 1...6. Rolling the die once, the chance to hit 3 is 1/6. What is the chance to hit 3 if I throw the die 6 times or 8 times? Simply summing the probabilities doesn't make sense to me. I mean 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 8/6 is greater than 1. How come a probability gets greater than 1? My reasoning must be wrong.
You are right that each throw of a die has a $(1/6)$ chance of rolling a $(3)$. Therefore, your addition is correct in the sense that if you roll the die $8$ times, the expected number of times that you will roll a $(3)$ is $$8 \times (1/6) = (8/6) = (4/3) > 1.$$ Your intuition is also right that if you roll the die $8$ times, the chance of at least one of the rolls coming up $(3)$ must be less than $(1)$. This raises the question: if the distribution of $8$ rolls is such that you expect $(4/3)$ of the rolls to be a $3$, how can the chance of not rolling any $(3)$ in $8$ rolls still be positive? It is because there is a (small) chance that, in $8$ rolls, a $3$ appears two or more times. It is the possibility of a $3$ appearing two or more times that balances the fact that there is still a positive chance that there will be no $(3)$'s rolled. However, this is all intuitive hand-waving, which doesn't mean much without math to back it up. Suppose that you roll a die $8$ times. There are $(6)$ possibilities for each roll. Therefore, the total number of possible sequences of $8$ rolls is $6^8$. For $k \in \{0,1,2,\cdots,8\}$, a natural question is: how many of the $6^8$ sequences will result in exactly $k$ of the rolls coming up $(3)$? There are $\frac{8!}{k!(8-k)!} = \binom{8}{k}$ ways of selecting $k$ rolls out of $8$, so that those rolls (and only those rolls) come up $3$. Once the $k$ rolls are selected, you then have $(8-k)$ rolls whose only constraint is that each is any number other than a $(3)$. Therefore, there are exactly $\left[\binom{8}{k} \times 5^{(8-k)}\right]$ sequences of $8$ rolls in which $3$ comes up exactly $k$ times. Therefore, the probability of this happening is $$P(k) = \frac{\binom{8}{k} \times 5^{(8-k)}}{6^8}.$$ You will find that:

* $\sum_{k = 0}^8 [k \times P(k)] = (8/6) = (4/3)$, as expected.
* $P(0) = \frac{\binom{8}{0} \times 5^{(8-0)}}{6^8} = \left(\frac{5}{6}\right)^8 > 0.$
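The two bullet points above can be checked in a few lines (a sketch using the exact formula $P(k)$ from the answer):

```python
from math import comb

# Distribution of the number of 3's in 8 rolls of a fair die.
n = 8
P = [comb(n, k) * 5**(n - k) / 6**n for k in range(n + 1)]

assert abs(sum(P) - 1) < 1e-12                                 # probabilities sum to 1
assert abs(sum(k * p for k, p in enumerate(P)) - 8/6) < 1e-12  # expected count 4/3
assert abs(P[0] - (5/6)**8) < 1e-12                            # chance of no 3 at all
print(1 - P[0])   # chance of at least one 3: about 0.767
```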
{ "language": "en", "url": "https://math.stackexchange.com/questions/4174387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Calculating this line integral (Finding the intersection curve and which parametrization to choose). Let $C$ be the part of the intersection curve of the paraboloid $z=x^2+y^2$ with the plane $2x+2y-z+2=0$ that starts from the point $(3,1,10)$ and ends at $(1,3,10)$. We define $f(x,y,z)=z-y^2-2x-1$. Calculate $\int_{C}f\, dl$. My work: Finding $C$: from the plane equation, $z=2x+2y+2$. Substituting that into the paraboloid equation: $2x+2y+2=x^2+y^2 \Longrightarrow x^2-2x+y^2-2y=2 \Longrightarrow (x-1)^2+(y-1)^2=4$. I find this result of getting a circle very weird, because the plane isn't parallel to the $z=0$ plane, so I can't see why I received a circle; I expected an ellipse or something. The only thing I can think of is that I received the "shadow" of the ellipse on the $xy$ plane, but I would appreciate any help understanding what has happened here! Anyway, I also got stuck on which parametrization I should choose: is it $x=r\cos(t), y=r\sin(t), z=r^2$ OR $x=1+r\cos(t), y=1+r\sin(t), z=?$ Then if I substitute it in the circle's equation I can find $z$. But I'm not sure if I can do that, since $C$ isn't all of the circle, just part of it. I would appreciate any help, thanks in advance! Edit After the help from the answers: If I define $\vec r(t)=(1+2\cos(t), 1+2\sin(t), 4\cos(t)+4\sin(t)+6)$ to be the vector that draws the circle, then $\vec r'(t) = (-2\sin(t), 2\cos(t), 4\cos(t)-4\sin(t))$, where $0 \le t \le \frac{\pi}{2}$. $f(x,y,z)=z-y^2-2x-1=2x+2y+2-y^2-2x-1=-y^2+2y+1 = 2 - (y-1)^2$ And so my integral: $$ \begin{split} \int_C f\, dl &= \int_0^{\pi/2} 2\cos(2t) \sqrt{(2\sin(t))^2 + (2\cos(t))^2 + (4\cos(t)-4\sin(t))^2}\, dt \\ &= \int_0^{\pi/2} 2\cos(2t) \sqrt{4 + (16\cos(t)^2 - 16\sin(2t) + 16\sin(t)^2)}\, dt \\ &= \int_0^{\pi/2} 2\cos(2t)\sqrt{20-16\sin(2t)}\, dt \end{split} $$ I'm having a hard time deciding how to do this integral
You have a circle (not a disc) of radius $2$ centered at $(1,1)$, which would suggest $$ \begin{split} x &= 1 + 2\cos t \\ y &= 1 + 2\sin t \end{split} $$ from where $z = 2x+2y+2 = 6 + 4\cos t + 4\sin t$. The last question is about the range of $t$. You need the part that starts at $(3,1,10)$ so you need $3 = x = 1 + 2\cos t$ and $1 = y = 1 + 2\sin t$, which implies $\cos t = 1$ and $\sin t = 0$, which you can solve for $t = t_0$. Similarly, plug in the other end and solve for $t = t_1$. Then you integrate over $t \in [t_0, t_1]$.
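For what it's worth, the integral can be evaluated numerically straight from this parametrization (a sketch using SciPy; the integrand $f(\vec r(t))\,|\vec r\,'(t)|$ is assembled directly, with no algebraic simplification, so it is independent of any intermediate arithmetic). One can also check by hand, with the substitution $u=2t$, that the result is exactly $0$: the antiderivative of $\cos u\,\sqrt{C-16\sin u}$ takes equal values at $u=0$ and $u=\pi$.

```python
import numpy as np
from scipy.integrate import quad

# r(t) = (1+2cos t, 1+2sin t, 6+4cos t+4sin t), t in [0, pi/2],
# running from (3,1,10) to (1,3,10).
def integrand(t):
    x = 1 + 2*np.cos(t)
    y = 1 + 2*np.sin(t)
    z = 6 + 4*np.cos(t) + 4*np.sin(t)
    f = z - y**2 - 2*x - 1
    dr = np.array([-2*np.sin(t), 2*np.cos(t), -4*np.sin(t) + 4*np.cos(t)])
    return f * np.linalg.norm(dr)

val, err = quad(integrand, 0, np.pi/2)
print(val)   # 0 to numerical precision
```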
{ "language": "en", "url": "https://math.stackexchange.com/questions/4174708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding my mistake - residue I'm trying to find the residue of $$g\left(z\right)=\frac{z}{\left(e^{iz}+1\right)^{2}}$$ around $-\pi$. I was able to write: $$g\left(z\right)=\frac{1}{\left(z+\pi\right)^{2}}\left(\frac{z+\pi}{\left(-i-\frac{z+\pi}{2}+...\right)^{2}}-\frac{\pi}{\left(-i-\frac{z+\pi}{2}+...\right)^{2}}\right)$$ So I suppose it is a pole of order 2. That means $$Res\left(g,-\pi\right)=\lim_{z\to-\pi}\frac{d}{dz}\left(\frac{z+\pi}{\left(-i-\frac{z+\pi}{2}+...\right)^{2}}-\frac{\pi}{\left(-i-\frac{z+\pi}{2}+...\right)^{2}}\right)$$ So I seem to get $-1$, although in a wolfram calculator I get $-1-i\pi$. What have I done wrong? *Maybe I have to rethink changing the limit and the derivative?
Yes, you are correct. We need to differentiate and then take the limit. To carry out the evaluation of the limit we proceed as follows. We begin with $$e^{iz}+1=-\left(i(z+\pi)-\frac12(z+\pi)^2+\dots \right)$$ where "$+\dots$" means "$+O\left((z+\pi)^3\right)$." Then, we see that $$\begin{align} \lim_{z\to-\pi}\frac{d}{dz}\frac{z(z+\pi)^2}{(e^{iz}+1)^2}&=\lim_{z\to -\pi}\frac{d}{dz}\left(\frac{z}{\left(i-\frac12(z+\pi)+\dots \right)^2}\right)\\\\ &=\lim_{z\to -\pi}\left(\frac1{\left(i-\frac12(z+\pi)+\dots \right)^2}+\frac{z\left(1+O(z+\pi)\right)}{\left(i-\frac12(z+\pi)+\dots \right)^3}\right)\\\\ &=\frac1{i^2}-\frac{\pi}{i^3}\\\\ &=-1-i\pi \end{align}$$ as was to be shown!
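The value can be double-checked numerically via the residue theorem: integrate $g$ over a small circle around $-\pi$ and divide by $2\pi i$ (a sketch; the radius $0.5$ is an arbitrary choice, safe because the nearest other poles are at $\pi$ and $-3\pi$):

```python
import numpy as np

# Res(g, -pi) = 1/(2*pi*i) * contour integral of g around -pi.
def g(z):
    return z / (np.exp(1j * z) + 1)**2

N = 4000
theta = np.linspace(0, 2*np.pi, N, endpoint=False)
eps = 0.5
z = -np.pi + eps * np.exp(1j * theta)
dz = 1j * eps * np.exp(1j * theta)        # dz/dtheta
# Riemann sum of the contour integral, divided by 2*pi*i:
res = np.sum(g(z) * dz) / (N * 1j)

print(res)   # approximately -1 - pi*i, matching the computation above
```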
{ "language": "en", "url": "https://math.stackexchange.com/questions/4174887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Determine the Galois group of the splitting field of $f(x) = x^4+x+t \in F_2(t)[x]$ I used Gauss's lemma to show that the polynomial is irreducible since it is irreducible in $F_2[t,x]$, used the derivative GCD test to conclude that the polynomial is separable, and finally started by taking the quotient $F_2(t)[x]/(f(x))$ which gives a degree $4$ extension that contains at least $2$ roots: $\bar{x}, \bar{x}+1$ (using $a^2+b^2= (a+b)^2$). However, from here, I can't seem to find other roots or clearly show the non-existence of any other roots to have to take another quotient by a degree $2$ factor to find that the Galois group is $D_4$ (thought of as a subgroup of $S_4$). It seems that all subgroups of $S_4$ of order $2^n$ are transitive so that doesn't help. Let me know if you can think of any ways to finalize the computation of the Galois group.
Let $\alpha$ be a root of $f=X^4+X+t$ in an algebraic closure of $k=\mathbb{F}_2(t)$. If $j$ is a primitive third root of $1$ (so $j^2+j+1=0$), the roots of $f$ are $\alpha,\alpha+1,\alpha+j,\alpha+j+1$, and the splitting field of $f$ is $L=k(\alpha,j)$. Let us prove that $j\notin k(\alpha)$. Otherwise, $j=a+b\alpha+c\alpha^2+d\alpha^3$ with $a,b,c,d\in k$. Then $j^2=a^2+b^2\alpha^2+c^2(\alpha+t)+d^2(\alpha^3+t\alpha^2)$, since $\alpha^4=\alpha+t$ and $\alpha^6=\alpha^2\alpha^4=\alpha^2(\alpha+t)$. Thus $0=j^2+j+1=(a^2+a+tc^2+1)+(b+c^2)\alpha+(b^2+c+td^2)\alpha^2+(d+d^2)\alpha^3$. In particular, $b=c^2$, so $0=(a^2+a+tc^2+1)+(c^4+c+td^2)\alpha^2+(d+d^2)\alpha^3$. Now $d^2+d=0$, so $d=0$ or $1$. If $d=1$, we would get $c^4+c+t=0$, which is not possible since $f$ is irreducible. So $d=0$ and $c^4+c=0=c(c+1)(c^2+c+1)$. Since $X^2+X+1$ is irreducible over $k$ (easy), we get $c=0$ or $1$. If $c=0$, we get $a^2+a+1=0$, which is not possible since $X^2+X+1$ is irreducible over $k$. Hence $c=1$, and $a^2+a+t+1=0$. But $X^2+X+(t+1)$ is irreducible over $k$, so we get a contradiction. Finally, $j\notin k(\alpha)$, so $[k(\alpha)(j):k(\alpha)]=2$ and $[L:k]=8$. Now, the subextension $k(\alpha)/k$ is not Galois (since the splitting field of $f$ has degree $8$) and has degree $4$, so the Galois group has a non-normal subgroup of order $2$. The only group of order $8$ with this property is the dihedral group $D_4$.
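The description of the roots rests on the characteristic-2 identity $(\alpha+c)^4=\alpha^4+c^4$, so that $f(\alpha+c)=f(\alpha)+c^4+c$ and $\alpha+c$ is a root exactly when $c^4+c=0$, i.e. $c\in\{0,1,j,j+1\}$. A quick symbolic check of that identity (a sketch: we verify that every coefficient of the difference is even, hence zero mod $2$):

```python
from sympy import symbols, expand, Poly

X, c, t = symbols('X c t')
f = X**4 + X + t
# In characteristic 2, f(X + c) - f(X) - (c^4 + c) should vanish:
delta = expand(f.subs(X, X + c) - f - (c**4 + c))
# all surviving integer coefficients (4, 6, 4 from the binomial expansion)
# are even, so delta = 0 over F_2
assert all(coeff % 2 == 0 for coeff in Poly(delta, X, c).coeffs())
```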
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given $x= \left ( 6, 2, -3 \right ),$ how to find the coordinates of $x$ with respect to the basis $V$ In the vector space $\mathbb{R}^{3},$ we are given two systems of vectors $$U= \left \{ u_{1}= \left ( 4, 2, 5 \right ), u_{2}= \left ( 2, 1, 3 \right ), u_{3}= \left ( 3, 1, 3 \right ) \right \}$$ $$V= \left \{ v_{1}= \left ( 5, 2, 1 \right ), v_{2}= \left ( 6, 2, 1 \right ), v_{3}= \left ( -1, 7, 4 \right ) \right \}$$ It was proved that $U$ and $V$ are two bases of $\mathbb{R}^{3}.$ (Source: StackMath/@haidangel_.) In the edited part, I added a bonus question: given $x= \left ( 6, 2, -3 \right ),$ how does one find the coordinates of $x$ with respect to the basis $V$? Now I have two approaches, but I don't know which one is correct. I need some help. First approach. Consider the linear combination $$\alpha_{1}v_{1}+ \alpha_{2}v_{2}+ \alpha_{3}v_{3}= x$$ This is equivalent to the matrix equation $$\begin{bmatrix} 5 & 6 & -1\\ 2 & 2 & 7\\ 1 & 1 & 4 \end{bmatrix}\begin{bmatrix} \alpha_{1}\\ \alpha_{2}\\ \alpha_{3} \end{bmatrix}= \begin{bmatrix} 8\\ 2\\ -3 \end{bmatrix}$$ To find the solution, consider the augmented matrix. Applying elementary row operations, we obtain $$\left [ \begin{array}{rrr|r} 5 & 6 & -1 & 8\\ 2 & 2 & 7 & 2\\ 1 & 1 & 4 & -3 \end{array} \right ]\xrightarrow{R_{3}\leftrightarrow R_{2}}\left [ \begin{array}{rrr|r} 5 & 6 & -1 & 8\\ 1 & 1 & 4 & -3\\ 2 & 2 & 7 & 2 \end{array} \right ]\xrightarrow{2R_{2}- R_{3}}\left [ \begin{array}{rrr|r} 5 & 6 & -1 & 8\\ 1 & 1 & 4 & -3\\ 0 & 0 & 1 & -8 \end{array} \right ]$$ $$\left [ \begin{array}{rrr|r} 5 & 6 & -1 & 8\\ 1 & 1 & 4 & -3\\ 0 & 0 & 1 & -8 \end{array} \right ]\xrightarrow{6R_{2}- 25R_{3}- R_{1}}\left [ \begin{array}{rrr|r} 1 & 0 & 0 & 174\\ 1 & 1 & 4 & -3\\ 0 & 0 & 1 & -8 \end{array} \right ]$$ It follows that the solution is $\alpha_{1}= 174, \alpha_{3}= -8, \alpha_{2}= -3- \alpha_{1}- 4\alpha_{3}= -145.$ We obtain $$\left [ x \right ]_{V}= \begin{bmatrix} 174\\ -145\\ -8 \end{bmatrix}$$ Second approach.
By the coordinate transformation equation $$\left [ x \right ]_{V}= P_{V\rightarrow E}\cdot\left [ x \right ]_{E}= \left ( P_{E\rightarrow V} \right )^{-1}\cdot\left [ x \right ]_{E}= \begin{bmatrix} 5 & 6 & -1\\ 2 & 2 & 7\\ 1 & 1 & 4 \end{bmatrix}^{-1}\begin{bmatrix} 6\\ 2\\ -3 \end{bmatrix}= \begin{bmatrix} 176\\ -147\\ -8 \end{bmatrix}$$
In the first approach, you wrote $8$ in place of $6$. Otherwise, both approaches are correct.
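A quick numerical cross-check of the second approach's numbers, solving the same linear system with the right-hand side $(6,2,-3)$:

```python
import numpy as np

# Columns of V are the basis vectors v1, v2, v3.
V = np.array([[5.0, 6.0, -1.0],
              [2.0, 2.0,  7.0],
              [1.0, 1.0,  4.0]])
x = np.array([6.0, 2.0, -3.0])

coords = np.linalg.solve(V, x)     # coordinates of x in the basis V
print(coords)                      # [176., -147., -8.]
assert np.allclose(coords, [176.0, -147.0, -8.0])
```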
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Geometry problem from RMO 2016 The following problem is from RMO 2016. Initially it seems pretty trivial, but I am not able to find an easy or elegant solution. The official solution is not intuitive. I am looking for an alternative elegant proof, framed properly the way we write solutions in contests, because in such problems showing the exact steps is very critical. Let $ABC$ be a right-angled triangle with $\angle B=90^{\circ}$. Let $I$ be the incentre of $ABC$. Let $AI$ extended intersect $BC$ in $F$. Draw a line perpendicular to $AI$ at $I$. Let it intersect $AC$ in $E$. Prove that $IE = IF$. So far I have tried taking a point $E'$ such that $IE'=IF$ and then proving that $E$ and $E'$ coincide.
You can solve the problem just by playing with angles. In the picture, all red angles are equal to $90^{\circ}$. In addition, we will prove that all green angles are the same, so $\angle BAF=\angle FAC=\angle GIF=\angle EIH$. That $\angle BAF=\angle FAC$ is obvious because $AF$ is the angle bisector. $IG$ is parallel to $AB$, so $\angle BAF=\angle GIF$. In addition, since $\angle AIE=90^{\circ}$, we get $\angle EIH=\angle FAC$. Finally, $GI=IH$ because both equal the inradius. Therefore, the triangles $GIF$ and $HIE$ are congruent by the case $ASA$. This implies that $IF=IE$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Can we simplify objective function based on the property of optimal solution? Consider the non-convex optimization problem \begin{equation} \begin{aligned} \max_{x} & \quad f(x)\\ s.t. & \quad 0 \leq x\leq 1 \end{aligned} \tag{1} \end{equation} where $f(x)$ is non-concave. But $\forall y \in X = \{x| Ax=b\}$ we have $f(y) = g(y)$, where $g(y)$ is concave. We know the optimal solution $x^* \in X$ . So can we transform the non-convex optimization problem into the following convex problem? \begin{equation} \begin{aligned} \max_{x} & \quad g(x)\\ s.t. & \quad Ax = b\\ & \quad 0 \leq x\leq 1 \end{aligned} \end{equation}
Yes. The following are equivalent problems: $$\max_x f(x) \quad \text{subject to} \quad 0 \le x \le 1$$ $$\max_x f(x) \quad \text{subject to} \quad x\in X,\ 0 \le x \le 1$$ $$\max_x f(x) \quad \text{subject to} \quad Ax=b,\ 0 \le x \le 1$$ $$\max_x g(x) \quad \text{subject to} \quad Ax=b,\ 0 \le x \le 1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Werewolf Puzzle A certain village has 3 inhabitants, each either human or werewolf. Humans always tell the truth and werewolves always lie. They each make a statement: Advik says, "At least one of us is a werewolf." Bardia says, "At least one of us is a human." Cherry says, "Exactly two of us are werewolves." What must be true? The discussion declares Advik to be human like this: Suppose Advik is a werewolf. Then his claim that there is at least one werewolf would be true, but werewolves can't tell the truth. So he must be a human instead. It supposes that Advik is a werewolf, but why does it not suppose he is a human? Secondly, Advik doesn't say that he himself is a werewolf, so even if we suppose he is a werewolf, how does his being a liar make him a human? Any help would be highly appreciated.
Each of the three inhabitants is either a human or a werewolf. The given line of reasoning shows that Advik being a werewolf leads to a contradiction: that would make his statement true, but a werewolf cannot tell the truth. So, by elimination, Advik must be human; there is no need to assume that he is a human, because we derive it. That is also why the argument supposes he is a werewolf rather than a human: supposing he is a human produces no contradiction, so it would prove nothing, whereas ruling out the werewolf case settles the matter. Note that a speaker need not talk about himself for this to work: Advik's statement is about the whole group, and all that matters is whether that statement is true or false.
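If it helps to see the elimination in action, here is a brute-force check: try all $2^3$ human/werewolf assignments and keep only those where every human's statement is true and every werewolf's statement is false.

```python
from itertools import product

def consistent(a, b, c):           # True = human, False = werewolf
    wolves = [a, b, c].count(False)
    s_advik = wolves >= 1          # "At least one of us is a werewolf."
    s_bardia = (3 - wolves) >= 1   # "At least one of us is a human."
    s_cherry = wolves == 2         # "Exactly two of us are werewolves."
    # humans make true statements, werewolves make false ones
    return (s_advik == a) and (s_bardia == b) and (s_cherry == c)

solutions = [p for p in product([True, False], repeat=3) if consistent(*p)]
print(solutions)   # [(True, True, False)]: Advik and Bardia human, Cherry a werewolf
```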
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Matrix Equation $A\vec{v}=\lambda \vec{v} \rightarrow \det(A-\lambda I) = 0$ I'm going over eigenvectors and eigenvalues, and I'm confused about the following: $A\vec{v}=\lambda \vec{v}$ $A\vec{v}-\lambda I\vec{v} = 0$ $(A-\lambda I)\vec{v}=0$ So far so good, but this step I don't understand: $\det(A-\lambda I) = 0$ How did $\vec{v}$ just fall off? Would appreciate some clarification, thanks in advance!
You lost something: in the first equation, you're supposing that there's a nonzero vector $v$ with the property that $Av = \lambda v$. This lets you conclude that $(A-\lambda I)$ sends the nonzero vector $v$ to $0$. Since it also sends $0$ to $0$, the linear map defined by the matrix $A - \lambda I$ must be non-injective, hence not invertible. But if the determinant were nonzero, then the matrix would have an inverse (by Cramer's rule, for instance). Hence the determinant must be zero. This argument does not work unless you use that $v$ is nonzero, though: with $A = I$, $\lambda = 0$, and $v = 0$, the equation $(A-\lambda I)v=0$ holds even though $\det(A-\lambda I)=1\neq 0$.
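A small numerical illustration of the conclusion (my own example matrix): for each eigenvalue $\lambda$ of $A$, $\det(A-\lambda I)=0$, while for a non-eigenvalue it is nonzero.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 1 and 3

for lam in np.linalg.eigvals(A):
    # det(A - lambda*I) vanishes at every eigenvalue
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-10

# 5 is not an eigenvalue, so the determinant is nonzero there (it equals 8)
assert abs(np.linalg.det(A - 5.0 * np.eye(2))) > 1
```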
{ "language": "en", "url": "https://math.stackexchange.com/questions/4175889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$\aleph_1$ Borel partition Can we prove the existence or non-existence of an $\aleph_1$ Borel partition (with possibly unbounded Borel ranks) of the Baire space $\omega^\omega$? In which axiomatic system? Leo Harrington proved that, assuming $\text{AD}$, we cannot have an $\omega_1$-sequence of distinct Borel sets of a fixed bounded rank; I was wondering whether we could say something similar regarding an $\aleph_1$ Borel partition (of possibly unbounded Borel ranks). Thanks
I remember this being asked here before, but I can't find the duplicate, so here goes: Yes, this is provable in $\mathsf{ZF}$. Basically the idea is to assign to each real $r$ a countable ordinal $o(r)$ so that $(i)$ $o^{-1}(\alpha)$ is Borel for each ordinal $\alpha$ and $(ii)$ for each $\alpha$ there are reals with $o(r)>\alpha$. Then $$\omega^\omega=\bigsqcup_{\alpha<\omega_1}o^{-1}(\alpha)$$ will be a partition of the desired type. One way to do this is by using reals to code relations on $\omega$. A binary relation $R\subseteq\omega^2$ can be represented as a real (e.g. by hitting $R$ with a bijection $\omega^2\rightarrow\omega$ we get an infinite binary sequence). Now let $o(r)$ be the ordertype of the relation coded by $r$ if that relation is a well-ordering of $\omega$, and (say) $17$ otherwise. Another example comes from computability theory: let $o(r)$ (usually denoted "$\omega_1^r$" or "$\omega_1^{CK}(r)$") be the smallest countable ordinal with no "$r$-computable" copy. Again, this does the job. In each case however the Borel sets we get have unbounded Borel rank, so there is no tension with Harrington's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4176045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Basic Math Question I think... I apologize for the title. I am not even sure how to phrase this question per se. I feel like this should be easy and yet I am questioning my thinking. Here's the scenario: I have two groups. Group A has 50 members. Group B has 400 members. What I want to know is what the calculation would be to adjust Group A to have the same "influence" on its other metrics (as if it also had 400 members). So, if Group A had 50 followers and, for their last 1,000 tweets has a mean of 5 likes, what would be the number that I would use to multiply against the mean of 5 likes? Example: Group A: 50 followers, mean of 5 likes, total tweets 1,000 Group B 400 followers, mean of 20 likes, total tweets 1,000 Since Group B has 8X the number of followers, it would be obvious that they will have more likes (as they are reaching more people). Is it as simple to say that Group A's 5 likes would need to be multiplied by 8 to adjust Group A to be the same as Group B? Thanks for your assistance!
Comment continued with results from test procedures in R:

    prop.test(c(5,20), c(50,400), cor=F)

            2-sample test for equality of proportions
            without continuity correction

    data:  c(5, 20) out of c(50, 400)
    X-squared = 2.1176, df = 1, p-value = 0.1456
    alternative hypothesis: two.sided
    95 percent confidence interval:
     -0.03585336  0.13585336
    sample estimates:
    prop 1 prop 2
      0.10   0.05

    Warning message:
    In prop.test(c(5, 20), c(50, 400), cor = F) :
      Chi-squared approximation may be incorrect

Notice the warning message, triggered by the small counts in the first group. Because the P-value is so far above 5%, rejecting $H_0$ seems out of reach. But we don't need to speculate, because Fisher's exact test gives a reliable P-value that is even larger.

    fisher.test(TBL)$p.val
    [1] 0.1931484
{ "language": "en", "url": "https://math.stackexchange.com/questions/4176527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving A System of Two Inequalities Each With Two Variables I am writing an image processing algorithm. The algorithm calculates a random contrast adjustment, and a random brightness adjustment, and applies those to each pixel in an image, like... resultPixel = originalPixel * contrast + brightness The problem is that the ranges for brightness and contrast are very large. Let's assume they are arbitrarily large and that it is necessary for them to be so. This means that a large number of my result images are completely white or completely black. I don't want images which are completely white or completely black! But I am unsure exactly how to limit these values. Assuming... pixelMax * contrast + brightness <= 255 and... pixelMin * contrast + brightness >= 0 where pixelMax and pixelMin are constants how can I get a random brightness and a random contrast value such that the above statements are true? I want all possible solutions to have an equal chance of being selected. bonus: assume there are two contrast values, such that total contrast = contrast1 * contrast2
You can just solve as if it were a normal two-variable system of inequalities. To do so, I used substitution: Solving for contrast in terms of brightness and pixelMax in the first inequality gives: $pixelMax \cdot contrast \le 255 - brightness$, so $contrast \le \frac{255-brightness}{pixelMax}$ (assuming $pixelMax>0$). Solving for contrast in terms of brightness and pixelMin in the second inequality gives: $pixelMin \cdot contrast \ge -brightness$, so $contrast \ge \frac{-brightness}{pixelMin}$ (assuming $pixelMin>0$). Writing these as a single chain gives $\boxed{\frac{-brightness}{pixelMin} \le contrast \le \frac{255 - brightness}{pixelMax}}$. As I do not know the exact constants $pixelMin$ and $pixelMax$, I can only say: pick a $brightness$ such that $\frac{-brightness}{pixelMin} \le \frac{255 - brightness}{pixelMax}$, and then pick a $contrast$ between the two.
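One way to also get the "equal chance for all solutions" part of the question is rejection sampling: draw $(contrast, brightness)$ uniformly from a bounding box and keep only pairs satisfying both inequalities. A sketch, with made-up placeholder constants and ranges (picking brightness first and then contrast from its allowed interval would not be uniform over the 2D feasible region; rejection sampling is):

```python
import random

PIXEL_MIN, PIXEL_MAX = 10, 245          # assumed image-dependent constants
CONTRAST_RANGE = (0.0, 3.0)             # assumed "very large" ranges
BRIGHTNESS_RANGE = (-255.0, 255.0)

def sample_adjustment(rng=random):
    """Uniform sample over the feasible (contrast, brightness) region."""
    while True:
        contrast = rng.uniform(*CONTRAST_RANGE)
        brightness = rng.uniform(*BRIGHTNESS_RANGE)
        if (PIXEL_MAX * contrast + brightness <= 255
                and PIXEL_MIN * contrast + brightness >= 0):
            return contrast, brightness

c, b = sample_adjustment()
# the accepted pair never produces all-white or all-black images
assert PIXEL_MIN * c + b >= 0 and PIXEL_MAX * c + b <= 255
```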
{ "language": "en", "url": "https://math.stackexchange.com/questions/4176815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Existence of a simply connected manifold of dimension $6$ with $H_2(M)=0$ and $\chi(M)=1$ I am trying to do the following exercise: Show that there does not exist a simply connected closed manifold of dimension $6$ with $H_2(M)=0$ and $\chi(M)=1$. The first thing we notice is that since $M$ is simply connected, it is orientable. Now, using Poincaré duality and the universal coefficients theorem, we know that $\mathrm{rk}(H_i(M))= \mathrm{rk}(H_{6-i}(M))$. From the fact that $\chi(M)=1$ and that $H_2(M)=0=H_1(M)$, we obtain that $\mathrm{rk}(H_3(M))=1$. Now I don't know how to go on from here. Any hint is appreciated. Thanks in advance.
You have shown that $\text{rank}(H^3(M))=\text{rank}(H_3(M))=1$. Thus $H^3(M)$ modulo torsion is a copy of $\mathbb{Z}$ generated by a class $\alpha\in H^3(M)$. Let $H^3_f(M)$ denote the subgroup of $H^3(M)$ generated by $\alpha$. The cup product pairing $$ H^3_f(M)\times H^3_f(M) \overset{(\varphi,\psi)\mapsto(\varphi\smile \psi)[M]}{\longrightarrow} \mathbb{Z} $$ is non-singular and sends $(\alpha,\alpha)$ to a generator, and by graded commutativity of the cohomology ring of $M$ (the classes have odd degree $3$) it is a skew-symmetric form. A non-singular skew-symmetric form can only exist in even rank. This would force $H^3_f(M)$ to have even rank, which contradicts $\text{rank}(H^3_f(M))=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4177289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why from $x<t \Rightarrow y\leq t$ (for every such $t$) does it follow that $y\leq x$? This trick is used by Rudin in his "Functional Analysis" to prove a certain inequality: In essence, he shows that, given some numbers $x,y,t$, if $x<t$, then necessarily $y\leq t$ (why this is true I understand) and that, because this works for every $t$ (that fulfills the inequality $x<t$), from this it automatically follows that $y\leq x$. The final implication I don't understand: why exactly does $$\big(\forall t:\,(x<t)\Rightarrow(y\leq t)\big)\implies(y\leq x)$$ work? I feel like an idiot for not understanding this, it seems like it should be trivial.
By way of contradiction, suppose that $$x< t \implies y\leq t$$ for every $t$, but $y>x$. Then let $t = \frac{x+y}2$ and observe that $x=\frac{x+x}2 <\frac{x+y}2 = t$ but $y=\frac{y+y}2 > \frac{x+y}2 = t$, so the implication fails. Thus you must have $y\leq x$. Edit: This may feel dodgy, so if you want to think more like someone wanting to understand analysis, you could think about how the implication must hold for every $t$ such that $x< t \leq y$. Thus you cannot have any $t$ such that $x<t\leq y$, so you cannot have $y$ greater than $x$. A convenient choice is usually their average, of course. And thinking by way of contradiction is often a decent way to cheese analysis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4177465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve $a^x+b^x=1$ (solve for $x$) I am sorry if this is too easy a question for this site, but I really can't find the solution... $a^x+b^x=1$ Tyma Gaidash asked for context: I don't have much to add; I was trying to understand how fast a population grows, assuming that each child is born a fixed amount of time after its parent (for example, everyone has 2 children, one born 40 years after the parent was born and the other 20 years after; is that faster than having twins after 28 years?). If I assume the population formula is $a^x$ and I need to find $a$, I get that the time it will take the population to multiply itself is the solution to the equation $\sum_{i=1}^{n}2^{-t_i/x}=1$, where $t_i$ is how many years after the parent child $i$ is born. I hoped that there is a way to solve this equation (at least for 2 children), but I haven't found one.
In the most general case, there is no analytical solution for the zero of the function $$f(x)=a^x+b^x-1$$ and numerical iterative methods are required. If we assume $a>1$ and $b>1$, $f(x)$ is not very pleasant to look at when graphed, but this is not the case for its logarithmic transform $$g(x)=\log(a^x+b^x)$$ which looks quite close to a straight line. Being lazy, expand $g(x)$ as a Taylor series around $x=0$ and obtain $$g(x)=\log(2)+\log(\sqrt{ab})\,x+O(x^2)\implies x_0=-\frac {2\log(2) } {\log({ab}) }$$ and start Newton's method, generating the sequence $$x_{n+1}=x_n-\frac{\left(a^{x_n}+b^{x_n}\right) \log \left(a^{x_n}+b^{x_n}\right)}{a^{x_n} \log (a)+b^{x_n} \log (b)}$$ For illustration, using $a=3$ and $b=7$, the iterates are $$\left( \begin{array}{cc} n & x_n \\ 0 & -0.455340 \\ 1 & -0.468168 \\ 2 & -0.468178 \end{array} \right)$$ which is quite fast. But we could have a still better approximation by performing one single iteration of Halley's method starting at $x=0$. This would give, as an approximation, $$x_0=\frac {4\log(2)\log(ab) } {(\log (2)-2) \left(\log ^2(a)+\log ^2(b)\right)-2 (2+\log (2)) \log (a) \log (b) }$$ For the worked example, this gives $x_0=-0.46790$. This estimate could still be improved using one single iteration of higher-order methods (we still get analytical expressions; the formulae start to be too long for typing them). For the worked example, as a function of the order of the method, the results would be $$\left( \begin{array}{ccc} n & x_0^{(n)} & \text{method} \\ 2 & -0.4553404974 & \text{Newton} \\ 3 & -0.4679002951 & \text{Halley} \\ 4 & -0.4682565630 & \text{Householder} \\ 5 & -0.4681819736 & \text{no name} \\ 6 & -0.4681774686 & \text{no name} \\ 7 & -0.4681781776 & \text{no name} \\ 8 & -0.4681782373 & \text{no name} \end{array} \right)$$
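The iteration above is short to code. Here is a sketch of exactly that sequence (Newton's method applied to $g(x)=\log(a^x+b^x)$, started from $x_0=-2\log 2/\log(ab)$) for the worked example:

```python
from math import log

a, b = 3.0, 7.0
x = -2 * log(2) / log(a * b)        # x0 = -0.455340...
for _ in range(20):
    s = a**x + b**x
    # x_{n+1} = x_n - s*log(s) / (a^x log a + b^x log b)
    x -= s * log(s) / (a**x * log(a) + b**x * log(b))

assert abs(a**x + b**x - 1) < 1e-12  # x really solves a^x + b^x = 1
print(round(x, 6))                   # -0.468178, matching the table
```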
{ "language": "en", "url": "https://math.stackexchange.com/questions/4177633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 4, "answer_id": 1 }
Geometric reason why this determinant can be factored to (x-y)(y-z)(z-x)? The determinant $\begin{vmatrix} 1 & 1 &1 \\ x & y & z \\ x^2 & y^2 &z^2 \\ \end{vmatrix}$ can be factored to the form $(x-y)(y-z)(z-x)$ Proof: Subtracting column 1 from column 2, and putting that in column 2, \begin{equation*} \begin{vmatrix} 1 & 1 &1 \\ x & y & z \\ x^2 & y^2 &z^2 \\ \end{vmatrix} = \begin{vmatrix} 1 & 0 &1 \\ x & y-x & z \\ x^2 & y^2-x^2 &z^2 \\ \end{vmatrix} \end{equation*} $ = z^2(y-x)-z(y^2-x^2)+x(y^2-x^2)-x^2(y-x) $ rearranging the terms, $ =z^2(y-x)-x^2(y-x)+x(y^2-x^2)-z(y^2-x^2) $ taking out the common terms $(y-x)$ and $(y^2-x^2)$, $ =(y-x)(z^2-x^2)+(y^2-x^2)(x-z) $ expanding the terms $(z^2-x^2)$ and $(y^2-x^2)$ $ =(y-x)(z-x)(z+x)+(y-x)(y+x)(x-z) $ $ =(y-x)(z-x)(z+x)-(y-x)(z-x)(y+x) $ taking out the common term (y-x)(z-x) $ =(y-x)(z-x) [z+x-y-x] $ $ =(y-x)(z-x)(z-y) $ $ =(x-y)(y-z)(z-x) $ Is there a geometric reason for this? The determinant of this matrix is the volume of a parallelopiped with sides as vectors whose tail is at the origin and head at x,y,z coordinates being equal to the columns(or rows) of the matrix.$^{[1]}$ So is the volume of this parallelopiped equals $(x-y)(y-z)(z-x)$ in any obvious geometric way? References [1] Nykamp DQ, “The relationship between determinants and area or volume.” From Math Insight. http://mathinsight.org/relationship_determinants_area_volume
Subtracting a multiple of another column (or row) to an existing column (or row) does not change the determinant. $$ \begin{vmatrix} 1 & 1 & 1 \\ x & y & z \\ x^2 & y^2 & z^2 \\ \end{vmatrix} $$ $$= \begin{vmatrix} 1 & 0 &0 \\ x & y-x & z-x \\ x^2 & y^2-x^2 &z^2-x^2 \\ \end{vmatrix}$$ $$= \begin{vmatrix} 1 & 0 &0 \\ 0 & y-x & z-x \\ 0 & y^2-x^2 &z^2-x^2 \\ \end{vmatrix}$$ $$={(y-x)(z-x) \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & y+x &z+x \\ \end{vmatrix} } $$ $$=(y-x)(z-x) \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & y+x & z-y \\ \end{vmatrix} $$ $$=(y-x)(z-x)(z-y)$$
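The factorization used in these column operations can be confirmed symbolically (a sketch with SymPy):

```python
from sympy import symbols, Matrix, expand

x, y, z = symbols('x y z')
M = Matrix([[1, 1, 1],
            [x, y, z],
            [x**2, y**2, z**2]])

det = M.det()
# det equals (x-y)(y-z)(z-x), i.e. their difference expands to 0
assert expand(det - (x - y)*(y - z)*(z - x)) == 0
```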
{ "language": "en", "url": "https://math.stackexchange.com/questions/4177770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Measurable functions $f,g$ are finite a.e. Then, $f+g$ is measurable. Let $E\subseteq \mathbb{R}^n$ be a Lebesgue measurable set. Let $f,g:\mathbb{R}^n \to \overline{\mathbb{R}}$ be Lebesgue measurable functions. Suppose $f$ and $g$ are finite almost everywhere. Then, prove that $f+g$ is a Lebesgue measurable function. In the question “$f+g$ is measurable no matter how it is defined at points where it has the form $\infty-\infty$”, the suggestion is to use the fact that if $f$ is measurable and $f=g$ a.e., then $g$ is measurable. But I don't know how I should use that fact. Thank you for your help. Other ways to prove it are also welcome.
Suppose $f,g: \mathbb R^n \to \mathbb R\cup\{\pm\infty\}.$ Let $h(x) = \begin{cases} g(x) & \text{if } g(x)\in\mathbb R, \\ 0 & \text{if } g(x)\in\{\pm\infty\}. \end{cases}$ Now use the fact that if $g$ is measurable and $g=h$ a.e., then $h$ is measurable. You have $f+g= f+h$ a.e. So the problem of showing $f+g$ is measurable is reduced to that of showing $f+h$ is measurable, and here you have no $\infty-\infty$ problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4177925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Equality between degree of separability of field extensions. The Problem: Let $k \subset F \subset L$ such that $[L:k] < \infty.$ Let $S_1$ be the separable closure of $k$ in $F$, $S_2$ the separable closure of $F$ in $L$ and $S$ be the separable closure of $k$ in $L.$ Show that $[S:S_1]=[S_2:F]$ and $[F:S_1]=[S_2:S].$ My approach: So far I have only shown that $p=[S_1:k]$ and $q=[S_2:F]$ are, respectively, the numbers of distinct $k$- and $F$-embeddings of $F$ and $L$ in $\bar k$ and $\bar F$. So, $q \geq p$, and then $r=[S:k]$. Now, in both cases, I am unable to use any other fact to show the equality. Any help is warmly appreciated, thanks in advance.
First reduce to the question of showing that if $k \subset F$ is purely inseparable, $F \subset L$ is separable, and $S$ is the separable closure of $k$ in $L$, then $[S:k] = [L:F]$; the answer to this question is that both equal the number of $k$-embeddings $L \to \overline{k}$. Or, more directly, show that the number of $k$-embeddings $L \to \overline{k}$ (under your original hypothesis) is equal to the number of $k$-embeddings $F \to \overline{k}$ multiplied by the number of $F$-embeddings $L \to \overline{k}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $ \int \frac{f'(x)}{f(x)} \ dx = \log |f(x)| + C$ true for all differentiable functions $f$? Let $f$ be a differentiable function. Is the following identity true for all such $f$? $$ \int \frac{f'(x)}{f(x)} \ dx = \log |f(x)| + C $$ I ask because there exist differentiable functions whose derivatives are not Riemann integrable (see here for instance). On the other hand, if we use the substitution $u = f(x)$ for $f$ on $[a,b]$, $$ \int_a^b \frac{f'(x)}{f(x)} \ dx = \int_{f(a)}^{f(b)} \frac{1}{u} \ du $$ and the RHS appears to be integrable. How can we reconcile this? Any comments, help and explanations are welcome.
Even if you take, say $f(x)=2+x^2\sin\frac{1}{x^2}, f(0)=2$ on $[-1,1]$ so that $f(x)\neq 0,$ you have that $f’(x)$ is unbounded and $f$ is bounded away from $0$ on $[-1,1],$ so $\frac{f’(x)}{f(x)}$ is not bounded, and hence it isn’t Riemann integrable. Indeed, if $f$ is never $0$ on $[a,b]$ then $f$ is bounded away from zero, so that $f’(x)$ is unbounded if and only if $f’(x)/f(x)$ is unbounded. That doesn’t mean there isn’t a useful value for the integral, only that Riemann doesn’t get us there immediately. For the given $f,$ we’d have to do $$\int_{-1}^{-a}+\int_b^1$$ and then let $a,b\to0^+,$ to get the value. Or we can do a Lebesgue integral. Either way, we’ll get $\log |f(1)|-\log|f(-1)|.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
A continuous inverse of the exponential function is holomorphic Let $\Omega \subseteq \mathbb{C}$ be open and connected such that $0 \notin \Omega$. Let $f: \Omega \to \mathbb{C}$ be a continuous function such that $e^{f(z)} = z$ for all $z \in \Omega$. Prove that $f$ is holomorphic and that $f'(z) = \frac{1}{z}$. After asserting the first part I guess the second part is pretty trivial by just taking the derivatives of both sides (since $e^{f(z)}$ is holomorphic if $f(z)$ is holomorphic). There is also a second part asking if there exists a continuous function $f: \mathbb{C} \setminus \{0\} \to \mathbb{C}$ such that $e^{f(z)} = z$ for all $z \in \mathbb{C} \setminus \{0\}$. The answer for this question should be no, because in class we defined $\log(z)$ as the inverse of $e^z$ and showed that it is continuous in $\mathbb{C}\setminus (-\infty, 0]$ and discontinuous everywhere else. Is what I said correct? How would I go about proving the first part? Thanks in advance!
Since $(\forall z\in\Bbb C):\exp'(z)\ne0$, $\exp$ is locally invertible. So, take $z_0\in\Omega$. There is some neighborhood $N$ of $f(z_0)$ such that $\exp|_N$ has a holomorphic inverse $l$. Since $f$ is continuous at $z_0$, there is some neighborhood $W$ of $z_0$ such that $f(W)\subset N$. So, since$$(\forall z\in W):\exp(f(z))=z,$$you have$$(\forall z\in W):f(z)=l(z).$$Therefore, $f$ is differentiable at $z_0$. So, $f$ is holomorphic. And if there were some function $f\colon\Bbb C\setminus\{0\}\longrightarrow\Bbb C$ such that $(\forall z\in\Bbb C\setminus\{0\}):e^{f(z)}=z$, then $f'(z)=\frac1z$, and so $f$ would be a primitive of $\frac1z$. But then $\oint_{|z|=1}\frac{\mathrm dz}z=0$. But, in fact, $\oint_{|z|=1}\frac{\mathrm dz}z=2\pi i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Advanced online courses for different subjects During the pandemic, almost all courses (basic or advanced) have moved online instead of in-person. I was wondering if anyone knows of complete, newly recorded online courses for * *Measure theory; *Set theory; *Descriptive set theory. Any idea will be appreciated greatly.
A few years ago, IMPA recorded lectures on Measure Theory, which you can find here. You can find the auxiliary notes here: https://w3.impa.br/~landim/Cursos/MT.pdf While I have not seen the measure theory lectures myself, I did follow the probability theory lecture series they recorded, which were of superb quality. I expect the same for this lecture series.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
mean and variance formula for negative binomial distribution The equation below gives the expected value of the negative binomial distribution. I need a derivation for this formula. I have searched a lot but can't find any solution. Thanks for helping :) $$ E(X)=\sum_{x=r}^\infty x\cdot \binom {x-1}{r-1} \cdot p^r \cdot (1-p)^{x-r} =\frac{r}{p} $$ I have tried: \begin{align} E(X) & =\sum_{x=r}^\infty x\cdot \binom{x-1}{r-1} \cdot p^r \cdot (1-p)^{x-r} \\[8pt] & = \sum_{x=r}^\infty x \cdot \frac{(x-1)!}{(r-1)! \cdot (x-1-(r-1))!} \cdot p^r \cdot (1-p)^{x-r} \\[8pt] & = \sum_{x=r}^\infty \frac{x!}{(r-1)!\cdot (x-r)!} \cdot p^r \cdot (1-p)^{x-r} \\[8pt] & = \sum_{x=r}^\infty r\cdot \frac{x!}{r!\cdot (x-r)!}\cdot p^r \cdot (1-p)^{x-r} \\[8pt] & = \frac{r}{p} \cdot \sum_{x=r}^\infty \frac{x!}{r!\cdot (x-r)!}\cdot p^{r+1}\cdot (1-p)^{x-r} \end{align} If the power of $p$ in the last equation were not $r + 1,$ I could apply the binomial theorem and the result would follow. But I am stuck here.
If you want to continue that derivation instead of using linearity of expectation on a sum of i.i.d. geometric random variables, then you can follow this; however, doing it this way is much more complicated than the method using the i.i.d. variables. When you arrive at the step $\operatorname{E}(X) = \sum_{x\geq r} r \binom{x}{r} p^r (1 - p)^{x - r}$, we can use this fact about power series: $$ \frac{1}{(1 - z)^{r + 1}} = \sum_{n\geq r} \binom{n}{r}z^{n-r}, \quad \text{for }\lvert z\rvert < 1. $$ If this fact is unfamiliar to you, then you can derive it from the geometric series $\frac{1}{1 - z} = \sum_{n\geq 0} z^n$ by differentiating both sides $r$ times and dividing by $r!$. Of course, we are tacitly assuming that $p \neq 0$ in order to use this. Otherwise, the event that we want to occur $r$ times could not occur at all! It follows that $$\begin{align*} \operatorname{E}(X) &= r p^r\sum_{x\geq r} \binom{x}{r} (1 - p)^{x - r} \\ &= rp^r \cdot \frac{1}{\big(1 - (1 - p)\big)^{r + 1}} \\ &= rp^r \cdot \frac{1}{ p^{r + 1}} \\ &= \frac{r}{p} \end{align*}$$ We can do something similar for the variance using the formula $$\begin{align*} \operatorname{Var} X &= \operatorname{E}\big(X^2\big) - \big(\operatorname{E}(X)\big)^2 \\ &= \operatorname{E}\big(X(X + 1)\big) - \operatorname{E}(X) - \big(\operatorname{E}(X)\big)^2. \end{align*}$$ This means that $$\begin{align*} \operatorname{Var} X &= \sum_{x\geq r} x (x + 1)\binom{x - 1}{r - 1} p^r (1 - p)^{x - r} - \frac{r}{p} - \frac{r^2}{p^2} \\ &= \sum_{x\geq r} r (r + 1)\binom{x + 1}{r + 1} p^r (1 - p)^{x - r} - \frac{r p + r^2}{p^2} \\ &= r(r + 1)p^r \sum_{x\geq r+1} \binom{x}{r + 1} (1 - p)^{x - (r + 1)} -\frac{r p + r^2}{p^2} \\ &= r(r + 1)p^r \cdot \frac{1}{\big(1 - (1 - p)\big)^{r + 2}} -\frac{r p + r^2}{p^2} \\ &= \frac{r^2 + r}{p^2} - \frac{rp + r^2}{p^2} \\ &= \frac{r (1 - p)}{p^2}. \end{align*}$$
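As a numerical cross-check (not part of the derivation), one can truncate the series in Python for sample values of $r$ and $p$ and compare against $r/p$ and $r(1-p)/p^2$; the cutoff at $x=600$ makes the tail negligible:

```python
import math

r, p = 3, 0.4  # sample parameters, chosen arbitrarily

def pmf(x):
    # P(X = x): x trials are needed to collect r successes
    return math.comb(x - 1, r - 1) * p**r * (1 - p)**(x - r)

E  = sum(x * pmf(x) for x in range(r, 600))
E2 = sum(x * x * pmf(x) for x in range(r, 600))
V  = E2 - E * E

assert abs(E - r / p) < 1e-9               # E[X]  = r/p        = 7.5
assert abs(V - r * (1 - p) / p**2) < 1e-9  # Var X = r(1-p)/p^2 = 11.25
```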
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How would I prove that zero is the only element in the intersection of the following family of sets? Find the intersection of the following family: $$ \mathcal{M}=\{n\mathbb{Z}:n\in\mathbb{N}\}, $$ where $$ n\mathbb{Z}=\{\dots,-3n,-2n,-n,0,n,2n,3n,\dots\} $$ for each $n\in\mathbb{N}$. It appears that the only element that is common to all sets $M_n\in\mathscr{M}$ is $0$. Therefore, $\bigcap\limits_{n\in\mathbb{N}}M_n=\{0\}$. But how do I prove this? It occurred to me that since $M_n$ is a proper subset of the set $M_i$ for all $i\in\mathbb{N}$ such that $i<n$, the number of elements in the intersection will decrease as the index $n$ increases. Examining a finite number of cases, zero appears to be common to every set and the number of nonzero elements in the intersection seems to approach zero, but this is an inductive argument at best. I'm not sure how I would show $\bigcap\limits_{n\in\mathbb{N}}M_n=\{0\}$ deductively.
You already mentioned that $0 \in n \mathbb{Z}$ for all $n \in \mathbb{N}$. So we only need to show that for $k \neq 0$ there is some $n \in \mathbb{N}$ such that $k \not \in n \mathbb{Z}$. For this we can simply take $n = |k|+1$ (check this!). So indeed $\bigcap_{n \in \mathbb{N}} n \mathbb{Z} = \{0\}$. Note that if you consider $0$ to be a natural number (i.e. $0 \in \mathbb{N}$) then the entire question trivialises, because $0 \mathbb{Z} = \{0\}$.
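The argument above can be illustrated by brute force for small $k$ (purely a sanity check of the witness $n=|k|+1$):

```python
# every nonzero k fails to be a multiple of n = |k| + 1 (since 0 < |k| < n),
# while 0 is a multiple of every n
nonzero = [k for k in range(-50, 51) if k != 0]
ok = all(k % (abs(k) + 1) != 0 for k in nonzero)
zero_in_all = all(0 % n == 0 for n in range(1, 51))
```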
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the value of the double integral (lemniscate for $x \ge 0$) So this time I have to evaluate the following integral: $$\int \int \sqrt{a^2-x^2-y^2}dxdy$$ in the area $$(x^2+y^2)^2=a^2(x^2-y^2)$$ and $$x \ge 0$$, $$a > 0 $$ So my first instinct was to introduce polar coordinates $x=r\cos\phi$ $y=r\sin\phi$ $|J| = r$ So, substituting this we have that $$r^4 = a^2r^2\cos(2\phi)$$ or $$r = a \sqrt{\cos(2\phi)}$$ this means that $$r \in [0, a\sqrt{\cos(2\phi)}]$$ For the angle $\phi$, we have that $x \ge 0$, so $$\phi \in [-\frac{\pi}{2}, \frac{\pi}{2}]$$ Now, I'm not sure if my boundaries for the angle are correct, because in the upper boundary for $r$ I have the square root of cosine, so I don't know if I should take $\cos(2\phi) \ge 0$ into account. If my deduction is correct, the integral would be: $$ \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \int_0^{a\sqrt{\cos(2\phi)}} r\sqrt{a^2-r^2}drd\phi $$ However, I'm not sure if my boundaries for the angle are correct.
No, your boundaries are not correct. Since $r^2=a^2\cos(2\phi)$, you must have $\phi\in\left[-\frac\pi4,\frac\pi4\right]$, so that $\cos(2\phi)\geqslant0$. So, compute$$\int_{-\pi/4}^{\pi/4}\int_0^{a\sqrt{\cos(2\phi)}}r\sqrt{a^2-r^2}\,\mathrm dr\,\mathrm d\phi.$$You should get:\begin{align}\int_{-\pi/4}^{\pi/4}\int_0^{a\sqrt{\cos(2\phi)}}r\sqrt{a^2-r^2}\,\mathrm dr\,\mathrm d\phi&=2\int_0^{\pi/4}\int_0^{a\sqrt{\cos(2\phi)}}r\sqrt{a^2-r^2}\,\mathrm dr\,\mathrm d\phi\\&=2\int_0^{\pi/4}\frac13a^3 \left(1-2 \sqrt2\sin^3(\phi)\right)\,\mathrm d\phi\\&=\frac1{18}a^3\left(20-16\sqrt2+3\pi\right).\end{align}
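To double-check the closed form numerically, here is a crude midpoint Riemann sum over the polar region in pure Python (with $a=1$; the tolerance is loose because the grid is coarse):

```python
import math

a = 1.0
nphi, nr = 500, 400
total = 0.0
dphi = (math.pi / 2) / nphi            # phi ranges over [-pi/4, pi/4]
for i in range(nphi):
    phi = -math.pi / 4 + (i + 0.5) * dphi
    rmax = a * math.sqrt(math.cos(2 * phi))
    dr = rmax / nr
    for j in range(nr):
        rr = (j + 0.5) * dr
        total += rr * math.sqrt(a * a - rr * rr) * dr * dphi

closed_form = a**3 * (20 - 16 * math.sqrt(2) + 3 * math.pi) / 18  # ~0.3776
assert abs(total - closed_form) < 2e-3
```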
{ "language": "en", "url": "https://math.stackexchange.com/questions/4178970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\triangle ABC$ is an equilateral triangle with circumcentre $O(1,2)$ and vertex $A$ lying on the line $5x+2y=4$. Find the area of quadrilateral $BCDE$. $\triangle ABC$ is an equilateral triangle with circumcentre $O(1,2)$ and vertex $A$ lying on the line $5x+2y=4$. A circle with centre $I(2,0)$ passes through the vertices $B$ and $C$ and intersects the sides $AC$ and $AB$ at $D$ and $E$ respectively. Find the area of quadrilateral $BCDE$. My Attempt: All I could do here was find the perpendicular distance of $O$ from the given line. How do I use $I(2,0)$?
As can be seen in the figure, both $O$ and $I$ lie on the altitude $AH$ (the line through $O(1,2)$ and $I(2,0)$ has slope $-2$). The equation of $AH$ is: $y-2=\frac{-2}{1}(x-1)\Rightarrow y=-2x+4$. This with the line $5x+2y=4$ gives $A(-4, 12)$, and we have: $AO=\sqrt {(1+4)^2+(2-12)^2}=5\sqrt 5\approx 11.2$ $OH=\frac{AO}2=\frac{5\sqrt5}2$ $AH=\frac{3\times AO}2=\frac{15\sqrt5}2\approx 16.8$ $AC=\frac{AH}{\sin 60^\circ}=5\sqrt{15}\approx 19.4$ $A_{ABC}=\frac 12\times AC\times AH=\frac{375\sqrt3}4\approx 162.4$ $OI=\sqrt{1^2+2^2}=\sqrt 5$ Now: $IH=OH-OI=\frac{5\sqrt5}2-\sqrt5=\frac{3\sqrt5}2$ $R^2=IC^2=IH^2+\left(\frac{AC}2\right)^2=\frac{45}4+\frac{375}4=105$ so the equation of the circle is: $(x-2)^2+y^2=105$. The power of $A$ with respect to this circle is $AI^2-R^2=(6^2+12^2)-105=75$, hence $AD\cdot AC=AE\cdot AB=75$ and $AD=\frac{75}{5\sqrt{15}}=\sqrt{15}$. Since $BCDE$ is cyclic, $\angle ADE=\angle ABC$, so $\triangle AED\sim\triangle ACB$ with ratio $\frac{AD}{AB}=\frac{\sqrt{15}}{5\sqrt{15}}=\frac 15$. Therefore $A_{AED}=\frac 1{25}A_{ABC}=\frac{15\sqrt3}4\approx 6.5$ and $A_{BCDE}=A_{ABC}-A_{AED}=\frac{24}{25}\times\frac{375\sqrt3}4=90\sqrt3\approx 155.9$ .
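Here is a pure-Python numeric recomputation of the configuration from first principles (helper names are mine); it reconstructs $B$, $C$, $D$, $E$ from the given data and evaluates the area, which comes out to $90\sqrt3\approx 155.9$:

```python
import math

A, O, I = (-4.0, 12.0), (1.0, 2.0), (2.0, 0.0)
R = math.dist(A, O)                               # circumradius
u = ((O[0] - A[0]) / R, (O[1] - A[1]) / R)        # unit vector along altitude A -> O
H = (A[0] + 1.5 * R * u[0], A[1] + 1.5 * R * u[1])  # foot of altitude, AH = 3R/2
half = R * math.sqrt(3) / 2                       # half the side length
v = (-u[1], u[0])                                 # direction of BC
B = (H[0] + half * v[0], H[1] + half * v[1])
C = (H[0] - half * v[0], H[1] - half * v[1])

r2 = (B[0] - I[0])**2 + (B[1] - I[1])**2          # circle through B, C centred at I
assert abs((C[0] - I[0])**2 + (C[1] - I[1])**2 - r2) < 1e-9
power = (A[0] - I[0])**2 + (A[1] - I[1])**2 - r2  # power of A: equals AD*AC = AE*AB
t = power / (2 * half)**2                         # AD/AC, since AC = 2*half
D = (A[0] + t * (C[0] - A[0]), A[1] + t * (C[1] - A[1]))
E = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))

def tri(P, Q, S):
    # twice-signed-area formula, made unsigned
    return abs((Q[0] - P[0]) * (S[1] - P[1]) - (S[0] - P[0]) * (Q[1] - P[1])) / 2

area = tri(A, B, C) - tri(A, D, E)                # area of BCDE, ~155.885
```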
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Expected lifespan implied by the Lindy Effect Sorry in advance, the maths in this one is pretty basic but I'm fairly rusty at this point so I just wanted to check my reasoning. I came to the idea of the Lindy effect via Nassim Nicholas Taleb and I found a good write-up of how J. Richard Gott arrived at a technique for estimating the future lifespan of pretty much anything here (https://fs.blog/2012/06/copernican-principle/). Suppose you observe something that has been around N years and you assume that you are observing that thing at a random point in its life; then there is a 50% chance that you are observing it somewhere between 25% and 75% of its lifespan. Therefore there is a 50% chance that it will continue to exist between N/3 and 3N years. Using a similar argument I can say there is a * *25% chance it will exist for less than another $N/3$ years *50% chance it will exist for less than another $N$ years *66.66..% chance it will exist for less than another $2N$ years *75% chance it will exist for less than another $3N$ years, etc. Now, supposing I want to find the expected future lifespan given this approach. Would I be correct in saying that it would be $$\int_0^1\frac{Nx}{1 - x} \, dx = N\int_0^1\left(\frac{1}{1-x} - 1\right) \, dx$$ $$=N\Big[-\ln(1-x)-x\Big]_0^1$$ $$=-N(\ln(0)+1-\ln(1)-0)$$ $$=\infty$$ If so, I think I'd find it a little disappointing. Gott's argument is so elegant that I feel it should produce something better than "everything has the same expected lifespan no matter how long it has existed". 
EDIT: I've had a go at doing this with a proper probability density function $$p(x) = \frac{N}{(N+x)^2}$$ Now we have $$\int_0^\infty p(x)\,dx = \int_0^\infty \frac{N}{(N+x)^2}\,dx$$ $$ = \left[ \frac{-N}{N+x} \right]_0^\infty$$ $$ = 1$$ and furthermore $$\int_\frac{N}{3}^{3N} p(x)\,dx = \left[ \frac{-N}{N+x} \right]_\frac{N}{3}^{3N}$$ $$=\frac{-N}{N+3N}-\frac{-N}{N+\frac{N}{3}}$$ $$=\frac{1}{2}$$ so it fits with Gott's observation. This gives us $$\int_0^\infty xp(x)\,dx = \int_0^\infty \frac{Nx}{(N+x)^2}\,dx$$ $$=\int_0^\infty \frac{Nx+N^2}{(N+x)^2}-\frac{N^2}{(N+x)^2}\,dx$$ $$=\int_0^\infty \frac{N}{N+x}-\frac{N^2}{(N+x)^2}\,dx$$ $$ = \left[ N\ln(N+x)+\frac{N^2}{N+x} \right]_0^\infty$$ $$=\infty$$
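These two integrals are easy to confirm numerically (midpoint rule; $N=7$ is an arbitrary sample value, and the tail $\int_T^\infty p = N/(N+T)$ is added exactly):

```python
N = 7.0                                  # arbitrary sample age

def p(x):
    return N / (N + x) ** 2

def midpoint(f, a, b, n=200_000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

T = 1e4
total = midpoint(p, 0.0, T) + N / (N + T)   # exact tail: ∫_T^∞ p = N/(N+T)
half = midpoint(p, N / 3, 3 * N)            # Gott's 50% window

assert abs(total - 1.0) < 1e-4
assert abs(half - 0.5) < 1e-4
```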
As you’ve noticed, Gott’s model, with the assumption that you are equally likely to observe an event at any point in its lifetime and that you have an uninformed uniform prior (whatever that means...), has a fat tail, so it doesn’t ever give a finite expected value. It’s more useful and typical to think of the median of the distribution, which Gott’s argument predicts is $2N$. You could also try transforming it by taking the $\log$ and taking the mean of that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interesting Inequality using AM-GM and other identities. Let $a, b, c > 0$. Prove that $$\sqrt{a^2-ab+b^2} + \sqrt{b^2 - bc + c^2} + \sqrt{c^2 - ca + a^2} \le \frac{ab}{c} + \frac{bc}{a} + \frac{ca}{b}.$$ This should be solvable with AM-GM and a few other inequalities, but I am a little stuck on this problem. My idea was to remove the radical: $\sqrt{a^2-ab+b^2} \le \frac{a^2-ab+b^2}{a+b} + \frac{a+b}{4}$ by AM-GM. Adding this up cyclically, it suffices to show the inequality $$\frac{5}{2}\left(a+b+c\right)-3\left(\frac{ab}{a+b}+\frac{bc}{b+c}+\frac{ac}{a+c}\right) \le \frac{ab}{c} + \frac{bc}{a} + \frac{ca}{b},$$ which I'm pretty sure is true, but I have no clue how to prove it. This inequality resembles https://artofproblemsolving.com/community/c6h1288310p6804993 and https://artofproblemsolving.com/community/q2h1817483p12130020, the latter of which is a weaker version of this inequality.
WLOG, assume $c = \max(a, b, c)$. By AM-GM inequality, we have \begin{align*} \left(\frac{b^2}{c} - b + c\right) + c &\ge 2\sqrt{b^2 - bc + c^2}, \\ \left(\frac{a^2}{c} - a + c\right) + c &\ge 2\sqrt{a^2 - ac + c^2}, \\ \left(\frac{b^2}{a} - b + a\right) + a &\ge 2\sqrt{b^2 - ab + a^2}. \end{align*} It suffices to prove that $$\frac{b^2}{a} + \frac{a^2 + b^2}{c} + a - 2b + 4c \le 2\left(\frac{ab}{c} + \frac{bc}{a} + \frac{ca}{b}\right)$$ that is $$\frac{(2c^2 - ab - bc)(a - b)^2}{abc} \ge 0$$ which is true. We are done.
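Not a proof, but a randomized sanity check of the inequality in Python; note the equality case $a=b=c$:

```python
import math
import random

random.seed(0)

def lhs(a, b, c):
    return (math.sqrt(a * a - a * b + b * b) + math.sqrt(b * b - b * c + c * c)
            + math.sqrt(c * c - c * a + a * a))

def rhs(a, b, c):
    return a * b / c + b * c / a + c * a / b

violations = 0
for _ in range(20000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    if lhs(a, b, c) > rhs(a, b, c) + 1e-9:   # small slack for rounding
        violations += 1
```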
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Almost sure convergence of AR(1) model I am trying to solve the following problem. Problem. Suppose that $X_n = \rho X_{n-1} + \epsilon_n$ with $|\rho| < 1$ and $X_0 = 0$, where $\epsilon_n$ are iid r.v.'s with mean $0$ and variance $1$. Show that $\max_{1\le k \le n} |X_k|/\sqrt{n} \to 0~$ a.s. My idea is to use the Borel–Cantelli lemma to show the a.s. convergence. Since $$\max_{1\le k \le n} |X_k|/\sqrt{n} \to 0 ~\text{ a.s. } \iff \forall\epsilon > 0: P(\max_{1\le k \le n} |X_k|/\sqrt{n} > \epsilon ~\text{ i.o.}) = 0,$$ and $$\{\max_{1\le k \le n} |X_k|/\sqrt{n} > \epsilon ~\text{ i.o.}\} = \{ |X_n|/\sqrt{n} > \epsilon ~\text{ i.o.}\},$$ (is it true?) I think that it is enough to show that $|X_n|/\sqrt{n} \to 0 ~\text{ a.s. }$ To apply the Borel–Cantelli lemma, I am trying to bound $\sum_n P(|X_n|/\sqrt n > \epsilon)$. But the Markov inequality only shows that $$ P(|X_n|/\sqrt n > \epsilon) \le \frac{var(X_n)}{n\epsilon^2} = \frac{1-\rho^{2n}}{(1-\rho^2)n\epsilon^2}. $$ But this cannot bound $\sum_n P(|X_n|/\sqrt n > \epsilon)$. How can I proceed? Do we need a finite 4th moment for $X_n$?
Hints: $X_n=\rho^{n-1}\epsilon_1+\rho^{n-2} \epsilon_2+...+\epsilon_n$ by iteration. This shows that $X_n$ converges a.s.. [ You can use Kolmogorov's Three Series Theorem, for example]. Note that if a sequence $(x_n)$ of real numbers is bounded then $\max \{|x_1|,|x_2|,...|x_n|\} / \sqrt n \to 0$.
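A quick simulation illustrating the claimed decay (Python, $\rho=0.8$, Gaussian noise, fixed seed; purely a sanity check, not part of the argument):

```python
import math
import random

random.seed(42)
rho = 0.8

def scaled_max(n):
    # simulate X_1, ..., X_n with X_0 = 0 and return max_k |X_k| / sqrt(n)
    x, m = 0.0, 0.0
    for _ in range(n):
        x = rho * x + random.gauss(0.0, 1.0)
        m = max(m, abs(x))
    return m / math.sqrt(n)

vals = [scaled_max(10 ** k) for k in (2, 4, 6)]
assert vals[0] > vals[1] > vals[2]   # the statistic shrinks as n grows
```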
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Variable pairs of chords at right angles are drawn through a point $P$ (with eccentric angle $\pi/4$) on the ellipse. Variable pairs of chords at right angles are drawn through a point $P$ (with eccentric angle $\pi/4$) on the ellipse $\frac {x^2}{4}+y^2=1$, to meet the ellipse at two points, say $A$ and $B$. If the line joining $A$ and $B$ passes through a fixed point $Q (a,b)$ such that $a^2+b^2$ has value equal to $\frac{m}{n}$, where $m,n$ are relatively prime positive integers, find $(m+n)$. My Approach: Note: $m_{AB}$ denotes the slope of $AB$. The equation of $AB$ is $\frac{y}{1}\cdot \sin \frac {\alpha + \beta}{2}+ \frac{x}{2}\cdot \cos\frac {\alpha+ \beta}{2}= \cos\frac {\alpha -\beta}{2}$ Let $A=(2\cos\alpha,\sin\alpha)$ and $B=(2\cos\beta,\sin\beta)$, $P=(2\cos\frac{\pi}{4},\sin\frac{\pi}{4})$ $m_{AP}=\frac{\sin\alpha - \frac{1}{\sqrt2}}{2\cos\alpha-\frac{2}{\sqrt2}}$ $m_{BP}=\frac{\sin\beta - \frac{1}{\sqrt2}}{2\cos\beta-\frac{2}{\sqrt2}}$ Because $AP$ and $BP$ are perpendicular, $m_{AP}\cdot m_{BP}=-1$. After solving I reach $\frac{5}{2}\cos(\alpha-\beta)+\frac{3}{2}\cos(\alpha+\beta)$= $\frac{2\cos \frac{\alpha- \beta}{2}}{\sqrt2} \biggl( \sin \frac {\alpha - \beta}{2}+\cos\frac {\alpha - \beta}{2}\biggr)+\frac{5}{2}$ How do I get the end result? Can I get the end result using my method or something similar? This question is the same as the ones below, but they used some direct results. Ellipse in which two chords are perpendicular to each other Prove that the chord of the ellipse passes through a fixed point
My idea would be to simplify the working. The question does not ask us to prove that the chords pass through the same point $Q$. It states that if they do, what is the coordinates of $Q$. So if they do pass through a common point $Q$, its coordinates can be found using any such two chords. Given eccentric angle of $\frac{\pi}{4}$, coordinates of $P$: $\left(\sqrt2, \dfrac{1}{\sqrt2}\right)$ So if we take the first pair as a horizontal and a vertical line, Coordinates of $A$: $\left(\sqrt2, -\dfrac{1}{\sqrt2}\right)$ Coordinates of $B$: $\left(-\sqrt2, \dfrac{1}{\sqrt2}\right)$ Equation of line $AB$ turns out to be $x+2y = 0 \ \ $ ...$(i)$ Now you have two approaches you can follow, you can show using the answer to one of the questions you linked (link) that the normal line at $P$ will pass through point $Q$. That makes it easier to find the coordinates of $Q$. Or just take point $B'$ as $\left(-\sqrt2, - \dfrac{1}{\sqrt2}\right)$ so slope of line $PB'$ is $ \dfrac{1}{2}$. Hence equation of line $PA'$ will be, $\left(y - \dfrac{1}{\sqrt2}\right) = -2 (x - \sqrt2) \implies 2x + y = \dfrac{5}{\sqrt2}$ Plugging it into equation of ellipse, you get the coordinates of $A'$ as $\left(\dfrac{23\sqrt2}{17}, -\dfrac{7}{17\sqrt2} \right)$. That leads to equation of line $A'B'$ as, $ y + \dfrac{1}{\sqrt2} = \dfrac{1}{8} (x + \sqrt2) \ \ $ ...$(ii)$ Solving $(i)$ and $(ii)$ should give you coordinates of $Q (a, b)$ and you should get to the final answer of $m + n = 19$.
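The fixed point here is the classical Frégier point of $P$. As a pure-Python cross-check (helper names are mine), every perpendicular chord pair through $P$ gives a line $AB$ through $Q=\left(\frac{3\sqrt2}{5},-\frac{3\sqrt2}{10}\right)$, and $a^2+b^2=\frac9{10}$ so $m+n=19$:

```python
import math

px, py = math.sqrt(2), 1 / math.sqrt(2)       # P, eccentric angle pi/4

def second_point(m):
    # second intersection of the line through P with slope m and x^2/4 + y^2 = 1
    a = 0.25 + m * m
    c = (py - m * px) ** 2 - 1
    x = c / (a * px)                           # Vieta: product of the two roots is c/a
    return x, py + m * (x - px)

qx, qy = 3 * math.sqrt(2) / 5, -3 * math.sqrt(2) / 10   # candidate fixed point Q
for m in (0.3, 1.0, 2.7, -5.0):
    ax, ay = second_point(m)
    bx, by = second_point(-1 / m)              # the perpendicular chord
    cross = (bx - ax) * (qy - ay) - (qx - ax) * (by - ay)
    assert abs(cross) < 1e-9                   # A, B, Q are collinear

assert abs(qx**2 + qy**2 - 9 / 10) < 1e-12     # a^2 + b^2 = 9/10, so m + n = 19
```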
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How may we formally prove the equivalence of taking permutations along the rows and along the columns in the Leibniz determinant? If the square matrix $\mathbf{A}\in M_{n\times n}$ and $a_{i,j}$ is the element in the $i$th row and the $j$th column of $\mathbf{A}$, and the sign of a permutation is positive when the number of interchanges is even, and negative when it is odd, and $\sigma,\tau$ are permutations in the permutation-group $S_n$ then: $$\DeclareMathOperator{\sgn}{sgn}|\mathbf{A}|=\sum_{\sigma\in S_n}\sgn(\sigma)\prod_{i=1}^na_{i,\sigma(i)}\equiv\sum_{\tau\in S_n}\sgn(\tau)\prod_{i=1}^na_{\tau(i),i}$$ And while it is intuitive to me that selecting elements row-wise or column-wise in the product comes to the same thing as everything is selected eventually, I can't see a clear proof for why the two expressions are equivalent, especially taking the sign, or "signature", of the permutation into account. Everything is selected, yet I don't see how to guarantee that all selections land in the same n-tuples, with the same sign! For example, a tuple of a $3\times3$ matrix might be $(a_{1,3},a_{2,1},a_{3,2})$, and using the other permutation method its correspondent is $(a_{2,1},a_{3,2},a_{1,3})$, but although these correspondents always exist, I cannot prove that the corresponding tuples have the same sign. Having the same sign would require an even number of interchanges between correspondents, which is clear in my example $((3,1,2)\to(2,3,1)$ with two interchanges$)$, but not so clear in the general case. How can we show that the set of all the products $\sgn(\sigma)\cdot a_{i,\sigma(i)}$ is identical, (just with a different order), to the set of the products $\sgn(\tau)\cdot a_{\tau(i),i}$?
The argument is based on the group structure of $S_n$. We start with \begin{align*} |\mathbf{A}|=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^na_{i,\sigma(i)}\tag{1} \end{align*} Since the symmetric group $S_n$ with respect to composition of permutations is a group, we have for each permutation $\sigma\in S_n$ a unique inverse $\sigma^{-1}\in S_n$. If $\sigma(i)=k$ we have $\sigma^{-1}(k)=i$, and we get \begin{align*} a_{i,\sigma(i)}=a_{\sigma^{-1}(k),k} \end{align*} Since $\sigma\in S_n$ is a permutation of $\{1,\ldots,n\}$ and each element of $\{1,\ldots,n\}$ occurs precisely once, we can write \begin{align*} \prod_{i=1}^n a_{\sigma^{-1}(i),i}\tag{2} \end{align*} Summing over all $n!$ permutations in $S_n$, it follows from (1) and (2): \begin{align*} \color{blue}{|\mathbf{A}|}&=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^na_{i,\sigma(i)}\\ &=\sum_{\sigma\in S_n}\mathrm{sgn}(\sigma)\prod_{i=1}^na_{\sigma^{-1}(i),i}\\ &\,\,\color{blue}{=\sum_{\sigma^{-1}\in S_n}\mathrm{sgn}(\sigma^{-1})\prod_{i=1}^na_{\sigma^{-1}(i),i}}\tag{3}\\ \end{align*} and the claim follows. In (3) we use that $\mathrm{sgn}(\sigma)=\mathrm{sgn}(\sigma^{-1})$ and that summing over $\sigma^{-1}\in S_n$ is just a reordering of summing over $\sigma \in S_n$.
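For a concrete check, both Leibniz sums can be evaluated directly in Python by brute force over $S_n$ on a small integer matrix (illustrative only):

```python
import math
import random
from itertools import permutations

def sgn(p):
    # sign of a permutation via its inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det_rows(A):
    # sum over sigma of sgn(sigma) * prod_i a_{i, sigma(i)}
    n = len(A)
    return sum(sgn(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det_cols(A):
    # sum over tau of sgn(tau) * prod_i a_{tau(i), i}
    n = len(A)
    return sum(sgn(p) * math.prod(A[p[i]][i] for i in range(n))
               for p in permutations(range(n)))

random.seed(1)
A = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]
assert det_rows(A) == det_cols(A)
```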
{ "language": "en", "url": "https://math.stackexchange.com/questions/4179933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Average degree of a vertex in a labeled tree Consider all labeled trees on vertices $\{1,\dots,n\}$. What is the average degree of vertex $1$? I tried to count, for each possible degree $i$ of the vertex $1$, the number of trees such that the vertex $1$ has exactly degree $i$. Then I tried to compute the summation: $$S= \sum_{i=1,...,n-1} i \cdot N_i $$ where $N_i$ is the number of trees such that vertex $1$ has degree $i$. I got $N_i = \binom{n-2}{i-1} (n-1)^{n-1-i}$ using Prüfer codes. The result at the end would simply be $\frac{S}{n^{n-2}}$. Is there a way to simplify the result? The problem is the $i$ in the summation; if the $i$ were not there, I could simply use the binomial theorem. I also tried to take the derivative of $(1+x)^n$ but I could not conclude.
The average degree of vertex $1$ is one $n$'th of the sum of the average degrees of all the vertices, which is $2n-2$. Therefore the average degree is $\frac{2n-2}{n} = 2 - \frac{2}{n}$
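This can be verified by brute force over all $n^{n-2}$ Prüfer sequences for a small $n$, using the fact that the degree of a vertex equals one plus its multiplicity in the sequence:

```python
from itertools import product

n = 5
total = 0            # sum of deg(vertex 1) over all labeled trees
count = 0            # number of labeled trees, n^(n-2)
for seq in product(range(1, n + 1), repeat=n - 2):
    total += 1 + seq.count(1)
    count += 1

assert count == n ** (n - 2)
assert total * n == (2 * n - 2) * count   # average degree = 2 - 2/n
```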
{ "language": "en", "url": "https://math.stackexchange.com/questions/4180099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of integer solutions of $a^2+b^2=10c^2$ Find the number of integer solutions of the equation $a^2+b^2=10c^2$. I can only get by inspection that $a=3m, b=m, c=m$ satisfies it for any $m \in \mathbb{Z}$. Is there a formal logic to find all possible solutions? Any hint? Also I tried taking $a=p^2-q^2$, $b=2pq$ and $10c^2=(p^2+q^2)^2$, which gives $$\frac{p^2+q^2}{c}=\sqrt{10}$$ which is invalid, since a rational can never be an irrational.
Solving the $C$-function of Euclid's formula $\quad A=m^2-k^2,\quad B=2mk,\quad C=m^2+k^2\quad$ for $(k), \space $ we can find Pythagorean triples for any given $C$-values, if they exist, that are primitive, doubles, or square multiples of primitives. This will not find, for example, $(9,12,15)\space$ or $(15,20,25),$ but it will find $(3,4,5),\space (6,8,10),\space (12,16,20),\space (27,36,45), \space$ etc. We begin with the following formula. Any $m$-value that yields an integer $k$-value indicates a valid $(m,k)$ pair for generating a Pythagorean triple. \begin{equation} C=m^2+k^2\implies k=\sqrt{C-m^2}\\ \text{for}\qquad \bigg\lfloor\frac{ 1+\sqrt{2C-1}}{2}\bigg\rfloor \le m \le \lfloor\sqrt{C-1}\rfloor \end{equation} The lower limit ensures $m>k$ and the upper limit ensures $k\in\mathbb{N}.$ Here is an example for $C=40$, i.e. $c=4$, where $c$ is the one shown in the OP equation. $$C=40\implies \bigg\lfloor\frac{ 1+\sqrt{80-1}}{2}\bigg\rfloor=4 \le m \le \lfloor\sqrt{40-1}\rfloor=6\\ \land \quad m\in\{6\}\Rightarrow k\in\{2\}\\$$ $$F(6,2)=(32,24,40)\implies (32,24,10\times 4)$$ This method will not find all Pythagorean triples that match the criteria, but it will find an infinite number of triples that do, such as: $$c=1\longrightarrow (8,6,10\times 1)\\ c=2\longrightarrow (12,16,10\times 2)\\ c=4\longrightarrow (32,24,10\times 4)\\ c=9\longrightarrow (72,54,10\times 9)\\ $$ Note that any multiple of a triple found also yields a valid triple, so $3\times (8,6,10)\longrightarrow (24,18,10\times3)$ and provides the "missing" $c=3$ triple in the list above.
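The search described above is easy to automate; here is a small Python sketch of the same $k=\sqrt{C-m^2}$ scan (helper name is mine):

```python
import math

def triples_with_hypotenuse(C):
    """All (A, B, C) from Euclid's formula with m^2 + k^2 = C."""
    found = []
    lo = (1 + math.isqrt(2 * C - 1)) // 2      # lower limit, ensures m > k
    hi = math.isqrt(C - 1)                     # upper limit, ensures k >= 1
    for m in range(lo, hi + 1):
        k2 = C - m * m
        k = math.isqrt(k2)
        if 1 <= k < m and k * k == k2:         # k is a positive integer below m
            found.append((m * m - k * k, 2 * m * k, m * m + k * k))
    return found

assert triples_with_hypotenuse(40) == [(32, 24, 40)]   # the example above
assert triples_with_hypotenuse(10) == [(8, 6, 10)]
```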
{ "language": "en", "url": "https://math.stackexchange.com/questions/4180314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Generalize the volume formula for Cylinder and Cone Frustum I'm trying to derive the frustum volume. I also want to generalize the formula so that it applies to the cylinder, which is the case when $r_1 = r_2$. The picture below shows the typical derivation, and I know that the formula holds whether or not $r_1 = r_2$. But there is a line in the proof where the divisor is $r_1 - r_2$. So it feels like cheating to say that this formula can be used when $r_1 = r_2$, and I would like guidance on why the formula holds even when $r_1 = r_2$. What is the reason behind that? I'm also not sure whether this derivation is mathematically correct and can be cited in my thesis, or do you have any suggestions? Frustum Volume
There is nothing wrong with this formula, and it works for the cylinder too. The reason it works is that you can show that the volume of the frustum converges to the volume of the cylinder as $r_2\to r_1$. Indeed, consider two frustums of the same height $h$: one with radii $r-h$ and $r$, and another with radii $r$ and $r+h$. The volume of the cylinder with radius $r$ and height $h$ lies between the volumes of these two frustums, and their volumes converge to the same value as $h\to 0$.
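The limit and the squeeze are easy to see numerically; a small Python sketch:

```python
import math

def frustum_volume(r1, r2, h):
    # V = pi*h*(r1^2 + r1*r2 + r2^2)/3, the formula in question
    return math.pi * h * (r1 * r1 + r1 * r2 + r2 * r2) / 3

r, h = 2.0, 5.0
cylinder = math.pi * r * r * h

# r1 = r2 recovers the cylinder exactly
assert abs(frustum_volume(r, r, h) - cylinder) < 1e-12

# the cylinder is squeezed between slightly smaller and larger frustums
for e in (0.1, 0.01, 0.001):
    assert frustum_volume(r - e, r, h) < cylinder < frustum_volume(r, r + e, h)
```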
{ "language": "en", "url": "https://math.stackexchange.com/questions/4180451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Example of ordinals $a,b,c$ such that $(a + b) \cdot c \neq a\cdot c + b\cdot c$ I know that $c\cdot(a +b)=c\cdot a+c\cdot b$, but I don't see the counterexample to the hypothesized property in the question.
By definition, we have that $(\omega_0+1)\cdot\omega_0$ is the supremum of $\{(\omega_0+1)\cdot n\mid n\text{ is finite}\}$. This supremum is $\omega_0^2$, not $\omega_0^2+\omega_0$. Hence $a=\omega_0$, $b=1$, $c=\omega_0$ gives a counterexample: $(a+b)\cdot c=(\omega_0+1)\cdot\omega_0=\omega_0^2$, while $a\cdot c+b\cdot c=\omega_0\cdot\omega_0+1\cdot\omega_0=\omega_0^2+\omega_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4180609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Doubt in application of Cauchy's Residue Theorem in the proof of Prime Number Theorem I have been studying the proof of Prime Number Theorem as outlined in the book Introduction to Analytic Number Theory by Apostol and I came across the following lemma : In the proof of this lemma, the author takes two different contours for $u>1$ and $0<u\leq 1$ respectively and tries to show the required result. Notice that the function has poles at integers $n = 0,-1,\cdots,-k$. The case for $u>1$ is a straightforward application of Cauchy's Integral Theorem but I am having trouble understanding the case for $0<u\leq 1$. Here is the proof, using Cauchy's Residue Theorem, as mentioned in the text: The first equality is pretty clear to me but I just can't understand the second equality. How does the author jump from the first line to the second ? Please help!
This is a general property of residue calculus: if $F$ has a simple pole at $z_0$ and $G$ is analytic at $z_0$ then $$\text{Res}(F\cdot G,z_0) =G(z_0)\cdot \text{Res}(F,z_0).$$ Indeed if $F(z)=\frac{a_{-1}}{z-z_0}+a_0 +a_1(z-z_0)+o(z-z_0)$ and $G(z)=b_0 +b_1(z-z_0)+o(z-z_0)$ then $$F(z)G(z)=\frac{b_0a_{-1}}{z-z_0}+b_0a_0+b_1a_{-1}+o(1)$$ which implies that $$\text{Res}(F\cdot G,z_0)=b_0a_{-1}=G(z_0)\cdot \text{Res}(F,z_0).$$ Note that, in our case, $F(z)=\Gamma (z)$ has simple poles at the non-positive integers: $$\text{Res}(\Gamma,-n)=\frac{(-1)^n}{n!}.$$
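The identity $\operatorname{Res}(F\cdot G,z_0)=G(z_0)\operatorname{Res}(F,z_0)$ is easy to sanity-check numerically, since at a simple pole $\operatorname{Res}(f,z_0)=\lim_{z\to z_0}(z-z_0)f(z)$. A rough sketch (the helper name and the choice $G(z)=e^z$ are mine):

```python
import math

def residue_simple(f, z0, eps=1e-7):
    # numerical residue at a simple pole: Res(f, z0) = lim (z - z0) f(z)
    return eps * f(z0 + eps)

# F(z) = Gamma(z) has simple poles at -n with Res(Gamma, -n) = (-1)^n / n!,
# and G(z) = e^z is analytic, so Res(Gamma * G, -n) should be G(-n) (-1)^n / n!
FG = lambda z: math.gamma(z) * math.exp(z)
```

The numerical values agree with $G(-n)\,\frac{(-1)^n}{n!}$ to within the finite-difference error.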
{ "language": "en", "url": "https://math.stackexchange.com/questions/4180760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Computing $\int_{-2}^{2}\frac{1+x^2}{1+2^x} dx$ I am trying to compute the following integral by different methods, but I have not been able to come up with the result analytically. $$\int_{-2}^{2}\frac{1+x^2}{1+2^x}dx$$ First I tried something like: $2^{x}=e^{x\ln{2}}\Rightarrow u=x\ln{2} \iff x=\frac{u}{\ln{2}}$ $\Rightarrow$ $\frac{du}{\ln{2}}=dx$. On the other hand, $1+x^{2}=(x-1)^{2}+2x$ Substituting $$\int_{-2}^{2}\frac{1+x^2}{1+2^x}dx=\int_{-2}^{2}\frac{(x-1)^{2}+2x}{1+e^{x\ln{2}}}dx=\int_{-2}^{2}\frac{(x-1)^{2}+2x}{1-(-e^{x\ln{2}})}dx$$ $$\int_{-2}^{2}\frac{1+x^2}{1+2^x}dx=\int_{-2}^{2}((x-1)^{2}+2x)\sum_{n=0}^{\infty}(-e^{x\ln{2}})^{n}dx$$ $$\int_{-2}^{2}\frac{1+x^2}{1+2^x}dx=\int_{-2}^{2}((x-1)^{2}+2x)\sum_{n=0}^{\infty}((-1)^{n}e^{nx\ln{2}})dx=\int_{-2}^{2}((x-1)^{2}+2x)\sum_{n=0}^{\infty}\frac{((-1)^{n}n^{n}x^{n}\ln^{n}{2})}{n!}dx$$ $$\int_{-2}^{2}\frac{1+x^2}{1+2^x}dx=\sum_{n=0}^{\infty}\frac{((-1)^{n}n^{n}\ln^{n}{2})}{n!}\int_{-2}^{2}(1+x^2)x^{n}dx$$ I do not know if the reasoning is correct. I hope someone can help me. Note: By symmetry, writing $f$ for the integrand, $f(-x)=2^{x}f(x)$, so $2I=\int_{-2}^{2}(1+2^{x})\frac{1+x^2}{1+2^x}dx=\int_{-2}^{2}(1+x^2)dx=\frac{28}{3}$ and hence $I=\frac{14}{3}$
My approach: $$\int_{-2}^{2}\frac{1+x^2}{1+2^x} dx = \int_{-2}^{0}\frac{1+x^2}{1+2^x}dx + \int_{0}^{2}\frac{1+x^2}{1+2^x} dx $$$$ \overset{t = -x}= \int_{0}^{2}\frac{1+t^2}{1+2^{-t}} dt+\int_{0}^{2}\frac{1+x^2}{1+2^x}dx = \int_{0}^{2}x^2 + 1 dx =\frac{14}{3}. $$
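A quick numerical cross-check of the value $14/3\approx 4.6667$ (a sketch using composite Simpson's rule; the helper name is mine):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

val = simpson(lambda x: (1 + x * x) / (1 + 2.0 ** x), -2.0, 2.0)
```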
{ "language": "en", "url": "https://math.stackexchange.com/questions/4181138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Does there exist a non-zero module over $R$ which vanishes modulo a nilpotent ideal? Let $R$ be a noetherian ring, $I$ a nilpotent ideal in $R$, and $M$ a module over $R$, of infinite type, such that $M/IM = 0$. Is it necessarily the case that $M=0$? If $M$ were of finite type, then we would immediately get a positive answer by Nakayama's Lemma. If $R$ is not noetherian, then I think I have a counterexample.
Yes, as Eric Wofsey points out, it is necessarily the case that $M=0$, and this is in fact true even if $R$ is not Noetherian! Since $I$ is nilpotent, let $n$ be such that $I^n=0$. Now, since $M\big/IM=0$, we have in other words that $M=IM$. By induction on $k$, we then have $M=I^kM$ for all $k$, whence in particular $M=I^nM=0$. Perhaps instead, again as Eric points out, you are thinking of a nil ideal; this is an ideal in which every element is nilpotent. Now, if $R$ is Noetherian, then every nil ideal of $R$ is nilpotent. Indeed, if $I\leqslant R$ is any ideal, then, since $R$ is Noetherian, we have $I=\langle a_1,\dots,a_n\rangle$ for some $a_i\in R$. If $I$ is also nil, then for each $i\leqslant n$ there exists $k_i$ such that $a_i^{k_i}=0$. Let $k=\sum_{i=1}^nk_i$; can you show that $I^k=0$? In particular, by the paragraph above, the answer to your question is still yes even if "nilpotent" is replaced by "nil". However, it is the case that the generalized statement does not hold in general if $R$ is not Noetherian. For example, let $R$ be the quotient $$\frac{\mathbb{Q}[x_n:n\in\mathbb{N}]}{\langle x_1^2,x_n-x_{n+1}^2:n\in\mathbb{N}\rangle}.$$ For convenience, denote the image of each $x_n$ in $R$ as $a_n$, and let $I$ be the ideal of $R$ generated by the $a_n$. We have $I^2=I$, ie $I\big/I^2=0$, since $a_n=a_{n+1}^2\in I^2$ for each $n\in\mathbb{N}$. Furthermore, by induction, we have $a_n^{2^{n}}=0$ for each $n\in\mathbb{N}$, so $I$ is generated by nilpotent elements and hence (why?) nil. But $I\neq 0$, so taking $M=I$ gives the desired counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4181466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dirichlet integral integrates to $\pi$ I think this question has been asked several times on MSE, since it seems to be a standard Fourier analysis fact, but I can't find a reference for it. Let's consider the Dirichlet kernel $$D_n(x) := \sum\limits_{\lvert k \rvert \leq n}e^{ikx}$$ In a later proposition, something like the following seems to be used: $$\int_0^{\pi} D_n(x)dx = \pi$$ Is this true? I tried to compute this explicitly, and also using $$D_n(x) = \frac{\sin\left(\left(n+\frac{1}{2}\right)x\right)}{\sin(\frac{x}{2})} \hspace{0.2cm} \forall x\ne 0 \hspace{0.2cm} \text{mod } 2\pi$$ But I got stuck: the part involving $\cos$ integrates to $\sin$ and vanishes, while the part involving $\sin$ depends on $k$ and the factor $i$ remains. Where is my mistake? And how does one prove $$\int_0^{\pi} D_n(x)dx = \pi$$ if it is true?
The truncated Fourier series $S_N(f)(x)$ for a function $f$ on $[-\pi,\pi]$ is given by \begin{align} S_N(f)(x)&= \sum_{n=-N}^{N}\frac{1}{2\pi}\int_{-\pi}^{\pi}f(y)e^{-iny}dy e^{inx} \\ &=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(y)\sum_{n=-N}^{N}e^{in(x-y)}dy \\ &=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(y)D_N(x-y)dy \end{align} That defines the Dirichlet kernel $D_N (x)=\sum_{n=-N}^{N}e^{inx}$. Notice that $D_N(x)=D_N(-x)$ is an even function, and $\int_{-\pi}^{\pi}D_N(x)dx=2\pi$: integrating term by term, every $e^{inx}$ with $n\neq 0$ integrates to $0$ over $[-\pi,\pi]$, and the $n=0$ term contributes $2\pi$. So, $$ \int_0^\pi D_N(x)dx=\frac{1}{2}\int_{-\pi}^{\pi}D_N(x)dx=\frac{2\pi}{2}=\pi. $$
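Numerically, $D_N(x)=1+2\sum_{n=1}^N\cos(nx)$, and the value $\pi$ of $\int_0^\pi D_N(x)\,dx$ can be confirmed for several $N$ (a sketch; helper names are mine):

```python
import math

def dirichlet_kernel(N, x):
    # D_N(x) = sum_{|n| <= N} e^{inx} = 1 + 2 * sum_{n=1}^{N} cos(nx)
    return 1.0 + 2.0 * sum(math.cos(n * x) for n in range(1, N + 1))

def simpson(f, a, b, n=4000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

integrals = [simpson(lambda x, N=N: dirichlet_kernel(N, x), 0.0, math.pi) for N in (1, 3, 7, 15)]
```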
{ "language": "en", "url": "https://math.stackexchange.com/questions/4181651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$ \lim_{a\to\infty} {a\int_{0}^{\frac{\pi}{4}}e^x\space \tan^a{x}\space dx}$ $$\text{Let } I(a) = \int_{0}^{\frac{\pi}{4}} e^{x} \tan^{a} x \, dx. \text{ Find } \lim_{a\to\infty}aI(a). $$ Well, this is an integral that I'm currently dealing with, and so far I've tried to solve it using different approaches. Still, I haven't solved it, and what I always find is an undefined answer. Even when I used Mathematica it returned an undefined answer. This solution, in my opinion, is most likely to be the one that works: Our integral is $$\int_{0}^{\frac{\pi}{4}}e^x\space \lim_{a\to\infty}[{a\tan^a{x}}]\space dx,$$ letting $$A=\lim_{a\to\infty}{a\tan^a{x}}$$ We know that $$x \in [0,\frac{\pi}{4}]\space \Rightarrow \space \tan{x} \in[0,1] \space \Rightarrow \space \lim_{a\to\infty}\tan^a{x}= \ \begin{cases} 0 & 0\leq x< \frac{\pi}{4} \\ 1 & x=\frac{\pi}{4}\\ \end{cases} \ $$ Which tells us that $$ A = \ \begin{cases} 0 \times \infty & 0\leq x< \frac{\pi}{4} \\ \infty & x=\frac{\pi}{4}\\ \end{cases} $$ which gives an undefined integrand. Mathematica says the integral is undefined, but the book, Advanced Calculus Explored, says it has an answer. I think this is the right approach and I just need to work on the limit a little bit more, but this is where I'm stuck. I appreciate any hints or help.
Rewrite the integral as $$\int_0^{\pi/4} dx \, \exp{x} \, \exp{\left [a \left (\log{\tan{x}} \right ) \right ]}$$ Sub $x=\pi/4-y$ and use the tangent addition rule, and the integral is $$\exp{\left ( \frac{\pi}{4} \right )} \int_0^{\pi/4} dy \, \exp{(-y)} \, \exp{\left [a \left (\log{\left (\frac{1-\tan{y}}{1+\tan{y}} \right )} \right ) \right ]}$$ Note that the dominant contribution to the integral as $a \to \infty$ comes from a small neighborhood of $y=0$. So expand the integrand about $y=0$; then the interval of integration may be expanded out to $[0,\infty)$ because the other contributions are exponentially subdominant. Accordingly, as $a \to \infty$, the integral behaves as $$\exp{\left ( \frac{\pi}{4} \right )} \int_0^{\infty} dy \, e^{-2 a y} = \frac1{2 a} \exp{\left ( \frac{\pi}{4} \right )}$$ Therefore, the sought-after limit is $$ \frac12 \exp{\left ( \frac{\pi}{4} \right )}$$ This has been verified in Mathematica.
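The asymptotic $aI(a)\to\frac12 e^{\pi/4}\approx 1.0966$ can also be checked with straightforward quadrature (a sketch; the grid size and helper names are my choices, and the fine grid is needed because the integrand concentrates at $x=\pi/4$):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def aI(a, n=200000):
    # a * integral_0^{pi/4} e^x tan(x)^a dx; the mass concentrates near pi/4
    return a * simpson(lambda x: math.exp(x) * math.tan(x) ** a, 0.0, math.pi / 4.0, n)

limit_value = math.exp(math.pi / 4.0) / 2.0   # about 1.0966
```

The remaining discrepancy for finite $a$ is of relative size $O(1/a)$, consistent with the Laplace-type expansion in the answer.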
{ "language": "en", "url": "https://math.stackexchange.com/questions/4181806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that group $G$ of order $|G|=35$ is Abelian This is my outline of proof: * *By Sylow's theorems, $G$ has two unique subgroups $H$ and $K$ respectively of order $5$ and order $7$ and both are Abelian; as groups of prime order are Abelian *Next I use the counting argument to say: Taking any $x\in{H}$ and $y\in{K}$ such that $x,y\notin{e}$, I will have more number of ordered pairs than the remaining elements in $G$ (i.e $35-5-7+1=24$) ($e$ is common to both so $+1$) *Thus there are combinations which are same; implying $x$ and $y$ commute; but how do I proceed to prove that even the other elements commute. Kindly help in showing the way further.
If $|G|=35$, then every element must have order $1,5,7$ or $35$ by Lagrange. The Sylow theorems show that $G$ has exactly one subgroup $H$ of order $5$ and exactly one subgroup $K$ of order $7$. Since every element of order $5$ or $7$ lies in one of these unique Sylow subgroups, $G$ has exactly $4$ elements of order $5$ and $6$ elements of order $7$. So, by counting, $35 - 4 - 6 - 1 = 24$ elements of $G$ must have order $35$. In particular, $G$ has an element of order $35$, so $G$ is cyclic, hence abelian.
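The counting can be illustrated in the cyclic group $\mathbb{Z}/35\mathbb{Z}$ (which, by the argument above, is the only group of order $35$): the order of $k$ is $35/\gcd(k,35)$, and the order profile matches the count in the answer.

```python
from math import gcd
from collections import Counter

# element orders in Z/35Z: ord(k) = 35 / gcd(k, 35)
orders = Counter(35 // gcd(k, 35) for k in range(35))
```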
{ "language": "en", "url": "https://math.stackexchange.com/questions/4181931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that $f(x)$ is continuous at point $c$ I have a function $f(x) = 0$ on $[a,b]$ except for $c \in [a,b]$ where $f(c) = 1$. I am asked to show $f(x)$ is continuous at point $c$. Thus I have for the definition of continuity: if $\;|x-c|<\delta\;$ then $\;|f(x)-f(c)|<\varepsilon\;,$ if $\;|x-c|<\delta\;$ then $\;|1|<\varepsilon\;.$ However given that the definition has to hold for all $\varepsilon>0$ then I'm not sure how this applies given that $\varepsilon>1$ by the above and thus there exist some values for which the definition does not hold.
Recall the definition of continuity: $f$ is continuous at $c$ if for all $\epsilon > 0$, there exists $\delta > 0$ such that for all $x$, if $|x-c| < \delta$, then $|f(x) - f(c)| < \epsilon$. I'm emphasizing the quantifiers. When you negate a statement, the quantifiers flip and the negation moves down the line. So $f$ is not continuous at $c$ if there exists $\epsilon > 0$ such that for all $\delta > 0$, there exists $x$ such that $|x-c| < \delta$ but $|f(x) - f(c)| \geq \epsilon$. If you draw a graph of $f$ and inspect each of these definitions, I think you'll see which one is satisfied, and how.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Heat Kernel on a compact manifold without boundary I was wondering if the conservation of mass which is obvious for the heat equation in $\mathbb{R}^n$ holds also for the heat kernel in a general compact manifold without boundary. I mean I want to know if the heat kernel generally satisfies $$\int dx K(x,y,t)=1$$ If so, may I have some references to look at? Thanks in advance
Yes, it's true, and is easy to prove in this setting. Differentiate under the integral sign to get $$\frac{d}{dt} \int K(x,y,t)\,dx = \int \partial_t K(x,y,t)\,dx = \int \Delta K(x,y,t)\,dx$$ the Laplacian $\Delta$ taken in the $x$ variable (by symmetry of the kernel, $K$ satisfies the heat equation in each of its variables). Now integrating by parts, $$\int \Delta K(x,y,t)\,dx = \int K(x,y,t) \Delta 1\,dx = 0$$ so that $\int K(x,y,t)\,dx$ is constant with respect to $t$. To show it's a constant with respect to $y$, you could again differentiate under the integral sign to show that $F(y,t) = \int K(x,y,t)\,dx$ also solves the heat equation, and since it's independent of $t$, it is harmonic. The only continuous harmonic functions on a compact manifold without boundary are constants, by the maximum principle. More generally, any Riemannian manifold for which this holds is said to be stochastically complete. It's well known (but not quite so easy to prove) that every complete Riemannian manifold with Ricci curvature bounded below is stochastically complete; see for instance Hsu, Pei, Heat semigroup on a complete Riemannian manifold, Ann. Probab. 17, No. 3, 1248-1254 (1989). ZBL0694.58043, and references therein.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Polynomials which are invariant to the cyclic permutation of variables I'm trying to solve the following problem from this book. I can find the Gröbner basis of $J$ using Buchberger’s algorithm, and so I don't have any problem with the first part of this problem. But my problem is about the second part. Could you please say how it is related to the Gröbner basis? Let $\eta_{0}=x_{1}+x_{2}+x_{3}, \eta_{1}=x_{1} \varepsilon+x_{2} \varepsilon^{2}+x_{3}, \eta_{2}=x_{1} \varepsilon^{2}+$ $x_{2} \varepsilon+x_{3} \in \mathbb{C}\left[x_{1}, x_{2}, x_{3}\right]$, where $\varepsilon \in \mathbb{C}$ is a primitive 3-root of unity. (i) Compute the reduced Gröbner basis of $$ J=\left(t_{1}-\eta_{0}, t_{2}-\eta_{1}, t_{3}-\eta_{2}\right) \subset \mathbb{C}\left[x_{1}, x_{2}, x_{3}, t_{1}, t_{2}, t_{3}\right] $$ with respect to an elimination order for $x_{1}, x_{2}, x_{3}$. (ii) Let $f \in \mathbb{C}\left[x_{1}, x_{2}, x_{3}\right]$ be a polynomial. Show that $f$ is invariant to the cyclic permutation of variables if and only if $$ f=\sum_{\mathbf{a}=\left(a_{0}, a_{1}, a_{2}\right)} c_{\mathbf{a}} \eta_{0}^{a_{0}} \eta_{1}^{a_{1}} \eta_{2}^{a_{2}} $$ where $a_{1}+2 a_{2} \equiv 0 \bmod 3$ for every a such that $c_{\mathbf{a}} \neq 0$.
In part (i) you effectively express $x_1,x_2,x_3$ in terms of $\eta_0, \eta_1, \eta_2$. Namely you get $$\left[x_{1} - \frac{1}{3} t_{1} + \frac{1}{3} t_{2} \varepsilon + \frac{1}{3} t_{2} - \frac{1}{3} t_{3} \varepsilon, x_{2} - \frac{1}{3} t_{1} - \frac{1}{3} t_{2} \varepsilon + \frac{1}{3} t_{3} \varepsilon + \frac{1}{3} t_{3}, x_{3} - \frac{1}{3} t_{1} - \frac{1}{3} t_{2} - \frac{1}{3} t_{3}\right]$$ which implies $$x_1 = \tfrac{1}{3}\left(\eta_0-\eta_1\varepsilon-\eta_1+\eta_2\varepsilon\right),\quad x_2 = \tfrac{1}{3}\left(\eta_0+\eta_1\varepsilon-\eta_2\varepsilon-\eta_2\right),\quad x_3=\tfrac{1}{3}\left(\eta_0+\eta_1+\eta_2\right).$$ Hence, by substitution, a polynomial in $x_1,x_2,x_3$ can be expressed as a polynomial in $\eta_0,\eta_1,\eta_2$, say $$f(x_1,x_2,x_3) = \sum_{\boldsymbol{a}=(a_0,a_1,a_2)} c_\boldsymbol{a} \eta_0^{a_0}\eta_1^{a_1}\eta_2^{a_2}.$$ The cyclic permutation $(x_1,x_2,x_3)\mapsto(x_2,x_3,x_1)$ maps $$\eta_0 \mapsto \eta_0,\quad \eta_1 \mapsto x_2\varepsilon + x_3\varepsilon^2 + x_1 = \eta_1\varepsilon^2,\quad \eta_2 \mapsto x_2\varepsilon^2 + x_3\varepsilon + x_1 = \eta_2\varepsilon,$$ and hence $f(x_1,x_2,x_3)$ is mapped to $$f(x_2,x_3,x_1) = \sum_{\boldsymbol{a}=(a_0,a_1,a_2)} c_\boldsymbol{a}\eta_0^{a_0}\eta_1^{a_1}\eta_2^{a_2}\varepsilon^{2a_1+a_2}.$$ Finally, $f$ is invariant under the cyclic permutation exactly when $2a_1 + a_2 \equiv 0 \pmod 3$ for all $\boldsymbol{a}$. That condition is equivalent to $a_1 + 2a_2 \equiv 0 \pmod 3$. This can be seen by multiplying by the invertible constant $2 \pmod 3$, or by considering the cyclic permutation in the other direction.
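The transformation rules $\eta_0\mapsto\eta_0$, $\eta_1\mapsto\eta_1\varepsilon^2$, $\eta_2\mapsto\eta_2\varepsilon$ used above are easy to verify numerically with a floating-point primitive cube root of unity (a sketch; the sample values are arbitrary):

```python
import cmath

eps = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

def etas(x1, x2, x3):
    return (x1 + x2 + x3,
            x1 * eps + x2 * eps**2 + x3,
            x1 * eps**2 + x2 * eps + x3)

x1, x2, x3 = 0.3 + 0.1j, -1.7, 2.5 + 0.9j
e0, e1, e2 = etas(x1, x2, x3)
f0, f1, f2 = etas(x2, x3, x1)   # apply the cyclic permutation
```

The last test below checks a monomial with $a_1=a_2=1$, where $a_1+2a_2=3\equiv 0\pmod 3$, so $\eta_1\eta_2$ should be invariant.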
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An ellipse has foci at $(1, -1)$ and $(2, -1)$ and tangent $x+y-5=0$. Find the point where the tangent touches the ellipse. An ellipse has foci at $(1, -1)$ and $(2, -1)$ and tangent $x+y-5=0$. Find the point where the tangent touches the ellipse. Here is a procedure for doing it analytically. * *If $T(x_0,y_0)$ is a touching point, then $x_0+y_0=5$ *The equation of ellipse is $${(x_0-{3\over 2})^2\over a^2} +{(y_0+1)^2 \over b^2}=1$$ *Since $2e=1$ we have $a^2-b^2 = {1\over 4}$ *Since the slope of tangent is $-1$ we have $${2(x_0-{3\over 2})\over a^2} -{2(y_0+1)\over b^2}=0$$ And now we have to solve this tedious system. How can one do it more geometrically?
The tangency point is that point $P$ on the given line having the minimum sum of distances from foci $A=(1,-1)$ and $B=(2,-1)$. But it is well known how to find such a point: reflect point $B$ about the line, to get $B'=(6,3)$, and $P$ is then the intersection between the given line and line $AB'$. A simple computation then gives $P=(34/9,11/9)$.
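The reflection computation can be carried out in exact rational arithmetic (a sketch with my own variable names):

```python
from fractions import Fraction as F

A, B = (F(1), F(-1)), (F(2), F(-1))   # the foci

def reflect(p):
    # reflection across x + y - 5 = 0: subtract (x + y - 5) from each coordinate
    t = p[0] + p[1] - 5
    return (p[0] - t, p[1] - t)

Bp = reflect(B)
# intersect the line through A and B' with x + y = 5
dx, dy = Bp[0] - A[0], Bp[1] - A[1]
s = (5 - A[0] - A[1]) / (dx + dy)
P = (A[0] + s * dx, A[1] + s * dy)
```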
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If a graph has a perfect matching, does that mean that there exists a vertex set $S$ such that $|S|$ is equal to $o(G-S)$? In the mathematical discipline of graph theory the Tutte theorem, named after William Thomas Tutte, is a characterization of graphs with perfect matchings. Let ${\displaystyle o(X)}$ be the number of odd components of the subgraph induced by ${\displaystyle X}$. Theorem (Tutte's 1-Factor Theorem (1947)) A graph $G$ has a 1-factor if and only if for any $S\subset V(G)$, $$o(G-S)\le |S|.$$ My problem: If a graph $G$ has a perfect matching, does that mean that there is an $S$ such that $|S|$ is equal to $o(G-S)$? This fact seems to follow from the generalized Tutte-Berge formula, which says that the size of a maximum matching of a graph $G=(V,E)$ equals $$\frac{1}{2} \min_{S\subseteq V}\left(|V|-(o(G-S)-|S|)\right)$$ If we had $o(G-S)<|S|$ for every $S$, then according to the above formula the size of a maximum matching would be strictly greater than $\frac{1}{2}n$, which is not possible. However, I feel that this explanation relies too much on the Tutte-Berge formula, which seems unnatural. Can we explain it directly from Tutte's 1-Factor theorem?
If $G$ has a perfect matching, then $o(G-S) = |S|$ whenever $|S|=1$. Indeed, for $|S|=1$, Tutte's condition gives $o(G-S)\le |S| = 1$, so $o(G-S)$ is either $0$ or $1$. Since $G$ has a perfect matching, $G$ has an even number of vertices, so $G-S$ has an odd number of vertices; a graph with an odd number of vertices has at least one odd component, and thus $o(G-S) = 1 = |S|$.
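The claim is easy to check on a concrete graph. The sketch below (my own helper, using BFS) counts odd components of $G-S$; for the $6$-cycle, which has a perfect matching, every singleton $S$ gives $o(G-S)=1$, while for the star $K_{1,3}$, which has no perfect matching, deleting the center gives $o=3>1$.

```python
from collections import deque

def odd_components(adj, removed):
    """Count odd-order connected components after deleting `removed` vertices."""
    seen, odd = set(removed), 0
    for v in adj:
        if v in seen:
            continue
        size, queue = 0, deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        odd += size % 2
    return odd

C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}   # has a perfect matching
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}            # K_{1,3}: no perfect matching
```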
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I calculate the sum of sum of triangular numbers? As we know, triangular numbers are a sequence defined by $\frac{n(n+1)}{2}$. And it's first few terms are $1,3,6,10,15...$. Now I want to calculate the sum of the sum of triangular numbers. Let's define $$a_n=\frac{n(n+1)}{2}$$ $$b_n=\sum_{x=1}^na_x$$ $$c_n=\sum_{x=1}^nb_x$$ And I want an explicit formula for $c_n$. After some research, I found the explicit formula for $b_n=\frac{n(n+1)(n+2)}{6}$. Seeing the patterns from $a_n$ and $b_n$, I figured the explicit formula for $c_n$ would be $\frac{n(n+1)(n+2)(n+3)}{24}$ or $\frac{n(n+1)(n+2)(n+3)}{12}$. Then I tried to plug in those two potential equations, If $n=1$, $c_n=1$, $\frac{n(n+1)(n+2)(n+3)}{24}=1$, $\frac{n(n+1)(n+2)(n+3)}{12}=2$. Thus we can know for sure that the second equation is wrong. If $n=2$, $c_n=1+4=5$, $\frac{n(n+1)(n+2)(n+3)}{24}=5$. Seems correct so far. If $n=3$, $c_n=1+4+10=15$, $\frac{n(n+1)(n+2)(n+3)}{24}=\frac{360}{24}=15$. Overall, from the terms that I tried, the formula above seems to have worked. However, I cannot prove, or explain, why that is. Can someone prove (or disprove) my result above?
One approach is to calculate $5$ terms of $c_n$, recognize that it's going to be a degree-4 formula, and then solve for the coefficients. Thus: $$c_1 = T_1=1 \\ c_2 = c_1 + (T_1+T_2) = 5 \\ c_3 = c_2+(T_1+T_2+T_3) = 15 \\ c_4 = c_3 + (T_1+T_2+T_3+T_4) = 35 \\ c_5 = c_4 + (T_1+T_2+T_3+T_4+T_5) = 70$$ Now we can find coefficients $A,B,C,D,E$ so that $An^4+Bn^3+Cn^2+Dn+E$ gives us those results when $n=1,2,3,4,5$. This leads to a linear system in 5 unknowns, which we can solve and obtain $A=\frac1{24},B=\frac14,C=\frac{11}{24},D=\frac14,E=0$. Thus taking a common denominator, we have $$c_n=\frac{n^4+6n^3+11n^2+6n}{24}=\frac{n(n+1)(n+2)(n+3)}{24}$$ So that agrees with your result. Another way is to use the famous formulas for sums of powers. Thus, we find $b_n$ first: $$b_n = \sum_{i=1}^n \frac{i(i+1)}{2} = \frac12\left(\sum i^2 + \sum i\right) = \frac12\left(\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right)\\ =\frac{n^3+3n^2+2n}{6}$$ Now, we find $c_n$: $$c_n = \sum_{i=1}^n \frac{i^3+3i^2+2i}{6}=\frac16\sum i^3 + \frac12\sum i^2 + \frac13\sum i \\ = \frac16\frac{n^2(n+1)^2}{4} + \frac12\frac{n(n+1)(2n+1)}{6} + \frac13\frac{n(n+1)}{2} \\ = \frac{n^4+6n^3+11n^2+6n}{24}=\frac{n(n+1)(n+2)(n+3)}{24}$$ So we have confirmed the answer 2 different ways. As is clear from the other solutions given here, there are other ways as well.
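A brute-force check of the closed form against the definition (a quick sketch):

```python
def c_direct(n):
    a = [i * (i + 1) // 2 for i in range(1, n + 1)]   # triangular numbers a_i
    b = [sum(a[:i]) for i in range(1, n + 1)]         # partial sums b_i
    return sum(b)                                     # c_n

def c_formula(n):
    return n * (n + 1) * (n + 2) * (n + 3) // 24
```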
{ "language": "en", "url": "https://math.stackexchange.com/questions/4182890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
Is this function bounded? $g(n)=t>1\text{ s.t. } \int_1^n (1-i^{-p})^t i^{-p} \, di =\varepsilon$ Let $p>1,t>1,\varepsilon>0$. Assuming the function below exists, when is it bounded? $$g(n)=t>1\text{ s.t. } \int_1^n (1-i^{-p} )^t i^{-p} \,di =\varepsilon$$ From simulations I suspect it's bounded for all $p>1$ and $\varepsilon>0$, can this be shown analytically? For instance, for $p=2$ and $\varepsilon=0.01$ the graph obtained numerically
This is not an answer but it is too long for a comment. I repeated the calculations for the general case and found that, if $$I_p=\int_1^n \left(1-i^{-p}\right)^t \,i^{-p}\, di$$ $$(p-1)\,I_p=\frac{\Gamma \left(2-\frac{1}{p}\right) \Gamma (t+1)}{\Gamma \left(t-\frac{1}{p}+2\right)}-n^{1-p} \,\, _2F_1\left(\frac{p-1}{p},-t;2-\frac{1}{p};n^{-p}\right)$$ So, for $p=2$ $$I_2=\frac{\sqrt{\pi }\,\, \Gamma (t+1)}{2 \Gamma \left(t+\frac{3}{2}\right)}-\frac{1}{n}\,\, _2F_1\left(\frac{1}{2},-t;\frac{3}{2};\frac{1}{n^2}\right)$$ while your file shows $$I_2=\frac{\sqrt{\pi }\,\, \Gamma (2 t+1)}{2 \Gamma \left(2 t+\frac{3}{2}\right)}-\frac{1}{n}\,\, _2F_1\left(\frac{1}{2},-2 t;\frac{3}{2};\frac{1}{n^2}\right)$$ which does not seem to be the same. So, at this time, I prefer to not continue until we clarify.
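For what it is worth, the $p=2$ formula derived in this answer does agree with direct numerical integration; a sketch, using a truncated power series for $\,_2F_1$ (all helper names are mine, and this checks only the answerer's version of the formula, not the one in the OP's file):

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    # power series sum_k (a)_k (b)_k / ((c)_k k!) z^k; adequate for |z| < 1
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

def I2_closed(t, n):
    return (math.sqrt(math.pi) * math.gamma(t + 1) / (2 * math.gamma(t + 1.5))
            - hyp2f1(0.5, -t, 1.5, 1.0 / n**2) / n)

def I2_numeric(t, n, steps=200000):
    # Simpson's rule for integral_1^n (1 - i^-2)^t i^-2 di
    h = (n - 1.0) / steps
    f = lambda i: (1.0 - i ** -2.0) ** t * i ** -2.0
    s = f(1.0) + f(float(n))
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(1.0 + k * h)
    return s * h / 3.0
```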
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Find the number of bijections $g:S\to S$ such that $g^{g(x)}(x)=x$ $\forall x\in S$ I found a question which asks For any function $f$, define $f^1(x)=f(x)$, and for $n\geq 2$, $f^n(x)=f(f^{n-1}(x))$. Let $S=\{1,2,3\dots ,10\}$. Find the number of bijections $g:S\to S$ such that $g^{g(x)}(x)=x$ $\forall x\in S$. I know how to deal with these kind of questions if we have a specific integer at the power of $g(x)$ instead of $g(x)$ itself. That is, I can find the number of bijections which satisfy $g(x)=x$ or $g^2(x)=x$ by trying to count cycles. But, I don't have any idea how to approach this one. Any help would be appreciated.
Hint: Figure out what cycles are possible in this permutation. Let $C$ be a cycle; then for any $x\in C$ with $g(x)=y$, we have $$ g^y(x)=x $$ This implies that $y$ is a multiple of the length of the cycle $C$. Put another way, in a cycle of length $\ell$, all the elements must be multiples of $\ell$. How many cycles of length $2$ are therefore possible? From before, this consists of a pair of numbers which are both multiples of $2$. How many cycles of length $3$ are therefore possible? From before, this consists of a triple of numbers which are all multiples of $3$, ordered in one of $2!$ ways. How many cycles of length $4$ are therefore possible? From before, this consists of a quadruple of numbers which are all multiples of $4$, ordered in one of $3!$ ways. And so on... Once you have all possible cycles (there aren't too many), it becomes a matter of counting the number of ways they can be combined without overlapping.
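Both the defining condition and the cycle criterion above can be brute-forced on a smaller set to make sure they agree; a sketch for $S=\{1,\dots,8\}$, where the same reasoning allows only fixed points and $2$-cycles inside $\{2,4,6,8\}$, giving $10$ bijections:

```python
from itertools import permutations

def ok_direct(g):
    # g is a tuple with g[x-1] = g(x); check g^{g(x)}(x) = x for all x
    for x in range(1, len(g) + 1):
        y = x
        for _ in range(g[x - 1]):
            y = g[y - 1]
        if y != x:
            return False
    return True

def ok_cycles(g):
    # criterion: every cycle of length L contains only multiples of L
    seen = set()
    for x in range(1, len(g) + 1):
        if x in seen:
            continue
        cycle, y = [], x
        while y not in seen:
            seen.add(y)
            cycle.append(y)
            y = g[y - 1]
        if any(v % len(cycle) for v in cycle):
            return False
    return True

n = 8
direct = {g for g in permutations(range(1, n + 1)) if ok_direct(g)}
via_cycles = {g for g in permutations(range(1, n + 1)) if ok_cycles(g)}
```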
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that $\dfrac{\phi (n)}{n}=\sum\limits_{d|n} \dfrac{\mu (d)}{d}$ This problem is from Ram Murty's book Problems in Analytic Number Theory. In his solutions it only says that this is an immediate result when combining the Möbius inversion formula: $$f(n)=\sum\limits_{d|n}g(d) \ \ \ \forall n\in \mathbb{N} \Leftrightarrow g(n)=\sum\limits_{d|n}\mu (d)f(n/d) \ \ \ \forall n\in \mathbb{N}$$ and Gauss's theorem: $$\sum\limits_{d|n}\phi (d)=n.$$ I can't get it right though... Here is one thing that I have tried thus far: Let $f(n)=\phi (n)$ in the Möbius inversion formula. We wish to show that $g(d)\stackrel{?}{=}\dfrac{\mu (d)}{d}$. We have that $$g(n)=\sum\limits_{d|n}\mu (d)f(n/d)=\sum\limits_{d|n}\mu (d)\phi (n/d).$$ From here I have tried a bunch of things which seemed unjustified and did not actually get me anywhere. Can someone give me a hint on how to solve this problem?
$$ {\rm Id}(n)=n=\sum_{d|n}\varphi(d)=(\varphi*1)(n) $$ Therefore using Möbius inversion formula, we have $$ \varphi(n)=({\rm Id}*\mu)(n)=\sum_{d|n}\mu(d)\frac{n}{d}=n\sum_{d|n}\frac{\mu(d)}{d} $$
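A brute-force confirmation of the identity over small $n$ (a sketch; $\varphi$ by gcd-counting, $\mu$ by trial division, and exact arithmetic via Fraction):

```python
from math import gcd
from fractions import Fraction

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    # Moebius function by trial division
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def rhs(n):
    # n * sum over divisors d of mu(d)/d
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return n * sum(Fraction(mu(d), d) for d in divisors)
```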
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove by induction that $3 \mid n^4-n^2 \forall n \in \mathbb{Z^+}, n \ge 2$. Proposition: $3 \mid n^4-n^2$ for all $n \in \mathbb{Z^+}, n \ge 2$ My attempt Lemma: $3 \mid (m-1)m(m+1)$ Proof. Suppose $m \in \mathbb{Z}$. By the QRT, we have $m=3q+r$ $\,\ni\, r \in \{0,1,2\}$, and $q=\lfloor{\frac{m}{3}}\rfloor$. We want to show that $(m-1)m(m+1)$ is divisible by 3. Clearly, if any one of the three factors in $(m-1)m(m+1)$ is divisible by 3, then the product must also be divisible by 3. $\cdot$ Case 1 ($r=0$): If $r=0$, then $m=3q$ is divisible by 3. $\cdot$ Case 2 ($r=1$): If $r=1$, then $m-1=(3q+1)-1=3q$ is divisible by 3. $\cdot$ Case 3 ($r=2$): If $r=2$, then $m+1=(3q+2)+1=(3q+3)=3(q+1)$ is divisible by 3. In either case, one of the three factors in $(m-1)m(m+1)$ is divisible by 3. Thus $3 \mid (m-1)m(m+1)$. Therefore, the product of any 3 consecutive integers is divisible by 3. Proof. Base case: Let $n=2$. Then $\exists k \in \mathbb{Z}$ such that $12=3k$, namely, $k=4$. Thus $3 \mid 12$ and hence the claim holds for the base case. Assume for positive integer $m \ge 2$ that $m^4-m^2=3k$ for some $k \in \mathbb{Z}$. We need only to show that this implies that $3 \mid (m+1)^4-(m+1)^2$. $(m+1)^4-(m+1)^2= \sum_{i=0}^{4} {4\choose i} m^i \cdot 1^{4-i}-(m+1)^2=(m^4-m^2)+4m^3+6m^2+2m=(m^4-m^2)+3(m^3+2m^2+m)+(m^3-m)$. By the inductive hypothesis, $3 \mid(m^4-m^2)$. And since $(m^3+2m^2+m) \in \mathbb{Z}$, it follows that $3 \mid 3(m^3+2m^2+m)$. Note that $(m^3-m)=m(m^2-1)=(m-1)m(m+1)$. By the Lemma, $3 \mid (m^3-m)$. Thus we have $3 \mid (m^4-m^2)$, $3 \mid 3(m^3+2m^2+m)$, and $3 \mid (m^3-m)$. To show that their sum is divisible by 3, let $k_1,k_2,k_3 \in \mathbb{Z} \,\ni\, m^4-m^2=3k_1, 3(m^3+2m^2+m)=3k_2$, and $(m^3-m)=3k_3$. Thus $3k_1+3k_2+3k_2=3(k_1+k_2+k_3)=m^4-m^2+3(m^3+2m^2+3m)+(m^3-m)$, where $(k_1+k_2+k_3) \in \mathbb{Z}$. Hence $3 \mid (m+1)^4-(m+1)^2$. Therefore, by induction, $3 \mid n^4-n^2, \forall n \in \mathbb{Z^+}, n \ge 2$.
You don't need induction: write $$n^4-n^2=n^2(n-1)(n+1),$$ and note that among the three consecutive integers $n-1$, $n$, $n+1$ one is always divisible by $3$. A one-line proof.
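A quick sanity check of both the factorization and the divisibility claim (a sketch):

```python
# n^4 - n^2 factors as n^2 (n-1)(n+1), a product involving
# three consecutive integers, one of which is divisible by 3
facts = all(n**4 - n**2 == n * n * (n - 1) * (n + 1) for n in range(2, 200))
div3 = all((n**4 - n**2) % 3 == 0 for n in range(2, 10000))
```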
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Functional derivative and arbitrary function I have a question regarding the definition of the functional derivative. Unfortunately, a lot of textbooks do not give a proper formal definition. Wikipedia gives the following definition \begin{align} \int \frac{\delta F}{\delta\rho}(x) \phi(x) \; dx &= \lim_{\varepsilon\to 0}\frac{F[\rho+\varepsilon \phi]-F[\rho]}{\varepsilon} \\ &= \left [ \frac{d}{d\varepsilon}F[\rho+\varepsilon \phi]\right ]_{\varepsilon=0}, \end{align} where $\phi$ is an arbitrary function, $M$ is a manifold of continuous functions $\rho$, and $F:M\to \mathbb{R}$. If $\phi$ is arbitrary, then how do I know the left integral exists? Are there no constraints on $\phi$, like it has to be integrable and in $C_c^{\infty}$?
* *First of all, don't be fooled by the integral on the right side of the formula reported by Wikipedia for the functional derivative: not all functional derivatives have that structure. More information on this can be found in the links given in the notes below. *An answer to your question. The constraint on $\phi$ is simply the fact that the functional $F$ should be defined on all points $\rho+\varepsilon \phi$ for a sufficiently small $\varepsilon\in [0,\varepsilon_0]$ for some $\varepsilon_0>0$ (including $\varepsilon_0=+\infty$): it is the structure of $F$ that implies the structure of the variation $\delta \rho=\phi$. More precisely, if $F$ is a functional defined on a (subset of a) topological vector space $X$, then $\rho+\varepsilon \phi\in X$ for all $\varepsilon$ belonging to a suitable neighborhood of $0\in\Bbb R$. And as said in the comments below, this also implies that $X$ can be only a topological manifold, i.e. a manifold that is locally isomorphic to a topological vector space. Notes * *For the definition of functional derivative, perhaps it would be useful to have a look at this MathOverflow Q&A, where some commonly encountered misunderstandings are corrected. *For an example of how $\phi$ is chosen, you could also have a look at this answer where, for formally the same functional, two different kinds of function spaces are used, depending on their characteristics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding a metric of constant negative curvature on cylinder over a torus ($\mathbb{S}^1 \times \mathbb{S}^1 \times \mathbb{R}$) I read that a counterexample showing that compactness is a necessary hypothesis in Preissman's theorem is the manifold $\mathbb{S}^1 \times \mathbb{S}^1 \times \mathbb{R}$, which admits a complete metric of constant negative sectional curvature. I realize that for this to be true, we must be able to see it as a quotient of $\mathbb{H}^3$, but I don't know why that's true. Can anyone shed some light on this example? What metric is this, explicitly?
Consider the upper half-space model of ${\mathbb H}^3$ whose elements are identified with pairs $(z,t), z\in {\mathbb C}, t>0$. Take the group $\Gamma$ of isometries of ${\mathbb H}^3$ generated by the Euclidean translations $$ a: z\mapsto z+1, b: z\mapsto z+i. $$ This group is isomorphic to ${\mathbb Z}^2$, it acts on ${\mathbb H}^3$ properly discontinuously, preserving its (topological) product decomposition ${\mathbb C}\times (0,\infty)$. Since ${\mathbb C}/\Gamma\cong T^2$, we obtain a diffeomorphism $$ M={\mathbb H}^3/\Gamma\to T^2\times (0,\infty). $$ Thus, $M$ is your hyperbolic manifold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4183972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Summation of the function If $f(x)=e^{x+1}-1$, find the sum of all values of $n$ that make $g(x)$ differentiable over $\mathbb{R}$, where $g(x)=100|f(x)|-\sum_{k=1}^n|f(x^k)|$ and $n \in \mathbb N$. I have no idea how to proceed, but I can figure out from desmos.com that $f(x)\ge0$ for $x\ge-1$. However, I could not proceed with $\sum_{k=1}^n|f(x^k)|$.
Here is a sequence of hints, in the form of statements to be proven by the reader: * *$f(x^k)$ is positive for $k$ even. *$f(x^k)$ crosses zero exactly once for $k$ odd, at $x=-1$. *Hence the derivative of $|f(x^k)|$ has a discontinuity only at $x=-1$ and only for $k$ odd. For $g$ to be differentiable at $x=-1$, the discontinuity in the derivative of $100 |f(x)|$ must exactly cancel that of the sum. Lemma: The above holds if and only if $$100 f'(-1) = \sum_{\text{k odd}}^n f(x^k)'|_{x=-1}$$ Proof: Exercise. Now to answer the question, find a formula for $f(x^k)'|_{x=-1}$ (hint: it's proportional to $k$), evaluate the sum on the right, and find which values of $n$ make it equal to $100 f'(-1)$. Note: There will be two such values of n!
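Carrying the hints through: $\frac{d}{dx}f(x^k)\big|_{x=-1} = k\,e^{x^k+1}x^{k-1}\big|_{x=-1} = k$ for odd $k$, and $f'(-1)=1$, so the lemma's condition becomes $100 = \sum_{\text{odd } k \le n} k$. A brute-force check in Python (the helper names, search range and difference-quotient step are my own choices):

```python
import math

def f(x):
    return math.exp(x + 1) - 1

def g(x, n):
    return 100 * abs(f(x)) - sum(abs(f(x ** k)) for k in range(1, n + 1))

# The lemma's condition: 100 * f'(-1) equals the sum over odd k <= n of
# d/dx f(x^k) at x = -1, i.e. 100 = 1 + 3 + 5 + ... (odd k up to n).
def odd_sum(n):
    return sum(k for k in range(1, n + 1) if k % 2 == 1)

valid_n = [n for n in range(1, 60) if odd_sum(n) == 100]

def deriv_gap(n, h=1e-6):
    # gap between the right and left difference quotients of g at x = -1
    left = (g(-1.0, n) - g(-1.0 - h, n)) / h
    right = (g(-1.0 + h, n) - g(-1.0, n)) / h
    return abs(right - left)
```

The difference-quotient check is only a numerical sanity test of the lemma, not a proof.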
{ "language": "en", "url": "https://math.stackexchange.com/questions/4184114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exercise 7, Chapter 5, Sheaves in Geometry and Logic Let $G$ be a group object in a topos $E$.Prove directly that $U: E^G \rightarrow E$ has a right adjoint. I know that $U: Sets^G\rightarrow Sets$ has a right adjoint but I don't know how to generalize it for an arbitrary topos...
$\require{AMScd}$ The idea is the same as for Sets. Define the right adjoint by $Y \mapsto Y^G$ with the $G$-action defined as the transpose of $G \times G \times Y^G \xrightarrow{m \times 1} G \times Y^G \xrightarrow{ev} Y.$ To see that this is an action, we need to show that the diagram commutes: \begin{CD} G \times G \times Y^G @>{1\times\mu}>> G \times Y^G \\ @V{m \times 1}VV @V{\mu}VV \\ G \times Y^G @>{\mu}>> Y^G. \end{CD} Let's transpose it, using the definition of $\mu$: this gives the equality $ev \circ( m \times 1) = ev \circ(1 \times \mu).$ Then we will use the fact that the multiplications commute with each other, and transpose back. Transposing and using this equality gives \begin{CD} G \times G \times G \times Y^G @>{1\times 1\times\mu}>> G \times G \times Y^G @>{m \times 1}>> G \times Y^G \\ @V{1 \times m \times 1}VV @. @V{ev}VV \\ G \times G \times Y^G @>{m \times 1}>> G \times Y^G @>{ev}>> Y. \end{CD} Since $(m \times 1) \circ(1 \times m\times1) = (m\times1) \circ (m \times 1\times1) $ and also $(m \times 1) \circ (1\times 1\times\mu) = (1\times\mu) \circ (m \times 1\times1),$ this is the same as the diagram \begin{CD} G \times G \times G \times Y^G @>{m \times 1\times 1}>> G \times G \times Y^G @>{m \times 1}>> G \times Y^G \\ @. @V{1 \times \mu}VV @V{ev}VV \\ @.G \times Y^G @>{ev}>> Y. \end{CD} But this commutes by the definition of $\mu$. To show adjointness, suppose we are given $f: UA \to Y$. Define $\bar{f}$ by $G \times A \xrightarrow{\mu} A \xrightarrow{f} Y.$ The transpose of this, $A \xrightarrow{g} Y^G,$ is to be the transpose along the adjunction we are constructing. E.g. to see that it's equivariant, consider \begin{CD} G \times A@>>> G \times Y^G \\ @VVV @VVV \\ A @>>> Y^G. \end{CD} Transposing this gives \begin{CD} G \times G \times A@>{1 \times 1 \times g}>> G \times G \times Y^G @>{m \times 1}>> G \times Y^G\\ @V{1\times \mu}VV @. @V{ev}VV \\ G \times A @>{\mu}>> A @>{f}>> Y. 
\end{CD} Quite like before, we now have $$ev \circ (m \times 1) \circ (1 \times 1 \times g) = ev \circ (1 \times g) \circ (m \times 1) = \bar{f} \circ (m \times 1) = f \circ \mu \circ (m \times 1) = f \circ \mu \circ (1 \times \mu),$$ so this commutes. To go back along the adjunction we are constructing, we evaluate, as in Sets, at $e$: $A \simeq 1 \times A \xrightarrow{e \times 1} G \times A \xrightarrow{1 \times g} G \times Y^G \xrightarrow{ev} Y$. The proof that these two assignments are mutually inverse is done in a similar fashion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4184298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation of motion including gear ratio A two-mass rotational system has the following form and is represented in the following structural diagram, where $\tau_e$, $\omega_1$ and $J_m$ are the motor torque, angular velocity and moment of inertia; $\tau_s$, $\tau_l$, $\omega_2$ and $J_d$ are the shaft torque, load torque, angular velocity and load moment of inertia; and $K_{md}$ is the shaft stiffness. Problem: how does one include the gear ratio $N=\frac{\omega_1}{\omega_2}$ in the equation of motion and in the block diagram, respectively? $$L=V-P=J_m \frac{\omega_1^2}{2}+J_d \frac{\omega_2^2}{2}-\frac{K_{md}(\phi_1-\phi_2)^2}{2}$$ Here $V$ is the kinetic and $P$ the potential energy, and this is the Lagrangian for the entire system. I don't understand how to insert the gear ratio here.
I don't have enough reputation to comment or downvote the existing (accepted) answer. Why I think the answer is different [motor]---)--[GB]---)--===shaft=====--)--[load] phi1 phi1' phi2 In the presence of a flexible shaft there are 3 angular positions. See the ASCII diagram above. The important point to note is that the definition of N is not $\omega_1/\omega_2$. It is $\omega_1 / \omega_1'$, and $\omega_1' \neq \omega_2$ when the shaft is in a twisted condition. The shaft can be in a twisted condition at various times while the system is operating. In fact the OP has asked: "I have a question right away. If N=1, then the term with the shaft stiffness is lost? This is very strange, how to explain it?" So I am posting my answer to the duplicate question asked by the OP at engineering.se. My answer at engineering.se Assuming that the gear box is on the left end of the shaft (i.e. no flexible shaft between motor and gearbox): * *The angular velocity on the left end of the gear box is $\omega_1$. *The angular velocity on the shaft side of the gear box is assumed to be $\omega_1' = \frac{\omega_1}{N}$. *The angular velocity on the right end of the shaft is $\omega_2$. So the torque on the shaft is $\pm K_m (\frac{\phi_1}{N} - \phi_2)$ (sign to be checked). *Because of the way I described the gearbox, $\omega_1' < \omega_1$, so the torque on the shaft, when acting on the motor through the gearbox, is scaled by $\frac{1}{N}$. This can be seen in the derivation below. *Since I have assumed that the shaft is directly connected to the load, the torque in the shaft is made available 1:1. This can also be seen in the derivation below.
(Below derivation to be verified independently by OP) $$ L = \frac{J_m \omega_1^2}{2} + \frac{J_d \omega_2^2}{2} + \frac{K_m (\frac{1}{N} \phi_1 - \phi_2)^2}{2} $$ $$ \frac{d}{dt} \frac{\partial L}{\partial \omega_1} = \frac{d}{dt} J_m \omega_1 = J_m \frac{d \omega_1}{dt} $$ $$ \frac{\partial L}{\partial \phi_1} = \frac{K_m}{\color{red}{N}} (\frac{1}{N} \phi_1 - \phi_2) $$ Similarly for the other body also (exercise left to you). $$ \frac{\partial L}{\partial \phi_2} = -K_m (\frac{1}{N} \phi_1 - \phi_2) $$ I have not considered the input torque. It can be added to this result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4184435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
if the Hessian matrix of a function has positive real eigenvalues, then the function is positive definite Can we say that if the Hessian matrix of a function has positive real eigenvalues, then the function is positive definite? Is it true? Would you recommend a document for me to read on this subject? For $V=\alpha x^2+\alpha y^2+ z^2$, I know this function is positive definite for $\alpha >0$. But I wonder if it would be correct to check this with its Hessian matrix, as I asked above?
There are simple counterexamples, such as $V(x,y)=x+x^2+y^2$, which is obviously not positive definite although its Hessian has eigenvalues $\lambda_1=\lambda_2=2$.
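A small numerical illustration of this counterexample (pure Python; the finite-difference step and names are my own choices):

```python
def V(x, y):
    return x + x * x + y * y

# Finite-difference Hessian of V at the origin: it is [[2, 0], [0, 2]],
# so both eigenvalues are 2 > 0.
h = 1e-4
V_xx = (V(h, 0) - 2 * V(0, 0) + V(-h, 0)) / h ** 2
V_yy = (V(0, h) - 2 * V(0, 0) + V(0, -h)) / h ** 2
V_xy = (V(h, h) - V(h, -h) - V(-h, h) + V(-h, -h)) / (4 * h ** 2)

# Yet V is not positive definite: it is negative just left of the origin.
counterexample_value = V(-0.5, 0.0)  # -0.5 + 0.25 = -0.25
```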
{ "language": "en", "url": "https://math.stackexchange.com/questions/4184628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
MLE estimation in Hardy-Weinberg There is a population of 3 kinds 1, 2, 3 occurring in Hardy-Weinberg proportions $\theta^2, 2\theta(1-\theta), 1-\theta$. We observe a sample of 3 individuals and obtain $X_1=1, X_2=2, X_3=1$. Find the MLE of $\theta$. My solution: I am calculating the likelihood using the observed sample, but the answer I am getting is $1/2$, while the correct answer is $5/6$, from $\frac{2n_1+n_2}{2n}$. Here $n_i$ equals the given values of $x_i$. This is the formula I am getting after solving the likelihood.
Where you had $\theta^2, 2\theta(1-\theta), 1-\theta,$ you need $\theta^2, 2\theta(1-\theta), (1-\theta)^2.$ The probability of observing $1,2,1$ is $\Big(\theta^2\Big)\cdot\Big(2\theta(1-\theta)\Big)\cdot\Big(\theta^2\Big).$ So the likelihood function is \begin{align} L(\theta) & = \Big(\theta^2\Big)\cdot\Big(2\theta(1-\theta)\Big)\cdot\Big(\theta^2\Big) \\[8pt] & = 2\theta^5(1-\theta) = 2\theta^5 - 2\theta^6. \end{align} So \begin{align} L'(\theta) & = 10\theta^4 - 12\theta^5 \\[8pt] & = 12\theta^4 \left( \frac 5 6 - \theta \right)\quad\begin{cases} >0 & \text{if } 0\le\theta<5/6, \\[4pt] =0 & \text{if } \theta = 5/6, \\[4pt] < 0 & \text{if } 5/6<\theta\le1. \end{cases} \end{align}
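A quick grid check of this likelihood (pure Python; the grid resolution is my choice):

```python
def L(theta):
    # likelihood of observing kinds 1, 2, 1 under the corrected model
    return 2 * theta ** 5 * (1 - theta)

grid = [i / 10000 for i in range(10001)]
best_theta = max(grid, key=L)  # lands at the grid point next to 5/6
```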
{ "language": "en", "url": "https://math.stackexchange.com/questions/4184812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the value of $α^3+β^3+γ^3+δ^3$ given that $α,β,γ,δ$ are roots of $x^4-3x+1=0$ Let $α,β,γ,δ$ be the roots (real or non-real) of the equation $x^4-3x+1=0$. Then find the value of $α^3+β^3+γ^3+δ^3$. I tried this question with $S_1=0, S_2=0, S_3=3, S_4=1$ (the elementary symmetric functions) and then used $S_1^3$ to find the required value, but I am not able to factorise it further and it seems like a dead end. Moreover, it is a very lengthy method, so can you suggest a more elegant way of approaching this question?
$$x^4-3x+1=0\implies x^3=3-\frac{1}{x}$$ This is satisfied by each of the roots, so $$\Sigma x^3=\Sigma3-\Sigma\frac{1}{x}$$ Can you finish?
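To check the finish numerically (this spoils the exercise), one can compute the four roots with a small Durand-Kerner iteration (pure Python; helper names and the iteration count are my own choices):

```python
def p(x):
    return x ** 4 - 3 * x + 1

def prod(vals):
    out = 1
    for v in vals:
        out *= v
    return out

# Durand-Kerner: simultaneous Newton-like iteration for all four roots,
# started from the customary points (0.4 + 0.9j)^k.
roots = [(0.4 + 0.9j) ** k for k in range(4)]
for _ in range(200):
    roots = [r - p(r) / prod(r - s for s in roots if s is not r) for r in roots]

sum_cubes = sum(r ** 3 for r in roots)
sum_recip = sum(1 / r for r in roots)
```

Here $\sum 3 = 12$ over the four roots, and $\sum \frac1x$ comes out as $\frac{e_3}{e_4} = 3$ by Vieta.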
{ "language": "en", "url": "https://math.stackexchange.com/questions/4185020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Why doesn't $\int\lfloor {x}\rfloor~dx=x\lfloor x\rfloor +C$? Why doesn't $$\int\lfloor {x}\rfloor~dx=x\lfloor x\rfloor +C?$$ When I tried integrating $\lfloor {x}\rfloor$ initially, I thought of the integral as representing the area beneath the graph and so was successful in finding its indefinite integral. However, when I tried thinking about it from a 'formulaic' perspective, ie without thinking about what the integral really meant, I've become confused: If we try integrating by parts, we seem to get $$\int\lfloor {x}\rfloor~dx=x\cdot\lfloor {x}\rfloor-\int x\cdot\frac{d}{dx}(\lfloor {x}\rfloor)~dx=x\lfloor {x}\rfloor+C$$ since I would think that $\frac{d}{dx}(\lfloor {x}\rfloor)=0$. Please can you explain why my result is wrong? I would guess that it has something to do with the derivative of the floor function being undefined at places where there is jump discontinuity, but I'm not sure.
$x\lfloor x\rfloor$ has a discontinuity at every integer, and cannot be an antiderivative. If you look at the successive discontinuities, they are of amplitude $0,1,2,3,4,\cdots$, and cumulated, $0,1,3,6,10,\cdots$. So we compensate with $$\int_0^x \lfloor t\rfloor\,dt=x\lfloor x\rfloor-\frac{\lfloor x\rfloor(\lfloor x\rfloor+1)}2.$$
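The compensated formula is easy to check against a Riemann sum (pure Python; the step size is my choice):

```python
import math

def F(x):
    # x*floor(x) - floor(x)*(floor(x)+1)/2, the compensated antiderivative
    n = math.floor(x)
    return x * n - n * (n + 1) / 2

def riemann(x, steps=100000):
    # midpoint Riemann sum of floor(t) over [0, x]
    dx = x / steps
    return sum(math.floor((i + 0.5) * dx) for i in range(steps)) * dx

# F is continuous across an integer, while x*floor(x) jumps there:
jump_F = abs(F(3 - 1e-9) - F(3))
jump_naive = abs((3 - 1e-9) * math.floor(3 - 1e-9) - 3 * math.floor(3))
```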
{ "language": "en", "url": "https://math.stackexchange.com/questions/4185184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
Find the largest number that $ n(n^2-1)(5n+2) $ is always divisible by? My Solution: $$ n(n^2-1)(5n+2) = (n-1)n(n+1)(5n+2) $$ * *This number is divisible by 6 (as at least one of 2 consecutive integers is divisible by 2, and one of 3 consecutive integers is divisible by 3). *$ 5n+2 \equiv 5n \equiv n \pmod 2 $, so $n$ and $5n+2$ have the same parity, and at least one of $n+1$ and $5n+2$ is divisible by 2. *$ n \equiv 5n \equiv 5n+4 \pmod 4$, so: if $ 2 \mid n+1$, then $n - 1$ or $n + 1$ is divisible by 4; if $ 2 \mid 5n+2$, then $n$ or $5n + 2$ is divisible by 4. The expression is divisible by 6 and contains two even factors, one of which is divisible by 4, so it is divisible by 24.
Your proof seems correct to all of us, as it appears in the comments. I would consider applying a method like this: $$\begin{align}f(n)&=n(n^2-1)(5n+2) \\&=n(n^2-1)(4n+n+2)\\ &=\underbrace{4n^2(n-1)(n+1)}_{\equiv ~0~(\text{mod}~~ 48)} \\ &+\underbrace{(n-1)n(n+1)(n+2)}_{\equiv ~0~(\text{mod}~ 24)}\end{align}$$ If $n=3$, then $5n+2$ is prime; if the largest number to which the function is always divisible were greater than $24$, the next factor would have to be $17$. But $f(2)$ is not divisible by $17$. Therefore, the largest number can only be $24$. Explanations: * *$24\mid(n-1)n(n+1)(n+2)$ because the product of $4$ consecutive positive integers is always divisible by $24$. Applying $$n=8k±m, ~0≤m≤4, m\in\mathbb Z$$ shows that $8\mid(n-1)n(n+1)(n+2)$, and we already know that $6\mid(n-1)n(n+1)(n+2)$. This means $24\mid(n-1)n(n+1)(n+2)$. *$48\mid 4n^2(n-1)(n+1)$ because $48\mid 4n^2(n-1)(n+1) \iff 12\mid (n-1)n^2(n+1)$. Examining the cases where $n$ is odd or even completes the proof.
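A brute-force confirmation (pure Python) that $24$ is not just a common divisor but the greatest one:

```python
from math import gcd

def f(n):
    return n * (n * n - 1) * (5 * n + 2)

# gcd of f(1), f(2), ..., f(199); note f(1) = 0 contributes nothing.
g = 0
for n in range(1, 200):
    g = gcd(g, f(n))
```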
{ "language": "en", "url": "https://math.stackexchange.com/questions/4185314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Maximal subset iff maximum size Let $X$ be a finite set with at least one element and $\eta\subseteq 2^X$ a collection of subsets of $X$ such that $$A\cup B\in\eta\text{ for every }A,B\in\eta$$ $$X=\bigcup_{A\in\eta}A$$ $$\text{For every pair of distinct }x,y\in X\text{ there exists }A\in\eta\text{ which contains one of them but not the other}$$ Now, for every $x\in X$ let $$\eta_x=\{\,A\in\eta\;:\;x\in A\,\}$$ The $\eta_x$ can be ordered either by size or by set inclusion. Is it true that $\eta_x$ is maximal (not a subset of any other $\eta_y$) if and only if $|\eta_x|=\max_{z\in X}|\eta_z|$ ? Of course, if $|\eta_x|$ is the maximum then $\eta_x$ is maximal, but I'm having trouble proving the converse. All I know is that 1) If $A\in\eta$ has maximum size then $|A|=|X|-1$ and 2) Every $\eta_x$ contains one such set of maximum size. For an attempt at a proof, if $|\eta_x|$ is not the maximum then 1) There exists $y\in X$ with $|\eta_x|<|\eta_y|$ and 2) One can choose an $A\in\eta_x$ of maximum size and take the only element $z\in X\setminus A$. The proof must continue through one of these two because there aren't any other "interesting" points that we can focus on, but I can't finish the proof. Thanks!
For a simple counterexample, let $X=\{a,b,c\}$ and let $\eta$ be the collection of subsets $A\subseteq X$ such that if $c\in A$ then $b\in A$. Then $\eta_a$ and $\eta_b$ are both maximal, with $$\eta_a=\{\{a\},\{a,b\},\{a,b,c\}\}$$ having 3 elements and $$\eta_b=\{\{b\},\{a,b\},\{b,c\},\{a,b,c\}\}$$ having 4 elements.
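This counterexample is small enough to verify exhaustively (pure Python; helper names are mine):

```python
from itertools import combinations

X = {'a', 'b', 'c'}

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# eta: all subsets A of X such that c in A implies b in A.
eta = [A for A in powerset(X) if 'c' not in A or 'b' in A]

# The three axioms:
union_closed = all((A | B) in eta for A in eta for B in eta)
covers_X = set().union(*eta) == X
separates = all(
    any((x in A) != (y in A) for A in eta)
    for x in X for y in X if x != y
)

eta_x = {x: [A for A in eta if x in A] for x in X}
sizes = {x: len(As) for x, As in eta_x.items()}

def is_maximal(x):
    return all(y == x or not set(eta_x[x]).issubset(eta_x[y]) for y in X)
```

So all three axioms hold, $\eta_a$ and $\eta_b$ are both maximal under inclusion, yet $|\eta_a| = 3 < 4 = |\eta_b|$.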
{ "language": "en", "url": "https://math.stackexchange.com/questions/4185682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is the local trivialization map for a quotient vector bundle? Let $\eta = (E, \pi, M)$ be a vector bundle of rank $k$ with projection map $\pi \colon E \to M$, and $\xi=(E', \pi', M)$ be a subbundle of $\eta$ that has rank $k'$, with projection map $\pi'= \pi|E'\colon E' \to M$. (The definition of a vector bundle is at https://en.wikipedia.org/wiki/Vector_bundle) Define the quotient bundle $\eta/\xi = \coprod_{p \in M} (E_p /E'_p)$. I know $\pi(\eta/\xi)\colon \coprod (E_p /E'_p) \to M$ is the projection map of $\eta/\xi$, but I don't know what the local trivialization map for $\eta/\xi$ is. I know that $\phi':(\pi|E')^{-1}(U) \to U × R^{k'}$, where $\phi'(p,e)=(p,\pi_{R^{k'}}(\phi'(p,e)))= (p, \nu_1,..., \nu_{k'})$, is a local trivialization of $E'⊂ E$.
You have to construct a specific type of local trivialization for $E$ in order to get a local trivialization of the quotient bundle. This is easier to understand in the language of local frames (i.e. the sections given by preimages of the basis elements under a local trivialization). In these terms, you have to start with a local frame for the subbundle $E'$ and then extend it to a local frame of the bundle $E$. (Given $x\in M$, choose a basis for $E'_x$ and extend it to a basis of $E_x$. Then extend the first vectors to local smooth sections of $E'$ and the remaining ones to local smooth sections of $E$. On a sufficiently small neighborhood of $x$, this defines a frame as required.) Converting this to a local trivialization of $E'$ as a subbundle of $E$, i.e. a trivialization $\phi:\pi^{-1}(U)\to U\times \mathbb R^k$ which restricts to a trivialization $(\pi|_{E'})^{-1}(U)\to U\times\mathbb R^{k'}$, and passing to quotients in each fiber, one obtains a local trivialization of $E/E'$ as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4185821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability of $0\leq X\leq Y$ for two Gaussian random variables We are given two independent zero-mean Gaussian random variables $X\sim N(0, \sigma_x^2), Y\sim N(0, \sigma_y^2)$. Compute $Pr(X-Y\leq 0 \cap X\geq 0)$. Here is what I did so far: Denote by $\mathbb{1}(x)$ the function (depending also on $y$) which is $1$ if $0\leq x \leq y$ and $0$ otherwise. Moreover, let $F(x)$ be the CDF of $X$ and $G(y)$ the CDF of $Y$. $Pr(X-Y \leq 0 \cap X\geq 0)=Pr(0\leq X \leq Y)=\int \int \mathbb{1}(x) dF(x)dG(y)=\int \int_0^y F'(x) dx dG(y)=\int \frac{1}{2} erf(\frac{y}{\sqrt{2\sigma_x^2}})dG(y) $ But now I don't know which bounds to apply to the remaining integral: $0$ to $\infty$? EDIT: Please note that $X$ and $Y$ have different variances.
As $X$ and $Y$ are independent, $ \displaystyle f(x,y) = \frac{1}{2 \pi \sigma_x \sigma_y} \ e^{- \left(\dfrac{x^2}{2\sigma_x^2} + \dfrac{y^2}{2\sigma_y^2}\right)}$ $X - Y \leq 0 \cap X \geq 0 \implies 0 \leq X \leq Y$, Use change of variable, $x = r \sqrt{2} \ \sigma_x \cos\theta, y = r \sqrt{2} \ \sigma_y \sin\theta$ Jacobian $|J| = 2 r \sigma_x \sigma_y$ $Y \geq X \geq 0 \implies \dfrac{\sigma_x}{\sigma_y} \leq \tan \theta$. So, $ \ \arctan \left(\dfrac{\sigma_x}{\sigma_y}\right) \leq \theta \leq \dfrac{\pi}{2}$. $0 \leq r \leq \infty$ $ \displaystyle P(0 \leq X \leq Y) = \int_{\arctan(\sigma_x / \sigma_y)}^{\pi/2} \int_0^{\infty} |J| \ f(r,\theta) \ dr \ d\theta$ Can you take it from here?
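Taking it from there: $\int_0^{\infty} \frac{r}{\pi} e^{-r^2}\,dr = \frac{1}{2\pi}$, so the answer should be (unless I have slipped somewhere) $$P(0 \leq X \leq Y) = \frac{1}{2\pi}\left(\frac{\pi}{2} - \arctan\frac{\sigma_x}{\sigma_y}\right).$$ A seeded Monte Carlo sanity check (stdlib only; sample size, seed and names are my own choices):

```python
import math
import random

def closed_form(sx, sy):
    # (pi/2 - arctan(sx/sy)) / (2*pi), the candidate closed form above
    return (math.pi / 2 - math.atan(sx / sy)) / (2 * math.pi)

def monte_carlo(sx, sy, n=200000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.gauss(0, sx), rng.gauss(0, sy)
        if 0 <= x <= y:
            hits += 1
    return hits / n
```

For $\sigma_x=\sigma_y$ this gives $1/8$, which also follows from a direct symmetry argument.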
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Chern class of reflexive extension of sheaf I have the following question. Let $U\subset X$ be an open subset of $X$ such that the complement $X\setminus U$ has codimension $\ge2$ in $X$. Suppose $L$ is a line bundle on $U$ such that $c_1(L)^2=0$. Now let $j:U\to X$ be the inclusion map and let $L'=j_*L$ be the extension of $L$ as a reflexive sheaf over $X$. Is it true that $c_1(L')^2=0$? Any suggestions/comments are appreciated, thanks!
The answer is no. For instance, let $X \subset \mathbb{P}^3$ be a smooth quintic surface, let $Z \subset X$ be the intersection of $X$ with a general line (so, this is a finite scheme of length 5), let $U = X \setminus Z$, and let $L$ be the restriction of $\mathcal{O}_{\mathbb{P}^3}(1)$. Then $c_1(L)^2 = 0$ but $c_1(L')^2 = 5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show $A^k$ doesn't converge to $0$ Let $A \in \mathbb C^{n \times n}$. Let $r=\max\{\lvert \lambda \rvert : \lambda \in \mathbb C \text{ is an eigenvalue of } A\}$. If $r\geq 1$, show that $A^k$ doesn't converge to the zero matrix as $k\to \infty$. Here is the proof I wrote: $A=P^{-1}JP$ with $P$ invertible and $J$ in Jordan normal form. Let $\lambda_m$ be an eigenvalue with $\lvert \lambda_m \rvert=r\geq 1$. We have that $A^k=P^{-1}J^kP$. Then I show by induction that $J^k$ has a coefficient $A_{ij}=a\lambda_m^k+b$ where $a, b\in \mathbb C$ (they do not matter). Now, since $\lvert \lambda_m \rvert \geq 1$, $A_{ij}$ does not tend to $0$ as $k$ goes to $+\infty$, which means that $A^k$ does not converge to the zero matrix. Is it correct?
This is almost correct, but you don't need the normal form. You have an eigenvector $v_m$ with $Av_m=\lambda_m v_m$, so $A^k v_m = \lambda^k_m v_m$, hence $\|A^k v_m\| = |\lambda_m|^k \|v_m\| \ge \|v_m\| > 0$, which rules out $A^k \to 0$. This sidesteps the multiplication of Jordan matrices; the induction on the entries of $J^k$ is the weakest spot in your proof.
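A concrete instance of the eigenvector argument, using the Jordan block $A = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}$ with spectral radius $r = 1$ (pure Python; names are mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]

# A v = v for the eigenvector v = (1, 0), so A^k v = v for every k.
v = [1, 0]
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

# Indeed A^k = [[1, k], [0, 1]] does not tend to the zero matrix.
P = [[1, 0], [0, 1]]
for _ in range(50):
    P = matmul(P, A)
```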
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Understanding Kolmogorov's inequality for submartingales I'm reading the Ross & Peköz book called "Second Course in Probability Theory" and they mention a version of Kolmogorov's inequality for submartingales: Suppose $Z_n,$ $n ≥ 1$, is a nonnegative submartingale; then for $a > 0$, $P(\max\{Z_1,...,Z_n\} ≥ a) ≤ E[Z_n]/a$. They then proceed to name a new variable and use the Markov inequality. Proof: Let $N$ be the smallest $i$, $i ≤ n$, such that $Z_i ≥ a$, and let it equal $n$ if $Z_i < a$ for all $i = 1, . . . , n$. Then $P(\max\{Z_1,...,Z_n\} ≥ a) = P(Z_N ≥ a) ≤ E[Z_N]/a$ (by Markov's inequality) $≤ E[Z_n]/a$ (since $N ≤ n$). But I can't see why the equality $P(\max\{Z_1,...,Z_n\} ≥ a) = P(Z_N ≥ a)$ is true, since the monotonicity of the $Z_n$ can be anything. Any help is appreciated, thanks.
Let $E=\left\{\max \{Z_1,\ldots,Z_n\}\ge a\right\}=\{\exists i\in[n] : Z_i\ge a\}$ and $F=\{Z_N\ge a\}$. * *If there exists $i$ such that $Z_i\ge a$, we have by construction $Z_N\ge a$. So $E\subseteq F$. *If for all $i$, $Z_i<a$, then in particular $Z_N<a$. So $\overline{E}\subseteq \overline{F}$. It follows that $E=F$, so $P(E)=P(F)$.
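Since $E=F$ is a pathwise statement, it can be demonstrated path by path on a simulated nonnegative submartingale, for instance $Z_i = |S_i|$ for a simple random walk $S_i$ (stdlib only; the seed, threshold and horizon are my own choices):

```python
import random

def sample_path(rng, n):
    # Z_i = |S_i| for a simple random walk S_i; |martingale| is a submartingale
    walk, Z = 0, []
    for _ in range(n):
        walk += rng.choice([-1, 1])
        Z.append(abs(walk))
    return Z

rng = random.Random(1)
a, n = 3.0, 20
pairs = []
for _ in range(500):
    Z = sample_path(rng, n)
    # N: smallest i (1-based) with Z_i >= a, and n if there is none
    N = next((i for i, z in enumerate(Z, start=1) if z >= a), n)
    pairs.append((max(Z) >= a, Z[N - 1] >= a))

all_match = all(e == f for e, f in pairs)
num_crossings = sum(e for e, _ in pairs)
```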
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$a \geq b \geq c \geq 0$ and $a + b + c \leq 1$. Prove that $a^2 + 3b^2 + 5c^2 \leq 1$. Nonnegative numbers $a,b,c$ satisfy $a \geq b \geq c$ and $a + b + c \leq 1$. Let $f(a, b, c) = a^2 + 3b^2 + 5c^2$. Prove that $f(a, b, c) \leq 1$. One observation is that the bound is met: $f(1, 0, 0) = f\left(\frac{1}{2}, \frac{1}{2}, 0\right) = f\left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right) = 1$. Another observation is that clearly $a + b + c = 1$ at a maximum of $f$, since increasing any of $a,b,c$ increases $f$. So I'm just going to assume that $a + b + c = 1$ from now on. My progress so far is that using Lagrange multipliers, you can see that $f$ is minimised subject to the constraint $a + b + c = 1$ at the point $\left(\frac{15}{23}, \frac{5}{23}, \frac{3}{23}\right)$. And since there are no other minima, it must be increasing as you choose $(a, b, c)$ away from this point. Setting $a = 1 - b - c$, we just need to optimise in $b, c$. The optimisation is bounded by the triangle formed from the lines $$c = 0, b = \frac{1}{2} - \frac{1}{2}c, \textrm{ and } c = b.$$ As $f$ is increasing as you move away from the minimum, the optimisation occurs at one of the corners of this triangle, and these give the three solutions that I gave at the start. I think my claim that "it must be increasing as you choose $(a, b, c)$ away from this point" is a bit vague and I'm not sure how to formulate it properly. Hopefully there is a clearer method of proving this. I found the puzzle here in case it's of interest.
Suppose we are not at one of the three vertices of the triangle, and note that the sides of the triangle correspond to $c=0$, $b=c$ and $a=b$. This means we must have at least one of the following: * *$a>b$ and $c>0$ *$a>b$ and $b>c$ *$b>c$ and $c>0$ In all of these cases we necessarily have $a<1$. In the first case, choose $\delta>0$ sufficiently small. It's easy to check that $$f(a+2\delta,b-\delta,c-\delta)+f(a-2\delta,b+\delta,c+\delta)>2f(a,b,c),$$ so one of these two modifications increases the function (while remaining feasible) and so $f(a,b,c)$ is not maximal. In the other two cases you can do the same by considering $(a\pm\delta,b\mp\delta,c)$ and $(a\pm\delta,b\pm\delta,c\mp2\delta)$ respectively.
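Not a proof, but a grid search over the constrained region is consistent with the claim: the maximum is $1$, attained at the three vertices (pure Python; the grid resolution is my choice):

```python
def f(a, b, c):
    return a * a + 3 * b * b + 5 * c * c

N = 60
# enumerate grid points with a >= b >= c >= 0 and a + b + c <= 1
best = max(
    f(i / N, j / N, k / N)
    for i in range(N + 1)
    for j in range(i + 1)
    for k in range(j + 1)
    if i + j + k <= N
)
```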
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
What is the minimum value of $8 \cos^2 x + 18 \sec^2 x$? In my view the answer should be $26$. But when we apply the AM-GM inequality it gives $24$ as the least value, even though, as per the graph, $24$ is never attained. What I think is that in AM-GM, equality requires $8 \cos^2 x = 18 \sec^2 x$, which gives $\cos^2 x > 1$, which is not possible, and because of this AM-GM is giving a wrong minimum value. If we had $18 \cos^2 x + 8 \sec^2 x$, then AM-GM would have worked and $24$ would be the right answer, since $18 \cos^2 x = 8 \sec^2 x$ gives $\cos^2 x < 1$, which is possible. Is this reasoning correct?
No need for AM-GM. Differentiate wrt $x$ and set $f'(x)=0$: $$-16\sin x\cdot \cos x + 36\sec^2 x\cdot \tan x=0$$ $$4\cos x\sin x=\frac{9\sin x}{\cos^3 x}$$ If $\sin x$ is non-zero, then: $$\cos^4 x=\frac{9}{4}\implies \cos^2 x=\frac{3}{2}>1\implies \text{no solution}$$ Hence $\sin x=0\implies \cos x=±1$, and hence $\cos^2 x=1$. Substitute this into your original expression and you get $8+18=26.$ NOTE: You can confirm that this is a minimum by evaluating $f''(x).$
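Equivalently, with $u = \cos^2 x \in (0,1]$ the expression is $g(u) = 8u + 18/u$, and $g'(u) = 8 - 18/u^2 < 0$ on $(0,1]$, so the minimum sits at $u = 1$. A numerical confirmation, which also shows that the AM-GM equality condition never comes close to holding (pure Python; the grid is my choice):

```python
import math

def f(x):
    c2 = math.cos(x) ** 2
    return 8 * c2 + 18 / c2

# grid over (-pi, pi), skipping points too close to the poles of sec^2
xs = [k / 1000 for k in range(-3141, 3142)]
xs = [x for x in xs if abs(math.cos(x)) > 1e-6]
fmin = min(f(x) for x in xs)

# AM-GM equality would need 8*cos^2 x = 18*sec^2 x; the gap never closes:
gap = min(abs(8 * math.cos(x) ** 2 - 18 / math.cos(x) ** 2) for x in xs)
```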
{ "language": "en", "url": "https://math.stackexchange.com/questions/4186848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Exact value for the continued fraction of $\tiny 1+\cfrac{1}{3+\cfrac{3}{5+\cfrac{5}{7+\cfrac{7}{9...}}}}$? Does anyone know the exact value for the continued fraction of $$1+\cfrac{1}{3+\cfrac{3}{5+\cfrac{5}{7+\cfrac{7}{9+\ddots}}}}?$$ I already know that $$1+\cfrac{1}{3+\cfrac{1}{5+\cfrac{1}{7+\cfrac{1}{9\ddots}}}}=\frac{e^2+1}{e^2-1},$$ but I only figured that out by typing the decimal approximation into google of the first few terms of the continued fraction (before I knew the exact value) which took me to a math paper saying that $\frac{e^2+1}{e^2-1}$ roughly equals the decimal approximation I typed in. I then typed in the continued fraction of $\frac{e^2+1}{e^2-1}$ into wolfram alpha and it spat out $$1+\cfrac{1}{3+\cfrac{1}{5+\cfrac{1}{7+\cfrac{1}{9\ddots}}}}.$$ I have no idea how to solve these so please don't downvote, I'm just doing this in case it's useful to someone one day, and out of curiosity of course.
added Comments point out that this post did the wrong continued fraction. For the correct one, use $a=-1$ not $1$. Then follow the same Satz $2$. The result is $$ \frac{2\;{}_2F_1(-\frac12;1;\frac12)}{{}_1F_1(\frac12;2;\frac12)} =\frac{I_0(\frac14)+I_1(\frac14)}{I_0(\frac14)-I_1(\frac14)} \approx 1.2831923 . $$ original post Here is the reference for everything on continued fractions (as of 1913): Perron, Oskar, Die Lehre von den Kettenbrüchen., Leipzig - Berlin: B. G. Teubner. xiii, 520 S. $8^\circ$ (1913). ZBL43.0283.04. Section 81, Satz 2 evaluates $$ c + \frac{a+b}{\displaystyle c+d + \frac{a+2b}{\displaystyle c+2d+\frac{a+3b}{\displaystyle c+3d+\ddots}}} $$ So we need $a=1,b=2,c=1,d=2$. Value of the continued fraction is $$ \frac{2\;{}_1F_1(\frac12, 1, \frac12)}{{}_1F_1(\frac32, 2, \frac12)} \approx 1.779306397 $$ This can be written $$ \frac{2e^{1/4} I_0(\frac14)} {e^{1/4} I_0(\frac14)+e^{1/4} I_1(\frac14)} = \frac{2}{\displaystyle 1+\frac{I_1(\frac14)}{I_0(\frac14)}} $$ in terms of Bessel functions.
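Both sides of the corrected formula are easy to evaluate numerically (pure Python; the series and truncation depths are my own choices):

```python
import math

def bessel_I(nu, x, terms=40):
    # I_nu(x) = sum_{k>=0} (x/2)^(2k+nu) / (k! (k+nu)!), for integer nu >= 0
    return sum((x / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def cf(depth):
    # truncation of 1 + 1/(3 + 3/(5 + 5/(7 + 7/(9 + ...))))
    tail = 0.0
    for k in range(depth, 0, -1):
        tail = (2 * k - 1) / (2 * k + 1 + tail)
    return 1 + tail

i0, i1 = bessel_I(0, 0.25), bessel_I(1, 0.25)
ratio = (i0 + i1) / (i0 - i1)
value = cf(100)
```

Both should come out near the quoted $1.2831923$.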
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
When is $\sqrt{z^2} = -z$ for a complex number $z$? One of my problems requires determining for which $z \in \mathbb{C}$ is $\sqrt{z^2} = z$ true, and for which $\sqrt{z^2} = -z$. The former can be found by noting that if $z = \left|z\right|e^{i\varphi_z}$ with $\varphi_z \in (-\pi, \pi]$, then $\sqrt{z} = \sqrt{\left|z\right|}e^{i\varphi_z/2}$. Thus only if $\varphi_{z^2} = 2\varphi_z \in (-\pi, \pi] \Longleftrightarrow \varphi _z \in \left(\frac{-\pi}{2}, \frac{\pi}{2}\right]$ does $\sqrt{z^2} = z$ hold. But how can we find with a similar reasoning when $\sqrt{z^2} = -z$? Answer in my reading material is that as a complex number is either nonnegative or negative, it follows that if $\sqrt{z^2} = z$ for $\varphi _z \in \left(\frac{-\pi}{2}, \frac{\pi}{2}\right]$, then necessarily $\sqrt{z^2} = -z$ for $\varphi _z \in \left(\frac{\pi}{2}, \frac{3\pi}{2}\right]$. However I don't quite find this as a satisfactory answer as firstly, it is not a direct reasoning with the argument of a complex number and secondly, the requirement $\varphi \in \left(-\pi, \pi\right]$ is not satisfied. On the other hand I haven't come up with anything useful for determining the required bounds for $\varphi_z$. So how should the question of determining where $\varphi_z$ needs to live, in order for $\sqrt{z^2} = -z$ be satisfied, be formed?
Every non-zero complex number $\DeclareMathOperator{\Arg}{Arg}w$ has a unique representation as $w=r\exp(i\theta)$ if we require that $r>0$ and $\theta\in(-\pi,\pi]$. Then we can define $\sqrt{w}$ as $\sqrt{r}\exp(i\theta/2)$, where $\sqrt{r}$ denotes the positive square root of $r$. Suppose that $z$ is a complex number such that $z=r\exp(i\theta)$, with $r>0$ and $\theta\in(-\pi,\pi]$. Then, $z^2=r^2\exp(i(2\theta))$. There are three possible cases we must consider: * *If $2\theta$ is the principal argument of $z^2$ (that is, if $2\theta\in(-\pi,\pi]$), then $\sqrt{z^2}=r\exp(i\theta)=z$. So $\sqrt{z^2}=z$ if $\theta\in(-\pi/2,\pi/2]$. Actually, $\sqrt{z^2}=z$ if and only if $\theta\in(-\pi/2,\pi/2]$, but that remains to be proven. *If $2\theta\in(-2\pi,-\pi]$, then the principal argument of $z^2$ is $2\theta+2\pi$. Hence, $\sqrt{z^2}=\sqrt{r^2\exp(i(2\theta+2\pi))}=r\exp(i(\theta+\pi))=-r\exp(i\theta)=-z$. So $\sqrt{z^2}=-z$ if $\theta\in(-\pi,-\pi/2]$. *If $2\theta\in(\pi,2\pi]$, then the principal argument of $z^2$ is $2\theta-2\pi$, and so $\sqrt{z^2}=\sqrt{r^2\exp(i(2\theta-2\pi))}=r\exp(i(\theta-\pi))=-r\exp(i\theta)=-z$. So $\sqrt{z^2}=-z$ if $\theta\in(\pi/2,\pi]$. In summary, * *$\sqrt{z^2}=z$ if and only if $z=0$ or has a principal argument $\theta\in(-\pi/2,\pi/2]$. *$\sqrt{z^2}=-z$ if and only if $z=0$ or has a principal argument $\theta\in(-\pi,-\pi/2]\cup(\pi/2,\pi]$. Warning: while this procedure does define a single-valued square root function in the complex plane, this comes at a cost: $\sqrt{z}$ is discontinuous along the negative real axis, and in order to define $\sqrt{z}$, we had to make an arbitrary choice about the "principal" argument of $z$. Moreover, the radical rule $\sqrt{z}\sqrt{w}=\sqrt{zw}$ is true if and only if $\Arg(z)+\Arg(w)=\Arg(zw)$. On the plus side, this function does define $\sqrt{-1}=i$ rather than $\sqrt{-1}=-i$, and our choice of principal square root is consistent with that for nonnegative reals.
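This matches the conventions of floating-point principal square roots, e.g. Python's `cmath.sqrt` (branch cut on the negative real axis, $\operatorname{Arg}$ in $(-\pi,\pi]$). The sample angles below are my own choices, taken strictly inside each region to stay clear of boundary roundoff:

```python
import cmath

r = 2.0
inside = [-1.2, -0.3, 0.0, 0.4, 1.3]   # theta in (-pi/2, pi/2): sqrt(z^2) = z
outside = [-3.0, -2.0, 2.0, 3.0]       # pi/2 < |theta| < pi:   sqrt(z^2) = -z

inside_ok = all(
    abs(cmath.sqrt(z * z) - z) < 1e-9
    for z in (r * cmath.exp(1j * t) for t in inside)
)
outside_ok = all(
    abs(cmath.sqrt(z * z) + z) < 1e-9
    for z in (r * cmath.exp(1j * t) for t in outside)
)
```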
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Markov chains and an integral equation I'm struggling with solving the following problem numerically: Given $\sigma>0, X_0 \in (0,1)$ and a Markov chain defined by $X_{i+1}=X_{i}+N(0,\sigma^2) \ \forall i \in \mathbb{Z^{0+}}$. What is the probability that $min(\{i : X_i < 0\}) < min(\{i : X_i > 1\})$? In other words, what is the probability this Markov chain hits $0$ before it hits $1$? I've managed to reduce the problem to solving the following integral equation: Define $E$ as the event $min(\{i : X_i < 0\}) < min(\{i : X_i > 1\})$. Write $$f(x) = \mathbb{P}(E | X_0 = x) $$ then we can say that $$f(x) = \int_{-\infty}^{\infty}\mathbb{P}(E|X_0=x,X_1=y)P(X_1=y) dy \\ = \int_{-\infty}^{0}1 \times P(X_1=y) dy + \int_{0}^{1} f(y) \times P(X_1=y) dy + \int_{1}^{\infty}0 \times P(X_1=y) dy \\ = \Phi(\frac{-x}{\sigma}) + \frac{1}{\sqrt{2\pi\sigma^2}}\int_{0}^{1} f(y)\exp{(-\frac{(y-x)^2}{2\sigma^2})} dy$$ So we have a Fredholm equation of the second kind. Unfortunately, I've been trying to get Mathematica to solve this numerically (for a given $\sigma$) but I cannot obtain any reasonable numerical solutions. (I am expecting a decreasing function, with $f(0)=1$ and $f(1)=0$.) So I have three questions: * *Is my logic up to the point of the integral equation correct? *Are there any tricks I'm missing that would simplify this equation? *How can I solve this numerically?
One numerical approach goes like this. I like this kind of method because it is giving an exact solution to a related finite dimensional problem. Given an integer $n \geq 1$, consider a finite state Markov chain on $n+2$ states, which we identify with $(-\infty,0),[0,1/n),[1/n,2/n),\dots,[(n-1)/n,1),[1,\infty)$. We'll zero index the states. We treat state $0$ (identified with $(-\infty,0)$) and state $n+1$ (identified with $[1,\infty)$) as absorbing states. Starting from one of the other states $i$, define the probability to go from $i$ to $j$ to be the exact probability that the original process would go from the center of the $i$th interval to anywhere in the $j$th interval. Then the transition probabilities are given as \begin{align} p_{i,j} & =F \left ( \frac{j}{n} - \frac{2i-1}{2n} \right ) - F \left ( \frac{j-1}{n} - \frac{2i-1}{2n} \right ) \quad i,j=1,2,\dots,n \\ p_{i,0} & = F \left ( -\frac{2i-1}{2n} \right ) \quad i=1,2,\dots,n \\ p_{i,n+1} & = 1-F \left ( 1-\frac{2i-1}{2n} \right ) \quad i=1,2,\dots,n \end{align} where $F(x)=\Phi(x/\sigma)$. Finally, consider $u_i$ to be the probability to hit state $0$ before state $n+1$ starting from state $i$, and assemble the $p_{i,j}$ into a $(n+2) \times (n+2)$ matrix $P$. Then the desired system of equations reads \begin{align} (Pu)_i - u_i & = 0 \quad i=1,2,\dots,n \\ u_0 & = 1 \\ u_{n+1} & = 0. \end{align} You can then construct an approximate solution to the original problem by e.g. linear interpolation. Interestingly, these numerics show a discontinuity at the boundaries, which actually makes sense: no matter how close you get to the boundary, there is a chance, however small, that you will instantly jump past the other boundary in the very next step. This goes to zero as $\sigma \to 0$ of course, and is already very small as soon as $\sigma$ is say $1/3$, but still, it is there as long as $\sigma>0$.
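This scheme is straightforward to implement; here is a NumPy sketch (function and variable names are mine), with $F(x)=\Phi(x/\sigma)$ computed from `math.erf`:

```python
import math
import numpy as np

def hit_zero_first(sigma, n=100):
    """Probability of hitting (-inf, 0) before [1, inf) for the walk
    X_{k+1} = X_k + N(0, sigma^2), via the n-cell discretization above.
    Returns the cell centers and the probability vector u."""
    F = lambda x: 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    centers = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)   # midpoints of [(i-1)/n, i/n)
    P = np.empty((n, n))                                      # interior transition matrix
    for i in range(n):
        for j in range(n):
            P[i, j] = F((j + 1) / n - centers[i]) - F(j / n - centers[i])
    b = np.array([F(-c) for c in centers])       # probability of jumping straight below 0
    u = np.linalg.solve(np.eye(n) - P, b)        # u = b + P u
    return centers, u

centers, u = hit_zero_first(sigma=0.3)
```

For $\sigma=0.3$ the computed $u$ is strictly decreasing, satisfies the symmetry $f(x)+f(1-x)=1$ of the Gaussian steps, and starts strictly below $1$, reflecting the boundary discontinuity mentioned above.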
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Need help with a tricky inequality problem I need help with this inequality: Given that $y \geq 1$ and $x > 0$ and knowing that $y \geq 4 x \ln(x)$, show that this implies $y \geq x \ln(2y)$. I saw this problem while reading a theoretical machine learning paper, which says the inequality follows, but I have attempted many things, including working directly from the assumption and working backwards. Plotting on Desmos, the inequality does seem to hold for the values of $x, y$ listed. It may also be important to note that $y$ is a sample size (so a natural number). Thank you!
Split into two cases according to whether $x^4 \ge 2y$. If $x^4 \ge 2y$, then $x^4 \ge 2$ (since $y \ge 1$), so $x > 1$ and $\ln x > 0$. Hence $$x\ln(2y) \le x\ln(x^4) = 4x\ln(x) \le y,$$ and we are done. If $x^4 < 2y$, then $x < (2y)^{1/4}$, and since $2y \ge 2$ we have $\ln(2y) > 0$, so $x\ln(2y) < (2y)^{1/4}\ln(2y)$. Writing $t = 2y \ge 2$, it therefore suffices to show $t^{1/4}\ln t \le t/2$, i.e. $\ln t \le t^{3/4}/2$ for $t \ge 2$. The function $h(t) = t^{3/4}/2 - \ln t$ satisfies $h(2) = 2^{3/4}/2 - \ln 2 > 0$ and attains its minimum at $t = (8/3)^{4/3}$, where $h(t) = \frac43\left(1 - \ln\frac83\right) > 0$ since $8/3 < e$. So in this case $x\ln(2y) < y$ as well.
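A randomized spot-check of the inequality over feasible pairs $(x,y)$ with $y\ge\max(1,4x\ln x)$ (the sampling scheme is my own choice):

```python
import math
import random

random.seed(0)
for _ in range(100_000):
    x = random.uniform(1e-9, 50.0)
    # any y above both constraints y >= 1 and y >= 4 x ln x
    y = max(1.0, 4.0 * x * math.log(x)) + random.expovariate(0.05)
    assert y >= x * math.log(2.0 * y)
```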
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is this metric space complete? Consider the metric defined on $\mathbb{R}$: $$d(x, y) = \left\{\begin{array}{ll} |x-y|, &\text{if } x,y\in \mathbb{R}\setminus\mathbb{Q} \text { or } x,y\in\mathbb{Q}, \\ |x|+|y|, &\text{if } (x\in \mathbb{R}\setminus\mathbb{Q} \text{ and } y\in\mathbb{Q}) \text { or } (x\in\mathbb{Q} \text{ and } y\in \mathbb{R}\setminus\mathbb{Q}). \end{array}\right.$$ The question asks whether or not the metric space $(\mathbb{R}, d)$ is complete. If not, show why and give the completion. My intuition immediately told me it was not complete, as there are sequences $(x_n)_n$ with $x_n \in \mathbb{Q}$ for every $n \in \mathbb{N}$ that are Cauchy for the standard metric but whose limit $x$ is in $\mathbb{R}\setminus\mathbb{Q}$. Such a sequence is also Cauchy for the metric $d$: since the sequence is entirely in the rationals, $d$ agrees with the standard metric on it. Now $d(x_n,x) = |x_n|+|x|$, which will not become arbitrarily small (note $|x|>0$), and for any rational $q$ we have $d(x_n,q) = |x_n-q| \to |x-q| > 0$, so the sequence has no limit in $(\mathbb{R},d)$. Is this correct? The next part asked for the completion, which I don't know how to determine.
Your argumentation about not being complete is totally correct. Equip $\mathbb R^2$ with the taxicab metric: $$\tilde d(x,y) := |x_1-y_1|+|x_2-y_2|.$$ Now consider the embedding $$\iota : \mathbb R \to \mathbb R^2, \quad x \mapsto \begin{cases}(x,0),x \in \mathbb Q,\\ (0,x) , x \in \mathbb R \setminus \mathbb Q.\end{cases}$$ Then it turns out that for $x,y \in \mathbb R$ we have $$d(x,y) = \tilde d(\iota(x),\iota(y)).$$ Since $(\mathbb R^2,\tilde d)$ is complete, the completion of $(\mathbb R,d)$ is $$\overline{\iota(\mathbb R)} = \{(x_1,x_2) \in \mathbb R^2 : x_1x_2 = 0 \}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Domain of linear operator being the direct sum of kernel and another subspace. Let $F: X \rightarrow Y$ be a linear operator. Let $Q = F(X)$ be an $n$-dimensional vector space. How do I demonstrate that $$X = \ker(F) \oplus Z,$$ for some subspace $Z$ with dimension equal to $n$?
Let $\alpha$ be a basis for $\ker F$, which we can extend to a basis $\beta$ for $X$. Then $$X = \rm{Span}(\alpha) \oplus \rm{Span}(\beta \setminus \alpha) = \ker(F) \oplus \rm{Span}(\beta \setminus \alpha).$$ Let $Z = \rm{Span}(\beta \setminus \alpha)$; we need only show that $\dim Z = n$. Well, we must have $\dim Z \geq n$ since $$n = \dim F(X) = \dim \rm{Span}\{F(v)\}_{v \in \beta} = \dim \rm{Span}\{F(v)\}_{v \in \beta \setminus \alpha} \leq \dim Z$$ where the third equality follows from the fact that $\alpha \subseteq \ker F$. Can you take it from here? (Why must we have $\dim Z \leq n$?)
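The decomposition can be seen concretely with the SVD: for a matrix $F$, the right-singular vectors beyond the rank span $\ker F$, and the first $n=\operatorname{rank}F$ of them span a complement $Z$. A sketch (the random matrix is just an example):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 5))          # F : R^5 -> R^3, generically of rank 3
_, s, Vt = np.linalg.svd(F)
rank = int(np.sum(s > 1e-10))            # n = dim F(X)
ker_basis = Vt[rank:].T                  # columns: a basis of ker F
Z_basis = Vt[:rank].T                    # columns: a basis of a complement Z

assert np.allclose(F @ ker_basis, 0)     # these vectors really lie in ker F
assert np.linalg.matrix_rank(np.hstack([ker_basis, Z_basis])) == 5  # together a basis of X
assert Z_basis.shape[1] == rank          # dim Z = n
```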
{ "language": "en", "url": "https://math.stackexchange.com/questions/4187874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Covariant derivative on principal bundle I know that there exists a connection on a principal bundle and via parallel transport it is possible to define a covariant derivative on the associated bundle. However, can we also define a covariant derivative on the principal bundle, i.e. something that can differentiate a section along a vector field? Or do we need a linear structure like the one in a vector bundle to 'take derivatives'?
It is defined in Wikipedia as $D\phi(v_0,...,v_k)=d\phi(hv_0,...,hv_k)$, where $h$ is the projection onto the horizontal subspace determined by the given principal connection of the principal $G$-bundle $\pi:P\to M$. If $\rho:G\to GL(V)$ is a representation of $G$ on some vector space $V$, then a tensorial (or basic, i.e. $G$-equivariant and horizontal) $k$-form of type $\rho$ on $P$ can be identified with a $P×_\rho V$-valued $k$-form on $M$. As other answers/comments said, it is not a covariant derivative in the common sense, but it is a covariant derivative in the following sense: if $\phi$ is a $P×_\rho V$-valued $k$-form on $M$, identified with the tensorial form $\hat\phi$ on $P$, and $\nabla\phi$ is the $P×_\rho V$-valued $(k+1)$-form on $M$, where $\nabla$ is the covariant derivative on $P×_\rho V$ corresponding to the given principal connection, and $\widehat{\nabla\phi}$ is the identification of $\nabla\phi$ with a tensorial form on $P$, then $D\hat\phi=\widehat{\nabla\phi}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4188020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Monotonicity of Integrals, if and only if? Assume that $f(x)$ and $g(x)$ are Riemann Integrable and that $\int_a^W f(x) \, dx<\int_a^W g(x) \, dx$ for all $W \in [a,b]$. Does it follow that $$ f(x) < g(x) $$ for all $x \in [a,b]$? Is it true with weak inequalities?
I think this doesn't hold in general. Fix some $c \in [a,b]$ and take for example $g : [a,b] \to \mathbb{R}$, with $g(x)=1$, and $$ f : [a,b] \to \mathbb{R}, \quad x \mapsto \begin{cases} 0 & x \in [a,b]\setminus \{c\}, \\ 2 & x=c. \end{cases} $$ Then $f$ and $g$ are Riemann integrable on $[a,b]$, because $g$ is continuous and $f$ is bounded and has only one discontinuity. We indeed have that $$ \int_a^W f(x) \, dx < \int_a^W g(x)\, dx $$ for all $W \in (a,b]$ (at $W=a$ both integrals vanish, so the strict hypothesis can only be meant for $W>a$). However, for $x=c$, we have $f(c)=2 > 1 = g(c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4188171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$2^\sqrt{10}$ vs $3^2$ Is there a neat way to show that $2^\sqrt{10} < 3^2$? I have tried raising to larger powers, like $(2^\sqrt{10})^{100}$ vs $3^{200}$, but the problem is the two functions $2^{x\sqrt{10}}$ and $3^{2x}$ are almost equivalent, and there is no point (that I can find) where one function is "obviously" larger than the other. Looking at $2^{2\sqrt{10}}$ vs $3^4$ I tried to find a way of showing that $2^{2\sqrt{10}} < 2^6+2^4 = 2^4(2^2+1)$ but couldn't see any neat solution. Any help or hints are appreciated. edit: I should have specified when I say "neat solution" I was looking for a method readily done by hand. I realise this might be an unrealistic limitation, but it was why I'm interested. Final thoughts before going to bed: I was looking at the functions $f(x)=2^{\sqrt{1+x^2}}$ and $g(x)=x^2$. At $x=0$, clearly $f>g$. They are equal at $x=2\sqrt{2}$. At $x=4$, again $f>g$. This means that somewhere in the interval $2<x<4$, $g>f$ (as they are both convex). The task then is to try to find a point such that $x>3$ and $g>f$. But then another fun inequality pops out... $2^{\sqrt{11}}$ vs $10$...
A trick is to blow up the gap $[2^\sqrt{10},\;3^2] → [2^{\sqrt{10}-3},\;9/8] → [2,\;(9/8)^{\sqrt{10}+3}]$ (divide through by $2^3$, then raise to the power $\sqrt{10}+3 = 1/(\sqrt{10}-3)$) $1.26^3 = 2.000376$ $(9/8)^2 = 81/64 = 1+1/4+1/64 > 1.26$ $(9/8)^{\sqrt{10}+3} > (9/8)^6 > 1.26^3 > 2\qquad\qquad$ ⇒ RHS is bigger
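The numeric steps can be machine-checked, with the one delicate comparison $(9/8)^2 > 1.26$ done in exact rational arithmetic:

```python
from fractions import Fraction
import math

assert Fraction(9, 8)**2 == Fraction(81, 64) == 1 + Fraction(1, 4) + Fraction(1, 64)
assert Fraction(81, 64) > Fraction(126, 100)                   # (9/8)^2 > 1.26
assert Fraction(126, 100)**3 == Fraction(2000376, 10**6) > 2   # 1.26^3 = 2.000376 > 2
assert math.sqrt(10) + 3 > 6                                   # the exponent really exceeds 6
assert 2**math.sqrt(10) < 9                                    # the original claim, in floats
```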
{ "language": "en", "url": "https://math.stackexchange.com/questions/4188357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Understanding vector field(s) on $\mathbb{S}^3$. I was solving the exercises of John Lee's book "Introduction to Smooth Manifolds", where there is an exercise asking us to prove that $\mathbb{S}^3$ is parallelizable. In the hint, the author asks us to consider the vector fields: $$X_1 = -x\dfrac{\partial}{\partial w} + w \dfrac{\partial}{\partial x} - z \dfrac{\partial}{\partial y} + y \dfrac{\partial}{\partial z},$$ $$X_2 = -y\dfrac{\partial}{\partial w} + z \dfrac{\partial}{\partial x} + w \dfrac{\partial}{\partial y} - x \dfrac{\partial}{\partial z},$$ $$X_3 = -z\dfrac{\partial}{\partial w} - y \dfrac{\partial}{\partial x} + x \dfrac{\partial}{\partial y} + w \dfrac{\partial}{\partial z}.$$ I get the hint and how to use it. What I don't understand is why the vector fields are $4$-dimensional. Isn't $\mathbb{S}^3$ a $3$-dimensional manifold? That is why I think the tangent vectors should have only $3$ coordinates! I also searched other places on the internet and more or less, everybody uses $4$ coordinates for a vector field on $\mathbb{S}^3$. Could anybody help me understand this?
You can parametrize the sphere $S^3$ by polar coordinates, but computations will be painful! You can identify $S^3$ with the Lie group $SU(2)$ and find a basis of its Lie algebra. Lie groups are parallelizable (pick any basis of left invariant vector fields).
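A quick numerical check of the hint's three vector fields: at any $p=(w,x,y,z)\in\mathbb S^3$ they are orthogonal to $p$ (hence genuinely tangent to the sphere, which is why four ambient coordinates appear) and orthonormal, so they give a global frame:

```python
import numpy as np

def frame_at(p):
    """Rows: the values of X1, X2, X3 at p = (w, x, y, z), in ambient R^4 coordinates."""
    w, x, y, z = p
    return np.array([[-x,  w, -z,  y],
                     [-y,  z,  w, -x],
                     [-z, -y,  x,  w]])

rng = np.random.default_rng(1)
p = rng.standard_normal(4)
p /= np.linalg.norm(p)                  # a random point of S^3
V = frame_at(p)
assert np.allclose(V @ p, 0)            # all three fields are tangent to S^3 at p
assert np.allclose(V @ V.T, np.eye(3))  # orthonormal, hence linearly independent
```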
{ "language": "en", "url": "https://math.stackexchange.com/questions/4188536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Pretty Simple Physics/Geometry Problem that Stumped Me Okay friends. So the setup here is that I was playing Silent Hunter III (World War II submarine simulator) and received a radio message telling me the position and approximate velocity of a convoy I wanted to raid. I was trying to calculate the direction I should travel to intercept it most quickly. To solve this, I set up the problem as a 2D coordinate problem. I know that's not strictly speaking accurate given the curvature of the earth, but I figured it was close enough and it still had me stumped. So my position is $a_0$, the convoy I want to intercept has position $b_0$ and velocity $\dot b$. My submarine has a max speed (obviously) and $|\dot a| > | \dot b |$. To make things a little bit easier on myself, I defined $b_0$ to be $(0,0)$. Also, to make this easier to conceptualize, I'll give you the actual figures I was working with, but I was trying to solve this algebraically to come up with a general solution. $$a_0 = (-230, 75)\\ |\dot a| = 30 \\ b_0 = (0,0) \\ |\dot b| = 18.5$$ $b$ was traveling due southwest. This was the work I managed to figure out: $$ a(t) = \dot at+a_0 \\ b(t)=\dot bt\\ \text{The ships intersect when their position functions are equal.}\\ \therefore\dot at + a_0 - \dot b t=0\\ \text {Simplifying a bit:}\\(\dot a - \dot b)t+a_0=0$$ So I have two unknowns (i.e. $\dot a$ and $t$) but only one equation. I'm kind of stuck trying to figure out my second equation. Edited to add: I should mention that my units are kilometers for distance and kilometers per hour for speed.
Let $c$ be the point of intersection. You have a triangle $\triangle abc$ and the law of cosines says $(ab)^2 + (bc)^2-2(ab)(bc)\cos \angle abc = (ac)^2$. $ab = \sqrt{230^2 + 75^2}$. $ac = |\dot a|t; bc=|\dot b|t$ (where $t$ is the time until intersection; an irrelevant variable which will cancel out) so $\frac {ac}{bc}=\frac {|\dot a|}{|\dot b|}$. And $m\angle abc = 135+\arctan(\frac {75}{230})$ (Actually you said the enemy is going due southwest but your image shows due southeast. I calculated for simplicity that $m\angle abc = m\angle (-230,75)(0,0)(-230,0) + m\angle(-230,0)(0,0)c$ and $m\angle (-230,75)(0,0)(-230,0)=\arctan \frac {75}{230}$ [ignoring orientation and negative values to keep it simple] and $m\angle (-230,0)(0,0)c = \text{due southeast} = 135$.) So $(230^2+75^2) + (bc)^2 - 2\sqrt{230^2 + 75^2}(bc)\cos (135+\arctan\frac {75}{230}) = (bc)^2( \frac {|\dot a|}{|\dot b|})^2$. That's enough to solve for $(bc)$ (there are two solutions but one of them will be a point where $b$ was in the past). And that's enough to give you $c$. And that gives you $\angle acb$.
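The same answer can be reached without trigonometry by solving $|b_0+\dot b t-a_0|=|\dot a|\,t$ directly for the interception time $t$; this is a sketch with the convoy heading due southeast (as in the answer's figure), all variable names being my own:

```python
import math
import numpy as np

a0 = np.array([-230.0, 75.0])   # own position (km)
s_a = 30.0                      # own max speed (km/h)
b0 = np.array([0.0, 0.0])       # convoy position
s_b = 18.5                      # convoy speed (km/h)
v_b = s_b * np.array([math.cos(-math.pi / 4), math.sin(-math.pi / 4)])  # due southeast

# |(b0 - a0) + v_b t| = s_a t  ->  (s_b^2 - s_a^2) t^2 + 2 (d . v_b) t + |d|^2 = 0
d = b0 - a0
A = s_b**2 - s_a**2             # negative, since we are faster than the convoy
B = 2.0 * (d @ v_b)
C = d @ d
t = (-B - math.sqrt(B * B - 4 * A * C)) / (2 * A)   # the unique positive root
c = b0 + v_b * t                                    # interception point
heading = (c - a0) / np.linalg.norm(c - a0)         # unit vector: course to steer
```

For these numbers $t$ comes out to roughly $19.6$ hours; the course angle is then `math.atan2(heading[1], heading[0])`.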
{ "language": "en", "url": "https://math.stackexchange.com/questions/4188695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
find all the maximal ideal of $\mathbb{Z} /p^n\mathbb{Z}?$ find all the maximal ideal of $\mathbb{Z} /p^n\mathbb{Z}?$ I know that $p\mathbb{Z}$ is a maximal ideal in $\mathbb{Z}$ whenever $p$ is prime. Here $\mathbb{Z} /p^n\mathbb{Z}\cong \mathbb{Z}_{p^n}$ Edit: From Arthur's comment, I take $\mathbb{Z}_{2^3}=\mathbb{Z}_8$; here the divisors of $8$ are $1,2$ and $4$, and by inspection $(2)$ is the only maximal ideal. Similarly in $\mathbb{Z}_{2^2}$, $(2)$ is the only maximal ideal. My question : How to find all the maximal ideals of $\mathbb{Z} /p^n\mathbb{Z}$?
Question: "My question : How to find all the maximal ideal of $\mathbb{Z}/p^n\mathbb{Z}$?" Answer: If $I:=(p)$ with $p$ a non-zero prime, it follows that $I^l=(p^l)$. Now suppose $J$ is a prime ideal with $I^l \subseteq J$; then $p^l \in J$, and since $J$ is prime, $p \in J$. Thus $(p) \subseteq J \subsetneq \mathbb{Z}$, and since $(p)$ is maximal, $J=(p)$. Hence the only maximal (and prime) ideal in $\mathbb{Z}/p^n\mathbb{Z}$ is the ideal generated by the class of $p$. Note: This question points to a general fact: If $\mathfrak{m} \subseteq A$ is a maximal ideal and $\mathfrak{p}$ is a prime ideal containing $\mathfrak{m}^l$, it follows $\mathfrak{p}=\mathfrak{m}$: If $x\in \mathfrak{m}$, it follows $x^l \in \mathfrak{m}^l \subseteq \mathfrak{p}$, and since $\mathfrak{p}$ is a prime ideal it follows by induction that $x\in \mathfrak{p}$; hence $\mathfrak{m} \subseteq \mathfrak{p}$, and since $\mathfrak{m}$ is maximal and $\mathfrak{p}$ proper, $\mathfrak{p}=\mathfrak{m}$. Hence the ring $A/\mathfrak{m}^l$ is local with unique maximal ideal $\mathfrak{m}/\mathfrak{m}^l$ (and Artinian when $A$ is Noetherian).
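For small cases this can be checked by brute force: the ideals of $\mathbb{Z}/n\mathbb{Z}$ are exactly $d\mathbb{Z}/n\mathbb{Z}$ for $d \mid n$, so one can enumerate them and pick out the maximal ones (a throwaway sketch):

```python
def maximal_ideal_generators(n):
    """Generators d of the maximal ideals of Z/nZ (its ideals are dZ/nZ for d | n)."""
    # proper ideals only: skip d = 1, which generates the whole ring
    ideals = {d: set(range(0, n, d)) for d in range(2, n + 1) if n % d == 0}
    return sorted(d for d, s in ideals.items()
                  if not any(s < t for e, t in ideals.items() if e != d))

assert maximal_ideal_generators(8) == [2]       # Z/2^3: only (2)
assert maximal_ideal_generators(27) == [3]      # Z/3^3: only (3)
assert maximal_ideal_generators(12) == [2, 3]   # composite n has several
```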
{ "language": "en", "url": "https://math.stackexchange.com/questions/4189170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f\in\mathbb{Q}[X]$ such that $f(1)=-2, f(2)=1$ and $f(-1)=0$. Find the remainder of the division of $f$ by $X^3-2X^2-X+2$. Let $f\in\mathbb{Q}[X]$ such that $f(1)=-2, f(2)=1$ and $f(-1)=0$. Find the remainder of the division of $f$ by $X^3-2X^2-X+2$. So, I figured: $f=(X+1)q$. Assuming that $f$ has degree 3, I solve $\begin{cases} (2+1)(2-a)(2-b)=1 \\ (1+1)(1-a)(1-b)=-2\end{cases}$ to find that $\begin{cases} a=\frac{5}{6}-\frac{\sqrt{37}}{6}, b=\frac{5}{6}+\frac{\sqrt{37}}{6} \\ a=\frac{5}{6}+\frac{\sqrt{37}}{6}, b=\frac{5}{6}-\frac{\sqrt{37}}{6} \end{cases}$. I divide $X^3-2X^2-X+2$ by $(X+1)(X-\frac{5}{6}-\frac{\sqrt{37}}{6})(X-\frac{5}{6}+\frac{\sqrt{37}}{6})$. The remainder is $\frac{4}{3}(X+1)(X-\frac{7}{4})$. Is it correct to assume that the remainder is always this, no matter the degree of $f$? Since that's what the problem asks for, I'm led to assume this.
You cannot assume that $f$ has degree $3$. Also the remainder that you have calculated is not correct; the actual remainder is $\frac{1}{3} \, {\left(4 \, x - 7\right)} {\left(x + 1\right)}$. For example, take $f(x) = \frac{3}{4} \, x^{4} + \frac{1}{6} \, x^{3} - \frac{11}{4} \, x^{2} - \frac{7}{6} \, x + 1$. Observe that $f$ satisfies all the requirements given in the question. So when you divide (perhaps using long division or some other method) $f$ by $x^3 - 2x^2 - x + 2$ you get the same remainder. A big hint: Let $g(x) = x^3 - 2x^2 - x + 2$ and let $r(x)$ be the remainder when you divide $f$ by $g$. Then there exists a $q(x)$ such that $f(x) = g(x)q(x) + r(x)$. Observe that $r(x)$ is at most a quadratic polynomial. Also see that $g(1) = g(2) = g(-1) = 0$. So $f(1) = r(1) = -2$, $f(2) = r(2) = 1$ and $f(-1) = r(-1) = 0$. Can you find $r(x)$ satisfying these conditions?
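SymPy's polynomial division confirms both the claimed remainder and the example polynomial (a quick check, not part of the argument):

```python
from sympy import symbols, Rational, div, expand

x = symbols('x')
g = x**3 - 2*x**2 - x + 2
f = Rational(3, 4)*x**4 + Rational(1, 6)*x**3 - Rational(11, 4)*x**2 - Rational(7, 6)*x + 1
q, r = div(f, g, x)
assert expand(r - Rational(1, 3)*(4*x - 7)*(x + 1)) == 0   # remainder is (4x-7)(x+1)/3
assert [f.subs(x, a) for a in (1, 2, -1)] == [-2, 1, 0]    # f meets the requirements
assert [g.subs(x, a) for a in (1, 2, -1)] == [0, 0, 0]     # g vanishes at 1, 2, -1
```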
{ "language": "en", "url": "https://math.stackexchange.com/questions/4189376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given a general region, find the double integral bounded between $y = x$ and $y=3x-x^2 $ $$ J = \iint_R (x^2-xy)\,dx \,dy, $$ Suppose region R is bounded between $y = x$ and $y=3x-x^2 $ My attempt using vertical integration: $$ \int^{x=2}_{x=0} \int^{y=3x-x^2}_{y=x} \left({x^2-xy}\right)dy\ dx$$ $$\int^2_0 \left[x^2y-x\frac{y^2}{2}\right]^{3x-x^2}_{x}\, dx$$ $$\int^2_0 \frac{-x^5+4x^4-4x^3}{2} \,dx $$ $$\boxed{J = -\frac{8}{15}}$$ My attempt using horizontal integration : $$ \int^{y=2}_{y=0} \int^{x=y}_{x=3\,\pm \sqrt{9-y}} \left({x^2-xy}\right)dx\ dy$$ For $ x = 3+\sqrt{9-y}$ $$ \int^2_0 \left[\frac{x^3}{3}-\frac{x^2}{2}y\right]^y_{3+\sqrt{9-y} }\,dy$$ For $ x = 3-\sqrt{9-y}$ $$ \int^2_0 \left[\frac{x^3}{3}-\frac{x^2}{2}y\right]^y_{3-\sqrt{9-y} }\,dy$$ My doubts: 1.) How do I set my limits of integration for horizontal integration, if there is a $\pm$ to be considered? 2.) The answer is negative; what does that imply in questions related to double integrals? Could you guys please help?
Your first approach is correct, but your second approach is not. The possible values of $y$ lie in the interval $\left[0,\frac94\right]$, because $\frac94$ is the maximum of $3x-x^2$ when $x\in[0,2]$. When $y\in\left[0,\frac94\right]$, then: * *if $y\in[0,2]$, then the possible values of $x$ lie in $\left[\frac{3-\sqrt{9-4y}}2,y\right]$; *if $y\in\left[2,\frac94\right]$, then the possible values of $x$ lie in $\left[\frac{3-\sqrt{9-4y}}2,\frac{3+\sqrt{9-4y}}2\right]$. So, the answer is\begin{multline}\int_0^2\int_{\frac{3-\sqrt{9-4y}}2}^y(x^2-xy)\,\mathrm dx\,\mathrm dy+\int_2^{9/4}\int_{\frac{3-\sqrt{9-4y}}2}^{\frac{3+\sqrt{9-4y}}2}(x^2-xy)\,\mathrm dx\,\mathrm dy=\\=\int_0^2\frac{1}{12} \left(-2 y^3-6 y^2+\left(45-11 \sqrt{9-4 y}\right) y+18 \left(\sqrt{9-4 y}-3\right)\right)\,\mathrm dy+\\+\int_2^{9/4}\frac{1}{6}(18-11 y)\sqrt{9-4 y}\,\mathrm dy=-\frac{47}{120}-\frac{17}{120}=-\frac8{15}.\end{multline}
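Both orders of integration can be verified symbolically; this SymPy check (mine, not part of the original answer) reproduces $-\frac{8}{15}$ either way:

```python
from sympy import symbols, integrate, sqrt, Rational

x, y = symbols('x y')
f = x**2 - x*y
# vertical strips: y from x up to 3x - x^2, then x over [0, 2]
J_v = integrate(f, (y, x, 3*x - x**2), (x, 0, 2))
# horizontal strips, split at y = 2 as in the answer
xl = (3 - sqrt(9 - 4*y)) / 2
xr = (3 + sqrt(9 - 4*y)) / 2
J_h = integrate(f, (x, xl, y), (y, 0, 2)) \
    + integrate(f, (x, xl, xr), (y, 2, Rational(9, 4)))

assert abs(float(J_v) + 8/15) < 1e-12
assert abs(float(J_h) + 8/15) < 1e-9
```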
{ "language": "en", "url": "https://math.stackexchange.com/questions/4189572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$X_n\xrightarrow{d} X$ and $X_n/Y_n\xrightarrow{P} 1$ implies $Y_n\xrightarrow{d} X$? Let $(X_n)_{n\ge 1}$ and $(Y_n)_{n\ge 1}$ be sequences of random variables in a probability space such that $Y_n(\omega)\neq 0\ \forall \omega$ and $$X_n\xrightarrow{d} X,\ \frac{X_n}{Y_n}\xrightarrow{P} 1$$ Is it true that $Y_n\xrightarrow{d} X$? I know it holds in the case $X_n-Y_n\xrightarrow{P} 0$ then if $\{Y_n\}_{n\ge 1}$ is uniformly bounded $$\frac{X_n}{Y_n}\xrightarrow{P} 1\Rightarrow X_n-Y_n\xrightarrow{P} 0$$ and the property is valid. In general, I think intuitively that it is true as well but cannot prove it, thus I would appreciate some hint. Thanks in advance!
You may proceed as Prof. Rama Murthy said. But if you're familiar with some well-known results, it'll be easier to prove. Write $Z_n = \frac{X_n}{Y_n}$. So, it's given that $Z_n \xrightarrow{P} 1$. * *Use the fact: $\quad Z_n \xrightarrow{d} c\quad\iff\quad Z_n \xrightarrow{P} c\quad$, where $c$ is some non-random constant. Here, $c=1$. *Let $h(t) = \frac{1}{t}$. Then, $V_n = \frac{Y_n}{X_n} = h(Z_n)$. Now, $h$ is discontinuous only at $0$, and the limit distribution (the point mass at $1$) assigns probability $0$ to that point. Also, from the previous point, $Z_n \xrightarrow{d} 1$. So, by the continuous mapping theorem (general version), $V_n = h(Z_n) \xrightarrow{d} h(1) = 1$. *$X_n \xrightarrow{d} X$, and $V_n \xrightarrow{d} 1$. Hence, by Slutsky's theorem, $$Y_n = X_n \cdot \frac{Y_n}{X_n} = X_n \cdot V_n \xrightarrow{d} X \cdot 1 = X$$ which is what you wanted to show. Hope this helps. Thank you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4189741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proposition 1.6 Atiyah Let $A$ be a ring and $\mathcal{m}$ a maximal ideal of $A$, such that every element of $1+\mathcal{m}$ is a unit in $A$. Then $A$ is a local ring. Let $x\in A\setminus\mathcal{m}$ (this is the set difference of $A$ and $\mathcal{m}$, not the quotient ring). Since $\mathcal{m}$ is maximal, the ideal generated by $x$ and $\mathcal{m}$ is $A=(1)$, that is $$(1)=(\mathcal{m},\{x\})$$ Edit $(\mathcal{m},\{x\})$ denotes the ideal generated by $\mathcal{m}$ and $x$. Question. Why do there exist $y\in A$ and $t\in\mathcal{m}$ such that $xy+t=1$?
If $a\in A$, I denote $\bar a$ the class of $a$ in $A/\mathfrak m$. Let $x\in A\setminus \mathfrak m$. Since $\mathfrak m$ is maximal, $A/\mathfrak m$ is a field. Therefore there is $\bar y\in A/\mathfrak m$ s.t. $\bar x\bar y=1$, i.e. $1=(x+m_1)(y+m_2)=xy+t$ where $t=m_1y+m_2x+m_1m_2\in \mathfrak m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4189907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
A contradiction I found with integrals and limits of an exponential function "a" is a constant. $$\lim_{a\to 0^+} \int e^{ax} \, dx = \lim_{a\to 0^+} \frac{e^{ax}}{a}=\frac{e^{0^+\times x}}{0^+}=\frac{1}{0^+}=\infty$$ but doing it in a different order: $$\lim_{a\to 0^+} \int e^{ax} \, dx =\int e^{0^+\times x} \, dx=\int e^{0} \, dx=\int 1 \, dx=x$$ so which one is it? why did I get this contradiction? Thanks.
Integrate your function from $0$ to $x$ like this: $$\lim_{a\to 0^+}\int_0^x e^{at} \, dt = \lim_{a\to 0^+} \bigg(\frac{1}{a} e^{ax} - \frac{1}{a}\bigg).$$ You can see how this plays out from here. I prefer definite integrals with a variable upper bound because of issues like this amongst others. I have personally become, pardon the word play, very anti antiderivative over time. (Does that mean I'm derivative?)
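Numerically, the definite integral behaves perfectly well as $a\to 0^+$: a quick check (using `math.expm1` to evaluate $(e^{ax}-1)/a$ without catastrophic cancellation) confirms that the limit is $x$:

```python
import math

x = 2.0
for a in (1e-1, 1e-4, 1e-8):
    integral = math.expm1(a * x) / a        # (e^{ax} - 1)/a, computed stably
    # the error of the a -> 0 limit is ~ a x^2 / 2, so bounded by a x^2
    assert abs(integral - x) <= a * x * x
```

By contrast, the antiderivative $e^{ax}/a$ alone blows up only because its $a$-dependent "constant of integration" $-1/a$ was dropped.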
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Roots of $3x^5 - 15x +5$ Let $f(x) =3x^5 - 15x +5$. By Eisenstein's Criterion we can show that $f(x)$ is irreducible over $\mathbb{Q}$ (since $5 \nmid 3$, $5 \mid -15$, $5 \mid 5$ and $5^2 \nmid 5$). Since $f$ is continuous and $ f( -2 ) = -61, f( -1 ) = 17, f( 0 ) = 5, f( 1 ) = -7, f( 2 ) = 71$, by the intermediate value theorem we can say that $f$ has at least three real roots. Clearly $f'(x) = 15(x^4-1) > 0$ for all $x > 2$ and also for all $x < -2$, so $f$ is increasing on these regions; since $f(2) > 0$ and $f(-2) < 0$, $f$ does not have zeroes there. Suppose $f(x)$ has $4$ real zeroes; then by Rolle's theorem $f'(x)$ should have at least $3$ zeroes between the roots of $f$. But $f'(x) = 15(x^4 -1)$ does not have $3$ real zeroes. So $f$ has exactly three real roots and two other complex roots. Let $K$ be the smallest subfield of complex numbers containing $\mathbb{Q}$ and the $5$ roots of $f(x)$. Then using the fundamental theorem of Galois theory we can say that $Gal(K/\mathbb{Q}) \approx S_5$, the symmetric group on five letters. Since $S_5$ is not solvable, by a theorem of Galois we can conclude that $f(x)$ is not solvable by radicals. That is, each zero of the polynomial $f(x)$ cannot be written as an expression involving elements of $\mathbb{Q}$ combined by the operations of addition, subtraction, multiplication, division, and extraction of roots. What do the roots of $f$ look like? We have information about the location of the real roots, but I think that information may not help in finding some expression for the roots. Precisely, my question is: does there exist a series, continued fraction, or some integral which represents the roots of $f(x)$?
The solutions are algebraic numbers. Because the Galois group of your algebraic equation isn't solvable, the solutions cannot be represented as radical expressions. They can't be represented in terms of elementary functions either. See closed-form expression for roots of a polynomial Polynomials with degree $5$ solvable in elementary functions? Additionally, your equation $$3x^5-15x+5=0$$ is a trinomial equation. See Szabó, P. G.: On the roots of the trinomial equation. Centr. Eur. J. Operat. Res. 18 (2010) (1) 97-104 A closed-form solution can be obtained also using the confluent Fox-Wright function $\ _1\Psi_1$. See Belkić, D.: All the trinomial roots, their powers and logarithms from the Lambert series, Bell polynomials and Fox–Wright function: illustration for genome multiplicity in survival of irradiated cells. J. Math. Chem. 57 (2019) 59-106
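Although no radical (or elementary closed-form) expression exists, the roots are easy to compute numerically, e.g. with `numpy.roots`; this also confirms the count of three real and two complex roots obtained in the question:

```python
import numpy as np

coeffs = [3, 0, 0, 0, -15, 5]                 # 3x^5 - 15x + 5
roots = np.roots(coeffs)
real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
assert len(real) == 3
assert all(abs(np.polyval(coeffs, r)) < 1e-8 for r in real)
# the real roots lie in (-2,-1), (0,1), (1,2), matching the sign changes of f
assert -2 < real[0] < -1 and 0 < real[1] < 1 and 1 < real[2] < 2
```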
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How many 5 card hands out of a 52 card deck are there that have at least one red card. I am aware of the correct answer, which is: $$ \text{All possible hands} - \text{All hands with no red cards} = \text{All hands with at least one red card} $$ However, there is an incorrect argument that I cannot figure out why is wrong. Here is the argument: We first select $26 \choose 1$ red card and then we select $51 \choose 4$ of the remaining cards in the deck. This produces an incorrect result and I do not know why. Intuitively it makes sense to me. Could someone help me out?
The problem is over-counting. Consider the case where you end up with A-hearts, 2-hearts, and 3 black cards. You count this twice, once where the Ace is the (first) red card chosen, and once where the 2 is the first red card chosen. The actual analysis of what it would take to correct and use such a direct approach is very complicated, since you could have $k$ red cards, where $k \in \{1,2,3,4,5\}.$ Naturally, the correct enumeration is $\binom{52}{5} - \binom{26}{5}.$
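Exact binomial arithmetic makes the overcount explicit: the naive product $26\binom{51}{4}$ counts each hand once per red card it contains, so it equals the $k$-weighted sum over hands with exactly $k$ red cards:

```python
from math import comb

total = comb(52, 5)
correct = total - comb(26, 5)                  # hands with at least one red card
naive = 26 * comb(51, 4)                       # the flawed "pick a red card first" count
by_red_count = [comb(26, k) * comb(26, 5 - k) for k in range(6)]  # exactly k red cards
assert sum(by_red_count[1:]) == correct        # summing over k >= 1 agrees with correct
assert naive == sum(k * c for k, c in enumerate(by_red_count))    # each hand counted k times
assert naive > correct
```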
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
To check differentiability of $f(z)=z |z|$ at $(0,0).$ Here $f(z)=z |{z}|=(x+iy) \sqrt{x^2+y^2}$. Here the CR equations are not satisfied in general. Then to check differentiability at $(0,0)$, I find the partial derivatives $u_x,u_y,v_x,v_y$ at $(0,0)$ and found that the CR equations are satisfied at $(0,0)$ (here $u_x,u_y,v_x,v_y$ are all zero at $(0,0)$). Hence the given function is differentiable only at $(0,0)$. Any improvement in my answer, and is there any other way to solve this?
Let's use the definition: $$ f'(0)=\lim_{z\to 0}\frac{f(z)-f(0)}{z}=\lim_{z\to 0}|z|=0 $$ So $f'(0)$ exists and is $0$. We can argue by contradiction at $z≠0$. Indeed, let $z_0≠0$ and assume $f$ is differentiable at $z_0$. Then so is the function $g(z):=\frac{f(z)}{z}$. But $g(z)=|z|$, which is a contradiction since $z\mapsto |z|$ is nowhere differentiable (for $z\neq 0$ the Cauchy-Riemann equations fail: with $u=\sqrt{x^2+y^2}$ and $v\equiv 0$ they would force $u_x=u_y=0$, which is impossible).
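The failure of complex differentiability at $z_0\neq 0$ can also be seen numerically: the difference quotient of $f(z)=z|z|$ at $z_0=1$ tends to $2$ along the real axis but to $1$ along the imaginary axis, while at the origin it tends to $0$ from every direction (a quick check):

```python
f = lambda z: z * abs(z)

z0, h = 1.0, 1e-6
q_real = (f(z0 + h) - f(z0)) / h            # approach along the real axis
q_imag = (f(z0 + 1j * h) - f(z0)) / (1j * h)  # approach along the imaginary axis
assert abs(q_real - 2) < 1e-4               # directional "derivatives" disagree...
assert abs(q_imag - 1) < 1e-4               # ...so f is not differentiable at z0 = 1

# at the origin the quotient is |z|, which tends to 0 from every direction
assert abs(f(1e-8) / 1e-8) < 1e-6
assert abs(f(1e-8j) / 1e-8j) < 1e-6
```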
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix in vector notation? Let, $\mathbf{A} \in \mathbb{R}^{n \times n}$, $x \in \mathbb{R}^{n}$, and $\mathbf{I}$ be an $n$ by $n$ identity matrix. What does, $$ \begin{bmatrix}\mathbf{A} \\ \mathbf{I} \end{bmatrix}x, $$ mean? I see this notation often used in books and no idea what it implies. Thanks!
It denotes a $2n\times 1$ vector with coordinates $$ [A_1x, \ldots, A_nx, x_1,\ldots,x_n]^{T} $$ Where $A_i$ denotes the $i$-th row of matrix $A$.
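In NumPy terms, the block matrix stacks $Ax$ on top of $x$ (a small illustrative example):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
x = np.array([5., 6.])
v = np.vstack([A, np.eye(2)]) @ x                    # the block matrix [A; I] applied to x
assert np.allclose(v, np.concatenate([A @ x, x]))    # = Ax stacked on top of x
assert v.shape == (4,)                               # 2n entries for n = 2
```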
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
geometric progression point distribution when one extreme point is negative How can I generate 20 points from -0.01 to 100 that are geometrically equal in separation, meaning that if I plot them on a log scale, $\log d_2 - \log d_1 = \log d_3 - \log d_2$ where $d_1$, $d_2$, $d_3$ are consecutive points? I can generate 20 points from 0.01 to 100 which follow a geometric progression. However, if one extreme boundary is negative, what is the procedure?
Here is one way to partition an interval $[a,b]$ into $m$ subintervals whose lengths form a geometric progression. Use the formula $$g(k)=2a-b+2^{k/m}\cdot(b-a) \tag{1}$$ for $k=0,1,\cdots m$ to give the $m+1$ endpoints of the partition. Then the lengths of the $m$ subintervals are (for $k$ from $1$ to $m$) given by $g(k)-g(k-1).$ The common ratio of the geometric progression of lengths is $2^{1/m}.$ I did this for your case of $[a,b]=[-0.01,100]$ using $m=12$ intervals and it worked fine. [I realize you wanted $20$ intervals but you can re-do this for that choice.] Here we have $$g(k)=-100.02+2^{k/12}\cdot(100.01). \tag{2}$$ To check the ratio one needs two adjacent intervals, so three successive values of $g(k).$ The first three $g(k)$ [to 4 decimals] are $-0.01,5.9369,12.2374.$ This gives the first two lengths as $5.9469,\ 6.3005$ and the ratio between these is $6.3005/5.9469=1.0594..$ which is the same as $2^{1/12}$ to four places. I checked also a few more cases of adjacent intervals, which I invite you to do also. I also checked that the last division point $g(12)$ is $100.0$ as expected. There are likely a lot of other ways to interpose the points, but this method definitely is not sensitive to the possibility that the interval $[a,b]$ has $0$ in its interior and so some division points are negative and others positive. NOTE: If anyone read this already I had left out a $2$ in the main formula (1). Now fixed.
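Formula (1) translates directly into code; a small sketch (names are mine) reproducing the $m=12$ example:

```python
def geometric_partition(a, b, m):
    """Endpoints g(0..m) of a partition of [a, b] into m subintervals whose
    lengths form a geometric progression with ratio 2**(1/m), per formula (1)."""
    return [2*a - b + 2**(k / m) * (b - a) for k in range(m + 1)]

pts = geometric_partition(-0.01, 100.0, 12)
lengths = [q - p for p, q in zip(pts, pts[1:])]
ratios = [l2 / l1 for l1, l2 in zip(lengths, lengths[1:])]
assert abs(pts[0] - (-0.01)) < 1e-9 and abs(pts[-1] - 100.0) < 1e-9  # endpoints hit exactly
assert all(abs(r - 2**(1 / 12)) < 1e-9 for r in ratios)              # common ratio 2^(1/12)
```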
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to evaluate $\lim\limits_{x\to \infty}\frac {2x^4-3x^2+1}{6x^4+x^3-3x}$? Evaluate: $\lim\limits_{x\to \infty}\frac {2x^4-3x^2+1}{6x^4+x^3-3x}$. I've just started learning limits and calculus and this is an exercise problem from my textbook. To solve the problem, I tried factorizing the numerator and denominator of the fraction. The numerator can be factorized as $(x-1)(x+1)(2x^2-1)$ and the denominator can be factorized as $x(6x^3+x^2-3)$. So, we can rewrite the problem as follows: $$\lim\limits_{x\to\infty}\frac {2x^4-3x^2+1}{6x^4+x^3-3x}=\lim\limits_{x\to\infty}\frac {(x-1)(x+1)(2x^2-1)}{x(6x^3+x^2-3)}$$ But this doesn't help as there is no common factor in the numerator and denominator. I've also tried the following: $$\lim\limits_{x\to\infty}\frac {(x-1)(x+1)(2x^2-1)}{x(6x^3+x^2-3)}=\lim\limits_{x\to\infty}\frac{x-1}{x}\cdot \lim\limits_{x\to\infty}\frac {(x+1)(2x^2-1)}{6x^3+x^2-3}=1\cdot \lim\limits_{x\to\infty}\frac {(x+1)(2x^2-1)}{6x^3+x^2-3}$$ Here I used the fact that $\frac{x-1}{x}\to 1$ as $x$ approaches infinity. Yet this does not help. The answer is $\frac 1 3$ in the book but the book does not include the solution. So, how to solve the problem?
Divide the numerator and denominator of the expression by $x^4$:$$\lim\limits_{x\to \infty}\frac {2-3/x^2+1/x^4}{6+1/x-3/x^3}$$and use the fact that $\lim\limits_{x\to\infty}\frac{1}{x^n} = 0$ for any positive $n$
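This can also be sanity-checked numerically (an illustration, not a proof): as $x$ grows, the $x^4$ terms dominate and the ratio settles at $2/6 = 1/3$.

```python
def f(x):
    return (2*x**4 - 3*x**2 + 1) / (6*x**4 + x**3 - 3*x)

# as x grows, the x^4 terms dominate and f(x) approaches 2/6 = 1/3
for x in (10.0, 1e3, 1e6):
    print(x, f(x))
```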
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving $||AB|| \le ||A||||B||$ for $\mathbb{C}\oplus \mathcal{U}$ Let $\mathcal{U}$ be a $C^{*}$-algebra without a unit and consider the algebra $\mathbb{C}\oplus \mathcal{U}$ formed by the ordered pairs $(\alpha, A)$, $\alpha \in \mathbb{C}$ and $A \in \mathcal{U}$ with the operations $(\alpha, A) + (\beta, B) := (\alpha+\beta, A+B)$ and $(\alpha,A)(\beta, B) := (\alpha \beta, \alpha B + \beta A + AB)$. I'm trying to prove that $||(\alpha, A)(\beta, B)|| \le ||(\alpha,A)||||(\beta, B)||$, where $||(\alpha, A)|| := \sup_{||B||=1, B \in \mathcal{U}}||\alpha B + AB||$. I know that: $$||(\alpha, A)(\beta, B)|| = ||(\alpha\beta, \alpha B + \beta A + AB)|| = \sup_{||C||=1}||\alpha\beta C +\alpha BC+\beta AC+ABC||$$ but I'm stuck there. Can anyone help me? Thanks!
We have : \begin{align} \|(\alpha,A)(\beta,B)\| &= \|(\alpha\beta,\alpha B+\beta A + AB)\| \\ &= \sup_{\|C\| = 1 } \| \alpha\beta C + (\alpha B + \beta A + AB)C \| \end{align} For $C\in\mathcal U$ with $\|C\|=1$, we have : \begin{align} \alpha\beta C + (\alpha B + \beta A + AB)C = \alpha (\beta C+ BC) + A(\beta C + BC) \end{align} If $\beta C+ BC\neq 0$, we have : \begin{align} \| \alpha\beta C + (\alpha B + \beta A + AB)C \| &= \|\beta C +BC\| \cdot \left\| \alpha \frac{\beta C +BC}{\|\beta C +BC\|} + A \frac{\beta C +BC}{\|\beta C +BC\|}\right\| \\ &\leq \|(\beta,B)\|\|(\alpha,A)\| \end{align} This also holds if $\beta C+ BC = 0$. By taking the supremum over $C$, we get : $$\|(\alpha,A)(\beta,B)\| \leq \|(\alpha,A)\|\|(\beta,B)\|$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4190988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what primes $p$ and positive integers $k$ is this algebraic expression divisible by $3$? My initial question is as is in the title: For what primes $p$ and positive integers $k$ is this algebraic expression divisible by $3$? $$A(p,k):=\dfrac{p^{2k+2} - 4p^{2k+1} + 6p^{2k} + 2p^{k+1} - 8p^k + 3}{2(p - 1)^2}$$ I would like to qualify that I am specifically interested in those values for $p$ and $k$ satisfying the congruence $p \equiv k \equiv 1 \pmod 4$. MY ATTEMPT Here, I will evaluate my expression for $$p_1 = 5, k_1 = 1,$$ which gives $A(p_1,k_1)=9$ (which is divisible by $3$), $$p_2 = 13, k_2 = 1,$$ which gives $A(p_2,k_2)=73$ (which is not divisible by $3$), and $$p_3 = 17, k_3 = 1,$$ which gives $A(p_3,k_3)=129$ (which is divisible by $3$). Here is my final question: Will it be possible to consider the cases $p \equiv 1 \pmod 3$ and $p \equiv 2 \pmod 3$ separately, then use the Chinese Remainder Theorem afterwards? (I know the concept, but I would have forgotten how to do that.) CONJECTURE: If $p \equiv 2 \pmod 3$, then $3 \mid A(p,k)$. Alas, this is where I get stuck.
Consider the point that the sum of the coefficients of the numerator is $0$: when $p\equiv 1\bmod 3$ we have $p^j\equiv 1\bmod 3$ for every $j$, so the numerator is congruent to the sum of its coefficients, and hence for $p\equiv 1\bmod3$ the numerator is divisible by 3 for any $k$. For the case $p\equiv 2\bmod 3$ we have: $p\equiv 2 \bmod 3\Rightarrow p^2\equiv 1 \bmod 3\Rightarrow p^{2(k+1)}\equiv 1 \bmod 3$ $p^{2k+1}=p^{2k}\cdot p\equiv(1\bmod 3)(2\bmod 3)\equiv 2\bmod 3\Rightarrow 4p^{2k+1}\equiv 8 \bmod 3\equiv 2 \bmod 3$ $6p^{2k}\equiv 0 \bmod 3$ $p^{k+1}=p^k\cdot p\equiv (2^k \bmod 3)(2\bmod 3)\Rightarrow 2p^{k+1}\equiv 2^{k+2} \bmod 3$ $8p^k\equiv 2^{k+3} \bmod 3$ Summing these we get for the numerator: $r\equiv 1-2+0+2^{k+2}-2^{k+3}+3\equiv 2^{k+2}(1-2)-1\equiv -1-2^{k+2}\bmod 3$ Now write $2^{k+2}=(3-1)^{k+2}$. So if $k=2m+1$ is odd, then $(3-1)^{k+2}=3t-1$ for some integer $t$, and we have: $r\equiv -1 -(3t-1)=-3t\equiv 0 \bmod 3$
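The congruence argument is easy to check by brute force. A small Python sketch (the helper name `A` follows the question; for the odd $p$ and odd $k$ tested here the quotient is always an integer):

```python
def A(p, k):
    num = p**(2*k + 2) - 4*p**(2*k + 1) + 6*p**(2*k) + 2*p**(k + 1) - 8*p**k + 3
    den = 2 * (p - 1)**2
    assert num % den == 0      # holds for the odd p, odd k checked below
    return num // den

# the question's sample values
print(A(5, 1), A(13, 1), A(17, 1))   # 9 73 129

# conjectured pattern: p ≡ 2 (mod 3) and k odd  ⇒  3 | A(p, k)
for p in (5, 11, 17, 23, 29):
    for k in (1, 3, 5):
        assert A(p, k) % 3 == 0
```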
{ "language": "en", "url": "https://math.stackexchange.com/questions/4191369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to make use of angle sum and difference identities to find the value of sine and cosine? Calculate: $\cos\left({5\pi\over12}\right)$ and $\cos\left({\pi\over12}\right)$ What is the easiest way to find $\cos\left({5\pi\over12}\right)$ and $\cos\left({\pi\over12}\right)$ (without a calculator)? If I know that $\frac{5\pi }{12}=\frac{\pi }{4}+\frac{\pi }{6}$ and $\frac{\pi }{12}=\frac{\pi }{3}-\frac{\pi }{4}$, then I can apply angle sum and difference identities. But how do I know $\frac{5\pi }{12}= \frac{\pi }{4}+\frac{\pi }{6}$ and $\frac{\pi }{12}= \frac{\pi }{3}-\frac{\pi }{4}$ in the first place? I know $ \frac{\pi }{4}+\frac{\pi }{6} = \frac{5\pi }{12}$ and $ \frac{\pi }{3}-\frac{\pi }{4}=\frac{\pi }{12}$ but I can't go the other way round. I gave $\frac{5\pi}{12}$ and $\frac{\pi}{12}$ as examples; I want a general method for any rational multiple of $\pi$, i.e. $\frac{\pi p}{q}$.
We know that $$\begin{align} \cos{\pi\over 4}&=\sin{\pi\over 4}={\sqrt{2}\over 2}\\ \cos{\pi\over 6}&=\sin{\pi\over 3}={\sqrt{3}\over 2}\\ \cos{\pi\over 3}&=\sin{\pi\over 6}={1\over 2}\\ \cos(x+y)&=\cos{x}\cos{y}-\sin{x}\sin{y}\\ \cos(x-y)&=\cos{x}\cos{y}+\sin{x}\sin{y}\\ \sin(x+y)&=\sin{x}\cos{y}+\cos{x}\sin{y}\\ \sin(x-y)&=\sin{x}\cos{y}-\cos{x}\sin{y}\\ \end{align}$$ With the above you should be done.
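As a quick check of the two resulting values (a small illustration using only the standard library):

```python
import math

# cos(5π/12) = cos(π/4 + π/6) = cos(π/4)cos(π/6) − sin(π/4)sin(π/6)
cos_5pi12 = (math.sqrt(2)/2) * (math.sqrt(3)/2) - (math.sqrt(2)/2) * (1/2)

# cos(π/12) = cos(π/3 − π/4) = cos(π/3)cos(π/4) + sin(π/3)sin(π/4)
cos_pi12 = (1/2) * (math.sqrt(2)/2) + (math.sqrt(3)/2) * (math.sqrt(2)/2)

print(cos_5pi12)  # (√6 − √2)/4 ≈ 0.2588
print(cos_pi12)   # (√6 + √2)/4 ≈ 0.9659
```

Both agree with `math.cos` to machine precision, confirming the decompositions $\frac{5\pi}{12}=\frac{\pi}{4}+\frac{\pi}{6}$ and $\frac{\pi}{12}=\frac{\pi}{3}-\frac{\pi}{4}$.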
{ "language": "en", "url": "https://math.stackexchange.com/questions/4191686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
The domain of $f(x)=x^{1/x}$ I've been taught that the domain of the function $f(x) = x^{1/x}$ is $x > 0$ assuming the function to be from $\mathbb{R} \to \mathbb{R}$. But $f(-1) = (-1)^{(-1)} = -1$, so why does the domain not include $x = -1$?
Generally, in calculus, you consider expressions of the form $$f(x)^{g(x)}$$ as an abbreviation for $$\mathrm e^{g(x)\ln f(x)},$$ which is only defined when $f(x)>0$. (Do you see why both expressions are equal in that case?) You are correct to say that this does not tell the whole truth, but the missing cases are generally not interesting from the point of view of calculus.
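To see the convention in action, here is a tiny Python check (illustrative only): Python's `math.log`, like the calculus convention, refuses non-positive arguments, even though $(-1)^{-1}$ makes sense on its own.

```python
import math

x = 3.0
direct = x ** (1 / x)
via_exp_log = math.exp((1 / x) * math.log(x))
# the two agree for x > 0, since x**(1/x) is interpreted as e^{(1/x) ln x}

try:
    math.exp((1 / -1.0) * math.log(-1.0))
except ValueError:
    # ln(-1) is undefined over the reals, so the convention excludes x = -1,
    # even though (-1)**(-1) = -1 is a perfectly good real number
    pass
```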
{ "language": "en", "url": "https://math.stackexchange.com/questions/4191851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Checking if limit exists Check if the limit exists: $$\large\lim_{x \to \infty} \frac{2+2x+\sin 2x}{(2x+\sin 2x)e^{\sin x}}$$ My approach to this problem was $$\large\lim_{x \to \infty} \frac{\frac{2}{x}+2+\frac{\sin 2x}{x}}{(2+\frac{\sin 2x}{x})e^{\sin x}}$$ On further simplifying: $$\large\lim_{x \to \infty} \frac{0 + 2 + 0}{(2+0)(\text{value between } \frac{1}{e} \text{ and } e)}\;\;\;\; (\text{since } \sin x \in [-1,1] \text{ as } x\to \infty)$$ which is equal to $$\large\lim_{x \to \infty} \frac{1}{\text{value between } \frac{1}{e} \text{ and } e}$$ which shows the limit exists. But the answer key says the limit does not exist. Did I make a mistake, or is the answer key wrong?
Welcome! According to my results the limit when $x\to +\infty$ doesn't exist, for the following reason: Let our function be $$\varphi (x) =\frac{2x+2+\sin (2x)}{(2x+\sin (2x))e^{\sin(x)}}$$ Well first let's simplify the function : \begin{align} \frac{2x+2+\sin (2x)}{(2x+\sin (2x))e^{\sin(x)}} &=\left(\frac{2x+\sin(2x)}{2x+\sin(2x)} +\frac{2}{2x+\sin(2x)}\right) \times \frac{1}{e^{\sin(x)}}\\ & = \frac{1}{e^{\sin(x)}}+ \frac{2}{(2x+\sin(2x))e^{\sin (x)} } \end{align} Now let's see if we can determine two functions $\psi$ and $\eta$ such that $\psi\leq \varphi\leq \eta$ : We have : $$-1\leq\sin(2x)\leq 1\Longleftrightarrow 2x-1\leq 2x+\sin(2x)\leq 2x+1$$ Hence : $$1+\frac{2}{2x+1}\leq 1+ \frac{2}{2x+\sin(2x)}\leq 1+\frac{2}{2x-1}$$ And we have : $$e^{-1}\leq e^{-\sin(x)}\leq e$$ Therefore : $$e^{-1}+\frac{2e^{-1}}{2x+1}\leq e^{-\sin(x)}\left(1+ \frac{2}{2x+\sin(2x)}\right)\leq e+\frac{2e}{2x-1}$$ And we obtained our two functions : $$\lim_{x\to +\infty} \psi (x) =e^{-1} \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \lim_{x\to +\infty} \eta(x) =e$$ Since the two bounds tend to different values, the squeeze theorem cannot be applied. That alone does not prove non-existence, but the oscillation is real: along $x_n=\frac{\pi}{2}+2n\pi$ we have $\sin x_n = 1$ and $\sin 2x_n = 0$, so $\varphi(x_n)\to e^{-1}$, while along $y_n=\frac{3\pi}{2}+2n\pi$ we have $\sin y_n = -1$ and $\sin 2y_n = 0$, so $\varphi(y_n)\to e$. Two subsequences with different limits mean there's no limit when $x\to \pm\infty$: the function is just oscillating between $e^{-1}$ and $e$. You can use Desmos to visualize this.
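As a quick numerical illustration (not needed for the argument), the bounds $e^{-1}$ and $e$ are approached along the points where $\sin x = \pm 1$; at $x = \frac{\pi}{2} + 2n\pi$ and $x = \frac{3\pi}{2} + 2n\pi$ we also have $\sin 2x = 0$, which makes the values easy to predict.

```python
import math

def f(x):
    return (2 + 2*x + math.sin(2*x)) / ((2*x + math.sin(2*x)) * math.exp(math.sin(x)))

n = 10_000
near_inv_e = f(math.pi/2 + 2*n*math.pi)    # sin x = 1  → value near 1/e
near_e     = f(3*math.pi/2 + 2*n*math.pi)  # sin x = -1 → value near e
print(near_inv_e, near_e)
```

However far out you go, the function keeps hitting values near $1/e \approx 0.368$ and near $e \approx 2.718$, so it cannot converge.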
{ "language": "en", "url": "https://math.stackexchange.com/questions/4191977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proving $\operatorname{card} \mathbb{R} = \operatorname{card} 2^{\mathbb{N}}$ *without* using Cantor-Schröder-Bernstein theorem? On math.stackexchange.com and elsewhere proofs of the equality $\operatorname{card} \mathbb{R} = \operatorname{card} 2^{\mathbb{N}}$, or equivalently the equality $\operatorname{card} \mathbb{R} = \operatorname{card} \mathcal{P}(\mathbb{N})$, abound that use the Cantor-Bernstein theorem. What is a proof that does not use that theorem? P.S. All the proofs that I previously knew, including those appearing in two undergraduate texts I authored, use CB and prove CB there, too.
This is my preferred proof, one that avoids the Cantor-Bernstein theorem. It is just a bit different from that in https://math.stackexchange.com/a/4193008/32337. Denote $\mathbb{N} \setminus \{0\}$ by $\mathbb{N}^{\ast}$. It suffices to show that $\operatorname{card} 2^{\mathbb{N}^{\ast}} = \operatorname{card}(0, 1)$. Let $C$ be the set of binary sequences $(b_{n})_{n \in \mathbb{N}^{\ast}}$ that are eventually $0$ (that is, have only finitely many terms equal to $1$) and let $B = 2^{\mathbb{N}^{\ast}} \setminus C$, the set of those with infinitely many terms equal to $1$. Since $C$ is denumerable while the interval $(0, 1]$ is uncountable, a ``Hilbert's hotel'' maneuver shows that $\operatorname{card} \bigl((0, 1] \cup C\bigr) = \operatorname{card} (0, 1] = \operatorname{card} (0, 1)$. Hence it suffices to show that $\operatorname{card} 2^{\mathbb{N}^{\ast}} = \operatorname{card} \bigl((0, 1] \cup C\bigr)$. From order-completeness of $\mathbb{R}$, for each $x \in (0, 1]$, there is a unique $(x_{n})_{n \in \mathbb{N}^{\ast}} \in B$ for which $x = \sum_{n =1}^{\infty} x_{n}/2^{n}$: the binary expansion of $x$ with infinitely many $1$s. Define the map $f \colon 2^{\mathbb{N}^{\ast}} \to (0, 1] \cup C$ as follows: If $b \in B$, then $f(b) = \sum_{i=1}^{\infty} b_{i}/2^{i}$; but if $b \in C$, then $f(b) = b$. Then $f$ is the desired bijection.
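The proof leans on an expansion-uniqueness fact: every $x \in (0, 1]$ has exactly one binary expansion containing infinitely many $1$s. A short greedy computation (my own sketch, not part of the proof) illustrates it; the digit rule keeps the remainder in $(0, 2^{-i}]$ at every step, which is what forces the tail of $1$s for dyadic rationals.

```python
from fractions import Fraction

def digits(x, n):
    """First n digits of the unique binary expansion of x in (0, 1]
    that contains infinitely many 1s: keep the remainder in (0, 2^{-i}]."""
    assert 0 < x <= 1
    r, out = Fraction(x), []
    for i in range(1, n + 1):
        if r > Fraction(1, 2**i):
            out.append(1)
            r -= Fraction(1, 2**i)
        else:
            out.append(0)
    return out

print(digits(Fraction(1, 2), 8))  # [0, 1, 1, 1, 1, 1, 1, 1] — 0.0111…, not 0.1000…
print(digits(Fraction(1, 3), 8))  # [0, 1, 0, 1, 0, 1, 0, 1]
```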
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to prove this (seemingly elusive) double binomial identity? When manipulating polynomials, I came across a double binomial identity as follows, $$ \sum_{i=0}^{n} \binom{2n+1}{2i+1} \binom{i}{k} = \binom{2n - k }{k} 2^{2 n-2 k} , $$ where $n$ and $k$ are given nonnegative integers such that $2n \geq k$. I tried to prove it but failed. I worked on this problem for two days and tried mathematical induction, evaluating $k$-th derivatives at zero, and algebraic methods, but none of them worked for me.
I haven’t made all the calculations so I may be wrong. But here’s how a proof could work: let $c_{n,k}$ be the LHS. It’s easy to see that $c_{n,k}$ is the coefficient in front of $x^n$ of the series $\sum_l{\binom{2n+1}{2l}x^l}\sum_l{\binom{l}{k}x^l}$. So $c_{n,k}$ is the coefficient in front of $x^{2n}$ of $\sum_l{\binom{2n+1}{2l}x^{2l}}\sum_l{\binom{l}{k}x^{2l}}$, which can be rewritten as $\frac{(1+x)^{2n+1}+(1-x)^{2n+1}}{2}\frac{x^{2k}}{(1-x^2)^{k+1}}$. Because of parity considerations (write $(1-x^2)^{k+1}=(1-x)^{k+1}(1+x)^{k+1}$ and use the symmetry $x\mapsto -x$), it means that $c_{n,k}$ is the coefficient in front of $x^{2n-2k}$ of $\frac{(1+x)^{2n-k}}{(1-x)^{k+1}}$. When differentiating twice, this yields the equality $(2n-2k)(2n-2k-1)c_{n,k}=(2n-k)(2n-k-1)c_{n-1,k}+2(2n-k)(k+1)c_{n,k+1}+(k+1)(k+2)c_{n+1,k+2}$. And then you can use induction on $n-k$, since $(n-1)-k <n-k,\ n-(k+1)<n-k,\ (n+1)-(k+2) < n-k$.
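Before hunting for a proof it is reassuring to confirm the identity numerically; a brute-force check over a range of $n$ and $k$ (requires Python 3.8+ for `math.comb`, which returns $0$ when $k>i$, matching the convention used above):

```python
from math import comb

def lhs(n, k):
    return sum(comb(2*n + 1, 2*i + 1) * comb(i, k) for i in range(n + 1))

def rhs(n, k):
    return comb(2*n - k, k) * 4**(n - k)

# exhaustive check for small parameters
assert all(lhs(n, k) == rhs(n, k) for n in range(12) for k in range(n + 1))
```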
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Extreme case of Swiss chess pairing algorithm Assume that $2^n$ players play an $n$-round Swiss chess tournament and White always wins. Here is the result of an experiment a fellow of Chess SE kindly did for me. He claimed that due to the floating rule no pairing problems (the same color three times in a row is forbidden) will arise, even for large $n$. (Which is also my expectation; but can at least extreme floating happen for large $n$, i.e. can two paired players have a score difference of more than one point?) Can you even express the result as a more or less closed form for given $n$? ("$n_1$ players have $p_1$ points, ...") Addendum: I assume the official FIDE algorithm, which is exactly described. See e.g. here
I think the algorithm is more fully described at https://handbook.fide.com/chapter/C0403 Those who won twice as White in the first two rounds will not be paired with each other in the third round due to them having the same absolute colour preference. Similarly for those who lost twice as Black. So in your extreme example, the third round will not match anybody with someone with the same score, and after the third round everybody will either have won 2 and lost 1, or will have won 1 and lost 2. The fourth round (and all even rounds) will be easier to match, and the fifth (and all odd rounds) will be rather like the third. After an odd number of rounds, half the players' scores will be 1 more than the other half and you can match all players with the same score in the next round; after an even number the range is 2 and you cannot match any players with the same score in the next round.
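Out of curiosity, this pattern can be reproduced with a toy simulation. The sketch below is *not* the official FIDE algorithm; it just sorts players by score and searches, with backtracking, for any pairing in which nobody receives the same colour three times in a row, then lets White win every game. All names are mine. After each odd round the scores take just two consecutive values, exactly as described above.

```python
def forced(hist):
    """Colour a player MUST take next, if the last two rounds had the same colour."""
    if len(hist) >= 2 and hist[-1] == hist[-2]:
        return 'B' if hist[-1] == 'W' else 'W'
    return None

def pair_round(order, hist):
    """Backtracking search for a legal pairing of the ranked player list."""
    if not order:
        return []
    p, rest = order[0], order[1:]
    for j, q in enumerate(rest):
        fp, fq = forced(hist[p]), forced(hist[q])
        if fp is not None and fp == fq:
            continue                      # both would need the same colour
        sub = pair_round(rest[:j] + rest[j + 1:], hist)
        if sub is not None:
            if fp == 'W' or fq == 'B':
                w, b = p, q
            elif fp == 'B' or fq == 'W':
                w, b = q, p
            else:                         # free choice: alternate p's colour
                w, b = (q, p) if (hist[p] and hist[p][-1] == 'W') else (p, q)
            return [(w, b)] + sub
    return None

def simulate(n_players, n_rounds):
    score = [0] * n_players
    hist = [[] for _ in range(n_players)]
    dists = []
    for _ in range(n_rounds):
        order = sorted(range(n_players), key=lambda p: -score[p])
        for w, b in pair_round(order, hist):
            hist[w].append('W')
            hist[b].append('B')
            score[w] += 1                 # White always wins
        dists.append(sorted(score, reverse=True))
    return dists
```

For 16 players the toy model gives score distributions $[1^8, 0^8]$ after round 1, $[2^4, 1^8, 0^4]$ after round 2, and $[2^8, 1^8]$ after round 3: the twice-White winners are forced onto Black and lose, the twice-Black losers are forced onto White and win, which collapses the range back to 1.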
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }