Error in proving the formula for the sum of squares Given the formula $$ \sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6} $$ I tried to prove it this way: $$ \sum_{k=1}^n (k^2)'=2\sum_{k=1}^n k=2(\frac{n(n+1)}{2})=n^2+n $$ $$ \int (n^2+n)\ \text d n=\frac{n^3}{3}+\frac{n^2}{2}+C $$ But $$ \frac{n^3}{3}+\frac{n^2}{2}+C $$ is not equal to $$\frac{n(n+1)(2n+1)}{6}=\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$$ Where did I make a mistake?
Here's an interesting approach using the summation of binomial coefficients. First, note that, for $b>a$, $$\sum_{i=1}^n {i+a\choose b} = {{n+a+1}\choose {b+1}}$$ and also that $$i^2=2\cdot \frac{(i+1)i}{1\cdot 2}-i=2{{i+1}\choose 2}-{i\choose 1}$$ Hence $$\begin{align} \sum_{i=1}^ni^2 &=\sum_{i=1}^n \left[2{{i+1}\choose 2}-{i\choose 1}\right]\\ &=2 \sum_{i=1}^n {{i+1}\choose 2}- \sum_{i=1}^n {i\choose 1}\\ &=2{{n+2}\choose 3}-{{n+1}\choose 2}\\ &=2\cdot \frac {(n+2)(n+1)n} {1\cdot 2\cdot 3} -\frac {(n+1)n} {1\cdot 2}\\ &=\frac {(n+1)n} 6 \cdot \left[ 2(n+2)-3 \right]\\ &={\frac 16}n(n+1)(2n+1) \end{align}$$
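A quick numerical sanity check of the identities used above (an added illustration, not part of the original answer; plain Python, standard library only):

```python
# Verify sum_{i=1}^n i^2 == 2*C(n+2,3) - C(n+1,2) == n(n+1)(2n+1)/6 for small n.
from math import comb

for n in range(1, 50):
    direct = sum(i * i for i in range(1, n + 1))
    via_binomials = 2 * comb(n + 2, 3) - comb(n + 1, 2)
    closed_form = n * (n + 1) * (2 * n + 1) // 6
    assert direct == via_binomials == closed_form, n
print("identity checked for n = 1..49")
```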
{ "language": "en", "url": "https://math.stackexchange.com/questions/890254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 2 }
Brain teaser Strategic choice... $X$ and $Y$ are playing a game. There are $11$ coins on the table and each player must pick up at least $1$ coin, but not more than $5$. The person picking up the last coin loses. $X$ starts. How many should he pick up to start to ensure a win no matter what strategy $Y$ employs?
A player facing $n$ coins can force a win iff there exists $1\le k \le 5$ such that a player facing $n-k$ coins cannot escape a loss. Clearly, $n=1$ is a lost position. Therefore $n=2, 3, 4, 5, 6$ are won positions. Therefore $n=7$ is a lost position. Therefore $n=8, 9,10,11,12$ are won positions. One readily sees that this pattern continues and a position $n$ is lost if and only if $n\equiv 1\pmod 6$. Therefore $n=11$ is a won position - and the (only) winning move consists in going to the lost position $7$.
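As an illustration (added here, not part of the original answer), the lost positions can be recomputed by dynamic programming; a minimal Python sketch using only the standard library:

```python
# lost[n] is True when the player to move, facing n coins, loses under best play.
N = 30
lost = [False] * (N + 1)
lost[1] = True                     # forced to pick up the last coin
for n in range(2, N + 1):
    # n is a winning position iff some legal move leaves the opponent in a lost position
    lost[n] = not any(lost[n - k] for k in range(1, 6) if n - k >= 1)

print([n for n in range(1, N + 1) if lost[n]])    # [1, 7, 13, 19, 25] -> n = 1 (mod 6)
print([k for k in range(1, 6) if lost[11 - k]])   # [4] -> from 11, take 4 to leave 7
```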
{ "language": "en", "url": "https://math.stackexchange.com/questions/890326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 4 }
Find polynomial whose root is sum of roots of other polynomials We have two numbers $\alpha$ and $\beta$. We know that $\alpha$ is root of polynomial $P_n(x)$ of degree $n$ and $\beta$ is root of polynomial $Q_m(x)$ of degree $m$. How do you find polynomial $R_{n m}(x)$ which has root equal to $\alpha+\beta$ without finding values of roots? All polynomials are with integer coefficients. One more question, can it be found using matrix determinant?
If you have two polynomials, say $P(x)$ and $Q(y)$ then, their Resultant, $\operatorname{Res}(P,Q)$ is the determinant of their Sylvester matrix. It equals to $$ \operatorname{Res}(P,Q)=\prod_{P(\alpha)=0, \ P(\beta)=0,}(\alpha-\beta), $$ the product of all the differences of their roots. Now if you consider the two variable polynomial $Q(z-y)$ as a polynomial in the variable $y$ (and therefore its roots are $\{z-\beta: \ Q(\beta)=0\}$) then, the resultant $\operatorname{Res}(P,Q)$ is a polynomial in the variable $z$ with roots $\alpha+\beta$ where $\alpha$ runs over the roots of $P$ and $\beta$ runs over the roots of $Q$.
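As an illustration of the resultant construction (not from the original answer), here is a small SymPy sketch; the example polynomials $P(x)=x^2-2$ and $Q(x)=x^2-3$ are my own choice, and it assumes SymPy's `resultant` function is available at the top level:

```python
from sympy import symbols, resultant, expand

y, z = symbols('y z')
P = y**2 - 2                # alpha = ±sqrt(2) are the roots of P
Q = (z - y)**2 - 3          # Q(z - y), viewed as a polynomial in y
R = expand(resultant(P, Q, y))
print(R)                    # z**4 - 10*z**2 + 1, whose roots are ±sqrt(2) ± sqrt(3)
```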
{ "language": "en", "url": "https://math.stackexchange.com/questions/890400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Simple example of $X$ with torsion in $H^1(X,\mathbb{Z})$? Question: Is there a simple example of a space $X$ possessing torsion in its first integral cohomology group $H^1(X,\mathbb{Z})$? For reasonable spaces $X$, e.g. CW-complexes, one has $H^1(X,\mathbb{Z}) \cong [X,\mathbb{T}]$, the group of homotopy classes of maps from $X$ to the circle group $\mathbb{T}$. I would be happiest if I could also see an explicit mapping $u : X \to \mathbb{T}$ whose homotopy class is torsion.
By the universal coefficient theorem $H^1(X)\cong\operatorname{Hom}(H_1(X),\Bbb Z)$ as $H_0$ is always free. Morphism groups to torsion-free groups are always torsion-free.
{ "language": "en", "url": "https://math.stackexchange.com/questions/890482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Perpendicular line that crosses specific point? I know the coordinates of line AB, I also have the coordinates of a point called C. I need to find the coordinates of the start of a line that is perpendicular to AB and that would cross point C. (Point D) Also the coordinates of point A are always (0, 0)
Given $A(0,0)$ and $B(x_B, y_B)$, we have $$\nabla \vec {AB} = \frac{y_B-y_A}{x_B-x_A} = \frac{y_B-0}{x_B-0} = \frac{y_B}{x_B} $$ and since $\nabla \vec {CD} \times \nabla \vec {AB} = -1$, given $CD\ \bot \ AB $: $$ \nabla\vec {CD} = -\frac{x_B}{y_B}$$ Hence the equation of $\vec {CD}$ would be $y-y_C = -\frac{x_B}{y_B} (x-x_C)$ for some known $C(x_C,y_C)$. Solving simultaneously, we substitute the equation for $\vec {AB}$, $\; y = \frac{y_B}{x_B}x$, and we obtain the intersection of $\vec{AB}$ and $\vec{CD}$, $D(x_D,y_D)$.
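A small computational sketch (my own reformulation, added as an illustration): since $A$ is the origin, the point $D$ is the orthogonal projection of $C$ onto the line $AB$, namely $D=\frac{C\cdot B}{B\cdot B}\,B$, which also works when $AB$ is horizontal or vertical (where the slope formulas break down).

```python
def foot_of_perpendicular(bx, by, cx, cy):
    # Projection of C onto the line through A=(0,0) and B=(bx,by); assumes B != A.
    t = (cx * bx + cy * by) / (bx * bx + by * by)
    return t * bx, t * by

print(foot_of_perpendicular(4.0, 0.0, 1.0, 3.0))   # (1.0, 0.0)
print(foot_of_perpendicular(1.0, 1.0, 0.0, 2.0))   # (1.0, 1.0)
```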
{ "language": "en", "url": "https://math.stackexchange.com/questions/890582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Can functions be defined by relations? So let us say that for whatever reasons, we are not allowed to use function symbols in first-order logic. Then can we define and use a function only by relations?
Usually functions are defined in the language of set theory. Here is a possible definition using only FOL: Let $D$ and $C$ be unary predicates. Let $F$ be a binary predicate. $F$ is said to be a functional relation with domain predicate $D$ and codomain predicate $C$ if and only if: (1) $\forall x,y:[F(x,y)\implies D(x)\land C(y)]$, (2) $\forall x:[D(x)\implies \exists y:[F(x,y)]]$, and (3) $\forall x,y_1,y_2:[F(x,y_1)\land F(x,y_2)\implies y_1=y_2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/890661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
If $ \frac {z^2 + z+ 1} {z^2 -z +1}$ is purely real then $|z|=$? If z is a complex number and $ \frac {z^2 + z+ 1} {z^2 -z +1}$ is purely real then find the value of $|z|$ . I tried to put $ \frac {z^2 + z+ 1} {z^2 -z +1} =k $ then solve for $z$ and tried to find |z|, but it gets messy and I am stuck. The answer given is |z|=1
Let's come up with a more interesting question (which might be what you intended to ask). Find an $\alpha \in \mathbb{R}$ such that $\forall z \in \mathbb{C}: |z| = \alpha \implies \dfrac{z^2+z+1}{z^2-z+1} \in \mathbb{R} \cup \{\infty\}$. I'll show that $\alpha = 1$ works. If $|z| = 1$, then $z = e^{i\theta}$ for some $\theta \in [0,2\pi)$. Thus, $\dfrac{z^2+z+1}{z^2-z+1} = \dfrac{z+1+z^{-1}}{z-1+z^{-1}} = \dfrac{e^{i\theta}+e^{-i\theta}+1}{e^{i\theta}+e^{-i\theta}-1} = \dfrac{2\cos\theta+1}{2\cos\theta-1} \in \mathbb{R} \cup \{\infty\}$, as desired. Therefore, if $|z| = 1$, then $\dfrac{z^2+z+1}{z^2-z+1}$ is either purely real or undefined.
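A numerical spot check of this claim (an added illustration, standard library only): for several points on the unit circle, the expression has negligible imaginary part, away from the poles at $z=e^{\pm i\pi/3}$.

```python
import cmath

for k in range(1, 12):
    z = cmath.exp(1j * 0.5 * k)                 # sample points on |z| = 1
    w = (z * z + z + 1) / (z * z - z + 1)
    assert abs(w.imag) < 1e-12, (z, w)
print("expression is real (to rounding error) on the unit circle")
```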
{ "language": "en", "url": "https://math.stackexchange.com/questions/890723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Prove $X_n \xrightarrow P 0$ as $n \rightarrow \infty$ iff $\lim_{n \to \infty} E(\frac{|X_n|}{|X_n|+1} )= 0$ Let $X_1, X_2, ...$ be a sequence of real-valued random variables. Prove $X_n \xrightarrow P 0$ as $n \rightarrow \infty$ iff $\lim_{n \to \infty} E(\frac{|X_n|}{|X_n|+1} )= 0$ Attempt: Suppose $X_n \xrightarrow P 0$ as $n \rightarrow \infty$. Then since $X_1, X_2, ...$ is uniformly integrable and E(|X|)<$\infty$, then $\lim_{n \to \infty} E(X_n) = E(X)$. Since $X_n \xrightarrow P 0$ as $n \rightarrow \infty$, then $\lim_{n \to \infty} E(\frac{|X_n|}{|X_n|+1} )$ must be equal to 0.
Hint: $X_n \xrightarrow P 0$ mean $\forall \epsilon >0, P(|X_n| > \epsilon) \to 0$. $\frac{|X_n|}{|X_n|+1} = \frac{|X_n|}{|X_n|+1} 1_{|X_n| > \epsilon} + \frac{|X_n|}{|X_n|+1} 1_{|X_n| \leq \epsilon} \leq 1_{|X_n| > \epsilon} + \epsilon$ $\frac{|X_n|}{|X_n|+1} = \frac{|X_n|}{|X_n|+1} 1_{|X_n| > \epsilon} + \frac{|X_n|}{|X_n|+1} 1_{|X_n| \leq \epsilon} \geq 1_{|X_n| > \epsilon} \frac{\epsilon}{1+\epsilon}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/890811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The other ways to calculate $\int_0^1\frac{\ln(1-x^2)}{x}dx$ Prove that $$\int_0^1\frac{\ln(1-x^2)}{x}dx=-\frac{\pi^2}{12}$$ without using series expansion. An easy way to calculate the above integral is using series expansion. Here is an example \begin{align} \int_0^1\frac{\ln(1-x^2)}{x}dx&=-\int_0^1\frac{1}{x}\sum_{n=1}^\infty\frac{x^{2n}}{n} dx\\ &=-\sum_{n=1}^\infty\frac{1}{n}\int_0^1x^{2n-1}dx\\ &=-\frac{1}{2}\sum_{n=1}^\infty\frac{1}{n^2}\\ &=-\frac{\pi^2}{12} \end{align} I am wondering, are there other ways to calculate the integral without using series expansion of its integrand? Any method is welcome. Thank you. (>‿◠)✌
Using the dilogarithm $\mathrm{Li}_2\;$ and the particular values for $0,1,-1\;$you get: $$\int_0^1\frac{\ln(1-x^2)}{x}dx= \int_0^1\frac{\ln(1-x)(1+x)}{x}dx= \int_0^1\frac{\ln(1+x)}{x}dx + \int_0^1\frac{\ln(1-x)}{x}dx= -\mathrm{Li}_2(-x)\Big{|}_0^1 - \mathrm{Li}_2(x)\Big{|}_0^1 =\frac{\pi^2}{12}-\frac{\pi^2}{6} = -\frac{\pi^2}{12}$$
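For completeness, a numerical confirmation of both the integral and the dilogarithm values (an added illustration; it assumes the `mpmath` library):

```python
from mpmath import mp, quad, log, polylog, pi

mp.dps = 30
I = quad(lambda x: log(1 - x**2) / x, [0, 1])   # tanh-sinh quadrature handles the log singularity at 1
print(I)                                        # -0.822467...
print(-pi**2 / 12)                              # same value
print(-polylog(2, -1) - polylog(2, 1))          # Li2(-1) = -pi^2/12, Li2(1) = pi^2/6
```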
{ "language": "en", "url": "https://math.stackexchange.com/questions/890872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 1 }
Integrate $\left[\arctan\left(x\right)/x\right]^{2}$ between $-\infty$ and $+\infty$ I have tried to calculate $$ \int_{-\infty}^{\infty}\left[\arctan\left(x\right) \over x\right]^{2}\,{\rm d}x $$ with integration by parts and that didn't work. I looked up the indefinite integral and found it contained a polylogarithm which I don't know how to use so I tried contour integration but got stuck. $${\tt\mbox{Wolfram Alpha said the answer is}}\,\,\,{\large \pi\log\left(4\right)}$$ Can anyone show me how to do this integral ?.
You can use the following way to evaluate. It is pretty neat and simple. Let $$ I(a,b)=\int_{-\infty}^\infty\frac{\arctan(ax)\arctan(bx)}{x^2}dx. $$ Clearly $I(0,b)=I(a,0)=0$ and $I(1,1)=I$. Now \begin{eqnarray} \frac{\partial^2I(a,b)}{\partial a\partial b}&=&\int_{-\infty}^\infty\frac{1}{(1+a^2x^2)(1+b^2x^2)}dx\\ &=&\frac{1}{a^2-b^2}\int_{-\infty}^\infty\left(\frac{a^2}{1+a^2x^2}-\frac{b^2}{1+b^2x^2}\right)dx\\ &=&\frac{1}{b^2-a^2}\pi(a-b)\\ &=&\frac{\pi}{a+b}. \end{eqnarray} Hence $$ I=\int_0^1\int_0^1\frac{\pi}{a+b}dadb=2\pi\ln2.$$
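A numerical check of the result (an added illustration, assuming the `mpmath` library): the integrand is even, so twice the integral over $[0,\infty)$ is computed, with the removable singularity at $0$ patched by hand.

```python
from mpmath import mp, quad, atan, log, pi, inf, mpf

mp.dps = 25
f = lambda x: (atan(x) / x)**2 if x != 0 else mpf(1)
I = 2 * quad(f, [0, 1, inf])        # split at 1 to help the quadrature
print(I)                            # 4.35517218...
print(2 * pi * log(2))              # pi*log(4), the same value
```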
{ "language": "en", "url": "https://math.stackexchange.com/questions/890965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 2 }
Ratios as Fractions I’m having trouble understanding how fractions relate to ratios. A ratio like 3:5 isn’t directly related to the fraction 3/5, is it? I see how that ratio could be expressed in terms of the two fractions 3/8 and 5/8, but 3/5 doesn’t seem to relate (or be useful) when considering a ratio of 3:5. Many textbooks I’ve seen, when introducing the topic of ratios, say something along the lines of “3:5 can be expressed in many ways, it can be expressed directly in words as ‘3 parts to 5 parts’, or it can be expressed as a fraction 3/5, or it can be…” and so on. Some textbooks will clarify that 3/5, when used this way, isn’t “really” a fraction, its just representing a ratio. This makes absolutely no sense to me. Why express 3:5 as 3/5 at all?
In general I would not compare ratios with fractions, because they are different things. As rschwieb mentioned in the comments: when you have apples, lemons and oranges, you can say that the ratio is $3:4:5$. The only "link" you can make to fractions is that you can now say that this is the same as saying the ratio is $\frac35:\frac45:1$. From which you can very easily see that there are $\frac35$ as many apples as there are oranges. You see, the fraction is used to express a relation between two "elements" of your whole whatever. The key thing is this: when you have a bowl with $3$ oranges and $5$ apples, the ratio $3:5$ gives you the relation between the amount of oranges and the amount of apples. And so does the number $\frac35$ within this context. Confusion arises from the fact that we are trained to immediately conclude that a fraction always gives you a relation between the whole of something and a part of the whole. This is not always the case though, as you can see from the ratio story. N.B. I would like to add that I personally feel that someone or some method that is teaching mathematics should avoid the comparison of ratios with fractions, because it generally leads to confusion, as it did for the OP. Someone in the process of learning ratios is very likely not ready to be confronted with such a subtle, yet major difference in the application of fractions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How to make this bet fair? A person bets $1$ dollar to $b$ dollars that he can draw two cards from an ordinary deck of cards without replacement and that they will be of the same suit. How to find the value of $b$ so that the bet will be fair? My effort: There are a total number of ${52 \choose 2} = 26 \cdot 51$ ways of drawing two cards out of a deck of $52$. And, there are ${13 \choose 2} = 13 \cdot 6 $ ways of drawing two cards of any given suit, say, hearts. Now since there are four distinct suits, the number of ways of drawing two cards of the same suit is $13 \cdot 6 \cdot 4$. So the probability of drawing two cards of the same suit (assuming that the deck is well-shuffled so that each card is equally likely to be drawn) is $$ \frac{13 \cdot 6 \cdot 4}{26 \cdot 51} = \frac{4}{17}.$$ Is it correct? And if so, then what next?
It is correct. So the probability of getting different suits is $\dfrac{13}{17}$. So the odds should be $\dfrac{4}{17} : \dfrac{13}{17}$ or $4 : 13$ or $1:3.25$. There are other ways of getting the same result. For example, given the first card, there are $12$ other cards of the same suit and $39$ cards of other suits, making the odds $12:39$.
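To make the "what next" step concrete, here is an exact-arithmetic sketch (an added illustration, standard library only): a fair bet means the expected gain is zero, so $b\cdot\frac{4}{17}-1\cdot\frac{13}{17}=0$, i.e. $b=\frac{13}{4}$.

```python
from fractions import Fraction

p_same = Fraction(12, 51)            # after the first card, 12 of the remaining 51 share its suit
assert p_same == Fraction(4, 17)
b = (1 - p_same) / p_same            # solve b*P(win) - 1*P(lose) = 0
print(b)                             # 13/4, i.e. odds of 1 : 3.25
```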
{ "language": "en", "url": "https://math.stackexchange.com/questions/891135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating a limit using L'Hôpital's rule I know that it can also be evaluated using a Taylor expansion, but I intentionally want to solve it using L'Hôpital's rule: $$ \lim\limits_{x\to 0} \left(\frac{\sin x}{x}\right)^{\frac{1}{1-\cos x}} = \lim\limits_{x\to 0}\exp\left( \frac{\ln(\frac{\sin x}{x})}{1-\cos x} \right)$$ Now, from continuity and L'Hôpital's rule: $$\lim\limits_{x\to 0} \frac{\ln(\frac{\sin x}{x})}{1-\cos x} = \lim\limits_{x\to 0} \frac{\frac{x}{\sin x}\cdot\frac{x\cos x - \sin x}{x^2}}{\sin x} = \lim\limits_{x\to 0}\frac{\frac{x\cos x - \sin x}{x\sin x}}{\sin x}$$ This is where I got stuck. If I'm not mistaken the limit is $-\frac{1}{3}$, so the original one is $e^{-\frac{1}{3}}$. What should I do differently (or what is wrong with my calculation)? Thanks
After rearranging the quotient a bit we can apply L'hopitals rules $2$ more times to get the answer: $$\lim_{x\to0}\frac{x\cos(x)-\sin(x)}{x\sin^{2}(x)}\underbrace{=}_{\text{l'hopital}}\lim_{x\to0}\frac{-x\sin(x)}{\sin^{2}(x)+2x\sin(x)\cos(x)}=\lim_{x\to0}\frac{-x}{\sin(x)+2x\cos(x)}$$ $$\underbrace{=}_{\text{l'hopital}}\lim_{x\to0}\frac{-1}{3\cos(x)-2x\sin(x)}=\frac{-1}{3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/891242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Construction of an equilateral triangle from two equilateral triangles with a shared vertex Problem Given that $\triangle ABC$ and $\triangle CDE$ are both equilateral triangles. Connect $AE$, $BE$ to get segments, take the midpoint of $BE$ as $O$, connect $AO$ and extend $AO$ to $F$ where $|BF|=|AE|$. How to prove that $\triangle BDF$ is an equilateral triangle? Attempt: (1) I've noticed that $\triangle BCD \cong \triangle ACE$ so that $|AE|=|BD|$, but I was stuck when proving $\triangle AOE \cong \triangle FOB$. (2) Even though I assume that $ABFE$ is a parallelogram, I can neither prove that one of the three angles of $\triangle BDF$ is 60 degrees nor prove $|DB|=|DF|$ through proving $\triangle DEF \cong \triangle DCB$ (which I think is right). Could anybody give me a hand?
Embed the construction in the complex plane. Let $\omega=\exp\left(\frac{\pi i}{3}\right),B=0,C=1,E=1+v$. Then $A=\omega$ and $D=1+\omega v$, hence $F=B+E-A$ implies: $$ F = 1-\omega+v,$$ hence: $$ \omega F = \omega -\omega^2 + \omega v = 1+\omega v = D, $$ so $BFD$ is equilateral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Constructing two tangents to the given circle from the point A not on it I'm trying to complete Level 21 from euclid the game: http://euclidthegame.com/Level21/ The goal is to construct two tangents to the given circle from the point A not on it. So far I've figured that the segments from B to the tangent points must be equal. And of course the triangles AB[tangent point] have right angles at the tangent points. I'm not seeing how I can find those tangent points; a hint for a good step would be appreciated! I would rather have a hint in the sense of "this is a good step because of this and that" than just being told what I need to do. I could find those steps without explanation everywhere on the internet if I wanted that.
If a line from $A$ intersects a circumference in two points $P$ and $Q$ then it holds that $AP·AQ = {AT}^2$ where $T$ is a point of the circumference such that $\overline{AT}$ is tangent to the circumference. This is called the power of a point. Create such a line (for instance $\overline{AB}$). Name the points of intersection with the circumference $P$ and $Q$, where $P$ is the closest to $A$. We want $\sqrt{AP·AQ}$. You can read about the square root of the product of two segments here. I'll add the method to construct it: Construct the circle centered on $A$ with radius $AP$ so it intersects the line $\overline{AB}$ at $P'$ (take $P'$ on the opposite side of $A$ from $Q$, so that $A$ lies between $P'$ and $Q$). Find the midpoint of $P'Q$, name it $M$, and construct a circle centered on $M$ with radius $MQ$. Perpendicular to $\overline{AB}$, construct a line through $A$ that intersects the last circle at $R$ and $R'$. Then construct a circle centered on $A$ with radius $AR$ so that it intersects the circumference at $T$ and $T'$. Construct the lines $\overline{AT}$ and $\overline{AT'}$. Those are tangent to the circumference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A math line interpretation From the text of the question posed here: "How many whole numbers less than 2010 have exactly three factors?" this statement is made: If there is no fourth factor, then that third factor must be the square root of the number. Furthermore, that third factor must be a prime, or there would be more factors. I don't understand this. Can you explain please.
It is easier than the other answers suggest. If $k|n$ then also $\frac nk|n$, so the divisors of $n$ come in pairs unless $k=\cfrac nk$, or equivalently $n=k^2$. So the only numbers which have an odd number of divisors are squares. Every positive integer $n$ greater than $1$ has the two divisors $1$ and $n$. A positive integer with precisely three divisors must be a square $n=k^2$, which has divisors $1, k, n$. If $p|k$ then $p|n$ and $p$ is a further divisor unless $p=k$. So $n$ must be the square of a prime. More generally, you may want to show that the number of divisors of $$n=p_1^{n_1}\cdot p_2^{n_2}\cdot p_3^{n_3} \dots p_r^{n_r}$$ (where this is the prime factorisation of $n$ into powers of distinct primes) is $$(n_1+1)(n_2+1)(n_3+1)\dots (n_r+1)$$
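A brute-force check of the original question's count (an added illustration, standard library only); the numbers below 2010 with exactly three divisors are precisely the squares of the primes up to 43:

```python
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

three = [n for n in range(1, 2010) if num_divisors(n) == 3]
print(three)        # [4, 9, 25, 49, 121, 169, 289, 361, 529, 841, 961, 1369, 1681, 1849]
print(len(three))   # 14
```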
{ "language": "en", "url": "https://math.stackexchange.com/questions/891474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Evaluate Left And Right Limits Of $f(x)=\frac{x}{\sqrt{1-\cos2x}}$ At $0$ Evaluate Left And Right Limits Of $f(x)=\frac{x}{\sqrt{1-\cos2x}}$ At $0$ The graph of $f(x)=\frac{x}{\sqrt{1-\cos2x}}$ appears to have a jump discontinuity at $0$ and I want to calculate the left and right limits of $f(x)$ to show there is a discontinuity at $0$. I can't figure out how to manipulate the function in order to give different left and right limits. Here's one of my attempts at trying to manipulate the function into something more familiar to me: $\lim_{x \to 0}\frac{x}{\sqrt{1-\cos2x}}$ (divide numerator and denominator by $(2x)^2)$ $ =\lim_{x \to 0}\frac{\frac{x}{4x^2}}{\sqrt{\frac{-(\cos2x-1)}{2x}}}$ $= \lim_{x \to 0}\frac{\frac{1}{4x}}{\sqrt{\frac{-(\cos2x-1)}{2x}}}$ Now I was thinking that I can apply $\lim_{x \to 0}\frac{\cos(\theta)-1}{\theta} =0$ but it doesn't help me at all. Any ideas?
A start: Use $\cos 2x=1-2\sin^2 x$. One needs to be careful when finding the square root of $2\sin^2 x$. It is $\sqrt{2}|\sin x|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that: If $X$ is a topological space and $A$ and $B$ are two subsets of $X$ then,$Cl(A) \cup Cl(B) = Cl(A \cup B) $ Prove that: If $X$ is a topological space and $A$ and $B$ are two subsets of $X$ then, $Cl(A) \cup Cl(B) = Cl(A \cup B)$, where $Cl(H)$ means the closure of the subset $H$ of $X$. I was able to prove that $Cl(A) \cup Cl(B) \subset Cl(A \cup B) $. I don't know how to prove the other way. The problem is: If $x\in Cl(A \cup B) $ then every neighborhood $U$ of $x$ intersects $A \cup B$, so every neighborhood $U$ of $x$ intersects either $A$ or $B$ or both. But it may happen that some neighborhoods intersect $A$ but not $B$ and the rest of them intersect $B$ but not $A$, in which case we get $Cl(A \cup B) \not\subset Cl(A) \cup Cl(B)$. How to proceed? This problem is from Munkres' Topology.
A very useful lemma here is the fact that $Cl(A)$ is the minimal closed set containing $A$ (Verify for yourself!). Assume for the sake of contradiction that the two sets are not equal. We find that, just as Michael said, $Cl(A) \cup Cl(B)$ is closed. In addition, we must have $A \cup B \subset Cl(A) \cup Cl(B)$. Combining this with $Cl(A) \cup Cl(B) \subset Cl(A \cup B)$ gives us the desired result since otherwise we arrive at a contradiction due to the lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Converges Or Diverges: $\sum _{n=1}^{\infty }\:e^{-\sqrt{n}}$ Converges Or Diverges: Attempt: $$\int_1^\infty e^{-\sqrt{n}}\,dn,\qquad t = -\sqrt{n},\quad dt = -\frac{dn}{2\sqrt{n}},$$ $$\int \frac{-2\sqrt{n}}{-2\sqrt{n}}\, e^{-\sqrt{n}}\,dn = \int 2t\,e^{t}\,dt.$$ Integrating by parts with $u = 2t$, $du = 2\,dt$, $dv = e^t\,dt$, $v = e^t$: $$\int 2t\,e^t\,dt = 2t\,e^t - \int 2e^t\,dt = 2e^t(t-1) = 2e^{-\sqrt{n}}(-\sqrt{n}-1) = -2e^{-\sqrt{n}}(\sqrt{n}+1).$$ Then $$\lim_{n\to\infty}\left[-2e^{-\sqrt{n}}(\sqrt{n}+1)\right] = \lim_{n\to\infty}\frac{-2\,\frac{d}{dn}(\sqrt{n}+1)}{\frac{d}{dn}\,e^{\sqrt{n}}} = \lim_{n\to\infty}-2e^{-\sqrt{n}} = 0.$$
Converges by comparison with $~\displaystyle\sum_{n=1}^\infty e^{-2\ln n}~=~\sum_{n=1}^\infty\frac1{n^2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/891746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
2 of 3 dice are selected randomly and thrown. What is the probability that one of the dice shows 6 1 red die with faces labelled 1, 2, 3, 4, 5, 6. 2 green dice labelled 0, 0, 1, 1, 2, 2. Answer: 1/9 Please can you show me how to get the answer. I'm confused about joining the events of choosing 2 of 3 dice vs. getting the probability that one of the dice chosen will get a 6 when rolled. Note: There is an equi-probable chance of getting any of the six sides on a given die.
Out of the three possible choosings, two contain the die with a $6$. (Chance $2/3$) If the die is selected, then there is a chance in six to get a six. (Chance $1/6$) The total chance is: $$\frac{2}{3}·\frac{1}{6} = \frac{2}{18} = \frac{1}{9}$$
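An exact enumeration mirroring the argument above (an added illustration, standard library only); the die face lists are taken from the question:

```python
from fractions import Fraction
from itertools import combinations, product

print(Fraction(2, 3) * Fraction(1, 6))            # 1/9

dice = [[1, 2, 3, 4, 5, 6], [0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 2]]
hits = total = 0
for i, j in combinations(range(3), 2):            # the 3 equally likely choices of 2 dice
    for faces in product(dice[i], dice[j]):
        total += 1
        hits += 6 in faces
print(Fraction(hits, total))                      # 12/108 = 1/9
```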
{ "language": "en", "url": "https://math.stackexchange.com/questions/891823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Limit of differences of truncated series and integrals give Euler-gamma, zeta and logs. Why? In the MSE-question, in a comment to an answer, Michael Hardy brought up the following well-known limit expression for the Euler-gamma $$ \lim_{n \to \infty} \left(\sum_{k=1}^n \frac 1k\right) - \left(\int_{t=1}^n \frac 1t dt\right) = \gamma \tag 1$$ I've tried some variations, and heuristically I found for small integer $m \gt 1$ $$ \lim_{n \to \infty} (\sum_{k=1}^n \frac 1{k^m}) - (\int_{t=1}^n \frac 1{t^m} dt) = \zeta(m) - \frac 1{m-1} \tag 2$$ With more generalization to real $m$ it seems by Pari/GP that eq (1) can be seen as a limit for $m \to 1$ and the Euler-$\gamma$ can be seen as the result for the Stieltjes-power-series representation for $\zeta(1+x)$ with the $\frac 1{1-(1+x)}$-term removed and then evaluated at $x=0$. Q1: Is there any intuitive explanation for this (or, for instance, a graphical demonstration)? Another generalization gave heuristically also more funny hypotheses: $$ \tag 3$$ $$ \small \begin{eqnarray} \lim_{n \to \infty} (\sum_{k=2}^n \frac 1{k(k-1)}) &-& (\int_{t=2}^n \frac 1{t(t-1)} dt) &=& \frac 1{1!} \cdot(\frac 11 - 1\cdot \log(2)) \\ \lim_{n \to \infty} (\sum_{k=3}^n \frac 1{k(k-1)(k-2)}) &-& (\int_{t=3}^n \frac 1{t(t-1)(t-2)} dt) &=& \frac 1{2!} \cdot(\frac 12 - 2\cdot \log(2) + 1\cdot \log(3) ) \\ \lim_{n \to \infty} (\sum_{k=4}^n \frac 1{k...(k-3)}) &-& (\int_{t=4}^n \frac 1{t...(t-3)} dt) &=& \frac 1{3!} \cdot(\frac 13 - 3\cdot \log(2) + 3\cdot \log(3)- 1\cdot \log(4) ) \\ \end{eqnarray} $$ where the coefficients on the rhs are the binomial coefficients, and I think the scheme is obvious enough for continuation ad libitum. Again it might be possible to express this with more limits: we could possibly write, for instance, the rhs in the third row as $$ \lim_{h\to 0} \frac 1{3!} \cdot(- \small \binom{3}{-1+h} \cdot \log(0+h) +1 \cdot \log(1) - 3\cdot \log(2) + 3\cdot \log(3)- 1\cdot \log(4) ) \tag 4$$ Q2: Is (3) true, and how does one prove it (if it is not too complicated...)? And is (4) somehow meaningful?
For Q1. the proof just relies on summation by parts. For Q2., you can evaluate $$S_k = \sum_{n=1}^{+\infty}\frac{1}{n(n+1)\ldots(n+k)} = \frac{1}{k!}\sum_{n=1}^{+\infty}\frac{1}{n\binom{n+k}{k}}$$ by exploiting partial fractions decomposition and the residue theorem, or just the wonderful telescoping trick $\frac{1}{n(n+k)}=\frac{1}{k}\left(\frac{1}{n}-\frac{1}{n+k}\right)$, giving: $$\begin{eqnarray*}S_k &=& \frac{1}{k}\left(\sum_{n=1}^{+\infty}\frac{1}{n(n+1)\ldots(n+k-1)}-\sum_{n=1}^{+\infty}\frac{1}{(n+1)(n+2)(n+k)}\right)\\ &=&\frac{1}{k}\cdot\frac{1}{1\cdot 2\cdot\ldots\cdot k}=\frac{1}{k\cdot k!}.\end{eqnarray*}$$ The same telescoping technique applies to the integral: $$I_k = \int_{1}^{+\infty}\frac{dt}{t(t+1)\ldots(t+k)}=\frac{1}{k}\int_{0}^{1}\frac{dt}{(t+1)\ldots(t+k)}$$ and now the RHS can be evaluated through partial fraction decomposition, since: $$\frac{1}{(t+1)\ldots(t+m)}=\frac{1}{(m-1)!}\sum_{j=0}^{m-1}\frac{(-1)^j\binom{m-1}{j}}{t+j+1}.$$ We have $\int_{0}^{1}\frac{dt}{t+h}=\log(h+1)-\log(h)=\log\left(1+\frac{1}{h}\right)$, hence: $$\begin{eqnarray*}I_k &=& \frac{1}{k(k-1)!}\sum_{j=0}^{k-1}(-1)^j\binom{k-1}{j}\left(\log(j+2)-\log(j+1)\right)\end{eqnarray*}$$ just gives your $(3)$ after rearranging terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/891918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proving or disproving inequality $ \frac{xy}{z} + \frac{yz}{x} + \frac{zx}{y} \ge x + y + z $ Given that $ x, y, z \in \mathbb{R}^{+}$, prove or disprove the inequality $$ \dfrac{xy}{z} + \dfrac{yz}{x} + \dfrac{zx}{y} \ge x + y + z $$ I have rearranged the above to: $$ x^2y(y - z) + y^2z(z - x) + z^2x(x - y) \ge 0 \\ \text{and, } \dfrac{1}{x^2} + \dfrac{1}{y^2} + \dfrac{1}{z^2} \ge \dfrac{1}{xy} + \dfrac{1}{xz} + \dfrac{1}{yz} $$ What now? I thought of making use of the arithmetic and geometric mean properties: $$ \dfrac{x^2 + y^2 + z^2}{3} \ge \sqrt[3]{(xyz)^2} \\ \text{and, } \dfrac{x + y + z}{3} \ge \sqrt[3]{xyz} $$ but I am not sure how, or whether that'd help me at all.
The Cauchy–Schwarz inequality (the $p=q=2$ case of Hölder's inequality): $$u\cdot v \leq |u||v|$$ for any vectors $u,v$. Let $\mathbf u=(1/x,1/y,1/z)$ and $\mathbf v=(xz,xy,yz)$. Then show $|\mathbf v| = xyz |\mathbf u|$ and thus $$|\mathbf u||\mathbf v| = xyz\left(\frac{1}{x^2}+\frac{1}{y^2}+\frac{1}{z^2}\right)= \dfrac{xy}{z} + \dfrac{yz}{x} + \dfrac{zx}{y} $$ and: $$\mathbf u\cdot\mathbf v = x+y+z $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/891997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Evaluating $\int \tan^{1/3}(\theta) d \theta$ I know that $\int \tan^{1/3}\theta d \theta$ is non-integrable, but Wolfram Alpha does it easily. Can anyone explain how a function can be non-integrable? My argument is that there has to be a function which represents the area under a graph; it's just that we do not know it. Please shed a bit of light on it. Also, how can we solve $\int \tan^{1/3}\theta d \theta$?
I will just leave here the exact solution.\begin{align} \int (\tan x)^\frac13\,dx=&\frac14 \bigg[-2\sqrt{3} \tan^{-1}\left(\sqrt{3}-2(\tan x)^\frac13 \right)-2\sqrt{3}\tan^{-1}\left(\sqrt{3}+2(\tan x)^\frac13 \right)\\ &-2\log\left(\tan^\frac23x+1\right)+\log\left(\tan^\frac23x -\sqrt{3}(\tan x)^\frac13+1 \right)\\ &\hspace{5mm}+\log\left(\tan^\frac23x+\sqrt{3}(\tan x)^\frac13+1 \right) \bigg]. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/892087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How can I find a $k$ and a $n_0$? Find $k$ such that $$(\lg n)^{\lg n}= \Theta (n^k), k \geq 2$$ That's what I did so far: $$(\lg n)^{\lg n}=\Theta(n^k) \text{ means that } \exists c_1,c_2>0 \text{ and } n_0 \geq 1 \text{ such that } \forall n \geq n_0: \\ c_1 n^k \leq (\lg n)^{\lg n} \leq c_2n^k$$ How can I continue?
I find that $\ln n$ is a gross exponent, and so I want to simplify the expression. You're right when you say we want to find conditions so that $$ c_1 n^k < \ln n ^{\ln n} < c_2 n^k,$$ but if we take logs everywhere, then this is the same as finding conditions so that $$\ln c_1 + k \ln n < \ln n (\ln \ln n) < \ln c_2 + k \ln n.$$ Now we run into a problem. For any finite $k$, eventually $\ln \ln n$ is going to grow much bigger than $k$. Arbitrarily bigger, even. So we will never find conditions so that $\ln n (\ln \ln n) < \ln c_2 + k \ln n$. We are forced to conclude that $(\ln n) ^ {\ln n}$ is not $\Theta (n^k)$ for any $k$. More interestingly, even though logs are small, exponential growth is huge. However, since it won't win until $\ln \ln n \gg k$, it will take a really, really long time for $(\ln n) ^ {\ln n}$ to catch up to the initial polynomial growth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/892160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is there a geometric interpretation for lower order terms in sum of squares formula? So we know that $\sum_{k=1}^n k^2 = n(n+1)(2n+1)/6$ and if we think about making a square layer out of $n^2$ unit cubes, and then placing a square layer of $(n-1)^2$ unit cubes on top of the first layer centered upon the center, and so forth, we can build a chunky square pyramid of $\sum_{k=1}^n k^2$ unit cubes that has height $n$ and base side length $n$. As $n$ gets large, we can think of this as a Riemann sum for the volume of an ordinary square pyramid of height $n$ and base side length $n$, which has volume $n^3/3$. And indeed the leading term of $n(n+1)(2n+1)/6$ is $n^3/3$. So I was wondering, does the quadratic (and or linear) term in $n(n+1)(2n+1)/6$ have any geometric interpretation in terms of the ordinary square pyramid, e.g. being a multiple of surface area of the triangular sides or something?
You want to talk about $$\frac{n(n+1)(2n+1)}6=\frac{n^3}3+\frac{n^2}2+\frac n6$$ I'd prefer to think of the pyramid layers not as centered on one another, but aligned in one corner of the plane. That way they form a regular integer grid, which makes things a bit easier for me. The layer $k$ is represented by the square $0\le x,y\le k;\;k-1<z<k$, so the $z$ axis is going from the tip towards the base. Your argument about the pyramid value still holds. The volume covered by the pyramid $0\le x,y\le z\le n$ is equal to $\frac{n^3}3$ since its base is $n^2$ and its height is $n$. But your cubes are protruding beyond that. Let's look at the edges first. In each layer, the outer edge of the square consists of a series of $k-1$ cubes which are half inside and half outside the volume of the pyramid you have already accounted for. The corner cube is a bit more tricky, so we might want to have a closer look at that later. There are two edges to each layer, so multiplying the volume of $\frac12$ per cube by the total number of half-accounted cubes, you get $$\frac12\sum_{k=1}^n 2(k-1)=\frac{n^2-n}2$$ Now about the corner cubes of each layer. They are all the same, so we'll take the simplest one to imagine, namely the one from the $k=1$ layer. The volume accounted for by the pyramid volume is $\frac13$, but its actual volume is $1$. So you'd add $\frac23$ for each such cube, and there is one such cube in each layer, so you have to add $\frac23n$ to the sum and obtain the final result: $$\frac{n^3}3+\frac{n^2-n}2+\frac{2n}3=\frac{n^3}3+\frac{n^2}2+\frac n6$$ If you want to do this strictly one degree after the other, then you'd include the corner cubes into the consideration of the edge cubes, turning that term into $\frac{n^2}2$, and in the last step only consider that part of the edge cubes which hasn't been accounted for by either pyramid volume or edge volume. But you'd have to be careful of how many times you count each part of them in this case, since the edge corrections would overlap. That's the reason I find the above to be simpler. So if you look at the described cube arrangement from the top (i.e. the $-z$ direction), then the cubic term is associated with the inner volume of the continuous pyramid, the square term is associated with the edge cubes which (almost) cover the square if you project them down to the base plane, and the linear term is associated with the edge cubes which form a (diagonal) line in that projection. The coefficients are the correction terms which express how much of each group hasn't been accounted for by the higher orders.
{ "language": "en", "url": "https://math.stackexchange.com/questions/892259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to solve this IVP? Could you please help me solve this IVP? A certain population grows according to the differential equation: $$\frac{\mathrm{d}P}{\mathrm{d}t} = \frac{P}{20}\left(1 − \frac{P}{4000}\right) $$ and the initial condition $P(0) = 1000$. What is the size of the population at time $t = 10$? The answer is $$P(10)=\frac{4000}{(1 + 3e^{-1/2})}$$ but I can't seem to get it. I have tried to bring all the right-hand-side terms over to integrate with respect to $P$. But I end up getting stuck with this equation $$20\ln{(P)} + \frac{80000}{P} = 90 + 20\ln{(1000)}$$ and don't know how to isolate $P$... Thanks so much! :)
$\textbf{Given}$ $\dfrac{dP}{dt} = \dfrac{P}{20}\left(1-\dfrac{P}{4000}\right)$, $P(0) = 1000$. $\textbf{Find}$ $P(10)$ $\textbf{Analysis}$ $\dfrac{dP}{dt} = \dfrac{P}{20}\left(1-\dfrac{P}{4000}\right)$. First divide out $P\left(1-\dfrac{P}{4000}\right)$ from the RHS and multiply $dt$ over from the LHS: $$ \dfrac{dP}{P(1-P/4000)} = \dfrac{dt}{20}. \hspace{3in} (1) $$ Next use partial fraction decomposition on the LHS: \begin{align*} \dfrac{dP}{P(1-P/4000)} & = \dfrac{A\ dP}{P} + \dfrac{B\ dP}{(1-P/4000)} \\ & \Rightarrow 1 = A\left(1-\dfrac{P}{4000}\right) + BP\\ & \Rightarrow \left\{\begin{array}{lr} A = 1\\ B = 1/4000 \end{array}\right. \end{align*} The solutions for $A,B$ come from equating coefficients. Hence, we may rewrite equation (1) as $$ \dfrac{dP}{P} + \dfrac{dP}{4000-P} = \dfrac{dt}{20}. $$ Integrate both sides $$ \int \left(\dfrac{1}{P} + \dfrac{1}{4000-P}\right)dP = \dfrac{1}{20}\int dt, $$ $$ \ln\left(\dfrac{P}{4000-P}\right) = \dfrac{t}{20} + C. $$ Exponentiate both sides $$ \dfrac{P}{4000-P} = Ke^{t/20} \hspace{1cm} \Rightarrow \hspace{1cm} P = (4000-P)Ke^{t/20} $$ so that $$ P = 4000\dfrac{Ke^{t/20}}{1 + Ke^{t/20}}. $$ Now, the initial condition is $P(0) = 1000$, thus $$ P(0) \equiv 1000 = 4000\dfrac{K}{1 + K} \hspace{1cm} \Rightarrow \hspace{1cm} K=\dfrac{1}{3}. $$ So $P(t)$ is given explicitly by $$ P(t) = \dfrac{4000}{1+3e^{-t/20}}, $$ and $P(10) = \dfrac{4000}{1+3e^{-1/2}}$.
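A numerical cross-check of the closed form (an added illustration; it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, P: P / 20 * (1 - P / 4000), (0, 10), [1000.0],
                rtol=1e-10, atol=1e-10)
print(sol.y[0, -1])                      # ~1418.6
print(4000 / (1 + 3 * np.exp(-0.5)))     # the closed form gives the same value
```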
{ "language": "en", "url": "https://math.stackexchange.com/questions/892321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How do I determine if I should submit a sequence of numbers to the OEIS? When I search a sequence in the OEIS and it's not there, it gives me a message saying "If your sequence is of general interest, please submit it using the form provided and it will (probably) be added to the OEIS!" But I'm not always sure if I should. It's clear in cases when I made a mistake, I correct my mistake and the right sequence comes up; the OEIS does not need my erroneous version. But in other cases, it looks like my computations are totally correct and the sequence is not in the OEIS, but I'm not sure it's really worth adding, and that thing about "general interest" sounds kind of vague, plus there are some sequences in the OEIS that seem like they're of interest only to a very small group of specialists. What criteria would you suggest to apply to a particular sequence to help me decide about submitting it one way or the other? P.S. Something that occasionally happens is that it will tell me something like "Your sequence appears to be: $+ 20 x + 3 $. (I searched for "23,43,63,83,103,123,143").
I would submit any sequence related to typical analytic maneuvers and forms that lie in the intersection of diverse fields of mathematics. For example, in dynamical systems, nonlinear PDEs, complex calculus, and differential geometry, the Schwarzian derivative frequently pops up and can be represented quite simply in terms of a convolution of the Faber polynomials (cf. OEIS A263646). In algebraic geometry, special functions, group theory, differential topology, and operator calculus, the various classes of symmetrc polynomials play special roles, so any array related to non-trivial transformations between the classes should be submitted (cf. A036039).
{ "language": "en", "url": "https://math.stackexchange.com/questions/892412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Solving exponential equation $e^{x^2+4x-7}(6x^2+12x+3)=0$ How would you find $x$ in: $e^{x^2+4x-7}(6x^2+12x+3)=0$ I don't know where to begin. Can you do the following? $e^{x^2+4x-7}=1/(6x^2+12x+3)$ and then find $ln$ for both sides?
$e^{x^2+4x-7}(6x^2+12x+3)=0 \Rightarrow e^{x^2+4x-7}=0 \text{ or } \ 6x^2+12x+3=0$ $$\text{It is known that } e^{x^2+4x-7} \text{ is non-zero }$$ therefore,you have to solve : $$6x^2+12x+3=0$$ The solutions are: $$x=-1-\frac{1}{\sqrt{2}} \\ x=-1+\frac{1}{\sqrt{2}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/892484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's $\sum_{k=0}^n\binom{n}{2k}$? How do you calculate $\displaystyle \sum_{k=0}^n\binom{n}{2k}$? And doesn't the sum terminate when 2k exceeds n, so the upper bound should be less than n? EDIT: I don't understand the negging. Have I violated a rule of conduct or something?
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ $\ds{\sum_{k = 0}^{n}{n \choose 2k}:\ {\large ?}}$ $$ \mbox{Note that}\quad\sum_{k = 0}^{n}{n \choose 2k} =\sum_{k = 0}^{\color{#c00000}{\large\infty}}{n \choose 2k}. $$ $$ \mbox{We'll use the identity}\quad {m \choose s}=\oint_{0\ <\ \verts{z}\ =\ a} {\pars{1 + z}^{m} \over z^{s + 1}}\,{\dd z \over 2\pi\ic}\,,\quad s \in {\mathbb N} $$ Then, \begin{align} &\color{#66f}{\large\sum_{k = 0}^{n}{n \choose 2k}} =\sum_{k = 0}^{\infty}\oint_{\verts{z}\ =\ a\ >\ 1} {\pars{1 + z}^{n} \over z^{2k + 1}}\,{\dd z \over 2\pi\ic} =\oint_{\verts{z}\ =\ a\ >\ 1}{\pars{1 + z}^{n} \over z} \sum_{k = 0}^{\infty}\pars{1 \over z^{2}}^{k}\,{\dd z \over 2\pi\ic} \\[3mm]&=\oint_{\verts{z}\ =\ a\ >\ 1}{\pars{1 + z}^{n} \over z} {1 \over 1 - 1/z^{2}}\,{\dd z \over 2\pi\ic} =\oint_{\verts{z}\ =\ a\ >\ 1}{\pars{1 + z}^{n}\,z \over \pars{z - 1}\pars{z + 1}} \,{\dd z \over 2\pi\ic} \\[3mm]&={\pars{1 + 1}^{n}\times 1 \over 1 + 1}=\color{#66f}{\large 2^{n - 1}} \end{align}
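An elementary check of the closed form $2^{n-1}$ (an added illustration, standard library only):

```python
from math import comb

for n in range(1, 20):
    s = sum(comb(n, 2 * k) for k in range(0, n // 2 + 1))
    assert s == 2 ** (n - 1), (n, s)
print("sum over even lower indices equals 2^(n-1) for n = 1..19")
```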
{ "language": "en", "url": "https://math.stackexchange.com/questions/892552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
How to arrive at coordinates using 2D rotation matrix multiplication. I'm having a little trouble understanding 2D rotation matrices. I apologize in advance, as I have probably missed something really obvious! (My mathematics isn't brilliant!) Ok, so I have the following matrix equation: $$\begin{bmatrix}x'\\y'\end{bmatrix}=\begin{bmatrix}\cos(a) & -\sin(a)\\ \sin(a) & \cos(a)\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$$ Now, once I've multiplied x and y by the columns I am left with a new matrix of 4 numbers. However, I am supposed to end up with the following formulae to get the new coordinates: $$x' = x \cos(a) - y \sin(a)$$ $$y' = x \sin(a) + y \cos(a)$$ My question is, how do I get from the 4 numbers of my matrix to the 2 numbers which will be the result of the above formulae? What happened to the rest of the sines and cosines? I imagine some kind of cancelling out? Please could you explain the process to get from the matrix to the formulae. Thank you so much! (If my figures aren't clear, then they are available here!)
When you multiply the matrix by the vector, each entry of the result is an entire row of the matrix times the column vector, so you get two numbers, not four: $$\begin{bmatrix}x'\\y'\end{bmatrix}=\begin{bmatrix}\cos(a) & -\sin(a)\\ \sin(a) & \cos(a)\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}x\cos(a) - y\sin(a)\\ x\sin(a) + y\cos(a)\end{bmatrix}.$$ Nothing cancels: the four products $x\cos(a)$, $-y\sin(a)$, $x\sin(a)$ and $y\cos(a)$ are added in pairs, one pair per row. This is how matrix multiplication works.
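The same computation carried out numerically (an added illustration; it assumes NumPy is available):

```python
import numpy as np

a, x, y = 0.3, 2.0, 5.0
R = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])
print(R @ np.array([x, y]))                    # [x', y']
print(x * np.cos(a) - y * np.sin(a),
      x * np.sin(a) + y * np.cos(a))           # the same two numbers
```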
{ "language": "en", "url": "https://math.stackexchange.com/questions/892631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About integrating $\sin^2 x$ by parts This is about that old chestnut, $\newcommand{\d}{\mathrm{d}} \int \sin^2 x\,\d x$. OK, I know that ordinarily you're supposed to use the identity $\sin^2 x = (1 - \cos 2x)/2$ and integrating that is easy. But just for the heck of it, I tried using the $u$-$v$ substitution method (otherwise known as integration by parts). $$ \int \sin^2 x\,\d x = \int \sin x \sin x\,\d x $$ We can say $u=\sin x$ and $\d u=\cos x\,\d x$ while $\d v = \sin x\,\d x$ and $v = -\cos x$. When we put it all together: $$\int \sin^2 x\,\d x = u v - \int v\,\d u = -\sin x \cos x - \int -\cos^2 x\,\d x$$ and doing the same routine with $\cos^2 x$ we get $u = \cos x$, $\d u = -\sin x\,\d x$, $\d v = \cos x\,\d x $ and $v = -\sin x$, leading to: $$\begin{align} \int \sin^2 x\,\d x = uv - \int v\,\d u &= -\sin x \cos x - (-\cos x \sin x - \int -\sin^2 x\,\d x) \\ &= -\sin x \cos x + \sin x \cos x - \int \sin^2 x\,\d x \end{align}$$ which eventually works out to $$2\int \sin^2 x\,\d x = 0$$ So I wanted to get an idea why this didn't work. Maybe it's higher math and the why will be beyond me (I would think that might be the case), or maybe it's one of those proofs that looks absurdly simple when shown that I am just unaware of.
In the second application of the $u$-$v$ method, there may be an error: If $\newcommand{\d}{\mathrm{d}} \d v = \cos x\,\d x$, then $v = \sin x$ and not $-\sin x$. Hence the $u$-$v$ method overall would produce $$\int \sin^2 x\,\d x = −\sin x\cos x + \sin x\cos x + \int \sin^2 x\,\d x$$ resulting in circular reasoning: $$\int \sin^2 x\,\d x = \int \sin^2 x\,\d x$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/892725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Integral Involving Dilogarithms I came across the identity $$\int^x_0\frac{\ln(p+qt)}{r+st}{\rm d}t=\frac{1}{2s}\left[\ln^2{\left(\frac{q}{s}(r+sx)\right)}-\ln^2{\left(\frac{qr}{s}\right)}+2\mathrm{Li}_2\left(\frac{qr-ps}{q(r+sx)}\right)-2\mathrm{Li}_2\left(\frac{qr-ps}{qr}\right)\right]$$ in a book. Unfortunately, as of now, I am not very adept at manipulating such integrals and thus I have little idea on how to proceed with proving this identity. For example, substituting $u=r+sx$ doesn't seem to help much. Hence, I would like to seek assistance as to how this integral can be evaluated. Help will be greatly appreciated. Thank you.
Among various ways to do it, this one is simple :
{ "language": "en", "url": "https://math.stackexchange.com/questions/892805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Projection of one 2D-rectangle onto another 2D-rectangle I have the following: http://i.imgur.com/iwOzmxa.png The center of the blue box is related to the center of the black box. For example, such a relationship could be described such that the black box would be a game's main screen where the red dot is the player, and the blue box would be the map where the red dot is the player on the map. It is just an example. I'm trying to project any point located within the blue box onto the black box and vice-versa. I basically want to map a coordinate from one of the boxes into the coordinate space of the other. How can I do this? I wasn't sure what to search online. All the results seem to show how many smaller rectangles fit into a larger one. I thought about translating the smaller rectangle to the top left corner of the larger one and then somehow stretching the points to match? I'm not sure what I'm thinking or how to start. Are there any algorithms that can do this or help me?
Assuming that you want the co-ordinates in the blue and black rectangles to be proportionately the same across and down the rectangles, you get $ \frac{X_{blue} - X_S}{X_{S2}-X_S}=\frac{X_{black}-X_L}{X_{L2}-X_L}$ and similarly with $Y$, so $$X_{blue} = X_S + \left(X_{black}-X_L\right)\frac{X_{S2}-X_S}{X_{L2}-X_L}$$ $$Y_{blue} = Y_S + \left(Y_{black}-Y_L\right)\frac{Y_{S2}-Y_S}{Y_{L2}-Y_L}$$ $$X_{black} = X_L + \left(X_{blue}-X_S\right)\frac{X_{L2}-X_L}{X_{S2}-X_S}$$ $$Y_{black} = Y_L + \left(Y_{blue}-Y_S\right)\frac{Y_{L2}-Y_L}{Y_{S2}-Y_S}$$
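A direct implementation of these maps (an added illustration; the tuple layout and names are my own, with $S=(X_S,Y_S,X_{S2},Y_{S2})$ for the blue rectangle and $L=(X_L,Y_L,X_{L2},Y_{L2})$ for the black one):

```python
def black_to_blue(xk, yk, S, L):
    (xs, ys, xs2, ys2), (xl, yl, xl2, yl2) = S, L
    return (xs + (xk - xl) * (xs2 - xs) / (xl2 - xl),
            ys + (yk - yl) * (ys2 - ys) / (yl2 - yl))

def blue_to_black(xb, yb, S, L):
    (xs, ys, xs2, ys2), (xl, yl, xl2, yl2) = S, L
    return (xl + (xb - xs) * (xl2 - xl) / (xs2 - xs),
            yl + (yb - ys) * (yl2 - yl) / (ys2 - ys))

S, L = (10, 10, 30, 20), (0, 0, 200, 100)
p = black_to_blue(50, 25, S, L)
print(p)                          # (15.0, 12.5)
print(blue_to_black(*p, S, L))    # (50.0, 25.0); the round trip recovers the point
```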
{ "language": "en", "url": "https://math.stackexchange.com/questions/892875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can this log question be simplified? $ { 2^{\log_3 5}} -  {5^{\log_3 2}}.$ I don't know of any formula that applies to it. Is there one? Even a hint will be helpful.
$ { 2^{\log_3 5}} -  {5^{\log_3 2}}$ $= 2^{\log_3 5} - 5^{\frac{\log_5 2}{\log_5 3}}$ (change of base) $= 2^{\log_3 5} - (5^{\log_5 2})^{(\frac{1}{\log_5 3})}$ (using $a^{\frac{b}{c}} = {(a^b)}^{\frac{1}{c}}$) $= 2^{\log_3 5} - 2^{\frac{1}{\log_5 3}}$ $= 2^{\log_3 5} - 2^{\frac{\log_5 5}{\log_5 3}}$ ($\because \log_5 5 = 1$) $=2^{\log_3 5} - 2^{\log_3 5} $ (basically a "reverse" change of base) $=0$
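A floating-point spot check (an added illustration, not a proof):

```python
from math import log

print(2 ** log(5, 3))   # ~2.7606
print(5 ** log(2, 3))   # the same value, so the difference is 0
```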
{ "language": "en", "url": "https://math.stackexchange.com/questions/892959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
What do elements of $\mathbb{R}(xy,x+y)$ look like? I have problems with determining what the typical elements of the field $\mathbb{R}(xy,x+y)$ are. In one indeterminate it is easier, as $\mathbb{R}(x)=\Bigl\{\frac{f(x)}{g(x)}, g(x)\neq 0, f(x),g(x)\in\mathbb{R}[x]\Bigr\} $ where $f(x)=a_{0}+a_{1}x+a_{2}x^2+\dots+a_{n}x^{n}$ $g(x)={b_{0}+b_{1}x+b_{2}x^2+\dots+b_{n}x^{n}}$ and by translating the above case, we get something like this $\mathbb{R}(xy,x+y)=\Bigl\{\frac{f(xy,x+y)}{g(xy,x+y)}, g(xy,x+y)\neq 0, f(xy,x+y),g(xy,x+y)\in\mathbb{R}[xy,x+y]\Bigr\} $ but what does $f(xy,x+y)$ really mean? Can we take for example $f(xy,x+y)=5xy-3(x+y)$? Can someone give me an example of non-trivial elements belonging to the above field? For example, is it true that $(xy)^2=x^2y^2\in \mathbb{R}(xy,x+y) $?
This is the field of all rational functions symmetric in $x$ and $y$. So for example it contains $x^3+y^3+5x^2y+5xy^2 -x-y$ since this is symmetric in $x$ and $y$. $x+y$ and $xy$ are the elementary symmetric functions, and a basic theorem of algebra is that every symmetric rational function is a rational function of the elementary symmetric functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/893052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Continuity of function where $f(x+y) = f(x)f(y) ~~\forall x, y \in \mathbb{R}$. Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be a function which satisfies $f(x+y) = f(x)f(y) ~~\forall x, y \in \mathbb{R}$ is continuous at $x=0$, then it is continuous at every point of $\mathbb{R}$. So we know $\forall \epsilon > 0 ~~\exists \delta > 0$ such that $|x-0|<\delta \implies |f(x)-f(0)|<\epsilon$ and we want to show that given any $\epsilon > 0 ~~\exists \delta > 0$ such that $|x-y|<\delta \implies |f(x)-f(y)|<\epsilon$. Now I see that the 'trick' that we can use is replacing $f(x)$ with $f(x)f(0)$ but I still cannot seem to finish the proof. Any advice?
To show continuity at $x$ simply notice that: $$\lvert f(x+h)-f(x)\rvert=\lvert f(x)f(h)-f(x)\rvert=\lvert f(x)\rvert\lvert f(h)-1\rvert$$ and notice that $f(0)=f(0+0)=f(0)f(0)$ so either $f(0)=0$ or $f(0)=1$. So either $f$ is identically $0$ and hence continuous or we use continuity at $x=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/893175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
How to solve the recurrence relation $T(n) = T(\lceil n/2\rceil) + T(\lfloor n/2\rfloor) + 2$ I'm trying to solve a recurrence relation for the exact function (I need the exact number of comparisons for some algorithm). This is what i need to solve: $$\begin{aligned} T(1) &= 0 \\ T(2) &= 1 \\ T(n) & = T(\lceil n/2\rceil) + T(\lfloor n/2\rfloor) + 2 \qquad(\text{for each $n\ge2$}) \end{aligned}$$ Without the ceiling and floor I know that the solution is $T(n) = 3n/2 -2$ if $n$ is the power of 2, but how can I solve it with them? (and for every $n$, not just $n$ that is the power of 2). Thanks a lot.
As mentioned in the comments, the first two conditions $T(1) = 0, T(2) = 1$ are incompatible with the last condition $$\require{cancel} T(n) = T(\lfloor\frac{n}{2}\rfloor) + T(\lceil\frac{n}{2}\rceil) + 2\quad\text{ for } \color{red}{\cancelto{\;\color{black}{n > 2}\;}{\color{grey}{n \ge 2}}} \tag{*1}$$ at $n = 2$. We will assume the condition $(*1)$ is only valid for $n > 2$ instead. Let $\displaystyle\;f(z) = \sum\limits_{n=2}^\infty T(n) z^n\;$ be the generating function for the sequence $T(n)$. Multiplying the $n^{th}$ term of $(*1)$ by $z^n$ and summing from $n = 3$, we obtain: $$\begin{array}{rrl} &f(z) - z^2 &= T(3) z^3 + T(4) z^4 + T(5) z^5 + \cdots\\ &&= (T(2) + 2)z^3 + (T(2) + T(2) + 2)z^4 + (T(2)+T(3)+2)z^5 + \cdots\\ &&= (1+z)^2 ( T(2)z^3 + T(3)z^5 + \cdots) + 2(z^3 + z^4 + z^5 + \cdots)\\ &&= \frac{(1+z)^2}{z}f(z^2) + \frac{2z^3}{1-z}\\ \implies & f(z) &= \frac{(1+z)^2}{z}f(z^2) + z^2\left(\frac{1+z}{1-z}\right)\\ \implies & \frac{(1-z)^2}{z} f(z) &= \frac{(1-z^2)^2}{z^2}f(z^2) + z(1-z^2) \end{array} $$ Substituting $z^{2^k}$ for $z$ in the last expression and summing over $k$, we obtain $$f(z) = \frac{z}{(1-z)^2}\sum_{k=0}^\infty \left(z^{2^k} - z^{3\cdot2^k}\right) = \left( \sum_{m=1}^\infty m z^m \right)\sum_{k=0}^\infty \left(z^{2^k} - z^{3\cdot2^k}\right)$$ With this expression, we can read off $T(n)$ as the coefficient of $z^n$ in $f(z)$ and get $$T(n) = \sum_{k=0}^{\lfloor \log_2 n\rfloor} ( n - 2^k ) - \sum_{k=0}^{\lfloor \log_2(n/3)\rfloor} (n - 3\cdot 2^k)$$ For $n > 2$, we can simplify this as $$\bbox[4pt,border: 1px solid black;]{ T(n) = n \color{red}{\big(\lfloor \log_2 n\rfloor - \lfloor \log_2(n/3)\rfloor\big)} - \color{blue}{\big( 2^{\lfloor \log_2 n\rfloor + 1} - 1 \big)} + 3\color{blue}{\big( 2^{\lfloor \log_2(n/3)\rfloor +1} - 1\big)}}\tag{*2}$$ There are several observations we can make. (1) When $n = 2^k, k > 1$, we have $$T(n) = n(k - (k-2)) - (2^{k+1} - 1) + 3(2^{k-1} - 1) = \frac32 n - 2$$ (2) When $n = 3\cdot 2^{k-1}, k > 0$, we have $$T(n) = n(k - (k-1)) - (2^{k+1} - 1) + 3(2^k - 1) = \frac53 n - 2$$ (3) For $2^k < n < 3\cdot 2^{k-1}, k > 1$, the coefficient for $n$ in $(*2)$ (i.e. the factor in red) is $2$, while the rest (i.e. the terms in blue) do not change with $k$. So $T(n)$ is linear there with slope $2$. (4) For $3\cdot 2^{k-1} < n < 2^{k+1}, k > 1$, the coefficient for $n$ in $(*2)$ is now 1. Once again $T(n)$ is linear there, but with slope $1$ instead. Combining these, we find in general $$\frac32 n - 2 \le T(n) \le \frac53 n - 2 \quad\text{ for }\quad n > 2$$ and $T(n) = O(n)$ as expected. However, $\frac{T(n)}{n}$ doesn't converge to any number but oscillates "between" $\frac32$ and $\frac53$. A picture (omitted here) illustrates the behavior of $T(n)$. The blue pluses are the value of $T(n) - (\frac32 n - 2)$ computed for various $n$. The red line is $\frac{n}{6} = (\frac53 n - 2) - (\frac32 n - 2)$. As one can see, $T(n)$ doesn't converge to any straight line. Instead, it oscillates between the lines $\frac32 n - 2$ and $\frac53 n - 2$ as discussed before.
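A check of the closed form $(*2)$ against the recurrence itself (an added illustration, standard library only; the floors of the logarithms are computed with `bit_length` to avoid floating-point issues at the boundaries):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return T(-(-n // 2)) + T(n // 2) + 2        # T(ceil(n/2)) + T(floor(n/2)) + 2

def closed(n):
    a = n.bit_length() - 1                      # floor(log2 n)
    b = (n // 3).bit_length() - 1               # floor(log2(n/3)), valid for n >= 3
    return n * (a - b) - (2 ** (a + 1) - 1) + 3 * (2 ** (b + 1) - 1)

assert all(T(n) == closed(n) for n in range(3, 5000))
print("closed form matches the recurrence for 3 <= n < 5000")
```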
{ "language": "en", "url": "https://math.stackexchange.com/questions/893251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How to calculate the probability of rolling 6 at least 5 times in a row, out of 50 tries? If I roll the dice 50 times, how do I calculate the chance that I will roll 6 at least 5 times in a row? Why this problem is hard: (1) With 5 tries this would be easy: take $(1/6)$ to the fifth power. (2) With 6 tries this is manageable; take the probability of rolling 6 the first five times, add the probability of rolling 6 the last five times, then subtract the overlap (all six results are 6). (3) Given two overlapping sets of 5 rolls, the probability that one is all 6's is not independent of the probability that the other is all 6's. (4) In principle this could be continued, but inclusion-exclusion gets out of hand. There has to be a better way; what is it?
There is no straightforward formula for this probability. However it can be computed exactly (within numerical error). You can keep track of the vector of probabilities $\mathbf{p}_t$ that at time $t$ you are in state: * *you haven't already rolled five 6s in a row and your current run of consecutive 6s has length 0, *you haven't already rolled five 6s in a row and your current run has length 1, *you haven't already rolled five 6s in a row and your current run has length 2, *you haven't already rolled five 6s in a row and your current run has length 3, *you haven't already rolled five 6s in a row and your current run has length 4, or *you have already rolled five 6s in a row. There are thus 6 probabilities in the vector. And initially $\mathbf{p}_0 = [1, 0, 0, 0, 0, 0]^T$. The chance of transitioning from state 1 to state 2 is $1/6$, and similarly for the other states up to state 5 to state 6. From states 1 to 5 the chance of transitioning back to state 1 is $5/6$. However, once you have already rolled five 6s in a row you have always already rolled them, so if you get to state 6 you stay in state 6. These probabilities can be specified in a transition matrix $X$: $$ X = \left\{ \begin{array}{c|cccccc} & S_1 & S_2 & S_3 & S_4 & S_5 & S_6 \\ \hline S_1 & \frac56 & \frac56 & \frac56 & \frac56 &\frac56 & 0 \\ S_2 & \frac16 &0 & 0 &0 & 0 & 0 \\ S_3 & 0 & \frac16 &0 &0 & 0 & 0 \\ S_4 & 0 & 0 &\frac16 &0 & 0 & 0 \\ S_5 & 0 & 0 &0 &\frac16 & 0 & 0 \\ S_6 & 0 & 0 & 0&0 &\frac16 & 1 \end{array}\right\} $$ Probabilities for time $t+1$ can be computed from probabilities for time $t$ by applying the transition matrix: $$ \mathbf{p}_{t+1} = X\mathbf{p}_t $$ Computing $\mathbf{p}_{50} = X^{50}\mathbf{p}_0$ gives the probability of observing at least five 6s in a row within 50 rolls to be 0.00494 (simulation gave 0.00493). As Hurkyl points out, for large numbers of rolls it can be worth using the square and multiply method for matrix exponentiation to maintain accuracy.
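A minimal NumPy sketch of this computation (my own illustration of the method described above; the state and matrix layout follow the answer):

    import numpy as np

    # column-stochastic transition matrix X: column j holds the probabilities of leaving state j+1
    X = np.zeros((6, 6))
    X[0, 0:5] = 5.0 / 6.0        # any non-6 sends states 1..5 back to state 1
    for j in range(5):
        X[j + 1, j] = 1.0 / 6.0  # a 6 extends the current run by one
    X[5, 5] = 1.0                # state 6 ("already saw five 6s in a row") is absorbing

    p0 = np.array([1.0, 0, 0, 0, 0, 0])
    p50 = np.linalg.matrix_power(X, 50) @ p0
    print(p50[5])                # ~0.00494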
{ "language": "en", "url": "https://math.stackexchange.com/questions/893342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Any abstract algebra book with programming (homework) assignments? All: I studied abstract algebra a long time ago. Now I would like to review some material, particularly Galois theory (and its applications). Can anyone recommend an abstract algebra book which covers Galois theory (and its applications)? I have been a software engineer for many years. Ideally, I would like an algebra book with programming assignments or exercises (to help me understand the concepts). For example, homework assignments to write a program to verify Galois theory, or to construct a solvable group, or anything like that. (Of course, I can think of some random questions myself, but I prefer a textbook with well-designed, meaningful homework.) I feel that I am not good at deriving formulas anymore, so I would like to use my programming skills to help me understand the subject and do more hands-on exercises and calculations.
All books listed in the comments above are listed in my answer to the question Novel approaches to elementary number theory and abstract algebra, so I am placing a CW-answer here to remove this question from the unanswered queue.
{ "language": "en", "url": "https://math.stackexchange.com/questions/893401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
There always exists a subfield of $\mathbb C$ which is a splitting field for $f(x) \in \mathbb{Q}[X]$? So I've been studying field theory on my own, and I just started learning about splitting fields. Based on my understanding, if $f(x) \in \mathbb{Q}[X]$ is a polynomial, then there should always be a subfield of $\mathbb C$ which is a splitting field for that polynomial. Is this true? Thanks in advance.
Yes. Every polynomial in $\Bbb C[x]$ has a root in $\Bbb C$ (fundamental theorem of algebra), which means that every polynomial has a linear factor; after dividing we get another polynomial in $\Bbb C[x]$ of smaller degree, which also has a root, which means we get another linear factor we can divide by, and we can continue this process until we're left with $1$, and when we collect all of the linear factors we've gathered (possibly with multiplicity) we have a complete factorization (over $\Bbb C$) of the original polynomial. Every polynomial in $\Bbb Q[x]$ is in also a polynomial in $\Bbb C[x]$. Indeed, every field $k$ has an algebraic closure $K$. Every polynomial in $K[x]$ factors completely over the field $K$. Splitting fields (which algebraic closures are a particular example of) can be explicitly constructed by iteratively adjoining roots of irreducible factors over higher and higher extensions, and this process might need to be done transfinitely using Zorn's lemma; this should be covered in any text on field theory. (Another way I never see discussed anywhere is forming a splitting ring for polynomials using Vieta's formulas and then quotienting by a maximal ideal.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/893484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to understand uniform integrability? From the definition of uniform integrability, I could not understand why "uniform" is used as a qualifier. Can someone please enlighten me?
Many arguments in probability use truncation: take a random variable $X$ and define $X^k := X1_{|X| \leq k}$. This allows us to handle "most of $X$" on a compact set $K:= [-k,k]$ if $X$ is integrable, as $E|X-X^k| < \epsilon$ for sufficiently large $k$. In other words, $$\lim_{k \rightarrow\infty} E|X1_{|X| > k}| = 0.$$ The point of uniform integrability is to have a single $k$ which works for a family $\{X_\alpha\}_{\alpha \in A}$ of random variables. That is, for any $\epsilon > 0$, there is a single, uniform $k$ for which $E|X_\alpha 1_{|X| > k}| < \epsilon$. Hence, $$\lim_{k\rightarrow\infty} \sup_{\alpha \in A} E|X1_{|X| > k}| = 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/893549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find $ \int_0^2 \int_0^2\sqrt{5x^2+5y^2+8xy+1}\hspace{1mm}dy\hspace{1mm}dx$ I need the approximation to four decimals Not sure how to start or if a closed form solution exists All Ideas are appreciated
In Maple environment:

    [> s := Int(sqrt(5*x^2+8*x*y+5*y^2+1), x = 0 .. 2);
        # Maple echoes s as the inert integral Int((5 x^2 + 8 x y + 5 y^2 + 1)^(1/2), x = 0 .. 2)
    [> int(s, y = 0 .. 2, numeric);
                                  17.71654322
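If Maple is not at hand, the same value can be reproduced with SciPy's adaptive quadrature; a small sketch (my own, not part of the answer):

    import numpy as np
    from scipy import integrate

    f = lambda y, x: np.sqrt(5 * x**2 + 8 * x * y + 5 * y**2 + 1)
    val, err = integrate.dblquad(f, 0, 2, lambda x: 0, lambda x: 2)
    print(round(val, 4))   # 17.7165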
{ "language": "en", "url": "https://math.stackexchange.com/questions/893620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Elementary algebra problem Consider the following problem (drawn from Stanford Math Competition 2014): "Find the minimum value of $\frac{1}{x-y}+\frac{1}{y-z}+ \frac{1}{x-z}$ for reals $x > y > z$ given $(x − y)(y − z)(x − z) = 17.$" Method 1 (official solution): Combining the first two terms, we have $\frac{x−z}{(x-y)(y-z)} + \frac{1}{x-z}= \frac{(x-z)^2}{17}+ \frac{1}{x-z}.$ What remains is to find the minimum value of $f(a) = \frac{a^2}{17} + \frac{1}{a} = \frac{a^2}{17} + \frac{1}{2a}+ \frac{1}{2a}$ for positive values of $a.$ Using AM-GM, we get $f(a) \geq \frac{3}{68^{1/3}}$. Method 2: Let $x-y:=a, \; y-z:=b, \; x-z:=c.$ Then $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}= \frac{ab+bc+ac}{17}= \frac{3}{17}\frac{ab+bc+ac}{3} \geq \frac{3}{17} (a^2b^2c^2)^{1/3} =\frac{3}{17^{1/3}}.$ So Method 2 seems to give a sharper bound than the official solution. Have I done something wrong?
In the second solution, when equality in $\ge$ is satisfied we must have $a=b=c$. But then $abc=17$ (so $a,b,c>0$) together with $a+b=c$ (which holds since $(x-y)+(y-z)=x-z$) gives a contradiction, so that bound is not attained.
{ "language": "en", "url": "https://math.stackexchange.com/questions/893793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Cauchy-Goursat theorem I understand the solution to this question via the use of the following corollary. However, for $k+1=0$ you get: $\displaystyle \oint_0^{2\pi}i\,d\theta$. Why am I not able to apply the Cauchy–Goursat theorem to get that the integral $=0$?
There's no contradiction, you just proved $z^{-1}$ doesn't have a primitive function on any open set containing $\gamma_1$. $(z^n)'=nz^{n-1}$ gives you a primitive function of $z^{n-1}$ only for $n\ne0$, so $z^{-1}$ is excluded. You could think of using the principal branch of the complex logarithm, but that's not even continuous on $\gamma_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/893889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Conversion of rotation matrix to quaternion We use a unit-length quaternion to represent rotations. Following is a general rotation matrix ${\begin{bmatrix}m_{00} & m_{01}&m_{02} \\ m_{10} & m_{11}&m_{12}\\ m_{20} & m_{21}&m_{22}\end{bmatrix}}_{3\times 3}\tag 1 $. How do I accurately calculate the quaternion $q = q_1i+q_2j+q_3k+q_4$ for this matrix? That is, how can we write the $q_i$'s in terms of the given $m_{ij}$ accurately?
The axis and angle are directly coded in this matrix. Compute the unit eigenvector for the eigenvalue $1$ for this matrix (it must exist!) and call it $u=(u_1,u_2,u_3)$. You will be writing it as $u=u_1i+u_2j+u_3k$ from now on. This is precisely the axis of rotation, which, geometrically, all nonidentity rotations have. You can recover the angle from the trace of the matrix: $tr(M)=2\cos(\theta)+1$. This is a consequence of the fact that you can change basis to an orthonormal basis including the axis you found above, and the rotation matrix will be the identity on that dimension, and it will be a planar rotation on the other two dimensions. That is, it will have to be of the form $$\begin{bmatrix}\cos(\theta)&-\sin(\theta)&0\\\sin(\theta)&\cos(\theta)&0\\0&0&1\end{bmatrix}$$ Since the trace is invariant under changes of basis, you can see how I got my equation. Once you've solved for $\theta$, you'll use it to construct your rotation quaternion $q=\cos(\theta/2)+u\sin(\theta/2)$.
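Here is a small NumPy sketch of this recipe (my own illustration, not from the answer). One practical detail the recipe leaves open is the sign of the eigenvector, which I fix below using the skew-symmetric part of $M$; the function and variable names are arbitrary:

    import numpy as np

    def matrix_to_quaternion(M):
        # 1) axis: unit eigenvector of M for the eigenvalue 1
        vals, vecs = np.linalg.eig(M)
        u = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        u /= np.linalg.norm(u)
        # 2) angle: tr(M) = 2*cos(theta) + 1
        theta = np.arccos(np.clip((np.trace(M) - 1.0) / 2.0, -1.0, 1.0))
        # eigenvectors are only determined up to sign; pick the sign consistent with
        # the skew part of M, for which M - M.T = 2*sin(theta)*[u]_x
        s = np.array([M[2, 1] - M[1, 2], M[0, 2] - M[2, 0], M[1, 0] - M[0, 1]])
        if np.dot(s, u) < 0:
            u = -u
        # 3) q = cos(theta/2) + u*sin(theta/2), returned as (q1, q2, q3, q4) with q4 the scalar
        return np.concatenate([np.sin(theta / 2.0) * u, [np.cos(theta / 2.0)]])

    # example: rotation by 90 degrees about the z-axis
    Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    print(matrix_to_quaternion(Rz))   # ~ [0, 0, 0.7071, 0.7071]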
{ "language": "en", "url": "https://math.stackexchange.com/questions/893984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 2 }
Logic behind dividing negative numbers I've learnt in school that a positive number, when divided by a negative number, and vice-versa, will give us a negative number as a result. On the other hand, a negative number divided by a negative number will give us a positive number. Given the following equations: * *$\frac{-18}{2} = -9$ *$\frac{18}{-2} = -9$ *$\frac{-18}{-2} = 9$ This is how I would think the logic behind the equations would be: * *If I have a debt of 18 dollars, owed to 2 people equally, I would owe each of them $9 * *$\frac{-18}2 = -9$ *If I have 18 dollars, owed to 2 people equally, I would thus give them $9 each * *$\frac{18}{-2} = -9$ However, I can't seem to come to terms with a negative number divided by a negative number giving me a positive number as a result. What would be the logic behind it? Also, I think that I have the logic/reasoning for the 2nd example wrong, as it is exactly the same as the reasoning for the first example? Could someone give me a better example of the logic behind the 2nd example? I would really appreciate it if anyone could enlighten me.
Division can be thought of (in an algorithmic sense) as repeated subtraction. The question "what is 6/2" is exactly equivalent to the question "how many times must one subtract 2 from 6 to reach 0". How many times must one subtract 2 from 18 to reach 0? Of course it's 9. How many times must one subtract -2 from 18 to reach 0? Of course it's -9. BTW this is a good way to help kids understand why division by zero is a singularity. How many times must one subtract 0 from X to reach 0? Any finite or infinite number is not sufficient.
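As a toy illustration of this repeated-subtraction view (my own sketch, assuming the quotient is an exact integer so the loop terminates), where a negative count means "subtract a negative number of times", i.e. add:

    def times_to_subtract(start, step):
        """How many times must one subtract `step` from `start` to reach 0?"""
        count = 0
        direction = 1 if (start > 0) == (step > 0) else -1
        while start != 0:
            start -= direction * step
            count += direction
        return count

    print(times_to_subtract(18, 2), times_to_subtract(18, -2), times_to_subtract(-18, -2))
    # 9 -9 9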
{ "language": "en", "url": "https://math.stackexchange.com/questions/894029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 16, "answer_id": 9 }
How to calculate length and area for this curve? $C : x^{2/3} + y^{2/3} = 1$ I'm stuck, so any tip will be helpful Thanks in advance!
The solution for $y$ is $$y(x) = (1 - x^{2/3})^{3/2}$$ The area $A$ is given by $$A=4\int_0^1 y(x) dx=\frac{3\pi}{8}$$ The length $L$ of the curve is given by: $$ L=4\int_0^1dx\sqrt{(y'(x))^2+1}=6$$
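For a quick numerical cross-check of both values (my own sketch, not part of the answer — SciPy's `quad` copes with the integrable endpoint singularity in the arc-length integrand, possibly emitting a warning):

    import numpy as np
    from scipy.integrate import quad

    y = lambda x: (1.0 - x ** (2.0 / 3.0)) ** 1.5
    area = 4.0 * quad(y, 0.0, 1.0)[0]

    dy = lambda x: -x ** (-1.0 / 3.0) * (1.0 - x ** (2.0 / 3.0)) ** 0.5   # y'(x)
    length = 4.0 * quad(lambda x: np.sqrt(dy(x) ** 2 + 1.0), 0.0, 1.0)[0]

    print(area, 3.0 * np.pi / 8.0)   # both ~1.1781
    print(length)                    # ~6.0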
{ "language": "en", "url": "https://math.stackexchange.com/questions/894113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the value of $\sum_{k=1}^{n} k \binom {n} {k}$ I was assigned the following problem: find the value of $$\sum_{k=1}^{n} k \binom {n} {k}$$ by using the derivative of $(1+x)^n$, but I'm basically clueless. Can anyone give me a hint?
Notice that $\displaystyle S = \sum_{k=0}^{n} k \binom {n} {k} = \sum_{k=0}^{n}(n-k)\binom {n} {n-k} = n\sum_{k=0}^{n}\binom {n} {n-k}-S$ so, as $\displaystyle \binom {n} {n-k}=\binom {n} {k}$ we have $\displaystyle 2S = n\sum_{k=0}^{n}\binom {n} {k} = n2^n$ and so $\displaystyle S = n2^{n-1}.$
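A one-line numerical check of the resulting identity $\sum_{k=0}^{n} k\binom nk = n2^{n-1}$ (my own sketch, just for reassurance):

    from math import comb

    n = 10
    print(sum(k * comb(n, k) for k in range(n + 1)), n * 2 ** (n - 1))   # 5120 5120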
{ "language": "en", "url": "https://math.stackexchange.com/questions/894159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Show that a linear matrix transformation is bijective iff A is invertible. Suppose a linear transformation $T: M_n(K) \rightarrow M_n(K)$ defined by $T(M) = A M$ for $M \in M_n(K)$. Show that it is bijective IFF $A$ is invertible. I was thinking then that I could show that it is surjective. So suppose there exists a $B \in M_n(K)$ such that $T(B) = ?$ What would it equal to show that?
By the rank-nullity theorem $T$ is bijective if and only if $T$ is injective if and only if $T$ is surjective. For the injectivity: we have $$T(M)=T(N)\iff AM=AN$$ * *If $A$ is invertible then $AM=AN\implies M=N$, and then $T$ is injective. *If $A$ isn't invertible, let $N=M+(x\; x\;\cdots\; x)$ where $x\in\ker A$, $x\ne0$; then $N\ne M$ but $T(M)=T(N)$, so $T$ isn't injective. (Proof by contrapositive.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/894244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Simplify $(7-2i)(7+2i)$. I found a difference between my answer and the solution guide's and didn't know why. I looked up the solution guide and found: $(7-2i)(7+2i)$ $=49-(2i)^2$ $=49+4$ $=53$. Why did the unknown "$i$" just disappear? I supposed it might be: $(7-2i)(7+2i)$ $=49-(2i)^2$ $=49-4i$. Is that right? Could someone tell me which one is correct and explain the reasons? Thank you so much.
We have $(7-2i)(7+2i)=49+14i-14i-4i^2=49-4i^2.$ Note that $i=\sqrt{-1}\implies i^2=(\sqrt{-1})^2=-1.$ Therefore, $$(7-2i)(7+2i)=49-4i^2=49-4(-1)=49+4=53$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/894356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Showing Galois Group is Abelian I'm having trouble showing that $\text{Gal}(\mathbb{Q}(\zeta_n)/\mathbb{Q})$ is Abelian. First I want to be able to show that $\mathbb{Q}(\zeta_n)/\mathbb{Q}$ is Galois, but I'm also not sure how to do this. Any help is appreciated!
It's the splitting field for the polynomial $x^n-1$, hence it is Galois by the characterization of normal extensions as splitting fields for a set of polynomials. To see it is abelian, it is most direct to show that $$\text{Gal}\left(\Bbb Q(\zeta_n)/\Bbb Q\right)\cong (\Bbb Z/n\Bbb Z)^\times$$ by showing there is a map of the latter into the former and using the fact that the groups have the same cardinality, since $\varphi(n)=\left|(\Bbb Z/n\Bbb Z)^\times\right|$ by definition and is equal to $[\Bbb Q(\zeta_n):\Bbb Q]$ by direct computation, noting that $\varphi(n)=\text{deg}(\Phi_n(x))$, the $n^{th}$ cyclotomic polynomial, or by showing it is so for $n=p^\alpha$ and using $$\Bbb Q(\zeta_{p^\alpha})\cap\Bbb Q(\zeta_{q^\beta})=\Bbb Q$$ for distinct primes, $p,q$ and using $[LK:k]=[L:k][K:k]$ when $L\cap K=k$. The map in the isomorphism is, of course $$\begin{cases}a\mapsto \sigma_a\\ \sigma_a(\zeta_n)=\zeta_n^a\end{cases}$$ which is easily seen to be a group homomorphism, since $$\left(\zeta_n^{a}\right)^b=\zeta_n^{ab}$$ and injective since $$\zeta_n^a=\zeta_n^b\iff n|(b-a)$$ but since $a,b\in(\Bbb Z/n\Bbb Z)^\times$, are represented by integers $1\le a,b\le n-1$ we see that $|b-a|\le n$ with equality only if $b=a$. But the only integer between $-(n-1)$ and $n-1$ which $n$ divides is $0$, since all multiples of $n$ are $nk$ and if $|k|\ne 0$, we have $|nk|> n-1$. So in order for $\zeta_n^a=\zeta_n^b$ it must be that $a=b$, hence injectivity and so we have an isomorphism because the two sets are finite of the same cardinality, so all injections are surjections as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/894406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Divergence of a recursive sequence If $(x_n)$ is the sequence defined by $x_1=\frac{1}{2}$ and $x_{n+1}=\sqrt{x_n^2 +x_n +1}$, show that $\lim x_n = \infty$. I've tried a couple of things but none of them helped. I've tried to suppose, by contradiction, that the sequence is bounded, to find a lower-bounding sequence that goes to infinity, and to use the definition. Thanks in advance!
The sequence is increasing. If it were bounded above, it would have a limit $L$. Then $$L=\lim_{n\to\infty} x_{n+1}=\lim_{n\to\infty} \sqrt{x_n^2+x_n+1}=\sqrt{L^2+L+1}$$ would give $L=\sqrt{L^2+L+1}$, which is impossible, since $L\gt 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/894491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
For recurrence T(n) = T(n − a) + T(a) + n, prove that T(n) = O(n^2 ) complexity I have been looking over this question for hours now, and can't seem to work it out. It's a question regarding the complexity of sorting algorithms Assume that $a$ is constant and so is $T(n)$ for $n ≤ a$. For recurrence $T(n) = T(n − a) + T(a) + n$, prove that $T(n) = O(n^2)$ I don't really know where to begin with the question, any help is greatly appreciated!
To show that $T(n) = O(n^2)$, we will prove by induction on $n$ that there exist constants $c, n_0$ such that for all $n \geq n_0$, we have that: $$ T(n) \leq cn^2 $$ Base Case: I'll let you do this part. Induction Hypothesis: Assume that the claim holds for all $n' < n$. It remains to prove that the claim holds for $n' = n$. Choose any constant $a \geq 1$, and assume that $T(n) = k \geq 1$ for all $n \leq a$. Now define: $$ c = \frac{1}{a} > 0 \qquad\text{and}\qquad n_0 = \frac{a^2c + k}{2ac - 1} > 0 $$ Observe that if $n \geq n_0$, then since $1 - 2ac = 1 - 2a(\frac{1}{a}) = 1 - 2 = -1 < 0$, we have that: $$ (1 - 2ac)n \leq (1 - 2ac)n_0 = (1 - 2ac)\frac{a^2c + k}{2ac - 1} = -(a^2c + k) \tag{$\star$} $$ Hence, these chosen constants will do the trick, since for all $n \geq n_0$, we have that: \begin{align*} T(n) &= T(n - a) + T(a) + n \\ &= T(n - a) + k + n \\ &\leq c(n - a)^2 + k + n &\text{by the induction hypothesis}\\ &= c(n^2 - 2an + a^2) + k + n \\ &= cn^2 + (1 - 2ac)n + (a^2c + k) \\ &\leq cn^2 - (a^2c + k) + (a^2c + k) &\text{by $(\star)$}\\ &= cn^2 \end{align*} which completes the induction. $~~\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/894665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How prove $(\ln{\frac{1-\sin{xy}}{1+\sin{xy}}})^2 \geq \ln{\frac{1-\sin{x^2}}{1+\sin{x^2}}}\ln{\frac{1-\sin{y^2}}{1+\sin{y^2}}}$ How prove that if $x, y \in (0,\sqrt{\frac{\pi}{2}})$ and $x \neq y$, then $(\ln{\frac{1-\sin{xy}}{1+\sin{xy}}})^2 \geq \ln{\frac{1-\sin{x^2}}{1+\sin{x^2}}}\ln{\frac{1-\sin{y^2}}{1+\sin{y^2}}}$?
the right question is : $(\ln{\dfrac{1-\sin{xy}}{1+\sin{xy}}})^2 \le \ln{\dfrac{1-\sin{x^2}}{1+\sin{x^2}}}\ln{\dfrac{1-\sin{y^2}}{1+\sin{y^2}}}$ $f(x)=\ln{\dfrac{1-\sin{x}}{1+\sin{x}}}\le 0 , f(0)=0$ WLOG $y=ax,a\ge1 \implies $the inequality $\iff (f(ax^2))^2 \le f(x^2)f(a^2x^2) \iff (f(ax))^2 \le f(x)f(a^2x) \iff \dfrac{f(ax)}{f(x)} \le \dfrac{f(a^2x)}{f(ax)} $ $g(x)=\dfrac{f(ax)}{f(x)} \implies g(x) \le g(ax) \implies g'(x)> 0$ $g'(x)=\dfrac{2(\cos{(ax)}f(ax)-a\cos{x}f(x))}{\cos{x}\cos{(ax)}(fx)^2}$ $g'(x) >0\implies g_1(x)=\cos{(ax)}f(ax)-a\cos{x}f(x) \ge 0 $ $g'_1(x)=-a\sin{(ax)} f(ax)+a\cos{(ax)}f'(ax)+a\sin{x}f(x)-a\cos{x}f'(x)=a((\sin{x}f(x)-\cos{x}f'(x))-(\sin{(ax)} f(ax)-\cos{(ax)}f'(ax)))=a(g_2(x)-g_2(ax)) \\ g_2(x)=\sin{x}f(x)-\cos{x}f'(x)=\sin{x}f(x)+2$ $g'_2(x)=\cos{x}f(x) -2\tan{x}<0 \implies g_2(x) \ge g_2(ax) \implies g'_1(x) \ge 0$ $g_1(0)=0 \implies g_1(x)\ge 0$ the "=" will hold when $a=1$ or $x=0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/894759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of cubes of roots of a quartic equation $x^4 - 5x^2 + 2x -1= 0$ What is the sum of the cubes of the roots of the equation, other than by using the substitution method? Is there any formula to find the sum of the squares of the roots, the sum of the cubes of the roots, and the sum of the fourth powers of the roots for a quartic equation?
An easy way to do this is to take the first derivative of the polynomial and divide it by the polynomial itself, carrying out the division in descending powers of $x$ (so the quotient is a series in $1/x$).

$$f(x) = x^4 - 5x^2 + 2x - 1 = 0$$
$$f'(x) = 4x^3 - 10x + 2$$

Now do $f'(x)/f(x)$. Set the division up as a synthetic tableau: the divisor row is $x^4 + 0x^3 - 5x^2 + 2x - 1$ (whose lower coefficients, with signs flipped, are $-0,\ +5,\ -2,\ +1$), and the dividend row is $4x^3 + 0x^2 - 10x + 2$.

$4/1 = 4$, so that is our first quotient coefficient. Multiply all these numbers $-0,\ +5,\ -2,\ +1$ by $4$, place the answers under the columns to the right, add down the columns, and repeat with the next coefficient. Reading off the successive quotient coefficients: the sum of the roots is $0$, and the sum of the squares of the roots is $+10$.
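Continuing the same idea numerically (my own check, not part of the answer above — NumPy's root finder plus direct power sums), the first few power sums of the roots come out as follows; in particular, the requested sum of cubes is $-6$:

    import numpy as np

    r = np.roots([1, 0, -5, 2, -1])          # roots of x^4 - 5x^2 + 2x - 1
    for k in (1, 2, 3, 4):
        print(k, np.round(np.sum(r**k).real, 6))
    # power sums: p1 = 0, p2 = 10, p3 = -6, p4 = 54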
{ "language": "en", "url": "https://math.stackexchange.com/questions/894826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Which number remains alive? There are $100$ people standing in a circle numbered from $1$ to $100$. The first person has a sword and kills the person standing next to him clockwise, i.e. $1$ kills $2$, and so on. Which is the last number to remain alive? Also, what if $1$ kills both of the people standing next to him? Which is the last number standing? Can both of them be generalized?
Here is the solution in Python. Not an elegant mathematical one, but since I did not know the mathematical approach:

    from itertools import cycle

    NUMBER = 100
    people = list(range(1, NUMBER + 1))
    dead = []
    print("Running")
    print(people)
    people_list = cycle(people)
    while len(people) != 1:
        tolive = next(people_list)       # the current sword holder (skipped if already dead)
        if tolive not in dead:
            todel = next(people_list)    # the next person in the circle
            if todel not in dead:
                dead.append(todel)       # ...gets killed
            else:
                # the cycle has wrapped onto people who are already dead:
                # rebuild the list of survivors and restart the cycle
                people = set(people) - set(dead)
                print(sorted(people))
                people_list = cycle(sorted(people))
    print(people)
    print("over")

For 100 it is 73.
{ "language": "en", "url": "https://math.stackexchange.com/questions/894912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 9, "answer_id": 7 }
Why is $\sin \theta$ just $\theta$ for a small $\theta$? When $\theta$ is very small, why is sin $\theta$ taken to be JUST $\theta$?
It's not just $\theta$. What you observe is the fact that $\sin \theta$ and $\theta$ approach zero from either side of the number line at a pretty similar rate. This is best demonstrated with a graph: plot $y=\sin\theta$ and $y=\theta$ together and you can see that the two curves essentially overlap near zero. So when $\sin \theta$ is approaching $0$ for some very, very small $\theta$, we can approximate it as $\theta$. Does this make sense?
{ "language": "en", "url": "https://math.stackexchange.com/questions/895015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the probability that no married couples are among the chosen? Eight married couples are standing in a room. 4 people are randomly chosen. What is the probability that no married couples are among the chosen?
Another way to figure this out would be to choose 4 different married couples to provide a chosen person (there are $\left(\begin{array}{c} 8\\ 4\end{array}\right)=70$ ways to do this). Then pick one member of each of the 4 chosen couples (there are $2^4=16$ ways to do this). So altogether there are $70\cdot 16=1120$ ways to choose 4 people so that no two are married to each other. So the probability of none married to each other among the 4 chosen is $\frac{1120}{\left(\begin{array}{c} 16\\ 4\end{array}\right)}=\frac{1120}{1820}=\frac8{13}$. So same as John's answer, just a different approach to counting in the problem.
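A brute-force check of this count (my own sketch; couples are labelled 0–7 and spouses 0–1):

    from itertools import combinations

    people = [(c, s) for c in range(8) for s in range(2)]           # (couple id, spouse id)
    groups = list(combinations(people, 4))
    good = sum(1 for g in groups if len({c for c, _ in g}) == 4)    # all 4 from different couples
    print(good, len(groups), good / len(groups))   # 1120 1820 0.615384... (= 8/13)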
{ "language": "en", "url": "https://math.stackexchange.com/questions/895237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Prove that a function $f(n)$ counting the number of odd divisors is multiplicative How can I show that $f(n)$ is multiplicative, where $f(n)$ represents the number of divisors of $n$ of the form $2k + 1$? I'm studying algebra and I came across some questions on multiplicative functions (which should be number theory though), but there isn't any worked example of such a proof. Can you help?
First note $f(1)=1$, then compute $f(2^k)= 1$. Note that the total number-of-divisors function $\tau(n)$ is multiplicative, so writing $n=2^km$ with $m$ odd, we can see that $f(n)=f(m)$, so $$f(2^km)={\tau(n)\over \tau(2^k)}=\tau(m)$$ and we know $\tau$ is multiplicative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/895321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
derivative formula $\nabla \times (\mathbf{a} \times \mathbf{r}) = \nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = (n-1)\mathbf{a}$ Assume $\mathbf{r}=\mathbf{x}−\mathbf{x}′$ is the position vector in $\mathbb{R}^n$, for constant $\mathbf{a}$, we have $$\nabla \times (\mathbf{a} \times \mathbf{r}) = \nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = (n-1)\mathbf{a}.$$ This comes from fig.6 in Tutorial on Geometric Calculus by David Hestenes. Can anyone help me to derive it? I thought that: $\nabla \cdot(\mathbf{a} \wedge\mathbf{r}) = \epsilon^{ijk}\partial_i a_j x_k = 0$.
I would think it is still applicable for traditional vector analysis approach. With $\mathbf{r}\in\mathbb{R}^n$, then \begin{align} [\nabla\times(\mathbf{a}\times\mathbf{r})]&=\varepsilon_{ijk}\partial_j[\mathbf{a}\times\mathbf{r}]_k=\varepsilon_{ijk}\partial_j(\varepsilon_{klm}a_{\ell}r_m)\\ &=\varepsilon_{ijk}\varepsilon_{klm}(r_m\partial_ja_{\ell}+a_{\ell}\partial_jr_m)\\ &=(\delta_i^{\ell}\delta_j^m-\delta_i^m\delta_j^{\ell})(r_m\partial_ja_{\ell}+a_{\ell}\partial_jr_m)\\ &=r_j\partial_ja+a\partial_jr_j-r\partial_ja_j-a_j\partial_j{r}\\&=(\mathbf{r}\cdot\nabla)\mathbf{a}+\mathbf{a}(\nabla\cdot\mathbf{r})-\mathbf{r}(\nabla\cdot\mathbf{a})-(\mathbf{a}\cdot\nabla)\mathbf{r}\\ &=\mathbf{a}(\nabla\cdot\mathbf{r})-(\mathbf{a}\cdot\nabla)\mathbf{r}\\ &=n\mathbf{a}-\mathbf{a}\\ &=(n-1)\mathbf{a} \end{align} (where I have mixed lower and upper indices).
{ "language": "en", "url": "https://math.stackexchange.com/questions/895398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Are $i,j,k$ commutative? I am trying to understand quaternions. I read that Hamilton came up with the great equation: A) $i^2 = j^2 = k^2 = ijk = −1$ In this equation I understand that $i,j,k$ are complex numbers. Later on, I read that B) $ij=k$ C) $ji=-k$ So, if $i,j,k$ are complex numbers, and complex number multiplication is commutative, why are these two equations different? I do understand that quaternion multiplication is non-commutative, but I do not understand why multiplying these complex components are also non-commutative. Could someone please help me understand what is going on here. I (obviously) am not an expert in mathematics, so a simple explanation would be greatly appreciated.
Indeed commutativity still holds in the quaternions if your numbers only contain $i$'s, or only $j$'s, or only $k$'s. I.e. any numbers of the form $a + bi + 0j + 0k$ commute with each other; similarly $a + 0i + bj + 0k$ commutes with other numbers of that form, and the same goes for $a + 0i + 0j + bk$. However commutativity doesn't hold over the quaternions as a whole, equipped with multiplication as the operation. $Q_8$ is a non-abelian group, in other words.
{ "language": "en", "url": "https://math.stackexchange.com/questions/895463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Determine variables that fit this criterion... There is a unique triplet of positive integers $(a, b, c)$ such that $a ≤ b ≤ c$. $$ \frac{25}{84} = \frac{1}{a} + \frac{1}{ab} + \frac{1}{abc} $$ Just having trouble with this Canadian Math Olympiad question. My thought process going into this, is: Could we solve for $\frac{1}{a}$ in terms of the other variables? Then substitute that value in for each occurrence of $a$, to solve for $a$? That's all I can really think of right now. It's a question I'm not exactly used to... It's sort of the first of these kinds that I've faced. Thanks.
Factoring, we see $\displaystyle \frac{25}{84} = \frac{1}{a}(1+\frac{1}{b}(1+\frac{1}{c}))$ And we know the prime factoring of 84 gives $2\times2\times3\times7$ So we know $a,b,$ and $c$ are each going to be multiples of these primes. So we start with finding $a$: $\displaystyle \frac{25}{84}a = 1+\frac{1}{b}(1+\frac{1}{c})$ Now, $25a/84>0$, but $\frac{1}{b}(1+\frac{1}{c})>0$ too. Therefore $25a/84>1$. We want $a$ to be the smallest of the three factors, so we ask, what is the smallest it can be here? 2 won't work and neither will 3, but $2\times2=4$ will. So we provisionally say $a=4$. Then, $\displaystyle \frac{25}{21} = 1+\frac{1}{b}(1+\frac{1}{c})$ We go through the same process for $b$, remembering that $b>4$. Turns out that $b=3\times2=6$ is the smallest factor that will work. We provisionally say $b=6$. Finally, $\displaystyle \frac{8}{7} = 1+\frac{1}{c}$ $c=7$ follows immediately. So $a=4$, $b=6$, and $c=7$.
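A brute-force confirmation that $(4, 6, 7)$ is the only such triple in a modest search range (my own sketch using exact rational arithmetic):

    from fractions import Fraction

    target = Fraction(25, 84)
    sols = [(a, b, c)
            for a in range(1, 50)
            for b in range(a, 50)
            for c in range(b, 50)
            if Fraction(1, a) + Fraction(1, a * b) + Fraction(1, a * b * c) == target]
    print(sols)   # [(4, 6, 7)]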
{ "language": "en", "url": "https://math.stackexchange.com/questions/895556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Nil radical of an ideal on a commutative ring This is a problem of an exercise list: Let $J$ be an ideal of a commutative ring A. Show that $N(N(J))=N(J)$, where $N(J)=\{a \in A; a^n \in J$ for some $n \in \mathbb{N}\}$. What I did: $N(J)=\{a \in A; a^n \in J$ for some $n \in \mathbb{N}\}$ $N(N(J))=\{a \in A; a^k \in N(J)$ for some $k \in \mathbb{N}\}$ If $a \in N(J)$, then $a^1 \in N(J)$ $\Rightarrow$ $a \in N(N(J)) \Rightarrow N(J) \subseteq N(N(J))$. On the other hand, if $a \in N(N(J))$, then $a^k \in N(J) \Rightarrow (a^k)^n \in J \Rightarrow a^{kn} \in J \Rightarrow a \in N(J) \Rightarrow N(N(J)) \subseteq N(J)$, for some $k,n\in\mathbb{N}$. The problem is: I didn't use the hypothesis that A is a commutative ring. What did I do wrong? Thanks!
You didn't do anything wrong. The definition given for the radical of an ideal does not actually yield an ideal for noncommutative rings, because nilpotent elements needn't be closed under multiplication. There are a couple of ways to fix this, at least for the nilradical (radical of the zero ideal), both achieved by replacing the usual definition with an equivalent definition that does yield an ideal in noncommutative rings. One is the lower nilradical, which is the intersection of all prime ideals; the other is the upper nilradical, the ideal generated by all nil ideals. (I just got this from Wikipedia, really.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/895718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Nonintegral element and a homomorphism Assume $R\subseteq S$ are rings. Choose $x\in S$ nonintegral over $R$. I want to define a homomorphism from $R[x^{-1}]$ to a field which maps $x^{-1}$ to zero. I was trying to show that $R[x^{-1}]$ is a polynomial ring in one variable. Then I could define my map in this way: $\sum_{i=0}^mr_i'(x^{-1})^i\longmapsto r_0$. It's clear that I can write elements of $R[x^{-1}]$ as $\sum_{i=0}^nr_i(x^{-1})^i$, but I have a problem with the remaining part. I mean, when I want to show that if $\sum_{i=0}^nr_i(x^{-1})^i=\sum_{i=0}^mr_i'(x^{-1})^i$ then $n=m$ and $r_i=r_i'$ for all $i$. My attempt was this: let $n\geq m$; from our assumption we have $\sum_{i=0}^nr_i''(x^{-1})^i=0$, where $r_i''=r_i-r_i'$ for $0\leq i\leq m$ and $r_i''=r_i$ for $m+1\leq i\leq n$. Multiplying by $x^n$, $r_0''x^n+r_1''x^{n-1}+\cdots+r_{n-1}''x+r_n''=0$. If I were able to make this relation monic then it would be a contradiction and the problem would be solved, but $r_0''$ may not be invertible, so what should I do now? By the example which 'Hans' gave below, my approach was wrong.
Let $R=\mathbb{Z}$ and $S=\mathbb{Q}$. Then $x=\frac{1}{2} \in \mathbb{Q}$ is not integral over $\mathbb{Z}$ since $\mathbb{Z}$ is integrally closed. But of course $\mathbb{Z}=\mathbb{Z}[x^{-1}]$ which is not the polynomial ring. I guess you need some additional assumptions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/895821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Function with continuous inverse is continuous? If function $\textbf{F}^{-1}(x)$ is an inverse of function $\textbf{F}$ and $\textbf{F}^{-1}(x)$ is continuous. Is it true that $\textbf{F}(x)$ is continuous too?
The answer is no: take $f^{-1}(x) = e^{ix}$, defined from $[0,2\pi)$ to $\mathbb{S}^1$ (the unit sphere in the plane) This function is clearly continuous. Unfortunately, its inverse cannot be continuous since otherwise $[0,2\pi)$ would be compact being the image of the compact set $\mathbb{S}^1$ under a continuous function. I hope it helps, let me know if the details are clear enough :) EDIT: something easier to check: the identity map from the reals with the lower limit topology ($\mathbb{R}_l$) to the real with the standard topology ($\mathbb{R}$) is another counterexample! This is immediate to check since the inverse of the identity map is of course the identity map itself which now goes from $\mathbb{R}$ to $\mathbb{R}_l$. Clearly the pre image of the open set $[a,b)$ is not open!
{ "language": "en", "url": "https://math.stackexchange.com/questions/895889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
if $ax-2by+cz=0$ and $ac-b^2>0$ , Prove $zx-y^2\leq0$ for real numbers like $a,b,c,x,y,z$ that $ax-2by+cz=0$ and $ac-b^2>0$ Prove:$$zx-y^2\leq0$$ Additional info: The Proof should be by contradiction.we can use Cauchy , AM-GM and other simple inequalities. Things I have done so far: as Problem wants a Proof by contradiction, I assume that $zx-y^2>0$ and later show that it is impossible.We know that $ac-b^2>0$. So $$ac+zx-(b^2-y^2)>0$$ by AM-GM we know that $$b^2+y^2\geq 2by$$ So $$ac+zx-2by>0$$ and by $ax-2by+cz=0$ we can write $$ac+zx-ax-cz>0$$ and I stuck here. and another problem about my uncompleted proof is using AM-GM.because the question did not mentioned about being positive real number.it just said real numbers.
Notice that $by=\frac{ax+cz}{2}$, so that $$ acy^2 \geq b^2y^2=\bigg(\frac{ax+cz}{2}\bigg)^2 \geq ax\times cz $$ and hence $y^2 \geq xz$ (notice that $ac>0$ because $ac>b^2$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/896079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
matrix differentiation - derivative of matrix vector dot product with respect to matrix Given the function $$f(N) = x_1^T M x_2 $$ where * *$x_1 = Nv_1 $ *$x_2 = Nv_2 $ *$x_1, x_2, v_1, v_2$ are vectors with dimension $n \times 1$ *$M$ and $N$ are matrices with dimension $n \times n$ what's the derivative of $f(N)$ with respect to $N$?
Generalizing the problem slightly, to use matrix (instead of vector) variables, $X_k = N\cdot V_k$. We can write the function as $$ \eqalign { f &= X_1 : M\cdot X_2 \cr } $$ Now take the differential and expand $$ \eqalign { df &= dX_1 : M\cdot X_2 + X_1 : M\cdot dX_2 \cr &= d(N\cdot V_1) : M\cdot X_2 + X_1 : M\cdot d(N\cdot V_2) \cr &= dN\cdot V_1 : M\cdot X_2 + X_1 : M\cdot dN\cdot V_2 \cr &= dN : M\cdot X_2\cdot V_1^T + M^T\cdot X_1\cdot V_2^T : dN \cr &= [M\cdot X_2\cdot V_1^T + M^T\cdot X_1\cdot V_2^T] : dN \cr &= [M\cdot N\cdot V_2\cdot V_1^T + M^T\cdot N\cdot V_1\cdot V_2^T] : dN \cr } $$ The derivative is therefore $$ \eqalign { \frac {\partial f} {\partial N} &= M\cdot N\cdot V_2\cdot V_1^T + M^T\cdot N\cdot V_1\cdot V_2^T \cr } $$ If you dislike the Frobenius product, you can change the above derivation to use the trace instead $$ \eqalign { A : B &= \text{tr}(A^T\cdot B) \cr } $$
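A quick finite-difference check of this gradient formula (my own sketch; the sizes and random seed are arbitrary):

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n, n))
    N = rng.standard_normal((n, n))
    v1 = rng.standard_normal(n)
    v2 = rng.standard_normal(n)

    f = lambda A: (A @ v1) @ M @ (A @ v2)                  # f(N) = x1^T M x2 with x_k = N v_k
    grad = M @ N @ np.outer(v2, v1) + M.T @ N @ np.outer(v1, v2)

    # central finite difference for one entry of N
    h = 1e-6
    E = np.zeros((n, n))
    E[1, 2] = 1.0
    print((f(N + h * E) - f(N - h * E)) / (2 * h), grad[1, 2])   # the two numbers agree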
{ "language": "en", "url": "https://math.stackexchange.com/questions/896157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inequality involving a finite sum this is my first post here so pardon me if I make any mistakes. I am required to prove the following, through mathematical induction or otherwise: $$\frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n}} < 2{\sqrt{n}}$$ I tried using mathematical induction through: $Let$ $P(n) = \frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n}} < 2{\sqrt{n}}$ $Since$ $P(1) = \frac{1}{\sqrt1} < 2{\sqrt{1}}, and$ $P(k) = \frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{k}} < 2{\sqrt{k}},$ $P(k+1) = \frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{k}}+ \frac{1}{\sqrt{k+1}} < 2{\sqrt{k+1}}$ Unfortunately, as I am quite new to induction, I couldn't really proceed from there. Additionally, I'm not sure how to express ${\sqrt{k+1}}$ in terms of ${\sqrt{k}}$ which would have helped me solve this question much more easily. I am also aware that this can be solved with Riemann's Sum (or at least I have seen it being solved in that way) but I do not remember nor quite understand it.
If you want to take a look on the Riemann's sum method : $\forall n > 1$, we have $$\int_{n-1}^n \frac{dt}{\sqrt{t}} \ge (n-(n-1))\cdot \underset{x \in [n-1,n]}{\min} \frac{1}{\sqrt{x}} = \frac{1}{\sqrt{n}} $$ Hence, $$\frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n}} \le 1+ \int_{1}^2 \frac{dt}{\sqrt{t}} + \int_{2}^3 \frac{dt}{\sqrt{t}} + ... + \int_{n-1}^n \frac{dt}{\sqrt{t}} = 1+\int_{1}^n\frac{dt}{\sqrt{t}} $$ $$\frac{1}{\sqrt1} + \frac{1}{\sqrt2} + \frac{1}{\sqrt3} + ... + \frac{1}{\sqrt{n}} \le 1+\left[2\sqrt{t}\right]_1^n = 1+ (2\sqrt{n} -2 )= 2\sqrt{n} -1 < 2\sqrt{n} $$ The difference can be seen by comparing the Right Riemann Sum of the function $t \rightarrow \frac{1}{\sqrt{t}} $ to its integral on $[1,n]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/896259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Using Stokes theorem to integrate $\vec{F}=5y \vec{\imath} −5x \vec{\jmath} +4(y−x) \vec{k}$ over a circle Find $\oint_C \vec{F} \cdot d \vec{r}$ where $C$ is a circle of radius $2$ in the plane $x+y+z=3$, centered at $(2,4,−3)$ and oriented clockwise when viewed from the origin, if $\vec{F}=5y \vec{\imath} −5x \vec{\jmath} +4(y−x) \vec{k}$ Relevant equations: Stokes theorem: $$\int_S \operatorname{curl}{F} \cdot \mathbf{n} \, dS = \oint_{\partial S} F \cdot d\mathbf{r}$$ My attempt: * *For the curl I get $(4,4,-10)$. *For $d\vec{S}$ I get $(1,1,1)$ from $z = 3-x-y$ *Dotted together its $-2$. *So: $-2 \iint_S dA$. *Area of circle is $4\pi$. My answer would be $-8\pi$ but the online homework system says it's not correct. Please help!
Well, I think that it is rather a question of getting C in its parametric form. Your plane is $\Pi :x+y+z=3$. I will try here to find vectors orthogonal to the normal of $\Pi$ (i.e. lying in the plane): let's say that $\underline e_1=(a_1,a_2,a_3)$ and $\underline{e_2}=(-a_1,a_2,0)$. Dot product of those vectors with the normal vector of $\Pi$ gives you these 2 equations: $a_1+a_2+a_3=0, -a_1+a_2=0$. So you can say that $\underline{e_1}= \left(\begin{array}{c} 1 \\ 1 \\ -2 \end{array} \right)$ and $\underline{e_2}=\left(\begin{array}{c} -1 \\ 1 \\ 0 \end{array}\right)$ So that your curve parametric form is: $$C = \sqrt 3 +4 \cos t\cdot \left(\begin{array}{c} 1 \\ 1 \\ -2 \end{array}\right) -4\sin t\cdot \left(\begin{array}{c} -1 \\ 1 \\ 0 \end{array} \right)$$ where $t\in[0,2\pi]$. And from here it becomes much easier. I hope I helped!
{ "language": "en", "url": "https://math.stackexchange.com/questions/896365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integration of some floor functions Can anyone please answer the following questions ? 1) $\int\left \lfloor{x}\right \rfloor dx$ 2) $\int$ $ \left \lfloor{\sin(x)}\right \rfloor $ $dx$ 3) $\int_0^2$ $\left \lfloor{x^2+x-1}\right \rfloor$ $dx$ 4) $\int_o^\pi$ $\left \lfloor{x(1+\sin(\pi x)}\right \rfloor$ Also can anyone please make me understand the way in which to proceed in these types of sums? $\left \lfloor{x}\right \rfloor$ is the floor function Thanks
The floor function turns continuous integration problems into discrete problems, meaning that while you are still "looking for the area under a curve" all of the curves become rectangles. In general, the process you are going to want to take will go something like this: Consider the function (before taking the floor of it), and look at where the output is an integer. For instance, in your first example $f(x)=x$, $f(x)$ equals an integer when an integer is put in. The next step is to look at what happens between these points. Again considering your first example, for $1 \leq x < 2$, the floor function maps everything to 1, so you end up with a rectangle of width 1 and height 1. It is the areas of these rectangles you need to add to find the value of the integral (being careful to understand that rectangles below the x-axis have "negative areas"). In the case of the indefinite integrals you will end up with some summation since you don't know the bounds; on the others you should be able to find exact numerical answers.
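As a concrete illustration of the rectangle idea, here is a short Python sketch (my own, not part of the answer) that evaluates problem 3), $\int_0^2\lfloor x^2+x-1\rfloor\,dx$, by summing the rectangles between consecutive integer crossings:

    from math import floor, sqrt

    g = lambda x: x * x + x - 1.0
    # g is increasing on [0, 2] with g(0) = -1 and g(2) = 5, so floor(g(x)) only jumps
    # where g crosses an integer k, i.e. at x = (-1 + sqrt(5 + 4k)) / 2 for k = 0, ..., 4
    breaks = [0.0] + [(-1.0 + sqrt(5.0 + 4.0 * k)) / 2.0 for k in range(5)] + [2.0]
    total = 0.0
    for a, b in zip(breaks, breaks[1:]):
        total += floor(g((a + b) / 2.0)) * (b - a)   # floor(g) is constant on each piece
    print(total)   # ~1.7263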
{ "language": "en", "url": "https://math.stackexchange.com/questions/896437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 2 }
Is the series $\sum _{n=1}^{\infty } (-1)^n / {n^2}$ convergent or absolutely convergent? Is this series convergent or absolutely convergent? $$\sum _{n=1}^{\infty }\:(-1)^n \frac {1} {n^2}$$ Attempt: I got this using Ratio Test: $$\lim_{n \to \infty} \frac{n^2}{(n+1)^2}$$
Hint: Use the integral test on $\sum_{k=1}^{\infty}\frac{1}{k^{2}}$. Note that the ratio test is inconclusive in this case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/896521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Maximum of Three Uniform Random Variables Here is the question, I am studying for exam P, and am using a study guide with a solution guide. I am stumped on this problem, and the solution in the back was very confusing. Any clarity I can get would be tremendously appreciated. Here is the problem: Three individuals are running a one kilometer race. The completion time for each individual is a random variable. $X_i$ is the completion time, in minutes, for person $i$. $X_1$: uniform distribution on the interval [2.9, 3.1] $X_2$: uniform distribution on the interval [2.7, 3.1] $X_3$: uniform distribution on the interval [2.9, 3.3] The three completion times are independent of one another. Find the expected latest completion time (nearest .1) I let $Y = \max\{X_1, X_2, X_3\}$ $F_Y(y) = P[Y \leq y] = P[\max\{X_1, X_2, X_3\} \leq y]$ So, in short, $5(y-2.9)(2.5)(y-2.7)(2.5)(y-2.9)$, However, the author uses a piecewise function, with what I have above, but he has two different intervals which I don't understand where he got. This is what he has: $$F_Y(y) = \begin{cases} 5(y-2.9)(2.5)(y-2.7)(2.5)(y-2.9), & 2.9 \leq y \leq 3.1 \\ 2.5(y-2.9), & 3.1 \leq y \leq 3.3 \\ \end{cases}$$ Sorry, for the jumbled piecewise function, I'm not sure how to space it out more. My questions, is how did he get the second part where the interval is $3.1 \leq y \leq 3.3$
If $y > 3.1$, then you know that the first two runners are already done. So the probability that the last runner finishes in at most $y$ minutes is the probability that $X_3 \le y$; i.e., $$\Pr[Y \le y] = \Pr[X_3 \le y] = \frac{y-2.9}{3.3-2.9}, \quad 3.1 < y \le 3.3.$$
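A quick Monte Carlo check of the resulting expectation (my own sketch; it should land near 3.12, i.e. 3.1 to the nearest tenth):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10**6
    x1 = rng.uniform(2.9, 3.1, n)
    x2 = rng.uniform(2.7, 3.1, n)
    x3 = rng.uniform(2.9, 3.3, n)
    print(np.maximum(np.maximum(x1, x2), x3).mean())   # ~3.12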
{ "language": "en", "url": "https://math.stackexchange.com/questions/896614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof by Induction: $(1+x)^n \le 1+(2^n-1)x$ I have to prove the following by induction: $$(1+x)^n \le 1+(2^n-1)x$$ for $n \ge 1$ and $0 \le x \le 1$. I start by showing that it's true for $n=1$ and assume it is true for one $n$. $$(1+x)^{n+1} = (1+x)^n(1+x)$$ by assumption: $$\le (1+(2^n-1)x)(1+x)$$ $$= 1+(2^n-1)x+x+(2^n-1)x^2$$ That is what I was able to do on my own. I do not understand the next step of the solution (they are now using $x$ instead of $x^2$ at the end): $$Because\,0 \le x \le 1$$ $$(1+x)^{n+1} \le 1 + (2^n-1)x+x+(2^n-1)x$$ $$=1+2(2^n-1)x+x$$ $$=1+(2^{n+1}-1)x$$ Because $0 \le x \le 1$ I assume that $(2^n-1)x^2 < (2^n-1)x$. So in my opinion, I just proved that $(1+x)^{n+1}$ is less than something greater than what I had to prove initially. Why is it enough to prove the original problem?
$$(1+x)^{n+1}=(1+x)(1+x)^{n}\le(1+x)(1+(2^{n}-1)x)=1+(2^{n}-1)x+x(1+(2^{n}-1)x)$$ $$\underbrace{\le}_{x\le1}1+(2^{n}-1)x+x(1+2^{n}-1)=1+(2^{n}-1)x+2^{n}x$$ $$=1+(2\cdot2^{n}-1)x=1+(2^{n+1}-1)x$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/896720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Consecutive Prime Gap Sum (Amateur) List of the first fifty prime gaps: 1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6, 4, 2, 4, 6, 6, 2, 6, 4, 2, 6, 4, 6, 8, 4, 2, 4, 2, 4, 14, 4, 6, 2, 10, 2, 6, 6, 4, 6, 6, 2, 10, 2, 4, 2, 12, 12, 4, 2, 4. My conjecture is that the sum of consecutive prime gaps is always prime whenever a prime gap of 2 is added. $$ 1 + 2 = 3 $$ $$ 1 + 2 + 2 = 5 $$ $$ 1 + 2 + 2 + 4 + 2 = 11 $$ $$ 1 + 2 + 2 + 4 + 2 + 4 + 2 = 17 $$ $$ 1 + 2 + 2 + 4 + 2 + 4 + 2 + 4 + 6 + 2 = 29 $$ I don't know if this is meaningful or how to go about testing it completely (I've tested it up to 461) so I'll just leave this here and see what comes of it.
Set $g_n=p_{n+1}-p_n$, where $p_n$ is the series of prime numbers, with $p_1=2$. Then $$ p_1+\sum_{i=1}^n g_i=\sum_{i=1}^n g_i+2=p_{n+1}. $$ So the conjecture is obviously true, but not useful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/896802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Is there a bijection from a bounded open interval of $\mathbb{Q}$ onto $\mathbb{Q}$? It is easy to create a bijection between two bounded open intervals of $\mathbb{R}$, such as: $$ \begin{align} f : (a,b) &\to (\alpha,\beta) \\ x &\mapsto \alpha+(x-a)(\beta-\alpha). \end{align} $$ It is also possible to biject a bounded open interval of $\mathbb{R}$ onto the whole of $\mathbb{R}$, e.g.: $$ \begin{align} f : (a,b) &\to \mathbb{R} \\ x &\mapsto \tanh^{-1} x. \end{align} $$ Consider now the set of rational numbers $\mathbb{Q}$. The bijection between two bounded open intervals still holds, but is it possible to biject: $$ f : (p,q) \to \mathbb{Q} $$ where $p,q\in\mathbb{Q}$ and $(p,q) = \{x\in \mathbb{Q} : p < x < q\}$?
Cantor showed that any two countable densely ordered sets with no first or last element are order isomorphic. One can build the isomorphism using his Back and Forth Method. If all we want is a bijection, we can more simply note that both sets are countably infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/896989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Why learn to solve differential equations when computers can do it? I'm getting started learning engineering math. I'm really interested in physics especially quantum mechanics, and I'm coming from a strong CS background. One question is haunting me. Why do I need to learn to do complex math operations on paper when most can be done automatically in software like Maple. For instance, as long as I learn the concept and application for how aspects of linear algebra and differential equations work, won't I be able to enter the appropriate info into such a software program and not have to manually do the calculations? Is the point of math and math classes to learn the big-picture concepts of how to apply mathematical tools or is the point to learn the details to the ground level? Just to clarify, I'm not trying to offend any mathematicians or to belittle the importance of math. From CS I recognize that knowing the deep details of an algorithm can be useful, but that is equally important to be able to work abstractly. Just trying to get some perspective on how to approach the next few years of study.
So that someone can teach the computer how to do it better.
{ "language": "en", "url": "https://math.stackexchange.com/questions/897034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 15, "answer_id": 13 }
how to find $(I + uv^T)^{-1}$ Let $u, v \in \mathbb{R}^N, v^Tu \neq -1$. Then I know that $I +uv^T \in \mathbb{R}^{N \times N}$ is invertible and I can verify that $$(I + uv^T)^{-1} = I - \frac{uv^T}{1+v^Tu}.$$ But I am not able to derive that inverse on my own. How to find it actually? That is if am given $A = I +uv^T$ and I am asked to find $A^{-1}$, how to get the answer?
If you guess that the inverse has a similar form, $$(I+uv^T)^{-1}=\alpha I+\beta uv^T\ ,$$ then multiplying out and equating coefficients will solve the problem.
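A quick NumPy confirmation of the resulting identity (my own sketch; the dimension and random seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.standard_normal((5, 1))
    v = rng.standard_normal((5, 1))
    I = np.eye(5)

    lhs = np.linalg.inv(I + u @ v.T)
    rhs = I - (u @ v.T) / (1.0 + (v.T @ u).item())
    print(np.allclose(lhs, rhs))   # True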
{ "language": "en", "url": "https://math.stackexchange.com/questions/897114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Placing symbols so that no row remains empty. Q.1.The symbols +, + , #, # , *, $ ($6$ in total) are to be placed in the squares of the given figure. Find the number of ways of placing the symbols so that no row (there are $5$ rows) remains empty. I know combinatorics at undergrad level. This problems are from advanced challenging section.
Position would be $(1,1,1,1,2)$ or $(1,1,2,1,1)$ or $(2,1,1,1,1)$ for five rows respectively. Positions to which can beselected in $\binom 32$ for the row with $2$ elements and $3\times3$ for other two. Now treating each symbol as a unique entity, arrangement can be done in $6!$ ways. Now to cancel multiples we divide by $2!2!$ one each for + and #. So ways are: $$3\times\left[\binom 32\times3\times3\right]\times \frac14\times6!=14580\text{ ways}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/897224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Initial-value problem for non-linear partial differential equation $y_x^2=k/y_t^2-1$ For this problem, $y$ is a function of two variables: one space variable $x$ and one time variable $t$. $k > 0$ is some constant. And $x$ takes is value in the interval $[0, 1]$ and $t \ge 0$. At the initial time, $y$ follows a parabolic profile, like $y(x, 0) = 1 - (x-\frac{1 }{2})^2$. Finally, $y$ satisfies this PDE: $$ \left(\frac{\partial y} {\partial x}\right)^2 = \frac{k}{\left(\frac{\partial y} {\partial t}\right)^2} - 1.$$ Does anyone have an idea how to solve this problem (and find the expression of $y(x,t)$) ? About: The problem arise in physics, when studying the temporal shift of a front of iron particles in a magnetic field. Edit: I solved it numerically on a (badly-designed) 1st-order numerical scheme with a small space & time discretization, with the initial condition I wanted (in Octave/Matlab, in Python and in OCaml + GNUplot). The numerical result was enough to confirm the theory and the experiment (the observation done in the lab), so I did not try any further to solve it analytically. See here for an animation of the front of iron matter, and here for more details (in French).
Your equation can be written as follows: $$F(p,q,x,t,y) = (p^2+1)q^2 -k = 0, \quad p = y_x, \quad q = y_t,$$ and hence: \begin{align} F_p & = 2pq^2, \\ F_q & = 2(1+p^2) q,\\ F_t = F_x = F_y & = 0, \end{align} so the Lagrange-Charpit equations read: $$ \frac{\mathrm{d}x}{2 pq^2 }= \frac{\mathrm{d}t}{2(1+p^2)q} = \frac{\mathrm{d}y}{2p^2q^2 + 2(1+p^2)q^2 } = -\frac{\mathrm{d}p}{0} = - \frac{\mathrm{d}q}{0},$$ which tells you that $\mathrm{d}p = \mathrm{d}q = 0$ and, thus, $p = A$ (or $q = B$) constant. Since $p = y_x$ we have that $y(x,t) = Ax+f(t)$. Plug this into the original PDE to find: $$f(t) = \pm \frac{\sqrt{k} t}{\sqrt{1+A^2}} + C, \quad C \in \mathbb{R}, $$ so the solution is finally given by: $$ \color{blue}{y(x,t) = Ax \pm \frac{\sqrt{k} t}{\sqrt{1+A^2}} + C },$$ this is known as a complete integral of your PDE. It remains to set the initial condition $y(x,0)$. Can you take it from here? Cheers!
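If it helps, the complete integral can be checked symbolically; a small SymPy sketch (my own, taking the "+" branch and declaring the symbols positive just to keep the square-root simplification clean):

    import sympy as sp

    x, t, A, C, k = sp.symbols('x t A C k', positive=True)
    y = A * x + sp.sqrt(k) * t / sp.sqrt(1 + A**2) + C

    # the PDE (y_x)^2 = k / (y_t)^2 - 1 should be satisfied identically
    residual = sp.diff(y, x)**2 - (k / sp.diff(y, t)**2 - 1)
    print(sp.simplify(residual))   # 0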
{ "language": "en", "url": "https://math.stackexchange.com/questions/897299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
An ABC soft question about epsilon-delta argument Someone told me that some textbooks present epsilon-delta argument somewhat misleadingly. For example, consider the simplest one: the convergence of the sequence $(1/n)_{1}^{\infty}$ to $0$. These textbooks may prove this convergence as follows: Let $\epsilon > 0$. Since $n \geq N,$ we have $1/n \leq 1/N,$ so that if we choose $N := [1/\epsilon] + 1$ (where [x] denotes the greatest integer not greater than the given number $x$) then $n \geq N$ implies $1/n < \epsilon$. Yes, such proof looks to prove the implication $n \geq N \implies 1/n < \epsilon.$ But indeed this implication is assumed valid and what requires to prove is the existence of $N$ for every $\epsilon > 0.$ I do not know how to reply to such question, would anyone please help?
" ...what requires to prove is the existence of N for every ϵ>0." The Archimedean property of real numbers ensures that for every real $\epsilon > 0$ there exists a positive integer $N$ such that $$N > \frac1{\epsilon}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/897396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Vertical asymptote of $h(x)=\frac{x^2e^x}x$ $$h(x)=\frac{x^2e^x}x$$ The function h is defined above. Which of the following are true about the graph of $y=h(x)$? * *The graph has a vertical asymptote at $x=0$ *The graph has a horizontal asymptote at $y=0$ *The graph has a minimum point A. None B. 1 and 2 Only C. 1 and 3 Only D. 2 and 3 Only E. 1, 2, and 3 I thought it was E but the answer is D, meaning the vertical asymptote is not at $x=0$, why? If you set the denominator to $0$ it equals $0$ since $x=0$.
If you are allowed negative values of $x$, then you do get a horizontal asymptote at $y = 0$ for negative $x$ because $x e^x$ increases up to $0$ (but never reaches it) as $x$ becomes larger and larger magnitude negative numbers. Also, if you are allowed negative $x$, then the function has a minimum. The derivative is $(x+1)e^x$ which is $0$ at $x = -1$ which is where the minimum is achieved. So if negative $x$ is allowed, then 2 and 3 are true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/897462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Complex function defined by contour integral along a smoothly varying path Let $D$ be a domain in the complex plane. Consider the function $F: D\to \mathbb{C}$, defined by $$ F\left( z \right) = \int_{\mathscr{C}\left( z \right)} {f\left( {z,t} \right)dt} . $$ Suppose that the path $\mathscr{C}\left( z \right)$ is a continuous function of $z\in D$, $f\left( {z,t} \right)$ is an analytic function of $z\in D$ and $t\in \mathscr{C}\left( z \right)$, and that the integral converges absolutely for any $z\in D$. Is it true that $F$ is analytic in $D$?
Simple special case for intuition's sake: let $D=\mathbb C$, $f(z,t)=1$ and $\gamma_z(t)=g(z)t$, $0\leq t \leq 1$, for some function $g$. Then $F(z)=g(z)$, so $F$ is analytic if and only if $g$ is analytic. Suppose now, in addition to your stated hypotheses, that $z\mapsto \gamma_z'(t)$ and $z\mapsto \gamma_z(t)$ are analytic for each $t$. Then $z\mapsto f(z,\gamma_z(t))\gamma_z'(t)$ is analytic for each $t$ and uniformly bounded on every compact set $C\subset D$. It follows that $$ F(z)=\int _0^1 f(z,\gamma_z(t))\gamma_z '(t) \,dt $$ is well-defined, and it is a simple consequence of Morera's Theorem (as pointed out by Mhenni Benghorbal) coupled with Fubini's Theorem that $F$ is analytic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/897528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Group with order $|G|=p^2q^2$ and $p\not\mid q-1, p\not\mid q+1$ Let $G$ be a group of order $|G|=p^2q^2$ with $p\not\mid q-1$ and $p\not\mid q+1$, where $p\not=q$ are both prime. I want to show that there is only one $p$-Sylow subgroup. Let $S_p(G)$ be the number of $p$-Sylow subgroups. I know that $S_p(G) \equiv 1 \mod p$ and $S_p(G) \mid q^2$. If $S_p(G) > 1$, then $S_p(G)= q = k \cdot p + 1$ or $S_p(G) = q^2 = k \cdot p +1$ with $k \in \mathbb{N}_0$. If $q = k \cdot p + 1$, then $q-1=k \cdot p$, so $p \mid q-1$, and this contradicts our assumption. So I still have to rule out the case $q^2 = k \cdot p +1$, but I don't know how. Thanks for your help.
Note that from $q^2 = kp + 1$ we get that $$kp = q^2 - 1 = (q - 1)(q+1).$$ So $p \mid (q-1)(q+1)$, but as $p$ is prime we must have $p \mid q - 1$ or $p \mid q + 1$; either way, we have a contradiction. Therefore, $S_p(G) \neq q^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/897614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show there exists a value such that each partial sum equals its limit in modulus For each $n \in \mathbb{N}_0$, and for all $z \in \mathbb{C}$, define $$p_n(z) := \sum^{n}_{k=0} {z^k \over {k!}}.$$ Show that for all $r > 0$ and for all $n \in \mathbb{N}_0$, there exists $z \in \mathbb{C}$ with $|z|=r$ such that $|p_n(z)| = |e^z|$. There is a first part to this problem that asks one to show that for all $r > 0$, there exists $N \in \mathbb{N}$ such that for all $n > N$, $p_n$ has no zeros in $B(0,r)$. This follows pretty immediately from $e^z$ being the limit of $p_n$ and Hurwitz's Theorem. So far, I haven't found this useful. I'm also curious as to what the assignment $z(n;r)$ looks like, and if it's a function. I know $z(0;r) = ir$ thus far, and these also appear to be the only solutions for $n=0$. Edit: tag for Bessel functions added since one of the answers makes an interesting use of them for a more technical approach to this problem.
As stated in the comments, we have that $\|p_n(r)\|=p_n(r)<e^r=\|e^r\|$, hence we just need to prove that: $$\exists\, z=r\,e^{i\theta}:\quad \|p_n(z)\|^2 > \|e^z\|^2 = e^{2r\cos\theta}\tag{1}$$ then apply a continuity argument. If we prove that: $$\int_{0}^{\pi}\|p_n(r e^{i\theta})\|^2\sin\theta\,d\theta > \int_{0}^{\pi}e^{2r\cos\theta}\sin\theta\,d\theta=\frac{\sinh(2r)}{r}\tag{2}$$ then $(1)$ just follows. Notice that: $$\|p_n(r e^{i\theta})\|^2 = p_n(re^{i\theta})\cdot p_n(r e^{-i\theta})=\sum_{j=0}^{n}\sum_{k=0}^{n}\frac{r^{j+k}}{j!\,k!}e^{(j-k)i\theta},\tag{3}$$ $$\int_{0}^{\pi}\cos(m\theta)\sin\theta\,d\theta = \left\{\begin{array}{rcl}0 &\text{if}& m\equiv 1\!\!\!\pmod{2}\\ -\frac{2}{m^2-1}&\text{if}&m\equiv 0\!\!\!\pmod{2}\end{array}\right.\tag{4}$$ so: $$\begin{eqnarray*}\int_{0}^{\pi}\|p_n(r e^{i\theta})\|^2\sin\theta\,d\theta&=&2\sum_{j=0}^n\frac{r^{2j}}{j!^2}-\sum_{k=1}^{\lfloor n/2\rfloor}\frac{4}{4k^2-1}\sum_{j=0}^{n-2k}\frac{r^{2j+2k}}{j!(j+2k)!}\\&>&2\,I_0(2r)-\sum_{k=1}^{+\infty}\frac{4\,I_{2k}(2r)}{4k^2-1}\end{eqnarray*}\tag{5}$$ where $I_n(z)$ is the modified Bessel function of the first kind. By using the integral representation for such functions, from $(5)$ we have: $$\begin{eqnarray*}\int_{0}^{\pi}\|p_n(r e^{i\theta})\|^2\sin\theta\,d\theta&>&2\,I_0(2r)-\frac{1}{\pi}\int_{0}^{\pi}e^{2r\cos \theta}\sum_{k=1}^{+\infty}\frac{4\cos(2k\theta)}{4k^2-1}\,d\theta\\&=&2\,I_0(2r)-\frac{1}{\pi}\int_{0}^{\pi}e^{2r\cos \theta}\left(2-\pi\sin\theta\right)d\theta\\&=&\int_{0}^{\pi}e^{2r\cos\theta}\sin\theta\,d\theta=\frac{\sinh(2r)}{r},\end{eqnarray*}$$ as wanted. Maybe it is possible to prove $(1)$ by choosing a different weight-function $\psi:(0,\pi)\to\mathbb{R}^+$ and proving the weigthed $L^2((0,\pi))$ inequality: $$\int_{0}^{\pi}\|p_n(r e^{i\theta})\|^2\psi(\theta)\,d\theta>\int_{0}^{\pi}e^{2r\cos\theta}\psi(\theta)\,d\theta\tag{2*}$$ $\psi(\theta)=\sin(\theta)$ was just the most natural choice for me, since I numerically checked that $(2*)$ does not hold with $\psi(\theta)=1$. This problem also arose another question on Meta.
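Independently of the weighted-integral argument above, one can gather numerical evidence for step $(1)$ itself, namely that on each circle $|z|=r$ the quantity $\|p_n(z)\|-\|e^z\|$ changes sign, so a point of equality exists by continuity. Here is a rough sketch (my own addition, not part of the proof); the helper `p_n` and the grid resolution are arbitrary choices.

```python
# Numerical illustration of claim (1): on |z| = r the sign of |p_n(z)| - |e^z|
# changes, so |p_n(z)| = |e^z| somewhere on the circle by continuity.
import numpy as np
from math import factorial

def p_n(z, n):
    return sum(z**k / factorial(k) for k in range(n + 1))

theta = np.linspace(0.0, 2*np.pi, 200001)
for n in (1, 2, 3, 5, 8):
    for r in (0.5, 1.0, 2.0, 10.0):
        z = r * np.exp(1j*theta)
        diff = np.abs(p_n(z, n)) - np.exp(r*np.cos(theta))   # |e^z| = e^{r cos(theta)}
        print(n, r, diff.max() > 0 > diff.min())             # expect True every time
```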
{ "language": "en", "url": "https://math.stackexchange.com/questions/897716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is $\mathbb{R}$ a subspace of $\mathbb{R}^2$? I think I've been confusing myself about the language of subspaces and so on. This is a rather basic question, so please bare with me. I'm wondering why we do not (or perhaps "we" do, and I just don't know about it) say that $\mathbb{ R } $ is a subspace of $\mathbb{ R }^2 $. It's elementary to prove that the set $$ S:= \left\{ c \cdot \mathbf{x} \mid c \in \mathbb{ R }, \mathbf{x} \in \mathbb{ R }^2 \right\}$$ is a vector subspace of $\mathbb{ R } ^2$. What is confusing me is that there seems to be an isomorphism between the set $S$ and $\mathbb{ R } $: \begin{align*} \varphi: S &\rightarrow \mathbb{ R } \\ c \cdot \mathbf{x} &\mapsto c \\ \end{align*} If this is indeed true, as I believe it is having checked that $\varphi$ gives an isomorphism, wouldn't we say that $\mathbb{ R } $ is a subspace of $\mathbb{ R } ^2$? Any help sorting out this (language) problem will be greatly appreciated!
This is indeed an important question. No doubt that $\mathbb{R}$ is isomorphic to many subspaces of $\mathbb{R}^2$, or in other words, $\mathbb{R}$ can be embedded in many ways in $\mathbb{R}^2$. The thing is that there isn't any specific subspace in $\mathbb{R}^2$ which is the best one to represent $\mathbb{R}$. If you want to treat $\mathbb{R}$ as a subspace, you need to specify the embedding $\mathbb{R}\hookrightarrow\mathbb{R}^2$ you refer to. I would say that as long as we don't choose the embedding, $\mathbb{R}$ is not a subspace of $\mathbb{R}^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/897775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Find $\delta$ when $\epsilon$ is known (limit problem) I had a wicked time of these limit problems on a quiz, I can't find any examples of this with the $x$ in the denominator. How do you establish the connection between $|x – 1|$ and $|f(x) – 1|$? Find the largest $\delta$ such that if $0 < |x – 1| < \delta $, then $|f(x) – 1| < 0.1$. $f(x) = 2 - \frac{1}{x}$
Start by writing down what $|f(x) - 1| < 0.1$ means and then solve for a range of $x$. $|f(x)-1| < 0.1$ $-0.1 < f(x)-1 < 0.1$ $0.9 < f(x) < 1.1$ $0.9 < 2-\dfrac{1}{x} < 1.1$ Can you find a range for $x$ that makes this true? That will help you determine what $\delta$ needs to be.
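If you want to cross-check the value you end up with, here is a small numerical sketch (my own addition, not part of the hint): it scans points near $1$ and reports the distance from $1$ to the nearest point where $|f(x)-1| < 0.1$ fails, which is the largest admissible $\delta$.

```python
# Numerically estimate the largest delta for f(x) = 2 - 1/x at the point x = 1.
import numpy as np

f = lambda x: 2 - 1/x
xs = np.linspace(0.5, 1.5, 2_000_001)
bad = xs[np.abs(f(xs) - 1) >= 0.1]      # points where |f(x) - 1| < 0.1 fails
print(np.abs(bad - 1).min())            # compare with the delta you get by hand
```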
{ "language": "en", "url": "https://math.stackexchange.com/questions/897860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluation of a sum of $(-1)^{k} {n \choose k} {2n-2k \choose n+1}$ I have a question about the paper whose title is Spanning trees: Let me count the ways. The question concerns the sum $\sum_{k=0}^{\lfloor\frac{n-1}{2} \rfloor} (-1)^{k} {n \choose k} {2n-2k \choose n+1}$. Could you recommend how to prove that $\displaystyle \sum_{k=0}^{\lfloor\frac{n-1}{2} \rfloor} (-1)^{k} {n \choose k} {2n-2k \choose n+1}=n 2^{n-1}$?
Since $\binom{m-2k}{n+1}$ is a degree $n+1$ polynomial in $k$ with lead term $\frac{(-2k)^{n+1}}{(n+1)!}$, we get that the multiple forward difference $\Delta_k^{n+1}\binom{m-2k}{n+1}=(-2)^{n+1}$. Multiply both sides by $(-1)^{n+1}$ to get $$ \begin{align} 2^{n+1} &=\sum_{k=0}^{n+1}(-1)^k\binom{n+1}{k}\binom{m-2k}{n+1}\\ &=\sum_{k=0}^{n+1}(-1)^k\left[\binom{n}{k}+\binom{n}{k-1}\right]\binom{m-2k}{n+1}\\ &=\sum_{k=0}^n(-1)^k\binom{n}{k}\binom{m-2k}{n+1}-\sum_{k=0}^n(-1)^k\binom{n}{k}\binom{m-2k-2}{n+1}\tag{1} \end{align} $$ Equation $(1)$ also says that for some $m_0$, $$ \sum_{k=0}^n(-1)^k\binom{n}{k}\binom{m-2k}{n+1}=2^n(m-m_0)\tag{2} $$ To determine $m_0$, consider the equation $$ \begin{align} \sum_{k=0}^n(-1)^k\binom{n}{k}\binom{m-2k}{n+1} &=\sum_{k=0}^n(-1)^{n+1-k}\binom{n}{k}\binom{n-m+2k}{n+1}\\ &=\sum_{k=0}^n(-1)^{k+1}\binom{n}{k}\binom{3n-m-2k}{n+1}\tag{3} \end{align} $$ If we set $m=\frac32n$, then the left and right sides of $(3)$ are negatives of each other yet equal, therefore, $0$. Thus, $m_0=\frac32n$. Therefore, $$ \sum_{k=0}^n(-1)^k\binom{n}{k}\binom{m-2k}{n+1}=2^n\left(m-\tfrac32n\right)\tag{4} $$ Plugging $m=2n$ into $(4)$ and noting that $2n-2k\ge n+1\implies k\le\frac{n-1}{2}$, we get $$ \sum_{k=0}^{\left\lfloor\frac{n-1}{2}\right\rfloor}(-1)^k\binom{n}{k}\binom{2n-2k}{n+1}=n2^{n-1}\tag{5} $$
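For anyone who wants to double-check the algebra, here is a short exact-arithmetic verification (my own addition) of identities $(4)$ and $(5)$ for small $n$; the range of $m$ tested is an arbitrary choice, restricted to $m \ge 2n$ so that ordinary integer binomial coefficients apply.

```python
# Exact check of identities (4) and (5) for small n using integer binomials.
from math import comb

for n in range(1, 15):
    s5 = sum((-1)**k * comb(n, k) * comb(2*n - 2*k, n + 1)
             for k in range((n - 1)//2 + 1))
    assert s5 == n * 2**(n - 1)                      # identity (5)
    for m in range(2*n, 2*n + 5):
        s4 = sum((-1)**k * comb(n, k) * comb(m - 2*k, n + 1) for k in range(n + 1))
        assert 2*s4 == 2**n * (2*m - 3*n)            # identity (4), cleared of the fraction
print("identities (4) and (5) hold for n = 1, ..., 14")
```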
{ "language": "en", "url": "https://math.stackexchange.com/questions/897948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finding the asymptotes of a general hyperbola I'm looking to find the asymptotes of a general hyperbola in $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ form, assuming I know the center of the hyperbola $(h, k)$. I came up with a solution, but it's too long for me to be confident that I didn't make a mistake somewhere, so I was wondering if I could run it by someone and see if it works. It's mostly algebraic, and I'm prone to making tiny errors in algebra that throw off the entire problem. So to start, since we know the center $(h, k)$, we can first translate the hyperbola by $(-h, -k)$ using the transform $x_0 = x - \Delta{x}, y_0 = y - \Delta{y}$ with $\Delta{x} = -h$ and $\Delta{y} = -k$. Assuming $F'$ is the translated $F$, we can divide the entire equation by $-F'$ to put it in the following form: $$ ax^2 + bxy + cy^2 + dx + ey = 1 $$ With $a = -A'/F'$ and $A'$ the translated $A$, $b = -B'/F'$ and $B'$ the translated $B$, and so on. Next we convert to polar coordinates to get the following: $$ r^2(a\cos^2{\theta} + b\cos{\theta}\sin{\theta} + c\sin^2{\theta}) + r(d\cos{\theta} + e\sin{\theta}) - 1 = 0 $$ Solving for $r$ will give us $$ r = \frac{-d\cos{\theta} - e\sin{\theta} \pm \sqrt{(d\cos{\theta} + e\sin{\theta})^2 + 4(a\cos^2{\theta} + b\cos{\theta}\sin{\theta} + c\sin^2{\theta})}}{2(a\cos^2{\theta} + b\cos{\theta}\sin{\theta} + c\sin^2{\theta})} $$ Now assume $\theta_0 = 2(a\cos^2{\theta} + b\cos{\theta}\sin{\theta} + c\sin^2{\theta})$, this means that as $\theta_0 \rightarrow 0^{\pm}$, $r \rightarrow \pm\infty$. The angles at which $r \rightarrow \pm\infty$ are the asymptotes of the hyperbola, so now it's just a matter of solving for where $\theta_0 = 0$. This is where the majority of the algebra takes place and this is where I'm worried I made some miniscule mistake. $$ \begin{align} & 2(a\cos^2{\theta} + b\cos{\theta}\sin{\theta} + c\sin^2{\theta}) = 0\\ & \Longleftrightarrow a(1 - \sin^2{\theta}) + b\cos{\theta}\sin{\theta} + c\sin^2{\theta} = 0 \\ & \Longleftrightarrow \sin^2{\theta}(c - a) + b\cos{\theta}\sin{\theta} = -a \\ & \Longleftrightarrow \frac{1 - \cos{2\theta}}{2}(c - a) + \frac{b}{2}\sin{2\theta} = -a \\ & \Longleftrightarrow (c - a)(1 - \cos{2\theta}) + b\sin{2\theta} = -2a \\ & \Longleftrightarrow (c - a)(1 - \cos{2\theta}) + 2a = -b\sqrt{1 - \cos^2{2\theta}} \\ & \Longleftrightarrow \frac{a - c}{b}(1 - \cos{2\theta}) - \frac{2a}{b} = \sqrt{1 - \cos^2{2\theta}} \\ & \Longleftrightarrow (\frac{a - c}{b}(1 - \cos{2\theta}))^2 - 4a\frac{a - c}{b^2}(1 - \cos{2\theta}) + (\frac{2a}{b})^2 = 1 - \cos^2{2\theta} \\ & \Longleftrightarrow (\frac{a - c}{b})^2(1 - 2\cos{2\theta} + \cos^2{2\theta}) - 4a\frac{a - c}{b^2}(1 - \cos{2\theta}) + (\frac{2a}{b})^2 = 1 - \cos^2{2\theta} \\ & \Longleftrightarrow [(\frac{a - c}{b})^2 + 1]\cos^2{2\theta} + [2(\frac{a - c}{b})(\frac{a}{b} - \frac{a - c}{b})]\cos{2\theta} + [(\frac{2a}{b})^2 + (\frac{a - c}{b})^2 - 4a\frac{a - b}{b^2}) - 1] = 0 \\ & \Longleftrightarrow [(\frac{a - c}{b})^2 + 1]\cos^2{2\theta} + 2(\frac{a - c}{b})(\frac{c}{b})\cos{2\theta} + [(\frac{2a}{b})^2 + (\frac{a - c}{b})(\frac{-3a - c}{b}) - 1] = 0 \\ & \Longleftrightarrow [(\frac{a - c}{b})^2 + 1]\cos^2{2\theta} + 2(\frac{a - c}{b})(\frac{c}{b})\cos{2\theta} + [\frac{a^2 + 2ac + c^2}{b^2} - 1] = 0 \\ \end{align} $$ Now let $$ U = [(\frac{a - c}{b})^2 + 1] \\ V = 2(\frac{a - c}{b})(\frac{c}{b}) \\ W = \frac{a^2 + 2ac + c^2}{b^2} - 1 $$ So that the above equation becomes $$ U\cos^2{2\theta} + V\cos{2\theta} + W = 0. 
$$ Solving for $\cos{2\theta}$ gives us $$ \cos{2\theta} = \frac{-V \pm \sqrt{V^2 - 4UW}}{2U}. $$ Finally we may solve for theta like so, $$ \theta = \frac{1}{2}\arccos{\frac{-V \pm \sqrt{V^2 - 4UW}}{2U}}. $$ This gives us two numbers, $\theta_1$ and $\theta_2$, each corresponding to the slopes $m_1$ and $m_2$ of the asymptotes. The relationship between the slope of a line $m$ and the angle $\theta$ between the line and the positive x-axis is $m = \tan{\theta}$. You can use the identity $\tan{(\frac{1}{2}\arccos{x})} = \sqrt{\frac{1 - x}{x + 1}}$ to solve for $m_1$ and $m_2$ in terms of non-trig functions, but I think this answer is sufficient enough. Now that we have the slopes of the asymptotes, we can find the y-intercepts $b_1$ and $b_2$ of each line by simply plugging in the original center $(h, k)$ into each equation for the line and solving. $$ b_1 = k - m_1h \\ b_2 = k - m_2h $$ Thus, the asymptotes of the hyperbola with general equation $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$ have equations $y = m_1x + b_1$ and $y = m_2x + b2$. Does this look correct? Also, did I overcomplicate things? Was there an easy solution all along that I was missing?
For a conic, $$ax^2+2hxy+by^2+2gx+2fy+\color{blue}{c}=0$$ which is a hyperbola when $ab-h^2<0$. Its asymptotes can be found by replacing $\color{blue}{c}$ by $\color{red}{c'}$ where $$\det \begin{pmatrix} a & h & g \\ h & b & f \\ g & f & \color{red}{c'} \end{pmatrix} =0$$ That is $$\color{red}{c'}=\frac{af^2+bg^2-2fgh}{ab-h^2}$$ The asymptotes are $$\fbox{$ax^2+2hxy+by^2+2gx+2fy+\frac{af^2+bg^2-2fgh}{ab-h^2}=0 \,$}$$ Alternatively, using the centre of the conics $$ \left( \frac{bg-fh}{h^2-ab}, \frac{af-gh}{h^2-ab} \right)$$ and the slope $m$ of an asymptote is given by $$a+2hm+bm^2=0$$ On solving, $$m=\frac{-h \pm \sqrt{h^2-ab}}{b}=\frac{a}{-h \mp \sqrt{h^2-ab}}$$ Therefore $$\fbox{$ y-\frac{af-gh}{h^2-ab}= \frac{-h \pm \sqrt{h^2-ab}}{b} \left( x-\frac{bg-fh}{h^2-ab} \right) $}$$ See another answer here for your interest.
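As a quick cross-check of the $c \to c'$ recipe, here is a small SymPy sketch (my own addition, not part of the answer); the concrete hyperbola $x^2+4xy+y^2+2x+4y-3=0$ and the variable names are just illustrative choices.

```python
# Check that the determinant condition reproduces c', then verify on an example
# that each line through the centre with slope from a + 2hm + bm^2 = 0 lies on
# the degenerate conic obtained by replacing c with c'.
import sympy as sp

a, b, h, g, f, cp, x, y, m = sp.symbols('a b h g f cp x y m')

M = sp.Matrix([[a, h, g], [h, b, f], [g, f, cp]])
c_new = sp.solve(sp.Eq(M.det(), 0), cp)[0]
print(sp.simplify(c_new - (a*f**2 + b*g**2 - 2*f*g*h)/(a*b - h**2)))   # 0

# Example: x^2 + 4xy + y^2 + 2x + 4y - 3 = 0, i.e. a = b = 1, h = 2, g = 1, f = 2.
vals = {a: 1, b: 1, h: 2, g: 1, f: 2}
pair = x**2 + 4*x*y + y**2 + 2*x + 4*y + c_new.subs(vals)   # the pair of asymptotes
x0, y0 = -1, 0                                 # centre, from the formulas above
for slope in sp.solve(1 + 2*2*m + 1*m**2, m):  # a + 2hm + bm^2 = 0
    line = y0 + slope*(x - x0)
    print(sp.simplify(pair.subs(y, line)))     # 0 for both slopes
```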
{ "language": "en", "url": "https://math.stackexchange.com/questions/898005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Determining if a recursively defined sequence converges and finding its limit Define $\lbrace x_n \rbrace$ by $$x_1=1, x_2=3; x_{n+2}=\frac{x_{n+1}+2x_{n}}{3} \text{if} \, n\ge 1$$ The instructions are to determine if this sequence exists and to find the limit if it exists. I prove that the sequence is eventually decreasing and that it is bounded below (see work below). Hence, if my work is correct, the sequence will have a limit. The problem is that if the limit is, say, $L$ then it should satisfy $L=\frac{L+2L}{3}$; however, every real number satisfies the latter equality. So, if this limit exists, how do I actually find it? Proving the limit exists: 1) The sequence is bounded below by $1$. Base Case: $x_1=1\ge 1$. Induction: Suppose $x_n\ge 1$ for all $k=1,\ldots ,n+1$. Then $x_{n+2}=\frac{x_{n+1}+2x_{n}}{3}\ge \frac{3}{3}=1$. 2) The sequence is eventually decreasing: It actually starts decreasing after the 4th term. So I take as my base case $x_5<x_4$. Base Case: Note that $$x_3=\frac{x_2+2x_1}{3}=\frac{3+2}{3}=5/3$$ and hence $$x_4=\frac{x_{3}+2x_{2}}{3}=\frac{5/3+2\times3}{3}=23/9$$ so $$x_5=\frac{x_{4}+2x_{3}}{3}=\frac{23/9+2\times5/3}{3}=\frac{53}{27}<x_4$$ Induction: Suppose $x_{n}<x_{n-1}.$ Then $$x_{n+1}-x_n=\frac{x_{n}+2x_{n-1}}{3}-x_n \\=2\frac{x_{n-1}-x_n}{3}<0$$ Therefore, the sequence is bounded below and is eventually decreasing.
Setting $$ y_n=x_{n+1}-x_n \quad \forall n\in \mathbb{N}, $$ we have $$ y_{n+1}=x_{n+2}-x_{n+1}=\frac{x_{n+1}+2x_n}{3}-x_{n+1}=-\frac23(x_{n+1}-x_n)=-\frac23y_n. $$ It follows that $$ y_n=\left(-\frac23\right)^{n-1}y_1=2\left(-\frac23\right)^{n-1} \quad \forall n \in \mathbb{N}. $$ Finally, for every $n\in \mathbb{N}$, we get: \begin{eqnarray} x_n&=&x_1+\sum_{k=1}^{n-1}(x_{k+1}-x_k)=x_1+\sum_{k=1}^{n-1}y_k=1+2\sum_{k=1}^{n-1}\left(-\frac23\right)^{k-1}=1+2\frac{1-(-2/3)^{n-1}}{1+2/3}\\ &=&1+\frac65\left[1-\left(-\frac23\right)^{n-1}\right]=\frac{11}{5}-\frac65\left(-\frac23\right)^{n-1}. \end{eqnarray} We can now see that $\{x_n\}$ converges and its limit is $\frac{11}{5}$.
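A quick way to convince yourself of the closed form (my own addition, using exact rational arithmetic, not anything from the answer):

```python
# Check x_n = 11/5 - (6/5) * (-2/3)^(n-1) against the recurrence, and watch the
# terms approach 11/5.
from fractions import Fraction

x = [Fraction(1), Fraction(3)]                    # x_1, x_2
for _ in range(20):
    x.append((x[-1] + 2*x[-2]) / 3)

for n, xn in enumerate(x, start=1):
    assert xn == Fraction(11, 5) - Fraction(6, 5) * Fraction(-2, 3)**(n - 1)
print(float(x[-1]))                               # ~2.2, i.e. 11/5
```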
{ "language": "en", "url": "https://math.stackexchange.com/questions/898071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Geometry, finding the possible values of 'a' $P (a,4)$, $Q (2,3)$, $R (3,-1)$ and $S (-2,4)$ are four points. If $|PQ| = |RS|$, find the possible values of a I know this is a pretty basic problem but I'm having a lot of trouble with it, here is my answer: I found $|RS|$ to be $\sqrt{50}$ so |PQ| = sqrt-2a^2 + 1 = sqrt50 and from there I got $\sqrt{8}$ which is incorrect because that is only one value. The true answer should be; $-5$ or $9$ (according to textbook) What is the proper method that I should use to come to the correct solution?
There's two things that have gone wrong here. The first is that you have incorrectly calculated the horizontal distance (I'll call it $d$) between P and Q. It should be $$\sqrt{50} = \sqrt{d^2+(4-3)^2}$$ $$50 = d^2+1^2$$ $$d^2=49$$ $$d=7$$ The second is, once you've gotten a distance value, you can then use it to find the possible values of $a$: it could be $2 + 7 = 9$ or $2 - 7 = -5$, as both are 7 units away from Q horizontally.
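A tiny numerical confirmation (my own addition) that both values work:

```python
# Both a = 9 and a = -5 give |PQ| equal to |RS| = sqrt(50).
from math import dist, isclose, sqrt

assert isclose(dist((3, -1), (-2, 4)), sqrt(50))          # |RS|
for a in (9, -5):
    assert isclose(dist((a, 4), (2, 3)), sqrt(50))        # |PQ|
print("both a = 9 and a = -5 satisfy |PQ| = |RS|")
```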
{ "language": "en", "url": "https://math.stackexchange.com/questions/898209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The limit of complex sequence $$\lim\limits_{n \rightarrow \infty} \left(\frac{i}{1+i}\right)^n$$ I think the limit is $0$; is it true that $\forall a,b\in \Bbb C$, if $|a|<|b|$ then $\lim\limits_{n\rightarrow \infty}\left(\frac{a}{b}\right)^n=0$? I would like to see a proof, if possible. Thank you
$\frac{i}{1+i} = \frac{i(1-i)}{(1+i)(1-i)} = \frac{i(1-i)}{2}=\dfrac{e^{\frac{\pi}{2}i}e^{-\frac{1}{4}\pi i}}{\sqrt{2}} = \dfrac{e^{\frac{\pi}{4}i}}{\sqrt{2}}$, so $(\frac{i}{1+i})^n = \dfrac{e^{\frac{n\pi}{4}i}}{(\sqrt{2})^n}$, which has modulus $2^{-n/2} \to 0$; hence the limit is $0$. In general, if $|a|<|b|$ then $\left|\left(\frac{a}{b}\right)^n\right| = \left(\frac{|a|}{|b|}\right)^n \to 0$ because $\frac{|a|}{|b|}<1$, and a sequence of complex numbers whose moduli tend to $0$ tends to $0$.
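For a quick numerical illustration (my own addition), one can watch the modulus shrink:

```python
# |(i/(1+i))^n| equals 2^(-n/2), which tends to 0.
for n in (1, 5, 10, 20, 50):
    w = (1j / (1 + 1j))**n
    print(n, abs(w), 2**(-n/2))
```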
{ "language": "en", "url": "https://math.stackexchange.com/questions/898296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Integrate $\int \left(A x^2+B x+c\right) \, dx$ I am asked to find the solution to the initial value problem $$y'=A x^2+B x+c,$$ where $y(1)=1$. I get: $$\frac{A x^3}{3}+\frac{B x^2}{2}+c x+d$$ But the answer to this is: $$y=\frac{1}{3} A \left(x^3-1\right)+\frac{1}{2} B \left(x^2-1\right)+c (x-1)+1.$$ Could someone show me what has been done and explain why?
$$\frac{A x^3}{3}+\frac{B x^2}{2}+c x+d$$ where $d$ is a constant to fix, is equivalent to $$\frac{1}{3} A \left(x^3-1\right)+\frac{1}{2} B \left(x^2-1\right)+c (x-1)+d$$ where $d$ is (another) constant to fix: the two antiderivatives differ only by a constant, and that constant can be absorbed into $d$. And the second one is surely more handy for applying the initial condition, since setting $x=1$ makes every bracketed term vanish, so $y(1)=1$ gives $d=1$ at once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/898363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Division of point-value representation polynomials In Cormen's "Introduction to algorithms" is exercise: "Explain what is wrong with the “obvious” approach to polynomial division using a point-value representation, i.e., dividing the corresponding y values. Discuss separately the case in which the division comes out exactly and the case in which it doesn’t." Could anybody give me a solution to that exercise? I searched the web but I didn't find solutions/hints. Additionally, I have completely no idea why dividing the corresponding y values is wrong. So, could anybody explain me that? It interests me and it isn't my homework.
If you are using a point-value representation of polynomials, then if you have something that isn't a polynomial (e.g. because it is a rational function with nonconstant denominator), then it can't possibly be represented correctly. However, to go beyond the point your book is trying to make, you can do point-value representation of rational functions too. It's been a while since I've done it, but I think if you want to represent all rational functions whose numerator and denominator in lowest terms have degrees less than or equal to $c$ and $d$, then you need $c+d+1$ points. Note that point value form doesn't give any way to tell the difference between "rational function whose numerator and denominator have degrees $c$ and $d$" and "polynomial of degree $c+d$": you have to decide ahead of time exactly which space of objects you wish to represent by point-value form. More generally, if you are using a point-value representation with points $a_1, a_2, \ldots, a_n$, then define $$ m(x) = \prod_{i=1}^n (x - a_i) $$ Then doing arithmetic in point-value representation is really the same thing as doing arithmetic modulo $m(x)$. So anything that can be exactly expressed in terms of arithmetic modulo $m(x)$ can be correctly computed using point-value representation. There is a version of rational reconstruction that applies to rational functions and polynomials rather than rational numbers and integers.
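To make the failure concrete, here is a small illustrative sketch (my own addition, not from Cormen or the answer above): dividing point values works when the quotient really is a polynomial, but when it is not, the "result" silently depends on which evaluation points you happened to pick.

```python
# Pointwise division in point-value form: fine for exact division, meaningless otherwise.
import numpy as np

def divide_pointwise(p, q, pts):
    """Evaluate p and q (coefficient lists, highest degree first) at pts, divide
    the values, and interpolate the quotient back to coefficient form."""
    vals = np.polyval(p, pts) / np.polyval(q, pts)
    return np.polyfit(pts, vals, len(pts) - 1)

# Exact division: (x^2 - 1)/(x - 1) = x + 1 is recovered correctly.
print(divide_pointwise([1, 0, -1], [1, -1], np.array([2.0, 3.0])))   # ~[1, 1]

# Inexact division: (x^2 + 1)/(x - 1) is not a polynomial, and the answer you
# get depends on the sample points (and blows up if a point is a root of x - 1).
print(divide_pointwise([1, 0, 1], [1, -1], np.array([2.0, 3.0])))
print(divide_pointwise([1, 0, 1], [1, -1], np.array([4.0, 5.0])))
```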
{ "language": "en", "url": "https://math.stackexchange.com/questions/898449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find a formula for the count of strings I have to find a formula for the number of such sequences. The length of a sequence is $2n$: every number in the range $[1\ldots n]$ must be used exactly twice, and no two neighbouring numbers may be equal. For example: $a_0 = 0$, $a_1 = 0$, $a_2 = 2$ (because of $1212$ and $2121$). My idea is: $$ a_{n+1}= {2n+1\choose 2}a_n $$ Is this correct?
For $a_3$, you'll add two $3$'s somewhere to each one of those so they're not next to each other. There are $5$ possible places: the two ends, and three in between two numbers, so there are ${5 \choose 2}$ ways of putting the two $3$'s in so they're not next to each other. For case $a_n$: There are $2n+1$ possible places to place $n+1$ in each of the ${a_n}$ series of length $2n$, so there are ${2n+1 \choose 2}$ ways to place the two $n$'s in each series. Or, $$a_{n+1} = {2n+1 \choose 2} a_n.$$ We can see the closed form by inspection: $$a_{n} = {2n-1 \choose 2} a_{n-1} \\ = {2n-1 \choose 2}{2n-3 \choose 2}{2n-5 \choose 2} \cdots {7 \choose 2}{5 \choose 2}a_2 \\ = \frac{(2n-1)(2n-2)}{2}\frac{(2n-3)(2n-4)}{2}\frac{(2n-5)(2n-6)}{2} \cdots \frac{7 \cdot 6}{2}\frac{5 \cdot 4}{2}\cdot 2,$$ so $$a_n =\frac{(2n-1)!}{3 \cdot 2^{n-2}}, n \geq 3.$$
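Here is a brute-force confirmation of the closed form for small $n$ (my own addition; the helper simply enumerates all distinct arrangements of the multiset, so it is only practical for small $n$).

```python
# Check a_n = (2n-1)!/(3 * 2^(n-2)) by enumerating all arrangements for small n.
from itertools import permutations
from math import factorial

def a(n):
    arrangements = set(permutations(list(range(1, n + 1)) * 2))
    return sum(all(s[i] != s[i+1] for i in range(2*n - 1)) for s in arrangements)

for n in range(2, 5):
    # the two counts agree (for n = 2 the formula happens to match as well)
    print(n, a(n), factorial(2*n - 1) // (3 * 2**(n - 2)))
```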
{ "language": "en", "url": "https://math.stackexchange.com/questions/898527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all solutions to the equation $x^2 + 3y^2 = z^2$ Find all positive integer solutions to the equation $x^2 + 3y^2 = z^2$. So here's what I've done thus far: I know that if a solution exists, then there's a solution where $(x,y,z) = 1$, because if there is one where $(x,y,z) = d$, then $\frac{x}{d}, \frac{y}{d}, \frac{z}{d}$ is also a solution. I'm trying to mimic the Pythagorean triple proof, where they have $x = u^2 - v^2$, $y = 2uv$ and $z = u^2 + v^2$. So, looking at the original equation mod 4, I can see that it's in the form $(0\text{ or }1) - (0\text{ or }1) = 3(0\text{ or }1)$. Thus we for sure know that $3y^2$ is congruent to 0 mod 4 and that $y$ is even. We also learned that $z$ and $x$ are either both even or both odd. From there I'm guessing I need to handle each case, but I'm stuck as to where to go from here.
I don't think $y$ is necessarily even: $13^2 + 3\cdot 3^2 = 14^2.$ There seem to be two sets of solutions, $$2x = p^2 - 3q^2, \quad y = pq, \quad 2z = p^2 + 3q^2 \quad\text{with } p, q \text{ both odd},$$ $$x = p^2 - 3q^2, \quad y = 2pq, \quad z = p^2 + 3q^2 \quad\text{with } pq \text{ even},$$ and $\gcd(p,q) = 1 = \gcd(p,3)$. (The example above is the first family with $p = 1$, $q = 3$, up to the sign of $p^2-3q^2$.)
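Here is a small script (my own addition) that checks the forward direction for these families: every triple they generate satisfies $x^2+3y^2=z^2$ (taking $x$ as the absolute value), and the triple $13^2+3\cdot3^2=14^2$ shows up with $(p,q)=(1,3)$. It does not, of course, prove that the families are exhaustive.

```python
# Generate triples from the two families and verify x^2 + 3y^2 = z^2 for each.
from math import gcd

triples = set()
for p in range(1, 12):
    for q in range(1, 12):
        if gcd(p, q) != 1 or gcd(p, 3) != 1:
            continue
        if (p * q) % 2 == 1:                                  # p, q both odd
            x, y, z = abs(p*p - 3*q*q) // 2, p*q, (p*p + 3*q*q) // 2
        else:                                                 # pq even
            x, y, z = abs(p*p - 3*q*q), 2*p*q, p*p + 3*q*q
        assert x*x + 3*y*y == z*z
        triples.add((x, y, z))

print((13, 3, 14) in triples)    # True, from (p, q) = (1, 3)
```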
{ "language": "en", "url": "https://math.stackexchange.com/questions/898608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Probability of drawing at least 1 red, 1 blue, 1 green, 1 white, 1 black, and 1 grey when drawing 8 balls from a pool of 30? Given a pool of 30 balls (5 of each color). When drawing 8 balls without replacement, what is the probability of getting at least one of each color? Related: Probability of drawing at least one red and at least one green ball. When drawing more than 2 colors you need to exclude overlapping 'hands'. Thus when finding the probability of drawing no red, you can have a hand made up of blue, green, white, black and grey. But when you are determining the probability of drawing no blue you draw from red, green, white, black, grey. So you need to exclude all green, white, black, grey hands as they have already been counted. And the same for the other colors as well. The other complexity of the problem is that since there are only 5 of each color, no draw will only include balls of the same color.
The quickest approach is to count ways to get $6$ colors on $8$ balls. There are essentially two cases, $\langle 3,1,1,1,1,1\rangle$ and $\langle 2,2,1,1,1,1\rangle$. There are $\binom{6}{1}\binom{5}{3}\binom{5}{1}^5=187,500$ ways to get the first case. The $\binom{6}{1}$ counts the number of ways of choosing one color to get three balls, and the rest is the number of ways of choosing three balls from that one color and one for each of the other colors. The second case has $\binom6 2\binom 5 2^2\binom 5 1^4=937,500$ different ways. There are $\binom 6 2$ ways to choose two colors to receive two balls, and the rest is the number of ways of choosing two balls from each of those colors and one ball from the others. So the total cases are $1,125,000$, out of $\binom{30}{8}=5,852,925$. That gives a probability of about $0.1922$.
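Here is an independent exact check (my own addition) that enumerates the colour count patterns directly; the variable names are arbitrary.

```python
# Count draws of 8 from 6 colours x 5 balls in which every colour appears.
from math import comb
from itertools import product

colours, per_colour, draw = 6, 5, 8
favourable = 0
for counts in product(range(1, per_colour + 1), repeat=colours):
    if sum(counts) == draw:                 # each colour used at least once
        ways = 1
        for c in counts:
            ways *= comb(per_colour, c)
        favourable += ways

print(favourable, comb(30, 8), favourable / comb(30, 8))   # 1125000 5852925 ~0.1922
```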
{ "language": "en", "url": "https://math.stackexchange.com/questions/898683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Upper and lower bounds for the smallest zero of a function The function $G_m(x)$ is what I encountered during my search for approximates of Riemann $\zeta$ function: $$f_n(x)=n^2 x\left(2\pi n^2 x-3 \right)\exp\left(-\pi n^2 x\right)\text{, }x\ge1;n=1,2,3,\cdots,\tag{1}$$ $$F_m(x)=\sum_{n=1}^{m}f_n(x)\text{, }\tag{2}$$ $$G_m(x)=F_m(x)+F_m(1/x)\text{, }\tag{3}$$ Numerical results showed that $G_m(x)$ is zero near $m+1.2$ for $m=1,2,...,8$. Please refer to fig 1 below for the plot of $\log|G_m(x)|$ v.s. $x$ for $m=1,2,...,8$ Let us denote these zeros by $x_0(m)$. I am interested if it can be proved that (A) $x_0(m)$ is the smallest zero of $G_m(x)$ for $x\ge1$ (B) there exist bounds $\mu(m),\nu(m)$ such that $0<\mu(m)\le x_0(m)\le \nu(m)$;$\mu(m),\nu(m)\to\infty$, when $m\to\infty$. Here are the things I tried. Because $G_m(1)>0$ and $G_m(x)\to F_m(1/x)<0$ when $x\to\infty$, so there exist a zero for $G_m(x)$ between $x=1$ and $x=\infty$. But I was not able to find the bounds for this zero. It is tempting to speculate that $x_0(m)$ is the only zero for $G_m(x)$ and $m+1<x_0(m)<m+2$. The values for $x_0(m), (m=1,2,...,10)$ are given by: $x_0(1)$=2.24203, $x_0(2)$=3.21971, $x_0(3)$=4.21913, $x_0(4)$=5.22283, $x_0(5)$=6.22764, $x_0(6)$=7.23268, $x_0(7)$=8.23764, $x_0(8)$=9.24241, $x_0(9)$=10.2469, $x_0(10)$=11.2512.
While this is just a partial answer, I hope this serves at least as a step in the right direction for proving what you need to. First, to work with something more concrete, I substituted the expressions for $f_n(x)$ into $F_m(x)$ and then that into $G_m(x)$ in order to get an explicit set of functions: $$F_m(x) = \sum\limits_{n=1}^mn^2x(2\pi n^2x - 3)\exp(-n^2\pi x)$$ $$G_m(x) = \sum\limits_{n=1}^mn^2x(2\pi n^2x - 3)\exp(-n^2\pi x) + \sum\limits_{n=1}^m\frac{n^2}{x}\left(\frac{2\pi n^2}{x} - 3\right)\exp\left(-\frac{n^2\pi}{x}\right)$$ While this class of functions is particularly nasty, we can make some ways with proving the proposed bounds $m + 1 > x_0(m) > m + 2$ hold. (By the way, the inequalities are reversed here than in your question, since the function $G_m(x)$ is positive and decreases in $x_0(m)$'s neighborhood, so $G_m(m + 1) > G_m(m + 2)$, thus making the former the upper bound and the latter the lower bound, not vice versa.) So basically what we need to show is that $G_m(m + 1)$ is always positive for all $m$, and $G_m(m + 2)$ is always negative, thus by the intermediate value theorem (since the function in question is continuous), $G_m(x)$ has a root in between $m + 1$ and $m + 2$. Substituting in $x = m + 1$ for the function, we get: $$G_m(m + 1) = \sum\limits_{n=1}^mn^2(m + 1)(2\pi n^2m + 2\pi n^2 - 3)\exp(-n^2\pi (m + 1)) + \sum\limits_{n=1}^m\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$ If we can show that each factor in each summation is always positive, then the whole summation is positive and so the functions are always positive at that point. (Actually, that would be the best case scenario of it satisfying a sufficient but not necessary condition; it can have some negative sums as long as the total value of the positive terms is larger than the negative ones.) From the first summation, factor by factor: 1st summation, 1st factor: $n^2$ Since the square of any number is always positive, $n^2$ is positive. 1st summation, 2nd factor: $m + 1$ Obviously $m$ is a positive number by definition, and so $m + 1$ is also always positive. 1st summation, 3rd product: $(2\pi n^2m + 2\pi n^2 - 3)$ $2\pi n^2m + 2\pi n^2$ must be greater than 3 for this factor to be positive. Taking the 'lowest' case of $n = m = 1$, we get $2\pi + 2\pi = 4\pi,$ and $4\pi > 3$, so this factor will always be positive. 1st summation, 4th product: $\exp(-n^2\pi (m + 1))$ An exponential term is never negative or zero. Now, we go on to the second summation: 2nd summation, 1st product: $\frac{n^2}{m + 1}$ $n^2$ and $m + 1$ are always positive, so their quotient is too. 2nd summation, 2nd product: $\frac{2\pi n^2}{m + 1} - 3$ Alas, here we run into trouble. Taking the case $n = 1, m = 2$, we see that the resulting term is negative. When will it be negative? Like the corresponding factor in the first summation, the fraction must be greater than $3$: $\frac{2\pi n^2}{m + 1} > 3 \implies 2\pi n^2 > 3(m + 1) \implies n > \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}$ So, for the first few terms in the sum, the product will be negative, but once n is sufficiently large to satisfy the inequality, the product will become positive. 2nd summation, 3rd product: $\exp\left(-\frac{n^2\pi}{m + 1}\right)$ Again, an exponential term is never negative. So, what does this all mean since not all terms are positive? 
All it means is that we need to prove that the first summation is larger than the second one (which I couldn't do), so that their difference is still positive: $$\sum\limits_{n=1}^mn^2(m + 1)(2\pi n^2m + 2\pi n^2 - 3)\exp(-n^2\pi (m + 1)) > \sum\limits_{n=1}^m\frac{n^2}{m + 1}\left(\frac{2\pi n^2}{m + 1} - 3\right)\exp\left(-\frac{n^2\pi}{m + 1}\right)$$ Or, if we strip out the positive terms from the second summation, (I'm calling the terms $a_n$ and $b_n$ for the 1st and 2nd summations respectively) $$\left[\sum\limits_{n=1}^ma_n + \sum\limits_{n > \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}}^mb_n\right] > \sum\limits_{n < \sqrt{\frac{3}{2\pi}}\sqrt{m + 1}}^mb_n$$ If we prove either inequality (the first one is stronger than the second), we deduce that the function is always positive at the point $m + 1$. We can use an extremely similar argument for the point $m + 2$ to prove it is negative, and thus we will have proved the bounds. About the first question (whether $x_0(m)$ is the smallest zero of $G_m(x)$), if we take that the function is decreasing from $G_m(1)$ to $x_0(m)$, we can prove it to be the smallest zero by contradiction (and if we also accept that $\lim\limits_{x \to \infty} G_m(x) = 0$, then we can prove that it is the only zero.) For suppose that there exists other zeros smaller than $x_0(m)$; that is, in the interval $[1, x_0(m)]$. Since the function is continuous, the only way for it to have a smaller zero is if the function dips below zero and back up again (since it needs to pass through zero at $x_0(m)$). But since the function is always decreasing on that interval, then we reach a contradiction, since after the first 'smaller' zero the function would be negative and would need to be increasing to cross the x-axis again. I know this is far from rigorous, but either way proving the bounds would also prove this. I apologize for the (extremely!) long answer, but I found out a lot about this function and didn't want anything to go to waste. Cheers!
{ "language": "en", "url": "https://math.stackexchange.com/questions/898755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
How to deal with a linear recurrence whose characteristic polynomial has multiple roots? For example, $$ a_n=6a_{n-1}-9a_{n-2},\qquad a_0=0,\ a_1=1. $$ What is $a_n$? In fact, I want to know whether there is a general way to deal with this situation.
A bare-hands method would be as follows: $$a_n - 3 a_{n-1} = 3 ( a_{n-1} - 3 a_{n-2} )$$ $$a_n - 3 a_{n-1} = 3^{n-1} ( a_1 - 3 a_0 )$$ $$\sum_{k=1}^n 3^{n-k} ( a_k - 3 a_{k-1} ) = \sum_{k=1}^n 3^{n-k} \left( 3^{k-1} ( a_1 - 3 a_0 ) \right)$$ $$\sum_{k=1}^n 3^{n-k} a_k - 3^{n-(k-1)} a_{k-1} = \sum_{k=1}^n 3^{n-1} ( a_1 - 3 a_0 )$$ $$a_n - 3^n a_0 = n 3^{n-1} ( a_1 - 3 a_0 )$$ The constants were obtained from the factorization of the quadratic. Note that the above method works regardless of whether the roots are repeated, although the last step will differ.
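A quick check of the final formula against direct iteration (my own addition); with the question's initial values $a_0=0$, $a_1=1$ it reduces to $a_n = n\,3^{n-1}$.

```python
# Verify a_n = 3^n a_0 + n 3^(n-1) (a_1 - 3 a_0) against the recurrence.
def iterate(a0, a1, N):
    seq = [a0, a1]
    for _ in range(N - 1):
        seq.append(6*seq[-1] - 9*seq[-2])
    return seq

for a0, a1 in [(0, 1), (2, 5), (-1, 4)]:
    seq = iterate(a0, a1, 15)
    for n, an in enumerate(seq[1:], start=1):
        assert an == 3**n * a0 + n * 3**(n - 1) * (a1 - 3*a0)
print(iterate(0, 1, 8))   # [0, 1, 6, 27, 108, 405, ...] i.e. n * 3^(n-1)
```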
{ "language": "en", "url": "https://math.stackexchange.com/questions/898819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
An infinite group $G$ and $\forall x\in G, x^n=e$ Let $G$ be an infinite group and $n\in \mathbb N$. If for any infinite subset $A$ of $G$ there is $a\in A$ such that $$a^n=e,~~~~(e=e_G)$$ then prove that for every element $x\in G$ we have $x^n=e$. This question was asked of me and, honestly, I could not find a good way to crack it. I tried working with the order of an element $x$, hoping to show that every element has finite order, but that did not lead to a complete approach. Thanks for your time!
Notice that $G$ has no elements of infinite order. Suppose there exists $y \in G$ with $|y| \nmid n$. Replacing $y$ with a power of $y$, we may assume $\text{gcd}(|y|, n) = 1$. If $y^G = \{gyg^{-1} \mid g \in G\}$ is infinite, we are done (since $|y| = |gyg^{-1}|$ for any $g \in G$). On the other hand, if $|y^G| = |G : C_G(y)| < \infty$, then $|C_G(y)| = \infty$, so there exists an infinite sequence of distinct elements $a_1, a_2, \ldots$ in $C_G(y)$ with $a_i^n = e$ for all $i$. But $|a_iy| = \text{lcm}(|y|, |a_i|) \nmid n$ for all $i$, so $\{a_iy \mid i = 1, 2, \ldots\}$ is an infinite set of elements, none of whose orders divide $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/898896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Finding all the possible values of an Integral in the Complex Plane I am studying Complex Analysis by Lars V Ahlfors. I am unable to solve one of his exercises. It is: Find all possible values of $$\int \frac{dz}{\sqrt{1-z^2}}$$ over a closed curve. I do not have any clue as to how to solve the above problem. If the denominator were a term like $z-z_0$, then I could try winding a curve around $z_0$; but in this particular question I cannot think of any way to approach it. If anyone can give some hints, that would be great. Thanks.
The exercise considers regions in the complex plane such that the two points $1$ and $-1$ belong to the same component of the complement. On such regions, an analytic branch of $\frac{1}{\sqrt{1-z^2}}$ exists, which is the first part of the exercise, and the second part asks for the possible values of $$\int_\gamma \frac{dz}{\sqrt{1-z^2}}$$ where $\gamma$ is a closed curve (suitable for integration) in such a region. The two most used regions to define branches of $\frac{1}{\sqrt{1-z^2}}$ are: * *$U = \mathbb{C} \setminus \{ t\in\mathbb{R} : \lvert t\rvert \geqslant 1\}$, where $1$ and $-1$ belong to the same component of the complement of $U$ in the sphere $\hat{\mathbb{C}}$, but not to the same component of the complement in the plane, so it is arguable whether this region or similar ones are to be considered. $U$ is simply connected, so Cauchys integral theorem immediately yields the vanishing of all the integrals in that case. *$V = \mathbb{C} \setminus [-1,1]$, and in this case it is easy to see that the value of the integral only depends on how often $\gamma$ winds around the interval $[-1,1]$, so the value of the integral is $n(\gamma,1)\cdot C$ for some constant $C$, which I leave for you to find. Now what remains is to see that every case can be reduced to one of the above. Since there are no assumptions on the niceness of the complement, the argument, when made precise, is a little tedious (Hint: the trace of the curve $\gamma$ is a compact set, hence has a positive distance from the complement, which allows you to replace the complement with a nicer set).
{ "language": "en", "url": "https://math.stackexchange.com/questions/898985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }