Expected value for the number of tries to draw the black ball from the bag We have a bag with $4$ white balls and $1$ black ball. We are drawing balls without replacement. Find the expected value of the number of tries needed to draw the black ball from the bag. Progress. The probability of drawing the black ball on the first trial is $1/5$. The problem is how to find the probability of drawing the black ball on the $2$nd, $3$rd, $\ldots, 5$th trial. Once I know all these probabilities I can find the expected value as $1\cdot(1/5) + 2 p_2 + \dots + 5 p_5$.
It is as if you will create a word with $4$ W's and $1$ B, for example $BWWWW$ or $WWWBW$ etc. How many such words can you create? Answer: $5$, and any such word is equally likely. In other words: the probability that the black ball will be drawn at any place - not only the first - is equal to $1/5$. This is an unconditional probability, not a conditional one. Do not get confused by the fact that, if you have already drawn $4$ white balls, then the probability of drawing the black ball in the fifth draw is $1$: that is a conditional probability. "A priori" it is equally likely that the black ball will be drawn at any given point from $1$ to $5$. So, $$E[X]=\frac{1}{5}\cdot 1+ \frac{1}{5}\cdot2+\ldots+\frac15\cdot 5=\frac15(1+2+3+4+5)=3 $$ (where $X$ denotes the number of trials).
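For a quick sanity check, one can enumerate all equally likely orderings of the five balls and average the position of the black ball (a minimal Python sketch, not needed for the argument above):

```python
from itertools import permutations

# Treat the balls as distinguishable and enumerate all 5! equally likely draw orders.
balls = ['W', 'W', 'W', 'W', 'B']
orders = list(permutations(balls))
expected = sum(order.index('B') + 1 for order in orders) / len(orders)
print(expected)  # 3.0
```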
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is $\textit{the}$ discriminant of a degree $n$ polynomial? In my high school algebra class the teacher (who is me) says that the discriminant of a quadratic polynomial $ax^2 + bx + c$ is $b^2 - 4ac$. I have read in the Wikipedia article that the discriminant of a polynomial is the product of the squares of the differences of its roots. This does not seem to be consistent with the above. If I subtract the roots of a quadratic and then square the result I get $\frac{(b^2 - 4ac)}{a^2}$.
Wolfram MathWorld says that the most common definition for the discriminant of a polynomial $p(z) = a_n z^n + a_{n - 1} z^{n - 1} + \cdots + a_1 z +a_0$ is: $$D(p) \equiv a_n^{2n - 2} \prod_{i<j}^n(r_i - r_j)^2$$ This definition gives, and I quote, ...a homogeneous polynomial of degree $2(n - 1)$ in the coefficients of $p$. In the example of the quadratic equation $ax^2 + bx + c = 0$ (or $a_2 z^2 + a_1 z + a_0$), the discriminant is given by $b^2 - 4ac$ (or $D_2 = a_1^2 - 4a_2a_0$). You can extend this to the cubic equation $ax^3 + bx^2 + cx + d = 0$ (or $a_3 z^3 + a_2 z^2 + a_1 z + a_0 = 0$) and get $b^2c^2 - 4db^3 - 4c^3a + 18dcba - 27d^2a^2$ (or $D_3 = a_1^2a_2^2 -4a_0a_2^3 - 4a_1^3a_3 + 18a_0a_1a_2a_3 - 27a_0^2a_3^2$) That is indeed the best explanation I can give. Resources: * *Polynomial Discriminant - Wolfram MathWorld
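As a quick cross-check of these formulas, SymPy can compute the discriminants directly (a small sketch, assuming SymPy is available):

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')

print(sp.discriminant(a*x**2 + b*x + c, x))
# b**2 - 4*a*c

print(sp.discriminant(a*x**3 + b*x**2 + c*x + d, x))
# b**2*c**2 - 4*a*c**3 - 4*b**3*d + 18*a*b*c*d - 27*a**2*d**2 (up to term order)
```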
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Can you compute $\int_0^1\frac{\log(x)\log(1-x)}{x}dx$ more precisely than $1.20206$ and do a comparison with $\zeta(3)$? I know from Wolfram Alpha that $$\int_0^1\frac{\log(x)\log(1-x)}{x}dx=1.20206$$ and, on the other hand, also from this online tool that $$\int\frac{\log(x)\log(1-x)}{x}dx=\mathrm{Li}_3(x)-\mathrm{Li}_2(x)\log(x)+constant.$$ Question. I would like to make a comparison, and need to obtain $$\int_0^1\frac{\log(x)\log(1-x)}{x}dx$$ more precisely than $1.20206$. I believe that it could be $\zeta(3)$, but now I am not sure, and I don't know whether this claim can be deduced easily. Can you compute $\int_0^1\frac{\log(x)\log(1-x)}{x}dx$ more precisely than $1.20206$, either to discard that this value is $\zeta(3)$, Apéry's constant, or to claim that the equality $$\int_0^1\frac{\log(x)\log(1-x)}{x}dx=\zeta(3)$$ holds and is known/easily deduced (perhaps from some of its known formulas involving integrals)? This definite integral appeared as a summand when I made some changes of variable in Beukers' integral (see [1]), and now I don't know whether I could be wrong in my computations. I've searched on this site for this integral $\int\frac{\log(x)\log(1-x)}{x}dx$, and on Wikipedia for a possible identity between $\zeta(3)$ and particular values of the polylogarithms $\mathrm{Li}_2(x)$ and $\mathrm{Li}_3(x)$. References: [1] https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s_constant
Let $x = e^{-y}$, we have $$\int_0^1 \frac{\log x\log(1-x)}{x} dx = \int_0^1 \frac{(-\log x)}{x} \sum_{n=1}^\infty \frac{x^n}{n} dx = \sum_{n=1}^\infty \frac{1}{n}\int_0^1 (-\log x) x^{n-1} dx\\ = \sum_{n=1}^\infty \frac{1}{n}\int_0^\infty y e^{-ny} dy = \sum_{n=1}^\infty \frac{1}{n^3} = \zeta(3) $$ Please note that we can switch the order of summation and integration because all the individual terms are non-negative.
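A high-precision numerical check of this identity (a sketch using mpmath; the quadrature handles the mild endpoint singularities):

```python
from mpmath import mp, quad, log, zeta

mp.dps = 30
I = quad(lambda x: log(x)*log(1 - x)/x, [0, 1])
print(I)                  # 1.2020569031595942853997...
print(zeta(3))            # Apery's constant, same digits
print(abs(I - zeta(3)))   # essentially zero
```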
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 3 }
If $u$ is a unit in $S$, then $u$ is a unit in $R$ I think the proof is fairly simple...but of course one of my big problems in math is overdoing simple problems. So.. Let $R$ be a ring with identity, and let $S<R$ be a subring containing the identity. Prove that if $u$ is a unit in $S$, then $u$ is a unit in $R$. Show by example that the converse is false. So, let $u$ be a unit in $S$. By definition of a unit, there exists a $v\in S$ such that $1=uv=vu$. However, since $1\in R$, then by closure, $uv\in R$, and thus, $u$ is a unit in $R$. As far as the converse, take the ring of Gaussian integers $\mathbb{Z}[i]=R$, and $S=\mathbb{Z}$. If the converse was true, this would say that $i= \sqrt{-1}\in S$, which is clearly false, since $1=-i\cdot i$. How is this?
For the first part: That’s not quite correct. You need to argue that $v ∈ R$ rather than $1 = uv ∈ R$. For the counterexample: I think you should rather disprove that Whenever $S ⊂ R$ is a subring of a ring $R$, and $u ∈ S$ is a unit in $R$ (that is there is some $v ∈ R$ with $uv = 1$ in $R$) then $u$ is also a unit in $S$ (that is there is some $v' ∈ S$ with $uv' = 1$ in $S$). Your example doesn’t work in that case. Rather think of $ℤ ⊂ ℚ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of Euclid's Lemma in N that does not use GCD I am looking for a proof of Euclid's Lemma, i.e if a prime number divides a product of two numbers then it must at least divide one of them. I am coding this proof in Coq, and i'm doing it over natural numbers. I aim to prove the uniqueness of prime factorization (So I cannot use this lemma!). However, I can use the existence of a prime factorization, which I already proved. I do not want to use the gcd algorithm as that would involve coding it in Coq and proving it is correct which may be difficult. The idea is to use this proof in a computer science course, so I do not want to overcomplicate things. Is there any proof of this lemma that does not use gcd, or Bezout's lemma, or the uniqueness of prime factorization? Maybe something using induction? Thank you in advance. EDIT: The Proof should be on NATURAL NUMBERS. No answer did the proof in N.
Claim 2 below should answer the question. Since the only unit in $\mathbb{N}$ is $1$, we have $p$ is prime iff $p\mid ab\implies p\mid a\lor p\mid b$ $p$ is irreducible iff $a\mid p\implies a=1\lor a=p$ Claim 1: $p$ is prime $\implies$ $p$ is irreducible Proof: Assume that $a\mid p$. For some $b$, we have $$ p=ab\tag{1} $$ Since $p\mid ab$, we know that $p\mid a$ or $p\mid b$. Case $p\mid a$: for some $c$, we have $a=pc$. Therefore, $(1)$ implies that $abc=a$. Since the only unit in $\mathbb{N}$ is $1$, we have that $b=c=1$. Therefore $a=p$. Case $p\mid b$: for some $c$, we have $b=pc$. Therefore, $(1)$ implies that $abc = b$. Since the only unit in $\mathbb{N}$ is $1$, we have that $a=c=1$. Therefore, $a=1$. Thus, assuming that $p$ is prime and $a\mid p$, we have shown that $a=1$ or $a=p$. QED Claim 2: $p$ is irreducible $\implies$ $p$ is prime Proof: Assume that $p$ is irreducible, $p\mid ab$, and $p\nmid a$. Let $g$ be the smallest positive element of the set $$ S=\{\,ax+py:x,y\in\mathbb{Z}\,\}\tag{2} $$ If $g\nmid p$, then there is an $r$ so that $0\lt r\lt g$ and $qg+r=p$. However, then $$ r=p-q(ax+py)=a(-qx)+p(1-qy)\in S\tag{3} $$ but $g$ is the smallest positive element of $S$. Therefore, $g\mid p$. Similarly, $g\mid a$. Since $p$ is irreducible, $g=1$ or $g=p$. Since $p\nmid a$ and $g\mid a$, we must have $g=1$. Therefore, we have $x,y$ so that $$ 1=ax+py\tag{4} $$ Since $p\mid ab$, for some $c$, we have $ab=pc$. Multiply $(4)$ by $b$ to get $$ \begin{align} b &=abx+pby\\ &=p(cx+by)\tag{5} \end{align} $$ Equation $(5)$ says that $p\mid b$. Thus, assuming that $p$ is irreducible and $p\mid ab$, we have shown that if $p\nmid a$, then $p\mid b$. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is the number of divisors of an integer $n$ equal to the number of factors in the factorization of the polynomial $x^n - 1$ over the integers? Sloane's OEIS A000005 gives the number of divisors of the integer n. A comment (by a very reputable contributor) in this sequence claims that this is also the number of factors in the factorization of the polynomial $x^n-1$ over the integers. For example a(6) = 4 because there are 4 divisors of the integer 6 and (apparently) because there are 4 factors in the factorization of $x^6 - 1= (-1 + x) (1 + x) (1 - x + x^2) (1 + x + x^2)$. Is there an easy explanation for this? How can I count the number of factors in the factorization of $x^n - 1$? Is there an algorithm to produce these factors? I read an article on-line that gave a method to produce all the factors of x^n - 1 with no rational roots ( that is 1 and -1 are not roots) but this is not the same as the irreducible factors that are being counted in this sequence.
The idea is to construct a bijection between divisors $d$ of $n$ and irreducible factors $F$ of $X^n-1$. Given a divisor $d$, let $F$ be the $d$-th cyclotomic polynomial (the minimal polynomial of a primitive $d$-th root of unity). Then $F$ is irreducible, and $F \mid X^d - 1 \mid X^n - 1$. Given an irreducible factor $F$ of $X^n-1$, let $d$ be minimal such that $F \mid X^d - 1$. Then $F \mid \gcd(X^d-1,X^n-1)=X^{\gcd(d,n)}-1$, which implies $d \leq \gcd(d,n)$ hence $d \mid n$. It now remains to show that these two maps (of which we have shown that they are well-defined) are inverses of each other.
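This bijection is easy to check experimentally: the number of irreducible factors of $x^n-1$ over $\mathbb{Z}$ equals the number of divisors of $n$ (a small SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 13):
    factors = sp.factor_list(x**n - 1)[1]          # irreducible factors over the integers
    num_factors = sum(mult for _, mult in factors)
    print(n, num_factors, sp.divisor_count(n))     # the last two columns agree
```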
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Information from a characteristic polynomial (III) I know that since the highest multiplicity is 4, that the largest possible dimension is 4. (IV) I believe that there are 3 distinct eigenvalues, but because of the multiplicities, there are exactly 7 eigenvalues? I don't know how to tell if it's invertible, or the size of the matrix from the information given.
(I): $A$ can't be invertible because $0$ is an eigenvalue. (II): $A$ must be $7 \times 7$ because its characteristic polynomial has degree $7$. (III): Your answer is correct (IV): I don't think that they mean to count eigenvalues up to multiplicity. Yes, $A$ has exactly $3$ distinct eigenvalues.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Dimension of this subspace of $\mathbb R^4$ generated by $\{v_{i1},v_{i2},v_{i3},v_{i4}\}$ with $\sum_{i=1}^4 v_{ij}=0$ for $i,j\in\{1,2,3,4\}$ Let $$v_i=(v_{i1},v_{i2},v_{i3},v_{i4}),\ \ for\ \ i=1,2,3,4$$ be four vectors in $\mathbb R^4$ such that $$\sum_{i=1}^4v_{ij}=0\ \ for\ \ each\ \ j=1,2,3,4.$$ Let $W$ be the subspace generated by $\{v_1,v_2,v_3,v_4\}.$ Then the dimension $d$ of $W$ over $\mathbb R$ is $A.\ d=1\ \ or\ \ d=4$ $B.\ d\le 3$ $C.\ d\ge 2$ $D.\ d=0\ \ or\ \ d=4$ Now writing the vectors as follows: $$v_1=\ \ v_{11}\ \ \ v_{12}\ \ \ v_{13}\ \ \ v_{14}\\v_2=\ \ v_{21}\ \ \ v_{22}\ \ \ v_{23}\ \ \ v_{24}\\v_3=\ \ v_{31}\ \ \ v_{32}\ \ \ v_{33}\ \ \ v_{34}\\v_4=\ \ v_{41}\ \ \ v_{42}\ \ \ v_{43}\ \ \ v_{44}$$ $\sum_{i=1}^4 v_{ij}=0$ gives $$v_{1j}+v_{2j}+v_{3j}+v_{4j}=0\ \ for\ \ j=1,2,3,4,$$ i.e. the sum of the $j$-th co-ordinates of all the $4$ vectors is $0$. So, if we know any $3$ of them then the co-ordinates of the $4$-th can be found, and so the dimension can never be $4.$ So, strike off options $A$ and $D.$ Now, we are left with the options $B$ and $C.$ Now, my question is: how do I know whether the dimension can be reduced to $2$ or not, i.e. whether $B$ or $C$ is correct? Thanks.
Take the four vectors $(1,0,0,0), (0,1,0,0), (0,0,1,0)$ and $(-1,-1,-1,0) .$ These four satisfy the given conditions and the first three are linearly independent . Hence the answer will be $B.$
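A quick numerical confirmation of this example (a sketch using NumPy):

```python
import numpy as np

V = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [-1, -1, -1, 0]])
print(V.sum(axis=0))             # [0 0 0 0]: the condition sum_i v_ij = 0 holds
print(np.linalg.matrix_rank(V))  # 3, so d = 3 is attained and B is the answer
```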
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Measurability for discrete measure Let $(X,\mathcal{B},\mu)$ be a measure space with discrete measure $\mu$ and Borel sigma-algebra. Why can we claim that all real functions on $X$ are measurable?
Strictly speaking, it is not true. In fact, measurability of a function does not depend on the measure $\mu$, it depends only on its sigma-algebra (i.e. family of measurable sets). For example, take Borel sigma-algebra on $\mathbb R$, and define $\mu(X) = 1$ when $0 \in X $ and $\mu(X) = 0$ otherwise. The measure $\mu$ is discrete, but indicator function of any non-Borel set is not measurable. However, it is easy to see that any discrete measure $\mu$ can be extended to the family of all subsets of $X$. I.e. you can extend your discrete measure to $\mathcal M = 2^X$. Just define $\mu(A)$ as a sum (at most countable) of all the masses for all the peaks in $A$. It should be easy to check that $\mu$ is really a measure, due to the fact that you can change the order of summation in a countable sum of nonnegative numbers as you wish. So, you can always consider a discrete measure on a sigma-algebra of all subsets. In this case all sets are measurable, so all functions are measurable too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1581941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Kernel and Image I have to find the kernel for this transformation: $$T(p(x))=xp(1)-xp'(1)$$ My solution so far: $p'(1)=0$ Hence $T(p(x))=xp(1)$ So $xp(1)=0$ Either $x=1$ or $p(1)=0$? How do you solve the kernel from this point? Thank you!
A polynomial $p(x)$ is in the kernel of $T$ iff it is mapped to the zero polynomial $q(x)=(0,0)^T$. So, in order to be in the kernel, $(Tp)(x)=xp(1) - xp'(1)= x(p(1) - p'(1))$ needs to be equal to $q$ for all $x \in \mathbb R$. This is only possible if $p(1)=p'(1)$. For $p(x)=ax^2 + b x + c$, which values of $a,b,c\in \mathbb R^2 $ allow this identity?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Errors in math research papers Have there been cases of errors in math papers, that were undetected for so long, that they caused subsequent errors in research, citing those papers. ie: errors getting propagated along. My impression is that this type of thing is extremely rare. What was the worst case of such a scenario? Thanks.
One egregious case recently analyzed in detail by Adrian Mathias is Bourbaki's text Theory of sets and a couple of sequels published by Godement and others. Mathias' paper is: Mathias, A. R. D. Hilbert, Bourbaki and the scorning of logic. Infinity and truth, 47–156, Lect. Notes Ser. Inst. Math. Sci. Natl. Univ. Singap., 25, World Sci. Publ., Hackensack, NJ, 2014. Mathias analyzes several ubiquitous errors in the book, such as choosing an inappropriate foundation in Hilbert's pre-Goedel epsilon (or tau) operator, confusion of language and metalanguage, missing hypotheses that make certain statements incorrect, and even more serious "editorial comments" suggesting to the reader that certain issues in logic are too complicated to be clarified completely. The result was not merely perpetuation of errors in other papers, but the stagnation of logic in France for several generations that only recently has begun to be corrected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Solution to the recurrence relation I came across following recurrence relation: * *$T(1) = 1, $ *$T(2) = 3,$ *$T(n) = T(n-1) + (2n-2)$ for $n > 2$. And the solution to this recurrence relation is given as $$T(n)=n^2-n+1$$ However I am not able to get how this is obtained.
$C=const$ is the solution of the homogeneous equation $T(n)=T(n-1)$ and $n^2-n$ is a solution of the inhomogeneous equation, since $n^2-n=(n-1)^2-(n-1)+2n-2$. Hence the general solution is $T(n)=n^2-n+C$, and the initial condition $T(1)=1$ gives $C=1$, so $T(n)=n^2-n+1$.
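The closed form is easy to verify against the recurrence for small $n$ (a minimal sketch):

```python
# Check T(n) = n**2 - n + 1 against T(1) = 1, T(2) = 3, T(n) = T(n-1) + (2n - 2).
T = {1: 1, 2: 3}
for n in range(3, 21):
    T[n] = T[n - 1] + (2*n - 2)
assert all(T[n] == n**2 - n + 1 for n in T)
print("closed form matches the recurrence for n = 1..20")
```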
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Set questions(discrete mathematics) Let $A$, $B$ and $C$ be sets. Prove or disprove (with a counter example) each of the following: (a) If $A / C = B / C$ then $A = B$. (b) If [($A \cap C = B \cap C)\&( A / C = B/ C)$] then $A = B$. (c) If [$(A \cup C = B \cup C)\&(A / C = B / C)$] then $A = B$. I understand why they are equal when I think about it visually in my head but how do I prove or disprove them formally? Like the steps required?
Hint For proving equality, apply the definitions: (i) $A \setminus C = B \setminus C$ iff $x \in A \setminus C \leftrightarrow x \in B \setminus C$, for any $x$ and : (ii) $x \in A \setminus C$ iff $x \in A$ and $x \notin C$. Note : for $A \setminus C$, note that it is not necessary that the "subtracted" set (in this case : $C$) must be included into the other one ($A$). To disprove them, you have to find some suitable counterexamples.
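To look for counterexamples, a brute-force search over subsets of a small universe is enough (a sketch; `None` only means no counterexample was found over this particular universe, it is not a proof):

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of `universe`, as Python sets."""
    return [set(s) for s in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]

def counterexample(hypothesis, universe=(1, 2)):
    """Return some (A, B, C) with the hypothesis true but A != B, else None."""
    S = subsets(universe)
    return next(((A, B, C) for A in S for B in S for C in S
                 if A != B and hypothesis(A, B, C)), None)

print(counterexample(lambda A, B, C: A - C == B - C))                      # (a)
print(counterexample(lambda A, B, C: A & C == B & C and A - C == B - C))   # (b)
print(counterexample(lambda A, B, C: A | C == B | C and A - C == B - C))   # (c)
```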
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
do infinite family of lines in $\mathbb{R}^2$ have a common point by knowing that any three of them have common point? Suppose we have given an infinite family of lines; say $\mathfrak{F}$, in the plane $\mathbb{R}^2$ such that any three of the lines in $\mathfrak{F}$ have a common point. How can we prove that all lines in $\mathfrak{F}$ have a common point. (here we should note that using Helly theorem is not applicable, because first of all we are working on an infinite set of convex sets and second that no of them are closed and bounded!)
Note that any two distinct lines in the family have at most one point in common. Take any three distinct lines $l_1,l_2,l_3$ from the family, and let $v_0$ be their common point. Take any $l$ in the family distinct from these, and consider $\{l_1,l_2,l\}.$ Added: This result is actually true in all $\Bbb R^n$ (vacuously when $n=1,$ since there is no infinite family of lines, in the first place).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
find $\lim _{n\to \infty }\left(\frac{n^{k-1}}{n^k-\left(n-1\right)^k}\right)=\frac{1}{2005}$ the value of k $\lim _{n\to \infty }\left(\frac{n^{k-1}}{n^k-\left(n-1\right)^k}\right)=\frac{1}{2005}$ What is the value of $k$ in the given expression? When I expand the term $(n-1)$ to the $k$-th power I get $n^k-n^k-\cdots$ and the leading terms cancel, so I am not sure how to proceed.
Using Taylor expansions, assuming $k \geq 1$ (so that things do tend to $\infty$ when $n\to\infty$): $$\begin{align} n^k - (n-1)^k &= n^k\left(1-\left(1-\frac{1}{n}\right)^k\right) = n^k\left(1-\left(1-\frac{k}{n} + o\left(\frac{1}{n}\right)\right)\right) \\&= kn^{k-1} + o\left(n^{k-1}\right) \end{align}$$ so $$ \frac{n^{k-1}}{n^k - (n-1)^k} = \frac{n^{k-1}}{kn^{k-1} + o(n^{k-1})} = \frac{1}{k+o(1)} \xrightarrow[n\to\infty]{}\frac{1}{k} $$ and you can conclude by unicity of the limit.
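The pattern $1/k$ can be checked symbolically for small exponents (a SymPy sketch):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
for k in [2, 3, 4, 5]:
    print(k, sp.limit(n**(k - 1) / (n**k - (n - 1)**k), n, sp.oo))
# each limit is 1/k, so a limit of 1/2005 forces k = 2005
```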
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating the Definite Integral $\int_{0}^{1}\frac{2 \sin \pi x \cos \pi x}{1+x^2}dx$ How can I find this integral $$I=\int_{0}^{1}\frac{2 \sin \pi x \cos \pi x}{1+x^2}dx$$ Any trick that could compute the definite integral is acceptable. However, it will be more challenging to find a primitive. Any hint or help is appreciated. My Work I just wrote the integral as $$I=\int_{0}^{1}\frac{\sin 2 \pi x}{1+x^2}dx$$ Then, I decided to introduce $$J(\alpha)=\int_{0}^{1}\frac{\sin \alpha x}{1+x^2}dx$$ and solve a more general problem. Hence, we would have $I=J(2\pi)$ as a special result. But I don't know how to go further. I tried integration by parts and substitutions but to no avail!
Let's try your last idea and start with : $$J(\alpha):=\int_{0}^{1}\frac{\sin \alpha x}{1+x^2}dx$$ Then $\;\displaystyle J''(\alpha)=-\int_{0}^{1}\frac{x^2\;\sin \alpha x}{1+x^2}dx\;$ and we obtain this ODE : $$J(\alpha)-J''(\alpha)=\frac {1-\cos(\alpha)}{\alpha}$$ The solution of the homogeneous ODE is simply $\;J(x)=ae^x+be^{-x}$. Let's use variation of constant and start with $J(x)=a(x)e^x$ then : \begin{align} J(x)&=a(x)e^x\\ J'(x)&=(a'(x)+a(x))e^x\\ J''(x)&=(a''(x)+2a'(x)+a(x))e^x\\ J(x)-J''(x)&=-(a''(x)+2a'(x))e^x=\frac {1-\cos(x)}{x}\\ \end{align} For $b(x):=-a'(x)\,$ we have \begin{align} b'(x)+2b(x)&=\frac {1-\cos(x)}{x}e^{-x}\\ \end{align} For $b(x):=c(x)e^{-2x}\,$ this becomes for $\operatorname{Ei}$ the exponential integral : \begin{align} c'(x)&=\frac {1-\cos(x)}{x}e^{x}\\ c(x)&=\int\frac {e^{x}}x\,dx-\int\frac {e^{x+ix}+e^{x-ix}}{2\,x}\,dx\\ c(x)&=C+\operatorname{Ei}(x)-\frac 12\operatorname{Ei}((1+i)x)-\frac 12\operatorname{Ei}((1-i)x)\\ \end{align} Coming back to $a(x)$ : \begin{align} a'(x)&=-\left(C+\operatorname{Ei}(x)-\frac 12\operatorname{Ei}((1+i)x)-\frac 12\operatorname{Ei}((1-i)x)\right)e^{-2\,x}\\ \end{align} Now $\ \displaystyle \int \operatorname{Ei}(u\,x)e^{-vx}\,dx=\frac 1v\left(\operatorname{Ei}((u-v)x)-e^{-vx}\operatorname{Ei}(u x)\right)\;$ from Wolfram functions so that for $\,v=2$ and $u=1,1+i,1-i$ : $$a(x)=D-\frac 12\left(C_1\,e^{-2x}+\operatorname{Ei}(-x)-e^{-2x}\operatorname{Ei}(x)\\-\frac 12\left(\operatorname{Ei}((-1+i)x)-e^{-2x}\operatorname{Ei}((1+i)x)+\operatorname{Ei}((-1-i)x)-e^{-2x}\operatorname{Ei}((1-i)x)\right)\right)$$ Multiplying by $e^x$ we get the general formula for $J(x)$ : $$J(x)=De^x+D_1\,e^{-x}-\frac {e^x\operatorname{Ei}(-x)-e^{-x}\operatorname{Ei}(x)}2\\+\frac {e^x\operatorname{Ei}((-1+i)x)-e^{- x}\operatorname{Ei}((1+i)x)+e^x\operatorname{Ei}((-1-i)x)-e^{-x}\operatorname{Ei}((1-i)x)}4$$ (of course getting this with alpha is faster!) Now we have to find $J(2\pi)$ knowing that $J(0)=0$ (I'll let you try too!).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Division in finite fields Let's take $GF(2^3)$ as and the irreducible polynomial $p(x) = x^3+x+1$ as an example. This is the multiplication table of the finite field I can easily do some multiplication such as $$(x^2+x)\cdot(x+1) = x^3 + x^2 + x +1 = x+1+x^2+x+2 = x^2$$ I am wondering how to divide some random fields such as $x^2 / (x+1)$. The result is $x^2+x$ (compare above). But how do I actually calculate this. Polynomial long division does not help be: * *Why don't I get $x+1$ as result? *How can I calculate $x / (x^2+x+1)$? The result should be $x+1$
If we have a field $K = F[X]/p(X)$, then we can compute the inverse of $\overline{q(X)}$ in $K$ as follows. Since $p$ is irreducible, either $q$ is zero in $K$ or $(p,q)=1$. By the Euclidean algorithm, we can find $a,b$ such that $a(X)p(X) + b(X)q(X) = 1$. Then $\overline{b(X)} \cdot \overline{q(X)} = \overline{1}$, so $\overline{b(X)}$ is the inverse of $\overline{q(X)}$. In your example, the Euclidean algorithm gives us $(x+1)(x^3 + x + 1) + (x^2)(x^2 + x+1) = 1$ (modulo 2), so $(x^2+x+1)^{-1} = x^2$ and $\frac{x}{x^2+x+1} = x(x^2) = x^3 = x+1$.
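A small sanity check of this computation, representing elements of $GF(2^3)$ as bitmasks and reducing modulo $p(x)=x^3+x+1$ (a sketch; the inverse is found by brute force since the field is tiny):

```python
P = 0b1011  # x^3 + x + 1

def gf_mul(a, b):
    """Multiply two elements of GF(2^3), reducing modulo p(x)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0b1000:   # a degree-3 term appeared: subtract (xor) p(x)
            a ^= P
        b >>= 1
    return result

def gf_inv(a):
    """Brute-force inverse over the 7 nonzero elements."""
    return next(c for c in range(1, 8) if gf_mul(a, c) == 1)

print(bin(gf_inv(0b111)))                  # 0b100: (x^2+x+1)^(-1) = x^2
print(bin(gf_mul(0b010, gf_inv(0b111))))   # 0b011: x/(x^2+x+1) = x + 1
```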
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Series Solution to $y''+xy=e^x$ I am thoroughly familiar with using power series to solve the differential equation $y''+xy=0$, but how exactly does one go about solving $y''+xy=e^x$? I would imagine you represent $e^x$ as its power series, along with everything else first, but then what?
Just looking at $y''+xy=e^x$ and ignoring the part about power series, I would first let $y = ze^x$. Then $y' =e^x(z+z') $ and $y'' =e^x((z+z')+(z'+z'')) =e^x(z+2z'+z'') $ so the equation becomes, dropping the $e^x$, $1 =z''+2z'+z+xz =z''+2z'+z(1+x) $. From this, we can readily get a recurrence for the coefficients of $z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Intermediate Value Theorem on $\mathbb{R^n}$ Let $S^2$ denotes the subset of $\mathbb{R^3}$ which includes the points $(x,y,z)$ s.t $x^2+y^2+z^2=1$ i.e the boundary of a unit sphere. Let $f$ be a continuous function from $S^2$ to $\mathbb{R}$ Prove there exists$ p=(x,y,z) $ s.t $f(p)=f(-p)$ Exam hint: The intermediate value theorem. I have deduced the following: If $p$ is in $S^2$ then $-p$ is also. $S^2$ is connected since its path connected and its closed and bounded so by Heine Borel its compact. Continuous functions preserve compactness and connectedness. We can use fact that the only sets in $\mathbb{R}$ that satisfy this are closed intervals. So the image of $f$ will always be a closed interval. The extreme value theorem also applies because of compactness so the endpoints of the interval will be attained. I am thinking maybe define a new function $g$ from the reals to the reals to somehow get the result. Otherwise I certainly can't use it on $f$. I think I might have most of the ideas I need but I don't know how to go further. Thanks
Think about the function $g(x)=f(x)-f(-x)$. Now the goal of the problem is to show that there is some $P\in S^2$ such that $g(P)=0$. if $g$ is identically $0$ then there is nothing to prove. Otherwise let $Q\in S^2$ be a point such that $g(Q)\geq 0$. then $g(-Q)=f(-Q)-f(Q)=-g(Q)$. Now you can apply the intermediate value theorem. You should also consider the case where $g(Q)\leq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Method of cylindrical shells Use the method of cylindrical shells to find the volume V generated by rotating the region bounded by the given curves about the specified axis. $y=32-x^2, \ y=x^2$ about the line $x=4$. My confusion is: what is the radius of the cylindrical shells that we have to put in the integral?
Cylindrical shells: the axis of rotation is the vertical line $x=4$, so a shell at position $x$ has radius $4-x$ (the horizontal distance from $x$ to the axis) and height $(32-x^2)-x^2=32-2x^2$. The curves intersect where $32-x^2=x^2$, i.e. at $x=\pm4$, so the volume element is $$ dV = 2\pi(4-x)(32-2x^2)\,dx $$ Then integrate $$ V = \int_{x=-4}^4 dV = 2\pi\int_{-4}^4 (4-x)(32-2x^2)\,dx = \frac{4096\pi}{3} $$ (the term $-x(32-2x^2)$ is odd, so it integrates to zero over $[-4,4]$, leaving $8\pi\int_{-4}^4(32-2x^2)\,dx$).
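A symbolic check of this integral (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
# Shell about x = 4: radius 4 - x, height (32 - x**2) - x**2.
V = 2*sp.pi*sp.integrate((4 - x)*(32 - 2*x**2), (x, -4, 4))
print(V)   # 4096*pi/3
```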
{ "language": "en", "url": "https://math.stackexchange.com/questions/1582946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
why $ 1 - \cos^2x = \sin^2x $? I'm trying to prove this result $$\lim_{x\to 0} \frac{1 - \cos(x)}{x} = 0$$ In this process I have come across an identity $1-\cos^2x=\sin^2x$. Why should this hold ? Here are a few steps of my working: \begin{array}\\ \lim_{x\to 0} \dfrac{1 - \cos(x)}{x}\\ = \lim_{x\to 0} \left[\dfrac{1 - \cos(x)}{x} \times \dfrac{1 + \cos(x)}{1 + \cos(x)}\right] \\ =\lim_{x\to 0} \left[\dfrac{1 - \cos^2(x)}{x(1+\cos(x))}\right] \\ =\lim_{x\to 0} \left[\dfrac{\sin^2(x)}{x(1+\cos(x))}\right] \end{array}
Here is another simple way to prove the trig identity. We know $i^2=-1$, so we have $$\sin^2 x+\cos^2x$$$$=\cos^2x-i^2\sin^2x$$ $$=(\cos x+i\sin x)(\cos x-i\sin x)$$ and, using Euler's formula, $$=(e^{ix})(e^{-ix})=e^{0}=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Can a point and a compact set in a Tychonoff space be separated by a continuous function into an arbitrary finite dimension Lie group? Given a topological space $X$ which is Tychonoff (i.e., completely regular and Hausdorff), we know that given a compact set $K\subseteq X$ and a point $p \in X$ with $p\not\in K$, we can construct a continuous function $f:X\to \mathbb{R}$ such that $f(K)=0$ and $f(\{p\})=1$. If, instead of $\mathbb{R}$, I take an arbitrary finite dimensional Lie group $M$, can I still construct a continuous function $f':X\to M$ that will separate $K$ and $p$ for two different arbitrary points in $M$? Is it enough to assume $X$ to be Tychonoff or do I need further or different assumptions about $X$? For example, is $X$ being metric sufficient?
As far as I remember, each Lie group $M$ is locally Euclidean, so there is a homeomorphic embedding $i:\Bbb R\to M$, which yields the required separating map $f'=i\circ f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the length of PC Here PE is the common tangent to the two circles. PA = 12 ; CD/AB = 2 Find the length of PC [Source: BDMO]
It holds that $$ PE^2= \boxed{PA \cdot PD = PB \cdot PC}.$$ Taking advantage of the equality in the box, we have: $\begin{array}[t]{l} 12\cdot (PC + CD) = (12+AB) \cdot PC\\ 12 PC +12 CD = 12 PC + AB \cdot PC\\ PC = \dfrac{12CD}{AB}=24 \end{array}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find base of exponentiation Given the two primes $23$ and $11$, find all integers $\alpha$ such that $\alpha^{11} \equiv 1 \mod 23$. How to compute this? What to use?
As $\phi(23)=22$ and if ord$_{23}\alpha=a, a|22\implies a=1,2,11,22$ Now $2^2\equiv4,2^5\equiv9\implies2^{11}=2(9)^2\equiv1\implies$ord$_{23}2=11$ We know, ord$_ma=d,$ ord$_m(a^k)=\frac{d}{(d,k)}$ (Proof @Page#95) So, ord$_{23}(2^k)=\dfrac{11}{(11,k)}$ $\implies$ord$_{23}(2^k)=\dfrac{11}{(11,k)}=11$ for $1\le k\le10$ and there will be exactly $\phi(11)=10\ \alpha$s such that ord$_{23}\alpha=11$
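A quick numerical check: listing all residues with $\alpha^{11}\equiv 1 \pmod{23}$ shows they are exactly the $11$ powers of $2$ modulo $23$ (ten of order $11$, plus $\alpha\equiv 1$):

```python
sols = sorted(a for a in range(1, 23) if pow(a, 11, 23) == 1)
powers_of_2 = sorted({pow(2, k, 23) for k in range(11)})
print(sols)
print(powers_of_2)   # the same 11 residues
```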
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
summation of determinants of $3\times3$ matrices I have an algebra problem but no idea how to solve it. The problem is: "you can create 9! matrices the elements of which lie in a set $ \{1,2,3,...,9\} \subset \mathbb N$ so that their elements do not repeat, i.e. e.g. $$ \begin{pmatrix}1&2&9\\3&5&7\\6&4&8 \end{pmatrix} $$ Find the sum of the determinants of all these matrices." Could you give me a hint how to solve it? Thank you.
The number of those matrices is even, and note that the sum of the determinants of the pair $\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}$ and $\begin{pmatrix}b&a &c\\e&d&f\\ h&g& i\end{pmatrix}$ is 0. If we pair up these matrices in this way we can see that the required sum is 0.
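The pairing argument can also be confirmed by brute force over all $9!$ matrices (a short Python sketch; it runs quickly):

```python
from itertools import permutations

def det3(m):
    a, b, c, d, e, f, g, h, i = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

total = sum(det3(p) for p in permutations(range(1, 10)))  # all 9! fillings, row by row
print(total)   # 0
```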
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Evaluate:$\int_{l}(z^2+\bar{z}z)dz$ Evaluate: $\displaystyle\int_{l}(z^2+\bar{z}z)dz\,\,\,\,\,\,\,\,\,\,l:|z|=1,\ 0\leq\arg z\leq\pi$ My try: $$\int_{l}(z^2+\bar{z}z)dz=\int_{0}^{\pi}(r^2e^{2i\theta}+re^{i\theta}???)dz$$ I'm stuck here, hints?
Since the integral equals $\int_l (z^2+1)\, dz,$ and this integrand has the antiderivative $z^3/3 + z$ (in all of $\mathbb C$), all you need to do is evaluate $z^3/3+z$ at the end points and subtract. (All we're using is the FTC of ordinary calculus here.)
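A numerical confirmation, parametrizing the upper unit semicircle by $z=e^{it}$, $0\le t\le\pi$ (on it $\bar z z=1$), so the value should be $F(-1)-F(1)=-8/3$ (an mpmath sketch):

```python
from mpmath import mp, quad, exp, pi

mp.dps = 20
f = lambda t: (exp(1j*t)**2 + 1) * 1j*exp(1j*t)   # (z**2 + 1) dz with z = e^(i t)
print(quad(f, [0, pi]))   # approximately -2.6667 = -8/3
```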
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Why if $K$ is a finite field, $|K|=p^d$ for a prime $p$? Why, if $K$ is a finite field, is $|K|=p^d$ for a prime $p$? The solution goes like this: Consider $\varphi:\mathbb Z\longrightarrow K$. Since $K$ is finite, $\ker \varphi=p\mathbb Z$ for a prime $p$. Q1) Why is $\ker\varphi=p\mathbb Z$? And why does $p$ have to be prime? I have tried to understand this a couple of times, but could not see how. Any idea? Therefore $$\tilde \varphi: \mathbb Z/p\mathbb Z\longrightarrow K$$ is one to one. In particular $K/(\mathbb Z/p\mathbb Z)$ is a field extension, and thus $K$ is a vector space over $\mathbb Z/p\mathbb Z$. We conclude that $|K|$ is a power of $p$ (i.e. of the form $p^d$). Q2) I agree with everything except the step where we conclude that $|K|$ is a power of $p$. How can it be?
* *For any ring $R$, we can define $$\phi: \Bbb Z \to R$$ to be the homomorphism $$n \mapsto n \cdot 1 = \underbrace{1 + \cdots + 1}_n .$$ (Presumably this is the unspecified map $\varphi$ in the question.) Since $\Bbb Z$ is a principal ideal domain, $\ker \phi = n \Bbb Z$ for some $n$ (in fact, there are two choices for $n$, which differ only in sign; the positive choice is the characteristic of the ring). If $R$ finite, since $\Bbb Z$ is not, $\phi$ must have nontrivial kernel, and hence $n \neq 0$. If $n$ is composite, say, $n = a b$ for $a, b \neq 1$, then by construction $\phi(a), \phi(b)$ are nonzero but $\phi(a) \phi(b) = \phi(ab) = \phi(n) = 0$; so $\phi(a)$ and $\phi(b)$ are zero divisors, and in particular, $R$ is not a field. Thus, if $R$ is a field, $n$ must be some prime $p$. *Like you say, any finite field $K$ is a vector space over its prime field $\phi(\Bbb Z)$, which (since $\phi$ is a homomorphism) is isomorphic to $\Bbb F_p := \Bbb Z / p \Bbb Z$. Any basis $(E_1, \ldots, E_m)$ of $K$ over $\Bbb F_p$ defines a vector space isomorphism $\Bbb F_p^m \to K$ by $(a_1, \ldots, a_m) \mapsto \sum_i a_i E_i$, and so $|K| = |\Bbb F_p^m| = p^m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
First order ODE $(y^2\sqrt{x-x^2y^2}-y)dx - 2xdy=0$ $(y^2\sqrt{x-x^2y^2}-y)dx - 2xdy=0$ Change it into this $$y'=\frac{y^2\sqrt{x-x^2y^2}-y}{2x}$$ Square root disables a lot of methods. It isn't a total differential, I've tried. Quasi-homogeneous $y=z^a$ is only thing I haven't ruled out but I don't know how to determine $a$ because of the square root. If it is quasi-homogeneous how do I find $a$?
Every first order equation of the form $M(x,y)dx+N(x,y)dy=0$ can be integrated, provided that you can find the right integrating factor. For this equation $(y^2\sqrt{x-x^2y^2}-y)dx - 2xdy=0$, the integrating factor is $$ \frac{1}{xy^2\sqrt{x-x^2y^2}}.$$ Multiplying both sides by this integrating factor, we get $$ \left(\frac{1}{x}-\frac{1}{xy\sqrt{x-x^2y^2}}\right)dx-\frac{2}{y^2\sqrt{x-x^2y^2}}dy=0.$$ This is now a total differential, and integrating on both sides leads to $$ \ln x+\frac{2}{y}\sqrt{\frac{1-xy^2}{x}}=C.$$ The solution can then be obtained by solving the algebraic equation for $y$.
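A numerical spot-check of this implicit solution at a sample point with $0<xy^2<1$ (a sketch; it verifies that $dF$ for $F=\ln x+\frac{2}{y}\sqrt{(1-xy^2)/x}$ reproduces the equation after multiplication by the integrating factor):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = y**2*sp.sqrt(x - x**2*y**2) - y        # dx-coefficient of the original equation
N = -2*x                                   # dy-coefficient
mu = 1/(x*y**2*sp.sqrt(x - x**2*y**2))     # integrating factor

F = sp.log(x) + (2/y)*sp.sqrt((1 - x*y**2)/x)   # candidate potential

pt = {x: sp.Rational(1, 2), y: sp.Rational(1, 2)}
print((sp.diff(F, x) - mu*M).subs(pt).evalf())   # ~0
print((sp.diff(F, y) - mu*N).subs(pt).evalf())   # ~0
```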
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Contour integral. Consider the function $y(x)$ defined by $$y(x)=e^{x^2}\int_{C_1'}\frac{e^{-u^2}}{(u-x)^{n+1}}du$$where $C_1'$ is as shown The Author makes following claims regarding the behavior of $y(x)$ in the limit of large $x$ (It is assumed that $n>-\frac{1}{2}$, but not integral). 1) As $x\rightarrow+\infty$, the whole path of integration $C_1'$ moves to infinity, and the integral in the above expression tends to zero as $e^{-x^2}$. 2) As $x\rightarrow-\infty$, however, the path of integration extends along the whole of real axis, and the integral in the expression does not tend $\boldsymbol{exponentially}$ to zero, so the function $y(x)$ becomes infinite essentially as $e^{x^2}$. In regard to the second claim, I can see that the integrals on the parts of the contour above and below the real axis will not cancel since $n+1$ is not integral. I understand these estimates are correct but have not been able to exactly see how. Any indication in the right direction would be very useful. Thanks.
In my view, a better way to understand the behavior of this function is to make the change of variables $v=u+x$. Thus $y(x)$ has the following form $$ y(x)=e^{x^2}\int_{C''}\frac{{e^{-(v+x)^2}}}{v^{n+1}}dv. $$ Since $n+1$ is not an integer, one can represent it in terms of an ordinary (not contour) integral $$ y(x)=(1-e^{-2i\pi n})e^{x^2}\int_{C''}\frac{{e^{-(v+x)^2}}}{v^{n+1}}dv. $$ I will omit the factor $(1-e^{-2i\pi n})$. This integral can be expressed via the confluent hypergeometric function: $$ y(x)=\frac{\Gamma(-n/2)}{2} F(-n/2,1/2,x^2)-x\Gamma(1/2-n/2)F(1/2-n/2,3/2,x^2), $$ where $F$ is the confluent hypergeometric function. For $x>0$ this expression has the following form $$ y(x)=2^n \Gamma(-n)U(-n/2,1/2,x^2), $$ where $U$ is another solution of the confluent hypergeometric equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is the probability that, at the end of the game, one card of each color was turned over in each of the three rounds? Three players each have a red card, blue card and green card. The players will play a game that consists of three rounds. In each of the three rounds each player randomly turns over one of his/her cards without replacement. What is the probability that, at the end of the game, one card of each color was turned over in each of the three rounds? Express your answer as a common fraction. I thought the answer would just be $1*(\dfrac{1}{3})^2*1*(\dfrac{1}{2})^2$ since the first person can have any card shown and the next $2$ must match, then the next round the first person can have any card shown and the others must match. The last time everyone matches so the probability is $1$.
The probability that the three players choose different colors on the first turn is $\frac{3}{3}\cdot\frac{2}{3}\cdot\frac{1}{3}=\frac{2}{9}$. Given that they do, consider the generating function for the colors played on the second turn: $(r+b)(b+g)(r+g)=\color{red}1b^2 2g+\color{red}1b^2 r+\color{red}1b g^2+\color{green}2 b g r+\color{red}1b r^2+\color{red}1g^2 r+\color{red}1g r^2$. The probability that the players choose different colors in the second round is therefore $\frac{\color{green}2}{\color{green}2+\color{red}6}=\frac{1}{4}$. Given that they do, which happens with probability $\frac{2}{9}\cdot\frac{1}{4}=\frac{1}{18}$, the probability they choose different colors on the last turn is $1$, so the final probability is $\frac{1}{18}$.
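The $1/18$ can also be confirmed by brute force over all $6^3=216$ equally likely combinations of card orders (a short sketch):

```python
from itertools import permutations, product

orders = list(permutations("RGB"))   # 6 possible orders per player
favorable = sum(
    all(len({p1[r], p2[r], p3[r]}) == 3 for r in range(3))
    for p1, p2, p3 in product(orders, repeat=3)
)
print(favorable, "/", len(orders)**3)   # 12 / 216 = 1/18
```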
{ "language": "en", "url": "https://math.stackexchange.com/questions/1583910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does a zero eigenvalue mean that the matrix is not invertible regardless of its diagonalizability? If the matrix $A$ is diagonalizable, then we know that its similar diagonal matrix $D$ has determinant $0$, so the matrix $A$ itself is not invertible. However, if $A$ is not diagonalizable, how are we sure that the matrix $A$ which has $0$ as an eigenvalue is not invertible? Here I have another confusion: does the degree of the characteristic polynomial determine the size of the matrix, i.e. does $\lambda (\lambda+2)^3 (\lambda-1)^2$ correspond to a $6\times 6$ matrix?
If $0$ is an eigenvalue, then there is a nonzero vector $v$ with $Mv = 0$. Then the kernel of $M$ is not trivial (it is at least one-dimensional), and so it is not one-to-one viewed as a linear transformation. Then it is not invertible. The geometric picture: $M$ 'collapses' the subspace spanned by $v$, and so maps its domain into a hyperplane in its codomain. This picture actually says: $M$ is not onto, but this is another way to assert non-invertibility. Note that $M$ can still be diagonalizable, as it can still have a basis of eigenvectors, independent of whether or not it 'collapses' some.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Writing a summation as the ratio of polynomial with integer coefficients Write the sum $\sum _{ k=0 }^{ n }{ \frac { { (-1) }^{ k }\left( \begin{matrix} n \\ k \end{matrix} \right) }{ { k }^{ 3 }+9{ k }^{ 2 }+26k+24 } } $ in the form $\frac { p(n) }{ q(n) }$, where $p(n)$ and $q(n)$ are polynomials with integral coefficients. I am not able to progress in this problem.Please help. Thank you.
$\sum _{ k=0 }^{ n }{ \frac { { (-1) }^{ k }\left( \begin{matrix} n \\ k \end{matrix} \right) }{ { k }^{ 3 }+9{ k }^{ 2 }+26k+24 } } = \sum _{k=0}^{n} {\frac { (-1)^k \dbinom{n}{k}}{2(k+2)} }-\sum _{k=0}^{n} {\frac { (-1)^k \dbinom{n}{k}}{k+3} }+\sum _{k=0}^{n} {\frac { (-1)^k \dbinom{n}{k}}{2(k+4)} },$ since $k^3+9k^2+26k+24=(k+2)(k+3)(k+4)$. Now consider the binomial theorem, $x(1-x)^n = \sum_{k=0}^{n} (-1)^k \dbinom{n}{k}x^{k+1}.$ Integrating both sides from $0$ to $1$ gives $\int_{0}^{1} x(1-x)^n\,dx = \sum _{k=0}^{n} {\frac { (-1)^k \dbinom{n}{k}}{k+2} },$ and similarly for the other parts (with $x^2$ and $x^3$ in place of $x$). So the sum is equal to $ \frac{1}{2} \int_{0}^{1}\left( x(1-x)^n -2x^2(1-x)^n +x^3(1-x)^n\right)dx=\frac{1}{2}\int_{0}^{1} x(1-x)^{n+2}\,dx=\frac{1}{2}\int_{0}^{1} (1-x)^{n+2}\,dx-\frac{1}{2}\int_{0}^{1} (1-x)^{n+3}\,dx =\frac{1}{2(n+4)(n+3)} $
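The closed form can be verified exactly for small $n$ with rational arithmetic (a SymPy sketch):

```python
import sympy as sp

for n in range(0, 8):
    s = sum(sp.binomial(n, k) * sp.Rational((-1)**k, k**3 + 9*k**2 + 26*k + 24)
            for k in range(n + 1))
    assert s == sp.Rational(1, 2*(n + 3)*(n + 4))
print("sum equals 1/(2*(n+3)*(n+4)) for n = 0..7")
```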
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivative of position [Beginning calculus question.] I saw in a calculus lecture online that for a position vector $\boldsymbol{r}$ $$\left|\frac{d\boldsymbol r}{dt}\right| \neq \frac{d\left| \boldsymbol r \right|}{dt}$$ but I don't understand exactly how to parse this. It's my understanding that: * *$\frac{d\boldsymbol r}{dt}$ refers to the rate of change in the position over time (speed?) *$|\boldsymbol r|$ refers to the magnitude of the position, i.e. the distance (from what to what?) *$\frac{d\left| \boldsymbol r \right|}{dt}$ refers to the rate of change in distance traveled over time, (a different kind of speed?) Is there a good way to understand what both of these expressions mean?
We can understand this with one example. Consider an angular motion problem where the position of a particle is given by $\mathbf{r}=\hat{\imath}\cos(\omega t)+\hat{\jmath}\sin(\omega t)$. Now $\frac{d\mathbf{r}}{dt}=-\hat{\imath}\omega\sin(\omega t)+\hat{\jmath}\omega \cos(\omega t)$, so clearly $\lvert \frac{d\mathbf{r}}{dt}\rvert = \omega$, whereas $\lvert \mathbf{r} \rvert =1$, hence $\frac{d\lvert \mathbf{r}\rvert}{dt}=0$. This means the particle stays at a constant distance from the origin (on the unit circle); only its angular position changes, hence the derivative of $\lvert \mathbf{r}\rvert$ is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Solve the double integral $\int _{-1}^1\int _{-\sqrt{1-4y^2}}^{\sqrt{1-4y^2}}\left(3y^2-2+2yx^2\right)dxdy\:$ $$\int _{-1}^1\int _{-\sqrt{1-4y^2}}^{\sqrt{1-4y^2}}\left(3y^2-2+2yx^2\right)\,dx\,dy.$$ I think it needs to be solved by the transition to polar coordinates: \begin{cases} x=r\cos(\phi),\\ y=r\sin(\phi) \end{cases}
Suppose $x=r\cos(\phi)$ and $y=\frac{1}{2}r\sin(\phi)$. Then the Jacobian is $\frac{1}{2}rdrd\phi$. Then $$ \int _{-1}^1\int _{-\sqrt{1-4y^2}}^{\sqrt{1-4y^2}}\left(3y^2-2+2yx^2\right)\,dx\,dy= \int_{0}^{2\pi}\int_{0}^{1}(...)\frac{1}{2}rdrd\phi. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that if $\sum_n |a_n|$ converges, then $\sum_n a_n^2$ converges Show that if $\sum_n |a_n|$ converges, then $\sum_n a_n^2$ converges whenever all $a_n$ are in $\mathbb{R}$. Lemma: I have already proved that if $\sum_n a_n$ and $\sum_n b_n$ converge absolutely, then $\sum_{n,m} a_n b_m$ converges absolutely. Let's take $b_n:=a_n$ for all $n$. From the above lemma we know that $\sum_{n,m} a_n a_m$ converges absolutely. Since $\sum_n a_n^2 \le \sum_{n,m} |a_n a_m|$, by comparison we get that $\sum_n a_n^2$ is convergent. Now I have two questions: 1) Is the above derivation correct? 2) Is it possible to deduce this without refering to the Lemma?
Addressing only question 2: If $\sum|a_n|$ converges, then $|a_n|\to0$ so eventually we must have $|a_n|<1$. Let us assume (Wlog) that $|a_n|<1$ for all $n$. Then $a_n^2=|a_n|^2<|a_n|$ for all $n$ so we can apply the comparison test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Spatial component of the wave function The spatial component of a wave function is given as $$\sin\left ( \frac{n \pi x}{L} \right )$$ then for $n=1$, we get $$\sin\left ( \frac{ \pi x}{L} \right )$$ and this produces one half cycle of a sine wave over the distance $x=0$ to $x=L$. It has been a while since I touched wave mechanics. Could someone explain to me how I can 'see' the part "this produces one half cycle of a sine wave over the distance $x=0$ to $x=L$"?
First note that for $$\sin\left(\frac{\pi x}{L}\right)=\sin\left(\frac{2\pi x}{2L}\right)$$ We have $$ x = \mbox{spatial variable for position}$$ $$ 2L = \mbox{spatial period of the wave}$$ This implies that $$ L = \mbox{half of the spatial period of the wave}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Separating differential equations The initial equation: $$ y''= g-((C*(y')^2)/m) $$ and I am trying to separate it into two differential equations. I also have that the aerodynamic force $F=C*(y')^2$. The initial equation describes the falling motion of a skydiver, where I am eventually trying to use Python's integrate function to predict times for falls of differing heights, but first I need to separate this into two differential equations. I don't know what method to use. I started by trying to solve implicitly, but I struggled to separate the variables, and then I tried to use Laplace transforms, but I didn't get far with that either.
So I have ended up doing this, which I think is right: set $z=\frac{dy}{dt}$, so $z=y'$; then $$z'=g-\frac{Cz^2}{m},$$ which together with $y'=z$ gives the two first-order equations.
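With that reduction to the first-order system $y'=z$, $z'=g-Cz^2/m$, SciPy's `solve_ivp` can predict the fall time directly. Here is a minimal sketch, where the values of $m$, $C$ and the drop height are made-up placeholders, not values from the question:

```python
from scipy.integrate import solve_ivp

g, C, m = 9.81, 0.25, 80.0     # illustrative values only
height = 1000.0                # metres to fall (illustrative)

def rhs(t, state):
    y, v = state               # y = distance fallen, v = y'
    return [v, g - C*v**2/m]   # the two first-order equations

def hit_ground(t, state):
    return state[0] - height   # event fires when the distance fallen reaches `height`
hit_ground.terminal = True

sol = solve_ivp(rhs, [0, 200], [0.0, 0.0], events=hit_ground, max_step=0.1)
print("fall time:", sol.t_events[0][0], "s")
```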
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find all $n$ for which $n^8 + n + 1$ is prime Find all $n$ for which $n^8 + n + 1$ is prime. I can do this by writing it as a linear product, but it took me a lot of time. Is there any other way to solve this? The answer is $n = 1$.
Since $n^2+n+1$ divides $n^8+n+1$ and $1<n^2+n+1<n^8+n+1$ for $n>1$, then $n=1$ is the unique solution (which indeed gives a prime).
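The factorization behind this, and a brute-force confirmation that $n=1$ is the only case in a small range, can be checked with SymPy:

```python
import sympy as sp

n = sp.symbols('n')
print(sp.factor(n**8 + n + 1))
# (n**2 + n + 1)*(n**6 - n**5 + n**3 - n**2 + 1)

print([m for m in range(1, 50) if sp.isprime(m**8 + m + 1)])   # [1]
```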
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Property for comparing two composition series Suppose we have two composition series $$ M = M_0 \unrhd M_1 \unrhd \cdots \unrhd M_r = 1 $$ and $$ M = N_0 \unrhd N_1 \unrhd \cdots \unrhd N_s = 1 $$ for a finite group $M$. Then we know that $r = s$. But for given $i$, let $j$ be choosen such that $$ M_{i-1} \cap N_j \le M_i, \quad\mbox{but}\quad M_{i-1} \cap N_{j-1} \nleq M_i. $$ How to show that $N_{j-1} \cap M_i \le N_j$? As \begin{align*} N_{j-1} \cap M_{i-1} \le N_j \Leftrightarrow N_{j-1} \cap M_{i-1} & = (N_{j-1} \cap M_{i-1}) \cap N_j \\ & = M_{i-1} \cap N_j \\ & \le M_i \end{align*} which is excluded, we must have $N_{j-1} \cap M_{i-1} \nleq N_j$. So if we choose $k$ such that $$ N_{k-1} \cap M_i \le N_k, \quad\mbox{but}\quad N_k \cap M_{i-1} \nleq N_k $$ i.e. I guess as both conditions are a maximality requirement for $k$ we then must have $k \le j$. Just for sets these inclusion do not hold, so I guess there must be facts about groups and composition series involved, but I am unable to find the right argument. So I am asking for any help!?
To avoid all those suffixes, we have simple subnormal sections $A/B$ and $C/D$ of $G$ with $A \cap D \le B$ and $A \cap C \not\le B$, and we want to prove $B \cap C \le D$. By the 2nd Isomorphism Theorem $(A \cap C)/(A \cap D) \cong (A \cap C)D/D \le C/D$, and since this quotient is nontrivial, $C/D$ is simple, and $A$ is subnormal in $G$, we have $(A \cap C)D = C$ and $(A \cap C)/(A \cap D)$ is simple. Now $A \cap D = B \cap D \unlhd B \cap C \unlhd A \cap C$. But we know that $B \cap C \ne A \cap C$, so $B \cap D = B \cap C$ and the resut follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many binary words of length n contain an even number of zeros? How many binary words (characters '0' and/or '1') of length $n$ contain an even number of zeros? I know that there are $2^n$ words overall, and that for every $n$ there are $\lceil{\frac{n}{2}}\rceil + 1$ options for even zeros. But now what? I got lost! Maybe I need to use Pascal's triangle?
Doesn't common sense say that there should be an equal number of strings with an even number of zeros as with an odd number of zeros? Well, not if n is odd I guess. But in that case shouldn't common sense say there are just as many strings with an even number of zeros as there are with an even number of ones (which would have an odd number of zeros). But we shouldn't rely on common sense. But we shouldn't toss it out either. ==== common sense formalized: half the strings have an even number of zeros, and half have an odd number of zeros =========== Let $a = [a_i]$ be an n digit binary number. (Each $a_i$ is a 0, 1 digit). Let $f(a) = [b_0a_1... a_i...]$ where the first digit is changed from a 0 to a 1 or from a 1 to a 0. The rest of the digits are left the same. $f$ is clearly a bijection. $f(a)$ will have either 1 more zero or 1 less zero than $a$. So if $a$ has an even/odd number of zeros $f(a)$ will have an odd/even number. So $f$ allows for a 1-1 correspondence between numbers with even number of zeros and numbers with odd number of zeros. So the number of numbers with even number of zeros is half the total number. There are $2^{n-1}$ binary strings with an even number of zeros.
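The conclusion $2^{n-1}$ is easy to confirm by direct counting for small $n$ (a minimal sketch):

```python
from itertools import product

for n in range(1, 11):
    count = sum(1 for w in product("01", repeat=n) if w.count("0") % 2 == 0)
    print(n, count, 2**(n - 1))   # the last two columns agree
```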
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Dual set of the unit ball is part of the unit ball. Define the unit ball centered at the origin as $B=\{x\in\mathbb{R}^d\mid \|x\|\leq 1\}$. Define the dual set of set $X$ as $X^*=\{y\in\mathbb{R}^d\mid\langle x,y \rangle\leq 1\ \forall x\in X\}$. I'm attempting to prove that for any set $C\subseteq\mathbb{R}^d$ it holds that $C=B$ if and only if $C=C^*$. I've managed to prove the implication from the right to the left by the Cauchy-Schwarz inequality. However, I'm having some trouble with the converse implication. In particular, how to prove that $C=B$ implies $C^*\subseteq C$?
Ok, I think I got it. Thanks to @AhmedHussein for the hint. Take any $y\in B^*$. If $y=0$, then clearly $y\in B$, so assume otherwise. Notice that $\frac{1}{||y||}y\in B$, and thus by the definition of $B^*$ it holds that $\langle\frac{1}{||y||}y, y\rangle\leq 1$. Extract the coefficient: $\frac{1}{||y||}\langle y, y\rangle\leq 1$. Multiply both sides by $||y||$: $\langle y, y\rangle\leq ||y||$ We can now substitute $||y||^2=\langle y,y \rangle$ and obtain inequality: $||y||^2\leq ||y||$. This is true only if $||y||\leq 1$. Hence, $y$ belongs to $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How do you calculate the exponent of an exponent How do you calculate the exponent of an exponent? In what order do you calculate the exponents? For example, to calculate ${2^3}^4$ Is it $({2^3})^4 = 8^4$ or $2^{3^4} = 2^{81}$ ADDED: Say I'm given $y=x^2$ and then told that $x = m^3$. Can I say that in this case $y = m^9$?
We usually define the notation $$ x^{y^z} $$ to mean $$ x^{\left(y^z\right)} $$ Mostly, this is because because this definition is most useful. Note that because of a power rule, if we wanted to write: $$ {\left(x^y\right)}^z$$ Then it's easier to write the equivalent $$ x^{yz}$$ Note that if $x=m^3$ and $y=x^2$, we can do the subsitution $y={\left(m^3\right)}^2=m^6$. It's important not to forget the brackets around $m^3$. This is similar to substituting, say, $b=a+1$ into $c=2b$. We must make sure to write $c=2(a+1)$ and not $c=2a+1$, which would be incorrect. As a footnote, notation in mathematics is not always black and white. The most useful definition in one context could be a terrible definition in another context. Some may prefer that non-associative operations like exponentiation be parenthesized always, making $a^{b^c}$ incorrect notation. In my experience, however, most people will understand $a^{b^c}=a^{\left(b^c\right)}$ correctly without clarification.
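This convention is also what Python uses: `**` is right-associative, so an unparenthesized tower is evaluated from the top down (a quick check):

```python
print(2 ** 3 ** 4)      # 2417851639229258349412352 == 2**81
print(2 ** 81)
print((2 ** 3) ** 4)    # 4096 == 8**4 == 2**12
```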
{ "language": "en", "url": "https://math.stackexchange.com/questions/1584925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the radius of circle $P$ $ABCD$ is a rectangle with $AB = CD = 2$. A circle centered at $O$ is tangent to $BC, CD,$ and $AD$ (and hence has radius $1$). Another circle, centered at $P$, is tangent to circle $O$ at point $T$ and is also tangent to $AB$ and $BC$. If line $AT$ is tangent to both circles at $T$, find the radius of circle $P$. Can someone create a picture for this? I am having a hard time seeing how the circle centered at $P$ can be tangent to the circle centered at $O$.
I don't know if you want a picture as an answer, or a solution as an answer, but here's a picture (roughly to scale, but no promises):
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the probability of matching exactly 4 numbers? In Canada's national 6-49 lottery, a ticket has 6 numbers each from 1 to 49, with no repeats. Find the probability of matching exactly 4 of the 6 winning numbers if the winning numbers are all randomly chosen. The answer is $\frac{\binom{6}{4}\cdot\binom{43}{2}}{\binom{49}{6}}$, but I don't understand why that is the answer and not $\frac{\binom{6}{4}\cdot\binom{45}{2}}{\binom{49}{6}}$, since if you match 4 numbers than there are $\binom{49-4}{2}=\binom{45}{2}$ ways to choose the last 2 numbers.
If there are 6 winning numbers, then you want to take away the possibility of choosing any of them when you choose the last 2 numbers. So we take away all 6 winning numbers from 49 so we get $$\frac{{6 \choose 4}{49-6 \choose 2}}{49\choose 6}=\frac{{6 \choose 4}{43 \choose 2}}{49\choose 6}$$
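For what it's worth, a quick numerical comparison of the two formulas (this check is mine, not part of the original answer):

```python
from math import comb

p_correct = comb(6, 4) * comb(43, 2) / comb(49, 6)
p_wrong = comb(6, 4) * comb(45, 2) / comb(49, 6)
print(p_correct)   # ~0.000969, i.e. about 1 in 1032
print(p_wrong)     # larger, since choosing the last 2 from 45 still allows them
                   # to be winning numbers, so 5- and 6-match tickets get counted too
```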
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Exponents and mod (Euler's theorem) I know how to compute $7^{402} \pmod{10}$ using Euler's theorem since $7$ and $10$ are relatively prime. But is there an easy way without using a calculator to compute $12^{720} \pmod{10}$. I don't think Euler's Theorem can be applied because $12$ and $10$ are not relatively prime... Also, for $5^{1806} \pmod{63}$, finding $\varphi(63)$ is kinda difficult. Is there an easy way to solve that?
First observe $(12)^{720} \equiv (2)^{720} \pmod {10}$ (I think this is obvious). Anyway, we can easily prove it using the binomial theorem on $(2+10)^{720}$. Now, try to find $x$ such that $2^{719} \equiv x \pmod 5$. This is easy by Euler's theorem: $2^{719} \equiv 3 \pmod 5$. Hence $2^{720}=2\cdot 2^{719}\equiv 2\cdot 3\equiv 1 \pmod 5$, and since $2^{720}\equiv 0 \pmod 2$, we get $2^{720} \equiv 6 \pmod {10}$. For your second question, $5^{1806} \equiv 125^{602} \equiv (63 \times 2 -1)^{602} \equiv (-1)^{602} \equiv 1 \pmod {63}$.
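Both congruences are easy to confirm with Python's built-in three-argument `pow` (modular exponentiation); this check is my addition:

```python
print(pow(12, 720, 10))   # 6
print(pow(5, 1806, 63))   # 1
```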
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Integrating over a tetrahedron Let $S$ be the tetrahedron in $\mathbb{R}^3$ having vertices $(0,0,0), (0,1,2), (1,2,3), (-1,1,1)$. Calculate $\int_S f$ where $f(x,y,z) = x + 2y - z$. Before I show you guys what I have tried, please no solutions. Just small hints. Now, I have been trying to set up the integral by looking at $x$ being bounded between certain planes, etc. I ended up with $$\int_0^{x+2} \int_{\frac{z}{2} - \frac{x}{2}}^2 \int_{2y - z}^{3z - 4y} f\:\: \mathrm{d}x\mathrm{d}y\mathrm{d}z.$$ But this doesn't seem correct. The question came with a hint: To find a linear diffeomorphism $g$ to use as a change of variables, but I have been unable to find such a mapping between $S$ and the unit cube.
Hint to your hint: You can find a linear transformation sending the vertices $(0,1,2)$, $(1,2,3)$, $(-1,1,1)$ (viewed as vectors from the vertex at the origin) to the standard basis (actually, it is easiest to compute the inverse of this map first). A linear transformation changes volume in a uniform way (think about where arbitrary little cubes are sent), and the bounds after applying this transformation will be easier. (The keyword for all of this is determinants.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is it possible to get a closed form for $1+2^i+3^i+\cdots + (N-1)^i$? Let $i=\sqrt{-1}$ be the complex imaginary unit, taking $$\arg(2)=0$$ for the definition of the summand $2^i$ in $$1^i+2^i+3^i+\cdots + (N-1)^i,$$ as $$2^i=\cos\log 2+ i\sin\log 2,$$ see [1]. Question. Is it possible to get a closed form (or the best possible approximation), for an integer $N\geq 1$, of $$1+2^i+3^i+\cdots + (N-1)^i,$$ where the summands are defined in the same way, taking principal branches of the complex argument and complex exponentiation? Thanks in advance; my goal is to start refreshing some easy facts in complex variables. Please tell me if there are mistakes in the use of the previous definitions. References: [1] MathWorld, http://mathworld.wolfram.com/ComplexExponentiation.html http://mathworld.wolfram.com/ComplexArgument.html
First of all, you make the assumption $$1^i=1$$ when this is not true in general: $$1^i=e^{\pm2\pi n},\quad n=0,1,2,3,\ldots$$ More generally, I will consider $$1^x+2^x+3^x+4^x+5^x+\ldots=\sum_{n=1}^{\infty}n^x$$ This has a relation obtainable by re-indexing: $$\sum_{n=2}^{m}n^x+n=\sum_{n=1}^{m-1}n^x+n^m$$$$\sum_{n=1}^{m-1}(n+1)^x+n=\sum_{n=1}^{m-1}n^x+n^m$$ Apply the binomial theorem: $$(n+1)^x=\sum_{k=0}^{\infty}\frac{n^{x-k}1^kx!}{(x-k)!k!}$$$$\sum_{n=1}^{m-1}\sum_{k=0}^{\infty}\frac{n^{x-k}1^kx!}{(x-k)!k!}+n=\sum_{n=1}^{m-1}n^x+n^m$$ And now, to be honest with you, I can't proceed from here. Or perhaps I took the wrong method of solving.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
What is the numerical range of $A$? Let $A = \left( {\begin{array}{*{20}{c}} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ \end{array}} \right)$ What is the numerical range of $A$?
Hint: This matrix is symmetric $A^T=A$, so in particular $$A^TA=AA^T$$ which means that the matrix is normal (as real valued). So, its numerical range is the convex hull of its eigenvalues, see here bullet 10. Now, $$\det(A-\lambda I)=(\lambda^2-1)^2$$
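A quick numerical confirmation of the hint (my own addition, using NumPy): the eigenvalues are $\pm1$, each with multiplicity two, so the numerical range is the segment $[-1,1]$.

```python
import numpy as np

A = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
# Real symmetric, hence normal: numerical range = [min eigenvalue, max eigenvalue].
print(np.linalg.eigvalsh(A))   # eigenvalues: -1, -1, 1, 1
```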
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Divide a line segment in the ratio $\sqrt{2}:\sqrt{3}.$ "Divide a line segment in the ratio $\sqrt{2}:\sqrt{3}.$" I have got this problem in a book, but I have no idea how to solve it. Any help will be appreciated.
Let $AB$ be the given line segment. Draw a line $L_1$ from $A$ which makes an angle of $45^\circ$ with $AB$, and a line $L_2$ from $B$ which makes an angle of $60^\circ$ with $AB$. Let $L_1$ and $L_2$ meet at $C$. Now $ABC$ is a triangle with $\angle A= 45^\circ$, $\angle B = 60^\circ$. Let the angle bisector from $C$ (which makes an angle of $37.5^\circ$ with each of $AC$, $BC$) meet $AB$ at $D$. We know that $$AD/DB = AC/BC \tag{1}$$ We also know that $$AC/\sin(97.5^\circ) = CD/\sin 45^\circ,\qquad BC/\sin(82.5^\circ) = CD/\sin 60^\circ$$ from the triangles $CAD$, $CBD$ respectively. Combining the above two equations, and noting that $\sin 97.5^\circ = \sin 82.5^\circ$, $$AC/BC = \sin 60^\circ/\sin 45^\circ = (\sqrt{3}/2 )/(1/\sqrt{2}) = \sqrt{3}/\sqrt{2}$$ From (1), $$AD/DB = \sqrt{3}/\sqrt{2}$$ so our line is divided by the point $D$ in the required ratio.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 8, "answer_id": 7 }
Two Indefinite Integrals Looking for some hints to evaluate the following integrals (with complex analysis or otherwise): $$\int_0^\infty\frac{x^{p-1}}{x+1}\,dx,\;\;\;\; 0<p<1,$$ $$\int_{-\infty}^\infty e^{-s^2+isz}\,ds,\;\;\;\;z\in\mathbb{C}.$$ Thanks, I'm pretty stuck on both.
For the first integral, you can set $t=\dfrac{x}{x+1}$ so that $\mathrm{d}x=\dfrac{\mathrm{d}t}{(1-t)^2}$, and you have $$\int_0^\infty \frac{x^{p-1}}{x+1}\,\mathrm{d}x=\int_0^1t^{p-1}(1-t)^{-p}\,\mathrm{d}t=\mathrm{B}(p,1-p)$$ where $\mathrm{B}(a,b)$ denotes the Beta function. For the second integral, consider completing the square, then recall that $$\int_{-\infty}^\infty e^{-x^2}\,\mathrm{d}x=\sqrt\pi$$
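For the record (this completion is mine, not part of the hint above): the Beta value reduces via the reflection formula, and completing the square in the second integral leaves a shifted Gaussian:
$$\int_0^\infty \frac{x^{p-1}}{x+1}\,dx=\mathrm{B}(p,1-p)=\Gamma(p)\Gamma(1-p)=\frac{\pi}{\sin(\pi p)},\qquad 0<p<1,$$
$$\int_{-\infty}^\infty e^{-s^2+isz}\,ds=e^{-z^2/4}\int_{-\infty}^\infty e^{-(s-iz/2)^2}\,ds=\sqrt{\pi}\,e^{-z^2/4}.$$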
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Digits of $\pi$ using Integer Arithmetic How can I compute the first few decimal digits of $\pi$ using only integer arithmetic? By 'integer arithmetic' I mean the operations of addition, subtraction, and multiplication with both operands as integers, integer division, and exponentiation with a positive integer exponent. The first hundred decimal digits or so would be sufficient if the method is not a completely general one. By 'compute', I mean that I would like to obtain subsequent digits of $\pi$ one-by-one, printing them to the screen as I go along. (Context: I'm writing a Befunge-98 program...)
The (or rather a) spigot algorithm for $\pi$ does exactly that: extract digits of $\pi$ one by one based entirely on integer arithmetic. See this paper.
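To make this concrete, here is a Python transcription (my own sketch, not the paper's exact code) of Gibbons' unbounded spigot for $\pi$: it streams decimal digits one at a time and uses nothing but integer addition, subtraction, multiplication and integer division, which fits the question's constraints.

```python
import itertools

def pi_digits():
    # State of an integer linear fractional transformation (Gibbons' streaming spigot).
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                       # next decimal digit of pi
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(itertools.islice(pi_digits(), 10)))   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```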
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Solve the equation $8x^3-6x+\sqrt2=0$ The solutions of this equation are given. They are $\frac{\sqrt2}{2}$, $\frac{\sqrt6 -\sqrt2}{4}$ and $-\frac{\sqrt6 +\sqrt2}{4}$. However i'm unable to find them on my own. I believe i must make some form of substitution, but i can't find out what to do. Help would be very appreciated. Thanks in advance.
HINT...Let $x=\cos\theta$ The equation becomes $\cos3\theta=-\frac{1}{\sqrt{2}}$ Can you take it from there?
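Carrying the hint through (this continuation is mine, not part of the original hint): $\cos 3\theta=-\tfrac{1}{\sqrt2}$ gives $3\theta=135^\circ,\ 225^\circ,\ 495^\circ$ (up to multiples of $360^\circ$), i.e. $\theta=45^\circ,\ 75^\circ,\ 165^\circ$, hence $$x=\cos 45^\circ=\frac{\sqrt2}{2},\qquad x=\cos 75^\circ=\frac{\sqrt6-\sqrt2}{4},\qquad x=\cos 165^\circ=-\frac{\sqrt6+\sqrt2}{4},$$ and since a cubic has at most three roots, these are exactly the three listed solutions.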
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Given $Z_n\rightarrow 0$ in probability and $W$ a random variable, proving $WZ_n\rightarrow 0$ Given $Z_n\rightarrow 0$ in probability and $W$ a random variable, I need to prove $WZ_n\rightarrow 0$. I was given a hint: show that for every $\delta,\epsilon<0$ we have $$\{|WZ_n|\geq\epsilon\}\subset\{|Z_n|\geq\delta\}\cup\{|W|\geq\frac\epsilon\delta\}$$ I'm not sure how to get to this conclusion, and I'm not even sure how it will help- If we prove the hint, we get $$\lim_{n\rightarrow\infty} P(|WZ_n|\geq\epsilon)\leq\lim_{n\rightarrow\infty} P(|Z_n|\geq\delta)+ P(|W|\geq\frac\epsilon\delta)$$ I understand that $\lim\limits_{n\to\infty} P(|Z_n|\geq\delta)=0$, but why does $P(|W|\geq\frac\epsilon\delta)=0$? I do know $ P(|W|\geq\frac\epsilon\delta)\leq\frac{E|W|\delta}\epsilon$, from Markov's inequality.
I believe it is easier to use the subsequence characterization of convergence in probability. $\{Z_n W\}$ is clearly a sequence of random variables (since it is a product of r.v.'s). To show that $\{Z_n W\} \xrightarrow{ P} 0$, we will show that for any subsequence $\{Z_{n_k} W\}$ there is a further subsequence $\{n_\ell\}$ of $\{n_k\}$ such that $\{Z_{n_\ell}W\}\xrightarrow{a.e.} 0$. Let $\{Z_{n(k)} W\}$ be a subsequence. Since $Z_n \xrightarrow{P}0$, there is a subsequence of $Z_{n(k)}$ such that $Z_{n (k (\ell))}\xrightarrow{a.e.} 0$, and since $W$ is real-valued (hence finite a.e.), $Z_{n(k (\ell))} W\xrightarrow{a.e.} 0$, and hence $\{Z_n W\} \xrightarrow {P} 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1585983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
non-negative almost surely I have a probability measure P and a non-negative sequence of random variables $(X_n)$ and the limit $X=\lim X_n$ exists P-almost surely. I would like to show that $X\ge0$ P-almost surely.
If $X_n$'s are nonnegative, then $\limsup X_n$ and $\liminf X_n$ are nonnegative. If we are given that $\limsup X_n = \liminf X_n$, then $\limsup X_n = \liminf X_n \ge 0$ Also, remember that for a given $\omega$, $X_n(\omega)$ is a sequence of nonnegative numbers and a convergent sequence of nonnegative numbers converges to a nonnegative number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
efficient way to express large numbers I recently watched the walkthrough of Graham's Number on YouTube (Numberphile). Mind-blowing of course. I then puttered around in other large number topics like Ackerman and Tree(3) and fast growing hierarchies. It's all fantastic. I also came across a challenge that was to write out or describe the largest number you can in a succinct way. So I thought of a rule and I was wondering if any of you large number smarties could take a look at it and see how it compares to other things. I'll call the number $V(n)$. It's not the solution to anything, its only virtue is its simplicity. $V(n)$ is generated in $n$ steps. 1) Start with a power tower of $n$ that is $n$ high. Evaluate. 2)use previous result as the base and height of the next tower. Repeat until $n$ steps. $V(1)$: $1^1 = 1$ $V(2)$: * *$2^2 = 4$ *$4^{4^{4^4}} = 4^{1.34 \times 10^{154}}$ (Hey, it's bigger than Tree(2)!) $V(3)$: * *$3^{3^3} = 7.6$ trillion *tower with base and height of 7.6 trillion (= MONSTER) *tower with base and height of MONSTER etc. Anyone with the chops to compare this with other biggies? Thanks! Victor PS could not find a good tag for this question!
Well, to try to make your function easier to write, we could use Knuth's arrow-notation, as mentioned by Deedlit. $$V(n)=V(n-1)\uparrow\uparrow V(n-1)=V(n-1)\uparrow\uparrow\uparrow 2$$$$V_1(n)=n\uparrow\uparrow n$$ Why would we write it like this? Because it is simply easier to understand. And since you haven't defined your number as, say, $V(10)$, I will have to make an assumption that your question is how your number-creation algorithm compares to the way other big numbers are made. Simply put, it is much, much, much smaller than Graham's number. Why? Think about my function, $f(a,b,c)=a\uparrow^bc$. Let us compare the following in size: $f(5,5,6),f(5,6,5),f(6,5,5)$ We quickly see the following: $$f(6,5,5)<f(5,5,6)<f(5,6,5)$$ What does this tell you? For my function, increasing $b$ has the largest effect. This effect is much larger than increasing $a$ or $c$. So the way we get to Graham's number is as follows: $$G=f(3,G_{63},3)$$$$G_n=3\uparrow^{G_{n-1}}3, G_0=4$$ We quickly see that we are increasing the $b$ term of my function, and by a LOT. So your numbers are small compared to Graham's number, which is small compared to many other "big numbers". Why did I choose to compare it to Graham's number? It uses the same notation, the same pattern(ish), and is therefore relatively easy to compare. So far, it is rather difficult to compare most "big numbers".
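For readers new to the arrow notation, here is a tiny recursive implementation (mine, purely illustrative — anything beyond toy inputs overflows immediately):

```python
def arrow(a, n, b):
    """Knuth's up-arrow a (n arrows) b, only feasible for very small inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 2, 3))   # 2 double-arrow 3 = 2^(2^2) = 16
print(arrow(3, 2, 2))   # 3 double-arrow 2 = 3^3 = 27
print(arrow(2, 3, 2))   # 2 triple-arrow 2 = 2 double-arrow 2 = 4
```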
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Bayes Theorem with multiple random variables I came across this expression in the Intro to Probability book I am studying: $P(A,B|C)=\frac{P(C)P(B|C)P(A|B,C)}{P(C)}$ Could anyone please explain how is this obtained. From a simple application of Bayes Rule, shouldn't it be: $P(A,B|C)=\frac{P(C|A,B)P(A,B)}{P(C)}$ where $P(A,B) = P(A|B)P(B)$ ?
The extension of Bayes' rule is such that $$P(A|B,C)=\frac{\color{blue}{P(A)}\color{blue}{P(B|A)}P(C|A,B)}{\color{blue}{P(B)}P(C|B)}\tag{1}$$ (note that the numerator equals $P(A,B,C)$ and the denominator equals $P(B,C)$). The formula can be seen as an extension of $$P(A|B)=\frac{\color{blue}{P(A)P(B|A)}}{\color{blue}{P(B)}}\tag{2}$$ This is not a derivation of course; I'm just trying to show you that the $\color{blue}{\mathrm{blue}}$ terms are common to both $(1)$ and $(2)$ and if you reverse the conditioning for the extra variable ($C$) by placing the conditional probabilities in both numerator and denominator you arrive at $(1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding an angle in a figure involving tangent circles The circle $A$ touches the circle $B$ internally at $P$. The centre $O$ of $B$ is outside $A$. Let $XY$ be a diameter of $B$ which is also tangent to $A$. Assume $PY > PX$. Let $PY$ intersect $A$ at $Z$. If $Y Z = 2PZ$, what is the magnitude of $\angle PYX$ in degrees? What I have tried: * *Obviously, the red angles are equal, and the orange angles are equal. This gives $XY \parallel TZ$. *$YZ=2PZ$. From this $XY=3TZ$ then $O'Z=3OY$. Let $O'Z=a=O'S$ so $SZ=\sqrt{2} a$, and also $O'O=2a$ *Then $SO=\sqrt{3} a$. Now we can use trigonometry to find $\angle PYX$ in triangle $ZSY$. Please verify whether my figure is correct. Your solution to this question is welcomed, especially if it is shorter.
Triangle O'SO fits the description of a 30-60-90 special angled triangle. Therefore, $\angle O'OS = 30^0$ Then, $\angle PYX = 15^0$ [angles at center = 2 times angles at circumference]
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Is there a notation for saying that $x_n\geq c$ from some $n$ and on? I am looking for something like $\lim\inf x_n \geq c$, but I need that from some point and on it is $\geq c$, not just that the limit is $\geq c$.
As @user notes, the usual term for this notion is "eventually" (introduced or anyway popularized by Kelley in his book General Topology). It's a quantifier, sometimes written $\forall^*$. For any predicate $P(n)$ of integers, $P(n)$ holds eventually (with respect to $n$) iff $(\forall^* n)\, P(n) := (\exists N)(\forall n\ge N)\, P(n)$. If you need to use the notion a lot, you could make up a notation, such as $$ (x_n)_{n\in\Bbb N} \le^* (y_n)_{n\in\Bbb N} := (\forall^* n\in \Bbb N)\, x_n \le y_n. $$ with the understanding that a real $c$ used as a sequence, as in $(y_n)\, ^*{\ge}\, c$, stands for the constantly-$c$ sequence. Note that $\le^*$ is a preorder — it's reflexive and transitive. Just as an aside, the dual notion to "eventually" is "frequently" (again, Kelley's term): $P(n)$ holds frequently (w.r.t. $n$) means $(\forall N)(\exists n\ge N)\,P(n)$. Over the integers, it's equivalent to "$P(n)$ holds infinitely often". If we write $\exists^*$ for "frequently", then these two quantifiers obey the familiar rules: $\forall^* = \lnot \exists^*\lnot$ and $\exists^* = \lnot \forall^*\lnot$, as is easily verified: $$\begin{align} \lnot (\exists^* n)\lnot P(n)&\iff \lnot (\forall N)(\exists n\ge N)\lnot P(n) \\ &\iff (\exists N)(\forall n\ge N)\lnot\lnot P(n) \\ &\iff (\forall^* n) P(n). \\ \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 1 }
Does a polynomial minus the tangent at a certain point have a double root? Given: $P(x)$ is a polynomial of at least the second degree and $L(x)$ is the tangent to $P(x)$ at $x=a$. Questions: Can one then say that $P(x)-L(x)$ has a double root at $x=a$? If so, why? If not, why not?
You can always rewrite your polynomial as $P(x) = P(a) + P'(a) (x-a) + \frac{P''(a)}{2}(x-a)^2 + \dots $. Using this form, you can write $L(x)$ as $L(x) = P(a) + P'(a)(x-a)$. Combining these two formulas we get that $P(x) - L(x) = (x-a)^2 \cdot (\frac{P''(a)}{2} + \dots )$, which shows that the function $P(x)-L(x)$ has a root of multiplicity at least two at $x=a$: it is exactly a double root when $P''(a) \neq 0$. Otherwise the multiplicity of the root is determined by which higher derivative of $P$ at $a$ is the first to be non-zero.
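A small symbolic sanity check (my own, with an arbitrarily chosen example polynomial):

```python
import sympy as sp

x, a = sp.symbols('x a')
P = x**4 - 2*x**2 + 3*x                                  # an arbitrary example polynomial
L = P.subs(x, a) + sp.diff(P, x).subs(x, a) * (x - a)    # tangent line at x = a
print(sp.factor(sp.expand(P - L).subs(a, 1)))            # (x - 1)**2*(x + 1)**2, divisible by (x-1)^2
```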
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving a sequence is unbounded The sequence is $(u_{n})_{n}$ for which $u_{n}=e^{n}$. The answer I have says that for any given positive real $U$, the term of index/position $n=\left \lfloor \ln(U+1) \right \rfloor + 1$ will be such that $\left | u_{n} \right |> U$. I don't understand this answer. Can someone please explain? Thanks in advance.
Proving a sequence is unbounded means that you must show that no number $N$ bounds it: for any large number $N$ there is no $M$ such that $u_k < N$ for every $k \ge M$. For $u_k = e^k$ this can be done by showing that $$e^k > N$$ if we pick $k = \lfloor \log (N+1)\rfloor + 1$. Because then you have $$e^{\lfloor \log (N+1)\rfloor + 1} > e^{\log(N+1)} = N+1 > N$$ since $\lfloor \log (N+1)\rfloor + 1 > \log(N+1)$. Therefore for any bound $N$ you can show that the sequence outgrows it. For a sequence like $\{2,-1,20,-1,200,\ldots\}$ perhaps the easiest way to show it is unbounded is to forget about the part that doesn't fit the growing pattern. That is, consider instead the subsequence $\{2, 20, 200, \ldots\}$. You can show that this is unbounded, and it is a subsequence of the original sequence. Any sequence with an unbounded subsequence is itself unbounded. Hope that helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $A $ in $\sqrt{28}+\sqrt{7}=\sqrt{A}$ I got this problem from an old test paper of mine. I tried working on it today but I couldn't solve it. I tried solving it using logarithms but I didn't get an answer. The problem asks to solve for $A$.Can you guys help me solve this problem? Here's the problem: $\sqrt{28}+\sqrt{7}=\sqrt{A}$
Here is how it works $$\sqrt{28}=\sqrt{4 \cdot 7}=\sqrt{4}\sqrt{7}=2\sqrt{7}$$ and hence $$\sqrt{A} =\sqrt{28}+\sqrt{7}=2\sqrt{7}+\sqrt{7}=3\sqrt{7}$$ which leads to $$(\sqrt{A})^2=(3\sqrt{7})^2 \\ A=9 \cdot 7 =63$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does there exist a triangle with integer side lengths in which one of the heights is equal to the side serving as the base? Does there exist a triangle with integer side lengths in which one of the heights is equal to the side serving as the base? Here $a,k,l$ are natural numbers and $a$ is the length of the height.
In formulas, we want to find positive integers $a,b,c,d$ solving the Diophantine equations $$ a^2+(a+b)^2=c^2 $$ $$ b^2+(a+b)^2=d^2. $$ In particular, we obtain the Diophantine equation $$ a^2+d^2=b^2+c^2, $$ which has been studied here: "Diophantine equation $a^2+b^2=c^2+d^2$". Hence there are integers $p,q,r,s$ such that $$ (a,b,c,d)=(pr+qs,ps+qr,pr-qs,qr-ps). $$ Use this to show that there are no solutions (if I am not wrong); the equations then are given by $$ (2pr + ps + qr)(ps + qr + 2qs) + (pr + qs)^2=0, $$ $$ (pr + 2ps + qs)(pr + 2qr + qs) + (ps + qr)^2=0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Behaviour a function when input is small If I have the function: $P(N)=\frac{P_0N^2}{A^2+N^2}$, with $P_0, A$ positive constants For small $N$, am I right in thinking that because $A$ dominates $N$ we have that $P(N) \approx \frac{P_0N^2}{A^2}$
More precisely, we can write for $A^2>N^2$ $$\begin{align} P(N)&=P_0\left(\frac{N^2}{A^2+N^2}\right)\\\\ &=P_0\left(\frac{(N/A)^2}{1+(N/A)^2}\right)\\\\ &=P_0\frac{N^2}{A^2}\sum_{k=0}^\infty (-1)^k\left(\frac{N^2}{A^2}\right)^{k}\\\\ &=\frac{P_0N^2}{A^2}-P_0\frac{N^4}{A^4}+P_0\frac{N^6}{A^6}+O\left(\frac{N^8}{A^8}\right) \end{align}$$ Therefore, if we retain only the first term in the expansion we can formally write $$P(N)\approx \frac{P_0N^2}{A^2}$$ where the approximation error is of order $(N/A)^4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An inverse function and a differential equation What is the inverse function of $f(x)=x+\exp(x)$? I doubt it's the solution of the differential equation: $$y'+y'\exp(y)-1=0.$$
The derivative of $f(x)$ is $1+\exp(x)$. So by the inverse function theorem $$(f^{-1})'(x+\exp(x))=\frac{1}{1+\exp(x)}$$ or $$(f^{-1})'(y)=\frac{1}{1+\exp(f^{-1}(y))}.$$ Thus $g=f^{-1}$ solves the differential equation $$g'(x)=\frac{1}{1+\exp(g(x))}$$ with the initial condition $g(1)=0$. Straightforward algebra shows that this is equivalent to your differential equation.
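Spelling out the "straightforward algebra" at the end (this remark is my addition): $$y'+y'e^{y}-1=0\iff y'\left(1+e^{y}\right)=1\iff y'=\frac{1}{1+e^{y}},$$ which is exactly the equation satisfied by $g=f^{-1}$ above.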
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What is the general name for a surface of form $z = f(x, y)$? Is there a generic, geometrical name for a "sheet" of data, where the data fits the form: $$z = f(x, y)$$ Note that $z$ might be an actual third dimension (such as elevation in terrain data) or any scalar measurement (such as surface temperature). I know you could generically say "it is a surface"... but a sphere or a cube also are surfaces, and I want to exclude them (from the term I'm looking for) because they are not single-valued in $f(x, y)$. Note I'm not trying to get too picky that the surface is continuous in $(x, y)$ or discrete. The dataset might be infinite in range, or only exist for some constrained region in $(x, y)$. And I'm not concerned if the surface is continuously differentiable. I'm looking for the term that glosses over all those. I realize this is might just be called "planar data". And in some contexts (e.g. terrain maps in video games) I think this may be called "two-and-a-half dimensional data". But is there a well-established term in geometry or topology for this? Thanks! UPDATE: I finally stumbled across the term I was looking for: "scalar field". Perhaps this term is more common in physics than math; since my application is in the physical sciences (terrain elevation and radio propagation measurements) I'm probably going to use that in my work. I really appreciate all who replied.
Non-parametric form and graph form are two commonly employed terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1586982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Past open problems with sudden and easy-to-understand solutions What are some examples of mathematical facts that had once been open problems for a significant amount of time and thought hard or unsolvable by contemporary methods, but were then unexpectedly solved thanks to some out-of-the-box flash of genius, and the proof is actually short (say, one page or so) and uses elementary mathematics only?
Our complex analysis professor told us that huge amounts of literature was written studying analytic functions that are bounded. Then came https://en.wikipedia.org/wiki/Liouville%27s_theorem_(complex_analysis) like a hammer. Super easy proof, (according to my professor) very unexpected result at the time. I haven't found a verification of this anecdote, but fun story in any case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127", "answer_count": 19, "answer_id": 9 }
How to adjust a ratio to create desired proportion Let's say I have a bunch of apples and oranges. Specifically 500 apples and 760 oranges. I need to create a big basket of fruit. For every 7 apples I include, I need to include 3 oranges. I want to make the biggest basket possible using as much of my fruit as I can while honoring the desired apple to orange ratio. I'm not mathematically inclined and I've been just taking bunches of 7 and 3 at a time until I run out. There has to be some kind of formula I can use to tell me upfront the max number of each that I should pull. Thanks in advance.
Suppose you put $n$ groups of 7-apples-and-3-oranges into the big basket. Then you will have $7n$ apples and you will have $3n$ oranges in the basket. If all of the apples are used up, you would have $7n=500$, which means you would have $n=500/7 = 71\frac37$ groups. That means you can have at most $71$ complete groups, as far as the apples are concerned. On the other hand, if all of the oranges are used up, you would have $3n=760$, which means you would have $n=760/3=253\frac13$ groups. That means you can have at most $253$ complete groups, as far as the oranges are concerned. So you will not be able to make any more groups once you have made $71$ groups (there will not be enough apples left for another full group: only $500-7\cdot 71=3$ apples remain).
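The same bookkeeping in a few lines of Python (my own illustration):

```python
apples, oranges = 500, 760
groups = min(apples // 7, oranges // 3)   # 71, limited by the apples
print(groups, 7 * groups, 3 * groups)     # 71 groups use 497 apples and 213 oranges
```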
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is there any way to universally define the notion of $\text{Isomorphism}$? Suppose we want to give a very general definition of the term Isomorphism, first of all, we'll want an isomorphism to be a bijective function. Informally, we want our function to preserve whatever 'structure' we're talking about, may it be: multiplication, addition, order if we're about ordered fields, or maybe adjacency if talking about graphs, etc. However, after thinking about this for a few hours, I could not come up with a precise definition of the term which could fit every context. Do you think such a definition is possible?
I think category theory answers the question. An isomorphism in a category is just a morphism which has a two-sided inverse. Now, what is a morphism? This depends on the category you're working in. For example we have: * *the category of topological spaces, where continuous functions are the morphisms; *the category of vector spaces, where linear maps are the morphisms; *the category of groups, where group homomorphisms are the morphisms; *the category of differentiable manifolds, where smooth functions are the morphisms; *the category of sets, where functions are morphisms, *etc. For example, homeomorphisms are the isomorphisms in the category of topological spaces (we even see once in a while people say that homeomorphic topological spaces are topologically isomorphic).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
$I=3\sqrt2\int_{0}^{x}\frac{\sqrt{1+\cos t}}{17-8\cos t}dt$. If $0<x<\pi$ and $\tan I=\frac{2}{\sqrt3}$, find $x$. Let $I=3\sqrt2\int_{0}^{x}\frac{\sqrt{1+\cos t}}{17-8\cos t}dt$. If $0<x<\pi$ and $\tan I=\frac{2}{\sqrt3}$, find $x$. $\int\frac{\sqrt{1+\cos t}}{17-8\cos t}dt=\int\frac{\sqrt2\cos\frac{t}{2}}{17-8\times\frac{1-\tan^2\frac{t}{2}}{1+\tan^2\frac{t}{2}}}dt=\int\frac{\sqrt2\cos\frac{t}{2}(1+\tan^2\frac{t}{2})}{17+17\tan^2\frac{t}{2}-8+8\tan^2\frac{t}{2}}dt=\int\frac{\sqrt2\cos\frac{t}{2}(\sec^2\frac{t}{2})}{9+25\tan^2\frac{t}{2}}dt$ $=\int\frac{\sqrt2(\sec^2\frac{t}{2})}{(\sec\frac{t}{2})(9+25\tan^2\frac{t}{2})}dt$ $=\int\frac{\sqrt2(\sec^2\frac{t}{2})}{\sqrt{1+\tan^2\frac{t}{2}}(9+25\tan^2\frac{t}{2})}dt$ Put $\tan\frac{t}{2}=p$ $=\int\frac{\sqrt2(\sec^2\frac{t}{2})}{\sqrt{1+\tan^2\frac{t}{2}}(9+25\tan^2\frac{t}{2})}dt$ $=\int\frac{\sqrt2 }{2\sqrt{1+p^2}(9+25p^2)}dp$ I am stuck here. Please help me. Thanks.
HINT: $$\frac{\sqrt{1+\cos t}}{17-8\cos t}=\dfrac{\sqrt2\left|\cos\dfrac t2\right|}{17-8\left(1-2\sin^2\dfrac t2\right)}$$ Set $\sin\dfrac t2=u$
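In case it helps to see where the hint leads (this continuation is mine, not part of the original hint): with $u=\sin\frac t2$, $du=\frac12\cos\frac t2\,dt$, and $\sqrt{1+\cos t}=\sqrt2\,\cos\frac t2\ge 0$ on $[0,x]\subset[0,\pi]$,
$$I=3\sqrt2\int_0^x\frac{\sqrt2\cos\frac t2}{9+16\sin^2\frac t2}\,dt=12\int_0^{\sin\frac x2}\frac{du}{9+16u^2}=\arctan\!\left(\frac{4}{3}\sin\frac x2\right),$$
so $\tan I=\frac43\sin\frac x2=\frac{2}{\sqrt3}$ gives $\sin\frac x2=\frac{\sqrt3}{2}$ and $x=\frac{2\pi}{3}$.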
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differentiable function such that $f(x+y),f(x)f(y),f(x-y)$ are an arithmetic progression for all $x,y$ If $f$ is a differentiable function on $\mathbb{R}$ such that $f(x+y),f(x)f(y),f(x-y)$ (taken in that order) are in arithmetic progression for all $x,y\in \mathbb{R}$ and $f(0)\neq0,$ then $(A)f'(0)=-1\hspace{1cm}(B)f'(0)=1\hspace{1cm}(C)f'(1)-f'(-1)=0\hspace{1cm}(D)f'(1)+f'(-1)=0$ My Try: As $f(x+y),f(x)f(y),f(x-y)$ are in A.P., $2f(x)f(y)=f(x+y)+f(x-y)\qquad(1)$ Put $x=0,y=0$ in the above equation: $2f^2(0)=2f(0)$, so $f(0)=1$ because $f(0)\neq0$. Differentiating the equation $(1)$, $2f(x)f'(y)\frac{dy}{dx}+2f(y)f'(x)=f'(x+y)(1+\frac{dy}{dx})+f'(x-y)(1-\frac{dy}{dx})$ I am stuck here. Now I don't know how to go further. Please help me.
For $x=y$, you have $$f(x-x)+f(x+x)=2f(x)f(x)$$ or $$f(0)+f(2x)=2f^2(x)$$ or, $$2f^2(x)-f(2x)=1$$ Differentiating both sides with respect to $x$, we get $$4f(x)f'(x)-2f'(2x) = 0 $$ Putting $x=0$, you have $$f'(0)=0$$ So options (a) and (b) are wrong. As for (c) and (d), they are trying to check if $f'(x)$ is an even function or odd. Use the fact that $$f(2x)=2f^2(x)-1$$ and $$f(-2x)=2f^2(-x)-1$$ Now assuming the function is even, i.e.$$f(2x)=f(-2x) \Rightarrow f^2(x)=f^2(-x) \Rightarrow f(x)=f(-x)$$ That is, the equation supports that the function is even. But assuming the function is odd, $$f(2x)=-f(-2x)$$ $$\Rightarrow 2f^2(x)-1=1-2f^2(-x)$$ $$\Rightarrow 2[f^2(x)+f^2(-x)]=2$$ $$\Rightarrow [f^2(x)+f^2(-x)]=1 \not \Rightarrow f(x)=-f(-x)$$ That is, the equation does not support that the function is odd. So your answer is (d).
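A slightly more direct way to justify the evenness used for option (d) (this remark is my addition): replacing $y$ by $-y$ in $2f(x)f(y)=f(x+y)+f(x-y)$ gives $2f(x)f(-y)=f(x-y)+f(x+y)$, so $f(x)f(y)=f(x)f(-y)$ for all $x,y$; taking $x=0$ (where $f(0)=1\neq 0$) yields $f(y)=f(-y)$. Hence $f$ is even, so $f'$ is odd and $f'(1)+f'(-1)=0$, which is option (d).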
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Calculate the range of $f$. Calculate the range of the function $f$ with $f(x) = x^2 - 2x$, $x\in\Bbb{R}$. My book has solved solutions but I don't get what is done: $$f(x) = x^2 - 2x + (1^2) - (1^2)= (x-1)^2 -1$$ edit: sorry for wasting all of the people who've answered time, I didnt realize that I had to complete the square.... Wish I could delete it but I cant
From the form $f(x)=(x-1)^2-1$ you can see immediately two things: * *Since the term $(x-1)^2$ is a square it is always non-negative, or in simpler words $\ge 0$. So, its lowest possible value is $0$, which is indeed attained when $x=1$. So, $f(x)=\text{ positive or zero } -1 \ge 0-1=-1$. So $f(x)$ only takes values greater than or equal to $-1$. So this is the lower bound for the range of $f$. *Is there also an upper bound for the values of $f$, or can $f(x)$ take all values that are greater than $-1$? To answer this, you should ask if you can increase the first term $(x-1)^2$ (which depends on $x$) as much as you want. The second term $-1$ is now irrelevant. Indeed $(x-1)^2$ keeps growing as $x$ increases, in fact without bound. This allows you to conclude that $$\text{Range}(f)=[-1, +\infty)$$ and that is why they brought $f$ to this form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding the exact values of $\sin 4x - \sin 2x = 0$ So I've used the double angle formula to turn $$\sin 4x - \sin 2x = 0$$ $$2\sin2x\cos2x - \sin2x = 0$$ $$\sin2x(2\cos2x - 1) = 0$$ $$\sin2x = 0$$ $$2x = 0$$ $$x = 0$$ $$2\cos2x - 1 = 0$$ $$2\cos2x = 1$$ $$\cos2x = \frac{1}{2}$$ $$2x = 60$$ $$x = 30$$ With this information I am able to use the unit circle to find $$x = 0,30,180,330,360$$ However, when I looked at the answers, $$x = 0,30,90,150,180,210,270,330,360$$ Can someone tell me how to obtain the rest of the values for $x$. Thanks
First you should use the natural angle unit, which is the radian. Second, you should write the general solution, which is always a congruence. Your equation factors as \begin{align*} 2\sin x\cos x(2\cos 2x -1)=0 &\iff\begin{cases}\sin x=0\\ \cos x=0 \\\cos 2x=\frac12 \end{cases}\\ & \iff \begin{cases} x\equiv 0\mod \pi \\ x\equiv \frac\pi2\mod\pi\\2x\equiv\pm\frac\pi3\mod 2\pi\iff x\equiv\pm\frac\pi6\mod \pi \end{cases} \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
Why does $A \in$ SO(2) and $A \ne I \implies $ker$(A-I) = \{0\}$ I've come across this in proofs for Euclidean geometry a few times. Can someone tell me why this is true? $$A \in \text{SO}(2) \text{ and } A \ne I \implies \text{ker}(A-I) = \{0\}$$
Let $A\in\textrm{SO}(2,\mathbb{R})\setminus\{I_2\}$, there exists $\theta\in\mathbb{R}\setminus 2\pi\mathbb{Z}$ such that: $$A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\\sin(\theta)&\cos(\theta)\end{pmatrix}.$$ Therefore, one gets: $$\det(A-I_2)=(\cos(\theta)-1)^2+\sin^2(\theta)=2(1-\cos(\theta))\neq 0.$$ Finally, $A-I_2\in\textrm{GL}(2,\mathbb{R})$ and $\textrm{ker}(A-I_2)=\{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Morera's Theorem and independence of path Determine whether $$ G(z)=\int_{1}^{z}\frac{1}{t} dt $$ is independent of the path joining $1$ and $z$ , and discuss the relationship of your answer to Morera's Theorem ..... I believe that as long as the paths goes through a region that excludes the origin ,the integral is independent of the path.... but I am not sure if this is correct or what type of argument is required here , in particular the relation of the answer to Morera's theorem is not clear to me .
Have you thought about traversing the unit circle for $z=1$? $$ \int_C\frac 1zdz=\int_0^{2\pi}\frac{1}{e^{it}}(ie^{it})dt=\int_0^{2\pi}idt=2\pi i $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1587971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solve for Rationals $p,q,r$ Satisfying $\frac{2p}{1+2p-p^2}+\frac{2q}{1+2q-q^2}+\frac{2r}{1+2r-r^2}=1$. Find all rational solutions $(p,q,r)$ to the Diophantine equation $$\frac{2p}{1+2p-p^2}+\frac{2q}{1+2q-q^2}+\frac{2r}{1+2r-r^2}=1\,.$$ At least, determine an infinite family of $(p,q,r)\in\mathbb{Q}^3$ satisfying this equation. If you can also find such an infinite family of $(p,q,r)$ with the additional condition that $0<p,q,r<1+\sqrt{2}$, then it would be of great interest. This question is related to Three pythagorean triples.
If three rationals $a,b,c$ can be found, these lead to rational $p,q,r$. For simplicity, we'll start with the $a,b,c$. Given the system, $$a^2+(b+c)^2 = x_1^2\\b^2+(a+c)^2 = x_2^2\\c^2+(a+b)^2 = x_3^2$$ Let $a = m^2-n^2,\;b=2mn-c,$ and it becomes, $$(m^2+n^2)^2 = x_1^2\\2c^2+2c(m^2-2mn-n^2)+(m^2+n^2)^2 = x_2^2\\2c^2-2c(m^2+2mn-n^2)+(m^2+2mn-n^2)^2 = x_3^2$$ To simultaneously make two quadratics of form, $$a_1x^2+a_2x+\color{blue}{a_3^2} = z_1^2\tag1$$ $$b_1x^2+b_2x+\color{blue}{b_3^2} = z_2^2\tag2$$ into squares is easy since their constant term is already a square. Assume, $$a_1x^2+a_2x+\color{blue}{a_3^2} = (px+\color{blue}{a_3})^2$$ Expand and solve for $x$, and one finds, $$x = \frac{a_2-2a_3p}{-a_1+p^2}\tag3$$ Substitute this into $(2)$ and one gets a quartic in $p$ with a square leading term. Since it is to be made a square, assume it to have the form, $$\color{blue}{b_3^2}p^4+c_1p^3+c_2p^2+c_3p+c_4 = (\color{blue}{b_3}p^2+ep+f)^2$$ Expand and collect powers of $p$ to get, $$d_1p^3+d_2p^2+d_3p+d_4 = 0$$ where the $d_i$ are polynomials in the two unknowns $e,f$. These two unknowns enable $d_1 = d_2 = 0$ to be solved, leaving only the linear eqn, $$d_3p+d_4 = 0$$ Solve for $p$, substitute into $(3)$, and one gets an $x$ that simultaneously solves $(1)$ and $(2)$. Since one has $a,b,c$, then $p,q,r$ can be recovered using the relations described by the OP here. For those not yet familiar with elliptic curves, this tangent method is easy to understand. It was known way back to Fermat, but has now been incorporated into the modern theory of elliptic curves. An illustrative example can also be found in this post.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
For any coprime integers $(x, y)$, $m\geq 2$, $\delta\geq 1$, is it true that $\mid x^m - y^{m+\delta} \mid \geq \delta$? Is it true that if $x,y,m,\delta$ are integers, $\gcd(x,y)=1$, $m\ge2$, $\delta\ge1$, then $$|x^m-y^{m+\delta}|\ge\delta?$$ Any proofs or references will be most welcome.
$|5^3-2^{3+4}|=|125-128|=3<4$, so the claim fails for $x=5$, $y=2$, $m=3$, $\delta=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove this equality: $\int_0^x(x-t)e^{\cos t}dt=\int_0^x\bigg(\int_0^te^{\cos s}ds\bigg)dt$ I'm trying to show this equality is true: $$\int_0^x(x-t)e^{\cos t}dt=\int_0^x\bigg(\int_0^te^{\cos s}ds\bigg)dt$$ I used the fundamental theorem of calculus without success. I need a hint how to solve this question.
Integrate by parts: $$ \int_0^x (x-t) e^{\cos{t}} \, dt = \left[ (x-t)\int_0^t e^{\cos{s}} \, ds \right]_{t=0}^x - \int_0^x (-1) \int_0^t e^{\cos{s}} \, ds \, dt. $$ The first term vanishes at the endpoints, the second is what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Let $f(x)=x\cos(x)$, Which of following is true? * *Let $f(x) = x\cos x$ for $x\in\mathbb R$. Then $\quad$(A) There is a sequence $x_n\to -\infty$ such that $f(x_n)\to 0$. $\quad$(B) There is a sequence $x_n \to \infty$ such that $f(x_n)\to\infty.$ $\quad$(C) There is a sequence $x_n \to\infty$ such that $f(x_n)\to-\infty$. $\quad$(D) $f$ is a uniformly continuous function. Source. If I take sequence $-n$, first option is true. What about other options? Thanks.
A) is true: if you pick $x_n = \pi/2 - n\pi$ you get $f(x_n)=0$ for every $n$, and $x_n\to-\infty$. B) is true: if you pick $x_n = 2n\pi$, then $f(x_n)\to+\infty$ since $\cos x_n=1$. C) is true: if you pick $x_n = (2n + 1)\pi$, then $f(x_n)\to-\infty$ since $\cos x_n = -1$. D) is false because $f'(x) = \cos x - x\sin x \simeq -x$ for suitable choices of $x$, so the derivative becomes arbitrarily large in absolute value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integration of a square root inside another square root function. Integrate the following $$\int_0^5 \sqrt{5-\sqrt{x}} \ \mathrm{d}x$$ I have done the following: I got the expression under the square root to equal $$5-x^{1/2}$$ so the integral then becomes $$\int_0^5 \left(5-x^{1/2}\right)^{1/2}\ \mathrm{d}x$$ Here is where I am stuck. Do I need to use some variation of the binomial theorem here? I hope my use of MathJax was "up-to-code" here. I apologize if not.
Hint: substitute $u=5-\sqrt{x}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Measure theory for self study. I have good knowledge of Elementary Real analysis. Now I'd like to study measure theory by myself (self-study). So please give me direction for where to start? Which book is good for starting? I have Principles of Mathematical Analysis by W. Rudin and Measure Theory and Integration by G. de Barra. Which book is rich in examples and exercises? Please suggest to me. Thanks in advance.
I have discovered Yeh's Real Analysis: Theory of Measure and Integration recently, and I recommend it wholeheartedly! Easy to read, I think, especially for self-study. It has a problem & proof supplement, by the way. Rudin (not PMA but RCA) is very good in the long run, but hard for a first encounter with measure theory. PMA is not mainly about measure theory, IIRC.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Prove This Inequality ${\pi \over 2} \le \sum_{n=0}^{\infty} {1 \over {1+n^2}} \le {\pi \over 2} + 1$ $${\pi \over 2} \le \sum_{n=0}^{\infty} {1 \over {1+n^2}} \le {\pi \over 2} + 1$$ I see I should use Riemann sum, and that $$\int_0^{\infty} {dx \over {1+x^2}} \le \sum_{n=0}^{\infty} {1 \over {1+n^2}}$$ But how do I explain this exactly, and how to get the other half of the inequality (with ${\pi \over 2} + 1$)?
Hint: $$ \sum_{n=0}^{\infty} {1 \over {1+n^2}}=\frac{1}{2}+\frac{\pi}{2}\coth(\pi) $$ Since $$\coth(\pi)> 1,\qquad \coth(\pi)\simeq 1.004,$$ we get $$\sum_{n=0}^{\infty} {1 \over {1+n^2}}\simeq\frac{1}{2}+\frac{\pi}{2}$$
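A quick numerical check of the hint (my addition):

```python
from math import pi, cosh, sinh

partial = sum(1 / (1 + n * n) for n in range(100000))
print(partial)                                  # ~2.0766 (remaining tail is below 1e-5)
print(0.5 + (pi / 2) * cosh(pi) / sinh(pi))     # ~2.0767, the closed form
print(pi / 2, pi / 2 + 1)                       # the bounds: 1.5707... and 2.5707...
```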
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Is Lipschitz condition necessary for existence and uniqueness of a solution for a first order differential equation? Is the Lipschitz condition only a sufficient condition for existence and uniqueness of $\frac{dy}{dx}=f(x,y), y(0)=y_0$? Is it also necessary?
First, it's worth pointing out that for a Cauchy problem $$ \begin{cases} y'=f(x,y) \\ y(x_0)=y_0 \end{cases} $$ we require $f$ to be a Lipschitz function only in the second variable, while it is enough for it to be continuous in the first one (this result is known as the Picard–Lindelöf theorem). A typical example for which uniqueness is lost is given by $f(x,y)=\sqrt{y}$ (which is indeed not Lipschitz in $y$ near $y=0$). As already pointed out, Peano's existence theorem (together with other more general results) guarantees existence if $f$ is just continuous in both variables. Long story short: for a general ODE of the form $y'=f(x,y)$, the Lipschitz condition in the second variable is a sufficient condition (not a necessary one) for uniqueness, and it is not needed at all for existence, where continuity suffices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Why do we do mathematical induction only for positive whole numbers? After reading a question made here, I wanted to ask "Why do we do mathematical induction only for positive whole numbers?" I know we usually start our mathematical induction by proving it works for $0,1$ because it is usually easiest, but why do we only prove it works for $k+1$? Why not prove it works for $k+\frac12$, assuming it works for $k=0$. Applying some limits into this, why don't we prove that it works for $\lim_{n\to0}k+n$? I want to do this because I realized that mathematical induction will only prove it works for $x=0,1,2,3,\dots$. assuming we start at $x=0$, meaning it is unknown if it will work for $0<x<1$ for all $x$. And why not do $k-1$? This way we can prove it for negative numbers as well, right? What's so special about our positive whole numbers when doing mathematical induction? And then this will only work for real numbers because we definitely can't do it for complex numbers, right? And what about mapping values so that it becomes one of the above? That is, change it so that we have $x\to\frac x2$? Then proving for $x+1$ becomes a proof for all $x$ that is a multiple of $\frac12$!?
All of the contexts in which you propose to apply induction are implicitly using the positive integers anyway. For instance, suppose we have a property $P$, and we prove that $P(x)\implies P(x+\frac 1 2)$. Then if we know $P(0)$, we can prove $P(0.5)$, $P(1)$, $P(1.5)$, etc. But you're implicitly using the natural numbers here. You're proving that if $P(n\frac 1 2)$ holds, then $P((n+1)\frac 1 2)$ holds, and thus concluding that $P(n\frac 1 2)$ holds for all $n$. Or suppose we prove that $P(x)\implies P(x-1)$. Then from $P(0)$ we could deduce $P(n)$ for any negative $n$. But here you're implicitly proving that if $P(-n)$ holds then $P(-(n+1))$ holds and thus proving that $P(-n)$ holds for all positive $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 2 }
Prove $1^2-2^2+3^2-4^2+......+(-1)^{k-1}k^2 = (-1)^{k-1}\cdot \frac{k(k+1)}{2}$ I'm trying to solve this problem from Skiena book, "Algorithm design manual". I don't know the answer but it seems like the entity on the R.H.S is the summation for series $1+2+3+..$. However the sequence on left hand side is squared series and seems to me of the form: $-3-7-11-15\ldots $ I feel like its of the closed form: $\sum(-4i+1)$ So how do I prove that the equality is right?
It can be easily shown that the series on the left reduces to the sum of integers, multiplied by $-1$ if $k$ is even. For even $k$: $$\begin{align} &1^2-2^2+3^2-4^2+\cdots+(k-1)^2-k^2\\ &=(1-2)(1+2)+(3-4)(3+4)+\cdots +(\overline{k-1}-k)(\overline{k-1}+k)\\ &=-(1+2+3+4+\cdots+\overline{k-1}+k)\\ &=-\frac {k(k+1)}2\end{align}$$ Similarly, for odd $k$, $$\begin{align} &1^2-2^2+3^2-4^2+5^2+\cdots-(k-1)^2+k^2\\ &=1+(-2+3)(2+3)+(-4+5)(4+5)+\cdots +(-\overline{k-1}+k)(\overline{k-1}+k)\\ &=1+2+3+4+\cdots+\overline{k-1}+k\\ &=\frac {k(k+1)}2\end{align}$$ Hence, $$1^2-2^2+3^2-4^2+5^2+\cdots+(-1)^{k-1}k^2=(-1)^{k-1}\frac {k(k+1)}2\quad\blacksquare$$
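And a brute-force check of the identity for the first couple hundred values of $k$ (my addition):

```python
def alternating_square_sum(k):
    return sum((-1) ** (n - 1) * n * n for n in range(1, k + 1))

print(all(alternating_square_sum(k) == (-1) ** (k - 1) * k * (k + 1) // 2
          for k in range(1, 201)))   # True
```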
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 5 }
Poisson distribution problem about car accidents I have this Poisson distribution question, which I find slightly tricky, and I'll explain why. The number of car accidents in a city has a Poisson distribution. In March the number was $150$, in April $120$, in May $110$ and in June $120$. Eight days are chosen at random, not necessarily in the same month. What is the probability that the total number of accidents on the eight days will be $30$? What I thought to do is to say that during this period, the average number of accidents is $125$ a month, and therefore this is my $\lambda$. Then I wanted to go from a monthly rate to a daily rate, and here comes the trick. How many days are in a month? So I choose $30$, and then the rate for the eight days is $\frac{100}{3}$, and so the required probability is $0.06$. Am I making sense, or am I way off the direction in this one? Thank you!
I would say there are $500$ accidents in $31+30+31+30=122$ days, so $\lambda=500/122$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate the variance of random variable $X$ A random variable has the cumulative distribution function $$F(x)=\begin{cases} 0 & x<1\\\frac{x^2-2x+2}{2}&x\in[1,2)\\1&x\geq2\end{cases}.$$ Calculate the variance of $X$. First I differentiated the distribution function to get the density function, $f_X(x)=x-1$, for $x\in[1,2)$, and then I calculated $$E(X^2)-[E(X)]^2=\int_1^2x^2(x-1)dx-\bigg(\int_1^2x(x-1)dx\bigg)^2=\frac{13}{18}.$$ However, the correct answer is $\frac{5}{36}$. Why is that answer correct? I thought $var(X)=E(X^2)-[E(X)]^2$? Also, Merry Christmas!
Note that $X$ does not have a continuous distribution, for $F(x)$ approaches $0$ as $x$ approaches $1$ from the left, while $F(1)=1/2$. So there is a point mass of $1/2$ at $x=1$. This has to be taken into consideration both for the calculation of $E(X)$ and of $E(X^2)$. (Your expression for the variance in terms of $E(X)$ and $E(X^2)$ is correct.)
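Carrying the computation out with the point mass included (this worked computation is my addition): $$E(X)=\tfrac12\cdot 1+\int_1^2 x(x-1)\,dx=\tfrac12+\tfrac56=\tfrac43,\qquad E(X^2)=\tfrac12\cdot 1+\int_1^2 x^2(x-1)\,dx=\tfrac12+\tfrac{17}{12}=\tfrac{23}{12},$$ so $$\operatorname{var}(X)=E(X^2)-[E(X)]^2=\tfrac{23}{12}-\tfrac{16}{9}=\tfrac{5}{36},$$ matching the stated answer.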
{ "language": "en", "url": "https://math.stackexchange.com/questions/1588951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determine the ratio $k$ so that the sum of the series $\sum_{i=0}^{\infty} l_{i}$ is equal to the circumference of the outermost circle. Question: Consider an infinite series of concentric circles, were the radii $r_{0}, r_{1}, r_{2}, ...$ form a geometric series with the ratio $k$, $0 < k < 1$. From a point on the outermost circle, a tangent line is drawn to the circle just inside it, a tangent line from this circle to the one just inside it and so on. The length of the successive tangents are $l_{0}, l_{1}, l_{2}, ...$ (the image is a bit grainy, but $r_{0}$ is the radius of the outermost circle) Determine the ratio $k$ so that the sum of the series $$\sum_{i=0}^{\infty} l_{i}$$ is equal to the circumference of the outermost circle. Attemped answer: According to the last part of the question: $$\sum_{i=0}^{\infty} l_{i} = 2 \pi r_{0}$$ Since this is also a geometric series, it can be written as: $$\sum_{i=0}^{\infty} l_{i} = \sum_{i=0}^{\infty} l_{0} k^{i} = l_{0} \frac{1}{1-k}$$ since $l_{0}$ is the first term and $k$ is the ratio. Putting these two together gives: $$l_{0} \frac{1}{1-k} = 2 \pi r_{0}$$ However, this means that I have one equation and three unknowns. I suspect that there is something to be gotten from the fact that the circles are concentric and so this might yield a relationship between $l_{0}$ and $r_{0}$. For instance, if we form two congruent triangles where $l_{0}$ and $r_{0}$ are two of the sides for the first one, and $l_{1}$ and $r_{1}$ two of the sides for the second one and use Pythagoras theorem, it looks like we get: $$l_{0} = \sqrt{r_{0}^{2} - r_{1}^{2}}$$ ..but this just looks like we get another unknown into the mix instead. Looks like I am a bit stuck. What are some productive ways to drive this question home?
Without loss of generality we may assume that the outer circle has radius $1$. If $l_0$ is as in the OP, then $l_0^2=1-k^2$. So by scaling we have $l_1=k\sqrt{1-k^2}$, $l_2=k^2\sqrt{1-k^2}$, and so on. The sum of the geometric series is $\frac{\sqrt{1-k^2}}{1-k}$, that is, $\frac{\sqrt{1+k}}{\sqrt{1-k}}$. Set this equal to $2\pi$ and solve for $k$.
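For completeness (my addition): squaring $\sqrt{\frac{1+k}{1-k}}=2\pi$ gives $\frac{1+k}{1-k}=4\pi^2$, hence $$k=\frac{4\pi^2-1}{4\pi^2+1}\approx 0.9506.$$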
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove a function is in $L^2[0,1]$ If $f\in L^2[0,1]$, and $$g(x)=\int_0^1\frac{f(t)\mathrm dt}{|x-t|^{1/2}},\quad x\in[0,1],$$ show that $\|g\|_2\le2\sqrt2\|f\|_2$. I tried Minkowski's integral inequality (with $p=1/2$, so the inequality reverses), but cannot reach the inequality I need. I also used Holder's inequality and failed too. What is the correct approach to solve this problem?
Here is another answer that only uses basic calculus. From the expression of $g$, \begin{align} \|g\|_2^2 &=\int_0^1 g(x)^2 dx \cr & =\int_0^1\int_0^1 \int_0^1 \frac{f(t)}{|x-t|^{1/2}}\frac{f(s)}{|x-s|^{1/2}}dsdtdx \cr &\leq \int_0^1 \int_0^1 \int_0^1 \frac{f(t)^2+f(s)^2}{2}|x-t|^{-1/2}|x-s|^{-1/2}dsdtdx \cr &=\int_0^1 f(t)^2 \left[\int_0^1 |x-t|^{-1/2} \left(\int_0^1 |x-s|^{-1/2}ds\right)dx \right]dt. \end{align} Then we can get the desired inequality using the estimate $\int_0^1 |x-s|^{-1/2}ds=\int_0^1 |x-t|^{-1/2}dx\leq 2\sqrt{2}$. Comment: The inequality is actually true for any $L^p$ norm using the interpolation theorem in the other answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
If $A^2=A$ then prove that $\textrm{tr}(A)=\textrm{rank}(A)$. Let $A\not=I_n$ be an $n\times n$ matrix such that $A^2=A$ , where $I_n$ is the identity matrix of order $n$. Then prove that , (A) $\textrm{tr}(A)=\textrm{rank}(A)$. (B) $\textrm{rank}(A)+\textrm{rank}(I_n-A)=n$ I found by example that these hold, but I am unable to prove them.
The given condition $A^{2}=A$ says that the matrix is idempotent and so diagonalizable (with eigenvalues $0$ and $1$), and hence its rank is the same as the number of eigenvalues equal to $1$ (counted with repetition), which is the same as the trace of $A.$ For the second part, take $B=I_n-A$ and use $$\operatorname{rank}(A)+\operatorname{rank}(B)\geq \operatorname{rank}(A+B)=\operatorname{rank}(I_n)=n$$ together with Sylvester's inequality $\operatorname{rank}(A)+\operatorname{rank}(B)-n\leq \operatorname{rank}(AB)=\operatorname{rank}(0)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Find the number of distinct throws which can be thrown with $n$ six faced normal dice which are indistinguishable among themselves. Find the number of distinct throws which can be thrown with $n$ six faced normal dice which are indistinguishable among themselves. The total outcomes will be $6^n$. But this this has many cases repeated since the dice are indistinguishable among themselves.
Consider the six faces as six beggars and the $n$ identical dice as identical coins. Now, the number of distributions is $\binom{n+6-1}{6-1}=\binom{n+5}{5}$. If a beggar (say face $6$) gets no coin, this corresponds to that face not appearing on any of the dice.
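A brute-force check for small $n$ (my addition): count distinct multisets of faces directly and compare with the stars-and-bars formula.

```python
from math import comb
from itertools import product

def distinct_throws(n):
    return len({tuple(sorted(roll)) for roll in product(range(1, 7), repeat=n)})

print([distinct_throws(n) for n in range(1, 6)])   # [6, 21, 56, 126, 252]
print([comb(n + 5, 5) for n in range(1, 6)])       # same values
```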
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Markov Chain with two components I am trying to understand a question with the following Markov Chain: As can be seen, the chain consists of two components. If I start at state 1, I understand that the steady-state probability of being in state 3 for example is zero, because all states 1,2,3,4 are transient. But what I do not understand is that is it possible to consider the second component as a separate Markov chain? And would it be correct to say that the limiting probabilities of the second chain considered separately exist? For example, if I start at state 5, then can we say that the steady-state probabilities of any of the states in the right Markov chain exist and are positive?
Yes, you can. Actually this Markov chain is reducible, with two communicating classes (as you have correctly observed), * *$C_1=\{1,2,3,4\}$, which is not closed and therefore any stationary distribution assigns zero probability to it and *$C_2=\{5,6,7\}$ which is closed. As stated for example in this answer, Every stationary distribution of a Markov chain is concentrated on the closed communicating classes. In general the following holds Theorem: Every Markov Chain with a finite state space has a unique stationary distribution unless the chain has two or more closed communicating classes. Note: If there are two or more communicating classes but only one closed then the stationary distribution is unique and concentrated only on the closed class. So, here you can treat the second class as a separate chain but you do not need to. No matter where you start you can calculate the steady-state probabilities and they will be concentrated on the class $C_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$p$ is a positive integer and $(p)$ is a maximal ideal in the ring $(\mathbb Z, +,\cdot)$, then $p$ is a prime number I need to prove: if $p$ is a positive integer and $(p)$ is a maximal ideal in the ring $(\mathbb Z, +,\cdot)$, then $p$ is a prime number. My attempt: 1) $(p)$ is a maximal ideal, so it is a prime ideal. 2) Assume $p$ is not prime; then $p = ab$ with $a,b \in \mathbb Z$, $a>0$, $b>0$, $a \ne 1,p$. 3) Because $(p)$ is a prime ideal, either $b=z_1p$ or $a=z_2p$ for some positive integers $z_1,z_2$. 4) Assume that $b=z_1p$; then $1=az_1$, so $a= 1$ and $z_1 =1$, a contradiction. 5) Assume that $a=z_2p$; then $1=bz_2$, so $b= 1$ and $z_2 =1$, but then $a=p$, again a contradiction. Thus $p$ must be prime. Is it a correct proof?
Let $p$ be a composite number; then $p$ can be factored into two non-unit factors, say $x$ and $y$ (that is, $p = xy$ with $x, y \neq \pm 1$). Since $x$ is a non-unit, $\langle x \rangle \neq \textbf{Z}$, and clearly $\langle p \rangle \subseteq \langle x \rangle$. The inclusion is strict: if $\langle p \rangle = \langle x \rangle$, then $x = np = nxy$ for some $n$, so $y$ would be a unit. Hence $\langle p \rangle \subsetneq \langle x \rangle \subsetneq \textbf{Z}$, which shows that $\langle p \rangle$ is not a maximal ideal. (The remaining non-prime value $p=1$ is also excluded, since $\langle 1 \rangle = \textbf{Z}$ is not a proper ideal.)
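The containments in the proof can be made concrete for small composite numbers; the sketch below (illustrative only, with the ideals truncated to a finite window of integers) verifies $\langle p\rangle \subsetneq \langle x\rangle \subsetneq \textbf{Z}$ directly.

```python
# Concrete illustration for small composite p: with p = x*y and x a proper
# divisor, the ideal (x) sits strictly between (p) and Z, so (p) is not
# maximal.  The ideals are truncated to the window [-60, 60] for comparison.
def ideal(g, bound=60):
    return {k for k in range(-bound, bound + 1) if k % g == 0}

Z = set(range(-60, 61))
for p, x in [(6, 2), (15, 3), (49, 7)]:
    print(p, ideal(p) < ideal(x), ideal(x) < Z)   # True True: (p) ⊊ (x) ⊊ Z
```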
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Theorem that links limits of sequences and limits of functions I have a doubt about the theorem that links limits of functions with limits of sequences: $$\lim_{x\to c}f(x)=l \iff \text{for every sequence } \big\{x_n\big\}_{n\in \mathbb{N}} \text{ with } x_n\in \operatorname{dom}(f),\ x_n\neq c,\ \lim_{n\to \infty} x_n = c,$$ it holds that $\lim_{n\to \infty} f(x_n)=l$. Must the condition $x_n\neq c$ hold for every $x_n$, or does it just mean that the sequence cannot be the constant one (i.e. $x_n=c$ for all $n$)? Thanks for your help
Yes, the first one: the condition must be true for every $x_n$. Of course this excludes in particular the constant sequence $x_n=c$. For example let $$f(x)=\begin{cases}0, & x=3 \\ x, & \text{ else} \end{cases}$$ Then the sequence $x_n=3$ for all $n$ converges (trivially) to $x_0=3$, so $$\lim_{n\to \infty}f(x_n)=\lim_{n\to \infty}f(3)=0.$$ But $$\lim_{x\to 3} f(x)=\lim_{x\to 3}x=3 \neq 0.$$ So the limit of $f$ as $x$ approaches $3$ exists and is equal to $3$, which is different from $f(3)=0$; but if you only look at the constant sequence $x_n=3$ you cannot see this. * *The previous example is the constant sequence $x_n=c$ for every $n\in \mathbb N$. But consider also the sequence $$x_n=\begin{cases}3, & n\text{ even}\\3+\frac1n, & n\text{ odd}\end{cases}$$ This is a non-constant sequence with $x_n\to 3$ as $n\to \infty$, but it exhibits the same problem: infinitely many of its terms equal $3$, so $f(x_n)$ does not converge to $3$.
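A tiny numerical replay of the example (the names are my own): the constant sequence $x_n=3$ produces values $f(x_n)$ converging to $0$, while a sequence approaching $3$ without ever equalling it produces values converging to $3$, the true limit of $f$ at $3$.

```python
# Replaying the example: f(x_n) for the constant sequence x_n = 3 versus a
# sequence that approaches 3 without ever equalling it.
def f(x):
    return 0.0 if x == 3 else x

const_seq    = [3 for n in range(1, 11)]
nonconst_seq = [3 + 1.0 / n for n in range(1, 11)]

print([f(x) for x in const_seq])      # all 0.0
print([f(x) for x in nonconst_seq])   # values tending to 3
```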
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Remainder of the numerator of a harmonic sum modulo 13 Let $a$ be the integer determined by $$\frac{1}{1}+\frac{1}{2}+...+\frac{1}{23}=\frac{a}{23!}.$$ Determine the remainder of $a$ when divided by 13. Can anyone help me with this, or just give me any hint?
By Wilson's theorem, $12!\equiv -1\pmod{13}$. By Wolstenholme's theorem, $H_{12}\equiv 0\pmod{13}$. Since: $$ a = \sum_{k=1}^{12}\frac{23!}{k} + (12!)(14\cdot 15\cdot\ldots\cdot 23)+\sum_{k=14}^{23}\frac{23!}{k} $$ we have: $$ a\equiv 0-10!+0 \equiv \frac{-12!}{11\cdot 12}\equiv \frac{1}{(-2)\cdot(-1)}\equiv\frac{1}{2}\equiv\color{red}{7}\pmod{13}$$ since $11\equiv -2\pmod{13}$ and $12\equiv -1\pmod{13}$.
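The congruence can be double-checked with exact integer arithmetic; the short computation below simply builds $a=\sum_{k=1}^{23}23!/k$ and reduces it mod $13$.

```python
# Exact check with integer arithmetic: a = sum_{k=1}^{23} 23!/k, then a mod 13.
from math import factorial

a = sum(factorial(23) // k for k in range(1, 24))
print(a % 13)   # 7
```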
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
$x'=x^2$ unstable solutions when $x(0)\geq 0$ but asymptotically stable when $x(0)\leq0$ How do I show that the differential equation $x'=x^2$ has unstable solutions when $x(0)\geq 0$ but asymptotically stable solutions when $x(0)\leq0$? Usually, I look at the eigenvalues of the matrix to determine stability of solutions, but since there is no matrix here, how do I approach this? Edit: the solutions are $x(t)=\frac{1}{c-t}$, but what can I do with this?
Suppose $x(0) = k$ with $k<0$. Then the solution is $$x(t) = \frac 1{1/k - t } = \frac{k}{1-kt}, \qquad 1/k < t < \infty.$$ The graph is monotone increasing on this domain and $\lim_{t \to \infty} x(t) = 0$. This solution approaches zero, but not exponentially; it approaches zero like $\frac1t$. I don't think the solution is asymptotically stable, just stable.
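To see the algebraic (rather than exponential) approach to zero numerically, one can integrate the equation for a negative initial value and compare with the exact solution; the sketch below uses the arbitrary choice $x(0)=-2$.

```python
# Integrate x' = x^2 numerically for x(0) = -2 and compare with the exact
# solution x(t) = 1/(1/k - t) = -1/(t + 1/2): the decay to 0 is like 1/t.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: x**2, (0, 1000), [-2.0],
                t_eval=[1, 10, 100, 1000], rtol=1e-10, atol=1e-12)
for t, x in zip(sol.t, sol.y[0]):
    print(t, x, -1.0 / (t + 0.5))     # numeric and exact values agree
```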
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compare 2 algorithms by statistics Let's suppose we have two different processes, each generating some amount of money $M$ every second, with $$0 \leq M \leq 1000.$$ We run each process for $50\%$ of the available time. The question is how to compare the productivity (in money per second) of the two processes when there is no information about the "random noise" in each process. The only information we have about each process is a log file: what $M$ was generated at every second.
You have two samples - one from each method - with equal sample sizes (and big enough, I assume), and you want to see which method generates statistically better results. This is a standard methodology: a confidence interval for the difference of means, or a hypothesis test, again for the difference of means. Of course the conclusion will hold at some significance level. The mean and the variances will be calculated from the samples, so do not worry about this "noise": the sample variance will take care of it.
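A possible concrete implementation (with simulated data standing in for the two log files, since the real logs are not given) is Welch's two-sample $t$-test together with a normal-approximation confidence interval for the difference of mean money per second:

```python
# Welch's two-sample t-test plus a normal-approximation 95% CI for the
# difference of means.  The two arrays below are simulated stand-ins for the
# real per-second log files.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_a = rng.uniform(0, 1000, size=5000)
log_b = rng.uniform(0, 1000, size=5000) + 10      # process B slightly better here

t_stat, p_value = stats.ttest_ind(log_a, log_b, equal_var=False)
diff = log_b.mean() - log_a.mean()
se = np.sqrt(log_a.var(ddof=1) / len(log_a) + log_b.var(ddof=1) / len(log_b))
print(p_value, diff, (diff - 1.96 * se, diff + 1.96 * se))
```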
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the derivative $dy/dx$ from the parametric equations for $x$ and $y$ Let \begin{cases} y=2t^2-t+1 \\ x=\sin(t) \end{cases} find $\frac{dy}{dx}$ Is this all that I need to do? $$\frac{4t-1}{\cos(t)}$$
As others have said, $$ \frac{dy}{dt} = 4t-1 \quad\mbox{and}\quad \frac{dx}{dt} = \cos t \implies \frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{4t-1}{\cos t}. $$ But now we must eliminate the parameter $t$, because we'd like to have $y$ as a function of $x$ in writing $dy/dx$. For the numerator of $dy/dx$, we use $$ x = \sin t \implies t = \arcsin x. $$ For the denominator of $dy/dx$, we square $x = \sin t$ to get $$ x^2 = \sin^2 t = 1 - \cos^2 t \implies \cos^2 t = 1 - x^2 \implies \cos t = \pm \sqrt{1 - x^2}. $$ To get rid of the $\pm$, you should specify the range of allowed values for $t$, and hence for $x$, so that $t = \arcsin x$ makes sense. For example, restricting $t \in \left[-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right]$ results in $x \in [-1,1]$ and we should take $\cos t = + \sqrt{1 - x^2}$, yielding $$ \frac{dy}{dx} = \frac{4 \arcsin(x) - 1}{\sqrt{1 - x^2}}. $$
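A quick numeric cross-check with SymPy (my own sketch): evaluate $(4t-1)/\cos t$ and $(4\arcsin x-1)/\sqrt{1-x^2}$ at matching points $x=\sin t$ for a few $t\in(-\pi/2,\pi/2)$ and confirm they agree.

```python
# Cross-check: (4t - 1)/cos(t) from the parametric form versus
# (4*asin(x) - 1)/sqrt(1 - x^2) with x = sin(t), for t in (-pi/2, pi/2).
import sympy as sp

t, x = sp.symbols('t x')
y_t, x_t = 2*t**2 - t + 1, sp.sin(t)

dydx_param = sp.diff(y_t, t) / sp.diff(x_t, t)           # (4t - 1)/cos(t)
dydx_in_x  = (4*sp.asin(x) - 1) / sp.sqrt(1 - x**2)

for tv in [-1.2, -0.3, 0.4, 1.1]:
    print(sp.N(dydx_param.subs(t, tv)), sp.N(dydx_in_x.subs(x, sp.sin(tv))))
```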
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Calculate the sum: $\sum_{x=2}^\infty (x^2 \operatorname{arcoth}(x) \operatorname{arccot} (x) -1)$ $$\color{green}{\sum_{x=2}^\infty (x^2 \operatorname{arcoth} (x) \operatorname{arccot} (x) -1)}$$ This is an impressive sum that has bothered me for a while. Here are the major points behind the sum... Believing in a closed form: I believe that this has a closed form because a very similar sum $$\sum_{x=1}^\infty (x\operatorname{arccot}(x)-1)$$ also has a closed form. (Link to that sum: Calculate the following infinite sum in a closed form $\sum_{n=1}^\infty(n\ \text{arccot}\ n-1)$?) I think that the asked question has a closed form because of the fact that $\operatorname{arccot} x \lt \frac{1}{x} \lt \operatorname{arcoth} x$ for $x> 1$. Attempts: I put this summation into W|A, which returned nothing. I have also tried to use the proven identity above (the one from the link), but to no avail. I have made very little progress on the sum for that reason. Thank you very much. Note that there is no real use for this sum, but I am just curious, as it looks cool.
A possible approach may be the following one, exploiting the inverse Laplace transform. We have: $$ n^2\text{arccot}(n)\text{arccoth}(n)-1 = \iint_{(0,+\infty)^2}\left(\frac{\sin s}{s}\cdot\frac{\sinh t}{t}-1\right) n^2 e^{-n(s+t)}\,ds\,dt \tag{1}$$ hence: $$ \sum_{n\geq 2}\left(n^2\text{arccot}(n)\text{arccoth}(n)-1\right)=\iint_{(0,+\infty)^2}\frac{e^{s+t} \left(1+e^{s+t}\right) (s t-\sin(s)\sinh(t))}{\left(1-e^{s+t}\right)^3 s t}\,ds\,dt\tag{2}$$ but the last integral does not look so friendly. Another chance is given by: $$ n^2\text{arccot}(n)\text{arccoth}(n)-1 = \iint_{(0,1)^2}\left(\frac{n^2}{n^2+x^2}\cdot\frac{n^2}{n^2-y^2}-1\right)\,dx\,dy \tag{3}$$ that leads to: $$S=\sum_{n\geq 2}\left(n^2\text{arccot}(n)\text{arccoth}(n)-1\right)=\\=\frac{3}{2}-\iint_{(0,1)^2}\left(\frac{\pi y^3\cot(\pi y)+\pi x^3\coth(\pi x)}{2(x^2+y^2)}+\frac{1}{(1+x^2)(1-y^2)}\right)\,dx\,dy \tag{4}$$ Not that appealing, but probably it can be tackled through: $$ \cot(\pi z) = \frac{1}{\pi}\sum_{n\in\mathbb{Z}}\frac{z}{z^2-n^2},\qquad \coth(\pi z) = \frac{1}{\pi}\sum_{n\in\mathbb{Z}}\frac{z}{z^2+n^2}\tag{5} $$ that come from the logarithmic derivative of the Weierstrass product for the sine function. That expansions can be used to derive the Taylor series of $\pi z\cot(\pi z)$ and $\pi z\coth(\pi z)$, namely: $$ \pi z \cot(\pi z) = 1-2\sum_{n\geq 1}z^{2n}\zeta(2n),\quad \pi z \coth(\pi z) = 1-2\sum_{n\geq 1}(-1)^n z^{2n}\zeta(2n).\tag{6}$$ Since $\frac{1}{(1+x^2)(1-y^2)}=\frac{1}{x^2+y^2}\left(\frac{1}{1-y^2}-\frac{1}{1+x^2}\right)$, we also have: $$ S = \iint_{(0,1)^2}\frac{dx\,dy}{x^2+y^2}\left(\sum_{r\geq 1}(\zeta(2r)-1)y^{2r+2}-\sum_{r\geq 1}(-1)^r(\zeta(2r)-1)x^{2r+2}\right)\tag{7} $$ By symmetry, the contributes given by even values of $r$ vanish. That gives: $$\begin{eqnarray*} S &=& \sum_{m\geq 1}\left(\zeta(4m-2)-1\right)\iint_{(0,1)^2}\frac{x^{4m}+y^{4m}}{x^2+y^2}\,dx\,dy\\&=&\sum_{m\geq 1}\frac{\zeta(4m-2)-1}{2m}\int_{0}^{1}\frac{1+u^{4m}}{1+u^2}\,du\\&=&\sum_{m\geq 1}\frac{\zeta(4m-2)-1}{2m}\left(\frac{\pi}{2}+\int_{0}^{1}\frac{u^{4m}-1}{u^2-1}\,du\right).\tag{8}\end{eqnarray*} $$ Thanks to Mathematica, we have: $$ \begin{eqnarray*}\sum_{m\geq 1}\frac{\zeta(4m-2)-1}{4m}&=&\int_{0}^{1}\frac{x^2 \left(4 x+\pi \left(1-x^4\right) \cot(\pi x)-\pi \left(1-x^4\right) \coth(\pi x)\right)}{4 \left(-1+x^4\right)}\,dx\\&=&-\frac{1}{24 \pi ^2}\left(10 \pi ^3+6 \pi ^2 \log\left(\frac{\pi}{4} (\coth\pi-1)\right)+6 \pi \cdot\text{Li}_2(e^{-2\pi})+3\cdot \text{Li}_3(e^{-2\pi})-3\cdot\zeta(3)\right)\end{eqnarray*} $$ that is an expected generalization of the linked question. Now we just have to deal with the missing piece.
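Whatever closed form one conjectures, it helps to have a high-precision numerical target. Since the summand behaves like $\tfrac{13}{45n^4}$ for large $n$, the series converges quickly; the sketch below is only a numerical check, not part of the derivation above.

```python
# High-precision numerical value of the sum, as a target for any conjectured
# closed form.  mpmath's nsum extrapolates the tail of the series.
from mpmath import mp, acot, acoth, nsum, inf

mp.dps = 30
print(nsum(lambda n: n**2 * acot(n) * acoth(n) - 1, [2, inf]))
```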
{ "language": "en", "url": "https://math.stackexchange.com/questions/1589913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How to calculate the exact values of c and d Sketch the function $f(x)= x^3-4x^2-13x+10$ in the Cartesian plane. If the exact length of that curve from the point $P(5,-30)$ to the point $Q(c,d)$ is $18$ (with $c>5$), I have no idea how to calculate the values of $c$ and $d$ using the arc-length element $\sqrt{(dx)^2+(dy)^2}$. How can I approximate the values of $c$ and $d$? What are the exact values of $c$ and $d$?
The formula for the arc length from $(a, f(a))$ to $(b, f(b))$ is $$L = \int_a^b \sqrt{1 + \left(\frac{d f(x)}{dx}\right)^2}\,dx.$$ So the arc length from $(5,-30)$ to $(c, d=f(c))$ is $$L = 18 = \int_5^c \sqrt{1 + \left(\frac{d}{dx}\big(x^3-4x^2-13x+10\big)\right)^2}\,dx = \int_5^c \sqrt{1+(3x^2-8x-13)^2}\,dx.$$ Solve for $c$, and then $d = f(c)$.
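The integral $\int_5^c\sqrt{1+(3x^2-8x-13)^2}\,dx$ has no elementary antiderivative, so in practice $c$ is found numerically; the sketch below (my own, reading the problem as measuring the length from $x=5$ towards larger $x$) combines quadrature with root-finding.

```python
# Numerical solution: find c > 5 with arc length 18, then d = f(c).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

f  = lambda x: x**3 - 4*x**2 - 13*x + 10
df = lambda x: 3*x**2 - 8*x - 13

arc = lambda c: quad(lambda x: np.sqrt(1 + df(x)**2), 5, c)[0]

c = brentq(lambda c: arc(c) - 18, 5, 6)   # the root is bracketed in [5, 6]
print(c, f(c))                            # c and d = f(c)
```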
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convex Set with Empty Interior Lies in an Affine Set In Section 2.5.2 of the book Convex Optimization by Boyd and Vandenberghe, the authors mentioned without proving that "a convex set in $\mathbb{R}^n$ with empty interior must lie in an affine set of dimension less than $n$." While I can intuitively understand this result, I was wondering how it can be proved formally?
Let $d+1$ be the largest number of affinely independent points in $C$, and let $x_0$, $\ldots$, $x_d$ be one such affinely independent subset of largest size. Note that every other point of $C$ is an affine combination of the points $x_k$, so it lies in the affine subspace generated by them, which has dimension $d$. If $d < n$, then this subspace is contained in an affine hyperplane. If $d=n$, then $C$ contains $n+1$ affinely independent points. Since $C$ is convex, it also contains the convex hull of those $n+1$ points. Now, in an $n$-dimensional space the convex hull of $n+1$ affinely independent points has non-empty interior. So the interior of $C$ is non-empty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }