Why do we eventually end up with $0$ in Euclidean Algorithm? I'm new to number theory. I just understood the proof of the Euclidean Algorithm and how it cleverly uses the fact that $\mathrm{gcd}(a,b) = \mathrm{gcd}(b,r)$ repeatedly, where $a$ is the dividend, $b$ is the divisor and $r$ is the remainder. One thing I still don't understand, though: why do we always eventually end up with a zero? What's the logic behind this?
Assuming $a$ and $b$ are positive and $a>b$, by definition, $r$ is less than $b$. Then $b<a$ and $r<b$, so you have smaller numbers than you started with. If $a<b$, then just switch $a$ and $b$. If $a=b$, then $\gcd(a,\,b)=a=b$. Since the remainders are non-negative integers (also by definition) and strictly decreasing, the process must terminate at zero: a strictly decreasing sequence of non-negative integers cannot continue forever.
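To see the mechanism concretely, here is a small Python sketch (the function name is my own invention) that records the successive remainders; each one is strictly smaller than the previous divisor, so the sequence is forced down to $0$:

```python
def euclid_remainders(a, b):
    """Return the sequence of remainders produced by the Euclidean algorithm."""
    remainders = []
    while b != 0:
        a, b = b, a % b  # gcd(a, b) = gcd(b, a mod b), and a % b < b
        remainders.append(b)
    return remainders

# Each remainder is a non-negative integer strictly smaller than the
# previous divisor, so the sequence must hit 0 after finitely many steps.
print(euclid_remainders(252, 198))  # [54, 36, 18, 0]
```

The last nonzero entry ($18$ here) is the gcd.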
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 0 }
Prove or disprove $f(n) = O(f(2n))$ I wonder how to to prove or disprove that $f(n) = O(f(2n))$ I have tried many function, and think it is right, but still don't have any idea how to prove. Could anyone give me a hint about it?
If \begin{equation} f(n)=\begin{cases}1 & n\text{ even,}\\ n& n\text{ odd,}\end{cases} \end{equation} then, since $2n$ is always even, $f(2n)=1$ for every $n$, so \begin{equation} \mathcal{O}(f(2n))=\mathcal{O}(1)\text{,} \end{equation} but $f(n)$ is not asymptotically constant (it equals $n$ on the odd integers).
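The counterexample is easy to probe numerically; a quick sketch (not part of the proof itself):

```python
def f(n):
    return 1 if n % 2 == 0 else n

# 2n is always even, so f(2n) is identically 1, while f(n) = n on odd n,
# making the ratio f(n)/f(2n) unbounded:
assert all(f(2 * n) == 1 for n in range(1, 100))
ratios = [f(n) // f(2 * n) for n in range(1, 10, 2)]
print(ratios)  # [1, 3, 5, 7, 9]
```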
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of having a $4$-digit PIN number with strictly increasing digits What is the probability of having a PIN number (digits $0$-$9$, starting with consecutive zeros allowed) with strictly increasing digits? We easily deduce that, if $a_1, a_2, a_3, a_4$ are the respective digits, then $a_1<7, a_2<8$ and $a_3<9$. Moreover, I calculated that there are $7$ choices for $a_1$, ($7-a_1$) choices for $a_2$, ($8-a_2$) choices for $a_3$ and ($9-a_3$) choices for $a_4$ but I don't know how to proceed to the counting with all these variables. Finally, I know that this is ordered sampling without repetition but this doesn't seem to help. Thanks in advance!
From the set $\{0, 1, \cdots, 9\}$, choose any subset of $4$ numbers. Each such subset is in 1-to-1 correspondence with a 4-digit PIN with strictly increasing digits: just write its elements in increasing order.
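The bijection can be checked by brute force; assuming all $10^4$ PINs are equally likely, the probability is $\binom{10}{4}/10^4$:

```python
from itertools import product
from math import comb

increasing = [p for p in product(range(10), repeat=4)
              if p[0] < p[1] < p[2] < p[3]]

# One strictly increasing PIN per 4-element subset of {0, ..., 9}:
assert len(increasing) == comb(10, 4) == 210
print(len(increasing) / 10 ** 4)  # probability 0.021
```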
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Some notation for vector space $\mathbb{R}^\mathbb{R}$, $\mathbb{R}^{X}$, $C(X)$ I am reading some slides for functional analysis, and it mentioned that $\mathbb{R}^\mathbb{R}$, $\mathbb{R}^{X}$, and $C(X)$ are all vector spaces. Since the slides are so brief and it doesn't provide an further details. Is there anyone can provide me some definition of these notation? My guess is that $\mathbb{R}^\mathbb{R}$ is all the function defined on real numbers. $C(X)$ might be related to continuous functions.
For sets $X$ and $Y$, the set of maps $X \to Y$ is sometimes denoted $Y^X$. (Compare the power set $2^X$, which can be thought of as the set of maps $X \to \{0, 1\}$ by associating to a set $U\subset X$ its indicator function.) If $Y$ has some additional structure, then $Y^X$ generally does as well; in particular, if $Y$ is a vector space, then $Y^X$ has a vector space structure with operations $(f+g)(x) = f(x) + g(x)$ and $(\lambda f)(x) = \lambda f(x)$. (Note that we're not taking $Y^X$ to be the space of linear maps $X \to Y$, even if $X$ is also a vector space.) In functional analysis, you probably want to restrict $Y^X$ to the space of continuous maps $X \to Y$; arbitrary functions aren't very interesting. For a topological space $X$, the space of continuous maps $X \to \mathbb{R}$ is sometimes denoted by $C(X)$ or $C(X, \mathbb{R})$; the complex case is analogous. It's also common to write $C_c(X)$ (the notation $C_0(X)$ also appears, though many authors reserve it for functions vanishing at infinity) for compactly supported functions $X \to \mathbb{R}$ or $X \to \mathbb{C}$, along with $C^p(X), C^\infty(X),$ and $C^\omega(X)$ for elements of $C(X)$ that have $p$ continuous derivatives, are smooth, and are analytic, respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number repositioning You are given $2^n$ numbers, in one step you move the numbers in odd positions to the beginning of the list and the numbers in even positions to the end of the list, keeping the initial order among them. Prove that after $n$ such steps, you will get the initial list. How do I do this? Someone told me to look at positions of these numbers in binary, but I still don't understand how that helps yet...
Index the $2^n$ list positions of elements as $\underbrace{00\ldots00}_n, \underbrace{00\ldots01}_n, \ldots,\underbrace{11\ldots10}_n, \underbrace{11\ldots11}_n$. (These are indices given to the $2^n$ positions, not to elements that may move around the list) In each reposition / shuffle step, for the element at position $b_{n-1}b_{n-2}\ldots b_1b_0$, its destination position is at $b_0b_{n-1}\ldots b_2b_1$, i.e. a cyclic right shift of bits. This is because $b_0$ determines whether the destination is in the beginning ($0$) or ending ($1$) half, and $b_{n-1}b_{n-2}\ldots b_1$ determines the position inside that half. After $n$ cyclic right shifts of bits, the element initially at $b_{n-1}b_{n-2}\ldots b_1b_0$ moves through positions $$\begin{align*} b_{n-1}b_{n-2}\ldots b_1b_0 &\mapsto b_0b_{n-1}\ldots b_2b_1\\ &\mapsto b_1b_0\ldots b_3b_2\\ &\vdots\\ &\mapsto b_{n-2}b_{n-3}\ldots b_0b_{n-1}\\ &\mapsto b_{n-1}b_{n-2}\ldots b_1b_0\\ \end{align*}$$ and goes back to its initial position.
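Here is a short simulation of the step (taking "odd positions" to mean the 1st, 3rd, 5th, ... elements), confirming that $n$ steps restore a list of $2^n$ elements:

```python
def step(lst):
    # Elements in odd positions (1st, 3rd, ..., counting from 1) move to
    # the front; elements in even positions move to the back, in order.
    return lst[0::2] + lst[1::2]

for n in range(1, 8):
    original = list(range(2 ** n))
    current = original
    for _ in range(n):
        current = step(current)
    assert current == original  # back to the initial list after n steps
print("n steps restore the list for n = 1..7")
```

With $0$-indexed positions, `step` sends position $b_{n-1}\ldots b_1 b_0$ to $b_0 b_{n-1}\ldots b_1$, exactly the cyclic right shift described above.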
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Total waiting time of exponential distribution is less than the sum of each waiting time, how so? I am reading my textbook and find a weird phenomenon. The example says that Anne and Betty enter a beauty parlor simultaneously. Anne to get a manicure and Betty to get a haircut. Suppose the time for a manicure (haircut) is exponentially distributed with mean 20 (30) minutes. I wanna calculate the expected time until the first customer completes service. Calculation in the text: Since the total service rate is $1/30+1/20=5/60$, the time until the first customer completes service should be exponential with mean $60/5=12$ minutes. This sounds unreasonable, because ideally the expected waiting time until the first customer is done should be between the average waiting time of each. Namely, I think the correct answer should be between 20 minutes and 30 minutes. How can we expect the first customer to be done in 12 minutes when each has to take at least 20 minutes on average?
This was an interesting problem... I was also initially surprised by the answer of $12$ minutes. The solution given by the text is very clever and concise, but because I have nothing better to do I also solved it using the longer way. Let $X$ be the time to get a manicure, $X \sim Exp(\lambda_1 = \frac{1}{20})$ Let $Y$ be the time to get a hair cut, $Y \sim Exp(\lambda_2 = \frac{1}{30})$ To make life easier I will assume $X$ and $Y$ are independent so the time to get a hair cut is independent of the time to get a manicure. We need to get $E[\min(X,Y)]$ and I'll approach this by conditioning on $Y < X$, so we have $$E[\min(X,Y)] = E[\min(X,Y) \mid Y < X] P(Y < X) + E[\min(X,Y) \mid X < Y]P(X < Y)$$ $$= E[Y \mid Y < X] P(Y < X) + E[X \mid X < Y]P(X < Y) \\$$ The expectation of a RV conditional on an event is defined as follows $$E[Y \mid Y < X] = \frac{E[Y \mathbf{1}_{Y<X}]}{P(Y<X)}$$ Therefore we are trying to find $$E[\min(X,Y)] = E[Y \mathbf{1}_{Y<X}] + E[X \mathbf{1}_{X<Y}]$$ Let's evaluate the first term $$ \begin{align} E[Y \mathbf{1}_{Y<X}] & = \int_{0}^\infty \int_{0}^x y\lambda_1 e^{-\lambda_1x} \lambda_2 e^{-\lambda_2y} \, dy \, dx \\ & = \lambda_1 \lambda_2 \int_{0}^\infty e^{-\lambda_1x} \int_{0}^x y e^{-\lambda_2y} \, dy \, dx \\ \end{align} $$ Let's use integration by parts to figure out the inner integral. 
$$ \begin{matrix} u = y & dv = e^{-\lambda_2y} \, dy \\ du = dy & v = \frac{-1}{\lambda_2} e^{-\lambda_2y}\\ \end{matrix} $$ So integration by parts yields $$\int_{0}^x y e^{-\lambda_2y} \, dy = \left[ \frac{-y}{\lambda_2} e^{-\lambda_2y} \right]_{y=0}^{y=x} + \frac{1}{\lambda_2} \int_0^x e^{-\lambda_2y} \, dy$$ $$=\frac{-x}{\lambda_2} e^{-\lambda_2x} - \frac{1}{\lambda_2^2} \left[e^{-\lambda_2y} \right]_{y=0}^{y=x}$$ $$=\frac{-x}{\lambda_2} e^{-\lambda_2x} - \frac{1}{\lambda_2^2}(e^{-\lambda_2x} - 1)$$ $$=\frac{-x}{\lambda_2} e^{-\lambda_2x} - \frac{1}{\lambda_2^2}e^{-\lambda_2x} + \frac{1}{\lambda_2^2}$$ Plugging back in we have $$ \begin{align} & = \lambda_1 \lambda_2 \int_{0}^\infty e^{-\lambda_1x} \left( \frac{-x}{\lambda_2} e^{-\lambda_2x} - \frac{1}{\lambda_2^2}e^{-\lambda_2x} + \frac{1}{\lambda_2^2} \right) \, dx \\ & = \lambda_1 \lambda_2 \int_{0}^\infty \frac{-x}{\lambda_2} e^{-(\lambda_1 +\lambda_2)x} - \frac{1}{\lambda_2^2}e^{-(\lambda_1 +\lambda_2)x} + \frac{1}{\lambda_2^2} e^{-\lambda_1 x} \, dx \\ & = -\lambda_1 \int_{0}^\infty x e^{-(\lambda_1 +\lambda_2)x} + \frac{1}{\lambda_2}e^{-(\lambda_1 +\lambda_2)x} - \frac{1}{\lambda_2} e^{-\lambda_1 x} \, dx \\ & = -\lambda_1 \left( \frac{1}{(\lambda_1 + \lambda_2)^2} + \frac{1}{(\lambda_1 + \lambda_2)\lambda_2} - \frac{1}{\lambda_1 \lambda_2}\right) \\ & = -\frac{\lambda_1}{(\lambda_1 + \lambda_2)^2} - \frac{\lambda_1}{(\lambda_1 + \lambda_2)\lambda_2} + \frac{1}{\lambda_2} \\ \end{align} $$ Remember the ultimate goal is to figure out $E[\min(X,Y)] = E[Y \mathbf{1}_{Y<X}] + E[X \mathbf{1}_{X<Y}]$, and by symmetry we have $$E[\min(X,Y)] = -\frac{\lambda_1}{(\lambda_1 + \lambda_2)^2} - \frac{\lambda_1}{(\lambda_1 + \lambda_2)\lambda_2} + \frac{1}{\lambda_2} -\frac{\lambda_2}{(\lambda_1 + \lambda_2)^2} - \frac{\lambda_2}{(\lambda_1 + \lambda_2)\lambda_1} + \frac{1}{\lambda_1} $$ $$=\frac{\lambda_2 + \lambda_1}{(\lambda_1 + \lambda_2)^2} = \frac{1}{\lambda_1 + \lambda_2}$$ And then plugging in with $\lambda_1 
= 1/20$ and $\lambda_2 = 1/30$, we get $12$ minutes, which is our answer.
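A Monte Carlo sanity check of the $1/(\lambda_1+\lambda_2)=12$ answer (a sketch; the seed and sample size are arbitrary choices of mine):

```python
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    manicure = random.expovariate(1 / 20)  # exponential with mean 20 minutes
    haircut = random.expovariate(1 / 30)   # exponential with mean 30 minutes
    total += min(manicure, haircut)

estimate = total / N
print(round(estimate, 2))  # close to 1/(1/20 + 1/30) = 12
```

The intuitive point: the minimum of the two times is dragged below both means, because whichever service happens to finish early sets the value.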
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that $\frac{3}{5} + \frac{4}{5}i$ is not a root of unity I want to prove that $z=\frac{3}{5} + \frac{4}{5}i$ is not a root of unity, although its absolute value is 1. When transformed to the geometric representation: $$z=\cos{\left(\arctan{\frac{4}{3}}\right)} + i\sin{\left(\arctan{\frac{4}{3}}\right)}$$ According to De Moivre's theorem, we get: $$z^n= \cos{\left(n\arctan{\frac{4}{3}}\right)} + i\sin{\left(n\arctan{\frac{4}{3}}\right)}$$ Now, if for $n\in \mathbb{N}: z^n=1$, then the imaginary part of the expression above must be zero, therefore: $$\sin{\left(n\arctan{\frac{4}{3}}\right)}=0 \iff n\arctan{\frac{4}{3}} = k\pi, \ \ \ k \in \mathbb{N}$$ And we get that for $z$ to be a root of unity for some natural number $n$, $n$ must be in the form: $$n = \frac{k\pi}{\arctan{\frac{4}{3}}}, \ \ \ k \in \mathbb{N}$$ On the other hand, for $z^n=1$ it must be that: $$\cos{\left(n\arctan{\frac{4}{3}}\right)} = 1 \iff n\arctan{\frac{4}{3}} = 2l\pi, \ \ \ l \in \mathbb{N}$$ and thus $$n = \frac{2l\pi}{\arctan{\frac{4}{3}}}, \ \ \ l \in \mathbb{N}$$ By comparing those two forms of $n$, it must be the case that $k=2l$ and for $n$ to satisfy $z^n = 1$. What follows is that $n$ should be in the form $$n = \frac{2l\pi}{\arctan{\frac{4}{3}}}, \ \ \ l \in \mathbb{N}$$ But, at the same time, $n$ must be a natural number. Should I prove now that such $n$ cannot even be a rational number, let alone a natural one? Or how should I approach finishing this proof?
As is shown in this answer, Niven's theorem says that $\sin(\pi p/q)$ is rational only when $\sin(\pi p/q)\in\left\{-1,-\frac12,0,\frac12,1\right\}$. However, $\sin\left(\arg\left(\frac35+\frac45i\right)\right)=\frac45$, so we know that $\arg\left(\frac35+\frac45i\right)$ is not a rational multiple of $\pi$. Since every $n$-th root of unity has argument $2k\pi/n$, a rational multiple of $\pi$, it follows that $\frac35+\frac45i$ is not a root of unity.
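This is of course not a proof, but one can check numerically that $z^n$ stays well away from $1$:

```python
z = complex(3, 4) / 5
assert abs(abs(z) - 1) < 1e-12  # |z| = 1

# If z were an n-th root of unity for some n up to 2000, this minimum
# would be numerically zero; instead it stays bounded away from 0.
closest = min(abs(z ** n - 1) for n in range(1, 2001))
print(closest)
```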
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 4 }
Number of possible permutations of three pairs of socks, one blue, one black, and one white? There are 3 colours of socks (blue, black and white), each colour has one pair of socks (two socks), and we need to distribute them to six people; the left-leg socks are distinguishable from the right-leg socks. What are the possible ways of distribution? Approach: align the right-leg socks in a line in a random manner; the number of ways of distributing the left-leg socks for this one particular arrangement will be: $$\frac{6!}{(2!)^3}$$ But this is one way of distribution. What are the other possible ways? Does squaring the expression make any sense, where order matters?
There are three pairs of socks, one black, one blue, and one white. In how many ways can the socks be permuted if left socks are distinguishable from right socks? Since there are six different socks, they can be arranged in $6!$ orders. There are three pairs of socks, one black, one blue, and one white. In how many ways can the socks be permuted if left socks are indistinguishable from right socks? Choose two of the six positions for the black socks, two of the remaining positions for the blue socks, and fill the final two positions with white socks, which can be done in $$\binom{6}{2}\binom{4}{2}\binom{2}{2} = \frac{6!}{4!2!} \cdot \frac{4!}{2!2!} \cdot \frac{2!}{2!0!} = \frac{6!}{2!2!2!}$$ distinguishable ways. This is what you counted. There are three pairs of socks, one black, one blue, and one white. In how many ways can pairs of socks be permuted if left socks are indistinguishable from right socks? There are three pairs of socks, which are distinguishable by their colors. Hence, there are $3!$ possible arrangements. There are three pairs of socks, one black, one blue, and one white. In how many ways can pairs of socks be permuted if left socks are distinguishable from right socks? There are three pairs of socks, which are distinguishable by their colors. There are $3!$ ways to arrange the pairs. Each pair can be arranged in $2!$ ways. Hence, there are $3!(2!)^3$ possible arrangements.
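The first two counts can be confirmed by brute force (a small sketch; the sock labels are my own):

```python
from itertools import permutations
from math import factorial

socks = ['blackL', 'blackR', 'blueL', 'blueR', 'whiteL', 'whiteR']

# Left and right socks distinguishable: all 6 socks are distinct.
assert len(set(permutations(socks))) == factorial(6) == 720

# Left/right indistinguishable: only the colour sequence matters.
colours = [s[:-1] for s in socks]
assert len(set(permutations(colours))) == factorial(6) // 2 ** 3 == 90

print("720 and 90 match 6! and 6!/(2!)^3")
```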
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given $f:[0,+\infty)\to[1,+\infty)$, if $1/f\in\mathcal{R}[0,+\infty)$ then is $f$ not Lipschitz? I thought of proceeding by contrapositive. Assume $f(x)\ge1$ and $f'(x)\le K\in\Bbb R$ for all $x\ge0$. Then for any $t>0$ it follows that \begin{align} \int_0^tf'(x)\ dx=f(t)-f(0)\le Kt\end{align}i.e. $f(t)\le Kt+f(0).$ Both sides must be $\ge1$, thus positive, so $$\frac1{f(t)}\ge\frac{1}{Kt+f(0)}$$which means $\int_0^\infty\frac1f$ diverges. However I'm assuming $\ f'$ has a finite number of discontinuities, which is not guaranteed - derivatives can be almost everywhere discontinuous, albeit they must also be a.e. continuous, right? More precisely I'm using $f'\in\mathcal{R}[0,\infty)$, does the result hold if this isn't true? How to prove it or what is a counterexample?
You don't need to use derivatives. By the Lipschitz assumption, there exists a constant $K>0$ such that $|f(x) - f(y)| \leq K |x-y|$ for every $x,y$. In particular, $$ f(t) - f(0) \leq K t \qquad \forall t\geq 0, $$ and then you can proceed as in your proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2954933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\lim\limits_{x \to \pm\infty}\dfrac{x^3+1}{x^2+1}=\infty$ by the definition. Problem Prove $\lim\limits_{x \to \pm\infty}\dfrac{x^3+1}{x^2+1}=\infty$ by the definition. Note: The problem asks us to prove that, no matter $x \to +\infty$ or $x \to -\infty$, the limit is $\infty$,which may be $+\infty$ or $-\infty.$ Proof $\forall M>0$,$\exists X=\max(1,M+1)>0, \forall|x|>X$: \begin{align*} \left|\frac{x^3+1}{x^2+1}\right|&=\left|x-\frac{x-1}{x^2+1}\right|\\&\geq |x|-\left|\frac{x-1}{x^2+1}\right|\\&\geq |x|-\frac{|x|+1}{x^2+1}\\&\geq |x|-\frac{x^2+1}{x^2+1}\\&=|x|-1\\&>X-1\\&\geq M. \end{align*} Please verify the proof above.
Possibly correct, but unreadable. Consider $$ \frac{x^3+1}{x^2+1}=x-\frac{x-1}{x^2+1}>x-1 $$ whenever $x>1$, since there $0<\frac{x-1}{x^2+1}<1$. Similarly, $\frac{x^3+1}{x^2+1}\le x+1$ for $x\le-1$, so the quotient tends to $+\infty$ as $x\to+\infty$ and to $-\infty$ as $x\to-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
My question is about independent events in the following problem, for which we are told to draw a tree diagram. How do I do this? Suppose identical tags are placed on both the left ear and the right ear of a fox. The fox is then let loose for a period of time. Consider the two events $C_1 = \{\text{left ear tag is lost}\}$ and $C_2 = \{\text{right ear tag is lost}\}$. Let $\pi = P(C_1) = P(C_2)$, and assume $C_1$ and $C_2$ are independent events. Derive an expression (involving $\pi$) for the probability that exactly one tag is lost given that at most one is lost.
Let $A$ be the event where exactly one tag is lost and $B$ be the event where at most one tag is lost. Then we are looking for $$ P(A\mid B). $$ By the definition of conditional probability, $$ P(A\mid B) = \frac{P(A\cap B)}{P(B)}. $$ If exactly one tag is lost, then it's true that at most one tag is lost, so $A\subset B$. Hence $P(A\cap B) = P(A)$. Hence we have to compute $$ P(A\mid B) = \frac{P(A)}{P(B)}. $$ The probability $P(A)$ that exactly one event of $C_1,C_2$ happens is given by \begin{align*} P(A) &= P(C_1) + P(C_2) - 2\cdot P(C_1\cap C_2) \\ &= \pi + \pi - 2\pi^2 = 2\pi(1-\pi). \end{align*} We used the identity $P(C_1\cap C_2) = P(C_1)\cdot P(C_2)$ because the events $C_1,C_2$ are independent. The probability $P(B)$ that at most one tag is lost is $1-P(B^c)$, where $P(B^c)$ is the probability $P(C_1\cap C_2)$ that both tags are lost. Hence, $$ P(B) = 1 - P(B^c) = 1 - P(C_1\cap C_2) = 1 - \pi^2 = (1-\pi)(1+\pi). $$ Putting it all together, $$ P(A\mid B) = \frac{2\pi(1-\pi)}{(1-\pi)(1+\pi)} = \color{blue}{\frac{2\pi}{1+\pi}}. $$
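A simulation sketch of the result at a sample value $\pi = 0.3$ (seed and sample size are arbitrary choices):

```python
import random

random.seed(1)
p = 0.3  # sample value for pi: each tag is lost independently with prob. p
exactly_one = at_most_one = 0
for _ in range(200_000):
    lost = (random.random() < p) + (random.random() < p)  # tags lost: 0, 1, 2
    if lost <= 1:
        at_most_one += 1
        if lost == 1:
            exactly_one += 1

ratio = exactly_one / at_most_one
print(round(ratio, 3))  # formula predicts 2p/(1+p) = 0.4615...
```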
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the determinant of this linear map? Consider the map $f:M_{n}(K)\to M_n(K)$ $$f(A) = A^t,A\in M_n(K).$$ What is the determinant of this map? After working on some examples I recognized that the matric associated with this map is a permutation matrix and so its determinant is the sign of the permutation. Excluding the matrices $E_{ii}$ we get that the number of transpositions is $n(n-1)/2$ thus the answer is $$(-1)^{n(n-1)/2}.$$ Is this answer correct?
Yes, it is correct. Alternatively, since $f(H)=H$ for every symmetric matrix $H$ and $f(S)=-S$ for every skew-symmetric matrix $S$, the matrix space $M_n(K)$ is the direct sum of two eigenspaces of $f$, one of dimension $n(n+1)/2$ for the eigenvalue $1$ and the other of dimension $n(n-1)/2$ for the eigenvalue $-1$. Hence $\det f=(-1)^{n(n-1)/2}$.
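One can also confirm the sign directly: transposition permutes the $n^2$ entry positions, and the determinant of a permutation matrix is the sign of the permutation, computable from its cycle lengths (a small sketch):

```python
def transpose_sign(n):
    # The map A -> A^T sends entry position (i, j) to (j, i); flatten
    # positions as i*n + j and compute the sign of that permutation.
    perm = [j * n + i for i in range(n) for j in range(n)]
    sign, seen = 1, [False] * len(perm)
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, k = 0, start
        while not seen[k]:
            seen[k] = True
            k = perm[k]
            length += 1
        sign *= (-1) ** (length - 1)  # a cycle of length L has sign (-1)^(L-1)
    return sign

for n in range(1, 8):
    assert transpose_sign(n) == (-1) ** (n * (n - 1) // 2)
print("det f = (-1)^(n(n-1)/2) confirmed for n = 1..7")
```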
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$2(a^8+b^8+c^8)=(a^4+b^4+c^4)^2$ if and only if $a,b,c$ are the lengths of a right angled triangle Here is a problem from the book: Everything connected to Pithagoras It is well known that in every right angled triangle $ABC$, $a^2+b^2=c^2$. However, there are some more complicated equations as well. Here is one: Prove that $$2(a^8+b^8+c^8)=(a^4+b^4+c^4)^2$$ if and only if $a,b,c$ are the lengths of a right angled triangle. I can't prove it. I tried to write $a^4=x$ and so on, but algebra alone didn't help me. Also it is interesting because in this problem it doesn't matter which side is the hypotenuse... Please help!
Bear with me as I ramble a bit ... The relation in question resembles a way of writing Heron's formula for the area of a triangle with sides $x$, $y$, $z$: $$16\;|\triangle xyz|^2 = \left(x^2+y^2+z^2\right)^2-2\left(x^4+y^4+z^4\right) \tag{1}$$ Now, some (most?) people think of Heron's formula as more like $$|\triangle xyz|^2 = s(s-x)(s-y)(s-z) \tag{2}$$ where $s = (x+y+z)/2$ is the semi-perimeter. I personally prefer not to bother introducing an extra value, so I write $$16\;|\triangle xyz|^2 = (x+y+z)(-x+y+z)(x-y+z)(x+y-z) \tag{3}$$ What's the point of all this? Well, it actually has nothing to do with triangle areas. My point is simply that my familiarity with various forms of Heron's formula allows me to immediately realize that expressions that look like $(1)$ factor into expressions that look like $(3)$. (So, I no longer need to go through the trouble of expanding the product, combining terms, and attempting to factor. It's second nature to me now, and should be to any olympiad contender. :) The same must be true of the relation in question: $$\begin{align} 0 &= \left(a^4+b^4+c^4\right)^2-2\left(a^8+b^8+c^8\right) \\[4pt] &= \left(a^2+b^2+c^2\right)\left(-a^2+b^2+c^2\right)\left(a^2-b^2+c^2\right)\left(a^2+b^2-c^2\right) \end{align} \tag{4}$$ Assuming this relation holds, we see that one of the factors must vanish. Obviously, the first never does. Whichever of the latter three does corresponds to a Pythagorean relation in $a$, $b$, $c$, and therefore a right triangle. Conversely, a right triangle admits a Pythagorean relation, which causes $(4)$ to hold. $\square$
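The factorization in $(4)$ is a polynomial identity, so it can be spot-checked on integer triples (a sketch, not a proof):

```python
from random import randint, seed

def lhs(a, b, c):
    return (a**4 + b**4 + c**4) ** 2 - 2 * (a**8 + b**8 + c**8)

def rhs(a, b, c):
    # The Heron-style factorization from (4), in the variables a^2, b^2, c^2:
    return ((a*a + b*b + c*c) * (-a*a + b*b + c*c)
            * (a*a - b*b + c*c) * (a*a + b*b - c*c))

seed(0)
for _ in range(1000):
    a, b, c = (randint(-50, 50) for _ in range(3))
    assert lhs(a, b, c) == rhs(a, b, c)

# A right triangle such as (3, 4, 5) makes both sides vanish:
assert lhs(3, 4, 5) == 0
print("factorization confirmed on 1000 random integer triples")
```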
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Injection from cardinal $\lambda$ to cardinal $\kappa$ implies $\lambda\leq\kappa$ I'm trying to prove that if there is an injection $f:\lambda\to\kappa$ (for $\lambda$,$\kappa$ cardinal numbers) then $\lambda\leq\kappa$. This is not true if they are just ordinal numbers, for example it is easy to build an injection from $\omega+1$ to $\omega$, however $\omega<\omega+1$. I think the proof should be quite straightforward but I'm not getting it. I want to arrive at a contradiction by assuming $\kappa<\lambda$, so $f|_\kappa:\kappa\to\kappa$ is an injection and $f(\kappa)$ is a proper subset of $\kappa$ (not necessarily an ordinal number) but I don't realize how this can be problematic or how to move on from here. I thought maybe I should well order $f(\kappa)$ (and for this I think I need AC) and do something with its order type, but again I'm not sure how to proceed.
Hint: The statement you are trying to prove is more or less just a disguised version of the Schroder-Bernstein theorem. A full proof is hidden below. Suppose there is an injection $f:\lambda\to\kappa$ but $\kappa<\lambda$. Then the inclusion map is an injection $i:\kappa\to\lambda$. Since there are injections in both directions between $\kappa$ and $\lambda$, by Schroder-Bernstein there is a bijection between them. But this is a contradiction, since $\lambda$ is a cardinal so it cannot be in bijection with any smaller ordinal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is $|x|$ defined as $\sqrt{x^2}$ instead of $(\sqrt{x})^2$? I can't seem to understand this even though it might be utterly simple for some people. For me, saying $|x|=\sqrt{x^2}$ is a bit weird since $\sqrt{x^2}$ doesn't force positivity as there are always two possible square roots of a number, $\sqrt{x}=+\sqrt x, -\sqrt{x}$. Then why is it still used everywhere? Is there another way of interpreting this? I think $|x|=(\sqrt x)^2$ does the job much better.
You can see that $\sqrt{x^2}$ is defined for all $x\in\Bbb R$, while $(\sqrt{x})^2$ is defined only for $x\ge 0$. And that's a huge difference. Furthermore, $|x|$ is not defined as $(\sqrt x)^2$. Instead, $|x|$ is defined as $x$ if $x\ge 0$ and as $-x$ if $x<0$.
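The same distinction shows up in code: with the principal (non-negative) root convention, `sqrt(x*x)` reproduces `abs(x)` for every real `x`, while `sqrt(x)` fails for negative `x`:

```python
import math

for x in [-3.5, -1.0, 0.0, 2.0, 7.25]:
    # math.sqrt returns the principal (non-negative) root, so this is |x|:
    assert math.sqrt(x * x) == abs(x)

# By contrast, sqrt(x) itself is undefined for negative real x:
try:
    math.sqrt(-3.5)
except ValueError:
    print("sqrt(x^2) works for all real x; (sqrt(x))^2 needs x >= 0")
```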
{ "language": "en", "url": "https://math.stackexchange.com/questions/2955934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
$Z$ is a random variable that follows a standard normal distribution. Is $X = Z^2$ independent from $Y = Z^3$? I think the title is pretty clear about the problem. Should I try to find the joint probability of $X$ and $Y$ and decide if $\, f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y)$? If so, how am I going to find a joint distribution like this? Or is there some shortcut to prove it?
$EXY^{2} \neq EXEY^{2}$, so $X$ and $Y^{2}$ are not independent. This implies that $X$ and $Y$ are not independent. [Concretely, $EXY^2 = EZ^8 = 105$ while $EXEY^2 = EZ^2\,EZ^6 = 1\cdot 15 = 15$; the moments $EZ^{2m}=(2m-1)!!$ can be read off a table of moments of $Z$.] Alternative proof: if $X$ and $Y$ were independent, so would be $X^{3}$ and $Y^{2}$. This makes $Z^{6}$ independent of itself, which implies $Z^{6}$ is a constant random variable. This is obviously false.
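A Monte Carlo sketch of the failing moment identity, $E[XY^2]=E[Z^8]=105$ versus $E[X]E[Y^2]=15$ (seed and sample size are arbitrary choices of mine):

```python
import random

random.seed(2)
N = 200_000
sum2 = sum6 = sum8 = 0.0
for _ in range(N):
    z = random.gauss(0, 1)
    sum2 += z ** 2
    sum6 += z ** 6
    sum8 += z ** 8

e_xy2 = sum8 / N                    # estimates E[X Y^2] = E[Z^8] = 105
e_x_e_y2 = (sum2 / N) * (sum6 / N)  # estimates E[X] E[Y^2] = 1 * 15 = 15
print(round(e_xy2), round(e_x_e_y2))
```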
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Given a positive sequence $\{a_n\}$ where $a_{n+1}=a_n+\frac{n}{a_n}$, can one find an asymptotic expansion of $a_n-n$? Given a positive sequence $\{a_n\}$ such that $$a_{n+1}=a_n+\frac{n}{a_n}$$ Can one find an asymptotic expansion of $a_n-n$? I want one which has the term $O(1/n)$, or a stronger one.
Let $b_n=a_n-n$. Then $$b_{n+1}=\left(1-\frac 1{a_n}\right)b_n$$ Thus if $a_0>1$ then $a_n>1$ and $b_n>0$ for all $n\in\Bbb N$. Consequently, $0<b_{n+1}<b_n$ hence the sequence $b_n$ has limit, say $b\geq 0$, and $a_n\sim n$ as $n\to\infty$. \begin{align} \log\left(\frac{a_{n}-n}{a_2}\right) &=\log\left(\frac{b_{n}}{b_2}\right)\\ &=\log\left(\prod_{k=2}^{n-1}\left(1-\frac 1{a_k}\right)\right)\\ &=\sum_{k=2}^{n-1}\log\left(1-\frac 1{a_k}\right)\\ &=\sum_{k=2}^{n-1}\log\left(1-\frac 1k\right)+\sum_{k=2}^{n-1}\log\left(1+\frac{b_k}{(k+b_k)(k-1)}\right)\\ &=\sum_{k=2}^{n-1}\log\left(1-\frac 1k\right)+O(1)\\ &=-\sum_{k=2}^{n-1}\frac 1k+O(1)\\ &=-\log(n)+O(1)\\ \end{align} Consequently, $a_n=n+O(1/n)$.
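A quick numerical check of $b_n>0$, the monotone decay, and the $O(1/n)$ rate (the starting value $a_0=2$ is an arbitrary choice of mine):

```python
def b_sequence(a0, n_max):
    """Iterate a_{n+1} = a_n + n/a_n and record b_n = a_n - n."""
    a, bs = a0, []
    for n in range(n_max + 1):
        bs.append(a - n)
        a = a + n / a
    return bs

bs = b_sequence(2.0, 5000)
assert all(b > 0 for b in bs)                       # b_n > 0
assert all(bs[n + 1] < bs[n] for n in range(5000))  # strictly decreasing
print(1000 * bs[1000], 5000 * bs[5000])  # n * b_n levels off near a constant
```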
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Partial integration CDF I am reading a textbook which claims that we can obtain by partial integration, for CDF $F(x)$:$$\int_{t}^{\infty} (1-F(x)) \frac{dx}{x}=\int_{t}^{\infty} (\log u -\log t) dF(u) $$ I am aware that the latter integral is a Riemann-Stieltjes integral, but I am not sure how to go from the first to the latter via the partial integration formula I am familiar with; I obtain: $$\int_{t}^{\infty} (1-F(x)) \frac{dx}{x}=\log(x)(1-F(x))\big|_{t}^{\infty} -\int_{t}^{\infty}\log(x)\,(1-F(x))\, dx.$$
Hope this helps: \begin{align} \int_t^\infty(1-F(x))\frac{dx}{x} & = (1-F(x))\log x|_t^\infty-\int_t^\infty (-dF(x)) \log x\\ & = -(1-F(t))\log t+\int_t^\infty\log u dF(u)\\ & = -\int_t^\infty dF(u)\log t+\int_t^\infty\log u dF(u)\\ & = \int_t^\infty(\log u-\log t)dF(u). \end{align}
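A numerical sanity check of the identity, taking $F$ to be the standard exponential CDF (my choice; $t=0.5$ and the quadrature grid are also arbitrary):

```python
import math

# F(x) = 1 - exp(-x), so 1 - F(x) = exp(-x) and dF(u) = exp(-u) du.
# Compare both sides of the identity by midpoint-rule quadrature.
t, h, upper = 0.5, 1e-4, 40.0
xs = [t + (i + 0.5) * h for i in range(int((upper - t) / h))]

lhs = sum(math.exp(-x) / x for x in xs) * h                       # (1-F) dx/x
rhs = sum((math.log(x) - math.log(t)) * math.exp(-x) for x in xs) * h

print(round(lhs, 4), round(rhs, 4))  # both approximately 0.5598
```

(The boundary term vanishes here because $(1-F(x))\log x = e^{-x}\log x \to 0$ as $x\to\infty$.)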
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Piecewise function and differentiation quotient We have function $y = f(x) = |2x + 1|$ Can we simply say that $|2x + 1|' = 2$? No, because if we use the chain rule then we get $\left(\left|2x+1\right|\right)'\:=\frac{2\left(2x+1\right)}{\left|2x+1\right|}$ Redefine the same function piecewise without using the absolute value. $$\left|2x+1\right|= \begin{cases} 2x+1 & x \geq -\frac12 \\ -2x-1 & x \leq -\frac12 \end{cases}$$ Calculate the differential quotient for each of the corresponding regions. $f(x)=2x+1$ Differential quotient $$f'(x_0) = \lim_{x→x_0} \frac{f(x) − f(x_0)}{x − x_0}.$$ $f'(x_0) = \lim_{x→x_0} \frac{2x+1 − 2x_0-1}{x − x_0}=2$ $f(x)=-2x-1$ $f'(x_0) = \lim_{x→x_0} \frac{-2x-1 + 2x_0+1}{x − x_0}=-2$ What happens at $x = −\frac12$? This is where I am confused. There is no such $x$, so would that mean that my answer is wrong?
Note that for the limit of a difference quotient to exist, you must have $$\lim_{x\rightarrow x_0^{\color{red}{-}}}\frac{f(x)-f(x_0)}{x-x_0}=\lim_{x\rightarrow x_0^{\color{red}{+}}}\frac{f(x)-f(x_0)}{x-x_0}$$ You do not have that. You have $$\lim_{x\rightarrow x_0^{\color{red}{-}}}\frac{f(x)-f(x_0)}{x-x_0}=-2\neq 2=\lim_{x\rightarrow x_0^{\color{red}{+}}}\frac{f(x)-f(x_0)}{x-x_0}$$ Therefore your function is not differentiable at $x=-\frac{1}{2}$
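Numerically, the two one-sided quotients at $x_0=-\frac12$ indeed disagree (a tiny sketch; the step size is arbitrary):

```python
f = lambda x: abs(2 * x + 1)
x0, h = -0.5, 1e-8

left = (f(x0 - h) - f(x0)) / (-h)   # backward difference quotient
right = (f(x0 + h) - f(x0)) / h     # forward difference quotient
print(left, right)  # approximately -2.0 and 2.0: no derivative at x0
```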
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Span of sine and cosine equals span of positive and negative complex exponentials proof I need to prove that $$ \newcommand{\C}{\mathbb{C}} \newcommand{\span}{\text{span}_\C} \span\{\cos(kx), \sin(kx)\}=\span\{e^{ikx}, e^{-ikx}\} $$ This is my proof but I am not 100 % sure it's correct, mainly because I'm not sure I can just define the span of sine and cosine like I do.. My Proof Using the fact that $$ e^{ikx}=\cos(kx)+i\sin(kx) \quad \text{and} \quad e^{-ikx}=\cos(kx)-i\sin(kx) $$ We can write sine and cosine as follows $$ \cos(kx)=\frac{e^{ikx}+e^{-ikx}}{2} \quad \text{and} \quad \sin(kx)=\frac{e^{ikx}-e^{-ikx}}{2i} $$ And then we can rewrite the span of cosine and sine using the definition (is this even correct?) $$ \span\{\cos(kx), \sin(kx)\}=\{a\cos(kx)+b\sin(kx): k\in\mathbb{Z}, a,b\in\C\} $$ And then substituting the formulas found above and doing some manipulations $$ \span\{\cos(kx), \sin(kx)\}=\{\left(\frac{a}{2}+\frac{b}{2i}\right)e^{ikx} + \left(\frac{a}{2}-\frac{b}{2i}\right)e^{-ikx}: k\in\mathbb{Z}, a,b\in\C\} $$ but now we can rewrite $$ a':=\frac{a}{2}+\frac{b}{2i} \quad \text{and} \quad b':=\frac{a}{2}-\frac{b}{2i} $$ and notice that $a', b'\in\C$. Therefore we obtain that $$ \span\{\cos(kx), \sin(kx)\}=\{a'e^{ikx}+b'e^{-ikx}: k\in\mathbb{Z},a',b'\in\C\}=:\span\{e^{ikx}, e^{-ikx}\} $$ Is this correct? I'm not sure this argument works
I think your proof is basically correct though, perhaps, too cumbersome. I'd go as follows:$\newcommand{\C}{\mathbb{C}}\newcommand{\span}{\text{span}_\C}$ $$ \begin{cases}\cos(kx)=\frac{e^{ikx}+e^{-ikx}}{2} \\{}\\ \sin(kx)=\frac{e^{ikx}-e^{-ikx}}{2i}\end{cases}\;\;\implies \span\{\cos(kx), \sin(kx)\}\subset\span\{e^{ikx}, e^{-ikx}\} $$ and on the other hand: $$e^{\pm ikx}=\cos kx\pm i\sin kx\implies\span\{\cos(kx), \sin(kx)\}\supset\span\{e^{ikx}, e^{-ikx}\}$$ You can remark that all the involved coefficients are complex ones, too...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Validate the proof that the sequence $x_n = \sum_{k=1}^{n} {1\over n+k}$ is bounded. Let $n \in \mathbb N$ and: $$ x_n = \sum_{k=1}^{n} {1\over n+k} $$ Prove that $x_n$ is a bounded sequence. I'm wondering whether the proof below is valid. Since $n \in \mathbb N$ we have that $x_n$ is strictly greater than $0$. For the upper bound lets consider the following sequence $y_n$: $$ \begin{align} y_n &= {1 \over n + 1} + {1 \over n + 1} + \dots + {1 \over n + 1} = \\ &= \sum_{k = 1}^n {1 \over n+1} = {n \over n + 1} \end{align} $$ Since $x_n$ has an increasing denominator in each consecutive term of the sum we may conclude that $x_n < y_n$. So summarizing the above: $$ 0 < x_n < y_n $$ Which means that the sequence is bounded. Have I missed something?
Yes, you have $(\forall n\in\mathbb{N}):0<x_n<y_n$, but asserting that the sequence $(x_n)_{n\in\mathbb N}$ is bounded means that there are constants $a$ and $b$ such that$$(\forall n\in\mathbb{N}):a<x_n<b.$$That's easy, though, after what you did. Just take $a=0$ (of course) and $b=1$.
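As a quick numerical sanity check (my addition, not part of the argument above), computing $x_n$ directly confirms the bounds $a=0$, $b=1$:

```python
import math

# x_n = sum_{k=1}^n 1/(n+k); check 0 < x_n < 1 for many n
def x(n):
    return sum(1.0 / (n + k) for k in range(1, n + 1))

values = [x(n) for n in range(1, 1001)]
assert all(0 < v < 1 for v in values)
# the sequence in fact approaches ln 2 ≈ 0.693, comfortably inside the bounds
assert abs(values[-1] - math.log(2)) < 1e-3
```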
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find the number of roots of this function: $\sqrt{x+1}-x^2+1=0$ There is an equation here: $$\sqrt{x+1}-x^2+1=0$$ Now we want to rewrite the equation $f(x)=0$ in the form $h(x)=g(x)$, where $h$ and $g$ are functions whose graphs we know how to draw. Then we draw the graphs of $h$ and $g$ and find their common points. The number of common points is the number of roots of $f(x)$, which here is the equation mentioned at the top. Actually, my problem is with drawing the graph of the first equation. I want you to draw its graph, like $\sqrt{x-1}$, step by step. Please help me with it!
Guide: * *First draw $\sqrt{x}$. *Now think of having drawn $h(x)$, how would you draw $h(x\color{red}+1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Difference real and complex fourier series I'm working on fourier series and I'm trying to compute the fourier transformation for the $2\pi$-periodic function of $f(x)=x^2$ with $x \in [-\pi,\pi]$. Now with the real way, that is $$f(x) \sim \frac{a_{0}}{2}+\sum\limits_{n=1}^{\infty}a_{n}\cos(nx)+b_{n}\sin(nx)$$ and I found $$f(x) \sim \frac{\pi^2}{3}+\sum\limits_{n=1}^{\infty} \frac{4}{n^2}(-1)^{n}\cos(nx).$$ Now I also tried to compute with the imaginary way, that is with $$f(x) \sim c_{0}+\sum\limits_{n=-\infty}^{\infty}c_{n} e^{inx},$$ with $$c_{n}=\frac{1}{2\pi} \int\limits_{-\pi}^{\pi}f(x)e^{-inx},$$ and I found $$f(x) \sim \frac{\pi^2}{3}+\sum\limits_{n=1}^{\infty} \frac{2}{n^2} (-1)^{n} e^{inx},$$ which doesn't seem to be the same. I'm sure about the real computation, any suggestions where I go wrong with the imaginary part?
With $f(x) = x^2$, the complex Fourier series should be indexed by the integers. That is, $$ f(x) \sim c_0 + \sum_{n=-\infty}^{\infty} c_n \mathrm{e}^{inx}, $$ where the Fourier coefficients are given by $$ \frac{1}{2\pi} \int_{-\pi}^{\pi} x^2 \mathrm{e}^{-inx}\, \mathrm{d} x = \begin{cases} \frac{2}{n^2}(-1)^n & \text{if $n\ne 0$, and} \\ \frac{\pi^2}{3} & \text{if $n=0$.} \end{cases}$$ (I get this after two integration by part steps–I'm leaving off the details here, as you seemed to have worked them out correctly in your work.) Via little bit of manipulation, this becomes \begin{align*} f(x) &\sim \sum_{n=-\infty}^{\infty} c_n \mathrm{e}^{inx} \\ &= c_0 + \sum_{n=-\infty}^{-1} c_{n} \mathrm{e}^{inx} + \sum_{n=1}^{\infty} c_{n} \mathrm{e}^{inx} \\ &= c_0 + \sum_{n=1}^{\infty} c_{-n} \mathrm{e}^{-inx} + \sum_{n=1}^{\infty} c_{n} \mathrm{e}^{inx} \\ &= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{2}{(-n)^2} (-1)^{-n}\mathrm{e}^{-inx} + \sum_{n=1}^{\infty} \frac{2}{n^2} (-1)^{n}\mathrm{e}^{inx} \\ &= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{2}{n^2} (-1)^n\left( \mathrm{e}^{inx} + \mathrm{e}^{-inx} \right) \\ &= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{4}{n^2} (-1)^n \left( \frac{\mathrm{e}^{inx} + \mathrm{e}^{-inx}}{2} \right) \\ &= \frac{\pi^2}{3} + \sum_{n=1}^{\infty} \frac{4}{n^2} (-1)^n \cos(nx), \end{align*} which is the result you were hoping to get.
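Not part of the original answer, but the coefficient formula above is easy to sanity-check numerically with a simple midpoint rule (my addition):

```python
import cmath, math

def c(n, N=100_000):
    # midpoint rule for c_n = (1/2π) ∫_{-π}^{π} x² e^{-inx} dx
    h = 2 * math.pi / N
    s = 0j
    for j in range(N):
        x = -math.pi + (j + 0.5) * h
        s += x * x * cmath.exp(-1j * n * x)
    return s * h / (2 * math.pi)

assert abs(c(0) - math.pi**2 / 3) < 1e-5
for n in (1, 2, 3):
    assert abs(c(n) - 2 * (-1) ** n / n**2) < 1e-5
```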
{ "language": "en", "url": "https://math.stackexchange.com/questions/2956922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate $\lim_{x \to 4} \frac{x^4-4^x}{x-4}$, where is my mistake? Once again, I am not interested in the answer. But rather, where is/are my mistake(s)? Perhaps the solution route is hopeless: Question is: evaluate $\lim_{x \to 4} \frac{x^4 -4^x}{x-4}$. My workings are: Let $y=x-4$. Then when $x \to 4$, we have that $y \to 0$. Thus: $$\lim_{y \to 0} \frac{(y+4)^4 - 4^{y+4}}{y} = \\ = \lim_{y \to 0}\frac{(y+4)^4}{y} - \lim_{y \to 0} \frac{4^{(y+4)}}{y} $$ And this step is not allowed from the get go, as I am subtracting infinities, which is indeterminate. What I should have done though: $$4^4 \lim_{y \to 0} \frac{(1+y/4)^4-1-(4^y-1)}{y} = \\ 4^4 \lim_{y \to 0} \left( \frac{(1+y/4)^4-1}{\frac{y}{4}4} - \frac{4^y-1}{y} \right) = \\ =4^4\left(\frac{1}{4} \cdot 4 - \ln 4 \right) = 256(1-\ln 4)$$
The following step is not allowed $$\lim_{x \to 4} \frac{x^4-4^x}{x-4}=\lim_{y \to 0} \frac{(y+4)^4-4^{y+4}}{y}\color{red}{=\lim_{y \to 0} \frac{(y+4)^4}{y}-\lim_{y \to 0} \frac{4^{y+4}}{y}}$$ Refer also to the related * *Analyzing limits problem Calculus (tell me where I'm wrong). *Evaluate $ \lim_{x \to 0} \left( {\frac{1}{x^2}} - {\frac{1} {\sin^2 x} }\right) $
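For reference (my addition), the value $256(1-\ln 4)$ obtained at the end of the question checks out numerically:

```python
import math

# evaluate (x⁴ − 4ˣ)/(x − 4) near x = 4 and compare with 256(1 − ln 4)
f = lambda x: (x**4 - 4**x) / (x - 4)
target = 256 * (1 - math.log(4))      # ≈ -98.89

for h in (1e-4, 1e-5, 1e-6):
    assert abs(f(4 + h) - target) < 0.05
    assert abs(f(4 - h) - target) < 0.05
```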
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Question about metric function Suppose that $(X,d)$ is a metric space. Prove that $d:X\times X\to \mathbb{R}$ is continuous. Remark: I know that there are a lot of similar topics, such as this question. Please do not mark this as a duplicate, because the question I am going to ask did not come up in those topics. Let $(x_0,y_0)$ be some point of $X\times X$. I am going to prove that the function $(x,y)\mapsto d(x,y)$ is continuous at $(x_0,y_0)$. We need to show that for any $\varepsilon>0$ $\exists \delta=\delta(\varepsilon)>0$ such that for any $(x,y)\in X\times X$ within $\delta$ of $(x_0,y_0)$, the distance between $d(x,y)$ and $d(x_0,y_0)$ is less than $\varepsilon$. But I have the following questions: 1) What is the distance between $(x,y)$ and $(x_0,y_0)$, which are points of $X\times X$? 2) What is the distance between $d(x,y)$ and $d(x_0,y_0)$, which are real numbers? I would be very grateful for an explanation!
You are quite right to question what metrics or topologies you should be using when you judge whether $d : X \times X \to \Bbb{R}$ is continuous. The product topology on $X \times X$ (where $X$ is given the metric topology induced by $d$) and the standard topology on $\Bbb{R}$ are the ones that make sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Combinatorics rolling a die six times How many ways are there to roll a die six times such that there are more ones than twos? I broke this up into six cases: $\textbf{EDITED!!!!!}$ $\textbf{Case 1:}$ One 1 and NO 2s --> 1x4x4x4x4x4 = $4^5$. This can be arranged in six ways: $\dfrac{6!}{5!}$. So there are $\dfrac{6!}{5!}$$4^5$ ways for this case. $\textbf{Case 2:}$ Two 1s and One 2 OR NO 2s --> 1x1x5x4x4x4 = $5x4^3$. This can be arranged in $\dfrac{6!}{2!3!}$ ways. So there are $(6x4^3)$$\dfrac{6!}{2!3!}$ ways for this case. $\textbf{Case 3:}$ Three 1s and Two, One or NO 2s --> 1x1x1x5x5x4 = 4x$5^2$. This can be arranged in $\dfrac{6!}{2!3!}$ ways. So there are $(4x5^2)$$\dfrac{6!}{2!3!}$ ways for this case. $\textbf{Case 4:}$ Four 1s and Two, One or NO 2s --> 1x1x1x1x5x5 = $5^2$. This can be arranged in $\dfrac{6!}{4!2!}$ ways. So there are $(5^2)$$\dfrac{6!}{4!2!}$ ways for this case. $\textbf{Case 5:}$ Five 1s and One or NO 2s --> 1x1x1x1x1x5 = 5. This can be arranged in $\dfrac{6!}{5!}$ ways = 6 ways. So there are $5^3$ ways for this case. $\textbf{Case 6:}$ Six 1s and NO 2s --> 1x1x1x1x1x1 = 1. There is only one way to arrange this so there is only 1 way for this case. With this logic...I would add the number of ways from each case to get my answer.
If the question is "where is my error", then please ignore this answer, which gives another way to count. The idea is that there are either more $1$'s, case (1), or more $2$'s, case (2), or they occur equally often, case $(=)$. Of course, the count of possibilities for case (1) is the same as for case (2), so we simply count the possibilities for $(=)$, subtract them from the total, divide by $2$, so the number we are searching for is: $$ N = \frac 12\left(\ 6^6 -\binom 60\binom 60 4^{6-0} -\binom 61\binom 51 4^{6-2} -\binom 62\binom 42 4^{6-4} -\binom 63\binom 33 4^{6-6} \ \right) \ . $$ The products of binomial coefficients of the shape $\binom 6k\binom{6-k}k$, $k=0,1,2,3$ count the possibilities to fix the $k$ places among $6$ places for the $1$'s, then the $k$ places for the $2$'s from the remaining $6-k$. We get then $$ N = \frac 12\left(\ 6^6 - 4^6 - 6\cdot 5\cdot 4^4 - 15\cdot 6\cdot 4^2 - 20\cdot 1\cdot 4^0 \ \right) = 16710\ . $$ We can also check this answer with sage, by enumerating all possibilities. sage: len( [ x for x in cartesian_product( [[1..6] for _ in [1..6]] ) ....: if list(x).count(1) > list(x).count(2) ] ) 16710
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding $E[x^2]$ in a variance question I know variance is equal to $V[x]=E[x^2] - (E[x])^2 $, but how do you expand $E[x^2]$ for some $x$ if you're given the necessary information... What I mean, for example, is: if you suppose $x = ys+(1-y)r$, then would $E[x^2] = E[(ys+(1-y)r)^2] = E[(ys+(1-y)r) \cdot (ys+(1-y)r)]$? I'm trying to solve a problem dealing with variance, but feel like I maybe just don't understand how to properly use $E[x^2]$.
If $x$ is your random variable, then $x^2$ is just a 'transformation' of that random variable. Remember that the expectation operator $\text{E}[ \cdot]$, when applied to a random variable $x$, just gives you back the 'weighted average' of your support's values i.e. $\text{E}[ x] = \sum_{i \in S}\mathbb{P}(x = i)\times i$ (where the weights are each of the $\mathbb{P}(x = i)$'s). Here $S$ is just the support of the random variable. If $x\sim \text{binary}(p)$ then $S = \{0, 1\}$ for example. When we ask what $\text{E}[ x^2]$ is, we literally mean "what is the 'weighted average' when we square the random variable $x$". Since the value of $x^2$ depends on the value of $x$, $\text{E}[ x^2] = \sum_{i \in S}\mathbb{P}(x = i)\times i^2$.
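A concrete example (my addition): for a fair six-sided die, computing both expectations with exact fractions shows $\text{E}[x^2]\neq(\text{E}[x])^2$, and their difference is the variance:

```python
from fractions import Fraction

# fair six-sided die: P(x = i) = 1/6 for i in {1, ..., 6}
support = range(1, 7)
p = Fraction(1, 6)

E_x  = sum(p * i for i in support)       # weighted average of i
E_x2 = sum(p * i * i for i in support)   # weighted average of i², NOT (E[x])²
var  = E_x2 - E_x**2

assert E_x == Fraction(7, 2)
assert E_x2 == Fraction(91, 6)
assert var == Fraction(35, 12)
```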
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it true that the number is divisible by $p$? Question: Let $a, b, c$ be positive integers and $p>3$ be a prime ($a$ isn't divisible by $p$). Consider a quadratic polynomial $P(x) = ax^2+bx+c$, and assume that there exist $2p-1$ consecutive positive integers: $$x+1, x+2, ..., x+2p-1$$ such that $P(x+i)$ is a square number for every $i$ $(1 \leq i \leq 2p-1)$. Is it true that the number $\Delta = b^2-4ac$ is divisible by $p$? I found out that if there exists $i$ with $1 \leq i \leq p-1$ such that $P(x+i)$ is divisible by $p$, then $\Delta$ is divisible not only by $p$, but by $p^2$ as well. If $P(x+i)$ is divisible by $p$, then $p^2|P(x+i)$ and $p^2|P(x+i+p)$, thus $$p^2|P(x+i+p)-P(x+i) \implies p^2|p(a(2x+2i+p)+b) \implies p|2a(x+i)+b$$ and since $4a\cdot P(x+i)=(2a(x+i)+b)^2-\Delta$, so $p^2|\Delta$. However, I cannot find any conditions on $a, b, c, p$ under which there exists $i$ with $1 \leq i \leq p-1$ and $p|P(x+i)$. Is the question correct, or must there be some conditions on $a, b, c, p$ for it to be true? (Sorry, English is my second language)
This is just a question of arithmetic mod $p$. Writing $\bar n$ for $n$ mod $p$, the reduction mod $p$ of the given binomial yields an $f(X)=\bar aX^2+\bar bX+\bar c \in \mathbf F_p [X]$, with $\bar a \neq \bar0$. The OP hypothesis says that $f(\bar x)$ is a square in $\mathbf F_p$ for $p$ distinct values between $\bar y$ and $\bar y + \bar p -\bar 1$. But it is well known that $\mathbf F^*_p$ is cyclic, hence $(\mathbf F^*_p)^2$ has order $\frac {p-1}2$, and the number of distinct squares in $\mathbf F_p$ is exactly $1+\frac {p-1}2 <p$. It follows in particular that the map $\bar x \to f(\bar x)$ cannot be injective. For $\bar x \neq \bar x' \in \mathbf F_p$ , $f(\bar x)=f(\bar x')$ iff $\bar a (\bar x - \bar x')+\bar b=\bar 0$, iff $\bar x + \bar x'=-\frac {\bar b} {\bar a}$. Thus, starting from $\bar x \neq -\frac {\bar b} {2\bar a}$, the map $\bar x \to x'=-\bar x-\frac {\bar b} {\bar a}$ produces a pair $(\bar x,\bar x'),\bar x \neq \bar x'$, s.t. $f(\bar x)=f(\bar x') \in (\mathbf F^*_p)^2$. This allows us to organize $\mathbf F_p$ as the union of $\frac {p-1}2$ pairs $(\bar x,\bar x')$ as above and a singleton $\bar z$ s.t. $f(\bar z)=\bar 0$, so $\bar z$ is necessarily a double zero of $f$, i.e. $p$ divides the discriminant $b^2-4ac$ .
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
A Special Property of e? I was debating whether I should post this question over at MathEducatorsStackExchange or here, sorry if this was the wrong forum for this question. I am teaching Calculus I this semester and I found the following problem, which I gave to a group for one of their group challenge problems: "Assume $b$ is a positive number and $b≠1$. Find the $y$-coordinate of the point on the curve $y=b^x$ at which the tangent line passes through the origin." The group was able to solve the problem (the $y$-coordinate turns out to be $b^{\frac{1}{\ln(b)}}$ which is the same as $e$). They also created a Desmos graph to demonstrate this property. However, they asked the question: "why is this true?" They know that the steps of the problem show it to be true, but they wanted to know if there was some deeper property of $e$ that made this property more obvious, or if there was some intuition as to why this property exists. Or, is this just a neat Calculus problem with a tidy answer? EDIT: For some clarification, the students weren't asking "why is $b^{\frac{1}{ln(b)}}=e$", they were able to figure that out after I suggested they plug in $b=2, 3, 4,...$ etc. and see if they notice a pattern. They were more asking about "why is the tangent line at $b^x$ that goes through the origin always tangent at the point $(x, e)$". My initial response was to look at their work; they had supplied the answer to that question just by doing the problem (like some people are suggesting in the comments). They seemed unsatisfied though, and thought there was some intuition that might make this property more obvious.
I. Why $b^{\frac{1}{\ln(b)}}=e$: note two logarithm properties: * *$\frac{1}{log_p(q)}=log_q(p)$ *$B^{log_B(C)}=C$ II. The history and (quick) derivation of $e$: Find the derivative of the function $f(x)=a^x$ by definition: $$\displaystyle \lim_{h\rightarrow0}\frac{a^{x+h}-a^x}{h}=\displaystyle \lim_{h\rightarrow0}\frac{a^x a^h-a^x}{h}=\displaystyle \lim_{h\rightarrow0}\frac{a^x (a^h-1)}{h}=a^x\cdot\displaystyle \lim_{h\rightarrow0}\frac{a^h-1}{h} $$ As we see, the derivative of $a^x$ is $a^x$ multiplied by this extra limit. Euler asked: "For which $a$ does this extra limit equal $1$?" Hence (in short): $$\displaystyle \lim_{h\rightarrow0}\frac{e^h-1}{h}=1 $$ here we use the fact that $something_{tiny}=\frac{1}{something_{big}}$: $$\displaystyle \lim_{n\rightarrow\infty}\frac{e^{\frac1n}-1}{\frac1n}=1 $$ Now we ignore for a minute the "$\displaystyle \lim_{n\rightarrow\infty}$": $$\frac{e^{\frac1n}-1}{\frac1n}=1 $$ $$e^{\frac1n}-1=\frac1n $$ $$e^{\frac1n}=1+\frac1n $$ $$e=(1+\frac1n)^n $$ $$\bbox[5px,border:2px solid blue]{e=\displaystyle \lim_{n\rightarrow\infty}(1+\frac1n)^n} $$
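Both limits in this derivation are easy to check numerically (my addition):

```python
import math

# lim_{h→0} (e^h − 1)/h = 1
ratio = (math.e ** 1e-6 - 1) / 1e-6
assert abs(ratio - 1) < 1e-3

# more generally, lim_{h→0} (a^h − 1)/h = ln a
for a in (2.0, 10.0):
    assert abs((a ** 1e-7 - 1) / 1e-7 - math.log(a)) < 1e-5

# lim_{n→∞} (1 + 1/n)^n = e
approx_e = (1 + 1e-6) ** 1e6
assert abs(approx_e - math.e) < 1e-5
```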
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Prove that $7^n+2$ is divisible by $3$ for all $n ∈ \mathbb{N}$ Use mathematical induction to prove that $7^{n} +2$ is divisible by $3$ for all $n ∈ \mathbb{N}$. I've tried to do it as follow. If $n = 1$ then $9/3 = 3$. Assume it is true when $n = p$. Therefore $7^{p} +2= 3k $ where $k ∈ \mathbb{N} $. Consider now $n=p+1$. Then \begin{align} &7^{p+1} +2=\\ &7^p\cdot7+ 2=\\ \end{align} I reached a dead end from here. If someone could help me in the direction of the next step it would be really helpful. Thanks in advance.
$7^{n} +2$ is divisible by $3$ iff $7^{n} +2 - 3 = 7^{n} -1$ is divisible by $3$. Now, $7^{n} -1 = (7-1)(7^{n-1}+7^{n-2}+\cdots+1)$ is divisible even by $6$, and in particular by $3$.
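A brute-force check of both statements (my addition):

```python
# 3 | 7^n + 2 and 6 | 7^n − 1 for a large range of n
divisible_by_3 = all((7 ** n + 2) % 3 == 0 for n in range(1, 200))
divisible_by_6 = all((7 ** n - 1) % 6 == 0 for n in range(1, 200))
assert divisible_by_3 and divisible_by_6
```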
{ "language": "en", "url": "https://math.stackexchange.com/questions/2957899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Does $\sqrt{i^4} = i^2$? I'm assuming it doesn't, because if it did, then $1 = \sqrt{1} = \sqrt{i^4} = i^2 = -1$. In general, does $\sqrt{x^4} = x^2$?
In general, $$\sqrt{x^2} = \pm x$$ so technically, $$\sqrt{i^4} = \pm i^2$$ so what you gave is one of the possible solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Given $F(x,y)$, what does it mean to compute $dF(X)$ for $X=x \frac{d}{dx}+y\frac{d}{dy}$? Given $F(x,y)$, what does it mean to compute $dF(X)$ for $X=x \frac{d}{dx}+y\frac{d}{dy}$? My idea: $dF(x,y)$ is the same as the Jacobian of $F(x,y)$. But in order to plug in $X$, then what should I do?
By definition, $$ dF(X) = \frac{\partial F}{\partial x} dx(X) + \frac{\partial F}{\partial y} dy(X). $$ Here, $$ dx(X) = dx(x \partial_x + y \partial_y) = \{ \text{ linearity } \} = x \, dx(\partial_x) + y \, dx(\partial_y) = x \cdot 1 + y \cdot 0 = x $$ Likewise, $dy(X) = y.$ Thus, $$ dF(X) = \frac{\partial F}{\partial x} x + \frac{\partial F}{\partial y} y. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$\iint 1/(x+y)$ in region bounded by $x=0, y=0, x+y =1, x+y = 4$ $\iint 1/(x+y)$ in region bounded by $x=0, y=0, x+y =1, x+y = 4$ using the following transformation: $T(u,v) = (u - uv, uv)$. I want to make sure that my method is correct. Calculating the Jacobian, I get $u$. Since $x+y = u$, I get that $1 < u < 4$. Additionally, $v = y/(y+x)$, and thus at $x = 0$ we have $v = 1$ and at $y = 0$ we have $v = 0$. So the double integral is equivalent to $\iint(1/u)\,u\, du\, dv$ over the region $[1,4]\times[0,1]$, which is easily solved. I am not sure about the method I employed; any help would be greatly appreciated.
Yes the method is correct, indeed we are considering the following change of coordinates * *$u=x+y \implies 1\le u \le 4$ *$v=\frac{y}{x+y}\implies 0\le v \le 1$ indeed for any fixed value for $u=x+y\,$ we have that $y$ varies form $0$ to $u$ and therefore $v$ varies from $0$ to $1$ the jacobian is $$du\,dv=|J|dx\,dy=\begin{vmatrix}1&1\\-\frac{y}{(x+y)^2}&\frac{x}{(x+y)^2}\end{vmatrix}dx\,dy=\frac1udxdy \implies dx\,dy=u\,du\,dv$$ and therefore $$\iint_D \frac1{x+y}dxdy=\int_1^4\int_0^1\frac1u\cdot u\,du\,dv$$
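Evaluating the final iterated integral gives $\int_1^4\int_0^1 du\,dv = 3$; as an extra check (my addition), a crude midpoint-grid approximation of the original integral agrees:

```python
# ∫₁⁴ ∫₀¹ (1/u)·u dv du = ∫₁⁴ du = 3 exactly
exact = 3.0

# crude midpoint grid over the original region {x, y ≥ 0, 1 ≤ x+y ≤ 4}
h = 0.004
n = int(4 / h)
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if 1.0 <= x + y <= 4.0:
            total += h * h / (x + y)

assert abs(total - exact) < 0.05
```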
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expressing $\frac{t+2}{t^3+3}$ in the form $a_0+a_1t+...+a_4t^4$, where $t$ is a root of $x^5+2x+2$ Expressing $\frac{t+2}{t^3+3}$ in the form $a_0+a_1t+...+a_4t^4$, where $t$ is a root of $x^5+2x+2$. So I can deal with the numerator, but how do I get rid of the denominator to get it into the correct form? Thanks in advance!
Using the Euclidean algorithm for computing $\gcd(x^3+3,x^5+2x+2)$, we get $$ 367=(10 x^4 - 31 x^3 - 14 x^2 - 30 x + 113)(x^3+3)+(-10 x^2 + 31 x + 14)(x^5+2x+2) $$ and so $$ 367=(10 t^4 - 31 t^3 - 14 t^2 - 30 t + 113)(t^3+3) $$ Thus, $$ \begin{align} 367\frac{t+2}{t^3+3} &=(t+2)(10 t^4 - 31 t^3 - 14 t^2 - 30 t + 113)\\ &=10(t^5+2t+2)+(-11 t^4 - 76 t^3 - 58 t^2 + 33 t + 206)\\ &=-11 t^4 - 76 t^3 - 58 t^2 + 33 t + 206 \end{align} $$
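The final identity can be verified with elementary polynomial arithmetic (my addition): it says exactly that $367(t+2)$ and $(-11t^4-76t^3-58t^2+33t+206)(t^3+3)$ agree modulo $t^5+2t+2$.

```python
# Polynomials are lists of coefficients, lowest degree first.
def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polymod(p, m):
    # remainder of p modulo the monic polynomial m
    p = p[:]
    while len(p) >= len(m):
        c = p[-1]
        shift = len(p) - len(m)
        for i, a in enumerate(m):
            p[shift + i] -= c * a
        while p and p[-1] == 0:
            p.pop()
    return p

minimal = [2, 2, 0, 0, 0, 1]                                  # t⁵ + 2t + 2
lhs = polymod(polymul([2, 1], [367]), minimal)                # 367·(t + 2)
rhs = polymod(polymul([206, 33, -58, -76, -11], [3, 0, 0, 1]), minimal)
assert lhs == rhs                                             # both are 367t + 734
```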
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does this equation have a complex number solution? Does this equation have any solutions: $$\sqrt{z^2+z-7}=\sqrt{z-3}?$$ I know it does not have any real number solutions, but how about complex number solutions? I understand that when you solve this problem algebraically, you get $z=\pm 2$ as solutions. But when you input $2$ into the original equation you get $i=i$, indicating that the real number $2$ is not a solution. But since $i=i$ is a true statement, this indicates that the complex number $2$ is a solution. My question is how are $2$ (the real number, which is not a solution) and $2$ (the complex number in the complex plane, which is a solution) different?
You must consider what is meant by $\sqrt\;$. There are several possible square root functions here: one has domain $\mathbb R^+$ and codomain $\mathbb R$, one has domain $\mathbb R$ and codomain $\mathbb C$, and one has domain $\mathbb C$ and codomain $\mathbb C$. If you use the first function, then it is undefined when $z=\pm2$, so there is no solution. If you use the second function, it is defined everywhere and $z=\pm2\in\mathbb R$ are solutions. If you use the third function, it is defined everywhere and $z=\pm2\in\mathbb C$ are solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving $x^2 - 4y^2 = 7$ has no natural number solutions Ok so I needed to prove this by contradiction. Let $P:~x^2 - 4y^2 = 7$ and $Q:~x,y$ are not natural numbers. Note that $\mathbb N$ does not include $0$. OK, to begin the proof by contradiction we are given $P$ and I would assume $\text{not}(Q)$, so the equation would have natural number solutions: $$x^2 - 4y^2 = (x-2y)(x+2y) = 7$$ Ok so here $(x-2y)(x+2y) = 7$ would be the contradiction, but I don't seem to understand why. I know that $7$ can be written as a product of two natural numbers in two ways, so because we only have one way, is that why we would say that $(x-2y)(x+2y) = 7$ is a contradiction?
Suppose there is an integer solution. since $y>0$, we have $x-2y < x+2y$. Hence we have $x-2y =1$ and $x+2y=7$. if we subtract them, we have $4y = 6$ and $y = \frac32$ which is a contradiction. Alternatively, think of what values can $x^2 \pmod{4}$ take. Just take modulo $4$ and you can see the contradiction.
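A finite brute-force confirmation (my addition, a check rather than a proof), including the alternative mod-$4$ obstruction:

```python
# no solutions in a large search window
no_solution = not any(x * x - 4 * y * y == 7
                      for x in range(1, 500) for y in range(1, 500))
assert no_solution

# underlying reason: x² − 4y² ≡ x² ≡ 0 or 1 (mod 4), never 3 ≡ 7 (mod 4)
mod4_values = {(x * x - 4 * y * y) % 4 for x in range(50) for y in range(50)}
assert mod4_values == {0, 1}
```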
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Does a real number with this decimal expansion for $r$ and $r^2$ exist? Does there exist a real number $0< x <1$, such that the decimal expansions of $x$ and $x^2$ are the same, starting from the millionth term, and neither expansion has an infinite tail of zeroes? I was thinking $x=0.\overline{999}$, but does that work? Isn't that just equal to 1 which is not allowed.? If this works, how would I prove it?
We can concoct an example quite easily. Suppose we want the difference between $x$ and $x^2$ to be 0.1: $$x-x^2=0.1$$ where the order $x-x^2$ is mandated by $0<x<1$, so $x^2<x$. Solving this, we get two admissible values $x=\frac{1\pm\sqrt{0.6}}2$. Thus (taking $x=\frac{1+\sqrt{0.6}}2$) we have $$x=0.88729833\dots$$ $$x^2=0.78729833\dots$$ so their decimal expansions agree after the first place, and indeed after the millionth place. Any number $0<k<0.25$ with a terminating decimal expansion such that $\sqrt{1-4k}$ does not terminate can be used in place of the 0.1 in $x-x^2=0.1$.
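A high-precision check of this example (my addition):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
x = (1 + Decimal("0.6").sqrt()) / 2   # the root of x − x² = 0.1 used above

# x − x² = 0.1 up to the working precision ...
assert abs(x - x * x - Decimal("0.1")) < Decimal("1e-45")

# ... so every decimal digit from the second place on agrees
sx, sy = format(x, ".40f"), format(x * x, ".40f")
assert sx.startswith("0.88729833")
assert sy.startswith("0.78729833")
assert sx[3:] == sy[3:]
```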
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An ellipse intrinsically bound to any triangle Given any triangle $\triangle ABC$, we build the hyperbola with foci at $A$ and $B$ and passing through $C$. The hyperbola always intersects the side of the triangle that is opposite to the vertex through which it passes in two points $D$ and $E$. Similarly, we can build two other hyperbolas, one with foci at $A$ and $C$ and passing through $B$ (red), and one with foci at $B$ and $C$ and passing through $A$ (green), obtaining two other couples of points $F$, $G$ and $H$, $I$. My conjecture is that The $6$ points $D,E,F,G,H,I$ always determine an ellipse. How can I show this (likely obvious) result with a simple and compact proof? Thanks for your help, and sorry for the trivial question! This problem is related to this one.
Showing that the points in question lie on a common conic is straightforward. I've renamed the points thusly: $D_B$ and $D_C$ are the points where the hyperbola through $A$ meets $\overline{BC}$; the subscripts indicate the closer vertex. Likewise for $E_C$, $E_A$, $F_A$, $F_B$. Now, simply note that $$|BD_B| = |CD_C| \qquad |CE_C|=|AE_A| \qquad |AF_A| = |BF_B|$$ $$|D_BC| = |D_CB| \qquad |E_CA|=|E_AC| \qquad |F_AB| = |F_BA|$$ so that $$\frac{BD_B}{D_BC}\cdot\frac{CE_C}{E_CA}\cdot\frac{AF_A}{F_AB} = \frac{CD_C}{D_CB}\cdot\frac{AE_A}{E_AC}\cdot\frac{BF_B}{F_BA} \tag{$\star$}$$ It happens that $(\star)$ holds if and only if $D_B$, $E_C$, $F_A$, $D_C$, $E_A$, $F_B$ lie on a conic. (This is the same condition used in this answer, except that it really doesn't matter here if we consider the ratios as signed or unsigned.) Showing that the conic is specifically an ellipse takes some extra work. I'll have to come back to that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2958984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Complex Anti-derivative of tan(z) Show $tan(z)$ has a complex anti-derivative on $S=\mathbb{C}\backslash((-\infty,-\pi/2]\cup[\pi/2,\infty))$ If F(z) is the complex antiderivative of $tan(z)$ on $S$, find $F(i)$, if $F(0)=0$ I know that for $D\subset \mathbb{C}$ open and starshaped, and if $f:D\rightarrow \mathbb{C}$ is holomorphic, then $f$ has an anti-derivative. First I prove $S$ is star shaped. Choose center $z_0=(0,0)\in S$ and consider $\{(x,y)\in S:x\geq0,y\geq0\}$. Trivially $\forall z\in(0,\pi /2)$, $[z_0,z]\subset S$. Now take $z=(x,y)\in S$, such that $x\geq0$ and $y>0$. The straight line connecting $z_0$ to $z$ is given by $[z_0,z]=\{z_0+t(z-z_0):t\in[0,1]\}$. We want to show that $[z,z_0]$ never intersects $[\pi /2,\infty)$. Suppose to the contrary, that is suppose $[z_0,z]$ intersects $[\pi /2,\infty)$. Note that $z_0+t(z-z_0)=(0,0)+t((x,y)-(0,0))=t(x,y)=(tx,ty)$ for $t\in [0,1]$. Since by assumption, if $[z_0,z]$ intersects $[\pi /2, \infty)$, then there exists $t\in [0,1]$, such that $tx\in[\pi /2, \infty)$ and $ty=0$. But $y>0$, which thus implies $t=0$. But if $t=0$, that implies $tx=0\notin[\pi /2,\infty)$, and thus a contradiction. That is, $[z_0,z]\subset\{(x,y)\in S:x\geq0,y\geq0\}$ Apply this same argument to all four quadrants of $\mathbb{C}$ which thus shows that $S$ is star-shaped. Now, since $tan(z)=\frac {sin(z)}{cos(z)}$ and since $sin(z)$ and $cos(z)$ are holomorphic on their domains, then $tan(z)$ is holomorphic on its domain. Thus $tan(z)$ has an antiderivative. I was wondering if my above logic is correct. Furthermore, how do we go about finding $F(i)$ if $F(0)=0$? I appreciate any help. Thanks in advance!
Answer to second part: the antiderivative vanishing at $0$ is given by the integral $\int_{[0,z]} tan (\zeta)\, d\zeta$. so $F(i)=\int_{[0,i]} tan (\zeta)\, d\zeta =i\int_0^{1} tan (it)\, dt$. Now $\tan (it)=\frac {i\sinh t} {\cosh (t)}$. Make the substitution $u=\cosh t$ to evaluate the integral.
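Carrying the suggested substitution through (my completion, not stated in the answer): $\tan(it)=i\tanh t$, so $F(i)=i\int_0^1 i\tanh t\,dt=-\ln(\cosh 1)\approx-0.4338$. A numeric Simpson-rule check of the contour integral agrees:

```python
import cmath, math

# Simpson's rule for F(i) = i ∫₀¹ tan(it) dt
N = 1000                               # even number of subintervals
h = 1.0 / N
s = cmath.tan(0) + cmath.tan(1j)
for k in range(1, N):
    s += (4 if k % 2 else 2) * cmath.tan(1j * k * h)
F_i = 1j * (h / 3) * s

expected = -math.log(math.cosh(1))     # ≈ -0.433781, and purely real
assert abs(F_i - expected) < 1e-9
```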
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many positive divisors of 2000? I thought that the number of divisors of a number was the product of the indices in its factorisation, plus $2$ (for 1 and the number itself). For instance, $2000=2^{4} \cdot 5^{3}$, so it would have $4 \cdot 3 + 2 = 14$ divisors. Apparently, however, $2000$ actually has $20$ divisors. What am I doing wrong?
The factors are of the form $2^x\cdot 5^y$ where $x$ takes values from $0$ to $4$ and $y$ takes values from $0$ to $3$. Hence there is a total of $(4+1)(3+1)=20$ factors. Your method misses out numbers such as $5^y,\ y>0$ and $2^x,\ x>0$, and double-counts $2000$.
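Quick brute-force confirmation (my addition):

```python
divisors = [d for d in range(1, 2001) if 2000 % d == 0]
assert len(divisors) == 20
assert (4 + 1) * (3 + 1) == 20      # exponent-plus-one rule for 2000 = 2⁴·5³
assert 4 * 3 + 2 == 14              # the flawed count from the question
```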
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
What are relations? I understand that a function may be considered as a set of ordered pairs which relate the elements between two sets. I understand that a function is a subset of the cartesian product between the two sets and it can be defined by an equation like $y=x+1$ or $f(x)=x+1$, on a specified domain for $x$. I am struggling to understand relations though. I know a function is a relation (set of ordered pairs) and a relation between two sets is nothing more than a subset of the cartesian product, and that you could use $y=x+1$ or $y^2 + x^2 =1$ to define a relation. What I don't get is why "equals" and "less than" are referred to as relations. Is the equals relation the set of ordered pairs $(x,y)$ defined by $y=x$, on some domain for $x$? And similarly for $y<x$?
A function is a "special" relation where there are no two pairs $(x,y_1)$ and $(x,y_2)$ with $y_1 \neq y_2$. Example: the relation "father-to-child" is not a function, because a father may have more than one child. The relation "child-to-father" is a function because a child has exactly one father. "Less than" is a relation but not a function: $1 < 2$ and $1 < 3$ and ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Intro to Classical Number Theory I am having trouble understanding page 53, of A Classical Introduction to Modern Number Theory, by Kenneth Ireland and Michael Rosen. Corollary 3. $$(-1)^{(p-1)/2} = \left(\frac{-1}{p}\right)$$ Where the right-hand side is the Legendre symbol. The part I'm tripping over is the next paragraph, which states: Corollary 3 is interesting. Every odd integer has the form $4k + 1$ or $4k + 3$. (I understand that). Using this, one can restate the corollary as: $$x^{2}\equiv -1\bmod(p)\text{ has a solution} \iff p \text{ is of the form } 4k + 1$$ How did the authors go from the corollary to the restatement?
The congruence has a solution iff the Legendre symbol is equal to $1$ which is the case iff the power of $-1$ in the explicit formula for the Legendre symbol is even.
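A small brute-force check of the restated corollary (my addition): for odd primes, $x^2\equiv-1\pmod p$ is solvable exactly when $p\equiv1\pmod4$.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_sqrt_of_minus_one(p):
    # is x² ≡ −1 (mod p) solvable?
    return any((x * x + 1) % p == 0 for x in range(p))

checked = {p: has_sqrt_of_minus_one(p) == (p % 4 == 1)
           for p in range(3, 200) if is_prime(p)}
assert all(checked.values())
```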
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to solve $(x \mod 7) - (x \mod 8) = 5$? I'm trying to solve $(x \mod 7) - (x \mod 8) = 5$ but no idea where to start. Help appreciated!
If CRT = Chinese Remainder Theorem is known then we can reformulate it as $$\begin{align} x&\equiv a\!+\!5\!\!\pmod{7},\ \ \ \overbrace{ 0\le a\!+\!5 \le 6}^{\Large \iff a\ =\ 0,1}\\ x&\equiv a\quad\pmod{8},\ \ \ 0\le a\le 7\end{align}\qquad \qquad\qquad $$ Solve the $\,a=0\,$ case $\,x\equiv (5,0)\bmod (7,8),\,$ then $\,x\!+\!1\equiv (6,1)\,$ yields the $\,a=1\,$ case. W/o CRT: $\,\ \overbrace{ x-7j}^{\large x\bmod 7}-(\overbrace{x-8k)}^{\large x\bmod 8} = 5\iff 8k - 7j = 5\,\Rightarrow\, j \equiv \color{#c00}{5}\pmod{\!8},\ k\equiv \color{#c00}{5}\pmod{\!7}$ Working in the initial range $\, 0\le x\le 55\ $ and enforcing the remainder bounds $\qquad\quad \begin{align} 0\le \overbrace{x-7\cdot \color{#c00}5}^{\large x\bmod 7} \le 6\iff 35\le x \le \color{#0a0}{41}\\[.5em] 0\le \underbrace{x-8\cdot \color{#c00}5}_{\large x\bmod 8} \le 7\iff \color{#0a0}{40}\le x \le 47\end{align}$ Therefore $\, x\equiv \color{#0a0}{40},\color{#0a0}{41}\pmod{56}$
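Brute force agrees (my addition):

```python
solutions = [x for x in range(56) if x % 7 - x % 8 == 5]
assert solutions == [40, 41]
# the pattern repeats with period lcm(7, 8) = 56
assert all((x + 56) % 7 - (x + 56) % 8 == 5 for x in solutions)
```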
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show that the exact values of the square roots of $z=1+i$ are... Show that the exact values of the square roots of $z=1+i$ are $w_0=\sqrt{\frac{1+\sqrt{2}}{2}}+i\sqrt{\frac{-1+\sqrt{2}}{2}}$ and $w_1=-\sqrt{\frac{1+\sqrt{2}}{2}}-i\sqrt{\frac{-1+\sqrt{2}}{2}}$. My attempt Let $z=1+i\in \mathbb{C}$. Then $r=|z|=\sqrt{2}$. Moreover, $\theta=\tan^{-1}(1)=\frac{\pi}{4}$. Then, the polar form of $z$ is $z=\sqrt{2}(\cos\frac{\pi}{4}+i\sin\frac{\pi}{4})$. Let $w\in\mathbb{C}$ such that $w^2=z$. Then $w_k=\sqrt{\sqrt{2}}(\cos(\frac{\frac{\pi}{4}+2k\pi}{2})+i\sin(\frac{\frac{\pi}{4}+2k\pi}{2}))$ for $k=0,1$. Then the square roots are: $w_0=\sqrt[4]2(\cos\frac{\pi}{8}+i\sin\frac{\pi}{8})=1,84+0,76i.$ $w_1=\sqrt[4]2(\cos\frac{9\pi}{8}+i\sin\frac{9\pi}{8})=-1,84-0,76i.$ But here I'm stuck, because $w_0=\sqrt{\frac{1+\sqrt{2}}{2}}+i\sqrt{\frac{-1+\sqrt{2}}{2}}\not = 1,84+0,76i$
Setting $$\sqrt{1+i}=A+Bi$$ then $$1+i=A^2-B^2+2ABi$$ and we have to solve $$A^2-B^2=1$$ $$2AB=1$$
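A numeric check (my own sketch) that the radical expressions in the problem statement really do square to $1+i$, i.e. that $A^2-B^2=1$ and $2AB=1$ hold for these particular radicals:

```python
import math

A = math.sqrt((1 + math.sqrt(2)) / 2)
B = math.sqrt((-1 + math.sqrt(2)) / 2)
w0 = complex(A, B)

# w0**2 = (A^2 - B^2) + 2ABi should equal 1 + i up to rounding
assert abs(w0 * w0 - (1 + 1j)) < 1e-12
print(w0)
```

Incidentally, this gives $w_0\approx 1.0987+0.4551i$, so the decimal values in the question were miscomputed; the radical form does match $\sqrt[4]2(\cos\frac\pi8+i\sin\frac\pi8)$.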
{ "language": "en", "url": "https://math.stackexchange.com/questions/2959857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Maximise $(x+1)\sqrt{1-x^2}$ without calculus Problem Maximise $f:[-1,1]\rightarrow \mathbb{R}$, with $f(x)=(1+x)\sqrt{1-x^2}$ With calculus, this problem would be easily solved by setting $f'(x)=0$ and obtaining $x=\frac{1}{2}$, then checking that $f''(\frac{1}{2})<0$ to obtain the final answer of $f(\frac{1}{2})=\frac{3\sqrt{3}}{4}$ The motivation behind this function comes from maximising the area of an inscribed triangle in the unit circle, for anyone that is curious. My Attempt $$f(x)=(1+x)\sqrt{1-x^2}=\sqrt{(1-x^2)(1+x)^2}=\sqrt 3 \sqrt{(1-x^2)\frac{(1+x)^2}{3}}$$ By the AM-GM Inequality, $\sqrt{ab}\leq \frac{a+b}{2}$, with equality iff $a=b$ This means that $$\sqrt 3 \sqrt{ab} \leq \frac{\sqrt 3}{2}(a+b)$$ Substituting $a=1-x^2, b=\frac{(1+x)^2}{3}$, $$f(x)=\sqrt 3 \sqrt{(1-x^2)\frac{(1+x)^2}{3}} \leq \frac{\sqrt 3}{2} \left((1-x^2)+\frac{(1+x)^2}{3}\right)$$ $$=\frac{\sqrt 3}{2} \left(\frac{4}{3} -\frac{2}{3} x^2 + \frac{2}{3} x\right)$$ $$=-\frac{\sqrt 3}{2}\frac{2}{3}(x^2-x-2)$$ $$=-\frac{\sqrt 3}{3}\left(\left(x-\frac{1}{2}\right)^2-\frac{9}{4}\right)$$ $$\leq -\frac{\sqrt 3}{3}\left(-\frac{9}{4}\right)=\frac{3\sqrt 3}{4}$$ Both inequalities have equality when $x=\frac{1}{2}$ Hence, $f(x)$ is maximum at $\frac{3\sqrt 3}{4}$ when $x=\frac{1}{2}$ However, this solution is (rather obviously I think) heavily reverse-engineered, with the two inequalities carefully manipulated to give identical equality conditions of $x=\frac{1}{2}$. Is there some better or more "natural" way to find the minimum point, perhaps with better uses of AM-GM or other inequalities like Jensen's inequality?
You can solve the problem using geometry: $(x+1)\sqrt{1-x^2}$ is the area of the triangle with vertices $$(-1, 0), (x, \sqrt{1-x^2}), (x, -\sqrt{1-x^2}).$$ This triangle, inscribed in the unit circle, has maximal area if and only if it is an equilateral triangle.
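A dense grid search (my own check, independent of the geometric argument) agrees that the maximum is $\frac{3\sqrt3}{4}$ at $x=\frac12$:

```python
import math

f = lambda x: (1 + x) * math.sqrt(1 - x * x)
xs = [i / 100000 for i in range(-100000, 100001)]
best_x = max(xs, key=f)

assert abs(best_x - 0.5) < 1e-4
assert abs(f(best_x) - 3 * math.sqrt(3) / 4) < 1e-9
print(best_x, f(best_x))
```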
{ "language": "en", "url": "https://math.stackexchange.com/questions/2960248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Is this a correct interpretation of the fundamental theorem of algebra? I tried reading the Wikipedia page but it's stated in terms of complex roots, and I don't really understand how that relates to the following proposition: if a real valued polynomial: $$\sum_{i=0}^n a_i x^i = 0$$ for all $x \in \mathbb{R}$, is it right to say the Fundamental Theorem of algebra implies that $a_i = 0$ for all $i$? This was stated off handedly during a recent talk and I am not sure if I heard it correctly, since when I look up the theorem it is difficult for me to understand the way that page is written
There are two questions in your post. The answers to them are different. Is this a correct interpretation of the fundamental theorem of algebra? No, according to the standard meaning of "fundamental theorem of algebra," which says every nonconstant polynomial with real coefficients has at least one complex root. That theorem is different and harder than the theorem you need for the rest of your post. Is it right to say the Fundamental Theorem of algebra implies that $a_i=0$ for all $i$? Yes, the fundamental theorem implies your result. But your result follows from a weaker theorem that holds over fields that aren't necessarily algebraically closed: every polynomial of degree $n\geq 1$ has at most $n$ roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2960359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is this the correct interpretation of the differential? I am going through Tenenbaum and Pollard's book on differential equations and they define the differential $dy$ of a function $y = f(x)$ to be the function $$ (dy)(x,\Delta x) = f'(x) \cdot (d\hat{x})(x, \Delta x) $$ where * *$\Delta x$ is a variable denoting an increment along the $x$-coordinate *$\hat{x}$ denotes the function $\hat{x}(x) = x$, and *$d\hat{x}$ is the differential of the function $\hat{x}$. I've never seen differentials crisply defined this way. They're usually described as "small quantities" or just avoided in favor of definitions of the derivative in terms of limits. Anyway, this definition makes good sense to me. Is this the accepted way to think of them -- i.e. as functions?
Yes, differentials and derivatives are 2 ways of doing the same thing to a function. The derivative operator $\frac{d}{dx}$ hits functions and $\frac{d}{dx}f$ is itself a function. The differential operator $d$ hits functions and $df$ is itself a function. Therefore with this definition of $df$, you can think of derivatives as fractions because the fraction $\frac{df}{d\hat{x}}$ is precisely the derivative $\frac{d}{dx}f$ The only difference is that $\frac{d}{dx}f$ has the interpretation of a rate of change, while $df$ has the interpretation of the change in height of the tangent line. Note that the function $\hat{x}$ is also called the identity function. He goes on to say that over time $d\hat{x}$ turned into $dx$. And that we have to remember that $dx$ stands for the differential of the identity function (remember it's only functions that we can take differentials of). However the identity function is the one function in which the word differential becomes synonymous with increment/$\Delta x$ (large or small)/infinitesimal. Notice that when you are asked to find the antiderivative of a function, the symbol $\int \dots dx$ is used. Therefore $\int g(x) \;dx$ is asking for a function $G$ whose $G' = g$. However, imagine grouping $g(x)dx$ together, leaving $\int$ by itself. $g(x)dx$ takes the form of a differential of some other function. So, in order to get the same answer as with antiderivatives, define $\int$ as the operator that produces the anti-differential of whatever follows $\int$. That function is still $G$! Because $dG = g\;dx$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2960614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Binomial Distribution: Stochastic Dominance Suppose * *$X_1 \sim \operatorname{Bin}(N_1,p)$ and $X_2 \sim \operatorname{Bin}(N_2,p)$ *$N_2>N_1$ Does $X_2$ first-order stochastically dominate $X_1$?
Yes! One proof I know is via coupling. Let $X_{1},X_{2},\ldots$ be an IID sequence of Bernoulli random variables with $P\left[X_{i}=1\right]=p$. Also, we have $S_{n}:=\sum_{i=1}^{n}X_{i}\sim\text{Bin}\left(n,p\right)$. Then $S_{n+1}\geq S_{n}$. Hence, $P\left[S_{n+1}>s\right]=P\left[\sum_{i=1}^{n+1}X_{i}>s\right]\geq P\left[\sum_{i=1}^{n}X_{i}>s\right]=P\left[S_{n}>s\right]$, which establishes the claim. See Theorem 4.23 and Example 4.24 in these notes. Alternatively, you may relate the Binomial distribution to the distribution of order statistics and use the known stochastic ordering relation with respect to $n$ (see e.g. this book).
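The survival-function version of the claim can also be spot-checked numerically (a sketch of mine using `math.comb`; the parameter values are arbitrary):

```python
from math import comb

def surv(n, p, s):
    """P[Bin(n, p) > s]."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(s + 1, n + 1))

p = 0.3
for n in range(1, 12):
    for s in range(n + 1):
        # Bin(n + 1, p) first-order stochastically dominates Bin(n, p)
        assert surv(n + 1, p, s) >= surv(n, p, s) - 1e-12
print("dominance verified for n = 1..11, p = 0.3")
```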
{ "language": "en", "url": "https://math.stackexchange.com/questions/2960763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the probability that some random event won't happen in the next 10 minutes given it happened exactly twice in the last 120 minutes? The title says pretty much all. The event is fully random, it has the same chance any minute, it can happen multiple times at the same minute. How is it possible to calculate that? I made a simulation code in c++, which says 78%, is that good answer? https://gist.github.com/bakaiadam/4f4732f4147fc3a5c68f121bf57b919f Edit: since people say it's not clear what I mean and I'm not really familiar with random distribution types, I tell you the concrete example that I was thinking about: My idea came from seeing group of people passing in a forest path. Let's say i have been there for 2 hours and saw 2 group of people. What is the probability that i will see at least 1 group of people in the next 10 minutes? Obviously there can be more than one group of people passing at the same minute, and obviously their chance to be there is independent from each other.
I'm not really familiar with random distribution types You need to get a grasp of probability distributions to understand the problem and solution. You can think of the time between events (i.e. between one group of people passing through the path and the next) as a random variable. It could be 1 minute, 8 minutes, 10 seconds, etc. To get a handle on the randomness of this variable you need a mathematical function that gives you the probability that the variable is less than some value, greater than some value, in some interval, etc. The exponential distribution is the mathematical function that gives us that facility. What is the probability that some random event won't happen in the next 10 minutes given it happened exactly twice in the last 120 minutes? In your example you say that the event occurred twice in the last 120 minutes, so the average time between events is $120/2 = 60$ minutes, i.e. the rate is 1 event per 60 minutes. Let $X$ denote the waiting time until the next event. The desired probability is $P[X>x] = e^{-\lambda x}$, where $\lambda$ is the rate parameter, here $1/60$, i.e. the number of events happening per minute, and $x$ is the waiting time in minutes. Plugging in the numbers you get $P[X>10] = e^{-10/60} \approx 0.846$.
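A short simulation (my sketch; the seed and trial count are arbitrary choices) agrees with $e^{-10/60}\approx 0.846$:

```python
import math
import random

random.seed(0)
rate = 1 / 60          # one event per 60 minutes on average
trials = 200_000

# fraction of exponential inter-event times exceeding 10 minutes
frac = sum(random.expovariate(rate) > 10 for _ in range(trials)) / trials
assert abs(frac - math.exp(-10 / 60)) < 0.01
print(frac, "vs", math.exp(-10 / 60))
```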
{ "language": "en", "url": "https://math.stackexchange.com/questions/2960896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is it allowed to solve this inequality $x|x-1|>-3$ by dividing each member with $x$? Is it allowed to solve this inequality $x|x-1|>-3$ by dividing each member with $x$? What if $x$ is negative? My textbook provides the following solution: Divide both sides by $x: $ $\frac { x | x - 1 | } { x } > \frac { - 3 } { x } ; \quad x \neq 0$ Simplify: $| x - 1 | > - \frac { 3 } { x } ; \quad x \neq 0$ Edit: provided textbook's solution
For $x\geq0$ this inequality is always true. Assume that $x<0$, so $x=-y$ for some positive $y$ and we get $$y|\;\underbrace{y+1}_{>0}\;|<3\implies y(y+1)<3 ...$$
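Finishing the computation, $y(y+1)<3$ gives $y<\frac{-1+\sqrt{13}}{2}$, so the original inequality holds exactly for $x>\frac{1-\sqrt{13}}{2}\approx-1.303$. A numeric check of that boundary (my own sketch):

```python
import math

boundary = (1 - math.sqrt(13)) / 2   # ≈ -1.3028
f = lambda x: x * abs(x - 1)

# strictly above the boundary the inequality holds, strictly below it fails
for x in [boundary + 1e-6, -1.0, 0.0, 0.5, 5.0]:
    assert f(x) > -3
for x in [boundary - 1e-6, -2.0, -10.0]:
    assert f(x) <= -3
print("inequality holds exactly for x >", boundary)
```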
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Number of positive integer solutions of inequality I was preparing a class on polynomials (high school level). The handbook I use always contains some questions from math olympiads. The following question is asked: What is the number of positive integer solutions of the following inequality: $$(x-\frac{1}{2})^1(x-\frac{3}{2})^3\ldots(x-\frac{4021}{2})^{4021} < 0.$$ I found this number to be 1005. Here is my reasoning: For a positive integer to satisfy this inequality, an odd number of factors should be negative. There are $\frac{4021 +1}{2} =2011$ distinct factors. Each distinct factor appears an odd number of times, hence we can forget about the exponents. We also only need to consider positive integers smaller than $\frac{4021}{2} = 2010,5$, for any bigger number would make all factors positive. In order for the product to be negative, we need to look for positive integers making an even amount of the first factors positive. This happens if we pick even integers: the constants $\frac{1}{2}, \frac{3}{2},...$ are one unit from the next, so if we pick an even integer, there are an even amount of such constants smaller than that integer. Hence we look for the number of positive integers smaller than $2010,5$, which is $\frac{2011 - 1}{2} = 1005$. Is this solution correct? (sorry for the bad english by the way. I hope this does not make my explanation unclear). Note I am also interested in other (quicker, slicker) solutions.
This may look shorter, but then again it is not really different from your argument: The polynomial changes its sign at $\frac12, \frac32,\ldots, \frac{4021}2$ and is positive as $x\to+\infty$, hence positive for $x>\frac{4021}2$. In the $2010$ intervals determined this way, the polynomial is alternatingly positive and negative, hence negative in $1005$ of them. Each such interval contains exactly one positive integer (whereas all other positive integers are $>\frac{4021}2$), hence the answer is $1005$.
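The count can be replayed by brute force over signs (my sketch; since every exponent $2k-1$ is odd, only the sign of each base matters):

```python
count = 0
for n in range(1, 2100):    # integers above 2010 make every factor positive
    negatives = sum(1 for k in range(1, 2012) if 2 * n < 2 * k - 1)
    if negatives % 2 == 1:  # odd number of negative factors => product < 0
        count += 1
assert count == 1005
print(count)
```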
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
GRE question: maximize $\int_{0}^{3}f(x)dx$ where $f$ is a differentiable function such that $f(3)=7$ and $f'(x) \geq x$ for all $x>0$ Let $f$ be a differentiable real valued function such that $f(3)=7$ and $f'(x) \geq x$ for all positive $x$. What is the maximum possible value of $$\int_{0}^{3}f(x)dx$$ The answer is 12.. so far what I have done is: $$F(x)=\int_{0}^{x}f(x)dx = f'(x) - f'(0) \geq x - f'(0)$$ so to maximize $F(x)dx$ we take the derivative and set it equal to 0 : $$\frac{d}{dx} \int_{0}^{x} f(x) dx = f(x) - f(0) = 0 $$ so $f(x) = f(0)$ I have no idea what to do next or if I'm even doing it correctly. Thanks in advance.
Between $x=0$ and $x=3$, $f'(x)>0$ so $f$ increases. So we've got an increasing function on $[0,3]$ and we want to maximize its integral. Since the function is fixed to go through $(3,7)$, the steeper it is the lower the value of the integral. (As an example, see $f_1$ and $f_2$ below, $f_2$ is steeper so it has a smaller integral on $[0,3]$.) We get the best results if $f'$ is small, so we want strict equality: $f'(x) = x$. This means $f(x) = x^2 /2 + C$. To have $f(3) = 7$ we need $C=5/2$. Then you get your answer: $$ \int_0^3 \frac{x^2}{2} + \frac{5}{2} dx = 12.$$
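The final value checks out exactly with rational arithmetic (my own check):

```python
from fractions import Fraction

# antiderivative of x^2/2 + 5/2 is x^3/6 + 5x/2
F = lambda x: Fraction(x) ** 3 / 6 + Fraction(5, 2) * x
assert F(3) - F(0) == 12
print(F(3) - F(0))  # 12
```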
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to determine if a surjective homomorphism exists between two groups? My question sheet asks about whether a surjective homomorphism exists between various symmetric groups and various $Z_n$ groups, for example between $S_3$ and $Z_3$, or $A_4$ and $Z_3$. To be honest, I don't really know where to start at all - I've been racking my brain for some property of homomorphisms to do with cardinality, or something like that, but I basically have no idea what I'm doing. Any pointers?
A good place to start would be the first isomorphism theorem: if $ f: G\rightarrow H$ is a homomorphism, then $G/\ker f\simeq \operatorname{Im} f$. So if $f : G \rightarrow H$ is a surjective homomorphism, then $G / \ker f \simeq H$ since surjectivity implies $\operatorname{Im} f = H$. So to determine whether such a homomorphism exists, you should determine whether or not there exists a normal subgroup $G’ \leq G$ such that $G/ G’ \simeq H$. If such a $G’$ exists, $\varphi : G/G’ \rightarrow H$ is an isomorphism, and $\pi : G \rightarrow G/G’$ is the quotient map, then $\varphi \circ \pi$ is a surjective homomorphism from $G$ to $H$. As Randall pointed out in the comments below, we can use Lagrange's theorem to narrow our search even more. Since $|H | = | G/G'|= |G| / |G'|$, we see that $|G| = |H| \cdot |G'|$ (if we keep with the notation from above). Hence it is necessary (but not sufficient) for $|H|$ to divide $|G|$ and for $G$ to possess a subgroup of index $|H|$ in order for there to exist a surjective homomorphism from $G$ to $H$. If either of these conditions is not met, then such a homomorphism does not exist.
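For the concrete pair $S_3\to\mathbb Z_3$ from the question, the conclusion can even be verified by exhaustive search over all $3^6$ value tables (a brute-force sketch of mine; the function name is made up). It agrees with the criterion above: $S_3$ has no normal subgroup of order $2$ (index $3$), so no surjection onto $\mathbb Z_3$ exists.

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))                   # the 6 elements of S_3
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

def exists_surjection():
    for values in product(range(3), repeat=6):      # candidate maps S_3 -> Z_3
        h = dict(zip(S3, values))
        if set(values) == {0, 1, 2} and all(
            h[compose(p, q)] == (h[p] + h[q]) % 3 for p in S3 for q in S3
        ):
            return True
    return False

assert not exists_surjection()
print("no surjective homomorphism S_3 -> Z_3")
```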
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving that $\sum\limits_{n=1}^\infty\frac{a_n}{s_n^2}$ converges Let $a_n>0\;$ ($n=1,2,...\,$) with $\sum\limits_{n=1}^\infty a_n$ divergent and $s_n =\sum\limits_{k=1}^na_k$. Prove that $\sum\limits_{n=1}^\infty\dfrac{a_n}{s_n^2}$ converges. Proof: For all $n \ge 2$, we have $\dfrac{a_n}{s_n^2} \le \dfrac1{s_{n-1}} -\dfrac1{s_n}$ and $\sum\limits_{n=2}^{k} \dfrac{a_n}{s_n^2} \le \sum\limits_{n=2}^{k} \left(\dfrac{1}{s_{n-1}} - \dfrac{1}{s_n} \right)$. Now $\sum\limits_{n=2}^k \left (\dfrac{1}{s_{n-1}} - \dfrac{1}{s_n} \right) = \dfrac{1}{s_1} - \dfrac{1}{s_k}$ converges to $\dfrac{1}{s_1}$. This follows because $\sum\limits_{n=1}^\infty a_n$ diverges and $\dfrac{1}{s_n} \to 0$ as $ n \to \infty$. Thus, by the comparison test, $\sum\limits_{n=1}^{\infty} \dfrac{a_n}{s_n^2}$ converges. Is this proof correct? $\dfrac{a_n}{s_n^2} \le \dfrac1{s_{n-1}} -\dfrac1{s_n}$ Proof: Let $n \ge 2$. $ s_{n-1} \le s_{n}$ $\Leftrightarrow \frac{1}{s_{n}} \le \frac{1}{s_{n-1}} $ $ \Leftrightarrow \frac{1}{s_{n}^2} \le \frac{1}{s_{n}s_{n-1}}$ $ \Leftrightarrow \frac{a_{n}}{s_{n}^2} \le \frac{a_{n}}{s_{n}s_{n-1}} = \frac{s_{n} - s_{n-1}}{s_{n}s_{n-1}}$ $\Leftrightarrow \frac{a_{n}}{s_{n}^2} \le \frac{1}{s_{n-1}} - \frac{1}{s_{n}} $
Your proof is correct if $a_n$ is nonnegative since it then follows that $$\frac{a_n}{S_n^2} \leqslant \frac{a_n}{S_nS_{n-1}}= \frac{S_n - S_{n-1}}{S_nS_{n-1}}= \frac{1}{S_{n-1}}- \frac{1}{S_n}$$ and the RHS has a telescoping sum. You also need to make it clear that $1/S_n \to 0$ as $n \to \infty$ to argue that the telescoping sum converges to $1/S_1$. Without nonnegativity it is another story.
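A numeric illustration of the telescoping bound (my own sketch): take the divergent series $a_n=1$, so $s_n=n$; the partial sums of $\sum_{n\ge 2} a_n/s_n^2$ indeed stay below $1/s_1=1$.

```python
N = 10_000
a = [1.0] * N            # divergent series with s_n = n
s = a[0]
partial = 0.0
for n in range(1, N):
    s += a[n]
    partial += a[n] / s ** 2

# telescoping bound: sum_{n >= 2} a_n / s_n^2 <= 1 / s_1 = 1
assert partial < 1.0
print(partial)
```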
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is Monte Carlo integration randomly sampled? As I understand, Monte Carlo integration uses stochastic sampling to sample points. Obviously, you would need many samples to reach an accurate result, but why does this process have to be random? Would using a symmetrical grid of very dense samples (e.g. a 1 million by 1 million grid) achieve the same goal? Are there any benefits to random sampling?
I just wanted to revisit this question to provide what I feel is a more intuitive explanation of why a randomly sampled algorithm is faster at providing a result. Look at the two images below. Neither shows the complete, full-quality image; both are approximations of the actual image (which is analogous to the graph of some function). You can view the blurred image as the randomly sampled one and the mostly black image as the uniformly distributed one. The same number of sample points can be used for computing both of the images below. However, by randomly sampling those few points, we get an idea of what we are approximating sooner. The same cannot be said of the uniformly distributed image, where all the samples were "spent" showing, in full detail, only a small portion of the overall image. I believe this is why, intuitively, random sampling can give us a better result.
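The picture analogy can be made quantitative with a tiny experiment (mine; the seed and the target integral are arbitrary): a random sample and a regular grid of the same size both estimate $\int_0^1 x^2\,dx = 1/3$, and the random estimate is already informative without covering any sub-interval exhaustively.

```python
import random

random.seed(0)
n = 100_000
mc = sum(random.random() ** 2 for _ in range(n)) / n      # random sampling
grid = sum(((i + 0.5) / n) ** 2 for i in range(n)) / n    # regular midpoint grid
assert abs(mc - 1 / 3) < 0.01
assert abs(grid - 1 / 3) < 1e-6
print(mc, grid)
```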
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convergence in distribution of the sum of two dependent random variables I have the following question about the limiting distribution of the sum of two random variables say $Z_n = X_n+Y_n.$ I know the following: * *Conditioned on $X_n,$ $Y_n$ has a CLT i.e., $$\mathbb P (Y_n \le z | X_n) \to \phi(z)$$ where $\phi(z)$ is the cdf of a standard gaussian independent of $X_n.$ * *Also, $$\mathbb P (X_n \le z) \to \phi(z)$$ From these two facts can I conclude $Z_n$ converges to $\mathcal{N}(0,2)$ in distribution?
Use characteristic functions. $Ee^{it(X_n+Y_n)} =E\left[e^{itX_n}\,E(e^{itY_n}|X_n)\right]$. Note that $E(e^{itY_n}|X_n) \to \phi (t)$ uniformly, where $\phi(t)=e^{-t^2/2}$ is the characteristic function of the standard normal, and $E e^{itX_n} \to \phi (t)$. It follows easily from these that $Ee^{it(X_n+Y_n)} \to \phi (t)^{2}=e^{-t^2}$, the characteristic function of $\mathcal{N}(0,2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2961912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Clarification of definition of quotient group - Does $Na, a \in G$ mean "cosets of $N$ in $G$"? The definition of quotient group from mathworld.wolfram.com is: For a group $G$ and a normal subgroup $N$ of $G$, the quotient group of $N$ in $G$, written $G/N$ and read "G modulo N", is the set of cosets of $N$ in $G$. I don't understand the part "set of cosets of $N$ in $G$". Later it continues: The elements of $G/N$ are written $Na$. From this, I conclude that the expression "set of cosets of $N$ in $G$" means the set of cosets which have elements $Na, a \in G$. Is this correct?
The notation "$Na$" means exactly "the coset of $a$ in $G/N$." So $G/N$ is exactly the collection of cosets $\{Na\mid a\in G\}$. Note that this set notation does not irredundantly list the cosets. I don't think "the set of cosets which have elements $Na, a \in G$" is a valid way of saying it. The cosets don't have $Na$ as elements, each coset is an $Na$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distance of a plane I have posted this question in Stack Overflow programming forum. Someone there feels it might be more suited to Mathematics. I have to warn you I am rusty in math and was terrible in Algebra. Using Accord Framework plane class. The help on this page specifies a constructor that creates a plane a certain distance from the origin. I am getting numbers that do not make sense. If I create a plane using the vector 5,5,5 at a distance of 1 then I would expect the plane to be 1 away from the origin: Plane p = new Plane(new Vector3(5,5,5),1); Console.WriteLine("{0}", p.DistanceToPoint(new Point3(0,0,0))); Instead I get the value of 0.115470053837925. The only way I get a distance that makes sense is to create a plane using one of the three constructors below: Plane p = new Plane(new Vector3(0,0,1),1); Plane p = new Plane(new Vector3(0,1,0),1); Plane p = new Plane(new Vector3(1,0,0),1); When using any of these constructors I get a distance of 1 from the origin. What am I missing here?
Either the API or the documentation is wrong. Regardless of the other arguments, Offset is deemed to specify the distance from the plane to the origin, and DistanceToPoint should return the same value. I strongly suspect that Offset is in reality the parameter d of the equation, and the normal vector is never normalized. Indeed, $$5x+5y+5z+1=0$$ defines a plane at distance $\dfrac1{5\sqrt3}$ from the origin. Try with Vector3 v = new Vector3(5, 5, 5); v.Normalize(); Plane p = new Plane(v, 1);
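The suspicion checks out numerically: the point-plane distance formula applied to $5x+5y+5z+1=0$ reproduces exactly the value the asker observed (my own check, independent of the Accord API):

```python
import math

n = (5.0, 5.0, 5.0)   # un-normalized normal vector
d = 1.0               # "Offset" read as the plane equation's constant term
distance = abs(d) / math.sqrt(sum(c * c for c in n))

# matches the asker's DistanceToPoint output
assert abs(distance - 0.115470053837925) < 1e-12
print(distance)  # 1/(5*sqrt(3)) ≈ 0.11547
```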
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this a proof of "$\log(a^x) = x\log(a)$"? I'm not sure if it's right, but could this be proof of "$\log(a^x) = x\log(a)$"? For example $\log(1000) = \log(10*100)=\log(10) + \log(100) = \log(10) + \log(10*10) = \log(10) + \log(10) + \log(10) = 3\log(10)$
This is not a proof; your statement is an example. In order to prove a relationship like \begin{equation} \tag{1} \label{1} \log(a^x) = x\log(a) \end{equation} you have to take into account all possible values of $x$ and $a$ for which \eqref{1} should hold. A simple version of the proof you are looking for can be done by using your idea. Let $x \in \mathbb{N}, a \in (0,\infty)$. By applying the log-identity $\log(a\cdot b) = \log(a) + \log(b)$ repeatedly we obtain: \begin{align} \log(a^x) = \log(a \cdot \dots \cdot a) = \log(a) + \log(a \cdot \dots \cdot a) = \dots = x\log(a) \end{align} An example can suffice as a proof if it is in the form of a counterexample. Consider the following statement. $$0 < \log(a) \quad \text{for all } a \in [1,\infty)$$ I can prove this statement false by providing the counterexample $a = 1$, since $\log(1) = 0$.
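The identity for natural exponents, as proved above, also survives a numeric spot check (mine; sample values arbitrary):

```python
import math

for a in (0.5, 2.0, 3.7, 10.0):
    for x in range(1, 8):
        assert math.isclose(math.log(a ** x), x * math.log(a))
print("log(a^x) == x*log(a) on the sampled values")
```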
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
A question about how to show two polynomials are not equal Let $f\in k[x_i,i=0,\ldots ,n]$ be a a polynomial which contains $x_0$ (i.e. polynomial like $f=x_1$ is not allowed), where $k$ is a field with characteristic $0$. We define a map $$Q_a: k[x_i,i=0,\ldots ,n] \to k[x_i,i=1,\ldots ,n]$$ which sends $f(x_0,\ldots,x_n)$ to $f(a_1x_1+\ldots+a_n x_n, x_1,\ldots,x_n)$, where $a=(a_1,\ldots a_n)$ is a parameter. My question is if $a\neq a'$, is it true that $Q_a(f)\neq Q_{a'}(f)$? I think this should be true, but I did not find a good way to show it.
Start from the leading coefficient: if the two polynomials were equal, then the coefficients of the highest power would have to be equal, then those of the second-highest power, and so on. As for the proof, you may consider using induction on the degree of the polynomials.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Create a model from the given text (linear programming/optimization) I'm practicing for a linear programming test, and here is a task; I'd like to see whether I did it correctly and, if not, how to do it correctly. We need to create a mathematical model whose requirements are represented by linear relationships. A company wants to produce two different kinds of cars, $A$ and $B$. The production process consists of two stages: installation and finishing. For the installation of $A$, the company needs $4$ hours, and for the installation of $B$ it needs $6$ hours. The finishing requires $6$ hours for car $A$ and $3$ hours for $B$. The profit for each car $A$ is $4000$ USD and for $B$ is $3000$ USD. Depending on other projects, the company has $720$ hours available for installation and $480$ hours for finishing within a production cycle. The management requests, for the duration of a cycle, at least $20$ cars of kind $A$ and at least $30$ cars of kind $B$. How many cars of each kind does the company need to produce within one cycle so that no production constraint is violated, the requirements of the management are satisfied, and the maximum profit is achieved? Due to the long text, I'd like to keep it as short as possible. I have the following model: $4000x_1 +3000x_2 \rightarrow \min$ $4x_1+6x_2 \geq 720$ $6x_1+3x_2 \geq 480$ $x_1 \geq 20$ $x_2 \geq 30$
The company wants to maximize the profit: $4000x_1 +3000x_2 \rightarrow \color{red}{\max}$ The company has (at most) 720 hours for the installation: $4x_1+6x_2 \color{red}{\leq} 720$ ... and (at most) 480 hours for the finishing within a production cycle: $6x_1+3x_2 \color{red}{\leq} 480$ The other two constraints look O.K.: $x_1 \geq 20$ $x_2 \geq 30$ The linear program can be solved graphically or with the Simplex algorithm.
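Since cars are whole units, the corrected model can be checked by brute force over the feasible integer points (my own sketch; the continuous optimum happens to be integral here):

```python
best, arg = -1, None
for x1 in range(20, 81):          # 6*x1 <= 480 forces x1 <= 80
    for x2 in range(30, 121):     # 6*x2 <= 720 forces x2 <= 120
        if 4 * x1 + 6 * x2 <= 720 and 6 * x1 + 3 * x2 <= 480:
            profit = 4000 * x1 + 3000 * x2
            if profit > best:
                best, arg = profit, (x1, x2)
assert (arg, best) == ((30, 100), 420000)
print(arg, best)  # (30, 100) 420000
```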
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are Baire class functions closed under pointwise limits? I am confused about the notion of Baire functions (real or complex valued) on a compact space $X$. The set of Borel functions on $X$, $Bo(X)$ is defined to be the set of those functions $f$ for which $f^{-1}(U)$ is a Borel set, when $U$ is open. On the other hand, the Baire functions of class $\alpha$, $Ba_{\alpha}(X)$, where $\alpha$ is a ordinal less than $\omega_1$, are defined iteratively as the pointwise limit of functions in the previous Baire classes, with $Ba_{0}(X)=C(X)$, continuous functions on $X$ (see here). Then the set of all Baire functions is defined to be $Ba(X):=\cup_{\alpha<\omega_1}Ba_\alpha$ Clearly, $Bo(X)\supset Ba(X)$. Now, this article assumes my definition of Baire functions, and this article (see "Comparison with Baire functions"), says that the Baire functions are the smallest set of those real functions closed under pointwise limits and containing continuous functions. This however, seems to imply that $Ba(X)$ is closed under pointwise limits. Is this obvious? I cannot seem to show this for $Ba(X)$. (If we restrict ourselves to bounded functions, is the statement true for uniform limits of Baire functions?) Any help is much appreciated!
It's quite easy, really: let $f_n$ be a sequence of Baire functions (so from $\operatorname{Ba}(X)$) tending pointwise to $f$. By definition for each $n$, $f_n \in \operatorname{Ba}_{\alpha_n}(X)$ for some $\alpha_n < \omega_1$. In $\omega_1$, every countable set has an upper bound so there is some $\beta_0 < \omega_1$ such that for all $n$ we have $\alpha_n < \beta_0$ and so, as $$\forall \gamma < \delta < \omega_1: \operatorname{Ba}_\gamma(X) \subseteq \operatorname{Ba}_\delta(X)$$ (the $\operatorname{Ba}_\alpha(X)$ are easily seen to be increasing in $\alpha$) we know that all $f_n \in \operatorname{Ba}_{\beta_0}(X)$. So by definition $f \in \operatorname{Ba}_{\beta_0 + 1}(X)$ as a pointwise limit of functions from the previous level (or in your definition, it's already in $\operatorname{Ba}_{\beta_0}(X)$). And so $f \in \operatorname{Ba}(X)$. Now you can perhaps see better why we stop at stage $\omega_1$ to get "countable closure properties". The Borel hierarchy works the same way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does anybody know the name of the discrete distribution with these properties? I'm looking for a distribution which has the following properties. I don't know what it's called so I'm having a hard time finding references to it. Properties: * *Domain is over a finite range of integers (distribution is discrete and truncated) *Range is over the reals *The sum over the distribution is equal to 1 *The first and second moments (mean and variance) are defined and are independent of one another *The entropy of the distribution is maximized given the above constraints. A normal distribution would fit these criteria if the domain was over all of the reals. Likewise, a truncated normal distribution would fit if the domain were in a range of reals. The binomial distribution can't be right because there's only one free parameter p which affects both the mean and variance. Likewise the hyper-geometric distribution doesn't fit either. Does anybody know if this distribution has a name?
The solution is again a discrete, truncated Gaussian. With Lagrange multipliers, the objective function is $$ \sum_n p_n\log p_n+\lambda\sum_np_n+\mu\sum_nnp_n+\nu\sum_nn^2p_n\;. $$ Setting the derivative with respect to $p_i$ to zero yields $$ \log p_i+1+\lambda+\mu i+\nu i^2=0\;, $$ which yields a Gaussian, with the three Lagrange multipliers determined by the three (transcendental) conditions given by the normalization, mean and variance.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $a, b, c \in \mathbb{R^+}$ and $abc=8$ Prove that $\frac {ab+4}{a+2} + \frac {bc+4}{b+2} + \frac {ca+4}{c+2} \ge 6$ Let $a, b, c \in \mathbb{R^+}$ and $abc=8$. Prove that $$\frac {ab+4}{a+2} + \frac {bc+4}{b+2} + \frac {ca+4}{c+2} \ge 6$$ I have attempted this question multiple times, and the only method I can think of is adding up the fractions and expanding, but it took too long and I eventually got stuck. Is there any other method I can use for this question?
Let $a=\frac{2y}{x}$ and $b=\frac{2z}{y}$, where $x$, $y$ and $z$ are positives. Thus, $c=\frac{2x}{z}$ and by AM-GM we obtain: $$\sum_{cyc}\frac{ab+4}{a+2}=\sum_{cyc}\frac{\frac{4z}{x}+4}{\frac{2y}{x}+2}=2\sum_{cyc}\frac{z+x}{x+y}\geq6\sqrt[3]{\prod_{cyc}\frac{z+x}{x+y}}=6.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2962974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Question about commutator of normal subgroups Let $\mathbf G$ be a group and $H, K \trianglelefteq \mathbf G$ ($H$ and $K$ are normal subgroups of $\mathbf G$). It follows that $[H,K]$, the subgroup of $\mathbf G$ generated by elements of the form $h^{-1}k^{-1}hk$, with $h \in H$ and $k \in K$ is also a normal subgroup of $\mathbf G$. Morevover, $[H,K] \subseteq H \cap K$. Suppose $a,b,c,d \in G$ are such that: $a^{-1}b \in [H,K]$ (whence $a^{-1}b \in H \cap K$), $a^{-1}c, b^{-1}d \in H$, $c^{-1}d \in K$. Does it follow that $c^{-1}d \in [H,K]$? It certainly follows that $c^{-1}d \in H \cap K$, since $$c^{-1}d = (c^{-1}a)(a^{-1}b)(b^{-1}d)$$ and each of the factors in parentheses is in $H$, and by hypothesis, $c^{-1}d \in K$. Since each of $H,K$ and $[H,K]$ is normal it has an associated congruence, say \begin{align} \alpha &= \{ (x,y) \in G^2 : x^{-1}y \in H \},\\ \beta &= \{ (x,y) \in G^2 : x^{-1}y \in K \},\\ \gamma &= \{ (x,y) \in G^2 : x^{-1}y \in [H,K] \}. \end{align} The following diagram is a tentative of illustrating the problem, in this setting. Of course it follows that $(a,b) \in \alpha$ (since $\gamma \leq \alpha$), whence $(c,d) \in \alpha$ (by transitivity), and therefore $(c,d) \in \alpha \wedge \beta$ (which is equivalent to the above observation that $c^{-1}d \in H \cap K$). So the problem in the setting of congruences is: does it follow that $(c,d) \in \gamma$?
Let $G=H=K=\langle t \rangle$ where $t$ is of order $4$. Let $a=b=1$, $c=t^{-1}$, $d=t$. Then $a^{-1}b=1\in [H,K]$. Also $a^{-1}c=t^{-1}\in H$, and $b^{-1}d=t\in H$. Then $c^{-1}d=t^2\in K$. But $c^{-1}d\not=1$, so $c^{-1}d\not\in [H,K]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Projection definition in Linear Algebra Why is a mathematical projection defined as $P^2 = P$? I understand that it might be because once a vector has been projected onto a space, if projected again, it should give the same thing. Is there anything more to it?
There's little more. If $P$ is a projection and $w\in\operatorname{Im}P$, we want to have $P(w)=w$. This is equivalent to the assertion $$(\forall v\in V):P\bigl(P(v)\bigr)=P(v),$$ and this, in turn, is equivalent to $P^2=P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Implied meaning of lower case latin letters $p$, $q$, $u$ etc. in probability? Are there any generalizations that should be assumed for (lower case) $p$, $q$ and $u$ in probability theory? For example, $q$ is often assumed to signify, in relation to $p$, that $$ q=1-p. $$ Is there a standard implied meaning/relation that should be made (unless stated otherwise) when encountering these lower case letters and and if so, what are the most common ones and their meaning/relation in a probabilistic setting? EDIT: To clarify my question, I am interested in the implied meaning and/or relation of the lower case letters from the modern Latin alphabet such as $u$, not general notation practices with Greek symbols etc, in probability theory.
In general, one should not rely on notation to be common across the literature, but nonetheless conventions exist. Here are a few that I am aware of: General * *$i,j,n,m,k,l \in \mathbb{N}$ indices Probabilistic Setting * *$\mu = \mathbb{E}[X]$: mean of a random variable $X$ *$\sigma^2 = \text{Var}[X]$: variance of a random variable $X$ (so $\sigma$ denotes its standard deviation) *$\Sigma = \text{Cov}(X)$: covariance matrix of a multivariate random variable $X$ *$\rho = \text{corr}(X,Y)$: correlation *$r$: sample Pearson correlation coefficient *$\theta$: parameter of a distribution *$p$: probability mass function or probability density *$f$: probability density *$F$: cumulative distribution function *$\phi$: characteristic function Note: The letter $i$ is a good example of notation being used in different settings. It is often used to denote indices, but in a probabilistic setting, in particular when using characteristic functions, it refers to the imaginary unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Strange succession An ant starts from the origin of a Cartesian plane and makes $n \ge 2$ steps of length $d_1, d_2, \cdots, d_n$. Is there a condition (necessary and sufficient) on the $d_i$'s for which the ant can come back to the origin after the $n$ steps? (The ant can move in any direction.) I think the claim is $d_i \leq \sum\limits_{j \ne i} d_j$ for all $i$, which of course is necessary, but I cannot prove it is sufficient.
Your condition is clearly necessary, and there is no sufficient condition. At each step, even if the distance is right, you only reach the origin if the angle you choose is exactly right, i.e. a single point of $[0,2\pi)$. As there are only a countable number of steps, there are (at most) countably many chances to pick the right angle. The measure of a countable set of points in a line segment is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find $n$ such that $\int_1^n \lfloor{x}\rfloor\lfloor{\sqrt x}\rfloor dx>60$ Find the smallest positive integer $n$ such that $$\int_1^n \lfloor{x}\rfloor\lfloor{\sqrt x}\rfloor\, dx>60$$ where $\lfloor\cdot\rfloor$ is the greatest integer function. I couldn't find any decent method other than explicitly evaluating it over a few intervals by brute force. Is there a better method out there?
The function $f(x):=\lfloor x\rfloor\cdot\lfloor\sqrt{x}\rfloor$ satisfies $$f(x)=\left\{\eqalign{\lfloor x\rfloor\qquad&(1\leq x<4)\cr 2\lfloor x\rfloor\qquad&(4\leq x<9)\ .\cr}\right.$$ It follows that $$\int_1^8 f(x)\>dx=1+2+3+8+10+12+14=50\>, $$ and $\int_1^9f(x)\>dx=50+\int_8^9f(x)\>dx=66$. The answer to your question therefore is $9$. (Remark: If you had written $6382$ instead of $60$ I would have set up a general scheme $\ldots$)
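As a quick sanity check of the piecewise computation above (a Python sketch, not part of the original answer): both $\lfloor x\rfloor$ and $\lfloor\sqrt x\rfloor$ are constant on each unit interval $[k,k+1)$, since perfect squares are integers, so the integral up to an integer $n$ is simply $\sum_{k=1}^{n-1} k\lfloor\sqrt k\rfloor$.

```python
import math

def F(n):
    # exact value of the integral on [1, n]: both floor(x) and
    # floor(sqrt(x)) are constant on each [k, k+1) (squares are
    # integers), so the integral is a sum of rectangle areas
    return sum(k * math.isqrt(k) for k in range(1, n))

assert F(8) == 50 and F(9) == 66
assert min(n for n in range(2, 20) if F(n) > 60) == 9
```

This reproduces the values $50$ and $66$ from the answer and confirms that $9$ is the smallest integer.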
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence of negative geometric series in the p-adic integers In real analysis, I am learning about convergence of series in metric spaces. My professor states that in the metric space of $\mathbb Z$ with the $2$-adic metric, one of the series converges and the other does not: $$ \sum_{n=0}^\infty 2^n$$ $$ \sum_{n=0}^\infty (-2)^n$$ I know that the first series converges to $-1$, so his claim is that the latter series does not converge. However, this series is Cauchy and $\lim_n (-2)^n=0$, and I am under the impression that this implies the series converges in the $2$-adics.
Both of these series definitely converge in $\mathbb{Z}_2$. As you say, the sequences of partial sums are both Cauchy. However, the second series converges to an element of $\mathbb{Z}_2$ which is not an element of $\mathbb{Z}$; namely, $1/3$. To see this, use the old geometric series trick: $$ \frac{1}{3} = \frac{1}{1 - (-2)} = 1 + (-2) + (-2)^2 + \ldots $$
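One way to check this numerically (a small illustration, not part of the answer): the partial sums $S_n=\sum_{k=0}^{n-1}(-2)^k$ satisfy $3S_n = 1-(-2)^n$, so $3S_n \equiv 1 \pmod{2^n}$, i.e. $|S_n - 1/3|_2 = 2^{-n} \to 0$. Likewise, the partial sums of the first series are $2^n-1 \equiv -1 \pmod{2^n}$.

```python
for n in range(1, 40):
    s_plus = sum(2 ** k for k in range(n))      # partial sums of 2^k
    s_minus = sum((-2) ** k for k in range(n))  # partial sums of (-2)^k
    assert (s_plus + 1) % 2 ** n == 0           # converges to -1 in Z_2
    assert (3 * s_minus - 1) % 2 ** n == 0      # converges to 1/3 in Z_2
```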
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that if G is a planar, simple and 3-connected graph, then the dual graph of G is simple and 3-connected I've been thinking about this question for several days now and I haven't come up with a satisfactory answer yet. The part of proving that the dual graph is simple (under the assumption that the original graph is planar, 3-connected and simple) is easy, but the 3-connectedness part seems a little more difficult to prove. I was trying to imagine what it would mean for the dual not to be 3-connected in terms of the faces of the graph and the paths that exist between any two of them in the dual graph, but I didn't get anywhere with that idea. Then, using the fact that a 3-connected (and therefore connected) graph is isomorphic to its double dual, I realized that it was enough to prove that if the dual of a graph is 3-connected, then the original graph is 3-connected, so that I would stop thinking about paths between faces in the dual graph and rather think about paths between vertices in the original graph. I can also use the fact that the dual graph of a 2-connected graph is itself 2-connected and I thought that could help me in some way, but I haven't seen how. Any help or ideas would be greatly appreciated.
If you can make the dual graph disconnected by removing two edges, this means that after removing one edge, there is a bridge, i.e., an edge with the same face on both sides. This means that before removing the first edge, there were two faces sharing two edges. In the original graph this means two vertices joined by two different edges, contradicting simplicity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closed form for a sum Please, I need help with this example, step by step. Calculate the value of the following summation, i.e. express it as a simple formula without the sum: $$\sum_{n_1 + n_2 + n_3 + n_4 = 5} \frac{6^{n_2-n_4} (-7)^{n_1}}{n_1! n_2! n_3! n_4!}$$ I think the relevant formula is $\sum_{n_1 + n_2 + n_3 + n_4 = n} \frac{n!\, x_1^{n_1} x_2^{n_2} x_3^{n_3} x_4^{n_4}}{n_1!\, n_2!\, n_3!\, n_4!} = (x_1+x_2+x_3+x_4)^n$, but I don't know how to proceed; I'm stuck here. Here is a screenshot: http://prntscr.com/l8gluf
Hint. The multinomial theorem is $$(x_1 + x_2 + \ldots + x_k)^n = \sum_{n_1 + \cdots + n_k = n} {n \choose n_1, \ldots, n_k} x_1^{n_1} x_2^{n_2} \cdots x_k^{n_k}$$ Now let $k=4$, $n=5$ and find values of $x_1,x_2,x_3,x_4$ to make the right-hand side summand look a bit like the summand in your expression. Note that $${n \choose n_1, \ldots, n_k} = \frac{n!}{n_1! \cdots n_k!}$$ so you get the $n_1!n_2!n_3!n_4!$ in the denominator for "free"...
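To see the hint in action (a numeric check, not part of the hint itself), one can brute-force the sum with exact rational arithmetic and compare it against the closed form $(x_1+x_2+x_3+x_4)^5/5!$ for the choice $x_1=-7$, $x_2=6$, $x_3=1$, $x_4=1/6$:

```python
from fractions import Fraction
from itertools import product
from math import factorial

total = Fraction(0)
for n1, n2, n3, n4 in product(range(6), repeat=4):
    if n1 + n2 + n3 + n4 == 5:
        total += (Fraction(6) ** (n2 - n4) * (-7) ** n1) / (
            factorial(n1) * factorial(n2) * factorial(n3) * factorial(n4))

# multinomial theorem with x1 = -7, x2 = 6, x3 = 1, x4 = 1/6
closed_form = (Fraction(-7) + 6 + 1 + Fraction(1, 6)) ** 5 / factorial(5)
assert total == closed_form == Fraction(1, 933120)
```

Since $-7+6+1+\tfrac16 = \tfrac16$, the sum collapses to $(1/6)^5/5! = 1/933120$.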
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to show that $\lim_{x \rightarrow \infty } \frac{p(x)}{2^{\sqrt x}} = 0$ I want to show that the limit below is 0 for any polynomial $p(x)$ of degree $n$: $$\lim_{x \rightarrow \infty } \frac{p(x)}{2^{\sqrt x}} = 0$$ If I apply l'Hôpital's rule repeatedly, the numerator will eventually become zero. What about the denominator?
It suffices to show $\lim_{x \rightarrow \infty}\dfrac{x^n}{2^{√x}} =0$ (why?). 1) Let $y=√x, y >0.$ Then $F(y)= \dfrac{y^{2n}}{2^y}.$ 2) Let $e^a=2,$ $a >0.$ $F(y)= \dfrac{y^{2n}}{e^{ay}}.$ $e^{ay} \gt \dfrac{(ay)^{2n+1}}{(2n+1)!}$ (Series expansion). $F(y) \lt \dfrac{(2n+1)! y^{2n}}{(ay)^{2n+1}}=$ $\dfrac{(2n+1)!}{a^{2n+1}y}=(\dfrac{(2n+1)!}{a^{2n+1}})\dfrac{1}{y}.$ Take the limit ${y \rightarrow \infty}.$
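A numeric illustration of the claim for, say, $n=10$ (floats suffice here; $2^{\sqrt x}$ stays within double range for the chosen $x$, and this of course only samples the limit, it doesn't prove it):

```python
import math

def f(x, n=10):
    return x ** n / 2.0 ** math.sqrt(x)

vals = [f(10.0 ** k) for k in range(3, 7)]         # x = 1e3, 1e4, 1e5, 1e6
assert all(a > b for a, b in zip(vals, vals[1:]))  # eventually decreasing
assert vals[-1] < 1e-200                           # essentially zero
```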
{ "language": "en", "url": "https://math.stackexchange.com/questions/2963971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Deriving $\sqrt2 \approx 1 + \frac13 + \frac1{3 \cdot 4} - \frac1{3 \cdot 4 \cdot34}$ Here is a weird expansion for $\sqrt2$ found in the ancient Indian mathematical literature. $$1 + \frac13 + \frac1{3 \cdot 4} - \frac1{3 \cdot 4 \cdot34} = \frac {577}{408}$$ Today we know that the resulting fraction can be obtained using Pell numbers, i.e. the recursion $\frac{P_{n-1}}{P_n} - 1$. Can someone explain how we can come up with that particular expansion?
EDIT: This is an answer for an algorithm to generate the unit fractions of the expansion : $$\sqrt2 \approx 1 + \frac12 - \frac1{3 \cdot 4} - \frac1{3 \cdot 4 \cdot34}$$ (an answer to the actual question is provided by Gerry Myerson in his first comment) $$-$$ This (signed) Egyptian fraction may be obtained by starting with the 'exact' $\sqrt{2}$ and removing at each iteration the multiplicative inverse of the nearest integer of the remainder : \begin{array} {c|cc} x&1/x&[1/x]\\ \sqrt{2}&0.707106781187&1\\ \sqrt{2}-1&2.41421356237&2\\ \sqrt{2}-1-\frac 12&-11.6568542495&-12\\ \sqrt{2}-1-\frac 12+\frac 1{12}&-407.646752982&-408\\ \end{array} This method may be generalized at wish...
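The iteration in the table can be written in a few lines of Python (a sketch of the same nearest-integer greedy procedure; double-precision floats are accurate enough for these four terms):

```python
from fractions import Fraction
import math

x = math.sqrt(2.0)
approx, terms = Fraction(0), []
for _ in range(4):
    r = x - float(approx)     # remainder still to be represented
    m = round(1.0 / r)        # nearest integer of the reciprocal
    terms.append(m)
    approx += Fraction(1, m)

assert terms == [1, 2, -12, -408]    # i.e. 1 + 1/2 - 1/12 - 1/408
assert approx == Fraction(577, 408)
assert abs(float(approx) - x) < 1e-5
```

This reproduces the table's column of nearest integers $1, 2, -12, -408$ and lands exactly on $577/408$.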
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $T_n = 3n^2 -60n + 301$ is positive for every $n$ I recently did a Mathematics exam from a previous year, and I stumbled across a question's answer I struggled to fully understand. It is given: The quadratic pattern $244 ;~ 193 ;~ 148 ;~ 109;~ \ldots$ I've determined the $n$-$\textrm{th}$ term as $T_n = 3n^2 -60n + 301$. Now the question asks: Show that all the terms of the quadratic pattern are positive. Our teacher explained completing the square of the formula, but I couldn't catch what she said; more to the point, I didn't understand why you would complete the square. I do, however, see that for $3n^2 - 60n + 301$ we have $\Delta < 0.$ I can deduce that you would go in the direction of an inequality, since you have $n >0$ (incomplete). Perhaps someone with a higher understanding can explain this to me.
Completing the square enables you to see the vertex clearly: for $f(x)=ax^2+bx+c$ with $a>0$, it exhibits the minimal value the function can attain. Alternatively, just observe that the discriminant is negative, that is, the function doesn't intersect the $x$-axis and so doesn't change sign. Since one term is positive, every term is positive.
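For this particular pattern, the completed square is $3n^2-60n+301 = 3(n-10)^2+1 \ge 1$, which is exactly what makes every term positive. A tiny Python check (illustration only):

```python
# completed square: 3n^2 - 60n + 301 = 3(n - 10)^2 + 1 >= 1
for n in range(-1000, 1000):
    T = 3 * n * n - 60 * n + 301
    assert T == 3 * (n - 10) ** 2 + 1 and T > 0

# the terms quoted in the question
assert [3 * n * n - 60 * n + 301 for n in range(1, 5)] == [244, 193, 148, 109]
```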
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
If $f$ is a polynomial, how does $f(\frac{d}{dt})$ act on $y$? If $f\left(\frac{d}{dt}\right)=a_n\frac{d^n}{dt^n}+\dots+a_1\frac{d}{dt}+a_0$, then whether $$f\left(\frac{d}{dt}\right)(y)=a_n\frac{d^n}{dt^n}y+\dots+a_1\frac{d}{dt}y+a_0y$$ or $$f\left(\frac{d}{dt}\right)(y)=a_n\frac{d^n}{dt^n}y+\dots+a_1\frac{d}{dt}y+a_0$$ I am confused.
The correct answer is $$f\left(\frac{d}{dt}\right)(y)=a_n\frac{d^n}{dt^n}y+\dots+a_1\frac{d}{dt}y+a_0y.$$ Otherwise you lose the linearity of your operator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proof by contradiction. Statement negation This should be an easy question. Yet, the provided solution confuses me. The question comes from "Understanding analysis" by S. Abbot, 2nd edition (Exercise 1.2.11). Negate the statement. Make an intuitive guess as to whether the claim or its negation is the true statement. (b) There exists a real number $x > 0$ such that $x < 1/n\;\;\forall n \in \mathbb{N}$. The provided solution says: The solution seems correct, apart from: shouldn't the negation be with $\exists n \in \mathbb{N}$, i.e.: $$\forall x >0 \;\; \exists n \in \mathbb{N}: x \geq 1/n$$ ?
There exists a real number $x > 0$ such that $x < 1/n\;\;\forall n \in \mathbb{N}$. Is this statement really valid? Let's check. $nx<1 \;\;\forall n\in\mathbb{N}$ holds only for $x\leq 0$, because $n$ becomes arbitrarily large, and if $x\gt 0$ then $nx$ diverges to infinity. So no such $x$ exists; the negation is the true statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Forward Euler Method Given Two Step Sizes I am attempting to compute an approximation of the solution with the forward Euler method in $[0,1]$ with step lengths $h_{1}= 0.2$, $h_{2}= 0.1$ given the initial value problem below $$\frac{dy}{dz}=\frac{1}{1+z}-y(z)\quad y(0)=1$$ I am not sure what to do when I am given two step sizes instead of one. I know how to compute it if it was given with a step size. Am I supposed to find out the approximation for two different step sizes? Or is there anything I am missing?
The problem asks you to solve the differential equation twice: once with step size $h=0.1$ and once with step size $h=0.2$, and then to compare the results. As you know, different step sizes give different results, and the smaller the step size, the smaller the error.
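A minimal sketch of what that comparison looks like in Python (since the exact solution has no elementary closed form, a very fine step is used here as a stand-in reference value — that is an assumption of this illustration, not part of the assignment):

```python
def euler(h, z_end=1.0):
    # forward Euler for y' = 1/(1+z) - y, y(0) = 1
    z, y = 0.0, 1.0
    for _ in range(round(z_end / h)):
        y += h * (1.0 / (1.0 + z) - y)
        z += h
    return y

y_ref = euler(1e-5)                 # fine step, stand-in for the exact value
err_02 = abs(euler(0.2) - y_ref)
err_01 = abs(euler(0.1) - y_ref)
assert err_01 < err_02              # smaller step size, smaller error
assert 1.3 < err_02 / err_01 < 3.0  # roughly first-order convergence
```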
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving $\int_0^\pi \frac{\log(1+x\cos (y))}{\cos y}dy=\pi \arcsin x$ How would I go about proving that, $$\int_0^\pi \frac{\log(1+x\cos (y))}{\cos (y)}\,dy=\pi \arcsin (x)$$ I have tried to do this by computing the integral directly, but it appears to be too difficult. Maybe there is a better approach to this that I do not know of.
HINT: Using Feynman's Trick (differentiating under the integral), we have for $|x|<1$ $$\begin{align} \frac{d}{dx}\int_0^\pi \frac{\log(1+x\cos(y))}{\cos(y)}\,dy&=\int_0^\pi \frac{1}{1+x\cos(y)}\,dy\tag1 \end{align}$$ Use contour integration or apply the Weierstrass substitution to evaluate the integral on the right-hand side of $(1)$. Then, integrate the result.
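Both steps of the hint can be spot-checked numerically (an illustration only; the midpoint rule's node spacing keeps $y=\pi/2$, where the integrand's singularity is removable, from ever being evaluated exactly):

```python
import math

def midpoint(g, a, b, n=20000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

for x in (0.1, 0.5, 0.9):
    # differentiated integral: equals pi / sqrt(1 - x^2)
    inner = midpoint(lambda y: 1.0 / (1.0 + x * math.cos(y)), 0.0, math.pi)
    assert abs(inner - math.pi / math.sqrt(1.0 - x * x)) < 1e-6
    # original integral: equals pi * arcsin(x)
    outer = midpoint(lambda y: math.log(1.0 + x * math.cos(y)) / math.cos(y),
                     0.0, math.pi)
    assert abs(outer - math.pi * math.asin(x)) < 1e-5
```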
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Differentiation Operator is not a bounded operator for polynomials If you consider the space of all polynomials on $[0,1]$ (defined as $P_{[0,1]}$, a subspace of $C_{[0,1]}$), then the differentiation operator is not a bounded linear operator on this space. Why is that? This doesn't make any sense to me.
There's no reason to assume that the operator is bounded here. The norm on $C_{[0, 1]}$ measures the size of a function in terms of its values, but the derivative is really about steepness. A function can be arbitrarily steep even though it takes only small values. To be explicit, the norm of $x^n$ is clearly $1$, while the norm of its derivative is $n$. Draw the picture to see how $x^n$ gets progressively steeper while not getting bigger in value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2964920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Linear-algebra first course problem about orthogonal matrices I am trying to prove the following assertion about matrices: $A$ is a matrix of order $n$, with $n$ odd, that obeys $A A^T =I$ and $\det A=1$. Then $\det(A-I)=0$. I have tried a number of things but none of them work. That $n$ is odd seems to point to the trace of the matrix and its properties, and I have also tried to find a useful product of matrices, but it did not work out. All ideas will be appreciated.
We have $\left(-1\right)^n = -1$ (since $n$ is odd). But \begin{equation} \left(A-I\right)A^T = \underbrace{AA^T}_{=I} - A^T = I-A^T = \left(I-A\right)^T . \end{equation} Taking determinants of both sides of this equality, we obtain \begin{align} \det\left(\left(A-I\right)A^T\right) &= \det\left(\left(I-A\right)^T\right) = \det\left(\underbrace{I-A}_{=-\left(A-I\right)}\right) \\ & = \det\left(-\left(A-I\right)\right) = \left(-1\right)^n \det\left(A-I\right) = - \det\left(A-I\right) \end{align} (since $\left(-1\right)^n = -1$). Thus, \begin{align} -\det\left(A-I\right) = \det\left(\left(A-I\right) A^T\right) = \det\left(A-I\right) \cdot \underbrace{\det \left(A^T\right)}_{= \det A = 1} = \det\left(A-I\right) , \end{align} so that $0 = 2 \cdot \det\left(A-I\right)$ and thus $\det\left(A-I\right) = 0$, assuming that your matrices are over a field of characteristic $\neq 2$.
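A quick numeric illustration of the identity in plain Python (random products of coordinate rotations stand in here for a generic $3\times 3$ matrix with $AA^T=I$ and $\det A=1$ — an assumption of this sketch, not part of the proof):

```python
import math, random

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def rot_z(t):
    return [[math.cos(t), -math.sin(t), 0],
            [math.sin(t),  math.cos(t), 0],
            [0, 0, 1]]

def rot_x(t):
    return [[1, 0, 0],
            [0, math.cos(t), -math.sin(t)],
            [0, math.sin(t),  math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

random.seed(0)
for _ in range(100):
    A = matmul(rot_z(random.uniform(0, 6.28)), rot_x(random.uniform(0, 6.28)))
    assert abs(det3(A) - 1) < 1e-9          # A A^T = I and det A = 1
    AmI = [[A[i][j] - (i == j) for j in range(3)] for i in range(3)]
    assert abs(det3(AmI)) < 1e-9            # det(A - I) = 0
```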
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to show $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ given by $f(x,y)=x^3$ is an open map? I know for instance that $g: \mathbb{R} \rightarrow \mathbb{R}$ given by $g(x)=x^3$ is open. Since $g$ is a homeomorphism, then its inverse $g^{-1}$ is continuous. Let $V \subset \mathbb{R}$ be a open set, then $(g^{-1})^{-1}(V)=g(V) \subset \mathbb{R}$ is open. But how about $f$? What mechanism can I use to show $f$ is open?
Hint: $f$ is the composition of the $1$st projection, which is an open map, by the cube function, which is a homeomorphism, hence is an open map.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If rank$(A) = 2$, then $A^2 \neq 0_3$ Let $A$ be a real $3 \times 3 $ matrix such that rank$(A) = 2$. Prove that $A^2 \neq 0_3$. where $0_3$ represents the null matrix of order $3$. I am looking for a solution involving only basic manipulation using matrices. I already have a better solution using the range and the nullity of $A$. Thank you in advance! Edit. No Sylvester's inequality, Jordan form or range+nullity / linear transformations. At most use the definition of the rank as the dimension of the column/row space.
Let us write $A=(C_1,C_2,C_3)$, where $C_i$ is column $i$ of $A.$ Assume $A^2=0_3.$ Then $AC_i=0$ for each $i$, so among $C_1,C_2,C_3$ there are two linearly independent solutions of the system $Ax=0$ (since $\operatorname{rank}(A)=2$). But since $A$ is of order $3$ and has rank $2$, the solution set of $Ax=0$ cannot contain two linearly independent vectors, so this is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to prove $f(x) = 4x^{3} + 4x - 6$ has exactly one real root? How can I show that $f(x) = 4x^{3} + 4x - 6$ has exactly one real root? I think the best way is to show $f'(x) = 12x^2 + 4 > 0$ for all $x \in \mathbb{R}$. Thus, $f'(x)$ has zero real roots. Thus, $f(x)$ has at most one real root. I thought about trying to show that if $f$ is a polynomial and $f'$ has $n$ real roots, then $f$ has $n + 1$ roots by using Rolle's Theorem or Mean Value Theorem, but I don't think this fact, in general, is true. I would need to prove this statement. Can someone please help me prove this fact?
Your intuition for the first part is correct! $f(0)<0$ and $f(1)>0$, so by the IVT $f$ has a root, say $x_0$. Suppose $f$ has another root $x_1 \neq x_0$; without loss of generality $x_0<x_1$. Then $f(x_0)=f(x_1)=0$, so by Rolle's theorem there exists $c \in (x_0,x_1)$ such that $f'(c)=0$, contradicting the fact that $f'(x)>0$ for all $x$.
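Numerically, the argument looks like this (illustration only): the sign change on $[0,1]$ locates the root via bisection, and $f'(x)=12x^2+4\ge 4>0$ everywhere rules out a second one.

```python
def f(x):
    return 4 * x ** 3 + 4 * x - 6

# sign change on [0, 1] gives a root (IVT), found by bisection
lo, hi = 0.0, 1.0
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
assert abs(f(root)) < 1e-9

# f'(x) = 12x^2 + 4 >= 4 > 0, so the root is unique
assert min(12 * x ** 2 + 4 for x in range(-100, 101)) == 4
```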
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Category theory - Prove that $\operatorname{Hom}$ preserves representations for quasi-inverse functors Let $F: \mathcal C \to \mathcal D$ and $G: \mathcal D \to \mathcal C$ be quasi-inverse functors, and let $H : \mathcal C \to Set$ be a representable (contravariant) functor with representative $X \in \mathcal C$. Prove that $H \circ G$ is representable by $F(X)$. $\DeclareMathOperator\Hom{Hom}$As isomorphisms are transitive, it suffices to consider the case when $H = \Hom( -, X)$. To this end, we wish to find an isomorphism $\phi : \Hom(-,X) \circ G \to \Hom(-,F(X))$, from which we quickly deduce that for any $f: B \to A$ and $g: GA \to X$ it must be that $\phi_A(g) \circ f = \phi_B(g \circ Gf)$. I am not sure how to find such a $\phi$, though. It seems like I have to somehow use the fact that $F$ and $G$ are quasi-inverses...
$\newcommand\cat\mathscr\DeclareMathOperator\id{id}$Let $F:\cat C\rightleftarrows\cat D:G$ be quasi-inverse functors. Then $F,G$ are fully faithful and there exists natural isomorphisms $\varepsilon:F\circ G\to\id_{\cat C}$ and $\eta:\id_{\cat D}\to G\circ F$ such that \begin{align} &\eta_GG(\varepsilon)=1_G& &F(\eta)\varepsilon_F=1_F \end{align} Proof. Let $\bar\eta:\id_{\cat D}\to G\circ F$ be a natural isomorphism. The functor $F$ is faithful, for if $u,v:A\rightrightarrows B$ and $F(u)=F(v)$ then \begin{align} u\bar\eta_B &=\bar\eta_A(G\circ F)(u)\\ &=\bar\eta_A(G\circ F)(v)\\ &=v\bar\eta_B \end{align} which implies $u=v$. Similarly, $G$ is faithful. The functor $F$ is full, for if $y:F(A)\to F(B)$ and $x=\bar\eta_AG(y)\bar\eta_B^{-1}$, then \begin{align} \bar\eta_A(G\circ F)(x) &=x\bar\eta_B\\ &=\bar\eta_AG(y) \end{align} which implies $y=F(x)$ (since $G$ is faithful). Let $\varepsilon:F\circ G\to\id_{\cat C}$ be a natural isomorphism. Since $F$ is full and faithful, for each object $A$ in $\cat C$ there exists one and only one isomorphism $\eta_A:A\to (G\circ F)(A)$ such that $F(\eta_A)=\varepsilon_{F(A)}^{-1}$. Then $\eta:\id_{\cat C}\to G\circ F$ is a natural isomorphism (again using faithfulness of $F $) and $F(\eta)\varepsilon_F=1_F$. By naturalness of $\varepsilon$, we have $\varepsilon_{F\circ G}\varepsilon=(F\circ G)(\varepsilon)\varepsilon$ from which we get $\varepsilon_{F\circ G}=(F\circ G)(\varepsilon)$. Consequently, \begin{align} F(\eta_GG(\varepsilon)) &=F(\eta_G)(F\circ G)(\varepsilon)\\ &=F(\eta_G)\varepsilon_{F\circ G}\\ &=1_{F\circ G}\\ &=F(1_G) \end{align} from which $\eta_GG(\varepsilon)=1_G$. 
$\square$ $\DeclareMathOperator\Hom{Hom} $For all objects $A$ of $\cat C$ we define \begin{align} &\varphi_A:\Hom_{\cat C}(G(A),X)\to\Hom_{\cat D}(A,F(X))& &f\mapsto\varepsilon_A^{-1}F(f) \end{align} and \begin{align} &\psi_A:\Hom_{\cat D}(A,F(X))\to\Hom_{\cat C}(G(A),X)& &g\mapsto G(g)\eta_X^{-1} \end{align} We have to show that $\varphi_A$ is natural in $A$ and $\varphi_A\circ\psi_A$ and $\psi_A\circ\varphi_A$ are identity functions. For all $f:G(A)\to X$ we have \begin{align} (\psi_A\circ\varphi_A)(f) &=G(\varepsilon_A^{-1}F(f))\eta_X^{-1}\\ &=G(\varepsilon_A)^{-1}(G\circ F)(f)\eta_X^{-1}\\ &=\eta_{G(A)}(G\circ F)(f)\eta_X^{-1}\\ &=f\eta_X\eta_X^{-1}\\ &=f \end{align} For all $g:A\to F(X)$ we have \begin{align} (\varphi_A\circ\psi_A)(g) &=\varepsilon_A^{-1}F(G(g)\eta_X^{-1})\\ &=\varepsilon_A^{-1}(F\circ G)(g)F(\eta_X)^{-1}\\ &=\varepsilon_A^{-1}(F\circ G)(g)\varepsilon_{F(X)}\\ &=\varepsilon_A^{-1}\varepsilon_Ag\\ &=g \end{align} Let $u:B\to A$ be a morphism in $\cat C$. Then naturalness of $\varphi_A$ means $$\require{AMScd} \begin{CD} \Hom_{\cat C}(G(A),X) @>\varphi_A>> \Hom_{\cat D}(A,F(X))\\ @VVV @VVV\\ \Hom_{\cat C}(G(B),X) @>>\varphi_B> \Hom_{\cat D}(B,F(X)) \end{CD}$$ For all $f:G(A)\to X$ we have \begin{align} \varphi_B(G(u)f) &=\varepsilon_B^{-1}F(G(u)f)\\ &=\varepsilon_B^{-1}(F\circ G)(u)F(f)\\ &=u\varepsilon_A^{-1}F(f)\\ &=u\varphi_A(f) \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example of a metric space where diameter of a ball is not equal twice the radius My question is regarding the notion of balls in metric spaces, and specifically about their diameters. If $(X,d)$ is a metric space and $A \subset X$, then the diameter of $A$ is defined by $$ d(A) = \sup \{ d(a_1,a_2) : a_1 \text{ and } a_2 \in A \}. $$ I wanted to get a "feel" for the definition; so, I tried to verify that the diameter of a ball of radius $r > 0$ in $\mathbb{R}^n$ is exactly $2r$ as per the above definition, and I managed to do this after some effort. Then, I wondered whether this necessarily happens in every metric space. Is it possible to give an example of metric space where the diameter of a ball is strictly smaller than the radius? I am not sure how to go about finding a set $X$ with a metric $d$ such that this condition holds. I guess I am stuck mainly because I am gathering all my intuition from the case of $\mathbb{R}^n$ with the standard metric, and admittedly haven't got a feel for how abstract metric spaces behave. Any help is appreciated.
Consider the discrete metric $d$ on a set $X$: $$d(x,y)=\begin{cases} 0,&\text{if }x=y\\ 1,&\text{if }x\ne y\;. \end{cases}$$ Consider the ball of radius $r=1/2$ centered at $x$. Then $B(x,r)=\{x\}$. Now by definition, $\operatorname{diam} A = \sup\{ d(a,b) : a, b \in A \}$. Applying this with $A=B(x,r)$, the diameter of $A$ equals $0$, which is strictly smaller than the radius $1/2$.
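The example is small enough to check directly in Python (with a three-point set standing in for $X$ — purely illustrative):

```python
def d(x, y):
    # the discrete metric
    return 0 if x == y else 1

X = ["a", "b", "c"]
center, r = "a", 0.5
ball = [p for p in X if d(center, p) < r]    # open ball of radius 1/2
assert ball == ["a"]
diameter = max(d(p, q) for p in ball for q in ball)
assert diameter == 0 < r                     # diameter strictly below radius
```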
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 7, "answer_id": 4 }
How would you integrate $\frac{Si(x)}{x}$? The function $Si(x)$ can be obtained when we integrate $\frac{\sin(x)}{x}$. But how would we go about integrating $\frac{Si(x)}{x}$? More information about the function $Si(x)$ can be found here https://en.wikipedia.org/wiki/Trigonometric_integral Edit: Just checked wolframalpha and even it did not have any answer.
There is no reason for suspecting that the antiderivative of $\operatorname{Si}(x)/x$ can be expressed in terms of “known” functions. The power series expansion of $\operatorname{Si}(x)$ is $$ \operatorname{Si}(x)=\sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n+1)^2\cdot(2n)!} $$ Therefore the power series expansion of $\operatorname{Si}(x)/x$ is $$ \sum_{n\ge0}\frac{(-1)^nx^{2n}}{(2n+1)^2\cdot(2n)!} $$ and therefore the antiderivatives are $$ c+\sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n+1)^3\cdot(2n)!} $$ Note that there is a pattern here: if you start with the function $$ f_0(x)=x\cos x $$ then its series expansion is $$ \sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n)!} $$ If we integrate $f_0(x)/x$, we get $$ f_1(x)=\sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n+1)\cdot(2n)!}=\sin x $$ (using here and below the antiderivative that evaluates $0$ at $x=0$). If we integrate $f_1(x)/x$, we get $$ f_2(x)=\sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n+1)^2\cdot(2n)!}=\operatorname{Si}x $$ and so on. Repeating the process yields $$ f_k(x)=\sum_{n\ge0}\frac{(-1)^nx^{2n+1}}{(2n+1)^k\cdot(2n)!} $$ and $$ Df_{k+1}(x)=\frac{f_k(x)}{x} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2965902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove that $3^{x^2+x} (x+1)^{-x} \Gamma (x+1)\ge 1$ for $x>0$? Let $$f(x)=3^{x^2+x} (x+1)^{-x} \Gamma (x+1).$$ Drawing a picture with any computer algebra system, it is obvious that $f(x) \ge 1$ on $[0,\infty)$. But how can we prove this? If we take the derivative, then we get $$ \frac{\mathrm d}{\mathrm dx}\log(f(x))=-\frac{x}{x+1}+(2 x+1) \log (3)-\log (x+1)+\psi(x+1), $$ where $\psi(\cdot)$ is the digamma function. Drawing a picture again, we see that this is positive and increasing. But again, how can we prove this? Okay, I have a proof now for $x \in (0,1)$. We can expand $\log(f(x))$ by this formula to get $$ \log(f(x))= \sum_{t=2}^{\infty}\frac{(-x)^t ((t-1) \zeta (t)-t)}{(t-1) t}+x^2 \log 3+x (\log 3-\gamma ). $$ Thus it suffices to show that $$ \left|\frac{(t-1) \zeta (t)-t}{(t-1) t}\right| $$ is decreasing for $t \ge 2$. This can be proved using this paper.
Here's a proof for $x > 1$. Since $x! > \sqrt{2\pi x}(x/e)^x$ for $x > 1$, for any $c > 1$ and $x > 1$ we have $\begin{array}\\ f(x) &=c^{x^2+x} (x+1)^{-x} \Gamma (x+1)\\ &=c^{x^2+x} (x+1)^{-x} x!\\ &>c^{x^2+x} (x+1)^{-x} \sqrt{2\pi x}(x/e)^x\\ &=\sqrt{2\pi x}\left(c^{x+1} \dfrac{x}{e(x+1)}\right)^x\\ &>\sqrt{2\pi x}\left( \dfrac{c^2x}{e(x+1)}\right)^x\\ &>\sqrt{2\pi x}\left( \dfrac{c^2}{2e}\right)^x \qquad\text{since } x/(x+1) > 1/2 \text{ for } x > 1\\ \end{array} $ Therefore, if $c^2 > 2e$, i.e. $c > \sqrt{2e} \approx 2.33$ (in particular for $c = 3$), we get $f(x) \gt \sqrt{2\pi x}$ for $x > 1$.
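The bound can be spot-checked numerically; this is a sketch that works in logarithms to avoid overflow, taking $c=3$ as in the question:

```python
import math

def log_f(x, c=3.0):
    """log of c^(x^2 + x) * (x+1)^(-x) * Gamma(x+1)."""
    return (x*x + x) * math.log(c) - x * math.log(x + 1) + math.lgamma(x + 1)

# The answer's conclusion: for c = 3 > sqrt(2e) and x > 1, f(x) > sqrt(2*pi*x).
for x in (1.01, 2, 5, 10, 50):
    assert log_f(x) > 0.5 * math.log(2 * math.pi * x)

# The original claim f(x) >= 1, checked on a grid of small x as well:
for i in range(1, 200):
    x = i / 100  # covers (0, 2)
    assert log_f(x) >= 0
```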
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Formula for the least element on the spectrum Let $A$ be a self-adjoint operator defined on a dense subset of an Hilbert space $\mathcal{H}$. Assume that $A$ is bounded below in the sense there is $m \in \mathbb{R}$ such that $$\langle Ax,x\rangle \geq m,~\forall x : \|x\| = 1.$$ I want to show that: $$ m = \inf\{\lambda : \lambda \in \sigma(A)\} = \inf \{\langle Ax,x\rangle : \|x\| = 1\}.$$ I know that if $E_A$ denotes the unique spectral measure that represents $A$, then $\mathrm{supp}~E_A = \sigma(A),$ from which follows the first equality. So, it is only left to prove the last equality. Any hints? Thanks in advance.
Let $m=\inf\;\{ \lambda : \lambda\in\sigma(A) \}$. Then, for every positive integer $n$, $E_{A}[m,m+1/n] \ne 0$. So there exists a unit vector $x_n\in\mathcal{D}(A)$ such that $E_{A}[m,m+1/n]x_n = x_n$, which gives \begin{align} 0 & \le \langle (A-mI)x_n,x_n\rangle \\ & = \int_{m}^{m+1/n}(\lambda-m) d\langle E(\lambda)x_n,x_n\rangle \\ & \le \frac{1}{n}\langle E[m,m+1/n]x_n,x_n\rangle \\ & \le \frac{1}{n}\langle x_n,x_n\rangle = \frac{1}{n}. \end{align} Therefore, $\lim_n \langle A x_n,x_n\rangle = m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Stirling Numbers of second kind defined in terms of coefficients Prove that $t^n = \sum_{k=1}^n S(n,k)t^{\underline k}$ where $t^{\underline k}$ denotes the $k$-th falling power $t(t-1)(t-2)\ldots(t-k+1)$ of $t$. I know that we'll have to use the recurrence relation for $S(n,k)$ here, but since the summation itself involves $k$, I'm confused as to how we will convert $(t-k)$ into $t^{\underline{k+1}}$. Thanks.
For the induction proof as per the comment we use $${n+1\brace k} = k{n\brace k} + {n\brace k-1}$$ which says that we put $n+1$ into one of $k$ sets of a set partition of $n$ into $k$ sets or we join it as a singleton to a partition of $n$ into $k-1$ sets. The base case is $$t^1 = \sum_{k=1}^1 {1\brace k} t^\underline{k} = t$$ and we see that it holds. Now suppose that $$t^n = \sum_{k=1}^n {n\brace k} t^\underline{k}$$ which implies $$t^{n+1} = \sum_{k=1}^n {n\brace k} t\times t^\underline{k} \\ = \sum_{k=1}^n {n\brace k} (t-k)\times t^\underline{k} + \sum_{k=1}^n {n\brace k} k \times t^\underline{k} \\ = \sum_{k=1}^n {n\brace k} t^\underline{k+1} + \sum_{k=1}^n {n\brace k} k \times t^\underline{k} \\ = \sum_{k=2}^{n+1} {n\brace k-1} t^\underline{k} + \sum_{k=1}^n {n\brace k} k \times t^\underline{k}.$$ Now ${n\brace 0} = {n\brace n+1} = 0$ so this is $$\sum_{k=1}^{n+1} {n\brace k-1} t^\underline{k} + \sum_{k=1}^{n+1} {n\brace k} k \times t^\underline{k}.$$ Apply the recurrence to get $$t^{n+1} = \sum_{k=1}^{n+1} {n+1\brace k} t^\underline{k}.$$
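Both the recurrence and the identity can be verified computationally for small $n$ (a sketch):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind via S(n+1,k) = k*S(n,k) + S(n,k-1)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def falling(t, k):
    """Falling power t^(underline k) = t (t-1) ... (t-k+1)."""
    out = 1
    for j in range(k):
        out *= t - j
    return out

for n in range(1, 8):
    for t in range(-3, 10):  # both sides are polynomials in t, so any t works
        assert t**n == sum(S(n, k) * falling(t, k) for k in range(1, n + 1))
```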
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Odd or even function Show whether the function $f$ is odd, even, or otherwise, where $$f(x) = 2 , x\in ]0,\infty[ , \qquad f(x) =- 2 , x\in ]-\infty , 0].$$ I think that the function is odd because it is symmetric about the origin, but for the value $0$ in the domain: since $-0=0$, $f(0)$ and $f(-0)$ cannot be additive inverses of each other. Is my answer correct or not?
It is not odd, because $f(0) \neq - f(0)$. (By definition, an odd function satisfies $f(-x) = -f(x)$ for every $x$ and $-x$ in the domain.) It is obviously not even either.
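A tiny computational illustration of the failure at $0$ (sketch):

```python
def f(x):
    return 2 if x > 0 else -2   # the given piecewise function; note f(0) = -2

# f(-x) == -f(x) holds away from 0 ...
assert all(f(-x) == -f(x) for x in (0.5, 1, 3.7))
# ... but fails at x = 0, so f is not odd:
assert f(-0) != -f(0)
```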
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Comparison in asymptotic notation I can compare two expressions asymptotically by using limits or the definition of Big-Oh. However, I cannot express the following sum in terms of $n$: $$\sum_{i=1}^{n} i^k$$ I want to compare it with $n^{k+1}$.
HINT Recall that by Faulhaber's formula $$\sum_{i=1}^n i^{k} = \frac{n^{k+1}}{k+1}+O(n^k)$$
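So the sum grows like $n^{k+1}/(k+1)$; a quick numerical check of the leading term (sketch, with $k=3$ chosen arbitrarily):

```python
def power_sum(n, k):
    """Exact sum 1^k + 2^k + ... + n^k using Python's big integers."""
    return sum(i**k for i in range(1, n + 1))

k = 3
for n in (10, 100, 1000):
    ratio = power_sum(n, k) / (n**(k + 1) / (k + 1))
    print(n, ratio)   # the ratio tends to 1 as n grows

# Leading term plus O(n^k): the difference stays below a constant times n^k.
n = 1000
assert abs(power_sum(n, k) - n**(k + 1) / (k + 1)) < n**k
```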
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Position of 2D Brownian motion exiting quarter plane Let $X_t = (X_t^1,X_t^2)$ be a planar Brownian motion without drift, with independent components, started at $X_0 = (1,1)$, and let $\tau := \inf \lbrace t\ge 0: X_t \notin (0,\infty)^2 \rbrace$ be the first time the process leaves the positive quadrant. So one component of $X_\tau$ must be $0$. What is the distribution of the other one? Does this distribution have finite expectation?
Starting at any $(x_0,y_0)$ in the upper half plane, the exit distribution is the Cauchy distribution proportional to $1/((x-x_0)^2/y_0^2+1)$. You'll recognize this as the fundamental solution for $(x,0)\in \partial\mathbb{H}$ of the Laplace equation of the upper half plane $\mathbb{H}$, evaluated at $(x_0, y_0)$. Read any textbook on stochastic analysis to learn about this connection between Brownian motion and the Laplace equation. EDIT: For some reason I misread the question as asking about the half plane. For the quarter plane, the solution is still found by finding the Green function for the corresponding Laplace equation, which can easily be googled.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Writing $\frac{1}{2}+i$ as $1+i+\frac{i^2}{2}$ in Needhams Complex Analysis text This question is from the first chapter of Needham's "Visual Complex Analysis Text" My question is in regards to the sequence they get in the solution below. Getting the first term in the sequence to be $1$ and the second being $1+i$, follows naturally, and when I think about the third term in the sequence it's clearly the complex number $\frac{1}{2}+i$. How did they get the equivalent expression $1+i+\frac{i^2}{2}$? Solution
The author used the following equivalences: $$\begin{align}\text{East} &: i^0 =i^4=i^{8\ }=...=1\\ \text{North} &: i^1 =i^5=i^{9\ }=...=i\\ \text{West} &: i^2 =i^6=i^{10}=...=-1\\ \text{South} &: i^3 =i^7=i^{11}=...=-i\end{align}$$ And in general $i^k = i^{k\ \ (\text{ mod } 4)}$. It's worth mentioning also that multiplication by $i$ is a rotation by $90^o$ or $\frac{\pi}{2}$ radians. By multiplying the $n^{th}$ move by $i$ and dividing by $n+1$, you are following the presented algorithm.
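Assuming the moves are $i^n/n!$ (which is what "multiplying the $n^{th}$ move by $i$ and dividing by $n+1$" produces, and what the listed partial sums $1,\ 1+i,\ 1+i+\frac{i^2}{2}$ suggest), the walk converges to $e^i = \cos 1 + i\sin 1$; a short sketch:

```python
import cmath

# Moves: step_0 = 1, step_n = step_{n-1} * i / n, i.e. step_n = i^n / n!.
z = 0 + 0j
step = 1 + 0j
positions = []
for n in range(1, 25):
    z += step
    positions.append(z)
    step *= 1j / n

print(positions[:3])       # [(1+0j), (1+1j), (0.5+1j)] — matches 1, 1+i, 1/2+i
print(z, cmath.exp(1j))    # the walk homes in on e^i = cos(1) + i sin(1)
```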
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What methods can be used to solve $ \int_{0}^{\frac{\pi}{2}} \frac{x}{\tan(x)} \:dx $ I'm seeking methods to solve the following definite integral: $$ I = \int_{0}^{\frac{\pi}{2}} \frac{x}{\tan(x)} \:dx $$
The method I took was: First make the substitution $t = \tan(x)$ $$ I = \int_{0}^{\infty} \frac{\arctan(t)}{t\left(1 + t^2\right)} \:dt $$ Now, let $$ I\left(\omega\right) = \int_{0}^{\infty} \frac{\arctan(\omega t)}{t\left(1 + t^2\right)} \:dt $$ Thus, \begin{align} \frac{dI}{d\omega} &= \int_{0}^{\infty} \frac{t}{t\left(1 + t^2\right)\left(1 + \omega^2t^2\right)} \:dt \\ &= \int_{0}^{\infty} \frac{1}{\left(1 + t^2\right)\left(1 + \omega^2t^2\right)} \\ &= \frac{1}{\omega^2 - 1} \int_{0}^{\infty}\left[\frac{\omega^2}{\left(1 + \omega^2t^2\right)} - \frac{1}{\left(1 + t^2\right)}\right]dt \\ &= \frac{1}{\omega^2 - 1} \left[\omega\arctan(\omega t) - \arctan(t) \right]_{0}^{\infty} \\ &= \frac{1}{\omega^2 - 1} \left[\omega\frac{\pi}{2} - \frac{\pi}{2}\right]\\ &= \frac{1}{\omega + 1}\frac{\pi}{2} \end{align} Hence, $$ I(\omega) = \int \frac{1}{\omega + 1}\frac{\pi}{2}\:d\omega = \frac{\pi}{2}\ln|\omega + 1| + C$$ Setting $\omega = 0$ we find: $$I(0) = C = \int_{0}^{\infty} \frac{\arctan(0 \cdot t)}{t\left(1 + t^2\right)}\:dt = 0 $$ Thus, $$ I(\omega) = \frac{\pi}{2}\ln|\omega + 1| $$ And finally, $$I(1) = \int_{0}^{\infty} \frac{\arctan(t)}{t\left(1 + t^2\right)} \:dt =\int_{0}^{\frac{\pi}{2}} \frac{x}{\tan(x)} \:dx = \frac{\pi}{2}\ln(2)$$
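The closed form can be confirmed numerically; a sketch using the midpoint rule, which avoids the endpoints (the integrand extends continuously, with value $1$ at $0$ and $0$ at $\pi/2$):

```python
import math

def integrand(x):
    return x / math.tan(x)

N = 200_000
a, b = 0.0, math.pi / 2
h = (b - a) / N
# Midpoint rule: sample at the centers of the N subintervals.
approx = h * sum(integrand(a + (k + 0.5) * h) for k in range(N))

exact = (math.pi / 2) * math.log(2)
print(approx, exact)
assert abs(approx - exact) < 1e-6
```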
{ "language": "en", "url": "https://math.stackexchange.com/questions/2966938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can I prove that $\mathbb{Q}(\sqrt[3]{2},i)=\mathbb{Q}(\sqrt[3]{2}+i)$? It is easy to show that $\mathbb{Q}(\sqrt[3]{2}+i)\subseteq \mathbb{Q}(\sqrt[3]{2},i)$. But how I can show that $\mathbb{Q}(\sqrt[3]{2},i)\subseteq\mathbb{Q}(\sqrt[3]{2}+i)$? I can't find a way to express $\sqrt[3]{2}$ in terms of $\sqrt[3]{2}+i$.
Consider numbers of the form $$x_0+x_1a+x_2a^2+x_3i+x_4ai+x_5a^2i$$ where $\sqrt[3]2=a$ and $x_i\in\mathbb Q$. It is easily shown that these numbers form a vector space $V$ under addition and are closed under multiplication. Now consider the powers of $\sqrt[3]2+i=a+i=z$ from $z^0=1$ to $z^5$. Certainly all these numbers are of the form above, and the six powers of $z$ are linearly independent in $V$, so they are a basis. By inverting this basis, it is possible to derive an expression for $\sqrt[3]2$ in terms of powers of $z$, showing that it is in $\mathbb Q(\sqrt[3]2,i)$, and similarly for $i$. Specifically, $$\sqrt[3]2=\frac1{22}(91+100z-78z^2+40z^3-9z^4+12z^5)$$
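The displayed expression can be sanity-checked numerically (a sketch in complex floating point):

```python
# Verify 2^(1/3) = (91 + 100 z - 78 z^2 + 40 z^3 - 9 z^4 + 12 z^5) / 22
# for z = 2^(1/3) + i.
a = 2 ** (1 / 3)
z = complex(a, 1)

value = (91 + 100*z - 78*z**2 + 40*z**3 - 9*z**4 + 12*z**5) / 22
print(value)  # real part recovers 2^(1/3); imaginary part vanishes
assert abs(value - a) < 1e-9
```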
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
On the notion of 'winding numbers' of maps $\mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$ In complex analysis, the winding number (around the origin) of a continuous loop $\gamma: [0,1] \to \mathbb{C} \setminus \{0\}$ is the number of times the loops "winds" around zero, which is given by the integral $$\frac{1}{2 \pi i}\int_\gamma \frac{dz}{z}$$ One of the basic results of algebraic topology is that loops with the same winding numbers are homotopic. I think it is pretty clear that any continuous map $f: \mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$ should also carry a similar notion of 'winding number'. It should be defined by the integral $\frac{1}{2 \pi i} \int_\gamma \frac{dz}{z}$ (where $ \gamma: [0,1] \to \mathbb{C} \setminus \{0\}$ is given by $\gamma(t) = f(e^{2 \pi it})$). In this scenario, does the result above still hold, i.e. that continuous maps $\mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\}$ with the same winding number are homotopic (through continuous maps $\mathbb{C} \setminus \{0\} \to \mathbb{C} \setminus \{0\})$? How can I see this? Is it possible to use the result above (the equivalent one for loops) to construct this homotopy?
Kenny Wong's answer is instructive because it does it "by hand", but here's a much more condensed version: A very basic result is that the winding number of $\gamma$ only depends on the path homotopy class of $\gamma$. Let $r:\mathbb{C}\setminus\{0\} \to S^1, z\mapsto \frac{z}{|z|}$, and let $i:S^1\to \mathbb{C}\setminus\{0\}$ be the inclusion. It is then a classical result that $r$ is a deformation retraction onto $S^1$. Let $f,g:\mathbb{C}\setminus\{0\}\to \mathbb{C}\setminus\{0\}$ be two maps with the same winding number. Then compare $i\circ r\circ f\circ i$ and $i\circ r\circ g\circ i$: they are two maps $S^1\to \mathbb{C}\setminus\{0\}$, and since $i\circ r \simeq id_{\mathbb{C}\setminus\{0\}}$, $i\circ r \circ f\circ i \simeq f\circ i$ and similarly with $g$; thus they have respectively the winding numbers of $f$ and $g$. Thus they are homotopic (using the result for $S^1$). Thus $f\circ i$ and $g\circ i$ are. Thus $f\circ i \circ r$ and $g\circ i \circ r$ are, and thus $f$ and $g$ are.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
show 1 is not a linear combination of the polynomials 2 and x in Z[x] I want to prove that $1$ is not a linear combination of the polynomials $2$ and $x$ in $\mathbb{Z}[x]$; can you give any hints for this? What I have done is suppose there are $f(x), g(x) \in \mathbb{Z}[x]$ such that \begin{align} 2f(x) + x g(x)=1. \end{align} Let \begin{align} f(x) = \sum_{n=0}^k a_n x^n, \quad g(x) = \sum_{m=0}^l b_m x^m \end{align} where $a_n, b_m \in \mathbb{Z}$; comparing constant coefficients, I see that \begin{align} 2 a_0 = 1, \end{align} and this is impossible since $a_0 \in \mathbb{Z}$. I want to know some other proof. Are there any other proofs?
Indeed, there are no polynomials $f,g$ with integer coefficients such that $xf(x)+2g(x)=1$, since the constant coefficient on the left-hand side is divisible by 2.
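A small computational illustration (sketch; polynomials are represented as coefficient lists from degree $0$ up, and the names are made up for the example):

```python
import random

def add(p, q):
    """Coefficient-wise sum of two polynomials."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def scale(c, p):
    """Multiply a polynomial by the constant c."""
    return [c * a for a in p]

def shift(p):
    """Multiply a polynomial by x (shift all coefficients up one degree)."""
    return [0] + p

random.seed(1)
for _ in range(500):
    f = [random.randint(-9, 9) for _ in range(random.randint(1, 6))]
    g = [random.randint(-9, 9) for _ in range(random.randint(1, 6))]
    combo = add(scale(2, f), shift(g))     # 2 f(x) + x g(x)
    assert combo[0] % 2 == 0               # constant term is always even, never 1
```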
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How can I prove that eventually one function will overtake another, and find when? Given, for example, the two functions $n^{100}$ and $2^n$: I know that $2^n$ grows faster and that therefore there is some $n$ where it will eventually overtake $n^{100}$, but how can I prove this, and perhaps also find that $n$?
Solve $2^n > n^{100} \iff n\ln 2>100\ln n\iff \dfrac{n}{\ln n} > \dfrac{100}{\ln 2}$. Let $n = 2^k$; then $\dfrac{n}{\ln n} = \dfrac{2^k}{k\ln 2}$, so the inequality becomes $2^k > 100k$. Observe the first integer solution $k$ of this is $k = 10$. Thus $n = 2^{10} = 1,024$ works (and since $n/\ln n$ is increasing there, $2^n > n^{100}$ for all $n \ge 1024$; the smallest such $n$ may be somewhat lower).
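With exact integer arithmetic one can confirm this bound and also locate the first crossover, as the question asks (a sketch; Python's big integers make the comparisons exact):

```python
# Verify the bound n = 2^10 = 1024 from the argument above.
assert 2**512 < 512**100      # k = 9 is not yet enough
assert 2**1024 > 1024**100    # k = 10 works

# n = 1 is trivially a solution (2 > 1), so search from 2 for the real crossover.
first = next(n for n in range(2, 2000) if 2**n > n**100)
print(first)                  # the smallest nontrivial n; it is at most 1024
assert first <= 1024
assert 2**(first - 1) <= (first - 1)**100
```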
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Create a set of N numbers with no common rational factor Question: So I want to create a set of real numbers $\{a\}_{N} = \{a_{1}, a_{2}, \ldots, a_{N}\}$ such that if there exists a common factor between all of the elements, it must be irrational. In this way, the fraction $\frac{a_{i}}{a_{j}} \notin \mathbb{Q} \; \forall i \neq j \; \; (1 \leq i,j \leq N)$. How can I create this set? Solution Attempt: We know that the square root of any prime number is irrational; therefore, just pick the set $\{a\}_{N}$ to be a set of square roots of distinct prime numbers. What is tripping me up is that the division of two irrational numbers can still be rational. An easy example is $\frac{2 \sqrt{2}}{3\sqrt{2}} = \frac{2}{3} \in \mathbb{Q}$. Disclaimer: I am not a number theorist, and have never taken a course on number theory, but I converted another problem I am working on to this problem. If I can solve this problem, I can solve the other problem; however, I don't know if this problem has a solution, and if it does, how to find it.
Well, your solution attempt also works. It's not that much different or more difficult to prove that $\frac{\sqrt{p}}{\sqrt{q}}$ is irrational for distinct primes $p,q$ than to prove $\sqrt{2}$ is irrational. Your 'tripping me up' fear is not without reason, of course, but you circumvented it by choosing roots of primes, not just any number that isn't a square.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2967724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }