Suspension of Cuntz algebra is traceless I saw a conclusion in a reference book: the suspension of the Cuntz algebra $C_0((0,1))\otimes \mathcal O_2$ has no tracial states. My thought: there are many tracial states on $C_0((0,1))$. We take a tracial state $\tau$ on $C_0((0,1))$, then we can define a tracial state $\tilde{\tau}$ on $C_0((0,1))\otimes \mathcal O_2$ as follows: $$\tilde{\tau} (x\otimes y)=\tau(x),\forall x\in C_0((0,1)),y\in \mathcal O_2.$$ Can anyone point out the mistake, thanks!
Your $\tilde\tau$ is not well-defined. You have $$ \tilde\tau(x\otimes 1)=\tau(x)=\tilde\tau(x\otimes (-1))=-\tilde\tau(x\otimes 1)=-\tau(x), $$ a contradiction unless $\tau=0$. If you had a trace $\varphi$ on $A\otimes \mathcal O_2$, it induces a trace $\psi$ on $\mathcal O_2$ by $$ \psi(x)=\varphi(1\otimes x). $$ So $\psi=0$, and one can show that this implies that $\varphi=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3209608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n = \lim_{n\to\infty} \left( 1+\frac{1}{n} \right)^{nx}$ with a certain method. I saw in another post on the website a simple proof that $$\lim_{n\to\infty} \left( 1+\frac{x}{n} \right)^n = \lim_{m\to\infty} \left( 1+\frac{1}{m} \right)^{mx}$$ which consists of substituting $n$ by $mx$. I can see how the equality then holds for positive real numbers $x$, yet it isn't obvious to me why it holds for negative $x$.
For $x$ negative, we must substitute $n$ with $-mx$ with $m$ positive. We can use the fact that we know what the limit $(1-\frac{1}{m})^{-m}$ is, and equate this with the usual definition of $e$.
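For concreteness, here is the negative case written out (my own sketch of the substitution the answer describes, using the known limit $\lim_{m\to\infty}\left(1-\frac1m\right)^{-m}=e$):

```latex
% For x < 0 put n = -mx, so m > 0 and n -> infinity as m -> infinity:
\left(1+\frac{x}{n}\right)^{n}
  = \left(1+\frac{x}{-mx}\right)^{-mx}
  = \left(1-\frac{1}{m}\right)^{-mx}
  = \left[\left(1-\frac{1}{m}\right)^{-m}\right]^{x}
  \xrightarrow[m\to\infty]{} e^{x},
% which agrees with \lim_{m\to\infty} (1+1/m)^{mx} = e^x.
```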
{ "language": "en", "url": "https://math.stackexchange.com/questions/3209722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A mother has two children. What is the probability, given this information that both are girls? The Problem You know that your new neighbors have two children. One day you see the mother taking a walk with a girl. What is the probability that the other child is also a girl? a) Given that the mother chooses the younger child with probability $p$. b) Given that if the children are of different genders, the mother chooses the girl with probability $p$. My solution First of all assume that the mother walks only with her children. Lets mark the observed girl and order them by age, then sample space is $S=\{b^*b,bb^*,b^*g,bg^*,g^*b,gb^*,g^*g,gg^*\}$. We are interested of the case in which both are girls $GG=\{g^*g,gg^*\}$ a) Let $Y=\{\text{The mother chooses to walk with the younger child}\}$, then we have $$ P(GG)=P(GG|Y)P(Y)+P(GG|Y^c)P(Y^c)=\\ =\frac{p/4}{p}p+\frac{(1-p)/4}{1-p}(1-p)=1/2 $$ (Using Law of Total Probability and def. of conditional prob.) b) My attempt is to let $G^*=\{\text{The mother chooses to walk with a girl}\}$, $B^*=\{\text{The mother chooses to walk with a boy}\}$ and $D=\{\text{children has different gender}\}$, then we have $$ P(G^*|D)=p, $$ $$ P(B^*|D)=1-p. $$ But i guess I'm interested in $GG=G^*\cap D^c$.
In $b$, you are interested in $P(D^c | G^*)$, not in $P(D^c \cap G^*)$ - we already know that $G^*$ happened. It's a standard application of Bayes' rule: $P(D^c | G^*) = \frac{P(G^* | D^c) \cdot P(D^c)}{P(G^*)}$. For the numerator, $P(G^* | D^c) = \frac{1}{2}$ (if the children have the same gender, then $G^*$ is equal to both of them being girls) and $P(D^c) = \frac{1}{2}$, thus the numerator is $\frac{1}{4}$. For the denominator let's use the law of total probability (with events $gg$, $gb$ and $bb$ denoting two girls, boy and girl, and two boys): $P(G^*) = P(G^* | gg) \cdot P(gg) + P(G^* | gb) \cdot P(gb) + P(G^* | bb) \cdot P(bb) = 1 \cdot \frac{1}{4} + p \cdot \frac{1}{2} + 0 \cdot \frac{1}{4} = \frac{1 + 2p}{4}$. So the final answer is $\frac{1}{1 + 2p}$. For a sanity check, we can check that if $p = 0$ this probability is $1$ (only mothers with two girls will walk with a girl), and if $p = 1$ then the probability is $\frac{1}{3}$ (all mothers with at least one girl will walk with a girl, and among them $\frac{1}{3}$ have two girls).
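A quick Monte Carlo sanity check of the $\frac{1}{1+2p}$ result (my own sketch, not part of the answer), simulating rule (b) directly:

```python
import random

def simulate(p, trials=100_000, seed=0):
    """Estimate P(both girls | mother seen walking with a girl) under rule (b):
    with mixed-gender children she walks the girl with probability p;
    with same-gender children the walked child is a girl iff both are girls."""
    rng = random.Random(seed)
    seen_girl = both_girls = 0
    for _ in range(trials):
        kids = [rng.choice("bg"), rng.choice("bg")]
        if kids[0] != kids[1]:
            walked = "g" if rng.random() < p else "b"
        else:
            walked = kids[0]
        if walked == "g":
            seen_girl += 1
            if kids == ["g", "g"]:
                both_girls += 1
    return both_girls / seen_girl

# The answer predicts 1/(1+2p): p=1 gives 1/3, p=0 gives 1.
assert abs(simulate(1.0) - 1/3) < 0.01
assert simulate(0.0) == 1.0
assert abs(simulate(0.5) - 0.5) < 0.01
```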
{ "language": "en", "url": "https://math.stackexchange.com/questions/3209878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Mistake in solving $-\int \frac{1}{x\sqrt{x^2-1}}$ I have this function $$f:(-\infty ,-1)\rightarrow \mathbb{R}, f(x)=\frac{1}{x\sqrt{x^{2}-1}}$$ and I need to find the primitives of $f(x)$. So because $x<-1$ I need to calculate $-\int \frac{1}{x\sqrt{x^2-1}}\,dx=-\arctan(\sqrt{x^2-1})+C$, but in my book the correct answer is $\arcsin\left(\frac{1}{x}\right)+C$. Where is my mistake? I solved the integral using $u=\sqrt{x^{2}-1}$
It is known that $$\arctan(x)+\arctan(1/x)=C$$ where $C$ is a constant on each half-line ($\frac{\pi}{2}$ for $x>0$ and $-\frac{\pi}{2}$ for $x<0$). So, up to an additive constant, your solution can be written as $$\arctan\left(\frac1{\sqrt{x^2-1}}\right):=u$$Then using some trig identities, $$\frac1{\sqrt{x^2-1}}=\tan u\\\sec^2 u=1+\tan^2 u=\frac{x^2}{x^2-1}\\\sin^2 u=1-\cos^2 u=1-\left(\frac{x^2-1}{x^2}\right)=\frac1{x^2}\\u=\arcsin\left(\frac1x\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Minimal polynomial of extension of degree 2 over a finite field with characteristic 2 I'm struggling to solve the following question. Let $F$ be a finite field with characteristic 2 and $L/F$ be a finite extension with $[L:F]=2$. Prove that there exists $\alpha\in L$ such that $L = F(\alpha)$ and the minimal polynomial of $\alpha$, $\text{Irr}_F \alpha(x) = x^2-x-a$ for some $a\in F$ My attempt: Let $\alpha\in L\setminus F$, then $F\subset F(\alpha) \subset L$ is a tower, so we have $2 = [L:F] = [L:F(\alpha)][F(\alpha):F]$. Because $\alpha\notin F$, $[F(\alpha):F]=2$ and $[L:F(\alpha)]=1$. So we get $L=F(\alpha)$. Let $p_\alpha(x) = \text{Irr}_F\alpha(x)$. Note that $\deg(p_\alpha(x)) = [F(\alpha):F] = 2$. So we can assume that $p_\alpha(x) = x^2+bx+a \in F[x]$, here $a\in F, b\in F.$ Then, I want to show that there exists $\alpha \in L\setminus F$ such that $b=1$. First, I'm trying to find $\alpha$ such that $b \neq 0$: Observe that if $\alpha^2\in F$, then the minimal polynomial $p(x)$ will be $x^2-\alpha^2$. So I want to find $\alpha$ such that $\alpha^2\notin F$; if such $\alpha$ exists, then $b\neq 0$. But I have no idea how to prove the existence of this $\alpha$. And here comes another problem: If we can find $\alpha\in L\setminus F$ satisfying the condition above, how can we say that $b=1$? So I would like to know whether my thoughts are correct or not, and I want to solve this question with elementary methods (without Galois theory). Thanks for any help!
Consider the polynomials $p_a(x)=x^2-x-a$ for $a\in F$. Observe that for $a\neq b$, both in $F$, the root sets of $p_a$ and $p_b$ are disjoint. This is because a common root $r$ implies $r^2-r-a=0=r^2-r-b$, from where $a=b$ follows. Also, for a given $a\in F$, the two roots, $r_1$ and $r_2$, of $p_a$ satisfy $r_1+r_2=1$. If they were equal, then $0=2r_1=r_1+r_2=1$ (using characteristic $2$), a contradiction. If $p_a$ were reducible over $F$ for all $a\in F$, then the roots of these polynomials would give $2|F|$ different elements of $F$, which is impossible since $F$ has only $|F|$ elements. Therefore, at least one of those polynomials $p_a$ is irreducible over $F$. Then $F[x]/(p_a)$ is a finite field of characteristic $2$ with the same cardinality as $L$.
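Here is a brute-force illustration of the counting argument over $F = \mathbb{F}_4$ (my own sketch; $\mathbb{F}_4$ is modelled as $\mathbb{F}_2[t]/(t^2+t+1)$, and in characteristic $2$ we have $x^2-x-a = x^2+x+a$):

```python
# Elements of GF(4) as pairs (c0, c1) meaning c0 + c1*t, with t^2 = t + 1.
F = [(c0, c1) for c0 in (0, 1) for c1 in (0, 1)]
ZERO = (0, 0)

def add(u, v):
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    # (a0 + a1 t)(b0 + b1 t), reduced with t^2 = t + 1
    a0, a1 = u; b0, b1 = v
    c0, c1, c2 = a0*b0, a0*b1 + a1*b0, a1*b1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

# Roots of p_a(x) = x^2 + x + a in F, for each a in F.
roots = {a: [x for x in F if add(add(mul(x, x), x), a) == ZERO] for a in F}

# Each element of F is a root of exactly one p_a (disjointness)...
assert sum(len(r) for r in roots.values()) == len(F)
# ...reducible p_a's have two distinct roots, so some p_a has none,
# i.e. is irreducible over F:
assert sorted(len(r) for r in roots.values()) == [0, 0, 2, 2]
```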
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove that $a^x - b^y = 0$ where $x = \sqrt{\log_a b}$ & $y = \sqrt{\log_b a}$ , $a > 0$, $b > 0$ & $a, b \ne1$ I'm solving logarithm questions. I got stuck on this question. Prove that $a^x - b^y = 0$ where $x = \sqrt{\log_a b}$ & $y = \sqrt{\log_b a}$ , $a > 0$, $b > 0$ & $a, b \ne1$ I've tried to solve it, but I'm unable to understand how to handle the square root after putting in the values of $x$ and $y$. Please explain how to solve it, at the level of a class 11 student.
Hint: $$y = \frac{1}{x},$$ and $$a^x = b^{1/x} \iff a^{x^2}=b$$
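A worked version of the hint (my own write-up; the key fact is $\log_b a = \frac{1}{\log_a b}$, and both $x$ and $y$ are positive):

```latex
xy = \sqrt{\log_a b}\,\sqrt{\log_b a}
   = \sqrt{\log_a b \cdot \tfrac{1}{\log_a b}} = 1
   \quad\Longrightarrow\quad y = \frac{1}{x}.
% Then compare a^x with b^{1/x} by taking log_b of both:
\log_b(a^x) = x \log_b a = \frac{x}{\log_a b} = \frac{x}{x^2} = \frac{1}{x}
            = \log_b\!\left(b^{1/x}\right),
% hence a^x = b^{1/x} = b^y, i.e. a^x - b^y = 0.
```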
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the CDF and PDF of $W$ where $W = .7 - b(.7 - y)^2$, $0<b<1$ A random variable $Y \sim U[0,1]$. Let $W = \frac7{10} - b\cdot(\frac7{10} - y)^2$, $0<b<1$. Completely specify the CDF and PDF of $W$. Also show that the PDF of $W$ integrates to $1$. So I have worked through most of this problem. Importantly, the transformation is $2:1$ on the interval $y \in (\frac25,1]$ and $1:1$ on the interval $y \in [0, \frac25)$. Thus I think the CDF is: $$\begin{cases} \frac3{10}+\sqrt{\frac{7/10 - w}b } & \text{for}~~ w \in (\frac7{10} - \frac{49b}{100} ,\, \frac7{10} - \frac{9b}{100}) \\ 2\sqrt{ \frac{7/10 - w}b } & \text{for}~~ w \in (\frac7{10} - \frac{9b}{100} ,\, \frac7{10}) \\ \end{cases}$$ Can anyone confirm this? If I know this is right, I can figure out the rest on my own. When integrating the PDF, I am running into some strange (${}+{}$ or ${}-{}$) situations that integrate to $1$ only if I select a certain sign, so I am having trouble verifying the CDF is correct by integrating the PDF over its support.
That is not quite what I get... $$\begin{align}F_W(w) &=\mathsf P(0.7 -b(0.7-Y)^2\leqslant w) \\[1ex]&=\mathsf P(Y\leq 0.7-\sqrt{\tfrac{0.7-w}b})+\mathsf P (Y\geq 0.7+\sqrt{\tfrac{0.7-w}b}) \\[1ex]&=(0.7-\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0\leq (0.7-\sqrt{\tfrac{0.7-w}b})\leq 1}+(0.3-\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0\leq (0.7+\sqrt{\tfrac{0.7-w}b})\leq 1}+\mathbf 1_{0.7<w} \\[1ex]&=(0.7-\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0.7-0.49b\leq w\leq 0.7}+(0.3-\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0.7-0.09b\leq w\leq 0.7}+\mathbf 1_{0.7\lt w} \\[1ex]&=(0.7-\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0.7-0.49b\leq w\lt 0.7-0.09b}+(1-2\sqrt{\tfrac{0.7-w}b})\mathbf 1_{0.7-0.09b\leq w\lt 0.7}+\mathbf 1_{0.7\leq w} \\[3ex] F_W(w) &= \begin{cases}0&:& \quad w\lt 0.7-0.49b\\0.7-\sqrt{(0.7-w)/b~}&:& 0.7-0.49b\leqslant w\lt 0.7-0.09b\\1-2\sqrt{(0.7-w)/b~}&:& 0.7-0.09b\leqslant w< 0.7\\1&:& 0.7\leqslant w \end{cases} \end{align}$$ To check, I have $F_W([0.7-0.49b]^-)=0$ and $F_W([0.7]^+)=1$ as we require.
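A simulation check of this CDF (my own sketch, with an arbitrary choice $b = 0.5$ and one test point in each nontrivial piece):

```python
import random

def F_W(w, b):
    """CDF from the answer, piecewise in w."""
    if w < 0.7 - 0.49*b:
        return 0.0
    if w >= 0.7:
        return 1.0
    s = ((0.7 - w) / b) ** 0.5
    if w < 0.7 - 0.09*b:
        return 0.7 - s
    return 1 - 2*s

def empirical(w, b, n=200_000, seed=1):
    """Empirical P(W <= w) for W = 0.7 - b(0.7 - Y)^2 with Y ~ U[0,1]."""
    rng = random.Random(seed)
    hits = sum(0.7 - b*(0.7 - rng.random())**2 <= w for _ in range(n))
    return hits / n

b = 0.5
# w = 0.5 lies in the first piece, w = 0.68 in the second.
assert abs(F_W(0.68, b) - 0.6) < 1e-9        # 1 - 2*sqrt(0.04)
for w in (0.5, 0.68):
    assert abs(F_W(w, b) - empirical(w, b)) < 0.01
```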
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\inf(S_1)=\sup(S_2)$ $f:I \to \Bbb R$, $I=[0,1]$ continuous and differentiable on $(0,1)$ s.t $f(0)<0<f(1)$ and $f'(x)\neq 0$ $\forall x \in (0,1)$. Let $S_1=\{x \in I | f(x) >0\}$ and $S_2=\{x \in I | f(x) <0\}$. Prove that $\inf(S_1)=\sup(S_2)$. My attempt: If $\exists x\in I$ s.t $f(x)<f(0)$ then $\exists c\in I$ s.t $f(c)=f(0)$ contradiction by Rolle's thm. So $f'(0)>0$ Claim: $f'(x)>0$, $\forall x \in (0,1)$ Suppose not then $\exists x_1\in I$ s.t $f'(x_1)<0$. Consider, $d=\inf\{x\in I|f'(x)<0\}$ If we can prove that $d \neq 0$ then $\exists a,b\in (d-\epsilon,d+\epsilon)\subseteq [0,1]$ s.t $f(a)=f(b)$ contradiction by Rolle's thm. So, $f'(x)>0$, $\forall x \in (0,1)$ $\Rightarrow \inf(S_1)=\sup(S_2)$. Done. My question is: How can we prove that $d \neq 0$?
Since $f'$ has IVP and $f'(x) \neq 0$ for all $x$ it follows that $f'(x) >0$ for all $x$ or $f'(x) <0$ for all $x$. In other words, $f$ is strictly increasing or strictly decreasing. But $f(0)<f(1)$ so the second possibility is ruled out and $f$ must be strictly increasing. Now let $a$ be the infimum of $S_1$ and $b$ be the supremum of $S_2$. $b \leq a$ because $x \in S_2, y \in S_1$ implies $x <y$. Next note that $f(a)=0$. This is an easy consequence of continuity of $f$ and I will leave it to you to check this. Hence $f(t) <0$ (and $t \in S_2$) for all $t <a$. As a consequence $b \geq t$ for any $t <a$. Hence $b \geq a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is the set of non invertible matrices simply connected? What are their homotopy and homology groups? It is fairly easy to see that the set of non invertible matrices is path connected. Are they simply connected? If not, what is their fundamental group? What are their homotopy and homology groups? I'm looking for the answer to any of these questions. Any examples for particular (non 0 or 1) dimensions are welcome as well, as well as for real or complex coefficients. (Sorry about the phrasing, it is currently 3:30 am and I will edit the question in the morning)
The space of non-invertible matrices (with real or complex coefficients) is contractible. There is an explicit homotopy between the identity map and the constant map equal to the zero matrix, simply given by: $$H(A,t) = tA.$$ In particular it is simply connected, and all its higher homotopy and homology groups vanish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
Prove $A^2 = A$ where $A = I_n − \alpha\alpha^T$ and $\alpha$ is an $n × 1$ vector with $\alpha^T\alpha=1$ Consider a $n × n$ matrix $A = I_n − \alpha\alpha^T$, where $I_n$ is the $n × n$ identity matrix and $α$ is an $n × 1$ column vector such that $\alpha^T\alpha = 1$. Show that $A^2 = A$. My proof: $\alpha\alpha^T$ is a $n × n$ matrix. It is given that $\alpha^T\alpha = 1$. Multiplying both sides of this by the inverse of $\alpha^T$ gives, ${{\alpha^T}^{-1}}\alpha^T={\alpha^T}^{-1}I$. $I\alpha={\alpha^T}^{-1}$ which means $\alpha={\alpha^T}^{-1}$. Now, $A = I_n − \alpha\alpha^T$, =$I_n - {\alpha^T}^{-1}{\alpha^T}$ which means $A=I_n-I_n=\mathsf{O}$. Therefore, $\forall n \in \mathbb{N}, A^n =\mathsf{O}$. Is it correct?
$\alpha^{T}$ is a vector; the inverse of a vector does not make sense. Just calculate $A^{2}$: $A^{2}=I-2\alpha \alpha^{T} +\alpha \alpha^{T}\alpha \alpha^{T}$. Since $\alpha (\alpha^{T}\alpha) \alpha^{T}=\alpha \alpha^{T}$ by hypothesis, we get $A^{2}=A$.
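An exact numerical check (my own sketch, with the hypothetical choice $\alpha = (3/5, 4/5)$, which satisfies $\alpha^T\alpha = 1$ exactly; it also shows $A$ is not the zero matrix, contrary to the attempted proof):

```python
from fractions import Fraction as Fr

a = [Fr(3, 5), Fr(4, 5)]                   # example unit vector
assert sum(x*x for x in a) == 1            # alpha^T alpha = 1 exactly

n = len(a)
I = [[Fr(int(i == j)) for j in range(n)] for i in range(n)]
A = [[I[i][j] - a[i]*a[j] for j in range(n)] for i in range(n)]   # A = I - aa^T

A2 = [[sum(A[i][k]*A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert A2 == A                             # idempotent: A^2 = A
assert any(x != 0 for row in A for x in row)   # ...and A is not O
```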
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $P$ is an invertible linear operator on a finite dimensional vector space over a finite field, then there is some $n>0$ such that $P^n=I$ From Berkeley problems in Mathematics: problem 7.4.21: let $P$ be a linear operator on a finite dimensional vector space over a finite field. show that if $P$ is invertible, then $P^n$ = $I$ for some positive integer $n$. I think there is a mistake in the above statement: it does not hold true if P is a scalar multiple of the identity operator. But is the statement correct otherwise? i.e. is the following modified statement correct? let $P$ be a linear operator on a finite dimensional vector space over a finite field. assume $P$ is not a scalar multiple of the identity operator. show that if $P$ is invertible, then $P^n$ = $I$ for some positive integer $n$.
I'll make a try: Consider the powers $P,P^2,P^3,\ldots$. They cannot all be distinct, because there are only finitely many matrices with entries in a finite field. So $P^t=P^s$ for some $t>s$, and multiplying both sides by $(P^{-1})^s$ gives $P^{t-s}=I$, since $P$ is invertible.
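A brute-force check of the claim in the smallest interesting case (my own sketch): every element of $GL_2(\mathbb{F}_2)$ has finite order, found by iterating powers.

```python
from itertools import product

def matmul(X, Y, p=2):
    """2x2 matrix product with entries reduced mod p."""
    return tuple(tuple(sum(X[i][k]*Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
all_mats = (tuple((r1, r2)) for r1 in product(range(2), repeat=2)
                            for r2 in product(range(2), repeat=2))
invertible = [M for M in all_mats
              if (M[0][0]*M[1][1] - M[0][1]*M[1][0]) % 2 != 0]
assert len(invertible) == 6                # |GL_2(F_2)| = (4-1)(4-2) = 6

for M in invertible:
    Q, n = M, 1
    while Q != I:                          # terminates: powers must repeat
        Q, n = matmul(Q, M), n + 1
    assert n <= 6                          # the order divides |GL_2(F_2)|
```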
{ "language": "en", "url": "https://math.stackexchange.com/questions/3210960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why are $q$ and $p+1$ relatively prime if $q$ divides $p-1$? Looking at the solution of an exercise on Sylow theorems, I see that, if $q$ and $p$ are odd primes such that $q|(p-1),$ then $q$ and $p(p+1)$ are relatively prime. I understand that $q$ and $p$ are relatively prime, but is there a reason (theorem) why $q$ and $(p+1)$ are relatively prime? Thank you for your help.
Consider $q = 3$, $p = 7$. Obviously in this instance $p - 1$ is a multiple of $q$; let's say $p - 1 = mq$, where $m \geq 2$. Then $$p(p + 1) = p^2 + p = (mq + 1)^2 + mq + 1.$$ And $$(mq + 1)^2 + mq + 1 = m^2 q^2 + 2mq + 1 + mq + 1 = m^2 q^2 + 3mq + 2.$$ Since $q$ is at least $3$, and $m^2 q^2 + 3mq$ is a multiple of $q$, and the next higher multiple is $m^2 q^2 + 3mq + q$, it follows that $\gcd(q, p^2 + p) = 1$. If you've been following along with the example, you've seen that $\gcd(3, 56) = 1$, that $54$ is a multiple of $3$ and the next multiple of $3$ is $57$.
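A quick exhaustive check over small odd primes (my own sketch; the short reason is that $p \equiv 1 \pmod q$ forces $p+1 \equiv 2 \pmod q$, and $2 \not\equiv 0$ since $q \geq 3$):

```python
from math import gcd

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
checked = 0
for q in odd_primes:
    for p in odd_primes:
        if p != q and (p - 1) % q == 0:    # q | p - 1
            # p = 1 (mod q) gives p + 1 = 2 (mod q); q is an odd prime,
            # so q divides neither p nor p + 1.
            assert gcd(q, p * (p + 1)) == 1
            checked += 1
assert checked > 0    # e.g. (q, p) = (3, 7), the example in the answer
```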
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is a non-logical concrete example of a choiceless elementary topos? What is a non-logical concrete example of a choiceless elementary topos? The elementary topoi made from choiceless models of ZF, or stuff like the realizability topos, are (I believe) choiceless, but is there a concrete example from other branches of mathematics?
Assuming you mean choice in the "every epimorphism has a section" sense, counterexamples among categories of presheaves are plentiful. For a very quick and silly example, there's the category $\mathbf{Set}^G$ for a non-trivial group $G$. In this case, the sole representable functor in the category is the transitive action of the group on itself, so it has no fixed points; so there is no section of the unique map from this object to the terminal object of $\mathbf{Set}^G$. Another, similar example is the category $\mathbf{Set}^{\bullet\rightrightarrows\bullet}$ of directed multigraphs. Consider the graph with two vertices, and one arrow in each direction between them; then the unique map from this to the terminal object also doesn't have a section. More generally, toposes satisfying choice have a classical internal logic, so choosing any topos with a strictly intuitionistic internal logic will involve some failure of choice. You can hardly throw a stone without hitting an example of a topos where choice fails.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Construct $\varphi (z)$ such that $\int_{|z|=1} \frac{\varphi (z)}{z-w} dz =0$ I have this problem to complex analysis. Construct $\varphi (z)$ a continuous function nonzero in $S^{1}$ such that $$\int_{|z|=1} \frac{\varphi (z)}{z-w} dz =0$$ for $|w|<1$. I have the idea to take $\varphi (z) = (z-w)f(z)$ with $f(z)$ analytic function that is nonzero in $S^{1}$ and use Cauchy theorem for integral, but I am not sure that this is correct. Note: English is not my first language so I am sorry if I did any mistake.
Take $\varphi(z)=\frac1z$. Then, if $w=0$,$$\int_{\lvert z\rvert=1}\frac{\varphi(z)}{z-w}\,\mathrm dz=\int_{\lvert z\rvert=1}\frac1{z^2}\,\mathrm dz=0.$$And, if $w\neq0$,\begin{align}\int_{\lvert z\rvert=1}\frac{\varphi(z)}{z-w}\,\mathrm dz&=\frac1w\int_{\lvert z\rvert=1}\frac1{z-w}-\frac1z\,\mathrm dz\\&=0,\end{align}because$$\int_{\lvert z\rvert=1}\frac1{z-w}\,\mathrm dz=\int_{\lvert z\rvert=1}\frac1z\,\mathrm dz=2\pi i.$$
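A numerical confirmation (my own sketch): the trapezoid rule on the parametrization $z = e^{it}$ converges very fast for integrands analytic near the circle, so both integrals in the answer can be checked to high accuracy.

```python
import cmath, math

def contour_integral(f, n=4096):
    """Trapezoid-rule approximation of the integral of f(z) dz over |z| = 1,
    using z = exp(i t), dz = i exp(i t) dt."""
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * h * k)
        total += f(z) * 1j * z * h
    return total

w = 0.3 + 0.2j    # an arbitrary point with |w| < 1
assert abs(contour_integral(lambda z: (1/z) / (z - w))) < 1e-8     # vanishes
assert abs(contour_integral(lambda z: 1/z) - 2j*math.pi) < 1e-8    # 2*pi*i
```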
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate $\mathbb{P}[Y=y|X=x]$ where $X$ = # claims reported during first year, $Y$ = # total of claims that will eventually be reported A property-casualty insurance company issues automobile policies on a calendar year basis only. Let $X$ be a random variable representing the number of accident claims reported during calendar year 2005 on policies issued during calendar year 2005. Let $Y$ be a random variable representing the total number of accident claims that will eventually be reported on policies issued during calendar year 2005. The probability that an individual accident claim on a 2005 policy is reported during calendar year 2005 is $d$. Assume that the reporting times of individual claims are mutually independent. Assume also that $Y$ has the negative binomial distribution, with fixed parameters $r$ and $p$, given by $$\mathbb{P}[Y=y]=\binom{r+y-1}{y}p^{r}(1-p)^{y}$$ for $y=0,1,\ldots$. Calculate $\mathbb{P}[Y=y|X=x]$, the probability that the total number of claims reported on 2005 policies is $y$, given that $x$ claims have been reported by the end of the calendar year. Remark: I know that the solution requires the use of Bayes' theorem and the theorem of total probability, and the identity $\binom{y}{x}\binom{r+y-1}{y}=\binom{r+x-1}{x}\binom{(r+x)+(y-x)-1}{y-x}$. I have not been able to correctly describe $X$ or include $d$ in the analysis. I need your help to understand this better.
$Y$ represents the total random claim count on year 2005 issued policies. $X$ represents the subset of those claims that were reported in the same calendar year of issue. For a sufficiently large number of claims made, you would expect that $X \approx Y \cdot d$. Another way to say this is that given that $Y$ claims are reported across year 2005 issued policies, the random number $X$ of these claims that were reported in the same year is binomial, since each such claim is independent and identically distributed with probability $d$ of occurring in the same calendar year. Formally, we would write $$X \mid Y \sim \operatorname{Binomial}(Y, d), \\ \Pr[X = x \mid Y = y] = \binom{y}{x} d^x (1-d)^{y-x}, \quad x \in \{0, 1, \ldots, y\}.$$ So now we apply the law of total probability to obtain the marginal (unconditional) distribution of $X$: $$\Pr[X = x] = \sum_{y=0}^{\infty} \Pr[X = x \mid Y = y] \Pr[Y = y].$$ Then, we apply Bayes' theorem: $$\Pr[Y = y \mid X = x] = \frac{\Pr[X = x \mid Y = y]\Pr[Y = y]}{\Pr[X = x]},$$ where the numerator is simply the summand in the previous equation, and the denominator is the value of that sum. I have left the actual computation of these to you as an exercise, as I regard the salient aspect of your question to be the fact that $X \mid Y$ is binomial.
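Here is a numerical version of the recipe (my own sketch, with arbitrary parameter values; carrying out the exercise with the hinted identity gives $Y - x \mid X = x \sim \operatorname{NegBinomial}(r+x,\ 1-(1-p)(1-d))$, which the code checks against the Bayes computation):

```python
from math import comb, isclose

def p_Y(y, r, p):                 # negative binomial pmf from the problem
    return comb(r + y - 1, y) * p**r * (1 - p)**y

def p_X_given_Y(x, y, d):         # binomial thinning of the claim count
    return comb(y, x) * d**x * (1 - d)**(y - x) if 0 <= x <= y else 0.0

def posterior(x, r, p, d, ymax=400):
    """P[Y = y | X = x] for y = 0..ymax, via total probability + Bayes."""
    joint = [p_X_given_Y(x, y, d) * p_Y(y, r, p) for y in range(ymax + 1)]
    px = sum(joint)               # law of total probability, truncated
    return [j / px for j in joint]

x, r, p, d = 2, 3, 0.4, 0.5      # arbitrary illustrative parameters
post = posterior(x, r, p, d)
assert post[0] == 0.0 and post[1] == 0.0   # y >= x is forced

fail = (1 - p) * (1 - d)          # per-claim weight in the posterior NB
for k in range(10):
    nb = comb(r + x + k - 1, k) * (1 - fail)**(r + x) * fail**k
    assert isclose(post[x + k], nb, rel_tol=1e-9)
```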
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
if $f’(x)/g’(x) = f(x)/g(x)$, what can we say about the relationship? Suppose for all $x$, $$f’(x)/g’(x) = f(x)/g(x),$$ Then what can we say about the relationship between $f(x)$ and $g(x)$? I think the only solution is $f(x) = cg(x)$ for some non-zero constant $c$. Is there any other possible relationship?
Then the numerator of the quotient rule for the derivative of $f/g$ vanishes: $f'/g' = f/g$ gives $f'g - fg' = 0$, so $(f/g)' = 0$ and $f/g$ is constant, i.e. $f = cg$, on each interval where $g \neq 0$. More work is needed for a full proof (e.g. handling the zeros of $g$; the hypothesis already requires $g'(x) \neq 0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What are all $N$ that make $2^N + 1$ divisible by $3$? When $N = 1$, $2^N + 1 = 3$, which is divisible by $3$. When $N = 7$, $2^N + 1 = 129$, which is also divisible by $3$. But when $N = 2$, $2^N + 1 = 5$, which is not divisible by $3$. A quick check suggests that when $N$ is an odd number, $2^N + 1$ is divisible by $3$. If that is correct, how can it be proven? Otherwise, what is the counterexample and the correct $N$? Can this be made into a more general form, for example $2^N + C$ or something similar?
$((-1)+3)^N+1= $ $\sum_{k=0}^{N}\binom{N}{k}(-1)^{N-k}3^k+1=$ $(-1)^N +1+\sum_{k=1}^{N}\binom{N}{k}(-1)^{N-k}3^k.$ Every term of the last sum is divisible by $3$, so the expression is divisible by $3$ $\iff$ $(-1)^N+1=0.$ Hence?
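A direct check of the pattern, including the generalisation asked about (my own sketch; the point is $2 \equiv -1 \pmod 3$, so $2^N + C \equiv (-1)^N + C \pmod 3$, and divisibility depends only on the parity of $N$):

```python
# 2^N + 1 is divisible by 3 exactly for odd N:
odd_hits = [n for n in range(1, 30) if (2**n + 1) % 3 == 0]
assert odd_hits == list(range(1, 30, 2))

# Generalisation 2^N + C: reduce 2 to -1 mod 3 first.
for c in range(6):
    for n in range(1, 20):
        assert ((2**n + c) % 3 == 0) == (((-1)**n + c) % 3 == 0)
```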
{ "language": "en", "url": "https://math.stackexchange.com/questions/3211860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Volume of Viviani’s Window I’m taking Real Analysis and I have an assignment to do on calculating the volume of Viviani’s window or dome. I have to solve this for the sphere $$x^2+y^2+z^2=4$$ and the cylinder $$(x-1)^2+y^2=1$$ Here's a visual representation of the problem: Here's a figure obtained from the intersection of the 2 figures. (I need to find the volume of that figure.) The image shows a curve which does not have a volume, but to understand what volume the problem is asking for, I imagine the inside as if it were filled with some material. My problem is that all I have seen on the Internet about this problem involves multivariable calculus, parametrization, polar coordinates, etc. I have not seen any of these yet, so I have to do this problem with integrals in only one variable. The formulas we have seen in class about the applications of the Riemann integral are the following: * *Arc length L of a curve $$L= \int_{a}^{b} \sqrt{1+(f’(x))^2} dx$$ *Volume $$V_1=\int_{a}^{b} \pi (f(x))^2 dx$$ $$V_2=\int_{a}^{b} A(x)dx$$ $A(x)$ is the area of a section of the figure *Lateral area $$\int_{a}^{b}2\pi f(x) \sqrt{1+(f’(x))^2}dx$$ I thought of expressing the sphere equation and the cylinder equation in terms of the same variable and integrating. The reason behind this was that Viviani’s dome is composed of the points that belong to the cylinder and sphere at the same time, but this reasoning has failed and I don’t know why. Any help is very much appreciated and I’m sorry for my broken English. EDIT This is all I have for now. Need help for $A(x)$. I don’t know how to treat $x$ as a parameter inside the integral and at the bounds of the integral. EDIT 2 @Ertxiem I finally solved the integral and got the final answer: $$\frac{16}{9}(3\pi-4)$$ I still have a few questions though: * *When you said: “You can start by thinking about the base.” what did you really mean by the base? 
I can’t really see what the base of a figure like that would be, since it’s a curve-like figure; how would you visualize that? If you meant the base of the figure, then I understand that studying it is necessary in order to compute the volume, but at what point was that computation used? *What exactly is $b(x)$ and why is it important? *Regarding the height, $A(x)$ is the integral of a function, which is the height taking $x$ as a parameter and $y$ as a variable, but the graph of the function is in the first quadrant, so doesn’t this mean that we are actually integrating height/2? Conclusion: Find the base, find the height, find the area of a section by integrating the previous results, find the volume by integrating the area of a section. Please point out any flaws or errors. Any suggestions for a better explanation or understanding of the problem are welcome. Thanks!
Here is a detailed explanation of @DanielWainfleet's idea. Let $\mathcal{V}$ denote the region. Then the intersection of $\mathcal{V}$ and the plane $x = x_0$ is described by $$ \mathcal{I}(x_0) \quad : \quad y^2 + z^2 \leq 4 - x_0^2 \quad \text{and} \quad y^2 \leq 1 - (1 - x_0)^2.$$ The area of $\mathcal{I}(x_0)$ can be computed by decomposing this into 4 congruent wedges and two congruent isosceles triangles as in the following figure. Indeed, note that the four corners of $\mathcal{I}(x_0)$ are given by $(\pm \sqrt{x_0(2-x_0)}, \pm \sqrt{2(2 - x_0)})$, which follows from solving the system of equations $y^2 + z^2 = 4-x_0^2$ and $y^2 = 1 - (1-x_0)^2$. From this, the angle of each of the four wedges is given by $\arctan(\sqrt{x_0/2})$, and so, the area of $\mathcal{I}(x_0)$ is given by $$ S(x_0) := 2(4 - x_0^2) \arctan(\sqrt{x_0 / 2}) + 2 \sqrt{2x_0}(2 - x_0) $$ Then the volume of $\mathcal{V}$ is obtained by integrating this with respect to $x_0$ over $[0, 2]$. Then, \begin{align*} \text{[volume of $\mathcal{V}$]} &= \int_{0}^{2} S(x) \, \mathrm{d}x \\ &= \int_{0}^{2} \left[ 2(4 - x^2) \arctan\left(\sqrt{\frac{x}{2}}\right) + 2 \sqrt{2x}(2 - x) \right] \, \mathrm{d}x \\ &= \left[ \frac{2}{3} (4-x) (x+2)^2 \arctan\left(\sqrt{\frac{x}{2}}\right) - \frac{2}{9} \sqrt{2x} \left(3x^2 - 10x + 24\right) \right]_{0}^{2} \\ &= \frac{16 \pi}{3} - \frac{64}{9}. \end{align*}
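A numerical cross-check of the cross-section formula and the final value (my own sketch; note $\frac{16\pi}{3} - \frac{64}{9} = \frac{16}{9}(3\pi - 4)$, matching the value in the question's EDIT 2):

```python
import math

def S(x):
    """Cross-sectional area of the region at the plane of abscissa x."""
    return 2*(4 - x*x)*math.atan(math.sqrt(x/2)) + 2*math.sqrt(2*x)*(2 - x)

# Midpoint rule on [0, 2]; the sqrt(x) factor is integrable and harmless here.
n = 200_000
h = 2.0 / n
volume = h * sum(S((k + 0.5) * h) for k in range(n))

exact = 16*math.pi/3 - 64/9
assert abs(exact - (16/9)*(3*math.pi - 4)) < 1e-12   # same closed form
assert abs(volume - exact) < 1e-4                    # quadrature agrees
```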
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
What does $d \mathbb{P}(\omega)$ mean in expectation of r.v. $f$? What does $d \mathbb{P}(\omega)$ mean in the expectation of the r.v. $f$? $$\mathbb{E} f = \int_{\Omega} f(\omega) d \mathbb{P}(\omega)$$ Yes, sure, it's some "infinitesimal", but should this mean that $\mathbb{P}(\omega)$ is a variable of $f$? Since in elementary integrals: $$\int_X f(x) dx$$ and $x$ is a variable of $f$. $\omega \in \Omega$ by def., but why is it $d \mathbb{P}(\omega)$ then and not just $d \omega$?
Part of what might be confusing to you already shows up in discrete probability. Here is an example: we roll a fair die, and let $X$ be the resulting number. We (that is, math teachers) say things like, the expected value of $X$ is $EX=\sum_{i=1}^6 i (1/6)$ and the expected value of $X^2$ is $EX^2=\sum_{i=1}^6 i^2 (1/6)$. Suppose we also roll an unfair die, and call the result $Y$, for which $P(Y=i) = p_i$ where not all the $p_i$ values are $1/6$. We say the expectations of $Y$ and $Y^2$ are $EY=\sum_{i=1}^6 i p_i$ and $EY^2 = \sum_{i=1}^6 i^2p_i$. In all cases the expectation is a weighted sum of the value we are talking about, and the system of weights we assign to the fundamental outcomes. In more advanced textbooks you will often see expressions like $EX=\int_\Omega X(\omega) P(d\omega)$, where $\omega$ plays the role of $i$ in my examples in this paragraph, where $P(\omega)$ plays the role of $p_i$ and so on. In this way, we think of expectations as arising from two ingredients: a random quantity and the corresponding probability distribution. In your case, the expression $f(\omega)$ is the random quantity whose expectation we seek, and $d \mathbb P(\omega)$ is the corresponding probability distribution. You might be also confused by the use of the letter $f$, which in textbooks is often used more-or-less exclusively as part of the probability law specification. Your example, however, is not an instance.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix similarity by changing basis Let $$A= \pmatrix{0&1&1\\1&0&0\\2&1&0}$$ and $$B = \pmatrix{0&1&0\\1&0&1\\1&2&0}.$$ Show that $A$ and $B$ are similar over $\mathbb R$. We can do this by showing that $A$ and $B$ are similar to the same diagonal matrix: they have the same characteristic polynomial, i.e. $$\chi = X^3 - 3X - 1.$$ We can then painfully show that $\chi$ has three distinct real roots $\alpha, \beta, \gamma$ so $A$ and $B$ are similar to $$D = \pmatrix{\alpha&0&0\\0&\beta&0\\0&0&\gamma}.$$ Hence they are similar. I want to show that $A$ and $B$ are similar using a simpler method: Let $u$ be the linear map such that $A$ is the matrix of $u$ in basis $\mathcal B = (e_1, e_2, e_3)$. My question is how can I find a basis $\mathcal C = (f_1, f_2, f_3)$ such that $u$ has matrix $B$ in this basis?
Observe that one matrix is obtained from the other by swapping the first two columns and then the first two rows. This means that the permutation matrix $P=\pmatrix{0&1&0\\1&0&0\\0&0&1}$ is the change of basis that you are after, in other words: $$A=PBP^{-1}$$ where moreover $P^{-1}=P$. Multiplication by $P$ on the right will swap the columns of $B$, and then multiplication on the left will swap the rows. See another similar (!) question here.
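This is easy to verify by direct multiplication (a quick check, not part of the argument):

```python
# Verify A = P B P^{-1}, where P swaps the first two basis vectors
# and P^{-1} = P.
A = [[0, 1, 1], [1, 0, 0], [2, 1, 0]]
B = [[0, 1, 0], [1, 0, 1], [1, 2, 0]]
P = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(P, P) == I3               # P is an involution, so P^{-1} = P
assert matmul(matmul(P, B), P) == A     # PB swaps rows, (PB)P swaps columns
```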
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What's the remainder when $x^{7} + x^{27} + x^{47} +x^{67} + x^{87}$ is divided by $x ^ 3 - x$ What's the remainder when $x^{7} + x^{27} + x^{47} +x^{67} + x^{87}$ is divided by $x ^ 3 - x$ in terms of $x$? I tried factoring $x$ from both polynomials but I don't know what to do next since there'd be a $1$ in the second polynomial. Any help would be appreciated!
$xf(x^2)\,\bmod\, x(x^2\!-\!1)\, =\, x\,(\overbrace{f(\color{#c00}{x^2})\,\bmod\, x^2\!-\!1}^{\color{#c00}{\Large x^2\ \equiv\,\ 1}})\, =\, xf(\color{#c00}{ 1})\, =\, 5x$
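The same remainder can be cross-checked by explicit polynomial long division in Python (a small sketch; coefficients are listed from the highest degree down):

```python
def poly_mod(p, m):
    """Remainder of p modulo a monic polynomial m (coefficient lists, highest degree first)."""
    p = p[:]
    while len(p) >= len(m):
        c = p[0]
        for i in range(len(m)):
            p[i] -= c * m[i]      # cancel the leading term
        p.pop(0)                  # leading coefficient is now zero
    return p

# x^87 + x^67 + x^47 + x^27 + x^7
p = [0] * 88
for d in (7, 27, 47, 67, 87):
    p[87 - d] = 1

rem = poly_mod(p, [1, 0, -1, 0])  # modulo x^3 - x
assert rem == [0, 5, 0]           # i.e. the remainder is 5x
```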
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Regular expression that represents the words that start with $a$ and have an odd quantity of $a$'s Find a regular expression that represents the language "The words over the alphabet $\{a,b,c\}$ that end with a pair of letters, or start with $a$ and have in total an odd quantity of $a$'s". The null word is represented by $\varepsilon$. I know how to represent the first part: $$(a+b+c)^*(a+b+c)^2.$$ For example, the word $abbb\in L$ because it ends with $bb$, and the RE $(a+b+c)^*(a+b+c)^2$ accepts it. But I am not able to complete the other part of the statement: the incomplete regular expression is $$(a+b+c)^*(a+b+c)^2+\text{???}.$$ I know that if one wants to represent odd $a$'s then the RE is $a(aa)^*$, but here the order matters. If I say $$a(a+b+c)^*a(\varepsilon+b+c)^*(aa)^*(b+c)^*$$ then the word $abaaaa\in L$ but $ababa$ is not in the language. Some other words that are in the language: $aaa$, $aabcabaa$, $abc$, $abbbcaabaacaaaa$. Thanks!!
The answer is: $a\big((b+c)^*a(b+c)^*a\big)^*(b+c)^*$.
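One can sanity-check such an expression mechanically. The sketch below tests $a\big((b+c)^*a(b+c)^*a\big)^*(b+c)^*$ (a form with a trailing $(b+c)^*$, so words such as $abc$ that end in letters other than $a$ are accepted) against the examples from the question and a brute-force enumeration, writing `[bc]` for $(b+c)$:

```python
import re
from itertools import product

# a ((b+c)* a (b+c)* a)* (b+c)*   -- "starts with a, odd number of a's"
pattern = re.compile(r'a([bc]*a[bc]*a)*[bc]*')

for w in ['a', 'aaa', 'abc', 'aabcabaa', 'abbbcaabaacaaaa']:
    assert pattern.fullmatch(w) is not None, w
for w in ['', 'b', 'aa', 'abca', 'baa', 'caa']:
    assert pattern.fullmatch(w) is None, w

# brute-force cross-check against the defining property, for all short words
for n in range(6):
    for letters in product('abc', repeat=n):
        w = ''.join(letters)
        in_language = w.startswith('a') and w.count('a') % 2 == 1
        assert (pattern.fullmatch(w) is not None) == in_language, w
```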
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What formula could generate this sequence related to the Collatz conjecture The collatz conjecture states that every number eventually reaches $1$ under the repeated iteration of $$ f_0(n) = \begin{cases} n/2, & \text{if $n$ even} \\ 3n+1, & \text{else} \end{cases}$$ As a number is guaranteed to be even after the $3n+1$ step, one can replace $f_0(n)$ with $$ f_1(n) = \begin{cases} n/2, & \text{if $n$ even} \\ \frac{3n+1}{2}, & \text{else} \end{cases}$$ and obtain an equivalent conjecture. One can tabulate the possible expressions that can arise from applying $f_1$ to $n$ $x$ times, as is shown in the following table. \begin{array}{|c|c|c|c|} \hline \frac{n}{2^1}& \frac{n}{2^2} & \frac{n}{2^3} & \frac{n}{2^4} \\ \hline \frac{3^1n+1}{2^1}& \frac{3^1n+1\cdot2^1}{2^2} & \frac{3^1n+1\cdot2^2}{2^3} & \frac{3^1n+1\cdot2^3}{2^4}\\ \hline & \frac{3^1n+1\cdot2^0}{2^2} & \frac{3^1n+1\cdot2^1}{2^3} &\frac{3^1n+1\cdot2^2}{2^4}\\ \hline & \frac{3^2n+5\cdot2^0}{2^2} & \frac{3^2n+5\cdot2^1}{2^3}& \frac{3^2n+5\cdot2^2}{2^4}\\ \hline & &\frac{3^1n+1\cdot2^0}{2^3} & \frac{3^1n+1\cdot2^1}{2^4} \end{array} Let $x$ be the column index and the number of iterations, starting at $1$. Let $y$ be the row index, starting at $0$. The content of cell $x,y$ is $f_1$ applied to the content of cell $x-1,\lfloor\frac{y}{2}\rfloor$. The parity of $y$ decides which 'path' of $f_1$ (even or odd) is taken. Or formulated another way: the table is built up recursively. Column 1 row 0 was the input for column 2 rows 0 and 1. Column 1 row 1 was the input for column 2 rows 2 and 3 and so on. The resulting large fractions are then factored into this form. I've pasted the html of a table for the first 8 iterations to this page. When written this way, the expressions exhibit some nice pattern, namely: Every expression (apart from row 0) is of the form $$ \frac{3^bn+q\cdot2^d}{2^x} $$ where $b$ is the hamming weight of $y$, i.e. 
the number of 1-bits in its binary representation and $d = x - \log_2 (c) + 1$ with $c$ being the greatest power of 2 $\leq y$. $q$ is the only thing which seems not to be as easily parameterisable. However, it seems to be somehow similar to A035109, which is defined as $$ \frac{1}{n}\sum_{d \mid n}{\mu\left(\frac{n}{d}\right)\sum_{e\mid d} e\sum_{\substack{e\mid d \\ e \text{ odd}}}e} $$ The first values of q are: $0,1,1,5,1,7,5,19,1,11,7,29,5,23,19,65,1,18,\dots$ More can be read out from the linked table. My question is: what formula could generate this sequence?
This is not an exact answer to your question, but a simple observation. It looks like a fractal or recurrence sequence, similar to Fibonacci, or one might think it has recurrent relationships. Most likely this is a sequence which cannot be shortcut, i.e. one might have to compute every step along the way to get the numbers in the sequence (this is a guess though). $$0,0,1,1,5,1,7,5,19,1,11,7,29,5,23,19,65,1,19,…$$ I've highlighted its fractal behavior by the numbers in red color: $$0,\color{red}0,1,\color{red}1,5,\color{red}1,7,\color{red}5,19,\color{red}1,11,\color{red}7,29,\color{red}5,23,\color{red}{19},65,\color{red}1,19,…$$ $$\color{red}0,\color{red}0,\color{red}1,\color{red}1,\color{red}5,\color{red}1,\color{red}7,\color{red}5,\color{red}{19},\color{red}1,11,7,29,5,23,19,65,1,19,…$$ Now if we take the other sequence (it's quite different!), I've highlighted that in blue: $$\color{blue}0,0,\color{blue}1,1,\color{blue}5,1,\color{blue}7,5,\color{blue}{19},1,\color{blue}{11},7,\color{blue}{29},5,\color{blue}{23},19,\color{blue}{65},1,\color{blue}{19},…$$ $$\color{blue}0,\color{blue}1,\color{blue}5,\color{blue}7,\color{blue}{19},\color{blue}{11},\color{blue}{29},\color{blue}{23},\color{blue}{65},\color{blue}{19},…$$ The sequence in blue which is part of your sequence is not in the OEIS. The question for the red sequence is whether one can take every fourth, eighth, and so on to state that it is fractal. In your sequence one could try other simple techniques to find out if there are self recurring similarities. As a sidenote: the ruler sequence has the same fractal behaviour, which is also related to the largest power of $p$ that divides $2^n$ in the Reduced Collatz Function that is applied only to the odd integers.
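If one reads cell $y$ of the table as $(3^b n + c_y)/2^x$, the two branches of $f_1$ give the recursion $c_{2y}=c_y$ and $c_{2y+1}=3c_y+2^{x_p}$, where $x_p$ is the column of the parent row $\lfloor y/2\rfloor$, and $q$ is then the odd part of $c_y$. A sketch under this reading of the table, which reproduces the values quoted in the question:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(y):
    # c(2y) = c(y); c(2y+1) = 3*c(y) + 2**x_p, where x_p = bit_length(y//2)
    # is the column in which the parent row y//2 appears
    if y <= 1:
        return y
    half, odd = divmod(y, 2)
    if odd:
        return 3 * c(half) + 2 ** half.bit_length()
    return c(half)

def q(y):
    v = c(y)
    while v and v % 2 == 0:   # q is the odd part of c(y)
        v //= 2
    return v

assert [q(y) for y in range(18)] == \
    [0, 1, 1, 5, 1, 7, 5, 19, 1, 11, 7, 29, 5, 23, 19, 65, 1, 19]
```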
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining the convergence and divergence So I have a problem of $$a_n = \left(1 + \frac{2}{n}\right)^n$$ I need to determine whether it is diverging or converging, and find the limit if it converges. I found an answer of $e^2$ on Symbolab, but I do not know how they got that type of answer.
Well, once you know the sequence converges, that is actually one of the definitions of $e^2.$ Here is another way. Consider $\ln(a_n)=n\ln\left(1+\frac{2}{n}\right).$ Now we look at the form $$\frac{\ln(1+2/n)}{1/n}=\ln(a_n).$$ We see that $$\frac{(\ln(1+2/n))'}{(1/n)'}=\frac{\frac{1}{1+2/n}}{-1/n^2}\cdot\frac{-2}{n^2}=\frac{2}{1+2/n}\to 2.$$ So by L'Hopital's $$\ln(a_n)\to 2.$$ By continuity $$e^{\ln(a_n)}=a_n\to e^2.$$
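A quick numerical sanity check (the bound $\ln(1+x)<x$ shows $a_n<e^2$ for every $n$):

```python
import math

def a(n):
    return (1 + 2 / n) ** n

# ln a_n = n ln(1 + 2/n) < 2, so every a_n is below e^2, approaching it
for n in (10, 100, 1000, 10**6):
    assert a(n) < math.e ** 2
assert abs(a(10**6) - math.e ** 2) < 1e-4
```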
{ "language": "en", "url": "https://math.stackexchange.com/questions/3212940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Inclusion Exclusion Application If $A$, $B$, and $C$ are finite sets, then the number of elements in EXACTLY ONE of the sets $A$, $B$, $C$ is: $$n(A)+n(B)+n(C)-2 \times n(A \cap B)-2 \times n(A \cap C)-2 \times n(C \cap B) + 3 \times n(A \cap B \cap C)$$ I can derive the above through inclusion-exclusion, but would there be a general formula for $n$ finite sets?
Suppose we have $n$ sets $A_1,A_2,\dots,A_n$. For $k=1,2,\dots,n$ define $$S_k=\sum|A_{n_1}\cap A_{n_2}\cap\cdots \cap A_{n_k}|,$$ where the sum is taken over all $k$-subsets $\{n_1,n_2,\dots,n_k\}\subseteq \{1,2,\dots,n\}$. I claim that the number of elements that occur in exactly one of the sets $A_1,A_2,\dots,A_n$ is $$\sum_{k=1}^n(-1)^{k-1}kS_k\tag{1}$$ To see this, consider an element $x$ that occurs in exactly $m$ of the sets, where $1\leq m \leq n.$ If $m=1$, $x$ is counted exactly once in $S_1$ and nowhere else, so it is counted once by $(1).$ If $m>1$, then $x$ occurs in ${m\choose k}$ of the terms in the definition of $S_k$ for $1\leq k\leq m$ and in none of the terms when $k>m$. Therefore it is counted $$c(m)=\sum_{k=1}^m(-1)^{k-1}k{m\choose k}$$ times. To evaluate $c(m)$ write $$(1-x)^m=\sum_{k=0}^m(-1)^k{m\choose k}x^k$$ by the binomial theorem. Differentiate both sides to get $$ -m(1-x)^{m-1}=\sum_{k=1}^m(-1)^k{m\choose k}kx^{k-1}$$ Substitute $x=1$ to get $c(m)=0.$ Thus $(1)$ counts elements that occur in exactly one of the sets once, and does not count elements that occur in more than one of the sets, as was to be shown.
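A quick way to gain confidence in $(1)$ is to test it against a brute-force count on random sets (the particular sets below are arbitrary):

```python
import itertools, random

def exactly_one_formula(sets):
    # formula (1): sum over k of (-1)^(k-1) * k * S_k
    total = 0
    for k in range(1, len(sets) + 1):
        s_k = sum(len(set.intersection(*combo))
                  for combo in itertools.combinations(sets, k))
        total += (-1) ** (k - 1) * k * s_k
    return total

def exactly_one_direct(sets):
    universe = set().union(*sets)
    return sum(1 for x in universe if sum(x in s for s in sets) == 1)

random.seed(0)
for trial in range(20):
    sets = [set(random.sample(range(30), random.randint(5, 15)))
            for _ in range(random.randint(2, 5))]
    assert exactly_one_formula(sets) == exactly_one_direct(sets)
```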
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A finite group is isomorphic to the direct product of two normal subgroups with trivial intersection I have to show the following: Let $G$ be a finite group and $N_1 , N_2$ normal in $G$. Then $G\cong N_1\times N_2$ if and only if $N_1\cap N_2 = \{e\}$. I have no idea on either direction, so I would be grateful for any little hint! Thank you!
This is not true unless $N_1,N_2$ generate $G$. In that case, let $p_i:G\rightarrow G/N_i$ be the quotient map, and show that $p_1$ induces an isomorphism $N_2\rightarrow G/N_1$. Consider $f:G\rightarrow G/N_1\times G/N_2\cong N_2\times N_1$ defined by $f(x)=(p_1(x),p_2(x))$ and show that it is an isomorphism.
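A toy numeric illustration with $G=\mathbb{Z}/6$, $N_1=\{0,2,4\}$, $N_2=\{0,3\}$ (here $N_1\cap N_2=\{0\}$ and $N_1,N_2$ generate $G$), identifying $G/N_1$ with $\mathbb{Z}/2$ and $G/N_2$ with $\mathbb{Z}/3$:

```python
# G = Z/6, N1 = {0,2,4}, N2 = {0,3}; G/N1 ~ Z/2, G/N2 ~ Z/3
f = {x: (x % 2, x % 3) for x in range(6)}

assert len(set(f.values())) == 6          # f is a bijection G -> Z/2 x Z/3
for x in range(6):                        # f is a homomorphism
    for y in range(6):
        fx, fy = f[x], f[y]
        assert f[(x + y) % 6] == ((fx[0] + fy[0]) % 2, (fx[1] + fy[1]) % 3)
```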
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Apply Newton-Raphson method to find the solutions to Apply Newton-Raphson method to find the solutions to the equation $x^3-5x=0$ starting with an initial guess of $x_0 = 1$. While using Newton Raphson method, the value doesn't converge to a specific number. Rather, every iteration either gives $1$ or $-1$. Why does this happen? Any graphical interpretation for this?
For the graph of $f(x)=x^3-5x$, we have $f'(x)=3x^2-5$ $$f(1)=-4$$ $$f(-1)=4$$ $$f'(1)=-2=f'(-1)$$ The tangent line at $x=1$ is $y+4=-2(x-1)$ which is $y=-2x-2$. The tangent line at $x=-1$ is $y-4=-2(x+1)$ which is $y=-2x+2$. Geometrically, what has happened is you are trapped in the following cycle. Starting from $(1,-4)$, by traveling along the tangent, we intercept the $x$-axis at $x=-1$. From $(-1,4)$, by traveling along the tangent, we intercept the $x$-axis at $x=1$.
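The cycle is easy to reproduce in a few lines:

```python
def newton_step(x):
    f = x**3 - 5 * x
    fp = 3 * x**2 - 5
    return x - f / fp

x = 1.0
orbit = [x]
for _ in range(6):
    x = newton_step(x)
    orbit.append(x)

# the iteration is trapped in the 2-cycle 1 -> -1 -> 1 -> ...
assert orbit == [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
```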
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Counting the Number of Real Roots of A Polynomial I am interested in solving problems which involve finding the number of real roots of any polynomial. Suppose I take a function $$f(x)=x^6+x^5+x^4+x^3+x^2+x+1$$ This does not have any real roots but I am trying to figure out if there is some analytical way that does not involve graphing to come to this conclusion. Using Descartes' Rule of Signs, there are zero sign changes in $f$ so by virtue of which there are no positive roots to the polynomial. Considering $$f(-x) = x^6-x^5+x^4-x^3+x^2-x+1$$ I concluded that there are either 6 negative, 4 negative, 2 negative or zero negative roots. So I have 4 cases to consider : * *0 positive roots, 6 negative roots, 0 complex roots *0 positive roots, 4 negative roots, 2 complex roots *0 positive roots, 2 negative roots, 4 complex roots *0 positive roots, 0 negative roots, 6 complex roots (The correct case) I tried differentiating $f$ but the derivative is equally bad $$f'(x) = 6x^5+5x^4+4x^3+3x^2+2x+1$$ I am unable to conclude anything from this. I tried going about the problem the other way. If a polynomial with an even degree is always positive or negative depending on the leading coefficient, it will not have any real roots but then again, finding the extrema of the function is proving to be extremely difficult. I have tried using Bolzano's Intermediate Value Theorem. It guarantees the existence of at least one root but then again, there is a possibility that there might be more than one which can only be eliminated by monotonicity which again brings me back to the bad derivative. I believe there need to be some general rules by virtue of which, we are able to calculate the number of roots for any polynomial. * *Is graphing the best technique for polynomials like these and if it is, are there any ways by which a quick but accurate plot can be drawn? 
*While reading about the relevant theory, I came across Sturm's Method and the Newton-Raphson Method but haven't touched these yet. Is it absolutely required to know these concepts to effectively draw conclusions? *Have I missed something?
While this method isn't guaranteed to work on all polynomials, it is surprisingly effective sometimes: notice that \begin{align*} [x^6+x^5+x^4+x^3+x^2+x+1] &= x^4\left(x+\tfrac{1}{2}\right)^2+\big[\tfrac{3}{4}x^4+x^3+x^2+x+1\big] \\ &= x^4\left(x+\tfrac{1}{2}\right)^2+\tfrac34x^2\left(x+\tfrac23\right)^2+\big[\tfrac{2}{3}x^2+x+1\big] \\ &= x^4\left(x+\tfrac{1}{2}\right)^2+\tfrac34x^2\left(x+\tfrac23\right)^2+\tfrac23\left(x+\tfrac34\right)^2+\tfrac58. \end{align*} (Each square was chosen to eliminate the top two terms of the preceding polynomial in square brackets; this is like "completing the square" from high school math.) In this form, the polynomial is clearly always positive.
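The decomposition can be confirmed exactly with rational arithmetic; since both sides are degree-6 polynomials, agreement at more than six points forces them to be identical:

```python
from fractions import Fraction as F

def lhs(x):
    return x**6 + x**5 + x**4 + x**3 + x**2 + x + 1

def rhs(x):
    # the sum-of-squares form from the answer, plus the constant 5/8
    return (x**4 * (x + F(1, 2))**2
            + F(3, 4) * x**2 * (x + F(2, 3))**2
            + F(2, 3) * (x + F(3, 4))**2
            + F(5, 8))

for x in [F(p, q) for p in range(-6, 7) for q in (1, 2, 3)]:
    assert lhs(x) == rhs(x)
```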
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 1 }
The dual space of the Sobolev space $W^{k,p}(\Omega)$. Let $\Omega$ be a nice domain in $\Bbb R^n$. It is known that any element $T\in\left( W^{k,p}(\Omega)\right)^*$ admits a (possibly non-unique) representation of the form $$ Tu = \sum_{|\alpha|\le k} \int_\Omega f_\alpha D^\alpha u\ dx, \tag{0} $$ where $f_\alpha \in L^{p'}$ and $\frac 1p + \frac 1{p'}=1$. The functional $T$ can be identified with $$ T =\sum_{|\alpha|\le k} (-1)^{|\alpha|}D^\alpha f_\alpha \tag{1}\label{eq1} $$ as a distribution in $\mathcal D'(\Omega)$. In the book Weakly Differentiable Functions by Ziemer, there is a claim that confused me. The book said (modulo some paraphrasing) the following: ... However, not every distribution $T$ of the form $(1)$ is necessarily in $\left( W^{k,p}(\Omega)\right)^*$. In case one deals with $W^{k,p}_0(\Omega)$ instead of $W^{k,p}(\Omega)$, distributions of the form $(1)$ completely describe the dual space... I am not sure if I fully understand what the passage means. I know that such a $T$ can be uniquely extended to an element in $W^{-k,p'} = \left( W_0^{k,p}\right)^*$ by the standard density argument, whereas $T$ in $(1)$ may have more than one extension to an element of $\left( W^{k,p}\right)^*$. Perhaps this is what the passage means? It seems weird to me to say that $T$ in $(1)$ is not necessarily in $\left( W^{k,p}\right)^*$ since $(0)$ is obviously one way to define $T$ on $W^{k,p}$; I would rather mention the non-uniqueness explicitly. Is there any other deeper interpretation of the passage that I may have missed?
I think that the problem is that functionals in $(1)$ are not necessarily in $(W^{k,p})^*$ rather than non-uniqueness. Take for example the simple setting $\Omega=(0,1)$, $W^{k,p}=H^1$. Then $f=x^{-\frac{1}{3}} \in L^2$ so $$T=-\partial_x x^{-\frac{1}{3}}=\frac{1}{3}x^{-\frac{4}{3}}$$ satisfies $(1)$. But it is not in $(H^1)^*$, as for example $Tu$ is ill-defined if $u(0) \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Galois group of $\overline{F}/F$ Let $K$ be the algebraic closure of $F$ where $F$ is a finite field. Show that $Gal(K/F) \simeq \hat{\mathbb{Z}}$. I know that $\hat{\mathbb{Z}} = \varprojlim \mathbb{Z}_{n}$, so it's enough to show that $Gal(K/F) \simeq \varprojlim \mathbb{Z}_{n}$. I can solve the problem if $|F|=p$, but what if $|F| = p^{n}$? Can someone help me?
More generally, Gal$(\bar F/F)$ is the inverse limit of its finite quotients, so it suffices to pick up, if possible, a convenient inductive system of finite extensions of $F$. If $F$ is a finite field (its characteristic is irrelevant), we know that every finite extension of $F$ is cyclic , with Galois group generated by the Frobenius automorphism. Hence Gal$(\bar F/F)\cong \hat {\mathbf Z}$. Of course this doesn't work all the time : we don't know the absolute Galois group of $\mathbf Q$ !
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can't solve complicated second order differential equation (Poisson-Boltzmann equation) Is there anyone able to solve this second order differential equation? It is the Poisson-Boltzmann equation (found in the field of electrostatics), solved in cylindrical coordinates in just the radial direction: $$ \varphi'+r\varphi''=A e^{-(B\varphi+C)} $$ where $\varphi$ is the variable I want to solve for, the derivatives of $\varphi$ are with respect to $r$, and $A,B,C$ are constants. I have been at it for a long time and can't solve it. What I tried was a change of variable, setting a new variable $z$ equal to the exponential term, but I arrived nowhere. All the help is much appreciated :)
Let $r=e^s$. Then $s=\ln r$ $\dfrac{d\varphi}{dr}=\dfrac{d\varphi}{ds}\dfrac{ds}{dr}=\dfrac{1}{r}\dfrac{d\varphi}{ds}=e^{-s}\dfrac{d\varphi}{ds}$ $\dfrac{d^2\varphi}{dr^2}=\dfrac{d}{dr}\left(e^{-s}\dfrac{d\varphi}{ds}\right)=\dfrac{d}{ds}\left(e^{-s}\dfrac{d\varphi}{ds}\right)\dfrac{ds}{dr}=\left(e^{-s}\dfrac{d^2\varphi}{ds^2}-e^{-s}\dfrac{d\varphi}{ds}\right)e^{-s}=e^{-2s}\dfrac{d^2\varphi}{ds^2}-e^{-2s}\dfrac{d\varphi}{ds}$ $\therefore e^{-s}\dfrac{d\varphi}{ds}+e^{-s}\dfrac{d^2\varphi}{ds^2}-e^{-s}\dfrac{d\varphi}{ds}=Ae^{-(B\varphi+C)}$ $e^{-s}\dfrac{d^2\varphi}{ds^2}=Ae^{-(B\varphi+C)}$ $\dfrac{d^2\varphi}{ds^2}=Ae^{s-B\varphi-C}$ Let $u=s-B\varphi-C$. Then $\varphi=\dfrac{s-u-C}{B}$ $\dfrac{d\varphi}{ds}=\dfrac{1}{B}-\dfrac{1}{B}\dfrac{du}{ds}$ $\dfrac{d^2\varphi}{ds^2}=-\dfrac{1}{B}\dfrac{d^2u}{ds^2}$ $\therefore-\dfrac{1}{B}\dfrac{d^2u}{ds^2}=Ae^u$ $\dfrac{d^2u}{ds^2}=-ABe^u$ $u=\ln\dfrac{2c_1\text{sech}^2\sqrt{c_1(s+c_2)^2}}{AB}$ $s-B\varphi-C=\ln\dfrac{2c_1\text{sech}^2\sqrt{c_1(s+c_2)^2}}{AB}$ $\varphi=\dfrac{s-C}{B}-\dfrac{1}{B}\ln\dfrac{2c_1\text{sech}^2\sqrt{c_1(s+c_2)^2}}{AB}$ $\varphi=\dfrac{\ln r-C}{B}-\dfrac{1}{B}\ln\dfrac{2c_1\text{sech}^2\sqrt{c_1(\ln r+c_2)^2}}{AB}$
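A finite-difference spot-check, with arbitrarily chosen constants, that the reduced equation $\dfrac{d^2u}{ds^2}=-ABe^u$ is indeed solved by the expression above (writing $\operatorname{sech}^2\sqrt{c_1(s+c_2)^2}$ as $\operatorname{sech}^2(\sqrt{c_1}(s+c_2))$, which is the same since sech is even):

```python
import math

# arbitrary sample constants (any positive values behave the same way)
A, B, c1, c2 = 1.3, 0.7, 0.9, 0.2

def u(s):
    # u = ln( 2*c1*sech^2(sqrt(c1)*(s+c2)) / (A*B) )
    return (math.log(2 * c1 / (A * B))
            - 2 * math.log(math.cosh(math.sqrt(c1) * (s + c2))))

h = 1e-4
for s in (-1.0, 0.0, 0.5, 2.0):
    u_ss = (u(s + h) - 2 * u(s) + u(s - h)) / h ** 2   # second derivative
    assert abs(u_ss + A * B * math.exp(u(s))) < 1e-4   # u'' = -A*B*e^u
```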
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computation of the laplacian of an isometric immersion Say that $X:M \to \mathbb R^3$ is an isometric immersion of an oriented riemannian surface (oriented $2$ dimensional riemannian manifold). I understand there holds a vector-valued equation, namely $$ \Delta X = 2HN, $$ where $H$ is the mean curvature (half the trace of the second fundamental form) and $N$ is the outward unit normal to $X(M)$. I've been unable to find a reference, however. I would appreciate a reference or an indication of how to compute this formula.
Based on my study of @Ernie060's answer, I believe I've worked out a solution. Fix an isometric immersion $X: M \to \mathbb R^3$. For some arbitrary point $p \in X(M)$, consider an orthonormal basis $e_1, e_2$ of $T_p X(M)$. Using existence theory for systems of linear ODE, in a small $X(M)$-neighborhood of $p$ let us extend $e_1, e_2$ to an orthonormal frame $E_1, E_2$ satisfying $\nabla_{E_i} E_j = 0$ for $1 \leq i,j \leq 2$. Here the covariant derivative is defined using the formula $$D_{E_i} E_j = \nabla_{E_i} E_j + (D_{E_i}E_j \cdot n)n,$$ where $n$ is the outward normal to $X(M)$. It follows that $E_1, E_2, n$ is an orthonormal frame for $\mathbb R^3$ in a small ball around $p$. In this context it is appropriate to regard $X$ as the inclusion $X(M) \to \mathbb R^3$. Extend $X$ to a ball around $p$ via the rule $X(p + tn) = X(p) = p$, so that $D_n X = 0$. We wish to calculate $$ \Delta X,$$ where $\Delta X = \mathrm{div}(\mathrm{grad} X)$, the Euclidean vector laplacian. Here we write $$\mathrm{grad} X = (D_{E_1} X) \otimes dE_1 + (D_{E_2} X)\otimes dE_2 + (D_{n} X)\otimes dn,$$ where $dE_i$ and $dn$ are the one-forms dual to $E_i$ and $n$. A quick calculation shows that $D_{E_i} X = E_i$ so that $$\mathrm{grad} X= E_1\otimes dE_1 + E_2\otimes dE_2.$$ Then $$\Delta X =\mathrm{div} (\mathrm {grad} X) = \mathrm{trace}(D\mathrm (grad X)) = D_{E_1} E_1 + D_{E_2}E_2.$$ As $\nabla_{E_i} E_j = 0$ we see that $D_{E_i} E_i = (D_{E_i} E_i \cdot n) n.$ Differentiating the equation $E_i \cdot n = 0$ gives us $$D_{E_i}E_i\cdot n = -D_{E_i}n\cdot E_i = II(E_i, E_i),$$ where $II$ is the second fundamental form of $X$. Then $$ D_{E_1} E_1 + D_{E_2}E_2 = (II(E_1, E_1) + II(E_2,E_2))n = 2Hn,$$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\Delta \mathbf n = -2 \mathbf n$ on the Euclidean sphere Let us consider the Euclidean two-sphere, defined by the embedding in the three dimensional Euclidean space as $$ \mathbf n \cdot \mathbf n = 1\,, $$ where $\cdot$ denotes the standard scalar product. The metric on the sphere, in some coordinates $x^i$, is expressed as $$ \gamma_{ij}=\mathbf e_i \cdot \mathbf e_j\,, $$ where $\mathbf e_i=\partial_i\mathbf n$. For instance, in the standard spherical coordinates $$ \mathbf n=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta) $$ and $$ \gamma_{\theta\theta}=1\,,\qquad \gamma_{\phi\phi}=\sin^2\theta\,,\qquad \gamma_{\theta\phi}=0\,. $$ We define the Laplace-Beltrami operator on the sphere by $ \Delta = \gamma^{ij} D_iD_j $, where $\gamma^{ij}$ is its inverse and $D_i$ is the associated Levi-Civita connection. I would like to prove that $$\Delta \mathbf n = -2 \mathbf n$$ and that in higher dimensions the same holds with $2$ replaced by the dimension of the sphere. I came to believe that this is true by an explicit check in spherical coordinates in dimensions $3$, $4$ and $6$. Considering that parallel transport of a given tangent vector $\mathbf v$ defined at the point $x+dx$ to the point $x$ is defined by keeping constant its embedding components and then projecting it on the sphere at the point $x$, we have $$ \mathbf v_{\parallel}(x+dx,x)=\mathbf v(x+dx)-\mathbf v(x+dx)\cdot \mathbf n (x)\, \mathbf n(x) $$ hence $$ D_i\mathbf v\, dx^i = \mathbf v_{\parallel}(x+dx,x)- \mathbf v (x)= (\partial_i\mathbf v+\partial_i\mathbf n \cdot \mathbf v\, \mathbf n)dx^i $$ where we have used $\mathbf n \cdot \partial_i\mathbf v+\partial_i\mathbf n \cdot \mathbf v=0$ and $$ D_i\mathbf v = \partial_i\mathbf v+\partial_i\mathbf n \cdot \mathbf v\, \mathbf n\,. $$ Applying this to the basis vectors $\mathbf e_j =\partial_j\mathbf n$ affords $$ D_i\mathbf e_j = \partial_i \mathbf e_j+\gamma_{ij}\mathbf n\,. $$ But unfortunately I am not able to go further.
A way to prove the above is the following (this is probably a special case of the more general answer given by @Ernie060, but I still need to fill in a few details). In three-dimensional Euclidean space $\mathbb R^3$, the metric in Cartesian coordinates is $\delta_{IJ}=\mathrm{diag}(1,1,1)$ and in spherical coordinates, defined by $\mathbf x = r\,\mathbf n(x^i)$, reads $g_{rr}=1$ and $g_{ij}=r^2\gamma_{ij}$, with $\gamma_{ij}=\partial_i\mathbf n\cdot\partial_j\mathbf n$. Comparing the two expressions for the Laplacian in the given coordinate systems, we have $$ 0=\Delta_{\mathbb R^{3}}\mathbf x=\frac{1}{r^2}\partial_r(r^2 \mathbf n)+\frac{1}{r}\Delta_{S^2}\mathbf n $$ which yields precisely $$ \Delta_{S^2}\mathbf n = -2\mathbf n\,. $$ This is actually a consequence of the fact that any second covariant derivative of $\mathbf x$ vanishes (since it vanishes in the Cartesian coordiante frame), hence in particular $$ 0=\nabla_i \nabla_j \mathbf x =r D_i D_j\mathbf n-\Gamma_{ij}^r\mathbf n\,, $$ but explicit calculation affords $\Gamma_{ij}^r=-r\gamma_{ij}$ and hence $$ D_i D_j \mathbf n = - \gamma_{ij} \mathbf n\,. $$
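For a numerical cross-check of $\Delta_{S^2}\mathbf n=-2\mathbf n$ in the standard spherical coordinates of the question, where $\Delta_{S^2}f=\frac{1}{\sin\theta}\partial_\theta(\sin\theta\,\partial_\theta f)+\frac{1}{\sin^2\theta}\partial_\phi^2 f$, applied componentwise with central finite differences (the sample point and step are arbitrary):

```python
import math

def n_comp(i, th, ph):
    # components of n = (sin(th) cos(ph), sin(th) sin(ph), cos(th))
    return (math.sin(th) * math.cos(ph),
            math.sin(th) * math.sin(ph),
            math.cos(th))[i]

def laplace_beltrami(f, th, ph, h=1e-4):
    # (1/sin th) d_th(sin th * d_th f) + (1/sin^2 th) d_phiphi f
    def df_dth(t):
        return (f(t + h, ph) - f(t - h, ph)) / (2 * h)
    term1 = (math.sin(th + h) * df_dth(th + h)
             - math.sin(th - h) * df_dth(th - h)) / (2 * h * math.sin(th))
    term2 = ((f(th, ph + h) - 2 * f(th, ph) + f(th, ph - h))
             / (h ** 2 * math.sin(th) ** 2))
    return term1 + term2

th, ph = 1.0, 0.7
for i in range(3):
    comp = lambda t, p, i=i: n_comp(i, t, p)
    assert abs(laplace_beltrami(comp, th, ph) + 2 * n_comp(i, th, ph)) < 1e-4
```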
{ "language": "en", "url": "https://math.stackexchange.com/questions/3213979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find the period of $f(x)$ with $f(x)f(y)=f(x+y)+f(x-y)$. For any real $x, y$, $f(x)f(y)=f(x+y)+f(x-y)$ with $f(1)=1$. Find the period of $f(x)$.
To summarize, I am going to show the following result. Main Theorem: Let $f$ be the function mentioned in the OP. If the period of $f$ exists, then the period of $f$ only has one of the two forms: $\frac{6}{6n+1}$ and $\frac{6}{6n+5}$ for some $n\in \mathbb{N}$. @Micah has shown that $f$ can have the periods of $2\cos(\frac{6n\pm 1}{3}\pi x)$, i.e., either $\frac{6}{6n+1}$ or $\frac{6}{6n+5}$. In the following, the "only" is shown. 1. Notations and Lemmas. Notation 1: Let $D$ be the domain of a function $f$. If $f(x+T)=f(x)$ for all $x\in D$, then $T$ is a period of $f$. If $T$ is the least positive one, then $T$ is the period of $f$. Notation 2: Let $\mathbb{Z}$ be the set of all integers. The set of nonnegative integers is denoted by $\mathbb{N}$. We have two lemmas. Lemma 1: If the period of $f$ exists, denoted by $P$, then the set of all periods is $$T=\{t\in \mathbb{R} \mid f(x+t)=f(x), \forall x \}=\{nP\mid n \in \mathbb{Z}\}.$$ Proof: Please see https://math.stackexchange.com/q/1012902 . Q.E.D. Lemma 2: Let $f$ be the function in the OP. Then the following hold: 1). $6$ is a period of $f$, i.e., $f(x+6)=f(x)$ for all $x \in \mathbb{R}$; 2). $f(0)=2, f(1)=1,f(2)=-1,f(3)=-2,f(4)=-1,f(5)=1$; 3). $f(x+d) \ne f(x)$ for some $x \in \mathbb{R}$, $d=1,2,3$; 4). If the period of $f$ exists, then it must have the form $\frac{6}{q}$ where $q\in \mathbb{N}$. Proof: 1) See the OP; 2) Obviously; 3) Since $f(x+6)=f(x)$, it is necessary to show that $f(x+d)\ne f(x)$ for each proper divisor $d$ of $6$. Recall that $f(x+1)=f(x+2)+f(x)$ (put $y=1$ in the functional equation and use $f(1)=1$) and $f(x+3)=-f(x)$. * For $d=1$. If $f(x+1)=f(x)$ for all $x$, then $$f(1)=f(2)+f(0)=f(1)+f(1),$$ so $f(1)=0$. It is a contradiction. * For $d=2$. If $f(x+2)=f(x)$ for all $x$, then $$f(1)=f(2)+f(0)=f(0)+f(0)=4,$$ contradicting $f(1)=1$. * For $d=3$. If $f(x+3)=f(x)$ for all $x$, then we have $$f(x)=0.$$ It is a contradiction.
4) According to Lemma 1 and 1) of Lemma 2, if the period of $f$ exists, then it must have the form $\frac{6}{q}$ where $q\in \mathbb{N}$. Q.E.D. 2. The Proof of Main Theorem. 1). If $q=6n$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n}=\frac{1}{n}$. By Lemma 1, $1$ is a period. It contradicts 3) of Lemma 2. 2). If $q=6n+1$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n+1}$. For example, $f(x)=2\cos(\frac{6n+1}{3}\pi x)$ and the period of $f$ is $\frac{6}{6n+1}$. 3). If $q=6n+2$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n+2}=\frac{3}{3n+1}$. By Lemma 1, $3$ is a period. It contradicts 3) of Lemma 2. 4). If $q=6n+3$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n+3}=\frac{2}{2n+1}$. By Lemma 1, $2$ is a period. It contradicts 3) of Lemma 2. 5). If $q=6n+4$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n+4}=\frac{3}{3n+2}$. By Lemma 1, $3$ is a period. It contradicts 3) of Lemma 2. 6). If $q=6n+5$, where $n\in \mathbb{N}$, then the period is $\frac{6}{6n+5}$. For example, $f(x)=2\cos(\frac{6n+5}{3}\pi x)$ and the period of $f$ is $\frac{6}{6n+5}$. Q.E.D.
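The example functions used in cases 2) and 6) can be checked numerically: $f(x)=2\cos(\frac{q}{3}\pi x)$ with $q\equiv 1,5\pmod 6$ satisfies the functional equation, has $f(1)=1$, and has period $6/q$:

```python
import math, random

random.seed(1)
for q in (1, 7, 5, 11):                  # representatives of 6n+1 and 6n+5
    f = lambda x, c=q * math.pi / 3: 2 * math.cos(c * x)
    assert abs(f(1) - 1) < 1e-9          # f(1) = 2 cos(q*pi/3) = 1
    for _ in range(200):                 # f(x) f(y) = f(x+y) + f(x-y)
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        assert abs(f(x) * f(y) - f(x + y) - f(x - y)) < 1e-9
    assert abs(f(0.3 + 6 / q) - f(0.3)) < 1e-9   # 6/q is a period
```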
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Help with proof of $ \mathbb{C}[X] \simeq R $ where $R$ is a $ \mathbb{C}$-algebra without nilpotents I am trying to understand the proof of the following proposition: Let $X \subset \mathbb{A}^n$ be closed. Let $ R $ be a finitely generated $ \mathbb{C}$-algebra without nilpotents. There exists an affine variety $ X $ such that $\mathbb{C}[X] \simeq R $ (as $\mathbb{C}$-algebras). where $\mathbb{C}[X]$ is the ring of regular functions on $X$. Proof: Let $ \alpha _1, \cdots, \alpha _n $ be generators (over $ \mathbb{C} $) of $ R $, and let $ \phi: \mathbb{C}[z_1,\cdots,z_n] \longrightarrow R$ be the surjection of algebras mapping $ z_i$ to $\alpha_i$. The kernel of $ \phi $ is an ideal $ I \subset \mathbb{C}[z_1,\cdots,z_n]$, which is radical because $ R $ has no nilpotents. Let $ X:= V(I) \subset \mathbb{A}^n$. Then $ \mathbb{C}[X] \simeq R $. I can't understand the sentence in bold, i.e. why if $ R $ is a finitely generated $ \mathbb{C}$-algebra without nilpotents then $ I $ is radical. Have you any ideas? Thank you in advance.
Well, let $f$ be a polynomial with $f^n\in I$, i.e., $\phi(f^n)=0$. Then $\phi(f)^n=\phi(f^n)=0$ and so $\phi(f)$ is nilpotent in $R$. By hypothesis $\phi(f)=0$ and so $f\in I$. Hence, $I$ is radical.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing $\int_{|z-i|=\frac{3}{2}}\frac{e^{\frac{1}{z^2}}}{z^2+1}$ Compute the integral using residues: $\int_{|z-i|=\frac{3}{2}}\frac{e^{\frac{1}{z^2}}}{z^2+1}\,dz$ Inside the circumference there are the following singular points: $i$, which is a pole of order 1, and $0$, which is essential. So: $\int_{|z-i|=\frac{3}{2}} \frac{e^{\frac{1}{z^2}}}{z^2+1}\,dz=2\pi i\left(res_{z_0=i}\frac{e^{\frac{1}{z^2}}}{z^2+1}+res_{z_0=0}\frac{e^{\frac{1}{z^2}}}{z^2+1}\right)=\frac{\pi}{e}+2\pi i\,res_{z_0=0}\frac{e^{\frac{1}{z^2}}}{z^2+1}$ $res_{z_0=0}\frac{e^{\frac{1}{z^2}}}{z^2+1}=c_{-1}$ which is the coefficient of $z^{-1}$ in the Laurent series. So developing the Laurent series: $\frac{e^{\frac{1}{z^2}}}{z^2+1}=\sum\limits_{n=0}^{\infty}\frac{1}{n!z^{2n}}\sum\limits_{m=0}^{\infty}(-1)^m z^{2m}$ However I am not seeing the coefficient $c_{-1}$. Question: How should I compute the coefficient $c_{-1}$? Thanks in advance!
Here is a way to calculate the integral without Laurent series to find the residue at $z = 0$. It uses that the sum of all residues is $0$ including the residue at infinity: * *$f(z)= \frac{e^{\frac{1}{z^2}}}{z^2+1} \Rightarrow$ $$ \operatorname{Res}_{z=0}f(z) + \operatorname{Res}_{z=i}f(z) = - \left( \operatorname{Res}_{z=-i}f(z) + \operatorname{Res}_{z=\infty}f(z) \right) $$ Calculating the residues: $$\operatorname{Res}_{z=-i}f(z) =\lim_{z\to -i}\frac{(z+i) e^{\frac{1}{z^2}}}{(z+i)(z-i)} = -\frac{e^{-1}}{2i}$$ $$\operatorname{Res}_{z=\infty}f(z) = -\operatorname{Res}_{w=0}\frac{1}{w^2}f\left(\frac{1}{w} \right)=-\operatorname{Res}_{w=0}\frac{1}{w^2}\frac{ e^{w^2}}{\frac{1}{w^2}+1} = -\operatorname{Res}_{w=0}\frac{ e^{w^2}}{1+w^2} = 0$$ The last residue is $0$ since $\frac{ e^{w^2}}{1+w^2}$ is holomorphic in a neighbourhood of $0$. So, all together $\int_{|z-i|=\frac{3}{2}} \frac{e^{\frac{1}{z^2}}}{z^2+1}dz = -2\pi i\left(\operatorname{Res}_{z=-i}f(z) + \operatorname{Res}_{z=\infty}f(z) \right) = -2\pi i\left( -\frac{e^{-1}}{2i}+0 \right) = \frac{\pi}{e}$
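As a purely numerical cross-check (not part of the residue argument), one can discretize the contour directly; the trapezoidal rule on a periodic analytic integrand converges very fast:

```python
import cmath, math

def f(z):
    return cmath.exp(1 / z ** 2) / (z ** 2 + 1)

N = 50000
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = 1j + 1.5 * cmath.exp(1j * t)            # the circle |z - i| = 3/2
    dz = 1.5j * cmath.exp(1j * t) * 2 * math.pi / N
    total += f(z) * dz

assert abs(total - math.pi / math.e) < 1e-6     # the integral equals pi/e
```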
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to perform U substitution when there is no direct dx to du mapping? I have been asked to solve this problem using a change of variable and a power series expansion of the exponential term: $\frac{1}{\sqrt{2\pi}} \int_0^{\sqrt 2} e^{-\frac{x^2}{2}} dx$ I expanded the exponential term to the Taylor series: $e^{-\frac{x^2}{2}} \approx 1 - \frac{x^2}{2} + \frac{x^4}{8} - \frac{x^6}{48} + \frac{x^8}{384}$ This then leaves me with: $\frac{1}{\sqrt{2\pi}} \int_0^{\sqrt 2} 1 - \frac{x^2}{2} + \frac{x^4}{8} - \frac{x^6}{48} + \frac{x^8}{384}\, dx$ I interpreted the "change of variable" as a reference to U substitution, but I am stuck when trying to substitute something in either the original formula or in the expansion of it. I tried substituting $u = x^2$ in the revised formula: $u = x^2$, $du = 2x\,dx$ $\frac{1}{\sqrt{2\pi}} \int_0^2 1 - \frac{u}{2} + \frac{u^2}{8} - \frac{u^3}{48} + \frac{u^4}{384} ??dx??$ But I am stuck on how to convert dx to du. I calculated that $du = 2x\, dx$, and $x = \sqrt u$, but how do I replace that into the formula? Is that even the right question to ask?
You decided to use $$x^2=u \implies x=\sqrt u\implies dx=\frac{du}{2 \sqrt{u}}$$
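To make the substitution concrete, here is a quick numeric check (my own addition, not part of the original answer) that the substituted integral $\int_0^2 \frac{e^{-u/2}}{2\sqrt u}\,du$ agrees with the original $\int_0^{\sqrt 2} e^{-x^2/2}\,dx$; the closed form $\sqrt{\pi/2}\,\operatorname{erf}(1)$ serves as a reference value:

```python
import math

def direct(n=20000):
    # midpoint rule for the original integral over [0, sqrt(2)]
    b = math.sqrt(2)
    h = b / n
    return sum(math.exp(-((k + 0.5) * h) ** 2 / 2) for k in range(n)) * h

def substituted(n=400000):
    # midpoint rule for the u-integral over [0, 2]; the midpoint rule
    # avoids evaluating the integrable 1/sqrt(u) singularity at u = 0
    h = 2.0 / n
    return sum(math.exp(-(k + 0.5) * h / 2) / (2 * math.sqrt((k + 0.5) * h))
               for k in range(n)) * h

reference = math.sqrt(math.pi / 2) * math.erf(1.0)
d, s = direct(), substituted()
print(d, s, reference)  # all approximately 1.0562
```

Both quadratures converge to the same value, confirming that $dx = \frac{du}{2\sqrt u}$ is the correct replacement.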
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find (if it exists) a random variable $X$ so it is valid: $\mathbb{E}(X)= 3, \mathbb{E}(X^2) = 8$. Find (if it exists) a random variable $X$ so it is valid: $\mathbb{E}(X)= 3, \mathbb{E}(X^2) = 8$. I tried using the definition of expectation, but how can I know that, when squared, some values won't be the same (which implies that the new probability is the sum of those probabilities)?
Note that $$ \text{Var}(X)=E(X-EX)^2=EX^2-(EX)^2=8-9=-1<0 $$ which is impossible. Hence such a random variable does not exist.
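The same fact can be seen empirically (my own addition): for any data set, the sample value of $E(X^2)-(EX)^2$ is a variance and hence never negative, so no random variable can produce $8 - 9 = -1$. A small Python illustration:

```python
import random
import statistics

random.seed(0)
gaps = []
for _ in range(5):
    xs = [random.gauss(0, 1) for _ in range(1000)]
    mean = statistics.fmean(xs)
    mean_sq = statistics.fmean(x * x for x in xs)
    # E(X^2) - (E X)^2 is the (population) variance of the sample
    gaps.append(mean_sq - mean * mean)
print(all(g >= 0 for g in gaps))  # True: the gap is never negative
```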
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Common point of solutions of $(\varepsilon-x)y=y'(-x+y^2-2x^2)$ Let there be the same equation as here. $(\varepsilon-x)y=y'(-x+y^2-2x^2)$ @JJacquelin found the integrating factor $$\boxed{\mu=\frac{1}{(x+2\epsilon x-y^2)(\epsilon +2\epsilon x-y^2)\:y}}\tag 2$$ The implicit answer is $$\boxed{2\epsilon\ln\left(|x+2\epsilon x-y^2| \right)-(1+2\epsilon)\ln\left(|\epsilon +2\epsilon x-2y^2| \right)+2\ln(|y|)=C}$$ for $\varepsilon \neq 0 , -\frac{1}{2}$ I have graphed the parabolas where $\mu$ is undefined, took $\varepsilon = 6.5$ and found that all the solutions intersect in one special point $(0.4, 2.4)$. That holds for all positive $\varepsilon$, though the point is moving along the parabola $y^2 = (2\varepsilon+1)x$. What is this special point where all the solutions meet, where does it come from? In the picture you see two solutions for $\varepsilon = 2, C = -1, 0$
The conditions of the existence and uniqueness theorem do not hold when $-x + y^2 -2 x^2 = 0$. Substituting this into the ODE gives $$(x, y) \in \left\{ (-1/2, 0), (0, 0), \left( \epsilon, -\sqrt {\smash[b] {\epsilon (1 + 2 \epsilon)}} \right), \left( \epsilon, \sqrt {\smash[b] {\epsilon (1 + 2 \epsilon)}} \right) \right\}.$$ Or, since the ODE is $f(x, y) dx = g(x, y) dy$, one can analyze the equilibrium points of the system $(\dot x, \dot y) = (g(x, y), f(x, y))$. There is a mismatch with the plot because of a typo in the first integral.
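One can verify numerically that the four listed points are equilibria of the system $(\dot x,\dot y)=(g,f)$ with $g=-x+y^2-2x^2$ and $f=(\varepsilon-x)y$. A Python check (my own addition; the value $\varepsilon=2$ is an arbitrary sample choice):

```python
import math

eps = 2.0  # arbitrary positive sample value of epsilon

def f(x, y):
    return (eps - x) * y            # (epsilon - x) * y

def g(x, y):
    return -x + y * y - 2 * x * x   # -x + y^2 - 2x^2

y_star = math.sqrt(eps * (1 + 2 * eps))
points = [(-0.5, 0.0), (0.0, 0.0), (eps, y_star), (eps, -y_star)]
residuals = [max(abs(f(x, y)), abs(g(x, y))) for (x, y) in points]
print(residuals)  # all numerically zero: each point is an equilibrium
```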
{ "language": "en", "url": "https://math.stackexchange.com/questions/3214859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why choose to work with a functional instead of a function? Why choose to work with a functional instead of a function? Notice that in a function you evaluate points, while in a functional you evaluate functions. And why is a linear functional important in general? A functional $\phi(f)$ is linear if the domain of its existence together with the functions $f(x)$ and $\psi(x)$ contains the function $af(x)+b\psi(x)$, and if the equality $\phi(af+b\psi)=a\phi(f)+b\phi(\psi)$, with $a,b\in\mathbb R$, holds.
You pose a dichotomy that is not so. One does not look at functionals as opposed to functions. First of all, a functional is a function. Second, you say that "functions evaluate points". That's not a good point of view. Typical calculus functions evaluate on numbers, as they are usually given by formulas. But it's very easy, even at the calculus level, to think of functions that evaluate on functions. For example, given a curve $r:t\longmapsto (x(t),y(t))$, $t\in[0,1]$, its length is the function $L(r)=\int_0^1\sqrt{x'(t)^2+y'(t)^2}\,dt$. So $L$ is a function that evaluates on functions. Linear functionals are important because it is extremely common in mathematics for certain objects to form a vector space. And it's been noticed that to understand a vector space, in particular when it is infinite-dimensional, understanding of its dual is important, and often helps prove results about the space. That's the core of Functional Analysis, and understanding of topological vector spaces (Banach Spaces, Hilbert Spaces, Frechet Spaces, Bounded Linear Operators on a Banach Space, etc., etc., etc. ) comes together with understanding their duals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the name of the operation where you find the "closeness" of 2 values from 0 to 1? This type of function comes up every now and then during my projects, often enough, and in performance-critical parts, that I'd like to learn more about it and see how others have implemented it. Essentially, the purpose is to find out how linearly close 2 values are on a rotating scale of 0 to 1. For example: 0.5 and 0.5 = 1, maximum close 0.5 and 0.4 = 0.8, pretty close 0.5 and 0 = 0, least close 0.5 and 1 = 0, also least close 0.8 and 0.5 = 0.4 0.8 and 0.3 = 0 etc. My implementation in js is as follows: find_closeness(pivot, value) { return 1 - Math.min(Math.abs(value - pivot), Math.abs(value - pivot - 1), Math.abs(value - pivot + 1)) * 2; } But I'm sure there's a more efficient way to do it; math isn't my strong suit. I call the variables pivot and value because that's how I visualize it. What is this operation called? EDIT: previously posted implementation had a bug, this older version still works though
I don't think that there's a name for this in mathematics -- we tend to give names to distance functions rather than closeness functions, for instance. The underlying distance function ($0$ when two points are the same, $1$ when they're as far apart as possible) is twice the ordinary distance on a circle of circumference $1$ (i.e., you take your interval $[0, 1]$ and roll it up into a circle, and then measure "closest" distance along the circle). As for efficient computation, what you've got now is just about as efficient as it can possibly be, because there pretty much has to be an "if" in there somewhere to swap between the two possible choices for which route to take around the circle. I suppose that if you wanted to make up a name for it, you could call it "clock distance" or something like that, but I wouldn't bother doing so.
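For reference, here is a Python translation of the idea (my own sketch, not from the original post): reduce the difference mod $1$, take the shorter way around the circle, and rescale so identical values score $1$ and antipodal values score $0$:

```python
def closeness(pivot, value):
    d = abs(value - pivot) % 1.0   # difference, wrapped into [0, 1)
    circular = min(d, 1.0 - d)     # "clock distance": at most 1/2
    return 1.0 - 2.0 * circular    # rescale: 0 apart -> 1, 1/2 apart -> 0

# the worked examples from the question
cases = [
    ((0.5, 0.5), 1.0), ((0.5, 0.4), 0.8), ((0.5, 0.0), 0.0),
    ((0.5, 1.0), 0.0), ((0.8, 0.5), 0.4), ((0.8, 0.3), 0.0),
]
errors = [abs(closeness(p, v) - want) for (p, v), want in cases]
print(errors)  # all tiny: matches every example in the question
```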
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The Second Derivative Test and the Mean Value Theorem I have been studying for the upcoming Introductory Real Analysis exam and got stuck on the proof of the second derivative test. Here is a verbatim text of the theorem and the proof from the book: THEOREM: Let $I$ be an open interval containing the point $x_0$ and suppose that the function $f:I\rightarrow R$ has a second derivative. Suppose that $ f'(x_0)=0$. If $f''(x_0)>0$, then $x_0$ is a local minimizer of $f:I \rightarrow R$. PROOF: Since $f''(x_0)=\lim_{x\to{x_0}} \frac{f'(x)-f'(x_0)}{x-x_0}>0$, it follows (see Exercise 16) that there is a $\delta>0$ such that the open interval $(x_0-\delta, x_0+\delta)$ is contained in $I$ and $\frac{f'(x)-f'(x_0)}{x-x_0}>0$ if $0<|x-x_0|<\delta$. But $f'(x_0)=0$, so the (4.13) [preceding inequality] amounts to the assertion that if $|x-x_0|<\delta$, then $f'(x)>0 \text{ if } x>x_0$ and $f'(x)<0 \text{ if } x<x_0$. Using these two inequalities and the [Lagrange] Mean Value theorem, it follows that $f(x)>f(x_0) \text{ if } 0<|x-x_0|<\delta$. The textbook states the Lagrange Mean Value Theorem as follows: Suppose the function $f:[a,b] \rightarrow R$ is continuous and $f: (a,b) \rightarrow R$ is differentiable. Then there is a point $x_0$ in the open interval $(a,b)$ at which $f'(x_0)=\frac{f(b)-f(a)}{b-a}$. Luckily I was able to solve 'Exercise 16,' but I just don't see how I can apply the Lagrange Mean Value Theorem in the latter parts of the proof. Any small hints would be appreciated. Thanks in advance.
First, apply your MVT to the interval $I=(x_0,x_0+\delta)$. Take an $x\in I$. The MVT says that there is a $c, x_0<c<x$ such that: $f'(c)=\frac{f(x)-f(x_0)}{x-x_0} $. Since $f'(c) >0$ (it's one of your inequalities), then $f(x)>f(x_0)$. A similar argument holds for the interval $I'=(x_0-\delta,x_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Algebraic probability There are only purple and orange marbles in a bag. There are three more purple marbles than orange marbles in the bag. Roxanne is going to take at random two marbles from the bag. The probability that Roxanne will take two marbles of the same colour is 41/81. Work out the number of orange marbles in the bag. This is my working and I don’t seem to get a proper solution. However there might be a mistake with the question as it almost works, but not quite. Can anyone explain where I went wrong?
As you suspected, the marbles are being replaced. This means the probability for the second marble is the same as for the first marble. Letting $x$ be the number of orange marbles (so there are $x+3$ purple marbles and $2x+3$ marbles in total): $$\left( \frac{x}{2x+3} \right)^2 + \left( \frac{x+3}{2x+3} \right)^2 = \frac{41}{81}$$ $$\frac{2x^2+6x+9}{4x^2+12x+9} = \frac{41}{81}$$ $$162x^2 + 486x + 729 = 164x^2+492x+369$$ $$2x^2 + 6x -360 = 0$$ and you can continue.
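A quick check of the final step (my own addition): the positive root of the quadratic is $x=12$, and the exact probability works out with $12$ orange and $15$ purple marbles:

```python
from fractions import Fraction
import math

# positive root of 2x^2 + 6x - 360 = 0
x = (-6 + math.sqrt(6 * 6 + 4 * 2 * 360)) / (2 * 2)
print(x)  # 12.0 orange marbles

orange, purple = 12, 15          # purple = orange + 3; total 27 marbles
total = orange + purple
p_same = Fraction(orange, total) ** 2 + Fraction(purple, total) ** 2
print(p_same)  # 41/81, matching the given probability
```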
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Order of an element in general linear group Let $A=GL(n,2)$ be the general linear group of size $n$ over the finite field $\mathbb{F}_2=\{0,1\}$. For small $n=3,4,5$ etc., I observed that any matrix in $A$ has order less than or equal to $2^n-1$. Is there a proof of this result?
Consider $g\in GL(n,2)$ and the sequence $\{g^k\}_{k=0}^\infty$. The order of $g$ is the first index $k$ such that there is some $h<k$ such that $g^h=g^k$. Therefore the set $\{g^n\,:\, n\in\Bbb N\}$ contains exactly $\operatorname{ord}g$ distinct matrices. These matrices are all non-zero elements of the $\Bbb F_2$-algebra $\Bbb F_2[g]$. By Cayley-Hamilton theorem, $\dim_{\Bbb F_2}\Bbb F_2[g]\le n$, and therefore $$\operatorname{ord}g\le\lvert\Bbb F_2[g]\setminus\{0\}\rvert\le 2^n-1$$ In general, we may summarise this discussion in the inequality $$\operatorname{ord}g\le q^{\dim\Bbb F_q[g]}-1$$
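For small $n$ the bound can also be checked by brute force. The following sketch (my own addition) enumerates all of $GL(3,2)$ and confirms that the maximal element order is $2^3-1=7$:

```python
from itertools import product

n = 3
I = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))

def mul(A, B):
    # matrix product over F_2
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) % 2
                       for j in range(n)) for i in range(n))

def det2(A):
    # 3x3 determinant mod 2
    (a, b, c), (d, e, f), (g, h, i) = A
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % 2

orders = []
for bits in product((0, 1), repeat=n * n):
    A = (bits[0:3], bits[3:6], bits[6:9])
    if det2(A) == 1:              # invertible over F_2
        M, k = A, 1
        while M != I:             # smallest k with A^k = I
            M, k = mul(M, A), k + 1
        orders.append(k)

print(len(orders), max(orders))   # 168 elements, maximal order 7
```

Indeed $|GL(3,2)| = (2^3-1)(2^3-2)(2^3-4) = 168$, and no element order exceeds $2^3-1 = 7$.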
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find maximum of function $A=\sum _{cyc}\frac{1}{a^2+2}$ Let $a,b,c\in R^+$ such that $ab+bc+ca=1$. Find the maximum value of $$A=\frac{1}{a^2+2}+\frac{1}{b^2+2}+\frac{1}{c^2+2}$$ I will prove $A\le \dfrac{9}{7}$, with equality when $a=b=c=\dfrac{1}{\sqrt3 }$. $$\frac{\sum _{cyc}\left(b^2+2\right)\left(c^2+2\right)}{\Pi _{cyc}\left(a^2+2\right)}\le \frac{9}{7}$$ $$\Leftrightarrow 7\sum _{cyc}a^2b^2+28\sum _{cyc}a^2+84\le 9a^2b^2c^2+18\sum _{cyc}a^2b^2+36\sum _{cyc}a^2+72 (1)$$ Let $a+b+c=3u;ab+bc+ca=3v^2=1;abc=w^3$; then we need to prove $$9w^6+11\left(9v^4-6uw^3\right)+8\left(9u^2-6v^2\right)-12\ge 0$$ $$\Leftrightarrow 9w^6+11\left(27v^6-18uv^2w^3\right)+8\left(81u^2v^4-54v^6\right)-324v^6\ge 0$$ $$\Leftrightarrow 9w^6-198uv^2w^3-459v^6+648u^2v^4\ge 0$$ We have: $$f'\left(w^3\right)=18\left(w^3-11uv^2\right)\le 0$$ So $f$ is a decreasing function of $w^3$, and it's enough to prove the inequality for an equality case of two variables. Assume $a=b\rightarrow c=\dfrac{1-a^2}{2a}$ $$(1)\Leftrightarrow \frac{(a^2+2)(a^2+4)(3a^2-1)^2}{4a^2}\ge0$$ Please check my solution. It's the first time I use $u,v,w$; if I have made mistakes, please fix them for me. Thanks!
$$a=\tan \left(\frac {\alpha}{2}\right), b=\tan \left(\frac {\beta}{2}\right), c=\tan \left(\frac {\gamma}{2}\right)\ \ \ \ (\alpha+\beta+\gamma= \pi)$$ $$A=\dfrac { \cos^2 \left(\frac {\alpha}{2}\right) }{1+ \cos^2 \left(\frac {\alpha}{2}\right) }+ \dfrac { \cos^2 \left(\frac {\beta}{2}\right) }{1+ \cos^2 \left(\frac {\beta}{2}\right) }+ \dfrac { \cos^2 \left(\frac {\gamma}{2}\right) }{1+ \cos^2 \left(\frac {\gamma}{2}\right) } \le \dfrac {3t}{3+t}\le \dfrac {9}{7} $$ $$t= \cos^2 \left(\frac {\alpha}{2}\right) + \cos^2 \left(\frac {\beta}{2}\right) + \cos^2 \left(\frac {\gamma}{2}\right)=\dfrac {3}{2} + \dfrac {1}{2} (\cos (\alpha)+\cos (\beta)+\cos (\gamma)) =\dfrac {3}{2}+\dfrac {R+r}{2R} \le \dfrac {9}{4}$$
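A numeric spot check of the claimed bound (my own addition): sample points on the constraint surface $ab+bc+ca=1$ by solving for $c$, and compare against the symmetric point $a=b=c=1/\sqrt3$:

```python
import math
import random

def A(a, b, c):
    return 1 / (a * a + 2) + 1 / (b * b + 2) + 1 / (c * c + 2)

s = 1 / math.sqrt(3)
A_sym = A(s, s, s)                   # should equal 9/7

random.seed(0)
violations = 0
for _ in range(20000):
    a, b = random.uniform(0.01, 3), random.uniform(0.01, 3)
    if a * b >= 1:
        continue                     # need c > 0
    c = (1 - a * b) / (a + b)        # then ab + bc + ca = 1 exactly
    if A(a, b, c) > 9 / 7 + 1e-12:
        violations += 1
print(A_sym, violations)             # 1.2857... (= 9/7) and no violations
```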
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Degree of the extension $k(x)/k(q(x))$ for a rational function $q\in k(x)$ Let $k$ be an algebraically closed field, and $q\in k(x)$ a nonzero rational function, expressible as $q(x)=r(x)/s(x)$ for $r$ and $s$ coprime, and $d=\deg r\ge\deg s$. Then will $[k(x):k(q(x))]=d$? It is easy to see that $d$ is an upper bound, by using the polynomial $$ r(T)-q(x)s(T)\in k(x)[T] $$ which vanishes for $T=x$, but I don't see why this polynomial must be irreducible. Presumably, the proof of this will at some point involve Gauss's lemma, applied, say, to $$ s(x) r(T)-r(x)s(T)\in k[x,T], $$ but I can't see how to make the details of the argument work.
Set $L:=k(x)$, and its subfield $K:=k(q)$, where $q=r/s$ for coprime polynomials $r,s\in k[x]$. The claim is that $L\cong K[t]/(r(t)-qs(t))$, and it is enough to show that $m:=r(t)-qs(t)$ is an irreducible polynomial in $K[t]$. (This is where you are making a mistake: we don't want to ask about this polynomial in $L[t]$.) Now, $m$ is an irreducible element of $k(t)[q]$ (it is linear in $q$), and is primitive in $k[t][q]$, so by Gauss's Lemma it is irreducible in $k[q,t]$. Then Gauss's Lemma once more implies that it is irreducible in $k(q)[t]=K[t]$. Thus $L\cong K[t]/(m)$, and so $L/K$ has degree $\max\{\deg(r),\deg(s)\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Wrong answer for this integral: $\int \frac{x^2+1}{x^4-x^2+1}dx$ $$\int \frac{x^2+1}{x^4-x^2+1}dx$$ Dividing numerator and denominator by $x^2$ and completing the square in the denominator, I get: $$\int \frac{1+\frac{1}{x^2}}{(x-\frac{1}{x})^2+1}dx$$ Setting $x-\frac{1}{x}$ as $t$ I get: $$dt=\left(1+\frac{1}{x^2}\right)dx$$ Then I have written it in the following form: $$x-\frac{1}{x}+\int \frac{dt}{t^2}.$$ Then I wrote it like this: $x-\frac{1}{x}-\frac{1}{t}$ , and substituting $x$ back: $$x-\frac{1}{x}-\frac{1}{x-\frac{1}{x}}.$$ I know the integral I get is wrong because when I substitute some values of $x$ into the original integral I get different results from those I get when substituting the same values into the final answer. Where is my mistake?
The substitution $t=x-\frac{1}{x}$ is indeed a good starting point, but I'm not sure I understand your reasoning after that. The substitution rewrites your indefinite integral as $\int\frac{dt}{t^2+1}$ (as others have noted), which you seem to have rewritten as $t+\int\frac{dt}{t^2}$. That suggests to me that you confused your integral with $\int(1+\frac{1}{t^2})dt$, i.e. you confused $\frac{1}{1+t^2}$ with $1+\frac{1}{t^2}$.
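With the correct integrand $\int\frac{dt}{t^2+1}=\arctan t + C$, the antiderivative becomes $\arctan(x-\frac1x)+C$ on intervals not containing $0$. A quick numeric check (my own addition) that this really differentiates back to the integrand:

```python
import math

def integrand(x):
    return (x * x + 1) / (x ** 4 - x * x + 1)

def F(x):
    # candidate antiderivative, checked here on x > 0
    return math.atan(x - 1 / x)

h = 1e-6
errors = []
for x in (0.5, 1.0, 1.3, 2.0, 5.0):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    errors.append(abs(numeric_derivative - integrand(x)))
print(errors)  # all tiny: F'(x) matches the integrand
```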
{ "language": "en", "url": "https://math.stackexchange.com/questions/3215994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What could be an example where multiplying two power series gives a bigger radius that is not infinity I'm trying to think of an example of $\sum_{n=0}^{\infty}a_nx^n, \sum_{n=0}^{\infty}b_nx^n$ with radius of convergence $R_1,R_2$ such that when we multiply them we get $\sum_{n=0}^{\infty}c_nx^n, c_n = \sum_{i+j=n}a_ib_j=\sum_{k=0}^{n} a_jb_{n-j}$ with radius $R$ such that $R>R_1$, $R>R_2$ and $R<\infty$. What could be an easy example for that?
Consider the Taylor series $\sum_{k\geq0} a_k\,x^k$ of the functions $$f(x):={2-x\over(1-x)(3-x)},\qquad g(x):={1-x\over 2-x}\ .$$ Then $R_f=1$, $\>R_g=2$, and $R_{fg}=3$, because the radius of convergence for the Taylor series at $0$ is equal to the distance from $0$ to the nearest singularity.
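This example can be verified with exact arithmetic (my own addition): partial fractions give the Taylor coefficients of $f$ and $g$ explicitly, and their Cauchy product reproduces the coefficients of $fg=\frac{1}{3-x}$, whose radius of convergence is $3$:

```python
from fractions import Fraction as Fr

N = 12
# f = (2-x)/((1-x)(3-x)) = (1/2)/(1-x) + (1/6)/(1-x/3):
#   a_n = 1/2 + 1/(2*3^(n+1))
a = [Fr(1, 2) + Fr(1, 2 * 3 ** (n + 1)) for n in range(N)]
# g = (1-x)/(2-x) = 1 - (1/2)/(1-x/2):
#   b_0 = 1/2, and b_n = -1/2^(n+1) for n >= 1
b = [Fr(1, 2)] + [Fr(-1, 2 ** (n + 1)) for n in range(1, N)]
# Cauchy product c_n = sum_i a_i * b_{n-i}
c = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]
expected = [Fr(1, 3 ** (n + 1)) for n in range(N)]  # coefficients of 1/(3-x)
print(c[:4], expected[:4])  # the products match exactly
```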
{ "language": "en", "url": "https://math.stackexchange.com/questions/3216154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can't understand derivatives from searching all over the internet I have tried to follow a lot of tutorials out there that explain derivatives and show understandable examples, but I couldn't understand any. Can anyone link me to a useful, easy-to-follow tutorial or explain derivatives for me?
https://www.youtube.com/watch?v=N2PpRnFqnqY Khan Academy is a great resource if you want to learn math online. It also has a wide selection of calculus tutorials that are all narrated and explained very well. I will try to explain what a derivative is in a nutshell: it is the slope of the tangent line to a curve at a point. For example, the derivative of the function $f(x) = x^2$ is $f'(x) = 2x$. (The apostrophe, or 'prime', is just convention for derivatives.) You will notice that for $x = 1$, the slope of the tangent line is $2x = 2(1) = 2$. (You will also have to change the $y$-intercept.) Observe the image: parabola image
{ "language": "en", "url": "https://math.stackexchange.com/questions/3216484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Euler-Lagrange Equation for Kantorovich Dual Problem Given two probability measures $\mu$ and $\nu$, the Kantorovich Dual problem for quadratic cost is to $$ \text{minimize} \quad \int \phi(x)d\mu + \int \psi(y)d\nu $$ over pairs $(\phi,\psi)\in L^1(d\mu)\times L^1(d\nu)$ such that $xy \leq \phi(x) + \psi(y)$. In Villani's book Topics in Optimal Transportation, P71 section 2.1.6, a variational argument is used to derive the Euler-Lagrange equation for this problem, which turns out to be $$ \nabla \phi_{\#} \mu = \nu. $$ The author also refers to a paper by Gangbo for the derivation. In both the book and the paper, it is assumed that the measures $\mu$ and $\nu$ are supported on compact sets. In Gangbo's paper, $\mu$ and $\nu$ are simply the Lebesgue measure. My question: is there an argument for the general case without any assumption on $\mu$ and $\nu$?
It is possible to weaken the assumptions on $\mu$ and $\nu$ significantly, but it does not seem to be true without some sort of technical assumption on the measures. See Theorem 1.22 in Santambrogio's book, which he describes as "the sharpest result in the unbounded case" - there it is assumed that both measures have finite second moment, and that $\mu$ satisfies a rectifiability assumption which is weaker than absolute continuity. On the previous page, he cites a paper by Gigli which purports to show that this is, in fact the sharpest result possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3216619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculating $\mathbb{P}(Y \leq 1)$ given the moment generating function Given the moment generating function $$M_Y(t) =\frac{4-3t}{2(t-2)(t-1)}$$ with $t<1$ find $\mathbb{P}(Y \leq 1)$. First I tried to convert this to the probability generating function, because then you can easily find it, but my TA told me that it's not possible since the distribution is continuous. Now I've been staring at this for a while and played with the definition of the moment generating function for a bit, but I don't see how I can find $\mathbb{P}(Y \leq 1)$. Some hint in the right direction is much appreciated :)
Hints: * *Try decomposing your $M_Y(t)$ into partial fractions *The moment generating function of a mixture distribution, where $Y\sim X_1$ with probability $p$ and $Y\sim X_0$ with probability $1-p$, is $M_Y(t) = pM_{X_1}(t)+(1-p)M_{X_0}(t)$ *The moment generating function of an exponential distribution with rate parameter $\lambda$ is $\dfrac{\lambda}{\lambda-t}$ for $t \lt \lambda$
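Following the hints, a partial-fraction check with exact arithmetic (my own addition) confirms that $M_Y$ is the MGF of an equal mixture of $\mathrm{Exp}(2)$ and $\mathrm{Exp}(1)$:

```python
from fractions import Fraction as Fr

def M(t):
    return (4 - 3 * t) / (2 * (t - 2) * (t - 1))

def mixture(t):
    # (1/2) * 2/(2-t)  +  (1/2) * 1/(1-t)
    return Fr(1, 2) * 2 / (2 - t) + Fr(1, 2) * 1 / (1 - t)

ts = [Fr(0), Fr(1, 2), Fr(-1), Fr(1, 3), Fr(-5, 7)]
agree = all(M(t) == mixture(t) for t in ts)
print(agree, M(Fr(0)))  # True, and M(0) = 1 as any MGF must satisfy
```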
{ "language": "en", "url": "https://math.stackexchange.com/questions/3216823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving Equations using logarithm Here is a system of equations for which I am having difficulty solving: \begin{cases} a^{2x}.b^{3y}=m^5 \\ a^{3x}.b^{2y}=m^{10} \end{cases}
HINT Notice that $$a^{2x}.b^{3y}=m^5 \\a^{3x}.b^{2y}=m^{10}$$ $$\Rightarrow a^{3x}.b^{2y}=(a^{2x}.b^{3y})^2$$ Now solve either equation for $a$ or $b$. The solution you obtain will depend on $m$.
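An alternative route (my own addition, not the hint above): taking logarithms with $u=x\ln a$, $v=y\ln b$, $L=\ln m$ turns the system into the linear equations $2u+3v=5L$ and $3u+2v=10L$, which can be solved exactly:

```python
from fractions import Fraction as Fr

# 2u + 3v = 5L,  3u + 2v = 10L; solve for the coefficients of L by Cramer's rule
det = 2 * 2 - 3 * 3              # determinant of [[2, 3], [3, 2]] = -5
u = Fr(5 * 2 - 3 * 10, det)      # coefficient of L in u
v = Fr(2 * 10 - 3 * 5, det)      # coefficient of L in v
print(u, v)                      # 4 and -1, i.e. a^x = m^4 and b^y = m^(-1)

# consistency check against the original exponents
print(2 * u + 3 * v, 3 * u + 2 * v)  # 5 and 10, as required
```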
{ "language": "en", "url": "https://math.stackexchange.com/questions/3216927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the logic behind the combinations with repetitions formula This month I'm taking combinatorics classes at my school; yesterday we learned the combinations-with-repetitions formula. Our teacher wrote it on the board, but she didn't really explain the logic behind it, or why the reduction to combinations without repetition works. I was curious how you would explain this in a more natural way, other than just learning the formula as it is. For those unsure which formula I mean, here it is: $$\bar{C}^{k}_{n} = C^{k}_{n+k-1}$$
Well, a $k$-repetition of $n$ is a word $x=x_1\ldots x_n$ of length $n$ over the alphabet ${\Bbb N}_0$ (natural numbers including $0$) such that $x_1+\ldots+x_n=k$. The $2$-repetitions of $4$ are $2000,1100,1010,1001,0200,0110,0101,0020,0011,0002$. They count the number of ways to draw $2$ numbers from $1,2,3,4$ without looking at the order and with replacement (put them back). Here $2000$ accounts for the drawing of two $1$'s and $1100$ for the drawing of $1$ and $2$.
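The bijection behind $\bar C^k_n = C^k_{n+k-1}$ can be checked by enumeration (my own addition): each multiset drawn with replacement corresponds to its vector of counts, which is exactly a $k$-repetition of $n$:

```python
from itertools import combinations_with_replacement
from math import comb

n, k = 4, 2
# map each multiset draw to its count vector (a k-repetition of n)
words = [tuple(draw.count(i) for i in range(1, n + 1))
         for draw in combinations_with_replacement(range(1, n + 1), k)]
print(sorted(words, reverse=True))
# [(2,0,0,0), (1,1,0,0), (1,0,1,0), ..., (0,0,0,2)] -- the list in the answer
print(len(words), comb(n + k - 1, k))  # 10 and C(5, 2) = 10
```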
{ "language": "en", "url": "https://math.stackexchange.com/questions/3217041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim\limits_{n\to \infty} \sin \left( (2 + \sqrt 3 )^n\pi\right)$ for $n \in \mathbb N$ Evaluate $$\lim\limits_{n\to \infty} \sin \bigl( (2 + \sqrt 3 )^n\pi \bigr) \quad \text{ for } n \in \mathbb N$$ This question appeared in my high school exam. My first idea was that, as $n$ is an integer, the value must change abruptly for every increase in $n$, which makes the function discontinuous, and so the limit does not exist. But there is another method that proves the limit to be $0$. Solution : $$ (2+\sqrt{3})^n+(2-\sqrt{3})^n=2m,m\in I^* $$ \begin{align} \therefore \lim_{n\to\infty} \sin\Big((2+\sqrt{3})^n\pi\Big) &=\lim_{n\to \infty} \sin\Big((2m-(2-\sqrt{3})^n)\pi\Big)\\ &=\lim_{n\to \infty} \sin\Big(2m\pi-(2-\sqrt{3})^n\pi\Big)\\ &=-\lim_{n\to \infty} \sin\Big((2-\sqrt{3})^n\pi\Big)\\ &=-\lim_{n\to \infty} \sin\Bigg(\frac{\pi}{(2+\sqrt{3})^n}\Bigg)\\ &=0 \end{align} So, where did I go wrong?
Note that $$\lim_{n\to\infty}\sin\left((2+\sqrt 3)^n \right) $$ is not the same as $$\lim_{n\to\infty}\sin\left((2+\sqrt 3)^n \pi\right) $$
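The integrality step in the question's solution can be illustrated numerically (my own addition): $s_n=(2+\sqrt3)^n+(2-\sqrt3)^n$ satisfies the integer recurrence $s_{n+1}=4s_n-s_{n-1}$ with $s_0=2$, $s_1=4$ (since $2\pm\sqrt3$ are the roots of $t^2-4t+1=0$), so $s_n$ is always an even integer, and the surviving term $\sin((2-\sqrt3)^n\pi)$ shrinks to $0$:

```python
import math

# integer recurrence for s_n = (2+sqrt(3))^n + (2-sqrt(3))^n
s_prev, s_cur = 2, 4
all_even = (s_prev % 2 == 0) and (s_cur % 2 == 0)
for _ in range(60):
    s_prev, s_cur = s_cur, 4 * s_cur - s_prev
    all_even = all_even and (s_cur % 2 == 0)
print(all_even)  # True: every s_n is an even integer

# hence sin((2+sqrt(3))^n * pi) = -sin((2-sqrt(3))^n * pi), which tends to 0
tail = [math.sin(math.pi * (2 - math.sqrt(3)) ** n) for n in (1, 5, 10, 20)]
print(tail)  # rapidly decreasing toward 0
```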
{ "language": "en", "url": "https://math.stackexchange.com/questions/3217301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convergence of the series below $$\sum_{n=1}^\infty\frac{(-1)^n}{\sqrt{n}}$$ I did: $$\lim_{n\to \infty}\Biggr\vert\frac{(-1)^n}{\sqrt{n}}\Biggr\vert$$ $$\lim_{n\to \infty}\frac{1}{n^\frac{1}{2}}=0<1$$ So it diverges by the Ratio Test, right? And I have this series: $$\sum_{n=1}^\infty \bigg(\frac{3n^3+2}{2n^4+1}\bigg)^n$$ I solved and found: $$\lim_{n\to \infty}\frac{n^4(\frac{3}{n}+\frac{2}{n^4})}{n^4(2+\frac{1}{n^4})}=0<1$$ So it is absolutely convergent by the Root Test, right? But the answer options are: A) Conditionally convergent and absolutely convergent. B) Both are absolutely convergent. C) Both are divergent. D) Conditionally convergent and divergent. None matches my answers. Where am I going wrong?
As for your first series, you did not show it diverges, but you can show (compare it with $1/\sqrt{n}$) that the series of the absolute values diverges, i.e. your series is not absolutely convergent. Using Leibniz's criterion for alternating series you can show it converges, making it simply (or conditionally) convergent. You are correct with respect to the second series, assuming that your calculations indicate you understood the series is bounded by a convergent geometric series.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3217465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Locus of an Equation Bouncing off of another question I had here, I wondered what the locus of this equation would be: $$0 = b + ax - dy - cxy$$ Specifically $$0=xy-6y+6x-3$$ I would like a way to derive it. Also, if possible, could someone please give me a name for the curve.
By itself, $$ (x-6)(y+6) = xy -6y+6x - 36 $$ You have $$ 0 = xy-6y+6x-3 $$ Subtract! $$ (x-6)(y+6) = -33 $$ called a hyperbola, in the same way that $xy = 1$ is a hyperbola. Just moved a bit, stretched a bit. Let's check a point. My version has $-3 \cdot 11 = -33,$ so we can take $x=3, y=5.$ The original $xy-6y+6x -3$ becomes $3 \cdot 5 - 6 \cdot 5 + 6 \cdot 3 - 3 = 15 -30+18 - 3=0$
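The algebra can be double-checked exactly (my own addition): every rational point on the locus $xy-6y+6x-3=0$ satisfies the hyperbola form $(x-6)(y+6)=-33$:

```python
from fractions import Fraction as Fr
import random

random.seed(1)
ok = True
for _ in range(200):
    x = Fr(random.randint(-60, 60), random.randint(1, 9))
    if x == 6:
        continue                    # x = 6 is the vertical asymptote
    y = (6 * x - 3) / (6 - x)       # solve xy - 6y + 6x - 3 = 0 for y
    ok = ok and (x * y - 6 * y + 6 * x - 3 == 0)
    ok = ok and ((x - 6) * (y + 6) == -33)
print(ok)  # True: the two forms agree on every sampled point
```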
{ "language": "en", "url": "https://math.stackexchange.com/questions/3217708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is an intersection of prime ideals equal to their product? Suppose $R$ is a commutative ring with identity, and let $P_1 , P_2, ..., P_n$ be distinct prime ideals in $R$. Is it then true that $$ P_1 P_2 \cdots P_n = P_1 \cap P_2 \cap \cdots \cap P_n?$$ My hunch is that this is not true. The inclusion $P_1P_2\cdots P_n\subset P_1\cap P_2\cap\cdots \cap P_n$ certainly holds (as this holds for any ideals in $R$). Also, I know that the equality holds if the ideals are maximal. So, in pursuit of a counterexample, I've been looking at rings in which prime ideals are not maximal; and so, PIDs and finite rings are out. My first thought was, of course, $\mathbb{Z}$; but the equality seems to hold in that case. Am I on a wild goose chase here?
A counterexample is in the ring $R=\mathbb{R}[X,Y]$. Take $P_1=XR$ and $P_2 = XR+YR$. Then $P_1\cap P_2 = P_1$. And $X\in P_1$ is not an element of $P_1 P_2$. Now let me prove something that does work: $\sqrt{P_1\dots P_n}=P_1\cap\dots \cap P_n$. And for that, the prime $P_i$'s do not need to be distinct. For the inclusion $\subset$, let $x\in R$ such that $x^k\in P_1\dots P_n$. As you said yourself, we have $x^k\in P_1\cap \dots\cap P_n$. And then $x\in P_1\cap \dots\cap P_n$ because each $P_i$ is prime. For the inclusion $\supset$, let $x\in P_1\cap\dots\cap P_n$. Now suppose for the sake of contradiction that for all $k\in\mathbb{N}, x^k\notin P_1\dots P_n$. By Krull's theorem, there exists a prime ideal $Q$ such that $P_1\dots P_n\subset Q$ and $x\notin Q$. Because $Q$ is prime we get $P_i\subset Q$ for a certain $i\leq n$. That contradicts $x\in P_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3217862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Fundamental group of $S^n\setminus S^k$ I'm doing review questions for grad school examinations and I came across one that's stumped me for a while: $S^n = \{(x_1, \dots , x_{n+1}) \colon \sum x_i^2 = 1\}$ and $S^k = \{(x_1, \dots , x_{n+1}) \in S^n \colon x_{k+2} = \cdots = x_{n+1} = 0\}$ What is $\pi_1(S^n\setminus S^k)$? I've approached this with Seifert–van Kampen but can't get anywhere.
$S^n \setminus S^k \simeq S^{n-k-1}$ which should answer your question. Proof. Define $j : S^{n-k-1} \to S^n \setminus S^k , j(y_1,\dots,y_{n-k}) = (0,\dots,0,y_1,\dots, y_{n-k}),$ $p: S^n \setminus S^k \to \mathbb R^{n-k} \setminus \{ 0\}, p(x_1,\dots,x_{n+1}) = (x_{k+2},\dots,x_{n+1}).$ $r : S^n \setminus S^k \to S^{n-k-1}, r(x) = p(x)/\lVert p(x) \rVert$. We have $p(j(y)) = y$ and thus $r(j(y)) = y$. The latter means $r \circ j = id$. For $x \in S^n \setminus S^k$ we have $j(r(x)) = (0,\dots,0,p(x)/\lVert p(x) \rVert)$. The linear path $l_x :[0,1] \to \mathbb R^{n+1}, l_x(t) = (1-t) x + t j(r(x))$, does not go through $0$: For $t = 1$ we clearly have $l_x(t) \ne 0$ and for $t \ne 1$ the equation $l_x(t) = 0$ implies $x_1 = \dots = x_{k+1} = 0$. Hence we get $x = (0,\dots,0,p(x))$ which also shows $\lVert p(x) \rVert = 1$. We conclude $j(r(x)) = x$, thus $x = (1-t) x + t j(r(x)) = l_x(t) = 0$, which is impossible. This means that $l'_x: [0,1] \to S^n, l'_x(t) = l_x(t)/\lVert l_x(t) \rVert$, is well-defined. We have $l'_x(t) \in S^n \setminus S^k$ (which is equivalent to $p(l'_x(t)) \ne 0$) for all $t$: Since $p$ is linear, $p(l'_x(t)) = 0$ implies $p(l_x(t)) = 0$ and further $$0 = (1-t) p(x) + t p(j(r(x))) = \\(1-t) p(x) + t r(x) = \\(1-t) p(x) + t p(x)/\lVert p(x) \rVert = \left(1-t+t/\lVert p(x) \rVert \right)p(x) .$$ Since $1-t+t/\lVert p(x) \rVert \ge 1-t+t = 1$, we conclude that $p(x) = 0$ which is impossible for $x \in S^n \setminus S^k$. Therefore $$H : (S^n \setminus S^k) \times [0,1] \to S^n \setminus S^k, H(x,t) = l'_x(t)$$ is well-defined and we have $H(x,0) = x, H(x,1) = j(r(x))$, i.e. $id \simeq j \circ r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $a^3 - b^3 = c! - 18$ does not have a solution Let $a, b,$ and $c$ be positive integers and $c \gt 6$. Show that the equation $$a^3 - b^3 = c! - 18$$ does not have a solution for all positive integers $a, b,$ and $c$. What I have realized so far is that if $a^3 - b^3 = c! - 18$, then it must also be true that $a^3 -b^3 \equiv c! - 18 \pmod n$ for all $n \geq 2$. That would mean if I can show that there exists an $n$ so that $$a^3 - b^3 \not\equiv c! - 18 \pmod n$$ for all relevant $a, b,$ and $c$ then I have also shown that $$a^3 - b^3 \neq c! - 18$$ for all relevant $a,b,$ and $c$. The only problem that I have now is that I do not know how to find a suitable $n$
You may want to try your luck with $\bmod 7$. You should know, or easily show, that all cubes are $\in\{0,1,6\}\bmod 7$ forcing $a^3-b^3\in\{0,1,2,5,6\}$. But for $c\ge 7$ you find $c!-18$ fails to meet this qualification (I will let you figure that out). This does not generally work for $c<7$, but that was excluded from the problem statement ("$c>6$").
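The mod-$7$ bookkeeping can be confirmed by a short computation (my own addition):

```python
from math import factorial

cubes = {(a ** 3) % 7 for a in range(7)}
diffs = {(x - y) % 7 for x in cubes for y in cubes}
print(sorted(cubes), sorted(diffs))  # [0, 1, 6] and [0, 1, 2, 5, 6]

# for c >= 7, c! is divisible by 7, so c! - 18 = 3 (mod 7),
# and 3 is not attainable as a difference of two cubes mod 7
residues = {(factorial(c) - 18) % 7 for c in range(7, 15)}
print(residues, 3 in diffs)          # {3} False
```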
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Are there true arithmetical statements that correspond to the inconsistency of inconsistent theories? Let's take naive set theory "NvST", which is the theory whose axioms are all instances of naive unrestricted comprehension, and which is of course known to be inconsistent. So $\neg$ Con(NvST) is a TRUE statement! Question: is there a TRUE arithmetical statement that corresponds to $\neg$ Con(NvST)? The problem I'm getting is that since NvST is inconsistent, it cannot be interpreted in PA + the $\omega$-rule. So how can we get a sentence that speaks about the consistency status of that theory in the language of arithmetic if the theory itself cannot be interpreted in that language?
Gödel's work shows us how to write down an arithmetical statement that corresponds to $\operatorname{Con}(T)$ or $\neg\operatorname{Con}(T)$ for any theory $T$, as long as the set of axioms of $T$ is Turing-recognizable. This works purely syntactically, and does not depend in any way of having an interpretation of the language of $T$ in mind. So you can certainly apply it to your naive set theory, since it is easy to recognize instances of unrestricted comprehension. Since it is certainly the case that NvST is inconsistent -- you can prove a contradiction in just a handful of lines! -- the arithmetical statement $\neg\operatorname{Con}({\sf NvST})$ is certainly true in $\mathbb N$. Where interpretability in $T$ comes into play is if we need something like $\operatorname{Con}(T)$ to be a $T$-statement rather than an arithmetical statement. This need arises on the way to the incompleteness theorem. And even then, what matters is that we can interpret a certain amount of arithmetic in $T$, not that we can interpret $T$ in the metatheory. But we don't need to go there if all we want is to speak about provability in $T$ with arithmetical statements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Squared modulus of a complex expression I am trying to verify a formula that was presented in a paper giving the intensity transmission of a resonator. I need to confirm that the squared modulus of $$\frac{r-\tau\exp\left(i\varphi\right)}{1-\tau r\exp\left(i\varphi\right)} \tag{1}$$ is equal to $$\frac{\tau^{2}-2r\tau\cos\varphi+r^{2}}{1+r^{2}\tau^{2}-2r\tau\cos\varphi}. \tag{2}$$ The squared modulus of a complex expression is found by multiplying the expression by its complex conjugate, i.e., $$\left|\frac{r-\tau\exp\left(i\varphi\right)}{1-\tau r\exp\left(i\varphi\right)}\right|^{2} = \frac{\left[r-\tau\exp\left(i\varphi\right)\right]}{\left(1-\tau r\exp\left(i\varphi\right)\right)}\frac{\left[\left[1-\tau r\cos\left(\varphi\right)\right]+i\tau r\sin\left(\varphi\right)\right]}{\left[\left[1-\tau r\cos\left(\varphi\right)\right]+i\tau r\sin\left(\varphi\right)\right]}$$ $$=\frac{\left[r-\tau\cos\left(\varphi\right)-i\tau\sin\left(\varphi\right)\right]}{\left(1-\tau r\cos\left(\varphi\right)-i\tau r\sin\left(\varphi\right)\right)}\frac{\left[\left[1-\tau r\cos\left(\varphi\right)\right]+i\tau r\sin\left(\varphi\right)\right]}{\left[\left[1-\tau r\cos\left(\varphi\right)\right]+i\tau r\sin\left(\varphi\right)\right]}.$$ The denominator correctly reduces to "$1+r^{2}\tau^{2}-2r\tau\cos\varphi$". However, I could not obtain the right form for the numerator. 
After multiplication, the numerator becomes: $$r-\tau r^{2}\cos\left(\varphi\right)+i\tau r^{2}\sin\left(\varphi\right)...$$ $$-\tau\cos\left(\varphi\right)+\tau^{2}r\cos^{2}\left(\varphi\right)-i\tau^{2}r\sin\left(\varphi\right)\cos\left(\varphi\right)...$$ $$-i\tau\sin\left(\varphi\right)+i\tau^{2}r\sin\left(\varphi\right)\cos\left(\varphi\right)+\tau^{2}r\sin^{2}\left(\varphi\right).$$ After simplification, my final expression is: $$\frac{r+\tau^{2}r-r^{2}\tau\left[\cos\left(\varphi\right)-i\sin\left(\varphi\right)\right]-\tau\left[\cos\left(\varphi\right)+i\sin\left(\varphi\right)\right]}{1+r^{2}\tau^{2}-2r\tau\cos\varphi}.$$ Is there any way that we can manipulate the numerator further to cast it in the form of Eqn (2)? Has there been a flaw in my algebra, or has there been a mistake by the authors of the article? Any suggestions would be greatly appreciated.
With $z:=\exp(i\varphi)$ the square modulus is $$\frac{(r-\tau z)(r-\tau/z)}{(1-\tau rz)(1-\tau r/z)}=\frac{r^2+\tau^2-r\tau(z+1/z)}{1+\tau^2r^2-\tau r(z+1/z)}=\frac{r^2+\tau^2-2r\tau\cos\varphi}{1+\tau^2r^2-2\tau r\cos\varphi}.$$
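A quick numerical cross-check of the identity (the helper names here are my own, chosen for illustration):

```python
import cmath
import math

def lhs_sq_modulus(r, tau, phi):
    """|(r - tau*e^{i phi}) / (1 - tau*r*e^{i phi})|^2, computed directly."""
    z = cmath.exp(1j * phi)
    return abs((r - tau * z) / (1 - tau * r * z)) ** 2

def rhs_formula(r, tau, phi):
    """Closed form (2): (tau^2 - 2 r tau cos phi + r^2) / (1 + r^2 tau^2 - 2 r tau cos phi)."""
    c = math.cos(phi)
    return (tau**2 - 2*r*tau*c + r**2) / (1 + r**2 * tau**2 - 2*r*tau*c)

# Spot-check on a grid of parameter values.
for r in (0.3, 0.7, 0.95):
    for tau in (0.2, 0.8):
        for phi in (0.0, 0.5, 2.0, 3.1):
            assert abs(lhs_sq_modulus(r, tau, phi) - rhs_formula(r, tau, phi)) < 1e-12
```

So the paper's formula (2) is correct; the flaw was in the algebra, not in the claimed result.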
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to tell when a limit diverges? I have $\underset{x \to -2} \lim \frac{x^2-1}{2x+4}$ which doesn't exist because it diverges. I spent a while trying to remove the division by zero before plugging it into an online math calculator, and I want to know how I could have known that it diverges. I'm not skilled enough to just look at it and figure out there's no way for me to reform it so it doesn't have zero division when $x=-2$. Is there a way for me to find out whether it diverges other than just working on it for a while and then guessing that the limit diverges if I can't get rid of the zero division? My Steps: $$\underset{x \to -2} \lim \frac{x^2-1}{2x+4}$$ $$=\underset{x \to -2} \lim \frac{(x^2-1)(2x-4)}{(2x+4)(2x-4)}$$ $$=\underset{x \to -2} \lim \frac{(x^2-1)(2x-4)}{4x^2-16}$$ $$=\underset{x \to -2} \lim \frac{(x^2-1)(2x-4)}{4(x^2-4)}$$ $$=\underset{x \to -2} \lim \frac{(x-1)(x+1)2(x-2)}{4(x-2)(x+2)}$$ $$=\underset{x \to -2} \lim \frac{2(x-1)(x+1)}{4(x+2)}$$ $$=\underset{x \to -2} \lim \frac{(x-1)(x+1)}{2(x+2)}$$ $$=\underset{x \to -2} \lim \frac{x^2-1}{2x+4}$$
$$ \frac{x^2-1}{2x+4}= \frac{x^2-4+3}{2(x+2)}= \frac{(x-2)(x+2)}{2(x+2)}+\frac{3}{2(x+2)}=\\ \frac{x-2}{2}+\frac{3}{2}\cdot\frac{1}{x+2}. $$ It's not difficult to see that as $x$ approaches $-2$ from the left, $\frac{1}{x+2}$ goes to negative infinity: $\lim_{x\to-2^-}\frac{1}{x+2}=-\infty$. And as $x$ approaches $-2$ from the right, $\frac{1}{x+2}$ goes to positive infinity: $\lim_{x\to-2^+}\frac{1}{x+2}=+\infty$. So, that limit clearly does not exist and by extension it must be the case that the entire limit does not exist either. In more precise mathematical language, it all looks like this: $$ \lim_{x\to-2^-}\frac{x^2-1}{2x+4}=\lim_{x\to-2^-}\left(\frac{x-2}{2}+\frac{3}{2}\cdot\frac{1}{x+2}\right)=-2+\frac{3}{2}\cdot(-\infty)=-\infty,\\ \lim_{x\to-2^+}\frac{x^2-1}{2x+4}=\lim_{x\to-2^+}\left(\frac{x-2}{2}+\frac{3}{2}\cdot\frac{1}{x+2}\right)=-2+\frac{3}{2}\cdot(+\infty)=+\infty. $$ For a limit to exist, the two one-sided limits must be equal and finite. This is clearly not the case.
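The two-sided blow-up of the original quotient can also be seen numerically (a quick sketch, evidence rather than a proof):

```python
def h(x):
    return (x**2 - 1) / (2*x + 4)

# Approaching -2 from the left: values plunge toward -infinity.
left = [h(-2 - 10**-k) for k in range(1, 6)]
assert all(left[i] > left[i+1] for i in range(len(left) - 1))
assert left[-1] < -1e4

# Approaching -2 from the right: values blow up toward +infinity.
right = [h(-2 + 10**-k) for k in range(1, 6)]
assert all(right[i] < right[i+1] for i in range(len(right) - 1))
assert right[-1] > 1e4
```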
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does it mean for a system of equations to have a non trivial solution? What does it mean for a system of equations to have a non trivial solution? Trivial means obvious, but how can an equation have an obvious solution? My book says that if the solution is nontrivial, then the determinant of the coefficient matrix of the variables is 0.
The word is often used about systems of equations such as $$ 3x+5y-12z = 0 \\ x-y+5z = 0 \\ 5x+y+3z=0 $$ where it is immediately obvious that setting all of the unknowns to $0$ will solve the system. The question is then whether the system has other solutions than that.
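For the particular system above, the determinant criterion mentioned in the question can be checked directly; a small sketch (the cofactor-expansion helper is my own):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[3, 5, -12],
     [1, -1, 5],
     [5, 1, 3]]

d = det3(A)
# Nonzero determinant: this homogeneous system has only the trivial solution.
assert d == 14
```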
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Randomly generate a sorted set with uniform distribution I have an ordered set $S = \langle S_1, S_2, .., S_M \rangle$ from which I want to draw a sample of $N$ elements in such a way that the sample is non-strictly totally ordered (as with $\leq$ and the integers), and all the possible samples occur with equal probability. The sample must be taken with repetition. For example, let's say $S = \langle 1, 2, 3, 4 \rangle$ and $N=3$; the samples $[1, 1, 1]$, $[1,2,3]$, $[2,3,3]$ would be valid, but $[3,2,1]$ or $[2,1,1]$ would be invalid. A simple way to generate this set would be to just randomly sample from $S$ and then sort the resulting sequence. However, please note that this approach is biased ($[1,1,1]$ is less likely to occur than $[1,2,3]$, for example). This question is related to one of the answers given in this StackOverflow question: https://stackoverflow.com/questions/26467434/generating-random-number-in-sorted-order. Note that the algorithm proposed there is to generate such a sample without repetition, whereas I want my sample to be generated with repetition.
I would just draw $N$ random numbers and accept the draw only if it is already in non-decreasing order; every sorted sequence then occurs with equal probability (i.e. the rejection method as discussed). Otherwise, sample the tuples by using Monte Carlo: count how many different combinations are permissible, and accept each with equal probability according to different values of a $U[0,1]$, i.e. give $[1,1,1]$ the range $[0,1/n]$, $[1,1,2]$ the range $[1/n, 2/n]$, and so on. As long as you have some sensible ordering on the set of possible sequences you are in business. You can use for loops to achieve this.
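A minimal sketch of the rejection method (function names are my own); note that the acceptance probability is $\binom{M+N-1}{N}/M^N$, which is perfectly workable for small $N$:

```python
import random

def sorted_sample(S, N, rng=random):
    """Rejection method: draw N elements i.i.d. from S (with repetition) and
    accept only draws that are already in non-decreasing order.  Every sorted
    sequence has the same probability |S|**(-N) of being drawn, so accepted
    samples are uniform over all non-decreasing sequences."""
    while True:
        draw = [rng.choice(S) for _ in range(N)]
        if all(draw[i] <= draw[i+1] for i in range(N - 1)):
            return draw

random.seed(0)
sample = sorted_sample([1, 2, 3, 4], 3)
assert sample == sorted(sample) and all(x in [1, 2, 3, 4] for x in sample)

# All C(4+3-1, 3) = 20 multisets show up over many draws.
seen = {tuple(sorted_sample([1, 2, 3, 4], 3)) for _ in range(5000)}
assert len(seen) == 20
```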
{ "language": "en", "url": "https://math.stackexchange.com/questions/3218854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Number Theory Puzzle: Competition Problem I have been struggling the past few hours with a problem I initially thought to be easy and simple. Nothing comes to mind besides guessing numbers and the answer. Any help or hints would be greatly appreciated! Thank You! Problem: If any digit of a given 4-digit number is deleted, the resulting 3-digit number is a divisor of the original number. How many 4-digit numbers have this property?
We have $14$ such numbers. In addition to the numbers @RossMillikan has noted, we can also have the case $a\mid \overline{ab}$ and $b\mid \overline{ab}$ with $a \ne b$ (i.e. $a\mid b$ and $b\mid 10a$ for the first two digits), which gives numbers like $1200,1500,2400,3600,4800$. The full list is given below: $$ 1100, 1200, 1500, 2200, 2400, 3300, 3600, 4400, 4800, 5500, 6600, 7700, 8800, 9900$$
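A brute-force enumeration (assuming, as the problem statement suggests, that each deletion must yield a genuine 3-digit number, i.e. at least $100$) confirms the count of $14$:

```python
def has_property(n):
    s = str(n)
    for i in range(4):
        m = int(s[:i] + s[i+1:])
        # the result must itself be a 3-digit number that divides n
        if not (100 <= m <= 999 and n % m == 0):
            return False
    return True

solutions = [n for n in range(1000, 10000) if has_property(n)]
assert len(solutions) == 14
assert solutions == [1100, 1200, 1500, 2200, 2400, 3300, 3600,
                     4400, 4800, 5500, 6600, 7700, 8800, 9900]
```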
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Question about the definition of involute of a plane curve I'm studying the involute of a plane curve from here and there's a small point that is really bothering me and that I cannot understand. Assuming the curve is parameterized by arc length, the involute is defined as $$\gamma(t) = \beta(t) - t \beta'(t) $$ Why is there a negative sign? Like it says in the link I added, the bob's position is at distance $t$ in direction $-\beta'(t)$. I can't see how this negative sign has to be there for every curve in general. If the tangent line of $\beta$ at point $t$ is $\beta(t) + \lambda \beta'(t)$, then starting from $\beta(t)$ I can move along the line in either direction depending on $\lambda$. Why must it be negative in the case of the involute?
The model for the involute is this: Take a circle, with scotch tape wrapped around it. Start to peel the scotch tape off and follow the point $P$ at the end of the tape. As you go counterclockwise around the circle, the tape is tangent, but the line segment from the point of contact to $P$ goes in the direction opposite to the (counterclockwise) tangent vector to the circle.
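For the unit circle this can be checked numerically: with $\gamma(t)=\beta(t)-t\beta'(t)$ one gets $\gamma'(t)=-t\beta''(t)$, so the unwound piece of tape (tangent to the circle) is always normal to the path of the endpoint $P$. A small sketch (helper names are mine):

```python
import math

def beta(t):            # unit circle, arc-length parametrized
    return (math.cos(t), math.sin(t))

def beta_prime(t):
    return (-math.sin(t), math.cos(t))

def involute(t):        # gamma(t) = beta(t) - t * beta'(t)
    bx, by = beta(t)
    dx, dy = beta_prime(t)
    return (bx - t*dx, by - t*dy)

def involute_prime(t, h=1e-6):   # central-difference derivative of gamma
    (x1, y1), (x0, y0) = involute(t + h), involute(t - h)
    return ((x1 - x0) / (2*h), (y1 - y0) / (2*h))

for t in (0.5, 1.0, 2.0):
    gx, gy = involute_prime(t)
    dx, dy = beta_prime(t)
    # the taut tape direction is orthogonal to the involute's velocity
    assert abs(gx*dx + gy*dy) < 1e-5
```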
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$SL(n,\Bbb{C})$ is a regular submanifold of $GL(n,\Bbb{C})$ Let $SL(n,\Bbb{C})$ be the group of matrices with complex entries and determinant $1$. I want to prove that $SL(n,\Bbb{C})$ is a regular submanifold of $GL(n,\Bbb{C})$. An idea is to use the Regular Level Set Theorem because $$SL(n,\Bbb{C})=f^{-1}(1)$$ where $f(A)=\det(A)$ (I know how to prove the same for $SL(n,\Bbb{R})$, but here I believe we need to use the theory of multivariable complex analysis, which I do not know at all.) Can someone help me prove this statement? Thank you in advance.
You can avoid the calculations by using Lie group theory, because $GL(n,\mathbb{C})$ is a Lie group and $SL(n,\mathbb{C})$ a closed subgroup. Cartan's theorem then says $SL(n,\mathbb{C})$ is a regular Lie subgroup, in particular a regular submanifold. Note that this also works for $\mathbb{R}$ instead of $\mathbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finiteness of $\int_0^1 \left(\sum_{n=1}^\infty \frac{n^\alpha e^{- t n^\alpha}}{1 - e^{- t n^\alpha}} \right)^{1/2} \, \mathrm{d}t$ Let $\alpha > 1$. My question is: is $\int_0^1 \left(\sum_{n=1}^\infty \frac{n^\alpha e^{- t n^\alpha}}{1 - e^{- t n^\alpha}} \right)^{1/2} \, \mathrm{d}t$ finite? For $t \in ]0, 1]$, the series $\sum_{n=1}^\infty \frac{n^\alpha e^{- t n^\alpha}}{1 - e^{- t n^\alpha}}$ is convergent (using the ratio test for example). I don't think there is a direct way to compute its sum, so I looked for an upper bound, to no avail. The square root also prevents me from switching integral and series.
As Beautiful Art pointed out, it is enough to find the behaviour about $t=0$. Defining $$f(t)=\sum_{n=1}^\infty \frac{n^\alpha}{{\rm e}^{t n^\alpha}-1}$$ you can calculate the Mellin-Transform $${\cal M}_f(s) = \Gamma(s)\zeta(s)\zeta\left(\alpha(s-1)\right)$$ which converges whenever $\Re(s)>1+\frac{1}{\alpha}$. The inverse then becomes $$f(t)=\frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s)\zeta(s)\zeta\left(\alpha(s-1)\right) t^{-s} \, {\rm d}s$$ where $\sigma>1+\frac{1}{\alpha}$. This can be used to calculate the series about $t=0$ by shifting the contour to the left. You can assume $\alpha$ not an integer, and then the integrand has poles at $s=1+\frac{1}{\alpha},1,0,-1,-3,-5,-7,...$. So shifting the contour to $-1<\sigma<0$ we pick up the residues at $s=1+\frac{1}{\alpha}$, $s=1$ and $s=0$, and the first terms become $$f(t)=\frac{\Gamma\left(1+\frac{1}{\alpha}\right)\zeta\left(1+\frac{1}{\alpha}\right)}{\alpha \, t^{1+\frac{1}{\alpha}}} - \frac{1}{2t} - \frac{\zeta(-\alpha)}{2} + \frac{1}{2\pi i} \int_{\sigma - i\infty}^{\sigma + i\infty} \Gamma(s)\zeta(s)\zeta\left(\alpha(s-1)\right) t^{-s} \, {\rm d}s \, ,$$ where now $-1<\sigma<0$.
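As a numerical sanity check of the leading terms, one can compare a direct evaluation of the series with the asymptotic formula for $\alpha=2$ (where the constant term vanishes because $\zeta(-2)=0$). The `zeta` helper below is my own Euler–Maclaurin approximation, not a library routine:

```python
import math

def zeta(s, N=1000):
    """Euler-Maclaurin approximation of the Riemann zeta function for s > 1
    (my own helper; accurate to far better than the tolerance used below)."""
    tail = N**(1 - s) / (s - 1) + 0.5 * N**(-s) + s * N**(-s - 1) / 12
    return sum(n**(-s) for n in range(1, N)) + tail

def f(t, alpha=2.0):
    """Direct evaluation of sum_{n>=1} n^alpha / (exp(t*n^alpha) - 1)."""
    total, n = 0.0, 1
    while True:
        x = t * n**alpha
        if x > 60:          # remaining terms are negligibly small
            return total
        total += n**alpha / math.expm1(x)
        n += 1

alpha, t = 2.0, 0.01
lead = math.gamma(1 + 1/alpha) * zeta(1 + 1/alpha) / alpha * t**(-(1 + 1/alpha))
# the constant term -zeta(-alpha)/2 vanishes for alpha = 2 since zeta(-2) = 0
approx = lead - 1 / (2*t)
assert abs(f(t, alpha) - approx) / approx < 1e-3
```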
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluate $\sum_{n=1}^\infty \frac{(-1)^{[\sqrt{n}]}}{n^p}$ for $ 0 < p \leq 1$ How do you evaluate $$\sum_{n=1}^\infty \frac{(-1)^{[\sqrt{n}]}}{n^p}$$ for the case $0 < p \leq 1$? Now I have successfully proved that the series converges for the case $p > 1$ and divergence for $p\leq 0$ is trivial, but I cannot deal with the case $0 < p \leq 1$. Wolfram Alpha tells me that it actually diverges but I completely have no idea on how to prove that. Here $[\sqrt{n}]$ stands for the largest integer not exceeding $\sqrt{n}$.
$$S(N) = \sum_{n=1}^{N}(-1)^{\lfloor\sqrt{n}\rfloor}$$ is such that $S((2M)^2)=-2M$ and $S((2M+1)^2)=2M-1$. In particular $|S(N)|\leq\sqrt{N}+2$ for every $N$, with $|S(N)|=\sqrt{N}$ exactly at even squares, but the sign of $S(N)$ depends on the parity of $\lfloor\sqrt{N}\rfloor$. If we assume $p\in\left(\frac{1}{2},1\right]$ and apply summation by parts we have that the original series behaves like $$ \sum_{n\geq 1}\frac{S(n)}{n^{p+1}}$$ which is absolutely convergent. Summation by parts also gives that the original series is not convergent for $p\in\left(0,\frac{1}{2}\right)$. It follows that the only non-trivial case to be discussed is $p=\frac{1}{2}$. In this case the partial sums of the original series keep oscillating between two finite values and the original series is not convergent. Summarizing, Kronecker's lemma or something equivalent gives that convergence happens for $p>\frac{1}{2}$.
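The closed forms for $S$ at squares are easy to verify by direct computation (a quick sketch):

```python
import math

def S(N):
    """Partial sum of (-1)^floor(sqrt(n)) for n = 1..N."""
    return sum((-1) ** math.isqrt(n) for n in range(1, N + 1))

# S((2M)^2) = -2M and S((2M+1)^2) = 2M - 1
for M in range(1, 8):
    assert S((2*M)**2) == -2*M
    assert S((2*M + 1)**2) == 2*M - 1

# |S(N)| stays within sqrt(N) + 2 on a decent range
assert all(abs(S(N)) <= math.sqrt(N) + 2 for N in range(1, 400))
```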
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\text{abs}: [-1, 1] \to [0,1]$ a covering map? I read that for any covering space $p:C \to X$ the cardinality of the fibre is the same for every $x \in X$ if $X$ is connected. So what is wrong with my example: $\text{abs}: [-1,1] \to [0,1]$? Clearly the fibre has cardinality 2 everywhere except for $x=0$ where the cardinality is 1. And I think this is a covering map since for all $x \in (0,1)$ $U = (0,1)$ is an open set satisfying the necessary conditions. And for $x=0$ and $x=1$ we use $U = [0, 0.5)$ and $U = (0.5, 1]$ respectively. So where am I wrong (or am I even wrong)?
$abs$ is not a covering map. That's because no open neighbourhood of $0$ in $[-1,1]$ can be mapped homeomorphically onto its image. And covering maps are local homeomorphisms. In particular your $U=[0,0.5)$ example, as a neighbourhood of $0$ in the codomain, is not correct. Its preimage under $abs$ is $(-0.5,0.5)$, which cannot be decomposed into a disjoint union of two or more nonempty open sets at all (it is connected). And so "being a covering map" in that case means that $abs$ maps $(-0.5,0.5)$ homeomorphically onto $[0,0.5)$, which is clearly false. However $abs$ is a covering map if we exclude $0$ from both domain and codomain. And in that case everything works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove by induction on $n$ that $(1)(2)+(2)(3)+...+n(n+1)={1\over 3}n(n+1)(n+2)$ So after testing myself with this question, I was unable to solve it. I was able to prove the base case $n=1$, but I was pretty lost on the induction step. I took a look at the solution and here it is: Solution to problem I understand it up until the point in the $P(k+1)$ step where it says: $={1\over 3}(k+1)(k+2)(k)+{1\over 3}(k+1)(k+2)(3)$ $={1\over 3}(k+1)(k+2)[k+3]$ I don't see how they have made this jump. It certainly seems like a bigger jump than any of the other lines. What is the process here? I'm not great at factorising... but if I were to replace $k$ with a number, I'm pretty sure these lines would not be equal! So what am I missing? Do the square brackets carry some kind of special notation in this situation?
This is related to the distributive property which basically states that $\rm \color{red}ab+\color{red}ac=\color{red}a\cdot(b+c)$. In your example $$\rm \color{blue}{\frac13(k+1)(k+2)}\cdot k+ \color{blue}{\frac13(k+1)(k+2)}\cdot3=\color{blue}{\frac13(k+1)(k+2)}\cdot(k+3)$$
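The identity being proved can also be sanity-checked numerically for many $n$ (evidence, of course, not a proof):

```python
# Check 1*2 + 2*3 + ... + n(n+1) = n(n+1)(n+2)/3 for n up to 100,
# written as 3*lhs == n(n+1)(n+2) to stay in exact integer arithmetic.
for n in range(1, 101):
    lhs = sum(i * (i + 1) for i in range(1, n + 1))
    assert 3 * lhs == n * (n + 1) * (n + 2)
```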
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If an exercise says "the measure of the smallest angle in a triangle is $20$", does that mean no other angle can have measure $20$? I have a question about this exercise: If the measure of the smallest angle in a triangle is 20, then the measure of the greatest possible angle in this triangle is ..... $$a)\ 90 \qquad b)\ 140 \qquad c)\ 159 \qquad d)\ 160$$ My question: Does the word "smallest" mean that no other angle in the triangle is of measure 20, or is it okay to have another angle that size? Because, if there is no other one of measure 20, the answer is 90; but, if it's okay, the answer is 140.
What you're asking is a valid question: there is a difference between the minimal element and the least element. In this question, I would assume that it's OK for another angle to be 20, because if not, there would be no well-defined measure for the second-smallest angle. What would it be? 20.01? 20.0001? It's analogous to asking whether there is a smallest positive real number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3219890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subset of a subgroup is not closed under group actions In the following image (from "Field Arithmetic by Fried & Jarden" Page 6, Lemma 1.2.2(b)), red rectangle, I'm trying to figure out why it's right to claim $h^{-1} \in H$. I thought the following solved it: $g=k_ih_i$ and $g=kh^{-1}$ $\Rightarrow k_ih_i=kh^{-1} \Rightarrow h_ih=k_i^{-1}k \in H$ because right hand side is in H. So $h^{-1}=k^{-1}g=k^{-1}k_ih_i$, hence, because $k_i^{-1}k \in H$ we get $h^{-1} \in H \Rightarrow g=kh^{-1} \in KH$. But then I realized there's no logic in assuming $h_ih \in H$ because we're talking subsets here, not subgroups. So is it ok to assume $h_ih \in H$? if so, why? If not, I'd appreciate an explanation for the red part..
The proof is not correct. You have $g = k_ih_i$ with $k_i \in K, h_i \in H_i$. But then $h_i = k_i^{-1}g \in K^{-1}g$ and we conclude $H_i \cap K^{-1}g \ne \emptyset$ and not $H_i \cap g^{-1}K \ne \emptyset$. But then we can correctly show that there exists $h \in \bigcap_{i \in I} (H_i \cap K^{-1}g) = (\bigcap_{i \in I} H_i) \cap K^{-1}g = H \cap K^{-1}g$. Hence $h \in H$ and $h = k^{-1}g$ for some $k \in K$. This means $g = kh \in KH$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Convolution of functions $f,g\in L^1([0,1])$ Problem: convolution ($f*g) $ of functions $f,g\in L^1([0,1])$, where: $$f(x) = \frac{3}{5-4\cos{4\pi x}},$$ $$g(x) = \frac{2\cos{2\pi x}}{5-4\cos{4\pi x}},$$ and $$(f * g)(x) = \int_{0}^{1}f(x-y)g(y)dy.$$ I'm not sure how to approach this problem. This is a problem from a course dealing with Fourier Analysis and I'm working in $[0,1]$ where Fourier series of functions are of the form $$\sum_{n \in\mathbb{Z}} f_c(n) e^{2\pi i nx}$$ where $f_c(n)=\int_{0}^1 f(x)e^{-2\pi i nx}dx.$ What I know is $(f*g)_c (n) = f_c(n)g_c(n),$ but I'm not sure if this would help. We haven't yet dealt with the question of whether a Fourier series converges to the function, unless $f$ is literally equal to the Fourier series itself though, so I'm not sure if that would be helpful at all. I also know the result is $0$, and direct integration hasn't yielded any results. Any hints would be appreciated!
First note a couple of translation symmetries: $$ f(x-1/2) = \frac{3}{5-4\cos{4\pi (x-1/2)}} = \frac{3}{5-4\cos{4\pi x}} = f(x) \\ g(x+1/2) = \frac{2\cos{2\pi (x+1/2)}}{5-4\cos{2\pi (x+1/2)}} = \frac{-2\cos{2\pi x}}{5-4\cos{4\pi x}} = -g(x) $$ Then we split the integral into two parts and translate the second one: $$\begin{align} (f * g)(x) &= \int_{0}^{1} f(x-y) \, g(y) \, dy \\ &= \int_{0}^{1/2} f(x-y) \, g(y) \, dy + \int_{1/2}^{1} f(x-y) \, g(y) \, dy \\ &= \{ z := y - 1/2 \} \\ &= \int_{0}^{1/2} f(x-y) \, g(y) \, dy + \int_{0}^{1/2} f(x-z-1/2) \, g(z+1/2) \, dz \\ &= \int_{0}^{1/2} f(x-y) \, g(y) \, dy + \int_{0}^{1/2} f(x-z) \, (-g(z)) \, dz \\ &= 0. \end{align}$$
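A numerical sanity check of the cancellation, using midpoint-rule quadrature (helper names are mine; both integrand factors are smooth and $1$-periodic, so this rule converges very fast):

```python
import math

def f(x):
    return 3 / (5 - 4 * math.cos(4 * math.pi * x))

def g(x):
    return 2 * math.cos(2 * math.pi * x) / (5 - 4 * math.cos(4 * math.pi * x))

def convolve(x, n=4000):
    """Midpoint-rule approximation of (f*g)(x) over [0, 1]."""
    h = 1.0 / n
    return h * sum(f(x - (k + 0.5) * h) * g((k + 0.5) * h) for k in range(n))

for x in (0.0, 0.1, 0.37, 0.5, 0.8):
    assert abs(convolve(x)) < 1e-9
```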
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are parallelograms defined as quadrilaterals? What term would encompass polygons with greater than two parallel pairs? It seems the definition of a parallelogram is locked to quadrilaterals for some reason. Is there a reason for this? Why couldn't a parallelogram (given the way the word seems rather than as a mathematical/geometric construct) contain greater than two pairs of parallel sides? In a hexagon for example, all six sides are parallel to their opposing side. Is there a term for this kind of object? It seems to me there must be some value in describing a polygon with even numbers of sides in which the opposing sides are parallel to each other. While a hexagon, octagon, decagon, etc. all match this rule, you could have polygons with unequal sides as well. Edit 1: The object described by Mark Fischler is called a zonogon.
I'm going to propose, out of the blue, terms like "hexaparallelogram", "octaparallelogram", and so forth. I'm wondering whether, for more than $4$ sides, you would like your definition of hexaparallelogram to be restricted to having 3 pairs of parallel and pairwise equal sides (as in your picture - evidently these have a name, zonogon), or would you include a hexagon with vertices at $\{(0,0), (12,0), (16,6), (4,12), (0,12), (-6,3)\}$ which has three pairs of parallel sides but no two sides of equal length? Euclid, in proposition 34, introduces the term (παραλληλόγραμμα χωρία) which we can translate to "parallelogrammic area." So much for the etymology sites that trace the word only to Middle French. Euclid himself restricted the word to just four-sided figures. Proclus credits Euclid with having introduced the term "parallelogram," as opposed to bringing down that term from earlier works. So that tells us who to blame.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Further factorisation of a difference of cubes? We know that a difference of cubes can be factored using $a^3-b^3=(a-b)(a^2+ab+b^2)$. How do we know that the quadratic can't be factored further? For example, $$ 27x^3-(x+3)^3=(3x-(x+3))(9x^2+3x(x+3)+(x+3)^2) \\ =(2x-3)(13x^2+15x+9) $$ In this case we can't factor the quadratic over $\mathbb R$. Is there any way to know, before factorising the cubic, that the resulting quadratic can't be factored further? Another way of asking: give an example similar to the one above where the resulting quadratic is reducible over $\mathbb R$.
Let $$\delta:=b^2-4ac$$ be the discriminant of the quadratic $ax^2+bx+c$. We can factor the quadratic over $\mathbb R$ if and only if $\delta\geq 0$. In the example above, we have $$a = 13,\quad b = 15,\quad c = 9$$ hence, $$\delta = 15^2-4\times 13\times 9=-243<0$$ so the quadratic isn't reducible over $\mathbb R$.
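A quick check of the discriminant test, together with an example of the kind the question asks for: taking $a=x+1$ and $b=2(x+1)$ makes $a$ and $b$ proportional, and then $a^3-b^3=-7(x+1)^3$ has quadratic factor $7x^2+14x+7=7(x+1)^2$ with discriminant $0$, reducible over $\mathbb R$ (this example is my own, not from the question):

```python
def discriminant(a, b, c):
    return b * b - 4 * a * c

# Quadratic factor from the worked example, 13x^2 + 15x + 9: irreducible over R.
assert discriminant(13, 15, 9) == -243

# With a = x+1 and b = 2(x+1), the quadratic factor 7x^2 + 14x + 7 has
# discriminant 0, so it factors as 7(x+1)^2 over R.
assert discriminant(7, 14, 7) == 0
```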
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Intuition behind proof of Schwarz's lemma There is the very well known proof of Schwarz's lemma in complex analysis. When I read it I feel like the answers described here. I'm not sure how I would motivate and explain why one should expect the proof to work when talking to someone new to complex analysis, despite the proof being relatively short. We are given that $f$ is a holomorphic map such that $f(0)=0$ and $|f(z)| \le 1$ on $D$. I never understood why there wasn't some way to take this and look at $$f(z) = a_1z+a_2z^2 +\cdots$$ where the $a_i$ are constrained because of $|f(z)|\le 1$ in such a way that one can conclude that $|f(z)|\le |z|$ and $|f'(0)|\le 1$. Why can't such an approach work?
Schwarz lemma is a profound result of non-euclidean geometry, as it says that conformal maps that preserve the unit disc decrease hyperbolic distance on the unit disc; there is a book by S. Dineen called precisely that (The Schwarz Lemma, Clarendon Press, 1989, reprinted in an inexpensive pb by Dover Press) that goes into its many facets and extensions. The results you want about Taylor coefficients of functions that satisfy it (or more generally bounded analytic functions on the disc) are known collectively as Schur's theorem, with extensions by Caratheodory, Pick, Nevanlinna; they are not very useful directly and are more important theoretically (as they are expressed in terms of positivity of quadratic forms of arbitrary number of variables and similar such inequalities, or in terms of shift/multiplication operators on Hardy spaces in more abstract terms). Their abstract characterization implies fairly easily the classical Schwarz lemma, though it feels kind of like going backward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Positive-definite over $\mathbb Q$ form is positive definite over $\mathbb R$? I was reading P. Etingof's "Introduction to the representation theory" when I found this problem (and I'm having trouble with it): we have a quadratic form $Q(x) = \sum x_i^2 - \frac{1}{2}\sum b_{ij}x_ix_j$, $b_{ij} \in \mathbb N$ (in case it is important, $b_{ij}$ is an adjacency matrix of some graph) and $Q$ is positive-definite over $\mathbb Q$. Now our goal is to prove that $Q$ is positive-definite over $\mathbb R$. It's obvious that it's at least positive-semidefinite (Etingof's hint is to use that $\mathbb Q$ is dense in $\mathbb R$, and that works here). But why positive-definite? My idea was to use the Sylvester criterion two times... But is it true for rational numbers?
Suppose this matrix is positive semidefinite over $\mathbb{R}$, but not positive definite. Then it has a zero eigenvalue. Now show that at least one eigenvector corresponding to this eigenvalue has rational coordinates, since the form itself has rational elements. This implies that this matrix is also not positive definite over $\mathbb{Q}$.
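The asker's Sylvester-criterion idea also works, since the leading principal minors are rational numbers and are the same whether the form is viewed over $\mathbb Q$ or over $\mathbb R$. A small exact-arithmetic sketch (the $A_3$ path-graph example and the helper function are my own illustration):

```python
from fractions import Fraction as F

def leading_minors_3x3(m):
    """The three leading principal minors of a 3x3 matrix,
    computed exactly in rational arithmetic."""
    d1 = m[0][0]
    d2 = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    d3 = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return d1, d2, d3

# Gram matrix of Q for the path graph on 3 vertices (b_12 = b_23 = 1):
# diagonal 1, off-diagonal -b_ij/2, all entries rational.
M = [[F(1),     F(-1, 2), F(0)],
     [F(-1, 2), F(1),     F(-1, 2)],
     [F(0),     F(-1, 2), F(1)]]

minors = leading_minors_3x3(M)
assert minors == (F(1), F(3, 4), F(1, 2))
# Sylvester: all minors positive, so Q is positive definite -- over Q and R alike.
assert all(d > 0 for d in minors)
```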
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
"A manifold with boundary has dimension at least 1" if it has a dimension and if it has nonempty boundary? My book is An Introduction to Manifolds by Loring W. Tu. As can be found in the following bullet points * *Can a topological manifold be non-connected and each component with different dimension? *Is $[0,1) \cup \{2\}$ a manifold with boundary? My issue is the $2$. *Understanding topological and manifold boundaries on the real line we have that * *Tu's manifolds with or without boundaries do not necessarily have (uniform) dimensions. *Tu has considered manifolds to be manifolds with boundaries (with empty boundaries). Question: For Definition 22.6 (see here and here), Tu says that "A manifold with boundary has dimension at least 1". Should this instead be "A manifold with boundary has dimension at least 1 if it has a dimension and if it has nonempty boundary" or "An $n-$manifold with boundary with non-empty boundary has $n \ge 1$" (Notice that the prefix "$n-$" precisely gives the manifold with boundary a dimension)? Embedding photos:
Assuming sensible definitions, an alternative solution is to change the statement to the following: A connected manifold with non-empty boundary has dimension at least 1 Edit: I rejected the suggested edit to change "manifold with non-empty boundary" to "manifold with boundary with non-empty boundary" because it does not add new information. A manifold with non-empty boundary must be a manifold with boundary, or your definitions are nonsense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3220878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How to solve this quadratic congruence equation by inspection I found a systematic way (cf. How to solve this quadratic congruence equation) to solve all congruence equations of the form $ax^2+bx+c\equiv 0\pmod{p}$, or to determine that they have no solution. But I wonder if there is some easy way to find solutions of simple quadratic congruence equations by inspection, for example, for this equation $$x^2+x+47\equiv 0\pmod{7}.$$ My textbook gave the solutions $x\equiv 5\pmod{7}$ and $x\equiv 1\pmod{7}$ directly, without any proof. So I think there must be some easier way to inspect the solutions. Any help will be appreciated.
Servaes gave a good general method for solving quadratic equations modulo a prime. I will elaborate on my comment about how in this particular case the answer could be found by inspection. Adding or subtracting a multiple of $p$ to or from any of the coefficients $a,b,c$ does not materially change the equation $ax^2+bx+c\equiv0\mod p.$ In fact, $x^2+x+47$ begs to be reduced modulo $7$ to $x^2+x+5.$ $x^2+x+5$ doesn't have rational roots, but you could add or subtract any multiple of $p$ to the $x$-coefficient $1$, yielding polynomials such as $x^2+8x+5, x^2-6x+5,$ etc., and $x^2-6x+5$ is easily recognized as $(x-1)(x-5)$.
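For a modulus as small as $7$, "inspection" can also be simulated by brute force over the residues, and the coefficient shifts above leave the solution set unchanged; a sketch (helper name is mine):

```python
def quad_cong_roots(a, b, c, p):
    """All x in {0, ..., p-1} with a*x^2 + b*x + c == 0 (mod p)."""
    return [x for x in range(p) if (a*x*x + b*x + c) % p == 0]

assert quad_cong_roots(1, 1, 47, 7) == [1, 5]
# Reducing 47 mod 7 first gives the same solutions:
assert quad_cong_roots(1, 1, 5, 7) == [1, 5]
# ...and shifting the x-coefficient by -7 gives x^2 - 6x + 5 = (x-1)(x-5):
assert quad_cong_roots(1, -6, 5, 7) == [1, 5]
```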
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why are |vertical lines| used to mark matrix determinants? This notation is sometimes used to denote the determinant: $$ \begin{vmatrix}a & b \\ c & d\end{vmatrix} = ad-bc$$ Why? Where did this notation come from? Was there any relationship between this notation and the absolute value $|x|$ or the norm $\lVert\mathbf{x}\rVert$?
There is a relationship between the vertical line notation for determinant and the notation $|x|$ for absolute value and $||\mathbf{x}||$ for norm, however I do not know whether this was an intentional decision historically. The absolute value, norm, and determinant, all have at least two things in common. * *They are functions mapping a given quantity (a real number, a vector, or a matrix) to a real number. *They measure the size of something. The absolute value and norm give the distance from the origin to the real number or vector. And the determinant is the factor by which the volume of the unit cube increases under the linear transformation represented by the matrix. One catch with the analogy is that unlike absolute value and norm, determinants can be negative. In this case however, they are still measuring the factor of volume change. A negative sign simply indicates a change in orientation.
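The "volume factor" interpretation can be illustrated concretely: apply a $2\times2$ matrix to the unit square and compare the image's area with $ad-bc$ (the shoelace-formula helper is my own choice for computing the area):

```python
def det2(a, b, c, d):
    return a * d - b * c

def shoelace_area(pts):
    """Signed area of a polygon via the shoelace formula."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i+1) % n][1] - pts[(i+1) % n][0] * pts[i][1]
            for i in range(n))
    return s / 2

# Image of the unit square under the linear map with matrix [[a, b], [c, d]]:
a, b, c, d = 3, 1, 1, 2
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
image = [(a*x + b*y, c*x + d*y) for (x, y) in square]

# The signed area scales by exactly det = ad - bc.
assert shoelace_area(image) == det2(a, b, c, d) == 5
```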
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why Does Adding the nth Derivative Increase a Function Approximation's Accuracy? I am currently taking calculus 3: sequences and series, and we've just started learning about Maclaurin and Taylor Series. I understand the concept behind them -- of these polynomials and derivatives of polynomials. However, I do not understand physically why, when we have a function approximation $g(x) \approx f(x)$, adding more and more derivatives of $f(x)$ increases the accuracy of $g(x)$ more and more. If someone could point me to a resource or explain it in simple terms it would be much appreciated. Thank you!
Consider a polynomial function $P$ and let $x_0$ be a real number. Then $P(x_0)$ is the value that $P(x)$ takes at $x_0$ and, since polynomial functions are continuous, when $x$ is close to $x_0$, then $P(x)$ is close to $P(x_0)$. Now, consider the polynomial $P_1(x)$ of degree $1$ such that $P_1(x_0)=P(x_0)$ and that $P_1'(x_0)=P'(x_0)$. It turns out that $P_1(x)=P(x_0)+P'(x_0)(x-x_0)$. On the other hand, if you write $P(x)$ as a polynomial in $x-x_0$ rather than in $x$, what you will get will be$$P(x)=P(x_0)+P'(x_0)(x-x_0)+\frac{P''(x_0)}{2!}(x-x_0)^2+\cdots+\frac{P^{(n)}(x_0)}{n!}(x-x_0)^n.$$So, $P_1(x)$ is the best first degree approximation of this function near $x_0$, in the sense that$$\lim_{x\to x_0}\frac{P(x)-P_1(x)}{(x-x_0)^2}\tag1$$exists; if in $(1)$, $P_1(x)$ is replaced by any other first degree polynomial, that limit will not exist. Now, if $P_2(x)=P(x_0)+P'(x_0)(x-x_0)+\frac{P''(x_0)}{2!}(x-x_0)^2$, then $P_2(x)$ is the only second degree polynomial such that the limit$$\lim_{x\to x_0}\frac{P(x)-P_2(x)}{(x-x_0)^3}$$exists, and so on. And since most functions behave locally like polynomials, approximating such a function $f$ by$$f(x_0)+f'(x_0)(x-x_0)+\frac{f''(x_0)}{2!}(x-x_0)^2+\cdots+\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$will be, in general, a good idea.
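To see this numerically, here is a small sketch in Python (the choice $f=\exp$, the expansion point $x_0=0$, and the evaluation point $x=1$ are illustrative assumptions, not from the question). Each additional derivative term strictly shrinks the approximation error:

```python
import math

def taylor_exp(x, n, x0=0.0):
    """Degree-n Taylor polynomial of exp about x0, evaluated at x."""
    # every derivative of exp at x0 is exp(x0)
    return sum(math.exp(x0) * (x - x0) ** k / math.factorial(k)
               for k in range(n + 1))

x = 1.0
errors = [abs(math.exp(x) - taylor_exp(x, n)) for n in range(8)]
# adding the next derivative term improves the approximation every time
assert all(errors[n + 1] < errors[n] for n in range(7))
```

By degree $7$ the error at $x=1$ is already below $10^{-4}$, even though the degree-$0$ approximation was off by more than $1$.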
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I test for convergence of $\Sigma_{n = 2}^{\infty} \frac{log(n)}{n \sqrt{n + 1}}$ I was trying to solve this problem. Test $\Sigma_{n = 2}^{\infty} \frac{log(n)}{n \sqrt{n + 1}}$ for convergence or divergence . But I couldn't quite make a lot of progress. Here's what I tried. * *I tried applying integral test $\int_{2}^{\infty} \frac{log(x)}{x \sqrt{x + 1}}dx$ and proving that this series is bounded but I couldn't solve the integral. *The next thing that I thought of was applying the direct comparison test where $\frac{log(n)}{n \sqrt{n + 1}} < \frac{log(n)}{n}$. I tried figuring out the convergence of $\Sigma_{n = 2}^{\infty}\frac{log(n)}{n}$ but this series diverges. *Next I again tried applying direct comparison test with $\frac{log(n)}{n \sqrt{n + 1}} < \frac{log(n)}{\sqrt{n+ 1}}$ and tried applying integral test for $\Sigma_{n = 2}^{\infty} \frac{log(n)}{\sqrt{n+ 1}}$ but while solving for this integral using integration by-parts technique I could see that this series is also divergent. Can anyone help me out with this problem? Preferably using only direct comparison test, limit comparison test, and integral test. Thanks!
$\sum_{n = 2}^{\infty} \frac{log(n)}{n \sqrt{n + 1}} $ The basic fact needed is that $\dfrac{\ln(n)}{n^a} \to 0$ as $n \to \infty$ for any $a > 0$. Setting $a= 1/4$, $\dfrac{\ln(n)}{n^{1/4}} \to 0$ so that $\dfrac{\ln(n)}{n^{1/4}} \lt 1$ for all large enough $n$. Therefore, for all large enough $n$, $\dfrac{\ln(n)}{n\sqrt{n+1}} \lt \dfrac{n^{1/4}}{n\sqrt{n+1}} \lt \dfrac1{n^{5/4}} $ and the sum of these converges.
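A numerical sanity check (a sketch; the cutoffs and grid are arbitrary choices of mine) supports both the termwise bound used above and the apparent convergence of the partial sums:

```python
import math

def term(n):
    return math.log(n) / (n * math.sqrt(n + 1))

# the comparison log(n) < n**(1/4) used above holds once n is large enough
assert all(math.log(n) < n ** 0.25 for n in range(10_000, 200_000, 5_000))

s1 = sum(term(n) for n in range(2, 100_000))
s2 = sum(term(n) for n in range(2, 200_000))
# doubling the cutoff barely moves the partial sum, consistent with convergence
assert 0 < s2 - s1 < 0.05
```

This is only evidence, of course; the comparison with $\sum n^{-5/4}$ is what actually proves convergence.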
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve $\sqrt{3}\sin(x)+\cos(x)-2=0$ I need to solve the equation $$\sqrt{3}\sin(x)+\cos(x)-2=0$$ My try: I separated the radical then I squared and I noted $\cos(x)=t$ and I got a quadratic equation with $t=\frac{1}{2}$ To solve $\cos(x)=\frac{1}{2}$ I used formula $x\in\left \{ +-\arccos(\frac{1}{2})+2k\pi \right \}k\in\mathbb{Z}$ and I got $x\in\left \{ -+ \frac{\pi}{3}+2k\pi \right \}$ but the right answer is $x=(-1)^{k}\cdot\frac{\pi}{2}+k\pi-\frac{\pi}{6}$ Where's my mistake?How to get the right answer?
Your mistake was forgetting that when you square a non-$0$ equality, the result satisfies two possible equations. In particular, both $x = y$ and $x = -y$ give $x^2 = y^2$. From what you describe, you did the following: $$\sqrt{3}\sin(x) = 2 - \cos(x) \tag{1}\label{eq1}$$ then squared both sides to get $$3\sin^2(x) = 4 - 4\cos(x) + \cos^2(x) \tag{2}\label{eq2}$$ Next, you made certain manipulations to get $3 - 3\cos^2(x) = 4 - 4\cos(x) + \cos^2(x) \; \Rightarrow \; 4\cos^2(x) - 4\cos(x) + 1 = 0$ Using $\cos(x) = t$ then gives $$4t^2 - 4t + 1 = 0 \; \Rightarrow \; (2t - 1)^2 = 0 \; \Rightarrow \; t = \frac{1}{2} \tag{3}\label{eq3}$$ The full set of solutions for $\cos(x) = \frac{1}{2}$ is $x = \pm\frac{\pi}{3} + 2k\pi, k \in \mathbb{Z}$. However, note that for all of these candidates the RHS of \eqref{eq1} equals $2 - \frac{1}{2} = \frac{3}{2}$. For $x = \frac{\pi}{3} + 2k\pi$ the LHS is also $\frac{3}{2}$, but for $x = -\frac{\pi}{3} + 2k\pi$ the LHS is $-\frac{3}{2}$ instead. Squaring either case still gives \eqref{eq2}. Whenever you use a non-reversible operation, like squaring, it's important to check for and remove any extraneous results your manipulations may have produced. In any case, checking by substituting your results into the original equation is almost always a good idea, in case you made some mistake.
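A quick numerical check (a sketch) confirms which candidate survives: only $x = \pi/3$ satisfies the original equation, while $x = -\pi/3$ satisfies only the squared one.

```python
import math

def original(x):
    return math.sqrt(3) * math.sin(x) + math.cos(x) - 2

def squared(x):
    # 3 sin^2(x) - (2 - cos x)^2, i.e. what squaring equation (1) gives
    return 3 * math.sin(x) ** 2 - (2 - math.cos(x)) ** 2

assert abs(original(math.pi / 3)) < 1e-12    # genuine root
assert abs(original(-math.pi / 3)) > 1       # extraneous for the original...
assert abs(squared(-math.pi / 3)) < 1e-12    # ...but a root of the squared equation
```

One can also check that the book's answer $x=(-1)^k\frac{\pi}{2}+k\pi-\frac{\pi}{6}$ lands on $\frac{\pi}{3}+2k\pi$ for every integer $k$, so the two descriptions of the solution set agree.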
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Averaging i.i.d. variables: Equal chance to be right and left of the mean? Let $\{X_i\}_{i=1}^{\infty}$ be i.i.d. random variables. Define $$ L_n = \frac{1}{n}\sum_{i=1}^n X_i \quad \forall n \in \{1, 2, 3, …\} $$ Using the central limit theorem, it can be shown that if $E[X_i]=0$ and $0<Var(X_i)<\infty$ then: $$ \lim_{n\rightarrow\infty} P[L_n\leq x] = \left\{ \begin{array}{ll} 1 &\mbox{ if $x > 0$} \\ c & \mbox{ if $x=0$}\\ 0 & \mbox{ if $x<0$} \end{array} \right.$$ where $c=1/2$. If the variance is infinite then the law of large numbers implies a similar structure for the cases $x>0$ and $x<0$, but the case $x=0$ is unclear to me. Questions: For infinite variance, can we get different behavior for the case $x=0$, such as $c=1/3$? Can we get related step-function structure when the mean does not exist, but with different behavior for the case $x=0$? Notes: * *We can get such a limiting function with $c=1/3$ for random sequences with different structure, such as $L_n= A/n$ with $P[A=1]=2/3, P[A=-1]=1/3$. *I came up with this question while reflecting on the question here: Why does a C.D.F need to be right-continuous?
Yes it is possible for $c$ to take any value strictly between $0$ and $1$. The point is that there exist mean-zero stable distributions which are not symmetric about $0$ (of course, such a stable distribution cannot be Gaussian, and so it must have infinite variance). You may look at the Wikipedia page to see how some of these stable distributions look. Specifically, if $\alpha \in (1,2)$ and $\beta \in [-1,1]$, then it turns out that there exists a random variable $X$ whose characteristic function will look like $$\phi_X(t) = e^{-|t|^{\alpha}\big(1-i\beta \tan(\frac{\pi\alpha}{2})\text{sign}(t)\big)}.$$ As it turns out, this distribution will have mean zero, and moreover (by varying $\alpha$ and $\beta$), $P(X<0)$ can be any predefined number $c\in(0,1)$. Furthermore, for iid copies one may check directly from the characteristic function that $n^{-1/\alpha}(X_1+...+X_n)$ has the same distribution as $X_1$. From this we can easily conclude that $$P(L_n<0) =P(n^{1-1/\alpha}L_n<0)= P(X<0)=c \in (0,1),$$ for all $n$, as desired. I do not know if $c=0$ or $c=1$ is a possible limit for nonzero random variables $X_i$, though it'd be interesting to find out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
$3$ mice in the corners of an equilateral triangle crawling at each other ODE problem At each corner of an equilateral triangle, a mouse is positioned. At time $t = 0$ the mice begin crawling towards each other. Mouse $1$ always crawls directly towards mouse $2$, mouse $2$ towards mouse $3$ and mouse $3$ towards mouse $1$. The speed of each mouse is equal to the vector of its position to the position of the mouse they are crawling towards. What does this last sentence mean? Determine the trajectories of the mouse by creating a system of differential equations and solving it. What happes for $t \to +\infty$? Note: It is advantageous to describe the position of the $k$-th mouse by a complex number $z_k$, $k = 1,2,3$. A skilful choice of the origin of the coordinate system is also advantageous My try: $z_1(t_0)=i,\ z_2(t_0)=\frac{\sqrt 3}{2}-\frac{1}{2}i,\ z_3(t_0)=\frac{-\sqrt 3}{2}-\frac{1}{2}i$ $z_2(t)=z_1(t)e^{\frac{4\pi}{3}i}=z_1(t)(-\frac{1}{2}-\frac{-\sqrt 3}{2}i)$ (this is a rotation by $240°$) I'm not sure sure if I can set the ODE like this: $z_1'(t)=z_2(t)-z_1(t)=z_1(t)(-\frac{3}{2}-\frac{-\sqrt 3}{2}i)$ Therefore $z_1(t)=e^{(-\frac{3}{2}-\frac{3}{2}i)t}c$ where $c=z_1(0)=i$. And $\lim_{t\to\infty}z_1(t)=\lim_{t\to \infty}e^{-\frac{3}{2}t}(...)=0$
Let $\omega:=e^{2\pi i/3}$. The three mice always form an equilateral triangle. Therefore we may write $$z_k(t)=\omega^k \>z(t),\qquad z(0)=1\ .$$ According to the last sentence we have $$\dot z_k(t)=z_{k+1}(t)-z_k(t)=(\omega-1)z_k(t)\ ,$$ so that we obtain the IVP $$\dot z(t)=(\omega-1)\>z(t),\qquad z(0)=1$$ with solution $$z(t)=e^{(\omega-1)t}=e^{-3t/2}\>e^{i\sqrt{3}\>t/2}\ .$$ The limit when $t\to\infty$ is then obvious.
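A numerical integration of the IVP (a sketch; the RK4 scheme, step size, and time horizon are arbitrary choices of mine) reproduces the closed-form solution and the $e^{-3t/2}$ decay of the modulus:

```python
import cmath
import math

omega = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

def rhs(z):
    return (omega - 1) * z

# classical RK4 for dz/dt = (omega - 1) z, z(0) = 1, up to t = 2
z, t, h = 1 + 0j, 0.0, 1e-3
for _ in range(2000):
    k1 = rhs(z)
    k2 = rhs(z + 0.5 * h * k1)
    k3 = rhs(z + 0.5 * h * k2)
    k4 = rhs(z + h * k3)
    z += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

exact = cmath.exp((omega - 1) * t)     # closed-form solution from the answer
assert abs(z - exact) < 1e-9
assert abs(abs(z) - math.exp(-1.5 * t)) < 1e-9   # modulus decays like e^{-3t/2}
```

The mice spiral into the origin: the modulus shrinks exponentially while the argument rotates at rate $\sqrt3/2$.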
{ "language": "en", "url": "https://math.stackexchange.com/questions/3221828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Flea on infinite chessboard jumping with irrational vector eventually changes square color Question from Engel's Problem Solving Strategies: An infinite chessboard consists of $1 \times 1$ squares. A flea starts on a white square and makes jumps by $\alpha$ to the right and $\beta$ upwards, where $\alpha$ and $\beta$ are real numbers such that the ratio $\alpha / \beta$ is irrational (So the flea jumps diagonally). Prove that sooner or later it will reach a black square. WLOG suppose that the flea starts at $(0,0)$. So the flea steps on the coordinates $(k\alpha, k\beta)$, for all $k \in \mathbb{N}$. I need to show that eventually $\lfloor{k\alpha}\rfloor + \lfloor{k \beta}\rfloor$ is an even number. So I must show that $\lfloor{k\alpha}\rfloor$ and $\lfloor{k \beta}\rfloor$ must eventually have the same parity. Intuitively I know there must exist some $k$ such that $\lfloor k\alpha \rfloor$ and $\lfloor (k+1) \alpha \rfloor$ have same parity, while $\lfloor k\beta \rfloor$ and $\lfloor (k+1)\beta \rfloor$ have different parity. How do I do this?
I'm not quite sure about the coordinate system you're using. If your $(0,0)$ is meant to be the bottom left corner of the first white square, then since it's on the corner, it's also on the boundary, so it could in theory be considered to be any of the $4$ squares at that corner. To avoid any such issues, I will define the coordinates & starting point I'll be using this way. The $S_{a,\ b}$ square, for $a, b \in \mathbb{Z}$, is defined by $a \lt x \lt a + 1$ and $b \lt y \lt b + 1$. I assume the $S_{0,\ 0}$ square is white. Then the white squares are $S_{a,\ b}$ where $a$ & $b$ have the same parity and the black squares are $S_{a,\ b}$ where $a$ & $b$ have opposite parities. Assume the flea starts in the middle of the $S_{0,\ 0}$ white square, i.e., at coordinates $(0.5,0.5)$. Let $$\alpha = 2 - r \tag{1}\label{eq1}$$ $$\beta = r \tag{2}\label{eq2}$$ where $0 \lt r \lt 1$ is an irrational number. Then $\alpha$, $\beta$ and $\frac{\alpha}{\beta} = \frac{2}{r} - 1$ are all irrational. After $k$ jumps, the flea will be at $(0.5 + k\alpha,0.5 + k\beta)$. Since both $\alpha$ and $\beta$ are irrationals, neither $0.5 + k\alpha$ or $0.5 + k\beta$ can be integers, so there's no concern about landing on a border. To land on a black square requires that $\lfloor 0.5 + k\alpha \rfloor + \lfloor 0.5 + k\beta \rfloor$ be an odd integer since the $2$ integer terms must have opposite parities. Let $$kr = 0.5 + n + s \tag{3}\label{eq3}$$ where $n \in \mathbb{Z}$ and $0 \lt s \lt 1$. 
Using \eqref{eq1}, \eqref{eq2} and \eqref{eq3} gives \begin{align} \lfloor 0.5 + k\alpha \rfloor & = \lfloor 0.5 + 2k - kr \rfloor \\ & = \lfloor 2k - n - s \rfloor \\ & = 2k - n - 1 \tag{4}\label{eq4} \end{align} \begin{align} \lfloor 0.5 + k\beta \rfloor & = \lfloor 1 + n + s \rfloor \\ & = n + 1 \tag{5}\label{eq5} \end{align} Using \eqref{eq4} and \eqref{eq5} gives $$\lfloor 0.5 + k\alpha \rfloor + \lfloor 0.5 + k\beta \rfloor = 2k \tag{6}\label{eq6}$$ As such, this is always an even integer and, thus, it'll never be situated on a black square. Note you can also use your original $(0,0)$, if you define that corner to be for the square to its right & up, by dropping the $0.5$ part from \eqref{eq3} to get the same result. It seems that either the OP or myself has made a mistake or has a misunderstanding here, the Engel's Problem Solving Strategies question is not correct, or there's some other unstated condition.
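Identity $(6)$ can be verified numerically for a concrete irrational $r$ (a sketch; $r=\sqrt2-1$ is just one valid choice, and floating point is accurate enough here because $k\beta$ never comes close to a half-integer at this scale):

```python
import math

r = math.sqrt(2) - 1        # an irrational number with 0 < r < 1
alpha, beta = 2 - r, r      # then alpha/beta = 2/r - 1 is irrational

for k in range(1, 5001):
    total = math.floor(0.5 + k * alpha) + math.floor(0.5 + k * beta)
    assert total == 2 * k   # always even: the flea never reaches a black square
```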
{ "language": "en", "url": "https://math.stackexchange.com/questions/3222010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$1/(51 +u^2) + 1/(51 +v^2) + 1/(51 +w^2)$ inequality I need to prove that $$1/(51 +u^2) + 1/(51 +v^2) + 1/(51 +w^2) \leq 1/20$$ given $u + v + w = 9$ and $u,v,w$ positive reals. Using AM-HM inequality with $(51 +u^2, 51 +v^2, 51 +w^2)$ I arrive at $(51 +u^2 + 51 +v^2 + 51 +w^2)/3 \geq 3/((1/(51 +u^2) +1/(51 +v^2) 1/(51 +w^2))$, $$or,$$ $1/(51 +u^2) +1/(51 +v^2)+1/(51 +w^2) \leq 9/(51 +u^2 + 51 +v^2 + 51 +w^2)$ In order to take the next step, I need to estimate the RHS with an upper bound. Here is where I face the problem. With $(u+v+w)^2 \leq 3(u^2 + v^2 + w^2)$ from C-S inequality and $u+v+w =9$ we get $27 \leq (u^2 + v^2 + w^2)$ which does not provide an upper bound. I tried several other inequalities, but each lead to a bound on the "wrong" side. Am I at all looking in the right direction?
You're doing well. Note that you have $u^2+v^2+w^2$ in the denominator, and $1/x$ is a decreasing function. That means that $$ \left( u^2+v^2+w^2 \ge \frac{(u+v+w)^2}{3} \right) \Rightarrow \left( \frac{9}{153+u^2+v^2+w^2} \le \frac{9}{153+\frac{(u+v+w)^2}{3}} \right)$$ EDIT: The answer is not correct, because the inequality $$\frac{1}{51+u^2} +\frac{1}{51+v^2}+ \frac{1}{51+w^2} \le \frac{9}{153+u^2+v^2+w^2} $$ is not correct. The correct one has $\ge$ sign, which doesn't help.
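Even though this particular bound goes the wrong way, the inequality in the question itself can be sanity-checked numerically (a sketch; the grid step of $0.1$ is an arbitrary choice), with equality at the symmetric point $u=v=w=3$:

```python
def f(u, v, w):
    return 1 / (51 + u * u) + 1 / (51 + v * v) + 1 / (51 + w * w)

# equality at u = v = w = 3: three terms of 1/60 sum to 1/20
assert abs(f(3, 3, 3) - 1 / 20) < 1e-15

# grid search over positive u, v with w = 9 - u - v > 0
best = max(
    f(u / 10, v / 10, 9 - u / 10 - v / 10)
    for u in range(1, 89)
    for v in range(1, 90 - u)
)
assert best <= 1 / 20 + 1e-12
```

This suggests the maximum is attained at the center, which is consistent with the claim to be proved.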
{ "language": "en", "url": "https://math.stackexchange.com/questions/3222160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Linearly independent vectors each subtracted by a linear combination of them are linearly dependent if coefficients add up to $1$ The task I've been given the following problem: Let $v_1, \ldots, v_n$ linearly independent vectors in an $\mathbb{F}$-vector space $V$ and $u = \lambda_1 v_1 + \ldots + \lambda_n v_n$ a linear combination of those vectors. Prove that $$v_1 - u, \ldots, v_n - u \text{ are linearly dependent} \Leftrightarrow \lambda_1 + \ldots + \lambda_n = 1 $$ Unfortunately, I have some trouble with this question. What steps would need to be taken in order to prove this? One hint might be the fact that if a set of vectors is linearly independent, it means any linear combination of them has only one clear set of coefficients (and the other way around). EDIT: After angryavian's answer, I've understood how to prove the first direction. Unfortunately, the other direction still leaves me quite puzzled. A hint would be greatly appreciated.
If you consider the matrix having as columns the coordinates of the new vectors with respect to the given linearly independent set (everything takes place in the subspace of which $\{v_1,\dots,v_n\}$ is a base): \begin{bmatrix} 1-\lambda_1 & -\lambda_1 & \dots & -\lambda_1 \\ -\lambda_2 & 1-\lambda_2 & \dots & -\lambda_2 \\ \vdots & \vdots & \ddots & \vdots \\ -\lambda_n & -\lambda_n & \dots & 1-\lambda_n \end{bmatrix} you see that this matrix is $$ I- \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n \end{bmatrix} \begin{bmatrix} 1 & 1 & \dots & 1 \end{bmatrix} $$ which is not invertible (meaning that $\{v_1-u,\dots,v_n-u\}$ is linearly dependent) if and only if $1$ is an eigenvalue of $$ A= \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n \end{bmatrix} \begin{bmatrix} 1 & 1 & \dots & 1 \end{bmatrix} $$ This is a matrix with rank at most one, so it has at most one nonzero eigenvalue, namely $$ \begin{bmatrix} 1 & 1 & \dots & 1 \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n \end{bmatrix} =\lambda_1+\lambda_2+\dots+\lambda_n $$
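This determinant identity, $\det(I-\lambda\mathbf 1^{\top}) = 1-\sum_j\lambda_j$ (a special case of the matrix determinant lemma), can be spot-checked for $n=3$ in pure Python (a sketch; the sample values of $\lambda_i$ are arbitrary):

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

lam = [0.3, -1.2, 0.5]
# entry (i, j) of I - lam * ones^T is delta_ij - lam_i
m = [[(1 if i == j else 0) - lam[i] for j in range(3)] for i in range(3)]
assert abs(det3(m) - (1 - sum(lam))) < 1e-12
# the vectors v_i - u are dependent exactly when this determinant vanishes,
# i.e. when lam[0] + lam[1] + lam[2] == 1
```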
{ "language": "en", "url": "https://math.stackexchange.com/questions/3222596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sobolev imbedding theorem $H^{1,p}(\mathbb{R}^n)$ contained in $L^{{np}/(n-p)}(\mathbb{R}^n)$ (Taylor Michael) i. I can not make sense of the following: For (2.4) I imagine that it is the fundamental theorem of the calculation but I can not prove it formally. Neither will it be understood how to arrive at equation 2.6. For this I had thought the following $(|u|_{L^{n/(n-1)}})^{(n-1)/n}=\int_{\mathbb{R}^n}|u(x)|^{n/(n-1)}dx\leq\int_{\mathbb{R}^n} |u_1\cdots u_n|dx$ with $u_j=(\int_{-\infty}^{\infty} |D_j u(x)|dx_{j})^{1/(n-1)}$ but I can not get inequality 2.6
Observe by the fundamental theorem of calculus, you have \begin{align} u(x, \mathbf{x}_{n-1})=u(x, \mathbf{x}_{n-1})-u(x_0, \mathbf{x}_{n-1}) = \int^x_{x_0} D_1u(x_1, \mathbf{x}_{n-1})\ dx_1 \end{align} if $(x_0, \mathbf{x}_{n-1})\notin \operatorname{supp} u$ which means \begin{align} |u(\mathbf{x})| \leq \int^\infty_{-\infty}|D_1u|\ dx_1. \end{align} Likewise, we have \begin{align} |u(\mathbf{x})| \leq \int^\infty_{-\infty}|D_ju|\ dx_j. \end{align} which means \begin{align} |u(\mathbf{x})|^n \leq \prod^n_{j=1}\int^\infty_{-\infty}|D_ju|\ dx_j \ \ \implies \ \ |u(\mathbf{x})|^{\frac{n}{n-1}} \leq \prod^n_{j=1}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}}. \end{align} Finally, we see that \begin{align} \int_{\mathbb{R}^n} |u(\mathbf{x})|^{\frac{n}{n-1}}\ d\mathbf{x} \leq \int_{\mathbb{R}^n}\prod^n_{j=1}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}}\ d\mathbf{x}. \end{align} Next, fix $(x_2, \ldots, x_n)$ and note that \begin{align} \int^\infty_{-\infty} dx_1\ \prod^n_{j=1}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}} =&\ \left\{\int^\infty_{-\infty} |D_1u(x_1, \mathbf{x}_{n-1})|\ dx_1 \right\}^{\frac{1}{n-1}}\int^\infty_{-\infty} dx_1\prod^n_{j=2}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}}\\ \leq&\ \left\{\int^\infty_{-\infty} |D_1u(x_1, \mathbf{x}_{n-1})|\ dx_1 \right\}^{\frac{1}{n-1}}\prod^n_{j=2}\left\{\int^\infty_{-\infty} \int^\infty_{-\infty}|D_ju|\ dx_1dx_j\right\}^{\frac{1}{n-1}}. 
\end{align} Next, fix $(x_3, \ldots, x_n)$, we see that \begin{align} \int^\infty_{-\infty}\int^\infty_{-\infty} \prod^n_{j=1}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}}dx_2dx_1\leq&\ \int^\infty_{-\infty} \ \left\{\int^\infty_{-\infty} |D_1u(x_1, \mathbf{x}_{n-1})|\ dx_1 \right\}^{\frac{1}{n-1}}\prod^n_{j=2}\left\{\int^\infty_{-\infty} \int^\infty_{-\infty}|D_ju|\ dx_1dx_j\right\}^{\frac{1}{n-1}}dx_2\\ =&\ \left\{\int^\infty_{-\infty}\int^\infty_{-\infty}|D_2u|\ dx_1dx_2 \right\}^{\frac{1}{n-1}}\int^\infty_{-\infty}\left\{\int^\infty_{-\infty} |D_1u(x_1,x_2, \mathbf{x}_{n-2})|\ dx_1 \right\}^{\frac{1}{n-1}}\prod^n_{j=3}\left\{\int^\infty_{-\infty} \int^\infty_{-\infty}|D_ju|\ dx_1dx_j\right\}^{\frac{1}{n-1}}dx_2\\ \leq&\ \left\{\int^\infty_{-\infty}\int^\infty_{-\infty}|D_2u|\ dx_1dx_2 \right\}^{\frac{1}{n-1}}\left\{\int^\infty_{-\infty}\int^\infty_{-\infty}|D_1u|\ dx_1dx_2 \right\}^{\frac{1}{n-1}}\prod^n_{j=3}\left\{\int^\infty_{-\infty}\int^\infty_{-\infty} \int^\infty_{-\infty}|D_ju|\ dx_1dx_2dx_j\right\}^{\frac{1}{n-1}}. \end{align} Continuing with this procedure yields \begin{align} \int_{\mathbb{R}^n} \prod^n_{j=1}\left\{\int^\infty_{-\infty}|D_ju|\ dx_j\right\}^{\frac{1}{n-1}}d\mathbf{x}\leq \prod^n_{j=1} \left\{\int_{\mathbb{R}^n}|D_ju(\mathbf{x})|\ d\mathbf{x}\right\}^{\frac{1}{n-1}} \end{align} which is the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3222774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove a property of Legendre symbol In case someone does not know the definition, I first write down the definition. Def Let $a$ be s.t. $(a,m)=1$. Then we say $a$ is a quadratic residue modulo m if the congruence $x^2\equiv a$ (mod $m$) has a solution. If it has no solution, then we say $a$ is a quadratic nonresidue modulo m. Def Let $p$ be an odd prime. We define the Legendre symbol $(\frac{a}{p})$ to be $1$ if $a$ is a quadratic residue, $-1$ if $a$ is a quadratic nonresidue modulo $p$, and $0$ if $p|a$. I have proven that $(\frac{a}{p})\equiv a^{(p-1)/2}$ (mod $p$). Now I want to show $(\frac{a}{p})(\frac{b}{p})=(\frac{ab}{p})$, but I can only show $(\frac{a}{p})(\frac{b}{p})\equiv(\frac{ab}{p})$ (mod $p$). My textbook says that $(\frac{a}{p})(\frac{b}{p})=(\frac{ab}{p})$ is just a direct consequence of that $(\frac{a}{p})\equiv a^{(p-1)/2}$ (mod $p$). But with the proven equality, we only have that, $(\frac{a}{p})(\frac{b}{p})\equiv a^{(p-1)/2}b^{(p-1)/2}=(ab)^{(p-1)/2}\equiv (\frac{ab}{p})$ (mod $p$), i.e. $(\frac{a}{p})(\frac{b}{p})\equiv(\frac{ab}{p})$ (mod $p$), rather than $(\frac{a}{p})(\frac{b}{p})=(\frac{ab}{p})$. And I have a further problem. My textbook says that, the question to determine whether $x^2\equiv a$ (mod $p$) does or does not have a solution can be narrowed to the case $x^2\equiv q$ (mod $p$), where $a$ is an arbitrary integer and $q$ is a prime. My second problem is that, how to "narrow" the original question to the last one? I've been thinking of these problems for a half day. Any help will be greatly appreciated. :)
Observe that both $\left(\frac{a}{p}\right)\left(\frac{b}{p}\right)$ and $\left(\frac{ab}{p}\right)$ are equal to $0,1$ or $-1$. Therefore, if they are congruent modulo a prime $p>2$, they are necessarily equal: their difference lies in $\{-2,-1,0,1,2\}$ and is divisible by $p>2$, hence is $0$. As for the second question, if $a$ is positive you can write it as a product of primes $a=q_1\dots q_k$, and using the formula just shown we get $$\left(\frac{a}{p}\right)=\left(\frac{q_1}{p}\right)\dots\left(\frac{q_k}{p}\right).$$ Hence, if we can compute the Legendre symbol with a prime at the top, we can compute it for any positive number at the top. To get any $a$, we also need to know the value of $\left(\frac{-1}{p}\right)$.
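Both the multiplicativity and Euler's criterion it rests on are easy to verify computationally (a sketch; $p=23$ is an arbitrary odd prime):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    t = pow(a, (p - 1) // 2, p)   # t is 1 or p - 1 when gcd(a, p) = 1
    return 1 if t == 1 else -1

p = 23
for a in range(1, p):
    for b in range(1, p):
        # complete multiplicativity: (a/p)(b/p) = (ab/p)
        assert legendre(a, p) * legendre(b, p) == legendre(a * b, p)
```

The three-argument `pow` does the modular exponentiation efficiently, so this check scales to much larger primes.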
{ "language": "en", "url": "https://math.stackexchange.com/questions/3222908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove the following inequality involving a sum. Suppose that $m,n, q\in \mathbb{N}$ such that $$\lambda_{n,m}=\frac{m+1/2}{(m+1/2)^2−n^2} \text{ and } \sigma_{q,m} = \sum_{k=0}^q \lambda_{k,m}.$$ Furthemore we also know that, $$\frac{\lambda_{0,m}}{2}+\sum_{n=1}^{\infty}\lambda_{n,m}=0.$$ Then I want to show that when $q> m$ then $$2 \sigma_{q,m} \geq \lambda_{0,m} = \frac{1}{m+1/2}.$$ From the infinite sum, we get that $\sigma_{q,m}\to \frac{\lambda_{0,m}}{2}$ as $q\to \infty.$ Next, we observe that since $q>m$ we have that $$\sigma_{q+1,m}-\sigma_{q,m} = \lambda_{q+1,m} = \frac{m+1/2}{(m+1/2)^2−q^2}<0$$ and so $\sigma_{q,m}$ is a decreasing sequence. How do I proceed after this step?
For $q>m$ the sequence $\sigma_{q,m}$ is strictly decreasing, so $\sigma_{q,m} > \sigma_{q+1,m} > \sigma_{q+2,m} > \cdots$. Since $\sigma_{q,m}\to \frac{\lambda_{0,m}}{2}$ as $q\to\infty$, every term of this strictly decreasing tail lies above its limit, so $\sigma_{q,m} > \frac{\lambda_{0,m}}{2}$, i.e. $2\sigma_{q,m} > \lambda_{0,m} = \frac{1}{m+1/2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Different methods give different answers. Let A,B,C be three angles such that $ A=\frac{\pi}{4} $ and $ \tan B \tan C=p $. Let $A,B,C$ be three angles such that $ A=\frac{\pi}{4} $ and $ \tan B \tan C=p $. Find all possible values of $p$ such that $A,B$ and $C$ are angles of a triangle. case 1- discriminant We can rewrite the following equation $ f(x) = x^2 - (p-1)x + p $ As we know the sum and product of $ \tan C $ and $ \tan B $ Settings discriminant greater than equal to zero. $ { (p-1)}^2 - 4p \ge 0 $ This gives $ p \le 3 - 2\sqrt2 $. Or $ p \ge 3 + 2\sqrt2 $ solving both equation $ A + B + C = \pi $ $ C + B + \frac{\pi}{4} = \pi $ $ C + B = \frac{3\pi}{4} $ Using this to solve both the equation give $ p \in $ real I found this on Quora. https://www.quora.com/Let-A-B-C-be-three-angles-such-that-A-frac-pi-4-and-tan-B-tan-C-p-What-are-all-the-possible-value-of-p-such-that-A-B-C-are-the-angles-of-the-triangle the right method $ 0 \lt B , C \lt \frac{3\pi}{4} $ Converting tan into sin and cos gives $ \dfrac {\sin B \sin C}{\cos B \cos C} = p $ Now using componendo and dividendo $ \frac{\cos (B-C) }{- \cos(B+C) } = \frac{p+1}{p-1} $ We know $ \cos (B+C) = 1/\sqrt2 $ We know the range of $B$ and $C$ $(0, 3π/4)$ Thus the range of $B - C$. $(0, 3π/4 )$ Thus range of $\cos(B+C)$ is $ \frac{ -1}{\sqrt2} $ to $1$ Thus using this to find range gives $ P \lt 0 $ or $ p \ge 3+ 2\sqrt2 $
The reason why the 1st method is wrong: you wrote "We can rewrite the following equation $ f(x) = x^2 - (p-1)x + p $ As we know the sum and product of $ \tan A $ and $ \tan B $ Settings discriminant greater than equal to zero." It seems that you meant "the sum and product of $\color{red}{\tan C}$ and $\tan B$". Considering the condition that the discriminant is greater than or equal to zero is not enough, because we also have to have $0\lt B\lt \frac 34\pi$. This means that we have to consider the condition that $f(x)=0$ has at least one solution such that $x\lt -1$ or $x\gt 0$ (since for $0\lt B\lt\frac 34\pi$, $\tan B$ ranges over $(0,\infty)\cup(-\infty,-1)$). Therefore, the 1st method is wrong. The reason why the 2nd method is wrong: the solution in Quora goes from "$\tan B + \tan C = \tan B\tan C - 1$" to "$\tan B+\tan(3\pi/4 - B) = p$". This step is wrong. It should be $$\tan B+\tan\left(\frac 34\pi-B\right)=\color{red}{p-1}$$ Then, we get $$\tan^2B-(p-1)\tan B+p=0$$ But, again, just as in the 1st method, considering the condition that the discriminant is greater than or equal to zero is not enough because we also have to have $0\lt B\lt\frac 34\pi$. Therefore, the 2nd method is wrong. To make the two methods correct, we have to consider the condition that $$x^2-(p-1)x+p=0$$ has at least one solution such that $x\lt -1$ or $x\gt 0$. Solving this gives $$p\in (-\infty,0)\cup [3+2\sqrt 2,\infty)$$ which is the same as the answer in the third method.
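A numerical scan over admissible $B$ (a sketch; the grid size and tolerances are arbitrary choices of mine) is consistent with the answer $p\in(-\infty,0)\cup[3+2\sqrt2,\infty)$:

```python
import math

lo_bound = 3 + 2 * math.sqrt(2)
N = 100_000
for i in range(1, N):
    B = (3 * math.pi / 4) * i / N          # 0 < B < 3pi/4
    C = 3 * math.pi / 4 - B
    if abs(B - math.pi / 2) < 1e-6:        # skip the pole of tan B
        continue
    p = math.tan(B) * math.tan(C)
    assert p < 0 or p >= lo_bound - 1e-9
```

The boundary value $3+2\sqrt2$ is attained at $B=C=\frac{3\pi}{8}$, where $p=\tan^2\frac{3\pi}{8}=(1+\sqrt2)^2$.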
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 4 }
Area of Generalized Koch Snowflake In the Koch snowflake, the zeroth iteration is an equilateral triangle, and the n-th iteration is made by adding an equilateral triangle directly in the middle of each side of the previous iteration. The area of the Koch snowflake is $8/5$ the area of the starting triangle. If I wanted to generalize this to other regular polygons, such as squares, pentagons, etc, the area, counting overlap, is $\frac{8}{8-n}$ times the area of the starting polygon, where $n$ is the number of sides, and $n < 8$. The area does not converge for $n \ge 8$. The problem with this area is that it counts overlap multiple times (in the case of $n > 4$; $n = 3$ and $n = 4$ have no overlap). If a section of the "generalized snowflake" is covered multiple times, it counts all of those, not just one. How can I find the area of the generalized snowflake, counting areas covered multiples times only once? Edit: In these snowflakes, the side of a polygon added at the $n$th iteration has $1/3$ the length of a side of a polygon added at the $(n-1)$th iteration. Edit 2: I think the area of the hexagon ($n = 6$) case is $12/5$ the area of the original hexagon. I'm not sure about this, though.
As first conjectured by OP, for the hexagon ($n = 6$) case, the area of the generalized Koch snowflake indeed equals $\frac{12}{5}$ of that of the seed hexagon. This comes down to the following observation. When one scales the seed hexagon to make its area two-thirds of that of a seed triangle, the generalized Koch snowflake generated from the seed hexagon "looks" the same as the Koch snowflake generated from the seed triangle. Below is an illustration of what happens at iteration levels $\ell = 0,1,2,3$ (upper-left, upper-right, lower-left, lower-right). The seed hexagon and the shapes generated from it are colored in red. The seed triangle and the shapes generated from it are colored in yellow. As you can see, there is nothing in red in the figure. Instead, we see a bunch of orange regions. This is because we have rendered the seed triangle and its descendants at $50\%$ opacity and superposed them on top of the seed hexagon and its descendants. There is nothing in red because in each iteration, the descendants from the triangle completely cover the descendants from the hexagon. At each iteration level $\ell$, their "difference" is a bunch of yellow triangles whose sides are $\frac{1}{3^{\ell+1}}$ of that of the seed triangle. If the seed triangle has unit area, the area of each yellow triangle is $\frac{1}{9^{\ell+1}}$. Let $n_\ell$ be the number of these triangles. If one compares the yellow triangles at iteration level $\ell$ to those at level $\ell-1$, we find we can group the triangles at level $\ell$ into units of three. A unit may either come from a triangle at level $\ell-1$ or be newly spawned at an edge of a descendant of the triangle at level $\ell-1$. This leads to the following recursive relation for $n_\ell$: $$n_\ell = \begin{cases} 3, & \ell = 0\\ 3(n_{\ell-1} + 3\cdot 4^{\ell-1}), &\ell > 0\end{cases}$$ Solving this gives $n_\ell = 9\cdot 4^\ell - 6\cdot3^\ell$.
The total area of the yellow triangles is $\frac{n_\ell}{9^{\ell+1}} = \left(\frac49\right)^\ell - \frac{2}{3^{\ell+1}}$. Since this converges to $0$ as $\ell \to \infty$ and the area of the Koch snowflake converges to $\frac85$, the area of the descendants of the hexagon also converges to $\frac85$. As a result, the area of the generalized Koch snowflake is $\frac85\left/\frac23\right. = \frac{12}{5}$ of that of the seed hexagon.
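The recursion for $n_\ell$, its closed form, and the area formula can all be cross-checked with exact rational arithmetic (a sketch):

```python
from fractions import Fraction

def n_rec(l):
    """The recursion n_0 = 3, n_l = 3(n_{l-1} + 3 * 4^{l-1})."""
    return 3 if l == 0 else 3 * (n_rec(l - 1) + 3 * 4 ** (l - 1))

for l in range(12):
    closed = 9 * 4 ** l - 6 * 3 ** l
    assert n_rec(l) == closed
    # total yellow area at level l, as derived above (seed triangle of unit area)
    area = Fraction(closed, 9 ** (l + 1))
    assert area == Fraction(4, 9) ** l - Fraction(2, 3 ** (l + 1))

# the hexagon snowflake area: (8/5) / (2/3) = 12/5 of the seed hexagon
assert Fraction(8, 5) / Fraction(2, 3) == Fraction(12, 5)
```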
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sections of the projection onto the monoid of the path-conncted components Let $M$ be a topological monoid and $A:=\pi_0(M)$ be the monoid consisting of its path connected components. Assume that $A$ is countable. We have a monoid homomorphism $\pi\colon M\longrightarrow A$. I would like to check that there exists at least a section of $\pi$, that is a monoid homomorphism $\sigma\colon A\longrightarrow M$ such that $\pi\circ\sigma=id_A$. I guess that in order to proceed one has to assume that the Axiom of Choice holds true. Any help?
This is false. For instance, let $M=\bigcup_{n\in\mathbb{Z}}\{n\}\times[|n|,\infty)$, which is a topological monoid under coordinatewise addition. The monoid $A$ of path-components is just $\mathbb{Z}$, with $\pi:M\to A$ being the first projection. But there does not exist any nontrivial monoid homomorphism $A\to M$, since $M$ has no invertible elements besides $(0,0)$.
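The two facts this counterexample rests on — that $M$ is closed under coordinatewise addition, and that $(0,0)$ is the only invertible element — can be spot-checked on a finite sample of integer points of $M$ (a sketch; the sampling bounds are arbitrary):

```python
# Elements of M are pairs (n, x) with n an integer and x >= |n|.
# Sample a finite integer grid of such pairs.
sample = [(n, x) for n in range(-5, 6) for x in range(abs(n), abs(n) + 6)]

# Closure: x1 + x2 >= |n1| + |n2| >= |n1 + n2|, so sums stay in M.
for (n1, x1) in sample:
    for (n2, x2) in sample:
        assert x1 + x2 >= abs(n1 + n2)

# No nontrivial inverses: (n, x) + (m, y) = (0, 0) forces x = y = 0 (both are
# nonnegative), and then |n| <= x = 0 forces n = m = 0.
inverses = [(p, q) for p in sample for q in sample
            if (p[0] + q[0], p[1] + q[1]) == (0, 0)]
assert inverses == [((0, 0), (0, 0))]
```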
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate $2^{2^{2^{\cdot^{\cdot^{2}}}}} \mod 2016$ How to find $2^{2^{2^{\cdot^{\cdot^{2}}}}} \mod 2016$ where $2$ occurs $2016$ times? My current observations: $$2^{11} = 2048 \equiv 2048 - 2016 = 32 = 2^5 \pmod{2016}$$ and $$ 2^{16} = 2^{11}\cdot 2^5 \equiv 2^5 \cdot 2^5 = 2^{10} \pmod{2016}$$ and now we have $2012$ twos left...
Hint: $2016 = 2^5 \cdot 3^2 \cdot 7$. Consider it separately mod $2^5$, mod $3^2$ and mod $7$, and combine using the Chinese Remainder Theorem.
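To see where this hint leads concretely, here is a sketch that reduces the tower recursively with Euler's theorem (the helper names are mine; the lifted exponent `e + phi` is the standard trick that keeps the reduction valid even though $\gcd(2, 2016) > 1$, and it applies here because every tower appearing in the recursion is far larger than $\log_2 m$):

```python
def totient(m):
    """Euler's phi via trial-division factorization."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(height, m):
    """2^2^...^2 (height twos) mod m, assuming the tower is tall enough that
    the true exponent exceeds log2(m) at every recursion level."""
    if m == 1:
        return 0
    if height == 0:
        return 1 % m
    if height <= 2:
        return (2 if height == 1 else 4) % m
    phi = totient(m)
    # a^e ≡ a^((e mod phi) + phi) (mod m) once e >= log2(m)
    return pow(2, tower_mod(height - 1, phi) + phi, m)

print(tower_mod(2016, 2016))  # -> 1024
```

The recursion terminates quickly because iterating $\varphi$ from $2016$ reaches $1$ in nine steps ($2016 \to 576 \to 192 \to 64 \to 32 \to 16 \to 8 \to 4 \to 2 \to 1$). The result is consistent with the hint: $1024 \equiv 0 \pmod{2^5}$, $1024 \equiv 7 \pmod{3^2}$, and $1024 \equiv 2 \pmod{7}$, which CRT glues back to $1024 \pmod{2016}$.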
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proof involving rational functions on an elliptic curve I am working through understanding the proof I posted below, and I have questions about 3 parts. (Note, the proof assumes results developed before, but I don't have questions on those. My questions are on things in the proof that are "common knowledge" mathematics but beyond my reach.) Q1) In the proof they claim that if $\frac{\alpha(x)}{\beta(x)} \in k(x,y)$ is a rational function on $E$ with $\gcd (\alpha(x), \beta(x)) = 1$. Then if the degree of $\alpha(x)$ > degree of $\beta(x)$, somehow this implies that $\frac{\alpha(x)}{\beta(x)} \notin \mathscr{O_\infty}$. Where $\mathscr{O_\infty}$ is the local ring of rational functions defined at $\infty$. Why is this the case? Q2) Why in the DVR $\mathscr{O_\infty}$ , $x$ has order $-2$: It is expressed as $x = u\pi^{-2}$ for a unit $u$ and uniformizer $\pi$. Q3) Why simply because $\gamma(\infty) = \infty$ this implies that $v_{\infty}(\frac{\alpha(x)}{\beta(x)}) < 0$ where I think $v_{\infty}$ is the valuation or in other words the exponent of $\pi$ in the representation of $\frac{\alpha(x)}{\beta(x)}$.
Q1: The behaviour of $f(x)=\frac{\alpha(x)}{\beta(x)}$ at $\infty$ can be studied as the behaviour of $g(x):=f(1/x)$ at $x=0$. If $\deg\alpha>\deg\beta$, then $\alpha(1/x)$ has a higher-order pole than $\beta(1/x)$, meaning that $g(x)$ has a pole at $x=0$ and $f(x)$ is not defined at $x=\infty$. Q2: I'll give some illustrative heuristics: at $\infty$, the lower-degree parts in the Weierstrass form $y^2=x^3+ax+b$ become irrelevant; or argue more precisely by taking reciprocals as above and considering the origin. There $y^2=x^3$ has a cusp and $E$ is described by $(x,y)=(t^2,t^3)$, which makes $x$ the square of a function vanishing at the origin. Going back to $\infty$, it is of course $\frac1x$ that is the square of something vanishing at $\infty$. Q3: I suppose you are confused only because the valuation is taken at $\infty$ instead of at a finite place. In the end, positive valuation means a zero at that place, negative means a pole at that place, and zero valuation means a finite nonzero value at that place. It is the same at $\infty$: if $\gamma(\infty)=0$, then $\gamma$ has a zero at $\infty$ and is in $\mathfrak m_\infty$, i.e., $v_\infty(\gamma)>0$. Vice versa, if $\gamma(\infty)=\infty$, it has a pole at $\infty$, i.e., is the reciprocal of something having a zero there, hence $v_\infty(\gamma)<0$. (And only if $\gamma(\infty)$ were a non-zero finite value would we have $v_\infty(\gamma)=0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $b=1$ resolve the contradiction in this proof? The proof is given below: Theorem 3.3 (Pythagoras). The number $\sqrt{2}$ is irrational. Proof. Suppose, to the contrary, that $\sqrt{2}$ is a rational number, say $\sqrt{2}=a/b$, where $a$ and $b$ are both integers with $\gcd(a,b)=1$. Squaring, we get $a^2=2b^2$, so that $b\mid a^2$. If $b>1$, then the Fundamental Theorem of Arithmetic guarantees the existence of a prime $p$ such that $p\mid b$. It follows that $p\mid a^2$ and, by Theorem 3.1, that $p\mid a$; hence, $\gcd(a,b)\geq p$. We therefore arrive at a contradiction, unless $b=1$. But if this happens, then $a^2=1$, which is impossible (we assume that the reader is willing to grant that no integer can be multiplied by itself to give $2$). Our supposition that $\sqrt{2}$ is a rational number is untenable, and so $\sqrt{2}$ must be irrational. But I do not understand why $b=1$ resolves the contradiction in this proof. Could anyone explain this for me please?
If $a^2 = 1$, then $a = 1$ or $a = -1$. In the latter case, $b$ would also need to be negative for their ratio to be positive; so, let us pursue the former case without loss of generality. If $a=1$, then we would have $1/b = \sqrt{2}$ for some positive integer $b$. But, squaring both sides would then yield that $1/b^2 = 2$, i.e., $b^2 = 1/2$. Since $b$ is a positive integer, we have that $b \geq 1$ hence $b^2 \geq 1$, from which we conclude that it cannot be the case that $b^2 = 1/2$, for $1/2 \not\geq 1$.
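As a purely illustrative sanity check of the underlying fact — that $a^2 = 2b^2$ has no solutions in positive integers — a brute-force sweep over a finite range finds none:

```python
# Search for positive integer solutions to a^2 = 2*b^2; the proof says none exist.
solutions = [(a, b) for a in range(1, 500) for b in range(1, 500)
             if a * a == 2 * b * b]
assert solutions == []

# And the b = 1 case in particular would force a^2 = 2, impossible for an integer:
assert all(a * a != 2 for a in range(-500, 500))
```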
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determine the slope and write the Cartesian equation of the line. Write the equation of the line through the origin of coordinates that has direction vector with components $(1,2)$; determine the slope and write the Cartesian equation of the line. My attempt: Let $l$ be a line that passes through the origin. Let $a$ be a direction vector of $l$. Since $a$ is a direction vector of $l$, the point $(1,2)$ lies on the line. We have two points of the line. We know the general equation of the line is $Ax+By+C=0$. Here I'm stuck; can someone help me?
I take it you want the line through the origin and $(1,2)$. The equation is $y=2x$. This is in slope-intercept form (slope $2$, $y$-intercept $0$). The slope is $m=\dfrac {\Delta y}{\Delta x}=\dfrac{2-0}{1-0}=2$.
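The arithmetic can be mirrored in a tiny check — slope from the direction vector, then points $t\,(1,2)$ landing on $y=2x$ (illustrative only):

```python
# Direction vector (1, 2) gives slope dy/dx = 2; the line through the
# origin with that slope is y = 2x.
dx, dy = 1, 2
m = dy / dx
assert m == 2
# Every point t*(1, 2) on the line satisfies y = 2x:
assert all(t * dy == m * (t * dx) for t in range(-10, 11))
```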
{ "language": "en", "url": "https://math.stackexchange.com/questions/3223962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Can the radius of convergence of a sum of two power series be an arbitrary number? Let $\sum_{n=0}^\infty a_n z^n,\sum_{n=0}^\infty b_n z^n$ be two series with the same radius of convergence $R>0$. Can the radius of convergence of their sum be any positive real number which is greater than $R$?
Yes, it can. If $\sum_{n=0}^\infty a_nz^n$ has radius of convergence $R\in(0,\infty)$, and if $r>R$, consider the series $\sum_{n=0}^\infty(-a_n+r^{-n})z^n$, whose radius of convergence is also $R$. But the radius of convergence of the sum of both series is $r$.
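The construction can be illustrated numerically with $R = 1$ and $r = 2$: take $a_n = 1$ and $b_n = -1 + r^{-n}$. The root test $1/\limsup |c_n|^{1/n}$ then gives radius $1$ for each series but radius $r$ for their sum (a sketch, with a single large $n$ standing in for the limsup; $n = 50$ keeps $r^{-n}$ within double precision when added to $-1$):

```python
def radius_estimate(coeff, n=50):
    """Root-test estimate 1 / |c_n|^(1/n), with a finite n standing in for the limsup."""
    return 1 / abs(coeff(n)) ** (1 / n)

r = 2.0
a = lambda n: 1.0              # coefficients of the first series: radius 1
b = lambda n: -1.0 + r ** -n   # also radius 1: the r^-n term is invisible to the root test
c = lambda n: a(n) + b(n)      # sum series, coefficients r^-n: radius r

assert abs(radius_estimate(a) - 1.0) < 0.01
assert abs(radius_estimate(b) - 1.0) < 0.01
assert abs(radius_estimate(c) - r) < 0.01
```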
{ "language": "en", "url": "https://math.stackexchange.com/questions/3224131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }