Topological entropy of general tent map (linked question: "Measure theoretic entropy of general tent maps") The linked question made me wonder how to calculate the topological entropy of a general tent map. Let $I=[0,1]$ and $\alpha \in ( 0,1)$. Define $T: I \rightarrow I$ by $T(x)= x/\alpha$ for $x \in [0,\alpha]$ and $T(x)=(1-x)/(1-\alpha)$ for $x \in [\alpha,1]$. What is the topological entropy of $T$ and how does one prove it? I suspect it is $\log 2$ regardless of the value of $\alpha$, unlike its metric entropy. I suspect it is $\log 2$ because $T$ seems somewhat conjugate to the shift map on $2^{\mathbb N}$, except it can't be, because the interval is not homeomorphic to $2^{\mathbb N}$.
Since the peak of the skewed tent occurs at $(\alpha, 1)$, the pre-image of this point will consist of two points, and therefore the second iterate will have four laps. But now we can apply this argument to both of these new points (only because of the surjectivity of the function, ensured by the peak of the tent "touching the top"). In general the critical point will have $2^n$ $n$-th pre-images. Therefore the number of laps will be $l(T^{(n)})=2^n$. So its growth number will be $s(T)=\lim_{n\rightarrow + \infty}l(T^{(n)})^{1/n}=2$. Finally, by a known result of Misiurewicz, $h(T)=\log s(T)$. Therefore the entropy is $\log 2$.
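A quick numerical sanity check of the lap count (my addition, assuming NumPy is available): this evaluates $T^{(n)}$ on a fine grid and counts maximal monotone pieces, a heuristic rather than a proof.

```python
import numpy as np

def tent(x, alpha):
    # Skewed tent map: increasing on [0, alpha], decreasing on [alpha, 1].
    return np.where(x <= alpha, x / alpha, (1 - x) / (1 - alpha))

def laps(n, alpha, grid=200001):
    # Count maximal monotone pieces of the n-th iterate on a fine grid.
    y = np.linspace(0, 1, grid)
    for _ in range(n):
        y = tent(y, alpha)
    s = np.sign(np.diff(y))
    return 1 + np.count_nonzero(s[1:] != s[:-1])  # one lap per sign flip, plus one

for n in range(1, 7):
    print(n, laps(n, alpha=0.3))  # expect 2, 4, 8, 16, 32, 64
```

The same counts come out for other values of $\alpha\in(0,1)$, consistent with the entropy being $\log 2$ independently of $\alpha$.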
{ "language": "en", "url": "https://math.stackexchange.com/questions/760717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A question about a continuous function that satisfies: $\forall x\in\mathbb{R}\ \exists y\in\mathbb{R}$ with $x<y$ and $f(x)<f(y)$ I got this question: Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous function that satisfies the property: for all $x\in\mathbb{R}$ there exists $y \in\mathbb{R}$ such that $x < y$ and $f(x)<f(y)$. I was able to prove (a hard proof, though) that if $a\in\mathbb{R}$ then for all $M>0$ there exists $y>M$ such that $f(a)<f(y)$; take this for granted. Prove that if $\lim_{x\to\infty}f(x)=5$ then $\forall x\in\mathbb{R},\ f(x)<5$. But I got stuck. Thanks.
Assume that there is an $a$ such that $f(a) > 5$. Let $\epsilon = f(a) - 5 > 0$. If $$ \lim_{x\to \infty} f(x) = 5 $$ then you would have an $N$ such that if $x \geq N$ then $\lvert f(x) - 5 \rvert < \epsilon / 2$. Now $f$ being continuous on $[a, N]$ where $f(a) > f(N)$, $f$ attains a maximum at $x_0$ on $[a, N]$. This is also the maximum on $[a, \infty)$. But this contradicts the existence of a $y > x_0$ such that $f(x_0) < f(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/760770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Zonal averaging of partial derivatives Given a quantity $Q(x,y,t)$, the zonal average operator $[Q] = \frac{1}{2\pi}\int_0^{2\pi} Q\:\mathrm{d} \lambda$, and zonal anomaly $Q^\star$ such that $Q = [Q] + Q^\star$, my text book says that zonally averaging $$\frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}(uQ) + \frac{\partial}{\partial y}(vQ) = S$$ gives $$\frac{\partial [Q]}{\partial t} + [v] \frac{\partial [Q]}{\partial y} + \frac{\partial}{\partial y}[v^\star Q^\star] = [S]$$ I'm unable to see how to arrive at this answer. What is the method or intuition for applying the zonal averaging?
I figured it out. It's clear that the zonally averaged $\partial/\partial x$ is zero. The terms involving $\partial/\partial y$ come from expanding $vQ = ([v] + v^\star)([Q] + Q^\star)$, then noticing that $[v^\star [Q]] = [[v] Q^\star] = 0$. I also had to convince myself that $[\partial Q/\partial t] = \partial [Q] / \partial t$. I'd still be interested in a more rigorous answer, however.
{ "language": "en", "url": "https://math.stackexchange.com/questions/760867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is this linear functional bounded? Find the norm. $$\ell^2\ni (x_n)\mapsto 2x_{1}+28x_2+35 x_{3}$$ I think it can be bounded: $$|2x_{1}+28x_2+35 x_{3}| \le |2x_{1}|+|28x_2|+|35 x_{3}| \le 65 \Big(\sum_{n=1}^{\infty}|x_n|^2\Big)^{1/2}$$ But I can't find the norm.
What is the norm of the linear functional $L(x)$? It is the smallest constant $M$ such that $$|L(x)| \leq M ||x||_2$$ holds for every $x \in \ell^2$. Okay, so now note that your linear functional is of the form $\langle a, x \rangle$, so apply the Cauchy inequality to get $$|L(x)| = |\langle a, x \rangle| \leq ||a||_2 ||x||_2.$$ This means $M \leq ||a||_2$. Now use the fact that $a_1 x_1 + a_2 x_2 + a_3 x_3 = ||a||_2 ||x||_2 \cos \theta$, provided $x_n = 0$ for $n>3$, to show that if $M < ||a||_2$, you could pick a $\theta$, and hence an $x$, such that $|\langle a, x \rangle| > M ||x||_2$.
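Concretely, here $a=(2,28,35,0,0,\dots)$, so the norm is $\|a\|_2=\sqrt{2^2+28^2+35^2}=\sqrt{2013}\approx 44.87$, noticeably smaller than the crude bound $65$. A minimal numeric check (my addition, assuming NumPy; padding $x$ with zeros keeps it in $\ell^2$):

```python
import numpy as np

a = np.array([2.0, 28.0, 35.0])     # nonzero coordinates of the functional
norm_a = np.linalg.norm(a)          # sqrt(2013) ~ 44.866
x = a / norm_a                      # unit vector; pad with zeros for an l^2 element
print(norm_a, abs(a @ x))           # both ~ 44.866, so the bound is attained
```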
{ "language": "en", "url": "https://math.stackexchange.com/questions/760962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A characterization for subgroups. Let $G$ be a group and $a_0,a_1,...,a_n\in G$ and $$A=\{a_0,a_1,...,a_n\}$$ and $$(\forall m\le n)(\forall i\le m)(a_{i}a_{m-i}\in A)$$ Is $A$ a subgroup of $G$? What if $G$ is abelian?
Let $n = 1$, $G = \mathbb{Z}/4\mathbb{Z}$. Now let $A = \{0,1\}$. Now note that each of the sums $0+0,0+1,1+0$ are in $A$. Hence $A$ satisfies the given condition but is not a subgroup. (Notice that the sum $2 = 1+1$ doesn't have to be in $A$ since that would correspond to $a_1 + a_1$ which would require $i = 1,m = 2$ when we have $m \leq 1$). If one changes the condition to $a_ia_j\in A$ for any indices $i,j\leq n$ (not just pairs whose sum is at most $n$), then you can check that $A$ must be a subgroup.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graph Theory Question Related to Domination number. Let G be a graph whose diameter is at least 3. Prove that the domination number of the complement of G is at most 2. I know that since the diameter of G is at least 3, the diameter of the complement of G is at most 3. However, this doesn't seem to be enough to prove the domination number of the complement of G is at most 2. Any suggestions?
Damned if I know. Let me try and follow my nose here. The diameter of $G$ is at least $3$, what does that mean? It means there are two vertices $u,v$ in $G$ such that $\operatorname d(u,v)\ge3$. And we want to show that the complement $\bar G$ has a dominating set containing at most $2$ vertices. Hmm. Maybe I can show that $\{u,v\}$ is a dominating set for $\bar G$? Let's see. I want to try and show that each vertex $x\notin\{u,v\}$ is adjacent to $u$ or $v$ in $\bar G$. What if I assume for a contradiction that $x$ is adjacent to neither $u$ nor $v$ in $\bar G$? So $x$ is adjacent to both $u$ and $v$ in $G$. OK, so back in $G$ I've got two vertices $u,v$ with $\operatorname d(u,v)\ge3$, and another vertex $x$ is adjacent to both $u$ and $v$. Is there any way to get a contradiction out of that?? Beats me. I give up.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate the line integral $\int_C \ x^2 dx+(x+y)dy \ $ Evaluate the line integral $$\int_C \ x^2 dx+(x+y)dy \ $$ where $C$ is the path along the right triangle with vertices $(0,0), (4,0)$ and $(0,10)$ that starts from the origin, goes to $(4,0)$, then to $(0,10)$, then back to the origin. I did this problem but the answer is incorrect. First I looked at the piece from $(0,0)$ to $(4,0)$: $r(t) = \langle 4t,0 \rangle$ for $0 \le t \le 1$, and the magnitude of $r'(t)$ was $4$. Using this information I got $40/3$ as my answer after evaluating the integral. Then I looked at the piece from $(4,0)$ to $(0,10)$: $r(t)=\langle 4-4t,10t \rangle$ for $0 \le t \le 1$, and the magnitude of $r'(t)$ is $2 \sqrt{29}$. Using this information I got $\frac{74}{3}\sqrt{29}$ as my answer after evaluating the integral. Then I looked at the piece from $(0,10)$ to $(0,0)$: $r(t) = \langle 0,-10t \rangle$ for $-1 \le t \le 0$, and the magnitude of $r'(t)$ is $10$. Evaluating the integral I got the answer of $50$. Adding all three, my final answer was $196.1673986$, but LON-CAPA says this is incorrect. Can someone tell me where I am making a mistake?
The line integrations along each leg look like $$ (0,0) \ \rightarrow \ (4,0) \ : \ \ dy \ = \ 0 \ \ \Rightarrow \ \ \int_0^4 \ x^2 \ dx \ \ ; $$ $$ (4,0) \ \rightarrow \ (0,10) \ : \ \ y \ = \ 10 - \frac{10}{4}x \ \ \Rightarrow \ \ \int_4^0 \ x^2 \ dx \ + \ (x \ + \ [10 - \frac{10}{4}x]) \ (- \frac{10}{4} \ dx) $$ [note that the first term reverses the integration along the leg on the $ \ x-$ axis (which gives $ \ \frac{64}{3} \ $ ) , so we'll drop it] $$ \int_4^0 \ \frac{15}{4}x \ - \ 25 \ \ dx \ \ = \ (\frac{15}{8}x^2 \ - \ 25x) \ \vert^0_4 \ = \ -\frac{15}{8} \cdot 4^2 \ + \ 100 \ = \ 70 \ \ ; $$ $$ (0,10) \ \rightarrow \ (0,0) \ : \ \ dx \ = \ 0 \ , \ x \ = \ 0 \ \ \Rightarrow \ \ \int_{10}^0 \ (x+y) \ dy \ = \ 0 \ + \ \frac{1}{2} y^2 \ \vert_{10}^0 $$ $$ = \ -\frac{100}{2} \ = \ -50 \ \ . $$ So the sum around the triangle is $$ [ \frac{64}{3} ] \ + \ \left( \ [-\frac{64}{3} ] \ - \ 30 \ + \ 100 \ \right) \ - \ 50 \ = \ 20 \ \ , $$ confirming the Green's Theorem result found by NotNotLogical.
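As a cross-check (my addition, not part of the answer above): Green's theorem reduces the integral to $\iint_T \big(\partial_x(x+y)-\partial_y(x^2)\big)\,dA=\operatorname{area}(T)=\tfrac12\cdot4\cdot10=20$, and the leg-by-leg computation can be replayed with SymPy:

```python
import sympy as sp

t = sp.symbols('t')

def leg(x, y):
    # Integrate x^2 dx + (x+y) dy along a segment parameterized by t in [0, 1].
    return sp.integrate(x**2 * sp.diff(x, t) + (x + y) * sp.diff(y, t), (t, 0, 1))

total = (leg(4*t, 0*t)            # (0,0) -> (4,0)
         + leg(4 - 4*t, 10*t)     # (4,0) -> (0,10)
         + leg(0*t, 10 - 10*t))   # (0,10) -> (0,0)
print(total)  # 20
```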
{ "language": "en", "url": "https://math.stackexchange.com/questions/761226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Verifying whether a quotient ring is indeed a ring. Take $$\frac{\Bbb{R[x]}}{\langle x^2+1\rangle}$$ This is a ring. In this quotient ring, the product of equivalence classes $[a+bx]$ and $[c+dx]$ is another equivalence class, as a ring is closed under multiplication. $[a+bx]=\{a+bx,a+bx+(x^2+1),a+bx+2(x^2+1),\dots\}$, and $[c+dx]=\{c+dx,c+dx+(x^2+1),c+dx+2(x^2+1),\dots\}$. Also, we should have $[a+bx][c+dx]=[(ac+bdx^2)+(ad+bc)x]$. Which two elements in $[a+bx]$ and $[c+dx]$ multiply with each other to give $(ac+bdx^2)+(ad+bc)x+(x^2+1)$, which is a member of $[(ac+bdx^2)+(ad+bc)x]$? Note: I know that $\{a+bx,a+bx+(x^2+1),a+bx+2(x^2+1),\dots\}\neq[a+bx]$; it is only part of the inverse image of $[a+bx]$ under the canonical mapping. I only write all this for a clear exposition: $[m][n]=[mn]$ only because the inverse images of $[m]$ and $[n]$, under the canonical mapping, multiply together into the inverse image of $[mn]$. Thanks!
To be honest, if you are trying to show this is indeed a ring, there are indirect methods that are far easier. For example: consider the evaluation homomorphism $ev_i:\mathbb{R}[x] \rightarrow \mathbb{C}$ defined as follows: $$f(x) \mapsto f(i)$$ This homomorphism is indeed surjective since, given any $(a + bi) \in \mathbb{C}$, simply take the polynomial $f(x) = a + bx$. Further, $\ker(ev_i) = \langle x^2 + 1 \rangle$. Now we apply the first isomorphism theorem: $$\mathbb{R}[x]/ \langle x^2 + 1 \rangle \cong \mathbb{C}$$ So it is certainly a ring, and even a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does the normal approximation to the binomial distribution use np > 5 as a condition? I was reading about the normal approximation to the binomial distribution and I don't know how it works for cases where, say, $p$ is equal to 0.3, where $p$ is the probability of success. On most websites it is written that the normal approximation to the binomial distribution works well if the average is greater than 5, i.e. $np > 5$. But I am unable to find where this empirical formula came from. If $n$ is quite large and the probability of success is equal to 0.5, then I agree that the normal approximation to the binomial distribution is going to be quite accurate. But what about other cases? How can one say $np > 5$ is the condition for doing the normal approximation?
So I did some experiments. I think the $np>5$ condition is not correct at all. It depends on the excess kurtosis value for a given binomial distribution. If it is mesokurtic then the approximation will give accurate results. Check the following table: for $n=11$ and $p=0.5$ the excess kurtosis, $\frac{1-6p(1-p)}{np(1-p)}$, will be around $-0.18$. That is platykurtic, and so I don't think the approximation will give accurate results, even though $np=5.5 > 5$. The table shows results which manifest what I am trying to say.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Let $A$ be the set of all $4$ digit numbers $a_1a_2a_3a_4$ such that $a_1 < a_2 < a_3 < a_4$, then what is $n(A)$ equal to? How can you solve this problem relatively quickly using combinatorics? I found it really interesting. Let $A$ be the set of all $4$ digit numbers $a_1a_2a_3a_4$ such that $a_1 < a_2 < a_3 < a_4$, then what is $n(A)$ equal to?
$7,6,5,4,3,2$ in second positions give $6,5,4,3,2,1$ cases respectively of the trees below in which for $7$ one has one case, for $6$ three, for $5$ six, for $4$ ten, for $3$ fifteen, and for $2$ twenty one cases (these are the number of endpoints in each tree). This gives finally $$6\cdot1+5\cdot3+4\cdot6+3\cdot10+2\cdot15+1\cdot21=\color{red}{126\text{ cases }}$$
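Equivalently, a strictly increasing string $a_1a_2a_3a_4$ is just a choice of $4$ distinct digits from $\{1,\dots,9\}$ ($0$ cannot appear, since it would have to be the leading digit), so $n(A)=\binom{9}{4}=126$. A brute-force confirmation (my addition):

```python
from math import comb

def increasing(n):
    # True when the digits of n are strictly increasing.
    d = [int(c) for c in str(n)]
    return all(p < q for p, q in zip(d, d[1:]))

brute = sum(1 for n in range(1000, 10000) if increasing(n))
print(brute, comb(9, 4))  # 126 126
```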
{ "language": "en", "url": "https://math.stackexchange.com/questions/761622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$\mathbb{R}\text{P}^{n-1}$ is not a retract of $\mathbb{R}\text{P}^n$ I have to solve the following: Show that $\mathbb{R}\text{P}^{n-1}$ is not a retract of $\mathbb{R}\text{P}^n$ for $n\geq 2$. I have done this with knowledge of homotopy groups, by showing that $\mathbb{Z}$ cannot factor through $0$ or $\mathbb{Z}_{2}$. Yet, I would like to know: is there some other way to prove that (without using homotopy groups)? Any help is welcome. Thanks in advance.
One can use the fact that $H^*(\mathbb R P^n, \mathbb Z/2) \cong \mathbb Z/2[x]/x^{n+1}$ as a graded commutative ring, where $x$ is in degree one. The inclusion $\mathbb R P^{n-1} \to \mathbb RP^n$ induces a map of graded rings $\mathbb Z/2[x]/x^{n+1} \to \mathbb Z/2[x]/x^n$. By considering fundamental groups or using the cell structure one can see easily that $x\mapsto x$, and so the map is the standard quotient map. If there were a retraction $\mathbb R P^{n} \to \mathbb RP^{n-1}$, then that would induce a section $\mathbb Z/2[x]/x^{n} \to \mathbb Z/2[x]/x^{n+1}$ of the quotient map. But, then we would still have $x\mapsto x$, and so $0 = x^n \mapsto x^n \neq 0$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Application of the sampling distribution of $\bar{x}$ The GPAs of all students enrolled at a large university have an approximately normal distribution with a mean of 3.02 and a standard deviation of 0.29. Find the probability that the mean GPA of a random sample of 20 students selected from this university is: a) 3.10 or higher; b) 2.90 or lower; c) 2.95 to 3.11.
Imagine taking a random sample of $20$ students. Let random variables $X_1,X_2,\dots, X_{20}$ be their GPAs, and let $Y=\frac{1}{20}(X_1+X_2+\cdots+X_{20})$ be the sample mean. Then $Y$ has (approximately) normal distribution, with mean $3.02$ and standard deviation $\dfrac{0.29}{\sqrt{20}}\approx 0.064846$. From here on we pretend $Y$ has exactly normal distribution, with mean $3.02$ and standard deviation $0.064846$. Then $$\Pr(Y\ge 3.10)=\Pr\left(Z\ge \frac{3.10-3.02}{0.064846}\right),$$ where $Z$ is standard normal. Now we can find the required probability by using tables of the standard normal, or appropriate software. The other questions are handled in a similar way.
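Carrying the arithmetic through (a supplement of mine, assuming SciPy is available):

```python
from scipy.stats import norm

mu, sigma = 3.02, 0.29 / 20**0.5                 # standard error ~ 0.064846
print(norm.sf(3.10, mu, sigma))                  # (a) P(Y >= 3.10) ~ 0.109
print(norm.cdf(2.90, mu, sigma))                 # (b) P(Y <= 2.90) ~ 0.032
print(norm.cdf(3.11, mu, sigma)
      - norm.cdf(2.95, mu, sigma))               # (c) P(2.95 <= Y <= 3.11) ~ 0.777
```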
{ "language": "en", "url": "https://math.stackexchange.com/questions/761873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If G acts on X, show that there must be a fixed point for this action. Please help. Suppose that $G$ is a group of order $p^k$, where $p$ is prime and $k$ is a positive integer. Suppose that $X$ is a finite set and assume that $p$ does not divide $|X|$. If $G$ acts on $X$, show that there must be a fixed point for this action, i.e. show that there is some element $x\in X$ whose stabilizer $G_x=\{g\in G : g\cdot x = x\}$ is all of $G$.
Put together facts you should know:

* A $G$-set $X$ is the disjoint union of its orbits, so $|X|$ is the sum of the sizes of the orbits.
* The size of an orbit divides $|G|$ by orbit-stabilizer.
* Fixed points correspond to orbits of size $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/761978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find the number of real solutions Let $$f(x)=\frac{1}{2}( |x-a|+|x-b|),$$ where $x$ is a real number; no information is given on $a$ and $b$. Study the differentiability of this function and determine how many real solutions the equation $\mathbf{f(x)=m}$ has, where $m$ is a real number. The problem asks us not to use plotting. How do you solve this? I'm completely clueless, I don't even know where to start!
Suppose first that $a<b$. Then $$f(x)=\frac{1}{2}\begin{cases}a+b-2x & x\le a \\ b-a & a<x<b \\ 2x-a-b & b\le x\end{cases}$$ Taking the derivative, we get $$f'(x)=\frac{1}{2}\begin{cases} -2 & x< a \\ 0 &a<x<b \\ 2 & b<x\end{cases}$$ You will note that there are discontinuities of $f'(x)$ at $a, b$; at these points $f(x)$ is not differentiable. I leave the cases $a=b$ and $a>b$ for you to consider.
{ "language": "en", "url": "https://math.stackexchange.com/questions/762181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the alternating group $A_5$ contain a subgroup isomorphic to $\Bbb Z_{20}$? What are all the possible orders of elements in the group $A_5$? Does $A_5$ contain a subgroup isomorphic to $\Bbb Z_{20}$? How about $\Bbb Z_{10}$? How about $\Bbb Z_5$? Justify your answers. I've found that the possible orders are 1, 2, 3, and 5. I'm having a hard time conceptualizing this. The identity in the subgroup must map to the identity in $\Bbb Z_{20}$. There must exist a bijective function from the subgroup to $\Bbb Z_{20}$, and this function must satisfy $f(ab)=f(a)f(b)$. So I must find a function that works for this? I'm confused.
Hint: a subgroup isomorphic to ${\mathbb Z}_n$ would be generated by an element of order $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/762391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that there are exactly two lines through a point p outside the circle that are tangent to the circle C Let $C$ be a circle of radius $r$ in the plane. Let $p$ be a point in the plane that lies outside of $C$. Show that there are exactly two lines through $p$ that are tangent to $C$. It is one of those questions that seem very intuitive but very hard to prove for me. How do I show that there are "exactly" two tangent lines? Try to construct a third one but reach a contradiction? I'd appreciate any help. Thanks.
Method Using Calculus: Say we are given any circle and any point outside of that circle. WLOG, we can translate the circle to be centered at the origin, and rotate our system so that the point is situated along the $y$-axis. From here, we claim that we can hit that point with exactly two tangent lines to the circle. Imagine the circle is described by the equation $x^2 + y^2 = r^2$, and consider some point outside of the circle on the $y$-axis, say $(0, b)$. Differentiating implicitly, we find that $\frac{dy}{dx} = -x/y$. Therefore, the tangent line through a given point on the circle, say $(m, n)$, will be described by the equation $y = (-m/n)x + b$, where $b$ is the $y$-intercept at the desired point. Plugging in to solve for $b$: $$n = -m^2/n + b$$ $$b = n+ m^2/n$$ We want to find points along our circle that both satisfy the above, and also $x^2 + y^2 = r^2$. Rearranging the above, we get: $$m^2 + n^2 = nb$$ In order to satisfy our circle's equation, we must take $n$ such that $nb = r^2 \Longrightarrow n = r^2/b$. From here, we have exactly two choices for $m$ (positive or negative), and hence, exactly $2$ possible tangent lines to the circle through the desired point.
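A numeric illustration of this construction (mine, with made-up values $r=1$, $b=2$): then $n=r^2/b=\tfrac12$, $m=\pm\sqrt{r^2-n^2}=\pm\sqrt{3}/2$, and each resulting line sits at distance exactly $r$ from the center, i.e. is tangent.

```python
import numpy as np

r, b = 1.0, 2.0
n = r**2 / b                                   # y-coordinate of both tangency points
for m in (np.sqrt(r**2 - n**2), -np.sqrt(r**2 - n**2)):
    # Tangent line through (0, b) touching at (m, n): y = (-m/n) x + b,
    # i.e. (m/n) x + y - b = 0; its distance from the origin should equal r.
    dist = abs(b) / np.hypot(m / n, 1.0)
    print(m, n, dist)                          # dist = 1.0 for both signs of m
```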
{ "language": "en", "url": "https://math.stackexchange.com/questions/762472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
probability, please help on Bayes question I don't need the exact answer, I just need help judging whether my following method is correct or not. Question: A physician has 5 patients. There are treatments A and B. The physician gives treatment A to 3 randomly selected patients, and B to the other 2. Suppose that treatment A produces a cure in any patient with a probability of 0.3, and B with 0.6. What is the probability that the two treatments will produce the same number of cures? My attempt: Let $X$ = # of cures in treatment A and $Y$ = # of cures in treatment B. Then $$P(X=Y)=P(X=0,Y=0)+P(X=1,Y=1)+P(X=2,Y=2)$$ $$=P(X=0)P(Y=0)+P(X=1)P(Y=1)+P(X=2)P(Y=2).$$ $P(X=0)=p^3(1-p)^2$, but I need to figure out what $p$ is. Now comes my question. For treatment A, $P(\text{Cure}\mid\text{treatment A})=0.3$ as given in the problem. My question is: should I use this number, or $P(\text{Cure and treatment A})=P(A)\cdot P(\text{Cure}\mid\text{treatment A})$, to fill in the $p$ in the above?
The setup is correct, we want $$\Pr(X=0)\Pr(Y=0)+\Pr(X=1)\Pr(Y=1)+\Pr(X=2)\Pr(Y=2).$$ We have $\Pr(X=0)=(0.7)^3$ and $\Pr(Y=0)=(0.4)^2$. For $\Pr(X=1)$, the right expression is $\binom{3}{1}(0.3)(0.7)^2$. Similarly, $\Pr(Y=1)=\binom{2}{1}(0.6)(0.4)$. Finally, $\Pr(X=2)=\binom{3}{2}(0.3)^2(0.7)$ and $\Pr(Y=2)=(0.6)^2$. Remark: There is nothing particularly Bayesian about the calculation. The random variables $X$ and $Y$ have binomial distribution, with parameters fully specified in the statement of the problem.
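Numerically (my addition, assuming SciPy), the answer comes out to about $0.3346$:

```python
from scipy.stats import binom

pX = binom.pmf([0, 1, 2], 3, 0.3)   # cures under treatment A (3 patients)
pY = binom.pmf([0, 1, 2], 2, 0.6)   # cures under treatment B (2 patients)
print((pX * pY).sum())               # P(X = Y) ~ 0.3346
```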
{ "language": "en", "url": "https://math.stackexchange.com/questions/762569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\lim_{n\to \infty}\left(1 - \frac {1}{n^2}\right)^n =?$ Can you give any idea regarding the evaluation of the following limit? $\lim_{n\to \infty}\left(1 - \frac {1}{n^2}\right)^n$ We know that $\lim_{n\to \infty}\left(1 - \frac {1}{n}\right)^n = e^{-1}$, but how do I use that here?
Here is a hint: $\left(1 - \dfrac{1}{n^2}\right) = \left(1 - \dfrac{1}{n}\right)\left(1 + \dfrac{1}{n}\right)$ Also use the fact that $(ab)^n = a^n b^n$.
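Following the hint, $\left(1-\frac1{n^2}\right)^n=\left(1-\frac1n\right)^n\left(1+\frac1n\right)^n\to e^{-1}\cdot e=1$; a quick numeric check (mine):

```python
for n in (10, 100, 1000, 10**6):
    print(n, (1 - 1 / n**2) ** n)   # approaches 1, roughly like 1 - 1/n
```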
{ "language": "en", "url": "https://math.stackexchange.com/questions/762625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Continuity and simplification of a function I have a question to ask about a function. Suppose a function $$f(x) = \frac{x^2 - x}{ x - 1},$$ we can simplify this function to be $f(x) = x$. Yet, we say that this function is discontinuous at $x = 1$ but after the simplification, we say that the function $f(x)$ is continuous. Which one is correct? The fact that $f$ is discontinuous or continuous?
The original function $$f\left(x\right)=\frac{x^{2}-x}{x-1}$$ has $\mathbb{R}\backslash\left\{ 1\right\} $ as (maximal) domain and is continuous. It is not defined on $\left\{ 1\right\} $ and consequently statements like '$f$ is (dis)continuous at $1$' don't make sense. It can only be (dis)continuous at points that belong to its domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/762715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Proving $\arctan1 + \arctan\frac13 +\cdots+\arctan\frac1{n^2+n+1}=\arctan (n+1)$ by induction How to solve this problem using mathematical induction? $$\arctan1 + \arctan\frac13 +\cdots+\arctan\frac1{n^2+n+1}=\arctan (n+1)$$
Hint: $$\tan(a+b)=\frac{\tan a+\tan b}{1-\tan a\tan b}$$ In your case, for the base case $n=1$, $$\tan\left(\arctan 1+\arctan\frac{1}{3}\right)=\frac{\tan(\arctan 1)+\tan(\arctan\frac{1}{3})}{1-\tan(\arctan1)\tan(\arctan\frac{1}{3})}=\frac{1+\frac{1}{3}}{1-\frac{1}{3}}=2$$So $\arctan 1+\arctan\frac{1}{3}=\arctan 2$.
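A quick numeric check of the full identity (my addition):

```python
from math import atan

for n in range(1, 8):
    # k = 0 gives arctan 1; the last term is arctan(1/(n^2 + n + 1)).
    lhs = sum(atan(1 / (k*k + k + 1)) for k in range(n + 1))
    print(n, lhs, atan(n + 1))   # the two columns agree
```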
{ "language": "en", "url": "https://math.stackexchange.com/questions/762784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which $a$ is a matrix $A$ diagonalizable? Say I have a matrix $A_a$ with $$A_a:= \left(\begin{array}{ccc} 2 & a+1 & 0 \\ -a & -3a & -a \\ a & 3a+2 & a+2 \end{array}\right)$$ I was wondering if there was an easy way to determine for which $a$ the matrix would be diagonalizable. I tried to determine its eigenvalues first. $\det(A_a - \lambda I_3) = 0$ gave me a rather complex formula, from which point I don't know how to continue: $$0 = 7a \lambda - 6a - 4 \lambda + 4 \lambda^2 - 2a\lambda^2 - \lambda^3 - a^2\lambda + 2a^2$$
If you factor the characteristic polynomial, you find it is $$ (2a^2-6a)-(a^2-7a+4)\lambda+(4-2a)\lambda^2-\lambda^3= (2-\lambda)(\lambda^2+(2a-2)\lambda+a^2-3a) $$ If you know this polynomial has three distinct roots, then the matrix is diagonalizable. A root can be repeated only if either * *the discriminant of $f(\lambda)=\lambda^2+(2a-2)\lambda+a^2-3a$ is zero, or *$2$ is also a root of $f(\lambda)$, that is $f(2)=0$. The first case gives $$ 4(a-1)^2-4(a^2-3a)=0 $$ that is $$ a^2-2a+1-a^2+3a=0 $$ or $$ a=-1 $$ The second case gives $$ 4+2(2a-2)+a^2-3a=0 $$ or $$ 4+4a-4+a^2-3a=0 $$ that is, $$ a^2+a=0 $$ which has the roots $0$ and $-1$. So, when $a\ne0$ and $a\ne-1$, the matrix is diagonalizable. For $a=-1$ the characteristic polynomial is $(2-\lambda)^3$ and discussing the eigenspace is easy. For $a=0$ the eigenvalues are $2$ (double) and $0$ single, so you have to discuss the eigenspace relative to $2$.
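The case analysis can be confirmed with SymPy (a check of mine): for $a=-1$ the matrix is not diagonalizable, while for $a=0$ the eigenspace of the double eigenvalue $2$ turns out to be two-dimensional, so that borderline case is diagonalizable after all.

```python
import sympy as sp

a = sp.symbols('a')
A = sp.Matrix([[2, a + 1, 0],
               [-a, -3*a, -a],
               [a, 3*a + 2, a + 2]])

for val in (-1, 0, 1, 2):
    M = A.subs(a, val)
    print(val, M.eigenvals(), M.is_diagonalizable())
# a = -1: triple eigenvalue 2, not diagonalizable
# a =  0: eigenvalues {2: 2, 0: 1}, diagonalizable (eigenspace of 2 has dim 2)
# a =  1, 2: three distinct eigenvalues, diagonalizable
```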
{ "language": "en", "url": "https://math.stackexchange.com/questions/762859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $Z_{p^2} \oplus Z_{p^2}$ has exactly one subgroup isomorphic to $Z_p \oplus Z_p$ Show that $Z_{p^2} \oplus Z_{p^2}$ has exactly one subgroup isomorphic to $Z_p \oplus Z_p$. Attempt: $Z_p \oplus Z_p$ has $p^2-1$ elements of order $p$. Hence, all non-trivial elements of $Z_p \oplus Z_p$ are of order $p$. Number of cyclic subgroups of order $p$ = $p+1$. $Z_{p^2} \oplus Z_{p^2}$ has $p(p^3-3p+1)$ elements of order $p^2$ and $p^2-1$ elements of order $p$. Number of cyclic subgroups of order $p$ = $p+1$. Now, an example of a generator of order $p$ in $Z_{p^2} \oplus Z_{p^2}$ is $(1,p)$, and an example of a generator of order $p$ in $Z_p \oplus Z_p$ is $(1,0)$. How do I proceed next in these questions where an isomorphism is sought to be displayed? This means a mapping must be specified as well. Help will be appreciated.
Unless I am mistaken, the idea of the question is as follows: You know that $G:= \mathbb{Z}_{p^2}\oplus \mathbb{Z}_{p^2}$ has one subgroup $H$ isomorphic to $\mathbb{Z}_p\oplus \mathbb{Z}_p$. The question is asking you to prove that this $H$ is the unique such subgroup. So suppose $K$ is any other subgroup isomorphic to $\mathbb{Z}_p\oplus\mathbb{Z}_p$, then $K$ has $p^2-1$ elements of order $p$. But $G$ has only $p^2-1$ elements of order $p$, all of which are in $H$ - hence $K\subset H$ and vice-versa. This is all you need to show.
{ "language": "en", "url": "https://math.stackexchange.com/questions/762968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trigonometric problem: Elevation angle The elevation of the top of a tower $KT$ from a point $A$ is $27^\circ$. At another point $B$, $50$ meters nearer to the foot of the tower where $ABK$ is a straight line, the angle of elevation is $40^\circ$. Find the height of the tower $KT$.
Consider the following diagram: Looking at the outer (right-angled) triangle ($TAK$), and using trigonometry, we have: $$(1) \tan(27)=\frac{h}{50+x}.$$ Looking at the inner (right-angled) triangle ($TBK$), and using trigonometry, we have: $$(2) \tan(40)=\frac{h}{x}.$$ Now we've got a pair of simultaneous equations, $(1)$ and $(2)$. Solve them! Also bear in mind that $\tan(27)$ and $\tan(40)$ are just numbers which you can find on a calculator.
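Eliminating $x$ by hand gives $h=\dfrac{50\tan27^\circ\tan40^\circ}{\tan40^\circ-\tan27^\circ}\approx 64.9$ m; a numeric check (my addition):

```python
from math import tan, radians

t27, t40 = tan(radians(27)), tan(radians(40))
h = 50 * t27 * t40 / (t40 - t27)   # height of the tower KT
x = h / t40                        # distance from B to the foot K
print(round(h, 2), round(x, 2))    # h ~ 64.86, x ~ 77.29
print(h / (50 + x), t27)           # sanity check: both ~ tan(27 degrees)
```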
{ "language": "en", "url": "https://math.stackexchange.com/questions/763055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Show that $x_n \rightarrow 0$ Let $f:[0,1] \rightarrow \mathbb{R}$ be continuous, such that $f(0)=0$. We set $x_n=\int_0^1{f(x^n)}\,dx$. Show that $x_n \rightarrow 0$. The function $f$ is continuous on a closed interval $\Rightarrow$ $f$ is bounded $\Rightarrow \exists M>0: |f(x)| \leq M, \forall x \in [0,1]$. The function $f$ is continuous and $f(0)=0$ $\Rightarrow \lim_{x \rightarrow 0}{f(x)}=0 \Rightarrow \forall \epsilon >0 \text{ } \exists \delta >0: \forall x \in [0,1], |x-0| < \delta \Rightarrow |f(x)-f(0)|< \epsilon$. So $0 \leq x< \delta \Rightarrow |f(x)| < \epsilon$. How can I continue?
Hints using the notation and stuff you already did: for a given $\;\epsilon>0\;$ take $\;\delta\;$ as above and split the integral at $\;\delta^{1/n}$. For $\;0\le x\le\delta^{1/n}\;$ we have $\;x^n\le\delta\;$, so $\;|f(x^n)|<\epsilon\;$ there, while on the rest $\;|f(x^n)|\le M\;$. Hence: $$\left|\int\limits_0^1f(x^n)\,dx\right|\le\int\limits_0^{\delta^{1/n}}|f(x^n)|\,dx+\int\limits_{\delta^{1/n}}^1|f(x^n)|\,dx\le\epsilon+M\left(1-\delta^{1/n}\right)$$ and $\;1-\delta^{1/n}\to0\;$ as $\;n\to\infty\;$ (why?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/763158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Steps in the solution of Korteweg-deVries PDE In the following solution of the Korteweg-deVries PDE $$ u_t + 6uu_x + u_{xxx} = 0 \qquad (3.1) $$ I do not understand the second integration step and how they arrive at the expression for the differentials. The first integration is clear to me, but in the second step, why $u_{\xi}$ does not become $u$ but $1/2 u_{\xi}^2$? And how they came from the equation $$ \frac{1}{2}u_{\xi}^2 = -u^3 + \frac{1}{2}cu^2 + c_2 $$ to $$ d\xi = \frac{du}{u\sqrt{c-2u}} $$ Btw I always find these calculations with differentials a little bit intimidating.
Before the second integration, the whole equation is multiplied by $u_\xi$ (a standard trick, but perhaps it should have been explained in the text), and then the chain rule is used backwards. And if $c_2=0$, then $$ \frac12 \left( \frac{du}{d\xi} \right)^2 = -u^3 + \frac12 c u^2 = \frac12 u^2 (c-u) , $$ so $$ \frac{du}{d\xi} = \pm\sqrt{u^2 (c-u)} . $$ At this point, the author is somewhat unclear/lazy/sloppy when it comes to explaining why only the case $\frac{du}{d\xi} = u \sqrt{c-u}$ is considered, but if we accept that, then the rest is just the standard method for integrating a separable ODE. (The computation with differentials can be seen as just a mnemonic; it has been discussed in other questions on this site.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/763225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding roots of a cubic equation Find all roots of the following polynomial: $$x^3 + x^2 + 1$$
You can use Cardano's formulas, but not directly, because the form of the equation to which Cardano's formulas apply is: $$x^3+px+q=0$$ If the equation is of the form $$x^3+ax^2+bx+c=0$$ (and your equation is of this form), then by means of the substitution $$x+\frac{a}{3}=y$$ it acquires the form $$y^3+\alpha y +\delta=0$$ where $$\alpha=-3\left(\frac{a}{3}\right)^2+b;\quad \beta=-\left(\frac{a}{3}\right)^3+c;\quad \delta=-\alpha\frac{a}{3}+\beta$$ That is, you should solve the equation $$27y^3-9y+29=0$$ using Cardano's formulas.
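A numeric cross-check (mine): the original cubic has its single real root near $x\approx-1.4656$, and shifting by $1/3$ should carry the roots of $x^3+x^2+1$ onto those of $27y^3-9y+29$:

```python
import numpy as np

roots_x = np.roots([1, 1, 0, 1])       # x^3 + x^2 + 1
roots_y = np.roots([27, 0, -9, 29])    # 27 y^3 - 9 y + 29
print(np.sort_complex(roots_x + 1/3))  # shifted x-roots ...
print(np.sort_complex(roots_y))        # ... match the y-roots up to rounding
```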
{ "language": "en", "url": "https://math.stackexchange.com/questions/763299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Euclidean Algorithm for Modular Inverse, with negative numbers I might be on to something quite simple which I'm failing to see, while calculating modular inverses. For example, calculating 7x = 5 (mod 12) Which is the same as saying: 7x - 5 = 12k Which becomes: 7x - 12k = 5 And then I proceed using Euclidean Algorithm for x,k. I get to -25 and 15 respectively. However, I need the x to be positive to get the inverse I'm looking for. How can I get a positive modular inverse? Thanks in advance!
In a Bezout identity $$ a⋅x+b⋅y=c $$ you can exchange multiples of $a⋅b$, or even of $\operatorname{lcm}(a,b)=a'\cdot b=a\cdot b'$ where $a'=a/\gcd(a,b)$ and $b'=b/\gcd(a,b)$, between the terms on the left, so that $$ a⋅(x-k⋅b')+b⋅(y+k⋅a')=c $$ is also a correct identity. This slightly extends the reasoning on modular equivalences in the comment of Bill Dubuque.
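For the congruence in the question, a small extended-Euclid routine (my sketch) produces the Bezout pair and then shifts $x$ by a multiple of $12$ into $\{0,\dots,11\}$, exactly as described above:

```python
def egcd(a, b):
    # Return (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

g, x0, _ = egcd(7, 12)       # 7*(-5) + 12*3 = 1
x = (5 * x0) % 12            # scale to 7x = 5 (mod 12) and reduce: -25 -> 11
print(x, (7 * x) % 12)       # 11 5
```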
{ "language": "en", "url": "https://math.stackexchange.com/questions/763477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Doubt on proof of Implicit function theorem On The second part of the proof, where it's stated that V is open as it is the inverse image of the open set $V_0$ under the continuous mapping $y \rightarrow (0, y)$. Let $\pi$ be this continuous mapping. Then, $\forall _{U_{open\text{ in }\mathbb{R}^n\times\mathbb{R}^p}} \pi^{-1}(U)$ is open in/relative to the domain of $\pi$, since $\forall_\epsilon \exists_\delta Dom(\pi)\cap B(v_1;\delta)\subset \pi^{-1}(B(v_0;\epsilon))$ with $\lim_{y\rightarrow v_1} \pi (y) = \pi(v_0)$. In this situation, is $Dom(\pi)=V$ or $Dom(\pi)=V_1$, where $V$ and $V_1$ are as defined in the image? If it's the first, then I do not understand how $V= \pi^{-1} (V_0)$ is open in/relative to $\mathbb{R}^p$, even if $V_0$ is open in $\mathbb{R}^n\times\mathbb{R}^p$... If it's the second possibility, then I understand that $\pi^{-1} (V_0)$ is open in $V_1$, and since $V_1$ is open in $\mathbb{R}^p$, $\pi^{-1} (V_0)$ is also open in $\mathbb{R}^p$. In this last possibility is that then $\pi^{-1} (V_0)\neq V$... Then how do we prove that $V$ is open in $\mathbb{R}^p$ ?
So, I think I now understand what's happening. $Dom(\pi)=V_1$ and yet we still have $\pi^{-1} (V_0)=V$ since, $V=\{y\in V_1|(0,y)\in V_0\}$ and $\pi(y)=(0,y)$. Silly me!
{ "language": "en", "url": "https://math.stackexchange.com/questions/763553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Homeomorphism Compact Subsets Are there compact subsets $A,B \subset \mathbb{R^2}$ with $A$ not homeomorphic to $B$ but $A \times [0,1]$ homeomorphic to $B \times [0,1]$?
Yes. Consider the two sets below. Edit: Jon, let me explain in more detail why the two sets (call them $A$ and $B$, respectively) are not homeomorphic. I find in practice that showing two sets to not be homeomorphic is a bit cumbersome. I didn't fill out all of the details for that reason, leaving them to you; let me know if you would like more explanation for anything. Suppose, seeking a contradiction, that $h:A\to B$ is a homeomorphism. Let $\alpha:[0,1]\to B$ be a path between the two points where the crosses intersect the edge of the square and let $\beta:[0,1]\to B$ be a path between the two points where the sticks intersect the edge. Choose $\alpha$ and $\beta$ so that their images do not intersect, that is, so that $\alpha([0,1])\cap\beta([0,1])=\varnothing$. Now consider the paths $\alpha'=h^{-1}\circ\alpha:[0,1]\to A$ and $\beta'=h^{-1}\circ\beta:[0,1]\to A$. It is still true that $\alpha'$ is a path between the intersections of the crosses and the squares, and that $\beta$ is a path between the intersections of the sticks and the squares (why?). But the images of $\alpha'$ and $\beta'$ must intersect (why?), and that is a contradiction. As for your second question -- why $A'\times[0,1]$ and $B'\times[0,1]$ are not homeomorphic -- let me give you some intuition. The sets $A\times[0,1]$ and $B\times[0,1]$ are homeomorphic because you can move the "branches" across the top of the cube contained in $A\times[0,1]$. But the space $A'\times[0,1]$ is $A\times[0,1]$ with the interior, top, and bottom of the cube removed -- there is no more "top" to move the branches across, and a homeomorphism won't work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/763663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How many different messages can be transmitted in n microseconds using three different signals... How many different messages can be transmitted in n microseconds using three different signals if one signal requires 1 microsecond for transmittal, the other two signals require 2 microseconds each for transmittal, and a signal in a message is followed immediately by the next signal? I initially got it wrong because I put as the initial condition: $a_0=0, \space a_1=1$ I found this solution online: Why is the initial condition $a_2=3$, and not $a_2=2$? It says the other two signals require $2$ microseconds, so I believe $a_2=2$ because in $2$ microseconds we can only send $2$ signals.
I found an online solution showing $a_n = a_{n-1}+a_{n-2}$, because we have two choices here: either we send first the signal which takes 1 microsecond, or we send first another signal which takes 2 microseconds; either of them can be sent first. So I think the value of $a_1 = 1$, because in the first microsecond we can send the signal which takes 1 microsecond, and $a_2 = 1$, because in 2 microseconds we cannot send two signals that take 1 microsecond each (the question makes that clear), so in 2 microseconds we can send only one signal, the one which takes 2 microseconds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/763772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integrating the product of Poisson and exponential pdf So I'll spare the background as to why, but I'm trying to integrate the following: $$\int_0^{\infty} \frac{e^{-(\lambda+\mu)t}(\lambda t)^n}{n!} dt$$ If you parameterize a Poisson w/ $\lambda$ and an exponential w/ $\mu$ and multiply their pdf's, you get the above. I just can't seem to do the integration by parts. Is this a two step IBP? Is there an easier way to solve than actually integrating (product of random variables, etc.)?
You can use the Laplace Transform. The transform of $t^n$ is $$\frac{n!}{s^{n+1}}$$ So you get, after some algebra, that the quantity must equal $$\lambda^n {n!\over n!}\frac{1}{(\lambda +\mu)^{n+1}}=\frac{\lambda^n}{(\lambda+\mu)^{n+1}}$$ Can someone check this, please? And evidently you've made a mistake somewhere, because that is clearly not a probability function.
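The algebra checks out; SymPy confirms $\int_0^\infty e^{-(\lambda+\mu)t}(\lambda t)^n/n!\,dt=\lambda^n/(\lambda+\mu)^{n+1}$ for small $n$ (my check):

```python
import sympy as sp

t, lam, mu = sp.symbols('t lambda mu', positive=True)
for n in range(4):
    I = sp.integrate(sp.exp(-(lam + mu) * t) * (lam * t)**n / sp.factorial(n),
                     (t, 0, sp.oo))
    print(n, sp.simplify(I - lam**n / (lam + mu)**(n + 1)))  # prints 0 each time
```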
{ "language": "en", "url": "https://math.stackexchange.com/questions/763851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove that $r^n/n!$ converges where $n\ge r$ The answer is in the title of the question. I need to show it converges to 0, where $r>0$. I am sorry if this is a bad question; I'm having trouble explaining it. So essentially this: does $\lim_{n\to \infty}\frac{r^n}{n!}=0$?
I assume that you want to prove: $$\lim_n\displaystyle\frac{r^n}{n!}=0 $$ and $r>0$ is fixed. Let $N$ be an integer such that $N> r$. Then for $n>N$ the following holds: $$\displaystyle\frac{r^n}{n!}=\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}\displaystyle\frac{r}{N}\cdots\displaystyle\frac{r}{n}<\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}\left(\displaystyle\frac{r}{N}\right)^{n-N} $$ You have that $\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}$ is a constant and $\displaystyle\frac{r}{N}<1$. Try to conclude.
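The factorial eventually dominates; a quick table for $r=10$ (my addition) makes the conclusion vivid:

```python
from math import factorial

r = 10
for n in (10, 20, 40, 80, 160):
    print(n, r**n / factorial(n))
# ~2755.7, ~41.1, ~1.2e-08, ~1.4e-39, ~2e-125: the terms collapse to 0
```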
{ "language": "en", "url": "https://math.stackexchange.com/questions/763919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that if the prime $p$ divides $|G|$, then $|X|$ is divisible by $p$. Question : Let $p$ be a prime number that divides the order of the finite group $G$. Let $X$ = $\bigcup_{P \in Syl_p(G)}P$. Show that $|X|$ is divisible by $p$.
Lemma Let $G$ be a finite group and $p$ be a prime divding $|G|$. Let $H$ be a $p$-subgroup of $G$ and $P \in Syl_p(G)$. Then $H \cap C_G(P)=H\cap Z(P)$. Proof It is clear that $H \cap Z(P) \subseteq C_H(P)=H \cap C_G(P)$. Conversely, observe that $C_H(P)$ is a $p$-subgroup (it is a subgroup of $H$!) and it normalizes (even centralizes) $P$. So $C_H(P)P$ is a $p$-subgroup containing $P$, and since $P$ is Sylow, this can only be the case if $C_H(P)P=P$, that is $C_H(P) \subseteq P$. So $C_H(P) \subseteq C_G(P) \cap P=C_P(P)=Z(P)$ and of course $C_H(P) \subseteq H$. Proposition Let $G$ be a finite group and $p$ be a prime divding $|G|$. Let $X=\bigcup_{P \in Syl_p(G)}P$. Then $|X| \equiv 0$ mod $p$. Proof Let $S \in Syl_p(G)$ and let $S$ act on $X$ by conjugation. Let $Y=\{x \in X: s^{-1}xs=x$ for all $s \in S\}$, the set of fixed points under the action. By the Orbit-Stabilizer Theorem and the fact that $S$ is a $p$-group, it is evident that $|X| \equiv |Y|$ mod $p$. Let us analyze the set $Y$ by applying the Lemma. $Y$ is the set of elements of $X$ that centralize $S$: $$Y=C_X(S)= X \cap C_G(S) = \bigcup _{P \in Syl_p(G)}(P \cap C_G(S))= \bigcup _{P \in Syl_p(G)}(P \cap Z(S)) = X \cap Z(S) \subseteq Z(S).$$But obviously $Z(S) \subseteq Y$, and we conclude $Y=Z(S)$. Since $S$ is a non-trivial $p$-group, $Z(S)$ is non-trivial, in particular $|Y|=|Z(S)| \equiv 0$ mod $p$, so $p | |X|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
A factorization problem involving Fibonacci and Lucas Polynomials Consider a sequence of polynomials $\{w_n(x)|\, n\geq 0\}$ which are defined recursively by $w_n(x)=xw_{n-1}(x)+w_{n-2}(x)$. With $w_0(x)=0$ and $w_1(x)=1$, one gets the so-called Fibonacci polynomials $w_n(x)=F_n(x)$. With $w_0(x)=2$ and $w_1(x)=x$, one gets the Lucas polynomials $w_n(x)=L_n(x)$. While working on another problem, I noticed that for any odd $n$ the polynomial $F_n(z)^2-L_{2n}(z)+2$ appears to factor as $\phi_n(z)\phi_n(-z)$ for some polynomial $\phi_n(z)$ with integer coefficients. For example, when $n=1$ we have \begin{align*} F_1(z)^2-L_{2}(z)+2&=1-(z^2+2)+2\\&=(1-z)(1+z)\end{align*} and for $n=3$ we have \begin{align*}F_3(z)^2-L_{6}(z)+2&=(z^2+1)^2-(z^6+6z^4+ 9 z^2 +2 )+2\\ &=(1+3z+z^2+z^3)(1-3z+z^2-z^3) \end{align*} Does anyone see how to prove this factorization property? I have no idea how to tag this question. Suggestions are appreciated.
The key observation is that, for $n$ odd, $L_{2n}(x) = L_{n}(x)^2 + 2$; this is straightforward to show by induction, as a subcase of the more general identity $L_{m+n}(x) = L_{m}(x)L_{n}(x) + (-1)^{m-n}L_{m-n}(x)$. Now, your polynomial $F_{n}(z)^2 - L_{2n}(z) + 2$ reduces to $F_{n}(z)^2 - L_{n}(z)^2 = (F_{n}(z) - L_{n}(z))(F_{n}(z) + L_{n}(z))$. Finally, since, for odd $n$, $F_{n}(z)$ is even and $L_n(z)$ is odd (again this is easy to show via induction), it follows that if you let $G_{n}(z) = F_{n}(z) - L_{n}(z)$, then your polynomial is simply $G_{n}(z)G_{n}(-z)$.
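A SymPy verification of the whole chain for small odd $n$ (my sketch; the recurrence generates the polynomials directly):

```python
import sympy as sp

z = sp.symbols('z')

def w(n, w0, w1):
    # Polynomials defined by w_n = z*w_{n-1} + w_{n-2}.
    a, b = sp.sympify(w0), sp.sympify(w1)
    for _ in range(n):
        a, b = b, sp.expand(z * b + a)
    return a

F = lambda n: w(n, 0, 1)   # Fibonacci polynomials
L = lambda n: w(n, 2, z)   # Lucas polynomials

for n in (1, 3, 5, 7):
    G = sp.expand(F(n) - L(n))
    diff = sp.expand(F(n)**2 - L(2*n) + 2) - sp.expand(G * G.subs(z, -z))
    print(n, sp.simplify(diff))   # 0 for each odd n
```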
{ "language": "en", "url": "https://math.stackexchange.com/questions/764119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Caccioppoli inequality Assume we have established the following version of Caccioppoli inequality $$\int |\nabla u|^2 \psi^2 dA\leq C \int u^2 |\nabla \psi| ^2 dA$$ for $C^2(\mathbb C)$- smooth functions $u\geq 0$ with $\Delta u\geq 0$, and $\psi\in C_c^\infty (\mathbb C)$ (compactly supported, smooth) test functions. Is there a way to upgrade this inequality, so that it holds for $\psi \in C_c(\mathbb C)$ (continuous, compactly supported), such that $\nabla \psi$ exists almost everywhere, and it is bounded, and supported on a finite measure set? The reason is that I want to use a bump function $\psi$ such that $\psi=1$ on a disk $D(0,a)$, $\psi=0$ outside $D(0,b)$ ($b>a$), but $\nabla \psi$ does not exist on $|z|=a,|z|=b$.
$\nabla \psi$ exists almost everywhere, and it is bounded, and supported on a finite measure set? As written: not enough. On the right, $|\nabla \psi|^2$ must be the weak derivative of $\psi$; nothing short of it can control the oscillation of $\psi$. Bounded pointwise a.e. derivative need not be weak. But, I see that the function $\psi$ you want to use is Lipschitz. Then everything is fine. As a matter of fact, $\psi\in W^{1,2}_0(\mathbb C)$ is enough. Just note that $C^\infty_c$ is norm-dense in $W^{1,2}_0(\mathbb C)$, and both sides of the Cacciopoli inequality depend continuously on $\psi$ with respect to the $W^{1,2}$ norm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\displaystyle\Big(1-\frac{t}{n}\Big)^n$ is strictly increasing for $n>N$ and $t>0$ Show that $\exists N\in\mathbb N$ such that, $\displaystyle\Big(1-\frac{t}{n}\Big)^n$ is strictly increasing for $n>N$ $(n\in\mathbb N, t>0)$ Bernoulli Inequality didn't help me I did; $\displaystyle\frac{\Big(1-\frac{t}{n+1}\Big)^{n+1}}{\Big(1-\frac{t}{n}\Big)^n}=\Big(1+\frac{t}{(n+1)(n-t)}\Big)^n\Big(1-\frac{t}{n+1}\Big)$ $\displaystyle\Big(1+\frac{t}{(n+1)(n-t)}\Big)^n\ge\Big(1+\frac{nt}{(n+1)(n-t)}\Big)$$\quad$Bernoulli-Ineq. but $\displaystyle\Big(1+\frac{nt}{(n+1)(n-t)}\Big)\Big(1-\frac{t}{n+1}\Big)=1+\underbrace{\frac{nt}{(n+1)(n-t)}-\frac{t}{n+1}-\frac{nt^2}{(n+1)^2(n-t)}}_{\text{doesn't seem to be positive}}$ So it didn't work, do you have any ideas, thanks in advance.
As long as $n\gt t$, apply Bernoulli: $$ \begin{align} \frac{\left(1-\frac t{n+1}\right)^{n+1}}{\left(1-\frac tn\right)^n} &=\left(1+\frac t{(n+1)(n-t)}\right)^{n+1}\left(1-\frac tn\right)\\ &\ge\left(1+\frac t{n-t}\right)\left(1-\frac tn\right)\\[9pt] &=1 \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/764312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence of $\sum^\infty_{n=1}\frac {\sqrt[m]{n!}}{\sqrt[k]{(2n)!}}$ Does the following series converges ? $$\displaystyle\sum^\infty_{n=1}\frac {\sqrt[m]{n!}}{\sqrt[k]{(2n)!}} \ \text{for} \ \ k,m\in \mathbb N$$ I tried the ratio test: $ \displaystyle\lim_{n\to\infty}\frac {\sqrt[m]{(n+1)!}}{\sqrt[k]{(2n+2)!}}\cdot \frac {\sqrt[k]{(2n)!}}{\sqrt[m]{n!}} = ... =\lim_{n\to\infty}\frac {(n+1)^{\large\frac{k-m}{mk}}}{(2(2n+1))^{\large\frac1 k}}$ Now I should check for cases with $m,k$ where the numerator is larger than the denominator and vice versa and when they're equal but it doesn't seem right... Note: I can't use integration or Stirling approximation, nor Taylor.
If you look at the ratio you have and rewrite it slightly, you get $$n^{\frac{1}{m} - \frac{2}{k}}\frac{\left(1+\frac{1}{n}\right)^{(k-m)/(mk)}}{2^{2/k}\left(1+\frac{1}{2n}\right)^{1/k}}.$$ The fraction converges to $$\frac{1}{2^{2/k}} < 1,$$ so it depends on the behaviour of $n^{1/m - 2/k}$. If $\frac{1}{m} > \frac{2}{k}$, the quotient tends to $+\infty$, if $\frac{1}{m} < \frac{2}{k}$, it tends to $0$, and in the case of equality, it tends to $2^{-2/k} \in (0,1)$. So by the ratio test, the series converges if and only if $k \leqslant 2m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A convex function has a lower bound? Suppose that $f=f(x)$ is strictly convex for $x\in\mathbb{R}$, i.e. there exists $\epsilon>0$ such that $f''(x)\geq\epsilon>0$ for $x\in\mathbb{R}$. Does there exist $\delta>0$ such that $f(x)\geq \delta$ for $x\in\mathbb{R}$?
A function satisfying the condition given in this question is called "strongly convex". There is a more general definition of strong convexity which applies to functions that may not be differentiable: A function $f:\mathbb R^N \to \bar{\mathbb R}$ is strongly convex (with parameter $\epsilon > 0$) if and only if the function $h(x) = f(x) - \frac{\epsilon}{2} \|x\|_2^2$ is convex. It would be nice to give a version of this result that doesn't require $f$ to be differentiable. Assume that $f:\mathbb R^N \to \mathbb R \cup \{\infty\}$ is strongly convex (with parameter $\epsilon$) but not necessarily differentiable. The function $h(x) = f(x) - \frac{\epsilon}{2} \|x\|_2^2$ is convex and never equal to $-\infty$, therefore $h$ has an affine minorant: there exists $m \in \mathbb R^N$ and $b \in \mathbb R$ such that $h(x) \geq \langle m, x \rangle + b$ for all $x \in \mathbb R^N$. It follows that \begin{equation} f(x) \geq \frac{\epsilon}{2} \|x\|_2^2 + \langle m, x \rangle + b \end{equation} for all $x \in \mathbb R^N$. The quadratic function on the right has a lower bound, so $f$ has a lower bound.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Help understanding Recursive algorithm question We have a function that is defined recursively by $f(0)=f_0$, $f(1)=f_1$ and $f(n+2) = f(n)+f(n+1)$ for $n\geq0$ For $n\geq0$, let $c(n)$ be the total number of additions for calculating $f(n)$ using $f_0$ and $f_1 $ as input with $c(0) = 0$ and $c(1) = 0$. For $n \geq 2$, express $c(n)$ using $c(n-1) $ and $c(n-2)$ Determine if $c(n)\geq2^{(n-2)/2}$ for $n\geq2$ and prove your answer. I'm lost as to what to do with this question.
In reality, only $n-1$ additions are needed. The only formula we can use is $f(n+2) = f(n+1) + f(n)$ for $n \ge 0$. If $f_0$ and $f_1$ are given, the only value this formula allows us to calculate is $f_2 = f_1 + f_0$ (applying the formula with $n = 0$). Now, with $f_2$ available as well, the only value the formula allows us to calculate is $f_3 = f_2 + f_1$ (applying the formula with $n = 1$), and so on. I'd express the number of additions as $c(n) = c(n-1) + 1$, or, if you insist on using $c(n-2)$ in the formula, then $c(n) = c(n-1) + 1 + 0 \cdot c(n-2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Approximate $\sqrt{e}$ by hand I have seen this question many times as an example of provoking creativity. I wonder how many ways there are to approximate $\sqrt{e}$ by hand as accurately as possible. The obvious way I can think of is to use Taylor expansion. Thanks
And here is another answer. There is a known continued fraction expansion for $e^{1/n}$. Continued fraction sequences converge quickly (although with so many 1s, this particular continued fraction converges on the slower end of things). The downside is that you can't use the $n$th convergent to quickly find the $n+1$st convergent, so you have to make a choice right away how deep to go. As @spin notes in the comments, you can refine your convergent using the previous two convergents and the next integer in the continued fraction expression. $$e^{1/2}=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{9+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{13+\cfrac{1}{1+\cfrac{1}{1+\cdots}}}}}}}}}}}}$$
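A short script (mine) that builds convergents from the visible pattern (after the leading $1$, every third partial quotient is $4k+1$ and the rest are $1$) and compares with $\sqrt{e}$:

```python
from fractions import Fraction
from math import e, sqrt

def cf_terms(count):
    # e^(1/2) = [1; 1, 1, 1, 5, 1, 1, 9, 1, 1, 13, ...]
    terms = [1]
    for i in range(1, count):
        k, r = divmod(i - 1, 3)
        terms.append(4 * k + 1 if r == 0 else 1)
    return terms

def convergent(terms):
    # Fold the continued fraction from the innermost term outward.
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x

c = convergent(cf_terms(12))
print(float(c), sqrt(e))   # agree to about 8-9 decimal places
```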
{ "language": "en", "url": "https://math.stackexchange.com/questions/764604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 11, "answer_id": 3 }
Limit of $f'(x) e^{-f(x)}$ Let $f$ be a real function satisfying $f''\geq C>0$, where $C$ is a constant. Do we have: $\lim_{x\to +\infty}f'(x) e^{-f(x)}=0$ ?
Hint: transform the limit to an indeterminate form such as $[\frac{0}{0}]$ or $[\frac{\infty}{\infty}]$, and apply L'Hopital's rule: as you mentioned in a comment, $\lim_{x\to +\infty}f'(x)=+\infty$ and $\lim_{x\to +\infty}f(x)=+\infty$, so $\lim_{x\to +\infty}f'(x) e^{-f(x)}$ would be an indeterminate form of type $+\infty\cdot0$. You'd better rewrite the limit in order to use L'Hopital's theorem: $$\lim_{x\to +\infty}f'(x) e^{-f(x)}=\lim_{x\to +\infty}\frac{e^{-f(x)}}{\frac{1}{f'(x)}}=\lim_{x\to +\infty}\frac{-f'(x)e^{-f(x)}}{\frac{-f''(x)}{f'(x)^2}}=...$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/764666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Partial Derivatives using ChainRule Can any one please explain the second step:- Step1: $$\frac{\partial }{\partial x}\left[(1-x^2)\frac{\partial u}{\partial x}\right]+\frac{\partial }{\partial y}\left[y^2\frac{\partial u}{\partial y}\right]=0$$ Step2: $$L.H.S.=-2x\frac{\partial u}{\partial x}+(1-x^2)\frac{\partial ^2u}{\partial x^2}+2y\frac{\partial u}{\partial y}+y^2\frac{\partial ^2u}{\partial y^2}$$
Are you sure about the $+1$ after step 2? I think that the $1$ disappears after differentiation. If you differentiate $-x^2\frac{\partial U}{\partial x}$ then you have to use the product rule: $$u=-x^2$$ $$v=\frac{\partial U}{\partial x}$$ $$(u \cdot v)'=u'\cdot v+u\cdot v'$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/764765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Trace norm of Hermitian matrix Let $A\in L(H)$ be some Hermitian matrix, where $H$ is some finite dimensional Hilbert space. I want to show $$\left\|A\right\|_{tr} = \max_{U\in U(H)}|\text{tr}(UA)| \ \ \ (*)$$ where $U$ is unitary, and $\left\|\cdot \right\|_{tr}$ is the trace norm, which is given by $\left\|A\right\|_{tr}= \text{tr}|A|$ where $|A|:=\sqrt{A^*A}$ is the positive-semidefinite root of $A^*A$. It appears that for Hermitian $A$ we have $\left\|A\right\|_{tr} = \sum_{i} |\lambda_i|$ (where $\lambda_i$ is an eigenvalue of $A$). To show (*) I can show '$\geq$'. Let $A = \sum_i \lambda_i \left|i\right\rangle \left\langle i \right|$ be its spectral decomposition; then $$|\text{tr}(UA)| = \Big|\sum_i\lambda_i \text{tr}(U\left|i\right\rangle \left\langle i \right|)\Big| =\Big|\sum_i\lambda_i \left\langle i\right| U \left|i\right\rangle \Big| \leq \sum_{i} |\lambda_i| $$ Is this right? By the same reasoning it seems to me impossible that '$\leq$' holds. A unitary $U$ maps an orthonormal basis $\{\left|i\right\rangle\}_{i\in I}$ into itself. Thus $\left\langle i\right| U \left|i\right\rangle$ seems to me to be either 1 or 0. Where do I go wrong here? And how do I approach '$\leq$'?
$\langle i|U|i\rangle$ are the diagonal entries of $U$ in your orthogonal basis. They could certainly be different from $0$ and $1$, but you are sure that they are less than $1$ in absolute value: $$ |\langle i|U|i\rangle|\leq\|U\|\,\|i\rangle\|^2=\|U\|=1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/764845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equi integrability and weak convergence of measures Let $f_n$ be a sequence of functions in $L^1(K, m ; \mathbb{C})$, $K$ metric compact and $m$ a Radon measure on $K$. Assume that $\| f_n \|_1 \leq 1$. From what I understand, there is a subsequence converging weakly in $L^1$ if and only if the $f_n$ are equi integrable, which means : $$\forall \epsilon>0 \exists \delta >0 \forall A, m(A) \leq \delta \forall n \int_A |f_n| dm \leq \epsilon $$ Now remark that the measures $f_n m$ have bounded total variation, and therefore there exists weakly-* converging subsequences. Would I be correct to assume that there are no weakly converging subsequences in $L^1(m)$ if and only if all weak-* adherence values of $f_n m$ are singular with respect to $m$ ? I'm pretty sure of the implication "no weak $L^1$ adherence values $\Rightarrow$ all adherence values of $f_n m$ must be singular". The other implication seems clearly true but I have no argument. All help appreciated !
there is a subsequence converging weakly in $L^1$ if and only if the $f_n$ are equi integrable Not true as stated. One can interlace a convergent sequence with a non-equi-integrable sequence; the result will still have a convergent subsequence. What is true (and what you probably meant) is that equi-integrability is equivalent to precompactness in the weak topology. "no weak $L^1$ adherence values $\Rightarrow$ all adherence values of $f_n m$ must be singular No. Let $K=[0,1]$ and consider the sequence $$f_n = 2^n \chi_{[2^{-n-1},2^{-n}]}-2^{-n-1}\chi_{[2^{-n},2^{-n+1}]}$$ It satisfies $\|f_n\|_1=1$ and has no weakly convergent subsequence in $L^1$ (test the convergence with $$\phi = \sum_{n \text{ even }} \chi_{[2^{-n},2^{-n+1}]}$$ to see that it fails.) Yet, the measures $f_n \,dm$ converge to $0$ in the weak* topology of $C(K)^*$. On the other hand, it is true that having a subsequence that weakly converges in $L^1$ implies having a non-singular weak* cluster point. This is simply because weak $L^1$ convergence implies weak* convergence of associated measures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/764975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find a vector $t \in \{x,y,z\}$ with basis $\{u, v, w\}$ I don't know how to find a vector $\vec t$ that satisfies the condition: $\vec t \in \{x,y,z\}$ and $\vec t$ lies in the span of the basis $\{u, v, w\}$. The given vectors are: $$ \begin{array}{rcrrrrrl} u &=& [ & -3, & -1, & 1, & -2 & ] \\ v &=& [ & 2, & -3, & -2, & -2 & ] \\ w &=& [ & 1, & 1, & 1, & -1 & ] \\ x &=& [ & 7, & -4, & 1, & -5 & ] \\ y &=& [ & -10, & -2, & 4, & 2 & ] \\ z &=& [ & -7, & 2, & 5, & -3 & ] \\ \end{array} $$ I need to find the linear combination $t = \alpha u + \beta v + \gamma w $ that fulfills the condition. Thank you a lot!
I am not sure whether I have understood your problem correctly. If you have to say whether $x$, $y$ or $z$ is in $\operatorname{span}\{u,v,w\}$, you have to solve the three linear systems: * *$x = \alpha u + \beta v + \gamma w$ *$y = \alpha u + \beta v + \gamma w$ *$z = \alpha u + \beta v + \gamma w$ If there is a solution for any of the vectors $x$, $y$ or $z$, then that vector is in $\operatorname{span}\{u,v,w\}$ and the computed vector $(\alpha, \beta, \gamma)$ is its representation in the basis $\{u,v,w\}$. If a linear system has no solution, then the corresponding vector $x$, $y$ or $z$ is not in $\operatorname{span}\{u,v,w\}$.
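In practice one can let a linear-algebra library do the elimination. A sketch of mine (assuming NumPy; not part of the original answer) tests each vector for membership via least squares and reports the coordinates when the residual vanishes:

```python
import numpy as np

u = np.array([-3, -1, 1, -2]); v = np.array([2, -3, -2, -2]); w = np.array([1, 1, 1, -1])
M = np.column_stack([u, v, w]).astype(float)   # columns are the basis candidates

for name, t in [("x", [7, -4, 1, -5]), ("y", [-10, -2, 4, 2]), ("z", [-7, 2, 5, -3])]:
    t = np.array(t, dtype=float)
    coeffs, *_ = np.linalg.lstsq(M, t, rcond=None)
    if np.linalg.norm(M @ coeffs - t) < 1e-10:   # exact solution -> t is in the span
        print(f"{name} = {coeffs.round(6)} in basis (u, v, w)")
    else:
        print(f"{name} is not in span(u, v, w)")
```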
{ "language": "en", "url": "https://math.stackexchange.com/questions/765094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected value for games where you can replay? Let's just say there's a game where you roll one fair die. If you roll a 1 or 2, you pay 1. If you roll a 3 or 4, you win 2. If you roll a 5 or 6, you roll again until you get a 1, 2, 3, or 4. How much are you expected to win? I can't figure out how to think about this. Thanks.
Note that your expected gain given that you first rolled a $5$ or $6$ is the same as your expected gain initially… you just get to start over. Using linearity of expectation, then, you can write your expected gain as $$ E[G]=\sum_{i=1}^{6}E[G\;\vert\;X_1=i]\cdot P[X_1=i]=-\frac{1}{3}+\frac{2}{3}+\frac{1}{3}E[G]=\frac{1}{3}+\frac{1}{3}E[G], $$ and then solve to yield $E[G]=1/2$.
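The fixed-point argument can be sanity-checked by simulation. A minimal sketch of mine (payoffs as stated in the question: pay 1 on a 1-2, win 2 on a 3-4, reroll on 5-6):

```python
import random

def play(rng):
    while True:
        roll = rng.randint(1, 6)
        if roll <= 2:
            return -1          # pay 1
        if roll <= 4:
            return 2           # win 2
        # roll is 5 or 6: roll again

rng = random.Random(42)
n = 1_000_000
print(sum(play(rng) for _ in range(n)) / n)   # close to 0.5
```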
{ "language": "en", "url": "https://math.stackexchange.com/questions/765154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Finding the number of roots of a polynomial in the p-adic integers $\mathbb{Z}_{p}$ The problem is to find the number of roots of $x^3+25x^2+x-9 $ in $\mathbb{Z}_{p}$ for $p=2,3,5,7$. I read this is equivalent to having a root mod $p^{k}~\forall k\geq 1$. By Newton's lemma I can determine whether there is at least one root. Any suggestions on how to find the number of roots?
1) Using the Newton polygon method. 2) Hensel's lemma.
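Since the answer only names the tools, here is a brute-force sketch of mine (plain Python, not from the answer) that counts roots of $f(x)=x^3+25x^2+x-9$ modulo $p^k$ for growing $k$; a count that stabilizes suggests how many residue classes can lift to genuine roots in $\mathbb{Z}_p$, which Hensel's lemma makes rigorous at simple roots:

```python
def f(x):
    return x**3 + 25 * x**2 + x - 9

for p in (2, 3, 5, 7):
    counts = []
    for k in range(1, 7):
        m = p**k
        counts.append(sum(1 for x in range(m) if f(x) % m == 0))
    print(p, counts)   # number of roots mod p, p^2, ..., p^6
```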
{ "language": "en", "url": "https://math.stackexchange.com/questions/765251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What are the best sites to get caught up on Calculus? I'm going back to college this summer and will be taking engineering statistics and calculus-based physics. I dropped out of college about 4 years ago and took calculus 1-3 before leaving. I'm worried I have forgotten all of my calculus and won't be able to perform in my upcoming classes. What is the best way to get caught up, and what are some websites that will help me catch up? Thanks
I find all three of the sites mentioned great for different things: Khan Academy for developing some insight (if you have forgotten the overarching idea), PatrickJMT for lots of examples of how to actually do questions, and Paul's Online Notes are also great for reading.
{ "language": "en", "url": "https://math.stackexchange.com/questions/765344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Find the minimum of $(x(1+y)+y(1+z)+z(1+x))/\sqrt{xyz}$ over positive integers $x,y,z$ Let $x,y,z$ be positive integers. Find the least value of $$\frac{x(1+y)+y(1+z)+z(1+x)}{(xyz)^{\frac 12}}$$ I tried using the arithmetic-geometric mean inequality (it seems promising, as the denominator resembles the geometric mean of $x,y,z$). However, so far this didn't help my cause.
Notice $$\frac{x(1+y)+y(1+z)+z(1+x)}{(xyz)^{\frac 12}} = \frac{x + xy + y + yz + z + zx}{(xyz)^{1/2}} \geq \frac{ 6 (x^3y^3z^3)^{1/6}}{(xyz)^{1/2}} = 6$$ By the AM-GM inequality
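A quick numerical sanity check of mine (not part of the answer): random positive samples never dip below $6$, and equality holds at $x=y=z=1$, which are indeed positive integers:

```python
import random

def ratio(x, y, z):
    return (x * (1 + y) + y * (1 + z) + z * (1 + x)) / (x * y * z) ** 0.5

rng = random.Random(1)
samples = [ratio(rng.uniform(0.1, 10), rng.uniform(0.1, 10), rng.uniform(0.1, 10))
           for _ in range(100_000)]
print(min(samples))     # stays above 6
print(ratio(1, 1, 1))   # exactly 6.0, the AM-GM equality case
```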
{ "language": "en", "url": "https://math.stackexchange.com/questions/765423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Last digits, numbers Can anyone please help me? 1) Find the last digit of $7^{12345}$ 2) Find the last 2 digits of $3^{3^{2014}}$. Attempt: 1) By just listing the powers of $7$ we have $7^1 = 7$, $7^2=49$, $7^3=343$, $7^4 = 2401$, $7^5 = 16807$, $7^6 = 117649$, $\dots$ After the fourth power, the last digits repeat: the pattern of final digits is $7,9,3,1$. Then we can divide the exponent $12345$ by $4$, since this is the length of the repeating cycle. Now $12345$ divided by $4$ has remainder $1$, so $7^1 = 7$ gives the units digit of $7^{12345}$; the last digit is $7$. I know how to do it like this, but the problem does not say how to find the last digit, and I know it has something to do with Euler's theorem. For part 2) I don't know how to start. Can anyone please help me? Thank you for the help.
Let $$3^{2014}-1=2x$$ Your number is: $3^{2x+1}=3\cdot3^{2x}=3\cdot 9^x=3(10-1)^{x}=3(-1+10)^{x}$ Using the binomial theorem and neglecting powers of $10$ of order $10^2$ and higher (as we want only the last $2$ digits): $$3\left((-1)^{x}+10x(-1)^{x-1}\right)$$ Writing $3$ as $4-1$ in the expression for $2x$ shows that $2x$ is divisible by $4$. Hence $x$ is even, and the above becomes $$3(1-10x)=3(1-5\cdot2x)$$ I hope you can carry on from here. You will need the last two digits of $3^{2014}$, which you can find by writing it as $(10-1)^{1007}$.
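Python's three-argument pow makes this easy to confirm: it does modular exponentiation directly, and the inner exponent $3^{2014}$ (under a thousand digits) is no obstacle. A check of mine:

```python
# Last two digits of 3^(3^2014): direct modular exponentiation.
print(pow(3, 3**2014, 100))                # 83

# Same thing via the Carmichael reduction: 3^20 = 1 (mod 100),
# so only the exponent mod 20 matters.
print(pow(3, pow(3, 2014, 20), 100))       # 83 as well
```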
{ "language": "en", "url": "https://math.stackexchange.com/questions/765530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Beautiful Theorems and what constitutes as beautiful I often hear the phrase "mathematical beauty": that a proof or formula or theorem is beautiful. And I do agree; I was awestruck when I first saw Euler's formula, connecting 3 seemingly unrelated branches of mathematics in a single formula $e^{i\pi}=-1$. But beauty is a rather subjective term. When I was taught Linear Algebra the instructor introduced the Cayley-Hamilton theorem as beautiful, and I thought it was "nothing special". I'm interested in theorems that are considered beautiful, and why they are so. Just as an example of what I think is beautiful, last night a friend told me that the sum of the first $n$ odd numbers is equal to $n^2$. For example, if $n=3$ then $1+3+5 =9=3^2$; if $n=5$ then $1+3+5+7+9 = 25 =5^2$. Simplistic. Surprising. Elegant. I liked it a lot. I would be very much interested in learning more theorems / formulas like that.
If $M=M^2$ is a smooth compact $2$-dimensional Riemannian manifold with (smooth) boundary $\partial M$, $K$ denotes its Gauss curvature, $k_g$ the geodesic curvature of its boundary, and $\chi(M)$ the Euler characteristic, then the theorem of Gauss-Bonnet states that $$\int_M K dA + \int_{\partial M}k_g ds = 2\pi \chi(M)$$ (There are generalizations of this to higher dimensions. For me the beauty of this particular theorem originates from the fact that it is one of the early insights of mathematicians into the deep relationships between topological invariants and analytical quantities.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/765612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 6 }
Can the set $\mathcal{A}$ be written as a countable union of rare sets? $\mathcal{A}$ = the set of all finite sequences in $l_1$ (i.e., sequences with only finitely many nonzero terms). $l_1$: the space of sequences $x_n$ such that $\sum^\infty_1 |x_n|<\infty$. A set $A_n$ is rare if the interior of its closure is empty, $\operatorname{Int}\bar A_n=\emptyset$. Is it possible that $\mathcal{A}=\cup^\infty_{n=1} A_n$ (a countable union) with each $A_n$ rare?
Try $A_n$ = the set of all sequences of length at most $n$ (i.e., vanishing beyond the $n$th coordinate). It is clear that the union of the $A_n$ is $\mathcal A$. It can also be shown that $A_n$ is closed and has an empty interior: each $A_n$ is a proper closed subspace of $l_1$, and a proper closed subspace of a normed space always has empty interior.
{ "language": "en", "url": "https://math.stackexchange.com/questions/765672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the determinant differentiable? I was wondering: given an $n \times n$ square matrix, let $\det : \mathbf{R}^{n^2} \to \textbf{R}$ be the function taking the entries $\left(a_1,a_2,\ldots,a_{n^2}\right)$ of the matrix to its determinant. * *Is this function (the determinant) differentiable? *If so, is the derivative continuous? That is, is $d\left(\det\right)$ a continuous function? *Furthermore, if so, to what differentiability class does this $\det$ function belong? Thanks in advance.
Even if you did not know that the determinant is a polynomial in the entries of the matrix, you may note that, considered as a function of the columns (or rows) of the matrix, it is multilinear, hence $C^{\infty}$ as a function of the columns. Since the columns depend smoothly on the entries of the matrix, the determinant also depends smoothly on the entries.
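The derivative is even explicit: by Jacobi's formula, $\partial \det(A)/\partial a_{ij}$ is the $(i,j)$ cofactor, i.e. $d(\det)$ at $A$ is $\operatorname{adj}(A)^T$. A finite-difference sketch of mine (assuming NumPy) confirming this:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
adjugate = np.linalg.det(A) * np.linalg.inv(A)   # adj(A) = det(A) A^{-1} for invertible A

h = 1e-6
grad = np.zeros_like(A)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(A); E[i, j] = h
        grad[i, j] = (np.linalg.det(A + E) - np.linalg.det(A - E)) / (2 * h)

print(np.allclose(grad, adjugate.T, atol=1e-6))   # True: d(det)/dA = adj(A)^T
```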
{ "language": "en", "url": "https://math.stackexchange.com/questions/765757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 1 }
What can we say about the one point compactification of a Suslin tree? As a continuation to this question: The Alexandroff one-point compactification of $(X,\tau)$ is a space $X \cup \{a\}$ whose open sets are $\{ U : U \in \tau\} \cup \{ V \cup \{a\}: V \in \tau $ and $V^C$ is closed and compact in $ X \}$ A tree $T$ is a Suslin tree if: 1. The height of $T$ is $\omega_1$. 2. Every branch in $T$ is at most countable. 3. Every antichain in $T$ is at most countable. Let $X=T \cup \{q\}$ be the Alexandroff one-point compactification of a Suslin tree. I am trying to figure out whether $X$ is Fréchet-Urysohn, and whether ONE wins in $G_{np}(q,X)$. Any ideas or directions? Thank you!
From this answer we have that if $X = \{ \infty \} \cup T$ is the one-point compactification of an Aronszajn tree $T$ with the tree topology, then the sets of the form $$U ( s_1 , \ldots , s_n ) = \{ \infty \} \cup {\textstyle \bigcap_{i \leq n}} \{ x \in T : x \not\leq_T s_i \}$$ where $s_1 , \ldots , s_n \in T$ form a neighbourhood basis at $\infty$. The basic idea for show that $X$ is Fréchet-Urysohn is to notice first (and Henno Brandsma has) that for each $s \in T$ we have $\chi ( X , s ) \leq \aleph_0$, and so $X$ is "Fréchet-Urysohn at $s$." Given $A \subseteq T$, it is relatively easy to show that $\infty \in \overline{A}$ iff either * *$A$ includes an infinite family of pairwise incomparable nodes of $T$; or *$A$ includes an infinite chain with no upper bound in $T$. (If $U = U(s_1 ,\ldots , s_n )$ is a basic open neighbourhood of $\infty$ and $A$ satisfies either (1) or (2), then there must be points of the antichain/chain which are not below any $s_i$. If $A$ fails to satisfy either of these conditions, then $A$ is the union of finitely many chains, each of which has an upper bound in $T$, and so $U(s_1 , \ldots , s_n)$ — where the $s_i$ are such chosen upper bounds — is an open neighbourhood of $\infty$ disjoint from $A$.) In each of the above cases you basically do the obvious thing to construct a sequence in $A$ converging to $\infty$. (That there is no winning strategy for ONE in $G_\text{np} ( \infty , X )$ is in the linked answer.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/765854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$-5| 2+4x | = -32(x+3/4)- | x | + 1$ This was my attempt: $$-5| 2+4x | = -32\left(x+\frac34\right)- | x | + 1\\ \implies|2+4x|=\frac{-32x-24- | x | + 1}{-5}\\ \implies2+4x=\pm \frac{-32x-24- x + 1}{-5}\\ \implies4x=\pm \frac{-33x-23 }{-5}-2\\ \implies-20x=\pm (-33x-23)+10\\ \implies-20x= -33x-13\text{ or } -20x=33x+33\\ \implies13x= -13\text{ or } -53x=33\\ \implies x=-1\text{ or } x=-\frac{33}{53}$$ Neither of these answers works when checking. Using a graphing calculator I get an answer of $\frac{-11}{17}$, which does check, but I don't know how to get that answer by algebraic means.
Hint: There are more cases here than a single $\pm$ can capture. Each absolute value splits according to the sign of its own argument, and $|2+4x|$ changes behavior at $x=-\frac12$ while $|x|$ does so at $x=0$, so the two cannot be resolved with one simultaneous sign choice (that is exactly where your attempt went wrong). Split the real line at those two points, solve the resulting linear equation on each interval, and keep only the solutions that actually lie in the interval considered.
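If you want to check the case analysis mechanically, SymPy can carry the sign cases automatically once $x$ is declared real. A sketch of mine (not part of the hint):

```python
import sympy as sp

x = sp.symbols('x', real=True)
eq = sp.Eq(-5 * sp.Abs(2 + 4 * x), -32 * (x + sp.Rational(3, 4)) - sp.Abs(x) + 1)
print(sp.solve(eq, x))   # expected: [-11/17]
```

Manually, on the interval $x<-\frac12$ both absolute values flip sign, giving $10+20x=-31x-23$, i.e. $x=-\frac{11}{17}$, which lies in that interval; the other two intervals yield no valid solutions.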
{ "language": "en", "url": "https://math.stackexchange.com/questions/765951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that if $\dim X'<\infty$ then $\dim X<\infty$ I have to prove that if $\dim X'<\infty$ then $\dim X<\infty$, where $X$ is a normed vector space and $X'$ is the space of all continuous linear functionals on $X$. How can I prove this? I always try to figure out something by myself before posting here, but this time I have no idea how I can prove this.
Here's another approach. Again, we let $X'$ be the continuous dual. Well, if $\dim X' < \infty$ then $\dim X'' = \dim X' < \infty$. But then note $X \subset X''$ (using the canonical embedding) so $X$ is then finite dimensional!
{ "language": "en", "url": "https://math.stackexchange.com/questions/766089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Modeling a greatest integer function I'm trying to model a function that resembles a greatest integer function. The domain is from [0, $\infty$). The inputs from 0 to 1.5 (non-inclusive) need to be mapped to an output of 0, and 1.5 to $\infty$ mapped to 1. But, I'm trying to not use a piecewise function. Is it possible to accomplish this? Here's what I've tried: $$f(x)=\left\lfloor \frac{x}{1.5} \right\rfloor$$
Try this function: $$\frac{|x-1.5|}{2x-3}+\frac12$$ Although this is very artificial, it works for every $x \neq 1.5$; note that it is undefined at $x=1.5$ itself, where the denominator vanishes.
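A short check of mine: since $2x-3=2(x-1.5)$, the fraction evaluates to $-\frac12$ below $1.5$ and $+\frac12$ above it, so adding $\frac12$ gives $0$ and $1$ respectively:

```python
def f(x):
    return abs(x - 1.5) / (2 * x - 3) + 0.5   # undefined at x = 1.5 exactly

print([f(x) for x in (0.0, 1.0, 1.49)])    # ~[0.0, 0.0, 0.0]
print([f(x) for x in (1.51, 2.0, 100.0)])  # ~[1.0, 1.0, 1.0]
```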
{ "language": "en", "url": "https://math.stackexchange.com/questions/766289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the $\sum_{n=1}^{\infty} \frac{(2n+1)^{1/2}}{n^2}$ convergent or divergent? For this question I am not really sure which test to use to determine this. I was thinking the comparison or limit comparison test but it doesn't seem to be working. I was wondering what the steps are to figure this out, and if it is the comparison test, how would you use it? Any help is appreciated!
We have $$\frac{\sqrt{2n+1}}{n^2}\sim_\infty\sqrt2\frac{1}{n^{3/2}}$$ hence the given series is convergent by asymptotic comparison with a convergent Riemann series (the $p$-series with $p=3/2>1$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/766407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Examples of properties not preserved under homomorphism An isomorphism indicates that two structures are the same, using different names for the elements. Therefore it's obvious that every (algebraic) property of the first structure must be present in the second. However, homomorphisms only indicate that the two structures are "similar", so it's not quite as obvious that every property will be preserved. Yet all the properties I've ever seen are preserved under homomorphism: commutativity, cyclicality, solvability... What are some examples of properties of algebraic structures not preserved under homomorphism? Feel free to use any algebraic structures you like, but I'm particularly interested in your garden variety structures: group and rings, say.
An image of an algebraic object is equivalently a quotient in the most elementary cases. Taking a quotient is an identification process, so a general class of properties not preserved under images is the class of properties relating to uniqueness of solutions of equations. For instance, in any free abelian group a linear equation with a solution has only one solution, but in abelian groups with torsion there may be many. Similarly, rings of polynomials over a field admit factorization theorems to the effect that a polynomial of degree $n$ has no more than $n$ roots, whereas over finite rings a polynomial may have more roots than its degree, and there are nonzero polynomials that annihilate the entire ring.
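The last point is easy to witness computationally: in the quotient ring $\mathbb{Z}/8$ (an image of $\mathbb{Z}$), the degree-$2$ polynomial $x^2-1$ has four roots. A one-line brute-force check of mine:

```python
roots = [x for x in range(8) if (x * x - 1) % 8 == 0]
print(roots)   # [1, 3, 5, 7] -- four roots of a degree-2 polynomial
```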
{ "language": "en", "url": "https://math.stackexchange.com/questions/766523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 11, "answer_id": 1 }
Are there complex solutions for $z^3=\bar z$ I'm asked to solve $z^3=\bar z$. I got $z=0, 1, -1$. Are there any complex solutions $a+bi$ to this though?
Writing $z = re^{it}$, we have that $$r^3 e^{3it} = re^{-it}$$ Taking absolute values, we find that $r^3 = r$; since $r = |z| \geq 0$, this gives $r = 0$ or $r = 1$. In the latter case, we get that $$e^{3it} = e^{-it} \implies e^{4it} = 1$$ It follows that $4t$ is an integer multiple of $2\pi$, so there are corresponding complex solutions.
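Following this through (my own continuation of the answer's equations), $t$ is a multiple of $\pi/2$, so the full solution set is $\{0, 1, -1, i, -i\}$: your three real solutions plus the two purely imaginary ones. A numerical confirmation:

```python
candidates = [0, 1, -1, 1j, -1j]
for z in candidates:
    print(z, abs(z**3 - complex(z).conjugate()) < 1e-12)   # all True
```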
{ "language": "en", "url": "https://math.stackexchange.com/questions/766599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is this a valid method of finding magnitude of complex fraction If I have a complex fraction $\dfrac{a+bi}{c+di}$ and I want the magnitude, then will it be $\left|\dfrac{a+bi}{c+di}\right|=\dfrac{|a+bi|}{|c+di|}$? Scratch that ... I just found the answer on another page; however, I'm still unclear why it's true?
You can make use of the complex exponential (polar) form. $$\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}=\frac{\rho_1e^{\mathrm{i} \varphi_1}}{\rho_2e^{\mathrm{i} \varphi_2}}=\frac{\rho_1}{\rho_2}e^{\mathrm{i}(\varphi_1-\varphi_2)}$$ where $\rho_1=\sqrt{a^2+b^2}, \rho_2=\sqrt{c^2+d^2}$ are the magnitudes and $\varphi_1=\arg\{a+\mathrm{i} \ b\},\varphi_2=\arg\{c+\mathrm{i} \ d\}$ are the phases of $a+\mathrm{i} \ b$ and $c+\mathrm{i} \ d$ respectively. Then, since $\rho_1, \rho_2$ are real (and positive) and the absolute value of a complex exponential with real phase is $1$: $$\left| \dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right|=\left|\frac{\rho_1}{\rho_2}e^{\mathrm{i}(\varphi_1-\varphi_2)} \right|=\left|\frac{\rho_1}{\rho_2}\right|\left|e^{\mathrm{i}(\varphi_1-\varphi_2)} \right|=\left|\frac{\rho_1}{\rho_2}\right|=\frac{\left|\rho_1\right|}{\left|\rho_2\right|}=\frac{\left|a+\mathrm{i} \ b\right|}{\left|c+\mathrm{i} \ d\right|}.$$ Moreover, using the exponential form it is easy to show that $$\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{a+\mathrm{i} \ b\right\}-\arg\left\{c+\mathrm{i} \ d\right\}.$$ That is true, since $\arg\left\{\dfrac{a+\mathrm{i} \ b}{c+\mathrm{i} \ d}\right\}=\arg\left\{\frac{\rho_1}{\rho_2}e^{\mathrm{i}(\varphi_1-\varphi_2)}\right\}=\varphi_1-\varphi_2$.
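A two-line numeric sanity check of mine using Python's built-in complex type:

```python
z1, z2 = 3 + 4j, 1 - 2j
print(abs(z1 / z2), abs(z1) / abs(z2))   # both equal sqrt(5) = 2.2360679...
```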
{ "language": "en", "url": "https://math.stackexchange.com/questions/766841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
Probability, placing balls, covariance Can you please help me see where I went wrong? There are 10 balls, and each ball is to be placed in bin 1 or bin 2. Each ball is placed independently. Let $X$ be the number of balls in bin 1 and $Y$ be the number of balls in bin 2. Compute $\operatorname{Cov}(X,Y)$. My attempt: write $X=X_1+X_2+\dots+X_{10}$, where $X_i$ is the indicator of whether the $i$th ball is placed into bin 1 or not. Then $Y=10-X$, and $\operatorname{Cov}(X,Y)=E[XY]-E[X]E[Y]=E[X(10-X)]-E[X]E[10-X]=10E[X]-E[X^2]-E[X]\cdot(10-E[X])$. * *$E[X]=10\cdot E[X_1]=5$. $E[X^2]=E[(X_1+\dots+X_{10})(X_1+\dots+X_{10})] =E[X_1X_1+X_2X_2+\dots+X_{10}X_{10}]+E[X_1X_2+X_1X_3+X_1X_4+\dots+X_{10}X_9] =0+10\cdot 9\cdot\frac14=\frac{90}{4}$ So $\operatorname{Cov}(X,Y)=50-\frac{90}{4}-5\cdot(10-5)=50-\frac{90}{4}-25=2.5$ But the answer key said $-2.5$.
Since each ball can go into bin 1 or bin 2, mutually exclusively and exhaustively, it's a binomial distribution: $\operatorname{P}(X=x) = \dbinom{10}{x} \dfrac{1}{2^{10}} \\ \quad = \dfrac{10!}{x!(10-x)! 2^{10}}$ The expected value is thus: $\operatorname{E}[X] = \sum\limits_{x=0}^{10} x\cdot\operatorname{P}(X=x) \\ \quad = \sum\limits_{x=0}^{10} \dfrac{10!}{(x-1)!(10-x)! 2^{10}} \\ \quad = 5$ The expected squared value is: $\operatorname{E}[X^2] = \sum\limits_{x=0}^{10} x^2 \operatorname{P}(X=x) \\ \quad = \sum\limits_{x=0}^{10} \dfrac{10!x}{(x-1)!(10-x)! 2^{10}} \\ \quad = \dfrac{55}{2}$ Since $Y=10-X$ then: $\operatorname{Cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\cdot\operatorname{E}[Y] \\ \quad = \operatorname{E}[10X-X^2] - \operatorname{E}[X]\cdot\operatorname{E}[10-X] \\ \quad = \operatorname{E}[X]^2-\operatorname{E}[X^2] \\ \quad = 25-\dfrac{55}{2} \\ \quad =\boxed{ -2.5}$ Alternatively, using your approach now the error is clear Since the binning is exhaustive and mutually exclusive, then $Y=10-X$. $\operatorname{Cov}(X,Y) = E[XY]-E[X]E[Y] \\ \quad = E[X(10-X)]-E[X]E[10-X] \\ \quad = 10E[X]-E[X^2]-E[X]\cdot(10-E[X]) \\ \quad = E[X]^2 - E[X^2]$ We can write $X=X_1+X_2+\dotsc+X_{10} = \sum\limits_{i=1}^{10} X_i$, where the $X_i$ is the binary indicator for whether the $i^{th}$ ball is placed into bin-1 (or not). * *$E[X]=E[\sum\limits_{i=1}^{10}X_i]=10\cdot\frac{1}{2}=5$. *$E[X^2]=E[(\sum\limits_{i=1}^{10}X_i)(\sum\limits_{j=1}^{10}X_j)] \\ \quad = E[\sum\limits_{i=1}^{10}X_i^2]+E[\sum\limits_{i=1}^{10}\,\sum\limits_{j\neq i, j=1}^{10}\,X_iX_j] \\ \quad =\color{blue}{10\cdot\frac{1}{2}}+10\cdot9\cdot\frac{1}{4}=\frac{55}{2}$ So $\operatorname{Cov}(X,Y)=25-\frac{55}{2}=\boxed{-2.5}$
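A simulation sketch of mine (assuming NumPy; not part of the answer) agreeing with $\operatorname{Cov}(X,Y)=-2.5$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.binomial(10, 0.5, size=1_000_000)   # balls landing in bin 1
Y = 10 - X                                  # the rest land in bin 2
print(np.cov(X, Y)[0, 1])                   # close to -2.5
```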
{ "language": "en", "url": "https://math.stackexchange.com/questions/766922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $38^n+31$ is prime I was reading a question on one of the previous pages; in searching for a proof I stumbled across what seems like a contradiction. All I want is for someone to provide the missing link in my argument. The question: Find the least $n$ for which $38^n+31$ is prime. My attempt at a proof: If $38^n+31$ is composite, then there exists at least one prime $p$ such that $p|38^n+31$. Now $\gcd(p,38)=1$; otherwise $d=\gcd(p,38)=2$ or $19$ and $d|31$, a contradiction. Hence, by Fermat's Little Theorem, $38^{p-1} \equiv 1 \pmod p$, and for every positive integer $r$, $38^{r(p-1)} \equiv 1 \pmod p$. Hence, $38^{r(p-1)}+31 \equiv 32 \pmod p$, but $38^{r(p-1)}+31 \equiv 0 \pmod p$, because it's composite. It follows that $32 \equiv 0 \pmod p$, i.e. $p|32$, a contradiction; hence the above expression cannot be composite (but plugging in actual values for $n$ shows that it is indeed composite).
For $38^x+31$, it is useful to note that the small primes divide $38^x+1$ when $x$ is odd or twice an odd number (and since $31\equiv 1$ both mod $3$ and mod $5$, these primes then divide $38^x+31$ as well). So, e.g., $3 \mid 38^x+1 $ for odd $x$, and $5 \mid 38^x + 1$ for $x\equiv 2 \pmod 4$. So after removing the cases caught by 3 and 5, one is left with $4 \mid x$. Running the output of factor on $38^{4x}+31$ gives some rather difficult numbers to factorise. For $x=3$, there are three divisors: 338431, 322249, and 83126873. It's probably not an easy row to hoe here. There well may be primes, but these are not generally accessible. The output of factor seems to indicate that if $38^{4x}+31$ were prime, then $4x\gt 176$.
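A search along these lines is easy to script with SymPy (my own sketch, and it may run for a little while on the larger candidates); only exponents divisible by $4$ survive the mod-3 and mod-5 sieve:

```python
from sympy import isprime

for n in range(4, 181, 4):          # n = 0 (mod 4): the only candidates left
    if isprime(38**n + 31):
        print("prime at n =", n)
        break
else:
    print("no prime found for n <= 180")
```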
{ "language": "en", "url": "https://math.stackexchange.com/questions/767007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If $\tan{\frac{A}{2}}\tan{\frac{B}{2}}=\frac{1}{2}$, find $\angle C$ In $\Delta ABC$, if $$\tan{\dfrac{A}{2}}\tan{\dfrac{B}{2}}=\dfrac{1}{2}\\\sin{\dfrac{A}{2}}\sin{\dfrac{B}{2}}\sin{\dfrac{C}{2}}=\dfrac{1}{10}$$ find $\angle C$. My try: since $$2\sin{\dfrac{A}{2}}\sin{\dfrac{B}{2}}=\cos{\dfrac{A}{2}}\cos{\dfrac{B}{2}}$$ $$\left(\cos{(\dfrac{A-B}{2})}-\cos{(\dfrac{A+B}{2})}\right)\sin{\dfrac{C}{2}}=\dfrac{1}{5}$$ since $$\cos{(\dfrac{A+B}{2})}=\sin{\dfrac{C}{2}}$$ so $$\left(\cos{(\dfrac{A-B}{2})}-\sin{\dfrac{C}{2}}\right)\sin{\dfrac{C}{2}}=\dfrac{1}{5}$$ Then I can't go on.
Denote $~~a=\tan\dfrac{A}{2}$, $~~b=\tan\dfrac{B}{2}$ (let $a\le b$). $\sin\dfrac{A}{2}\sin\dfrac{B}{2}\sin\dfrac{C}{2}=\dfrac{1}{10}$; $\sqrt{\dfrac{1-\cos A}{2}} \cdot \sqrt{\dfrac{1-\cos B}{2}} \cdot \sqrt{\dfrac{1-\cos C}{2}} = \dfrac{1}{10}$; $(1-\cos A) (1-\cos B)(1-\cos C) = \dfrac{2}{25}$; $(1-\cos A) ~ (1-\cos B) ~ (1+\cos(A+B)) = \dfrac{2}{25}$; $\dfrac{2a^2}{1+a^2} \cdot \dfrac{2b^2}{1+b^2} \cdot \dfrac{2(1-ab)^2}{(1+a^2)(1+b^2)} = \dfrac{2}{25}$; $\dfrac{ab|1-ab|}{(1+a^2)(1+b^2)}=\dfrac{1}{10}$. Now we get the system: $$ \left\{ \begin{array}{l} ab=\dfrac{1}{2};\\ \dfrac{ab|1-ab|}{(1+a^2)(1+b^2)}=\dfrac{1}{10}. \end{array} \right. $$ $b=\dfrac{1}{2a}$ $\implies$ $\dfrac{a^2}{(1+a^2)(4a^2+1)}=\dfrac{1}{10}$ ; $4a^4-5a^2+1=0$; $(2a-1)(2a+1)(a-1)(a+1)=0$. There are $2$ positive roots of this equation: $a=\dfrac{1}{2}$ and $a=1$. If $a=\dfrac{1}{2}$, then $b=1$, then $A = \arctan\dfrac{4}{3}=\arcsin\dfrac{4}{5}=\arccos\dfrac{3}{5}$, $~B=\pi/2$, $~C=...$; If $a=1$, then $b=\dfrac{1}{2}$ (simply a permutation of $a,b$). (I don't know if this is the simplest way, but at least it is $\approx$ clear :) Finally, $C=\arctan\dfrac{3}{4}=\arcsin\dfrac{3}{5}=\arccos\dfrac{4}{5} = 0.643501108793...$.
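Plugging the solution back into the two given conditions (a numerical check of mine, not part of the answer):

```python
import math

A = 2 * math.atan(0.5)        # tan(A/2) = 1/2
B = math.pi / 2               # tan(B/2) = 1
C = math.pi - A - B           # angles of a triangle sum to pi

print(math.tan(A / 2) * math.tan(B / 2))                      # 0.5
print(math.sin(A / 2) * math.sin(B / 2) * math.sin(C / 2))    # 0.1
print(C, math.atan(3 / 4))                                    # both 0.6435011...
```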
{ "language": "en", "url": "https://math.stackexchange.com/questions/767109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Relation between singular values and eigenvalues How is this inequality proved: $\sigma_{min}(A) \leq \min_{i}|\lambda_i|\leq\max_{i}|\lambda_i| \leq \sigma_{max}(A) $ where the $\sigma$ are the singular values and the $\lambda $ are the eigenvalues of a matrix $A$? In the book I am reading (Matrix Computations, Golub), it says that it can be seen using the Schur form.
The point is that if $T=(t_{ij})\in\mathbb{C}^{n\times n}$ is triangular, then $$\tag{$❀$} \sigma_{\min}(T)\leq\min_i|t_{ii}|\leq\max_i|t_{ii}|\leq\sigma_{\max}(T). $$ In other words, $$ \sigma_{\min}(T)\leq|t_{ii}|\leq\sigma_{\max}(T), \quad i=1,\ldots,n. $$ To see this, consider the vector $e_i$ (the $i$th column of the identity matrix). Then $$ |t_{ii}|\leq\sqrt{\sum_{k=1}^i|t_{ki}|^2}=\|Te_i\|_2\leq\max_{\|x\|_2=1}\|Tx\|_2=\sigma_{\max}(T). $$ The other direction can be shown similarly using the inverse of $T$, the fact that $\sigma_{\min}(T)=1/\sigma_{\max}(T^{-1})$, and that the diagonal of $T^{-1}$ is equal to the inverse of the diagonal of $T$. Note that if the matrix $T$ is not invertible, then the lower bound on $|t_{ii}|$ is trivial as $\sigma_{\min}(T)=0$. Now if $T=Q^*AQ$ is the Schur form of $A$, $T$ has the same singular values as $A$, and $t_{ii}$ are the eigenvalues of $T$ (and $A$). Then just use ($❀$).
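A random-matrix check of mine (assuming NumPy and SciPy, whose Schur routine returns the decomposition used above):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
T, Q = schur(A, output='complex')      # A = Q T Q*, T upper triangular

sing = np.linalg.svd(A, compute_uv=False)
eig_abs = np.abs(np.diag(T))           # |eigenvalues| sit on T's diagonal

assert sing.min() <= eig_abs.min() + 1e-12
assert eig_abs.max() <= sing.max() + 1e-12
print(sing.min(), eig_abs.min(), eig_abs.max(), sing.max())
```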
{ "language": "en", "url": "https://math.stackexchange.com/questions/767196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Linear Algebra, geometric multiplicity I have a matrix, and the question says that it has an eigenvalue of 0. The question asks me to find the geometric multiplicity of that eigenvalue. I know the answer is 4. I just don't understand how it is 4, since this matrix can be row-reduced to just one row of $1\ 1\ 1\ 1\ 1$ with all the remaining rows zero. Thanks in advance for your comments. $$\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 & 2 \\ 2 & 2 & 2 & 2 & 2 \\ 2 & 2 & 2 & 2 & 2 \end{pmatrix}$$
The geometric multiplicity of an eigenvalue $\lambda$ for the $n\times n$ matrix $A$ is, by definition, the dimension of the subspace $$ E_A(\lambda)=\{v\in K^n:Av=\lambda v\} $$ where $K$ is the base field, in your case probably $\mathbb{R}$ or $\mathbb{C}$, and the elements of $K^n$ are column vectors. This subspace is just the null space (also known as kernel) of the matrix $A-\lambda I_n$, because $$ Av=\lambda v \quad\text{if and only if}\quad Av=\lambda I_nv \quad\text{if and only if}\quad (A-\lambda I_n)v=0 $$ The dimension of this subspace is computed easily: $$ \dim E_A(\lambda)=\dim N(A-\lambda I_n)=n-\operatorname{rank}(A-\lambda I_n) $$ by the rank-nullity theorem. The case of $\lambda=0$ is no different: $$ \dim E_A(0)=\dim N(A-0I_n)=\dim N(A)=n-\operatorname{rank}(A) $$ and in your case $n=5$ and $\operatorname{rank}(A)=1$.
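For the concrete $5\times 5$ matrix in the question the formula gives $5-1=4$; a quick check of mine with NumPy:

```python
import numpy as np

A = np.vstack([np.ones((2, 5)), 2 * np.ones((3, 5))])   # the matrix from the question
rank = np.linalg.matrix_rank(A)
print(rank, 5 - rank)   # 1 4 -> geometric multiplicity of eigenvalue 0 is 4
```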
{ "language": "en", "url": "https://math.stackexchange.com/questions/767285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cases of Partial Fraction Decomposition How many cases are there in integration using partial fractions?
If I understood your question correctly, I would say there are $5$ cases. Assume you have a rational function $\dfrac{p(x)}{q(x)}$, where the degree of $q(x)$ exceeds the degree of $p(x)$. Case $1$: $q(x)$ is a product of distinct linear factors. Example: Consider $\dfrac{x}{(x+3)(x-1)}$ Case $2$: $q(x)$ is a product of linear factors, where some of these factors are repeated. Example: Consider $\dfrac{x^2}{(x+4)^2(x-2)}$ Case $3$: $q(x)$ is a product of distinct irreducible quadratic factors. Example: Consider $\dfrac{x}{(x^2+1)(x^2+3)}$ Case $4$: $q(x)$ is a product of irreducible quadratic factors, where some are repeated. Example: Consider $\dfrac{2x-1}{(x^2+x+1)^3}$ Case $5$: $q(x)$ is some mixture of the above cases. Example: Consider $\dfrac{3x-2}{(x-2)^2(x^2+x+2)}$ Consider another example, which has been worked to the decomposition stage of the solution. \begin{align} &\frac{2x-1}{(x-1)^2(x^2+x+1)^2}\\ &=\frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{Cx+D}{x^2+x+1}+\frac{Ex+F}{(x^2+x+1)^2}\\ \end{align} Then, you can do what you normally do for partial fractions and equate the coefficients.
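SymPy's apart performs the decomposition mechanically, which is handy for checking which case you are in (an illustration of mine, not from the answer):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.apart(x / ((x + 3) * (x - 1))))                       # Case 1: distinct linear factors
print(sp.apart(x**2 / ((x + 4)**2 * (x - 2))))                 # Case 2: repeated linear factor
print(sp.apart((3 * x - 2) / ((x - 2)**2 * (x**2 + x + 2))))   # Case 5: mixed
```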
{ "language": "en", "url": "https://math.stackexchange.com/questions/767361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A Hermitian matrix $(\textbf{A}^\ast = \textbf{A})$ has only real eigenvalues - Proof Strategy [Lay P397 Thm 7.1.3c] Would someone please explain the proof strategy at Need verification - Prove a Hermitian matrix $(\textbf{A}^\ast = \textbf{A})$ has only real eigenvalues? I can handle the algebra, so I'm not asking about formal arguments or proofs. For example, $1.$ How would you determine/divine/previse to take the Hermitian conjugate and to right-multiply by $\color{orangered}{\vec{v}}$? $2.$ Since we are given that $A$ is Hermitian and has eigenvalues, why not start the proof with $A^*\mathbf{v} = \lambda^*\mathbf{v}$? Here, $\mathbf{v}$ is an eigenvector and so by definition $\neq \mathbf{0}$. Then $A\mathbf{v} = \lambda \mathbf{v}$ and $A\mathbf{v} = A^*\mathbf{v} = \lambda^*\mathbf{v}$, so $\lambda \mathbf{v} = \lambda^*\mathbf{v} \iff \mathbf{0} = (\lambda^* - \lambda )\mathbf{v}$, and since $\mathbf{v} \neq \mathbf{0}$, we get $(\lambda^* - \lambda )=0$.
Regarding 1: Intuitively, it comes down to the fact that we need to prove the fact that $\lambda \in \mathbb{R}$ using the facts that $A = A^*$ and $Av = \lambda v$ for some vector $v \neq 0$. In order to bring $A^*$ into play, we have to take the Hermitian conjugate of both sides at some point. As for multiplying by $v$: well, once you see $A^*$, you note that this is equal to $A$; there's one step in the right direction. In order to get from "talking about $A$" to "talking about $\lambda$", we need to multiply by $v$ on the right. Regarding 2: It is not generally the case that for a matrix $A$ with eigen-pair $\lambda,v$ $$ A^*v = \lambda^*v \tag{NOT GENERALLY TRUE} $$ While we can say that $\lambda^*$ must be an eigenvalue of $A^*$, the associated eigenvector for $A^*$ is not related to the associated eigenvector for $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/767470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sturm Liouville with periodic boundary conditions Background and motivation: I'm given the boundary value problem: $$y''(x)+2y(x)=-f(x)$$ subject to $y(0)=y(2\pi)$ and $y \, '(0)=y \, '(2\pi)$. EDIT: These were not given to be zero!! Maybe this helps... The text (Nagle, Saff and Snider, end of Chapter 11 technical writing exercise) asks us to construct the Green's function for the problem. At the moment, I'm a bit stumped because there is no $\lambda$ in the given problem. Let me elaborate: if we were given $$ (py')'+qy+\lambda r y= 0 $$ where $p,p',q$ and $r$ were continuous, real-valued, periodic functions with period $2\pi$, then I think I'd be able to get started. I know the usual solutions then only fit the given boundary conditions for particular choices of $\lambda$. So, my initial observation is that $p=1$ is certainly continuous and periodic, so we can set $p=1$. * *Question: what should I see as $q$ and $r$ for the problem stated at the start of this post? How can we massage the given problem into the standard form of Sturm-Liouville? I suppose it is important to note we must choose $r>0$, as it serves as the weight function in the inner product paired with the eigenspace of solutions for this problem. (The original post included a picture of the problem from the text; it is not reproduced here.)
I will write $a$ for $\sqrt{2}$ for simplicity. The general solution of the homogenous equation $y''+2y=0$ has the form $$y(x)=C\cos(ax)+D\sin(ax)$$ Using the variation of parameters method to find a particular solution of the non homogenous problem, we have to determine $C$ and $D$ with $$ \eqalign{C'\cos(ax)+D'\sin(ax)&=0\cr -C'\sin(ax)+D'\cos(ax)&=-\frac{1}{a}f(x) } $$ This yields $$ C'=\frac{1}{a}\sin(ax)f(x),\quad D'=-\frac{1}{a}\cos(ax)f(x) $$ Thus, the general solution of $y''+2y+f=0$ is given by $$\eqalign{ y(x)&=c\cos(ax)+d\sin(ax)+\frac{\cos(ax)}{a}\int_0^x\sin(at)f(t)dt-\frac{\sin(ax)}{a}\int_0^x\cos(at)f(t)dt\\ &=c\cos(ax)+d\sin(ax)-\frac{1}{a}\int_0^x\sin(ax-at)f(t)dt} $$ and $$ y'(x)=-ac\sin(ax)+ad\cos(ax)- \int_0^x\cos(ax-at)f(t)dt $$ Now, the conditions $y(0)=y(2\pi)$ and $y'(0)=y'(2\pi)$ give us two equations: $$\eqalign{c &=c\cos(2a\pi)+d\sin(2a\pi )-\frac{1}{a}\int_0^{2\pi}\sin(2a\pi-at)f(t)dt\cr ad&= -ac\sin(2a\pi)+ad\cos(2a\pi)- \int_0^{2\pi}\cos(2a\pi-at)f(t)dt} $$ This can be arranged as follows $$\eqalign{c\sin(a\pi)-d\cos(a\pi) &=-\frac{1}{2a\sin(a\pi)}\int_0^{2\pi}\sin(2a\pi-at)f(t)dt\cr c\cos(a\pi)+d\sin(a\pi)&= - \frac{1}{2a\sin(a\pi)}\int_0^{2\pi}\cos(2a\pi-at)f(t)dt} $$ Solving for $c$ and $d$ we obtain $$\eqalign{c&=-\frac{1}{2a\sin(a\pi)}\int_0^{2\pi}(\cos(2a\pi-at)\cos(a\pi)+\sin(a\pi)\sin(2a\pi-at))f(t)dt\cr &=-\frac{1}{2a\sin(a\pi)}\int_0^{2\pi} \cos(a\pi-at)f(t)dt\cr d&=-\frac{1}{2a\sin(a\pi)}\int_0^{2\pi}(\cos(2a\pi-at)\sin(a\pi)-\cos(a\pi)\sin(2a\pi-at))f(t)dt\cr &=\frac{1}{2a\sin(a\pi)}\int_0^{2\pi} \sin(a\pi-at)f(t)dt\cr } $$ Finally $$\eqalign{ y(x)& =\frac{-1}{2a\sin(a\pi)}\int_0^{2\pi} \big(\cos(ax)\cos(a\pi-at)-\sin(ax)\sin(a\pi-at)\big)f(t)dt\\ &\phantom{=}-\frac{1}{a}\int_0^x\sin(ax-at)f(t)dt\\ &=\frac{-1}{2a\sin(a\pi)}\int_0^{2\pi}\cos(a\pi+ax-at)f(t)dt-\frac{1}{a}\int_0^x\sin(ax-at)f(t)dt } $$ This expression of $y$ can be simplified as follows: $$\eqalign{ y(x)& =\frac{-\cot(a\pi)}{2a }\int_0^{2\pi} \cos(ax-at)f(t)dt\cr &\phantom{=}+\frac{1}{2a }\int_0^{2\pi} \sin(ax-at)f(t)dt -\frac{1}{a}\int_0^x\sin(ax-at)f(t)dt\cr &=\frac{-\cot(a\pi)}{2a }\int_0^{2\pi} \cos(ax-at)f(t)dt+\frac{1}{2a }\int_x^{2\pi} \sin(ax-at)f(t)dt\cr &\phantom{=} -\frac{1}{2a }\int_0^{x} \sin(ax-at)f(t)dt \cr &=\frac{-\cot(a\pi)}{2a }\int_0^{2\pi} \cos(ax-at)f(t)dt-\frac{1}{2a }\int_0^{2\pi} \sin(a|x-t|)f(t)dt\cr &=\frac{-1}{2a\sin(a\pi)}\int_0^{2\pi} \cos(a\pi-a|x-t|)f(t)dt } $$ Therefore, $$ y(x)=\int_0^{2\pi}G(x,t)f(t)dt $$ with $$ G(x,t)=\frac{-\cos(\sqrt{2}(\pi-|x-t|))}{2\sqrt{2}\sin(\sqrt{2}\pi)}. $$ which is the desired conclusion.$\qquad\square$.
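As a check on the final formula, one can integrate $G$ against a concrete $f$ and compare with the exact periodic solution. For $f(t)=\cos t$, the unique $2\pi$-periodic solution of $y''+2y=-\cos x$ is $y(x)=-\cos x$ (try $y=c\cos x$: $-c+2c=-1$ gives $c=-1$). A SciPy sketch of mine, splitting the quadrature at the kink of $G$:

```python
import numpy as np
from scipy.integrate import quad

a = np.sqrt(2)

def G(x, t):
    return -np.cos(a * (np.pi - abs(x - t))) / (2 * a * np.sin(a * np.pi))

def y(x):
    val, _ = quad(lambda t: G(x, t) * np.cos(t), 0, 2 * np.pi,
                  points=[x], limit=200)   # G has a kink at t = x
    return val

for x in (0.5, 1.0, 2.5, 5.0):
    print(x, y(x), -np.cos(x))             # the two columns agree
```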
{ "language": "en", "url": "https://math.stackexchange.com/questions/767560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Prove that $P(X=0) \leq \frac{\operatorname{Var}(X)}{E(X^{2})}$ Let $X$ be a random variable taking nonnegative integer values, let $E [X^2]$ denote the expectation of its square, and let $\operatorname{Var} [X]$ denote its variance. Prove that $P(X=0) \leq \frac{\operatorname{Var}(X)}{E(X^{2})}$. I tried to use the following inequality, which can be proved by Chebyshev's inequality: $P(X=0) \leq \frac{\operatorname{Var}(X)}{E(X)^{2}}$. I tried to prove that $E(X^{2}) \geq E(X)^{2}$ and, using the above inequality, to conclude, but I couldn't make it work. I thought that using the definition of expectation would help, but I am not sure. Please help me with some guidance, a hint, or a reference; thank you very much.
Since $\text{Var}[X] = E[X^2]-(E[X])^2 \geq 0$ (this is a well-known fact), the claimed bound is at least $0$, but by itself this does not finish the proof; indeed $\frac{\operatorname{Var}(X)}{E[X^2]} \leq \frac{\operatorname{Var}(X)}{(E[X])^2}$, so the desired bound is the stronger of the two, and your plan of combining the inequalities points in the wrong direction. The standard argument uses the Cauchy-Schwarz inequality instead. Since $X$ takes nonnegative integer values, $X = X\cdot\mathbf 1_{\{X\geq 1\}}$, hence $$E[X]^2 = E\big[X\,\mathbf 1_{\{X\ge 1\}}\big]^2 \le E[X^2]\;E\big[\mathbf 1_{\{X\ge1\}}^2\big] = E[X^2]\,P(X\ge 1).$$ Therefore $$P(X=0) = 1 - P(X\ge1) \le 1 - \frac{E[X]^2}{E[X^2]} = \frac{E[X^2]-E[X]^2}{E[X^2]} = \frac{\operatorname{Var}(X)}{E[X^2]},$$ which is the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/767639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $\sum_{n=1}^\infty a_n$ converges and $\sum_{n=1}^\infty \frac{\sqrt a_n}{n^p}$ diverges, then p $\in$ {?} Let {$a_n$} be a sequence of non-negative real numbers such that the series $$\sum_{n=1}^\infty a_n$$ is convergent. If p is a real number such that the series $$\sum_{n=1}^\infty \frac{\sqrt a_n}{n^p}$$ diverges, then what can be said about the value of p?
Seems like we'll have to make use of the A.M. $\ge$ G.M. inequality here, which gives $$\frac {\sqrt{a_n}} {n^p} \leq a_n + \frac{1}{n^{2p}}$$ (in fact with an extra factor $\frac12$ on the right). If $p > \frac12$, the right-hand side is the sum of two convergent series ($\sum a_n$ by hypothesis, and the $p$-series with exponent $2p>1$), so $\sum \frac{\sqrt{a_n}}{n^p}$ would converge by comparison. Hence divergence forces $p \le \frac12$, and that must be the required answer. Can anyone show how to do this using the Cauchy-Schwarz inequality? It is simple, I reckon. :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/767733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that this segment bisects another The circle touches the trapezoid $GFEC$ at the points $C$, $D$ and $E$. The point $A$ is the center of the circle. The rest of the information can be seen in the diagrams of the original post (not reproduced here): the first simply shows what is given, and the others show what I've tried. What we have to prove is that $FI=HI$. I've tried a lot more than that, but the other information is basically useless (the information I've given here isn't quite useful either, though).
Let $s$ be the line defined by $FE$, $t$ the line defined by $CD$, and $J$ the point of intersection of $s$ and $t$ (the accompanying figure is not reproduced here). Hints: Note that $\angle GDC = \angle GCD = \angle FDJ = \angle DJF$. Therefore $FJ=FD$. But $FD=FE$ (Why?). Note that $\triangle JFI \sim \triangle JEC$, hence ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/767860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is $\lim\limits_{N\to\infty}x^{N+1}=0$, where $|x|<1$? How is this done? Why is $\lim\limits_{N\to\infty}x^{N+1}=0$, where $-1<x<1$?
A relatively "low-tech" way to see the limit must be zero (assuming the limit exists) is to call the limit $L$ and note that $$ L = \lim_{N \to \infty} x^{N+1} = \lim_{N \to \infty} (x \cdot x^{N}) = x \lim_{N \to \infty} x^{N} = xL. $$ Subtracting and factoring, $(1 - x)L = 0$. Since $x \neq 1$ by hypothesis, it must be that $L = 0$. To prove the limit exists without descending into $\varepsilon$-land (i.e., using convergence criteria that appear early in an elementary analysis course), note that if $0 \leq x < 1$, then the sequence $(x^{N})_{N=0}^{\infty}$ is non-increasing and bounded below by $0$, so it has a real limit $L$. To handle the case $-1 < x < 0$, note that $-|x|^{N} \leq x^{N} \leq |x|^{N}$ for all $N \geq 0$ and apply the squeeze theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/767892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Value of the summation of $2^i\cdot i$ I'm trying to calculate the value of $$2^0\cdot0 + 2^1\cdot1 + 2^2\cdot2 + \dots + 2^n\cdot n$$ I figured this would be the summation of $2^i \cdot i$ from $i = 0$ to $n$, but I'm unable to calculate its value. I have tried searching online but haven't been able to find a formula or any property that could simplify it (maybe there was one which I might not have understood). P.S. This is not a homework question; I need this value to prove a theorem.
WolframAlpha claims that $$ \sum_{n=0}^{N}2^{n}n=2+2^{N+1}(N-1) $$ We can verify this using induction. Clearly it holds for $N=0$. Now, assuming the formula holds for $N$, we have $$ \sum_{n=0}^{N+1}2^{n}n=2+2^{N+1}(N-1)+2^{N+1}(N+1)=2+2^{N+1}\cdot 2N=2+2^{N+2}\big((N+1)-1\big) $$ And so the formula holds for all $N\in\{0,1,2,\ldots\}$.
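A brute-force confirmation of the closed form (my own addition):

```python
def closed_form(N):
    return 2 + 2**(N + 1) * (N - 1)

for N in range(20):
    assert sum(2**n * n for n in range(N + 1)) == closed_form(N)
print("formula checked for N = 0..19")
```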
{ "language": "en", "url": "https://math.stackexchange.com/questions/768012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What kind of matrix is it that when multiplied with its transpose produces the identity? If $A^TA = I$, where $A$ is a lower triangular matrix, does that mean $A$ has to be an identity matrix (and nothing else)? In general, which kind of matrix $A$ must be for that equality to hold?
Setting lower triangularity aside for the moment, take any $m$ mutually orthonormal vectors $\vec n_1, \vec n_2, \ldots, \vec n_m \in \Bbb R^m$ so that $\langle \vec n_j, \vec n_k \rangle = \delta_{jk}$, and turn them into a matrix $N$ by using them as columns, so we may write $N$ in columnar form as $N = \begin{bmatrix} \vec n_1 & \vec n_2 & \ldots & \vec n_m \end{bmatrix}; \tag{1}$ then the rows of $N^T$ are the columns of $N$, each transposed: $N^T = \begin{bmatrix} \vec n_1^T \\ \vec n_2^T \\ \vdots \\ \vec n_m^T \end{bmatrix}. \tag{2}$ Thus $N^TN = \begin{bmatrix} \vec n_1^T \\ \vec n_2^T \\ \vdots \\ \vec n_m^T \end{bmatrix} \begin{bmatrix} \vec n_1 & \vec n_2 & \ldots & \vec n_m \end{bmatrix} = [\langle \vec n_j, \vec n_k \rangle ] = [\delta_{jk}] = I, \tag{3}$ for $1 \le j, k \le m$. This shows there are very many matrices satisfying $A^TA = I$. Furthermore, the columns of $N$ are linearly independent, since any system of orthonormal vectors is independent (and I'll take this as known); thus $\det N \ne 0$ and $N$ is invertible. Or one can argue from (3) that $(\det N^T)(\det N) = \det N^TN = 1, \tag{4}$ showing $\det N \ne 0$ and the existence of $N^{-1}$ follows. Having $N^{-1}$ we obtain from (3) $N^T = N^TI = N^T(N N^{-1}) = (N^T N)N^{-1} = IN^{-1} = N^{-1}, \tag{5}$ so that $N^T = N^{-1}$; thus (3) yields $NN^T = NN^{-1} = I \tag{6}$ as well. So in the general case we see that $\bullet$ $N = I$ needn't hold; $\bullet$ there are many other possibilities, one for each orthonormal frame $\vec n_1, \vec n_2, \ldots, \vec n_m \in \Bbb R^m$; $\bullet$ the key property is that the rows be orthonormal, as the columns are; in algebraic terms, this translates directly to $N^TN = NN^T = I$. Returning finally to the case $N$ lower triangular, we see in this case $N^T$ is upper triangular. In this situation, $N$ takes the form of a diagonal matrix with diagonal entries $\pm 1$; to see this, we can write out the matrix equation $N^TN = I$ in terms of the components of the vectors $\vec n_i = (n_{i1}, n_{i2}, \ldots, n_{im})^T$; we see that the result is a system of quadratic equations in the $m(m + 1)/2$ unknowns $n_{ij}$ where $1 \le j \le i$ by the lower triangularity of $N$; the remaining $n_{ij} = 0$, $1 \le i < j$; since $(N^T)_{jl} = N_{lj} = n_{lj}$ the equations may be cast in the form $\sum_{l = 1}^m n_{lj}n_{lk} = \delta_{jk}, \tag{7}$ and since the matrix $N$ is lower triangular, we always have $n_{lk} = 0$ for $ l < k$, so (7) reduces to $\sum_{l = k}^m n_{lj}n_{lk} = \delta_{jk}. \tag{8}$ The system of equations (8) may be solved by starting with $j = k = m$, whereby (8) becomes $n_{mm}^2 = 1$; thus $n_{mm} = \pm 1$ and thus $n_{mj} = 0$, $j < m$, since taking $k = m$, $j < m$, (8) becomes simply $n_{mj}n_{mm} = \delta_{jm} = 0, \tag{9}$ whence $n_{mj} = 0$ since $n_{mm} = \pm 1$. Having the $m$-th row of $N$, we may work our way downwards in $k$, successively finding the remaining rows of $N$; for example, with $j = k = m - 1$ we have $n_{m - 1 \; m - 1}^2 + n_{m \; m - 1}^2 = \sum_{l = m - 1}^m n_{l \; m - 1} n_{l \; m - 1} = \delta_{m - 1 \; m - 1} = 1; \tag{10}$ now using $n_{m \; m - 1} = 0$ we see that $n_{m - 1 \; m - 1} = \pm 1$ and we proceed as before, eventually working our way down to $k = 1$ and accruing the results $n_{kk} = \pm 1$, $n_{jk} = 0$ in the process; $N$ is seen to be a diagonal matrix whose only nonzero entries are the $n_{kk} = \pm 1$, $1 \le k \le m$. There are $2^m$ such matrices.
As a final note, I address the OP's question from the comment stream: if $L$ and $M$ are lower triangular, with $L$ invertible and $(L^{-1}M)(L^{-1}M)^T = I$, then $L^{-1}$ is lower triangular (see Show that A is invertible and that it is Lower Triangular.) and hence $N = L^{-1}M$ is lower triangular as well; lower triangular matrices are closed under multiplication. Then $N = L^{-1}M$ satisfies $NN^T = I$, so $(\det N)(\det N^T) = 1$, so $\det N \ne 0$ and $N$ is invertible; thus we may write $N^{-1} = N^T$, and hence $N^TN = I$ as well. From what we have seen above, we may conclude that $N$ is a diagonal matrix with all nonzero entries $\pm 1$; no diagonal entry $n_{jj} = 0$. $L^{-1}M = N$ implies $M= LN$; $L = M$ if and only if $N = I$. The effect of right multiplying $L$ by $N$ is to reverse the sign of column $j$ of $L$ if and only if $n_{jj} = -1$; the remaining column vectors are left invariant. Hope this helps. Cheerio, and as always, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/768098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Evaluate the integral of $\sec(2x + 1)\,dx$ I got $\ln|\sec(2x +1) + \tan(2x+1)| + C$ as an answer. I saw that the integral of $\sec x$ is $\ln|\sec x + \tan x| + C$. But I feel I may have left something out, because that was too easy.
$$\int\sec({2x+1})\,dx=\frac{1}{2}\int\sec({2x+1})\,d(2x+1)$$ Can you take it from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/768225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Rows of orthogonal matrix from an orthonormal basis of $R^n$ The question is: Let $U$ be an $n \times n$ orthogonal matrix. Show that the rows of U form an orthonormal basis of $\mathbb R ^n $. So far I have stated: Since $U$ is orthogonal its column vectors are linearly independent and by the Invertible Matrix theorem $U$ is invertible and $U^T$ is invertible. I know that I need to somehow show that $U^T U=I$, but I don't know how to get there from where I left off.
Hint: This is long (and can be very much shortened), but I think it will be a good learning curve for you to make it long. It became this long because we start with the way you have understood orthogonal matrices (as seen in comments). I am reproducing it here: "A set is orthogonal if each pair of distinct vectors within it are orthogonal to each other. An orthogonal matrix is a matrix whose column vectors form an orthogonal set". This is in fact one of the many equivalent ways to define an orthogonal matrix. We will start with this definition without any other assumptions. We will individually denote both columns and rows of $U$. Let the columns of $U$ be given by $N\times 1$ vectors, $c_1,\dots,c_N$. Let its rows be given by $N\times 1$ vectors $r_1,\dots,r_N$. So you have $$U=[c_1|\dots|c_N]$$ and $$U=\begin{bmatrix}r_1^T \\ \vdots \\ r_N^T \end{bmatrix}$$ (notice how I used the transpose). Now note that $$A=U^TU$$ can be viewed as a matrix whose $(i,j)$th entry is given as $$A_{ij}=c_i^Tc_j$$ Similarly $$B=UU^T$$ is a matrix whose $(i,j)$th entry is given as $$B_{ij}=r_i^Tr_j$$ Now notice the following things * *prove $U^TU=A=I$ (why?) *use the fact that $\operatorname{rank}(PQ)\leq \min\{\operatorname{rank}(P),\operatorname{rank}(Q)\}$ to argue that $U$ and $U^T$ are invertible. Here $P$ and $Q$ are arbitrary $N\times N$ matrices; substitute $P$ and $Q$ with suitable matrices. *Use the above two facts to prove that $U^{-1}=U^T$. Hint: Perhaps a right multiplication on the LHS of the first fact with a suitable matrix. *Use the above fact to prove $UU^T=I$. Hint: For this you also need that if $P$ and $Q$ are inverses of each other, then $PQ=I$ and $QP=I$ *Now, using the above fact and the definition of $B$, look at $r_i^Tr_j$
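A numeric illustration of the conclusion (my own, assuming NumPy): build an orthogonal $U$ from a QR factorization and check both products:

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # U has orthonormal columns

print(np.allclose(U.T @ U, np.eye(5)))   # columns orthonormal
print(np.allclose(U @ U.T, np.eye(5)))   # hence rows orthonormal too
```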
{ "language": "en", "url": "https://math.stackexchange.com/questions/768408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Branch of logarithm which is real when z>0 I am familiar with the complex logarithm and its branches, but still this confuses me. I read this in a textbook: "For complex $z\neq 0, log(z)$ denotes that branch of the logarithm which is real when $z > 0$. What does this mean?
Let $\operatorname{Log} z=\ln|z|+i\operatorname{Arg}z$ be the principal branch of the logarithm, which corresponds to a cut along the negative real numbers. Now, consider any branch of $\log$ that is defined on a connected neighborhood of $\Bbb{R}^+=(0,+\infty)$. Clearly, $x\mapsto \varphi(x)=\log(x)-\operatorname{Log}(x)$ is a continuous function on $\Bbb{R}^+$ that satisfies $\exp(\varphi(x))=1$; so $\varphi$ takes its values in $2\pi i\Bbb{Z}$, and since it is continuous, it must be constant. Thus, there is $k\in\Bbb{Z}$ such that $\log(z)=\operatorname{Log} z+2i\pi k$, and this happens on the largest connected open set that contains $\Bbb{R}^+=(0,+\infty)$ and on which both functions are analytic (by analytic continuation). Thus, the statement says that we choose the logarithmic function that corresponds to $k=0$, because otherwise $\log(z)$ would not be real for real $z>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/768504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $2xf ' (x) - f(x) = 0$ find $f$ So $2xf '(x) - f(x) = 0$ and we know that $f(1) =1$. So I actually need to find the integral of $2xf'(x) - f(x)$. Thanks.
$$\frac{f'(x)}{f(x)}=\frac{1}{2x} \Rightarrow \ln|f(x)|=\frac{1}{2} \ln{x} +c \Rightarrow \ln|f(x)|= \ln{{x}^{\frac{1}{2}}} +c \Rightarrow f(x)= \pm c_1 \sqrt{x} \Rightarrow f(x)=C \sqrt{x}$$ $$f(1)=1 \Rightarrow 1= C $$ So $$f(x)=\sqrt{x}$$
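SymPy reproduces the separation-of-variables computation, initial condition included (a check of mine, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Function('f')
sol = sp.dsolve(sp.Eq(2 * x * f(x).diff(x) - f(x), 0), f(x), ics={f(1): 1})
print(sol)   # Eq(f(x), sqrt(x))
```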
{ "language": "en", "url": "https://math.stackexchange.com/questions/768577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How can I prove that $H=G$ or $K=G$? Let $H,K \subseteq G$ be subgroups such that $G=H \cup K$. Prove that $H=G$ or $K=G$.
In the special case when $G$ is a finite group, it is also possible to use a counting argument. (This is less general and a bit more complicated than the other answers, but I find the method interesting.) Suppose $H$ and $K$ are both proper subgroups of $G$. Then $|H|$ and $|K|$ are proper divisors of $|G|$. Therefore, $$\begin{align}|G|=|H\cup K| &= |(H\setminus\{1\})\cup(K\setminus\{1\})\cup\{1\}|\\&\leq|H\setminus\{1\}|+|K\setminus\{1\}|+|\{1\}|\\&\leq\left(\frac{|G|}{2}-1\right)+\left(\frac{|G|}{2}-1\right)+1\\&=|G|-1,\end{align}$$ a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/768653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Manipulating inequalities and probabilities We have the Chebychev's inequality $$\mathrm{P}\left(|X - \mu| \ge k\sigma\right) \le \frac{1}{k^{2}}$$ which tells us $\frac{1}{k^{2}}$ is the upper bound. How do we find the lower bound of this probability using this information?
The lower bound is zero. Most of the time, you can construct examples that hit these bounds exactly. E.g., a uniform distribution has support on $\mu \pm \sigma \sqrt{3}$, so it places zero probability for $k\ge \sqrt{3}$. But Chebyshev's inequality does not know about it, and that's not its job, anyway. On the other boundary, as far as I recall, Chebyshev's inequality is exact for a two-point mass distribution when $k$ is aligned to hit these masses, so it cannot be improved, either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/768951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Series convergence test with parameter As part of a bigger proof I reached the series $\sum {1 \over n^\alpha}$, $\alpha \in \mathbb{R}$. Obviously, the convergence depends on the value of $\alpha$. I already know the harmonic series diverges while $\sum {1\over n^2}$ converges. How does one settle the general case?
Some notes: * *Because $\sum \tfrac 1n$ diverges, you can prove the divergence for $\alpha \le 1$ with the limit comparison test *Because $\sum \tfrac 1{n^2}$ converges, you can prove the convergence for $\alpha \ge 2$ with the limit comparison test *According to an article I have read, you can use the Cauchy condensation test to settle both convergence and divergence of the series for every $\alpha$ (see the computation after this list) *The function $(1,\infty)\rightarrow \mathbb R: \alpha \mapsto \sum \tfrac1{n^\alpha}$ is the famous Riemann zeta function.
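For completeness, here is a sketch of the condensation step (my own addition, not in the original answer). For $\alpha>0$ the terms $1/n^\alpha$ are positive and decreasing, so by Cauchy condensation $\sum 1/n^\alpha$ converges if and only if $$\sum_{k=0}^{\infty} 2^{k}\,\frac{1}{(2^{k})^{\alpha}} = \sum_{k=0}^{\infty} \left(2^{1-\alpha}\right)^{k}$$ converges. The latter is a geometric series with ratio $2^{1-\alpha}$, convergent exactly when $2^{1-\alpha}<1$, i.e. when $\alpha>1$; for $\alpha\le 0$ the terms do not even tend to $0$, so the series diverges trivially.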
{ "language": "en", "url": "https://math.stackexchange.com/questions/769014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Hyperbolic Fixed Point Let $f:M\rightarrow M$ be a $C^{1}$-class diffeomorphism. Let $x\in M$ be a fixed point. I've been looking on the Internet for a while for a proof of the following fact, but I couldn't find one: $\lbrace x\rbrace$ is a hyperbolic set for $f$ if and only if $x$ is a hyperbolic fixed point. The definition of hyperbolic fixed point I'm using is the following: $x$ is a fixed point of $f$ such that $d_{x}f:T_{x}M\rightarrow T_{x}M$ has no eigenvalues in the unit circle $S^{1}\subset\mathbb{C}$. Can somebody help me? (Sketching the proof or even giving me some reference.) Thank you :)
This is simply a matter of definition: a hyperbolic fixed point is defined to be a point $x$ such that $\{x\}$ is a hyperbolic set. See for example the wikipedia entry on hyperbolic sets where they use the term "hyperbolic equilibrium point" instead of "hyperbolic point".
{ "language": "en", "url": "https://math.stackexchange.com/questions/769115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find $b$ such that $0 \le b < 101$ and $2^{987654321} \equiv b \pmod {101}$. Find the unique integer $b$ with $0 \le b < 101$ satisfying $2^{987654321} \equiv b \pmod {101}$. I am having trouble just starting this problem. This is a homework problem that is going to be on my test and I just cannot think of what to do. Any help is appreciated, thank you.
Hint: $101$ is prime. Use Fermat's Little Theorem.
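Carrying the hint out (my own sketch): Fermat gives $2^{100}\equiv 1 \pmod{101}$, and $987654321 \equiv 21 \pmod{100}$, so $b \equiv 2^{21} \pmod{101}$. Python confirms both the reduction and the direct computation:

```python
assert 987654321 % 100 == 21
print(pow(2, 21, 101))          # 89
print(pow(2, 987654321, 101))   # 89, agreeing with the Fermat reduction
```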
{ "language": "en", "url": "https://math.stackexchange.com/questions/769305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lifting a product of commutators of standard generators on 2-manifolds I have a problem understanding the proof at http://www.ams.org/journals/proc/1972-032-01/S0002-9939-1972-0295352-2/S0002-9939-1972-0295352-2.pdf I don't understand this part: "(...) we can easily construct $p: \tilde{F} \rightarrow F$ to be a six sheeted covering corresponding to the kernel of an appropriate map $\pi_1(F) \rightarrow \Sigma_3$ (such a covering that: if $f$ represents a loop without self-intersections and which is a product of commutators of "standard generators" then $f$ does not lift to a loop in $\tilde{F}$)." $\hspace{1.5cm}$ 1. Why do we know that such a covering exists? $\hspace{1.5cm}$ 2. Why is this covering six-sheeted? $\hspace{1.5cm}$ 3. Why doesn't the kernel contain $f$?
Justin Malestein and I give a very explicit construction of such a cover in the proof of Lemma 2.1 of our paper Malestein, Justin, Putman, Andrew; On the self-intersections of curves deep in the lower central series of a surface group. Geom. Dedicata 149 (2010), 73–84. which is available here. We actually construct an $8$-fold cover instead of a $6$-fold cover because it was important for us that the deck group be nilpotent (in this case, a $2$-group). This was needed because our paper (among other things) was generalizing Hempel's argument to show that surface groups were residually nilpotent. But the $8$-fold cover we construct is good enough to make Hempel's argument go through (and in any case once you see what is happening you'll have no problem finding the $6$-fold cover if that is what you really want).
{ "language": "en", "url": "https://math.stackexchange.com/questions/769420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Bayes theorem application A professor gives a true-false exam consisting of thirty T-F questions. The questions whose answers are “true” are randomly distributed among the thirty questions. The professor thinks that 3/4 of the class are serious, and have correctly mastered the material, and that the probability of a correct answer on any question from such students is 75%. The remaining students will answer at random. She glances at two questions from a test picked haphazardly. Both questions are answered correctly. What is the probability that this is the test of a serious student? Using the Bayes theorem $$\Pr(A|B) = \frac{\Pr(A)\Pr(B|A)}{\Pr(A)\Pr(B|A)+\Pr(A^c)\Pr(B|A^c)} = \frac{.75*.75}{(.75*.75)+(.25*.5)} = .818$$ I think I am on the right track. However, I am unsure if I need to do something to account for the fact that 2 questions were answered correctly? Any thoughts?
Let $A$ be the event the student is "serious" and $B$ the event she answers $2$ randomly chosen questions correctly. We want $\Pr(A|B)$, which is $\frac{\Pr(A\cap B)}{\Pr(B)}$. We first calculate $\Pr(B)$. The event $B$ can happen in two ways: (i) the student is serious, and answers the two questions correctly and (ii) the student is not serious, but answers correctly. The probability of (i) is $(3/4)(0.75)^2$. The probability of (ii) is $(1/4)(0.5)^2$. Add, and note that $\Pr(A\cap B)$ is just the probability of (i), which we already have computed.
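Carrying out the arithmetic from this setup (for comparison with the attempt in the question):
$$\Pr(A\mid B) = \frac{(3/4)(0.75)^2}{(3/4)(0.75)^2 + (1/4)(0.5)^2} = \frac{27/64}{27/64 + 4/64} = \frac{27}{31}\approx 0.871.$$
The difference from the $0.818$ in the question is exactly the point: since two questions were inspected, the per-question probabilities must be squared.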
{ "language": "en", "url": "https://math.stackexchange.com/questions/769579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Compute an improper integral. Suppose $A = [0,\infty) \times [0, \infty) $. Let $f(x,y) = (x+y)e^{-x-y} $. How can I find $ \int_A f $? I know that since $f$ is continuous on $A$, the integral $\int _A f $ exists. Do I need to evaluate $$ \int_{0}^{\infty} \int_{0}^{\infty} (x+y)e^{-x-y}\ dx\ dy\ ??$$
HINT : Rewrite: $$ \int_{0}^{\infty} \int_{0}^{\infty} (x+y)e^{-x-y} dx dy =\int_{0}^{\infty} e^{-y} \int_{0}^{\infty} (x+y)e^{-x} dx dy\tag1 $$ Note that $$ \int_0^\infty z^n e^{-z}\ dz=n!\tag2 $$ Use $(2)$ to evaluate $(1)$.
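Following the hint through (a short completion, using $(2)$ with $n=1$ and $n=0$): the inner integral is
$$\int_0^\infty (x+y)e^{-x}\,dx = \int_0^\infty x e^{-x}\,dx + y\int_0^\infty e^{-x}\,dx = 1 + y,$$
so
$$\int_0^\infty e^{-y}(1+y)\,dy = 1 + 1 = 2,$$
giving $\int_A f = 2$.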
{ "language": "en", "url": "https://math.stackexchange.com/questions/769669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to count generators for a cyclic group Show that there are $\varphi (n)$ generators for a cyclic group $G$ of order $n$. Give their form explicitly. Here $\varphi (n)$ is the Euler's function. I don't know what to do, please help.
Note that $G\cong\Bbb Z_n$, so we can suppose that $G=\{\bar0, \bar1,\ldots,\overline{n-1}\}$. I'll prove that these sentences are equivalent:

1. $\bar a$ is a generator of $G$.
2. For each integer $k$, $n$ divides $ak$ if and only if $n$ divides $k$.
3. $\gcd(a,n)=1$.

$1\Rightarrow2$: Suppose that $\bar a$ generates $G$. Let $k$ be any integer. If $n$ divides $ak$ then $k\bar a=0$. Since the order of $\bar a$ in $G$ is $n$, then $n$ divides $k$. Conversely, if $n$ divides $k$, it is clear that it divides also $ak$. $2\Rightarrow3$: Suppose that $p$ is a common prime divisor of $a$ and $n$. Then $n$ divides $\frac apn$, and, by $2$, $n$ divides $\frac np$. This contradiction tells us that $\gcd(a,n)=1$. $3\Rightarrow1$: Suppose that $\bar a$ does not generate $G$. This means that there exists some positive integer $m<n$ such that $n$ divides $am$. By $3$, $n$ divides $m$, so $n=m$, a contradiction. We can now establish that there are as many generators of $G$ as numbers coprime with $n$ in the set $\{1,\ldots,n\}$, that is, $\varphi(n)$.
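To give the form asked for explicitly (an immediate restatement of the equivalence just proved): if $G=\langle g\rangle$ has order $n$, its generators are exactly the elements $g^{a}$ with $1\le a\le n$ and $\gcd(a,n)=1$, so there are $\varphi(n)$ of them. For instance, in $\Bbb Z_{12}$ these are $\bar1,\bar5,\bar7,\overline{11}$, matching $\varphi(12)=4$.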
{ "language": "en", "url": "https://math.stackexchange.com/questions/769790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Residues of $\frac{x^s}{s}\frac{\zeta'(s)}{\zeta\phantom{'}(s)}$ Going through a proof in analytic number theory, the calculation of the residues of $$ f(s) = \frac{x^s}{s}\frac{\zeta'(s)}{\zeta\phantom{'}(s)} $$ came up. I do have some experience with complex analysis so I tried to compute the residues myself. So far it seems the residues are $$ \mathrm{Res}(0) = \log 2\pi \, , \ \mathrm{Res}(1) = -x \, , \ \mathrm{Res}(-2n) = -\frac{1}{2n\,x^{2n}} $$ where $n \in \mathbb{N}$. I have been able to calculate the residues at $0$ and $1$ using the asymmetric functional equation for $\zeta(s)$, but alternative approaches are still very welcome. My biggest problem is dealing with the residues at $-2n$; does anyone have any suggestions?
Observe that the poles and zeroes of $\zeta$ are all simple poles of its logarithmic derivative, hence the singularities of $s\longmapsto\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s}$ all lie in the half-plane $\{\sigma\leq1\}$. Now, the poles of this function are: the pole of $\frac{x^{s}}{s}$ (at $s=0$), whose residue is $$ \operatorname{Res}\left(\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s},0\right) =\lim_{s\rightarrow0}\left(s\,\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s}\right)=\frac{\zeta'(0)}{\zeta(0)}=\log2\pi\;, $$ the (simple) zeroes $\rho$ of $\zeta$, trivial and non-trivial, whose residues are \begin{align*} \operatorname{Res}\left(\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s},\rho\right) =\lim_{s\rightarrow\rho}\left((s-\rho)\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s}\right)=\frac{x^{\rho}}{\rho}\;, \end{align*} (in particular, at a trivial zero $\rho=-2n$ this gives $-\frac{x^{-2n}}{2n}$), and the pole of $\zeta$ (at $s=1$), whose residue is $$ \operatorname{Res}\left(\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s},1\right) =\lim_{s\rightarrow1}\left((s-1)\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s}\right)=-x\;. $$ In fact, since $(s-1)\frac{\zeta'(s)}{\zeta(s)}\frac{x^{s}}{s}=\frac{(s-1)^{2}\zeta'(s)}{(s-1)\zeta(s)}\frac{x^{s}}{s}$ and $s=1$ is a simple pole of $\zeta$, the Laurent expansion around $1$ is $\zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{+\infty}c_{n}(s-1)^{n}$, which implies $\zeta'(s)=\frac{-1}{(s-1)^{2}}+\sum_{n=1}^{+\infty}nc_{n}(s-1)^{n-1}$. Then conclude by a simple computation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/769888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $a_n\to0,$ then $\sum a_n$ and $\sum (a_n + a_{n+1})$ converge/diverge together? Let $(a_n)$ be a sequence such that $\lim_{n\to\infty} a_n = 0$, and consider the series $\sum a_n$ and $\sum (a_n + a_{n+1})$. Prove/Disprove: The series converge/diverge together. I'll be glad for a hint or some guidance.
Hint: Consider $a_n = (-1)^n + 1/n^2$
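A caveat on this hint: the proposed $a_n$ does not tend to $0$, so it only shows that the hypothesis $\lim a_n = 0$ cannot be dropped. Under the stated hypothesis the two series do in fact converge or diverge together; a short sketch: with $S_N=\sum_{n=1}^N a_n$,
$$\sum_{n=1}^{N}(a_n+a_{n+1}) = S_N + \left(S_{N+1}-a_1\right) = 2S_N + a_{N+1} - a_1,$$
and since $a_{N+1}\to0$, the left-hand side converges if and only if $S_N$ does.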
{ "language": "en", "url": "https://math.stackexchange.com/questions/769971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Differentiability of a Complex Function I'm just doubtful whether my proof(s) for showing functions are complex differentiable suffice as valid proofs. e.g. let's take the classic example $f(z)=\bar{z}$. You inevitably know from studying complex analysis that this function isn't differentiable for any $z \in \mathbb{C}$ and I can prove such using limits. However, is there anything wrong with instead using the Cauchy Riemann equations and just saying $f(z)=x-iy \Rightarrow \frac{\partial u}{\partial x}=1, \frac{\partial v}{\partial y}=-1$ Therefore $f(z)$ isn't complex differentiable for any $z \in \mathbb{C}$ as $\frac{\partial u}{\partial x} \neq \frac{\partial v}{\partial y}$? Just in general for many functions, when it comes to showing they're not complex differentiable, I'm able to show this using limits but it's a whole lot easier just using the Cauchy Riemann equations. Am I allowed to do this for a valid proof?
Yes, the Cauchy–Riemann equations are a necessary condition for complex differentiability, so if they do not hold at a point, the function is not complex differentiable there. But it is good to know how to use the limits in case you are asked specifically not to use the equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/770096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does this absolutely convergent series property work? An absolutely convergent series may be multiplied with another absolutely convergent series. The limit of the product will be the product of the individual series' limits. How does it work?
Consider absolutely convergent series $\sum a_i$ and $\sum b_i$, for $i \ge 0$. The product series is defined to be $$ \sum_{n \ge 0} \left( \sum_{i=0}^n a_i b_{n-i} \right) $$

Proof that this series is absolutely convergent. For any $N$, we have \begin{align*} \sum_{n = 0}^N \left| \sum_{i=0}^n a_i b_{n-i} \right| &\le \sum_{n = 0}^N \sum_{i=0}^n \left| a_i b_{n-i} \right| \\ &\le \sum_{i=0}^N \sum_{j=0}^N |a_i| |b_j| \\ &= \left( \sum_{i=0}^N |a_i| \right) \left( \sum_{j=0}^N |b_j| \right) \end{align*} which is uniformly bounded since the partial sums of $\sum |a_i|$ and $\sum |b_i|$ are uniformly bounded.

Proof that the series converges to the product. We compare the $2N$th partial sum with the product $\left(\sum_{i=0}^N a_i\right) \left( \sum_{i=0}^N b_i \right)$. \begin{align*} \left| \sum_{n=0}^{2N} \sum_{i=0}^n a_i b_{n-i} - \left(\sum_{i=0}^N a_i\right) \left( \sum_{i=0}^N b_i \right) \right| &= \left| \sum_{\substack{i,j \ge 0 \\ i + j \le 2N}} a_i b_j - \sum_{\substack{0 \le i \le N \\ 0 \le j \le N}} a_i b_j \right| \\ &= \left| \sum_{\substack{i + j \le 2N \\ \max(i,j) > N}} a_i b_j \right| \\ &\le \sum_{\substack{i + j \le 2N \\ \max(i,j) > N}} \left| a_i b_j \right| \\ &\le \sum_{i=0}^N \sum_{j=N+1}^{2N} |a_i b_j| + \sum_{i=N+1}^{2N} \sum_{j=0}^{N} |a_i b_j| \\ &\le \left( \sum_{j=N+1}^{2N} |b_j| \right) \sum_{i=0}^\infty |a_i| + \left( \sum_{i=N+1}^{2N} |a_i| \right) \sum_{j=0}^\infty |b_j| \\ &\le \left( \sum_{j=N+1}^{\infty} |b_j| \right) \sum_{i=0}^\infty |a_i| + \left( \sum_{i=N+1}^{\infty} |a_i| \right) \sum_{j=0}^\infty |b_j| \end{align*} Now taking the limit as $N \to \infty$, the terms in parentheses go to $0$ because they are the tail ends of convergent series. Thus $$ \lim_{N \to \infty} \sum_{n=0}^{2N} \sum_{i=0}^n a_i b_{n-i} = \lim_{N \to \infty} \left(\sum_{i=0}^N a_i\right) \left( \sum_{i=0}^N b_i \right) $$ (Both limits exist: the latter is a product of limits which exist, and the former — taken along even partial sums — extends to the full sequence of partial sums by the absolute convergence already proved.) Thus the product series converges absolutely to the product of the two series.
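As a quick numerical sanity check of this theorem (not part of the proof — the two geometric series here are illustrative choices):

```python
# Cauchy product of two absolutely convergent series:
# a_i = (1/2)^i with sum 2, and b_i = (1/3)^i with sum 3/2.
N = 200  # number of terms; the tails beyond this are negligible

a = [0.5 ** i for i in range(N)]
b = [(1 / 3) ** i for i in range(N)]

# c_n = sum_{i=0}^{n} a_i * b_{n-i}, then sum the c_n
product_series = sum(
    sum(a[i] * b[n - i] for i in range(n + 1))
    for n in range(N)
)

print(product_series)   # ~3.0
print(sum(a) * sum(b))  # ~3.0 = 2 * (3/2), the product of the limits
```

Both printed values agree to machine precision, as the theorem predicts.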
{ "language": "en", "url": "https://math.stackexchange.com/questions/770186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I solve this square root problem? I need to solve the following problem: $$\frac{\sqrt{7+\sqrt{5}}}{\sqrt{7-\sqrt{5}}}=\,?$$
\begin{align} \frac{\sqrt{7+\sqrt{5}}}{\sqrt{7-\sqrt{5}}}&=\frac{\sqrt{7+\sqrt{5}}}{\sqrt{7-\sqrt{5}}}\cdot \frac{\sqrt{7+\sqrt{5}}}{\sqrt{7+\sqrt{5}}}\\ &=\frac{(\sqrt{7+\sqrt{5}})^2}{\sqrt{(7-\sqrt{5})(7+\sqrt{5})}}\\ &=\frac{7+\sqrt{5}}{\sqrt{7^2-(\sqrt{5})^2}}\\ &=\frac{7+\sqrt{5}}{\sqrt{49-5}}\\ &=\frac{7+\sqrt{5}}{\sqrt{44}}\\ &=\frac{7+\sqrt{5}}{2\sqrt{11}}\cdot\frac{\sqrt{11}}{\sqrt{11}}\\ &=\frac{7\sqrt{11}+\sqrt{5\cdot11}}{2(\sqrt{11})^2}\\ &=\frac{7\sqrt{11}+\sqrt{55}}{22} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/770259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Use mathematical induction to prove that $(3^n+7^n)-2$ is divisible by 8 for all non-negative integers. Base step: $3^0 + 7^0 - 2 = 0$ and $8\mid 0$. Suppose that $8\mid f(n)$; let's say $f(n)= (3^n+7^n)-2= 8k$. Then $f(n+1) = (3^{n+1}+7^{n+1})-2 = (3\cdot3^{n}+7\cdot7^{n})-2$. This is the part where I get stuck. Any help would be really appreciated. Thanks.
$f(n)$ satisfies $f(n) = 11 f(n-1) - 31f(n-2) +21f(n-3)$, since the characteristic polynomial of the roots $1, 3, 7$ is $(x-1)(x-3)(x-7)=x^3-11x^2+31x-21$. The coefficients are integers, so if $8$ divides three consecutive values of $f$ it divides the next. By induction it's then enough to check the claim for $n =0,1,2$.
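Alternatively, the step begun in the question can be finished directly (just algebra on the expression where the asker got stuck):
$$f(n+1) = 3\cdot 3^{n}+7\cdot 7^{n}-2 = 3\left(3^{n}+7^{n}-2\right)+4\cdot 7^{n}+4 = 3f(n)+4\left(7^{n}+1\right).$$
By hypothesis $8\mid f(n)$, and $7^{n}+1$ is even since $7^{n}$ is odd, so $8\mid 4\left(7^{n}+1\right)$; hence $8\mid f(n+1)$.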
{ "language": "en", "url": "https://math.stackexchange.com/questions/770344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Computing the Gaussian curvature of this surface $z=e^{(-1/2)(x^2+y^2)}$. Compute the Gaussian curvature of $z=e^{(-1/2)(x^2+y^2)}$. Sketch this surface and show where $K=0 $, $K>0$, and $K<0$. So would the easiest way to do this question be to construct a parametrization $$\mathbf{x}(u,v)=(u, v, e^{-\frac{1}{2}(u^2+v^2)} )?$$ If so, I calculated the Normal to be $$\left( \frac{u}{u^2+v^2+e^{u^2+v^2}}, \frac{v}{u^2+v^2+e^{u^2+v^2}}, 1 \right).$$ Is that correct? Thanks
If $g$ is the first fundamental form and $h$ is the second fundamental form, then we know $$K = \frac{ \det h}{ \det g } $$ Since you have a graph $z=f(x,y)$, it's straightforward to compute the fundamental forms by definition. I'll leave $g$ to you, but $h$ is given by $$ h = \frac{f_{xx}\, dx^2 + 2 f_{xy}\, dxdy + f_{yy}\, dy^2}{\sqrt{1+f_x^2+f_y^2}} $$ (the factor $\frac{1}{\sqrt{1+f_x^2+f_y^2}}$ comes from pairing the second derivatives with the unit normal). Hint: Consider $\gamma = ( x,y,f )$; by definition of the first fundamental form we have $$g_{ij} = ( \gamma_i , \gamma_j) $$ where the subscript denotes a derivative in $i$ and $( \cdot , \cdot)$ denotes the inner product. So we have $$g = ( 1 + f_x^2 ) dx^2 + 2f_xf_y dxdy + ( 1 + f_y^2) dy^2 $$
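Carrying this recipe out for the surface in the question, with $f(x,y)=e^{-\frac12(x^2+y^2)}$ and $r^2=x^2+y^2$ (a sketch of the computation, worth re-deriving): one has $f_x=-xf$, $f_y=-yf$, $f_{xx}=(x^2-1)f$, $f_{xy}=xyf$, $f_{yy}=(y^2-1)f$, so
$$f_{xx}f_{yy}-f_{xy}^2 = f^2\left[(x^2-1)(y^2-1)-x^2y^2\right] = \left(1-r^2\right)e^{-r^2}$$
and
$$K = \frac{f_{xx}f_{yy}-f_{xy}^2}{\left(1+f_x^2+f_y^2\right)^2} = \frac{\left(1-r^2\right)e^{-r^2}}{\left(1+r^2e^{-r^2}\right)^2}.$$
The denominator is positive, so $K>0$ inside the unit circle $r<1$, $K=0$ on $r=1$, and $K<0$ outside.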
{ "language": "en", "url": "https://math.stackexchange.com/questions/770524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Using L'Hospital's Rule to evaluate limit to infinity I'm given this problem and I'm not sure how to solve it. I was only ever given one example in class on using L'Hospital's rule like this, but it is very different from this particular problem. Can anyone please show me the steps to solve a problem like this? Evaluate the limit using L'Hospital's rule if necessary $$\lim_{ x \rightarrow \infty } \left( 1+\frac{11}{x} \right) ^{\frac{x}{9}}$$ Basically, I only know the first step: $$\lim_{ x \rightarrow \infty } \frac{x}{9} \ln \left( 1+\frac{11}{x} \right)$$ WolframAlpha evaluates it as $e^{\frac{11}{9}}$ but I obviously have no idea how to get to that point.
So the first step is to notice that the original expression can be rewritten as $$\lim_{x \rightarrow \infty }e^{\ln{\left( \left( 1+\frac{11}{x} \right) ^{\frac{x}{9}}\right)}}$$ which can be made into: $$e^{\lim_{x \rightarrow \infty }\frac{x}{9}\ln\left( 1+\frac{11}{x}\right)}$$ So now we just need to evaluate the exponent of $e$: $$\lim_{x \rightarrow \infty }\frac{x}{9}\ln\left( 1+\frac{11}{x}\right)$$ Take out the $\frac19$: $$\frac{1}{9}\lim_{x \rightarrow \infty }x\ln\left( 1+\frac{11}{x}\right)$$ which can now be written as: $$\frac{1}{9}\lim_{x \rightarrow \infty }\frac{\ln\left( 1+\frac{11}{x}\right)}{\frac{1}{x}}$$ And from here you can apply l'Hôpital's rule, since the limit has the indeterminate form $\frac00$.
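For completeness, the remaining l'Hôpital step carried out:
$$\lim_{x \rightarrow \infty }\frac{\ln\left( 1+\frac{11}{x}\right)}{\frac{1}{x}} = \lim_{x \rightarrow \infty }\frac{\dfrac{-11/x^{2}}{1+11/x}}{-1/x^{2}} = \lim_{x \rightarrow \infty }\frac{11}{1+\frac{11}{x}} = 11,$$
so the exponent is $\frac19\cdot 11 = \frac{11}{9}$ and the original limit equals $e^{11/9}$, matching WolframAlpha.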
{ "language": "en", "url": "https://math.stackexchange.com/questions/770614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to compute time ordered Exponential? So say you have a matrix dependent on a variable t: $A(t)$ How do you compute $e^{A(t)}$ ? It seems Sylvester's formula, my standard method of computing matrix exponentials can't be applied here given the varying nature of the matrix and furthermore the fact that it may not always have distinct eigenvalues.
There seems to be some confusion here between two different problems: the first is the computation of a time-ordered (also called path-ordered) exponential, a matrix $E(t,t')$ solving $\frac{d}{dt}E(t,t') = M(t)E(t,t')$; the second is the computation of the matrix exponential of a time-dependent matrix, $\exp(A(t))$. Recall that the two problems coincide whenever $M$ commutes with itself at different times. Indeed, if $M(t')M(t)-M(t)M(t')=0$ for all $t,t'$, then $E(t)=\exp(\int_0^t M(\tau)d\tau)$, and so the time-ordered exponential is an ordinary matrix exponential. It is then straightforward to compute with the usual expm (Matlab) or MatrixExp (Mathematica) functions and the like. If the matrix $M$ does not commute with itself at different times, then $E(t)$ is not an ordinary matrix exponential. As noted in Yiteng's response, the problem is then much more difficult and admits few analytical answers. As far as I can tell, since Magnus series are impossible (barring rare exceptions?) to compute exactly to all orders, the only analytical approach is the path-sum formulation, which looks quite difficult. If numerics are sufficient for you, then you can always Magnus the hell out of it.
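If a numerical answer suffices, here is a minimal sketch of the simplest scheme (the function name and step count are illustrative choices, not from any particular library): approximate $E(t,0)$ by a time-ordered product of short-time matrix exponentials, i.e. a first-order Magnus / Lie–Trotter approximation.

```python
import numpy as np
from scipy.linalg import expm

def time_ordered_exp(M, t, steps=1000):
    """Approximate E(t, 0) solving dE/dt = M(t) E, E(0) = I,
    by a product of short-time exponentials (first-order Magnus)."""
    dt = t / steps
    E = np.eye(M(0.0).shape[0])
    for k in range(steps):
        tk = (k + 0.5) * dt          # midpoint of the k-th time slice
        E = expm(M(tk) * dt) @ E     # later times multiply on the left
    return E

# Example where M(t) does not commute with itself at different times,
# so the naive exp(integral of M) would be wrong:
M = lambda t: np.array([[0.0, t], [-1.0, 0.0]])
print(time_ordered_exp(M, 1.0))
```

Higher-order Magnus integrators improve on this by adding commutator corrections on each step, but the structure — a left-ordered product over time slices — is the same.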
{ "language": "en", "url": "https://math.stackexchange.com/questions/770679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Evaluate analytically...$\lim\limits_{x\to 0} \left(\frac{\sin(x)}{x}\right)^{\cot^2(x)}$ Can someone help? My professor likes to call these easy problems...Wolfram Alpha says the answer should be $e^{-1/6}$. Every time I do it I get something different. Help!
$$\lim_{x \to 0} \left( \frac{\sin x}{x} \right) ^ {\cot^2{x}}$$ A simpler solution than l'Hôpital's rule is to use Taylor expansions, as we are interested in the function as $x \to 0$. Note that $$\frac{\sin{x}}{x} = 1 - \frac{x^2}{6} + O(x^4)$$ Since $\tan{x} = x + O(x^3)$, we have $\cot^2{x} = \frac{1}{x^2} + O(1)$. The $O(1)$ part of the exponent contributes a factor tending to $1$, since $O(1)\cdot\ln\left(1-\frac{x^2}{6}+O(x^4)\right) = O(x^2)$. Dropping it, we see $$\lim_{x \to 0} \left( \frac{\sin x}{x} \right) ^ {\cot^2{x}} = \lim_{x \to 0} \left( 1 - \frac{x^2}{6}\right)^{1/x^2} = \lim_{n \to \infty} \left( 1 - \frac{1}{6n} \right)^n = e^{-1/6},$$ which should look familiar.
{ "language": "en", "url": "https://math.stackexchange.com/questions/770759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Inequality Problem of examination Show that if $n > 2$, then $(n!)^2 > n^n$. I cannot find any way to do this. Please help. I tried breaking up $n!$ and then arguing, but failed.
$$(n!)^2=(1\cdot n)\times(2\cdot(n-1))\times(3\cdot(n-2))\times \cdots \times(n\cdot 1)$$ But $$n\ge a+1 \implies an\ge a(a+1)\implies (a+1)(n-a)\ge n,$$ and equality is achieved iff $a=0$ or $a=n-1$. Since for $n>2$ the factor with $a=1$ has $1\neq 0$ and $1\neq n-1$, at least one factor in the product strictly exceeds $n$, giving the strict inequality $(n!)^2>n^n$.
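A concrete illustration of the pairing for $n=4$ (a sanity check only):
$$(4!)^2 = (1\cdot4)(2\cdot3)(3\cdot2)(4\cdot1) = 4\cdot6\cdot6\cdot4 = 576 > 256 = 4^4.$$
The end factors equal $n$ while the middle ones strictly exceed it.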
{ "language": "en", "url": "https://math.stackexchange.com/questions/770950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does $\int^{ab}_{a} \frac{1}{x} dx = \int^{b}_{1} \frac{1}{t} dt$? I can't understand how the integral having limits from $a$ to $ab$ in Step 1 is equivalent to the integral having limits from $1$ to $b$. I'm a beginner here. Please explain in detail. \begin{align*} \ln(ab) = \int^{ab}_{1} \frac{1}{x} dx &= \int^{a}_{1} \frac{1}{x} dx + \int^{ab}_{a} \frac{1}{x} dx\\ &= \int^{a}_{1} \frac{1}{x} dx + \int^{b}_{1} \frac{1}{at} d(at)\\ &= \int^{a}_{1} \frac{1}{x} dx + \int^{b}_{1} \frac{1}{t} dt\\ &= \ln(a) + \ln(b). \end{align*}
Given that the integral $\int_a^b f(x)\>dx$ is "the area under the curve $y=f(x)$ for $x$ between $a$ and $b$", the equality of the two integrals $$\int_1^b{dx\over x},\quad \int_a^{ab} {dt\over t}$$ follows with an elementary geometric argument: The map $$(x,y)\mapsto\left(a x,\>{y\over a}\right)$$ which stretches by the factor $a>0$ in $x$-direction and compresses by the same factor in $y$-direction maps the first area onto the second.
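For the record, the purely computational version of the same step — the substitution $t=ax$ that the question's displayed derivation is using — reads:
$$\int_a^{ab}\frac{dt}{t} \;\overset{t=ax}{=}\; \int_1^b\frac{a\,dx}{ax} = \int_1^b\frac{dx}{x},$$
since $dt=a\,dx$ and $t$ runs from $a$ to $ab$ as $x$ runs from $1$ to $b$.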
{ "language": "en", "url": "https://math.stackexchange.com/questions/771015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }