How can I find maximum and minimum modulus of a complex number? I have this problem. Let a complex number $z$ be given such that $$|z+1|+ 4 |z-1|=25.$$ Find the greatest and the least modulus of $z$. I tried the minimum. Put $A(-1,0)$, $B(1,0)$ and let $M(x,y)$ be the point representing $z$. We have that $O(0,0)$ is the midpoint of the segment $AB$. Therefore $$OM^2 = \dfrac{AM^2 + BM^2}{2}-\dfrac{AB^2}{4}.$$ On the other hand, by Cauchy-Schwarz, $$25=AM+4BM \leqslant \sqrt{(1^2 + 4^2)(AM^2 + BM^2)},$$ therefore $$AM^2 + BM^2 \geqslant \dfrac{625}{17},$$ $$OM^2 \geqslant \dfrac{625}{17} -1 = \dfrac{591}{17}.$$ Thus, the minimum of $|z|$ is $\sqrt{\dfrac{591}{17}}$. This answer does not agree with Mathematica, which gives $\dfrac{22}{5}$. Where is my solution wrong, and how can I find the maximum?
For maximum $|z|$, we have \begin{align} |5z|&=|(z+1)+4(z-1)+3|\\ &\le|z+1|+4|z-1|+|3|\\ &\le25+3\\ |z|&\le \frac{28}{5} \end{align} with equality if and only if $\displaystyle z=\frac{28}{5}$.
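For a quick numerical sanity check of both extremes (a sketch assuming NumPy/SciPy, not part of the proof): along each ray from the origin the constraint value $|z+1|+4|z-1|$ is convex in the radius and equals $5<25$ at $r=0$, so it crosses $25$ exactly once, and one can scan directions:

```python
import numpy as np
from scipy.optimize import brentq

def g(r, t):
    z = r * np.exp(1j * t)
    return abs(z + 1) + 4 * abs(z - 1) - 25

# For each direction t, find the unique radius r with |z+1| + 4|z-1| = 25.
radii = [brentq(g, 0.0, 10.0, args=(t,)) for t in np.linspace(0, 2 * np.pi, 2000)]
print(min(radii), max(radii))  # ~4.4 (= 22/5) and ~5.6 (= 28/5)
```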
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Integration of $1/(1+a \csc^2(x))$ Integration of $$\int_{0}^{\frac{(M-1)\pi}{M} }\frac{1}{1+\alpha \csc^2(x)} dx,$$ where $ \alpha $ is a constant. I tried taking $\cot(x) = t$; differentiating with respect to $x$ we get $-\csc^2(x)dx = dt$. And as we know that $\csc^2(x)= \cot^2(x) +1$, I tried substituting these values, but did not get any result.
We can simplify it as $\displaystyle1-\frac{a}{\sin^2 (x)+a} $. Then by using $\displaystyle \sin (x)=\frac {\tan x}{\sec x},\ \sec^2x=\tan^2x+1$, the second term becomes $\displaystyle\frac {a\sec^2 (x)}{(a+1)\tan^2 (x)+a} $. Now let $\tan (x)=t \implies\sec^2 (x)dx=dt $. Thus this part changes to the integral $\displaystyle\int \frac {a}{(a+1)t^2+a}dt$, which has a well-known antiderivative. After all the simplification and re-substituting we have the final result $$ I=x-\sqrt{\frac{a}{a+1}}\arctan\left(\frac {\sqrt {a+1}\tan (x)}{\sqrt {a}}\right).$$ I haven't put in the limits as it would make the work messy. I hope you can continue from here.
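A quick numerical check of this antiderivative (a sketch assuming SciPy; the comparison is done on $[0,1]$, safely below the discontinuity of $\tan$ at $\pi/2$):

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
F = lambda x: x - np.sqrt(a / (a + 1)) * np.arctan(np.sqrt(a + 1) * np.tan(x) / np.sqrt(a))

# Definite integral of the original integrand versus F(1) - F(~0).
val, _ = quad(lambda x: 1.0 / (1.0 + a / np.sin(x) ** 2), 1e-9, 1.0)
print(val, F(1.0) - F(1e-9))  # the two numbers agree
```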
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Tight integral inequality We define the integral $$ J_k = \int_0^\pi (a+bx) \frac{\sin^3x}{1 + k \cos^2x}\,\mathrm{d}x\,, $$ and $$ J = J_1 = \int_0^\pi (a+bx) \frac{\sin^3x}{1 + \cos^2x}\,\mathrm{d}x\,, $$ where $a,b \in \mathbb{R}$. Prove that $J_{1/k} / J$ is independent of $a,b$ and that $$ J_{1/3} >J \cdot (\log 3)\,, $$ without a calculator. Using the following identity $$ \int_0^{\pi} x f(\sin x) \,\mathrm{d}x = \frac{\pi}{2}\int_0^{\pi} f(\sin x)\,\mathrm{d}x $$ I think I was able to prove the first part of the question. However, I am stuck on the second part; is there an easier way than explicitly calculating both integrals and comparing?
I don't know any slick simplification, but the integral can be calculated for any $k$, so we first calculate it for general $k$. Using the identity from the question, $\int_0^\pi x f(\sin x)\,dx=\frac\pi2\int_0^\pi f(\sin x)\,dx$, we get $$J_k=\left(a+\frac{b\pi}{2}\right)\int _0 ^{\pi} \frac{\sin^3 (x)}{1+k\cos^2 (x)}\,dx.$$ Now let $\cos (x)=u $, so $-\sin (x)dx=du $, and $$\int _0 ^{\pi} \frac{\sin^3 (x)}{1+k\cos^2 (x)}\,dx=\int _{-1} ^1 \frac {1-u^2}{1+ku^2}\,du=\int_{-1}^1\left(\frac{k+1}{k}\cdot\frac{1}{1+ku^2}-\frac1k\right)du=\frac2k\left(\frac{k+1}{\sqrt k}\arctan (\sqrt k)-1\right).$$ The factor $d=a+\frac{b\pi}{2}$ is common to all $k$'s, so it cancels in the ratio $J_{1/k}/J$, which is therefore independent of $a,b$. Now we put $k=1/3$ and $k=1$: $$\frac {J _{1/3}}{J_1}=\frac{6\left(\frac{4}{\sqrt3}\arctan\frac{1}{\sqrt3}-1\right)}{2\left(2\arctan (1)-1\right)}=\frac{\frac{4\pi}{\sqrt3}-6}{\pi-2}\approx 1.0995>\log 3\approx 1.0986.$$
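A numerical cross-check of both parts (a sketch assuming SciPy; the particular values of $a,b$ are arbitrary precisely because the ratio should not depend on them):

```python
import numpy as np
from scipy.integrate import quad

def J(k, a=0.7, b=1.3):
    f = lambda x: (a + b * x) * np.sin(x) ** 3 / (1 + k * np.cos(x) ** 2)
    return quad(f, 0, np.pi)[0]

print(J(1 / 3) / J(1), np.log(3))  # ~1.0995 vs ~1.0986, so the bound is tight
```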
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve for $x$ in $\cos(2 \sin ^{-1}(- x)) = 0$ Solve $$\cos(2 \sin ^{-1}(- x)) = 0$$ I get the answer $\frac{-1}{ \sqrt2}$ by solving like this: \begin{align}2\sin ^{-1} (-x )&= \cos ^{-1} 0\\ 2\sin^{-1}(- x) &= \frac\pi2\\ \sin^{-1}(- x) &= \frac\pi4\\ -x &= \sin\left(\frac\pi4\right)\end{align} Thus $x =\frac{-1}{\sqrt2}$. But the correct answer is $\pm\frac{1}{\sqrt2}$. Where am I going wrong?
You need to solve $\cos \left(2 \arcsin(-x) \right) = 0$. Let $y = 2 \arcsin(-x)$; then $\cos y = 0$, so $y = \pi/2 \pm n\pi$. Then, $$ 2 \arcsin(-x) = \frac{\pi}{2} \pm n\pi $$ which implies $$ x = -\sin \left( \frac{\pi}{4} \pm \frac{n\pi}{2} \right) $$ Can you simplify this?
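A two-line numerical check that both candidate values solve the original equation (assuming NumPy):

```python
import numpy as np

for x in (1 / np.sqrt(2), -1 / np.sqrt(2)):
    print(x, np.cos(2 * np.arcsin(-x)))  # both print ~0
```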
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why does arctan(1/n) summed not converge? My question is as follows: WolframAlpha tells me that the series $S$ doesn't converge, but why? $$S=\sum_{n=1}^{\infty}\arctan(\frac{1}{n})$$ $$\lim_{n\to \infty} \arctan(\frac{1}{n})=0$$ So $S$ (slowly) stops growing when $n$ gets larger and larger. So why doesn't it converge? Shouldn't it stop growing near infinity, making it converge?
For values of $x$ near $0$, $\arctan(x) \sim x$. If you know calculus, this is because the derivative of $\arctan(x)$, $\displaystyle \frac{1}{1+x^2}$, approaches $1$, the derivative of $x$, as $x$ approaches $0$. It is a well-known fact that the harmonic series $1+\frac{1}{2}+\frac{1}{3}+\cdots$ does not converge. Since $\arctan\frac{1}{n}$ behaves like $\frac{1}{n}$ as $n$ gets bigger, the series $\sum\arctan\frac1n$ diverges by comparison with the harmonic series.
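One can watch this numerically: the partial sums of $\arctan(1/n)$ track the harmonic sums, and their difference converges because $\arctan(1/n)-1/n=O(1/n^3)$ (a sketch assuming NumPy):

```python
import numpy as np

n = np.arange(1, 10**6 + 1)
partial = np.cumsum(np.arctan(1.0 / n))
harmonic = np.cumsum(1.0 / n)
print(partial[-1], harmonic[-1], partial[-1] - harmonic[-1])
# both sums keep growing like log(n); their difference converges (to about -0.28)
```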
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Proposition $5.2$, Section $2.3$ (Cyclic Groups) in Dummit and Foote's Abstract Algebra I'm trying to give an alternative proof of a proposition in Dummit and Foote's Abstract Algebra, but I am unable to complete the details. The proposition is: \begin{align*} \text{Let $G$ be a group, let $x \in G$ and let $a \in \mathbb{Z} \setminus \{0\}$.} \; \text{If} \; |x| = n < \infty, \; \text{then} \; |x^a| = \frac{n}{(n,a)} \end{align*} This is essentially Proposition 5.2 in section 2.3, Cyclic Groups and Cyclic Subgroups, on p. 57. Here's my attempt: Assume that $n \mid a$. Then $a = nk, k \in \mathbb{Z}$. Then $x^a = x^{nk} = (x^n)^{k} = 1^k = 1$. Also noting that in this case $(n, a) = n$, we have that $|x^a| = 1 = n/(n,a)$. Alternatively, assume that $n \nmid a$. Then by the Division Algorithm, we have that $a = nq + r$, where $\; q, r \in \mathbb{Z}$ such that $0 < r < n$. Then, we have that \begin{align} x^a = x^{nq + r} = x^{nq}x^r = (x^{n})^q x^r = 1(x^r) = x^r. \end{align} Assuming that $|x^a| = m$, we have that \begin{align} (x^a)^m = x^{am} = x^{rm} = 1. \end{align} This implies that $n \mid rm$. Since $n \nmid r$, we then must have that $n \mid m$, using a result in elementary number theory, which I think I have been able to prove on the side. I'll skip the details on this front. I'm unable to proceed with the proof. I'm deliberately trying to prove the result without invoking/playing around with the expression given in the statement of the theorem. I'm trying to derive an expression, analogous to the one given in the statement of the theorem, using the Division Algorithm and divisibility theorems. I'm not so good with number theory, though. Any hints?
It's not restrictive to assume $a>0$, as $|x^a|=|x^{-a}|$. Let $y=x^a$. If $y^m=1$, then $n\mid am$, because $x^{am}=y^m=1$. Conversely, if $n\mid am$, then $am=nq$ and $$ y^m=x^{am}=x^{nq}=1 $$ In particular, $|y|$ is the minimal $m$ such that $n\mid am$. If $d=\gcd(n,a)$, then $n=dn'$ and $a=da'$. Suppose $n\mid am$, so $am=nq$; then $da'm=dn'q$ and $a'm=n'q$. In particular $a'm=n'q$ is a common multiple of $a'$ and $n'$, hence a multiple of $a'n'=\operatorname{lcm}(n',a')$, because $\gcd(n',a')=1$. Since $|y|$ is the minimal such $m$, we have $a'|y|=a'n'$. Therefore $|y|=n'=n/\gcd(n,a)$.
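A brute-force check of the formula in a concrete cyclic group, here the additive group $\mathbb{Z}/n\mathbb{Z}$, where the order of $a$ is the least $m$ with $am\equiv 0$ (a small sketch using only the standard library):

```python
from math import gcd

n = 12
for a in range(1, 3 * n):
    # order of a in Z/nZ, computed directly
    m = next(m for m in range(1, n + 1) if (a * m) % n == 0)
    assert m == n // gcd(n, a)
print("|x^a| = n/gcd(n, a) verified for n =", n)
```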
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Name for logical predicates which act like homomorphisms from set union to logical conjunction? Is there a name for predicates defined over sets with the property that: $$P(S\cup Q)\iff P(S)\land P(Q)$$ For example the predicate $P(Q)=``Q\text{ is empty"}$ would be one such predicate because: $$S\cup Q=\emptyset\iff (S=\emptyset)\land (Q=\emptyset)$$ It looks sort of like the definition for a homomorphism between algebraic structures.
These are semilattice homomorphisms. A semilattice is a set with a binary operation that is associative, commutative, and idempotent (i.e., $x \vee x = x$). As in algebra, a semilattice homomorphism is a function that preserves the operation. Examples of semilattices include:

* subsets of a given set, where the operation is union
* subsets of a given set, where the operation is intersection
* propositions, where the operation is conjunction
* propositions, where the operation is disjunction.

So, if you specify what your semilattices are, you get the notion of homomorphism you're looking for. Something to file away for later, since I'm guessing you haven't seen category theory yet: it is very natural, also, to view these maps as functors between certain categories. You're then asking questions related to whether the functors are "left exact" or "right exact."
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Boolean algebra, logic expression minimization $$(A\land \neg A)\lor((A \land B) \lor(A\land B\land \neg C ))$$ I'm trying to minimize this Boolean expression with Boolean algebra, but I can't minimize it completely. Can I get some help?
It can be simplified as follows: \begin{array}{lll} & (A \land \neg A) \lor ((A \land B) \lor (A \land B \land \neg C)) & \text{ Given }\\ & F \lor ((A \land B) \lor (A \land B \land \neg C)) & \text{ Complement }\\ & (A \land B) \lor (A \land B \land \neg C) & \text{ Identity }\\ & (A \land B) \land (T \lor \neg C) & \text{ Distributive }\\ & (A \land B) \land T & \text{ Domination }\\ & A \land B & \text{ Identity }\\ \end{array}
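Since there are only eight cases, the simplification is easy to verify exhaustively (a sketch using only the standard library):

```python
from itertools import product

for A, B, C in product([False, True], repeat=3):
    original = (A and not A) or ((A and B) or (A and B and not C))
    assert original == (A and B)
print("(A ^ ~A) v ((A ^ B) v (A ^ B ^ ~C)) simplifies to A ^ B")
```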
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What annual installment will discharge a debt of $\large\bf{₹}$ $1092$ due in $3$ years at $12\%$ Simple Interest? Question : What annual installment will discharge a debt of $\large\bf{₹}$$1092$ due in $3$ years at $12\%$ Simple Interest? Options a. $\large\bf{₹}$$300$ b. $\large\bf{₹}$$225$ c. $\large\bf{₹}$$400$ d. $\large\bf{₹}$$325$ My Answer : $\large\bf{₹}$$495.04$ Claimed Answer : $\large\bf{₹}$$325$ Doubt : But if the installment is $\large\bf{₹}$$325$, then in $3$ years we pay a total of $\large\bf{₹}$$975$, which is less than the disbursed loan amount of $\large\bf{₹}$$1092$. So, my question is: how are we getting $\large\bf{₹}$$325$? What's the approach? Is $\large\bf{₹}$$1092$ the principal amount, or is it the principal amount + interest?
You have a debt whose value at the end of three years is $1092$; this is the future value of the debt, so $1092$ is principal plus interest. You pay the first installment of $x$ at the end of the first year. This payment accumulates simple interest for two years to get its value at the end of the third year; the factor is $1.24 (=1+0.12\cdot 2)$. At the end of the second year you pay the next (second) installment $x$, which accumulates interest for one year; this gives the factor $1.12$. At the end of the third year you just pay $x$; no interest accrues on this payment. Therefore the equation is $$x\cdot 1.24+x\cdot 1.12+x=1092,$$ i.e. $3.36\,x=1092$, so $x=325$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question about inflection point Does the function $$ f(x)= \begin{cases} x^2 & x\leq1\\ 2-x^2 & x>1 \end{cases} $$ have an inflection point at $x=1$, or does it not, because $f$ isn't differentiable at $x=1$? Thanks!
Surprise surprise! A function doesn't have to be differentiable in order to have an inflection point. Simply put, an inflection point is a point at which $f(x)$ switches concavity. $\displaystyle \lim_{x \to 1^-}f''(x)=2$ and $\displaystyle \lim_{x \to 1^+}f''(x)=-2$. We can easily see that $f(x)$ is continuous at $x=1$. Because the concavity changes sign at $x=1$, yes, $f(x)$ has a point of inflection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
proof problem on proving $x_{1}=1,x_{n+1}=\frac{1}{2}\left(x_{n}+\frac{2}{x_{n}}\right)$ has a limit Sequence $\{x_{n}\}$ is defined by $$x_{1}=1,\quad x_{n+1}=\frac{1}{2}\left(x_{n}+\frac{2}{x_{n}}\right).$$ As for proving whether the sequence has a limit, one of my friends told me his proof, as follows: First, assume the sequence has a limit $x$, then take the limit on both sides of $$x_{n+1}=\frac{1}{2}\left(x_{n}+\frac{2}{x_{n}}\right),$$ that is $$\lim _{n\rightarrow \infty }x_{n+1} =\lim _{n\rightarrow\infty }\left(\frac{1}{2}\left(x_{n}+\frac{2}{x_{n}}\right)\right),$$ which can be simplified to $$x =\frac{1}{2}\left(x+\frac{2}{x}\right),$$ which is easily solved to show that $x=\sqrt 2 $ or $ -\sqrt2$, and it is easy to show the limit should be $\sqrt 2 $. Since we have already worked out the value of the limit, he claims this also proves the existence of the limit as a side effect.

* Does the proof really prove that the sequence has a limit?
* Is it possible that we compute the limit value of a sequence using a method similar to the one above (without first proving the existence of the limit), but later find that the sequence doesn't have a limit?
For $x>0$ we have $x+\frac2x\geq 2\sqrt{x} \sqrt{\frac2x}=2\sqrt2$ by AM-GM. So $x_n\ge \sqrt2$ for all $n\ge 2$. Further, for such $n$, $\frac{x_{n+1}}{x_n}=\frac12\left(1+\frac2{x_n^2}\right)\le 1.$ So the sequence is decreasing from the second term on and bounded below, hence it converges. Its limit is already known.
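The recursion (the Babylonian method for $\sqrt2$) converges very quickly, as a direct computation shows:

```python
x = 1.0
for _ in range(6):
    x = 0.5 * (x + 2.0 / x)
    print(x)
# 1.5, 1.41666..., 1.41421568..., then sqrt(2) to machine precision
```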
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
How to evaluate $\lim_{n \rightarrow \infty}\left(\frac{(2n)!}{n!n^n} \right)^{1/n}$? Find$$\lim_{n \rightarrow \infty}\left(\frac{(2n)!}{n!n^n} \right)^{1/n}$$ Is there some trick in this question? It seems it must simplify to something, but I am unable to solve it.
Using Stirling's approximation, just for the sake of it and for reference (and to give an alternative approach to that of the excellent answer by lab bhattacharjee). Caveat: overly detailed. $$ n! \operatorname*{\sim}_{n\to\infty} \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \tag{Stirling's approximation} $$ yields $$ \frac{(2n)!}{n!n^n} \operatorname*{\sim}_{n\to\infty} \frac{\sqrt{4\pi n}\left(\frac{2n}{e}\right)^{2n}}{\sqrt{2\pi n}\left(\frac{n}{e}\right)^n n^n} = \frac{\sqrt{2}\left(\frac{2}{e}\right)^{2n}}{\left(\frac{1}{e}\right)^n}= \sqrt{2}\left(\frac{4}{e}\right)^{n} $$ i.e. $$ \frac{(2n)!}{n!n^n} = \sqrt{2}\left(\frac{4}{e}\right)^{n} + o\left(\left(\frac{4}{e}\right)^{n}\right). $$ From there, $$\begin{align} \left(\frac{(2n)!}{n!n^n}\right)^{1/n} &= \exp\left(\frac{1}{n}\ln\left(\frac{(2n)!}{n!n^n}\right)\right) = \exp\left(\frac{1}{n}\ln\left(\sqrt{2}\left(\frac{4}{e}\right)^{n} + o\left(\left(\frac{4}{e}\right)^{n}\right)\right)\right) \\ &= \exp\left(\frac{1}{n}\ln\left(\sqrt{2}\left(\frac{4}{e}\right)^{n}\right)+\frac{1}{n}\ln(1+o(1))\right) \\ &= \exp\left(\frac{1}{n}\ln\left(\left(\frac{4}{e}\right)^{n}\right)+\frac{1}{n}\ln(\sqrt{2})+\frac{1}{n}\ln(1+o(1))\right) \\ &= \exp\left(\ln\left(\frac{4}{e}\right)+o(1)+o(1)\right) \\ &= \left(\frac{4}{e}\right)e^{o(1)} \\ &\xrightarrow[n\to\infty]{} \boxed{\frac{4}{e}}. \end{align}$$
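As a numerical sanity check of the limit (using only the standard library; log-gamma avoids overflowing factorials):

```python
import math

for n in (10, 100, 1000):
    # log of (2n)! / (n! * n^n), using lgamma(k+1) = log(k!)
    log_ratio = math.lgamma(2 * n + 1) - math.lgamma(n + 1) - n * math.log(n)
    print(math.exp(log_ratio / n))  # approaches 4/e = 1.4715...
```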
{ "language": "en", "url": "https://math.stackexchange.com/questions/2315943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 7, "answer_id": 0 }
Count the number of bases of the vector space $\mathbb{C}^3$. Problem: Consider the set of all those vectors in $\mathbb{C}^3$ each of whose coordinates is either $0$ or $1$; how many different bases does this set contain? In general, a candidate basis is a set $$B=\{(x_1,x_2,x_3),(y_1,y_2,y_3),(z_1,z_2,z_3)\}.$$ There are $8(6\cdot8+7)=440$ possible $B$s that contain unique elements with coordinates $0$ and $1$. Now there are $6\cdot 8+7$ sets that contain the element $(0,0,0)$, which makes the set $B$ linearly dependent, and thus we are left with $385$ sets. Beyond this, I am finding it difficult to compute the final answer. Any hint/suggestion will be much appreciated.
Knowing that none of $x,y,z$ can be $(0,0,0)$, there are only $7$ choices for each. Since they must be different, you only have $\binom{7}{3}=35$ sets to check. This is small enough to sort through by hand.
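For the record, the sorting can also be automated (a sketch assuming NumPy; determinants of $0/1$ matrices are integers, so rounding the floating-point determinant is safe):

```python
from itertools import combinations, product
import numpy as np

vecs = [v for v in product([0, 1], repeat=3) if any(v)]  # the 7 nonzero vectors
bases = sum(1 for trio in combinations(vecs, 3)
            if round(np.linalg.det(np.array(trio, dtype=float))) != 0)
print(bases)  # 29 of the 35 triples are linearly independent
```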
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Holomorphic Function constant in $\mathbb{P}^1(\mathbb{C})$ I want to show that a holomorphic function $f: \mathbb{P}^1(\mathbb{C}) \to \mathbb{C}$ is constant. $\mathbb{P}^1(\mathbb{C})$ is the projective line. I'm not very sure how to solve this. My idea is to start with the Maximum Principle. For that I need a point $a\in \mathbb{P}^1(\mathbb{C})$ such that \begin{align} |f(a)| \geq |f(z)| \quad \forall z \in \mathbb{P}^1(\mathbb{C}). \end{align} Maybe $a= \infty$, but I'm not getting any further.
Preliminary remark: it does not make sense to talk about the metric here, as the metric of $\Bbb C$ can't be extended to $P^1(\Bbb C)$: one would have $d(0, \infty) = \infty$, but the distance between two points is always finite. On the other hand, it is well known that $S^2 \cong P^1(\Bbb C)$, so it is indeed compact. Now, if $f : P^1(\Bbb C) \to \Bbb C$ is holomorphic and not constant, it is an open mapping, therefore the image has to be open and compact, which is not possible. So $f$ has to be constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Given any two non-zero vectors $x,y$ does there exist a symmetric matrix $A$ such that $y = Ax$? Let $x,y$ be non-zero vectors from $\mathbb{R}^n$. Is it true that there exists a symmetric matrix $A$ such that $y = Ax$? I was reasoning the following way. Having an equation $y = Ax$ for some symmetric matrix $A$ is equivalent to being able to solve a system of $n + \frac{n^2-n}{2}$ linear equations: $$\begin{cases} a_{11}x_1 + ... + a_{1n}x_n = y_1 \\ \vdots \\ a_{n1}x_1 + ... + a_{nn}x_n = y_n \\ a_{12} = a_{21} \\ \vdots \\ a_{n,n-1} = a_{n-1,n} \end{cases}$$ The first $n$ equations are our constraints for the identity $y = Ax$ to be true, and the following $\frac{n^2-n}{2}$ are for the symmetry of $A$. So, in our system we have $n^2$ variables and $\frac{n^2-n}{2} + n = \frac{n^2+n}{2}$ equations. As $n^2 \geq \frac{n^2+n}{2}$, such a system is always solvable. That's why such a matrix $A$ always exists. This solution was marked as wrong on the exam; where did I make a mistake?
It's not true that if you have more variables than equations, the system is always solvable! That's like saying a matrix with more columns than rows is always onto, which is also false. There might be a slicker solution, but the first thing I thought of was this: Let $d = ||x||$. Then there exists an orthogonal matrix $U$ such that $U(x) = (d,0,0,\ldots,0)$: You can use Gram-Schmidt to create an orthonormal basis whose first vector is $x/d$, and then $U$ is the matrix whose rows are that orthonormal basis. Now, there exists a symmetric matrix $A$ such that $A(d,0,0,\ldots,0) = U(y)$: Just set the first column of $A$ to be $U(y)/d$, and then fill in the rest of the entries to make $A$ symmetric. Now, $U^{-1}AU$ is your desired symmetric matrix sending $x$ to $y$ (it is symmetric because $U^{-1}=U^T$ for an orthogonal matrix).
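A concrete version of this construction (a sketch assuming NumPy; it realizes $U$ as a Householder reflection, one convenient way to produce an orthogonal matrix sending $x$ to $(d,0,\ldots,0)$):

```python
import numpy as np

def symmetric_map(x, y):
    # Build a symmetric A with A @ x = y, following the proof above.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, d = len(x), np.linalg.norm(x)
    e1 = np.zeros(n)
    e1[0] = d
    v = x - e1  # Householder vector: U = I - 2vv^T/(v^T v) maps x to (d,0,...,0)
    U = np.eye(n) if np.allclose(v, 0) else np.eye(n) - 2 * np.outer(v, v) / (v @ v)
    c = U @ y / d              # first column of A0, so that A0 (d,0,...,0) = U y
    A0 = np.zeros((n, n))
    A0[:, 0], A0[0, :] = c, c  # fill the first row/column symmetrically
    return U.T @ A0 @ U        # U^{-1} = U^T since U is orthogonal

x, y = np.array([1.0, 2.0, 2.0]), np.array([3.0, -1.0, 4.0])
A = symmetric_map(x, y)
print(np.allclose(A, A.T), np.allclose(A @ x, y))  # True True
```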
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How do I find matrix B when adj(B) is given? (a) find matrix B such that $adj(B)=A$, with $A$ given by : $$A:=\left(\begin{array}{rrrr} 1 & 2 & 5 & 4 \\ 0 & -1 & -2 & -1 \\ -1 & 1 & 3 & 0 \\ 0 & 2 & 5 & 3 \\ \end{array}\right)$$ (b) For the same matrix $A$, find all complex matrices $B$ such that $adj(B)=A$
Building on Bye_World's comment, note that you can use the fact that if $A$ is $n\times n$, then $\lvert \operatorname{adj}(A)\rvert = \lvert A\rvert^{n-1}$, to find the determinant of $B$. Once you have the determinant of $B$, just plug it into the inverse formula $$B^{-1}=\frac{1}{\lvert B \rvert}\operatorname{adj}(B)$$ to get $B^{-1}$, and then find its inverse $$B=(B^{-1})^{-1}=\frac{1}{\lvert B^{-1} \rvert}\operatorname{adj}(B^{-1})$$ to get $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to learn a great number of theorems by heart? Imagine you have ten definitions and you want to learn them by heart. It is easy: definitions are somehow unique. But imagine 40 (60, 100, 1000) theorems that all look somehow similar and are all important. How would you learn them by heart? What does the word "learn" mean here for you? Of course, you can learn them by numbers, and every time I say "Theorem 147" you will say the right one. But somehow it is not right. We do not call people by their birth dates, do we? If so, how would you categorize the (limited number of) theorems? What would be your approach to learning tens of theorems by heart and still clearly distinguishing between them? What makes a theorem unique? Could you draw a graph or a tree of theorems?
There is really no point in memorizing $1000$ theorems. For one thing, different expositions of the same subject will organize the theorems somewhat differently. A particular theorem in textbook A might correspond to parts of several different theorems in textbook B, or might just be an exercise in textbook C.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Let $X=l^2$ and $A$ be the set of all positive elements of $X$; then show that $\operatorname{Cone}(A-a)$ is not a subspace. Let $X=l^2 = \{x=\{x_i\}_{i=1}^{\infty} ~ | \quad \sum_{i=1}^{\infty} |x_i|^2 < \infty \}$ and $A = \{ x=\{x_i\}_{i=1}^{\infty} \in X ~ | \quad x_i >0 ~ \forall i \}$ and take $a \in A.$ My questions (I have conjectured that): $1-$ Is $ C:= \text{Cone}(A-a)$ a subspace of $X$? **This part was solved by Demophilus** $2-$ $\bar{C} $ is indeed a subspace and it is indeed the whole space $X$. In other words, $ C:= \text{Cone}(A-a)$ is dense in $X$ for all $a \in A$. P.S. $\text{Cone}(A-a) = \{\lambda (x-a ) ~ | \quad x \in A ~, \lambda \ge 0\}$
2nd Edit: Again you were completely right, I edited my answer. Hopefully I got it right this time. 3rd Edit: I added an answer to the second question. First question Let $y \in C$. We look for a sufficient and necessary condition that $-y \in C$. Without loss of generality we assume that $y = x-a$ for some $x \in A$. Now take any $\lambda' > 0 $. Then define for every $n \in \mathbb{N}$ $$ x'_n = a_n\left(1+ \frac{1}{\lambda'}\right)- \frac{1}{\lambda'}x_n . $$ Note that $x_n' > 0$ if and only if $\lambda' > \frac{x_n}{a_n} - 1$. So it seems that $a-x \in C$ if and only if $\sup\{ \frac{x_n}{a_n} \mid n \in \mathbb{N} \}$ is finite. This reminded me of the following proposition: There does not exist a sequence $(a_n)$ of positive reals such that for all sequences $(b_n)$, $\sum_{n} \lvert b_n \rvert a_n$ converges if and only if $(b_n)$ is bounded. So it seems that for any $a \in A$, you can always construct an $x \in A$ such that $\sup_n (x_n a_n^{-1}) = +\infty$, making sure that $a-x \not \in C$, and as a result $C$ can't be a subspace. To illustrate this for $a_n = \frac{1}{n}$, consider $x_n = \frac{\sqrt{\log(n+1)}}{n}$. Note that $\frac{x_n}{a_n}=\sqrt{\log(n+1)}$ is clearly unbounded. Now suppose $a-x \in C$. So there's a $\lambda > 0$ and $x' \in A$ such that $a-x = \lambda(x'-a)$. Now we need to have for all $n \in \mathbb{N}$ that $$ x'_n = \frac{a_n-x_n}{\lambda}+a_n >0, $$ or in other words $\frac{1-\sqrt{\log(n+1)}}{\lambda} + 1 > 0$ for every $n$, but this is obviously false for large $n$, since $\sqrt{\log(n+1)}\to\infty$. Second question Let $e_n$ be the standard basis on $l^2$. It's easy to prove that $e_n \in C$ for all $n \in \mathbb{N}$. Simply take $$ x_k = \left\{ \begin{matrix}\frac{1}{k} & k \neq n \\ 1+\frac{1}{k} & k =n\end{matrix} \right. $$ Then we have $x \in A$ and $e_n = x-a \in C$. We can also prove that $- e_n \in C$ for all $n \in \mathbb{N}$. We take $$ y_k = \left\{ \begin{matrix}\frac{1}{k} & k \neq n \\ \frac{1}{2k} & k =n\end{matrix} \right. $$ and $\lambda = 2n$. Then we have $-e_n = \lambda(y-a)$. Since $C$ is a cone it contains all positive linear combinations of $e_1,-e_1, \ldots, e_n,-e_n, \ldots$. But this is simply the set $\text{span}\{e_n \mid n \in \mathbb{N} \}$, which is clearly dense in $l^2$. So $C$ must be dense in $l^2$ too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Existence of Inverse Mapping : $f: \;P(\Bbb R) \rightarrow \Bbb R $ where $f(x) = {1\over x^2}-x\arctan x+{1 \over 2}\log(1+x^2) $ I'd like to show that the function below has an inverse mapping: $f: \;P(\Bbb R) \rightarrow \Bbb R $ where $f(x) = {1\over x^2}-x\arctan x+{1 \over 2}\log(1+x^2) $ To show the existence of an inverse mapping, I want to use the fact that a function has an inverse precisely when it is a bijection between its domain and its range. But each term of $f(x)$ has a different domain and range, which makes this process cumbersome. Is there any brief approach to check the invertibility of the given function?
Hints (every strictly monotone function has an inverse mapping): Since $$f'(x)=-\frac{2}{x^3}-\arctan x,$$ we can conclude that $f'(x) > 0$ whenever $x<0$ and $f'(x) < 0$ whenever $x>0$, so $f$ is strictly monotone on each of $(-\infty,0)$ and $(0,\infty)$, and hence has an inverse mapping on each of these intervals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2316962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Non-negative operator & self-adjoint operator I am wondering how to show that: if $A$ is a non-negative operator, then $A$ is self-adjoint. Def. 1. $A$ is non-negative if $\langle Ax,x \rangle \geq 0$ for $\forall x\in H$, where $H$ is a Hilbert space. Def. 2. $A$ is self-adjoint if $A = A^*$.
For any linear $A:H \rightarrow H $, we have $\langle A^{*}x, x\rangle = \langle x, Ax\rangle =\overline {\langle Ax, x \rangle}.$ But for $A $ non-negative, $\langle Ax, x\rangle$ is real, so $\langle Ax, x\rangle = \overline {\langle Ax, x \rangle}$, i.e. $\langle Ax, x\rangle =\langle A^{*}x, x\rangle \implies \langle (A-A^{*})x, x\rangle = 0 \ \forall x \in H$. Claim: If $\langle Tx, x\rangle = 0 \: \forall x $ in a complex Hilbert space, then $T=0$. Proof: Pick any $u, v \in H $, and let $T:H \rightarrow H $ be such that $\langle Tx, x\rangle = 0 \: \forall x \in H $. Then $ 0 = \langle T(u+v), u+v\rangle = \langle Tu, v\rangle + \langle Tv, u\rangle$ $ \implies - \langle Tu,v\rangle = \langle Tv,u\rangle $, and $0 = \langle T(u+iv), u+iv\rangle = i\langle Tv, u\rangle - i\langle Tu, v\rangle$ $ \implies \langle Tu,v\rangle = \langle Tv,u\rangle$. Then $\langle Tu, v\rangle = - \langle Tu, v\rangle $, i.e. $\langle Tu, v\rangle = 0 \: \forall u,v \in H \implies T = 0. $ Hence, applying the claim to $T=A-A^{*}$, we get $A=A^{*} $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Function class $C^{\infty}$ Show that there exists a function $ g \in \mathcal{C}^{\infty}(\mathbb{R},[0,1]) $ such that $g(x)=0$ for $|x|\le 1/2$ and $g(x)=1$ for $|x|>1$. So, I have: $f(x)=\begin{cases} \exp(-1/x) &\text{for } x > 0\\0 &\text{for } x\le 0 \end{cases}$ And now: $g(x) = \frac{f(x)}{f(x) + f(1-x)}$ However, I am not convinced of this.
You should not be convinced; it isn't true. In particular, if $x=1/2$ then $1-x=1/2$, so that $f(x)=f(1-x)=e^{-1/(1/2)}=e^{-2}$, and then $g(1/2)=\frac{e^{-2}}{2e^{-2}}=\frac{1}{2}$, not $0$.
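For what it's worth, a standard fix (my suggestion, not part of the answer above) is to first build a smooth step $h(t)=\frac{f(t)}{f(t)+f(1-t)}$, which is $0$ for $t\le 0$ and $1$ for $t\ge 1$, and then rescale the argument so the transition happens on $\frac12<|x|<1$; numerically:

```python
import numpy as np

def f(t):
    t = np.asarray(t, float)
    # exp(-1/t) for t > 0, and 0 for t <= 0 (inner where avoids division by 0)
    return np.where(t > 0, np.exp(-1.0 / np.where(t > 0, t, 1.0)), 0.0)

def g(x):
    t = 2 * np.abs(x) - 1            # maps |x| = 1/2 to 0 and |x| = 1 to 1
    return f(t) / (f(t) + f(1 - t))  # 0 for |x| <= 1/2, 1 for |x| >= 1

print(g(np.array([0.0, 0.5, 0.75, 1.0, 1.5])))  # [0, 0, 0.5, 1, 1]
```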
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
multiply $2^{(a-1)b}$ by $2^b$ and get $2^{ab}$? How is this so? I'm reading How To Prove It and in the following proof the author is doing some basic algebra with exponents that I just don't understand. In Step 1.) listed below he is multiplying $2^b$ across each term in $(1 + 2^b + 2^{2b} +\cdots+2^{(a-1)b})$ and gets the resulting set of terms in Step 2.) In particular I have no idea how he is getting $2^{ab}$ from multiplying $2^{(a-1)b}$ by $2^b$, which is shown in the first sequence in Step 2.). When I do it I get $2^{(ab)(b) - (b)(b)}$ and assume this is as far as it can be taken. Can someone please help me understand what steps he is taking to get his answer? Theorem 3.7.1. Suppose $n$ is an integer larger than 1 and $n$ is not prime. Then $2^n - 1$ is not prime. Proof. Since $n$ is not prime, there are positive integers $a$ and $b$ such that $a < n$, $b < n$, and $n = ab$. Let $x = 2^b - 1$ and $y = 1 + 2^b + 2^{2b} +\cdots+ 2^{(a-1)b}$. Then $$xy = (2^b - 1) \cdot (1 + 2^b + 2^{2b} +\cdots+2^{(a-1)b})$$ Step 1.) $$= 2^b \cdot (1 + 2^b + 2^{2b} +\cdots+2^{(a-1)b}) - (1 + 2^b + 2^{2b} +\cdots+2^{(a-1)b})$$ Step 2.) $$= (2^b + 2^{2b} + 2^{3b} +\cdots+2^{ab}) - (1 + 2^b + 2^{2b} + \cdots+2^{(a-1)b})$$ Step 3.) $$= 2^{ab} - 1$$ Step 4.) $$= 2^n - 1.$$
Using the laws of exponents, $2^{(a-1)b}2^b = 2^{(a-1)b+b} = 2^{ab}$, since $(a-1)b + b = ab - b + b = ab$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Simple property of Dirac's $\delta$-function. I'm on Page 63 of R. Shankar's "Principles of Quantum Mechanics". I'm trying to do Exercise 1.10.1 by proving that $\displaystyle{\delta(ax) = \frac{\delta(x)}{|a|}}$, where $a \in \mathbb R \backslash\{0\}$. We know that $\displaystyle{\int_{-\infty}^{+\infty}\delta(t)~\mathrm dt = 1}$. Making the substitution $t=ax$, where $a>0$ gives $$ \int_{-\infty}^{+\infty} \delta(ax)~\mathrm dx = \frac{1}{a}$$ Making the substitution $t=-ax$, where $a<0$ gives $$\int_{-\infty}^{+\infty}\delta(-ax)~\mathrm dx = -\frac{1}{a}$$ Since $\delta(t) \equiv \delta(-t)$ we can re-write this as $$\int_{-\infty}^{+\infty}\delta(ax)~\mathrm dx = -\frac{1}{a}$$ Hence: $$\int_{-\infty}^{+\infty}\delta(ax)~\mathrm dx = \left\{ \begin{array}{ccc} 1/a & : & a>0 \\ -1/a &:& a < 0\end{array}\right.$$ $$\implies \ \ \int_{-\infty}^{+\infty}\delta(ax)~\mathrm dx = \frac{1}{|a|} $$ I can see why this might suggest the result, since we can conclude that $$\int_{-\infty}^{+\infty}\delta(ax)~\mathrm dx = \int_{-\infty}^{+\infty}\frac{\delta(x)}{|a|}~\mathrm dx$$ However, I can't see why it proves that $\displaystyle{\delta(ax) = \frac{\delta(x)}{|a|}}$. After all, there are many different functions whose integrals over $\mathbb R$ are all equal.
You could make an even stronger argument: for all measurable $A \subset \mathbb{R}$, $$\int_A \delta(ax)\,dx = \int_A \frac{\delta(x)}{|a|}\,dx.$$ Two distributions that agree when integrated over arbitrary sets (equivalently, against arbitrary test functions) are equal, and this is exactly the sense in which $\delta(ax) = \frac{\delta(x)}{|a|}$ holds: it is an equality of distributions, not of pointwise-defined functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show which of $6-2\sqrt{3}$ and $3\sqrt{2}-2$ is greater without using calculator How do you compare $6-2\sqrt{3}$ and $3\sqrt{2}-2$? (no calculator) It looks simple, but I have tried many ways and failed miserably. Both are positive, so we cannot settle it by finding that one is bigger than $0$ and the other smaller than $0$. Taking the first minus the second in order to see whether the result is positive or negative gets me nowhere (perhaps I am too dense to see through it).
$$ 6-2\sqrt3 \sim 3\sqrt2-2\\ 8 \sim 3\sqrt2 +2\sqrt3 \\ 64 \sim 30+12\sqrt6\\ 34 \sim 12\sqrt6\\ 17 \sim 6\sqrt6\\ 289 \sim 36 \cdot 6\\ 289 > 216 $$ Each step preserves the direction of the comparison (we add the same quantity to both sides or square two positive numbers), so $6-2\sqrt3 > 3\sqrt2-2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 1 }
Contour integration with variable epsilon I've been wracking my brain on a particular contour integration which I thought I solved easily enough, but the answer says differently. It asks to prove the following for any $\xi \in \Bbb{R}$: $$\int_{-\infty}^{\infty} \frac{e^{-2\pi i x \xi}}{(1+x^2)^2}dx=\frac{\pi}{2}(1+2\pi |\xi|)e^{-2\pi |\xi|}$$ My idea was to just use the residue theorem. The numerator is an entire function, and the denominator has zeroes at $i$ and $-i$, each of order $2$. Unless I'm mistaken, the residues at these poles are $$ res_{i}=\frac{i}{4}e^{2\pi \xi}(2\pi \xi - 1)$$ $$ res_{-i}=-\frac{i}{4}e^{-2\pi \xi}(2\pi \xi + 1)$$ I then proceeded to integrate along the semicircle in the top half of the complex plane. Going along the circle portion with radius $R$, you get \begin{align} \left|\int_{0}^{\pi}\frac{e^{-2\pi i Re^{i\theta} \xi}}{(1+R^2e^{2i\theta})^2}Rie^{i\theta}d\theta\right| \qquad &\le& \int_{0}^{\pi}\left|\frac{e^{-2\pi i Re^{i\theta} \xi}}{(1+R^2e^{2i\theta})^2}Rie^{i\theta}\right|d\theta \\ \\ &=& \int_{0}^{\pi}\frac{Re^{-2\pi R\xi \sin{\theta}}}{|1+R^2e^{2i\theta}|^2}d\theta \\ \\ &\le& \int_{0}^{\pi}\frac{Re^{-2\pi R\xi \sin{\theta}}}{R^4-R^2-1}d\theta \end{align} which tends to zero as $R\to \infty$. The requested integral is the only one left; with the contour enclosing the pole at $i$, according to the residue theorem you would get \begin{align} \int_{-\infty}^{\infty} \frac{e^{-2\pi i x \xi}}{(1+x^2)^2}dx &=& 2\pi i \cdot res_{i} \\ \\ &=& \frac{\pi}{2}(1 - 2\pi \xi)e^{2\pi \xi} \end{align} which is obviously not what was given. But I have absolutely no idea where my work has gone wrong. The problem also specifies a hint, which is to handle $\xi < 0$ and $\xi \ge 0$ separately, but I don't understand what difference that's supposed to make. Any tips or corrections are greatly appreciated, thanks in advance! Also, as a bit of a sidenote, I tried googling for help on this specific integral without much luck. However I get the feeling that it's supposedly related to something in statistics; am I right?
Let $z=x+iy$. Note that $$ |{e^{-2\pi i z \xi}}|=e^{2 \pi\xi y}. $$ Hence if $\xi\geq 0$, the contour should be the semicircle in the lower half-plane, and if $\xi< 0$, the contour should be the semicircle in the upper half-plane, so that the integral over the arc goes to zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relation between Lipschitz and differentiability Let $f:\Bbb R \to \Bbb R$ be a Lipschitz function. Show that the set of all reals at which $f$ is differentiable is non-empty. I know that if $f$ is differentiable and its derivative is bounded, then it is Lipschitz. I know that the converse is FALSE. For example $f(x)=|x|$. But I've no idea how to prove this statement. Can anyone give some hint?
Much more is true than the set merely being non-empty: if $f:\mathbb{R}\to\mathbb{R}$ is Lipschitz, then it is differentiable almost everywhere in $\mathbb{R}$ by Rademacher's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2317844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does one prove that $\|x\|_2\le \|x\|_1$ I intuitively understand that $\|x\|_2 = \sqrt{x_1^2+\dots+x_n^2}\le \sqrt{x_1^2}+\dots+\sqrt{x_n^2}=|x_1|+\dots+|x_n|=\|x\|_1$. But the thing I'm concerned about is how to prove that $\sqrt{x_1^2+\dots+x_n^2}\le \sqrt{x_1^2}+\dots+\sqrt{x_n^2}$?
I find it easier to argue in the following way: the inequality is true when $x=0$, so assume that $x\neq 0$. Then by the homogeneity of the norms we can assume that $||x||_1=1$. Therefore $|x_j|\leq 1$ for all $j$, so $$ ||x||_2^2=|x_1|^2+\dots+|x_n|^2\leq |x_1|+\dots+|x_n|=1$$ and taking square roots yields the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to define a finite topological space? I want to develop a simple way to define topologies on finite sets $X=\{1,2,\dots,n\}$ for computational experiments. Does any function $c:X\to \mathcal P(X)$, such that $x\in c(x)$, define a closure operator on $X$? The idea is that $c$ should define a closure operator by $$\mathrm{cl}(\{x_1,\cdots,x_m\})=\overline{\{x_1,\cdots,x_m\}}=\bigcup_{k=1}^{m} c(x_k)$$
Starting with an arbitrary function $c:X\to\mathcal P(X)$ such that $\forall x\in X:x\in c(x)$, this should be an algorithm to transform $c$ to a function that defines a closure operator, due to the accepted answer: $0:\quad$ $\mathrm{ready}\leftarrow\mathrm{true}$ $1:\quad$ $i\leftarrow 1$ $2:\quad$ $j\leftarrow 1$ $3:\quad$ if $i\in c(j)$ and $c(i)\cap c(j)\neq c(i)$ then $c(j)\leftarrow c(j)\cup c(i)$ ; $\mathrm{ready}\leftarrow\mathrm{false}$ $4:\quad$ $j\leftarrow j+1$ $5:\quad$ if $j\leq n$ then goto $3$ $6:\quad$ $i\leftarrow i+1$ $7:\quad$ if $i\leq n$ then goto $2$ $8:\quad$ if not ready then goto $0$
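For convenience, here is a direct Python transcription of the goto-style algorithm above, with $c$ stored as a dict of sets on the points $1,\dots,n$ (the `changed` flag plays the role of `ready`):

```python
def make_closure(c, n):
    # Repeat full passes until a pass makes no change.
    changed = True
    while changed:
        changed = False
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if i in c[j] and not c[i] <= c[j]:
                    c[j] |= c[i]  # fold c(i) into c(j)
                    changed = True
    return c

c = {1: {1, 2}, 2: {2, 3}, 3: {3}}
print(make_closure(c, 3))  # {1: {1, 2, 3}, 2: {2, 3}, 3: {3}}
```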
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 5 }
Show that the function $x \rightarrow ||x||^2, \mathbb{R}^p\rightarrow \mathbb{R}, p\ge 1$ is not uniformly continuous. Show that the function $x \rightarrow ||x||^2, \mathbb{R}^p\rightarrow \mathbb{R}, p\ge 1$ is not uniformly continuous. What I figured was that for $\forall \epsilon >0, \exists \delta>0, \forall x,y \in \mathbb{R}^p $ that if $\|x-y\|<\delta$ ($\leftarrow$ not sure about this part) then $|f(x)-f(y)| <\epsilon$, thus $|\|x \|^2 - \|y\|^2 |= |x_1^2+x_2^2+\ldots +x_p^2 - y_1^2-y_2^2-\ldots-y_p^2|$; however, I do not know what I can do from here.
Consider the sequences $x_n=(n+\frac 1n,n+\frac 1n,...,n+\frac 1n)$ and $y_n=(n,n,...n)$ in $\Bbb R^p$. Then $||x_n-y_n||=\underbrace{\sqrt{(n+\frac 1n -n)^2+(n+\frac 1n-n)^2+\cdots+(n+\frac 1n-n)^2}}_{p-\text{times}}=\frac{\sqrt p}n \Rightarrow \lim||x_n-y_n||=\lim \frac {\sqrt p}n =0.$ But $|f(x_n)-f(y_n)|=|||x_n||^2-||y_n||^2|=|(n+\frac 1n)^2+\cdots+(n+\frac 1n)^2-(n^2+\cdots+n^2)|=p|2+\frac 1{n^2}| \ge 2p \;\; \forall \;n \in \Bbb N.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Reformulating a Euclidean distance minimization problem into a semidefinite program The following minimization problem is a Euclidean distance form of a single-facility location problem $$\min \quad \sum_j \sqrt {(x-a_j)^2+(y-b_j)^2}$$ where $(x,y)$ and $(a_j,b_j)$ are the coordinates of the new facility and current facilities, respectively. I mistakenly tried to reformulate it as a second-order conic program (SOCP) and found that it is not possible. I wonder, is it possible to reformulate it as a convex program using semidefinite cones?
It's SOCP representable, in fact it almost doesn't get more SOCP than this. For instance, minimizing $||q|| + ||p||$ is equivalent to minimizing $s + t$ subject to $||q||\leq t, ||p||\leq s$ which is an SOCP in standard form.
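With a modeling tool the epigraph formulation is immediate (a sketch assuming the cvxpy package; the data points are made up):

```python
import cvxpy as cp
import numpy as np

pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [1.0, 5.0]])  # (a_j, b_j)
z = cp.Variable(2)            # coordinates (x, y) of the new facility
t = cp.Variable(len(pts))     # one epigraph variable per Euclidean norm
constraints = [cp.norm(z - pts[j]) <= t[j] for j in range(len(pts))]
prob = cp.Problem(cp.Minimize(cp.sum(t)), constraints)
prob.solve()
print(z.value, prob.value)
```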
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
When are Lipschitz constant uniformly bounded? I have a sequence of Lipschitz functions $f_i$ which converges uniformly to a Lipschitz function $f_0$. I also can make it the case that the domain for these functions are compact. Are these conditions enough to guarantee that the Lipschitz constants for $f_i$ are bounded? EDIT: Based on Anthony Carapetis's quick comment, what if all functions mentioned here are convex?
Let $f_i : [0,1] \to \mathbb R$ be a piecewise linear function that is $0$ on $[0,1-1/i^2]$ and then increases from $0$ to $1/i$ on the interval $[1-1/i^2, 1]$. These functions are convex and uniformly convergent to $0$, but the Lipschitz constant of $f_i$ is $i$. If you want the convergence to imply a uniform Lipschitz bound, you need the convergence to be in a stronger sense - the most natural would be Lipschitz convergence, where we require not only $f_i - f \to 0$ uniformly but also that the Lipschitz constant of $f_i - f$ converges to zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\underset {x \in [0,1]} {\sup} f_n(x) \rightarrow \underset {x \in [0,1]} {\sup} f(x)$ as $n \rightarrow \infty$. Let $\{f_n\}$ be a sequence of continuous functions converging uniformly to a function $f$ on $[0,1]$. Then show that $\sup\limits{x \in [0,1]} f_n(x) \rightarrow \sup\limits_{x\in[0,1]} f(x)$ as $n \rightarrow \infty$. My attempt $:$ First of all let us state the following result $:$ Let $S$ and $T$ be two bounded subset of $\mathbb R$. Let $A = \{|x-y| : x\in S , y \in T \}$.Then $A$ is also bounded and $\sup A = \sup S - \inf T$. Now with the help of above theorem we have $M_n= \sup\limits_{x \in [0,1]} |f_n(x)-f(x)|= \sup\limits_{x \in [0,1]} f_n(x) - \inf\limits_{x \in [0,1]} f(x)$ for all $n \in \mathbb N$. Since $f_n \rightarrow f$ as $n \rightarrow \infty$ uniformly on $[0,1]$ so $M_n \rightarrow 0$ as $n \rightarrow \infty$. Hence we have $\sup\limits_{x \in [0,1]} f_n(x) \rightarrow \inf\limits_{x \in [0,1]} f(x)$ as $n \rightarrow \infty$. Which fails to meet my purpose. What is wrong in my concept? Would anyone tell me please. Thank you in advance.
You have $$ f(x) = f_n(x) + (f(x) -f_n(x)) \leq \sup_{y\in [0,1]} f_n(y) + \sup_{y\in [0,1]} (f(y) -f_n(y)). $$ Taking supremum over $x\in [0,1]$ gives $$ \sup_{x\in [0,1] } f(x) \leq \sup_{y\in [0,1]} f_n(y) + \sup_{y\in [0,1]} (f(y) -f_n(y)). $$ Taking the $\liminf$ on both sides yields by uniform convergence $$ \sup_{x\in [0,1] } f(x) \leq \liminf_{n\rightarrow \infty} \sup_{x\in [0,1]} f_n(x).$$ Now redo the argument. $$ f_n(x) = f(x) + (f_n(x) -f(x)) \leq \sup_{y\in [0,1]} f(y) + \sup_{y\in [0,1]} (f_n(y) -f(y)). $$ Taking supremum over $x\in [0,1]$ gives $$ \sup_{x\in [0,1] } f_n(x) \leq \sup_{y\in [0,1]} f(y) + \sup_{y\in [0,1]} (f_n(y) -f(y)). $$ Taking the $\limsup$ on both sides yields by uniform convergence $$ \limsup_{n\rightarrow \infty}\sup_{x\in [0,1] } f_n(x) \leq \sup_{x\in [0,1]} f(x).$$ Hence, we have $$ \limsup_{n\rightarrow \infty}\sup_{x\in [0,1] } f_n(x) \leq \sup_{x\in [0,1]} f(x) \leq \liminf_{n\rightarrow \infty}\sup_{x\in [0,1] } f_n(x).$$ Thus, we conclude $$ \lim_{n\rightarrow \infty} \sup_{x\in [0,1] } f_n(x) = \sup_{x\in [0,1] } f(x).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
ray class groups in $\Bbb{Q}$ I study class field theory from the book "Primes of the form $x^2+ny^2$", D. Cox. I want to find ray class groups in $\Bbb{Q}$. Let $K$ be a number field, $\mathfrak{m}$ be a modulus of $K$. In the book, $I_K(\mathfrak{m})$ is defined as the group of all fractional ideals of $K$ coprime to $\mathfrak{m}_0$ (the finite part of $\mathfrak{m}$). $P_K(\mathfrak{m})$ is defined as the subgroup of $I_K(\mathfrak{m})$ generated by the principal ideals $\alpha\mathcal{O}_K$, where $\alpha\in\mathcal{O}_K$ satisfies $\alpha\equiv 1 \ mod \ \mathfrak{m}_0$ and $\sigma(\alpha)> 0$ for every real infinite prime $\sigma$ dividing $\mathfrak{m}_\infty$ (the infinite part of $\mathfrak{m}$). For example, set $K=\Bbb{Q}$ and $\mathfrak{m}=(8)$. Then according to the above definitions, I find $$I_\Bbb{Q}((8))=\{(a/b)\Bbb{Z}: \gcd(a,8)=\gcd(b,8)=1 \}=\{(a/b)\Bbb{Z}: 2\nmid a,b\}$$ $$P_\Bbb{Q}((8))=\langle a\Bbb{Z}: a\in\Bbb{Z},\ a\equiv 1\ mod\ 8\rangle=\{(a/b)\Bbb{Z}: a\equiv 1\ mod\ 8, \ b\equiv 1\ mod\ 8\}$$ First, I am not sure that I have written these groups correctly. Also, I couldn't conclude that the ray class group $Cl_\Bbb{Q}((8)):=I_\Bbb{Q}((8))/P_\Bbb{Q}((8))$ is isomorphic to $(\Bbb{Z}/4\Bbb{Z})^*$. Generally, I have trouble describing ray class groups. Thank you
The ray class group modulo $8$ corresponds to the ray class field modulo $8$, which is the maximal totally real abelian extension of ${\mathbb Q}$ with conductor dividing $8$, i.e., the maximal real subfield of ${\mathbb Q}(\zeta_8)$, namely ${\mathbb Q}(\sqrt{2})$. This is a quadratic extension, as is confirmed by the fact that the ray class group modulo $8$ has order $2$. In fact, the ideal $(3)$ in the ray class group is not principal, i.e., is not generated by an element $a/b \equiv 1 \bmod 8$. Moreover $(3) = (5)$ in the ray class group, since $(5) = (-5) = (3)$ there. Finally, ray class groups over the rationals are basically residue class groups, since ${\mathbb Q}$ has class number $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is a finitely generated submodule of a direct sum of a field of fractions of a Dedekind domain projective? This is a question out of Donald Passman's "A Course in Ring Theory". Let $R$ be a Dedekind domain with field of fractions $F$, and let $V$ be a finitely generated $R$ submodule of $F^n:=F\oplus\dots\oplus F$. Passman asks to show whether $V$ is projective. Now I have already proven that over a semihereditary ring any finitely generated submodule of a free module is projective. Obviously $R$ is semihereditary, being a Dedekind domain, but I am struggling to see whether $F$ is a free $R$ module. If it is then my proof is done, but if not I'll need another approach. So if anyone can provide hints on how to prove that $F$ is a free $R$ module it would be much appreciated, if it is not then a push in the right direction would be very useful. EDIT: As proven here, there is a counterexample to $F$ being free, which means that my proposed strategy won't work. Unfortunately I cannot think of another way to proceed, so any prod in the right direction will be much appreciated. References Donald Passman. A Course in Ring Theory. Wadsworth and Brooks/Cole, 1st edition, 1991.
Even though $F$ itself is not free, your approach is still valid: * *First, note that $F$ is the directed union of its finitely-generated, free submodules. Namely, if $x=\frac{a}{b}\in F$ then $x\in U_b := R\frac{1}{b}\subset F$ which is free since $F$ is torsion-free. Moreover, $U_b\subset U_{bb^{\prime}}$ for any two $b,b^{\prime}\in R\setminus\{0\}$, so the family of $\{U_b\}$ is directed. *Analogously, $F^n$ is the directed union of its finitely-generated, free submodules. *Now, given a finitely-generated $R$-submodule $M$ of $F^n$, step (2) shows that it is contained in a finitely generated and free submodule of $F^n$, too, and you deduce that it is projective. (In more detail, you would first pick generators $m_1,...,m_n\in M$, then choose finitely-generated and free $U_i\subset F^n$ with $m_i\in U_i$, and finally find a finitely-generated and free $U$ such that $U_i\subset U$. Then $M\subset U$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2318992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Separable quotient of a non-separable normed space I want to find an example of a non-separable normed space $X$ and a closed subspace $M$ of $X$ such that $X/M$ is separable. The first thing that came to my mind is $\ell^{\infty}$. Can I find a closed subspace $M$ of $\ell^{\infty}$ so that $\ell^{\infty}/M$ is isometrically isomorphic to $\ell^1$? Can I take $M$ to be the kernel of a continuous linear operator from $\ell^{\infty}$ onto $\ell^{1}$? Please help me out.
$\ell_1$ is not isomorphic to a quotient of $\ell_\infty$. Indeed, the latter is a Grothendieck space, hence so is each of its quotients. By the Eberlein-Smulyan theorem, a separable space is Grothendieck if and only if it is reflexive. Certainly, $\ell_1$ is not reflexive. As a matter of fact, $\ell_p$ is a quotient of $\ell_\infty$ if and only if $p\geqslant 2$. To see this, take a sequence of independent $p$-stable random variables in $L_1$ ($p\leqslant 2$). They span an isometric copy of $\ell_p$ in $L_1$, so $\ell_{q}$, where $q$ is the conjugate exponent to $p$, is a quotient of $L_\infty$, the dual of $L_1$. You may now embed $L_\infty$ into $\ell_\infty$ using the fact that $L_1$ is a quotient of $\ell_1$. As $L_\infty$ is injective, there is a projection from $\ell_\infty$ onto every copy of $L_\infty$, so you may compose any such projection with a quotient map from $L_\infty$ onto $\ell_q$. To see that $\ell_p$ for $p<2$ is not a quotient of $\ell_\infty$, note that if $\ell_p$ were a quotient of $\ell_\infty$, then $(\ell_p)^*$ would embed into $(\ell_\infty)^*$, which has cotype 2. Thus, the conjugate exponent of $p$ would have to be at most 2, which fails for $p<2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conformal Map wanted I am in search of a conformal map that will stretch the rectangle $P_1 =\lbrace(x,y) : -W < x < W ,\ -L < y < L \rbrace$ to the entire real plane $P_2 = \lbrace(u,v) : u,v \in \mathbb{R}\rbrace$, where the sides of $P_1$ are mapped to infinity. For example, this transformation: $$ u = \frac{xy}{W^2-x^2},\quad v = \frac{xy}{L^2-y^2} $$ satisfies the stretching requirement; however, it is not conformal. I have almost no experience with these types of problems, so any help will be very welcome. For instance, a good tip may even be whether a solution to this problem exists. Thanks!
Unfortunately, no such map exists: If there were a conformal bijection from your rectangle to the entire plane, the inverse mapping (or its complex conjugate) would be a bounded, entire function. But a bounded, entire function is constant by Liouville's theorem from elementary complex analysis. Generally, no proper open subset of the plane is conformally equivalent to the plane. This is an immediate consequence of the Koebe-Poincaré uniformization theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
if $A$ is $2 \times 2$ matrix find $\lim_{n \to \infty} A^n$ If $A$ is the $2 \times 2$ matrix given by $$A= \begin{bmatrix} 1& \frac{\theta}{n} \\ -\frac{\theta}{n} & 1 \end{bmatrix},$$ find $$\lim_{n \to \infty} A^n.$$ I tried like this: $$A-I=\begin{bmatrix} 0& \frac{\theta}{n} \\ -\frac{\theta}{n} & 0 \end{bmatrix}$$ So $$(A-I)^2= -\frac{\theta^2}{n^2} I$$ Generalizing we get $$(A-I)^n=-\frac{\theta^n}{n^n} I$$ Any clue or hint here?
Let $p(t)$ be the characteristic polynomial, $p(t)=\det(tI-A)$: $$p(t)=t^2 -2t+1+ \frac{\theta^2 }{n^2}$$ By the Cayley-Hamilton theorem, $p(A)$ is the zero matrix. Setting $p(t)=0$ gives $t= 1 \pm \frac{i\theta}{n}$, and these are the eigenvalues. Since $\left(1 \pm \frac{i\theta}{n}\right)^n \to e^{\pm i\theta}$ as $n\to\infty$, the powers $A^n$ converge to the rotation matrix $$\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.$$
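A quick numerical confirmation of the limit (assuming NumPy; `matrix_power` uses repeated squaring, so $n=10^6$ is cheap):

```python
import numpy as np

theta, n = 0.7, 10**6
A = np.array([[1.0, theta / n], [-theta / n, 1.0]])
An = np.linalg.matrix_power(A, n)
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
print(np.abs(An - R).max())  # tiny: A^n is essentially the rotation matrix
```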
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Self-functions of the class of ordinals growing fast Let $\lambda$ be a regular ordinal and let $S:=\{\beta:\beta<\lambda\}$. I wonder if it is possible to find a (non-decreasing, but this should not be a problem) function $$ f\colon S\to S $$ such that, for any $\alpha\in S$, there exists $\beta\in S$ such that $$ \beta<\alpha< f(\beta) (<\lambda). $$ Sorry for not explaining the context. The fact is that I am attempting a construction involving (homotopy) limits of shape an ordinal, by transfinite induction. For the moment, I know how to proceed when an ordinal is a successor or when it is the sup of a chain (shorter than it is) of smaller ordinals. To handle also the regular case it would be enough to have a function like the above, but I do not know if such a thing can exist.
There is no such function; this is a corollary of Fodor's lemma (which requires $\lambda$ to be regular and uncountable). Assume that there is such an $f$; then for each $\xi<\lambda$ we can choose $\beta_\xi< \xi$ such that $\xi<f(\beta_\xi)$. The function $g(\xi) = \beta_\xi$ is regressive, and the set $\lambda$ is stationary. Hence, there is a stationary subset $S\subseteq \lambda$ such that $g$ restricted to $S$ is a constant function. Note that any stationary subset is unbounded in $\lambda$ (that is, for each $\alpha<\lambda$ we can choose $\gamma\in S$ greater than $\alpha$). However, $\xi < f(g(\xi))$ holds for all $\xi \in S$, and $f(g(\xi))$ is constant on $S$, a contradiction. I should mention that your function, if it exists, is ill-defined when $\alpha=0$. However, my proof works even if we exclude finitely many points from the domain of $f$ (in fact, it works as long as the excluded set is not stationary in $\lambda$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate convergence of random variables We are given $X_1,X_2,\ldots$, which are all independent random variables, $X_n$ having the $Exp(\ln n)$ distribution. Our task is to show that these random variables converge to $0$ in probability but not almost surely. I am hitting a wall with this one. What should be my approach here?
Hint: Recall the definition of convergence in probability: $X_n \xrightarrow{p} 0$ means that for each $\epsilon > 0$, $\mathbb{P}(X_n > \epsilon) \to 0$ as $n \to \infty$. Can you compute $\mathbb{P}(X_n > \epsilon)$ explicitly? To show that there isn't almost sure convergence, show that for some $\epsilon > 0$, $\mathbb{P}(X_n > \epsilon \text{ infinitely often}) = 1$. How do you show that something happens infinitely often?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Haar measure and SU(2) I have some very basic questions on the Haar measure on $SU(2)$. What I understood from the definition of the Haar measure is that it is a measure that ensures the property $$ \int f(gh)\, d \mu(g) = \int f(g)\, d \mu(g), $$ where $g$ and $h$ are elements of a Lie group, and $f$ is a function from the Lie group to the complex numbers (for example). In practice, if we parametrize the $SU(2)$ matrices by the angles $\theta, \phi, \psi$, I will have: $$ \int_{SU(2)} f(x)\, d \mu(x) = \int_{0}^{\pi} d \theta \int_{0}^{\pi} d \phi \int_{0}^{2\pi} d \psi\, f(\theta,\phi,\psi) \sin^2(\theta) \sin(\phi), $$ where an element of $SU(2)$ can be written as $$ g=\begin{bmatrix} x_1+ix_2 & x_3+ix_4 \\ -x_3+ix_4 & x_1-ix_2 \end{bmatrix}$$ with $x_1=\cos(\theta)$, $x_2=\sin(\theta)\cos(\phi)$, $x_3=\sin(\theta)\sin(\phi)\cos(\psi)$, $x_4=\sin(\theta)\sin(\phi)\sin(\psi)$. My question: when we have $g_1$ and $g_2$ in $SU(2)$, will the angles $\theta,\phi,\psi$ of their product be the sums of the angles of the first and the second? I could compute it by hand but it would be long, so I would like to know if it is true. And if it is true, if I compute the integral of $f(\theta_1+\theta_2,\phi_1+\phi_2,\psi_1+\psi_2)$ with respect to $d\theta_1$, $d \phi_1$, $d \psi_1$, I would end up with the same result as if I had integrated $f(\theta_1,\phi_1,\psi_1)$, right?
The answer to your first question is no: if the angles of a product were simply the sums of the angles of the factors, then $g_1g_2$ and $g_2g_1$ would always be equal, so the group $SU(2)$ would be commutative, which it is not. (The invariance $\int f(g_1g_2)\,d\mu(g_1)=\int f(g_1)\,d\mu(g_1)$ does hold, since that is exactly what the Haar measure provides, but it is not obtained by adding angles coordinate-wise.)
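A quick numerical check of both points, using the parametrization from the question (the particular angle values below are arbitrary test inputs):

```python
import numpy as np

def su2(theta, phi, psi):
    # the question's parametrization of SU(2)
    x1 = np.cos(theta)
    x2 = np.sin(theta) * np.cos(phi)
    x3 = np.sin(theta) * np.sin(phi) * np.cos(psi)
    x4 = np.sin(theta) * np.sin(phi) * np.sin(psi)
    return np.array([[ x1 + 1j*x2, x3 + 1j*x4],
                     [-x3 + 1j*x4, x1 - 1j*x2]])

a = su2(0.3, 0.7, 1.1)
b = su2(0.5, 0.2, 2.0)
print(np.allclose(a @ b, su2(0.8, 0.9, 3.1)))  # False: angles do not add
print(np.allclose(a @ b, b @ a))               # False: SU(2) is non-abelian
```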
{ "language": "en", "url": "https://math.stackexchange.com/questions/2319965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Definite integral integration by parts Can we write integration by parts for a definite integral in the following way: $$\int^a_b f(x)g(x)dx=f(x)\int^a_b g(x)dx-\int^a_b \left[ \dfrac{df(x)}{dx}\int^a_b g(x)dx \right]dx $$ My book gives the following formula for integration by parts in a definite integral: $$\int^a_b f(x)g(x)dx=\left[f(x)\int g(x)dx\right]^a_b -\int^a_b \left[ \dfrac{df(x)}{dx}\int g(x)dx \right]dx $$ Are the two formulas equivalent or not? Why/why not?
Integration by parts is defined by $$\int f(x) \, g(x) \, dx = f(x) \int g(u) \, du - \int f'(t) \left(\int^{t} g(u) \, du \right) \, dt.$$ When applying limits on the integrals they follow the form $$\int_{a}^{b} f(x) \, g(x) \, dx = \left[f(x) \int g(u) \, du\right]_{a}^{b} - \int_{a}^{b} f'(t) \left(\int^{t} g(u) \, du \right) \, dt.$$ Now, if $$\int_{a}^{b} f(x) \, g(x) \, dx = f(x) \int_{a}^{b} g(u) \, du - \int_{a}^{b} f'(t) \left(\int_{a}^{b} g(u) \, du \right) \, dt$$ then what has been described is that $$\int_{a}^{b} g(u) \, du$$ is a constant, say $c_{1}$, for which the remaining integration becomes $$\int_{a}^{b} f(x) \, g(x) \, dx = c_{1} \, f(x) - c_{1} \, \int_{a}^{b} f'(t) \, dt.$$ These resulting integrals are not the same, in any sense, unless $g(x)$ is a constant to begin with. As a demonstration consider $f(x) = x, g(x) =1$ for which the proper way yields \begin{align} \int_{1}^{2} f(x) \, g(x) \, dx &= \left[ x \cdot x \right]_{1}^{2} - \int_{1}^{2} 1 \cdot x \, dx \\ &= (4 - 1) - \left[ \frac{x^2}{2}\right]_{1}^{2} = \frac{3}{2}. \end{align} The questioned method leads to \begin{align} \int_{1}^{2} f(x) \, g(x) \, dx &= x \int_{1}^{2} du - \int_{1}^{2} 1 \cdot \left(\int_{1}^{2} du \right) \, dt \\ &= x - \int_{1}^{2} dt \\ &= x - 1. \end{align} From this example, even if it had been asked whether $$\int_{a}^{b} f(x) \, g(x) \, dx = \left[f(x) \int_{a}^{b} g(u) \, du\right]_{a}^{b} - \int_{a}^{b} f'(t) \left(\int_{a}^{b} g(u) \, du \right) \, dt$$ holds, the example would give \begin{align} \int_{1}^{2} f(x) \, g(x) \, dx &= \left[x \int_{1}^{2} du\right]_{1}^{2} - \int_{1}^{2} 1 \cdot \left(\int_{1}^{2} du \right) \, dt \\ &= [x]_{1}^{2} - \int_{1}^{2} dt \\ &= 1 - 1 = 0, \end{align} which still leads to an incorrect result.
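For a quick symbolic sanity check of the worked example (a sketch with SymPy):

```python
import sympy as sp

x = sp.symbols('x')
f, g = x, sp.Integer(1)
G = sp.integrate(g, x)                        # an antiderivative of g

proper = (f*G).subs(x, 2) - (f*G).subs(x, 1) \
         - sp.integrate(sp.diff(f, x)*G, (x, 1, 2))
print(proper, sp.integrate(f*g, (x, 1, 2)))   # 3/2 3/2: the book's formula is right

c1 = sp.integrate(g, (x, 1, 2))               # the "questioned" inner definite integral
print(f*c1 - c1*sp.integrate(sp.diff(f, x), (x, 1, 2)))  # x - 1: not even a number
```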
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Computing of $\int_{-1}^1\frac{e^{ax}dx}{\sqrt{1-x^2}}, \: a \in \mathbb{R}$ I would like to find the Fourier series for $f(x) = e^{ax}$ using Chebyshev polynomials, and the first step is computing the following integral. How can one compute $$\int_{-1}^1\frac{e^{ax}dx}{\sqrt{1-x^2}}, \: a \in \mathbb{R}$$
By setting $x=\sin\theta$ we have $$ I(a)=\int_{-1}^{1}\frac{e^{ax}}{\sqrt{1-x^2}}\,dx = \int_{-\pi/2}^{\pi/2}\exp\left(a\sin\theta\right)\,d\theta \tag{1}$$ and we may expand the exponential function as its Taylor series at the origin. Since the integral of an odd integrable function (like $\sin^3$ or $\sin^5$) over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ is zero, we get: $$\begin{eqnarray*} I(a) &=& \sum_{n\geq 0}\frac{a^{2n}}{(2n)!}\int_{-\pi/2}^{\pi/2}\sin^{2n}(\theta)\,d\theta\\ (2i \sin\theta=e^{i\theta}-e^{-i\theta})\qquad &=&\sum_{n\geq 0}\frac{\pi a^{2n}}{4^n (2n)!}\binom{2n}{n}\\&=&\pi\sum_{n\geq 0}\frac{a^{2n}}{4^n(n!)^2}=\color{red}{\pi\cdot I_0(a)}.\end{eqnarray*} \tag{2}$$
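A quick numerical confirmation that the closed form is the modified Bessel function $I_0$ (a sketch using SciPy; the test value of $a$ is arbitrary):

```python
import numpy as np
from scipy import integrate, special

a = 1.7
val, _ = integrate.quad(lambda x: np.exp(a*x) / np.sqrt(1 - x*x), -1, 1)
print(val, np.pi * special.iv(0, a))   # both ~5.86: the integral equals pi * I_0(a)
```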
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The group $\mathbb Z_n$ is isomorphic to a subgroup of $GL_2(\mathbb R)$. I need to prove the following: The group $\mathbb Z_n$ is isomorphic to a subgroup of $GL_2(\mathbb R)$. How can I prove this? $\mathbb Z_n$ is of order $n$ so it is isomorphic to a subgroup of $GL_n(\mathbb R)$. I know this is true but here it is given $GL_2(\mathbb R)$.
Consider the group of those matrices of the form $$\begin{pmatrix}\cos\left(\frac{2k\pi}n\right)&-\sin\left(\frac{2k\pi}n\right)\\\sin\left(\frac{2k\pi}n\right)&\cos\left(\frac{2k\pi}n\right)\end{pmatrix}\qquad(k\in\{0,1,\ldots,n-1\}).$$ In other words, it is the subgroup of $GL_2(\mathbb{R})$ generated by the matrix$$\begin{pmatrix}\cos\left(\frac{2\pi}n\right)&-\sin\left(\frac{2\pi}n\right)\\\sin\left(\frac{2\pi}n\right)&\cos\left(\frac{2\pi}n\right)\end{pmatrix},$$which has order $n$.
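A small numerical check that the generator really has order $n$ (a sketch; $n=7$ is an arbitrary choice):

```python
import numpy as np

n = 7
c, s = np.cos(2*np.pi/n), np.sin(2*np.pi/n)
R = np.array([[c, -s], [s, c]])

print(np.allclose(np.linalg.matrix_power(R, n), np.eye(2)))   # True: R^n = I
print(any(np.allclose(np.linalg.matrix_power(R, k), np.eye(2))
          for k in range(1, n)))                              # False: order exactly n
```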
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Swapping hands in a generalized clock Consider a generalized clock, where the minute hand goes $n$ times as fast as the hour hand, where $n$ is a positive integer. The standard clock has $n=12$ (sometimes $n=24$). At which times can swapping the hour and minute hands result in a legal time? In particular, for each hour $h$ from 1 to $n$, for which minutes does this happen? This obviously happens when the hands point in the same or opposite directions. Are there any other times?
Let the hour hand be pointing at an exact value $H \in [0,1]$ ($0$ represents "12 o'clock" or the angle zero, $.5$ represents "6 o'clock" or the angle 180 or $\pi$, and $1$ represents 360, "12 o'clock" or $2\pi$ or "full circle"). Then the minute hand must be pointing at a precise value $M = \{nH\}$, i.e. the fractional part of $nH$; that is, $nH = \lfloor nH\rfloor + \{ nH\}$ where $\lfloor nH\rfloor\in \mathbb Z$ and $0 \le \{nH\} < 1$. That's what a legitimate time is: if $M = \{nH\}$ the time is legitimate. So we need times where $M = \{nH\}$ and $H = \{nM\}$ or $H = \{n\{nH\}\}$. $\{nH\} = nH - \lfloor nH\rfloor$; $n\{nH\} = n^2H - n\lfloor nH\rfloor=\{n^2H\} + \lfloor n^2H\rfloor- n\lfloor nH\rfloor$. $0 \le \{n^2H\} < 1$ and $\lfloor n^2H\rfloor- n\lfloor nH\rfloor\in \mathbb Z$. So $\{n\{nH\}\} = \{n^2H\}$. If $H = \{n^2H\}$ then $H = n^2H - k; H = \frac {k}{n^2-1} \in \mathbb Q$. And, that's that. If $H = \frac {k}{n^2 - 1}$ then $H$ is a "reversible time". Example: for the 12 hour clock, there are $143$ of these times. (Which actually surprises me as I assumed there were only 11.)
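A quick exact-arithmetic check of the count for the 12-hour clock (a sketch using Python's `fractions`; the helper `frac` is mine):

```python
from fractions import Fraction

n = 12
frac = lambda q: q - int(q)              # fractional part (arguments are >= 0)
times = [Fraction(k, n*n - 1) for k in range(n*n - 1)]
# swapping is legal at hour-hand position H exactly when H = {n^2 H}:
assert all(frac(n * frac(n * h)) == h for h in times)
print(len(times))                        # 143 reversible positions
```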
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can every compact subset of $\mathbb R^n$ be realized as a manifold with boundary? I think it should be true since in Lee's introduction to smooth manifolds, he referred to a differential form on a compact subset $D \subset \mathbb R^n$. It is easy to provide charts for $\operatorname{Int} D$, but for $\partial D$, how should I set up the charts for it?
No. Just look at the Cantor set in $\mathbb R$ as an example. It has no neighborhoods homeomorphic to any Euclidean space (or half space), but it is compact. For a less pathological example, consider $$\left\{\frac1n:n\in \mathbb N\right\}\cup\{0\}$$ This is compact, but no neighborhood of $0$ is homeomorphic to Euclidean space or half space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Other than using the units digit division method, how would I demonstrate $999^{2016}$ 's remainder when it is divided by 5? Other than by using the units digit division method, how would I demonstrate $999^{2016}$'s remainder when it is divided by 5? I have decided that the best method of solving the problem is to engage the binomial theorem as demonstrated below: $$(1000-1)^{2016}$$ but from here I didn't know how to proceed, and I really would like to use the binomial theorem method
$$(x+y)^n=\sum_{k=0}^n\binom{n}{k}x^ky^{n-k}$$ so $$(1000-1)^n=\sum_{k=0}^n\binom{n}{k}1000^k(-1)^{n-k}$$ Every term with $k\ge1$ is divisible by $1000$, while the $k=0$ term is $(-1)^n$; thus $(1000-1)^n\equiv(-1)^n\mod5$. Since $2016$ is even, $999^{2016}\equiv1\pmod5$, so the remainder is $1$. Actually, it's clear that $(1000-1)^n\equiv(-1)^n\mod1000$, so the last $3$ digits of $999^n$ are $999$ when $n$ is odd, and $001$ when $n$ is even.
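A one-line check in Python, using built-in modular exponentiation:

```python
print(pow(999, 2016, 5))     # 1: the remainder is 1
print(pow(999, 2016, 1000))  # 1: last three digits 001, since 2016 is even
```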
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Help Solving $F_N = \frac{1}{\sqrt5}((\frac{1+\sqrt5}{2})^N - (\frac{1-\sqrt5}{2})^N)$ by Induction I have recently got stuck on an induction problem in my textbook. It is a big one so major kudos to anybody that can help me out. The question states to prove this formula inductively: $$F_N = \frac{1}{\sqrt5}((\frac{1+\sqrt5}{2})^N - (\frac{1-\sqrt5}{2})^N)$$ While the answer key states: The basis ( $N = 0$ and $N = 1$ ) is easily verified to be true. So assume that the theorem is true for all $0 ≤ i < N$ and we will establish for $N$. Let $φ_1 = (\frac{1+\sqrt5}{2})$ and $φ_2 = (\frac{1-\sqrt5}{2})$. Observe that both $φ_1$ and $φ_2$ satisfy $φ^N = φ^{N–1} + φ^{N–2}$ (this is verified by factoring and then solving a quadratic equation). Since $F_N = F_{N–1} + F_{N–2}$, by the inductive hypothesis we have $F_N = \frac{1}{\sqrt5}\Bigl((φ_1)^{N-1} - (φ_2)^{N-1} + (φ_1)^{N-2} - (φ_2)^{N-2}\Bigr) = \frac{1}{\sqrt5}\Bigl((φ_1)^{N} - (φ_2)^{N}\Bigr)$ So far I have only managed to get up to the point where I get this: $$F_N = \frac{1}{\sqrt5}((\frac{1+\sqrt5}{2})^N - (\frac{1-\sqrt5}{2})^N) = \frac{1}{\sqrt5}\Bigl((φ_1)^{N-1} - (φ_2)^{N-2}\Bigr) $$ After this I don't know what to do next. My questions are the following: 1) What actually is $F_N$? is it arbitrary or is it a sequence or something? 2) Do we verify $N=0$ and $N=1$ by plugging into this? $F_N = \frac{1}{\sqrt5}((\frac{1+\sqrt5}{2})^N - (\frac{1-\sqrt5}{2})^N) = \frac{1}{\sqrt5}\Bigl((φ_1)^{N-1} - (φ_2)^{N-2}\Bigr) $ 3) In the inequality $0\le i \lt N$ where did $i$ come from? What is $i$ used for? 4) How do we verify $φ^N = φ^{N–1} + φ^{N–2}$ by factoring and solving by quadratic equation? 5) Lastly, what are we trying to find here? What does the question want as an answer? Usually I do proofs that have an expression = expression but this one is just fn = expression. Any form of help or explanation would be greatly appreciated.
The link to the other thread about Binet's formula is full of overly complicated answers for a problem that is explained by a variant of a very, very simple identity, one that I have repeated time and time again: $$a^{n+1} - b^{n+1} = (a+b)(a^n - b^n) - ab(a^{n-1} - b^{n-1}).$$ This is trivially proven by expanding the RHS. Now, let $$a = (1+\sqrt{5})/2, \quad b = (1-\sqrt{5})/2,$$ hence (again trivially) $$a+b = 1, \quad ab = \frac{1^2 - (\sqrt{5})^2}{2^2} = -1.$$ Therefore, we immediately establish that if $$F_n = \frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right),$$ we have $$F_{n+1} = F_n + F_{n-1}.$$ All that remains is to verify that $F_0$ and $F_1$ as such equal $0$ and $1$, respectively. It is equally trivial to rephrase this proof in an inductive form, since the original identity is the inductive step. There is no need to appeal to the theory of difference equations, nor to generating functions. These underlying mechanisms yield insights into more general cases of linear recursions, but I am of the opinion that parsimony is not without its merits.
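A short SymPy check that the closed form reproduces the Fibonacci numbers (`sp.fibonacci` is SymPy's built-in; `binet` is my own name for the formula):

```python
import sympy as sp

phi1 = (1 + sp.sqrt(5)) / 2
phi2 = (1 - sp.sqrt(5)) / 2
binet = lambda k: sp.expand((phi1**k - phi2**k) / sp.sqrt(5))

print([binet(k) for k in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(all(binet(k) == sp.fibonacci(k) for k in range(30)))   # True
```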
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\lim\limits_{x\to \infty} x\big(\log(x+1) - \log(x-1)\big) =e^2$ need to find the value of $$\lim\limits_{x\to \infty} x(\log(x+1) - \log(x-1))$$ $x(\log(x+1) - \log(x-1))=x(\log{ x+1\over x-1}) = \log({x+1\over x-1})^x = \log(1+{2 \over x-1})^x = \log(1+{2 \over x-1})^{x-1} + \log(1+{2 \over x-1})$ If we take $\lim\limits_{x \to \infty}$ at the last term, $\lim\limits_{x \to \infty}\log(1+{2 \over x-1})^{x-1} = e^2, \lim\limits_{x \to \infty}\log(1+{2 \over x-1})=0$ Thus the answer is $e^2$ Is the above reasoning correct?
The reasoning is very correct...the answer is not: apparently you forgot you had logarithms and instead of $\;e^2\;$ the answer should be $\;\log e^2=2\;$ : $$\lim_{x\to\infty}\color{red}\log\left(1+\frac2{x-1}\right)^x=\lim_{x\to\infty}\left[\color{red}\log\left(1+\frac2{x-1}\right)^{x-1}+\color{red}\log\left(1+\frac2{x-1}\right)\right]=$$ $$=\color{red}\log e^2+\color{red}\log1=\color{red}\log e^2+0=2$$
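The correction is easy to confirm numerically:

```python
import numpy as np

for x in (1e3, 1e5, 1e7):
    print(x * (np.log(x + 1) - np.log(x - 1)))
# all values are close to 2 = log(e^2); the limit is 2, not e^2
```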
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Extracting a direction gradient from a set of points I have a matrix containing a set of points:

[ 100, 100,  40,  50,  30,
   30, 100,  20,  20,  30,
   10,  20,  45,  30,  22,
  102, 200,  10,   0,  10,
   10,  20,  20,  30,  40 ]

Is there a way we can retrieve a directional gradient (vector) from this matrix? The vector should be pointing toward the region that contains higher values. As for this example, as you can see, the values that are greater than 100 are mostly toward the left, therefore we can estimate that the vector may be pointing leftward. I am looking for a formula that is flexible enough such that, even if we change the size of the matrix, we would be able to compute a directional gradient. My apologies if I am not using the right term for directional gradient, as my math knowledge only goes as far as calculus 2. EDIT 1: Each row represents an increasing value of y. At the first row, y=1, and at the last row, y=5. Each column represents an increasing value of x. At the first column x=1, and at the last column x=5.
You can use standard finite difference methods to estimate the derivative in each direction, and therefore determine the gradient. In the interior, the central first-difference formulas (with unit grid spacing) are $$\frac{\partial f(x,y)}{\partial x} \approx \frac{f(x+1,y)-f(x-1,y)}{2}$$ $$\frac{\partial f(x,y)}{\partial y} \approx \frac{f(x,y+1)-f(x,y-1)}{2}$$ So for example, the gradient at $(3,3)$ can be estimated component-wise: $f_x(3,3)=(30-20)/2=5$ and $f_y(3,3)=(10-20)/2=-5$, so the gradient is $\nabla f=5 \hat{i}-5\hat{j}$. On the boundary, you will have to use something simpler, since some of the above terms will be missing: for example, at $(1,1)$, you can use the one-sided differences $$\frac{\partial f(x,y)}{\partial x} \approx \frac{f(x+1,y)-f(x,y)}{1}$$ $$\frac{\partial f(x,y)}{\partial y} \approx \frac{f(x,y+1)-f(x,y)}{1}$$
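NumPy's `gradient` implements exactly this scheme (central differences in the interior, one-sided on the edges); with the question's convention that rows index $y$ and columns index $x$:

```python
import numpy as np

M = np.array([[100, 100,  40,  50,  30],
              [ 30, 100,  20,  20,  30],
              [ 10,  20,  45,  30,  22],
              [102, 200,  10,   0,  10],
              [ 10,  20,  20,  30,  40]], dtype=float)

dfdy, dfdx = np.gradient(M)        # axis 0 is y (rows), axis 1 is x (columns)
print(dfdx[2, 2], dfdy[2, 2])      # 5.0 -5.0: the gradient at (x, y) = (3, 3)
```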
{ "language": "en", "url": "https://math.stackexchange.com/questions/2320953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Products of sparse sets of primes Let $S$ be a set of primes such that $\prod_{p \in S} (1 - 1/p)^{-1}$ converges, so the sum of the reciprocals of the products of these primes converges. If $n_S$ is the largest factor of $n$ that is a product of elements of $S$, then this condition is $$\sum_{n_S = n} \frac{1}{n} = O(1)$$ In "On the periods of the linear congruential and power generators" (2005), Pomerance and Kurlberg claim that this implies that $n_S < \log n$ for almost all $n$. I don't see why this follows. I've tried adding other primes slowly to the Euler product, but this doesn't seem to help. The paper can be found at https://math.dartmouth.edu/~carlp/PDF/par13.pdf (proof of Lemma 7, bottom of page 6)
I think the $\log n$ here is a bit of a red herring; I suspect it was made deliberately weaker than necessary in order to retain the strong analogy with Lemma 14. (As we'll see in a moment, it would still be true with the RHS replaced by $\log \log \log n$.) Let $S^*$ denote the set of values of $n_S$ (in other words $S^* = \{n : n_S = n\}$). Pick your favourite $\epsilon > 0$. Since the reciprocal sum of $S^*$ is convergent, there exists a $C := C(\epsilon)$ such that $$\sum_{\substack { n \in S^* \\n > C}} \frac1n < \epsilon.$$ This easily implies the number of $n\le x$ that are divisible by any large element of $S^*$ is $< \epsilon x$. For any other $n$, we must have $n_S \le C$, and there are only finitely many $n$ for which $\log n \le C$ so then $n_S < \log n$ for large enough $n$. Therefore $n_S < \log n$ for all $n \le x$ with at most $\epsilon x + O_\epsilon(1)$ exceptions. Since we can take $\epsilon$ arbitrarily small, this is $o(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integer Partitions asymptotic behaviour Let $ P(n) $ be the number of partitions of number $n$. Prove that $ P(n)$, grows faster than any polynomial from $n$. I am looking for an elementary (rather bijective) proof of the fact.
The number of ordered partitions of $n$ into exactly $k$ parts is, by the usual combinatorial "stars and bars" argument, $\binom{n-1}{k-1}$: imagine placing $n$ objects in a row and adding $k-1$ dividers in the $n-1$ spaces between them. As a result, a lower bound for the number of unordered partitions of $n$ into $k$ parts is $\frac1{k!} \binom{n-1}{k-1}$, which is $\Omega(n^{k-1})$. This is also, of course, a lower bound for $P(n)$. The argument works for any $k$, proving that $P(n)$ grows faster than a polynomial of any degree.
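To make the bound concrete, here is a small Python check (the recursion is the standard "parts of size at most $m$" count; the function names are mine):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, m):
    # number of partitions of n into parts of size at most m
    if n == 0:
        return 1
    if m == 0:
        return 0
    return P(n, m - 1) + (P(n - m, m) if n >= m else 0)

n, k = 60, 5
print(P(n, n))                               # p(60) = 966467
print(comb(n - 1, k - 1) // factorial(k))    # 3792: the k = 5 lower bound
```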
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Irreducibility of $X^4 + a^2$ with odd $a$ For my abstract algebra course I have to decide whether $$(*)\qquad X^4 + a^2, \; a \in \mathbb{Z}\;\text{odd}$$ is irreducible over $\mathbb{Q}[X]$ and $\mathbb{Z}[X]$. Since the degree of the polynomial is 4 and thus even I should be able to split it into two polynomials of degree two $if$ the coefficients of those polynomials lie in $\mathbb{Z}[X]$ and/or $\mathbb{Q}[X]$ respectively. What I know is that, if I could show that it is irreducible over $\mathbb{Z}[X]$ then I would know by Gauss' Lemma it is so also over $\mathbb{Q}[X]$. On the other hand, I suspect that it might be reducible over $\mathbb{Q}[X]$ which, in turn, would imply it is reducible over $\mathbb{Z}[X]$. What I tried is to set up some general product of two second-degree polynomials $$(x^2 + bx + c)(x^2 + dx + e)$$ and get some conditions on which I could determine a set of parameters $\{b, c, d, e\}$ so that the polynomial in $(*)$ ensues. After coefficient comparison I get several conditions, i.e. $$b + d = 0 \\ e + bd + c = 0 \\ be + cd = 0 \\ ce = a^2 = (2k + 1)^2 $$ for some $k \in \mathbb{Z}$.Here I did not know how to proceed. I saw that $b = -d$ and then tried to resolve the other equations but I did not make progress in that I found factors of (*). I obviously miss something here and appreciate a nudge in the right direction.
Hint: the roots of $x^4 + a^2$ are $$ \pm\sqrt{a}\frac{1 \pm i}{\sqrt{2}}. $$ Separate these into conjugate pairs to find the factorization of $x^4 + a^2$ over $\mathbf{R}[x]$. If you want to save time, or perhaps check your work, the answer is $$ x^4 + a^2 = (x^2 - \sqrt{2a} \cdot x + a)(x^2 + \sqrt{2a} \cdot x + a). $$
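One can watch this happen in SymPy: over $\mathbf{Q}$ the polynomial stays irreducible, and it splits exactly after adjoining $\sqrt{2a}$ (here $a=3$ is an arbitrary odd test case):

```python
import sympy as sp

x = sp.symbols('x')
a = 3
p = x**4 + a**2
print(sp.factor(p))                           # x**4 + 9: irreducible over Q
print(sp.factor(p, extension=sp.sqrt(2*a)))   # (x**2 - sqrt(6)*x + 3)*(x**2 + sqrt(6)*x + 3)
```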
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Example of a random variable $X$ that is an $\mathscr{F}_t$-local martingale, but not an $\mathscr{F}_t^X$-local martingale. This is a problem from Ethier and Kurtz' Markov Processes. The book introduces some theorems on local martingales but they all involve the process being right continuous. I think this problem must be solved using the definitions of a local martingale, that is, there is $\mathscr{F}_t$ stopping times $\tau_1\le \tau_2 \le \cdots$ with $\tau_n \to \infty$ a.s. such that $X^{\tau_n}$ is a martingale. However, I am new to this concept and am lost on how to solve this problem. I would greatly appreciate some help. Let $\eta$ and $\xi$ be independent random variables with $P(\eta=1)=P(\eta=-1)=\frac{1}{2}$ and $E|\xi|=\infty$. Define $$X(t) = \begin{cases} 0, & 0\le t<1, \\ \eta \xi, & t\ge 1, \end{cases}$$ and $$ \mathscr{F}_t = \begin{cases} \sigma(\xi), & 0\le t<1 \\ \sigma(\xi,\eta), & t\ge 1. \end{cases}$$ Show that $X$ is an $\mathscr{F}_t$-local martingale, but that $X$ is not an $\mathscr{F}_t^X$-local martingale.
Because $E|X_t|=E|\xi|=\infty$ for $t\ge 1$, $\{X_t\}$ is not a martingale (with respect to any filtration). Define $\tau_n:=n$ on $\{|\xi|\le n\}$ and $\tau_n=0$ on $\{|\xi|>n\}$. Then $\{\tau_n\}$ is an increasing sequence of $(\mathcal F_t)$ stopping times (note that $\xi$ is $\mathcal F_0$-measurable) with limit $\infty$ on $\{|\xi|<\infty\}$. I assume that $P(|\xi|<\infty)=1$. Because, for $t\ge 1$, $X_{t\wedge \tau_n} = \eta\xi1_{\{|\xi|\le n\}}$, the stopped process $X^{\tau_n}$ is an $(\mathcal F_t)$-martingale. This shows that $X$ is an $(\mathcal F_t)$ local martingale. But, for $n$ sufficiently large, the random variable $\tau_n$ is not an $({\mathcal F}^X_t)$ stopping time, because ${\mathcal F}^X_0=\{\emptyset,\Omega\}$ but $\{\tau_n=0\}=\{|\xi|>n\}$, an event whose probability lies strictly between $0$ and $1$ for large $n$ (it is positive because $E|\xi|=\infty$ forces $|\xi|$ to be unbounded, and less than $1$ because $P(|\xi|<\infty)=1$). As @d.k.o shows, $(X_t)$ does not admit a localizing sequence of $({\mathcal F}^X_t)$ stopping times, and so $(X_t)$ cannot be an $({\mathcal F}^X_t)$ local martingale.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Basis of the image of the linear transformation $f:\mathbb{R^4}\rightarrow\mathbb{R^3}$? I'm asked to find a basis of the image of the linear transformation $f:\mathbb{R^4} \rightarrow \mathbb{R^3}$ defined as $f(v) = (v_1-v_3+v_4,2v_1+v_2+2v_3+v_4,3v_1-v_2+v_4)$. I found the matrix of the linear transformation $Af = \begin{pmatrix} 1 & 0 & -1 & 1\\ 2 & 1 & 2 & 1\\ 3 & -1 & 0 & 1 \\ \end{pmatrix} $ with rank equal to 3. I tried to find a basis this way: $Im f=\{w\in\mathbb{R^3} \mid \exists v\in\mathbb{R^4} \text{ s.t. } f(v)=w\}$, i.e. $$ \left\{\begin{pmatrix}v_1-v_3+v_4\\2v_1+v_2+2v_3+v_4\\3v_1-v_2+v_4\end{pmatrix}\right\}=\left\{v_1\begin{pmatrix}1\\2\\3\end{pmatrix} + v_2\begin{pmatrix}0\\1\\-1\end{pmatrix} + v_3\begin{pmatrix}-1\\2\\0\end{pmatrix} + v_4\begin{pmatrix}1\\1\\1\end{pmatrix}\right\} $$ The problem is that I have 4 vectors here, and I know the image should be of dimension 3 since the rank of the matrix is 3. But I don't know how to compute the basis formed from 3 vectors; I think one of them is a linear combination of the others, but I don't know which one. Any help please? Thank you!
If it's not completely obvious which column is a linear combination of the others, then just row reduce: $\begin{pmatrix} 1 & 0 & -1 & 1\\ 2 & 1 & 2 & 1\\ 3 & -1 & 0 & 1 \\ \end{pmatrix} \to \begin{pmatrix} \color{red}{1} & 0 & -1 & 1\\ 0 & \color{red}{1} & 4 & -1\\ 0 & 0 & \color{red}{1} & \frac{-3}{7} \\ \end{pmatrix} $ Notice that this is row echelon and not reduced, since we only want to find out which columns have pivots. From this you see that the basis is: $$b=\{[1,2,3],[0,1,-1],[-1,2,0]\}$$
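The same computation in SymPy:

```python
import sympy as sp

A = sp.Matrix([[1,  0, -1, 1],
               [2,  1,  2, 1],
               [3, -1,  0, 1]])
print(A.rref()[1])       # (0, 1, 2): the pivot columns
print(A.columnspace())   # basis vectors [1,2,3], [0,1,-1], [-1,2,0]
```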
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Locally constant sheaf on a simply connected space I was reading an article and at some point the writer claims that 1)A locally constant sheaf on a simply connected topological space is a constant sheaf. 2) $H^{i}(U,\mathcal{F})=0 \hspace{0.1cm}\forall i>1$ where U is a homotopically trivial open set and $\mathcal{F}$ a locally constant sheaf. How could I prove that? Thank you for your time.
A more abstract way of reformulating the result is that the category of local systems (= locally constant sheaves) on $X$ with stalk $M$ ($M$ a $k$-vector space or a module) is equivalent to the category of representations $\rho : \pi_1(X) \to GL(M)$. In particular, if $X$ is simply connected then every locally constant sheaf is constant. This also shows that one can compute everything related to the local system from the representation. As an example, if $\mathscr L$ is a local system on $D^*$ (the punctured disk), then it is equivalent to the data of a single element $T \in GL(M)$, the monodromy. The cohomology of $\mathscr L$ is the cohomology of the complex $M \overset{d}{\to} M$ with $d = \text{id} - T$. I am not aware of a formula in the general case, but in theory this should be possible using the Čech complex. I think a good reference for this is Galois Groups and Fundamental Groups by Tamás Szamuely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Rigorous nature of combinatorics Context: I'm a high school student, who has only ever had an introductory treatment, if that, on combinatorics. As such, the extent to which I have seen combinatoric applications is limited to situations such as "If you need a group of 2 men and 3 women and you have 8 men and 9 women, how many possible ways can you pick the group" (They do get slightly more complicated, but are usually similar). Question: I apologise in advance for the naive question, but at an elementary level it seems as though combinatorics (and the ensuing probability that can make use of it), seems not overly rigorous. It doesn't seem as though you can "prove" that the number of arrangements you deemed is the correct number. What if you forget a case? I know that you could argue that you've considered all cases, by asking if there is another case other than the ones you've considered. But, that doesn't seem to be the way other areas of mathematics is done. If I wish to prove something, I couldn't just say "can you find a situation where the statement is incorrect" as we don't just assume it is correct by nature. Is combinatorics rigorous? Thanks
What you phrased as "can you find a situation where the statement is incorrect" is better known as proof-by-contradiction in the realm of propositional logic. Proof by contradiction is a fairly common approach to proving statements about combinatorics and other areas of study in discrete mathematics--typically after a few initial lemmas and/or theorems have been introduced. This is in contrast to a tautological proof. Solving discrete math problems certainly requires a paradigm shift in thought when compared to algebra, because ultimately not everything is always completely quantifiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84", "answer_count": 7, "answer_id": 0 }
find points of ramification, hurwitz formula Hi I have the following question: $$f(z)=4z^2(z-1)^2/(2z-1)^2$$ considered as a meromorphic function over $\mathbb{C}_{\infty}$ has as zeros: $z=0$, $ord_0(f)=2$ $z=1$, $ord_1(f)=2$ and as poles: $z=1/2$, $ord_{1/2}(f)=2$ $z=\infty$, $ord_{\infty}(f)=2$ Now, considering the associated map to f: $F:\mathbb{C}_{\infty}\rightarrow \mathbb{C}_{\infty}$, with $F(z)=f(z)$, if $z\in\mathbb{C}_{\infty}-\{1/2,\infty\}$, and $F(z)=\infty$, if $z=1/2,\infty$ we have that $deg(F)=deg_0(F)=4$ If I apply the Hurwitz formula $$2g(\mathbb{C}_{\infty})-2=deg(F)(2g(\mathbb{C}_{\infty})-2)+\sum_{p\in\mathbb{X}}{[mult_{p}(F)-1]}$$ I have that $-2=4(-2)+4$. I would like to know what I am doing wrong?
You haven't found all of the ramification points. You've found it has multiple zeroes and poles, but it might also be ramified at values other than the zeroes and poles. You can find the other ramification points by finding the zeroes of the derivative $f'(z)$.
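A quick SymPy computation finds the missing critical points: there are two extra ramification points over $-1$, and with them the correction term becomes $6$, so Hurwitz balances as $-2 = 4(-2) + 6$.

```python
import sympy as sp

z = sp.symbols('z')
f = 4*z**2*(z - 1)**2 / (2*z - 1)**2
crit = sp.solve(sp.numer(sp.together(f.diff(z))), z)
print(crit)                                       # [0, 1, 1/2 - I/2, 1/2 + I/2]
print([sp.simplify(f.subs(z, c)) for c in crit])  # [0, 0, -1, -1]
```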
{ "language": "en", "url": "https://math.stackexchange.com/questions/2321831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrating a function over a square using polar coordinates Say we have a function $f(x,y)$ over the unit circle. To integrate with polar coordinates we replace the $x$ and $y$ in $f(x,y)$ with $r\cos\theta$ and $r\sin\theta$ to get $f(r,\theta)$ and we integrate $f(r,\theta)r\,dr\,d\theta$ for $r$ between $0$ and $1$ and $\theta$ between 0 and $2\pi$. What if we want to integrate over a square using polar coordinates. What must we do?
For each of the four sides that make up the square, we will have $0 \le r \le p\sec(\theta-c)$, for suitable values of $p$ and $c$, and $\theta$ ranging suitably, as follows: Notice how $r=\sec(\theta)$ is the equation of a straight line with perpendicular angle $0$ (vertical), one unit to the right of the origin, so $r = p\sec(\theta-c)$ is a line $p$ units away from the origin rotated (anti-clockwise) by $c$. See Polar Coordinate function of a Straight Line. So for a square with corners $(\pm p,\pm p)$, its sides are the lines $\displaystyle p\sec\left(\theta\right), p\sec\left(\theta-\frac{π}{2}\right), p\sec\left(\theta-π\right), p\sec\left(\theta-\frac{3π}{2}\right)$ with $\theta\in \left[-\frac{π}{4},\frac{π}{4}\right], \left[\frac{π}{4},\frac{3π}{4}\right], \left[\frac{3π}{4},\frac{5π}{4}\right], \left[\frac{5π}{4},\frac{7π}{4}\right]$ respectively. So we will have the sum of $4$ double integrals, representing the four right triangles whose corners are a pair of adjacent corners of the square and its centre, which together form the square. For $c\in\{0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}\}$, we have $\displaystyle \int_{c-\frac{π}{4}}^{c+\frac{π}{4}}\int_0^{p\sec(\theta-c)}f(r,\theta)r\,dr\,d\theta$
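As a sanity check, integrating $f\equiv1$ this way should return the square's area $(2p)^2$. A sketch with SciPy (note that `dblquad` takes the inner variable first):

```python
import numpy as np
from scipy import integrate

p = 1.0
total = 0.0
for c in (0, np.pi/2, np.pi, 3*np.pi/2):
    val, _ = integrate.dblquad(lambda r, th: r,          # f = 1 times Jacobian r
                               c - np.pi/4, c + np.pi/4, # theta range
                               lambda th: 0.0,
                               lambda th, c=c: p / np.cos(th - c))
    total += val
print(total, (2*p)**2)   # 4.0 4.0
```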
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
proving a differentiable function $f: \Bbb R \to \Bbb R$ and a constant $c>0$ with $f'(x) \geq c$ for all $x \in \Bbb R$. is a bijection We have a differentiable function $f: \Bbb R \to \Bbb R$ and a constant $c>0$ with $f'(x) \geq c$ for all $x \in \Bbb R$. Show that $f$ is a bijection from $\Bbb R$ to $\Bbb R$. From Rolle's theorem it follows that if $f'(x) \neq 0$ for all $x$, then $f$ is injective, which is the case here, so we know that $f$ is injective. I'm stuck on proving it's surjective. In the questions before this one I already proved with the mean value theorem that $f(x) \geq f(0)+ cx$ if $x \geq 0$, and that $f(x) \leq f(0)+ cx$ for all $x \leq 0$
From $c>0$ and $f(x) \geq f(0)+ cx$ for $x \ge 0$ we get $ \lim_{x \to \infty}f(x)= \infty$. From $c>0$ and $f(x) \le f(0)+ cx$ for $x \le 0$ we get $ \lim_{x \to -\infty}f(x)= -\infty$. Now we derive, by the intermediate value theorem: $f( \mathbb R)= \mathbb R$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How would you prove that this is a group isomorphism? Let $$M_{a} = \begin{pmatrix} 1 & a & \frac{a^2}{2} \\ 0 & 1 & a \\ 0 & 0 & 1\end{pmatrix}$$ where $a \in \mathbb{R}$ and let the function $\phi : \mathbb{R}\rightarrow G$ where $\phi(a) = M_a$ and $G$ is the set containing $M_a$. I've shown this is a group homomorphism under addition in $\mathbb{R}$ and multiplication in $G$, but how would I show it's an isomorphism, if it is? I think it is... mainly from imagining in my head that if I specify any value of $a$ and fix it to be say $x$, then $M_x$ must bring back the value of $x$ (and only $x$) for the inverse function... but I'm not sure if this is flawed or how to actually mathematically show it.
I suppose you meant $\;G=\text{Im}\,\phi\le GL(3,\Bbb R)\;$, and to answer your question: you simply have to prove the following easy fact $$\ker\phi=\{0\}$$ A homomorphism with trivial kernel is injective, and $\phi$ is surjective onto its image $G$ by construction, so it is then a bijective homomorphism, i.e. an isomorphism.
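A symbolic check of the homomorphism property (and of inverses) with SymPy:

```python
import sympy as sp

a, b = sp.symbols('a b')
M = lambda t: sp.Matrix([[1, t, t**2/2],
                         [0, 1, t],
                         [0, 0, 1]])
print(sp.simplify(M(a)*M(b) - M(a + b)))   # zero matrix: phi(a+b) = phi(a)phi(b)
print(sp.simplify(M(a).inv() - M(-a)))     # zero matrix: phi(-a) = phi(a)^(-1)
```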
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrating product of linear functions over a triangle Given a triangle with vertices $N=\{A,B,C\}$, where the triangle is defined to be the convex hull of those vertices, the nodal basis function is defined to be $$\phi_P(x) = \begin{cases} 1, & x = P \\ 0, & x \in N \setminus \{P\} \end{cases}$$ where $P \in \{A,B,C\}$, followed by linear interpolation between these values, so that $\phi_P \in \mathcal{P}_1$. So the nodal basis functions look like this [image of the three hat functions omitted]: they are just linear functions over the triangle. I now have to calculate $$\int_T \phi_i \phi_j \,dx, \quad i \neq j, \quad i,j\in \{1,2,3\},$$ with $\phi_i = a_ix + b_iy + c_i, \ (x,y)\in T$. Can someone please tell me how to integrate such a term?
For the triangle $T_*$ with vertices $A_0=(0,0)$, $A_1=(1,0)$, $A_2=(0,1)$ one has $\phi_1(x,y)=x$ and $\phi_2(x,y)=y$. This implies $$\int_{T_*}\phi_1(x,y)\phi_2(x,y)\>{\rm d}(x,y)=\int_0^1\int_0^{1-x} x y\>dy\>dx=\ldots={1\over24}\ .$$ From "general principles" (see achille hui's comment) it then follows that for an arbitrary triangle $T$ you have $$\int_T\phi_i\phi_j\>{\rm d}(x,y)={1\over12}{\rm area}(T)\qquad(i\ne j)\ .$$
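The reference computation in SymPy (the unit triangle $T_*$ has area $\tfrac12$, so $\tfrac1{24}=\tfrac1{12}\,{\rm area}(T_*)$, consistent with the general formula):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.integrate(x*y, (y, 0, 1 - x), (x, 0, 1)))   # 1/24
```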
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the definition of the Zariski Tangent space I have the following (relating to the definition of the Zariski tangent space) in my notes. Let $m_P$ be the ideal of $P$ in $k[V]$, and $M_P$ as the ideal $\langle X_1,...,X_n\rangle\subset k[X_1,...,X_n]$. Then $m_P=M_P/I(V)$. Then $$m_P/m_P^2=M_P/(M_P^2+I(V))$$ I'm having a lot of trouble seeing the above equality. I haven't really got anywhere past the obvious $$m_P/m_P^2=(M_P/I(V))/(M_P/I(V))^2$$ Presumably $m_P^2=(M_P/I(V))^2=(M_P^2+I(V))/I(V)$ which would yield the desired equality but I just can't see it. We know that $m_P^2$ is the product of the ideal $m_P$ with itself. That is the ideal generated by polynomials $f\cdot g $ where $f$ and $g$ are in $M_P/I(V)$. I'm getting very confused with products of quotients etc...
Let $R$ be a ring, $J$ an ideal of $R$, and $\pi$ be the canonical projection $R \to R/J$. Consider an ideal $I$ of $R$. I find $\pi(I)$ — the ideal of $R/J$ generated by the image of $I$ — to be a much more convenient notion than its explicit construction via cosets $(I+J)/J$. (This is related to the fact that, for $a \in R$, I find $\pi(a)$ a much more convenient notion to work with than $a + J$.) For example, in my opinion the path to seeing that $\pi(I)^2 = \pi(I^2)$ is rather direct, if not outright obvious. Trying to do that with the formula via cosets, however, introduces extra technical details that get in the way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$u \in C^\infty(\mathbb{R^n})$ with compact support $\implies$ $f(x) = ⨏_{\partial B(0,\vert x \vert)} u(t) \ dt\in C^\infty$ with comp. s.? $\newcommand{\avint}{⨍}$ Let $u \in C^\infty(\mathbb{R^n})$ with compact support. How can we prove using direct calculations or Fourier transform methods that $$f(x) = \avint_{\partial B(0,\vert x \vert)} u(t) \ dt\in C^\infty(\mathbb{R}^n)$$ with compact support? We use $\partial B(0,\vert x \vert)$ to denote the boundary of the ball of center $0$ and radius $|x|$ (Euclidean norm).
Let $u \in C_c^\infty(\mathbb R^n)$ and let $\sigma$ be the measure on $S^{n-1}$. Then we set $$f(x) = \frac{1}{\sigma(\partial B(0,|x|))} \int_{\partial B(0,|x|)} u(t) \, dt$$ It's obvious from the definition that $f(x)$ only depends on $|x|$, i.e. $f(x) = \hat f(|x|)$ where $$ \hat f(r) = \frac{1}{r^{n-1} \sigma(S^{n-1})} \int_{S^{n-1}} u(rt) \, r^{n-1} dt = \frac{1}{\sigma(S^{n-1})} \int_{S^{n-1}} u(rt) \, dt $$ Since $u \in C^\infty$ and we integrate over a compact set ($S^{n-1}$), derivatives commute with integration, so $$ \hat f^{(k)}(r) = \frac{1}{\sigma(S^{n-1})} \int_{S^{n-1}} \frac{\partial^k}{\partial r^k} u(rt) \, dt $$ where the integral is defined for all $k = 0, 1, \ldots$ and all $r \in [0, \infty)$, so $\hat f \in C^\infty([0, \infty))$. Since the support of $u$ is compact, and a compact set in $\mathbb R^n$ is bounded, there exists $R>0$ such that $u(x)=0$ whenever $|x|>R$. This implies that $\hat f(r) = 0$ for $r>R$. Thus $\hat f$ has compact support. Now, since $f(x)=\hat f(|x|)$ depends on $x$ only through $r=|x|$, the chain rule gives $$\frac{\partial}{\partial x_i} f(x) = \frac{\partial r}{\partial x_i} \, \hat f'(r) = \frac{x_i}{r} \hat f'(r). $$ So $f$ is differentiable for $|x|>0$, and it's clear that higher order derivatives can be taken, so $f$ is infinitely differentiable at $|x|>0$. But what about derivatives at $x=0$? That's the difficult part and perhaps someone else can give a good answer before I have managed to solve it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
In a triangle $a:b:c =4:5:6$, then $3A+B$ equals to? In the above question $a,b,c$ are sides of triangle and $A,B,C$ are angles. The correct answer is $\pi$ but I am getting $\pi - C$.
HINT: We have $\dfrac a4=\dfrac b5=\dfrac c6=k$ (say) $\implies a=4k$ etc. Use the cosine formula $$\cos A=\dfrac{b^2+c^2-a^2}{2bc}=\cdots=\dfrac{45}{60}>\dfrac12\implies0<A<60^\circ$$ and $$\cos B=\dfrac9{16}$$ $$\cos3A=-\dfrac9{16}$$ so $\cos3A=-\cos B$, hence $3A=\pi-B$, i.e. $3A+B=\pi$. To justify the last step, use How do I prove that $\arccos(x) + \arccos(-x)=\pi$ when $x \in [-1,1]$?
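A numeric confirmation:

```python
import numpy as np

a, b, c = 4.0, 5.0, 6.0
A = np.arccos((b*b + c*c - a*a) / (2*b*c))
B = np.arccos((a*a + c*c - b*b) / (2*a*c))
print(3*A + B, np.pi)   # 3.141592653589793 3.141592653589793
```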
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Diophantine equation without elementary solution but with simple non elementary solution Is there some example of an diophantine equation that satisfies: * *No solution is known using elementary methods. *It is simple to solve using non elementary methods (e.g. using number fields). My goal is to find good motivation to dive into advanced algebra for someone who is used to solve everything using elementary methods, to show that something that is impossible to solve elementary is really easy using advanced techniques. Ideally if the person can try to attack the equation by himself, give up and then recognize the "simple" solution using advanced techniques and understand it (at least the main idea). It is not a problem to find some equations as such in Number theory textbooks, but usually those are also solvable using elementary methods. And if there is an equation in which I am confident person will not solve it using elementary methods, it is something with quite complicated proof (extreme example would be Fermat's Last Theorem). Update: For clarity, let's consider elementary to refer to methods known to Euler (or mathematicians at that time generally). As for simple solution using advanced techniques, that is definitely subjective, and I have currently no idea how to define this, but I believe there is some kind of consensus among mathematicians on things that are simple and elegant.
I think a good example is given by the equation $$x^2-6y^2=1$$ This equation has infinitely many integer solutions $(x,y)=(a_n,b_n)$ determined by $$a_n+b_n\sqrt6=(5+2\sqrt6)^n\qquad (n\ge1)$$ Here the calculation of the number $5+2\sqrt6$ is obviously "fundamental" and it is not obvious at all that there are infinitely many solutions if the equation is approached by elementary means. In general, for any square-free positive integer $d$, the equation (Pell's) $$x^2-dy^2=1$$ also has infinitely many integer solutions given by $$a_n+b_n\sqrt d=(a_0+b_0\sqrt d)^n$$ where $a_0+b_0\sqrt d$ is called the fundamental unit of the quadratic field $\mathbb Q(\sqrt d)$ NOTE.- To take into account here that elementary is a relative concept in mathematics. For many people the text above is quite "elementary".
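The recurrence hidden in the power $(5+2\sqrt6)^n$ makes the infinitude easy to watch computationally:

```python
x, y = 5, 2                      # fundamental solution of x^2 - 6y^2 = 1
for _ in range(5):
    print(x, y, x*x - 6*y*y)     # last column is always 1
    # (x + y*sqrt(6))(5 + 2*sqrt(6)) = (5x + 12y) + (2x + 5y)*sqrt(6)
    x, y = 5*x + 12*y, 2*x + 5*y
```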
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
How to calculate limit as x approaches infinity of a^x/b^x? I'm trying to calculate $\lim_{x\to\infty}\frac{a^x}{b^x}$. I've done a few examples on Wolfram Alpha, and it seems if $a>b$ it goes to infinity and if $a<b$ it goes to $0$, but I am not sure how to prove it. L'Hopital is normally what I would try, but it doesn't seem to work here because it doesn't really change the structure of the numerator or denominator.
Hint: review properties of exponents, especially that the quotient of two numbers each raised to the $x$ power is equal to the quotient of the two numbers itself raised to the $x$ power. That reduces the problem to the behaviour of $(a/b)^x$, which tends to $0$ when $0<a/b<1$ and to $\infty$ when $a/b>1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Absolute value equation infinite solutions $$|3-x|+4x=5|2+x|-13$$ One of the solutions is $[3,\infty)$ I'm not familiar with interval solutions for absolute equations. How to solve for this interval?
One way to solve a problem like this is by graphing both sides of the equation. You can do this in, for example, Desmos here. The linked graph shows that the two graphs coincide for $x \geq 3$ (and that they also intersect at one other $x$-value). Another way to solve these sorts of problems is by analyzing different cases: I see now that kingW3 has already provided an answer in this direction, so I'll curtail my response here.
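A brute-force numeric scan also reveals the solution set:

```python
import numpy as np

x = np.linspace(-6, 6, 1201)
lhs = np.abs(3 - x) + 4*x
rhs = 5*np.abs(2 + x) - 13
sols = x[np.isclose(lhs, rhs)]
print(sols[sols < 0])        # [-3.25]: the isolated solution
print(sols[sols >= 3][:4])   # 3.0, 3.01, 3.02, 3.03: every x >= 3 works
```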
{ "language": "en", "url": "https://math.stackexchange.com/questions/2322975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Convergence/Divergence of some series If I have $\sum_{n=2}^{\infty} \frac {1}{n\log n}$ and want to prove that it diverges, can I use the following? $$\frac {1}{n\log n} \lt \frac {1}{n}$$ * *$\sum_{n=1}^\infty \frac 1 n$ diverges, but the limit of $\frac 1 n$ equals zero, so the comparison, I think, isn't useful. *Or can I say it diverges because $\sum_{n=1}^\infty \frac 1{\log n}$ also diverges? And if I have $\sum_{n=2}^\infty \frac {(-1)^n}{(n)}$ I can use the Leibniz rule. $\sum_{n=1}^\infty \frac 1 n$ diverges, but the limit of it equals zero, so I am confused whether this series converges or diverges. I understand that $\sum_{n=1}^\infty \frac 1 n$ has an infinite sum, but what to do in this case?
Bertrand's series $\displaystyle \sum\limits_{n\ge 2}\frac 1{n^\alpha\ln(n)^\beta}$ converges only for $(\alpha>1)$ or $(\alpha=1,\beta>1)$. This is generally proved using the comparison to an integral as other answers have shown. But in general for series of the kind $\displaystyle \sum\limits_{n\ge 2}\frac 1{n^{\alpha_0}\ln(n)^{\alpha_1}\ln(\ln(n))^{\alpha_2}\ln(\ln(\ln(n)))^{\alpha_3}...}$ Which all share the same property of converging only if $\alpha_i=1$ for $i<k$ and $\alpha_k>1$ (with $k$ being the last one), you can use the condensation Cauchy test. The criterion is : For $(a_n)_n \searrow\ \ge 0$ then $\displaystyle S=\sum\limits_{n\ge 1} a_n<+\infty\iff T=\sum\limits_{n\ge 0} 2^na_{2^n}<+\infty$ with inequality $S\le T\le 2S$. Applied to $\displaystyle S=\sum\limits_{n\ge 2}\frac 1{n\ln(n)}$, it gives $\displaystyle T=\sum\limits_{n\ge 1}\frac{2^n}{2^n\ln(2^n)}=\frac 1{\ln(2)}\sum\limits_{n\ge 1} \frac 1n$ which is divergent.
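The $\log\log$ growth predicted by the integral/condensation argument is visible numerically:

```python
import numpy as np

n = np.arange(2, 1_000_001)
partial = np.cumsum(1.0 / (n * np.log(n)))
for N in (10**3, 10**4, 10**5, 10**6):
    print(N, partial[N - 2], np.log(np.log(N)))
# the partial sums track log(log N) up to a bounded constant: slow divergence
```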
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 2 }
Variance when playing a game with a fair coin I am having a hard time with this question for some reason. You and a friend play a game where you each toss a balanced coin. If the upper faces on the coins are both tails, you win \$1; if the faces are both heads, you win \$2; if the coins do not match (one shows head and the other tail), you lose \$1. Calculate the expected value and standard deviation for your total winnings from this game if you play 50 times. PMF Values: \begin{array}{c|c} \text{winnings} & p\\\hline +\$1 & .25\\ +\$2 & .25\\ -\$1 & .50 \end{array} I have calculated the expectation as $$1(.25)+2(.25)+(-1)(.5) = .25,$$ so $$E(50X) = 50\cdot.25 = \$12.5,$$ which I have confirmed is correct. I know I need to get $\operatorname{Var}(50X)$, but doing a standard variance calculation and then using the formula $a^2\operatorname{Var}(X)$ is not giving me the correct value. What step am I missing?
Variance is the mean of the squares minus the square of the mean: $$ 0.25\cdot1^2+0.25\cdot2^2+0.5\cdot(-1)^2-0.25^2=1.6875 $$ For independent events, the variance of a sum is the sum of the variances, so the variance for $50$ plays is $$ 50\cdot1.6875=84.375 $$ and the standard deviation is $\sqrt{84.375}\approx9.19$.
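A Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(1)
plays = rng.choice([1, 2, -1], p=[0.25, 0.25, 0.5], size=(200_000, 50))
totals = plays.sum(axis=1)
print(totals.mean(), totals.var())   # ~12.5 and ~84.4, matching the computation
```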
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the distinct number of arrangements of the symbols in the string ###@@\$\$\$%%%% that begin and end with % I have tried to solve this by excluding two %'s and performing a permutation of non-distinct objects. After excluding them I was left with 10 elements, so $n=10$, and then considered the non-distinct elements: # ($n_1=3$), @ ($n_2=2$), \$ ($n_3=3$), % ($n_4=2$), giving $$\frac{n!}{n_1!\, n_2!\, n_3!\, n_4!}.$$ Is this the right approach?
So what I'm assuming: $3$ hashtags, $2$ at's, $3$ dollar signs, $4$ percentage signs You first fix two percentages, so you are arranging $3$ hashtags, $2$ at's, $3$ dollar signs, and $2$ percentages This can be done in $\displaystyle \frac{(3+2+3+2)!}{3!2!3!2!}=\boxed{25200}$ ways. Basically, arrange $10$ objects, and divide because each of the TYPES of objects are indistinguishable from one another.
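One can confirm by brute force over the multiset permutations (SymPy's `multiset_permutations` avoids generating duplicate orderings):

```python
from sympy.utilities.iterables import multiset_permutations

s = list("###@@$$$%%%%")
count = sum(1 for p in multiset_permutations(s)
            if p[0] == '%' and p[-1] == '%')
print(count)   # 25200
```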
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is $\infty^0= 1$? For the given integral $\int_e^\infty \frac{1}{x(\log x)^p}dx$, I derived that the above integral equals $\int_1^\infty {u^{-p}}du = \left[ {1\over -p+1}u^{-p+1}\right]_1^\infty = {1\over -p+1}[\infty^{-p+1} -1]$. I need to characterize the range of $p \in \Bbb R$ which makes the given integral converge; however, when $p=1$, the expression $\infty^0$ occurs, and I'd never learned to deal with this notation/character. Is it converging to $1$? If yes, what logical reasoning could be provided?
In general, improper integrals like $\int_a^{\infty}{f(x)\,dx}$ can be defined as $\lim_{b \to \infty}F(b) - F(a)$, where $F(x)$ is an antiderivative of $f(x)$. In this case, however, the antiderivative for the case $p=1$ is not the one deduced from the formula for other values of $p$, so you have to treat it as a special case and say $\int_e^{\infty}\frac{dx}{x\log x}=\int_1^{\infty}\frac{du}{u}=\log(u)\Big|_1^{\infty}=\lim_{b \to \infty}\log(b) - \log(1) =\infty$. For $p\neq1$, the antiderivative $\frac{u^{1-p}}{1-p}$ does apply, and $\lim_{b\to\infty}b^{1-p}$ is finite (namely $0$) exactly when $p>1$; so the integral converges if and only if $p>1$.
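SymPy agrees with the case split:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(1/(x*sp.log(x)),    (x, sp.E, sp.oo)))   # oo  (p = 1 diverges)
print(sp.integrate(1/(x*sp.log(x)**2), (x, sp.E, sp.oo)))   # 1   (p = 2 converges)
```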
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find number of solutions to $2^x=x^2$ without Graphing Find number of real solutions to $2^x=x^2$ without plotting graph: I considered $f(x)=2^x -x^2$ $$f'(x)=2^x \ln 2-2x=0$$ we get again a transcendental equation. Any good approach please
$a_{0}2^x+b_{0}$ may have at most one zero. Before that zero (if it exists) the function has one sign, and after it the opposite sign. Based on just this remark, $a_{1}2^x+b_{1}x$ may have at most two zeros, since its derivative is of the form $a_{0}2^x+b_{0}$, which forces it to behave similarly to a parabola if the derivative has one zero. Again, based on the essential property of $a_{1}2^x+b_{1}x$, the function $a_{2}2^x+b_{2}x^2$ may have at most three zeros (since it may have at most two extreme values). That this bound is actually attained is easily seen from the behavior around $2$: $2^2-2^2=0$, and the function $2^x-x^2$ has a negative slope at $2$ since at $1$, $2^1-1^2>0$, and at $3$, $2^3-3^2<0$. Now $\lim\limits_{x \to -\infty}(2^x-x^2)<0$ and $\lim\limits_{x \to \infty}(2^x-x^2)>0$, and since $2^x-x^2$ is continuous there has to be one zero before $2$ and one zero after $2$. Hence there are exactly three real solutions: $x=2$, $x=4$ (as $2^4=4^2$), and one negative root near $-0.77$.
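The three roots can be pinned down numerically (brackets chosen from the sign changes noted above):

```python
from scipy.optimize import brentq

f = lambda x: 2**x - x**2
roots = [brentq(f, lo, hi) for lo, hi in [(-2, 0), (1.5, 3), (3, 5)]]
print(roots)   # approximately [-0.7667, 2.0, 4.0]
```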
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is Borel-field different from $\sigma$-field? My mathematical statistics book defines a $\sigma$-field as follows: Let $\Bbb B$ be the collection of subsets of $\Bbb C$, where $\Bbb C$ denotes the sample space, which is the collection of all possible events. Then $\Bbb B$ is a $\sigma$-field if (1) $\emptyset \in \Bbb B$ and $\exists b \in \Bbb B$ s.t. $\emptyset \subset b$ (2) $C \in \Bbb B \Rightarrow C^c\in \Bbb B $ where $C \in \Bbb C$ (3) $\{C_1, C_2, C_3..\} \in \Bbb B \Rightarrow \cup_{i=1}^{\infty}C_i \in \Bbb B$ where $\{C_1, C_2, C_3..\}$ is a countable collection of subsets of $\Bbb C$ Is this field a specific example of a Borel field? Or is this field defined in the same way as a Borel field?
A sigma field on a non-empty set $X$ is a collection $\mathcal{F}\subseteq 2^X$ that contains $\emptyset$, is closed under complementation and is closed under countable unions. A Borel field is a sigma field $\mathcal{F}$ that is defined on a topological space $(X, \mathcal{T})$ such that $\mathcal{T} \subseteq \mathcal{F}$. Most authors require that $\mathcal{F}$ is generated by the topology $\mathcal{T}$, that is, $\mathcal{F}$ is the smallest sigma-algebra on $X$ containing the topology $\mathcal{T}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Where does $xe^x$ solution come from when the characteristic polynomial is square? When solving the differential equation $y'' + ay' + by = 0$ (with constant, real coefficients $a$ and $b$, although they could be complex if you like), you do it by setting up the characteristic equation $r^2 + ar + b = 0$, finding its solutions $r_1, r_2$, and then the general solution to this equation is $Ce^{r_1x} + De^{r_2x}$. This works both when the solutions are real and when they are complex. However, when we have a double root $r_1 = r_2$, we get a different general solution, namely $Ce^{r_1x} + Dxe^{r_1x}$. I have no trouble seing that this is indeed a solution, and intuitive reasoning on degrees of freedom dictates that we must have a linear combination of two terms in our general solution, while $e^{r_1x}$ and $e^{r_2x}$ are the same. So the fact that there is a second term of some other form is not surprising. I have, however, yet to see a "natural" explanation of this $xe^{r_1x}$ term. If one were developing the theory from scratch, how would one find this solution (other than blind luck)? If I wanted to teach ODE's to a class of students "the right way", i.e. with good explanations and motivations for everything (as opposed to just pulling out ready-made solutions like what was done to me when I was learning this exact thing), how would I motivate even considering a term like $xe^{r_1x}$ (other than "Well, exponentials aren't quite cutting it, but this is kindof like an exponential, right? Let's try it.")? And is there a way of solving the general differential equation that does not involve splitting into cases depending on whether the characteristic polynomial is a square?
The simple direct approach gives the solution $xe^x$ without much hassle. Let the equation be $$y''-2y'+y=0$$ and let $z=y'-y$ so that the equation can be written as $$z'-z=0$$ The above equation on multiplying with $e^{-x} $ gives $$(ze^{-x}) '=0$$ or $$ze^{-x} =c_1$$ so that $$y' - y=z=c_1e^x$$ Again multiplying by $e^{-x} $ gives us $$(ye^{-x}) '=c_1$$ so that $$ye^{-x} =c_1x+c_2$$ or $$y=c_1xe^x+c_2e^x$$
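As a quick check, SymPy's ODE solver produces exactly this general solution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) - 2*y(x).diff(x) + y(x)))
# Eq(y(x), (C1 + C2*x)*exp(x))
```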
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Graduations of volume on the side of a cone I am trying to put graduated volume markings (every 10 liters) on the side of a cone. Specifically this is the conical section of a wine tank. I know the dimensions of the whole cone (h: 108cm, r: 103.5cm, l: 150.3cm, V: 1220L) but I am having trouble figuring out the volumes. Because the tank is stainless steel, I can't see through or measure the r and h of the marks on the way up. Is the slant height l also proportional in the same way that it is proportional to r and h? I believe it is, however it has been quite a while since I have done math of this sort (pro tip, kids, listen to your Mom when she says you darn will use math in your life). Just wanted to get the input from experts on this. Thanks!
Think of the cone as situated vertex-down with its axis vertical, with slant height $\ell$ (in cm) measured from the vertex. As you suspected, neglecting the bit of cylinder near the vertex, the volume held by the tank from the vertex to a slant height $\ell$ is proportional to $\ell^{3}$, so that $$ V = k\ell^{3} = \frac{1220}{(150.3)^{3}} \ell^{3} \text{ liters} $$ at a slant depth of $\ell$ cm. The gradation mark at volume $V$ liters should therefore be at slant height $$ \ell = 150.3 \times \sqrt[3]{\frac{V}{1220}} \text{ cm} $$ from the vertex. To find the required positions, let $i$ run from $1$ to $122$ in the formula $$ \ell_{i} = 150.3 \times \sqrt[3]{\frac{i}{122}} \text{ cm}. $$ (This assumes the last mark, $\ell_{122}$, is at slant height $150.3$ cm, i.e., the tank is brimming.)
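A short script to print the graduation table (mark $i$ corresponds to $10i$ liters; only a sample of the marks is printed here):

```python
for i in range(1, 123):
    ell = 150.3 * (i / 122) ** (1/3)
    if i in (1, 10, 61, 122):
        print(f"{10*i:5d} L -> {ell:6.1f} cm up the slant")
```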
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding intersection angle at intersection point of two curves I've got two curves: $$(x,y) = (t^2,t+1), \quad t\in\mathbb{R}$$ $$5x^2 + 5xy + 3y^2 -8x -6y + 3 = 0$$ I've found the intersection points: $$(0,1) , (1,0).$$ But I can't figure out how to find the angle between the two curves at these intersection points. Should I use the derivative somehow? How do I differentiate a parametric function?
plugging $$x=t^2,y=t+1$$ in the given equation we get $$5t^4+5t^2(t+1)+3(t+1)^2-8t^2-6(t+1)+3=0$$ which can be simplified to $$5t^3(t+1)=0$$ can you solve it?
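Continuing to the angles: compare the parametric tangent $(2t,1)$ with a tangent of the implicit curve (perpendicular to $\nabla F$). A SymPy sketch:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
F = 5*x**2 + 5*x*y + 3*y**2 - 8*x - 6*y + 3
grad = sp.Matrix([F.diff(x), F.diff(y)])

for t0 in (0, -1):
    px, py = t0**2, t0 + 1
    v1 = sp.Matrix([2*t0, 1])                 # tangent of (t^2, t+1)
    g = grad.subs({x: px, y: py})
    v2 = sp.Matrix([-g[1], g[0]])             # tangent of the implicit curve
    cosang = v1.dot(v2) / (v1.norm() * v2.norm())
    print((px, py), sp.acos(sp.Abs(cosang)))
# (0, 1) 0     -- tangential contact (t = 0 was a triple root)
# (1, 0) pi/2  -- the curves meet at a right angle
```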
{ "language": "en", "url": "https://math.stackexchange.com/questions/2323998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Decomposition of set of roots for a Lie algebra and its Cartan subalgebra Consider a finite dimensional complex semi-simple Lie algebra $L$ with Cartan subalgebra $H$ (i.e. every $h\in H$ is $ad$-semisimple). Denote $\Phi=\Phi(L,H)$ the set of roots. Assume $\Phi=\Phi_1\cup\Phi_2$ for non empty $\Phi_i$ and $(\alpha,\beta)=0$ for all $\alpha\in\Phi_1,\beta\in\Phi_2$. Here $(\alpha,\beta)=\kappa(H^\alpha,H^\beta)$, where $\kappa$ is the Killing form and $H^\alpha$ is the unique element such that $\kappa (H^\alpha,\cdot)=\alpha$ (analogue for $H^\beta$). Set, for $i=1,2$, $$L_i=\text{span}_{\mathbb{C}}\{H^{\alpha}:\alpha\in\Phi_i\}\oplus\bigoplus_{\alpha\in\Phi_i}L_\alpha.$$ Why do we have $L=L_1\oplus L_2$ as Lie algebras?
It is obvious that $L=L_1\oplus L_2$ as vector spaces (note that $\Phi=\Phi_1\cup\Phi_2$ spans $H^*$, so the elements $H^\alpha$ span $H$). To check that it holds as Lie algebras, you need to show that $[L_1,L_2]=0$. But this follows immediately from the orthogonality assumption, and the fact that $\alpha+\beta\notin\Phi$ for all $\alpha\in\Phi_1$ and $\beta\in\Phi_2$: if $\alpha+\beta$ were a root, it would lie in $\Phi_1$ or $\Phi_2$; but $(\alpha+\beta,\beta)=(\beta,\beta)\neq0$ rules out $\Phi_1$ (members of $\Phi_1$ are orthogonal to $\beta\in\Phi_2$), and $(\alpha+\beta,\alpha)=(\alpha,\alpha)\neq0$ rules out $\Phi_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the total number of ways in which $n$ distinct objects can be put into two different boxes so that no box remains empty? I came across this problem in my textbook. According to the book the answer is: $2^n - 2$. But I don't understand how they got to that answer. Can someone help me out?
There are $2^n$ different subsets that can be taken from a set of $n$ objects. Pick any subset and put it in the first box. Then put the rest in the other box. That makes $2^n$ ways to put $n$ objects in $2$ boxes. But you have the restriction that neither box can be empty. So you can't choose the empty set for box 1. And you can't put the whole set of $n$ objects in box 1, because that leaves box 2 empty. So there are $2^n-2$ ways that meet your criteria.
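A brute-force check of the count (a small Python sketch, assuming labelled boxes as in the answer):

```python
from itertools import product

def count_nonempty(n):
    # assign each of the n distinct objects to box 0 or box 1,
    # keeping only assignments that use both boxes
    return sum(1 for a in product((0, 1), repeat=n)
               if 0 in a and 1 in a)

for n in range(2, 9):
    assert count_nonempty(n) == 2**n - 2
    print(n, count_nonempty(n))
```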
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to estimate condition number based on SVD of submatrix? Given an $m\times n$ ($m\geq n$) real valued matrix, $A$, its SVD, and an $n$-dimensional real valued vector, $x$, is there a computationally efficient way to accurately estimate the condition number of the matrix, $B$, constructed by appending $x$ as an additional row to $A$, e.g. without computing the SVD of B, etc.? For example, projecting $x$ into the effective right null space of $A$? This is needed in an application where a list of several vectors $x_i$ are given as candidates to extend the matrix $A$ in such a way that the condition number of $B$ is smaller than the condition number of $A$. My question is related, but not equivalent, to the inverse of the Subset Selection problem (See Golub & Van Loan, Matrix Computations, 3rd ed., pp 590-595). In other words, I would like to take an existing matrix and candidate rows, and constructively build the "best" well-conditioned (albeit over-determined) matrix from these rows, rather than remove them. For a simple example, consider the matrix $A=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 0.5 & 0\\ 0 & 0 & 0.01 \end{array}\right)$ (with condition number $\kappa \left(A\right)=100$) and the candidate vectors $x_{1}=\left(\begin{array}{ccc} 0 & 1 & 0.1\end{array}\right)$, $x_{2}=\left(\begin{array}{ccc}0 & 0.1 & 0.05\end{array}\right)$ Extending $A$ by adding $x_1$ as an extra row produces a matrix, $B_1$, with condition number $\kappa \left(B_1\right)\approx 24.55$, whereas extending $A$ by $x_2$ produces a matrix, $B_2$, with condition number $\kappa \left(B_2\right)\approx 19.99$. Is there a computationally inexpensive way to determine $x_2$ is the better choice?
I was able to find the answer to my question by reading this, which also provides a great list of references. I'll leave out the detailed derivation that is presented in those works, and summarize the answer. Keep in mind, I'm only interested in computing the condition number of the updated matrix, and not all the singular values nor any singular vectors. First, let $A=USV^T$ be the singular value decomposition of $A$. Then, writing $v$ for the appended row $x$, the matrices $B=\left(\begin{array}{c} A\\ v^{T} \end{array}\right)$ and $M=\left(\begin{array}{c} S\\ z \end{array}\right)$ (where $z\equiv v^{T}V$) have the same singular values. These singular values can be obtained by solving the secular (characteristic) equation, i.e., finding the zeros of $f\left(\sigma\right)\equiv1+\sum_i\frac{z_i^{2}}{S_{ii}^{2}-\sigma^{2}}.$ There is quite a bit of literature on how to stably and efficiently solve for these zeros. Fortunately, I only need the smallest and largest ones to compute the condition number. It's easily seen from the form of this function that the old and new singular values are interlaced. Therefore we have a good idea of where to begin looking, and launch an iterative search.
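A numerical sanity check of that secular equation (a NumPy sketch of my own; generically, the singular values of the row-appended matrix should be zeros of $f$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n))
v = rng.standard_normal(n)            # the row to append

U, S, Vt = np.linalg.svd(A, full_matrices=False)
z = Vt @ v                            # components of z = v^T V

B = np.vstack([A, v])                 # A with the new row appended
sigmas = np.linalg.svd(B, compute_uv=False)

# each singular value of B should (generically) satisfy
# f(sigma) = 1 + sum_i z_i^2 / (S_i^2 - sigma^2) = 0
for s in sigmas:
    f = 1 + np.sum(z**2 / (S**2 - s**2))
    print(s, f)                       # f should be ~0 up to rounding
```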
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$e^{2\sqrt{2} \pi i}=(e^{2 \pi i})^{\sqrt{2}}=1$? This is probably stupid. But this true?$$e^{2\sqrt{2} \pi i}=(e^{2 \pi i})^{\sqrt{2}}=1$$ I feel like this is wrong but I cannot see how. Any help is appreciated. Thank you
The "obvious" identity $(a^b)^c=a^{bc}$ does not hold in complete generality. It does hold for all complex numbers $a$ if $b$ and $c$ are restricted to be integers (with a minor caveat for $a=0$ if $b$ and $c$ are negative integers). It does hold for all positive real numbers $a$ if $b$ and $c$ are restricted to be real numbers, provided you define the exponential notation so that $x^y$ is a positive real number when $x$ is a positive real and $y$ is real, e.g., $x^y=e^{y\ln x}=\sum_{n=0}^\infty{(y\ln x)^n\over n!}$ with $\ln x=\int_1^x{dt\over t}$. There are other circumstances under which it holds, but in general it does not. In the problem at hand, $a=e$ is a positive real, and $c=\sqrt2$ is real, but $b=2\pi i$ is not, so you cannot expect the identity to hold.
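The failure is easy to see numerically (a sketch with Python's cmath; the library evaluates $e^{2\pi i}$ to approximately $1$ before the exponent $\sqrt2$ is ever applied):

```python
import cmath, math

z = cmath.exp(2 * math.sqrt(2) * math.pi * 1j)   # e^{2 sqrt(2) pi i}
w = cmath.exp(2 * math.pi * 1j) ** math.sqrt(2)  # (e^{2 pi i})^{sqrt(2)}

print(z)   # a point on the unit circle, well away from 1
print(w)   # approximately 1
```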
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Show that $f:X\to Y$ is continuous if and only if the graph of $f$ is a closed subset of $X\times Y$ Given a function $f:X\to Y$, we define the graph of $f$ as the set $$G(f)=\{(x,f(x)),x\in X\}$$ Show that if $X$ is compact then $f$ is a continuous function if and only if $G(f)$ is a closed subset of $X\times Y$. I know that a set $X$ is called compact when every open cover of $X$ has a finite subcover. Then I need to show that * *If $f$ is continuous then $G(f)$ is a closed subset of $X\times Y$. *If $G(f)$ is a closed subset of $X\times Y$ then $f$ is continuous. How can I show that?
Your purported equivalence can fail in both directions, if we don't add extra assumptions on $Y$. The implication $G(f)$ closed implies $f$ continuous can fail for compact $X$: Let $X = [0,1]$ in the cofinite topology; this is a compact space. Let $Y = [0,1]$ in the discrete topology. Define $f(x) =x$ from $X$ to $Y$, then $f$ is not continuous, as $f^{-1}[\{0\}] = \{0\}$ is not open in $X$, but $\{0\}$ is open in $Y$. But $G(f)$ is closed in $X \times Y$: suppose $(p,q) \notin G(f)$, then $q \neq p$, and the set $(X\setminus\{q\}) \times \{q\}$ is an open neighbourhood of $(p,q)$ that misses $G(f)$. We can drop the compactness of $X$ and replace it by the compactness of $Y$; then the implication does hold: Suppose then that $G(f)$ is closed. Kuratowski's theorem says that $\pi_X: X \times Y \to X$ is a closed map for compact $Y$. Let $C \subseteq Y$ be closed and check that $$f^{-1}[C] = \pi_X[(X \times C)\cap G(f)]$$ which is the image of a closed set of $X \times Y$ under $\pi_X$, so $f^{-1}[C]$ is closed for all closed $C \subseteq Y$, meaning that $f$ is continuous. The implication $f$ continuous implies $G(f)$ closed can also fail for compact $X$ (even for compact $Y$): Let $X = \{0,1\}$ in the discrete topology, $Y$ the same set in the indiscrete (trivial) topology. Again let $f$ be the identity. This $f$ is continuous, but every basic open neighbourhood of the point $(0,1)$ contains $\{0\} \times \{0,1\}$, which intersects $G(f)$. So $(0,1) \in \overline{G(f)} \setminus G(f)$, so $G(f)$ is not closed. If we add the condition that $Y$ is Hausdorff, we don't need compactness of $X$ at all to see that $f: X \to Y$ continuous implies $G(f)$ is closed. This then always holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Calculus BC problem So I was attempting to solve this problem and just got stuck overall. Since this is just practice, an explanation is much more valuable than the answer itself. Thanks! Here is the problem
Hint: The slope of the line is $m=\dfrac{\sin(K)-0}{K-0}=\dfrac{\sin(K)}K$, then its equation is $$y=\dfrac{\sin(K)}Kx$$ Now, observe that \begin{align*}\text{Yellow area }&=\int_0^K\left(\sin x-\dfrac{\sin(K)}Kx\right)dx\\&=1-\cos(K)-\frac{\sin(K)}{2K}K^2\\&=1-\cos(K)-\tfrac K2\sin(K) \end{align*} And \begin{align*} \text{Green area }&=\int_0^K\dfrac{\sin(K)}Kx\,dx+\int_K^{\pi}\sin x\,dx\\ &=\tfrac K2\sin(K)-\cos\pi+\cos(K)\\ &=1+\cos(K)+\tfrac K2\sin(K) \end{align*}
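Both integrals can be checked symbolically, if you want to see where the expressions come from (a SymPy sketch, not part of the original hint):

```python
import sympy as sp

x, K = sp.symbols('x K', positive=True)
line = sp.sin(K) / K * x   # the chord y = (sin K / K) x

yellow = sp.integrate(sp.sin(x) - line, (x, 0, K))
green = sp.integrate(line, (x, 0, K)) + sp.integrate(sp.sin(x), (x, K, sp.pi))

print(sp.simplify(yellow - (1 - sp.cos(K) - K*sp.sin(K)/2)))  # expect 0
print(sp.simplify(green - (1 + sp.cos(K) + K*sp.sin(K)/2)))   # expect 0
```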
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
On the negation of Lipschitz continuity Let $f: [a,b] \to \mathbb{R}$ be a continuous function which is not Lipschitz continuous. Can we say there exist $x \in [a,b] $ and strictly monotone sequences, $\{x_n\}_{n=1}^{\infty} \subseteq [a,b] $ and $\lambda_{n} \in \mathbb{R^+} $ such that $x_n \to x$ and $\lambda_{n} \to + \infty $, $$|f(x_n) - f(x)| > \lambda_{n} |x_n - x| $$ for all $n \in \mathbb{N}$? P.S: Clearly we don't need to care about monotonicity of those two sequences!
Consider the function $$ f(x) = \begin{cases} x \sin (1/x), &\text{if}\ x\neq 0,\\ 0, & \text{if}\ x = 0. \end{cases} $$ This function is continuous in $\mathbb{R}$, it is not Lipschitz continuous, and it has continuous derivative in $\mathbb{R}\setminus\{0\}$, hence around each point $x\neq 0$ it is locally Lipschitz continuous. Hence the only candidate for the point $x$ in your claim is $x=0$. On the other hand $$ |f(y) - f(0)| \leq |y| \qquad \forall y. $$ So once $\lambda_n \geq 1$, the strict inequality $|f(x_n)-f(0)| > \lambda_n |x_n|$ cannot hold, and no sequences as in the question exist for this $f$: the claim is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Is $\sum_{n=0}^{\infty} \frac{n^2}{(b-n^2)(a-n^2)}$ expressible in terms of trigonometric functions I recently ran into the sum $$S=\sum_{n=0}^{\infty} \frac{n^2}{(\alpha-n^2)(\beta-n^2)}.$$ Mathematica gives it in terms of the Digamma function as $$S=\frac{-\alpha \psi ^{(0)}(1-\alpha )+\alpha \psi ^{(0)}(\alpha +1)+\beta (\psi ^{(0)}(1-\beta )-\psi ^{(0)}(\beta +1))}{2 \left(\alpha ^2-\beta ^2\right)}.$$ However I am working on a physics paper where $S$ mysteriously gets written in terms of trigonometric functions. I don't see how this is possible... Does the Digamma function expression above simplify in such a way? Or is there any other way to compute $S$ in terms of trigonometric functions?
Writing $a,b$ in place of $\alpha,\beta$: $$S=\sum_{n=0}^{\infty} \frac{n^2}{(b-n^2)(a-n^2)}=\frac 1 {a-b}\left(\sum_{n=0}^{\infty} \frac{a}{n^2-a}-\sum_{n=0}^{\infty} \frac{b}{n^2-b}\right)$$ leading to $$S=\frac{\pi \sqrt{b} \cot \left(\pi \sqrt{b}\right)-\pi \sqrt{a} \cot \left(\pi \sqrt{a}\right)}{2 (a-b)}$$
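The closed form is easy to test numerically (an mpmath sketch; $a$ and $b$ below are arbitrary positive non-square test values):

```python
import mpmath as mp

a, b = mp.mpf('2.3'), mp.mpf('0.7')   # arbitrary test values

term = lambda n: n**2 / ((b - n**2) * (a - n**2))
series = mp.nsum(term, [0, mp.inf])

closed = (mp.pi*mp.sqrt(b)*mp.cot(mp.pi*mp.sqrt(b))
          - mp.pi*mp.sqrt(a)*mp.cot(mp.pi*mp.sqrt(a))) / (2*(a - b))

print(series, closed)   # should agree to working precision
```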
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is the radius of convergence $\frac{1}{r}=\liminf_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right| ?$ Let us consider the series $$\sum_{n\in\mathbb Z}a_nz^n.$$ We denote $R$ the radius of $\sum_{n=0}^\infty a_nz^n$ and $r$ the radius of $\sum_{n=-\infty }^{-1}a_nz^n$, i.e. the series converges absolutely if $r<|z|<R$. The thing I don't understand is why $$\frac{1}{r}=\liminf_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right|.$$ Indeed, $$\sum_{n=-\infty }^{-1}a_nz^n=\sum_{n=1}^\infty a_{-n}z^{-n},$$ and thus, by d'Alembert, it converges if $$\limsup_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right|\frac{1}{|z|}<1\implies |z|>\limsup_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right|:=r.$$ Therefore $$r=\limsup_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right|.$$ What's wrong here? Why the $\liminf$? I can't get it.
This is wrong: $$\frac{1}{r}=\liminf_{n\to \infty }\left|\frac{a_{-n-1}}{a_{-n}}\right|$$ A better formula is $$\frac{1}{r}=\liminf_{n\to \infty }\left|\frac{a_{-n}}{a_{-n-1}}\right|$$ But as Daniel Fischer points out in a comment, this is not generally correct. It is correct if $|a_{-n}|$ is monotonic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2324900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Poisson processes and their arrival times I am currently studying for my non-life insurance exam and have the following problem: Let $S(t) = \sum_{i=1}^{N(t)} (X_i + T_i)^2$, where $X_i$ are i.i.d. r.v. with density $f(x)$ and $T_i$ are the arrival times of the homogeneous Poisson process $N(t)$ with intensity $\lambda =2$. With a given density $f(x) = \exp(-x)$ for $x \geq 0$, how can one calculate $E[S(t)]$? Now I know that $P(T_1 > t) = \exp(-\int_0^t \lambda(s) ds) = \exp(-2t)$. So the density would be given by $g_1(t) = 2\exp(-2t) $. Furthermore I could write the following: $$ S(t) = \sum_{i=1}^{N(t)} (X_i + T_i)^2 = \sum_{i=1}^{N(t)} X_i^2 + 2\sum_{i=1}^{N(t)} X_i T_i + \sum_{i=1}^{N(t)}T_i^2 $$ If I had only $\sum_{i=1}^{N(t)} X_i^2$, I'd know that $$ E[S(t)] = E\big[E[S(t) \mid N(t)]\big] = E[N(t)]E[X_i^2] $$ How can I proceed with the arrival times?
http://www.maths.qmul.ac.uk/~ig/MAS338/PP%20and%20uniform%20d-n.pdf Using the well-known result about symmetric functionals of the arrival times (Theorem 1.2), we have $$ \begin{align} E[S(t)] &= E\left[\sum_{i=1}^{N(t)} (X_i + T_i)^2\right] \\ &= E\left[E\left[\sum_{i=1}^{N(t)} (X_i + T_i)^2\Bigg|N(t)\right]\right] \\ &= E\left[E\left[\sum_{i=1}^{N(t)} (X_i + U_i)^2\Bigg|N(t)\right]\right] \\ &= E[N(t)]E[(X_1+U_1)^2] \\ \end{align}$$ where $U_i$ are i.i.d. as $\text{Uniform}(0, t)$ and independent of $N(t)$. Now it just remains to calculate $$ E[X_1] = 1, E[X_1^2] = 2, E[U_1] = \frac {t} {2}, E[U_1^2] = \frac {t^2} {3}$$ and thus $$ E[N(t)]E[(X_1+U_1)^2] = \lambda t\left(2 + 2 \times 1 \times \frac {t} {2} + \frac {t^2} {3}\right) = \frac {\lambda t} {3}(t^2 + 3t + 6) $$
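The formula is also easy to confirm by simulation; the Monte Carlo below (my own sketch) exploits the same order-statistics fact, since conditional on $N(t)=n$ the arrival times are distributed as $n$ sorted uniforms on $(0,t)$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t, trials = 2.0, 1.5, 200_000

total = 0.0
for _ in range(trials):
    n = rng.poisson(lam * t)
    T = np.sort(rng.uniform(0, t, size=n))   # arrival times given N(t) = n
    X = rng.exponential(1.0, size=n)         # i.i.d. Exp(1) marks
    total += np.sum((X + T)**2)

estimate = total / trials
theory = lam * t / 3 * (t**2 + 3*t + 6)
print(estimate, theory)   # the two numbers should be close
```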
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is $|x| \cdot |x| = |x^2| = x^2$? Is $|x| \cdot |x| = |x^2| = x^2$? I'm very sorry if this question is a duplicate but I couldn't find anything about it (most likely because it's wrong..). But I'm not sure if this is correct so I need to ask you. $$|x| \cdot |x| = |x^2| \text{ should be alright}$$ Now my confusion starts. $x^2$ should be non-negative for any value of $x$. That would mean we can ignore the absolute value sign? On the other hand we could have $|-x^2|$. But that would be a different thing from $|x^2|$; they are not equal to each other...? Please help me; if I do this little thing wrong, the entire task will be wrong. I got some thinking error here.. If there is the same question (I couldn't find one), please link me to it and I will delete this one immediately.
You are overthinking it. You could just look at the definition of the absolute value $$ |x|:=\begin{cases} x,&x\geq 0\\ -x,&x<0 \end{cases} $$ and check on your own that $|x|^2=|x^2|=x^2$. In general, we have $|a|\cdot|b|=|ab|$, which is true also for complex numbers; but the identity $|x^2|=x^2$ is not necessarily true in the complex world.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 0 }
Find this limit. Compute the value of the limit: $$ \lim_{x\to 0}{\frac{1-\cos x\cos2x\cos3x}{\sin^2x}} $$ I've tried simplifying the expression to $$ \lim_{x\to 0}\frac{-8\cos^6x+10\cos^4x-3\cos^2x+1}{\sin^2x} $$ But I don't know what to do after this.
Using the identities $\displaystyle \sin^2(x)=\frac{1-\cos(2x)}{2}$, $\displaystyle \cos(4x)=2\cos^2(2x)-1$, and $\displaystyle \cos(x)\cos(3x)=\frac12(\cos(2x)+\cos(4x))$, we obtain $$\begin{align} \frac{1-\cos(x)\cos(2x)\cos(3x)}{\sin^2(x)}&=\frac{1-\frac{\cos(2x)\left(\cos(2x)+\overbrace{(2\cos^2(2x)-1)}^{=\cos(4x)}\right)}{2}}{\frac{1-\cos(2x)}{2}}\\\\ &=\frac{-2\cos^3(2x)-\cos^2(2x)+\cos(2x)+2}{1-\cos(2x)}\\\\ &=2\cos^2(2x)+3\cos(2x)+2 \end{align}$$ whence taking the limit as $x\to 0$ yields the coveted result $$\lim_{x\to 0}\left(\frac{1-\cos(x)\cos(2x)\cos(3x)}{\sin^2(x)}\right)=7$$
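For what it's worth, the value is confirmed symbolically by a one-liner (SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
expr = (1 - sp.cos(x)*sp.cos(2*x)*sp.cos(3*x)) / sp.sin(x)**2
print(sp.limit(expr, x, 0))   # prints 7
```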
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Understanding the third Sylow theorem I am trying to understand the following theorem: Let $G$ be a finite group, $p$ a prime number, and let's suppose $|G|=p^ns$ such that $p$ doesn't divide $s$. Let $n_p$ be the number of Sylow $p$-subgroups of $G$; then we have $$\begin{cases}n_p |s\\n_p \equiv 1 \mod p\end{cases}$$ Now my problem is with $n_p$: I don't understand why $n_p$ can be anything different from $1$. A Sylow $p$-subgroup is a $p$-subgroup which is maximal, but I don't understand why there can be more than one maximal $p$-subgroup of a group $G$.
Maximal means "not contained in anything else" not "everything else is contained in it." Look at $S_3$ where you have three Sylow-$2$ subgroups, $\{(1), (ij)\}$ for any $1\le i\ne j\le 3$. Or if you prefer a simpler example of where "maximal" means this, look at a case with some sets: consider $\{\varnothing, \{1\},\{2\}\}$ with respect to inclusion both $\{1\}$ and $\{2\}$ are maximal because they are not properly contained in anything larger, even though neither contains the other.
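You can even list the three Sylow $2$-subgroups of $S_3$ by brute force (a small Python sketch; a permutation is stored as the tuple $(p(0),p(1),p(2))$):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2)
# the elements of order 2 in S_3 are exactly the three transpositions
involutions = [p for p in permutations(range(3))
               if p != identity and compose(p, p) == identity]

# each one generates a subgroup of order 2 = 2^1, which is a Sylow
# 2-subgroup because |S_3| = 6 = 2 * 3
for p in involutions:
    print({identity, p})
```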
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Derivative of the logit function I have plotted a logit function and its derivative. My first question is how to interpret the graph of the derivative of the logit function, and second, why does the second derivative become the logit function itself?
The second derivative of the logit function is not equal to itself. Look: $$l(x)=\ln\bigg(\frac{x}{1-x}\bigg)$$ $$l(x)=\ln(x)-\ln(1-x)$$ Then differentiate: $$l'(x)=\frac{1}{x}+\frac{1}{1-x}$$ Then differentiate again: $$l''(x)=-\frac{1}{x^2}+\frac{1}{(1-x)^2}$$ The second derivative of the logit function is a completely different function.
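The corrected derivatives are easy to double-check (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
logit = sp.log(x / (1 - x))

d1 = sp.simplify(sp.diff(logit, x))     # 1/x + 1/(1 - x)
d2 = sp.simplify(sp.diff(logit, x, 2))  # -1/x**2 + 1/(1 - x)**2
print(d1)
print(d2)
```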
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
"X is distributed as ..." notation A question about notation: We sometimes use $\sim$ to denote "distributed as" e.g. if $X$ is Gaussian we write $X \sim N(\mu, \sigma^2)$. Is it acceptable to use the "~" notation for an arbitrary distribution? e.g. can we write $$X \sim \begin{cases} \frac{3}{2}x^2, & x \in [-1,1] \\0 & \text{otherwise} \end{cases}$$ If not, is there another way to write "$X$ is distributed as ..."? i.e. without the more verbose "Let $p_X(x)$ be the pdf of $X$. Then $p_X(x)=$ ...
To the best of my knowledge, it is not that rare to see $X\sim f_X(x)$ as a shorthand for "$X$ is distributed according to $f_X(x)$". However, I've never encountered your version ($\sim$ followed by a braced case definition, etc.) in any formal setting.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Why can I put the limit sign in equations Why can I put the limit sign on both sides of an equation? Is there a rigorous way to prove that if $f(x)=g(x)$, then $\lim \limits_{x\to x_0}f(x) =\lim \limits_{x\to x_0}g(x)$? Thanks.
Well, if $f(x)=g(x)$ for every $x$ (or at least for every $x$ near $x_0$), then $f$ and $g$ are the same function there. Therefore we have $\displaystyle \lim_{x \to x_0}[f(x)]=\lim_{x \to x_0}[g(x)]$, because we are taking the limit of an identical expression under a different name: either both limits exist and are equal, or neither exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the minimum and maximum distance between the ellipsoid of equation $x^2+y^2+2z^2=6$ and the point $P=(4,2,0)$ I've been asked to find, if it exists, the minimum and maximum distance between the ellipsoid of equation $x^2+y^2+2z^2=6$ and the point $P=(4,2,0)$. To start, I've tried to find the points of the ellipsoid where the distance is minimum and maximum using the method of Lagrange multipliers. First, I considered the sphere centered on the point $P$ as if it were a level surface and constructed: $f(x,y,z)=(x-4)^2+(y-2)^2+z^2-a^2$ And I did the same with the ellipsoid, getting the following function: $g(x,y,z)=x^2+y^2+2z^2-6$ Then, I tried to get the points where $\nabla f$ and $\nabla g$ are parallel: $\nabla f=\lambda\nabla g$ $\rightarrow$ $(2x-8,2y-4,2z)=\lambda(2x,2y,4z)$ So, solving the system I got that $\lambda=1/2$, so in consequence $x=8$ and $y=4$. But when I replaced the values of $x,y$ obtained in $g=0$ to get the coordinate $z$, I got a complex number. I suspect there is something in my reasoning which is incorrect; can someone help me?
hint The ellipsoid can be parametrized as $$x_e =\sqrt {6}\sin (\phi)\cos (\theta) $$ $$y_e=\sqrt{6}\sin (\phi)\sin (\theta ) $$ $$z_e=\sqrt {3}\cos (\phi) $$ The square of the distance from a point of the ellipsoid to the point $(4,2,0)$ is $$D^2=(x_e-4)^2+(y_e-2)^2+z_e^2$$ $$=3\sin^2 (\phi)+23-8x_e-4y_e $$ You can find min and max $D$ by solving the system $$\frac {\partial D^2}{\partial \phi}=\frac {\partial D^2}{\partial \theta}=0$$
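If you just want numbers, the parametrization also lends itself to a crude grid search (a NumPy sketch of mine; refine the grid or use a proper optimizer for more digits):

```python
import numpy as np

phi, theta = np.meshgrid(np.linspace(0, np.pi, 1000),
                         np.linspace(0, 2*np.pi, 1000))
x = np.sqrt(6) * np.sin(phi) * np.cos(theta)
y = np.sqrt(6) * np.sin(phi) * np.sin(theta)
z = np.sqrt(3) * np.cos(phi)

D2 = (x - 4)**2 + (y - 2)**2 + z**2
print(np.sqrt(D2.min()), np.sqrt(D2.max()))  # approximate min and max distance
```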
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
When in $b-$base representation of a number all of $0, 1, 2, ..., b-1$ exist? If $a,b \ge 2$ are given, prove that there is a positive integer $m$ such that $ a \mid m$ and in $b-$base representation of $m$ all of $0, 1, 2, ..., b-1$ exist. My teacher gave this to me, but I think his statement was wrong. It was like this: For $a,b \ge 2$, prove that there is a positive integer $m$ such that $ a \mid m$ and in $b-$base representation of $\color{red}a$ all of $0, 1, 2, ..., b-1$ exist. Mine makes more sense, doesn't it?
Pick $r$ with $b^r>a$. Let $A$ be a number that, in base $b$, uses all of the digits $0,1,\dots,b-1$. Then one of the $a$ consecutive numbers $b^rA,\ldots, b^rA+a-1$ is divisible by $a$; and since $a-1<b^r$, each of these numbers differs from $b^rA$ only in its last $r$ base-$b$ digits, so the digits of $A$, and hence all of $0,1,\dots,b-1$, still appear in its base-$b$ representation.
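For small parameters the statement is easy to test by brute force (a Python sketch; it merely searches for the least valid multiple, whereas the proof above constructs one):

```python
def digit_set(m, b):
    ds = set()
    while m:
        m, r = divmod(m, b)
        ds.add(r)
    return ds

def least_witness(a, b):
    # smallest positive multiple of a using every base-b digit
    m = a
    while digit_set(m, b) != set(range(b)):
        m += a
    return m

for a in range(2, 8):
    for b in range(2, 6):
        print(a, b, least_witness(a, b))
```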
{ "language": "en", "url": "https://math.stackexchange.com/questions/2325896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose that $\lim_{x\to \infty} f'(x) = a$. Is it true that $\lim_{x\to \infty} {f(x)\over x} = a$ Suppose that $\lim_{x\to \infty} f'(x) = a$. Is it true that $\lim_{x\to \infty} {f(x)\over x} = a$ If so, can you prove it? Thanks!
HINT Let us first assume $a \gt 0$. Then, because $\lim_{x\to \infty} f'(x) = a \gt 0$, there is $M \gt 0$ such that $f'(x) \gt 0$ for all $x \ge M$. Therefore $f$ is strictly increasing on $[M, +\infty)$ and unbounded (otherwise $\lim_{x\to \infty} f'(x) =0$). It follows that $\lim_{x\to \infty} f(x) = +\infty$ and we can apply L'Hospital to $\lim_{x\to \infty} {f(x)\over x}$. Now, let's assume $a=0$. Let $g(x) = f(x) + x$. Then $\lim_{x\to \infty} g'(x)=1$, therefore $\lim_{x\to \infty} {g(x)\over x}=1$ and from here $\lim_{x\to \infty} {f(x)\over x}=0$. Finally, for $a \lt 0$, apply the case $a \gt 0$ to $-f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
equation of ellipse after projection If I have the intersection of $x+z=1$ and $$x^2 +y^2 +z^2=1$$ which is a circle in $O'xyz$. Then I do a projection of this circle on the $O'xy$ plane, it'll be an ellipse. How can I then find the equation of this ellipse?
If $(x,y,z)$ is a point on the circle, then $(x,y,z)$ satisfies both $x+z=1$ and $x^2+y^2+z^2=1$. Its equations are $$\begin{cases} z=1-x \\ x^2+y^2+(1-x)^2=1\end{cases}$$ Its projection has $z$-coordinate equal to $0$ and keeps the $x$ and $y$-coordinates. So the equation of the projection on the $xy$-plane is \begin{align} x^2+y^2+(1-x)^2&=1\\ 2x^2+y^2-2x&=0 \end{align} Another way to think of the problem. The centre $C$ of the circle is a point on the plane $x+z=1$. The line joining $C$ and the origin is orthogonal to the plane. It is easy to see that $\displaystyle C=\left(\frac{1}{2},0,\frac{1}{2}\right)$. $C$ is $\displaystyle \frac{\sqrt{2}}{2}$ unit from the origin. So the radius of the circle is $\displaystyle\sqrt{1-\left(\frac{\sqrt{2}}{2}\right)^2}=\frac{\sqrt{2}}{2}$. The projection of the circle is an ellipse with centre $\displaystyle \left(\frac{1}{2},0,0\right)$. The semi-major axis of the ellipse is parallel to the $y$-axis and has the same length as the radius of the circle ($\displaystyle =\frac{\sqrt{2}}{2}$). As the plane makes a $45^\circ$ angle with the $xy$-plane, the semi-minor axis has length $\displaystyle \frac{\sqrt{2}}{2}\cos45^\circ=\frac{1}{2}$, and it is parallel to the $x$-axis. The equation of the ellipse is \begin{align} \frac{(x-\frac{1}{2})^2}{(\frac{1}{2})^2}+\frac{y^2}{(\frac{\sqrt{2}}{2})^2}&=1\\ (2x-1)^2+2y^2&=1 \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the tangential velocity the same as finding the tangent vector? Sorry for the stupid question. I am calculating the tangent vector for a vector function, and the problem also asks for arc length, speed and unit tangent vector. I did OK, but when I hear the term tangential velocity of an object in physics, is that the same as the calculation I am making by finding the tangent vector?
The tangential velocity (the speed) is usually the length of the tangent vector. It depends on the parametrization: if, for example, you pass through the curve more slowly, the tangent vectors become shorter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Evaluate the following Determinant of $12$th degree polynomial Evaluate $$\Delta=\begin{vmatrix} \frac{1}{(a+x)^2} & \frac{1}{(b+x)^2} & \frac{1}{(c+x)^2}\\ \frac{1}{(a+y)^2} & \frac{1}{(b+y)^2} & \frac{1}{(c+y)^2}\\ \frac{1}{(a+z)^2} & \frac{1}{(b+z)^2} & \frac{1}{(c+z)^2}\\ \end{vmatrix}$$ My Try: I have taken all the denominators out and we obtain $$\Delta=f(a,b,c,x,y,z)\times \begin{vmatrix} (b+x)^2(c+x)^2 & (a+x)^2(c+x)^2 & (a+x)^2(b+x)^2\\ (b+y)^2(c+y)^2 & (a+y)^2(c+y)^2 & (a+y)^2(b+y)^2\\ (b+z)^2(c+z)^2 & (a+z)^2(c+z)^2 & (a+z)^2(b+z)^2\\ \end{vmatrix}$$ where $$f(a,b,c,x,y,z)=\frac{1}{\left((a+x)(b+x)(c+x)(a+y)(b+y)(c+y)(a+z)(b+z)(c+z)\right)^2} $$ By the factor theorem we observe that $a-b$, $b-c$, $c-a$, $x-y$, $y-z$ and $z-x$ are factors of the new Determinant above. But how to find the remaining factors?
It's obvious that we have a factor $(a-b)(b-c)(c-a)(x-y)(y-z)(z-x)$
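For what it's worth, the remaining factor can be hunted down with a computer algebra system (a SymPy sketch of the polynomial determinant obtained after clearing denominators; it may take a while to run):

```python
import sympy as sp

a, b, c, x, y, z = sp.symbols('a b c x y z')

def row(t):
    # entries of the cleared-denominator determinant for variable t
    return [((b+t)*(c+t))**2, ((a+t)*(c+t))**2, ((a+t)*(b+t))**2]

M = sp.Matrix([row(x), row(y), row(z)])
print(sp.factor(M.det()))
```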
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof or counterexample: Let $A$ be a square matrix, then: * *If $A$ is diagonalizable, then so is $A^2$ I answered yes. I argued that since $A$ is diagonalizable there exists an eigenbasis, and since $A^2$ has the same eigenvectors than $A$, and its eigenvalues are those of $A$ squared, there is also an eigenbasis for $A^2$, so it is diagonalizable * *If $A^2$ is diagonalizable, then so is $A$ I am pretty sure the answer is no, but I can't think of a counterexample. Thank you in advance
Suppose $A$ is diagonalisable. Then there is a basis of eigenvectors $e_i$ with eigenvalues $\lambda_i$, not necessarily distinct. With respect to this basis, $A^2 e_i = \lambda_i^2 e_i$, so $e_i$ is also a basis of eigenvectors for $A^2$, and hence $A^2$ is diagonal in this basis. (Or one can use the existence of a unitary or orthogonal matrix $U$ so $UAU^{-1}=D$ is diagonal. Then $UA^2U^{-1} = UAU^{-1}UAU^{-1} = D^2$ is also diagonal.) Counterexample for the other direction: $$ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} $$ is not diagonalisable, but $A^2=0$ clearly is.
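Both claims about the counterexample can be checked directly (a SymPy sketch):

```python
import sympy as sp

A = sp.Matrix([[0, 1],
               [0, 0]])

print(A.is_diagonalizable())       # False: eigenvalue 0 has a 1-dim eigenspace
print((A**2).is_diagonalizable())  # True: A**2 is the zero matrix
```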
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Angle chasing with two tangent circles. The bigger circle $\Omega$ is tangent to the smaller circle $\omega$. Also, $GE=2CG$. We have to find $\angle DEC$. MY WORK SO FAR. I proved using the Alternate Segment Theorem that: $$GF\parallel ED$$ And that, $$\angle DCH=\angle HCE=45°$$ Also, $$GF=GH$$
The fact that there are points $G$ and $E$, $G$ on the small circle, and $E$ on the big circle such that $\vec{CE}=3 \vec{CG}$ means that the circles are homothetic under the homothety with center $C$ and scale factor $3$ (http://www.cut-the-knot.org/Curriculum/Geometry/Homothety.shtml). Thus, we also have, for the centers, $\vec{CA}=3 \vec{CB}$. Therefore, we can assume, WLOG, that the circles are resp. with centers $(1,0)$ and $(3,0)$, and radii 1 and 3. Let $ED$ be the tangent issued from $E$, tangent in $H$ to the small circle. With the notations of the figure below: $$\sin(2\alpha)=\dfrac{BH}{BA}=\dfrac{1}{2}$$ Thus, assuming the angle $2\alpha < \tfrac{\pi}{2}$, we deduce that $2\alpha=\dfrac{\pi}{6}$. Therefore, by the central angle theorem (see (https://www.geogebra.org/m/eNA87edZ)): $$\alpha=\angle DEC=\dfrac{\pi}{12}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2326623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }