What is the area of the region $R$ if $R=\{(x,y):x^2+y^2\le100$, $\sin(x+y)>0\}$? $x^2+y^2\le100 \implies $ a circle with radius 10 and the region enclosed within. Now given, $\sin(x+y)>0 \implies y>-x$. Also we know $\sin(x+y)\le1 \implies x+y\le \dfrac{\pi}{2} \implies y\le-x+\dfrac{\pi}{2}$. From this we get that the required region is an isosceles triangle with two equal sides $=10$, which is the radius of the circle, and the height of the triangle is $\dfrac{\pi}{2}$. Thus the required area $= \dfrac{1}{2}\times \dfrac{\pi}{2}\times 2\cdot \sqrt{100-\dfrac{\pi^2}{4}}$. Now the answer to this question is $50\pi$. My question is: how are they arriving at it? Clearly I must have committed mistakes. Please tell me where I am wrong and give me the correct solution please.
By symmetry, assuming uniform distribution over the circle with radius $10$, $$Pr(\sin(x+y)>0)=Pr(\sin(x+y)<0)=\frac12$$ Hence the area is $$\pi \frac{(10)^2}2=50\pi$$ Note that $\sin(x+y) > 0 $ does not imply $x+y >0$.
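A Monte Carlo check of the symmetry argument (my addition, not part of the original answer): sample uniform points in the disk and count how often $\sin(x+y)>0$.

```python
import math
import random

random.seed(0)

# Sample uniform points in the disk x^2 + y^2 <= 100 by rejection from the
# bounding square and count how often sin(x + y) > 0.
inside = hits = 0
while inside < 200_000:
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    if x * x + y * y <= 100:
        inside += 1
        if math.sin(x + y) > 0:
            hits += 1

frac = hits / inside
area = frac * math.pi * 100  # estimated area of R
print(round(frac, 3), round(area, 1))  # frac near 0.5, area near 50*pi
```

The map $(x,y)\mapsto(-x,-y)$ preserves the disk and flips the sign of $\sin(x+y)$, which is why the fraction is exactly $\tfrac12$ up to the measure-zero set $\sin(x+y)=0$.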
{ "language": "en", "url": "https://math.stackexchange.com/questions/2749888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Stuck at this definite integration: $\int_ {0} ^ {\infty} \frac {\log(x)} {x^2 + 2x + 4} \, \mathrm dx$ If $$\int_ {0} ^ {\infty} \frac {\log(x)} {x^2 + 2x + 4} \, \mathrm dx \ $$ is equal to $\pi \ln p/ \sqrt {q} \ $, where $p$ and $q$ are coprimes, then what is the value of $p + q$? Ok, so I am stuck with this problem. It was in my exam. The question was a very long one. After solving and simplifying, I was stuck with this definite integration. The options were $27$ and $29$. I had no idea how to proceed further. I tried substituting various things, such as $x+1$ to $\tan A$ etc., but nothing seems to work. I hope someone can help shed some light on the problem. If you want the full question, comment and I will post it.
Jack has already covered the simple, real analysis way of evaluating this integral. So if anybody's curious, here's a way to solve your integral using complex analysis. Note that for these kinds of integrals, contour integration is a bit overkill. The function under consideration is$$f(z)=\frac 1{z^2+2z+4}$$And we are integrating $f(z)$ over the contour $\mathrm C$: a keyhole contour around the positive real axis. We define the argument above the real axis to be zero and the argument below to be $2\pi$. Therefore, above we have $z=x$ while below gives $z=xe^{2\pi i}$. We parametrize the contour in four sections: two straight segments just above and below the positive real axis, a large circular arc $\Gamma_{R}$ of radius $R$, and a small circular arc $\gamma_{\epsilon}$ of radius $\epsilon$ about the origin. Hence, we get$$\begin{multline}\oint\limits_{\mathrm C}dz\, f(z)\log^2z=\int\limits_{\epsilon}^{R}dx\, f(x)\log^2x+\int\limits_{\Gamma_{R}}dz\, f(z)\log^2z\\-\int\limits_{\epsilon}^{R}dx\, f(x)\left(\log|x|+2\pi i\right)^2+\int\limits_{\gamma_{\epsilon}}dx\, f(x)\log^2x\end{multline}$$As $R\to\infty$ and $\epsilon\to0$, the second and fourth integrals vanish. This can be shown by substituting $z=Re^{i\theta}$ and $z=\epsilon e^{i\theta}$ into the arcs respectively and taking the limits. Hence, all we're left with is$$\oint\limits_{\mathrm C}dz\, f(z)\log^2z=-4\pi i\int\limits_0^{\infty}dx\, f(x)\log x+4\pi^2\int\limits_0^{\infty}dx\, f(x)$$Our contour integral, by the residue theorem, is also equal to $2\pi i$ times the sum of the residues inside the contour. 
We only have two poles: $z=-1\pm i\sqrt3$ so the residues are$$\begin{align*}z_{+} & =\operatorname*{Res}_{z\, =\, -1+i\sqrt3}\,\frac {\log^2z}{z^2+2z+4}=\lim\limits_{z\to-1+i\sqrt3}\frac {\log^2z}{z+1+i\sqrt3}=\frac {9\log^22+12\pi i\log 2-4\pi^2}{18i\sqrt3}\\\\z_{-} & =\operatorname*{Res}_{z\, =\, -1-i\sqrt3}\,\frac {\log^2z}{z^2+2z+4}=\lim\limits_{z\to-1-i\sqrt3}\frac {\log^2z}{z+1-i\sqrt3}=-\frac {9\log^22+24\pi i\log 2-16\pi^2}{18i\sqrt3}\end{align*}$$Hence, by the residue theorem,$$\begin{align*}\oint\limits_{\mathrm C}dz\, f(z)\log^2z & =2\pi i\left(z_{+}+z_{-}\right)\\ & =\frac {4\pi^3}{3\sqrt3}-\frac {4\pi^2i\log2}{3\sqrt3}\end{align*}$$Taking the imaginary part and dividing by $-4\pi$, we get$$\int\limits_0^{\infty}dx\, \frac {\log x}{x^2+2x+4}=\frac {\pi\log 2}{3\sqrt3}$$
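As an independent sanity check on the closed form (added by me, not part of the answer): substituting $x\mapsto 1/x$ on $[1,\infty)$ folds the integral onto $(0,1)$, giving $I=\int_0^1\log x\left(\frac{1}{x^2+2x+4}-\frac{1}{4x^2+2x+1}\right)dx$, which a midpoint rule handles despite the logarithmic singularity at $0$.

```python
import math

def g(x):
    # log x * (1/(x^2+2x+4) - 1/(4x^2+2x+1)) on (0, 1)
    return math.log(x) * (1.0 / (x * x + 2 * x + 4) - 1.0 / (4 * x * x + 2 * x + 1))

N = 400_000
h = 1.0 / N
I = h * sum(g((k + 0.5) * h) for k in range(N))  # midpoint rule

exact = math.pi * math.log(2) / (3 * math.sqrt(3))
print(I, exact)  # the two values agree to several decimal places
```

Since $\pi\log 2/(3\sqrt3)=\pi\log 2/\sqrt{27}$, this also confirms $p=2$, $q=27$, so $p+q=29$.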
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Must the dimensions of $V$ and $U$ be equal for $f: U \mapsto V$ to be a diffeomorphism? Suppose we have $U, V \subset \mathbb{R}^{n}$, and a map $f: U \mapsto V$. If $f$ is a diffeomorphism, must the dimensions of $U$ and $V$ be equal? I'm thinking this is true since the tangent map $Df_{x}: T_{x}U \mapsto T_{f(x)}V$ has to be a linear isomorphism if $f$ is a diffeomorphism.
You are right, the explanation you gave is also the right one. If $f$ is a diffeomorphism, then there exists $g\colon V\rightarrow U$ such that $g\circ f=\operatorname{id}_{U}$ and using the chain rule: $$Dg_{f(x)}\circ Df_x=\operatorname{id}_{T_xU},$$ so that $Df_x$ is an invertible linear map.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Combinatorics - Removing Double Counted Cases Here are my solutions: $$1.)\,(2)(4!)=48$$ $$2.)\,(4)(3)(3!)=72$$ $$3.)\,48+48+3!=102$$ It's the last one I get wrong. The correct answer is 42. The solutions manual of the textbook says the following: Would someone mind explaining what exactly the double counted cases are? Also, what is wrong with my way of doing 1c? I think it is just a simple OR problem, as in an application of the addition principle. Thank you very much.
Let $A$ be the event that Anya is at the left end of the line; let $B$ be the event that Elena is at the right end of the line. Then the event that Anya is on the left or Elena is on the right or both is $A \cup B$. We want to find $|A \cup B|$, the number of elements that are in the union of $A$ and $B$. Notice that if we simply add the number of elements in $A$, $|A|$, to the number of elements in $B$, $|B|$, we will have added those elements in the intersection twice. We only want to count them once. Hence, we must subtract them from the total. Therefore, $$|A \cup B| = |A| + |B| - |A \cap B|$$ $|A|$: Anya is at the left end of the line. There is one way to place Anya and $4!$ ways to place the remaining four people in the remaining four positions. Hence, there are $4!$ arrangements in which Anya is at the left end of the line. $|B|$: Elena is at the right end of the line. There is one way to place Elena and $4!$ ways to place the remaining four people in the remaining four positions. Hence, there are $4!$ arrangements in which Elena is at the right end of the line. $|A \cap B|$: Anya is at the left end of the line and Elena is at the right end of the line. Anya can be placed in one way, Elena can be placed in one way, and the remaining three people can be arranged in the remaining three positions in $3!$ ways. Hence, $$|A \cup B| = |A| + |B| - |A \cap B| = 4! + 4! - 3!$$
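The inclusion-exclusion count can be checked by brute force (my sketch; the names "B", "C", "D" stand in for the three other people in the textbook problem):

```python
from itertools import permutations

people = ["Anya", "B", "C", "D", "Elena"]

# Count arrangements with Anya at the left end OR Elena at the right end.
count = sum(1 for line in permutations(people)
            if line[0] == "Anya" or line[-1] == "Elena")

print(count)  # 4! + 4! - 3! = 42
```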
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
$A,B$ are orthogonal projections and $\|Ax\|^2+\|Bx\|^2=\|x\|^2$ show $A+B=I$ Here is the problem: $A,B:\mathbb{C}^n\to\mathbb{C}^n$ are two orthogonal projections satisfying for any $x\in\mathbb{C}^n$, $$\|Ax\|^2+\|Bx\|^2=\|x\|^2$$ Show that $A+B=I$. I know that $\|Ax\|^2+\|Bx\|^2=\|x\|^2$ tells that $(Ax,Ax)+(Bx,Bx)=(x,x)$. Since $$\|(A+B)x\|^2=((A+B)x,(A+B)x)$$$$=(Ax,Ax)+(Bx,Bx)+(Ax,Bx)+(Bx,Ax)$$$$=(x,x)+(Ax,Bx)+(Bx,Ax)$$ It remains to show that $(Ax,Bx)+(Bx,Ax)=0$, but I am not sure how to show it. Please help, thanks a lot!
Let me state two hints and one proposal:

* One has $\,\|Ax\|^2=(Ax,x)\,$ by hypothesis.
* What does $\,(Tx,x)=0\;\forall x\in\mathbb C^n\,$ imply for $T$? (The conclusion would not hold when working in $\mathbb R^n$!)
* Please use the command "\|" to produce nice(-r) norm delimiters, cf $\,\|\,$ versus $\,||$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability the driver has no accident in the next 365 days The time until the next car accident for a particular driver is exponentially distributed with a mean of 200 days. Calculate the probability that the driver has no accidents in the next 365 days, but then has at least one accident in the 365-day period that follows this initial 365-day period. Attempt Let $T$ be the time it takes for a driver to have a car accident. We are given $T \sim \exp( \lambda = 1/200 )$. We need to find $$ P(T > 365) = 1 - F(365) = 1 - 1 + e^{-365/200} = 0.1612 $$ Is this correct? My answer key says the correct answer should be $\boxed{0.1352}$. What am I missing here?
You want the first accident to be between the first year and second year. \begin{align} P(365< T \leq 2 \cdot 365) &= F(2 \cdot 365) - F(365) \end{align}
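Numerically (a check I added), with $F(t)=1-e^{-t/200}$:

```python
import math

lam = 1.0 / 200.0  # exponential rate; mean 200 days

def F(t):
    return 1.0 - math.exp(-lam * t)  # exponential CDF

p = F(2 * 365) - F(365)  # first accident falls in (365, 730]
print(round(p, 4))  # 0.1352, matching the answer key
```

Equivalently, by memorylessness this is $P(T>365)\,P(T\le 365) = e^{-365/200}(1-e^{-365/200})$.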
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Construct a sequence of simple functions converging to $f\in C([a,b])$ I am aware of the result that a measurable function $f\in L^p(\mathbb{R}^d)$ can be approximated with simple functions of the form $f=\sum_{k=1}^{\infty}c_k\chi_k$. However, I am interested in the following: Given some function $f\in L^p(\mathbb{R}^d)$, how can we explicitly construct a sequence of functions converging uniformly (or at least almost everywhere) to $f$? For my purposes, I am ok with considering compact subsets $K\subset \mathbb{R}^d$, and continuous functions $C([a,b])$ (although the general result would be nice).
Hint: If $f$ is defined on a compact set, its range is compact and thus contained in some finite interval $I = [-A,A]$. Divide $I$ into $n$ intervals $I_j$ of length $2A/n$. Their preimages are disjoint sets ready for building a simple function.
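A concrete version of this hint (my own sketch): round $f(x)$ down to the grid $\{-A+2Aj/n\}$; the resulting simple function is within the mesh $2A/n$ of $f$ everywhere, so the approximation is uniform as $n\to\infty$.

```python
import math

def simple_approx(f, A, n):
    # Simple function taking values on the grid -A + j*(2A/n), j = 0..n-1:
    # s(x) is the left endpoint of the interval I_j containing f(x).
    mesh = 2.0 * A / n
    def s(x):
        j = math.floor((f(x) + A) / mesh)
        j = min(j, n - 1)  # f(x) = A falls into the last interval
        return -A + j * mesh
    return s

f = lambda x: math.sin(7 * x) * math.exp(-x)  # continuous, range inside [-1, 1]
s = simple_approx(f, A=1.0, n=1000)

err = max(abs(f(k / 5000) - s(k / 5000)) for k in range(5001))
print(err <= 2.0 / 1000)  # True: uniform error at most the mesh 2A/n
```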
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Weak convergence of a sequence of probability measures implies integrability of the limiting probability measure Let $(X_{n})_{n \in \mathbb{N}}$ be a sequence of uniformly integrable random vectors with values in some normed vector space $V$ with $\mathbb{E}[\|X_n\|] < \infty$. This means that $$ \lim_{C \to \infty} \sup_{n \in \mathbb{N}} \mathbb{E}[ \| X_{n} \| \mathbb{1}_{\{\| X_{n} \| > C\}} ] = 0. $$ Furthermore, suppose that $\mu$ is a probability measure on $V$ and the sequence $(X_{n})_{n \in \mathbb{N}}$ converges weakly to $\mu$. This means that for every bounded Lipschitz continuous function $f \colon V \rightarrow \mathbb{R}$ it holds $$ \lim_{n \rightarrow \infty} \int_{V}f dP_{X_{n}} = \int_{V} f d\mu. $$ How can I show that then $\int_{V} \| x \| d\mu(x) < \infty$?
By the uniform integrability of $(X_n)_{n \in \mathbb{N}}$ we have $$M := \sup_{n \geq 1} \mathbb{E}(|X_n|) < \infty. \tag{1}$$ On the other hand, the weak convergence of $X_n \to X$ entails that $$\mathbb{E}f(X_n) \to \mathbb{E}f(X) \tag{2}$$ for any bounded Lipschitz continuous function; in particular, we can choose $f(x) := \min\{|x|,k\}$ for fixed $k \geq 1$ to get $$\mathbb{E}\min\{|X|,k\} \stackrel{(2)}{=} \lim_{n \to \infty} \mathbb{E}\min\{|X_n|,k\} \leq \sup_{n \in \mathbb{N}} \mathbb{E}(|X_n|)=M.$$ Applying the monotone convergence theorem we conclude that $$\mathbb{E}(|X|) = \sup_{k \geq 1} \mathbb{E}\min\{|X|,k\} \leq M < \infty.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2750896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Eigenvalue of $f \circ f = id_V$ An endomorphism $f$ of the vector space $V$ with the property $f \circ f = id_V$ is given. Is my assumption correct that the only eigenvalues of $f$ are $\lambda_1=1$ and $\lambda_2 =-1$? Reasoning We have the definition of an eigenvalue: $$f(v)=\lambda v \tag{1}$$ $$\Rightarrow_{\circ f} f(f(v))= \lambda f(v)$$ $$v=\lambda f(v) \tag{2}$$ Now we can put $(1)$ into $(2)$: $$v=\lambda \lambda v$$ The only numbers that satisfy this equation for $\lambda$ are $+1$ and $-1$.
Although @Doe's answer settles your question, the following is a more general method for finding the eigenvalues of a given map. Note that, by doing some extra computations, we can directly find the eigenvalues of $f$ for sure with this method. First observe that the map $f$ is a zero of the polynomial $$p(\lambda) = \lambda^2 - 1,$$ i.e. $$p(f) = 0_V.$$ Hence, the minimal polynomial of $f$ has to divide $p$, but this implies that either $$m_f(\lambda) = \lambda -1,$$ or $$m_f(\lambda) = \lambda +1,$$ or $$m_f(\lambda) = \lambda^2 -1.$$ Since the characteristic polynomial and the minimal polynomial always have the same roots, we can conclude that $f$ can only have the eigenvalues $\pm 1$.
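A small numerical illustration of the conclusion (my addition): any reflection $\begin{pmatrix}\cos t&\sin t\\\sin t&-\cos t\end{pmatrix}$ satisfies $f\circ f=\operatorname{id}$, and the roots of its characteristic polynomial $\lambda^2-\operatorname{tr}(f)\lambda+\det(f)$ are exactly $\pm1$.

```python
import math

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.7
F = [[math.cos(t), math.sin(t)],
     [math.sin(t), -math.cos(t)]]  # a reflection, so F composed with F = id

F2 = matmul2(F, F)
assert all(abs(F2[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

tr = F[0][0] + F[1][1]                       # trace = 0
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]  # determinant = -1
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)  # approximately [-1.0, 1.0]
```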
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Continuous limit of continuous functions with a distance condition Let $\{f_n\}_n$ be a sequence of continuous functions $f_n\colon[0,1]\to\Bbb R$ and let $U_n=\{\,x\in[0,1]\mid f_n(x)>1\,\}$. We know that $\forall x\in[0,1]\colon \lim_{n\to\infty}f_n(x)=0$ does not imply that $f_n\to 0$ uniformly. However, the usual counterexamples all have $\lim_{n\to\infty}\mu(U_n)=0$. Is there a counterexample where $\mu(U_n)\ge p>0$ for all $n$? And if so, for which $p$ (certainly not for $p=1$)? Intuitively, my answer is "no", because it feels as if for "random" $x\in[0,1]$, the "probability" that $f_n(x)\ge1$ should be at least $p$, contradicting $f_n(x)\to 0$.
We must have $\mu(U_n)\to 0$. It also doesn't matter that the $f_n$'s are continuous or that they converge pointwise everywhere, mere measurability and pointwise a.e. convergence is fine. Proof. Let $\epsilon > 0$ be given. Since $f_n\to 0$ pointwise a.e. on the finite measure space $[0,1]$, Egorov's theorem tells us that $f_n \to 0$ almost uniformly in the sense that there is a measurable set $A_\epsilon\subset [0,1]$ such that $\mu\big([0,1]\smallsetminus A_\epsilon\big) < \epsilon$ and $f_n\to 0$ uniformly on $A_\epsilon$. Hence for all sufficiently large $n$, we have $\mu\big(\{|f_n| > 1\}\big) \le \mu\big([0,1]\smallsetminus A_\epsilon\big) < \epsilon$. Thus $\mu(U_n)\to 0$. Note that there was nothing special about $1$. We could have replaced it with any $\alpha > 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Result on the power of norm in Banach space? I want to ask whether there exists any result of the type $$\|x+y\|^{\lambda} \leq c_1\|x\|^{\lambda}+ c_2\|y\|^{\lambda}$$ where $\lambda \in (0, 1]$, $c_1$ and $c_2$ are positive constants, and $x, y$ are in a Banach space $X$? Any reference? For $\lambda \geq 1$, I know that $$\|x+y\|^{\lambda} \leq 2^{\lambda-1}(\|x\|^{\lambda}+ \|y\|^{\lambda})$$
$\|x+y\|^{\lambda} \leq (2\max \{\|x\|,\|y\|\})^{\lambda}=2^{\lambda} \max \{\|x\|^{\lambda},\|y\|^{\lambda}\} \leq 2^{\lambda} (\|x\|^{\lambda}+\|y\|^{\lambda})$.
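A quick random test of the displayed bound in $\mathbb R^2$ with the Euclidean norm (my addition; the argument is norm-independent). In fact $c_1=c_2=1$ already works, since $t\mapsto t^{\lambda}$ is subadditive on $[0,\infty)$ for $\lambda\in(0,1]$, but the test below checks the $2^{\lambda}$ bound as stated:

```python
import math
import random

random.seed(1)

def norm(v):
    return math.hypot(v[0], v[1])

ok = True
for _ in range(10_000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    lam = random.uniform(0.01, 1.0)
    lhs = norm((x[0] + y[0], x[1] + y[1])) ** lam
    rhs = 2 ** lam * (norm(x) ** lam + norm(y) ** lam)
    if lhs > rhs + 1e-12:
        ok = False

print(ok)  # True: ||x+y||^lam <= 2^lam (||x||^lam + ||y||^lam)
```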
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question on a proof that $L^1(E)$ is Complete I'm studying Capiński and Kopp's Measure, Integral and Probability. Specifically, in the proof of Thm 5.1 p.130 ($L^1(E)$ is complete) they write: Firstly, they consider a sequence $\{f_n\}$ in $L^1(E)$; after some work they produce a subsequence of $\{f_n\}$ that converges to some $f(x)$ for every $x\in E$. Then, they say 'Since the sequence of real numbers $\{f_n(x)\}$ is Cauchy' we have that the $\{f_n\}$ also converge to $f(x)$. I am missing the argument that $\{f_n(x)\}$ is Cauchy in $\mathbb{R}$. Does it follow from the fact that $\{f_n\}$ is Cauchy in $L^1(E)$? And if this is the case, how can I prove it?
I do not know the full details of the proof in your book so I will explain based on the proof I know. (The last page of the following lecture note introduces the proof I know. It proves the completeness of $L^2$, but the argument works also for $L^1$.) In fact, the sequence $\langle f_n\rangle$ need not be pointwise Cauchy. For example, consider the following sequence over $E=[0,1]$: $$\chi_{[0,1]},\, \chi_{[0,1/2]},\, \chi_{[1/2,1]}, \, \chi_{[0,1/3]},\, \chi_{[1/3,2/3]},\,\cdots$$ Here $\chi_A$ is the characteristic function of $A$. Then the sequence converges to 0 in the $L^1$ sense (so it is Cauchy in the $L^1$ sense), but it does not have a pointwise limit. Moreover, you can examine that the sequence is not Cauchy at any point (that is, $\langle f_n(x)\rangle$ is not Cauchy for any $x\in[0,1]$).
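The "typewriter" sequence above can be tabulated explicitly (my sketch): the $k$-th block consists of the $k$ indicators $\chi_{[j/k,(j+1)/k]}$, each of $L^1$ norm $1/k$, yet at any fixed $x$ the values keep returning to both $0$ and $1$:

```python
# Build the sliding-indicator ("typewriter") sequence as intervals [a, b].
intervals = []
for k in range(1, 60):
    for j in range(k):
        intervals.append((j / k, (j + 1) / k))

# L^1 norms are the interval lengths: constant 1/k on the k-th block -> 0.
norms = [b - a for (a, b) in intervals]

# Pointwise at x = 0.3 the tail of the sequence still takes both values.
x = 0.3
tail = [1 if a <= x <= b else 0 for (a, b) in intervals[-200:]]
has_both = (0 in tail) and (1 in tail)
print(norms[-1], has_both)  # tail norms shrink; not Cauchy at x
```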
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A finite set is open and closed at the same time. Let $(X,d)$ be an arbitrary metric space. I know that every finite subspace of a metric space is closed. Then $\{a\}$, with $a\in X$, is closed $\Rightarrow X \setminus \{a\}$ is open. (1) But if $X$ is a finite metric space then $X\setminus \{a\}$ is also finite, so $X\setminus\{a\}$ is closed. (2) My question is: from (1) + (2), if $X$ is a finite metric space, is $X\setminus \{a\}$ open and closed at the same time?
The distinction becomes less important in this finite case if, letting $\delta$ be the minimum distance between two distinct points, one works with open or closed balls of radius $\delta/2.$ So every subset is both open and closed. Edit: OP has asked for "more examples". Let $X$ be the finite metric space consisting of vertices of a square of side-length $1$ in the $x,y$ plane, with the metric from that in the plane. Then for each vertex, the open ball of radius $1/2$ centered there contains only that vertex. So each point is open (it is the only point in an open ball). Then any subset of $X$ is open, since it is a union of open sets. Any set is also closed, since its complement in $X$ is also a union of points of $X.$ This same thing can be done in any finite metric space as already outlined in the first paragraph.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"Let $A$ be a symmetric $2 \times 2$ matrix with the property $A^{-1} = A$. Find all possible values of $\operatorname{tr}A$." I need some help solving this. I have tried: $$ \begin{bmatrix} a & b \\ c & d \\ \end{bmatrix} =\frac{1}{\operatorname{det}A}\cdot \begin{bmatrix} d & -b \\ -c & a \\ \end{bmatrix}$$ I ended up with $$a=\frac{d}{\operatorname{det}A},$$ and $$d=\frac{a}{\operatorname{det}A}.$$ Then $$\operatorname{tr}(A)=a+d=\frac{a+d}{\operatorname{det}A},$$ but I don't really think it works.
By the Cayley–Hamilton theorem or direct verification, we have $A^{2}-\operatorname {tr}(A)A+\det(A)I=0$. From $A^2=I$, we get $\operatorname {tr}(A)A=(\det(A)+1)I$. Taking traces on both sides, we get $\operatorname {tr}(A)^2=2(\det(A)+1)$. From $A^2=I$, we also get $\det(A)^2=1$ and so $\det(A)=\pm1$. If $\det(A)=1$, then $\operatorname {tr}(A)^2=4$ and so $\operatorname {tr}(A)=\pm 2$. If $\det(A)=-1$, then $\operatorname {tr}(A)^2=0$ and so $\operatorname {tr}(A)=0$. These three possibilities occur for the matrices below: $$ \operatorname {tr}\begin{pmatrix}1&0\\0&1\end{pmatrix} = 2 \qquad \operatorname {tr}\begin{pmatrix}-1&\hphantom-0\\\hphantom-0&-1\end{pmatrix} = -2 \qquad \operatorname {tr}\begin{pmatrix}1&\hphantom-0\\0&-1\end{pmatrix} = 0 $$
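Verifying the three witness matrices (a check I added): each is symmetric, squares to the identity (so $A^{-1}=A$), and the traces are $2$, $-2$, $0$.

```python
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
candidates = [[[1, 0], [0, 1]],
              [[-1, 0], [0, -1]],
              [[1, 0], [0, -1]]]

traces = []
for A in candidates:
    assert A[0][1] == A[1][0]  # symmetric
    assert matmul2(A, A) == I  # A^2 = I, i.e. A^{-1} = A
    traces.append(A[0][0] + A[1][1])

print(traces)  # [2, -2, 0]
```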
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Let $f$ be a non-negative differentiable function such that $f'$ is continuous and $\int_{0}^{\infty}f(x)\,dx$ and $\int_{0}^{\infty}f'(x)\,dx$ exist. Let $f$ be a non-negative differentiable function such that $f'$ is continuous and $\displaystyle\int_{0}^{\infty}f(x)\,dx$ and $\displaystyle\int_{0}^{\infty}f'(x)\,dx$ exist. Prove or give a counter example: $f'(x)\overset{x\rightarrow \infty}{\rightarrow} 0$ Note: I think it is not true but I couldn't find a counter example.
Let $\varphi(x)=\exp\left(\dfrac{1}{3}-\dfrac{1}{4-x^{2}}\right)$ for $|x|\leq 2$, $\varphi(x)=0$ for $|x|>2$. Let $f(x)=\displaystyle\sum_{n=1}^{\infty}\dfrac{1}{2^{n}}\varphi\left(2^{n}(x-n)\right)$, one may check that $f\in C^{\infty}(0,\infty)$ and that $f,f'\in L^{1}(0,\infty)$. For all $x$ with $1<2^{n}(x-n)\leq 2$, that is, $n+\dfrac{1}{2^{n}}<x\leq n+\dfrac{2}{2^{n}}$, we have \begin{align*} f'(x)&=\dfrac{1}{2^{n}}\exp\left(\dfrac{1}{3}-\dfrac{1}{4-(2^{n}(x-n))^{2}}\right)\cdot-\dfrac{2(2^{n}(x-n))}{(4-(2^{n}(x-n))^{2})^{2}}\cdot 2^{n}\\ &=-\dfrac{2(2^{n}(x-n))}{(4-(2^{n}(x-n))^{2})^{2}}\exp\left(\dfrac{1}{3}-\dfrac{1}{4-(2^{n}(x-n))^{2}}\right), \end{align*} localizing to $x=n+1/2^{n}$ we have $f'(n+1/2^{n})=-2/9$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2751909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Block diagonalization We know that not all matrices can be diagonalized, but all matrices can be block diagonalized (with just one block) How can we find a similarity transformation leading to block diagonalization with the greatest possible number of blocks?
Every matrix with elements in $\mathbb C$ has a Jordan Normal Form. The transform in the canonical basis will have blocks of sizes equal to the sizes of the generalized eigenspaces of the matrix. The Jordan blocks have a very particular structure: $$\left[\begin{array}{ccccc}\lambda&1&0&\cdots&0\\0&\lambda&1&\ddots&\vdots\\\vdots&\ddots&\ddots&\ddots&0\\0&\cdots&0&\lambda&1\\0&\cdots&0&0&\lambda\end{array}\right]$$ where $\lambda$ is an eigenvalue of the matrix. It should be possible to prove that the block above cannot be further reduced (although I have no proof ready in my magic pockets right now). In the case you want real elements everywhere you can take a look here
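To see why a single Jordan block cannot be split further: for a $k\times k$ block, $N=J-\lambda I$ is nilpotent of index exactly $k$, whereas a direct sum of smaller blocks with the same $\lambda$ would be annihilated by a lower power of $N$. A minimal illustration for $k=3$ (my sketch):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# For a 3x3 Jordan block J with eigenvalue lambda, N = J - lambda*I is:
N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

N2 = matmul(N, N)
N3 = matmul(N2, N)
zero = [[0] * 3 for _ in range(3)]
print(N2 != zero, N3 == zero)  # True True: nilpotency index exactly 3
```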
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find 2 idempotent matrices with some properties. I want to find an example of two idempotent matrices $a,b$ with entries in $\mathbb{Z}_2$ such that $a+b$ is also idempotent, $ab\not=0$ and $a\not=b$. Can someone find one? I have proved that if you work in a field with characteristic greater than 2 this is not possible. But I have been asked to give this example in $\mathbb{Z}_2$.
Try: $$ a = \left(\matrix{1&0&0\\0&1&0\\0&0&0}\right) \quad b = \left(\matrix{0&0&0\\0&1&0\\0&0&1}\right) $$
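Checking the example directly, with all arithmetic mod 2 (my addition):

```python
def matmul_mod2(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def matadd_mod2(A, B):
    return [[(x + y) % 2 for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

a = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
b = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]
s = matadd_mod2(a, b)   # diag(1, 0, 1), since 1 + 1 = 0 in Z_2
ab = matmul_mod2(a, b)  # diag(0, 1, 0)

zero = [[0] * 3 for _ in range(3)]
ok = (matmul_mod2(a, a) == a and matmul_mod2(b, b) == b
      and matmul_mod2(s, s) == s and ab != zero and a != b)
print(ok)  # True: a, b, a+b idempotent, ab != 0, a != b
```

The characteristic-2 cancellation in the middle entry is exactly why this example cannot be imitated over a field of characteristic greater than 2.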
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
show that orbits have the same order under the normal subgroup of a transitive group Let $G$ be a group that acts transitively on $X$, and let $N$ be a normal subgroup of $G$. Show that the orbits of $X$ under $N$ have the same order, that is $\operatorname{ord}(Nx)=\operatorname{ord}(Ny)$ for all $x,y\in X$. I'm not sure how to show this. I was thinking of either showing a bijection, maybe $$ \phi\colon Nx\to Ny\colon g\circ x\mapsto g\circ y, $$ or somehow using the orbit-stabilizer theorem. Either way I'm stuck. The reason I can't make anything out of $\phi$ is because I don't even know how to show injectivity to start with, though I was hoping that the fact that $N$ is normal would somehow aid in that; $$ g\circ x=h\circ x\implies h^{-1}g\circ x=x. $$ I wonder if I could maybe use the following: choose for $x\neq y\in X$, an element $g\in G$ such that $g\circ x=y$. We know then that $$ G_y=G_{g\circ x}=g G_x g^{-1}. $$ Any hints?
By transitivity, there exists $g\in G$ such that $gx=y$. Define the map $\phi: Nx\to Ny$ by $\phi(n x)=gng^{-1}y$ for all $n\in N$. Note that $gng^{-1}\in N$ by normality of $N$. To prove injectivity, assume $gn_1g^{-1} y=\phi(n_1x)=\phi(n_2x)=g n_2 g^{-1}y$. Cancelling the $g$ both sides, we get $n_1g^{-1}y=n_2g^{-1}y$. Since we have a group action, $n_1x=n_1g^{-1}y=n_2g^{-1}y=n_2x$. To prove surjectivity, if $ny\in Ny$, then take $\phi(g^{-1}ngx)=g(g^{-1}ng)g^{-1}y=ny$. Note that $g^{-1}ng\in N$ by normality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Can the Sum of Two Tensor Products Be Written as a Single Tensor Product? In general, if I have $|\Psi\rangle = (|\Psi_{1_1}\rangle \otimes |\Psi_{1_2}\rangle + |\Psi_{2_1}\rangle \otimes |\Psi_{2_2}\rangle)$, can I find $|\Psi_{3_1}\rangle$ and $|\Psi_{3_2}\rangle$, such that $|\Psi\rangle = |\Psi_{3_1}\rangle \otimes |\Psi_{3_2}\rangle$? Here $\otimes$ means tensor product and $|\Psi\rangle$ means a vector. No assumption is made about any relationship between the $|\Psi_{i_j}\rangle$, except that they are all the same dimension and their components are complex numbers. The motivation is the quantum double slit experiment, where the wave state, $|\Psi\rangle$, between the slits and the detector, is the sum of two interfering waves, and $|\Psi\rangle$ is still in a "pure state", which means that $|\Psi\rangle$ can also be written as a tensor product.
In general the answer is no and you can clearly see why when you look at the dimensions: $\dim (V\otimes W)$ is much larger than $\dim (V\times W)$ whenever $V$ and $W$ are both of dimension greater than $1$. This tells you that in general a tensor is more than just a couple of vectors. Tensors of the form $a\otimes b$ are called pure tensors. Edit: As was correctly noted in the comments, the dimensional observation does not provide a proof just by itself, it is rather a convenient way to remember this fact. If $\{e_i\}_{i=1}^n$ is a basis of $V$ and $\{f_i\}_{i=1}^m$ is a basis of $W$ then a basis of $V\otimes W$ is given by $\{e_i\otimes f_j\}$ (dimension $nm$) and an element of $V\otimes W$ can be represented by an $n\times m$ matrix where the entries are coordinates in that basis. Now with this description, pure tensors are matrices of rank $1$. Except when $\dim V$ or $\dim W$ is equal to $1$, not all matrices have rank $1$. The subset of pure tensors is not a subspace, however, so in a way the dimensional observation might be misleading.
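The rank picture can be made concrete (my sketch): in coordinates, $e_1\otimes e_1+e_2\otimes e_2$ is the identity matrix, whose determinant is nonzero, so it has rank $2$ and cannot be a pure tensor.

```python
# Coordinates of a tensor in C^2 (x) C^2 relative to the basis {e_i (x) f_j}
# form a 2x2 matrix.  A pure tensor a (x) b has entries a_i * b_j: rank 1.
def outer(a, b):
    return [[a[i] * b[j] for j in range(2)] for i in range(2)]

p = outer([3, 5], [2, 7])
assert p[0][0] * p[1][1] - p[0][1] * p[1][0] == 0  # pure tensors have det 0

e1, e2 = [1, 0], [0, 1]
T = [[x + y for x, y in zip(r1, r2)]
     for r1, r2 in zip(outer(e1, e1), outer(e2, e2))]  # e1(x)e1 + e2(x)e2

det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
print(T, det)  # [[1, 0], [0, 1]] with det 1: rank 2, so not a pure tensor
```

In quantum-information language this is the statement that $(|00\rangle+|11\rangle)/\sqrt2$ is entangled.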
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Euler -Lagrange equation: variational Problem min $K[u]$ for $K[u]=\int_D(|\nabla u|^2+\frac{1}{2}gu^4)dxdy$ Consider the variational Problem min $K[u]$ for $$K[u]=\int_D\left(|\nabla u|^2+\frac{1}{2}gu^4\right)\,{\rm d}x\,{\rm d}y$$ where $D \subset \mathbb{R}^2$ and $g(x,y)$ is a given positive function. Find the Euler-Lagrange equation and the natural boundary conditions for this problem. I am really stuck on this problem. I have been looking at an example on pg. 284 here, where they look at $$\int_D |\nabla u|^2\,{\rm d}x\,{\rm d}y$$ but I am stumped as to what the process is and how to apply it to this problem.
It may be easier to see what is happening if we write $$K[u] = \int_D \underbrace{\left(u_x(x,y)^2 + u_y(x,y)^2 + \frac{1}{2}g(x,y)u(x,y)^4\right)}_{= L(x,y,u,u_x,u_y)}\,{\rm d}x\,{\rm d}y.$$For two variables, we have the Euler-Lagrange equation: $$\frac{\partial L}{\partial u} - \frac{\partial}{\partial x}\left(\frac{\partial L}{\partial u_x}\right) - \frac{\partial}{\partial y}\left(\frac{\partial L}{\partial u_y}\right) = 0,$$which reads as $$2g(x,y)u(x,y)^3 - 2u_{xx}(x,y) - 2u_{yy}(x,y) = 0,$$that is: $\triangle u = gu^3$. I'll omit points of application $(x,y)$ from now on. To find the natural boundary conditions, we compute the first variation $$\delta K[u](\psi) = \frac{{\rm d}}{{\rm d}\epsilon}\bigg|_{\epsilon = 0} K[u+\epsilon \psi] = \int_D (2u_x\psi_x+2u_y\psi_y+2gu^3 \psi)\,{\rm d}x\,{\rm d}y.$$I will follow the example given in page 288 from your book. By Green's first identity we have $$\begin{align}\frac{1}{2}\delta K[u](\psi) &= \int_D \nabla u \cdot \nabla \psi + gu^3 \psi\,{\rm d}x\,{\rm d}y \\ &= \int_D - \triangle u \cdot \psi\,{\rm d}x\,{\rm d}y + \oint_{\partial D} \psi \nabla u \cdot n\,{\rm d}s + \int_D gu^3\psi\,{\rm d}x\,{\rm d}y \\ &= \int_D(-\triangle u + gu^3)\psi\,{\rm d}x\,{\rm d}y + \oint_{\partial D}(\nabla u \cdot n)\psi\,{\rm d}s,\end{align}$$where $n$ is the outward unit normal vector to $\partial D$. This tells us again that the Euler-Lagrange equation is $\triangle u = gu^3$, and we get a Neumann natural boundary condition $$\frac{\partial u}{\partial n}(x,y) = 0 \qquad \text{for all }(x,y) \in \partial D.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding the PDF of a random variable with the mean as the realisation of another random variable What method would I use to find the PDF of a random variable that has a parameter as a realisation of another random variable? For example, I first have an exponential distribution $\Omega \sim \exp(\lambda)$ which has a realisation $\omega$. Then I have a normal distribution $\mathcal{T} \sim \mathcal{N}(\omega, \sigma^2)$ (i.e. the mean of $\mathcal{T}$ is the realisation $\omega$). I am trying to find the PDF of the second distribution $\mathcal{T}$. It would be helpful if I could understand the general method used here because I am also trying to find the PDF of other continuous distributions that use the realisation $\omega$ as a parameter. Thank you
Independence of the normal and the exponential is needed for computing the density function. In the present case the density function is $\frac 1 {\sqrt {2\pi}\sigma}\int_0^{\infty} e^{-(x-a)^{2}/2\sigma ^{2}} \lambda e^{-\lambda a}\,da$. I hope the general procedure is clear from this formula. Of course the CDF is obtained by integrating the density function from $-\infty$ to $x$.
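A numerical sanity check of the formula (my own sketch, with the illustrative choice $\lambda=\sigma=1$): the mixture density should integrate to $1$ over $x$.

```python
import math

lam, sigma = 1.0, 1.0  # assumed example parameters

def density(x, na=1000, amax=25.0):
    # f_T(x) = integral_0^inf N(x; a, sigma^2) * lam * exp(-lam * a) da,
    # approximated with a midpoint rule in a.
    h = amax / na
    total = 0.0
    for k in range(na):
        a = (k + 0.5) * h
        normal = (math.exp(-(x - a) ** 2 / (2 * sigma ** 2))
                  / (math.sqrt(2 * math.pi) * sigma))
        total += normal * lam * math.exp(-lam * a)
    return total * h

# A density must integrate to 1; integrate f_T over a wide x-range.
nx = 620
hx = 31.0 / nx
mass = hx * sum(density(-6.0 + (k + 0.5) * hx) for k in range(nx))
print(round(mass, 3))  # close to 1.0
```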
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $x^3-a$ is irreducible in $\mathbb{Z}_7$ unless $a=0$ or $\pm1$ Show that $x^3-a$ is irreducible in $\mathbb{Z}_7$ unless $a=0$ or $\pm1$ My Idea: Suppose $x^3-a=(Ax+b)(Bx^2+cx+d).$ Then $A=B=1$ or $A=B=-1$ WLOG $A=B=1.$ Then $x^3-a=x^3+x^2(b+c)+x(bc+d)+bd\\ \Rightarrow c+b=0,d+bc=0,bd=-a.$ I can't go further from here. Help...Thank You!
Let $F$ be any field, and let $p(x) = x^3 + ax^2 + bx + c \in F[x] \tag 1$ be any cubic polynomial over $F$. Then we have the following Fact: $p(x)$ is reducible in $F[x]$ if and only if it has a zero in $F$. Proof of Fact: Clearly if $p(x)$ has a zero $z \in F$, then $p(x) = (x - z)q(x)$ where $q(x) \in F[x]$ is of degree $2$; thus $p(x)$ is reducible. Now if $p(x)$ is assumed reducible, we may write $p(x) = r(x)s(x)$ where precisely one of $r(x), s(x) \in F[x]$ is of degree one, since $\deg r + \deg s = \deg p = 3$. But a factor $\alpha x + \beta$ of degree $1$ yields a root $-\beta/\alpha$, and thus we are done. End of Proof of Fact. So the question becomes: for which $a \in \Bbb Z_7$ does $x^3 - a$ have no zeroes. Now it is just a matter of simple arithmetic to discover which $a \in \Bbb Z_7$ are not perfect cubes. We have $0^3 = 0, \; 1^3 = 1, \; 2^3 = 1, \; 3^3 = 6 = -1, \; 4^3 = 1, \; 5^3 = 6 = - 1, \; 6^3 = (-1)^3 = -1; \tag 2$ we see that $0, 1, -1$ are perfect cubes, but that $a = 2, 3, 4, 5 \tag 3$ are not cubes in $\Bbb Z_7$; thus $x^3 - a$ is irreducible precisely for the $a$ given by (3). By the way, $x^3 = xx^2$ is trivially reducible, and $x^3 - 1 = (x - 1)(x^2 + x + 1), \tag 4$ $x^3 + 1 = (x + 1)(x^2 - x + 1). \tag 5$
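Brute force over $\Bbb Z_7$ confirms the table of cubes and, via the Fact, the irreducibility claim (a check I added):

```python
cubes = sorted({(u ** 3) % 7 for u in range(7)})
print(cubes)  # [0, 1, 6], i.e. exactly 0, 1 and -1 mod 7

# By the Fact, the cubic x^3 - a is reducible over Z_7 iff it has a root,
# i.e. iff a is a cube; so it is irreducible exactly for the non-cubes.
irreducible = sorted(a for a in range(7) if a not in cubes)
print(irreducible)  # [2, 3, 4, 5]
```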
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Fraction in its lowest form I read that if $a = \frac mn$ is a positive rational number, it can be expressed in "lowest form" by cancelling common factors of $m$ and $n$, so that $a = \frac rs$ where r and s are relatively prime. I'm wondering if we define the "lowest form" representation for a positive rational number to be $a = \frac pq$ where p and q are relatively prime, would this definition work? for a given rational number are these p and q uniquely determined?
Suppose, $$\frac{a}{b}=\frac{c}{d}$$ with coprime positive integers $a,b$ and coprime positive integers $c,d$. Then, we have $$ad=bc$$ Since $a$ and $b$ are coprime, we can conclude $a|c$ because of $a|bc$ and $b|d$ because of $b|ad$ Since $c$ and $d$ are coprime, we can conclude $c|a$ because of $c|ad$ and $d|b$ because of $d|bc$ So we get $a=c$ and $b=d$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2752841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Dedekind Cuts and Rationals I have a question regarding how you might construct the reals from the rationals by taking Dedekind cuts. My basic understanding is that a Dedekind cut is a bipartition of the rationals such that the two partitions $X, Y$ satisfy certain properties: $\forall x \in X,\; \exists y \in X \text{ such that } x < y$ $\forall x \in Y, \; \exists y \in Y \text{ such that } y < x$ $\forall x,y, \; x<y \text{ and } y \in X \Rightarrow x \in X$ $\forall x,y, \; x<y \text{ and } x \in Y \Rightarrow y \in Y$ $\forall x,y, x<y \Rightarrow \text{ either } x \in X \text{ or } y \in Y$ $X \text{ and } Y$ are disjoint $X \text{ and } Y$ are both non-empty. A Dedekind cut is then used to represent a real number $z$ that can be see as the "mid-point" of this bipartition of the rationals in the sense that $\forall x \in X, \forall y \in Y, \; x < z < y$ In most descriptions of defining the reals as the set of all Dedekind cuts of the rationals, these inequalities are strict. However, doesn't that mean that the rationals cannot be defined in this way? Am I meant to take this to mean that the irrationals are defined using these Dedekind cuts and that the reals are to be treated as the union of this set of irrationals and the set of rationals? If that is the case, doesn't that mean the irrationals and rationals in this construction of the reals are different kinds of objects in that they would have different ranks? I would imagine that is something inconvenient and would ideally want to be avoided.
Note that it is not part of your axioms that $X\cup Y = \Bbb Q$. For instance, the rational number $0$ is given by $$ X = \{q\in \Bbb Q\mid q<0\}\\ Y = \{q\in \Bbb Q\mid q>0\} $$ and the number $0$ isn't contained in either of them. On the other hand, the axiom $$\forall x,y, x<y \Rightarrow \text{ either } x \in X \text{ or } y \in Y$$ implies that $\Bbb Q\setminus(X\cup Y)$ has at most one element. The usual definition of Dedekind cuts that I've come across does have $X\cup Y = \Bbb Q$, but it also allows $Y$ to have a least element, which your axioms do not allow. Specifically, $$\forall x \in Y, \; \exists y \in Y \text{ such that } y < x$$ says that $Y$ has no least element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $n$ is prime. (Fermat's little theorem probably) Let $x$ and $n$ be positive integers such that $1+x+x^2+\dots+x^{n-1}$ is prime. Prove $n$ is prime. My attempt: Say the above sum equals $p$: $$1+x+x^2+\dots+x^{n-1}\equiv 0 \pmod p\\ {x^n-1\over x-1}\equiv0\\ \implies x^n\equiv1\text{ (as $p$ can't divide $x-1$)}$$ How to proceed?
Hint: $$x^{ab}-1=(x^a-1)(x^{a(b-1)}+x^{a(b-2)}+\dots+x^a+1)$$
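The hint can be checked by machine: for composite $n = ab$ with $a,b\ge 2$ and integer $x\ge 2$, dividing the identity by $x-1$ gives $\frac{x^n-1}{x-1} = \frac{x^a-1}{x-1}\cdot\frac{x^{ab}-1}{x^a-1}$, a nontrivial factorization of $1+x+\dots+x^{n-1}$, so it cannot be prime (for $x=1$ the sum is just $n$, and the claim is immediate). A sketch with my own helper name:

```python
# rep(x, n) = 1 + x + ... + x^(n-1), as an exact integer.
def rep(x, n):
    return (x**n - 1) // (x - 1)

# For composite n = a*b the hint gives rep(x, n) == rep(x, a) * rep(x**a, b),
# and both factors are strictly between 1 and rep(x, n).
for x in range(2, 7):
    for a in range(2, 5):
        for b in range(2, 5):
            assert rep(x, a * b) == rep(x, a) * rep(x**a, b)
            assert 1 < rep(x, a) < rep(x, a * b)
print(rep(2, 6), rep(2, 2), rep(4, 3))   # 63 = 3 * 21
```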
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
$I(\alpha)=\int\limits_{0}^{\infty}\frac{\sin(\alpha x)}{x}dx$ Prove $$I(\alpha)=\int\limits_{0}^{\infty}\frac{\sin(\alpha x)}{x}dx$$ converges uniformly for $0<a\leqslant \alpha\leqslant b$ and it does not converge uniformly for $0\leqslant \alpha\leqslant b$ I know that $\int\limits_{0}^{\infty}\frac{\sin( x)}{x}dx=\frac{\pi}{2}$. If I take $I(\alpha)=\int\limits_{0}^{\infty}\frac{\sin(\alpha x)}{x}dx$ and do the following substitution: $u=\alpha x$, I get $\int\limits_{0}^{\infty}\frac{\sin(u)}{u}du=\frac{\pi}{2}$. However I am not proving what I intended. I have thought of the comparison test but I have not come up with a solution. Question: What do you think of the problem? How should I solve it? Thanks in advance!
Firstly, we have to be careful. The observation $$\int_0^{+\infty} \frac{\sin \alpha x}{x} = \int_0^{+\infty} \frac{\sin x}{x} = \frac{\pi}{2}$$ only holds if $\alpha \ne 0$ since we cannot divide by $0$. Otherwise, $$\int_0^{+\infty} \frac{\sin \alpha x}{x} = 0$$ as we are integrating the zero function. In fact, this very degeneracy will help us to establish the result. Now the improper integral $$\int_0^{+\infty} \frac{\sin \alpha x}{x} = \frac{\pi}{2} \text{ or } 0$$ In particular it exists, so we may take any $(c_n) \to 0$ and $(d_n) \to +\infty$ with $0 < c_n < d_n$ and conclude $$\lim_n \int_{c_n}^{d_n} \frac{\sin \alpha x}{x} = \int_0^{+\infty} \frac{\sin \alpha x}{x}$$ Therefore, we can reformulate the question more precisely as follows: Let $0 < a < b$ and $(c_n) \to 0$, $(d_n) \to +\infty$ with $0 < c_n < d_n$ such that $$f_n(\alpha) = \int_{c_n}^{d_n} \frac{\sin \alpha x}{x}$$ and $$f(\alpha) = \lim_n \int_{c_n}^{d_n} \frac{\sin \alpha x}{x}$$ Then $(f_n) \rightrightarrows f$ on $[a, b]$ but not on $[0, b]$ To show $(f_n) \rightrightarrows f$ on $[a, b]$, it suffices to show $(||f_n - f||_\sup) \to 0$ Now $$\begin{align} ||f_n - f||_\sup &= \sup_{\alpha \in [a, b]} |f_n(\alpha) - f(\alpha)| \\ &= \sup_{\alpha \in [a, b]} \left| \int_{c_n}^{d_n} \frac{\sin \alpha x}{x} - \int_0^{+\infty} \frac{\sin \alpha x}{x} \right| \\ &= \sup_{\alpha \in [a, b]} \left| \int_{c_n}^{d_n} \frac{\sin x}{x} - \frac{\pi}{2} \right| \\ &= \left| \int_{c_n}^{d_n} \frac{\sin x}{x} - \frac{\pi}{2} \right| \to 0 \text{ as } n \to +\infty \end{align}$$ The above argument uses change of variables, $\alpha > a > 0$ and the fact that $$\left( \int_{c_n}^{d_n} \frac{\sin x}{x} \right) \to \frac{\pi}{2}$$ It remains to show $(f_n) \not \rightrightarrows f$ on $[0, b]$ Observe that $$f(\alpha) = \begin{cases} \frac{\pi}{2} & \text{ on } (0, b] \\ 0 & \text{ at } 0 \end{cases}$$ is discontinuous. 
Since uniform limit of continuous functions is continuous, it suffices to show $(f_n)$ are continuous, i.e. for all $\beta \in [0, b], \varepsilon > 0$ we can find $\delta > 0$ such that for all $\alpha \in [0, b] \cap (x - \delta, x + \delta)$ we have $$\left| \int_{c_n}^{d_n} \frac{\sin \alpha x}{x} - \int_{c_n}^{d_n} \frac{\sin \beta x}{x} \right| < \varepsilon$$ Maybe there is a cleverer method but I cannot think of one now. I will leave you to check this. You may need to use $\sin \alpha - \sin \beta = 2 \sin \frac{\alpha - \beta}{2} \cos \frac{\alpha + \beta}{2}$ and $|\sin x| \le |x|$ to bound stuff, feel free to ask for help. So we are done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integer solutions to $y=x^2+\frac{(1-x)^2}{(1+x)^2}$ As part of another problem I've been trying to find the greatest integer solutions to $$y=x^2+\frac{(1-x)^2}{(1+x)^2}$$ but am getting very stuck... Would the fact that it asymptotes to $y=x^2$ help at all? Does this mean it won't pass through any integer coordinates after a certain point? How would I go about finding integer solutions and showing that my list is exhaustive/that I have found the greatest solution?
$$y=x^2+\frac{(1-x)^2}{(1+x)^2}=x^2+\left(\frac{2}{1+x}-1\right)^2$$ If $y\in \Bbb Z$ then $1+x|2$ so $x\in\{0,1,-2,-3\}$.
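A brute-force scan over a window of integers corroborates this (the script is mine, not part of the answer):

```python
# y is an integer iff (1+x)^2 divides (1-x)^2, which happens
# exactly when (1+x) divides 2, i.e. x in {-3, -2, 0, 1}.
sols = [x for x in range(-1000, 1001)
        if x != -1 and (1 - x) ** 2 % (1 + x) ** 2 == 0]
print(sorted(sols))   # [-3, -2, 0, 1]
```

So the greatest integer solution is at $x=1$, and no larger $x$ can work since $1+x\mid 2$ bounds the candidates.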
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solve the initial value problem $y''-6y'+13y=0,\;y(0)=y'(0)=1$ using the Laplace transform. Solve the initial value problem $$ \begin{cases} y''-6y'+13y=0 \\ y(0)=y'(0)=1 \end{cases} $$ using the Laplace transform. I cannot figure out how to factor or get around factoring $L(Y)$.
Taking the LT gives you $$13Y+s^2Y-6(sY-y(0))-sy(0)-y'(0)=0.$$ Plugging in the IC's yields $$13Y+s^2Y-6(sY-1)-s-1=0.$$ Solving for $Y$ gives you $$ Y(13+s^2-6s)=s+1-6=s-5, $$ making $$Y=\frac{s-5}{s^2-6s+13}=\frac{s-5}{s^2-6s+9+4}=\frac{s-5}{(s-3)^2+4}.$$ Computing the inverse LT yields $$y(t)=[\cos(2t)-\sin(2t)][\cosh(3t)+\sinh(3t)].$$ The inverse LT you're going to have to complete the square on, as the denominator doesn't factor over the reals.
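One can sanity-check the solution numerically; note the bracketed product above is just $e^{3t}(\cos 2t-\sin 2t)$, since $\cosh 3t+\sinh 3t=e^{3t}$. A finite-difference sketch (my own, not part of the original answer):

```python
import math

def y(t):
    # y(t) = e^{3t} (cos 2t - sin 2t)
    return math.exp(3 * t) * (math.cos(2 * t) - math.sin(2 * t))

h = 1e-5
for t in [0.0, 0.3, 0.7, 1.0]:
    yp  = (y(t + h) - y(t - h)) / (2 * h)            # y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2  # y''
    assert abs(ypp - 6 * yp + 13 * y(t)) < 1e-3      # y'' - 6y' + 13y = 0
assert abs(y(0) - 1) < 1e-12                          # y(0)  = 1
assert abs((y(h) - y(-h)) / (2 * h) - 1) < 1e-6       # y'(0) = 1
```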
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral inequality with derivative bound. Suppose $f \in C^1([0,1])$ and $f(0) = f(1) = 1/2$. Also $|f'(x)| \leq 1$ for all $x \in [0,1]$. Is it possible that $$\frac{-1}{4} \leq \int_0^1 f(x) \, dx \leq \frac{1}{4} \:?$$ My attempt: Using $-1 \leq f'(x) \leq 1$ and $\displaystyle f(x) - f(0) = \int_0^x f'(t)\,dt$ I get: $$\frac{1}{2}-x \leq f(x) \leq x + \frac{1}{2}$$ $$0 \leq \int_0^1 f(x) \,dx \leq 1.$$ The integral must be bounded between $0$ and $1$, but I can't determine if there exist functions with these conditions where the integral is in the range $[0,1/4]$.
No, it is not possible. Using the fact that $f(0) = 1/2 = f(1)$ and the derivative bound, we have that $$f(x) \ge \max\left\{\tfrac 12 - x,\; x - \tfrac 12\right\} = \left|x - \tfrac 12\right|$$ (draw the picture of what this means!). Hence $\int_0^1 f(x) \, dx \ge \int_0^1 \left|x - \tfrac 12\right| dx = \frac 1 4$. On the other hand, one can argue (again because of the derivative bound, together with the fact that $f \in C^1$) that $f(1/2)$ must actually be strictly positive, giving just a little bit more to the integral. What is true is that you can find a sequence $f_n$ for which $\int_0^1 f_n(x) \, dx \to \frac 1 4$ from above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The indefinite integral $\int\frac{\operatorname{Li}_2(x)}{1+\sqrt{x}}\,dx$: what is the strategy to get such indefinite integral Here there is an integral that I've found playing with Wolfram Alpha online calculator (thus to me is a curiosity that it has indefinite integral) $$\int\frac{\operatorname{Li}_2(x)}{1+\sqrt{x}}\,dx,\tag{1}$$ where the function in the numerator of the integrand is the polylogarithm $$\operatorname{Li}_2(z)=\sum_{k=1}^\infty\frac{z^k}{k^2},\tag{2}$$ see the related MathWorld's article if you want to know more about this special function. Question. To me seem difficult to get the indefinite integral that provide us Wolfram Alpha online calculator. But can you provide me the ideas or first calculations to calculate such indefinite integral? That is, imagine that I need to explain to a friend/colleague a draft about the strategy to get such indefinite integral. Then, what is the recipe that I need to explain him/her to justify from the top (without all tedious details) the indefinite integral? Many thanks. When I was playing, before knowing such a closed form of the indefinite integral, my intention was to justify $$\int_0^1\frac{\operatorname{Li}_2(x)}{1+\sqrt{x}}\,dx.$$ I say these words to provide what are my intentions, I believe that this definite integral isn't special but I was interested in calculate it when I was asking to the mentioned CAS.
A natural temptation is to remove the square root from the denominator of the integrand function by enforcing the substitution $x=u^2$, then expanding $\text{Li}_2(u^2)$ as a Maclaurin series and convert the whole thing into a combination of Euler sums, hopefully with a low weight. Indeed $$ \int_{0}^{1}\frac{\text{Li}_2(x)}{1+\sqrt{x}}=2\int_{0}^{1}\frac{u}{1+u}\text{Li}_2(u^2)\,du =4\int_{0}^{1}\left[1-\frac{1}{1+u}\right]\cdot\left[\text{Li}_2(u)+\text{Li}_2(-u)\right]\,du$$ where $$ \int_{0}^{1}\text{Li}_2(u^2)\,du = -4+\frac{\pi^2}{6}+4\log(2)$$ is straightforward and $$ \int_{0}^{1}\frac{\text{Li}_2(u)}{1+u}\,du =\frac{\pi^2}{6}\log(2)-\frac{5}{8}\zeta(3),\qquad \int_{0}^{1}\frac{\text{Li}_2(-u)}{1+u}\,du =-\frac{\pi^2}{12}\log(2)+\frac{1}{4}\zeta(3)$$ have already been proved on MSE. The involved techniques are just integration by parts and the functional relations for the dilogarithm function (a function with the sense of humour, according to D.Zagier).
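Combining the three quoted integrals gives, if I have bookkept correctly, $\int_0^1\frac{\operatorname{Li}_2(x)}{1+\sqrt x}\,dx=-8+\frac{\pi^2}{3}+8\ln 2-\frac{\pi^2}{3}\ln 2+\frac32\zeta(3)\approx 0.3577$ — treat that combination as an assumption to verify. A stdlib-only numerical cross-check (truncated series for $\operatorname{Li}_2$, Simpson's rule; both are approximations):

```python
import math

def li2(x, terms=4000):
    # truncated Maclaurin series (2) for the dilogarithm, 0 <= x <= 1
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def f(x):
    return li2(x) / (1 + math.sqrt(x))

n = 400                                   # Simpson's rule on [0, 1]
h = 1.0 / n
simpson = (f(0) + f(1)
           + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))) * h / 3

zeta3 = sum(1 / k ** 3 for k in range(1, 20000))
closed = (-8 + math.pi ** 2 / 3 + 8 * math.log(2)
          - (math.pi ** 2 / 3) * math.log(2) + 1.5 * zeta3)
print(simpson, closed)                    # both ≈ 0.3577
assert abs(simpson - closed) < 2e-3
```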
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Ordinal $\alpha$ such that $\alpha = \omega_{\alpha}$? I am asked whether or not there exists an ordinal $\alpha$ such that $\alpha = \omega_{\alpha}$, where we define: 1). $\omega_{0} = \omega$ 2). $\omega_{\alpha+1} = \gamma(\omega_{\alpha})$ 3). $\omega_{\lambda} = \sup\{\omega_{\alpha} \mid \alpha < \lambda\}$ for a non-zero limit ordinal $\lambda$ Where, if I remember correctly, $\gamma(\alpha)$ is the least ordinal that cannot be injected into $\alpha$. Given these definitions, and the question of finding such an ordinal $\alpha$, the natural thought is something along the lines of $\alpha = \omega_{\omega_{\omega_{\ddots}}}$, but I'm not even sure this object is a well defined ordinal. Trying to define it in terms of the recursive relationship, I just end up with it inside of its own definition. Given this, does there then exist such an ordinal $\alpha$? I feel as if there can't be anything else, but experience has told me that maths can have some bizarre counter examples, so I don't really know what to think anymore.
The answer is yes. The map $\alpha\mapsto\omega_\alpha$ is both continuous and increasing, which means it must have a fixed point. We can find this fixed point by taking the supremum $\sup\{0,\omega,\omega_\omega,\omega_{\omega_\omega},...\}$, and in fact if we start with any two ordinals $\alpha_{1,2}$ which aren't fixed points of the function and we know that there exists an ordinal $\alpha_1<\gamma<\alpha_2$ which is a fixed point of the function, then if we take the two supremums $\sup\{\alpha_1,\omega_{\alpha_1},\omega_{\omega_{\alpha_1}},...\}$ and $\sup\{\alpha_2,\omega_{\alpha_2},\omega_{\omega_{\alpha_2}},...\}$ we will in fact get different fixed points. If we take a function $F(\alpha)$ which maps $\alpha$ to the least $\omega$-fixed point above it, then we take the class $\{F(\alpha)\mid\alpha\in\text{Ord}\}$ and since we already know that for different values of $\alpha$ with fixed points in between we get different fixed points, it follows that there are as many ordinals satisfying $\alpha=\omega_\alpha$ as there are ordinals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Euler Differential Equation General Solution Given the initial value problem of Euler Differential Equation: $$x^2y''+\beta xy'+\alpha y=0$$ $$y(-1)=2 , y'(-1)=3$$ According to my book, the general solution for x<0 is the same as that of x > 0 so all the possible general solutions should be expressed with absolute value of x. For example, if we assume $y= x^r$ then if the characteristic equation resulted in a repeated root for r, then the general solution would be $y= c_1\left | x \right |^r+ c_2\left | x \right |^r\ln(\left | x \right |)$ with the absolute value for both x terms. However, when I encountered an example where the initial conditions were at a negative x point, my book expressed the general solution without the absolute value of x which gave a different answer from the one with the general solution expressed with the absolute value of x. So which is correct, expressing the general solution with or without the absolute values?
To explain, we must go back to why the trial solution $y=x^r$ works here. Substitute $x = e^t$ to get $$ \frac{d^2y}{dt^2} + (\beta-1)\frac{dy}{dt} + \alpha y = 0 $$ This is a linear equation with constant coefficients, therefore the general solution has the form $y(t) = e^{rt}$, which leads to $y(x) = x^r$. A double root gives $y = c_1e^{rt} + c_2 t e^{rt} = c_1 x^r + c_2 x^r \ln(x)$ In both cases, the absolute value is not needed, because our original substitution $x=e^t$ assumes $x > 0$. However, if an initial condition is given for $x < 0$, this assumption falls apart, and we must instead substitute $x = -e^t$. Fortunately, this substitution results in the same equation in $t$, and we get the general solution $y = e^{rt} = (-x)^r$, or $y = te^{rt} = (-x)^r\ln(-x)$ So to answer your question, which form of the solution to use depends on the context of the problem. If an initial condition is given, either the one with $(x)$ or $(-x)$ is sufficient. If no initial condition is given, then using the absolute value on $|x|$ would encompass both cases, giving the most general form. In practice, the absolute value is often ignored. You might notice that a specific solution cannot include $x=0$ in its domain, due to it being a singular point. Hence, if an initial condition is given in $x<0$, then the solution cannot exist in $x>0$, and vice versa.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2753987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every function f: A $\rightarrow$ P(A) is not surjective In this case the P(A) is the power set of A. I want to prove this by contradiction, even though it's easier to say that the power set of A is a bigger infinity, I am not allowed to assume that. So I want to proceed by contradiction. This is a proof by contradiction, let us assume that every function is f: A $\rightarrow$ P(A) is surjective, we shall show that this leads to a contradiction. Consider the set S {x $\in$ A : x $\notin$ f(x)} I am not sure where to go from here any advice?
You're almost done! Let $A$ be a set and $f\colon A\to{\cal P}(A)$ a function. Consider $$S = \{x\in A : x\notin f(x)\} \in {\cal P}(A).$$ If $f$ were surjective, there would exist some $z\in A$ such that $f(z)=S$. One might ask: does $z$ belong to $S$? We have: $$z \in S \iff z\notin f(z) \iff z\notin S$$ a contradiction. Thus, $f$ can not be surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Group Homomorphism Involving the Circle Group in C^x If U = {${z\in C^x | |z|=1}$}, how do I show that $C^x$/U $\cong$ $\Bbb R^+$? I know that the Fundamental Theorem of Group Homomorphisms has to come into play, given a function $$f:C^x-> \Bbb R^+$$ where f(z) = |z|.
I take it that our OP means what is more usually written "$\Bbb C^\times$" by "$\Bbb C^x$" and that his $\Bbb R^+ = \{ r \in \Bbb R \mid r > 0 \}, \tag 1$ so that $\Bbb R^+$ is the multiplicative subgroup of positive reals. Then the map he calls $f(z) = \vert z \vert, \; f: \Bbb C^\times \to \Bbb R^+ \tag 2$ obeys $f(z_1 z_2) = \vert z_1 z_2 \vert = \vert z_1 \vert \vert z_2 \vert = f(z_1) f(z_2), \tag 3$ which shows it is a group homomorphism $\Bbb C^\times \to \Bbb R^+$; it is clearly surjective, since for any positive real $\alpha$ there is some $z \in \Bbb C^\times$ with $\vert z \vert = \alpha$. Also, we have $\ker f = \{z \in \Bbb C^\times \mid \vert z \vert = 1 \} = U; \tag 4$ thus, by the usual theorem(s) of elementary group theory, $f$ induces an isomorphism $\tilde f: \Bbb C^\times / U \cong \Bbb R^+, \tag 5$ and that's how it is shown.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimization of functional using Euler-Lagrange We've recently started doing Calculus of Variations in my analysis class and we're applying it to minimizing/maximizing functions. So the way we generally were taught to tackle the problem is to first find the Euler-Lagrange equation, solve the differential equation, then check concavity/convexity to ensure uniqueness. I'm having some trouble on the following question: (note: y with the circle thing on top means y') Problem 4. Solve the minimization problem $$ \min \int_1^2 \left(y^2 + 2t\dot y y + 4 t^2 {\dot y}^2\right) dt , \; y(1) = 3, \; y(2)=2 $$ My attempt: I can't find where I'm going wrong because I'm ending up with a differential equation whose solutions (when I solve the characteristic equations) don't involve t at all, which is problematic. Any help at all would be great! :)
Here $$ L(y,\dot y,t) = y^2+2t y\dot y +4t^2 \dot y^2 $$ $$ \frac{d}{dt}\frac{\partial L}{\partial \dot y}-\frac{\partial L}{\partial y} = 8t^2\ddot y+16t \dot y = 0 $$ or $$ t\ddot y + 2\dot y = 0 $$ now making $z = \dot y \Rightarrow t\dot z + 2 z = 0 \Rightarrow z = C_0 t^{-2}\Rightarrow y = -C_0 t^{-1}+C_1$ etc.
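Continuing from the general solution (this continuation is mine, not part of the answer): fitting the stated boundary conditions $y(1)=3$, $y(2)=2$ gives $C_1 - C_0 = 3$ and $C_1 - C_0/2 = 2$, so $C_0 = -2$, $C_1 = 1$, i.e. $y = 1 + 2/t$. A quick numerical check of the Euler–Lagrange equation and the boundary values:

```python
def y(t):
    # candidate extremal y = 1 + 2/t from fitting the boundary conditions
    return 1 + 2 / t

h = 1e-5
for t in [1.0, 1.5, 2.0]:
    yp  = (y(t + h) - y(t - h)) / (2 * h)            # y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2  # y''
    assert abs(t * ypp + 2 * yp) < 1e-4              # t y'' + 2 y' = 0
assert y(1) == 3 and y(2) == 2
```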
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Range of $k$ for which equation has positive roots Range of $k$ for which both the roots of the equation $(k-2)x^2+(2k-8)x+3k-17=0$ are positive. Try: if $\alpha,\beta>0$ be the roots of the equation. Then $$\alpha+\beta=\frac{8-2k}{k-2}>0\Rightarrow k\in(2,4)$$ And $$\frac{3k-17}{k-2}>0\Rightarrow k\in(-\infty,2)\cup \bigg(\frac{17}{3},\infty\bigg)$$ I have got $k=\phi$. I did not understand where i am wrong , please explain, Thanks
You do not have to compute the roots and solve irrational inequalities — just a little thinking and some theorems on quadratic polynomials. * *First, it has to be a quadratic equation, which means $k\ne 2$. *This condition being satisfied, it must have real roots, i.e. its reduced discriminant has to be non-negative: $$\Delta'=(k-4)^2-(3k-17)(k-2)=-2k^2+15k-18\ge 0 $$ $\Delta'$ is a quadratic polynomial in $k\mkern1mu$; its roots are $\frac 32$ and $6$ and the leading coefficient is negative, so $$ \Delta'\ge 0 \iff \frac 32\le k \le 6. $$ *Last, these roots must have the same sign, which means their product $p$ is positive: $$p=\frac{3k-17}{k-2} > 0 \iff (3k-17)(k-2)> 0 \iff k >\frac{17}3\quad\text{or}\quad k < 2,$$ and this common sign is positive, which is equivalent to their sum $s$ being positive: $$s=-\frac{2(k-4)}{k-2}> 0\iff(k-2)(k-4) < 0 \iff 2 < k < 4. $$ The last two conditions are incompatible, so there's no $k$ such that both roots are positive.
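The conclusion is easy to corroborate by brute force; a grid scan over $k$ (purely numerical evidence, script is my own):

```python
import math

# Scan k and confirm no value gives a quadratic with two positive roots.
found = []
for i in range(-2000, 2001):
    k = i / 100.0
    if abs(k - 2) < 1e-9:
        continue                       # k = 2: not a quadratic
    a, b, c = k - 2, 2 * k - 8, 3 * k - 17
    disc = b * b - 4 * a * c
    if disc < 0:
        continue                       # no real roots
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    if r1 > 0 and r2 > 0:
        found.append(k)
print(found)   # []
```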
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Finding probability for general cases For a student to qualify, he must pass at least two out of three exams. The probability that he will pass the 1st exam is $p$. If he fails in one of the exams then the probability of passing in the next exam is $p/2$ otherwise it remains the same. Find the probability that he will qualify. My textbook answer reads $2p^2 – p^3$. This is possible if only the below cases are considered: * *He passes first and second exam. *He passes first, fails in second but passes third exam. *He fails in first, passes second and third exam. But I think this is wrong since at least two out of three exams means,passing in first, second and third exam is inclusive. Someone please solve this paradox.
All three passed: $p^3$. First and second passed, third failed: $p^2(1-p)$. First passed, second failed, third passed: $p(1-p)p/2$. First failed, next two passed: $(1-p)(p/2)^2$. Then add all four probabilities. The answer would be $7p^2/4-3p^3/4$. UPDATE. I understood it so that if once failed, the probability of success remains at most $p/2$. But I suppose this is not correct. So the case "First failed, next two passed" leads to $(1-p)(p/2)p$ and the whole probability is indeed $2p^2-p^3$. The problem is badly stated, by the way.
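Under the textbook's reading — after any pass the chance resets to $p$, after a fail the next exam has chance $p/2$, and all three exams are taken — a Monte Carlo run agrees with $2p^2-p^3$ (the simulation is my own sketch of that interpretation):

```python
import random

def qualifies(p, rng):
    # "at least two of three exams passed" under the textbook's rules
    passes, prob = 0, p
    for _ in range(3):
        ok = rng.random() < prob
        passes += ok
        prob = p if ok else p / 2
    return passes >= 2

rng = random.Random(0)
p, n = 0.6, 200_000
est = sum(qualifies(p, rng) for _ in range(n)) / n
exact = 2 * p**2 - p**3            # 0.504
print(est, exact)
assert abs(est - exact) < 0.01
```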
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Question on speed. Speed of a bus is 45 km/h. If it stops for a few minutes in an hour then its average speed becomes 42 km/h. Find out the time duration it stops for in an hour. My attempt: Let Distance be D. Let the time duration for which it halts be x. $45=\frac{D}{1}$ $42=\frac{D}{1+x}$ Therefore, $45=42+42x$ $x=4\frac{2}{7}\text{ minutes}$ What have I done wrong?
While there have been several answers posted already, I think that this approach is a bit more intuitive: Every hour, the bus covers 42 km. But its full speed is 45 km/h. So it's traveling at 42/45 of its full speed. Which means that it's driving only 42/45 of the time, and is stopped the rest of the time. So the portion of time that it's stopped is $1-\frac{42}{45} = \frac{45}{45}-\frac{42}{45} =\frac{3}{45} = \frac{1}{15}$, and one fifteenth of an hour is 4 minutes.
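The same arithmetic in code, using exact rational arithmetic so no rounding obscures the answer:

```python
from fractions import Fraction

driving_fraction = Fraction(42, 45)          # fraction of the hour spent moving
stop_minutes = (1 - driving_fraction) * 60   # 1/15 of an hour
print(stop_minutes)   # 4
```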
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 2 }
Finding density of a sum of two variables Let $X$ be exponential r.v with parameter $\lambda$ and $Y$ also exponential with parameter $2 \lambda$ and independent of $X$. Find probability density of $X+Y$. We know $$ f_{X+Y}(a) = \int\limits_{-\infty}^{\infty} f_X(x)f_Y(a-x) dx $$ Now, my problem here is to put the right limits of integration. I would say $x > 0$, and hence $$ f_{X+Y}(a) = \int\limits_0^{\infty} \lambda e^{- \lambda x} 2 \lambda e^{-2\lambda(a-x)} dx = 2 \lambda^2 \int\limits_0^{\infty}e^{-2 \lambda a} e^{\lambda x }\,dx$$ But, then we see that the integral would be divergent. Are my limits of integration wrong?
Note that the exponential RV can only take positive values. Hence, your limits should ensure that $$X,Y>0$$ As such, $$a-x>0\implies x<a$$ and from above, $$x>0$$ Thus the limits should be from $0$ to $a$.
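With the limits fixed at $0$ to $a$, carrying out the integral gives (by my computation — treat the closed form as something to verify) $f_{X+Y}(a) = 2\lambda\left(e^{-\lambda a} - e^{-2\lambda a}\right)$, which integrates to $1$ over $(0,\infty)$. A quick numerical check:

```python
import math

lam = 1.3   # any λ > 0

def conv_density(a, n=20000):
    # midpoint-rule evaluation of ∫_0^a λ e^{-λx} · 2λ e^{-2λ(a-x)} dx
    h = a / n
    return h * sum(lam * math.exp(-lam * (i + 0.5) * h)
                   * 2 * lam * math.exp(-2 * lam * (a - (i + 0.5) * h))
                   for i in range(n))

def closed_form(a):
    # candidate closed form: 2λ (e^{-λa} - e^{-2λa})
    return 2 * lam * (math.exp(-lam * a) - math.exp(-2 * lam * a))

for a in [0.2, 1.0, 3.0]:
    assert abs(conv_density(a) - closed_form(a)) < 1e-6
print(closed_form(1.0))
```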
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Resolvent of semicircle law I am trying to approximate the Stieltjes transform of the semicircle law. In particular, it is known that the Stieltjes transform m(z), for z in the upper half plane, is exactly $$ m(z) = \frac{-z + \sqrt{z^2 - 4}}{2} $$ I would like to show that $$ Im(m(z)) \sim \sqrt{K + y} $$ for $z = x+ iy$, $K = ||x|-2|$ and $|x| \leq 2$. ( $a \sim b$ means there exist constants $c, C$ such that $cb \leq a \leq Cb$.) Also, $$ Im(m(z)) \sim \frac{y}{\sqrt{K + y}} $$ when $x \geq 2$. I have tried Taylor expansions with no success.
It is known that $\sqrt{a + ib} = p + iq$ with $p = \frac{1}{\sqrt{2}} \sqrt{\sqrt{a^2 + b^2} + a}$ and $q = \frac{sign(b)}{\sqrt{2}}\sqrt{ \sqrt{a^2 + b^2} - a}$.
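The formula is easy to verify against the principal square root from the standard library (for $b \ne 0$, so the sign factor is well defined):

```python
import cmath
import math

def sqrt_formula(a, b):
    # p + iq per the formula above, b != 0 assumed
    r = math.hypot(a, b)                          # sqrt(a^2 + b^2)
    p = math.sqrt((r + a) / 2)
    q = math.copysign(math.sqrt((r - a) / 2), b)  # sign(b) * sqrt(...)
    return complex(p, q)

for a, b in [(3.0, 4.0), (-1.0, 2.0), (0.5, -0.3), (-2.0, -5.0)]:
    assert abs(sqrt_formula(a, b) - cmath.sqrt(complex(a, b))) < 1e-12
```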
{ "language": "en", "url": "https://math.stackexchange.com/questions/2754919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Meaning of the Fourier transform of $1$ According to Wolfram, the Fourier transform of $1$ is $\sqrt{2\pi} \, \delta(\omega)$. Can someone explain what this means? The only frequency the constant function $1$ contains should just be $0$, since $\cos(0\cdot x) = 1$. So, shouldn't the result have just been $0$ instead of that expression?
Short intuitive answer: The Fourier transform breaks functions down into their constituent frequencies, and frequency is the inverse of wavelength. A constant function $1$ is so spread out that it effectively has infinite wavelength, or zero frequency. Hence its Fourier transform is concentrated at $0$, giving a spike (e.g. a delta function). Conversely, the Fourier transform of $\delta$ is constant. Long technical answer: The Fourier transform is most easily defined on $\mathcal{S}$, the space of Schwartz functions, where we have a formula that $$\hat{f}(\xi) = \int_{\mathbb{R}} f(x) e^{-2\pi i x \xi} \, dx.$$ This definition leads to a lot of useful properties, such as $$\int_{\mathbb{R}} \hat{f} g \, dx = \int_{\mathbb{R}} f \hat{g} \, dx.$$ We can use this to define the distributional Fourier transform by taking this as a definition: Given a (tempered) distribution $T$ that acts on Schwartz functions $g$ via the pairing $\langle T, g\rangle$, we define the Fourier transform of $T$ to be the distribution that acts via $$\langle \hat{T}, g\rangle = \langle T, \hat{g}\rangle.$$ Basically, all this means is that we can move the hat back and forth just like with "nice" functions. Given all this, the delta function is best viewed as the distribution which acts via $$\langle \delta, g\rangle = g(0).$$ Thinking of $\delta$ as a spike at zero with area one and integrating this against $g$ also makes sense. Then the Fourier transform of $\delta$ is the distribution that acts via $$\langle \hat{\delta}, g\rangle = \langle \delta, \hat{g}\rangle = \hat{g}(0) = \int_{\mathbb{R}} g \, dx = \int_{\mathbb{R}} 1 \cdot g \, dx.$$ Therefore, we can think of $\hat{\delta}$ as simply integrating against a constant function $1$, and identify $\hat{\delta} = 1$. (Depending, of course, on the normalization of the Fourier transform).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Topological Manifold is Hausdorff, Second Countable, Locally Homeomorphic to $R^n$ Here is a result from Topology and Differential Geometry: A topological manifold is a topological space such that the three conditions are met: * *Hausdorff *second countable, and *covered by charts homeomorphic to open subsets in $R^n, \,\,n\in N$. Statement: None of the three conditions follows from the remaining two. In other words, none of the conditions is dispensable. I can take two of the conditions as holding and eventually succeed in proving analytically the necessity of the third one. What I need is examples which demonstrate that two of the three conditions are not enough in order for a topological space to get the structure of a topological manifold. Can somebody show such examples? Many thanks.
Hausdorff, second countable, not locally homeomorphic to $\mathbb R$: * *$\mathbb Q$. Hausdorff, not second countable, locally homeomorphic to $\mathbb R$: * *The disjoint union of uncountably many copies of $\mathbb R$. Not Hausdorff, second countable, locally homeomorphic to $\mathbb R$: * *The line with two origins. As a set, this space is $\mathbb R\cup\{0^*\}$, where $0^*$ is some object not in $\mathbb R$. The open sets consist of all the (usual) open sets in $\mathbb R$, along with those of the form $U\setminus \{0\}\cup \{0^*\}$ and $U\cup\{0^*\}$, where $U$ is any (usual) open subset of $\mathbb R$ containing $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Divisibility of $n^2+9$ by $n+3$ How would one find all integers $n$ such that $n+3 \vert n^2 +9$? I assume it is important that $(n+3)^2 - 6n = n^2+9$, but I am struggling to see how you can find all $n$, and confirm an upper bound such that there are no more such $n$.
Hint $$n^2+9=(n+3)^2-6(n+3)+18$$ so $$\frac{n^2+9}{n+3}=(n+3)-6+\frac{18}{n+3}$$ and then $n+3|18$.
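Since the hint reduces the problem to $n+3\mid 18$, the full solution set can be enumerated mechanically. A quick sketch (variable names are my own); note that both positive and negative divisors of $18$ must be considered, because $n$ ranges over all integers:

```python
# n + 3 must divide 18, so n + 3 ranges over the positive and negative
# divisors of 18, and n = d - 3 for each such divisor d.
pos_divisors = [d for d in range(1, 19) if 18 % d == 0]          # 1, 2, 3, 6, 9, 18
solutions = sorted({d - 3 for d in pos_divisors} | {-d - 3 for d in pos_divisors})

# Sanity check: each candidate really makes (n^2 + 9) / (n + 3) an integer.
assert all((n * n + 9) % (n + 3) == 0 for n in solutions)
print(solutions)  # [-21, -12, -9, -6, -5, -4, -2, -1, 0, 3, 6, 15]
```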
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Inequalities involving means and probabilities of paths of cadlag stochastic processes Let's say we can prove that for a cadlag process, $X, $ there exists $\alpha > 1, \beta > 0, $ and a non-decreasing continuous function $H:[0, 1] \mapsto \mathbb{R} $ such that $$ E\biggl[\bigl|X_s^{(n)} - X_r^{(n)}\bigr|^\beta\cdot \,\bigl|X_t^{(n)}-X_s^{(n)}\bigr|^\beta\biggr]\, \le \bigl(H(t) - H(r)\bigr)^\alpha \quad \mbox{ for } 0 \le r \le s \le t \le 1. $$ How does one prove that from the given condition above, it is possible to infer that there is a constant $C_{\alpha, \beta} $ such that $$ P\bigl[|X_s-X_r|\ge \epsilon, |X_t-X_s|\ge \epsilon\bigr] \le\frac{C_{\alpha, \beta}}{\epsilon^{2\beta}}\bigl(H(t)-H(r)\bigr)^\alpha. $$ The inequality above involving the means of $X$ is part of a refinement of Theorem 13.5 in Billingsley's Convergence of Probability Measures, 2nd edition, which establishes sufficient conditions for weak convergence in $D[0, 1]. $ I thought converting from an inequality in the means to one in probabilities was going to be a simple step, by just using some form of Markov inequality, but upon trying it, it does not look that trivial. Unless I am just failing to see something reasonably basic, which I cannot exclude. Thank you.
Edited on April 27. You are absolutely right, Zhoraster. It is as simple as recognizing that for any two random variables $X$ and $Y$, and $a > 0$: $$ a^2\cdot 1_{|X|\ge a, |Y|\ge a} \le |XY| $$ and then going from here it is straightforward. For one reason or another, I managed to mess up that basic inequality. Sorry everyone for the "false alarm".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
British Maths Olympiad (BMO) 2006 Round 1 Question 5, alternate solution possible? The question states For positive real numbers $a,b,c$ prove that $(a^2 + b^2)^2 ≥ (a + b + c)(a + b − c)(b + c − a)(c + a − b)$ After some algebraic wrangling we can get to the point where: $(a^2 + b^2)^2 + (a + b)^2(a − b)^2 + c^4 ≥ 2c^2(a^2 + b^2)$ At this point if we take the $LHS - RHS$ we can write the expression as the sum of squares proving the inequality. I was wondering, is it possible to divide both sides by $c^2(a^2 + b^2)$ and show somehow that $((a^2 + b^2)^2 + (a + b)^2(a − b)^2 + c^4)/(c^2(a^2 + b^2)) ≥ 2$ I tried but was not able to.
Suppose that $a,b,c$ can form the sides of a triangle. Let $s=\frac{a+b+c}{2}$ be the semiperimeter. The inequality becomes $$ (a^2+b^2)^2\ge2s\cdot 2(s-a)\cdot 2(s-b)\cdot 2(s-c) $$ or by Heron's formula, $$a^2+b^2\ge 4A$$ where $A$ is the area of the triangle. If $\theta$ is the angle between sides $a$ and $b$, this reduces to $$ a^2-2ab\sin\theta+b^2\ge0,$$ and we have$$ a^2-2ab\sin\theta+b^2\ge a^2-2ab+b^2=(a-b)^2\ge0$$ In the case where $a,b,c$ do not form a triangle, exactly one of the factors on the right-hand side is negative or one of the factors is $0$, so the inequality is trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Least number of rounds to find all hidden pairs In my country there's a TV reality show in which $n+1$ men and $n$ women live in a house and, over the course of the show, they have to, as a group, find out who their 'match' is. $($In the actual TV show, $n=10)$. Each man is assigned to a single woman such that every woman save for one has a single man assigned to her; this exceptional woman therefore has two men assigned to her. At the start of the game, no assignment is known to any player. Each round lasts a week, and on each round the players assemble into pairs (hence, one man is left out). Then, each pair that is a match is revealed to be a correct pair. The players win when $n$ correct pairs are assembled in a round. What is the least number $r(n)$ of rounds that guarantees a win? More generally, what kind of theory/bibliography is available to deal with questions like this? Information theory? Combinatorial group theory? I confess I'm at a loss here, and devising a winning strategy here by 'trial and error' seems like an approach whose feasibility decreases very fast as $n$ increases.
A very slight improvement on the circle idea of @saulspatz makes $n$ rounds sufficient for a win, as follows. Terminology: There is one woman matched with 2 men. We'll call her the "special" woman, and her 2 men, the "special" men. Algorithm: randomly set one man aside. This leaves $n$ men and $n$ women. Sit them in 2 concentric circles. In each round pair off each man and woman facing each other. Between each round the women move clockwise 1 space. (This happens even if a match is found.) After $n-1$ rounds, each man (except the one set aside) has met $n-1$ different women, and therefore he knows which woman he is matched with, because he has either met her, or, she is the only one he hasn't met. * *If the set-aside man is one of the "special" men, then the $n$ men in the circle match with different women. Just pair them up. *If both "special" men are among the $n$ (i.e. not set aside), then they both know who the "special" woman is. Pair her up with one of them (doesn't matter which). For the other $n-2$ men in the circle, pair each up with his match. This leaves one woman whom none of the $n$ men are matched with. Pair her up with the set-aside man. I'm not sure this is the best solution though...
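The key claim in the schedule above (after $n-1$ rounds every man in the circle has faced $n-1$ distinct women) can be checked mechanically. A minimal sketch of the rotation; the seat indexing is my own convention:

```python
n = 10  # as in the TV show
met = {seat: set() for seat in range(n)}   # women each man has faced
for rnd in range(n - 1):                   # rounds 0 .. n-2
    for seat in range(n):
        # women shift one seat clockwise per round
        woman = (seat + rnd) % n
        met[seat].add(woman)

# Every man has faced n - 1 distinct women, so each can deduce his match:
# either he has already faced her, or she is the single woman he never faced.
assert all(len(faced) == n - 1 for faced in met.values())
missing = {man: (set(range(n)) - faced).pop() for man, faced in met.items()}
assert sorted(missing.values()) == list(range(n))   # each woman missed by exactly one man
```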
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Meet-irreducible element of lattice I have a question concerning the exercise 5.7 of the Davey & Priestley's book. Here are the questions: Let $L$ be a finite distributive lattice. Prove by the steps below that $\mathcal{J}(L)\cong\mathcal{M}(L)$. (i) Let $x\in\mathcal{J}(L)$. Show that there exists $\hat{x}\in L$ such that $\downarrow \hat{x}=L\backslash\uparrow x$. [Hint. Let $\hat{x}:=\bigvee (L\backslash\uparrow x)$ and then use Lemma 5.11 to show that $\hat{x}\not\geq x$]. I have done it and it's ok. (ii) Show that for all $x\in\mathcal{J}(L)$ the element $\hat{x}$ defined in (i) is meet-irreducible. Here, I would need some help. We have to show that if $\bigvee (L\backslash\uparrow x)<a$ and $\bigvee (L\backslash\uparrow x)<b$ then $\bigvee (L\backslash\uparrow x)<a\wedge b$. Can we show first that: $\forall y\in L\backslash\uparrow x$ if $y<a$ and $y<b$ then $y<a\wedge b$ Thank you.
Relating to your question "Can we show first that...": no, that's not true. As an example, take $L$ to be the power-set of $\{a,b,c\}$, which is distributive. Singletons are join-irreducible. For example, take $x=\{a\}$; $y=\{b\} \in L\setminus\uparrow x$. Then $y=\{a,b\}\cap\{b,c\}$, so it is not meet-irreducible. Now, for the main problem: prove that $\hat{x}$ is meet-irreducible. Suppose, for a contradiction, that it isn't, that is, there exist $a,b\in L$ such that $\hat{x}=a \wedge b$, but $\hat{x}\neq a,b$ (and so $\hat{x}<a$ and $\hat{x}<b$). So $a,b \notin \downarrow\hat{x}=L\setminus\uparrow x$, and thus $a,b \in \uparrow x$, that is, $x\leq a$ and $x \leq b$. It follows that $x\leq a\wedge b =\hat{x}$, a contradiction to what you have proved in (i) (that $\hat{x}\ngeq x$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometric solution? Given coordinates of $A$, $B$, $C$, find $M$ on $y=x$ minimizing $AM+BM+CM$ I have the problem: Let be given three points $A(1,2)$, $B(3,4)$, $C(5,6)$. Find point $M$ on the line $y=x$ so that sum of distances $P=AM+BM+CM$ is smallest. I tried. We have $$P=\sqrt{(x-1)^2 + (x-2)^2} + \sqrt{(x-3)^2 + (x-4)^2} +\sqrt{(x-5)^2 + (x-6)^2}.$$ We know that $$\sqrt{a^2 + b^2}+\sqrt{c^2 + d^2} \geqslant \sqrt{(a+c)^2 + (b+d)^2}.$$ The sign of equal occur when and only when $\dfrac{a}{c}=\dfrac{b}{d}$. We have \begin{align*} \sqrt{(x-1)^2 + (x-2)^2} + \sqrt{(x-5)^2 + (x-6)^2} & = \sqrt{(x-1)^2 + (x-2)^2} + \sqrt{(5-x)^2 + (6-x)^2} \\ & \geqslant \sqrt{(x-1 + 6-x)^2 + (x-2 + 5-x)^2}\\ & \geqslant \sqrt{34}. \end{align*} The sign of equal occur $$\dfrac{x-1}{6-x}=\dfrac{x-2}{5-x} \Leftrightarrow x=\dfrac{7}{2}.$$ Another way $$\sqrt{(x-3)^2 + (x-4)^2} =\sqrt{2x^2 - 14 x + 25} = \sqrt{2}\sqrt{\left (x-\dfrac{7}{2}\right)^2 + \dfrac{1}{4} } \geqslant \dfrac{1}{\sqrt{2}}.$$ The sign of equal occur $ x=\dfrac{7}{2}.$ Therefore, the least of the expression $P $ is $\dfrac{1}{\sqrt{2}}+\sqrt{34}$ at $x=\dfrac{7}{2}.$ How can I solve this problem geometrically?
Let $a$ be the line $y=x$. We claim that for the foot $M$ of the perpendicular dropped from $B$ to $a$, the sum $AM + BM + CM$ is the smallest. It is easy to demonstrate this using the additional point $A'$, which is symmetric to the point $A$ with respect to the line $a$. One may see that $A'MC$ is a line segment (because of the symmetry of points $A$, $C$ with respect to point $B$). Thus, if $M' \neq M$ is an arbitrarily chosen point on $a$, then \begin{align} AM' + BM' + CM' = & A'M' + M'C + BM' > \\ &A'M + MC + BM = AM + BM + CM, \end{align} so $M$ is our desired point. It's easy to calculate its coordinates, which are $(\frac{7}{2},\frac{7}{2})$, exactly as in your answer.
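As a numerical cross-check of the reflection argument (the scan grid is my own choice), one can scan points $M=(x,x)$ on the line and compare the minimum with the closed form $\sqrt{34}+\frac{1}{\sqrt 2}$:

```python
import math

A, B, C = (1, 2), (3, 4), (5, 6)

def total(x):
    # M = (x, x) on the line y = x; sum of distances AM + BM + CM
    return sum(math.hypot(x - px, x - py) for px, py in (A, B, C))

best = min((i / 1000 for i in range(8001)), key=total)
claimed_min = math.sqrt(34) + 1 / math.sqrt(2)
assert abs(best - 3.5) < 1e-2                  # minimiser at (7/2, 7/2)
assert abs(total(3.5) - claimed_min) < 1e-9    # value sqrt(34) + 1/sqrt(2)
```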
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Some Integral Estimate I am currently trying to figure out the following estimate: Let $\gamma: [0, 2 \pi] \to \mathbb C, \gamma(t) = e^{\mathrm i t}$, $\gamma^* := \gamma[0, 2 \pi]$ and $f: \gamma^* \to \mathbb R$ be continuous. Then it holds $$\left\vert \int_{\gamma} f(z) \, dz \right\vert \leq 4 \max_{z \in \gamma^*} \vert f(z) \vert.$$ Obviously the naive estimate yields $$\left\vert \int_{\gamma} f(z) \, dz \right\vert \leq 2 \pi \max_{z \in \gamma^*} \vert f(z) \vert.$$ So one needs to be a little bit more clever. So I wrote down \begin{align*} \int_{\gamma} f(z) \, dz = \int_0^{2\pi} f(e^{\mathrm i t}) \mathrm i e^{\mathrm i t} \, dt = \int_0^{2\pi} f(e^{\mathrm i t}) (\mathrm i \cos(t) - \sin(t)) \, dt \end{align*} Now I tried a few things. I tried to use $\vert z \vert^2 = \operatorname{Re}(z)^2 + \operatorname{Im}(z)^2$ and I wrote the integrals as double integrals achieving \begin{align*} \left\vert \int_{\gamma} f(z) \, dz \right\vert^2 &= \int_0^{2 \pi} \int_0^{2 \pi} \cos(t - s) f(e^{\mathrm i t}) f(e^{\mathrm i s}) \, dt \, ds \\ &\leq \left( \max_{z \in \gamma^*} \vert f(z) \vert \right)^2 \int_0^{2 \pi} \int_0^{2 \pi} \vert \cos(t - s) \vert \, dt \, ds = 8 \pi \left( \max_{z \in \gamma^*} \vert f(z) \vert \right)^2 \end{align*} by using the addition theorem, which is a better bound but not good enough. I guess one has to split the first integral in a clever way but I don't have any idea how. Maybe one can make some progress from using the connection between $\sin$ and $\cos$. It seems to be essential that $f$ has real values, but I don't see how to use it. I would appreciate some hints or solutions :)
We want to estimate $|\int_0^{2\pi}f(t)e^{it}\,dt|.$ This equals $$e^{is}\int_0^{2\pi}f(t)e^{it}\,dt=\int_0^{2\pi}e^{i(s+t)}f(t)\,dt$$ for some real $s.$ Now the above is nonnegative, so it equals $$\int_0^{2\pi}\text { Re}\left (e^{i(s+t)}f(t)\right)\,dt. $$ Since $f$ is real valued (!), the last integral equals $$\int_0^{2\pi}\cos(s+t)f(t)\,dt \le M\int_0^{2\pi}|\cos(s+t)|\,dt. $$ Here $M$ is the maximum value of $|f|.$ We can take $s=0$ for simplicity in the last integral to see it equals $4.$ We're done.
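A numerical sanity check of the two ingredients: that $\int_0^{2\pi}|\cos t|\,dt=4$, and that the bound holds with room to spare for a nearly extremal $f$. The test function $\tanh(50\cos t)$ is my own choice of a continuous $f$ with $\max|f|\le 1$ that approximates $\operatorname{sgn}(\cos t)$:

```python
import math

m = 20000
h = 2 * math.pi / m
ts = [(k + 0.5) * h for k in range(m)]   # midpoint rule nodes

# integral of |cos t| over [0, 2*pi]: this is where the constant 4 comes from
abs_cos = h * sum(abs(math.cos(t)) for t in ts)

def contour_abs(f):
    # |integral over [0, 2*pi] of f(t) e^{it} dt| by the midpoint rule
    re = h * sum(f(t) * math.cos(t) for t in ts)
    im = h * sum(f(t) * math.sin(t) for t in ts)
    return math.hypot(re, im)

# continuous f with max|f| <= 1 that nearly attains the bound 4
near_extremal = contour_abs(lambda t: math.tanh(50 * math.cos(t)))
assert abs(abs_cos - 4) < 1e-2
assert 3.9 < near_extremal <= 4.0 + 1e-6
```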
{ "language": "en", "url": "https://math.stackexchange.com/questions/2755973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sufficient condition for graph isomorphism assuming same degree sequence We assume graphs to be simple and undirected. In general, having the same degree sequence is not sufficient for two graphs to be isomorphic. A trivial example is a hexagon, which is connected, and two separate triangles, which are obviously not connected, yet their degree sequences are the same. Can we also exhibit counterexamples with two non-isomorphic connected graphs having the same degree sequence? What about two such Euler graphs? Is it known for which extra conditions having the same degree sequence becomes sufficient for isomorphism?
As you pointed out in your question, the graphs $G=C_6$ and $H=C_3+C_3$ are a trivial example of two nonisomorphic graphs with the same degree sequence. They are the only $2$-regular graphs on $6$ vertices. Their complements $\overline G$ and $\overline H$ are another example of two nonisomorphic graphs with the same degree sequence, and they are both connected. $\overline G$ is the skeleton of a triangular prism, and $\overline H=K_{3,3}.$ They are (of course) the only $3$-regular graphs on $6$ vertices. They are nonisomorphic because their complements are nonisomorphic; also, only one of them is planar; also, they have different clique numbers and different chromatic numbers. If you want an example with Eulerian graphs, consider the graphs $\overline{C_3+C_6}$ and $\overline{C_4+C_5}.$ They are nonisomorphic (because their complements are nonisomorphic), and they are $6$-regular graphs on $9$ vertices (because their complements are $2$-regular graphs on $9$ vertices), and they are connected (because their complements are disconnected), so they are Eulerian.
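The hexagon-versus-two-triangles example can be verified in a few lines; the adjacency encoding below is my own:

```python
# C6 (a hexagon) vs. two disjoint triangles, both on vertices 0..5
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}

def degree_sequence(g):
    return sorted(len(nbrs) for nbrs in g.values())

def is_connected(g):
    # depth-first search from vertex 0
    seen, stack = {0}, [0]
    while stack:
        for w in g[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(g)

assert degree_sequence(C6) == degree_sequence(two_triangles) == [2] * 6
assert is_connected(C6) and not is_connected(two_triangles)
```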
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
How did the author get to these inequalities? Context: Here $\tau = \langle X, T \rangle $ and $v = \langle X, N \rangle$. I understood everything in the proof except for the blue part. Why is it that $\displaystyle{\lim_{x \to \infty} y'(x)} > 0$? Where do the final inequalities come from? This is the thesis where I got the text. Page 25. Update: I see why, if $\displaystyle \lim _{x \to \infty} \frac{y(x)}{x} = A$ for some constant $A$, and $\dfrac{y(x)}{x}$ is decreasing then $y(x) - Ax > 0 $, but I still do not see why $A$ has to be a constant (why cannot that limit blow up to infinity?) or why it is the case that$$ y(x) - Ax \leq x \int_x^\infty \frac{w(t)}{t^2} \,\mathrm{d}t \leq \alpha e^{-\frac{x^2}{2}}?$$ Edit: Bounty added.
It is written that $y$ is convex, so $y'$ is increasing; moreover $y'(0)=0$. So $y'$ has a limit and this limit is $\geq 0$. Moreover, if this limit is $0$, then $y'(x)=0$ for all $x$, so $y''=0$, which leads to $y(x)-xy'(x)=y(x)=0$. But $\alpha=y(0)>0$, so there is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Weak$^*$ convergence on dense subspace of Hilbert space Let $H$ be a separable infinite dimensional Hilbert space over $\mathbb{C}$ Let $V$ be a dense subspace of $H$ Let $\{T_n\}_{n \in \mathbb{N}} \subset H^*$ be a sequence of continuous linear functionals such that $$ \forall v \in V: \lim_{n \to \infty} T_n(v) = 0 $$ I would like to know if it is true that $T_n \stackrel{\ast}{\rightharpoonup} 0$ i.e. $$ \forall w \in H: \lim_{n \to \infty} T_n(w) = 0 $$
No. Let $H=\ell^2(\mathbb N)$, and $V$ the subspace of sequences with finitely many nonzero elements. Denote by $\{e_n\}$ the canonical basis, and let $$ T_n(x)=n\,x_n. $$ Then, for any $x\in V$, eventually $x_n=0$, so $T_n(x)\to0$. On the other hand, if $x=(1/n)_n$, then $T_n(x)=1$ for all $n$. The assertion becomes true if the norms of the $T_n$ are bounded. If $\|T_n\|\leq c$ for all $n$, then given $\varepsilon>0$ and $w\in H$ choose $v\in V$ with $\|v-w\|<\varepsilon$. Then $$ |T_n(w)|\leq|T_n(v)|+|T_n(v-w)|\leq|T_n(v)|+c\varepsilon. $$ Thus $\limsup_n|T_n(w)|\leq c\varepsilon$ for all $\varepsilon>0$, which shows that the limit exists and is zero.
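A finite truncation gives a concrete numeric picture of this counterexample; representing sequences as lists is my own shortcut, not part of the argument:

```python
# T_n(x) = n * x_n, with sequences stored as finite lists (zero beyond the end)
def T(n, x):
    return n * (x[n - 1] if n - 1 < len(x) else 0.0)

finitely_supported = [3.0, -1.0, 2.0]           # an element of the dense subspace V
harmonic = [1.0 / k for k in range(1, 200)]     # x = (1/n), in l^2 but not in V

# T_n -> 0 on the finitely supported vector (eventually exactly zero) ...
assert all(T(n, finitely_supported) == 0.0 for n in range(4, 100))
# ... but T_n(x) = n * (1/n) = 1 for every n on the harmonic sequence
assert all(abs(T(n, harmonic) - 1.0) < 1e-12 for n in range(1, 200))
```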
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about Galois Theory. Extension of a field of odd characteristic. Let $F$ be a field of characteristic $\ne 2$, and let $K$ be an extension of $F$ with $[K:F]=2$. Show that $K = F(\sqrt{a})$ for $a \in F$; that is, $K = F(\alpha)$ with $\alpha^{2}=a$. Moreover, show that $K$ is Galois over $F$. My doubt is in the first part. Take $\alpha \in K\setminus F$. Then $\lbrace 1, \alpha, \alpha^{2} \rbrace$ is linearly dependent over $F$, so $\alpha^{2} + p\alpha + q = 0$ with $p,q \in F$ ($1$ and $\alpha$ are linearly independent over $F$, since $\alpha \not\in F$). Completing the square, we have $$\left(\alpha + \frac{p}{2}\right)^{2} = \frac{p^{2}}{4} - q$$ because char$(F) \neq 2$. Let $\displaystyle a = \frac{p^{2}}{4} - q$, so $\displaystyle \sqrt{a}=\alpha + \frac{p}{2}$ and $\sqrt{a} \not\in F$. Here is my doubt: I know $K/F$ is finite, so $K$ is finitely generated, but why does $\sqrt{a} \not\in F$ ensure that $K=F(\sqrt{a})$?
$$F\subsetneq F[\sqrt{a}]\subseteq K$$ What can you say about the degrees of extensions of $F[\sqrt{a}]/F$, $K/F[\sqrt{a}]$ and $K/F$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Two approaches to the method of integration by substitution I came across two approaches to the method of integration by substitution (in two different books). Approach I Let $I=\int f(\phi(x))\phi'(x) dx$ Let $z=\phi(x)$ $\therefore \phi'(x)dx=dz$ $\therefore I=\int f(z)dz$ Approach II Let $I=\int f(x) dx$ Let $x=\phi(z)$ $dx=\phi'(z) dz$ $\therefore I=\int f(\phi(z))\phi'(z) dz$ My problem: While I can understand Approach I, I cannot understand Approach II. What is the difference between the two approaches? What is the difference in their applicability and usage? I am very confused. Please help.
A concrete example of Approach II may be something like $\int\frac{1}{1+\sqrt x}\,\mathrm{d}x$, where you make the substitution $x=z^2$ in order to get rid of the square root. In this case our $\phi(z)=z^2$ and $\mathrm{d}x=\phi'(z)\,\mathrm{d}z=2z\,\mathrm{d}z$; this makes the integral solvable by some trivial algebra, and it is already completely in terms of $z$ without any extra algebraic manipulation. Approach I, on the other hand, notices that the derivative of an inner function appears as a factor on the outside, as in $\int 2x\sin x^2\,\mathrm{d}x$, where one makes the substitution $z=x^2$. Both of these are ways to reverse the chain rule; as you may recall, $(f(g(x)))'=f'(g(x))g'(x)$. Approach I reverses the chain rule quite explicitly, while Approach II does the same thing in a different manner.
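Both worked examples can be sanity-checked numerically on definite versions of the integrals; the integration limits below are my own choice:

```python
import math

def midpoint(f, a, b, m=200000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

# x = z^2 turns  int 1/(1+sqrt(x)) dx on [0, 4]  into  int 2z/(1+z) dz on [0, 2]
lhs1 = midpoint(lambda x: 1 / (1 + math.sqrt(x)), 0, 4)
rhs1 = midpoint(lambda z: 2 * z / (1 + z), 0, 2)

# z = x^2 turns  int 2x*sin(x^2) dx on [0, 1.3]  into  int sin(z) dz on [0, 1.69]
lhs2 = midpoint(lambda x: 2 * x * math.sin(x * x), 0, 1.3)
rhs2 = midpoint(math.sin, 0, 1.3 ** 2)

assert abs(lhs1 - rhs1) < 1e-4
assert abs(lhs2 - rhs2) < 1e-8
```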
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solution to second order linear differential equation with only second order differential has 2 trial solutions? $$ m\frac {d^{2}y}{dx^{2}} = 1 $$ The homogeneous linear equation for this ODE is $$ m\frac {d^{2}y}{dx^{2}} = 0 $$ The trial solution is $Ae^{kx}$, but clearly in this case $Bx+C$ is a trial solution that works. What is the logic behind having 2 trial solutions only in this case? For every other problem I only guess the exponential solution.
The solution $Bx+C$ is a limit for the combination of the solutions $Ae^{\pm \omega\,x}$, when $\omega \,\to \,0$. In fact, when you consider the general linear 2nd order ODE $$ m{{d^2 y} \over {dt^2 }} + r{{dy} \over {dt}} + ky = 0 $$ and write the general solution to it, in case of an under-damped system, as $$ \eqalign{ & f(t) = c_{\,1} e^{\,\rho \,t + i\,\omega \,t} + c_{\,2} e^{\,\rho \,t - i\,\omega \,t} = \left( {c_{\,1} e^{\,i\,\omega \,t} + c_{\,2} e^{\, - i\,\omega \,t} } \right)e^{\,\rho \,t} = \cr & = \left( {a\cos \,\left( {\omega \,t} \right) + b\sin \,\left( {\omega \,t} \right)} \right)e^{\,\rho \,t} \cr} $$ where $$ \omega = \sqrt {k/m - \left( {r/\left( {2m} \right)} \right)^2 } \quad \quad \rho = -r/\left( {2m} \right) $$ and impose the initial conditions, for instance for $f(0)$ and $f'(0)$, you get $$ \left\{ \matrix{ f(0) = a \hfill \cr f'(0) = \,\,b\omega + \rho a\quad \Rightarrow \quad b = \;{1 \over \omega }\left( {f'(0) - \,\,\rho f(0)} \right) \hfill \cr} \right. $$ so $$ f(t) = \left( {f(0)\cos \,\left( {\omega \,t} \right) + {1 \over \omega }\left( {f'(0) - \,\,\rho f(0)} \right)\sin \,\left( {\omega \,t} \right)} \right)e^{\,\rho \,t} $$ Now, if the damping approaches the critical value, that is $\omega \to 0$, then $$ \bbox[lightyellow] { \mathop {\lim }\limits_{\omega \, \to \,0} f(t) = \left( {f(0) + \left( {f'(0) - \,\,\rho f(0)} \right)t} \right)e^{\,\rho \,t} }$$ and when also $\rho$ (that is $r$) approaches $0$ you get the $Bt+C$. Same if considering an over-damped system as explained in this related post.
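The boxed limit can be confirmed numerically; a small sketch with parameters of my own choosing:

```python
import math

f0, fp0, rho, t = 1.0, 0.5, -0.3, 2.0   # f(0), f'(0), rho, evaluation time

def under_damped(omega):
    # (f(0) cos(wt) + (f'(0) - rho f(0))/w * sin(wt)) e^{rho t}
    return (f0 * math.cos(omega * t)
            + (fp0 - rho * f0) / omega * math.sin(omega * t)) * math.exp(rho * t)

# the claimed w -> 0 limit: (f(0) + (f'(0) - rho f(0)) t) e^{rho t}
critical = (f0 + (fp0 - rho * f0) * t) * math.exp(rho * t)

assert abs(under_damped(1e-3) - critical) < 1e-3   # already close for small w
assert abs(under_damped(1e-6) - critical) < 1e-9   # and much closer as w -> 0
```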
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Taylor expansion and Distribution Let $u(x)$ be the step function and $p_u(x)$ be the distribution defined by $$\forall \varphi \in D, \langle p_u , \varphi \rangle = \lim_{\epsilon \to 0} \left (\varphi(0) \ln(\epsilon) + \int^{+ \infty } _ {\epsilon} \frac{\varphi(x)}{x} dx \right )$$ Using a Taylor expansion of $\varphi(\epsilon)$, show that (in the sense of distributions) $$(u(x) \ln(x))'=P_u $$ From previous post part (a) (asked in another post): Show that, in the sense of distributions, we have $\forall \varphi \in D$ $$ \langle (u(x) \ln(x))', \varphi \rangle = \lim_{\epsilon \to 0} \left ( \varphi(\epsilon)\ln (\epsilon ) + \int^{+ \infty}_{\epsilon} \frac{\varphi(x)}{x} dx \right ) $$ Attempt Taylor expansion of $\varphi(\epsilon)$ $$\varphi(\epsilon) = \sum^{n}_{k=0} \frac{\varphi^{(k)}(\epsilon_0)}{k!} (\epsilon- \epsilon_0)^k $$ Differentiating $u(x) \ln(x)$ in parts $$ (u(x) \ln(x))'= u'(x)\ln(x)+u(x) \frac{1}{x}= \delta(x)\ln(x)+ u(x)/x$$ I am guessing that $P_u =\langle u,\varphi \rangle$ $$\begin{aligned} \langle u ,\varphi\rangle = \int^{+\infty}_{0} \varphi(x)dx = \int^{\infty}_{0}\sum^{n}_{k=0} \frac{\varphi^{(k)}(\epsilon_0)}{k!} (\epsilon- \epsilon_0)^k d\epsilon \end{aligned}$$ Kind of lost at this point; I can't thread the needle and would appreciate a nudge towards the right direction.
Since you already have the "part (a)", this is much simpler than what you are trying to do. In short, we want to "replace" in the limit the $\varphi(\epsilon)$ by $\varphi(0)$. To do this, we only need to prove that $\lim_{\epsilon\to 0}(\varphi(0)\ln(\epsilon) - \varphi(\epsilon)\ln(\epsilon)) = 0$. In more details, we have $\varphi(\epsilon) = \varphi(0) + \mathcal O(\epsilon)$, so $\varphi(0)\ln(\epsilon) - \varphi(\epsilon)\ln(\epsilon) = \mathcal O(\epsilon)\ln(\epsilon) = o(1)$, which proves the limit claimed just before. Then, according to part (a), \begin{align*} \langle(u\ln)',\varphi\rangle &=\lim_{\epsilon\to 0}\left( \varphi(\epsilon)\ln(\epsilon) +\int_\epsilon^{+\infty} \frac{\varphi(x)}x \,\mathrm d x\right)\\ &=\lim_{\epsilon\to 0}\left( \varphi(0)\ln(\epsilon) +\int_\epsilon^{+\infty} \frac{\varphi(x)}x \,\mathrm d x + (\varphi(\epsilon)\ln(\epsilon)-\varphi(0)\ln(\epsilon))\right) \end{align*} and according to the previous claim, this is equal to \begin{align*} \langle(u\ln)',\varphi\rangle &=\lim_{\epsilon\to 0}\left( \varphi(0)\ln(\epsilon) +\int_\epsilon^{+\infty} \frac{\varphi(x)}x \,\mathrm d x\right)+0 \end{align*} which is by definition $\langle p_u,\varphi\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Group size as a function of presentation length? Given a presentation of a group, together with the promise that the group is finite, is there a computable upper bound on the size of the group? Edited to add: by "presentation" we mean a set of generators and relations. See e.g. Wikipedia. Define the length of a presentation as the number of generators plus the length of each of the relations in terms of the number of letters. For example, the standard presentation of the cyclic group on $N$ elements has length $N+1$. It's easy to see that the group can be exponentially larger than its presentation. For example, the cyclic group of size $2^n$ has the following presentation of size about $4n$: $$ \mathbb Z_{2^n} = \left\langle a_1, \ldots a_n | a_1a_1 = a_2, a_2a_2 = a_3, \ldots a_{n-1}a_{n-1} = a_n, a_na_n = 1\right\rangle. $$ It's plausible to conjecture that exponential is as good as you can do. So a good subquestion is: Is there a family of finite group presentations for which the size of the group grows super-exponentially in the length of the description of the presentation?
I found a paper here: https://www.math.auckland.ac.nz/~obrien/research/an-sn-present.pdf in which the authors prove that the symmetric group $S_n$ (of order $n!$) has a presentation of length $O(\log n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $F$ is a $1$-to-$1$ immersion and is proper then $F$ is an imbedding Let $F:N\rightarrow M$ be a one-to-one immersion which is proper, i.e. the inverse image of any compact set is compact. Show that $F$ is an imbedding and that its image is a closed regular submanifold of $M$ and conversely. This is an exercise from Boothby, An Introduction to Differentiable Manifolds, p. 81, ex. 6. The following theorem says something very similar to what the exercise asks: if $F:N\rightarrow M$ is a one-to-one immersion and $N$ is compact, then $F$ is an imbedding and $F(N)$ is a regular submanifold. Thus a submanifold of $M$, if compact in $M$, is regular. Well, this theorem states exactly what I want to show, but my difficulty is: how does changing the hypothesis of $N$ being compact to $F$ being proper still lead to the desired result?
Assume that it is not an embedding. For $p\in f(N)$, consider $$ f(N)\ \bigcap\ B\bigg(p,\frac{1}{n}\bigg) \supseteq U\cup \{p_n\}$$ where $U$ is homeomorphic to a ${\rm dim}\ N$-dimensional open ball. Hence $p_n\rightarrow p$, so that $\{p_1,p_2,\cdots \}\cup \overline{B}(p,\varepsilon)$ is compact. Its preimage contains $U'$ and infinitely many points $p_n'\in f^{-1}(p_n)$, where $U'$ is homeomorphic to a ${\rm dim}\ N$-dimensional open ball. Since $f$ is proper, this preimage is compact, so after passing to a subsequence $p_n'$ converges to some $q$, which is not in $U'$. Then $f(p_n')\rightarrow f(q)\neq p$. It is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2756970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to see whether numbers are distributed "evenly" ([1,2,18,35,36]) or "cluttered to one side" ([1,2,3,30,31], [7,9,17,16,36])? I have a set of 5 integer numbers {1, 23, 17, 33, 35}. Elements can take values only from [1..36], and happen only once within the set. What math can I use to understand whether the numbers are distributed "evenly" (means very symmetric with respect to 18 - like [1,2,18,35,36]) or "cluttered to one side" ([1,2,3,30,31], [7,9,17,16,36]) within a single given set of 5 numbers? "Cluttered to the left" means there are more small numbers - below 18 (say 3, 4, 5 numbers are below 18). I need to analyze many such sets (assigning an "evenly"/"cluttered" value to each) and then understand what happens more often. Besides, such an indicator must show * *Numbers tend to be cluttered on the left or on the right ([1,2,3,30,31], [7,9,17,16,36]). *Numbers tend to be close to 18 [16,15,18,19,20] I think of variance and standard deviation, but I am not sure - maybe there are better applicable or more advanced indicators/analysis methods. P.S. Seems standard deviation is not helping, or I cannot understand how to use it: * *std([1,2,18,35,36]) = 15.21315220458929 ("evenly" distributed) *std([1,2,3,30,31]) = 13.97998569384104 ("cluttered/skewed" to the left) *std([7,9,16,17,36]) = 10.25670512396647 ("cluttered/skewed" to the left) *std([1,30,31,32,33]) = 12.24091499847948 ("cluttered/skewed" to the right) Besides, non-parametric skew can be used - it is within [-1..1] and is zero if values are symmetric with respect to the "middle".
An easy, fast way to check: Order $x_1 < x_2 < x_3 < x_4 < x_5$, then consider \begin{equation} \sum_{i=1}^5\left(|x_{i}| - |37-x_i|\right) \end{equation} If they are 'evenly distributed', this sum is close to 0. The worst case is $\pm 155$. You can set a threshold somewhere in between.
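Applied to the example sets from the question (the function name is mine; since every entry lies in $1..36$, each summand simplifies to $2x_i-37$):

```python
def skew_score(xs):
    # sum of (|x_i| - |37 - x_i|); with x_i in 1..36 this equals sum of (2*x_i - 37)
    return sum(abs(x) - abs(37 - x) for x in xs)

assert skew_score([1, 2, 18, 35, 36]) == -1     # nearly symmetric
assert skew_score([1, 2, 3, 30, 31]) == -51     # cluttered to the left
assert skew_score([7, 9, 16, 17, 36]) == -15    # mildly cluttered to the left
assert skew_score([1, 30, 31, 32, 33]) == 69    # cluttered to the right
assert skew_score([32, 33, 34, 35, 36]) == 155  # the extreme case, +-155
```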
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inequality between two functions I have a two functions defined for $x > 1$, and $c \in (0,1)$: $$ f(x) = 1-\exp\left(-\frac{c}{x^2} \right), $$ and $$ g(x) = \exp\left(-\frac{x}{c} \right). $$ From graphical tool ( https://www.desmos.com/calculator/hr8n8kkpym ), I know $f(x) > g(x)$. How can I prove this inequality analytically?
Calling $$ \left\{ \begin{array}{rcl} u & = & 1-e^{-\frac{c}{x^{2}}}\\ v & = & e^{-\frac{x}{c}} \end{array}\right. (1) $$ we have $$ \left\{ \begin{array}{rcl} \log(u) & = & \log(1-e^{-\frac{c}{x^{2}}})\\ \log(v) & = & -\frac{x}{c} \end{array}\right. (2) $$ and also $$ \left\{ \begin{array}{rcl} \frac{d}{dx}\log(u) & = & -\frac{1}{(e^{\frac{c}{x^{2}}}-1)x^{3}}\\ \frac{d}{dx}\log(v) & = & -\frac{1}{c} \end{array}\right. (3) $$ and for $x=1$ we have $$ -\frac{1}{e^{c}-1}>-\frac{1}{c} $$ and for $x>1$ and $c\in(0,1)$ we have $$ -\frac{1}{(e^{\frac{c}{x^{2}}}-1)x^{3}}>-\frac{1}{c} $$ Now, summing up: by (3), if $\log(1-e^{-c})>-\frac{1}{c}$ then $\log(u)$ and $\log(v)$ do not intersect, and $\log(u)>\log(v)$ will remain true all along $x\ge1$. Concluding, $\log$ is a strictly monotonic increasing function, hence $u > v$.
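Independently of the proof, a brute-force grid check (grid resolution and sample values of $c$ are my own choice) supports the inequality:

```python
import math

def f(x, c):
    return 1 - math.exp(-c / x ** 2)

def g(x, c):
    return math.exp(-x / c)

# sample c in (0, 1) and x on a fine grid over (1, 21]
holds = all(f(x, c) > g(x, c)
            for c in (0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99)
            for x in (1 + k / 100 for k in range(1, 2001)))
assert holds
```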
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Complex numbers polar form change I have a pretty straightforward question. Change $z = (-1+i\sqrt3)^{2017}$ to $a+bi$ $\;$ form & polar form. Where $i = \sqrt{-1}$. So I want to change it to $z = re^{iv}$. $r$ is easy to calculate: $r = \sqrt4 = 2$. However the angle is where I'm struggling. I know all the "standard" angles with components like: $\frac{\sqrt3}2, \frac12, \frac1{\sqrt2}$. However now we have $\frac{\sqrt3}{-1}$. How do you tackle this type of question?
Hint: The point $-1+\sqrt3i$ makes an equilateral triangle together with $0$ and $-2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Strong law of large numbers for the conditional expectation of functions of random vectors Consider a collection of 0-1 random variables $Y_{n,N}$, for all $n$ and $N$. The random variable $Y_{n,N}$ is a deterministic function of the collection of random variables in $\mathcal{F}_n = \{(U_k)_{k=1,\ldots,n}, (V_k)_{k=1,\ldots,n} \}$ where the $U_k$'s and the $V_k$'s are all mutually independent and uniformly distributed over $[0,1]$. Therefore, $Y_{n,N}=\mathbb{E}[Y_{n,N} | \mathcal{F}_n ]$. Let me denote by $\mathcal{F}_n\setminus V_n$ the set $\mathcal{F}_n$ where $V_n$ has been removed. My question is the following. Assuming that $$ \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^N Y_{n,N} $$ exists almost surely and that $$ \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^N \mathbb{E}[Y_{n,N} | \mathcal{F}_n\setminus V_n] = Z $$ almost surely (where $Z$ is some other random variable), is it true that necessarily $$ \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^N Y_{n,N} = Z $$ almost surely? Why? Though different (because I am assuming the existence of the first limit), this question is related to the following questions: Strong law of large numbers for function of random vector: can we apply it for a component only? and Law of large numbers with one dependency
With the notations of the opening post, let $\mathcal G_n:=\mathcal F_{n+1}\setminus V_{n+1}$ and $D_{n,N}:=Y_{n,N}-\mathbb{E}\left[Y_{n,N} | \mathcal{F}_n\setminus V_n\right]$. Then $D_{n,N}$ is $\mathcal G_n$-measurable, $\mathbb{E}\left[D_{n,N} | \mathcal{G}_{n-1} \right]=0$ and $\left\lvert D_{n,N}\right\rvert\leqslant 2$, hence by the Azuma–Hoeffding inequality, we derive that $$ \mathbb P\left\{\frac 1N\left\lvert \sum_{n=1}^ND_{n,N}\right\rvert \gt \varepsilon\right\}\leqslant 2\exp\left(-\frac{N\varepsilon^2}{8}\right) $$ hence an application of the Borel-Cantelli lemma shows that $\frac 1N \sum_{n=1}^ND_{n,N}\to 0$ almost surely. Combined with the assumed almost sure convergence of $\frac 1N\sum_{n=1}^N \mathbb{E}[Y_{n,N} | \mathcal{F}_n\setminus V_n]$ to $Z$, this gives $\frac 1N\sum_{n=1}^N Y_{n,N}\to Z$ almost surely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does $x \in A$ and $x \notin B$ imply that $x \notin (A \cap B)$? For any two sets $A$ and $B$, is it true that if $x \in A$ and $x \notin B$, then $x \notin (A \cap B)$?
It's true that $x \in A \land x \not\in B \implies x \not\in A \cap B$ (indeed, $x \notin B$ alone suffices), but its converse is false: if $x \not\in A \cap B$, it's possible that $x \notin A \land x \notin B$, and this renders $x \in A \land x \not\in B$ false.
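A concrete check with small finite sets (any example of this shape works):

```python
A = {1, 2}
B = {2, 3}

x = 1
assert x in A and x not in B       # hypothesis
assert x not in (A & B)            # conclusion: x is not in the intersection

# the converse fails: 4 is not in A & B, yet "4 in A and 4 not in B" is false
y = 4
assert y not in (A & B)
assert not (y in A and y not in B)
```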
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is any extension ring $S \supset R$ an $R$-algebra? Is any extension ring $S \supset R$ an $R$-algebra? In our lecture notes an $R$-algebra $A$ is defined as follows: An $R$-algebra $A$ is a ring $A$, which is also an $R$-module, satisfying the condition $$a(xy)=(ax)y=x(ay),\ a \in R,\ x,y \in A.$$ It is clear that $S$ is a ring along with an $R$-module with the usual addition and scalar multiplication. But how does the scalar multiplication become bilinear unless $R \subset Z(S)$, where $Z(S)$ denotes the center of $S$? I think the assertion made in our lecture notes is false. Please check it. Thank you in advance.
No: take for example $\mathbb C \subseteq \mathbb H$, the complex numbers in the quaternions. (This is an example of what Pedro was getting at in the comments.) The axioms of an $R$-algebra $A$ require that $R$ is contained in the center of $A$, which is not the case for this example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{1}{n+1}\le \log\left(1+\frac{1}{n}\right)\le \frac{1}{n} ,\forall n\ge1$ if $\log x= \int_{1}^{x}\frac{dt}{t},x>0$ I know it is very simple but something is going amiss. We can see that $\log\left(1+\dfrac{1}{n}\right)=\displaystyle \int_{n}^{n+1}\dfrac{dt}{t}$. If $t \in[n,n+1]$, then $\dfrac{1}{n+1}\le\dfrac{1}{t}\le\dfrac{1}{n}$. I want to show that $\dfrac{1}{n+1}\le\displaystyle \int_{n}^{n+1}\dfrac{dt}{t}\le\dfrac{1}{n}$. Can I infer that directly? If so, what is the logic behind it? I was thinking of using $\displaystyle \Sigma_{i=1}^{n}$ and sandwiching $\displaystyle \int_{n}^{n+1}\dfrac{dt}{t}$ in between, but I am confused!
Yes, because $\frac{1}{n+1} \leq \frac{1}{t} \leq \frac{1}{n}$ for $t \in [n, n+1]$, so $$\int_{n}^{n+1} \frac{1}{n+1} \, dt \leq \int_{n}^{n+1} \frac{1}{t} \, dt \leq \int_{n}^{n+1} \frac{1}{n} \, dt$$ then your inequality follows.
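The sandwich can also be confirmed numerically over a large range of $n$ (a sanity check, not a proof):

```python
import math

for n in range(1, 10001):
    middle = math.log(1 + 1 / n)
    assert 1 / (n + 1) <= middle <= 1 / n, n
print("inequality holds for n = 1 .. 10000")
```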
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving for angle of hyperbolic triangle in Poincare disk I am working out an example problem trying to find the angles of a hyperbolic triangle in the Poincare disk model. I am getting inconsistent answers. For the sake of simplicity, I am using these coordinates for $\triangle OPQ$: $O(0,0), P(\frac{1}{2},0),$ and $Q(0,\frac{1}{2}$). I set up the problem in GeoGebra with the orange hyperbolic line graphed as the circle orthogonal to the unit disk through $P$ and $Q$. I graphed the tangent line to that circle and I see that it makes an angle of roughly $31^\circ$. To solve the problem analytically, my strategy is to find the hyperbolic distances of the legs and the hypotenuse, then solve for the angle using the hyperbolic law of cosines: $$\cos C= \frac{\cosh a \cosh b-\cosh c}{\sinh a \sinh b}$$ According to the notes I have, the hyperbolic distance between $P(x_1,y_1)$ and $Q(x_2,y_2)$ is given by the formula: $$d(P,Q)=\ln(\frac{u+v}{u-v})$$ where $u=(1-x_1 x_2-y_1 y_2)^2 +(x_1 y_2 - x_2 y_1)^2$ and $v=(x_1-x_2)^2 + (y_1-y_2)^2$ . Substituting into the formulas I get: $$\mathbf{legs}=\ln\left(\frac{1+\frac{1}{4}}{1-\frac{1}{4}}\right)=\ln\left(\frac{5}{3}\right)$$ $$\mathbf{hypotenuse}=\ln\left(\frac{\frac{5}{4}+\frac{1}{2}}{\frac{5}{4}-\frac{1}{2}}\right)=\ln\left(\frac{7}{3}\right)$$ $$\mathbf{\angle{OPQ}}=\arccos\left(\frac{\cosh(\ln(\frac{5}{3}))\cosh(\ln(\frac{7}{3}))-\cosh(\ln(\frac{5}{3}))}{\sinh(\ln(\frac{5}{3}))\sinh(\ln(\frac{7}{3}))}\right)\approx 31.788^\circ$$ Well that's weird. It's close but not close enough to be discounted as a rounding error. I decide to check the method by solving for $\angle{QOP}$ which I know ought to be a right angle. Using the above method, I get that $\angle{QOP}\approx 109.8^\circ$. So obviously the method is incorrect. What gives? Can anyone spot an error in the reasoning or suggest an alternate method? 
Edit: I have also searched for an alternate distance formula to use, but many of them seem to be tailored for the upper half-plane model or they involve calculations using complex numbers. Since both formulas were given in the same packet of notes from a university website, I expected to get a correct answer when using them.
For points $P = (a,b)$ and $Q=(c,d)$ in the Poincaré disk model, suppose that the hyperbolic distance, $\delta$, between them is given by a formula of the form $$\delta = \ln \frac{u+v}{u-v} \tag{0}$$ for some expressions $u$ and $v$. Then $$\cosh \delta = \frac12\left(e^\delta + e^{-\delta}\right) = \frac12\left(\frac{u+v}{u-v}+\frac{u-v}{u+v}\right) = \frac{u^2+v^2}{u^2-v^2} \tag{1}$$ According to Wikipedia's "Poincaré Disk Model" entry (and assuming this source is more-authoritative than the university notes you reference), we should have (for a model circle of radius $1$, and with Euclidean distances $p := |OP|$, $q := |OQ|$, $r := |PQ|$) $$\cosh \delta = 1 + \frac{2r^2}{\left(1-p^2\right)\left(1-q^2\right)} = \frac{\left(\;(1-p^2)(1-q^2)+r^2\;\right) + r^2}{\left(\;(1-p^2)(1-q^2)+r^2\;\right) - r^2} \tag{2}$$ Thus, matching $(2)$ with $(1)$, we can take $$\begin{align} u^2 &= (1-p^2)(1-q^2)+r^2 = ( 1 - (ac+bd))^2+(ad-bc)^2 \\ v^2 &=r^2 \phantom{+(1-p^2)(1-q^2)\;\,}= (a-c)^2+(b-d)^2 \end{align}\tag{$\star$}$$ The reader can verify that these values provide the expected angle measures at $P$ and at the origin. So, there's a typo in the university notes. Either the author omitted the square roots (or squares) in the definition of $u$ and $v$, or else the author wrote "$\ln$" instead of "$\operatorname{arccosh}$" in the distance formula, accidentally (and, perhaps, understandably) mixing similar-looking elements from the identity $$\ln \frac{u+v}{u-v} = \operatorname{arccosh}\frac{u^2+v^2}{u^2-v^2}$$
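The corrected formula $(\star)$ can be checked numerically for the triangle in the question ($O=(0,0)$, $P=(\tfrac12,0)$, $Q=(0,\tfrac12)$): the angle at the origin comes out to $90^\circ$ and the angle at $P$ to about $30.96^\circ$, consistent with the GeoGebra measurement of roughly $31^\circ$.

```python
import math

def hyp_dist(P, Q):
    # Poincaré disk distance via cosh(d) = 1 + 2 r^2 / ((1 - p^2)(1 - q^2))
    a, b = P
    c, d = Q
    p2, q2 = a * a + b * b, c * c + d * d
    r2 = (a - c) ** 2 + (b - d) ** 2
    return math.acosh(1 + 2 * r2 / ((1 - p2) * (1 - q2)))

def angle(at, other1, other2):
    # hyperbolic law of cosines for the angle at vertex `at`
    a = hyp_dist(at, other1)
    b = hyp_dist(at, other2)
    c = hyp_dist(other1, other2)
    cosC = (math.cosh(a) * math.cosh(b) - math.cosh(c)) / (math.sinh(a) * math.sinh(b))
    return math.degrees(math.acos(cosC))

O, P, Q = (0, 0), (0.5, 0), (0, 0.5)
print(angle(O, P, Q))   # -> approximately 90 (right angle at the origin)
print(angle(P, O, Q))   # -> approximately 30.96
```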
{ "language": "en", "url": "https://math.stackexchange.com/questions/2757958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Determine $p$ such that $x^2 \equiv a \pmod{p}$ using Legendre symbols (for specific values of $a$) Determine $p$ such that $x^2 \equiv a \pmod{p}$ has a solution (where $p$ is a prime). How would you approach this for "bigger" numbers, if you wanted to solve it using Legendre symbols and their properties? E.g. if $a = -2$ you could break the case into $(\frac{-1}{p})(\frac{2}{p})=1$ and solve by applying the explicit formulas for $(\frac{-1}{p})$ and $(\frac{2}{p})$ (Legendre symbols from number theory). How would you approach this if $a$ is, for example, $6$? Then you would need to solve $(\frac{3}{p})(\frac{2}{p})=1$, which yields two options: either both symbols are $1$ or both are $-1$. But how would I reach anything useful from $(\frac{3}{p})=1$? EDIT: especially, how to deal with even bigger ones, such as $(\frac{7}{p})$? How do I even know I covered all the options? Thanks in advance!
I'll follow up on GNU Supporter's comment, with your given explicit example. If we have $(\frac{3}{p})=1$, then the law of quadratic reciprocity says * *if $p\equiv 1 \pmod{4}$, then $(\frac{p}{3})=1$, so $p \equiv 1\pmod{3}$ (a priori $p =3$ case is easy to deal with separately). Combining we have $p \equiv 1 \pmod{12}$. *if $p \equiv 3\pmod{4}$, then $(\frac{p}{3}) = -1$, so $p \equiv 2 \pmod{3}$. Combining we have $p \equiv 11 \pmod{12}$. You can deal with the other cases similarly.
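A brute-force check of the conclusion (for an odd prime $p \ne 3$, $3$ is a quadratic residue mod $p$ exactly when $p \equiv \pm 1 \pmod{12}$):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_qr(a, p):
    # does x^2 ≡ a (mod p) have a solution?
    return any((x * x - a) % p == 0 for x in range(p))

for p in range(5, 500):
    if not is_prime(p):
        continue
    assert is_qr(3, p) == (p % 12 in (1, 11)), p
print("verified for all primes 5 <= p < 500")
```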
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What does it mean when I say that addition/multiplication for an equivalence relation is well defined? I have trouble understanding this concept. Why is it necessary to prove that addition or multiplication is well defined on equivalence classes? My understanding of an equivalence relation is that it must be reflexive, symmetric and transitive. Doesn't proving that automatically imply that addition and multiplication can be done? Why the additional need to prove that it is 'well defined'? Apologies if this question is too trivial; my understanding of this topic is limited.
Just because a relation is an equivalence, this doesn't mean it has to be "nice" with respect to any operation you'd like to put on its classes. To see this, look at a non-example of something being well-defined. Let $V=\mathbb{R}^2$ be the plane with its usual vector space structure. Put a relation on $V$ by defining $u \sim v$ if $u = cv$ for some non-zero scalar $c$. This is an equivalence relation on $V$. The zero vector is in a class by itself, and the other classes are the vectors that form parallel lines through the origin (with the zero vector removed). Since classes are naturally represented by vectors, you could attempt to define a natural addition on $V/\sim$ by $$ [u] + [v] = [u+v]. $$ But this fails to be well-defined, even though the relation in play is an equivalence. The problem is that two different people could add the SAME TWO classes but get different answers. That's no good. For example, Alice wants to add the class corresponding to the $x$-axis (with zero removed) to itself. This class is $[(1,0)]$, for example. So, Alice does $$[(1,0)] + [(1,0)] = [(2,0)] = [(1,0)].$$ Bob will do the exact same class addition, but recognize that $[(1,0)]$ is also equal to $[(-1,0)]$. Then Bob gets $$[(1,0)] + [(-1,0)] = [(0,0)] \neq [(1,0)].$$ Alice and Bob have added the exact same class to itself, but got conflicting answers.
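Alice and Bob's computation can be mimicked in code: represent each class by a chosen vector, add representatives, and observe that equivalent inputs can produce inequivalent outputs, so the attempted operation is not well defined.

```python
def equivalent(u, v):
    # u ~ v iff u = c*v for some nonzero scalar c (zero vector only ~ itself)
    if u == (0, 0) or v == (0, 0):
        return u == v
    # u = c*v for some c  <=>  the 2x2 determinant (cross product) vanishes
    return u[0] * v[1] - u[1] * v[0] == 0

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

# same class, two different representatives
r1 = (1, 0)
r2 = (-1, 0)
assert equivalent(r1, r2)

alice = add(r1, r1)   # (2, 0),  still ~ (1, 0)
bob = add(r1, r2)     # (0, 0),  not ~ (1, 0)
assert equivalent(alice, r1)
assert not equivalent(bob, r1)
```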
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 3 }
If $ u $ satisfies $ u_{t} = ku_{xx} $ then so does $ u_{\alpha, \beta, \gamma} $, provided $ \beta = \alpha^{2} $ If $ u $ satisfies the heat equation $ u_{t} = ku_{xx} $ then so does $ u_{\alpha, \beta, \gamma} $ ( where $ u_{\alpha, \beta, \gamma}(x,t) = \gamma u(\alpha x, \beta t) $), provided $ \beta = \alpha^{2} $. My attempt: $ \dfrac{\partial}{\partial t} u_{\alpha,\beta,\gamma}(x,t) = \gamma\beta\dfrac{\partial u}{\partial t}$ $ \dfrac{\partial^{2}}{\partial x^{2}} u_{\alpha,\beta,\gamma}(x,t) = \gamma\beta\dfrac{\partial^{2}u}{\partial x^{2}}$ And here I'm stuck.
Note that $$ \partial_{t}u_{\alpha,\beta,\gamma}=\partial_t \gamma u(\alpha x,\beta t)\\ =\gamma \beta u_t(\alpha x,\beta t) $$ and $$ \partial_{xx}u_{\alpha,\beta,\gamma}=\partial_{xx} \gamma u(\alpha x,\beta t)\\ =\gamma \alpha^2 u_{xx}(\alpha x,\beta t) $$ so $$ \partial_{t}u_{\alpha,\beta,\gamma}-k\partial_{xx}u_{\alpha,\beta,\gamma}= \gamma \beta u_t(\alpha x,\beta t)-k\gamma \alpha^2 u_{xx}(\alpha x,\beta t)\\ =\gamma\left( \beta u_t(\alpha x,\beta t)-k \alpha^2 u_{xx}(\alpha x,\beta t) \right)\\ \stackrel{\alpha^2=\beta}{=} \gamma\beta\left( u_t(\alpha x,\beta t)-k u_{xx}(\alpha x,\beta t) \right)\\ =0 $$ since $u$ solves the heat equation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does $S \otimes_R A$ become an $S$-algebra? Let $f : R \longrightarrow S$ be a ring homomorphism and let $A$ be an $R$-algebra; then in our lecture notes it has been stated that its scalar extension $S \otimes_R A$ is an $S$-algebra. How is that possible? I know that the scalar extension is an $S$-module given by the well defined operation $s(s_1 \otimes x):=ss_1 \otimes x$. Also I know that if $A$ and $B$ are two $R$-algebras then their tensor product $A \otimes B$ is again an $R$-algebra with the well-defined operation given by $(a \otimes b)(a' \otimes b'):=aa' \otimes bb'$. Now from these two facts how can I reach the desired conclusion? Please help me in this regard. Thank you in advance.
You have to be careful about actions being left or right if the rings are not commutative. In general, if you have an $A$-$B$-bimodule $M$ (i.e. left $A$-action and right $B$-action) and a $B$-$C$-bimodule $N$ (left $B$-action and right $C$-action), then the tensor product $M\otimes_B N$ is an $A$-$C$-bimodule: $$a(m\otimes n) = (am)\otimes n,\ (m\otimes n)c = m\otimes(nc)$$ and if $M$ and $N$ are moreover algebras, then the tensor product will again be an algebra. In your case you have $S\otimes_R A$. To make it into an $S$-module, you need to have * *left $S$-action on $S$ - this is just multiplication in $S$, *right $R$-action on $S$ - this is given by $f$, $s.r := sf(r)$, *left $R$-action on $A$ that you have since $A$ is an algebra. Algebra multiplication is defined as you wrote it. The left $S$-action is defined as $$s'(s\otimes a) = (s's)\otimes a.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Math. Logic in calculus Assume f : D -> S is a function, where D, S are subsets of R, both different from R. We know that the statement "for any A from D, f(A) belongs to S" is correct. I wonder whether this statement is correct as well: "for any A from R\D, f(A) does not belong to S". If it is correct, can we say that for any A from R\D, f(A) belongs to R\S?
As астон вілла олоф мэллбэрг stated, the best way to think of this conceptually is that $f(a)$ is just nonsense when $a$ is not in the domain of $f$. As such your second statement has no truth-value one way or the other. Set theory, usually considered the "foundation" of math, does, however, give such statements meaning (unfortunately, in my opinion). In particular, in set theory a function is usually modeled by a set of ordered pairs. Conceptually, the function $f:X\to Y$ is the set of ordered pairs $\{(x,y)\in X\times Y\mid f(x)=y\}$. $f(x)=y$ is then a shorthand for $(x,y)\in f$. Your first statement is then: $$\forall a\in D.\exists b\in S.(a,b)\in f$$ Your second statement is similarly: $$\forall a\in\mathbb R\setminus D.\exists b\in\mathbb R\setminus S.(a,b)\in f$$ $f$ not being defined outside of $D$ means there is no ordered pair $(a,b)\in f$ for which $a\notin D$. Thus the second statement is false: for every $a\in\mathbb R\setminus D$, $(a,b)\notin f$ no matter what $b$ is. To see why I view the above as unfortunate, consider the real number $\pi$. Is $\pi(3)=7$? Your first reaction is probably, "$\pi$ is not a function. This makes no sense." But $\pi(3)=7$ means $(3,7)\in\pi$ and this is a completely meaningful set-theoretic statement. Your second reaction (or an alternative first reaction) is probably, "this is false", but whether it's true or false depends on the precise definition of $\pi$ (and $3$ and $7$ and ordered pairs). For contrast, in most type theories the analogue to your second statement would be a type error and thus not a well-formed formula. It would thus not make sense to ask about its truth value any more than it would make sense to ask about the truth value of $\neg\land )x$. Similarly for $\pi(3)=7$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a complex binomial identity I would like to prove an identity: $$\binom{\alpha}{n} = \sum_{k=0}^n(-1)^k(k+1)\binom{\alpha + 2}{n-k}$$ Where $\alpha$ is complex. I have already found that if you have two sequences related by the identity $$b_n = \sum_{k=0}^n(-1)^{k}(1+k)a_{n-k}$$ you can write the generating function for $b_n$ (which I'll write as $B(z)$) in terms of the generating function for $a_n$ as follows: $$B(z) = \frac{A(z)}{(1+z)^2}$$ How do I prove this identity, using the above fact? Thanks in advance!
We apply the Cauchy product formula. It is convenient to use the coefficient of operator $[z^n]$ to denote the coefficient of $z^n$ in a series. This way we can write for instance \begin{align*} [z^n](1+z)^\alpha=\binom{\alpha}{n} \end{align*} We obtain for $\alpha\in\mathbb{C}$ and $n\in\mathbb{N}$: \begin{align*} \color{blue}{\sum_{k=0}^n}&\color{blue}{(-1)^k(k+1)\binom{\alpha+2}{n-k}}\\ &=\sum_{k=0}^n\left([z^k]\frac{1}{(1+z)^2}\right)\left([z^{n-k}](1+z)^{\alpha+2}\right)\tag{1}\\ &=[z^n](1+z)^{\alpha}\\ &\,\,\color{blue}{=\binom{\alpha}{n}} \end{align*} and the claim follows. Comment: * *In (1) we use the Cauchy product formula\begin{align*} A(z)&=\sum_{k=0}^\infty a_kz^k,\qquad B(z)=\sum_{j=0}^\infty b_jz^j\\ A(t)B(t)&=\sum_{n=0}^\infty\left(\sum_{{k+j=n}\atop{k,j\geq 0}}a_kb_j\right)t^n=\sum_{n=0}^\infty \left(\sum_{k=0}^n a_k b_{n-k}\right)t^n\\ &=\sum_{n=0}^\infty\sum_{k=0}^n \left([z^k]A(z)\right)\left([z^{n-k}]B(z)\right)t^n \end{align*} In the case above we have \begin{align*} A(z)&=\sum_{k=0}^\infty (-1)^k(k+1)z^k=\frac{1}{(1+z)^2}\\ B(z)&=\sum_{k=0}^\infty \binom{\alpha+2}{k}z^k=(1+z)^{\alpha+2} \end{align*}
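The identity can be spot-checked numerically for complex $\alpha$, using the generalized binomial coefficient $\binom{\alpha}{n}=\frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!}$:

```python
from math import factorial

def binom(alpha, n):
    # generalized binomial coefficient, valid for complex alpha
    num = 1
    for k in range(n):
        num *= alpha - k
    return num / factorial(n)

alpha = 0.5 + 0.3j
for n in range(8):
    lhs = binom(alpha, n)
    rhs = sum((-1) ** k * (k + 1) * binom(alpha + 2, n - k) for k in range(n + 1))
    assert abs(lhs - rhs) < 1e-12, n
print("identity verified for n = 0 .. 7")
```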
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
On a closed form for $\int_{-\infty}^\infty\frac{dx}{\left(1+x^2\right)^p}$ Consider the following function of a real variable $p$ , defined for $p>\frac{1}{2}$: $$I(p) = \int_{-\infty}^\infty\frac{dx}{\left(1+x^2\right)^p}$$ Playing around in Wolfram Alpha, I have conjectured that we have the following closed form: $$I(p) = \sqrt{\pi}\frac{\Gamma\left(p-\frac{1}{2}\right)}{\Gamma\left(p\right)}$$ This agrees with a number of known results, such as: $$I(1) = \pi$$ $$I(2) = \frac{\pi}{2}$$ $$I(3) = \frac{3\pi}{8}$$ It also agrees with $I\left(\frac{1}{2}\right)$ being divergent, which follows from the fact that the antiderivative of $\left(1+x^2\right)^{-\frac{1}{2}}$ is $\text{arsinh }(x)$ and the fact that $\lim\limits_{x\rightarrow\pm\infty}{\sinh(x)} = \pm\infty$. Furthermore, this closed form agrees with any non-integer $p\geq\frac{1}{2}$ that I have tried to evaluate, such as $I\left(\frac{3}{5}\right) = \sqrt{\pi}\frac{\Gamma\left(\frac{1}{10}\right)}{\Gamma\left(\frac{3}{5}\right)}$. However, I have not been able to prove this result. I'm thinking that perhaps the residue theorem might work, but I have no idea how to treat poles of a non-integer order in the denominator. Any ideas?
By substituting into the definition of the $\Gamma$-function, $$ \frac{1}{(1+x^2)^p} = \frac{1}{\Gamma(p)} \int_0^{\infty} \lambda^{p-1} e^{-\lambda(1+x^2)} \, d\lambda. $$ Interchanging the order of integration, $$ I(p) = \frac{1}{\Gamma(p)} \int_0^{\infty} \lambda^{p-1} e^{-\lambda} \left( \int_{-\infty}^{\infty}e^{-\lambda x^2} \, dx \right) d\lambda \\ = \frac{1}{\Gamma(p)} \int_0^{\infty} \lambda^{p-1} e^{-\lambda} \sqrt{\pi}\lambda^{-1/2} d\lambda = \frac{\sqrt{\pi}\Gamma(p-1/2)}{\Gamma(p)}. $$
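The closed form can also be confirmed numerically: substituting $x=\tan\theta$ turns the integral into $\int_{-\pi/2}^{\pi/2}\cos^{2p-2}\theta\,d\theta$, which Simpson's rule handles easily and which can be compared against $\sqrt{\pi}\,\Gamma(p-1/2)/\Gamma(p)$. (Real $p\ge 1$ here, to keep the transformed integrand bounded at the endpoints.)

```python
import math

def I_numeric(p, n=20000):
    # after x = tan(theta):  I(p) = integral of cos(theta)^(2p-2) over (-pi/2, pi/2)
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        theta = a + i * h
        c = max(math.cos(theta), 0.0)   # clamp tiny negative rounding at the endpoints
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        total += w * c ** (2 * p - 2)
    return total * h / 3                # composite Simpson's rule

def I_closed(p):
    return math.sqrt(math.pi) * math.gamma(p - 0.5) / math.gamma(p)

for p in [1, 1.5, 2, 3]:
    assert abs(I_numeric(p) - I_closed(p)) < 1e-6, p
```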
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to factor in the 'immediately prior to the administration of the next dose' statement in this question? For part b I have found that the stationary state is $a_{n}=d/k$. For part d), however, I am not sure how to take into account the statement 'immediately prior to the administration of the next dose'. Without that statement I would use $ d/k<1/2 $, substituting the stationary state $ d/k $ into $a_{n+1}$. However, I am really not sure how to take that statement into account.
The idea is that the amount in the bloodstream is slowly decreasing between doses, then jumps up when a dose is administered. $a_n$ is defined as the amount in the bloodstream just after a dose, so is at the peak of the step. The amount just before dose $n$ is $a_n-d$. If $a$ is the stationary value, you need to make sure $a-d \ge \frac 12$
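The problem statement itself is not shown above, but the stationary state $d/k$ from part (b) is consistent with a standard linear dosing model $a_{n+1} = (1-k)a_n + d$ (the amount decays by a fraction $k$ between doses, then a dose $d$ is added); the parameter values below are purely illustrative. Under that assumed model, a short simulation shows the post-dose peak approaching $d/k$ and the pre-dose trough approaching $d/k - d$:

```python
d, k = 1.0, 0.4           # assumed dose and decay fraction (illustrative values)
a = d                     # amount just after the first dose
for n in range(200):
    a = (1 - k) * a + d   # decay between doses, then administer the next dose

peak = a                  # just after a dose
trough = a - d            # immediately prior to the next dose
print(peak, trough)       # -> approximately d/k = 2.5 and d/k - d = 1.5
```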
{ "language": "en", "url": "https://math.stackexchange.com/questions/2758898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability that the two segments intersect P and Q are uniformly distributed in a square of side AB. What is the probability that segments AP and BQ intersect?
Suppose we start with $Q = (x, y)$. The admissible positions for $P$ will be in the triangle $BQR$ if $y < x$ or in the quadrilateral $BQRC$ if $y > x$. The areas are calculated from the sides and the heights, and $$p = \int_0^1 \int_0^x \frac {y (1 - x)} {2 x} dy dx + \int_0^1 \int_x^1 \left( 1 - \frac x {2 y} - \frac y 2 \right) dy dx = \frac 1 4.$$
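As a sanity check, the sum of the two double integrals can be estimated by Monte Carlo over the unit square (the integrand below combines the two regions $y<x$ and $y>x$); the result is close to $1/4$. The labels $B$, $R$, $C$ refer to a figure not reproduced here, so this only verifies the arithmetic of the final line, not the geometric setup.

```python
import random

random.seed(0)

def integrand(x, y):
    # combines the two regions of the answer: y < x (triangle) and y > x (quadrilateral)
    if y < x:
        return y * (1 - x) / (2 * x)
    if y == 0:               # measure-zero corner, avoid 0/0
        return 0.0
    return 1 - x / (2 * y) - y / 2

N = 400_000
p = sum(integrand(random.random(), random.random()) for _ in range(N)) / N
print(p)  # -> approximately 0.25
```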
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Distribution of minimum of random variable and its square Suppose $X \sim U[0,2]$ is a uniformly distributed random variable. What are the distributions of $\min\{X,X^2\}$ and $\max\{1,X\}?$ The density function of $X$ has to be $f(x)=\frac{1}{2}$ on $[0,2]$, and $$F_{X^2}(x)=P(X^2\leq x)=P(-\sqrt{x}\leq X \leq\sqrt{x})=F_{X}(\sqrt{x})-F_{X}(-\sqrt{x}).$$ No idea how to go further.
$$F_{\max\{1,X\}}(x)=P(\max\{1,X\}\le x) \tag{1}$$ $$=P(1 \le x \cap X \le x) \tag{2}$$ $$=P(1 \le x) \times P(X \le x) \tag{3}$$ $$=1_{1 \le x}(x) \times P(X \le x) \tag{4}$$ $$=1_{1 \le x}(x) \times F_X(x) \tag{5}$$ Remarks and explanations: $(1)$ Say $P_{X^2}(x)$ or $P(X^2 \le x)$ but not $P_{X^2}(X^2 \le x)$ $(2)$ The elder of two people is younger than a third person iff the third person is older than both of them. $(3)$ 1 and X are independent for any X! This is because constant or a.s. constant random variables are independent of any random variable, including themselves. This is because events of probability 0 or 1 are independent of any other event, including themselves. $(4)$ x isn't random. It is predetermined. So it either is or isn't greater than 1. Note that this and the next indicator functions are deterministic, i.e. functions of $x$ and not of $\omega$ $(5)$ If $x < 1$, then $\max\{1,X\} > x$: I'm almost sure of it. If $x \ge 1$, then making $X$ become $1$ if $X$ happens to fall below 1 doesn't make $X$ more or less likely to be $\le x$. $$F_{\min\{X^2,X\}}(x)=P(\min\{X^2,X\}\le x)$$ $$=P(X^2\le x \cup X\le x)$$ $$=P(X^2\le x) + P(X\le x) - P(X^2\le x \cap X\le x)$$ Now $P(X^2\le x \cap X\le x)$ $$ = P(X^2\le x \cap X\le x \cap X^2 \le X) + P(X^2\le x \cap X\le x \cap X^2 > X)$$ $$ = P(X\le x \cap X^2 \le X) + P(X^2\le x \cap X^2 > X)$$ $$ = P(X\le x \cap 0 < X < 1) + P(X^2\le x \cap 1 < X < 2)$$ $$ = P(X \in (-\infty,x] \cap (0,1)) + P(X^2\le x \cap 1 < X < 2)$$ $$ = P(X \in (-\infty,\min\{x,1\}]) + P(X\le \sqrt{x} \cap 1 < X < 2)$$ $$ = F_X(\min\{x,1\}) + P(X\le \sqrt{x} \cap 1 < X < 2)$$ Now $P(X \le \sqrt{x} \cap 1 < X < 2)$: For $x \le 1$, $$P(X\le \sqrt{x} \cap 1 < X < 2) = P(X \in \emptyset) = 0$$ For $1 < x < 4$, $$P(X\le \sqrt{x} \cap 1 < X < 2) = P(1 < X \le \sqrt{x}) = \frac{\sqrt{x}-1}{2}$$ For $x \ge 4$, $$P(X\le \sqrt{x} \cap 1 < X < 2) = P(1 < X < 2) = \frac{1}{2}$$
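Assembling the pieces of the inclusion-exclusion into the full CDF of $\min\{X^2,X\}$ and checking it against a direct simulation (using that on $(0,1)$ the minimum is $X^2$ and on $(1,2)$ it is $X$):

```python
import math, random

random.seed(1)

def F_X(t):                      # CDF of U[0, 2]
    return min(max(t, 0), 2) / 2

def g(x):                        # P(X <= sqrt(x) and 1 < X < 2), by cases
    if x <= 1:
        return 0.0
    if x < 4:
        return (math.sqrt(x) - 1) / 2
    return 0.5

def F_min(x):                    # inclusion-exclusion assembled from the answer
    return F_X(math.sqrt(x)) + F_X(x) - (F_X(min(x, 1)) + g(x))

samples = [random.uniform(0, 2) for _ in range(200_000)]
for x in [0.3, 0.8, 1.2, 1.9]:
    empirical = sum(min(s, s * s) <= x for s in samples) / len(samples)
    assert abs(empirical - F_min(x)) < 0.01, x
```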
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Interchangability of arbitrary sums and linear operator Let $\mathcal H$ be a Hilbert space, $\{x_i : i \in I \}$ be a orthonormal base in $\mathcal H$ and $T \in L(\mathcal H)$. Does the following hold: $T( \sum_{i \in I} \lambda_i x_i) = \sum_{i \in I} \lambda_i T(x_i)$ ? For example, consider $\{ e_i : i \in I\}$, the standard base in $\ell^2(I)$.
We have for finite sums that: $$ T\left(\sum_{ k = 1}^{N} \lambda_k x_k \right) = \sum_{k = 1}^{N} \lambda_kT(x_k) $$ Because $T$ is continuous, we have for any sequence $x_n \in \mathcal{H}$ with $x_n \rightarrow x$ that: $$ T(x_n) \rightarrow T(x) $$ Hence, because: $$ \sum_{ k = 1}^{\infty} \lambda_k x_k= \lim_{N \to \infty}\sum_{ k = 1}^{N} \lambda_k x_k $$ we conclude that: $$ \lim_{N \to \infty}T\left(\sum_{k = 1}^{N} \lambda_k x_k \right) = \lim_{N \to \infty}\sum_{k = 1}^{N} \lambda_kT(x_k) $$ The left hand side (by continuity) is: $$ T\left(\sum_{k = 1}^{\infty} \lambda_k x_k\right) $$ And the right hand side (by definition) is: $$ \sum_{k = 1}^{\infty} \lambda_k T(x_k) $$ as we need.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bound for the Brownian motion exit time Suppose $T = \inf\{t: B_t\not\in (a,b)\}$ where $a<0<b$ and $a\neq -b$. I would like to show $$\mathbb{E}T^2 \leq C \mathbb{E}B_T^4.$$ The problem also says to apply the Cauchy-Schwarz inequality to $\mathbb{E}(TB_T^2)$. Now I know $\{B_t^4 - 6tB_t^2 + 3t^2\}_t$ is a martingale, and it suffices to show the inequality above for $T\wedge t$. By the martingale property and Cauchy-Schwarz, we have $$\mathbb{E} B_{T\wedge t}^4 + 3\mathbb{E} (T\wedge t)^2 \leq 6 \bigg(\mathbb{E} (T\wedge t)^2\bigg)^{1/2} \bigg( \mathbb{E} B_{T\wedge t}^4\bigg)^{1/2}$$ I am stuck here, and I am trying to conclude from this without getting into too explicit a calculation with the $B_T$ term, since we can actually compute $\mathbb{E}T^2$ and $\mathbb{E}B_T^4$, but I think it is not the point of this problem...
It is just a scaling argument now. Write $E[T^2]=C(a,b) E[B_T^4]$ and then your inequality becomes $$(3C(a,b)+1)E[B_T^4] \leq 6 C(a,b)^{1/2} E[B_T^4].$$ Thus $3C(a,b)+1 \leq 6C(a,b)^{1/2}$. There is a maximal solution to this inequality. The point is that $E[T^2]$ contributes more to the left side of the inequality than to the right (the exponent is $1$ vs. $1/2$), so unless the factor $E[B_T^4]$ can "pick up the slack" on the RHS, the inequality you know to be true cannot be true. You could use exactly the same argument to derive the reverse inequality (for a different $C$).
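The inequality $3C + 1 \le 6\sqrt C$ pins $C$ to a bounded interval: setting $s=\sqrt C$ and solving $3s^2-6s+1\le 0$ gives the largest admissible value $C_{\max} = \frac{5+2\sqrt6}{3} \approx 3.30$. A quick check of that algebra:

```python
import math

# roots of 3 s^2 - 6 s + 1 = 0, where s = sqrt(C)
s_max = (6 + math.sqrt(36 - 12)) / 6
C_max = s_max ** 2

assert abs(C_max - (5 + 2 * math.sqrt(6)) / 3) < 1e-12
# C_max satisfies the constraint with equality; slightly larger C fails
assert abs(3 * C_max + 1 - 6 * math.sqrt(C_max)) < 1e-9
assert 3 * (C_max + 0.01) + 1 > 6 * math.sqrt(C_max + 0.01)
print(C_max)  # -> approximately 3.2997
```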
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subset of matrix rows with half of column sums Consider the following problem. We are given a matrix $A = (a_{ij})_{i,j = 1}^{m,n}$ with $m$ rows and $n$ columns, all $a_{ij}$ are nonnegative. Prove that there exists a subset $S$ of rows, $|S| \leq m/2 + n/2$, such that, for every column $j$, sum of all elements from $S$, that were in the column $j$, is at least a half of the sum of all elements in the whole column $j$. That is, if $S = \{i_1,\ldots,\ i_s\}$, $$\forall j = 1\ldots n: \qquad\sum_{k = 1}^sa_{i_kj} \geq \frac{1}{2}\sum_{i = 1}^ma_{ij}.$$ It is said that this problem is somehow connected to linear programming. I tried to consider the corresponding integral minimization problem and its LP relaxation, but still have no insight of the solution. Any advice appreciated.
Ok, I think the following works so I will sketch out the idea. Maybe you can check the details? We consider the following system: Let $C_1, \cdots, C_n$ be the $n$ column sums. Let $x_1, \cdots, x_m$ be $m$ variables, one for each row, and consider the following constraints: \begin{align*} x_1 + \cdots + x_m &\le \frac{m}2 + \frac{n}2 \ \text{(Type 1)}\\ a_{1i}x_1 + a_{2i}x_2 + \cdots + a_{mi}x_m &\ge \frac{C_i}2, \ 1 \le i \le n \ \text{(Type 2)} \\ x_j &\ge 0, \ 1 \le j \le m \ \text{(Type 3)} \\ x_j &\le 1, \ 1 \le j \le m \ \text{(Type 4)} . \end{align*} Type $1$ is the relaxed version of saying that we cannot pick more than $\frac{m+n}2$ rows, and the type $2$ constraints are saying that for any column, the sum of the values of the entries in that column (times the corresponding row weight) have to be at least half the column sum. Ideally we would want $x_j \in \{0, 1\}$ but we relax this. Now we note that the problem is easy if $n \ge m.$ This is because in this case, we can just take all of the rows! Thus, we can assume that $m > n$. If we put all of the constraints in a big matrix $B$, it will be $(1 + 2m + n )\times m$. Now I claim that the rank of $B$ is $m$. Indeed, the type $3$ constraints just give us a copy of the $m \times m$ identity matrix. Furthermore, the total rank of the type $2$ constraints is at most $n$. Now let $x^*$ be any feasible solution to the system (for example, letting all of the $x_j = \frac{1}2$). From the above discussion, we have an $m-n \ge 1$ dimensional subspace of $\mathbb{R}^m$ that is orthogonal to all of the type $2$ constraints. Let $\delta$ be a vector from that subspace. We now want to consider $x' = x^* + c \delta$ for a suitably chosen scalar $c$. We do not know the dot product of $\delta$ with the all ones vector $\mathbf1 \in \mathbb{R}^m$ but we know that $\mathbf 1 \cdot \delta$ is the sum of $e_k \cdot \delta$ for the basis vectors $e_k$.
(We are considering $\mathbf 1$ and $e_k$ since they are type $1$ and type $3$ constraints respectively!) Thus, we can pick $c$ such that $x'$ is also a solution to our system above and $x_{k} = 0$ or $1$ for some $k$ (essentially $x_{k}$ will be the first one to be tight as we change the value of $c$, due to our assumption. We just have to pick the sign of $c$ so that the type $1$ constraint isn't violated.) Now we are almost done. There are two cases. If we have $x_k = 1$ then we can just remove the $k$th row and adjust the column sums accordingly. Furthermore, this only helps us with the type $1$ constraint since the LHS of the type $1$ constraint decreases by $1$ while the RHS only decreases by $\frac{1}2.$ This will result in a smaller instance of the problem which we can solve by induction. In the other case, we continue in this manner by making more and more variables equal to $0$. However, every time we make a variable equal to $0$, we must pick a new $\delta$ that is orthogonal to all the type $2$ constraints and all the type $3$ constraints that we just made tight. Thus, we can make some $m-n$ of the $x_j$'s equal to $0$ this way. Note that we cannot pick which of the $x_j$'s are $0$ since we cannot control the dot product $\mathbf 1 \cdot \delta$. In summary, we have a feasible solution $x'$ of the above system where $m-n$ of the variables are equal to $0$. Now just make the rest of the $n$ variables equal to $1$. This is possible since it only helps all of the type $2$ constraints since all of the coefficients of $A$ are non-negative. Furthermore, the type $1$ constraint is also satisfied since $$x_1 + \cdots + x_m = n < \frac{m}2 + \frac{n}2 $$ where the inequality follows from our assumption that $m > n$ and we are done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why is the HCF of $x^2-1$ and $x-1$ equal to $x-1$, and why can't it be $1-x$? I've been to different sites and tried to find it using Mathematica as well, but everywhere they put the answer $x-1$ before me. I even tried to find it for $1-x^2$ and $1-x$, and then also I got the answer $x-1$. Can anyone explain, please?
A $\gcd$ (or $\operatorname{hcf}$) is only defined up to associates in the respective ring. For real polynomials, the usual convention is to define the $\gcd$ to be the monic polynomial among those, since the units are the (non-zero) constant polynomials.
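To make the normalization concrete, here is a hand-rolled Euclidean algorithm over $\mathbb{Q}[x]$ (an illustrative sketch, not how a CAS does it internally; coefficient lists are written lowest degree first, and any nonzero constant multiple of the output would be an equally valid gcd before the monic normalization in the last line):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are lowest degree first)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_rem(a, b):
    """Remainder of a modulo b in Q[x] (b nonzero)."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, bc in enumerate(b):
            a[shift + i] -= q * bc
        trim(a)
    return a

def monic_gcd(a, b):
    """Euclidean algorithm; the result is only determined up to a unit
    (a nonzero constant), so normalize by the leading coefficient."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while trim(b):
        a, b = b, poly_rem(a, b)
    a = trim(a)
    return [c / a[-1] for c in a]

# x^2 - 1 and x - 1: the monic gcd is x - 1, i.e. coefficients [-1, 1] ...
g1 = monic_gcd([-1, 0, 1], [-1, 1])
# ... and 1 - x^2 and 1 - x give the same monic answer, not 1 - x.
g2 = monic_gcd([1, 0, -1], [1, -1])
print(g1, g2)
```

The second call mirrors the asker's experiment: even starting from $1-x^2$ and $1-x$, the unit normalization singles out $x-1$.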
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Fourier transform of a function with exponential and powers How can I calculate this Fourier transform $F(y)$ ? $$F(y)= \int_0^{\infty}(1+x)^{\frac{1}{2}} x^{-\frac{1}{2}-a} e^{-a x} \cos(2 \pi xy) dx$$ with $a$ complex ($0<Re(a)<\frac{1}{2}$) This is in fact a Cosine transform.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\mrm{F}\pars{y} \equiv \int_{0}^{\infty}\pars{1 + x}^{1/2}\,x^{-1/2 - a}\expo{-ax}\cos\pars{2\pi xy} \,\dd x:\ {\large ?}.\qquad 0 < \Re\pars{a} < 1/2}$. With $z \equiv a + 2\pi y\ic$: \begin{align} \mrm{F}\pars{y} & \equiv \int_{0}^{\infty}\pars{1 + x}^{1/2}\,x^{-1/2 - a}\expo{-ax}\cos\pars{2\pi xy} \,\dd x \\[5mm] & = \mrm{f}\pars{a + 2\pi y\ic} + \mrm{f}\pars{a - 2\pi y\ic} \quad\mbox{where}\quad \mrm{f}\pars{z} \equiv {1 \over 2}\int_{0}^{\infty}\pars{1 + x}^{1/2}\,x^{-1/2 - a}\expo{-xz} \,\dd x \end{align} Let $\ds{k \equiv {1 + a \over 2}}$ and $\ds{m \equiv {1 - a \over 2}}$. Then, \begin{align} \mrm{f}\pars{z} & \equiv {1 \over 2}\int_{0}^{\infty}x^{-k - 1/2 + m}\pars{1 + x}^{k - 1/2 + m}\expo{-xz} \,\dd x \\[5mm] & = {1 \over 2}\,{1 \over z^{-k + 1/2 + m}}\int_{0}^{\infty} x^{-k - 1/2 + m}\pars{1 + {x \over z}}^{k - 1/2 + m}\expo{-x} \,\dd x \\[5mm] & = {1 \over 2}\,{1 \over z^{-k + 1/2 + m}}\bracks{{\Gamma\pars{1/2 - k + m} \over \expo{-z/2}z^{k}}\,\mrm{W}_{k,m}\pars{z}} \end{align} where $\ds{\,\mrm{W}_{k,m}}$ is the Whittaker Function. 
Then, \begin{align} \mrm{f}\pars{z} & = {1 \over 2}\,\Gamma\pars{{1 \over 2} - a}z^{a/2 - 1}\expo{z/2} \,\mrm{W}_{\pars{1 + a}/2,\pars{1 - a}/2}\pars{z} \end{align} $$ \begin{array}{|rcl|}\hline\\ \ds{\quad\mrm{F}\pars{y}} & \ds{=} & \ds{{1 \over 2}\,\Gamma\pars{{1 \over 2} - a} \left[\vphantom{\LARGE A}\pars{a + 2\pi y\ic}^{a/2 - 1}\expo{a/2 + \pi y\ic} \,\mrm{W}_{\pars{1 + a}/2,\pars{1 - a}/2}\pars{a + 2\pi y\ic}\ {\large +}\right.\quad} \\&& \ds{\phantom{{1 \over 2}\,\Gamma\pars{{1 \over 2} - a}\,\,\,}\left. \vphantom{\LARGE A} \pars{a - 2\pi y\ic}^{a/2 - 1}\expo{a/2 - \pi y\ic} \,\mrm{W}_{\pars{1 + a}/2,\pars{1 - a}/2}\pars{a - 2\pi y\ic}\right]} \\ && \mbox{}\\ \hline \end{array} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove this series converges? Let $(a_n)$ be a sequence of real numbers satisfying $$a_1 \geq 1 \;\;\;\text{and}\;\;\;a_{n+1}\geq a_n+1$$ for all $n \geq 1$. Then which one of the following is necessarily true? a) The series $\sum \frac{1}{(a_n)^2}$ diverges. b) The sequence $a_n$ is bounded. c) The series $\sum \frac{1}{(a_n)^2}$ converges. d) The series $\sum \frac{1}{a_n}$ converges. Here $(a_n)=(n)$ eliminates a), b) and d). So c) is true. How can c) be proved mathematically? Any hint?
HINT: By induction, $a_n\geq n$ (from $a_1\ge 1$ and $a_{n+1}\ge a_n+1$), hence $$ \sum_{n=1}^{\infty}\frac{1}{a_n^2}\leq\sum_{n=1}^{\infty}\frac{1}{n^2}, $$ and the right-hand side converges, so the comparison test finishes the job.
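A numerical illustration of the hint, with an arbitrary random admissible sequence (increments of at least $1$, so $a_n \ge n$ automatically):

```python
import math
import random

random.seed(1)
a = [1 + random.random()]                    # a_1 >= 1
for _ in range(9999):
    a.append(a[-1] + 1 + random.random())    # a_{n+1} >= a_n + 1

partial = sum(1 / x ** 2 for x in a)
bound = math.pi ** 2 / 6                     # = sum over all n of 1/n^2
print(partial < bound)                       # True: term by term, a_n >= n
```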
{ "language": "en", "url": "https://math.stackexchange.com/questions/2759920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Degree of the eighth vertex given the other degrees Consider a graph with $8$ vertices. If the degrees of seven of the vertices are $1,2,3,4,5,6,7$, find the degree of the eighth vertex. I also have to check the graph's planarity and its chromatic number. I know that the sum of the degrees of the vertices is twice the number of edges, but that is not really helping here. If I get the degree of the eighth vertex, then I could try checking for planarity and the chromatic number. But hints about those are also welcome. Thank you.
Drawing the graph works, but here is a more formal argument. The degree 7 vertex must be connected to each of the other vertices. So the degree 1 vertex is connected to the degree 7 vertex only. Therefore the degree 6 vertex must be connected to every vertex apart from the degree 1 vertex. So the degree 2 vertex is connected to the degree 6 and 7 vertices only. Therefore the degree 5 vertex must be connected to every vertex apart from the degree 1 and 2 vertices. So the degree 3 vertex is connected to the degree 5, degree 6 and 7 vertices only. Therefore the degree 4 vertex is connected to the degree 5, 6 and 7 vertices but not to the degree 1, 2 and 3 vertices. To have degree 4 it must also be connected to the 8th vertex. Therefore the 8th vertex is connected to the degree 4, 5, 6 and 7 vertices but not to the degree 1, 2 and 3 vertices. So the 8th vertex must have degree 4.
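The argument can be cross-checked against the Erdős–Gallai characterization of graphical degree sequences; scanning every candidate eighth degree confirms that $4$ is the only possibility (an illustrative check, not a replacement for the reasoning above):

```python
def is_graphical(seq):
    """Erdos-Gallai: a degree sequence is realizable by a simple graph
    iff its sum is even and every prefix inequality holds."""
    d = sorted(seq, reverse=True)
    if sum(d) % 2:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

# A vertex in a simple graph on 8 vertices has degree at most 7.
feasible = [d for d in range(8) if is_graphical([1, 2, 3, 4, 5, 6, 7, d])]
print(feasible)   # only d = 4 survives
```

Odd candidates already fail the parity condition (the seven known degrees sum to $28$), and the even candidates other than $4$ fail a prefix inequality, matching the hand argument.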
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to prove that if $f^3 = f$, $f$ is diagonalizable? Let $V$ be a finite-dimensional vector space over a field of characteristic zero, and let $f: V \to V$ be a linear map. Prove that if $$f^3 = f,$$ then $f$ is diagonalizable. Since $f$ is a zero of the polynomial $$p(x) = x^3 - x,$$ the minimal polynomial $m_f$ must divide $p$, hence $m_f$ can only be one of the following polynomials: $$m(x) = (x-1) \\ = x \\ = x+ 1 \\ =x^2 - x \\ = x^2 + x \\ = x^2 - 1 \\ =x(x-1)(x+1) $$ But I couldn't show that if $m$ is any of $x^2 + x$, $x^2 - 1$, or $x(x-1)(x+1)$, then $f$ is diagonalizable. So for these three cases, how can we prove that $f$ is diagonalizable?
Since $f$ is a zero of the polynomial $$p(x) = x^3 - x = x (x-1)(x+1),$$ the minimal polynomial of $f$ has to divide $p$. This means the minimal polynomial is a product of some of the factors $x$, $x-1$, and $x+1$, each appearing with multiplicity $1$. In characteristic zero these factors are pairwise distinct, so the minimal polynomial splits into distinct linear factors, which is exactly the criterion for $f$ to be diagonalisable. QED.
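A concrete instance, with a sample matrix chosen for illustration: $A$ swaps the first two coordinates and annihilates the third, so $A^3 = A$, and conjugating by an eigenvector matrix $P$ (its inverse computed by hand) diagonalizes it with eigenvalues in $\{1,-1,0\}$, using exact rational arithmetic throughout:

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

F = Fraction
# A swaps the first two coordinates and kills the third, so A^3 = A.
A = [[F(0), F(1), F(0)],
     [F(1), F(0), F(0)],
     [F(0), F(0), F(0)]]
assert matmul(matmul(A, A), A) == A

# Columns of P are eigenvectors for the eigenvalues 1, -1, 0.
P    = [[F(1), F(1),  F(0)],
        [F(1), F(-1), F(0)],
        [F(0), F(0),  F(1)]]
Pinv = [[F(1, 2), F(1, 2),  F(0)],
        [F(1, 2), F(-1, 2), F(0)],
        [F(0),    F(0),     F(1)]]

D = matmul(matmul(Pinv, A), P)
print(D)   # diag(1, -1, 0)
```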
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $f^2$ is Lebesgue integrable if and only if $\sum_{k=1}^\infty k \cdot m\{x\in A: |f(x)|>k\}<\infty.$ Question: Let $f$ be a Lebesgue measurable function on $A$ with $m(A)<\infty.$ Prove that $f^2$ is Lebesgue integrable if and only if $$\sum_{k=0}^\infty k\cdot m\{x\in A: |f(x)|>k\}<\infty.$$ My attempt: For each $k\geq 0,$ denote $$A_k=\{x\in A: f^2(x)>k^2 \} = \{x\in A: |f(x)|>k\}.$$ Observe that the union $$A = \bigcup_{k=0}^\infty (A_k\setminus A_{k+1})$$ is disjoint. It follows that $$\int_A f^2 = \int_{\bigcup_{k=0}^\infty (A_k\setminus A_{k+1})}f^2 = \sum_{k=0}^\infty \int_{A_k \setminus A_{k+1}} f^2.$$ I am not sure whether the following inequality $$ k \cdot m\{x\in A: |f(x)|>k\} \leq \int_{A_k\setminus A_{k+1}}f^2 \leq (k+1) \cdot m\{x\in A: |f(x)|>k\}.$$ holds. If it does, then I am done.
You only have $$k^2 \lambda(A_k \setminus A_{k+1}) \leq \int_{A_k \setminus A_{k+1}} f^2 \,\mathrm{d} \lambda \leq (k+1)^2 \lambda(A_k \setminus A_{k+1}).$$ However, noting that \begin{align} \sum_{k=1}^\infty k \lambda(x \in A \colon |f(x)| >k) &= \sum_{k=1}^\infty k \sum_{i=k}^\infty \lambda(A_i \setminus A_{i +1}) \\ &= \sum_{i=1}^\infty \lambda(A_i \setminus A_{i+1}) \sum_{k=1}^i k \\ &= \frac{1}{2} \sum_{i=1}^\infty i (i+1) \lambda(A_i \setminus A_{i+1}), \end{align} we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Group action defined on generators. In a question for an assignment (the group was specific, but I've abstracted the question for academic integrity), I was given a group $G=\langle\ a,b \mid R\ \rangle$ where $R$ is some set of relations. Then, the question defines an action on $X$ for each generating element. The first question is then "Verify that this gives a group action on $X$". My problem is that defining a group action on generators seems to inherently assume that $e(x)=x$ and $a(b(x))=ab(x)$ from the beginning, so it seems like there is nothing left to show? Since the question only defined the functions $a(x)=?$ and $b(x)=?$, I can't verify that $ab(x)=a(b(x))$, since I wasn't given the action defined by $ab$. Is this a misunderstanding on my part, or do you think it is likely that the question itself is ambiguous?
What you need to check is that every relator will act (by concatenating the action for the generators as given in the relator) as the identity. An example to show that this is needed would be a cyclic group $\langle g \rangle$ of order $3$, which acts on $4$ points via $g\mapsto (1,2,3,4)$. Then $g^3$, the identity, would have to act as $(1,4,3,2)$, which is forbidden.
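The counterexample is easy to check mechanically with plain tuples as permutations (no library required): the assigned image of $g$ has order $4$, so the relator $g^3$ does not act as the identity and the assignment fails to define an action of the cyclic group of order $3$.

```python
def compose(p, q):
    """(p after q)(i) = p(q(i)); permutations are tuples on {0,...,3}."""
    return tuple(p[q[i]] for i in range(len(p)))

identity = (0, 1, 2, 3)
g = (1, 2, 3, 0)                    # the 4-cycle (1,2,3,4), zero-indexed

g3 = compose(compose(g, g), g)
print(g3 == identity)               # False: the relation g^3 = e fails
print(compose(g3, g) == identity)   # True: this image has order 4
```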
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Weak control of FWER Suppose I have $p$ null hypotheses $H_1,\ldots,H_p$ and the global hypothesis: $$ H_0:H_1 \cap H_2 \cap \cdots \cap H_p. $$ I'm interested in controlling the level when testing $H_0$ (using multiple testing): bounding $$ \Pr(\text{reject }H_0|H_0\text{ is true}) $$ regardless of the underlying dependence of the individual tests used to test $H_1,\ldots,H_p$. As this wiki article explains, I am in fact interested in controlling the family-wise error rate (FWER) in the weak sense. The same article lists two methods of FWER level control (Bonferroni and Holm) that are robust to dependence of the individual tests. But these control the FWER in the strong sense (defined in the article). So they work for my purpose, but I'm afraid they are "overcompensating." What are some (modern) references for FWER level control in the weak sense?
According to Remark 3.2 from [1], FDR coincides with weak FWER: Given that all null hypotheses are null, any discovery is a false discovery, and therefore, $$FDP = V/R = 1.$$ ($V$ = # of false discoveries, $R$ = # of discoveries) As you can see, under the condition that all nulls are true, the definitions of Type I error of a multiple test and FDP already coincide. Since $FDR = E(FDP)$, when FDR is controlled at level $\alpha$, the Type I error given all nulls are true is controlled at level $\alpha$ as well. Thus, FDR is a good choice for your problem. [1] Rosenblatt, Jonathan: A Practitioner's Guide to Multiple Testing Error Rates. Available online at http://arxiv.org/pdf/1304.4920v3
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniform Convergence of $\sum_{n=1}^{\infty}(-1)^{n-1}\frac{x^2}{(1+x^2)^n}$ in $\mathbb{R}$ Using the Dirichlet test, it can be shown that the series converges uniformly on $\mathbb{R}\setminus\{0\}$. At $x=0$ there is obviously pointwise convergence to $0$. However, I'm struggling to draw a conclusion for all of $\mathbb{R}$. If you could help me reach a conclusion or provide another direction, it would be appreciated.
In short: here, you shouldn't have to worry about $0$ in the first place. When you apply the Dirichlet test, take $$ a_n(x) = (-1)^{n-1}, \qquad b_n(x) = \frac{x^2}{(1+x^2)^n} $$ for $x\in\mathbb{R}$ and $n\geq 1$. Then * *For $M\stackrel{\rm def}{=} 1$, we have $$ \left\lvert \sum_{n=1}^N a_n(x) \right\rvert \leq M $$ for all $N\geq 1$ and $x\in\mathbb{R}$. *For all $x\in\mathbb{R}$ and $n\geq 1$, $$ b_n(x) \geq b_{n+1}(x) $$ *$\lim_{n\to\infty }\lVert b_n\rVert_\infty = 0$ (uniform convergence of $b_n$ to $0$) Therefore, the (Uniform) Dirichlet test guarantees that the series $$ \sum_{n=1}^\infty a_n(x)b_n(x) = \sum_{n=1}^\infty (-1)^{n-1} \frac{x^2}{(1+x^2)^n} $$ converges uniformly on $\mathbb{R}$.
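The third hypothesis, $\lVert b_n\rVert_\infty \to 0$, can also be seen numerically. The grid below is a rough sketch: the true maximizer satisfies $x^2 = 1/(n-1)$, which lies well inside $[0,2]$ for $n\ge 2$, so scanning that interval captures the supremum.

```python
def sup_norm(n, steps=10000):
    """Grid approximation of sup over x >= 0 of x^2 / (1 + x^2)^n."""
    best = 0.0
    for k in range(steps + 1):
        x = 2 * k / steps
        best = max(best, x ** 2 / (1 + x ** 2) ** n)
    return best

norms = [sup_norm(n) for n in range(2, 30)]
# b_{n+1}(x) = b_n(x) / (1 + x^2) <= b_n(x), so the sup norms decrease.
decreasing = all(a >= b for a, b in zip(norms, norms[1:]))
print(decreasing, norms[0], norms[-1])
```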
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
When ODE tells more than the explicit solution. "ODE is not just about 'solving' an equation and spitting out a (probably nasty) formula" -- this is what I want my (undergraduate) students to learn from my course this summer. One example I am looking for is a scenario where one extracts info about a function from the ODE it satisfies much more easily than from the solution's explicit formula. There are easy examples of this. For instance, the IVP $y'=y;\ y(0)=1$ tells us that the function will be increasing on $[0,\infty)$, and by taking another derivative that it will be concave up, etc. However, one may rightly argue that $e^x$, which is the solution, easily gives these properties. So, I am looking for a less trivial, yet interesting example where it is much easier to understand a function from its ODE than from its explicit formula. Do you have such examples? I will appreciate them. **The example may be important from a computational/numerical point of view.**
I would use mathematical models with biological applications. Their ODE solutions are often very complex, and many of them do not even have a closed-form solution; however, the formulation of the ODE itself is very intuitive. For instance, the logistic equation: $$\frac{dx}{dt} = r x \left(1 - \frac{x}{K} \right)$$ Or a species growth with harvesting, $$\frac{dx}{dt} = r x \left(1 - \frac{x}{K} \right) - hx$$ An interacting predator-prey system: \begin{align} x' & = bx - k_1x - (dxy)\\ y' & = a(dxy) - k_2y \end{align} I think these are fun, and the students can formulate their own models and study them using phase-plane analysis and other geometric methods.
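For instance, the qualitative behaviour of the logistic equation is readable directly from the sign of $x'$ ($x' > 0$ below $K$, $x' < 0$ above $K$), and a few lines of forward-Euler integration confirm it without ever writing down the closed-form solution. The parameter values here are arbitrary.

```python
def euler_logistic(x0, r=1.0, K=10.0, dt=0.001, T=30.0):
    """Forward-Euler integration of x' = r x (1 - x/K)."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * r * x * (1 - x / K)
    return x

# Every positive starting population tends to the carrying capacity K.
finals = [euler_logistic(x0) for x0 in (0.1, 3.0, 10.0, 25.0)]
print(finals)   # all close to K = 10
```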
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 0 }
$u^{-1}$ is integral over $R\subset S$ if and only if $u^{-1}\in R[u]$ I need to prove the following: Let $R\subset S$ be an extension of commutative rings and let $u$ be any invertible element of $S$. Then $u^{-1}$ is integral over $R$ if and only if $u^{-1}\in R[u].$ Proof: Suppose $u^{-1}$ is integral over $R$. Then there exists a monic polynomial $f(x)=x^n+r_{n-1}x^{n-1}+\ldots+r_1x+r_0\in R[x]$ such that $f(u^{-1})=0$, i.e. $$u^{-n}+r_{n-1}u^{-n+1}+\ldots+r_1u^{-1}+r_0=0.$$ Multiplying through by $u^{n-1}$ gives $$u^{-1}+r_{n-1}+r_{n-2}u+\ldots+r_1u^{n-2}+r_0u^{n-1}=0.$$ Therefore, $u^{-1}\in R[u]$. Now how do I prove the converse?
Suppose $u^{-1} \in R[u]$. Then there exists some polynomial $f(x) \in R[x]$ such that $u^{-1}=f(u)$, or equivalently $$uf(u)-1=0$$ Write $f(x)= \sum_{i=0}^n a_ix^i$. Then $$0=\sum_{i=0}^n a_iu^{i+1}-1 = u^{n+1} \cdot (u^{-(n+1)}+a_0u^{-n} + a_1 u^{-(n-1)}+ \dots + a_n)$$ Since $u^{n+1}$ is a unit, you have $$u^{-(n+1)}+a_0u^{-n} + a_1 u^{-(n-1)}+ \dots + a_n=0$$ which is an integral relation of $u^{-1}$ over $R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2760980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Famous convex maximization problems What are the most famous problems having an objective of maximizing a nonlinear convex function (or minimizing a concave function)? As far as I know, such an objective with respect to linear constraints is NP-hard.
THE most famous problem having an objective of maximizing a convex function (or minimizing a concave function), and having linear constraints, is Linear Programming, which is NOT NP-hard. Linear Programming is both a convex optimization problem (minimizing a convex function subject to convex constraints) and a concave optimization problem (minimizing a concave function subject to convex constraints). Therefore it has all the properties of both, including: all local optima are global optima, and if the feasible region is compact (closed and bounded), then there is a global optimum at an extreme point of the feasible region.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$J_n(x)=0$ has no repeated root except zero. How to prove it? Suppose $J_n(x)=0$ has repeated roots. Then it must have at least two equal roots, say $x=x_0\ (\neq 0)$, i.e., $x_0$ is a double root of $J_n(x)=0$. Therefore $J_n(x_0)=0$ and $J_n'(x_0)=0$. Then what should I do?
$J_n$ solves the Sturm–Liouville equation $$ -(xy')' +\frac{n^2}{x} y = x y, $$ or more commonly $$ x^2 y'' + xy' +(x^2-n^2)y = 0. $$ If $J_n$ has a multiple zero at $a \neq 0$, then $J_n(a) = J_n'(a) = 0$. But then the differential equation implies that $J_n''(a)=0$, and repeated differentiation and substitution implies that $J_n^{(k)}(a)$ equals zero for every $k$. Since $J_n$ is analytic away from $0$, it follows that $J_n$ vanishes identically on an open set containing $a$, which is a contradiction. See here for some other properties of the zeros of $J_n$ and $J'_n$ (most of which apply to any solution of Bessel's equation where it is analytic).
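A numerical illustration (power-series evaluation of $J_0$ and $J_1$, adequate for moderate $x$; the bracketing intervals and tolerances are ad hoc choices): at each of the first three positive zeros of $J_0$, the derivative $J_0' = -J_1$ is comfortably bounded away from zero, consistent with the zeros being simple.

```python
import math

def J(n, x, terms=40):
    """Bessel J_n by its power series (fine for moderate |x|)."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

def bisect_zero(f, lo, hi, iters=60):
    """Bisection for a sign change of f on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Intervals known to bracket the first three positive zeros of J_0.
zeros = [bisect_zero(lambda x: J(0, x), a, b)
         for a, b in [(2, 3), (5, 6), (8, 9)]]
derivs = [-J(1, z) for z in zeros]      # J_0'(x) = -J_1(x)
print([round(z, 4) for z in zeros])     # ~ 2.4048, 5.5201, 8.6537
print(all(abs(d) > 0.2 for d in derivs))
```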
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $u_{n+1}=6u_n-4u_{n-1}$ if $u_n=(3+\sqrt{5})^n+(3-\sqrt{5})^n$ for $n=1,2,...$ I have already shown that for each $n$, $u_n$ is an integer. Now we can show that $u_n=\dfrac{1}{2^n}(\sqrt{5}+1)^{2n}+\dfrac{2^{3n}}{(\sqrt{5}+1)^{2n}}$. But the problem is that while showing $u_{n+1}=6u_n-4u_{n-1}$, I can see that the whole calculation takes pages. Is there any shortcut, crisp way of proving it? PS - I am not showing the gory, intricate calculations that cost me pages, as they might make readers uneasy.
Write $a=3+\sqrt5$, $b=3-\sqrt5$. Then $u_n=a^n+b^n$. Also $$u_{n+1}-6u_n+4u_{n-1}=a^{n+1}-6a^n+4a^{n-1}+b^{n+1}-6b^n+4b^{n-1} =(a^2-6a+4)a^{n-1}+(b^2-6b+4)b^{n-1}$$ etc. You need this to equal zero.
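A quick numerical check (floating point, so round each closed-form value to the nearest integer):

```python
import math

r5 = math.sqrt(5)
u = lambda n: round((3 + r5) ** n + (3 - r5) ** n)

vals = [u(n) for n in range(1, 15)]
print(vals[:3])   # 6, 28, 144
recurrence_holds = all(u(n + 1) == 6 * u(n) - 4 * u(n - 1)
                       for n in range(2, 14))
print(recurrence_holds)
```

This works because $a$ and $b$ are the two roots of $t^2 - 6t + 4 = 0$, which is exactly what the displayed computation verifies.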
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How do I prove that the series $\sum_n \left(1+\frac 1n\right)^n a_n$ converges iff $\sum_n a_n$ converges? With sequences it simply follows from limit arithmetic, but with the series I cannot make it work. We have not learned integrals yet, so I can't use that. Any advice?
If the $a_n$ are positive, the result follows from the limit comparison test. If the assumption is dropped, summation by parts can be used. Let us prove that if $\sum_n a_n$ converges, then $\sum_n \left(1+\frac 1n \right)^na_n$ converges. Let $A_n=\sum_{k=0}^n a_k$, so that $$\sum_{n=0}^N \left(1+\frac 1n \right)^na_n = A_N\left(1+\frac 1N \right)^N-\sum_{n=0}^{N-1}A_n\left( \left(1+\frac 1{n+1} \right)^{n+1}-\left(1+\frac 1n \right)^n\right)$$ Since $\lim_N \left(1+\frac 1N \right)^N = e$, $A_N\left(1+\frac 1N \right)^N$ converges. Since $A_N$ is bounded (because it converges) and $\left(1+\frac 1N \right)^N$ is increasing, $\sum_{n=0}^{N-1}A_n\left( \left(1+\frac 1{n+1} \right)^{n+1}-\left(1+\frac 1n \right)^n\right)$ is absolutely convergent, hence convergent. $\sum_{n=0}^N \left(1+\frac 1n \right)^na_n$ is the sum of two convergent sequences, thus converges. In the same fashion, one may prove that if $\sum_n a_n$ converges, then $\sum_n \left(1+\frac 1n \right)^{\color{red}{-n}}a_n$ converges. Apply this to $\sum_n \left(1+\frac 1n \right)^{n}a_n$ to get the converse of your statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Maximum Likelihood Estimation for Zero-inflated Poisson distribution I am trying to do exactly what the title says. What I have is the log-likelihood function as follows: Likelihood function, where $I_i = 1$ when $X_i = 0$, and $I_i = 0$ otherwise. Then I took the partial derivatives of that like this. I tried simplifying this expression to get an equation, but I keep getting nonsense. After searching the web, I found this book but I don't understand how the author arrived at ML equations just from looking at the PGF, can someone help explain that please? There is also this entry but they use a different model from mine (p = probability of Poisson, 1-p = probability of 0, whereas both the first book and my model use p = probability of 0, 1-p = probability of Poisson) and again they don't show the simplification steps so I can't really use that. Thanks for any help.
I think the inconsistency is caused by an error in the derivative of the profile likelihood in Ben's answer: the $\lambda e^{-\lambda}$ in the numerator should only be $ e^{-\lambda}$. In this case the final equation is $$ \bar{x}\,(1-e^{-\hat{\lambda}})=\hat{\lambda}\,(1-r_0),$$ which is the solution from the cited book. This equation can be solved either iteratively or directly by using the main branch of Lambert's $W$ function, as shown in the reference given below: Defining $\gamma=\frac{\bar{x}}{1-r_0}$, the solution is $$\hat{\lambda}=W_0(-\gamma e^{-\gamma})+\gamma$$ See Dencks et al.: Assessing Vessel Reconstruction in Ultrasound Localization Microscopy by Maximum Likelihood Estimation of a Zero-Inflated Poisson Model. DOI:10.1109/TUFFC.2020.2980063
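The closed form can be verified numerically with a small Newton iteration for the principal branch $W_0$ (the sample mean $\bar{x}$ and zero fraction $r_0$ below are made-up values; note that $-\gamma e^{-\gamma}\in[-1/e,0)$ for $\gamma>0$, so $W_0$ is real-valued here):

```python
import math

def lambert_w0(z, iters=50):
    """Newton iteration for w with w * e^w = z, principal branch."""
    w = 0.0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1 + w))
    return w

xbar, r0 = 2.0, 0.3                  # hypothetical sample mean, zero fraction
gamma = xbar / (1 - r0)
lam = lambert_w0(-gamma * math.exp(-gamma)) + gamma

# lam should satisfy the ML equation  xbar * (1 - e^{-lam}) = lam * (1 - r0)
residual = xbar * (1 - math.exp(-lam)) - lam * (1 - r0)
print(lam, abs(residual) < 1e-12)
```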
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Evaluating $\ i+i^2+i^3+i^4+\cdots+i^{100}$ $$i+i^2+i^3+i^4+\cdots+i^{100}$$ I figured out that every four terms add up to zero where $i^2=-1$, $i^3=-i$, $i^4=1$, so $$i+i^2+i^3+i^4 = i-1-i+1 = 0$$ Thus, the whole series eventually adds up to zero. But how do I approach this problem in a more mathematical way?
$$ \sum_{k=1}^n i^k = \frac{i^{n+1}-1}{i-1}-1 = i\frac{i^n-1}{i-1} $$ but $i^{100} = 1$ hence $\sum_{k=1}^{100} i^k = 0$
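A quick numerical confirmation (floating-point powers of $1j$ carry tiny rounding errors, hence the tolerance rather than an exact comparison):

```python
total = sum(1j ** k for k in range(1, 101))
print(abs(total) < 1e-9)   # the hundred terms cancel in blocks of four
```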
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
solving differential equations: $ay' + by^2 + cy = u$ where $a,b,c$ are positive constants and $u$ is an arbitrary constant. I can solve this equation: $ay' + by^2 + cy = 0$ $\Rightarrow -a\frac{dy}{y(by+c)}=dx $ $\Rightarrow \frac{a}{c}(\frac{bdy}{by+c}-\frac{dy}{y})=dx$ $\Rightarrow \ln(\frac{by+c}{y})=\frac{c}{a}x$ (up to an additive constant) $\Rightarrow y = \frac{c}{e^{\frac{c}{a}x}-b}$ But with an arbitrary constant on the right-hand side, I have no idea.
With a constant on the RHS, the equation is still separable and you solve it the same way! $$\frac{a\,dy}{by^2+cy-u}=-dx.$$ After integration, assuming $c^2+4bu>0$, $$\frac{2a}{\sqrt{c^2+4 b u}}\operatorname{artanh}\frac{c + 2 b y}{\sqrt{c^2 + 4 b u}}=x+C,$$ from which you can solve for $y$. (If $c^2+4bu<0$, the same computation produces an $\arctan$ of $\frac{c+2by}{\sqrt{-(c^2+4bu)}}$ instead, with the sign of the left-hand side reversed.)
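A numerical cross-check of the separation-of-variables step, using arbitrary parameter values with $c^2+4bu>0$ (in this regime the antiderivative is an inverse hyperbolic tangent): along a forward-Euler solution of $ay'+by^2+cy=u$, the quantity $\frac{2a}{\sqrt{c^2+4bu}}\operatorname{artanh}\frac{c+2by}{\sqrt{c^2+4bu}} - x$ should stay (approximately) constant.

```python
import math

a, b, c, u = 1.0, 1.0, 1.0, 1.0
D = c * c + 4 * b * u                # discriminant, assumed positive

def F(y):
    return 2 * a / math.sqrt(D) * math.atanh((c + 2 * b * y) / math.sqrt(D))

x, y, dx = 0.0, 0.0, 1e-5
const = F(y) - x
drift = 0.0
for _ in range(100000):              # integrate from x = 0 to x = 1
    y += dx * (u - b * y * y - c * y) / a    # y' = (u - b y^2 - c y) / a
    x += dx
    drift = max(drift, abs(F(y) - x - const))
print(drift < 1e-3)                  # the invariant holds up to Euler error
```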
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Completing a proof of a question related to Radon-Nikodym Let $\nu$ be absolutely continuous with respect to the measure $\mu$, where both of them are $\sigma$-finite and defined on $( \omega , X)$. Prove that: $\forall$ $\epsilon>0$ $\exists$ $\delta>0$ such that $\mu(A)<\delta$ $\implies$ $\nu(A)<\epsilon$, $\forall A \in X$. I want to prove this directly (it's rather easy if we argue by contraposition). Given $\epsilon >0$, for arbitrary $A\in X$, $\nu(A)=\int_{A} fd\mu<\epsilon$ where $f$ is the Radon-Nikodym derivative. Then by using simple functions we can write this as $\sum_{k=1}^{n}a_{k}\mu(A_{k})$ where the $A_{k}$'s are a partition of $A$. Hence for any $k<n+1$, $a_{k}\mu(A_{k})<\epsilon$. How can we find the $\delta$ from there? Or how can we prove this directly, if this proof doesn't work?
You certainly don't have to use Radon-Nikodym to prove this, as the other answer pointed out. But if you want to use it, then the result follows immediately from the fact that $f$ is $\mu$-integrable. See here (and note that the proof is essentially the same as the one given by harmonicuser). Indeed, integrability implies that for all $\epsilon > 0$, there exists $\delta > 0$ such that $\mu(A) < \delta$ implies $\int_A f d\mu < \epsilon$. Hence, $\mu(A) < \delta$ implies $\nu(A) = \int_A f d\mu < \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving function is strictly positive in a interval defined by coefficients Let $f(x)=a_nx^n+a_{n-1}x^{n-1}\dots+a_1x+1$. Prove that $f(x)$ is strictly positive if $$0<|x|<{1\over1+\sum_{i=1}^{n}|a_i|}.$$ Any hints on how to start?
Let $h(x)=f(x)-1=x\sum_{i=1}^{n}a_ix^{i-1}$. Then for $|x| < \dfrac{1}{1 + \sum_{i=1}^n |a_i|}$ we have $|x|<1$, so $|x|^{i-1}\le 1$ for each $i$, and $$|h(x)|\leq |x|\sum_{i=1}^{n}|a_i|<\frac{\sum_{i=1}^{n}|a_i|}{1+\sum_{i=1}^{n}|a_i|}<1.$$ Therefore $$f(x)=1+h(x)\geq 1-|h(x)|>0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2761988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that if $2x + 4y = 1$, where $x$ and $y$ are real numbers, then $x^2+y^2\ge \frac{1}{20}$. I did this exercise using the Cauchy inequality; I do not know if I did it correctly, so I decided to publish it to see my mistakes. Thank you! If $x=2$ and $y=4$, then $$(2^2+4^2)(x^2+y^2)\ge (2x+4y)^2$$ iff $$\frac{x}{2}=\frac{y}{4}$$, then $$20\ge(2x+4y)^2$$ $$-4.47\le2x+4y\le4.47$$
Your work is correct up to $$(2^2+4^2)(x^2+y^2)\ge (2x+4y)^2$$ From which you get $$ 20(x^2+y^2)\ge (1)^2$$ Therefore, $$x^2+y^2\ge \frac{1}{20}$$ At this point your are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2762114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }