Helly-Bray's theorem for bounded discontinuous functions If a sequence $(\mu_n)_{n\ge 1}$ of probability measures converges weakly to a p.m. $\mu$, then $$ \lim_{n\to \infty} \int f d\mu_n=\int f d\mu $$ for every $f:\mathbb{R}\rightarrow \mathbb{R}$ bounded and with finite set of discontinuity points $D$ such that $\mu(D)=0$. Can anyone give me a hint? I know a proof in the case where $f$ is continuous using the fact that for every $\varepsilon>0$ there exists $a<b$ such that $\mu((a,b])>1-\varepsilon$ and writing $$ \left \vert \int f d\mu_n-\int f d\mu \right \vert \le \left \vert \int_{(-\infty,a]} f d\mu_n-\int_{(-\infty,a]} f d\mu \right \vert + \left \vert \int_{(a,b]} f d\mu_n-\int_{(a,b]} f d\mu \right \vert + \left \vert \int_{(b,+\infty)} f d\mu_n-\int_{(b,+\infty)} f d\mu \right \vert $$ Is it a good path?
Let the discontinuities of $f$ be $d_1,\dots,d_m$. Write $f=\sum_{i} f_i1_{A_i}$ where the $A_i$ form a finite partition of $\mathbb{R}$ into open intervals whose endpoints are the discontinuity points (so the $A_i$ together with $\{d_1,\dots,d_m\}$ partition $\mathbb{R}$). Then for all $i$, $\int_{A_i} f_i\,d\mu_n\rightarrow \int_{A_i}f_i\,d\mu$, since each $f_i$ is continuous and bounded on $A_i$ and $\mu(\partial A_i)=0$ (the boundary points lie in $D$). Finally write $\int f\,d\mu_n=\sum_i \int_{A_i} f_i\,d\mu_n$ and pass to the limit in each of the finitely many terms.
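Not part of the answer, but a quick numerical illustration of the theorem (a plain-Python sketch; the measures $\mu_n$ uniform on $\{k/n\}$, the limit $\mu$ equal to Lebesgue measure on $[0,1]$, and the jump point $1/\sqrt{2}$ are my own choices, so $D=\{1/\sqrt2\}$ has $\mu(D)=0$):

```python
import math

def f(x):
    # bounded function with a single jump at the irrational point c,
    # so the discontinuity set has Lebesgue measure 0
    c = 1 / math.sqrt(2)
    return 1.0 if x >= c else 0.0

def integral_mu_n(n):
    # mu_n = uniform probability measure on {1/n, 2/n, ..., n/n},
    # which converges weakly to Lebesgue measure on [0, 1]
    return sum(f(k / n) for k in range(1, n + 1)) / n

exact = 1 - 1 / math.sqrt(2)      # integral of f against the limit measure
approx = integral_mu_n(100000)    # integral of f against mu_n for large n
```

The two numbers agree to within $O(1/n)$, as the theorem predicts.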
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Subsheaves and stalks This question has already been asked in some other posts but has not been clearly addressed. Suppose $F,G\subseteq H$ are two subsheaves of $H$, where $H$ is a sheaf on a topological space $X$. Then how does $F_x=G_x$ for every $x\in X$ imply $F=G$, or equivalently $F(U)=G(U)$ for every open $U\subseteq X$?
It is a general fact that if a morphism of sheaves induces an isomorphism on all the stalks, then it is an isomorphism. Working with strict equalities with sheaves is a bit awkward since there is so much data. I believe this is how it would be done using equality at the stalks. If $s \in F(U)$ is a section, then $[s,U] \in F_p$ for all $p \in U$. Then, since $F_p = G_p$, we know that there is some $U_p$ so that $[s|_{U_p}, U_p] \sim [s, U]$ is in $G_p$ (using the equality, some representative of $[s, U]$ is in $G_p$) so that $s|_{U_p} \in G(U_p)$. Then $\{U_p\}_{p \in U}$ is an open cover of $U$ with $s|_{U_p}$ all agreeing on intersections so by the sheaf condition, $s \in G(U)$. This shows that $F(U) \subseteq G(U)$ and the reverse direction is symmetric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Determine $\lim_{(x,y)\to(0,0)}\frac{\sin(xy)}{(x^4+y^4)}$ Determine $$\lim_{(x,y)\to(0,0)}\frac{\sin(xy)}{(x^4+y^4)}$$
Since $|\sin t| \leq |t|$ for all $t\in\mathbb{R}$, $$ |x^2 y \sin(xy^2)| \leq |x|^3 |y|^3 \leq \frac{x^6+y^6}{2}\,, $$ where the second inequality follows from the AM-GM inequality. This means $$ \frac{|x^2 y \sin(xy^2)|}{(x^4+y^4)\sqrt{x^2+y^2}} \leq \frac{1}{2} \frac{(x^6+y^6)}{(x^4+y^4)\sqrt{x^2+y^2}} \leq \frac{1}{2}\frac{2\max(x^6,y^6)}{\max(x^4,y^4)\sqrt{\max(x^2,y^2)}} = \max(|x|,|y|) $$ and $\lim_{(x,y)\to (0,0)} \max(|x|,|y|) = 0$, so you can conclude by the squeeze theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
How can I find the tangent plane of this piecewise function? My professor encouraged us to think about the following problem: Given $\ f(x,y) = \begin{cases} x^2+y^2-2y+1 & x^2=2y \\ 1-2y & x^2\neq2y \end{cases}$ find, if they exist, the tangent planes of $f(x,y)$ for the points: a) $(x_0,y_0)=(2,2)$ b) $(x_0,y_0)=(0,0)$ I know that the tangent plane equation is: $Z_t=f(x_0,y_0)+f_x(x_0,y_0)(x-x_0)+f_y(x_0,y_0)(y-y_0)$ My first instinct was to calculate $f_x(2,2)$ and $f_y(2,2)$ for $f(x,y)=x^2+y^2-2y+1$, given that $2^2=2(2)$. I found the following tangent plane: $Z_t=4x+2y-7$ Likewise for $(x_0,y_0)=(0,0)$, I found: $Z_t=1-2y$ BUT this appears to be completely wrong. I suspect I need to calculate the partial derivatives of this function applying the definition. For example: $f_x(2,2)= \lim_{x \to 2} \frac{\overbrace{f(x,2)}^{-3}-\overbrace{f(2,2)}^{-5}}{x-2}$ But this limit doesn't exist. I'm confused. Should I conclude that because the partial derivative with respect to $x$ doesn't exist at this point, then the tangent plane doesn't exist either? If I move on to $(x_0,y_0)=(0,0)$, I think I'm able to calculate the partial derivatives: $f_x(0,0)= \lim_{x \to 0} \frac{f(x,0)-f(0,0)}{x}=0$ and $f_y(0,0)= \lim_{y \to 0} \frac{f(0,y)-f(0,0)}{y}=2$ So I get the following tangent plane: $Z_t=1-2y$ Which I had found previously. I don't know what to make of that. Where am I going wrong? I could use some guidance
For a function to be differentiable at a point, the limit that defines the derivative needs to exist and be unique (the one-sided limits must agree). Your function is not continuous at $(x, y) = (2, 2)$, because $$f(2, 2) = 5 \ne -3 = \lim_{(x,y) \to (2,2),\ x^2 \neq 2y} f(x, y)$$ So it is not differentiable at that point (it is not even continuous). At $(0, 0)$ you don't have that problem, but just to make sure, you should calculate the values and derivatives of both branches of the definition at $(0, 0)$ and justify what I said. Generally speaking, you should always check these things at the border of the piecewise definitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do continuous functions "preserve limits" between spaces where uniqueness of limits isn't guaranteed? As far as I know (I think I am not mistaken), the following is true: "Given two Hausdorff spaces $X$ and $Y$ and $f$ a mapping between them, the following two statements are equivalent: * *$f$ is continuous *Given a sequence $(x_n)$ converging to $x$, the sequence $(f(x_n))$ converges to $f(x)$" My question is: can this be generalized to the case of "convergence without uniqueness"? The proposed generalization being: "Given two topological spaces $X$ and $Y$, a mapping $f$ between them is continuous if and only if, for any sequence $(x_n)$ converging to a set of points $A$, the corresponding sequence $(f(x_n))$ is convergent, and converges to the set $f(A)$" I especially care about the "only if" part, because I need it for further results that I'm trying to achieve. Of course, I already have a proof for this part that I think is correct, but I am asking this question here to put that proof to the test of collective scrutiny. The proof in question: "Let $X$ and $Y$ be topological spaces, and let $f$ be a continuous mapping between them. Let $(x_n)$ be a sequence converging to a set $A$, and let $x$ be a point in $A$. Then for each open set $G$ containing $x$, there exists a certain $n_0$ such that for any $n > n_0$, $x_n$ is in $G$. Let $H$ be an open set in $Y$ such that $f(x)$ is in $H$. Then $f^{-1}(H)$ is an open set in $X$ containing $x$. Then there exists $n_0$ with the above property for $f^{-1}(H)$. But $$x_n\in f^{-1}(H) \Leftrightarrow f(x_n) \in H$$ Then $f(x)$ is a limit point of the sequence $(f(x_n))$." Is this proof correct? Also, is the other implication true? Thank you for your attention.
The converse statement is false. Let $\tau_{\rm{usual}}$ be the usual topology on $\mathbb R$ and $\tau_{cc}$ be the co-countable topology on $\mathbb R$, that is: $$\tau_{cc} = \{U\subset \mathbb R~|~U= \emptyset \text{ or } \mathbb R\backslash U\text{ is at most countable} \}$$ Then, $\tau_{cc}$ is not Hausdorff, and a sequence in $(\mathbb R , \tau_{cc})$ is convergent if, and only if, it is eventually constant. Therefore, the identity $\text{id}:(\mathbb R,\tau_{cc}) \to (\mathbb R, \tau_{\rm{usual}})$ sends convergent sequences to convergent sequences (and limit points to limit points) but is not continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4135071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Filling station is being supplied with drinking water once a week. If its weekly volume of sales in thousands of gallons is a R.V. with pdf given by PROBLEM: A filling station is being supplied with drinking water once a week. If its weekly volume of sales in thousands of gallons is a random variable with pdf given by: $f(x)= \begin{cases} 5(1-x)^{4}, & 0<x<1 \\ 0, & \text{otherwise} \end{cases} $ What must the capacity of the tank be so that the probability of the supply's being exhausted in a given week is $0.01$? MY WORKING: As far as I have understood, the problem is asking for the value of $x$ where $P(X > x)=0.01$, but I am not sure. Can anyone guide me or give a hint?
You are looking for the value of $x$ that has a chance of $0.01$ of being exceeded. You are given a pdf. What you really want is the cdf, which is the integral of the pdf from $-\infty$ to $x$ and represents the chance that the random variable is less than $x$. You want the $x$ that makes the cdf equal to $0.99$.
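Following that hint (my own worked computation, not part of the answer): the cdf here is $F(x)=1-(1-x)^5$ on $(0,1)$, so the capacity solves $(1-x)^5 = 0.01$.

```python
# CDF of f(x) = 5(1-x)^4 on (0,1) is F(x) = 1 - (1-x)^5,
# so F(x) = 0.99 gives (1-x)^5 = 0.01
capacity = 1 - 0.01 ** (1 / 5)      # in thousands of gallons

# verify: P(X > capacity) = (1 - capacity)^5 should equal 0.01
tail = (1 - capacity) ** 5
```

This gives a capacity of roughly $0.602$ thousand gallons.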
{ "language": "en", "url": "https://math.stackexchange.com/questions/4135251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is $\lim_{t \to \infty} \frac{f(t)}{t}$ equal to $\lim_{t \to \infty} \frac{df}{dt}$? Under what circumstances are $\lim_{t \to \infty} \frac{f(t)}{t}$ and $\lim_{t \to \infty} \frac{df}{dt}$ equal? I'm aware that in some cases one or the other limit may not exist (e.g. if $f(t) = \sin(t)$ the limit of the derivative does not exist.) More precisely, I'm interested in whether the limits are equal if $f$ is monotonically increasing and smooth.
If $\lim_{x \to \infty}f(x) = \pm \infty$ and $\lim_{x \to \infty}f′(x)$ exists, then you have equality by L'Hopital’s rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4135595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Universal properties for groups arising from representations Consider the forgetful functor $U:\textbf{Grp}\to\textbf{Set}$ and its left adjoint $F$. The functor $X=\textbf{Set}(A,U(-))$ is represented by $F(A)$, i.e., $X\simeq H^{F(A)}$. By the Yoneda lemma, this gives a unique element $u\in X(F(A))$ (in fact $u=\eta_A$, the unit of the adjunction from the beginning) such that the following universal property holds: for any $p\in X(B)$ there exists a unique $p':F(A)\to B$ such that $p=U(p')\circ \eta_A$. But the functor $U$ itself is also representable, and I'm a little confused by the universal property that arises in the same manner as above if we write out the details. Here's what I'm getting: $U$ is represented by $F(1)=\mathbb Z$ ($1$ is the singleton set), i.e., there's a natural isomorphism $U\simeq H^{F(1)}=H^\mathbb Z$. This corresponds to some $u\in U(F(1))=U(\mathbb Z)$ that satisfies this universal property: For any arrow $p:1\to U(B)$, there is a unique arrow $p': \mathbb Z \to B$ such that $p=U(p')\circ \eta_1$. Is this right? This looks like a castrated version of the universal property above, and I don't know why would anyone use this instead of the comprehensive universal property above. Apparently (if I understand correctly), Leinster deduces from this property that $u$ in this case should be $\pm 1$. Does this make sense (just from what I wrote, without reading Leinster)?
$U$ is representable iff $\hom(1,U(-))$ is representable, since these functors are isomorphic. So the second universal property is really a special case of the first. The first is the universal property of any free group, the second is the universal property of the free group in one generator (i.e. $\mathbb{Z}$ under addition). The universal element here is an element $u \in \mathbb{Z}$ such that $\hom(\mathbb{Z},G) \to U(G)$, $f \mapsto f(u)$ is bijective for all groups $G$, and this only works for $u=\pm 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4135776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why complete space is good? I wonder why a complete space is a good space. I know that for a metric space $X$, if every Cauchy sequence $\{x_n\}$ in $X$ converges, then $X$ is called a complete metric space. But why is such a property good? Here, 'good' basically means: why is such a property useful? Also, why is such a property desired? Why does 'completion' matter? So yes, I need proper motivation for this concept. OK, every Cauchy sequence converges; so what?
Just one example: by the Banach fixed-point theorem, any contraction on a nonempty complete metric space has a unique fixed point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Recursive geometric series to explicit formula I'm trying to figure out how to convert this recursive formula $A_n = \frac{8}{9} A_{n-1} + \frac{6}{5} \left(\frac{20}{9}\right)^n $ into $A_n = 4 \left(\frac{8}{9}\right)^n + 2 \left(\frac{20}{9}\right)^n$ and I'm not sure how to approach it. The first thing I thought of was to try to find a pattern by substituting a previous term into the next one \begin{align*} A_n & = \frac{8}{9} A_{n-1} + \frac{6}{5} \left(\frac{20}{9}\right)^n \\ A_{n+1} & = \frac{8}{9} \left( \frac{8}{9} A_{n-1} + \frac{6}{5} \left(\frac{20}{9}\right)^n\right) + \frac{6}{5} \left(\frac{20}{9}\right)^{n+1} \\ & = \left(\frac{8}{9}\right)^2 A_{n-1}+ \frac{6}{5}\frac{8}{9}\left(\frac{20}{9}\right)^n + \frac{6}{5} \left(\frac{20}{9}\right)^{n+1} \\ & = \left(\frac{8}{9}\right)^2 A_{n-1}+ \frac{6}{5}\left(\frac{20}{9}\right)^n \left(\frac{8}{9}+\frac{20}{9}\right) \end{align*} But that didn't seem to work, although it's possible I simplified incorrectly (or not in the way you're supposed to) -- Actually, I think I forgot a crucial piece of information: $A_0 = 6$
$$A_{n+1}=aA_n+bc^n.$$ Try a solution of the form $dc^n$. You have $$dc^{n+1}=adc^n+bc^n,$$ giving $$d=\frac b{c-a}.$$ Now by subtraction, $$A_{n+1}-dc^{n+1}=aA_n+bc^n-dc^{n+1}$$ is $$(A_{n+1}-dc^{n+1})=a(A_n-dc^n),$$ which is easy to solve for $$A_n-dc^n$$ as it is a geometric progression.
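To reconcile this with the original recursion (my own verification, not part of the answer): writing the recursion as $A_{n+1}=aA_n+bc^n$ with $a=8/9$, $c=20/9$ and $b=\frac65\cdot\frac{20}{9}=\frac83$ gives $d=\frac{b}{c-a}=2$, and with $A_0=6$ the geometric progression yields $A_n = 4\left(\frac89\right)^n + 2\left(\frac{20}{9}\right)^n$. Exact rational arithmetic confirms this:

```python
from fractions import Fraction

a, c = Fraction(8, 9), Fraction(20, 9)
explicit = lambda n: 4 * a**n + 2 * c**n   # claimed closed form

A = Fraction(6)                            # A_0 = 6
ok = True
for n in range(1, 15):
    # the original recursion A_n = (8/9) A_{n-1} + (6/5)(20/9)^n
    A = a * A + Fraction(6, 5) * c**n
    ok = ok and (A == explicit(n))
```

Every term matches exactly, with no floating-point fuzz.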
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nef and semiample divisor Given a semiample divisor $D$ (that is, the induced morphism $\phi_{|mD|}$ for $m\gg 0$ is base point free) on a smooth projective variety $X$, it is easy to prove (using projection formula) that $D$ is nef, that is $D\cdot C\geq 0$ for any irreducible curve $C\subset X$. Therefore, I asked my self why the converse does not hold, that is I'd like to find an example of nef divisor which is not semiample. Unfortunately since I'm quite at the beginning I don't know how to proceed, and any hint or reference would be much appreciated, thanks in advance!
Nick L's answer links to one kind of example, where the divisor is nef and big, but not semiample. If you just ask for nef but not semiample, examples are maybe easier to come by. One kind of behaviour is given in Example 1.5.1 of the Lazarsfeld book that Nick L linked to. This example is a ruled surface $S$ with a line bundle $L$ which is nef, but such that for every natural number $m$, the bundle $L^m$ has no global sections. So $L$ is not semiample. If you want an example where the divisor is nef and effective but not semiample, fix a smooth cubic curve $C'$ in $\mathbf P^2$, take $S$ to be the blowup of $\mathbf P^2$ in 9 very general points, and take $C$ to be the proper transform of $C'$. Then $C$ is an irreducible curve with self-intersection $C^2=0$, so it is nef. But by choosing the points appropriately, you can arrange that $O_C(C)$ is any given line bundle $L$ of degree $0$ on $C$. In particular if we choose $L$ to be a non-torsion line bundle of degree 0, this implies that $O_C(mC)$ is nontrivial for all natural numbers $m$, and therefore that $mC$ is not basepoint-free for any $m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is $\dim_{B}F \le n$ in $\mathbb R^n$? (Upper Bound on Minkowski–Bouligand dimension) (Please skip to the end for a word on notation) For $F \subset\mathbb R^n$, where $F\ne \varnothing$, we have $$0 \le \underline\dim_B F \le \overline\dim_B F \le n$$ and hence $$\dim_B F \le n$$ where $\dim_B F$ denotes the box-counting dimension of $F$. $\underline\dim_B F$ and $\overline\dim_B F$ denote the lower and upper box-counting dimension respectively. I need help understanding the proof of the last inequality, i.e. the $\le n$ part. The proof in the book is: The first two inequalities are obvious; for the third, $F$ may be enclosed in a large cube $C$ so by counting $\delta$-mesh squares, $N_\delta(F) \leqslant N_\delta(C) \leqslant c\delta^{-n}$ for some constant $c$. Since $F\subset C$, $N_\delta(F) \le N_\delta(C)$ is clear. Why is $N_\delta(C) \le c\delta^{-n}$ for some $c$? Notation: * *About $\dim_B F$ and $N_\delta(F)$: The lower and upper box-counting dimensions of a subset $F$ of $\mathbb R^n$ are given by $$\underline{\dim}_B F = \underline{\lim}_{\delta\to 0} \frac{\log N_\delta(F)}{-\log \delta}$$ $$\overline{\dim}_B F = \overline{\lim}_{\delta\to 0} \frac{\log N_\delta(F)}{-\log \delta}$$ and the box-counting dimension of $F$ by $$\dim_B F = \lim_{\delta\to 0} \frac{\log N_\delta(F)}{-\log \delta}$$ (if this limit exists), where $N_\delta(F)$ is any of the following: * *the smallest number of sets of diameter at most $\delta$ that cover $F$; *the smallest number of closed balls of radius $\delta$ that cover $F$; *the smallest number of cubes of side $\delta$ that cover $F$; *the number of $\delta$-mesh cubes that intersect $F$; *the largest number of disjoint balls of radius $\delta$ with centres in $F$. *What is a $\delta$-mesh? $$[m_1\delta, (m_1+1)\delta] \times [m_2\delta, (m_2+1)\delta] \times \ldots \times [m_n\delta, (m_n+1)\delta]$$ where $m_i \in\mathbb Z$ for every $1 \le i \le n$, is called a $\delta$-mesh cube (the collection of all such cubes forms the $\delta$-grid) in $\mathbb R^n$.
If we simply enclose $F$ in a hypercube with side $s$, that cube can be broken down into $m^n$ subcubes with side $s/m$. Then we get that $$ \lim_{m\to\infty}\frac{\log(m^n)}{-\log(s/m)}=n $$ This simply says that any subset of $\mathbb{R}^n$ has dimension at most $n$ (which is a mathematical justification of what seems a pretty intuitive fact).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Recover a set of vectors from a Gram matrix that have conjugate symmetry I have a set of complex column vectors concatenated in a matrix, $X$, for which I have the corresponding Gram matrix $G = X^*X$. Because of the basis that I'm using (complex spherical harmonics), the vectors have a conjugate symmetry that can be expressed as $\overline{X} = MX$ where $M = \begin{pmatrix}0 & 0 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix}\text{.}$ Otherwise put, if we label the rows of $X$ as $-1, 0, 1$ then $X_{m, n} = (-1)^m \overline{X_{-m, n}}$. Note that this symmetry guarantees that the Gram matrix is real. Now, suppose I'm given only the Gram matrix, $G$, and I want to recover any corresponding set of vectors $X$ that obey this conjugate symmetry, how can I do this? I've tried a Cholesky decomposition $G = LL^{*}$, but this gives triangular matrices which clearly don't fulfill the conjugate symmetry. Then I thought that maybe I can combine my symmetry criterion with a rotation of $L$ somehow, i.e. $G = LQQ^*L^*$, and find a $Q$ that would rotate $L$ into the complex plane such that the symmetry criterion is satisfied. But I couldn't find a way to do this. After attempting various decompositions of the Gram matrix I couldn't find a way that would give me a suitable set of vectors. Can anybody help?
The ideas that you tried are pretty close. For succinctness I assume $X$ has linearly independent columns. You have $G=X^*X = \overline{X}^TX = (MX)^TX = X^TMX$. So $G$ is Hermitian and symmetric and hence real. Now run Cholesky over reals: $G=LL^T$ then $X=QR$, with $R:=L^T$ and $Q$ is some unitary matrix that you may select. $M(QR) = MX = \overline{X}= \overline{(QR)}= \overline{Q}R\implies MQ = \overline{Q}$ At this point, examining $M$ and $ MQ = \overline{Q}$ implies that row $2$ of $Q$ is real, and you just need to find rows 1 and 3 that are orthonormal to each other (and row 2) and are negative conjugates of each other. So for example you may select, $Q:= \begin{pmatrix}-\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\cdot i \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}}\cdot i \end{pmatrix}\text{.}$
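A numerical check of this recipe (my own script, assuming NumPy is available; $M$ and $Q$ are taken from the answer above, while the example matrix $X$ is an arbitrary choice satisfying the symmetry $\overline X = MX$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[0, 0, -1], [0, 1, 0], [-1, 0, 0]], dtype=complex)

# build an X with the conjugate symmetry: rows indexed m = -1, 0, 1,
# middle row real, top row = -conj(bottom row)
bottom = rng.normal(size=3) + 1j * rng.normal(size=3)
middle = rng.normal(size=3)
X = np.vstack([-bottom.conj(), middle, bottom])
assert np.allclose(X.conj(), M @ X)

# the Gram matrix is real by the symmetry (.real only strips rounding noise)
G = (X.conj().T @ X).real

# recovery from G alone: real Cholesky G = L L^T, then X_rec = Q L^T
L = np.linalg.cholesky(G)
s = 1 / np.sqrt(2)
Q = np.array([[-s, 0, s * 1j], [0, 1, 0], [s, 0, s * 1j]])
X_rec = Q @ L.T
```

The recovered `X_rec` need not equal `X`, but it reproduces the same Gram matrix and obeys the required conjugate symmetry, which is all the question asks for.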
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving these two matrix LMIs are the same. For a matrix A $\in R^{nxn}$, the two statements are equivalent: * *There exists matrix $P > 0$, such that $A^TP+PA<0$ *There exists matrix $P \geq 0$, such that $A^TP+PA+I\leq0$ Is this true?
Yes, they are equivalent. Suppose $P>0$ and $A^TP+PA<0$. Let $r=-\lambda_\max(A^TP+PA)$. Then $P_1\ge0$ and $A^TP_1+P_1A+I\le0$, where $P_1=\frac1rP$. Conversely, suppose $P\ge0$ and $A^TP+PA+I\le0$. Then $A^TP+PA\le-I<0$. Now suppose $Px=0$. Then $x^T(A^TP+PA)x=x^TA^T(Px)+(x^TP)Ax=0$. Since $A^TP+PA$ is negative definite, we must have $x=0$. Hence $P$ is nonsingular and in turn, it is positive definite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4136939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does every bilinear (symmetric) form admit a self-adjoint map representation? Let $Q(x,y)$ be a bilinear (symmetric) form on a finite dimensional vector space $V$, $Q: V \times V \to \mathbb{R}$. Is it true that it will admit a linear self-adjoint map $K$ such that $$(K(x),y) = Q(x,y)$$? This looks like some sort of Riesz representation, but I am not too sure. I found this claim in a proof I am reading, but I am not sure if it is true in general.
Your space has to be Euclidean, i.e. equipped with an inner product. If you fix $x$ in your bilinear form, you get a linear form in $y$. By the Riesz representation theorem, this linear form is given by the inner product with a vector $K(x)$. It is easy to see that the map $x\mapsto K(x)$ is linear, and the symmetry of $Q$ gives $(K(x),y) = Q(x,y) = Q(y,x) = (K(y),x) = (x,K(y))$, so $K$ is self-adjoint. No coordinate representation is needed here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $8(5)^n + 6(3)^{2n} - 80n - 14$ is divisible by $512$ for all $n \in \mathbb{N}$ by induction. Note: question was updated to correct the constant term $-14$ (vs. $-40$). We need to prove that $8(5)^n + 6(3)^{2n} - 80n - 14$ is divisible by $512$ for all $n \in \mathbb{N}$. I started by taking $n = 1$, then for $n = k$ and then for $n = k+1$. I am stuck here. Not able to solve further but still trying. What to do next? Thanks for your time.
soupless gave a good answer; I do think, for completeness, that it would be instructive to elaborate and show that $32$ indeed divides $g(n) \doteq [2 \times 5^{n}]+ [3 \times 9^{n}] - 5$ for $n=1,2,3,\ldots$ This is clearly true for $n=1$. Then to show that $32|g(n)$ for general $n$, it suffices to show that $32|[g(n+1) - g(n)]$. To this end, we use the technique that @soupless used above in his answer, and note the following: $$g(n+1)-g(n) = (8 \times 5^n) + (24 \times 9^{n})$$ $$ = 8 \times [5^n + (3 \times 9^n)].$$ Then $32$ indeed divides $g(n+1)-g(n)$ iff $4$ divides $5^n+(3 \times 9^n)$. And it is easy to see that $4$ divides $5^n+3 \times 9^n$; to elaborate, $5^n \equiv_4 1$ and $9^n \equiv_4 1$, so $5^n+3 \times 9^n$ $\equiv_4$ $1+3 \equiv_4 0$.
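A brute-force check of the original claim (my script, independent of the induction): the expression $8\cdot5^n + 6\cdot3^{2n} - 80n - 14$ is divisible by $512$ for many values of $n$.

```python
def f(n):
    # 6 * 3^(2n) = 6 * 9^n
    return 8 * 5**n + 6 * 9**n - 80 * n - 14

divisible = all(f(n) % 512 == 0 for n in range(1, 50))
```

Note $f(1)=0$ and $f(2)=512$, so the base cases are as small as possible.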
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to calculate the limit $\lim_{n\to\infty}\left(\frac{1}{\sqrt{n^2+n+1}}+\frac{1}{\sqrt{n^2+n+2}}+\cdots+\frac{1}{\sqrt{n^2+n+n}}\right)^n$ $$I=\lim_{n\to\infty}\left(\frac{1}{\sqrt{n^2+n+1}}+\frac{1}{\sqrt{n^2+n+2}}+\cdots+\frac{1}{\sqrt{n^2+n+n}}\right)^n$$ I tried $$\frac{n}{\sqrt{n^2+n+n}}\leq\frac{1}{\sqrt{n^2+n+1}}+\frac{1}{\sqrt{n^2+n+2}}+\cdots+\frac{1}{\sqrt{n^2+n+n}}\leq\frac{n}{\sqrt{n^2+n+1}}$$ But$$\lim_{n\to\infty}\left(\frac{n}{\sqrt{n^2+n+n}}\right)^n=\lim_{n\to\infty}\left(1+\frac2n\right)^{\displaystyle-\frac{n}{2}}=e^{\displaystyle \lim_{n\to\infty}-\frac n2\ln(1+\frac2n)}=\frac1e$$ And in the same way,i got $$\lim_{n\to\infty}\left(\frac{n}{\sqrt{n^2+n+1}}\right)^n=\lim_{n\to\infty}\left(1+\frac1n+\frac{1}{n^2}\right)^{\displaystyle-\frac{n}{2}}=e^{\displaystyle \lim_{n\to\infty}-\frac n2\ln(1+\frac1n+\frac{1}{n^2})}=\frac{1}{\sqrt e}$$ So i only got $$\frac1e\leq I\leq\frac{1}{\sqrt e}$$ Could someone help me get the value of $I$. Thanks!
I use the Binomial expansion repeatedly without using logs, as in Tavish's answer. I just want to show that such a method works for this question. The method is long, but it is neither difficult to understand nor ugly. Write $\ t = n^2 + n.\ $ Then we have \begin{align}\frac{1}{\sqrt{n^2+n+1}}+\frac{1}{\sqrt{n^2+n+2}}+\ldots+\frac{1}{\sqrt{n^2+n+n}}\\ \\ = \frac{1}{\sqrt{t+1}}+\frac{1}{\sqrt{t+2}}+\ldots+\frac{1}{\sqrt{t+n}}\\ \\ =\frac{1}{\sqrt{t}}\left( \frac{1}{\sqrt{1+\frac{1}{t}}} + \frac{1}{\sqrt{1+\frac{2}{t}}} +\ldots + \frac{1}{\sqrt{1+\frac{n}{t}}} \right)\\ \\ =\frac{1}{\sqrt{t}}\left( \left(1+\frac{1}{t}\right)^{-\frac{1}{2}} + \left(1+\frac{2}{t}\right)^{-\frac{1}{2}} +\ldots + \left(1+\frac{n}{t}\right)^{-\frac{1}{2}} \right)\\ \\ =\frac{1}{\sqrt{t}}\left(\left[ 1+\left(-\frac{1}{2}\right)\left(\frac{1}{t}\right) + \ldots \right] + \left[ 1+\left(-\frac{1}{2}\right)\left(\frac{2}{t}\right) + \ldots \right] +\ldots + \left[ 1+\left(-\frac{1}{2}\right)\left(\frac{n}{t}\right) + \ldots \right] \right)\\ \\ =\frac{1}{\sqrt{t}}\left( 1\times n + \left(-\frac{1}{2}\right)\times\left(\frac{1}{t}\right)\times \left( \displaystyle\sum_{i=1}^n i \right) + O\left(\frac{1}{n}\right) \right)\\ \\ =\frac{1}{\sqrt{n^2+n}} \left( n + \frac{\left(-\frac{1}{2}\right)\left(\frac{1}{2}\right)n(n+1)}{n^2+n} + O\left(\frac{1}{n}\right) \right) \\ \\ =\frac{n-\frac{1}{4}+O\left(\frac{1}{n}\right)}{\sqrt{n^2+n}}\\ \\ =\frac{\left(n-\frac{1}{4}+O\left(\frac{1}{n}\right)\right)\left( \sqrt{n^2+n} \right)}{n^2+n}\\ \\ =\frac{\left(n-\frac{1}{4}+O\left(\frac{1}{n}\right)\right) \times n \times \left( \sqrt{1+\frac{1}{n}} \right)}{n^2+n}\\ \\ =\frac{\left(n-\frac{1}{4}+O\left(\frac{1}{n}\right)\right) \times \left( \left(1+\frac{1}{n}\right)^{\frac{1}{2}} \right)}{n+1}\\ \\ =\frac{\left(n-\frac{1}{4}+O\left(\frac{1}{n}\right)\right) \times \left(1+\left(\frac{1}{2}\right) \left(\frac{1}{n}\right) + O\left(\frac{1}{n^2}\right) \right)}{n+1}\\ \\ 
=\frac{n+\frac{1}{4}+O\left(\frac{1}{n}\right)}{n+1} \\ \\ =1 - \left(\frac{3}{4}\right)\frac{1}{n+1} + O\left(\frac{1}{n^2}\right). \\ \\ \end{align} Therefore, \begin{align} I=\displaystyle\lim_{n\to\infty}\left(\frac{1}{\sqrt{n^2+n+1}}+\frac{1}{\sqrt{n^2+n+2}}+\cdots+\frac{1}{\sqrt{n^2+n+n}}\right)^n\\ \\ =\displaystyle\lim_{n\to\infty}\left(\ \left(1 - \left(\frac{3}{4}\right)\frac{1}{n+1}\right) + O\left(\frac{1}{n^2}\right)\ \right)^n\\ \\ =\displaystyle\lim_{n\to\infty}\left(\ \left(1 - \left(\frac{3}{4}\right)\frac{1}{n+1}\right)^n + \underset{\Large{\to 0 \text{ as } n\to \infty}}{\underbrace{\displaystyle\sum_{k=1}^n \binom{n}{k} \left(1 - \left(\frac{3}{4}\right)\frac{1}{n+1}\right)^{n-k} O\left(\frac{1}{\left(n^2\right)^k}\right)}}\ \right)\\ \\ =\displaystyle\lim_{n\to\infty}\left(\ \left(1 - \left(\frac{3}{4}\right)\frac{1}{n+1}\right)^{n+1}\ \cdot\ \underset{\Large{\to 1 \text{ as } n\to \infty}}{\underbrace{\left(1 - \left(\frac{3}{4}\right)\frac{1}{n+1}\right)^{-1}}} \right)\\ \\ =\displaystyle\lim_{(n+1)\to\infty}\left(\ \left(1 + \left(-\frac{3}{4}\right)\frac{1}{n+1}\right)^{n+1} \right)\\ \\ =e^{-\frac{3}{4}}.\\ \end{align} Within the proof, I actually left out a couple of details for the sake of brevity. For example, getting from line $5$ to line $6$ isn't immediately obvious, but is in fact not too difficult to prove. For the final step, see here.
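Not part of either answer, but a numerical sanity check of the value $e^{-3/4}$ (a Python sketch; the cutoff $n=10^5$ is an arbitrary choice):

```python
import math

def S(n):
    # the bracketed sum, with compensated summation for accuracy
    return math.fsum(1 / math.sqrt(n * n + n + k) for k in range(1, n + 1))

n = 100000
value = S(n) ** n
target = math.exp(-0.75)     # the claimed limit e^{-3/4} ~ 0.4724
```

Since the error in the expansion is $O(1/n)$, the agreement at $n=10^5$ is already to several digits.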
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 0 }
Inequality between the rank of a matrix and the $\ell_1$-norm Let $B=[b_{ij}]\in M_n(\mathbb C)$ and let $b_j$ denote the $j$th column of $B$. Problem: Prove that $$\text{rank}\, B\geq\sum_{b_j \neq 0} \frac{|b_{jj}|}{\| b_j\|_1}$$ where $\|b_j\|_1 = \sum_{i=1}^n |b_{ij}|.$ I don't have any idea or approach on how to show this inequality. Any help and hints would be much appreciated.
Sketch of proof: Find an invertible, diagonal matrix $D$ such that all columns of the matrix $M = BD$ have $1$-norm $0$ or $1$ and the diagonal entries of $M$ are all non-negative. Observe that the rank of $M$ is equal to the rank of $B$. On the other hand, note that $\|M\|_1 \leq 1$, which implies that all eigenvalues of $M$ have magnitude at most $1$. Conclude that $$ |\operatorname{tr}(M)| \leq \operatorname{rank}(M). $$
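The inequality can also be checked empirically on random low-rank matrices (my own script, assuming NumPy; this tests the statement, not the proof sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def lower_bound(B):
    # sum of |b_jj| / ||b_j||_1 over nonzero columns
    total = 0.0
    for j in range(B.shape[1]):
        col_norm = np.abs(B[:, j]).sum()
        if col_norm > 0:
            total += abs(B[j, j]) / col_norm
    return total

ok = True
for _ in range(200):
    n = int(rng.integers(2, 6))
    r = int(rng.integers(1, n + 1))
    # random complex matrix of rank r (product of n x r and r x n factors)
    B = (rng.normal(size=(n, r)) + 1j * rng.normal(size=(n, r))) @ \
        (rng.normal(size=(r, n)) + 1j * rng.normal(size=(r, n)))
    ok = ok and np.linalg.matrix_rank(B) >= lower_bound(B) - 1e-9
```

The small tolerance only guards against floating-point rounding in the bound.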
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
what is the explanation for this step? I am trying to understand how to evaluate $\frac{1}{z^2}$ around $z_0 = 1$, so $$\frac{1}{z^2}=\left(\frac{1}{z}\right)^2=\sum_{n=0}^{\infty}(-1)^n(z-1)^n \cdot \sum_{m=0}^{\infty}(-1)^m(z-1)^m$$ $$=\sum_{n=0}^{\infty}\sum_{m=0}^{n}(-1)^n(z-1)^n$$ and the explanation in the book is the word "convolution." I don't understand how it is related to that step; I tried to examine the definition of convolution again, but still nothing came to mind.
To simplify the notation let $x=(-1)(z-1)$; we want to show that $$\left(\sum_{n\ge 0}x^n\right)\left(\sum_{m\ge 0}x^m\right)=\sum_{n\ge 0}\sum_{m=0}^nx^n\,.\tag{1}$$ If you multiply out the lefthand side as if it were a product of two polynomials, you get a term $x^n\cdot x^m=x^{n+m}$ for each pair $\langle n,m\rangle$ of non-negative integers. If $k$ is a non-negative integer, how many of these terms are equal to $x^k$? The terms equal to $x^k$ are precisely the terms $x^{n+m}$ such that $n+m=k$, and there is one for each value of $n$ from $0$ through $k$: these terms are $x^{0+k},x^{1+(k-1)},x^{2+(k-2)},\ldots,x^{(k-1)+1}$, and $x^{k+0}$. Thus, $$\begin{align*} \left(\sum_{n\ge 0}x^n\right)\left(\sum_{m\ge 0}x^m\right)&=\sum_{k\ge 0}\sum_{n=0}^kx^{n+(k-n)}\\ &=\sum_{k\ge 0}\sum_{n=0}^kx^k\,. \end{align*}$$ And we can rename the index variables $k$ and $n$ to $n$ and $m$, respectively, without changing the value, so we have $$\begin{align*} \left(\sum_{n\ge 0}x^n\right)\left(\sum_{m\ge 0}x^m\right)&=\sum_{k\ge 0}\sum_{n=0}^kx^k\\ &=\sum_{n\ge 0}\sum_{m=0}^nx^n\,. \end{align*}$$ Note that $\sum_{m=0}^nx^n$ is just the sum of $n+1$ copies of $x^n$, so in fact the whole thing can be further simplified to $$\sum_{n\ge 0}(n+1)x^n\,.$$
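One can confirm identity $(1)$ concretely on truncated polynomials (my own check, in plain Python): the coefficient of $x^k$ in the product of two copies of $1+x+\cdots+x^N$ is exactly $k+1$ for every $k\le N$, which is the convolution count described above.

```python
N = 30
geom = [1] * (N + 1)                 # coefficients of 1 + x + ... + x^N

# convolution: coefficient of x^(n+m) picks up one term per pair (n, m)
product = [0] * (2 * N + 1)
for n_, cn in enumerate(geom):
    for m_, cm in enumerate(geom):
        product[n_ + m_] += cn * cm

# for k <= N the truncation does not matter: coefficient of x^k is k + 1
low_order = product[: N + 1]
```

The high-order coefficients (for $k>N$) are truncation artifacts and are ignored.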
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is this a function? - Analysis. Let $X = \{1, 2, 3\}, Y = \{4, 5, 6\}$. Define $F \subseteq X \times Y$ as $F = \{(1, 4),(2, 5),(3, 5)\}$. Then $F$ is a function. I simply do not see how this could be a function, as there is nothing that it is mapping to, if anyone can explain how this is a function, that would be lovely.
Recall that a function is comprised of three parts: * *Domain *Codomain *The way to uniquely associate every element in its domain to an element in its codomain So it's perfectly OK to denote a function $f$ as the set of all pairs $(x, f(x))$, since it encodes all the necessary information to represent a function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4137974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Using definite integral to find the limit. I would like someone to verify this exercise for me. Please. Find the following limit: $\lim\limits_{n \to \infty}\left(\dfrac{1}{n+1}+\dfrac{1}{n+2}+...+\dfrac{1}{3n}\right)$ $=\lim\limits_{n \to \infty}\left(\dfrac{1}{n+1}+\dfrac{1}{n+2}+...+\dfrac{1}{n+2n}\right)$ $=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n} \dfrac{1}{n+k}$ $=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n} \dfrac{1}{n\left(1+\frac{k}{n}\right)}$ $=\lim\limits_{n \to \infty}\sum\limits_{k=1}^{2n}\left(\dfrac{1}{1+\frac{k}{n}}\cdot\dfrac{1}{n}\right)$ $=\displaystyle\int_{1+0}^{1+2} \frac{1}{x} \,dx$ $=\displaystyle\int_{1}^{3} \frac{1}{x} \,dx$ $=\big[\ln|x|\big] _{1}^3$ $=\ln|3|-\ln|1|$ $=\ln(3)-\ln(1)$ $=\ln(3)$
\begin{align}S_n&=\dfrac{1}{n+1}+\dfrac{1}{n+2}+...+\dfrac{1}{3n}\\ &=\sum_{k=1}^{2n}\frac{1}{n+k}\\ &=\sum_{k=1}^{n}\frac{1}{n+k}+\underbrace{\sum_{k=n+1}^{2n}\frac{1}{n+k}}_{l=k-n}\\ &=\sum_{k=1}^{n}\frac{1}{n+k}+\sum_{l=1}^{n}\frac{1}{2n+l}\\ &=\sum_{k=1}^{n}\left(\frac{1}{n+k}+\frac{1}{2n+k}\right)\\ &=\frac{1}{n}\sum_{k=1}^{n}\left(\frac{1}{1+\frac{k}{n}}+\frac{1}{2+\frac{k}{n}}\right)\\ \end{align} Therefore, \begin{align}\lim_{n\rightarrow \infty}S_n&=\int_0^1 \left(\frac{1}{1+x}+\frac{1}{2+x}\right)dx\\ &=\Big[\ln(1+x)+\ln(2+x)\Big]_0^1\\ &=\ln 2+\ln 3-0-\ln 2\\ &=\boxed{\ln 3} \end{align}
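As a numerical cross-check of the boxed value (not part of the original answer; plain Python):

```python
import math

def S(n):
    # S_n = 1/(n+1) + 1/(n+2) + ... + 1/(3n)
    return sum(1.0 / (n + k) for k in range(1, 2 * n + 1))

for n in (10, 100, 10000):
    print(n, S(n), abs(S(n) - math.log(3)))
# The error shrinks like O(1/n), consistent with S_n -> ln 3.
```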
{ "language": "en", "url": "https://math.stackexchange.com/questions/4138346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to show sequence converges What I want to show is that the following sequence converges to 1. $a_n = \frac{\sum_{i=0}^n i!}{n!} $ My initial strategy was to use the monotone sequence theorem. It's obvious that each term is positive, so it is bounded below by $0$. Plugging this sequence into Desmos shows it appears to be decreasing (if $m>n$, then $a_m \le a_n$). Where I struggle is proving that it is decreasing. I've run into roadblocks trying to show either $a_{n+1}-a_n \le0$ or $\frac{a_{n+1}}{a_n}\le 1$. If I try to do the difference directly, I get stuck at showing $1-(1-\frac{1}{n+1})\frac{1}{n!}\sum_{i=0}^n i!\le 0$. If I try and do the quotient, I get stuck at showing $\frac{n!}{\sum_{i=0}^ni!} +\frac{1}{n+1}<1$. Is there something that I'm missing in the above attempts, or is there an easier way to show this converges using a different method?
Quotients are usually the way to go with factorials, but in this case the differences work, too. $$a_n - (n+1)a_{n+1} = \frac{1}{n!}\left(\sum_{i=1}^{n}i!-\sum_{i=1}^{n+1}i!\right) = -(n+1)$$ $$\implies a_n-a_{n+1} = na_{n+1}-n-1$$ And for $n\ge 2$, $$a_{n+1} = \frac{\sum_{i=1}^{n+1}i!}{(n+1)!} = 1 + \frac{\sum_{i=1}^{n}i!}{(n+1)!} \ge 1 + \frac{n!+(n-1)!}{(n+1)!} = 1 + \frac{1}{n+1}+\frac{1}{n(n+1)} = 1 + \frac{1}{n},$$ with strict inequality as soon as $n\ge 3$ (the numerator then also contains the term $(n-2)!$). This means $$a_{n+1} \ge 1 + \frac{1}{n} \implies a_n-a_{n+1} = na_{n+1}-n-1 \ge 0,$$ so the sequence is decreasing (strictly for $n\ge 3$), which is all the monotone convergence argument needs.
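Not from the original answer — a quick exact-arithmetic check of both claims (monotone decrease and the limit $1$), using the question's convention that the sum starts at $i=0$:

```python
from fractions import Fraction
from math import factorial

def a(n):
    # a_n = (0! + 1! + ... + n!) / n!, computed exactly
    return Fraction(sum(factorial(i) for i in range(n + 1)), factorial(n))

vals = [a(n) for n in range(1, 15)]
for x, y in zip(vals, vals[1:]):
    assert y <= x  # non-increasing

# a_n - 1 = (sum_{i=0}^{n-1} i!) / n! is dominated by (n-1)!/n! = 1/n,
# so a_n -> 1.
print(float(a(5)), float(a(20)), float(a(100)))
```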
{ "language": "en", "url": "https://math.stackexchange.com/questions/4138492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
To show an entire function is constant under a condition. Let $f$ be an entire function such that $$|f(z)|\leq |z| |Re(z)|$$ for all $z$. Then prove that $f$ is a constant function. I tried it as $$|f(z)|\leq |z| |Re(z)|\leq |z^2|$$ so by the extended Liouville theorem $$f(z)=az^2$$ Now it remains to prove that the constant $a$ is zero. $ |f^{(n)}(0)|=|\frac{n!}{2\pi}\int\frac{f(z)dz}{z^{n+1}}|\leq \frac{n!}{2\pi}\int_{|z|=R}\frac{R|Re(z)|}{R^{n+1}}|dz|.$ Now I am unable to proceed further. Please help. Thank you.
$f$ is zero on the imaginary axis. Using the identity theorem, it follows that $f$ is identically zero. If you have already shown that $f(z)=az^2$ then you don't need the identity theorem: $f(iy) = 0$ for $y \in \Bbb R$ implies that $a=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4138673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Methods to find the value of $\int_0^{2\pi} \frac{dt}{z+e^{it}}$ Given $z \in \mathbb{C}$, with $|z|\neq1$, what are the classic methods to determine the value of: $$\int_0^{2\pi} \dfrac{dt}{z+e^{it}}$$ I thought of rewriting the integral with only real integrands but it seems very tedious. I am not asking necessarily for an answer but I would like to know how can I approach this kind of integral in order to be quite efficient. I never saw contour integration, so I am not going to use it in any case. But if you have a solution that involves this concept, I would be glad to read it anyway.
Consider the mean value property of analytic functions: $$f\left( z \right)=\frac{1}{2\pi }\int\limits_{0}^{2\pi }{f\left( z+{{e}^{it}} \right)dt}.$$ Applying it to $f(w)=1/w$ gives $$\int\limits_{0}^{2\pi }{\frac{1}{z+{{e}^{it}}}dt}=\frac{2\pi }{z}.$$ This is valid whenever $f$ is analytic on the closed unit disc centred at $z$ — for $f(w)=1/w$, that means whenever $|z|>1$; for example, it holds for all $z$ with $\operatorname{Re}\left( z \right)>1$.
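The formula $2\pi/z$ covers $|z|>1$. For $|z|<1$ the integral turns out to be $0$ (this is what a residue computation would give, although the question asked to avoid contour integration). Both behaviours are easy to confirm numerically — a sketch, not part of the original answer:

```python
import math, cmath

def integral(z, m=4096):
    # Riemann sum for the integral of dt / (z + e^{it}) over [0, 2*pi];
    # for a smooth periodic integrand this converges extremely fast in m.
    h = 2 * math.pi / m
    return sum(h / (z + cmath.exp(1j * j * h)) for j in range(m))

for z in (2 + 0j, 3 - 1j, 1.5j):      # |z| > 1: expect 2*pi/z
    print(z, integral(z), 2 * math.pi / z)
for z in (0.3 + 0.2j, -0.5j):         # |z| < 1: expect 0
    print(z, integral(z))
```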
{ "language": "en", "url": "https://math.stackexchange.com/questions/4138827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Is $\text{Frac}(O_K)=K$? Let $K$ be a finite extension of the $p$-adic field $\mathbb{Q}_p$ with ring of integers $O_K$. We know that $\mathbb{Q}_p$ is the field of fractions of $p$-adic integers $\mathbb{Z}_p$ i.e., $$\mathbb{Q}_p=\text{Frac}(\mathbb{Z}_p).$$ Similarly, $O_K$ is also an integral domain with no zero divisor. So we can take field of fractions of $O_K$. My question: Is $\text{Frac}(O_K)=K$ ? I believe so. any help please
Let $x\in K$. Then we have $x\in\mathcal{O}_{K}$ if $|x|\leq 1$. Otherwise $x^{-1}\in\mathcal{O}_{K}$, since $|x^{-1}| = |x|^{-1} < 1$ for $|x| > 1$. In either case $x\in\mathrm{Frac}(\mathcal{O}_{K})$, and hence $K\subset\mathrm{Frac}(\mathcal{O}_{K})$. The reverse inclusion is clear: $\mathcal{O}_{K}\subset K$ and $K$ is a field, so $\mathrm{Frac}(\mathcal{O}_{K})\subset K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4138950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are there infinitely many natural numbers that can be represented as $\frac{a^2+b^2}{c^2+d^2}$ with coprime $a$, $b$, $c$, $d$? Are there infinitely many natural numbers that can be represented as the ratio $$ \frac{a^2 + b^2}{c^2+d^2} $$ where $a$, $b$, $c$, $d$ are coprime integers? Is there a way to characterize all such numbers?
Suppose the number $N > 1$ is such that its prime decomposition has no term of the form $p^k$ with $k$ odd and $p \equiv 3 (4)$. Then it can be written as $\frac{a^2 + b^2}{1^2 + 1^2}$. This doesn't mean $a$ and $b$ are coprime though. Now suppose the prime factorization of $N = p_1^{k_1} \cdots p_r^{k_r}$ with some being congruent to 3 modulo 4 and having odd powers. Then if we were to write $N = \frac{M}{O}$ then $M = p_1^{k_1 + \ell_1} \cdots p_r^{k_r + \ell_r} p_{r+1}^{\ell_{r+1}} \cdots p_s^{\ell_s}$ and $O = p_1^{\ell_1} \cdots p_r^{\ell_r} p_{r+1}^{\ell_{r+1}} \cdots p_s^{\ell_s}$ where we are free to choose the $\ell$'s among natural numbers as long as they are not all $0$. Then $M$ and $O$ are integers greater than $1$ that we want to write as a sum of two squares each. Suppose $p_j$ was such an offending prime. Then we know that $k_j$ was odd. But now we have it in $k_j + \ell_j$ and $\ell_j$ as its powers in $M$ and $O$. If we use $\ell_j$ as odd, then $O$ can't be written as a sum of two squares. If we use $\ell_j$ as even, then $M$ can't be written as a sum of two squares because the factor $p_j^{k_j+\ell_j}$ has an odd power. So even by allowing the seemingly more general form of $\frac{a^2 + b^2}{c^2 + d^2}$ you have not gained any more natural numbers that can be written besides those that were already of the form $a^2 + b^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4139264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Helen and Joe play guitar together every day at lunchtime. Helen and Joe play guitar together every day at lunchtime. The number of songs that they play on a given day has a Poisson distribution, with an average of 5 songs per day. Regardless of how many songs they will play that day, Helen and Joe always flip a coin at the start of each song, to decide who will play the solo on that song. If we know that Joe plays exactly 4 solos on a given day, then how many solos do we expect that Helen will play on that same day? My attempt: If the average is $5$ songs a day and Joe performs $4$ solos on one day. I thought we should expect Helen to perform $1$ solo on the same day $(5-4=1)$ But The answer given to me is: $2.5$ solos we expect Helen to play My question is why? What is the way of thinking that gives me $2.5$? Is it cause of the coin flip? so $5 \cdot .5 = 2.5$? What does Joe's $4$ solos have to do with anything then? Thank You for any help.
Here's some intuition: The Poisson distribution is the limiting case for a binomial distribution where the number of trials goes to infinity while the success probability shrinks proportionally to keep the total expectation constant. So we can imagine Helen and Joe spending their lunch break making a very large number of Bernoulli trials, with each giving a small probability of "play a song now". When the number of trials grows large we can fold the coin flip into that process, deciding that when "play a song now" comes out at trial number $n$, Helen plays the solo if $n$ is odd, and Joe plays the solo if $n$ is even. But this effectively means that each of Helen and Joe might as well be making their own separate series of trials, with half as many trials but the same probability, and therefore half the expected value. The number of solos played by each is independent of the other, and still Poisson distributed. The number of solos Helen plays is independent of the number of solos Joe plays, so his $4$ solos is not actually relevant information. An alternative (and perhaps slightly more rigorous) phrasing of the same reinterpretation: In each step, instead of first asking "should we play a song now?" and then "who should play the solo if we do play?", in each step we can ask "should Joe play now?" and "should Helen play now?" each independently with half the probability. That's almost the same, except that there's a finite chance the answer would be that both should play. But when we go to the limit of infinity many steps, the risk of that happening in any given step goes to zero as $1/N^2$, and so the probability of it ever happening during the entire lunch break goes to zero as $1/N$. Therefore the risk becomes irrelevant in the limit where the distributions become Poisson.
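A Monte Carlo simulation makes the thinning argument concrete (a sketch, not part of the original answer; the song counts and coin flips are simulated directly):

```python
import math, random

random.seed(0)

def poisson(lam):
    # Knuth's algorithm, fine for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

helen_given_joe4 = []
for _ in range(100_000):
    songs = poisson(5.0)
    joe = sum(random.random() < 0.5 for _ in range(songs))  # coin flips
    if joe == 4:
        helen_given_joe4.append(songs - joe)

print(len(helen_given_joe4))
print(sum(helen_given_joe4) / len(helen_given_joe4))  # close to 2.5
```

The conditional average comes out close to $2.5$ no matter which value of Joe's count you condition on — exactly the independence described above.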
{ "language": "en", "url": "https://math.stackexchange.com/questions/4139421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to show that: $(-1)^nn+2+\sum_{k=1}^{n}\frac{(-1)^k}{k+1}(2)_{k+1}S_{n}^{k+1}H_{k+1}=B_{n-1}$ I was observing these formulas concerning $B_n$, the Bernoulli numbers, and I was able to conjecture this formula: $$(-1)^nn+2+\sum_{k=1}^{n}\frac{(-1)^k}{k+1}(2)_{k+1}S_{n}^{k+1}H_{k+1}=B_{n-1}\tag1$$ where $H_n$ denotes the harmonic numbers, $S_{n}^{m}$ the Stirling numbers of the second kind, and $(n)_m$ the Pochhammer symbol. Does anyone know how to prove it?
We seek to prove the identity $$B_n = \sum_{k=0}^{n} \frac{(-1)^k H_{k+1} (k+2)! {n+1\brace k+1}}{k+1} + (-1)^{n+1} (n+1).$$ The sum is $$(n+1)! [z^{n+1}] \sum_{k=0}^{n} (-1)^k \left(1+\frac{1}{k+1}\right) H_{k+1} (\exp(z)-1)^{k+1}.$$ With $\exp(z)-1 = z+\cdots$ the coefficient extractor enforces the upper limit of the sum and we get $$(n+1)! [z^{n+1}] \sum_{k\ge 0} (-1)^k \left(1+\frac{1}{k+1}\right) (\exp(z)-1)^{k+1} [w^{k+1}] \frac{1}{1-w} \log\frac{1}{1-w}.$$ Note that the term in $w$ starts at $w$. We get for the first piece $$-(n+1)! [z^{n+1}] \exp(-z) \log \exp(-z) \\ = (n+1)! [z^n] \exp(-z) = (-1)^n (n+1).$$ We see that this cancels the extra term from the initial closed form. Therefore the remaining term must give the Bernoulli numbers: $$(n+1)! [z^{n+1}] \sum_{k\ge 0} (-1)^k \frac{1}{k+1} (\exp(z)-1)^{k+1} [w^{k+1}] \frac{1}{1-w} \log\frac{1}{1-w}.$$ Differentiate to get $$n! [z^n] \exp(z) \sum_{k\ge 0} (-1)^k (\exp(z)-1)^{k} [w^{k+1}] \frac{1}{1-w} \log\frac{1}{1-w} \\ = n! [z^n] \exp(z) \sum_{k\ge 0} (-1)^k (\exp(z)-1)^{k} [w^k] \frac{1}{w} \frac{1}{1-w} \log\frac{1}{1-w}.$$ This is $$n! [z^n] \exp(z) \frac{1}{1-\exp(z)} \exp(-z) \log\exp(-z) \\ = n! [z^n] \frac{z}{\exp(z)-1} = B_n$$ as claimed.
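The identity can also be checked in exact rational arithmetic (a sketch, not from the original answer; the helper functions are ad-hoc implementations of the standard recurrences, with the convention $B_1=-1/2$):

```python
from fractions import Fraction
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind: S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bernoulli(n):
    # from sum_{j=0}^{m} C(m+1, j) B_j = 0, i.e. (m+1) B_m = -sum_{j<m} ...
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B[n]

def H(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

for n in range(9):
    rhs = sum(Fraction((-1) ** k * factorial(k + 2) * stirling2(n + 1, k + 1), k + 1)
              * H(k + 1) for k in range(n + 1)) + (-1) ** (n + 1) * (n + 1)
    assert rhs == bernoulli(n), n
print("identity verified for n = 0..8")
```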
{ "language": "en", "url": "https://math.stackexchange.com/questions/4139722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that 2 vector spaces are isomorphic to each other Prove that the vector space V $V = \begin{bmatrix} a& 0& 0& 0 \\ b& 0& 0& 0 \\ c& d& e& f \\ \end{bmatrix}$ is isomorphic to the vector space $M_{2,3}$ of all 2 x 3 matrices under usual addition of matrices and scalar multiplication. My prof didn't really explain as to how to go about this, but I was thinking that I could transform this into a one-dimensional vector, such that: $V = (a, 0, 0, 0, b, 0, 0, 0, c, d, e, f)$ $M_{2,3}$ = (g, h, i, j, k, l) I was thinking maybe I could remove the 0 values in V so they will have the same dimension? I'm not really sure. Please help.
Both are $6$-dimensional vector spaces, hence both are isomorphic to $\mathbb{R}^6$ (vectors with $6$ components) and therefore to each other: the six matrices in $V$ with a $1$ in exactly one of the positions $a,\dots,f$ and zeros elsewhere form a basis, and likewise the six matrix units form a basis of $M_{2,3}$; mapping one basis to the other gives an explicit isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Integral as a limit of sum: $\lim\limits_{n\to\infty}\sum_{k=1}^n\left(\frac k{n^2+n+2k}\right)$ The value of$$\lim_{n\to\infty}\sum_{k=1}^n\left(\frac k{n^2+n+2k}\right)=\,?$$ Can the limit be partially applied to the denominator after converting the numerator into an integral? I wrote this as $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\left(\frac {k/n}{1+1/n+2k/n^2}\right)$$ From what I know I can write that $1/n$ as $\mathrm{d}x$ and $k/n$ as $x$. Can I apply limit to denominator as $n$ tends to infinity and rewrite the denominator as $1$.
What you can also do is to use generalized harmonic numbers, since $$S_n=\sum_{k=1}^n\left(\frac k{n^2+n+2k}\right)=\frac{1}{4} n \left((n+1) H_{\frac{1}{2} n (n+1)}-(n+1) H_{\frac{1}{2} n (n+3)}+2\right)$$ Now, using the asymptotic expansion of the harmonic numbers twice and continuing with a Taylor series, $$S_n=\frac{1}{2}-\frac{2}{3 n}+\frac{4}{3 n^2}+O\left(\frac{1}{n^3}\right)$$
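The expansion is easy to test against a direct evaluation of the sum (not part of the original answer):

```python
def S(n):
    return sum(k / (n * n + n + 2 * k) for k in range(1, n + 1))

for n in (10, 100, 1000):
    asympt = 0.5 - 2 / (3 * n) + 4 / (3 * n * n)
    print(n, S(n), asympt, S(n) - asympt)
# The residual shrinks like 1/n^3, consistent with the O(1/n^3) term.
```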
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Limit of $\left(y+\frac23\right)\mathrm{ln}\left(\frac{\sqrt{1+y}+1}{\sqrt{1+y}-1} \right) - 2 \sqrt{1+y}$ We consider the limit for large $y$ of the following expression : $$\left(y+\frac23\right)\mathrm{ln}\left(\dfrac{\sqrt{1+y}+1}{\sqrt{1+y}-1} \right) - 2 \sqrt{1+y}.$$ Many references state that the large $y$ behaviour of this last expression is $y^{-3/2}$. I have been trying to show this in any way possible but I'm only hitting dead ends. Wolframalpha gives a Puiseux series of that expression which makes this behaviour explicit, but I do not understand how to get to this series either. Could you please help me on this?
One can write $1+y=1/t^2$ so that $t\to 0^+$ and then the given expression, say $F(y) $, transforms into $$F(y) =\left(\frac{1}{t^2}-\frac{1}{3}\right)\log\frac{1+t}{1-t}-\frac{2}{t}\tag{1}$$ and we need to find the limit of $y^{3/2}F(y)$ or $$\frac{(1-t^2)^{3/2}}{t^3}F(y)\tag{2}$$ Since $t\to 0^+$ the desired limit is the same as that of $F(y) /t^3$. Using Taylor series we can write $F(y) $ as $$2\left(\frac{1}{t^2}-\frac{1}{3}\right)\left(t+\frac{t^3}{3}+\frac{t^5}{5}+o(t^5)\right)-\frac{2}{t}$$ And this can be written as $$2\left(\frac{1}{t}+\frac{t}{3}+\frac{t^3}{5}-\frac{t}{3}-\frac{t^3}{9}+o(t^3)-\frac{1}{t}\right)=\frac{8t^3}{45}+o(t^3) $$ It follows that $F(y) /t^3$ tends to $8/45$. One can observe that the substitution $1+y=1/t^2$ helps here a lot as it simplifies the log term to a well known Taylor series.
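A floating-point check of the claimed behaviour (not in the original answer; for very large $y$ the subtraction loses precision, so moderate values are used):

```python
import math

def F(y):
    s = math.sqrt(1 + y)
    return (y + 2 / 3) * math.log((s + 1) / (s - 1)) - 2 * s

for y in (1e2, 1e4, 1e5):
    print(y, F(y) * y ** 1.5)  # should approach 8/45 = 0.17777...
```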
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Uniform Convergence Series If $\sum_{k=0}^\infty {a_k}x^k$ has a interval of convergence $(-1, 1)$, and $f(x)=\sum_{k=0}^\infty {a_k}x^k $, for $x\in[0,1)$, then the series does not converge uniformly to $f(x)$ on $[0,1)$. I'm trying to either come up with a proof for this statement.
Hint: We are given $\sum_{k=0}^{\infty}a_kx^k$ diverges at $x=1.$ I.e., $\sum_{k=0}^{\infty}a_k$ diverges. This implies the partial sums of $\sum_{k=0}^{\infty}a_k$ do not form a Cauchy sequence. Perhaps this implies the partial sums of $\sum_{k=0}^{\infty}a_kx^k$ are not uniformly Cauchy on $[0,1)?$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rudin's Theorem 2.28 Theorem 2.28 in Rudin states: Let $E$ be a nonempty set of real numbers which is bounded above. Let $y = \sup E$. Then $y \in \overline{E}$. Hence $y \in E$ if $E$ is closed. I don't think I follow Rudin's proof completely. His proof is: If $y \in E$ then $y \in \overline{E}$. Assume $y \not \in E$. For every $h > 0$ there exists then a point $x \in E$ such that $y - h < x < y$, for otherwise $y - h$ would be an upper bound of $E$. Thus $y$ is a limit point of $E$. Hence $y \in \overline{E}$. I don't think the assertion $y - h < x < y$ is fully correct, i.e., the second inequality should be non-strict. Given $h > 0$, $y - h < y$, so it can't be an upper bound of $E$, as that would contradict the fact that $y$ is the supremum of $E$. So there must exist $x \in E$ such that $y - h < x$. But $x \leq y$ since the supremum is an upper bound, so $y - h < x \leq y$. The second inequality needn't be strict, e.g., we could have $E = \{1\}$, so $\sup E = 1 \in E$. Am I correct that this is a typo in Rudin? From this point, how does one reason that $y$ is a limit point of $E$? If the above assertion in Rudin's proof were true, then we have $x \in (y-h, y+h)$. I suppose this is still true even if the second inequality is non-strict, but this presupposes that $d(x,y) := |x-y|$. I know this is the standard distance on $\mathbb{R}$, but is it necessary that we use it? Should I assume this is the distance on $\mathbb{R}$ unless otherwise specified? I assume there are other valid and equivalent notions.
In this part of the proof Rudin assumes $ y \notin E$. This implies in particular that $y \neq x$ (as $x \in E$). We know $x \le y$ from the sup-condition, plus $x \neq y$ is exactly $x < y$ as claimed. There is no typo. It's correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to show that $\sum_{n=1}^{+\infty}\frac{z^{2n}}n$ can be analytically continued for $|\Re z|<1$? Let's consider the following power series: $\sum_{n=1}^{+\infty}\frac{z^{2n}}n$ I think that the convergence radius is $R=1$. Then the function given by $f(z)=\sum_{n=1}^{+\infty}\frac{z^{2n}}n$ is holomorphic in the unit disk. To prove that it cannot be continued on $\mathbb C$, I want to use that the series diverges at $z=1$ but I don't know how to rigorously state that. Finally, I am wondering how to show that it can be continued for $|\Re z|<1$ (if it can)?
The series evaluates to $-\log(1-z^2)$. Using the principal $\log$ with branch cuts on the nonpositive real axis, it is easy to show that $1-z^2$ is a nonpositive real number iff $z$ is real and of magnitude at least $1$. Since $|\operatorname{Re}(z)|<1$ does not touch these cuts, it follows that there is an analytic function on $|\operatorname{Re}(z)|<1$ that agrees with the series on the latter's convergence disk of $|z|<1$. There is no analytic continuation to $\mathbb C$ because $\lim_{z\to1^-}\sum_{n=1}^\infty\frac{z^{2n}}n=+\infty$, whereas the limit of an analytic function is finite in any direction and at any point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate the order of the pole in $1/(1-\cos(z))^2$ I'm trying to determine the order of the pole in the complex expression $$f(z)=\frac{1}{(1-\cos(z))^2}$$ I have determined the poles to be $z=2\pi n, n\in \mathbb{Z}$. However, when I use the equation $\lim\limits_{z\rightarrow 2\pi n}[(z-2\pi n)^k f(z)]$ with $k=1$, it equals $\frac{0}{1}=0$, which would say the function is analytic in the neighborhood. I have used L'Hôpital's rule repeatedly to obtain this result. I checked my answer with Wolfram Alpha, and it's supposed to have a pole of order $4$ at $z=2\pi n$. Where am I going wrong?
If a function $g$ has a pole of order $k$ then $g^2$ will have a pole of order $2k$. Using your method, we have by L'Hopital, $$\lim_{z\to2\pi n}\frac{z-2\pi n}{1-\cos z}=\lim_{z\to2\pi n}\frac1{\sin z}=\infty$$ but $$\lim_{z\to2\pi n}\frac{(z-2\pi n)^2}{1-\cos z}=\lim_{z\to2\pi n}\frac{2(z-2\pi n)}{\sin z}=\lim_{z\to2\pi n}\frac2{\cos z}=2$$ so $1/(1-\cos z)$ has a pole of order $2$. Therefore, $f(z)=1/(1-\cos z)^2$ has a pole of order $4$. In your attempt, the limit with $k=1$ is in fact of the form $1/0$ not $0/1$, so $f$ is not analytic as you claim.
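Numerically, the order shows up as the smallest $k$ for which $z^k f(z)$ stays bounded near the pole — a quick check at $z_0=0$ (not part of the original answer):

```python
import cmath

f = lambda z: 1 / (1 - cmath.cos(z)) ** 2

# Approach 0 along an arbitrary ray: z^4 f(z) tends to 4 (since
# (1 - cos z)^2 ~ z^4 / 4), while z^3 f(z) blows up like 4/|z|.
for r in (1e-1, 1e-2, 1e-3):
    z = r * cmath.exp(0.3j)
    print(r, z ** 4 * f(z), abs(z ** 3 * f(z)))
```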
{ "language": "en", "url": "https://math.stackexchange.com/questions/4140956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Given a $k^2$-membered set $S$ of points with coordinates $(x,y)\in \{ 1,2,...,k \}$, prove that some $4$ points in $S'$ lie on a circle. We are given a $k^2$-membered set $S$ of points on a plane. The points' coordinates are integers between $1$ and $k$. We then construct another set $S'$ which contains at least $\frac{5}{2} k-1$ points from $S$. We should prove that there are always at least $4$ points in $S'$ that lie on a common circle. The situation amounts to a $k$ by $k$ square with an integer grid, the members of $S$ being all the vertices of the grid. A certain portion of the points, $k^{2}-\frac{5}{2}k+1$ of them, is then removed. For $4$ points to be on a circle, it is sufficient that they lie in the vertices of an isosceles trapezium. It is thus sufficient to prove that taking away $k^{2}-\frac{5}{2}k+1$ points is never enough to eliminate all possible sub-sections that form an isosceles trapezium from the square grid. This problem is rather interesting because the visually rephrased statement seems very intuitive but I am not sure how to proceed to rigorously prove this, especially concerning the specific quantity $k^{2}-\frac{5}{2}k+1$. I'd be grateful for any help.
"For 4 points to be on a circle, they need to lie in the vertices of a sub-square." This is not true. Consider the circle through (2, 1), (1, 2) and (1, 3). It will also pass through (2, 4), by symmetry. In fact, a sufficient (but not necessary) condition is that the points are vertices of an isosceles trapezium.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Existence of subgraphs with vertices of large degree [Diestel's book] This is an excerpt from Diestel's book (fifth edition). Since the graph $G=(V,E)$ has at least one edge, i.e. $|E|\geq1$, then $\varepsilon(G)=\dfrac{|E|}{|V|}>0.$ How does it follow that none of the graphs in our sequence is trivial? Here is my approach: Since $\varepsilon(G_{i+1})\geq \varepsilon(G_{i})$ then $\varepsilon (G_i)\geq \varepsilon(G)>0$. This sounds a bit stupid: But if $\varepsilon(G_i)>0$, how does this imply that $G_i\neq \varnothing$? I am totally confused.
The point is that you only remove one vertex at each step. So the only way to reach $G_i=\varnothing$ is if $G_{i-1}=K_1$, but that would mean $\epsilon(G_{i-1})=0$ which is not true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Associativity of infinite series In my calculus notes, my professor presented the following proposition: Proposition: Let $\sum_{n=1}^{\infty}a_n$ be a convergent series and let $1=n_1<n_2<n_3<...$ be an increasing sequence of indices. Consider the set of grouped sums of the sequence between these indices, i.e., $$A_k=\sum_{n=n_k}^{n_{k+1}-1}a_n.$$ Then, $$\sum_{k=1}^{\infty}A_k=\sum_{n=1}^{\infty}a_n$$ This proposition is supposed to say that if an infinite series converges, then the associative property holds. I was reading an equivalent proposition, but written in a different way, and I was able to prove it. However, I can't prove the statement the way my professor wrote it. Any help is welcome. Addendum: The proposition I was able to prove is Exercise 2.5.3 (a) on page 65 from the book Understanding Analysis, 2nd ed by Stephen Abbott.
Maybe the following view will also be helpful: the partial sums of $\sum_{k} A_k$ form a subsequence of the partial sums of $\sum_{n} a_n$, since $$\sum_{k=1}^{K} A_k=\sum_{n=1}^{n_{K+1}-1} a_n.$$ Every subsequence of a convergent sequence converges to the same limit, which gives the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to define an angular form with bounded support? In my smooth manifold course, I am asked to define a differential $1$-form $\tau$ on $M = \mathbb{R}^{3} \setminus S^{1}$ with support bounded in $\mathbb{R}^3$ such that $\int_{\gamma} \tau$ can represent the signed crossing number of any closed curve $\gamma : S^{1} \to M$ about the disc $D^{2}$. I believe that $\tau$ must be related to $\omega = \mathrm{d}(\arctan\frac{z}{\sqrt{x^{2}+y^{2}}-1})$, but how can we modify $\omega$ to obtain a globally smooth $1$-form with bounded support? It seems that $\int_{\gamma} \tau$ gives rise to linking numbers (see this post and this post).
If you need $\tau$ to have compact support in $\mathbb R^3\setminus S^1$, this is impossible. Here's why. Suppose $\tau$ is such a form, and let $K$ be the support of $\tau$. Because $K$ is a compact subset of $\mathbb R^3$, there is a positive number $\varepsilon$ such that every point of $K$ is at a distance at least $\varepsilon$ from $S^1$. If $\gamma$ is a small circle around $S^1$ of radius less than $\varepsilon$, then $\tau$ is identically zero along the image of $\gamma$, and so $\int_\gamma\tau =0$, although $\gamma$ crosses the disc exactly once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Isomorphisms between polynomial quotient rings on infinite base fields. Hello, I'm having trouble understanding something. I need to show that: Let $R$ be a ring. $$R[x]/(x^2-1)\cong R[x]/(x^2-4).$$ I'm pretty lost on how to proceed. I've tried a map $f:R[x]/(x^2-1)\to R[x]/(x^2-4)$, $f:(x-1)(x+1)\to(x-2)(x+2)$, but that's as far as I could get. Thanks in advance!
Answer: First work over $\mathbb{R}$. Let $A:=\mathbb{R}[x]/(x^2-1)$ and $B:=\mathbb{R}[x]/(x^2-4)$. Since $x^2-1=(x-1)(x+1)=IJ$ where $I:=(x-1)$ and $J:=(x+1)$ are coprime ideals, the Chinese remainder theorem gives $$A \cong \mathbb{R}[x]/I\oplus \mathbb{R}[x]/J \cong \mathbb{R}\oplus \mathbb{R},$$ and similarly $$B \cong \mathbb{R}\oplus \mathbb{R},\text{ hence } A \cong B.$$ The same factorization argument works over a general ring $R$, provided $2$ is a unit in $R$. With $I_1:=(x-2)$ and $J_1:=(x+2)$ in $R[x]$ we have $$ -4=(x-2)-(x+2) \in I_1+J_1,$$ hence $I_1,J_1$ are coprime and $I_1J_1=(x^2-4)$, so $$R[x]/(x^2-4) \cong R[x]/I_1\oplus R[x]/J_1 \cong R\oplus R.$$ Similarly, with $I:=(x-1)$ and $J:=(x+1)$, $$ -2=(x-1)-(x+1) \in I +J,$$ hence $I+J=(1)$ and $$R[x]/(x^2-1) \cong R[x]/I \oplus R[x]/J \cong R \oplus R.$$ Combining the two isomorphisms gives $R[x]/(x^2-1)\cong R[x]/(x^2-4)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Considering the surface $f(x,y)=x^2y$. We know that a parametrization of it can be $X(u,v)=(u,v,u^2v)$. QUESTION: Considering the surface $f(x,y)=x^2y$. We know that a parametrization of it can be $X(u,v)=(u,v,u^2v)$. Find the asymptotic lines in $S$. MY ATTEMPT: Let $\alpha:I\subset \mathbb{R}\rightarrow S$ be a curve in this surface, such that $\alpha(t)=X(u(t), v(t))$ and $\alpha '(t)=u'X_u+v'X_v$. Thus, $\alpha $ is asymptotic if, and only if, $$e(u')^2+2fu'v'+ g(v')^2=0. \qquad (*)$$ Where $e=\frac{2v}{\sqrt{1+u^4+4u^2v^2}}$, $f=\frac{2u}{\sqrt{1+u^4+4u^2v^2}}$ and $g=0$. Replacing this in $(*)$ we can find that $e(u')^2+2fu'v'+ g(v')^2=0\iff u'=0 \; \text{or} \; u'v+2uv'=0$. In the first case $u=\text{constant}$. However I'm struggling to resolve the ODE $u'v+2uv'=0$. Would you help me with this?
$$u'v+2uv'=0$$ $$\dfrac {u'}{u}=-2\dfrac{v'}{v}$$ Then write: $$(\ln u)'=\dfrac {u'}{u} ;\; (\ln v)'=\dfrac {v'}{v}$$ $$(\ln u)'=-2(\ln v)'$$ Integrate: $\ln|u|=-2\ln|v|+c$, that is, $uv^2=\text{constant}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4141798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Second order non linear ODE - hard to solve integral makes me think I need a different substitution I have this here ODE: $$xy'' = y' + x((y')^2 + x^2)$$ Naturally, I'd try this substitution first: $$y' = p, p=p(x)$$ The equation then transforms into $$xp' = p+x(p^2+x^2)$$ Dividing it by $x$, I get $$p' = \frac{p}{x} + p^2 + x^2$$ Which is a Riccati equation with the solution: $$p = x \cdot \tan{(\frac{x^2}{2}+C_1)}$$ The thing is, if I substitute back $y'=p$, the integral on the right side is not an easy one to solve, and even if I do solve it with WolframAlpha the solutions are not the same as if I plug in the second-order equation directly. It makes me wonder if I should have tried another substitution/method. Any help will be appreciated!
$$y' = x \cdot \tan{\left(\frac{x^2}{2}+C\right)}$$ $$\dfrac {dy}{du}\dfrac {du}{dx} = x \cdot \tan(u)$$ $$\dfrac {dy}{du} = \tan(u)$$ Where $u=\dfrac {x^2}{2}+C$ then integrate. $$y=-\ln |\cos u |+K$$
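One way to reassure yourself that nothing was lost along the way is to push the final $y$ back through the original ODE numerically (a finite-difference sketch, not part of the original answer; the values of $C$ and $K$ and the test point are arbitrary):

```python
import math

C, K = 0.3, 1.0

def y(x):
    # candidate solution y = -ln|cos(x^2/2 + C)| + K
    return -math.log(abs(math.cos(x * x / 2 + C))) + K

def d1(g, x, h=1e-5):   # central first difference
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):   # central second difference
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

x = 0.7
lhs = x * d2(y, x)                             # x y''
rhs = d1(y, x) + x * (d1(y, x) ** 2 + x ** 2)  # y' + x((y')^2 + x^2)
print(lhs, rhs)  # the two sides agree to finite-difference accuracy
```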
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Are these matrices transitive? I have 2 matrices, I don't believe that they are transitive but my friend is insisting that they are both transitive, is he correct? a) $ \begin{bmatrix} 1& 1 & 1 & 1 & 1 &1 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} $ b) $ \begin{bmatrix} 1& 0 & 0 & 0 & 0 &1 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} $ I'm trying to order them by pairs of relation (using positions as letters, e.g. (a,b)) and failing for both. Thank you.
As explained in the comments to this question, one way to check transitivity is to square the matrix. If the square has a nonzero entry in a position where the original matrix has a zero, then the relation is non-transitive. If there is no such entry, then the relation is transitive. Rather than going to the work of squaring the matrix, let's see if we can spot such an entry in the first matrix. Since the first row is all $1$s, multiplying any row with a $1$ in the first column will give a non-zero entry. The first row has no zeros, so that won't help, but the sixth row has a $1$ in the first column, and a $0$ in the second. That is, $(6,1)$ is in the relation, and $(1,2)$ is in the relation, but $(6,2)$ is not. The relation is not transitive. Can you do the second part now?
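The squaring test described in the answer is easy to automate. Here is a small pure-Python sketch (the matrices are the ones from the question) that flags a relation as non-transitive whenever some composition $(i,k),(k,j)$ exists without $(i,j)$:

```python
def is_transitive(M):
    """Relation M (0/1 matrix) is transitive iff the boolean square of M
    has no 1 in a position where M itself has a 0."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            reachable = any(M[i][k] and M[k][j] for k in range(n))
            if reachable and not M[i][j]:
                return False
    return True

A = [[1,1,1,1,1,1],
     [0,1,0,0,1,0],
     [0,0,0,0,0,0],
     [0,0,0,0,0,0],
     [0,1,0,0,1,0],
     [1,0,0,0,0,1]]

B = [[1,0,0,0,0,1],
     [1,1,0,0,1,0],
     [1,0,0,0,0,0],
     [1,0,0,0,0,0],
     [1,1,0,0,1,0],
     [1,0,0,0,0,1]]

result_A = is_transitive(A)  # False: (6,1) and (1,2) in R, but (6,2) is not
result_B = is_transitive(B)  # False: (2,1) and (1,6) in R, but (2,6) is not
```

So neither relation is transitive, settling the "second part" as well.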
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that every triangular $3\times3$ matrix $A$ satisfies $(A-\lambda_1 I)(A-\lambda_2 I)(A-\lambda_3 I)=0$, where $\lambda_i$ are eigenvalues of $A$ My attempt Proof. Suppose $A$ is a triangular $3\times3$ matrix. Then so is $A-\lambda_j\operatorname{Id}$ for $1\le j\le3,$ which means that $\det(A-\lambda_j\operatorname{Id})$ is the product of its diagonal entries. Thus $p(\lambda)=\prod(A_{ii}-\lambda),$ and hence the roots of the characteristic polynomial are the diagonal entries $A_{11}$, $A_{22}$, $A_{33}$, as claimed. I'm not sure whether my approach here is correct. My understanding is that the problem asks us to show that every triangular $3\times3$ matrix $A$ satisfies the determinant formula whereby the lambdas are the roots of the characteristic polynomial and hence eigenvalues of $A$?
$A$ is an upper triangular $n\times n$ matrix, $\lambda_1$, $\ldots$, $\lambda_n$ its diagonal elements. Let's show that the matrix $$\prod_{i=1}^n ( A- \lambda_i I)$$ applied to any vector produces the zero vector. That is enough to see that the matrix is $0$. The main observation is that $A- \lambda_k I$ applied to a vector with the last $n-k$ components $0$ gets us a vector with the last $n-k+1$ components $0$. Now, start with any vector $v$. Note that the vector $$(A- \lambda_n I) v$$ has the last component $0$. Now apply $(A- \lambda_{n-1}I)$ and get $$(A- \lambda_{n-1}I) (A- \lambda_{n}I)v$$ with last $2$ components $0$. Continue $n$ steps and get $$(A- \lambda_1 I) \cdots (A- \lambda_n I) v$$ the vector with all components $0$. $\bf{Added:}$ In fact we showed that a certain product of matrices with prescribed $0$ positions is the $0$ matrix. Below is the case $n=3$, $\times$ denotes an arbitrary element: $$\left( \begin{matrix} 0 & \times& \times \\0 & \times & \times \\ 0 &0 &\times \end{matrix}\right)\cdot\left( \begin{matrix} \times & \times& \times \\0 & 0 & \times \\ 0 &0 &\times \end{matrix}\right)\cdot \left( \begin{matrix} \times & \times& \times \\0 & \times & \times \\ 0 &0 &0 \end{matrix}\right) = 0_3$$ as a direct calculation shows.
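The $n=3$ case can be illustrated numerically; the upper-triangular matrix below (with diagonal entries, i.e. eigenvalues, $2,-1,5$) is an arbitrary sample choice:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# an arbitrary upper-triangular matrix with diagonal (eigenvalues) 2, -1, 5
A = [[2, 3, 7],
     [0, -1, 4],
     [0, 0, 5]]

def shift(lam):
    # A - lam * I
    return [[A[i][j] - (lam if i == j else 0) for j in range(3)]
            for i in range(3)]

# the product (A - 2I)(A + I)(A - 5I) should be the zero matrix
P = matmul(matmul(shift(2), shift(-1)), shift(5))
```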
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that a differentiable function is increasing and bounded Suppose that $f:(a,b)\to\mathbb{R}$ is differentiable on $(a,b)$ and $|f'(x)|\leqslant L$. Let $g(x)=f(x)+Lx$ Show that $g:(a,b)\to\mathbb{R}$ is increasing and bounded on $(a,b)$. Consider $x_{1},x_{2}\in(a,b)$ where $x_{1}<x_{2}$. Now $g(x_{1})=f(x_{1})+Lx_{1}$ $g(x_{2})=f(x_{2})+Lx_{2}$ Now $g(x_{2})-g(x_{1})=f(x_{2})+Lx_{2}-f(x_{1})-Lx_{1}$ $g(x_{2})-g(x_{1})=f(x_{2})-f(x_{1})-L(x_{1}-x_{2})$ Show that $f(x_{2})-f(x_{1})\geqslant L(x_{1}-x_{2})$ $\frac{f(x_{2})-f(x_{1})}{x_{2}-x_{1}}\leqslant L\iff f(x_{2})-f(x_{1})\leqslant L(x_{2}-x_{1})$ Any hints on where to go next?
We have $-L\le f'\le L.$ Thus, since $g'=f'+L,$ we see $0\le g'\le 2L.$ i) $0\le g'$ implies $g$ is increasing, by the MVT. ii) Fix $x_0\in (a,b).$ Then for any $x\in (a,b),$ $$g(x)=g(x)-g(x_0)+g(x_0) = g'(c_x)(x-x_0) + g(x_0)$$ by the MVT. Since $0\le g'(c_x)\le 2L,$ this implies $|g(x)|\le 2L|x-x_0| + |g(x_0)| \le 2L(b-a)+|g(x_0)|$ for any $x\in (a,b).$ Hence $g$ is bounded on $(a,b).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it that $1+(1+2+3+4+5+6+\ldots+n)$, basically a triangular number plus $1$, doesn't divide by $3$ or $5$? As a self learner I am currently learning about triangular numbers, for which the formula is: $$T(n)=\frac{n(n+1)}{2}$$ While playing with my calculator, I added 1 to each resulted number, and I noticed that none of the results divides by $3$ or $5$ I am assuming that I am correct about this observation, other wise please let me know. So I have multiplied both sides of the original formula by $2$, and now I have: $2(T(n))=n(n+1) = n^2+n$ Now my question still remains still wide open: why is it that $(n(n+1)+1) \bmod 3 \neq 0$ and $(n(n+1)+1) \bmod 5 \neq 0$? Or you can see it as why $(n^2+n+1) \bmod 3 \neq 0$ and $(n^2+n+1) \bmod 5 \neq 0$? Or you can see it as why $(1+2+3+4+5+6+\ldots+n)+1$, Basically why a triangular number plus $1$ does not dividing by $3$ nor $5$? I have tried Google and also tried searching over here, but either I don't know what I am searching for or I simply can't find an answer. I am trying to pull my head for a possible answer, but I just don't have a clue where to begin with. Any answers or hints are appreciated. Also if this is a duplicated, I honestly couldn't find it, so please just close and refer me to it.
For this kind of modular arithmetic problems, remember that there are only finitely many numbers to check, when we are dealing with polynomials. This is because $a\equiv b$ mod $n$ implies that $p(a)\equiv p(b)$ mod $n$ as well. Let $p(x)=\frac12x(x+1)+1$. Then note that $2p(x)=x^2+x+2$, and $2p(0)=2,2p(1)=1,2p(2)=2$. Therefore none of the three possible values of $2p(x)$ are $0$, so $3\nmid 2p(x)$. Since $3\nmid 2$ obviously, then $3\nmid p(x)$. You can do the same thing for $5$ instead of $3$. It is useful to remember this idea so that you can apply it to other problems too.
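Since the value of $T(n)+1$ modulo $3$ and modulo $5$ depends only on the residue of $n$, an exhaustive check over a range of $n$ confirms the claim; the range below is larger than necessary but costs nothing:

```python
# Exhaustive check that T(n) + 1 = n(n+1)/2 + 1 is never divisible by 3 or 5.
# By periodicity of n(n+1)/2 modulo 3 and 5 a small range would suffice.
bad = [n for n in range(1, 10_000)
       if (n * (n + 1) // 2 + 1) % 3 == 0
       or (n * (n + 1) // 2 + 1) % 5 == 0]
```

An empty `bad` list means no counterexample exists in the range.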
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
The sum of five different positive integers is 320. The sum of greatest three integers in this set is 283. The sum of five different positive integers is $320$. The sum of the greatest three integers in this set is $283$. The sum of the greatest and least integers is $119$. If $x$ is the greatest integer in the set, what is the positive difference between the greatest possible value and the least possible value of $x$? I obtained the equations $$\begin{align}x+b+c+d+e &= 320\\ x+b+c &=283\\ x+e &= 119\\ d+e &= 37\\ b+c+d &= 201\end{align}$$ How to proceed after this?
$e\ge1,d>e\implies d\ge2$ and similarly $c\ge3, b\ge4$. Thus $x\le320-(1+2+3+4)=310$ from the first equation, $x\le283-(3+4)=276$ from the second and $x\le 119-1=118$ from the third. Since all three of these inequalities must be satisfied, the maximum value of $x$ is $118$. Can you verify by coming up with the values of the rest of the variables? Suppose the minimum possible value of $x$ is $m$, then the highest possible sum of $x,b,c,d,e$ is $m+(m-1)+(m-2)+(m-3)+(m-4)=5m-10\ge320\implies m\ge66$. Similarly the second equation gives $m+(m-1)+(m-2)=3m-3\ge283\implies m\ge96$ and the third equation gives $m+(m-4)\ge119\implies m\ge62$. All three give $m\ge96$. Can you check that for $x=96$ we can find the values of other variables satisfying the equations? That is not the case: the problem occurs while choosing the values of $d,e$. We have $d=x-82,e=119-x$ so $d>e\implies 2x>201\implies x\ge101$. Does $101$ work?
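The bounds can be double-checked by brute force. The sketch below encodes the derived relations $e=119-x$, $d=x-82$, $b+c=283-x$ with the ordering $x>b>c>d>e\ge1$ (which makes the five integers distinct), and searches for all feasible greatest elements $x$:

```python
# Brute-force search over the greatest element x, using
# e = 119 - x, d = x - 82, b + c = 283 - x, and the ordering
# x > b > c > d > e >= 1 (strict, so all five integers are distinct).
feasible = []
for x in range(1, 320):
    e = 119 - x
    d = x - 82
    if e < 1 or d <= e:
        continue
    bc = 283 - x
    # need integers b > c with x > b, c > d and b + c = bc
    ok = any(b + c == bc and x > b > c > d
             for b in range(d + 1, x) for c in range(d + 1, b))
    if ok:
        feasible.append(x)

max_x, min_x = max(feasible), min(feasible)
difference = max_x - min_x  # the answer asked for in the question
```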
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is there a sense in which $4^{2} = -16$? Here's my train of thought: I. $4^2=4^{(4/2)}$ II. $4^{(4/2)}=(4^4)^{(1/2)}$ III. $(4^4)^{(1/2)}=256^{(1/2)}$ IV. $256^{(1/2)}=\sqrt{256}$ square root of 256 has 2 solutions: 16 and -16 Can you explain me where I made the mistake and why am I not allowed to do whatever I did? I thought that maybe (II) is wrong and only $4^{(4/2)}=(4^{(1/2)})^4$ is true, but if $(x^a)^b=x^{(a\cdot b)}$ and numbers can switch places during multiplication and this does not change the outcome then: $x^{(a\cdot b)}=x^{(b\cdot a)}$, and $(x^b)^a=x^{(b\cdot a)}$ Also, please don't insult me in the replies. I feel stupid enough already for not getting this.
OVERALL THEORY If $x$ is a positive real number and $m$ and $n$ are integers, then $$x^{\frac mn} = (x^m)^{\frac 1n} = (x^\frac 1n)^m = \sqrt[n]{x^m} = (\sqrt[n]x)^m$$ When $x$ is negative, all heck breaks loose. We can say that $$\text{$x^m$ is positive when $m$ is even and is negative when $m$ is odd.}$$ So there is no sense in which $4^2 = -16$. It is OK to work with negative numbers and integer powers. For example, $$((-2)^3)^4 = (-2)^{12} = 4096$$ But, weird things can happen with a negative $x$ and fractional exponents. For example $$ \begin{array}{c} (-8)^\frac 13 = \sqrt[3]{-8}= -2 \\ (-8)^\frac 26 = \sqrt[6]{((-8)^2)}= \sqrt[6]{64} = 2 \\ \end{array} $$ Since $\frac 13 = \frac 26$, we see that we have a problem. There is no way to fix this problem. There is no way around it. Raising a negative real number to a fractional power is not well-defined. PRINCIPAL SQUARE ROOTS The roots of a function $f$ are the set of all $x$ for which $f(x)=0$. The solutions to the equation $x^2 - n = 0$ are called the square roots of $n$. * *Negative numbers have no (real-valued) square roots. *$0$ has one square root. *Positive numbers have two square roots. The principal square root of a number is defined to be the non negative root of that number. There is a really cool equation that encapsulates this. $$\sqrt{x^2} = |x|$$ Hence, for example, $$\sqrt{(-3)^2} = \sqrt{3^2} = |3| = |-3| = 3$$ So, if $x^2 = y$, then the principal value of $\sqrt y$ is $|x|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4142923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
The time it takes for a candle having a lifetime that follows an exponential distribution to go off We have $5$ candles each having a lifetime which follows an exponential distribution with parameter $\lambda$. We light up each candle at time $t=0$. Assume that $Y$ is the time that it takes for the third candle to go off. What is the expectation and variance of $Y$? My try: First of all, I believe having $5$ candles is irrelevant. We only need to consider one random variable following the exponential distribution, like $X\sim \exp(\lambda)$. It means that on average, it takes $\frac{1}{\lambda}$ for the candle to go off. However, this does not seem like a random variable. It seems like it is a constant. Then, it won't be meaningful to calculate the expectation and variance. Am I right? Also, we know that at some point, the candle "will" go off. So, does this mean that we cannot predict at which time it will? I am totally confused thinking about these concepts. I appreciate if someone enlightens me. Note: There is a similar question here. However, the question has not been answered due to the lack of attemps provided by the OP.
The time needed for the first candle to go out is distributed as the minimum of $5$ independent exponential random variables. That minimum is exponentially distributed with rate $5\lambda$ (see here). Since the exponential distribution is memoryless we now start all over; by similar reasoning as before the second-out and third-out times are exponentially distributed with rates $4\lambda$ and $3\lambda$ respectively, and all three new random variables are independent of each other. $Y$ is their sum; since the mean and variance of an exponential distribution with rate $\lambda$ are $\frac1\lambda$ and $\frac1{\lambda^2}$ $$\mathbb E[Y]=\frac1{5\lambda}+\frac1{4\lambda}+\frac1{3\lambda}=\frac{47}{60\lambda}$$ $$\operatorname{Var}(Y)=\frac1{25\lambda^2}+\frac1{16\lambda^2}+\frac1{9\lambda^2}=\frac{769}{3600\lambda^2}$$
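The two sums can be verified with exact rational arithmetic. Factoring out $\lambda$, the coefficients of $\frac1\lambda$ in $\mathbb E[Y]$ and of $\frac1{\lambda^2}$ in $\operatorname{Var}(Y)$ are computed below:

```python
from fractions import Fraction

# rates (in units of lambda) of the three independent waiting stages:
# first candle out of 5, then out of 4, then out of 3
rates = [5, 4, 3]

mean_coeff = sum(Fraction(1, r) for r in rates)      # E[Y] * lambda
var_coeff = sum(Fraction(1, r * r) for r in rates)   # Var(Y) * lambda^2
```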
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How do I find all values $x$ such that a vector is a linear combination of a nonempty set of vectors in vector space $\mathbb{R^3}$? In vector space $\mathbb{R^3}$. Find all values $x$ such that $\begin{bmatrix} 2x^2 \\ -3x \\ 1 \end{bmatrix}$ $\in$ span $\{\begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} \}$. My solution: I used the equation: $r_1\begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix}$ + $r_2\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$ + $r_3\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix}$ = $\begin{bmatrix} 2x^2 \\ -3x \\ 1 \end{bmatrix}$ which can be represented with the augmented matrix: $$ \left[ \begin{array}{ccc|c} 1&0&1&2x^2\\ 1&1&2&-3x\\ 3&1&4&1\\ \end{array} \right] $$ which has an RREF of: $$ \left[ \begin{array}{ccc|c} 1&0&1&2x^2\\ 0&1&1&-3x-2x^2\\ 0&0&0&1-4x^2+3x\\ \end{array} \right] $$ I understood this as the system could only be consistent iff: $1-4x^2+3x = 0$ So then, I solved for $x$ which yields $x = 1,\frac {-1}{4} $. Is it correct to say, then, that $1$ and $\frac {-1}{4}$ are all the values of $x$ that will make $\begin{bmatrix} 2x^2 \\ -3x \\ 1 \end{bmatrix}$ $\in$ span $\{\begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix},\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} \}$?
Yes, your approach is correct. Also (if you are allowed) you can remove the vector $\begin{bmatrix}1\\2\\4\end{bmatrix}$ by writing $$ \text{span}\left\{\begin{bmatrix}1\\1\\3\end{bmatrix},\begin{bmatrix}0\\1\\1\end{bmatrix},\begin{bmatrix}1\\2\\4\end{bmatrix}\right\} = \text{span}\left\{\begin{bmatrix}1\\1\\3\end{bmatrix},\begin{bmatrix}0\\1\\1\end{bmatrix}\right\} $$ since $$ \begin{bmatrix}1\\2\\4\end{bmatrix}= \begin{bmatrix}1\\1\\3\end{bmatrix} + \begin{bmatrix}0\\1\\1\end{bmatrix} . $$
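With the third vector removed, membership in the span reduces to a consistency check that is easy to automate with exact arithmetic. The first two components force the coefficients $r_1,r_2$, and the third component must then agree; the candidate values $2$ and $0$ below are arbitrary non-solutions included for contrast:

```python
from fractions import Fraction

def in_span(x):
    # target vector (2x^2, -3x, 1); solve r1*(1,1,3) + r2*(0,1,1) = target
    target = [2 * x * x, -3 * x, Fraction(1)]
    r1 = target[0]           # the first component forces r1
    r2 = target[1] - r1      # the second component forces r2
    return 3 * r1 + r2 == target[2]  # consistency in the third component

candidates = [Fraction(1), Fraction(-1, 4), Fraction(2), Fraction(0)]
roots = [x for x in candidates if in_span(x)]
```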
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can every increasing negative function be expressed as a product with these properties? Let $h:(0,1] \to (-\infty,0)$ be a $C^1$ function, with $h'>0$. I am looking for sufficient conditions on $h$ that imply the existence of $C^1$ functions $\lambda:(0,1] \to (-\infty,0)$, $g:(0,1] \to (0,1]$ such that $$ h=\lambda g, \,\,\,\,\,\lambda'>0, \,\,\,\,\,g'<0 \tag{1} $$ Note that $h'=\lambda' g+\lambda g'>0$, since both summands are positive. Thus, $h'>0$ is a necessary condition for $h$ to be expressible as in $(1)$. Is it sufficient? Can every $h$ with $h'>0$ be expressed in this way? The motivation for this convoluted question comes from trying to analyses when the solution to a certain minimization problem is convex. (a bit too long to describe here). An equivalent reformulation: (thanks to Alex Jones). Take any $g: (0,1] \to (0,1]$ with $g' < 0$. Then, we must have $\lambda = h/g < 0$. Since $g > 0$, the condition $\lambda' > 0$ is equivalent to $h'/h < g'/g$. So, the question boils down to this: Does there exist a function $g:(0,1] \to (0,1]$, with $g' < 0$, such that $h'/h < g'/g$? Since the quantity $g'/g$ is invariant under positive scaling of $g$ ($g \to c g$ for $c >0$), it suffices to search for bounded $g$. Note that if $h$ is bounded, then by taking $g=-h$ we get $h'/h = g'/g$ instead of $h'/h < g'/g$. So we are "nearly there".
You can always choose $g(x) = 1-e^{h(x)}$. Then $0 < g(x) < 1$ and $g$ is decreasing. Finally, $\lambda = h/g$ is strictly increasing because $$ \lambda(x) = \frac{h(x)}{1-e^{h(x)}} = \phi(h(x)) $$ where $$ \phi(u) = \frac{u}{1-e^u} $$ is strictly increasing on $(-\infty, 0)$.
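A quick numerical check (my own addition) that $\phi(u)=u/(1-e^u)$ is indeed negative and strictly increasing on the negative axis, sampled on a grid in $(-10,0)$:

```python
import math

def phi(u):
    # phi(u) = u / (1 - e^u); should be < 0 and strictly increasing for u < 0
    return u / (1 - math.exp(u))

us = [-10 + 0.01 * k for k in range(999)]   # samples in (-10, 0)
vals = [phi(u) for u in us]
monotone = all(a < b for a, b in zip(vals, vals[1:]))
all_negative = all(v < 0 for v in vals)
```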
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Change the double integral into a single integral in polar coordinates I have this integral: $$\iint_{G} f(\sqrt{x^2+y^2})dxdy,\,G=\{(x;y)\,|\,x^2+y^2\leq x;x^2+y^2\leq y\}$$ As the description of task hints, I guess I need to somehow make the integral equation depend on only one variable (either $\phi$ or $r$) after converting to polar and applying the constraints. But I'm not sure how to deal with inequality constraints, I tried substituting $x=r\cos{\phi},y=r\sin{\phi}$ into the inequalities and got some bounds on r: $$r\leq \cos{\phi},\,r\leq \sin{\phi}$$ But I'm not sure how to proceed further... If I had something like $r=\cos{\phi}$, I could have just substituted $r$ into the $x$ and $y$ equations and got rid of one variable, but it doesn't seem like the case here. Also, the answer (from the book) contains $asin$ and $acos$ (which I suspect they got by expressing $\phi$ from constraints), but the issue stays the same. I don't really need the solution to this exact problem, but rather some general insights on constraints and change of coordinate systems (e.g. can I apply constraints first, and then change or how to deal with inequalities).
In polar coordinates, you have,$$x^2+y^2\leqslant x\iff r\leqslant\cos\phi\quad\text{and}\quad x^2+y^2\leqslant y\iff r\leqslant\sin\phi.$$So, clearly (assuming that $\phi\in[0,2\pi]$), $\phi\in\left[0,\frac\pi2\right]$; otherwise, one of the numbers $\cos\phi$ or $\sin\phi$ is smaller than $0$. On the other hand, if $\phi\in\left[0,\frac\pi2\right]$, then $r$ can go from $0$ to the smallest of the numbers $\cos\phi$ and $\sin\phi$. So, the largest value that $r$ can take is $\frac{\sqrt2}2$ (that's when $\phi=\frac\pi4$). For each $r\in\left[0,\frac{\sqrt2}2\right]$, $\phi$ can go from $\arcsin r$ to $\arccos r$. To see why, see that if $(x,y)$ is such that $x^2+y^2=y$ and that $\sqrt{x^2+y^2}=r$, then$$r\sin\phi=y=x^2+y^2=r^2,$$and therefore $\sin\phi=r(\iff\phi=\arcsin r)$; a similar argument shows that the largest value that $\phi$ can take is $\arccos r$. Therefore, you have\begin{align}\iint_Gf\left(\sqrt{x^2+y^2}\right)\,\mathrm dx\,\mathrm dy&=\int_0^{\sqrt 2/2}\int_{\arcsin r}^{\arccos r}f(r)r\,\mathrm d\phi\,\mathrm dr\\&=\int_0^{\sqrt2/2}f(r)r\bigl(\arccos(r)-\arcsin(r)\bigr)\,\mathrm dr.\end{align}
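One way to test the reduction numerically is to take $f\equiv1$, in which case the single integral must equal the area of the lens-shaped region $G$ (the intersection of the two disks of radius $\frac12$ centered at $(\frac12,0)$ and $(0,\frac12)$, which works out to $\frac\pi8-\frac14$; that closed form is my own addition, not from the answer). The sketch compares the single integral against a direct grid count of the Cartesian region:

```python
import math

s = math.sqrt(2) / 2

# single integral from the answer with f = 1, midpoint rule
M = 20_000
h = s / M
single = 0.0
for k in range(M):
    r = (k + 0.5) * h
    single += r * (math.acos(r) - math.asin(r))
single *= h

# direct double integral: grid count over G in Cartesian coordinates
N = 1200
cell = 1.0 / N
inside = 0
for i in range(N):
    x = (i + 0.5) * cell
    for j in range(N):
        y = (j + 0.5) * cell
        if x * x + y * y <= x and x * x + y * y <= y:
            inside += 1
double = inside * cell * cell
```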
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f_n(x)= f(x+n)$ show that the limit function is uniformly continuous Let $f$ be a real-valued continuous function on $I=\{x\in \mathbb{R} | x \geq 0\}$. For a positive integer $n$ the function on $I$ is defined by \begin{align*}f_n(x)= f(x+n)\end{align*} Answer the following questions when the sequence of functions $\{f_n(x)\}_{n=1}^\infty$ converges uniformly on $I$ * *The function $g$ on $I$ is defined by $g(x)=\lim_{n\to\infty}f_n(x)$, show that $g$ is uniformly continuous on $I$ *show that $f$ is uniformly continuous on $I$ Here we need to show that $|g(x)-g(y)|<\epsilon$ whenever $|x-y|<\delta$ for all $x,y\in I$. Since $\{f_n(x)\}_{n=1}^\infty$ converges uniformly to $g$ we have $|f_n{(x)}-g(x)|<\epsilon \implies |f(x+n)-g(x)|<\epsilon$ whenever $n>N$. How do we proceed from here? Any hints or a solution would be appreciated.
Hints For (1): If $f_n\in C({\Bbb R})$, $n\geq 1$ is any uniformly convergent sequence of continuous functions, then the limit function $g$ is continuous (proof by an $\epsilon/3$-argument). This is usually part of standard undergraduate curriculum. Look at the very definitions of $g(x)$ and $g(x+1)$ (for any fixed $x\in {\Bbb R}$) and realize that the values must be the same. Thus, $g$ is a 1-periodic continuous function. Conclude from this (using e.g. a compactness argument) that $g$ is uniformly continuous. For (2): Given $\epsilon>0$ there is $N$ so that $|g(x)-f(x)|<\epsilon/3$ for $x\geq N$. Using that $g$ is uniformly continuous show there is $\delta_1$ so that $x,y\geq N, |x-y|<\delta_1 \Rightarrow |f(x)-f(y)|<\epsilon$. Show that there is $\delta_2>0$ so that $0\leq x,y\leq N+1, |x-y|<\delta_2 \Rightarrow |f(x)-f(y)|<\epsilon$. Finally, pick $\delta=\min\{\delta_1,\delta_2,1\}$ and conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If a nonnegative measurable f is not in L^1, then the ergodic averages diverge a.e. I have been trying to prove the following implication but have not succeeded yet. Let $(X,\mathscr{B}, \mu,T)$ be an ergodic system, and let $f:X\rightarrow [0,\infty)$ be a measurable function with $$ \limsup\frac{1}{n}\sum_{k=0}^n f(T^kx)<\infty. $$ Then show that $f\in L^1(X,\mathscr{B},\mu)$. I tried to integrate and then possibly use the reverse Fatou’s lemma to bring the limsup out of the integral. But, this is not quite true. I can’t think of any other way to approach the proof. Any hint would be highly welcome.
By the Ergodic Theorem we have, for almost all $x$, $\int (f\wedge N) d\mu =\lim \frac 1n \sum\limits_{k=0}^{n}(f\wedge N)(T^{k}(x))\leq \lim \sup \frac 1n \sum\limits_{k=0}^{n}f( T^{k}(x))$ for each $N$. Take sup over $N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4143968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the Pseudoinverse of a $2\times 2$-matrix (real-valued) As is said in the title I would like to find the pseudoinverse of the real-valued matrix $$A = \left(\begin{array}{cc}1 & 1\\0 & 0\end{array}\right)$$ I've never done this before but I looked up some different methods but none worked.. * *First I tried the SVD but ended up stuck with an impossible system to find $U$. I used: $$AA^{\bot} = \left(\begin{array}{cc}2 & 0\\0 & 0\end{array}\right)$$ Which gave me $\Sigma = \left(\begin{array}{cc}\sqrt{2} & 0\\0 & 0\end{array}\right)$ and $V = \left(\begin{array}{cc}1 & 0\\0 & 1\end{array}\right) = I_2$ By construction we should have $$A = U\Sigma V^* \implies A = U\Sigma$$ (since $V = I_2$), but such $U$ can't exist, and so I'm stuck.. But I would really like to understand why, is it because $A, A^{\bot}$ or their product didn't fulfill some initial conditions I missed? * *Anyway since it didn't get me where I wanted I tried using the formula: $A^+ = (A^{\bot}A)^{-1}A^{\bot}$, which I believe only requires $A^{\bot}A$ to be invertible. But once more I ended up with a contradiction, since $(A^{\bot}A)^{-1} = \left(\begin{array}{cc}0 & 1\\1 & - 1\end{array}\right)$ and $A^{\bot} = \left(\begin{array}{cc}1 & 1\\0 & 0\end{array}\right)$, giving $A^+$ should be $\left(\begin{array}{cc}1 & 0\\0 & 0\end{array}\right)$, which doesn't satisfy the property (for example) $A = A^+AA^+$. So if anyone has some time to look it up and just tell me if I missed an important condition to apply those methods, or could give me another one to finish this question myself, that would be great!
You don't need the SVD if you are able to find a full-rank decomposition, in this case $$ A=\begin{bmatrix} 1 \\ 0 \end{bmatrix}\begin{bmatrix} 1 & 1 \end{bmatrix} $$ When you have such a full-rank decomposition $A=BC$, where $B$ is right-invertible and $C$ is left-invertible, then $A^+=C^+B^+$ and $$ B^+=(B^TB)^{-1}B^T\qquad C^+=C^T(CC^T)^{-1} $$ For your case $B^TB=[1]$ and $CC^T=[2]$, so $$ A^+=\frac{1}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \end{bmatrix}=\begin{bmatrix} 1/2 & 0 \\ 1/2 & 0 \end{bmatrix} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4144106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What is a normal at the surface of a hypersphere? Everything real-valued. Let $\vec{r} = (x,y)$ be the vector directed from the origin to a point at the perimeter of a circle. Then the normal perpendicular to that perimeter at that point is $\vec{n} = \vec{r}/|\vec{r}|$. In very much the same way, the normal at the surface of a sphere is $\vec{n} = \vec{r}/|\vec{r}|$ with $\vec{r}=(x,y,z)$. Question is: how is a normal at the surface of a hypersphere in n-dimensional space defined?I would say, analogously to the sphere and the circle, as follows: $$ \vec{r} = (x_1,x_2,x_3, \cdots , x_n) \quad ; \quad \vec{n} = \frac{\vec{r}}{|\vec{r}|} $$ But I haven't seen such a definition anywhere on the internet, for example with N sphere , Unit sphere , Hypersphere . Anyway, what is the surface of a hypersphere and how is a (normed) vector perpendicular to it defined? Note. Motivated by a question elsewhere at MSE: Problem with normal derivative.
According to Wikipedia, and as formulated in the comment by Ben Grossmann, we can find the normal vector using the gradient of the function whose level set defines the surface. For a hypersphere with radius $R$ that is: $$ F(x_1,x_2,\cdots,x_n) = \left|\vec{r}\right|^2-R^2 = x_1^2 + x_2^2 + \cdots + x_n^2 - R^2 = 0 $$ And so, apart from a norming factor: $$ \vec{n} = \vec{\nabla} F = \left(\frac{\partial F}{\partial x_1},\frac{\partial F}{\partial x_2},\cdots,\frac{\partial F}{\partial x_n}\right) = 2\left(x_1,x_2,\cdots,x_n\right) = 2\,\vec{r} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4144295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that if $A$ is a 2-by-3 matrix and B is a 3-by-2 matrix, then $BA\not=$Id. Claim: Suppose $A$ is a 2-by-3 matrix and $B$ is a 3-by-2 matrix. Then $BA$$\not=$$Id$. My attempt Proof. Let $T$:$F^3$$\rightarrow$$F^2$ and $S$:$F^2$$\rightarrow$$F^3$ whereby their matrix representations are given by $A$ and $B$, respectively. Thus $ST$ is a linear map from $F^3$ to $F^3$. Suppose $ST=Id$. Then $ST$ is injective, which imples that $T$ must also be injective. By the Fundamental Theorem of Linear Maps, we have that $\dim(F^3)=\dim\newcommand{\null}{\operatorname{null}}\newcommand{\range}{\operatorname{range}}\null T+\dim\range T$. Now because $T$ is injective and $\dim\range T\le\dim(F^2)$, it follows that $\dim(F^3)=\dim\range T\le \dim(F^2)$. However, this is impossible since $3$ $\not\le$ $2$. So if $ST=Id$, then $T$ is not a linear map from $F^3$ to $F^2$ and hence $A$ $\notin$${F}^{2,3}$. Therefore, $BA$$\not=$$Id$ whenever $A$$\in$ ${F}^{2,3}$ and $B$$\in$ ${F}^{3,2}$, as claimed.
$r(A)\le 2$ and $r(B)\le 2$, so $r(BA)\le\min(r(B),r(A))\le 2<3$. Since $r(I_3)=3$, it is impossible that $BA=I_3$.
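The rank argument implies that $BA$ is singular for every such pair, so $\det(BA)=0$ always. A quick randomized illustration with exact integer arithmetic:

```python
import random

random.seed(0)

def det3(M):
    # cofactor expansion of a 3x3 determinant
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

all_singular = True
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(3)] for _ in range(2)]  # 2x3
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(3)]  # 3x2
    if det3(matmul(B, A)) != 0:
        all_singular = False
```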
{ "language": "en", "url": "https://math.stackexchange.com/questions/4144430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find $f\in C^{1}$ such that $x\partial_{x}f+y\partial_{y}f=(x^2+y^2)^{1/2}$ In a problem I am looking to find a function $C^{1}$, $f:\mathbb{R}^{2}\to\mathbb{R}$ such that \begin{equation*} x\frac{\partial f}{\partial x}(x,y)+y\frac{\partial f}{\partial y}(x,y)=(x^2+y^2)^{1/2} \end{equation*} I tried doing the change of variables \begin{eqnarray*} x=u\text{ and }y=uv \end{eqnarray*} getting \begin{eqnarray*} u\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+uv\frac{\partial f}{\partial u}\frac{\partial u}{\partial x} & = & \sqrt{u^{2}+u^{2}v^{2}}\\ \Rightarrow u\frac{\partial f}{\partial u} + u\frac{\partial f}{\partial u} & = & \sqrt{u^{2}+u^{2}v^{2}}\\ \Rightarrow 2u\frac{\partial f}{\partial u} & = & |u|\sqrt{1+v^{2}}\\ \end{eqnarray*} But from this point I don't know how to move forward, I tried to integrate u which led me to a resolution by trigonometric substitution (giving something of the form $f(x,y)=\frac{1}{2}\sec(\theta(x,y))+C$) I would appreciate any ideas or indications on how to proceed.
$$x\frac{\partial f}{\partial x}(x,y)+y\frac{\partial f}{\partial y}(x,y)=(x^2+y^2)^{1/2}$$ Charpit-Lagrange characteristic ODEs : $$\frac{dx}{x}=\frac{dy}{y}=\frac{df}{(x^2+y^2)^{1/2}}$$ A first characteristic equation comes from solving $\frac{dx}{x}=\frac{dy}{y}$ : $$\frac{y}{x}=c_1$$ A second characteristic equation comes from solving $\frac{dx}{x}=\frac{df}{(x^2+y^2)^{1/2}}\quad\implies\quad \frac{dx}{x}=\frac{df}{(x^2+(c_1 x)^2)^{1/2}}$ $$f- x(1+(c_1)^2)^{1/2}=c_2$$ The general solution of the PDE expressed on implicit form $c_2=F(c_1)$ is : $$f-x(1+(\frac{y}{x})^2)^{1/2}=F\left(\frac{y}{x}\right)$$ $$\boxed{f(x,y)=(x^2+y^2)^{1/2}+F\left(\frac{y}{x}\right)}$$ $F$ is an arbitrary function (to be determined according to some boundary conditions).
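The general solution can be spot-checked by finite differences; the choice $F=\sin$ below is an arbitrary smooth instance of the free function, and the test points (all with $x\neq0$) are arbitrary as well:

```python
import math

def F(t):
    # arbitrary smooth choice for the free function in the general solution
    return math.sin(t)

def f(x, y):
    return math.hypot(x, y) + F(y / x)

def residual(x, y, h=1e-6):
    # x f_x + y f_y - sqrt(x^2 + y^2), with central-difference derivatives
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return x * fx + y * fy - math.hypot(x, y)

max_res = max(abs(residual(x, y))
              for x, y in [(1.0, 0.5), (2.0, -1.0), (0.7, 3.0)])
```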
{ "language": "en", "url": "https://math.stackexchange.com/questions/4144595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Use Stokes Theorem to Prove Two Integrals on Differentiable Manifolds are Equivalent I have to answer the following problem: Let $\omega= ydx + xzdy + xdz$. Let $S_1$ be the portion of the upper hemisphere given by $\phi_1(u,v)=(u,v,\sqrt{4-u^2-v^2})$; where $u^2 + v^2 \leq 2$. Let $S_2$ be the disc in the plane $z=\sqrt{2}$, given by $\phi_2(u,v)=(u,v,\sqrt{2})$; where $u^2 + v^2 \leq 2$. Use Stokes theorem to show $\int\int_{s_1}d\omega=\int\int_{s_2}d\omega$ I know that I could technically calculate each integral to prove this, but since the directions are asking for the use of Stokes theorem, I feel like there's some trick that I'm missing. The two parameterizations look like they have equivalent boundaries, but I'm getting stuck on exactly how to show that. Any help would be greatly appreciated
HINT: When $u^2+v^2=2$, we have $\sqrt{4-u^2-v^2}=\sqrt2$. Indeed, $$\int_{S_1}d\omega=\int_{\partial S_1} \omega=\int_{\partial S_2} \omega=\int_{S_2} d\omega.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4144947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $T$ be a compact and self adjoint operator on a hilbert space $H$ such that $T$ is not invertible. Prove that $\text{Ker}(T)\ne 0$ Let us assume on contrary that $\text{Ker}(T)=0$. Now we will use a standard result which says $\text{Ker}(T)=\overline{\text{Ran}(T^*)}$ As $T$ is self-adjoint, $T=T^*$. So $\text{Ker}(T)=\overline{\text{Ran}(T)}$. Therefore, $\overline{\text{Ran}(T)}=H$. From here, I can't proceed. If I can show $\text{Ran}(T)$ is closed we are done. Although I haven't use the compactness of $T$ yet. Can anyone help me complete the proof? Thanks for help in advance.
The claim is not true when $H$ is infinite-dimensional and separable. Define $T:l^2\to l^2$ by $$ Tx = (x_1, x_2/2, x_3/3,\dots), $$ which is compact, self-adjoint, and injective. It is not invertible, as $(1,1/2,1/3,\dots)\in l^2$ is not in the range of $T$: its only possible preimage would be $(1,1,1,\dots)\notin l^2$. If $H$ is non-separable then the claim follows from the spectral theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4145121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
To evaluate the surface integral $\iint_S(x\,dy\,dz+y\,dx\,dz+z\,dx\,dy)$ How to evaluate the surface integral $$\iint_S(x\,dy\,dz+y\,dx\,dz+z\,dx\,dy)$$ where $S$ is the outer surface of the ellipsoid $$\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1$$ that lies above the $xy-$plane. My thought was to substitute $x=a \sin\theta \cos\psi, y=b \sin\theta \sin\psi, z=c \cos\theta$, where $0\leq \theta\leq \frac{\pi}{2}$ and $0\leq \psi\leq 2\pi$. But in this way I got the answer as $-abc\pi$, which is supposed to be $2abc\pi$. Any help is highly appreciated. Thank you in advance. My steps $$\frac{\delta(x,y)}{\delta(\theta,\psi)}=ab\sin\theta \cos\psi$$ $$\frac{\delta(y,z)}{\delta(\theta,\psi)}=bc\sin^2\theta \cos \psi$$ $$\frac{\delta(z,x)}{\delta(\theta,\psi)}=ac\sin^2\theta \sin \psi$$ The given integral $$=\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}(x\frac{\delta(y,z)}{\delta(\theta,\psi)}+y\frac{\delta(z,x)}{\delta(\theta,\psi)}+z\frac{\delta(x,y)}{\delta(\theta,\psi)})\,d\theta\, d\psi$$ $$=\int_{0}^{\frac{\pi}{2}}\int_{0}^{2\pi}(\sin \theta)\,d\theta\, d\psi$$ $$=-abc\pi$$
By the Gauss–Ostrogradsky divergence theorem the flux $\iint x\,dy\,dz+y\,dx\,dz+z\,dx\,dy$ through a closed surface equals $$\iiint \operatorname{div}(x\mathbf{i}+y\mathbf{j}+z\mathbf{k})\,dx\,dy\,dz=\iiint\left(\frac{\partial x}{\partial x}+\frac{\partial y}{\partial y}+\frac{\partial z}{\partial z}\right)dx\,dy\,dz=3\iiint dx\,dy\,dz=3V,$$ where $V$ is the enclosed volume. Here $S$ is only the upper half of the ellipsoid, so close it with the elliptical disk it cuts out of the $xy$-plane; on that disk $z=0$ and $dz=0$, so all three terms of the integrand vanish and the disk contributes no flux. The enclosed region is half of the ellipsoid, whose full volume is well known to be $\frac{4}{3}\pi abc$, so the required integral is $3\cdot\frac{1}{2}\cdot\frac{4}{3}\pi abc=2\pi abc$, as expected. (In your direct computation there is a slip in the Jacobians: $\frac{\delta(x,y)}{\delta(\theta,\psi)}=ab\sin\theta\cos\theta$, not $ab\sin\theta\cos\psi$. With the corrected Jacobians the integrand works out to $abc\sin\theta$, and $\int_{0}^{2\pi}\int_{0}^{\pi/2}abc\sin\theta\,d\theta\,d\psi=2\pi abc$.)
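A numerical cross-check of the value $2\pi abc$ (a sketch with arbitrarily chosen semi-axes $a=2$, $b=3$, $c=5$; the function name is mine): integrate $\mathbf{F}\cdot(\mathbf{r}_\theta\times\mathbf{r}_\psi)$ directly over the parametrized upper half.

```python
import math

def flux_upper_ellipsoid(a, b, c, n_t=400, n_p=400):
    # numerically integrate F . (r_theta x r_psi) over the upper half,
    # with F = (x, y, z) and the outward-pointing parametric normal
    total = 0.0
    dt = (math.pi / 2) / n_t
    dp = (2 * math.pi) / n_p
    for i in range(n_t):
        t = (i + 0.5) * dt          # midpoint rule in theta
        st, ct = math.sin(t), math.cos(t)
        for j in range(n_p):
            p = (j + 0.5) * dp      # midpoint rule in psi
            cp, sp = math.cos(p), math.sin(p)
            x, y, z = a * st * cp, b * st * sp, c * ct
            rt = (a * ct * cp, b * ct * sp, -c * st)      # d r / d theta
            rp = (-a * st * sp, b * st * cp, 0.0)         # d r / d psi
            n = (rt[1] * rp[2] - rt[2] * rp[1],
                 rt[2] * rp[0] - rt[0] * rp[2],
                 rt[0] * rp[1] - rt[1] * rp[0])
            total += (x * n[0] + y * n[1] + z * n[2]) * dt * dp
    return total

approx = flux_upper_ellipsoid(2.0, 3.0, 5.0)
expected = 2 * math.pi * 2.0 * 3.0 * 5.0
```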
{ "language": "en", "url": "https://math.stackexchange.com/questions/4145280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
In terms of the logic of the proof, what is the difference between $a+b$ and $A+B$ in the proof that $\sup(A+B) = \sup A + \sup B$? Of course, $a + b$ is an element of $A+B$, but what I'm unable to wrap my head around, is when does $a+b$ behave like if it was the entirety of $A+B$, and when does it behave as a single element of $A+B$? This is in the context of trying to prove $\sup A + \sup B \leq \sup(A+B)$. For instance, I'm confused about the difference in these statements: * *Let $a,b$ be elements of the subintervals $A,B$ of the reals, and let $x = a + b$. Then $x \leq \sup A + \sup B$. *Let $a,b$ be elements of the subintervals $A,B$ of the reals, and let $x \in A+B$. Then $x \leq \sup(A + B)$. And also the difference between these: (I'm also not sure, can we actually instead infer that $\sup A + \sup B$ is a least upper bound from $a \leq \sup A$ and $b \leq \sup B$)? *Since $a \leq \sup A$ and $b \leq \sup B$, we have that $\sup A + \sup B$ is an upper bound of $a + b$. *Since $a \leq \sup A$ and $b \leq \sup B$, we have that $\sup A + \sup B$ is an upper bound of $A + B$. Any help or reading suggestions is very welcome!
You asked "When does $a+b$ behave like if it was the entirety of $A+B$." I would say it never does; it is always a single element of $A+B$. However, if $a$ and $b$ are introduced into the proof as arbitrary elements of $A$ and $B$, and you then prove some statement about them (treating them as unspecified single elements of $A$ and $B$), then you can say that since they were arbitrary, the statement is true for all $a \in A$ and $b \in B$. Your statement 3 is written in a confusing way. It would be clearer to write: Since $a \le \sup A$ and $b \le \sup B$, we have that $a+b \le \sup A + \sup B$. Your statement 4 is correct, although I think it would be clearer to write it like this: Let $x$ be an arbitrary element of $A+B$. Then there are $a \in A$ and $b \in B$ such that $x = a+b$. Since $a \le \sup A$ and $b \le \sup B$, $x = a+b \le \sup A + \sup B$. Since $x$ was arbitrary, we can conclude that for all $x \in A+B$, $x \le \sup A + \sup B$, so $\sup A + \sup B$ is an upper bound of $A+B$. This proves that $\sup(A+B) \le \sup A + \sup B$.
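A small finite illustration of the two roles (the sets below are arbitrary finite examples of my own, so sup is just max): each individual $a+b$ is bounded by $\sup A + \sup B$, and because that bound holds for arbitrary $a$ and $b$, the supremum of the whole sumset obeys it too.

```python
# each x = a + b is a single element of A + B; ranging over all
# such x gives the whole sumset
A = [0.1, 0.5, 0.9]
B = [1.0, 1.5, 2.0]
sumset = [a + b for a in A for b in B]

sup_A, sup_B = max(A), max(B)
# each individual a + b is bounded by sup A + sup B ...
all_bounded = all(x <= sup_A + sup_B for x in sumset)
# ... and since a, b were arbitrary, sup(A+B) <= sup A + sup B
sup_sum = max(sumset)
```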
{ "language": "en", "url": "https://math.stackexchange.com/questions/4145438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solution to $\int\frac{1}{\sqrt{1+4C+x^2}}$dx I would like to solve the following integral, but I am not sure if I am doing it the right way: $$ \int \frac{1}{\sqrt{1+4C+x^2}}dx $$ with $C \in \mathbb{R}$. I found this in an integral table: $$ \int\frac{dx}{\sqrt{a^2+x^2}}=\text{arcsinh}\left(\frac{x}{a}\right) $$ That suggests to identify $a^2$ with $1+4C$, such that $$ \int \frac{1}{\sqrt{1+4C+x^2}}dx = \text{arcsinh}\left(\frac{x}{\sqrt{1+4C}}\right) $$ Wolfram Alpha returns with $$ \int \frac{1}{\sqrt{1+4C+x^2}}dx = \text{arctanh}\left(\frac{x}{\sqrt{1+4C+x^2}}\right) $$ I am not really interested in that specific solution, but whether I was right in how I included $4C$ in the expression for $a^2$. Without the $4C$, the connection would have been straight forward: $a^2=1$ and $x^2=x^2$. Of course, the $4C$ has to go somewhere, and it cannot be the square term, so I included it in the constant term. Am I allowed to do this? EDIT: If my solution with $\text{arcsinh}$ is correct, how can I prove equality with the one from W|A?
Everything depends on the sign of $1+4C$. If $1+4C\gt 0$, then you can solve for $a$ from $a^2=1+4C$ and you get the answer you found. If $1+4C\lt 0$, then you have a different integral; use in this case $$\int\frac{dx}{\sqrt{x^2-a^2}}=\ln|x+\sqrt{x^2-a^2}|+B,$$ where $B$ is a constant. The last formula is, imho, nicer, because in general $$\int\frac{dx}{\sqrt{x^2\pm a^2}}=\ln|x+\sqrt{x^2\pm a^2}|+B$$ holds for $a\gt 0$ and $B$ constant. Addition for the Edit: just differentiate the right side; two antiderivatives of the same function can only differ by a constant.
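For the Edit, here is a quick numeric check (with the arbitrary choice $C=2$, so $1+4C>0$) that the $\operatorname{arcsinh}$ form and Wolfram Alpha's $\operatorname{arctanh}$ form differ by a constant; in fact the constant turns out to be $0$.

```python
import math

# With 1 + 4C > 0 both antiderivatives are valid; they should differ
# only by a constant.  C = 2 is an arbitrary sample value.
C = 2.0
a2 = 1.0 + 4.0 * C

def F1(x):  # arcsinh form from the integral table
    return math.asinh(x / math.sqrt(a2))

def F2(x):  # arctanh form returned by Wolfram Alpha
    return math.atanh(x / math.sqrt(a2 + x * x))

diffs = [F1(x) - F2(x) for x in (-3.0, -1.0, 0.0, 0.7, 2.5, 10.0)]
```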
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Simple Examples where Base Case of Induction is non-trivial? I want to explain the importance of the base case of induction to a 10-year-old kid. But I am finding it difficult to find examples where solving the base case is non-trivial. For example, the sum of $n$ natural numbers from $1$ to $n$, is $T(n) = \frac{n(n+1)}{2}$. Its base case would be $T(1) = 1$, which is trivial to see. I do not want examples like this. Neither I want complicated examples. Can somebody suggest some easy but non-trivial examples here? Note: The same question has been asked before here. But the examples therein are difficult to understand except this one. But I think the base case of this example is not provable. I am looking for non-trivial base case examples that are solvable. I hope you understand my point.
How about something completely made-up and informal? Theorem: At every age $n$, a person owns infinitely many ice cream cones. Proof: Proof by induction: Assume every person of age $n$ owns infinitely many ice cream cones. A person of age $n+1$ must have had infinitely many ice cream cones one year prior, and assuming a lower bound to the volume of an ice cream cone and an upper bound to the speed of ice cream consumption, you can only eat finitely many ice cream cones in a year. Therefore the supply could only have shrunk by a finite amount, and remains infinite at age $n+1$. QED. Obviously, the flaw of this proof is the fact that people are not born with an infinite amount of ice cream, i.e. the base case is not fulfilled.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Non-Hilbert Banach space isomorphic to $\ell_2$ I know that $\ell_2$ is the only separable Hilbert space of infinite dimension up to isometric isomorphism, so in particular, any separable Hilbert space of infinite dimension is isomorphic to $\ell_2$. So my question is, can someone give an example of a Banach space isomorphic to $\ell_2$ but not isometrically isomorphic to it? I know, for what is said above, that this space cannot be a Hilbert space but I can't think of any example.
For example, let $X=\ell_2(\{0,1,2,\dots\})$ and define a norm on $X$ by $$||x||=|x_0|+\left(\sum_{n=1}^\infty|x_n|^2\right)^{1/2}.$$ The identity map is an isomorphism onto $\ell_2$. A simple way to make certain that $X$ is not isometric to $\ell_2$ is to verify that the parallelogram law fails: $$||e_0+e_1||^2+||e_0-e_1||^2=8,$$while $$2||e_0||^2+2||e_1||^2=4.$$
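A quick check of the parallelogram-law failure (a sketch restricted to the first two coordinates, which is all the computation uses):

```python
import math

# the modified norm: ||x|| = |x_0| + l2-norm of the remaining coordinates
def norm(x):
    return abs(x[0]) + math.sqrt(sum(t * t for t in x[1:]))

e0 = [1.0, 0.0]
e1 = [0.0, 1.0]
plus = [u + v for u, v in zip(e0, e1)]
minus = [u - v for u, v in zip(e0, e1)]

lhs = norm(plus) ** 2 + norm(minus) ** 2      # = 8
rhs = 2 * norm(e0) ** 2 + 2 * norm(e1) ** 2   # = 4
```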
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help with Knuth's Surreal Numbers I'm reading D. E. Knuth's book "Surreal Numbers". And I'm completely stuck in chap. 6 (The Third Day) because there is a proof I don't understand. Alice says Suppose at the end of $n$ days, the numbers are $$x_1<x_2<\dots<x_m$$ She demonstrates that $x_i \equiv (\{x_{i-1}\},\{x_{i+1}\})$ and she begins the proof by saying Look, each element of $X_{iL}$ is $\le x_{i-1}$, and each element of $X_{iR}$ is $\ge x_{i+1}$. That first step of the proof is the one I don't understand. Can someone show me how to demonstrate that statement?
An ordered list of numbers in the universe after each day: Day 0: empty Day 1: 0 Day 2: -1, 0, 1 Day 3: -2, -1, -1/2, 0, 1/2, 1, 2 New numbers: Day 1: 0 Day 2: -1, 1 Day 3: -2, -1/2, 1/2, 2 On any given day the universe can be sorted: $$x_1<x_2<\dots<x_m$$ Sorted list of numbers on day 3: -2 < -1 < -1/2 < 0 < 1/2 < 1 < 2 which we assign to be $x_1<x_2<x_3<x_4<x_5<x_6<x_7$ The new numbers on day 3 being: -2, -1/2, 1/2, 2 which are the values of elements x1, x3, x5, x7 from our sorted list The book says: $x_i \equiv (\{x_{i-1}\},\{x_{i+1}\})$ For day 3, it means: x1 = {{},{x2}} x3 = {{x2},{x4}} x5 = {{x4},{x6}} x7 = {{x6},{}} Which can be written with values as: -2 = {|-1} -1/2 = {-1|0} 1/2 = {0|1} 2 = {1|} Each element of $X_{iL}$ is $\le x_{i-1}$, and each element of $X_{iR}$ is $\ge x_{i+1}$. This says that there is a longer form for writing these left and right sets: x1 = {{},{x2,x4,x6}} x3 = {{x2},{x4,x6}} x5 = {{x2,x4},{x6}} x7 = {{x2,x4,x6},{}} Which can be written with values as: -2 = {|-1,0,1} -1/2 = {-1|0,1} 1/2 = {-1,0|1} 2 = {-1,0,1|} I don't have the book, but I think the conclusion it is getting to is that each finite surreal number has a short representation with only one number in the left set and one number in the right set. This means for example that we could write the finite surreal number -1/2 as: -1/2 = {-1|0,1} but that we only need to write: -1/2 = {-1|0} Any finite surreal value is fully defined by the single greatest number from its left set and the single smallest number from its right set. Since the entire universe of numbers available on the previous day is placed into either the left or right set of each new number, the full representations become very large. And since all mathematical operations on the shortened versions work the same as using the longer versions, there is an incentive to use this short form while operating with finite surreal numbers.
For example I could write a surreal representation: 3/256 = { 1/128 | 1/64 }, where left and right are the decrement and increment of the numerator of the original number: 3/256 = { (3-1)/256 | (3+1)/256 }, while the long form would involve writing 1023 numbers instead of two.
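A little generator (my own sketch, not code from the book) that reproduces the day-by-day universes listed at the top of this answer: each day, insert the average between consecutive neighbours and extend one step past both ends.

```python
from fractions import Fraction

# numbers created by the end of day n: between each pair of neighbours
# insert their average, and extend past both ends by 1
def universe(n):
    xs = []
    for _ in range(n):
        if not xs:
            xs = [Fraction(0)]
            continue
        new = [xs[0] - 1]
        for left, right in zip(xs, xs[1:]):
            new += [left, (left + right) / 2]
        new += [xs[-1], xs[-1] + 1]
        xs = new
    return xs

day3 = universe(3)   # -2, -1, -1/2, 0, 1/2, 1, 2 as in the list above
```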
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is the quotient rule in differentiation necessary? * *Calculus - Derivatives - Quotient Rule Why is a quotient rule even necessary? Why can't we just consider $\frac{A}{B}$ as $A \cdot B^{-1}$ and use the multiplication formula?
You can derive the quotient rule by considering $\dfrac{f(x)}{g(x)}$ as $f(x)\cdot(g(x))^{-1}$ and then using the product and chain rule. The quotient rule gives a formula (under the right conditions) for evaluating the derivative of a quotient without using the product and chain rule each time.
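A numeric spot-check (with the arbitrary sample choices $f=\sin$, $g=\exp$): the product-and-chain-rule form of $(f\cdot g^{-1})'$ agrees with the quotient-rule formula at every test point.

```python
import math

# d/dx [f * g^(-1)] via the product + chain rule vs. the quotient rule
f, df = math.sin, math.cos
g, dg = math.exp, math.exp

def product_chain_form(x):
    # (f * g^-1)' = f' * g^-1 + f * (-g^-2) * g'
    return df(x) / g(x) + f(x) * (-(g(x) ** -2)) * dg(x)

def quotient_rule_form(x):
    # (f/g)' = (f' g - f g') / g^2
    return (df(x) * g(x) - f(x) * dg(x)) / g(x) ** 2

gaps = [abs(product_chain_form(x) - quotient_rule_form(x))
        for x in (-1.0, 0.0, 0.5, 2.0)]
```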
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
The action of $\text{GL}(V)$ on the set of complex structures of $V$ is transitive Let $J(V)$ be the set of all complex structures on a finite-dimensional real vector space $V$. (A complex structure on $V$ is by definition a linear isomorphism $J:V\to V$ such that $J^2=-\text{id}$.) Fix such a complex structure $J$, and let $\text{GL}(V,J)$ be the group of all linear isomorphisms $L$ of $V$ such that $L\circ J=J\circ L$. I am trying to show that there is a one-to-one correspondence between the set $J(V)$ and the set $\text{GL}(V)/\text{GL}(V,J)$ of all left cosets. My attempt: The group $\text{GL}(V)$ acts on $J(V)$ by conjugation, and the stabilizer of $J$ is by definition the subgroup $\text{GL}(V,J)$. But how do we know that this action is transitive?
Here's a sketch: If $J$ is an arbitrary complex structure, there exists an ordered basis of the form $$ (v_{1}, Jv_{1}, v_{2}, Jv_{2}, \dots, v_{n}, Jv_{n}). $$ Conversely, to every ordered basis $$ (v_{1}, w_{1}, v_{2}, w_{2}, \dots, v_{n}, w_{n}) $$ there is a unique complex structure satisfying $w_{k} = Jv_{k}$ for $1 \leq k \leq n$. The general linear group acts transitively on the set of ordered bases. Meta-caution: Some authors write "complex bases" in the order $$ (v_{1}, v_{2}, \dots, v_{n}, Jv_{1}, Jv_{2}, \dots, Jv_{n}). $$ This is potentially vexing because a choice of ordered basis determines an orientation on a complex vector space, and these two conventional orderings do not generally define the same orientation.
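A small computational sketch of the converse direction (the basis below is an arbitrary invertible one chosen purely for illustration; the helper names are mine): build the unique $J$ with $w_k = Jv_k$ and $Jw_k = -v_k$, and confirm $J^2 = -\mathrm{id}$.

```python
from fractions import Fraction

def mat_inv(M):
    # Gauss-Jordan inverse with exact rational arithmetic
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                fac = A[r][col]
                A[r] = [v - fac * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# columns of B form an ordered basis (v1, w1, v2, w2) of R^4
B = [[1, 0, 1, 0],
     [0, 1, 1, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
# J0 sends e1 -> e2, e2 -> -e1, e3 -> e4, e4 -> -e3 (standard structure)
J0 = [[0, -1, 0, 0],
      [1, 0, 0, 0],
      [0, 0, 0, -1],
      [0, 0, 1, 0]]
J = mat_mul(mat_mul(B, J0), mat_inv(B))   # conjugate the standard J
J_squared = mat_mul(J, J)
minus_I = [[-Fraction(int(i == j)) for j in range(4)] for i in range(4)]
```

Transitivity of the $\mathrm{GL}(V)$-action is exactly this conjugation: every complex structure is $B J_0 B^{-1}$ for some invertible $B$.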
{ "language": "en", "url": "https://math.stackexchange.com/questions/4146988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to derive a constraint for a positive semi-definite matrix? Given a positive semi-definite (PSD) matrix $M$ =: $$\begin{bmatrix} 1 & a & c \\ a & 1 & b \\ c & b & 1 \end{bmatrix}$$, how to come up with the constraint: $$ab - \sqrt{(1-a^2)(1-b^2)} \le c \le ab + \sqrt{(1-a^2)(1-b^2)}$$? Update: It turns out to get the above constraint, we only need to use a criterion (https://en.wikipedia.org/wiki/Sylvester%27s_criterion) -- a PSD matrix must have a non-negative determinant. Thank to all the great answers below!
Hint and observation: For positive definiteness we must have $1-a^2>0$, hence there exists a number $\phi\in(0,\pi)$ such that $a=\cos(\phi)$. Furthermore we want to have $$\det(M)=1-a^2-b^2-c^2+2abc>0,$$ that is $$1-\cos^2(\phi)>b^2+c^2-2bc\cos(\phi).$$ Consider the triangle with edges $b$ and $c$ enclosing the angle $\phi$, and call its third edge $u$; by the law of cosines $u^2=b^2+c^2-2bc\cos(\phi)$, so the condition reads $$\sin^2(\phi)>u^2,$$ that is $\sin(\phi)>u$. In the boundary case $\sin(\phi)=u$ the law of sines tells us that $u/\sin(\phi)=1$, i.e. the triangle is inscribed in a circle with diameter $1$; positive definiteness thus says the vertices of the triangle lie inside a circle with diameter $1$.
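A numeric sanity check of the resulting constraint (a sketch; ranges and tolerances are arbitrary choices of mine): as a function of $c$ the determinant is a downward parabola with roots $ab\pm\sqrt{(1-a^2)(1-b^2)}$, so it is nonnegative exactly on that interval.

```python
import math, random

def det(a, b, c):
    # determinant of the 3x3 matrix with unit diagonal
    return 1 - a*a - b*b - c*c + 2*a*b*c

random.seed(0)
ok = True
for _ in range(1000):
    a = random.uniform(-0.99, 0.99)
    b = random.uniform(-0.99, 0.99)
    half = math.sqrt((1 - a*a) * (1 - b*b))
    lo, hi = a*b - half, a*b + half
    c_in = random.uniform(lo, hi)                 # inside the band
    c_out = hi + random.uniform(1e-6, 0.5)        # just outside it
    if det(a, b, c_in) < -1e-12 or det(a, b, c_out) > 1e-12:
        ok = False
```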
{ "language": "en", "url": "https://math.stackexchange.com/questions/4147167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Let $\int_{1}^{2} e^{x^2} \mathrm{d}x=a$. Then find the value of $\int_{e} ^{e^4} \sqrt{\ln x} \mathrm{d}x$ Let $\int_{1}^{2} e^{x^2} \mathrm{d}x =a$. Then the value of $\int_{e} ^{e^4} \sqrt{\ln x} \mathrm{d}x $ is (A) $e^4-a$ (B) $2e^4 - a$ (C)$e^4 - e - 4a$ (D) $2e^4-e-a$ $\bf{Try:}$ Let $\ln x =t^2$ then $x=e^{t^2}$ and $dx=2te^{t^2} dt$. Then the given integral changes to $\int_{1}^{2} 2t^2 e^{t^2}dt$ but $\int 2t^2 e^{t^2}dt=t^2\int e^{t^2} dt - \int \left(2t\int e^{t^2}dt\right)dt$ I'm unable to progress further. Please help thanks in advance.
It can be found without using any substitutions at all, just with geometric meaning. Observe that $y = e^{x^2}$ has the inverse function $x = \sqrt{\ln y}$, and for $x = 1$ and $x=2$ we have $y=e$ and $y=e^4$ (and vice versa). Check this graph in Desmos and see that $\int_{e} ^{e^4} \sqrt{\ln y} \mathrm{d}y + \int_{1}^{2} e^{x^2} \mathrm{d}x = 2\cdot e^4 - 1\cdot e$, where $2\cdot e^4$ is the area of the larger rectangle and $1\cdot e$ is the area of the smaller one. So, $\int_{e} ^{e^4} \sqrt{\ln x} \mathrm{d}x = 2e^4 - e - a$.
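A numeric confirmation of the picture (a sketch; `simpson` is my own composite-Simpson helper): the two areas really do add up to $2e^4-e$.

```python
import math

def simpson(fn, lo, hi, n=2000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = fn(lo) + fn(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(lo + i * h)
    return s * h / 3

a = simpson(lambda x: math.exp(x * x), 1.0, 2.0)
b = simpson(lambda x: math.sqrt(math.log(x)), math.e, math.e ** 4)
identity = a + b - (2 * math.e ** 4 - math.e)   # should be ~ 0
```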
{ "language": "en", "url": "https://math.stackexchange.com/questions/4147630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
What are the prerequisites for Terence Tao's Additive Combinatorics book? The title says it all, i want to know what are the mathematical prerequisites to work through Tao's additive combinatorics book.
The authors Terence Tao and Van H. Vu recommend in the Prologue of Additive Combinatorics some familiarity with * *elementary combinatorics *harmonic analysis *convex geometry *incidence geometry *graph theory *probability *algebraic geometry and *ergodic theory. They also write after providing us with the list above: * *... this wealth of perspectives makes additive combinatorics a rich, fascinating, and multi-faceted subject. *... The main purpose of this book is to gather all these diverse tools in one location, present them in a self-contained and introductory manner, and illustrate their application to problems in additive combinatorics. You might also find the * *AMS review by Ben Green and the *Review of Additive Combinatorics (see p. 16 - 18) by Raghunath Tewari helpful and informative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4147825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Minimal and characteristic polynomial question $$A =\left(\begin{array}{rrrr} 0 & 1 & -1 & 1\\ -1 & 2 & -1 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 0 & 0 \end{array}\right).$$ We have this matrix, we want to find the characteristic and the minimal polynomial. I have done the following: First I used the formula $\det(A -\lambda I) = 0$, then I got this determinant into upper triangular form by getting rid of the $-1$ in the second row. Then I multiplied the diagonal and got : $$(\lambda +1+ \sqrt{2})(\lambda + 1- \sqrt{2})(1-\lambda)(-\lambda) = 0 = p(\lambda)$$ Wolfram alpha gives the same result. However the solution says that the characteristic and minimal polynomial should be $$p_A(\lambda) = \lambda(\lambda-1)^3,\quad m_A(\lambda) = \lambda(\lambda -1)^2.$$ I do not understand how they got there or where is the mistake in my process.
If you compute the determinant of $A-\lambda I$ outright you should get: $$\begin{align*} \det\begin{bmatrix} -\lambda & 1 & -1& 1\\ -1 & 2-\lambda & -1 & 1\\ 0&0 & 1-\lambda & -1 \\ 0&0&0& -\lambda \end{bmatrix} &= -\lambda \det\begin{bmatrix}-\lambda & 1 & -1\\ -1 & 2-\lambda & -1 \\ 0 & 0 & 1-\lambda \end{bmatrix}\\ &= -\lambda(1-\lambda) \det\begin{bmatrix} -\lambda & 1\\ -1 & 2-\lambda\end{bmatrix} \end{align*}$$ by cofactor expansion along the bottom rows. The last term is: $$ -\lambda(1-\lambda)[\lambda^2-2\lambda +1] = \lambda(\lambda -1)[(\lambda-1)^2] = \lambda(\lambda -1)^3. $$ From here, you know that the minimal polynomial is one of $\lambda (\lambda -1), \lambda (\lambda -1)^2, \lambda (\lambda -1)^3$ since it must share the same roots as the characteristic polynomial. Since there are only three you can plug in and check.
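The plug-in check can be done with exact integer arithmetic (a sketch; the helper names are mine): $A(A-I)^2 = 0$ while $A(A-I)\ne 0$, so the minimal polynomial is $\lambda(\lambda-1)^2$.

```python
# check candidate minimal polynomials of A with exact integer arithmetic
A = [[0, 1, -1, 1],
     [-1, 2, -1, 1],
     [0, 0, 1, -1],
     [0, 0, 0, 0]]
I = [[int(i == j) for j in range(4)] for i in range(4)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def matsub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(4)] for i in range(4)]

AmI = matsub(A, I)
m_of_A = matmul(A, matmul(AmI, AmI))   # A (A - I)^2, should be zero
too_small = matmul(A, AmI)             # A (A - I), should NOT be zero
```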
{ "language": "en", "url": "https://math.stackexchange.com/questions/4147982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that there is no random variable $X$ such that the m.g.f. of $X$ satisfies $M_X(1) = 3$ and $M_X(2) = 4$ Show that there is no random variable $X$ such that the moment generating function of $X$ satisfies $M_X(1)=3$ and $M_X(2) = 4$. I'm not sure how to prove this, but I'm trying to go through known distributions such as uniform or exponential to see what the graph of the m.g.f. looks like. The definition of the m.g.f. is the expectation $\mathbb{E}[e^{tx}]$. If we were to consider a uniform distribution, then we would have $\frac{e^t(1-e^{tm}}{m(1-e^t)}$, since is exponential and will never go through the points $(1,3)$ and $(2,4)$. If we were to consider an exponential distribution, then we would have $\frac{\lambda}{\lambda-t}$ a hyperbola, which won't go through $(1,3)$ and $(2,4)$ either. Of course, we can consider more distributions, such as binomial, geometric, hypergeometric and so on. However, I can't possibly prove this by going through each and every distribution and checking its m.g.f. Can someone help me with the proof on this one? Thanks
Suppose such an $X$ exists. Then $Y = e^X$ is well-defined. But $$\operatorname{Var}[Y] = \operatorname{E}[Y^2] - \operatorname{E}[Y]^2 = \operatorname{E}[e^{2X}] - \operatorname{E}[e^X]^2 = 4 - 3^2 = -5,$$ so no such $Y$ exists, consequently no such $X$ exists.
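An empirical illustration (the sample distributions are arbitrary choices of mine): for the empirical measure of any sample, $\widehat{M}(2)-\widehat{M}(1)^2$ is the (biased) sample variance of $e^X$, hence never negative — exactly the obstruction used above.

```python
import random, math

# for any X, E[e^{2X}] >= (E[e^X])^2, because the difference is
# Var(e^X) >= 0 -- so (M(1), M(2)) = (3, 4) is impossible
random.seed(1)

def mgf_pair(sample):
    m1 = sum(math.exp(x) for x in sample) / len(sample)
    m2 = sum(math.exp(2 * x) for x in sample) / len(sample)
    return m1, m2

samples = [
    [random.gauss(0.0, 0.5) for _ in range(20000)],
    [random.uniform(-1.0, 1.0) for _ in range(20000)],
    [random.expovariate(5.0) for _ in range(20000)],
]
variances = [m2 - m1 * m1 for m1, m2 in map(mgf_pair, samples)]
```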
{ "language": "en", "url": "https://math.stackexchange.com/questions/4148192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Notation for the disjoint union of open subspaces Given a family $(X_i)_{i\in I}$ of pairwise disjoint open subspaces $X_i\subset X$ of a topological space $X$, their union is canonically homeomorphic to the topological disjoint union $\bigsqcup_{i\in I} X_i$, which, as a set, is defined as $$ \bigsqcup_{i\in I} X_i := \bigcup_{i\in I} X_i \times \{i\}. $$ Strictly speaking, this space is not a subset of $X$. I am wondering if there is a notation to emphasize that the union $\bigcup_{i\in I} X_i \subset X$ has the disjoint union topology, as a subspace of $X$. Is it customary to write $\bigsqcup_{i\in I} X_i$ also for this subspace?
Yes, it is customary to write $\coprod X_i$ for a disjoint union of subspaces/subsets in general. And $\coprod X_i$ is also the usual notation for the abstract disjoint union of the $X_i$ (which is the coproduct in the category of spaces - this notation is also used for (categorical) coproducts in general). This usually does not lead to confusion, and in any case, whenever you feel that it can be confusing, you should say explicitely what you are speaking about. A good way to avoid confusion is also to prefer using only $\bigcup X_i$ for the subspace if you want to distinguish it from the abstract disjoint union.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4148372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Even Number Divisibility Problem The question is that for the equation $x = 25 + 5^k$ where $k$ is some random positive integer, can $x$ be divisible by $9$ for any $k$? My first intuition is that since $25$ = odd and $5^k$=odd then $x$ must be an even number. Rule of divisibility by $9$ states that the sum of digits should be divisible by $9$. Since $x$ is an even number : The series : $x = 18 , 36 , 54 , 72... $ Is there any $k$ value that corresponds to any number in this series? I tried to write a Python script and it seems like there are not any. What is the reason behind that?
You are correct that $25+5^k$ is divisible by $2$, but that doesn't tell whether $25+5^k$ is divisible by $9$. If you check $25+5^k$ for $k\in\{1,2,3,4,5,6\}$, you'll find that $25+5^\color{red}5$ is divisible by $9$. $25+5^k$ grows much more quickly than $18k$.
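The "reason behind that" is visible modulo $9$ (a quick sketch): $5^k \bmod 9$ cycles through $5,7,8,4,2,1$ with period $6$, and since $25\equiv 7 \pmod 9$, divisibility by $9$ requires $5^k\equiv 2$, i.e. $k\equiv 5\pmod 6$. A linear scan of small multiples of $18$ misses these values because $25+5^k$ grows so fast.

```python
# 5^k mod 9 cycles with period 6; 25 + 5^k is divisible by 9
# exactly when k ≡ 5 (mod 6)
cycle = [pow(5, k, 9) for k in range(1, 7)]
hits = [k for k in range(1, 50) if (25 + 5 ** k) % 9 == 0]
first = 25 + 5 ** hits[0]   # smallest example; it is even, so a multiple of 18
```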
{ "language": "en", "url": "https://math.stackexchange.com/questions/4148543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving wave equation using Laplace transform I want to solve the following wave equation using Laplace transform in $t$: \begin{align*} \text{PDE: }\frac {\partial^{2}}{\partial t^{2}}u-c^{2}\frac {\partial^{2}}{\partial x^{2}}u=q(x)e^{i\omega_{0}t},\text{ }-\infty<x<\infty,\text{ }t>0 \end{align*} \begin{align*} \text{BC: }u(x,t)\to 0\text{ as }x\to \pm\infty,\text{ }t>0 \end{align*} \begin{align*} \text{IC: }u(x,0)=0,\text{ }-\infty<x<\infty,\text{ }\frac {\partial}{\partial t}u(x,t)\Big\vert_{t=0}=0,\text{ }-\infty<x<\infty \end{align*} $q(x)$ is a localized function (vanishes at infinities). $\omega_{0}$ is a real given forcing frequency.
Use Laplace transform in $t$, we find it more convenient to write $s=-i\omega$. \begin{align*} \mathcal {L}\left(u\right)=\tilde{u}(x,s)=U(x,\omega)=\int_{0}^{\infty}u(x,t)e^{i\omega t}\, dt\text{ if Re}(s)>0\text{ or Im}(\omega)>0, \end{align*} where $u(x,t)$ is assumed to be one-sided, i.e., $u=0$ for $t<0$. \begin{align*} \mathcal {L}\left(e^{i\omega_{0}t}\right)=\int_{0}^{\infty}e^{i\left(\omega+\omega_{0}\right)t}\, dt=\frac {1}{i\left(\omega+\omega_{0}\right)}e^{i\left(\omega+\omega_{0}\right)t}\Big\vert_{0}^{\infty}=-\frac {1}{i\left(\omega+\omega_{0}\right)}. \end{align*} The PDE becomes an ODE: \begin{align*} \frac {d^2}{dx^{2}}U+k^{2}U=\frac {q(x)}{ic^{2}\left(\omega+\omega_{0}\right)},\text{ }k=\frac {\omega}{c}. \end{align*} The solution of the above equation consists of homogeneous plus a particular solution, i.e., $U(x,\omega)=U_{1}+U_{2}+U_{p}$, where $U_{1}=Ae^{ikx}$ and $U_{2}=Be^{-ikx}$. $U_{p}$ can be determined from variation of parameters: \begin{align*} U_{p}&=U_{1}\int_{0}^{x}\frac {-U_{2}\cdot\left(\frac {q(s)}{ic^{2}(\omega+\omega_{0})}\right)}{W(U_{1},U_{2})}\, ds+U_{2}\int_{0}^{x}\frac {U_{1}\cdot\left(\frac {q(s)}{ic^{2}(\omega+\omega_{0})}\right)}{W(U_{1},U_{2})}\, ds\\ &=-\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{0}^{x}e^{ik(x-s)}q(s)\, ds+\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{0}^{x}e^{-ik(x-s)}q(s)\, ds. \end{align*} Thus, we get \begin{align*} U(x,\omega)=\left[A-\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{0}^{x}e^{-iks}q(s)\, ds\right]e^{ikx}+\left[B+\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{0}^{x}e^{iks}q(s)\, ds\right]e^{-ikx}. \end{align*} Because of the given boundary condition, we require that as $x\to \infty$, $U(x,\omega)\to 0$, i.e., \begin{align*} B=-\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{0}^{\infty}e^{iks}q(s)\, ds. \end{align*} Similarly, we require that as $x\to-\infty$, $U(x,\omega)\to 0$ as well, i.e., \begin{align*} A=-\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{-\infty}^{0}e^{-iks}q(s)\, ds. 
\end{align*} Therefore, we get \begin{align*} U(x,\omega)=-\frac {1}{2kc^{2}(\omega+\omega_{0})}\int_{-\infty}^{\infty}q(y)e^{ik\lvert x-y\rvert}\, dy=-\frac {1}{2c\omega(\omega+\omega_{0})}\int_{-\infty}^{\infty}q(y)e^{i\omega\frac {\lvert x-y\rvert}{c}}\, dy. \end{align*} Then the true solution can be obtained through inverse Laplace transform. Therefore, we have \begin{align*} u(x,t)&=\frac {1}{2\pi}\int_{-\infty+i\alpha}^{+\infty+i\alpha}e^{-i\omega t}\cdot \left(-\frac {1}{2c\omega(\omega+\omega_{0})}\int_{-\infty}^{\infty}q(y)e^{i\omega\frac {\lvert x-y\rvert}{c}}\, dy\right)\, d\omega\\ &=\frac {1}{4\pi c}\int_{-\infty+i\alpha}^{+\infty+i\alpha}-\frac {1}{\omega(\omega+\omega_{0})}\left(\int_{-\infty}^{\infty}q(y)e^{i\omega\left(\frac {\lvert x-y\rvert}{c}-t\right)}\, dy\right)\, d\omega\\ &=-\frac {1}{4\pi c}\int_{-\infty}^{+\infty}q(y)\left(\int_{-\infty+i\alpha}^{+\infty+i\alpha}\frac {1}{\omega(\omega+\omega_{0})}e^{i\omega\left(\frac {\lvert x-y\rvert}{c}-t\right)}\, d\omega\right)\, dy. \end{align*} When $\frac {\lvert x-y\rvert}{c}-t>0$, we close in the upper half plane, which gives $0$ since there's no singularity. When $\frac {\lvert x-y\rvert}{c}-t<0$, we close in the lower half plane, which gives $-2\pi i\left[\text{Res}(\omega=-\omega_{0})+\text{Res}(\omega=0)\right]$. We have \begin{align*} \text{Res}\left(\omega=0\right)=\frac {1}{\omega_{0}}e^{i\cdot 0\cdot\left(\frac {\lvert x-y\rvert}{c}-t\right)}=\frac {1}{\omega_{0}}. \end{align*} \begin{align*} \text{Res}\left(\omega=-\omega_{0}\right)=-\frac {1}{\omega_{0}}e^{-i\omega_{0}\left(\frac {\lvert x-y\rvert}{c}-t\right)}. 
\end{align*} Therefore, the solution is given by \begin{align*} u(x,t)&=-\frac {1}{4\pi c}\int_{-\infty}^{+\infty}q(y)H\left(t-\frac {\lvert x-y\rvert}{c}\right)\frac {2\pi i}{\omega_{0}}\left(e^{-i\omega_{0}\left(\frac {\lvert x-y\rvert}{c}-t\right)}-1\right)\, dy\\ &=-\frac {i}{2c\omega_{0}}\int_{-\infty}^{+\infty}q(y)H\left(t-\frac {\lvert x-y\rvert}{c}\right)\left(e^{-i\omega_{0}\left(\frac {\lvert x-y\rvert}{c}-t\right)}-1\right)\, dy, \end{align*} where $H$ is the Heaviside function.
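The two residues used in the contour step can be spot-checked numerically (a sketch with arbitrary sample values of $\tau=\frac{\lvert x-y\rvert}{c}-t$ and $\omega_0$): near a simple pole $p$, $(\omega-p)f(\omega)$ approaches the residue.

```python
import cmath

# residues of f(w) = exp(i*w*tau) / (w*(w + w0)) at its simple poles,
# estimated numerically as (w - p) * f(w) for w close to the pole p
tau, w0 = 0.7, 1.3          # arbitrary sample values

def f(w):
    return cmath.exp(1j * w * tau) / (w * (w + w0))

eps = 1e-6
res_at_0 = eps * f(0 + eps)              # expected: 1 / w0
res_at_minus_w0 = eps * f(-w0 + eps)     # expected: -exp(-1j*w0*tau) / w0
```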
{ "language": "en", "url": "https://math.stackexchange.com/questions/4148666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Infinity Series problem with cos $$\sum_{n=1}^\infty\frac{\cos(3n+1)}{3^n},\quad (a_n)=\frac{\cos(3n+1)}{3^n}$$ I have to find out if it converges/diverges or not My starting approach was that since $\cos(3n+1)<1 \implies \cos(3n+1)/3^n < 1/3^n$ then since $\sum_{n=1}^\infty1/3^n$ converges my sum will converge then I remembered that this criterion works for $a_n >0$ and obviously $-1<\cos(3n+1)<1$ making my approach wrong. Could I have a hint about what criterion should I use here?
You’re actually really close. Absolute convergence implies convergence, meaning that $\sum_{n=1}^{+\infty} |a_n|$ converges $\Rightarrow \sum_{n=1}^{+\infty} a_n$ converges.
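A quick numeric illustration of the comparison (a sketch): the partial sums settle down, and the distance between any two partial sums beyond $N$ is controlled by the geometric tail $\sum_{n>N}3^{-n}=3^{-N}/2$.

```python
import math

# partial sums of sum cos(3n+1)/3^n, and the geometric tail bound
def partial(N):
    return sum(math.cos(3 * n + 1) / 3 ** n for n in range(1, N + 1))

s_small, s_big = partial(20), partial(60)
# |sum_{n>N} a_n| <= sum_{n>N} 3^{-n} = 3^{-N} / 2
tail_bound = 3.0 ** -20 / 2
gap = abs(s_big - s_small)
```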
{ "language": "en", "url": "https://math.stackexchange.com/questions/4148866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $\cos\alpha=\frac13,$ find the value of $7\cos(180^\circ-\alpha)-2\sin(90^\circ+\alpha)$ If $\cos\alpha=\dfrac13,$ find the value of $$7\cos(180^\circ-\alpha)-2\sin(90^\circ+\alpha)$$ Let $A=7\cos(180^\circ-\alpha)-2\sin(90^\circ+\alpha)=-7\cos\alpha-2\cos\alpha=-9\cos\alpha$ or when $\cos\alpha=\dfrac13\Rightarrow A=-9.\dfrac13=-3.$ The given answer in my book is $3$. Am I wrong?
$7 \cos(\pi-\alpha) = -7 \cos(\alpha) = -7/3$ $2 \sin(\pi/2 + \alpha) = 2 \cos (\alpha) = 2/3$ So you get $-7/3-2/3 = -3$.
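A one-line numeric check (taking the $\alpha\in[0,\pi]$ with $\cos\alpha=\frac13$; any other $\alpha$ with the same cosine gives the same value, since the expression reduces to $-9\cos\alpha$):

```python
import math

# pick the alpha in [0, pi] with cos(alpha) = 1/3 and evaluate directly
alpha = math.acos(1.0 / 3.0)
value = 7 * math.cos(math.pi - alpha) - 2 * math.sin(math.pi / 2 + alpha)
```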
{ "language": "en", "url": "https://math.stackexchange.com/questions/4149029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find intervals of decrease and increase of a function $f(x)=\frac{\pi x}{2}-x\arctan x$. Find the intervals of decrease and increase of the function $f(x)=\frac{\pi x}{2}-x\arctan x$. $f'(x)=\frac{\pi}{2}-\arctan x-\frac{x}{1+x^2}$ $f''(x)=\frac{-2}{(1+x^2)^2}$ which is negative. What can I say from here?
Interval on which $f$ increases: $(-\infty, \infty)$. Interval on which $f$ decreases: none. $f$ is strictly increasing, concave downward, and $f(x)\to1$ as $x\to\infty$. Background information. Since $f''(x) < 0$ everywhere, $f'$ is strictly decreasing; and since $f'(x)\to 0$ as $x\to\infty$ (both $\frac{\pi}{2}-\arctan x$ and $\frac{x}{1+x^2}$ tend to $0$), it follows that $f'(x) > 0$ for every $x$. Hence $f$ is strictly increasing and concave downward. As $x\to\infty$ we have $$f(x) = 1 - \frac1{3 x^2} + \frac1{5 x^4} - \frac1{7 x^6} + O\left(x^{-7}\right).$$ Hence $\lim\nolimits_{x\to\infty}f(x) = 1$.
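A numeric spot-check (the sample points are arbitrary): $f'$ is positive across the line and $f$ approaches $1$.

```python
import math

def fprime(x):
    return math.pi / 2 - math.atan(x) - x / (1 + x * x)

def f(x):
    return math.pi * x / 2 - x * math.atan(x)

samples = [-1e6, -100.0, -1.0, 0.0, 1.0, 10.0, 100.0]
all_positive = all(fprime(x) > 0 for x in samples)
limit_gap = abs(f(1e4) - 1.0)   # f(x) -> 1 as x -> infinity
```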
{ "language": "en", "url": "https://math.stackexchange.com/questions/4149570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determine real numbers a,b and c such that they verify a certain equation up to this point I've determined c this way: The issue here is that I cannot figure out how to proceed the same way with the other two variables a and b. Is there something I'm missing from the start? Sorry if it's hard to understand I can elaborate if necessary.
If you clear denominators by multiplying through by $x(x^2+1)$ you get: $$x+1=(ax+b)x+c(x^2+1)$$ The so-called cover-up rule then invites you to set $x=0$, which eliminates $ax+b$ and identifies $c$. You can then easily compare coefficients, or alternatively set $x=i=\sqrt {-1}$ to obtain $i+1=-a+ib$ and equate real and imaginary parts. The $x=0$ and $x=i$ choices respectively make $x$ and $x^2+1$ equal to $0$. The trick can be particularly useful in more complex examples (see material on partial fractions).
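A numeric check (the original fraction is not shown in the excerpt, but from the cleared equation it must be $\frac{x+1}{x(x^2+1)}$, which I assume here; the cover-up values are $c=1$, $a=-1$, $b=1$):

```python
import random

# verify (x+1)/(x(x^2+1)) = c/x + (a x + b)/(x^2 + 1) with the values
# the cover-up trick produces: c = 1 (from x = 0), a = -1, b = 1 (from x = i)
a, b, c = -1.0, 1.0, 1.0
random.seed(3)
max_gap = 0.0
for _ in range(100):
    x = random.uniform(0.5, 5.0)
    lhs = (x + 1) / (x * (x * x + 1))
    rhs = c / x + (a * x + b) / (x * x + 1)
    max_gap = max(max_gap, abs(lhs - rhs))
```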
{ "language": "en", "url": "https://math.stackexchange.com/questions/4149812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that $\frac{a}{b+c^2}+\frac{b}{c+a^2}+\frac{c}{a+b^2}\geq \frac{3}{2}$ My question: Let $a,b,c$ be positive real numbers satisfy $a+b+c=3.$ Prove that $$\frac{a}{b+c^2}+\frac{b}{c+a^2}+\frac{c}{a+b^2}\geq \frac{3}{2}.$$ I have tried to change the LHS to $$\frac{a^2}{ab+ac^2}+\frac{b^2}{bc+ba^2}+\frac{c^2}{ca+cb^2}$$ And using Cauchy–Schwarz inequality for it $$\frac{a^2}{ab+ac^2}+\frac{b^2}{bc+ba^2}+\frac{c^2}{ca+cb^2}\geq \frac{(a+b+c)^2}{ab+bc+ca+ac^2+ba^2+cb^2}$$ Then because $$ab+ca+ca\leq \frac{(a+b+c)^2}{3}=\frac{3^2}{3}=3,$$ $$\frac{(a+b+c)^2}{ab+bc+ca+ac^2+ba^2+cb^2}\geq \frac{9}{3+ac^2+ba^2+cb^2}$$ Finally, I can't prove $ac^2+ba^2+cb^2\leq 3$ $$ $$ I look forward to your help, thank you!
Remarks: My proof of (1) is not nice. Hope to see a nice proof of it. My proof: Since the inequality is cyclic, WLOG, assume that $c = \min(a, b, c)$. We split into two cases: Case 1: $c \ge 1/5$ Using Cauchy-Bunyakovsky-Schwarz inequality, we have $$\mathrm{LHS} \ge \frac{(a + b + c)^2}{a(b + c^2) + b(c + a^2) + c(a + b^2)}.$$ It suffices to prove that $$ab + bc + ca + a^2b + b^2c + c^2 a \le 6 \tag{1}$$ which is true (the proof is given at the end). Case 2: $c < 1/5$ Using $c^2 \le c < 1/5$, we have $$\mathrm{LHS} \ge \frac{a}{b + c} + \frac{b}{1/5 + a^2} \ge \frac{a}{3 - a} + \frac{3 - a - 1/5}{1/5 + a^2}.$$ It suffices to prove that $$\frac{a}{3 - a} + \frac{3 - a - 1/5}{1/5 + a^2} \ge \frac32$$ or (after clearing the denominators) $$25a^3 - 35a^2 - 53a + 75 \ge 0$$ which is true (actually for all $a\ge 0$). (Hint: Using AM-GM, we have $a^3 + \frac94 a \ge 3a^2$. The rest is easy.) We are done. Proof of (1): We split into two cases: (1) If $a < 1$ or $b < 1$, using the well-known inequality $a^2b + b^2c + c^2a + abc \le \frac{4}{27}(a + b + c)^3$, it suffices to prove that $$ab + bc + ca + 4 - abc \le 6$$ which is written as $$(1 - a)(b - 1)(b + a - 2) + (ab - a - b)(a + b + c - 3) \ge 0$$ which is true (using $a + b > 2$). (2) If $a, b \ge 1$, let $a = 1 + u$ and $b = 1 + v$ for $u, v \ge 0$, and the inequality is written as $$-u^3 - 3u^2v + v^3 + u^2 + uv + v^2 \ge 0.$$ From $c = 3 - a - b = 1 - u - v$ and $c \ge 1/5$, we have $u + v \le 4/5$. Using $-u^3 \ge - u^2 \cdot \frac45$, it suffices to prove that $$-u^2\cdot \frac45 - 3u^2v + v^3 + u^2 + uv + v^2 \ge 0$$ or $$(1/5 - 3v)u^2 + v^3 + uv + v^2 \ge 0.$$ If $1/5 -3v \ge 0$, the inequality is true. If $1/5 -3v < 0$, let $g(u) := (1/5 - 3v)u^2 + v^3 + uv + v^2$. Then $g(u)$ is concave. Note that $0 \le u \le 4/5 - v$. Also, we have $g(0) \ge 0$ and $g(4/5 - v) = \frac{-250v^3 + 625v^2 - 180v + 16}{125} \ge 0$. Thus, $g(u) \ge 0$ on $[0, 4/5 - v]$. The desired result follows. We are done.
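Not a proof, but a coarse numerical scan of the original inequality over the constraint simplex $a+b+c=3$ supports the bound, with equality at $a=b=c=1$ (Python sketch):

```python
# Grid check of a/(b+c^2) + b/(c+a^2) + c/(a+b^2) >= 3/2 on a+b+c = 3.
# A finite grid is not a proof; it only gives supporting evidence.
N = 60
worst = float("inf")
for i in range(1, N):
    for j in range(1, N - i):
        a = 3.0 * i / N
        b = 3.0 * j / N
        c = 3.0 - a - b
        val = a / (b + c * c) + b / (c + a * a) + c / (a + b * b)
        worst = min(worst, val)
# The grid contains (1, 1, 1), where the value is exactly 3/2.
```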
{ "language": "en", "url": "https://math.stackexchange.com/questions/4149973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
$\lim _{n \rightarrow \infty} \frac{1}{a_{n}} \sum_{k=1}^{n} b_{k}=0$ given existence of limit of sum of ratios Let $\left(a_{n}, n \geq 1\right),\left(b_{n}, n \geq 1\right)$ be sequences of real numbers with $a_{n+1} \geq a_{n}>0$ for every $n \geq 1$ and $\lim _{n \rightarrow \infty} a_{n}=+\infty$. If $\lim _{n \rightarrow \infty} \sum_{k=1}^{n} \frac{b_{k}}{a_{k}}$ exists, then $\lim _{n \rightarrow \infty} \frac{1}{a_{n}} \sum_{k=1}^{n} b_{k}=0$. I tried using the lemma $$ \text { if } \lim _{n \rightarrow \infty} v_{n}=v, \quad \text { then } \quad \lim _{n \rightarrow \infty} \frac{1}{a_{n}} \sum_{k=1}^{n} v_{k}\left(a_{k}-a_{k-1}\right)=v $$ with $v_{n}=\sum_{k=1}^{n} \frac{b_{k}}{a_{k}}$
Let $v = \sum_{k=1}^\infty \frac{b_k}{a_k}$. Using summation by parts with $v_0 = 0$ and $v_k = \sum_{j=1}^k \frac{b_j}{a_j}$ for $k > 0$, we have $$\sum_{k=1}^n b_k = \sum_{k=1}^n a_k\frac{b_k}{a_k} = \sum_{k=1}^n a_k(v_k - v_{k-1}) = a_nv_n - \sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k}),$$ and $$\frac{1}{a_n}\sum_{k=1}^nb_k = v_n - \frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k})$$ Given $\epsilon > 0$ there exists a positive integer $N$ such that for all $k > N$ we have $v - \epsilon \leqslant v_k \leqslant v + \epsilon$. Hence, $$\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k})= \frac{1}{a_n}\sum_{k=1}^{N}v_k(a_{k+1} - a_{k}) + \frac{1}{a_n}\sum_{k=N+1}^{n-1}v_k(a_{k+1} - a_{k}) \\\leqslant \frac{1}{a_n}\sum_{k=1}^{N}v_k(a_{k+1} - a_{k}) + (v+ \epsilon) \frac{1}{a_n}\sum_{k=N+1}^{n-1}(a_{k+1} - a_{k}) \\= \frac{1}{a_n}\sum_{k=1}^{N}v_k(a_{k+1} - a_{k}) +(v+\epsilon)\frac{a_n -a_{N+1}}{a_n},$$ and, similarly, $$\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k}) \geqslant\frac{1}{a_n}\sum_{k=1}^{N}v_k(a_{k+1} - a_{k}) +(v-\epsilon)\frac{a_n -a_{N+1}}{a_n}$$ Since $a_n \to \infty$ as $n \to \infty$, we have, with $N$ fixed , $\frac{1}{a_n}\sum_{k=1}^{N}v_k(a_{k+1} - a_{k}) \to 0$, $\frac{a_n -a_{N+1}}{a_n} \to 1$ and $$v - \epsilon \leqslant \liminf_{n \to \infty}\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k})\leqslant \limsup_{n \to \infty}\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1}-a_k)\leqslant v+\epsilon$$ Since $\epsilon > 0$ can be arbitrarily close to $0$, it follows that $$\lim_{n \to \infty}\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k}) = v, $$ and $$\lim_{n \to \infty} \frac{1}{a_n}\sum_{k=1}^nb_k = \lim_{n \to \infty}v_n - \lim_{n \to \infty}\frac{1}{a_n}\sum_{k=1}^{n-1}v_k(a_{k+1} - a_{k}) = v-v = 0$$
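A numerical illustration of the theorem (Python sketch; the choice $a_k=k^2$ and $b_k=(-1)^k k$ is an arbitrary example, picked so that $\sum b_k/a_k$ is the convergent alternating harmonic series while $\sum b_k$ itself is unbounded):

```python
# With a_k = k^2 (positive, increasing, -> infinity) and b_k = (-1)^k * k,
# sum b_k/a_k = sum (-1)^k / k converges, and the theorem predicts
# (1/a_n) * sum_{k<=n} b_k -> 0 even though sum b_k grows like n/2.
def check(n):
    s = sum((-1) ** k * k for k in range(1, n + 1))
    return abs(s) / n**2   # |(1/a_n) sum_{k<=n} b_k|

vals = [check(n) for n in (10, 100, 1000, 10000)]
```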
{ "language": "en", "url": "https://math.stackexchange.com/questions/4150073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Minimizing $|x-1|+|x-2|+|x-3|+...+ |x-2019|+ |x-2020|$. For the given expression $$ |x-1|+|x-2|+|x-3|+...+ |x-2019|+ |x-2020| $$ determine the greatest interval of real numbers $[a,b]$ on which the given expression has a constant value $k$. What is the value of $k$ ? I found this question in a facebook group's post. I tried a lot to understand the question, but I didn't understand it at all. A user sent an answer saying that $a=1000$, $b=1001$ and $k=1000000$, but I don't understand why or how that comes about.
Let us use statistics: $$f(X)=\sum_{k=1}^{n} |X-X_k|$$ can be taken as $n$ times the mean deviation of the series $X_k$ about $X$, which is minimum when measured about the median. If $n$ is even there are two medians $M=X_{n/2},X_{n/2+1}$; in this case $f(X)$ attains the same least value $\forall X \in[X_{n/2},X_{n/2+1}]$. When $n$ is odd the median $M$ is $X_{(n+1)/2}$ and $f(X)$ has a local minimum at $X=X_{(n+1)/2}$. In the present case $$L=\operatorname{Least}[f(X)]=f(1010)=f(1011)\implies L=\sum_{k=1}^{2020} |k-1010|=\sum_{k=1}^{2020} |k-1011|$$
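A brute-force check (Python sketch): for the stated sum over $k=1,\dots,2020$ the minimizing interval is $[a,b]=[1010,1011]$ and the constant value is $k=1010\cdot 1010=1020100$; the values $a=1000$, $b=1001$, $k=1000000$ quoted in the question would instead be correct for the sum ending at $|x-2000|$.

```python
# f(X) = sum_{k=1}^{2020} |X - k| is minimized on the median interval
# [1010, 1011], where it is constant with value
# sum_{j=1}^{1009} j + sum_{j=1}^{1010} j = 1010 * 1010 = 1_020_100.
def f(x):
    return sum(abs(x - k) for k in range(1, 2021))

vals = [f(1010), f(1010.5), f(1011)]   # constant across the interval
k_min = f(1010)
```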
{ "language": "en", "url": "https://math.stackexchange.com/questions/4150478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Rate of convergence in probability - log transform Let $C>0$ and $(X_n)$ be a sequence of positive random variables. Assume that $$ |X_n - C| = o_p(r_n^{-1}) \iff r_n|X_n-C|=o_p(1) $$ for some fixed sequence $(r_n)$ with $r_n \to \infty$. What can we say about the rate of convergence of the log-transform: $$ |\log(X_n)- \log(C)| = o_p(?). $$ I guess it depends on $C$ and $(r_n)$ but I can't seem to derive anything useful.
It has the same rate. I hope that the following hint can help. Recall that $r_n|X_n-C|=o_p(1) \iff \left( \forall \varepsilon>0, \mathbb P(|X_n-C|>\frac{\epsilon}{r_n}) \rightarrow 0 \text{ when } n\rightarrow \infty \right)$. For the case $X_n>C+\frac{\epsilon}{r_n}$, we apply the Taylor expansion and get $$r_n[\log(X_n)-\log(C)]> \frac{\varepsilon}{C} + O(\varepsilon^2).$$ Similarly for the case $X_n<C-\frac{\epsilon}{r_n}$. Edit. As pointed out by @John, the implication above goes in the wrong direction; what we should prove is the reverse. However, the same idea can be applied, i.e., we should use the Taylor expansion of $e^x$ instead of $\log x$. In the following, I propose a more detailed solution without using Taylor expansion (to avoid big-O notation). Claim: For all $\varepsilon>0$, there exists $\delta>0$ such that $$r_n|\log(X_n)-\log(C)|>\varepsilon \Longrightarrow r_n|X_n-C|> \delta$$ for $n$ large enough. Proof of Claim: Let $Y_n=\log X_n$ and $B=\log C$. So we want to show that, for all $\varepsilon>0$, there exists $\delta>0$ such that $$r_n|Y_n-B|>\varepsilon \Longrightarrow r_n|e^{Y_n}-e^B|> \delta$$ for large $n$. Now assume that $r_n|Y_n-B|>\varepsilon$; we consider two cases. First, recall the simple inequality $e^{x}\geq 1+x$ for all $x\in \mathbb R$. If $Y_n > B +\frac{\varepsilon}{r_n}$, then $e^{Y_n} > e^B(1 +\frac{\varepsilon}{r_n})$ and thus $r_n(e^{Y_n} - e^B)>e^B \varepsilon$. Second, it is easy to prove that $e^{-x} \leq 1-x+\frac{x^2}{2}$ for $x\geq 0$. If $Y_n < B -\frac{\varepsilon}{r_n}$, then $r_n(e^{Y_n} - e^B)<-e^B \varepsilon+e^B\frac{\varepsilon^2}{2r_n}$. Third, note that for any $\varepsilon>0$, there exists a sufficiently large integer $n$ such that $\frac{\varepsilon}{2r_n}<1$. Choose $M$ such that $\frac{\varepsilon}{2r_n}<M<1$ and $\delta:=e^B(1-M)\varepsilon$. Then, we have $r_n|e^{Y_n} - e^B|>e^B \varepsilon-e^B\frac{\varepsilon^2}{2r_n}> e^B(1-M)\varepsilon=\delta$. Done.
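The two elementary bounds used in the detailed solution, $e^x \ge 1+x$ (all real $x$) and $e^{-x} \le 1-x+\frac{x^2}{2}$ ($x\ge 0$), can be spot-checked numerically (Python sketch; a grid check, not a proof):

```python
import math

# The argument uses two elementary bounds:
#   (i)  e^x >= 1 + x              for all real x
#   (ii) e^(-x) <= 1 - x + x^2/2   for x >= 0
# Both hold with equality exactly at x = 0, so the minimum gap is 0.
grid = [j / 100.0 for j in range(-500, 501)]
gap_i = min(math.exp(x) - (1 + x) for x in grid)
gap_ii = min((1 - x + x * x / 2) - math.exp(-x) for x in grid if x >= 0)
```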
{ "language": "en", "url": "https://math.stackexchange.com/questions/4150634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove $\sum_{n=1}^\infty(\frac{1}{a_{2n-1}}-\frac{1}{a_{2n}})$ convergent Let $(a_n)_{n=1}^\infty$ be a positive, increasing, and unbounded sequence. Prove that the series $$\sum_{n=1}^\infty\left(\frac{1}{a_{2n-1}}-\frac{1}{a_{2n}}\right)$$ is convergent. We know that since $a_n$ is increasing and unbounded, then $\lim_{n \to \infty}a_n=\infty$, so I want to apply that to say that $\sum_{n=1}^\infty(\frac{1}{a_{2n-1}}-\frac{1}{a_{2n}})$ is decreasing and its limit will be $0$. My problem is that I get confused every time it says $a_{2n}$ or $a_{2n-1}$; it is less intuitive for me than just a "normal" $a_n$... Appreciate your help! Thanks a lot!
The fact that $(a_n)$ is unbounded is actually not needed, only that the sequence is positive and increasing. First note that all terms $b_n = \frac{1}{a_{2n-1}}-\frac{1}{a_{2n}}$ are non-negative. Then verify that $$ \sum_{n=1}^N b_n = \frac{1}{a_1} + \underbrace{\left(-\frac{1}{a_2}+\frac{1}{a_3}\right)}_{\le 0}+ \cdots +\underbrace{\left(-\frac{1}{a_{2N-2}}+\frac{1}{a_{2N-1}}\right)}_{\le 0} -\frac{1}{a_{2N}} \le \frac{1}{a_1} \, . $$ So the partial sums of $\sum_{n=1}^\infty b_n$ are increasing and bounded above. It follows that the series is convergent.
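A concrete instance (Python sketch): with the illustrative choice $a_n=n$ the series is $\sum_{n\ge1}\left(\frac{1}{2n-1}-\frac{1}{2n}\right)$, whose partial sums are increasing, bounded by $1/a_1=1$, and converge to $\log 2$:

```python
import math

# With a_n = n (positive, increasing, unbounded), the partial sums of
# sum (1/(2n-1) - 1/(2n)) are increasing and bounded by 1/a_1 = 1;
# they converge to log 2 (alternating harmonic series in pairs).
partials = []
s = 0.0
for n in range(1, 100001):
    s += 1.0 / (2 * n - 1) - 1.0 / (2 * n)
    if n in (10, 1000, 100000):
        partials.append(s)
```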
{ "language": "en", "url": "https://math.stackexchange.com/questions/4150776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
When are the roots of a polynomial of degree 3 aligned? Let $P \in \mathbb{C}[X]$ be a polynomial of degree 3. On what condition on the coefficients of $P$ are the three roots of $P$ aligned ? To make things easier, we may assume that $P$ can be written as $P = X^3 + aX^2 + bX + c$. Let $z_1, z_2, z_3$ be the roots of $P$. I tried to say that the condition was $\Im((a - c) \overline{(b - c)}) = 0$, but I could not manage to get anywhere. Trying to write $z_3 = k z_1 + (1 - k) z_2$ ($k \in [0, 1]$) did not help me either. Thank you for your help.
(Not an answer, but too long for a comment. Prompted by comments under this other question.) For an $\,n^{th}\,$ degree polynomial with $\,n \ge 3\,$, the question whether its roots lie on a line in the complex plane relates to the question of determining whether the roots of an associated polynomial are all real. If the roots of $\,n^{th}\,$ degree $\,P(z)\,$ are collinear, then the roots of all its derivatives must also lie on the same line by the Gauss–Lucas theorem. Taking some liberty in assuming that all roots are distinct, it follows that the line on which all roots lie is the line through the two roots $\,a,b\,$ of the quadratic $\,P^{(n-2)}(z)=0\,$. That line can be mapped onto the real axis with a translation and/or rotation, so a necessary condition for the roots of $\,P(z)\,$ to be collinear is for all roots of $\,Q(z) = P(\omega z) - \nu\,$ to be real, for the respective $\,\omega, \nu \in \mathbb C\,$, with $\,|\omega|=1\,$, both of which can be explicitly determined from $\,a,b\,$. In the case $\,n=3\,$ of a cubic, the quadratic in question is $\,P'(z)\,$, and that case is covered nicely in Jean Marie's answer.
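The Gauss-Lucas step can be illustrated numerically for one cubic with collinear roots (Python sketch; the roots $0,\,1+i,\,2+2i$, all on the line $y=x$, are an arbitrary example, and the two critical points should land on the same line):

```python
import cmath

# Gauss-Lucas check for one cubic with collinear roots 0, 1+i, 2+2i
# (all on the line y = x).  For the monic cubic with these roots,
# P'(z) = 3 z^2 - 2 e1 z + e2 with e1, e2 the elementary symmetric
# functions of the roots; both roots of P' should satisfy Im z = Re z.
roots = [0j, 1 + 1j, 2 + 2j]
e1 = sum(roots)
e2 = roots[0] * roots[1] + roots[0] * roots[2] + roots[1] * roots[2]
disc = cmath.sqrt((2 * e1) ** 2 - 4 * 3 * e2)
crit = [((2 * e1) + disc) / 6, ((2 * e1) - disc) / 6]
offline = max(abs(z.imag - z.real) for z in crit)
```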
{ "language": "en", "url": "https://math.stackexchange.com/questions/4150914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is "There exists a circle passing through any three noncollinear points" really equivalent to Euclid's parallel postulate? This site https://www.ics.uci.edu/~eppstein/junkyard/parallel-postulate.html lists from the book "The Foundations of Geometry and the Non-Euclidean Plane" by George E. Martin 26 propositions which are suppose to be equivalent to the parallel postulate. In particular Proposition I states: There exists a circle passing through any three noncollinear points. However, I would say this is valid in spherical geometry, so it can't be equivalent to the parallel postulate since the parallel postulate is not valid in spherical geometry.
As the linked site says, these are equivalents to the parallel postulate within absolute geometry. Spherical geometry is not a model of absolute geometry, so this equivalence does not apply in that context. (Absolute geometry is roughly "Euclidean geometry without assuming the parallel postulate", but the parallel postulate is not the only part of Euclidean geometry that fails in spherical geometry. There are different ways to precisely define "absolute geometry" which are not all exactly equivalent (and I'm not sure which one Martin's book uses), but roughly it means you are in either Euclidean geometry or hyperbolic geometry.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
can there be a prime ideal between a minimal containment of homogeneous primes? If $\mathfrak p_0 \subsetneq \mathfrak p_1\subset k[x_0,...,x_n]$ are homogeneous prime ideals such that there is no homogeneous prime ideal strictly between them, could there be a (non-homogeneous) prime ideal strictly between them? Part of me thinks the answer is probably no, but I couldn't come up with a counterexample. Part of me thinks the answer might be yes since the dimension of a projective variety seems like it should be one less than that of its cone, although idk if that makes any sense or if I'm even using any of those terms correctly. Entertaining the possibility that it's true, my only progress has been to note that if such a prime ideal $\frak q$ exists, then the set $\mathfrak q - \mathfrak p_0$ must not contain any homogeneous elements, since if one such $f$ exists, then $\mathfrak q$ is a minimal prime over $(f)+\mathfrak p_0$, which implies $\mathfrak q$ must be homogeneous.
If $\mathfrak p_0 \subsetneq \mathfrak q \subsetneq \mathfrak p_1$, then the codimension of $\mathfrak p_1$ in $R/\mathfrak p_0$ is at least 2. Now let $f\in \mathfrak p_1 - \mathfrak p_0$ with $f$ homogeneous and let $\mathfrak p_1'$ be a minimal prime over $\mathfrak{p}_0+(f)$ such that $\mathfrak p_1'\subseteq\mathfrak p_1$. By Krull's PIT, the codimension of $\mathfrak p_1'$ in $R/\mathfrak{p}_0$ is not greater than $1$. But $\mathfrak{p}_0 +(f)$ is homogeneous, and therefore $\mathfrak p_1'$ is homogeneous. Therefore $\mathfrak p_1' = \mathfrak p_1$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Locus of point M such that its projections onto two fixed lines are always at the same distance. A point moves so that the distance between the feet of the perpendiculars drawn from it to the lines $ax^2+2hxy+by^2=0$ is a constant $c$. Prove that the equation of its locus is $$4(x^2+y^2)(h^2-ab)=c^2(4h^2+(a-b)^2).$$ What I have done: Suppose $P$ is any point on the locus, $A$ and $B$ be the feet of the perpendiculars, and $O$ is the origin. Then I want to show that $$OP=\frac{AB}{\sin AOB}=\frac{c}{\sin \theta}$$ where $\theta$ is the angle between the two lines. Note that the arc $PAO$ is a semi-circle, since $\angle PAO=90$. Similarly the arc $PBO$ is a semi-circle since $\angle PBO=90$. Also $\angle APB=\angle AOB$, as it is the angle of the same arc. Now, $\tan \theta =\frac{2\sqrt{h^2-ab}}{a+b}$. From here I can find $\sin \theta$ and $OP=\sqrt{x^2+y^2}$. Putting all these in, $$OP=\frac{c}{\sin \theta}$$ will give me the equation of the locus. The problem is how to show that $$OP=\frac{AB}{\sin AOB}.$$
To prove this, note that the proposed locus is recognizably a circle of radius $R$ centered at $O$, so we are given $$4(x^2+y^2)(h^2-ab)=4 R^2 (h^2-ab)=c^2(4h^2+(a-b)^2), $$ i.e., $$ \frac{R^2}{c^2}=\frac{4h^2+(a-b)^2}{4(h^2-ab)}. \tag 1 $$ Consider the special case when the moving point lies directly on one of the pair of straight lines; in this simplest configuration $$ \frac {c}{R} = \sin \alpha, \tag 2 $$ where the angle $\alpha$ included between the pair of straight lines is known, given by $$ \tan \alpha =\frac{\sqrt{h^2-ab}}{(a+b)/2}. \tag 3 $$ Elimination of $\alpha$ between equations (2) and (3) results in (1)... done!
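As for the step $OP=AB/\sin\angle AOB$ asked about in the question: since $\angle PAO=\angle PBO=90^\circ$, the points $A$ and $B$ lie on the circle with diameter $OP$, and $AB$ is a chord of that circle subtending the inscribed angle $\angle AOB$, so the extended law of sines gives $AB = OP\sin\angle AOB$. A numerical check of the whole claim with a concrete line pair (Python sketch; the pair $-3x^2+2xy+y^2=0$, i.e. $y=x$ and $y=-3x$, and the point $P=(2,1)$ are arbitrary illustrative choices):

```python
import math

# Pair -3x^2 + 2xy + y^2 = 0, i.e. lines y = x and y = -3x
# (a = -3, h = 1, b = 1), and the point P = (2, 1).
a, h, b = -3.0, 1.0, 1.0
P = (2.0, 1.0)

def foot(P, u):
    # foot of the perpendicular from P to the line through O with direction u
    t = (P[0] * u[0] + P[1] * u[1]) / (u[0] ** 2 + u[1] ** 2)
    return (t * u[0], t * u[1])

A = foot(P, (1.0, 1.0))    # onto y = x
B = foot(P, (1.0, -3.0))   # onto y = -3x
AB = math.hypot(A[0] - B[0], A[1] - B[1])

# c^2 predicted by 4(x^2+y^2)(h^2-ab) = c^2(4h^2+(a-b)^2)
R2 = P[0] ** 2 + P[1] ** 2
c2 = 4 * R2 * (h * h - a * b) / (4 * h * h + (a - b) ** 2)
```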
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
If $(\cos x)f'(x)\leq (\sin x-\cos x)f(x)$ for all $x\geq 0$. Can it be said that $f(x)$ is a constant. Let $f(x)$ be a non-negative continuous and bounded function for all $x\geq 0$. If $$(\cos x)f'(x)\leq (\sin x-\cos x)f(x)$$ for all $x\geq 0$. Can it be said that $f(x)$ is a constant. My Attempt $(\cos x)f'(x)\leq (\sin x-\cos x)f(x)$ $\cos x(f'(x)+f(x))+(-\sin x)f(x)\leq 0$ $\cos x e^x(f'(x)+f(x))+(-\sin x)e^xf(x)\leq 0$ Thus $\frac{d}{dx}(e^x\cos xf(x))\leq 0$ So, for $x\geq 0$ we have $e^x\cos xf(x)\leq f(0)$ Here I am stuck as the value of $f(0)$ is not supplied. Can it be deduced that $f(0)=0$
As you correctly said, the condition is equivalent to $h(x) = e^x \cos(x) f(x)$ being (weakly) decreasing on $[0, \infty)$. Since $h(\pi/2 + k\pi) = 0$ for all non-negative integers $k$, it follows that $h$ is zero for $x \ge \pi/2$, which in turn implies that $f$ is zero for $x \ge \pi/2$. We can also conclude that if $f(x_0) = 0$ for some $x_0 \ge 0$ then $f(x) = 0$ for all $x \ge x_0$. But $f$ can be non-zero on an initial interval: If we choose an arbitrary twice differentiable function $h$ which is strictly decreasing on $[0, \pi/2]$ with $h(\pi/2) = h'(\pi/2) = h''(\pi/2) = 0$ then $$ f(x) = \begin{cases} \frac{h(x)}{e^x \cos(x)} & 0 \le x < \frac \pi 2 \\ 0 & x \ge \frac \pi 2 \end{cases} $$ is differentiable, satisfies $\frac{d}{dx}(e^x\cos xf(x))\leq 0$ for all $x \ge 0$, but is non-zero on $[0, \pi/2)$.
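A concrete instance of the counterexample construction (Python sketch; $h(x)=(\pi/2-x)^3$ is one admissible choice, giving $f(x)=(\pi/2-x)^3/(e^x\cos x)$ on $[0,\pi/2)$, positive there), with the differential inequality checked by finite differences:

```python
import math

# h(x) = (pi/2 - x)^3 is strictly decreasing on [0, pi/2] with
# h = h' = h'' = 0 at pi/2, so f(x) = h(x) / (e^x cos x) is positive on
# [0, pi/2) yet satisfies cos(x) f'(x) <= (sin x - cos x) f(x) there.
def f(x):
    return (math.pi / 2 - x) ** 3 / (math.exp(x) * math.cos(x))

eps = 1e-6
slack = []
for i in range(0, 140):
    x = i / 100.0                      # grid inside [0, pi/2)
    fp = (f(x + eps) - f(x - eps)) / (2 * eps)   # central difference
    slack.append((math.sin(x) - math.cos(x)) * f(x) - math.cos(x) * fp)
# slack = RHS - LHS of the inequality; it should be nonnegative throughout.
```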
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does there exist a type of surface such that it has a constant (non-zero) gradient of gaussian curvature? If a class of surfaces (2D Riemannian Manifolds) with constant gradient of Gaussian curvature exists, what is it called and how is it classified? Are there maybe only surfaces that have this property in a certain region? Edit: By constant gradient, I guess the way to formalise it would be to say that $\partial_a\partial_b K = 0$. Would that in some sense mean its gradient is constant?
Your notation $\partial_a \partial_b K=0$ is unclear to me. I think, what you mean is that the vector field $Y=\operatorname{grad} K$ is parallel, i.e. $\nabla_X Y=0$ for every vector field $X$ on your surface. Then such metrics (with nonzero $Y$) do not exist. Indeed, if $(M,g)$ is an $m$-dimensional Riemannian manifold which admits a nonzero parallel vector field $Y$, then $(M,g)$ locally splits as the product of the real line and a Riemannian manifold of dimension $m-1$ (the real line factor corresponds to the flow-lines of $Y$), see here. In the case when $M$ is a surface, this means that $(M,g)$ is locally a product of two 1-dimensional Riemannian manifolds, i.e. is locally flat. This, of course, implies that $Y=\operatorname{grad} K=0$, so our parallel vector field was zero to begin with. A small modification of this argument works in higher dimensions and one obtains: Suppose that $(M,g)$ is a Riemannian manifold with parallel gradient field of the scalar curvature function $R_g$. Then $R_g$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Partial summation of $d(n)/(n-1) $ In the answers to this question, it is established that $$\sum_{n\leq x}\frac{d(n)}{n}=\frac{1}{2}(\log(x))^{2}+2\gamma\log (x)+\gamma^{2}-2\gamma_{1}+O\left(x^{-1/2}\right).$$ A related result can be found on p. 13 of the following paper by Maxie Schmidt: $$\sum_{n \leq x} \frac{d(n)}{n} = \frac{1}{2} (\log(x))^{2} + 2 \gamma \log(x) + O\left(x^{-2/3}\right). $$ While working on the evaluation of a multiple rational zeta series, the following sum came up: $$S_{x} := \sum_{n \leq x} \frac{d(n)}{n-1}. $$ I tried finding the partial sum by means of the hyperbola method, but I haven't succeeded in applying it yet. Question: can an asymptotic expansion for $S_{x}$ be obtained? If so, how? Are results on this partial sum already present in the literature? Added: I should mention that I am most interested in such expansions that include the explicit constant terms
It is known that $$ A(x)=\sum_{n\le x}d(n)=x\log x+(2\gamma-1)x+\mathcal O(\sqrt x) $$ Consequently, by a substitution we have $$ \sum_{2\le n\le x}{d(n)\over n-1}=-1+\sum_{n\le x}{d(n)\over n}+\sum_{2\le n\le x}d(n)\left[{1\over n-1}-\frac1n\right] $$ For the latter sum, we have \begin{aligned} \sum_{2\le n\le x}d(n)\left[{1\over n-1}-\frac1n\right] &\ll\sum_{2\le n\le x}{d(n)\over n^2}<\zeta^2(2) \end{aligned} Using the fact that $d(n)\ll_\delta n^\delta$ for every $0<\delta<1$, we have $$ \sum_{n>x}{d(n)\over n(n-1)}\ll_\delta\sum_{n>x}{1\over n^{2-\delta}}\ll_\delta\int_x^\infty{\mathrm dt\over t^{2-\delta}}\ll_\delta x^{\delta-1} $$ Combining all these together, $$ \sum_{n\le x}{d(n)\over n-1}=\frac12\log^2x+2\gamma\log x+C+\mathcal O(x^{-1/2}) $$ where $$ C=\gamma^2-2\gamma_1-1+\sum_{n=2}^\infty{d(n)\over n(n-1)} $$ Hope this can help you.
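The Dirichlet divisor bound quoted at the start can be spot-checked numerically (Python sketch; a finite check with a generous error allowance, not a proof):

```python
import math

# Check A(x) = sum_{n<=x} d(n) = x log x + (2*gamma - 1) x + O(sqrt x)
# at x = 10^4, with Euler-Mascheroni gamma ~ 0.5772156649.
GAMMA = 0.5772156649015329
X = 10000
d = [0] * (X + 1)
for i in range(1, X + 1):          # sieve the divisor function
    for m in range(i, X + 1, i):
        d[m] += 1
A = sum(d[1:])
main = X * math.log(X) + (2 * GAMMA - 1) * X
err = abs(A - main) / math.sqrt(X)   # should be O(1) in this normalization
```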
{ "language": "en", "url": "https://math.stackexchange.com/questions/4151952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that $\lim_{x\to\infty}\left(\frac{x}{x-1}\right)^x=e$. I was at work and I was curious if there was a limit for the formula $$f(x)=\left(\frac{x}{x-1}\right)^2$$ And I became curious on Desmos to see if there would a different limit for $$f(x)=\left(\frac{x}{x-1}\right)^x$$ It looked like it approached a constant value and, upon plugging it into a limit calculator, it returned $$\lim_{x\to\infty}\left(\frac{x}{x-1}\right)^x=e$$ What is the proof of this? I managed to work it down to $x^x(x-1)^{-x}$, but my calculus isn't good enough for me to work it past there...
This is something you can remember when facing indetermination problems of the type $1^{\infty}$; it’s often used in many problems. You have (note that $\frac{x}{x-1}=1+\frac{1}{x-1}$): $$ \left(\frac{x}{x-1}\right)^x=e^{x \ln (\frac{x}{x-1})} =e^{x \ln (1+\frac{1}{x-1})}$$ We know that when $X \rightarrow 0$, $\ln (1+X)$ is equivalent to $X$. Therefore, when $x \rightarrow +\infty$, our expression is equivalent to $e^{\frac{x}{x-1}}$. You can now conclude that the limit of your expression is $e$, as you guessed!
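A quick numerical check of the limit (Python sketch):

```python
import math

# (x/(x-1))^x = exp(x * log(1 + 1/(x-1))) -> e as x -> infinity;
# the gap to e shrinks roughly like e/(2x).
vals = [(x / (x - 1)) ** x for x in (10.0, 100.0, 10000.0)]
gaps = [abs(v - math.e) for v in vals]
```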
{ "language": "en", "url": "https://math.stackexchange.com/questions/4152221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Shock formation in $u_{tt} = (1+u_x)^2 u_{xx}$ In Sergiu Klainerman's paper "Global Existence for Nonlinear Wave Equations", we are given the PDE $u_{tt} = (1+u_x)^2 u_{xx}$ as an example of a wave equation which exhibits blowup. Klainerman gives a specific example of this: for smooth $H\in C_0^\infty(\mathbb R)$, he claims that the initial conditions $u(0,x) = H(x)$ and $u_t(0,x) = -H'(x)-\frac12 H'(x)^2$ lead to a shock at time $t=-1/h$ and at $x=\xi$, where $h = \min H''$ and $h = H''(\xi)$. This seems like an argument based on characteristics, but I'm not familiar enough with characteristics for second order PDEs to see how to get here.
As described in this post, the following simple wave solution can be derived using the method of characteristics: $$ u = H\big(x - (1+\theta) t\big) +\tfrac12 \theta^2 t \, , $$ where $\theta=u_x$ satisfies the implicit equation $\theta = H'(x - (1+\theta) t)$. The initial time derivative is necessarily of the desired form $$ u_t(0,x) = -\left(H'(x)+\tfrac12 H'(x)^2\right) . $$ and the breaking time reads $t=- 1/\inf H''$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4152367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
For what distributions of $X$ is $E(\lfloor c-X \rfloor) = c-1$? Per the question here: Prove that $E([c-U]) = c-1$, it was proven that: $$E(\lfloor c-U\rfloor) = c-1$$ if $U$ is a uniform random number between $0$ and $1$. I have some reason to suspect that this only holds for this particular distribution of $U$. I tried looking for counter-examples to this conjecture (considering the Beta distribution and some bimodal distributions) and couldn't find any. Is there a way to either prove this conjecture or disprove it via a counter-example? The random variable $U$ shouldn't depend on $c$ and the result should hold for all $c \in \mathbb R$.
If $U$ is a r.v. in $(0,1)$ that satisfies the condition of your problem, that is $$E[\lfloor c-U\rfloor]=c-1$$ for all $c$, then it must be uniform. Proof: You have that if $n<c<n+1$, $$c-1=\mathbb{E}[\textrm{floor}(c-U)]=nP(0<U\leq c-n)+(n-1)P(c-n<U \leq 1)$$ In particular, for any $0<c<1$ ($n=0$) $$c-1=\mathbb{E}[\textrm{floor}(c-U)]=-P(c<U \leq 1)$$ From this, it follows that $$c=1-P(c<U\leq 1)=P(0< U\leq c) $$ That shows that $U$ is uniform.
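The forward direction (uniform $U$ gives $E\lfloor c-U\rfloor=c-1$) is easy to spot-check by seeded Monte Carlo (Python sketch; a simulation, not a proof):

```python
import math, random

# Monte-Carlo check that E[floor(c - U)] = c - 1 for U ~ Uniform(0,1),
# at a few values of c (seeded, so the run is reproducible).
random.seed(0)
N = 200000

def mc(c):
    return sum(math.floor(c - random.random()) for _ in range(N)) / N

errs = [abs(mc(c) - (c - 1)) for c in (0.3, 1.7, 5.25)]
```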
{ "language": "en", "url": "https://math.stackexchange.com/questions/4152556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a name for "co-normal" subgroups? I have in mind a situation where a group $G$ has two subgroups $H$ and $K$ with the property that each of them is closed under conjugation by elements of the other. More formally: for $h \in H$ and $k \in K$ then $h^{-1} k h \in K$ and $k^{-1}h k \in H$. Is there a name for this?
By far the most popular according to Google is "$H$ and $K$ normalize each other".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4152702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Manipulation of equation into quadratic form I am reading a paper for self study, and am trying to figure out how the authors can make the following claim. They claim the following statements are equivalent: $$\frac{c{\sqrt{d_1+d_2}}-\sqrt{d_1}z_1 -\delta(d_2) \sqrt{P_0P_1}}{\sqrt{d_1+d_2}} = -z_{\beta}$$ and $$(\delta^2p_1p_0)d_2^2 - d_2[(c+z_\beta)^2 -2\sqrt{d_1p_1p_0}z_1\delta] + d_1[z_1^2 - (c+z_{\beta})^2] =0$$ How do we rearrange the first equation so that it is a quadratic function of $d_2$? Any hints would be appreciated! I have tried multiplying both sides by $\sqrt{d_1+d_2}$ then squaring, and that didn't work. Somehow I need to multiply $z_\beta$ by $c$, and I can't see when in the derivation that needs to happen. Thank you! Here is the paper for reference.
At first glance, it seems that the difficulty lies in the combination of terms that contain $d_2$ and $\sqrt{d_1+d_2}$ in the numerator, because if you simply square the expression, the two factors will mix, and you won't be able to get rid of the square root over $\sqrt{d_1 + d_2}$ in the mix term. However, note that the denominator itself is $\sqrt{d_1+d_2}$, so you can split off the term $c\sqrt{d_1+d_2}$ in the numerator, and then the left hand side becomes, $$ \frac{c\sqrt{d_1+d_2} - z_1\sqrt{d_1} - d_2 \delta\sqrt{p_1p_0}}{\sqrt{d_1 + d_2}} = \frac{c\sqrt{d_1+d_2}}{\sqrt{d_1+d_2}} - \frac{z_1\sqrt{d_1}+d_2\delta\sqrt{p_1p_0}}{\sqrt{d_1+d_2}}\ . $$ This allows you to rearrange the equation into $$ \frac{z_1\sqrt{d_1} + d_2 \delta\sqrt{p_1p_0}}{\sqrt{d_1+d_2}} = c + z_\beta $$ and further into $$ z_1\sqrt{d_1} + d_2 \delta\sqrt{p_1p_0} = (c+z_\beta)\sqrt{d_1+d_2}\ . $$ Now you can square both sides and rearrange it into a quadratic equation in $d_2$.
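A consistency check of the rearrangement (Python sketch; the parameter values are arbitrary, and $z_\beta$ is solved from the first equation, so the quadratic should vanish):

```python
import math

# Pick arbitrary positive parameters, solve the first equation for z_beta,
# then verify the quadratic in d2 evaluates to zero.
d1, d2 = 4.0, 9.0
p0, p1 = 0.3, 0.6
delta, z1 = 1.5, 0.7
c = 2.0

# first equation:
# (c*sqrt(d1+d2) - sqrt(d1)*z1 - d2*delta*sqrt(p0*p1)) / sqrt(d1+d2) = -z_beta
s = math.sqrt(d1 + d2)
z_beta = -(c * s - math.sqrt(d1) * z1 - d2 * delta * math.sqrt(p0 * p1)) / s

quad = (delta**2 * p1 * p0) * d2**2 \
     - d2 * ((c + z_beta)**2 - 2 * math.sqrt(d1 * p1 * p0) * z1 * delta) \
     + d1 * (z1**2 - (c + z_beta)**2)
```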
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f(x)$ be a polynomial such that $f(f(x))=f(x-1)f'(x+1)$ and $f'(0)=2$. Find $f(x)$. Let $f(x)$ be a polynomial such that $f(f(x))=f(x-1)f'(x+1)$ and $f'(0)=2$. Find $f(x)$. Putting $x=-1$, I get $f(f(-1))=f(-2)f'(0)=2f(-2)$ Putting $x=0$, I get $f(f(0))=f(-1)f'(1)$ Also, replacing $x$ by $x+1$, I get $f(f(x+1))=f(x)f'(x+2)$ Not sure if I am getting anywhere.
Hint: $\deg f(x) = n$ then $\deg f(f(x)) = n^2$ and $\deg f(x-1)f'(x+1) = n + (n-1)$. Hence, $n^2 = n+(n-1)$
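Following the hint to its conclusion: $n^2=2n-1$ forces $n=1$, and writing $f(x)=ax+b$ the identity gives $a^2x+ab+b=a^2x+a(b-a)$, so $b=-a^2$; with $f'(0)=a=2$ this suggests $f(x)=2x-4$, which a quick check confirms (Python sketch):

```python
# Candidate f(x) = 2x - 4 from the degree argument and f'(0) = 2.
def f(x):
    return 2 * x - 4

def fp(x):          # f'(x), constant since f is linear
    return 2

# f(f(x)) = 4x - 12 should equal f(x-1) * f'(x+1) = (2x - 6) * 2.
mismatch = max(abs(f(f(x)) - f(x - 1) * fp(x + 1)) for x in range(-5, 6))
```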
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing that $\mathrm{Var}(X)=\mathrm{Var}(Y)$ Let $X$ and $Y$ be continuous random variables with the following PDF $(b>a)$: Show that $\mathrm{Var}(X)=\mathrm{Var}(Y)$. Here is my attempt: \begin{align*} \mathrm{Var}(X)&=\mathbb{E}(X^2)-(\mathbb{E}(X))^2 = \int_1^2 bx^2 \, dx + \int_2^4 ax^2 \, dx - \left(\int_1^2 bx \, dx + \int_2^4 ax \, dx\right)^2\\ &=\frac{7b+56a}{3}-\frac{9b^2}{4}-18ba-36a^2\\ \mathrm{Var}(Y)&=\mathbb{E}(Y^2)-(\mathbb{E}(Y))^2 = \int_2^4 ay^2\, dy - \int_4^5 by^2\, dy - \left(\int_2^4 ay\, dy - \int_4^5 by\, dy\right)^2\\ &=\frac{56a-61b}{3}-36a^2+54ab-\frac{81b^2}{4} \end{align*}
$Var(X)$ is correct but there are sign mistakes in the calculation of $Var(Y)$. \begin{align*} \mathrm{Var}(X)&=\mathbb{E}(X^2)-(\mathbb{E}(X))^2 = \int_1^2 bx^2 \, dx + \int_2^4 ax^2 \, dx - \left(\int_1^2 bx \, dx + \int_2^4 ax \, dx\right)^2\\ &=\frac{7b+56a}{3}- \left(6a+\frac{3b}{2}\right)^2\\ \mathrm{Var}(Y)&=\mathbb{E}(Y^2)-(\mathbb{E}(Y))^2 = \int_2^4 ay^2\, dy + \int_4^5 by^2\, dy - \left(\int_2^4 ay\, dy + \int_4^5 by\, dy\right)^2\\ &=\frac{56a+61b}{3}- \left(6a+\frac{9b}{2}\right)^2 \end{align*} $ \displaystyle Var(Y) - Var(X) = 18b + \left(6a+\frac{3b}{2}\right)^2 - \left(6a+\frac{9b}{2}\right)^2$ $ \displaystyle = 18b - 36 ab - 18b^2$ Now given the pdf, we also note that $2a + b = 1 \implies 2a = 1 - b$. So, $ \displaystyle Var(Y) - Var(X) = 18b - 18 b (1-b) - 18b^2 = 0$
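A numerical check (Python sketch): the densities, $b$ on $[1,2]$ and $a$ on $[2,4]$ for $X$, and $a$ on $[2,4]$ plus $b$ on $[4,5]$ for $Y$, with $2a+b=1$, are read off from the integrals above since the question's PDF image is missing; $a=0.4$, $b=0.2$ is one admissible choice:

```python
# Exact piecewise-constant-density moments; Var(X) should equal Var(Y).
a, b = 0.4, 0.2                 # one choice with 2a + b = 1
assert abs(2 * a + b - 1) < 1e-12

def moments(pieces):
    # pieces: list of (density, lo, hi); exact integrals of x and x^2
    m1 = sum(d * (hi**2 - lo**2) / 2 for d, lo, hi in pieces)
    m2 = sum(d * (hi**3 - lo**3) / 3 for d, lo, hi in pieces)
    return m1, m2

mx1, mx2 = moments([(b, 1, 2), (a, 2, 4)])
my1, my2 = moments([(a, 2, 4), (b, 4, 5)])
var_x = mx2 - mx1**2
var_y = my2 - my1**2
```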
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Existence of Expectation of Product of Two Random Variables In the theorem's proof (See screenshot link; StackExchange does not yet allow me to directly insert images), the author uses the linearity of expectation. However, linearity of expectation requires that each individual expectation being summed is finite. How do we know that E(XY) is finite given only that both variances are finite? Thank you! Proof
Cauchy-Schwarz inequality: $$ E\left[|XY|\right]\le\sqrt{E[X^2]\cdot E[Y^2]}. $$
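A tiny discrete illustration (Python sketch; the sample points are arbitrary):

```python
import math

# Discrete illustration of E|XY| <= sqrt(E[X^2] E[Y^2]): take (X, Y)
# uniform on a small set of joint sample points and compare both sides.
pts = [(1.0, 2.0), (-3.0, 0.5), (2.0, -1.0), (0.0, 4.0)]
n = len(pts)
e_absxy = sum(abs(x * y) for x, y in pts) / n
e_x2 = sum(x * x for x, y in pts) / n
e_y2 = sum(y * y for x, y in pts) / n
bound = math.sqrt(e_x2 * e_y2)
```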
{ "language": "en", "url": "https://math.stackexchange.com/questions/4153550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }