Conditional Probability in Pebble World Definition of the Pebble World (taken from Stat 110): In the Pebble World, the definition says that probability behaves like mass: the mass of an empty pile of pebbles is $0$, the total mass of all the pebbles is $1$, and if we have non-overlapping piles of pebbles, we can get their combined mass by adding the masses of the individual piles. The pebbles can be of differing masses and we can also have a countably infinite number of pebbles as long as their total mass is $1$. In the above image, suppose each pebble weighs $\frac{1}{9}$ (for simplicity). The pebbles inside the red box denote event $A$ and the pebbles inside the green box denote the event $B$. Note that the two boxes intersect. What is $P(A | B)$? It's supposed to be the probability of $A$ given that $B$ has occurred, which is $\frac{2/9}{6/9} = \frac{1}{3}$. How do we explain this intuitively? ($1/3$ feels like selecting the one element that's not in the intersection out of all the elements inside the red box. I don't think it's meant to be this way.)
Let:

* the sample space (picking any pebble) be the black square;
* $A$ be the event of picking any pebble within the red rectangle;
* $B$ be the event of picking any pebble within the green rectangle.

Say we know that event $B$ occurs; this means that $A$ can now eventuate only from within the green rectangle. In other words, being given that $B$ occurs is the same as narrowing down the effective sample space from the black square to the green rectangle. In this case (i.e., given that $B$ occurs) the probability of $A$ $$=\frac{\text n(\text{‘success’})}{\text n(\text{the effective sample space})}\\=\frac{\text n(\text{the part of the red rectangle that's inside the green rectangle})}{\text n(\text{the green rectangle})}\\=\frac{\text n(A\cap B)}{\text n(B)}.$$ In other words, $$P(A|B)=\frac{\text n(A\cap B)}{\text n(B)}\\=\frac26\\=\frac13.$$
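To make the "narrowed sample space" picture concrete, here is a small Python sketch; the specific pebble layout (three pebbles in the red box, two of them shared with the green box) is my assumption, chosen to match the masses $3/9$, $6/9$ and $2/9$ used above.

```python
from fractions import Fraction

# An assumed layout consistent with the stated masses: red box A holds 3 of
# the 9 pebbles, green box B holds 6, and they overlap in 2.
A = {0, 1, 2}                 # red box:   mass 3/9
B = {1, 2, 3, 4, 5, 6}        # green box: mass 6/9

mass = Fraction(1, 9)         # each pebble weighs 1/9
P = lambda E: mass * len(E)

# "Given B" = shrink the sample space to B, then re-weigh A's mass inside it.
P_A_given_B = P(A & B) / P(B)
print(P_A_given_B)  # 1/3
```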
{ "language": "en", "url": "https://math.stackexchange.com/questions/4229792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving Coprime nature of Fibonacci numbers I'm trying to prove that the Fibonacci numbers $F_i, F_{i+3}$ are either coprime, when $F_i$ and $F_{i+3}$ are not both even, or have a greatest common divisor of $2$, when $F_i$ and $F_{i+3}$ are both even. I've tried using the formula $\gcd(F_m, F_n) = F_{\gcd(m,n)}$, but I've not been able to come up with a full proof that the greatest common divisor of Fibonacci numbers $3$ terms apart is at most $2$. Is there any way to prove this maximum of $2$, or that $F_i, F_{i+3}$ are coprime except in the even case?
Note that $\gcd(i,i+3) \in \{1,3\}$: if $d \mid i$ and $d \mid i+3$, then $d \mid (i+3)-i=3$. Thus $$\gcd(F_i, F_{i+3})=F_{\gcd(i,i+3)}=F_1 \text{ or } F_3=1 \text{ or } 2.$$ You can also note that $\gcd(i,i+3)=3$ if and only if $3 \mid i$.
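A quick empirical check of this pattern (not a proof), using the standard identity $\gcd(F_m,F_n)=F_{\gcd(m,n)}$ only implicitly through the raw gcd computation:

```python
from math import gcd

def fib(n):
    # Iterative Fibonacci with F_1 = F_2 = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# gcd(F_i, F_{i+3}) should be F_3 = 2 when 3 | i, and F_1 = 1 otherwise.
for i in range(1, 200):
    assert gcd(fib(i), fib(i + 3)) == (2 if i % 3 == 0 else 1)
print("gcd(F_i, F_{i+3}) is 2 exactly when 3 | i, and 1 otherwise")
```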
{ "language": "en", "url": "https://math.stackexchange.com/questions/4229923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding exponential function limit definition from definition of e I am trying to prove, from the following limit definition of $e$ $$e=\lim_{n\to+\infty} \left(1+\frac{1}{n}\right)^n$$ the following definition for the exponential function: $$e^x =\lim_{n\to+\infty} \left(1+\frac{x}{n}\right)^n$$ I tried to follow the answer of this post, but there is something I don't understand: Prove $e^x$ limit definition from limit definition of $e$. Here is a screenshot: To substitute $n$ for $u$ in the limit "index", we need to make sure that $u$ goes to $+\infty$ as $n$ goes to $+\infty$. Therefore, we should have: $$\lim_{n\to+\infty} u = \lim_{n\to+\infty} nx \stackrel{?}{=}+\infty$$ Which is, in some way, $(+∞)*x$. That is fine if $x$ is positive, but what if $x$ were negative? Wouldn't that limit then be equal to $(-∞)$, which would invalidate that "re-indexing"? Is the proof missing something, or am I wrong? Thanks in advance! (The reason I am writing this here and not commenting is that the site settings forbid me to, because of my limited activity.)
I will assume exponentiation is continuous. Let $f(x)=\lim_{n\to\infty}(1+\frac xn)^n$. For $x>0$, I will show $e^x=f(x)$. [The case when $x<0$ is similar.] Lemma $f(x)$ is continuous. Proof I will prove it is continuous at $x=x_0$, for arbitrary $x_0\in\mathbb R$. For all $\epsilon>0$, let $\delta=\epsilon/f(|2x_0|)$. Let $x\in\mathbb R$ be such that $|x-x_0|<\delta$. Then, \begin{align} |f(x)-f(x_0)|&=\lim_{n\to\infty}\left|\big(1+\frac xn\big)^n-\big(1+\frac {x_0}n\big)^n\right|\\ &=\lim_{n\to\infty}\left|\sum_{i=1}^n{n\choose i}\frac{x^i-x_0^i}{n^i}\right|\\ &<\delta\lim_{n\to\infty}\sum_{i=1}^n{n\choose i}\frac{i|2x_0|^{i-1}}{n^i}. \end{align} Here, the sum in the limit is $$ \sum_{i=1}^n{n\choose i}\frac{i|2x_0|^{i-1}}{n^i}=\sum_{i=1}^n{n-1\choose i-1}\frac{|2x_0|^{i-1}}{n^{i-1}}\to f(|2x_0|) \ (n\to\infty). $$ Thus, we obtain $|f(x)-f(x_0)|<f(|2x_0|)\delta=\epsilon$. QED Thus, since both $e^x$ and $f(x)$ are continuous, it suffices to check they are equal for rational numbers $x=p/q$ with $p,q$ positive integers. It is well-known that taking a subsequence does not change the limit of a convergent sequence. Thus, \begin{align} e^{p/q}&=\lim_{n\to\infty}\big(1+\frac1n\big)^{pn/q}\\ &=\lim_{n\to\infty}\big(1+\frac1{qn}\big)^{pn}, \end{align} and \begin{align} f(p/q)&=\lim_{n\to\infty}\big(1+\frac{p}{qn}\big)^n\\ &=\lim_{n\to\infty}\big(1+\frac1{qn}\big)^{pn}, \end{align} equal each other.
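On the original worry about negative $x$: the re-indexing survives because the standard limit $(1+1/u)^u\to e$ also holds as $u\to-\infty$, so $|u|\to\infty$ is all that is needed. A quick numerical sketch (the particular values of $x$ and $n$ are arbitrary choices):

```python
import math

# (1 + x/n)^n for one large n, checked against e^x -- including negative x.
def exp_limit(x, n):
    return (1 + x / n) ** n

for x in (2.0, 0.5, -1.0, -3.0):
    # Relative error behaves like x^2 / (2n), so 1e-5 is a safe tolerance here.
    assert abs(exp_limit(x, 10**7) - math.exp(x)) < 1e-5 * math.exp(x)
```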
{ "language": "en", "url": "https://math.stackexchange.com/questions/4230062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Test the convergence of the series $\sum_{1}^{+\infty} \sqrt{2-\sqrt{2+\sqrt{2+...+\sqrt{2}}}}$ I find some difficulty with the following exercise: Test the convergence of the series $\sum_{1}^{+\infty} \sqrt{2-\sqrt{2+\sqrt{2+...+\sqrt{2}}}}$ ($n$ nested square roots; shown more clearly in the picture below). I tested the limit of $a_n$ and it's $0$; I tried to calculate $\lim \frac{a_{n+1}}{a_n}$ but it seems to be difficult. I don't know which theorem I should use to solve this problem. Can anyone help me or give me a hint? Thank you so much.
The main term of the series is $\sqrt{2-u_n}$, where $u_1=\sqrt{2}$ and $u_{n+1}=\sqrt{2+u_n}$. You can show by induction that $u_n=2\cos\left(\frac{\pi}{2^{n+1}}\right)$, therefore $$ \sqrt{2-u_n}=\sqrt{2-2\left(1-\frac{\pi^2}{2^{2n+3}}+o\left(\frac{1}{4^n}\right)\right)}=\frac{\pi}{2^{n+1}}\sqrt{1+o(1)}\sim\frac{\pi}{2^{n+1}} $$ Therefore $\sum\sqrt{2-u_n}$ converges.
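A quick numerical check of the closed form and the asymptotic (a sanity check, not a proof):

```python
import math

# u_1 = sqrt(2), u_{n+1} = sqrt(2 + u_n); the induction gives
# u_n = 2 cos(pi / 2^(n+1)), so sqrt(2 - u_n) should behave like pi / 2^(n+1).
u = math.sqrt(2)
for n in range(1, 19):
    assert abs(u - 2 * math.cos(math.pi / 2 ** (n + 1))) < 1e-12
    ratio = math.sqrt(2 - u) / (math.pi / 2 ** (n + 1))
    u = math.sqrt(2 + u)
print(ratio)  # close to 1: the terms are asymptotically geometric with ratio 1/2
```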
{ "language": "en", "url": "https://math.stackexchange.com/questions/4230437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to compute the following integral using complex analysis? I'm trying to compute the following integral using complex analysis: \begin{equation} \int_0^{2\pi}\sin(\exp(e^{i \theta}))d\theta \end{equation} I know that there has to be an easy way out, but I can't see it. I've tried the following: by changing of variable $z = e^{i\theta}$, we get to \begin{equation} \int_{|z|=1}\frac{\sin(\exp(z))}{iz}dz = \operatorname{Res}(f,0) = \lim_{|z|\to0}-i\sin(\exp(z)) = -i\sin(1) \end{equation} It doesn't seem right, though. Can anyone please help me out?
In general, for a holomorphic function $f$ on a disc $D(a,R)$ (and even for a much broader class of functions), we have for every $0<r<R$: $$ \frac{1}{2\pi}\int_0^{2\pi} f(a+re^{it})dt=f(a). $$ This is called the mean value property, and the proof is direct using Cauchy's formula; in fact \begin{eqnarray} f(a)&=&\frac{1}{2i\pi}\int_{|z-a|=r}\frac{f(z)}{z -a}dz\\ &=&\frac{1}{2i\pi} \int_0^{2\pi} \frac{f(a+re^{it})}{re^{it}}ire^{it}dt \qquad (z=a+re^{it}) \\ &=& \frac{1}{2\pi}\int_0^{2\pi} f(a+re^{it})dt. \end{eqnarray} In your case, take $f(z)=\sin(e^z)$; this function is entire (holomorphic on $\mathbb{C}$), so $$ \frac{1}{2\pi}\int_0^{2\pi} \sin(\exp(e^{it}))dt=f(0)=\sin(1). $$
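A numerical sanity check of the mean value property for this particular $f$; since the integrand is entire, the uniform-sample average over the circle (the periodic trapezoidal rule) converges extremely fast:

```python
import cmath, math

# Average f(z) = sin(exp(z)) over uniform samples of the unit circle;
# this should equal f(0) = sin(1), so the original integral is 2*pi*sin(1).
f = lambda z: cmath.sin(cmath.exp(z))
N = 256
avg = sum(f(cmath.exp(2j * math.pi * k / N)) for k in range(N)) / N
print(avg)  # ~ sin(1) + 0j
```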
{ "language": "en", "url": "https://math.stackexchange.com/questions/4230604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Complex number related problem Let $z_1,z_2,z_3$ be complex numbers such that $|z_1|=|z_2|=|z_3|=|z_1+z_2+z_3|=2$ and $|z_1–z_2| =|z_1–z_3|$,$(z_2 \ne z_3)$, then the value of $|z_1+z_2||z_1+z_3|$ is_______ My solution is as follow ${z_1} = 2{e^{i{\theta _1}}};{z_2} = 2{e^{i{\theta _2}}};{z_3} = 2{e^{i{\theta _3}}}$ & $Z = {z_1} + {z_2} + {z_3} = 2\left( {{e^{i{\theta _1}}} + {e^{i{\theta _2}}} + {e^{i{\theta _3}}}} \right)$ $\left| {{z_1} - {z_2}} \right| = \left| {{z_1} - {z_3}} \right| \Rightarrow \left| {{e^{i{\theta _1}}} - {e^{i{\theta _2}}}} \right| = \left| {{e^{i{\theta _1}}} - {e^{i{\theta _3}}}} \right|$ Let ${\theta _1} = 0$ $\left| {{z_1} - {z_2}} \right| = \left| {{z_1} - {z_3}} \right| \Rightarrow \left| {1 - \left( {\cos {\theta _2} + i\sin {\theta _2}} \right)} \right| = \left| {1 - \left( {\cos {\theta _3} + i\sin {\theta _3}} \right)} \right|$ $ \Rightarrow \left| {1 - \cos {\theta _2} - i\sin {\theta _2}} \right| = \left| {1 - \cos {\theta _3} - i\sin {\theta _3}} \right| \Rightarrow \left| {2{{\sin }^2}\frac{{{\theta _2}}}{2} - 2i\sin \frac{{{\theta _2}}}{2}\cos \frac{{{\theta _2}}}{2}} \right| = \left| {2{{\sin }^2}\frac{{{\theta _3}}}{2} - 2i\sin \frac{{{\theta _3}}}{2}\cos \frac{{{\theta _3}}}{2}} \right|$ $\Rightarrow \left| { - 2{i^2}{{\sin }^2}\frac{{{\theta _2}}}{2} - 2i\sin \frac{{{\theta _2}}}{2}\cos \frac{{{\theta _2}}}{2}} \right| = \left| { - 2{i^2}{{\sin }^2}\frac{{{\theta _3}}}{2} - 2i\sin \frac{{{\theta _3}}}{2}\cos \frac{{{\theta _3}}}{2}} \right| \Rightarrow \left| { - 2i\sin \frac{{{\theta _2}}}{2}\left( {\cos \frac{{{\theta _2}}}{2} + i\sin \frac{{{\theta _2}}}{2}} \right)} \right| = \left| { - 2i\sin \frac{{{\theta _3}}}{2}\left( {\cos \frac{{{\theta _3}}}{2} + i\sin \frac{{{\theta _3}}}{2}} \right)} \right|$ $ \Rightarrow \left| { - 2i\sin \frac{{{\theta _2}}}{2}\left( {{e^{i\frac{{{\theta _2}}}{2}}}} \right)} \right| = \left| { - 2i\sin \frac{{{\theta _3}}}{2}\left( {{e^{i\frac{{{\theta _3}}}{2}}}} \right)} \right|$ $ 
\Rightarrow \left| {2\sin \frac{{{\theta _2}}}{2}\left( {{e^{ - i\frac{\pi }{2}}}} \right)\left( {{e^{i\frac{{{\theta _2}}}{2}}}} \right)} \right| = \left| {2\sin \frac{{{\theta _3}}}{2}\left( {{e^{ - i\frac{\pi }{2}}}} \right)\left( {{e^{i\frac{{{\theta _3}}}{2}}}} \right)} \right| \Rightarrow \left| {2\sin \frac{{{\theta _2}}}{2}\left( {{e^{i\left( {\frac{{{\theta _2}}}{2} - \frac{\pi }{2}} \right)}}} \right)} \right| = \left| {2\sin \frac{{{\theta _3}}}{2}\left( {{e^{i\left( {\frac{{{\theta _3}}}{2} - \frac{\pi }{2}} \right)}}} \right)} \right|$ $\Rightarrow \left| {2\sin \frac{{{\theta _2}}}{2}} \right|\left| {\left( {{e^{i\left( {\frac{{{\theta _2}}}{2} - \frac{\pi }{2}} \right)}}} \right)} \right| = \left| {2\sin \frac{{{\theta _3}}}{2}} \right|\left| {\left( {{e^{i\left( {\frac{{{\theta _3}}}{2} - \frac{\pi }{2}} \right)}}} \right)} \right| \Rightarrow \left| {2\sin \frac{{{\theta _2}}}{2}} \right| = \left| {2\sin \frac{{{\theta _3}}}{2}} \right|$ ${\theta _2} \ne {\theta _3}$ How do I proceed further?
This is a kludgy metacheating simplification of the problem. From the constraints, all 3 points are on the circle of radius 2, centered at the origin. Further, directly from the constraints, $z_1$ is on the perpendicular bisector of the line segment connecting $z_2$ and $z_3$. Finally, $z_2, z_3$ must be chosen so that $|z_1 + z_2 + z_3|$ is also on the same circle. This suggests that the vectors $z_2, z_3$ should cancel each other out, which suggests that $z_2 + z_3$ must equal $0$. Initially, I considered $(z_2, z_1, z_3) = (-2, 2i, 2)$ as one obvious way of satisfying these constraints. Then, I realized, based on someone else's (now deleted) comment, that this answer could be harmlessly rotated by any angle $\theta$. Although I can't prove it, it seems probable to me that the only way to satisfy these constraints is by some rotation of either $(z_2, z_1, z_3) = (-2, 2i, 2)$ or $(z_2, z_1, z_3) = (-2, -2i, 2).$ This means that (for example) the triangle formed by the vertices $(0,0), z_1, z_2$ must be a 45-45-90 right triangle, whose hypotenuse is $|z_1 + z_2| = 2\sqrt{2}$. Consideration between $z_1$ and $z_3$ is identical. Therefore, the answer is $\left[2\sqrt{2}\right]^2 = 8.$ Note A case may be made that this answer is incomplete, since I did not actually prove that there was no other way to satisfy the constraints. I am therefore relying heavily on the fact that the tone of the question indicates that the answer must be unique. Addendum Completing the problem, by proving (for example) that when $z_1 = 2i$ that $z_2, z_3$ must be on the real axis. I started to analyze this, and then realized that I am merely repeating the analysis already supplied in the answer given by justadzr. That is, any given $(z_2, z_1, z_3)$ that satisfies the constraints continues to satisfy the constraints when all three points are rotated by $\theta$. 
When such a rotation takes $z_1$ to $2i$, then, very similar to what justadzr's answer indicates, $z_2, z_3$ must have form $x + iy, -x +iy$, which subsequently implies that $y = 0$. This places $z_2, z_3$ on the real axis, which places them at $-2, +2$ (in some order), since these are the only two places where the circle intersects the real axis.
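A brute-force check (of the value, not of uniqueness) that every rotation of the configuration above satisfies all the constraints and yields the same product $8$:

```python
import cmath, math

# Rotate (z2, z1, z3) = (-2, 2i, 2) through a full turn and verify the
# constraints and the product |z1 + z2| * |z1 + z3| at each step.
for k in range(360):
    w = cmath.exp(1j * math.radians(k))
    z1, z2, z3 = 2j * w, -2 * w, 2 * w
    assert all(abs(abs(z) - 2) < 1e-9 for z in (z1, z2, z3, z1 + z2 + z3))
    assert abs(abs(z1 - z2) - abs(z1 - z3)) < 1e-9
    assert abs(abs(z1 + z2) * abs(z1 + z3) - 8) < 1e-9
print("all rotations satisfy the constraints and give 8")
```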
{ "language": "en", "url": "https://math.stackexchange.com/questions/4230819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
$f(x)=\sum_{n=0}^{\infty} \frac{e^{-nx}}{1+n^2}$ is differentiable on $~(0, \infty)$ Let us consider a real valued function $~~f:[0 , \infty) \to \mathbb R~~$ defined by $$f(x)=\sum_{n=0}^{\infty} \frac{e^{-nx}}{1+n^2},~~x \in [0,\infty).$$ Show that $f$ is differentiable on $~(0, \infty)~~$ but $~~\displaystyle \lim_{x \to 0+} f'(x)~$ does not exist. My attempt: By using the $~M$-test I have proved that the series of functions $~~\displaystyle \sum_{n=0}^{\infty} f_n(x)~~$ converges uniformly to $~~f(x)~~$ as given, where $$f_n(x)=\frac{e^{-nx}}{1+n^2},~~x \in [0,\infty).$$ Indeed, we have $$|f_n(x)| =\left|\frac{e^{-nx}}{1+n^2}\right| \leq \frac{1}{1+n^2},$$ so the convergence is uniform. Hence $~~f'(x)=\displaystyle \sum_{n=0}^{\infty} f'_n(x)=\displaystyle \sum_{n=0}^{\infty} \frac{-ne^{-nx}}{1+n^2}.$ This shows that $~~f(x)~~$ is differentiable on $~(0,\infty).$ Now notice that $$\lim_{x \to 0+}f'(x)=\lim_{x \to 0+} \sum \frac{-ne^{-nx}}{1+n^2} = \sum \left(\lim_{x \to 0+} \frac{-ne^{-nx}}{1+n^2}\right)=-\sum \frac{n}{1+n^2}.$$ Since this last series is not convergent, the limit does not exist. Is my solution all okay? Is there anything I did wrong, or can it be solved in a much simpler way? Please suggest. Thanks for your time looking at my solution.
We have $$ \frac{1}{1+n^2}=\int_{0}^{+\infty}\cos(nt)e^{-t}\,dt \tag{1}$$ so $f(x)$ can be represented in the following way: $$ f(x)=\sum_{n\geq 0}\int_{0}^{+\infty} \cos(nt) e^{-t} e^{-nx}\,dt =\text{Re}\sum_{n\geq 0}\int_{0}^{+\infty}e^{nit} e^{-t} e^{-nx}\,dt.\tag{2}$$ We are allowed to exchange the series and the integral due to dominated convergence, so $$ f(x)=\text{Re}\int_{0}^{+\infty}\frac{e^x}{e^x-e^{it}}\,e^{-t}\,dt =\frac{1}{2}\int_{0}^{+\infty}\frac{e^x-\cos t}{\cosh x-\cos t}\,e^{-t}\,dt.\tag{3}$$ For any $x>0$ we have $e^x > 1\geq \cos t$ and $\cosh x > 1\geq \cos t$, so the integral representation ensures the differentiability of $f(x)$ and also gives an integral representation for $f'(x)$: $$ f'(x) = \frac{1}{2}\int_{0}^{+\infty}\frac{1-\cos(t)\cosh(x)}{(\cosh(x)-\cos(t))^2}\,e^{-t}\,dt.\tag{4} $$ On the other hand, if we consider the limit as $x\to 0^+$ of the RHS we end up with $$ \frac{1}{2}\int_{0}^{+\infty}\frac{1}{1-\cos t}e^{-t}\,dt = \frac{1}{4}\int_{0}^{+\infty}\frac{e^{-t}}{\sin^2\left(\frac{t}{2}\right)}\,dt\tag{5} $$ which is blatantly divergent.
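A numerical cross-check of the integral representation $(3)$ against the defining series, using a pure-Python Simpson rule (the truncation points and step sizes here are arbitrary choices, good enough for a few digits):

```python
import math

def f_series(x, terms=2000):
    # f(x) = sum_{n>=0} e^{-nx} / (1 + n^2)
    return sum(math.exp(-n * x) / (1 + n * n) for n in range(terms))

def f_integral(x, T=60.0, steps=20000):
    # Simpson's rule for (3): (1/2)(e^x - cos t)/(cosh x - cos t) e^{-t} on [0, T].
    g = lambda t: 0.5 * (math.exp(x) - math.cos(t)) / (math.cosh(x) - math.cos(t)) * math.exp(-t)
    h = T / steps
    s = g(0) + g(T)
    s += 4 * sum(g((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(g(2 * i * h) for i in range(1, steps // 2))
    return s * h / 3

for x in (0.5, 1.0, 2.0):
    assert abs(f_series(x) - f_integral(x)) < 1e-5
```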
{ "language": "en", "url": "https://math.stackexchange.com/questions/4230940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Reference request: Allard's regularity Theorem for smooth submanifolds Allard's regularity Theorem (Theorem 5.2, https://web.stanford.edu/class/math285/ts-gmt.pdf) asserts that a varifold can be locally written as a graph. Moreover, it gives a lower bound on the size of the domain in which this is possible. The size depends on the size of subsets of varifolds where balls approximate the size of Euclidean balls of the same dimension. I am not well versed in geometric measure theory and would like this Theorem for submanifolds of Euclidean space but I can't find it formulated anywhere. The abstract in this paper (https://arxiv.org/pdf/1311.3963.pdf) says the following: "In the special case when V is a smooth manifold translates to the following: If $\omega^{-1} \rho^{-n}Area(V \cap B_{\rho}(x))$ is sufficiently close to 1 and the unit normal of V satisfies a $C^{0,\alpha}$ estimate, then $ V \cap B_{\rho}(x)$ is the graph of a $C^{1,\alpha}$ function with estimates." I would like a theorem of the following form. Suppose $V$ is a closed smooth submanifold of $\mathbb{R}^n$ such that the second fundamental form of $V$ is bounded by $C$ and its derivatives by $C^2$ with positive injectivity radius $r_V$. Then there is a radius $R(C,r_V)>0$ such that for any $p \in V$ there is a function $u_p$ $$ B_R(p) \cap V = graph(u_p) $$ where $B_R(p)\subset \mathbb{R}^n$ Edit: For anyone interested, I found the answer to my question in Theorem 3.8 of https://people.math.sc.edu/howard/Notes/schur.pdf
I found a source asserting that the domain on which a submanifold $V$ of Euclidean space can be written as a graph contains a ball whose radius depends only on the bound on the second fundamental form: Proposition 3.8 in https://people.math.sc.edu/howard/Notes/schur.pdf The theorem is formulated for submanifolds whose second fundamental form is bounded by $1$, but after a rescaling argument it follows that the radius depends only linearly on the norm of the second fundamental form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Do you need vector calculus or linear algebra to acquire a general conic's parameters? As can be seen from this question, for the general conic section $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$, the slope of the principal axis is $$\tan \theta=\frac{B}{A-C+(\mathrm{sgn}\;{B})\sqrt{B^2+(A-C)^2}}$$ and the eccentricity(if it's a hyperbola or an ellipse) is $$\epsilon=\left(\sqrt{\frac12-\frac{(\mathrm{sgn}\;\Delta)(A+C)}{2\sqrt{B^2+(A-C)^2}}}\right)^{-1}$$ I'd like to prove them both myself, but I suspect they both require either vector calculus or linear algebra, or similar topics at that level. I don't have the slightest grasp over either, so it'd be great to know if both can be proved by using simple cartesian geometry. I'm not asking for the proof; if it requires either of the above, I'm happy to leave it for when I know them, and I can prove it myself otherwise. I just need to know what proving it entails.
This can indeed be done with simple geometry. (Note: if it's a parabola - which also has no centre - you can factor the equation into the standard form to find the axis and tangent straightaway. It may not be so easy with central conics.) First, the centre of the conic is shifted to the origin. This is done by changing $x$ and $y$ to $x-h$ and $y-k$, where $(h, k)$ is the centre; both coordinates are obtained by requiring that there are no linear terms. (This is because the centre is defined as the point which bisects all chords passing through it. If the centre is the origin, and say $(x, y)$ lies on the conic, then $(-x, -y)$ also has to lie on it, since that's the other end of the bisected chord. For that to happen, the conic equation must have no linear terms.) The equation is thus simplified to the form $Ax^2+By^2+Cxy=1$. As in @robjohn's comment, the conic can be rotated about the origin to settle down with the $x$- and $y$-axes as its axes, as in the standard form. This is done as detailed in this answer, by using the matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} $ to change the variables. The equation is then further simplified into this form, without $xy$ terms: $$x^2(A\cos^2 \theta + B\sin^2 \theta +C\cos\theta\sin\theta)+y^2(A\sin^2\theta+B\cos^2 \theta-C\cos\theta\sin\theta)=1$$ (since the coefficient of $xy$ has to be $0$ when the conic is in its standard form, $2\sin\theta\cos\theta(B-A)+C(\cos^2\theta-\sin^2\theta)=0$.) From the last two equations, $\theta$ can be obtained, which gives us the slope of the axis. Tidying up the equation into the standard form will also yield the eccentricity easily. Edit: As in the comment below, $\frac{C}{A-B}=\frac{\sin(2\theta)}{\cos(2\theta)}=\tan(2\theta)$. This can be used to figure the slope out pretty quickly.
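A small numerical sketch of the $\tan(2\theta)=C/(A-B)$ recipe: build a conic with a known axis direction by rotating a standard ellipse, then check both identities (the sample ellipse $u^2/4+v^2=1$ and the angle $0.3$ are my arbitrary choices):

```python
import math

# Rotate u^2/4 + v^2 = 1 by theta0, i.e. substitute u = x c + y s, v = -x s + y c,
# and collect the conic into the form A x^2 + B y^2 + C xy = 1.
theta0 = 0.3
c, s = math.cos(theta0), math.sin(theta0)
A = c * c / 4 + s * s
B = s * s / 4 + c * c
C = 2 * c * s / 4 - 2 * s * c

# The xy-eliminating condition from the answer holds at theta = theta0 ...
assert abs(2 * s * c * (B - A) + C * (c * c - s * s)) < 1e-12
# ... and the quick formula recovers the axis direction:
assert abs(math.tan(2 * theta0) - C / (A - B)) < 1e-12
```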
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose $a$ and $b$ are integers in the ratio $6:7$, what is the ratio of $(6a^{2} + 7b^{2} + 6a + 7b ) : (a^{2}+b^2)$? Initially I thought I can assume $a=6$ and $b=7$ and calculate the ratio of the given expression which turned out to be $\frac{644}{85}$ and this was not the correct answer provided for this question. Then when I turned to the solution for this problem, this statement was written as part of the solution The degree of all the terms in the numerator is not equal and hence, the resultant ratio can't be computed. I couldn't understand this statement. What does it mean? What's the theory behind this? I am just a little confused here and I am sorry if this question looks foolish but a little guidance will be helpful.
I get a ratio of $\frac{7}{b}+\frac{559}{85}$ when substituting $a=\frac{6 b}{7}$ into $\frac{6 a^2+6 a+7 b^2+7 b}{a^2+b^2}$. In turn, when setting $b=\frac{7 a}{6}$, I obtain a ratio of $\frac{6}{a}+\frac{559}{85}$. The fact that squared variables $a^2$, $b^2$ are involved leads to a non-constant ratio (one still involving $a$ or $b$). If we use only linear terms in the numerator and denominator, we obtain a constant ratio. And when I use cubic terms, for example $\frac{6 a^3+6 a+7 b^3+7 b}{a^3+b^3}$, I get a ratio depending on a squared $a$ or $b$, namely in this case $\frac{1}{559} \left(\frac{4165}{b^2}+3697\right)$. We can generalize this question by replacing $6,7$ with $i,j$ (so that $\frac{a}{b}=\frac{i}{j}$) and using an arbitrary exponent $n$ instead of $2$, which leads to the ratio $$\frac{j \left(b^n+b\right)+\frac{b i^2}{j}+i \left(\frac{b i}{j}\right)^n}{b^n+\left(\frac{b i}{j}\right)^n}=\frac{ia^n+jb^n+ia+jb}{a^n+b^n}, \qquad \frac{a}{b}=\frac{i}{j}.$$
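A quick check of the non-constant ratio with exact rational arithmetic (the range of test values is arbitrary):

```python
from fractions import Fraction

# For a : b = 6 : 7 the ratio should be 6/a + 559/85 -- it depends on a,
# so there is no single numeric answer.
for t in range(1, 50):
    a, b = 6 * t, 7 * t
    ratio = Fraction(6 * a**2 + 7 * b**2 + 6 * a + 7 * b, a**2 + b**2)
    assert ratio == Fraction(6, a) + Fraction(559, 85)
print("t = 1 gives", Fraction(6, 6) + Fraction(559, 85),
      "while t = 2 gives", Fraction(6, 12) + Fraction(559, 85))
```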
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Difference between these two definitions of limits This is a definition from WikiBooks: This is another definition from LibreTexts: The first definition says except possibly at $x=c$ while the second definition says $f(x)$ is defined for all $x\neq a$. Why do these two definitions define $f(x)$ differently?
I believe you're having a math-speak issue. In the second definition, "$f(x)$ is defined for all $x\neq a$" doesn't mean that $f(a)$ must not be defined. You have to read the sentence in the broad sense (as with many other situations in math, such as the use of the word 'or' as a logical connective). All we're saying is that we require $f(x)$ to be defined at every point other than $a$. At $a$, we make no requirement that the function be defined. If $f$ is defined at $a$, then great, good for you, (but the value of $f(a)$ makes no impact as to the limit definition). If $f$ is not defined at $a$, then that's no trouble either. Said differently, I read the sentence "$f(x)$ is defined for all $x\neq a$" as the one-sided implication "If $x\neq a$ then $f(x)$ is defined", NOT as the biconditional "$f(x)$ is defined if and only if $x\neq a$"
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Is it true that $d((f+g)(x), (f+g)(y)) \leq d(f(x),f(y)) + d(g(x),g(y))$? In arbitrary metric space $(M, d)$, is it true that $d((f+g)(x), (f+g)(y)) \leq d(f(x),f(y)) + d(g(x),g(y))$? Clearly, in the simple case where $M = \mathbb{R}$ and $d(x,y) = | x - y |$, we have $$\begin{align} d((f+g)(x), (f+g)(y)) &= |(f+g)(x) - (f+g)(y)| \\ &= |(f(x) + g(x)) - (f(y) + g(y))| \\ &= |(f(x) - f(y)) + (g(x) - g(y))| \tag{1} \\ &\leq |f(x) - f(y)| + |g(x) - g(y)| \\ &= d(f(x), f(y)) + d(g(x), g(y)), \end{align}$$ as desired. So basically, this proof requires the ability to rearrange the values within the absolute values sign, as indicated in line (1), which is certainly possible. But when I try this same type of proof using just the properties of an arbitrary metric, I get $$\begin{align} d((f+g)(x), (f+g)(y)) &= d(f(x) + g(x), f(y) + g(y)) \\ &(=?) \ |d(f(x), f(y)) - d(g(x), g(y))| \tag{2}\\ &\leq d(f(x),f(y)) + d(g(x), g(y)), \end{align}$$ and I'm not sure I can do the same type of rearrangement in an arbitrary metric space, which is why I've put the question mark in line (2). I feel like I'm missing something obvious! (For what it's worth, I'm trying to use this to prove that the sum of uniformly continuous functions is uniformly continuous.)
It's easy to show that the inequality holds when the metric is induced by a norm (as you showed), since then $d(u+v,u'+v')=\|(u-u')+(v-v')\|\leq d(u,u')+d(v,v')$. For a general metric it can fail. Note that the discrete metric does not give a counterexample: if the right-hand side is $0$ then $f(x)=f(y)$ and $g(x)=g(y)$, which forces the left-hand side to be $0$ as well. But a metric that distorts distances non-uniformly does the job. Take $M=\mathbb{R}$ with $d(x,y)=|x^3-y^3|$ (a metric, since it is the pullback of the usual metric along the injection $t\mapsto t^3$). Let $f(x)=g(x)=1$ and $f(y)=g(y)=0$. Then $$\text{LHS}=d(2,0)=8,\qquad \text{RHS}=d(1,0)+d(1,0)=2,$$ so the claimed inequality fails.
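A concrete numerical check that the inequality can genuinely fail; here with the metric $d(x,y)=|x^3-y^3|$ on $\mathbb{R}$, one metric for which it breaks:

```python
# d(x, y) = |x^3 - y^3| is a metric on R (the pullback of |.| along the
# injection t -> t^3), and the sum inequality fails for it.
d = lambda x, y: abs(x**3 - y**3)

fx, fy = 1, 0  # values of f at two points x and y
gx, gy = 1, 0  # values of g at the same two points

lhs = d(fx + gx, fy + gy)    # d(2, 0) = 8
rhs = d(fx, fy) + d(gx, gy)  # 1 + 1 = 2
assert (lhs, rhs) == (8, 2) and lhs > rhs
```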
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find all non-negative integer polynomials that satisfy : $P(1) = 8 $ ; $P(2) =2012 $ Find all non-negative integer polynomials that satisfy : $P(1) = 8 $ ; $P(2) =2012 $ First, I set $Q(x) = P(x)+ax+b $ such that $1$ and $2$ are solutions of $ Q(x)$ $\Rightarrow 8+a+b = 0$ and $2012 + 2a+b = 0$ $\Rightarrow a=-2004 ; b =1996 $ $\Rightarrow P(x) = (x-1)(x-2)R(x) +2004x-1996$ (Predicted result :$ P(x) = x^{10}+x^9+x^8+x^7+x^6+x^4+x^3+x^2 $) But we have one more condition that the coefficients of $P(x)$ are non-negative integers, so I think I need to add another condition for $R(x)$ . But I don't have any more ideas. I hope to get help from everyone. Thanks very much !
So once you get that $$P(x)=(x-1)(x-2)R(x)+2004x-1996$$ $$P(x)=(x^2-3x+2)R(x)+2004x-1996$$ we know that the coefficients of $R(x)$ must be integers. Let's say $R(x)$ is an integral polynomial of degree $n\geq 2$ (I'll leave the other cases to you, as they are pretty much the same process and simpler), so that $$R(x)=a_0x^0+a_1x^1+\ldots +a_nx^n=\sum_{k=0}^n a_kx^k,$$ with the convention $a_k=0$ for $k<0$ or $k>n$. We have $$P(x)=(x^2-3x+2)R(x)+2004x-1996$$ $$P(x)=-1996+2004x+\sum_{k=2}^{n+2}a_{k-2}x^k+\sum_{k=1}^{n+1} (-3a_{k-1})x^k+\sum_{k=0}^n 2a_kx^k$$ $$P(x)=(-1996+2a_0)+(2004-3a_0+2a_1)x+\sum_{k=2}^{n+2} \left((a_{k-2}-3a_{k-1}+2a_k)x^k\right).$$ So $R(x)$ is defined by any integer sequence $a_0,\ldots,a_n$ that satisfies $$\begin{cases} a_0\geq 998\\2a_1-3a_0\geq -2004\\ a_{k-2}-3a_{k-1}+2a_k\geq 0~~\forall\, 2\le k\le n+2.\end{cases}$$ (The cases $k=n+1$ and $k=n+2$ use the convention above, so in particular $a_{n-1}\geq 3a_n\geq 0$.) It's easy to see that there are infinitely many such polynomials. One explicit construction: choose initial values satisfying the first two inequalities, then continue the sequence along the equality case of the remaining inequalities (watching out for parity conflicts). Another easy way to see the existence of infinitely many polynomials is that we can easily make a valid $R(x)$ of degree $n+1$ from a valid $R(x)$ of degree $n$.
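A quick check that the predicted polynomial from the question does satisfy both conditions:

```python
# The predicted solution from the question:
# P(x) = x^10 + x^9 + x^8 + x^7 + x^6 + x^4 + x^3 + x^2
exponents = [10, 9, 8, 7, 6, 4, 3, 2]
P = lambda x: sum(x**k for k in exponents)

assert P(1) == 8      # eight terms, all with coefficient 1
assert P(2) == 2012   # 1024 + 512 + 256 + 128 + 64 + 16 + 8 + 4
```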
{ "language": "en", "url": "https://math.stackexchange.com/questions/4231587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
an integral from the tables The following integral occurs in the book “Integrals and Series”, vol. I, Prudnikov et al., page 542: $\displaystyle \int_{0}^{\infty}{\ln|\cos(ax)|\frac{1}{x^2+z^2}dx}=\frac{\pi}{2z}\ln{\frac{1+e^{-2az}}{2}}$ I have had no luck verifying this integral. Can you help?
$I=\int_{0}^{\infty}{\ln|\cos(ax)|\frac{1}{x^2+b^2}dx}=\frac{a}{2}\int_{0}^{\infty}{\ln(\cos^2 t)\frac{1}{t^2+(ab)^2}dt}$ Let's denote $ab=y$. Due to the periodicity of $\cos^2 t$ we can write $$I=\frac{a}{4}\int_{0}^{\pi}{\ln(\cos^2 t)\sum_{k=-\infty}^\infty\frac{1}{(t+\pi k)^2+y^2}dt}$$ Let's evaluate $S=\sum_{k=-\infty}^\infty\frac{1}{(t+\pi k)^2+y^2}$ first. Integrating the function $\pi \cot(\pi z)\frac{1}{(t+\pi z)^2+y^2}$ along a big circle (of radius $R$) in the complex plane, we get zero as $R\to \infty$, because the integrand declines rapidly enough. Therefore, $$\oint_R=0=2\pi i\sum Res\, \pi \cot(\pi z)\frac{1}{(t+\pi z)^2+y^2}$$ $$S=-Res_{(z=-t/\pi\pm iy/\pi)}\pi \cot(\pi z)\frac{1}{(t+\pi z)^2+y^2}$$ We get $$S=\frac{1}{2iy}\big(\cot (t-iy)-\cot(t+iy)\big)=-\frac{1}{y}\Im\cot(t+iy)$$ Using $\cos^2t=\frac{1}{4}(e^{2it}+1)(e^{-2it}+1)$ and $\cot(t+iy)=i\,\frac{e^{2it}+e^{2y}}{e^{2it}-e^{2y}}$ we can write $$I=\frac{a}{4y}\int_0^\pi\ln\frac{(e^{2it}+1)(e^{-2it}+1)}{4}\Im\Big(\frac{e^{2it}+e^{2y}}{e^{2y}-e^{2it}}i\Big)dt$$ $$=\frac{a}{8y}\int_0^{2\pi}\ln\frac{(e^{it}+1)(e^{-it}+1)}{4}\Im\Big(\frac{e^{it}+e^{2y}}{e^{2y}-e^{it}}i\Big)dt$$ It is very natural to evaluate this integral via contour integration, using the variables $z=e^{\pm it}$, integrating clockwise (for $\ln(e^{-it}+1)$) and counter-clockwise (for $\ln(e^{it}+1)$). To close the contours, we have to add small half-circles around $z=-1$; the integrals along these half-circles give zero contribution (due to $r\ln r\to 0$ as $r\to 0$). There are two simple poles inside the contour: at $z=e^{-2y}$ and at $z=0$. Let's also suppose that $y>0$. In this case $$I=-\frac{a}{8y} \,2\pi Res_{z=0}\frac{z+e^{2y}}{z-e^{2y}}\frac{\ln(z+1)-2\ln2}{z}+\frac{a}{8y} \,2\pi Res_{z=e^{-2y}}\frac{z+e^{-2y}}{z-e^{-2y}}\frac{\ln(z+1)}{z}$$ The straightforward evaluation gives $$I=-\frac{\pi a\ln2}{2y}+\frac{\pi a}{2y}\ln(1+e^{-2y})=\frac{\pi}{2b}\ln{\frac{1+e^{-2ab}}{2}}$$
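A numerical sanity check of the partial-fraction identity for $S$, truncating the sum at a large but arbitrary $N$ (the sample values of $t$ and $y$ are arbitrary too):

```python
import cmath, math

# Partial sums of S = sum_k 1/((t + pi k)^2 + y^2) versus -(1/y) Im cot(t + iy).
cot = lambda z: cmath.cos(z) / cmath.sin(z)

t, y = 1.0, 0.5
N = 200000
S = sum(1.0 / ((t + math.pi * k) ** 2 + y * y) for k in range(-N, N + 1))
closed = -cot(complex(t, y)).imag / y
assert abs(S - closed) < 1e-5   # the tail of the truncated sum is O(1/N)
```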
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability that my set of dice 'wins' My question is about probabilities. Given are two sets of regular, 6-sided, fair dice of at least one die (but no upper bound on the number of dice). Now, 1 of the sets is considered to be 'the winner'. This is determined with the following method: Sort both sets on the number of eyes. Compare the dice of both sets 1-by-1 starting at the highest by pairing up the dice from both sets (compare 1st with 1st, 2nd with 2nd etc.). As soon as 1 die is higher than the other, then the respective set is the winner and the other dice are ignored. If both sets have an equal number of dice and all dice are equal, then it's a tie. If it's a tie except that one set still has dice left (is a bigger set), then that set is the winner. Examples:
Here is an alternate, very rough approximation for large $a,b$. Another answer establishes that the probability of ties converges to $0$. Likewise I’d expect that the probability of a tie on just $6$s is also small. In that case we can just model the difference in the number of sixes as a random variable normally distributed with mean $(a-b)/6$ and variance $5(a+b)/36$. The probability that this is positive (i.e. that the first player wins by having more 6s) is the same as a standard normal variable being $< (a-b)/\sqrt{5(a+b)}$. This cumulative distribution can be computed using the erf or erfc function in many programming languages. But I wouldn’t trust it for single digit values of $a,b$.
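A sketch of this approximation in Python (the function name and the choice of `math.erf` are mine, not from the answer); it evaluates $\Phi\big((a-b)/\sqrt{5(a+b)}\big)$ for the standard normal CDF $\Phi$:

```python
import math

def win_prob_approx(a, b):
    """Approximate P(the a-dice set beats the b-dice set on the count of 6s).

    Models the difference of 6-counts as N((a-b)/6, 5(a+b)/36) and returns
    Phi((a-b)/sqrt(5(a+b))), using Phi(x) = (1 + erf(x/sqrt(2))) / 2.
    """
    x = (a - b) / math.sqrt(5 * (a + b))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
```

For $a=b$ the approximation gives exactly $1/2$, as symmetry demands, and it should only be trusted for reasonably large $a,b$, per the caveat above.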
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 9, "answer_id": 7 }
Are two nice equidimensional varieties with the same Euler characteristic fibers of a flat morphism? I'm very amused by the ability of Euler characteristic to withstand any deformation. I would be very impressed if one had a geometric analogue of homotopy (much rougher) that captured all pairs with the same Euler characteristic. Recently I saw an interesting theorem that might offer such a solution at least in the context of algebraic geometry; under good conditions (everything projective etc) if $X \to Y$ is flat, then the fibers have the same Euler characteristic (of the structure sheaf). My question is thus if under good conditions, if $X,Y$ are equidimensional with the same euler characteristic there is a flat map connecting them; a 'very rough deformation' if you will.
No, for example, a $K3$ surface and an elliptic surface have the same Euler characteristic $\chi(\mathcal{O})$ and dimension, but they can't fit in a flat family. (For smooth varieties to fit in a flat family, they are deformation equivalent, so they have to be diffeomorphic.) For a given projective family $X\to T$, it is flat if and only if the Hilbert polynomials of the fibers are constant. So a refined question to ask is probably Question: If two polarized projective varieties $X,Y$ have the same Hilbert polynomial, is there a flat family connecting $X$ and $Y$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Factoring $\frac{n(n+1)}2x^2-x-2$ for $n\in\mathbb Z$ I was factoring quadratic polynomials for high-school practice and I noticed a pattern: $$\begin{align} x^2-x-2 &=(x+1)(x-2) \\ 3x^2-x-2 &=(x-1)(3x-2) \\ 6x^2-x-2 &= (2x+1)(3x-2) \\ 10x^2-x-2 &= (2x-1)(5x+2) \\ 15x^2-x-2 &= (3x+1)(5x-2) \\ 21x^2-x-2 &= (3x-1)(7x+2) \\ &\vdots\end{align}$$ So it seemed that, for any integer $n$, we have: $$\frac{n(n+1)}2x^2-x-2=\Big(\left\lceil{\frac n2}\right\rceil x+(-1)^{n+1}\Big)\Big(\left\lceil{n+\frac{(-1)^n}2}\right\rceil x + 2(-1)^n\Big).$$ where the pattern I showed above begins with $n=1$ and ends with $n=6$. I am not sure how to prove this (assuming it is true). I know that $-2=(-1)^{n+1}\cdot 2(-1)^n$ but I don't know how to prove that: $$\left\lceil{\frac n2}\right\rceil\cdot \left\lceil{n+\frac{(-1)^n}2}\right\rceil=\frac{n(n+1)}2\tag{$\star$}$$ and $$2(-1)^n\left\lceil{\frac n2}\right\rceil+(-1)^{n+1}\left\lceil{n+\frac{(-1)^n}2}\right\rceil=-1$$ which I believe is necessary to prove this conjecture. Since $n$ is an integer, I was thinking of letting $n=\left\lceil\frac k2\right\rceil$ for any real $k$ or something, and I'd assume that $$\left\lceil{\frac n2}\right\rceil=\left\lceil{\frac{\left\lceil{\frac k2}\right\rceil}2}\right\rceil=\left\lceil{\frac k4}\right\rceil$$ but I'm not sure how "round-off arithmetic" works (informally speaking). Any help is appreciated. Edit: Thanks to @lone_student's comment, I have shown $(\star)$ to be true for all $n\in\mathbb Z$ by considering $n$ even and odd. Lemma:$$\left\lceil {\frac nm}\right\rceil=\left\lfloor{\frac{n-1}m+1}\right\rfloor\tag1$$ Here, $m=2$. 
Also: $$n-\left\lfloor\frac n2\right\rfloor=\left\lceil\frac n2\right\rceil\tag2$$ Using these, we can show that: $$\frac{n(n+1)}2=\left\lceil\frac n2\right\rceil\cdot\left\lceil{n+\frac{(-1)^n}2}\right\rceil=\underbrace{\Big(n-\left\lfloor\frac n2\right\rfloor\Big)}_{\text{By } (2)}\cdot\underbrace{\left\lfloor\frac{2n+(-1)^n-1}2+1\right\rfloor}_{\text{By (1)}}$$ When $n=2k\in\mathbb Z$ we have $$k(2k+1)=\big(2k-\left\lfloor k\right\rfloor\big)\lfloor2k+1\rfloor=k(2k+1)$$ since $k$ and $2k+1$ are integers, and, by definition, $\lceil \alpha\rceil =\lfloor \alpha\rfloor = \alpha$ iff $\alpha\in\mathbb Z$. Similarly, when $n=2k-1$, we have: $$k(2k-1)=\Big(2k-1-\left\lfloor k-\frac 12\right\rfloor\Big)\lfloor 2k-1\rfloor = k(2k-1)$$ since obviously $\left\lfloor k-\frac 12\right\rfloor = k-1$ for $k\in\mathbb Z$. I believe there was some confusion towards my question: did I mean to factorise the quadratic in terms of $n$ or did I mean to prove specifically the ceiling-function product identity? I did intend to ask a question on the latter subject, but I had falsely assumed that the case-by-case polynomial pattern I showed above could only be represented through the ceiling functions. This was wrong.
Alternative factoring method: When $B^2 - 4AC \geq 0,$ with $A > 0$, then $Ax^2 + Bx + C$ factors into $A\left(x^2 + \frac{B}{A}x + \frac{C}{A}\right)$ $= A \left[\left(x + \frac{B}{2A}\right)^2 - \frac{B^2}{4A^2} + \frac{C}{A}\right]$. $= A \left[\left(x + \frac{B}{2A}\right)^2 - \frac{B^2 - 4AC}{4A^2}\right]$ $= A \left[\left(x + \frac{B}{2A}\right)^2 - \left(\frac{\sqrt{B^2 - 4AC}}{2A}\right)^2\right]$ Under the assumptions, this factors into the difference of two squares as $$= A\left[\left(x + \frac{B}{2A} + \frac{\sqrt{B^2 - 4AC}}{2A}\right) \times \left(x + \frac{B}{2A} - \frac{\sqrt{B^2 - 4AC}}{2A}\right)\right].\tag1$$ With the posted problem, you have that $A = \frac{n(n+1)}{2}$ $B = -1$ $C = -2$. From this you can immediately conclude (since $A>0, C<0$) that $B^2 - 4AC > 0$. Therefore, the formula in (1) above applies. $\frac{B}{2A} = \frac{-1}{n(n+1)}.$ $\frac{\sqrt{B^2 - 4AC}}{2A} = \frac{\sqrt{1 + 4n(n+1)}}{n(n+1)} = \frac{(2n+1)}{n(n+1)}.$ Therefore, $Ax^2 + Bx + C$ factors into $\frac{n(n+1)}{2}\left[\left(x + \frac{-1}{n(n+1)} + \frac{(2n+1)}{n(n+1)}\right) \times \left(x + \frac{-1}{n(n+1)} - \frac{(2n+1)}{n(n+1)}\right)\right].$ This simplifies into $$\frac{1}{2n(n+1)} ~\left\{~ \left[~n(n+1)x + 2n\right] ~\times ~\left[~n(n+1)x - 2(n+1)~\right]~ \right\}.$$ This further simplifies to $$\frac{1}{2} \times \left[(n+1)x + 2\right] \times \left[nx - 2\right].\tag2$$ In (2) above, either $(n+1)$ or $n$ will be even, thus allowing the factor of $\frac{1}{2}$ to be cleared. Therefore, you can forgo any consideration of the floor or ceiling functions, and simply divide the formula mentioned in (2) above into two cases: either $n$ is odd or $n$ is even. Finally, while I showed the derivation of the formula in (2) above, my answer could have been significantly shorter, if I had instead provided a (sanity-checking) verification of the formula in (2) above. 
That is, if you manually multiply the factors in (2) above, the product will be $Ax^2 + Bx + C$, where $A,B,C$ are as specified in your original question.
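Such a sanity check is easy to script; here is a minimal sketch (my own, using exact rational arithmetic) that expands $\frac12\left[(n+1)x+2\right]\left[nx-2\right]$ coefficient by coefficient and compares with $\frac{n(n+1)}{2}x^2 - x - 2$ for many $n$:

```python
from fractions import Fraction

def factorization_matches(n):
    # Expand (1/2) * ((n+1)x + 2) * (nx - 2): collect coefficients of x^2, x, 1.
    half = Fraction(1, 2)
    a2 = half * (n + 1) * n               # x^2 coefficient
    a1 = half * ((n + 1) * (-2) + 2 * n)  # x coefficient (cross terms)
    a0 = half * 2 * (-2)                  # constant term
    return (a2, a1, a0) == (Fraction(n * (n + 1), 2), -1, -2)

print(all(factorization_matches(n) for n in range(1, 100)))  # True
```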
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does this equivalence of sums hold? I have to prove that a function is a pdf. In the master solution they state the below equivalence in the proof. What rules are applied to get from the left side to the right side? $$C\sum_{k=2}^{\infty} \sum_{j=1}^{k-1}\left(\frac{1}{2}\right)^{k}=C \sum_{j=1}^{\infty} \sum_{k=j+1}^{\infty}\left(\frac{1}{2}\right)^{k}$$
The equivalence can be shown by double counting: both sides add up the terms $\left(\frac{1}{2}\right)^{k}$ over the same triangular index set $\{(j,k): 1\leq j\leq k-1\}$, just in a different order. Indeed * *$\sum_{k=2}^{\infty} \sum_{j=1}^{k-1}\left(\frac{1}{2}\right)^{k}$ is the sum column by column (fix $k$, then sum over $j$) *$ \sum_{j=1}^{\infty} \sum_{k=j+1}^{\infty}\left(\frac{1}{2}\right)^{k}$ is the sum row by row (fix $j$, then sum over $k$)
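A quick numerical check (my own sketch, using exact rational arithmetic so the two summation orders can be compared for strict equality) confirms that both orders give the same value over the finite triangle $1\le j<k\le N$:

```python
from fractions import Fraction

def sum_by_columns(N):
    # fix k, then sum over j = 1..k-1
    return sum(Fraction(1, 2) ** k for k in range(2, N + 1) for j in range(1, k))

def sum_by_rows(N):
    # fix j, then sum over k = j+1..N
    return sum(Fraction(1, 2) ** k for j in range(1, N) for k in range(j + 1, N + 1))

print(sum_by_columns(40) == sum_by_rows(40))  # True: same triangle, same sum
print(float(sum_by_columns(40)))              # ≈ 1.0, the common value (taking C = 1)
```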
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Calculate maximum and minimum when the second partial derivative test fails Calculate the maximum and minimum of the function $z = f(x,y) = x^2 - y^2 +2$ subject to the inequality constraint $D=\{(x,y)| x^2 + \frac{y^2}{4} \leq 1\}$. My solution: First form the function $$ g(x,y) = x^2 + \frac{y^2}{4} - c, ~0 \leq c \leq 1. $$ Then form the Lagrangian function $$ L(x,y,\lambda) = x^2 - y^2 + 2 + \lambda\left(x^2 + \frac{y^2}{4} - c\right). $$ Therefore we have $$ \left \{ \begin{array}{ll} L_x' = 2x +2\lambda x = 0\\ L_y' = -2y + \frac{\lambda}{2}y = 0 \\ L_\lambda' = x^2 + \frac{y^2}{4} - c \end{array} \right. $$ After solving the above equations, we get critical points of the form $(\varphi(c),\psi(c))$. * *when $c = 0$, we have $x = y = 0$. *when $c \neq 0$, there are two kinds of solutions: (1) $x = 0$, which gives $y = \pm 2\sqrt{c}$; (2) $y=0$, which gives $x = \pm \sqrt{c}$. My problem is that I can't use the second partial derivative test to decide whether $(0,\pm 2\sqrt{c})$ and $(\pm \sqrt{c},0)$ are maxima or minima, since obviously $AC-B^2 = 0$. What should I do next? Thanks in advance!
Since the maximum and the minimum can be attained only on the boundary (the only interior critical point of $f$ is the saddle at $(0,0)$), we can proceed by direct substitution of the constraint (the ellipse, where $x^2 = 1-\frac{y^2}{4}$), that is $$h(y) = 1-\frac{y^2}4 - y^2 +2=3-\frac 54 y^2$$ therefore we have * *maximum $f(x,y)=3$ at $y=0$, $x=\pm 1$ *minimum $f(x,y)=-2$ at $y=\pm 2$, $x=0$
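As a sanity check (my own sketch), sampling $f$ on the ellipse boundary via the parametrization $x=\cos t$, $y=2\sin t$ confirms that the extreme values are $3$ and $-2$:

```python
import math

def f(x, y):
    return x * x - y * y + 2

# boundary of D: x^2 + y^2/4 = 1, parametrized by x = cos t, y = 2 sin t
vals = [f(math.cos(t), 2 * math.sin(t))
        for t in (2 * math.pi * i / 100_000 for i in range(100_000))]
print(round(max(vals), 6), round(min(vals), 6))  # 3.0 -2.0
```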
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If a function is periodic with period $T$, then is $\int_0^{nT} f(x)dx=\int_0^{nT} f(-x)dx$? So we need to prove that if $f(x)$ is periodic with period $T$ then this holds true: $$\int_0^{nT} f(x)dx=\int_0^{nT} f(-x)dx$$ First I tried to prove that if $f(x)$ is periodic with period $T$ then $f(x)=f(-x)$. Then I realized that it is definitely not true: we need some more information to prove $f(x)=f(-x)$. Take the example of $\sin x$: it is periodic with period $2\pi$ but $\sin(-x)=-\sin x$. So the claim must be limited to the definite integral only. I tried to put $x=-y$ but it failed. Any elegant proofs? (Although I am not really sure whether this property holds or not, I just observed it. It would be great if someone can prove it because I have a really strong feeling it's true.)
I think I proved it myself, using the "$a-x$" property, which states $\int_0^a f(x)dx=\int_0^a f(a-x)dx$: $$\int_0^{nT} f(x)dx =\int_0^{nT} f(nT-x)dx$$ We know from the properties of periodic functions that if $f$ is periodic with period $T$ then $f(nT+x)=f(x)$, where $n$ is an integer. Replacing $x$ with $-x$ we have $f(-x+nT)=f(-x)$. Hence $$\int_0^{nT} f(x)dx =\int_0^{nT} f(nT-x)dx=\int_0^{nT} f(-x)dx $$ Hence proved.
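A numerical illustration (my own sketch; the trapezoid helper and the sample $2\pi$-periodic, non-even test function are my choices) of the identity over $n=3$ periods:

```python
import math

def trapezoid(g, a, b, steps=100_000):
    h = (b - a) / steps
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, steps)))

def f(x):
    # period 2*pi, and f(-x) != f(x) pointwise because of the sin term
    return math.sin(x) + 0.5 * math.cos(2 * x) + 1.0

T = 2 * math.pi
n = 3
I1 = trapezoid(f, 0, n * T)
I2 = trapezoid(lambda x: f(-x), 0, n * T)
print(abs(I1 - I2) < 1e-8)  # True: both equal 3 * 2*pi (the mean value 1 over three periods)
```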
{ "language": "en", "url": "https://math.stackexchange.com/questions/4232948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
making sense of a very basic inequality proof in an intro analysis textbook This is an excerpt from an Analysis textbook by Joseph Taylor. proof from the book The author gets to line (2.1.1) and writes "If we interchange ...... also holds." I don't understand this step in the proof. Why is the author adding an additional $|b|$ to both sides? Can you explain this section of the proof? Why can't the author just write something like: $|a|-|b| ≤ |a − b|$ holds for all real $a,b$ and $|a-b|=|b-a|$, therefore $|b|-|a| ≤ |a − b|$. Multiplying by $-1$ we get: $-|a − b| ≤ |a|-|b|$. Finally, $-|a − b| ≤ |a|-|b| ≤ |a − b|$ and therefore $||a|-|b|| ≤ |a − b|$. Note that the very last step is already proven in the text. Can someone help me make sense of the author's argument?
Since $|a-b|\geq|a|-|b|$, we also obtain $|b-a|\geq|b|-|a|$, which says $$|a-b|\geq|a|-|b|$$ and $$|a-b|\geq-(|a|-|b|),$$ which gives $$|a-b|\geq||a|-|b||$$ We used the following: if $x\geq y$ and $x\geq-y$, then $x\geq|y|.$ This statement is what you proved in your post, because it is equivalent to $-x\leq y\leq x$. Also, $$|a-b|+|b|\geq|b|-|a|$$ is a typo. It should be $$|a-b|\geq|b|-|a|.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is every 3D Lie group with bi-invariant metric a space form? I seem to remember to have read somewhere that, "in dimension three, every bi-invariant metric on a Lie group has constant curvature". However, I am not able to find a confirmation of this in any reference I know. Can anybody please confirm/disprove such statement?
The answer can be found in Milnor's paper Curvatures of Left Invariant Metrics on Lie Groups. A connected Lie group $G$ admits a bi-invariant metric if and only if it is isomorphic to the Cartesian product of a compact group and a vector space $\Bbb R^m$. In this case, it is a symmetric space. In dimension $3$, there are not so many possibilities and we can check this directly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\int\frac{dx}{\sqrt{\alpha+\beta x+\gamma x^2}}=\frac{1}{\sqrt{-\gamma}}\arccos\bigl(-\frac{\beta+2\gamma x}{\sqrt{q}}\bigr)$? I'm studying Kepler's laws from Classical Mechanics, 2nd ed. Goldstein. In page 95 there is given an indefinite integral $$\int\frac{dx}{\sqrt{\alpha+\beta x+\gamma x^2}}=\frac{1}{\sqrt{-\gamma}}\arccos\biggl(-\frac{\beta+2\gamma x}{\sqrt{q}}\biggr).$$ However, when I took a look the source given in a book (A Short Table of Integrals), there is the result $$\int\frac{dx}{\sqrt{\alpha+\beta x+\gamma x^2}}=-\frac{1}{\sqrt{-\gamma}}\arcsin\biggl(\frac{\beta+2\gamma x}{\sqrt{q}}\biggr).$$ Then, I tried relations of $\arcsin$ and $\arccos$, so $\arcsin(x)=\frac{\pi}{2}-\arccos(x)$ and also negative argument $-\arcsin(x)=\arcsin(-x)$, but just ended up to result $$-\frac{1}{\sqrt{-\gamma}}\arcsin\biggl(\frac{\beta+2\gamma x}{\sqrt{q}}\biggr)=-\frac{1}{\sqrt{-\gamma}}\arccos\biggl(-\frac{\beta+2\gamma x}{\sqrt{q}}\biggr)+\frac{\pi}{2\sqrt{-\gamma}}.$$ So is there something I don't see or understand, or is there just a misprint in the book? For the clarification, the result is used to solve this equation: $$\varphi=\varphi_{0}-\int\frac{du}{ \sqrt{\frac{2mE}{l^2}+\frac{2mku}{l^2}-u^2}}$$ and the book ends up to result $$\varphi=\varphi'-\arccos\left(\frac{\frac{l^2u}{mk}-1}{\sqrt{1+\frac{2El^2}{mk^2}}}\right)$$ and by solving the $u=1/r$ we got final result: $$\frac{1}{r}=\frac{mk}{l^2}\left(1+\sqrt{1+\frac{2El^2}{mk^2}}\cdot \cos(\varphi-\varphi')\right)$$
If $\gamma<0$, the integral is equal to: $$-\frac{\tan^{-1}\Big(\frac{2\gamma x+\beta}{2\sqrt{-\gamma}\,\sqrt{\gamma x^{2}+\beta x+\alpha}}\Big)}{\sqrt{-\gamma}}$$
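One way to gain confidence in this antiderivative (a sketch; the sample coefficients $\alpha,\beta,\gamma$ and the evaluation point are my choices) is to differentiate it numerically and compare with the integrand $\frac{1}{\sqrt{\alpha+\beta x+\gamma x^2}}$:

```python
import math

alpha, beta, gamma = 5.0, 1.0, -2.0           # sample values with gamma < 0

def Q(x):
    return alpha + beta * x + gamma * x * x

def F(x):
    # the proposed antiderivative
    s = math.sqrt(-gamma)
    return -math.atan((2 * gamma * x + beta) / (2 * s * math.sqrt(Q(x)))) / s

x, h = 0.3, 1e-6
numerical = (F(x + h) - F(x - h)) / (2 * h)   # central difference
print(abs(numerical - 1 / math.sqrt(Q(x))) < 1e-6)  # True
```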
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find all analytic functions in the disk $|z-2|<2$ such that $f \left(2+\frac{1}{n} \right) = \frac{1-n}{5n+3} + \sin{\left(\frac{n \pi}{2} \right)}$ Find all analytic functions (or prove that no such exist) inside the disk $|z-2|<2$ that satisfy the following condition: $$f \left(2+\frac{1}{n} \right) = \frac{1-n}{5n+3} + \sin{\left(\frac{n \pi}{2} \right)}, \ n \in \mathbb{N}$$ For $n=2k$ the expression simplifies quite a bit since $\sin{\left(\frac{n \pi}{2} \right)}= \sin{\left(k \pi \right)} = 0$ so we're left with $$f \left(\frac{4k+1}{2k}\right) = \frac{1-2k}{10k+3}$$ Similarly for $n=4k-3$ and $n=4n-1$ we can simplify the sinus expression to $1$ and $-1$. So we're getting $3$ different expressions for $f\left(2+\frac{1}{n}\right)$ based on whether $n=2k$ , $n=4k-3$ or $n=4k-1$. My idea is to use the identity theorem for any of these two expressions and prove that no such function exists but I can't quite tie it all together.
Your idea is correct, there is also a simpler one: assume that such a function exists. Then using continuity of $f$ we get $$f(2) = \lim_{n \to \infty} f \left( 2+\frac{1}{n} \right) = \lim_{n \to \infty} \left( \frac{1-n}{5n+3} + \sin \frac{n \pi}{2} \right).$$ But the limit on the right clearly does not exist, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Do polynomials have a limit ratio of 1 over fixed intervals? If $p$ is an increasing polynomial function, and $c>0$ is a constant, do we always have: $$ \underset{x \rightarrow \infty}{\lim} \dfrac{p(xc+c)}{p(xc)} = 1?$$ I think if I am not missing anything, as the coefficient and degree of the leading terms will be equal in the numerator and denominator, by L'Hôspital's rule the limit should be equal to $1$. I am wondering if there is an easier proof for this fact, or anything that I am missing.
It suffices to show that for any polynomial $p$ of degree $n$ with leading coefficient $a_n \neq 0$ we have $$\lim_{x\to \infty}\frac{p(x)}{x^n}=a_n$$ Then, writing the ratio as a product of four factors whose limits are $a_n$, $1/a_n$, $c^n$ and $1/c^n$ respectively, $$\lim_{x\to \infty}\frac{p(xc+c)}{p(xc)}=\lim_{x\to \infty}\frac{p(xc+c)}{(xc+c)^n}\frac{(xc)^n}{p(xc)}\frac{(xc+c)^n}{x^n}\frac{x^n}{(xc)^n}=\frac{a_nc^n}{a_nc^n}=1$$
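Numerically (a sketch with an arbitrary sample polynomial and constant of my choosing), the ratio visibly approaches $1$ as $x$ grows:

```python
def p(x):
    return 3 * x**4 + 2 * x**3 - x + 7   # any fixed polynomial; degree 4, leading coefficient 3

c = 2.5
ratios = [p(x * c + c) / p(x * c) for x in (1e1, 1e2, 1e3, 1e4)]
print(ratios[-1])  # close to 1 (roughly 1 + 4/x for large x here)
```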
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Left annihilator of $2\times2$ matrices. If we let $R$ be the ring of $2\times2$ complex matrices. When is the left annihilator just equal to $\{0 \} $? I see that if $A$ is invertible $\text{Ann}_{R} (A)$ is trivial since if $M \in \text{Ann}_R (A)$ then $MA =0 $ so we can just multiply on the right by $A^{-1} $ and so $M=0$. But what about when $A$ is not invertible. Essentially I’m looking for zero divisors of the ring of $2\times 2$ matrices.
Every non-invertible matrix has a matrix which annihilates it. In general it can be constructed with the pseudo-inverse. In the $2\times2$ case the non-invertible $$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$ is annihilated on the left by its adjugate (classical adjoint) $$\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$ since their product is $(ad-bc)I$, which vanishes exactly when the determinant is $0$.
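A tiny pure-Python check of this annihilation for a sample singular matrix (the helpers and the example matrix are mine):

```python
def matmul2(M, A):
    # product of two 2x2 matrices given as nested lists
    return [[M[0][0] * A[0][0] + M[0][1] * A[1][0], M[0][0] * A[0][1] + M[0][1] * A[1][1]],
            [M[1][0] * A[0][0] + M[1][1] * A[1][0], M[1][0] * A[0][1] + M[1][1] * A[1][1]]]

def adjugate(A):
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

A = [[2, 4], [1, 2]]            # det = 2*2 - 4*1 = 0, so A is not invertible
print(matmul2(adjugate(A), A))  # [[0, 0], [0, 0]]
```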
{ "language": "en", "url": "https://math.stackexchange.com/questions/4233880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Understanding $S^{2}\times S^{1}$ as a reduction of $SU(2)\times U(1)$ by $U(1)$ I'm trying to understand the topology of $S^{2}\times S^{1}$ as a result of reduction of $G=SU(2)\times U(1)$ by an $H=U(1)$ subgroup, in other words as: $\frac{SU(2)\times U(1)}{U(1)}$. I know elements of $G$ can be written as $\left(U,z\right)$ with $U$ an element of $SU(2)$ and $z$ a unit complex number. If we take $a,b\in\mathbb{Z}$ then we have a homomorphism $\phi_{a,b}:U(1)\rightarrow G$ of the form: $$\phi_{a,b}\left(z\right)=\left(\mathrm{diag}\left(z^{b},z^{-b}\right),z^{a}\right)$$ which is injective when $a,b$ are coprime, and then the image $H_{a,b}\leq G$. (Any injective homomorphism $\varphi:U(1)\rightarrow G$ is conjugate to some $\varphi_{p,q}$, as its projection to the $SU(2)$-factor may be conjugated to land in the standard maximal torus.) If I'm understanding this correctly, for $a=1$ we simply get $S^{3}$ as our $G/H_{a,b}$ topology (as in electroweak symmetry breaking in the standard model of particle physics). If $b=1$, we get a class of lens spaces (one for each $a$, the three-sphere here being counted as a degenerate lens space). I'm thinking $S^{2}\times S^{1}$ corresponds to $b=0$. How do I then understand what the different $a$ correspond to (especially in light of $\pi_{1}\left(G/H_{a,b}\right)=\mathbb{Z}/a$ for $a\neq1$)? Note I'm following a paper. I'm ultimately interested in the connected sum:$$\#_{k}S^{2}\times S^{1}=\#_{k}\frac{SU(2)\times U(1)}{U(1)}$$ and how I would proceed with a similar breakdown, hence my wanting to understand what's going on in the trivial case first.
There are some basic isomorphisms that make this much easier to see. First $U(1) \cong S^1$, which you can see be writing out the definition of $U(1)$. Second, $SU(2) \cong S^3$, the $3$-sphere, see wikipedia. Finally there is a standard action of $S^1$ on $S^3$ with quotient $S^2$, called the Hopf fibration. If you represent the $3$-sphere as $SU(2)$ this action is just complex multiplication. So topologically we have $SU(2) \times U(1) / U(1) \cong S^2 \times S^1$ if the $U(1)$ acts by the Hopf fibration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is too complicated about $y=x^{xy}$? While playing with Desmos today, I typed the equation $y=x^{xy}$ and the graph came out to be I clicked the Learn More option given near my equation and Desmos said: Sometimes the calculator detects that an equation is too complicated to plot perfectly in a reasonable amount of time. When this happens, the equation is plotted at lower resolution. What is the complication?
We have $$\begin{align}&y=(x^x)^y\\ \implies &y^{\frac 1y}=x^x\\ \implies &y^{-1}\ln y=x\ln x\\ \implies &-y^{-1}\ln y^{-1}=x\ln x\\ \implies &y^{-1}\ln y^{-1}=-x\ln x\\ \implies &\ln y^{-1} e^{\ln {y^{-1}}}=-x\ln x\\ \implies &W\left(\ln y^{-1} e^{\ln {y^{-1}}}\right)=W\left(-x\ln x\right)\\ \implies &\ln y^{-1}=W\left(-x\ln x \right)\\ \implies &y^{-1}=e^{W\left(-x\ln x\right)}\\ \implies &y=e^{-W\left(-x\ln x\right)}\end{align}$$ This implies that your function is non-elementary and can be written as $$f(x)=e^{-W\left(-x\ln x\right)},$$ where $W$ is the Lambert $W$ function. Then, note that $W(x)$ is real only for $x≥-\frac 1e$. This means we need $$\begin{align}-x&\ln x≥-\frac 1e,\thinspace x>0\\ \implies &x\ln x≤\frac 1e \\ \implies &\ln x\, e^{\ln x}≤\frac 1e\\ \implies &W\left(\ln x\,e^{\ln x}\right)≤W\left(\frac 1e\right)\\ \implies &\ln x≤W\left(\frac 1e\right)\\ \implies &0<x≤e^{W\left(1/e\right)}≈1.3211\end{align}$$
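To illustrate (a sketch; the Newton-iteration implementation of the principal branch of $W$ is my own, not part of the answer), one can check numerically that $f(x)=e^{-W(-x\ln x)}$ really satisfies $y=x^{xy}$:

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal branch of W (solves w * e^w = z by Newton); needs z >= -1/e."""
    w = 0.0 if z < 1 else math.log(z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def f(x):
    return math.exp(-lambert_w(-x * math.log(x)))

for x in (0.5, 0.9, 1.2):
    y = f(x)
    assert abs(y - x ** (x * y)) < 1e-9  # y solves y = x^{xy}
```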
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why does $\frac{1}{\Delta x} \int^{x+\Delta x}_x f(u) du$ tend to $f(x)$ as $\Delta x \to 0$? (From proof of Fundamental Theorem of Calculus) I was reading a proof of the fundamental theorem of calculus in my textbook and one of the lines states that $$\lim_{\Delta x \to 0} \frac{1}{\Delta x} \int^{x+\Delta x}_x f(u) du = f(x)$$ but it didn't give any explanation for this. The section on limits comes later in the book, so I assume this can be understood with only basic knowledge of limits. For context, here is the proof up to this step: $$ \begin{align} F(x) &= \int^x_a f(u) du \\ F(x+\Delta x) &= \int^x_a f(u) du + \int^{x+\Delta x}_x f(u) du\\ &= F(x) + \int^{x + \Delta x}_x f(u) du \\ \frac{F(x+\Delta x)-F(x)}{\Delta x} &= \frac{1}{\Delta x} \int^{x+\Delta x}_x f(u) du\\ \Delta x &\to 0 \\ &\therefore \\ \frac{dF}{dx} &= f(x) \end{align}$$
Here is another way to think about it: Let $M_{\Delta x}$ and $m_{\Delta x}$ be the supremum and infimum of $f$ over the interval $[x,x + \Delta x]$. We have $$ m_{\Delta x} \cdot \Delta x \leq \int_{x}^{x + \Delta x} f(u)\,du \leq M_{\Delta x} \cdot \Delta x $$ So $$ m_{\Delta x} \leq \frac{1}{\Delta x} \int_{x}^{x + \Delta x} f(u)\,du \leq M_{\Delta x} $$ Since $f$ is continuous at $x$, $\lim_{\Delta x \to 0} M_{\Delta x} = f(x)$ and similarly for $m_{\Delta x}$. So by the squeeze theorem, the term in the middle tends to $f(x)$ too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to solve a Karush-Kuhn-Tucker example This is a problem example taken from professor Robert Israel: $$\max f(x,y)=xy \quad \text{subject to }\quad x+y^2\leq2, \quad x,y\geq0 \quad \quad (1)$$ The solution begins by writing the KKT conditions for this problem, and then one reaches the conclusion that the global optimum is $(x^*,y^*)=(4/3,\sqrt{2/3})$. However, the linear independence constraint qualification (LICQ) fails everywhere, so in principle the KKT approach cannot be used directly. I have seen multiple examples solved like this, and I don't understand why this is legitimate. It seems to me that the correct approach would be to reason as follows: Suppose $(x^*,y^*)$ is a constrained local maximum for the problem. Then we see that neither of the nonnegativity constraints can bind at $(x^*,y^*)$, because otherwise every neighborhood of $(x^*,y^*)$ would contain a feasible point $(x,y)$ with $f(x,y)>0=f(x^*,y^*)$. Therefore $(x^*,y^*)$ is a constrained local maximum for the problem $$\max f(x,y)=xy \quad \text{subject to }\quad x+y^2\leq2 \quad \quad (2)$$ which satisfies the LICQ condition everywhere. Using the KKT conditions gives $(x^*,y^*)=(4/3,\sqrt{2/3})$. For sufficiency, we note that $f$ is continuous and that the feasible set in $(1)$ is compact, so $f$ attains a global maximum at some feasible $(x^*,y^*)$. Combined with the previous necessity argument, we conclude that $(x^*,y^*)=(4/3,\sqrt{2/3})$ is the unique local and global maximum for the problem. Am I missing something? Thanks a lot for your help.
I will confess that I don't know this "KKT" method! I have to solve it with basic "Calculus methods". We want to maximize $f(x,y)= xy$ subject to the conditions $x+ y^2\le 2$ and $x, y\ge 0$. The graph of $x= 2- y^2$ is a parabola with vertex $(2, 0)$ that crosses the y-axis at $(0, \sqrt{2})$ and $(0, -\sqrt{2})$. $\nabla f= y\vec{i}+ x\vec{j}= 0$ at $(0, 0)$. The maximum of $f$ occurs either at $(0, 0)$ (where $f(0,0)= 0$), on the coordinate axes (where $f= 0$), or on that parabola. On the parabola $x= 2- y^2$, $f(x,y)= f(2- y^2, y)= (2- y^2)y= 2y- y^3$. Setting $F(y)= 2y- y^3$, we get $F'(y)= 2- 3y^2= 0$, so $y^2= 2/3$, i.e. $y= \sqrt{2/3}$ (the root $y= -\sqrt{2/3}$ is excluded by $y\ge 0$). If $y= \sqrt{2/3}$, then $x= 2- 2/3= 4/3$ and $f(4/3, \sqrt{2/3})= \frac43\sqrt{\frac23}= \frac{4\sqrt{6}}{9}\approx 1.089$. Of the candidate values $0$ and $\frac{4\sqrt{6}}{9}$, the largest is $\frac{4\sqrt{6}}{9}$, which occurs at $\left(\frac43, \sqrt{\frac23}\right)$.
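A brute-force check along the active boundary (my own sketch, grid resolution my choice) agrees with the optimum $(x^*,y^*)=(4/3,\sqrt{2/3})$ stated in the question:

```python
import math

# On the active constraint x = 2 - y^2 (with x, y >= 0), maximize F(y) = (2 - y^2) * y
ys = [i * math.sqrt(2) / 100_000 for i in range(100_001)]
best_y = max(ys, key=lambda y: (2 - y * y) * y)
best_val = (2 - best_y * best_y) * best_y

print(abs(best_y - math.sqrt(2 / 3)) < 1e-4)        # True: y* = sqrt(2/3)
print(abs(best_val - 4 * math.sqrt(6) / 9) < 1e-9)  # True: f* = 4*sqrt(6)/9 ≈ 1.089
```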
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sawtooth-like polygonal chains in $\mathbb R^2$ must self-intersect. We consider closed polygonal chains in the 2-dimensional plane with an even number of sides, say $2n$, numbered as $A_1B_1A_2B_2\dots A_nB_nE$, where $E = A_1$. We require additionally, that each $B_i$ lies right below $A_i$, like with sawtooth. Edit: Since right below seems ambiguous, let's state this line explicitly as that the $y$-coordinate of each $B_i$ be strictly less than that of $A_i$. Is it true that any such polygonal chain must self-intersect? In the 3-dimensional case, there is the obvious counterexample of traversing the vertices of any prism in the obvious order. I have however yet to find a proof or counterexample in the 2-dimensional case.
Yes. For simplicity let us assume that "polygon" means that consecutive edges cannot be parallel (no 180-degree angles at vertices). In particular this implies that the outgoing edges from $B_i$ are not vertical. If the polygon does not intersect itself, it is oriented either clockwise or counterclockwise. If clockwise, consider any leftmost vertex $v$, and the vertical line $L$ passing through it. No vertex of the polygon lies to the left of $L$. Because we do not allow 180-degree angles, and because of the sawtooth condition, exactly one of the edges incident at $v$ will be vertical, and it must be oriented up because the polygon is oriented clockwise. This is impossible, because all vertical edges must be oriented down by the sawtooth condition. If the polygon is oriented counterclockwise, repeat the same reasoning at any rightmost vertex; because the polygon is oriented counterclockwise, the incident vertical edge must be oriented up, which is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that absolute value of function is less or equal to integral of function squared I was solving some exercises in my calculus textbook when I stumbled upon this one: Given $f:[0,1] \to \Bbb R$ continuously differentiable with $f(0) = 0$, prove that a) $|f(x)| ≤ \int_{0}^{1} |f'(x)|\, dx$ for every $x$ in $[0,1]$. b) $|f(x)| ≤ \int_{0}^{1} |f'(x)|^2\, dx$ for every $x$ in $[0,1]$. I managed to solve (a) but I'm uncertain of how to solve (b) since if $f(x) = x/2$ ($f(0)=0$) then for $x = 1$, $ |1/2| \le \int_{0}^{1} |1/2|^2\, dx =\int_{0}^{1} |1/4|\, dx = 1/4 $ which is incorrect. Is the textbook wrong, or have I made an error somewhere?
For part (b), the question should be corrected as $|f(x)|^2\leq\int_{0}^{1}|f'(x)|^2 dx$, which is a consequence from (a) and Cauchy-Schwarz inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that $\lim_{x\to\infty}x^{2}e^{-x^{8}\sin^{2}(x)}$ does not exist. The following function turns up in quantum mechanics as an example of an element of $\mathscr{L}^{2}(\mathbb{R})$ which does not decay to zero at $\pm\infty$: $$ f(x) = x^{2}e^{-x^{8}\sin^{2}(x)} $$ Intuitively the reason is that $f(x)$ oscillates with a shorter period the larger $x$ becomes and this plays havoc with convergence. I want to show explicitly that this limit does not exist, but it has been a long time since elementary calculus and I haven't been able to get very far. Any help would be appreciated.
Take two subsequences $$x_n = \pi n \implies f(x_n) = x_n^2\to +\infty$$ $$x_n = \pi n + \frac{\pi }{2} \implies f(x_n) = x_n^2 e^{-x_n^8} \to 0 $$ As the subsequential limits are distinct, the limit does not exist.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Prove $\left(\frac{u}{a}\right)^a.\left(\frac{v}{b}\right)^b.\left(\frac{w}{c}\right)^c \le \left(\frac{u+v+w}{a+b+c}\right)^{(a+b+c)} $ Let $u,v,w>0$ and let $a,b,c$ be positive constants. Prove that $\left(\frac{u}{a}\right)^a.\left(\frac{v}{b}\right)^b.\left(\frac{w}{c}\right)^c \le \left(\frac{u+v+w}{a+b+c}\right)^{(a+b+c)} $ First, I proved it with $x+y+z=1$: $x^ay^bz^c\le\left(\frac{a}{a+b+c}\right)^a\left(\frac{b}{a+b+c}\right)^b\left(\frac{c}{a+b+c}\right)^c$ by Lagrange's theorem, and it becomes $\left(\frac{x}{a}\right)^a\left(\frac{y}{b}\right)^b\left(\frac{z}{c}\right)^c\le\left(\frac{1}{a+b+c}\right)^{a+b+c}=\left(\frac{x+y+z}{a+b+c}\right)^{a+b+c}$ So it is true with $x+y+z=1$, but I can't prove it is true for general $x,y,z>0$. Please help me! Thank you.
Since we have positive numbers involved, we can use the weighted AM-GM inequality on $\frac{u}{a},\frac{v}{b},\frac{w}{c}$ with weights $a,b,c$ respectively: $$\displaystyle\bigg({\frac{\sum_ia_ix_i}{\sum_i a_i}}\bigg)^{\sum_i a_i}\geq\prod_i (x_i)^{a_i} $$ Here $x_1=\frac{u}{a}$ and $a_1=a$, $x_2=\frac{v}{b}$ and $a_2=b$, and $x_3=\frac{w}{c}$ and $a_3=c$. Equality holds when $\frac{u}{a}=\frac{v}{b}=\frac{w}{c}$.
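A quick randomized check (my own sketch; seed and sampling ranges are my choices) of both the inequality and its equality case:

```python
import random

def lhs(u, v, w, a, b, c):
    return ((u + v + w) / (a + b + c)) ** (a + b + c)

def rhs(u, v, w, a, b, c):
    return (u / a) ** a * (v / b) ** b * (w / c) ** c

random.seed(0)
for _ in range(1000):
    u, v, w, a, b, c = (random.uniform(0.1, 5.0) for _ in range(6))
    # small multiplicative slack to absorb floating-point rounding
    assert lhs(u, v, w, a, b, c) >= rhs(u, v, w, a, b, c) * (1 - 1e-12)

# equality when u/a = v/b = w/c, e.g. (u, v, w) = (2a, 2b, 2c): both sides are 2^(a+b+c)
a, b, c = 1.5, 2.5, 4.0
print(abs(lhs(2 * a, 2 * b, 2 * c, a, b, c) - rhs(2 * a, 2 * b, 2 * c, a, b, c)) < 1e-9)  # True
```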
{ "language": "en", "url": "https://math.stackexchange.com/questions/4234995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finding the prices of a pen, an eraser and a notebook from the given system of inequalities The sum of the prices of a pen, an eraser and a notebook is $100$ rupees. The price of a notebook is greater than the price of two pens. The price of three pens is greater than the price of four erasers, and the price of three erasers is greater than the price of a notebook. If all of the prices are integers, find each of their prices. I encountered this problem quite a long time ago, and today it came to my notice again. At that time I solved the problem in the following way: Let the prices of a pen, an eraser and a notebook be $p$, $e$ and $n$ respectively. Then we have the following: $$p+e+n=100\\ n>2p\\ 3p>4e\\ 3e>n$$ We have $p>\frac43e$ and $n>2p>\frac83e$. So, $$\frac43e+e+\frac83e<100\\ \implies e<20$$ Similarly, we get $$p\leq 27 \ \ \ \ \text{and}\\ n\leq 56$$ Setting the values of $e$ and $p$ and with a bit of brute force, I got $e=19$, $p=26$ and $n=55$, which I think is the only solution. Is the solution correct? And is there some other way which gives the result directly, i.e. without brute force? Also, as we have inequalities involved in our problem, can we use some known inequalities like AM-GM or Cauchy-Schwarz?
$p+e+n=100\\ n>2p\\ 3p>4e\\ 3e>n$ As you found $n \leq 56, p \leq 27$ Now we write upper bound of $p$ and $e$ in terms of $n$. $p \lt \frac{n}{2}, e \lt \frac{3n}{8}$ So, $\frac{n}{2} + \frac{3n}{8} + n \gt 100 \implies n \gt \frac{160}{3}$ So, $54 \leq n \leq 56$. Also, $3e \gt n \implies e \geq 19$ and we already have $e \lt 20$. So $e = 19$ is the only solution. You can also use $p \gt \frac{4e}{3} \implies 26 \leq p \leq 27$. $p = 27, n = 54$ does not satisfy $n \gt 2p$. So there is only one solution as you wrote.
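The uniqueness claimed above can be confirmed by exhaustive search (a quick sketch, not part of the original argument):

```python
# Enumerate all positive integer triples with p + e + n = 100 and test the
# three strict inequalities from the problem.
solutions = [(p, e, 100 - p - e)
             for p in range(1, 99)
             for e in range(1, 100 - p)       # forces n = 100 - p - e >= 1
             if (100 - p - e) > 2 * p         # n > 2p
             and 3 * p > 4 * e                # 3p > 4e
             and 3 * e > (100 - p - e)]       # 3e > n
print(solutions)  # [(26, 19, 55)]
```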
{ "language": "en", "url": "https://math.stackexchange.com/questions/4235146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Can the cardinality of a set be a fraction? Let $A$ and $B$ be two sets and define $n(S)$ to be the cardinality of a set $S$. If $n(A-B)=18, n(A∪B)=70$, and $n(A∩B)=25$, find $n(B)$. I started by subtracting $n(A∩B)=25$ from $n(A∪B)=70$, so $70-25=45$; then I subtracted $n(A-B)$ from that, so $45-18=27$; then to get $n(B)$ I computed $$ \frac{27}{2}=13.5 $$ but it's a fraction, so I'm confused.
No, the cardinality of a finite set must be a nonnegative integer. As to what you are doing, one error is the division by $2$; there is simply no reason to do that, because you are not double counting anything. When you subtracted $n(A\cap B)$ from $n(A\cup B)$, you obtained the number of elements that are in $A$ but not $B$, or in $B$ but not $A$. If you then subtract the number of elements that are in $A$ but not $B$, you are simply left with the number of elements in $B$ but not $A$. There is no double counting. If you then want to figure the number of elements in $B$, you want to "add back in" the elements you are missing, which are the elements in both $A$ and $B$, $n(A\cap B)$. A quicker alternative way of computing $n(B)$ is to note that you can take $A\cup B$, elements that are in either $A$ or $B$, and subtract those that are in $A$ but not $B$, namely $n(A\cup B) - n(A-B)$. You should find that both computations give the same answer.
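Concrete sets with the given counts make the two computations explicit (an illustrative sketch; the particular elements are mine):

```python
A_minus_B = set(range(18))        # 18 elements in A but not B
both      = set(range(18, 43))    # 25 elements in A ∩ B
B_minus_A = set(range(43, 70))    # 27 elements fill A ∪ B up to 70
A = A_minus_B | both
B = both | B_minus_A

assert len(A | B) == 70 and len(A & B) == 25 and len(A - B) == 18
print(len(B))                     # 52, via n(A ∩ B) + n(B - A)
print(len(A | B) - len(A - B))    # 52 again, via n(A ∪ B) - n(A - B)
```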
{ "language": "en", "url": "https://math.stackexchange.com/questions/4235308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A question which looks similar to decimal expansion Let $A=\{ \sum_{i=1}^{\infty}\frac{a_i}{5^i} : a_i=0,1,2,3 \text{ or } 4\}$. What is set $A$? The first thing I noticed is that the elements of set $A$ resemble how we write numbers of $[0,1]$ in decimal expansion, since each number in $[0,1]$ can be written as $\sum_{i=1}^{\infty}\frac{a_i}{10^i}$ where $a_i$ can be any of $0,1,2,3,4,5,6,7,8,9$. So I think $A$ will also be $[0,1]$. I'm not really sure about my answer, and if I am correct, how can we rigorously prove that $A=[0,1]$?
You are right. Actually, if $x\in[0,1]$ and if you write $x$ in base $5$, then$$x=0.a_1a_2a_3\ldots;$$besides, $1=0.444444444\ldots$ Note that if $x\in[0,1)$, you can define $a_1=\lfloor5x\rfloor$. Then, $a_2=\lfloor5^2x-5a_1\rfloor$, $a_3=\lfloor5^3x-5^2a_1-5a_2\rfloor$ and so on. Then$$x=\sum_{n=1}^\infty\frac{a_n}{5^n}.$$
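The recursive digit extraction in the answer can be sketched numerically (helper name is mine; floats stand in for exact reals):

```python
def base5_digits(x, k):
    """First k base-5 digits of x in [0, 1), via the floor recursion."""
    digits, y = [], x
    for _ in range(k):
        y *= 5
        d = int(y)        # floor, since y >= 0
        digits.append(d)
        y -= d
    return digits

x = 0.37
digits = base5_digits(x, 25)
assert all(d in (0, 1, 2, 3, 4) for d in digits)

# The partial sums of sum a_n / 5^n converge back to x.
partial = sum(d / 5 ** (i + 1) for i, d in enumerate(digits))
print(abs(partial - x))   # tiny, limited only by floating-point precision
```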
{ "language": "en", "url": "https://math.stackexchange.com/questions/4235775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derangement with extra box I was going through PnC questions and I came across this problem. Four balls numbered $1,2,3,4$ are to be placed into five boxes numbered $1,2,3,4,5$ such that exactly one box remains empty and no ball goes to its own numbered box. The no. of ways is? I worked out the problem in the following way. Case 1: box 5 is not selected and thus total derangements $d_4=9$ Case 2: Any one of 4 boxes say box 1 is not selected $ C(4,1)$ case 2(a) ball 1 goes to box 5 then total derangements $d_3=2$ Case 2(b) ball 1 goes to either of other two boxes say box 2 Case 2(b)(i) ball 2 goes to 5 then $d_2=1$ Case 2(b)(ii) ball 2 doesn't go to box 5 then also $d_2=1$ Thus, total number of ways $=9+C(4,1)\{2+2*2\}=33$ But none of the answer matches. Please help me identify error in the reasoning. I got the error Case 2(b) should be ball 1 goes to either of other three boxes say box 2 and Case 2(b)(ii) ball 2 doesn't go to box 5 then also $2*d_2=1$ So, finally it comes to $=9+C(4,1)\{2+3*(1+2)\}=53$
Your mistake is in the last step. Suppose box $1$ remains empty, and ball $1$ doesn't go into box $5$. We can count these permutations by first deranging $4$ balls, and then transferring whichever ball ends up in box $1$ to box $5$. There are $d_4=9$ such permutations, so the answer is $$9+4(2+9)=53$$ I didn't quite understand how you got $2\cdot2$ rather than $9$, so I can't be more specific about your mistake, sadly. Another way to do the second part is to consider derangements of the $5$ balls, and then to remove ball $5$. This gives the answer $$d_4+d_5=9+44=53$$
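Both counts can be verified by brute force over all placements (a sketch; exactly one empty box is automatic, since the four balls occupy four distinct boxes out of five):

```python
from itertools import permutations

# Choose distinct boxes (1..5) for balls 1..4, forbidding ball i in box i.
count = sum(1 for boxes in permutations(range(1, 6), 4)
            if all(box != ball for ball, box in enumerate(boxes, start=1)))
print(count)          # 53

d4, d5 = 9, 44        # derangement numbers
print(d4 + d5)        # 53, matching the answer's d_4 + d_5
```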
{ "language": "en", "url": "https://math.stackexchange.com/questions/4235909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Solve system for diagonal matrix In trying to write an integral relation in a discrete manner, I got to an equation of the form $$MAM x=b$$ where $A$ is a given symmetric matrix, $b$ and $x$ are given vectors and unknown $M$ is a diagonal matrix. The values of all the matrices and vectors can be complex. How can I solve this for $M$? My attempts I tried to find patterns that would allow for inverting the order in which the matrices are applied but without a solution. Another thing I tried has been to actually write down explicitly the relation in order to get the system of equations. For the first equation, I got something like $$m_{11} \left( a_{11}m_{11}x_1 + a_{12}m_{22}x_2 + \cdots + a_{1j}m_{jj}x_j + \cdots + a_{1n}m_{nn} x_n \right) = b_1$$ where $m_{ii}$ are the diagonal entries of $M$, $a_{ij}$ are entries of $A$ and so on for vectors $x$ and $b$, and so on for the other equations. This system of equations is not linear and I have no idea how to solve a non-linear system of equations. I assume that another way in which the problem can be formulated, although it is not really the same is how can one solve a system of equations like $$Ax = b/x$$ where $b/x$ should be interpreted as a vector defined by the element-wise division of each pair of points $(b/x)_j = b_j/x_j$.
$ \def\o{{\tt1}}\def\p{\partial} \def\L{\left}\def\R{\right} \def\LR#1{\L(#1\R)} \def\BR#1{\Big(#1\Big)} \def\BBR#1{\Bigg(#1\Bigg)} \def\vec#1{\operatorname{vec}\LR{#1}} \def\diag#1{\operatorname{diag}\LR{#1}} \def\Diag#1{\operatorname{Diag}\LR{#1}} \def\trace#1{\operatorname{Tr}\LR{#1}} \def\qiq{\quad\implies\quad} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\dk#1{\LR{#1_k-#1_{k-\o}}} \def\c#1{\color{red}{#1}} $Let's rename the variable $M\to Y,\,$ then the independent variable for this problem is the vector $y$ $$\eqalign{ Y &= \Diag{y} \quad\iff\quad y=\diag{Y} \\ }$$ Define the vector-valued function $$\eqalign{ f(y) &= \BR{YAYx-b} &= \BR{A\odot{yy^T}}x - b \\ }$$ where $(\odot)$ denotes the elementwise/Hadamard product. The current problem can be restated as a system of (mildly) nonlinear equations $${f(y) = 0}$$ The simplest route to a solution is probably the Barzilai-Borwein method, since (in this situation) it does not require any gradient calculations. The basic Barzilai-Borwein method is extremely straightforward. Initialize $$\eqalign{ y_0 &= random \qquad\qquad\qquad\qquad\qquad\quad \\ }$$ First step $$\eqalign{ f_0 &= f(y_0) \\ y_1 &= y_0 - \LR{\frac{0.0\o\,\|y_0\|}{\|f_0\|}}f_0 \qquad\qquad\quad\quad \\ k &= \o \\ }$$ Subsequent steps $$\eqalign{ f_k &= f(y_k) \\ y_{k+\o} &= y_k - \LR{\frac{\dk{y}^T\dk{f}}{\dk{f}^T\dk{f}}}f_k \\ k &= k+\o \\\\ }$$ Stop when the function is nearly zero $\;\|f_k\|\le 10^{-12} $ or when the steplength gets too small $\;\BBR{\frac{\|y_k-y_{k-\o}\|}{\o+\|y_k\|}}\le 10^{-12}$ The introduction of a per-iteration difference operator $$\eqalign{ dy_k &= y_k - y_{k-\o} \\ }$$ allows for a more concise description of the algorithm $$\eqalign{ y_{k+\o} &= y_{k} - \LR{\frac{dy_{k}^Tdf_{k}}{df_{k}^Tdf_{k}}}f_{k}, \qquad f_{k+\o} &= f(y_{k+\o}) \\ }$$ Suppressing the $k$-index makes it look less cluttered $$\boxed{\eqalign{ y_+ &= y - \LR{\frac{dy^Tdf}{df^Tdf}}f, \qquad \qquad f_+ &= f(y_+) \\ }}$$
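A small numerical sketch (my notation, real-valued for simplicity) of the key reformulation $YAYx=(A\odot yy^T)x$, together with the residual function that the Barzilai-Borwein iteration would drive to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric, as in the problem
x = rng.standard_normal(n)
y = rng.standard_normal(n)        # plays the role of diag(M)

Y = np.diag(y)
lhs = Y @ A @ Y @ x               # M A M x with M = Diag(y)
rhs = (A * np.outer(y, y)) @ x    # (A ⊙ y yᵀ) x
assert np.allclose(lhs, rhs)      # the two forms agree

def f(y, A, x, b):
    # residual whose root the Barzilai-Borwein iteration seeks
    return (A * np.outer(y, y)) @ x - b

b = lhs                           # by construction, y is an exact root
assert np.allclose(f(y, A, x, b), 0.0)
print("identity and residual check passed")
```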
{ "language": "en", "url": "https://math.stackexchange.com/questions/4236131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Sum of little o Is the following true? $$o\left(\frac{1}{n^\alpha}\right)+o\left(\frac{1}{n+1}\right)=o\left(\frac{1}{n+1}\right), \text{ for}\, n\to\infty$$ where $\alpha>1$. I think yes, since I know that $o(x^n)+o(x^m)=o(x^p)$, where $p=\min\{n,m\}$, for $x\to 0$. So now I have to consider that my $x$ is $\displaystyle\frac{1}{n}$. Am I right?
Yes, your idea is right. Indeed we have that by definition $$o\left(\frac{1}{n^\alpha}\right)=\frac{1}{n^\alpha}\cdot \omega_1(n)$$ $$o\left(\frac{1}{n+1}\right)=\frac{1}{n+1}\cdot \omega_2(n)$$ with $\omega_1(n)\to 0$ and $\omega_2(n)\to 0$, then $$o\left(\frac{1}{n^\alpha}\right)+o\left(\frac{1}{n+1}\right)=\frac{1}{n^\alpha}\cdot \omega_1(n)+\frac{1}{n+1}\cdot \omega_2(n)=$$ $$=\frac{1}{n+1}\left(\frac{n+1}{n^\alpha}\cdot \omega_1(n)+\omega_2(n)\right)=\frac{1}{n+1}\omega_3(n)$$ with $\omega_3(n)\to 0$ (note that $\frac{n+1}{n^\alpha}$ is bounded since $\alpha>1$), then by definition $$o\left(\frac{1}{n^\alpha}\right)+o\left(\frac{1}{n+1}\right)=o\left(\frac{1}{n+1}\right)$$ More in general: * *$o(x^n)=x^n \omega_1(x)$ with $\omega_1(x)\to 0$ *$o(x^m)=x^m \omega_2(x)$ with $\omega_2(x)\to 0$ then assuming wlog $n\le m$ $$o(x^n)+o(x^m)=x^n \omega_1(x)+x^m \omega_2(x)=x^n\left(\omega_1(x)+x^{m-n} \omega_2(x)\right)=x^n \omega_3(x)$$ with $\omega_3(x) \to 0$, then $$o(x^n)+o(x^m)=o(x^p)$$ with $p=\min\{n,m\}$, which is the rule you are using. Refer also to the related: * *Behaviour of little-o in equivalence
{ "language": "en", "url": "https://math.stackexchange.com/questions/4236422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For $x>0$ let $f(x)=x^{2/3}(6-x)^{1/3}$ and $g(x)=x\ln(x),$ then find the number of solutions of $f(x)=g(x)$ For $x>0$ let $f(x)=x^{2/3}(6-x)^{1/3}$ and $g(x)=x\ln(x),$ then prove that $f(x)=g(x)$ only has one real solution Since $g(x)$ is always positive $f(x)$ will only be defined in $(0,6)$. $\displaystyle f'(x)=\frac{2(6-x)^{1/3}}{3x^{1/3}}+\frac{x^{2/3}}{3(6-x)^{1/3}}$, since $f'(x)$ is always positive $f(x)$ is an increasing function. $g'(x)$ is also positive which means that $g(x)$ is also an increasing function. None of the functions seems to have a maxima or a minima, so I can't figure out how to compare them. Edit: I figured out that $g(x)$ should be a concave upward function, and $f(x)$ is a concave downward function through double differentiation but I still can't figure out how to prove this :(
You write "g(x) is always positive" but that's not true, $g(x)$ is negative for $0<x<1$. With this observation, your concavity argument gives the solution. Consider the interval $x \in (0,6)$. You have that $f(0) = g(0) = 0$, further $g'(0) <0$ and $f'(0) >0$. So for $x \to 0^+$ we have that $g(x) < 0$ and $f(x) > 0$, or $f>g$. Now $g''(x) = 1/x > 0$ and $f''(x) = {-8/(x^{4/3} (6 - x)^{5/3})} < 0$. Since $g$ is concave and $f$ is convex, this means that, since they intersect at $x=0$, they can at most have one intersection for $x >0$. You don't have to compute the point of intersection. Simply notice that $f(x=6)=0$ and $g(x=6)>0$, so $g>f$. Since this inverts $f>g$ at $x\to 0^+$, there must be exactly one intersection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4236589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the notation $f¢(x)$ stands for in context of derivatives? Consider the remark given here in the chapter 6 named APPLICATION OF DERIVATIVES from NCERT class 12 book There is a more generalised theorem, which states that if $\mathbf{f¢(x) > 0}$ for $x$ in an interval excluding the end points and $f$ is continuous in the interval, then $f$ is increasing. Similarly, if $\mathbf{f¢(x) < 0}$ for $x$ in an interval excluding the end points and $f$ is continuous in the interval, then $f$ is decreasing. I am facing two issues in understanding it, * *What is the notation $f¢(x)$? Is it $f'(x)$? *How is it a generalisation of the following theorem as told? (I can see no difference in both, if 1 is true) Let $f$ be continuous on $[a, b]$ and differentiable on the open interval $(a,b)$. Then (a) $f$ is increasing in $[a,b]$ if $f ′(x) > 0$ for each $x \in (a, b)$ (b) $f$ is decreasing in $[a,b]$ if $f ′(x) < 0$ for each $x \in (a, b)$ (c) $f$ is a constant function in $[a,b]$ if $f ′(x) = 0$ for each $x \in (a, b)$
It is a misprinted character: the intended symbol is $f'(x)$. https://drive.google.com/file/d/0B8hXbvn1ab-BaXM0YjVhc285Z28/view?resourcekey=0-QLQBstamNZCN-LyLg9dXGQ You can find the intended text in the version/edition linked above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4236966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $x^3+\frac{1}{x^3}$ and $x^4+\frac{1}{x^4}$ be rational numbers. Show that $x+\frac{1}{x}$ is rational. $x^3+\dfrac{1}{x^3}$ and $x^4+\dfrac{1}{x^4}$ are rational numbers. Prove that $x+\dfrac{1}{x}$ is a rational number. My solution: $x^3+\dfrac{1}{x^3}=\left(x+\dfrac{1}{x}\right)\left(x^2+\dfrac{1}{x^2}-1\right)$ $x^4+\dfrac{1}{x^4}=x^4+2+\dfrac{1}{x^4}-2=\left(x^2+\dfrac{1}{x^2}\right)^2-1-1=\left(x^2+\dfrac{1}{x^2}-1\right)\left(x^2+\dfrac{1}{x^2}+1\right)-1$ Because $x^4+\dfrac{1}{x^4}$ is a rational number, $\left(x^2+\dfrac{1}{x^2}-1\right)$ is a rational number too, and since $x^3+\dfrac{1}{x^3}$ is a rational number, $\left(x+\dfrac{1}{x}\right)$ is a rational number. Am I wrong? Please check my solution, thank you.
As the commenters pointed out, a problem in your argument is that the implication "$(x^4+x^{-4})$ is rational" $\implies$ "$(x^2+x^{-2})$ is rational" is false. For example, it is possible that $x^2+x^{-2}=\sqrt5$ when, by your tricks, $x^4+x^{-4}=(x^2+x^{-2})^2-2=3$, a rational number. Of course, in this case $x^3+x^{-3}$ fails to be rational, but this underlines a problem in your logic. One commenter suggested using a contrapositive argument. This may be a possibility, but a bit delicate to manage. My favorite approach (that generalizes to several variants of this question) is somewhat high browed in the sense that it uses properties of algebraic field extensions. Outlining it for the benefit of the readers familiar with that theory. * *As $x^3+1/x^3=q_3\in\Bbb{Q}$ the number $x$ is a zero of the polynomial $P_3(T)=T^6-q_3T^3+1$ with rational coefficients. Therefore $x$ is an algebraic number. Let $m(T)\in\Bbb{Q}[T]$ be its minimal polynomial. *The other zeros of $P_3(T)$ are $x^{\pm1}\omega^j$, $\omega=e^{2\pi i/3}$ a primitive third root of unity, $j\in\{0,1,2\}$. Therefore the zeros of $m(T)$ are among these numbers. *But, as $x^4+1/x^4=q_4\in\Bbb{Q}$, $x$ is also a zero of the polynomial $P_4(T)=T^8-q_4T^4+1$. The zeros of $P_4(T)$ are seen to be $x^{\pm1}i^j$, $j\in\{0,1,2,3\}$. *The zeros of $m(T)$ are common zeros of both $P_3(T)$ and $P_4(T)$, so they can only be $x^{\pm1}$. *But $$r(T):=(T-x)(T-\frac1x)=T^2-(x+\frac1x)T+1.$$ Either $x$ is rational, when the claim is trivial, or $m(T)=r(T)$, implying that the coefficients of $r(T)$ are rational. Here $x+\dfrac1x$ is the coefficient of the linear term of $r(T)$ and we are done in this case as well. Toying with Mathematica gives the following elementary method. Write $u=x+\dfrac1x$. We are given that (see the linked thread) $$ A=x^3+\frac1{x^3}=u^3-3u $$ and $$ B=x^4+\frac1{x^4}=u^4-4u^2+2 $$ are rational. Hence so is the quantity $$ \frac{A^3+AB-3A}{B^2-1}=u. 
$$ Explaining the reason why some formula like the one above writing $u$ in terms of $A$ and $B$ must exist. Let's treat $u$ as a variable, an element transcendental over $\Bbb{Q}$. Write $F=\Bbb{Q}(u)$ for the field of rational functions. It has subfields $K_1=\Bbb{Q}(A)$ and $K_2=\Bbb{Q}(B)$. We immediately see that the respective extension degrees are $[F:K_1]=3$ and $[F:K_2]=4$. By the tower law, the only possibility is then that the compositum $K_1K_2=\Bbb{Q}(A,B)$ must be all of $F$. Consequently $u$ can be written as a rational function (with rational coefficients) in $A$ and $B$. Observe that the representation is not unique, because $A$ and $B$ are algebraically dependent over $\Bbb{Q}$ given that they both reside in the same transcendence degree one extension $F$.
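The closing formula can be sanity-checked numerically for a few values of $x$ (a quick sketch; $B^2-1\neq 0$ for the tested values):

```python
for x in (0.7, 1.0, 1.5, 2.0, 3.0):
    u = x + 1 / x
    A = x ** 3 + 1 / x ** 3
    B = x ** 4 + 1 / x ** 4
    # u recovered as a rational function of A and B
    recovered = (A ** 3 + A * B - 3 * A) / (B ** 2 - 1)
    assert abs(recovered - u) < 1e-9
print("u recovered from A and B for all test values")
```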
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Point-wise boundedness of a random sequence in a sequence space Assume we have a sequence of random elements $\{X_{n}\}_{n\geq 1}$ taking values in the sequence space $\ell_{1}$, i.e. for each $n$ one has $X_{n}\in\ell_{1}$. Next, let us assume that for any finite fixed $k$, the sequence of random subvectors $(X_{n,i})_{i=1}^{k}\in\mathbb{R}^{k}$ is bounded in probability, $$ (X_{n,i})_{i=1}^{k} = O_{p}(1) $$ with respect to the norm of $\mathbb{R}^{k}$. Does it mean that $\{X_{n}\}_{n\geq 1} = O_{p}(1)$ with respect to the norm of $\ell_{1}$? PS I am aware that pointwise convergence does not imply convergence in the norm of the sequence space. However, I could not figure it out for boundedness in probability.
Constant random variables $X_{n,i}=1$ if $i \leq n$ and $0$ if $i >n$ give a counter-example.
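Spelling the counterexample out (it is deterministic, so "bounded in probability" reduces to plain boundedness; the helper name is mine):

```python
def X(n, num_coords):
    # X_{n,i} = 1 for i <= n and 0 for i > n
    return [1 if i <= n else 0 for i in range(1, num_coords + 1)]

# Every fixed coordinate (hence every finite subvector) is bounded by 1...
assert all(xi in (0, 1) for n in (1, 5, 50) for xi in X(n, 10))

# ...but the l1-norm of X_n equals n, which is unbounded.
norms = [sum(X(n, n)) for n in (1, 5, 50)]
print(norms)  # [1, 5, 50]
```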
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the difference between consecutive squares in the $x^2$ sequence always an odd number, increasing by $2$? I'm not a math person at all and I realize that this might be obvious; I'm trying to increase my awareness about it, so please excuse me if the question is too basic. Also excuse my lack of formatting in expressing my ideas; any tip or correction would be appreciated. If you square the elements of a sequence of natural numbers $(1, 2, 3, 4,...)$ you respectively get $1,4,9,16,...$ If you calculate the difference between consecutive elements, you get $3,5,7, ...$ This list of differences is always composed of odd numbers. Why? Also, why does it grow linearly, increasing by $2$ at every step? Thanks.
Here is a well known proof without words: https://steemit.com/math/@ertwro/math-proofs-without-words-visually
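The algebra behind the picture, checked numerically: $(k+1)^2-k^2=2k+1$ is always odd, and consecutive odd numbers differ by $2$.

```python
squares = [k * k for k in range(1, 12)]
first_diffs = [b - a for a, b in zip(squares, squares[1:])]
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]

print(first_diffs)   # [3, 5, 7, ...]: (k+1)^2 - k^2 = 2k + 1, always odd
print(second_diffs)  # all 2s: (2(k+1) + 1) - (2k + 1) = 2

assert all(d % 2 == 1 for d in first_diffs)
assert all(d == 2 for d in second_diffs)
```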
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Hartshorne Chapter IV Exercise 5.2 If $X$ is a curve of genus $ \geq 2$ over a field of characteristic $0$, show that the group $\operatorname{Aut} X$ of automorphisms of $X$ is finite. Hint: If $X$ is hyperelliptic, use the unique $g_1^2$ and show that $\operatorname{Aut} X$ permutes the ramification points of the $2$-fold covering $X \to \mathbb{P}^1$. If $X$ is not hyperelliptic, show that $\operatorname{Aut} X$ permutes the hyperosculation points of the canonical embedding. I have a proof in the case that $X$ is hyperelliptic, but as I am not very familiar with hyperosculation points, I am struggling to prove the non-hyperelliptic case. Could anyone give me a hint regarding that?
When $X$ is not hyperelliptic, $|K|$ is very ample and therefore embeds $X$ in to $\Bbb P^{g-1}$ as a curve of degree $2g-2$. The hyperosculation points of this embedding are exactly the points $P\in X$ with a global section $s\in\Omega_X(X)$ so that $s$ vanishes to order at least $g$ at $P$. Since $\Omega_X$ is preserved by any automorphism, we see that this condition is preserved by automorphisms and so $\operatorname{Aut} X$ acts on the set of hyperosculation points. Since there are $n(n+1)(g-1)+(n+1)d=g^3-g$ hyperosculation points in characteristic zero by exercise IV.4.6, all we need to do is to show the kernel of this map is finite in order to show $\operatorname{Aut} X$ is finite. In fact, we'll show that the kernel is trivial by showing that automorphism of a genus $g$ curve fixing more than $2g+2$ points is trivial. This proves the claim because $g^3-g>2g+2$ for $g>2$ (all curves of genus 2 are hyperelliptic anyways, so the only time we'd use the hyperosculation points is when $g>2$). To show that a nontrivial automorphism $\sigma$ of a curve $X$ of genus $g$ has at most $2g+2$ fixed points, I claim that given a nontrivial automorphism $\sigma$, we can always find a nonconstant rational function $f\in K(X)$ with at most $g+1$ poles (counted with multiplicity) so that $f-\sigma f$ is not constant. Then $f-\sigma f$ has zeroes at every fixed point of $\sigma$, and $f-\sigma f$ has at most $2g+2$ poles because each of $f$ and $\sigma f$ have at most $g+1$ poles. As the number of poles and zeroes of any nonconstant rational function on a curve are equal (they're both equal to the degree of the map to $\Bbb P^1$ given by $f$), this proves that $\sigma$ has at most $2g+2$ fixed points. First, for any effective divisor of degree $g+1$, we can find a nonconstant function with poles contained in $D$: by Riemann-Roch, $l(D)-l(K-D)=g+1-g+1=2$, so $l(D)\geq 2$, and therefore $l(D)$ contains a nonconstant rational function. 
Next, we can always find a degree $g+1$ effective divisor which isn't fixed by $\sigma$: just pick some point $x\in X$ not fixed by $\sigma$ (which we can always find because $\sigma$ is not the identity) and look at $(g+1)x$. So $(f-\sigma f)_\infty$ is of degree at most $2g+2$ and therefore $(f-\sigma f)_0$ is of degree at most $2g+2$ and therefore can contain at most $2g+2$ points, and we've won.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Help with geometric probability (with regards to span). Suppose we have two non-parallel vectors in $\mathbb R^3$. Now, if we were to randomly select another vector in $\mathbb R^3$, what is the probability that that new vector lies in the span of the first two vectors? My intuition says that the probability is 0 (i.e. $\mathbb R^2$ is infinitely smaller than $\mathbb R^3$, ergo 0), but I'm not sure.
One way to model this, so that everything becomes finite, is to work on the unit sphere $S^2$. We may assume that the hyperplane is $H=\{z=0\}\subset \mathbb{R}^3$. The probability that a random vector of $S^2$ doesn't lie on the great circle $H\cap S^2$ is exactly $$ \frac{|S^2\setminus (H\cap S^2)|}{|S^2|}= 1. $$ The only problem here is that we exclude the zero vector, but I assume that this is not a big deal. So, I would say your intuition is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving divisibility by $9$ for $10^n + 3 \times 4^{n+2} + 5$. I am trying to prove that for all $n$, $9$ divides $10^n + 3 \times 4^{n+2} + 5$. I first tried induction on $n$, but couldn't get anywhere with the induction step. I then tried to use modular arithmetic. The furthest I could get was: As $10 \equiv 1 \mod 9$, $10^n \equiv 1^n = 1$ for all $n$, so modulo $9$, we have \begin{align*} 10^n + 3 \cdot 4^{n+2} + 5 & = 3 \cdot 4^{n+2} + 6 \\ & = 3\left(4^{n+2} + 2\right) \\ & = 3\left(\left(2^2\right)^{n+2} + 2 \right) \\ & = 3\left(2^{2(n+2)} + 2 \right) \\ & = 3\left(2\left(2^{2(n+2) - 1} + 1\right) \right) \end{align*} I need to something get a factor of $3$ to show that the entire expression is divisible by $9$ and hence equal to $0$, mod $9$. But, with a sum of even terms, this does not appear possible. Any hints on how to proceed would be appreciated. Is induction the standard way to prove something like this?
Here is another way if you know the binomial theorem. Observe that $$10^n = (1+9)^n = 1 + {n \choose1}9 + \ldots + 9^n = 1 + 9A$$ where $A= {n \choose1}+ \ldots + 9^{n-1}$ and similarly $$4^{n+2} = (1+3)^{n+2} = 1 + (n+2) 3 + {{n+2} \choose 2}3^2 + \ldots + 3^{n+2} = 1 + 3(n+2) + 3^2B$$ So $$10^n + 3 \cdot 4^{n+2} + 5 = 1 + 9A + 3( 1 + 3(n+2) + 3^2B ) + 5 = \color{red}{9} {\left(1 + A + (n+2) + 3B \right) }$$
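The identity is easy to confirm for many $n$ at once (a quick check, not part of the proof):

```python
# 9 divides 10^n + 3*4^(n+2) + 5 for every n >= 0
for n in range(60):
    assert (10 ** n + 3 * 4 ** (n + 2) + 5) % 9 == 0
print(10 ** 0 + 3 * 4 ** 2 + 5)  # 54 = 9 * 6, the n = 0 case
```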
{ "language": "en", "url": "https://math.stackexchange.com/questions/4237927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Calculate the closed form of the following series $$\sum_{m=r}^{\infty}\binom{m-1}{r-1}\frac{1}{4^m}$$ The answer given is $$\frac{1}{3^r}$$ I tried expanding the expression so it becomes $$\sum_{m=r}^{\infty}\frac{(m-1)!}{(r-1)!(m-r)!}\frac{1}{4^m}$$ but I do not know how to proceed. Any help will be appreciated, thanks.
You can use a recurrence relation, thanks to Pascal's identity: $${m - 1 \choose k - 1} = {m \choose k} - {m - 1 \choose k}.$$ As suggested by Claude Leibovici in the comment, we can obtain a more general result. Let $$S(k) = \sum_{m = k}^\infty {m - 1 \choose k - 1} \frac{1}{x^m}.$$ It seems that $S(k)$ converges as long as $|x| > 1$. Using Pascal's identity we have $$\sum_{m = k}^\infty {m - 1 \choose k - 1} \frac{1}{x^m} =x\sum_{m = k}^\infty {m \choose k} \frac{1}{x^{m + 1}} -\sum_{m = k}^\infty {m-1 \choose k} \frac{1}{x^{m}} $$ which gives us $S(k) = xS(k+1) - S(k+1)$, or equivalently \begin{equation} S(k - 1) = (x - 1)S(k) \tag{1} \end{equation} Now $S(1)$ is just the geometric series $$\sum_{m = 1}^\infty \frac{1}{x^m} = \frac{1}{x - 1}$$ which tells us that the solution of $(1)$ is \begin{equation} S(k) = \frac{1}{(x - 1)^k} = \sum_{m = k}^\infty {m - 1 \choose k - 1} \frac{1}{x^m}\tag{2} \end{equation} Setting $x = 4$ gives us the desired result. Note the similarity with the negative binomial theorem: $$\frac{1}{(x + a)^k} = \sum_{j = 0}^\infty (-1)^j {k + j - 1 \choose j} x^j a^{k - j}$$ when $a = -1$, which converges for $|x| < 1$. We can then combine $S(k)$ with this to get a (Laurent) series expansion of $\frac{1}{(x - 1)^k}$ as a corollary.
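Partial sums confirm both the special case $x=4$ and the general closed form $1/(x-1)^k$ (a numerical sketch; the helper name is mine):

```python
from math import comb

def S(k, x, terms=400):
    # partial sum of sum_{m >= k} C(m-1, k-1) / x^m
    return sum(comb(m - 1, k - 1) / x ** m for m in range(k, k + terms))

# The question's case, x = 4: the sum is 1 / 3^r.
for r in (1, 2, 3, 5):
    assert abs(S(r, 4) - 1 / 3 ** r) < 1e-12

# The general claim for |x| > 1.
for k, x in [(2, 3.0), (4, 1.5), (3, -2.5)]:
    assert abs(S(k, x) - 1 / (x - 1) ** k) < 1e-9
print(S(3, 4), 1 / 27)
```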
{ "language": "en", "url": "https://math.stackexchange.com/questions/4238033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 0 }
Central limit theorem for random variables with exponentially decaying covariance Let $X_1,X_2,...$ be i.i.d. bounded random variables with $\mathbb{E}[X]=0$. In addition, let $C_1,C_2>0$ and let $\{d_{i,j}\}_{i,j\in \mathbb{N}}$ be such that $$d_{i,j} = C_1e^{-C_2|i-j|} $$ Now, define $$Y_n = \sum_{j=1}^n d_{n,j} X_n\cdot X_j $$ Does $$\frac{Y_1+...+Y_n}{\sqrt{n}} \overset{d}{\longrightarrow} N(0,\sigma^2)?$$ Generally the exponential decay of the covariance of the $Y_i$ is not enough. However, I think that for this type of random variable it should work, but I am not sure how to prove it. Thanks!
Under the conditions stated above, the following conclusion is true: \begin{equation*} \frac{1}{\sqrt{n}}\sum_{k=1}^{n}(Y_k-C_1\mathsf{E}[X_1^2])\overset{d}{\longrightarrow} N(0,\sigma^2). \tag{1} \end{equation*} This conclusion can be proved by the CLT for arrays of martingale differences (MD) (cf. P. Hall and C. C. Heyde, Martingale Limit Theory and Its Application, Academic Press (1980), Th. 3.2, p. 58--). The following is an outline of the proof. Denote \begin{equation*} W_0=0, \quad W_{k-1}=\sum_{j=1}^{k-1}d_{k,j}X_j , \quad k\ge 2. \end{equation*} Then \begin{gather*} Y_k=\Big(\sum_{j=1}^{k}d_{k,j}X_j \Big)X_k =W_{k-1}X_k+d_{k,k}X_k^2\\ \mathsf{E}[Y_k]=d_{k,k}\mathsf{E}[X_k^2]=C_1\mathsf{E}[X_1^2] \overset{\triangle}{=}m. \end{gather*} Let \begin{gather*} Z_{n,k}=\frac{1}{\sqrt{n}}(Y_k-m), \quad 1\le k\le n, n\ge 1.\\ \mathscr{F}_{k}=\sigma\{ X_j,1\le j\le k\}\vee\mathscr{N},\quad 1\le k\le n, n\ge 1. \end{gather*} Then \begin{equation*} \mathsf{E}[Z_{n,k}|\mathscr{F}_{k-1}] =\frac{1}{\sqrt{n}}(W_{k-1}\mathsf{E}[X_k]+ d_{k,k}\mathsf{E}[X_k^2]-m)=0, \end{equation*} and $ Z=\{ Z_{n,k}, \mathscr{F}_{k}, 1\le k\le n, n\ge 1\}$ is an MD-array (array of martingale differences). Now we verify that $Z$ satisfies the conditions guaranteeing $S_n=\sum_{k\le n}Z_{n,k} \overset{d}{\to} N(0,\sigma^2)$. First, since the $\{X_i,i\ge 1\}$ are bounded, the $\{W_i,i\ge 1\}$ and $\{Y_i,i\ge 1\}$ are bounded too, and \begin{equation*} \max_{1\le k\le n}|Z_{n,k}|\le \frac{C}{\sqrt{n}} \tag{2} \end{equation*} where here and later $C$ is a constant independent of $k$ and $n$; in different expressions $C$ may be different.
Secondly, using direct calculation, the following holds, \begin{align*} \lim_{k\to\infty}\frac1k\sum_{j=1}^{k}W_j&=0, \quad \text{a.s.} \tag{3}\\ \lim_{k\to\infty}\frac1n\sum_{j=1}^{n}W_j^2&=b>0,\quad\text{a.s.} \tag{4} \end{align*} Hence, \begin{align*} &\mathsf{E}[Z_{n,k}^2|\mathscr{F}_{k-1}]\\ &\quad =\frac1n\mathsf{E}[(W_{k-1}X_k+C_1(X_k^2-\mathsf{E}[X_k^2]))^2 | \mathscr{F}_{k-1}]\\ &\quad =\frac1n[W_{k-1}^2\mathsf{E}[X_k^2]]+C_1^2\mathsf{E}[(X_k^2-\mathsf{E}[X_k^2])^2]\\ &\qquad +2C_1W_{k-1}\mathsf{E}[(X_k^2-\mathsf{E}[X_k^2])X_k],\\ &\sum_{k=1}^{n}\mathsf{E}[Z_{n,k}^2|\mathscr{F}_{k-1}]\\ &\quad = \frac1n \sum_{k=1}^{n}W_k^2\mathsf{E}[X_1^2] + C_1^2 \mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])^2] \\ &\qquad + \frac{2C_1\mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])X_1]}n \sum_{k=1}^{n}W_{k-1}\\ &\quad \to b\mathsf{E}[X_1^2]+C_1^2 \mathsf{E}[(X_1^2-\mathsf{E}[X_1^2])^2] \overset{\triangle}{=}\sigma^2. \tag{5} \end{align*} At last, from (2) and (5), we have \begin{align*} \frac{1}{\sqrt{n}}\sum_{k=1}^{n}(Y_k-C_1\mathsf{E}[X_1^2]) =\sum_{k=1}^{n}Z_{n,k}=S_n\overset{d}{\longrightarrow}N(0,\sigma^2). \end{align*} i.e. (1) is true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4238139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simple way to compute the finite sum $\sum\limits_{k=1}^{n-1}k\cdot x^k$ I'm looking for an elementary method for computing a finite geometric-like sum, $$\sum_{k=1}^{n-1} k\cdot3^k$$ I have a calculus-based solution: As a more general result, I replace $3$ with $x$ and denote the sum by $f(x)$. Then $$f(x) = \sum_{k=1}^{n-1} k\cdot x^k = x\sum_{k=1}^{n-1}k\cdot x^{k-1} = x\frac{\mathrm d}{\mathrm dx}\left[\sum_{k=1}^{n-1}x^k + C\right]$$ for some constant $C$. I know that $$\sum_{k=1}^{n-1}x^k = \frac{x-x^n}{1-x}$$ so it follows that $$\begin{align} f(x) &= x\frac{\mathrm d}{\mathrm dx}\left[\frac{x-x^n}{1-x}+C\right] \\[1ex] &= x\cdot\frac{(1-x)\left(1-nx^{n-1}\right) + x-x^n}{(1-x)^2} \\[1ex] &= \frac{x-nx^n(1-x)-x^{n+1}}{(1-x)^2} \end{align}$$ which means the sum I started with is $$\sum_{k=1}^{n-1}k\cdot3^k = \frac{3+2n\cdot3^n-3^{n+1}}4$$ I am aware of but not particularly adept with summation by parts, and I was wondering if there was another perhaps simpler method I can employ to get the same result?
Hint: It suffices to notice that $$S:=\sum_{k=1}^{n-1} k\cdot3^k=\sum_{k=0}^{n-2} (k+1)\cdot3^{k+1}=3\sum_{k=1}^{n-1} (k+1)\cdot3^k+3-n\cdot3^n=3S+\text{terms}.$$ Now subtract the RHS from three times the LHS (which gives you twice the LHS), simplify and fiddle with the terms.
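A quick brute-force check of the closed form from the question (a sanity test added here, not part of the hint; the function names are just for illustration):

```python
# Compare the direct sum S = sum_{k=1}^{n-1} k*3^k against the closed form
# (3 + 2n*3^n - 3^(n+1)) / 4 stated in the question, for several n.
def brute(n):
    return sum(k * 3**k for k in range(1, n))

def closed(n):
    # the numerator is always divisible by 4, so integer division is exact
    return (3 + 2 * n * 3**n - 3**(n + 1)) // 4

for n in range(2, 12):
    assert brute(n) == closed(n), n
print("closed form verified")
```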
{ "language": "en", "url": "https://math.stackexchange.com/questions/4238437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Linear independence of functions without using the Wronskian Problem Let $g_1,\ldots,g_k$ be linearly independent real-valued functions on a set $X$, that is, if for real constants $c_j$, $\sum^k_{j=1}c_jg_j\equiv0$, then $c_1=c_2=\cdots=0$. Show that for some $x_1,\ldots,x_k$ in $X$, $g_1,\ldots,g_j$ are linearly independent on $\{x_1,\ldots,x_j\}$ for each $j=1,\ldots,k$. My efforts We will find the $k$ points one by one. First we will find $x_1$ such that $g_1$ is linearly independent on $\{x_1\}$, i.e., $g_1(x_1)\neq0$. We claim such an $x_1$ must exist. If not, $g_1(x)=0$ for all $x$. We just take $c_1=1$ and $c_i=0$ for $i>1$. Then we have $\sum^k_{j=1}c_jg_j\equiv0$, violating the linear independence of $g_1,\ldots,g_k$. Now assume that we have found $x_1,\ldots,x_n$ in $X$ with $n<k$ such that $g_1,\ldots,g_j$ are linearly independent on $\{x_1,\ldots,x_j\}$ for each $j=1,\ldots,n$. Denote the $j\times j$ matrix $M_j:=[g_m(x_l)]_{l,m\leq j}$ with $j\leq n+1$. By assumption, the determinant $|M_j|\neq0$ and $\mathrm{Rank}(M_j)=j$ for each $j=1,\ldots,n$. We need to find $x_{n+1}$ such that $|M_{n+1}|\neq0$ and $\mathrm{Rank}(M_{n+1})=n+1$. Denote the $j$th row of $M_{n+1}$ as $v_j(x_j):=[g_1(x_j)\; g_2(x_j)\; \ldots\; g_n(x_j)\; g_{n+1}(x_j)]^T$. Assume such an $x_{n+1}$ does not exist. Then for any $x\in X$, $v_{n+1}(x)$ is a linear combination of $v_1(x_1),\ldots,v_n(x_n)$, i.e., for $g_1,\ldots,g_{n+1}$, their values at any $x$ are a linear combination of their values at $x_1,\ldots,x_{n+1}$. If they are not linearly independent on $x_1,\ldots,x_{n+1}$, then they are not linearly independent on $X$. Now they are not linearly independent on $x_1,\ldots,x_{n+1}$, so this is a contradiction. Where am I wrong? Please give me a hint on a more elegant way of proof.
HINT: Consider the matrix $(g_j(x))_{1\le j\le k, x\in X}$. The basic fact about matrices over fields is: the largest size of non-zero minors equals the largest number of linearly independent rows (columns). That should do it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4238616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Trying to prove an inequality (looks similar to entropy) I'm trying to prove the following inequality (or something similar, up to a constant factor on either side of the inequality): $$k\cdot\sum_{i=1}^{k}x_{i}\cdot\ln\left(x_{i}\right)\geq\sum_{i=1}^{k}x_{i}\cdot\left(x_{i}-1\right)$$ where $\forall i\in\left[k\right]$, $x_i \in\left[0,k\right]$ (the $x_i$s are not necessarily natural numbers, but we can assume that they're rational if it helps), and $\sum_{i=1}^k x_i=k$. I've tried plotting it for $k=2,3$ and have run some numerical experiments for larger $k$, and I'm 99% sure this inequality is correct, but I'm still struggling with the proof. Up to some normalizing, I find the left-hand side quite similar to the entropy of a probability distribution, but I didn't manage to take advantage of this fact either. I also tried looking for inequalities that only hold on simplex-like hyperplanes, but couldn't find anything useful. Any ideas? Thanks!
The function $$ f(x) = k x \ln(x) - x(x-1) $$ has the second derivative $$ f''(x) = \frac k x - 2 $$ so that it is convex on the interval $[0, k/2]$. If all $x_i$ are in the interval $[0, k/2]$ then Jensen's inequality can be applied, so that $$ \sum_{i=1}^k f(x_i) \ge k f\left( \frac 1k \sum_{i=1}^k x_i\right) = k f(1) = 0 \, , $$ which is the desired estimate. It remains to investigate the case where $x_i > k/2$ for one $i$, say $x_k > k/2$. Then $$ \sum_{i=1}^k f(x_i) = \sum_{i=1}^{k-1} f(x_i) + f(x_k) \ge (k-1) f\left( \frac 1{k-1} \sum_{i=1}^{k-1} x_i\right) + f(x_k) \\ = (k-1) f\left( \frac {k-x_k}{k-1} \right) + f(x_k) \, . $$ Therefore we define $$ g(x) = (k-1) f\left( \frac {k-x}{k-1} \right) + f(x) \, . $$ An elementary calculation shows that $g(1) =g'(1) = 0$, and $$ g''(x) = \frac{2k(x^2-kx+\frac{k^2-k}{2})}{(k-1)x(k-x)} \ge 0 \, . $$ It follows that $g(x) \ge 0$ on $[0, k]$, and that completes the proof.
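Since the proof splits into a Jensen case and a boundary case, a Monte Carlo spot-check is reassuring (illustrative only; it samples random points of the constraint set and evaluates the gap between the two sides):

```python
# Check k * sum(x*ln x) >= sum(x*(x-1)) on random nonnegative vectors
# rescaled so their sum is k, using the convention 0*ln(0) = 0.
import math
import random

def gap(xs):
    k = len(xs)
    lhs = k * sum(x * math.log(x) for x in xs if x > 0)
    rhs = sum(x * (x - 1) for x in xs)
    return lhs - rhs

random.seed(0)
worst = float("inf")
for _ in range(5000):
    k = random.randint(2, 8)
    raw = [random.random() for _ in range(k)]
    s = sum(raw)
    worst = min(worst, gap([k * r / s for r in raw]))  # rescale to sum k
print(worst >= -1e-9)  # tiny tolerance for floating point
```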
{ "language": "en", "url": "https://math.stackexchange.com/questions/4239784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exercise 2.2.2. from Weibel's Homological Algebra. If $A$ has enough projectives, then so does $\operatorname{Ch}(A)$. $\newcommand{\A}{\mathcal{A}}\newcommand{\Ch}{\mathbf{Ch}}$The following is an exercise from Weibel's An Introduction to Homological Algebra. Show that if $\A$ has enough projectives, then so does the category $\Ch(\A)$ of chain complexes over $\A$. The hint given is to use the previous exercise, where one characterises projective objects in $\Ch(\A)$. Namely, we saw that $P_{\bullet}$ is projective in $\Ch(\A)$ iff it is a split exact complex of projectives. My attempt. Let $M_{\bullet}$ be an arbitrary chain complex in $\Ch(\A)$. Then, since $\A$ has enough projectives, we can find epics $$P_{n} \to M_{n}$$ for all $n$. Moreover, using projectivity and the differentials of $M_{\bullet}$, we get maps $$d_{n} : P_{n} \to P_{n - 1}$$ such that $d^2= 0$ and they fit together to give a chain map $$P_{\bullet} \to M_{\bullet}.$$ The issue now is showing that the $P_{\bullet}$ is projective. (The above map being epic is clear since it is epic in each degree.) In fact, I am quite sure that the construction above will actually not work in general. For example, if each $M_{n}$ was projective to begin with, then we could've picked each $P_{n}$ to be $M_{n}$ and the maps to be the identity maps. But an arbitrary complex of projectives need not be split exact. This makes me think that one needs to do a different construction, but I don't see one.
Proceed as follows: * *Components. Pick epic maps $Q_n \longrightarrow H_n$ and $P_n'' \longrightarrow B_n$ from projectives that then, by the proof of the Horseshoe lemma, fit into a short exact sequence (SES) $ 0\longrightarrow P_n'' \longrightarrow P_n' \longrightarrow Q_n \longrightarrow 0$ covering $0 \longrightarrow B_n \longrightarrow Z_n \longrightarrow H_n \longrightarrow 0$. This gives surjections $P_n' \to Z_n$. *More components. There now exists another SES $ 0\longrightarrow P_n' \longrightarrow P_n \longrightarrow P_{n-1}'' \longrightarrow 0$ covering the SES $0 \longrightarrow Z_n \longrightarrow C_n \longrightarrow B_{n-1} \longrightarrow 0$. *Differential. Note that $P_n = P_n'\oplus P_{n-1}''$ maps into $P_{n-1} = P_{n-1}'\oplus P_{n-2}''$ via the matrix $$d = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$ where $1$ is the inclusion $P''\subseteq P'$ in (1), and that $d^2=0$. This makes $P$ into a complex $(P,d)$ of projectives. *Split. To see it is split, note that $P_n' = P_n''\oplus Q_n$ (another copy of $P_n''$) and $P_{n+1} = (P_{n+1}''\oplus Q_{n+1})\oplus P_n''$ so you can project and include into this copy. In short: $$d(x,y,z) = (z,0,0), \quad s(x,y,z) = (0,0,x)$$ and then it is clear that $dsd=d$, so $(P,d)$ is split.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4239924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
inequality for positive real numbers Given $$a,b,c,x,y,z \in \mathbb{R}^+$$ how to show the following inequality: $$\frac{a^3}{x} + \frac{b^3}{y} + \frac{c^3}{z} \geq \frac{(a+b+c)^3}{3(x + y + z)}$$ Since everything is positive, I rearranged: the inequality holds iff $$\frac{a^3}{x(a+b+c)^3}+\frac{b^3}{y(a+b+c)^3}+\frac{c^3}{z(a+b+c)^3}\geq\frac{1}{3(x+y+z)}$$ which further simplifies to $$\frac{a^3yz}{(a+b+c)^3}+\frac{b^3xz}{(a+b+c)^3}+\frac{c^3xy}{(a+b+c)^3}\geq\frac{xyz}{3(x+y+z)}$$ and now I am struggling with how to rearrange further to apply Hölder's inequality
Apply Sedrakyan's inequality https://en.m.wikipedia.org/wiki/Sedrakyan%27s_inequality. For $n=3$ it reads $$\frac{a_{1}^2 }{b_1 } +\frac{a_{2}^2 }{b_2 } +\frac{a_{3}^2 }{b_3 } \ge \frac{( a_1 +a_2 +a_3 )^2 }{b_1 + b_2 +b_3 }.$$ Set $a_1 =a^{3/2}$, $a_2 =b^{3/2}$, $a_3 =c^{3/2}$, $b_1=x$, $b_2 =y$, $b_3 =z$. This gives $$\frac{a^3}{x} +\frac{b^3}{y} +\frac{c^3}{z} \ge \frac{(a^{3/2} +b^{3/2} +c^{3/2 } )^2 }{x+y+z}.$$ Apply the power mean inequality $ (a^{3/2} + b^{3/2}+c^{3/2} )^2 \ge \frac{1}{3} (a+b+c)^3 $ to complete the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4240064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I find the region of integration (in Cartesian coordinates) of the following integral? I need to set up the integral $$\iint_Rx\,dy\,dx,$$ where the region $R$ is represented by the following graph: I understand $x$ must be limited as $0\le x\le1$, but I'm not sure how $y$ is limited. I tried limiting it as $-x\le y\le\sqrt{1-x^2}$, following the equation of the circle, $x^2+y^2\le1^2$, but Wolfram is rendering a different region, which is not correct. What is the correct region of integration?
With polar coordinates $x=r\cdot \cos\phi$, $y=r\cdot \sin\phi$ and the area element $dxdy=r\,dr\,d\phi$, this becomes: $\int_{\phi=-\frac{\pi}{4}}^{\frac{\pi}{2}}\int_{r=0}^1 r^2\cos\phi\, dr\,d\phi$ (the integrand $x=r\cos\phi$ times the Jacobian factor $r$ gives $r^2\cos\phi$).
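As a cross-check (a sketch, not part of the answer): the polar integral over the sector $\{-\pi/4\le\phi\le\pi/2,\;0\le r\le1\}$ has integrand $r^2\cos\phi$ once the Jacobian $r$ is included, and it should agree with a Cartesian integration of $x$ over the same sector — which, note, requires splitting at $x=\sqrt{2}/2$, where the lower boundary changes from the line $y=-x$ to the arc $y=-\sqrt{1-x^2}$:

```python
# Midpoint-rule comparison of the polar and Cartesian set-ups against the
# closed form (1 + sqrt(2)/2) / 3.
import math

exact = (1 + math.sqrt(2) / 2) / 3   # value of the polar integral

n = 100000

# Polar side: r^2 cos(phi) separates into a product of two 1-D sums.
dr = 1.0 / n
r_part = sum(((i + 0.5) * dr) ** 2 for i in range(n)) * dr
dphi = (3 * math.pi / 4) / n
phi_part = sum(math.cos(-math.pi / 4 + (j + 0.5) * dphi)
               for j in range(n)) * dphi
polar = r_part * phi_part

# Cartesian side: lower boundary is y = -x up to x = sqrt(1/2), then the arc.
def height(x):
    top = math.sqrt(1 - x * x)
    bottom = -x if x <= math.sqrt(0.5) else -top
    return top - bottom

dx = 1.0 / n
cart = sum((i + 0.5) * dx * height((i + 0.5) * dx) for i in range(n)) * dx

print(abs(polar - exact) < 1e-6 and abs(cart - exact) < 1e-3)
```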
{ "language": "en", "url": "https://math.stackexchange.com/questions/4240191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How can you solve this trigonometric equation algebraically? Solve for $0\leq\alpha\leq\pi$, $$\left(\frac{\left(r-t\right)\cos\left(\alpha\right)+\left(r-t\right)\sin^{2}\left(\alpha\right)+t}{r}\right)^{2}+\left(\frac{4\left(t-r\right)\sin\left(\frac{\alpha}{2}\right)\cos^3\left(\frac{\alpha}{2}\right)}{t}\right)^{2}=1.$$ I've been playing with this for a while, and apparently the solution can be described simply by \begin{align} \alpha=2\arccos\left(\sqrt{\frac{t}{t+r}}\right) \end{align} and graphing the function with $\alpha$ on the $y$-axis, $t$ on the $x$-axis, and $r$ being an arbitrary constant, it does appear true. I'm not sure if this value of $\alpha$ holds true for all $r$ and $t$, though. In the above graph, the red curve is $\alpha=2\arccos\left(\sqrt{\frac{t}{t+r}}\right)$ and the purple is the original equation.
Let $$x := \left(\frac{\left(r-t\right)\cos\left(\alpha\right)+\left(r-t\right)\sin^{2}\left(\alpha\right)+t}{r}\right)^{2}+\\ \left(\frac{4\left(t-r\right)\sin\left(\frac{\alpha}{2}\right)\cos^3\left(\frac{\alpha}{2}\right)}{t}\right)^{2}-1. \tag{1}$$ Wanted: find the values of $\,\alpha\,$ when $\,x=0.$ There are a few ways to solve this problem. One way with computer technology is to use the substitution $$\alpha=\frac{\log(A)}i,\;\; \sin(\alpha)=\frac{A-\frac1A}{2i},\;\; \cos(\alpha)=\frac{A+\frac1A}2 \tag{2}$$ in equation $(1)$ and factor the expression using a CAS (Computer Algebra System) to get $$ x = (1 - A)^2 (t - r) u^2 v/(4 A^2 r\, t)^2 \tag{3}$$ where $$ u = r + 2 A r + A^2 r - t - A^2 t,\tag{4}$$ $$ v = r + 2 A r + A^2 r + t - 2 A t + A^2 t. \tag{5}$$ The factorization in equation $(3)$ implies that there are three solutions for $\alpha.\,$ The first is $\,\alpha=0\,$ when $\,A=1.\,$ The other two are for $\,u=0\,$ and $\,v=0.\,$ Note that $$ \frac{2r}t- \frac{\cos(\alpha)}{\cos(\alpha/2)^2} = \frac{2u}{t(1+A)^2} \tag{6}$$ and $$ \cos(\alpha/2)^2 - \frac{t}{t+r} = \frac{v}{4(r+t)A}. \tag{7}$$ Thus $\,u=0\,$ when $$ \frac{2r}t = \frac{\cos(\alpha)}{\cos(\alpha/2)^2} \;\;\text{ or }\;\; \frac{t}{2(t-r)} = \cos(\alpha/2)^2 \tag{8}$$ is one solution and $\,v=0\,$ when $$ \frac{t}{t+r} = \cos(\alpha/2)^2 \quad \text{ or } \quad\frac{r}{t} = \tan(\alpha/2)^2\tag{9}$$ is another which agrees with your solution $$ \alpha=2\arccos\left(\sqrt{\frac{t}{t+r}}\right). \tag{10}$$
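The factorization can be spot-checked numerically (an illustrative Python sketch; the helper names are mine): plugging $\alpha=2\arccos\!\left(\sqrt{t/(t+r)}\right)$ into the left-hand side of the original equation should give exactly $1$ for any positive $r,t$.

```python
import math

def lhs(alpha, r, t):
    # left-hand side of the original equation
    a = ((r - t) * math.cos(alpha) + (r - t) * math.sin(alpha) ** 2 + t) / r
    b = 4 * (t - r) * math.sin(alpha / 2) * math.cos(alpha / 2) ** 3 / t
    return a * a + b * b

def alpha_of(r, t):
    # the conjectured solution
    return 2 * math.acos(math.sqrt(t / (t + r)))

ok = all(abs(lhs(alpha_of(r, t), r, t) - 1) < 1e-12
         for r, t in [(1, 2), (3, 1), (0.5, 0.7), (2, 5)])
print(ok)
```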
{ "language": "en", "url": "https://math.stackexchange.com/questions/4240361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What's wrong with this evaluation of $\lim _{x\to 2^-} \frac{|2x-x^2|}{x-2}$? Suppose we want to evaluate $$\lim _{x\to 2^-} \frac{|2x-x^2|}{x-2}$$ Note that multiple answer choices may be correct. I marked A and C as correct. The websystem marked this incorrect. I drew a graph of $-x^2 +2x$: a parabola facing down, above the $x$-axis for $0 < x < 2$. So $|2x-x^2| = -x^2 + 2x$ in the relevant interval.
It is not true that $|2x-x^{2}|=2x-x^{2}$ for all $x <2$: Take $x <0$ to see why. So A) is not correct. The correct one is C).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4240696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Second order ODE - undetermined coefficients for $f(x) = xe^x\cos{x}$ $$y'' + y = xe^x\cos{x}$$ So, the solution of the homogeneous equation is $$y_h = C_1\cos{x} + C_2\sin{x}$$ Now, when I try to do the particular solution: $$y_p = e^x(Ax+B)(C\cos{x}+D\sin{x})$$ Is this correct? We don't have to multiply by $x$, and because there's an $x$ in $f(x)$ we put a general polynomial of the first order, $Ax+B$. Because there's a trigonometric function, we put $C\cos{x} + D\sin{x}$, and $e^x$ is self-explanatory. However, when I find the second derivative and plug everything into the equation to find the coefficients, I end up with a system of four equations which doesn't have a solution. The second derivative is: $$e^x(\sin{x} (-2acx - 2ac + 2ad - 2bc) + \cos{x} (2ac + 2adx + 2ad + 2bd))$$ I checked this on WolframAlpha. If I add $y_p$ to it and group the terms, I get $$e^x(\sin{x} (-2acx - 2ac + 2ad - 2bc+axd+bd) + \cos{x} (2ac + 2adx + 2ad + 2bd+axc+bc))$$ Now, on the right side we have $xe^x\cos{x}$. Let's compare the coefficients multiplying $x\cos{x}$ with the $1$ on the right side $$2ad+ac=1$$ Now, compare all the coefficients of $\cos{x}$, $x\sin{x}$, $\sin{x}$ with 0 $$2ac+2ad+2bd+bc=0$$ $$-2ac+ad=0$$ $$-2ac+2ad-2bc+bd=0$$ This system does not have a solution, which leads me to think my $y_p$ is incorrect. However, I can't see my mistake.
The particular integral is of the type: $y_p=e^{x}\Big((A x+B) \cos(x)+(C x+D)\sin(x)\Big)$. Substituting this into $y''+y=xe^x\cos x$ and matching the coefficients of $xe^x\cos x$, $e^x\cos x$, $xe^x\sin x$ and $e^x\sin x$, the system instead is: $A+2C-1=0$ $2A+B+2C+2D=0$ $C-2A=0$ $2A+2B-2C-D=0$. The solution is: $A=\frac{1}{5}$, $B=\frac{-2}{25}$, $C=\frac{2}{5}$, $D=\frac{-14}{25}$. The particular integral is: $y_p=e^{x}\Big(\frac{5x-2}{25}\cos(x)+\frac{10x-14}{25}\sin(x)\Big)$.
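These coefficients can be verified with a quick finite-difference residual check (a sketch; `yp` and `residual` are just illustrative names): $y_p''+y_p$ should reproduce $xe^x\cos x$ up to discretization error.

```python
import math

def yp(x):
    # candidate particular solution with A=1/5, B=-2/25, C=2/5, D=-14/25
    return math.exp(x) * ((x / 5 - 2 / 25) * math.cos(x)
                          + (2 * x / 5 - 14 / 25) * math.sin(x))

def residual(x, h=1e-4):
    # central-difference second derivative, then plug into y'' + y - f
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / (h * h)
    return ypp + yp(x) - x * math.exp(x) * math.cos(x)

print(max(abs(residual(x)) for x in [0.0, 0.5, 1.0, 2.0]) < 1e-5)
```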
{ "language": "en", "url": "https://math.stackexchange.com/questions/4240852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the equation of a circle whose diameter is the chord of another circle I am studying maths as a hobby and have just come to the last end-of-chapter question on parabolas and circles. The straight line through the point $A(-a,0)$ at an angle $\theta$ to the positive direction of the x-axis meets the circle $x^2+y^2=a^2$ at P, distinct from A. The circle on AP as diameter is denoted by C. i) By finding the equation of C, or otherwise, show that if C touches the y-axis, then $\cos2\theta=2-\sqrt 5$ ii) C meets the x-axis at M, distinct from A, and the tangents to C at A and M meet at Q. Find the coordinates of Q in terms of $\theta$ and show that as $\theta$ varies, Q always lies on the curve $y^2x +(x+a)^3=0$. I really don't know how to begin here. To find the equation of C I am thinking I need two points so I can use the equation: $$x^2+y^2+2gx+2fy+c=0$$ But I only have one definite point, at A. The value of y at B is unknown. I could call P (x,y) but this is too vague. This is my visualisation of the situation for the first part: I can see that $\angle ABP$ is a right angle.
A simpler observation is to note that the coordinate of $P$ is $(a \cos 2\theta, a \sin 2\theta)$, since the central angle that $OP$ makes with the positive $x$-axis is twice the corresponding inscribed angle $\angle PAO$. Moreover, because $\triangle AOP$ is isosceles, the midpoint $D = \frac{a}{2}(-1 + \cos 2\theta, \sin 2\theta)$ of $AP$ is the foot of the altitude $DO$, hence $\triangle AOD$ implies $AD = a \cos \theta$. But since $BD = AD$, it follows that $$\frac{a}{2} (1 - \cos 2\theta) = a \cos \theta = a \sqrt{\frac{1 + \cos 2\theta}{2}}.$$ Letting $z = \cos 2\theta$, we obtain a quadratic in $z$ after cancelling $a$: $$(1-z)^2 = 2 (1 + z),$$ or $$0 = z^2 - 4z - 1,$$ hence $\cos 2\theta \in \left\{ 2 \pm \sqrt{5} \right\}$. Since only one of these roots has magnitude not exceeding $1$, we obtain $\cos 2\theta = 2 - \sqrt{5}$ as claimed.
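The final algebra is easy to verify numerically (an illustrative check, not part of the argument): both roots satisfy the quadratic, only $2-\sqrt 5$ has magnitude at most $1$, and the pre-squaring identity holds at that root.

```python
import math

z1 = 2 - math.sqrt(5)
z2 = 2 + math.sqrt(5)
# both solve z^2 - 4z - 1 = 0
assert abs(z1 * z1 - 4 * z1 - 1) < 1e-12
assert abs(z2 * z2 - 4 * z2 - 1) < 1e-12
# only z1 is a valid value of cos(2*theta)
assert abs(z1) <= 1 and abs(z2) > 1
# the identity (1 - z)/2 = sqrt((1 + z)/2) before squaring holds at z1
assert abs((1 - z1) / 2 - math.sqrt((1 + z1) / 2)) < 1e-12
print("cos(2*theta) = 2 - sqrt(5) verified")
```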
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Cutting a polygon into 2 or 3 smaller, rationally-scaled copies of itself? I've noticed that many 2D geometric figures can be tiled using four smaller copies of themselves. For example, here's how to subdivide a rectangle, equilateral triangle, and right triomino into four smaller copies: Each smaller figure here is scaled down by a factor of $\frac{1}{2}$ in width and height, dropping its area by a factor of four, which is why there are four smaller figures in each. You can also tile some 2D figures with nine smaller copies, each $\frac{1}{3}$ of the original size, or sixteen smaller copies, each $\frac{1}{4}$ of the original size, as shown here: By mixing and matching sizes, we can get other numbers of figures in the subdivisions. For example, here's a $2 \times 1$ rectangle subdivided into five rectangles of the same aspect ratio, an equilateral triangle subdivided into eleven equilateral triangles, and a right triomino tiled by thirty-eight right triominoes. I've been looking for a shape that can tile itself with exactly two or three smaller copies. I know this is possible if we allow the smaller copies to be scaled down by arbitrary amounts, but I haven't been able to find a shape that can tile itself with two or three copies of itself when those smaller copies are scaled down by rational amounts (e.g. by a factor of $\frac{1}{2}$ or $\frac{3}{5}$). My Question My question is the following: Is there a 2D polygon that can be tiled with two or three smaller copies of itself such that each smaller copy's dimensions are a rational multiple of the original size? If we drop the restriction about the smaller figures having their dimensions scaled by a rational multiple, we can do this pretty easily.
For example, a rectangle of aspect ratio $\sqrt{2} : 1$ can tile itself with two smaller copies, and a rectangle of aspect ratio $\sqrt{3} : 1$ can tile itself with three smaller copies. However, in these figures, the two smaller copies are scaled down by a factor of $\frac{\sqrt{2}}{2}$ and $\frac{\sqrt{3}}{3}$, respectively, which aren't rational numbers. If we move away from classical polygons and allow for fractals, then we can do this with a Sierpinski triangle, which can be tiled by three smaller copies of itself. However, it's a fractal, not a polygon. What I've Tried If we scale down a 2D figure by a factor of $\frac{a}{b}$, then its area drops to a $\frac{a^2}{b^2}$ fraction of its original area. This led me to explore writing $1$ as a sum of squares of rational numbers, such as $1 = \frac{4}{9} + \frac{4}{9} + \frac{1}{9}$ or $1 = \frac{9}{25} + \frac{16}{25}$. This gives several possible values for how to scale down the smaller copies of the polygon, but doesn't give a strategy for choosing the shapes of the reduced-size polygon to get the smaller pieces to perfectly tile it. I've looked into other problems like squaring the square and other similar tiling problems. However, none of the figures I've found so far allow for a figure to be tiled with two or three copies of itself. I've also tried drawing a bunch of figures on paper and seeing what happens, but none of them are panning out. Is this even possible in the first place? Thanks!
If all three pieces are the same size, then it is impossible, without invoking fractals, for the smaller pieces to have dimensions that are a rational multiple of the original object's size. In order for the area of the new shape to be $n$ times the area of the original, and still be similar, we must multiply the linear dimensions by $\sqrt{n}$. Since $1/3$ is not a perfect square, $\sqrt{1/3}$ is not a rational number. In order to avoid this we can use fractals, as you saw with the Sierpinski triangle, which allows us to use the fractal dimension instead of the "real-space" dimension.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Circle Geometry - Tangents G'day, everyone. I am trying to answer the question: Let AB be a diameter of circle K with centre O. Choose a point P exterior to K on the line through A and B and construct the tangents to K through P, meeting K at X and Y. Let M be the intersection of AB and XY. Prove that the tangents to K are also tangents to both the circle with centre A and radius AM and the circle with centre B and radius BM. The context for this question is that I have been attempting this mathematics enrichment task for just under 3 months now and it is finally coming to an end; however, the difficulty of the questions has increased and I am beginning to have more difficulty in answering, as the topics discussed in the enrichment are some I have not seen before. What I have learnt in order to answer: I have been reviewing some basic circle geometry laws in order to finish this question; the ones typically learnt at my level of mathematics are: 1. a tangent is perpendicular to the radius at the point of contact, 2. two tangents from an external point are equal in length, 3. if a line is drawn from the centre to the external point of two tangents, the line bisects the angle between them, 4. the "alternate segment theorem", 5. if two external circles have one intersection point, the line at that point is a common tangent. I believe the question will be formatted around these rules as these are the ones expected to be learnt. Any help would be appreciated, and please let me know if there is anything within the question I need to change in order for it to be better formatted for future users of this site.
Let $ BP = b, BO=OA = r$. What is $PX$? $\sqrt{ (b+r)^2 - r^2}$. What is $PM$? $\frac{ b^2+2br}{b+r}$. What is $BM$? $\frac{br}{b+r}$. Hence, why is the circle with radius $BM$ also tangent to the line? Hint: use similarity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Suppose $2+7i$ is a solution of $2z^2+Az+B=0$, where $A,B \in \mathbb{R}$ . Find $A$ and $B$ The question is as follows; Suppose $2+7i$ is a solution of $2z^2+Az+B=0$, where $A, B \in \mathbb{R}$ . Find $A$ and $B$. My understanding is that this equation holds: $$2(2+7i)^2 + A(2+7i) + B = 0$$ which will eventually lead to: $$-90 + 2A + B + i(56+7A) = 0$$ I would like to check if my approach is correct, and if so, what should I do next to derive $A$ & $B$.
A polynomial with real coefficients will have non-real roots in conjugate pairs. Since it has $2+7i$ as a root, it will also have $2-7i$ as a root. Being a degree $2$ polynomial, these are the only roots. The leading coefficient of $2z^2+Az+B=0$ is $2$, so our polynomial is: $$2(z- (2-7i))(z -(2+7i)) = 0$$ Expanding, we obtain: $$2 z^2 - 2(2-7i + 2+7i)z + 2(2-7i)(2+7i) = 0\\ 2z^2 - 8z + 106 = 0$$ We get: $A=-8$, $B = 106$.
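Python's built-in complex numbers make this easy to confirm (an illustrative check, not part of the derivation): with $A=-8$ and $B=106$, both $2+7i$ and its conjugate are exact roots.

```python
# Evaluate 2z^2 - 8z + 106 at the two conjugate roots; with Gaussian-integer
# inputs the arithmetic is exact, so the result is exactly 0.
def p(z):
    return 2 * z * z - 8 * z + 106

assert p(2 + 7j) == 0
assert p(2 - 7j) == 0
print("A = -8, B = 106 confirmed")
```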
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Prove by contradiction that if $n^3$ is a multiple of $3$, then $n$ is a multiple of $3$ Problem statement: Using proof by contradiction, prove that if $n^3$ is a multiple of $3$ , then $n$ is a multiple of $3.$ Attempt 1: Assume that there is exist $n$ which is a multiple of $3$ such that $n^3$ is not a multiple of $3.$ Then $n = 3k $ , $n^3 = 27 k^3 $ which a multiple of $3$, which contradicts the assumption that $n^3$ is a multiple of $3.$ Attempt 2: Assume that there exist $n^3$ which is a multiple of $3$ such that $n$ is not a multiple of $3.$ Then $n = k+ 1 $ or $n= k+2.$ Then $n^3 = k^3 + 3k^2 + 3k + 1$ or $n^3 = k^3 + 6 k^2 + 12 k + 8.$ Both cases contradict the assumption that $n^3 $ is a multiple of $3.$ Which solution is correct ? Or do both work ?
If $n$ is not a multiple of $3$, then $n$ can be written as $3k+1$ or $3k+2$. If $n = 3 k + 1$ then $n^3 = 27 k^3 + 27 k^2 + 9 k + 1 = 3 (9 k^3 + 9 k^2 + 3k) + 1$; so $n^3$ is not divisible by $3$. If $n = 3 k + 2$ then $n^3 = 27 k^3 + 54 k^2 + 36 k + 8 = 3 (9 k^3 + 18 k^2 + 12 k + 2) + 2 $. So, again, $n^3$ is not divisible by $3$. Thus we have by contradiction that if $n^3$ is a multiple of $3$ then so is $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Domain of square root of x squared What is the domain of $f(x)=(\sqrt{x})^2$ ? Is it all real numbers, or are negative numbers still excluded, even after the square? Edit: What I'm really wondering is whether $\lim\limits_{x \to 0} f(x)$ is defined. Sorry for the confusion.
Negative values for $x$ should be included for the function $$h(x)=\sqrt{x^2}=|x|$$ but, as noticed, in this case $f(x)=\left(\sqrt {x}\right)^2$ stands for $f(x)=\sqrt x\sqrt x$, therefore negative values for $x$ are not allowed. More precisely, the given function is the composition $f=g\circ u$ of $u(x)=\sqrt x$ with $g(x)=x^2$, and by definition we first apply $u$ and then $g$. Edit: Therefore $\lim\limits_{x \to 0} f(x)$ is defined only as a right-side limit, that is $$\lim\limits_{x \to 0^+} f(x)$$ Usually we simply write $$\lim\limits_{x \to 0} \sqrt x=0$$ implicitly assuming $x\ge 0$, that is $x\to 0^+$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
The set of Locales is a subset of the set of Toposes. Given an ordered set $(Z, \leq)$, we can consider the category $C_Z$ with $Ob(C)=Z$ and for each $r,e\in Z$, $\hom_{C_Z}(r,e)$ is a singleton if $r\leq e$ and is $\emptyset$ if $\neg(r\leq e)$. The idea is that ordered sets are special kinds of categories (with no more than one morphism between any two objects). An ordered set is called a frame if it admits arbitrary joins and binary meets and binary meets distribute over joins. A morphism between frames is a monotone function preserving arbitrary joins and binary meets. A Grothendieck topos is a category satisfying the so-called Giraud axioms, which are very similar to the axioms of a frame (binary limits; arbitrary small colimits, which are stable under pullbacks). * *Is $Z\mapsto C_Z$ a fully faithful functor from the category of frames to the category of Grothendieck toposes (with geometric morphisms)? *If yes, why is it more common to embed frames into Grothendieck toposes via $Z\mapsto Sh(Z)=$ topos of sheaves on $Z$? Is this embedding somehow related to the functor $Z\mapsto C_Z$? *Does 1. imply that the category of frames is equivalent to the category of Grothendieck toposes of the form $C_Z$ for some ordered set? *Why are frames required to be ordered sets, i.e., such that $\leq$ is antisymmetric (if $r\leq e\leq r$, then $r=e$) and not merely preordered sets? For Grothendieck toposes we don't require that isomorphic objects be equal (which is the categorical analogue).
No, for a frame $Z$, the category $C_Z$ is never a Grothendieck topos (unless it has only one object). In terms of Giraud's axioms, $C_Z$ fails the axiom that coproducts should be disjoint. In a frame, $1+1=1$, while in a topos, $1+1=1$ implies $0=1$, which implies that the topos is trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4241853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $\int_0^1 f(x)^2 dx \le \int_0^1 f'(x)^2 dx$ if $f(0) = f(1) = 0$? I am working in $$H_0^1([0, 1]) = \{u:[0, 1] \to \mathbb{R}| \int_0^1 u'(x)^2 dx < \infty, \text{ and }u(0) = u(1) = 0 \}$$ with inner product $\langle u, v \rangle = \int_0^1 u'(x) v'(x) dx.$ I am trying to show that for $f \in H_0^1,$ $||f||_{\mathcal{L}^2} \le ||f||_{H_0^1}$, which comes out to $$\int_0^1 f(x)^2 \le \int_0^1 f'(x)^2 dx.$$ Clearly the $f(0) = f(1) = 0$ hypothesis is crucial here ($f(x) = 1$ violates the inequality for example) but I am not sure how to use it. I tried to play around with integration by parts, and with Fubini's Theorem (by writing $f(x) = \int_0^x f'(t) dt$) but no luck. I believe it's true that for $x \in [0, 1]$ we have $f(x)^2 \le \int_0^1 f'(t)^2 dt$, and if I could prove this, we would be done.
By Hölder's inequality, $|f(x)|=|\int_0^{x} f'(t)dt| \leq \sqrt {\int_0^{x} (f'(t))^{2}dt}\sqrt {\int _0^{x} 1^{2}dt}\leq \sqrt {\int_0^{1} (f'(t))^{2}dt}$. Squaring and integrating over $[0,1]$ gives $\int_0^1 f(x)^2 dx \le \int_0^1 f'(t)^2 dt$, as required.
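A numerical illustration (not part of the proof) with the test function $f(x)=\sin(\pi x)$, which satisfies $f(0)=f(1)=0$: the two sides are $\int_0^1 f^2 = 1/2$ and $\int_0^1 (f')^2 = \pi^2/2$, consistent with the inequality.

```python
# Midpoint-rule approximation of both L2 norms for f(x) = sin(pi*x).
import math

n = 100000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
f2 = sum(math.sin(math.pi * x) ** 2 for x in xs) * dx                 # int f^2
df2 = sum((math.pi * math.cos(math.pi * x)) ** 2 for x in xs) * dx    # int (f')^2
print(abs(f2 - 0.5) < 1e-6, abs(df2 - math.pi ** 2 / 2) < 1e-4, f2 <= df2)
```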
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $A \in M_n(C)$, $A^m = 0 \implies A^n = 0$ Consider a matrix $A$ of size $n$ over $C$. I tried to show that if $A^m =0$ for some $m$ then $A^n =0$ We may assume that WLOG there exist a vector $x \ne 0$ such that $A^{m-1} x \ne 0$. Suppose $a_0x + a_1 Ax \cdots + a_{m-1}A^{m-1}x =0$ where $a_i$ are scalars. Applying $A^{m-1}$ to the above equation we see that $a_0 A^{m-1}x = 0$ so $a_0 = 0$. Next by applying $A^{m-2}$ on first equation we can see that $a_1 =0$. By continuing the same process we see $a_i = 0$ for all $i$. So $x, Ax, \cdots A^{m-1}x$ are $m$ linearly independent vectors so $m \le n$. So $A^n = A^m A^{n-m} =0$. Is this correct? I'm curious if any other methods also there?
This argument looks correct, and yes, there are some other ways to demonstrate that your statement holds. Both of the following rely on the fact that all of the eigenvalues of $A$ must be equal to $0$. To see this, let $\vec v \neq \vec 0$ be an eigenvector of $A$ with eigenvalue $\lambda$. Then we have $$\lambda^m \vec v = A^m \vec v = 0 \vec v = \vec 0$$ so $\lambda^m = 0$ and hence $\lambda = 0$. Now for the other methods of proof. Method 1: The Cayley-Hamilton theorem gives that $A$ must satisfy its own characteristic polynomial. All of the eigenvalues of $A$ are $0$, so the characteristic polynomial of $A$ is simply $x^n$. Thus, $A^n = 0$. Method 2: Because the eigenvalues of $A$ are all $0$, the Jordan canonical form of $A$ - call this $J$ - has zero entries on and below the main diagonal. It follows that $J^n = 0$, so $A^n = (PJP^{-1})^n = PJ^nP^{-1} = 0$. These methods both rely on some extra machinery to get the job done, which is what makes your solution nice.
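A concrete numeric example (illustrative, not a proof) using a strictly upper triangular matrix, which is always nilpotent: here $n=3$, and knowing only that $A^5=0$ already forces $A^3=0$, while $A^2\neq 0$.

```python
# Plain-list matrix arithmetic, so no external libraries are needed.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(X, p):
    n = len(X)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(p):
        R = matmul(R, X)
    return R

A = [[0, 1, 7],
     [0, 0, 2],
     [0, 0, 0]]
zero = [[0] * 3 for _ in range(3)]
assert matpow(A, 5) == zero   # hypothesis: some power vanishes
assert matpow(A, 3) == zero   # conclusion: A^n = 0 with n = 3
assert matpow(A, 2) != zero   # and 3 is the first such power here
print("A^5 = 0 implies A^3 = 0 for this 3x3 example")
```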
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What am I missing here? (Probability) An organization consists of 10 married couples. A lottery without replacement will decide a chairman and secretary. What's the probability that a married couple gets chosen? My solution attempt: There are 2 ways a couple can get the positions, namely {(secretary, chairman), (chairman, secretary)}, but one is redundant so there is 1 combination. Now we need to determine the number of ways of choosing 2 people out of 20, that is $\binom{20}{2}$. Let our event be $A$; then $P(A) = \frac{1}{\binom{20}{2}}=\frac{1}{190}.$ The actual solution is $\frac{1}{19}$. I'm guessing that I have to multiply the count for one couple by 10 to account for every couple? That would give the right solution, but I can't seem to convince myself of why that is true.
Suppose we first choose chairman and then secretary. There are 20 candidates. Suppose a chairman is picked. $19$ people left. Only one person is the new chairman's husband/wife. Thus, with probability $1/19$ they get the position of secretary.
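Not needed for the argument, but you can confirm the $\frac{1}{19}$ by brute force (a Python sketch; the labeling of people is my own convention): label the 20 people $0,\dots,19$ and pair person $2k$ with person $2k+1$ as a married couple.

```python
# Enumerate every ordered (chairman, secretary) outcome and count the
# outcomes in which the two office holders are a married couple.
from itertools import permutations
from fractions import Fraction

people = range(20)
couple_draws = 0
total = 0
for chair, sec in permutations(people, 2):
    total += 1
    if chair // 2 == sec // 2:      # persons 2k and 2k+1 form a couple
        couple_draws += 1

prob = Fraction(couple_draws, total)
```

There are $20 \cdot 19 = 380$ ordered outcomes and $10 \cdot 2 = 20$ of them are couples, giving exactly $\frac{1}{19}$.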
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Finding the equation of a straight line that passes through the points $(\alpha, \beta)$ and whose x & y intercepts are equal Answers given by my book: $x+y=\alpha+\beta...(1)$, $x-y=\alpha-\beta...(2)$ Answer given by me: We know that the straight line passes through the point $(\alpha, \beta)$. Let, the x & y intercepts are $a$ & $b$ respectively. We know, $$a=b...(i)$$ $$\frac{x}{a}+\frac{y}{b}=1...(ii)$$ Putting the values of $a$ from (i) in and putting the values of the point $(\alpha, \beta)$ in (ii), $$\frac{\alpha}{a}+\frac{\beta}{a}=1$$ $$\implies a=\alpha + \beta...(iii)$$ Now, putting the value of $a$ in (ii), $$\frac{x}{\alpha + \beta}+\frac{y}{\alpha + \beta}=1[\because a=b]$$ $$x+y=\alpha+\beta$$ This is the same as one of the answers(1) given in my book. However, I can't seem to derive the second answer, (2). How can I derive the second answer given by my book? Is my book wrong?
There are $3$ possible cases: Case 1: both intercepts, and $\alpha$ and $\beta$ are $0$ * *Then the straight line has equation of the form $$y=mx$$ for some nonzero real $m.$ Case 2: both intercepts are $0,$ and $\alpha, \beta$ are nonzero * *Then the straight line has equation of the form $y=\frac\beta\alpha x,$ i.e., $$\alpha y=\beta x.$$ Case 3: neither intercept is $0$ * *Then the straight line has gradient $-1$ and equation $y-\beta=(-1)(x-\alpha),$ i.e., $$x+y=\alpha+\beta.$$ (This is just an alternative to your correct working & answer for this case.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the probability of having one red ball given that at most two of the selected balls are red? A box contains 4 red and 5 white balls. Three balls are selected randomly. What is the probability of having one red ball given that at most two of the selected balls are red? I create a pdf table as follows: $$\begin{array}{c|c|c|c|c} X & 0 & 1 & 2 & 3 \\ \hline P & \frac{5}{42} & \frac{20}{42} & \frac{15}{42} & \frac{2}{42} \end{array}$$ And I have an answer but I can't continue: $P(1R \mid {<}3R)= (\ldots)$
You are on the right track, but your reasoning needs refinement. What you calculated are the probabilities of drawing $X$ red balls, where $X$ ranges from $0$ to $3$. What is worth noting from your table is that $\Pr[X = 3] = \frac{1}{21}$; that is to say, the probability that all three balls drawn are red is $$\frac{\binom{4}{3}\binom{5}{0}}{\binom{9}{3}} = \frac{1}{21};$$ this is because there are $\binom{4}{3} = 4$ ways to choose $3$ red balls from $4$ in the box; $\binom{5}{0} = 1$ way to choose $0$ white balls from the $5$ in the box; and $\binom{9}{3} = 84$ ways to choose any three balls from the box. The reason why I say your reasoning needs refinement is that we can see that in fact, there are $4$ such elementary outcomes out of the $84$ total possible. The other $84 - 4 = 80$ outcomes correspond to the condition that there are at most $2$ red balls drawn; i.e., "at most two red balls drawn" is equivalent to saying $X \le 2$. And of these, how many correspond to having exactly one red ball? Well, you already have it in your table, if you count outcomes instead of probabilities: $$\Pr[X = 1] = \frac{10}{21} = \frac{40}{84}$$ implies there are $40$ outcomes in which exactly one red ball is drawn. And out of the $80$ that correspond to at most two red balls drawn, the answer is simply $$\Pr[X = 1 \mid X \le 2] = \frac{40}{80} = \frac{1}{2}.$$ In fact, this is precisely what evaluating the conditional probability means: $$\Pr[X = 1 \mid X \le 2] = \frac{\Pr[(X = 1) \cap (X \le 2)]}{\Pr[X \le 2]} = \frac{\Pr[X = 1]}{1 - \Pr[X = 3]}.$$ We reason that the event $X = 1$ is a proper subset of the event $X \le 2$, thus their intersection is simply $X = 1$; and the event $X \le 2$ is the complement of the event $X = 3$. All we did when we counted outcomes was make it more relatable to the events concerned, rather than working with the probabilities.
For if your table had looked like this: $$\begin{array}{c|c|c|c|c} X & 0 & 1 & 2 & 3 \\ \hline \text{Number of outcomes} & 10 & 40 & 30 & 4 \end{array}$$ I imagine it might have been easier to understand how to get the desired conditional probability. To further help your understanding, how would you compute the probability that at least $2$ white balls were drawn, given that the number of red balls drawn was an odd number? For another exercise, suppose the box also contains $6$ green balls, and you draw $5$ balls at random without replacement. What is the probability that $1$ ball drawn is red, $2$ are white, and $2$ are green? Finally, given that exactly $2$ balls drawn are green, what is the probability that you drew at most $1$ red ball?
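The outcome counts above are easy to confirm by exhaustive enumeration (a small Python sketch, not part of the solution; positions $0$–$3$ are the red balls by my own labeling):

```python
# Enumerate all C(9,3) = 84 draws of 3 balls from 4 red + 5 white and
# tally the draws relevant to Pr[X = 1 | X <= 2].
from itertools import combinations
from fractions import Fraction

balls = ['R'] * 4 + ['W'] * 5
at_most_two_red = 0
exactly_one_red = 0
for draw in combinations(range(9), 3):
    reds = sum(1 for i in draw if balls[i] == 'R')
    if reds <= 2:
        at_most_two_red += 1
        if reds == 1:
            exactly_one_red += 1

p = Fraction(exactly_one_red, at_most_two_red)
```

This reproduces the $40$ and $80$ in the table, hence the conditional probability $\frac12$.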
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limits while calculating area with integrals Let's derive the area of a circle of radius $R$ using integration. Here, we take a random distance $r$ from the center and take an infinitesimal increment $dr$ which gives us an infinitesimal ring.The area of that ring is $2\pi rdr$. Now we integrate these infinitesimals ring to get the exact area. The expression stands out to be $\int_{0}^{R} 2\pi rdr$. I don't understand why we are taking the upper limit $R$. I mean we go a distance $r$ and then take an increment and calculate the ring's area. But if we take increment after reaching $R$,that makes no sense to me since we are now overcounting an extra ring outside of the circle. Please help clear my doubts.
The extra ring that you are counting is technically an infinitesimally thin ring, so it doesn't contribute to the area. Mathematically, you could have said something like: the area is $$\lim_{h\to 0}\left(\int_{0}^R 2\pi r\,dr + h\right)=\int_{0}^R 2\pi r\,dr.$$ I hope that makes it clear.
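To see the same thing numerically (an illustrative Python sketch, not a proof): approximate the integral by finitely many thin rings of width $\Delta r$. Each individual ring's area goes to $0$ as $\Delta r \to 0$, yet the sum converges to $\pi R^2$.

```python
import math

R = 3.0
N = 1000
dr = R / N
# midpoint Riemann sum of the ring areas 2*pi*r*dr
area = sum(2 * math.pi * ((i + 0.5) * dr) * dr for i in range(N))
expected = math.pi * R ** 2
```

(The midpoint rule is exact here because the integrand $2\pi r$ is linear, so the agreement is to floating-point precision.)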
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing $\int_0^\infty\frac{Li_2\frac{(1-x^4)^2}{(1+x^4)^2}}{1+x^4}dx=\sqrt2\,\Re\left(\int_0^1\frac{Li_2\frac{(1+x^4)^2}{(1-x^4)^2}}{1+x^2}dx\right)$ A beautiful equality: $$\int_{0}^{\infty }\frac{Li_{2}\left(\frac{(1-x^{4})^{2}}{(1+x^{4})^{2}}\right)}{1+x^{4}}dx=\sqrt{2} \Re \left(\int_{0}^{1 }\frac{Li_{2}\left(\frac{(1+x^{4})^{2}}{(1-x^{4})^{2}}\right)}{1+x^{2}}dx\right)$$ This question was proposed by Sujeethan Balendran in RMM (Romanian Mathematical Magazine). I proved it using complex numbers, but I could not find a real method using $$Li _{2}(a)=-a\int_{0}^{1}\frac{\ln x}{1-ax}\,dx.$$ I believe some of you know some nice proofs of this; can you please share them with us?
$$\int_{0}^{\infty }\frac{\operatorname{Li}_{2}\!\left(\frac{(1-x^{4})^{2}}{(1+x^{4})^{2}}\right)}{1+x^{4}}\,dx =\frac{1}{2}\int_{0}^{\infty }\frac{\operatorname{Li}_{2}\!\left(\frac{(1-x^{2})^{2}}{(1+x^{2})^{2}}\right)}{\sqrt{x}\,(1+x^{2})}\,dx$$ Substituting $x=\tan\frac{\theta}{2}$: $$=\frac{1}{4}\int_{0}^{\pi}\operatorname{Li}_{2}(\cos^{2}\theta)\sqrt{\tan\tfrac{\theta}{2}}\,d\theta =\frac{1}{8}\int_{0}^{\pi}\operatorname{Li}_{2}(\cos^{2}\theta)\left(\sqrt{\tan\tfrac{\theta}{2}}+\sqrt{\cot\tfrac{\theta}{2}}\right)d\theta$$ $$=-\frac{2\sqrt{2}}{8}\,\Re\int_{0}^{\pi}\frac{i\operatorname{Li}_2(\cos^{2}\theta)\,e^{i\theta}}{\sqrt{1-e^{2i\theta}}}\,d\theta =-\frac{2\sqrt{2}}{8}\,\Re\oint_{\left|z\right|=1}\frac{\operatorname{Li}_2\!\left(\frac{(z+\frac{1}{z})^{2}}{4}\right)}{\sqrt{1-z^{2}}}\,dz$$ $$=\frac{1}{2\sqrt{2}}\,\Re\int_{-1}^{1}\frac{\operatorname{Li}_2\!\left(\left(\frac{x^{2}+1}{2x}\right)^{2}\right)}{\sqrt{1-x^{2}}}\,dx =\frac{1}{\sqrt{2}}\,\Re\int_{0}^{1}\frac{\operatorname{Li}_2\!\left(\left(\frac{x^{2}+1}{2x}\right)^{2}\right)}{\sqrt{1-x^{2}}}\,dx$$ Substituting $x=\frac{1-x}{1+x}$: $$=\frac{1}{\sqrt{2}}\,\Re\int_{0}^{1}\frac{\operatorname{Li}_2\!\left(\left(\frac{x^{2}+1}{1-x^{2}}\right)^{2}\right)}{(1+x)\sqrt{x}}\,dx$$ so $$=\sqrt{2}\,\Re\int_{0}^{1}\frac{\operatorname{Li}_2\!\left(\left(\frac{x^{4}+1}{1-x^{4}}\right)^{2}\right)}{1+x^{2}}\,dx.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4242902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to solve the inequality $-1\leq xy\leq 1$? How can we solve this inequality graphically? Why do we talk about the two hyperbolas $\frac{1}{x}$ and $\frac{-1}{x}$? I split this inequality into the two sub-inequalities $$xy\leq 1$$ and $$xy\geq -1.$$ For the first sub-inequality I have two cases, according to whether $x$ is greater than zero or not. Likewise, the second gives two further cases depending on whether $x \geq 0$ or not. That said, I no longer know how to solve my system of four sub-inequalities.
Dividing by cases, we have that * *for $x>0$ $$-1\leq xy\leq1 \iff -\frac1x \le y \le \frac1x$$ * *for $x<0$ (direction of inequalities flip) $$-1\leq xy\leq1 \iff \frac1x \le y \le -\frac1x$$ and for $x=0$ we have $xy=0$, so every $y$ satisfies the inequality.
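A quick random-sampling check of this case analysis (an illustrative Python sketch, not part of the solution):

```python
# For random points, membership in {-1 <= x*y <= 1} must agree with the
# case-by-case description in terms of y versus 1/x.
import random

random.seed(0)
for _ in range(10000):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    in_region = -1 <= x * y <= 1
    if x > 0:
        by_cases = -1 / x <= y <= 1 / x
    elif x < 0:
        by_cases = 1 / x <= y <= -1 / x
    else:
        by_cases = True
    assert in_region == by_cases
ok = True
```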
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Find the number of ways the commander can be chosen Here is the question I'm trying to solve: n soldiers standing in a line are divided into several non-empty units and then a commander is chosen for each unit. Count the number of ways this can be done. My approach: Let $C(x)$ be the required generating function. Considering the type A structures on the non empty intervals as $a_k = 1$, the generating function is given as $A(x) = \sum_{k \geq 1} 1\cdot x^k$ which gives, $A(x) = \frac{x}{1-x}$. Next the type B structures are given by $b_k = {k\choose 1} $ giving the generating function $B(x) = \sum_{k \geq 1} {k\choose 1}\cdot x^k$ which gives, $B(x) = \frac{x}{(1-x)^2}$. Now $C(x) = B(A(x))$ Solving for $C(x)$ I get $C(x) = \frac{x \cdot (1-x)}{(1-2x)^2}$ Is my approach correct? How do I find the coefficient of $x^n$ in $C(x)$? Edit: Any hints on where I'm going wrong here? Because the expected answer doesn't match the answer I have calculated. Any help is appreciated.
I think your generating function may be wrong. In particular when $n=4$, I think it gives $20$ when I can count $21$ cases. I think if $b_{n,k}$ is the number of choices when you have $n$ soldiers and the first group has $k$ individuals, then you can consider adding an additional soldier at the beginning, so you can say $$b_{n+1,k+1}=\frac{k+1}{k}b_{n,k}$$ for $k\ge 1$ while $$b_{n+1,1}=\sum\limits_{k=1}^n b_{n,k}$$ which leads to results like * *$b_{n,k} = k\, b_{n+1-k,1}$ *$b_{n+1,1} = b_{n,1}+ \sum\limits_{m=1}^n b_{m,1}$ *$b_{n+1,1}=3b_{n,1}-b_{n-1,1}$ *since the number you want is $a_n = b_{n+1,1}$: $$a_{n}=3a_{n-1}-a_{n-2}$$ Since the numbers are $1,3,8,21,\ldots$ when $n=1,2,3,4,\ldots$, this leads to a generating function of the form $\dfrac{x}{1-3x+x^2}$ if you think the answer is $0$ when $n=0$, or of the form $\dfrac{1-2x+x^2}{1-3x+x^2}$ if you think the answer is $1$ when $n=0$. This is a second order recurrence and can be solved in terms of $\frac{3 \pm \sqrt 5}2$, but can also be written by saying the coefficient of $x^n$ is $\text{Fib}(2n)$.
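The count itself is easy to brute-force, which confirms the sequence $1,3,8,21,\ldots$ and the $\text{Fib}(2n)$ claim (a Python sketch; the recursion below is my own direct translation of "choose the first unit, then its commander"):

```python
# ways(n): split n soldiers (in a line) into nonempty consecutive units
# and pick one commander per unit, i.e. sum over compositions of n of
# the product of the part sizes.
from functools import lru_cache

@lru_cache(None)
def ways(n):
    if n == 0:
        return 1
    # first unit has k soldiers, giving k choices of commander
    return sum(k * ways(n - k) for k in range(1, n + 1))

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

counts = [ways(n) for n in range(1, 8)]
fib_even = [fib(2 * n) for n in range(1, 8)]
```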
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How do I make sense of combinations with repetition? I'm a non-math major but have to study a course in probability and statistics for graduate school. I was able to understand permutations with repetition and then permutations without repetition using examples, and then trying to generalize. But I can't seem to find any examples of combinations with repetition that explain how the formula works, without going into multisets and things like that. Help would be much appreciated.
one practical example is the following: Suppose that you and 7 other friends go to a restaurant. You each want to order a sandwich, and the restaurant offers 3 varieties of sandwiches (A, B and C). Then combinations with repetitions answer the question "How many different orders can you make"? It's not important who orders which sandwich, only the amount of sandwiches of each type ordered. You can represent each sandwich to be ordered with an X. So you need to count the ways to distribute 8 X's in three groups: X X X X X X X X You can add 2 separator marks to distribute the X's in three subsets, and then claim that the first space is for sandwich of type A, the second for sandwich of type B and the third for sandwiches of type C. Some examples of these are X X X | X X X X | X (3 A sandwiches, 4 B sandwiches, 1 C sandwich) X X | X X | X X X X (2 A sandwiches, 2 B sandwiches, 4 C sandwich) | X X X X X X | X X (0 A sandwiches, 6 B sandwiches, 2 C sandwich) Note that this represents every way to distribute the sandwiches, and thus counting how to arrange the 2 | symbols and the 8 X gives the solution to the problem. So, if $CR^3_8$ denotes the combinations with repetition that solve this problem, then $$CR^3_8 = {3 + 8 - 1\choose8}$$ Because that's the ways you can rearrange the X's and the |'s
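The stars-and-bars count in the sandwich example can be verified by listing the orders directly (a short Python sketch; the tuple $(a,b,c)$ records how many sandwiches of each type are ordered):

```python
# Enumerate all (a, b, c) with a + b + c = 8 and compare with the
# combinations-with-repetition formula C(3 + 8 - 1, 8).
from math import comb

orders = [(a, b, 8 - a - b)
          for a in range(9) for b in range(9 - a)]
n_orders = len(orders)
formula = comb(3 + 8 - 1, 8)
```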
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Can you distribute implication statements? Logically, the following statement makes sense in my head, but I can't find a law or proof that gives this example or explains this. $$ ((p \lor q) \to r) \;\;\text{is equivalent to}\;\; ((p \to r) \lor (q \to r)) $$ I would think this is sort of like the distributive property, but I can't find a law that allows this (or I keep skipping over it in my notes).
(Elaborating on the first two comments under the question.) * *$$(p \lor q) \to r$$ means “the truth of either of $p$ or $q$ guarantees the truth of $r$”, whereas $$(p \to r) \lor (q \to r) $$ means “both $p$ and $q$ must be true to guarantee $r$'s truth”   (having only one of $p$ and $q$ true is insufficient to guarantee $r$'s truth because its corresponding argument of the disjunction might be false). *$${\quad(p \lor q) \to r\\\equiv\lnot(p \lor q)\lor r\\\equiv(\lnot p\land\lnot q)\lor r\\\equiv (\lnot p\lor r) \land (\lnot q\lor r)\\\equiv (p \to r) \land (q \to r)\\~\\\quad(p\land q)\to r \\\equiv \lnot(p\land q)\lor r\\\equiv (\lnot p\lor\lnot q)\lor r\\\equiv (\lnot p\lor r)\lor(\lnot q\lor r)\\\equiv (p\to r)\lor(q\to r)}$$
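Since there are only $8$ truth assignments, all of this can be checked exhaustively (a Python sketch; `implies` is my own helper name):

```python
# Truth-table check: (p or q) -> r matches (p->r) and (q->r);
# (p and q) -> r matches (p->r) or (q->r); and the equivalence
# proposed in the question fails.
from itertools import product

def implies(a, b):
    return (not a) or b

conj_ok = all(
    implies(p or q, r) == (implies(p, r) and implies(q, r))
    for p, q, r in product([False, True], repeat=3)
)
disj_ok = all(
    implies(p and q, r) == (implies(p, r) or implies(q, r))
    for p, q, r in product([False, True], repeat=3)
)
# the question's proposed equivalence, which is NOT valid
proposed_ok = all(
    implies(p or q, r) == (implies(p, r) or implies(q, r))
    for p, q, r in product([False, True], repeat=3)
)
```

(A counterexample to the proposed form is $p$ true, $q$ false, $r$ false.)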
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Picard group of double "line" Are there any nice descriptions of the Picard group of a non-reduced double structure on $\mathbb{P}^1?$ In particular, I'm looking for a description that makes clear the difference between the Picard group of the double line $x^2=0$ in $\mathbb{P}^2$ (in my mind the Picard group should be zero-dimensional, since the curve is cut out by a quadratic equation and thus "genus zero") and the Picard group of the double structure $f(x,y,z)^2=0,$ where $f$ is a smooth conic (which should be three dimensional, since the curve is genus three).
Let $k$ denote the ground field. The paper https://www.math.columbia.edu/~bayer/papers/Ribbons_BE95.pdf of Bayer-Eisenbud studies the geometry of ribbons/double structures on $\mathbb P^1$; concretely, these are non-reduced scheme structures $C$ on $D=\mathbb P^1$ such that the ideal $\mathcal I$ of $D$ inside $C$ satisfies $\mathcal I^2 = 0$. The genus of this scheme is $$ g = g(C) = 1 - \chi(C) = 1 - h^0(\mathcal O_C) + h^1(\mathcal O_C), $$ and according to Prop. 4.1, $\operatorname{Pic}(C) \cong k^g \times \mathbb Z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the vector of minimum norm... I'm working on this exercise: Show that the subset $M = \{\,y = (\eta_j) : \sum_j \eta_j = 1\,\}$ of complex space $\mathbb{C}^{n}$ is complete and convex. Find the vector of minimum norm in $M$. I've proved the completeness and convexity part, but I could not find the minimizing vector. I have the suspicion that it's $(\frac{1}{n},\frac{1}{n},...,\frac{1}{n})$, but I couldn't find an argument. Could you help me? Thank you in advance.
This also follows directly from the Cauchy-Schwarz inequality: $$ \sum_{j=1}^n \eta_j=\sum_{j=1}^n \eta_j\cdot 1\leq \left(\sum_{j=1}^n |\eta_j|^2\right)^{1/2}\left(\sum_{j=1}^n 1\right)^{1/2}=n^{1/2}\left(\sum_{j=1}^n |\eta_j|^2\right)^{1/2} $$ With equality if and only if there exists $\lambda\geq 0$ such that $\eta_j=\lambda$ for all $j\in\{1,\dots,n\}$. The constraint $\sum_j \eta_j=1$ is then satisfied if and only if $\lambda=1/n$.
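A numerical illustration of the bound (a Python sketch, not part of the proof; the sampling scheme is my own — random complex vectors rescaled so their coordinate sum is $1$):

```python
# Random vectors with sum(eta) = 1 never have norm below 1/sqrt(n),
# the norm of (1/n, ..., 1/n).
import math
import random

random.seed(1)
n = 5
min_norm = 1 / math.sqrt(n)
worst_gap = float('inf')
for _ in range(2000):
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    s = sum(v)
    if abs(s) < 1e-9:
        continue
    eta = [z / s for z in v]          # now sum(eta) == 1
    norm = math.sqrt(sum(abs(z) ** 2 for z in eta))
    worst_gap = min(worst_gap, norm - min_norm)
```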
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding smooth behaviour of infinite sum Define $$E(z) = \sum_{n,m=-\infty}^\infty \frac{z^2}{((n^2 + m^2)z^2 + 1)^{3/2}} = \sum_{k = 0}^\infty \frac{r_2(k) z^2}{(kz^2 + 1)^{3/2}} \text{ for } z \neq 0$$ $$E(0) = \lim_{z \to 0} E(z) = 2 \pi$$ where $r_2(k)$ is the number of ways of writing $k$ as a sum of two squares of integers. Is $E(z)$ smooth at $z = 0$, and can we evaluate its derivatives $(\partial_z)^n E(z)|_{z = 0}$? Footnote 1: There is a physical motivation for these equations, $E(z)$ is the electric field generated by a $2d$ lattice of point charges where the lattice spacing is $z$ and the charge density is held fixed. $E(0)$ here is the continuum limit. Footnote 2: Here is a plot of the behavior of the sum in the second representation I wrote above from $k = 0$ to $k = N$, as a function of $z \in [-2,2]$. The black line is $2 \pi$. It looks like the limit will be very flat at $z = 0$ (possibly all derivatives vanish? That would be cool)
Here's another way to proceed. The Poisson summation formula tells us that summing the function over the lattice is the same as summing its Fourier transform. Luckily, we can actually do the Fourier transform: $$f(x) = \int_{\mathbb{R}^2} \frac{e^{-2 \pi i \langle y , x \rangle}z^2}{(y^2 z^2 + 1)^{3/2}} \mathrm{d}y = \int_{\mathbb{R}^2} \frac{e^{-2 \pi i v_1 x}z^2}{(v^2 z^2 + 1)^{3/2}} \mathrm{d}v = 2\pi e^{-2\pi |x|/z}$$ where we have used the transformation of coordinates $v_1 = \frac{x_1 y_1 + x_2 y_2}{x}, v_2 = \frac{x_2 y_1 - x_1 y_2}{x}$ to do the first transformation, and then we can integrate over $v_2$ by trigonometric substitution, and over $v_1$ by picking out the residue in the top/bottom half plane of the contour integral. Using this, we can explicitly write down the function as: $$E(z) = \sum_{k=0}^\infty 2\pi \ r_2(k) \ e^{-2\pi \sqrt{k}/z} = 2\pi + 8\pi e^{-2\pi/z} + 8\pi e^{-2\sqrt{2}\pi/z} + \dots$$ The 'problem' at $z=0$ in position space has been fixed, and it's clear to see now how all derivatives vanish at $z=0$.
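One can spot-check this numerically (a Python sketch; the truncation radius and tolerance are my own choices): at $z=\tfrac12$ the exponentially small corrections are invisible, so a direct truncated lattice sum should already sit within its own truncation error of $2\pi + 8\pi e^{-2\pi/z}$.

```python
# Compare the truncated lattice sum for E(1/2) with the Poisson-side
# expansion.  Truncating at |n|, |m| <= N leaves a tail of order
# 2*pi/(z*N), so agreement within 0.1 is expected for N = 400.
import math

z = 0.5
N = 400
S = 0.0
for n in range(-N, N + 1):
    for m in range(-N, N + 1):
        S += z * z / ((n * n + m * m) * z * z + 1) ** 1.5

poisson_side = 2 * math.pi + 8 * math.pi * math.exp(-2 * math.pi / z)
gap = abs(S - poisson_side)
```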
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Proof of equality involving logged fraction of Gamma functions Reading up on the log-likelihood of the Zero-Inflated Negative Binomial Regression Model, I have noticed an equality that I have not known before. Since I am not too familiar with Gamma functions involving non-integers, I find myself unable to prove the equality and thus I am looking for help. The equality goes $$ \ln \left(\frac{\Gamma(x_i + \theta)}{\Gamma(\theta)}\right) = \sum_{j=0}^{x_i -1}\ln(j+\theta)\text, $$ where $\theta\in[0,1], x_i \in \mathbb N_0$. See this link, equation D-12 for a similar formulation. I would appreciate any help on this subject.
$$\sum_{j=0}^{x_i-1} \ln(j+\theta )=\ln(\theta )+\ln(1+\theta )+\ln(2+\theta )+\cdots+\ln(x_i-1+\theta)=\ln\bigl(\theta(1+\theta)(2+\theta)\cdots(x_i-1+\theta)\bigr)$$ Repeatedly applying the functional equation $\Gamma(t+1)=t\,\Gamma(t)$ gives $$\Gamma(x_i+\theta)=(x_i-1+\theta)(x_i-2+\theta)\cdots(1+\theta)\,\theta\,\Gamma(\theta),$$ so the product above equals $\frac{\Gamma(x_i+\theta)}{\Gamma(\theta)}$, and therefore $$\sum_{j=0}^{x_i-1} \ln(j+\theta )=\ln\!\left(\frac{\Gamma(x_i+\theta)}{\Gamma(\theta ) }\right).$$
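This is easy to confirm numerically, since `math.lgamma` computes $\ln\Gamma(x)$ directly (a Python sketch; the value of $\theta$ is arbitrary):

```python
# Compare ln(Gamma(x_i + theta)) - ln(Gamma(theta)) with the finite sum
# of ln(j + theta) for several x_i.
import math

theta = 0.37
for x_i in range(1, 10):
    lhs = math.lgamma(x_i + theta) - math.lgamma(theta)
    rhs = sum(math.log(j + theta) for j in range(x_i))
    assert abs(lhs - rhs) < 1e-10
checked = True
```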
{ "language": "en", "url": "https://math.stackexchange.com/questions/4243982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
trace inequality for trace(ABA*C) I have three $n \times n$ matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ and I calculate the trace \begin{gather} \mathrm{trace}\left(\mathbf{A} \mathbf{B} \mathbf{A}^* \mathbf{C}\right), \end{gather} where $\mathbf{A}^*$ denotes the conjugate transpose (Hermitian) of $\mathbf{A}$. Matrix $\mathbf{A}$ is positive definite with $\left\|\mathbf{A}\right\|^2 \leq n$. Matrices $\mathbf{B}$, $\mathbf{C}$ are Hermitian positive semidefinite, with $\mathbf{B} \mathbf{B}^* = \mathbf{B}$ and $\mathbf{C} \mathbf{C}^* = \mathbf{C}$, and $\left\|\mathbf{B}\right\|^2 = \left\|\mathbf{C}\right\|^2 = m < n$. I would like to prove/disprove the following inequality: \begin{gather} \mathrm{trace}\left(\mathbf{A}\mathbf{B}\mathbf{A}^*\mathbf{C}\right) \leq \mathrm{trace}\left(\mathbf{A}\mathbf{A}^*\right) \mathrm{trace}\left(\mathbf{B}\mathbf{C}\right) \end{gather} Any hint would be welcome.
I have found now a counterexample that shows that the considered inequality does generally not hold. Specifically, if B and C are generated as $B = Q_B Q_B^*$ and $C = Q_C Q_C^*$, where $Q_B$ and $Q_C$ are bases for orthogonal $m$-dimensional subspaces, then $\mathrm{trace}(B C)$ is zero, whereas $\mathrm{trace}(A B A^* C)$ is generally not zero.
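A concrete $2\times 2$ instance of this counterexample (a pure-Python sketch; the specific matrices are my own choice, with $B$, $C$ the projections onto orthogonal coordinate lines so that $\mathrm{trace}(BC)=0$):

```python
# trace(A B A^T C) > 0 while trace(A A^T) * trace(B C) = 0, so the
# proposed inequality fails.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

A = [[0.9, 0.3], [0.3, 0.9]]   # positive definite; Frobenius norm^2 = 1.8 <= n = 2
B = [[1.0, 0.0], [0.0, 0.0]]   # projection onto span(e1)
C = [[0.0, 0.0], [0.0, 1.0]]   # projection onto span(e2)

lhs = trace(matmul(matmul(matmul(A, B), A), C))   # A symmetric, so A^T = A
rhs = trace(matmul(A, A)) * trace(matmul(B, C))
```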
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove that there exists an integer $a$ with $1 \leq a \leq p-2$ such that neither $a^{p-1}-1$ nor $(a+1)^{p-1}-1$ is divisible by $p^2$. Let $p \geq 5$ be a prime number. Prove that there exists an integer $a$ with $1 \leq a \leq p-2$ such that neither $a^{p-1}-1$ nor $(a+1)^{p-1}-1$ is divisible by $p^2$. All my progress: * *We have by FLT $\{1^p,2^p,\dots, {p-2}^p\}\equiv \{1,2,\dots , p-2\} \mod p.$ *We also have by FLT, $p|a^{p-1}-1$ *We want to show that $v_p(a^{p-1}-1)=1=v_p({a+1}^{p-1}-1).$ *I also got $(p-a)^{p-1}\equiv a^{p-1}+a^{p-2}\cdot (p-1)\cdot a^{p-1}+p\cdot a^{p-2}$ Also, if we can show $ \{a^{p-1}-p \mod p^2: 1 \leq a \leq p-2\}=$ $\{0,p,\cdots, p(p-1)\} $ in some form, and then we will be done. Would be helpful if one can send hints in place of a solution.
Identify the integers $\{1\le n< p^2:p\nmid n\}$ with the cyclic group $(\mathbb{Z}/p^2\mathbb{Z})^\times$. Precisely $p-1$ of these have order dividing $p-1$. Call these integers $a_1<\ldots<a_{p-1}$. Let $m$ be the largest integer with $a_m\le p-1$. Note that $a_1=1$. If your claim is false, every pair $\{1,2\},\{3,4\},\ldots,\{p-2,p-1\}$ contains at least one $a_i$, whence $m\ge \frac12(p-1)$. However, note that for $p\ge 3$ we can construct $3m-1$ elements of order dividing $p-1$ as follows: $$1=a_1^2<a_1a_2<a_2^2<\dots<a_{m-1}a_m<a_m^2<p^2-a_m<\dots<p^2-a_1<p^2.$$ Hence, $\frac13p\ge m\ge \frac12(p-1)$, which gives a contradiction for $p\ge 5$.
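The statement can also be verified directly for small primes (a Python sketch, independent of the proof; `witness` is my own helper name):

```python
# For each prime p, search for a in [1, p-2] such that p^2 divides
# neither a^(p-1) - 1 nor (a+1)^(p-1) - 1.
def witness(p):
    for a in range(1, p - 1):
        if pow(a, p - 1, p * p) != 1 and pow(a + 1, p - 1, p * p) != 1:
            return a
    return None

witnesses = {p: witness(p) for p in (5, 7, 11, 13, 17, 19)}
```

Note that $a=1$ always fails the first condition, and e.g. for $p=11$ the base $3$ satisfies $3^{10}\equiv 1 \pmod{121}$, so the search genuinely has to skip some values.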
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Trig Substitution quadrant diagram Confusion Evaluate $\, \displaystyle \int _{-1}^{-1/2} \frac{dx}{\sqrt {4x^2-1}}$. $My\ work:-$ substituting $\, \displaystyle 2x=\sec (\theta),\, \,$ $\Rightarrow \displaystyle \displaystyle \int \frac{dx}{\sqrt{(2x)^2-1}} \displaystyle = \displaystyle \frac{1}{2}\int \frac{ \sec (\theta )\, \tan (\theta )\, d\theta }{\sqrt{\sec ^2(\theta )-1}} $ $\Rightarrow \displaystyle \int \frac{ \sec (\theta )\, \tan (\theta )\, }{2\, |\tan (\theta )|} \, d\theta$ now since$\displaystyle \, -1\leq 2x = \sec (\theta ) \leq -\frac{1}{2}\ $ we have quadrant II and III where $\sec (\theta )\leq 0.\,$ $Question:-$ 1. In solution they given a diagram for IIIrd quadrant and Yes i know when we calculate sec we get same 2x, but i am confused how'd they come up with this diagram ? and here's my Diagram, it will really helpful if someone please point out mistakes in my diagram sorry for such Naive question
You have made a mistake in your second step. You set $2x = \sec\theta$. Since $x \ge -1$, we have $2x \ge -2$, and therefore $\sec\theta \ge -2$. As $\sec\theta$ is negative here, this gives $\cos\theta \le -\frac{1}{2}$. Now proceed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ivan has 3 red blocks, 4 blue blocks and 2 green blocks. He builds the tower with randomly selected blocks but only stops when the tower consists of all three colours. * *What is the probability that the tower is 4 blocks tall? My approach is the following but I am not sure at all: to make it 3 colours it builds a tower that is either 3 blocks or 8 blocks. 8 blocks would happen in the case where the first 7 blocks are 4Blue and 3Red, so the 8th must be the green block. so $E(8) = 4B,3R,1G = \binom{4}{4}*\binom{3}{3}*\binom{2}{1}= 2$ $E(7) = 4B,2G,1R = \binom{4}{4}*\binom{2}{2}*\binom{3}{1}= 3$ E(6) = let's reason the other way around. First $\binom{9}{6} = 84$. then the combinations of 6 blocks with only 2 colours is 4B,2R or 4B,2G or 3B,3R. This makes: $ \binom{4}{4}*\binom{3}{2} + \binom{4}{4}*\binom{2}{2}+ \binom{4}{3}*\binom{3}{3}= 8$. So $E(6) = 84 - 8 = 76$ E(5) = 3B,1R,1G or 3R,1B,1G or 2B,2R,1G or 2B,2G,1R or 2R,2G,1B = $E(5) = \binom{4}{3}*\binom{3}{1}*\binom{2}{1}+ \binom{3}{3}*\binom{4}{1}*\binom{2}{1}+ \binom{4}{2}*\binom{3}{2}*\binom{2}{1}+ \binom{4}{2}*\binom{2}{2}*\binom{3}{1}+ \binom{3}{2}*\binom{2}{2}*\binom{4}{1}= 98$ E(4) = 2B,1R,1G or 2R,1B,1G or 2G,1B,1R = $\binom{4}{2}*\binom{3}{1}*\binom{2}{1}+ \binom{3}{2}*\binom{4}{1}*\binom{2}{1}+ \binom{2}{2}*\binom{4}{1}*\binom{3}{1}= 72$ $E(3) = \binom{4}{1}*\binom{3}{1}*\binom{2}{1}= 24$ total combinations $(T) = 24 + 72 + 98 + 76 + 3 + 2 = 275$. The probability of stopping at 4 blocks is $E(4)/(T) = 72/275$ but this is a very long solution. Do you think that is correct? if so, is there any other method to make it shorter?
There are three cases to consider. Case 1: The fourth block is green. Of the first three blocks, none are green $\binom{7}{3}$, and they are not all blue $\binom{4}{3}$ or all red $\binom{3}{3}$. Then the probability of the fourth block being green is $\frac{2}{6}$ $$\frac{\binom{7}{3}-\binom{4}{3}-\binom{3}{3}}{\binom{9}{3}}\times \frac{2}{6}=\frac{10}{84}$$ Case 2: The fourth block is red. Of the first three blocks, none are red $\binom{6}{3}$, and they are not all blue $\binom{4}{3}$. Then the probability of the fourth block being red is $\frac{3}{6}$ $$\frac{\binom{6}{3}-\binom{4}{3}}{\binom{9}{3}}\times \frac{3}{6}=\frac{8}{84}$$ Case 3: The fourth block is blue. Of the first three blocks, none are blue $\binom{5}{3}$, and they are not all red $\binom{3}{3}$. Then the probability of the fourth block being blue is $\frac{4}{6}$ $$\frac{\binom{5}{3}-\binom{3}{3}}{\binom{9}{3}}\times \frac{4}{6}=\frac{6}{84}$$ The total probability is $$\frac{10+8+6}{84}=\frac{24}{84}=\frac{2}{7}$$
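The $\frac27$ can be confirmed by exhaustive enumeration (a Python sketch, not part of the solution): the tower stops at height $4$ exactly when the first $4$ blocks use all three colours but the first $3$ do not, and every ordered draw of $4$ block positions is equally likely.

```python
# Enumerate all ordered draws of 4 blocks from RRRBBBBGG (identical
# letters at distinct positions count as distinct, so draws are uniform).
from itertools import permutations
from fractions import Fraction

blocks = 'RRRBBBBGG'
hits = 0
total = 0
for draw in permutations(blocks, 4):
    total += 1
    if len(set(draw[:3])) < 3 and len(set(draw)) == 3:
        hits += 1

p4 = Fraction(hits, total)
```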
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $\sum_{k}\binom{m}{k}\binom{n+k}{m} = \sum_{j}\binom{n}{j}\binom{m}{j}2^{j}$ Prove that $$\sum_{k=0}^m\binom{m}{k}\binom{n+k}{m} = \sum_{j=0}^{\min(m,n)}\binom{n}{j}\binom{m}{j}2^{j}.$$ So, I know the left side of the identity counts the Delannoy paths and the right side counts the lattice ball. But I want to get a direct combinatorial proof of the identity by showing that both sides count the following set $S$. Let $M$ and $N$ be sets of sizes $m$ and $n$, and let $S$ be the family of ordered pairs $(A,B)$ such that $A$ is a subset of $M$ and $B$ is an $m$-subset of $N \cup A$. I know how to get the left side of the identity by counting the set $S$, but I am stuck on the right side: I am not sure what the index $j$ on the right side means combinatorially in terms of the set $S$.
Left Hand Side We have $m$ junior students and $n$ senior students. Some junior students are member of a football team while all senior students are in the team. From the team, $m$ students are starter. Say the number of junior students in the team is $k$. The number of total possibilities is given below: $$ \sum_{k}{\binom{m}{k}\binom{n+k}{m}} $$ Right Hand Side Still the same but with different method. This time we select some ($j$) senior students to be starters, then we select $m-j$ junior students to complete the starters. The remaining $j$ junior students may or may not be in the team. The number of total possibilities is given below: $$ \sum_{j}{\binom{n}{j}\binom{m}{m-j}\cdot 2^{j}} $$ Since we count the same objects, the two expressions must be equal
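Although the double-counting argument settles the identity, it is reassuring to verify it with exact integer arithmetic (a small Python sketch):

```python
# Check the identity for all 0 <= m, n < 10.
from math import comb

def lhs(m, n):
    return sum(comb(m, k) * comb(n + k, m) for k in range(m + 1))

def rhs(m, n):
    return sum(comb(n, j) * comb(m, j) * 2 ** j
               for j in range(min(m, n) + 1))

all_equal = all(lhs(m, n) == rhs(m, n)
                for m in range(10) for n in range(10))
```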
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Using calculus to prove an algebraic inequality Let $$f(x,y):=4\pi \left(\frac{y}{\tan(\frac{\pi}{2}x)}+\frac{x}{\tan(\frac{\pi}{2}y)}\right)+16(1+x+y)-\frac{16}{3}\frac{(x+y+xy)^2}{xy}. $$ Prove that $f(x,y) \ge 0$, for $0\le x <1, 0\le y<1$. The figure of the function from WolframAlpha indicates the above claim is true. I'm wondering how to show this by hand?
Using $\frac{2}{\pi}u \le \sin u \le u$ for all $u\in [0, \pi/2]$, we have $$\frac{1}{\tan(\pi x/2)} = \frac{\sin(\pi/2 - \pi x/2)}{\sin(\pi x/2)} \ge \frac{\frac{2}{\pi}\cdot (\pi/2 - \pi x/2) }{\pi x/2} = \frac{2}{\pi}\cdot \frac{1 - x}{x}.$$ Then, it suffices to prove that $$4\pi \cdot \left(y\cdot \frac{2}{\pi}\cdot \frac{1 - x}{x} + x\cdot \frac{2}{\pi}\cdot \frac{1 - y}{y}\right) + 16(1 + x + y) - \frac{16}{3}(x + y + xy)^2/(xy) \ge 0$$ or $$\frac{8(x + y + xy)(x + y - 2xy)}{3xy} \ge 0$$ which is clearly true. We are done.
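As an independent check of the conclusion (a Python sketch; the grid resolution is my own choice), one can evaluate $f$ on a grid in the open square, avoiding the removable singularities at $x=0$ and $y=0$:

```python
# Grid check that f(x, y) >= 0 on (0, 1) x (0, 1).
import math

def f(x, y):
    t = 4 * math.pi * (y / math.tan(math.pi * x / 2)
                       + x / math.tan(math.pi * y / 2))
    return t + 16 * (1 + x + y) - (16 / 3) * (x + y + x * y) ** 2 / (x * y)

vals = [f(i / 100, j / 100)
        for i in range(1, 100) for j in range(1, 100)]
fmin = min(vals)
```

(On the interior grid $f$ is strictly positive, consistent with the lower bound $\frac{8(x+y+xy)(x+y-2xy)}{3xy}$, which vanishes only in the limit $x,y\to 1$.)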
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I negate this definition of continuity? How would I negate this statement? A map $f : X \to Y$ is called continuous if, for any subset $A \subset X$ and any point $x \in X$ adherent to $A$, the point $f(x)$ adheres to $f(A)$. I've tried: There exists $A \subset X$ and $x \in X$ adherent to $A$ such that $f(x)$ is not adherent to $f(A)$.
Formally, negation works as $\neg(\forall x \in X)(\forall y \in Y)(A \Rightarrow B) \equiv (\exists x \in X)(\exists y \in Y)(A \land \neg B)$. So in your case, starting from $$(\forall A \subset X)(\forall x)(x\in X \land x\text{ adherent to }A \Rightarrow f(x) \text{ adheres to } f(A))$$ the negation gives $$(\exists A \subset X)(\exists x)(x\in X \land x\text{ adherent to }A \land f(x) \text{ does not adhere to } f(A))$$
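The propositional core here — that $\neg(A \Rightarrow B)$ is the same as $A \land \neg B$ — can be checked mechanically over all truth assignments; a minimal Python sketch:

```python
from itertools import product

# ¬(A ⇒ B) is logically equivalent to A ∧ ¬B: verify all four truth assignments
implies = lambda a, b: (not a) or b
for a, b in product([False, True], repeat=2):
    assert (not implies(a, b)) == (a and not b)
```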
{ "language": "en", "url": "https://math.stackexchange.com/questions/4244955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $\delta\mathbf{u}\cdot \mathrm{div}(\mathbf{\sigma}) = -\mathrm{grad}(\delta\mathbf{u}) \mathbf{:} \mathbf{\sigma}$? Context is from this deal.ii tutorial. Screenshot of the relevant part is below. I don't get the transformation from $\delta\mathbf{u}\cdot \mathrm{div}(\mathbf{\sigma})$ to $-\mathrm{grad}(\delta\mathbf{u}) \mathbf{:} \mathbf{\sigma}$, that seems to be happening as a part of the first line. $\mathbf{u}$ is the displacement field (vector field), $\sigma$ is the stress tensor (tensor field) and $\delta$ is the variational operator. When I try to break the problem down in index notation, I get $$\delta\mathbf{u}\cdot \mathrm{div}(\mathbf{\sigma}) = \delta u_i\cdot\frac{\partial\sigma_{ij}}{\partial x_j}$$ and $$-\mathrm{grad}(\delta\mathbf{u}) \mathbf{:} \mathbf{\sigma} = -\delta\left(\frac{\partial u_i}{\partial x_j}\right)\cdot \sigma_{ij}.$$ Which means that $$\delta u_i\cdot\frac{\partial\sigma_{ij}}{\partial x_j} = -\delta\left(\frac{\partial u_i}{\partial x_j}\right)\cdot \sigma_{ij}.$$ Can these two terms possibly be the same? There's a minus sign on one of them and not on the other, and in the left side the derivation is taken of $\sigma$ while on the right side it's taken of $u$. I can't see any transformations that would turn one into the other, and I also can't find a mistake in my chain of reasoning, what am I missing here?
The author is doing two things at once: integration by parts and invoking the divergence theorem. In coordinates (summing over repeated indices), $\delta u \cdot \operatorname{div}\sigma$ is $\delta u_i\, \partial\sigma_{ij}/\partial x_j$, and by the product rule \begin{align} \delta u_i\, \frac{\partial\sigma_{ij}}{\partial x_j} &= \frac{\partial}{\partial x_j}\left(\delta u_i\,\sigma_{ij}\right) - \frac{\partial \delta u_i}{\partial x_j}\sigma_{ij}. \end{align} So the two expressions are not equal pointwise; they differ by a total divergence. The second term on the right appears within the integral over $\Omega$, and the first, after applying the divergence theorem, becomes the integral over $\partial\Omega$.
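If it helps to see that the product-rule identity holds pointwise (before any integration), here is a small finite-difference check in 2-D with made-up smooth fields $\delta u$ and $\sigma$; the field choices are arbitrary illustrations, not from the tutorial:

```python
import math

# Pointwise identity being checked:
#   δu_i ∂σ_ij/∂x_j = ∂/∂x_j (δu_i σ_ij) - (∂ δu_i/∂x_j) σ_ij
# The first RHS term is the total divergence that the divergence theorem
# later turns into a boundary integral.
def du(x, y):            # hypothetical variation field δu
    return [math.sin(x) * math.cos(y), x * y]

def sigma(x, y):         # hypothetical symmetric stress field σ
    return [[x * x, x * y], [x * y, y * y]]

def d(f, x, y, axis, h=1e-5):   # central difference of a scalar function f
    if axis == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, -0.4
lhs = sum(du(x0, y0)[i] *
          sum(d(lambda a, b: sigma(a, b)[i][j], x0, y0, j) for j in range(2))
          for i in range(2))
div_term = sum(d(lambda a, b: du(a, b)[i] * sigma(a, b)[i][j], x0, y0, j)
               for i in range(2) for j in range(2))
grad_term = sum(d(lambda a, b: du(a, b)[i], x0, y0, j) * sigma(x0, y0)[i][j]
                for i in range(2) for j in range(2))
assert abs(lhs - (div_term - grad_term)) < 1e-6
```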
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $(G,*)$ is a group then for all $a, b \in G$, we have $(a*b)'=b'*a'$. Which would be the analogous expression for $(a*b'*b)'$? Here's the full problem: Show through calculations and the theorem $2.3$ that if $G$ is a group with binary operation $*$ then for all $a, b \in G$, we have $(a*b)'=b'*a'$. Which would be the analogous expression for $(a*b'*b)'$? The Theorem 2.3 that I'm referring to is the existence of a unique identity element ($e*a=a*e=a$) and a unique inverse ($a'*a=a*a'=e$). I have doubts about the process I'm attempting. Here's what I got so far: Given $a,b \in G$ $a*b$ $(a*b)$; associative (1) $(a*b)\in G$ $(a*b)'*(a*b)=(a*b)*(a*b)'=e$; T2.3 $(a*b)*(a*b)'=e$ $(a*a'*b*b')*(a*b)'=e*a'*b'$; T2.3 (2) $e*e*(a*b)'=a'*b'$ $(a*b)'=a'*b'$ $(a*b)'=b'*a'$; commutative (?) (3) I'm starting my studies with group theory, so I'm not sure whether what I did in (1), (2) and (3) is valid. Any help in finding a different way to solve this and the final question?
(NB: Not every group is commutative. Consider the dihedral group $D_3$ for example.) We have $$\begin{align} \color{red}{(ab)}\color{blue}{(b^{-1}a^{-1})}&=a((bb^{-1})a^{-1})\\ &=a(ea^{-1})\\ &=aa^{-1}\\ &=e \end{align}$$ and, similarly, $\color{blue}{(b^{-1}a^{-1})}\color{red}{(ab)}=e$; thus your theorem gives that $\color{red}{(ab)}^{-1}=\color{blue}{b^{-1}a^{-1}}$. Since $b^{-1}b=e$, we get $(ab^{-1}b)^{-1}=(ae)^{-1}=a^{-1}.$
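To see the reversal of order concretely in a nonabelian group, here is a quick check over all of $S_3$ (permutations composed as functions, close in spirit to the $D_3$ mentioned above); this is just an illustration, not part of the requested proof:

```python
from itertools import permutations

# Elements of S3 as tuples: p maps i -> p[i]
elems = list(permutations(range(3)))

def compose(p, q):            # (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

for a in elems:
    for b in elems:
        assert inverse(compose(a, b)) == compose(inverse(b), inverse(a))
        # and (a * b' * b)' = a', as in the final question
        assert inverse(compose(compose(a, inverse(b)), b)) == inverse(a)
```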
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Trigonometric Substitution Absolute value issue Evaluate $ \, \displaystyle \int _{0}^{4} \frac{1}{(2x+8)\, \sqrt{x(x+8)}}\, dx. $ My work: by completing the square and substituting, i.e. $\displaystyle \left(\begin{array}{rl}x+4 & = 4\sec (\theta )\\ dx & = 4\sec \theta \tan \theta \, d\theta \end{array}\right) \qquad$ $\Rightarrow \displaystyle \int \frac{4\sec \theta \tan \theta \, d\theta }{2(4\sec (\theta ))( |4\tan (\theta )|) }$ $\Rightarrow \displaystyle \frac{1}{8} \int \frac{\tan \theta \, d\theta }{|\tan (\theta )|}$ Now, because my limits are positive, $\sec \theta \geq 0$, and $\sec \theta$ is positive in the Ist and IVth quadrants. At this stage I have two options: either I consider the Ist quadrant and take $|\tan \theta| = \tan \theta$, or I consider the IVth quadrant, where $|\tan \theta| = -\tan \theta$. So in the 1st quadrant, i.e. $|\tan \theta| = \tan \theta$, $0 \leq \theta \leq \pi/2$, I get $\Rightarrow \displaystyle \frac{1}{8} \theta +C$ $\Rightarrow \displaystyle \frac{1}{8} \mathrm{arcsec}\left(\frac{x+4}{4}\right)+C$ $\Rightarrow \displaystyle \left. \frac{1}{8} \mathrm{arcsec}\left(\frac{x+4}{4}\right)\, \right|_{x=0}^{x=4}$ $\Rightarrow \displaystyle \frac{1}{8} \left(\mathrm{arcsec}(2)-\mathrm{arcsec}(1)\right)$ Now if I consider the 4th quadrant, i.e. $|\tan \theta| = -\tan \theta$, $3 \pi /2 \leq \theta \leq 2\pi$, I get $\Rightarrow \displaystyle -\frac{1}{8} \theta +C$ $\Rightarrow \displaystyle -\frac{1}{8} \mathrm{arcsec}\left(\frac{x+4}{4}\right)+C$ $\Rightarrow \displaystyle \left. -\frac{1}{8} \mathrm{arcsec}\left(\frac{x+4}{4}\right)\, \right|_{x=0}^{x=4}$ $\Rightarrow \displaystyle -\frac{1}{8} \left(\mathrm{arcsec}(2)-\mathrm{arcsec}(1)\right)$ So I am getting a positive value in the 1st case, whereas in the 4th quadrant my answer is negative. Why is that? Am I making a mistake when considering the 4th quadrant, or are both acceptable answers?
Also, in an MIT lecture they said that whichever acceptable quadrant you choose, you'll get the same answer. So why am I getting a negative answer?
HINT I propose another way to tackle this integral so that you can compare both methods. Notice that $x(x+8) = x^{2} + 8x = (x + 4)^{2} - 16$. Hence, if we make the substitution $4\cosh(z) = x + 4$, we arrive at \begin{align*} \int\frac{\mathrm{d}x}{(2x+8)\sqrt{x(x+8)}} & = \int\frac{4\sinh(z)}{8\cosh(z)\sqrt{16\cosh^{2}(z) - 16}}\mathrm{d}z\\\\ & = \frac{1}{8}\int\frac{\mathrm{d}z}{\cosh(z)}\\\\ & = \frac{1}{8}\int\frac{\cosh(z)}{\cosh^{2}(z)}\mathrm{d}z\\\\ & = \frac{1}{8}\int\frac{\cosh(z)}{\sinh^{2}(z) + 1}\mathrm{d}z \end{align*} where the absolute value was omitted because the function $\sinh(z)$ is positive whenever $z\geq 0$. In the last expression, we can make the change of variable $w = \sinh(z)$, whence it results that \begin{align*} \int\frac{\mathrm{d}x}{(2x+8)\sqrt{x(x+8)}} = \frac{\arctan(w)}{8} + c \end{align*} Now it remains to apply the integration limits. Can you take it from here?
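As a sanity check on the final value (assuming the standard antiderivative, the integral should equal $\frac18\bigl(\mathrm{arcsec}\,2-\mathrm{arcsec}\,1\bigr)=\frac18\cdot\frac{\pi}{3}=\frac{\pi}{24}$), one can remove the endpoint singularity with $x=t^2$ and integrate numerically; a Python sketch:

```python
import math

# After x = t**2 the integrand loses its singularity at x = 0:
#   ∫_0^4 dx / ((2x+8)·sqrt(x(x+8)))  =  ∫_0^2 2 dt / ((2t²+8)·sqrt(t²+8))
def g(t):
    return 2.0 / ((2 * t * t + 8) * math.sqrt(t * t + 8))

def simpson(f, a, b, n=2000):      # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

val = simpson(g, 0.0, 2.0)
assert abs(val - math.pi / 24) < 1e-9   # (1/8)(arcsec 2 - arcsec 1) = π/24
```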
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Satisfies Cauchy-Riemann equations but is not holomorphic at $0$ I am trying to solve the following problem: Consider the function defined by $f(x+iy)=\sqrt{|x||y|}$ whenever $x,y\in \mathbb{R}$. Show that $f$ satisfies the C-R equations at the origin yet $f$ is not holomorphic at $0$. Firstly, what would be $u$ and $v$ in this problem? Since it is a mapping from $\mathbb{C}\to\mathbb{R}$, would it be that $f(x+iy)=\sqrt{|x||y|}+i0$ where $u(x,y)=Re(f(z))=\sqrt{|x||y|}$ and $v(x,y)=Im(f(z))=0$? Secondly, I have attempted to compute the partial derivatives of $u$ w.r.t $x$ and $y$ as follows: $$\frac{\partial u}{\partial x}=\frac{\partial \sqrt{|x||y|}}{\partial x}=\frac{{\sqrt{|y|}\text{sgn}(x)}}{\sqrt{|x|}}$$ $$\frac{\partial u}{\partial y}=\frac{\partial \sqrt{|x||y|}}{\partial y}=\frac{{\sqrt{|x|}\text{sgn}(y)}}{\sqrt{|y|}}$$ However, I am confused because sgn(x) is not defined for $0$, so I am unable to determine the value of the partial derivatives at the origin. I have seen some people compute this directly from the definition and get zero as an answer, but why am I unable to compute this from the partial derivatives I have calculated?
Since \begin{eqnarray*} f:\Omega \subseteq \mathbf{C}&\longrightarrow& \mathbf{C}\\ (x,y)&\longmapsto& f(x,y)=\left(\sqrt{|xy|},0\right) \end{eqnarray*} we can see that * *$u(x,y)=\sqrt{|xy|}$. *$v(x,y)=0$. So yes, your identification of $u$ and $v$ is correct. The formulas you computed for the partial derivatives are valid only away from the axes; at the origin you must use the limit definition directly. Since $u$ vanishes identically on both coordinate axes ($u(x,0)=u(0,y)=0$) and $v\equiv 0$, we get $$\frac{\partial u}{\partial x}(0,0)=\lim_{h\to 0}\frac{u(h,0)-u(0,0)}{h}=0,$$ and similarly $$\frac{\partial u}{\partial y}(0,0)=\frac{\partial v}{\partial x}(0,0)=\frac{\partial v}{\partial y}(0,0)=0.$$ Hence the Cauchy-Riemann equations $$({\rm C-R}):\begin{cases}\displaystyle \frac{\partial u}{\partial x}(0,0)=\frac{\partial v}{\partial y}(0,0)\\\displaystyle \frac{\partial u}{\partial y}(0,0)=-\frac{\partial v}{\partial x}(0,0) \end{cases}$$ hold at the origin. Nevertheless, $f$ is not holomorphic at $0$: along the real axis the difference quotient $\frac{f(z)-f(0)}{z}$ is identically $0$, while along the line $y=x$ it equals $\frac{|x|}{x(1+i)}$, which tends to $\frac{1}{1+i}\neq 0$ as $x\to 0^{+}$. Since the limit depends on the path, $f'(0)$ does not exist.
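A quick numerical illustration of the path dependence of the difference quotient (a sketch, with the real axis and the line $y=x$ as the two paths):

```python
def f(z):
    return (abs(z.real) * abs(z.imag)) ** 0.5   # f(x+iy) = sqrt(|x||y|)

# Difference quotient (f(z) - f(0))/z along two paths into the origin
for h in [1e-3, 1e-6, 1e-9]:
    q_real = f(complex(h, 0)) / complex(h, 0)   # along the real axis
    q_diag = f(complex(h, h)) / complex(h, h)   # along y = x
    assert abs(q_real) < 1e-12                  # limit 0
    assert abs(q_diag - 1 / (1 + 1j)) < 1e-12   # limit 1/(1+i) ≠ 0
```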
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Algebraic proof regarding falling factorials Let $n$ and $k$ be positive integers with $n \ge k$. Give an algebraic proof that $$(n)_k = (n-1)_k + k\cdot(n-1)_{k-1},$$ where $(n)_k$ is a falling factorial: $(n)_k$ = $(n)(n-1)(n-2)\ldots(n-k+1)$. I started by turning LHS into $$\frac{n!}{(n-k)!} = \frac{((n+k)-k)!}{(n-k)!}$$ I tried doing this after: $$ = \frac{(((n+k)-k)(n-1))!}{(n-k)!}$$ but I really have no idea where else to proceed. I'm not sure how I can split it up or move past this. Does anyone think they can point me in the right path?
$$\begin{align*} (n)_k-(n-1)_k &=\frac{n!}{(n-k)!}-\frac{(n-1)!}{(n-k-1)!}\\ &=\frac{(n-1)!}{(n-k-1)!}\left(\frac{n}{n-k}-1\right)\\ &=\frac{(n-1)!}{(n-k-1)!}\cdot\frac{k}{n-k}\\ &=k\cdot\frac{(n-1)!}{(n-k)!}\\ &=k\,(n-1)_{k-1} \end{align*}$$
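A small Python check of the identity over a range of $n,k$ (the ranges are chosen arbitrarily):

```python
from math import prod

def falling(n, k):
    # (n)_k = n (n-1) ... (n-k+1); empty product gives (n)_0 = 1
    return prod(n - i for i in range(k))

for n in range(1, 12):
    for k in range(1, n + 1):
        assert falling(n, k) == falling(n - 1, k) + k * falling(n - 1, k - 1)
```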
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Baby Rudin Theorem 8.2 This is Baby Rudin Theorem 8.2 proof: In the last part of the proof, I don't understand why the following should be true for proof to work: $x>1-\delta$. Also, what does it have to do with $-1<x<1$? Any help is appreciated!
I think the following will explain what you wanted to know. For $0<x<1$, $$|f(x)-s|=\left|(1-x)\sum_{n=0}^\infty (s_n-s)x^n\right| \le \left|(1-x)\sum_{n=0}^N (s_n-s)x^n\right|+\left|(1-x)\sum_{n=N+1}^\infty (s_n-s)x^n\right|.$$ Given $\varepsilon>0$, choose $N$ so that $|s_n-s|<\varepsilon/2$ for all $n>N$. Then the tail is at most $$\frac{\varepsilon}{2}\,(1-x)\sum_{n=N+1}^\infty x^n=\frac{\varepsilon}{2}\,x^{N+1}\le\frac{\varepsilon}{2},$$ so $$|f(x)-s|\le (1-x)\sum_{n=0}^N |s_n-s|+\frac{\varepsilon}{2}.$$ Now choose $\delta>0$ so small that $$\delta\sum_{n=0}^N |s_n-s|\le \frac{\varepsilon}{2}.$$ If $x>1-\delta$, i.e. $1-x<\delta$, we obtain $$|f(x)-s|\le\varepsilon/2+\varepsilon/2=\varepsilon.$$ This is where $-1<x<1$ comes in: the series only converges for $|x|<1$, and the limit is taken as $x\to 1$ from inside that interval, so $x>1-\delta$ together with $x<1$ means $0<1-x<\delta$. Thus for every $\varepsilon>0$ there exists $\delta>0$ such that $1-\delta<x<1 \Rightarrow |f(x)-s|<\varepsilon$, i.e. $\lim_{x\to 1^-}f(x)=s=\sum_{n=0}^\infty c_n.$
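As an illustration of the theorem itself (not of the proof), take $c_n=(-1)^n/(n+1)$, whose series converges to $\ln 2$; a Python sketch showing $f(x)\to\ln 2$ as $x\to1^-$:

```python
import math

# c_n = (-1)^n/(n+1): sum_n c_n = ln 2 (alternating harmonic series), and
# f(x) = sum_n c_n x^n = ln(1+x)/x for 0 < x < 1. Abel's theorem says
# f(x) -> ln 2 as x -> 1 from the left.
def f(x, terms=40000):
    return sum((-1) ** n / (n + 1) * x ** n for n in range(terms))

target = math.log(2)
errs = [abs(f(x) - target) for x in (0.9, 0.99, 0.999)]
assert errs[0] > errs[1] > errs[2]      # the error shrinks as x -> 1-
assert errs[2] < 1e-2
```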
{ "language": "en", "url": "https://math.stackexchange.com/questions/4245891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I solve $\frac{(1-x)(x+3)}{(x+1)(2-x)}\leq0$ and express solution in interval notation? The inequality I need to solve is $$\frac{(1-x)(x+3)}{(x+1)(2-x)}\leq0$$ My attempt: Case I: $x<2$ Then we have $(1-x)(x+3)\leq0$ $x\leq-3, x\geq1$ We reject $x\leq-3$ (not sure why exactly, I just looked at the graph on desmos and it shows that $x\geq1$ Case II: $x<-1$ Then we have $(1-x)(x+3)\geq0$ $x\geq-3, x\leq-1$ Reject $x\leq-1$ (again, not sure exactly why aside from the graph I saw on desmos) So the final solution is $x\in[-3,-1) \cup [1,2)$ I know that my final solution is correct, however I am a little confused on why I reject certain values.
Case I: $\ (1-x)(x+3) \le 0$ and $(x+1)(2-x) > 0$ \begin{align} &(1-x)(x+3) \le 0 &\text{AND} \qquad&(x+1)(2-x) > 0\\ &\Rightarrow\begin{cases} x \ge 1\ \&\ x \ge -3 \text{ OR}\\ x \le 1\ \&\ x \le -3 \end{cases}&\text{AND} \qquad &\Rightarrow\begin{cases} x > -1\ \&\ x < 2 \text{ OR}\\ x < -1\ \&\ x > 2 \text{ (Impossible)}\\ \end{cases} \\ &\Rightarrow\begin{cases} x \ge 1 \text{ OR}\\ x \le -3 \end{cases}&\text{AND} \qquad &\Rightarrow -1 < x < 2 \end{align} $\therefore\ 1 \le x < 2$ Case II: $\ (1-x)(x+3) \ge 0$ and $(x+1)(2-x) < 0$ \begin{align} &(1-x)(x+3) \ge 0 &\text{AND} \qquad&(x+1)(2-x) < 0\\ &\Rightarrow\begin{cases} x \le 1 \ \&\ x \ge -3 \text{ OR}\\ x \ge 1 \ \&\ x \le -3 \text{ (Impossible)} \end{cases}&\text{AND} \qquad &\Rightarrow\begin{cases} x < -1 \ \&\ x < 2 \text{ OR}\\ x > -1 \ \&\ x > 2 \\ \end{cases} \\ &\Rightarrow -3 \le x \le 1 &\text{AND} \qquad &\Rightarrow\begin{cases} x < -1 \text{ OR}\\ x > 2 \end{cases} \\ \end{align} $\therefore\ -3 \le x < -1$ Hence, $x\in[-3,-1)\ \cup\ [1, 2)$.
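A quick numerical cross-check of the final answer (grid range and step size chosen arbitrarily; the poles $x=-1$ and $x=2$ are skipped):

```python
def lhs(x):
    return (1 - x) * (x + 3) / ((x + 1) * (2 - x))

def in_solution(x):
    return (-3 <= x < -1) or (1 <= x < 2)

# scan a fine grid and confirm the inequality holds exactly on the claimed set
x = -6.0
while x < 6.0:
    if abs(x + 1) > 1e-9 and abs(x - 2) > 1e-9:
        assert (lhs(x) <= 0) == in_solution(x)
    x += 0.01
```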
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 4 }
Finding $x^8+y^8+z^8$, given $x+y+z=0$, $\;xy +xz+yz=-1$, $\;xyz=-1$ The system says $$x+y+z=0$$ $$xy +xz+yz=-1$$ $$xyz=-1$$ Find $$x^8+y^8+z^8$$ With the first equation I squared and I found that $$x^2+y^2+z^2 =2$$ trying with $$(x + y + z)^3 = x^3 + y^3 + z^3 + 3 x^2 y + 3 x y^2 + 3 x^2 z+ 3 y^2 z + 3 x z^2 + 3 y z^2 + 6 x y z$$ taking advantage of the fact that there is an $xyz=-1$ in the equation, but I'm not getting anywhere. Can someone less myopic than me see how to solve it? Thanks Edit: Is there a university-level way to solve this problem? It was posed to a high-school friend of mine, who was told it requires only manipulation of special products. I understand the answers to some extent, but I don't think my friend understands all of them.
Note that $x$, $y$, $z$ are the roots of the polynomial $\lambda^3 - \lambda + 1$, so they are the eigenvalues of the companion matrix $$A=\left(\begin{matrix} 0 & 0 & -1 \\ 1 & 0 & 1\\ 0 & 1 & 0 \end{matrix} \right)$$ It follows that the eigenvalues of $A^{8}$ are $x^8$, $y^8$, $z^8$. Calculate $A^8$ by squaring three times ( @Theo Bendit:'s idea in the comments). We get $$A^8=\left(\begin{matrix} 2 & -2 & 3 \\ -3 & 4 & -5\\ 2 & -3 & 4 \end{matrix} \right)$$ so the sum $x^8+y^8+z^8$ equals the trace of $A^8$, that is, $10$.
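The same trace can be obtained without matrix powers: since $t^3=t-1$ for each root, the power sums $p_k=x^k+y^k+z^k$ satisfy $p_k=p_{k-2}-p_{k-3}$; a Python sketch:

```python
# Seeds from Newton's identities with e1 = 0, e2 = -1, e3 = -1:
#   p1 = e1 = 0,  p2 = e1*p1 - 2*e2 = 2,  p3 = e1*p2 - e2*p1 + 3*e3 = -3.
# For k > 3, t^3 = t - 1 gives p_k = p_{k-2} - p_{k-3}.
p = {1: 0, 2: 2, 3: -3}
for k in range(4, 9):
    p[k] = p[k - 2] - p[k - 3]
assert p[8] == 10
```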
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 8, "answer_id": 1 }
Standard Brownian Motion problem. How can I find the distribution of $X_{t}|X_{s}=1$? Let $X_t$ be a Standard Brownian Motion. Then compute: * *$\mathbb{E}\left[X_t | X_{s} =1\right]$, with $t>s>0$ *$\mathbb{E}\left[X^{2}_{s+1} | X_{s}=1\right]$ *$\mathbb{P}(X_{3}<3 | X_{1}=1)= \mathbb{F}_{X_{3}|X_{1}=1}(2)$ My Attempt: * *For the first bullet point We know that $X_t \sim N(0,t)$, since $X_t$ is a Standard Brownian Motion. Thus, $\mathbb{E}\left[X_t \right]=0$. Also, we know that if we define $t=s+r$, then \begin{align*} X_{r+s}-X_{s} \sim N(0,r) \end{align*} Do you have any hint for me to continue? * *For the second bullet point I think that we can use this: \begin{align} \mathbb{E}\left[X^{2}_{s+1}|X_{s}=1\right]=\underbrace{\mathbb{V}\left[X_{s+1}|X_{s}=1 \right]}_{\text{(1)}}+\underbrace{\mathbb{E}^{2}\left[X_{s+1} | X_{s}=1\right]}_{\text{(2)}} \end{align} I could compute (1) only if I knew the distribution of $X_{s+1}|X_{s}=1$. But I don't know how to use the fact that I mentioned in the first bullet. And (2) is direct from the first part of the problem. How can I find the distribution of $X_{s+1}|X_{s}=1$? Is that equality correct?
* *By independent increments, $X_t-X_s \sim N(0, t-s)$ and is independent of $X_s$. So write $X_t = (X_t - X_s) + X_s$ to help compute the conditional expectation. *Similarly, note $X_{s+1}-X_s \sim N(0, 1)$ is independent of $X_s$. Further, $(X_{s+1}-X_s)^2 = X_{s+1}^2 - 2 X_{s+1} X_s + X_s^2$. This can help you compute the conditional expectation. *Again, try to write the result in terms of $X_3-X_1$.
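A Monte Carlo sanity check of the hints (with $s=1$, $t=3$ as in part 3; sample size, seed and tolerances are arbitrary). From independent increments, $X_t\,|\,X_s=1$ is $1+N(0,t-s)$, so $\mathbb E[X_t|X_s=1]=1$, $\mathbb E[X_{s+1}^2|X_s=1]=1+1=2$, and $\mathbb P(X_3<3|X_1=1)=\Phi(2/\sqrt2)=\Phi(\sqrt2)\approx0.9214$:

```python
import random

random.seed(0)
s, t, n = 1.0, 3.0, 200_000
# Conditional on X_s = 1: X_t = 1 + N(0, t - s), X_{s+1} = 1 + N(0, 1)
xt = [1.0 + random.gauss(0.0, (t - s) ** 0.5) for _ in range(n)]
x_next = [1.0 + random.gauss(0.0, 1.0) for _ in range(n)]

mean_xt = sum(xt) / n                        # estimates E[X_t | X_s = 1] = 1
mean_sq = sum(v * v for v in x_next) / n     # estimates E[X_{s+1}^2 | X_s = 1] = 2
frac = sum(1 for v in xt if v < 3.0) / n     # estimates P(X_3 < 3 | X_1 = 1) = Φ(√2)

assert abs(mean_xt - 1.0) < 0.05
assert abs(mean_sq - 2.0) < 0.05
assert abs(frac - 0.9214) < 0.01
```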
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving the sup and the inf $A=\{\frac{n}{m}: n,m \in \mathbb{Z}, n^2<5m^2\}$. $A=\{\frac{n}{m}: n,m \in \mathbb{Z}, n^2<5m^2\}$. My goal is to find and prove the $\inf$ and $\sup$ of $A$. I can't really use anything beyond the definition of inf and sup. I have been trying to find a way to generate integers $n,m$ for all $\epsilon>0$ such that the following is satisfied: $\sqrt{5}-\epsilon<\frac{n}{m}<\sqrt{5}$. How would you go about proving this?
Claim: If $q \in \mathbb Q$ then $q \in A$ if and only if $-\sqrt 5 < q < \sqrt 5$. Proof: If $q \in A$ then $q = \frac nm$ for some integers $n,m$ with $n^2 < 5m^2$. That means $q^2 = \left(\frac nm\right)^2 = \frac {n^2}{m^2} < 5$, which means $|q| < \sqrt 5$, which means $-\sqrt 5 < q < \sqrt 5$. If, on the other hand, $-\sqrt 5 < q < \sqrt 5$, then writing $q = \frac nm$ with $n,m \in \mathbb Z$ we get $q^2 = \frac {n^2}{m^2} < 5$, so $n^2 < 5m^2$ and $q \in A$. ..... Now.... can you use the definitions of $\inf$ and $\sup$ to determine and prove what $\inf A, \sup A$ are? If $q \in A$ then $q < \sqrt 5$. So $\sqrt 5$ is an upper bound of $A$. If $w < \sqrt 5$ then by the density of the rationals (a consequence of the Archimedean property) there exists a rational number $r$ so that $w < r < \sqrt 5$. So $r \in A$... well, it is if $r > -\sqrt 5$. .... Let's do this again. If $w< \sqrt 5$ then either $w < 2$ or $2 \le w < \sqrt 5$. If $w < 2$ then $w$ is not an upper bound of $A$ because $2 \in A$ and $w < 2$. If $2 \le w < \sqrt 5$ then there exists a rational $r$ so that $w < r < \sqrt 5$. So $-\sqrt 5 < r < \sqrt 5$, so $r \in A$. But $r > w$, so $w$ is not an upper bound of $A$. So either way: if $w < \sqrt 5$ then $w$ is not an upper bound of $A$. So we have that $\sqrt 5$ is an upper bound of $A$, and if $w < \sqrt 5$ then $w$ is not an upper bound. That is exactly the definition of $\sup A = \sqrt 5$. An entirely symmetric argument (or noting that $A = -A$) gives $\inf A = -\sqrt 5$.
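A quick numerical illustration that elements of $A$ stay below $\sqrt5$ while getting arbitrarily close (the denominator bound is chosen arbitrarily):

```python
import math

# Elements of A are rationals n/m with n^2 < 5 m^2; the claim is sup A = sqrt(5).
# Taking n = floor(sqrt(5) m) gives n/m >= sqrt(5) - 1/m, so the gap shrinks.
best = max(n / m for m in range(1, 201)
                 for n in range(-3 * m, 3 * m + 1) if n * n < 5 * m * m)
assert best < math.sqrt(5)            # sqrt(5) is an upper bound
assert math.sqrt(5) - best < 0.01     # and elements come arbitrarily close
```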
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Matrix exponential of an upper triangular matrix Let $a,b,c,d,e$ be real and nonzero. I am trying to find $e^{At}$ where $$A = \begin{bmatrix} a & b & c & d & e\\ 0& a & b& c &d \\ 0& 0 &a &b&c \\ 0& 0 & 0 &a &b \\ 0& 0& 0 & 0 & a \end{bmatrix}.$$ I don't think diagonalizing would be a good approach here. I think there is some way I can split this up into different matrices added together and then go from there, but I am not sure what to do. Everything I try seems to be computationally heavy, but for some reason I feel like there is a simple way to do this. Does anyone know how?
You can deal with this using two tricks. Firstly, if $AB = BA$ then $e^{A+B} = e^Ae^B$. Secondly, use the definition of the exponential as $$ e^A = \sum_{n=0}^\infty \frac{A^n}{n!}. $$ Write out your matrix as $$ \begin{bmatrix} a & b & c & d & e\\ 0& a & b& c &d \\ 0& 0 &a &b&c \\ 0& 0 & 0 &a &b \\ 0& 0& 0 & 0 & a \end{bmatrix} = \begin{bmatrix} a & 0 & 0 & 0 & 0\\ 0& a & 0& 0 &0 \\ 0& 0 &a &0&0 \\ 0& 0 & 0 &a &0 \\ 0& 0& 0 & 0 & a \end{bmatrix} + \begin{bmatrix} 0 & b & c & d & e\\ 0& 0 & b& c &d \\ 0& 0 &0 &b&c \\ 0& 0 & 0 &0 &b \\ 0& 0& 0 & 0 & 0 \end{bmatrix} $$ and apply the above. The first summand is $aI$, which commutes with everything, so its exponential is simply $e^{at}I$. The second summand is strictly upper triangular, hence nilpotent (its fifth power is zero), so its exponential series terminates after finitely many terms. It will be a bit computationally heavy, but you will see a nice pattern when computing the powers of the part with zero diagonal. (Also note that the condition $AB = BA$ is crucial: for non-commuting matrices $e^{A+B} \neq e^A e^B$ in general, and relating the two is the subject of the Baker–Campbell–Hausdorff formula.)
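A Python sketch of the computation (the sample values for $a,\dots,e$ and $t$ are arbitrary; the split-based result is compared against a plainly truncated exponential series):

```python
from math import exp, factorial

# A = aI + N with N strictly upper triangular, so N^5 = 0 and exp(Nt) is a
# finite sum; exp(At) = exp(at) * exp(Nt) since aI commutes with N.
a, b, c, d, e_, t = 1.0, 2.0, -1.0, 0.5, 3.0, 0.3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

def mat_add(X, Y, s=1.0):                  # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(5)] for i in range(5)]

I = [[float(i == j) for j in range(5)] for i in range(5)]
A = [[0.0] * 5 for _ in range(5)]
row = [a, b, c, d, e_]
for i in range(5):
    for j in range(i, 5):
        A[i][j] = row[j - i]
N = mat_add(A, I, s=-a)                    # N = A - aI, nilpotent

# exp(At) via the split: exp(at) * sum_{k=0}^{4} (Nt)^k / k!
expNt, P = I, I
for k in range(1, 5):
    P = matmul(P, [[N[i][j] * t for j in range(5)] for i in range(5)])
    expNt = mat_add(expNt, [[x / factorial(k) for x in r] for r in P])
split = [[exp(a * t) * x for x in r] for r in expNt]

# Reference: plainly truncated series sum_{k<=30} (At)^k / k!
series, Q = I, I
for k in range(1, 31):
    Q = matmul(Q, [[A[i][j] * t / k for j in range(5)] for i in range(5)])
    series = mat_add(series, Q)

assert all(abs(split[i][j] - series[i][j]) < 1e-9
           for i in range(5) for j in range(5))
```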
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Angle between smooth submanifolds I am facing the following problem: Given two one-dimensional submanifolds $M_1,M_2$ (with non-empty intersection) of some manifold $M$, can one define an angle between $M_1$ and $M_2$? I know how to compute angles between vectors in tangent spaces (via an inner product), but I do not know how to extend this to the manifolds. None of the references I've checked do such a construction. This question is kind of related to this one, in the sense that I am trying to compute the angle between $N_1$ and $N_2$, if this even makes sense.
I'm a second year student and I'm also studying differential geometry. I tried to write my idea/intuition about the question down. If it's wrong I'll delete it. Suppose you have two sets $M_1$ and $M_2$ in $\mathbb R^n$ and two smooth maps $f_1:\mathbb R^n\to \mathbb R^k$ and $f_2:\mathbb R^n\to \mathbb R^k$. If $f_1^{-1}(\textbf a_1)=M_1$ and $f_2^{-1}(\textbf a_2)=M_2$, with $f_1$ and $f_2$ submersions (which means that $(f_1)_*$ and $(f_2)_*$ are surjective), then the $M_i\subset\mathbb R^n$ are embedded submanifolds of dimension $n-k$ (codimension $k$). (If $\exists \bar x\in\mathbb R^n$ with $\operatorname{rk}(f_i)(\bar x)<k$, you should check that $\bar x\notin f_i^{-1}(\textbf a_i)$.) Then, for $p\in M_1\cap M_2$ (in general in the intersection of two local charts of the two submanifolds), you can compute the tangent spaces $T_p(M_i)\cong\ker(f_i)_{*p}$ and find the angle between tangent vectors of the two tangent spaces.
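A concrete low-dimensional instance of this recipe (the curves and intersection point are my own hypothetical choices, not from the question): take $M_1=\{y=x^2\}=f_1^{-1}(0)$ and $M_2=\{y=x\}=f_2^{-1}(0)$ in $\mathbb R^2$, meeting at $p=(1,1)$, where both maps are submersions.

```python
import math

# T_p(M_i) = ker (df_i)_p; the angle between the curves at p is the angle
# between those kernel lines.
def grad(f, p, h=1e-6):                   # central-difference gradient
    x, y = p
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

f1 = lambda x, y: y - x * x               # M1 = f1^{-1}(0), the parabola
f2 = lambda x, y: y - x                   # M2 = f2^{-1}(0), the line
p = (1.0, 1.0)

def kernel_dir(g):
    # in R^2, the kernel of the covector (gx, gy) is spanned by (-gy, gx)
    return (-g[1], g[0])

v1, v2 = kernel_dir(grad(f1, p)), kernel_dir(grad(f2, p))
dot = v1[0] * v2[0] + v1[1] * v2[1]
cosang = dot / (math.hypot(*v1) * math.hypot(*v2))
angle = math.acos(abs(cosang))            # acute angle between the lines
assert abs(angle - math.atan(1 / 3)) < 1e-5   # slopes 2 and 1: tan θ = 1/3
```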
{ "language": "en", "url": "https://math.stackexchange.com/questions/4246741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }