Show that $\frac{x \phi(x)}{2 \Phi(x) - 1}$ is decreasing for $x \geq 0$. I am looking to prove that the following function $G: [0, \infty) \to \mathbb{R}$ is decreasing over its domain: $$G(x) := \frac{x \phi(x)}{2 \Phi(x) - 1}$$ for $x \geq 0$. Here $\phi(x)$ and $\Phi(x)$ denote the PDF and CDF, respectively, of a $\mathcal{N}(0, 1)$ random variable. As noted here, $G(0) = \frac{1}{2}$ by L'Hospital's rule. Based on numerical plots, it does appear to be decreasing for all $x \in [0, \infty)$. Formally, I tried showing that $G^{\prime}(x) \leq 0$, but got the following complicated expression for $G^{\prime}(x)$: \begin{equation} G^{\prime}(x) = \frac{(2 \Phi(x) - 1)(\phi(x) + x \phi^{\prime}(x)) - 2 x \phi^{2}(x)}{(2 \Phi(x) - 1)^{2}} \end{equation} but I wasn't able to show this is non-positive. Could anyone please show how to obtain this using elementary methods and properties of the standard normal random variable?
Approach 1: to prove $G'(x) < 0$ for all $x > 0$: Here is a proof using @fedja's idea. (Actually, it is not difficult without using fedja's idea, but fedja's idea is better.) We have $G(x) > 0$ for all $x > 0$. Let $$f(x) := \ln G(x) = \ln x + \ln \phi(x) - \ln(2\Phi(x) - 1).$$ We have $$f'(x) = \frac{1}{x} - x - \frac{1}{2\Phi(x) - 1}\cdot 2\phi(x) = \frac{1 - x^2}{x} - \frac{\mathrm{e}^{-x^2/2}}{\int_0^x \mathrm{e}^{-t^2/2}\,\mathrm{d} t},$$ where we have used $2\Phi(x) - 1 = 2 \int_0^x \phi(t)\, \mathrm{d} t$. Using $\mathrm{e}^{-u} \ge 1 - u$ for all $u \ge 0$, we have $\mathrm{e}^{-x^2/2} \ge 1 - x^2/2 > 1 - x^2$ for all $x > 0$. Also, we have $\int_0^x \mathrm{e}^{-t^2/2}\,\mathrm{d} t < \int_0^x 1 \,\mathrm{d} t = x$ for all $x > 0$. Thus, for all $x > 0$, $$f'(x) < \frac{\mathrm{e}^{-x^2/2}}{x} - \frac{\mathrm{e}^{-x^2/2}}{x} = 0.$$ Thus, we have $G'(x) < 0$ for all $x > 0$. Dealing with $G'(x)$ for the case $x = 0$: Now, let us deal with $x = 0$. Since $\lim_{x\to 0} G(x) = 1/2$, we define $G(0) = 1/2$. Using L'Hôpital's rule, we have $$\lim_{x\to 0} \frac{G(x) - G(0)}{x - 0} = 0,$$ that is, $G'(0) = 0$ (see the Remarks at the end for details). Approach 2: to prove $G'(x) < 0$ for all $x > 0$: We have $G(x) > 0$ for all $x > 0$. Let $$f(x) := \ln G(x) = \ln x + \ln \phi(x) - \ln(2\Phi(x) - 1).$$ We have $$f'(x) = \frac{1}{x} - x - \frac{1}{2\Phi(x) - 1}\cdot 2\phi(x).$$ (1) If $x \ge 1$, we have $f'(x) < 1/x - x \le 0$. (2) If $x\in (0, 1)$, let $$g(x) := \frac{2\Phi(x) - 1}{1/x - x}f'(x) = 2\Phi(x) - 1 - \frac{x}{1 - x^2}\cdot 2\phi(x).$$ We have $$g'(x) = 2\phi(x) - \frac{x^2 + 1}{(1 - x^2)^2}\cdot 2\phi(x) - \frac{x}{1 - x^2}\cdot 2\phi(x) \cdot (-x) = - \frac{2x^2}{(1-x^2)^2}\cdot 2\phi(x) < 0.$$ Also, $\lim_{x\to 0} g(x) = 0$. Thus, we have $g(x) < 0$ for all $x \in (0, 1)$, and since $1/x - x > 0$ there, $f'(x) < 0$ for all $x \in (0, 1)$. Thus, $f'(x) < 0$ for all $x > 0$, and therefore $G'(x) < 0$ for all $x > 0$.
Remarks: Dealing with $G'(x)$ for the case $x = 0$: We have $$\frac{G(x) - G(0)}{x - 0} = \frac{\frac{x \phi(x)}{2 \Phi(x) - 1} - \frac12}{x} = \frac{2x\phi(x) - (2\Phi(x) - 1)}{2x(2\Phi(x) - 1)}.$$ Let $N(x) := 2x\phi(x) - (2\Phi(x) - 1)$ and $D(x) := 2x(2\Phi(x) - 1)$. We have $$N'(x) = 2\phi(x) + 2x\phi'(x) - 2\phi(x) = 2x\phi'(x)$$ and $$D'(x) = 2(2\Phi(x) - 1) + 2x \cdot 2\phi(x).$$ Note that $\lim_{x\to 0} N'(x) = 0$ and $\lim_{x\to 0} D'(x) = 0$. We have $$N''(x) = 2\phi'(x) + 2x\phi''(x)$$ and $$D''(x) = 4\phi(x) + 4\phi(x) + 4x \phi'(x) = 8\phi(x) + 4x\phi'(x).$$ We have $\lim_{x\to 0} N''(x) = 0$ and $\lim_{x\to 0} D''(x) = \frac{8}{\sqrt{2\pi}}$, so $\lim_{x\to 0} \frac{N''(x)}{D''(x)} = 0$. Using L'Hôpital's rule (twice), we have $\lim_{x\to 0} \frac{N(x)}{D(x)} = \lim_{x\to 0} \frac{N''(x)}{D''(x)} = 0$.
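As a quick numerical sanity check (not a proof), the claim can be tested with only the standard library, writing $\Phi$ in terms of the error function; the grid bound $x \le 5$ and step $0.1$ are arbitrary choices:

```python
import math

def phi(x):      # standard normal PDF
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):      # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def G(x):
    if x == 0:
        return 0.5          # the limit value G(0) = 1/2
    return x * phi(x) / (2 * Phi(x) - 1)

# sample G on a grid and confirm it is strictly decreasing there
xs = [k * 0.1 for k in range(0, 51)]
values = [G(x) for x in xs]
decreasing = all(a > b for a, b in zip(values, values[1:]))
```

The check confirms both the monotonicity on the grid and the limiting value $G(0^+) = 1/2$.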
{ "language": "en", "url": "https://math.stackexchange.com/questions/4481631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
direct image proof $f(f^{-1}(D)) \subseteq D$ Suppose that $f\colon A \rightarrow B$ is a function, and $C \subseteq A$ and $D \subseteq B$. Prove that $f(f^{-1}(D)) \subseteq D$. I got this: let $z\in f(f^{-1}(D)) \Rightarrow (\exists x)(x \in f^{-1}(D)$ and $z=f(x))$ $ \Rightarrow (\exists x)(f(x)\in D$ and $z=f(x))$ $ \Rightarrow (\exists x)(z\in D$ and $z=f(x))$. I do not know if this is correct, and I also do not know how to conclude.
It might help for you to define the sets $f(f^{-1}(D))$ and $f^{-1}(D)$ before you start; for me, at least, this makes the proof easier. It also saves the writing you need to do: $$f(f^{-1}(D))=\{f(x):x \in f^{-1}(D)\}$$ $$f^{-1}(D)=\{a \in A:f(a) \in D \}$$ Now you start the proof. Suppose $z \in f(f^{-1}(D))$. Then there exists $x \in f^{-1}(D)$ such that $f(x)=z$. But if $x \in f^{-1}(D)$, then $z=f(x)\in D$. So $f(f^{-1}(D)) \subseteq D$, and this is what we had to prove.
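A hedged, toy illustration of the inclusion (and of why it can be strict) with a concrete finite function, here represented as a Python dict; the particular sets are arbitrary:

```python
# A tiny finite check of f(f^{-1}(D)) ⊆ D, with f given as a dict A -> B.
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}   # A = {1, 2, 3, 4}
D = {'a', 'c', 'z'}                     # 'z' has no preimage at all

preimage = {x for x in f if f[x] in D}          # f^{-1}(D) = {1, 2, 4}
image_of_preimage = {f[x] for x in preimage}    # f(f^{-1}(D)) = {'a', 'c'}
```

Note the inclusion is strict here: 'z' is in $D$ but not in $f(f^{-1}(D))$, because nothing maps to it.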
{ "language": "en", "url": "https://math.stackexchange.com/questions/4481809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Negative Binomial mean Let $X\sim BN(r,p)$. If I try to compute $E[X]$ through the moment generating function I get the following: \begin{aligned} E[X] &=\left.\frac{d}{d t} M_{X}(t)\right|_{t=0} \\ &=\left.\frac{d}{d t} p^{r}\left(1-p e^{t}(1-p)\right)^{-r}\right|_{t=0} \\ &=p^{r+1} r(1-p)(1-p(1-p))^{-r-1} \end{aligned} But if I find it through the first factorial moment I get $E\left[X^{(h)}\right]$ $=\sum_{x=0}^{\infty} \frac{x !}{(x-h) !} \frac{(x+r-1) !}{x !(r-1) !} p^{r}(1-p)^{x}$ $=\sum_{x=h}^{\infty} \frac{(x+r-1) !}{(x-h) !(r-1) !} p^{r}(1-p)^{x},\left(x^{*}=x-h\right)$ $=\sum_{x^{*}=0}^{\infty} \frac{\left(x^{*}+r+h-1\right) !}{x^{*} !(r-1) !} p^{r}(1-p)^{x^{*}+h}$ $=\frac{(r+h-1) !}{(r-1) !} \frac{(1-p)^{h}}{p^{h}} \sum_{x^{*}=0}^{\infty} \frac{\left(x^{*}+h+r-1\right) !}{x^{*} !(r+h-1) !} p^{r+h}(1-p)^{x^{*}}$ $=\frac{(r+h-1) !}{(r-1) !} \frac{(1-p)^{h}}{p^{h}}$ Therefore, $E[X]=\frac{r(1-p)}{p}$ Where did I go wrong?
The MGF is $\left(\dfrac{1-p}{1-p\mathrm{e}^x}\right)^r$ So taking derivatives we get $$\frac{p\,r\,\mathrm{e}^x\cdot\left(\frac{1-p}{1-p\,\mathrm{e}^x}\right)^{r}}{1-p\,\mathrm{e}^x}$$ Evaluating at $x=0$ gives us $\frac{pr}{1-p}$. (Note this MGF takes $p$ to be the failure probability; with the question's convention, where the pmf is $\binom{x+r-1}{x}p^r(1-p)^x$, the MGF is $p^r\left(1-(1-p)\mathrm{e}^t\right)^{-r}$ and the same computation gives $\frac{r(1-p)}{p}$, matching the factorial-moment result.)
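As a sanity check one can also compute the mean by truncating the series for the question's pmf $P(X=x)=\binom{x+r-1}{x}p^r(1-p)^x$ directly (a rough numerical sketch; the parameters and the cutoff are arbitrary choices):

```python
import math

r, p = 4, 0.3                      # arbitrary sample parameters

def pmf(x):                        # number of failures before the r-th success
    return math.comb(x + r - 1, x) * p**r * (1 - p)**x

cutoff = 2000                      # far enough out that the tail is negligible
total = sum(pmf(x) for x in range(cutoff))
mean = sum(x * pmf(x) for x in range(cutoff))
```

Under this convention the mean is $r(1-p)/p$; the value $pr/(1-p)$ arises when the roles of $p$ and $1-p$ are swapped.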
{ "language": "en", "url": "https://math.stackexchange.com/questions/4481985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Which is larger: $20!$ or $2^{60}$? Which is larger: $20!$ or $2^{60}$? I am looking for an elegant way to solve this problem, other than my solution below — in particular, a solution that avoids logarithms and instead uses inequalities analogous to those below. My solution: Write $20!$ in prime factors and compare with $2^{60}$: $$ 20! = (2^{2} \cdot 5)(19)(2 \cdot 3^{2})(17)(2^{4})(3 \cdot 5)(2 \cdot 7)(13)(2^{2} \cdot 3)(11)(2 \cdot 5)(3^{2})(2^{3})(7)(2 \cdot 3) (5) (2^{2}) (3) (2) $$ $$ = 2^{18} (5)(19)(3^{2})(17)(3 \cdot 5)(7)(13)(3)(11)(5)(3^{2})(7)( 3) (5) (3) $$ so it is left to compare $2^{42}$ with $(5)(19)(3^{2})(17)(3 \cdot 5)(7)(13)(3)(11)(5)(3^{2})(7)( 3) (5) (3)$. We write the prime factors nicely as: $$ 3^{8}5^{4}7^{2}(11) (13)(17)(19) $$ Notice $(3)(11) > 2^{5}$, $(13)(5)>2^{6}$, $(19)(7) >2^{7}$, $17 > 2^{4}$, so we now focus on $$3^{7}5^{3}7 = 2187(5^{3})7 > 2048(5^{3})7 = 2^{11}\cdot 875 >2^{11}\cdot 512 = 2^{20}, $$ so the product of the odd prime factors is larger than $2^{5+6+7+4+20} = 2^{42}$, and hence $20! > 2^{18} \cdot 2^{42} = 2^{60}$.
Here is a way to go for the solution using only elementary computations. (And building partial products that get bigger than and closer to powers of two. I wanted to use first very close approximations like $3\cdot 18\cdot 19=1026>1024$, but there is no need to be so economical at the beginning, and very generous at the end...) $$ \begin{aligned} 3\cdot 12 &= 36 \\ &> 32 =2 ^5\ ,\\ 5\cdot 13 &= 65 \\ &> 64 =2 ^6\ ,\\ 6\cdot 7\cdot 9\cdot 11 &= 6\cdot 11\cdot 7\cdot 9 =66\cdot 63=(64+2)(64-1)=64^2 +64-2 \\ &> 64^2 = (2^8)^2=2^{12}\ , \\ 15\cdot 17\cdot 14\cdot 19 &= (256-1)(16-2)(16+3)= (256-1)(256+16-6) \\ &>(256-1)(256+2)=256^2 + 256-2\\ &>256^2=(2^8)^2=2^{16}\ , \\ 10\cdot 18\cdot 20 &= 3600 \\ &>2048=2^{11}\ , \\[3mm] &\text{Putting all together:} \\[3mm] 20! &=2\cdot 4\cdot 8\cdot 16 \cdot(3\cdot 12) \cdot(5\cdot 13) \cdot(6\cdot 7\cdot 9\cdot 11) \\ &\qquad\qquad\qquad \cdot(14\cdot 15\cdot 17\cdot 19) \cdot(10\cdot 18\cdot 20) \\ &>2^{\displaystyle1+2+3+4+5+6+12+16+11} \\ &= 2^{\displaystyle60}\ . \end{aligned} $$
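As a quick machine check of both the final claim and each grouped lower bound used above (not needed for the proof itself):

```python
import math

lhs = math.factorial(20)           # 2432902008176640000
rhs = 2 ** 60                      # 1152921504606846976

# the grouped lower bounds from the argument (2*4*8*16 = 2^10 is exact)
bounds = [
    ((3, 12), 2**5),
    ((5, 13), 2**6),
    ((6, 7, 9, 11), 2**12),
    ((14, 15, 17, 19), 2**16),
    ((10, 18, 20), 2**11),
]
all_bounds_hold = all(math.prod(group) > bound for group, bound in bounds)
```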
{ "language": "en", "url": "https://math.stackexchange.com/questions/4482212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What is wrong with my proof that $\int 2x dx= 2x^2$ by writing $2x=\underbrace{2+2+\cdots+2}_{x\;\text{times}}$? I know $\int 2x \,dx = x^2 + C$ (by the power rule) but why does the following proof not give the same answer? \begin{align*} \int 2x \,dx &= \int \underbrace{(2 + 2 + 2 + \dots + 2)}_{x \text{ times}} \, dx \\ &= \underbrace{\int{2} \, dx + \int{2} \, dx + \dots + \int{2} \, dx}_{x \text{ times}}\\ &= 2x + 2x + \dots + 2x + C \\ &= 2x \times x + C \\ &= 2x^2 + C \end{align*} (And I have the same question for this false proof that $\int{2^x} \, dx = 2^{x}x+ C$) \begin{align*} \int{2^x} \,dx &= \int \underbrace{(2 \cdot 2 \cdot 2 \cdot \dots \cdot 2)}_{x \text{ times}} \cdot 1 \, dx \\ &= 2 \cdot \int \underbrace{(2 \cdot 2 \cdot 2 \cdot \dots \cdot 2)}_{(x-1) \text{ times}} \cdot 1 \, dx && (\text{Constant Multiple Rule})\\ &= 2^2 \cdot \int \underbrace{(2 \cdot 2 \cdot 2 \cdot \dots \cdot 2)}_{(x-2) \text{ times}} \cdot 1 \, dx && (\text{Constant Multiple Rule})\\ &= 2^x \cdot \int{1} \, dx \\ &= 2^{x}x+ C \\ \end{align*} I suspect that it has something to do with not being able to: * *Change an integral of sums into a sum of integrals for an arbitrary $x$, and *Remove a constant from an integral when there are a variable number of those constants. But I'm not sure why these do not hold. If this is the reason, is there a theorem stating it? Thanks in advance!
The above comments and answer point out that you can't take a variable out of an integral, and that $x$ is not a natural number, but in fact, in the given examples, the error could be more fundamental. Let $f:\mathbb Z \to \mathbb R$ and $g:\mathbb Z \to \mathbb R$ such that $f(x)=2x$ and $g(x)=2^x.$ Giving benefit of the doubt that the author indeed meant to work in $\mathbb Z,$ that is, that $$\int 2x \,\mathrm dx=\int f,\\ \int 2^x \,\mathrm dx=\int g,$$ then both integrals immediately equal $0,$ since the domain of integration in each case is a set of isolated points. If the intention, however, was for interval integration domains, then it is of course invalid to assert that $2x=\underbrace{(2 + 2 + 2 + \dots + 2)}_{x \text{ times}}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4482632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Couldn't understand decomposition inside derivation of option valuation using Martingale method In the derivation of option valuation using the martingale method in the continuous-time framework of my book, "Brownian Motion Calculus" by Ubbo F. Wiersema, I have run into an issue. I add the derivation below: In the continuous time setting, start from the standard SDE for the stock price, $$\frac{dS_t}{S_t}=\mu dt+\sigma dB_t$$ Introduce the discounted stock price, $$S^{\star}_t\stackrel{\text{def}}{=}\frac{S_t}{e^{rt}}$$ Ito's formula for $S^{\star}_t$ as a function of $t$ and $S$ gives, $$\frac{dS^{\star}_t}{S^{\star}_t}=(\mu-r)dt+\sigma dB_t=\sigma\left[ \frac{\mu-r}{\sigma}dt+dB_t \right]=\sigma\left[ \varphi dt+dB_t \right]$$ The probability density of $B_t$ at $B_t=x$ is, $$\frac{1}{\sqrt t \sqrt{2\pi}}\exp\left[-\frac12\left(\frac{x}{\sqrt t}\right)^2\right]$$ This can be decomposed into the product of two terms, $$\frac{1}{\sqrt t \sqrt{2\pi}}\exp\left[-\frac12\left(\frac{\varphi t+x}{\sqrt t}\right)^2\right]\exp\left[\frac12 \varphi^2 t+\varphi x\right]$$ With $y \stackrel{\text { def }}{=} \varphi t+x$ the first term can be written as $$ \frac{1}{\sqrt{t} \sqrt{2 \pi}} \exp \left[-\frac{1}{2}\left(\frac{y}{\sqrt{t}}\right)^{2}\right] $$ which is the probability density of another Brownian motion, say $\widehat{B}(t)$, at $\widehat{B}(t)=y$. It defines $\widehat{B}(t) \stackrel{\text { def }}{=} \varphi t+B(t)$, so $d \widehat{B}(t)=\varphi d t+d B(t)$. Substituting the latter into the SDE for $S^{\star}$ gives $$ \frac{d S^{\star}(t)}{S^{\star}(t)}=\sigma d \widehat{B}(t) \quad \text { and } \quad \frac{d S(t)}{S(t)}=r d t+\sigma d \widehat{B}(t) $$ This says that under the probability distribution of Brownian motion $\widehat{B}(t)$, $S^{\star}$ is a martingale. I couldn't understand how they arrive at this decomposition. Another thing: why do we need the discounted price to be a martingale?
Does it mean we couldn't predict the future regardless of all prior knowledge, since predictability would introduce arbitrage? @Ali gave an answer showing the decomposition: $$ \exp\left[-\frac{1}{2}\left( \frac{\phi t + x}{\sqrt{t}}\right)^{2}\right]= \exp\left[-\frac{1}{2}\left( \phi^{2}t+2\phi x + x^{2}/t\right)\right]= \exp\left[-\frac{1}{2} \phi^{2}t-\phi x\right] \exp\left[-\frac{1}{2t}x^{2}\right] $$ But I still couldn't get any intuition for that decomposition. I want to know why this decomposition was introduced, and its meaning in the context of option valuation. Is this related to the Girsanov transformation?
$$ \exp\left[-\frac{1}{2}\left( \frac{\phi t + x}{\sqrt{t}}\right)^{2}\right]= \exp\left[-\frac{1}{2}\left( \phi^{2}t+2\phi x + x^{2}/t\right)\right]= \exp\left[-\frac{1}{2} \phi^{2}t-\phi x\right] \exp\left[-\frac{1}{2t}x^{2}\right] $$
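A hedged numeric spot-check of this identity (and of the book's equivalent form, density of $B_t$ = first term $\times\, \exp[\tfrac12\varphi^2 t + \varphi x]$) at arbitrary sample values:

```python
import math

phi_, t, x = 0.7, 1.3, -0.4   # arbitrary samples; phi_ stands for the drift (mu - r) / sigma

lhs = math.exp(-0.5 * ((phi_ * t + x) / math.sqrt(t)) ** 2)
rhs = math.exp(-0.5 * phi_**2 * t - phi_ * x) * math.exp(-x**2 / (2 * t))

# equivalently: density of B_t at x  =  (first term) * exp(phi^2 t / 2 + phi x)
norm = math.sqrt(t) * math.sqrt(2 * math.pi)
density = math.exp(-x**2 / (2 * t)) / norm
first_term = lhs / norm
product = first_term * math.exp(0.5 * phi_**2 * t + phi_ * x)
```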
{ "language": "en", "url": "https://math.stackexchange.com/questions/4482764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Hexagons and pentagons on a standard soccer ball How would one go about calculating the diameter of the circumscribing sphere formed when the soccer ball is fully inflated? Suppose, for example, the curved side lengths where all the hexagons and pentagons join on the sphere's surface are 2.5 inches.
From the Wikipedia page on Truncated Icosahedron we have the formula $ r_u = \dfrac{a}{2} \sqrt{1 + 9 \varphi^2 } = \dfrac{a}{4} \sqrt{58 + 18 \sqrt{5}} = 2.47801866 a $ where $r_u$ is the radius of the circumscribing sphere, $a$ is the (straight) edge length of the truncated icosahedron, and $\varphi$ is the golden ratio. Now the angle subtended by an edge at the center is $ \theta = 2 \sin^{-1} \bigg( \dfrac{ a }{2 r_u } \bigg) = 0.40633789 \text{ (radians)}$ Therefore, the curved edge length is $ c = r_u (0.40633789) $, i.e. $ r_u = 2.461006 c $. Now, we're given $ c = 2.5 \text{ inches} $; therefore, $ r_u = 2.461006 (2.5) = 6.152515 \text{ inches} $, and the diameter asked for is twice that, about $12.305$ inches.
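A small script reproducing the arithmetic (a sketch; only the edge length $c = 2.5$ comes from the question):

```python
import math

c = 2.5                                                # curved edge length, inches
ru_per_a = 0.25 * math.sqrt(58 + 18 * math.sqrt(5))    # r_u / a for a truncated icosahedron
theta = 2 * math.asin(1 / (2 * ru_per_a))              # angle an edge subtends at the center
r_u = c / theta                                        # radius recovered from the arc length
diameter = 2 * r_u
```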
{ "language": "en", "url": "https://math.stackexchange.com/questions/4482929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can two different triads of natural numbers give the same results to these operations? If I have a triad of natural numbers $a$, $b$ and $c$, and another triad $a'$, $b'$ and $c'$, such that $a\neq a'$, $b\neq b'$, or $c\neq c'$, is it possible that these equations are all true? $ab+c=a'b'+c'$ $ac+b=a'c'+b'$ $cb+a=c'b'+a'$ If I had only the first equation, it is obviously possible; for example $a=8$, $b=2$ and $c=14$ give the same result as $a'=10$, $b'=2$ and $c'=10$. When we introduce the other two equations it feels like it would be impossible, but I don't know how to prove it.
There are (perhaps some might say trivial) counterexamples of the form $(1,m,n)$ and $(1,n,m)$--for instance $a=1,b=3,c=4$ and $a'=1, b'=4, c'=3$ There are also nontrivial counterexamples such as: $$a=2,b=5,c=89 {\rm \;\; and\;\; } a'=5,b'=13, c'=34$$ I found this with a search using Python. I just noticed: They are all Fibonacci numbers. Here's one that isn't all Fibonacci numbers: $$a=2,b=9,c=67 {\rm \;\; and\;\; } a'=3,b'=14, c'=43$$
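The brute-force search can be sketched as follows (a rough version of the kind of Python search mentioned; the bound 30 is an arbitrary speed/coverage trade-off):

```python
from itertools import product

def sums(a, b, c):
    return (a * b + c, a * c + b, c * b + a)

# group all triads in a small box by their triple of sums
buckets = {}
for triad in product(range(1, 30), repeat=3):
    buckets.setdefault(sums(*triad), []).append(triad)

collisions = [v for v in buckets.values() if len(v) > 1]

# the nontrivial example from the answer lies outside the search box
nontrivial_ok = sums(2, 5, 89) == sums(5, 13, 34)
```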
{ "language": "en", "url": "https://math.stackexchange.com/questions/4483076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Function where second derivative is equal to reciprocal squared I was recently interested in whether there exists a closed-form solution for the position of an object (say a spacecraft) in freefall as a function of time. To complicate things, I wanted to take into account the increase in acceleration as you get nearer to the surface. The differential equation is straightforward: $ x''(t) = \frac{\mu}{x(t)^2} $ where $\mu = G \cdot m_{planet}$ How, if at all, would I go about solving this? I've tried to use the common functions that return themselves when differentiated (trig, hyperbolic trig, exponential, etc.) but none of them obviously equal the reciprocal of themselves when differentiated twice. The closest I've gotten was $B\cosh(D \cdot t)$, but it doesn't work in the differential equation and doesn't quite match the analytical curve.
If you switch variables $$x''(t) = \frac{\mu}{x(t)^2} \quad \implies \qquad -\frac {t''(x)}{[t'(x)]^3}=\frac{\mu}{x^2}$$ Reduction of order $p(x)=t'(x)$ leads to $$p(x)=t'(x)=\pm\frac{\sqrt{x}}{\sqrt{2} \sqrt{c_1 x-\mu }}$$ $$t(x)+c_2=\pm \frac{1}{\sqrt{2} c_1^{3/2}}\Big[\sqrt{c_1} \sqrt{x} \sqrt{c_1 x-\mu }+\mu \log \left(\sqrt{c_1} \sqrt{c_1 x-\mu }+c_1 \sqrt{x}\right) \Big]$$
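A hedged finite-difference check of the closed form (sample constants chosen so that $c_1 x - \mu > 0$ on the test range; taking the $+$ branch):

```python
import math

c1, mu = 2.0, 1.0       # sample constants with c1*x - mu > 0 near x = 3

def t_of_x(x):          # the closed-form antiderivative, + branch, dropping c2
    s = math.sqrt(c1) * math.sqrt(c1 * x - mu) + c1 * math.sqrt(x)
    return (math.sqrt(c1) * math.sqrt(x) * math.sqrt(c1 * x - mu)
            + mu * math.log(s)) / (math.sqrt(2) * c1 ** 1.5)

def t_prime_expected(x):
    return math.sqrt(x) / (math.sqrt(2) * math.sqrt(c1 * x - mu))

x0, h = 3.0, 1e-6
t_prime_numeric = (t_of_x(x0 + h) - t_of_x(x0 - h)) / (2 * h)
```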
{ "language": "en", "url": "https://math.stackexchange.com/questions/4483248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How do I solve $\int \frac{20}{(x-1)(x^2+9)}dx$ I've been trying to solve the following integral: $\int \frac{20}{(x-1)(x^2+9)}dx$ Sadly I'm kinda new to resolving fractional integrals and I'm not sure which method(s) I should use to solve it. I've tried using partial fractions but I'm doing something incorrectly or maybe this method isn't the best suited for this case. I've tried using partial fractions. Here is what I've got so far
We should resolve the integrand as below: $$\frac{20}{(x-1)\left(x^{2}+9\right)} \equiv \frac{A}{x-1}+\frac{B x+C}{x^{2}+9}$$ Then $$20 \equiv A\left(x^{2}+9\right)+(B x+C)(x-1)$$ Putting $x=1$ yields $$ \begin{aligned} 20=A(10) & \Rightarrow A=2 \\ (B x+C)(x-1) &=20-2\left(x^{2}+9\right) \\ &=2-2 x^{2} \\ &=-2(x+1)(x-1) \\ \therefore \quad B x+C &=-2(x+1) \\ \therefore \frac{20}{(x-1)\left(x^{2}+9\right)} &=\frac{2}{x-1}-\frac{2(x+1)}{x^{2}+9} \\ \int \frac{20}{(x-1)\left(x^{2}+9\right)} d x &=2 \int \frac{d x}{x-1}-\int \frac{2 x d x}{x^{2}+9}-2 \int \frac{d x}{x^{2}+9} \end{aligned} $$ Hopefully that is enough for you to continue!
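To finish, the three pieces give $2\ln|x-1| - \ln(x^2+9) - \frac{2}{3}\arctan\frac{x}{3} + C$. A hedged finite-difference check that this differentiates back to the integrand:

```python
import math

def integrand(x):
    return 20 / ((x - 1) * (x * x + 9))

def antiderivative(x):
    return (2 * math.log(abs(x - 1))
            - math.log(x * x + 9)
            - (2 / 3) * math.atan(x / 3))

h = 1e-6
checks = []
for x0 in (2.0, 5.0, -3.0):     # arbitrary points away from x = 1
    numeric = (antiderivative(x0 + h) - antiderivative(x0 - h)) / (2 * h)
    checks.append(abs(numeric - integrand(x0)) < 1e-6)
```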
{ "language": "en", "url": "https://math.stackexchange.com/questions/4483615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are there infinitely many functions whose repeated derivatives never become zero? That is, are there infinitely many functions that you can differentiate any number of times without the result ever being the zero function? I'll try to clarify: if you take the derivative of f(x) repeatedly and it never equals zero, are there infinitely many functions f(x) could be? (A proof would be cool!) f(x) = e^x is a possibility, as it is its own derivative, and I believe the trigonometric functions work? Are there others? Edit: Of different forms! Obviously the functions e^x, 2e^x, 3e^x... continue forever and all fit the description.
It might be more helpful to consider the complement of the question, i.e. to ask which functions *do* eventually differentiate to zero: only real polynomials eventually differentiate to zero (if $f^{(n)} \equiv 0$ for some $n$, then $f$ is a polynomial of degree less than $n$). So every infinitely differentiable function that is not a polynomial has the property you want, and there are infinitely many of those. Helpful related query: When do polynomials eventually differentiate to zero?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4483984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A silly question on range of a quadratic function. Let $f:\mathbb{R}\to\mathbb{R}\space|\space f(x) = x^2 + 3x + 2 \space\forall\space x \in\mathbb{R}$ We are asked to find the range of the function $f$ I begin as follows: Assume $y=x^2 + 3x + 2$ for some $ x\in\mathbb{R}$ Collecting terms to one side, $x^2 + 3x + 2-y=0$ Now as x is real, the discriminant of the above quadratic expression must be greater than or equal to zero. Hence, $3^2 - 4(1)(2-y)\geqslant0$ Solving, we get, $y\geqslant-1/4$ Hence the range of the function $f$ is the set $A=\{x|x\geq-1/4\}$ This answer is correct. The range of the function $f$ is indeed the set $A=\{x|x\geq-1/4\}$. However, I have a problem with my last step. The final equality only tells me that $y$ is greater than OR equal to $-1/4$. It doesn't exactly say that $y$ WILL take all values greater than AND equal to $-1/4$. Is there a way to prove that $y$ will take all such values? Any hint or help regarding this will be appreciated. Thank you
I would write this as $y = (x+\frac32)^2-\frac14$, from which it is obvious what the range would be, as the square term takes all nonnegative values.
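To address the surjectivity worry directly: given any $y \ge -\frac14$, the explicit choice $x = -\frac32 + \sqrt{y + \frac14}$ satisfies $f(x) = y$, so every such $y$ is attained. A tiny numeric illustration:

```python
import math

def f(x):
    return x * x + 3 * x + 2

def preimage_of(y):          # defined for y >= -1/4
    return -1.5 + math.sqrt(y + 0.25)

samples = [-0.25, 0.0, 1.0, 100.0]
hits = [abs(f(preimage_of(y)) - y) < 1e-9 for y in samples]
```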
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Can we say that $\|e_i a e_i - a\| \to 0\ $? Let $A$ be a non-unital $C^{\ast}$-algebra. Then for any approximate unit $(e_i)_{i \in I}$ and for any $a \in A$ can we say that $\|e_i a e_i - a e_i \| \xrightarrow{i} 0\ $?
Yes. This is true. Following the usual definition of approximate unit in a $C^*$-algebra (where the approximating elements are contractive), we get $$\|e_i a e_i-ae_i\| =\|(e_i a-a)e_i\| \le \|e_i a-a\|\stackrel{i \to \infty}\longrightarrow 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the area of the surface $x^2 + y^2 = 1 + z^2$ as $z \in [- \sqrt 3, \sqrt 3]$ The question states the following: Calculate the area of the surface $x^2 + y^2 = 1 + z^2$ as $z \in [- \sqrt 3, \sqrt 3]$ My attempt In order to solve this question, the first thing I think about is to parametrize the surface so I can then just apply the definition of the area of a surface $$A(S) = \iint_D || \Phi_x \times \Phi_y|| \ dx \ dy$$ I consider the parametrization $\Phi (x,y) = (x, y, \sqrt{x^2 + y^2 - 1}) \ $. Then $$\begin{cases} \Phi_x = (1,0,\displaystyle \frac{x}{\sqrt{x^2 + y^2 - 1}}) \\ \Phi_y = (0,1,\displaystyle \frac{y}{\sqrt{x^2 + y^2 - 1}})\end{cases} \Longrightarrow \Phi_x \times \Phi_y = (-\frac{x}{\sqrt{x^2 + y^2 - 1}},-\frac{y}{\sqrt{x^2 + y^2 - 1}},1)$$ Then $$|| \Phi_x \times \Phi_y||= \displaystyle \sqrt{\frac{x^2}{x^2 + y^2 - 1} + \frac{y^2}{x^2 + y^2 - 1} + 1} = \sqrt{\frac{x^2 + y^2}{x^2 + y^2 - 1} + 1} $$ As we work with a symmetric surface, we'll consider $z \in [0, \sqrt 3]$ and simply multiply the result by two.
Then, the parametrization goes from $D$ to $\mathbb R^3$, $\Phi : D \subset \mathbb R^2 \rightarrow \mathbb R^3$, with $D$ the following domain $$D = \lbrace (x,y) \in \mathbb R^2 : 1 \leq x^2 + y^2 \leq 4 \rbrace$$ Thus we get $$A(S) = 2 \cdot \iint_D || \Phi_x \times \Phi_y|| \ dx \ dy = 2 \cdot \iint_D \sqrt{\frac{x^2 + y^2}{x^2 + y^2 - 1} + 1}\ dx \ dy $$ Using polar coordinates, $\begin{cases} x = r \cdot \cos \theta \\ y = r \cdot \sin \theta \end{cases} : r \in [1,2] \ \& \ \theta \in [0, 2\pi]$ we get the following integral $$A(S) = 2 \cdot \int_0^{2\pi} \int_{1}^2 r \cdot \displaystyle \sqrt{\frac{r^2 \cos^2 \theta + r^2 \sin^2 \theta}{r^2 \cos^2 \theta + r^2 \sin^2 \theta - 1} + 1} \ dr \ d\theta = 2 \cdot \int_0^{2\pi} \int_{1}^2 r \cdot \sqrt{\frac{r^2}{r^2 - 1} + 1} \ dr \ d\theta$$ $$ = 4 \pi \cdot \int_{1}^2 r \cdot \sqrt{\frac{r^2}{r^2 - 1} + 1} \ dr $$ The problem is that I reach the integral above, which I don't know how to tackle. I think I may have done something wrong along the way, since this question is taken from a university exam where no computers or calculators were available. Any help?
The surface is $ x^2 + y^2 - z^2 = 1 $. Its standard parameterization is $ P = (x, y, z) = ( \sec t \cos s , \sec t \sin s , \tan t ) $ So the surface area is $ \text{A} = \displaystyle \int_{t = - \frac{\pi}{3} }^{ \frac{\pi}{3} } \int_{s = 0}^{2 \pi} \| P_t \times P_s \| \ d s \ d t $ And we have $ P_t = (\sec t \tan t \cos s , \sec t \tan t \sin s , \sec^2 t ) $ $ P_s = (- \sec t \sin s , \sec t \cos s , 0 ) $ So that $ P_t \times P_s = ( - \sec^3 t \cos s , - \sec^3 t \sin s , \sec^2 t \tan t) $ And $ \| P_t \times P_s \| = | \sec^2 t | \sqrt{ \sec^2 t + \tan^2 t } = \sec^2 t \sqrt{ 2 \tan^2 t + 1 } $ Therefore, the surface area is (using the substitution $u = \tan t$) $ \text{Area} = 2 \pi \displaystyle \int_{u = -\sqrt{3}}^{\sqrt{3}} \sqrt{ 2u^2 + 1} \ du $ Using the trigonometric substitution $ \sqrt{2} u = \tan \theta $, the integral becomes $ \displaystyle \int \dfrac{1}{\sqrt{2}} \sec^3 \theta \ d \theta $ From the tables, $ \displaystyle \int \sec^3 \theta \ d \theta = \dfrac{1}{2} \bigg( \sec \theta \tan \theta + \ln \bigg| \sec \theta + \tan \theta \bigg| \bigg) $ Evaluating this between $\theta_1 = \tan^{-1}( -\sqrt{6} )$ and $\theta_2 = \tan^{-1}( \sqrt{6} )$ gives $ \displaystyle \int_{u = -\sqrt{3}}^{\sqrt{3}} \sqrt{ 2u^2 + 1} \ du = \dfrac{1}{2\sqrt{2}} \bigg( 2 \sqrt{42} + \ln\bigg( \dfrac{\sqrt{7}+\sqrt{6}}{\sqrt{7} - \sqrt{6} } \bigg) \bigg) $ Therefore, the area is $ \text{Area} = \dfrac{\pi}{\sqrt{2}}\bigg( 2 \sqrt{42} + \ln\bigg( \dfrac{\sqrt{7}+\sqrt{6}}{\sqrt{7} - \sqrt{6} } \bigg) \bigg) $
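A hedged numeric cross-check of the closed form, comparing Simpson's rule on $2\pi\int_{-\sqrt3}^{\sqrt3}\sqrt{2u^2+1}\,du$ against the final expression:

```python
import math

def g(u):                       # integrand after the substitution u = tan t
    return math.sqrt(2 * u * u + 1)

def simpson(f, a, b, n=2000):   # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

area_numeric = 2 * math.pi * simpson(g, -math.sqrt(3), math.sqrt(3))

s6, s7 = math.sqrt(6), math.sqrt(7)
area_closed = (math.pi / math.sqrt(2)) * (2 * math.sqrt(42) + math.log((s7 + s6) / (s7 - s6)))
```

Both come out to roughly 36.03 square units.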
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What is the gradient of $f (x, (y^2+z^2)^{1/2})$? Consider a $c \in \mathbb R$ and a function $f: U \subset \mathbb R^2\rightarrow \mathbb R $ with $ \nabla f (p) \neq 0, \forall p \in f^{-1}(c)$, where $U$ is contained in the upper half plane y > 0. Now define $g: U \times \mathbb R \rightarrow \mathbb R$ by $g(x, y, z) = f (x, (y^2+z^2)^{1/2})$. Then it is to be shown that $\nabla g(q) \neq 0, \forall q \in g^{-1}(c)$. But I am not able to figure out how to write $\nabla f$ in way that would help me solve the problem.
I assume that what you want is to find $\nabla g$. In order to do so, we have to apply the chain rule. First of all, notice that the function $g$ can be interpreted as the composition of $f: U \subset \mathbb{R}^2\to\mathbb{R}$ and $h: \mathbb{R}^3\to\mathbb{R}^2$, where $h(x,y,z) = (x, (y^2+z^2)^\frac{1}{2})$. The composition is well defined and $g = f \:\circ h $. Then the partial derivatives of $g$ are: * *$\frac{\partial g}{\partial x}(x,y,z) = f^{(1,0)}(x, (y^2+z^2)^\frac{1}{2})$ *$\frac{\partial g}{\partial y}(x,y,z) = \frac{y}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2})$ *$\frac{\partial g}{\partial z}(x,y,z) = \frac{z}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2})$ The easiest way to compute the partial derivatives using the chain rule is, for me at least, using Jacobian matrices. More in particular: $$ J_f(x,y) = \begin{bmatrix} f^{(1,0)} & f^{(0,1)} \end{bmatrix} $$ $$ J_h(x,y,z)= \begin{bmatrix} 1 & 0 & 0\\ 0 & \frac{y}{(y^2+z^2)^\frac{1}{2}} & \frac{z}{(y^2+z^2)^\frac{1}{2}} \end{bmatrix} $$ The chain rule tell us that $J_g(x,y,z) = J_f(h(x,y,z))J_h(x,y,z)$. Therefore: $$ J_g(x,y,z) = \begin{bmatrix} f^{(1,0)}(x, (y^2+z^2)^\frac{1}{2}) & \frac{y}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2}) & \frac{z}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2}) \end{bmatrix} $$ Therefore its gradient is: $$\nabla g (x,y,z) = (f^{(1,0)}(x, (y^2+z^2)^\frac{1}{2}), \frac{y}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2}), \frac{z}{(y^2+z^2)^\frac{1}{2}}f^{(0,1)}(x, (y^2+z^2)^\frac{1}{2}))$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Having trouble understanding the generalized chain rule for multivariable functions Determine the derivative of $F\circ G$ where $F:\mathbb{R^3}\rightarrow \mathbb{R^2}$ and $G:\mathbb{R^2}\rightarrow \mathbb{R^3}$ are given by $$F(x,y,z)=\begin {pmatrix} \cos(x)-z \\xe^{y}\end {pmatrix}, G(u,v)=\begin {pmatrix} v^2\\uv+v^3 \\ u^2+v\end {pmatrix} $$ I'm having trouble applying the generalized chain rule for this question. $F\circ G=F(G(u,v)) = \begin {pmatrix} \cos(v^2)-(u^2+v)\\v^2e^{uv+v^3}\end {pmatrix}.$ Unless I'm mistaken we can treat the composition of the functions as a different function such that $F\circ G =h:\mathbb{R^2}\rightarrow \mathbb{R^2}$ and then differentiating this function would simply be taking the partial derivatives of the two expressions: $$D=\begin {pmatrix}\frac{\partial}{\partial u} \big(\cos(v^2)-u^2-v \big)& \frac{\partial}{\partial v} (\cos(v^2)-u^2-v) \\ \frac{\partial}{\partial u} (v^2\exp({uv+v^3})) & \frac{\partial}{\partial v}(v^2\exp(uv+v^3))\end {pmatrix}$$ But I honestly don't know if any of this is correct and would appreciate if someone could help.
Compute $F'{\large\circ}G$, that is $F'$ with $x=v^2,y=uv+v^3,z=u^2+v$: $$ \begin{align} F' &=\frac{\partial}{\partial(x,y,z)}\begin{pmatrix}\cos(x)-z\\xe^{y}\end {pmatrix}\\ &=\begin{pmatrix}-\sin(x)&0&-1\\e^y&xe^y&0\end{pmatrix}\\ &=\begin{pmatrix}-\sin\left(v^2\right)&0&-1\\e^{uv+v^3}&v^2e^{uv+v^3}&0\end{pmatrix} \end{align} $$ Compute $G'$: $$ \begin{align} G' &=\frac{\partial}{\partial(u,v)}\begin{pmatrix}v^2\\uv+v^3\\u^2+v\end{pmatrix}\\ &=\begin{pmatrix}0&2v\\v&u+3v^2\\2u&1\end{pmatrix} \end{align} $$ Compute $\left(F'{\large\circ}G\right)G'$ $$ \overbrace{\begin{pmatrix}-\sin\left(v^2\right)&0&-1\\e^{uv+v^3}&v^2e^{uv+v^3}&0\end{pmatrix}\vphantom{\begin{pmatrix}0\\0\\0\end{pmatrix}}}^{F'{\large\circ}G}\overbrace{\begin{pmatrix}0&2v\\v&u+3v^2\\2u&1\end{pmatrix}}^{G'} =\begin{pmatrix}-2u&-2v\sin\left(v^2\right)-1\\v^3e^{uv+v^3}&\left(2v+uv^2+3v^4\right)e^{uv+v^3}\end{pmatrix} $$ which, I believe, looks like the matrix you computed (once the partial derivatives in $D$ are evaluated).
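A hedged finite-difference check that this product matches the Jacobian of the composition at an arbitrary point $(u_0, v_0)$:

```python
import math

def F(x, y, z):
    return (math.cos(x) - z, x * math.exp(y))

def G(u, v):
    return (v * v, u * v + v**3, u * u + v)

def H(u, v):                         # the composition F o G
    return F(*G(u, v))

def product_jacobian(u, v):          # (F' o G) G' as computed above
    e = math.exp(u * v + v**3)
    return [[-2 * u, -2 * v * math.sin(v * v) - 1],
            [v**3 * e, (2 * v + u * v * v + 3 * v**4) * e]]

u0, v0, h = 0.4, 0.7, 1e-6
numeric = [
    [(H(u0 + h, v0)[i] - H(u0 - h, v0)[i]) / (2 * h),
     (H(u0, v0 + h)[i] - H(u0, v0 - h)[i]) / (2 * h)]
    for i in range(2)
]
analytic = product_jacobian(u0, v0)
max_err = max(abs(numeric[i][j] - analytic[i][j]) for i in range(2) for j in range(2))
```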
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Is $I-JR$ invertible if $J$ is skew-symmetric and $R$ is symmetric positive semi-definite? Given any skew-symmetric matrix $J \in \mathbb{R}^{n \times n}$, we know that $(I-J)$ is invertible, where $I \in \mathbb{R}^{n \times n}$ denotes the identity matrix. Now, assume that $J \in \mathbb{R}^{n \times n}$ is skew-symmetric, i.e. $J=-J^T$ and $R \in \mathbb{R}^{n \times n}$ is symmetric and positive semi-definite. Can we also say anything about the invertibility of $(I-JR) \in \mathbb{R}^{n \times n}$? I would be very grateful for hints. Thanks in advance.
It's convenient to extend the field to $\mathbb C$, so $R$ is Hermitian PSD and $J$ is skew-Hermitian (which implies $(iJ)$ is Hermitian). Then $1\cdot\Big\vert\det\big(I-JR\big)\Big\vert$ $=\Big\vert \det\big(iI\big)\Big \vert \cdot\Big \vert \det\big(I-JR\big)\Big\vert$ $=\Big\vert \det\big(iI\big)\cdot \det\big(I-JR\big)\Big\vert$ $=\Big\vert \det\big(iI-(iJ)R\big)\Big\vert$ $\neq 0$ because $i$ is not an eigenvalue of $(iJ)R$: the matrix $(iJ)R$ has purely real eigenvalues, since it has the same nonzero eigenvalues as $R^\frac{1}{2}(iJ)R^\frac{1}{2}$, which is Hermitian.
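A tiny concrete instance (pure Python, $2\times 2$; the particular $J$ and $R$ are arbitrary choices with $J$ skew-symmetric and $R$ symmetric positive definite):

```python
# J skew-symmetric, R symmetric positive definite, det(I - JR) != 0.
J = [[0.0, 1.0], [-1.0, 0.0]]          # skew-symmetric
R = [[2.0, 1.0], [1.0, 1.0]]           # symmetric, det = 1 > 0, trace = 3 > 0

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

JR = matmul(J, R)
M = [[(1.0 if i == j else 0.0) - JR[i][j] for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

In the $2\times2$ case with $J = \begin{pmatrix}0&j\\-j&0\end{pmatrix}$ one can even expand $\det(I-JR) = 1 + j^2\det R \ge 1$ directly; here $1 + 1\cdot 1 = 2$.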
{ "language": "en", "url": "https://math.stackexchange.com/questions/4484954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interpretation of $l_p$ norm inequality If $1\le p\le q\le \infty$, we know that the following inequality holds: $$\|a\|_q\le \|a\|_p.$$ What could be a possible interpretation of this inequality for a non-mathematician? For example, can we say something like "the $l_p$ norm becomes more robust (or sensitive) to outlying values with the increase of $p$"?
The inequality $\lVert a\rVert_q\leqslant \lVert a\rVert_p$ means that if $\lVert a\rVert_p\leqslant 1$, then $\lVert a\rVert_q\leqslant 1$ or in other words, that the unit ball for the $\ell^p$-norm is contained in the unit ball for the $\ell^q$-norm. For an interpretation of the unit ball of these spaces, you can have a look here.
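If it helps to make the claim concrete for a non-mathematician, the inequality can also be demonstrated empirically: for random vectors, the $\ell^p$-norm never increases as $p$ grows. A quick Python check:

```python
import random

def p_norm(a, p):
    return sum(abs(t) ** p for t in a) ** (1.0 / p)

rng = random.Random(1)
ok = True
for _ in range(1000):
    a = [rng.uniform(-10, 10) for _ in range(rng.randint(1, 8))]
    p, q = sorted((rng.uniform(1, 10), rng.uniform(1, 10)))
    ok = ok and p_norm(a, q) <= p_norm(a, p) + 1e-9   # ||a||_q <= ||a||_p for p <= q
```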
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Ratio of heights of a sphere, over and under water When a sphere is floating in the water, only 10 % of the volume is above the surface, while the rest is below. I need to calculate the relationship between the height of the part above the water and the height of the part below it. The solution in the textbook is 0.244. Thank you for all the feedback; however, I am still getting wrong answers for $H$ and $h$. Is there something else that I am doing wrong here?
The part above the water is known as a Spherical Cap, with a known formula for its volume. The total diameter is $D=h+H$ & $r=(h+H)/2$. Plug this into the Spherical Cap volume formula and equate it to 10% of the total volume. Solve this equation in terms of $h/H$. If the solution in the textbook is correct, you should get $h/H=0.244$; else post a comment here. Alternatively, you can verify your answer yourself. UPDATE: Working out the solution: Volume of Spherical Cap: $(1/3) \pi h^2 (3r-h)$ = $(1/3) \pi h^2 (3(h+H)/2-h)$ = $(1/3) \pi h^2 (3h/2+3H/2-2h/2)$ = $(1/3) \pi h^2 (h/2+3H/2)$ Volume of Sphere: $(4/3) \pi r^3$ = $(4/3) \pi (h+H)^3/8$ = $(1/3) \pi (h+H)^3/2$ Hence: $(1/3)\pi h^2 (h/2+3H/2) = (0.1) (1/3) \pi (h+H)^3/2$ $(1/3) h^2 (h/2+3H/2) = (0.1) (1/3) (h+H)^3/2$ $h^2 (h/2+3H/2) = (0.1) (h+H)^3/2$ $h^2 (h+3H) = (0.1) (h+H)^3$ Divide throughout by $H^3$: $h^2 (h+3H)/H^3 = (0.1) (h+H)^3/H^3$ $(h/H)^2 (h/H+3H/H) = (0.1) (h/H+H/H)^3$ $(h/H)^2 (h/H+3) = (0.1) (h/H+1)^3$ Let $h/H=X$: $X^2 (X+3) = (0.1) (X+1)^3$ Solve this. Check whether $h/H = X = 0.244$ is valid: $(0.244)^2 (0.244+3) \approx 0.19313$, while $(0.1) (0.244+1)^3 \approx 0.19251$. The two sides agree closely; the root of the cubic is approximately $0.2435$, consistent with the textbook's $0.244$ to two decimal places.
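The final cubic has no pleasant closed form, so in practice it is solved numerically. A minimal bisection sketch in Python (the bracketing interval is my choice):

```python
def g(x):
    # cap volume = 10% of sphere volume, with x = h/H
    return x * x * (x + 3) - 0.1 * (x + 1) ** 3

lo, hi = 0.1, 0.5            # g(lo) < 0 < g(hi), so a root lies between
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
ratio = 0.5 * (lo + hi)      # h/H, roughly 0.24
```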
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the definiteness of this function? I have this function: $$\frac{1}{2}\left ( x^{2}+4y^2 \right )e^{-\left ( x^{2}+4y^2 \right )}$$ I might be wrong, but it is the product of two functions: the first one is positive definite (positive eigenvalues), while the exponential one is negative definite (negative eigenvalues). The answer tells me that this function is globally positive definite. But isn't the product of a positive definite function and a negative definite function a negative definite function? Where I'm wrong here?
In One ViewPoint, a function $P(x) , P(x,y) , P(x,y,z) ...$ is Positive Definite (or Positive Semi-Definite) when it is Positive (or Non-Negative) when evaluated at all values of $x , y , z ...$ In your case, : $$P(x,y)= (x^2+4y^2)$$ $$f(x,y)= P(x,y)(e^{-P(x,y)})$$ Here $P(x,y)$ is 0 at (0,0) & Positive elsewhere; Hence it is Positive Semi-Definite. The Power Part $-P(x,y)$ is 0 at (0,0) & Negative elsewhere; Hence it is Negative Semi-Definite. Then $e^{-P(x,y)}$ is 1 at (0,0) & between 1 & 0 elsewhere; It can be considered Positive Definite. End-Product is 0 at (0,0) & Positive elsewhere; Hence it can be considered Positive Semi-Definite.
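A quick numerical illustration of the conclusion (sampling, not a proof): the function vanishes at the origin and is positive everywhere else.

```python
import math
import random

def f(x, y):
    p = x * x + 4 * y * y
    return 0.5 * p * math.exp(-p)

rng = random.Random(2)
samples = [f(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(10000)]
```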
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
For which $B$ is $BA$ positive definite? Suppose $A$ is an $m\times n$ real-valued full-rank matrix. What is a nice way to characterize the set of real-valued $n\times m$ matrices $B$ such that $BA$ is positive definite? For $n=1$, this is the set of vectors in the same half-plane as $A$ (viewed as column vector).
Here are two characterisations of all matrices $B$ that make $BA$ positive definite. * *A more parsimonious characterisation: $B=SA^++Z^T$ for some positive definite matrix $S$ and some $m\times n$ matrix $Z$ such that $Z^TA=0$. *A nicer-looking characterisation: $B=A^TP$ for some positive definite matrix $P$. Pick any matrix $\widehat{A}$ with $m-n$ columns such that its column space is the orthogonal complement of the column space of $A$. Then $A^TA\in\mathbb R^{n\times n}$ and $\pmatrix{A&\widehat{A}}\in\mathbb R^{m\times m}$ are invertible. Therefore we may write $$ B=A^TA\pmatrix{X&Y}\pmatrix{A^T\\ \widehat{A}^T} =A^TA\left(XA^T+Y\widehat{A}^T\right) $$ for some square matrix $X$ and $n\times(m-n)$ matrix $Y$. Hence $BA=A^TAXA^TA$ is positive definite if and only if $X=(A^TA)^{-1}S(A^TA)^{-1}$ for some positive definite matrix $S$, and if this is the case, then $$ B=A^TA\left(XA^T+Y\widehat{A}^T\right) =SA^++Z^T $$ where $Z^T=A^TAY\widehat{A}^T$ is a matrix such that $Z^TA=0$. Furthermore, as $A^T\widehat{A}=0$, \begin{aligned} B=A^TA\left(XA^T+Y\widehat{A}^T\right) &=A^T\,\underbrace{\pmatrix{A&\widehat{A}}\pmatrix{X&Y\\ Y^T&kI}\pmatrix{A&\widehat{A}}^T}_P. \end{aligned} So, by picking a large $k>0$, we may also write $B=A^TP$ for some positive definite $P$. Conversely, if $B=SA^++Z^T$ for some positive definite $S$ and some matrix $Z$ such that $Z^TA=0$, then $BA=S$ is positive definite; if $B=A^TP$ with a positive definite $P$, then $BA=A^TPA$ is also positive definite.
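The characterisation $B=A^TP$ is easy to test numerically: for a random full-column-rank $A$ and positive definite $P$, the product $BA=A^TPA$ should come out positive definite. A Python sketch with plain nested lists (helper names are mine):

```python
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

rng = random.Random(3)
checks = []
for _ in range(200):
    A = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # 3x2, generically full rank
    M = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    P = matmul(transpose(M), M)
    for i in range(3):
        P[i][i] += 1.0                            # P = M^T M + I is positive definite
    S = matmul(matmul(transpose(A), P), A)        # BA with B = A^T P
    # a symmetric 2x2 matrix is positive definite iff S11 > 0 and det S > 0
    checks.append(S[0][0] > 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0)
```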
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the average of rolling two dice and only taking the value of the lower dice roll? My question is related to this question, with the exception that I'm looking for the average when taking only the lower of two dice rolls. My question is: What is the average of rolling two dice and only taking the value of the lower dice roll? This formula is used for the "take higher roll" case: $$ E[X] = \sum_{x=1}^6\frac{2(x-1)+1}{36}x = \frac{1}{36}\sum_{x=1}^6(2x^2 - x) = \frac{161}{36} \approx 4.47 $$ ...and includes the term "$2(x−1)+1$". Could someone please explain how that term would have to be changed so that it applies for "take lower roll"? From the original example, I'm somewhat confused where "$x-1$" comes from, although it is mentioned in the comments. PS: I would have written this question as a comment to the original question, but I lack the points to be allowed to write comments.
Here's your payoff matrix: {{1, 1, 1, 1, 1, 1}, {1, 2, 2, 2, 2, 2}, {1, 2, 3, 3, 3, 3}, {1, 2, 3, 4, 4, 4}, {1, 2, 3, 4, 5, 5}, {1, 2, 3, 4, 5, 6}} Sum these elements and divide by $36$.
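To connect this with the formula in the question: for the maximum, the weight of $x$ was $2(x-1)+1$ (the number of outcome pairs whose larger value is $x$); for the minimum it becomes $2(6-x)+1$. Both views are easy to check by brute force:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))
expected_min = sum(min(a, b) for a, b in rolls) / 36            # payoff matrix sum / 36
closed_form = sum(x * (2 * (6 - x) + 1) for x in range(1, 7)) / 36
```

Both give $91/36\approx 2.53$.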
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Get coefficients of a closest logarithmic function to a set of data points I have a big set of points (x,y) which I want to interpolate by a function $$y=a\log(bx+c)+d$$ What is the best way to find a,b,c and d? I tried Levenberg–Marquardt algorithm to iteratively find those coefficients, it works, but it seems like overkill. If there is something better and faster, please suggest it.
Assuming that you want to perform a curve fit: you have $n$ data points $(x_i,y_i)$ and you want to use the model $$y=a\log(bx+c)+d$$ First rewrite it as $$y=a \log(1+\alpha x) + \beta \qquad \quad \text{where}\quad \alpha=\frac b c \quad \text{and}\quad \beta=a \log(c)+d$$ This model is nonlinear only because of $\alpha$. If you give $\alpha$ an arbitrary value, $$z_i=\log(1+\alpha x_i) \implies y=a z+\beta$$ and you immediately obtain $a$ and $\beta$ by solving the normal equations $$\sum_{i=1}^n y_i z_i= a\sum_{i=1}^n z_i^2+\beta \sum_{i=1}^n z_i$$ $$\sum_{i=1}^n y_i = a\sum_{i=1}^n z_i +\beta \,n$$ and then $$\text{SSQ}(\alpha)=\sum_{i=1}^n \big[a\,z_i+\beta-y_i \big]^2$$ Do this for a few values of $\alpha$ until you see more or less a minimum of $\text{SSQ}(\alpha)$. At this point, you have consistent guesses and you can safely start a nonlinear regression or (why not?) use the Newton-Raphson method or the Excel solver. This should not take more than a couple of minutes.
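For concreteness, here is a self-contained Python sketch of this recipe on synthetic data (stdlib only; the grid range for $\alpha$ and the noise level are arbitrary choices of mine):

```python
import math
import random

# synthetic data from y = a*log(1 + alpha*x) + beta with a = 2, alpha = 0.5, beta = 1
rng = random.Random(4)
xs = [0.1 * i for i in range(1, 200)]
ys = [2.0 * math.log(1 + 0.5 * x) + 1.0 + rng.gauss(0, 0.01) for x in xs]

def fit_linear(alpha):
    # for fixed alpha, solve the 2x2 normal equations for (a, beta)
    zs = [math.log(1 + alpha * x) for x in xs]
    n = len(xs)
    szz, sz = sum(z * z for z in zs), sum(zs)
    szy, sy = sum(z * y for z, y in zip(zs, ys)), sum(ys)
    det = szz * n - sz * sz
    a = (szy * n - sz * sy) / det
    beta = (szz * sy - sz * szy) / det
    ssq = sum((a * z + beta - y) ** 2 for z, y in zip(zs, ys))
    return a, beta, ssq

# scan a grid of alpha values and keep the one with smallest SSQ
best = min((fit_linear(0.01 * k) + (0.01 * k,) for k in range(1, 200)),
           key=lambda t: t[2])
a_hat, beta_hat, _, alpha_hat = best
```

The recovered parameters land close to the true values, giving good starting guesses for a full nonlinear fit.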
{ "language": "en", "url": "https://math.stackexchange.com/questions/4485823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many $4$ letter words can be formed from the word "CORONAVIRUS". Letters: $C$, $O$ ($2$ times), $R$ ($2$ times), $N$, $A$, $V$, $I$, $U$ and $S$. The number of $4$ letter words from $C$, $N$, $A$, $V$, $I$, $U$ and $S$ is $7\times 6\times 5\times 4=840$. The number of $4$ letter words from two $O$ or two $R$ while other $2$ letters are different is $\displaystyle \binom{2}{1}\times \binom{8}{2}\times \frac{4!}{2!}=672$. The number of $4$ letter words from two $O$ and two $R$ together is $\displaystyle \frac{4!}{2!\times 2!}=6$ Total number of ways should be $840+672+6=1518$. But this is not the right answer given. What am I doing wrong? What cases I am missing here? Please help!!! Thanks in advance!!!
We can consider the following cases: Case 1. No letter is repeated. In this case, we obtain $P_4^9=3024$ different words. Case 2. A word contains 2 R's or 2 O's Case 2.1 A word contains 2 R's and 0 O's In this case, we obtain $\binom{7}{2} \times \frac{4!}{2!}=252$ different words. Case 2.2 A word contains 2 R's and 1 O In this case, we obtain $\binom{7}{1} \times \frac{4!}{2!}=84$ different words. Case 2.3 A word contains 2 R's and 2 O's In this case, we obtain $\frac{4!}{2!2!}=6$ different words. Case 2.4 A word contains 0 R's and 2 O's In this case, we obtain $\binom{7}{2} \times \frac{4!}{2!}=252$ different words. Case 2.5 A word contains 1 R and 2 O's In this case, we obtain $\binom{7}{1} \times \frac{4!}{2!}=84$ different words. The total number of 4-letter words is $3024+252+84+6+252+84=3702$.
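The total is easy to confirm by brute force: generate all $11\cdot 10\cdot 9\cdot 8$ ordered 4-tuples of positions and de-duplicate the resulting letter sequences.

```python
from itertools import permutations

words = set(permutations("CORONAVIRUS", 4))  # duplicate O's and R's collapse in the set
count = len(words)                           # number of distinct 4-letter words
```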
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Uniform convergence of multivariate continuous function on compact set Let $f:A\times B\to C$ be a continuous map from the product of topological spaces $A,B$ to some metric space $C$. We assume $B$ to be compact and denote $f_{a}:=f(a,\cdot):B\to C$ for $a\in A$. Suppose that we have a convergent sequence $a_n\to a \in A$ my question is whether $f_{a_n} \to f_a$ uniformly on $B$. I hoped that this was the case, but so far I couldn't find a proof. There in a related case in which the convergence is not uniform. However this problem is different, because we are dealing with a continuous two-parameter map instead of just a sequence of one-parameter maps.
Here is a proof that assumes that both $A$ and $B$ are topological sequential spaces and $B$ is also Hausdorff, hence continuity is equivalent to sequential continuity and $B$ is sequentially compact: First observe that the function $g:B\rightarrow\mathbb{R}$ defined by $$g(b)=d(f(a_n,b),f(a,b))$$ is continuous, since it is a composition of continuous functions. It follows from the compactness of $B$ that there exists $b_n\in B$ such that $$\sup_{b\in B}g(b)=g(b_n),$$ or $$\sup_{b\in B} d(f(a_n,b),f(a,b))=d(f(a_n,b_n),f(a,b_n)).$$ Now let's assume by contradiction that $$\lim_{n\rightarrow \infty} \sup_{b\in B} d(f(a_n,b),f(a,b))=\lim_{n\rightarrow \infty}d(f(a_n,b_n),f(a,b_n)) \neq 0,$$ that is, there are $\varepsilon>0$ and a subsequence $n_k$ such that $$d(f(a_{n_k},b_{n_k}),f(a,b_{n_k}))>\varepsilon.$$ Now, by the compactness of $B$, there exists a subsequence ${n_{k_j}}$ such that $b_{n_{k_j}}\rightarrow b$ for some $b\in B$, but in such case, using the continuity of the metric and $f$, $$\lim_{j\rightarrow \infty}d(f(a_{n_{k_j}},b_{n_{k_j}}),f(a,b_{n_{k_j}}))=d(f(a,b),f(a,b))=0,$$ which contradicts $d(f(a_{n_k},b_{n_k}),f(a,b_{n_k}))>\varepsilon.$ Counterexample if $B$ is not compact: Consider $A\times B= [0,1]\times (0,1)$, $C=\mathbb{R}$ and $f:A\times B \rightarrow C$ given by $f(x,y)=y^x$. Now observe that $a_n=1/n\rightarrow 0$, but $$ \sup_{y\in B}|f_{a_n}(y)-f_0(y)|=\sup_{y\in (0,1)}|y^{1/n}-1|=1,$$ which proves that $f_{a_n}$ does not converge uniformly to $f_0$.
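The counterexample can be probed numerically: for every $n$, $\sup_{y\in(0,1)}|y^{1/n}-1|=1$ (approached as $y\to 0^+$), so the supremum never shrinks. A Python check on a grid approaching $0$:

```python
sups = []
for n in (5, 20, 100):
    grid = [10.0 ** (-k) for k in range(1, 301)]          # points marching toward 0
    sups.append(max(abs(y ** (1.0 / n) - 1.0) for y in grid))
# each entry stays close to 1, so the convergence is not uniform
```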
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Relation between reduced rational numbers Suppose that we are given reduced rational numbers $\,\dfrac{a}{k},\ \dfrac{b}{\ell},\ \dfrac{c}{q},\,$ i.e. $\gcd(a,k)=\gcd(b,\ell)=\gcd(c,q)=1$ such that $$\frac{c}{q}=\frac{a}{k}-\frac{b}{\ell}.$$ Then we have $\ q=k'\ell'e = {\dfrac{gk'l'}{f}}$ and $\,c=\dfrac{a\ell'-bk'}{f}$, where $$g=\gcd(k,l),\,\ k'\,=\frac{k}g,\,\ \ell'=\frac{\ell}g,\,\ e=\frac{g}f,\ f=\gcd(a\ell'\!-bk',g).$$ I have some troubles to prove that $\,q=\ell'k'e.\,$ I was trying something like that: $$\frac{c}{q}=\frac{a\ell-bk}{k\ell}.$$ If we assume that $t=\gcd(a\ell-bk,k\ell)$, then $c=\dfrac{(a\ell-bk)}{t}$ and $q=\dfrac{kl}{t}$. Then I performed some manipulations but I did not reach the needed equality. Can anyone show it please? Thank you!
For simpler algebra, let $g = \gcd(k,\ell)$, which means $\ell = g\ell'$ and $k = gk'$. This gives that $$\begin{equation}\begin{aligned} \frac{a\ell - bk}{k\ell} & = \frac{g(a\ell' - bk')}{(gk')(g\ell')} \\ & = \frac{a\ell' - bk'}{gk'\ell'} \end{aligned}\end{equation}\tag{1}\label{eq1A}$$ Next, using that $g = ef$, and $f = \gcd(a\ell' - bk',g) \; \to \; f \mid a\ell' - bk'$ meaning there's an integer $h$ such that $h = \frac{a\ell' - bk'}{f}$, we then get $$\begin{equation}\begin{aligned} \frac{a\ell' - bk'}{gk'\ell'} & = \frac{a\ell' - bk'}{efk'\ell'} \\ & = \frac{\left(\frac{a\ell' - bk'}{f}\right)}{ek'\ell'} \\ & = \frac{h}{ek'\ell'} \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ Note from $f = \gcd(a\ell' - bk',g)$ with $a\ell' - bk' = hf$ and $g = ef$, we have that $\gcd(h,e) = 1$. Also, since $\gcd(a,k) = 1$, then $\gcd(a,k') = 1$, plus $\gcd(k',\ell') = 1$ (due to their definitions involving dividing by $\gcd(k,\ell)$), thus $\gcd(k',a\ell' - bk') = 1$, so $\gcd(k',h) = 1$. Similarly, $\gcd(\ell',h) = 1$. This means $\gcd(h, ek'\ell') = 1$, so \eqref{eq2A} is in the reduced rational form of $\frac{c}{q}$ with $c = h = \frac{a\ell' - bk'}{f}$ and $q = \ell'k'e$.
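The conclusion can also be checked mechanically with exact rational arithmetic. A Python sketch using the `fractions` module (restricted to random positive fractions for simplicity):

```python
import random
from fractions import Fraction
from math import gcd

rng = random.Random(5)
ok = True
for _ in range(2000):
    A = Fraction(rng.randint(1, 60), rng.randint(1, 60))   # a/k, automatically reduced
    B = Fraction(rng.randint(1, 60), rng.randint(1, 60))   # b/l, automatically reduced
    C = A - B
    if C == 0:
        continue
    a, k = A.numerator, A.denominator
    b, l = B.numerator, B.denominator
    g = gcd(k, l)
    kp, lp = k // g, l // g                                # k', l'
    f = gcd(abs(a * lp - b * kp), g)
    e = g // f
    ok = ok and C.denominator == kp * lp * e               # q = k' l' e
    ok = ok and C.numerator == (a * lp - b * kp) // f      # c = (a l' - b k') / f
```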
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}+\frac{ab+bc+ca}{a^2+b^2+c^2}\ge4$ Let $a,b,c>0$. Prove that $$\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a}+\dfrac{ab+bc+ca}{a^2+b^2+c^2}\ge4$$ I know $\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a}\ge 3$ but $\dfrac{ab+bc+ca}{a^2+b^2+c^2}\le1$. And then I try $\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a}\ge \dfrac{(a+b+c)^2}{ab+bc+ca}$ but still stuck. Any help please, thank you.
Alternative approach with a hint: We have the inequality for $a,b,c>0$: $$\frac{\left(ab+bc+ca\right)}{a^{2}+b^{2}+c^{2}}+\left(\frac{a}{b}+\frac{c}{a}+\frac{b}{c}\right)\geq \left(\frac{a}{b}+\frac{c}{a}+\frac{b}{c}\right)+\frac{9}{\left(\frac{a}{b}+\frac{c}{a}+\frac{b}{c}\right)^{2}}$$ Since $\frac{a}{b}+\frac{c}{a}+\frac{b}{c}\geq 3$ by AM-GM, it remains to show the inequality for $x\ge 3$: $$x+9/x^2\geq 4$$
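Independently of the proof, the inequality (with equality at $a=b=c$) is easy to stress-test numerically:

```python
import random

def lhs(a, b, c):
    return a / b + b / c + c / a + (a * b + b * c + c * a) / (a * a + b * b + c * c)

rng = random.Random(6)
vals = [lhs(*(rng.uniform(0.01, 10) for _ in range(3))) for _ in range(5000)]
```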
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is the category of torsion abelian groups a Grothendieck category? I thought that I had shown that the category of $\mathcal{A}$ of all torsion abelian groups is a Grothendieck category: * *All coproducts exist, they are just the coproducts of abelian groups; *The colimits are also just the colimits in the category of abelian groups, since the cokernel of a morphism of torsion groups is also a torsion group; *There is a generator. The third point is the least obvious. Let $G=\bigoplus_{n\in \mathbb{N}}\mathbb{Z}/n\mathbb{Z}$ and $A$ be an arbitrary torsion abelian group. The morphism $\bigoplus_{0\neq a\in A}\mathbb{Z}\to A$, which takes the 1 of the summand corresponding to $a$ to $a,$ is an epimorphism in the category of abelian groups. But since $A$ is torsion, this filters through the map $\bigoplus_{0\neq a\in A}\mathbb{Z}/\text{ord}(a)\mathbb{Z}\to A$, which then naturally extends to a morphism $\bigoplus_{0\neq a\in A}G\to A.$ However, one of the comments at Complete but not cocomplete category says: For example the category of torsion abelian groups is not grothendieck (there is no progenerator). Although I do agree that there is no progenerator, the axioms of a Grothendieck category seemingly do not require its existence, only the existence of a not necessarily projective generator. So in the end, is the category of torsion abelian groups actually Grothendieck?
I think your observations are correct, and hold more generally. Given a (not necessarily commutative) ring $A$, a torsion class is a full subcategory of (left) $A$-modules closed under quotients, coproducts, and extensions. In particular it is closed under colimits. A hereditary torsion class is one that is also closed under subobjects. In particular, the fact the filtered colimits of $A$-modules are exact implies filtered colimits of such torsion modules are exact. Finally, any hereditary torsion class is generated by the cyclic $A$-modules that are in the class (see chapter VI, Proposition 3.6. in Bo Stenstrom's book Rings of Quotients - An Introduction to Methods of Ring Theory). Thus hereditary torsion classes should be Grothendieck categories.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
About Grothendieck-Witt group of $p$-adic Fields Let $p$ a odd prime number, is well-known that the Square Classes of a rational $p$-adic $\mathbb{Q}_p$ is a group with 4 elements, i.e. $$ \mathbb{Q}_p^{\times}/\mathbb{Q}_p^{\times 2}=\{1,p,\zeta,p\zeta\} $$ Where $\zeta$ is a ($p$-1)-th root of unity. I want to know if exist any reference of the Grothendieck-Witt group $GW(\mathbb{Q}_p)$ and its fundamental ideal $I_{\mathbb{Q}_p}$ Concretly, I want to calculate the group ($\mu_2$ is the group $\{\pm 1\}$) $$ I_{\mathbb{Q}_p}\otimes\mu_2 $$ and verify if satisfies some properties.
The Grothendieck-Witt and Witt groups of $p$-adic fields $F$ ($p$ odd as in your question) are well-known: If the cardinality of the residue field $q$ is congruent to $1$ mod $4$, then $$\text{GW}(F) \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}^{\oplus3} \text{ as abelian groups and } \text{W}(F) \cong \mathbb{Z}/2\mathbb{Z}[V_4] \text{ as rings},$$ where $V_4$ is the Klein $4$-group and if $q$ is congruent to $3$ mod $4$, then $$\text{GW}(F) \cong \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}\text{ as abelian groups and } \text{W}(F) \cong \mathbb{Z}/4\mathbb{Z}[C_2] \text{ as rings}.$$ A reference for that is Thm 2.2 of chapter VI in Lam's Introduction to quadratic forms over fields. If you check the details you should probably also be able to see how the fundamental ideals look like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4486978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calculate the sphere within an octahedron of six spheres? Apologies if you've already spent a lot of time answering this one but I've spent a lot of time looking for the answer here without success unfortunately. If I bunch six spheres together to form an octahedron then how do I calculate the size of sphere that fits at the centre of those six spheres? I assume the answer to be some expression of the size of spheres containing it?
If the spheres are all the same size, then this isn't too hard to calculate. Taking a cross section of a regular octahedron gives a square. If $R$ and $r$ are the radii of the outer and inner spheres, respectively, then $2R + 2r$ equals the diagonal of this square, $R\sqrt2$. Solving for $r$ yields $$r = (\sqrt 2 - 1) R \approx 0.414 R.$$
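To see it in coordinates: place the six unit spheres at $(\pm\sqrt2,0,0)$, $(0,\pm\sqrt2,0)$, $(0,0,\pm\sqrt2)$, so that neighbouring spheres touch, and measure the gap left at the origin.

```python
import math
from itertools import combinations

R = 1.0
d = math.sqrt(2) * R                      # distance from the origin to each centre
centers = [(d, 0, 0), (-d, 0, 0), (0, d, 0), (0, -d, 0), (0, 0, d), (0, 0, -d)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# the 12 non-opposite pairs should touch: centre distance exactly 2R
neighbour_dists = [dist(p, q) for p, q in combinations(centers, 2)
                   if dist(p, q) < 2.5 * R]
r_inner = min(dist((0, 0, 0), c) for c in centers) - R   # radius of the central sphere
```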
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that a half-open line is uncountable - corrected I have to provide a proof that a half-open line $L_a:=\{x ∈ ℝ : x > a\}$ is uncountable. I have used Schröder-Bernstein-Cantor and tried to show that an injection exists between $L_a$ and $ℝ$ and an injection between $ℝ$ and $L_a$, to conclude that a bijective function exists between $ℝ$ and $L_a$, which proves that $L_a$ is uncountable. I chose the following two functions: $f:L_a \rightarrow ℝ:x \rightarrow x$ is injective $g:ℝ \rightarrow L_a:x \rightarrow a+e^x$ is injective It follows that a bijection exists between both sets, so $L_a$ is uncountable. EDIT: corrected use of SBC, it should now work. Thanks for your feedback.
An interval is uncountable: $$\tan:(-\pi/2,\pi/2)\to\Bbb R$$ is a bijection. But any two (finite) open intervals are isomorphic (there's a bijection between them): $(c,d)\leftrightarrow(a,b)$ by $$x\to \dfrac{(d-x)}{(d-c)}a+\dfrac{(c-x)}{(c-d)}b$$. Thus $L_a$ is uncountable, because it contains an interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Let $x = (0, 1)$ and $y = (−2, a)$ be two vectors in $\Bbb R^2$, where $a$ is a real number. Problem:Let $x = (0, 1)$ and $y = (−2, a)$ be two vectors in $\Bbb R^2$, where $a$ is a real number. Attempt: Please be nice. (a) Compute the quantity $\frac{x · y}{||x||||y||}$, in terms of $a$. $$ x·y=0(-2) + 1(a)=a $$ $$ ||x||=\sqrt{0^2+1^2}=1 $$ $$ ||y||=\sqrt{-2^2+a^2}=\sqrt{4+a^2} $$ $$ \cos θ=\frac{x · y}{||x|| ||y||}=\frac{a}{\sqrt{4+a^2}} $$ $$ θ=\cos^{-1}\frac{a}{\sqrt{4+a^2}} $$ (b) Determine all values of $a$ for which the angle between $x$ and $y$ is $\frac{π}{3}$(radians). Using a reference angle, where I have <-2,a> with an angle of $\frac{1}{2}$ (because $\frac{π}{3}$ is 60 degrees and in unit cirlce that's $\frac{1}{2}$) $$ tan(\frac{1}{2})=\frac{a}{-2} $$ $$ (-2)tan(\frac{1}{2})=\frac{a}{-2}(-2) $$ $$ (-2)tan(\frac{1}{2})=a $$ $$ =-0.0174 $$ $$ 0=a $$ You guys have been giving me hints. But I just can't figure it out. a component is still missing. I don't know anymore. $$ cos(\frac{1}{2})=\frac{a}{\sqrt{4+a^2}} $$ $$ (\sqrt{4+a^2})cos(\frac{1}{2})=\frac{a}{\sqrt{4+a^2}}(\sqrt{4+a^2}) $$ $$ (\sqrt{4+a^2})cos(\frac{1}{2})=a $$ EDITED
The angle and the dot product are related by $\cos\theta=(x\cdot y)/(\lVert x\rVert\lVert y\rVert)$. We're given $\theta=\pi/3$, and we know $\cos(\pi/3)=1/2$. And you've calculated $(x\cdot y)/(\lVert x\rVert\lVert y\rVert)=a/\sqrt{4+a^2}$. So we have this equation: $$\frac12=\cos(\pi/3)=\cos\theta=\frac{x\cdot y}{\lVert x\rVert\lVert y\rVert}=\frac{a}{\sqrt{4+a^2}}$$ $$\frac12=\frac{a}{\sqrt{4+a^2}}$$ Squaring: $$\left(\frac12\right)^2=\frac{a^2}{\sqrt{4+a^2\,}^2}$$ $$\frac14=\frac{a^2}{4+a^2}$$ Multiplying by $4$: $$1=\frac{4a^2}{4+a^2}$$ Multiplying by $(4+a^2)$: $$4+a^2=4a^2$$ Subtracting $a^2$: $$4=3a^2$$ Dividing by $3$: $$\frac43=a^2$$ $$a^2=\frac43=\left(\frac{2}{\sqrt3}\right)^2$$ It is a general fact that, if $a=b$ or $a=-b$, then $a^2=b^2$; and conversely, if $a^2=b^2$, then either $a=b$ or $a=-b$. I think you can find a proof of this elsewhere. Thus: $$a=\pm\frac{2}{\sqrt3}$$ So we have two possible answers, but we need to check that they actually work: $$\frac12\overset?=\frac{\pm2/\sqrt3}{\sqrt{4+(\pm2/\sqrt3)^2}}$$ $$=\frac{\pm2/\sqrt3}{\sqrt{4+(4/3)}}$$ $$=\frac{\pm2/\sqrt3}{\sqrt{16/3}}$$ $$=\frac{\pm2/\sqrt3}{4/\sqrt{3}}$$ $$=\pm\frac12$$ Thus only the upper sign works: $a=+2/\sqrt3$.
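As a numerical cross-check: computing the angle directly from $\cos\theta=(x\cdot y)/(\lVert x\rVert\lVert y\rVert)$ confirms that $a=2/\sqrt3$ gives $\pi/3$, while the rejected root $a=-2/\sqrt3$ gives $2\pi/3$.

```python
import math

def angle(a):
    # angle between x = (0, 1) and y = (-2, a): x·y = a, ||x|| = 1, ||y|| = hypot(-2, a)
    return math.acos(a / math.hypot(-2.0, a))
```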
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I show that $x$ and $y $ must have the same length where $ x+y $ and $x-y $ are non-zero vectors and perpendicular? Problem: Let $x$ and $y$ be non-zero vectors in $\mathbb{R}^n$. (a) Suppose that $\|x+y\|=\|x−y\|$. Show that $x$ and $y$ must be perpendicular. (b) Suppose that $x+y$ and $x−y$ are non-zero and perpendicular. Show that $x$ and $y$ must have the same length. Attempt: (a)\begin{align*}\|x+y\|^2 & =\|x-y\|^2 \\ (x+y)\cdot (x+y) & =(x-y)\cdot (x-y) \\ \|x\|^2+\|y\|^2+2x\cdot y & =\|x\|^2+\|y\|^2-2x\cdot y \\ 2x\cdot y & =-2x\cdot y \\ x\cdot y & =0. \end{align*}(b)\begin{align*}(x+y)\cdot (x-y) & =0 \\ \|x\|^2-x\cdot y+x\cdot y-\|y\|^2 & =0 \\ \|x\|^2-\|y\|^2 & =0. \end{align*}Are these attempts correct? EDITED
$a)$ Using polarization identity $4\langle x, y\rangle =\|x+y\|^2-\|x-y\|^2=0$ $\langle x, y\rangle =x\cdot y=0$ $b)$ Again using polarization identity $\begin{align}4\langle x+y, x-y\rangle &=\|(x+y)+(x-y)\|^2-\|(x+y)-(x-y)\|^2\\&=4(\|x\|^2 -\|y\|^2)\end{align}$ Given $\langle x+y, x-y\rangle=0$ implies $\|x\|=\|y\|$
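Both identities are easy to spot-check on random vectors in $\mathbb R^n$:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

rng = random.Random(7)
ok = True
for _ in range(1000):
    n = rng.randint(1, 6)
    x = [rng.uniform(-5, 5) for _ in range(n)]
    y = [rng.uniform(-5, 5) for _ in range(n)]
    xpy = [a + b for a, b in zip(x, y)]
    xmy = [a - b for a, b in zip(x, y)]
    # polarization identity: 4<x,y> = ||x+y||^2 - ||x-y||^2
    ok = ok and abs(4 * dot(x, y) - (dot(xpy, xpy) - dot(xmy, xmy))) < 1e-9
    # part (b): <x+y, x-y> = ||x||^2 - ||y||^2
    ok = ok and abs(dot(xpy, xmy) - (dot(x, x) - dot(y, y))) < 1e-9
```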
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Show that $\lim_{n \to \infty} \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \frac{1}{2}$. Show that $$ \lim_{n \to \infty} \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \frac{1}{2}. $$ Progress: I rewrote the inside as $$ S_n = \sum_{k = 0}^\infty (-1)^k \frac{n}{n + k} = \sum_{k = 0}^\infty (-1)^k \left( 1 - \frac{k}{n + k} \right),$$ at which point I sought to find the (ordinary) generating function $$A(x) = \sum_{k = 0}^\infty (-1)^k \left( 1 - \frac{k}{n + k} \right) x^k = \frac{1}{1 + x} - \sum_{k = 0}^\infty (-1)^k \left( \frac{k}{n + k} \right) x^k , $$ because then $S_n = A(1)$. Unfortunately, I have been trying to evaluate the last sum using the Taylor Series expansion of $\ln \left( 1 + x \right)$ at $x = 0$, but I am stuck. While is true that $$\ln \left( 1 + x \right) = \sum_{k = 1}^\infty (-1)^{k + 1} \left( \frac{1}{k} \right) x^k $$ (for $x \in (-1, 1)$ ; note this is irrelevant if we regard all g.f. as "formal"), I can't figure out how to transform this into $\sum_{k = 1}^\infty (-1)^{k + 1} \left( \frac{1}{n + k} \right) x^k$. The motivation is that once I can find that sum in explicit form, I can differentiate it w.r.t $x$ to find the desired sum. I guess my question is also: "How to shift backwards the coefficients of an ordinary generating function?", though I am eager to see other (possibly more elegant) solutions as well. Note: One attempt I tried at shifting the coefficients was dividing the coefficients by some power $x^k$, but this introduces negative coefficients, whereas I need those terms with negative coefficients to disappear.
$S_n=n\cdot\sum_{j=0}^{\infty}\dfrac {1}{(n+2j)(n+2j+1)}.$ For $n\ge 2$ this is bounded below by $$\frac {n}{2}\sum_{j=0}^{\infty}\int_{n+2j}^{n+2j+2}\frac {1}{x^2}dx=\frac {n}{2}\int_n^{\infty}\frac {1}{x^2}dx=\frac {1}{2}$$ and is bounded above by $$\frac {n}{2}\sum_{j=0}^{\infty}\int_{n+2j-1}^{n+2j+1}\frac {1}{x^2}dx=\frac {n}{2}\int_{n-1}^{\infty}\frac {1}{x^2}dx=\frac {n}{2(n-1)}.$$
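The paired form $S_n=n\sum_{j\ge 0}\frac{1}{(n+2j)(n+2j+1)}$ also gives a numerically stable way to watch the squeeze toward $\frac12$:

```python
def S(n, terms=1_000_000):
    # truncated paired form of the alternating series
    return n * sum(1.0 / ((n + 2 * j) * (n + 2 * j + 1)) for j in range(terms))

errs = [abs(S(n) - 0.5) for n in (10, 100, 1000)]   # shrinks roughly like 1/(4n)
```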
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Denseness of a certain subset of the unit circle I am trying to show that the set of numbers of the form $e^{2\pi ina}$, where $a$ is irrational and $n\in\mathbb Z$ , is dense in the unit circle. Solution: Noting that elements of the unit circle can be written as $z=e^{2\pi i\theta}$ with $\theta\in\mathbb R$, and using the standard topology of $\mathbb C$; \begin{align*} \left|e^{2\pi ian}-e^{2\pi i\theta}\right| &= \big|[\cos(2\pi an)-\cos(2\pi\theta)]-i[\sin(2\pi an)-\sin(2\pi\theta)]\big|\\ &= \left|-2\sin[\pi(na-\theta)]\sin[\pi(na+\theta)]-2i\sin[\pi(na-\theta)\cos[\pi(na+\theta)]\right|\\ &=2|\sin[\pi(na-\theta)]||\sin[\pi(na+\theta)]+i\cos[\pi(na+\theta)]|\\ &=2|\sin[\pi(na-\theta)]|\leq 2\pi|na-\theta|<2\pi\epsilon_1=\epsilon. \end{align*} I have used the following: $$\cos(P)-\cos(Q)=-2\sin\left(\frac{P+Q}{2}\right)\sin\left(\frac{P-Q}{2}\right),~ \sin(P)-\sin(Q) = 2\cos\left(\frac{P+Q}{2}\right)\sin\left(\frac{P-Q}{2}\right)$$ and $|\sin x|\leq |x|$. The last inequality follows from the fact that there is always a real number ($\theta$) arbitrarily close to any rational number ($na$). This last argument makes me feel like I have cheated (a bit). Is there a way to complete this exercise without the prior knowledge of the denseness of the rationals on the real line? Furthermore, please recommend alternative approaches, if there are any.
As Leonid observed, your proof is incorrect. Since you asked for the alternative approach, this is my try. * *Let $a_n:=a\cdot n-\lfloor a\cdot n\rfloor$. Then $e^{2\pi i\cdot a\cdot n} = e^{2\pi i\cdot a_n}$. *It suffices to show that $\{a_n:n\in\Bbb N\}$ is dense in $[0,1]$. Indeed, for any $z\in S$ (unit circle) there exists $\theta\in [0,1]$ such that $z=e^{2\pi i \theta}$. If $a_n$ is close to $\theta$ then $e^{2\pi i\cdot a_n}$ is close to $z$. *Observe that irrationality of $a$ implies that $a_m\neq a_n$ for $m\neq n$. *Take any $N\in\Bbb N$ (big number). Consider $N$ boxes $[0,1/N],[1/N,2/N],\ldots,[1-1/N,1]$ covering $[0,1]$. *One of the boxes contains infinitely many numbers $a_n$. In particular, there are $m,n\in\Bbb N$, $m<n$, such that $|a_m-a_n|\leq 1/N$. *Assume $0<a_n-a_m\leq 1/N$ (the other case is similar, as we consider indices from $\Bbb Z$, not only $\Bbb N$). Then $0<a_{k}\leq 1/N$ for $k:=n-m$. *Observe that $a_{2k}=2a_{k}$, $a_{3k}=3a_{k}$ and generally $a_{jk}=ja_{k}$ as long as $ja_{k}<1$. These points form an arithmetic progression with the difference $d=a_{k}\leq 1/N$. *Therefore for any $\theta\in[0,1]$ there exists $j$ such that $|a_{jk}-\theta|\leq 1/N$. *Since $N$ can be arbitrarily large, the set $\{a_n:n\in\Bbb N\}$ is dense in $[0,1]$.
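Step 9 can be illustrated numerically: the fractional parts $a_n$ fill $[0,1]$ with a maximum gap that shrinks as more terms are taken. A Python sketch with $a=\sqrt2$ (any irrational would do):

```python
import math

a = math.sqrt(2)
N = 100_000
fracs = sorted((a * n) % 1.0 for n in range(1, N + 1))
# largest gap between consecutive fractional parts, wrapping around the circle
gaps = [fracs[0] + 1.0 - fracs[-1]] + [y - x for x, y in zip(fracs, fracs[1:])]
max_gap = max(gaps)
```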
{ "language": "en", "url": "https://math.stackexchange.com/questions/4487958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove the identity $\sum_{i=0}^n i(n-i) = {n+1 \choose 3}$ , by a combinatorial argument This question was asked on an exam I made. In the answers they proved a bijection between the following sets: $\def\N{\Bbb N}X_i = \{\,(a, i, b) \mid a,b \in\N ∪ \{0\}, 0 ≤ a < i, i < b ≤ n\,\}$ for $i \in \{ 0, 1, \ldots , n\}$ $Y = \{\,C \mid C ⊆ \N_{n+1}, |C| = 3\,\}$ where $\N_{n+1}=\{\,i\in\Bbb Z\mid 0\leq i\leq n\,\}$ and provided the following relations: $f : \bigcup_{i=0}^n X_i \to Y : (a,i,b) \mapsto \{a,i,b\} $ $f^{-1}: Y \to \bigcup_{i=0}^n X_i : \{c_1, c_2, c_3\}$ with $c_1 < c_2 < c_3 \mapsto (c_1, c_2, c_3)$ Could someone provide me with an explanation as to why these sets and relations were chosen?
The situation here is to count the number of 3-element subsets in $\{ 0, 1, \dots, n \}$, hence the right hand side $\binom{n + 1}{3}$. Now we think of another way to generate a 3-element subsets. We pick a "pivot" by fixing one of the element, say $2$. In the next step, we pick another one less than the "pivot", i.e. $0$ or $1$, and the remaining one greater than the "pivot", i.e. $3, 4, \dots , n$. Suppose for now this method generates all the 3-element subsets of $\{ 0, 1, \dots, n \}$. In your setting, $X_i$ is the collection of triples with "pivot" $i$ generated by the scheme above, and $\bigcup _{i = 0} ^n X_i$ is all the subsets generated by the method above, as we vary the "pivot" from $0$ to $n$. The set $Y$ is simply the collection of 3-element subsets. Your relation $f$ can be used to show that the two sets are equipotent. Notice that for any triple in sorted order, say $(1, 2, 3)$, we can always link them to the 3-element subset $\{ 1, 2, 3 \}$. Conversely, given such a set, we can always construct this sorted triple. So $f$ is a bijection between $\bigcup _{i = 0} ^n X_i$ and $Y$. In other words, picking a "pivot" then two other elements, one greater and one smaller than the "pivot" indeed generates all the 3-element subsets. With the bijection established, note that for a fixed "pivot" $i$, as there are $i$ numbers less than it and $n - i$ greater, the number of triples for all the "pivot"s is $$ \sum _{ i = 0} ^n i (n - i) \, . $$ Thus $$ \sum _{ i = 0} ^n i (n - i) = \binom{n + 1}{3} \, . $$
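Both the identity and the bijection can be verified by brute force for small $n$:

```python
from math import comb

# the identity for n = 0, ..., 59
identity_ok = all(sum(i * (n - i) for i in range(n + 1)) == comb(n + 1, 3)
                  for n in range(60))

# the bijection for one concrete n: triples (a, i, b) with a < i < b in {0, ..., n}
n = 10
triples = [(a, i, b) for i in range(n + 1) for a in range(i) for b in range(i + 1, n + 1)]
subsets = {frozenset(t) for t in triples}   # f maps (a, i, b) to {a, i, b}
```

Since no two triples collapse to the same subset, $f$ is indeed a bijection onto the 3-element subsets.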
{ "language": "en", "url": "https://math.stackexchange.com/questions/4488104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I find the number of interest periods per year? I am currently self-studying logarithms and exponential equations and the book I am using includes the method of calculating different aspects of the compound interest (compounded amount, starting amount, running time, rate of interest), all other aspects being given. But the number of interest periods per year is never the unknown in any problem, and the book doesn't deal with solving for it, all other aspects known. In short, my question would be: How do I find the number of interest periods per year, mathematically how do you solve for $x$ in the equation $$\left(\frac{A}{P}\right)^{(rn)^{-1}}=\left(1+\frac 1x\right)^x$$ all other letters known, or simply solve for $x$ in $$c=\left(1+\frac 1x\right)^x$$ Here is my attempt * *$A$ is the compounded amount *$P$ the principal or starting amount *$r$ the annual rate of interest *$n$ the time of compounding in years *$p$ the number of interest periods annually, and for example it equals $1$ if the compounding happens annually, $2$ if semi-annually, $4$ if quarterly, but this is what we are actually looking for.
As other answers have stated, a “closed form” solution requires using the Lambert W function. But if you don't have this function available on your calculator or programming library, you may be interested in an approximation. Some “special” values of $c = (1 + \frac{1}{x})^x$ are: * *Annual: $x = 1 \implies c = 2$ *Semiannual: $x = 2 \implies c = 2.25$ *Quarterly: $x = 4 \implies c = 2.44140625$ *Monthly: $x = 12 \implies c \approx 2.613035290224676$ *Weekly: $x = 52.1775 \implies c \approx 2.692682824117828$ *Daily: $x = 365.2425 \implies c \approx 2.7145699419617033$ *Continuously: $x \rightarrow \infty \implies c \rightarrow e \approx 2.718281828459045$ After playing with these numbers a bit, I found a simple rational function that approximates the above values. (Coefficients are for least-squares error on $\frac{1}{x}$.) $$x = \frac{1}{1.14962209188153(e-c)^2 + 0.55980701470039(e-c)}$$ This gives correct (to the nearest integer) results for $1 \le x \le 6$, but works less well for more frequent compounding.
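For what it's worth, the fit can be checked numerically; here is a short Python sketch (the function name `approx_periods` is mine, the constants are from the answer above) verifying that rounding the approximation recovers $x$ exactly for $1 \le x \le 6$:

```python
import math

def approx_periods(c):
    """Invert c = (1 + 1/x)**x approximately (the answer's fitted constants)."""
    d = math.e - c
    return 1.0 / (1.14962209188153 * d * d + 0.55980701470039 * d)

for x in range(1, 7):
    c = (1 + 1 / x) ** x
    assert round(approx_periods(c)) == x
print("rounding recovers x = 1..6")
```

As the answer notes, the fit degrades for more frequent compounding (e.g. $x = 12$ already rounds incorrectly).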
{ "language": "en", "url": "https://math.stackexchange.com/questions/4488276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Limit Comparison Test on $\sum_{n=1}^{\infty}\frac{2n^2-1}{3n^5+2n+1}$ I have the series: $\sum_{n=1}^{\infty}\frac{2n^2-1}{3n^5+2n+1}$ I compared it with: $\sum_{n=1}^{\infty}\frac{n^2}{n^5}$ The limit is $\frac{2n^5-n^3}{3n^5+2n+1}$ as $n$ approaches infinity. The limit works out to $\frac23$. I don't understand how this shows convergence? It is less than 1, so why is the p-series, the second series I have, converging?
You mixed up the comparison test and the limit comparison test. Limit comparison test: consider two series of nonnegative reals $\sum u_n$ and $\sum v_n$. If $\lim_{n\to \infty} \frac{u_n}{v_n}=l$ with $0<l< \infty$, then the two series either both converge or both diverge. Here $u_n=\frac{2n^2-1}{3n^5+2n+1}$. Let $v_n=\frac{1}{n^3}$. Then $\lim_{n\to \infty}\frac{\frac{2n^2-1}{3n^5+2n+1}}{\frac{1}{n^3}}=\frac{2}{3}$. Since $\sum \frac{1}{n^3}$ converges ($\sum\frac{1}{n^p}$ converges for $p>1$), by the limit comparison test $\sum\frac{2n^2-1}{3n^5+2n+1}$ also converges. Note: Convergence of $p$-series.
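A quick numerical illustration (not a proof — just to see the $2/3$ limit and the bounded partial sums):

```python
# Numerical check of the limit comparison setup.
u = lambda n: (2 * n**2 - 1) / (3 * n**5 + 2 * n + 1)
v = lambda n: 1 / n**3

ratio = u(10**6) / v(10**6)
assert abs(ratio - 2 / 3) < 1e-4       # u_n / v_n -> 2/3

partial = sum(u(n) for n in range(1, 100000))
assert 0.2 < partial < 0.4             # partial sums stay bounded
print(f"ratio ~ {ratio:.6f}, partial sum ~ {partial:.6f}")
```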
{ "language": "en", "url": "https://math.stackexchange.com/questions/4488456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Upper and lower bounds for solution to $ y' = t - y^2, y(0)=0.$ I am working on the initial value problem $$ y' = t - y^2,\qquad y(0)=0.$$ Assuming that the solution exists on $[0,\infty)$, I am supposed to show that $$ 0\leq y(t) \leq \sqrt{t}\qquad \forall t\geq 0.$$ I've managed to prove an upper bound of $t^2/2$ by using the fundamental theorem of calculus, but I don't seem to be getting anywhere near the $\sqrt{t}$ that the question is asking for. Any help would be appreciated :)
Since $y'(0)=0$, there is some $\delta>0$ such that for $t \in (0,\delta)$, we have $y(t)^2<t$, so $y'(t)>0$. By the intermediate value theorem, $y(t)>0$ in $(0,\delta)$. Suppose that $$\tau:=\inf \Bigl\{t\ge \delta: \, y(t) \in \{0,\sqrt{t}\} \Bigr\}<\infty \,. \tag{*} $$ (Recall the convention that the infimum of an empty set is infinity). If $y(\tau)=0$ then $y'(\tau)=\tau$, a contradiction since $y(t)>0$ on $(0,\tau)$. If $y(\tau)=\sqrt{\tau}$, then $y'(\tau)=0$, so $g(t)=t-y^2$ satisfies $g(\tau)=0$ and $g'(\tau)=1$, a contradiction since $g(t)>0$ for $t \in (0,\tau)$. Thus $\tau$ defined in $(*)$ must be infinite, so $0<y(t)<\sqrt{t}$ for all $t>0$.
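The envelope $0\le y(t)\le \sqrt t$ can also be watched numerically. A minimal RK4 sketch (illustrative only — step size and tolerances are my choices, not part of the argument):

```python
import math

# Integrate y' = t - y^2, y(0) = 0 with classical RK4 and check the
# claimed envelope 0 <= y(t) <= sqrt(t) at every step.
def f(t, y):
    return t - y * y

t, y, h = 0.0, 0.0, 1e-3
while t < 20.0:
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h
    assert -1e-9 <= y <= math.sqrt(t) + 1e-9   # the claimed envelope
print(f"y(20) = {y:.6f}  vs  sqrt(20) = {math.sqrt(20):.6f}")
```

For large $t$ the solution hugs $\sqrt t$ from below, consistent with $\tau=\infty$ in the proof.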
{ "language": "en", "url": "https://math.stackexchange.com/questions/4488778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is the $(3,4,5)$ triangle the only rectangular triangle with this property? While solving a loosely related exercise, by luck I found out that the $(3,4,5)$ triangle has the following property: The product of the lengths ($\sqrt{2}$ and $\sqrt{5}$) of the two shorter line segments from a corner to the center of the inscribed circle equals the length ($\sqrt{10}$) of the longest one. Somewhat satisfied with this (for me) newfound result, I now wonder if any other rectangular $(a,b,c)$ triangles have this particular property. $(a,b,c)$ does not need to be a Pythagorean triple (but it would be extra nice). I tried some straightforward algebraic equations but failed to find an answer ... Maybe finding non-rectangular such triangles is easier, but ideally I ask for rectangular ones. update Can this property of certain pythagorean triples in relation to their inner circle be generalized for other values of $n$? is already linked to this one, but I take the freedom to explicitly mention it here in the post for the following reasons: * *the question asked there is about generalizing the answer given here *the answers to both questions always left some exercises for the reader *myself, I am not able (I continue to try) to do these exercises Maybe someone can fully write out the missing gaps.
In a Pythagorean right triangle $\triangle ABC$, we know that $a^2 + b^2 = c^2$ where $a, b, c$ are positive integers. We also know that $|\triangle ABC| = rs$, where $r$ is the inradius and $s = (a+b+c)/2$ is the semiperimeter. Thus we have $$\begin{align} r &= \frac{ab}{a+b+c} \\ &= \frac{ab}{a+b+\sqrt{a^2+b^2}} \\ &= \frac{ab(a+b-\sqrt{a^2+b^2})}{(a+b+\sqrt{a^2+b^2})(a+b-\sqrt{a^2+b^2})} \\ &= \frac{ab(a+b-\sqrt{a^2+b^2})}{2ab} \\ &= \frac{1}{2}(a+b-c) \\ &= s-c. \end{align}$$ Denoting $I$ as the incenter, the respective distances from the incenter to the vertices are $$IA = \sqrt{r^2 + (s-a)^2}, \\ IB = \sqrt{r^2 + (s-b)^2}, \\ IC = \sqrt{r^2 + (s-c)^2} = r \sqrt{2}.$$ Then assuming $a < b < c$, we require $IB \cdot IC = IA$, or $$\begin{align} 0 = IB^2 \cdot IC^2 - IA^2 = \left(r^2 + (s-b)^2\right)(2r^2) - \left(r^2 + (s-a)^2\right). \end{align}$$ I leave it as an exercise to show that this condition is nontrivially satisfied if and only if $b = (a^2-1)/2$, hence $a$ must be an odd positive integer for $b$ to be an integer. Then $c$ will automatically be an integer since $$c^2 = a^2 + b^2 = \left(\frac{a^2+1}{2}\right)^2.$$ Therefore, the solution set is parametrized by the triple $$(a,b,c) = \bigl(2r+1, 2r(r+1), 2r(r+1)+1\bigr), \quad r \in \mathbb Z^+,$$ where $r$ is the inradius of such a triangle. In particular, this leads to the triples $$(3,4,5), \\ (5,12,13), \\ (7,24,25), \\ (9,40,41), \\ \ldots.$$
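The exercise left to the reader can at least be verified numerically for the parametrized family: the product of the two shorter incenter distances equals the longest one for every $(2r+1,\, 2r(r+1),\, 2r(r+1)+1)$.

```python
import math

# Check x*y == z (sorted distances from the vertices to the incenter)
# for the parametrized triples (2r+1, 2r(r+1), 2r(r+1)+1).
for r in range(1, 50):
    a, b = 2 * r + 1, 2 * r * (r + 1)
    c = b + 1
    s = (a + b + c) / 2
    rin = s - c                      # inradius of a right triangle
    IA = math.hypot(rin, s - a)      # distance from vertex A to the incenter
    IB = math.hypot(rin, s - b)
    IC = math.hypot(rin, s - c)
    x, y, z = sorted((IA, IB, IC))
    assert abs(x * y - z) < 1e-6 * z
print("x*y == z holds for r = 1..49")
```

For $r=1$ this reproduces the $(3,4,5)$ case: $\sqrt2\cdot\sqrt5=\sqrt{10}$.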
{ "language": "en", "url": "https://math.stackexchange.com/questions/4488978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Showing that $P(E \cup F) + P(E \cup F^c) + P(E^c \cup F) + P(E^c \cup F^c) =3$. I need to prove the following: For events $E$ and $F$, $$P(E \cup F) + P(E \cup F^c) + P(E^c \cup F) + P(E^c \cup F^c)=3.$$ Is this proof sufficient? By inclusion-exclusion principle: $$P(E \cup F) + P(E \cup F^c) + P(E^c \cup F) + P(E^c \cup F^c)=$$ $$P(E)+P(F)-P(EF) + $$ $$P(E)+P(F^c)-P(EF^c) + $$ $$P(E^c)+P(F)-P(E^cF) + $$ $$P(E^c)+P(F^c)-P(E^cF^c) + $$ Since $P(A^c) = 1- P(A)$, the previous term equals: $$P(E) + P(F) - P(EF)+$$ $$P(E) + 1-P(F) - P(EF^c)+$$ $$1-P(E) + P(F) - P(E^cF)+$$ $$1-P(E) + 1-P(F) - P(E^cF^c)$$ $$=$$ $$4-P(EF)-P(EF^c)-P(E^cF)-P(E^cF^c) = 4-(P(EF)+P(EF^c)+P(E^cF)+P(E^cF^c))$$ Proof that $P(EF)+P(EF^c)+P(E^cF)+P(E^cF^c) = 1$: $$P(EF) + P(EF^c) = P(E)$$ $$P(E^cF) + P(E^cF^c) = P(E^c)$$ Note that $EF$ and $EF^c$ are mutually exclusive. And $E^cF$ and $E^cF^c$ are mutually exclusive. $$P(E) + P(E^c) = P(E \cup E^c) = 1$$ Everything together: $$4-1 = 3$$
Given a probability space $\Omega$, every $\omega \in \Omega$ is a member of exactly 3 among the 4 sets $$ \tag{*} E \cup F, \quad E \cup F^c, \quad E^c \cup F, \quad E^c \cup F^c \,.$$ E.g. if $\omega \in E \cap F$, then $\omega$ is contained in the first 3 sets in $(*)$, but not in the fourth. The other cases are similar. Thus, in a discrete probability space, adding the probabilities of the events in $(*)$ will yield $$\sum_{\omega \in \Omega} 3 P(\omega) =3 \,. $$ In a general probability space, the sum of the indicators of the events in $(*)$ is identically 3; now take expectations.
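The "exactly 3 of the 4 sets" observation is a four-case check, which can be spelled out mechanically:

```python
from itertools import product

# Each of the four regions E∩F, E∩F^c, E^c∩F, E^c∩F^c is covered by
# exactly 3 of the four unions, so the indicator sum is identically 3.
for in_E, in_F in product((True, False), repeat=2):
    unions = [in_E or in_F,          # E ∪ F
              in_E or not in_F,      # E ∪ F^c
              not in_E or in_F,      # E^c ∪ F
              not in_E or not in_F]  # E^c ∪ F^c
    assert sum(unions) == 3
```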
{ "language": "en", "url": "https://math.stackexchange.com/questions/4489139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the center and axes of an ellipse in 3D I am studying ellipses and need to prove the intersection of a plane and an elliptical cylinder is still an ellipse in 3D. My approach is to find its center and axes and have been struggling. I have a parametric equation, $\left(x,\, y,\, z\right) = \left(\frac{1}{4}-\frac{3}{2}\cos t-\frac{1}{4}\sin t,\, 2\cos t,\, \sin t \right)$ . This is an intersection between a plane $4x+3y+z=1$ and an elliptical cylinder $y^2+4z^2=4$. How do I find its center and two axes? Equivalently, how can I express it in the vector form: $$\mathbf x (t)=\mathbf c+(\cos t)\mathbf u+(\sin t)\mathbf v$$ where $\mathbf u$ and $\mathbf v$ are orthogonal vectors from the center $\mathbf c$ whose norms represent the lengths of the axes and whose directions represent the directions of the axes.
$$\left(x,\, y,\, z\right) = \left(\frac{1}{4}-\frac{3}{2}\cos t-\frac{1}{4}\sin t,\, 2\cos t,\, \sin t \right)=(\frac{1}{4},0,0)+(\frac{-3}{2},2,0)\cos t+(\frac{-1}{4},0,1)\sin t$$ So $c=(\frac{1}{4},0,0)$, $u=(\frac{-3}{2},2,0)$ and $v=(\frac{-1}{4},0,1)$
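A quick check that this $\mathbf c,\mathbf u,\mathbf v$ really parametrize the intersection (and one caveat, which is my own observation, not part of the answer):

```python
import math

c = (0.25, 0.0, 0.0)
u = (-1.5, 2.0, 0.0)
v = (-0.25, 0.0, 1.0)

for k in range(100):
    t = 2 * math.pi * k / 100
    x, y, z = (c[i] + u[i] * math.cos(t) + v[i] * math.sin(t) for i in range(3))
    assert abs(4 * x + 3 * y + z - 1) < 1e-9   # lies on the plane
    assert abs(y * y + 4 * z * z - 4) < 1e-9   # lies on the cylinder

# Caveat: u . v = 3/8 != 0, so this is a valid parametrization of the
# ellipse, but u and v are not yet the orthogonal axis directions the
# question asks for; those require a further orthogonalization step.
```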
{ "language": "en", "url": "https://math.stackexchange.com/questions/4489300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
In a play, 10 actors (5 males and 5 females) are to be selected for 3 male roles and 3 female roles. In how many ways can this distribution happen? To solve that, I'm thinking of the rule of product. For each role, 5 different actors could be selected, so I should consider (5 * 5 * 5) + (5 * 5 * 5) = 250 as the answer?
You should consider the number of ways to select the male roles, given by the coefficient $\binom{5}{3}$, which is the same for the female. Now, by the rule of product, the number of ways to complete the task is $\binom{5}{3}\binom{5}{3}=100$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4489746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can this property of certain pythagorean triples in relation to their inner circle be generalized for other values of $n$? This question was raised in comments of Is the $(3,4,5)$ triangle the only rectangular triangle with this property? and I was suggested to ask it as a separate one. First some notation, let's write $(a,b,c)$ where $a<=b<c$ for a pythagorean triple, and let's write $(x,y,z)$ where $x<=y<z$ for the distances from the vertices to the center of the inner circle of the corresponding right triangle. ($XYZ$ should be lowercase but geogebra did not allow these labels) The answer to above question proved (partially left to reader) the property: $x * y = z$ if and only if $c - b = 1$ In comments was asked if a similar property would exist for $c - b = 2$ and it was confirmed (and proof was left to reader) that: $x * y = 2 * z$ if and only if $c - b = 2$ Somewhat natural question then was raised (by me) if one can generalize for other values of $n >= 1$, that is: for which $n$ (perhaps all) holds: $x * y = n * z$ if and only if $c - b = n$ ? Thanks to @heropup for the suggestion (and the answers for $n=1,2$) update A simple computer programmed enumeration seems to confirm equivalence. At least for all $(a,b,c)$ with maximum $c <= 10000$. Note that it is not known to me (but perhaps it is known to others) if all $c - b$ cover all $n >= 1$. So asking for all $n$ is a bit ambiguous since some $n$ might never occur. An alternative, perhaps better, question rephrase is: Prove equality $$x * y = (c - b) * z$$ for all pythagorean triples.
Hint: For all primitive Pythagorean triples (right triangles with integer sides), $\space C-B=(2n-1)^2,n\in\mathbb{N}$ This can be seen at a glance using a formula I developed in $2009.$ \begin{align*} &A=(2n-1)^2+&&2(2n-1)k\\ &B= &&2(2n-1)k+2k^2\\ &C=(2n-1)^2+&&2(2n-1)k+2k^2 \end{align*} This formula generates all triples where $\space GCD(A,B,C)\space$ is an odd square. This includes all primitives where $GCD(A,B,C)=1.$ For Pythagorean triples, it appears that $\quad z^2=(2n-1)^2C$
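Independently of the hint, the identity in the question's rephrasing, $x\cdot y=(c-b)\cdot z$, can be brute-force checked over all Pythagorean triples (primitive or not) in a box:

```python
import math

# Brute-force check of x*y == (c-b)*z, where x <= y < z are the sorted
# distances from the vertices to the incenter, over triples a <= b < c.
checked = 0
for a in range(1, 200):
    for b in range(a, 200):
        c2 = a * a + b * b
        c = math.isqrt(c2)
        if c * c != c2:
            continue                 # not a Pythagorean triple
        s = (a + b + c) / 2
        r = s - c                    # inradius of a right triangle
        x, y, z = sorted(math.hypot(r, s - side) for side in (a, b, c))
        assert abs(x * y - (c - b) * z) < 1e-6 * x * y
        checked += 1
print(f"identity holds for all {checked} triples with legs < 200")
```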
{ "language": "en", "url": "https://math.stackexchange.com/questions/4489953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Maximize functional integral with constraints We have a value $S$ given by $$ S = \int_0^T \cos((\Omega + \delta \omega(t)) t) \, dt \, .$$ and we want to choose $\delta \omega(t)$ to maximize $S$, with the constraint $\left \lvert \delta \omega(t) \right \vert < C \ll \Omega$ where $C$ is a constant. Intuitively, we should slow the oscillation down when $\cos \approx 1$ and speed it up when $\cos \approx -1$, so that the integral acquires more positive contribution than negative contribution. But how does one solve this problem in detail? Is there a way to maximize a functional with constraints?
You could use the identity $\cos(a+b)=\cos(a)\cos(b)-\sin(a)\sin(b)$ and perform a Taylor expansion as $|\delta\omega|\ll\Omega$. This works only if $T$ is small with regard to $\Omega^{-1}$. More specifically: \begin{align} S&=\int_0^T\cos((\Omega+\delta\omega)t)\,dt\\ &=\int_0^T\cos(\Omega t)\cos(\delta\omega t)\,dt-\int_0^T\sin(\Omega t)\sin(\delta\omega t)\,dt\\ &\sim\int_0^T\cos(\Omega t)\left(1-\frac{(\delta\omega t)^2}2\right)\,dt-\int_0^T\sin(\Omega t)\delta\omega t\,dt \end{align} Some terms don't depend on $\delta\omega$, so really the functional simplifies to: $$S\sim-\frac12\int_0^T\cos(\Omega t)(\delta\omega t)^2\,dt-\int_0^T\sin(\Omega t)\delta\omega t\,dt$$ Maybe this helps, I'm not sure. But at least it seems more tractable than the equation in the question. If $T$ is large compared to $\Omega^{-1}$ then you would likely use a different approach and represent $\delta\omega$ as a Fourier series, so the problem becomes tractable using linear independence of the basis and the $\cos(\Omega t)$ and $\sin(\Omega t)$ weights. If $T$ is comparable to $\Omega^{-1}$ then you need to think more about it!
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are quasicategories (resp. kan complexes) (co)reflective in simplicial sets? This is definitely well known to experts, but I'm struggling to find a reference. It seems intuitive that we should be able to "complete" any simplicial set into a quasicategory or a kan complex by "adding in" all the missing (inner) horns. This should be analogous to getting a groupoid by inverting all the arrows in a category, and so should be a reflector. The question, then, is whether this can be made precise. Is the category of quasicategories (resp. kan complexes) a reflective subcategory of the category of simplicial sets? As a ~bonus question~, groupoids are also coreflective in categories, since we can take the core of a category instead of the groupoid completion. Can we similarly restrict to some "core" of a simplicial set in order to see quasicategories or kan complexes as a coreflective subcategory of $s\mathsf{Set}$? I suspect the answer to this is "no", which is why I'm leaving it as a bonus question. Thanks in advance!
No. This cannot be true, since neither Kan complexes nor quasicategories are closed under limits or colimits in simplicial sets. For limits, this can be simply seen by the fact that not every sub-simplicial set of a Kan complex or of a quasicategory has the same property. For colimits, the quotient identifying the vertices of the nerve of the groupoid representing an isomorphism is not a Kan complex or a quasicategory. The problem with your proposed reflector is that it gives a construction unique only up to weak equivalence, not up to isomorphism. What is true is that Kan complexes are reflective and coreflective in the $(\infty,1)$-category of quasicategories. This is directly analogous to the situation for categories and for groupoids. Less intuitively, the completion of a simplicial set into a quasicategory or a Kan complex can be seen as the fibrant replacement functor in the relevant model categories.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Get the 4-d rotation matrix given two planes There are two 2-d planes in 4-d space (both passing through the origin). They intersect in a line. We are given the coordinates of three points for the first plane (a $3 \times 4$ matrix, $V_1$ where each row is one of the three points) and the coordinates of three points for the second one ($V_2$). Imagine translating both planes so the origin passes through both of them. Now, they will also have an intersecting line (even if they didn't before). I'd like to rotate the second plane with respect to the first along the common line between them. This will involve shifting the origin somewhere along the common line and then the rotation can be represented by a rotation matrix ($R$, a $4 \times 4$ matrix). How do I get this rotation matrix?
Find the normal vectors $U$ and $V$ of the two planes. Let $\theta$ be the angle between $U$ and $V$, and let $W$ be their cross product. The rotation you’re looking for is a rotation around $W$ by the angle $\theta$. If you want it in matrix form, use Rodrigues’ formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is this a Parseval Frame? In Introduction to Finite Frame Theory it is given without proof that the system $\left \{\frac{e_1}{1}, \frac{e_2}{\sqrt{2}}, \frac{e_2}{\sqrt{2}}, \frac{e_3}{\sqrt{3}}, \frac{e_3}{\sqrt{3}}, \frac{e_3}{\sqrt{3}}, ..., \frac{e_N}{\sqrt{N}}, \frac{e_N}{\sqrt{N}}, ..., \frac{e_N}{\sqrt{N}}\right \}$ is a Parseval frame for $\mathcal{H}^N = \mathbb{C}^N$ with $\{e_n\}_{n = 1}^N$ being the ONB. Why is that? Maybe it's just too simple and I cannot see it ...
Do you know Parseval's identity? It states that for an ONB $\{e_n\}_{n=1}^N$, we have that \begin{equation} \|x\|^2=\sum_{n=1}^N|\langle x,e_n\rangle|^2,\quad \forall x\in \mathbb{C}^N.\end{equation} The claim follows almost immediately from this identity since it holds for any $x\in \mathbb{C}^N$ that \begin{align} |\langle x, e_1\rangle|^2+|\langle x, \frac{1}{\sqrt{2}}e_2\rangle|^2+|\langle x, \frac{1}{\sqrt{2}}e_2\rangle|^2+\cdots&=\sum_{n=1}^N n |\langle x, \frac{1}{\sqrt{n}}e_n\rangle|^2\\ &=\sum_{n=1}^N n\cdot \frac{1}{n} |\langle x,e_n\rangle|^2\\ &=\sum_{n=1}^N |\langle x,e_n\rangle|^2\\ &=\|x\|^2. \end{align}
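The chain of equalities is easy to confirm numerically for a random complex vector (plain Python, no external libraries — the setup below is just one concrete instance with $N=6$):

```python
import random

# Check the Parseval property  sum_j |<x, f_j>|^2 == ||x||^2  for the
# frame {e_n / sqrt(n), repeated n times} in C^N.
N = 6
random.seed(0)
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
norm2 = sum(abs(coord) ** 2 for coord in x)

frame_sum = 0.0
for n in range(1, N + 1):
    coeff = x[n - 1] / n ** 0.5      # <x, e_n / sqrt(n)>
    frame_sum += n * abs(coeff) ** 2  # the vector occurs n times
assert abs(frame_sum - norm2) < 1e-9
print("Parseval frame identity holds")
```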
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of non-trivial zeros of $\zeta(s)$ in a small substrip I want to examine the number of non-trivial zeros of $\zeta(s)$ in the small substrip given by $a \leq \Re s \leq 1$, where $a>0$ is an absolute constant, and $U \leq \Im s \leq T$ by considering the following: \begin{equation}\label{one} \tag{1} N(a, T)-N(a, U), \end{equation} where $N(\sigma, t)$ denotes the number of non-trivial zeros of $\zeta(s)$, $\rho=\beta+i\gamma$, with $\sigma \leq \beta$ and $\gamma \leq t$. The issue is that, from what I've seen, all such formula for $N(\sigma, t)$ are given in the form $O(\cdots)$, which makes the study of \eqref{one} difficult, if not impossible. Does anyone have an suggestions or references to literature that may be of use?
It is known that there are on the order of $T \log T$ zeros up to height $T$ lying exactly on the critical line $\mathrm{Re} s = \tfrac{1}{2}$. It is possible to show that there are at most $O(T)$ zeros in the strip $\mathrm{Re} s \in (a, 1)$ for any $a > \tfrac{1}{2}$, so that one hundred percent of zeros lie arbitrarily close to the critical line. There are more powerful versions of this result. I think a good reference to check would be Levinson's Almost all roots of $\zeta(s) = 0$ are arbitrarily close to $\sigma = 1/2$, appearing in the Proceedings of Nat. Acad. Sci. in 1975. (Link to paper). Works citing Levinson's work will lead to a paper trail of similar analysis. (If you'll forgive a bit of self-promotion, I very recently wrote a note about related counting results, which I tried to make pretty straightforward). The basic idea in these is to use some form of the argument principle to count zeros in regions and bounding the results. And the fundamental challenge is that we don't understand the actual distribution of zeros well enough to get anything except for upper bounds, as any actual zero would correspond to a main term in the integral analysis. There is no hope to perform this type of analysis without having results expressed as $O(\cdot)$ bounds with current ideas and techniques.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $\int_0^\infty \frac{x^k}{(x^2 + b^2)^l}\,dx=\frac1{2b^{2l-k-1}}\Gamma(\frac{k+1}2)\Gamma(\frac{2l-k-1}{2})/\Gamma(l)$ . My quantum mechanics textbook states the following integral without proof: $$ \int_0^\infty \frac{x^k}{(x^2 + b^2)^l} \mathrm dx = \frac1{2b^{2l-k-1}}\frac{\Gamma(\frac{k+1}2)\Gamma(\frac{2l-k-1}{2})}{\Gamma(l)} $$ What is this class of integrals called? Why are the gamma functions involved, and how do we prove this identity?
After someone suggested the beta function (many thanks!), I managed to figure out a proof by myself. Starting from the key property of the beta function that $$ B(u,v) = \int_0^1 t^{u-1} (1-t)^{v-1}\mathrm dt = \frac{\Gamma(u) \Gamma(v)}{\Gamma(u + v)} $$ (the proof of this as given on the Wikipedia page is not too difficult), we substitute a variable $y$ which satisfies $$ y^2 = \frac{t}{1-t}. $$ This gives the desired bounds $t=0 \to y = 0$, $t=1 \to y = \infty$. Solving for $t$ and $\mathrm d t$, we get $$ t = \frac{y^2}{1+y^2}=1- \frac{1}{1+y^2} \implies \mathrm dt = \frac{2y}{(1+y^2)^2}\mathrm dy. $$ Hence $$ \begin{align} B(u,v) &= \int_0^\infty \left(\frac{y^2}{1+y^2}\right)^{u-1} \left(\frac{1}{1+y^2}\right)^{v-1} \frac{2y}{(1+y^2)^2}\mathrm dy \\ &= 2 \int_0^\infty \frac{y^{2u - 1}}{(1+y^2)^{u+v}}\ \mathrm d y = \frac{\Gamma(u) \Gamma(v)}{\Gamma(u + v)}. \end{align} $$ Letting $k \equiv 2u-1$ and $l \equiv u + v$, we get $ u = \frac{k+1}{2}$ and $v = \frac{2l-k-1}{2}$. With this, $$ 2 \int_0^\infty \frac{y^{k}}{(1+y^2)^{l}}\ \mathrm d y = \frac{\Gamma(\frac{k+1}{2}) \Gamma(\frac{2l-k-1}{2})}{\Gamma(l)}. $$ Now, simply substituting $y \to \frac xb$ returns the original identity. $\blacksquare$
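The final identity can be double-checked numerically with a crude quadrature (the midpoint rule and cutoff below are arbitrary choices of mine; the integrand's polynomial decay makes them adequate here):

```python
import math

def rhs(k, l, b):
    """Closed form: Gamma((k+1)/2) Gamma((2l-k-1)/2) / (2 b^(2l-k-1) Gamma(l))."""
    return (math.gamma((k + 1) / 2) * math.gamma((2 * l - k - 1) / 2)
            / (2 * b ** (2 * l - k - 1) * math.gamma(l)))

def lhs(k, l, b, upper=200.0, steps=200000):
    # midpoint rule; fine here since the integrand decays like x^(k-2l)
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** k / (x * x + b * b) ** l * h
    return total

for k, l, b in [(2, 3, 2.0), (1, 2, 1.5), (0, 2, 1.0)]:
    assert abs(lhs(k, l, b) - rhs(k, l, b)) < 1e-4
print("numeric integral matches the Gamma-function expression")
```

For instance $(k,l,b)=(0,2,1)$ reproduces $\int_0^\infty (1+x^2)^{-2}\,dx = \pi/4$.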
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Weierstrass form of $x^4+ux^2y^2+y^4=z^2$ I think $x^4+ux^2y^2+y^4=z^2$ is an elliptic curve, so how should I transform it into Weierstrass form? Either by hand or by software like MAGMA is fine. I am new to MAGMA and I tried something similar to this, but it seems like the "Curve" function requires the function to be homogeneous, which I don't know why and how to get around. Edit: Here $u$ is a parameter, and I meant to ask that for every $u$, what is the corresponding Weierstrass form, with $u$ as a parameter in it. The base field considered is $\mathbb{Q}$.
The answer by Elkies is missing a little detail which I now explain. Suppose that $\,u,x,y,z\,$ are integers with $\,y\ne0.\,$ Given the equation $$ x^4 + ux^2y^2 + y^4 = z^2, $$ Elkies suggests to define $\,t = x/y\,$ but does not mention that he also assumes $\,v=z/y^2.\,$ Thus $$ 1 + u t^2 + t^4 - v^2 = (x^4 + u x^2 y^2 + y^4-z^2)/y^4. $$ Similarly, use $\,X = x^2/y^2,\; Y = xz/y^3\,$ to get the Weierstrass form $$ X^3 \!+\! uX^2 \!+\! X \!-\! Y^2 \!=\! (x^4 \!+\! ux^2y^2 \!+\! y^4 \!-\! z^2)x^2/y^6. $$ Thus, rational solutions to $\, X^3 + uX^2 + X = Y^2 \,$ (assuming $\,X = t^2$) correspond to integer solutions to $\, x^4 + ux^2y^2 + y^4 = z^2 \,$ by using $\,x = y\,t,\; z = Yy^2/t.$ For $\,u=-1\,$ the elliptic curve with LMFDB label 24.a5 $\,y^2 = x^3-x^2+x\,$ has only three rational points which are $\,(0,0),\;(1,1),\;(1,-1)\,$ and also the point at infinity. For $\,u=1\,$ the elliptic curve with LMFDB label 48.a5 $\,y^2 = x^3+x^2+x\,$ has only one rational point which is $\,(0,0)\,$ and also the point at infinity.
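The displayed polynomial identity (the one relating the quartic to the Weierstrass cubic) can be verified exactly with rational arithmetic — a quick sketch over random rational inputs:

```python
from fractions import Fraction as F
import random

# Verify the identity
#   X^3 + u X^2 + X - Y^2 == (x^4 + u x^2 y^2 + y^4 - z^2) * x^2 / y^6
# with X = x^2/y^2 and Y = x z / y^3, exactly over random rationals.
random.seed(1)
for _ in range(200):
    u, x, y, z = (F(random.randint(-9, 9), random.randint(1, 9))
                  for _ in range(4))
    if x == 0 or y == 0:
        continue
    X, Y = x * x / (y * y), x * z / y ** 3
    left = X ** 3 + u * X ** 2 + X - Y ** 2
    right = (x ** 4 + u * x ** 2 * y ** 2 + y ** 4 - z * z) * x * x / y ** 6
    assert left == right
print("identity holds exactly on all sampled rationals")
```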
{ "language": "en", "url": "https://math.stackexchange.com/questions/4490894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Cholesky inverse I have the Cholesky decomposition $LL^T$ of a symmetric positive definite matrix. I then compute a result in the form of $A=LXL^T$, where $A$ and $X$ are also symmetric positive definite matrices. I know $A$ and I would like to retrieve $X$. The easy way would be to invert $L$ so we have $X=L^{-1}AL^{-T}$, however this requires to invert a matrix. I know that the inverse of a triangular matrix is not too difficult to compute but is there a better way to compute $X$? Edit To provide more background, I already know $A$ in a diagonalized way, for example I have $A=PDP^T$, with $D$ a diagonal definite positive matrix and $PP^T=P^TP=I$. Still, I need to get to $X$ somehow, ideally with no matrix inversion.
To solve $PDP^T = LXL^T$ with matrices as specified, compute the diagonal matrix $R$ such that $RR^T = D$ by taking the square roots of the diagonal entries. then you have: $PRR^TP^T = LCC^TL^T$ where $CC^T = X$ We can then determine $C$ by solving $PR = LC$ which can be performed directly by any number of triangular solvers. Edit Since maintaining the Cholesky decomposition of all matrices in such computations is often numerically preferable, it is highly recommended to work with these as much as possible. Rank-1 updates such as cholupdate may also be helpful in this effort.
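To illustrate the "triangular solves instead of explicit inversion" idea on the simpler problem $X = L^{-1}AL^{-T}$: a minimal pure-Python sketch (toy $2\times2$ example and the helper names are mine; in practice you would call a library triangular solver rather than hand-rolling forward substitution):

```python
# Recover X from A = L X L^T with two forward substitutions,
# never forming L^{-1} explicitly.
def forward_solve(L, B):
    """Solve L Y = B for Y, with L lower triangular (lists of lists)."""
    n, m = len(L), len(B[0])
    Y = [[0.0] * m for _ in range(n)]
    for j in range(m):
        for i in range(n):
            s = sum(L[i][k] * Y[k][j] for k in range(i))
            Y[i][j] = (B[i][j] - s) / L[i][i]
    return Y

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

L = [[2.0, 0.0], [1.0, 1.0]]
X = [[2.0, 1.0], [1.0, 2.0]]            # symmetric positive definite
A = matmul(matmul(L, X), transpose(L))  # A = L X L^T

# X = L^{-1} A L^{-T}: solve L M = A, then L N = M^T; since X is
# symmetric, X = N^T.
M = forward_solve(L, A)
X_rec = transpose(forward_solve(L, transpose(M)))
assert all(abs(X_rec[i][j] - X[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("X recovered via two triangular solves")
```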
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Distribution of a random variable that is produced by another uniformly distributed random variable Assume a simple two-step lottery: the reward (denoted by $Y$) is a random variable and realized as follows: First: random variable $X$ is drawn from uniform distribution by interval $(a,c)$. Second: After the realization of $X$, a player determines reward ($Y$) stochastically: * *with probability $\alpha$: reward is $bX$ ($b$ is constant) *with probability $1-\alpha$: reward is $X$ Now, what is the distribution of the random reward $Y$? If the solution is weighted average on both states: i.e. $Y=\alpha bX+(1-\alpha)X=(\alpha b+1-\alpha )X$ then we can say $Y$ is a linear rescaling transformation of $X$ and so $Y$ has the same distribution as $X$ has. But I am suspicious that $Y$ really is a weighted average of $X$ and $bX$
Let $E$ denote the event that the reward is taken to be $bX$. Then:$$P(Y\leq y)=P(E)P(Y\leq y\mid E)+P(E^c)P(Y\leq y\mid E^c)=$$$$\alpha P(bX\leq y\mid E)+(1-\alpha)P(X\leq y\mid E)$$ If moreover $1_E$ and $X$ are independent then this reduces to:$$P(Y\leq y)=\alpha P(bX\leq y)+(1-\alpha)P(X\leq y)\tag1$$ Can you take it from here? Looking at it as weighted average as in your question the stochastic character of the choice is disregarded. Actually we are dealing with:$$Y=1_EbX+(1-1_E)X\tag2$$where $1_E$ is a random variable (Bernoulli distribution with parameter $\alpha$). This instead of $\alpha bX+(1-\alpha)X$.
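To see concretely that the mixture $(1)$ and the "weighted average" variable give genuinely different distributions, here is a small Monte Carlo sketch (parameter values and seed are arbitrary choices of mine):

```python
import random

random.seed(42)
a_lo, a_hi, b, alpha = 0.0, 1.0, 3.0, 0.4   # X ~ Uniform(0, 1)
y0 = 2.0                                    # evaluation point of the CDF

def cdf_unif(y, lo, hi):
    """Exact CDF of Uniform(lo, hi)."""
    return min(max((y - lo) / (hi - lo), 0.0), 1.0)

n = 200000
hits = 0
for _ in range(n):
    X = random.uniform(a_lo, a_hi)
    Y = b * X if random.random() < alpha else X   # the two-step lottery (2)
    hits += (Y <= y0)
empirical = hits / n

mixture = alpha * cdf_unif(y0 / b, a_lo, a_hi) \
          + (1 - alpha) * cdf_unif(y0, a_lo, a_hi)        # formula (1)
averaged = cdf_unif(y0 / (alpha * b + 1 - alpha), a_lo, a_hi)

assert abs(empirical - mixture) < 0.01   # simulation matches the mixture
assert abs(mixture - averaged) > 0.05    # and differs from the average model
```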
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the intution behind the matrix inverse formula? I am learning about matrix inverses and my professor introduced the formula $\begin{bmatrix}a & b\\c & d\end{bmatrix}^{-1}=$ $ 1\over(ad-bc)$$\begin{bmatrix}d & -b\\-c & a\end{bmatrix}$ $,if\; and \; only \; if \; ad-bc\neq 0$. Can someone give me some intuition as to why this formula works? I understand the intuition behind using Gauss-Jordan elimination to find the matrix by applying all of the steps of row reduction to the identity matrix, but this formula seems like magic. Thank you!
The inverse of an invertible matrix is given by $$A^{-1}={\frac {1}{\det(A)}} \mathrm{adj} (A)$$ Clearly, $\det(A) = ad-bc$, and we can find the adjugate matrix by transposing the cofactor matrix of $A$. This results in the formula $$A ^{-1} = {\frac {1}{ad-bc}} {\begin{bmatrix}\,\,\,d&\!\!-b\\-c&\,a\\\end{bmatrix}}$$
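You can also confirm directly that the formula produces a right inverse — multiplying out gives the identity. A short exact-arithmetic check (a specific $2\times2$ example of my choosing):

```python
from fractions import Fraction as F

def inv2(a, b, c, d):
    """The 2x2 inverse formula: swap a and d, negate b and c, divide by det."""
    det = a * d - b * c
    assert det != 0
    return [[d / det, -b / det], [-c / det, a / det]]

a, b, c, d = F(2), F(7), F(1), F(5)   # det = 2*5 - 7*1 = 3
M = [[a, b], [c, d]]
Minv = inv2(a, b, c, d)
prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]       # M * M^{-1} is the identity
```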
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Best approximation of a fraction to a given number Given a number $q \in \mathbb{R}$. Find two numbers $n,m \in \mathbb{N}$ with $m \leq N$ such that $\left|\frac{n}{m} - q\right|$ becomes minimal. Quick info: I have no idea which subject area is best for this type of task. I just set myself this task because I find it interesting. I first had the idea to work with the approach $\lVert Ax - b\rVert$, but the problem is not linear. I had then looked in a few books, and they always went directly to polynomials or functions for which one could build an orthonormal basis. Therefore my question: How do I solve this problem? Can it be solved analytically at all? Okay, I have now looked at the procedure mentioned in the first answers and it is actually quite simple: Split the number $q_0 = q$: $$q_0 = ⌊q_0⌋ + \tilde{q}_0,$$ where $\tilde{q}_0 < 1$. Subsequently, we find a $q_1$ with $$q_1 = \frac{1}{\tilde{q}_0}.$$ Then the procedure repeats. Finally, the algorithm is: (1) $q_k = ⌊q_k⌋ + \tilde{q}_k$ (2) $q_{k+1} = \frac{1}{\tilde{q}_k}$ The best approximation is then $$q = ⌊q_0⌋ + \frac{1}{⌊q_1⌋ + \frac{1}{⌊q_2⌋ \,+ \,\ldots}}$$ Example: $3.1415926...$ $$q_0 = 3 + 0.1415926...$$ $$q_1 = \frac{1}{0.1415926...} = 7.0625... = 7 + 0.0625...$$ $$q_2 = \frac{1}{0.0625...} = 15.996... = 15 + 0.996...$$ $$q_3 = \frac{1}{0.996...} = 1.003... = 1 + 0.003...$$ We then find the following approximations: $$ q = \left\{3, \frac{22}{7}, \frac{333}{106},\frac{355}{113},\ldots \right\} $$ But what bothers me a lot here are the big denominators. Already the third fraction is over 100. Is there no fraction $\frac{n}{m}$ with $m \leq 100$ which approximates the number $\pi$ better than $\frac{22}{7}$? Okay, I have now found a better approximation. It holds: $$\left|\frac{22}{7} - \pi\right| = 0.0012644892$$ $$\left|\frac{311}{99} - \pi\right| = 0.00017851$$ So the algorithm is only partially suitable.
To answer your exact question, if $q$ is rational, then it is exactly a ratio of integers. If it's irrational, then there is some infinite sequence of rational approximations that converge to your number, so there is no one approximation where the difference is minimal. What you probably want are numbers with small denominator that approximate your number. In particular, an approximation $n/m$ could be considered best if it's closer to $q$ than all other rational numbers with denominator at most $m$. The standard way to accomplish this uses continued fractions. You can cut off the infinite fraction series to get a rational approximation, and it's possible to prove that this procedure, together with the intermediate "semiconvergents" (which account for fractions like $\frac{311}{99}$), gives all the best rational approximations. See Best Rational Approximations here for more details: https://en.m.wikipedia.org/wiki/Continued_fraction
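A short Python sketch of the procedure described above (the function name is my own, and the brute-force check at the end is only an illustration): it builds the convergents of $\pi$ with the standard numerator/denominator recurrence, then verifies by exhaustive search that the semiconvergent $311/99$ really is the best approximation with denominator at most $100$.

```python
from fractions import Fraction

def convergents(q, n_terms=4):
    # standard recurrence: h_i = a_i*h_{i-1} + h_{i-2}, same for k_i
    h0, k0, h1, k1 = 1, 0, int(q), 1
    out = [(h1, k1)]
    x = q
    for _ in range(n_terms - 1):
        frac = x - int(x)
        if frac == 0:
            break
        x = 1 / frac
        a = int(x)
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        out.append((h1, k1))
    return out

pi = 3.14159265358979
assert convergents(pi) == [(3, 1), (22, 7), (333, 106), (355, 113)]

# brute force: best fraction n/m with m <= 100 (n = nearest integer to pi*m)
best = min((Fraction(round(pi * m), m) for m in range(1, 101)),
           key=lambda f: abs(f - Fraction(pi)))
assert best == Fraction(311, 99)
```

The convergent recurrence is the usual one; the exhaustive search is just a sanity check that semiconvergents can beat the last convergent within a denominator bound.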
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $\mathbb{Q}(\sqrt[3]{2})$ does not have non trivial automorphisms Since I started the abstract algebra course, I haven't had exercises associated with the existence or non-existence of automorphisms, and I haven’t found exercises similar to this either so I don't know how to proceed. I would greatly appreciate your help.
The Galois group of a field extension consists of the automorphisms that fix the base field and permute the elements that generate the extension. If the extension is generated by finitely many elements, then each automorphism is characterized by the images of these elements, which must again be roots of their minimal polynomials. Since the minimal polynomial of $\sqrt[3]{2}$ is $t^3-2$, whose roots are $\sqrt[3]{2}, \sqrt[3]{2}w, \sqrt[3]{2}w^2$ where $w=e^{2\pi i/3}$, and the only root lying in the extension is the first one, the only possible assignment is $\sqrt[3]{2}\to \sqrt[3]{2}$, that is, the identity.
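A quick numerical illustration of the key point (plain Python, purely illustrative): $t^3-2$ has exactly one real root, so an automorphism of $\mathbb{Q}(\sqrt[3]{2})\subset\mathbb{R}$ has nowhere to send $\sqrt[3]{2}$ except itself.

```python
import cmath

# the three complex roots 2^(1/3) * w^k, where w = e^(2*pi*i/3)
roots = [2 ** (1 / 3) * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
real_roots = [r for r in roots if abs(r.imag) < 1e-9]
assert len(real_roots) == 1
assert abs(real_roots[0].real - 2 ** (1 / 3)) < 1e-9
```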
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
maximizing the negative norm: is it a convex problem? I have an objective function with two decision variables $x_1$ and $x_2$ \begin{equation} \begin{array}{cl} \underset{\alpha_{1},\alpha_{2}}{ \max} a_1x_1+ a_2x_2 -c\sqrt{h_{1}(x) ^2 + h_{2}(x)^2} \\ \text { s.t. } x_{1}+x_{2}=1 \end{array} \end{equation} I tried to introduce new variables $b_1=h_1(x) ,b_2=h_2(x) $ so my objective function will be \begin{array}{cl} \underset{\alpha_{1},\alpha_{2}}{ \max} a_1x_1+ a_2x_2 -cu \\ \text { s.t. } x_{1}+x_{2}=1 ,\\ h_1(x)=b_1, \\ h_2(x)=b_2 ,\\ \sqrt{b_1^2 + b_2^2 } \leq u \end{array} my questions: * *is the reformulation with the new constraints correct? *because I am maximizing the negative of a norm, does that mean the objective is concave (so the problem is convex)? *how can I solve my problem in MATLAB? Thanks in advance
If I understand the problem correctly, the optimization is over $(x_{1},x_{2})$ and not over $(a_{1},\,a_{2})$, because in the latter case max $=+\infty$. Just use the linear constraint and write $x_{1}=x$, $x_{2}=1-x$. Then we get max $a_{1}x+a_{2}(1-x)-c\sqrt{H_{1}^{2}(x)+H_{2}^{2}(x)}$ where $H_{1}(x)=h_{1}(x,1-x)$ and $H_{2}(x)=h_{2}(x,1-x)$, which can be solved in the standard way by considering the derivative of the function and finding stationary points, that is: $a_{1}-a_{2}-c\dfrac{1}{2\sqrt{H_{1}^{2}(x)+H_{2}^{2}(x)}}\,(2H_{1}(x)H_{1}'(x)+2H_{2}(x)H_{2}'(x))=0$ Solving the last equality you get some stationary points, and by considering the second derivative you can decide which one is the maximum! But it all depends on the functions $\, h_{1},\,h_{2}$.
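To make the recipe concrete, here is a stdlib-only Python sketch on a made-up instance (the values $a_1=1$, $a_2=0$, $c=2$ and the maps $H_1(x)=x$, $H_2(x)=1-x$ are my own illustrative choices, not from the question): a grid search over $x$ recovers the stationary point predicted by setting the derivative above to zero, which for this toy instance reduces to $14x^2-14x+3=0$, i.e. $x^{*}=\frac12+\frac{\sqrt7}{14}$.

```python
import math

# toy instance (all constants and maps are illustrative choices)
a1, a2, c = 1.0, 0.0, 2.0
H1 = lambda x: x            # plays the role of h1(x, 1-x)
H2 = lambda x: 1.0 - x      # plays the role of h2(x, 1-x)

def f(x):
    return a1 * x + a2 * (1.0 - x) - c * math.sqrt(H1(x) ** 2 + H2(x) ** 2)

# brute-force maximization on a fine grid over [-1, 2]
xs = [-1.0 + i * 1e-4 for i in range(30001)]
xbest = max(xs, key=f)

# the stationarity condition f'(x) = 0 here reduces to 14x^2 - 14x + 3 = 0
xstar = 0.5 + math.sqrt(7.0) / 14.0
assert abs(xbest - xstar) < 1e-3
```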
{ "language": "en", "url": "https://math.stackexchange.com/questions/4491954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is $\int_0^\infty \ln \left(1-\frac{\sin x}{x} \right) dx $? I'm looking for closed-form expressions for the integral $$I:=\int_0^\infty \ln \left(1-\frac{\sin x}{x} \right) dx .$$ Some related integrals that I've found include: $$J:=-\int_0^\infty \log(1-\cosh(x))\frac{x^2}{e^x}\,dx = 6 + 2\ln(2) - {2\pi^{2} \over 3} - 4\zeta(3) - 2\pi i,$$ and this one: $$K:= \int_0^{\frac{\pi}{2}}\log\Big{(}\cos(x)\Big{)}dx = - \frac{\pi}{2} \log(2) ,$$ and (p. 530) $$L:=\int_{0}^{\infty} \big{(} \ln(1-e^{-x}) \big{)} dx = - \frac{\pi^{2}}{6} .$$ Moreover, I've encountered various integrals of the sinc function, including: $$M:= \int_{0}^{\pi/2} \ln \bigg{(} \frac{\sin(x)}{x} \bigg{)} dx = \frac{\pi}{2} \Big{(} 1 - \ln(\pi) \Big{)} .$$ However, I haven't found any closed-form expressions for $I$ yet, only an approximation: $$I \approx -5.75555891011162780816$$ and I'm not sure how to proceed with the integral.
More precise computation of the given integral: $$I=-5.031379591902842520548271636746403412607399342991051\cdots$$ This is computed using PARI/GP. The integrand is oscillating, so intnum cannot handle it directly, and PARI/GP user's guide "suggests" expanding it into Fourier series. Instead, using $$\int_0^\infty f(x)\,dx=\int_0^\pi f(x)\,dx+\sum_{n=1}^\infty\int_0^\pi\big(f(2n\pi-x)+f(2n\pi+x)\big)\,dx$$ for our integrand $f(x)=\log(1-\frac{\sin x}{x})$, we have $$f(2n\pi-x)+f(2n\pi+x)=\log\left[1-\left(\frac{x-\sin x}{2n\pi}\right)^2\right]-\log\left[1-\left(\frac{x}{2n\pi}\right)^2\right]$$ and, recognizing the infinite product for the sine function, we obtain $$I=\int_0^\pi\log\frac{\sin\big((x-\sin x)/2\big)}{\sin(x/2)}\,dx=\color{blue}{\int_0^\pi\log\left(2\sin\frac{x-\sin x}{2}\right)dx}.$$ I compute this as follows. sinctail(x)=if(x>1e-3,(1-sinc(x))/x^2,suminf(n=0,(-x^2)^n/(2*n+3)!)); kernel(x)=log(sinctail(x))+log(sinc((x-sin(x))/2)); answer()=3*Pi*(log(Pi)-1)+intnum(x=0,Pi,kernel(x)); An amusing consequence: using Bessel functions, we have $\color{blue}{I=-\pi\sum_{n=1}^\infty J_n(n)/n}$. Another line of thought, thanks to the observation by @Tyma Gaidash: $$y=y(\lambda,x)=2\sum_{n=1}^\infty J_n(n\lambda)\frac{\sin nx}{n}\qquad(|\lambda|\leqslant 1)$$ is the solution of $y=\lambda\sin(x+y)$. Writing $y(x):=y(1,x)$, we get $$-I=\int_0^\infty\frac{y(x)}{x}\,dx=\frac12\int_0^\pi y(x)\cot\frac x2\,dx$$ similarly to the above. Now let's get rid of the implicitly defined $y(x)$ by an inverse substitution. Namely, for $0<x<\pi/2-1$, $y(x)$ grows from $0$ to $1$, and we have $x=\arcsin y-y$; similarly, for $\pi/2-1<x<\pi$, $y(x)$ goes from $1$ to $0$, and we have $x=\pi-\arcsin y-y$. Thus \begin{align*} -I&=\frac12\int_0^1y\left(\frac1{\sqrt{1-y^2}}-1\right)\cot\frac{\arcsin y-y}2\,dy\\&+\frac12\int_0^1y\left(\frac1{\sqrt{1-y^2}}+1\right)\tan\frac{\arcsin y+y}2\,dy\\\implies I&=\color{blue}{\int_0^1\frac{(1-y^2-\cos y)\ y}{(y-\sin y)\sqrt{1-y^2}}\,dy}. 
\end{align*} This gives an alternative way of computing $I$.
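As an independent cross-check of the highlighted identity $I=\int_0^\pi\log\big(2\sin\frac{x-\sin x}{2}\big)dx$, here is a stdlib-only Python sketch mirroring the PARI/GP code above (the function names are mine): the $3\log x$ singularity is integrated analytically as $3\pi(\log\pi-1)$, and Simpson's rule handles the smooth remainder.

```python
import math

def sinctail(x):
    # (x - sin x)/x^3, via Taylor series near 0 to avoid cancellation
    if x < 1e-2:
        return 1/6 - x**2/120 + x**4/5040 - x**6/362880
    return (x - math.sin(x)) / x**3

def kernel(x):
    # log(2 sin((x - sin x)/2)) - 3 log x, which is smooth on [0, pi]
    u = (x - math.sin(x)) / 2
    sinc_u = math.sin(u) / u if u > 0 else 1.0
    return math.log(sinctail(x)) + math.log(sinc_u)

def I_approx(n=4000):
    # Simpson's rule on the kernel, plus 3*int_0^pi log(x) dx in closed form
    h = math.pi / n
    s = kernel(0.0) + kernel(math.pi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * kernel(k * h)
    return 3 * math.pi * (math.log(math.pi) - 1) + s * h / 3

assert abs(I_approx() - (-5.031379591902842)) < 1e-8
```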
{ "language": "en", "url": "https://math.stackexchange.com/questions/4492104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Does there exist a function $f:\Bbb{R}\to \Bbb{R}$ s.t. $f$ is continuous on a dense $G_{\delta}$ set and discontinuous a.e. on $\Bbb{R}$? Does there exist any function $f:\Bbb{R}\to \Bbb{R}$ such that $f$ is continuous on a dense $G_{\delta}$ subset of $\Bbb{R}$ and discontinuous almost everywhere on $\Bbb{R}$? Attempt: (Yes) We know the real line $\Bbb{R}$ can be decomposed into a disjoint union of two small sets, i.e. $\Bbb{R}=N\cup M$ where $N$ is Lebesgue null and $M$ is meager (see here). Since $\Bbb{R}$ is of second category (non-meager) and $M$ is meager, $N$ is of second category. Since the complement of a first category set contains a dense $G_{\delta}$ subset of $\Bbb{R}$, let $G\subset N$ be a dense $G_{\delta}$ subset of $\Bbb{R}$. According to this How to construct a real valued function which is continuous only on a given $G_\delta$ subset of $\mathbb{R}$? , we can construct a function which is continuous only on $G$. As $N$ is Lebesgue null and $G\subset N$, we get $m(G) =0$. Hence $f$ is continuous only on a null set, i.e. discontinuous almost everywhere. Is my approach correct? Edit: I can produce an explicit $G_{\delta}$ set of measure $0$.
The first part - existence of a null dense $G_\delta$ set - can be shown explicitly: let $q_n$ be an enumeration of the rationals, and let $U_k = \bigcup_n (q_n - 2^{-n-k}, q_n + 2^{-n-k})$; then $\cap_k U_k$ is null (as $\mu(U_k) \leq \sum_n 2^{1-n-k}=O(2^{-k})$), dense (as it contains $\mathbb Q$) and $G_\delta$ (as each $U_k$ is open). I don't see any construction simpler than the general one to find a function continuous exactly on this set, but if we substitute our $U_k$ into the general construction, we get a fairly explicit function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4492261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For the set $X=\{g\in G:g^p=1\}$, show that $p$ divides $|X|$. Let $G$ be a finite group and $p$ be a prime divisor of $|G|$. Consider the set $X=\{g\in G:g^p=1\}$. Show that $p$ divides $|X|$. My attempt: Consider the action of $G$ on $X$ by conjugation. Then ${\rm Stab}_g=C_G(g)$ for all $g\in X$. I'm stuck here.
$X$ comprises all and only the elements of $G$ of order $p$, plus the identity. Such elements are grouped into $m$ pairwise trivially intersecting subgroups, each of order $p$. Therefore: \begin{alignat}{1} |X| &= m(p-1)+1 \\ &=mp-(m-1) \tag1 \end{alignat} But $m\equiv 1\pmod p$, because the number $n_k$ of subgroups of $G$ of order $p^k$ is congruent to $1$ modulo $p$ for every $k=0,1,\dots,k_{\text{max}}$ (see e.g. here), and in particular for $k=1$. Therefore, $m=n_1\equiv 1\pmod p$, and hence from $(1)$ $p\mid |X|$.
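The statement is easy to sanity-check computationally; here is a small stdlib-only Python sketch using $G=S_4$ (so $|G|=24$, with prime divisors $2$ and $3$):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], with permutations stored as tuples
    return tuple(p[i] for i in q)

G = list(permutations(range(4)))        # S_4
e = tuple(range(4))

def power(g, k):
    r = e
    for _ in range(k):
        r = compose(r, g)
    return r

for p in (2, 3):                        # the primes dividing |G| = 24
    X = [g for g in G if power(g, p) == e]
    assert len(X) % p == 0
# for the record: |X| = 10 for p = 2 and |X| = 9 for p = 3
```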
{ "language": "en", "url": "https://math.stackexchange.com/questions/4492393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Hint - Find a basis for $U=$ {$p \in P_4(R) : p(6)=0$} Let $P_4(R)$ denote the set of all polynomials of degree at most $4$. I was attempting to find a basis of $U=$ {$p \in P_4(R) : p(6)=0$}. I do not want a solution, but just a hint as to where to look. The following was my approach. Because $p(6)$ must equal $0$, we know that the polynomial will have a factorisation such as $(x-6)(x-a)(x-b)(x-c)$. Multiplying throughout this polynomial, we see that at the end there is always a multiple of $6$. Hence, the basis would consist of the number 6. But because this polynomial can have degree 1, 2, 3, or 4, we would also have to add the vectors $x, x^2, x^3, x^4$ (abuse of notation here; what I mean is that they are all functions, $f_j(x)=x^j$). So, the basis would consist of $6, x, x^2, x^3, x^4$. We might be able to change this basis up a bit. We want that $a_0(6)+a_1(6)+a_2(6)^2+a_3(6)^3+a_4(6)^4=0$. At this point I am stuck. Because in the next question, it asks to extend this basis to a basis of $P_4(R)$. But this basis would also be a basis of $P_4(R)$, as any linearly independent list of the right length is a basis of that vector space. Could someone give any hints as to what I should be looking for?
HINT If $p\in U\leq\textbf{P}_{4}(\mathbb{R})$, it can be written as $p(x) = a + bx + cx^{2} + dx^{3} + ex^{4}$. Hence we conclude that: \begin{align*} p(6) = 0 & \Rightarrow a + 6b + 36c + 216d + 1296e = 0\\\\ & \Rightarrow a = -(6b + 36c + 216d + 1296e) \end{align*} Now you can express the elements of $U$ in terms of four parameters. Can you proceed from here?
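A quick numerical sanity check of this parametrization (plain Python; the chosen values of $b,c,d,e$ are arbitrary): solving for $a$ from the constraint, using $6^2=36$, $6^3=216$, $6^4=1296$, does give a polynomial vanishing at $6$.

```python
def p_at(x, coeffs):
    # evaluate a + b*x + c*x^2 + d*x^3 + e*x^4
    return sum(cf * x ** i for i, cf in enumerate(coeffs))

b, c, d, e = 2, -1, 3, 5                    # arbitrary free parameters
a = -(6 * b + 36 * c + 216 * d + 1296 * e)  # forced by p(6) = 0
assert p_at(6, (a, b, c, d, e)) == 0
```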
{ "language": "en", "url": "https://math.stackexchange.com/questions/4492525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Find angle $\alpha$ in equation containing weighted sum of $\sin(\alpha)$ and $\cos(\alpha)$. The context of this question comes from calculating the Fidelity drop of quantum systems after applying a rotation with some arbitrary angle $\alpha$ on the state vector of the system, see equation below: $$\begin{eqnarray} \Delta F = &\sin(\alpha)&(\frac{(a_{1} + a_{4})(b_{2} + b_{3}) - (a_{2} + a_{3})(b_{1} + b_{4})}{2}) \nonumber \\ &-&\cos(\alpha)(\frac{(b_{2} + b_{3})^{2} + (a_{2} + a_{3})^{2} - F}{2}) \nonumber \\ &+&(\frac{(b_{2} + b_{3})^{2} + (a_{2} + a_{3})^{2} - F}{2}) \nonumber \end{eqnarray}$$ $\Delta F$, $F$, $a_i$ and $b_j$, $1\leq i,j\leq 4$ are known. Could someone explain to me how I could get the angle $\alpha$, if that is even possible? I will be very grateful for your help! I have another similar problem, but with a weighted sum of $\cos^{2}(\alpha)$, $\sin^{2}(\alpha)$, and $\cos(\alpha)\sin(\alpha)$, in which I could also get some help.
Let us have the general (and prettier) equation $$B=m\sin\alpha -n\cos \alpha+K.$$ Transpose $K$ and divide the whole equation by $\sqrt{m^2+n^2}$:$$\frac{B-K}{\sqrt{m^2+n^2}}=\frac{m}{\sqrt{m^2+n^2} }\sin\alpha-\frac{n}{\sqrt{m^2+n^2} }\cos\alpha$$ Now, $\Bigg |\dfrac {m}{\sqrt{m^2+n^2}}\Bigg |\le1$ and so there must exist some angle $A$ such that $\cos A= \dfrac {m}{\sqrt{m^2+n^2}}$ and also $\sin A= \dfrac {n}{\sqrt{m^2+n^2}}$ and $\tan A=\dfrac nm$. We have now, $$\frac{B-K}{\sqrt{m^2+n^2}}=\sin\alpha\cos A-\cos\alpha\sin A=\sin (\alpha-A).$$ Thus, $$\alpha=A+\arcsin\left(\frac{B-K}{\sqrt{m^2+n^2}}\right)$$$$=\arctan\frac nm+ \arcsin\left(\frac{B-K}{\sqrt{m^2+n^2}}\right).$$ EDIT: Comparing the general equation to the OP's equation, we have $K=n$.
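A numerical check of the recovery formula (plain Python; the constants $m,n,K$ and the target angle are arbitrary choices): it works as long as $\alpha-A$ lies in $[-\pi/2,\pi/2]$, the range of $\arcsin$; otherwise one has to pick the other solution $\pi-\arcsin(\cdot)$.

```python
import math

m, n, K = 2.0, 3.0, 0.5          # arbitrary constants
alpha = 0.7                       # the angle to recover
B = m * math.sin(alpha) - n * math.cos(alpha) + K

r = math.hypot(m, n)              # sqrt(m^2 + n^2)
A = math.atan2(n, m)              # cos A = m/r, sin A = n/r
recovered = A + math.asin((B - K) / r)
assert abs(recovered - alpha) < 1e-12
```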
{ "language": "en", "url": "https://math.stackexchange.com/questions/4492602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$3x+1$ Conjecture and link with Ergodic Theory I read here https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/Lagarias3-23.pdf that the Collatz conjecture is equivalent to $ Q_{\infty}(\mathbb{N}) \subseteq \frac{1}{3} \mathbb{Z} $. Where $Q_{\infty} : \mathbb{Z}_2 \to \mathbb{Z}_2 $ is a continous and measure preserving transformation defined by $$ x \mapsto Q_{\infty}(x) = \sum_{k=0}^{\infty} \left( T^k(x) \mod 2 \right) 2^k $$ and $T : \mathbb{Z}_2 \to \mathbb{Z}_2 $ is an ergodic map defined by $$ x \mapsto T(x) =\left\{\begin{matrix} \frac{x}{2} & \text{if} & x \equiv 0 \mod 2 \\ \frac{3x+1}{2}& \text{if} & x \equiv 1 \mod 2 \end{matrix}\right. $$ I was wondering how to prove that they are equivalent, and I was wondering if there is any reference I can read to further explore the connection between the Collatz conjecture and the Ergodic Theory.
There's an idea in play here that isn't completely obvious. First, let's talk about the more straightforward part. Choose a positive integer, say $5$. It has a 2-adic expansion, usually written $...0000101.$, which represents (reading from right-to-left) $1\times2^0 + 0\times2^1 + 1\times2^2 + 0\times2^3 + 0\times 2^4 + \cdots$. The same number has another binary sequence associated with it, namely, its parity sequence under the (shortcut version of the) Collatz function. Since the number's trajectory is: $5 \to 8 \to 4 \to 2 \to 1 \to 2 \to 1 \to 2 \to 1 \to 2 \to \cdots$, we write the parity sequence $1,0,0,0,1,0,1,0,1,0,\ldots$ Now here's the interesting part: We can re-interpret that parity sequence as another 2-adic number! Reversing the digits, to see it the usual way around, it looks like: $...0101010001.$, or emphasizing the repeating part: $\overline{01}0001.$ This is the 2-adic expansion of the rational number $-\frac{13}3$. This is the map $Q_\infty$. We have just seen that $Q_\infty(5)=-\frac{13}3$. Now, the fact that it's a fraction with denominator $3$ reflects the fact that it falls into a pattern repeating "$01$". (Note that $\overline{01}.=-\frac13$) Therefore, the claim that every element of $\mathbb{N}$ has a trajectory eventually reaching $1,2,1,2,\ldots$ is transformed, with this $Q_\infty$ map, into the claim that every element of $\mathbb{N}$ is mapped by $Q_\infty$ to a 2-adic integer with $...0101$ trailing off to the left. Such numbers are precisely the elements of $\frac13\mathbb{Z}\setminus\mathbb{Z}$. Does this help?
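The worked example can be checked mechanically in Python: iterate the shortcut map $T$ starting from $5$, read off the parity digits, and compare the resulting truncated 2-adic integer with $-\frac{13}{3}$ reduced modulo $2^K$ (i.e. $-13\cdot 3^{-1} \bmod 2^K$).

```python
def T(x):
    # the shortcut Collatz map on the integers
    return x // 2 if x % 2 == 0 else (3 * x + 1) // 2

K = 32
x, digits = 5, 0
for k in range(K):
    digits += (x % 2) << k     # k-th 2-adic digit of Q_inf(5)
    x = T(x)

# -13/3 as a 2-adic integer, truncated to K digits
inv3 = pow(3, -1, 1 << K)      # modular inverse; needs Python >= 3.8
assert digits == (-13 * inv3) % (1 << K)
```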
{ "language": "en", "url": "https://math.stackexchange.com/questions/4493333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove simple upper and lower bounds on logarithm Prove or disprove that, for $x,y>1$: $$ \frac{1}{2} \log(x y) \le \log(x + y) \le \log(x y) $$ At first, I thought the right inequality follows since $x+y < xy$ and $\log$ is monotonically increasing, but actually this doesn't hold for all $x,y>1$.
For the first part you have that $$ \frac{1}{2}\log \left( {xy} \right) = \log \left( {\sqrt {xy} } \right) \le \log \left( {\frac{{x + y}}{2}} \right) \le \log \left( {x + y} \right) $$ The inequality $$ \log \left( {\sqrt {xy} } \right) \le \log \left( {\frac{{x + y}}{2}} \right) $$ follows from the fact that $$ \sqrt {xy} \le \frac{{x + y}}{2} $$ for each $x>0, y>0$ and from the fact that the function $t\to \log t$ is monotonically increasing. The second part of your statement is trivially false, as already pointed out.
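A quick numerical illustration in Python of both halves (the point $x=y=1.5$ is an arbitrary witness): the lower bound holds, while the claimed upper bound already fails there, since $x+y=3>2.25=xy$.

```python
import math

x = y = 1.5
# lower bound: (1/2) log(xy) <= log(x + y), via the AM-GM inequality
assert 0.5 * math.log(x * y) <= math.log(x + y)
# the proposed upper bound log(x + y) <= log(xy) fails at this point
assert math.log(x + y) > math.log(x * y)
```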
{ "language": "en", "url": "https://math.stackexchange.com/questions/4493443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing the Heisenberg group is the central extension of the additive group Context and some work so far: I found out about the Heisenberg group on Youtube. I'm a Physics student. I wanted to know more about it, and I realized there was more to learn. Here is what I found out. 1.) The Heisenberg group is a group of upper triangular matrices of the form: \begin{pmatrix} 1 & a & b\\ 0 & 1 & c\\ 0 & 0 & 1 \end{pmatrix} I found this out by looking through the Wikipedia page 2.) I think I can now construct the corresponding Lie algebra. Generally, * *A matrix exponential $e^{At} = I + At + \frac{1}{2} A^2t^2+ . . . $ *The derivative $\frac{d}{dt} e^{At} = A + A^2 t + \cdots = A\,e^{At}$ Let $e^{At} = \begin{pmatrix} 1 & a(t) & b(t)\\ 0 & 1 & c(t)\\ 0 & 0 & 1 \end{pmatrix}$ then $\frac{d}{dt} e^{At}|_0= \begin{pmatrix} 0 & a'(0) & b'(0)\\ 0 & 0 & c'(0)\\ 0 & 0 & 0 \end{pmatrix}$ with basis: $ A= \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}$, $B =\begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix}$ and $C =\begin{pmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}$ Then $[A,B], [A, C]$ and $[B, C]$ can be computed The central element can then be seen from the commutation relations Question: I have read that the Heisenberg group is a central extension of the additive group V. How can one show this? More context: I have found this question that tackles the problem without invoking cocycles. It looks at the commutator of two elements of the group and finds the conditions under which the commutator must vanish to find the center of the group. The argument then terminates with the statement that the quotient by the center is isomorphic to the additive group. More context: In this Wikipedia link , under "On symplectic vector spaces" the abstract group law of the Heisenberg group is given, which will be reproduced here: $(v,t)\cdot(v',t') = (v+v',\,t+t'+ \tfrac{1}{2} \omega(v,v'))$
Here the additive group is $\mathbb{R}^2$, with the operation being addition. Let $C(G)$ be the center of the Heisenberg group $G$. To find the center $C(G)$ it suffices to find the condition necessary for two Heisenberg matrices to have a vanishing commutator. The below was done in Mathematica, not by hand. In practice $\begin{pmatrix} 1 & a_1 & b_1\\ 0 & 1 & c_1\\ 0 & 0 & 1 \end{pmatrix}$ $\begin{pmatrix} 1 & a_2 & b_2\\ 0 & 1 & c_2\\ 0 & 0 & 1 \end{pmatrix}$ - $\begin{pmatrix} 1 & a_2 & b_2\\ 0 & 1 & c_2\\ 0 & 0 & 1 \end{pmatrix}$$\begin{pmatrix} 1 & a_1 & b_1\\ 0 & 1 & c_1\\ 0 & 0 & 1 \end{pmatrix}$ = 0 Hence $\begin{pmatrix} 1 & a_1 + a_2 & b_1 + b_2 + a_1 c_2\\ 0 & 1 & c_1 +c_2\\ 0 & 0 & 1 \end{pmatrix}$ - $\begin{pmatrix} 1 & a_1 + a_2 & b_1 + b_2 + a_2 c_1\\ 0 & 1 & c_1 +c_2\\ 0 & 0 & 1 \end{pmatrix}$ = $\begin{pmatrix} 0 & 0 & - a_2 c_1 + a_1 c_2\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}$ So $a_2 c_1 = a_1c_2$ is needed for the commutator to vanish. Let the center then be matrices of the form C = $\begin{pmatrix} 1 & 0 & b_3\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$ It is easy to check that the commutator with any such upper triangular matrix is vanishing. So matrices of the form $C$ form the center of the group, $C(G)$. The Heisenberg group is a central extension of the additive group, and the center of the group is $C(G)$. Let the quotient group be $G/C(G)$. According to Wikipedia's article on group extensions $1 \rightarrow N \rightarrow G \rightarrow Q \rightarrow 1$ The center $C(G)$ of the Heisenberg group consists of the matrices with $(a,b,c)$ as defined above equal to $(0,b_3,0)$, as shown by the computation. $C(G)$ is $N$, the normal subgroup of $G$, so $1 \rightarrow C(G) \rightarrow G \rightarrow \mathbb{R}^2 \rightarrow 1$ $ G/C(G) \cong \mathbb{R}^2$ It would seem that it suffices to show this map exists. See page 625 of this
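The commutation computation is easy to replay in plain Python (no Mathematica needed; this is just a numerical sketch): matrices of the form $H(0,b,0)$ commute with every Heisenberg matrix, exactly as derived above, while generic pairs need not commute.

```python
def H(a, b, c):
    # upper unitriangular Heisenberg matrix stored as nested tuples
    return ((1, a, b), (0, 1, c), (0, 0, 1))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

Z = H(0, 7, 0)                       # candidate central element
for (a, b, c) in [(1, 2, 3), (-2, 5, 4), (0, 1, -1)]:
    M = H(a, b, c)
    assert mul(Z, M) == mul(M, Z)    # Z commutes with every M

# generic matrices need not commute: here a1*c2 - a2*c1 = 1 != 0
assert mul(H(1, 0, 0), H(0, 0, 1)) != mul(H(0, 0, 1), H(1, 0, 0))
```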
{ "language": "en", "url": "https://math.stackexchange.com/questions/4493700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find $\int_0^1 x^4(1-x)^5dx$ quickly? This question came in the Rajshahi University admission exam 2018-19 Q) $\int_0^1 x^4(1-x)^5dx$=? (a) $\frac{1}{1260}$ (b) $\frac{1}{280}$ (c)$\frac{1}{315}$ (d) None This is a big integral (click on show steps): $$\left[-\dfrac{\left(x-1\right)^6\left(126x^4+56x^3+21x^2+6x+1\right)}{1260}\right]_0^1=\frac{1}{1260}$$ It takes a lot of time to compute. How can I compute this quickly (30 seconds) using a shortcut?
I wouldn't say this method helps you in solving it in 30 seconds, but I think it can help you in simplifying the calculations so that the integral can be computed faster. First, consider by usage of the Binomial Theorem $$(1-x)^5=\sum_{k=0}^{5}\binom{5}{k}(1)^k(-x)^{5-k}=\sum_{k=0}^{5}\binom{5}{k}(-1)^{5-k}(x)^{5-k}$$ from which you can get, by multiplying by $x^4$, $$x^4(1-x)^5=\sum_{k=0}^{5}\binom{5}{k}(-1)^{5-k}(x)^{9-k}$$ applying the integral to the sum $$\int_{0}^{1}x^4(1-x)^5\cdot dx=\sum_{k=0}^{5}\binom{5}{k}(-1)^{5-k}\bigg(\frac{x^{10-k}}{10-k} \bigg)_{0}^{1}$$ which will give $$\int_{0}^{1}x^4(1-x)^5\cdot dx=\sum_{k=0}^{5}\binom{5}{k}(-1)^{5-k}\bigg(\frac{1}{10-k} \bigg)$$ Expanding the sum gives $$-\binom{5}{0}\bigg(\frac{1}{10}\bigg)+\binom{5}{1}\bigg(\frac{1}{9}\bigg)-\binom{5}{2}\bigg(\frac{1}{8} \bigg)+\binom{5}{3}\bigg(\frac{1}{7}\bigg)-\binom{5}{4}\bigg(\frac{1}{6} \bigg)+\binom{5}{5}\bigg(\frac{1}{5} \bigg)$$ simplifying will give $$\frac{-1}{10} +\frac{5}{9} -\frac{10}{8} +\frac{10}{7} -\frac{5}{6} +\frac{1}{5}=\frac{1}{1260}$$
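The alternating sum can be verified exactly with Python's `fractions` module, which keeps all the arithmetic in exact rationals:

```python
from fractions import Fraction
from math import comb

# sum_{k=0}^{5} C(5,k) * (-1)^(5-k) / (10-k)
I = sum(comb(5, k) * Fraction((-1) ** (5 - k), 10 - k) for k in range(6))
assert I == Fraction(1, 1260)
```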
{ "language": "en", "url": "https://math.stackexchange.com/questions/4493842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 5 }
Is every subset of the natural numbers that is closed under successor also closed under addition? Suppose that $S$ is a subset of $\mathbb{N}$ such that $S$ is closed under the successor operation. Does it follow that $S$ is closed under addition?
Any non-empty subset of $\mathbb{N}$ which is closed under the successor operator is of the form $$S=\{n, n+1, n+2, \ldots\}$$ for some $n\in\mathbb{N}$. This is because, as soon as you have an element $n$ in your (non-empty) subset, its successor must also be in it, and by iteration all numbers greater than $n$ will be in it as well. Clearly, any set of this form is closed under addition.
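The two closure properties can be spot-checked in Python on a truncation of such a set (here $n=7$, truncated at $100$; the truncation is only there to keep the set finite):

```python
n, cap = 7, 100
S = set(range(n, cap + 1))      # truncation of {n, n+1, n+2, ...}

# closed under successor, as far as the truncation allows
assert all(k + 1 in S for k in S if k + 1 <= cap)
# closed under addition, whenever the sum still fits in the truncation
assert all(a + b in S for a in S for b in S if a + b <= cap)
```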
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given $a_0=1$ and $a_n=a_{n-1}+a_{\lfloor n/3\rfloor}$, for primes $p\le 13$ prove there are infinitely many terms divisible by $p$ I have been able to solve the cases for $p=2$ and $3$. For $p\ge 5$, I suspect that a general formula giving an infinite sequence of $k$ such that $p|a_k$ is not possible; however I may be wrong. The sequence does not appear on OEIS. Edit: As suggested below, here I'll repeat the question in the description: There is a sequence of integers $\{a_n\}$ where $n\ge 0$, which satisfies $a_0=1$ and the recurrence $a_n=a_{n-1}+a_{\left\lfloor\frac{n}{3}\right\rfloor}$. For every prime $p$ where $p\le 13$, prove that there exist infinitely many $k$ such that $p|a_k$. Edit: As suggested by a comment, I'll add some details/context: This is a problem in one of my exercises, in a high-school-level maths olympiad training. I don't know anything about the author/country/source etc. Here are my solutions for $p=2$ and $3$: For $p=2$, suppose (for contradiction) that for every $k\ge N$, $2\nmid a_k$. Then consider $a_{3k}=a_{3k-1}+a_k$. As both terms on the right side are odd, the left side must be even. Contradiction. For $p=3$ the problem is very obvious, since $a_{3k+2}=a_{3k-1}+3a_k$, and $a_2$ is $3$, which gives $\forall k:\ k\equiv 2\pmod 3\implies 3|a_k$
Sequence $\{a_n\}$ is in OEIS, see A005704. Proof: $a_1=2$, $a_2=3$, $a_3=5$, $a_4=7$, $a_{11}=33=3\cdot11$, $a_{20}=117=9\cdot13$. If $a_m$ is divisible by a prime $p$, then $a_{3m+2}\equiv{a_{3m+1}}\equiv{a_{3m}}\equiv{a_{3m-1}}\mod p$. Suppose $a_{3m-1}\equiv{r}\mod p$. Thus $a_{9m-3}\equiv{a_{9m-4}+a_{3m-1}}\equiv{a_{9m-4}+r}\mod p$, $a_{9m-2}\equiv{a_{9m-3}+a_{3m-1}}\equiv{a_{9m-4}+2r}\mod p$, ..., $a_{9m+8}\equiv{a_{9m+7}+a_{3m+2}}\equiv{a_{9m-4}+12r}\mod p$. If $r=0$, then $a_{3m-1}$ is divisible by $p$; otherwise the $13$ values $a_{9m-4}, a_{9m-4}+r, \ldots, a_{9m-4}+12r$ are consecutive terms of an arithmetic progression with difference $r\not\equiv 0\pmod p$, so for a prime $p\le 13$ they cover every residue class, and there exists $k\in\{9m-4,9m-3,9m-2,...,9m+8\}$ such that $a_k$ is divisible by $p$. In either case we obtain an index larger than $m$ with the same property, so iterating from the base cases above produces infinitely many terms divisible by $p$.
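The base cases and the abundance of multiples are easy to confirm in Python (the cutoff $N=2000$ is an arbitrary choice):

```python
N = 2000
a = [1]
for n in range(1, N):
    a.append(a[-1] + a[n // 3])

assert a[1:5] == [2, 3, 5, 7]
assert a[11] == 33 and a[20] == 117
for p in (2, 3, 5, 7, 11, 13):
    hits = [n for n in range(N) if a[n] % p == 0]
    assert len(hits) >= 3      # the proof yields infinitely many
```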
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A random walk on half the number line Consider a symmetric random walk on the number line where steps are size $1$. If a step from $0$ tries to go to $-1$ you stay at $0$ instead. You start at $0$ and we want to compute the expected time to reach $x>0$. From numerical experiments it seems to be $x(x+1)$. How can this be proved?
Let $\tau_x$ be the hitting time of the level $x>0$. Decompose this time into the separate returns to zero. Formally, $\tau_x = T_R$ where $T_i$ is defined recursively with $T_0 = 0$ and $T_i = \min\{k> T_{i-1}, X_k \in \{0,x \}\}$, and $R = \min\{i, X_{T_i} = x\}$. What the walk does in each of the intervals $[T_{i-1},T_i]$ is independent of the past, conditional on the knowledge of where you were at time $T_{i-1}$ (either $x$ or $0$), and is distributed as a fresh walk started from this point. This is the strong Markov property. Now write $$\tau_x = T_R = \sum_{i=1}^R T_i - T_{i-1}\\ = \sum_{i=1}^\infty \Bbb 1_{R\geq i} (T_i - T_{i-1})\\ = \sum_{i=1}^\infty \Bbb 1_{X_{T_1}=0}\Bbb 1_{X_{T_2}=0}\ldots \Bbb 1_{X_{T_{i-1}}=0}(T_i - T_{i-1})\\ $$ The expectation of each of the terms is easy to compute with successive conditioning, that is $$\mathbb E[\Bbb 1_{X_{T_1}=0}\Bbb 1_{X_{T_2}=0}\ldots \Bbb 1_{X_{T_{i-1}}=0}(T_i - T_{i-1})] \\= \mathbb P(X_{T_1}=0) \mathbb P(X_{T_2}=0 | X_{T_1}=0) \cdots \mathbb P(X_{T_{i-1}}=0 | X_{T_{i-2}}=0) \mathbb E[T_i - T_{i-1}|X_{T_{i-1}}=0]$$ Now because of the strong Markov property, these are the same as for the original walk which was started at zero. $$ = \mathbb P(X_{T_1} = 0)^{i-1} \mathbb E[T_1].$$ So by linearity of expectation $$\mathbb E[\tau_x] = \sum_{i=1}^\infty \mathbb P(X_{T_1} = 0)^{i-1} \mathbb E[T_1] = \mathbb E[T_1] \times \dfrac 1 {1-\mathbb P(X_{T_1} = 0)} = \dfrac {\mathbb E[T_1]}{\mathbb P(X_{T_1} = x)}.$$ We have reduced the problem to something much simpler. Now if you think about it, $T_1$ is the same as the first time to hit either $x$ or $-1$ started from $0$ for the simple random walk on $\mathbb Z$, a very classical setting called gambler's ruin. You will readily find formulas in books or on the web both for the expected hitting time of an upper and lower level started from zero, and for the probability that the upper level is reached. Plugging these in will yield the formula you conjectured (good job for that).
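A Monte Carlo check of the formula $\mathbb E[\tau_x]=x(x+1)$ (plain Python; the sample size and seed are arbitrary choices):

```python
import random

def hitting_time(x, rng):
    # symmetric walk on {0, 1, 2, ...}; a step from 0 to -1 stays at 0
    pos, t = 0, 0
    while pos != x:
        pos = max(0, pos + rng.choice((-1, 1)))
        t += 1
    return t

rng = random.Random(2024)
x, n = 3, 100_000
est = sum(hitting_time(x, rng) for _ in range(n)) / n
assert abs(est - x * (x + 1)) < 0.3    # x(x+1) = 12 for x = 3
```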
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A question about operators on Hilbert spaces Let $H$ be a Hilbert space and $T,S \in B(H).$ Suppose that $T,S$ satisfy the following condition: $$\exists c\geq 0:~~|\langle Tu,v\rangle | \leq c |\langle Su,v\rangle |, \text{ for all }u,v\in H. $$ Can we say that one of $T$ or $S$ is a multiple of the other; i.e., $T=aS$ or $S=aT$ for some scalar $a?$ Thanks
This is correct. First $Tu, Su$ must always be linearly dependent, otherwise there exists a vector $v$ that is perpendicular to $Su$ but not $Tu$, which would contradict the inequality. To show the existence of such $v$, note that $$((Su)^{\perp})^{\perp}=\overline{\text{Span}(Su)}=\text{Span}(Su)\not\ni Tu$$ thus $Tu$ is not orthogonal to all elements in $(Su)^{\perp}$. Or more intuitively, simply take the orthogonal decomposition $$Tu = a (Su) + v \text{ such that }0\not=v\in \text{Span}(Su)^{\perp}$$ Now we show $T=aS$ for some $a$. If $Su=0$, pick $v=Tu$; we conclude from the given inequality that $Tu=0$. If $Su\not=0$, then by the above, $Tu=a_uSu$ for a unique $a_u$, as $Tu, Su$ are dependent. Now it's sufficient to show $a_u$ doesn't depend on $u$. (Note that we have essentially exhausted the power of the inequality, as the inequality would hold for $c:=\sup_{Su\not=0}|a_u|$ assuming $c<\infty$). Assume $Tu=aSu$ and $Tv=bSv$ with $Su\not=0, Sv\not=0$. If $Su$ and $Sv$ are linearly independent, then $$T(u+v)=aSu+bSv=\lambda S(u+v)=\lambda Su + \lambda Sv$$ for some $\lambda\in\mathbb C$, hence $a=b=\lambda$. If $Su = \lambda Sv$, where $\lambda\not=0$ as $Su\not=0$, then $$S(u-\lambda v)=0\Rightarrow T(u-\lambda v)=0\Rightarrow aSu-\lambda bSv=0$$ Hence $[1:\lambda]=[a: \lambda b]\Rightarrow \frac{a}{1}=\frac{\lambda b}{\lambda}=b$. Here the homogeneous coordinates (of points in dual space) are used. We can also argue that $$\begin{cases} x - \lambda y = 0 \\ a x - \lambda by=0\end{cases}$$ has nontrivial solutions for $(x,y)\in \mathbb C^2$, hence $$\det\begin{pmatrix} 1 & -\lambda \\ a & -\lambda b \end{pmatrix}=\lambda(a-b)=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Average of periodic functions A function $f$ is periodic if there exists $T \neq 0$ (a period) such that $f(x+T)=f(x)$ for all $x$. Let $f$ be a periodic function with fundamental period $T$ (that is, $T$ is the lowest positive period, supposing that one such exists). Suppose that $f$ is bounded and integrable on the interval $[0, T]$. In the previous theorem we have proved that: $$ \lim\limits_{\vert{y}\vert\to\infty}\frac{1}{y}\int_0^yf=\frac{1}{T}\int_0^Tf$$ The quantity $$\frac{1}{T}\int_0^Tf$$ we will call the mean of the periodic function Exercise: Let $$F(x)=\int_0^xf$$ Show that $F$ is periodic if and only if $f$ has mean zero. My approach: First suppose that $F$ is a periodic function with lowest positive period $a$. Then $$F(x)=F(x+a)$$ $$\int_0^xf(u)\,du=\int_0^{x+a}f(u)\,du=\int_0^af(u)\,du+\int_a^{a+x}f(u)\,du$$ Since $$\int_a^{a+x}f(u)\,du=\int_0^xf(u)\,du$$ it implies that $$\int_0^af(u)\,du=0$$ Second: Assume that: $$\frac{1}{T}\int_0^Tf=0$$ $$F(x+T)=\int_0^{x+T}f(u)\,du=\int_0^Tf(u)\,du+\int_T^{T+x}f(u)\,du=\int_0^Tf(u)\,du+\int_0^xf(u+T)\,du=\int_0^Tf(u)\,du+\int_0^xf(u)\,du=\int_0^xf(u)\,du=F(x)$$ Please, can anybody check this proof?
Copying the answer of @N.S. to the similar question here: by periodicity of $f$ we have $F(x+T)=F(x)+F(T)-F(0)$, hence \begin{align} F(2T)&=F(T+T)=2F(T)-F(0) \\ F(3T)&=F(2T+T)=F(2T)+F(T)-F(0) \\ F(4T)&=F(3T+T)=F(3T)+F(T)-F(0) \\ \vdots& \\ F(nT)&=F((n-1)T)+F(T)-F(0)\end{align} Add all these relations and cancel $F(2T), F(3T),.., F((n-1)T)$. You get $$F(nT)=nF(T)-(n-1)F(0)=(n-1)\left( \frac{n}{n-1}F(T)-F(0) \right)$$ Now, if $F(T) \neq F(0)$, the last bracket tends to $F(T)-F(0)\neq 0$ while the prefactor $n-1$ grows, so $F(nT)$ goes to $\infty$ or $-\infty$, and $F$ would be unbounded. But if $F$ is periodic, then $F$ is bounded, since it is a continuous periodic function. Hence $F(T)=F(0)=0$, i.e. $\int_0^T f=0$ and $f$ has mean zero.
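Both directions can be illustrated numerically with $T=2\pi$ (plain Python): $f(x)=\sin x$ has mean zero over a period and antiderivative $F(x)=1-\cos x$, which is periodic; adding a constant to $f$ destroys the periodicity of the antiderivative, which then shifts by $T$ per period.

```python
import math

T = 2 * math.pi
F = lambda x: 1 - math.cos(x)           # antiderivative of sin(x), F(0) = 0
for x in (0.3, 1.7, 4.0):
    assert abs(F(x + T) - F(x)) < 1e-12  # F is T-periodic

G = lambda x: x + 1 - math.cos(x)       # antiderivative of sin(x) + 1
assert abs(G(0.3 + T) - G(0.3) - T) < 1e-12  # G shifts by T: not periodic
```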
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dice Stopping Game (Law of Total Expectation) Question: "You start with £0, and continually roll a d6. If you roll a 1, 2 or 3 then you add £1 to your pot. If you roll a 4 or 5 then you stop, and win your pot. If you roll a 6 then you stop, and win nothing. What are your expected winnings?" Correct Answer: 2/3 With probability $1/2$, a 4, 5, or 6 will be rolled, thus ending the game. Thus, you expect the game to end after two rolls (as the game ending is a geometric random variable with $p = 1/2$). On the first roll, you therefore expect to make 1 pound. When the game ends, there's a $2/3$ chance you roll a 4 or 5 and a $1/3$ chance you roll a six, thus losing your money. Therefore the total winnings: $$(2/3 * 1) + (1/3 * 0) = 2/3$$ My Incorrect Approach: I want to set up a recursive formula for expressing $E[X]$, the total amount won playing the game. At any step, if either a 1, 2, or 3 is rolled, we go to the next round with 1 more pound. If we roll a 4 or 5 we terminate and get 0 additional value from the round, and if we roll a 6, we lose everything we have gained so far. I thought this could be written as: $$E[X] = 1/2 * (1 + E[X]) + 1/3 * 0 + 1/6 * (-E[X])$$ However, when I solve this, I get $E[X] = 3/4$, which is incorrect. Can someone tell me where I'm going wrong in the setup? Thanks in advance
The boring way: The probability of winning $k \ge 1$ pounds is $2^{-k} \cdot \frac{1}{3}$ (win one pound in each of the first $k$ rounds, then roll a 4 or 5 in round $k+1$), so the expected winnings are $$\frac{1}{3} \sum_{k \ge 1}k2^{-k} = \frac{2}{3}.$$ (To compute the sum, note that $S:= \sum_{k \ge 1} k 2^{-k}$ satisfies $S-\frac{1}{2} S = \sum_{k \ge 1} 2^{-k} = 1$, so $S=2$.) The setup of your recursion is incorrect in a subtle way. Specifically, it seems like you are trying to condition on the result of the first roll, which gives something like $$E[X] = P(\text{roll 1,2,3}) E[X \mid \text{roll 1,2,3}] + P(\text{roll 4,5}) E[X \mid \text{roll 4,5}] + P(\text{roll 6}) E[X \mid \text{roll 6}].$$ If you roll 1,2,3, you may think that the game essentially starts over but you have 1 extra pound for your final winnings, so $E[X \mid \text{roll 1,2,3}] \overset{?}{=} 1+ E[X]$. However in your setup where rolling a six means you lose everything you won before, you have one extra pound to lose if you roll a six, so it is not really the same as starting over, and thus that conditional expectation is not $1 + E[X]$. See user2661923's answer for the correct recursion setup.
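As a numerical sanity check (my own addition, not part of the original answer), the distribution $P(X = k) = 2^{-k}\cdot\frac13$ can be summed directly; the truncated series is already indistinguishable from $2/3$:

```python
from fractions import Fraction

def expected_winnings(terms=200):
    # P(win k pounds) = (1/2)^k * (1/3): survive k rounds, then roll a 4 or 5
    return sum(Fraction(k) * Fraction(1, 2) ** k * Fraction(1, 3)
               for k in range(1, terms + 1))

print(float(expected_winnings()))
```

The truncation error after 200 terms is on the order of $200\cdot 2^{-200}$, far below floating-point resolution.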
{ "language": "en", "url": "https://math.stackexchange.com/questions/4494953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why countable union and not countable disjoint union in the definition of $\sigma$-algebra? I was trying to find the motivation behind the definition of a $\sigma$-algebra. The idea actually came from the fact that we can measure the whole set; if we can measure a set $A$, then we can measure its complement as well by virtue of the formula $\mu(X\setminus A)=\mu(X)-\mu(A)$; and if we are given a set which is composed of disjoint pieces and we can measure the pieces, we can measure the whole by $\mu(\bigcup\limits_n A_n)=\sum\limits_n \mu(A_n)$. So, we need to ensure that the whole set $X$ belongs to $\mathcal S$, the $\sigma$-algebra; that if $A\in \mathcal S$ then $X\setminus A\in \mathcal S$; and thirdly, that if $A_1,A_2,\dots$ are disjoint sets in $\mathcal S$, then $\bigcup\limits_n A_n\in \mathcal S$. But in the original definition we do not assume the sets are disjoint; we only assume that a countable union of measurable sets is measurable. What is the reason behind not assuming disjointness? (Although it is fine in the sense that closure under countable unions is sufficient for disjoint unions.)
The idea actually came from the fact that we can measure the whole set and if we could measure a set $A$,then we could measure its complement as well by the virtue of the formula $\mu(X\setminus A)=\mu(X)-\mu(A)$ This is not true unless $\mu(A) <\infty$. For an example, consider the Lebesgue measure space $(\Bbb{R}, \mathcal{L}(\Bbb{R}), m)$. Then $\mu(\Bbb{R}\setminus (0, \infty)) =\mu(\Bbb{R})-\mu((0,\infty))=\infty -\infty$. But $\infty -\infty$ is not defined! A $\sigma$-algebra is closed under countable unions, so in particular under countable unions of disjoint sets. If a collection of subsets of $X$ satisfies the first two conditions of a $\sigma$-algebra (contains $X$ and is closed under complement) and is closed under countable disjoint unions, then this collection is called a Dynkin system. A Dynkin system (or $\lambda$-system) $D$ is a collection of subsets of $X$ that contains $X$ and is closed under complement and under countable unions of disjoint subsets. So your question is about the motivation of the Dynkin system. See here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4495085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Which interval is correct here? The equation $$2\sin^2(\theta)\, x^2-3\sin(\theta)\, x+1=0$$ where $\theta \in \left(\frac{\pi}{4},\frac{\pi}{2}\right)$ has one root lying in the interval $(0,1)$ $(1,2)$ $(2,3)$ $(-1,0)$ I know that if $f(a)$ and $f(b)$ are of opposite signs then at least one, or in general an odd number, of roots of the equation $f(x)=0$ lie between $a$ and $b$. But I am not able to use this piece of information here, maybe because of the two variables. I also tried substituting $y$ for $\sin(\theta)\, x$ but nothing good came out. Any help is greatly appreciated. EDIT: The given answer is $(1,2)$
Use the quadratic formula to solve for $x$: $$x_\pm = \frac{3\sin\theta\pm\sqrt{9\sin^2\theta-8\sin^2\theta}}{4\sin^2\theta} = \begin{cases}\frac{1}{\sin\theta} & \text{ or } \\ \frac{1}{2\sin\theta} & \end{cases}$$ because $\sin\theta\in\left( \frac{1}{\sqrt 2},1\right)$ for $\theta\in\left( \frac{\pi}{4},\frac{\pi}{2}\right)$. Thus, the two roots of $x$ lie in the intervals $$x_+ = \frac{1}{\sin\theta}\in \left( 1,\sqrt 2\right) \subseteq (1,2) \\ x_- = \frac{1}{2\sin\theta}\in \left( \frac{1}{2},\frac{1}{\sqrt 2}\right) \subseteq (0,1)$$ The claim that $x$ only has one root is wrong.
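A quick numerical cross-check of the factorization (my own addition, not part of the original answer): the quadratic factors as $(2\sin\theta\, x - 1)(\sin\theta\, x - 1)$, and sampling $\theta$ across $(\pi/4, \pi/2)$ confirms where the two roots land:

```python
import math

def roots(theta):
    s = math.sin(theta)
    # 2 s^2 x^2 - 3 s x + 1 = (2 s x - 1)(s x - 1), so the roots are:
    return 1 / s, 1 / (2 * s)

for k in range(1, 20):
    theta = math.pi / 4 + k * (math.pi / 4) / 20   # interior samples of (pi/4, pi/2)
    s = math.sin(theta)
    x_plus, x_minus = roots(theta)
    # both really are roots ...
    for x in (x_plus, x_minus):
        assert abs(2 * s * s * x * x - 3 * s * x + 1) < 1e-12
    # ... and they lie in the claimed intervals
    assert 1 < x_plus < 2 and 0 < x_minus < 1
```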
{ "language": "en", "url": "https://math.stackexchange.com/questions/4495411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove $\int_{-\infty}^\infty \phi(x )\Phi(x - T)dx = 1 - \Phi(T/\sqrt{2})$. When I compute the conditional density $f(x)$ of $X_1$ given $X_1 + X_2 > T$, where $X_1, X_2$ are independently and identically distributed standard normal random variables, $T$ is any real number, I obtained \begin{equation} f(x) = \frac{1}{1 - \Phi(T/\sqrt{2})}\phi(x)\Phi(x - T), \end{equation} where $\phi(x)$ and $\Phi(x)$ are density and distribution functions of $N(0, 1)$. This implies the integral identity \begin{equation*} \int_{-\infty}^\infty \phi(x)\Phi(x - T)\,dx = 1 - \Phi(T/\sqrt{2}). \tag{$*$} \end{equation*} However, except for the special case $T = 0$, I can't verify $(*)$ analytically. Is there a simple way to prove it? If yes, can the same approach be applied to evaluate $$\int_{-\infty}^\infty x\phi(x)\Phi(x - T)\,dx?$$
Let $f(y)=\int_{-\infty}^\infty\phi(x)\Phi(x-y)\,dx$, then $f(y)=1-\Phi(y/\sqrt2)$ is deduced easily from $$-f'(y)=\int_{-\infty}^\infty\phi(x)\phi(x-y)\,dx=\frac1{2\pi}e^{-y^2/4}\int_{-\infty}^\infty e^{-(x-y/2)^2}dx=\frac1{\sqrt2}\phi\left(\frac{y}{\sqrt2}\right)$$ and $f(y)\to0$ as $y\to+\infty$. The same way (or simply using $x\phi(x)=-\phi'(x)$ and IBP), $$\int_{-\infty}^\infty x\phi(x)\Phi(x-y)\,dx=\frac1{\sqrt2}\phi\left(\frac{y}{\sqrt2}\right).$$
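Both identities are easy to confirm numerically; here is a sketch of my own using the trapezoid rule (which is extremely accurate for smooth, rapidly decaying integrands on a wide interval) and `math.erf`:

```python
import math

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def integral(g, a=-12.0, b=12.0, n=24000):
    # plain trapezoid rule; the integrand is negligible outside [-12, 12]
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    total += sum(g(a + i * h) for i in range(1, n))
    return total * h

for T in (-1.5, 0.0, 0.7, 2.0):
    first = integral(lambda x: phi(x) * Phi(x - T))
    second = integral(lambda x: x * phi(x) * Phi(x - T))
    assert abs(first - (1 - Phi(T / math.sqrt(2)))) < 1e-9
    assert abs(second - phi(T / math.sqrt(2)) / math.sqrt(2)) < 1e-9
```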
{ "language": "en", "url": "https://math.stackexchange.com/questions/4495728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Permutation of a set Suppose I have the finite set $\{1,2,\dots,n\}$, where $n$ is odd, and two permutations $(a_1,a_2,\dots,a_n)$ and $(b_1,b_2,b_3,\dots,b_n)$ of this set. I have to prove that $$(a_1-b_1)^2(a_2-b_2)^2(a_3-b_3)^2\cdots(a_n-b_n)^2$$ is an even number. My findings/questions are as follows :- * *Every permutation of a set is either a cycle or a product of disjoint cycles, so these two permutations are each either a cycle or a product of disjoint cycles. *Can the product of the squared terms be a cycle of the set? *Or can the given product of squares be converted into one of the permutations, so that we can decide whether it is even or odd? Any clarification will be helpful for me
Since $n$ is odd, there are $\frac{n+1}{2}$ odd numbers among $1,2,\dots,n$. The set of indices $i$ for which $a_i$ is odd and the set of indices for which $b_i$ is odd both have size $\frac{n+1}{2}$, and $2\cdot\frac{n+1}{2}=n+1>n$, so the two sets must overlap: there is an index $i$ where $a_i$ and $b_i$ are both odd. For that $i$ the factor $a_i-b_i$ is even, and hence the whole product is even.
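A brute-force check for small $n$ (my own addition) — and a counterexample showing that oddness of $n$ is really needed:

```python
from itertools import permutations

def all_products_even(n):
    items = range(1, n + 1)
    for a in permutations(items):
        for b in permutations(items):
            prod = 1
            for ai, bi in zip(a, b):
                prod *= (ai - bi) ** 2
            if prod % 2 != 0:          # note: prod == 0 counts as even
                return False
    return True

assert all_products_even(3) and all_products_even(5)
# for even n the product can be odd: n = 2, a = (1,2), b = (2,1)
assert ((1 - 2) ** 2) * ((2 - 1) ** 2) % 2 == 1
```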
{ "language": "en", "url": "https://math.stackexchange.com/questions/4495883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to compute a Differential Equation With Dot Product. I have the following differential equation, that I would like to plug into a numerical integrator. $$ \frac{d \mathbf r}{dt} \cdot \hat{n}(\mathbf r) = 0 $$ Where $\mathbf r(t)$, is the trajectory of a particle and $\hat n$ is a normal vector to a surface. So this equation says that the velocity of the particle must be perpendicular to the normal of the surface. It would be useful to have $d\mathbf r / dt$ alone. To achieve this we impose other restrictions like: $$ \left|\left| \frac{d \mathbf r}{dt} \right|\right| = 1 $$ We get two equations: $$ \begin{pmatrix} n_x & n_y & n_z\\ u_x & u_y & u_z\\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} u_x \\ u_y \\ u_z \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ u_x + u_y + u_z \\ \end{pmatrix} $$ And we let the last equation be whatever we want, obtaining the following result: $$ \mathbf u(t) \equiv \frac{d \mathbf r}{dt} $$ $$ \mathbf u(t) =\begin{pmatrix} u_z - u_y & n_y - n_z & u_y n_z - u_z n_y\\ u_x - u_z & n_z - n_x & u_z n_x - u_x n_z\\ u_y - u_x & n_x- n_y & u_x n_y - u_y n_x \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ u_x + u_y + u_z \\ \end{pmatrix} = \begin{pmatrix} n_y -n_z + (u_x + u_y +u_z) (u_yn_z-u_zn_y) \\ n_z -n_x + (u_x + u_y +u_z) (u_yn_z-u_zn_y) \\ n_x -n_y + (u_x + u_y +u_z) (u_yn_z-u_zn_y) \\ \end{pmatrix} $$ $$ \mathbf u(t) = (\mathbf i + \mathbf j + \mathbf k) \times \hat n + ((\mathbf i + \mathbf j + \mathbf k) \cdot \mathbf u(t)) (\mathbf u(t) \times\hat n) \\ (\mathbf i + \mathbf j + \mathbf k) \equiv \mathbf e \\ \mathbf e \cdot \mathbf u(t) \equiv k(t) \\ \mathbf u(t) = (\mathbf e + k(t)\mathbf u(t)) \times \hat n \\ $$ This last equation is equivalent (I suppose), to the first one I proposed. But how would I plug this into a program. 
What I'm seeking is an equation of the form $$ \mathbf u_{i + 1} = F(\mathbf u_i, \mathbf r_i) $$ that relates the previous $\mathbf u(t)$ to the new one, but I do not have $\mathbf u(t)$ alone (basically because I can't isolate it). How am I supposed to do this?
There is not enough data to compute the trajectory. The equation you present just says that the velocity will always be tangent to the surface where the motion occurs. This must be supplemented with the actual equations of motion. A natural choice would be $$ m\, r''(t) = (0, 0, -g). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4496049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inverse in Modular Exponent Properties I have a question about modular exponentiation that I would be very grateful to get some help with. Assuming we have the values $x, a, r$ and the inverse of $a$ as $-a$ all under $mod \:N$. I know that the following property holds: $(({x^a} \:mod \: N)^r \: mod \: N)^{-a} \: mod \: N == x^{a \cdot r \cdot -a} \: mod \: N$ What I am trying to understand is if $(({x^a} \:mod \: N)^r \: mod \: N)^{-a} \: mod \: N == x^r \: mod \: N$ I thought this property would hold since $a\cdot-a \: mod \: N= 1$ but I may be missing something here since I get a different result when testing this. I think it may be because $a \cdot -a$ is only equal to $1$ when under the modular operation, which it never is in the exponent, but also I could be way off. Any help would be appreciated!
You are confusing yourself. Short answer: we do NOT have $a^{m} \equiv a^{m\bmod N} \pmod N$. Congruences do not carry over to exponents, because remainders are not preserved under exponentiation, and almost any simple example we try will fail. Take for instance $2\pmod 5$. Then $2^2 \equiv 4\pmod 5$, $2^3\equiv 8\equiv 3\pmod 5$, $2^4\equiv 16\equiv 1 \pmod 5$, $2^5\equiv 32 \equiv 2 \pmod 5$ and $2^6 \equiv 64 \equiv 4\pmod 5$. We do NOT have $32=2^5\equiv 2^{5\bmod 5}= 2^0= 1 \pmod 5$, nor do we have $64=2^6\equiv 2^{6\bmod 5} = 2^1 = 2 \pmod 5$. We just don't. But we do have something close. Fermat's little theorem: if $p$ is prime and $a$ is not a multiple of $p$, then $a^{p-1}\equiv 1 \pmod p$, and so $a^{k} \equiv a^{k\bmod (p-1)} \pmod p$. Hence if $k \equiv -a \pmod {p-1}$, then we do have $a^a \cdot a^r \cdot a^k \equiv a^{a+r-a\bmod {p-1}}\equiv a^r \pmod p$. For example, take the modulus $13$, so that exponents reduce modulo $12$. Note that $4 \equiv -8 \pmod{12}$, so we ought to have $a^8 \cdot a^r\cdot a^4\equiv a^r \pmod {13}$ whenever $\gcd(a,13)=1$. And we do. For example, if $a =3$ and $r = 5$, we have $3^4=81\equiv 3\pmod {13}$ and $3^8=6561\equiv 9\pmod{13}$, so $3^4\cdot 3^r \cdot 3^8 \equiv 3\cdot 3^r \cdot 9 \equiv 27\cdot 3^r\equiv 1\cdot 3^r\pmod {13}$. We also have Euler's theorem: if $\gcd(n,b) =1$, then $b^{\phi(n)} \equiv 1 \pmod n$, and then $b^{a\bmod{\phi(n)}}\cdot b^r \cdot b^{-a\bmod{\phi(n)}} \equiv b^r\pmod{n}$.
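The two points — exponents do not reduce modulo the modulus itself, but they do reduce modulo $p-1$ for a prime $p$ — can be checked directly (a small sketch of my own):

```python
p = 13  # prime modulus; by Fermat, exponents reduce mod p - 1 = 12

for a in range(1, p):              # every base in 1..12 is coprime to 13
    for r in range(12):
        # 8 + 4 = 12 ≡ 0 (mod 12), so the extra factors cancel
        assert (pow(a, 8, p) * pow(a, r, p) * pow(a, 4, p)) % p == pow(a, r, p)

# but exponents do NOT reduce mod the modulus itself:
assert pow(2, 5, 5) != pow(2, 5 % 5, 5)   # 2^5 = 32 ≡ 2, while 2^0 = 1
```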
{ "language": "en", "url": "https://math.stackexchange.com/questions/4496199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are the linear factors of $g.p$ are given by $(a_i, b_i)g^{-1}$? I am reading a part of the paper below that computes the semistable locus in case of $\operatorname{Sym^3}(\mathbb{C}^2).$ Here is the part of the paper I do not understand: Specifically, I do not understand the following: Why are the linear factors of $g.p$ are given by $(a_i, b_i)g^{-1}$?
As Maciej points out, the $G'$-action preserves factorizations, for the reasons discussed in my answer to your other question. Thus it suffices to understand how $G'$ acts on linear factors. A linear factor is an element of $\mathbb{C}[x,y]_1\cong\mathrm{Sym}^1(\mathbb{C}^2)\cong\mathbb{C}^2$. It's worth taking a second to parse out these identifications; make the implicit isomorphism between the left-hand side and right-hand side explicit as $T$. Then * *$T(x)=(1,0)$, *$T(y)=(0,1)$, and, more generally, *$T(ax+by)=(a,b)$ (by linearity). That is, $T$ identifies $(a,b)=\begin{bmatrix}a&b\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$. Now, the authors seem to identify ordered pairs with both row and column matrices. That is, if $A=\left[\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right]$ is a matrix, then $$A(u,v)=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}u\\v\end{bmatrix}$$ but $$(u,v)A=\begin{bmatrix}u&v\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}\text{.}$$ I…don't think this is good practice (how do you distinguish between inner and outer products?), but I didn't write the paper. With all that setup, the result is now a calculation: \begin{align*} g\cdot\begin{bmatrix}a&b\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}&=\begin{bmatrix}a&b\end{bmatrix}\left(g^{-1}\begin{bmatrix}x\\y\end{bmatrix}\right) \\ &=\left(\begin{bmatrix}a&b\end{bmatrix}g^{-1}\right)\begin{bmatrix}x\\y\end{bmatrix} \\ &=((a,b)g^{-1})\begin{bmatrix}x\\y\end{bmatrix} \\ &=(a,b)g^{-1} \end{align*} where the $T$-identification appears in the last step.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4496314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A stopping time problem for a random walk with transition probabilities dependent on states The Problem: In a one-dimensional random walk, at position $n > 0$, the probability of moving to $(n-1)$ is $\frac{n+2}{2n+2}$, and the probability of moving to $(n+1)$ is $\frac{n}{2n+2}$. Starting at position $n$, what is the average time to reach position $0$? Question 1: Let $v(n)$ be the expected time of moving from $n$ to $0$. Then we have the following recursive equation: $$v(n) = 1 + \frac{n+2}{2n+2} v(n-1) + \frac{n}{2n+2} v(n+1)$$ And obviously $v(0) = 0$, but to solve the equation we also need to know $v(1)$, which I cannot compute. So my first question is: How to compute $v(1)$? p.s. Using random simulation on a computer, I have estimated that $v(1) = 3$. Then the recursive equation can be solved to obtain $v(n) = n(n+2)$. But I do not know how to compute $v(1)$ theoretically. Question 2: Another way to solve this problem is to use martingale. Let $X_t$ be the random position at time $t$, and $X_0 = n$. Let $$Y_t = (X_t+1)^2 + t$$ then $\{Y_t, t \ge 0\}$ is a martingale with respect to $\{X_t, t \ge 0\}$. Let $T$ be the first visit time from $n$ to $0$ (stopping time). If the optional stopping theorem is applicable, then $$E(Y_T)= (0+1)^2+E(T) = E(Y_0) = (n+1)^2 + 0$$ So we have $$E(T) = n(n+2)$$ However, for the optional stopping theorem to be applicable, I have to firstly prove $E(T) < \infty$. So my second question is: How to prove the expected stopping time $E(T) < \infty$? Thanks
You can introduce stopping times $T_N=\inf \{ t : X_t \in \{ 0,N \} \}$. Then using optional stopping you have $(n+1)^2=P(X_{T_N}=0)+(N+1)^2 P(X_{T_N}=N) + E[T_N]$. Now you can proceed in one of two ways: * *Calculate $L:=\lim_{N \to \infty} (N+1)^2 P(X_{T_N}=N)$, in which case you get $E[T]=(n+1)^2-P(X_T=0)-L=n(n+2)-L$. It turns out that this $L$ is actually zero, but the fact that $P(X_{T_N}=N)=o(N^{-2})$, which is required to prove this, is not immediately obvious. *Note that by taking $N \to \infty$, the above equation implies $E[T]<\infty$, because the LHS is finite and the RHS consists only of nonnegative terms. Therefore you can use the optional stopping argument that you set up.
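A seeded Monte Carlo run (my own sketch, with a deliberately loose tolerance) agrees with $E[T] = n(n+2)$:

```python
import random

def hitting_time(n, rng):
    """Steps until the walk started at position n first reaches 0."""
    t = 0
    while n > 0:
        if rng.random() < (n + 2) / (2 * n + 2):   # move to n - 1
            n -= 1
        else:                                       # move to n + 1
            n += 1
        t += 1
    return t

rng = random.Random(0)
for n in (1, 2, 3):
    trials = 50_000
    est = sum(hitting_time(n, rng) for _ in range(trials)) / trials
    assert abs(est - n * (n + 2)) < 1.0   # exact values: 3, 8, 15
```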
{ "language": "en", "url": "https://math.stackexchange.com/questions/4496596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Finding the rank of a matrix whose entries depend on a parameter Determine the rank of $A$ for all values of the parameter $x$. Use the relevant theory to support your answer. $$A = \begin{bmatrix}x&0&x^2-2\\0&1&1\\-1&x&x-1\end{bmatrix}$$ That's the full question. I have been trying to manipulate it into echelon form but I'm not making any progress. Any help would be appreciated.
The first two columns are linearly independent for all values of $x$ (can you see why?). So the question remains about the third. When does there exist constants $a, b$ such that $a(x,0, -1) + b(0, 1, x) = (x^2-2, 1, x-1)$? This amounts to solving the system \begin{align} ax &= x^2-2 \\ b &= 1 \\ bx-a &= x-1 \end{align} Substituting the second equation into the third yields $x-a = x-1 \implies a = 1$. Hence the first equation reduces to $x^2-x-2 = (x+1)(x-2) = 0$. This means that for $x = -1$ and $x = 2$, the third column is a linear combination of the two others, meaning that the matrix has rank $2$. Otherwise, all columns are linearly independent and the rank is $3$.
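An alternative cross-check via the determinant (my own addition, consistent with the answer): expanding along the first row gives $\det A = x^2 - x - 2 = (x+1)(x-2)$, which vanishes exactly at the two special values.

```python
def det_A(x):
    # cofactor expansion of det [[x, 0, x^2 - 2], [0, 1, 1], [-1, x, x - 1]]
    # along the first row: x*(1*(x-1) - 1*x) + (x^2 - 2)*(0*x - 1*(-1))
    return x * ((x - 1) - x) + (x * x - 2) * 1

for x in range(-5, 6):
    assert det_A(x) == (x + 1) * (x - 2)      # det factors as (x+1)(x-2)

assert det_A(-1) == 0 and det_A(2) == 0      # rank 2 at x = -1 and x = 2
assert det_A(0) != 0                          # rank 3 for generic x
```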
{ "language": "en", "url": "https://math.stackexchange.com/questions/4496740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove this inequality about a family of sets with a certain property Let $A_1,\cdots,A_m\subset[n]$ be such that if $A_i\cap A_j=\emptyset$, then $A_i\cup A_j=[n]$. Prove that $m\leq2^{n-1}+\binom{n-1}{\lfloor (n-2)/2\rfloor}$. I already know that we can take a maximal family of intersecting sets, say $B_1,\cdots,B_k$ (it is obvious that $k\leq2^{n-1}$). And because it is maximal, by using the condition the $A_i$ satisfy, one obtains that each $A_i$ is either $B_j$ or $B_j^{\,c}$ for some $j$. Thus we can denote $A_1,\cdots,A_m$ as $B_1,\cdots,B_k,B_1^c,\cdots,B_s^c$. Note that $B_1^c,\cdots,B_s^c,B_1,\cdots,B_s$ is an antichain, so by Sperner's lemma we have $2s\leq\binom{n}{\lfloor n/2\rfloor}$. Thus we get an inequality which is close to the inequality we want to prove. But I don't know what to do after this.
I've now figured out how to prove it, and this is a brief answer. We can use the following theorem. Theorem (Bollobás): If $C_1,\cdots,C_m$ is an intersecting antichain in $[n]$ such that $\max_i |C_i|\leq\frac{n}{2}$, then $\sum_{i=1}^m\frac{1}{\binom{n-1}{|C_i|-1}}\leq1$. The proof of the theorem is given by considering cyclic permutations, quite similar to the proof of the Erdős–Ko–Rado theorem. We can use this theorem to prove the desired inequality. By swapping some $B_i$ and $B_i^c$, we can assume $\max\{|B_1|,\cdots,|B_s|\}\leq\frac{n}{2}$. Applying the theorem, we obtain the desired inequality.
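For what it's worth, the bound can be brute-forced for $n = 3$, where $2^{n-1} + \binom{n-1}{\lfloor (n-2)/2 \rfloor} = 5$ and is attained, e.g. by all sets containing $1$ together with $\{2,3\}$ (a check of my own, not part of the proof):

```python
from itertools import combinations
from math import comb

n = 3
universe = frozenset(range(1, n + 1))
all_sets = [frozenset(c) for k in range(n + 1)
            for c in combinations(universe, k)]

def has_property(family):
    # disjoint members must cover [n]
    return all(A | B == universe
               for A, B in combinations(family, 2) if not (A & B))

best = 0
for m in range(len(all_sets), 0, -1):     # search largest families first
    if any(has_property(fam) for fam in combinations(all_sets, m)):
        best = m
        break

assert best == 2 ** (n - 1) + comb(n - 1, (n - 2) // 2)   # = 5 for n = 3
```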
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$L^{p}$ convergence equivalent condition I have to show that for $p\in[0,\infty)$, $f_{n},f\in L^{p}(\mathbb{R})$, (i) $f_n\rightarrow f$ in $L^{p}([-N,N])$ for all $N\in \mathbb{N}$ and (ii) $\lvert\lvert f_{n}\rvert\rvert \rightarrow \lvert\lvert f\rvert\rvert$ implies $f_{n}\rightarrow f$ in $L^{p}(\mathbb{R})$. My approach was to first deconstruct $\lvert\lvert f_{n}-f\rvert\rvert_{p}^{p}$ as $$ \lvert\lvert f_{n}-f\rvert\rvert_{p}^{p}=\int_{-\infty}^{-N}\lvert f_{n}-f\rvert^{p}+\int_{-N}^{N}\lvert f_{n}-f\rvert^{p}+\int_{N}^{\infty}\lvert f_{n}-f\rvert^{p}. $$ (The integral is taken w.r.t. the Lebesgue measure). By (i) we know that the above decomposition is valid for all $N\in \mathbb{N}$ and as we take the limit $n\rightarrow \infty$, the middle term vanishes. The problem is how to get an upper bound $\epsilon_{n}$ for the left and right integral s.t. $\epsilon_{n}\rightarrow 0$ as $n\rightarrow \infty$. I don't see how we could use (ii) for that. I tried to show that for all $n$ sufficiently large we get an $N^{*}$ such that the left and right integral are bounded and that this bound approaches $0$ as $n \rightarrow \infty$, but didn't succeed. Could anyone help?
We can show it using the following facts. * *If $\left(g_n\right)_{n\geqslant 1}$ is a sequence such that $g_n\to g$ almost everywhere and $\int \lvert g_n\rvert^p\to\int \lvert g\rvert^p$, then $\int \lvert g_n-g\rvert^p\to 0$. *If $(f_n)_{n\geqslant 1}$ satisfies the conditions of the opening post, then there exists a subsequence $(g_n)=(f_{\varphi(n)})$, where $\varphi\colon\mathbb N\to\mathbb N$ is increasing, such that $\int \lvert g_n-f\rvert^p\to 0$. In order to show fact 1, we can apply Fatou's lemma to the sequence $(h_n)$ given by $$ h_n= \begin{cases} \lvert g\rvert^p+\lvert g_n\rvert^p-\lvert g_n-g\rvert^p&\mbox{ if }0<p\leqslant 1,\\ 2^{p-1}\lvert g\rvert^p+2^{p-1}\lvert g_n\rvert^p-\lvert g_n-g\rvert^p&\mbox{ if } p\geqslant 1. \end{cases} $$ For fact 2, we use the fact that convergence in $L^p$ implies the almost everywhere convergence of a subsequence, combined with a diagonal extraction argument. To conclude, fact 2 applied to a subsequence of $(f_n)$, say $(f_{\psi(n)})$, shows that there exists a further subsequence $(f_{\varphi\circ \psi(n)})$ such that $\int\left\lvert f_{\varphi\circ \psi(n)}-f\right\rvert^p\to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Is the Lambert W function in the complex domain, $(x+iy)e^{x+iy}=a+ib$, solvable? I am solving, by a method of approximation, the complex Lambert $W$ function equation $(x+iy)e^{x+iy}=a+ib$ for $x$ and $y$, when real values of $a$ and $b$ are given and $i=\sqrt{-1}$. I want to know whether this type of equation has been solved earlier. If the answer is in the affirmative, I pray I may kindly be informed of a reference or any other source.
To stay with the Lambert function. The equations to be solved are $$-a-e^x y \sin (y)+e^x x \cos (y)=0 \tag 1$$ $$-b+e^x x \sin (y)+e^x y \cos (y)=0 \tag 2$$ Using $(1)$ $$x=W\left(a e^{-y \tan (y)} \sec (y)\right)+y \tan (y)\tag 3$$ Plug this in $(2)$ and you need to solve numerically for $y$ $$\frac{a y \sec ^2(y)}{W\left(a e^{-y \tan (y)} \sec (y)\right)}+a \tan (y)-b=0 \tag 4$$ Trying with $a=\pi$ and $b=e$, Newton's method gives $$y=0.394401045253167$$ from which $$x=1.194608982218346$$ Checking, the lhs is $$3.14159265358979 + 2.71828182845905\,i$$ Edit Discarding the Lambert function and using $$x=\frac{a \cos (y)+b \sin (y)}{b \cos (y)-a \sin (y)}\,y$$ we need to find the zero of the function $$f(y)=\frac{y}{b \cos (y)-a \sin (y)} \exp\Bigg[\frac{a \cos (y)+b \sin (y)}{b \cos (y)-a \sin (y)} \,y\Bigg]-1\tag 5$$ which will have a solution between $0$ and $\tan ^{-1}\left(\frac{b}{a}\right)$. Because of this a priori known upper bound, it could be interesting to let $y=2 \tan ^{-1}(t)$ and to look for the zero of $$G(t)={2 \left(t^2+1\right) \tan ^{-1}(t)}\,\,\exp\Bigg[2\frac{at^2-2bt-a }{bt^2+2at-b }\tan ^{-1}(t)\Bigg]+(b t^2+2 a t-b)\tag 6$$ and the solution will easily be bracketed before starting an iterative process. $$\min \Bigg[0,\tan \left(\frac{1}{2} \tan ^{-1}\left(\frac{b}{a}\right)\right)\Bigg] < t_* <\max \Bigg[0,\tan \left(\frac{1}{2} \tan ^{-1}\left(\frac{b}{a}\right)\right)\Bigg]$$
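Alternatively, one can iterate Newton's method directly on $h(w) = we^w - (a+ib)$ in complex arithmetic (a sketch of my own, started from $\log z$, which for moderate $z$ converges to the principal branch); for $a = \pi$, $b = e$ it reproduces the values quoted above:

```python
import cmath
import math

def solve_wew(z, max_iter=100, tol=1e-14):
    """Newton's method for w * exp(w) = z, started from log(z)."""
    w = cmath.log(z)
    for _ in range(max_iter):
        e = cmath.exp(w)
        step = (w * e - z) / ((1 + w) * e)   # h(w) / h'(w)
        w -= step
        if abs(step) < tol:
            break
    return w

z = complex(math.pi, math.e)
w = solve_wew(z)
assert abs(w * cmath.exp(w) - z) < 1e-10
assert abs(w.real - 1.194608982218346) < 1e-9
assert abs(w.imag - 0.394401045253167) < 1e-9
```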
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent Problem. Prove that for $v_1, v_2, v_3 \in \mathbb{R}^3$, if $\{v_1+v_2, v_2+v_3, v_1+v_3\}$ are linearly independent then $\{v_1, v_2, v_3\}$ are linearly independent. What I tried: Let $m,n,p \in \mathbb{R}$ be such that $$mv_1+nv_2+pv_3 = 0\;(\star)$$ From the hypothesis we know that if $a,b,c \in \mathbb{R}$ with $a(v_1+v_2)+b(v_2+v_3)+c(v_1+v_3) = 0$, then $a=b=c=0$. First, every element $\begin{pmatrix}m\\n\\p \end{pmatrix} \in \mathbb{R}^3$ can be uniquely written in terms of $A = \biggl\{\begin{pmatrix}1\\0\\1 \end{pmatrix},\begin{pmatrix}1\\1\\0 \end{pmatrix},\begin{pmatrix}0\\1\\1 \end{pmatrix}\biggr\}$ because $A$ is a basis of $\mathbb{R}^3$, so we can let $\begin{cases} m=a+b \\ n=b+c \\ p=a+c \end{cases}$. So, from $$ \begin{align} (\star) \implies (a+b)v_1 + (b+c)v_2+(a+c)v_3=0 \\ \iff av_1+bv_1+bv_2+cv_2+av_3+cv_3=0 \\ \iff a(v_1+v_3)+b(v_1+v_2)+c(v_2+v_3)=0 \\ \implies a=b=c=0 \implies m=n=p=0 \end{align}$$ $\implies \{v_1, v_2, v_3\}$ are linearly independent. Please correct me if I am wrong. Thanks!
Another approach using determinants: Let's notice that we can show the tuple as a matrix $$ \left\{ v_{1}+v_{2},v_{2}+v_{3},v_{1}+v_{3}\right\} =\left[\begin{array}{ccc} v_{1} & 0 & v_{1}\\ v_{2} & v_{2} & 0\\ 0 & v_{3} & v_{3} \end{array}\right] $$ Because the columns are independent we know that the determinant is non zero. $$ \left|\begin{array}{ccc} v_{1} & 0 & v_{1}\\ v_{2} & v_{2} & 0\\ 0 & v_{3} & v_{3} \end{array}\right|=v_{1}v_{2}v_{3}+v_{1}v_{2}v_{3}=2v_{1}v_{2}v_{3}\ne0 $$ Let us now work with $ A=\left[\begin{array}{ccc} v_{1} & 0 & 0\\ 0 & v_{2} & 0\\ 0 & 0 & v_{3} \end{array}\right]\Rightarrow\left|A\right|=\left|\begin{array}{ccc} v_{1} & 0 & 0\\ 0 & v_{2} & 0\\ 0 & 0 & v_{3} \end{array}\right|=v_{1}v_{2}v_{3}\ne0 $
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Folland Advanced Calculus Ex. 2.5.7 Suppose that the variables $E$, $T$, $V$, and $P$ are related by a pair of equations, $f(E,T,V,P)=0$ and $g(E,T,V,P)=0$, that can be solved for any two of the variables in terms of the other two, and suppose that the differential equation $\partial_VE-T\partial_TP+P=0$ is satisfied when $V$ and $T$ are taken as the independent variables. Show that $\partial_PE+T\partial_TV+P\partial_PV=0$ when $P$ and $T$ are taken as the independent variables. If we put $\phi(V,T)=(E(V,T),T,V,P(V,T))$ and $F=(f,g)$, then $(F\circ\phi)(V,T)=0$ and hence $F'(\phi(V,T))\phi'(V,T)=0$. How to proceed any further?
Taking $V$ and $T$ as independent variables means that $E$ and $P$ are determined as functions of $V$ and $T$. Let $E=\eta(V, T)$ and $P=\zeta(V,T)$. Then the given $\partial_VE-T\partial_TP+P=0$ becomes $\partial_1\eta -T\partial_2\zeta+P=0\quad \ldots (1)$ Let us now take $P, T$ as independent variables. Then differentiating $E=\eta(V, T)$ and $P=\zeta(V,T)$ w.r.t. $P$ yields $$\partial_PE =(\partial_1\eta )(\partial_PV)$$ and $$ 1=(\partial_1\zeta )(\partial_PV) $$ Hence $$\partial_1\eta =\frac{\partial_PE}{\partial_PV}$$ and $$\partial_1\zeta =\frac{1}{\partial_PV}$$ Again, differentiating $P=\zeta(V,T)$ w.r.t. $T$ yields $$0 =(\partial_1\zeta )(\partial_TV)+\partial_2\zeta$$ Hence $\partial_2\zeta=-(\partial_1\zeta )(\partial_TV)=-\frac{\partial_TV}{\partial_PV}$. Now substituting the values of $\partial_1\eta, \partial_2\zeta$ in $(1)$, we have $$0=\frac{\partial_PE}{\partial_PV}-T \left( -\frac{\partial_TV}{\partial_PV}\right)+P$$ and multiplying through by $\partial_PV$ gives $$\partial_PE +T \partial_TV+P\partial_PV=0.$$
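As a concrete sanity check (my own addition): the toy pair $E = T^2$, $P = T/V$ satisfies the hypothesis $\partial_V E - T\,\partial_T P + P = 0$, and inverting gives $V = T/P$ when $P, T$ are independent. Central finite differences then confirm the derived identity:

```python
def E(P, T):
    return T * T          # for this toy gas, E depends on T only

def V(P, T):
    return T / P          # inverted from P = T / V

def dP(f, P, T, h=1e-6):
    return (f(P + h, T) - f(P - h, T)) / (2 * h)

def dT(f, P, T, h=1e-6):
    return (f(P, T + h) - f(P, T - h)) / (2 * h)

for P, T in [(1.0, 2.0), (0.5, 3.0), (2.5, 1.3)]:
    residual = dP(E, P, T) + T * dT(V, P, T) + P * dP(V, P, T)
    assert abs(residual) < 1e-6
```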
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of Pre-Additive Topological categories Recall the notion of pre-addtive category and that of topological category (see Chapter 21 of The Joy of Cats). Well, my questions are: * *Does there exist an example of pre-additive topological category? *If the answer to the previous question is affirmative, can you provide some reference on the general study of topological pre-additive categories, please? Please, when answering, provide exhaustive point-by-point answers. P.S. I'm extremely grateful to the user that provided in comments an answer concerning the fact that the category of binary relations is semiaddtive but not pre-additive.
If $U : C \to \text{Set}$ is a topological concrete category over $\text{Set}$ then in particular $U$ must have both a left and right adjoint (the "discrete topology" and "indiscrete topology") and so must preserve both limits and colimits; moreover (as stated by the nLab) $C$ is complete and cocomplete. But if $C$ is a pre-additive category which has either all limits or all colimits then it must have a zero object (an object which is both initial and terminal), and $U$ applied to this object must be both the initial and terminal object of $\text{Set}$, which is impossible. So we conclude that no topological category over $\text{Set}$ can be pre-additive. The natural question here is to ask for pre-additive topological categories over $\text{Ab}$ or similar, and here there are examples like topological abelian groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4497917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's the measure of angle $x$ in the triangle below? Any point $D$, which is interior to the triangle $\triangle ABC$, determines vertex angles having the following measures: $m(BAD) = x,$ $m(ABD) = 2x,$ $m(BCD) = 3x,$ $m(ACD)= 4x,$ $m(DBC) = 5x.\\$ Find the measure of $x$. (Answer: $10^\circ$) My progress: $$\dfrac{AB}{AD}=\dfrac{\sin(180^\circ−3x)}{\sin(2x)},\qquad \dfrac{AC}{AD}=\dfrac{\sin(11x)}{\sin(4x)}$$ Since angles $B$ and $C$ both measure $7x$, we have $AB=AC$, so $$\dfrac{\sin(3x)}{\sin(2x)}=\dfrac{\sin(11x)}{\sin(4x)}$$ but it is a complex equation to solve. Is there another way?
$\angle CDB=5x<\pi\implies x<\dfrac{\pi}{5},$ so $0<x<\dfrac{\pi}{5}$. $$\dfrac{\sin(3x)}{\sin(2x)}=\dfrac{\sin(11x)}{\sin(4x)}\\ \dfrac{\sin(3x)}{\sin(2x)}=\dfrac{\sin(11x)}{2\sin(2x)\cos(2x)}\\ 2\sin(3x)\cos(2x)=\sin(11x)\\ \sin(5x)+\sin(x)=\sin(11x)\\ \sin(5x)=\sin(11x)-\sin(x)=2\cos(6x)\sin(5x)\\ \cos(6x)=\dfrac{1}{2}\\ 6x=\dfrac{\pi}{3}\\ x=\dfrac{\pi}{18}=10^\circ$$
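A quick numerical sanity check (not part of the original answer) confirms that $x=\pi/18$ satisfies the trigonometric equation derived from $AB=AC$ and lies in the admissible range:

```python
import math

x = math.pi / 18  # the claimed solution

# The equation obtained from AB = AC in the question:
lhs = math.sin(3 * x) / math.sin(2 * x)
rhs = math.sin(11 * x) / math.sin(4 * x)

assert abs(lhs - rhs) < 1e-12
assert 0 < x < math.pi / 5          # within the required range
print(round(math.degrees(x), 6))    # 10.0
```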
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Computing the value of $(x+y)^4$ if $x^4+y^4=5$ and $x^2+xy+y^2=10$ Let $x^4+y^4=5$ and $x^2+xy+y^2=10.$ Find $(x+y)^4.$ First, I tried expanding $(x+y)^4$ using the binomial theorem to get $5+4x^3y+6x^2y^2+4xy^3,$ so simplifying I got $5+4xy(x^2+y^2)+6(xy)^2.$ Then I rearranged the given equation to get $x^2+y^2=10-xy,$ so the expansion becomes $5+4xy(10-xy)+6(xy)^2.$ I further simplified to get $2x^2y^2+40xy+5,$ but I'm not sure how to continue from here. Factoring this expression doesn't seem to help. May I have some help? Thanks in advance.
We have $95 = (x^2+xy+y^2)^2 - (x^4+y^4) = 2x^3 y + 2xy^3 + 3x^2 y^2,$ which you may recognize. Any further and I'd just give away the entire solution.
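As a spot check of the identity in this answer (not part of the original), the expansion $(x^2+xy+y^2)^2 - (x^4+y^4) = 2x^3y + 2xy^3 + 3x^2y^2$ can be verified numerically at a few sample points:

```python
# Spot-check: (x^2+xy+y^2)^2 - (x^4+y^4) == 2x^3*y + 2x*y^3 + 3x^2*y^2
for x, y in [(1.3, -0.7), (2.0, 3.5), (-1.1, 0.4)]:
    lhs = (x**2 + x*y + y**2)**2 - (x**4 + y**4)
    rhs = 2*x**3*y + 2*x*y**3 + 3*x**2*y**2
    assert abs(lhs - rhs) < 1e-9
print("identity holds at the sample points")
```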
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove $2 x \ln(x) + 1 - x^{2} < 0$, for $x > 1$ I am trying to rigorously show the following bound. \begin{equation} 2 x \ln(x) + 1 - x^{2} < 0, \text{ for $x > 1$} \end{equation} Based on plots, it appears to hold for all $x > 1$. My concern with showing such bounds is dealing with the boundary points, in this case $x = 1$, which is not in the support. I typically would evaluate at this point (the function takes the value $0$ there) and show that the function is decreasing for all $x > 1$, which should conclude the proof. To that end, we have that $f^{\prime}(x) = -2 x + 2 \ln(x) + 2$ and $f^{\prime \prime}(x) = -\frac{2(x - 1)}{x}$. I'm not sure this is valid here though, since $x = 1$ is not a valid input under our constraint $x > 1$. I feel that one needs to be delicate in dealing with $x = 1$. Could anyone please demonstrate this bound rigorously for my learning? I'd also appreciate any elementary methods (not necessarily using calculus) to show this to aid my understanding.
Let $f(x)=2x\ln x+1-x^2$, so $f'(x)=2\ln(x)+2-2x$. Since $f''(x)=\frac{2}{x}-2<0$ on $(1,\infty)$ and $f'(1)=0$, we have $f'(x)<0$ for $x\in (1,\infty)$. Assume there is a point $x_1$ on $(1,\infty)$ such that $f(x_1)=0$. Since $f(1)=0$, by the Mean Value Theorem there exists $x_2\in (1,x_1)$ such that $f'(x_2)=0$, which contradicts the fact that $f'(x)<0$ for all $x\in (1,\infty)$. This means $f(x)$ never crosses the $x$-axis on $(1,\infty)$; since $f$ is continuous and, e.g., $f(2)=4\ln 2-3<0$, it stays strictly below the $x$-axis there. Therefore, $f(x)<0$ strictly for $x>1$.
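A quick numerical check of the claim (not part of the original answer): $f(1)=0$ and $f$ is strictly negative at sample points past $x=1$.

```python
import math

def f(x):
    return 2 * x * math.log(x) + 1 - x * x

assert f(1.0) == 0.0                      # the boundary value
for x in [1.001, 1.5, 2.0, 10.0, 1000.0]:
    assert f(x) < 0                       # strictly negative for x > 1
print(f(2.0))  # 4*ln(2) - 3, about -0.2274
```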
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to derive the weak solution formulation when $f\in H^{-1}$? Consider the simplest boundary value problem: $$\Delta u=f\quad\text{in }\Omega,$$ $$u=0\quad\text{on }\partial\Omega.$$ In Evans we always assume $f\in L^{2}(\Omega)$ and integrate by parts in $v\Delta u=fv$, where $v$ is a test function. My question is: how do we derive the weak formulation when $f\in H^{-1}$? I know the right-hand side should be understood as $\langle f,v\rangle=\int f^{0}v+\sum f^{i}v_{x^{i}}.$ But how can I understand the left-hand side $\langle\Delta u,v\rangle$? In order to integrate by parts it must just be the integral $\int \Delta u\, v$, but I cannot see why.
The notation $f \in H^{-1}$ means that $f$ is in the dual space of $H_0^1$, i.e., $f$ is a linear mapping of $v \in H_0^1$ to (in this context most likely) $\mathbb R$. Thus, it might make more sense in this case to write $f(v)$. So the space from which $v$ stems is already fixed to $H^1_0$. While you are in theory still free to choose the space $U$ for $u \in U$, it makes most sense to pick the same as for $v$, as then the standard theory (Lax-Milgram, error estimates, ...) applies. Otherwise, you have to start figuring things out on your own, based on how the spaces $H^1_0$ and $U$ are related to one another. For $u \in H^1_0$, you then obtain the standard LHS $$ \int_\Omega \Delta u v \,\mathrm d x = \int_\Omega \nabla \cdot (v \nabla u) - \nabla v \cdot \nabla u \,\mathrm d x \overset{\text{Divergence Theorem \& B.C.}}{=} -\int_\Omega \nabla v \cdot \nabla u \,\mathrm d x $$
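Putting the two halves together (a summary, not part of the original answer), the weak formulation for $f \in H^{-1}(\Omega)$ then reads:

```latex
\text{find } u \in H_0^1(\Omega) \text{ such that }\quad
- \int_\Omega \nabla u \cdot \nabla v \,\mathrm{d}x
\;=\; \langle f, v \rangle_{H^{-1} \times H_0^1}
\quad \text{for all } v \in H_0^1(\Omega).
```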
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computing the integral of a trig function under a square root How can we solve this integral $$\int \sqrt{\csc^2x -2}\, \mathrm{d}x$$ My idea was substituting $\csc^2x=2\csc^2\theta$. Then the integral became $$\sqrt{2}\int \frac{\csc^2\theta-1}{\sqrt{2\csc^2\theta-1}} \mathrm{d}\theta$$ after a few simplifications. I don't know how to proceed at this point. If I try breaking up the numerator, I would still be left with an equivalent question of $$\int \sqrt{2\csc^2\theta-1}\,\mathrm{d}\theta$$ and $$\int \frac{\mathrm{d}\theta}{\sqrt{2\csc^2\theta-1}}$$ Could someone provide a cleaner approach to this problem or give new insights on how to further simplify the above integrals?
If you're asking how to simplify the bottom integral's integrand, you can note that $\csc{(\theta)} = \frac{1}{\sin{(\theta)}}$ and $1 - \cos^2{(\theta)} = \sin^2{(\theta)}$ to get $$\frac{1}{\sqrt{2\csc^{2}\left(\theta\right)-1}} = -\frac{-\sin\left(\theta\right)}{\sqrt{\cos^{2}\left(\theta\right)+1}},$$ written so that the substitution $u=\cos(\theta)$, $\mathrm{d}u=-\sin(\theta)\,\mathrm{d}\theta$ is in plain view. I think you can take it from here.
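A quick numerical check of this simplification (not part of the original answer), using the identity $2\csc^2\theta - 1 = \frac{1+\cos^2\theta}{\sin^2\theta}$, valid for $\theta\in(0,\pi)$ where $\sin\theta>0$:

```python
import math

# For theta in (0, pi): 1/sqrt(2*csc(t)^2 - 1) == sin(t)/sqrt(1 + cos(t)^2)
for t in [0.3, 1.0, 2.5]:
    lhs = 1 / math.sqrt(2 / math.sin(t)**2 - 1)
    rhs = math.sin(t) / math.sqrt(1 + math.cos(t)**2)
    assert abs(lhs - rhs) < 1e-12
print("simplification checks out at the sample points")
```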
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Finding pair of natural numbers such that when used in two fractions, results in natural numbers Determine all pairs of natural numbers (a, b) such that $\frac{3a + 8b + 2}{10a + 2b + 1}$ and $\frac{8a + b + 3}{2a + 7b + 3}$ are also natural numbers (including 0). This implies that gcd$(3a + 8b + 2, 10a + 2b + 1) = 10a + 2b + 1$ so let $3a + 8b + 2 = m(10a + 2b + 1)$. Similarly, gcd$(8a + b + 3, 2a + 7b + 3) = 2a + 7b + 3$ so let $8a + b + 3 = n(2a + 7b + 3)$. Then I tried setting up a system of equations to find $m$ and $n$, but it yielded no solutions. It could be that there are no solutions, but I'm thinking that I did something wrong. Does anybody know how to approach this question?
Both fractions are positive for all $a,b\in\Bbb{N}$. Therefore we can deduce that they are $\ge1$. So $3a+8b+2\ge10a+2b+1$ implying that $6b+1\ge7a$. But we also have $8a+b+3\ge2a+7b+3$, implying that $a\ge b$. Together these imply that $b\in\{0,1\}$ (do you see why?) and I'm sure you can do the rest. There are exactly two pairs $(a,b)$ meeting the requirements.
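A brute-force search (not part of the original answer, and assuming "natural" includes $0$ as the question states) confirms that exactly two pairs work:

```python
# Brute-force search over a modest range; the inequality argument above
# shows no larger solutions exist
solutions = []
for a in range(0, 200):
    for b in range(0, 200):
        n1, d1 = 3*a + 8*b + 2, 10*a + 2*b + 1
        n2, d2 = 8*a + b + 3, 2*a + 7*b + 3
        if n1 % d1 == 0 and n2 % d2 == 0:
            solutions.append((a, b))
print(solutions)  # [(0, 0), (1, 1)]
```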
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding algebra used in $\varepsilon$-$\delta$ definition I would like to preface my questions by saying that I understand how the $\epsilon$-$\delta$ definition of a limit works. The definition formalises the intuition of a graph approaching a certain $f(x)$ value: for every $x$ in a region less than $\delta$ units away from $a$ (the limiting value) but greater than $0$ units away, the value $f(x)$ lies within $\epsilon$ of the limit. Therefore, when we get as close to $x=a$ as we want and the $f(x)$ values in that region stay within $\epsilon$ of the limit, then $\lim_{x \to a}f(x)$ truly exists. However, the algebra always stumps me for all functions other than linear ones, where we have to limit certain factors to find an initial delta. Take for example the following:$$\mathit{Prove} \lim_{x \to 1}(x^2+9) = 10$$ This is the process I would use to prove the limit:$$\forall \epsilon \gt 0, \exists \delta : 0< |x-a|<\delta \implies |f(x)-L|<\epsilon$$ $$\therefore |x^2+9-10|<\epsilon$$ $$|x^2-1|<\epsilon$$ $$|x-1||x+1|<\epsilon$$ At this point, I get stumped. I know to assume $|x-1| \lt 1 $, but I am not sure why we are allowed to do that in the first place. So, I just assumed it was for convenience purposes since $\epsilon$ cannot be expressed in terms of $x$. But, obviously that is not an explanation of why specifically it is done. Continuing: $$|x-1| \lt 1 \implies 0\lt x \lt 2$$ $$1\lt x+1 \lt 3 \implies |x+1|\lt3$$ $$\therefore |x-1|\cdot3 \lt \epsilon \implies |x-1|\lt \frac{\epsilon}{3}$$ So, take $\delta = \min\{1,\frac{\epsilon}{3}\}$: $$|x-1|<\delta \implies |x-1||x+1| \lt 3\delta \le 3 \cdot\frac{\epsilon}{3}=\epsilon$$ This last part too confuses me. I do not understand why we use the $\min$ notation here and what purpose it serves in the final part where we actually prove the limit exists.
It's not so much a question of whether we're "permitted" to assume that $|x-1| < 1$, as much as we're looking for a narrow enough $\delta$-window so that we put $f$ into the appropriate $\varepsilon$-window. The $\delta$-window can be tighter than strictly necessary, so even if we don't need to make $x$ closer to $1$ than the interval $(0, 2)$, it's OK if we do. For instance, if $\varepsilon = 100$, it turns out that we can make $\delta = 9$. But we don't have to make it that high; $\delta = 5$ will work, as will $\delta = \pi$, as will $\delta = 1$. The only important thing is that we find a $\delta > 0$ that will work—it has to be strictly positive. Doing so may serve, among other things, to make the reasoning easier. We might find that if we assume that $|x-1| < 1$, then we can easily conclude that a $\varepsilon/3$-window around $x = 1$ works just fine, for any $\varepsilon \leq 3$. Then, for any $\varepsilon > 3$, we know that a $1$-window around $x = 1$ will work just fine. That is why it is appropriate to write, for example, $\delta = \min(1, \varepsilon/3)$.
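A small numerical illustration of this answer (not part of the original): for $f(x)=x^2+9$, the choice $\delta = \min(1, \epsilon/3)$ keeps $f(x)$ within $\epsilon$ of $10$ for randomly sampled $x$ in the $\delta$-window, including a case with $\epsilon > 3$ where the $1$-window takes over.

```python
import random

def works(eps, trials=10000):
    delta = min(1.0, eps / 3.0)
    for _ in range(trials):
        # sample an x with 0 <= |x - 1| < delta
        x = 1.0 + 0.999 * delta * random.uniform(-1.0, 1.0)
        if x != 1.0 and abs((x**2 + 9) - 10) >= eps:
            return False
    return True

assert works(0.01) and works(3.0) and works(100.0)
print("delta = min(1, eps/3) works for the sampled eps values")
```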
{ "language": "en", "url": "https://math.stackexchange.com/questions/4498913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Simple square and square root problem gives different result depending on how it is solved? 1st problem: $(\sqrt{-4})^2$ 1st method: $(\sqrt{-4})^2=(2i)^2=-4$ 2nd method: $(\sqrt{-4})^2=-4$ (square and square root remove each other) Different methods were used to complete the same problem and the results match as they should. 2nd problem: $\sqrt{(-4)^2}$ 1st method: $\sqrt{(-4)^2} = -4$ (again square and square root remove each other) 2nd method: $\sqrt{(-4)^2} = \sqrt16 = 4$ (now we start with the square) Here completing the same problem with different methods gives different results. This doesn't make sense. I must be missing something or doing something wrong. What is it?
The problem is the interpretation of the symbol $\sqrt{\phantom{X}}$. Each non-zero real or complex number has two square roots, which differ by the factor $-1$. Only for non-negative real numbers $x$ is there a standard interpretation of $\sqrt x$: one takes the non-negative square root. For negative real numbers or for non-real complex numbers $z$ there is no such standard. You could define $\sqrt z$ to be the complex number with positive real part, but as I said: it is not a commonly accepted standard. Anyway, whatever your interpretation of $\sqrt z$ may be, you always get $$(\sqrt z)^2 = z .$$ This simply reflects the definition that a square root $w$ of $z$ is characterized by $w^2 = z$. What about $\sqrt{w^2}$? You claim that we always get $\sqrt{w^2} = w$. But this is not true. If you give a universal interpretation to $\sqrt z$ (e.g. as above) and do not make an individual choice, you must admit that sometimes we get $\sqrt{w^2} = w$ and sometimes $\sqrt{w^2} = -w$. The latter happens certainly if $w < 0$. In fact, for real $w$ we get $\sqrt{w^2} = \lvert w \rvert$ since we take the non-negative square root.
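This distinction is visible in Python as well (an illustration, not part of the original answer): `cmath.sqrt` picks one principal square root, `math.sqrt` the non-negative real one.

```python
import cmath, math

w = cmath.sqrt(-4)              # principal square root: 2j
assert abs(w - 2j) < 1e-12

# (sqrt(z))^2 == z holds for any choice of square root:
assert abs(w**2 - (-4)) < 1e-12

# ...but sqrt(w^2) need not return w; for real w it returns |w|:
assert math.sqrt((-4)**2) == 4.0
print(w, math.sqrt((-4)**2))
```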
{ "language": "en", "url": "https://math.stackexchange.com/questions/4499096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\cos 2\pi n\theta> 1-\varepsilon$ infinitely often? Suppose $\theta >0$ is irrational. Is it true that for $n\in\mathbb N$ $$ \cos 2\pi n\theta > 1-\varepsilon \tag{1} $$ holds infinitely often? The question is inspired by a series candidate (posted by another user) whose terms are of the form $(\exp(i2\pi n\theta)-1)^{-1}$. My intuition says the terms are norm-wise large infinitely often (for irrational $\theta$) so the series diverges. But if anything, I'm just practicing technique. For divergence purposes, it's sufficient that there is some $\varepsilon_0$ for which (1) holds i.o. For example, can we make $\cos 2\pi n\theta > 0.5$ i.o? It holds that $\cos x >0.5$ if $x \in \left ( -\pi/3 + 2\pi k, \pi/3 + 2\pi k \right )$ for some $k\in \mathbb N$ ($x<0$ not relevant for now). With that in mind I get the inequalities $$ 6k-1 < 6n\theta < 6k+1 $$ and I'm stuck. Does this hold infinitely often? More specifically, does there exist a subsequence $m_n$ such that for every $n$ there exists $k_n$ satisfying $$ 6k_n -1< 6m_n\theta < 6k_n+1? $$ I vaguely remember problems of this sort, but I can't recall relevant keywords to find such problems.
If $\theta$ is irrational then the sequence $(n\theta \bmod 1)$ is equidistributed on $[0,1]$ (Weyl's equidistribution theorem). Hence $(2\pi n\theta \bmod 2\pi)$ is equidistributed on $[0,2\pi]$, so, since $\cos$ is continuous, the sequence $(\cos 2\pi n\theta)$ is dense in $[-1,1]$. In particular, it gets as close to $1$ as possible, so $(1)$ holds infinitely often.
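A numerical illustration (not part of the original answer), taking the concrete irrational $\theta=\sqrt2$: the event $\cos 2\pi n\theta > 1-\varepsilon$ occurs again and again as $n$ grows.

```python
import math

theta = math.sqrt(2)  # a concrete irrational theta
eps = 0.01
hits = [n for n in range(1, 200001)
        if math.cos(2 * math.pi * n * theta) > 1 - eps]

assert len(hits) > 1000       # many hits...
assert max(hits) > 190000     # ...and they keep appearing
print(len(hits))
```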
{ "language": "en", "url": "https://math.stackexchange.com/questions/4499314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Minimum $a+b$ such that $ab=M$ This question is related to the fleablood's answer to another question. Let $ab=M$ where $0<a,b,M\in\mathbb N$. He wanted to show that the sum $a+b$ is the smallest when $a,b$ are closest together. But he defined $a=\sqrt{M}r$, $b=\sqrt{M}/r$ with $r=1+e$ (where I suppose that $0<e\in\mathbb R$). My question. Is it always possible to define the positive integers in $ab=M$ as $a=\sqrt{M}r$ and $b=\sqrt{M}/r$ with $r=1+e$? How would you show it? Thanks in advance.
Fleablood is ignoring the integer constraint there. They are looking at the problem of minimising $a + b$, where $a, b \in (0, \infty)$ such that $ab = M$. This is well known to be $2\sqrt{M}$, achieved at $a = b = \sqrt{M}$. However, Fleablood is going a little further, given that there are extra constraints currently being (semi)-ignored. The point of the argument is to show that, under these constraints, the sum $a + b$ is a function of the distance between $a$ and $b$. Normally, this would be $|a - b|$, but if you assume $b \le a$ (which is not unreasonable), then it would be $a - b$. Not only that, the sum is an increasing function of $a - b$. So, minimising $a + b$ would be equivalent to minimising $a - b$ according to the constraints of the question. I find Fleablood's calculations a little convoluted for my tastes. I would make a substitution of $a + b = u$ and $a - b = v$ (both positive). Then $a = (u + v) / 2$ and $b = (u - v)/2$, and $$M = ab = \frac{(u + v)(u - v)}{4} = \frac{u^2 - v^2}{4},$$ hence $$u = \sqrt{v^2 + 4M}.$$ So, $u$ is an increasing function of $v$. That is, the larger the difference $a - b = v$, the larger the sum $a + b = u$. As with Fleablood's answer, there's no mention of $a$ and $b$ being integers. We don't need $\sqrt{M}$, $r$, or $e$ (or $u$ or $v$, for that matter) to be integers either. But, once we add in the rest of the constraints, we now know we need to search for integers $a > b$, still multiplying to $M = 2 \cdot 25 \cdot 49 \cdot 61$, where $a - b$ is minimised.
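A small check (not part of the original answer) of both claims on the actual divisor pairs of $M = 2\cdot25\cdot49\cdot61$: the identity $u=\sqrt{v^2+4M}$ holds on every pair, and minimising the sum $a+b$ picks out the same pair as minimising the gap $a-b$.

```python
import math

M = 2 * 25 * 49 * 61  # the M from the question fleablood was answering

# All factor pairs (a, b) with a >= b; check u = a+b satisfies u^2 = v^2 + 4M
pairs = [(M // b, b) for b in range(1, math.isqrt(M) + 1) if M % b == 0]
for a, b in pairs:
    assert (a + b) ** 2 == (a - b) ** 2 + 4 * M

# Minimising the sum a+b is the same as minimising the gap a-b
best_by_sum = min(pairs, key=lambda p: p[0] + p[1])
best_by_gap = min(pairs, key=lambda p: p[0] - p[1])
assert best_by_sum == best_by_gap
print(best_by_sum)  # (427, 350)
```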
{ "language": "en", "url": "https://math.stackexchange.com/questions/4499455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Kernel of $F\to F^{**}$ has rank $0$ I want to show that every coherent sheaf over a curve is a direct sum of a torsion sheaf and a locally free sheaf via the following: Let $X$ be a smooth projective $1$-dimensional scheme. Let $F$ be a coherent sheaf on $X$ and $T$ the kernel of the canonical map $F \to F^{**}$. Then * *$T$ has rank 0 *$F = T \oplus $(locally free sheaf). Why is this the case? Note that I do not assume that $F$ is torsion-free (=locally free).
I'll do the affine version of this, which is commutative algebra. $\def\Ext{\operatorname{Ext}}$Let $A$ be a commutative integral domain of global dimension $1$ (for example, the coordinate algebra of a smooth affine curve) and let $M$ be a finitely presented $A$-module. Let us write $(-)^*$ for the functor $\hom(-,A)$. Consider a finite projective resolution $0\to P_1 \xrightarrow{d} P_0 \to M\to 0$. Applying $(-)^*$ we get an exact sequence $$0\to M^*\to P_0^*\to P_1^*\to\Ext^1(M,A)\to 0$$ The modules $P_i^*$ are finitely generated projectives: since $A$ has global dimension $1$, we see that the finitely generated module $M^*$ is projective, and what we have is a projective resolution of $\Ext^1(M,A)$ of length $2$. We apply $(-)^*$ to it, and obtain what is now a complex: $$P_1^{**}\to P_0^{**}\to M^{**}\to 0$$ that has $P_1^{**}$ in degree zero and differentials going up: its cohomology is $\Ext^\bullet(\Ext^1(M,A),A)$. As $A$ has global dimension $1$, the map $P_0^{**}\to M^{**}$ is surjective. Since the map $P_1^{**}\to P_0^{**}$ is just $P_1\xrightarrow{d}P_0$, because the $P_i$ are reflexive, its cokernel is just $M$. We thus get a map $M\to M^{**}$, and looking at it hard shows that it is the canonical one. We also get a description of the kernel of this map and, putting everything together, an exact sequence $$0\to\Ext^1(\Ext^1(M,A),A)\to M\to M^{**}\to 0$$ As $M^*$ is f.g. projective, so is $M^{**}$, and this sequence splits, so there is an iso $$M\cong M^{**}\oplus\Ext^1(\Ext^1(M,A),A).$$ Tensoring the exact sequence with the fraction field of $A$ gives an exact sequence, and the map obtained from $M\to M^{**}$ is an iso: it follows from this that $\Ext^1(\Ext^1(M,A),A)$ is torsion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4499644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Best piecewise linear fit with $n$ pieces for a portion of a log function. Let $f(x) = |\log_2(x)|$ for $x$ in the domain $(0,1]$. I would like to know if there is an algorithm to fit $f(x)$ with a piecewise linear function $g(x)$ in an optimal way. That is, on input $n$, the algorithm needs to output the best piecewise linear approximation using $n$ pieces, subject to $g(x)\le f(x)$. Is there perhaps a function/procedure in Python or another programming language?
You can always consider the norm $$\Phi(\alpha ,\beta)=\int_a^b \Bigg[\frac{\log (x)}{\log (2)}-(\alpha +\beta x)\Bigg]^2\,\mathrm dx$$ and minimize it with respect to $(\alpha ,\beta)$. This is equivalent to the sum of squares of a linear regression for an infinite number of points with $x \in (a ,b)$. The antiderivative is quite simple, and writing $$\frac{\partial \Phi(\alpha ,\beta)}{\partial \alpha}=\frac{\partial \Phi(\alpha ,\beta)}{\partial \beta}=0$$ gives two linear equations in $(\alpha ,\beta)$. Skipping the intermediate calculations, you should get $$ \beta=\frac{3}{(a-b)^3\log (2) }\Bigg[(a^2-b^2) +2 a b \log \left(\frac{b}{a}\right)\Bigg]$$ $$\alpha=\frac{(b-a) (\beta \log (2) (a+b)+2)+2 a \log (a)-2 b \log (b)}{2 (a-b) \log (2) }$$ Trying $a=\frac 12$ and $b=\frac 12+\frac 1{10}=\frac 35$, which give $$\alpha=\frac{-365-1992 \log \left(\frac{5}{3}\right)+1990 \log (2)}{2 \log (2)}\qquad\qquad \beta=\frac{330-3600 \coth ^{-1}(11)}{\log (2)}$$ gives a maximum absolute error of $0.004$ at the bounds. Making instead $a=\frac 12$ and $b=\frac 12+\frac 1{100}=\frac {51}{100}$, the maximum absolute error is $0.00005$ at the bounds. But, because of the curvature of the logarithmic function, I do not see how your constraint $g(x) \leq f(x)$ could (at least easily) be satisfied. Edit Making $b=a+\epsilon$ and using series expansions with $t=\frac \epsilon a$ $$\alpha=\frac 1{2\log(2)}\Bigg[2 \log \left(\frac{a}{e}\right)+\sum_{n=1}^\infty (-1)^{n-1}\, c_n\,t^n\Bigg]$$ where the $c_n$ form the sequence $$\left\{1,\frac{13}{30},\frac{4}{15},\frac{13}{70},\frac{29}{210},\frac{3}{28},\frac{3}{35},\frac{139}{1980},\cdots\right\}$$ $$\beta=\frac{1}{a \log (2)}\sum_{n=0}^\infty (-1)^{n}\, d_n\,t^n$$ where the $d_n$ form the sequence $$\left\{1,\frac{1}{2},\frac{3}{10},\frac{1}{5},\frac{1}{7},\frac{3}{28},\frac{1}{12},\frac{1}{15},\frac{3}{55},\cdots\right\}$$ and $$\Phi_{\text{min}}\sim\frac{\epsilon ^5}{720\, a^4 \log ^2(2)}$$
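A numerical cross-check of the first example (not part of the original answer): a discrete least-squares line for $\log_2(x)$ on $[1/2, 3/5]$, fitted on a dense grid as a stand-in for the continuous minimisation of $\Phi$, shows a maximum absolute error of the order quoted above.

```python
import math

# Discrete least-squares line alpha + beta*x for log2(x) on [a, b],
# via the normal equations on a dense grid
a, b, n = 0.5, 0.6, 10001
xs = [a + (b - a) * i / (n - 1) for i in range(n)]
ys = [math.log2(x) for x in xs]

Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
beta = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
alpha = (Sy - beta * Sx) / n

err = max(abs(y - (alpha + beta * x)) for x, y in zip(xs, ys))
print(err)  # about 0.004, consistent with the error quoted above
```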
{ "language": "en", "url": "https://math.stackexchange.com/questions/4500107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Double delta function value I have a question from a past paper of a university physics course. "Calculate $\int_{-\infty}^{\infty}\delta(y-x)\delta(y-z)dy$" We believe the answer is $1$ only if $x=z$, otherwise the function evaluates to $0$, but is this true? How do we deal with double delta functions in a single dimension?
Let $$\hat f(x,z) = \int \mathrm dy \ \delta(y-x) \delta(y-z)$$ Given some arbitrary smooth, compactly-supported test function $G(x)$, we can see how $\hat f$ behaves under an integral sign: $$\int \mathrm dx \ G(x) \hat f(x,z) = \int \mathrm dx \int \mathrm dy\ G(x) \delta(y-x)\delta(y-z)= \int \mathrm dy \ G(y) \delta(y-z) = G(z)$$ So in summary, $\int \mathrm dx \ G(x) \hat f(x,z) = G(z)$ for every suitably well-behaved function $G$. Can you think of a simpler way to write the object $\hat f(x,z)$ which has this property?
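A numerical illustration (not part of the original answer): replace each $\delta$ by a narrow Gaussian $g$ of width $\sigma$. The inner integral $\int g(y-x)g(y-z)\,\mathrm dy$ then works out to a Gaussian of width $\sigma\sqrt2$ in $x-z$, itself a nascent $\delta(x-z)$, and integrating it against a test function $G$ indeed reproduces $G(z)$.

```python
import math

sigma = 0.01

def fhat(x, z):
    # int dy g(y-x) g(y-z) for a Gaussian g of width sigma:
    # a Gaussian of width sigma*sqrt(2) in (x - z)
    s2 = 2.0 * sigma * sigma
    return math.exp(-(x - z) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)

def G(x):  # a smooth test function
    return math.exp(-x * x)

# int dx G(x) fhat(x, z) should reproduce G(z)
z, dx = 0.3, 0.0005
total = sum(G(i * dx) * fhat(i * dx, z) for i in range(-4000, 4001)) * dx
assert abs(total - G(z)) < 1e-3
print(total, G(z))
```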
{ "language": "en", "url": "https://math.stackexchange.com/questions/4500206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }